@richardmcdougll

Pivotal, Big Data and VMware

April 24, 2013


It’s great to see the public launch of Pivotal today. The mission — to build a new platform for a new era — is bold but appropriately targeted at some of the biggest fundamental changes in application technologies.

Pivotal is now a separate entity, bringing together several teams and technologies from both VMware and EMC: Greenplum's Hadoop distribution (now Pivotal HD), the Greenplum Database (fused with Hadoop as a new database known as HAWQ), Cetas, Pivotal Labs, the GemFire in-memory database, the Spring application framework, and the Cloud Foundry PaaS platform.

The goal of the platform is to enable the new wave of predictive big data applications: those that pull in vast quantities of data from many sources, including high-rate real-time feeds, and can make decisions in real time based on incoming data and learned historical behavior. Combine these technologies with the ease of application development and delivery through platform as a service, and you have a powerful platform.

It's also great to see GE's endorsement and investment. While the application of these technologies to web giants with huge user bases is clear, GE is a good example of how traditional businesses will leverage real-time analytics to fundamentally change their business. In GE's case, putting sensors on every product, from jet engines to consumer appliances, will deliver a fully connected customer experience.

I'm often asked how the Pivotal initiative interacts with VMware's big data efforts. VMware continues to focus on building the best infrastructure for big data, enabling our partners' big data products on a virtualized platform. This allows mixed workloads and multi-tenancy for Hadoop and key big data applications on a common infrastructure platform. Project Serengeti continues to be developed at VMware; it is our reference implementation and the glue that allows Hadoop to be deployed rapidly on vSphere, with key integration capabilities for elastic grow/shrink and integrated high availability. We continue to work with the Hadoop community and key partners on integrating big data solutions with Project Serengeti and vSphere.

Congratulations on the launch, and I look forward to continuing to work with the team at Pivotal.

You can see today’s webcast here, and find information on the accompanying blog.


Richard McDougall

vSphere Storage, Big Data

Richard McDougall is the CTO for Storage and Availability at VMware. He is responsible for the technical strategy for core vSphere storage and application storage services, including Big Data, Hadoop ...
