IBM’s CEO Says Big Data is Like Oil, Enterprises Need Help Extracting the Value

IBM chief executive officer Ginni Rometty regards analytics as a new source of competitive advantage for her company’s enterprise clientele.

“I want you to think about data as the next natural resource,” Rometty told an audience of business leaders and lawmakers during a recent speech before the Council on Foreign Relations in New York. She pointed out how data analytics helped reduce crime by 30 percent in Memphis, and said that IBM correctly predicted the outcome of swing states for President Barack Obama’s campaign.

Abhishek Mehta, the founder of Tresata, weighed in on the CEO’s comments and explained her particular choice of words:

“Just like oil was a natural resource powering the last industrial revolution, data is going to be the natural resource for this industrial revolution. Data is the core asset, and the core lubricant, for not just the entire economic models built around every single industry vertical but also the socioeconomic models.”

See Abhi’s full commentary on IBM’s Big Data strategy in the video below, recorded from this morning’s NewsDesk broadcast with Kristin Feledy.

Death of the average

Besides comparing data with oil, Rometty also noted that “you will see the death of the average,” by which she meant that organizations will be able to tap into their Big Data to track audiences – whether consumers, voters or employees – based on individual metrics.

Companies need more than just the willingness to make use of their raw information – they require the proper tools for the job, and the know-how to apply those tools to business problems. IBM happens to be offering both.

Building Big Data: IBM’s early story

Big Blue’s analytics portfolio is one of the largest in the industry, thanks to its famously strong R&D organization and a series of acquisitions made in recent years. The vendor maintains an equally strong presence in the services arena, which has grown ever more profitable thanks to the rising number of Big Data practitioners.

Companies are turning to IBM for much-needed advice on how to govern their information. And while recent years have provided the time to explore all Big Data has to offer, 2013 has been dubbed the year of Big Data implementation, with IBM among the handful of major vendors leading the charge. Now that we’re in the middle of a transition from exploratory Big Data to its execution, there are far more factors that can be considered and applied in a given solution.

Wikibon analyst Jeff Kelly outlines three extremely important points a Big Data practitioner should keep in mind:

  • Data Quality – How accurate, complete and reliable is the data in question? In traditional data management scenarios, this meant ensuring customer names and addresses were accurate and up-to-date, for example. In Big Data scenarios, things grow more complex. Does the Twitter handle @JohnRSmith refer to my customer John R. Smith? Do these IP addresses and mobile app logs correlate to the user or users I think they do?
  • Data Governance – What are acceptable uses of data? Who is authorized to analyze particular data sets? When should data be disposed of? In short, data governance refers to a comprehensive, predetermined set of policies governing the entire data management lifecycle. Data governance is particularly important in highly regulated industries, where the improper use of data can result in legal action. Big Data poses particular challenges to data governance. Much of the value of Big Data comes from merging disparate data sources, creating entirely new data sources. How should these new data sources be governed, and who should be allowed to analyze them? And if analysis results in “sensitive” data, what privacy safeguards need to be applied? And on and on.
  • Data Stewardship – Who “owns” a particular data set or data source? Data stewards typically are responsible for applying agreed metadata definitions to data sets and ensuring the accuracy of the data for particular use cases. In traditional environments, a product manager naturally would be the data steward for a product database. But in Big Data scenarios, who owns data streaming in from sensors on products in the field?
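The data quality question above – does @JohnRSmith refer to customer John R. Smith? – is at heart an entity resolution problem. The following is a minimal sketch of one common approach (fuzzy string matching on normalized names), not a description of IBM’s or any vendor’s actual tooling; the function names, customer list, and the 0.85 threshold are illustrative assumptions:

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase a name and strip punctuation and extra whitespace
    so superficially different spellings compare equal."""
    cleaned = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def match_handle(display_name: str, customers: list[str], threshold: float = 0.85):
    """Return (customer, score) pairs whose normalized similarity to the
    Twitter display name clears the threshold, best match first.
    The threshold is a made-up starting point, not an industry standard."""
    target = normalize(display_name)
    matches = []
    for customer in customers:
        score = SequenceMatcher(None, target, normalize(customer)).ratio()
        if score >= threshold:
            matches.append((customer, round(score, 2)))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Hypothetical customer records; "John R Smith" is the handle's display name.
customers = ["John R. Smith", "Joan Smythe", "John Smith Jr."]
print(match_handle("John R Smith", customers))
```

In practice a production system would also weigh corroborating signals – location, email, purchase history – since name similarity alone can produce false matches, which is exactly the quality risk Kelly flags.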