UPDATED 12:00 EDT / JUNE 07 2012

Public Networks Facing a Capacity Crisis, Warns Intel GM

Just about the only thing that Intel Data Center Group General Manager Pauline Nist worries about in the future of IT is that the Internet is facing a capacity crisis. And with the huge growth in traffic from mobile devices, and the carriers promising 4G and 5G service, she only expects the problem to get worse.

“It’s not getting fiber to my home that concerns me, it’s who’s handling the backhaul and what capacity they have,” she says.

The public network is already showing signs of maxing out. “Remember in March Madness when they had to cap the number of people who could watch a game, and you had to wait for somebody to drop off before you could log in?” As consumer demand for video over the Internet, including movies and TV, grows, the issue will only get worse.

And the carriers cannot afford to simply upgrade the entire network. Unless they can find a way to recover the cost of upgrades, they will be left with two choices – “they will either have to throttle user data rates or increase prices to pay for network upgrades.”

Meanwhile, business is moving from batch to near real-time processing and wants to transfer more data faster across the public network to multiple locations. Nist definitely sees a trend there. In retail, for instance, moving inventory from batch to near real-time will allow stores to change prices across their entire inventory in an hour rather than in weeks. That lets them react to demand changes: dropping prices on food items that are nearing the end of their shelf life, or charging less for something not selling in the store while keeping the same product at full price online if it is selling better there.
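The pricing behavior Nist describes can be sketched as a simple rule over near real-time inventory data. This is a minimal illustration only: the item fields, thresholds, and markdown percentages below are assumptions for the sake of the example, not anything from the article.

```python
from dataclasses import dataclass

@dataclass
class Item:
    sku: str
    price: float
    days_to_expiry: int        # shelf life remaining
    weekly_store_sales: int    # units sold in-store
    weekly_online_sales: int   # units sold online

def adjusted_store_price(item: Item) -> float:
    """Hypothetical in-store price rule reacting to shelf life and channel demand."""
    price = item.price
    if item.days_to_expiry <= 2:
        # Food nearing the end of its shelf life: assume a 30% markdown.
        price *= 0.70
    elif item.weekly_store_sales < item.weekly_online_sales // 2:
        # Selling much better online: discount only the in-store price.
        price *= 0.90
    return round(price, 2)

print(adjusted_store_price(Item("milk-1l", 2.00, 1, 40, 5)))    # expiry markdown
print(adjusted_store_price(Item("soda-6pk", 1.00, 30, 3, 20)))  # in-store discount
```

With batch processing, a rule like this could only be applied when the nightly run completed; the point of near real-time inventory is that it can run continuously as sales data arrives.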

Intel, she says, is happy to see Big Data moving toward near real-time because it means more data in memory, which means more memory chip sales. It also means more compression, because those memory chips are expensive, and compression means more processing cycles, which again means sales of more high-end processors.
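The trade-off Nist points to, spending CPU cycles to shrink a memory footprint, is easy to demonstrate. The sketch below uses Python's standard zlib module on some repetitive record-like data; the sample data and compression level are arbitrary choices for illustration.

```python
import time
import zlib

# Repetitive, record-like data compresses well; real in-memory tables vary.
data = b"timestamp,store_id,sku,price\n" * 100_000

start = time.perf_counter()
compressed = zlib.compress(data, level=6)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"original:   {len(data):>9,} bytes")
print(f"compressed: {len(compressed):>9,} bytes")
print(f"CPU cost:   {elapsed_ms:.1f} ms to save memory")

# Reading the data back costs cycles too: decompression on every access.
assert zlib.decompress(compressed) == data
```

Every byte saved in DRAM is paid for in compression and decompression cycles, which is exactly why more in-memory data is good news for a processor vendor.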

When virtualization began to take off, many analysts expected Intel would see a drop in sales as companies eliminated all the excess capacity in their data centers. That never happened, for a couple of reasons, Nist said. First, high-performance computing demand is growing rapidly, and “virtualization is anathema to them. They don’t want to give up a single cycle” to running a hypervisor.

Second, she said, while well-behaved infrastructure like Infrastructure-as-a-Service or SAP is being virtualized, few high-performance production databases are being virtualized. “If you are a manufacturer, for instance, you are lucky if you can get your entire manufacturing database into memory to support near real-time response, much less have a bunch of VMs on the system.”

Performance and security are key issues keeping customer-facing systems in particular off hypervisors. “If I’m standing in front of an ATM, I definitely don’t want it running on a VM.”
