Exclusive Interview: Juniper Networks Founder Pradeep Sindhu on the Future of Cloud and Mobile Networks
At the Juniper Networks social event at Mobile World Congress (#mwc10) in Barcelona, I had a chance to sit down for an hour with Juniper Networks founder Pradeep Sindhu. We talked about networking in media and in mobile applications, as well as the changes underway in networking systems.
This was the first time I had the chance to meet and discuss technology with Pradeep, and I was very impressed. He has proven to be a world-class entrepreneur in building Juniper from inception to the size it is now. More impressive is his humble, intelligent, and pragmatic view on where networking is and where it is evolving.
Pradeep still has that entrepreneurial fire in his approach. I was really impressed with his vision for the new network and how software is taking the front-and-center role in the evolution of the Internet. I've called Juniper the Google of networking, given its serious developer and software focus. This is the most relevant model for what's going on in the market, with applications becoming the new utility. I described this trend in the mobile advertising space in a previous post, Mobile Innovation Cycle.
Juniper recently launched a $50 million venture capital fund to invest in networking software plays built on its Junos operating system, which it is opening up to developers.
Download the MP3 or Listen Below
Here is a link to the audio recording from Juniper's customer and analyst event in Barcelona, Spain, at Mobile World Congress. There's noise in the background, and I was glad I had my audio recorder handy.
Transcript:
John Furrier:
Juniper has a new focus. As the founder of Juniper, what are your thoughts on this new network, in particular the cloud movement?
Pradeep Sindhu:
Cloud computing is an echo of the change that happened in the network from circuit switching to packet switching. It's effectively the largest re-architecting of the information infrastructure since the birth of computing.
If you look at the fundamental reasons why cloud is happening, it boils down to a few things. First is economics, specifically the capex and opex of information infrastructure, with infrastructure defined inclusively as networking, computing, and storage.
Capex has actually dropped exponentially; capex per unit of capability has been falling for the past few decades. Conversely, opex has been rising. Because capex has fallen, the infrastructure has become more physically distributed.
When you physically distribute information and information infrastructure, it becomes more complicated to manage. As things get more complex, you can't automate them; if you can't automate, you have to use manual labor; and if you use manual labor, costs go up with inflation.
So on one side you have costs going up with inflation, and on the other side costs dropping exponentially. Essentially, cloud is an attempt to re-centralize the heavy part of computing and storage. It is enabled by the network being "good enough"; cloud could not have been viable 10 years ago because the network wasn't good enough.
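The dynamic Sindhu describes, capex per unit of capability falling exponentially while labor-driven opex rises with inflation, can be sketched with purely illustrative, made-up numbers (the starting costs, halving period, and inflation rate below are assumptions for the sketch, not figures from the interview):

```python
# Toy model of the capex/opex dynamic: capex per unit of capability halves
# every ~2 years (exponential decline), while opex, dominated by manual
# labor, rises with ~3% annual inflation. All numbers are illustrative.

def capex_per_unit(year, start=100.0, halving_years=2.0):
    """Illustrative capex per unit of capability, falling exponentially."""
    return start * 0.5 ** (year / halving_years)

def opex_per_unit(year, start=20.0, inflation=0.03):
    """Illustrative opex per unit of capability, rising with inflation."""
    return start * (1 + inflation) ** year

# Find the first year at which opex overtakes capex, i.e. the point where
# the cost of operating distributed infrastructure dominates buying it.
crossover = next(y for y in range(30) if opex_per_unit(y) > capex_per_unit(y))
print(crossover)  # → 5
```

With these assumed parameters, opex overtakes capex within a few years, which is the economic pressure Sindhu says drives re-centralization into the cloud.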
So the first reason is economics, the second is that networking technology is now a good enough enabler, and the third is the desires of end users. What end users want is access to information anytime, anywhere.
The cloud-based model allows end users to get at information more effectively than when information is stored at end points. As an example, if I store information on my PC at home, I won't have that information available on the road unless I set up a secure VPN to my home, and not many people know how to do that.
If the information is out in a datacenter somewhere, and it's replicated for redundancy, I can access it from anywhere. That convenience is a big factor.
These three factors tell us that the information infrastructure will be re-architected. The model of computing is going to change from the one that has been with us for the past 10 years, i.e. the distributed computing model with heavy PCs. That model will evolve to one where the master copy of information and content sits out in the cloud, in a datacenter somewhere, and information gets cached on demand progressively, including on the end-user device.
This new model is very different from the old model, in which all the information sat at the end point. The primary appeal of the new model is that it's more efficient and it makes things simpler for end users.
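The "master copy in the cloud, cached on demand at the edge" model Sindhu describes can be sketched in a few lines. This is a toy illustration, not any real system: the `CLOUD` dict stands in for a datacenter holding the master copies, and the device keeps a small least-recently-used cache that fetches through on a miss:

```python
from collections import OrderedDict

# Master copies live in the "cloud" (a dict standing in for a datacenter).
CLOUD = {"photo:1": b"...", "doc:42": b"...", "video:7": b"..."}

class DeviceCache:
    """End-user device cache: fetches from the cloud on demand, keeps only
    the most recently used items, and evicts the least recently used."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key in self.items:               # cache hit: serve locally
            self.items.move_to_end(key)
            return self.items[key]
        value = CLOUD[key]                  # cache miss: fetch master copy
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used
        return value

cache = DeviceCache()
cache.get("photo:1")
cache.get("doc:42")
cache.get("video:7")       # capacity exceeded: photo:1 is evicted
print(list(cache.items))   # → ['doc:42', 'video:7']
```

The device holds only a working set; the authoritative copy stays in the datacenter and remains reachable from any device, which is the convenience factor Sindhu highlights.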
John Furrier:
The application development and service models are radically changing. How does that look from a software perspective for developers, for things like static and dynamic provisioning of services?
Pradeep Sindhu:
Our perspective is that network infrastructure used to be built in silos. We used to be in a world with a limited number of killer applications, and people would build separate network infrastructure for each of them.
The vertical, siloed approach is a very ineffective way to do things. The new network ought to be built on a horizontal platform, with infrastructure that is as general-purpose as possible and where most of the functionality is provided by software.
So what you're doing is writing application software that allows the network to be much simpler and much more dynamic. Dynamic means it's able to respond to real-time events. For example, look at the situation where a large concert is happening and a large crowd forms (a flash mob). If the network is static, its performance at peak is pathetic: you can't even make phone calls (at, say, 12 kilobits per second), never mind download videos or upload photos, which is what end users desire.
Allowing the network to be more dynamic only becomes possible if the control plane of the network is done in software, software that is open and standard. We are working to bring the world to an open software ecosystem and general-purpose infrastructure. That is the only way network service providers can be profitable.
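The static-versus-dynamic distinction Sindhu draws can be illustrated with a toy allocator (this is not Junos code; the cell names, loads, and capacity figures are invented for the sketch): a static control plane splits capacity evenly no matter what, while a software control plane reacting to real-time load can shift spare capacity toward the flash crowd.

```python
# Toy comparison of static vs. dynamic capacity allocation across radio
# cells. All names and numbers are invented for illustration.

def static_bandwidth(cells, total_mbps):
    """Static provisioning: capacity split evenly, regardless of load."""
    return {cell: total_mbps / len(cells) for cell in cells}

def dynamic_bandwidth(load, total_mbps):
    """Dynamic provisioning: capacity follows observed per-cell load."""
    total_load = sum(load.values())
    return {cell: total_mbps * l / total_load for cell, l in load.items()}

load = {"cell-a": 90, "cell-b": 5, "cell-c": 5}  # flash crowd at cell-a

print(static_bandwidth(load, 300))   # cell-a stuck at 100.0 Mbps
print(dynamic_bandwidth(load, 300))  # cell-a gets 270.0 Mbps
```

Under the static split, the congested cell gets the same 100 Mbps as the idle ones; the dynamic allocator gives it 270 Mbps, which is the kind of real-time responsiveness a software control plane enables.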
We recently did a study of where the various services are going and whether, in the current environment, the network service provider can stay profitable.
What the over-the-top players are doing is capturing the profits, and what the network service providers end up offering is what they call "dumb pipes." This is because they are following the old transport model. If they follow the horizontal approach, they can reduce their capex, because equipment gets leveraged across more users and a larger set of applications. We now live in a world with a long-tail distribution of applications. It's no longer the case that a small number of applications serves the masses (e.g. SMS); instead, there are millions of possible applications, each targeting a smaller set of users, but the overall spectrum will be wide.
The old model of building special-purpose infrastructure doesn't apply anymore. One of the challenges the industry has is that many service providers, especially in the mobile world, are still thinking in the old model. Service providers are thinking about how they can develop the next killer application (like SMS). This is the dilemma service providers have. The model they have today for creating applications is one where you need a five-year plan and a ton of money, and even then it's not known whether the application will be successful.
What we have to do is get the service providers to a world where applications can be written very quickly, and then use the network to figure out whether an application will take or not. If it doesn't take, you do a fast fail; if it takes, you pour a ton of money into it.
John Furrier:
If you were out there as an entrepreneur, what kind of startup would you work on?
Pradeep Sindhu:
My interest has always been in systems, and in the environment we have today it's hard to think of a startup that would be based on doing systems. The investment required to get a startup like that going is enormous, and there isn't an appetite in the venture capital community to support that kind of startup.
John Furrier:
Is the notion of systems changing?
Pradeep Sindhu:
The notion of systems is definitely changing in the computer systems world; it's all going to large-scale datacenters and cloud computing. The notion of networking systems is changing from vertical stacks to horizontal ones.
John Furrier:
If we go to a more centralized resource approach (a systems software model), you'd have more subsystems to deal with. What area would you develop as an entrepreneur or startup?
Pradeep Sindhu:
What I would do is gravitate to an ecosystem that has some promise of becoming popular, and then develop software, like what happened with the Apple App Store. In the networking universe there is a tremendous opportunity for people to add value, and actually make money, by helping service providers who have to move from the old model to the new model.
The service providers spend a ton of money on software that they have acquired or built for the old model.
John Furrier:
What is your view of the future over the next five years, across infrastructure, cloud, and mobility? Convergence is actually happening now.
Pradeep Sindhu:
Getting back to cloud: the network is the enabler, but that is mainly the wide area network. There is another network inside datacenters, the local area network (LAN), which is an inhibitor to cloud computing.
Local area networks were never designed for the scale of computing and storage they are being asked to support. Most of the inefficiency that exists in datacenters can be pinned on the internal networking architectures of those datacenters.
The three technologies in use are Ethernet, Fibre Channel, and InfiniBand. None of them was purpose-built for the scale at which people are trying to use them. Juniper has a project called Stratus that is reinventing how you connect hundreds or thousands of computers in a facility to provide general-purpose computing like never seen before. Our goal is to multiply general-purpose computing power by 100x. This is about making the networking infrastructure cost-effective and fixing the mismatch between the network and the computing and storage it connects.
We are talking about the datacenter as a large-scale computer and a large-scale operating environment where applications can be built and run across the new datacenter.
The idea behind Juniper's vision is that datacenters have a very important role to play for service providers. Datacenters are the application factories.
Datacenters are where these applications will run. The whole cloud computing model is that the heavy lifting is done inside datacenters; the network provides the conduit, and the end-user devices are where the display and the end-user experience reside.
John Furrier is a co-founder of SiliconANGLE.