UPDATED 12:58 EST / NOVEMBER 02 2023

Krishna Rangasayee, Supercloud 4, 2023

Solving the latency issue: AI at the network edge

Amid growing demand for a wide range of artificial intelligence-driven services, data volumes are rising rapidly, networks are becoming more complex, and applications require ever faster response times.

This is especially crucial as real-time responses have become vital for AI applications. These factors highlight the significance of moving AI inferencing to the network edge, where data is collected, insights are obtained and actions are executed. AI inferencing at the edge is predicted to become a dominant trend, led by low-power, low-cost and highly performant Arm-based hardware and software architectures, according to Krishna Rangasayee (pictured), founder and chief executive officer of SiMa Technologies Inc.

“I think one challenge in throwing everything at the cloud is not every application is built for the latency. A classic example all of us can relate to is you want cars with semi-autonomous or autonomous capability,” Rangasayee said. “As the data’s being captured, you’re driving at 65 miles per hour. You cannot afford for the compute infrastructure to be away from the car, go to the cloud and then come back with a decision. You’re really moving too fast.”
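To make that latency argument concrete, here is a back-of-the-envelope sketch, not from the interview, that estimates how far a car moves while waiting on an inference result. The round-trip figures of roughly 100 milliseconds to a cloud endpoint and 10 milliseconds on a local accelerator are illustrative assumptions, not measured values.

```python
# Rough illustration of why inference latency matters at highway speed.
# The latency numbers below are assumptions for illustration only.

MPH_TO_MPS = 0.44704          # miles per hour -> meters per second
speed_mps = 65 * MPH_TO_MPS   # ~29 m/s at 65 mph

latencies_ms = {
    "cloud round trip (assumed ~100 ms)": 100,
    "local edge inference (assumed ~10 ms)": 10,
}

for label, ms in latencies_ms.items():
    distance_m = speed_mps * (ms / 1000.0)
    print(f"{label}: car travels ~{distance_m:.1f} m before a decision arrives")
```

Under these assumed numbers, the vehicle covers roughly three meters while a cloud round trip completes, versus well under half a meter for local inference, which is the gap Rangasayee is pointing to.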

Rangasayee spoke with theCUBE industry analyst Dave Vellante at Supercloud 4, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed providing low-power, low-cost and highly performant hardware and software solutions for edge applications and revolutionizing the industry along the way.

Software and hardware solution

SiMa decided to pursue a software-hardware solution because it realized that a software-only approach would not be sufficient for its goals, explained Rangasayee. The firm focuses on creating a software front end that can solve any computer vision application, with performance per watt being the most important criterion for edge applications.

To solve problems at the edge and achieve ten times the performance per watt of legacy solutions, a combined software and hardware solution is necessary, along with a push-button experience for instant deployment, according to Rangasayee. The edge market, served by purpose-built platforms, is projected to grow to four to five times its current size, and the cumulative volume of shipments makes it a more attractive market than the cloud.

“It’s increasingly going to be a trend in that you want to get processing localized to where the data is being created for mission-critical or safety-critical applications,” he added. “And not everybody can suffer the latency loss that I think people could look through. The reason why the cloud has been used for these applications so far is nobody has really built a purpose-built platform to enable high-performance computing at low power and an ease-of-use manner in such a way that the data and the data processing can all be done locally. And that’s really the thesis of what we start to do at SiMa.”
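The tenfold performance-per-watt comparison Rangasayee cites is easiest to read as throughput divided by power draw. The sketch below uses made-up numbers purely to show how such a comparison is computed; they are not actual SiMa or competitor benchmarks.

```python
# Performance per watt = inference throughput / power consumption.
# All figures here are hypothetical, chosen only to illustrate the metric.

def perf_per_watt(frames_per_second: float, watts: float) -> float:
    return frames_per_second / watts

legacy = perf_per_watt(frames_per_second=200, watts=40)    # hypothetical legacy edge box
edge_soc = perf_per_watt(frames_per_second=500, watts=10)  # hypothetical purpose-built SoC

print(f"legacy: {legacy:.1f} FPS/W, purpose-built: {edge_soc:.1f} FPS/W")
print(f"improvement: {edge_soc / legacy:.1f}x")            # 10.0x in this made-up example
```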

Machine learning on a chip

The semiconductor industry experienced a period of stagnation until the emergence of AI. This led to increased investment, new architectures and Nvidia Corp. becoming the first semiconductor company to reach a trillion-dollar market capitalization, according to Rangasayee. Semiconductor companies are integrating AI and machine learning into computer architecture to stay relevant, and there is a lot of innovation happening to speed up chip development and production using AI and ML.

“It’d be fair to say if you don’t touch AI and ML as a company in your roadmap today, you’re probably going to be a sitting duck in the next five to 10 years,” he said. “Every single semiconductor company, public or private, fundamentally is weaving their computer architecture into an AI or ML form factor.”

SiMa has developed a machine learning system on a chip that allows for the gradual migration of classic compute problems into a proprietary accelerator, providing legacy support and risk mitigation, explained Rangasayee. The success of a chip company, especially in AI and machine learning, is determined by the quality of its software, as seen in the case of Nvidia Corp., whose software has been the defining element for its GPU architectures.

Meanwhile, SiMa is the first company to provide a flexible software front end that can accommodate any machine learning framework, resolution and sensor, making it the Ellis Island of computer vision, explained Rangasayee. The firm also provides libraries and a single hosted Docker container for model development, pipeline development and device management, making its software stack one of the easiest to use in the machine learning industry.

“From an innovation perspective, the next 20 years, you’re going to see a tremendous amount of solutions built out at the edge,” he said. “And the thing that’s really going to define it from my perspective, from a market adoption, is really going to be ease-of-use in software. I think ease of use is the large predictor, and this market is going to be an amazing market. How amazing is going to be predicted by how easy the solutions are to build.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of Supercloud 4:

Photo: SiliconANGLE
