Serverless computing: It’s all about functional stateless microservices
In between meeting with customers, crowdchatting with our communities and hosting theCUBE, the research team at Wikibon, owned by the same company as SiliconANGLE, finds time to meet and discuss trends and topics regarding digital business transformation and technology markets. We look at things from the standpoints of business, the Internet of Things, big data, application, cloud and infrastructure modernization. We use the results of our research meetings to explore new research topics, further current research projects and share insights. This is the fifth summary of findings from these regular meetings, which we plan to publish every week. This week’s meeting included Dave Vellante and Jim Kobielus offering insights on serverless computing.
Premise: Serverless computing is coming fast and furious to the cloud world, bringing many advantages. In particular, with serverless, developers don’t have to manage the complex infrastructure typically associated with containers, virtual machines and other underlying resources. With serverless, developers build applications using functional programming languages and tools, which we believe will largely be a complement to, not a replacement for, traditional programming models. The latter, in our opinion, will remain in vogue for stateful enterprise applications while serverless models increasingly address stateless apps.
What exactly is serverless computing?
Serverless computing is a cloud-oriented operating model that dynamically manages underlying infrastructure resources. Serverless is typically deployed as a functional microservices architecture that allows developers to invoke functions as they’re needed and pay for resources based on what’s consumed by an application versus paying for fixed units of capacity.
Serverless still requires hardware and the name is somewhat misleading, but the management of the underlying infrastructure resources is essentially “invisible” to application developers. Specifically, in serverless environments, developers don’t have to define the attributes of the servers. The infrastructure that supports invoked services is managed by the cloud provider, and developers don’t need to know what’s sitting behind the functions. Serverless can be thought of as completely preconfigured functions-as-a-service, where pricing for the functions is utility-like and paid for by consumption at some interval of granularity, for example hours, minutes or seconds.
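To make the model concrete, here is a minimal sketch of a function-as-a-service handler written for an AWS Lambda-style Python runtime; the handler name and event fields are illustrative assumptions rather than any one provider's exact contract. The developer supplies only the function, while the provider provisions, scales and meters the compute that runs it.

import json

def handler(event, context):
    # The cloud provider provisions, scales and bills the underlying
    # compute per invocation; the developer writes only this function.
    name = event.get("name", "world")  # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"})
    }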
What are the benefits?
Serverless architectures are much simpler for application developers to manage. Serverless virtually eliminates the responsibility to maintain software, microcode, operating system levels and the like; developers need only worry about developing and testing a function-based offering. As such, serverless architectures are highly scalable and potentially much less expensive platforms on which to develop and maintain applications, and the compute fabrics that support them can be exceedingly efficient and cost-effective.
What are the main use cases?
The main use cases for serverless are stateless applications built with functional programming models. Typical examples include application programming interface publishing, query response, face recognition and voice recognition.
Edge-oriented environments are another emerging use case for serverless computing. As edge devices capture data on certain events — for example, an Internet of Things device emitting some data over time — the device platform can call a function, model or logic service to perform some real-time analysis and make on-the-fly adjustments, such as increasing or decreasing flow. Notably, we believe the serverless model will be used extensively for edge applications, even those that are end-to-end, as long as these applications are stateless. Stateful applications are likely to use more traditional models for some time.
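As an illustration of that edge scenario, here is a short, stateless event-handler sketch in Python; the event fields, the threshold and the adjust_flow call are hypothetical stand-ins for whatever control API a real device platform would expose.

FLOW_THRESHOLD = 80.0  # assumed pressure limit, for illustration only

def adjust_flow(device_id, delta):
    # Placeholder for a call to the device platform's control API.
    print(f"Adjusting flow on {device_id} by {delta}")

def handler(event, context):
    device_id = event["device_id"]      # hypothetical event fields
    reading = float(event["pressure"])
    # Stateless, per-event decision: nothing is kept between invocations,
    # which is what makes this workload a good fit for serverless.
    if reading > FLOW_THRESHOLD:
        adjust_flow(device_id, delta=-10.0)
    elif reading < FLOW_THRESHOLD * 0.5:
        adjust_flow(device_id, delta=10.0)
    return {"device_id": device_id, "reading": reading}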
We also view certain data analytic workloads such as business intelligence and high-performance computing (HPC) use cases — for example, climate modeling, genomics and basic scientific research — as potentially good candidates for serverless.
Where did serverless come from?
Serverless is a relatively immature space. Amazon Web Services Inc. announced Lambda in 2014 as the industry’s first serverless offering. Other cloud vendors have followed suit, including Google Inc. with Cloud Functions, Microsoft Corp. with Azure Functions and IBM Corp. with Bluemix OpenWhisk.
What are the key caveats for developers?
Serverless environments today run in a shared cloud environment, which means there will be peaks, valleys and competition for resources. As such, developers must be prepared to manage unexpected situations as they arise, especially those related to latency and error recovery. Users of serverless computing models must do rigorous testing in this new environment and focus on recovery, for example how to deal with timeouts. As well, practitioners should expect that service-level agreements from cloud providers will be less rigorous for serverless than for stateful apps, at least for now.
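A minimal sketch of the kind of caller-side defensiveness we have in mind, assuming a hypothetical HTTP-triggered function; the endpoint URL, timeout and retry budget are illustrative, not prescriptive.

import time
import urllib.request
import urllib.error

FUNCTION_URL = "https://example.com/my-function"  # hypothetical endpoint

def invoke_with_retries(payload, timeout_s=3.0, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            req = urllib.request.Request(FUNCTION_URL, data=payload, method="POST")
            with urllib.request.urlopen(req, timeout=timeout_s) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            # Cold starts and shared-resource contention can surface as
            # latency spikes or timeouts; back off and retry a bounded
            # number of times rather than failing immediately.
            if attempt == max_attempts:
                raise
            time.sleep(2 ** attempt)

# Example call: invoke_with_retries(b'{"name": "world"}')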
Also, by deploying multiple serverless cloud offerings, organizations can be exposed to “serverless creep.” Just as spinning up virtual machines and using containers extensively has created challenges for organizations, development managers must be sensitive to an explosion of serverless apps. In our view, customers must be wary of getting to a point where they lose track of what’s being developed within the application portfolio, an outcome made more likely precisely because of the lack of state. The risks here include compliance and audit challenges, duplicative work products and cost overruns. Moreover, different clouds will support different functional languages, such as JavaScript vs. Python, and serverless apps may not be very portable across clouds. This raises a potential issue of diluting skill sets across an organization, where the cloud choice wags the skills dog, versus a more deliberate and well-thought-out people and process strategy.
Where do containers and platform-as-a-service fit?
Serverless computing leverages containers as the underlying infrastructure, but it allows developers to essentially abstract away that container complexity. Platform-as-a-service is a microservices environment by its very nature. Containerized microservices require management by developers, whereas the functional microservices associated with serverless abstract that complexity — assuming the cloud provider is doing its job.
Action Item: Serverless is an emerging and highly useful concept for developers of cloud-based services, and Wikibon believes that it’s a fundamental operating model that’s here to stay. Developers should begin using serverless and start with simple use cases. In particular, we advise embracing stateless functions such as Web content publishing, API notification and alerts, and other event-driven applications. However, developers must be careful to consider recovery plans in these new environments. As always, hope for the best, plan for the worst.