UPDATED 13:40 EDT / JULY 26 2021


Here are 5 insights you might have missed from KubeCon + CloudNativeCon

The speed required to integrate, develop and perform analytics with containers is front and center in the enterprise, and it was discussed widely during the recent KubeCon + CloudNativeCon event.

Computing at the edge, the need for real-time analytics, newer forms of DevOps such as continuous integration and continuous delivery, and open-source development concepts that turn traditional industries’ closely held ownership of code upside down were all emerging issues on the agenda.

In case you missed the event, theCUBE, SiliconANGLE Media’s livestreaming studio, is highlighting five key insights emerging from the KubeCon + CloudNativeCon event along with links to in-depth interviews discussing the topics. (* Disclosure below.)

1. Red Hat wants Kubernetes to be location-agnostic.

Hybrid cloud, Red Hat Inc.’s existing remote strategy for Kubernetes, is being augmented with wide-scale edge functionality, the company disclosed at the event. The Kubernetes system automates the management of containerized applications.

It has traditionally functioned within a data center. However, that’s changing because users want to take advantage of the lower latency gained by managing data close to where it’s created, along with the security and cost advantages of not moving data far from its source.

“More and more people are not only talking about using Kubernetes for edge, but [are] actually getting in there and doing it,” Steve Gordon, director of product management, cloud platforms, at Red Hat, said in an interview with theCUBE.

A Kubernetes and OpenShift combination deployed at the edge provides a common interface across multiple footprints, which is easier to keep track of, according to Gordon. Faster analytics, achieved by running the numbers at the source, is also one of the principal applications.
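To make the idea of a common interface concrete, here is a minimal sketch, not Red Hat’s implementation, using the Kubernetes Python client: the same API calls inspect a data-center cluster and an edge cluster, so one loop can cover every footprint. The kubeconfig context names are hypothetical.

```python
# Minimal sketch: the same Kubernetes API works across footprints.
# Context names below are hypothetical examples, not real clusters.
from kubernetes import client, config

CONTEXTS = ["datacenter-cluster", "edge-site-01"]

for ctx in CONTEXTS:
    # Point the same client library at a different footprint via kubeconfig.
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=ctx))
    nodes = api.list_node()
    print(f"{ctx}: {len(nodes.items)} nodes")
    for node in nodes.items:
        # Identical node objects come back whether the cluster sits in a
        # data center or at an edge site.
        print("  ", node.metadata.name, node.status.node_info.kubelet_version)
```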

2. Real-time data monitoring is getting increased focus.

Tying in with Red Hat’s aforementioned drive toward the edge and the resultant decentralization of data, along with business-wide shifts to digitization and cloud native, is a growing awareness of the need for real-time analytics.

“If you just toss [data] into a data lake and do batch analysis like half a day later, no one cares about it anymore,” said Richard Hartmann, community director at Grafana Labs Inc.

Grafana is an open-source visualization software platform geared toward querying and understanding metrics. Consequently, tools such as the Prometheus real-time monitoring system and OpenMetrics were a constant presence in the background at the event, Hartmann said in an interview with theCUBE. (Grafana uses Prometheus for event monitoring and alerting.)

In 2016, open-source Prometheus was the second project accepted into the Cloud Native Computing Foundation, after Kubernetes itself. And OpenMetrics claims to be the de facto standard for sending cloud metrics at scale.
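As a rough illustration of how such real-time monitoring fits together, here is a minimal sketch using the prometheus_client Python library: an application exposes metrics over HTTP in the Prometheus exposition format, Prometheus scrapes them on a schedule, and Grafana queries Prometheus to graph them. The metric names and port are illustrative assumptions, not details from the event.

```python
# Minimal sketch: expose application metrics for Prometheus to scrape.
# Metric names and the port number are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Items currently waiting to be processed")

if __name__ == "__main__":
    start_http_server(8000)  # serves metrics at http://localhost:8000/metrics
    while True:
        REQUESTS.inc()                          # count work as it happens
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(1)  # Prometheus pulls the endpoint on its own scrape interval
```

Because the metrics are pulled continuously rather than dumped into a data lake for later batch analysis, dashboards can reflect what is happening now instead of half a day ago.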

3. Delivery and integration are getting more continuous.

It’s not just real-time analytics that’s more frenetic, though. Continuous integration/continuous delivery is becoming more prevalent and is ripe for Kubernetes use. CI/CD imposes automation in the building, testing and deployment of applications.

“Continuous” here doesn’t mean quite real time, as with the current demands on analytics, but it comes close: deployments can occur many times per day, for example. That is also many chances a day to deliver potential problems. Still, the practice is here to stay and is how DevOps will now be run, according to pundits.
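As a loose illustration of what that automation looks like, here is a toy sketch of a pipeline script, not any vendor’s product, that runs the same build, test and deploy stages on every commit and stops as soon as one fails. The commands and names are hypothetical placeholders.

```python
# Toy CI/CD sketch: every commit runs the same build -> test -> deploy stages.
# The commands and the application name are hypothetical placeholders.
import subprocess
import sys

STAGES = [
    ("build", ["docker", "build", "-t", "example/app:latest", "."]),
    ("test", ["pytest", "-q"]),
    ("deploy", ["kubectl", "rollout", "restart", "deployment/example-app"]),
]

for name, cmd in STAGES:
    print(f"--- {name} ---")
    if subprocess.run(cmd).returncode != 0:
        # A failing stage halts the pipeline before a bad build ever ships.
        sys.exit(f"{name} stage failed; aborting pipeline")
```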

Red Hat OpenShift, including its OpenShift GitOps and OpenShift Pipelines features, is native to Kubernetes. The tools will help adapt common, older, non-cloud applications to cloud-native and CI/CD environments.

“There is always a clash of how do I build cloud native application using these technologies that are not really built for cloud native space,” said Siamak Sadeghianfar, senior principal product manager of cloud platforms at Red Hat, who was interviewed by theCUBE. Container-based cloud is more dynamic than traditional, virtual machine-oriented architecture.

He explained: “In the cloud native ways of CI/CD, you’re running most likely in a container platform. You don’t have dedicated infrastructure, you are running mostly on-demand and you scale when there is a demand for running CI/CD, for example.”

Rather than dedicate infrastructure to it, OpenShift GitOps allows developers to create repeatable processes for container clusters. That delivers consistency, among other benefits, and helps legacy work get up to speed.
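The GitOps pattern behind that repeatability can be sketched in a few lines. The toy loop below is a conceptual illustration only, not OpenShift GitOps itself: the desired state lives in a Git repository, and an agent keeps reconciling the cluster toward it. The helper functions are hypothetical stand-ins for Git and Kubernetes API calls.

```python
# Conceptual GitOps sketch: reconcile live cluster state toward the state
# declared in Git. All three helpers are hypothetical stand-ins.
import time


def desired_replicas_from_git() -> int:
    """Stand-in for reading a manifest checked into a Git repository."""
    return 3


def live_replicas_from_cluster() -> int:
    """Stand-in for querying the cluster's current state."""
    return 2


def scale_deployment(replicas: int) -> None:
    """Stand-in for applying the change through the Kubernetes API."""
    print(f"scaling deployment to {replicas} replicas")


while True:
    desired, live = desired_replicas_from_git(), live_replicas_from_cluster()
    if desired != live:
        # Drift detected: converge the cluster back to what Git declares,
        # the same way on every cluster, every time.
        scale_deployment(desired)
    time.sleep(30)
```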

4. Banks are going open source and are enthusiastically joining in.

Who would have thought it? The venerable vertical of finance, with a history dating back thousands of years and a reputation for being entrenched in its ways, is not only adopting open source but contributing to it.

“We see banks and financial organizations that are looking to adopt open source,” said Katie Gamanji, ecosystem advocate for the Cloud Native Computing Foundation, in an interview with theCUBE. “They’re looking for ways to either contribute or actually deep-dive a bit more into these areas.”

Cloud-native, fiscally oriented operations such as City Bank are looking to “give back to the community,” according to Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation, who was interviewed alongside Gamanji. Community-organized events in Africa were one example.

Interestingly, the duo remarked that there’s a faction interested primarily in the backstories behind the technical challenges, one they hope to cater to. “Cloud organizations are working to make these discussions more accessible,” Gamanji said.

5. Increasing computing capacity in the science arena is accelerating cloud native.

Altruism and the use of Kubernetes aren’t restricted to finance, though. Science is increasingly embracing containerization and cloud native, explained Ricardo Rocha, computing engineer at CERN, during an interview with theCUBE.

CERN, an organization run by 23 member countries, looks into the “fundamental structure of particles that make up everything around us,” according to its website. That’s a lot of data. Kubernetes works well for the orchestration, Rocha explained. One reason is that it can handle the large number of users.

“CERN is known for having a lot of data and requiring a lot of computing capacity to analyze all this data,” he said. “But, actually, we also have a very large community and we have a lot of users and people interested in the stuff we do.”

The group’s infrastructure is being migrated to OpenShift, which runs on Kubernetes, and will host a large number of science websites. The nuclear research organization has more than 1,000 sites.

It’s been Kubernetes’ API functionality that has been crucial, Rocha elaborated. “It’s not just an orchestrator; it’s really the API and its capability of managing a huge number of resources, including custom resources,” he concluded.
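To show what managing custom resources through that API can look like, here is a minimal sketch with the Kubernetes Python client. The custom resource group, version and plural used here are hypothetical examples, not CERN’s actual definitions.

```python
# Minimal sketch: the same Kubernetes API also serves user-defined custom
# resources. The group/version/plural values are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# List objects of a hypothetical "Website" custom resource definition.
websites = custom.list_cluster_custom_object(
    group="example.cern.ch", version="v1", plural="websites"
)
for item in websites.get("items", []):
    print(item["metadata"]["name"])
```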

Be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of KubeCon + CloudNativeCon. (* Disclosure: TheCUBE is a paid media partner for Red Hat Summit Inc. Neither Red Hat, the sponsor for theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Image: Siwabud Veerapaisarn
