UPDATED 15:34 EST / FEBRUARY 04 2023

SECURITY

AI threats and open-source vulnerabilities top a host of security issues facing the cloud-native community

It turns out that security concerns in the cloud-native world look a lot like what’s keeping practitioners up at night in the rest of the technology ecosystem.

Implementation of zero-trust security in the enterprise, supply chain vulnerabilities, threats to cryptography from quantum computing, and the rise of powerful artificial intelligence engines such as OpenAI LLC’s ChatGPT are high on the list of worries among cloud-native security personnel. Combined with continued concerns over open-source security flaws, they paint a picture of a cloud-native community under escalating attack.

Threat actors follow growth vectors in the compute world, and cloud-native is riding an upward trend. Gartner has estimated that within the next two years, 95% of new digital workloads will be deployed on cloud-native platforms. Protecting this infrastructure will be paramount, which is why the cloud-native security community is now actively assessing where the most serious risks reside.

“Everyone is becoming a cloud-native developer,” Priyanka Sharma (pictured), executive director and general manager of the Cloud Native Computing Foundation, said during her keynote remarks at the inaugural CloudNativeSecurityCon in Seattle on Wednesday. (More coverage of the event from theCUBE, SiliconANGLE Media’s video studio, is available here.) “We’re essential to organizations and business everywhere. The lessons in cloud security have staying power.”

Worries about ChatGPT

Those lessons will need lasting impact because machines are getting smarter. OpenAI’s ChatGPT has signaled the dawn of a new era in artificial intelligence, one in which powerful automation tools are readily available and easy for a mass audience to use.

The cloud-native security community is worried about ChatGPT. In a presentation at the conference on Wednesday, OpenSSF General Manager Brian Behlendorf described a range of concerns, from automated spear-phishing attacks on open-source projects using AI-generated replies to AI-spoofed contributors that plant malicious backdoors in source code. “We know that AI models can be corrupted,” Behlendorf said.

Issues around the potential for corruption have accompanied other recent AI advances, such as GitHub’s release last year of an AI programming tool named Copilot. Copilot generates and suggests lines of code directly within a programmer’s editor. In January, Microsoft Corp. announced general availability of its Azure OpenAI Service, which includes Codex, the neural network that powers Copilot.

Security researchers have expressed concerns over the potential for Copilot to generate exploitable code. One study of Copilot by researchers at NYU found that the tool produced vulnerable code roughly 40% of the time.
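The study’s examples are not reproduced here, but the class of weakness such audits flag is familiar. The sketch below is a hypothetical illustration, not code taken from the study or generated by Copilot: the first function interpolates user input directly into a SQL string, a classic injection flaw, while the second uses a parameterized query that treats the input as data.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is interpolated into the SQL string,
    # so an attacker can inject arbitrary SQL (e.g. "' OR '1'='1").
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Hardened pattern: a parameterized query lets the driver handle quoting,
    # so the input is treated as data rather than executable SQL.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```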

Meanwhile, use of ChatGPT continues to mushroom. According to a recent analysis, it has become the fastest-growing app of all time.

“The real elephant in the room is the rise of AI and specifically large language models,” Matt Jarvis, director of developer relations at Snyk Inc., said Thursday. “Millions of people have been trying ChatGPT out. The field is moving incredibly quickly. It is already clear that it’s going to drive massive change.”

Open-source vulnerability

The open-source community depends on a widely used set of collaborative platforms to build new projects and enhance existing ones. That includes GitHub, where several notable breaches have been disclosed in recent weeks. Over the past 60 days, Slack employee tokens were stolen, Okta Inc.’s source code repositories on GitHub were accessed, and Dropbox Inc. disclosed a breach after a malicious actor exfiltrated 130 GitHub repositories.

“People are still checking credentials into GitHub,” Matt Klein, software engineer at Lyft Inc. and creator of the open-source project Envoy, said during a panel discussion hosted at the conference by Tetrate Inc. “In 2023, this is still a major problem.”
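One common mitigation is to scan commits for secrets before they ever reach a remote repository. The following is a minimal sketch of that idea, intended as an illustration rather than a substitute for dedicated scanners; the regular expressions and file handling are simplified assumptions.

```python
import re
import subprocess
import sys

# Illustrative patterns only; purpose-built scanners ship far more rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def staged_files() -> list[str]:
    # Ask git which files are staged for the next commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {label}")
    for finding in findings:
        print(finding, file=sys.stderr)
    return 1 if findings else 0  # a nonzero exit blocks the commit in a hook

if __name__ == "__main__":
    sys.exit(main())
```

Wired into a pre-commit hook, a check like this refuses the commit whenever a credential-shaped string appears in staged files.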

One of the most notable recent open-source attacks involved the PyTorch machine learning framework. In December, PyTorch maintainers identified a malicious dependency that had been planted in a third-party package repository used by the project, compromising nightly builds of the AI development tool.

The malicious package is believed to have been downloaded more than 2,300 times by open-source users. “The attacker abused a trust relationship in order to get their own code into PyTorch,” Maya Levine, product manager at Sysdig Inc., said during a presentation at the conference. “We have yet to know what the true implications of this were.”

The PyTorch hack demonstrates why corruption in the software supply chain remains a troubling concern in enterprise IT circles. It took just a few hours for malicious actors to begin launching attacks after researchers released details of the Log4j vulnerability in December 2021. A Sonatype Inc. study last year documented a 700% average annual increase in software supply chain attacks over the past three years.

For a software supply chain initially built on trust, new tools are emerging to mitigate risk. These include Tekton Chains, a security subsystem of the Kubernetes-native Tekton CI/CD pipeline, and Sigstore, a tool that automates digital signing and verification of software artifacts.

Sigstore was prototyped at Red Hat Inc. and is a cornerstone of the company’s supply chain trust and security strategy. Partners with Red Hat on the Sigstore project include Google LLC, Hewlett Packard Enterprise Co., VMware Inc., and Cisco Systems Inc.
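Sigstore’s own tooling handles keys, certificates and transparency logs for signing at scale; the sketch below only illustrates the underlying idea of signing and verifying a build artifact, using the general-purpose cryptography package rather than Sigstore’s actual APIs, with a static key pair assumed purely for brevity.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# A producer generates a key pair and signs the artifact it publishes.
# (Sigstore itself avoids long-lived keys by issuing short-lived certificates.)
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

artifact = b"contents of a release tarball or container manifest"
signature = private_key.sign(artifact)

# A consumer verifies the artifact against the published signature before use.
try:
    public_key.verify(signature, artifact)
    print("signature valid: artifact matches what the producer signed")
except InvalidSignature:
    print("signature invalid: artifact was altered or signed by someone else")
```

The point of tools like Sigstore is to make this verification routine and auditable across the supply chain, rather than an ad hoc step each consumer has to build for themselves.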

“In order to fully implement software supply chain security, you have to do it in partnership,” Emmy Eide, senior manager for product security supply chain at Red Hat, said Thursday. “We’ve only seen success at Red Hat when we’ve used this partnership approach. You keep your messaging around risk.”

Zero trust and quantum

One approach embraced by much of the cloud-native security community to minimize risk is the implementation of zero-trust practices. This model, which requires every user and device to be authenticated and authorized before access is granted, has been effective in reducing cybersecurity risk, according to some studies.

However, zero trust has also emerged as a source of friction within organizations, as security teams struggle to control access for business-critical applications they never built in the first place.

“Zero trust for me is to remove implicit trust and be intentional,” Kelsey Hightower, developer advocate at Google, said Wednesday. “We try to put these hard shells around all apps. Most security professionals don’t know what these apps are doing. We end up trying to secure things we don’t understand.”
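A minimal sketch of the “remove implicit trust” idea follows: every request carries a token that is verified before the handler runs, and anything unverified or outside an explicit allow list is denied, rather than being trusted because it originates inside the perimeter. The service name, token format and signing key are hypothetical; a real deployment would rely on an identity provider such as mTLS, OIDC or SPIFFE rather than a static key.

```python
import hashlib
import hmac

# Hypothetical shared signing key, used here only to keep the sketch self-contained.
SIGNING_KEY = b"example-signing-key"

def issue_token(identity: str) -> str:
    # Token = identity plus an HMAC over it, so it cannot be forged
    # without the signing key.
    mac = hmac.new(SIGNING_KEY, identity.encode(), hashlib.sha256).hexdigest()
    return f"{identity}:{mac}"

def verify_token(token: str) -> str | None:
    # Verified on every request: no caller is trusted by default.
    identity, _, mac = token.partition(":")
    expected = hmac.new(SIGNING_KEY, identity.encode(), hashlib.sha256).hexdigest()
    return identity if hmac.compare_digest(mac, expected) else None

def handle_request(token: str, action: str) -> str:
    identity = verify_token(token)
    if identity is None:
        return "403 Forbidden"           # implicit trust removed: deny by default
    if action not in {"read"}:           # intentional, least-privilege allow list
        return f"403 Forbidden for {identity}"
    return f"200 OK: {identity} may {action}"

# Example: a valid token for "payments-service" may read, and nothing else.
token = issue_token("payments-service")
print(handle_request(token, "read"))         # 200 OK
print(handle_request(token, "write"))        # 403 Forbidden
print(handle_request("bogus:mac", "read"))   # 403 Forbidden
```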

AI, supply chain, open source and zero trust are current areas of focus for the security community, but they’re not the only concerns. Cloud-native security researchers are also beginning to cast a wary eye at quantum computing and its future implications for public key cryptography.

The concern is that quantum machines may ultimately surpass the performance limits of conventional computers. That raises the possibility that quantum computers could eventually break widely used public-key encryption algorithms, creating a massive security hole.

The technology industry is already building solutions in anticipation of this threat. In November, SandboxAQ, a startup incubated in Alphabet Inc., received a contract to assist the U.S. Air Force in the implementation of post-quantum cryptography. Last month, QuSecure Inc. unveiled what it termed the industry’s first quantum-safe orchestration for protecting encrypted private data on any website or mobile app using quantum-resistant connections.

“The cycle of technology change moves pretty fast and it’s only getting faster,” said Snyk’s Jarvis. “We’re going down the rabbit hole of finding something we can trust.”

Photo: Mark Albertson/SiliconANGLE
