While the open-source community is still discovering new ways to use container technology to virtualize software application deployments, its ancestor, the Linux kernel, first released in 1991, isn’t old news just yet, according to Scott McCarty (pictured), technical product marketing, containers, at Red Hat Inc.
“Containers have made the [Linux] kernel hot again in a lot of ways,” said McCarty, who spoke to Stu Miniman (@stu) and James Kobielus (@jameskobielus), co-hosts of theCUBE, SiliconANGLE’s mobile live-streaming studio, during DockerCon17 in Austin, Texas, about the container community. (*Disclosure below.)
Portability is prime
A large portion of the container market deals with legacy use cases. McCarty gave the example of a network scanning tool; with containers, the tool could easily be run on the network without having to build any infrastructure around it. Likewise, it was just as easy to remove.
Containers address another pain point as well: the sometimes complex integration points between cloud applications and the stack, namely the network interactions. While the network architecture might be mature, the interactions are less so, he explained. Containers could allow for more dynamic interactions between the network and the stack.
“I’d like to see more of that dynamic provisioning happening,” McCarty stated.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s independent editorial coverage of DockerCon US 2017 Austin. (*Disclosure: Red Hat Inc. sponsors some DockerCon segments on SiliconANGLE Media’s theCUBE. Neither Red Hat nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)