

What are dark matter and dark energy? Why don’t we see antimatter? Answering questions like these helps us understand the origin of the universe.
CERN, the European Organization for Nuclear Research, is one of the world’s largest and most respected centers for scientific research. Its purpose is to find out what the universe is made of and how it works by probing the fundamental structure of the particles that make up everything around us. Using the world’s largest and most complex scientific instruments, the organization provides researchers with a unique range of particle accelerator facilities to advance society’s knowledge of the universe.
“With particle accelerators, we try to recreate some of the moments just after the universe was created — the Big Bang — to understand better what the state of matter was at that time,” said Ricardo Rocha (pictured), computing engineer at CERN. “The result is very often a lot of data that has to be analyzed, and that’s why we traditionally have had huge requirements for computing resources.”
Rocha spoke with Jeff Frick, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during the KubeCon + CloudNativeCon NA event. They discussed how CERN captures massive amounts of data for researchers, why containerization matters to the organization, and how the Cloud Native Computing Foundation helps CERN find and expand its use of external resources to make its software more efficient and, in the future, its infrastructure more agile. (* Disclosure below.)
The data collection process begins by accelerating particles until they get very close to the speed of light; at specific points, CERN engineers make them collide. Gigantic detectors acting like cameras take about 40 million pictures per second, generating one petabyte of data per second. Such a huge amount of data is difficult to work with, so filters are used to reduce it to a more manageable rate of several tens of gigabytes per second.
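CERN’s real trigger systems are specialized hardware and software, but the underlying idea of filtering a torrent of collision events down to a recordable stream can be sketched in a few lines of Python. The snippet below is purely illustrative: the event fields, the energy threshold and the selection rule are invented for the example and are not CERN’s trigger logic.

```python
import random

# Illustrative sketch of an event filter ("trigger"): keep only the
# small fraction of simulated events that pass a selection, the same
# way CERN's real triggers cut the raw detector stream down to a
# recordable rate. The event fields and the 100 GeV threshold are
# made up for this example.

def detector_stream(n_events):
    """Yield fake 'events'; real detectors produce ~40 million per second."""
    for i in range(n_events):
        yield {"id": i, "energy_gev": random.expovariate(1 / 20)}

def trigger(events, threshold_gev=100.0):
    """Keep only events above an energy threshold."""
    for event in events:
        if event["energy_gev"] >= threshold_gev:
            yield event

if __name__ == "__main__":
    total = 1_000_000
    kept = sum(1 for _ in trigger(detector_stream(total)))
    print(f"kept {kept} of {total} events "
          f"({100 * kept / total:.3f}% of the stream)")
```

With the parameters above, well under one percent of the simulated events survive the cut, which is the point: most of what the detectors see is discarded before anything is written to storage.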
Once captured, the data needs to be recorded and made available to researchers to conduct their experiments.
“Traditionally we’ve had to build the software [to do this], because there weren’t many people around who had this same kind of need,” Rocha explained. “But the revolution of containers and the cloud has allowed us to join other communities and benefit from their work so we don’t have to do everything ourselves.”
Containerization is key because the organization needs the ability to share information and resources among physicists and engineers.
“The idea of containerizing the work, including all the code, all the data, and then sharing this with our colleagues is very appealing. The fact that we can also take this unit of work and just deploy it in any infrastructure that has a standardized API like Kubernetes is very appealing,” Rocha stated.
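Rocha does not name specific tooling here, but the portability he describes comes from the fact that every conformant Kubernetes cluster exposes the same API. As a rough illustration only, the sketch below uses the official Kubernetes Python client to submit a containerized batch job; the image, job name and command are hypothetical placeholders, and the same call would work against any cluster a kubeconfig points at.

```python
# Minimal sketch, assuming the official `kubernetes` Python client
# (pip install kubernetes) and a kubeconfig pointing at some cluster.
# The image, job name and command are placeholders, not CERN software.
from kubernetes import client, config

def submit_analysis_job():
    config.load_kube_config()  # the same client code works on any cluster

    container = client.V1Container(
        name="analysis",
        image="example.org/physics-analysis:latest",   # hypothetical image
        command=["python", "run_analysis.py"],          # hypothetical entrypoint
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="containerized-analysis"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    containers=[container],
                    restart_policy="Never",
                )
            ),
            backoff_limit=2,
        ),
    )
    # Submit the job to whichever cluster the kubeconfig points at.
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

if __name__ == "__main__":
    submit_analysis_job()
```

Because the unit of work is a container image plus a standard API object, the same submission runs unchanged on an on-premises cluster or on external resources.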
The engineering team at CERN is also working to make its infrastructure more agile, and coordinators from the CNCF have played a key role in helping it connect with the external resources needed to do this. CERN will always be working to expand its on-premises resources, but Kubernetes will continue to be a critical component in finding useful external resources as well, Rocha concluded.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the KubeCon + CloudNativeCon NA event. (*Disclosure: TheCUBE is a paid media partner for the Cloud Native Computing Foundation. Neither the Cloud Native Computing Foundation, the sponsor for theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)