LinkedIn previews upcoming enhancements for Apache Kafka
The engineering team at LinkedIn Inc. hasn't been sitting idle since contributing its homegrown message broker to the open-source community three years ago. The volume of data flowing through its network has since ballooned to more than 1.34 petabytes per week, growth that required major modifications now slated to become available in the next major release.
One of the most important additions previewed in the blog post announcing the update this morning is a feature that lets administrators set limits on the amount of traffic each application generates. That expanded control is meant to help avoid situations where a sudden spike in one application's message output clogs the network for all the others.
When a usage threshold is exceeded, Kafka will simply stop allocating additional bandwidth to the offending service, which lets operators account, when planning their implementations, for the fact that some workloads naturally generate more messages than others. But not every possibility can be foreseen, which is why the capability will also allow traffic caps to be modified on the fly.
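For a sense of what such per-client caps look like in practice, here is a sketch based on the quota mechanism that eventually shipped in open-source Kafka, using its bundled kafka-configs.sh tool; the broker address, client name, and byte rates are illustrative, not from LinkedIn's post:

```shell
# Cap the hypothetical client "reporting-app" at roughly 1 MB/s of produce
# traffic and 2 MB/s of fetch traffic; the change takes effect without
# restarting any broker.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type clients --entity-name reporting-app \
  --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152'

# Raise the produce cap on the fly when the workload legitimately grows.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type clients --entity-name reporting-app \
  --add-config 'producer_byte_rate=5242880'
```

Because the settings are stored centrally rather than in each broker's static configuration file, updating them is a single command rather than a rolling restart.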
That means administrators won’t have to restart every node in the underlying server cluster individually to apply the changes, a convenience that saves effort and, most importantly, time. The ability to react quickly to changing usage requirements is invaluable for the more than 100 companies using Kafka in production, a list that includes the likes of The Goldman Sachs Group Inc., which regularly deals with events such as the trading surge that followed China’s recent move to devalue its currency.
But scalability is only one of several operational issues that need to be addressed when handling such high-volume, sensitive data. The other big priority is security, which LinkedIn also plans to address with the addition of native encryption that the blog post specifies will be rolled out internally in the coming months and released to the open-source community next year.
The social networking giant also plans to reduce Kafka’s reliance on the relatively unsafe Apache ZooKeeper synchronization service as part of that security push, and to introduce a standardized monitoring framework that is currently in the planning stage. The proposed addition will verify the integrity of data and check that it is flowing smoothly.
LinkedIn has been working to make Kafka more reliable, too, removing latency bottlenecks and raising the current 1-megabyte limit on message size with an experimental fragmentation feature that is likewise slated to become available to the open-source community. When released, the enhancements will make Kafka even more attractive than it is now for organizations trying to keep up with the vast amounts of data flowing into their networks.
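The fragmentation idea can be illustrated with a small sketch: split a payload that exceeds the per-message cap into indexed chunks, each small enough to send, and reassemble them on the other side. The chunk header layout, helper names, and the 1 MB constant below are all hypothetical stand-ins, not LinkedIn's actual implementation:

```python
# Hypothetical sketch of message fragmentation around a size cap.
MAX_MESSAGE_BYTES = 1024 * 1024  # Kafka's traditional 1 MB default limit
HEADER = 8  # bytes reserved per chunk for (index, total) bookkeeping

def fragment(payload: bytes, limit: int = MAX_MESSAGE_BYTES) -> list[bytes]:
    """Split payload into chunks, each prefixed with its index and the chunk count."""
    body = limit - HEADER
    pieces = [payload[i:i + body] for i in range(0, len(payload), body)] or [b""]
    total = len(pieces)
    return [idx.to_bytes(4, "big") + total.to_bytes(4, "big") + piece
            for idx, piece in enumerate(pieces)]

def reassemble(chunks: list[bytes]) -> bytes:
    """Order chunks by their index prefix and concatenate the bodies."""
    ordered = sorted(chunks, key=lambda c: int.from_bytes(c[:4], "big"))
    return b"".join(c[HEADER:] for c in ordered)
```

A real implementation would also have to handle chunks interleaved from different messages and lost or duplicated fragments, which is presumably why the feature is still labeled experimental.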
Photo via mumuxe