

Over the past few years, Yelp, Inc. has expanded its offerings with an interactive feature called Yelp Reservations, which lets customers interact with a restaurant of their choice and book dining reservations. Because the app has proven heavily data-intensive, Yelp turned to Splunk to help manage it.
Kris Wehner, VP of engineering for SeatMe at Yelp, and Charles Guenther, senior software engineer at Yelp, spoke with John Walls (@JohnWalls21) and John Furrier (@furrier), cohosts of theCUBE, from the SiliconANGLE Media team, live from the Splunk.conf 2016 conference in Orlando, FL. The pair discussed the Yelp Reservations service and how they are using Splunk to manage the huge volume of data it creates.
According to Wehner, the Yelp Reservations application combines legacy code with a large number of microservices running on PaaSTA, an open, distributed Platform-as-a-Service that Yelp built itself. The microservices let developers working on the system push code on the fly, without involving management or teams outside IT. That freedom allows rapid deployment of code fixes and enhancements, while eliminating the roadblocks a lengthy approvals process can create.
“What that’s about is developer empowerment,” explained Guenther. “We don’t want them having to reach out to the site reliability team or the operations team to say, ‘I want to deploy this new code.’ We want to enable them to just check in some configuration, just check in their code and basically just let go.”
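The "just check in some configuration" workflow Guenther describes can be pictured as a declarative service definition committed alongside the code. The file name, fields and values below are purely illustrative, a sketch of the idea rather than PaaSTA's actual schema:

```yaml
# marathon-cluster.yaml — hypothetical PaaSTA-style instance definition.
# A developer checks in a file like this together with the service code;
# the platform reads it and handles scheduling and deployment, with no
# ticket to an operations or site reliability team required.
main:
  cpus: 0.5        # CPU share per instance (illustrative)
  mem: 512         # memory in MB (illustrative)
  instances: 3     # how many copies of the service to run
```

The design point is that deployment becomes a code review like any other: the configuration lives in version control, so shipping a change is just another commit.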
Wehner also said that Yelp's engineers and developers decided early on to handle the microservices and their log data on the same Marathon-hosted platform. This means that as soon as data is generated, it is immediately available to Splunk for analytics. Delivering data this efficiently not only saves time, but also lets developers spot issues the moment they occur, since everything runs in close proximity.
“This means that when we push a service, we know that the service is going to emit logs in a way that is immediately available to all the downstream analytics tools, which for us includes Splunk,” said Wehner. “So our Splunk forwarders are running inside a Marathon-hosted platform that then consume off a centralized log bus.”
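The pattern Wehner describes — services publish logs to a centralized bus, and downstream consumers such as Splunk forwarders read from it — can be sketched in miniature. This is a toy illustration only: the in-memory queue stands in for Yelp's real log bus, and the function names are invented for the example.

```python
import json
import queue

# Stand-in for the centralized log bus; in production this would be a
# distributed message system, not an in-process queue.
log_bus = queue.Queue()

def emit_log(service, level, message):
    """A service publishes a structured log line to the shared bus."""
    log_bus.put(json.dumps({"service": service, "level": level, "msg": message}))

def consume_logs():
    """A downstream forwarder drains the bus, making every log line
    immediately available to analytics tooling."""
    lines = []
    while not log_bus.empty():
        lines.append(json.loads(log_bus.get()))
    return lines

# As soon as a service emits a log, a consumer can see it:
emit_log("reservations-api", "INFO", "table booked")
emit_log("reservations-api", "ERROR", "double booking detected")
records = consume_logs()
```

Because every service writes to the same bus in a common structured format, any new service's logs are visible to the analytics pipeline the moment the service is deployed, with no per-service integration work.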
Watch the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of Splunk.conf 2016.