UPDATED 16:44 EST / SEPTEMBER 08 2021

BIG DATA

Ascend.io aims to simplify data engineering for better data interaction

Driving more value from data now poses bigger questions than how to store and manage it, according to a startup that specializes in data orchestration and pipelines.

One problem is that data teams are over capacity; another is that skill sets vary so widely that no single solution for extracting meaning from data fits everyone involved.

“The new challenge that is emerging is not how do you store more data or how do you process more data?” said Sean Knapp (pictured), founder and chief executive officer of Ascend.io. “But it’s how do you create more data products.”

Knapp’s company says it simplifies data engineering, allowing for better data interaction, and eliminates the extra work traditionally created by inefficient, haphazardly accumulated data lakes. What it calls a flex-code model, combined with a high degree of automation, performs the orchestration within the Amazon cloud.

Knapp spoke with John Furrier, host of theCUBE, SiliconANGLE Media’s livestreaming studio, in advance of the AWS Startup Showcase: New Breakthroughs in DevOps, Analytics, and Cloud Management Tools event. They discussed how data engineering methodology needs to change to drive value for business. (* Disclosure below.)

Flex-code

“People are expecting building data pipelines to be much easier,” Knapp said. However, it remains too complicated and needs simplifying. Flex-code, he reckons, is the answer.

Flex-code provides a combination of an individual team member’s preferred programming language with low- or no-code tools, as opposed to a straight no-code or low-code user interface, which his company believes almost no one wants.

“Teams can [then] actually plug in at different layers of the stack — in different abstraction layers — and contribute side-by-side with each other,” he said. “All towards the creation of this data product.”

Flex-code is a methodology. The idea is that one should be able to “peel back layers,” with any team member able to contribute to the architecture in the way least intrusive for that individual. Adding chunks of low-code, such as SQL, or working through a no-code interface should all be possible, according to Knapp. That’s contrary to the traditional method of constructing pipelines and data architecture, in which the deeper one goes into the stack, the more unique customizations need building out, all adding complication for anyone not involved in the original structure.
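The idea of different contributors plugging in at different abstraction layers can be sketched with a toy example. This is purely illustrative and not Ascend.io's actual product or SDK: it shows the same aggregation expressed once as a low-code SQL snippet (something a SQL-fluent analyst might contribute) and once in full Python (something an engineer might write), side by side in one codebase.

```python
import sqlite3

orders = [("east", 10), ("west", 5), ("east", 7)]

def totals_sql(rows):
    """Low-code layer: the transform is a declarative SQL snippet."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (region TEXT, amount INT)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return dict(con.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region"))

def totals_python(rows):
    """Full-code layer: the same transform written out in Python."""
    out = {}
    for region, amount in rows:
        out[region] = out.get(region, 0) + amount
    return out

# Both layers express the same data product.
assert totals_sql(orders) == totals_python(orders) == {"east": 17, "west": 5}
```

The point of the flex-code argument is that neither contributor has to work in the other's layer for the two pieces to compose.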

“You end up then throwing away most of the benefits, and the adoption of any of these other code and tools ends up shutting off a lot of the rest of the company from contributing,” he added.

Automation

Declarative models are the second differentiator that Ascend.io says makes it more efficient than other pipeline engineering operations. Declarative programming is where code describes what should be achieved rather than how to perform the objective; the compiler or engine works out that part. SQL is an example of a declarative programming language.
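The what-versus-how distinction can be made concrete with a generic example (not Ascend.io code): the imperative version spells out every step of a filter-and-sort, while the declarative SQL version only states the desired result and leaves the execution plan to the database engine.

```python
import sqlite3

orders = [{"id": 2, "amount": 50}, {"id": 1, "amount": 200}, {"id": 3, "amount": 120}]

def high_value_imperative(rows, threshold):
    """Imperative: spell out HOW — loop, test, append, sort."""
    result = []
    for row in rows:
        if row["amount"] > threshold:
            result.append(row["id"])
    result.sort()
    return result

def high_value_declarative(rows, threshold):
    """Declarative: state WHAT is wanted; the SQL engine plans the how."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INT, amount INT)")
    con.executemany("INSERT INTO orders VALUES (:id, :amount)", rows)
    cur = con.execute(
        "SELECT id FROM orders WHERE amount > ? ORDER BY id", (threshold,))
    return [r[0] for r in cur]

# Same result either way; only the style of expression differs.
assert high_value_imperative(orders, 100) == high_value_declarative(orders, 100) == [1, 3]
```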

Ascend.io reckons data pipelines should be built declaratively, since less code means less maintenance. Python and Java, for example, can be used for specifying inputs, outputs and data logic, it explains on its website, with an intelligence layer handling the rest, according to Knapp.
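A minimal sketch of what a declarative pipeline spec can look like, assuming a toy engine of our own invention (this is not Ascend.io's SDK): each step declares only its inputs and its transform logic, and a small resolver decides the execution order by following the declared dependencies.

```python
# Hypothetical declarative pipeline: steps name their inputs and logic;
# nothing here says in what order to run them.
pipeline = {
    "raw_orders": {"inputs": [],              "fn": lambda: [3, 1, 2]},
    "sorted":     {"inputs": ["raw_orders"],  "fn": lambda x: sorted(x)},
    "top":        {"inputs": ["sorted"],      "fn": lambda x: x[-1]},
}

def run(pipeline, target, cache=None):
    """Materialize a target by recursively resolving its declared inputs."""
    cache = {} if cache is None else cache
    if target not in cache:
        step = pipeline[target]
        args = [run(pipeline, dep, cache) for dep in step["inputs"]]
        cache[target] = step["fn"](*args)
    return cache[target]

assert run(pipeline, "top") == 3
```

Because the spec is data rather than a script, an engine sitting underneath can cache intermediate results, re-run only what changed, and give the fast feedback loops the next paragraphs describe.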

Automation also speeds things up, he explained: “Where the classic model has been writing code, you compile it, you ship it, you push it out. And then, you know, you cross your fingers; you’re like, ‘Gosh, I really hope that that works!’” That’s a slow iteration cycle, he reckons.

In this newer model, work that used to take weeks or months because of slow iteration cycles can be done in hours or days. That’s thanks to the fast feedback loops the automation provides as you build.

Scaling

Overall “scaling problems, while not entirely solved, for most companies are largely solved problems now,” Knapp stated. By that he means that “bits and bytes, how many servers, how big your clusters are” are not really considerations anymore.

In other words, the challenge has shifted away from that physical size problem to one of accessing the value. “It’s how do you get more people able to build more products, faster and safely, that propel the business forward?” Knapp asked.

Knapp also promised to help customers get usable data pipelines going within a guaranteed period of signing up.

“We will help architect their first project with them and ensure that they have full-fledged live data, data products live, within the first four weeks,” he said. “We do a lot of collaborative building.”

Watch the complete video interview below, and be sure to check out SiliconANGLE’s and theCUBE’s coverage of the AWS Startup Showcase: New Breakthroughs in DevOps, Analytics, and Cloud Management Tools event on September 22. (* Disclosure: Ascend.io sponsored this segment of theCUBE. Neither Ascend.io nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
