UPDATED 10:58 EDT / NOVEMBER 09 2023

AI

There’s more to AI than generative AI

A friend of mine from high school once wrote on his senior yearbook inscription, “For every question, there is an answer that is simple, direct, forthright, and wrong.”

The boilerplate answers provided by general-purpose large language models such as ChatGPT simply reinforce that truism. And we’re starting to get the same impression from the dialogue about artificial intelligence. Generative AI has hijacked the conversation, and all too often people are conflating AI with generative AI.

One recent article makes the point plain. The Information’s Readers Use AI—and Pay for It (paywalled) sounds like it’s talking about all forms of AI, but read further and you’ll see that the discussion sticks exclusively to gen AI. And this was with a pretty elite audience: the crowd that pays several hundred dollars annually for a channel covering the ins and outs of the tech venture capital scene. These are readers who should be sophisticated about technology, cognizant of the differences between AI and gen AI, and equipped with the budgets to become early adopters.

There’s a danger in viewing AI as gen AI, or in thinking that this newer form of AI is somehow “better.” AI isn’t a hammer, and all use cases aren’t nails. There are different forms of AI, each of which is suited to different types of use cases and delivers different types of results. AI is not one big monolithic black box.

Don’t forget about machine learning

For review, let’s define at a 100,000-foot level what we’re talking about.

There’s machine learning, which is designed for generating predictive or prescriptive answers. ML is a comparatively straightforward approach that draws on different types of algorithms, each performing different tasks.

Just a few examples (this list is hardly exhaustive) include linear regression for making predictions based on specific inputs; decision trees, which literally branch out along multiple paths to answer different what-if scenarios; classification and clustering, for grouping and identifying related data points; and reinforcement learning, which iteratively refines a model as it navigates various interactions or events. Training can be supervised, where models learn from labeled examples curated by humans, or unsupervised, where the model finds structure in unlabeled data on its own.
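
For readers who want to see what this looks like in practice, here’s a minimal sketch of two of these approaches, one supervised and one unsupervised, using scikit-learn on synthetic data (the feature names in the comments are purely illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Supervised learning: a linear regression fits labeled examples (inputs X -> target y).
X = rng.normal(size=(200, 3))                      # e.g., rainfall, temperature, soil index
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)
reg = LinearRegression().fit(X, y)
print("Prediction for first sample:", reg.predict(X[:1])[0])

# Unsupervised learning: k-means clustering finds structure in unlabeled data points.
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
print("Cluster assignments:", clusters[:10])
```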

Then there’s deep learning, which, depending on your viewpoint, is either a more complex form of ML or its own creature. DL loosely mimics the brain with multilayered neural networks and multifaceted connections that make complex decisions. It is typically used for deciphering complex data, such as image, text, audio or video recognition, or for other complex, multisensory tasks such as autonomous driving.

Just as there are different forms of machine learning, there are different types of neural networks that are suited for different tasks, a few of which include convolutional neural networks (CNNs) that are useful for image recognition; generative adversarial networks (GANs) that use competing models for image creation; and recurrent neural networks (RNNs) that are often suited for speech recognition, robotics and complex forecasting.
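
To make the CNN idea concrete, here’s a minimal, hypothetical sketch in PyTorch of a small convolutional network for classifying 28x28 grayscale images into 10 classes (the layer sizes are chosen purely for illustration):

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(4, 1, 28, 28))  # a batch of 4 fake images
print(logits.shape)                            # torch.Size([4, 10])
```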

Now the next big thing

That brings us to generative AI. It uses generative pre-trained transformer (GPT) models, which are, in essence, probability models on steroids. They take deep learning neural networks to a new level, predicting word by word, pixel by pixel or entity by entity what is most likely to come next when building a sentence, constructing an image or extracting specific entities. The best-known GPT models are large language models (LLMs), because text and voice generation and translation are among the most popular use cases. But GPT models can also be trained on other entities as varied as molecular structures or geospatial data, to give a couple of examples.
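
As a rough illustration of that word-by-word probability machinery, here’s a minimal sketch using the Hugging Face transformers library and the small GPT-2 model (it downloads pretrained weights on first run, and the prompt is arbitrary):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "There is more to AI than"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # a score for every vocabulary token at each position
probs = logits[0, -1].softmax(dim=-1)      # probabilities for the *next* token after the prompt
top = probs.topk(5)
print([(tok.decode([int(i)]), round(p.item(), 3)) for p, i in zip(top.values, top.indices)])
```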

The key word for gen AI models is “transformers.” Before transformer models were invented, RNNs were viewed as best suited for generative use cases. Introduced by Google researchers back in 2017, transformer models provided a shortcut relative to RNNs for processing huge troves of data on complex problems by, in effect, attending only to the most important parts of the input. By slicing compute loads by orders of magnitude, transformers made ChatGPT possible, and with it a Cambrian explosion of foundation models (FMs) that lets organizations choose the right model for the task.
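
The mechanism behind that shortcut is attention. Here’s a minimal sketch, in PyTorch, of the scaled dot-product attention at the heart of a transformer, stripped of the multiple heads and learned projections a real model would have:

```python
import torch

def attention(query, key, value):
    # Score every position against every other, scaled by the key dimension.
    scores = query @ key.transpose(-2, -1) / key.shape[-1] ** 0.5
    weights = scores.softmax(dim=-1)       # each row sums to 1: where to "pay attention"
    return weights @ value                 # a weighted mix of the values

x = torch.randn(6, 16)                     # a toy sequence: 6 tokens, 16 dimensions each
out = attention(x, x, x)                   # self-attention: the sequence attends to itself
print(out.shape)                           # torch.Size([6, 16])
```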

And so, the most common use cases for the first wave of gen AI models have centered on language and images. They take natural language queries and chatbots to the next level with an approach that ditches keywords in favor of digesting huge troves of content that better reflect how people actually ask and answer questions. The result: chatbots that sound less robotic, and queries that can be written in plain language rather than the keywords of a Google search.

Gen AI is not only transforming content generation, but also scaling document entity extraction, where key items of information are pulled from a corpus of documents or transactions. For instance, it scales the human effort needed to uncover customer privacy violations across a huge trove of transactions, to find compliance violations hidden inside corporate documents, or to surface significant genomic data from viruses or human DNA that, in turn, can help identify potentially useful molecular structures for innovative new medical treatments.
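
In practice, that kind of extraction often boils down to prompting a model for structured output. Here’s a minimal, hypothetical sketch; `call_llm` is a placeholder for whichever LLM API or library an organization actually uses, and the entity types are invented for illustration:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM service or library you actually use."""
    raise NotImplementedError("Plug in your LLM provider here.")

def extract_entities(document: str) -> dict:
    # Ask the model to pull out specific entities and return structured JSON.
    prompt = (
        "Extract any customer names, account numbers and policy references "
        "from the document below. Respond with JSON only.\n\n" + document
    )
    return json.loads(call_llm(prompt))
```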

With all the miracles that gen AI can deliver comes the realization that none of this is the result of contextual understanding; it comes from taking pattern recognition and probability calculations to the extreme. Despite how the answers sound, gen AI models do not make computers sentient. And because “common sense” does not factor into the computations, gen AI can hallucinate. We’ve all heard about that problem, but it’s not the only potential glitch. Hold that thought.

Cut to the chase

Clearly, not all AI models are alike. Even categorizing them under the buckets of machine learning, deep learning and generative pre-trained transformers doesn’t do justice to the many variations in how algorithms are structured and how data is processed.

But it all starts with finding the right tool for the job. Predictive and prescriptive analytics are different from voice recognition, which in turn is different from entity extraction, natural language query or content generation. Some problems require hard facts, while others just require a general idea.

A colleague of ours, Jason Bloomberg, summed it up nicely: It’s a matter of precision versus salience. At this point, ML or DL models are better suited to providing precise answers, while generative models are best used for establishing context. In many cases, the choice won’t be either-or, but an “ensemble” of different models, each solving part of the problem, with the results assembled into a composite answer.

So, if you are a financial institution or insurer deciding whether to grant a loan or underwrite coverage, you need hard, quantifiable information. The same goes if you are a farmer seeking to optimize how much water or fertilizer to apply to different parts of your spread; a manufacturer, transport or logistics provider pursuing preventive maintenance; or a retailer predicting customer churn. An ML algorithm that is fed relevant statistical data is likely to be more precise in its answer than an LLM, and therefore better suited for these use cases.
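
A minimal sketch of what that looks like for, say, churn prediction, trained here on synthetic data with invented feature names:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))             # e.g., tenure, monthly spend, support calls, usage
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) < 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("Churn probability for one customer:", clf.predict_proba(X_test[:1])[0, 1])
print("Holdout accuracy:", clf.score(X_test, y_test))
```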

On the other hand, if the goal is to generate marketing content, where general context rather than precision is needed, that is where generative AI comes in. But there are crossover points in each of these scenarios. For instance, if you want to embellish the farming, underwriting or customer churn models with plain-English explanations, or add supporting market trends data to the marketing content, it makes sense to pair ML algorithms with LLMs to present a composite answer. Likewise, if you are a financial institution that leverages ML for making loan decisions, you might also use a generative approach to comb past loan documents for violations of the policies governing those decisions.
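
One way to picture that pairing: wrap the classic model’s numeric output in an LLM-generated explanation. A minimal, hypothetical sketch, reusing a fitted scikit-learn model such as the churn classifier above, with `call_llm` again standing in for whatever LLM service is available:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM service or library you actually use."""
    raise NotImplementedError("Plug in your LLM provider here.")

def composite_churn_answer(model, features, feature_names) -> str:
    score = model.predict_proba([features])[0, 1]     # the precise, quantifiable part (classic ML)
    prompt = (
        f"A churn model scored this customer at {score:.0%} risk. "
        f"Inputs: {dict(zip(feature_names, features))}. "
        "Explain the risk in two sentences for an account manager."
    )
    return call_llm(prompt)                           # the contextual, natural-language part (gen AI)
```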

Of course, for all models, it is about the data, stupid. Generalized models such as ChatGPT, which comb the internet, reflect the general state of information and misinformation out in the wild. Even with more fine-tuned efforts, getting the data right for classic predictive models presents its own challenges around relevance, bias and skew.

The difference between the new breed of transformer models for gen AI and the classic algorithms of ML and DL comes down to the school of hard knocks: the business world has had more than a decade of practical experience with classic machine learning, which dwarfs its practical experience with LLMs.

But in time, we will develop more confidence in the transformers and LLMs that power gen AI. The portfolio of foundation models is growing, and with it the likelihood that enterprises will find alternatives more relevant to their domains than general-purpose models such as ChatGPT or Llama.

The same goes for organizations that carefully curate the training data feeding their generative models and take liberal advantage of retrieval-augmented generation (RAG) to ground responses in data that is current and relevant. In some cases, the move to curate data and models will be driven by data sovereignty laws. Better-designed models and better-curated data can reduce, although not eliminate, the incidence of hallucinations or intellectual property issues.
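
For reference, RAG in its simplest form is a retrieve-then-generate loop over that curated data. A minimal, hypothetical sketch, where `embed` and `call_llm` are placeholders for whatever embedding and LLM services are in use:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for an embedding model or service."""
    raise NotImplementedError("Plug in an embedding model here.")

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion endpoint."""
    raise NotImplementedError("Plug in your LLM provider here.")

def answer_with_rag(question: str, documents: list[str], top_k: int = 3) -> str:
    # Retrieve: rank curated documents by cosine similarity to the question.
    doc_vecs = np.stack([embed(d) for d in documents])
    q_vec = embed(question)
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(documents[i] for i in sims.argsort()[::-1][:top_k])
    # Augment and generate: ground the model's answer in the retrieved context.
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)
```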

The bottom line is that there is more to AI than generative AI. Generative AI might captivate the boardroom, but don’t sell classic models short: Gen AI is not going to replace “classic” AI. Behind every sexy natural language model will likely be an ensemble of classic machine learning models doing much of the heavy lifting under the hood.

Tony Baer is principal at dbInsight LLC, which provides an independent view on the database and analytics technology ecosystem. Baer is an industry expert in extending data management practices, governance and advanced analytics to address the desire of enterprises to generate meaningful value from data-driven transformation. He wrote this article for SiliconANGLE.

Image: Bing Image Creator
