UPDATED 15:15 EST / APRIL 25 2014

City of Paradigm: The business of artificial intelligence

Below is an excerpt from Kyt Dotson’s novel City of Paradigm, a science fiction tale about a fictional 21st century city situated somewhere in California. Each City of Paradigm column is two parts: an excerpt from the novel and an editorial describing the real-world context of the technologies described in the story. Readers may find more City of Paradigm here on SiliconANGLE.


The chair creaked as Jimmy leaned back and looked up at the ceiling above him. The drop tiles above his workstation hadn’t been replaced for months and still retained the scars of pencils playfully tossed to stick into them. The storm outside rattled the windows of the small office, the wind pecked gently at the glass as the rumble of thunder in the distance interrupted the rapid beat of the techno music playing in endless loops from his workstation.

Another late night in an empty office, and although the coffee in Jimmy’s cozy-cubicle-with-a-view smelled strong, the steam rising from the mug had vanished many spreadsheets ago.

A voice broke through the crush of numbers dancing in his visual cortex and he looked up from the bright screen, wincing at his momentary blindness.

“If you like this music you’d really love what I did in my self-named album, Hacker Among Songwriters.” The voice, made uncanny by multiple-pass filtering, percolated up from a nearby workstation. Jimmy was alone in the office, nobody sat at that workstation anyway. The voice belonged to OC Lollipop—a well-respected idoru performer licensed to Angular Horizon to work not just on the music for Ex Astris but also provide expert system support for in-game assets.

He shifted in his seat and lowered the volume so he could hear better; then he leaned back and looked in the direction of the voice even though he couldn’t see the screen. “Why do you always use that computer?” Jimmy asked. “You could just Skype me on this machine.”

“In the Nippon office at Nintendo my handler asked that I use my own workstation,” she said. “I was told, so that other employees would know where to ‘find me.’ My lack of bodily presence disrupted communication in the office. I saw this one was unused, but if you feel more comfortable…”

A slight blip sounded from that workstation and a Skype call rang on Jimmy’s machine. He clicked it on and a window opened to reveal the bust of a tall Japanese woman wearing a sci-fi uniform covered in metallic ribbing and wreathed in a nimbus of small lights—it was a model of one of the main storyline characters from Ex Astris. Lollipop looked as if she were in her early 20s on screen, with features modeled after a Japanese woman of that age but with some features altered to appeal to Western audiences. Her eyes also looked slightly larger than normal human anatomy would permit, and the irises sported a rainbow of colors pin-wheeled around wide pupils. Her hair had been done up into a bun in back with a pair of chopsticks at acute angles—between them flickered an advertisement for Ex Astris: The Last Frontier.

“By the way, I own your entire Hacker Among Songwriters album set,” Jimmy said. “I was a fan of yours before I joined.”

“I am honored,” Lollipop said and added a cheerful squeak at the end. “You are working very late today, Jimmy Ito, is there something I can help you with?”

He nodded at the tiny green webcam light above his monitor. “Could you tell me about the current customer interaction statistics? I want something to report to Ms. Sterling in the morning for our weekly development meeting. Feel free to round.”

“Sure!” she said. “There are three-thousand, five-hundred and sixty-seven beta players currently active in the past twenty-four hours. Of those, all have interacted directly with story assets. Sixty-three point two percent of those players interact only to progress quest lines or to perform ordinary functions. Assets engaged in circumstantial behavior have been triggered over three thousand times today by one-thousand, three-hundred and thirteen players. That represents an increase of about twenty percent per user since the last content update.”

“Is there any core category of player that’s driving this increase?” Jimmy tapped at his computer, copying and pasting the numbers that Lollipop repeated in the text chat into his own notes.

“The majority of users accessing and interacting in circumstantial behavior assets are marked as interested in RP. They represent almost ninety percent of the twenty-percent increase. The interactions seem to be split evenly between Earth, Alpha Centauri, Zeta Aquilae, and deep-space.”

“Of course,” Jimmy said to himself. “Role-players would be the demographic we would expect. Could you run my age-demographic market regression analysis against the data you just summarized to me and e-mail me the results? Thank you.”

Excerpt from The City of Paradigm, novel by Kyt Dotson, © 2014

photo credit: Simon & His Camera via photopin cc

In the story, OC Lollipop is presented as an artificial intelligence (AI) who has been integrated with the video game Ex Astris. As a tool, AI could be useful in a large number of situations where a human interface is involved, especially because AI provides something that computers currently do poorly: understanding human users. The City of Paradigm stories present a near-future guess about how AI might be used, but right now no AI is sophisticated enough to resemble Lollipop or the others presented in science fiction.

When I designed OC Lollipop as a character, I thought about the Nipponese phenomenon of the idoru, or virtual female pop star, a word and phenomenon also borrowed by William Gibson for his book Idoru, in which he writes about an AI who is a singer, songwriter, and pop star. Lollipop is licensed to the company that runs Ex Astris in the role of AI-as-a-service, an homage to the current trend of moving otherwise in-house processes to the -as-a-service model for platforms, software, and more.

The type of AI present in everyday culture today is a bit less amazing (and less personal) than a sentient entity like OC Lollipop, who could write and perform her own music. Instead it appears in the form of expert agents, also known as virtual assistants, and in the fashion of powerful pattern-matching, context-aware systems that arrive at apparently thoughtful conclusions from volumes of data.

To best outline the role of AI in modern culture, I have chosen three good examples of its use: IBM’s Watson, Apple’s Siri and Microsoft’s Cortana, and Moon Collider’s video game AI middleware.

photo credit: Apple Inc.

Siri and Cortana

Humans have a tendency to automatically ascribe human qualities to non-human entities, a process known as anthropomorphism. People sometimes catch themselves begging their car to start, or threatening their computer when web pages load too slowly; the tendency is built into the human psyche: it’s simply easier to relate to something when there’s an intelligence to interact with (even if it’s entirely pretend).

When it comes to interfaces, software developers have sought ways to use this tendency to help humans relate better to software and machines. To this end, the virtual assistant has slowly become more mainstream by providing a soothing voice and a sense that the software is “listening” and can understand.

To bridge this gap, Apple unveiled Siri and Microsoft developed Cortana.

Siri was “born” on October 4, 2011, when Apple introduced the iPhone 4S with a beta implementation of her software. The technology began as a spin-off from the SRI International Artificial Intelligence Center and an offshoot of the DARPA-funded CALO project; the company behind it was co-founded by CEO Dag Kittlaus, VP of Engineering Adam Cheyer and CTO Tom Gruber.

What Siri does is place a pleasant voice over an expert system that can process questions spoken by the customer into computer actions. She is essentially a highly advanced human-computer interface designed to make consumers comfortable speaking to her as if she were another human being; in a society already primed to accept voices coming out of machines, this is not a stretch.

Siri runs only on iOS devices and is published by Apple. She is essentially voice-recognition software built on top of a context-sensitive engine that takes user questions and responds to them by delegating tasks to web services. Those tasks represent the bulk of what makes her a powerful virtual assistant, but the developers have also built in a large number of context-sensitive responses and conversational replies that make her seem more human.
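The pipeline described above (recognized speech feeding a context engine that delegates tasks to web services) can be sketched as a simple intent-dispatch loop. The intent names and stubbed services below are hypothetical illustrations, not Apple’s actual implementation:

```python
# A minimal sketch of the delegate-to-web-services pattern: a toy
# "context engine" maps a question to an intent, and each intent is
# handled by a stubbed service. All names here are hypothetical.

def parse_intent(utterance):
    """Map keywords in a spoken question to an (intent, params) pair."""
    text = utterance.lower()
    if "weather" in text:
        return ("weather", {})
    if "restaurant" in text:
        return ("find_place", {"category": "restaurant"})
    return ("smalltalk", {"utterance": utterance})

# Each intent delegates to a "web service" (stubbed here as functions).
SERVICES = {
    "weather": lambda params: "It is 72°F and sunny.",
    "find_place": lambda params: f"Here are nearby {params['category']}s.",
    "smalltalk": lambda params: "I'm sorry, I don't understand.",
}

def respond(utterance):
    intent, params = parse_intent(utterance)
    return SERVICES[intent](params)
```

A real assistant adds speech recognition in front of this loop and far richer language parsing, but the shape, parse to an intent and dispatch to a service, is the same.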

Talking to Siri can be entertaining as well as educational. As often as someone might ask Siri, “Where is the closest restaurant?” she has also made news headlines by dissing the movie Her, starring Scarlett Johansson, who even responded to the AI’s opinion of her movie.

Microsoft named its Siri competitor after an AI character from Bungie’s video game series Halo, of course, which was also published by Microsoft. In Halo, Cortana is represented by a blue-glowing holographic woman who assists and guides the player through the game and provides backstory.

Very little can be said about Microsoft’s Cortana because the software is just now hitting the market (currently as a preview for US users). Cortana follows the same vector as Siri, runs only on Microsoft’s Windows Phone OS, and provides many of the same features, right down to the female-voiced presentation. She can respond to user questions and commands, and even control the phone: setting reminders and appointments, or turning call blocking on and off.

photo credit: IBM

Watson

Where Siri and Cortana provide a “face” for user interfaces to individuals, IBM’s Watson delivers the other side of the expert agent coin by deriving context not just from questions asked but from patterns in data. With Watson, IBM sought to build a software system that could not only understand the nuance of natural language but also process bulk amounts of data and obtain human-like insights from gathered information, including forming and testing hypotheses to build a better contextual model.

This places Watson much closer to actual artificial intelligence than either Siri or Cortana.

Watson represents a system that could provide a foundation for a future of “cognitive apps” or applications capable of not just processing and analyzing information but providing the ability to derive human-like insights and context from natural language and unstructured data.

In order to show off how powerful a cognitive computing system could be, IBM presented Watson to the world by pitting it against humans on the game show Jeopardy! in February 2011. While Watson did not perform without flaws, the appearance did show exactly how powerful a cognitive engine could be for providing intuitive, contextual answers based on a pool of data and a natural language question.

For the purpose of the City of Paradigm excerpt, Watson would be an excellent stand-in for what OC Lollipop is being asked to do in the Ex Astris universe. In the excerpt, Lollipop is acting as a go-between similar to Siri or Cortana by providing a human-like interface to NPCs (Non-Player Characters) in the game to make them appear more lifelike or similar to other humans; but by responding to questions with personality instead of a flow-chart dialogue tree she is acting more like Watson in giving context-driven answers.

IBM’s Watson provides an excellent prototype for not just interacting at a human level but acting or predicting like a human. IBM and Fluid Inc. have been developing Watson into the backend for an expert agent that could shop for you, something that a personality-based AI like Lollipop could do by acting as a secretary, predicting the needs and wants of an individual. Watson has also been put forward as a potential healthcare analyst, being more-doctor-than-doctor by analyzing unstructured data from medical records and statistics to provide valuable predictive information on illnesses and simplify diagnosis.

photo credit: Cloud Imperium Games

AI and video games

Siri and Cortana fill a niche for human-machine interaction at the “listener” level, and IBM’s Watson fills the role of an “understanding” cognitive intelligence, but both of these technologies simulate a singular intelligence for answers or interface. Video games and virtual social environments present a different type of AI challenge: artificial intelligence makers must think about simulating the behaviors of multiple individuals, or giving that humanlike appearance to entire social spaces, cities, or worlds.

When it comes to gaming, AI opponents (or allies) can often become the calling card of a particular brand or genre. Gaming pushed early AI through sets of behaviors (often flow diagrams reacting to the player) that would act and react with the player to make the game feel more realistic. Often, gamers would even learn how to exploit the AI’s shortcomings to cheat or game the game. A bad opponent or ally AI can ruin a gaming experience by reducing the game’s challenge or destroying immersion.
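The flow-diagram behavior sets described above are commonly implemented as finite state machines: the character holds a current state and transitions on events from the game world. A minimal sketch with a hypothetical guard NPC (states and events are illustrative, not from any particular engine):

```python
# A finite-state-machine sketch of "flow diagram" game AI: a guard NPC
# reacts to player events by moving between a small set of states.

TRANSITIONS = {
    ("patrol", "sees_player"): "chase",
    ("chase", "player_in_range"): "attack",
    ("chase", "lost_player"): "search",
    ("attack", "lost_player"): "search",
    ("search", "sees_player"): "chase",
    ("search", "timeout"): "patrol",
}

class GuardAI:
    def __init__(self):
        self.state = "patrol"

    def on_event(self, event):
        # Stay in the current state if no transition matches the event.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

The exploits gamers find usually live in this table: if no transition covers a situation, the NPC simply keeps doing what it was doing, which is exactly the "stare blankly" failure mode.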

Recent games have started to think harder about how AI characters behave and how they interact with players. A recent addition to 2K Games’ BioShock series, BioShock Infinite, introduced Elizabeth, a non-combatant AI-controlled ally who accompanies the player through the game. Having a non-combatant in a game built around shooting bad guys could become a giant hassle for a player, except that Elizabeth could hold her own, stay out of the way, and even point out opponents or supply ammunition and items to the player. Naughty Dog’s The Last of Us gave us Ellie, another buddy-AI that managed to add to the gameplay rather than detract from it.

Video game developers have a long list of tools that they can use to implement AI in games, so to get an example and some industry insight SiliconANGLE spoke to Moon Collider, the developer of the AI middleware planned for use in Cloud Imperium’s upcoming game Star Citizen.

Star Citizen is a space-based massively multiplayer online role-playing game (MMORPG) that will not only house potentially thousands of players but also simulate space battles and ground battles between players and AI opponents. Any AI system that Star Citizen runs would have to model the behavior of multiple entities at once and give them lifelike behavior in and out of combat.

Founder and CEO of Moon Collider Matthew Jack explained that in video games it’s most important that AI appear smart. This hearkens back to why the AI from BioShock Infinite and The Last of Us made such a splash in the community. AIs for video games also have a leg up on the traditional expectations for artificial intelligence: because video games take place in simulated worlds, the AI has a profound amount of access to the world it exists in.

AIs in games spend a great deal of time analyzing the virtual world and making decisions based on human actions in that world and on environmental changes. In a spaceport, for example, with numerous people walking around and spacecraft landing and taking off, a pit crew might meander about idle until a ship lands, then move to interact with it. Depending on the size and type of ship, the pit crew would behave differently, to say nothing of what happens if the player exits the ship and stands in the path of the crew.

With a standard video game behavior set, blocking the path of a crewman trying to reach the ship could cause him to stop and stare blankly (or walk through the player). In both situations the crewman does not look very smart. An AI-controlled character, however, could predict a path around the player and therefore appear lifelike and “smart.”
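That kind of replanning is classically done with grid search: treat the player’s tile as blocked and find a route around it. A toy breadth-first-search version follows; the grid, coordinates, and blocking tile are illustrative values, not any engine’s real API:

```python
from collections import deque

# Toy grid pathfinding: a crewman replans around a blocking player
# instead of stopping or walking through them.

def find_path(start, goal, blocked, width=5, height=5):
    """Breadth-first search on a grid; returns a list of (x, y) steps."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Walk the breadcrumbs back to the start, then reverse.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < width and 0 <= ny < height
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = current
                frontier.append(nxt)
    return []  # no route to the ship

# The player stands at (2, 0), directly between crewman and ship.
path = find_path(start=(0, 0), goal=(4, 0), blocked={(2, 0)})
```

Production middleware uses far more sophisticated search (A* over navigation meshes, with dynamic obstacles), but the visible result is the same: the crewman steps around the player rather than freezing.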

But, what about a system similar to what is presented in the story excerpt with OC Lollipop providing the personality background and dialogue for NPCs in a virtual world? Jack says that for the current incarnation of AI the biggest challenge is still contextual awareness and providing “real-sounding” voice recordings.

Not many MMO games can be entirely text-based anymore, as players look for more interactive and immersive gaming experiences. An AI holding open-ended conversations would need a vast database of recorded dialogue to choose from, and with even a few thousand players that database could be depleted (or become repetitive) in short order. As a result, most games use dialogue trees to strictly limit the number of assets needed.
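A dialogue tree is just a branching data structure: each node pairs one NPC line with a fixed set of player replies, which is exactly what bounds the number of voice assets a game must record. A toy, hypothetical example:

```python
# A toy dialogue tree: every NPC line and player choice is authored in
# advance, so the voice-asset count is bounded by the tree's size, not
# by the length of any conversation. Lines here are hypothetical.

DIALOGUE = {
    "greet": {
        "npc": "Welcome to the spaceport. Need something?",
        "choices": {"Where can I refuel?": "refuel",
                    "Just passing through.": "bye"},
    },
    "refuel": {
        "npc": "Pad three, past the cargo cranes.",
        "choices": {"Thanks.": "bye"},
    },
    "bye": {"npc": "Safe travels, pilot.", "choices": {}},  # leaf node
}

def talk(node, pick):
    """Advance the conversation by picking one of the offered replies."""
    return DIALOGUE[node]["choices"][pick]
```

An AI like the fictional Lollipop would replace this fixed structure with generated responses, which is precisely why the voice-recording problem Jack mentions becomes the bottleneck.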

photo credit: A Health Blog via photopin cc

AI: the speculative now

The various industries that would find AI as the go-to technology are still determining how to make use of the current implementations.

Siri and Cortana may not be expert systems in themselves, but they provide a contextual interface for human-like interaction with the underlying machine. While not quite the AI of the future, this is a step forward from pressing buttons. Speech-context systems make talking to technology easier and fit nicely into smart TVs, smart homes, and other places where people already talk casually to unresponsive machines.

IBM’s Watson, delivering a “cognitive engine” capable of gathering insights from large swaths of natural language and unstructured data, makes a huge step toward an AI like OC Lollipop, but Watson lacks personality. Watson is already expected to see use in the healthcare industry to help predict and diagnose medical problems, and cognitive systems have implications across many industries.

Moon Collider’s AI middleware does a great job of producing simulated people for multiplayer video games, and from what we’ve seen in BioShock Infinite and The Last of Us, simulated people are getting better at fooling us. Immersive games are entertainment, and AI could make social games even more of a social experience by providing more interactivity.

The straight-up, talk-to-you, self-aware, introspective artificial intelligence of science fiction may still be years away; but the progress of AI systems to date has shown an interesting variety of solutions to human-technology challenges. As the next few years roll by, we will most likely see more AI systems implemented.

