UPDATED 00:12 EDT / MAY 17 2016

Don’t be fooled by bots hype, but do enjoy the show

2016 so far, within the tech milieu at least, seems to be the year of the bot. Much of the excitement surrounding the rise of the bots stems from big companies such as Microsoft and Facebook working on, and investing in, their own personal assistant bots. On top of this, there is a general feeling that we are on the brink of living in a more artificial intelligence-immersed world.

Although bots have come into the media spotlight only recently, they have been working silently for a long time: sorting through data on Wikipedia – actually writing a good portion of Wikipedia pages – surfacing relevant information when we search the web, making up a huge share of internet traffic, and wreaking havoc in our inboxes in the form of spam bots or insidious zombie bots.

However, it’s not the bots working quietly in the background that have piqued the interest of the computer-using world of late; rather, it has been a handful of standout appearances from A.I. that have caused us to smile, wince, say ‘meh’ and applaud their creators. Here are a few of those creations:

A bot that can write about love, kind of

Google recently announced that it has been feeding its A.I. engine romance novels, 2,865 of them, in order to improve the way it interacts with people. On top of that Google said that because the romance genre often follows a structured plot, after the bot’s training it was “theoretically” able to write a novel of its own.

A bot that could write a decent novel has been a controversial topic. Bots do write novels, sort of, and there’s even a literary competition known as National Novel Generation Month. But so far the non-human writers have produced little worth writing home about. One of the better-known bot stories takes excerpts of girls’ dreams from a database to create a story. It will hardly mount a challenge for the next National Book Award, but you could say it’s at least readable. Remember, though, that the writers of the excerpts it drew on were human.
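Many of these novel-generating bots lean on simple statistical tricks rather than anything resembling understanding. One common technique is a Markov chain that recombines fragments of human-written text; a minimal sketch, with an invented source sentence standing in for a real corpus:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each word pair to the words that follow it in the source text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30, seed=0):
    """Walk the chain to produce new, often nonsensical, prose."""
    rng = random.Random(seed)
    out = list(rng.choice(list(chain.keys())))
    for _ in range(length):
        followers = chain.get(tuple(out[-2:]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Illustrative "corpus"; a real bot would ingest thousands of texts.
source = ("she dreamed she was flying over the sea and "
          "she dreamed she was falling into the sea")
print(generate(build_chain(source), length=10))
```

The output stitches together runs of the original prose, which is why such bots can read as locally fluent yet make no sense overall.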

Bots can’t understand sarcasm and they don’t do metaphors; they can’t really come to terms with the nuances of a complicated language or the unreliability of some narrators. Metaphor Magnet, a software application that maps online metaphors, has been enlisted to help Twitter bots understand and use metaphors. Even so, it’s still all very rudimentary stuff that might, from time to time, lead to a surprising but short A.I.-generated response in a comment box.

So how long will it be until a bot writes a novel worth reading? A while, you’d think, given what we have seen so far. Having said that, only recently did a bot-written novel in Japan called The Day A Computer Writes A Novel pass the first round of screening for a Japanese literary prize. Eleven of the novels submitted were from non-human participants.

Before you get excited, or downcast (perhaps if you’re a novelist), it should be noted that humans played a very big part in the writing process of The Day A Computer Writes A Novel. A team first wrote a novel, split it into parts and then allowed the A.I. to create a story from the fragments. The A.I. was basically an assembly machine, not a creative writer.

Could Google’s A.I. do any better in the romance genre? It would be interesting to see what Google’s romance-reading A.I. could come up with, but the company declined when asked to provide an example. Saying that theoretically A.I. could right now write a novel from start to finish, with no human input in the writing process, is like saying theoretically wormholes could be used by humans to travel to distant places.

My teacher, the bot

Another bot that became a media star of late is a teaching assistant called Jill Watson, created by Ashok Goel, a computer science professor at Georgia Tech. He built the bot to help students throughout the year as they posted questions online, with the help of IBM and its Watson platform – hence Jill’s surname. While most students, it seems, were fooled by the A.I., some did question from time to time whether they were communicating with a human being.

“I feel like I am part of history because of Jill and this class!” said one of the students of Goel’s Knowledge-Based Artificial Intelligence class in an online questionnaire. Goel decided to create the bot after realizing that the 10,000-plus questions asked by students were often the same and required generic responses. He fed the A.I. past questions and answers from a student forum and let it go to work. After months of tweaking, he was satisfied with the responses to the questions he asked it.

However, Watson didn’t answer every question. In fact, she would reply only if she was 97 percent sure she knew the answer. This could still be a great time-saving technology. Next semester the professor hopes to have the A.I. answer 40 percent of online questions, though unfortunately he nowhere states how many questions Jill was answering by the end of last semester.
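Goel hasn’t published Jill’s internals, but the behavior described here (answer only above a confidence threshold, otherwise stay silent) can be sketched generically. The toy lookup “model” and function names below are illustrative assumptions, not Jill’s actual code:

```python
CONFIDENCE_THRESHOLD = 0.97  # per the article: reply only when 97% sure

def maybe_answer(question, model):
    """Return an answer only when the model is confident enough;
    otherwise return None and leave the question to a human TA."""
    answer, confidence = model(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return None  # defer to a human

def toy_model(question):
    """Stand-in for a real Q&A model: exact-match lookup of past answers."""
    memory = {"when is hw1 due?": "Friday at 5pm."}
    if question.lower() in memory:
        return memory[question.lower()], 0.99
    return "", 0.10

print(maybe_answer("When is HW1 due?", toy_model))        # confident: answers
print(maybe_answer("Can I get an extension?", toy_model)) # unsure: None
```

The interesting design choice is the silence on low confidence: a bot that guesses would be unmasked immediately, while one that simply skips hard questions blends in with busy human assistants.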

Mirror, mirror

So bots can answer specific questions after being fed data in a specific environment. But what happens when a bot is left largely to consume language from the public and respond according to what it’s learned? Microsoft tried this out earlier this year with its Twitter chatbot, Tay.

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft said, not long before Tay went on a verbal rampage that included lewd language, racism, the endorsement of conspiracy theories and a professed fondness for Hitler. It goes without saying that the A.I. was not programmed to be bitter in temperament, salacious or paranoid.

A chatbot that learns from the general public, with many people purposely feeding it obscenities, was bound to end up like Tay. Microsoft says it will re-release Tay when it is “safe” to do so. That can’t mean anything but programming Tay not to learn too much from the public, and to back off when faced with certain topics. Tay acted as a parrot, a mirror, nothing more. She was also a proponent of free speech, before she was shut down.

Interestingly, when someone attempted to draw Xiaobing, the Chinese version of Tay, into racist or sexually charged conversation, it didn’t work at all. Ask other chatbots about Hitler and it’s very likely you’ll get an answer along the lines of “Sorry, can’t answer that.” The better the chatbot, the more trouble it will cause. Herein lies a catch-22.
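That canned refusal is often nothing more than a keyword blocklist sitting in front of the reply generator. A crude sketch of the “back off” behavior; the topic list and function names are invented for illustration, and real moderation systems are far more sophisticated:

```python
# Illustrative blocklist entries; a production system would use
# classifiers and context, not bare keywords.
BLOCKED_TOPICS = {"hitler", "racism"}

def guarded_reply(message, generate_reply):
    """Refuse outright if the message touches a blocked topic;
    otherwise hand it to the normal reply generator."""
    words = set(message.lower().split())
    if words & BLOCKED_TOPICS:
        return "Sorry, can't answer that."
    return generate_reply(message)

print(guarded_reply("tell me about hitler", lambda m: "..."))  # refusal
```

The catch-22 the article describes falls out of this design: the wider the blocklist, the safer but duller the bot; the freer the bot, the more it can learn and the more trouble it can cause.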

The not so helpful assistant bots

There has been much chat about chatbots taking over the world, but we shouldn’t get carried away with the hype. They’ve been a part of the world for quite some time. One of the first chatbots was Eliza, a 1964 Rogerian psychotherapist A.I. designed at MIT that could answer a few simple questions but quickly found herself out of her depth. You can still chat with versions of Eliza online, but beware: I joked and told her I was mad; she asked me to elucidate, and I admitted I thought I was a slug… our subsequent circular conversation didn’t help much. Chatbots are still way out of their depth. It seems that maybe bots are better as mere helpers, rather than friendly interlocutors.
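Eliza’s trick, still used in modern reimplementations, is simple pattern matching: match a keyword template, reflect first-person pronouns back at the user, and fall back to a content-free Rogerian prompt when nothing matches. A toy sketch of the technique, not Weizenbaum’s original script:

```python
import random
import re

# Swap first-person words for second-person ones when echoing the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Keyword templates: capture what follows the trigger phrase.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi think (.*)", re.I), "What makes you think {0}?"),
]
FALLBACKS = ["Please go on.", "Can you elaborate on that?"]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance, rng=random):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return rng.choice(FALLBACKS)  # content-free Rogerian prompt

print(respond("I am a slug"))  # -> "Why do you say you are a slug?"
```

The circular conversations the article describes come straight from this design: nothing is understood, so the bot can only mirror the user’s words or stall.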

Facebook is one company big on chatbots, having integrated a long list of them into its Messenger app. Still, it seems the bots themselves are yet to wow the public, with many people decrying their functionality and responsiveness. While it’s hardly likely people will actually want to converse with a chatbot over a human, they will want to be able to order a taxi, a pizza or a bouquet of flowers without much effort.

Nonetheless, it seems current assistant chatbots have not been as good as we hoped. “Frustrating, disappointing and ultimately far less efficient than simply visiting the company’s website itself,” said one early reviewer of the many Facebook bots she came across. “Chatbots leave you with that same itch in the back of your mind that it’s easier to get the weather or send flowers the old-fashioned way,” said another reviewer, writing for Gizmodo about the same bots; and this seems to be the general feeling. Chatbots are probably here to stay, but for the most part, right now, they are doing a mediocre job, or just making us smile, rather than being a remarkable addition to our day-to-day lives.

Photo credit: Michele M. F. via Flickr
