AI developers to power a new generation of context-driven artificial intelligence
Spurred by recent advances in machine learning and artificial intelligence, context-aware intelligent assistants represent the new frontier of content search and discovery. Companies leverage unstructured data, such as photographs, videos, chat logs and documents, to make better, more informed business decisions and to automate processes, embedding humanlike capabilities inside automated workflows to expand what's possible in business and society.
When IBM opened up its Watson cognitive computing platform to developers last November, it launched an effort to let anybody with programming skills and an idea use the technology as a foundation for building apps that can both understand natural language and learn new things, expanding their knowledge base over time.
Watson represents a breakthrough in the field of artificial intelligence, and IBM’s opening up the platform indicates just how far AI has come.
Microsoft is trying to make AI more human
Imagine a machine that could help you refine or augment the way you approach new situations and solve challenges. Microsoft is looking at more advanced artificial intelligence for deployment not only on its own platforms but also in the real world.
Eric Horvitz, managing director of Microsoft's research unit, said in a recent interview that Microsoft is working on an AI platform that, as part of the solution, allows computers to look beyond the questions posed. The software giant is working on improvements that capture the context around speech to better understand questions. Context carries some critical signals: location, time of day, day of week, patterns of user behavior and current modality (are you driving, walking, sitting, in your office?).
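As a rough illustration of what such a context record might look like (this is not Microsoft's actual design; the field names and the modality rule below are hypothetical), the signals Horvitz lists could be bundled and consulted like this:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ContextSignals:
    """Illustrative bundle of the context signals described above."""
    location: str               # e.g. "office", "car", "home"
    timestamp: datetime         # yields time of day and day of week
    modality: str               # "driving", "walking", "sitting", ...
    recent_queries: list[str]   # a proxy for user behavior patterns

def prefers_voice(ctx: ContextSignals) -> bool:
    # A trivial rule: hands-busy modalities favor spoken responses.
    return ctx.modality in {"driving", "walking"}

ctx = ContextSignals("car", datetime.now(), "driving", ["traffic to airport"])
print(prefers_voice(ctx))  # True: answer aloud rather than on screen
```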
These deep learning techniques are finding their way into more and more Microsoft technologies, including Windows Phone, security, Xbox and other products. Microsoft hopes that its upcoming digital assistant, currently known as "Cortana," will carry out all of the aforementioned tasks. For example, Cortana could help a user find a hotel, a specific type of restaurant (say, Italian or Chinese), or even a parking spot. To do so, Cortana uses Microsoft's Satori technology, which the firm already uses in its Bing search engine.
In Xbox, the Kinect was also trained with machine learning. The fact that it can see you in the room even in poor lighting, and can track you as you wave your arms, is all done by software trained with machine learning. Microsoft also uses machine learning in security: the company arms its malware analysts with machine learning-driven technology, both to make them far more effective at searching through large volumes of data and to help autonomously identify malware authors.
Cognitive computing is the new frontier for developers
IBM's Watson cognitive computing innovation represents a new class of services, software and apps that analyze massive amounts of disparate data, improve by learning, and discover answers and insights to complex questions.
IBM's Watson supercomputer famously won the quiz show Jeopardy!, demonstrating how cognitive computing can make sense of vast amounts of unstructured information to deliver straightforward answers. Now IBM is making the same technology available to developers to bring cognitive computing to Internet applications in general.
The trend of making AI consumable through APIs, which IBM is advancing by opening Watson to developers, is an important one because machine learning is tricky. Not every developer can build machine learning systems from scratch: there are many libraries for writing machine learning code, and even a few deep learning libraries, but a hosted API puts these capabilities within reach of any programmer.
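To make "AI behind an API" concrete, here is a minimal sketch of calling a hosted question-answering service. The endpoint URL, authentication scheme and response shape are invented for this example; they are not the actual Watson API.

```python
import requests  # third-party HTTP library (pip install requests)

# Hypothetical question-answering endpoint; a real service such as
# Watson differs in URL, authentication and payload shape.
API_URL = "https://api.example.com/v1/question"

def ask(question: str, api_key: str) -> str:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"question": question},
        timeout=10,
    )
    resp.raise_for_status()
    # Assume the service returns a JSON body like {"answer": "..."}.
    return resp.json()["answer"]

print(ask("What therapies target HER2-positive tumors?", "MY_KEY"))
```

The point is that the developer writes ordinary application code; the learning, training and inference all happen behind the HTTP call.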
IBM will be creating a cloud-hosted marketplace that gives developers access to resources for building Watson-powered applications. The marketplace will include a developer toolkit, access to Watson's API and learning materials. IBM also plans to provide access to more than 500 of its own subject matter experts to assist with design and development.
“By sharing IBM Watson’s cognitive abilities with the world, we aim to fuel a new ecosystem that accelerates innovation, creativity and entrepreneurial spirit,” said Michael Rhodin, Senior Vice President, IBM Software Solutions Group. “With this move, IBM is taking a bold step to advance the new era of cognitive computing. Together with our partners we’ll spark a new class of applications that will learn from experience, improve with each interaction and outcome, and assist in solving the most complex questions facing the industry and society.”
Big Blue will provide three new offerings to developers: IBM Watson Discovery Advisor, IBM Watson Analytics and IBM Watson Explorer. Watson Analytics will let users seek out the best answers based on quantitative data from databases and qualitative data from text. The Watson Discovery Advisor service will help reduce the time researchers need to reach conclusions. Meanwhile, IBM Watson Explorer will provide a unified view of enterprise information.
Watson represents a major step forward in the trend of using natural language to make sense of large bodies of text and structured data. By offering Watson services to the broader development community, IBM hopes to enable a much wider set of applications. IBM works with developers to help them understand Watson's capabilities, and it offers a sandbox where developers can upload content and immediately interact with the data. The developer creates the app, and the cognitive service becomes one of its capabilities.
IBM plans to target different industries with differently trained Watson instances. For example, the health care instance of Watson is being trained on different information from that used in banking.
Google building an AI army
Early this year, Google bought DeepMind, a London-based artificial intelligence company that specializes in games and e-commerce algorithms. Founded in 2011, the DeepMind team is considered by experts to be one of the most valuable and innovative in the AI field. The company draws on machine learning and neuroscience to build learning algorithms that can be applied in a variety of fields.
DeepMind has experience applying artificial intelligence to e-commerce and gaming, so Google may have moved in this direction partly to strengthen its own online shopping system.
Google CEO Larry Page told the TED conference that Google search is an effort to build the world's best personal assistant, one able to predict what you need or want even before you ask.
“We’re still very much at the early stages of [search], which is totally crazy. Thinking about where we’re going — computing is a mess. The computer doesn’t know where you are, what you’re doing, what you know. A lot of what we’re doing is making your devices work and understand their context,” he said.
“Looking at search and really trying to understand everything, and trying to make computers not clunky and try to understand you, voice was really important. We started doing machine learning to improve that. We started looking at YouTube. We ran machine learning on YouTube and it discovered cats by itself. That’s an important concept. What’s really amazing about DeepMind is they’re learning things in their own supervised way, starting with [how to play] video games. Learning how to do that automatically.”
Google expects deep learning to help developers create new types of products that can understand and learn from the images, text and video clogging the Web. DeepMind's expertise is in an area called reinforcement learning, which involves getting computers to learn about the world from very limited feedback. Such programs could have important commercial applications, including improving search engines, video recognition, speech recognition and translation, security, social networking and e-commerce. This could help Google developers improve existing products, such as the company's self-driving car, and also allow it to build next-generation AI products.
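Reinforcement learning is easiest to see in miniature. The sketch below is plain tabular Q-learning on a toy corridor: the agent receives feedback only on reaching the goal, yet still learns which way to move from every cell. The environment, reward and hyperparameters are illustrative and are not DeepMind's actual systems.

```python
import random

N_STATES = 6            # corridor cells 0..5; reward only at cell 5
ACTIONS = [-1, +1]      # step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # the only feedback ever given
        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best next action.
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy is to step right (+1) in every cell.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```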
Google already has a semi-secret lab, Google X, where scientists are encouraged to pursue solutions so futuristic they sound like they come from a science-fiction novel.
AI to solve smaller, real-world problems
Facebook, with its DeepFace project, promises to solve the problem of facial recognition well enough to improve features such as photo tagging. Facebook developers and researchers are working on a facial recognition system with 97.25 percent accuracy, a mere 0.28 percentage points below human performance.
The development of DeepFace represents a significant advancement over previous facial recognition systems. This is due to the new approach to artificial intelligence known as “deep learning,” in which networks of simulated neurons learn to recognize patterns in large amounts of data.
DeepFace works in four stages: detect, align, represent and classify. Essentially, the system recognizes the small features that make up an object and then puts them together to create a map of the whole. DeepFace takes the align and represent stages one step further than earlier systems.
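In code, that four-stage flow is a simple composition of functions. Everything below is a hypothetical stand-in (random images, a fake detector, a flattened-pixel "representation") meant only to show the structure of the pipeline, not Facebook's implementation.

```python
import numpy as np

def detect(image: np.ndarray) -> np.ndarray:
    """Find the face region (here: a fake center crop)."""
    h, w = image.shape[:2]
    return image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

def align(face: np.ndarray) -> np.ndarray:
    """Warp the face to a canonical frontal pose (identity stand-in)."""
    return face

def represent(face: np.ndarray) -> np.ndarray:
    """Map the aligned face to a feature vector; in DeepFace this is a
    deep neural network, mocked here as a flattened slice of pixels."""
    return face.reshape(-1)[:128].astype(float)

def classify(vec_a: np.ndarray, vec_b: np.ndarray) -> bool:
    """Decide 'same person?' by thresholding cosine similarity."""
    cos = vec_a @ vec_b / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
    return cos > 0.8

img1 = np.random.rand(160, 160, 3)
img2 = np.random.rand(160, 160, 3)
same = classify(represent(align(detect(img1))),
                represent(align(detect(img2))))
print("same person?", same)
```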