Ambient Intelligence: Sensing a Future Mobile Revolution

Artificial intelligence finds its way into more aspects of human life every day. The future of technology is a sea of opportunity for artificial intelligence and application development, especially on mobile devices. In fact, we’re counting on it.

When we refer to artificial intelligence (AI), we now have a much simpler definition, something we need in order to truly push this technology into our everyday lives. One working definition is that an artificial intelligence system is one that can perform a task traditionally associated with human intelligence, such as recognizing speech or translating from one language to another.

When it comes to an optimal use of AI in the mobile realm, we’ll also have to consider ambient surroundings. The term Ambient Intelligence (AmI) was coined by the European Commission. AmI in smartphones describes an envisaged environment that is unobtrusive, interconnected, adaptable, dynamic, embedded, and intelligent. Four main capabilities are to be considered here: the ability to speak, the ability to make decisions using algorithms, data generation and collection, and the ability to act with others on our behalf.

AI in ambient data


Ask your smartphone for information about medicine that can help you feel better, taking multiple symptoms into account (common medical recommendations, no antibiotics), or whether to see a doctor; in some cases, it could even set a date with a doctor or “reserve” a place in a hospital emergency room, depending on the severity. Later, if some shopping is necessary, tell your smartphone to help make a list of what to buy, and then ask it for the nearest store that offers home delivery.

Another example is that AmI technological advances could provide access to digitalized documents, family photos, and films, regardless of location and equipment.

These are just two examples of what could be done through artificial intelligence in smartphones, and some companies are already working on better AI integration. Researchers at IBM created a machine called Watson that can filter a terabyte of data and automatically answer complicated questions in three to five seconds. A version of the software that runs Watson could be on a tablet computer within three to five years. The AI could analyze test results to make a diagnosis, or analyze market data in real time and recommend how to rebalance your investment portfolio, all from the comfort of a smartphone.

Ambient intelligence-based mobile devices have seen progress in the quality of sensors for things like gyroscopes, humidity, and temperature. In the future, these sensors may be integrated with more connected resources, such as cloud-based data retail outlets, cloud processing power and inference engines, or medical sensors embedded within handsets.

The market in question is large and growing fast. The total turnover of mobile voice recognition, one of thousands of artificial-intelligence applications, could reach $1 billion in 2017, up from $73 million today, according to ABI Research.

AT&T has devoted more than 1 million hours of research to developing AmI technology that can convert speech into text and deliver responses to spoken questions. The telecom is exploring ways to let people use voice commands to get directions while driving and to control appliances and home electronics, such as television sets.

Developing AI ambient apps


Mobile devices now have rather extensive capabilities. In addition to analyzing digital content, mobile devices can harness ambient data such as temperature, location, user movements, schedule, user habits and engagement. Developers are leveraging these new capabilities and sources of data to create more advanced apps.
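As a rough sketch of what harnessing ambient data can mean in code, the toy rules below fold a few sensor readings into a single context label. The field names, rules, and thresholds here are invented for illustration and don’t correspond to any platform’s real sensor API:

```python
from dataclasses import dataclass

@dataclass
class AmbientReading:
    """One snapshot of the ambient data a handset might expose."""
    temperature_c: float   # ambient temperature sensor
    lat: float             # location
    lon: float
    steps_last_hour: int   # movement / activity
    hour_of_day: int       # schedule context

def infer_context(reading: AmbientReading) -> str:
    """Very rough rule-based guess at what the user is doing."""
    if reading.steps_last_hour > 1000:
        return "exercising"
    if 0 <= reading.hour_of_day < 6:
        return "sleeping"
    if reading.temperature_c < 5:
        return "outdoors_cold"
    return "idle"
```

A real ambient app would replace these hand-written rules with a learned model, but the input shape — many weak sensor signals in, one context guess out — stays the same.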

The current crop of real-world artificial-intelligence apps is probably best represented by wearable electronics, like the rumored Apple iWatch or Google Glass, all hinting at the future of consumer tech. Then we have Google’s self-driving cars, which could help those who are incapable of driving and increase our productivity while traveling. Google Now is another consumer service that uses past search history to predict and display information such as local weather or flight info.
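Google Now’s actual models draw on far richer signals, but the basic idea of predicting from past searches can be sketched as a time-bucketed frequency count; the `(hour, topic)` history representation below is invented purely for illustration:

```python
from collections import Counter

def predict_cards(search_history, current_hour, top_n=2):
    """Rank the topics the user searches for most often around this hour.

    search_history: list of (hour, topic) pairs.  Only topics searched
    within +/- 1 hour of the current time are counted, so a user who
    checks the weather every morning sees a weather card at breakfast.
    """
    nearby = Counter(
        topic for hour, topic in search_history
        if abs(hour - current_hour) <= 1
    )
    return [topic for topic, _ in nearby.most_common(top_n)]
```

For example, a history dominated by 8 a.m. weather searches would put “weather” at the top of the morning predictions while ignoring evening recipe searches.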

Nuance Communications is working on software that extracts people’s interests, likes and dislikes from what they say on Facebook and Twitter, to provide more relevant search results. Then there’s Qualcomm’s context-awareness platform, which aims to power context-aware applications through an SDK that includes location-based selection, image recognition, and interest sensing.

Using video and voice chat capabilities similar to Skype, a new iPad app called MindMeld not only facilitates the conversation, but also adds pertinent photos or videos as it interprets what is being said. Tim Tuttle, the founder of MindMeld, explains that the app’s approach is to look at the past 10 minutes of activity and then anticipate what users might need in the next 10 seconds. Over time, MindMeld will become more intelligent at reading and aggregating ambient data.
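MindMeld’s internals aren’t public, but the sliding-window idea Tuttle describes, keeping the last ten minutes of conversation and surfacing its dominant terms as candidates for media lookup, can be sketched like this (the class and method names are hypothetical, not MindMeld’s API):

```python
from collections import Counter, deque

class ConversationWindow:
    """Keep roughly the last `window_s` seconds of transcript words and
    surface the terms that dominate it as candidate topics."""

    def __init__(self, window_s=600):
        self.window_s = window_s
        self.events = deque()  # (timestamp, word) in arrival order

    def hear(self, words, now):
        """Record words heard at time `now` (seconds)."""
        for w in words:
            self.events.append((now, w.lower()))

    def hot_topics(self, now, top_n=3):
        """Drop words older than the window, then rank what's left."""
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()
        counts = Counter(w for _, w in self.events)
        return [w for w, _ in counts.most_common(top_n)]
```

A production system would add stop-word filtering and phrase detection before fetching photos or videos for the top terms, but the core loop — decay old speech, rank recent speech — is this simple.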

The PlaceMe app, which sits in the background of your mobile phone, uses every sensor in your handset to track your activities, location and environment, and keeps a record of everywhere you’ve been. That means apps using PlaceMe APIs can precisely determine someone’s location because of the way the data is combined.

“Developers who are using [the SDK] are in the categories of dating, fitness and health apps that want to track your exercise and make recommendations, and shopping apps that make suggestions based on your location and your likes and favorites,” says Alvin La, founder of PlaceMe.
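PlaceMe’s actual data-combination technique isn’t documented here, but the general idea of fusing several weak signals (a GPS fix, visible Wi-Fi networks, motion state) into one place estimate can be sketched as follows; the function, score weights, and thresholds are all invented for illustration:

```python
def fuse_place(gps_fix, wifi_ssids, motion_state, known_places):
    """Combine weak signals into a best-guess place name.

    gps_fix: (lat, lon) tuple
    wifi_ssids: list of currently visible network names
    motion_state: e.g. "still" or "walking"
    known_places: {name: {"lat": .., "lon": .., "ssids": set(...)}}
    """
    best, best_score = None, 0.0
    for name, place in known_places.items():
        score = 0.0
        # crude proximity check: ~0.001 degrees is roughly 100 m
        if (abs(gps_fix[0] - place["lat"]) < 0.001
                and abs(gps_fix[1] - place["lon"]) < 0.001):
            score += 0.5
        # seeing a known Wi-Fi network is strong indoor evidence
        if place["ssids"] & set(wifi_ssids):
            score += 0.4
        # being stationary slightly favors any match over none
        if motion_state == "still":
            score += 0.1
        if score > best_score:
            best, best_score = name, score
    return best
```

The point of combining signals this way is that each one alone is unreliable (GPS drifts indoors, Wi-Fi names repeat), but their agreement is much harder to get wrong.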

Highlight is another app that scans for people around you and checks their profiles. If the app finds interesting people, such as someone you share mutual friends with, it will let you know everything it knows about them.

Future Perfect?


Recognition technology keeps getting better, and it is improving even without training data provided by the person being recognized. These technical insights have branched out into many different areas, and the advances are instructive for thinking about how intelligent systems need to work.

The scope for ambient intelligent services is much larger than the personal level. Anticipatory mobile computing has the potential to reach millions of users through commercial, public-safety, planning, forecasting and research, and health-monitoring services. There are several business opportunities to be explored, and AI guru Lars Hard, CTO and founder of Expertmaker, expects more creativity in this space, starting with the enterprise.

“I think that in the initial stages, of course it will be a different enterprise system–something selling products,” Hard says. “And I think we’ll see more creativity coming from this space down the road because interest is so huge for those that already have data.

“They may have digital products too, which is even better because you can model and play with it better, put it into a predictive model,” Hard continues. “A huge competitive edge is the quality on how you bring products to customers…by allowing AI to help create models and new user experience, you help with discovery and exploration. It’s enormously beneficial for everyone.”

The opportunities are immense: future apps will gain precision and relevance, become more personal, draw on many more information sources, link to other devices and apps, and offer more adaptive user experiences. All of these more sophisticated features require one or more AI technologies.

photo credit: Wi2_Photography via photopin cc

photo credit: Zavarykin Sergey via photopin cc

photo credit: Ed Yourdon via photopin cc

photo credit: Saad Faruque via photopin cc

About Saroj Kar

Saroj is a Staff Writer at SiliconANGLE covering DevOps, social, mobile and gaming news. If you have a story idea or tip, send it to @SiliconAngle on Twitter.