Ambient intelligence promises a world where your car, your home, and your office building anticipate your needs. Learn what’s happening now, including how people are evaluating the trade-offs between convenience and privacy.
Ray Bradbury and others envisioned a world in which human needs are anticipated by the surrounding environment. In the late 1990s, this vision was termed “ambient intelligence” (AmI). Since that time, innovators have continued to imagine scenarios in which technology is ubiquitous, more transparent, and more valuable to humans than it ever has been.
What the user interface will ultimately look like is a matter of debate. Some foresee a Minority Report scenario in which the surrounding environment itself adapts to individuals in context. Others anticipate a virtual personal assistant that serves as the interface to the always-on, interconnected world of things.
“Technology is supposed to work for us. The idea that we have to keep pulling out our phones and trying to coax simple things in our lives to do what we want them to do isn’t the vision we fell in love with while watching The Jetsons,” said Craig Macy, CEO of distributed real-time reasoning application provider Onstream, in an interview.
Of course, getting to the point where machines anticipate the needs of humans isn’t an easy task. Innovators are simultaneously working on different parts of the problem — including artificial intelligence (AI), the Internet of Things (IoT), power, the cloud, connectivity, integration, visual display technology, natural language processing (NLP), and interoperability, to name a few. Slowly but surely the technologies necessary to enable AmI are maturing and converging, ultimately paving the way to intelligent environments that can sense, anticipate, and react to individual human requirements or desires in real time.
Windows 95 and 98 architect Satoshi Nakajima is in the thick of this. As founder and CEO of Swipe, which transforms any plain digital document into media-rich content, and interim CEO of cloud-based middleware company UIEvolution, he’s constantly reimagining the future. The latter company, UIEvolution, provides connected device software and solutions to customers in the automotive, cruise, hospitality, and retail verticals. Shopping malls and airports are the next targets because they will need to deliver true AmI experiences.
“We call the services and software we provide car manufacturers like Toyota ‘intelligent,’ but it’s really not that intelligent. It’s navigation, music, and weather. We’re trying to predict behaviors and anticipate, but we’re not there yet,” said Nakajima.
Car navigation already operates in the cloud, so it is able to monitor and record what a user does. What it can’t do is anticipate where the driver will go, when, and why.
“I shouldn’t have to tell the navigation system I’m going to the gym on Monday morning. The system should know that, and if there are any traffic issues it should tell me,” said Nakajima.
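Behind a prediction like that could sit something as simple as a frequency model over logged trips. The sketch below is purely illustrative: the trip log format, field names, and lookup logic are assumptions for this example, not anything UIEvolution has described.

```python
from collections import Counter

# Hypothetical trip log recorded by a cloud-connected navigation
# system: (weekday, hour, destination) tuples.
trip_log = [
    ("Mon", 7, "gym"), ("Mon", 7, "gym"), ("Mon", 7, "office"),
    ("Tue", 8, "office"), ("Mon", 7, "gym"),
]

def predict_destination(weekday, hour, log):
    """Return the most frequent past destination for this time slot,
    or None if the slot has never been seen."""
    matches = Counter(d for w, h, d in log if w == weekday and h == hour)
    if not matches:
        return None
    return matches.most_common(1)[0][0]

print(predict_destination("Mon", 7, trip_log))  # -> gym
```

A real system would need far richer context (calendar, traffic, location history), but even this toy model captures the idea of the system knowing about the Monday-morning gym trip without being told.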
Instead of landing at an airport and firing up the Uber app to book a ride to a hotel, future systems will work together to ensure that happens automatically.
“Ten to fifteen years from now, I’ll somehow get a message in my ear that says, ‘Welcome to San Francisco, Mr. Nakajima. Please go to the curb after picking up your bag.’ When I get to the curb, [a self-driving] car will pick me up. When I get in, it will tell me we’re headed to the Marriott hotel,” said Nakajima.
Enabling that kind of experience would require airlines, hotels, and car companies to work together in new ways. However, the ecosystem would necessarily be broader than that. For example, the car could also anticipate that the passenger may be hungry, given the time of day, and would likely prefer certain types of food, which would influence restaurant or even dish suggestions.
Obviously, sharing information across previously distinct organizational boundaries raises privacy and security concerns, as well as questions about data ownership, all of which must be addressed.
Creating AmI Environments
The smart home is the classic AmI use case. Today, “intelligence” is thought of in terms of motion-sensitive lighting or apps that control automatic systems such as garage doors. The goal is to have systems and appliances that communicate with users, manufacturers, and each other proactively in a post-app environment.
“Ambient intelligence is missing pieces. It needs to be able to sense, reason, and act. We have this notion of sensing and acting, but the reasoning part is missing — the part where it senses people, knows what our preferences are, learns over time, and adapts to what’s going on,” said Onstream’s Macy.
Unlike traditional systems, AmI-enabled systems will be proactive rather than passive. If they’re implemented as envisioned, they’ll become so transparent that the services they provide become a reasonable expectation of everyday living. To get there, today’s rule-based systems must be replaced by adaptive systems that take advantage of AI, machine learning, NLP, and more.
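The sense-reason-act loop Macy describes, with fixed rules replaced by adaptation, can be sketched in miniature. Everything below — the thermostat scenario, the class name, the learning rate — is an invented toy example, not a description of any shipping system: rather than obeying a static rule (“set 21 °C at 6 p.m.”), the target drifts toward the temperatures the occupant actually chooses.

```python
class AdaptiveThermostat:
    """Toy sense-reason-act loop. The 'reasoning' step learns
    from the occupant's manual overrides instead of applying
    a hard-coded rule."""

    def __init__(self, initial_target=21.0, learning_rate=0.2):
        self.target = initial_target
        self.learning_rate = learning_rate

    def observe_override(self, chosen_temp):
        # Reason: nudge the target toward each observed preference.
        self.target += self.learning_rate * (chosen_temp - self.target)

    def act(self, sensed_temp):
        # Sense comes in as sensed_temp; act is the decision.
        if sensed_temp < self.target - 0.5:
            return "heat"
        if sensed_temp > self.target + 0.5:
            return "cool"
        return "idle"

t = AdaptiveThermostat()
for _ in range(10):           # occupant repeatedly chooses 23 °C
    t.observe_override(23.0)
print(round(t.target, 1))     # -> 22.8 (drifted from 21.0 toward 23.0)
print(t.act(20.0))            # -> heat
```

The interesting property is that nobody programmed “this occupant likes 23 °C” — the system inferred it, which is the “learns over time, and adapts” piece Macy says is missing today.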
Onstream is working with various manufacturers to enable smart commercial and industrial buildings. Macy sees the demand for such structures being driven by two factors: employee expectations and competitiveness. In the future, employees will expect their workplaces to behave as intelligently as their homes. In addition, the value of a property and its attractiveness to potential tenants will be influenced by its level of intelligence.
“Ninety percent of non-residential buildings are not intelligent. Millennials are going to get agitated because [their office buildings] don’t do what their homes do,” said Macy. “Are they going to pay for it? No, but the building owners will.”
As systems and devices start connecting to each other, their core value propositions may evolve. For example, when Onstream engages a potential partner, that partner must necessarily be open to possibilities it may not yet have anticipated.
“When we find a [potential partner], like a lock company, we say, ‘We want to have your lock as part of the package.’ Part of the incentive we have is, ‘Is the lock going to do more than you thought?’ By giving us access to it, it’s going to be able to do stuff it hasn’t done before,” said Macy.
The Use Cases Will Broaden
AmI in the manufacturing space is about what’s happening in that environment, such as preventing equipment failures. The same is true for assets and equipment in various vertical industries. However, the solutions are still point solutions designed to address a particular problem, such as ensuring oil flow through a pipeline.
“In the past, you’d have a lot of data points coming in and you ended up with a 15-page report. Then you’d wonder if all the devices really needed to be fixed or only 50 of them. [Ambient intelligence] enables you to act with more certainty,” said Jana Eggers, CEO of AI platform provider Nara Logics.
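That shift — from a 15-page report to a short, ranked list of devices that genuinely need attention — can be illustrated with a simple statistical filter. The pump fleet, readings, and threshold below are fabricated for illustration and are not a description of Nara Logics’s method.

```python
import statistics

# Hypothetical vibration readings (mm/s) from monitored pumps.
readings = {
    "pump-01": 2.1, "pump-02": 2.3, "pump-03": 9.8,
    "pump-04": 2.0, "pump-05": 2.2, "pump-06": 8.9,
}

def flag_outliers(data, z_threshold=1.0):
    """Flag devices whose reading sits more than z_threshold
    standard deviations above the fleet mean, so technicians
    see only the likely failures instead of every data point."""
    mean = statistics.mean(data.values())
    stdev = statistics.stdev(data.values())
    return sorted(
        name for name, value in data.items()
        if (value - mean) / stdev > z_threshold
    )

print(flag_outliers(readings))  # -> ['pump-03', 'pump-06']
```

Production systems use far more sophisticated models, but the principle is the same: the reasoning layer turns raw sensor streams into a small set of confident, actionable calls.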
Like others, Eggers foresees seamless experiences that adapt as humans move from their work lives to their home lives and vice versa.
“I’ll have my manufacturing information and my restaurant reservation, and those things will help me flow seamlessly as a human. That’s the way of the future,” she said.
There’s a Lot of Work to Do
Connecting the dots necessary to provide seamless, transparent, and persistent experiences in context is a grand vision that will take time to realize. Right now, the focus tends to be on individual use cases, including devices and services, even if the use case is as broad as a smart city. Ultimately, the true value of AmI depends on building a much larger and more intelligent ecosystem than exists today.
Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit.