Sensors are being embedded in everything from industrial equipment to tennis rackets to refrigerators. Software UIs and functional designs are changing to make the most of sensors and sensor webs, but their potential has yet to be realized. That’s where you come in.
Sensors are changing UIs and human-to-device interactions in ways that were once imagined only in science fiction. As sensors become smaller and cheaper, they are being embedded into more product types with the goal of providing humans with higher levels of convenience and instant, at-a-glance intelligence.
“Before, a sensor might just blink,” said Xerox PARC scientist Oliver Brdizcka, who manages PARC’s Contextual Intelligence research area. “Now you can upload the data and do something big in the cloud. From a design perspective, it’s not just building a product that has one function tied to a sensor.”
Combining sensors opens up entirely new possibilities for products, software, and services. In fact, just combining two sensors can change a design’s effectiveness.
Long Le, a developer at iOS game developer Startled Monocle, built a fitness app that tracks users’ steps using an accelerometer. The problem was that it interpreted other movements as steps; steps could be “spoofed” with hand movements alone.
“The iPhone 4 included a gyroscope, which measured rotation,” said Le. “That allowed me to cross-reference the X and Y values with the rotational value, providing more accurate identification of users’ steps.”
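That kind of cross-check can be sketched in a few lines. This is a hedged illustration, not Le’s actual code: the sample format, threshold values, and peak-detection logic are all assumptions, and a production pedometer would filter and debounce the signal far more carefully.

```python
import math

def count_steps(samples, accel_peak=11.0, gyro_max=2.0):
    """Count steps from paired accelerometer/gyroscope samples.

    samples: list of (ax, ay, az, rotation_rate) tuples, where
    ax/ay/az are in m/s^2 and rotation_rate is in rad/s.
    A candidate step is an acceleration peak above accel_peak, and
    it only counts if the device is not rotating sharply
    (|rotation_rate| below gyro_max). The gyroscope check is what
    filters out hand-wave motions that fool an accelerometer alone.
    """
    steps = 0
    was_below = True  # re-armed once the acceleration drops back down
    for ax, ay, az, rot in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > accel_peak and was_below and abs(rot) < gyro_max:
            steps += 1
            was_below = False
        elif magnitude < accel_peak:
            was_below = True
    return steps
```

With steady walking samples this counts one step per acceleration peak, while the same peaks accompanied by large rotation rates (a waving hand) are rejected.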
Steps are just a starting point, however. Increasingly, fitness apps are able to discern the type of activity based on the user’s motion. For example, the Atlas fitness wristband uses inertial sensors to determine whether a user is doing a regular pushup or a triangle pushup. The inertial sensors use accelerometers to track linear acceleration and gyroscopes to track angular acceleration.
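A toy version of that kind of activity discrimination might look like the following. The heuristic, the feature, and the threshold are invented for illustration and say nothing about how Atlas actually classifies exercises:

```python
def classify_pushup(gyro_roll_samples, roll_range_threshold=0.8):
    """Toy activity classifier for one pushup rep.

    gyro_roll_samples: wrist roll angles (radians) over the rep.
    Hypothetical heuristic: a triangle pushup keeps the hands
    centered under the chest, so the wrist rolls through a smaller
    range than in a regular, wide-hands pushup. The threshold is
    illustrative only.
    """
    roll_range = max(gyro_roll_samples) - min(gyro_roll_samples)
    return "regular" if roll_range > roll_range_threshold else "triangle"
```

A real system would extract many such features from both sensors across the rep and feed them to a trained model rather than a single threshold, but the principle is the same: different movements leave different inertial signatures.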
Meanwhile, developers new to the Internet of Things (IoT) don’t have to do all the software design work on their own. Sensor-aware platforms are emerging. Relayr has an open source platform and an IoT starter kit to catalyze innovation. Its Wunderbar starter kit includes a Master module equipped with Bluetooth Low Energy (BLE) and low-energy Wi-Fi attached to six detachable smart modules, each of which has its own BLE (Beacon) processor. Its sensors include light, color, distance, temperature, humidity, remote control, accelerometer, and gyroscope. One sensor is yet to be publicly defined.
“Our founder said we need to create a hub that allows a Philips light bulb to connect to a Nike wristband, Nest, and the millions of maker things,” said Jackson Bond, CPO and co-founder of Relayr. “People are going to be creating combinations of things we haven’t even thought of yet.”
The Evolving User Interface
Developers have put a lot of effort into rich UIs, but with the rise of big data, looks aren’t everything. Performance and intelligent data matter. As always, when the target device changes, so does the thinking about user experience. With the IoT, many developers find themselves developing solutions for emerging device types instead of, or in addition to, their mobile apps.
“Let’s say you put something in someone’s hand like a thermometer that indicates the change in temperature. If they can touch it and change it rather than pointing and clicking, it becomes a physical thing they can program,” said Relayr’s Chief Engineer Paul Hopton.
The best UI may not be the prettiest; it may be the most intelligent.
“The UI is actually disappearing,” said PARC’s Brdizcka. “Right now if I want to access a service, I pull out my smartphone, switch it on, and look for the service. In the future, I’ll pull out my smartphone and it will have started the service I need that is relevant to the situation I am in at that moment.”
Wearables like Google Glass and the Pebble Watch have minimized UIs. That is not surprising given their form factors and the fact that they are smartphone extensions rather than smartphone replacements.
“It’s not a rich UI like you have on a touchscreen; it’s back to the basics,” said Brdizcka, who happens to be working on Google Glass projects at PARC. “It’s a backward-looking UI if you don’t have intelligence. If you have intelligence, then it’s the UI you need. Then you can minimize the options, providing the right ones at the right moment with a minimal amount of interaction to select what you want.”
Jeff Powers, CEO of computer vision solution provider Occipital, agrees.
“Sensor-based intelligence changes software because users don’t have to answer as many questions. You’re aware of those questions already,” Powers said.
Some think that rich UIs will continue on the smartphone with limited UIs available on smartphone peripherals. Not so when it comes to the Internet of Things, said Relayr’s Hopton.
“The Internet of Things isn’t about turning your smartphone into a giant remote control; it’s going to be about reducing user interaction and enhancing user scenarios,” said Hopton. “The credo is don’t make the user think: Just make it happen.”
Innovation is occurring at several levels that are already influencing the direction of software; one of them is the availability of ever-cheaper sensors.
“There will always be new sensors, new standards that come in so you have to have a framework that integrates different sensor types,” said PARC’s Brdizcka. “In your software framework you have to have some aspect of machine learning. You also need processing in the cloud and in the device. You have to think about what you will do in the cloud and what you will be caching.”
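Brdizcka’s two requirements, a framework that integrates new sensor types and a split between on-device and cloud processing, can be sketched roughly as follows. The sensor name, wire format, and threshold here are all made up for the example:

```python
from typing import Callable, Dict, List, Tuple

# Registry mapping a sensor type name to a parser for its raw payload.
# New sensor types plug in by registering a parser; nothing else changes.
PARSERS: Dict[str, Callable[[bytes], float]] = {}

def register(sensor_type: str):
    def wrap(fn):
        PARSERS[sensor_type] = fn
        return fn
    return wrap

@register("temperature")
def parse_temperature(raw: bytes) -> float:
    # Hypothetical wire format: hundredths of a degree, big-endian.
    return int.from_bytes(raw, "big") / 100.0

# Readings waiting to be batched up to the cloud.
cloud_queue: List[Tuple[str, float]] = []

def handle(sensor_type: str, raw: bytes, alarm_threshold: float = 30.0):
    """Parse a reading, react locally if urgent, else cache for the cloud."""
    value = PARSERS[sensor_type](raw)
    if value > alarm_threshold:
        return ("local-alert", value)         # handled on the device, immediately
    cloud_queue.append((sensor_type, value))  # everything else is batched
    return ("cached", value)
```

The design choice this illustrates is the one Brdizcka names: time-critical decisions stay on the device, while bulk data is cached and shipped to the cloud, where the heavier machine learning can run.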
One new sensor (although not a new sensor type) is Occipital’s Structure Sensor, which connects to a smartphone or tablet. It understands the geometry of the user’s environment, such as the walls, floor, ceiling, and furniture that make up a room.
“It senses all the objects around you, like a camera does, and their distance,” said Occipital’s Powers. “When you combine that information with the color from the camera, you can create a 3D model of your surroundings.”
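The math underneath that combination is the standard pinhole back-projection: each depth pixel maps to a 3D point, which can then be paired with the camera’s color at the same pixel. The intrinsics below are placeholder values, not the Structure Sensor’s actual calibration:

```python
def depth_to_point(u, v, depth_m, fx=570.0, fy=570.0, cx=320.0, cy=240.0):
    """Back-project one depth pixel (u, v) into a 3D point in meters.

    Standard pinhole camera model: fx/fy are focal lengths in pixels,
    (cx, cy) is the principal point. Values here are placeholders.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def colored_point(u, v, depth_m, rgb_image):
    """Pair the 3D point with the camera color at the same pixel,
    yielding one sample of a colored 3D model of the surroundings.
    Assumes the depth and color images are already aligned."""
    return depth_to_point(u, v, depth_m) + (rgb_image[v][u],)
```

Repeating this over every pixel of a depth frame produces a colored point cloud; stitching clouds from many viewpoints is what yields the full 3D model of a room.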
The technology is being used for home remodeling and redecorating but it can also be used to design interactions with the physical world. “We created a game called ‘Fetch’ that scans a portion of your room, furniture, floors, walls and whatever is in front of the sensors,” said Adam Rodnitzky, director of marketing at Occipital. “Once you have a model of the space around you we apply a physics engine so you can play catch with a virtual cat.”
Prophecy Sciences developed a sensor-based cognitive assessment that helps match people with companies and jobs. During the assessment, the subject plays a series of cognitive games while wearing sensors that measure biometric signals during game play. According to Founder and CEO Dr. Bob Schafer, the sensors include an optical pulse monitor, electrodes for measuring electro-dermal activity, and video measures for eye tracking and measuring pupil diameter.
“The use of the sensors makes the assessment unlike any other personality or aptitude test on the market,” said Schafer. “The biometric signals reveal how a person thinks and makes decisions, rather than simply what the decision is. When it comes to understanding how you might behave in real-life scenarios, we've found that the biosignals can tell a lot more than behaviors alone.”
The sheer amount of data generated by users and their devices can be used collectively to do everything from predicting disease outbreaks to preventing heart attacks.
“The goal is to get all the subtleties of a user’s life. If you view a PC as a device that sees your mouse movement, then at least it’s seeing your location and the mouse movements,” said Brdizcka. “When a device sees the applications you use, your social interactions, your communications, geolocation, movement, what you are seeing, and who you are in proximity to, it has evolved into something that is learning much more about you. That completely changes the perception we have of machines, the services a device can offer, and the type of software you can write.”
Your Better (Digital) Half
When a critical mass of data is available, it may be possible to create a digital copy of the self based on all the data that exists about an individual, Brdizcka said. And, there will be apps for that.
“All the data about you is converging. There will be machine learning models that connect the data. There will also be sociological models about what your behavior is like because humans are inherently predictable,” Brdizcka said. “Although you can’t predict every detail on a fine-grain level, there are a lot of very predictable patterns. If you have a critical mass of information about a person you could create a digital brother or sister that is a higher-level advisor.”
Remember The Matrix? Yeah, like that. If a person wants to learn Kung Fu, that knowledge could be downloaded to the digital copy of the self which trains its human counterpart.
“Because the copy understands you at a higher level it can give you advice. It would be completely personal and completely objective,” said Brdizcka. “It’s kind of like a therapist, but the copy would know many [more] details about you. Many companies see the value in it and are interested in building it.”
For now, mainstream developers have enough to think about as new device types and 3D printed prototypes emerge at lightning speed. Already, a number of tool providers are anticipating a vast landscape that will make the desktop/Web/mobile ecosystem appear simple by comparison. They expect mainstream developers’ jobs will be affected by the explosion of device types and the Internet of Things but they’re not sure what that looks like yet or how it will affect their product offerings. One thing is clear: There’s a lot of room for innovation and it’s happening fast.