The When and Why of UI


It’s fun to be a part of the tech industry, but it’s also interesting sometimes to take a step back and look at it from afar. I took the opportunity to do that a couple days ago with respect to all the attention Alexa and Amazon Echo have been getting ever since the Crestron compatibility was shown at CEDIA. Then some lighter fluid was thrown on that with the story about Facebook’s Mark Zuckerberg creating his own Jarvis control system that incorporated Alexa and Crestron as well.

My initial excitement about voice control was tempered over time by the continued hype around the format as the future of the user interface (UI). I definitely believe it has a place in the future of control, but it will not replace or displace all other forms of UI. I even wrote a mock press release to point out the potential shortcomings of a world that uses only voice as the primary method of controlling everything.

The piece got some immediate commentary online and led to a side conversation about the potential of gesture control to be the future of UI based on its presence at CES this year. I promptly shared another piece on gesture control I wrote back in 2013.


So, with all the hype and speculation about UIs, which one is best? The answer is “it depends,” and to be honest, the right UI may be a combination of many of them in the same environment.

So what type of UI is best for different applications? Let’s take a quick look.

Traditional Devices — I define these as switches, keypads, keyboards and mice. Sometimes the best UI is the traditional one. Nothing beats the simplicity of a well-placed switch for turning on lights, or the fixed location of tactile keys on a keyboard for entering a large amount of information into whatever system you are attempting to control. Pros: intuitive, fixed, familiar, fast, reliable. Cons: inflexible, limited accessibility, security concerns, take up space.

Remotes — We’ve all had these, and they usually work fairly well. Line-of-sight IR issues were solved long ago with RF-based communication coupled with IR emitters. The great thing about remotes is that their hard buttons are always in the same place, so you don’t have to look down to operate them like you would a touchscreen. Pros: intuitive, consistent, portable. Cons: line of sight for IR, additional hardware for RF, light interference issues.

Touch Screens — This category includes touchpanels, Smart Boards, touchscreens, tablets, and smartphones. These were the de facto standard for “high-end” control for years. Dedicated touchpanels were infinitely flexible in their layouts and utilized processors with RS232 and IP control that eliminated the issues with RF remotes and IR emitters in many cases. Touch screens integrated into displays also give users the ability to interact with software applications at the screen without having to go back to their PC. Pros: flexible, customizable, more connectivity options. Cons: potential theft concerns, access, parallax, potential light interference.

Gesture Control — Wii and Kinect brought gesture control to the mainstream, even if they use slightly different methods to facilitate it. Gesture control allows users to control systems without ever touching them by making gestures in the air. This is a great advantage in public spaces, areas where sanitation may be a concern, or when the display someone is trying to control is far too large to reach every corner. The gestures do have to be learned, and users may be shy about gesturing in front of a public audience. Pros: sanitary, accessible. Cons: requires user registration, limits number of concurrent users, learning curve, lighting issues, stage fright.


Eye Tracking — If you haven’t seen eye tracking user interfaces, you need to check one out. It’s pretty cool to look at a program icon on a computer screen and then open it by holding your gaze on it. There are fringe applications for eye tracking, but the largest opportunity comes in solving ADA issues, where a person who is paralyzed can use their eyes to control things around them. Pros: sanitary, ADA, no theft concerns. Cons: requires user registration, limits number of concurrent users, eye issues can make it unreliable.

Voice — There is definitely something cool about having a conversation with your technology, hence the excitement around voice control. It also frees users from having to manage a physical device for control. It is great for short commands to remote devices that generate longer-term results, such as “Play my Spotify jazz list” or “Turn on the TV and play Game of Thrones.” Applications that require continuous input become more tedious. Noisy environments can be a problem, as can accents and speech intelligibility. Again, like gesture control, there may be some apprehension about using voice control in public spaces based on potentially sounding silly. Pros: conversational, easily accessible. Cons: speech recognition, proper phrasing, noisy environments, stage fright.
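Under the hood, a voice system boils those short commands down to intents that map onto device actions. Here is a minimal, hypothetical sketch of that idea — the patterns and command strings (`spotify:playlist:…`, `tv:power_on`) are invented for illustration, not any real Alexa or Crestron API:

```python
import re

# Hypothetical intent table: a spoken phrase pattern maps to a command string.
INTENTS = [
    (re.compile(r"play my spotify (\w+) list", re.I),
     lambda m: f"spotify:playlist:{m.group(1).lower()}"),
    (re.compile(r"turn on the tv", re.I),
     lambda m: "tv:power_on"),
]

def handle_utterance(text):
    """Return the command for the first matching intent, or None if unrecognized."""
    for pattern, action in INTENTS:
        m = pattern.search(text)
        if m:
            return action(m)
    return None
```

So `handle_utterance("Play my Spotify jazz list")` yields `"spotify:playlist:jazz"`, while an unrecognized phrase returns `None` — which is exactly the "proper phrasing" con listed above: the user has to say something the intent table anticipates.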

Sensors and Software — This combination could really encompass a lot of UI possibilities: passive UIs, like a motion sensor or pressure mat that executes a command when you walk into a room; biometric readers that use fingerprints to execute commands or open doors based on clearance levels; and even IR tracking like that used in Apache helicopter helmets, the Nintendo Wii, and camera tracking systems.

Cameras and facial recognition software are also a part of UI now. Android can assess whether someone is looking at the screen so it won’t time out and go black. Sony even had facial recognition cameras in their TVs at one point that would dim the screen when kids were sitting too close, save power when no one was in the room, and could even differentiate between humans and the dog when making these adjustments. Pros and Cons: vary depending on the hardware combination.
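The defining trait of these passive UIs is that the "interface" is just a rule tying a sensor event to a command, with no control surface for the user at all. A minimal sketch, with sensor names and command strings invented for illustration:

```python
# Hypothetical passive-UI rules: (sensor type, location) -> system command.
SENSOR_RULES = {
    ("motion", "lobby"): "lights:lobby:on",
    ("no_motion", "lobby"): "lights:lobby:off",
    ("pressure_mat", "entry"): "chime:entry",
}

def on_sensor_event(sensor_type, location):
    """Return the command a sensor event should trigger, or None if no rule matches."""
    return SENSOR_RULES.get((sensor_type, location))
```

The user never issues a command; walking into the lobby fires `("motion", "lobby")` and the lights come on, which is the whole appeal — and the limitation — of sensors as a UI.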

Content and Layout — Now, if deciding which type of UI is best for each task isn’t enough, the final part of this is creating the phrasing, gesturing strategy, and mechanical and visual layouts of all of these in a way that is easy for the user to interpret, remember, and access efficiently. This is where an understanding of the customer’s habits, workflow, and experience level is key.

As you can see, depending on the environment and the task needing to be performed, a single system may utilize two or more UIs to perform separate tasks.
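Architecturally, "two or more UIs in one system" usually means several front ends feeding one command path. A hypothetical sketch of that pattern (the handler names are made up for illustration):

```python
# Hypothetical blended-UI dispatcher: voice, touch, and sensor inputs
# all route into the same control system through one entry point.
HANDLERS = {
    "voice": lambda payload: f"voice command: {payload}",
    "touch": lambda payload: f"touch press: {payload}",
    "sensor": lambda payload: f"sensor event: {payload}",
}

def dispatch(source, payload):
    """Route input from any UI modality to its handler; None if unknown."""
    handler = HANDLERS.get(source)
    return handler(payload) if handler else None
```

Each modality handles the tasks it suits best, but the system behind them stays unified — which is the blended-UI idea in miniature.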

Iron Man’s Jarvis is often said to be the ideal user interface. If you pay attention, Jarvis utilizes many UIs, not one. The conversations Tony Stark has with Jarvis are the most memorable, as they are unique, but Tony also uses a combination of gesture control, touch, light pens, sensors, switches and readers to interface with Jarvis, depending on the task at hand.

So Jarvis is much more than a UI. He is a combination of UIs and great visualizations, each best suited to the task it is paired with, and this “blended UI,” if you will, creates the overall experience of using Jarvis — the user experience (UX). We’ll talk more about UX next time!