With all the hype surrounding artificial intelligence, it is important not to get too immersed in the technical and science-fiction aspects. Steve Jobs said, “You've got to start with the customer experience and work backwards to the technology,” and this is true of artificial intelligence as well. Although a full discussion is beyond the scope of a short blog post, I would like to offer a few perspectives.
In my opinion, the right-click contextual menu in Windows 95 was a huge innovation. Before it, one had to hunt through a huge array of options under a menu bar, or scan a panel of small and often cryptic icons. The right-click menu showed a short list of tasks, all relevant to the currently selected object, and relieved you of that wasteful routine.
Although this may not strictly be classified as AI, the way it lessened the burden on the human brain was significant.
Similarly, one application of AI that would certainly be popular with users is UI improvements that significantly reduce the need to scan through a list of options to find the relevant action. In iOS 10, Apple introduced AI that learns which emails you sort to which folders and provides an intelligent shortcut, so that filing emails is much quicker and easier.
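Apple has not published how this learning works, but the basic idea can be sketched with a deliberately simple frequency model: remember where mail from each sender usually ends up, and offer that folder as a one-tap shortcut. The class and method names below are illustrative, not Apple's.

```python
from collections import Counter, defaultdict

class FolderSuggester:
    """Toy sketch of learned mail filing: track which folder each
    sender's messages are moved to, and suggest the most common one.
    Real mail clients use far richer features than the sender alone."""

    def __init__(self):
        # sender -> Counter of folder names
        self.history = defaultdict(Counter)

    def record_move(self, sender, folder):
        """Observe the user filing a message from `sender` into `folder`."""
        self.history[sender][folder] += 1

    def suggest(self, sender):
        """Return the most likely folder for this sender, or None if unknown."""
        if sender not in self.history:
            return None
        return self.history[sender].most_common(1)[0][0]
```

The point is not the model's sophistication but the interaction: a single learned suggestion replaces a scan through the full folder list.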
Email spam filters learn which emails have a high probability of being spam. From a user's perspective, this is by all means artificial intelligence.
Although spam filters occasionally make mistakes, they save us time and cognitive load by filtering out messages that are completely irrelevant to our work. Good spam filters also protect us from phishing attacks, which can compromise whole corporate networks, so it is no surprise that they are in high demand.
This is a very important market for AI.
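The "learning" in a classic spam filter is often a Naive Bayes classifier over word frequencies. Here is a minimal sketch of that technique (not any particular product's implementation), with Laplace smoothing and log-space arithmetic:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Minimal Naive Bayes spam filter sketch: learn word counts from
    labelled messages, then score new messages by P(spam | words)."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, message, label):
        """Record one labelled message; label is 'spam' or 'ham'."""
        self.message_counts[label] += 1
        self.word_counts[label].update(message.lower().split())

    def spam_probability(self, message):
        """Return the estimated probability that `message` is spam."""
        words = message.lower().split()
        total = sum(self.message_counts.values())
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        log_scores = {}
        for label in ("spam", "ham"):
            n = sum(self.word_counts[label].values())
            # Work in log space to avoid underflow on long messages;
            # add-one (Laplace) smoothing handles unseen words.
            score = math.log(self.message_counts[label] / total)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / (n + vocab))
            log_scores[label] = score
        # Normalise the two log scores back into a probability.
        m = max(log_scores.values())
        exp = {k: math.exp(v - m) for k, v in log_scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])
```

Trained on even a handful of messages, the filter starts ranking "win free money" far above "meeting agenda attached"; production filters add many more signals (headers, links, sender reputation) on top of this core idea.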
Apple holds a patent on a very powerful technology commonly known as Data Detectors. It detects addresses, event dates and the like inside text, and dramatically improves the user experience on smartphones, where copying and pasting is inconvenient.
Analysing text, predicting what the user might want to do with it, and providing a convenient, intuitive UI that lets them get it done quickly can be a great timesaver.
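Apple's actual detectors are proprietary and far richer than regular expressions, but the shape of the idea can be sketched like this (the patterns below are illustrative and deliberately simplistic):

```python
import re

# Toy patterns for a few entity kinds; a real detector uses full grammars
# and locale-aware rules rather than one regex per kind.
DETECTORS = {
    "date": re.compile(
        r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?"
        r"\s+\d{1,2}(?:,\s*\d{4})?\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "url": re.compile(r"\bhttps?://\S+"),
}

def detect(text):
    """Return (kind, matched_text, span) for each detected entity,
    sorted by position, so the UI can offer actions such as
    'add to calendar' or 'call' directly on the match."""
    hits = []
    for kind, pattern in DETECTORS.items():
        for m in pattern.finditer(text):
            hits.append((kind, m.group(), m.span()))
    return sorted(hits, key=lambda h: h[2][0])
```

The UX win comes from the last step: each detected span becomes a tappable shortcut to the relevant action, so the user never has to select, copy and switch apps.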
It is well known that machine learning techniques have greatly improved voice recognition. Voice recognition has historically been valued by people who have difficulty typing, and on mobile devices it is convenient when you cannot use your hands.
Graphical user interfaces are great for a stepwise approach for getting things done. However, since they operate by providing a list of options on a 2D screen, there is a limit to the breadth of commands that can be issued at any one time.
Command-line interfaces and voice interfaces get around this issue because they do not have to present a list of options: they are limited only by the user's ability to memorise the available commands and issue them without referring to a menu. Hence voice interfaces are a convenient way to issue tasks quickly.
Current advances in machine learning (Deep Learning) will build upon what we already have, and for smartphones with big screens, what we already have is a good graphical user interface.
AI, voice UIs and predictive assistants should be evaluated on their merits. How will they save us time, and for which tasks? How will they help us when we cannot view our phone screens, or when it is inconvenient to do so? How can they reduce our cognitive load?
Apple is pretty good at understanding what the user experience should be, and arguably this will be just as important as, or even more important than, the underlying algorithms.