The Last Straw Of Creepiness

Eric Schmidt has previously said that Google’s company policy was “to get right up to the creepy line and not cross it”. And despite occasions where it has gotten itself into real trouble, such as sniffing data from personal WiFi routers as its Street View cars roamed the streets, the general public has not made a huge fuss over privacy.

However, it is not as if the general public is totally oblivious to privacy issues. There have been reports, for example, that young adults (being more tech savvy) take more security/privacy measures than their elders. Interestingly, young adults are more concerned with hiding information from their parents than from Google and Facebook, which is obvious when you think about it. A Pew Research survey also found: “Some 86% of internet users have taken steps online to remove or mask their digital footprints, but many say they would like to do more or are unaware of tools they could use.” I have seen similar surveys in Japan.

If this is the case, then it seems there is a delicate balance in place which reflects Eric Schmidt’s quote: online privacy is a serious issue for many, but not quite serious enough to trigger a public backlash. The Internet trackers as a whole have been successful in coming close to the creepy line without crossing it.

The next question, then, is how the Internet trackers, including Google, Facebook, and a slew of other online ad brokers, will cross the line. When will they do something so creepy that the public revolts?

The key to understanding this is, I think, recalling why the US does not appreciate Edward Snowden. Snowden uncovered rampant privacy intrusions on a massive scale by the NSA. However, US citizens do not seem to care much. They seem happy to let the NSA collect data, as long as the information is used not against themselves but against terrorists. Of course, I’m sure that US citizens who come from the Middle East are not so reassured, but the majority of Americans simply do not consider themselves the victims.

Looking at it this way, the creepy line will be crossed when and only when the massive data collected is used against the majority of citizens, and not against terrorists, in a way that is easily noticeable and potentially harmful. For example, re-targeting ads are getting very close to the line because they demonstrate, in an unambiguous way, that Google is carefully watching which 3rd party websites you visit. This is completely unlike the previous generation of search or display ads. Re-targeting ads have reminded the public that Google is watching their every move. The only thing that has to happen now is for something to demonstrate that this information can be used to harm you; then the creepy line will most likely be crossed.

Thus we should next focus on when the public will come to consider the information gathered by Google and Facebook dangerous and harmful. If the information they hold is used in a crime in a way that the majority of citizens can identify with, then I would most certainly expect a backlash. That is when the creepy line will be crossed. However, Google and Facebook themselves have no intention of harming their users, so it won’t be them that cross the line. It will be someone else.

There is no doubt that the information on Google’s servers is potentially damaging. Google probably holds the data that would be most harmful if revealed. Unlike Facebook or Apple, where you typically send the information yourself and are unlikely to send anything that could harm you down the road, Google collects everything. They collect all your searches, all the places you have been to, and all your emails. You do not select which information to share with Google, so the good and the bad get sent there.

I think we are just one major security breach or one major malware attack away from a crisis of confidence. Google itself will not cross the line, but malware can make this happen. Current malware does not collect privacy or location information from Android devices or Google accounts, but only because the business model is not there yet. If somebody decides that this is indeed something they can make money from, then it will happen, and I expect it will bring Google’s data collection practices down along with it.

Security breaches are becoming more sophisticated and more targeted. Large leaks of accounts are reported quite frequently, although not all of them can be entirely trusted. Indeed, one could imagine hackers announcing the leak of a large number of bogus accounts, just to scare the public into responding to phishing emails. As long as this trend continues, I believe that the largest threat to Google’s data collection practices will be a security breach and not a sudden awakening to privacy by the public.

Points Of Convergence

I’ve been studying the Android operating system and ecosystem recently, out of boredom, and one thing struck me as very odd. That is, Google’s AI applies (or is supposed to apply) to Google’s services, but not necessarily to those of third parties.

For example, the Calendar app has an option to scan Gmail and extract event information (presumably using clever AI), which is then automatically added to your calendar. However, although the Android Gmail app can now comfortably connect to Microsoft Exchange servers, this scanning cannot touch email coming from those servers.

This is not the case with iOS. Mail.app analyses all email, regardless of where it came from, applies the same algorithms to extract date information, and suggests events to add to the calendar (it will automatically add events if there is an attachment with the “.dat” extension). Mail.app can do this legitimately because everything is done on the device, without sending the emails to Apple’s servers. (Note that Google is still facing litigation over its scanning of email for advertising purposes, litigation which calls into question the scanning of email that did not originate from Gmail.)

From a technical point of view, features like Google Now on Tap may allow Google to analyse data that resides in third-party services. However, these third parties may be reluctant due to competitive concerns. Furthermore, privacy policies, at least in Japan, are very sensitive about sending data to other entities. I expect the same policies, or at least the same expectations, exist in many other countries as well, and the Gmail litigation above suggests that this is indeed the case.

This means that iOS will be able to analyse and learn (locally) from emails stored on third-party servers like Microsoft Exchange, whereas Gmail will not. In other words, the iOS Mail.app can stand at the crossroads where information from multiple sources comes together, and learn from each of them. It will benefit from being at the point of convergence. Gmail, on the other hand, will be separated and isolated.

Gartner has reported that only 4.7% of public companies use Gmail for work. This is based on email routing records, and is likely to be quite reliable. In terms of the treasure trove of corporate email data, then, Gmail is mostly insignificant. For Google to really access this information, it most likely needs to move away from cloud computing and towards AI on the device, because the device is where the data converges.

The interesting thing to note here is that the point of convergence from a purely technical point of view will not necessarily be the point of convergence in real life. Whereas technically it makes sense to accumulate all data in the cloud, privacy concerns alone could force convergence to occur solely on the device.

What Is AI Good For?

With all the hype surrounding artificial intelligence, it is important not to get too immersed in the technical and science-fiction aspects. Steve Jobs said, “You’ve got to start with the customer experience and work backwards to the technology,” and this is true of artificial intelligence as well. Although a full discussion is well beyond the scope of a short blog post, I would like to provide a few perspectives.

Contextual interfaces

In my opinion, the right-click in Windows 95 was a huge innovation. Prior to that, one had to hunt through a huge array of options under a menu bar, or scan a panel of small and often cryptic icons. The right-click contextual menu showed a short list of tasks, all relevant to the currently selected object, and relieved you of this wasteful routine.

Although this may not strictly be classified as AI, the way it lessened the burden on the human brain was significant.

Similarly, one application of AI that would most certainly be very popular with users is UI improvements that significantly reduce the need to scan through a list of options to find the relevant action. In iOS 10, Apple has introduced AI that learns which emails should be sorted to which folders and intelligently provides a shortcut, making sorting emails much quicker and easier.

Automated processing

Email spam filters learn which emails have a high probability of being spam. From a user’s perspective, this is by all means artificial intelligence.

Although spam filters occasionally make mistakes, they save us time and cognitive load by pre-filtering out material that is completely irrelevant to our work. Good spam filters also protect us from phishing attacks, which can compromise whole corporate networks, so it is no surprise that they are in high demand.
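
The learning mechanism behind a classic spam filter is surprisingly simple. Below is a minimal naive Bayes sketch in Python; the tiny training set and bare word-level features are illustrative assumptions of mine, not how any production filter is actually built.

```python
# A minimal sketch of a learning spam filter using naive Bayes.
# The training data is made up purely for illustration; real filters
# train on millions of labelled messages with far richer features.
import math
from collections import Counter

def tokenize(text):
    return set(text.lower().split())

class NaiveBayesSpamFilter:
    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.messages = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # Count, per class, how many messages contain each word.
        self.messages[label] += 1
        self.words[label].update(tokenize(text))

    def spam_probability(self, text):
        # Start from the class prior, then add per-word evidence.
        # Log space avoids numerical underflow; the +1/+2 terms are
        # Laplace smoothing so unseen words don't zero anything out.
        log_odds = math.log((self.messages["spam"] + 1) /
                            (self.messages["ham"] + 1))
        for word in tokenize(text):
            p_spam = (self.words["spam"][word] + 1) / (self.messages["spam"] + 2)
            p_ham = (self.words["ham"][word] + 1) / (self.messages["ham"] + 2)
            log_odds += math.log(p_spam / p_ham)
        return 1.0 / (1.0 + math.exp(-log_odds))

sf = NaiveBayesSpamFilter()
sf.train("win free money now", "spam")
sf.train("meeting agenda for tomorrow", "ham")
print(sf.spam_probability("free money"))       # high (~0.80)
print(sf.spam_probability("agenda attached"))  # low (~0.33)
```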

This is a very important market for AI.

Data Detectors

Apple had a patent for a very powerful technology commonly known as Data Detectors. This technology detects addresses, event dates and the like inside text, and dramatically improves the user experience on smartphones, where copying and pasting is inconvenient.

Analysing text, predicting what a user might want to do with it, and providing a convenient, intuitive UI that lets the user get it done quickly can be a great timesaver.
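
Data Detectors itself is proprietary Apple technology, but the basic idea can be crudely approximated with pattern matching. The patterns and suggested actions in this Python sketch are simplified assumptions of mine, purely for illustration:

```python
# A toy approximation of the data-detector idea: scan free text for
# actionable patterns and pair each match with a suggested action.
# The real technology is far more sophisticated than these regexes.
import re

DETECTORS = [
    # (kind, pattern, suggested action)
    ("date",  re.compile(r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.? \d{1,2}\b"),
     "Add to calendar"),
    ("phone", re.compile(r"\b\d{2,4}-\d{2,4}-\d{4}\b"), "Call or add to contacts"),
    ("url",   re.compile(r"https?://\S+"), "Open in browser"),
]

def detect(text):
    for kind, pattern, action in DETECTORS:
        for match in pattern.finditer(text):
            yield kind, match.group(), action

message = "Lunch on Oct. 4 at noon? Details at https://example.com or call 03-1234-5678."
for kind, value, action in detect(message):
    print(f"{kind}: {value!r} -> {action}")
```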

Voice Recognition

It is well known that machine learning techniques have greatly improved voice recognition. Voice recognition has historically been valued by people who have difficulty typing, and on mobile devices it is convenient when you cannot use your hands.

Voice Interface

Graphical user interfaces are great for a stepwise approach to getting things done. However, since they operate by presenting a list of options on a 2D screen, there is a limit to the breadth of commands that can be issued at any one time.

Command line interfaces and voice interfaces get around this issue because they do not have to present a list of options. They are limited only by the user’s ability to memorise the available commands and issue them without referring to a menu. Hence voice interfaces are a convenient way to issue commands quickly.

Summing up

My intention with this post was to show that there is much more to AI than a voice UI, and that from a practical perspective, the other applications have already proven very significant in terms of user benefit. Although voice UIs and predictive assistants like Google Now are interesting and futuristic, there is no reason to believe that these applications will be the most useful and revolutionary.

Current advances in machine learning (Deep Learning) will build upon what we already have, and for smartphones with big screens, what we already have is a good graphical user interface.

AI, voice UIs and predictive assistants should be evaluated on their merits. How will they save us time, and for what tasks? How will they help us when we cannot view our phone screens, or when it would be inconvenient to do so? How can they reduce our cognitive load?

Apple is pretty good at understanding what the user experience should be, and arguably this will be just as important as, or maybe even more important than, the underlying algorithms.

Peak Google Revisited

Almost a year ago, I noted that while a few prominent tech pundits had pronounced “Peak Google” at the beginning of 2015, Google was actually as strong as ever 12 months later.

In that post, I said that since no company keeps succeeding forever, anybody who predicts the demise of a company without giving a specific timeframe will always eventually be right. That is to say, a prediction without a timeframe is utterly valueless. I also noted that giving a timeline is extremely difficult.

However, I think we now have enough information to give a rough timeline on when we can expect “Peak Google” in financial terms.

Data points

I will lean on the following data points.

  1. The historically constant size of advertising spending
    In 2014, Eric Chemi, writing for Bloomberg, noted that the US advertising industry has been about 1 percent of US GDP ever since the 1920s. This is significant because the US is much wealthier than it was 100 years ago, and has gone through many ups and downs, even a world war, in that time.
  2. The share of internet advertising within the whole advertising market
    According to eMarketer, total US digital ad spending in 2017 will be 38.4% (77.37 billion USD) of total media ad spending, surpassing TV ad spending at 35.8% of the total.
  3. Google’s US advertising revenue was 31.00 billion USD in 2015, calculated from 67.39 billion USD in global ad revenue, of which 46% came from the US. This is close to half of total digital ad spending (77.37 billion USD as noted above; see the quick arithmetic check after this list).
  4. Facebook’s 2015 advertising revenue was 17.08 billion USD. This is roughly a quarter of Google’s.
  5. As noted by Horace Dediu, despite their rapid economic growth, developing nations are not becoming a larger part of Google’s revenue, and hence are not accelerating Google’s revenue growth.
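
For readers who want to sanity-check these figures, here is the back-of-envelope arithmetic they imply, as a short Python sketch. Note that, like the original comparison, it mixes Google’s 2015 revenue with eMarketer’s 2017 spending projection; all numbers are simply the ones quoted above.

```python
# Back-of-envelope check of the figures cited above (billions of USD).
total_digital_2017 = 77.37                     # projected US digital ad spend
total_media_2017 = total_digital_2017 / 0.384  # digital is 38.4% of total media
google_us_2015 = 67.39 * 0.46                  # 46% of Google's global ad revenue
facebook_2015 = 17.08                          # Facebook's global ad revenue

print(f"Total US media ad spend (2017 proj.): {total_media_2017:.1f}")            # ~201.5
print(f"Google US ad revenue (2015): {google_us_2015:.2f}")                       # ~31.00
print(f"Google vs. US digital spend: {google_us_2015 / total_digital_2017:.0%}")  # ~40%
print(f"Facebook as a share of Google: {facebook_2015 / 67.39:.0%}")              # ~25%
```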

Assumptions

I will also assume the following:

  1. Google will not find a new revenue source that will be large enough to significantly add to its top line.
  2. Google’s revenue growth will continue to be dependent on, and on par with, growth in the US.

Logic

  1. Since total media ad spending has historically been constant as a percentage of GDP, it forms a hard ceiling on advertising growth in the US.
  2. Digital ad spending is rapidly approaching this ceiling. Already at close to 40% of total ad spending, digital has less and less room to grow.
  3. Google has close to half of total digital ad spending, and Facebook likely takes half of the remainder. Google has little room to grow by increasing its share of the total digital ad market; in fact, it is more likely that Facebook will eat into Google’s share. Note that one estimate suggests that Google and Facebook together own 85% of the US digital ad market.
  4. Since Google’s ad revenue growth has largely been independent of developing countries, it is reasonable to assume that this will continue for the mid-term.

In simple terms, there is no longer room in the advertising industry for both Google and Facebook to keep growing. Since Facebook has more momentum, it is likely that we will see Google being increasingly squeezed. Although total digital ad spending will likely still see mid-double-digit growth, Facebook will take the majority of that growth, and Google will probably drop to single-digit growth before 2020.

What to expect in the future

We are already seeing signs of more disciplined spending at Google/Alphabet, most likely in anticipation of a slow-down in growth. Given the highly talented people at Google, it is no surprise that they understand that the end of double-digit ad revenue growth is near.

However, fiscal discipline can significantly alter which projects a company chases. Unlike the current Google, which constantly throws spaghetti at the wall to see what sticks, a fiscally disciplined Google would probably be more cautious. Within the next few years, I expect that we will see a very different Google from the one we see now.

Update

One important thing to note is that “Peak Google” will be the result not of any strategic mistake by the company, but of the saturation of the digital advertising market. This has the following implications:

  1. The whole digital advertising industry will suffer along with Google. In fact, smaller and less established players are more susceptible to adverse environments. This is already happening.

  2. The saturation of the digital advertising industry also means the saturation of the ad-driven Internet. Startups without a monetisation model will find it harder to bolt on an ad-driven one later.

  3. As the most established brand in digital advertising, Google is likely to maintain a very strong position in the market for years to come. As with Apple, the issue will be the lack of rapid growth.

Smartphone Sales Down In Japan, But Android Hurting Most

MMRI released its report on mobile phone sales in Japan for the first half of 2016, and the results were not good.

  1. Total handset sales decreased by 10.9% YoY. 
  2. Smartphone sales decreased 8.4% YoY.
  3. iPhone sales, however, decreased only 3.1%, resulting in an increased market share of 40.7% (of all handsets, including feature phones).
  4. Importantly, Sony, which is 2nd in market share, saw a 28.5% drop in sales, while Sharp, 3rd in smartphone share, fell off a cliff with a 46.4% drop.
  5. The sharp decline has been attributed to the government’s decision to restrict what it considers excessive discounting of devices. The government thinks that by restricting discounting (some smartphones are sold for free by carriers if the purchaser agrees to a 24-month contract), carriers will eventually reduce the prices of their data plans. However, data plan prices have yet to come down, and depending on your usage pattern are actually increasing.

What this suggests is that when customers are more exposed to the real price of smartphones, it is the Android users who either decide to buy cheaper devices or hold on to their devices longer. iPhone users seem to be less sensitive to price increases.

In a nutshell, the Android market has a high level of price elasticity whereas the iPhone market does not. 

I believe that the iPhone market and the Android market are actually different markets, despite both being smartphone markets. Customers buy each for different needs, and they are not interchangeable. This is similar to how Mercedes and BMW do not share a market with cheap cars; the role of a luxury car is not just transportation.

Who Will Win The Next Big Thing?

Many people seem to think that the next big thing in tech will be artificial intelligence, and that Google is much better positioned to win than Apple. Other people think that VR/AR is the next big thing, and again, at least one of the companies that is currently announcing hot new VR/AR gadgets is going to win (and not Apple).

However, history has clearly shown that this discussion is without merit. When a next big thing does come along, it is the most unexpected company, or a company that simply did not exist before, that actually wins. Very rarely, if ever, does the company that invests tons of money in early-stage research emerge as the victor.

Google did not exist yet when Yahoo, Lycos, Altavista and many others were first battling to become the telephone directory of the web. Apple was just a failed PC company finding success in music when Blackberry, Palm, Microsoft and Nokia were battling to bring smartphones to the masses. Likewise, Apple was fighting a losing war against IBM when Steve Jobs visited Xerox PARC, which had invested heavily in next-generation computing research. Compaq did not exist when IBM introduced the IBM Personal Computer. Microsoft was not even in the OS market when IBM knocked on its door looking for an OS for the x86 CPU.

Time and time again, history has shown that when something really new comes along, the companies that seem to have the strongest position from both market and technical standpoints, are rarely the ones that win in the end. The companies that do win are those that we would not even think about, or the ones that didn’t exist. This is what Clayton Christensen’s Disruption Theory is all about.

Therefore, from a historical standpoint, if AI or VR/AR succeeds in disrupting tech, it is actually very unlikely that Google, Microsoft or Facebook will win in the end. These companies are in exactly the same position regarding AI and VR/AR as were Blackberry and Palm prior to the iPhone, or Yahoo, Lycos and the others prior to Google Search. They have invested heavily in research and in developing the early market, but they have not yet discovered the formula that would propel them into the mass market.

No matter how unlikely it may seem today, history is actually quite unequivocal on this. The large and established companies that pioneer an early market do not reap the rewards when disruption happens and the market goes mainstream. The odds are against Google winning in AI, and against Microsoft and Facebook winning in AR/VR (assuming, that is, that AI and AR/VR end up being disruptive technologies and not simply sustaining ones).

Although it is almost impossible to predict what will happen, I will end this post by highlighting, for illustrative purposes only, a couple of scenarios under which Google might find itself vulnerable.

  1. What if privacy became a barrier to AI penetrating the mainstream? What if consumers started to feel uneasy about the suggestions that Google’s AI made? What if a data breach at a major internet advertising company made it clear to mainstream customers that far more information was being collected about them than they had ever imagined? What if technology emerged that made machine learning possible without compromising privacy? Would Google invest in that technology, or would it try to improve its AI results with its current privacy-compromising methods? It is likely that Google would choose the latter, which might be a bet on the wrong horse.
  2. AI could actually become the biggest threat to Google’s business model. What would happen if somebody came along with a good-enough AI service that made web search obsolete, combined with a monetisation scheme far less profitable than Google’s search advertising? Would Google copy that scheme, or would it wait until it found something at least as lucrative as the search business it was cannibalising? What if this service took off while Google was still looking for ways to maintain profits?

Twitter Grows 13% in Japan

Twitter Japan announced on the 2nd of November that user growth had been 13% (+5 million) over the past 9 months, bringing the Japanese monthly active user (MAU) count to 40 million as of September. This compares to Twitter’s MAU growth in the rest of the world, which is essentially flat.

In December 2015, Twitter Japan announced that 10% of MAUs were from Japan.

I have written on this topic several times on this blog. What I think is most important is that the character of a social media platform is not dictated by which features it has or does not have, but is determined by the users themselves. Therefore, all the pundits who say Twitter should do this or Twitter should do that are essentially clueless: they only know how they and their close circles use Twitter, and are mostly blind to how others are using it. Analysing the available features does not capture what people use a social service for, nor does it give you any idea of how people would use a new feature once available.

Social media needs social analysis. Furthermore, you need to analyse many societies, not just one.

BYOD And Hardware Sales Growth To Enterprise

In a recent article, Jan Dawson called the enterprise market “the fastest-growing segment in mature smartphone markets”. Tim Cook said in 2015 that Apple’s enterprise revenue had seen annual growth of 40%, which is indeed massive growth. The magnitude of the revenue was $25 billion annually, only 9% of total Apple revenue in the same period, but nonetheless huge. To put this into perspective, Dell’s peak revenue, in 2012, was $62.1 billion annually.

The question is, if corporate adoption of mobile was driven by BYOD, why would Apple see revenue growth at all? If all that was being done was adapting iPhones that employees already owned to the corporate network, why would Apple see increased sales of iPhones to the enterprise?

My guess is that one or both of the following is happening in the marketplace:

  1. Despite the continued popularity of BYOD, a significant portion of employees and employers prefer to separate work and private devices; hence the purchases of company-owned devices.
  2. The popularity of BYOD itself may be on the decline, due to a shift towards corporate-owned, personally enabled (COPE) or choose-your-own-device (CYOD) scenarios.

Either way, it does not seem unreasonable to predict that within a few years’ time, Apple may be the largest IT hardware vendor to enterprise customers in the world.

Thoughts On Andromeda

It is widely expected that Google will announce its new Andromeda operating system next week, on Oct. 4th. There is a lot of speculation about what the Andromeda OS might look like, and the original sources (1, 2) suggest some key points.

  1. It is an ambitious initiative being pursued by merging Chrome features into Android, not vice versa.
  2. Google plans to launch its forthcoming Andromeda Android/Chrome OS hybrid on two devices: a Huawei Nexus tablet and a “convertible laptop”.

All this suggests that Andromeda is mainly focused on tablets and convertible laptops, at least for the short term. Without going into the details of what Andromeda is actually capable of, I believe this focus is the core of the argument, and is what will dictate whether Andromeda succeeds or not.

Andromeda is aimed at Google’s weakness

Google has two separate operating systems for the PC and tablet markets. One is Chrome OS, which has seen significant adoption in the institutional US education market but has mostly failed to make any significant contribution to the general consumer or business markets. The other is Android, which holds a significant share of the tablet market, but only for what is often labeled “media consumption”, consisting largely of video viewing.

Unlike Microsoft, which still commands the vast majority of the business personal computing market via PCs, Android tablets do not appeal to people who want to work on business documents. This is also true for the mass iPad market, and it is the challenge for tablets as a whole.

It has also often been noted that very few Android apps have been designed to take advantage of the tablet form factor. Ars Technica’s Ron Amadeo examined 200 apps from Google Play’s “Top Apps” list and found the situation to be quite dire. (To be fair, the design of this analysis is not very scientific. The choice of the “top free apps” list is arbitrary, and a control experiment with a similar list for the iPad would be necessary.)

Of the top 200 apps:

  • Nineteen were not compatible with the Pixel C
  • Sixty-nine did not support landscape at all
  • Eighty-four were stretched-out phone apps
  • Twenty-eight were, by my judgment, actual “tablet” apps

From the above, I think it is safe to say that the markets Andromeda is targeting (the PC and tablet markets) are the markets where Google is weakest.

Similarities to Microsoft’s attempt at the smartphone market

The above situation is similar to the predicament in which Microsoft finds itself with respect to entering the smartphone market. Android is very strong in the smartphone market, and Andromeda is an attempt to use that strength to push Google into the productivity tablet market (one that has so far proved elusive for the iPad as well) and the PC market. Microsoft, on the other hand, has tried to use its dominance of the PC market to gain entry into the smartphone market.

We know that Microsoft’s attempt has largely failed up till now. The smartphone market has matured and is split between iPhone and Android. Although newcomers have tried to break into the market, all have failed to date. Microsoft’s chance was during the early days, when Android’s dominance was not yet secured, but it failed to deliver a compelling solution in time.

We can apply the same analysis to the PC market. The PC market has matured and is dominated by Windows. Although the Mac has ridden the halo effect of the iPhone and gained some share, this has been a very slow process; the majority of the market is still dominated by Windows. Similarly, Andromeda will find it extremely challenging to break into a market which is highly mature, and where the major battles were fought decades ago.

The consumerisation of IT as a tailwind

The consumerisation of IT is a relatively new phenomenon, and it favours players that are strong in consumer IT over those strong in corporate IT. That is, if the consumerisation of IT is a strong tailwind and Apple and Google ride it well, there is a possibility that they could challenge Microsoft’s dominance in PCs. Given the maturity and stability of that dominance, Apple and Google cannot win without some kind of strong tailwind. In other words, the consumerisation of IT is a new force that could change the balance of power in the PC market, creating an opening for Apple and Google that they could not have pried open alone.

However, if the reverse happens, that is, if IT stops flowing from the consumer to the corporation and instead starts flowing in the other direction, the exact opposite can occur. Jan Dawson has argued that this is indeed starting to happen. Therefore, instead of Andromeda gaining traction in the PC market, we might actually see the reverse: Windows gaining traction in smartphones.

The OS is not what matters most

When looking at a new OS like Andromeda, we must be careful to remember that the OS is not necessarily the most important piece of the puzzle. In fact, its importance may be minor. More important are the market position Google currently occupies, its ability to execute a coherent strategy, the commitment of 3rd party developers to create software that makes use of the new OS’s features, and the broad market trends that sweep across the industry.

As I have argued above, regardless of the features Andromeda may have, the other factors are not in Google’s favour. Furthermore, what I consider most significant, indeed pivotal, is whether the consumerisation of IT continues or is reversed. The fate of Andromeda hinges on this.

Conclusion

  1. Whatever features are announced for Andromeda, they will not be the most important factor.
  2. Andromeda and Windows 10 are tackling the same problem from opposite ends and with inverse strengths & weaknesses.
  3. What will determine Andromeda’s fate is whether the consumerisation of IT will continue. Recent trends suggest that this is questionable.

Keyboards As Legacy Devices

One of the common arguments against tablets as productivity devices is that writing is an essential part of “content creation” and that long-form writing necessitates a keyboard.

I have strongly questioned the validity of both of these assertions. I do not think that writing is an essential part of “content creation”, nor do I think that long-form writing needs a keyboard. Here I will focus on the second assertion and illustrate how the new generation might consider keyboards legacy I/O.

Japanese students are faster with smartphones than with keyboards

An article on ITmedia (in Japanese) tested how fast 16 Japanese students could enter text on smartphones and on PCs. The author found that many students could type up to 2x faster on smartphones, and that the fastest smartphone typist was faster than the fastest PC typist. They also found that the two students who were faster on a PC were using QWERTY keyboards on their smartphones, instead of flick input.

If we consider comfortable long-form text entry to be an essential feature of a “content creation” device, then at least for Japanese youth, smartphones are better than PCs.

QWERTY is holding back Western languages

One might think that the above applies only to non-Western languages. However, I believe that this argument extends to Western languages as well.

The issue is that users of Western languages are still using the inefficient, legacy QWERTY layout instead of something designed and optimised for smartphones (or even for PCs, for that matter). If Western users adopted a keyboard layout designed for smartphones, then maybe they wouldn’t need hacks like Swype to type faster. It is possible that what is holding tablet text entry back is not the lack of a physical keyboard, but the lack of new ideas and the unwillingness to try a new input method.

Implications for the future

There is a possibility that the QWERTY legacy is holding back innovation. The physical keyboards that Blackberry insisted on prevented it from pioneering phones with large touch-screen displays. The insistence on physical keyboards is probably a huge factor in keeping US schools from embracing the tablet form factor (and is helping float the Chromebook market). If this continues, then it is very likely that innovation in the next wave of “content creation”, if it is to happen on tablets, will come not from QWERTY countries but from countries with non-Western languages.

I see physical keyboards as legacy devices. They are slowing down innovation. Instead of discussing whether future “content creation” devices should have keyboards (like the 2-in-1 form factor), the real discussion should be about how to create a better keyboard layout that is completely free of the century-old QWERTY typewriter legacy.

Appendix: About Flick Input

Flick input uses a keyboard like the one shown in the image below. There are 12 light grey keys used to enter characters. The Japanese phonetic writing system has roughly 50 characters, far more than the 12 grey keys. However, when you press one of the grey keys, you are presented with 5 options: flicking in the direction of any of these options selects it (no flick selects the centre one). Therefore, from the 12 light grey keys, you can generate 12 × 5 = 60 different characters. Proficient users memorise the flick directions and do not need to wait for the options to appear on the screen; they simply put a finger on a key and immediately flick in the appropriate direction.

Since three Japanese characters contain about as much information as a single English word, you can see how efficient flick input can be. Add the fact that the keys are much larger (fewer mistakes) and can comfortably be reached with a single hand, and you can understand why Japanese youth are so fast with it.
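
To make the mechanics concrete, here is a toy model of flick input in Python. The two key rows and gesture names are my own illustrative labels; a real implementation covers all 12 keys plus voicing marks and small kana.

```python
# A toy model of flick input: each base key carries five kana, chosen
# by a tap (centre) or a flick in one of four directions.
# Only two of the twelve key rows are shown, for brevity.
FLICK_MAP = {
    "あ": {"tap": "あ", "left": "い", "up": "う", "right": "え", "down": "お"},
    "か": {"tap": "か", "left": "き", "up": "く", "right": "け", "down": "こ"},
}

def flick(key, gesture="tap"):
    return FLICK_MAP[key][gesture]

# "あき" (autumn) takes just two gestures: tap あ, then flick か left.
print(flick("あ") + flick("か", "left"))  # あき
```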

Similar concepts, such as MessagEase, are available for Western languages. One problem for Western languages may be that QWERTY is bad, but not painful enough to convince people to learn a new keyboard layout.

[Image: the Japanese flick-input keyboard]