The end of the keyboard, and what it means for software applications

Imagine a future without the keyboard. Imagine we spend 80% of our time on a tablet or a mobile phone. I am sure there are some roles (developers, finance, etc.) that will continue to use the traditional laptop or desktop, but imagine that their number is stagnant or shrinking.

In our current usage paradigm there are creators and consumers of content. We are all creators at some point, and consumers most of the time. As consumers we don't need the keyboard as much. The typical mode of consumption is point and click, primarily using the mouse or a touch interface.

I wanted to speculate on what would happen if the keyboard were to go away, and what changes would result in the way we build applications.

1. The text box would mostly go away. I already see lots of companies asking you to log in with Facebook or Twitter instead of entering a username, email, and password.

2. Where the text box survives, I imagine autocomplete would be the default for all text boxes, which dramatically reduces typos.

3. The text box would mostly be replaced by drop-down lists. This makes it relatively easy for users to consume or input data, and reduces errors dramatically.

4. The simple radio button is painful on a mobile form factor and too small for fat fingers on a tablet. I suspect it will be done away with, and more designers and developers will stop using it.

5. Replacement of lots of text with "tag cloud" picks. When you have to get users to write a short description about themselves, say their bio, writing a bunch of text is just painful. I imagine the bio will be replaced with large tags the user can tap to add, plus a small text box for adding a new tag that does not yet exist.
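The tag-cloud idea in point 5 can be sketched as plain selection logic, independent of any UI framework: tapping a suggested tag toggles it, and the small text box only adds a tag the cloud did not already offer. A minimal sketch in TypeScript (the type and function names here are hypothetical, not from any particular library):

```typescript
// Hypothetical tag-cloud picker state: users tap suggested tags to toggle
// them on or off, or type one short new tag; no free-form bio text box.

type TagState = {
  suggested: string[]; // the tags shown in the cloud
  selected: string[];  // the tags the user has picked, in pick order
};

// Toggle a suggested tag on or off (a tap in the UI).
function toggleTag(state: TagState, tag: string): TagState {
  const selected = state.selected.includes(tag)
    ? state.selected.filter(t => t !== tag)
    : [...state.selected, tag];
  return { ...state, selected };
}

// Add a brand-new tag the cloud did not offer (the small text box).
function addCustomTag(state: TagState, tag: string): TagState {
  const clean = tag.trim();
  if (clean === "" || state.selected.includes(clean)) return state;
  return {
    suggested: state.suggested.includes(clean)
      ? state.suggested
      : [...state.suggested, clean],
    selected: [...state.selected, clean],
  };
}

// Render the picked tags as a short bio line.
function bioLine(state: TagState): string {
  return state.selected.join(" · ");
}

let state: TagState = {
  suggested: ["engineer", "runner", "coffee"],
  selected: [],
};
state = toggleTag(state, "engineer");
state = addCustomTag(state, "photography");
console.log(bioLine(state)); // engineer · photography
```

Because the state is just two string arrays, the same logic works unchanged whether the tags are rendered as buttons on a phone or large touch targets on a tablet.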

One thought on “The end of the keyboard, and what it means for software applications”

  1. An excellent topic for discussion!

    In the future, a machine will increasingly learn to predict our intent, as if it were a genie waiting to fulfill our command. Machines could build a composite of our intent from multiple inputs: text, voice, retina, and other biometric scans, etc.

    A full keyboard, physical or otherwise, is slow, and gets in the way of a rich interactive experience. A smart keyboard could morph in real-time and show only a predictive subset of keys that are relevant to our intent at any given time. A user-facing camera could help predict our intent by following the movement of our retina against a tag cloud. A voice genie could intelligently prompt us to add clarity to our intent. Multi-dimensional inputs could enrich application behavior and response.
