This week marks the third anniversary of Apple launching the original iPad. It’s currently in its fifth generation with a 128GB version, not counting the first-generation iPad mini. Back then, many pundits claimed it would never catch on. Instead, Apple has sold 140 million iPads, totaling $75 billion in sales (beating Microsoft’s Windows revenue). The iPad holds the profit-leading position in the tablet market, outsells netbooks, will outsell notebook computers this year and desktops next year, has seriously cut into PC sales, heralded the post-PC generation, and is the fastest-growing consumer tech product ever. It’s in restaurants and airplane cockpits, and it is replacing cash registers. Below is the article I wrote three years ago upon its launch.
“In our lifetimes, we have seen at least five generations of user interfaces.”
So began a recent conversation, prompted by the launch of the iPad.
How is the iPad the 5th Generation of user interfaces for computers?
Before I begin, let me explain that a "user interface" is the way a human interacts with a computer. At the nerd level, this means how to program the computer, but at the user level (read: you) it means how you tell the computer what you want it to do: send email, edit a document, or empty the trash. To be perfectly technical, we’re talking about the "input" part of the user interface. How did this start?
1. Mechanical Interface

Back in the B.C. days (Before Console), those prehistoric days before high-powered semiconductors and PCs — in the days of kit computers and early mainframes — the "users," who were typically system programmers, would instruct the computer what to do through direct electrical connections or mechanical manipulations. The tools of the trade were wires, jumpers, plugs, pegs, dials, and switches.
Popular back in the ’60s and ’70s was a "mechanical digital" kit computer called the Digi-Comp, which I built and programmed. It was made of plastic, wires, pegs, and rubber bands and could do simple calculations and play games. Of course, I reprogrammed it so I’d always win.
2. Keyboard Interface

Keyboards added a layer of abstraction to the mechanical approach by using a familiar metaphor — typing commands on a typewriter keyboard — to deliver instructions to the computer.
This was done in two ways:
- Direct — teletype, keypunch, console. Popular in the ’70s, these were connected directly to the computer, mechanically or electrically, to instruct it what to do. Keypunch machines used a keyboard to punch holes into Hollerith, or IBM, cards. Consoles were typewriter-like machines used for entering commands or viewing the machine’s status.
- Software — Unix, CP/M, MS-DOS. These allowed a “shell” (think of a nut) to speak to the “kernel” (get it?) of the computer’s master program, or operating system, via a command line of text. In the ’70s and ’80s, instructions were delivered to the computer using brief but obscure commands in Classical Geek, familiar only to the high priesthood of computer cognoscenti, usually known as “superuser” or “root.”
3. Mouse Interface

While practically all computers now use these electronic rodents, the mouse was invented by Douglas Engelbart’s team at SRI in the 1960s and first became central to a graphical interface on the Xerox Alto. It was popularized by Apple on the Macintosh, was a fixture on powerful engineering workstations like Sun’s, and gained universal acceptance with modern versions of Microsoft Windows. With the mouse, desktop computers became more popular than mainframes or minicomputers.
- WIMP — Windows, Icons, Menus, Pointer — became the GUI, or graphical user interface, popularly used for decades.
Curiously, those who were proficient with the previous user interface thought that anyone who took their hands off the keyboard to reach for the mouse was a wimp.
4. Natural User Interface (NUI)
But the "direct manipulation" of the WIMP model still involved the use of a mouse. You didn’t actually touch the icons or objects on the screen directly. In the ’90s and ’00s we saw the emergence of a variety of natural user interfaces or NUIs that used a pen, voice, or touch. Pen-based computers, or tablet computers — in many ways like laptops without a keyboard — were evident in the ’90s.
- Voice recognition appeared and has improved in fits and starts, used both for commanding the computer and for dictating text. The former is available now in Google’s Android phone navigation system, the latter in Google Voice and Dragon transcription software.
- Touch input on tablet computers was limited to an active or passive digitizer, but the pen was still essentially a mouse, not true direct manipulation of objects on the “desktop.”
Aside from some specialized industrial uses, tablet computing gained neither wide acceptance nor market share. However, some early PDAs (Personal Digital Assistants), like the Palm Treo and Windows Mobile devices, did use a pen-based metaphor, but these remained too complex an interface for many people.
5. Multi-Touch Interface

The Apple iPhone and the subsequent iPod Touch — essentially an iPhone without the phone service — popularized direct touch in a unique way: a multi-touch metaphor. You could now use not a single pen click but one or more fingers to touch, slide, drag, shrink, and expand things on the screen — directly touching its icons and elements. This was a new level of abstraction.
And now the iPad, a groundbreaking advancement in mobile tablet computing, has just become available. Lighter than a laptop, simpler than a netbook, and larger than an iPhone or iPod Touch, the iPad with its 9.7-inch screen introduces an exciting new way of computing, customized not for portable computing but for true mobile computing. More on that below.
In its first iteration, the iPad is being promoted as an infotainment consumption device: heavy on media consumption, light on content creation. The latter is due to the limits on text-input speed imposed by the on-screen virtual keyboard. However, I expect this to change as this "multi-touch metaphor" develops. We’re already seeing multi-finger gestures on the iPad like those on MacBook touchpads and the recent Apple Magic Mouse. A variety of iPad applications, like SketchBook Pro, use three-finger gestures, though these gestures are not yet consistent across all applications.

I believe we’ll see more elegant gesture-based input permitting a variety of interaction models. I expect "chord-based" input where, rather than standard keyboard entry, multiple fingers and gestures used simultaneously could enter words and phrases via a system of shortcuts and macros.
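To make the chord idea concrete, here is a toy sketch in Python (entirely hypothetical: no such iPad API existed, and the chord names and bindings are my own invention). The combination of simultaneously touched fingers forms a set, and that set looks up a whole word or macro:

```python
# A toy sketch of hypothetical "chord-based" text input: a set of
# simultaneously touched fingers maps to a whole word or macro.
# The chords and bindings below are invented for illustration only.
CHORDS = {
    frozenset({"thumb", "index"}): "the",
    frozenset({"index", "middle"}): "and",
    frozenset({"thumb", "index", "middle"}): "Best regards,",
}

def expand_chord(fingers):
    """Return the text bound to a finger chord, or '' if the chord is unbound."""
    return CHORDS.get(frozenset(fingers), "")

# The order of touches doesn't matter, only the combination:
print(expand_chord(["index", "thumb"]))  # prints: the
```

Because each chord is a set rather than a sequence, a practiced user could fire whole phrases with one gesture, much as a stenographer's chord keyboard does.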
Do I believe, as Walt Mossberg of the Wall Street Journal does, that "Apple has the potential to change portable computing profoundly"? Absolutely, and I said so in my previous article at the announcement of the iPad. Do I think it’s a "laptop killer"?
Yes and No.
Let me answer the "No" part first. I believe there is still a market for both laptop and desktop computers. Laptops will continue to get smaller, lighter, faster, and cheaper. But will people need both a laptop and a desktop? I am dubious. The laptop already represents compromises both from the desktop down and from the mobile up.
Now I’ll address the "Yes" part. I suspect that many mobile workers and mobile users will opt for the iPad style of computing rather than a laptop. Have you ever seen people at airports leaning against a wall with a laptop open, trying to get that last email out with the formatted Word document attached? It gives new meaning to "clumsy."
But do we need another viewing device? I think "Yes": you’re already using three. Today we live in a world of Three Screens: the television (lean back), the computer (lean forward), and the mobile phone. The television is getting larger, the mobile phone is getting smarter… the iPad will fit in between. It will be not a fourth screen but a replacement for one of the others.
Do you think the iPad approach will catch on?
Bill Petro, your friendly neighborhood historian