[Info] On User Inputs and Keyboards

I was skimming an article the other day about OS Experience, a multi-window interface for jailbroken iPads that lets users run multiple apps simultaneously in resizable, movable windows. For the sake of argument, let’s lump this reinvention of the wheel in with Samsung’s Tab Pro split-screen functions, Microsoft’s Surface Pro’s similar feature, or, say, every PC since 1995. Revolutionary none of them are, but it had me thinking about multitasking and user interfaces on mobile devices.

First off, let’s establish that multitasking is a real necessity for nearly any kind of productivity work. If I’m researching something, I’ll have a browser with a mind-boggling number of tabs open on half the screen (thank you TreeStyleTabs) and my collection of notes on the left. Or I’ll have a video playing on one side while I read something on the other. People other than myself often attest to the productivity gains of a multi-monitor setup, both in the workplace and at home. I call that bad desktop/window management, since you end up moving your head and neck far more than your eyes, but each to their own.

It seems natural, then, to bring that to our mobile devices, in the name of productivity of course. I’ve since messed around with split-screen functionality on the Galaxy Note 2, Surface tablets, Tab Pros, and jailbroken iPads. Generally they work as advertised, sometimes well, sometimes not so well. I’ve really tried to get things done with half a dozen different tablets over the years and countless peripherals. There’s a clear trend here: each time a mobile device gains a function that desktops and laptops have had for decades, it’s heralded as a brilliant innovation. The mimicry isn’t the innovation; only the implementation within the device’s constraints is. We’re making progress though, and that’s a good thing.

There’s also a significant difference here, and that’s the input method.

[Image: Users reported losing around 4kg while checking e-mail this morning.]

Nearly every time I see people in coffee shops and in meetings using their tablets in a productivity capacity, a keyboard case of some kind is involved for typing out notes. This comes down to a combination of things: the hair-pulling experience of typing on the on-screen keyboard, a healthy selection of third-party keyboards, and the portability advantages of having a tablet. Logically, the next often-heralded step is to bring multitasking to these devices, right? But each step forward, such as adding a keyboard or adding traditional PC-like features, takes us one step closer to the form factor we’ve embraced for decades – the laptop. In short, the tablet, in its vanilla form without a keyboard, has been around for nearly half a decade, but it hasn’t become mainstream as a productivity device. We’re basically circling back to where we started.

Which brings me to user interfaces. It seems that as tablets get more laptop-like and laptops become more tablet-like, they’re bound to converge somewhere. Touchscreens are more or less unavoidable when purchasing a new non-MacBook laptop nowadays. In actual usage, I find touchscreens on laptops far more of a gimmick than a real aid. The take-up rate has also been lacklustre, though it’s not entirely the hardware manufacturers’ fault. Trying to force a tablet-like interface onto a traditional laptop is a recipe for disaster: it’s not just that applications aren’t designed for touch, it’s that touch is terribly inefficient from a human standpoint, which I’ll get to shortly. It probably also doesn’t help that most PC trackpads are completely horrendous. Don’t even get me started on offset keyboards with a number pad on one side. This is why I’ve stuck to a relatively ancient laptop.

This in large part explains why many people still have laptops even though they use tablets regularly – around 96% of them own both, according to Microsoft. There’s a time and place for the tablet, mostly in consumption, where it excels.

Where touchscreens excel is as a replacement for another pointing device, like a mouse or trackpad. There are clear overlaps here – even MacBooks have long since reversed their scrolling direction to more closely mimic a touchscreen. Moving a static object around the screen (the mouse pointer) while jabbing at various things can be done much more efficiently with a fingertip – that’s what the mouse cursor was originally intended to emulate anyway. Touch removes two obstacles. The first is having to know where the cursor currently is relative to where you want it to go. The second is having to attune to the sensitivity and acceleration curves of the cursor, functions designed to map the device to the screen. With a finger, it’s a 1:1 mapping: what you point at is what you get – simple.
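To make that last point concrete, here’s a minimal Python sketch of the two mappings. The acceleration curve and gain values are purely illustrative assumptions of my own (real transfer functions in Windows, macOS or libinput have their own shapes); the point is simply that relative, accelerated input has parameters the user must internalise, while absolute touch input does not.

```python
# Illustrative only: a toy pointer-acceleration curve vs. direct touch mapping.

def mouse_to_screen(device_dx: float, device_dy: float,
                    base_gain: float = 2.0, accel: float = 0.3) -> tuple[float, float]:
    """Map relative mouse movement to cursor movement.

    The gain grows with movement speed, so the user has to learn both the
    base sensitivity and how the acceleration behaves.
    """
    speed = (device_dx ** 2 + device_dy ** 2) ** 0.5
    gain = base_gain * (1.0 + accel * speed)
    return device_dx * gain, device_dy * gain

def touch_to_screen(touch_x: float, touch_y: float) -> tuple[float, float]:
    """Map a touch point to a screen point: absolute, 1:1, nothing to learn."""
    return touch_x, touch_y

if __name__ == "__main__":
    print(mouse_to_screen(5, 0))      # small flick -> modest cursor travel
    print(mouse_to_screen(50, 0))     # fast flick -> disproportionately large travel
    print(touch_to_screen(512, 384))  # finger lands exactly where you point
```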

But for most productivity tasks, neither of these is as good as a keyboard. See on-screen keyboards, scrolling through large documents (hello RSI), navigating lists, or rapid navigation in general. If you’ve taken the time to memorise keyboard shortcuts, or use a browser addon like Vimium/VimFX, or a smart launcher like Alfred/Synapse, then reaching for a pointing device to click on a button somewhere mid-workflow will seem like an ancient and extraordinary waste of time. It’s also a waste of screen space on unnecessary icons and toolbars. The hurdle is that it takes a few minutes to learn; most people would rather keep jabbing than spend those few minutes on a more efficient workflow.
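As a rough illustration of the cost of that round trip, here’s a back-of-the-envelope sketch using the commonly quoted Keystroke-Level Model operator times (Card, Moran & Newell). The exact figures vary between studies, and the function names are just my own shorthand for the two workflows.

```python
# Back-of-the-envelope comparison using commonly quoted KLM operator times.
# Treat the numbers as rough, illustrative values, not measurements.

KEYSTROKE = 0.2   # K: one key press by a practised typist (seconds)
HOMING    = 0.4   # H: moving a hand between keyboard and mouse
POINTING  = 1.1   # P: pointing at an on-screen target with a mouse
BUTTON    = 0.1   # B: pressing a mouse button

def refresh_via_shortcut() -> float:
    """Hit F5 without leaving the home row."""
    return KEYSTROKE

def refresh_via_toolbar() -> float:
    """Home onto the mouse, point at the refresh icon, click, home back."""
    return HOMING + POINTING + BUTTON + HOMING

if __name__ == "__main__":
    print(f"shortcut: {refresh_via_shortcut():.1f}s")  # ~0.2s
    print(f"toolbar:  {refresh_via_toolbar():.1f}s")   # ~2.0s
```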

That’s why things like the X1 Carbon’s capacitive strip replacing the function key row are just downright insulting to humanity: it completely disrupts traditional keyboard shortcuts, for no appreciable gain. Nobody worth their salt uses the manufacturer’s built-in ‘back’ or ‘refresh’ buttons – they’ll instinctively hit Alt-Left or F5, or, god forbid, the back button next to the address bar. By the time you’ve looked down, made sure the X1’s capacitive row is cycled to the one (of THREE) presets you’re after, found the custom ‘refresh’ icon and hit it, the next revision of the X1 Carbon, now with randomly placed keys each time you open the lid, will have been released.

Fancy user interfaces with slick flying windows, sound effects and waving hand gestures look good in movies and concept videos, but in real life, nothing beats a good keyboard and pointing device. Moving my hands off the keyboard to the mouse is something I try to avoid, as most touch typists will attest. Moving my hands up to jab at the screen is a nightmare – why would somebody want to leave greasy prints all over the screen (yes, I clean all my devices thoroughly with methylated spirits alarmingly regularly), while expending far more energy than the task requires? In nearly all the desktop apps I use, I rely on keyboard shortcuts, which are all within easy reach, often hit subconsciously.

Even people who don’t regularly use keyboard shortcuts naturally gravitate towards the trackpad or mouse. Contrast that with lifting your entire arm, and disrupting your typing position, to create ONE input action. After all the billions spent on human interface research, we have this anomaly. The novelty might be entertaining for a short while, but I guarantee that if you’re under the pump to churn out some work by a deadline, gestures be damned, you’re breaking out the keyboard and getting it done. Don’t confuse this with laziness; consider instead the hundreds or thousands of inputs per day a person makes on a laptop. This probably explains why devices like the Leap Motion haven’t gained traction outside niche uses – it’s a solution to a problem nobody really has.

As an aside, see-through screens are also a senseless idea. Text and UI floating on top of a clear screen might look good in movies, or when you want to talk to somebody BUT still do your work at the same time without appearing rude. But in real life, a moving background, bright lights, distractions, basically anything, will disrupt focus on the task at hand. Imagine if every application had a moving, live wallpaper directly behind its content. I’d rather go back to writing with crayons on scrap paper for my productivity needs.

Once again, we return to the laptop form factor. Or, more accurately, a plain old screen with a keyboard. Call me old-fashioned, but I don’t foresee moving off a keyboard (and sometimes a mouse) combination for a long time, at least for productivity tasks. I don’t want to be jabbing at screens or endlessly swiping through long documents. There’s always a better way to do things, and I think we’ve already had it for a long time. The next logical step isn’t using our meat digits to poke at things, but eye or mind tracking to control them – and that’s still a long way off being reliable and accurate.

 

 
