The Developer's Challenge in 2011
The SAAS developer has special challenges brought on by the Apple iPad, smartphones, wide adoption of all sorts of flat panels, and varying levels of connectivity.
There are a few converging trends in the software world that will bring new challenges to developers in 2011. These stem from the success and popularity of Internet-based apps delivered as software as a service (SAAS), the popularity of the new smartphones, the wide adoption of flat panel displays in every aspect of presentation, the pattern of adoption of the very successful Apple iPad, and the ubiquitous everyday personal use of highly crafted apps on all of these targets.

The challenge is to deliver experiences appropriate to each of the different configurations people find themselves using. No longer do people expect to use one and only one computer (or computer type) for a particular application. They expect to access their SAAS systems, be it Google search, Google Maps, Twitter, or their internal corporate content management or order processing system, at their desk, in a meeting, at home, waiting in line to get on a plane, or at Starbucks. And when they use these systems on a particular device, they expect them to behave in a certain way.

I'll assume, for this essay, that we are talking about services that involve some remotely stored data, and that in most cases implementation will be through HTML and related browser-based technologies. Native and other local apps may be used where there is the need and it's financially sound and practical to do so. Unfortunately for many SAAS developers, building lots of native apps is not a practical option.

This essay is an attempt to develop a bit of a taxonomy for that challenge and to use that taxonomy to explore some options.

The Challenge Facets and Their Variations
Let's start by breaking the configurations down into three parts, which I'll call Challenge Facets: Connectivity, Display, and Interaction Method. Within each facet there are a few different variations that, I believe, require special attention from the developer and can have major implications for UI design and internal application architectural design.

First, let me explain the Challenge Facets and their variations:

Connectivity here refers to the speed of the connection to the Internet and the servers that provide the service. Connectivity can be broken down into four variations: LAN-speed (wired or WiFi); 3G (and 4G) somewhat high speed; 2G "dribbling" speed (on average, over many seconds); and Disconnected, which can happen for long periods of time (many minutes or hours). Local connectivity to nearby devices is another area, but I won't deal with it in this essay (though it's quite an interesting topic).

Display here refers to the size of the screen and its physical relationship to the viewer(s). For this, I see at least five distinct variations: Handheld (like a smartphone), with a screen of just a few inches maximum, that fits in a pocket or on a belt; Tablet (like the Apple iPad), with a screen of several inches or more, that can be used standing up, sitting, or held between a few individuals; Laptop, with an integral hinge and keyboard and usually a larger screen than a tablet, such as 13"-17"; Desktop, with even larger screens (17"-27") -- often more than one; and finally Wall displays, with a number of pixels similar to a laptop but physically much larger (32"-50", or more if projected) and distant from the viewers, who may be sitting on a chair (in a conference room) or a couch (in a home). In the case of a presentation, a wall display may also be controlled by a user who is standing and perhaps moving.

Interaction Method in this case refers to the main ways in which the user controls the application while it is running. The Interaction Method can be broken down into these variations: Keyboard (a physical keyboard with tactile response); Mouse, which allows quick, precise (almost to the pixel) positioning of a pointer on the screen, as well as separate command "buttons" and "wheels"; Touch, which involves directly touching the screen for selection as well as gesturing (and even a virtual "keyboard"), but without the ability to do quick, precise positioning before touching (like "hovering" with a mouse) or to give input without touching; and Pen, which also involves directly touching the screen for both selection and gesturing, but which gives precise positioning before touching as well as input before touching (hover).

For those not familiar with the term as I'm using it, "gesturing" refers to controlling the computer by making special motions with your finger, a stylus, or something else you move with your body (like the Xbox Kinect or the Nintendo Wii). Instead of treating the touch as "ink" to be used directly, the motion is interpreted by special software as a command. Popular examples are scrolling a document by "flicking" up or down, or zooming with a "pinch" gesture. The recognition of these different gestures replaces the separate mode setting in mouse-based apps, such as switching between zooming and scrolling a PDF document. How a gesture is interpreted often depends upon where it is made, such as tapping one place to zoom and another to follow a link, or dragging one place to move a slider and another to pan around a display. Mouse-based systems often have simple gestures, like double-click and click-and-hold.
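
To make the idea concrete, here is a minimal sketch of gesture recognition in the browser -- detecting a two-finger "pinch" and turning it into a zoom command. It assumes the WebKit-style touch events that browsers like Mobile Safari expose to Javascript; the "map" element and the zoomBy() routine are stand-ins for illustration, not a real API:

    // Interpret two moving touches as a "pinch" zoom gesture.
    // Assumes WebKit-style touch events; "map" and zoomBy() are hypothetical.
    var el = document.getElementById("map");
    var startDist = 0;

    function distance(t0, t1) {
      var dx = t0.clientX - t1.clientX;
      var dy = t0.clientY - t1.clientY;
      return Math.sqrt(dx * dx + dy * dy);
    }

    el.addEventListener("touchstart", function (e) {
      if (e.touches.length === 2) {
        startDist = distance(e.touches[0], e.touches[1]);
      }
    }, false);

    el.addEventListener("touchmove", function (e) {
      if (e.touches.length === 2 && startDist > 0) {
        e.preventDefault(); // keep the browser from zooming the whole page
        var scale = distance(e.touches[0], e.touches[1]) / startDist;
        zoomBy(scale); // hypothetical: apply the zoom to the app's own view
      }
    }, false);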

Each of these variations has strengths and weaknesses relative to the other variations. Each has implications for application design. Native applications designed specifically for a device with a particular combination of variations of these Challenge Facets are tuned to those strengths and weaknesses. This is especially the case with pre-loaded or built-in apps. These tuned apps are often some of the most frequently used apps on the device, and they teach users what to expect. When your app runs on that device, users will expect a similarly tuned experience, especially if many other apps manage to provide it, too.

Let's look at each of the variations and see what implications they have.

With respect to Connectivity, even though the connection speed might be slow, users will want some reasonable feeling of responsiveness. A poor 2G connection might be 1,000 times (or more) slower than a LAN or WiFi connection, but the user won't want to wait minutes or more to download huge code libraries, background images, and animations just to modify one simple setting or make a common query. Further, when the connection is dropped, or the user is on a plane without WiFi, they would expect at least not to lose the work they've started, and perhaps even to be able to examine recently accessed material or create some more for later upload. Loss of connectivity should be treated as a normal occurrence, not something that crashes the app. As connectivity speed goes down, or latency increases, there needs to be some sort of graceful degradation.
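
As a rough sketch of what "graceful" can mean in a browser-based app: queue the user's work locally when disconnected and send it when the connection returns. This assumes the HTML5 localStorage and online/offline events that the newer browsers support; saveToServer() and the "pending" queue format are illustrative stand-ins:

    // Treat disconnection as normal: queue edits locally, sync later.
    // saveToServer() and the queue format are hypothetical.
    function submitEdit(edit) {
      if (navigator.onLine) {
        saveToServer(edit); // hypothetical call to the service
      } else {
        var pending = JSON.parse(localStorage.getItem("pending") || "[]");
        pending.push(edit);
        localStorage.setItem("pending", JSON.stringify(pending));
      }
    }

    window.addEventListener("online", function () {
      var pending = JSON.parse(localStorage.getItem("pending") || "[]");
      for (var i = 0; i < pending.length; i++) {
        saveToServer(pending[i]);
      }
      localStorage.removeItem("pending");
    }, false);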

The screen poses an additional challenge. One layout does not fit all. Having one large virtual layout and moving a porthole around over it on smaller screens is not a good solution in many cases. It may look good in a TV commercial that shows off reading NYTimes.com, but that's a very special case. Navigation by zooming out and in to jump around a big layout is not good information design, especially when you need to see two different parts at the same time and interact with controls that are often positioned in a different place from the data display they control.

The uses are often different at different sizes, even if the underlying data is the same. Handheld units are often used for quick queries or transactions, while larger units are often used for more sustained work and more extensive analysis. Larger units, such as desktop screens, have room for more visible controls with explanation, and are often used with several apps in view at once, or multiple views of one app's data.
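
One simple way to serve different layouts from one code base is to bucket the screen width and let the rest of the app key off that. This is just a sketch; the thresholds and class names are illustrative assumptions, and CSS-based techniques can cover some of the same ground:

    // Pick a layout bucket from the screen width.
    // Thresholds and class names are illustrative assumptions.
    function layoutClass() {
      var w = window.innerWidth || document.documentElement.clientWidth;
      if (w < 480) return "handheld"; // quick queries, few large controls
      if (w < 1024) return "tablet";  // touch-sized controls, richer views
      if (w < 1600) return "desktop"; // multiple panes, visible controls
      return "wall";                  // big type, minimal on-screen chrome
    }

    document.documentElement.className = layoutClass();
    window.onresize = function () {
      document.documentElement.className = layoutClass();
    };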

Wall units, such as in meetings, may require more use of a preset, but controllable, sequence of content, and visible extraneous controls can be distracting and confusing to the viewers. Annotation and highlighting may be needed. Navigation in response to needs that come up during a meeting may need to be clear and easy for someone to do with a simple remote.

Keyboards, the Mouse, Pens, and Touch: Some History
The interaction method has recently become a major challenge. To help understand that, let's look at some personal computer history.

The early PCs (including the Apple II and IBM PC) mainly had keyboards for input. (I tried using the game paddles as a mouse-replacement for the first VisiCalc prototype, but quickly found the arrow keys better for quick, reliable operation.) Those keyboards, basically identical to today's keyboards, had character keys for entering data and commands, and some controller keys for selection and quick commands (like Return, the arrows, and Backspace, as well as modifiers like Ctrl). Applications were designed for that mode of input and control, such as Lotus 1-2-3 with its typing into cells and its moving-highlight menu/Function-key control system. When the mouse started to appear on systems, along with GUI-centric operating systems, the old applications (at least on Windows) continued to be usable, and, even for many commands that might normally be issued using a mouse, there were common and useful keyboard equivalents. Laptops, the main alternative to desktop PCs, provided the same interaction methods and could run applications unmodified. The web browser, developed originally on GUI-based computers (with character-based browsers quickly being ignored), fit well with this method. This experience with constant backwards compatibility (and the financial success that came along with it) probably led Microsoft to feel that you could "write once and then run anywhere" with applications despite different configurations.

Apple, thinking differently, created a completely different system for the mouse-centric Mac after their successful keyboard-centric Apple II. It was not upwards compatible (the initial Mac didn't even have arrow keys on the keyboard). Applications had to be created pretty much from scratch for the Mac. Early applications clustered in areas that played to the special strengths of the system (the display and sounds coupled with the mouse, and the full-graphics printers), such as desktop publishing and graphics, and later a graphics-output-centric spreadsheet (Excel). At great expense (and later reward), Microsoft made special versions of their apps for the Mac (tuned to the Mac). You didn't hear much (as I recall) about apps that were the same on the PC (before Windows) or the Apple II as on the Mac. The Mac was also pretty successful, reinforcing Apple's view of how to do things.

The early pen-centric personal computers (of the 1990's) came in two flavors: those with an operating system and apps designed specifically for pen input (such as Go's PenPoint and Palm OS), and those almost completely used with pen APIs bolted onto a traditional mouse-based system (Microsoft's Windows for Pen Computing and manufacturer-specific equivalents). The pen hardware for most Windows-based systems could handle "hover" and show cursors just as when using a mouse, so it was a direct superset of mouse hardware from the software's viewpoint.

The Palm system was designed around the particular configuration of the hardware. It was designed for generally disconnected operation with periodic synchronization with the "data repository" on a desktop. It had UI controls that worked with the financially viable pen of its day (which had no hover but could tap relatively precisely) and that were tuned to the small handheld screen of the device. It was very speedy, turning pages and navigating faster than most desktop machines with similar applications, which was a key aspect of its use as a handheld "digital assistant" for quick access to calendar and contact information. The custom applications that independent software developers produced for it were an important part of its appeal (according to Palm head Donna Dubinsky). The desktop companion applications (from Palm and others) were tuned to that platform and were not clones of the handheld app.

Other than the Palm systems, which won widespread acceptance, selling millions of units a year for many years, these systems either failed completely for various reasons or became niche products. Microsoft later created its own system somewhat tailored to handheld pen use (and with custom apps). It is apparently being phased out right now in favor of another, different attempt.

The Microsoft Tablet PC systems of the last several years followed the same Microsoft path of upwards compatibility, bolting pens onto mouse/keyboard-based systems. Other than a few minor "pen enhancements," little software from Microsoft or the manufacturers really took special advantage of the pen. (Microsoft OneNote is probably the one exception for some people. It has a history going back to some PenPoint-based, pen-centric origins, as I understand it.) If you take out some specific vertical application niches (an inadvertent pun, since they often involved operation while standing), the Tablet PCs have not been viewed as especially successful and have not become something that most SAAS developers need to take into account.

We now have some very popular systems based on hardware that can detect one or more fingers touching the screen. The most popular so far include the Apple iPhone, iPod touch, and iPad, and the Android phones. (In addition to a finger, there are various forms of stylus that appear to the hardware as a pseudo-finger, but to the software they produce finger touches, and direct pointing is still somewhat coarse.) More and more devices are incorporating this type of hardware. The popular term for this is "multi-touch." Some newer hardware reportedly can also use a special pen with hover capability in addition to this type of touch, but that is currently not common, and the need for a stylus can blunt some of the ease of use of these devices, except in special cases, such as note taking and drawing, where people expect a stylus, which can have a much better feel than writing with your finger.

The Interaction Method Challenge
The challenge with the interaction method is that it's very hard to target one method that works for all systems. Apps originally targeted at keyboard/mouse systems have relied upon specific aspects of those systems, even in browsers (thanks to Javascript). They make heavy use of techniques that don't work well with multi-touch-based systems.

For example, many apps make use of multiple click-target areas that are only several pixels or so in size, too small to differentiate reliably with big fingers. They rely upon hover capability, not present in today's multi-touch systems, to provide "tool tips" that explain operation or expose extra information. They have menus exposed by dragging that would be obscured by fingertips, and more. (Try using the regular Wikipedia Edit page unzoomed on an iPhone or iPad. It leaves a lot to be desired. The "magical" feel is more like a curse.) Many apps also use small scrolling areas on the screen, such as for editing, options, or data display. The standard web browsers on multi-touch systems often handle these in a different, less discoverable way than regular browsers. (Instead of showing scrollbars, or reacting to a mouse wheel, they may or may not react to two-fingered dragging.) Tapping on some controls, like text input areas, brings up an on-screen touch keyboard or chooser that may obscure much of the context.
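
One common workaround, sketched below, is to detect touch support and show a control's explanation on the first tap instead of relying on hover; a second tap dismisses it and lets the normal action through. The showTip() and hideTip() routines are hypothetical placeholders:

    // Show "tool tips" on tap when there is no hover to rely upon.
    // showTip() and hideTip() are hypothetical placeholders.
    var hasTouch = ("ontouchstart" in window);

    function attachTip(el, text) {
      if (hasTouch) {
        var shown = false;
        el.addEventListener("touchend", function (e) {
          if (!shown) {
            e.preventDefault(); // first tap explains instead of acting
            showTip(el, text);
            shown = true;
          } else {
            hideTip(el); // second tap proceeds normally
            shown = false;
          }
        }, false);
      } else {
        el.onmouseover = function () { showTip(el, text); };
        el.onmouseout = function () { hideTip(el); };
      }
    }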

On the other hand, apps designed for a multi-touch system would leave a lot to be desired on a mouse/keyboard system. How do you zoom a map with one mouse and none of the "+/-" buttons and slider that the browser version has? How do you take drawn input with a mouse without lots of extra tools? The extra space needed for touch-sized controls would look childish and wasteful.

Why the iPad Makes This All Harder
The success of the Apple iPad is bringing this whole issue to a head. Initial users of devices like the Apple iPhone were happy just to have any access at all to their browser-based applications. Later, very simple, basic-HTML-like "mobile" interfaces worked to access common systems like Google, airline flight status, many news sites, and more. These were similar to the interfaces of early handheld browsing on the very early RIM devices, WAP (Wireless Application Protocol) phones, and Palm Treo devices, and they didn't take much work for the developer compared to a very rich UI. They were really good for quick transactions or queries on the run, and they have been developed over the last several years.

The iPad user is different. The iPad normally presents a very rich visual and tactile experience. You feel very much in control of it. (See my "Is the Apple iPad really "magical"?" essay.) Apps designed for it have smooth responsiveness, with many useful, often fluid, controls. Look at popular apps such as the built-in Maps application, Star Walk, many games, and many reader and note taking apps. An iPad user expects a first-class experience.

This normally wouldn't be a problem at this point. After all, there are only a few million iPads. They are dwarfed by the number of normal smartphones, laptops, and desktops. Maybe 1 out of 100 people in a company would have one and use it for work.

However, there is something special about the iPad. There is a high likelihood that that one person in a hundred with an iPad is the CEO or another senior decision maker. The iPad is a very viral device: once you've seen one and used it, you often want one. In the case of senior executives, they see one person in a board meeting with it who sings its praises and shows it off, and then it's the next thing they must buy ($499 isn't much of a deterrent for such people) or have IT procure for them. Worse yet for the developer, from what I've seen, once they start using it, they really like it and start depending upon it. It is a perfect combination of size, feel, function, and status for such people. (It's also great for many others, but for them the economics, and the fit with job requirements, aren't there in the same way.)

This is part of the problem: while the popularity of the iPad (and perhaps other upcoming tablets) means there will probably be a lot of them within the next few years, the people you may need to serve already have them today, or will in the next 6-12 months -- a very quick entrance. Worse yet, those people may have the final signature authority on approving deployment (and purchase) of your service. Handling some other configuration differences in SAAS systems, such as screen size, may not be too hard using technologies such as CSS. The iPad, with its medium-sized screen and multi-touch interaction method, poses a less straightforwardly solved problem.

What to Do?
I hope that it's clear from all of the above that taking different common configurations into account is important.

As you design and implement systems, make sure that you look at the full spectrum of variations and figure out when you will need to support them. Inexpensive flat-panel displays are making large conference room screens much more common and their use during meetings will be frequent. More and more users will have smartphones and depend upon them. The iPad will become an important factor sooner than you realize.

Make sure that application designers and developers are familiar with, and have access to, common configurations. Even just testing prototypes and early implementations on such variations can help you steer design decisions in ways that will make it easier to provide appropriate support. In many cases, the underlying "engine" code that implements your server and user interfaces may not be that different from what you would build otherwise. However, you need to make sure that you can interface to that engine through appropriate means to handle the different connectivity requirements, screen layouts, and interaction methods.
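
For example, here is a minimal sketch of one way to keep the server "engine" behind a single JSON interface that each UI variant (handheld, tablet, desktop, wall) calls in its own way. The /api/query URL and the request/response shapes are illustrative assumptions, not a prescription:

    // One JSON interface to the server "engine"; each front end
    // renders the result its own way. The URL and shapes are hypothetical.
    function queryEngine(params, callback) {
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/api/query", true);
      xhr.setRequestHeader("Content-Type", "application/json");
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          callback(JSON.parse(xhr.responseText));
        }
      };
      xhr.send(JSON.stringify(params));
    }

The handheld UI might render the result as a short list while the desktop UI shows a full table, but both talk to the same engine.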

Some of the more popular multi-touch browsers, such as on the iPhone, iPad, and Android, give your Javascript code access to the same type of "touch" information given to native applications. A good programmer, with work, can make a browser-based app that is much closer to the feel of a native one. There are even some code libraries becoming available to help with this. However, you do have to design and test those user interfaces, and they may be somewhat different from one designed for a browser running on a 24" monitor with a mouse. There are different UI metaphors to learn. The iPad user is familiar with the iPad way of doing things -- the developers should be, too.
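
As one small example of what "closer to the feel of a native one" can mean, here is a sketch that uses those touch events to give a scrollable list a native-feeling "flick" with momentum. It assumes a "list" element styled to scroll; the friction constant is an arbitrary illustration:

    // Give a browser-based list a "flick" scroll with momentum.
    // Assumes a scrollable "list" element; constants are illustrative.
    var list = document.getElementById("list");
    var lastY = 0, velocity = 0, timer = null;

    list.addEventListener("touchstart", function (e) {
      lastY = e.touches[0].clientY;
      velocity = 0;
      if (timer) { clearInterval(timer); timer = null; }
    }, false);

    list.addEventListener("touchmove", function (e) {
      e.preventDefault(); // we do our own scrolling
      var y = e.touches[0].clientY;
      velocity = lastY - y;
      list.scrollTop += velocity;
      lastY = y;
    }, false);

    list.addEventListener("touchend", function () {
      timer = setInterval(function () { // let the flick coast, with friction
        list.scrollTop += velocity;
        velocity *= 0.95;
        if (Math.abs(velocity) < 0.5) { clearInterval(timer); timer = null; }
      }, 16);
    }, false);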

This is going to have a cost. You may have to support two or more variants of various parts of your system if you don't have control over the devices your users will choose to access your service. You are not alone. Google CEO Eric Schmidt explained recently, in response to a question at the Web 2.0 Summit, that their Chrome operating system is for keyboard-based systems, like desktops, and their Android operating system is for touch-based ones, such as phones and tablets. They are going the "customize for the variation" route, or at least keeping it as an option.

Apple seems to be moving in an unclear direction, as is Microsoft. Apple's laptops are taking more and more advantage of touch gestures on their large touchpad, and Apple is even adding a still larger touchpad as an option for their desktop systems with the new "Magic Trackpad." They've changed the name of the iPhone operating system to "iOS" and are probably going to integrate it more and more with their other systems.

Microsoft has had pen-related software in their desktop operating systems for some time, but has not required any of the normal hardware to have a pen, and has not done much with multi-touch in that regard either. As they experiment with products in the gesture field, with their "Surface" system and the Kinect game controller, maybe we'll see more integration with their normal operating system, and dependence upon it by their own ubiquitous software offerings, like Office, Internet Explorer, and the Windows controls themselves. Windows 7 has some of this capability.

These moves by Apple and Microsoft apparently lie further out than 2011 with respect to how they will impact SAAS developers with "must have" features. The smartphone, iPad, and flat-panel challenges with respect to connectivity, display, and interaction method, though, will be here in 2011.

Other Issues
I haven't covered many other issues that may be related and could be addressed at the same time, such as Accessibility for people with differing abilities (which affects display and interaction methods), and the more and more frequent need to deal with near-real-time (Twitter-like) or real-time collaboration (which affects connectivity as well).

-Dan Bricklin, 23 November 2010
