Hopefully, you read my recent column discussing surface computing (see www.CPATechAdvisor.com/go/1934).
For those who haven’t, I’ll attempt to quickly bring you up to speed.
Microsoft Surface is the first commercially available touch-screen computing
device in a tabletop form factor. Think of it as an iPhone in a bigger (actually,
much bigger) package. What is interesting about the release of this device
and the iPhone is that our paradigm is slowly changing … away from the
mouse. Both Microsoft Surface and the Apple iPhone rely on our sense of touch
not only to move and select items and applications, but also to modify those
applications. For example, a photograph or map could previously be selected
using traditional touch-screen devices, but if you wanted to zoom, flip,
enhance, merge, etc., you had to go back to the mouse/keyboard or stylus.
The new surface computing devices are beginning to change all that.
I mused in that recent article that perhaps we would see a surface device in
an accounting firm and use this touch/surface computing technology to open and
modify client documents.
Microsoft and Apple have both invested in this “more natural method”
for interacting with our computing devices. The enhancements in the pen and
handwriting technology in Windows Vista are evidence of the advances being made
in this form of input. Many students are now using Tablet and Ultra-mobile PCs
to capture notes electronically, which further saves the step of scanning or
OCR-ing handwritten notes. Speech recognition has seen major enhancements as
well, which, although not appropriate for, say, a college class, does provide yet
another natural way to interact with our computing devices. Physicians and lawyers
have traditionally been the biggest users of dictation, but that dictation required
the interim step of transcription. Speech-enabled computing devices will serve
to eliminate the transcription step.
Every year, the Wall Street Journal hosts an “All Things Digital”
conference. The conference debuted in 2003 and is affectionately referred to
as ‘D’ plus the number of the conference, so this year’s conference
is D6. Just last night, soon to be ex-Microsoft CSA Bill Gates and current Microsoft
CEO Steve Ballmer shared the stage and talked about the next version of Windows,
now referred to as Windows 7 (keep in mind that I’m writing this column
in May, so when I say just last night, you’ll realize I mean several months
ago). Obviously, development of this version is progressing. And as with previous
versions of Microsoft’s operating systems, they tend to build upon the
successes of the past and learn from the failures. The reason I mention all
of this background is that during the presentation and discussion of what Windows
7 might look like, there was an interesting demo offered by Julie Larson-Green.
It was interesting for a few reasons: First, Microsoft has been very careful
about talking about the next version of the operating system, and for good reason.
As Microsoft’s Chris Flores noted on the Vista blog, “We know that
when we talk about our plans for the next release of Windows, people take action.
As a result, we can significantly impact our partners and our customers if we
broadly share information that later changes.” Secondly, what was shared
in this demo felt very much like the Surface platform.
Larson-Green proceeded to pull up a brand-new application called “Touchable
Paint,” and using all 10 fingers, she began to draw freehand. Then, she
brought up a photo gallery. And again, using her fingers, she selected, zoomed,
flipped … well, you get the picture (no pun intended). Anyone vaguely
familiar with Apple’s iPhone or iPod Touch would recognize this functionality.
From there, she moved on to a mapping application that called up information
from Microsoft’s Virtual Earth and allowed her to pan/zoom to a location
on the map (in this case Carlsbad, California). “Search for Starbucks,”
she said. And since there must be thousands of Starbucks locations, sure enough,
multiple push-pins appeared on the map.
After the demo, Bill Gates commented that this new technology referred to
as Windows Multi-Touch is the beginning of an era of computing based on a new
hierarchy of input systems. That may be an understatement, and we most likely
won’t know for sure until we get a look at Windows 7, which isn’t
due out until the latter part of next year (2009).
As a writer and accounting practitioner, I’ve become very used to a
keyboard and mouse for input, but I don’t like to carry on a conversation
using instant messaging. I’d rather pick up the phone and have that conversation.
So I’m wondering if I’ll ever get to that point with PC input devices.
In other words, will I get to the point where I’d rather write on, talk
to, and listen to my PC than move a mouse and type on a keyboard?