In 1996 some academics tried to re-imagine human-computer interaction, and they used the then-current Mac OS as a starting point (FYI: it was version 7.5).
How Their Predictions Panned Out:
First off, they got some relatively obvious things right: they correctly saw that computers of the future would be hyper-connected instead of isolated (though in 1996 this wasn't exactly a bold prediction), and that their hardware would be orders of magnitude more powerful.
Somewhat more insightfully, they predicted that the purpose of computers would shift from mainly solo productivity-type work to games, multimedia, creative work, social uses, etc. They also predicted the rise of alternative forms of I/O, and I think the I/O of Apple's iOS, for instance, is in many ways consistent with what they envisioned.
A few of their guesses have not yet materialized, though. They emphasized the use of "language" over point-and-click icons. Unfortunately, NLP and AI have proved harder than they anticipated. If we again look at iOS as an example, touch icons, it seems, are still much more useful than natural language.
The article provides a fascinating time capsule of HCI thinking at the dawn of the Internet.
Incidentally, just weeks after this article was published Steve Jobs returned to Apple and sparked the new generation of interfaces that they could only imagine.
"Incidentally, just weeks after this article was published Steve Jobs returned to Apple and sparked the new generation of interfaces that they could only imagine."
In what way was the newer generation of interfaces different from what they'd already described (as the 'Mac interface')?
--
I think we're still using very WIMP-oriented interfaces; most interfaces haven't changed that much. Even iOS and Android use central themes from the old desktop metaphor.
Apple's iOS doesn't use windows: all apps are full-screen. iOS doesn't have a menu bar, only an info bar (time, power, cell reception, etc.). It doesn't even use a pointing device, and there's no onscreen pointer -- you can use your finger to 'click' (touch) and drag, but you can't point.
Window, Icon, Menu, Pointing device. Only the icons survived.
There's no desktop metaphor either in iOS: no recycle bin, no central file cabinet to browse through, no desktop to arbitrarily place things on, no way to make file folders.
* (W) You are presented with modal dialogs (a form of window) at various points in many iOS interfaces - ditto for Android.
* (I) The main way of representing an application is via an icon.
* (M) There are definitely menus in Android. I'd argue that there are even menus in iOS - lists of options that allow you to select which operation you'd like to perform (e.g. the mail settings menu).
* (P) You can point - you use your finger. You can't hover.
It's not really that ground-breaking - it's a relatively small iteration.
When using Mac OS X, no matter what you do, there's always a menu bar at the top of the screen. Until recently, all Mac OS X apps had window chrome, even when the application window was maximized. iOS doesn't have any of that -- there are specific situations in which you are presented with a modal dialog or a menu, but they aren't GUI elements that are always onscreen.
Then there's the pointing. In iOS, you can't just point -- to touch is to point+click.
I don't understand why these distinctions make any difference. Fundamentally, I think the core elements are very similar - the differences are necessary due to the constraints of mobile devices. Perhaps the implementation differs - but I don't see how this makes my point less true.
Are you stating that we actually are making use of interfaces which are groundbreaking (in a similar way to those presented in the article)?
We often don't notice many "ground breaking" shifts while we're in the midst of them. The Internet and hyper-linked docs took their time to be called "ground breaking", and only with hindsight.
Regarding the iPad/touch, I do have an indicator, though - my son got used to slide-unlocking my iPod touch when he was 2 years old, to play drums on it. Ever since he did that, he'd try to touch my laptop screen and expect things to happen. He'd totally ignore the keyboard! That's a ground shift in interfaces for me :)
I doubt we'll see another leap as huge as from CLI to GUI. But as incremental improvements in user interfaces go, I find iOS (and multitouch control in general) pretty significant.
I doubt we'll see another leap as huge as from CLI to GUI
Unless you have a particular reason to believe that the GUI is the end of that evolution, there's room for massive change.
Circumstances suggest that WIMP is not the last word on user interfaces. Some factors to think about:
* The GUI has grown up during an era of groups fighting to establish platforms, with constant newcomers to the scene, which put a heavy emphasis on creating systems that are as newcomer-friendly as possible. Those circumstances will change.
* As the scale of data increases, user interface needs change. Consider the progression from a flat file structure, to directories on a single machine, to searching through large datasets, and the different tooling we use at each step. I deal with large numbers of small files, and have built custom tools to keep myself from drowning in scale (a toy sketch of the kind of tool I mean follows this list). GUI tools and CLI directory systems are inadequate.
* A lot of GUI design has been driven by the need to make glossy things that sell, rather than things that are useful. Look at the changes to Windows since XP. As free software eats away at territory that is currently commercially viable, this will change. We won't have companies pushing the gaudy bells and whistles that have driven platform shifts over the last fifteen years. Stability increases in value.
* Plenty of power users have never been happy with the WIMP approach, and consider it a leap backwards from the CLI: 1980s word-processing users, Unix users, people who knew menu-driven mainframe systems before being "upgraded" to less-useful GUI replacements. There's strong interface loyalty to the Bloomberg interface from people who have also dealt with GUI and CLI tools, despite significant problems in that system.
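To make that "custom tools" bullet concrete: this is not the tool I actually built, just a minimal Python sketch of the kind of thing I mean - index a tree of small files once, then query by content instead of browsing directories. The names build_index and query are made up for the example.

    #!/usr/bin/env python3
    """Toy content index over a tree of many small files: query by text
    instead of browsing directories. Purely illustrative."""
    import os
    import sys

    def build_index(root):
        """Map each file path to its text content (fine while files are small)."""
        index = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as fh:
                        index[path] = fh.read()
                except OSError:
                    continue  # unreadable file: skip it
        return index

    def query(index, term):
        """Return paths mentioning the term, most mentions first."""
        hits = [(text.count(term), path) for path, text in index.items() if term in text]
        return [path for _count, path in sorted(hits, reverse=True)]

    if __name__ == "__main__":
        root, term = sys.argv[1], sys.argv[2]
        for path in query(build_index(root), term)[:20]:
            print(path)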
It's unusual to know about groundbreaking things before they happen. I expect our grandchildren will look back at screenshots of today's candy interfaces and ask how we took it seriously.
Your comment made me think of the striking resemblance between the modern mobile OS (iOS, Android) and the icon boards used to try to develop communication with other hominids.
I'm definitely not saying this to make fun; many people have commented that this new UI paradigm is so easy that even their toddlers can learn to use it at some level.
While not as 'language'-based as we would like, I do think tremendous progress has been made in language-as-an-interface with the rise of Google. It doesn't talk back to me, but I can type a lot of meaningful statements or questions into Google and get a 'meaningful' response.
Text/language search in general is much more useful in the GUI now than it ever was. For instance, I use Spotlight for everything - I hardly ever dive into folders looking for something.
Ever since Windows renamed some UI (Programs and Features, etc.) and made some other things harder to reach (System, etc.), I largely use the search interface to get to what I want in the Control Panel and the associated settings tools.
TL;DR - the user interface paradigms that we currently use are still pretty similar to the interfaces that were developed at Xerox PARC (and famously adopted by Apple). Not much has changed.
It doesn't have to be this way.
What are the principles of current interfaces / what could the future principles be (if we're not stuck with the constraints set by the current batch of WIMP interfaces)?
- Users are "The Post-Nintendo Generation" (grown up with computers)
- Work, play, groupware, embedded, and ubiquitous
- Humongous computer (multi-gigabyte RAM, Cray-on-a-chip RISC processors)
- Rich communication (computer can see you, knows where you are, large high-res screen, new I/O devices)
- Connected system subjected to constant change
- Language (instead of icons)
- Strong object-orientation (large number of small objects with rich attribute sets)
- Personal information retrieval as unifying principle, with atomic information units as the basic interaction object
- Information comes to you (rather than you browsing your hard drive)
- You won't always have to work that hard
Nielsen wrote this in 1996, but it seems to predict the emerging trends of pervasive social networking, cloud storage, Google (omniscient agents with a natural language interface), location APIs, embedded cameras, gesture pads, etc. Of course I could be looking for evidence selectively, but I will leave that judgment to you.
The article is quite vague about what an anti-Mac interface would be, but it does make some good points about the limits of the mouse-window-and-menu GUI.
Especially: language-like interfaces are better than thing-like interfaces. Interacting through language is far preferable to physically manipulating things, especially since most of what an average user wants to input is discrete, meaningful bits of information (visual and auditory artists are the exception).
Both Google and Quicksilver are examples of more "language-like" interfaces, but there's not too much flesh on the bones of the article's "anti-Mac user interface".
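For what it's worth, here is a minimal Python sketch of what "language-like" means in the Quicksilver/launcher sense: a typed phrase split into a verb and an object and dispatched to an action. The verbs and the ACTIONS table are invented for illustration, not taken from any real launcher.

    """Toy 'describe and command' launcher in the Quicksilver spirit."""

    ACTIONS = {
        "open": lambda target: print(f"opening {target} ..."),
        "search": lambda target: print(f"searching the web for {target!r} ..."),
        "email": lambda target: print(f"composing mail to {target} ..."),
    }

    def run(command):
        """Split a short phrase into verb + object and dispatch it."""
        verb, _, target = command.strip().partition(" ")
        action = ACTIONS.get(verb.lower())
        if action is None:
            print(f"don't know how to {verb!r}")
        else:
            action(target or "(nothing)")

    run("open projects/report.txt")
    run("search anti-mac interface")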
This article shares with many other academic exercises a fine analytical examination of current trends, but provides a woefully unhelpful basis for innovation. It sounds merely negative to say that, but I think the problem isn't with the scholarship of these exercises, but with the purposes they're put to. Plainly put, academic constraints impose, in all but the most fearless, a kind of self-censorship: a desire to present a tidy, well-researched argument that only with extreme caution ventures out on a limb.
Almost the first thing I do when reading articles like this is scroll to the end and see if any actual interface designs are offered. There are none. No criticism is intended; mostly a statement of the obvious!
The frustrating thing is that interface design FEELS like it ought to be the most natural thing in the world, yet even after the first Macintosh in 1984 we are STILL steaming down the track of the window paradigm. Over 25 years it's of course become more 'natural' - which is to simply say more mentally efficient - but thinking outside the paradigm is kind of like trying to imagine a third arm; or at least wiggle an eyebrow you've never wiggled ...
There are two distinct ways to look at it. The first: progress for the sake of progress is not the solution. However, when you think about it, we have been walking on our legs since time immemorial. Is that wrong, or does it call for rapid evolutionary prototyping of humans? Maybe, maybe not.
I remember recently using one such revolutionary interface, BumpTop desktop - where did that lead us? I was honestly irritated with it. I am of course talking about the visual interface, not things like remembering the last actions performed and understanding a series of events. The nearest thing that comes to mind is "Clippy" from MS Office. Remember how, if you kept pasting something onto every slide, it would tell you about the Master Slide view? That is the direction interfaces need to go.
Also, has anyone used Soulver? Rather than a literal calculator metaphor in the UI, it goes out of the box to match the way people actually want to work on a computer: typing at the keyboard rather than clicking 1 / + / 2 / = (a toy sketch of the idea is below).
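This is not how Soulver is actually implemented - just a guess at the idea, sketched in Python: each line of a "sheet" is plain-text arithmetic typed at the keyboard, evaluated as you go, with ans standing for the previous line's result.

    """Toy Soulver-style sheet: lines of typed arithmetic instead of a
    button calculator. 'ans' refers to the previous line's result."""
    import ast
    import operator

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def evaluate(expr, ans):
        """Safely evaluate +, -, *, / over numbers and the name 'ans'."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.Name) and node.id == "ans":
                return ans
            raise ValueError(f"unsupported expression: {expr!r}")
        return walk(ast.parse(expr, mode="eval"))

    ans = 0.0
    for line in ["120 + 80", "ans * 1.2", "ans / 4"]:
        ans = evaluate(line, ans)
        print(f"{line:<12} = {ans}")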
The yearning for the richness of language in computer interfaces is a meme that keeps coming back. I wonder whether there is a way to satisfy it. Here's an idea (around Mac OS X):
Push the Dock over to the side as a column and set aside a fixed text box at the bottom of the screen for both text input and output. When you do things using the GUI, the text box continuously updates itself with a textual description of what you're doing, and it also works the other way - i.e. if you'd typed that textual description in there, the same actions would be accomplished. This may set up a dialog between the computer and the user, gradually building a vocabulary for linguistic interaction with your computer. Could this be a way to leverage the explorability of a GUI to teach a language with which you can, over time, become a power user? (A rough sketch of the mechanism is below.)
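A rough Python sketch of the mechanism I have in mind - the class name CommandMirror and the phrases are invented, and a real version would need hooks into the window system - but it shows the two directions: clicking echoes a phrase, and typing that phrase performs the same action.

    """Toy sketch of a textual mirror for GUI actions: every action has a
    phrase; clicking echoes the phrase, typing the phrase runs the action."""

    class CommandMirror:
        def __init__(self):
            self.commands = {}  # phrase -> callable

        def register(self, phrase, func):
            """Bind a phrase to the same function a GUI widget would call."""
            self.commands[phrase] = func

        def clicked(self, phrase):
            """Called by the GUI: run the action and echo its textual form."""
            print(f"> {phrase}")  # this is how the user learns the vocabulary
            self.commands[phrase]()

        def typed(self, phrase):
            """Called from the text box: same action, no mouse involved."""
            self.commands[phrase]()

    mirror = CommandMirror()
    mirror.register("empty the trash", lambda: print("(trash emptied)"))
    mirror.register("new folder on desktop", lambda: print("(folder created)"))

    mirror.clicked("empty the trash")      # user clicks; the phrase is echoed
    mirror.typed("new folder on desktop")  # later the user just types it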
Since the article was written file search has become much more a part of both Mac and Windows operations. And on my Linux laptop I use dmenu to launch most programs that aren't running in a terminal or important enough to have a keyboard shortcut.
In this article they have tried to Think Different for the sake of thinking different. Not bad. But they basically described Linux, Command-Line interfaces, and the Web. Their idea of "language" is basically scripting languages.
The fact that this is only beginning is the reason I want to go into human-computer interaction, and it is also the reason I am truly terrified that I will go and try to make this magical "anti-WIMP" only to find the current paradigm too deeply ingrained to make room for it. This is what I hope the wearable-computing niche will find symbiosis with.
Using language instead of icons is only useful for advanced users. I love gnome-do/Launchy/Quicksilver, but most people don't get them. It would be bad to think of them as a "replacement" for icons. Instead I think it's better to think of them as complementary components.
I think they have the "Feedback and Dialog" and "System Handles Details" in the wrong column. Those should be reversed. Macs love handling all the little "details" for me without letting me change them, for example the mouse acceleration curve.
The authors (in 1996) imagine what an interface would look like that is inspired by the opposite of all the guiding principles for the Mac GUI.
Mac => Anti-Mac
* Metaphors => Reality
* Direct Manipulation => Delegation
* See and Point => Describe and Command
* Consistency => Diversity
* WYSIWYG => Represent Meaning
* User Control => Shared Control
* Feedback and Dialog => System Handles Details
* Forgiveness => Model User Actions
* Aesthetic Integrity => Graphic Variety
* Modelessness => Richer Cues
The authors then talk about the weaknesses of each of the principles guiding Mac interfaces.
1) Metaphors - impose artificial restraints and obscure the true capabilities of computers.
2) Direct Manipulation - repetitive work is better handled by batch processing and simple scripting (a small illustrative script follows this list).
3) See and Point - language is more expressive.
4) Consistency - different things should be represented differently, forced consistency is oversimplification.
5) WYSIWYG - the authors interpret this as meaning "your document, as it appears on the screen, accurately reflects what it will look like when it is printed," and argue that interactivity is better.
6) User Control - sometimes automation is better, and when there are multiple actors (as in networked systems, like the internet), control must be compromised.
7) Feedback and Dialog - interruptions should only be made when they are valuable to the user, and over time as he/she gains proficiency they will matter less and less.
8) Forgiveness - forgiveness means there should always be an "undo" button and warning signs, but this can become a nuisance when the warnings are gratuitous.
9) Perceived Stability - the real world is not stable because there are forces beyond our control, and that's what makes life interesting. (This principle is curiously missing from the table summary in the article.)
10) Aesthetic Integrity - variety is more interesting and expressive than unity.
11) Modelessness - this is defined as not having "modes" which restrict the user's range of actions. The problem is that users can only cope with so much at once, modes help chunk things up.
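On point 2, a two-minute Python sketch makes the batch-processing argument concrete: renaming hundreds of files by dragging and clicking is tedious, while the script below is not. The directory name and prefix are made up for the example.

    """Illustration of the direct-manipulation critique: repetitive work is
    easier to script than to do by hand in a GUI."""
    import os

    def add_prefix(directory, prefix):
        """Prepend a prefix to every .txt file in a directory."""
        for name in os.listdir(directory):
            if name.endswith(".txt") and not name.startswith(prefix):
                os.rename(os.path.join(directory, name),
                          os.path.join(directory, prefix + name))

    # add_prefix("notes", "2011-")   # one line instead of hundreds of drags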
-------------- Next: The Anti-Mac Interface --------------------
This was a pretty time-consuming summary, and I'm kinda wanting to get back to work. Want me to write a summary for the second part of the article? Use the upvote as a demand signal. If this gets 20 upvotes I'll summarize the second part.
I think the 1990s were an interesting time for the Internet. Lots of crazy thinking about what it could become - large ideas, less fixation on profit and commerce. Lots of theory.
If you look at the success of Windows 95 in that era (and at what failed), it was very much about the money as much as the theory. What's interesting is that when the article was being written, Jobs was working on NeXT, which is still alive today in your Mac and iPhone.
I agree - commercialism was around, but not many companies really understood how to monetise the web in the early/mid nineties. For example, Bill Gates famously didn't think the Internet would catch on, and assumed that MSN would be most people's choice of walled-garden alternative.
The WWW was just another application for the Internet (along with Gopher / FTP / Usenet / IRC) - and therefore wasn't seen as the Internet.
Because the technology was relatively new, I think people were interested in taking risks - user expectations hadn't been set in stone, and innovation could be less about iteration and more about being bold / daring.