Seeing people reason about this subject in the main article and the subsequent comments brings me back to my college "literary theory" courses. Instead of tossing up pithy declarative statements like "The best X is no X", or "The best Y is as little Y as possible", and then commenting on how these statements strike our intuition, we can bypass the literary methods in favor of scientific ones.
There's an entire body of research on visual design (a great introduction to which is Colin Ware's "Information Visualization") that gets right down to the nitty gritty details of human perception, like eye tracking speed, visual channel capacity, move and scan loops, visual working memory, the relationship between luminance channels, motion and shape as opposed to color channels, and on and on and on.
It turns out that "good" design principles can be quantified, communicated, taught, and learned, are based on our common biology, and that they needn't come at the expense of aesthetics or sound reasoning.
We have methods to test UIs to validate or invalidate them, but we do not have a way to create new UIs from testing. HCI research provides a good grounding for design architecture options, but it does not dictate an actual design for you.
This is a very important distinction that most test-only UX designers miss.
For example: if you test a skeuomorphic design versus a flat design, the testing will only tell you which of these interpretations of each design philosophy works best. However, a better, more inspired skeuomorphic design might well outperform flat if given a different interpretation.
tl;dr -- testing only shows you which implementation works better in a set, not the best implementation possible.
Many UX experts laughed when this was written in 2010. But it is now the default IMO.
Testing has to be done in the actual environment to provide really useful feedback, or the thing being tested has to be completely new (like a new way of interacting with your computer) in order to be tested outside its final environment.
Testing no-UI is even harder, since you have to build a "persona" before you can test the results.
Once you have an agreed goal, you can investigate techniques for reaching it. But some of the disagreements are over what the goal should be.
The "invisible interface" people's goal (in strong form) is to dispense with explicit interfaces as much as possible, seeing them as just friction between what people want and what the computer can do. One version of how to achieve that vision is AI-heavy, relying on people saying what they want in natural language and software like Siri figuring out how to give it to them.
The rough opposite is more of an interface-centric approach, where the goal is to actually foreground the interface as an explicit point of interaction, and have it present a logical and coherent set of controls that, rather than being invisible, makes clear "I am an interface and I do X/Y/Z".
Detailed information about things such as eye tracking speed seems to me like it'd be most useful in doing the actual implementation, once you have a rough idea of its purpose and use case. It's certainly possible to do scientific investigation of the more general question as well, but it requires more than micro-scale laboratory tests.
This article encapsulates the frustration I felt this weekend trying to help my parents find their way via their fancy new in-car GPS. I eventually realized their difficulty wasn't just the result of Ford's piss-poor UI,[1] but also a basic conceptual issue.
It seems silly to us tech-minded folk, but my parents truly did not understand what role they had in correcting for the GPS's mistakes.
It never occurred to them that it was possible to edit the route selection, or just go off-course and wait for the route to recalculate. As the poor map data[2] led them on roundabout routes veering across the state, their response to my pleading that they pre-select the best route (which they already knew about) was simply "let's see where she's taking us."
Through constant reinforcement via the media, the abstract 'cloud' paradigm has been deceitfully expanded to represent any new technology.[3] The disembodied voice of the GPS,[4] to them, is an authority rather than simply a tool.
UX problems are going to become much more complicated through all of this. We've got to figure out whether our goal is to sell people products or help people to enhance their lives with technology. To do the latter, software needs to be helpfully aware of the possibility of failure. That's what honest, understandable design means to me.
[1]: If your designers are too terrible to make a visually distinctive on/off button state, *you need to use sliders*.
[2]: (and/or impossible-to-determine avoid highways/tolls setting)
[3]: "How does it get maps?" they asked me, a process which, were I not present, they would have simply taken for granted along with the million other points of abstraction necessary for a piece of tech like in-car GPS.
[4]: (which they can only refer to as "her" or "she" rather than "it")
This reminds me of how my grandmother used to leave voicemails in a robotic, staccato voice so that the answering machine could understand her if the machine's default robot voice picked up the phone.
To her, the answering machine's voice is an instruction of what type of input the machine needs.
To us, of course, the answering machine's voice is just a product of the sad state of answering machine voice technology.
The point about no UI isn't that you won't have a display; in fact, you will have many types of displays.
The point is that the actual interaction becomes invisible, i.e. not as much manual input; rather, the input is based on your everyday actions.
You take the train as you usually do. The system knows you normally get to work at 9, but it also knows the train is delayed, so you get a message that it's delayed (a rough sketch of this follows below).
That's the vision of the future; that's the seamless and non-obtrusive part.
Not that it won't be visually displayed.
The Nest takes away much of the manual work you had to do (adjusting based on when you are home, etc.); it doesn't remove the feedback mechanism.
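To make the train example concrete, here's a minimal sketch in Python (the names Routine and maybe_notify, and all of the data, are made up for illustration): the "input" is a routine the system has inferred plus live delay data, and the only visible output is a message when something deviates from normal.

```python
# Hypothetical sketch of the train-delay scenario: no manual input,
# just an inferred routine combined with live data.
from dataclasses import dataclass
from datetime import datetime, time, timedelta


@dataclass
class Routine:
    # Inferred from past behaviour, not entered by the user.
    usual_train: str
    usual_arrival: time


def maybe_notify(routine: Routine, delays_min: dict) -> str:
    """Return a message only if the usual train is delayed; otherwise stay silent."""
    delay = delays_min.get(routine.usual_train, 0)
    if delay <= 0:
        return ""  # nothing worth saying; the interface stays invisible
    eta = (datetime.combine(datetime.today(), routine.usual_arrival)
           + timedelta(minutes=delay)).time()
    return (f"Your {routine.usual_train} train is about {delay} min late; "
            f"expect to arrive around {eta:%H:%M}.")


if __name__ == "__main__":
    routine = Routine(usual_train="08:15 northbound", usual_arrival=time(9, 0))
    print(maybe_notify(routine, {"08:15 northbound": 12}))
```

The feedback mechanism is still there (the message on whatever display is handy); only the asking has gone away.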
About a dozen years ago I worked with a developer who built "smart house" applications. He was describing his home and how the house knew when he walked into a room and activated the lights. When he got into bed, the lights in the room automatically went off. I asked him: "so, what do you do if you want to read in bed?" His reply was a quizzical look, and "?? The bed is for sleeping." He didn't understand how broken this mentality was.
The point is, if systems that anticipate your needs do not have an adequate interface for working around them - and by adequate, I mean easily discoverable and intuitive - then they will be rejected by the majority of consumers. And with good reason. Technology should serve us, not require us to adapt to it.
Well that depends on how meta you are prepared to take it :)
The UI is in itself forcing you to adapt to it by asking you to input data into it.
Your refrigerator does not require you to turn on the light when you open it. Instead it turns on when you open the door.
I see this not as a revolution but as a slow (but exponentially accelerating) evolution. As issues get ironed out, you can remove more and more manual labour from the system.
You fail to address the point of how much control people are comfortable with giving away. Some lights are fine if they're fully automated (the fridge). But for others, people want to have some control (e.g., reading in the bedroom, looking to see if there's a mosquito, etc.). So no, there appears to be a limit to how much UI you can take away.
I didn't disagree with your comment but pointed out that it doesn't address this "no UI" concept and whether people actually want that. While it is likely that in the future more things will be automated, it's not going to converge to people not controlling anything. I do believe systems will become better over time, just not necessarily through not having any interface.
Good example of a misidentified problem. That engineer identified the problem as: the lights need to be on when somebody is in a room, off otherwise, and off when the person lies down.
But the real problem is that turning the lights on and off normally takes too much effort (for a #firstworldproblem, at least). You have to walk over and flip a switch, interrupting whatever you were doing.
A better solution would be to design a system that lets people control the lights with minimal effort - a gesture, or a clap, or a voice command. So you can turn on the lights only when it's dark and not during the day, or when you want to read in bed, etc.
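As a rough illustration (the class Lights, the handle method, and the command strings are all mine, not from any real product), the difference is that the system reacts only to a low-effort command from the person, never to presence alone:

```python
# Hypothetical sketch: minimal-effort control instead of full automation.
# The person still decides when the lights change (reading in bed, hunting
# a mosquito, ...); the system only removes the walk to the switch.
class Lights:
    def __init__(self) -> None:
        self.on = False

    def handle(self, command: str) -> bool:
        """React to a clap / gesture / voice command; never act on presence alone."""
        if command in ("clap", "lights on", "wave up"):
            self.on = True
        elif command in ("double clap", "lights off", "wave down"):
            self.on = False
        # Unknown commands change nothing - better than guessing.
        return self.on


lights = Lights()
print(lights.handle("clap"))        # True: reading in bed, lights stay on
print(lights.handle("lights off"))  # False: done reading
```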
I think "inferred preferences" sums it up pretty well. Automatically adjusted clocks were probably one of the first examples of this, get off a plane in a different country and your phone/watch is automatically adjusted to the current time zone.
The main problem with that which needs to be addressed is when you don't know whether it has already switched to the new time zone, or if it's still about to switch. As a result, you don't know the time.
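One way to remove that ambiguity, sketched below (the function name show_local_time is mine; it only uses the standard-library zoneinfo module, Python 3.9+), is to display the assumed time zone right next to the time, so you can tell at a glance whether the switch has already happened:

```python
# Sketch: make the assumed time zone explicit so "has it switched yet?"
# is never a guessing game.
from datetime import datetime
from zoneinfo import ZoneInfo


def show_local_time(zone_name: str) -> str:
    now = datetime.now(ZoneInfo(zone_name))
    # The zone abbreviation (CET, JST, ...) is the cue the display is missing.
    return now.strftime("%H:%M (%Z)")


print(show_local_time("Europe/Paris"))  # e.g. "14:03 (CET)"
print(show_local_time("Asia/Tokyo"))    # e.g. "22:03 (JST)"
```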
I never took invisibility literally. To me, invisibility in design is when the user doesn't have to figure things out; everything is fluid, helped along by an evident UI. That's when the design becomes invisible, because it is not part of the problem. Do you ever think about your doorbell's design? You don't, because the design is not the problem, so the aesthetics of your doorbell, its functioning, and its other aspects become invisible. You just focus on actions.
“Lack of understanding leads to uncertainty and folk-theories that hinder our ability to use technical systems, and clouds the critique of technological developments.”
The study linked to in the sentence points out that folk theories, while “not completely accurate accounts...can provide people with explanatory power, can guide behavior surrounding use of the technology, and can allow people to make predictions about how a technology will function under certain conditions.” [1]
These heuristics, though not perfectly accurate, can work for most of the user's use cases. Intervention is called for only when the practical implications of the folk theory are misleading.
Folk theories in other contexts provide the basis for crap like homeopathy though. It's hard to see how a folk theory could be as liberating as the truth. There's a tradeoff here between "investment in" and "utility derived from" that is worth raising an awareness of, but I can't disagree with the author that keeping people ignorant--even if it is convenient for both parties--is a disenfranchising force.
You are misunderstanding how "folk theory" is used in this context. Homeopathy (which I'll take as a stand-in for bad science) has nothing to do with folk theory. Think instead of "folk theory" as the informal models we use to interact with the world and to communicate with others about the world.
For example, a former employer of mine had problems reported with an internal search facility used by a branch of the customer service team. A coworker and I shadowed members of this team (a la a contextual interview) to learn about their work and workflow. We were very interested to find that each user of this system had formed their own folk theory about the idiosyncrasies of the (very difficult to use) existing software, theories which let them get their jobs done. These theories had little to do with the inner workings of the software machine and everything to do with the tasks to be accomplished.
As such, creation of folk theory happens automatically in virtually any environment where humans use tools and processes. Even when this theory is backed by "science" or "inside knowledge", what emerges is still the working practice, mental models, and language forms of a folk theory. Back to the article's point then: the idea of invisible interface is problematic because it denies the formation and elevation of culture (folk theory) that will naturally form around an interface.
You're focusing on the first consequence in that sentence ("that hinder our ability to use technical systems") and indeed the paper suggests what you describe. However, the second part ("and clouds the critique of technological developments") seems well supported by the paper. In their 'discussion' section, they even state:
"This need for intelligibility of the inner workings of the technology, rather than simply keeping it invisible, echoes Chalmers et al.’s work on “seamful design” [4]"
which is exactly what the blog post is advocating.
The second is also false if I understand you correctly. In some cases a busy, non-minimalistic design can be appropriate, e.g. casinos, games, clothes, cars, etc. As little design as possible implies no decorations and decorations certainly have their place in design.
What if you're being paid as a designer to be flashy/sexy or for the product to be a display of conspicuous consumption?
Trying to define good design is like trying to define good literature. Are you trying to convince the user, force the user, agitate the user, guilt trip the user, motivate the user, educate/train the user, de-educate/un-train the user, empower the user, de-empower the user, force the user to conform, force the user to rebel... it's all going to be different, both for lit and for design work.
The idea that design should be invisible isn't a new phenomenon. As far as I know, one of the earliest mentions of the idea was by Beatrice Warde in her essay The Crystal Goblet (http://en.wikipedia.org/wiki/The_Crystal_Goblet). No UI just made the mistake of taking a poor, overloaded name for their movement. Invisible Design would have been a better one that doesn't already impose a solution.
The argument for interface culture seems really misguided, though. Since when should a culture around a poor design implementation require that we don't try to improve its design? I'd argue that we have already said "the best TV is no TV". Flat-screen TVs are exactly this: we're moving away from the huge clunky things we used to have. I'm sure some were disappointed when their TVs lost their knobs and dials - they were part of the culture - but now no one thinks twice about it.
I don't think reducing a UI inherently means making the mental model harder to understand.
This article was a response to the "Best UI is no UI" debate that was sparked by one of Cooper's articles, I think [1]
Personally I think the "Best Interface = No Interface" mantra is too black and white and totally ignores all the shades of grey in between. If you come up with these principles, the language needs to be much clearer.
I think what people meant to say was "Sometimes the best UI is no GUI".
One minor problem with Berkun's otherwise good essay is around paragraph five: his description of no-UI as a desire for simplicity, perpetuating the idea that complex interactions with machines are undesirable and a symptom of insanity. He then declares that everyone's goal is simplicity; it's just that no-UI is doing simple "wrong". I could not disagree more. They are not linked or in any way related; complexity and desirability are orthogonal. I greatly enjoy highly complex, desirable interactions. Why must we only be capable of simple, boring actions? A highly complex action for little reward, like his ridiculous example of a door-opening UI, shows the orthogonality of the two concepts, not that granting the user the ability to do complex things if they want to is somehow "wrong".
The best literature analogy I can make is something like the ideal love story is probably a lot more like a 200 page romance novel than like a 2 minute pr0n video.
As a web developer/designer, I will utilize the technology that is most successful for the end client. If no UI is what is hot, I will do it. If phone apps with big shiny buttons are what's hot, I will do it. I understand that I will have to continually learn new and evolving techniques to display or represent my content in a way that is desirable to users. UI evangelism is somewhat of a moot point to me.
Of course, there is something to be said for the medium as art. That is, form for its own sake.
Function-only is also short-sighted, as it demeans the environment.
However, I would argue that the best architecture is hyper-rational, derived from raw need, and designed with the skill of an artisan. The real art is fulfilling the need and looking good doing it.
One of the tradeoffs involved is between the general expressiveness of the framework and UI unobtrusiveness. E.g. a direct-manipulation "naked objects" interface with fully automated object placement on screen is quite generic but fairly ugly. My http://www.nestgrid.org user interface pattern improves on that by allowing free-form editing of a quasi-HTML page, hence providing for mostly manual placement of objects, but it's still no more "unobtrusive" than Excel is. That is, it takes getting used to.
Well, maybe your content could take the entire page width instead of being in a 1/3-page-width column, or is there invisible content on the right? You should think about users who have disabled JavaScript before talking about UI. That's UI 101.