In A Brief Rant on the Future of Interaction Design, Bret Victor argues that the goal of interaction design is to create the interfaces and objects that humans use. By definition, he says, “A tool addresses human needs by amplifying human capabilities.” The problem, though, is that interfaces sometimes confuse and detract from human capabilities rather than amplify them. In the prototypes of future interfaces that we see in commercials, movies, and television, is what we see a realistic approach to the future, or an obsession with touch-based interfaces that can easily detract from human capabilities?

Victor makes the very good point that humans have hands, and hands are made to manipulate objects. When the objects being manipulated have no differential tactile value (glass touchscreens), are we optimizing the potential of the interface? Probably not. In fact, it does not always make sense to design interfaces that cannot be explored in a tactile way, yet the trend toward touchscreens for everything suggests exactly that. Victor aptly calls these interfaces “Pictures Under Glass.”

Ultimately, Victor argues that future interfaces should be capable of manipulation by human hands. This is not necessarily a revolutionary idea, but it is one being ignored entirely by the touchscreen revolution. Perhaps part of the problem with gesture- or icon-based devices is that they do not encourage play. Tactile exploration with one’s hands is inherently playful, and humans are not afraid of it. I’ve seen many people afraid to tap a button on their iPhone because “I don’t know what will happen.” Though touchscreens are clearly here to stay, I hope we’ll swing back to a more moderate and considered use of them. They shouldn’t be the default for every device, because sometimes it is important for the user to be able to play with, explore, and, in fact, feel the device.

Initially, when we started the interface project, I thought to myself, “How hard could this really be?” The deeper I got into the intricacies of the machine’s requirements, the more I realized that interface design is definitely a balance of art and science.

I learned a lot about my ideal process while working on the metro machine project. I experienced a few false starts because I had no idea what method was best for me. Initially, I started with screen layouts in InDesign, similar in my mind to the way we had laid out other interfaces in class activities. I could not wrap my head around the project this way. I kept sitting there, staring at the screen, unable to figure out what to do next.

I went back to requirements gathering. I had an initial list of requirements that I had created from the fare schedules and other information on Metro’s website, but I went back through this list and refined it. I categorized items by functional area, went through the pictures I had taken of the real, live Metro machine, and filled in a few functions that I had been missing. From this false start and reexamination of requirements, I learned that I need to start with a thorough examination of requirements before moving to a wireframe stage.

I went back to my InDesign layouts, but they still weren’t working for me. I made a basic flow chart of what I thought needed to be included in the interface and started to code a prototype of the machine using basic HTML and CSS. This was the key for me. I started to play with the prototype as I was building it and refined each screen according to my observations of what was and was not working. Since all formatting was CSS-driven, it was easy to play around and change things.
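For illustration, one “screen” of such a prototype might look something like the sketch below. The file names, labels, and class names here are hypothetical, not my actual project files:

    <!-- buy-fare.html: one screen of the prototype; each button is just a link to the next screen -->
    <!DOCTYPE html>
    <html>
    <head>
      <link rel="stylesheet" href="css/machine.css">
    </head>
    <body>
      <div class="screen">
        <h1>Buy a Fare Card</h1>
        <!-- each option simply loads another static page -->
        <a class="button" href="select-amount.html">Select amount</a>
        <a class="button" href="start-over.html">Start over</a>
      </div>
    </body>
    </html>

    /* css/machine.css: all formatting lives here, so changing one rule restyles every screen */
    .screen { width: 480px; margin: 0 auto; font-family: sans-serif; }
    .button { display: block; padding: 1em; margin: 0.5em 0;
              background: #336; color: #fff; text-decoration: none; }

Because every screen shares the one stylesheet, a single tweak to a rule updates the look of the whole prototype at once.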

Once I got to a certain point with the prototype, I went back and made a new, much more intricate flow chart. As I was doing this, I found other errors of logic and immediately saw a few inefficient loops that I was able to eliminate. I went back and forth between refining the prototype and refining the flow chart to complete the project.

Ultimately, I tried to make the whole thing work. This attempt was taken down by relative links that stopped working whenever I duplicated a folder at a different level of the prototype (see the sketch below). The lesson here is that I really should have built only one branch of each option of the prototype at a time. THEN, once everything was finished and refined on these singular branches, it would have made sense to build out the full set of options. Lesson learned!
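To make the failure concrete, here is a hypothetical version of the folder structure that causes it (the names are illustrative):

    prototype/
      buy-fare.html        <-- href="css/machine.css" resolves correctly here
      css/
        machine.css
      reduced-fare/
        buy-fare.html      <-- duplicated copy: the same relative href now resolves
                               to prototype/reduced-fare/css/machine.css, which does
                               not exist; this copy would need "../css/machine.css"

Relative links resolve against the location of the file that contains them, so any href that worked at one depth silently breaks when the page is copied to another.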

In What will our (future) interfaces feel like?, Francisco Inchauste posits that future interfaces will be gesture-based and that we will move further and further away from content- and icon-based interfaces. He cites the new Clear app as an example of an interface that is gesture-oriented and possibly ahead of its time.

As technology evolves, so does interface design. This is obvious. If someone straight out of 1950 were plopped in front of an iPhone, could we reasonably expect them even to identify it as a phone? Possibly not. If we gave them a so-called “feature phone,” would they know it was a phone? Possibly. The buttons on the keypad would be a huge clue that might help them make the leap (even though rotary phones dominated in 1950).

I think the key elements here are background knowledge (what does a phone look like, I can gesture to make this move, etc.) and the willingness to play. Several highly intelligent people I know were recently flummoxed by “floating heads” in the Facebook apps on their iPhones. It turns out the floating heads are related to the Facebook chat feature, but there were no affordances to guide how to interact with said heads. About a week after the heads were first observed, Facebook finally released an update that explained to users what the heads are and, more importantly to those haunted by them, how to get rid of them. I think we’ll be able to move more toward gesture-oriented interfaces once the average user is totally comfortable with touchscreen devices and is not afraid to play with a new app or program. From what I’ve seen, this might take a while!

This chapter covers the following information.

Four ways users seek information:

  1. Known-item search
  2. Exploratory seeking
  3. Browsing and discovering what you didn’t know you needed to know
  4. Refinding previously discovered information

Three types of navigation (illustrated in the markup sketch after this list):

  1. Structural navigation
    • Global navigation
    • Local navigation
  2. Associative navigation
    • Driven by time (of publication, news event, etc.)
    • Driven by type (articles, video, pictures, etc.)
    • Driven by topic or subject (categories)
    • Driven by interest (most popular items)
    • Driven by owner or group (e.g., same author)
    • Driven by community (what other people like you like)
  3. Utility navigation
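As a concrete illustration of how these three types can coexist on one page, here is a hypothetical sketch of an article page (all names and URLs are invented):

    <!-- article page on a hypothetical news site -->
    <body>
      <!-- structural/global: top-level categories, available throughout the site -->
      <nav class="global">
        <a href="/news/">News</a> <a href="/video/">Video</a> <a href="/opinion/">Opinion</a>
      </nav>
      <!-- structural/local: levels of the hierarchy close to where you are now -->
      <nav class="local">
        <a href="/news/politics/">Politics</a> <a href="/news/world/">World</a>
      </nav>
      <article><!-- the content itself --></article>
      <!-- associative: links to pages with similar content (here, driven by topic) -->
      <aside class="related">
        <a href="/news/politics/budget-vote.html">More on the budget vote</a>
      </aside>
      <!-- utility: features for using the site itself, outside the content hierarchy -->
      <nav class="utility">
        <a href="/signin.html">Sign in</a> <a href="/account.html">My account</a>
      </nav>
    </body>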

Questions to ask when designing navigation:

  1. How is your content organized?
  2. What do your users want to do?
  3. What do you want your users to do?

Key terms from “The Tao of Navigation” in Information Architecture: Blueprints for the Web:

structural navigation

Represents the content hierarchy; “tends to take the form of global and local navigation”

global navigation

Available throughout a website; typically consists of top-level categories

local navigation

Includes levels of the hierarchy that are close to where you are in the site; often appears below global navigation; also called sub-navigation

associative navigation

“Connects a page with other pages that hold similar content.”

utility navigation

“Pages and features that help visitors use the site itself” that are outside of the content hierarchy; e.g., sign in, user information


pogo-sticking

When “users go to a subcategory, and then must go back to the parent category to choose a different subcategory.” Often used on sites with large amounts of heterogeneous content.


When “users choose a category and can choose links to sibling categories provided on the page.”

safety nets

These types of strategies “imagine what might go wrong and then create a mechanism for helping the user out of that problem.”


pagination

A unique form of navigation “that lets people flip through multiple pages.” Can also be used to guide people through forms by implementing pagination and removing local/global navigation.

Though I have heard that the iPhone is revolutionary technology used by the blind community, I’ve always had some cognitive dissonance understanding how that can be true given the flat touchscreen surface of the phone. How the Blind are Reinventing the iPhone, published in The Atlantic, helped to answer a lot of my questions.

iPhones in general are built to be exploratory devices – and by that I mean that there aren’t obvious design cues to nudge us in the right direction. Experimentation is imperative to learning what your iPhone can do and how to make it work for you. It makes sense that this is true for blind users as well.

Navigating an iPhone without the use of sight at first sounded impossible to me. Then I realized that I, too, know exactly where to press (without looking) to pull up my most frequently used apps. The arrangement of apps on the screen is very visual, but it relies on a grid with a small number of cells, each more than big enough for a human finger to tap. Muscle memory for the location of buttons is something many of us are used to. As I’m typing this, for example, I have not looked once at the keyboard on my computer because I don’t have to; my brain has mapped where the key for each letter is. I imagine a similar mapping happens with the placement of apps and buttons in relation to the orientation of the iPhone.

The idea of continuing one’s craft even if the dominant sense or physical capability associated with that craft fades is an interesting one. Beethoven continuing to compose music as he descended into deafness is certainly not the only example. Henri Matisse continued to create art until his death, despite the fact that he was confined to a wheelchair and could no longer paint the large canvases for which he is known.

In the aftermath of the Boston Marathon bombings, the perseverance and determination of some of the injured to one day run or dance again, despite losing a foot or parts of their legs, are impressive. Interviews with these victims remind me that for each of us, our physical and sensory capabilities contribute to the personal and professional activities that make our lives meaningful and define our identities.

In this spirit, it does not surprise me that architect Chris Downey adapted his tools and processes to accommodate the onset of his blindness. In most creative fields, there is an obvious primary mode of production guided by sight or hearing. Perhaps experimenting with alternate modes of production (touch in place of sight, vibration instead of sound) could help able-bodied creatives expand their craft in new, unexpectedly fruitful directions.

Here’s an episode on Chris Downey’s architecture from 99% Invisible.

Mark Hurst’s post, The Google Glass feature no one is talking about, warns of effects unforeseen by consumers and the potentially widespread, deleterious impact of new technology.

Especially at a time when it is possible for huge companies like Google or Facebook to be omnipresent in many people’s lives, there is a surprising lack of critical thought by consumers of these companies’ services.

Millions regularly log into their Gmail accounts, use Google Maps, and search with Google – all for free. How can this possibly be profitable without data mining by Google itself? The average person also mistakenly believes that their email accounts, for example, are somehow private. They are not.

Unless you are a particularly savvy user (who reads those change-of-service email notifications anyway?) or a regular reader of Mashable or TechCrunch, why would you know that Facebook now owns Instagram and that the data you voluntarily enter into both platforms can now be aggregated in one place?

People are being hoodwinked. As many commentators have already declared: we are the product.

Integration of one technology with another (Pinterest’s connection to Facebook, for example) does not happen primarily to make users’ lives easier. Integrating multiple platforms gives data-mining companies easy access to larger swaths of data. Google Glass, with its ability to record anonymously and covertly, represents the future potential of this type of integration.

Google Glass may end up as the ultimate test of the company itself. Its famous “don’t be evil” philosophy can easily be questioned depending on how the data gathered from new technology like Google Glass – or old technology like Gmail – is used in the future. As good as Google can make us feel, the company has a checkered past of questionable disclosure practices, biases in search results, and other deceptive practices. This leads to the ultimate question: as Google develops technology that integrates ever more closely into our lives, can we trust that it will do no evil? Doubtful, highly doubtful.

