There is an architectural concept called the “kitchen triangle” that I often use in talks and presentations. I use it to point out the difference between arguments and articulations of arguments. The gist of it is this: in order to create an effective kitchen, the sink, the stove and the refrigerator must each be placed no closer than 4 feet to each other, but no further than 9 feet apart. There must also be limited or no traffic through the center of the triangle.
In this example, the triangle is the argument: it presents a solution for how we use space to accomplish a task. It is based on the dimensions of the human body (the length of our arms, of our steps, of our ability to twist and pivot) and on the process of food preparation (which requires, among other things, that access to key areas not be disrupted by household traffic flows).
The articulation of this argument is the design of a particular kitchen: the kitchen in a Craftsman-style home looks very different from the kitchen in my college apartment. Both look different from the kitchen in a hotel room or in a handicap-accessible condo. The kitchen argument — the solution for how we use a space to accomplish a task — can be articulated in different ways and still achieve the same result. In fact, in these cases the kitchen argument needs to be articulated in different ways to accomplish the same result. It has to adapt to context to be effective.
This example provides a convenient model for discussing ways to adapt the arguments we make in information design across contexts. In architecture the kitchen triangle presents a solution for how we use space to accomplish a task; in information design our arguments make cases for how we understand information to arrive at insight. If we strive to get at the crux of the argument itself, the core of the solution or insight it presents, we can more easily and more effectively articulate that argument across contexts.
I recently presented this idea at a talk at the University of Washington’s Information School. It was part of the larger argument that in order for the information architecture solutions we design as UX professionals to be effective, they need to be “articulated” appropriately across different devices in contextually appropriate ways. During the Q&A following my talk, one student held up his smartphone and asked, “Okay, I get the way the idea of ‘kitchen’ gets translated to lots of different settings, but how do I translate the ‘page’ argument from a desktop computer onto a screen like this?”
I found this question brilliant. It perfectly framed an issue I had been struggling to formulate on my own. My response to the student at the time was that “the page” isn’t the argument; “the page” is one possible articulation of the argument. The argument is whatever understanding that particular page is trying to convey. Even midway through this response, however, I could feel how slippery it was. Part of that slipperiness comes from the fact that the question goes right to the root of a key set of challenges we face as IAs designing for a multi-device, multi-context infosphere: we traffic in language and ideas, but we still bring only a rudimentary understanding of the relationship between them to our discipline.
In these early days of websites and mobile apps, this basic level of understanding has generally been enough. In the face of the interconnected smart systems and the data avalanche that looms just on the horizon, however, I fear such tactics will come up short. My interest in language, meaning, and what the relationship between the two brings to the practice of user experience architecture is an attempt to close that gap.
Let me admit this right up front: there’s no way I’ll clear this whole issue up in the course of a single blog post. I would like, however, to present what I see as the contours of the problem and begin sketching out some hypotheses for how we might begin to develop solutions. For me this begins with language.
The most common definition of “language” usually falls along the lines of “the method of human communication, either spoken or written, consisting of the use of words in a structured and conventional way” (thanks, OS X dictionary!).
While both technically and practically correct, this definition is also perilously single-faceted. In addition to being a medium for communication, language is also a precondition to conceptual thinking. Simply put, if we didn’t have language, we wouldn’t have thought as we know it.
In his seminal Course in General Linguistics, Ferdinand de Saussure (the founder of modern linguistics) argues that
Psychologically, our thought–apart from its expression in words–is only a shapeless and indistinct mass. [...] Without the help of signs we would be unable to make a clear-cut, consistent distinction between two ideas. Without language, thought is a vague, uncharted nebula. There are no pre-existing ideas, and nothing is distinct before the appearance of language.
By this reading, language is not the result of complex representational thinking; it is the cause of it. Even when we communicate complex ideas in shapes, or colors, or motion, the causative element in our capacity for reason — our ability to make sense of ideas — is language.
This constitutive quality of language in relation to conceptual thinking becomes a challenge when we try to articulate meaning in flexible ways. Language is a shared, socially held system in which we participate. Saussure notes that language is “a product of the collective mind of linguistic groups.” No single individual possesses or understands it in its entirety. In order to make sense of messages, we rely on simplified models of how language works.
In this context, “modeling” refers to the process by which we make sense of systems. Our language models are, for instance, what allow us to invent and make sense of new words. When I say that a particular piece of software is “crashtastic,” people who are part of my narrower linguistic group know what I mean. This is because of our shared model of lexicography. The fact that “She tweets a lot” is correct and “She tweet a lot” is not is a function of a culturally shared model of morphology.
In both cases, we haven’t consciously modeled either lexicography or morphology. But we’re using models all the same: our knowledge of both of these areas relative to the actual functioning of language is incomplete. Likewise, our models, though generally reliable, are also limited: this is what leads some people to believe (to their peril and sometimes fiery doom) that “inflammable” means “not flammable” (it actually comes from the Latin root “inflammare” — the same place we get “inflame”).
We can see similar models all around us — and they are likewise simplified representations of much more complex systems. The trackpad on a Mac laptop is one such example. My trackpad is set up so when I move two fingers across it toward the keyboard, the page in focus moves up. My model is that my fingers are on the page and are (quasi-) physically moving it.
Several of my colleagues, on the other hand, have their trackpads set up to work the opposite way. Their model is that they are grabbing the scroll bar at the right of the window in focus and moving it up, which moves the page down (which, in turn, is a relic of the wheel functionality on scrolling mice).
Neither of these models are anywhere near an accurate representation of the functionality of the actual system at play — but both of them create meaning that affords functionality. In both of these cases, the conceptualization that enables each of these models is linguistic: it is language that creates the conceptual space in which we can build a metaphor of movement.
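The two trackpad setups can be sketched as one system behavior wrapped in two opposite interpretations. This is a toy model, not Apple’s actual implementation; the type and function names are mine:

```typescript
// Direction the fingers move on the trackpad, relative to the keyboard.
type Gesture = "toward-keyboard" | "away-from-keyboard";

// "natural": the fingers are on the page itself, so content follows them.
// "classic": the fingers are dragging the scroll bar, so content moves
// the opposite way (a relic of the scroll wheel).
type ScrollSetting = "natural" | "classic";

// Which way the visible content ends up moving on screen.
// Same hardware event in both cases; the setting only flips
// which metaphor is used to interpret it.
function contentMoves(gesture: Gesture, setting: ScrollSetting): "up" | "down" {
  const pushingUp = gesture === "toward-keyboard";
  if (setting === "natural") {
    return pushingUp ? "up" : "down";
  }
  return pushingUp ? "down" : "up";
}

console.log(contentMoves("toward-keyboard", "natural")); // "up"
console.log(contentMoves("toward-keyboard", "classic")); // "down"
```

The point of the sketch is that the difference between the two models lives entirely in the interpretation layer, not in the gesture or the hardware.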
User Experience Architecture
In light of this, I submit that in order to effectively articulate the arguments we make across contexts in digital — and often physical — information spaces, we need to understand the conceptual foundations of those arguments. We need to understand the way we make meaning as thinking animals. We also need to dig into the details of the language and models we use to make them intelligible, both so that we can use them more effectively and so that we can leave them behind when they hinder communication (for instance, when trying to translate a digital “page” onto a 3.5-inch touchscreen).
Armed with this knowledge, we can begin to design and articulate arguments that are intelligible across contexts. In the same way that we construct our built environments in response to the physical mechanics of our bodies, we can construct our information environments in response to the conceptual mechanics of our minds.
What exactly this looks like in practice is a question I’m just beginning to explore. To guide this exploration, and to eventually articulate what this approach means to user experience architecture as a practice, I’ve begun assembling hypotheses to test. Here’s a short list of the top few:
- To effectively negotiate the cognitive variables involved in meaning-making, we need to develop a better understanding of where and how our ideas and our arguments begin.
- Articulated information architecture is important for interaction design — and for interactive information environments — because we unconsciously enlist models of all of the world around us to make meaning.
- Just as responsive web design reacts to physical changes in context, a responsive information architecture should react to cognitive process changes across different meaning-making contexts (kinetic, aural, tactile, etc.).
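To make the last hypothesis concrete, here is a minimal sketch of how an articulation rule might select a presentation from a meaning-making context, by analogy with how a CSS media query selects a layout from screen dimensions. Every name here is hypothetical — an illustration of the shape of the idea, not an existing API:

```typescript
// A meaning-making context, by analogy with a media query's physical
// context (viewport width, orientation, resolution).
type Channel = "visual" | "aural" | "tactile" | "kinetic";

interface MeaningContext {
  channel: Channel;
  // e.g. reading at a desk vs. glancing while walking
  attention: "focused" | "divided";
}

// One possible articulation of the same underlying argument.
interface Articulation {
  form: string;                 // how the content is presented
  chunkSize: "long" | "short";  // how much meaning per step
}

// Hypothetical rule set: the argument stays constant while its
// articulation adapts to the cognitive context, the way a fluid
// layout adapts to viewport size.
function articulate(ctx: MeaningContext): Articulation {
  switch (ctx.channel) {
    case "aural":
      // Spoken interfaces can't be skimmed; meaning arrives serially.
      return { form: "spoken summary", chunkSize: "short" };
    case "tactile":
      return { form: "haptic cues", chunkSize: "short" };
    case "kinetic":
      return { form: "gesture-driven navigation", chunkSize: "short" };
    case "visual":
      // A focused reader can handle a full page; a divided one cannot.
      return ctx.attention === "focused"
        ? { form: "full page", chunkSize: "long" }
        : { form: "glanceable card", chunkSize: "short" };
  }
}

console.log(articulate({ channel: "visual", attention: "divided" }).form);
// "glanceable card"
```

The “page,” in other words, is just one branch of the rule set — the articulation that happens to fit a focused, visual context.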
I’m treating this short list as a first step towards developing a model of articulated information architecture grounded in linguistic cognitive processes. I won’t venture to guess what the connected infosphere of five years from now will look like, but I’m pretty confident that negotiating it will require some additional tools in our information design toolboxes. I suspect this might be one of them. I’ll let you know what I find out.