Sunday, October 28, 2007

Seams and the Suspension of Disbelief

At an unconference I attended a few weeks ago, Lucy Suchman began a conversation about illusion and the suspension of disbelief in technocultural systems. The example that she gave was animation: we really buy into the actions of lovable cartoon characters, and readily attribute intentionality to them. And yet, of course, there is nothing beneath the surface to match the imagined anthropomorph. Such suspension of disbelief seems to follow quite readily when the details are right. It's hard to look at the slowly pulsing LED of a "sleeping" PowerBook and not feel like the machine is a little bit more human for it.

Suspension of disbelief is something that historians strive for, too. In public history, for example, costumed interpreters, museum dioramas, and replicas of artifacts and documents stand in for the originals that they are intended to resemble, although they may have little or no causal relationship to them. The monographs of traditional history are also simulacra. They bear a principled relationship to past events, but have rarely partaken of them. Instead, their job is to put the reader into some kind of relationship with the past, to get him or her to see through the physical reality of the codex.

The discipline of citation allows sophisticated readers to assess the evidentiary material from which a particular account is constructed. Each footnote serves as a kind of thread. Pulling on it may tighten a seam or rip it open. Professional historians expect the body of a work to be relatively smooth and tightly integrated, but they also expect to be able to use the footnotes to take it apart as necessary. Ideally, monographs always present both smooth and seamful faces.

In digital history, we have to pay attention to finding the right balance of smoothness and seamfulness, but we can work at a number of different levels, ranging from low-level electronic and hardware decisions to very high-level software abstractions. It is possible for something to appear smooth at every level. Carrying on the unconference conversation, Casey O'Donnell gives the examples of the iPod and Wii. As he says, "these devices (mostly) just work," and have been designed to suppress tinkering. It is possible, however, to construct systems that are smooth at one level and seamful at another, that signal their willingness to be hacked in particular ways. (See the work of Matthew Chalmers for more on explicitly seamful design.) At the unconference, I demonstrated a simple musical instrument made from a distance sensor, a Phidgets interface, a laptop running Max/MSP, and a MIDI software synthesizer. All of the seams were out in the open--in fact it was a bit messy. But to play the instrument all you have to do is wave your hand in front of the sensor and you get glissandos of marimba notes. At the behavioral level it is fun and responsive; at the hardware and software levels it is obviously hackable.
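
For what it's worth, the behavioral mapping is simple enough to sketch in a few lines of code. What follows is not the actual Max/MSP patch, just a rough Python equivalent of the same idea; read_distance and send_midi_note are hypothetical stand-ins for the Phidgets sensor read and the MIDI message sent to the software synth.

    import time

    # A marimba-ish scale, roughly an octave above middle C (MIDI note 60).
    SCALE = [60, 62, 64, 67, 69, 72]

    def distance_to_note(distance_cm, min_cm=4.0, max_cm=80.0):
        """Map a hand distance (in cm) onto a note in SCALE."""
        # Clamp to the sensor's usable range, then normalize to 0..1.
        clamped = max(min_cm, min(max_cm, distance_cm))
        position = (clamped - min_cm) / (max_cm - min_cm)
        return SCALE[int(position * (len(SCALE) - 1))]

    def play(read_distance, send_midi_note, poll_seconds=0.05):
        """Poll the sensor and emit a note whenever the mapped pitch changes."""
        last_note = None
        while True:
            note = distance_to_note(read_distance())
            if note != last_note:
                send_midi_note(note, velocity=90)
                last_note = note
            time.sleep(poll_seconds)

Sweep your hand toward and away from the sensor and the changing distance walks up and down the scale, which is where the glissando effect comes from.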

As we develop historical projects online, we need to ask ourselves how we can incorporate tinkering while maintaining smoothness where we want it. A great recent example of this is Devon Elliott's suggestion that archives use wiki technology to allow historians and other researchers to create item-level metadata.


Sunday, October 21, 2007

The Archive as Time Machine

[Cross-posted to Cliopatria & Digital History Hacks]

Our story so far: even though we know that it's probably impossible, we've decided to think through the problem of building a time machine. In the last episode we decided that we wouldn't want one that allowed us to rewrite the past willy-nilly... because what would be the point of history then? It turned out, however, that the world itself is a pretty awesome time machine, tirelessly transporting absolutely everything into the future. Today we look at the archive widely construed: one small portion of the world charged with the responsibility of preserving our collective representational memory.

As every schoolboy used to know (at least back when there were 'schoolboys' who knew the Classics), Thucydides wanted his work to be "judged useful by those inquirers who desire an exact knowledge of the past as an aid to the interpretation of the future ... an everlasting possession, not the showpiece of an hour." The fact that we know this twenty-five centuries later speaks pretty well for the potential of preserving representations for long periods of time. Precisely because they can be readily transferred from one material substratum to another, written words, well, remain. Of course, since languages change over time there can be difficulties of decipherment or translation, and exactly which words survive can be a real crap shoot.

With the relatively recent spread of optical, magnetic, and other media, it became necessary to archive media readers, too. The endurance of the written word (or its new cousins, like photographs and phonographic records) now also depended on devices to amplify, transduce, or otherwise transform signals into a form that is visible or audible to human users. Along with the obsolescence of media, librarians and archivists now had to worry about the obsolescence of reading devices.

Enter the computer. Representations are now being created in such quantity that the mind boggles, and they can be transformed into one another so easily that we've taken to referring to practically all media as simply "new." This, of course, presents librarians and archivists with a class of problems we could also refer to as "new." My students and I were talking about this in my digital history grad class a few weeks ago. How do we store all of this born-digital material in a form that will be usable in the future, and not just the showpiece of an hour? One possibility, technically sweet but practically difficult, is to create emulators. The archive keeps only one kind of machine: a general-purpose computer that is Turing-equivalent to every other. In theory, software that runs on the general-purpose machine can emulate any desired computer.
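
To make the emulation idea a bit more concrete, here is a toy sketch in Python (purely illustrative, not based on CCS64 or any real emulator). The general-purpose machine, Python in this case, interprets the instruction set of an imaginary obsolete computer, which is all an emulator really does, just with vastly more fidelity.

    def run(program):
        """Execute a program written for a made-up accumulator machine."""
        acc = 0          # the machine's single register
        output = []
        for op, arg in program:
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "PRINT":
                output.append(acc)
        return output

    # A "preserved" program for the imaginary machine: it prints 2, then 4.
    print(run([("LOAD", 2), ("PRINT", None), ("ADD", 2), ("PRINT", None)]))

As long as someone can run the interpreter, the old program survives; the hard part, as the rest of this post suggests, is everything the blueprint leaves out.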

My students are most familiar with systems that emulate classic video and arcade games, so that framed our discussion. One group was of the opinion that all you need is the 'blueprint' to create any technological system. Another thought that you would be losing the experience of what it was like to actually use the original system. (Here I should say that I'm solidly in the latter camp. No amount of time spent on the CCS64 emulator can convey the experience of cracking open the Commodore 64 power transformer and spraying it with compressed air so it wouldn't overheat and crash the machine while you were hacking.)

More than this, however, the idea that a blueprint is all you need to recreate a technical system shows how much more attention is focussed on the ghost than on the machine these days. The showiness of new, endlessly plastic media obscures their crucial dependence on a systematic colonization of the nanoscale. I might be able to read a microfiche with sunlight and some strong lenses, but never a DVD. The blueprint for a DVD reader is completely useless without access to some of the most advanced fabrication techniques on the planet. So we're in the process of creating all this eternally-new stuff, running on systems whose lifecycles are getting shorter every year. What would Thucydides say?

Next time: how and why to send messages way into the future.


Friday, October 12, 2007

Unoriginal

I'm in Montreal this week participating in the Playful Technocultures unconference and the annual meeting of 4S. I've met a lot of interesting people, learned about what's been going on in STS since I last dropped in, and had a number of thought-provoking conversations. For me, a lot of the discussion has centered on (artificial?) distinctions between play and work, on what makes something "serious" or not. Today, for example, I had lunch with three RPI guys, Hector, Casey, and Sean. We were talking about the role of blogging in academic careers, and how it is not yet valued for promotion or tenure, even though it is clearly a form of public engagement. Many of us, in fact, have already found our online reputations and readership to be at least as beneficial as our published work in providing access to scholarly opportunities, funding, and other good stuff. The academic perception of professional blogging is bound to change as a generation of academic bloggers becomes tenured, and as committees begin to recognize that blogging may be fun but can also be work, and that blogs can be about more than where you ate lunch.

Our discussion turned from there to the fact that bloggers tend to value substantive posts much more than short ones that link to other things of interest. Sean noted, however, that given the sheer volume of stuff that comes through the feed reader every day, these link posts serve a useful "buzz" function... you tend to check out the pointers that recur in the blogs that you follow regularly. In a sense, both the glut of information and the new value of "unoriginal" content (like link posts) are concomitants of the shift to what Roy Rosenzweig called the culture of abundance. There is way too much out there now to monitor by yourself; you really need other people to add their "me too" when someone thinks something is cool. Think of these link posts as providing a gradient to the search space, so you or your bots have a better chance of finding spikes of interest.
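
Mechanically, the "buzz" function is easy to imagine. Here is a small illustrative sketch; the feed data is made up, and a real version would parse RSS or Atom feeds rather than hard-coded lists. Count how many different feeds point to the same link, and the recurring pointers rise to the top.

    from collections import Counter

    # Stand-in data: each feed is just a list of the links it has posted.
    feeds = {
        "blog_a": ["http://example.org/paper", "http://example.org/tool"],
        "blog_b": ["http://example.org/paper"],
        "blog_c": ["http://example.org/paper", "http://example.org/talk"],
    }

    def buzz(feeds_to_links):
        """Rank links by how many different feeds point to them."""
        counts = Counter()
        for links in feeds_to_links.values():
            counts.update(set(links))   # count each feed at most once per link
        return counts.most_common()

    for url, mentions in buzz(feeds):
        print(mentions, url)

Links that recur across several feeds are exactly the spikes of interest that the gradient points toward.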

Don't get me wrong: I think originality can be a good thing, but I don't think that it's the only good thing. The internet gives us instant access to the contents of the hive mind. It's easy to find out that someone else has already had your brainwave, or done the hack that you were planning to try. Don't let that stop you. You have to play with other people's ideas, words, tropes, code, artifacts, instruments, and story lines to achieve any kind of mastery of anything. Besides, historians are fond of pointing out that every new new thing actually has a long past [insert unoriginal allusion to Santayana here]. Sure, the collective is doomed to repeat things, but how else could it memorize them?
