Wednesday, August 02, 2006

Microsoft Live Labs Photosynth and Place-Based Computing

One of my research programs is in what I call place-based computing. By combining digital sources with geographic information systems (GIS), global positioning system (GPS) receivers, and handheld or tablet computers, it becomes possible to access historical sources in the place that the source represents. You can stand outside an old building, for example, and see photographs of how it used to look, census or ownership records, old newspaper articles about it, and so on. We introduced the technology in the Western Public History MA program last year, when our grad students used it to create a technology-enhanced walking tour of a heritage conservation district.

Up until now, most of the sources that we've been using for place-based computing have been geocoded by hand. If we know that a newspaper article refers to a particular building or intersection, we can use a geocoding web service to determine the latitude and longitude, and then make an entry for the source in the GIS database. Part of the impetus for text and data mining is to automate this process. (One might, for example, use web services to geocode every place name in the Dictionary of Canadian Biography, then download the entire thing to a handheld. Then when you visited Chicoutimi, you could discover that Dominique Racine was parish priest there from 1862-88, that William Price and Peter McLeod had timber operations there in the 1840s, and so on. In effect, the system would be a travel guide that emphasized Canadian historical biography.)
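The workflow above can be sketched in a few lines of Python. The lookup table, the source record format, and the function names here are all hypothetical stand-ins; a real system would call a geocoding web service over HTTP and write into an actual GIS database. The coordinates for Chicoutimi are approximate.

```python
# A minimal sketch of geocoding a historical source, assuming a
# hypothetical gazetteer in place of a real geocoding web service.
GAZETTEER = {
    "Chicoutimi, QC": (48.43, -71.07),  # approximate lat/lon
}

def geocode(place_name):
    """Return (latitude, longitude) for a place name, or None if unknown."""
    return GAZETTEER.get(place_name)

def make_gis_entry(source_id, title, place_name):
    """Attach coordinates to a source record so GIS software can map it."""
    coords = geocode(place_name)
    if coords is None:
        return None  # leave ungeocoded sources for checking by hand
    lat, lon = coords
    return {"id": source_id, "title": title, "lat": lat, "lon": lon}

entry = make_gis_entry(
    "dcb-racine", "Dominique Racine, parish priest 1862-88", "Chicoutimi, QC")
```

Once every source carries a latitude and longitude like this, a handheld with a GPS receiver can simply query for records near its current position.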

Techniques for mining text are relatively well-understood compared with those for mining images. At this point, a number of Canadian archives, such as the McCord Museum and the BC Archives, have large repositories of digital historical images. Many of these are images of places, although the metadata does not include tags for latitude and longitude. Ideally, it would be possible to sift through these images automatically, returning ones that matched, say, a present-day photograph of an existing structure.
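To give a rough sense of what automatic sifting might look like, here is a toy sketch that ranks archive images by similarity to a query photograph. Real systems compare local image features (SIFT-style descriptors and the like); a simple grayscale-histogram distance stands in for that here, and the archive ids and histogram values are fabricated for illustration.

```python
# A toy sketch of sifting an image archive for matches to a query
# photograph, using histogram distance as a crude stand-in for real
# image-matching techniques.

def histogram_distance(h1, h2):
    """Sum of absolute differences between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def rank_archive(query, archive, top_n=3):
    """Return archive image ids ordered from most to least similar."""
    scored = sorted(archive, key=lambda item: histogram_distance(query, item[1]))
    return [image_id for image_id, _ in scored[:top_n]]

query = [0.1, 0.2, 0.4, 0.3]                  # present-day photo of a structure
archive = [
    ("archive-001", [0.1, 0.25, 0.35, 0.3]),  # similar scene
    ("archive-002", [0.7, 0.1, 0.1, 0.1]),    # very different scene
]
ranked = rank_archive(query, archive)          # "archive-001" ranks first
```

A histogram match is far too weak to recognize the same building across a century of photographs, but the overall shape — extract a signature from each image, then rank by distance — is the same in serious systems.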

As a preliminary step in this direction, I got a small grant to experiment with using some off-the-shelf photogrammetry software (PhotoModeler). I used photographs of a present-day building to create a partial 3D model, then imported historical photographs of an adjacent building that had been demolished. The software allowed historical and contemporary photographs to be displayed in the same 3D space, although the work is painstaking, to say the least. (You can read the report here.)

Enter Photosynth, from Microsoft Live Labs. Researchers recently announced a pretty amazing new tool that automatically extracts features from images, uses photogrammetry to stitch them together in three dimensions and then gives you an immersive browser to explore the whole thing. Their demo shows the system with contemporary photographs, but historical photographs should work as well. I can't wait for a chance to play with their software.
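The feature-matching step underneath a tool like this can be illustrated with a small sketch. Each photo yields descriptor vectors for its distinctive points, and matching descriptors across photos is what lets photogrammetry place the images in a shared 3D space. The two-dimensional descriptors below are made up for illustration (real ones such as SIFT have 128 dimensions), and the ratio-test threshold is a conventional choice, not anything specific to Photosynth.

```python
# A sketch of matching feature descriptors between two photographs,
# using a Lowe-style ratio test: keep a match only when the best
# candidate is clearly closer than the second best.
import math

def match_features(desc_a, desc_b, ratio=0.8):
    """Return (index_in_a, index_in_b) pairs of confidently matched features."""
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: math.dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if math.dist(d, desc_b[best]) < ratio * math.dist(d, desc_b[second]):
            matches.append((i, best))
    return matches

photo_a = [(0.0, 1.0), (5.0, 5.0)]   # made-up descriptors from photo A
photo_b = [(0.1, 1.1), (9.0, 0.0)]   # made-up descriptors from photo B
matches = match_features(photo_a, photo_b)   # only the unambiguous pair survives
```

The appeal for historians is that nothing in this step cares when a photograph was taken: an 1890s glass-plate negative and a digital snapshot of the same facade should, in principle, yield matchable features.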
