Tuesday, March 25, 2008
Rob MacDougall and I are putting together something new and fun for Western freshmen this coming fall, a course called "Science, Technology and Global History." Our goals are modest. We hope to cover the history of the whole enchilada from the Big Bang to the near future, while inculcating the idea that historians and scientists need the same kind of critical, evidence-based habits of thought. Forget the two cultures. While Rob is figuring out how our students can work in teams online with students in South Asia, I'm left to kick back and brainstorm classroom mischief.
One of the interesting things about first-year courses at our university is that enrollment can't be capped. So we could have six students or six hundred. I've done large lectures before, and I'm not very enthusiastic about the format. I tried to wave my hands a lot, because I once attended a seminar by a psychologist who studies the teaching evaluation process, and he said that students rank mobile professors more highly than sessile ones. I also paused every ten minutes or so to give students a chance to ask questions, but most of them seemed pretty shy. Each term, I got to know the half-dozen who did like to speak up in class.
Since I teach with a laptop and LCD projector, I've been thinking it would be fun to have a chat window running so students could provide backchannel commentary that could be seen by all. This might be something like IM or Twitter. As I was talking, I could keep an eye on the chat window and field questions that would take the class somewhere interesting. If there was a sudden storm of confusion, I could go back and unpack or repeat something. Students who read my blog could even try to amuse me by setting loose chatterbots that simulate famous historical figures. Now I suspect that some of you might be worrying that a few students would abuse the system and type obscenities or whatever. But I'm not worried, because I can always walk over to the computer and close the chat window. It's that easy. I figure that if you treat people like adults they respond in kind.
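In case you're wondering what the plumbing might look like, here's a minimal sketch in Python: a tiny relay server that broadcasts each incoming line to everyone else connected. The port number is arbitrary and you'd connect with telnet or netcat for testing; a real backchannel would more likely ride on IM, IRC or Twitter.

    import socketserver
    import threading

    clients = []  # open connections, one per student
    lock = threading.Lock()

    class Backchannel(socketserver.StreamRequestHandler):
        def handle(self):
            with lock:
                clients.append(self.connection)
            try:
                for line in self.rfile:  # one comment per line
                    with lock:
                        for c in clients:
                            if c is not self.connection:
                                c.sendall(line)  # echo to the rest of the class
            finally:
                with lock:
                    clients.remove(self.connection)

    if __name__ == "__main__":
        server = socketserver.ThreadingTCPServer(("", 9999), Backchannel)
        server.serve_forever()

Closing the chat window on the projector is still just one click; the server can keep running harmlessly in the background.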
I'd be happy to hear from anyone who has tried something like this.
Tags: feedback | pedagogy | Twitter
Sunday, March 23, 2008
A Lunchtime Chat
There is a question that I'm told is popular to ask incoming freshmen: "Which historical figure (Jesus, Gandhi, Ozzy, etc.) would you most like to have lunch with, and why?" Now I have no idea what quality in the student this question is supposed to elicit, except perhaps forbearance. I'm glad that no one ever tried it out on me, because most of the answers that occur to me--"Is that likely to happen if I decide to attend this school, sir?"--probably wouldn't help my case. When the candidates are specified in advance, they're typically chosen either because they are (in)famous icons of recent pop culture or because they are timeless sages who have already provided written answers to the most common meaning-of-life-style questions. As much as I might rather meet Lao Tzu than Elvis, my hunch is that it would be more in keeping with Taoist principles to dine with someone who speaks your language and shares your preference for Southern fried cooking. I could be wrong about that.
The whole dining-with-the-stars thing puts me in mind of the Turing test. Alan Turing famously argued that we'd know a computer was intelligent when its conversational interaction was indistinguishable from a person's. Because people and computers look different (android fantasies notwithstanding), he suggested a setup that would cloak the embodiment of the interlocutor: the person conducting the test takes turns asking questions of two different respondents via a low-bandwidth connection (think IM). If he or she can reliably tell which one is the computer, the computer fails the Turing test.
In 1966, Joseph Weizenbaum created a conversational program called Eliza. Eliza could read an incoming statement like "I hate dogs" and use simple transformational grammar to turn it into a question: "Why do you hate dogs?" It could offer noncommittal responses like "Please go on." If the person answered a question with "Yes," Eliza might say "You seem positive." Many people interacted with Eliza enthusiastically, leading some to say that the Turing test had already been passed and others to say that the claim was rubbish. (If you'd like to converse with Eliza, you can Google for one of her many incarnations.)
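By way of illustration, here is a toy Eliza-style responder in Python. The rules are a loose paraphrase of the examples above, not Weizenbaum's actual script, and a real Eliza has many more patterns, ranked by keyword:

    import random
    import re

    # A few illustrative rules: a pattern plus a template for the reply.
    RULES = [
        (re.compile(r"\bi hate (.+)", re.I), "Why do you hate {0}?"),
        (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"^yes\b", re.I), "You seem positive."),
    ]
    DEFAULTS = ["Please go on.", "Tell me more.", "I see."]

    def respond(statement):
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(*match.groups())
        return random.choice(DEFAULTS)  # noncommittal fallback

    print(respond("I hate dogs"))  # Why do you hate dogs?
    print(respond("Yes"))          # You seem positive.

Even a handful of rules like these can be surprisingly engaging, which is exactly what made the early claims about Eliza and the Turing test so contentious.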
If I were chatting with freshmen, say over lunch, I'd be looking for students who had heard of Eliza and the Turing test and had a well-developed sense of anachronism. That hasn't happened to me yet. As a public service, I'm going to offer a new question that has been updated for the digital humanities: "What challenges would you encounter when trying to create an Eliza-style simulation of each of the following historical figures? Which would be most or least likely to pass a Turing test and why?"
Tags: RTFM
Monday, March 10, 2008
Pupation
Every so often in the past few decades, I've had to go through my accumulated collections of code and text and binaries and try to translate them so that they could be used on a new platform or a new version of an operating system. In some cases, such as text files, it has always been quite easy. In others, it has been more difficult, or even impossible. The assembly language that I wrote for one chip, for example, won't run on any other. The KnowledgeMan database programming that I did in the 1980s dates me, but otherwise isn't of much use now. More poignantly, KMan doesn't even have its own page in Wikipedia. Now I'm in the process of moving all of my files to an open source revision-control system (more on that in a later post) and facing many familiar problems. Once again, I'm discovering that open formats are a really good idea, and that in thirty years--if I last that long--the only sources I will have for looking back on my present work may be text, XML and source code.
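For what it's worth, the first step in a migration like this is just finding out what you have. A few lines of Python will do for a rough census of file formats; the root path here is hypothetical:

    import os
    from collections import Counter

    def inventory(root):
        """Tally file extensions under root, as a rough map of formats."""
        counts = Counter()
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                ext = os.path.splitext(name)[1].lower() or "(none)"
                counts[ext] += 1
        return counts

    for ext, n in inventory("/home/me/old-files").most_common():
        print(n, ext)

A listing like this makes it pretty obvious which of your formats are open and durable and which are orphans like my KMan databases.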
As I go through my files this time around, I'm also finding a lot of notes from writing and publishing my dissertation. I'm reminded that I've created a few new careers by metabolizing a succession of older ones and metamorphosing into something different. And when I look through my archival notes and book notes and lists of ideas and questions, I see that most of my work didn't end up in the published book. Some of it was tangential, some was forgotten, some better forgotten.
I'm thinking a lot about the computational tools that historians might use to write different kinds of history. In methodological guides, the emphasis is always on keeping track of things, on proper notetaking and proper citation, so that you don't forget where something came from. Working with digitized sources makes it much easier to search and cite and archive, and easier to imagine that almost everything can be saved. But what if some projects are crucially dependent on a period of forgetting and reuse? What kind of tool would allow some sources to be lost, remake your tangents into something new, turn your caterpillar into a butterfly or a moth?
Tags: digital history | historical consciousness | historiography | open formats
Thursday, March 06, 2008
A Cure for Continuous Partial Attention
On my way home the other night I noticed that the lead story in one of the university student newspapers was headlined "Frustrated profs consider laptop ban." This is one of those perennial favorites. Students seem distracted? Cut off their wireless, ban laptops and smart phones, and forbid internet use for coursework. After all, everyone knows that students always paid respectful attention to their teachers before computers and wireless internet became widespread. The part of the article that made me laugh the hardest was a quote from an anonymous professor who complained that one student was furiously typing into a laptop for no apparent reason. How hard must that class suck if the prof thinks that nothing noteworthy was going on? And wouldn't you feel stupid if your inattentive student was brainstorming a cure for cancer? For their part, the students interviewed for the story mostly seemed to think that laptop use was actually helping them to learn and to prepare for their futures.
Really, shouldn't we be worried about the digital divide, rather than trying to exacerbate it? As Manuel Castells argues in The Internet Galaxy, a lack of access to networked devices is only one part of the problem. One of the fundamental challenges for a network society is
the installation of information-processing and knowledge-generation capacity in every one of us--and particularly in every child. By this I obviously do not mean literacy in using the Internet in its evolving forms (this is presupposed). I mean education. But in its broader, fundamental sense; that is, to acquire the intellectual capacity of learning to learn throughout one's whole life, retrieving the information that is digitally stored, recombining it, and using it to produce knowledge for whatever purpose we want. This simple statement calls into question the entire education system developed during the industrial era. (277-78)
A student's freedom to think their own thoughts, to structure their own mental activity, is a far greater good than trying to compel some semblance of attention. So here's a suggestion for all you frustrated profs: relax. I'm guessing that you may have spent some of your own undergraduate hours daydreaming, doodling or writing snarky notes in the margins of your notebooks. And look how well you turned out!