Technology is enabling complete strangers to cooperate for social good: in this case, the hunt for adventurer Steve Fossett, whose plane disappeared recently in the U.S. southwest. (I’ve written about other attempts to use technology for such social goals here and here.)
Amazon’s Mechanical Turk has farmed out pieces of the southwestern landscape to up to 20,000 strangers, who search the imagery from their computers (thanks to Google Earth). If 10 individuals find nothing in their *sector*, that patch of ground is ignored. If one or more of the 10 see something, it is passed along to searchers who scan that terrain on the ground.
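The triage rule described above is simple enough to sketch in a few lines. This is a hypothetical illustration, not the actual system: the sector names, the data layout, and the `triage` function are all invented for the example; only the logic (10 reviewers per sector, escalate if at least one flags something) comes from the description.

```python
REVIEWS_PER_SECTOR = 10  # each patch of ground is examined by 10 volunteers

def triage(flags_by_sector):
    """flags_by_sector maps a sector id to a list of booleans,
    one per volunteer reviewer (True = "I saw something").
    Returns the sectors to escalate to a ground search."""
    escalate = []
    for sector, flags in flags_by_sector.items():
        assert len(flags) == REVIEWS_PER_SECTOR
        if any(flags):              # one or more volunteers flagged it
            escalate.append(sector)
    return escalate                 # all other sectors are ignored

# Illustrative reports (invented sector ids):
reports = {
    "sector-17": [False] * 10,          # nobody saw anything -> ignored
    "sector-42": [False] * 9 + [True],  # one volunteer flagged it -> escalate
}
print(triage(reports))  # ['sector-42']
```

Note the asymmetry in the rule: a single positive report is enough to trigger a ground search, while dismissal requires unanimous agreement, which biases the scheme toward false positives rather than missed sightings.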
What is the impact of such collaboration, above and beyond its potentially important benefit in locating stranded victims like Steve Fossett? Such efforts are obviously cooperative and pro-social. But they don’t build any social capital (in other words, Jane who is searching this piece of land and Charles who is searching that piece of land don’t build up any social interconnections). What’s less clear is whether such actions have any bearing on social trust: does my participation in this act of altruism together with some 20,000 others change my conception of whether strangers can be trusted? At a logical level, probably not (20,000 individuals are a very small percentage of the world’s population), but behavioral economists are increasingly demonstrating that we often do not behave in the logical fashion that classical economists expected.
For a story on this, see: Searching by Land, Sea and the Web (NYT Week in Review, 9/16/07)