Hive Intelligence

I heard an interesting talk by Rob Goldstone from Indiana University on 3/3/08, courtesy of the Program on Networked Governance at Harvard’s Kennedy School.

Rob studies how individuals, each performing their own actions, come to exhibit group properties, what could be called “hive intelligence.” In other words, without a dictator telling each individual how to act, observable and interesting patterns may still emerge. (Each individual contributes to the overall pattern, even though no one individual’s behavior dictates it, and the result may be unforeseen by any individual.) Such “hive intelligence” shows up in nature, for example in where saguaros send out branches (close to our heart), in the arrangement of sunflower seeds, and in the stripes on zebras, but it can also be found in human behavior (imitation, traffic patterns, pedestrian traffic, etc.). He studies this largely through experiments: in laboratories, over the Internet, and in virtual worlds like Second Life (where, he notes, it is very easy to recruit volunteers and pay them for their experimental performance in Linden dollars).

He discussed, in detail, two experimental areas: 1) foraging behavior; and 2) imitation/innovation.

In the foraging experiments, human subjects try to forage as much virtual food as possible. Food packets are dropped on screen every 4/N seconds, where N is the number of human subjects. The food is generally dispersed in two circular areas of the same size, and the distribution of food between the two areas is experimentally varied: 50/50, 65/35, or 80/20. When a subject moves to a square containing a food packet, he or she acquires the food and the packet disappears. Participants play under various conditions in which the other food packets and/or the other players’ positions are visible or invisible. Rob finds some inefficiencies in foraging, especially in the invisible condition (where you can see neither where the other players are nor where the food is). Specifically, they find extra scatter (respondents are not closely honed in on where the food is), undermatching (too many people hang out at the less plentiful food site, even though they would harvest more food at the richer site), and population cycles. The population cycles refer to the fact that people’s desire to avoid crowds actually leads them to greater crowding: in the visible condition, people are tempted to move to another site, but they observe others moving to that site and talk themselves out of moving; in the invisible condition, without this feedback loop of others’ behavior, they are all more likely to move and hence recreate the crowding they sought to avoid. On the undermatching front, it looks as though evolution favored those who avoided more populated food sites (perhaps because food was more scarcely distributed and there were lower returns to everyone being at the same site), and thus we are conditioned to avoid the more popular location even when going there would yield a greater harvest.
(While the experiment was narrowly about food harvesting, it could equally well apply to things like traffic patterns.) They also observed a certain inertia: people, all things being relatively equal, were more likely to stay where they were than to move to another square in the foraging game. Rob also notes that there are greater inequalities of outcome in the invisible condition, especially with an 80/20 distribution of food, since people who happen upon the food stashes are likely to acquire a lot of it while others are still exploring to find the food sources. And finally, they observed the importance of knowledge: for example, when the location of other agents was visible but the food packets were not, people exhibited “buzzarding” behavior, taking the presence of other agents as an indication that food was prevalent there, which produced greater herding. (This is reminiscent of Communist Russia, where people would instantly get in line when they saw a long line outside a store, assuming the store must be offering some really good food.) A paper on this is available here and the simulation is available here.
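As a rough illustration of undermatching, here is a minimal two-site sketch. This is my own toy model, not Goldstone’s code; the switching rate and crowd-avoidance weight are invented. Agents distribute themselves between a rich site holding 80% of the food and a poor site holding 20%, with a bias against crowds:

```python
import random

def forage(rounds=1000, n_agents=20, rich_share=0.8, avoid_crowds=0.3, seed=0):
    """Toy two-site model of undermatching (illustrative parameters only).

    Agents sit at either a rich site (rich_share of the food) or a poor
    site. Occasionally an agent reconsiders its location, drawn toward
    the rich site but repelled by crowding there. Returns the mean
    fraction of agents at the rich site."""
    rng = random.Random(seed)
    at_rich = [rng.random() < 0.5 for _ in range(n_agents)]
    occupancy = 0.0
    for _ in range(rounds):
        occupancy += sum(at_rich) / n_agents
        for i in range(n_agents):
            if rng.random() < 0.05:  # chance this agent reconsiders
                crowd = sum(at_rich) / n_agents
                # pull toward the food, tempered by crowd avoidance
                p_rich = rich_share - avoid_crowds * (crowd - 0.5)
                at_rich[i] = rng.random() < p_rich
    return occupancy / rounds

share_at_rich = forage()  # settles noticeably below the 0.8 food share
```

With perfect matching, 80% of agents would sit at the rich site; in this sketch the crowd-avoidance term pulls the equilibrium down to roughly 73%, the undermatching pattern described above.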

They also did some experimentation with innovation and imitation. Groups need both to prosper: with too much innovation you don’t get effective dissemination of good new ideas, and with too much imitation you underinvest in exploring new ideas and better solutions. He tested how much of each occurs in four different types of networks:

1) Lattice: a world in which everyone is connected to their immediate neighbors and perhaps a near neighbor (it looks like a ring network in which people are connected to near, and not only immediate, neighbors)

2) Fully connected: everyone is connected to everyone else

3) Random: people have the same number of links as in the lattice, but it is random to whom they are connected

4) Small world: basically a lattice network (where you know your neighbors) but with a few random links thrown in that dramatically shorten the distance between any two actors in the network
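The four conditions can be sketched in plain Python as adjacency sets (the sizes and rewiring probability here are arbitrary; the talk’s actual network parameters weren’t given):

```python
import random

def lattice(n, k):
    """Ring lattice: each node linked to its k nearest neighbors (k even)."""
    return {i: {(i + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0}
            for i in range(n)}

def fully_connected(n):
    """Everyone connected to everyone else."""
    return {i: set(range(n)) - {i} for i in range(n)}

def random_net(n, k, seed=0):
    """Same number of edges as the lattice, but random endpoints."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    edges_needed = n * k // 2
    while sum(len(v) for v in adj.values()) // 2 < edges_needed:
        a, b = rng.sample(range(n), 2)
        adj[a].add(b); adj[b].add(a)
    return adj

def small_world(n, k, p=0.1, seed=0):
    """Lattice with each edge rewired to a random endpoint w.p. p."""
    rng = random.Random(seed)
    adj = lattice(n, k)
    for i in range(n):
        for j in list(adj[i]):
            if j > i and rng.random() < p:
                choices = set(range(n)) - {i} - adj[i]
                if choices:
                    new = rng.choice(sorted(choices))
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj
```

The small-world construction follows the familiar Watts–Strogatz recipe: start from the lattice and randomly rewire a small fraction of links, which preserves the edge count while sharply shrinking path lengths.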

They asked participants to maximize points against a hidden function (graph) that had either a single peak or three peaks. Participants played 15 rounds; at the end of each round they learned their own score, along with the guesses and scores of the others in the network to whom they were connected.

Results: for a single-peaked problem, the fully connected network does best earliest, since it disseminates information quickly, but the other networks catch up over the 15 rounds. For a 3-peaked problem, the small world does better: the fully connected network leads to premature bandwagoning, in which participants settle on a local maximum rather than the global peak, while the small world combines some dissemination with enough local niches to permit continued experimentation and innovation. Participants were more likely to explore early on than later, yet (strangely enough) less likely to imitate in late rounds. For really hard problems, like a needle-in-a-haystack type of function, the lattice network actually does best because it fosters the most innovation. Rob acknowledged that in these experiments the fully connected condition, since you learn the guesses of all the other participants, may lead to information overload. The results are consistent with some earlier research by my colleague David Lazer. One questioner noted that a fully connected network may be bad for the Delphi decision-making process, since learning everyone’s views too early may inhibit innovation in the exploration of options and solutions. Rob noted that to configure a work team effectively you need to know both something about the nature of the problem being solved (is it closer to the one-peak problem, the 3-peak problem, or the needle in a haystack?) and something about the disposition of team members (are they naturally inclined to imitate or to innovate?).
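A hedged sketch of the guessing game may make the imitate-versus-explore tradeoff concrete. This is my reconstruction, not the actual experiment: the exploration rate, perturbation size, and score range are invented. Each round, every agent either imitates the best guess it can see or perturbs its own:

```python
import random

def simulate(adj, f, rounds=15, p_explore=0.3, lo=0.0, hi=100.0, seed=0):
    """Toy imitate-vs-explore game (exploration rate and noise invented).

    Each agent holds a guess scored by a hidden function f. Each round
    an agent either explores (perturbs its own guess) or imitates the
    best-scoring guess among itself and its network neighbors. Returns
    the best score in the group after the final round."""
    rng = random.Random(seed)
    n = len(adj)
    guess = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(rounds):
        new = []
        for i in range(n):
            if rng.random() < p_explore:  # innovate: small local step
                g = min(hi, max(lo, guess[i] + rng.gauss(0, 5)))
            else:  # imitate: copy the best visible guess
                visible = [i] + sorted(adj[i])
                g = guess[max(visible, key=lambda j: f(guess[j]))]
            new.append(g)
        guess = new
    return max(f(g) for g in guess)

def hidden(x):
    # single-peaked payoff, peak at x = 70 (scores are <= 0)
    return -abs(x - 70)

full = {i: set(range(10)) - {i} for i in range(10)}
best = simulate(full, hidden)  # group homes in near the peak
```

Swapping in the lattice, random, or small-world adjacency maps, or a multi-peaked `f`, lets you reproduce the qualitative pattern above: dense networks converge fast but can bandwagon, sparse ones keep exploring.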
David Lazer raised an interesting point: in networks where certain people play a disproportionately critical role in sharing information (bridgers in a small-world network or hubs in a scale-free network), there may be greater variation in results depending on how effective that bridger or hub is at sharing key insights; if that person basically does what he or she wants and ignores others’ learning, it effectively dismantles a key portion of the information sharing and potential imitation. The paper on innovation/imitation is available here.

Rob’s perception lab, where one can participate in experiments, can be found here. Rob noted that when there are not enough live volunteers, the volunteers play against bots based on models of human behavior; Rob’s goal is an adapted form of the Turing Test, in which volunteers don’t know whether they are playing against humans or bots.

After the presentation Rob explained that he is also doing some experimental work on the commons. Together with Elinor Ostrom, he has an experiment on foraging in which, if food is left unharvested, more food can grow in adjacent cells. Obviously this depends on the food not being immediately harvested. Their game enables participants to develop rules for working together and to agree to limit harvesting rights; under such regimes groups typically establish property rights and do better long-term. But Rob said he was discouraged and surprised that even participants who took part in such a commons experiment immediately ignore that learning when they then play the foraging game (in a variant where new food does not continually reappear), acting on their own short-term self-interest, with the result that the food in the world is quickly overharvested and nothing can grow in later rounds. The participants bemoan the outcome but feel powerless: they believe others will also overharvest, so they want to get in while the going is good; a classic prisoner’s dilemma outcome.
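In the spirit of that regrowth design, here is a toy commons (my construction, not the actual Goldstone/Ostrom experiment; all parameters are invented): food on a ring of patches can reseed neighboring patches, so restrained harvesting out-yields greedy harvesting over repeated rounds.

```python
import random

def commons(n_agents=5, patches=50, rounds=20, p_harvest=0.9, seed=0):
    """Toy regrowing commons (illustrative, not the actual experiment).

    Patches sit on a ring; an empty patch regrows if a neighboring
    patch still has food. Each round, each agent scans the patches and
    harvests each food-bearing one with probability p_harvest. Returns
    the total amount harvested."""
    rng = random.Random(seed)
    food = [True] * patches  # all patches start with food
    harvested = 0
    for _ in range(rounds):
        # regrowth: an empty patch regrows if a neighbor still has food
        food = [food[i] or food[i - 1] or food[(i + 1) % patches]
                for i in range(patches)]
        # harvest: each agent takes food greedily or with restraint
        for _ in range(n_agents):
            for i in range(patches):
                if food[i] and rng.random() < p_harvest:
                    food[i] = False
                    harvested += 1
    return harvested

greedy = commons(p_harvest=0.9)      # strip the commons in round one
restrained = commons(p_harvest=0.2)  # leave seed stock behind
```

The greedy run wipes out the food early, after which nothing regrows; the restrained run keeps surviving patches reseeding their neighbors and so accumulates a much larger total yield, which is the tragedy Rob’s participants bemoaned.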

He also briefly discussed the concept of stigmergy (basically, individuals in an ecosystem alter the ecosystem for others, and hence alter others’ behavior). One can think of people making a shortcut across the grass, which, as the grass is worn down, induces others to follow, or of the pheromones an ant lays down along its trail that induce others to follow its path. Rob said one can visualize a jungle where the first intrepid explorer machetes a path through (at great effort), which induces more people to follow, which leads to a trail, which leads others to put down a pebble base, which leads to a road, which leads to a 4-lane highway. In this typology, for better or worse, we can visualize human progress. Rob Goldstone noted that viral popularity (which I’ve written about earlier) has some of these same properties. In other words, by buying books on Amazon or watching videos on YouTube, we lay down pheromones that say this is good, or that people who like X also like Y, and Amazon’s recommendation programs or YouTube’s viral-video rankings watch these digital traces and use them to induce others to follow our paths. [Incidentally, I gather such shortcut paths are known as “desire paths”; sample images here.]
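The pheromone feedback loop can be sketched as a two-path choice model (my illustration; the deposit and evaporation rates are invented): walkers pick a path in proportion to its pheromone level and deposit more on the path they chose, while evaporation decays both, so random early asymmetries get reinforced rather than washed out.

```python
import random

def stigmergy(rounds=200, deposit=1.0, evaporation=0.01, seed=0):
    """Minimal pheromone feedback over two initially equal paths.

    Each walker picks a path with probability proportional to its
    pheromone level, deposits on the chosen path, and both levels
    decay. Returns the final pheromone levels."""
    rng = random.Random(seed)
    pher = [1.0, 1.0]  # both paths start equally attractive
    for _ in range(rounds):
        p0 = pher[0] / (pher[0] + pher[1])
        path = 0 if rng.random() < p0 else 1
        pher[path] += deposit            # lay down a trace
        pher = [x * (1 - evaporation) for x in pher]
    return pher

pher = stigmergy()  # the two levels drift apart through feedback
```

The same loop is a fair cartoon of recommendation systems: purchases and views are the deposits, and the ranking algorithm is the path choice weighted by accumulated traces.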

