
Advances in social capital measurement (UPDATED 4/12/12)

Here is an update on our great progress on social capital measurement.

We should begin with a word about the concept of social capital measurement in general. Since social capital refers to the value of social networks, in principle if you were going to measure social capital, you’d ask everyone to detail all their friends, contacts, and acquaintances and then ask them all sorts of questions about these folks (the demographics of each friend, how frequently they contact each person, for what purposes, what they could use these ties for, etc.). This approach is employed by social network academics and is practical for a business work group or a university class, but it is far too time-consuming for a city or a country. [One interesting area, on which I have blogged before in “Life In The Network” and “Life In the Network II,” is the emerging field of digital traces, where digital footprints like one’s e-mails, call logs, locations recorded through GPS/bluetooth devices in cellphones, etc. might collectively reveal our social networks on a grand scale without requiring such detailed surveying. It raises lots of privacy concerns, but it is certainly an area to watch. In principle, one could watch these networks dynamically change over time and, with demographic information about each person, could figure out which links are social bridges across various dimensions or how social patterns differ by demographic group. Some interesting work by David Lazer has found that one can use some of this information to gauge quite accurately who one’s work and social friends are. But these data are not generally available.]

Thus, for now, we have measured social capital at the individual level through proxies: volunteering, religious involvement, neighborliness, trust, participation and leadership in voluntary associations, philanthropy, political participation, etc. For more on the dimensions of social capital, click here. One can then aggregate individual-level data from random samples to the neighborhood, town, city, or state level to understand the social capital strengths and weaknesses of places and which places have greater overall connectedness. Of course, since the direct benefits of being in networks (job leads, lifetime earnings) differ from the spillover benefits that networks confer even on isolated individuals (lower crime, better-performing governments, lower corruption rates, better public health, etc.), not all residents of a high-social-capital community will get the same benefits if they are relying on others’ social capital rather than their own.

One of the things we’ve been pushing (given the strong connections of social capital with so many of these public goods) is government measurement of social capital.

The good news is that the US government has started annually measuring social capital on the Current Population Survey (the largest government survey other than the Census). While we’ve been urging this for a while among high-level government contacts, the two key breakthroughs were a meeting between Robert Putnam and President George W. Bush, at which the President personally committed to making this happen, and then the extremely diligent work of Robert Grimm and Nathan Dietz at the Corporation for National and Community Service, working with the folks at the Bureau of Labor Statistics, with background help and reinforcement from the Saguaro Seminar. It is a terrific step forward for policy makers, civic leaders, academics, and citizens.  [We’re also grateful to the Ford Foundation and a consortium of about three dozen community foundations that partnered with us to measure social capital in 2000 and 2006, as the lessons learned from those surveys helped answer many of the questions that CNCS and the Bureau of Labor Statistics had.]

The US government began measuring volunteering annually on the Current Population Survey September supplement in 2000, added questions on attending a public meeting and working with neighbors to fix/improve something starting in 2006, and is now expanding the list by some 20 items starting in Fall 2008. The CPS is the gold standard of survey measurement and has a national sample size of about 57,000 households annually (although they obtain approximately 110,000 responses, since they ask about everyone 15 and older living in the household). The data are primarily used to construct monthly unemployment rates, and the survey has oversamples of larger cities. They plan to ask about volunteering and social capital every year (volunteering on the September CPS supplement and social capital mainly on the November CPS supplement).

As to the questions they are asking beyond volunteering: they ask about attending family dinners, working with neighbors to fix/improve something, attending a public meeting, talking to neighbors, talking to friends/family via the Internet, exchanging favors, and participation in various types of groups (school, religious, service/civic, sports/recreation, other). The survey does NOT contain some key social capital items (like religious attendance, generalized social trust, inter-racial trust, subjective wellbeing, etc.). These may be asked in future years, but there are no guarantees.

Much of this past social capital data has been made available on the Corporation for National and Community Service-sponsored website Volunteering in America [See blog post on that terrific new resource here.] or on the Civic Life in America website.

If you click on Select a City/State and then choose *All*, you can see all the cities for which they have enough volunteering data to develop reliable estimates. Over the next several years, they will have reliable social capital measurement for similar middle-sized cities, states, or regions of states.  This will function, as Robert Putnam calls it, as a “social capital seismograph,” always running in the background, that will be very useful to researchers looking for natural experiments: seeing how baseline levels of social capital shape the way two otherwise similar communities respond to different events (a major plant closing or a hurricane or…).

For those of you dealing with smaller geographies (rural areas, cities with populations under 100,000), I’m not positive that the CPS data, even when 3-4 years are lumped together, will produce reliable estimates for you. If you want to measure your social capital, you may have to figure out a way to band together with other community foundations or local groups to commission social capital surveys in your community, along with a national survey to compare those data to. You can also always e-mail the Corporation for National and Community Service and request a lower level of geography if they have it. They might be able to provide you with data they have already run but didn’t put on the website.

Three researchers at Penn State University (Anil Rupasingha, Stephen Goetz, and David Freshwater, hereafter RGF) developed county-level social capital measures that are reasonably good, based on the density of civic and non-profit organizations, voting turnout, and census completion rates, among other factors. [You should note, however, that we found a higher correlation, r=.37, between our social capital measures in the 2000 Social Capital Community Benchmark Survey and the RGF county measures than the Corporation for National and Community Service did in their analysis of their own social capital measures against the RGF data at the MSA level.]
These data are available for 1990, 1997, and 2005.  If you want the RGF data, you can download these county-level data here:
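To make the general idea concrete, here is a hypothetical sketch of how one might combine county-level proxies like these into a single index. The counties, variable names, and equal-weight z-score averaging below are all my own illustration; the published RGF index combines more variables with a different weighting scheme.

```python
# Hypothetical illustration (not the RGF method itself) of building a county-level
# composite index from proxies like associational density, voter turnout, and
# census response rates. All values and names below are made up.
import pandas as pd

counties = pd.DataFrame({
    "county": ["A", "B", "C"],
    "assns_per_10k": [12.1, 7.4, 9.8],     # civic & non-profit orgs per 10,000 residents
    "voter_turnout": [0.61, 0.48, 0.55],   # share of voting-age population voting
    "census_response": [0.74, 0.63, 0.69], # census completion rate
})

proxies = ["assns_per_10k", "voter_turnout", "census_response"]
# Standardize each proxy, then average with equal weights (a simple sketch only)
z_scores = (counties[proxies] - counties[proxies].mean()) / counties[proxies].std()
counties["social_capital_index"] = z_scores.mean(axis=1)

print(counties[["county", "social_capital_index"]])
```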

A note to the wise: I would urge that you NOT try to compare local social capital data that you gather to these CPS measures. CPS numbers typically come out far LOWER and LESS civic than what you would get in a phone survey, both because the government survey is not framed around community or civic engagement and because its far higher response rate means it hears more from people who are uninterested in civic engagement. The CPS numbers are probably more accurate, but for these reasons they are hard to compare with what you would get from a phone survey.

If you are interested in doing your own survey, you can, as always, find a copy of our Short Form Social Capital Survey on our website. We ask that you e-mail us if you do use the Short Form so we can keep track of who is using it.

The latest social capital survey we administered was the 2006 Social Capital Community Survey. The national benchmark banners (what proportion of the total, of men, of women, etc. gave various answers to the questions) are also available.  We also asked a lot of social capital questions (with lots of questions on religion) in our 2006 Faith Matters Survey, available here.

For more information on social capital measurement in general, visit here.

See related blog post “US expands social capital measures.”

Hive Intelligence

I heard an interesting talk by Rob Goldstone from Indiana University on 3/3/08, courtesy of the Program on Networked Governance at Harvard’s Kennedy School.

Rob studies how groups of individuals, each performing their own actions, exhibit group-level properties, what could be called “hive intelligence.” In other words, without a dictator telling each individual how to act, there may still be observable and interesting patterns that emerge. (Each individual contributes to the overall pattern, even though no one individual’s behavior dictates it and the result may be unforeseen by any individual.) This “hive intelligence” shows up, for example, in the observable patterns of where Saguaros send out branches (close to our heart), the distribution of sunflower seeds, the stripes on zebras, and other natural phenomena, but it can also be found in human behavior (imitation, traffic patterns, pedestrian traffic, etc.). He studies this largely through experiments (in laboratories, over the Internet, and in virtual worlds like Second Life, where he notes it is very easy to recruit volunteers and pay them for their experimental performance in Linden dollars).

He discussed, in detail, two experimental areas: 1) foraging behavior; and 2) imitation/innovation.

In the foraging experiments, human subjects try to forage as much virtual food as possible. Food packets are dropped onto the screen every 4/N seconds, where N is the number of human subjects. The food is generally dispersed across two circular areas of the same size, and the split of food between the two areas is experimentally varied to be 50/50, 65/35, or 80/20. When a subject moves to a square containing a food packet, he/she acquires the food and the packet disappears. Participants play under various conditions in which the food packets and/or the other players’ positions are either visible or invisible.

Rob finds some inefficiencies in foraging, especially in the invisible condition (where you can see neither where the other players are nor where the food is). Specifically, they find extra scatter (respondents are not as closely honed in on where the food is), undermatching (relatively too many people linger at the poorer food site, even though they would harvest more at the richer site), and population cycles. The population cycles refer to the fact that people’s desire to avoid crowds actually leads to greater crowding: in the visible condition, people are tempted to move to another site, but they observe others moving there and talk themselves out of moving; in the invisible condition, without this feedback from others’ behavior, they are all more likely to move and hence recreate the crowding they sought to avoid. On the undermatching front, it looks like evolution must have favored those who avoided more populated food sites (perhaps because food was more scarcely distributed, and hence there were lower returns from everyone being at the same site), and thus we are conditioned to avoid the more popular location even when going there would yield a greater harvest. (Obviously, while the experiment was narrowly about food harvesting, it could equally well apply to things like traffic patterns, etc.) They also observed a certain level of inertia: people, all things being relatively equal, were more likely to stay where they were than to move to another square in the foraging game. Rob also notes that there are greater inequalities of outcomes in the invisible condition, especially with an 80/20 distribution of food, since people who happen upon the food stashes are likely to acquire a lot of it while others are still exploring to find the food sources. And finally, they observed the importance of knowledge: for example, when the locations of other agents were visible but the food packets were not, people exhibited “buzzarding” behavior, assuming that the presence of other agents indicated plentiful food there, which produced greater herding. (This is reminiscent of Communist Russia, where people would instantly get in line when they saw a long line outside a store, assuming that the store must be offering some really good food.) A paper on this is available here and the simulation is available here.
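To make the setup concrete, here is a minimal simulation sketch of the two-patch foraging game. It is my own toy model rather than Goldstone’s code: the agent count, number of rounds, inertia parameter, and move rule are all assumptions. With these parameters the per-round shares tend to oscillate around the matching point, illustrating the population cycles described above, though the toy model does not try to reproduce the undermatching that human groups show.

```python
import random

# Toy two-patch foraging model (my own sketch, not Goldstone's code).
# Agents choose between patch 0 and patch 1 each round; food is split 80/20
# between the patches, and an agent's per-capita payoff is its patch's food
# divided by how many agents are there.

N_AGENTS = 20
ROUNDS = 200
FOOD_SPLIT = (0.8, 0.2)   # share of each round's food landing in patch 0 vs. patch 1
FOOD_PER_ROUND = 10
INERTIA = 0.7             # probability an agent simply stays where it is

def simulate():
    patch = [random.randint(0, 1) for _ in range(N_AGENTS)]  # each agent's current patch
    shares = []
    for _ in range(ROUNDS):
        counts = [patch.count(0), patch.count(1)]
        # per-capita payoff in each patch, given last round's crowding
        payoff = [FOOD_PER_ROUND * FOOD_SPLIT[p] / max(counts[p], 1) for p in (0, 1)]
        for i in range(N_AGENTS):
            if random.random() < INERTIA:
                continue                                    # inertia: stay put
            patch[i] = 0 if payoff[0] >= payoff[1] else 1   # move toward the better payoff
        shares.append(patch.count(0) / N_AGENTS)            # oscillates round to round ("population cycles")
    return sum(shares[ROUNDS // 2:]) / (ROUNDS // 2)        # average share at the rich patch, late rounds

if __name__ == "__main__":
    # Perfect matching (the ideal free distribution) would put 80% of agents at
    # the rich patch; human groups in the experiments typically undermatch.
    print(f"Average share of agents at the 80% patch: {simulate():.2f}")
```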

They also did some experimentation with innovation and imitation. Groups need both to prosper: with too much innovation you don’t get effective dissemination of good new ideas, and with too much imitation you underinvest in exploring new ideas and better solutions. He tested how much of each occurs in four different types of networks (a small code sketch of these topologies follows the list below):

1) Lattice: a world in which everyone is connected to their immediate neighbors and maybe one near neighbor (looks like a ring network with people also connected to near and not only immediate neighbors)
2) Fully connected (everyone connected to everyone else)

3) Random: people have the same number of links as in the lattice, but whom they are connected to is random

4) Small world: basically a lattice network (you know your neighbors) but with a few random links thrown in that dramatically shorten the distance between any two actors in the network
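For readers who want to play with these structures, here is a minimal sketch (mine, not the experimenters’ setup) of the four topologies using the networkx library; the node count and degree are illustrative assumptions.

```python
# Illustrative construction of the four network topologies with networkx.
import networkx as nx

n, k = 16, 4  # 16 participants, each with about 4 links (assumed values)

networks = {
    # 1) Lattice: a ring where each node links to its k nearest neighbors
    "lattice": nx.watts_strogatz_graph(n, k, p=0.0),
    # 2) Fully connected: everyone linked to everyone else
    "fully connected": nx.complete_graph(n),
    # 3) Random: roughly the same number of edges as the lattice, wired at random
    "random": nx.gnm_random_graph(n, n * k // 2),
    # 4) Small world: the lattice with a few links randomly rewired, which
    #    dramatically shortens the typical distance between any two nodes
    "small world": nx.watts_strogatz_graph(n, k, p=0.1),
}

for name, g in networks.items():
    if nx.is_connected(g):
        print(f"{name:16s} edges={g.number_of_edges():3d} "
              f"avg path length={nx.average_shortest_path_length(g):.2f}")
    else:
        print(f"{name:16s} edges={g.number_of_edges():3d} (disconnected sample)")
```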

Participants were asked to maximize points, and their score each round was determined by a hidden function (a graph with either a single peak or 3 peaks) of their guess. Participants played 15 rounds; at the end of each round they learned their own score, the guesses of the others in the network to whom they were connected, and the scores those others received.

Results: for a single-peaked problem, the fully connected network does best earliest, since it disseminates information quickly, but the other networks catch up over the 15 rounds. For a 3-peaked problem, the small world does better. The fully connected network leads to premature bandwagoning, in which the group settles on a local maximum rather than the global peak; the small world combines some dissemination with enough local niches to permit continued experimentation and innovation. Participants were more likely to explore early on than later and, strangely enough, less likely to imitate in late rounds. For really hard problems, like a needle-in-a-haystack type of function, the lattice network actually does best because it fosters the most innovation. Rob acknowledged that in these experiments the fully connected condition, since you learn the guesses of all the other participants, may lead to information overload. The results are consistent with some earlier research by my colleague David Lazer.

One questioner noted that a fully connected network may be bad for the Delphi decision-making process, since learning everyone’s views too early may inhibit innovation in the exploration of options/solutions. Rob noted that to effectively configure a work team you need to know both something about the nature of the problem being solved (is it closer to the one-peak solution, the 3-peak solution, or the needle in a haystack?) and something about the disposition of team members (are they naturally inclined to imitate or to innovate?). David Lazer raised an interesting point: in networks where certain people play a disproportionately critical role in sharing information (bridgers in a small-world network or hubs in a scale-free network), there may be greater variation in results depending on how effective that bridger or hub is at sharing key insights; if that person basically does what he/she wants and ignores others’ learning, it effectively dismantles a key portion of the information sharing and potential imitation. A paper on innovation/imitation is available here.
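As a rough illustration of the imitate-versus-innovate dynamic described above, here is a toy model (my own construction, not the published experiment): agents on a network repeatedly guess a number scored by a hidden three-peaked function, and each round either copy the best-scoring guess they can see or perturb their own. The payoff landscape, imitation probability, and network sizes are all assumptions, and depending on parameters the toy model may or may not reproduce the bandwagoning result.

```python
import random
import networkx as nx

def payoff(x):
    # A hidden "3-peak" landscape on [0, 100]; the global peak is at x = 80 (assumed shape).
    peaks = [(20, 40), (50, 60), (80, 100)]  # (location, height)
    return max(height - abs(x - loc) for loc, height in peaks)

def play(graph, rounds=15, p_imitate=0.6):
    guesses = {v: random.uniform(0, 100) for v in graph}
    for _ in range(rounds):
        scores = {v: payoff(g) for v, g in guesses.items()}
        new_guesses = {}
        for v in graph:
            candidates = list(graph[v]) + [v]          # you see your neighbors' guesses plus your own
            best = max(candidates, key=scores.get)     # best-scoring guess you can see
            if random.random() < p_imitate:
                new_guesses[v] = guesses[best]         # imitate
            else:
                new_guesses[v] = min(100, max(0, guesses[v] + random.gauss(0, 5)))  # innovate
        guesses = new_guesses
    return sum(payoff(g) for g in guesses.values()) / len(guesses)

if __name__ == "__main__":
    n, k, trials = 16, 4, 50
    nets = {"fully connected": nx.complete_graph(n),
            "small world": nx.watts_strogatz_graph(n, k, p=0.1)}
    for name, g in nets.items():
        avg = sum(play(g) for _ in range(trials)) / trials
        print(f"{name:16s} average final payoff: {avg:.1f}")
```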

Rob’s perception lab, where one can participate in experiments, can be found here. Rob noted that when there are not enough live volunteers, the volunteers play against ’bots based on models of human behavior; Rob’s goal is an adapted form of the Turing Test, in which volunteers don’t know whether they are playing against humans or ’bots.

After the presentation Rob explained that he is also doing some experimental work on the commons. Together with Elinor Ostrom, he has a foraging experiment in which, if food is left unharvested, other food can grow in adjacent cells; obviously this depends on the food not being immediately harvested. Their game enables participants to develop rules for working together and to agree to limit harvesting rights; groups that adopt such regimes typically establish property rights and do better long-term under such a system. But Rob said he was discouraged and surprised that even participants who have just been through such a commons experiment will immediately ignore what they learned when they then play the foraging game (in a variant where new food does not continually reappear) and act on their own short-term self-interest, with the result that the food in the world is quickly overharvested and nothing else can grow in later rounds. The participants bemoan the outcome but feel powerless, and since they believe that others will also overharvest, they want to get in while the going is good: a classic prisoner’s dilemma outcome.

Rob also briefly discussed the concept of stigmergy (basically, where individuals in an ecosystem alter the ecosystem for others, and hence alter their behaviors). One can think of people making a shortcut across the grass, which, as the grass is worn down, induces others to follow, or of the pheromones an ant lays down along its trail that induce others to follow its path. Rob said one can visualize a jungle where the first intrepid explorer machetes a path through (at great effort), and this induces more people to follow, which leads to a trail, which leads others to put down a pebble base, which leads to a road, which leads to a 4-lane highway. In this typology, for better or worse, we can visualize human progress. Rob Goldstone noted that viral popularity (which I’ve written about earlier) has some of these same properties. In other words, by buying books on Amazon or watching videos on YouTube, we lay down pheromones that say that this is good, or that people who like X also like Y, and the recommendation programs on Amazon or the viral-video rankings on YouTube watch these digital traces and use them to induce others to follow our paths. [Incidentally, I gather such shortcut paths are known as “desire paths”; sample images here.]
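For the curious, here is a toy sketch (my own, not anything Rob or Amazon actually runs) of how such digital pheromones can drive recommendations: items that co-occur in earlier users’ baskets get suggested to the next user. The book titles and baskets are made up for illustration.

```python
# Toy "people who liked X also liked Y" recommender built from co-occurrence
# counts in earlier users' choices (the digital pheromone trail).
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets left behind by earlier users
baskets = [
    {"bowling_alone", "better_together"},
    {"bowling_alone", "american_grace"},
    {"bowling_alone", "better_together", "american_grace"},
    {"wealth_of_networks", "american_grace"},
]

co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1          # how often each pair was chosen together

def recommend(item, top_n=2):
    scores = Counter()
    for (a, b), n in co_counts.items():
        if item == a:
            scores[b] += n
        elif item == b:
            scores[a] += n
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("bowling_alone"))       # items most often co-chosen with it
```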