Ranking Agendas for Negotiations
Negotiators are hashing out the agenda for a summit in which each will make costly concessions to help the others. Should the summit focus on pollution, trade tariffs, or disarmament? This is a theory to help them decide based on marginal costs and benefits, without transferable utility.
Consider a negotiation in which agents will make costly concessions to
benefit others—e.g., by implementing tariff reductions, environmental
regulations or disarmament policies. An agenda specifies which issue or
dimension each agent will make concessions on; after an agenda is fixed, the negotiation comes down to the magnitude of each agent's
contribution. We seek a ranking of agendas based on the marginal costs
and benefits generated at the status quo, which are captured in a
matrix for each agenda. In a transferable utility (TU) setting, there
is a simple ranking based on the best available social return per unit
of cost (measured in the numeraire). Without transfers,
the problem of ranking agendas is more difficult, and we take an
axiomatic approach. First, we require the ranking not to depend on
economically irrelevant changes of units. Second, we require that the
ranking be consistent with the TU ranking on problems that are
equivalent to TU problems in a suitable sense. The unique ranking
satisfying these axioms is represented by the spectral radius
(Frobenius root) of a matrix closely related to the Jacobian, whose
entries measure the marginal benefits per unit marginal cost agents can
confer on one another.
1, 2014. Current
version: February 22, 2015. Submitted.
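As a rough illustration of the ranking described above, here is a minimal sketch (not code from the paper) that computes the spectral radius, i.e., the Frobenius root, of a benefit-per-unit-cost matrix for two hypothetical agendas and orders the agendas by it. The agenda names and matrix entries are invented for the example.

```python
import numpy as np

def spectral_radius(B):
    """Frobenius root: the largest modulus among the eigenvalues of B."""
    return max(abs(np.linalg.eigvals(B)))

# Hypothetical agendas: entry (i, j) is the marginal benefit agent i receives
# per unit of marginal cost incurred by agent j, evaluated at the status quo.
agendas = {
    "tariffs":     np.array([[0.0, 0.8, 0.3],
                             [0.5, 0.0, 0.6],
                             [0.4, 0.7, 0.0]]),
    "disarmament": np.array([[0.0, 0.2, 0.9],
                             [0.9, 0.0, 0.1],
                             [0.3, 0.5, 0.0]]),
}

# Rank the agendas by the Frobenius root of their benefit/cost matrices.
for name, B in sorted(agendas.items(), key=lambda kv: -spectral_radius(kv[1])):
    print(f"{name}: spectral radius = {spectral_radius(B):.3f}")
```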
A Network Approach to Public Goods (with Matthew Elliott)
R&R at the Journal of Political Economy
Eigenvalues are a natural way to measure whether an
economic system is at an efficient point, and eigenvector centrality
relates naturally to efficient negotiated outcomes.
We demonstrate these connections in a simple model of investment with
externalities, without parametric assumptions.
Suppose each of several agents can exert costly effort that creates
nonrival, heterogeneous benefits for some others. For any possible
outcome, we define a weighted, directed network describing marginal
externalities, and argue that its structure sheds new light on
negotiated outcomes. The Pareto efficient outcomes are those which make
the largest eigenvalue of the network equal to 1. We use this to
identify the essential agents for achieving Pareto improvements, and
when a negotiation can be divided into smaller ones without much loss.
How central agents are in this network, according to a standard
measure, also relates to negotiated outcomes: in any Lindahl
equilibrium, contributions are proportional to centralities.
November 2012. Current version: December 12, 2015. Submitted. [non-SSRN download]
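A minimal numerical sketch of the two connections highlighted above, using a made-up matrix of marginal externalities (not taken from the paper): per the abstract, an outcome is Pareto efficient when the spectral radius of this matrix equals 1, and Lindahl contributions are proportional to eigenvector centralities. The matrix entries, and the choice of orientation and normalization for the eigenvector, are assumptions made for the example.

```python
import numpy as np

# Hypothetical matrix of marginal externalities at some outcome: entry (i, j)
# is the marginal benefit to agent i per unit of costly effort by agent j.
B = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])

eigvals, eigvecs = np.linalg.eig(B)
k = np.argmax(np.abs(eigvals))            # index of the Frobenius root
rho = np.abs(eigvals[k])                  # spectral radius of the network
centrality = np.abs(np.real(eigvecs[:, k]))
centrality /= centrality.sum()            # normalize centralities to sum to one

# Per the abstract: efficient outcomes are those with spectral radius 1, and
# Lindahl contributions are proportional to these centralities.
print(f"spectral radius: {rho:.3f} (Pareto efficient outcomes have radius 1)")
print("eigenvector centralities:", np.round(centrality, 3))
```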
Financial Networks and Contagion
(with Matthew Elliott and Matthew O. Jackson)
American Economic Review,
104(10), October 2014
Diversification (more counterparties) and integration (deeper relationships with each counterparty) have different, non-monotonic effects on financial stability.
We model contagions and cascades of failures among organizations linked
through a network of financial interdependencies.
We identify how the network propagates discontinuous changes in asset
values triggered by failures
(e.g., bankruptcies, defaults, and other insolvencies) and use that to
study the consequences of integration (each organization becoming more dependent on its counterparties) and diversification (each organization interacting with a larger number of counterparties). Integration and diversification have different, nonmonotonic effects on the extent of cascades. Initial increases in diversification connect the network, allowing cascades to propagate further; but eventually, more diversification makes contagion between any pair of organizations less likely as they become less dependent on each other. Integration also faces tradeoffs: increased dependence on other organizations versus less sensitivity to own investments. Finally, we illustrate some aspects of the model with data on European debt cross-holdings.
September 2012.
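The toy simulation below is written only to illustrate the kind of cascade described above; it is far simpler than the model in the paper, and the cross-holdings matrix, failure threshold, and failure cost are all invented. A failure imposes a discontinuous loss on counterparties, which can push them below their own thresholds, and so on.

```python
import numpy as np

# C[i, j]: share of organization j's value held by organization i (invented).
C = np.array([[0.0, 0.3, 0.0, 0.2],
              [0.3, 0.0, 0.3, 0.0],
              [0.0, 0.3, 0.0, 0.3],
              [0.2, 0.0, 0.3, 0.0]])
outside_assets = np.array([1.0, 1.0, 1.0, 1.0])  # primitive asset values
threshold = 1.2       # value below which an organization fails
failure_cost = 0.8    # discontinuous loss a failure imposes on its claimants

def cascade(shock_to):
    """Wipe out one organization's assets and iterate failures to a fixed point."""
    assets = outside_assets.copy()
    assets[shock_to] = 0.0
    failed = np.zeros(len(assets), dtype=bool)
    while True:
        # Value = own assets + claims on others, where failed counterparties
        # are worth less by the discontinuous failure cost.
        values = assets + C @ (assets - failure_cost * failed)
        newly_failed = (values < threshold) & ~failed
        if not newly_failed.any():
            return failed
        failed |= newly_failed

print("organizations failing after a shock to org 0:", np.where(cascade(0))[0])
```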
How Homophily Affects the Speed of Learning and Best-Response Dynamics (with Matthew O. Jackson)
Quarterly Journal of Economics, 127(3), August 2012.
Segregation patterns in
networks seriously slow convergence to consensus behavior when agents'
choices are based on an average of neighbors' choices. When the process
is a simple contagion, homophily doesn't matter.
[Pre-print version]
We examine how the speed of learning and best-response processes depends
on homophily: the tendency of agents to associate disproportionately
with those having similar traits. When agents' beliefs or behaviors are
developed by averaging what they see among their neighbors, then
convergence to a consensus is slowed by the presence of homophily, but
is not influenced by network density (in contrast to other network
processes that depend on shortest paths). In deriving these results, we
propose a new, general measure of homophily based on the relative
frequencies of interactions among different groups. An application to
communication in a society before a vote shows how the time it takes
for the vote to correctly aggregate information depends on the
homophily and the initial information distribution.
November 24, 2008.
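As a hedged illustration of the mechanism described above (not the paper's formal results), the sketch below runs DeGroot-style averaging on two random two-group networks, one homophilous and one well mixed, and counts the rounds until beliefs are nearly at consensus. The group sizes, link probabilities, and tolerance are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_group_network(n, p_in, p_out):
    """Random network: two equal groups, link prob. p_in within, p_out across."""
    group = np.arange(n) < n // 2
    same = group[:, None] == group[None, :]
    A = (rng.random((n, n)) < np.where(same, p_in, p_out)).astype(float)
    A = np.triu(A, 1); A = A + A.T            # symmetric, no self-links
    A += np.eye(n)                            # give every agent some self-weight
    return A / A.sum(axis=1, keepdims=True)   # row-stochastic averaging matrix

def rounds_to_consensus(W, tol=1e-3, max_rounds=100_000):
    beliefs = rng.random(W.shape[0])
    for t in range(max_rounds):
        if beliefs.max() - beliefs.min() < tol:
            return t
        beliefs = W @ beliefs                 # average neighbors' beliefs
    return max_rounds

n = 100
homophilous = two_group_network(n, p_in=0.20, p_out=0.01)
well_mixed  = two_group_network(n, p_in=0.10, p_out=0.10)
print("rounds to consensus, homophilous network:", rounds_to_consensus(homophilous))
print("rounds to consensus, well-mixed network: ", rounds_to_consensus(well_mixed))
```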
Better Information Can Garble Experts' Advice
(with Matthew Elliott and Andrei Kirilenko)
Do we get better
advice as our experts get smarter? Two experts, who like to be right,
make predictions about whether an event will occur based on private
signals about its likelihood. It is possible for both
experts' information to improve unambiguously while the
usefulness of their advice to any third party unambiguously decreases.
We model two experts who must make predictions about whether an event
will occur or not. The experts receive private signals about the
likelihood of the event occurring, and simultaneously make one of a
finite set of possible predictions, corresponding to varying degrees of
alarm. The information structure is commonly known among the experts
and the recipients of the advice. Each expert's payoff depends on
whether the event occurs, her prediction, and possibly the prediction
of the competing expert. Our main result shows that when either or both
experts receive uniformly more informative signals, their predictions
can become unambiguously less informative. We call such information
improvements perverse. Suppose a third party wishes to use the experts'
recommendations to decide whether to take some costly preemptive action
to mitigate a possible bad event. The third party would then trade off
the costs of two kinds of mistakes: (i) failing to take action when the
event will occur; and (ii) needlessly taking the action when the event
will not occur. Regardless of how this third party trades off the
associated costs, he will be worse off after a perverse information
improvement. These perverse information improvements can occur when
each expert's payoff is independent of the other expert's predictions
and when the information improvement is due to a transfer of technology
between the experts.
November 21, 2010. Current
version: June 11, 2012.
Random Networks and Tipping Points in Network Formation
If agents form
networks in an environment of uncertainty, then arbitrarily small
changes in economic parameters (such as costs and benefits of linking)
can discontinuously change the properties
of the equilibrium networks, especially efficiency.
Agents invest costly effort to socialize. Their effort
levels determine the probabilities of relationships, which are valuable
for their direct benefits and also because they lead to other
relationships in a later stage of ``meeting friends of friends''. In
many network formation models, there is fundamental uncertainty at the
time of investment regarding
which friendships will form. The
equilibrium outcomes are random graphs, and we characterize how their
density, connectedness, and other properties depend on the economic
fundamentals. When the value of friends of friends is low, there are
both sparse and thick equilibrium networks. But as soon as this value
crosses a key threshold, the sparse equilibria disappear completely and
only densely connected networks are possible. This transition mitigates
an extreme inefficiency.
April, 2010. Current
version: November 2, 2010. Working paper.
Naive Learning in Social Networks and the Wisdom of Crowds (with Matthew O. Jackson)
In what networks do agents who learn very
naively get the right answer?
We study learning and influence in a setting where agents receive independent
noisy signals about the true value of a variable of interest and then
communicate according to an arbitrary social network. The agents
naively update their beliefs over time in a decentralized way by
repeatedly taking weighted averages of their neighbors' opinions.
We identify conditions determining whether the beliefs of all agents in
large societies converge to the true value of the variable, despite
their naive updating. We show that such convergence to truth
obtains if and only if the influence of the most influential agent in
the society is vanishing as the society grows. We identify
obstructions which can prevent this, including the existence of
prominent groups which receive a disproportionate share of attention.
By ruling out such obstructions, we provide structural conditions on
the social network that are sufficient for convergence to the truth.
Finally, we discuss the speed of convergence and note that whether or
not the society converges to truth is unrelated to how quickly a
society's agents reach a consensus.
January 14, 2007.
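A small sketch of the condition described above, using an invented listening matrix: the long-run influence weights are the left unit eigenvector of the row-stochastic matrix of weights agents place on one another, and the society is wise only if the largest such weight becomes negligible as the society grows. The "prominent agent" pattern below is one of the obstructions the abstract mentions; the numbers are made up.

```python
import numpy as np

def influence_weights(W):
    """Left unit eigenvector of row-stochastic W: each agent's long-run influence."""
    eigvals, eigvecs = np.linalg.eig(W.T)
    v = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
    return v / v.sum()

rng = np.random.default_rng(1)
n, true_value = 200, 0.5
signals = true_value + rng.normal(scale=0.3, size=n)   # independent noisy signals

# Invented listening structure: everyone puts extra weight on agent 0,
# a "prominent" agent who receives a disproportionate share of attention.
W = rng.random((n, n))
W[:, 0] += 5.0
W /= W.sum(axis=1, keepdims=True)

weights = influence_weights(W)
consensus = weights @ signals        # limiting belief under repeated averaging
print("largest influence weight:", round(float(weights.max()), 3))
print("consensus belief:", round(float(consensus), 3), " true value:", true_value)
```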
Using Selection Bias to Explain the Observed Structure of Internet Diffusions (with Matthew O. Jackson)
Proceedings of the National
Academy of Sciences, 107(24):10833-10836, June 15, 2010.
David Liben-Nowell and Jon Kleinberg observed
that the reconstructed family trees of chain letter petitions
are strangely tall and narrow. We show that this can be explained with
selection and observation biases
within a simple
model.
Recently, large data sets stored on the Internet have enabled the
analysis of processes, such as large-scale diffusions of information,
at new levels of detail. In a recent study, Liben-Nowell and Kleinberg
((2008) Proc Natl Acad Sci USA 105:4633-4638) observed that the flow of
information on the Internet exhibits surprising patterns whereby a
chain letter reaches its typical recipient through long paths of
hundreds of intermediaries. We show that a basic
Galton-Watson epidemic model combined with the selection bias of
observing only large diffusions suffices to explain the global patterns
in the data. This demonstrates that accounting for selection
biases of which data we observe can radically change the estimation of
classical diffusion processes.
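To illustrate the selection-bias point (a schematic sketch with an invented forwarding probability and observation threshold, not the calibration in the paper), the code below simulates many Galton-Watson diffusion trees and compares a typical tree with the trees that are large enough to be observed.

```python
import random
from collections import deque

random.seed(0)

def simulate_tree(p_forward=0.55, max_nodes=2000):
    """Galton-Watson diffusion: each recipient forwards to 0, 1, or 2 others."""
    size, depth = 1, 0
    frontier = deque([0])               # depths of nodes waiting to forward
    while frontier and size < max_nodes:
        d = frontier.popleft()
        children = sum(random.random() < p_forward for _ in range(2))
        size += children
        depth = max(depth, d + 1 if children else d)
        frontier.extend([d + 1] * children)
    return size, depth

trees = [simulate_tree() for _ in range(2000)]
observed = [t for t in trees if t[0] >= 100]   # only large diffusions get noticed

mean = lambda xs: sum(xs) / len(xs)
print("all trees:      mean size %7.1f, mean depth %6.1f"
      % (mean([s for s, _ in trees]), mean([d for _, d in trees])))
print("observed trees: mean size %7.1f, mean depth %6.1f"
      % (mean([s for s, _ in observed]), mean([d for _, d in observed])))
```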
Does Homophily Predict Consensus Times? Testing a Model of Network Structure via a Dynamic Process (with Matthew O. Jackson)
Review of Network Economics,
Network models forget most of the details of a network, focusing on
just a few dimensions of its structure. Can such models nevertheless
make good predictions about how a process would run on real networks,
in all their complexity?
We test theoretical results from Golub and Jackson (2012), which are based on a random network model, regarding
convergence of a learning/behavior-updating process. In particular, we
see how well those theoretical results match the process when it is
simulated on empirically observed high school friendship networks. This
tests whether a parsimonious random network model mimics real-world
networks with regard to predicting properties of a class of behavioral
processes. It also tests whether our theoretical predictions on
asymptotically large societies are accurate when applied to populations
ranging from thirty to three thousand individuals. We find that the
theoretical results account for more than half of the variation in
convergence times on the real networks. We conclude that a simple
multi-type random network model with types defined by simple observable
attributes (age, sex, race) captures aspects of real networks that are
relevant for a class of iterated updating processes.
Network Structure and the Speed of Learning: Measuring Homophily Based on its Consequences (with Matthew O. Jackson)
Annals of Economics and
Statistics, 107/108, 2012.
A measure of segregation in a network (in which less popular people
matter more) predicts quite precisely how long convergence of beliefs
will take under a naive process in which agents form their own beliefs
by averaging those of their neighbors.
Homophily is the tendency of people to associate relatively more with those who are similar to them than with those who are not. In Golub and Jackson (2012) we introduced degree-weighted homophily (DWH), a new measure of this tendency, and showed that it gives a lower bound on the time it takes for a certain best-response or learning process operating in a social network to converge. Here we
show that, in important
settings, the DWH convergence bound does substantially better than
bounds based on the Cheeger inequality. We also develop a new
bound on convergence time, tightening the relationship between DWH and
processes on networks. In doing so, we suggest that DWH is a natural
measure because it tightly tracks a key consequence of homophily
in updating processes.
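Since the abstract compares DWH with bounds based on the Cheeger inequality, here is a generic sketch of that second kind of bound, for orientation only: on an invented homophilous network, the relaxation time 1/(1 - lambda_2) of the averaging process is bounded below by 1/(2*phi(S)) for the across-group cut S. The DWH measure itself is not reproduced here, and all network parameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented homophilous two-group network (parameters chosen for illustration).
n, p_in, p_out = 100, 0.25, 0.02
group = np.arange(n) < n // 2
same = group[:, None] == group[None, :]
A = (rng.random((n, n)) < np.where(same, p_in, p_out)).astype(float)
A = np.triu(A, 1); A = A + A.T

deg = A.sum(axis=1)
# Spectrum of the symmetrized averaging matrix D^{-1/2} A D^{-1/2}, which has
# the same eigenvalues as the row-stochastic averaging matrix D^{-1} A.
lam = np.sort(np.linalg.eigvalsh(A / np.sqrt(np.outer(deg, deg))))
lam2 = lam[-2]                                   # second-largest eigenvalue

# Conductance of the across-group cut S, and the Cheeger-type lower bound on
# relaxation time: 1 / (1 - lambda_2) >= 1 / (2 * phi(S)).
cut = A[group][:, ~group].sum()
phi = cut / min(deg[group].sum(), deg[~group].sum())

print("relaxation time 1/(1 - lambda_2):", round(1 / (1 - lam2), 1))
print("Cheeger-type lower bound 1/(2*phi):", round(1 / (2 * phi), 1))
```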
The Leverage of Weak Ties: How
Linking Groups Affects Inequality
Arbitrarily weak bridges linking social groups can have arbitrarily large
consequences for inequality.
Centrality measures based on eigenvectors are important in models of
how networks affect investment decisions, the transmission of
information, and the provision of local public goods. We fully
characterize how the centrality of each member of a society changes
when initially disconnected groups begin interacting with each other
via a new bridging link. Arbitrarily weak intergroup connections can
have arbitrarily large effects on the distribution of centrality. For
instance, if a high-centrality member of one group begins interacting
symmetrically with a low-centrality member of another, the latter group
has the larger centrality in the combined network — in inverse
proportion to the centrality of its emissary! We also find that agents
who form the intergroup link, the ``bridge agents'', become relatively
more central within their own groups, while other intragroup centrality
ratios remain unchanged.
Current version: April 12, 2010. Working paper.
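The sketch below shows, on an invented example, the kind of exercise the abstract describes: two separate groups with their own eigenvector centralities are joined by a single symmetric bridge, and the centralities of the two groups are recomputed in the combined network. It is meant only to show how one would compute the comparison; the groups, the bridge, and the normalization are arbitrary choices, not the paper's construction.

```python
import numpy as np

def eigencentrality(A):
    """Eigenvector centrality: the Perron eigenvector of the adjacency matrix."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

# Two invented groups: a star (very unequal centralities) and a triangle.
star = np.array([[0, 1, 1, 1, 1],
                 [1, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0]], dtype=float)
triangle = np.array([[0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]], dtype=float)

# Combined network: block-diagonal, plus one symmetric bridge from the star's
# hub (its high-centrality member) to one member of the triangle.
n1, n2 = len(star), len(triangle)
A = np.zeros((n1 + n2, n1 + n2))
A[:n1, :n1], A[n1:, n1:] = star, triangle
A[0, n1] = A[n1, 0] = 1.0

c = eigencentrality(A)
print("total centrality of the star group:    ", round(float(c[:n1].sum()), 3))
print("total centrality of the triangle group:", round(float(c[n1:].sum()), 3))
```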
Firms, Queues, and Coffee Breaks: A Flow Model of Corporate Activity with Delays
(with R. Preston McAfee)
Review of Economic Design, 15(1), March 2011.
How and when to decentralize networked firms, in a model that takes into account 'human' features of processing.
The multidivisional firm is modeled as
a system of interconnected nodes that exchange continuous flows of
projects of varying urgency and queue waiting tasks. The main
innovation over existing models is that the rate at which waiting
projects are taken into processing depends positively on both the
availability of resources and the size of the queue, capturing a
salient quality of human organizations. A transfer pricing scheme for
decentralizing the system is presented, and conditions are given to
determine which nodes can be operated autonomously. It is shown that a
node can be managed separately from the rest of the system when all of
the projects flowing through it are equally urgent.
First version: May
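Below is a rough discrete-time sketch of the queueing feature highlighted in the abstract. The functional form of the service-start rate is entirely invented (the paper's model is in continuous time and more general); the point is only that projects are taken into processing faster when resources are free and the backlog is long.

```python
import numpy as np

def simulate_node(arrival_rate, capacity, steps=200):
    """One node: projects arrive, wait in a queue, and are taken into processing
    at a rate that increases in both free capacity and queue length."""
    queue, in_process, history = 0.0, 0.0, []
    for _ in range(steps):
        queue += arrival_rate                        # new projects arrive
        free = max(capacity - in_process, 0.0)
        # Invented functional form: service starts faster when the backlog is
        # long AND resources are free (the 'human' feature noted above).
        started = min(queue, 0.5 * free * queue / (1.0 + queue))
        queue -= started
        in_process += started
        in_process -= 0.25 * in_process              # projects finish at a fixed rate
        history.append(queue)
    return history

for lam in (0.5, 1.0, 1.5):
    q = simulate_node(arrival_rate=lam, capacity=4.0)
    print(f"arrival rate {lam}: long-run queue length approx {np.mean(q[-50:]):.2f}")
```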
Stabilizing Brokerage (with Katherine Stovel and Eva Meyersson Milgrom)
Proceedings of the National
Academy of Sciences, 108(Suppl. 4):21326-21332, December 2011.
Brokers facilitate transactions across gaps in social structure, and there are
many reasons for their position to be unstable.
Here, we take a look, from a sociological and an economic perspective,
at what institutions stabilize brokerage.
Expository, Teaching, Surveys
Learning in Social Networks
(with Evan Sadler)
The Oxford Handbook of the
Economics of Networks,
(Yann Bramoullé, Andrea Galeotti, and Brian Rogers, eds.), 2016
A broad overview
of two kinds of network learning models: sequential ones
in the tradition of information cascades and herding; iterated linear
updating models (DeGroot); and their variations,
foundations, and critiques. Ideal for a graduate course.
This survey covers models of how agents update behaviors and beliefs
using information conveyed through social connections. We begin with
sequential social learning models, in which each agent makes a decision
once and for all after observing a subset of prior decisions; the
discussion is organized around the concepts of diffusion and
aggregation of information. Next, we present the DeGroot framework of
average-based repeated updating, whose long- and medium-run dynamics
can be completely characterized in terms of measures of network
centrality and segregation. Finally, we turn to various models of
repeated updating that feature richer optimizing behavior, and conclude
by urging the development of network learning theories that can deal
adequately with the observed phenomenon of persistent disagreement.