All presentations are in PDF format; typical file size is 1 MB. The files represent the newest version; older talks may have differed (and sometimes had different titles). Please do not use these slides without my permission. Thanks!
Abstract:
The solution concept of ``correlated equilibrium''
allows for coordination in games.
For game trees with imperfect information, it gives rise to
NP-hard problems, even for two-player games without chance
moves.
We introduce the ``extensive form correlated equilibrium''
(EFCE), which extends Aumann's correlated equilibrium;
in an EFCE, coordination is achieved by signals that are
received ``locally'' at information sets.
An EFCE is polynomial-time computable
for two-player games without chance moves.
(Joint work with Francoise Forges.)
Abstract: We survey algorithms for finding equilibria of two-player games in strategic form and extensive form. The main tools are polyhedra that describe a player's mixed strategies and the other player's pure best responses. For extensive games, the exponential size of the strategic form is avoided by using the "sequence form", which has the same size as the game tree.
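For illustration, in standard notation (not necessarily that of the slides): for a bimatrix game with m x n payoff matrices A and B, scaled so that all entries are positive, the two best-response polytopes can be written as

    P = \{ x \in R^m : x \geq 0,\ B^T x \leq 1 \}, \qquad
    Q = \{ y \in R^n : A y \leq 1,\ y \geq 0 \}.

A pair (x, y) in P x Q other than (0, 0) is "completely labeled" if every pure strategy of either player is unplayed or a best response to the other player's strategy; after rescaling x and y to probability vectors, these pairs are exactly the Nash equilibria of the game.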
Abstract:
We describe algorithms for finding all Nash equilibria of a
two-player game in strategic form. We present two algorithms
that extend earlier work. Our presentation is
self-contained, and explains the two methods in a unified
framework using faces of best-response polyhedra. The
first method, lrsNash, is based on the known vertex
enumeration program lrs, for "lexicographic reverse search".
It enumerates the vertices of only one best-response
polytope, and, for each such vertex, the vertices of the
corresponding complementary face of the other polytope
(if that face is not empty). The second method is a
modification of the known EEE algorithm, for "enumeration of
extreme equilibria".
We report on computational experiments that compare the two
algorithms, which show that both have their strengths and
weaknesses.
(Joint work with David Avis, Gabriel Rosenberg,
and Rahul Savani.)
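For readers who want to experiment, here is a minimal Python sketch (an illustration only, not the lrsNash or EEE implementation; it assumes numpy and a nondegenerate game) that enumerates all equilibria of a small bimatrix game by checking every pair of equal-sized supports:

    # Support enumeration for a nondegenerate bimatrix game (A, B).
    # Illustration only -- NOT the lrsNash or EEE method described above.
    import itertools
    import numpy as np

    def mixed_on_support(M, rows, cols, tol):
        """Probability vector z supported on `cols` with (M z)_i equal
        for all i in `rows` and no other row doing better, or None."""
        k = len(cols)
        lhs = np.zeros((k + 1, k + 1))
        lhs[:k, :k] = M[np.ix_(rows, cols)]   # indifference equations
        lhs[:k, k] = -1.0                     # ... minus a common value v
        lhs[k, :k] = 1.0                      # probabilities sum to 1
        rhs = np.zeros(k + 1)
        rhs[k] = 1.0
        try:
            sol = np.linalg.solve(lhs, rhs)
        except np.linalg.LinAlgError:
            return None
        z_sub, v = sol[:k], sol[k]
        if (z_sub < -tol).any():
            return None
        z = np.zeros(M.shape[1])
        z[list(cols)] = z_sub
        if (M @ z > v + tol).any():           # an unused row does better
            return None
        return z

    def support_enumeration(A, B, tol=1e-9):
        m, n = A.shape
        equilibria = []
        for k in range(1, min(m, n) + 1):
            for I in itertools.combinations(range(m), k):
                for J in itertools.combinations(range(n), k):
                    # y makes the row player indifferent on I,
                    # x makes the column player indifferent on J.
                    y = mixed_on_support(A, list(I), list(J), tol)
                    x = mixed_on_support(B.T, list(J), list(I), tol)
                    if x is None or y is None:
                        continue
                    if (x[list(I)] > tol).all() and (y[list(J)] > tol).all():
                        equilibria.append((x, y))
        return equilibria

    # Example: a 3x2 game with three equilibria.
    A = np.array([[3, 3], [2, 5], [0, 6]])
    B = np.array([[3, 2], [2, 6], [3, 1]])
    for x, y in support_enumeration(A, B):
        print(np.round(x, 3), np.round(y, 3))

This brute-force approach takes exponential time in general; the point of lrsNash and EEE is to exploit the polyhedral structure far more efficiently.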
Abstract:
We describe work in progress on a software tool for creating
and analyzing noncooperative games. The vision of this
project is to allow users to create models of interactive
decisions with the tools of game theory, and to obtain the
theoretical predictions for the model at a mouse click.
The software is currently directed at people who are
familiar with game theory, such as researchers and students
in economics, but who do not have to be experts in
algorithms for solving games.
The Game Theory Explorer (GTE) is a graphical user interface
which is opened in a web browser.
It allows the user to create game trees with imperfect information
(represented by "information sets"), or games in strategic
form. The games can be exported to graphical file formats.
The Nash equilibria of the game are computed and displayed,
currently implemented for two-player games.
The development of GTE has been supported by "Google Summer
of Code" studentships for open-source software. It is part
of the Gambit project, which was initiated over 20 years ago
by Richard McKelvey and is now maintained by Theodore
Turocy.
The talk will present a demonstration of the software,
outline its underlying algorithms, and discuss the
"managerial" experience of open-source software development
with the help of student volunteers.
Song "Find the longest path" by Dan Barrett (MP3-file).
Abstract: Game-theoretic problems have recently attracted massive interest in computer science. Obvious applications arise from the internet, for example the study of online auctions. In theoretical computer science, some of the most intriguing open problems concern the time that an algorithm needs to find a Nash equilibrium of a game. We give a survey of these open problems, which depend on the type of game considered. For bimatrix games, Nash equilibria are best understood geometrically, but recent results seem to make it unlikely that a "polynomial-time" algorithm for finding a Nash equilibrium exists. Zero-sum "simple stochastic games", on the other hand, are likely to be solvable in polynomial time, but a corresponding algorithm continues to be elusive. The talk is directed at a general audience, not at experts in computational complexity.
Abstract:
A bimatrix game is a two-player game in strategic form, a
basic model in game theory. A Nash equilibrium is a pair of
(possibly randomized) strategies, one for each player, so
that no player can do better by unilaterally changing his or
her strategy. The exact computational complexity of finding
one Nash equilibrium of a bimatrix game is open. Already a
subexponential algorithm would be a major breakthrough in
the field.
In this talk, which will introduce the main concepts and
geometric tools, we show that the commonly used Lemke-Howson
algorithm for finding one equilibrium of a bimatrix game takes
exponential time in the worst case. This question had been open
for some time.
The algorithm is a pivoting method similar to the simplex
algorithm for linear programming. We present a class of
square bimatrix games for which the shortest Lemke-Howson
path grows exponentially in the dimension d of the game.
We construct the games using pairs of dual cyclic polytopes
with 2d facets in d-space. An extension of this work gives
games where, in addition to long Lemke-Howson paths, a
trivial "support guessing" algorithm also takes exponential
time on average.
(Joint work with Rahul Savani.)
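For background on the cyclic polytopes mentioned above (a standard definition, not a summary of the construction in the talk): a cyclic polytope in d-space can be taken as the convex hull of 2d points on the moment curve

    \mu(t) = (t, t^2, \ldots, t^d),

and its polar dual then has 2d facets. By the Upper Bound Theorem, such dual cyclic polytopes have the largest possible number of vertices among d-polytopes with 2d facets, which is what makes exponentially long pivoting paths possible in the first place.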
Song "Find the longest path" by Dan Barrett (MP3-file).
Abstract: A "Leadership Game" is obtained from a game in strategic form by letting on player become the "leader", who can commit to a strategy, against which the other player or players (called "followers") react. The classic example is the Stackelberg game, which is the leadership game obtained from the Cournot duopoly game of quantity competition, the oldest formally defined "game".
First, we present a simple result on "follower payoffs in symmetric duopoly games", where we give a general and very short proof of the following property of a leadership game for a symmetric duopoly game, for example of price or quantity competition. Under certain standard assumptions, the follower's payoff is either higher than the leader's, or even lower than in the simultaneous game. A possible interpretation is that ``endogenous'' timing is difficult since the players either both want to move second or both want to move first. Apart from the surprising result, the setup introduces key concepts of game theory and its modelling assumptions using only simple mathematics, and a very basic economic model.
Secondly, we study leadership games for the mixed extension of a finite game, where the leader commits to a mixed, that is, randomized strategy. In a generic two-player game, the leader payoff is unique and at least as large as any Nash payoff in the original simultaneous game. In non-generic two-player games, which we analyze completely, the leader payoffs may form an interval, which, as a set of payoffs, is never worse than the set of Nash payoffs for the player who has the commitment power. In other words, the power to commit never hurts, even when best responses are not unique.
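A simple illustration of the last point (not necessarily the example used in the talk): consider the 2 x 2 game

                 l        r
        T      1, 1     3, 0
        B      0, 0     2, 1

where the first payoff in each cell is for the row player. Strategy T strictly dominates B, so the unique Nash equilibrium is (T, l) with payoffs (1, 1). If the row player can commit to a mixed strategy, then committing to play B with probability just above 1/2 makes r the follower's unique best response, and the leader's expected payoff is then close to (3 + 2)/2 = 2.5, far above the Nash payoff of 1. This is why the power to commit never hurts.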
Abstract:
We consider a coordination game between a sender and a
receiver who communicate over a noisy channel.
The sender privately observes a state and wants to inform the
receiver about it by transmitting a message over the channel.
Both receive
positive payoff only if the receiver decodes the received
signal as the correct state. The sender uses a known
"codebook" to map states to messages. When does this
codebook define a Nash equilibrium?
The receiver's best response is to decode the received
signal as the most likely message that has been sent.
Given this decoding, an equilibrium or "Nash code" results
if encoding every state as prescribed by the codebook is
optimal for the sender, which is not always the case. We show two
theorems that give sufficient conditions for Nash codes.
First, the "best" codebook for the receiver (which gives
maximum expected receiver payoff) defines a Nash code.
A second, more surprising observation holds for
communication
over a binary channel which is used independently a number
of times, a basic model of information theory: Given a
decoding rule that breaks ties consistently (a condition that
holds generically), ANY codebook of binary codewords defines a Nash
code. This holds irrespective of the quality of the code
and also for nonsymmetric errors of the binary channel.
(Joint work with P. Hernández.)
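As a concrete illustration of the setup (a sketch only, assuming a binary symmetric channel with bit-flip probability eps and maximum-likelihood decoding with a fixed tie-breaking order; the function names are illustrative), the following Python code checks whether a given codebook is a Nash code:

    # Nash-code check for a binary symmetric channel (illustration only).
    import itertools

    def channel_prob(sent, received, eps):
        """Probability that codeword `sent` arrives as `received` when
        each bit is flipped independently with probability eps."""
        flips = sum(s != r for s, r in zip(sent, received))
        return eps ** flips * (1 - eps) ** (len(sent) - flips)

    def map_decoder(codebook, priors, eps):
        """Map every received word to a most likely state, breaking ties
        in favour of the lower-numbered state."""
        n = len(codebook[0])
        return {
            r: max(range(len(codebook)),
                   key=lambda i: (priors[i] * channel_prob(codebook[i], r, eps), -i))
            for r in itertools.product((0, 1), repeat=n)
        }

    def is_nash_code(codebook, priors, eps):
        """True if, for every state, sending the prescribed codeword
        maximizes the probability of being decoded as that state,
        so the sender has no profitable deviation."""
        n = len(codebook[0])
        decode = map_decoder(codebook, priors, eps)
        for state, codeword in enumerate(codebook):
            def success(sent):
                return sum(channel_prob(sent, r, eps)
                           for r, s in decode.items() if s == state)
            best = max(success(x) for x in itertools.product((0, 1), repeat=n))
            if success(tuple(codeword)) < best - 1e-12:
                return False
        return True

    # Example: two states with unequal priors and 3-bit codewords.
    print(is_nash_code([(0, 0, 0), (1, 1, 1)], priors=[0.6, 0.4], eps=0.1))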
Abstract:
The existence of an equilibrium in various economic models can
be shown by following a certain combinatorially defined path,
for example along edges of a labeled polytope, similar to the
simplex algorithm for linear programming.
In addition, such a path has a direction that defines the
"index" of an equilibrium. Examples include Sperner's
lemma about completely labeled simplices and Nash equilibria
in games. We present a general model of "pivoting systems"
that shows the essence of the path-following and the
directedness argument. We also present a connection between
bimatrix games where the paths are exponentially long and
signed perfect matchings in Euler graphs, and an algorithm
that gives a polynomial-time "shortcut" for these paths.
We also present a concept of orientation for the "Euler
complexes" or oiks introduced by Edmonds, which generalizes
the orientability of abstract simplicial manifolds.
Our exposition uses colorful pictures and examples
wherever possible.
(Joint work with L. Vegh.)
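A toy illustration of directedness (far simpler than the pivoting systems above): in the one-dimensional version of Sperner's lemma, the nodes of a subdivided interval are labeled 0 or 1, with label 0 at the left end and label 1 at the right end. The completely labeled edges are the label changes; orienting a change 0 -> 1 as +1 and a change 1 -> 0 as -1, the orientations always sum to +1.

    # One-dimensional Sperner's lemma: the orientations of the completely
    # labeled edges (label changes) on a subdivided interval sum to +1.
    def sperner_orientations(labels):
        assert labels[0] == 0 and labels[-1] == 1
        return [b - a for a, b in zip(labels, labels[1:]) if a != b]

    print(sperner_orientations([0, 1, 1, 0, 0, 1, 0, 1]))  # [1, -1, 1, -1, 1]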
Abstract:
The central solution concept for strategic games is the Nash
equilibrium. Every equilibrium has an index, which is an
integer describing a geometric-topological concept related
to the sign of a determinant, essentially a positive or
negative orientation. The sum of all indices over all
equilibria is +1. The index is related to concepts of
"stability", e.g. with respect to perturbing the game data
(equilibrium components with index 0 can disappear), or with
respect to evolutionary dynamics. The index is
typically described very technically.
We give a new construction that makes it possible to visualize
the index in a low-dimensional picture, for example the plane
for a 3 x n game, for any n. We prove that the index can be
characterized in strategic terms: an equilibrium has index
+1 if and only if it can be made the unique equilibrium of
the game by adding suitable strategies.
(Joint work with Arndt von Schemde.)
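A simple example (not necessarily the one used in the talk): in a 2 x 2 coordination game, in which both players receive payoff 1 if they choose matching strategies and 0 otherwise, the two pure equilibria have index +1 and the completely mixed equilibrium has index -1, so the indices sum to +1. By the characterization above, each pure equilibrium can be made the unique equilibrium by adding suitable strategies, whereas the mixed equilibrium cannot.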
Abstract: LP duality (the strong duality theorem of linear programming) and the minimax theorem for zero-sum games are considered "equivalent" in the sense that one can easily be proved from the other. However, the classic proof by Dantzig (1951) of LP duality from the minimax theorem is flawed. It needs an additional assumption of strict complementarity. We show that this assumption amounts to assuming the Lemma of Farkas, which implies LP duality directly. We fix this with a new, different proof. We also mention a beautiful existing direct proof of the Lemma of Farkas based on minimally infeasible systems of inequalities.
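For reference, the two statements in question are, in standard form (the notation is not necessarily that of the talk): the minimax theorem for a zero-sum game with payoff matrix A,

    \max_x \min_y x^T A y = \min_y \max_x x^T A y

over mixed strategy vectors x and y, and LP duality,

    \max \{ c^T x : A x \leq b,\ x \geq 0 \} = \min \{ b^T y : A^T y \geq c,\ y \geq 0 \}

whenever both linear programs are feasible. The Lemma of Farkas states that the system A x = b, x >= 0 has a solution if and only if there is no y with y^T A >= 0 and y^T b < 0.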