Representing and Reasoning with Preferences




Articles

WINTER 2007 59

■ I consider how to represent and reason with users' preferences. While areas of economics like social choice and game theory have traditionally considered such topics, I will argue that computer science and artificial intelligence bring some fresh perspectives to the study of representing and reasoning with preferences. For instance, I consider how we can elicit preferences efficiently and effectively.

Preferences are a central aspect of decision making for single or multiple agents. With one agent, the agent's desired goal may not be feasible. The agent wants a cheap, low-mileage Ferrari, but no such car exists. We may therefore look for the most preferred outcome among those that are feasible. With multiple agents, their goals may be conflicting. One agent may want a Prius, but another wants a Hummer. We may therefore look for the outcome that is most preferred by the agents. Preferences are thus useful in many areas of artificial intelligence including planning, scheduling, multiagent systems, combinatorial auctions, and game playing.

Artificial intelligence is not the only discipline in which preferences are of interest. For instance, economists have also studied preferences in several contexts including social choice, decision theory, and game theory. In this article, I will focus on the connections between the study of preferences in artificial intelligence and in social choice. Social choice is the theory of how individual preferences are aggregated to form a collective decision. For example, one person prefers Gore to Nader to Bush, another prefers Bush to Gore to Nader, and a third prefers Nader to Bush to Gore. Who should be elected? There are many useful ideas about preferences that have been imported from social choice into artificial intelligence. For example, as I will discuss later in this article, voting procedures have been proposed as a general mechanism to combine agents' preferences. As a second example, ideas from game theory like Nash equilibrium have proven very influential in multiagent decision making.

In the reverse direction, artificial intelligence brings a fresh perspective to some of the questions addressed by social choice. These new perspectives are both computational and representational. From a computational perspective, we can look at how we reason computationally with preferences. As we shall see later in this article, computational intractability may actually be advantageous in this setting. For example, we can show that for a number of different voting rules, manipulating the result of an election is possible in theory but computationally difficult to perform in practice. From a representational perspective, we can look at how we represent preferences, especially when the number of outcomes is combinatorially large. We shall see situations where we have a few agents but very large domains over which they are choosing.

Another new perspective, both computational and representational, is how we represent and reason about uncertainty surrounding preferences. As we shall see, uncertainty can arise in many contexts. For example, when eliciting an agent's preferences, we will have uncertainty about some of their preferences. As a second example, when trying to manipulate an election, we may have uncertainty about the other agents' votes. As a third example, there may be uncertainty in how the chair will perform the election. For instance, in what order will the chair compare candidates? Such uncertainty brings fresh computational challenges. For example, how do we compute whether we have already elicited enough preferences to declare the winner?

Toby Walsh. AI Magazine Volume 28 Number 4 (2007). Copyright © 2007, American Association for Artificial Intelligence. All rights reserved. ISSN 0738-4602.

Representing Preferences

As with other types of knowledge, many different formalisms have been proposed and studied to represent preferences. One broad distinction is between cardinal and relational preference representations. In a cardinal representation, a numerical evaluation is given to each outcome. Such an evaluation is often called a utility. In a relational representation, on the other hand, a ranking of outcomes is given by means of a binary preference relation. For example, we might simply have that the agent prefers a "hybrid car" to a "diesel car" without assigning any weights to this. In the rest of the article, I shall restrict much of my attention to this latter type of relational representation. It is perhaps easier to elicit relational preferences: the agent simply needs to be able to rank outcomes. In addition, it is perhaps easier to express conditional choices using a relational formalism. For example, if the car is new, the agent might prefer a "hybrid car" to a "diesel car," but if the car is secondhand, the agent is concerned about battery replacement and so prefers a "diesel car" to a "hybrid car." Such a conditional preference is difficult to express using utilities but, as we shall see, is straightforward with certain relational formalisms. Nevertheless, utilities have an important role to play in representing agents' preferences.

A binary preference relation is generally assumed to be transitive. That is, if the agent prefers a "hybrid car" to a "diesel car," and a "diesel car" to a "petrol car," then the agent also prefers a "hybrid car" to a "petrol car." There are three other important properties to consider: indifference, incompleteness, and incomparability. It is important to make a distinction between these three "I"s. Indifference represents that the agent likes two outcomes equally. For instance, the agent might have an equal (dis)like for sports utilities and minivans. Incompleteness, on the other hand, represents a gap in our knowledge about the agent's preferences. For instance, when eliciting preferences, we may not have queried the agent yet about its preference between an "electric car" and a "hybrid car." We may wish to represent that the preference relation is currently incomplete and that at some later point the precise relationship may become known. Finally, incomparability represents that two outcomes cannot in some fundamental sense be compared with each other. For example, an agent might prefer a "hybrid car" to a "diesel car" and a "cheap car" to an "expensive car." But the agent might not want to compare an "expensive hybrid" with a "cheap diesel." A cheap diesel has one feature that is better (the price) but one feature that is worse (the engine). The agent might want both choices to be returned, as both are Pareto optimal (that is, there is no car that is more preferred). Such incomparability is likely to arise when outcomes combine together multiple features. However, it also arises when we are comparing outcomes that are essentially very different (for example, a car and a bicycle).
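The incomparability of the "expensive hybrid" and the "cheap diesel" can be made concrete by computing the Pareto-optimal outcomes. The following is a minimal sketch; the cars and their per-feature ranks are hypothetical, with a lower rank meaning more preferred on that feature.

```python
def dominates(a, b):
    """a dominates b if a is at least as preferred on every feature
    and strictly more preferred on at least one (lower rank = better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_optimal(outcomes):
    """Return the outcomes not dominated by any other outcome."""
    return {name: feats for name, feats in outcomes.items()
            if not any(dominates(other, feats)
                       for o, other in outcomes.items() if o != name)}

# Feature ranks (engine, price): hybrid (0) > diesel (1), cheap (0) > expensive (1).
cars = {
    "expensive hybrid": (0, 1),
    "cheap diesel": (1, 0),
    "expensive diesel": (1, 1),   # dominated by both of the others
}
print(sorted(pareto_optimal(cars)))  # ['cheap diesel', 'expensive hybrid']
```

Neither of the two surviving cars dominates the other, so both are returned, exactly the incomparability described above.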

Another important aspect of representing preferences is dealing with the combinatorial nature of many domains. Returning to the car domain, we have the engine type, the number of seats, the manufacturer, the age, as well as the price and other features like the fuel consumption, the color, the trim, and so on. A number of formalisms have been proposed to represent preferences over such large combinatorial domains. For example, CP-nets decompose a complex preference relation into conditionally independent parts (Boutilier et al. 1999). CP-nets exploit the ceteris paribus ("all else being equal") assumption under which the preference relation depends only on features that change. This formalism lets us represent preferences over a complex feature space using a small number of possibly conditional preference statements (see figure 1). CP-nets exploit conditional independence within the preference relation in much the same way as a Bayes network tries to compactly represent a complex probability distribution function.

A number of extensions of CP-nets have been proposed, including TCP-nets to represent trade-offs (for example, "price" is more important to me than "engine type") (Brafman and Domshlak 2002) and mCP-nets to represent the preferences of multiple agents (each agent has its own CP-net, and these are combined using voting rules) (Rossi, Venable, and Walsh 2004). However, there are several outstanding issues concerning CP-nets, including their decisiveness and their complexity. CP-nets induce in general only a partial order over outcomes (such as the "expensive hybrid" versus "cheap diesel" example). A CP-net may not therefore order enough outcomes to be useful in practice. In addition, reasoning with CP-nets is in general computationally hard. For example, determining whether one outcome is more preferred than another is PSPACE-hard (Goldsmith et al. 2005). To reduce this complexity, various approximations have been proposed (Domshlak et al. 2003, 2006). In addition, restricted forms of CP-nets have been identified (for example, those where the dependency between features is acyclic) where reasoning is more tractable (Boutilier et al. 1999). However, more work needs to be done if CP-nets are to find application in practical systems.
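Dominance testing in a CP-net amounts to searching for a chain of "worsening flips" from one outcome to another. The sketch below hand-codes the two-feature example of figure 1; the encoding and function names are my own illustration, not a standard CP-net API, and "wagon" abbreviates "station wagon."

```python
from collections import deque

def better_values(var, outcome):
    """Value order (most preferred first) for var, given the rest of outcome."""
    if var == "engine":
        return ["hybrid", "diesel"]          # unconditional: hybrid > diesel
    # body preference is conditional on the engine (ceteris paribus)
    return ["saloon", "wagon"] if outcome["engine"] == "hybrid" else ["wagon", "saloon"]

def worsening_flips(outcome):
    """All outcomes reachable by flipping one variable to a less preferred value."""
    for var in ("engine", "body"):
        order = better_values(var, outcome)
        for worse in order[order.index(outcome[var]) + 1:]:
            yield {**outcome, var: worse}

def prefers(x, y):
    """True if x is preferred to y: y is reachable from x by worsening flips."""
    if x == y:
        return False
    frontier, seen = deque([x]), set()
    while frontier:
        o = frontier.popleft()
        if o == y:
            return True
        key = (o["engine"], o["body"])
        if key not in seen:
            seen.add(key)
            frontier.extend(worsening_flips(o))
    return False

hs = {"engine": "hybrid", "body": "saloon"}
ds = {"engine": "diesel", "body": "saloon"}
print(prefers(hs, ds), prefers(ds, hs))  # True False
```

On this tiny acyclic example the search is trivial; the PSPACE-hardness cited above concerns the general case, where such flip chains can be very long.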

Another way to represent an agent's preferences is by means of the agent's ideal and nonideal outcomes. For instance, an agent might like two cars on display (say, a Volvo and a Jaguar) but not the third car (say, a Lada). The agent might therefore specify "I would like something like the Volvo or the Jaguar but not the Lada." Hebrard, O'Sullivan, and Walsh (2007) proposed a method to reason about such logical combinations of ideal and nonideal outcomes (see figure 2 for some more details). This approach has a flavor of qualitative methods, in allowing logical combinations of ideals and nonideals, and quantitative methods, in measuring distance from such ideals and nonideals. The propagators developed to reason about such distance constraints are, it turns out, closely related to those that can return a diverse set of solutions ("show me five different cars that satisfy my constraints") and those that return a similar set of solutions ("show me some similar cars to this one that also satisfy my constraints") (Hebrard et al. 2005). An attractive feature of representing preferences through ideal and nonideal outcomes is that preference elicitation may be quick and easy. Agents need answer only a few questions about their preferences. On the downside, it is difficult to express the sort of complex conditional preferences that are easy to represent with formalisms like CP-nets. An interesting research direction would be to learn (conditional) preference statements like those used in CP-nets given some ideal and nonideal outcomes.

Suppose an agent declares unconditionally that a hybrid is better than a diesel. We write this:

hybrid > diesel

With a hybrid, the agent declares:

saloon > station wagon

But with a diesel, the agent declares:

station wagon > saloon

Then a hybrid saloon is preferred to a hybrid station wagon since, ceteris paribus, we keep the engine type constant but move from the more preferred saloon to the less preferred station wagon according to the first conditional preference statement: with a hybrid, saloon > station wagon.

A hybrid station wagon is itself preferred to a diesel station wagon since we move from the more preferred hybrid to the less preferred diesel according to the (unconditional) preference statement: hybrid > diesel.

Finally, a diesel station wagon is preferred to a diesel saloon since we keep the engine type constant but move from the more preferred station wagon to the less preferred saloon according to the second conditional preference statement: with a diesel, station wagon > saloon.

Thus, we have: hybrid saloon > hybrid station wagon > diesel station wagon > diesel saloon.

Figure 1. An Example CP-Net.

Preference Aggregation

In multiagent systems, we may need to combine the preferences of several agents. For instance, each member of a family might have preferences about what car to buy. A common mechanism for aggregating preferences is to apply a voting rule. Each agent expresses a preference ordering over the set of outcomes, and an election is held to compute the winner. When there are only two possible outcomes, it is easy to run a "fair" election: we apply the majority rule and select the outcome with the most votes. However, elections are more problematic when there are more than two possible outcomes.

Going back to at least the Marquis de Condorcet in 1785, and continuing with Arrow, Sen, Gibbard, Satterthwaite, and others from the 1950s onwards, social choice theory has identified fundamental issues that arise in running elections with more than two outcomes (see figure 3 for an illustrative example). For instance, Arrow's famous impossibility theorem shows that there is no "fair" method to run an election if we have more than two outcomes. Fairness is defined in an axiomatic way by means of some simple but desirable properties like the absence of a dictator (that is, an agent whose vote is the result) (Arrow 1970). A closely related result, the Gibbard-Satterthwaite theorem, shows that all "reasonable" voting rules are manipulable (Gibbard 1973; Satterthwaite 1975). The assumptions of this theorem are again not very strong. For example, we have three or more outcomes, and there is some way for every candidate to win (for example, the election cannot be rigged so Gore can never win). Manipulation here means that an agent may get a result it prefers by voting tactically (that is, declaring preferences different to those the agent has). Consider, for instance, the plurality rule under which the outcome with the most votes wins. Suppose you prefer a hybrid car over a diesel car, and a diesel car over a petrol car. If you know that no one else likes hybrid cars, you might vote strategically for a diesel car, as your first choice has no hope.
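The plurality example above can be played out directly. In this sketch the other agents' votes and the tie-breaking order are invented for illustration.

```python
from collections import Counter

def plurality_winner(top_choices, tie_break):
    """Outcome with the most votes; ties broken by position in tie_break
    (an earlier entry wins ties)."""
    counts = Counter(top_choices)
    return max(counts, key=lambda c: (counts[c], -tie_break.index(c)))

others = ["petrol"] * 4 + ["diesel"] * 4          # no one else likes hybrids
tie_break = ["petrol", "hybrid", "diesel"]

# Your true ranking is hybrid > diesel > petrol.
sincere = plurality_winner(others + ["hybrid"], tie_break)   # petrol wins
tactical = plurality_winner(others + ["diesel"], tie_break)  # diesel wins
print(sincere, tactical)
```

Voting sincerely elects your least preferred outcome; voting tactically for your second choice elects something you prefer, which is exactly the manipulation the Gibbard-Satterthwaite theorem says cannot be ruled out.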

Strategic voting is generally considered undesirable. There are many reasons for this, including that the result is not transparent to the electorate, agents need to be sophisticated and informed to get a particular result, and fraud may be difficult to detect if the result is hard to predict. To discuss manipulability results in more detail, we need to introduce several different voting rules. A vote is one agent's ranking of the outcomes. For simplicity, we will assume this is a total order, but as we observed earlier, it may be desirable in some situations to consider partial orders. A voting rule is then simply a function mapping a set of votes onto one outcome,1 the winner. We shall normally assume that any rule takes polynomial time to apply. However, there are some voting rules where it is NP-hard to compute the winner (Bartholdi, Tovey, and Trick 1989b). Finally, we will also consider weighted votes. Weights will be integers, so a weighted vote can be seen simply as a number of agents voting identically.


We suppose the user expresses her preferences in terms of ideal or nonideal (partial) solutions. Partiality is important so we can ignore irrelevant attributes. For example, we might not care whether our ideal car has run-flat tires or not.

One of the fundamental decision problems underlying this approach is deciding whether a solution is at a given distance d to an ideal solution or from a nonideal solution. These constraints are called dCLOSE and dDISTANT, respectively.

We can specify more complex preferences by using negation, conjunction, and disjunction.

dDISTANT(a) ↔ ¬dCLOSE(a)
dDISTANT(a ∨ b) ↔ dDISTANT(a) ∨ dDISTANT(b) ↔ ¬dCLOSE(a ∧ b)
dDISTANT(a ∧ b) ↔ dDISTANT(a) ∧ dDISTANT(b) ↔ ¬dCLOSE(a ∨ b)

These constraints can be represented graphically: a and b are solutions, sol(P) is the set of solutions within the distance d, and a shaded region marks the solutions that satisfy the constraints. The six panels of figure 2 show: (a) dCLOSE(a), (b) dDISTANT(a), (c) dCLOSE(a ∨ b), (d) dDISTANT(a ∧ b), (e) dCLOSE(a ∧ b), and (f) dCLOSE(a) ∧ dDISTANT(b).

Figure 2. Representing Preferences through Ideal and Nonideal Outcomes (Hebrard, O'Sullivan, and Walsh 2007).
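The dCLOSE and dDISTANT constraints can be sketched over feature vectors if we assume a concrete distance measure, say Hamming distance (the cited work defines these constraints over solutions of a constraint problem; the cars and features below are hypothetical).

```python
def hamming(x, y):
    """Number of features on which two outcomes differ."""
    return sum(1 for a, b in zip(x, y) if a != b)

def d_close(sol, ideal, d):
    """Solution is within distance d of an ideal outcome."""
    return hamming(sol, ideal) <= d

def d_distant(sol, nonideal, d):
    """dDISTANT(a) <-> not dCLOSE(a)."""
    return not d_close(sol, nonideal, d)

volvo = ("estate", "petrol", "safe")     # an ideal
lada = ("saloon", "petrol", "basic")     # a nonideal
candidate = ("estate", "hybrid", "safe")

# "Something like the Volvo but not the Lada", with distance threshold 1:
ok = d_close(candidate, volvo, 1) and d_distant(candidate, lada, 1)
print(ok)  # True
```

Conjunctions and disjunctions of such constraints then compose exactly as in the equivalences listed in figure 2.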


Weighted voting systems are used in a number of real-world settings like shareholder meetings and elected assemblies. Weights are useful in multiagent systems that have different types of agents. Weights are also interesting from a computational perspective. For example, adding weights to the votes may introduce computational complexity. For instance, manipulation can become NP-hard when weights are added (Conitzer and Sandholm 2002). As a second example, as I discuss later in the article, the weighted case informs us about the unweighted case when there is uncertainty about the votes. I now define several voting rules that will be discussed later in this article.

Scoring Rules

Scoring rules are defined by a vector of weights, (w1, …, wm). The ith outcome in a total order scores wi, and the winner is the outcome with the highest total score. The plurality rule has the weight vector (1, 0, …, 0). In other words, each highest-ranked outcome scores one point. When there are just two outcomes, this degenerates to the majority rule. The veto rule has the weight vector (1, 1, …, 1, 0). The winner is the outcome with the fewest "vetoes" (zero scores). Finally, the Borda rule has the weight vector (m – 1, m – 2, …, 0). This attempts to give the agent's second and lower choices some weight.
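Since the three scoring rules differ only in their weight vectors, one function suffices to compute all of them. The vote profile below is an invented illustration.

```python
def scoring_winner(votes, weights):
    """Outcome with the highest total score; weights[i] is the score
    for being ranked in position i (most preferred first)."""
    scores = {}
    for vote in votes:
        for pos, outcome in enumerate(vote):
            scores[outcome] = scores.get(outcome, 0) + weights[pos]
    return max(scores, key=scores.get)

votes = [
    ["hybrid", "diesel", "petrol"],
    ["hybrid", "diesel", "petrol"],
    ["hybrid", "petrol", "diesel"],
    ["diesel", "petrol", "hybrid"],
    ["petrol", "diesel", "hybrid"],
]
m = 3
plurality = scoring_winner(votes, [1, 0, 0])
veto = scoring_winner(votes, [1, 1, 0])          # fewest vetoes wins
borda = scoring_winner(votes, [m - 1, m - 2, 0])
print(plurality, veto, borda)  # hybrid diesel hybrid
```

Note how veto elects a different outcome than plurality and Borda on the same profile: diesel is never ranked last, while hybrid is vetoed twice.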

Cup (or Knockout) Rule

The winner is the result of a series of pairwise majority elections between outcomes. This exploits the fact that when there are only two outcomes, the majority rule is a fair means to pick a winner. We therefore divide the problem into a series of pairwise contests. The agenda is the schedule of pairwise contests. If each outcome must win the same number of majority contests to win overall, then we say that the tournament is balanced.

Single Transferable Vote (STV) Rule

The single transferable vote rule requires a number of rounds. In each round, the outcome that the fewest agents rank first is eliminated, until one of the remaining outcomes has a majority.
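The STV rounds can be sketched in a few lines; the vote profile is an invented illustration, chosen so that no round has a tie for elimination.

```python
from collections import Counter

def stv_winner(votes):
    """Repeatedly eliminate the outcome with fewest first places
    until some remaining outcome has a strict majority of first places."""
    remaining = {v for vote in votes for v in vote}
    while True:
        firsts = Counter(next(v for v in vote if v in remaining) for vote in votes)
        top, top_count = firsts.most_common(1)[0]
        if top_count * 2 > len(votes):
            return top
        remaining.discard(min(remaining, key=lambda c: firsts.get(c, 0)))

votes = (3 * [["petrol", "diesel", "hybrid"]]
         + 3 * [["diesel", "petrol", "hybrid"]]
         + 2 * [["hybrid", "diesel", "petrol"]])
print(stv_winner(votes))  # diesel
```

In the first round hybrid has the fewest first places and is eliminated; its two votes transfer to diesel, which then has a majority.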

We will also consider one other common preference aggregation rule, in which voters do not provide a total ordering over outcomes but simply a set of preferred outcomes.

Approval Rule

The agents approve of as many outcomes as they wish. The outcome with the most approvals wins.


Consider an election in which Alice votes:

Prius > Civic Hybrid > Tesla

Bob votes:

Tesla > Prius > Civic Hybrid

And Carol votes:

Civic Hybrid > Tesla > Prius

Then we arrive at the "paradoxical" situation where two-thirds prefer a Prius to a Civic Hybrid, two-thirds prefer a Tesla to a Prius, but two-thirds prefer a Civic Hybrid to a Tesla. The collective preferences are cyclic.

There is no fair and deterministic resolution to this example since the votes are symmetric. With majority voting, each car would receive one vote. If we break the three-way tie, this will inevitably be “unfair,” favoring one agent’s preference ranking over another’s.

Figure 3. Condorcet’s Paradox.
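The cycle in figure 3 is easy to verify mechanically from the three votes:

```python
votes = [
    ["Prius", "Civic Hybrid", "Tesla"],   # Alice
    ["Tesla", "Prius", "Civic Hybrid"],   # Bob
    ["Civic Hybrid", "Tesla", "Prius"],   # Carol
]

def majority_prefers(a, b, votes):
    """True if a strict majority of voters rank a above b."""
    wins = sum(1 for v in votes if v.index(a) < v.index(b))
    return wins * 2 > len(votes)

cycle = (majority_prefers("Prius", "Civic Hybrid", votes)
         and majority_prefers("Tesla", "Prius", votes)
         and majority_prefers("Civic Hybrid", "Tesla", votes))
print(cycle)  # True: the collective preference is cyclic
```

Each pairwise comparison is won 2-1, so there is no Condorcet winner: every outcome loses some pairwise contest.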


Manipulation

The Gibbard-Satterthwaite theorem proves that all "reasonable" voting rules are manipulable once we have more than two outcomes (Gibbard 1973; Satterthwaite 1975). That is, voters may need to vote strategically to get their desired result. Researchers have, however, started to consider computational issues surrounding strategic voting and such manipulation of elections. One way around the Gibbard-Satterthwaite theorem may be to exploit computational complexity. In particular, we might look for voting rules that are manipulable but where the manipulation is computationally difficult to find (Bartholdi, Tovey, and Trick 1989a). As with cryptography, computational complexity is now wanted and is not a curse. For example, it has been proven that it is NP-hard to compute how to manipulate the STV rule to get a particular result if the number of outcomes and agents is unbounded (Bartholdi and Orlin 1991).

One criticism made of such results by researchers from social choice theory is that, while elections may have a lot of agents voting, elections often only choose between a small number of outcomes. However, as argued before, in artificial intelligence we can have combinatorially large domains. In addition, it was subsequently shown that STV is NP-hard to manipulate even if the number of outcomes is bounded, provided the votes are weighted (Conitzer and Sandholm 2002). Manipulation now is no longer by one strategic agent but by a coalition of agents. This may itself be a more useful definition of manipulation. A single agent can rarely change the outcome of many elections. It may therefore be more meaningful to consider how a coalition might try to vote strategically.

Many other types of manipulation have been considered. One major distinction is between destructive and constructive manipulation. In constructive manipulation, we are trying to ensure a particular outcome wins. In destructive manipulation, we are trying to ensure a particular outcome does not win. Destructive manipulation is at worst a polynomial cost more difficult than constructive manipulation, provided we have at most a polynomial number of outcomes: we can destructively manipulate the election if and only if we can constructively manipulate some other outcome. In fact, destructive manipulation can sometimes be computationally easier. For instance, the veto rule is NP-hard to manipulate constructively but polynomial to manipulate destructively (Conitzer, Sandholm, and Lang 2007). This result may chime with personal experiences on hiring committees: it is often easier to ensure someone is not hired than to ensure someone else is!

Another form of manipulation is of individual preferences. We might, for example, be able to manipulate certain agents to put Boris in front of Nick, but Ken's position on the ballot will remain last.2 Surprisingly, manipulation of individual preferences is more computationally difficult than manipulation of the whole ballot. For instance, for the cup rule, manipulating by a coalition of agents is polynomial (Conitzer, Sandholm, and Lang 2007), but manipulation of individual preferences of those agents is NP-hard (Walsh 2007a).

Another form of manipulation is of the voting rule itself. Consider again the cup rule. This rule requires an agenda, the tree of pairwise majority comparisons. The chair may try to manipulate the result by choosing an agenda that gives a desired result. If the tournament is unbalanced, then it is polynomial for the chair to manipulate the election. However, we conjecture that it is NP-hard to manipulate the cup rule if the tournament is required to be balanced (Lang et al. 2007). Many other types of manipulation have also been considered. For example, the chair may try to manipulate the election by adding or deleting outcomes, adding or deleting agents, partitioning the candidates into two and running an election in the two halves, and so on.
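Agenda manipulation by the chair can be sketched by trying every unbalanced ("caterpillar") agenda over a fixed vote profile; here I reuse the Condorcet cycle of figure 3 as an illustrative profile.

```python
from itertools import permutations

votes = [
    ["Prius", "Civic Hybrid", "Tesla"],
    ["Tesla", "Prius", "Civic Hybrid"],
    ["Civic Hybrid", "Tesla", "Prius"],
]

def beats(a, b):
    """a beats b in a pairwise majority contest."""
    return sum(1 for v in votes if v.index(a) < v.index(b)) * 2 > len(votes)

def cup_winner(agenda):
    """Play agenda[0] vs agenda[1]; the winner meets agenda[2]; and so on."""
    winner = agenda[0]
    for challenger in agenda[1:]:
        winner = winner if beats(winner, challenger) else challenger
    return winner

achievable = {cup_winner(agenda)
              for agenda in permutations(["Prius", "Civic Hybrid", "Tesla"])}
print(achievable)
```

With a majority cycle, every outcome wins under some agenda, so the chair can elect whichever car the chair likes simply by scheduling its strongest rival to be knocked out early.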

Uncertainty

One important consideration is the impact of uncertainty on voting. One source of uncertainty is in the votes. For example, during preference elicitation, not all agents may have expressed their preferences. Even if all agents have expressed their preferences, a new outcome might be introduced. To deal with such situations, Konczak and Lang (2005) have considered how to reason about voting when preferences are incompletely specified. For instance, how do we compute whether a certain outcome can still win? Can we compute when to stop eliciting preferences? Konczak and Lang introduced the concepts of possible winners, those outcomes that win in some transitive completion of the votes, and the necessary winner, the outcome that wins in all transitive completions of the votes. Preference elicitation can stop when the set of possible winners equals the necessary winner. Unfortunately, computing the possible and necessary winners is NP-hard in general (Pini et al. 2007). In fact, it is even NP-hard to compute these sets approximately (that is, to within a constant factor in size) (Pini et al. 2007). However, there is a wide range of voting rules where possible and necessary winners are polynomial to compute. For example, possible and necessary winners are polynomial to compute for any scoring rule (Konczak and Lang 2005).
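For a small election, possible and necessary winners can be found by brute force over all completions of the unknown votes; this is only an illustration of the definitions (for scoring rules the polynomial algorithms cited above avoid this enumeration). The profile and tie-breaking order are invented.

```python
from itertools import permutations, product

outcomes = ["hybrid", "diesel", "petrol"]
tie_break = {o: i for i, o in enumerate(outcomes)}  # earlier entry wins ties

def plurality(votes):
    counts = {o: 0 for o in outcomes}
    for v in votes:
        counts[v[0]] += 1
    return max(outcomes, key=lambda o: (counts[o], -tie_break[o]))

known = [["hybrid", "diesel", "petrol"],
         ["hybrid", "petrol", "diesel"],
         ["diesel", "hybrid", "petrol"]]
n_unknown = 1  # one agent has not voted yet

winners = {plurality(known + [list(v) for v in completion])
           for completion in product(permutations(outcomes), repeat=n_unknown)}
possible = winners
necessary = winners if len(winners) == 1 else set()
print(possible, necessary)
```

Here the hybrid wins no matter how the last agent votes, so it is the necessary winner and elicitation can stop early, exactly the termination test discussed above.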

Another source of uncertainty is in the voting rule itself. For example, uncertainty may be deliberately introduced into the voting rule to make manipulation computationally difficult (Conitzer and Sandholm 2002). For instance, if we randomize the agenda used in the cup rule, the cup rule goes from being polynomial to manipulate to NP-hard. There are other forms of uncertainty that I shall not consider here. For example, preferences may be certain, but the state of the world uncertain (Gajdos et al. 2006). As a second example, we may have a probabilistic model of the user's preferences that is used to direct preference elicitation (Boutilier 2002).

Weighted Votes

An important connection exists between weighted votes and uncertainty. Weights permit manipulation to be computationally hard even when the number of outcomes is bounded. If votes are unweighted and the number of outcomes is bounded, then there is only a polynomial number of different votes. Therefore, to manipulate the election, we can try out all possible manipulations in polynomial time. We can make manipulation computationally intractable by permitting the votes to have weights (Conitzer and Sandholm 2002; Conitzer, Lang, and Sandholm 2003). As I mentioned, certain elections met in practice have weights. However, weights are also interesting as they inform the case where we have uncertainty about how the other agents will vote. In particular, Conitzer and Sandholm proved that if manipulation with weighted votes is NP-hard, then manipulation with unweighted votes but a probability distribution over the other agents' votes is also NP-hard (Conitzer and Sandholm 2002).
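The flavor of weighted coalition manipulation can be sketched by brute force: each coalition member, with its weight, may declare any ranking, so the search space grows exponentially with the coalition size. The Borda rule, the fixed votes, and the weights below are all invented for illustration.

```python
from itertools import permutations, product

outcomes = ["a", "b", "c"]

def borda(weighted_votes):
    """Borda winner for (weight, ranking) pairs; scores are (2, 1, 0) per vote."""
    scores = {o: 0 for o in outcomes}
    for weight, vote in weighted_votes:
        for pos, o in enumerate(vote):
            scores[o] += weight * (len(outcomes) - 1 - pos)
    return max(outcomes, key=lambda o: scores[o])

fixed = [(4, ["b", "a", "c"]), (3, ["c", "b", "a"])]  # the sincere voters
coalition_weights = [3, 2]                            # the manipulators

def can_make_win(target):
    """Try every joint vote of the coalition; exponential in coalition size."""
    for rankings in product(permutations(outcomes), repeat=len(coalition_weights)):
        votes = fixed + [(w, list(r)) for w, r in zip(coalition_weights, rankings)]
        if borda(votes) == target:
            return True
    return False

print(can_make_win("a"), can_make_win("c"))  # True True
```

With only the sincere votes, b wins; the weighted coalition can instead force either a or c, which is the kind of search the NP-hardness results show cannot in general be done efficiently.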

Preference Elicitation

One interesting application of computing the set of possible and necessary winners is preference elicitation. The basic idea is simple: preference elicitation can focus on resolving the relationship between possible winners. For instance, which of these two possible winners is more preferred? Pini et al. (2007) give a simple algorithm for preference elicitation that focuses elicitation queries on just those outcomes that are possible winners. In fact, under some simple assumptions on the voting rule, the winner can be determined with a number of preference queries that is polynomial in the worst case in the number of agents and outcomes. In practice, we hope it may even be fewer.

Preference elicitation is closely related to manipulation. Suppose we are eliciting preferences from a set of agents. Since preference elicitation can be time-consuming and costly, we might want to stop eliciting preferences as soon as we can declare the winner. This might be before all votes have been collected. How do we compute when we can stop? If we can still manipulate the election, then the winner is not fixed. However, if we can no longer manipulate the election, the winner is fixed and elicitation can be terminated. It thus follows that manipulation and deciding whether preference elicitation can be terminated are closely related problems. Indeed, if manipulation is NP-hard, then deciding whether we can terminate elicitation is also NP-hard (Konczak and Lang 2005).

Complexity considerations can be used to motivate the choice of a preference elicitation strategy. Suppose that we are combining preferences using the cup rule. Consider two different preference elicitation strategies. In the first, we ask each agent in turn for the agent's vote (that is, we elicit whole votes). In the second, we pick a pair of outcomes and ask all agents to order them (that is, we elicit individual preferences). Then it is polynomial to decide whether we can terminate elicitation using the first strategy, but NP-hard using the second (Walsh 2007a). Thus, there is reason to prefer the first strategy, in which we ask each agent in turn for the agent's vote. In fact, many of the manipulation results cited in this paper can be transformed into similar results about preference elicitation.

Single-Peaked Preferences

One of the concerns with results like those mentioned so far is that NP-hardness is only a worst-case analysis. Are the votes met in practice easier to reason about? For instance, votes met in practice are often single peaked. That is, outcomes can be placed in a left to right order, and an agent's preference for an outcome decreases with distance from the agent's peak. For instance, an agent might have a preferred cost, and the preference decreases with distance from this cost. Single-peaked preferences are interesting from several other perspectives. First, single-peaked preferences are easy to elicit. For instance, we might simply ask you for your optimal house price. Conitzer has given a simple strategy for eliciting a complete preference ordering with a linear number of pairwise ranking questions under the assumption that preferences are single peaked (Conitzer 2007). Second, single-peaked preferences are easy to aggregate. In particular, there is a fair way to aggregate single-peaked preferences. We simply select the median outcome; this is a Condorcet winner, beating all others in pairwise comparisons (Black 1948). Third, with single-peaked preferences, preference aggregation is strategy proof. There is no incentive to misreport preferences.
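Black's median result is easy to make concrete. A minimal sketch, assuming numeric outcomes, an odd number of agents, and preferences that fall off symmetrically with distance from each agent's peak (a convenient special case of single-peakedness, not the general definition):

```python
def median_peak(peaks):
    """With an odd number of single-peaked agents, the median of the
    peaks is a Condorcet winner (Black 1948)."""
    ordered = sorted(peaks)
    return ordered[len(ordered) // 2]

def majority_prefers(x, y, peaks):
    """Does a majority prefer outcome x to outcome y, when each agent
    prefers whichever outcome is closer to its peak?"""
    for_x = sum(1 for p in peaks if abs(p - x) < abs(p - y))
    for_y = sum(1 for p in peaks if abs(p - y) < abs(p - x))
    return for_x > for_y
```

With peaks at prices 100, 300, and 1000, the median peak 300 beats every other price in a pairwise majority vote.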

Suppose we assume that agents' preferences will be single peaked. Does this make it easier to decide whether elicitation can be terminated? We might, for example, stop eliciting preferences if an outcome is already guaranteed to be the Condorcet winner. We suppose we know in advance the ordering of the outcomes that makes agents' preferences single peaked. For instance, if the feature is price in dollars, we might expect preferences to be single peaked over the standard ordering of integers. However, an interesting extension is when this ordering is not known. If votes are single peaked and the voting rule elects the Condorcet winner, deciding whether we can terminate elicitation, along with related questions like manipulation, is polynomial (Walsh 2007b).

However, there are several reasons why we might not want to select the Condorcet winner when aggregating single-peaked preferences. For example, the Condorcet winner does not consider the agents' intensity of preferences. We might want to take into account the agents' lower ranked outcomes using a method like Borda. There are also certain situations where we cannot identify the Condorcet winner. For instance, we may not know each agent's most preferred outcome. Many web search mechanisms permit users to specify just an approved range of prices (for example, upper and lower bounds on price). In such a situation, it might be more appropriate to use approval voting (which may not select the Condorcet winner) to aggregate preferences. Finally, we might not be able to select the Condorcet winner even if we can identify it. For example, we might have hard constraints as well as preferences. As a result, the Condorcet winner might be infeasible. We might therefore consider a voting system that returns not just a single winner but a total ranking over the outcomes3 so that we can return those feasible outcomes that are not less preferred than other feasible outcomes (so-called "undominated feasible outcomes"). However, using other voting rules requires care. For instance, manipulation is NP-hard when preferences are single peaked for a number of common voting rules including STV (Walsh 2007b).

Some Negative (Positive?) Results

Various researchers have started to address concerns that NP-hardness is only a worst-case analysis, and votes met in practice might be easier to reason about. For instance, even though it may be NP-hard to compute how to manipulate the STV rule in theory, it might be easy for the sort of elections met in practice. Results so far have been largely negative. That is, manipulation stops being computationally hard. However, in this case, preference elicitation becomes polynomial, so these results might also be seen in a positive light! For instance, Conitzer and Sandholm have proven that, with any weakly monotone voting rule,4 a manipulation can be found in polynomial time if the election is such that the manipulator can make either of exactly two outcomes win (Conitzer and Sandholm 2006). As a second example, Procaccia and Rosenschein have shown that for any scoring rule, you are likely to find a destructive manipulation in polynomial time for a wide class of probability distributions of preferences (Procaccia and Rosenschein 2007). This average case includes preferences that are drawn uniformly at random.

It is perhaps too early to draw a definite conclusion about this line of research. A general impression is that while voting rules with a single round may be easy on average, rules with multiple rounds (like STV or the cup rule) introduce a difficult balancing problem. If we are to find a manipulation, we may need to make an outcome good enough to get through to the final round but bad enough to lose the final contest. This may make manipulation computationally difficult in practice and not just in the worst case.

Hybrid Voting Rules

To demonstrate that rules with multiple rounds may be more difficult to manipulate, we consider some recent results on constructing hybrid voting rules. In such rules, we perform some number of rounds of one voting rule (for example, one round of the cup rule or one round of STV) and then finish the election with some other rule (for example, we then complete the election by applying the plurality rule to the remaining outcomes). Consider, for instance, the plurality rule. Like all scoring rules, this is polynomial to manipulate (Conitzer and Sandholm 2002). However, the hybrid rule where we apply one round of the cup rule and then the plurality rule to the remaining outcomes is NP-hard to manipulate (Conitzer and Sandholm 2003). As a second example, the hybrid rule that has some fixed number of rounds of STV and then completes the election with the plurality rule (or the Borda or cup rules) is NP-hard to manipulate (Elkind and Lipmaa 2005). Hybridization does not, however, always ensure computational complexity. For example, the hybrid rule that has some fixed number of rounds of plurality (in each of which we eliminate the lowest scoring outcome) and then completes the election with the Borda (or cup) rule is polynomial to manipulate (Elkind and Lipmaa 2005). Nevertheless, introducing some qualifying rounds to a voting rule seems a good route to some additional complexity, making it computationally difficult to predict the winner or to manipulate the result. It is interesting to wonder if FIFA and other sporting bodies take this into account when deciding the format of major sporting events like the World Cup.
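To illustrate why scoring rules are easy for a single manipulator, here is a minimal greedy sketch for Borda. It is not the cited authors' construction, and the requirement that the favorite win outright is an assumption of the sketch: rank the favorite first, then hand the largest remaining scores to the currently weakest rivals.

```python
def manipulate_borda(other_votes, outcomes, favorite):
    """Return a vote making `favorite` the unique Borda winner, or None.

    Greedy idea: give the favorite the top score, then assign the largest
    remaining scores to the rivals with the lowest current totals, so no
    rival's total is pushed up unnecessarily."""
    m = len(outcomes)
    base = {c: 0 for c in outcomes}
    for vote in other_votes:
        for pos, c in enumerate(vote):
            base[c] += m - 1 - pos
    # Favorite takes first place; weakest rivals take the next places.
    rivals = sorted((c for c in outcomes if c != favorite), key=lambda c: base[c])
    vote = [favorite] + rivals
    final = {c: base[c] + m - 1 - pos for pos, c in enumerate(vote)}
    if all(final[c] < final[favorite] for c in rivals):
        return vote
    return None
```

With the two other votes (B, C, A) and (C, B, A), outcome A cannot be made to win, but B can, via the manipulating vote [B, A, C].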

Preferences and Constraints

So far, we have largely ignored the fact that there may be hard constraints preventing us from having the most preferred outcome. For example, we might prefer a cheap car to an expensive car, and a Tesla to a Prius. However, there are no cheap Teslas for us to purchase, at least for the near future. Combining (relational) preferences and constraints in this way throws up a number of interesting computational challenges. One possibility is simply to use the preferences to guide search for a feasible outcome by enumerating outcomes in the constraint solver in preference order (Boutilier et al. 1999). Another possibility is to turn qualitative preferences into soft constraints (Domshlak et al. 2003, 2006). We can then apply any standard (soft) constraint solver. Prestwich et al. (2005) give a third possibility, a general algorithm for finding the feasible Pareto optimal5 outcomes that combines both constraint solving and preference reasoning. The algorithm works with any preference formalism that generates a preorder over the outcomes (for example, it works with CP-nets). Briefly, we first find all outcomes that are feasible and optimal in the preference order. If all the optimals in the preference order are feasible then there are no other feasible Pareto optimals, and we can stop. Otherwise, we must compare these with the other feasible outcomes in case some of these are also feasible Pareto optimals. These three examples illustrate some of the different methods proposed to reason with both constraints and preferences. However, much remains to be investigated.
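A brute-force version of this computation is easy to state, though the point of the cited algorithm is to avoid enumerating all outcomes. The car example and the Pareto ordering over the two features below are illustrative assumptions:

```python
def undominated_feasible(outcomes, feasible, strictly_preferred):
    """Feasible outcomes that no other feasible outcome strictly beats,
    for an arbitrary preorder given as a predicate."""
    candidates = [o for o in outcomes if feasible(o)]
    return [o for o in candidates
            if not any(strictly_preferred(q, o) for q in candidates)]

# Illustrative example: prefer cheap to expensive and a Tesla to a Prius,
# comparing outcomes Pareto-wise on the two features.
PRICE = {"cheap": 0, "expensive": 1}
MODEL = {"Tesla": 0, "Prius": 1}

def prefers(a, b):
    # a strictly Pareto-dominates b: at least as good on both features
    # and different somewhere (hence strictly better somewhere).
    return a != b and PRICE[a[0]] <= PRICE[b[0]] and MODEL[a[1]] <= MODEL[b[1]]

cars = [(p, mdl) for p in PRICE for mdl in MODEL]
no_cheap_tesla = lambda o: o != ("cheap", "Tesla")  # the hard constraint
```

Here the feasible Pareto optimals are the cheap Prius and the expensive Tesla; the expensive Prius is dominated by the cheap Prius.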

Multiple Elections

Another interesting topic is that agents may be expressing preferences over several related issues. This can lead to paradoxical results where multiple elections result in the least favorite combination of issues being decided (Brams, Kilgour, and Zwicker 1998; Lacy and Niou 2000; Xia, Lang, and Ying 2007). For example, suppose agents are deciding their position with respect to three topical issues. For simplicity, I will consider three agents and three binary issues. The issues are: is global warming happening, does this have catastrophic consequences, and should we act now. A position on the three issues can be expressed as a triple. For instance, "N Y Y" represents that global warming is not happening, global warming would have catastrophic consequences, and that we do need to act now. The agents' preferences over the three issues are shown in figure 4.

All agents believe that if global warming is happening and this is causing catastrophic consequences, we must act now. Therefore they all place "Y Y N" last in their preference rankings. As there are an exponential number of outcomes in general, it may be unrealistic for agents to provide a complete ranking when voting. One possibility is for the agents just to declare their most preferred outcomes. As

      Agent1    Agent2    Agent3

1     Y Y Y     Y N N     N Y N
2     Y N N     Y Y Y     N Y Y
3     N Y N     N N N     N N N
4     N N N     N Y N     N N Y
5     Y N Y     Y N Y     Y N Y
6     N Y Y     N Y Y     Y Y Y
7     N N Y     N N Y     Y N N
8     Y Y N     Y Y N     Y Y N

Figure 4. Agents' Preferences over the Three Issues.
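The paradox can be checked mechanically. A minimal sketch, taking only each agent's most preferred combination (the rank 1 row of figure 4) and deciding each binary issue by majority:

```python
def issuewise_majority(top_choices):
    """Decide each binary issue by majority over the agents' declared
    most preferred combinations (one character, Y or N, per issue)."""
    n = len(top_choices)
    return "".join(
        "Y" if 2 * sum(t[i] == "Y" for t in top_choices) > n else "N"
        for i in range(len(top_choices[0]))
    )
```

With the top choices Y Y Y, Y N N, and N Y N from figure 4, this returns "Y Y N", the combination that every agent ranks last.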

issues are binary, we apply the majority rule to each issue. In this case, this gives "Y Y N." Unfortunately, this is everyone's least favorite option. Lacy and Niou show that such "paradoxical" results are a consequence of the agents' preferences not being separable (Lacy and Niou 2000). We can define the possible and necessary winners for such an election. Both are polynomial to compute. Even though there can be an exponential number of possible winners, the set of possible winners can always be represented in linear space as the issues are decided independently. For example, if Agent1 and Agent2 have voted, but Agent3 has not, the possible winners under the majority rule are "Y * *."

One way around such paradoxical results is to vote sequentially, issue by issue. Suppose the agents vote sincerely (that is, declaring their most preferred option at each point consistent with the current decisions). Here, the agents would decide Y for global warming, then N for catastrophic consequences (as both Agent2 and Agent3 prefer this given Y for global warming), and then N for acting now. Thus, the winner is "Y N N." Lacy and Niou prove that if agents vote sincerely, such sequential voting will not return an outcome dominated by all others (Lacy and Niou 2000). However, such sequential voting does not necessarily return the Condorcet winner, the outcome that will beat all others in pairwise elections. Here, for example, it does not return the Condorcet winner, "Y Y Y." We can define possible and necessary winners for such sequential voting. However, it is not at all obvious whether the possible or necessary winners of such a sequential vote can be computed in polynomial time, nor even whether the set of possible winners can be represented in polynomial space.

Another way around this problem is sophisticated voting. Agents need to know the preferences of the other voters and anticipate the outcome of each vote to eliminate dominated strategies where a majority prefer some other outcome. Lacy and Niou (2000) prove that such sophisticated voting will always produce the overall Condorcet winner if it exists, irrespective of whether voters have separable or inseparable preferences. However, this may not be a computationally feasible solution since it requires reasoning about exponentially large rankings. Another way around this problem is a domain restriction. For instance, Lacy and Niou prove that if agents' preferences are separable, then sequential majority voting is not manipulable, and it is in the best interest of agents to vote sincerely (Lacy and Niou 2000).

Conclusion

This survey has argued that there are many interesting issues concerning the representation of and the reasoning about preferences. Whilst researchers in areas like social choice have studied preferences, artificial intelligence brings some fresh perspectives. These perspectives include both computational questions like the complexity of eliciting preferences and representational questions like dealing with uncertainty. This remains a very active research area. At AAAI-07, there was an invited talk, several technical talks, a tutorial, and a workshop on preferences. It therefore seems certain that there will be continuing progress in understanding how we represent and reason with preferences.

Acknowledgements

This article is loosely based on an invited talk given at the Twenty-Second Conference on Artificial Intelligence (AAAI-07) in Vancouver, July 2007. NICTA is funded by the Australian Government's Department of Communications, Information Technology, and the Arts, and the Australian Research Council. Thanks to Jerome Lang, Michael Maher, Maria Silvia Pini, Steve Prestwich, Francesca Rossi, Brent Venable, and other colleagues for their contributions.

Notes

1. In addition to voting rules that return a single winner, it is interesting to consider extensions such as rules that select multiple winners (for example, to elect a multimember committee) and social welfare functions that return a total ranking over the outcomes.

2. The interested reader is challenged to identify in which 2008 election agents might be voting for Boris, Nick, or Ken.

3. A voting system that returns a total ranking over the outcomes is called a social welfare function.

4. See Conitzer and Sandholm (2006) for the formal definition of weak monotonicity. Informally, monotonicity is the property that improving the vote for an outcome can only help the outcome win.

5. The feasible Pareto optimal outcomes are those outcomes that are feasible and not less preferred than any other feasible outcome.

References

Arrow, K. 1970. Social Choice and Individual Values. New Haven, CT: Yale University Press.

Bartholdi, J., and Orlin, J. 1991. Single Transferable Vote Resists Strategic Voting. Social Choice and Welfare 8(4): 341–354.

Bartholdi, J.; Tovey, C.; and Trick, M. 1989a. The Computational Difficulty of Manipulating an Election. Social Choice and Welfare 6(3): 227–241.

Bartholdi, J.; Tovey, C.; and Trick, M. 1989b. Voting Schemes for Which It Can Be Difficult to Tell Who Won the Election. Social Choice and Welfare 6(2): 157–165.

Black, D. 1948. On the Rationale of Group Decision Making. Journal of Political Economy 56(1): 23–34.

Boutilier, C. 2002. A POMDP Formulation of Preference Elicitation Problems. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, 239–246. Menlo Park, CA: Association for the Advancement of Artificial Intelligence.

Boutilier, C.; Brafman, R.; Hoos, H.; and Poole, D. 1999. Reasoning with Conditional Ceteris Paribus Preference Statements. In Proceedings of the Fifteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-99), 71–80. San Francisco: Morgan Kaufmann Publishers.

Brafman, R., and Domshlak, C. 2002. Introducing Variable Importance Tradeoffs into CP-nets. In Proceedings of the Eighteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), 69–76. San Francisco: Morgan Kaufmann Publishers.

Brams, S.; Kilgour, D.; and Zwicker, W. 1998. The Paradox of Multiple Elections. Social Choice and Welfare 15(2): 211–236.

Conitzer, V. 2007. Eliciting Single-Peaked Preferences Using Comparison Queries. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems. New York: Association for Computing Machinery.

Conitzer, V., and Sandholm, T. 2002. Complexity of Manipulating Elections with Few
Candidates. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, 239–246. Menlo Park, CA: Association for the Advancement of Artificial Intelligence.

Conitzer, V., and Sandholm, T. 2003. Universal Voting Protocol Tweaks to Make Manipulation Hard. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, 781–788. San Francisco: Morgan Kaufmann Publishers.

Conitzer, V., and Sandholm, T. 2006. Nonexistence of Voting Rules That Are Usually Hard to Manipulate. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, 239–246. Menlo Park, CA: Association for the Advancement of Artificial Intelligence.

Conitzer, V.; Lang, J.; and Sandholm, T. 2003. How Many Candidates Are Needed to Make Elections Hard to Manipulate. In Proceedings of the Conference on Theoretical Aspects of Reasoning about Knowledge (TARK IX). New York: Association for Computing Machinery.

Conitzer, V.; Sandholm, T.; and Lang, J. 2007. When Are Elections with Few Candidates Hard to Manipulate. Journal of the Association for Computing Machinery 54(3).

Domshlak, C.; Prestwich, S.; Rossi, F.; Venable, K.; and Walsh, T. 2006. Hard and Soft Constraints for Reasoning about Qualitative Conditional Preferences. Journal of Heuristics 12(4–5): 263–285.

Domshlak, C.; Rossi, F.; Venable, B.; and Walsh, T. 2003. Reasoning about Soft Constraints and Conditional Preferences: Complexity Results and Approximation Techniques. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence. San Francisco: Morgan Kaufmann Publishers.

Elkind, E., and Lipmaa, H. 2005. Hybrid Voting Protocols and Hardness of Manipulation. In Proceedings of the Sixteenth Annual International Symposium on Algorithms and Computation (ISAAC'05). Berlin: Springer-Verlag.

Gajdos, T.; Hayashi, T.; Tallon, J.; and Vergnaud, J. 2006. On the Impossibility of Preference Aggregation Under Uncertainty. Working Paper, Centre d'Economie de la Sorbonne, Université Paris 1-Panthéon-Sorbonne.

Gibbard, A. 1973. Manipulation of Voting Schemes: A General Result. Econometrica 41(4): 587–601.

Goldsmith, J.; Lang, J.; Truszczynski, M.; and Wilson, N. 2005. The Computational Complexity of Dominance and Consistency in CP-Nets. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence. San Francisco: Morgan Kaufmann Publishers.

Hebrard, E.; Hnich, B.; O'Sullivan, B.; and Walsh, T. 2005. Finding Diverse and Similar Solutions in Constraint Programming. In Proceedings of the Twentieth National Conference on Artificial Intelligence, 239–246. Menlo Park, CA: Association for the Advancement of Artificial Intelligence.

Hebrard, E.; O'Sullivan, B.; and Walsh, T. 2007. Distance Constraints in Constraint Satisfaction. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence. Menlo Park, CA: AAAI Press.

Konczak, K., and Lang, J. 2005. Voting Procedures with Incomplete Preferences. Paper presented at the IJCAI-2005 Workshop on Advances in Preference Handling, Edinburgh, Scotland, 31 July.

Lacy, D., and Niou, E. 2000. A Problem with Referenda. Journal of Theoretical Politics 12(1): 5–31.

Lang, J.; Pini, M.; Rossi, F.; Venable, B.; and Walsh, T. 2007. Winner Determination in Sequential Majority Voting. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence. San Francisco: Morgan Kaufmann Publishers.

Pini, M.; Rossi, F.; Venable, B.; and Walsh, T. 2007. Incompleteness and Incomparability in Preference Aggregation. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence. San Francisco: Morgan Kaufmann Publishers.

Prestwich, S.; Rossi, F.; Venable, K.; and Walsh, T. 2005. Constraint-Based Preferential Optimization. In Proceedings of the Twentieth National Conference on Artificial Intelligence, 239–246. Menlo Park, CA: Association for the Advancement of Artificial Intelligence.

Procaccia, A. D., and Rosenschein, J. S. 2007. Junta Distributions and the Average-Case Complexity of Manipulating Elections. Journal of Artificial Intelligence Research 28: 157–181.

Rossi, F.; Venable, B.; and Walsh, T. 2004. mCP Nets: Representing and Reasoning with Preferences of Multiple Agents. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, 239–246. Menlo Park, CA: Association for the Advancement of Artificial Intelligence.

Satterthwaite, M. 1975. Strategy-Proofness and Arrow's Conditions: Existence and Correspondence Theorems for Voting Procedures and Social Welfare Functions. Journal of Economic Theory 10(2): 187–216.

Walsh, T. 2007a. Manipulating Individual Preferences. Technical Report COMIC-2007-0011, National ICT Australia (NICTA) and the University of New South Wales, Sydney, Australia.

Walsh, T. 2007b. Uncertainty in Preference Elicitation and Aggregation. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence, 239–246. Menlo Park, CA: Association for the Advancement of Artificial Intelligence.

Xia, L.; Lang, J.; and Ying, M. 2007. Sequential Voting and Multiple Election Paradoxes. In Proceedings of the Conference on Theoretical Aspects of Reasoning about Knowledge. New York: Association for Computing Machinery.

Toby Walsh is a senior principal researcher at National ICT Australia (NICTA) in Sydney, conjoint professor at the University of New South Wales, external professor at Uppsala University, and an honorary fellow of the School of Informatics at Edinburgh University. He is currently editor-in-chief of the Journal of Artificial Intelligence Research (JAIR) and will be program chair of IJCAI-11.
