The foundations of negotiation theory are decision analysis, behavioral decision making, game theory, and negotiation analysis. Another classification of theories distinguishes between structural, strategic, process, integrative, and behavioral analysis of negotiations. Decision analysis and behavioral decision making consider how individuals should and do make separate, interactive decisions; negotiation analysis considers how groups of reasonably bright individuals should and could make joint, collaborative decisions. These theories are interleaved and are best approached from a synthetic perspective.
Common Assumptions of Most Theories
Negotiation is a specialized and formal version of conflict resolution, most frequently employed when important issues must be agreed upon. Negotiation is necessary when one party requires the other party’s agreement to achieve its aim. The aim of negotiating is to build a shared environment leading to long-term trust, and it often involves a third, neutral party to extract the issues from the emotions and keep the parties concerned focused. It is a powerful method for resolving conflict, and one that requires skill and experience.
Zartman defines negotiation as “a process of combining conflicting positions into a common position under a decision rule of unanimity, a phenomenon in which the outcome is determined by the process.” Most theories of negotiation share the notion of negotiation as a process, but they differ in their description of that process. Structural analysis considers the process to be a power game. Strategic analysis treats it as a repetition of games (game theory). Integrative analysis prefers a more intuitive notion of process, in which negotiations pass through successive stages, e.g. pre-negotiation, stalemate, settlement.
Structural, strategic, and process analysis, in particular, build on rational actors, who are able to prioritize clear goals, make trade-offs between conflicting values, behave consistently, and take uncertainty into account. Negotiation differs from mere coercion in that the negotiating parties retain the theoretical possibility of withdrawing from negotiations. Bilateral negotiations are easier to study than multilateral negotiations.

Structural Analysis
Structural analysis is based on a distribution of empowering elements among the two negotiating parties.
Structural theory moves away from traditional Realist notions of power in that it does not only consider power to be a possession, manifested for example in economic or military resources, but also thinks of power as a relation. Based on the distribution of elements, structural analysis finds either power symmetry between equally strong parties or power asymmetry between a stronger and a weaker party. All elements from which the respective parties can draw power constitute structure. They may be of a material nature, i.e. hard power (such as weapons), or of a social nature, i.e. soft power (such as norms, contracts, or precedents). These instrumental elements of power are defined either as the parties’ relative positions (resource positions) or as their relative ability to make their options prevail. Structural analysis is easy to criticise, because it predicts that the strongest will always win; this, however, does not always hold true.

Strategic Analysis
According to strategic analysis, negotiations can be described with matrices such as the Prisoner’s Dilemma, a concept taken from game theory. Another common game is the Chicken Dilemma.
Strategic analysis starts with the assumption that both parties have a veto. Thus, in essence, negotiating parties can cooperate (C) or defect (D). Strategic analysis then evaluates the possible outcomes of negotiations (C, C; C, D; D, D; D, C) by assigning values to each of them. Often, cooperation by both sides yields the best outcome. The problem is that the parties can never be sure that the other is going to cooperate, for two main reasons: first, decisions are made at the same time; second, concessions by one side might not be returned.
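The cooperate/defect payoff structure described above can be sketched numerically. The payoffs below are illustrative assumptions (any numbers with the same ordering would do); the point is that defection is the individually best reply to either move, even though mutual cooperation is jointly better:

```python
# Illustrative payoffs for the cooperate/defect matrix described above.
# payoffs[(row_move, col_move)] = (row player's value, column player's value);
# only the ordering of the numbers matters, not the numbers themselves.
payoffs = {
    ("C", "C"): (3, 3),  # mutual cooperation: both do well
    ("C", "D"): (0, 5),  # the cooperator is exploited by the defector
    ("D", "C"): (5, 0),  # the defector gains most against a cooperator
    ("D", "D"): (1, 1),  # mutual defection: both do poorly
}

def best_response(opponent_move):
    """The move that maximizes one's own payoff against a fixed opponent move."""
    return max(("C", "D"), key=lambda m: payoffs[(m, opponent_move)][0])

# Defection pays more for the individual whatever the other side does...
assert best_response("C") == "D"
assert best_response("D") == "D"
# ...yet mutual cooperation beats mutual defection for both parties.
assert payoffs[("C", "C")][0] > payoffs[("D", "D")][0]
```

This is exactly the contradiction in incentives: the individually rational move leads both sides to the jointly worse (D, D) outcome.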
Therefore the parties have contradictory incentives to cooperate or defect. If one party cooperates or makes a concession and the other does not, the defecting party gains relatively more. These options and possible outcomes constitute the Negotiator’s Dilemma. Trust may be built only in repeated games, through the emergence of reliable patterns of behaviour such as tit-for-tat.

Process Analysis
Process analysis is the theory closest to haggling. Parties start from two points and converge through a series of concessions. As in strategic analysis, both sides have a veto (e.g. sell or not sell; pay or not pay). Process analysis also features structural assumptions, because one side may be weaker or stronger (e.g. more eager to sell, or not willing to pay a certain price). Process analysis focuses on the dynamics of the process itself. For example, both Zeuthen and Cross tried to find a formula for each party’s rate of concession, in order to predict the other party’s behaviour and the likely outcome. The process of negotiation is therefore considered to unfold between fixed points: a starting point of discord and an end point of convergence.
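As a rough illustration of this convergence, here is a toy concession model (an assumption for illustration, not Zeuthen’s or Cross’s actual formula), in which each party concedes a fixed fraction of the remaining gap per round:

```python
# A toy concession model: each round, every party concedes a fixed
# fraction of the gap between the two current positions. The rates and
# prices are hypothetical; the model only shows how the process unfolds
# between a point of discord and a point of convergence.

def converge(seller_ask, buyer_bid, seller_rate, buyer_rate, tolerance=0.5):
    """Iterate concessions until the positions meet within `tolerance`."""
    rounds = 0
    while seller_ask - buyer_bid > tolerance:
        gap = seller_ask - buyer_bid
        seller_ask -= seller_rate * gap  # seller lowers the asking price
        buyer_bid += buyer_rate * gap    # buyer raises the offer
        rounds += 1
    return rounds, (seller_ask + buyer_bid) / 2

# A more eager (structurally weaker) seller concedes faster,
# so the settlement lands below the midpoint of the opening positions.
rounds, price = converge(100.0, 60.0, seller_rate=0.3, buyer_rate=0.1)
assert price < (100.0 + 60.0) / 2
```

The asymmetric concession rates play the role of the structural assumption: the faster-conceding party ends up with the worse share of the converged outcome.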
The so-called security point, that is, the result of optional withdrawal, is also taken into account.

Integrative Analysis
Integrative analysis divides the process into successive stages, rather than talking about fixed points. It extends the analysis to pre-negotiation stages, in which the parties make first contact. The outcome is explained as the performance of the actors at the different stages. Stages may include pre-negotiation, finding a formula of distribution, crest behaviour, and settlement.

Bad faith negotiation
Bad faith is a concept in negotiation theory whereby parties pretend to reason toward a settlement but have no intention of reaching one; for example, a political party may pretend to negotiate, with no intention to compromise, for political effect.

Inherent bad faith model in international relations and political psychology
Bad faith in political science and political psychology refers to negotiating strategies in which there is no real intention to reach compromise, or to a model of information processing.
The “inherent bad faith model” of information processing is a theory in political psychology first put forth by Ole Holsti to explain the relationship between John Foster Dulles’ beliefs and his model of information processing. It is the most widely studied model of one’s opponent. A state is presumed to be implacably hostile, and contra-indicators of this are ignored: they are dismissed as propaganda ploys or signs of weakness. Examples are John Foster Dulles’ position regarding the Soviet Union and Israel’s initial position on the Palestine Liberation Organization.

Decision analysis
Decision analysis (DA) is the discipline comprising the philosophy, theory, methodology, and professional practice necessary to address important decisions in a formal manner. Decision analysis includes many procedures, methods, and tools for identifying, clearly representing, and formally assessing important aspects of a decision; for prescribing a recommended course of action by applying the maximum expected utility action axiom to a well-formed representation of the decision; and for translating the formal representation of a decision and its corresponding recommendation into insight for the decision maker and other stakeholders.
History and methodology
The term decision analysis was coined in 1964 by Ronald A. Howard, who since then, as a professor at Stanford University, has been instrumental in developing much of the practice and professional application of DA. Decision analysis problems are commonly represented graphically using influence diagrams and decision trees. Both of these tools represent the alternatives available to the decision maker, the uncertainty they face, and evaluation measures representing how well they achieve their objectives in the final outcome.
Uncertainties are represented through probabilities and probability distributions. The decision maker’s attitude to risk is represented by utility functions, and their attitude to trade-offs between conflicting objectives can be represented using multi-attribute value functions or multi-attribute utility functions (if there is risk involved). In some cases, utility functions can be replaced by the probability of achieving uncertain aspiration levels.
Decision analysis advocates choosing the decision whose consequences have the maximum expected utility (or which maximizes the probability of achieving the uncertain aspiration level). Such decision-analytic methods are used in a wide variety of fields, including business (planning, marketing, and negotiation), environmental remediation, health care research and management, energy exploration, and litigation and dispute resolution. Decision analysis is used by major corporations to make multi-billion dollar capital investments.
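The maximum-expected-utility rule can be sketched in a few lines. The alternatives, probabilities, and utility functions below are hypothetical assumptions, chosen only to show how the utility function encodes the risk attitude mentioned above:

```python
# A minimal expected-utility comparison. The alternatives, probabilities
# and utility functions below are illustrative assumptions.
import math

def expected_utility(lottery, utility):
    """Probability-weighted sum of utilities over a lottery's outcomes."""
    return sum(p * utility(x) for p, x in lottery)

# Two hypothetical alternatives: a risky project and a safe one.
risky = [(0.4, 1000.0), (0.6, 0.0)]  # 40% chance of 1000, else nothing
safe = [(1.0, 350.0)]                # 350 for certain

linear = lambda x: x              # risk-neutral decision maker
concave = lambda x: math.sqrt(x)  # risk-averse decision maker

# A risk-neutral decision maker picks the risky project (EU 400 > 350)...
assert expected_utility(risky, linear) > expected_utility(safe, linear)
# ...while a risk-averse one prefers the safe alternative.
assert expected_utility(safe, concave) > expected_utility(risky, concave)
```

Same alternatives, different utility function, different recommendation: that is the sense in which the utility function carries the decision maker’s attitude to risk.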
In 2010, Chevron won the Decision Analysis Society Practice Award for its use of decision analysis in all major decisions. In a video detailing Chevron’s use of decision analysis, Chevron Vice Chairman George Kirkland notes that “decision analysis is a part of how Chevron does business for a simple, but powerful, reason: it works.”

Controversy
Decision researchers studying how individuals make decisions have found that decision analysis is rarely used. High-stakes decisions, made under time pressure, are not well described by decision analysis.
Some decision analysts, in turn, argue that their approach is prescriptive, providing a prescription of what actions to take based on sound logic, rather than descriptive, describing the flaws in the way people actually make decisions. Critics cite the phenomenon of paralysis by analysis as one possible consequence of over-reliance on decision analysis in organizations. Studies have demonstrated the utility of decision analysis in creating decision-making algorithms that are superior to “unaided intuition”.
The term “decision analytic” has often been reserved for decisions that do not appear to lend themselves to mathematical optimization methods. Methods like applied information economics, however, attempt to apply more rigorous quantitative methods even to these types of decisions.

Game theory
Game theory is the study of strategic decision making. More formally, it is “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.” An alternative term suggested “as a more descriptive name for the discipline” is interactive decision theory. Game theory is mainly used in economics, political science, and psychology, as well as in logic and biology. The subject first addressed zero-sum games, in which one person’s gains exactly equal the net losses of the other participant(s). Today, however, game theory applies to a wide range of behavioral relations, and has developed into an umbrella term for the logical side of decision science, including both humans and non-humans, such as computers.
Classic applications include finding equilibria in numerous games, in which each player has found or developed a tactic that cannot improve his results, given the other players’ approach. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann’s original proof used Brouwer’s fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics.
His paper was followed by his 1944 book Theory of Games and Economic Behavior, with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s.
Game theory has been widely recognized as an important tool in many fields. Eight game theorists have won the Nobel Memorial Prize in Economic Sciences, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology.

History
Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory. The first known discussion of game theory occurred in a letter written by James Waldegrave in 1713.
In this letter, Waldegrave provides a minimax mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation. In his 1838 Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth), Antoine Augustin Cournot considered a duopoly and presented a solution that is a restricted version of the Nash equilibrium.
The Danish mathematician Zeuthen proved that a mathematical model has a winning strategy by using Brouwer’s fixed-point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games, but only when the pay-off matrix was symmetric. Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a unique field until John von Neumann published a paper in 1928.
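The minimax idea for two-person zero-sum matrix games can be sketched as follows. The payoff matrix is illustrative and deliberately chosen to have a saddle point in pure strategies; in general, the minimax theorem guarantees equality of the two values only once mixed strategies are allowed:

```python
# An illustrative two-person zero-sum game: entries are the row player's
# gains (the column player loses the same amount). The matrix is chosen
# so that a saddle point exists in pure strategies.
A = [
    [3, 2, 4],
    [1, 0, 2],
    [2, 1, 3],
]

# The row player can guarantee at least the maximin value;
# the column player can hold the row player to at most the minimax value.
maximin = max(min(row) for row in A)
minimax = min(max(A[i][j] for i in range(len(A))) for j in range(len(A[0])))

assert maximin == 2 and minimax == 2
assert maximin == minimax  # saddle point: the value of the game
```

When no saddle point exists (as in matching pennies), maximin < minimax over pure strategies, and von Neumann’s theorem closes the gap by randomizing over strategies.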
Von Neumann’s work in game theory culminated in the 1944 book Theory of Games and Economic Behavior by von Neumann and Oskar Morgenstern. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. During this time period, work on game theory was primarily focused on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.  In 1950, the first discussion of the prisoner’s dilemma appeared, and an experiment was undertaken by notable mathematicians Merrill M.
Flood and Melvin Dresher, as part of the RAND Corporation’s investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy. Around this same time, John Nash developed a criterion for the mutual consistency of players’ strategies, known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. This equilibrium is sufficiently general to allow for the analysis of non-cooperative games in addition to cooperative ones.
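Nash’s mutual-consistency criterion can be checked by brute force in small games. The sketch below uses illustrative payoffs for the Chicken game mentioned earlier; a strategy profile is an equilibrium when neither player can gain by deviating unilaterally:

```python
# Brute-force search for pure-strategy Nash equilibria in a two-player
# game, using illustrative payoffs for the game of Chicken.
from itertools import product

# payoffs[(row_move, col_move)] = (row player's payoff, column player's payoff)
payoffs = {
    ("Swerve", "Swerve"): (0, 0),
    ("Swerve", "Straight"): (-1, 1),
    ("Straight", "Swerve"): (1, -1),
    ("Straight", "Straight"): (-10, -10),  # head-on crash: worst for both
}
moves = ("Swerve", "Straight")

def is_nash(r, c):
    """Neither player can gain by unilaterally deviating from (r, c)."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in moves)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in moves)
    return row_ok and col_ok

equilibria = [(r, c) for r, c in product(moves, repeat=2) if is_nash(r, c)]
assert equilibria == [("Swerve", "Straight"), ("Straight", "Swerve")]
```

Unlike the Prisoner’s Dilemma, Chicken has two pure equilibria, in each of which exactly one player yields, which is why the game is often used to model brinkmanship.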
Game theory experienced a flurry of activity in the 1950s, during which time the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. In addition, the first applications of game theory to philosophy and political science occurred during this time. In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium (later he would introduce trembling hand perfection as well). In 1967, John Harsanyi developed the concepts of incomplete information and Bayesian games.
Nash, Selten and Harsanyi became Economics Nobel Laureates in 1994 for their contributions to economic game theory. In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection, and common knowledge were introduced and analyzed. In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory.
Aumann contributed more to the equilibrium school, introducing an equilibrium coarsening, correlated equilibrium, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences. In 2007, Leonid Hurwicz, together with Eric Maskin and Roger Myerson, was awarded the Nobel Prize in Economics “for having laid the foundations of mechanism design theory.” Myerson’s contributions include the notion of proper equilibrium and an important graduate text, Game Theory: Analysis of Conflict (Myerson 1997).
Hurwicz introduced and formalized the concept of incentive compatibility.

Representation of games
The games studied in game theory are well-defined mathematical objects. A game consists of a set of players, a set of moves (or strategies) available to those players, and a specification of payoffs for each combination of strategies. Most cooperative games are presented in characteristic function form, while the extensive and normal forms are used to define noncooperative games.

Extensive form