Representing and Applying Knowledge for
Argumentation in a Social Context
Dept. of Information Systems and Computing
Uxbridge, Middlesex UB8 3PH
Tel. (+44)(0)1895 27400 Ext.2139
Fax (+44)(0)1895 251686
(Running Title: Argumentation in a Social Context)
Abstract: The concept of argumentation in AI is based almost exclusively on the use of formal, abstract representations. Despite their appealing computational properties, these abstractions become increasingly divorced from their real world counterparts, and, crucially, lose the ability to express the rich gamut of natural argument forms required for creating effective text. In this paper, the demands that socially situated argumentation places on knowledge representation are explored, and the various problems with existing formalisations are discussed. Insights from argumentation theory and social psychology are then adduced as key contributions to a notion of social context which is both computationally tractable and suitably expressive for handling the complexities of argumentation found in natural language.
Keywords: Argumentation; Natural language generation; Persuasion; User modelling.
1. Introduction
The task of (re)presenting an argument for some proposition is one which is attracting increasing attention in the AI community. Representations are designed to facilitate understanding and development of law, legal contracts, and debate, and to build rigorous models for reasoning under uncertainty. Argument presentation plays a key role in the provision of explanations and justifications in expert systems, in the generation of tailored health education materials, in critiquing user decisions, and in computer assisted learning (CAL) systems for teaching skills of exposition and critical thinking. The close ties between representation and presentation of argument are also being explored productively in multi-agent systems where agents negotiate and persuade one another, and in knowledge based systems where data is represented using argumentation schemes which then form the basis of the presentation of subsequently retrieved information.
Yet much of this work suffers as a result of adopting a naïve approach to one of the fundamental aims of argumentation, in assuming that a logically sound argument is equivalent to one which will persuade any audience. Although a logically sound argument should be persuasive to any rational judge, argument is a situated communicative act, and aspects of the situation influence its success at least as heavily as its logical content. Because argument is a situated phenomenon, it is inappropriate to design an argument solely on the basis of its desired propositional content. The content, structure and presentation of an argument must instead be sensitive to the beliefs and attitudes of the hearer, and to the sociological context in which the argument is set.
This paper surveys existing techniques for reasoning with argument and discusses some of the problems inherent in these approaches (section 2). The major extra-logical and extra-linguistic factors impinging upon argumentation are enumerated in section 3, and their introduction into a computational model is presented in section 4. Finally, the role of such a model is discussed and appraised in section 5.
2. Formal argumentation
There is increasing interest in using argumentation for systems based upon formal logic which need to reason about the real world, and in particular, which must be able to cope with uncertain and incomplete information. Reasoning about such domains can rarely employ strict deductive inference; rather, it becomes necessary to use some weaker notion of support - and often then to express the degree of that support (either qualitatively, e.g. (Parsons and Fox, 1996), or quantitatively, e.g. (Sillince and Minors, 1992)). If a system no longer relies solely upon strict inference, then it may benefit from the use of multiple subarguments (Reed and Long, 1997a). To calculate an overall value of belief in a proposition with multiple supports, these separate lines of support may be aggregated under some flattening function, such as those discussed in (Das et al., 1996). Furthermore, the set of arguments contributing to a claim can itself be evaluated as a first-class data object to determine the acceptability of the argument as a whole. This is the approach adopted in the argumentation logic LA (Krause et al., 1995), which uses a labelled deductive system (Gabbay, 1992) to record sets of supports and determine acceptability. This approach is motivated by the need to reason under uncertainty (Elvang-Gøransson et al., 1993), and has successfully been applied in a number of medical domains (Fox and Das, 1996).
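By way of illustration, a flattening function of the kind discussed in (Das et al., 1996) might be sketched as follows. The labels loosely echo the acceptability classes of (Elvang-Gøransson et al., 1993), but both the labels and the promotion rule here are assumptions for illustration, not the actual LA calculus:

```python
from enum import IntEnum

class Support(IntEnum):
    """Qualitative acceptability labels, ordered by strength.
    The names loosely follow the classes of Elvang-Gøransson et al.;
    the numeric ordering is an illustrative assumption."""
    OPEN = 0        # no argument either way
    SUPPORTED = 1   # at least one consistent argument for the claim
    PROBABLE = 2    # a consistent argument from consistent premises
    CONFIRMED = 3   # an undefeatable argument

def flatten(supports):
    """One possible flattening function: a claim inherits the strongest
    label among its sub-arguments, and is promoted one step when several
    independent lines of support contribute to it."""
    if not supports:
        return Support.OPEN
    best = max(supports)
    if len(supports) > 1 and best < Support.CONFIRMED:
        return Support(best + 1)
    return best
```

Under this (invented) rule, two independent lines that are each merely 'supported' would jointly render the claim 'probable', while a single 'confirmed' argument needs no corroboration.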
LA represents a highly specific logic, based, in the first instance, upon intuitions of evaluating a claim on the basis of ‘pro’ and ‘con’ arguments. A more generic approach to uncertainty and incompleteness which has been at least as successful in its use of argumentation is defeasible reasoning, e.g. (Pollock, 1987), in which there are two closely related trends. Firstly, argumentation is used as a technique for implementing systems of defeasible reasoning, both those involving priorities between defeasible rules, such as (Prakken and Sartor, 1996) and (Vreeswijk, 1992), and those based upon a probabilistic underpinning (Geffner, 1996). The second trend is closely allied with the first in terms of its results, but its motivations differ in that the aim is to model argument using defeasible reasoning - (Loui, 1987), for example, and Prakken’s (Prakken, 1996) ‘dialectical proof theory’. This last work is characteristic of the field in that it does not attempt to model generic, free argument, but instead concentrates on legal reasoning. Argumentation in jurisprudence benefits not only from the practical advantage of a plentiful supply of transcribed sources, but also from the theoretical advantage of possessing clearer rules of exchange and dialectical progression - legal argument is more ‘rigorous’, demanding greater adherence to logical consistency, and admitting little retraction (Kowalski and Toni, 1996), (Prakken and Sartor, 1996), (Daskalopulu and Sergot, 1995), (Verheij, 1996), etc.
In general terms, systems such as LA, and those of (Parsons, 1996), (Prakken, 1996), etc., formalise intuitive notions of argument to solve particular problems of representation and reasoning (such as dealing with uncertainty, incompleteness and prioritisation). However, when comparison is made between these intuitions and the finely honed definitions offered by argumentation theory ((Eemeren et al., 1996) offers a good overview), it becomes clear that the former are at once underspecified and too restrictive. Their underspecification becomes manifest when linguistic considerations are brought to bear – these considerations, of course, are crucial if arguments are to be communicated intelligibly to humans (though a number of the systems discussed in (Fox and Das, 1996) do indeed convey their results to humans, the process is template based and presents serious problems of assimilation to a user). When conveying information linguistically, it is important to know (i) what information may safely be omitted or aggregated (for conciseness), (ii) which pieces of information must become central topics, and which occur as supplementary or supporting data, (iii) where appropriate breaks in information can be accommodated, (iv) what constraints hold over the organisation of information to ensure coherence in the resultant text, (v) which linguistic qualitative terms map most appropriately to the formal quantitative or qualitative data, and (vi) which pieces of information the hearer either already knows, would be able to accommodate, or would find unpalatable. There are also broader communication-based considerations, concerned with higher level structuring of a text – how and what to summarise, where repetition is appropriate, what level of introduction and background is required, etc.
Typical argumentation-based reasoning systems underspecify their content with respect to both the fine-grained considerations acting at a clause and inter-clause level, and to the more coarse-grained discourse level considerations. Equally, though, these systems are also too restrictive, with little or no support for nondeductive reasoning, no means of representing ‘fallacious’ reasoning (which can be perfectly acceptable in particular situations, e.g. (Walton, 1992)), only very limited means of distinguishing linked from convergent inter-argument support (Reed and Long, 1997a), frequent dependence upon a particular qualitative or quantitative representation scheme (which may or may not accord with linguistic resources), similarly frequent dependence upon a particular view of argumentation (approaches based on (Toulmin, 1958) are particularly predominant) and an inability to represent the multifarious factors of the social context, discussed below. These constraints restrict the range of argumentation to only a small subset of that available in natural language.
The various linguistic considerations are thus crucial in the design of representation and application criteria for argumentation-based systems which need to communicate reasoning to humans. Research in argumentation theory has been driven by such considerations, precisely because it is an empirical discipline, with theories being defined through analysis of real world argument. There is a small but comparable area of research in artificial intelligence where natural argument itself, rather than abstract intuitions about it, is taken as the phenomenon to be modelled, and where, therefore, these linguistic considerations cannot be ignored. Within this area, there are two distinct trends: (i) representing and occasionally analysing natural argument, (ii) automatically generating natural language arguments from some knowledge base.
Under the first research trend, there are several distinct identifiable aims. A number of systems support humans in constructing or following argumentation in order to reach decisions: a variety of medical decision support systems based on Krause's (Krause et al., 1995) LA are discussed in (Fox and Das, 1996), and similarly, the Negoplan system of (Matwin et al., 1989) supports human negotiation using an expert system to represent the structure of the arguments. Distinct from those which offer support are a group of systems which offer the medium for argument. In particular, there have been attempts to integrate argumentation frameworks with the world-wide web (WWW), which, though an attractive arena for debate, suffers from a number of problems inherent in its current modes of interaction. Jackson (1997), for example, points out that newsgroup postings - a prime example of rich, wide-ranging and unregulated debate - are exceedingly difficult to employ successfully for constructive argument: it is difficult to see the thread of an argument, and to see which points have been addressed, which are contentious, and which need resolving. She suggests a solution whereby structure is imposed upon the debate; postings become hypertext documents which are arranged hierarchically according to their functional role in the argument (rather than ordered sequentially and chronologically). A similar approach has been suggested by Gordon et al. (Gordon, 1994), (Gordon and Karacapilidis, 1996) in their Zeno Framework, which is intended to offer mediation on the WWW and uses for its underlying argument structuring the IBIS system of (Rittel and Webber, 1973). Distinct again from support- and medium-oriented research, computational representation of argument is also employed to pedagogical ends (in teaching skills of both argument production, e.g.
(Cavalli-Sforza et al., 1992), (Pilkington et al., 1992), and argument criticism, (Cavalli-Sforza et al., 1993) ), and finally, as a means of abstracting from particular data sources and thus facilitating representation of arguments drawing on disparate and possibly conflicting sources (implemented in Haggith’s FORA system (Haggith, 1995), (Haggith, 1996) ).
The second research trend, aiming to create - rather than represent - argument can also be subdivided into work focusing primarily on the structure of argument, and that focusing primarily on the language of argument. Clearly these two tasks are not entirely separable, but nevertheless, systems such as Zukerman's NAG (McConachy and Zukerman, 1996), (Zukerman and McConachy, 1995), (Zukerman et al., 1996) are chiefly concerned with the generation of the structure of an argument: NAG uses ‘reasoning agents’ to select information from a range of knowledge bases (in this it bears resemblance to Haggith’s FORA, though the latter offers a more principled approach to resolving inter-KB conflict). The premises supporting a conclusion form nodes in an argument graph, producing a structure similar to that arrived at by analysis in informal logic (although, as pointed out in (Reed and Long, 1997a), NAG does not distinguish between linked and convergent structures and its expressive power is thus restricted). Further processing then determines an optimum ‘path’ through the argument graph, based on parameters concerning the user’s abilities and the system’s honesty (e.g. whether or not it is appropriate to exploit misconceptions held by the hearer). Finally, NAG determines an appropriate presentation strategy, though the system described in (Zukerman et al., 1996) is limited to a very narrow range of options, with a naive approach to the problems of component ordering and linguistic style, issues examined in (Reed et al., 1996), (Reed and Long, 1997b) and in more detail below. The earlier work of Birnbaum et al. (Birnbaum, 1982), (Flowers et al., 1982) also concentrates on determining the content of argument, and in particular, on identifying and implementing schema-like argument exchanges termed argument molecules; these, however, led to only very primitive linguistic realisation.
Standing in contrast is the work in natural language generation (NLG), where the goal is to produce the text of an argument, rather than to restrict effort to generating the underlying structure. Elhadad (1992), (1995), concentrates on generating arguments comprising a single paragraph, in order to investigate the impact of argumentative ‘orientation’ on lexical choice. In particular, he builds on the distinction between evaluation functions and topoi proposed by Anscombre and Ducrot (1983) - the former determine the force of particular propositions in a particular context; the latter then link these evaluations through a generic form "the more/less X is P, the more/less Y is Q". These topoi relations as construed in Elhadad’s work are similar to the links in Sycara’s belief graphs (Sycara, 1989), (Sycara, 1990) (which form the basis of her PERSUADER argumentation system discussed below) and also to the arcs in qualitative probabilistic networks (QPNs) (Wellman, 1990), used, for example, in Parsons’s argumentation reasoning system, (Parsons, 1996), (Parsons, 1997). These relations, however, seem unable to express the full range of argumentation moves (it is difficult to see, for example, how a categorical syllogism could be expressed accurately in these terms). A different approach to the linguistic realisation of arguments has been adopted by Maybury (1993), which builds on plan-based models of communication (rather than Elhadad’s unification based approach). Maybury proposes abstract plan operators which encode argument strategies (similar to the molecules proposed by (Birnbaum, 1982) ), such as convince-by-cause-and-evidence. Although the general approach appears promising, there are a number of specific problems with Maybury’s system from the viewpoints of both argumentation and NLG. Firstly, Maybury offers an abstract taxonomy of communicative acts, which at its highest level divides argue into deduce, induce, and persuade. 
In light of informal logic research this seems highly implausible - deduction and induction (along with various other forms of non-standard reasoning, some of which Maybury notes) are utilised during the process of persuasion (and also during other, similar processes such as negotiation, information-seeking, and so on). Secondly, Maybury implies that his system employs the NOAH planning architecture (Sacerdoti, 1974), which suffers from inherent inflexibility and an inability to cope with uncertain domains (Reed et al., 1996). Finally, Maybury’s system has no notion of focus or context, and it is not at all clear whether intention is expressly represented, and if so how it is achieved (cf. (Moore and Pollack, 1992)). Despite these shortcomings, (Maybury, 1993) represents the first attempt at plan-based generation of natural language arguments, an approach which seems to offer the most flexible and expressive means of such generation.
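The topoi relations, Sycara's belief-graph links and QPN arcs mentioned above share a simple formal core: a signed link between two graded notions, with the orientation of a chain given by the product of the signs along it. A minimal sketch of this core (the example domain and all names in it are invented for illustration, not drawn from any of the cited systems):

```python
# "the more/less X is P, the more/less Y is Q" modelled as signed links,
# in the spirit of Anscombre and Ducrot's topoi and of QPN arcs.
# +1 means "the more ... the more"; -1 means "the more ... the less".
topoi = [
    ("price", "reluctance_to_buy", +1),    # the dearer, the more reluctant
    ("quality", "reluctance_to_buy", -1),  # the better, the less reluctant
    ("reluctance_to_buy", "sales", -1),    # the more reluctant, the fewer sales
]

def orientation(path):
    """Argumentative orientation of a chain of topoi: the product of the
    link signs, as with chained arcs in a qualitative probabilistic
    network."""
    sign = +1
    for (_, _, s) in path:
        sign *= s
    return sign
```

As noted in the text, such links capture gradable, orientational reasoning well, but cannot express, say, a categorical syllogism.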
Present to varying extents in the work of Maybury, Sycara, Zukerman, etc., is a key omission. It is essential to remember that argument occurs in what might be termed a social context, which encompasses not just the beliefs of the interlocutors, but also their attitudes, psychological susceptibilities, and the relationship holding between them. This social context impinges on the form and content of the argument at least as much as the more formal determinants of argument discussed above. Though small parts of the social context have been addressed in isolation (such as Zukerman’s coarse characterisation of user types, or Sycara’s exploitation of the user’s susceptibility to a small class of fallacies), the social context as a whole has not been recognised or addressed in computational work.
3. The Social Context of Argumentation
Since classical Greece and Rome, it has been recognised that argumentation amounts to more than logical content. For argumentation is put to a use, and an orator must devise "arguments suited to convince, in law court disputes and in debates of public business" (Antonius, in (Billig, 1996), p81). As a result there was a well-developed awareness of the need to be sensitive to the social context in which the argument was delivered – and in particular, to tailor argument to the audience. In this regard, Billig (a contemporary social psychologist) quotes Aristotle: "Aristotle defined the academic aims of rhetoric as being ‘not to persuade but to discover the available means of persuasion in each case’ " (Billig, 1996), p84. It is the ‘available means of persuasion’ which are characterised by the social context, the importance of which is highlighted by Gilbert:
"Argumentation, first and last, is a subspecies of communication, and communication is a complex act that integrates cultural and sub-cultural symbolism, social actors and local context. This means that any given argument or part thereof may be acceptable or appropriate or useful or sensible when used by one set of persons in one place and time, and not acceptable, etc., when any or all of those variables are altered.", (Gilbert, 1995), p127
3.1 Belief modelling
A large part of the persuader’s task is to "have a keen sharpness about the thoughts, feelings, beliefs and hopes of his fellow-citizens" (Cicero in (Billig, 1996), p83). This categorisation is remarkably similar to that of (Doyle, 1988) with beliefs, desires and intentions or (Kiss, 1989) with cognitive, conative and attentive attitudes. To accurately represent these features of the hearer, however, the taxonomy needs to be refined further – there are, for example, several kinds of belief. The taxonomy suggested in (Reed et al., 1997) distinguishes factual beliefs, opinions, and cultural beliefs, based on a similar tripartite distinction proposed by Blair (1838) in his seminal work on rhetoric. Briefly, factual beliefs are either testable (at least in theory) or are definitional (and include beliefs based on sensory experiences); opinions are based on moral and aesthetic judgement and are thus ultimately personal and unprovable (since there is no universally accepted and provably correct aesthetic-moral framework - and it is hard to conceive how there could be); and cultural beliefs are based upon sociocultural maxims (such as that which states that living into old age is desirable). As discussed in (Reed et al., 1997), the class of a belief is an important determinant of how it is expressed and presented in relation to surrounding beliefs.
Another major problem is that introspection generates two seemingly contradictory views of belief: that beliefs are dichotomous, and that they are scalar. A pragmatic resolution of this issue is in itself crucial to competent belief modelling, but it will also affect the approach taken to other problems (factual beliefs are usually dichotomous, whereas opinions and cultural beliefs are generally scalar, for example). The scalar nature of belief in particular is crucial to pin down precisely, for it is implicit in argumentation itself - arguments rely upon concepts such as 'persuasiveness' and 'strength of argument', and the very fact that an argument will often involve several separate and identifiably distinct component sub-arguments suggests that the process involves some scalar value of 'mounting evidence' or 'increasing conviction'. These problems associated with the concept of strength of belief are discussed more fully in (Galliers, 1992).
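One way of sketching both the tripartite taxonomy of (Reed et al., 1997) and the pragmatic dichotomous/scalar resolution suggested above is given below. The field names and the [0, 1] strength scale are assumptions made for illustration; only the distinctions themselves come from the text:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A hearer belief, classed following the factual/opinion/cultural
    taxonomy. Factual beliefs are held dichotomously; opinions and
    cultural beliefs are held to a degree on a (hypothetical) [0, 1]
    scale."""
    proposition: str
    kind: str          # "factual" | "opinion" | "cultural"
    strength: float    # factual: 0.0 or 1.0; otherwise anywhere in [0, 1]

    def __post_init__(self):
        if self.kind == "factual" and self.strength not in (0.0, 1.0):
            raise ValueError("factual beliefs are modelled as dichotomous")
```

Such a representation lets a generator treat the class of a belief as a determinant of how it is expressed, as discussed in (Reed et al., 1997).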
Argument modelling also relies on a treatment of the complex phenomenon of mutual belief: an argument is based upon common ground - a set of mutual beliefs (i.e. beliefs which both parties hold, and which both parties also know the other to hold). Mutual belief is defined in terms of an infinite regress of nested beliefs. That is, A and B mutually believe a proposition, P, if each believes (i) P, (ii) that the other believes P, and (iii) that A and B mutually believe P. The problem is to choose, pragmatically, a level of nesting beyond which 'mutual' belief is to be assumed. In making this choice, it is understood that no matter how many levels a system can cope with, it is always possible to construct a (highly convoluted) example which exceeds the capabilities of that system. From a psychological (and intuitive) point of view, choosing some arbitrary level of nesting by which to define mutuality seems rather implausible. In humans, it would appear that belief nesting is a resource-bounded operation with no known limit: it is possible to construct deeply nested examples which present remarkably little difficulty (such as the example in Fig. 1, below), though handling further complexity rapidly becomes extremely difficult and time consuming. It may be possible to utilise this evidence and allow for a similar process in an implementation, so that some default operator (say, BMB, following (Cohen and Levesque, 1990)) is employed using a naively shallow level of nesting, but, in the light of new evidence, this may be replaced (or supplemented) with a more sophisticated nesting and appropriate operator.
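The resource-bounded reading of mutual belief might be sketched as follows. The belief store, the tuple encoding of nested beliefs, and the default depth of two are all illustrative assumptions; only the idea of unfolding the regress to a shallow bound and assuming mutuality beyond it comes from the text:

```python
def believes(agent, prop, store):
    """store maps each agent to the set of propositions it believes;
    a nested belief is encoded as a tuple ("bel", agent, proposition)."""
    return prop in store.get(agent, set())

def mutual_belief(a, b, prop, store, depth=2):
    """Approximate mutual belief by unfolding the infinite regress only
    `depth` levels; beyond the bound, mutuality is assumed by default
    (the shallow, BMB-style reading)."""
    if depth == 0:
        return True  # resource bound reached: assume mutuality
    return (believes(a, prop, store)
            and believes(b, prop, store)
            and mutual_belief(a, b, ("bel", a, prop), store, depth - 1)
            and mutual_belief(a, b, ("bel", b, prop), store, depth - 1))
```

In the light of evidence that the hearer's nesting differs, the depth parameter could be raised or the default operator replaced, as the text suggests.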
Finally, it is important to identify whose beliefs need modelling. Clearly in a dialogue with two interlocutors (one of which is a machine), it is necessary to model the (assumed) beliefs of the hearer. In the case where there is an audience, the problem becomes more complex. Perelman and Ohlbrechts-Tyteca (1969) distinguish the constructs of particular audience and general audience: the former is a well defined subset of the audience, which might be modelled by a single stereotype; the latter is an artificial construct representing the speaker’s notion of an idealised rational judge – which could then be modelled trivially. The further distinction between particular audience and single hearer (which in the work of Perelman and Ohlbrechts-Tyteca are conflated) is important for characterising situations where both may be present. For it is often not transparently obvious who the intended audience is in any given situation. In the debating chamber, for example, the speaker has one or more opponents to whom she is ostensibly addressing herself; the primary aim of her discourse, however, is to change the beliefs of the nonparticipatory audience. This form of ‘misdirection’ is very common, especially in those examples where a particular position is being attacked. Other permutations are rarer, but one could imagine a scenario in which a monologue was addressed to a general audience or large particular audience and yet the speaker hoped only to influence the beliefs of some particular subset of that audience.
Accurate identification of the audience is the most important component of the social context. Perelman emphasises this point: "For since argumentation aims at securing the adherence of those to whom it is addressed, it is, in its entirety, relative to the audience to be influenced", (Perelman and Ohlbrechts-Tyteca, 1969), p19.
3.2 Non-epistemic facets of the hearer
The hearer or audience stereotype is not, however, just a knowledge base of beliefs. There are a number of other facets of the hearer which need to be included in the definition of an argument’s social context.
The first class of such facets is closely related to the hearer’s beliefs, and, importantly, to how those beliefs are grounded – i.e. how they have been arrived at, and how they are maintained (Reed et al., 1997). Beliefs that are deeply entrenched (Gärdenfors, 1988) in the hearer’s knowledge – i.e. beliefs which if removed would have a massive influence on the remaining belief set – will be much more difficult for a speaker to alter: the hearer holds a bias towards these beliefs. Such bias presents two problems. In the first place, a speaker who intends to argue against hearer bias must ensure that she employs a stronger, more cogent set of counterarguments and supports than would normally be required. In addition, there are also secondary effects: if a speaker were to embark upon an argument against a deeply entrenched belief, this may engender a sceptical reaction, which then prejudices the hearer against further arguments. The notion of hearer scepticism is an important factor in argument construction - if the speaker is aware that during all or part of her argument the hearer is maintaining high levels of scepticism, she must be much more diligent in the construction of that argument. As discussed in more detail below, this loose, intuitive notion of diligence can be shown to be amenable to a computational reading. By way of example of the effect of scepticism on argument structure, it is interesting to note that an assumption of scepticism often leads to a thin-end-of-the-wedge argument – Blair, for example, notes that
"… the orator conceals his intention concerning the point he is to prove, till he has gradually brought his hearers to the designed conclusion. They are led on, step by step, from one known truth to another, till the conclusion be stolen upon them, as the natural consequence of a chain of propositions… It is a very artful method of reasoning; may be carried on with much beauty, and is proper to be used when hearers are much prejudiced against any truth, and by imperceptible steps must be led to conviction." (Blair, 1838), p429.
The technical and general competence of the hearer are also important parameters to be considered at the outset. General competence determines the hearer's ability to understand complex argumentation (and to some extent, complex grammar); technical competence enables the argument to be pitched at the right level, and affects the choice of appropriate vocabulary. Relatedly, structural limits to various aspects of argumentation, such as the maximum number of subarguments contributing to a conclusion, and the length of each, are in part determined by the capabilities of the hearer. Blair, again, emphasises this point:
"…against extending arguments too far, and multiplying them too much. This serves rather to render a cause suspected, than to give it weight. An unnecessary multiplicity of arguments both burdens the memory and detracts from the weight of that conviction which a few well-chosen arguments carry." (Blair, 1838), p432.
The investment that the interlocutors have in the outcome of an argument also heavily affects both its reception and, hence, its construction. Thus if the speaker is perceived by the hearer to have significant potential gain in winning an argument – or potential loss in losing – (regardless, of course, of whether or not the perception is accurate), the hearer may be more sceptical. In addition, if the speaker in fact has a significant investment in winning an argument, this would clearly also lead to more diligent argumentation.
Finally, assuming the hearer to be human, there is a wide range of generic psychological aspects of his makeup which can be exploited in generating effective argument. It is these susceptibilities to which Billig is referring when he claims that "modern social psychology has set itself the task of translating into actuality Plato’s dream of a complete science of persuasion" (Billig, 1996), p84. A complete survey of these psychological components is far beyond the scope of this paper; in the remainder of this section a summary is presented of the key features addressed or utilised by rhetoric, and which have shown themselves to be suitable for computational modelling in recent research.
The ‘irrational’, ‘non-logical’ means by which features of an argument can have a persuasive effect can be categorised by the level at which they are manifest in the text. At the lowest levels, vocabulary choice and syntactic arrangement play an important role. Sandell (1977) offers a review of these features, including work demonstrating (i) the adverse effects on comprehension and subsequent acceptance of messages involving "difficult content words" or a high frequency of exceptional words; (ii) that the type of verb employed in a generalisation is correlated with the likelihood of acceptance (and more generally, that the verb plays a particularly important role in persuasive effect); (iii) that simple syntactic structure aids comprehension and retention (though with a caveat on the beneficial effect of rhythmicity). Sandell goes on to detail studies which display correlation between a variety of syntactic stylistic variables (such as the proportion of adjectives, of nouns and of elliptical constructions) and various levels of ‘acceptance’ (viz. comprehension, retention, message acceptance and content acceptance).
Marcu (1996) summarises a number of other, quite specific, features at the lexical and clause level which can influence the reception of an argument – the use of particular qualitative adjectives (rather than expressing probabilities), elimination of modifiers expressing uncertainty, use of specific rather than abstract terms, and the introduction of repetition (including the fact that "Contrary to NLG wisdom, a system capable of generating persuasive text will also have to generate information that is known to the audience", p44). Marcu also indicates the importance of the ‘stages of change’ model (McGuire, 1969) of how hearer attitudes are subject to alteration, and the impact the model has on theories of persuasive communication; the hearer’s current stage of attitude change should thus also form part of the social context.
Results in experimental psychology have also demonstrated human susceptibility to features of argument operating at higher levels of abstraction. The foremost of these features is the effect of ordering: both within a single persuasive monologue (i.e. how premises and conclusions are arranged in their hierarchical structure) and across turns in an extended argumentative dialogue. Early work (Lund, 1925), (Hovland et al., 1957) investigates the potential for primacy effects (that speaking first should positively influence an argument’s reception) and recency effects (that speaking last should be advantageous). McGuire (1957) and Janis and Feierabend (1957) examine how persuasive effect can be enhanced simply by placing first arguments which are more acceptable to the hearer or which are phrased as ‘pro’ arguments (rather than ‘con’). Lastly, McGuire (McGuire, 1969) discusses means of ordering subarguments on the basis of their perceived strength (generally, that climax ordering - with arguments increasing in strength - is the optimal arrangement).
Given Billig’s comments above, it is quite unsurprising to find that handbooks of rhetoric such as (Blair, 1838) have long offered remarkably precise dicta for optimal arrangement of components which accord well with the limited psychological evidence. A number of Blair’s heuristics are discussed in (Reed et al., 1996) and (Reed et al., 1997b), but in summary, he suggests that arguments should increase in strength, should be collected by type, should be grouped if weak, or emphasised if strong, and so on.
3.3 Relationship between speaker and audience
One vitally important parameter affecting the generation of an argument is the relationship which the speaker wishes to create or maintain with the hearer. This relationship is established through predominantly stylistic rather than structural means, and is not necessarily divorced from other aims: if a hearer accepts the speaker's authoritative stance, for example, the speaker may be able to use the relationship to reinforce his statements. (Consider, by way of example, two differing relationship stances taken by automatic advice-givers. The Smoking Letters project (Reiter et al., 1997) offers advice on how to give up smoking, tailored to the individual; the letters are seen by patients to originate from the GP’s office. As a result, the advice offered carries with it the weight of authority, and this can be called upon in structuring both the form and content of the argument. Though not strictly argumentative, many on-line help systems are using more intelligent approaches, recognising what users can and cannot do and tailoring their advice appropriately. However, such systems are increasingly adopting a relationship of ‘friend by your side’ rather than the traditional ‘computer expert’, which can be intimidating to novices. Adopting this approach necessitates careful use of language to avoid a situation in which users simply do not respect the advice offered.)
As well as offering opportunities for argumentative form and content, the relationship also imposes constraints on the argumentation process, which is by its very nature conflict-based; this conflict can threaten any existing or incipient relationship. These constraints can be analysed in terms of a distinction between task goals, which specify a participant’s direct aims in the discourse (e.g. to convince an opponent that a particular proposition is true), and face goals, which specify the limits of appropriate behaviour, including maintaining one's own ‘face’ and respecting that of the interlocutor (Gilbert, 1996; Tracy, 1990; Waldron et al., 1990). Indeed, the role of facework is of crucial importance in argumentation, where conflict is almost unavoidable. As discussed in (O’Keefe, 1995), there are several means of managing face threats in conflict situations, including toning the threat down and offering redress; these various methods relate to levels of ‘politeness’ in discourse, an issue which has received increased attention since the seminal work of Brown and Levinson (1987).
In addition to the sociological complexity of characterising face goals, task goals present an equally complex challenge, because in natural argument it is often unclear exactly what set of task goals is motivating the discourse (frequently because to reveal some or all of one’s task goals may contravene other goals of face or task). There is a wide range of task goals: although the most common is perhaps to persuade a hearer of a proposition (and this is the usual characterisation adopted in computational research), argument is also used to dissuade, shed doubt, confuse, confound, and deceive. Often, a speaker may hope to persuade, but would settle for simply altering the audience’s certainty in their belief (either increasing their belief in the thesis or decreasing their objection to it). Furthermore, even altering levels of belief is too narrow a characterisation of the aims of arguments, which, as Vorobej points out, often "aim principally to alter behaviour, generate enthusiasm, or create feelings of various sorts (guilt, pleasure, solidarity), rather than alter beliefs." (Vorobej, 1997), p2. Each of these groups of task goals (altering belief, inducing behaviour, and stimulating emotion) - both individually and in combination - is associated with characteristic reasoning patterns and stylistic constructions in language.
3.4 Situational and modal components of the social context
The term ‘context’ is widely used throughout artificial intelligence, generally to refer to a diverse range of more or less clearly defined constructs. In this work, the social context of argumentation is seen as encompassing the most common intuitions of the role of context in discourse. In particular, notions such as Sperber and Wilson’s (1986) mutual cognitive environment are an important component of the social context for the resolution of deictic reference, and for other types of disambiguation through relevance. Similarly, Simons’s (1976) discussion of the situational context includes the history and sociocultural norms inferable from the situation (he cites as an example the difference between reading a Hamlet soliloquy from the page, and seeing the same performed in the context of the play as a whole and in the physical setting of a darkened chamber, etc.).
Finally, the modality in which the argument is presented will have a very great impact on the construction process. Thus verbal oration will be organised with clearer indications of structure, a greater use of repetition and summary, and lower limits on argument complexity. There may also be absolute physical limits on the amount of time available for presentation. If argument is to be presented textually rather than orally, additional problems of formatting, layout and graphical arrangement become important. Again, column-inches or page-limits may impose strict restrictions on length. If argument is to be delivered in a dialogic rather than monologic situation (whether orally in debate or textually in letters to the editor, for example), numerous additional factors come to the fore: turn length, level of preplanning, means of addressing previous issues, arrangement of turntaking, etc.– these are discussed in (Reed and Long, 1997c).
In building a system capable of generating persuasive text which is both sound and effective, it is clear that it is not only the structural aspects of argumentation which must be formalised, but also the much more loosely defined problems in the social context of that argumentation. It is this second task which is examined in this section.
The various extra-epistemic facets of the hearer (bias, scepticism, competence, etc.) and the interpersonal situation (the relationship between speaker and hearer, the vested interests, etc.) can be modelled reasonably well using crude parameters which impact the generation process in much the same way as system-wide style parameters, such as those used in the Pauline system (Hovy, 1990) and in the work of DiMarco et al. (Green and DiMarco, 1996). As mentioned above, particular combinations of these factors may demand ‘extra diligence’ in constructing an argument. The approach taken in (Reed and Long, 1997b) to this problem is to employ the concept of resource allocation offered by the underlying abstraction-based hierarchical planner AbNLP (Fox and Long, 1995), whereby limited computational resources are divided amongst particular parts of a subproblem at an abstract level. Thus a subargument involving a counter to a proposition known by the speaker to be deeply entrenched in the hearer’s beliefs may be allocated greater resources by the planning process, enabling a larger argument to be constructed (or, equally, the effects of an argument to be calculated more precisely; analysing such effects on a given hearer model is a computationally expensive task which, like the planning itself, is a resource-bounded operation). In addition to the use of resource allocation, it is also possible to alter the way in which arguments ‘bottom out’. As discussed in (Reed and Long, 1997c), there are a limited number of ways in which a line of reasoning may terminate.
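The resource-allocation idea can be illustrated with a minimal sketch. This is not the AbNLP mechanism itself: the `Subargument` record, the `entrenchment` score and the proportional division are all illustrative assumptions, intended only to show how a fixed planning budget might be divided so that counters to deeply entrenched beliefs receive a larger share.

```python
from dataclasses import dataclass

@dataclass
class Subargument:
    conclusion: str
    entrenchment: float  # speaker's estimate, in [0, 1], of how deeply
                         # entrenched the attacked belief is in the hearer

def allocate(budget: float, subargs: list) -> dict:
    """Divide a fixed planning budget in proportion to entrenchment."""
    total = sum(s.entrenchment for s in subargs) or 1.0
    return {s.conclusion: budget * s.entrenchment / total for s in subargs}

subargs = [Subargument("smoking is harmful", 0.2),
           Subargument("you should stop smoking", 0.8)]
shares = allocate(100.0, subargs)
# The more contested conclusion receives the larger share of resources,
# permitting a larger subargument (or more precise effect analysis) there.
```

On this scheme the hotly contested conclusion receives four times the planning resources of the relatively uncontroversial one; any monotone weighting would serve equally well for the sketch.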
Premises are supported by subarguments, whose premises are in turn supported by subarguments, and so on until basic premises are reached which fulfil one of three conditions: (i) the speaker believes them and has no further information available with which to support them; (ii) the speaker believes the hearer believes them (irrespective of whether the speaker herself believes them); (iii) the speaker believes the hearer will accept them without further argumentation (even though, as far as the speaker’s model of the hearer goes, he does not currently believe them). In arguments where a greater level of diligence is specified, each of these three conditions may alter slightly. In the first, the speaker may choose to avoid lines of reasoning which she cannot substantiate with beliefs she knows the hearer to hold. In the second, if the speaker represents (either qualitatively or quantitatively) the strength of her beliefs, she may choose to avoid arguments founded upon beliefs which she is unsure the hearer holds. In the third, she may decide to raise the threshold above which she assumes the hearer will accept unsupported premises. Between these two techniques of resource allocation and threshold manipulation, it is possible to characterise significant aspects of the intuitive notion of arguments requiring more careful, more diligent construction in certain circumstances.
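The three termination conditions, and their tightening under diligence, can be sketched as a single predicate. The belief dictionaries, the numeric threshold values and the function name are all illustrative assumptions rather than details of the cited system; the point is only that diligence tightens each condition in the way described above.

```python
def premise_is_basic(prop, speaker_beliefs, hearer_beliefs, acceptance,
                     support, diligent=False):
    """Decide whether `prop` may terminate a line of reasoning.

    speaker_beliefs / hearer_beliefs: belief strengths in [0, 1] (the
    latter from the speaker's model of the hearer); acceptance: estimated
    chance the hearer accepts `prop` unsupported; support: available
    supporting subarguments. All values here are illustrative.
    """
    threshold = 0.8 if diligent else 0.5   # (iii): diligence raises the bar
    min_belief = 0.7 if diligent else 0.0  # (ii): diligence avoids shaky beliefs
    # (i) the speaker believes it and has no further support available;
    # under diligence, only if the hearer is also believed to hold it
    if speaker_beliefs.get(prop, 0) > 0 and not support.get(prop):
        if not diligent or hearer_beliefs.get(prop, 0) > 0:
            return True
    # (ii) the speaker believes (her model of) the hearer believes it
    if hearer_beliefs.get(prop, 0) > min_belief:
        return True
    # (iii) the speaker expects the hearer to accept it unsupported
    return acceptance.get(prop, 0) > threshold
```

A premise terminating under condition (ii) with hearer belief 0.4, for instance, remains basic under normal construction but fails the diligent test, forcing the planner to seek further support.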
In addition to the hearer model and interpersonal aspects of the social context, the psychological susceptibilities – an awareness of which becomes manifest at various levels in the text – are a fertile ground for computational investigation. Marcu’s (1996) analysis of the lower level features of persuasive arguments is pessimistic with regard to the ability of current formalisms to handle the required heuristics in a principled way. The work presented in (Reed and Long, 1997b) goes some way to providing a framework within which his features might be characterised. In addition, Sandell’s (1977) work on the relationship between linguistic style and persuasive effect also offers a number of interesting avenues for computational characterisation of this part of the social context. The effects of lexical choice and, to a lesser extent, of syntactic complexity, fit well into the framework for realisation at this level proposed by Meteer (1991) and extended in a number of important respects by Panaget (1994). In this work, the notion of abstract linguistic resource offers a means of planning lexical realisation without arbitrarily delineating between syntactic, lexical, and morphological features, thus bridging the generation gap (Meteer, 1991). Sandell’s analysis involves components at exactly this level, and could thus be viewed as constraints or heuristic guidelines on the planning process. Though Sandell’s work also demonstrated a number of complex interactions between the stylistic variables he identified, broader heuristics could be devised which, at least in part, manage to capture these relationships.
Generating appropriate content ordering has been explored in a generation system which draws upon insights from argumentation theory, employing a propositional analysis of argument structure and a hierarchical notion of ‘argument’ whereby the conclusion of a subargument may stand as a premise in a superargument (Reed and Long, 1997b). In this work, ordering between premises is seen as a distinct problem from the ordering between premises and their associated conclusion (a necessary consequence of the hierarchical definition of argument structure). For each type of ordering, however, a similar set of heuristics is available to guide the ordering process. These heuristics are divided into two types, the first of which takes precedence, ceteris paribus. The initial structure generated is minimally coherent, but is then subject to reorderings first to improve coherency (such as avoiding large subarguments between a premise and its conclusion), and subsequently to improve persuasive effect. The former draw upon the work of Cohen (1987), who examines the effect on coherency of reordering parts of a small argument. The latter follow the work in social psychology and rhetoric of McGuire (1957; 1969) and Blair (1838), mentioned above. Indeed, one surprising feature of working on computational models of rhetoric is the ease with which texts such as (Blair, 1838) can be translated into formal, implementable heuristics (consider, for example, the two quotes from Blair given above in Section 3.2).
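The two-tier precedence of these heuristics can be sketched as a single sort over a hypothetical subargument record (the `size` and `strength` fields are illustrative assumptions, not the cited system's representation): coherency heuristics supply the primary key, and climax ordering (weakest first) breaks ties.

```python
def order_premises(subargs):
    # Tier 1 (takes precedence): place large subarguments early, so that
    # no long span intervenes between a premise and its conclusion
    # (after Cohen, 1987). Tier 2, ceteris paribus: climax ordering,
    # increasing in persuasive strength (McGuire, 1969; Blair, 1838).
    return sorted(subargs, key=lambda s: (-s["size"], s["strength"]))

premises = [{"id": "a", "size": 1, "strength": 0.9},
            {"id": "b", "size": 3, "strength": 0.2},
            {"id": "c", "size": 1, "strength": 0.4}]
ordering = [s["id"] for s in order_premises(premises)]
# The large subargument "b" is fronted for coherency; the two small
# subarguments then appear in climax order, "c" before "a".
```

Encoding each tier as one component of a composite sort key is one simple way of realising the ceteris paribus precedence; a real planner would interleave these decisions with the rest of the planning process.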
In order to quantify the notion of ‘strength’ used in part to control ordering in both rhetoric and social psychology, the work adopts Freeman’s (1991) distinction between inferential force and persuasive force: the former is purely a matter of determining the validity of the inference, whilst the latter is assessed with full reference to the social context, integrating data from the model of the hearer’s beliefs, from the various parameterisations of competence, bias and scepticism, and from the interlocutors’ relationship and investment, to assess how the inference is likely to be accepted by the hearer.
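One way this distinction might be operationalised is sketched below: inferential force is a property of the inference alone, while persuasive force weights it by parameters drawn from the hearer model. The multiplicative combination and the parameter names are illustrative assumptions, not a formula from Freeman (1991) or from Reed and Long (1997b).

```python
def persuasive_force(inferential, hearer_belief_in_premises,
                     bias, scepticism):
    """All parameters lie in [0, 1]; bias measures predisposition
    towards the speaker's position (0.5 being neutral)."""
    # Scepticism discounts the inference itself; bias shifts the
    # reception of the claim independently of the inference's validity.
    raw = (inferential * (1 - scepticism)
           * hearer_belief_in_premises * (0.5 + bias))
    return max(0.0, min(1.0, raw))  # clamp back into [0, 1]
```

On this sketch, a valid inference (high inferential force) presented to a wholly sceptical hearer yields zero persuasive force, capturing the intuition that the two notions of strength can diverge arbitrarily.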
This paper has surveyed a number of the major argumentation-based modelling techniques developed in artificial intelligence, and has demonstrated the problem facing such systems when communicating their reasoning to humans - namely that the formal structure is at once underspecified and excessively restrictive with respect to the linguistic form required for communication.
In order to address this problem, it becomes necessary to adduce research not only from linguistics but also from social psychology and rhetoric (though the fact that both social psychology and rhetoric are important is to be expected in the light of Billig’s (1996) claims of their topical - if not chronological - overlap). The development of a rich user model which details not only the beliefs, attitudes and desires of the hearer, but also a range of other features of his cognitive disposition (all of which may change dynamically during the discourse) is crucial. Finally, the physical and interpersonal setting for the discourse must also be characterised and integrated with the other components. Only with such a rich characterisation of the social context can a natural language generation system fully exploit the flexibility of the plan-based architecture in producing persuasive text.
The main areas in which computational research has seriously addressed these issues have been surveyed, and perhaps the single most striking feature of the survey is its brevity. In stark contrast, there exists a strong tradition in social psychology investigating how communicative devices impact attitude change, and rhetoric has potentially even more to contribute, from the classics of Quintilian and Aristotle to more recent encyclopaedic offerings from Blair and Whately. Given this vast catalogue of source material, and the increasing interest in the generation of natural language text in non-trivial human-computer situations, the formalisation of rhetorical maxims and wisdom is an area which is likely to attract a significant research effort. Such computational study of rhetoric is a crucial prerequisite for the design of systems which are to automatically produce engaging and effective argument.
The author gratefully acknowledges the thoughtful comments offered by Cathy Hawes, Aspassia Daskalopulu and Nancy Pouloudi on earlier drafts of this paper.
Anscombre, J.C. & Ducrot, O. (1983). Philosophie et langage: L’argumentation dans la langue. Pierre Mardaga, Bruxelles.
Billig, M. (1996). Arguing and Thinking: A Rhetorical Approach to Social Psychology, 2nd Edition, Cambridge University Press, Cambridge, UK.
Birnbaum, L. (1982). Argument Molecules: A Functional Representation of Argument Structure. In Proceedings of the 2nd National Conference on Artificial Intelligence (AAAI-82), AAAI, Pittsburgh, PA, pp63-65.
Blair, H. (1838). Lectures on Rhetoric and Belles Lettres, Charles Daly, London.
Brown, P. & Levinson, S.C. (1987). Politeness: Some universals in language usage, Cambridge University Press, Cambridge.
Cavalli-Sforza, V., Lesgold, A.M. and Weiner, A.W. (1992). Strategies for Contributing to Collaborative Arguments. In Proceedings of the 14th Conference of the Cognitive Science Society, pp755-760.
Cavalli-Sforza, V., Moore, J.D. and Suthers, D.D. (1993). Helping Students Articulate, Support, and Criticize Scientific Explanations. In Proceedings of the World Conference on Artificial Intelligence in Education, pp113-120.
Cohen, R. (1987). Analyzing the Structure of Argumentative Discourse. Computational Linguistics 13 (1), pp11-24.
Cohen, P.R. & Levesque, H.J. (1990). Rational Interaction as the Basis for Communication. In Cohen, P.R., Morgan, J. & Pollack, M.E., (eds), Intentions in Communication, MIT Press, Boston, pp221-255.
Das, S., Fox, J. & Krause, P. (1996). A Unified Framework for Hypothetical and Practical Reasoning (1): Theoretical Foundations. In Gabbay, D. & Ohlbach, H.J. Practical Reasoning, Springer Verlag, Berlin pp58-72.
Daskalopulu A. & Sergot M. J. (1995). A Constraint-Driven System for Contract Assembly. In Proceedings of the 5th International Conference on Artificial Intelligence and Law, University of Maryland, College Park, May 21-24, ACM Press 1995, pp62-69.
Doyle, J. (1988). Artificial Intelligence and rational self-government Report CMU-CS-88-124. Carnegie Mellon University, Pittsburgh.
Eemeren, F.H. van, Grootendorst, R. & Snoeck-Henkemans, F. (1996). Fundamentals of Argumentation Theory, Lawrence Erlbaum, Mahwah, NJ.
Elhadad, M. (1995). Using Argumentation in Text Generation. Journal of Pragmatics 24, pp189-220.
Elhadad, M. (1992). Generating Coherent Argument Paragraphs. In Proceedings of the Conference on Computational Linguistics (COLING'92), Nantes, pp638-644.
Elvang-Gøransson, M., Krause, P. & Fox, J. (1993). Dialectic reasoning with inconsistent information. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI'93), pp114-121.
Flowers, M., McGuire, R. & Birnbaum, L. (1982). Adversary Arguments and the Logic of Personal Attacks. In Lehnert, W.G., Ringle, M.H., (eds), Strategies for Natural Language Processing, Lawrence Erlbaum Associates, pp275-294.
Fox, J. & Das, S. (1996). A unified framework for hypothetical and practical reasoning (2): lessons from medical applications. In Gabbay, D., Ohlbach, H.J., (eds), Practical Reasoning, Springer Verlag, Berlin pp73-92.
Fox, M. & Long, D. (1995). Hierarchical planning using abstraction. IEE Proc.-Control Theory Appl., 142 (3), pp197-210.
Freeman, J.B. (1991). Dialectics and the Macrostructure of Arguments. Foris, Dordrecht.
Gabbay, D. (1992). LDS – labelled deductive systems. 7th Expanded Draft, Imperial College.
Galliers, J.R. (1992). Autonomous belief revision and communication. In Gardenfors, P., (ed), Belief Revision, Cambridge University Press, Cambridge, pp220-246.
Gärdenfors, P. (1988). Knowledge in Flux, MIT Press.
Geffner, H. (1996). A Formal Framework for Causal Modeling and Argumentation. In Gabbay, D., Ohlbach, H.J., (eds), Practical Reasoning, Springer Verlag, Berlin.
Gilbert, M.A. (1995). Argument and Arguers. Teaching Philosophy 18 (2), pp125-138.
Gilbert, M.A. (1996). Goals in Argumentation. In Gabbay, D., Ohlbach, H.J., (eds), Practical Reasoning, Springer Verlag, Berlin.
Gordon, T.F. (1994). Computational Dialectics. In Proceedings of the Workshop Kooperative Juristische Informationsysteme, GMD Studien Nr. 241, pp25-36.
Gordon, T. & Karacapilidis, N. (1996). The Zeno Argumentation Framework. In Proceedings of the FAPR'96 Workshop on Computational Dialectics, Bonn.
Green, S.J. & DiMarco, C. (1996). Stylistic Decision-Making in Natural Language Generation. In Adorni, G., Zock, M., (eds), Trends in NLG: An AI Perspective - Selected Papers from EWNLG'93, Springer Verlag, pp125-143.
Haggith, M. (1995). A meta-level framework for exploring conflicts in multiple knowledge bases. In Hallam, J., (ed), Hybrid Problems, Hybrid Solutions, IOS Press, pp87-98.
Haggith, M. (1996). A meta-level argumentation framework for representing and reasoning about disagreement, PhD Thesis, University of Edinburgh.
Hovland, C.I., Campbell, E.H. & Brock, T. (1957). The Effects of "Commitment" on Opinion Change Following Communication. In Hovland, C.I., (ed), The Order of Presentation in Persuasion, Yale University Press, New Haven, CT, pp23-32.
Hovy, E.H. (1990). Pragmatics and Natural Language Generation. Artificial Intelligence 43, pp153-197.
Jackson, S.A. (1997). Disputation by Design. In Proceedings of the OSSA Conference on Argument and Rhetoric, (to appear), St. Catharines, Canada.
Janis, I.L. & Feierabend, R.L. (1957). Effects of Alternative Ways of Ordering Pro and Con Arguments in Persuasive Communications. In Hovland, C.I., (ed), The Order of Presentation in Persuasion, Yale University Press, New Haven, CT, pp115-128.
Kiss, G. (1989). Some Aspects of Agent Theory. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI’89), Detroit.
Kowalski, R.A. & Toni, F. (1994). Argument and Reconciliation. In Proceedings of the Legal Reasoning Workshop at the International Symposium on 5th Generation Computer Systems, Tokyo, pp9-16.
Krause, P., Ambler, S., Elvang-Goransson, M., Fox, J. (1995). A Logic of Argumentation for Reasoning under Uncertainty. Computational Intelligence 11 (1), pp113-131.
Loui, R. (1987). Defeat among arguments, Computational Intelligence 3, pp100-106.
Lund, F.H. (1925). The psychology of belief. IV. The law of primacy in persuasion. Journal of Abnormal Social Psychology 20, pp183-91.
Marcu, D. (1996). The Conceptual and Linguistic Facets of Persuasive Arguments. In Working Notes of the ECAI’96 Workshop, Gaps and Bridges: New Directions in Planning and NLG, Budapest, pp43-46.
Matwin, S., Szpakowicz, S., Koperczak, Z., Kersten, G.E & Michalowski, W. (1989). Negoplan: An Expert System Shell for Negotiation Support. IEEE Expert 4 (4), pp50-62.
Maybury, M.T. (1993). Communicative Acts for Generative Natural Language Arguments. In Proceedings of the National Conference on Artificial Intelligence (AAAI-93), AAAI, pp357-364.
McConachy, R. & Zukerman, I. (1996). Using Argument Graphs to Generate Arguments. In Proceedings of the12th European Conference on Artificial Intelligence (ECAI’96), Budapest, pp592-596.
McGuire, W.J. (1957). Order of Presentation as a Factor in "Conditioning" Persuasiveness. In Hovland, C.I., (ed), The Order of Presentation in Persuasion, Yale University Press, New Haven, CT, pp98-114.
McGuire, W.J. (1969). The nature of attitudes and attitude change. In Lindzey, G. & Aronson, E. (eds), The Handbook of Social Psychology, volume 3, Addison Wesley, pp136-314.
Meteer, M.W. (1991). Bridging the Generation Gap between Text Planning and Linguistic Realization. Computational Intelligence 7 (4), pp296-304.
Moore, J.D. & Pollack, M.E. (1992). A Problem for RST: The Need for Multi-Level Discourse Analysis. Computational Linguistics 18 (4), pp537-544.
O'Keefe, B.J. (1995). Identity and Influence in Social Interaction. Argumentation 9, pp785-800.
Panaget, F. (1994). Using a textual representational level component in the context of discourse or dialogue generation. In Proceedings of the 7th International Workshop on Natural Language Generation, Kennebunkport, Maine, pp127-136.
Parsons, S. (1996). Defining Normative Systems for Qualitative Argumentation. In Gabbay, D. & Ohlbach, H.J. Practical Reasoning, Springer Verlag, Berlin pp449-463.
Parsons, S. (1997). Normative Argumentation and Qualitative Probability. In Gabbay, D.M., Kruse, R., Nonnengart, A. & Ohlbach, H.J. (eds), Qualitative and Quantitative Practical Reasoning, Springer Verlag, Berlin.
Perelman, Ch. & Olbrechts-Tyteca, L. (1969). The New Rhetoric. University of Notre Dame Press.
Pilkington, R.M., Hartley, J.R., Hintze, D. & Moore, D.J. (1992). Learning to Argue and Arguing to Learn: An Interface for Computer-based Dialogue Games. Journal of Artificial Intelligence in Education 3 (3), pp275-295.
Pollock, J.L. (1987). Defeasible Reasoning. Cognitive Science 11, pp481-518.
Prakken, H. (1996). Dialectical proof theory for defeasible argumentation with defeasible priorities. In Proceedings of the FAPR'96 Workshop on Computational Dialectics, Bonn.
Prakken, H. & Sartor, G. (1996). A System for Defeasible Argumentation, with Defeasible Priorities. In Gabbay, D., Ohlbach, H.J., (eds), Practical Reasoning, Springer Verlag, Berlin.
Reed, C.A. & Long, D.P. (1997a). Multiple Subarguments in Logic, Argumentation, Rhetoric and Text Generation. In Gabbay, D.M., Kruse, R., Nonnengart, A. & Ohlbach, H.J. (eds), Qualitative and Quantitative Practical Reasoning, Springer Verlag, Berlin.
Reed, C.A. & Long, D.P. (1997b). Content Ordering in the Generation of Persuasive Discourse. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI'97), Nagoya, Japan, pp1024-1030.
Reed, C.A. & Long, D.P. (1997c). Persuasive Monologue. In Proceedings of the OSSA Conference on Argument and Rhetoric, (to appear), St. Catharines, Canada.
Reed, C.A., Long, D.P. & Fox, M. (1996). An Architecture for Argumentative Discourse Planning. In Gabbay, D., Ohlbach, H.J., (eds), Practical Reasoning, Springer Verlag, Berlin, pp555-566.
Reed, C.A., Long, D.P., Fox, M. & Garagnani, M. (1997). Persuasion as a Form of Inter-Agent Negotiation. In Lukose, D., Zhang, C., (eds), Multi-Agent Methodologies, Springer Verlag, Berlin.
Reiter, E., Cawsey, A., Osman, L. & Roff, Y. (1997). Knowledge Acquisition for Content Selection. In Proceedings of the 6th European Workshop on Natural Language Generation (EWNLG'97), Duisburg, pp117-126.
Rittel, H.W. & Webber, M.M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences 4, pp155-169.
Sacerdoti, E. (1974). Planning in a Hierarchy of Abstraction Spaces. Artificial Intelligence 5, pp115-135.
Sandell, R. (1977). Linguistic Style and Persuasion. Academic Press, London.
Sillince, J.A.A. & Minors, R.H. (1992). Argumentation, Self-Inconsistency, and Multidimensional Argument Strength. Communication and Cognition 25 (4), pp325-338.
Simons, H.W. (1976). Persuasion: understanding, practice and analysis. Addison-Wesley, Reading, MA.
Sperber, D. & Wilson, D. (1986). Relevance: Communication and Cognition. Basil Blackwell, Oxford.
Sycara, K.P. (1989). Argumentation: Planning Other Agent's Plans. In Proceedings of the 11th International Joint Conference on Artificial Intelligence (IJCAI'89), Detroit, MI, pp517-523.
Sycara, K. (1990). Persuasive Argumentation in Negotiation. Theory and Decision 28, pp203-242.
Toulmin, S. E. (1958). The Uses of Argument. Cambridge University Press, Cambridge, UK.
Tracy, K. (1990). Multiple Goals in Discourse: An Overview of Issues. Journal of Language and Social Psychology 9 (1-2), pp1-13.
Verheij, B. (1996). Two approaches to dialectical argumentation: Admissible sets and argumentation stages. In Proceedings of the FAPR'96 Workshop on Computational Dialectics, Bonn.
Vorobej, M. (1997). What Exactly is a Persuasive Monologue? In Proceedings of the OSSA Conference on Argument and Rhetoric, (to appear), St. Catharines, Canada.
Vreeswijk, G. (1992). Reasoning with Defeasible Arguments. In Wagner, G., Pearce, D., (eds), Proceedings of the European Workshop on Logics in AI, Springer Verlag, Berlin, pp189-211.
Waldron, V.R., Cegala, D.J., Sharkey, W.F. & Teboul, B. (1990). Cognitive and Tactical Dimensions of Conversation Goal Management. Journal of Language and Social Psychology 9 (1-2), pp101-118.
Walton, D.N. (1992). The place of emotion in argument. Pennsylvania State University Press.
Wellman, M.P. (1990). Formulation of tradeoffs in planning under uncertainty. Pitman, London.
Zukerman, I. & McConachy, R. (1995). Generating Discourse across Several User Models: Maximizing Belief while Avoiding Boredom and Overload. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI'95), pp1251-1257.
Zukerman, I., Korb, K. & McConachy, R. (1996). Perambulations on the way to an Architecture for a Nice Argument Generator. In Working Notes of the ECAI’96 Workshop, Gaps and Bridges:New Directions in Planning & NLG, Budapest, pp31-36.