1 Introduction

Argumentation is a reasoning process in which interacting arguments are built and evaluated. It is widely studied in Artificial Intelligence, notably for reasoning about decision making [5, 8, 22] and for modelling agent interactions [7, 30]. An argumentation-based formalism, or argumentation framework, is generally defined as a set of arguments, attacks amongst the arguments, and a semantics for evaluating the arguments. A semantics assesses to what extent an argument is acceptable. Examples of semantics are those proposed by Dung, which compute extensions of arguments [6, 11, 21, 26, 29, 32], and ranking semantics, which compute the overall strength of each argument [2, 10, 12, 13, 18, 31].

With respect to ranking semantics, Amgoud and Ben-Naim [2] proposed a set of principles for them, which were subsequently refined in [3]. Moreover, new principles were introduced to describe the strategies that a semantics may adopt when it faces a conflict between the quality of attackers and their quantity [4]. The strategies are: i) privileging quality, through the (Quality Precedence) principle; ii) privileging cardinality, through the (Cardinality Precedence) principle; or iii) simply allowing compensation between quality and cardinality, through the (Compensation) principle.

Some limitations of the literature on this subject have been pointed out. First, no semantics satisfied (Cardinality Precedence), which was unfortunate since (Cardinality Precedence) is a viable choice in (multiple-criteria) decision making [17]. Second, only one semantics satisfied (Quality Precedence). Third, several other semantics satisfy (Compensation); however, none of them satisfies all the principles that are compatible with the compensation principle. With that in mind, Amgoud and Ben-Naim [4] provided three new semantics: a max-based, a cardinality-based and a sum-based one. A formal analysis and a thorough comparison with existing semantics showed that the three novel semantics fill the previous gaps and enjoy more desirable properties than existing semantics.

Besides that, another important result is that some of the principles are incompatible: they cannot all be satisfied together by a single semantics. This is particularly the case for (Quality Precedence), (Cardinality Precedence) and (Compensation). From this perspective, in this work we explore further the relation among these three principles and present weaker principles that are intermediate between them. Furthermore, we present two new semantics: one based on t-conorm operators [24], which mixes the max-based and the sum-based semantics; and another based on the notion of cumulative sum [25], which presents a trade-off between the cardinality-based and the sum-based semantics. The novelty is that both semantics satisfy not only all the basic principles, but also weakened versions of two incompatible principles. As far as we know, this is the first work to push forward the frontier of knowledge on the development of semantics aimed at satisfying weakened versions of incompatible principles.

The paper is organized as follows: in Sect. 2, we first recall some basic notions of Argumentation theory, introduce the notation used throughout the paper, and show some examples of semantics and the principles that a semantics may satisfy. In Sect. 3, we then present the contribution of the paper: two new semantics and two new principles, together with a formal analysis and a comparison of existing semantics against these new principles. Finally, in Sect. 4, we conclude the paper.

2 Foundations of Weighted Argumentation Frameworks

2.1 Basic Concepts

A weighted argumentation graph is a set of arguments and an attack relation between them. Each argument has a weight in the interval [0, 1] representing its basic strength (the smaller the weight, the weaker the argument).

Definition 1

(WAG). A weighted argumentation graph (WAG) is an ordered tuple \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), where \(\mathcal {A}\) is a non-empty finite set of arguments, w is a function from \(\mathcal {A}\) to [0, 1], and \(\mathcal {R} \subseteq \mathcal {A} \times \mathcal {A}\).

Intuitively, w(a) is the basic strength of argument a, and \((a, b) \in \mathcal {R}\) (or \(a\mathcal {R}b)\) means argument a attacks argument b.

Example 1

Consider the WAG \(\mathbf {G}\) below, consisting of the arguments \(a, b, c, \dots , k\). For instance, the basic strength of a is \(w(a) = 1\) and the basic strength of d is \(w(d) = 0.6\). Besides that, argument d attacks argument a, which is depicted by a directed edge in the graph.

[Figure: the weighted argumentation graph \(\mathbf {G}\) of Example 1]
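A WAG can be encoded with plain dictionaries. The sketch below is an assumed encoding, not from the paper, and covers only the fragment of Example 1 whose weights are stated in the text (the full graph in the figure has further arguments among \(b, c, \dots , k\) whose weights are not given here):

```python
# Minimal encoding of a WAG as plain dictionaries. Only the part of
# Example 1 whose weights are stated in the text is shown; the rest of
# the graph is omitted, so this fragment is an assumption.
weights = {"a": 1.0, "d": 0.6, "e": 0.3, "i": 0.5}   # w : A -> [0, 1]
attacks = {("d", "a"), ("e", "a"), ("i", "d")}       # R, a subset of A x A

def att(x):
    """Att_G(x): the set of attackers of x."""
    return {b for (b, y) in attacks if y == x}

print(sorted(att("a")))  # ['d', 'e']
```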

Definition 2

(Isomorphism). Let \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) and \(\mathbf {G'} = \langle \mathcal {A'},w', \mathcal {R'} \rangle \) be two WAGs. An isomorphism from \(\mathbf {G}\) to \(\mathbf {G'}\) is a bijective function f from \(\mathcal {A}\) to \(\mathcal {A'}\) such that: i) \(\forall a \in \mathcal {A}, w(a) = w'(f(a))\), ii) \(\forall a, b \in \mathcal {A}, a\mathcal {R}b\) iff \(f(a)\mathcal {R'}f(b)\).

An acceptability semantics is a function assigning a value, called acceptability degree, to every argument in a weighted argumentation graph. This value represents the overall strength of an argument, and results from the aggregation of the basic strength of the argument and the overall strengths of its attackers. The greater this value, the more acceptable the argument. Unlike extension semantics, where arguments are either accepted or rejected, we consider graded semantics, which may assign various acceptability degrees to arguments. Throughout the paper, we consider the scale [0, 1].

Definition 3

(Semantics). A semantics is a function \(\mathbf {S}\) transforming any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) into a vector \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}\) in \([0, 1]^n\), where \(n = |\mathcal {A}|\). For \(a \in \mathcal {A}\), \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a)\) is called acceptability degree of a.

We present next the list of all notations used in the paper. Let \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) be a WAG and \(a \in \mathcal {A}\). \(\text {Att}_\mathbf {G}(a)\) denotes the set of all attackers of a in \(\mathbf {G}\), i.e. \(\text {Att}_\mathbf {G}(a) = \{ b \in \mathcal {A} : b\mathcal {R}a \}\). For \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) and \(\mathbf {G'} = \langle \mathcal {A'},w', \mathcal {R'} \rangle \) such that \(\mathcal {A} \cap \mathcal {A'} = \emptyset \), \(\mathbf {G} \oplus \mathbf {G'}\) is the WAG \(\langle \mathcal {A} \cup \mathcal {A'},w'', \mathcal {R} \cup \mathcal {R'} \rangle \) where for any \(x \in \mathcal {A}\) (resp. \(x \in \mathcal {A'}\)), \(w''(x) = w(x)\) (resp. \(w''(x) = w'(x)\)).

2.2 Examples of Weighted Semantics

The first semantics satisfies quality precedence, thus favoring the quality of attackers over their cardinality. It is based on a scoring function which follows a multi-step process. At each step, the function assigns a score to each argument. In the initial step, the score of an argument is its basic strength. Then, at each subsequent step, the score is recomputed on the basis of the basic strength as well as the score, at the previous step, of the strongest attacker of the argument.

Definition 4

(Weighted Max-Based Semantics [4]). For any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) and \(a \in \mathcal {A}\), \(\text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(a) = \frac{w(a)}{1 + \max _{b \in \text {Att}_\mathbf {G}(a) } \text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(b) }\). By convention, if \(\text {Att}_\mathbf {G}(a) = \emptyset \), \(\max _{b \in \text {Att}_\mathbf {G}(a)} \text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(b) = 0\).

Example 2

Considering the WAG \(\mathbf {G}\) from Example 1, we have that \(\text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(a) = \frac{w(a)}{1 + \max _{b \in \text {Att}_\mathbf {G}(a) } \text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(b) } = \frac{w(a)}{1 + \max \{ \text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(d), \text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(e) \} }\). Since the argument d is attacked by the argument i, we need to compute \(\text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(d) = \frac{w(d)}{1 + \max \{ \text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(i) \} } = \frac{0.6}{1 + 0.5 } = 0.4\) (since there is no argument attacking i, \(\text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(i) = w(i) = 0.5\)). We also have that \(\text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(e) = w(e) = 0.3\). Therefore, \(\text {Deg}^{\mathbf {Mbs}}_{\mathbf {G}}(a) = \frac{1}{1 + \max \{ 0.4, 0.3 \} } = 0.71\).

For a matter of simplicity, in this paper we are considering examples with acyclic graphs, but the weighted semantics are also defined to deal with cyclic graphs. The details and proofs about this issue can be found in [4].
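The recursive computation of Example 2 can be sketched in Python. The dictionary encoding below is an assumption covering only the fragment of Example 1 whose weights appear in the text; since the graph is acyclic, a memoized recursion suffices:

```python
# Sketch of the weighted max-based semantics (Definition 4) on the
# known acyclic fragment of Example 1 (the encoding is an assumption).
weights = {"a": 1.0, "d": 0.6, "e": 0.3, "i": 0.5}
attackers = {"a": ["d", "e"], "d": ["i"], "e": [], "i": []}

def deg_mbs(x, memo=None):
    """Deg(x) = w(x) / (1 + max of the attackers' degrees); by
    convention the max over an empty set of attackers is 0."""
    if memo is None:
        memo = {}
    if x not in memo:
        atts = [deg_mbs(b, memo) for b in attackers[x]]
        memo[x] = weights[x] / (1 + (max(atts) if atts else 0.0))
    return memo[x]

print(round(deg_mbs("d"), 2))  # 0.4
print(round(deg_mbs("a"), 2))  # 0.71
```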

The second semantics, called weighted card-based, favors the number of attackers over their quality. It considers only arguments that have a basic strength greater than 0, called founded. This restriction is due to the fact that unfounded arguments are lifeless and their attacks are ineffective.

Definition 5

(Weighted Card-Based Semantics [4]). Let \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) be a WAG and \(a \in A\). The argument a is founded iff \(w(a) > 0\). It is unfounded otherwise. Let \(\text {AttF}_\mathbf {G}(a)\) denote the set of founded attackers of a. For any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) and \(a \in \mathcal {A}\), \(\text {Deg}^{\mathbf {Cbs}}_{\mathbf {G}}(a) = \frac{w(a)}{1 + |\text {AttF}_\mathbf {G}(a)| + \frac{\sum _{b \in \text {AttF}_\mathbf {G}(a)} \text {Deg}^{\mathbf {Cbs}}_{\mathbf {G}}(b) }{|\text {AttF}_\mathbf {G}(a)|} }\). By convention, if \(\text {Att}_\mathbf {G}(a) = \emptyset \), \(\sum _{b \in \text {AttF}_\mathbf {G}(a)} \text {Deg}^{\mathbf {Cbs}}_{\mathbf {G}}(b) = 0\).

Example 3

From the WAG \(\mathbf {G}\) in Example 1, we have that \(\text {Deg}^{\mathbf {Cbs}}_{\mathbf {G}}(d) = 0.24\), \(\text {Deg}^{\mathbf {Cbs}}_{\mathbf {G}}(e) = 0.3\) and \(\text {Deg}^{\mathbf {Cbs}}_{\mathbf {G}}(a) = 0.3\).
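The values of Example 3 can be reproduced with a sketch of Definition 5, again assuming the dictionary encoding of the known fragment of the graph. All arguments in this fragment have weight greater than 0, so every attacker is founded:

```python
# Sketch of the weighted card-based semantics (Definition 5) on the
# known fragment of Example 1 (the encoding is an assumption).
weights = {"a": 1.0, "d": 0.6, "e": 0.3, "i": 0.5}
attackers = {"a": ["d", "e"], "d": ["i"], "e": [], "i": []}

def deg_cbs(x, memo=None):
    """Deg(x) = w(x) / (1 + |AttF| + mean of the founded attackers'
    degrees); an unattacked argument keeps its basic strength."""
    if memo is None:
        memo = {}
    if x not in memo:
        founded = [b for b in attackers[x] if weights[b] > 0]
        if founded:
            degs = [deg_cbs(b, memo) for b in founded]
            memo[x] = weights[x] / (1 + len(founded) + sum(degs) / len(founded))
        else:
            memo[x] = weights[x]
    return memo[x]

print(round(deg_cbs("d"), 2))  # 0.24
print(round(deg_cbs("a"), 2))  # 0.31, i.e. the 0.3 of Example 3 up to rounding
```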

The third semantics extends h-categorizer, initially proposed by Besnard and Hunter [12] for non-weighted and acyclic graphs. Then, it was extended to account for varying degrees of basic strengths, and any graph structure.

Definition 6

(Weighted h-Categorizer Semantics [4]). For any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) and \(a \in \mathcal {A}\), \(\text {Deg}^{\mathbf {Hbs}}_{\mathbf {G}}(a) = \frac{w(a)}{1 + \sum _{b \in \text {Att}_\mathbf {G}(a)} \text {Deg}^{\mathbf {Hbs}}_{\mathbf {G}}(b) }\). By convention, if \(\text {Att}_\mathbf {G}(a) = \emptyset \), \(\sum _{b \in \text {Att}_\mathbf {G}(a)} \text {Deg}^{\mathbf {Hbs}}_{\mathbf {G}}(b) = 0\).

Example 4

From the WAG \(\mathbf {G}\) in Example 1, we have that \(\text {Deg}^{\mathbf {Hbs}}_{\mathbf {G}}(d) = 0.4\), \(\text {Deg}^{\mathbf {Hbs}}_{\mathbf {G}}(e) = 0.3\) and \(\text {Deg}^{\mathbf {Hbs}}_{\mathbf {G}}(a) = 0.58\).
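The h-categorizer computation of Example 4 follows the same pattern, summing over all attackers instead of taking the maximum (graph encoding assumed as before):

```python
# Sketch of the weighted h-categorizer semantics (Definition 6) on the
# known fragment of Example 1 (the encoding is an assumption).
weights = {"a": 1.0, "d": 0.6, "e": 0.3, "i": 0.5}
attackers = {"a": ["d", "e"], "d": ["i"], "e": [], "i": []}

def deg_hbs(x, memo=None):
    """Deg(x) = w(x) / (1 + sum of the attackers' degrees); the empty
    sum is 0 by convention."""
    if memo is None:
        memo = {}
    if x not in memo:
        s = sum(deg_hbs(b, memo) for b in attackers[x])
        memo[x] = weights[x] / (1 + s)
    return memo[x]

print(round(deg_hbs("d"), 2))  # 0.4
print(round(deg_hbs("a"), 2))  # 0.59, i.e. the 0.58 of Example 4 up to rounding
```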

2.3 Principles and Properties

In the sequel, we present some principles, which are important for i) better understanding semantics, ii) defining reasonable semantics, iii) comparing semantics, and iv) choosing suitable semantics for applications [4].

Fifteen principles were proposed, describing the role and impact of attacks and basic strengths in the evaluation of arguments, and how these two elements are aggregated. The first principle, called (Anonymity), can be found in almost all axiomatic studies. In the argumentation literature, (Anonymity) is called abstraction in [2] and language independence in [9].

Principle 1

(Anonymity). A semantics \(\mathbf {S}\) satisfies anonymity iff for any two WAGs \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) and \(\mathbf {G'} = \langle \mathcal {A'},w', \mathcal {R'} \rangle \), for any isomorphism f from \(\mathbf {G}\) to \(\mathbf {G'}\), the following property holds: \(\forall a \in \mathcal {A}, \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) = \text {Deg}^{\mathbf {S}}_{\mathbf {G'}}(f(a))\).

The second principle, called (Independence), states that the acceptability degree of an argument should be independent of any argument that is not connected to it.

Principle 2

(Independence). A semantics \(\mathbf {S}\) satisfies independence iff for any two WAGs \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) and \(\mathbf {G'} = \langle \mathcal {A'},w', \mathcal {R'} \rangle \) such that \(\mathcal {A} \cap \mathcal {A'} = \emptyset \), the following holds: \(\forall a \in \mathcal {A}, \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) = \text {Deg}^{\mathbf {S}}_{\mathbf {G} \oplus \mathbf {G'}}(a)\).

The next principle states that the acceptability degree of an argument a in a graph can depend on argument b only if there is a path from b to a, i.e., a finite non-empty sequence \(\langle x_1, \dots , x_n \rangle \) such that \(x_1 = b, x_n = a\) and \(\forall i < n, x_i\mathcal {R}x_{i+1}\).

Principle 3

(Directionality). A semantics \(\mathbf {S}\) satisfies directionality iff for any two WAGs \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) and \(\mathbf {G'} = \langle \mathcal {A'},w', \mathcal {R'} \rangle \) such that \(\mathcal {R'} = \mathcal {R} \cup \{ (a,b) \}\), it holds that: \(\forall x \in \mathcal {A}\), if there is no path from b to x, then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x) = \text {Deg}^{\mathbf {S}}_{\mathbf {G'}}(x)\).

The next principle, called (Neutrality), states that an argument, whose acceptability degree is 0, has no impact on the arguments it attacks.

Principle 4

(Neutrality). A semantics \(\mathbf {S}\) satisfies neutrality iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a,b \in \mathcal {A}\), if i) \(w(a) = w(b)\), and ii) \(\text {Att}_\mathbf {G}(b) = \text {Att}_\mathbf {G}(a) \cup \{x\}\) with \(x \in A \backslash \text {Att}_\mathbf {G}(a)\) and \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x) = 0\), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) = \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b)\).

The condition \(w(a) = w(b)\) ensures that the attacks from \(\text {Att}_\mathbf {G}(a)\) have the same effect on both arguments a and b. (Equivalence) principle ensures that the overall strength of an argument depends only on the basic strength of the argument and the overall strengths of its (direct) attackers.

Principle 5

(Equivalence). A semantics \(\mathbf {S}\) satisfies equivalence iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a,b \in \mathcal {A}\), if i) \(w(a) = w(b)\), and ii) there exists a bijective function f from \(\text {Att}_\mathbf {G}(a)\) to \(\text {Att}_\mathbf {G}(b)\) such that \(\forall x \in \text {Att}_\mathbf {G}(a), \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x) = \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(f(x)),\) then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) = \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b)\).

(Maximality) principle states that an unattacked argument receives an acceptability degree equal to its basic strength.

Principle 6

(Maximality). A semantics \(\mathbf {S}\) satisfies maximality iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a \in \mathcal {A}\), if \(\text {Att}_\mathbf {G}(a) = \emptyset \), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) = w(a)\).

The role of attacks is to weaken their targets, as captured by (Weakening): when an argument receives an attack, its overall strength decreases whenever the attacker is “alive”.

Principle 7

(Weakening). A semantics \(\mathbf {S}\) satisfies weakening iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a \in \mathcal {A}\), if i) \(w(a) > 0\), and ii) \(\exists b \in \text {Att}_\mathbf {G}(a)\) such that \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b) > 0\), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) < w(a)\).

(Weakening) leads to strength loss as soon as an argument is attacked by at least one alive attacker. (Counting) principle states that each alive attacker has an impact on the overall strength of the argument. Thus, the more numerous the alive attackers of an argument, the weaker the argument.

Principle 8

(Counting). A semantics \(\mathbf {S}\) satisfies counting iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a,b \in \mathcal {A}\), if i) \(w(a) = w(b)\), ii) \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > 0\), and iii) \(\text {Att}_\mathbf {G}(b) = \text {Att}_\mathbf {G}(a) \cup \{y\}\) with \(y \in \mathcal {A} \backslash \text {Att}_\mathbf {G}(a)\) and \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y) > 0\), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b)\).

(Weakening Soundness) principle goes further than weakening by stating that attacks are the only source of strength loss.

Principle 9

(Weakening Soundness). A semantics \(\mathbf {S}\) satisfies weakening soundness iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a \in A\) such that \(w(a) > 0\), if \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) < w(a)\), then \(\exists b \in \text {Att}_\mathbf {G}(a)\) such that \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b) > 0\).

(Reinforcement) principle states that the stronger the source of an attack, the greater its intensity.

Principle 10

(Reinforcement). A semantics \(\mathbf {S}\) satisfies reinforcement iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a, b \in A\), if i) \(w(a) = w(b)\), ii) \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > 0\) or \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b) > 0\), iii) \(\text {Att}_\mathbf {G}(a) \backslash \text {Att}_\mathbf {G}(b) = \{x\}\), iv) \(\text {Att}_\mathbf {G}(b) \backslash \text {Att}_\mathbf {G}(a) = \{y\}\), and v) \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x)\), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b)\).

(Resilience) principle states that an attack cannot completely kill an argument, i.e., reduce its acceptability degree to 0.

Principle 11

(Resilience). A semantics \(\mathbf {S}\) satisfies resilience iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a \in A\), if \(w(a) > 0\), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > 0\).

(Proportionality) states that the stronger the target of an attack, the weaker its intensity.

Principle 12

(Proportionality). A semantics \(\mathbf {S}\) satisfies proportionality iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a, b \in A\) such that i) \(\text {Att}_\mathbf {G}(a) = \text {Att}_\mathbf {G}(b)\), ii) \(w(a) > w(b)\), and iii) \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > 0\) or \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b) > 0\), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b)\).

The three last principles concern possible choices offered to a semantics when it faces a conflict between the quality and the number of attackers. (Quality Precedence) principle gives more importance to the quality.

Principle 13

(Quality Precedence). A semantics \(\mathbf {S}\) satisfies quality precedence iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a, b \in A\), if i) \(w(a) = w(b)\), ii) \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > 0\), and iii) \(\exists y \in \text {Att}_\mathbf {G}(b)\) such that \(\forall x \in \text {Att}_\mathbf {G}(a), \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x)\), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b)\).

(Cardinality Precedence) principle states that a great number of attackers has more effect on an argument than just a few.

Principle 14

(Cardinality Precedence). A semantics \(\mathbf {S}\) satisfies cardinality precedence iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a, b \in A\), if i) \(w(a) = w(b)\), ii) \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > 0\), and iii) \(|\{y \in \text {Att}_\mathbf {G}(b) : \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y)> 0\}|> |\{x \in \text {Att}_\mathbf {G}(a) : \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x) > 0\}|\), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b)\).

Finally, (Compensation) states that several weak attacks may jointly compensate for the quality of attacks overall.

Principle 15

(Compensation). A semantics \(\mathbf {S}\) satisfies compensation iff there exists a WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), such that for two arguments \(a, b \in A\), i) \(w(a) = w(b)\), ii) \(\text{ Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > 0\), iii) \(|\{x \in \text {Att}_\mathbf {G}(a) : \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x)> 0\}|> |\{y \in \text {Att}_\mathbf {G}(b) : \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y) > 0\}| \), iv) \(\exists y \in \text {Att}_\mathbf {G}(b)\) such that \(\forall x \in \text {Att}_\mathbf {G}(a), \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x)\) and \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) = \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b)\).

The results concerning the compatibility of the principles are stated below.

Proposition 1

[4] The three following properties hold. i) (Quality Precedence), (Cardinality Precedence) and (Compensation) are pairwise incompatible; ii) (Independence), (Directionality), (Equivalence), (Resilience), (Reinforcement), (Maximality) and (Quality Precedence) are incompatible; iii) (Cardinality Precedence) (respectively (Compensation)) is compatible with all principles 1–12.

It was shown that Weighted max-based semantics satisfies (Quality Precedence) as well as all the principles which are compatible with it [4]. It violates, however, (Counting) since by definition, this semantics focuses only on the strongest attacker of an argument, and neglects the remaining attackers.

Theorem 1

[4] Weighted max-based semantics violates (Cardinality Precedence), (Compensation), (Counting) and (Reinforcement). It satisfies all the remaining principles.

Weighted card-based semantics satisfies (Cardinality Precedence) as well as all the principles that are compatible with it.

Theorem 2

[4] Weighted card-based semantics satisfies all the principles except (Quality Precedence) and (Compensation).

Weighted h-categorizer semantics satisfies (Compensation) as well as all the principles that are compatible with it.

Theorem 3

[4] Weighted h-categorizer semantics satisfies all the principles except (Quality Precedence) and (Cardinality Precedence).

3 Refinements of Compensation-Based Semantics

Fuzzy set theory has been shown to be a useful tool to describe situations in which the data are imprecise or vague. Fuzzy sets handle such situations by attributing a degree to which a certain object belongs to a set [15]. An important notion in fuzzy set theory is that of triangular norms and conorms: t-norms and t-conorms are used to define a generalized intersection and union of fuzzy sets [24]. Triangular norms and conorms serve as aggregation operators, which can be used, e.g., for querying databases [20], to compute the resulting degree of confidence of agents [33], in Approximate Reasoning [16], Information Retrieval [14], Neuro-symbolic Learning [19], Machine Learning [1], etc.

Definition 7

(T-conorm [24]). A binary function \(\oplus : [0,1] \times [0,1] \rightarrow [0,1]\) is a t-conorm if it satisfies the following conditions:

  1. \(\oplus \{ a, b\} = \oplus \{ b, a \} \) (Commutativity);

  2. \(\oplus \{ a, \oplus \{ b, c \} \} = \oplus \{ \oplus \{ a,b \}, c \}\) (Associativity);

  3. \(a \le c\) and \(b \le d \Rightarrow \oplus \{ a, b\} \le \oplus \{ c, d\}\) (Monotonicity);

  4. \(\oplus \{ a, 0\} = a\) (Neutral Element).

A t-conorm acts as a disjunction in fuzzy logic or as a union in fuzzy set theory. When one of its arguments is 0, it returns its other argument; when one of its arguments is 1, it returns 1. It is associative and commutative, and it is non-decreasing in each of its arguments. T-conorms are a generalization, for fuzzy logics, of the usual two-valued logical disjunction (or the maximum operator) studied by classical logic. The four basic t-conorms are described below:

Definition 8

(Basic T-conorms [24]). The following are the four basic t-conorms:

  1. Maximum t-conorm: \(\oplus _\mathbf{M }\{x,y \} = \max (x,y)\);

  2. Probabilistic sum t-conorm: \(\oplus _\mathbf{P }\{ x,y \} = x+y - x\cdot y\);

  3. Łukasiewicz t-conorm: \(\oplus _\mathbf{L } \{ x,y \} = \min (x + y,1)\);

  4. Drastic sum t-conorm: \(\oplus _\mathbf{D } \{ x,y \} = \left\{ \begin{array}{ll} 1, & \text {if } (x,y) \in \ ]0,1]\times ]0,1]; \\ \max (x,y), & \text {otherwise}. \end{array} \right. \)

These four basic t-conorms are remarkable for several reasons. The drastic sum \(\oplus _\mathbf{D }\) and the maximum \(\oplus _\mathbf{M }\) are the largest and the smallest t-conorms, respectively (with respect to the pointwise order). The maximum \(\oplus _\mathbf{M }\) is the only t-conorm where each \(x \in [0,1]\) is an idempotent element (recall \(x \in [0,1]\) is called an idempotent element of \(\oplus \) if \(\oplus \{ x,x \} = x\)). The probabilistic sum \(\oplus _\mathbf{P }\) and the Łukasiewicz t-conorm \(\oplus _\mathbf{L }\) are examples of two important subclasses of t-conorms, namely, the classes of strict and nilpotent t-conorms, respectively (more details in [23]).

One way to compare t-conorms is using the notion of strength. Consider two t-conorms \(\oplus _1\) and \(\oplus _2\). If we have \(\oplus _1 \{ x,y \} \le \oplus _2 \{ x,y \}\) for all \(x,y \in [0,1]\), then we say that \(\oplus _1\) is weaker than \(\oplus _2\) or, equivalently, that \(\oplus _2\) is stronger than \(\oplus _1\), and we write in this case \(\oplus _1 \le \oplus _2\). We shall write \(\oplus _1 < \oplus _2\) if \(\oplus _1 \le \oplus _2\) and \(\oplus _1 \not = \oplus _2\). The drastic sum \(\oplus _\mathbf{D }\) is the strongest, and the maximum \(\oplus _\mathbf{M }\) is the weakest t-conorm, i.e., for each t-conorm \(\oplus \) we have \(\oplus _\mathbf{M } \le \oplus \le \oplus _\mathbf{D }\). Among the four basic t-conorms we have the strict inequalities: \(\oplus _\mathbf{M }< \oplus _\mathbf{P }< \oplus _\mathbf{L } < \oplus _\mathbf{D }\).

Example 5

Consider \(x=0.4\) and \(y=0.7\). We have that \(\oplus _\mathbf{M }\{0.4,0.7 \} = 0.7\), \(\oplus _\mathbf{P }\{0.4,0.7 \} = 0.82\), \(\oplus _\mathbf{L }\{0.4,0.7 \} = 1\) and \(\oplus _\mathbf{D }\{0.4,0.7 \} = 1\). Note that the maximum t-conorm disregards all values other than the maximum, while the probabilistic sum t-conorm takes the value of each argument into account. The Łukasiewicz t-conorm follows a similar idea; however, once the sum of the arguments reaches the threshold of 1, which represents total membership, the result is capped at 1 (and all the excess is discarded). The drastic sum t-conorm, as its name suggests, is radical in its decision: if an argument x has partial or total membership (i.e., \(x \in (0,1]\)), the drastic sum of x with any other element of (0, 1] is equal to 1 (any partial membership is transformed into a total membership).
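The four basic t-conorms of Definition 8 are one-liners; the sketch below (an illustration, not from the paper) reproduces the values of Example 5:

```python
# The four basic t-conorms of Definition 8.
def t_m(x, y): return max(x, y)                             # maximum
def t_p(x, y): return x + y - x * y                         # probabilistic sum
def t_l(x, y): return min(x + y, 1.0)                       # Lukasiewicz
def t_d(x, y): return max(x, y) if min(x, y) == 0 else 1.0  # drastic sum

x, y = 0.4, 0.7
print(t_m(x, y))            # 0.7
print(round(t_p(x, y), 2))  # 0.82
print(t_l(x, y))            # 1.0
print(t_d(x, y))            # 1.0
```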

As the t-conorms are a natural generalization of the maximum operator, we can generalize the definition of Weighted Max-Based Semantics to a Weighted t-conorm Semantics.

Definition 9

(Weighted T-conorm Semantics). For any WAG \(\mathbf {G} = \langle \mathcal {A},w\), \( \mathcal {R} \rangle \) and \(a \in \mathcal {A}\), \(\text {Deg}^{\mathbf {\oplus }}_{\mathbf {G}}(a) = \frac{w(a)}{1 + \bigoplus _{b \in \text {Att}_\mathbf {G}(a)} \text {Deg}^{\mathbf {\oplus }}_{\mathbf {G}}(b) }\). By convention, if \(\text {Att}_\mathbf {G}(a) = \emptyset \), \(\bigoplus _{b \in \text {Att}_\mathbf {G}(a)} \text {Deg}^{\mathbf {\oplus }}_{\mathbf {G}}(b) = 0\).

Intuitively, under the probabilistic sum semantics, the acceptability degree of an argument a is measured from its weight together with the pairwise sums of the acceptability degrees of its attackers, each discounted by the product of the degrees involved. This differs from the max-based approach, which considers only the strongest attacker, and also from the sum approach of the h-categorizer semantics. In fact, the probabilistic sum semantics encompasses a bit of both worlds.

The Łukasiewicz semantics is much closer to the h-categorizer semantics; the main difference is that if the sum of the acceptability degrees of the attackers exceeds 1, the weight of the attack is capped at 1 (in the h-categorizer semantics this value can be greater than 1).

The drastic semantics is much less interesting than the two previous ones, since it considers that if an argument is attacked by any argument (or set of arguments) with an acceptability degree greater than 0, the total weight of the attack is always 1, regardless of the values of the acceptability degrees. In practice, this means that when an argument has at least one attacker with degree greater than 0, its acceptability degree is half its basic strength.

Example 6

Considering the WAG \(\mathbf {G}\) from Example 1, we have that \(\text {Deg}^{\mathbf {\oplus }_\mathbf{M }}_{\mathbf {G}}(a) = 0.71\), \(\text {Deg}^{\mathbf {\oplus }_\mathbf{P }}_{\mathbf {G}}(a) = 0.63\), \(\text {Deg}^{\mathbf {\oplus }_\mathbf{L }}_{\mathbf {G}}(a) = 0.58\) and \(\text {Deg}^{\mathbf {\oplus }_\mathbf{D }}_{\mathbf {G}}(a) = 0.5\). As said before, the drastic t-conorm is the strongest t-conorm and maximum t-conorm is the weakest. Consequently, the acceptability degree of an argument is higher for the maximum semantics and lower for the drastic semantics, when compared to the other t-conorms semantics.
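Definition 9 can be sketched by parameterizing the earlier recursion with a t-conorm, folded over the attackers' degrees with 0 as neutral element; the graph encoding is again an assumed fragment of Example 1:

```python
from functools import reduce

# Sketch of the weighted t-conorm semantics (Definition 9) on the
# known fragment of Example 1 (the encoding is an assumption).
weights = {"a": 1.0, "d": 0.6, "e": 0.3, "i": 0.5}
attackers = {"a": ["d", "e"], "d": ["i"], "e": [], "i": []}

def t_p(x, y): return x + y - x * y                         # probabilistic sum
def t_l(x, y): return min(x + y, 1.0)                       # Lukasiewicz
def t_d(x, y): return max(x, y) if min(x, y) == 0 else 1.0  # drastic sum

def deg(x, tconorm, memo=None):
    """Deg(x) = w(x) / (1 + fold of the t-conorm over the attackers'
    degrees); the fold starts at 0, the neutral element."""
    if memo is None:
        memo = {}
    if x not in memo:
        degs = [deg(b, tconorm, memo) for b in attackers[x]]
        memo[x] = weights[x] / (1 + reduce(tconorm, degs, 0.0))
    return memo[x]

print(round(deg("a", max), 2))  # 0.71 (maximum)
print(round(deg("a", t_p), 2))  # 0.63 (probabilistic sum)
print(round(deg("a", t_l), 2))  # 0.59 (Lukasiewicz; 0.58 in Example 6 up to rounding)
print(deg("a", t_d))            # 0.5  (drastic sum)
```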

With respect to the range of values of the acceptability degree for t-conorms semantics, we have the following results.

Proposition 2

For any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) and for any \(a \in \mathcal {A}\), we have that \(\text {Deg}^{\mathbf {\oplus }_\mathbf{M }}_{\mathbf {G}}(a), \text {Deg}^{\mathbf {\oplus }_\mathbf{P }}_{\mathbf {G}}(a), \text {Deg}^{\mathbf {\oplus }_\mathbf{L }}_{\mathbf {G}}(a) \in [\frac{w(a)}{2} ,w(a)]\), and either \(\text {Deg}^{\mathbf {\oplus }_\mathbf{D }}_{\mathbf {G}}(a) = \frac{w(a)}{2}\) or \(\text {Deg}^{\mathbf {\oplus }_\mathbf{D }}_{\mathbf {G}}(a) = w(a)\).

Next, we propose a principle weaker than (Quality Precedence), named (Quality Compensation). The idea of this principle is, in a conflict between the attacks on two arguments, to prioritize the quality of the attacks whenever the overall sums of the attacks on these arguments are equal.

Principle 16

(Quality Compensation). A semantics \(\mathbf {S}\) satisfies quality compensation iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a, b \in A\), if i) \(w(a) = w(b)\), ii) \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > 0\), iii) \(\sum _{x \in \text {Att}_\mathbf {G}(a)} \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x) = \sum _{y \in \text {Att}_\mathbf {G}(b)} \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y) \), and iv) \(\exists y \in \text {Att}_\mathbf {G}(b)\) such that \(\forall x \in \text {Att}_\mathbf {G}(a), \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x)\), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b)\).

This principle weakens (Quality Precedence) by adding condition iii) \(\sum _{x \in \text {Att}_\mathbf {G}(a)} \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x) = \sum _{y \in \text {Att}_\mathbf {G}(b)} \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y) \), which requires the sums of the acceptability degrees of the attackers of the two arguments to be equal. Therefore, if a semantics satisfies (Quality Precedence), it also satisfies (Quality Compensation). Regarding the weighted t-conorm semantics, we have the following results.
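Premises i)–iv) of (Quality Compensation) can be transcribed directly into code. The sketch below is a hypothetical helper (the function name, dictionary encoding and tolerance `eps` are our assumptions), checked against the degrees reported in Example 7, where we assume w(a) = w(b) = 1.

```python
def quality_compensation_premises(w, att, deg, a, b, eps=1e-9):
    """Check premises i)-iv) of (Quality Compensation) for arguments a, b.

    w: weights, att: attackers of each argument, deg: acceptability degrees.
    Hypothetical helper; names and the eps tolerance are assumptions."""
    same_weight = w[a] == w[b]                                    # i)
    positive = deg[a] > 0                                         # ii)
    equal_sums = abs(sum(deg[x] for x in att.get(a, [])) -
                     sum(deg[y] for y in att.get(b, []))) < eps   # iii)
    # iv) some attacker of b strictly dominates every attacker of a
    dominated = any(all(deg[y] > deg[x] for x in att.get(a, []))
                    for y in att.get(b, []))
    return same_weight and positive and equal_sums and dominated

# Degrees quoted in Example 7 (weights of a and b are equal; we assume 1).
w = {"a": 1.0, "b": 1.0}
att = {"a": ["d", "e"], "b": ["e", "f", "g"]}
deg = {"a": 0.63, "b": 0.64, "d": 0.4, "e": 0.3, "f": 0.34, "g": 0.06}

# Argument a has the dominating attacker d, so the principle applies with
# the roles of a and b swapped and forces Deg(b) > Deg(a).
assert quality_compensation_premises(w, att, deg, "b", "a")
assert deg["b"] > deg["a"]
```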

Theorem 4

Considering the weighted t-conorm semantics:

  1. Weighted Drastic t-conorm semantics violates only (Counting), (Reinforcement), (Quality Precedence), (Cardinality Precedence), (Compensation) and (Quality Compensation).

  2. Weighted Łukasiewicz t-conorm semantics violates only (Quality Precedence), (Cardinality Precedence) and (Quality Compensation).

  3. Weighted Probabilistic sum t-conorm semantics violates only (Quality Precedence) and (Cardinality Precedence).

Weighted drastic t-conorm semantics behaves similarly to weighted max-based semantics, except that it satisfies neither (Quality Precedence) nor (Quality Compensation). Although they are t-conorms, weighted Łukasiewicz and weighted probabilistic sum semantics go in a direction different from weighted max-based semantics and satisfy (Compensation), along with all of principles 1–12. In particular, weighted probabilistic sum also satisfies (Quality Compensation), a weaker version of (Quality Precedence); that is, it strikes a balance between compensation and quality in its decisions.

Example 7

From the WAG \(\mathbf {G}\) in Example 1, we have that \(w(a) = w(b)\) and the sums of the acceptability degrees (for the \(\mathbf {\oplus }_\mathbf{P }\) semantics) of the attackers of a and b are, respectively, \(\text {Deg}^{\mathbf {\oplus }_\mathbf{P }}_{\mathbf {G}}(d) + \text {Deg}^{\mathbf {\oplus }_\mathbf{P }}_{\mathbf {G}}(e) = 0.4 + 0.3 = 0.7\) and \(\text {Deg}^{\mathbf {\oplus }_\mathbf{P }}_{\mathbf {G}}(e) + \text {Deg}^{\mathbf {\oplus }_\mathbf{P }}_{\mathbf {G}}(f) + \text {Deg}^{\mathbf {\oplus }_\mathbf{P }}_{\mathbf {G}}(g) = 0.3 + 0.34 + 0.06 = 0.7\). However, \(\text {Deg}^{\mathbf {\oplus }_\mathbf{P }}_{\mathbf {G}}(a) = 0.63 < 0.64 = \text {Deg}^{\mathbf {\oplus }_\mathbf{P }}_{\mathbf {G}}(b),\) since the argument a has the attacker with the highest acceptability degree (\(\text {Deg}^{\mathbf {\oplus }_\mathbf{P }}_{\mathbf {G}}(d) = 0.4\)). We can see that quality becomes decisive when the compared sums are indistinguishable; otherwise, the semantics behaves like a compensating semantics.

The next semantics introduced in this paper is the Weighted CS-Based Semantics, based on the notion of cumulative sum. This operator has been applied to outlier detection [25], i.e., identifying rare items, events or observations that raise suspicion by differing significantly from the majority of the analyzed data. It is also studied in economics through the notion of the Lorenz curve [27], which is most often used to represent economic inequality but can also illustrate unequal distribution in any system. Formally, a cumulative sum can be defined as follows.

Definition 10

(Cumulative Sum). Consider the vectors \(L = (x_1, \dots , x_n)\) and \(L' = (x_{\sigma (1)}, \dots , x_{\sigma (n)})\), where \(\sigma \) is the permutation of \(\{ 1, \dots , n \}\) sorting the \(x_i\) in descending order. We define the vector of accumulated sums \(AS_L = (AS^1_{L'}, \dots , AS^n_{L'})\), where \(AS^i_{L'} = \displaystyle \sum ^{i}_{k=1} x_{\sigma (k)}\). The Cumulative Sum of L is defined as \(CS_L = \sum AS_L\) (the sum of its elements).

A cumulative sum is built from the partial sums of a given sequence. For example, the partial sums of the sequence \(a, b, c, \dots \) are \(a, a+b, a+b+c, \dots \). The cumulative sum is then obtained by adding all these partial sums together.

Example 8

Consider the vector \(V = (0.23,0.26,0.1)\) and \(V' = (0.26,0.23,0.1)\) its ordered version. The cumulative sum of V is given by \(CS_V = \sum AS_V = \sum (AS^1_{V'}, AS^2_{V'}, AS^3_{V'}) = \sum (0.26, 0.49, 0.59) = 1.34\).
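Definition 10 translates into a few lines of code. The sketch below (the function name is ours) reproduces the value computed in Example 8.

```python
def cumulative_sum(values):
    # CS operator of Definition 10: sum of the partial sums taken in
    # descending order (the permutation sigma of Definition 10).
    ordered = sorted(values, reverse=True)
    partial_sums, acc = [], 0.0
    for x in ordered:
        acc += x
        partial_sums.append(acc)  # AS^i: sum of the i largest elements
    return sum(partial_sums)      # CS: the sum of all partial sums

# Example 8: V = (0.23, 0.26, 0.1) gives AS = (0.26, 0.49, 0.59).
assert abs(cumulative_sum([0.23, 0.26, 0.1]) - 1.34) < 1e-9
```

Equivalently, \(CS_L = \sum _{k=1}^{n} (n-k+1)\, x_{\sigma (k)}\): the element appearing in position k of the descending order is counted \(n-k+1\) times, so larger values weigh more and every additional element increases all subsequent partial sums, which is why the operator is sensitive to cardinality.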

Definition 11

(Weighted CS-Based Semantics). For any WAG \(\mathbf {G} = \langle \mathcal {A},w\), \( \mathcal {R} \rangle \) and \(a \in \mathcal {A}\), \(\text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(a) = \frac{w(a)}{1 + CS_{\text {AttS}_\mathbf {G}(a)} }\), where \(\text {AttS}_\mathbf {G}(a) = (\text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(b_1), \dots , \text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(b_n))\) and \(b_i \in \text {Att}_\mathbf {G}(a)\), for \(1 \le i \le n\). By convention, if \(\text {Att}_\mathbf {G}(a) = \emptyset \), then \(CS_{\text {AttS}_\mathbf {G}(a)} = 0\).

Weighted CS-Based semantics extends the h-categorizer semantics by taking the cumulative sum of the acceptability degrees of the attacking arguments instead of their plain sum.
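The degrees of Definition 11 are defined by mutual recursion, so on general graphs they are usually approximated iteratively. The sketch below is a naive fixed-point iteration; the iteration scheme, starting point and iteration count are our assumptions, not from the paper, and the toy graph is hypothetical.

```python
def cumulative_sum(values):
    # CS operator of Definition 10: sum of the descending partial sums.
    ordered = sorted(values, reverse=True)
    acc, total = 0.0, 0.0
    for x in ordered:
        acc += x
        total += acc
    return total

def cs_based_degrees(weights, attackers, iters=200):
    # Naive fixed-point iteration for Definition 11; we start from the
    # basic weights and iterate a fixed number of times (an assumption).
    deg = dict(weights)
    for _ in range(iters):
        deg = {a: weights[a] / (1.0 + cumulative_sum(
                   [deg[b] for b in attackers.get(a, [])]))
               for a in weights}
    return deg

# Hypothetical toy graph: unattacked arguments c and d both attack a.
weights = {"a": 1.0, "c": 0.5, "d": 0.4}
attackers = {"a": ["c", "d"]}
deg = cs_based_degrees(weights, attackers)
# Deg(c) = 0.5 and Deg(d) = 0.4 (no attackers), so CS = 0.5 + 0.9 = 1.4
# and Deg(a) = 1 / (1 + 1.4) = 1 / 2.4.
```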

Example 9

Considering the WAG \(\mathbf {G}\) from Example 1, we have that \(\text {Deg}^{\mathbf {Hbs}}_{\mathbf {G}}(a) = 0.58\) and \(\text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(a) = 0.47\). Just as the maximum t-conorm yields a higher acceptability degree than the other t-conorms, the acceptability degree of an argument is higher under the h-categorizer semantics than under the cumulative sum semantics.

As said previously, the cumulative sum is used in data analysis to detect variations and anomalies in a data set, and this operator is sensitive to the number of elements in the set. Thus, we propose a principle weaker than (Cardinality Precedence), named (Cardinality Compensation). The idea of this principle is that, when comparing the attacks on two arguments, the number of attacks takes precedence whenever the sums of the acceptability degrees of their attackers are equal.

Principle 17

(Cardinality Compensation). A semantics \(\mathbf {S}\) satisfies cardinality compensation iff for any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \), \(\forall a, b \in \mathcal {A}\), if i) \(w(a) = w(b)\), ii) \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > 0\), iii) \(\sum _{x \in \text {Att}_\mathbf {G}(a)} \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x) = \sum _{y \in \text {Att}_\mathbf {G}(b)} \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y) \), and iv) \(|\{y \in \text {Att}_\mathbf {G}(b) : \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y)> 0\}|> |\{x \in \text {Att}_\mathbf {G}(a) : \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x) > 0\}|\), then \(\text {Deg}^{\mathbf {S}}_{\mathbf {G}}(a) > \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(b)\).

This principle weakens (Cardinality Precedence) by adding condition iii) \(\sum _{x \in \text {Att}_\mathbf {G}(a)} \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(x) = \sum _{y \in \text {Att}_\mathbf {G}(b)} \text {Deg}^{\mathbf {S}}_{\mathbf {G}}(y) \), which requires the sums of the acceptability degrees of the attackers of the two arguments to be equal. Therefore, if a semantics satisfies (Cardinality Precedence), it also satisfies (Cardinality Compensation). Regarding Weighted CS-Based semantics, we have the following results.

Theorem 5

Weighted CS-based semantics satisfies all the principles except (Cardinality Precedence), (Quality Precedence) and (Quality Compensation). Additionally, it satisfies (Cardinality Compensation).

Example 10

From the WAG \(\mathbf {G}\) in Example 1, we have that \(w(b) = w(c) = 1\) and the sums of the acceptability degrees (for the cumulative sum semantics) of the attackers of b and c are, respectively, \(\text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(e) + \text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(f) + \text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(g) = 0.3 + 0.23 + 0.06 = 0.59\) and \(\text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(g) + \text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(h) = 0.06 + 0.53 = 0.59\). However, \(\text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(b) = 0.41 < 0.47 = \text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(c),\) since the argument b has more attackers than argument c. We can see that quantity becomes decisive when the compared sums are indistinguishable; otherwise, the semantics behaves like a compensating semantics.

With respect to the range of values of the acceptability degree for CS-based semantics, we have the same result as for the h-categorizer semantics.

Theorem 6

For any WAG \(\mathbf {G} = \langle \mathcal {A},w, \mathcal {R} \rangle \) and for any \(a \in \mathcal {A}\), \(\text {Deg}^{\mathbf {CSbs}}_{\mathbf {G}}(a) \in (0,w(a)]\).

4 Conclusion

This paper introduced two new semantics for the weighted argumentation framework. The main objective is to show that there are semantics with a hybrid behavior between the principles of (Quality Precedence), (Cardinality Precedence) and (Compensation). The semantics based on t-conorms, such as the drastic, Łukasiewicz and probabilistic sum t-conorms, seek an alternative to the maximum operator, which is characterized by the principle of (Quality Precedence). It has been shown that the probabilistic sum t-conorm has the most interesting properties: although it does not satisfy (Quality Precedence), it satisfies all the basic principles, along with (Compensation) and (Quality Compensation).

Table 1. The list of the principles satisfied (or violated) by the semantics.

The semantics based on cumulative sum is intended to exhibit a hybrid behavior between (Compensation) and (Cardinality Precedence). Unlike the classical sum operator used in the h-categorizer semantics, the cumulative sum gives more weight to the total sum according to the number of elements: the more elements, the greater the sum value. As a result, we show that the cumulative sum semantics satisfies all the principles of the h-categorizer semantics plus a weaker version of (Cardinality Precedence), called (Cardinality Compensation). Table 1 summarizes the results regarding the weighted argumentation framework operators and the satisfaction of all principles.

As future work, we intend to continue exploring other operators with intermediate characteristics between these three main incompatible principles. One alternative is to study t-conorm families: beyond the four basic t-conorms, it is possible to extend them through parameters into several families of t-conorms [28], resulting in operators with different properties. Another point to be investigated is operators that exhibit intermediate behavior between quality and quantity. Furthermore, another research direction is to study other new principles in the weighted argumentation framework. A remarkable question remains open: is it possible to conceive a new semantics with a hybrid behavior involving not only two, but all three of these main incompatible principles?