Believe me—We can do this! Annotating persuasive acts in blog text
Citation: Pranav Anand, Joseph King, Jordan Boyd-Graber, Earl Wagner, Craig Martell, Doug Oard, and Philip Resnik (2011) Believe me—We can do this! Annotating persuasive acts in blog text. AAAI Workshop on Computational Models of Natural Argument
This paper describes the construction of a corpus of blog posts annotated to highlight persuasion tactics.
The authors define persuasion as "instances where an agent attempts to convince another party to adopt a novel belief, attitude, or commitment to act."
The tactics (from Table 2) come mainly from Marwell and Schmitt or Cialdini, with a few inspired by Walton et al.:
- Social esteem
- Deontic appeal (to duties/obligations)
- Moral appeal
- Social generalization
- Good/bad traits
Blogs were selected from the Blog Authorship Corpus (Koppel et al. 2006), collected in August 2004 from Blogger.com, with posts from 19,320 different blogs.
A separate test set of 30 posts was hand-selected "based on their coverage of tactics and persuasion acts". Interannotator agreement was low, due to confusion between opinion and persuasion; situations where it wasn't clear whether the author intended persuasion; and "difficulty in distinguishing between belief revision and attitude change (Is “smoking is bad for your health” aimed at attitude change or belief revision? What about “smoking is dangerous”?)."
Annotation guidelines focused on "justificatory text" and "blatant persuasion" (where the author's intent is clear); the guidelines are available with the persuasion corpus. Annotators were asked to mark the smallest text span containing a tactic, and to note whether persuasion was used (i.e., whether the tactic "was intended to forestall potential skepticism in the reader").
Eight annotators were trained, then tested, on subsets of the test set, resulting in a kappa above 0.8.
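The paper reports kappa for annotator training; as a quick reference, chance-corrected pairwise agreement can be computed as below. This is a generic sketch of Cohen's kappa for two annotators over nominal labels, not the authors' code.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labeling the same items (nominal labels)."""
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(ca) | set(cb)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)
```

For example, two annotators agreeing on 5 of 6 binary persuasion labels with the marginals below yield kappa of 2/3, well under the raw 83% agreement.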
40 blogs (among those with more than 200 posts) were randomly selected, resulting in 25,048 posts. Each annotator annotated all the posts in 7 blogs, with 20% overlap with other annotators. Persuasive acts or persuasion tactics were found in 4,603 posts in 37 blogs.
Persuasion acts were relatively rare (457 posts: 380 for belief revision, 128 for compliance gaining; 51 had both). Agreement was measured with Krippendorff's $\alpha$. Some categories produced serious disagreements (e.g. Redefinition, where annotators had to distinguish metaphors and value judgements).
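Krippendorff's $\alpha$ generalizes chance-corrected agreement to multiple coders, value metrics, and missing data. A minimal sketch of the special case used here for intuition (two coders, nominal labels, no missing data), built from the standard coincidence-matrix formulation; this is my illustration, not the authors' implementation:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(a, b):
    """Krippendorff's alpha: two coders, nominal labels, no missing data."""
    o = Counter()  # coincidence matrix over ordered value pairs
    for u, v in zip(a, b):
        for pair in permutations((u, v)):  # each unit contributes both ordered pairs
            o[pair] += 1
    n_c = Counter()                        # marginal totals per value
    for (x, _), cnt in o.items():
        n_c[x] += cnt
    n = sum(n_c.values())                  # = 2 * number of units
    d_o = sum(cnt for (x, y), cnt in o.items() if x != y) / n        # observed disagreement
    d_e = sum(n_c[x] * n_c[y] for x in n_c for y in n_c if x != y) / (n * (n - 1))
    return 1.0 - d_o / d_e                 # alpha = 1 - D_o / D_e
```

Unlike raw percent agreement, $\alpha$ can go negative when coders disagree more than chance would predict.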
Predicting Persuasion from Tactics
They tested how well systems could classify posts as persuasive based on different feature sets (compared to human annotation). Precision, recall, and F-score were the measures used.
The feature sets were stemmed unigrams; sentiment/causation/insight (71 word-list count features from LIWC and MPQA); topics (from LDA); and tactics (from the annotation). They also ran an ablation study, testing all 7 possible combinations with a Naive Bayes classifier, then trained RIPPER, a rule-based classifier, over the seven most important features (reason, deontic appeal, outcome, empathy, recharacterization, threat/promise, good/bad traits).
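The ablation setup (train a Naive Bayes classifier on each combination of feature sets, then score with precision/recall/F) can be sketched on toy data. Everything below — the group names, the `u:`/`t:` features, and the four toy posts — is invented for illustration; it is not the paper's data or code.

```python
from itertools import combinations
from math import log

def train_nb(X, y):
    """Bernoulli Naive Bayes with Laplace smoothing.
    X: list of frozensets of active binary features; y: class labels."""
    feats = sorted(set().union(*X))
    model = {}
    for c in sorted(set(y)):
        Xc = [x for x, yy in zip(X, y) if yy == c]
        prior = log(len(Xc) / len(X))
        p_on = {f: (sum(f in x for x in Xc) + 1) / (len(Xc) + 2) for f in feats}
        model[c] = (prior, p_on)
    return feats, model

def predict(feats, model, x):
    def score(c):
        prior, p_on = model[c]
        # Bernoulli NB scores feature presence AND absence
        return prior + sum(log(p_on[f]) if f in x else log(1 - p_on[f]) for f in feats)
    return max(model, key=score)

def prf(gold, pred, pos=1):
    """Precision, recall, F1 for the positive (persuasive) class."""
    tp = sum(g == pos and p == pos for g, p in zip(gold, pred))
    fp = sum(g != pos and p == pos for g, p in zip(gold, pred))
    fn = sum(g == pos and p != pos for g, p in zip(gold, pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Toy posts: each maps a feature-set name to its active features (invented data).
posts = [
    ({"unigram": {"u:must"},   "tactics": {"t:reason"}}, 1),
    ({"unigram": {"u:should"}, "tactics": {"t:threat"}}, 1),
    ({"unigram": {"u:cat"},    "tactics": set()},        0),
    ({"unigram": {"u:dog"},    "tactics": set()},        0),
]
groups = ["unigram", "tactics"]

# Ablation: train and score on every non-empty combination of feature sets.
results = {}
for r in range(1, len(groups) + 1):
    for combo in combinations(groups, r):
        X = [frozenset().union(*(p[g] for g in combo)) for p, _ in posts]
        y = [label for _, label in posts]
        feats, model = train_nb(X, y)
        pred = [predict(feats, model, x) for x in X]  # training-set score (toy only)
        results[combo] = prf(y, pred)[2]
```

Comparing `results` across combinations is the ablation: a feature set matters if dropping it lowers F.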
Analyzing False Negatives
Of the false negatives:
- 2.1% were incorrectly annotated
- 16% were "highly charged, opinionated statements or personal statements"
- 4% were reviews of movies, books, and CDs (not covered)
- 68% were "legitimately persuasive on reexamination"
- 3% were imperatives with "remember", "think", and "imagine"
Taxonomy was influenced by
- Cialdini, R. B. 2000. Influence: Science and practice (4th Edition). Allyn & Bacon.
- Marwell, G., and Schmitt, D. 1967. Dimensions of compliance-gaining behavior: An empirical analysis. Sociometry 30:350-364.
- Walton, D.; Reed, C.; and Macagno, F. 2008. Argumentation Schemes. Cambridge University Press.
- Marcu, D. 1997. Perlocutions: The Achilles' heel of speech act theory. Journal of Pragmatics.
- Mochales, R., and Ieven, A. 2009. Creating an argumentation corpus: Do theories apply to real arguments? A case study on the legal argumentation of the ECHR. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Law, 21-30.
- Palau, R., and Moens, M. 2009. Argumentation mining: The detection, classification and structure of arguments in text. In Proceedings of The Twelfth International Conference on Artificial Intelligence and Law, 98-107.
- Wilson, T. A. 2008. Fine-grained subjectivity and sentiment analysis: Recognizing the intensity, polarity and attitudes of private states. Ph.D. Dissertation, University of Pittsburgh.
- Koppel, M.; Schler, J.; Argamon, S.; and Pennebaker, J. 2006. Effects of age and gender on blogging. In AAAI 2006 Spring Symposium on Computational Approaches to Analysing Weblogs.
Theoretical and practical relevance:
Their persuasion corpus, which contains over 4600 blog posts annotated for the presence of persuasion and related tactics, is publicly available.
Distinguishing Persuasion from Opinion
Examples in the paper distinguish persuasion from opinion, and indicate that persuasive texts attempt to engage the reader.