How can you say such things?!?: Recognizing disagreement in informal political argument
Citation: Rob Abbott, Marilyn Walker, Pranav Anand, Jean E. Fox Tree, Robeson Bowmani, and Joseph King (2011). How can you say such things?!?: Recognizing disagreement in informal political argument. Proceedings of the Workshop on Language in Social Media.
Tagged: Computer Science, disagreement, online argumentation, NLP, natural language processing, Mechanical Turk, cue words, discourse analysis, hand annotation, discourse markers, forum posts
Summary
This paper builds a manually annotated corpus of informal argumentation drawn from a political bulletin-board discussion forum, 4forums.com. The ARGUE corpus consists of 11,216 discussions and 109,553 posts by 2,764 authors; a subset was annotated using Mechanical Turk.
The authors define a "quote-response pair" (Q-R pair), where the response is the portion of a post directly following a quotation. 10,003 Q-R pairs were selected using 20 discourse markers/cue words; the 17 that occurred at least 50 times at the start of a response were: actually, and, because, but, I believe, I know, I see, I think, just, no, oh, really, so, well, yes, you know, and you mean.
Turkers were asked whether posters agree or disagree; argue with facts or emotion; attack or insult; use sarcasm; and are nice or nasty.
Cue Word Findings
Marking disagreement
- Really
- No
- Actually
- But
- So
- You mean
Marking agreement
- Yes
- I know
- I believe
- I think
- Just
Nearly even
- And
- Because
- Oh
- I see
- You know
- Well
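The agreement/disagreement groupings above come from tallying annotator judgments per cue word. A minimal sketch of that tally, using binary labels as a simplified stand-in for the Turkers' actual ratings:

```python
from collections import defaultdict

def disagreement_rate_by_cue(annotated_pairs):
    """annotated_pairs: iterable of (cue_word, label) tuples, where label is
    'agree' or 'disagree' (a simplification of the annotators' judgments).
    Returns the fraction of disagreements per cue word."""
    counts = defaultdict(lambda: [0, 0])  # cue -> [disagree count, total]
    for cue, label in annotated_pairs:
        counts[cue][1] += 1
        if label == "disagree":
            counts[cue][0] += 1
    return {cue: d / t for cue, (d, t) in counts.items()}
```

Cue words whose rate sits well above 0.5 would land in the "marking disagreement" group, well below it in "marking agreement", and near 0.5 in "nearly even".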
Sarcasm Markers
Unsurprisingly, sarcasm was hard to detect; inter-annotator agreement was low. One interesting finding is that "oh", rather than indicating sarcasm, "was the discourse marker with the highest rating of feelings over fact". Further, sarcasm was not correlated with disagreement; sarcastic responses did, however, tend to be more emotional, more personal, and nastier.
Most sarcastic
That said, these are the most indicative of sarcasm:
- You mean
- Oh
- Really
- So
- I see
Least sarcastic
- I think
- I believe
- Actually
Methods
Besides the Mechanical Turk annotation, the authors used the Weka machine learning toolkit with Naive Bayes and JRip classifiers.
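To make the classification setup concrete, here is a from-scratch multinomial Naive Bayes over bag-of-words counts. This is an illustration of the technique, not the authors' Weka pipeline, and the toy training data is invented:

```python
import math
from collections import Counter, defaultdict

class TinyNaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace smoothing."""

    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)  # class -> word frequencies
        self.class_counts = Counter(labels)      # class priors
        self.vocab = set()
        for doc, y in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[y].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        words = doc.lower().split()
        best, best_lp = None, -math.inf
        for y, n_y in self.class_counts.items():
            # log prior + sum of smoothed log likelihoods
            lp = math.log(n_y / sum(self.class_counts.values()))
            denom = sum(self.word_counts[y].values()) + len(self.vocab)
            for w in words:
                lp += math.log((self.word_counts[y][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best

nb = TinyNaiveBayes().fit(
    ["really that makes no sense", "yes exactly my point"],
    ["disagree", "agree"])
```

With this toy model, a response opening like "really no way" leans toward the disagree class because "really" and "no" were only seen with disagreement.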
Features investigated
- MetaPost (posterid, time between posts, etc.)
- Unigrams, Bigrams
- Cue words (initial unigram, bigram, and trigram)
- Punctuation (collapsed into 3 categories: ??, !!, ?!)
- LIWC measures and frequencies
- Dependencies (from the Stanford Parser)
- Generalized dependencies (POS of the head word; opinion polarity of both words) -- see Somasundaran & Wiebe (2009)
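Two of the lighter-weight features above can be sketched directly; the exact collapsing rule for mixed punctuation is an assumption on my part:

```python
import re

def punctuation_features(text):
    """Collapse runs of repeated sentence-final punctuation into the three
    categories the paper lists: '??', '!!', and '?!' (treating any mixed
    run as '?!' -- an assumption about the collapsing rule)."""
    feats = set()
    for run in re.findall(r"[?!]{2,}", text):
        if "?" in run and "!" in run:
            feats.add("?!")
        elif "?" in run:
            feats.add("??")
        else:
            feats.add("!!")
    return feats

def initial_ngrams(response):
    """Initial unigram, bigram, and trigram of the response, corresponding
    to the paper's cue-word features."""
    words = response.lower().split()
    return {n: tuple(words[:n]) for n in (1, 2, 3) if len(words) >= n}
```

For a response like "Oh really now then", the initial n-gram features would be ("oh",), ("oh", "really"), and ("oh", "really", "now").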
Selected References
- M. Galley, K. McKeown, J. Hirschberg, and E. Shriberg. 2004. Identifying agreement and disagreement in conversational speech: Use of bayesian networks to model pragmatic dependencies. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 669-676. Association for Computational Linguistics.
- John E. Hunter. 1987. A model of compliance-gaining message selection. Communication Monographs, 54(1):54-63.
- S. Somasundaran and J. Wiebe. 2009. Recognizing stances in online debates. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 226-234. Association for Computational Linguistics.
- S. Somasundaran and J. Wiebe. 2010. Recognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 116-124. Association for Computational Linguistics
- Y.C. Wang and C.P. Rosé. 2010. Making conversational structure explicit: identification of initiation-response pairs within online discussions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 673-676. Association for Computational Linguistics.
Discourse words
- Barbara Di Eugenio, Johanna D. Moore, and Massimo Paolucci. 1997. Learning features that predict cue usage. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, ACL/EACL 97, pages 80-87.
- J.E. Fox Tree and J.C. Schrock. 1999. Discourse markers in spontaneous speech: Oh what a difference an oh makes. Journal of Memory and Language, 40(2):280-295.
- J.E. Fox Tree and J.C. Schrock. 2002. Basic meanings of you know and I mean. Journal of Pragmatics, 34(6):727-747.
- J. E. Fox Tree. 2010. Discourse markers across speakers and settings. Language and Linguistics Compass, 3(1):113.
- Julia Hirschberg and Diane Litman. 1993. Empirical studies on the disambiguation of cue phrases. Computational Linguistics, 19(3):501-530.
- Margaret G. Moser and Johanna Moore. 1995. Investigating cue selection and placement in tutorial discourse. In ACL 95, pages 130-137.
- Deborah Schiffrin. 1987. Discourse markers. Cambridge University Press, Cambridge, U.K.