Automatic detection of arguments in legal texts
Citation: Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, Chris Reed (2007). Automatic detection of arguments in legal texts. In Proceedings of the 11th International Conference on Artificial Intelligence and Law (ICAIL '07).
DOI (original publisher): 10.1145/1276318.1276362
Download: http://portal.acm.org/citation.cfm?id=1276318.1276362
Tagged: Computer Science
Tags: argument mining, argument detection, legal citations
Summary
The goal of the ACILA project (2006-2010) is to automatically recognize the structure of arguments in text. This task is currently done manually by analysts and is very time-consuming.
This paper focuses on a subproblem: distinguishing argumentative from non-argumentative sentences, each considered in isolation.
This is a binary classification problem, and two algorithms are applied: a multinomial naive Bayes classifier [20] (MNB) and a maximum entropy model [4] (Maxent), trained on the features below. The corpus was balanced between argumentative and non-argumentative sentences: the Araucaria corpus (1,899 argumentative sentences and 827 sentences without arguments) was augmented with 1,072 new sentences containing no argument, for 1,899 of each class. The sources are not strictly from the legal domain; they also include newspapers and discussion fora.
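The MNB side of this setup can be sketched with a minimal from-scratch multinomial naive Bayes over unigram counts. This is an illustration only, not the paper's implementation: the toy training sentences, tokenizer, and Laplace smoothing are assumptions made for the example.

```python
import math
from collections import Counter

# Toy labeled data (invented for illustration): "arg" = argumentative,
# "non" = non-argumentative.
train = [
    ("the court should reject this claim because the statute is clear", "arg"),
    ("therefore the evidence supports a finding of negligence", "arg"),
    ("consequently the appeal must fail", "arg"),
    ("the hearing took place on a tuesday", "non"),
    ("the defendant lives in the city", "non"),
    ("the document was filed last week", "non"),
]

def tokenize(sentence):
    return sentence.lower().split()

# Per-class document counts and token counts, plus the shared vocabulary.
class_docs = Counter()
class_tokens = {}
vocab = set()
for text, label in train:
    class_docs[label] += 1
    counts = class_tokens.setdefault(label, Counter())
    for tok in tokenize(text):
        counts[tok] += 1
        vocab.add(tok)

def predict(sentence):
    """Return the class with the highest log posterior (Laplace smoothing)."""
    total_docs = sum(class_docs.values())
    scores = {}
    for label in class_docs:
        score = math.log(class_docs[label] / total_docs)  # log prior
        total = sum(class_tokens[label].values())
        for tok in tokenize(sentence):
            # Add-one smoothed log likelihood of each token given the class.
            score += math.log(
                (class_tokens[label][tok] + 1) / (total + len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)
```

With this toy model, sentences carrying argumentative cue words such as "therefore" land in the "arg" class; the paper's real classifiers use the much richer feature set listed below rather than unigrams alone.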
Features
- unigrams
- bigrams
- trigrams
- adverbs (POS tagger)
- verbs (POS tagger) -- only main verbs (excluding "to be", "to do", and "to have")
- modal auxiliary (which indicate the level of necessity) (POS tagger)
- word couples (all possible word pairs), ignoring stop words ("to be", "to do", "to have", general determiners (a, the, this, that), proper nouns, pronouns, and symbols)
- text statistics - sentence length, average word length, number of punctuation marks
- punctuation marks
- key words indicative of argumentation (286 words from [16]), such as "but", "consequently" and "because of"
- parse features, see [7] - depth of the parse tree, number of subclauses
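Two of the feature groups above (word couples and text statistics) can be sketched as follows. The tokenizer and the exact stop-word list are simplified assumptions for illustration, approximating the exclusions the paper describes rather than reproducing them.

```python
import string
from itertools import combinations

# Simplified stand-in for the paper's stop-word exclusions: forms of
# "to be", "to do", "to have", and general determiners.
STOP_WORDS = {"a", "the", "this", "that",
              "be", "is", "are", "was", "were",
              "do", "does", "did", "have", "has", "had", "to"}

def word_couples(sentence):
    """All unordered pairs of distinct non-stop-word tokens in the sentence."""
    tokens = [t.strip(string.punctuation).lower() for t in sentence.split()]
    content = [t for t in tokens if t and t not in STOP_WORDS]
    return set(combinations(sorted(set(content)), 2))

def text_statistics(sentence):
    """Sentence length in words, average word length, punctuation count."""
    tokens = sentence.split()
    words = [t.strip(string.punctuation) for t in tokens]
    punct = sum(1 for ch in sentence if ch in string.punctuation)
    avg_len = sum(len(w) for w in words) / len(words) if words else 0.0
    return {"n_words": len(tokens), "avg_word_len": avg_len, "n_punct": punct}
```

Each sentence's feature vector would concatenate groups like these with the n-gram, POS-based, keyword, and parse features before being fed to MNB or Maxent.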
Social media
The corpus includes 750 sentences from discussion fora, to which the MNB and Maxent classifiers were also applied.
Results
The authors reviewed the 98 false positives and false negatives; 21.4% of these could have been resolved using the preceding context. Ambiguity of linguistic markers is a major source of error (words such as "should" and "more" are misread as argumentative cues), but the hardest cases are those where the text gives no surface cue at all, and real-world and common-sense knowledge is needed to identify the argument.
Selected references
Work that underlies this
- [4] A. L. Berger, S. D. Pietra, and V. J. D. Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71, 1996.
- [7] E. Charniak. A maximum-entropy-inspired parser. Technical Report CS-99-12, 1999.
- [16] A. Knott and R. Dale. Using linguistic phenomena to motivate a set of rhetorical relations. Technical Report HCRC/RP-39, Edinburgh, Scotland, 1993.
- [20] A. McCallum and K. Nigam. A comparison of event models for naive Bayes text classification. In Proceedings of the AAAI/ICML-98 Workshop on Learning for Text Categorization, pages 41-48. AAAI Press, 1998.
Related Work
- S. Brüninghaus and K. D. Ashley. Generating legal arguments and predictions from case texts. In ICAIL ’05, pages 65-74, New York, NY, USA, 2005. ACM.
- B. Hachey and C. Grover. Automatic legal text summarisation: Experiments with summary structuring. In ICAIL ’05, pages 75-84, New York, NY, USA, 2005. ACM.
- H. Horacek and M. Wolska. Interpreting semi-formal utterances in dialogs about mathematical proofs. Data Knowl. Eng., 58(1):90-106, 2006.