Can you ever trust a wiki?: Impacting perceived trustworthiness in Wikipedia
Citation: Aniket Kittur, Bongwon Suh, Ed H. Chi (2008) Can you ever trust a wiki?: Impacting perceived trustworthiness in Wikipedia. Proceedings of the ACM 2008 conference on Computer supported cooperative work.
DOI (original publisher): 10.1145/1460563.1460639
Semantic Scholar (metadata): 10.1145/1460563.1460639
Sci-Hub (fulltext): 10.1145/1460563.1460639
Internet Archive Scholar (search for fulltext): Can you ever trust a wiki?: Impacting perceived trustworthiness in Wikipedia
Tagged: Computer Science, Wikipedia, trust, trustworthiness
Kittur, Suh, and Chi's short "note," published at CSCW, presents the design and evaluation of an interface addition to Wikipedia designed to help readers of Wikipedia articles assess issues of trust.
The authors suggest several designs that "surface trust" in Wikipedia, essentially by making history information more visible. One measure is the percentage of words contributed by anonymous users. Another is whether the last edit was made by an anonymous user, a third is the stability of previous content, and a fourth is a graphical display of past editing activity.
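Two of these measures are simple enough to sketch. The code below is a minimal illustration, not the authors' implementation: the data format (per-word attribution tuples and revision dicts with an `is_anonymous` flag) is a hypothetical simplification of real Wikipedia revision history.

```python
# Hypothetical sketch of two surfaced trust metrics from the paper:
# (1) the percentage of an article's current words contributed by
#     anonymous (IP-address) editors, and
# (2) whether the most recent edit was made by an anonymous user.

def percent_anonymous_words(word_authors):
    """word_authors: list of (word, author, is_anonymous) tuples
    describing who last contributed each word of the current text.
    Returns the percentage of words attributed to anonymous editors."""
    if not word_authors:
        return 0.0
    anon = sum(1 for _, _, is_anon in word_authors if is_anon)
    return 100.0 * anon / len(word_authors)

def last_edit_anonymous(revisions):
    """revisions: list of revision dicts ordered oldest-to-newest,
    each with an 'is_anonymous' flag. Returns True if the latest
    edit was made by an anonymous user."""
    return bool(revisions) and revisions[-1]["is_anonymous"]

words = [
    ("Rome", "alice", False),
    ("is", "192.0.2.7", True),
    ("ancient", "bob", False),
    ("history", "192.0.2.7", True),
]
print(percent_anonymous_words(words))  # 50.0

revs = [{"is_anonymous": False}, {"is_anonymous": True}]
print(last_edit_anonymous(revs))  # True
```

In practice, per-word attribution would have to be reconstructed by diffing successive revisions, which is the harder part of computing such metrics.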
The authors then use Wikipedia's built-in article ratings (e.g., classification as "good" or "B" grade) to select articles that were high or low quality and controversial or not, and paired each with either a high-trust or a low-trust visualization. They asked participants recruited from Amazon Mechanical Turk to rate each article's trustworthiness, and showed a robust effect of surfacing what they suggest is trust-relevant information.
In a second study, the authors repeated the experiment but established a baseline condition with no visualization. They found that including trust information can shift perceived trustworthiness in both the positive and negative directions.
Theoretical and Practical Relevance
The work raises open questions about the degree to which the metrics (e.g., the number of anonymous edits) are actually good measures of "trust" -- something the authors did not attempt to explore in this study.