Information Systems Success in Free and Open Source Software Development: Theory and Measures


Citation: Kevin Crowston, James Howison, Hala Annabi (2006) Information Systems Success in Free and Open Source Software Development: Theory and Measures. Software Process Improvement and Practice (RSS)
DOI (original publisher): 10.1002/spip.259
Wikidata (metadata): Q17745387
Download: http://howison.name/pubs/Crowston2006FlossSuccess.pdf

Summary

What does success mean in the FLOSS context? The authors review the applicability of existing measures of information system (IS) success to FLOSS, propose additional concepts for assessing FLOSS, and present an empirical study based on data from SourceForge.

Long-term goal: "identify processes that enable distributed software team performance, specifically (FLOSS teams)." Measuring project success is a necessary step toward knowing what constitutes an improvement in the output of those processes.

"The most commonly cited model for IS success...suggests six interrelated measures of success: system quality, information quality, use, user satisfaction, individual impact and organizational impact...related model that includes system quality, information quality, perceived usefulness, user satisfaction and IS use", suggesting the following candidate measures for FLOSS:

  • System and Information Quality: code quality, e.g. number of defects per KLOC, and documentation quality; readily measurable for FLOSS
  • User satisfaction: some data available for FLOSS (e.g. Freshmeat ratings), but a nonrandom sample with low variability; in principle a user survey is possible but difficult
  • Use: some data available on relative FLOSS popularity, e.g. Debian popcon, inclusion in distributions, download counts, network software scans (e.g. Netcraft web-server statistics), page views on sites like Freshmeat, and dependencies indicating reuse
  • Individual or Organizational Impacts: hard to define, and likely not usable for assessing individual FLOSS projects

The authors identify additional measures for FLOSS:

  • The commonly cited model treats creation, use, and impact as components of success; FLOSS research could also look inside creation -- for proprietary software use is relatively easy to measure while development is opaque, and vice versa for FLOSS.
  • Hackman model for teams: task output, team's continued ability to work together, satisfaction of individual team members' personal needs
  • Output: information quality and system quality; project completion is less appropriate for FLOSS because requirements development is discursive; surveying developers' individual satisfaction is feasible
  • Process: number of developers (for FLOSS, the individuals who actively participate), level of activity of participants, and cycle time, e.g. time to close bugs and proportion of bugs fixed
  • Effects on Project Teams: improved employment opportunities (salary or jobs acquired through participation), individual reputation, individual learning
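The process measures above (proportion of bugs fixed, bug-fixing cycle time) can be computed directly from bug-tracker timestamps. A minimal sketch using made-up bug records (all data hypothetical, not from the paper):

```python
from datetime import datetime
from statistics import median

# Hypothetical bug records: (opened, closed) timestamps; closed is None if still open.
bugs = [
    (datetime(2005, 1, 1), datetime(2005, 1, 4)),
    (datetime(2005, 1, 2), datetime(2005, 1, 12)),
    (datetime(2005, 1, 5), None),  # still open
    (datetime(2005, 1, 7), datetime(2005, 1, 8)),
]

# Proportion of bugs fixed: closed reports over all reports.
fixed = [b for b in bugs if b[1] is not None]
proportion_fixed = len(fixed) / len(bugs)

# Cycle time: lifetimes (in days) of the closed bugs, summarized by the median,
# since bug-lifetime distributions are typically skewed.
lifetimes = [(closed - opened).days for opened, closed in fixed]
median_lifetime = median(lifetimes)

print(proportion_fixed, median_lifetime)  # → 0.75 3
```

The median is used rather than the mean because a few long-lived bugs would otherwise dominate the statistic.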

The authors posted a survey on Slashdot and received 201 responses; they checked corresponding SourceForge usernames to confirm that respondents included FLOSS developers, then coded the responses for analysis. "Overall, the responses of the developers posting on SlashDot were in general agreement with the list of success measures we developed from the literature and our re-examination of the process...new themes did emerge":

  • recognition of the project (e.g. on other sites) as a measure of success
  • level of user involvement in submitting bugs and participating in discussions
  • porting (e.g. to Windows), which could be considered a special case of popularity

None of the responses mentioned individual learning or getting a better job.

Success measures in recent FLOSS research

  • Coded 182 papers posted to http://opensource.mit.edu/pre-press as empirical (84) vs. conceptual, and for whether project success was used as a dependent variable (14)
  • Found wide range of measures, many related to process of creation

Empirical measures from SourceForge data:

  • number of developers
  • bug-fixing time
  • popularity (downloads and views)
  • inclusion in distributions

Only 140 projects had at least 7 developers and 100 bugs; this restricted selection is justified as indicating a genuine team and as necessary for the planned analysis of bug fixing. A spider collected data on 122 of these projects, reduced to 120 after removing one whose bugs were imported from another site and one whose bugs were in Russian. The analysis:

  • "suggest that the count of participants and number of bugs function more like indications of the popularity of a FLOSS project, rather than the success of its development processes"
  • "hazard ratio for bug lifetimes and the median bug lifetime are not significantly correlated with any of the other variables, suggesting that they do provide an independent view of a project’s performance"

The sample may not have sufficient variance in success: projects meeting the 7-developer/100-bug criteria are already somewhat successful. Popularity measures are also unadjusted for market size.