Researchers at the University of Michigan in Ann Arbor have found that when a collaborative crowdsourced project generates extreme interest or attention, the creators must carefully manage community engagement or risk stalling progress.
Crowdsourcing, or obtaining information or input by enlisting the services of a large number of people, usually online, has become a popular way to advance a technology idea.
New research from the U-M School of Information shows that often core team members find themselves in somewhat traditional management roles as they seek to move forward, sometimes by enlisting members of the crowd for more involved assignments.
“They struggle with staffing and response. This forces them to carry on as before or open up and accept outside help,” says Danaja Maldeniya, doctoral candidate at the School of Information and first author of the paper.
Maldeniya and colleagues looked at how an overabundance of good Samaritans on more than 1,100 open source software projects that topped the GitHub Trending page resulted in growing pains, requiring teams to adapt their work routines, organizational structure, and management style. The researchers analyzed millions of actions by thousands of contributors, scraping data from the GitHub Trending page every three hours for seven months.
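The paper does not publish the collection pipeline, but the three-hour polling loop it describes might be sketched roughly as follows. The regular expression and the `h2`-based page structure are assumptions about GitHub's trending-page markup (which changes over time), not the researchers' actual code.

```python
# Hypothetical sketch of the study's data collection: poll the GitHub
# Trending page every three hours and record which repositories appear.
import re
import time
import urllib.request

TRENDING_URL = "https://github.com/trending"
POLL_INTERVAL = 3 * 60 * 60  # three hours, as described in the study

def extract_repo_names(html: str) -> list[str]:
    """Pull owner/repo slugs out of trending-page markup.

    Rough heuristic: trending entries are rendered as <h2> headings
    whose first link points at /owner/repo. This is an assumption
    about the page layout, which GitHub changes without notice.
    """
    return re.findall(r'<h2[^>]*>\s*<a href="/([^"/]+/[^"/]+)"', html)

def poll_trending(snapshots: list) -> None:
    """Append (timestamp, repo list) snapshots forever."""
    while True:
        with urllib.request.urlopen(TRENDING_URL) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        snapshots.append((time.time(), extract_repo_names(html)))
        time.sleep(POLL_INTERVAL)
```

Repeated snapshots like these let researchers pinpoint when a project first "trends" and compare its activity before and after that attention shock.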
In crowdsourced ventures, there is typically a small core team that opens its project to the masses while expecting to keep working primarily as developers. Newcomers typically show interest by “starring” the project, reporting issues they encounter, suggesting additional features, or contributing code or other content. They sometimes express interest in taking the software in a new direction.
Most newcomer contributions are shallow and transient, but in cases where interest is especially strong and involvement is deeper, the original team must transition into administrative roles, responding to requests and reviewing newcomers’ work. As a result, projects follow a more distributed coordination model, with newcomers becoming more central, although still in a limited way.
When the original team is unprepared for this series of events, the result can be long delays in responding to interest and ideas from the crowd, which can chill enthusiasm or stall momentum. After the shock of sudden engagement, response times for issues and pull requests increased by 30 percent and 42 percent, respectively.
“When you have a team of say five people, and you get 1,000 external engagements, how do you respond to that? Most likely you will be overwhelmed and not respond,” Maldeniya says. “Most engagements will be shallow. There will be a limited number of high-value engagements, but how do you find them among the 1,000?”
Maldeniya says easy fixes to help keep momentum going could include creating to-do lists of specific tasks for crowd members, or using an automated system to weed out bots and responses that are not serious, along with boilerplate messaging that acknowledges interest and promises follow-up from a developer.
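The automated triage Maldeniya suggests could look something like the sketch below: a filter for obvious bot accounts plus a boilerplate acknowledgement for everyone else. The bot-name markers, function names, and message text are all illustrative assumptions, not tooling from the study.

```python
# Hypothetical triage helper for an overwhelmed core team: screen out
# likely bot traffic and auto-acknowledge human-filed issues so that
# contributors hear back quickly even before a developer can review.

BOT_MARKERS = ("[bot]", "-bot", "dependabot")  # assumed naming conventions

def is_likely_bot(username: str) -> bool:
    """Crude heuristic: flag accounts whose names look automated."""
    name = username.lower()
    return any(marker in name for marker in BOT_MARKERS)

def acknowledge(issue_title: str, username: str):
    """Return a boilerplate reply for a human-filed issue, or None to skip."""
    if is_likely_bot(username):
        return None
    return (
        f'Thanks for opening "{issue_title}", @{username}! '
        "A core developer will follow up as soon as possible."
    )
```

In practice a team might wire this into a webhook or CI job that posts the returned message as an issue comment; the value is simply that every serious contributor gets a prompt signal that their input was seen.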
The research was supported in part by the National Science Foundation.