Nature has a great article discussing various business strategies for making a collaborative scientific resource work. What makes some grow and others wither? What returns does a scientific organization get on its investment in public, collaborative technologies? Is there a way to see that contributors and the host organization all reap the rewards of their work?
From the article in Nature.
The science wikis face a tougher challenge in building critical mass, if only because they’re aiming at a much smaller audience. One obvious strategy is to avoid fragmenting that audience. As Evelo points out, “biologists aren’t going to work on a dozen wikis to see which will survive”. They are going to want the various wikis to be interoperable and mutually supporting, so that the data they enter in one can be easily ported to another — or will even flow to all the appropriate sites automatically.
It should help that so many of the sites are based on the same MediaWiki software. That gives them the potential to act as one big open-source community, sharing code and improvements. And it’s not just potential, adds Pico: “We’ve been in close contact with Jim Hu and the EcoliHub folks about making our wikis interoperable.” EcoliHub is the ‘parent’ website of EcoliWiki, providing access to vast amounts of information on the bacterium Escherichia coli. Also critical to interoperability will be a standard language that can be understood by all the databases. In the realm of pathways, says Pico, the closest to that right now is BioPAX, an XML-based standard for the exchange of pathway and interaction information. “We’re planning on converting our system to it.”
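The interoperability idea is simple in outline: one site serializes a pathway record to a shared XML vocabulary, and another site parses it back. Here is a minimal sketch in that spirit, using Python's standard XML library; the namespace is BioPAX Level 3's, but the element names are chosen for illustration and are not a guarantee of the normative BioPAX vocabulary.

```python
# Sketch: round-tripping a minimal pathway record through a
# BioPAX-style XML document, so a second wiki could re-import it.
import xml.etree.ElementTree as ET

BP = "http://www.biopax.org/release/biopax-level3.owl#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
ET.register_namespace("bp", BP)
ET.register_namespace("rdf", RDF)

# One site writes the record...
root = ET.Element(f"{{{RDF}}}RDF")
pathway = ET.SubElement(root, f"{{{BP}}}Pathway", {f"{{{RDF}}}ID": "glycolysis"})
name = ET.SubElement(pathway, f"{{{BP}}}displayName")
name.text = "Glycolysis"
document = ET.tostring(root, encoding="unicode")

# ...and another site parses the same document and recovers the pathway.
reparsed = ET.fromstring(document)
recovered = reparsed.find(f"{{{BP}}}Pathway/{{{BP}}}displayName").text
print(recovered)  # Glycolysis
```

The point is not the particular tags but the shared schema: as long as both wikis agree on one exchange vocabulary, data entered once can flow to every appropriate site.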
Interoperability is only part of the equation, however. Few scientists will contribute to these sites out of altruism. They need tangible incentives — starting with a real benefit to their day-to-day research.
Giving them that is definitely a work in progress, says van Iersel. The wiki architecture offers some possibilities. “For example, you can sign up to be e-mailed whenever a change is made to a page you’re interested in,” he says. So a researcher could immediately be alerted to any new findings in an area he or she is working on, not to mention the existence of potential collaborators (or rivals), without having to wait for a paper to come out.
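The watch-list feature van Iersel describes is essentially a publish-subscribe mechanism: users register interest in a page, and each edit fans out a notification. A toy sketch, with names (`WikiPage`, `watch`, `edit`) that are illustrative rather than MediaWiki's actual API:

```python
# Hypothetical sketch of a wiki watch list: subscribers to a page are
# notified whenever anyone edits it.
class WikiPage:
    def __init__(self, title):
        self.title = title
        self.watchers = []  # e-mail addresses of subscribed researchers

    def watch(self, email):
        self.watchers.append(email)

    def edit(self, author, summary):
        # A real wiki would send e-mail here; this sketch just returns
        # the notifications that would go out.
        return [(email, f"{self.title} changed by {author}: {summary}")
                for email in self.watchers]

page = WikiPage("Apoptosis pathway")
page.watch("researcher@example.org")
notices = page.edit("j.smith", "added new caspase interaction")
print(notices[0][1])  # Apoptosis pathway changed by j.smith: added new caspase interaction
```

This is how a researcher could learn of new findings (or of potential collaborators and rivals) the moment an edit lands, rather than months later when a paper appears.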
There will always be some hypercompetitive fields in which people will keep their work under wraps for fear of getting scooped. But the hope is that for most researchers, the win–win dynamics of real-time data sharing will prevail. “Community annotation supports the natural process in which people form intellectual networks around topics,” says Mons. “The system tells me, ‘Hey — if you’re interested in ABC, you’d better look at XYZ, as well.’ And that will become part of the workflow of a scientist’s life.”
Academic culture being what it is, however, the wiki sites will have to crack the credit-assignment problem, and provide some way for scientists’ efforts there to be identified, recognized, cited and shown to funding agencies and tenure committees. Without a solution, says Hu, wiki-based community annotation will get nowhere. “Everybody gets excited by the idea,” he says, “but then it always falls off the table, because it’s not one of the things that pays the rent.” Pico couldn’t agree more. At the San Diego workshop, “we had break-out sessions, lunches, lots of brainstorming trying to think of metrics for scientists to quantify their activity at these sites,” he says.
Perhaps the most thoroughly worked out demonstration of how credit assignment could function in a wiki context is WikiGenes (not to be confused with Gene Wiki), created by Massachusetts Institute of Technology computer scientist Robert Hoffmann (see R. Hoffmann Nature Genet. 40, 1047–1051; 2008). Like Wikipedia, the WikiGenes site consists of articles that are collaboratively written and edited by the users. Unlike Wikipedia, however, WikiGenes links every piece of text directly to its author. (In principle, a user could find that information on Wikipedia by tracking back through every previous version of an article, but in practice this rapidly becomes unworkable.) A single click leads to an automatically constructed page for that author, which lists all his or her contributions. Registered users have the option to do a one-click rating of each contribution, thus providing a fine-grained community peer review.
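The WikiGenes model described above reduces to a simple data structure: every fragment of text carries its author, and registered users can attach a one-click rating, so an author page is just a query over the contribution log. A minimal sketch of that bookkeeping, with all names and data invented for illustration:

```python
# Hypothetical sketch of per-contribution credit assignment: each text
# fragment records its author and collects one-click ratings (1 = up,
# 0 = down), and an author page aggregates them.
contributions = [
    {"author": "rhoffmann", "text": "TP53 regulates apoptosis.", "ratings": [1, 1, 1]},
    {"author": "rhoffmann", "text": "See also MDM2.", "ratings": [1, 0]},
    {"author": "anon42", "text": "Unsourced claim.", "ratings": [0]},
]

def author_page(author):
    """Collect one author's contributions and their mean community rating."""
    own = [c for c in contributions if c["author"] == author]
    total = sum(sum(c["ratings"]) for c in own)
    count = sum(len(c["ratings"]) for c in own)
    return {"author": author,
            "contributions": [c["text"] for c in own],
            "mean_rating": round(total / count, 2)}

page = author_page("rhoffmann")
print(page["mean_rating"])  # 4 positive out of 5 ratings -> 0.8
```

Because every fragment is attributed at write time, the "author page" is cheap to build — unlike Wikipedia, where recovering authorship means replaying the entire revision history.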
Hat tip: Science Commons blog.