Can Teamwork Solve One Of Psychology’s Biggest Problems?


Psychologist Christopher Chartier admits to a case of “physics envy.” That field boasts numerous projects on which international research teams come together to tackle big questions. Just think of CERN’s Large Hadron Collider or LIGO, which recently detected gravitational waves for the first time. Both are huge collaborations that study problems too big for one group to solve alone. Chartier, a researcher at Ashland University, doesn’t think massively scaled group projects should only be the domain of physicists. So he’s starting the “Psychological Science Accelerator,” which has a simple idea behind it: Psychological studies will take place simultaneously at multiple labs around the globe. Through these collaborations, the research will produce much bigger data sets with a far more diverse pool of study subjects than if it were done in just one place.

The accelerator approach eliminates two problems that can contribute to psychology’s much-discussed reproducibility problem: the persistent finding that many published results don’t hold up when the studies are repeated. It removes both small sample sizes and the so-called WEIRD samples problem (short for Western, educated, industrialized, rich and democratic), which is what happens when studies rely on a very particular population — like relatively wealthy college students from Western countries — that may not represent the world at large. In the process, the program could make important contributions to what Simine Vazire, a psychologist at the University of California, Davis, calls psychology’s credibility revolution — the push for openness, replicability and methodological rigor. “It’s good science to have larger and more diverse samples,” she said.

So far, the project has enlisted 183 labs on six continents. The idea is to create a standing network of researchers who are available to consider and potentially take part in study proposals, Chartier said. Not every lab has to participate in any given study, but having so many teams in the network ensures that approved studies will have multiple labs conducting their research.

Here’s how the studies are chosen: After a researcher submits a proposal, a small selection committee reviews it, taking into consideration things like feasibility and the potential impact the research might have on the field. Studies that are selected go through a collaborative process in which researchers hammer out protocols and commit to publish a research plan in advance — a process known as preregistration — to make the experiment’s methods and analysis transparent. This process acts as a sort of pre-publication peer review, in which researchers get feedback and suggestions on methodology before the study begins.

The idea isn’t to pump out a bunch of studies, but to produce a lot of data, Chartier said: “Everything is open and transparent from the start, so what we’re going to end up with is a really solid data set on a specific hypothesis.”

Two studies have been approved so far.

The first, proposed by Lisa DeBruine and Ben Jones at the University of Glasgow, will test whether a model of how people evaluate human faces (based on perceived trustworthiness and dominance) applies across global regions and cultures. Chartier anticipates that the experiment will involve close to 100 labs and begin early next year.

Curtis Phills, a psychologist at the University of North Florida, proposed the project’s second study, which will test whom people think of when they think of a social group. Previous research has found that “when people think about black people, at least in America, they tend to think of a black man,” Phills said. That matters, he said, because if stereotypes of a racial group focus on men, for example, then efforts to overturn those stereotypes might overlook women. Phills submitted his proposal to the accelerator because he “really wanted to be confident” that the effect is real. “I found it in my lab, but I want to know that it’s a real thing before I go out there telling the world about it,” he said.

Previous work has shown that small studies are prone to false results, and Chartier said the accelerator is likely to turn up some null results, finding that the intervention or theory being tested doesn’t work or doesn’t hold. “I wouldn’t be disappointed if we reveal a series of dead ends with some of our initial investigations — that would be quite productive,” he said. The idea is to root out preliminary effects that apply in only very narrow circumstances before researchers waste too much time chasing them.
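The statistics behind that concern are easy to demonstrate. The simulation below is an illustrative sketch, not anything from the accelerator itself: the true effect size, the per-group sample sizes and the use of a two-sample t-test are all assumptions chosen for illustration. It shows that small studies usually miss a real but modest effect, and that the few small studies that do reach statistical significance exaggerate it.

```python
# Illustrative sketch: why small studies are prone to misleading results.
# Assumptions (not from the article): a true effect of 0.2 standard
# deviations, a two-sample t-test at p < .05, and 10,000 simulated studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.2        # small but real effect, in standard-deviation units
N_SIMULATIONS = 10_000

def simulate(n_per_group):
    """Run simulated two-group studies; return statistical power and the
    average observed effect among the studies that reached p < .05."""
    significant_effects = []
    hits = 0
    for _ in range(N_SIMULATIONS):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
        t, p = stats.ttest_ind(treated, control)
        if p < 0.05 and t > 0:  # "significant" and in the expected direction
            hits += 1
            significant_effects.append(treated.mean() - control.mean())
    power = hits / N_SIMULATIONS
    inflated = np.mean(significant_effects) if significant_effects else float("nan")
    return power, inflated

# One small lab's sample versus a pooled, multi-lab-scale sample.
for n in (20, 50, 2000):
    power, inflated = simulate(n)
    print(f"n={n:>5} per group: power={power:.0%}, "
          f"mean significant effect={inflated:.2f} (true effect {TRUE_EFFECT})")
```

In this toy setup, with 20 subjects per group only about one simulated study in ten detects the real effect, and the ones that do overstate it by roughly a factor of three; at a pooled scale of thousands of subjects, nearly every study detects it and the estimate lands close to the true value.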

That’s a clear benefit to science, but it could create some risks for individual researchers, said Brian Nosek, executive director of the Center for Open Science. Right now, the incentive system in academic research focuses on individual contributions, which can mean that researchers seeking tenure or promotions are better rewarded if they publish their own small studies (instead of being one of a large number of authors on a larger paper), even if those individual studies represent smaller advances for science. But Nosek said the accelerator has the potential to “help to spur change in the reward systems rather than just waiting for them to change on their own.”

I asked Phills how he’d feel if his study didn’t find the effect he’s expecting. “I’ll probably cry for a while,” he said. “But at least then I’ll know, so instead of spending more time going down this road, I can focus on more fruitful lines of research.”