Jan 8, 2014

When Open Science is Hard Science

by

When it comes to opening up your work there is, ironically, a bit of a secret. Here it is: being open - in open science, open source software, or any other open community - can be hard. Sometimes it can be harder than being closed.

In an effort to attract more people to the cause, advocates of openness tend to tout its benefits. Said benefits are bountiful: increased collaboration and dissemination of ideas, transparency leading to more frequent error checking, improved reproducibility, easier meta-analysis, and greater diversity in participation, just to name a few.

But there are downsides, too. One of those is that it can be difficult to do your research openly. (Note here that I mean well and openly. Taking the full contents of your hard drive and dumping it on a server somewhere might be technically open, but it’s not much use to anyone.)

How is it hard to open up your work? And why?

Closed means privacy.

In the privacy of my own home, I seldom brush my hair. Sometimes I spend all day in my pajamas. I leave my dirty dishes on the table and eat ice cream straight out of the tub. But when I have visitors, or when I’m going out, I make sure to clean up.

In the privacy of a closed access project, you might take shortcuts. You might recruit participants from your own 101 class, or process your data without carefully documenting which steps you took. You’d never intentionally do something unethical, but you might get sloppy.

Humans are social animals. We try to be more perfect for each other than we do for ourselves. This makes openness better, but it also makes it harder.

Two heads need more explanation than one.

As I mentioned above, taking all your work and throwing it online without organization or documentation is not very helpful. There’s a difference between access and accessibility. To create a truly open project, you need to be willing to explain your research to those trying to understand it.

There are numerous routes towards sharing your work, and the most open projects take more than one. You can create stellar documentation of your project. You can point people towards background material, finding good explanations of the way your research methodology was developed or the math behind your data analysis or how the code that runs your stimulus presentation works. You can design tutorials or trainings for people who want to run your study. You can encourage people to ask questions about the project, and reply publicly. You can make sure to do all the above for people at all levels - laypeople, students, and participants as well as colleagues.

Even closed science is usually collaborative, so hopefully your project is decently well documented. But making it accessible to everyone is a project in itself.

New ideas and tools need to be learned.

As long as closed is the default, we’ll need to learn new skills and tools in the process of becoming open, such as version control, format conversion and database management.

These skills aren’t unique to working openly. And if you have a good network of friends and colleagues, you can lean on them to supplement your own expertise. But the fact remains that “going open” isn’t as easy as flipping a switch. Unless you’re already well-connected and well-informed, you’ll have a lot to learn.

People can be exhausting.

Making your work open often means dealing with other people - and not always the people you want to deal with. There are the people who mean well, but end up confusing, misleading, or offending you. There are the people who don’t mean well at all. There are the discussions that go off in unproductive directions, the conversations that turn into conflicts, the promises that get forgotten.

Other people are both a joy and a frustration, in many areas of life beyond open science. But the nature of openness ensures you’ll get your fair share. This is especially true of open science projects that are explicitly trying to build community.

It can be all too easy to overlook this emotional labor, but it’s work - hard work, at that.

There are no guarantees.

For all the effort you put into opening up your research, you may find no one else is willing to engage with it. There are plenty of open source software projects with no forks or new contributors, open science articles that are seldom downloaded, science wikis that remain mostly empty, and open government tools and datasets that no one uses.

Open access may increase impact on the whole, but there are no promises for any particular project. It’s a sobering prospect to someone considering opening up their research.

How can we make open science easier?

We can advocate for open science while acknowledging the barriers to achieving it. And we can do our best to lower those barriers:

Forgive imperfections. We need to create an environment where mistakes are routine and failures are expected - only then will researchers feel comfortable exposing their work to widespread review. That’s a tall order in the cutthroat world of academia, but we can begin with our own roles as teachers, mentors, reviewers, and internet commentators. Be a role model: encourage others to review your work and point out your mistakes.

Share your skills as well as your research. Talk with colleagues about your experiences opening up your research. Host lab meetings, department events, and conference panels to discuss the practical difficulties. If a training, website, or individual helped you understand some skill or concept, recommend it widely. Talking about the individual steps will help the journey seem less intimidating - and will give others a map for how to get there.

Recognize the hard work of others with words and, if you can, financial support. Organization, documentation, mentorship, community management. These are areas that often get overlooked when it comes to celebrating scientific achievement - and allocating funding. Yet many open science projects would fail without leadership in these areas. Contribute what you can and support others who take on these roles.

Collaborate. Open source advocates have been creating tools to help share the work involved in opening research - there’s Software Carpentry, the Open Science Framework, Sage Bionetworks, and Research Compendia, just to name a few. But beyond sharing tools, we can share time and resources. Not every researcher will have the skillset, experience, or personality to quickly and easily open up their work. Sharing efforts across labs, departments and even schools can lighten the load. So can open science specialists, if we create a scientific culture where these specialists are trained, utilized and valued.

We can and should demand open scientific practices from our colleagues and our institutions. But we can also provide guidelines, tools, resources and sympathy. Open science is hard. Let’s not make it any harder.

Jan 1, 2014

Timeline of Notable Open Science Events in 2013 - Psychology

by

Happy New Year! New Year’s is a great time for reflection and resolution, and when I reflect on 2013, I view it with an air of excitement and promise. As a social psychologist, I celebrated with many of my colleagues in Washington, DC, at the 25th anniversary of the Association for Psychological Science. There were many celebrations, including an ’80s-themed dance night at the Convention. However, this year was also marred by the “Crisis of Confidence” in psychological and broader sciences that has been percolating since the turn of the 21st century. Our timeline begins the year with Perspectives on Psychological Science’s special issue dedicated to addressing this Crisis. Rather than focusing on the problems, papers in this issue suggested solutions, and many of those suggestions emerged as projects in 2013. This timeline focuses on these many Open Science Collaboration successes and initiatives and offers a glimpse at the activity directed at reaching the Scientific Utopia envisioned by so many in the OSC.

Maybe when APS celebrates its 50th Anniversary, it will also mark the 25th Anniversary of the year that the tide turned on the bad practices that had led to the “Crisis of Confidence”. Perhaps, in addition to a ’13-themed band playing Lorde’s “Royals” or Imagine Dragons’ “Demons”, there will be a theme reflecting on changing science practices. With the COS celebrating a 25th anniversary of its own, share your memories of the important events from 2013 with us.

These posts reflect a limited list of psychology-related events that one person noticed. We invite you to add other notable events that you feel are missing from this list, particularly in other scientific areas. Add a comment below with information about any research projects aimed at replication across institutions or initiatives directed at making science practices more transparent.

View the timeline!

Dec 18, 2013

Researcher Degrees of Freedom in Data Analysis

by

The enormous number of options available for modern data analysis is both a blessing and a curse. On one hand, researchers have specialized tools for any number of complex questions. On the other hand, we’re also faced with a staggering number of equally viable choices, often without any clear-cut guidelines for deciding between them. For instance, I just popped open SPSS statistical software and counted 18 different ways to conduct post-hoc tests for a one-way ANOVA. Some choices are clearly inferior (e.g., the LSD test doesn’t adjust p-values for multiple comparisons), but it’s possible to defend the use of many of the available options. These ambiguous choice points are sometimes referred to as researcher degrees of freedom.

In theory, researcher degrees of freedom shouldn’t be a problem. More choice is better, right? The problem arises from two interconnected issues: (a) ambiguity as to which statistical test is most appropriate and (b) an incentive system where scientists are rewarded with publications, grants, and career stability when their p-values fall below the revered p < .05 criterion. So, perhaps unsurprisingly, when faced with a host of ambiguous options for data analysis, most people settle on the one that achieves statistically significant results. Simmons, Nelson, and Simonsohn (2011) argue that this undisclosed flexibility in data analysis allows people to present almost any data as “significant,” and they call for 10 simple disclosure guidelines for authors and reviewers - which, if you haven’t read them yet, are worth checking out. In this post, I will discuss a few guidelines of my own for conducting data analysis in a way that strives to overcome our inherent tendency to be self-serving.

  1. Make as many data analytic decisions as possible before looking at your data. Review the statistical literature and decide which statistical test(s) will be best before looking at your collected data. Continue to use those tests until enough evidence emerges to change your mind. The important thing is that you make these decisions before looking at your data. Once you start playing with the actual data, your self-serving biases will start to kick in. Do not underestimate your capacity for self-deception: self-serving biases are powerful, pervasive, and apply to virtually everyone. Consider pre-registering your data analysis plan (perhaps using the Open Science Framework) to keep yourself honest and to convince future reviewers that you aren’t exploiting researcher degrees of freedom.

  2. When faced with a situation where there are too many equally viable choices, run a small number of the best choices, and report all of them. In this case, decide on 2-5 different tests ahead of time. Report the results of all choices, and make a tentative conclusion based on whether the majority of these tests agree. For instance, when determining model fit in structural equation modeling, there are many different methods you might use. If you can’t figure out which method is best by reviewing the statistical literature - it’s not entirely clear, and statisticians disagree about as often as any other group of scientists - then report the results of all tests, and make a conclusion if they all converge on the same solution. When they disagree, make a tentative conclusion based on the majority of tests that agree (e.g., 2 of 3 tests come to the same conclusion). For the record, I currently use CFI, TLI, RMSEA, and SRMR in my own work, and use these even if other fit indices provide more favorable results.

  3. When deciding on a data analysis plan after you’ve seen the data, keep in mind that most researcher degrees of freedom have minimal impact on strong results. For any number of reasons, you might find yourself deciding on a data analysis plan after you’ve played around with the data for a while. At the end of the day, strong data will not be influenced much by researcher degrees of freedom. For instance, in a study with high statistical power, results should look much the same regardless of whether you exclude outliers, transform them, or leave them in the data. Simmons et al. (2011) specifically recommend that results be presented (a) with and without covariates, and (b) with and without specific data points excluded, if any were removed. Again, the general idea is that strong results will not change much when you alter researcher degrees of freedom. Thus, I again recommend analyzing the data in a few different ways and looking for convergence across all methods (as sketched below) when you’re developing a data analysis plan after seeing the data. This sets the bar higher and helps combat your natural tendency to report just the one analysis that “works.” When minor data analytic choices drastically change the conclusions, this should be a warning sign that your solution is unstable and the results are probably not trustworthy. The number one reason for an unstable solution is low statistical power. Since you hopefully had a strict data collection end date, the only viable option when results are unstable is to replicate them in a second, more highly powered study using the same data analytic approach.
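To make guidelines 2 and 3 concrete, here is a minimal sketch in Python using simulated data and invented variable names (nothing here comes from a real study): the same focal effect is estimated under a small, pre-specified set of analytic choices, and the results are checked for convergence.

```python
import numpy as np
import statsmodels.api as sm

# Simulated example data; the effect sizes and variable names are invented.
rng = np.random.default_rng(0)
n = 200
covariate = rng.normal(size=n)
predictor = rng.normal(size=n)
outcome = 0.4 * predictor + 0.3 * covariate + rng.normal(size=n)

def focal_effect(y, x, covars=None):
    """OLS fit; returns the coefficient and p-value for the focal predictor."""
    X = x.reshape(-1, 1) if covars is None else np.column_stack([x, covars])
    res = sm.OLS(y, sm.add_constant(X)).fit()
    return res.params[1], res.pvalues[1]

# A small, pre-specified set of analytic choices (researcher degrees of freedom).
keep = np.abs(predictor - predictor.mean()) < 3 * predictor.std()  # simple outlier rule
specifications = {
    "no covariate, all data": focal_effect(outcome, predictor),
    "with covariate, all data": focal_effect(outcome, predictor, covariate),
    "no covariate, outliers removed": focal_effect(outcome[keep], predictor[keep]),
    "with covariate, outliers removed": focal_effect(outcome[keep], predictor[keep], covariate[keep]),
}

for label, (b, p) in specifications.items():
    print(f"{label:34s} b = {b:+.3f}, p = {p:.4f}")
# If the estimates agree across specifications, the result is robust to these
# degrees of freedom; if they diverge, treat the finding as unstable.
```

The point is not the particular model, but the habit: decide the specifications in advance, run all of them, and report whether they converge.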

At the end of the day, there is no “quick-fix” for the problem of self-serving biases during data analysis so long as the incentive system continues to reward novel, statistically significant results. However, by using the tips in this article (and elsewhere) researchers can focus on finding strong, replicable results by minimizing the natural human tendency to be self-serving.

References

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366. doi:10.1177/0956797611417632

Dec 13, 2013

Chasing Paper, Part 3

by

This is part three of a three part post brainstorming potential improvements to the journal article format. Part one is here, part two is here.

The classic journal article is only readable by domain experts.

Journal articles are currently written for domain experts. While novel concepts or terms are usually explained, a vast array of background knowledge is assumed, and jargon is the rule, not the exception. While this makes for quick reading for domain experts, it can make for a difficult slog for everyone else.

Why is this a problem? For one thing, it prevents interdisciplinary collaboration. Researchers will not make a habit of reading outside their field if it takes hours of painstaking, self-directed work to comprehend a single article. It also discourages public engagement. While science writers do admirable work boiling hard concepts down to their comprehensible cores, many non-scientists want to actually read the articles, and get discouraged when they can’t.

While opaque scientific writing exists in every format, new technologies present options to translate and teach. Jargon could be linked to a glossary or other reference material. You could be given a plain English explanation of a term when your mouse hovers over it. Perhaps each article could have multiple versions - for domain experts, for other scientists, and for laypeople.

Of course, the ability to write accessibly is a skill not everyone has. Luckily, any given paper would mostly use terminology already introduced in previous papers. If researchers could easily credit the teaching and popularization work done by others, they could acknowledge the value of those contributions while at the same time making their own work accessible.

The classic journal article has no universally-agreed upon standards.

Academic publishing, historically, has been a distributed system. Currently, the top three publishers still account for less than half (42%) of all published articles (McGuigan and Russell, 2008). While certain format and content conventions are shared among publishers, generally speaking it’s difficult to propagate new standards, and even harder to enforce them. Not only do standards vary, they are frequently hidden, with most of the review and editing process taking place behind closed doors.

There are benefits to decentralization, but the drawbacks are clear. Widespread adoption of new standards, such as Simmons et al.’s 21 Word Solution or open science practices, depends on the hard work and high status of those advocating for them. How can the article format be changed to better accommodate changing standards, while still retaining individual publishers’ autonomy?

One option might be to create a new section of each journal article, a free-form field where users could record whether an article met this or that standard. Researchers could then independently decide what standards they wanted to pay attention to. While this sounds messy, if properly implemented this feature could be used very much like a search filter, yet would not require the creation or maintenance of a centralized database.
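As a rough sketch of how such a free-form standards field might be used like a search filter, here is a hypothetical example in Python; the article titles and standard labels are invented for illustration and do not refer to any existing system.

```python
# Hypothetical records: each article carries a free-form field listing the
# standards that readers or reviewers recorded it as meeting.
articles = [
    {"title": "Study A", "standards_met": ["21 Word Solution", "open data"]},
    {"title": "Study B", "standards_met": ["open data", "preregistered"]},
    {"title": "Study C", "standards_met": []},
]

def meets(article, required):
    """True if the article's standards field lists every standard the reader requires."""
    return set(required) <= set(article["standards_met"])

# Each researcher independently decides which standards they care about.
wanted = ["open data"]
print([a["title"] for a in articles if meets(a, wanted)])  # ['Study A', 'Study B']
```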

A different approach is already being embraced: an effort to make the standards that currently exist more transparent by bringing peer review out into the open. Open peer review allows readers to view an article’s pre-publication history, including the authorship and content of peer reviews, while public peer review allows the public to participate in the review process. However, these methods have yet to be generally adopted.

*

It’s clear that journal articles are already changing. But they may not be changing fast enough. It may be better to forgo the trappings of the journal article entirely, and seek a new system that more naturally encourages collaboration, curation, and the efficient use of the incredible resources at our disposal. With journal articles commonly costing more than $30 each, some might jump at the chance to leave them behind.

Of course, it’s easy to play “what if” and imagine alternatives; it’s far harder to actually implement them. And not all innovations are improvements. But with over a billion dollars spent on research each day in the United States, with over 25,000 journals in existence, and over a million articles published each year, surely there is room to experiment.

Bibliography

Budd, J.M., Coble, Z.C. and Anderson, K.M. (2011) Retracted Publications in Biomedicine: Cause for Concern.

Wright, K. and McDaid, C. (2011). Reporting of article retractions in bibliographic databases and online journals. J Med Libr Assoc. 2011 April; 99(2): 164–167.

McGuigan, G.S. and Russell, R.D. (2008). The Business of Academic Publishing: A Strategic Analysis of the Academic Journal Publishing Industry and its Impact on the Future of Scholarly Publishing. Electronic Journal of Academic and Special Librarianship. Winter 2008; 9(3).

Simmons, J.P., Nelson, L.D. and Simonsohn, U.A. (2012) A 21 Word Solution.

Dec 12, 2013

Chasing Paper, Part 2

by

This is part two of a three part post brainstorming potential improvements to the journal article format. Part one is here, part three is here.

The classic journal article format is not easily updated or corrected.

Scientific understanding is constantly changing as phenomena are discovered and mistakes uncovered. The classic journal article, however, is static. When a serious flaw in an article is found, the best a paper-based system can do is issue a retraction, and hope that a reader going through past issues will eventually come across the change.

Surprisingly, retractions and corrections continue to go mostly unnoticed in the digital era. Studies have shown that retracted papers go on to receive, on average, more than 10 post-retraction citations, with less than 10% of those citations acknowledging the retraction (Budd et al, 2011). Why is this happening? While many article databases such as PubMed provide retraction notices, the articles themselves are often not amended. Readers accessing papers directly from publishers’ websites, or from previously saved copies, can easily miss the notice. A case study of 18 retracted articles found several that the authors classified as “high risk of missing [the] notice”, with no notice given in the text of the PDF or HTML copies themselves (Wright and McDaid, 2011). It seems likely that corrections have even more difficulty being seen and acknowledged by subsequent researchers.

There are several technological solutions which can be tried. One promising avenue would be the adoption of version control. Also called revision control, this is a way of tracking all changes made to a project. This technology has been used for decades in computer science and is becoming more and more popular - Wikipedia and Google Docs, for instance, both use version control. Citations for a paper could reference the version of the paper then available, but subsequent readers would be notified that a more recent version could be viewed. In addition to making it easy to see how articles have been changed, adopting such a system would acknowledge the frequency of retractions and corrections and the need to check for up to date information.

Another potential tool would be an alert system. When changes are made to an article, the authors of all articles which cite it could be notified. However, this would require the maintenance of up-to-date contact information for authors, and the adoption of communications standards across publishers (something that has been accomplished before with initiatives like CrossRef). A more transformative approach would be to view papers not as static documents but as ongoing projects that can be updated and contributed to over time. Projects could be tracked through version control from their very inception, allowing for a kind of pre-registration. Replications and new analyses could be added to the project as they’re completed. The most insightful questions and critiques from the public could lead to changes in new versions of the article.

The classic journal article only recognizes certain kinds of contributions.

When journal articles were first developed in the 1600s, the idea of crediting an author or authors must have seemed straightforward. After all, most research was being done by individuals or very small groups, and there were no such things as curriculum vitae or tenure committees. Over time, academic authorship has become the single most important factor in determining career success for individual scientists. The limitations of authorship can therefore have an incredible impact on scientific progress.

There are two major problems with authorship as it currently functions, and they are two sides of the same coin. Authorship does not tell you what, precisely, each author did on a paper. And authorship does not tell you who, precisely, is responsible for each part of a paper. Currently, the authorship model provides only a vague idea of who is responsible for a paper. While this is sometimes elaborated upon briefly in the footnotes, or mentioned in the article, more often readers employ simple heuristics. In psychology, the first author is believed to have led the work, the last author to have provided physical and conceptual resources for the experiment, and any middle authors to have contributed in an unknown but significant way. This is obviously not an ideal way to credit people, and it often leads to disputes, with first authorship sometimes misattributed. It has grown increasingly impractical as multi-author papers have become more and more common. What does authorship on a 500-author paper even mean?

The situation is even worse for people whose contributions are not awarded with authorship. While contributions may be mentioned in the acknowledgements or cited in the body of the paper, neither of these have much impact when scientists are applying for jobs or up for tenure. This gives them little motivation to do work which will not be recognized with authorship. And such work is greatly needed. The development of tools, the collection and release of open data sets, the creation of popularizations and teaching materials, and the deep and thorough review of others’ work - these are all done as favors or side projects, even though they are vital to the progress of research. How can new technologies address these problems? There have been few changes made in this area, perhaps due to the heavy weight of authorship in scientific life, although there are some tools like Figshare which allow users to share non-traditional materials such as datasets and posters in citable (and therefore creditable) form. A more transformative change might be to use the version control system mentioned above. Instead of tracking changes to the article from publishing onwards, it could follow the article from its beginning stages. In that way, each change could be attributed to a specific person.

Another option might simply be to describe contributions in more detail. Currently if I use your methodology wholesale, or briefly mention a finding of yours, I acknowledge you in the same way - a citation. What if, instead, all significant contributions were listed? Although space is not a constraint with digital articles, the human attention span remains limited, and so it might be useful to create common categories for contribution, such as reviewing the article, providing materials, doing analyses, or coming up with an explanation for discussion.

Two other problems are worth mentioning briefly. The first is ghost authorship, where substantial contributions to the running of a study or the preparation of a manuscript go unacknowledged. This is frequently done in industry-sponsored research to hide conflicts of interest. If journal articles used a format where every contribution was tracked, ghost authorship would be impossible. The second is the assignment of contact authors, the researchers on a paper to whom readers are invited to direct questions. Contact information can become outdated fairly quickly, causing access to data and materials to be lost; if contact information can be changed, or responsibility passed on to a new person, such loss can be prevented.

Dec 11, 2013

Chasing Paper, Part 1

by

This is part one of a three part post. Parts two and three have now been posted.

The academic paper is old - older than the steam engine, the pocket watch, the piano, and the light bulb. The first journal, Philosophical Transactions, was published on March 6th, 1665. Now that doesn’t mean that the journal article format is obsolete - many inventions much older are still in wide use today. But after a third of a millennium, it’s only natural that the format needs some serious updating.

When brainstorming changes, it may be useful to think of the limitations of ink and paper. From there, we can consider how new technologies can improve or even transform the journal article. Some of these changes have already been widely adopted, while others have never even been debated. Some are adaptive, using the greater storage capacity of computing to extend the functions of the classic journal article, while others are transformative, creating new functions and features only available in the 21st century.

The ideas below are suggestions, not recommendations - it may be that some aspects of the journal article format are better left alone. But we all benefit from challenging our assumptions about what an article is and ought to be.

The classic journal article format cannot convey the full range of information associated with an experiment.

Until the rise of modern computing, there was simply no way for researchers to share all the data they collected in their experiments. Researchers were forced to summarize: to gloss over the details of their methods and the reasoning behind their decisions and, of course, to provide statistical analyses in the place of raw data. While fields like particle physics and genetics continue to push the limits of memory, most experimenters now have the technical capacity to share all of their data.

Many journals have taken to publishing supplemental materials, although these rarely encompass the entirety of the data collected, or enough methodological detail to allow for independent replication. There are plenty of explanations for this slow adoption, including ethical considerations around human subjects data, the potential to patent methods, or the cost to journals of hosting these extra materials. But these are obstacles to address, not reasons to give up. The potential benefits are enormous: What if every published paper contained enough methodological detail that it could be independently replicated? What if every paper contained enough raw data that it could be included in meta-analysis? How much meta-scientific work is never undertaken because it's dependent on getting dozens or hundreds of contact authors to return your emails, and on universities to properly store data and materials?

Providing supplemental material, no matter how extensive, is still an adaptive change. What might a transformative change look like? Elsevier’s Article of the Future project attempts to answer that question with new, experimental formats that include videos, interactive models, and infographics. These designs are just the beginning. What if articles allowed readers to actually interact with the data and perform their own analyses? Virtual environments could be set up, lowering the barrier to independent verification of results. What if authors reported when they made questionable methodological decisions, and allowed readers, where possible, to see the results when a variable was not controlled for, or a sample was not excluded?

The classic journal article format is difficult to organize, index or search.

New technology has already transformed the way we search the scientific literature. Where before researchers were reliant on catalogues and indexes from publishers, and used abstracts to guess at relevance, databases such as PubMed and Google Scholar allow us to find all mentions of a term, tool, or phenomenon across vast swathes of articles. While searching databases is itself a skill, it's one that allows us to search comprehensively and efficiently, and gives us more opportunities to explore.

Yet old issues of organization and curation remain. Indexes used to speed the slow process of skimming through physical papers. Now they’re needed to help researchers sort through the abundance of articles constantly being published. With tens of millions of journal articles out there, how can we be sure we’re really accessing all the relevant literature? How can we compare and synthesize the thousands of results one might get on a given search?

Special kinds of articles - reviews and meta-analyses - have traditionally helped us synthesize and curate information. As discussed above, new technologies can help make meta-analyses more common by making it easier for researchers to access information about past studies. We can further improve the search experience by creating more detailed metadata. Metadata, in this context, is the information attached to an article which lets us categorize it without having to read the article itself. Currently, fields like title, author, date, and journal are quite common in databases. More complicated fields are less often adopted, but you can find metadata on study type, population, level of clinical trial (where applicable), and so forth. What would truly comprehensive metadata look like? Is it possible to store the details of experimental structure or analysis in machine-readable format - and is that even desirable?
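To make the question concrete, here is one hypothetical sketch in Python of what richer, machine-readable metadata for a single article might contain; the field names and categories are invented for illustration, not drawn from any existing standard.

```python
# A hypothetical machine-readable metadata record for one article.
article_metadata = {
    # Fields already common in databases
    "title": "Example article",
    "authors": ["A. Researcher", "B. Researcher"],
    "year": 2013,
    "journal": "Example Journal",
    # Less common, more structured fields of the kind discussed above
    "study_type": "randomized experiment",
    "population": {"species": "human", "n": 120, "sampling": "undergraduate volunteers"},
    "design": {"conditions": 2, "blinding": "double", "preregistered": True},
    "analysis": {"model": "2x2 ANOVA", "alpha": 0.05},
    "materials": {"open_data": True, "open_code": False},
}

# With structure like this, a search could filter on design features rather
# than keywords alone, e.g. all preregistered human studies with open data:
def preregistered_open_human(record):
    return (record["design"]["preregistered"]
            and record["population"]["species"] == "human"
            and record["materials"]["open_data"])

print(preregistered_open_human(article_metadata))  # True
```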

What happens when we reconsider not the metadata but the content itself? Most articles are structurally complex, containing literature reviews, methodological information, data, and analysis. Perhaps we might be better served by breaking those articles down into their constituent parts. What if methods, data, and analysis were always published separately, creating a network of papers that were linked but discrete? Would that be easier or harder to organize? It may be that what we need here is not a better kind of journal article, but a new way of curating research entirely.

Dec 9, 2013

New “Reviewer Statement” Initiative Aims to (Further) Improve Community Norms Toward Disclosure

by


An Open Science Collaboration -- made up of Uri Simonsohn, Etienne LeBel, Don Moore, Leif D. Nelson, Brian Nosek, and Joe Simmons -- is glad to announce a new initiative aiming to improve community norms toward the disclosure of basic methodological information during the peer-review process. Endorsed by the Center for Open Science, the initiative involves a standard reviewer statement that any peer reviewer can include in their review, requesting that authors add a statement to the paper confirming that they have disclosed all data exclusions, experimental conditions, assessed measures, and how they determined their sample sizes (following from the 21-word solution; Simmons, Nelson, & Simonsohn, 2012, 2013; see also PsychDisclosure.org; LeBel et al., 2013). Here is the statement, which is available on the Open Science Framework:

"I request that the authors add a statement to the paper confirming whether, for all experiments, they have reported all measures, conditions, data exclusions, and how they determined their sample sizes. The authors should, of course, add any additional text to ensure the statement is accurate. This is the standard reviewer disclosure request endorsed by the Center for Open Science (see http://osf.io/project/hadz3). I include it in every review."

The idea originated from the realization that as peer reviewers, we typically lack fundamental information regarding how the data was collected and analyzed, which prevents us from being able to properly evaluate the claims made in a submitted manuscript. Some reviewers interested in requesting such information, however, were concerned that such requests would make them appear selective and/or compromise their anonymity. Discussions ensued, and contributors developed a standard reviewer disclosure request statement that overcomes these concerns and allows the community of reviewers to improve community norms toward the disclosure of such methodological information across all journals and articles.

Some of the contributors, including myself, were hoping for a reviewer statement with a bit more teeth - for instance, requiring the disclosure of such information as a condition of agreeing to review an article, or requiring the re-review of a revised manuscript once the requested information has been disclosed. The team of contributors, however, ultimately decided that it would be better to start small and gain acceptance, in order to maximize the probability that the initiative has an impact in shaping community norms.

Hence, next time you are invited to review a manuscript for publication at any journal, please remember to include the reviewer disclosure statement!

References

LeBel, E. P., Borsboom, D., Giner-Sorolla, R., Hasselman, F., Peters, K. R., Ratliff, K. A., & Smith, C. T. (2013). PsychDisclosure.org: Grassroots support for reforming reporting standards in psychology. Perspectives on Psychological Science, 8(4), 424-432. doi: 10.1177/1745691613491437

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2012). A 21 word solution. Dialogue: The Official Newsletter of the Society for Personality and Social Psychology, 26(2), 4-7.

Nov 27, 2013

The State of Open Access

by

To celebrate Open Access Week last month, we asked people four questions about the state of open access and how it's changing. Here are some in depth answers from two people working on open access: Peter Suber, Director of the Harvard Office for Scholarly Communication and the Harvard Open Access Project, and Elizabeth Silva, associate editor at the Public Library of Science (PLOS).

How is your work relevant to the changing landscape of Open Access? What would be a successful outcome of your work in this area?

Elizabeth: PLOS is now synonymous with open access publishing, so it’s hard to believe that 10 years ago, when PLOS was founded, most researchers were not even aware that availability of research was a problem. We all published our best research in the best journals. We assumed our colleagues could access it, and we weren’t aware of (or didn’t recognize the problem with) the inability of people outside of the ivory tower to see this work. At that time it was apparent to the founders of PLOS, who were among the few researchers who recognized the problem, that the best way to convince researchers to publish open access would be for PLOS to become an open access publisher, and prove that OA could be a viable business model and an attractive publishing venue at the same time. I think that we can safely say that the founders of PLOS succeeded in this mission, and they did it decisively.

We’re now at an exciting time, where open access in the natural sciences is all but inevitable. We now get to work on new challenges, trying to solve other issues in research communication.

Peter: My current job has two parts. I direct the Harvard Office for Scholarly Communication (OSC), and I direct the Harvard Open Access Project (HOAP). The OSC aims to provide OA to research done at Harvard University. We implement Harvard's OA policies and maintain its OA repository. We focus on peer-reviewed articles by faculty, but are expanding to other categories of research and researchers. In my HOAP work, I consult pro bono with universities, scholarly societies, publishers, funding agencies, and governments, to help them adopt effective OA policies. HOAP also maintains a guide to good practices for university OA policies, manages the Open Access Tracking Project, writes reference pages on federal OA-related legislation, such as FASTR, and makes regular contributions to the Open Access Directory and the catalog of OA journals from society publishers.

To me, success would be making OA the default for new research in every field and language. However, this kind of success is more like a new plateau than a finish line. We often focus on the goal of OA itself, or the goal of removing access barriers to knowledge. But that's merely a precondition for an exciting range of new possibilities for making use of that knowledge. In that sense, OA is closer to the minimum than the maximum of how to take advantage of the internet for improving research. Once OA is the default for new research, we can give less energy to attaining it and more energy to reaping the benefits, for example, integrating OA texts with open data, improving the methods of meta-analysis and reproducibility, and building better tools for knowledge extraction, text and data mining, question answering, reference linking, impact measurement, current awareness, search, summary, translation, organization, and recommendation.

From the researcher's side, making OA the new default means that essentially all the new work they write, and essentially all the new work they want to read, will be OA. From the publisher's side, making OA the new default means that sustainability cannot depend on access barriers that subtract value, and must depend on creative ways to add value to research that is already and irrevocably OA.

How do you think the lack of Open Access is currently impacting how science is practiced?

Peter: The lack of OA slows down research. It distorts inquiry by making the retrievability of research a function of publisher prices and library budgets rather than author consent and internet connectivity. It hides results that happen to sit in journals that exceed the affordability threshold for you or your institution. It limits the correction of scientific error by limiting the number of eyeballs that can examine new results. It prevents the use of text and data mining to supplement human analysis with machine analysis. It hinders the reproducibility of research by excluding many who would want to reproduce it. At the same time, and ironically, it increases the inefficient duplication of research by scholars who don't realize that certain experiments have already been done.

It prevents journalists from reading the latest developments, reporting on them, and providing direct, usable links for interested readers. It prevents unaffiliated scholars and the lay public from reading new work in which they may have an interest, especially in the humanities and medicine. It blocks research-driven industries from creating jobs, products, and innovations. It prevents taxpayers from maximizing the return on their enormous investment in publicly-funded research.

I assume we're talking about research that authors publish voluntarily, as opposed to notes, emails, and unfinished manuscripts, and I assume we're talking about research that authors write without expectation of revenue. If so, then the lack of OA harms research and researchers without qualification. The lack of OA benefits no one except conventional publishers who want to own it, sell it, and limit the audience to paying customers.

Elizabeth: There is a prevailing idea that those who need access to the literature already have it; that those who have the ability to understand the content are at institutions that can afford the subscriptions. First, this ignores the needs of physicians, educators, science communicators, and smaller institutions and companies. More fundamentally, limiting access to knowledge, so that it rests in the hands of an elite 1%, is archaic, backwards, and counterproductive. There has never been a greater urgency to find solutions to problems that fundamentally threaten human existence - climate change, disease transmission, food security - and in the face of this, why would we advocate limited dissemination of knowledge? Full adoption of open access has the potential to fundamentally change the pace of scientific progress, as we make this information available to everyone, worldwide.

When it comes to issues of reproducibility, fraud or misreporting, all journals face similar issues regardless of the business model. Researchers design their experiments and collect their data long before they decide the publishing venue, and the quality of the reporting likely won’t change based on whether the venue is OA. I think that these issues are better tackled by requirements for open data and improved reporting. Of course these philosophies are certainly intrinsically linked – improved transparency and access can only improve matters.

What do you think is the biggest reason that people resist Open Access? Do you think there are good reasons for not making a paper open access?

Elizabeth: Of course there are many publishers who resist open access, which reflects a need to protect established revenue streams. In addition to large commercial publishers, there are a lot of scholarly societies whose primary sources of income are the subscriptions for the journals they publish.

Resistance from authors, in my experience, comes principally in two forms. The first is linked to the impact factor, rather than the business model. Researchers are stuck in a paradigm that requires them to publish as ‘high’ as possible to achieve career advancement. While there are plenty of high impact OA publications with which people choose to publish, it just so happens that the highest are subscription journals. We know that open access increases utility, visibility and impact of individual pieces of research, but the fallacy that a high impact journal is equivalent to high impact research persists.

The second reason cited is that the cost is prohibitive. This is a problem everyone at PLOS can really appreciate, and we very much sympathize with authors who do not have the money in their budget to pay author publication charges (APCs). However, it’s a problem that should really be a lot easier to overcome. If research institutions were to pay publication fees, rather than subscription fees, they would save a fortune; a few institutions have realized this and are paying the APCs for authors who choose to go OA. It would also help if funders could recognize publishing as an intrinsic part of the research, folding the APC into the grant. We are also moving the technology forward in an effort to reduce costs, so that savings can be passed on to authors. PLOS ONE has been around for nearly 7 years, and the fees have not changed. This reflects efforts to keep costs as low as we can. Ironically, the biggest of the pay-walled journals already charge authors to publish: for example, it can be between $500 and $1000 for the first color figure, and a few hundred dollars for each additional one; on top of this there are page charges and reprint costs. Not only is the public paying for the research and the subscription, they are paying for papers that they can’t read.

Peter: There are no good reasons for not making a paper OA, or at least for not wanting to.

There are sometimes reasons not to publish in an OA journal. For example, the best journals in your field may not be OA. Your promotion and tenure committee may give you artificial incentives to limit yourself to a certain list of journals. Or the best OA journals in your field may charge publication fees which your funder or employer will not pay on your behalf. However, in those cases you can publish in a non-OA journal and deposit the peer-reviewed manuscript in an OA repository.

The resistance of non-OA publishers is easier to grasp. But if we're talking about publishing scholars, not publishers, then the largest cause of resistance by far is misunderstanding. Far too many researchers still accept false assumptions about OA, such as these 10:

--that the only way to make an article OA is to publish it in an OA journal
--that all or most OA journals charge publication fees
--that all or most publication fees are paid by authors out of pocket
--that all or most OA journals are not peer reviewed
--that peer-reviewed OA journals cannot use the same standards and even the same people as the best non-OA journals
--that publishing in a non-OA journal closes the door on lawfully making the same article OA
--that making work OA makes it harder rather than easier to find
--that making work OA limits rather than enhances author rights over it
--that OA mandates are about submitting new work to OA journals rather than depositing it in OA repositories, or
--that everyone who needs access already has access.

In a recent article in The Guardian I corrected six of the most widespread and harmful myths about OA. In a 2009 article, I corrected 25. And in my 2012 book, I tried to take on the whole legendarium.

How has the Open Access movement changed in the last five years? How do you think it will change in the next five years?

Peter: OA has been making unmistakable progress for more than 20 years. Five years ago we were not in a qualitatively different place. We were just a bit further down the slope from where we are today.

Over the next five years, I expect more than just another five years' worth of progress as usual. I expect five years' worth of progress toward the kind of success I described in my answer to your first question. In fact, insofar as progress tends to add cooperating players and remove or convert resisting players, I expect five years' worth of compound interest and acceleration.

In some fields, like particle physics, OA is already the default. In the next five years we'll see this new reality move at an uneven rate across the research landscape. Every year more and more researchers will be able to stop struggling for access against needless legal, financial, and technical barriers. Every year, those still struggling will have the benefit of a widening circle of precedents, allies, tools, policies, best practices, accommodating publishers, and alternatives to publishers.

Green OA mandates are spreading among universities. They're also spreading among funding agencies, for example in the US, the EU, and the global south. This trend will definitely continue, especially with the support it has received from the Global Research Council, Science Europe, the G8 Science Ministers, and the World Bank.

With the exception of the UK and the Netherlands, countries adopting new OA policies are learning from the experience of their predecessors and starting with green. I've argued in many places that mandating gold OA is a mistake. But it's a mistake mainly for historical reasons, and historical circumstances will change. Gold OA mandates are foolish today in part because too few journals are OA, and there's no reason to limit the freedom of authors to publish in the journals of their choice. But the percentage of peer-reviewed journals that are OA is growing and will continue to grow. (Today it's about 30%.) Gold OA mandates are also foolish today because gold OA is much more expensive than green OA, and there's no reason to compromise the public interest in order to guarantee revenue for non-adaptive publishers. But the costs of OA journals will decline, as the growing number of OA journals compete for authors, and the money to pay for OA journals will grow as libraries redirect money from conventional journals to OA.

We'll see a rise in policies linking deposit in repositories with research assessment, promotion, and tenure. These policies were pioneered by the University of Liege, and since adopted at institutions in nine countries, and recommended by the Budapest Open Access Initiative, the UK House of Commons Select Committee on Business, Innovation and Skills, and the Mediterranean Open Access Network. Most recently, this kind of policy has been proposed at the national level by the Higher Education Funding Council for England. If it's adopted, it will mitigate the damage of a gold-first policy in the UK. A similar possibility has been suggested for the Netherlands.

I expect we'll see OA in the humanities start to catch up with OA in the sciences, and OA for books start to catch up with OA for articles. But in both cases, the pace of progress has already picked up significantly, and so has the number of people eager to see these two kinds of progress accelerate.

The recent decision that Google's book scanning is fair use means that a much larger swath of print literature will be digitized, if not in every country, then at least in the US, and if not for OA, then at least for searching. This won't open the doors to vaults that have been closed, but it will open windows to help us see what is inside.

Finally, I expect to see evolution in the genres or containers of research. Like most people, I'm accustomed to the genres I grew up with. I love articles and books, both as a reader and author. But they have limitations that we can overcome, and we don't have to drop them to enhance them or to create post-articles and post-books alongside them. The low barriers to digital experimentation mean that we can try out new breeds until we find some that carry more advantages than disadvantages for specific purposes. Last year I sketched out one idea along these lines, which I call an evidence rack, but it's only one in an indefinitely large space constrained only by the limits on our imagination.

Elizabeth: It’s starting to feel like universal open access is no longer “if” but “when”. In the next five years we will see funders and institutions recognize the importance of access and adopt policies that mandate and financially support OA; resistance will fade away, and it will simply be the way research is published. As that happens, I think the OA movement will shift towards tackling other issues in research communication: providing better measures of impact in the form of article level metrics, decreasing the time to publication, and improving reproducibility and utility of research.

Nov 20, 2013

Theoretical Amnesia

by


In the past few months, the Center for Open Science and its associated enterprises have gathered enormous support in the community of psychological scientists. While these developments are happy ones, in my view, they also cast a shadow over the field of psychology: clearly, many people think that the activities of the Center for Open Science, like organizing massive replication work and promoting preregistration, are necessary. That, in turn, implies that something in the current scientific order is seriously broken. I think that, apart from working towards improvements, it is useful to investigate what that something is. In this post, I want to point towards a factor that I think has received too little attention in the public debate; namely, the near absence of unambiguously formalized scientific theory in psychology.

Scientific theories are perhaps the most bizarre entities that the scientific imagination has produced. They have incredible properties that, if we weren’t so familiar with them, would do pretty well in a Harry Potter novel. For instance, scientific theories allow you to work out, on a piece of paper, what would happen to stuff in conditions that aren’t actually realized. So you can figure out whether an imaginary bridge will stand or collapse in imaginary conditions. You can do this by simply feeding some imaginary quantities that your imaginary bridge would have (like its mass and dimensions) to a scientific theory (say, Newton’s), and out comes a prediction on what will happen. In the more impressive cases, the predictions are so good that you can actually design the entire bridge on paper, then build it according to specifications (by systematically mapping empirical objects to theoretical terms), and then the bridge will do precisely what the theory says it should do. No surprises.

That’s how they put a man on the moon and that’s how they make the computer screen you’re now looking at. It’s all done in theory before it’s done for real, and that’s what makes it possible to construct complicated but functional pieces of equipment. This is, in effect, why scientific theory makes technology possible, and therefore this is an absolutely central ingredient of the scientific enterprise which, without technology, would be much less impressive than it is.

It’s useful to take stock here, and marvel. A good scientific theory allows you to infer what would happen to things in certain situations without creating those situations. Thus, scientific theories are crystal balls that actually work. For this reason, some philosophers of science have suggested that scientific theories should be interpreted as inference tickets. Once you’ve got the ticket, you get to sidestep all the tedious empirical work. Which is great, because empirical work is, well, tedious. Scientific theories are thus exquisitely suited to the needs of lazy people.

My field – psychology – unfortunately does not afford much of a lazy life. We don’t have theories that can offer predictions sufficiently precise to intervene in the world with appreciable certainty. That’s why there exists no such thing as a psychological engineer. And that’s why there are fields of theoretical physics, theoretical biology, and even theoretical economics, while there is no parallel field of theoretical psychology. It is a sad but, in my view, inescapable conclusion: we don’t have much in the way of scientific theory in psychology. For this reason, we have very few inference tickets – let alone inference tickets that work.

And that’s why psychology is so hyper-ultra-mega empirical. We never know how our interventions will pan out, because we have no theory that says how they will pan out (incidentally, that’s also why we need preregistration: in psychology, predictions are made by individual researchers rather than by standing theory, and you can’t trust people the way you can trust theory). The upshot is that, if we want to know what would happen if we did X, we have to actually do X. Because we don’t have inference tickets, we never get to take the shortcut. We always have to wade through the empirical morass. Always.

This has important consequences. For instance, the less theory a field has, the more it has to leave to the data. Since you can’t learn anything from data without the armature of statistical analysis, a field without theory tends to grow a thriving statistical community. Thus, the role of statistics grows as the presence of scientific theory wanes. In extreme cases, when statistics has entirely taken over, fields of inquiry can actually develop a kind of philosophical disorder: theoretical amnesia. In fields with this disorder, researchers no longer know what a theory is, which means that they can recognize neither its presence nor its absence. In such fields, a statistical model – like a factor model – can come to occupy the vacuum created by the absence of theory. I am often afraid that this is precisely what has happened with the advent of “theories” like those of general intelligence (a single-factor model) and the so-called “Big Five” of personality (a five-factor model). In fact, I am afraid that this has happened in many fields in psychology, where statistical models (which, in their barest null-hypothesis-testing form, are misleadingly called “effects”) rule the day.

If your science thrives on experiment and statistics, but lacks the power of theory, you get peculiar problems. Most importantly, you get slow. To see why, it’s interesting to wonder how psychologists would build a bridge, if they were to use their typical methodological strategies. Probably, they would build a thousand bridges, record whether they stand or fall, and then fit a regression equation to figure out which properties are predictive of the outcome. Predictors would be chosen on the basis of statistical significance, which would introduce a multiple testing problem. In response, some of the regressors might be clustered through factor analysis, to handle the overload of predictive variables. Such analyses would probably indicate lots of structure in the data, and psychologists would likely find that the bridges’ weight, size, and elasticity load on a single latent “strength factor”, producing the “theory” that bridges higher on the “strength factor” are less likely to fall down. Cross-validation of the model would be attempted by reproducing the analysis in a new sample of a thousand bridges, to weed out chance findings. It’s likely that, after many years of empirical research, and under a great number of “context-dependent” conditions that would be poorly understood, psychologists would be able to predict a modest but significant proportion of the variance in the outcome variable. Without a doubt, it would take a thousand years to establish empirically what Newton grasped in a split second, as he wrote down his F = ma.
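
To make the caricature concrete, here is a minimal sketch of that workflow on simulated data. The variable names, the hidden data-generating rule, and all the numbers are made up for illustration; the only point is that the procedure yields a modest predictive statistic rather than an equation you could build with.

```python
# A sketch of "bridge science by regression": simulated data only;
# the data-generating rule and all numbers are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Build a thousand imaginary bridges with correlated properties.
weight = rng.normal(100, 15, n)            # tonnes
size = 0.5 * weight + rng.normal(0, 5, n)  # metres, correlated with weight
elasticity = rng.normal(50, 10, n)         # arbitrary units
X = np.column_stack([weight, size, elasticity])

# Record whether each bridge stands (1) or falls (0); the true rule stays hidden.
stands = ((0.05 * elasticity - 0.02 * weight + rng.normal(0, 1, n)) > 0).astype(int)

# Collapse the predictors into a single latent "strength factor"...
fa = FactorAnalysis(n_components=1, random_state=0)
strength = fa.fit_transform(X)

# ...and predict the outcome from it, cross-validating to weed out chance findings.
accuracy = cross_val_score(LogisticRegression(), strength, stands, cv=5).mean()
print("factor loadings (weight, size, elasticity):", fa.components_.round(2))
print(f"cross-validated accuracy from the 'strength factor': {accuracy:.2f}")
```

Needless to say, the regression tells you which bridges tended to stand; Newton’s F = ma tells you how to build one that will.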

Because increased reliance on empirical data makes you so incredibly slow, it also makes you susceptible to fads and frauds. A good theory can be tested in an infinity of ways, many of which are directly available to the interested reader (this is what gives classroom demonstrations such enormous evidential force). But if your science is entirely built on generalizations derived from specifics of tediously gathered experimental data, you can’t really test these generalizations without tediously gathering the same, or highly similar, experimental data. That’s not something that people typically like to do, and it’s certainly not what journals want to print. As a result, a field can become dominated by poorly tested generalizations. When that happens, you’re in very big trouble. The reason is that your scientific field becomes susceptible to the equivalent of what evolutionary theorists call free riders: people who capitalize on the honest work invested by others by consistently taking the moral shortcut. Free riders can come to rule a scientific field if two conditions are satisfied: (a) fame is bestowed on whoever dares to make the most adventurous claims (rather than the most defensible ones), and (b) it takes longer to falsify a bogus claim than it takes to become famous. If these conditions are satisfied, you can build your scientific career on a fad and get away with it. By the time they find out your work really doesn’t survive detailed scrutiny, you’re sitting warmly by the fire in the library of your National Academy of Sciences1.

Much of our standard methodological teaching in psychology rests on the implicit assumption that scientific fields are similar if not identical in their methodological setup. That simply isn’t true. Without theory, the scientific ball game has to be played by different rules. I think that these new rules are now being invented: without good theory, you need fast-acting replication teams, you need a reproducibility project, and you need preregistered hypotheses. Thus, the current period of crisis may lead to extremely important methodological innovations – especially those that are crucial in fields that are low on theory.

Nevertheless, it would be extremely healthy if psychologists received more education in fields that do have some theories, even if they are empirically shaky ones, as you often see in economics or biology. In itself, it’s no shame that we have so little theory: psychology probably has the hardest subject matter ever studied, and to change that may very well take a scientific event of the order of Newton’s discoveries. I don’t know how to do it and I don’t think anybody else knows either. But what we can do is keep in contact with other fields, and at least try to remember what theory is and what it’s good for, so that we don’t fall into theoretical amnesia. As they say, it’s the unknown unknowns that hurt you most.

1 Caveat: I am not saying that people do this on purpose. I believe that free riders are typically unaware of the fact that they are free riders – people are very good at labeling their own actions positively, especially if the rest of the world says that they are brilliant. So, if you think this post isn’t about you, that could be entirely wrong. In fact, I cannot even be sure that this post isn’t about me.

Nov 13, 2013

Let’s Report Our Findings More Transparently – As We Used to Do

by Etienne LeBel

In 1959, Festinger and Carlsmith reported the results of an experiment that spawned a voluminous body of research on cognitive dissonance. In that experiment, all subjects performed a boring task. Some participants were paid $1 or $20 to tell the next subject the task was interesting and fun, whereas participants in a control condition did no such thing. All participants then indicated how enjoyable they felt the task was, their desire to participate in another similar experiment, and the scientific importance of the experiment. The authors hypothesized that participants paid $1 to tell the next participant that the boring task they had just completed was interesting would experience internal dissonance, which could be reduced by altering their attitudes on the three outcome measures. A little-known fact about the results of this experiment, however, is that only on one of these outcome measures did a statistically significant effect emerge across conditions. The authors reported that subjects paid $1 enjoyed the task more than those paid $20 (or the control participants), but no statistically significant differences emerged on the other two measures.

In another highly influential paper, Word, Zanna, and Cooper (1974) documented the self-fulfilling prophecies of racial stereotypes. The researchers had white subjects interview trained white and black applicants and coded for six non-verbal behaviors of immediacy (distance, forward lean, eye contact, shoulder orientation, interview length, and speech error rate). They found that white interviewers treated black applicants with lower levels of non-verbal immediacy than white applicants. In a follow-up study involving white subjects only, applicants treated with less immediate non-verbal behaviors were judged to perform less well during the interview than applicants treated with more immediate non-verbal behaviors. A fascinating result; however, a little-known fact about this paper is that only three of the six measures of non-verbal behavior assessed in the first study (and subsequently used in the second study) showed statistically significant differences.

What do these two examples make salient in relation to how psychologists report their results nowadays? Regular readers of prominent journals like Psychological Science and Journal of Personality and Social Psychology may see what I’m getting at: It is very rare these days to see articles in these journals wherein half or most of the reported dependent variables (DVs) fail to show statistically significant effects. Rather, one typically sees squeaky-clean looking articles where all of the DVs show statistically significant effects across all of the studies, with an occasional mention of a DV achieving “marginal significance” (Giner-Sorolla, 2012).

In this post, I want us to consider the possibility that psychologists’ reporting practices may have changed in the past 50 years. This then raises the question of how this came about. One possibility is that as incentives became increasingly perverse in psychology (Nosek, Spies, & Motyl, 2012), some researchers realized that they could out-compete their peers by reporting “cleaner”-looking results that would appear more compelling to editors and reviewers (Giner-Sorolla, 2012). For example, decisions were made simply not to report DVs that failed to show significant differences across conditions or that achieved only “marginal significance”. Indeed, nowadays even editors or reviewers will sometimes demand that such DVs not be reported (see PsychDisclosure.org; LeBel et al., 2013). A similar logic may also have contributed to researchers deciding not to fully report independent variables that failed to yield statistically significant effects, and not to fully report the exclusion of outlying participants, for fear that this information might raise doubts among editors and reviewers and hurt their chances of getting a foot in the door (i.e., at least getting a revise-and-resubmit).

An alternative explanation is that new tools and technology have given us the ability to measure a greater number of DVs, which makes it more difficult to report on all of them. For example, neuroscience (e.g., EEG, fMRI) and eye-tracking methods yield multitudes of analyzable data that were not previously available. Though this is undeniably true, the internet and online article supplements give us the ability to fully report our methods and results and to use the article itself to draw attention to the most interesting data.

Considering the possibility that psychologists’ reporting practices have changed in the past 50 years has implications for how to construe recent calls to raise reporting standards in psychology (LeBel et al., 2013; Simmons, Nelson, & Simonsohn, 2011; Simmons, Nelson, & Simonsohn, 2012). Rather than seeing these calls as rigid new requirements that might interfere with exploratory research and stifle our science, one could construe them as a plea to revert to the fuller reporting of results that used to be the norm in our science. From this perspective, it should not be viewed as overly onerous or authoritarian to ask researchers to disclose all excluded observations, all tested experimental conditions, all analyzed measures, and their data collection termination rule (what I’m now calling the BASIC 4 methodological categories, covered by PsychDisclosure.org and Simmons et al.’s (2012) 21-word solution). It would simply be the way our forefathers used to do it.

References

Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. The Journal of Abnormal and Social Psychology, 58(2), 203-210. doi: 10.1037/h0041593

Giner-Sorolla, R. (2012). Science or art? How esthetic standards grease the way through the publication bottleneck but undermine science. Perspectives on Psychological Science, 7(6), 562-571. doi: 10.1177/1745691612457576

LeBel, E. P., Borsboom, D., Giner-Sorolla, R., Hasselman, F., Peters, K. R., Ratliff, K. A., & Smith, C. T. (2013). PsychDisclosure.org: Grassroots support for reforming reporting standards in psychology. Perspectives on Psychological Science, 8(4), 424-432. doi: 10.1177/1745691613491437

Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7, 615-631. doi: 10.1177/1745691612459058

Simmons, J., Nelson, L., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366.

Simmons, J., Nelson, L., & Simonsohn, U. (2012). A 21 word solution. Dialogue: The Official Newsletter of the Society for Personality and Social Psychology, 26(2), 4-7.

Word, C. O., Zanna, M. P., & Cooper, J. (1974). The nonverbal mediation of self-fulfilling prophecies in interracial interaction. Journal of Experimental Social Psychology, 10(2), 109–120. doi: 10.1016/0022-1031(74)90059-6
