PILab research

Research in the PILab focuses on developing and applying new methods for acquiring, organizing, and synthesizing psychological data. Our projects span multiple fields of psychology, including cognitive neuroimaging, personality psychology, psycholinguistics, judgment & decision-making, and working memory & executive control. The common theme uniting research in the lab is a belief in the transformative power of good methods to reveal novel insights into the nature of the human mind and brain. Some of our current lines of research include:

The Neurosynth framework

Neurosynth is a software framework for large-scale automated synthesis of functional neuroimaging data. The framework uses text mining, meta-analysis, and machine learning techniques to distill the results of nearly 6,000 published fMRI articles and make them available to the neuroimaging community via a web interface (http://neurosynth.org). We introduced Neurosynth in a 2011 Nature Methods paper illustrating the ability of relatively simple text mining and machine learning methods to reproduce fMRI meta-analyses that had previously required considerable manual effort. We also showed that the framework can support quantitative reverse inference--that is, inferring cognitive function from patterns of brain activation in a statistically principled manner. In a more recent application (Chang et al., 2012), we used the Neurosynth framework to 'decode' the cognitive functions associated with distinct insula networks, revealing greater functional specificity than is readily apparent using conventional approaches.
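
The reverse-inference idea can be illustrated with a toy Bayesian sketch. This is not the actual Neurosynth implementation; the terms, activation rates, and study data below are all simulated purely for demonstration.

```python
import numpy as np

# Toy reverse inference: given studies tagged with one of two terms, and a
# binary indicator of whether each study reported activation at some voxel,
# estimate P(term | activation) via Bayes' rule. All data are simulated.
rng = np.random.default_rng(0)

n_studies = 200
labels = rng.integers(0, 2, n_studies)          # 0 = "pain", 1 = "memory"
# Simulated ground truth: "pain" studies activate the voxel 70% of the
# time, "memory" studies only 20% of the time.
p_act = np.where(labels == 0, 0.7, 0.2)
activated = rng.random(n_studies) < p_act

def reverse_inference(activated, labels, term_idx, prior=0.5):
    """P(term | activation), assuming a uniform prior over the two terms."""
    p_act_given_term = activated[labels == term_idx].mean()
    p_act_given_other = activated[labels != term_idx].mean()
    evidence = prior * p_act_given_term + (1 - prior) * p_act_given_other
    return prior * p_act_given_term / evidence

p_pain = reverse_inference(activated, labels, term_idx=0)
```

Given the simulated rates, observing activation at this voxel should shift belief substantially toward "pain" (roughly 0.7 / (0.7 + 0.2) under a uniform prior), which is the essence of statistically grounded reverse inference.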

Current work on Neurosynth focuses on expanding the codebase, developing novel interactive visualization tools, and adding real-time web-based meta-analysis and decoding capabilities. The code, data, and resulting images are all freely accessible online, and we're always looking for new collaborators. Work on Neurosynth is generously supported by an R01 grant from NIMH.

Big Data approaches to personality

The scientific study of personality is over a hundred years old, but advances in electronic communication have recently revolutionized the way researchers investigate the causes and consequences of individual differences in human personality. Instead of samples of dozens or hundreds of participants, researchers now routinely acquire data from thousands or tens of thousands of people all over the world. The enormous datasets obtained via the web and mobile applications present exciting new opportunities, but also raise new measurement and analysis challenges. Research in the PILab focuses partly on developing tools to acquire and analyze personality data on a large scale. For example, in Yarkoni (2010a), we introduced a novel genetic algorithm-based method for automatically abbreviating personality measures, enabling much more efficient measurement of individual differences. In Yarkoni (2010b), we analyzed personality and word use in a sample of 694 blogs, enabling exploration of the relationship between personality and language at the level of individual words. Among other current projects, we're using mobile apps to investigate the role of personality in daily experience; conducting large-scale studies of social decision-making behavior on Amazon Mechanical Turk; and using machine learning methods to develop better blog-based personality prediction tools.
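
The genetic-algorithm approach to measure abbreviation can be sketched in a few lines. The simulated questionnaire, fitness function, and GA settings below are illustrative stand-ins, not the published method: the idea is simply to evolve a subset of items whose sum score tracks the full-scale score while using far fewer items.

```python
import numpy as np

# Toy GA for questionnaire abbreviation. Simulated data: 40 items that all
# load on a single latent trait, so a small subset can recover the full score.
rng = np.random.default_rng(42)
n_resp, n_items = 500, 40
latent = rng.normal(size=(n_resp, 1))
responses = latent + rng.normal(size=(n_resp, n_items))
full_score = responses.sum(axis=1)

def fitness(mask):
    """Fidelity to the full-scale score, minus a penalty per retained item."""
    if mask.sum() == 0:
        return -np.inf
    short = responses[:, mask.astype(bool)].sum(axis=1)
    r = np.corrcoef(short, full_score)[0, 1]
    return r - 0.2 * mask.sum() / n_items

# Population of binary masks: 1 = keep item, 0 = drop it.
pop = rng.integers(0, 2, (30, n_items))
for _ in range(100):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)[::-1]]        # sort best-first
    parents, children = pop[:10], []           # elitist selection
    for _ in range(20):
        a = parents[rng.integers(0, 10)]
        b = parents[rng.integers(0, 10)]
        cut = int(rng.integers(1, n_items))    # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_items) < 0.05      # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
```

Because the penalty term trades off scale length against fidelity, the winning mask ends up retaining only a fraction of the 40 items while its sum score remains highly correlated with the full-scale score.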

Open science

Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. A growing movement within the scientific community seeks to ensure that science is done out in the open, facilitating rapid communication, evaluation, and replication of research findings. One line of research in the PILab focuses on designing and promoting alternative evaluation platforms based on the recommender systems and collaborative filtering algorithms widely used in commercial web applications. For instance, in Yarkoni (2012), we joined a chorus of other commentators in calling for a shift away from pre-publication review to post-publication evaluation. We proposed an evaluation platform modeled largely on social news sites like reddit and Q&A sites like Stack Overflow, and argued that successful implementation of open evaluation platforms could dramatically improve both the pace and the quality of scientific publication and evaluation.
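
As a minimal sketch of the kind of collaborative filtering such a platform might rely on, the snippet below predicts a reviewer's interest in an unrated paper from the ratings of reviewers with similar tastes. The reviewers, papers, and ratings are entirely invented for illustration.

```python
import numpy as np

# Rows = reviewers, columns = papers; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 4, 1],
    [1, 1, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def predict(ratings, user, item):
    """User-based CF: similarity-weighted mean of other reviewers' ratings."""
    sims = []
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        both = (ratings[user] > 0) & (ratings[other] > 0)
        if not both.any():
            continue
        u, v = ratings[user, both], ratings[other, both]
        # cosine similarity over co-rated papers
        sim = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
        sims.append((sim, ratings[other, item]))
    weights = np.array([s for s, _ in sims])
    others = np.array([r for _, r in sims])
    return (weights @ others) / weights.sum()

# Reviewer 0 hasn't rated paper 2; predict their interest in it.
score = predict(ratings, user=0, item=2)
```

Because reviewer 0's closest neighbor (reviewer 1) rated paper 2 highly, the prediction lands near the top of the scale, which is how such a system could surface relevant papers without any pre-publication gatekeeping.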

Also, on an entirely unrelated note, it turns out that working in support of the Open Science cause means you get to call your blogging and tweeting activity "research" without feeling guilty.