We began with a very simple question: why do we have so many words to express our likes and dislikes?
Why use the word "amazing" versus "perfect"? Why "fantastic" versus "excellent"? While these words all appear to be quite positive, we have the intuition that they differ. But in what way?
Our work reveals that these words signal different levels of emotionality, valence, and extremity. Based on this observation, we created the Evaluative Lexicon (EL): a computational linguistic tool that measures these facets of people's opinions, with a special focus on emotionality.
We constructed and validated the EL using 15 million online reviews, 1 million tweets, and over 10,000 movie and TV scripts (Rocklage, Rucker, and Nordgren 2018). It can be used with natural text in any form, including newspaper articles, online reviews, Twitter and Facebook posts, and transcribed audio.
The EL and its measure of emotionality have been directly validated both experimentally in the lab and in archival text. Moreover, compared with other popular text analysis tools, the EL has been shown to be uniquely able to measure emotionality (link).
Speaking to these capabilities, we have used the EL to find that attitude emotionality predicts more extreme final judgments in over 15 million real-world online reviews (Rocklage and Fazio 2015; Rocklage, Rucker, and Nordgren 2018; Rocklage and Fazio invited resubmission), greater consistency in expressing opinions across contexts (Rocklage and Fazio 2016), a greater ability for consumers to indicate their opinions quickly, due in part to emotionality's role as an important signal to them (Rocklage and Fazio 2018; Rocklage, Durso, Way, and Luttrell in prep), and even the future success of restaurants, Super Bowl commercials, and movies (Rocklage, Rucker, and Nordgren in prep). See below for the current list of publications using the EL.
Interested in trying it out? Download it below.
The software can utilize TXT, CSV, and Excel files and can analyze millions of full-text records in a matter of minutes.
We hope the program is intuitive to use, but we have also created a document that guides you through each variable the EL calculates (link).
See our publication (link), which provides more detail on how we constructed and validated the EL (version 2.0), as well as guidance on how best to use the EL variables (see the General Discussion section of the paper).
The EL was specifically designed to be a general text analysis tool: one that analyzes language that is clearly indicative of a person's opinion and that is used consistently across a large range of topics. In that regard, the EL emphasizes accuracy over absolute coverage and thus will not code a piece of text if it does not contain an EL word. See the chapter we wrote for an example of the tradeoffs among different text analysis approaches (link, pp. 18-20).
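To make this approach concrete, here is a minimal sketch of dictionary-based opinion scoring in the spirit described above. The word list and norm values below are illustrative placeholders only, not the actual Evaluative Lexicon norms (those ship with the EL software); the function names are likewise hypothetical. Note how text containing no dictionary word is left uncoded rather than assigned a score, reflecting the accuracy-over-coverage tradeoff.

```python
import re

# ILLUSTRATIVE values only -- NOT the actual EL norms.
# word: (valence, emotionality), each on a hypothetical 0-9 scale
ILLUSTRATIVE_NORMS = {
    "amazing":   (8.5, 7.5),
    "perfect":   (8.7, 4.9),
    "fantastic": (8.3, 7.2),
    "excellent": (8.4, 4.6),
    "terrible":  (1.2, 6.8),
}

def score_text(text):
    """Average valence/emotionality over matched words; None if no match."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = [ILLUSTRATIVE_NORMS[w] for w in words if w in ILLUSTRATIVE_NORMS]
    if not hits:
        # Accuracy over coverage: text with no dictionary word stays uncoded.
        return None
    valence = sum(v for v, _ in hits) / len(hits)
    emotionality = sum(e for _, e in hits) / len(hits)
    return {"valence": valence, "emotionality": emotionality, "n_words": len(hits)}

print(score_text("An amazing, fantastic film."))  # averages the two matched words
print(score_text("The plot summary."))            # -> None (no dictionary word)
```

A real tool would also handle negation, multi-word phrases, and a far larger normed dictionary; this sketch only shows the basic match-and-average logic.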
We have tested the software extensively, but it is still in beta, so please let us know if you run into any issues by e-mailing Matt Rocklage at email@example.com.