Sociologica. V.16 N.1 (2022), 47–58
ISSN 1971-8853

Thank You, Reviewer 2: Revising as an Underappreciated Process of Data Analysis

Stefan Timmermans, Department of Sociology, University of California, Los Angeles (United States), http://www.stefantimmermans.com
ORCID https://orcid.org/0000-0002-4751-2893

Stefan Timmermans is Professor in the Department of Sociology at UCLA, as well as a Professor at ISG. His research draws from medical sociology and science studies and uses ethnographic and historical methods to address key issues in the for-profit U.S. health care system. He has conducted research on medical technologies, health professions, death and dying, and population health, and is the medical sociology editor of the journal Social Science & Medicine.

Iddo Tavory, Department of Sociology, New York University (United States), https://as.nyu.edu/content/nyu-as/as/faculty/iddo-tavory.html
ORCID https://orcid.org/0000-0002-4603-8958

Iddo Tavory is an Associate Professor of Sociology at New York University (USA) and editor of Sociological Theory. He has published Abductive Analysis: Theorizing Qualitative Research (University of Chicago Press, 2014) and Data Analysis in Qualitative Research: Theorizing with Abductive Analysis (University of Chicago Press, 2022) (both with Stefan Timmermans), Summoned: Identification and Religious Life in a Jewish Neighborhood (University of Chicago Press, 2016), and Tangled Goods: The Practical Life of Pro Bono Advertising (University of Chicago Press, 2022). Iddo has received the Lewis A. Coser Award for theoretical agenda setting in Sociology.

Submitted: 2022-03-29 – Accepted: 2022-04-06 – Published: 2022-05-19

Abstract

Qualitative data analysis is considered finished after the researcher writes up the analysis for publication. However, if we compare the text as initially submitted to a journal with what has been published, we often find great discrepancies because of the way reviewers push authors to revise their article during the review process. We show how reviewers may initiate a new round of data analysis by focusing their comments on three areas: the fit between observations and theoretical claims, the plausibility of the theoretical framing or explanation compared to other possible explanations, and the issue of relevance, or the contribution to scholarship. The result is that reviewers, as representatives of a community of inquiry, help shape data analysis.

Keywords: Qualitative data analysis; coding; scientific writing.


Qualitative data analysis books extensively discuss making sense of observations as a process of gaining theoretical closure. The texts give advice on looking for places to start analyzing in the mess of observations: singling out luminous data (Katz, 2001), conducting open coding of fragments to get a sense of their theoretical or conceptual heft (Charmaz, 2014), looking for patterns by coding along a conceptual axis (axial coding) (Corbin & Strauss, 2008), mapping the data in a two-dimensional space (Clarke, 2005), and then settling upon a promising theme (Glaser & Strauss, 1967). Once a theme has been singled out, the researcher is encouraged to explore its manifestations across the dataset and examine variations, negative cases, consequences, and causes (Tavory & Timmermans, 2013) by focused coding, spinning off memos, and gathering additional materials until data saturation has been reached (Small, 2009). The goal of theory-driven data analysis is to draw a theme out of observations and, at the same time, transcend the immediacy of the observations to speak to a broader theoretical concern.

The qualitative data analysis process thus starts in an open-ended way and gradually homes in on an analytical theme. The different data analysis approaches differ on when existing theory and scholarship become relevant: grounded theory saw existing theory as potentially corrupting data analysis and recommended that scholars consult the literature only after they have worked through their observations (Glaser, 1992), while extended case method adherents skirted closer to a deductive approach in which researchers update or extend their “favorite” theory (Burawoy, 2009). A newer approach grounded in pragmatism, abductive analysis (Tavory & Timmermans, 2014), encourages the researcher to theorize surprising findings in light of existing theories, advancing a recursive dynamic between data and scholarship.

Whether the literature is present throughout the research project, guides the research from the get-go, or is consulted at the end, the qualitative researcher then writes up an article (or book, but we focus on articles here). If the researcher has followed the steps of data analysis, this last step to submission should happen smoothly. The analytical memos articulating theoretical insights based on observations form the foundation of the data analysis section of the article (Corbin & Strauss, 2008). Writing up qualitative research at that stage is then a matter of articulating a compelling argument that ties memos, maps, and luminous data together. Put the references in the journal-required format and press “submit.”

Except that’s not usually where the analysis ends. If you were to compare a researcher’s data analysis at initial submission with what ends up being published, the difference is often astounding. The original analysis occasionally offers a bare-bones outline that was further refined and elaborated in the writing process, but often there is even less continuity: the published analysis is completely different. Obviously, tremendously important analytical work is done during the review and revision stage of publishing, but it falls outside the purview of most methodology books.1 While authors thank reviewers for helpful comments during the review process in their acknowledgments, no methods section explains how reviewers shaped what was published.

If the article is not desk-rejected or rejected after reviews, most manuscripts will come back with a “revise and resubmit.” Here is the dilemma: reviewers and editors are the gatekeepers to your work getting published. Ignore them, and they may get upset and take it out on your manuscript. Few editors will give you another chance if the reviewer thinks you did not respond sufficiently to the initial round of suggestions. Also, even if you think the reviewer is misguided, it is always wise to accept that your writing may have played a role in the misreading and to give the reviewer’s reading the benefit of the doubt. At a minimum, some clarification is warranted to avoid further misinterpretations. But seeing it in this way still makes reviewers seem like foes rather than allies. Even while we compliment them for their “wonderful insights,” we try to get our way; we attempt to resist their misguided requests; we “capitulate” to their demands. And yet, however begrudgingly, responding to reviewers often takes the analysis in a new, positive direction: the analysis needs to be reframed, the data needs realigning, the concepts need rethinking, and sometimes the entire argument requires overhauling.

Most often, more analytical work is needed because the reviewers take issue with the data interpretation and the theoretical contributions. In her observations of grant panels, Michèle Lamont (2009, p. 182) found that 75 percent of the panelists mentioned that the connection between theory and data analysis is an important aspect of grant proposals. In a study of the role of peer review in published quantitative papers, Teplitskiy (2015) compared papers presented at the American Sociological Association meetings with their publication in two major general sociology journals: American Sociological Review and Social Forces. Teplitskiy found that the nature of the data analysis altered modestly while the theoretical framing changed substantively. “This finding suggests that a chief achievement of peer review may be to provoke authors to adjust their theoretical framing while leaving the bulk of the data analysis intact” (Teplitskiy, 2015, p. 266). In other words, the theoretical framing for quantitative papers is collectively negotiated during peer review.

Qualitative researchers aspire to convince readers of the observed facts: the I-was-here textual effect that Atkinson (1990), drawing from literary theory, refers to as verisimilitude. These eyewitness reports depend on the utilization and obfuscation of discursive conventions to give the impression that the text conforms to an observed reality. Qualitative research persuades through a combination of demonstration (e.g., excerpts of field notes or interview transcripts) and analytical commentary, with the proviso that the observations are always already analytically infused (Atkinson, 1990). This verisimilitude renders an analysis transparent but also invites reviewers to take issue with the presented interpretation.

Verisimilitude is a necessary starting point. Yet, it only convinces readers that we were, in one way or another, there, that our data is valid, systematic, sufficient, and reliable. Focusing on the relationship between data and theoretical claims requires us to think more deeply about the different ways in which readers assess our argument as a trustworthy contribution to scholarship. Following a proposed set of evaluation criteria for qualitative analysis (Tavory & Timmermans, 2014, ch. 7), we can divide reviewer comments that push an analysis into three categories. These considerations, of course, should be contended with even before authors submit their work: they form the backbone of comments from colleagues, advisors, or conference participants. And yet, they are also crucially important during the review and revision process, where, under the cloak of anonymity, reviewers may offer pointed criticisms.

First, reviewers may question the fit between the evidence and the researcher’s claims. The researcher offers data to back up a claim, but the reviewer is not convinced that the claims really provide an accurate analysis of the data, even if they are convinced that the researcher indeed collected the data appropriately. The fit between data and claims is too loose: the researcher argues for relationships that a reader just doesn’t see in the quoted examples. Too close a fit between data and claim is also problematic: a strong argument transcends the immediacy of the observations. Simply summarizing the data adds little value. A second group of criticisms is directed at the plausibility of the explanation. The author argues in favor of a set of connections, a causal relationship, or consequences, but the reviewer offers an alternative explanation that is already described in the literature. Then there is the criterion of relevance or the dreaded “so what?” question: even if your analysis is correct, how does it matter for social science scholarship? As pragmatism holds, a theory needs to be evaluated for its practical effects, commitments, and consequences. The problem here is that the analysis might be accurate but doesn’t offer any novel insights. The authors only address narrow substantive concerns, repeat what others have said before, don’t move the analysis forward (e.g., introduction and conclusion are interchangeable), or add negligible nuances to scholarship (Healy, 2017).

In what follows we offer illustrative examples of reviewers who pushed authors on these three criteria, and we examine how the analysis was affected. Our contention is that exposure to peer review changed the initial analysis, and often for the better. Authors tightened the links between their data and claims, weeded out alternative explanations, and participated in broader theoretical debates. Much of this work is hidden in a private dialogue between reviewers, editors, and authors during the review process. However, we draw on our roles as current editors of social science journals (senior medical sociology editor of Social Science and Medicine and editor-in-chief of Sociological Theory, respectively). We picked examples that have been published and whose authors are publicly known. For the examples we used, we asked the authors’ permission (because of double-blind peer review we did not approach the reviewers for permission, but we do not provide identifying information about them). The authors of the articles also had an opportunity to make changes to our write-up.

1 Fit: Does the Evidence State What You Claim?

One reviewer put the crux of fit eloquently: “I question if some of the conclusions were really ‘earned’? […] I question if this concern about some of the conclusions is an outcome of findings that are presented in a rather cursory and superficial manner?” Such concerns are a challenge to rethink the analysis and play to the data’s strengths.

Manuela Perrotta and Alina Geampana (2020) submitted a manuscript to Social Science and Medicine that analyzed the proliferation of add-on technologies in fertility medicine. These technologies are promoted on the websites of fertility clinics as improving IVF success rates, which hovered around a low 22%. The add-on technologies were aimed at patients with repeated implantation or IVF failures. A British TV program, however, had exposed these technologies as having little evidentiary backup for their effectiveness, even though they greatly added to the cost of treatment. In their research project, the researchers asked fertility specialists about the benefits and drawbacks of one such add-on technology: time-lapse imaging. These are laboratory incubators with integrated cameras that continuously take pictures of embryos during their development. The technology was promoted as a promising way to improve the live birth rate of embryos.

Based on their interviews, the authors argued in their initial analysis that the fertility specialists were not bothered by the lack of clinical trial evidence for the technology. Instead, the researchers reported that respondents embraced orthodox views of evidence and highlighted additional benefits of the technology (such as allowing them to monitor fertilization, deselect some embryos, and manage patients’ expectations).

While both reviewers were enthusiastic about the submitted manuscript, reviewer two made a couple of critical points. First, the authors reported that they had conducted observations and interviews in five UK clinics, but they drew only from the interviews. The reviewer argued that the analysis would be strengthened with observational data. This recommendation questioned the fit between data and claims, or more precisely how best to empirically support the manuscript’s claims. Second, drawing from the work of Karin Knorr-Cetina (1999), the reviewer encouraged the authors to pay attention to the forms of knowledge offered by the technology, and not simply to the evidence for the technology.

Given the positive reviews, Perrotta and Geampana did not comprehensively rewrite the manuscript or reanalyze the data. Instead, they drew out some themes already present in the manuscript and gave them more prominence. Doing so, however, changed the analysis. Prompted by the second reviewer, the authors thought carefully about the kinds of interactions that spoke to the strength of their interview data. They decided to limit the analysis for the SSM paper to the interview data, but that required them to address how interview data could speak to the lack of evidence for the add-on technology and capture the broad range of other reasons their respondents offered for using time-lapse imaging. They added a section on narrative legitimation, a concept social scientists have used to analyze other areas of medicine (e.g., alternative medicine and midwifery) to show how actors deviate from prevailing evidentiary standards in medicine. This literature predicts that health professionals either accept the customary standards of scientific evidence, rely on lower forms of evidence to support decisions, or question customary standards of evidence. This new theoretical set-up allowed the authors to emphasize that fertility specialists agreed with orthodox views of evidence and acknowledged that time-lapse imaging did not meet those criteria, but brought up other reasons for using the new technology. While the fertility specialists could not prove that time-lapse imaging improved live births, for instance, the film of the fertilization process as it occurred over time allowed them to discard some embryos with three rather than two viable pronuclei.

This invigorated analysis opened a new theoretical front where the authors could contrast external knowledge and evidence criteria with the persuasive power of accumulated clinical experience, in the process expanding the range of treatment benefits beyond traditional effectiveness endpoints. This also increased the relevance of the manuscript because it allowed the authors to engage the literature on the hegemony of evidence-based criteria to evaluate new technologies at the expense of other, more practice-based forms of knowledge.

As with the other evaluation criteria, it is not surprising that our exemplar is of a more gradual analytical modification. If the reviewer had found the analysis inadequate — usually telling the editor that the data is “under-analyzed” — the reviewer would likely have rejected the paper rather than asking for a major revision. It is one issue to miss some analytical leads; it is a more profound problem to comprehensively ignore the interpretive potential of observations. In our experience as journal editors, unfortunately, this systematic underplaying of the data happens more with some analytical approaches than others. Some researchers have (mis)interpreted grounded theory as a license for searching for themes without bothering with a theoretical argument. The result is an exercise in coding and classification. The abstract announces that the authors found four or five themes and the manuscript’s analytical section lists these themes without connecting them in a theoretical way. While a list of themes may initiate an analysis, it does not constitute a compelling one. Grounded theory may tell you what’s in your data with its useful coding paradigms, but it does not necessarily tell you what the theoretical relevance of the observations is. For that, you need to closely engage the existing literature and figure out what’s surprising or novel, and therefore analytically relevant. Deductive qualitative research papers, using extended case method analysis or a similar approach, are rare. Their weakness may be that the author ignores the richness of the data due to a strong precommitment to theoretical precepts. Different analytical approaches then produce different challenges around fit, and some of these challenges may lead to a premature end in the review process.

2 Plausibility: What Are Alternative Explanations?

If fit focuses on the relationship between the theoretical claim and the data, plausibility focuses on alternative explanations of the empirical pattern. If we see any writing as enmeshed in a conversation with a particular disciplinary community of inquiry, reviewers may well ask whether there aren’t alternative existing theoretical tools that would do the job just as well. This is, for many manuscripts, a make-or-break question because it questions the entire theoretical construct the author created and suggests that there is a different, often simpler and more familiar, explanation for the data. Even if the data fit the theoretical claim, it may fit other claims just as well — making the theoretical intervention tenuous.

To take one example, this time culled from the pages of Sociological Theory, we turn to Sourabh Singh’s (2022) “Can Habitus Explain Individual Particularities?” The question animating the manuscript was interesting: sociologists have increasingly used Pierre Bourdieu’s notion of habitus to explain actors’ actions. Since a habitus is defined as the tastes and (mostly) class-based “principles of vision and division,” it operates as a conservative force, reproducing positions and inequalities within the social field. Yet, can we explain any particular actor’s actions by recourse to such a theory? This question, dubbed the “problem of particularity,” remained a sticking point in post-Bourdieusian sociology.

The analytic claim made by Singh was that part of the supposed Bourdieusian problem of accounting for actions stems from the fact that Bourdieu’s theories of habitus and of capital have been analyzed as separate from his analysis of fields. Without accounting for field structure, including the possible shifts of actors within fields, among fields, and of the fields themselves, the problem of particularity seems more intractable than it otherwise might be within the Bourdieusian framework. Put otherwise, we often treat the problem of particularity as intractable because people change the course of their actions over time. But, argues Singh, part of the reason we consider this a problem is that we forget that habitus is always located within a field’s structure, and as people move within a field, and as the field itself changes, we should, in fact, expect to see changes in actors’ strategies of action.

Singh then exemplified this problem by detailing the biographies of two important leaders in Indian national history — Jawaharlal Nehru and Indira Gandhi — over their political careers. Singh argued that understanding their habitus in conjunction with their movement within the political field (and the changing political field itself) provides a strong key for understanding their actions. While the reviewers generally liked what Singh did in his manuscript, they weren’t completely sanguine. Even within the world of Bourdieusian scholars, there were other ways to account for the problem of particularity. There were, in other words, other explanatory alternatives.

Summarizing reviewers’ concerns in his decision letter, Tavory wrote:

Within the Bourdieusian literature there seem to be basically four reactions to the problem of particularity: (a) most common: implicitly denying that there is a problem at all; (b) arguing that the notion of habitus is not intended to capture each particularity, but a kind of “zone” — that it is sociology, and sociology trades in generalities. Bourdieu explicitly says this in a number of places; (c) Bernard Lahire’s solution, which the author completely ignores, in what is currently the most significant omission in the manuscript — that habitus are themselves complex and discontinuous. That the context(s) of transmission involves multiple actors, and modes of being-in-the-world, located in relation to specific practices and situations. Thus, to get at “sociology at the level of the individual” we need to account for such complexity; (d) the solution provided by Atkinson, Crossley and some others: that sedimentation occurs throughout, and that taking phenomenology seriously (whether Schutz or Merleau-Ponty) means that we account for a living, changing, personhood.

The author considers mostly the first and the last of these. I think that in revising this work, the other alternatives need to be accounted for. I think, especially, that they need to take on Lahire’s solution, which is, probably, the most concerted and sophisticated effort to account for the problem of particularity within a Bourdieusian framework. I am also surprised that the author did not delve more deeply into Bourdieu’s own “Sketch for a self-analysis” which is very much about crafting a sociology on the level of the individual.

At its core, the author ran into a theoretical issue with empirical implications. Although Singh paid attention to some solutions presented in the literature to the problem of particularity, he didn’t pay enough attention to others. But this is not only a matter of strengthening the conceptual parts of the paper. Rather, part of the problem was that since Singh focused on two different biographies, he spent relatively little time detailing each trajectory. With so little space to develop each biography, readers could find the alternatives not only conceptually plausible, but also empirically so.

In a sense, then, plausibility reversed the problem of fit. Rather than having an uncomfortable fit between data and theory, the problem of plausibility is that other theoretical explanations fit just as well and, in the eyes of the reviewers, maybe better. Thus, to solve the problem of plausibility, Singh ended up cutting out one of the characters, and focusing on the biography of Nehru. What this allowed him, through added details, was to increase the plausibility of the focus on field transformations. While he still couldn’t quite reject other explanations, he could show that focusing on such field dynamics was at the very least one of the important explanatory keys for understanding how Nehru’s political strategy and commitments changed over his career.

3 Relevance: So, What?

The “so what?” question is the trickiest reviewer concern to anticipate because, more than the other evaluation criteria, it is a judgment call. In terms of fit, reviewers can marshal evidence from the manuscript that the data does not support the author’s claims. For plausibility, they can point to the literature to note alternative explanations. But whether the analysis makes a strong contribution is often more difficult to assess. The exception is the situation where the reviewer, as an expert in the field, can show that the author is saying something that is already well-established in the literature. In most other cases, the question of relevance turns on the more nebulous concern of whether the analysis and theorization are a compelling contribution to the literature. In essence, the paper needs to allow other people in the discipline to solve other problems: it needs to be able to travel elsewhere (Timmermans & Tavory, 2022). In a sense, like any text or cultural object, it needs to resonate with its audience (McDonnell et al., 2017) by helping them work through the puzzles they face.

Many manuscripts that lack relevance are filtered out by the journal’s editor. The editor has to imagine an audience for the work that, if everything works out, renders the manuscript citable by a group of scholars that goes beyond the most immediate subject audience. A lack of relevance is one of the major reasons editors desk-reject manuscripts. Social Science and Medicine received 6,497 submissions between July 2020 and July 2021 across its seven disciplinary offices (the medical sociology office received 1,037 of those). About 85% get desk-rejected, and the editors pick from ready-made phrases in their rejection letters. The most used phrase is: “Although your manuscript falls within the aim and scope of this journal, it is being declined due to lack of sufficient novelty,” which is a gentle way to point out the lack of relevance. Those that make it to the review process may encounter a reviewer who is enthusiastic about the work but a bit frustrated that the manuscript does not fulfill its potential to speak to broader themes. That’s what happened in the following situation.

Kelly Holloway, Fiona Miller, and Nicole Simms (2021) received the following reviewer comment for an article they submitted to Social Science and Medicine:

When you are clearer about who you are talking about and what you are doing, you can also clarify the contribution of your paper, especially to a scholarly audience. At times you allude to the significance of all of this for tests and approvals and practitioners. What exactly do you want scholars to learn from this? There seem to be many important implications (about regulation, risk, industry practices, professional practices, patient safety, the impact of the private sector on public wellbeing, about experts, and so on). Which of these many implications are most important to you, and what do you want academics to take from your study to move the literature forward? This could be clarified earlier in the paper, and again in the findings and discussion sections.

In other words, the reviewer saw many potential broader themes that the authors hinted at but found those themes insufficiently worked out. This is the kind of comment that suggests that the reviewer is on board with most of the analysis but would like a better articulation of how the research addresses the stakes of the research project: how does the article go beyond the materials presented to intervene in larger theoretical and policy debates?

The manuscript initially focused on demonstrating an “invisible college” of industry and private experts who influenced the regulation of a new genetic test, Non-Invasive Prenatal Testing, in which blood from a pregnant woman is analyzed for the fetus’ genetic conditions. This technology has been heavily marketed as a safe, accurate, and reliable improvement on more invasive technologies, and for-profit manufacturers have rapidly disseminated the test across the US healthcare system. The authors focused on the retreat of government regulatory mechanisms as an independent check on this industry and the discovery of an invisible college of informal experts with industry ties that set standards for test validation, developed clinical practice guidelines, and influenced critical reimbursement and insurance coverage decisions.

Rather than sticking with their discovery of the invisible college (a concept present in this literature, drawn from Demortain, 2011), the authors took on the reviewer’s challenge to elevate the relevance of their article by introducing a new institutional form — a diffuse, polycentric regulatory regime permeated by commercial interests — that not only built on their own case study but also linked the research to similar studies of conflicts of interest in the pharmaceutical industry. They pointed out the lack of accountability in decisions to disseminate and promote the genetic test, the lack of transparency of the scientific data produced by test manufacturers, and the construction of covert clinical and regulatory knowledge. The reviewer’s prompt lifted the analysis to a new conceptual level and also gave the authors a stronger rationale for their study: exposing the influential backroom regulatory work in order to hold it accountable.

4 Conclusion

A Facebook group with more than 75,000 members, “Reviewer 2 must be stopped!”,2 encourages members to vent about the second reviewer’s depredations: thwarting the publication of good science with irrelevant, petty, or mean-spirited objections and demonstrating a lack of methodological acumen or even elementary reading skills under the cloak of anonymity. Obviously, anonymous double-blind peer review is an imperfect gatekeeping mechanism for publication. Abuses and frustrations abound, even if we have surprisingly few studies of the actual review process and most of the complaints remain anecdotal (but see Hirschauer, 2010).

As journal editors, we also see another side to the review process. When it works well, the review process strengthens a manuscript, sometimes transforming it from a mediocre work into a powerful contribution. It does so precisely by doubling down on the considerations we outlined above: by pushing authors to show how the empirical material supports the theoretical claims, to consider whether other theoretical explanations would not do a better job of explaining the observations, or to clarify the broader implications of their study. Engaging such comments often adds another round of analysis to the manuscript, with the result that the analysis pre- and post-review is profoundly different. What the author thought was closed has been reopened and altered.

The review process often seems to insert a level of conservatism into a body of scholarship, especially in tradition-rich fields. This should not be surprising. The more people are engaged in a particular disciplinary conversation, the harder it is to convince them that a new explanation is better than existing alternatives. This is not necessarily problematic: the bar for new theoretical work should be high. Of course, such a system can also sometimes be narrow-minded, rewarding incremental change (or even repetition) rather than bold new arguments. As Thomas Kuhn noted more than half a century ago, adhering to a research tradition increases the probability of publication but forgoes opportunities for originality. It’s a reliable career path. Taking a higher-risk strategy may lead to higher rejection rates, because such explanations feel “too far” to be considered plausible, and too exotic to help other researchers puzzle out the problems that they face (McDonnell et al., 2017). However, if you do manage to convince your readers that a novel explanation is both better than the plausible alternatives and relevant for other cases, it can have strong staying power. In a study of 6.5 million abstracts in biomedicine, Foster and co-authors (2015) found that while innovation in science is rare, truly innovative articles accumulate higher rewards — at least in terms of the citations to the work.

Lastly, conservative or not, the review process often leads to an additional analytical loop. As authors have to tackle skeptical readings of fit, plausibility, and relevance, they often have to deepen their analysis, and sometimes even collect new data. As the early observer of science Ludwik Fleck (1979) noted, scientific experience is collective. At their best, reviewer 2 is an invisible manifestation of the collective, erased after the fact, but leaving their mark on the shape of the argument and the relationship between theory and observations.

References

Atkinson, P. (1990). The Ethnographic Imagination: Textual Constructions of Reality. New York, NY: Routledge.

Burawoy, M. (2009). The Extended Case Method: Four Countries, Four Decades, Four Great Transformations, and One Theoretical Tradition. Berkeley, CA: University of California Press. https://doi.org/10.1525/9780520943384

Charmaz, K. (2014). Constructing Grounded Theory (2nd ed.). Thousand Oaks, CA: Sage.

Clarke, A. (2005). Situational Analysis: Grounded Theory After the Postmodern Turn. Thousand Oaks, CA: Sage. https://doi.org/10.4135/9781412985833

Corbin, J., & Strauss, A.L. (2008). Basics of Qualitative Research (3rd ed.). Thousand Oaks, CA: Sage.

Demortain, D. (2011). Scientists and the Regulation of Risk: Standardising Control. Cheltenham: Elgar. https://doi.org/10.4337/9781849809443

Fleck, L. (1979). Genesis and Development of a Scientific Fact. Chicago, IL: University of Chicago Press. (Original work published 1935 as Entstehung und Entwicklung einer wissenschaftlichen Tatsache. Basel: Schwabe.)

Foster, J. G., Rzhetsky, A., & Evans, J.A. (2015). Tradition and Innovation in Scientists’ Research Strategies. American Sociological Review, 80(5), 875–908. https://doi.org/10.1177/0003122415601618

Glaser, B. (1992). Basics of Grounded Theory Analysis. Mill Valley, CA: Sociology Press.

Glaser, B., & Strauss, A. L. (1967). The Discovery of Grounded Theory. New York, NY: Aldine.

Healy, K. (2017). Fuck Nuance. Sociological Theory, 35(2), 118–127. https://doi.org/10.1177/0735275117709046

Hirschauer, S. (2010). Editorial Judgments: A Praxeology of “Voting” in Peer Review. Social Studies of Science, 40(1), 71–103. https://doi.org/10.1177/0306312709335405

Holloway, K., Miller, F. A., & Simms, N. (2021). Industry, Experts and the Role of the Invisible College in the Dissemination of Non-invasive Prenatal Testing in the US. Social Science & Medicine, 270, 113635. https://doi.org/10.1016/j.socscimed.2020.113635

Katz, J. (2001). From How to Why: On Luminous Description and Causal Inference in Ethnography (Part 1). Ethnography, 2(4), 443–473. https://doi.org/10.1177/146613801002004001

Knorr-Cetina, K. (1999). Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA: Harvard University Press. https://doi.org/10.4159/9780674039681

Lamont, M. (2009). How Professors Think: Inside the Curious World of Academic Judgment. Cambridge, MA: Harvard University Press. https://doi.org/10.4159/9780674054158

McDonnell, T.E., Bail, C.A., & Tavory, I. (2017). A Theory of Resonance. Sociological Theory, 35(1), 1–14. https://doi.org/10.1177/0735275117692837

Perrotta, M., & Geampana, A. (2020). The Trouble with IVF and Randomised Control Trials: Professional Legitimation Narratives on Time-Lapse Imaging and Evidence-Informed Care. Social Science & Medicine, 258, 113115. https://doi.org/10.1016/j.socscimed.2020.113115

Singh, S. (2022). Can Habitus Explain Individual Particularities? Critically Appreciating the Operationalization of Relational Logic in Field Theory. Sociological Theory, 40(1), 28–50. https://doi.org/10.1177/07352751221075645

Small, M.L. (2009). “How Many Cases Do I Need?” On Science and the Logic of Case Selection in Field-Based Research. Ethnography, 10(1), 5–38. https://doi.org/10.1177/1466138108099586

Tavory, I., & Timmermans, S. (2013). A Pragmatist Approach to Causality in Ethnography. American Journal of Sociology, 119(3), 682–714. https://doi.org/10.1086/675891

Tavory, I., & Timmermans, S. (2014). Abductive Analysis: Theorizing Qualitative Research. Chicago, IL: University of Chicago Press. https://doi.org/10.7208/chicago/9780226180458.001.0001

Teplitskiy, M. (2015). Frame Search and Re-Search: How Quantitative Sociological Articles Change During Peer Review. American Sociologist, 47(2), 264–288. https://doi.org/10.2139/ssrn.2634766

Timmermans, S., & Tavory, I. (2022). Data Analysis in Qualitative Research: Theorizing with Abductive Analysis. Chicago, IL: University of Chicago Press. https://doi.org/10.7208/chicago/9780226817729.001.0001


  1. This article elaborates some ideas from our book (Timmermans & Tavory, 2022). We draw especially from chapter 8.

  2. https://www.facebook.com/groups/71041660468