Most contributors reported widespread experience of methodologically incongruous feedback in peer review. A core concept that drew together different forms of incongruity was the universalizing of norms or practices associated with quantitative or with specific qualitative research approaches. This is conveyed in our first category, inappropriate universalization. Our interest in the implications of incongruous peer review is conveyed in our second and third categories: our discussion of strategies for navigating such feedback (second category) is contextualized by our exploration of how power structures within academia intersect with responding, and of the (emotional) labor around (incongruous) peer review (third category). In lieu of a conventional discussion section, we end by synthesizing contributors’ suggestions for improving the methodological integrity of peer review of qualitative research.
Inappropriate Universalization
Methodologically incongruent reviewing most typically appeared through comments, claims, or expectations that applied standards or practices inappropriate to the article being reviewed. These predominantly reflected postpositivist and/or quantitative research perspectives and standards, but some contributors also reported experiences of inappropriately universalized standards from delimited qualitative approaches. Contributors described how, in some disciplines (e.g., health, the predominant area contributors worked in), quantitative researchers had begun “dabbling” with qualitative research, approaching it through a postpositivist lens, something evident both in the research they produced and in their peer reviews (a trend also noted by Riley et al., 2019).
In some cases, peer reviewers and editors seemed to have some understanding that qualitative research is different from quantitative research in some respects, but nonetheless expected qualitative research to conform to the same or similar research values. The most commonly reported modes of universalization related to “sample size,” the need for “reliability practices” (for coding), and (lack of) quantification. For example:
The primary comment I get is about sample size and how it is not “representative,” therefore I cannot make any conclusions from the data. There is such an emphasis on multiple coders and intercoder reliability because it is a quantitative measure, that if you do not have that in your study, the reviewers question your rigour. (C140)
In some contributors’ experience, reviewers and editors treated a constructed-as-small (and therefore unrepresentative, nongeneralizable, nonrandom, not statistically significant; see also Clark, 2003; Herber et al., 2020; Martin et al., 1999; Zaruba et al., 1996) data set as grounds for rejecting an article:
Given the reviewers comment and the small number of highly selected participants on a qualitative review, the manuscript is not considered to be suitable for publication. (C144, quoting an editor)
Although we do publish qualitative studies, we expect all studies to have results that can be generalized to large groups or cultures. It is unlikely that the 15 individuals that you interviewed represent the thousands of immigrants in [country]. (C69, quoting an editor)
Alternatively, reviewers and editors requested that authors collect more data, and/or note a “small sample” as a limitation of the research. We use the term “requested,” but such changes were often framed more fluidly: a reviewer might comment on what had not been done or was missing, rather than explicitly stating that a practice was required. In the absence of editorial guidance to the contrary, such comments become effective requests. Some contributors had encountered reviewers and editors requesting power analysis and other calculations to demonstrate the validity of their sample size, or evidence of data saturation (i.e., information redundancy) as a stopping criterion (Herber et al., 2020). For example:
On multiple occasions I have been asked to indicate how we achieved data saturation, despite not using Grounded Theory. People seem to think this is the qualitative equivalent of working out sample size and power when using statistics. (C72)
Did you achieve saturation? This important issue needs to be addressed. (C8, quoting an editor)
Requests for measures of the “reliability” of coding were another common example of methodologically incongruous review (see Braithwaite et al., 2014; although Herber et al., 2020, reported this infrequently). For example, C161 described:
Requests for interrater reliability statistics from a secondary coder … they usually want a kappa, gamma or ICC—to supplement in-depth qualitative analyses (e.g., reflexive TA [thematic analysis], IPA [interpretative phenomenological analysis], discourse analysis). … Generalisability vis-a-vis sample size: I am constantly told I need large sample sizes “for generalisability” to conduct qualitative research (hundreds of participants) for designs like IPA, narrative etc. … This is because of the positivistic ontology that plagues Psychology. … They want a table with frequencies/percentages, so they can understand the variability in the themes—which is tough with a study with only 6 people, for example. Or, they want a clear description of “some,” “most,” etc. … They want minimal researcher involvement, standardized codebooks, etc. … They do not understand theoretical frameworks/lenses at all.
In reviewer and editorial feedback such as that reported here, the researcher subjectivity essential to quality (Big Q) qualitative research (Gough & Madill, 2012) is conceptualized as a problem and a threat to objectivity, and thus a flaw in the research. Having only one coder—common in qualitative research—was understood by reviewers and editors as problematic. Reviewers and editors wanted authors to add codebooks, rules for coding, additional coders, training of coders, and consensus practices (e.g., multiple coders agreeing codes, multiple researchers agreeing themes/researcher triangulation) to their analytic procedures.
Not using qualitative data analysis software (QDAS)—and NVivo specifically—was understood by reviewers and editors as problematic, as these were seen as ways to enhance reliability. Some contributors reported peer reviewers and editors requesting participant validation of the accuracy of transcripts and the use of member checking to ensure the accuracy of interpretations (Zaruba et al., 1996)—again suggesting a conceptualization of researcher subjectivity as a potential source of bias and threat to be contained.
The idea that the world is discretely (and objectively) knowable was also conveyed in numerous reported requests for quantification of the analysis—already noted by C161 (see Clark, 2003). Contributors reported that reviewers and editors wanted (discussion of) frequency counts added to the article, or even statistical analysis of the data set. C105 quoted a reviewer to exemplify the incongruous review comments they frequently received from reviewers and editors:
I love the work by Braun [sic] on how to do thematic coding, but I wonder why different techniques were not used. For example, a quantitative content analysis would have addressed the frequency of themes and let researchers compare responses across respondent groups (i.e., demographic qualities of respondents).
As this example shows, contributors described reviewers and editors wanting participant demographics to be treated as variables and comparative analyses undertaken (see also Braithwaite et al., 2014; Martin et al., 1999). Contributor C67 quoted an editor who recommended reworking their conversation analysis of a corpus of 25 doctor–patient interactions into a “rigorous quantitative analysis”:
You could identify the patterns of interest, code them in the conversations, and statistically evaluate their occurrence to test against spurious effects. You can of course take an alternate quantitative approach to data analysis, but it needs to be sufficiently rigorous because we can’t trust the conclusions without the numbers and without tests against spurious (chance) patterns.
These various methodologically incongruous requests demonstrate a failure to review qualitative research on its own methodological terms.
These regularly reported methodologically incongruous reviewing requests often aligned with practices featured in the popular COREQ checklist (Tong et al., 2007) and other quality and reporting criteria in the health sciences more broadly (see Santiago-Delefosse et al., 2016). Some contributors mentioned COREQ as shaping peer reviewers’ and editors’ (narrow but universalized) views on good practice in reporting qualitative research:
More typically, the reviewer is looking for some key word(s) in the reporting of methods, and most usually the reporting of the analysis. Of course, the words they are looking for (e.g., saturation) may not be relevant to the methodology. … Probably the most common is saturation, e.g., with 10 interviews it’s unlikely you reached data saturation and you have not mentioned it. Sometimes a comment like this is accompanied by reference to the data saturation item from Tong et al.’s checklist (COREQ). The next most common is to be asked for a “coding tree” or “code book” when that is not consistent with the methodology described, e.g., reflexive thematic analysis. Again, this may be accompanied by reference to the item in COREQ that asks for a description of the coding tree. The third most common is to be asked whether transcripts were independently coded, by how many coders, and what measure of coder agreement was used. (C87)
Bigger Q qualitative contributors tended to view all of the different types of comments described here as problematic (see Morrill & Rizo, 2023). In contrast, (some) smaller q qualitative contributors saw value in COREQ and/or used it to rebut methodologically incongruent reviews. They were mostly troubled by the comments of peer reviewers and editors who could not make sense of, or did not see any value in, qualitative research and wanted qualitative research to look like quantitative research.
Contributors reported comments about research design that universalized postpositivist/quantitative approaches and norms (e.g., representative and generalizable samples, statistical analysis). Many reported encountering assumptions that qualitative research should have hypotheses, or that qualitative researchers should discuss what they expected to find:
It was difficult for me to evaluate how the presented interview protocol was uniquely suited for testing the hypotheses. (C156, quoting a reviewer’s report)
To make the paper more publishable, we would strongly encourage the authors of this paper to consider recruiting a comparison group for a more robust analysis. (C66, quoting a reviewer’s report)
Some noted reviewers and editors commenting that theory should only be used to make empirical predictions. C28 quoted a reviewer:
If they are to use a theoretical framework … it should be one which can confidently make empirical predictions. The current “theory” adds nothing to their paper and, in scientific terms, is not a theory.
Comments like this exemplify our “confidently wrong” characterization of much of the reported methodologically incongruous peer review.
Numerous contributions conveyed peer reviewers and editors as confused by and unfamiliar with the style and presentation of qualitative research. Failure to comply with (expected, universalized) norms seemed to render qualitative research puzzling or even incomprehensible (and therefore wrong) to some reviewers and editors, with some unable to ascertain any value in research that was not quantitative. For example, C96 quoted a reviewer’s report which asked “where are the findings? You only provide quotes.” Contributors also reported requests/requirements to change article organization or content to align the presentation of the research with postpositivist/quantitative reporting norms (Tracy, 2012; Walsh, 2015). For example, C9 quoted reviewers’ reports:
“There is a lot of personal experiences in the method which I’m not sure is necessary to this study” and “Although you are personally involved in the data collection, I think this would read better if it was written in the 3rd person throughout” … “I felt there was an overreliance on quotes to tell their story rather than the author making a strong narrative. To improve consider reducing the quotes used and have more narrative” … “Results sections are generally from the data only. Remove all references to other research in this section. Also leave out reflections on the data for the discussion section.”
This quotation conveys key stylistic aspects noted by many, such as: separating “results” from “discussion,” and removing researchers’ interpretation and references to literature from the former (see also Clark, 2003); removing or reducing data quotations from the “results” (see also Martin et al., 1999); conversely, only presenting data quotations and no analytic commentary in the “results”; writing in the third person; removing (qualitative) “jargon” (terms such as pragmatism); and removing (discussions of) reflexivity, methodology, ontology and epistemology, and other theory.
The contributors’ experiences conveyed a sense in which some reviewers and editors positioned themselves as expert, and as unequivocally right, and the author(s) as wrong, and needing to change:
Being told to write up results and discussion rather than combine these and condescendingly explained what each section should include. … Being told not to use first person as it’s not academic. (C45)
This experience evokes a role more like an examiner than a peer reviewer—something we come back to when we discuss power and emotional labor in peer review. Reviewers’ and editors’ lack of familiarity with the conventions of reporting qualitative research was sometimes combined with explicit (and implicit) disrespect for, or dismissal of, qualitative research (see also Herber et al., 2020)—usually some version of it being un/less scientific, and idiosyncratic rather than systematic or rigorous. For example:
The paper comes across as particularly idiosyncratic, non-generalisable, and personal level opinion—perhaps from the couch of a psychoanalyst or hypnotist. (C79, quoting a reviewer’s report)
It is just a subjective opinion of the author. It is not in the form of the paper. (C106, quoting a reviewer’s report)
This article certainly can’t be accepted, there is no data here. Author not transparent with equations. (C10, quoting a reviewer’s report)
Some contributors noted this disdain for qualitative research particularly around mixed methods research (see also Morrill & Rizo, 2023):
It also doesn’t feel like qual is ever enough on its own right—truly. We need to juxtapose them [quantitative and qualitative] against each other, and then make it known that quant is better, and qual is “supplementing” the analysis. … Reviewers want qual studies written up like quant studies—they want a cookie cutter study that looks like a survey. (C161)
Despite decades of qualitative scholarship, these reviewers appear to still universalize quantitative/postpositivist norms to construct qualitative scholarship as inherently methodologically insufficient. However, inappropriate universalization also featured around qualitative research specifically, when peer reviewers and editors familiar with some qualitative research approaches assumed that the conventions of one particular approach applied to all. Some referred to a kind of “boundary policing” of particular qualitative methods, and how they should be used:
A less frequent problem, but one that does come to mind, is over-confident/over-stated claims about what a “method” can or can’t do. “This isn’t template analysis, because it has themes,” or “This can’t be IPA [interpretative phenomenological analysis], there are two samples,” etc. I think a bit less boundary policing and a bit more curiosity (“It’s interesting to see two samples in an IPA study, can you explain a bit more about how that fits with the approach?”) is all that is needed here. (C124)
I got comments back from the editor saying that I should NOT have piloted my interview because “qualitative research does not involve use of pilots,” and that I should therefore remove reference to this part of the process from the article. (C53)
Universalization of (specific forms of) qualitative research or totalizing declarations produced frustration, and a wish for qualitative researchers to recognize the bounds and limits of their expertise:
I am very tired of qualitative researchers reviewing papers that are outside their expertise and not considering that—e.g., thematic analysts reviewing conversation analysis. For all the complaints qualitative researchers make about quantitative reviewers, surely they would also want to apply that same approach to themselves! (C90)
Those contributors working with approaches that treat language as productive, such as discourse or conversation analysis, appeared particularly likely to experience this methodologically incongruent form of review:
Reviewers who are obviously unfamiliar with discourse analysis argue that interpretations of discursive functions are over-interpretations and unwarranted claims and suggest to “let the data speak more for themselves.” (C47)
These responses suggest methodological incongruence (and a related issue of qualitative methodological expansion) is perpetuated by qualitative reviewers who do not have a full understanding of the diverse conventions and practices of varying forms of qualitative research, or of the diverse philosophical underpinnings and assumptions of different approaches. The kinds of experiences of peer review described suggest the need for “connoisseur” reviewers (Sandelowski, 2015; Sparkes & Smith, 2009), equipped with both expert knowledge and openness and flexibility when encountering unfamiliar methodological approaches. However, having enough reviewers with such expertise—and the willingness/capacity to review—remains a challenge.
One contributor requested that we produce, and present in this article, a list of reviewer and editor requests that are methodologically incoherent with Big Q qualitative research; we have provided this in Note 1 of the Supplemental Material. Such a list was requested because it would be a helpful resource for responding to and rebutting methodologically incoherent feedback. We now turn to contributors’ strategies for navigating such feedback.
Strategies for Navigating Methodologically Incongruent Feedback
It’s tricky because there is a felt sense that we as authors have to save face for the reviewers/editors even though their comments were methodologically inconsistent. Part of this is about the power differential between the journal, us as authors, and our need and desire to publish our work. Another part is about helping bring editors and reviewers along in a way where they might learn something rather than turned off. (C31)
This quotation powerfully conveys the experience, affect, and power differentials of peer review described by many contributors. Some contributors noted that editors often “shared the reviews with no additional comment or feedback” (C14) and “did not provide guidance” (C122), and so they had to navigate methodologically incongruent and at times contradictory feedback without any editorial support or input:
Reviewer 1 wrote back (paraphrase), “This article certainly can’t be accepted, there is no data here. Author not transparent with equations.” The other reviewer wrote “This is an excellent and rich qualitative study ….” The editor asked me to consider and respond to both reviewers. (C10)
Contributors expressed surprise and disappointment that editors simply “wave through” (C102) reviewers’ methodologically incongruent feedback and demeaning comments about qualitative research; others thought that some editors simply “did not know” (C122) that some reviewers’ comments were incongruent. Some faltered without editorial guidance:
I withdrew the article from the peer review process. I should have spoken to the editor first, but at the time I just felt there was no point surely as the comments were shared with me with no notes about how to engage with them. (C73)
Most navigated through (even contradictory) feedback to resubmit (several had their article rejected). Many outlined strategies for dealing with such feedback, sometimes specific and sometimes general, often honed over time and with experience (see also Watling et al., 2023). Some selectively ignored methodologically incongruent comments; some opted to withdraw the article when comments illustrated too strong a methodological disconnect (see also Cerejo, 2014). Reported strategies broadly clustered into (overlapping) practices of educating (including preemptively), seeking support, what might be termed “calling out” the decision, acquiescence, and engaging with the editor.
Educating—the most common strategy reported by contributors—typically involved not making the requested changes. Instead, contributors explained why they had not made the requested changes to the article, and the incongruence of the feedback with their particular qualitative approach. Contributors reported citing or quoting relevant methodological literature to support their position, recommending readings for the reviewers, and effectively educating them (and the editor) about the assumptions of qualitative research in general and of their approach specifically. Some noted the length of time a rebuttal response took; contributors described both “lengthy responses” and “blunt rebuttals,” seemingly depending in part on the anticipated receptiveness of the reviewers and/or editor. Much of this was conveyed by C161:
I fight back—I do not give in. I send lengthy responses (several paragraphs) back and hope they will understand. I will explicitly interrogate the claims, and provide references to back up NOT doing power analysis for qual, NOT doing IRR [interrater reliability] for RTA [reflexive thematic analysis], not having 100 participants for an IPA [interpretative phenomenological analysis] study, etc. So, I defend and counter each one of the incorrect arguments, with citations, and this is exhausting, and often times reviewers won’t budge and so I find a new journal that publishes qualitative work.
The process was framed here (and by some others) as a(n exhausting) battle, evoking the adversarial experience noted by others (Jamali et al., 2020). The layers of work hint at the psychological intricacies of the peer review process, discussed (along with power) in the next section. It also connects to something noted particularly by ECRs: the importance of having a good support system in place for responding to reviewers’ and editors’ incongruent comments (see also Watling et al., 2023)—and especially one that encouraged rebuttal:
I was lucky to have a good supervisory team behind me to discuss the comments with. (C151)
It was the other members of the team who are more senior, who reminded me that we could simply politely tell the reviewer no. (C58)
Support typically referred to a “team” around or behind the author, and having people “with [a] long history of engagement with qualitative research and well-established reputation in publishing such research” (C163) was notably helpful. This raises questions of how to resource those who do not have access to such support—something we hope this article contributes to.
Another strategy was citing articles, from the journal they had submitted to or from similar journals, that had used the same methodological approach. Contributors used this approach to support their argument that their article represented established practice in qualitative research:
I also often include citations to recently published articles in the journal I have submitted to, to show examples of interpretive/critical qualitative scholarship that was accepted without compromising their methodological commitments. (C24)
A later career researcher described using disagreement between the reviewers to their advantage, to discredit the methodologically incongruent feedback from one of the reviewers. Some contacted the editor directly “to inform them of the problems with these statements” (C90), complain about the review process, or check whether the required revisions were deal breakers. Such strategies require a researcher who both knows they can do this and feels able to do so.
Others described preemptive strategies when writing their article. One ECR noted trying to preempt criticism by explaining in the article why certain practices were not used, citing relevant literature. This strategy did not always work. Another ECR described being asked to do the very thing they had already justified not doing.
Overall, such strategies were sometimes effective, and sometimes not. If pushing back did not work, “caving”/“capitulating,” compromising, or submitting elsewhere were the main responses. Some described making a pragmatic decision to comply with reviewer and editor demands because they did not want to revise the article again or have it rejected:
I tend to capitulate to reviewers because I worry the revisions will get sent back again. (C162)
Tried to explain but as an ECR I often feel beholden to reviewers’ comments and pressure to address them. (C84)
This highlights the power inherent in the review process, something potentially impacting ECRs or inexperienced qualitative scholars more acutely. Some noted capitulating previously, when they were less experienced and less confident:
Earlier in my career I just sucked this up and got someone to double code a percentage. Now I refuse and instead include how the team were involved in developing coding frameworks and refining themes from the outset and defend this position epistemologically. (C115)
However, requiring incongruous practices might also “play forward” into what becomes understood as congruent or good practice. One ECR described not just capitulation in the article, but a change in how they taught qualitative methods:
In this case, I very much caved to their demands since I felt the need to get this article published as a tenure track faculty member. Further, reviewer/editor comments such as these have influenced how I teach qualitative methods—I have begun to incorporate a larger focus on interrater reliability statistics/methods to prepare my students for these expectations when trying to publish. (C83)
Others navigated a line between fighting back and capitulation: They partially addressed reviewers’ and editors’ comments, but not so far that they felt they had completely compromised the integrity of their research. For example:
Did not provide interrater reliability but did insert a few words indicating prevalence of sentiment in results (e.g., “Most participants felt …”). (C132)
Some noted their fears of offending the reviewers or the editor when responding to feedback, and the potential implications of this for the publication of their article and their career progression. This takes us to issues of power and (emotional) labor.
Power Dynamics, Loss, and (Emotional) Labor
Challenging editors is very difficult. If I had less fear of implications of offending the editor and peer reviewers, I would email back saying … that I found some of the comments as not showing a thorough understanding of the methodology and analysis (C156).
This section focuses on power dynamics in the peer review system, the impacts of these dynamics, and the emotional and other labor contributors engaged in as they navigated the system. Earlier quotations have already evoked a system in which authors feel comparatively powerless, something that, as we noted, felt more like an examination than a peer engagement dynamic. For some, this power dynamic also reflected a disciplinary/scholarly failure to accord qualitative approaches status equal to quantitative ones:
I think it is the height of arrogance that people who are not familiar with this field feel they can review it. I feel it stems from an attitude amongst some disciplines and fields that qualitative research is somehow “less than” and does not have its own rigorous methodology. (C78)
Previous quotations illustrated how the language and framing of qualitative research in peer review situated it as lesser than quantitative, as nonrigorous. Some noted that methodologically incongruent comments went hand-in-hand with, or were a veil for, other types of poor peer review practices, such as subtle racism. For example, research with an Indigenous community was characterized by a reviewer as “rather parochial” (C141). Another contributor noted Global North/South power dynamics at play:
Based on my experience and the experiences of colleagues who conduct qualitative research in the “Global South” and submit papers to “Global North” journals (even the critical ones), this is also worth noting. Often the “methodologically incoherent comments” we receive are cloaked in subtle and overt tones of intellectual superiority: write this sentence this way and not that way, cite research and methods from Global North researchers and not local sources (which may be more relevant to the subject at hand). (C143)
These accounts evoke the disproportionate harms of peer review on researchers from underrepresented groups noted by others (e.g., Rodríguez-Bravo et al., 2017; Silbiger & Stubler, 2019). Many contributors noted impacts of methodologically incongruous review. Inappropriate review comments had emotional impacts—particularly for ECRs (see also Majumder, 2016; Watling et al., 2023). C156 was “completely floored by these comments”; C30 noted they felt:
Miserable and with a decline in confidence, whereas my already raging impostor syndrome flourished.
Describing the labor of responding, some referenced the importance of tone when replying to such comments, noting practices of responding “politely but firmly” (C142), and being “kind and firm” (C87), “patient” (C99), and “respectful” (C122), and using “‘appeasing’ language” (C85) and a “professional and friendly tone” (C146). Cohering with existing reports of the psychological burden of peer review (e.g., Horn, 2016; Majumder, 2016; Watling et al., 2023), some contributors reported feeling tired and frustrated with methodologically incongruous reviewer and editor comments. Later career researchers expressed concern about the damaging psychological impact of such comments on ECRs (noted in the wider literature; Hollister et al., 2023). These types of comments made some ECRs question continuing with a particular type of qualitative research, or with qualitative research in general:
It’s very frustrating and makes me reluctant to keep doing co-production work in future because it’s so often an uphill battle with reviewers. (C162)
I think this is a serious issue—when early career researchers receive this sort of advice it can be highly demoralising especially. (C14)
Some ECRs’ concerns went beyond frustration, as they considered the impact of such reviews, and of the concomitant lack of knowledge of qualitative research in their disciplines, on their career progression. Contributors noted the time involved in rebutting methodologically incongruent comments—time that could be better and more productively spent elsewhere. Some ECRs felt they were missing out on opportunities for feedback, learning, and development, for intellectual dialogue, or for improvements to their article, because of poor peer review practices (see also Watling et al., 2023). For example, C94 noted:
I’m sure my own understanding of qualitative research methods could improve, but it is frustrating to be having the shallow, basic arguments via rebuttal rather than much more interesting and enriching arguments.
Publication obligations created extra pressures for ECRs, which can produce anxiety when receiving negative peer reviews (Horn, 2016):
On several occasions I did not have the opportunity to respond to comments that seemed extremely unfair and jeopardized my career. It takes so long to respond to such comments, especially at an early career stage. It has been hard for me personally to remain motivated in my profession as a qualitative researcher. (C146)
As an early career researcher, I have twice recently been told by reviewers that my sample, for qualitative papers, is not statistically significant. It is absolutely gutting to know your professional career is being determined by people who either don’t respect or understand it. (C33)
Some reported “having to educate” (C127) longer serving/more experienced reviewers and editors about qualitative research:
It is stressful and frustrating to need to teach reviewers about qualitative methods (when they agreed that they had the methodological expertise to review the paper in the first place), and disappointing when the editors also don’t know any better or use these reviews as an excuse to reject the paper. (C144)
Peer review is good when it’s good, but it’s often bad because you’re basically just teaching Qual 101 to reviewers/editors and it’s extremely boring. I don’t find that my work is improved or made clear, but rather that I’m engaging in labour for people who shouldn’t be reviewing my work. … It’s just extremely disappointing as a PhD/early career researcher that peer review often doesn’t involve people engaging seriously with your ideas or the literature you are speaking to. Instead, you’re just handholding some reviewer through the basics of qualitative research and everyone’s time is wasted. (C8)
One contributor expressed “dramatically” how the experience of incongruous peer review made them feel, given the real-world consequences of such challenges to the validity and integrity of qualitative research:
I’m aware of how dramatic this sounds, but the reality of reviewers not understanding this methodology and analysis means I think I’ve wasted my time doing a PhD. … If I can’t publish, it will be incredibly difficult to secure a permanent position. I strongly believe in the value of this research, but I wish I could go back in time and just not do the PhD. (C156)
Within this system, some ECRs reported that they felt obligated to comply with methodologically incongruent requests because they needed to get published:
My status as a tenure track faculty member also makes me feel pressured to get published and comply, even if I don’t agree with the reviewer requests. (C83)
This knowing compromise (see also Overall, 2015) highlights unequal power and evidences how peer review can do the opposite of what it is intended to do, working against quality. For others, the work of continuing to do and publish qualitative research with methodological integrity was paramount—the language used by C161 conveyed the effort this can take:
I feel alone in my department, and I am constantly fighting. … I’m passionate in qual, and will continue to fight—I will die on this hill!!!!
This echoes Morrill and Rizo’s (2023, p. 416) call for qualitative researchers “to hold steadfast to preserve methodological pluralism and the transformative possibilities of qualitative paradigms to resist assimilation, misappropriation, and co-option.” Although we agree, our research demonstrates the toll this can take. Across the quotations already presented, contributors’ frustrations are evident, as are the affective impacts of, and the labor required to respond to, such review. C78 evoked this starkly:
I am so tired of receiving comments like these. This is not the only occasion I have had these, just the latest.