Response to “The many diversities of the history of science discipline”

Emily Klancher Merchant, UC Davis

This essay is in response to the contribution by Yves Gingras.

“The many diversities of the history of science discipline” by Yves Gingras provides an interesting overview of the geographical distribution of, and gender imbalance among, scholars who publish in history of science journals. The critique in which it is embedded, however, seems to misunderstand the purpose of last year’s Open Conversation. In this response, I will clarify the goals of the conversation and the intention of my own contribution, and I will consider what Gingras’s critique adds to the original discussion.

Contributors to last year’s “Open Conversation” were asked to address the following question: given that the editors of Isis are committed to increasing diversity in the journal, would collecting demographic data from scholars who submit manuscripts help them do that? We were asked to bring expertise from our own research to weigh in on whether it is possible to collect and use demographic data in a responsible way, given the historical role of data collection and analysis in perpetrating and perpetuating various forms of oppression. Our focus on Isis was therefore not an accident, a result of our lacking reflexivity, or a symptom of our self-centeredness, as Gingras charges. Rather, it was the instruction we were given.

Yet Gingras’s critique does suggest that we should have more coherently theorized the relationship between Isis and the history of science. As Gingras points out, Isis authors are not a representative sample of the much larger set of authors he identifies in the thirty-five journals he analyzes. The point that I was trying to make in my own contribution is that, in the age of bibliometrics and computational text analysis, analysts may be tempted to substitute people who have published in a particular journal or set of journals for a census of the field represented by those journals, which is what Gingras himself does. (I have done this too in my own work.) Certainly, Gingras’s sample is much broader than Isis itself, but it is neither equivalent to nor representative of the sample one would obtain by writing to all of the world’s universities and asking for a list of people working in the history of science. This is not to say that one sampling method is necessarily better than another, but simply that editors must bear some responsibility for the ways researchers will use journal metadata. Gingras’s analysis suggests that research on the history of science as a field should not rely solely on Isis. However, since Isis is the official journal of the History of Science Society, any such study would be incomplete if it didn’t include Isis. The same could not be said for most of the other journals on Gingras’s list.

The Open Conversation at times slipped between “Isis” and “the history of science” as if the two were coterminous, or as if the relationship between the two were clear. Neither is the case. Isis may not represent the field in a statistical sense, but, as the official journal of the HSS, it does purport to speak for the field, representing it in a political sense. The editors of Isis claim that the journal’s readership encompasses the history of science as a whole; articles that speak only to historians of a particular type of science or to historians of science in a particular time and/or place are not published. Publishing in Isis is therefore a signal to search committees and tenure and promotion reviewers that a scholar’s work has significance for the history of science writ large, beyond the specific research topic. The editors of Isis thereby serve a kind of gatekeeping function in the field that editors of many other journals do not, even though, as Gingras points out, they don’t directly create jobs. As such, Isis plays a performative role in the history of science, and must therefore take at least some measure of responsibility for the field’s diversity or lack thereof. Isis is not equivalent to the history of science, but its actions have consequences for historians of science, and its editorial decisions influence the field as a whole. Yet it is also possible that Isis could lose its stature and power within the history of science if it doesn’t embrace the growing diversity of the field: if it remains “too white” (a phrase that came up repeatedly in the Open Conversation) and “too male” (a phrase that didn’t come up, but is equally valid).

As Gingras notes, the Open Conversation focused largely, though not exclusively, on race, with several contributors highlighting Isis’s lack of racial diversity as particularly problematic. Gingras points to other potential axes of diversity: geographic and gender diversity, which he analyzes, and thematic and economic diversity, which he does not. Data about geography and gender are already available, which is what made Gingras’s analysis possible. What Gingras couldn’t do with published articles, but what the editors of the journals in his sample could do, is compare the gender and geographic distribution of scholars whose work is published with that of scholars whose work is rejected. Simply knowing who publishes, and is therefore included in the field, doesn’t tell us anything about whose manuscripts are rejected, and who is thereby excluded from the field. The same type of analysis could be done to evaluate thematic diversity (of both accepted and rejected manuscripts) using topic modeling or another type of algorithmic content analysis. But the existing data aren’t ideal: assuming a person’s gender on the basis of their first name reinforces the male/female binary and risks putting people into categories with which they do not identify, and the geographical location of a person’s institutional affiliation tells us nothing about their own place of origin or native language, information that would also be relevant to the project of diversifying a journal or discipline. Hence the Isis editors’ desire to collect their own data.
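As a purely hypothetical sketch of the comparison described above (one that only journal editors, who hold the rejection data, could actually perform), the following Python snippet computes a Pearson chi-square statistic for a 2x2 table of submission outcomes by inferred gender. Every count here is invented for illustration, the function name is my own, and all of the caveats about name-based gender inference apply in full.

```python
# Hypothetical illustration: comparing the gender distribution of accepted
# versus rejected manuscripts. All counts are invented; no real journal
# data are used or implied.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table:

                accepted  rejected
        women      a         b
        men        c         d
    """
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [
        (a + b) * (a + c) / n,  # women, accepted
        (a + b) * (b + d) / n,  # women, rejected
        (c + d) * (a + c) / n,  # men, accepted
        (c + d) * (b + d) / n,  # men, rejected
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Invented counts: 35 women and 65 men accepted; 60 women and 80 men rejected.
stat = chi_square_2x2(35, 60, 65, 80)
print(round(stat, 2))  # 1.51
```

A large statistic would suggest that acceptance rates differ by inferred gender; a small one, as in these made-up numbers, would not. The substantive point stands either way: without the rejected-manuscript counts in the right-hand column, no outside analyst can run this comparison at all.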

Given that the Open Conversation occurred among mostly U.S.-based scholars in the spring of 2020, it is perhaps unsurprising that we had race on our minds. But we also had scholarly reasons for focusing on race. As we demonstrated in the Open Conversation, racist and anti-racist projects have historically drawn heavily on taxonomic categories and on statistics, so race is a particularly vivid example, not only of how data are socially constructed, but also of how they are constructive of the social worlds they purport to describe. In his response, Gingras states his own awareness “that some scholars assume that all statistics are ‘socially constructed,’ hence all numbers may be problematic, misleading or even completely useless.” However, his use of scare quotes around “socially constructed” and the second clause of the sentence indicate a misunderstanding of what the term means in the history of science. Statistics are socially constructed because science (like everything else) is a social activity. The categories into which we classify things and people are the outcomes of social processes. This is as true of characteristics like age and country of residence as it is of race and gender: how old you are depends on what country you are in, and what country you are in has been determined by a long history of wars, treaties, migration flows, and other geopolitical events. But when it comes to people, the process of classification is recursive, or “looping,” to use Ian Hacking’s term. People interact with the categories that are available to them. When a state carries out a census, it doesn’t just ask people to check boxes; in so doing, it tells us what the categories of humanity are and requires us to think about ourselves in those terms (whether that means identifying with them or rejecting them).
To borrow from Hacking again, classification “makes up people.” The Open Conversation centered race both as an important axis of diversity in our current world and as a well-researched example of the intended and unintended effects of human classification and the potential dangers of collecting (and not collecting) demographic data.

As Gingras suggests, our focus on race left largely unexplored other axes of diversity that Isis might usefully work to enhance. As I noted in passing above, several contributors described Isis as “too white,” but nobody described it as “too male,” which Gingras’s analysis demonstrates it is (35% of authors published in the journal between 2010 and 2018 have feminine first names, though we don’t know the corresponding proportion among people whose submitted manuscripts were rejected). As historians of science, we certainly understand the processes through which women have been excluded from science; many of the same processes have kept women out of the history of science, though it is also true that many women have been redirected from careers in science to careers in the history of science. Similarly, we know how women’s exclusion has biased the development of science; the same is certainly true for the history of science. These are points that I made somewhat more obliquely in my contribution to the Open Conversation. Yet when it comes to thinking about the potential consequences of collecting demographic data, gender was not the first axis of diversity that came to my mind. This may be because, until recently, sex/gender categories have not been subject to anywhere near the same degree of critical attention as race categories. It may also have something to do with the conceptual distinction between gender and sex, which, again until recently, suggested incorrectly that the latter was straightforwardly biological while only the former was socially constructed. Certainly, Gingras’s response to the Open Conversation indicates that gender is an axis of diversity to which Isis (along with other journals in the field) needs to attend. Its relative absence from last year’s discussion also suggests that it is high time to consider the statistical production of gender and sex in the same ways we think critically about the statistical production of race.


Emily Klancher Merchant is an assistant professor in the Science and Technology Studies department at the University of California, Davis. She is the author of Building the Population Bomb (Oxford, 2021); her work focuses on the quantitative human sciences and technologies of human measurement in the nineteenth and twentieth centuries.