This post offers a few comments about my style, and then examines a ScienceDaily article presenting distorted research in support of a preconceived feminist bias. In this case, as outlined in the Summary, ScienceDaily’s knee-jerk promotion of endless second-wave feminist complaints about victimization has the paradoxically misogynistic effect of encouraging predatory male behavior.
My Process of Inquiry
An Ambitious Study
Cutting Corners: Non-Representative Data
Different Male and Female Callback Rates
A Qualitative Study
My Process of Inquiry
A number of previous posts in this blog identify instances in which researchers and/or media have distorted key facts in order to support a preexisting anti-male bias. Before offering another example, it may be helpful to explain how I choose such examples.
I can assure the reader that I don’t have a research staff. I just have me. The basic process is that I subscribe to RSS feeds from several science outlets, such as LiveScience and the New York Times science section. These feeds give me one-line summaries of various articles. My RSS feeds also include a number of non-science outlets, such as various New York Times columnists, The Atlantic, and CNN. I skim through these entries, at a rate of maybe fifty to a hundred a day (I haven’t counted). If I see an entry that looks interesting, I open it up and at least skim through it.
So that’s how these articles come to my attention. I’m not going out looking for them; I’m just trying to keep up with various sources. Articles that seem to attack men, or to complain that women are disadvantaged, are likely to catch my attention. I wasn’t always interested in that stuff. I learned to be, after malicious feminists (not all female) played a key role in wrecking my academic career.
I like women; as noted in a previous post, I have often felt more comfortable with women than with men. I don’t find that the more destructive feminists reflect well upon women, nor that their thinking represents the thinking of most women. To the extent that I have become personally acquainted with really bad feminists, I have the same reaction as in my personal acquaintance with especially destructive men: often, these people have problems. As documented in some of my other posts (not just in this blog), they hurt people. I wouldn’t advocate denying their logic or their evidence; the point has more to do with balance, perspective, basic human kindness, and awareness of priorities beyond their personal crusade.
That is the background against which I once again became aware of an example of supposedly scientific sources distorting the evidence to serve an ideology. In effect, I looked at the summary of the article; something about it didn't seem quite right; I read further; and I decided to do the writeup.
An Ambitious Study
In this case, the research summary that I encountered appeared in ScienceDaily. ScienceDaily is one of a number of outlets that pass along research writeups provided by others, with little if any modification. Other such sources include LiveScience, Phys.org, and Sci-News. What is particularly objectionable about a site like ScienceDaily is usually its editorial decision to highlight one kind of article while leaving others to languish in obscurity. ScienceDaily does seem biased in favor of supposedly scientific articles that favor feminist perspectives, with little if any representation of more male-friendly viewpoints.
ScienceDaily offered, in this particular writeup, a summary of an article by Natasha Quadlin, “The Mark of a Woman’s Record: Gender and Academic Performance in Hiring” (2018), published in American Sociological Review. ScienceDaily summarized this particular research article as follows:
Stellar grades in college could hurt — rather than help — women new to the job market, according to a new study that suggests employers place more value on the perceived ‘likability’ of female applicants than on their academic success.
Following that summary, the ScienceDaily writeup began as follows:
Male applicants with high grade point averages were twice as likely to be contacted by employers as women with the same grades and comparable experience and educational background in a study from The Ohio State University.
The picture was even worse for women who majored in math. Male math majors who excelled in school were called back by employers three times as often as their women counterparts.
Already, the careful reader may be confused. Are we to understand that a single study compared men vs. women with high vs. low grades and high vs. low likeability across the dozens if not hundreds of major fields offered in the typical university? It sounds ambitious. Needless to say, the relevance and impact of male or female gender may vary greatly from one field to another (e.g., fashion, gender studies, nursing, computer science). Quadlin’s literature review states, moreover, that scholars disagree as to whether employers value grades as an indication that the student has relevant skills, or use grades to identify applicant quality regardless of learning, or don’t care about grades at all. It is unlikely that a single study could do a competent job of sorting out all that. And, in fact, that is not what happened here. It was not a single study, but rather a combination of two separate studies.
The reader might wonder why Quadlin would muddy the water by combining two studies. Her rationale was that she was using “multiple methods” to study a single phenomenon. But that was not true. The two studies addressed two separate questions. In her words, the first asked “how men’s and women’s academic performance affects their chances of advancing to the interview stage,” while the second asked “why employers make the decisions they do” (her emphasis).
Traditionally, researchers focus on a single question in order to make sure they have it right. Among other things, there are typically limits on the time and funding available. Trying to squeeze two studies into one could force the researcher to do a poor job of both. As we shall see, that was a concern in this case.
Cutting Corners: Non-Representative Data
Quadlin’s attempt to combine two separate studies into a single article may explain a rather glaring problem. The summary published on the American Sociological Association’s (ASA) website claimed that Quadlin reached conclusions about stereotypically male “fields,” plural. But that was untrue. Quadlin studied only one field that some might consider stereotypically male: math.
Far from attempting to develop a solid grasp of what was really going on in traditionally male-dominated fields, Quadlin explicitly declined to study computer science because, she felt, it was “more applied in [its] orientation” and thus would not fit within her study design. She also explicitly declined to study engineering, another stereotypically male field, on grounds that it was “overly specific.” But this was plainly backwards: first, you decide what you’re studying. Only then can you know which study methods will help you study it. If Quadlin wanted to reach conclusions about male-dominated fields, obviously, she would have to investigate them. She didn’t do that.
Quadlin decided that math was “male-dominated” because this was what undergraduates told her. That is, she says, she “conducted pretests with undergraduates to confirm that these majors’ perceived sex compositions are in line with their actual sex compositions” (her emphasis). This raises a question of whether Quadlin investigated the “actual” sex composition of math before determining that it was “male-dominated.” According to the American Physical Society (2018), women have earned more than 40% of bachelor’s degrees in math every year since the early 1970s. That’s a female minority, but not at a level that would ordinarily qualify as male “dominance” — unlike, say, engineering (19% female) or computer science (18% female). That 40%+ level certainly doesn’t support Quadlin’s rather bizarre claim that women in math are “expected to perform poorly.”
The only fields Quadlin studied were math, business, and English. She chose those because she felt that, in contrast to math, business was "sex-neutral" and English was "female-dominated." That was approximately right: men earn 31% of English bachelor's degrees and 53% of business bachelor's degrees. But as just noted, if she wanted to be able to offer opinions about male-dominated "fields," plural, she really did need data from multiple fields. Moreover, if she wanted to speak knowledgeably about "the barriers women face in STEM [i.e., science, technology, engineering, and math] fields," Quadlin herself needed to face the barrier of STEM fields like the biological and biomedical sciences, where women earn 59% of bachelor's degrees (see PBS, 2015).
In this light, the ScienceDaily summary (above) is rather ludicrous. It speaks of the callback rates for “male applicants,” as if Quadlin had studied a representative sample of male and female college graduates from across all fields. Moreover, the ScienceDaily summary misstated what Quadlin did find, even within her narrow sample. ScienceDaily said that male applicants with high GPAs, across all fields, “were twice as likely to be contacted by employers as women with the same grades and comparable experience and educational background.” But here’s how Quadlin phrased her own findings:
The callback rate for high-achieving men, as a result, is nearly double that of high-achieving women. Yet this penalty for high achievement does not apply equally to women in all fields of study. Of the majors I examined, only women in math were penalized, whereas high-achieving women in business and English did not experience a significant penalty. The callback rate for high-achieving men math majors was triple that of high-achieving women math majors.
As in my recent posts on sexual harassment on Australian campuses and equity in STEM workplaces, we have another instance in which feminist writers hype small bits of bad news while avoiding large amounts of good news. In this case, Quadlin finds that the callback rates for male and female applicants were “not significantly different from each other” overall, and that there was also no significant difference in callback rates within the subset of men and women who had high grades. The only problem she was able to find was that female math majors with high grades were much less likely to receive callbacks than male math majors with high grades.
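To make concrete what "not significantly different" means in this kind of study, the standard check is a two-proportion z-test on callback counts. The sketch below uses invented numbers purely for illustration; these are not Quadlin's raw data.

```python
from math import sqrt

def two_proportion_z(calls_a, n_a, calls_b, n_b):
    """Two-proportion z-test statistic for a difference in callback rates."""
    p_a, p_b = calls_a / n_a, calls_b / n_b
    p_pool = (calls_a + calls_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented counts for illustration: 80 callbacks from 500 male resumes
# versus 70 callbacks from 500 female resumes.
z = two_proportion_z(80, 500, 70, 500)
print(round(z, 2))  # prints 0.89; |z| < 1.96, so not significant at the 5% level
```

With samples of this size, a 16% versus 14% callback rate is well within sampling noise, which is the kind of overall result Quadlin reported before drilling down into the math-major subgroup.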
On this basis, let us review the ScienceDaily summary — the first paragraph that readers encounter, as quoted above. ScienceDaily says, “Stellar grades in college could hurt — rather than help — women new to the job market.” That is true. It is also true that Bigfoot “could” be roaming Montana at this very moment. But what Quadlin actually found was that, except in math, stellar grades do not hurt women any more than men. Math is not particularly representative of fields that are really male-dominated but, unfortunately, Quadlin didn’t want to study fields that are. In the end, it was her study. If she wanted to make claims about women in STEM fields, she needed to provide the data.
Why did gender make a difference in the callback rates for math majors, but not for the other majors? One possibility is that it didn’t — that Quadlin’s data are flawed. In this era of a replication crisis in which “scientists have found that the results of many scientific studies are difficult or impossible to replicate/reproduce on subsequent investigation” (Wikipedia), it is entirely possible — indeed, it is likely, as demonstrated by the scholarly disputes summarized in Quadlin’s own writeup — that other researchers will reach different conclusions. For instance, Quadlin did not look at the actual resumes submitted by real male and female math majors. Her attempts to imitate such resumes may have been flawed; what she chose to put into those resumes may have reflected her own biases, or her own wishes for what the study would show. It does seem suspicious that she found that smart women in math (43% female) — that is, women with abilities like her own — were extraordinarily penalized, while smart women in business (47% female) were not penalized at all. It is unlikely that such an enormous difference was due to employers’ radically different gender expectations, between two fields whose levels of female representation differed by only 4%.
Different Male and Female Callback Rates
Quadlin’s method of studying the job application experiences of recent male and female college graduates was to examine real employers’ reactions to fictitious applications. Details of the applications (e.g., information included in the applicants’ resumes) were designed to look genuine, but were in fact invented by Quadlin. In both studies, along with minor differences to keep employers from becoming suspicious (e.g., membership in different but comparably prestigious student groups), the resumes varied according to gender (i.e., Quadlin gave her imaginary people recognizably male or female names), GPA, and major field.
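The design Quadlin describes can be sketched roughly as follows. The band labels and cutoffs below are my own paraphrase of her four discussion groups, not her published materials, and the majors are the three she actually used.

```python
import random

# Rough sketch of the audit-study design as described: GPAs drawn
# uniformly from 2.50 to 3.95, then grouped into four bands.
# The band boundaries here are my assumption, not Quadlin's cutoffs.

def draw_gpa(rng: random.Random) -> float:
    return round(rng.uniform(2.50, 3.95), 2)

def gpa_band(gpa: float) -> str:
    if gpa >= 3.60:
        return "A/A-"
    if gpa >= 3.25:
        return "B/B+"
    if gpa >= 2.90:
        return "B/B-"
    return "B-/C+"

rng = random.Random(42)
# One fictitious resume per gender-by-major cell, with a randomized GPA.
resumes = [{"gender": g, "major": m, "gpa": draw_gpa(rng)}
           for g in ("male", "female")
           for m in ("math", "business", "English")]
for r in resumes:
    r["band"] = gpa_band(r["gpa"])
```

In the real study, of course, many resumes per cell would be sent to actual job ads, with cosmetic details (names, student groups) varied so employers would not notice the pattern.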
Quadlin's first study reached some interesting findings regarding the rates at which her fictitious applicants were called back for interviews. First, it seems employers were not especially concerned about men's grades. Quadlin randomly generated GPAs for her fictitious applicants, across a range from 2.50 (i.e., C+) to 3.95 (i.e., nearly straight A grades), and for purposes of discussion she divided them into four groups. Men with A or A- averages and men with B/B+ averages both received callbacks at a rate of about 16%, while men with B/B- and B-/C+ averages received callbacks at a rate of about 12%.
That may seem odd. Ordinarily, grades are believed to matter a great deal. And surely they do, to employers in more competitive fields. But in this case, Quadlin sent her applications only to employers whose ads appeared "in the job categories 'entry level' or 'general.'" Among the ads, she said, 30% did not even require a college degree. What she studied, then, was not the job market for college graduates who chose one among the many specific fields for which universities provide a targeted education (e.g., teaching, exercise physiology); nor did Quadlin study the job market in fields (e.g., engineering, applied statistics) where employers would tend to care greatly about hard knowledge demonstrated by high grades.
Given that context, it is rather preposterous to assert, as ScienceDaily (above) does, that Quadlin found that “Stellar grades in college could hurt — rather than help — women new to the job market.” Quadlin simply didn’t study that. She only studied the catchall “general” and “entry level” job categories familiar to idealistic, unfocused, and non-career-oriented college graduates (e.g., those who majored in “social policy,” like Quadlin). In these catchall categories, the five most common job types, in decreasing order, were Sales, Analyst, Administrative Assistant, Customer Service, and Human Resources. It is not surprising that these employers didn’t care all that much about an applicant’s grades. Grades — indeed, the degree itself — mostly appear to have been just one factor, along with the fictitious summer internships and other details with which Quadlin filled her applicants’ resumes.
Unlike the situation with male applicants, Quadlin found that employers did evince some sensitivity to the grades earned by female applicants. Women in the B grade ranges (B- to B+) received callbacks at rates of about 13% to 17%, whereas women who averaged A/A- and those with grades pushing down into C+ territory received callbacks at rates of only about 8-10%. From this, as just noted, Quadlin trumpeted the disparate treatment of women like herself, with high grades; she was less concerned about women with C+ grades. I mention that because of the frustration that many women of color have voiced with second-wave feminism: it has often been a movement of relatively privileged white women who concern themselves with corporate glass ceilings rather than the minimum wage.
When you drill down into Quadlin’s data for specific majors, however, you find that the patterns just described did not hold consistently. Callback rates were virtually identical, between male and female English majors, at all grade levels, except that males with A grades were about twice as likely to receive callbacks as females with A grades. Among business majors, women with the lowest grades were less likely to receive callbacks; but those with the highest grades were just as likely as males to receive callbacks, and in the B grade ranges women enjoyed a substantial advantage over men in callback rates.
I was nonplussed to read Quadlin’s claim that, for business majors, “Within each level of GPA, the callback rates for men and women do not differ.” Here is her graph on that:
The graph indicates the opposite of what Quadlin said: women with B-/B grades (flagged with an asterisk signifying that their callback rate was "significantly different from baseline within gender") had callback rates of over 21%, while the rate for males with the same grades was a little more than half that. Such findings suggest, again, that Quadlin should have studied more majors, so as to arrive at a clearer sense of whether employers display any consistent reaction to grades, gender, and field of study.
Two out of Quadlin’s three fields of study (i.e., English and math) did display the patterns that she personally considered most important: men who had A grades received callbacks at two to three times the rate of women with A grades, and (as in the business graph, above), women with A grades were actually less likely to receive callbacks than women with B grades.
The reader may immediately suspect one possible explanation for such a pattern: overqualification. For these kinds of generalist jobs, potentially low in gratification for the relatively desperate generalist college graduates who are most likely to seek them, employers may realize that A students are not likely to stay around. Another possibility is that, if they did stick around, they might expect raises that the employer was not prepared to give, or might be too geeky for the kinds of customer-oriented work where grades would be less important.
Quadlin does look at overqualification. We will get to that shortly. But on the question of how these employers actually reacted to these applications, we don’t know because Quadlin didn’t investigate it. Instead of attempting a multi-method study on two different research topics, she might have been well advised to arrange a follow-up survey, contacting employers who were deceived by her fictitious resumes, to notify them of her work and to learn more about their reactions. Doing so might have satisfied traditional expectations of research ethics, which are intended among other things to promote public trust in scientific research. As summarized by Boynton et al. (2013),
Deception in psychological research is often stated as acceptable only when all of the following conditions are met: 1) no other nondeceptive method exists to study the phenomenon of interest; 2) the study makes significant contributions to scientific knowledge; 3) the deception is not expected to cause significant harm or severe emotional distress to research participants; and 4) the deception is explained to participants as soon as the study protocol permits.
Since we don’t have the data from any such follow-up survey, we simply don’t know why those particular employers reacted to those particular resumes as they did. Thus, among other things, we don’t know how many didn’t call back because they suspected that the resumes were fake.
Quadlin tried to find sexism in the pattern where A-grade female math majors received callbacks at a far lower rate than A-grade male math majors. As quoted in ScienceDaily, Quadlin said,
There’s a particularly strong bias against female math majors — women who flourish in male-dominated fields — perhaps because they’re violating gender norms in terms of what they’re supposed to be good at.
But that explanation fails: not only is math not “dominated” by males, but (as noted above) males in English likewise received callbacks at a much higher rate than females in English, contradicting Quadlin’s theory that the male-female difference is math-specific. Her only remark on the different callback rates for English majors — a suggestion that perhaps “men who excel in English or business are not as well-regarded” — is inconsistent with her own evidence that such men, in English, are quite well regarded. The explanation for that seeming contradiction may be that, as Quadlin says, her sample size for English majors was too small to calculate the impact of grades and gender — indicating, again, that she would have been well advised to focus on gathering enough data to do a good job with her first study, rather than inflate her meager findings far beyond the conclusions they could actually support.
A Qualitative Study
Quadlin’s second study involved 261 college graduates who routinely made company hiring decisions as human resource personnel, business executives or owners, or midlevel managers. As she noted, the results of this study “cannot be generalized to a broader population,” because these 261 people were not drawn at random from the population of human resource professionals. That is, the results did not provide a reliable impression of the typical thinking or behavior of human resource professionals confronted with a situation like the one Quadlin studied. That severely limited the usefulness of this study. In the words of the National Science Foundation (2005), “There are sometimes scientific reasons to conduct non-random surveys, but they are often unscientific.”
In this study, Quadlin gave each of those 261 hiring decisionmakers two same-sex resumes (either two males or two females, so as not to cue the gender issue) and asked them to rate each applicant. She asked, in effect, which personal traits (i.e., competence, likeability, hardworking nature, commitment, social skills) seemed strongest and, generally, why the applicant should or should not be interviewed for an entry-level position. As one might expect from a nonrandomized study, some of the results of this second study were consistent with those of the first study, and some weren’t. Despite acknowledging that these results did not provide insights that could be accurately generalized to anyone other than these 261 decisionmakers, Quadlin proceeded to discuss them as though they were established facts about male and female jobhunters.
Sometimes a mixed-methods study can yield insights that would not be visible in a single-method study. For instance, if Quadlin had followed up with the employers in her first study (above), she might have pursued these same questions with them. Doing so might have yielded important insights if, for instance, she had homed in on those employers who did not express interest in her fictitious high-GPA female applicants.
Quadlin addressed the question of whether employers were less likely to contact high-GPA women for fear of overqualification. In this context, “overqualification” could mean two different things: (1) employers believed that such women would be very good employees, that they would have many other job opportunities, and therefore that it would be a waste of time and expense to interview them; or (2) employers believed that such women would be inferior, as employees, to women with somewhat lower grades.
To decide between those two possibilities, Quadlin used the salary data that she had collected in her first study. Unfortunately, this approach was flawed. Only 40% of job ads provided any salary numbers and, of those, an unspecified percentage provided only a salary range; Quadlin did not say how wide those ranges might be. Rather than limit her analysis to jobs with a specified salary, Quadlin chose to leverage the salary numbers she did have to estimate salaries for all of the jobs that provided no salary information. Her first step was to assume, where a job ad provided only a range, that the successful applicant would be paid the midpoint of that range; it appeared she did not attempt to find out whether the midpoint would be an appropriate value for an entry-level employee. Then, although her procedural explanation is rather cryptic, it appears she used those estimated numbers to produce estimates for all jobs of similar types. Some of the job "types" she used were extraordinarily vague, and could vary greatly in salary. For instance, at this writing, Glassdoor reported that salaries for "analyst" jobs (i.e., the second most frequently occurring category in the jobs Quadlin studied) requiring zero to one year of experience ranged from $41,000 to $91,000.
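As best I can reconstruct it from her cryptic description, the imputation procedure amounts to something like the following sketch. The job types and dollar amounts are invented for illustration; they are not Quadlin's data.

```python
from statistics import mean

# Hypothetical job ads: some state a salary, some only a range, some nothing.
ads = [
    {"type": "analyst", "salary": 55000},
    {"type": "analyst", "salary_range": (41000, 91000)},  # range only
    {"type": "analyst"},                                   # no salary info
    {"type": "sales", "salary": 38000},
    {"type": "sales"},
]

def known_salary(ad):
    """Step 1: take the stated salary, or the midpoint of a stated range."""
    if "salary" in ad:
        return ad["salary"]
    if "salary_range" in ad:
        lo, hi = ad["salary_range"]
        return (lo + hi) / 2  # midpoint assumption; may misstate entry-level pay
    return None

# Step 2: impute missing salaries from the mean of known salaries
# within the same (often very broad) job type.
type_means = {}
for ad in ads:
    s = known_salary(ad)
    if s is not None:
        type_means.setdefault(ad["type"], []).append(s)
type_means = {t: mean(v) for t, v in type_means.items()}

imputed = [known_salary(ad) or type_means[ad["type"]] for ad in ads]
```

The weakness is visible even in this toy version: a single wide range (here $41,000 to $91,000) drags the "analyst" estimate around, and every ad with no salary information inherits that guess.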
Thus, Quadlin used a very speculative method to generate salary numbers that could be completely mistaken. It appeared that she did not consult relevant literature on her proposed procedure, but instead just made it up as she went along. For instance, Li et al. (c. 2016) outline some of the complications that can arise in a calculation of this sort. Once again, there can be a great difference between a study that throws together multiple half-baked analyses and one that concentrates on gathering good data (in this case, salary data) to produce a fairly defensible answer to a single question. Thus, it seems the most likely reason why other scholars have not reached what Quadlin describes as her new “notion” that women are punished for earning high GPAs is that Quadlin is simply wrong about that. Even if one were to accept Quadlin’s patently flawed calculations at face value, one would probably not follow her to the extreme conclusion that employers in every kind of job, ranging from science and engineering to social work and teaching, were more likely to interview a low-GPA man than a high-GPA woman.
In short, it was very misleading for Quadlin to write an abstract, for public consumption, asserting that “achievement invokes gendered stereotypes that penalize women for having good grades,” without explaining that her study’s findings — even if she had data to support them — applied only to women like herself, who accumulated high GPAs while earning B.A. degrees in fields (e.g., English, policy analysis) that would leave them competing in the job market against high school graduates.
The core concern addressed by Quadlin’s article is that female applicants with high GPAs are called back for interviews at lower rates than others, including notably men with high GPAs and females with moderate GPAs. As her abstract summarizes, “high-achieving women are most readily penalized when they major in math.” Her explanation is that “achievement invokes gendered stereotypes.” As just noted, her abstract failed to clarify that her research was limited to a backwater of nondescript entry-level jobs for administrative assistants, clerks, and salespeople.
It was unfortunate that Quadlin reached conclusions about gendered stereotypes: her research did not produce any generalizable data on that subject. Using Wu (2017) as an example, a study of stereotypes in the labor market might start by collecting and analyzing various spoken or written communications. Wu analyzed a dataset of 2.2 million posts in online discussions, to explore gender stereotyping in a forum focused on jobs in the field of economics. Speculation is no substitute for that sort of effort.
From the 261 hiring personnel examined in her second study, Quadlin did collect some comments supporting her impressions of employers who “relied on their perceptions of likeability and social skills to pass over high-achieving women.” Quadlin found, in effect, that employers were about 2.5 times more likely to express concern about likeability when appraising a high-GPA woman than when appraising a high-GPA man.
Let’s suppose, for purposes of discussion, that that finding was generalizable — that Quadlin’s second study had produced data providing a useful estimate of how employers actually behave, at least for high-GPA women in Quadlin’s “entry level” and “general” job ad categories. What would such a finding tell us? Quadlin thinks it says that employers are guided by “gendered stereotypes” against women who earn good grades in the supposedly (but not really) male field of mathematics.
But there is another possibility, and it really is unfortunate that Quadlin didn’t consider it. This other possibility is that women who have good grades are being penalized for being disproportionately unlikeable. What is needed, on that point, is data on the question of whether higher- and lower-GPA women behave differently in jobs for salespeople and administrative assistants. Before arbitrarily overruling employers with an assumption that we know their business better than they do, it would make sense to watch and learn. For all we know, what employers are failing to realize may be, not that high-GPA women are actually good for business, but that high-GPA men actually aren’t.
Why would women with high GPAs be unlikeable, from an employer’s perspective? Quadlin’s own rhetoric offers some possible reasons. Consider, for instance, her statement:
Women are also perceived as less committed to their jobs than men, which further disadvantages them in the hiring process. . . . The few employees who are able to live up to this norm are disproportionately men, as women are often expected to juggle competing demands on their time (including, but not limited to, housework and childcare) that prevent them from focusing exclusively on their jobs.
Notice the passive voice, the victimization pose: “women are often expected.” Is it really the case that women in today’s America are knuckling under to have children because that’s what their parents and spouses expect? Those raised in very traditional families may be. But the Huffington Post (Gross, 2015) suggests, to the contrary, that many women are choosing not to have children for a variety of reasons, including this one. If women like Quadlin are torn between domestic and career lives, it is usually because they chose to be there: they put themselves in a situation where, as she acknowledges, they find it hard to compete against more singlemindedly careerist males. There is nothing sexist about that: the same is true for males who prioritize family life or, for that matter, who just don’t want to spend their lives so focused on their careers.
An employer, encountering the sort of attitude manifested in those words from Quadlin, may have a legitimate concern that — as discussed in one of my other posts — some women with graduate degrees (and perhaps also high-GPA BAs) have a disproportionate sense of entitlement, believing that they have a right to be hired just because they are females with academic credentials. For that matter, from an employer’s perspective, you have to ask yourself who’s more likely to sue you for any reason, real or manufactured or perhaps just imagined: a low-GPA man with no other job options, or a high-GPA woman like Quadlin, who seems very eager to find evidence that you and the rest of the world are biased against her?
Under such circumstances, it is not really surprising that, in Quadlin’s words, men “are able to be perceived as competent and likeable simultaneously.” Among other things, men are not “protected” by, and their presence does not serve as a constant reminder of, the many legal and nonlegal means by which people like Quadlin seek to find fault with employers and jobs. It’s not that Quadlin et al. have failed to identify real problems in today’s jobs and workplaces. It is that, as demonstrated in the preceding paragraph, they are misrepresenting those problems by ignoring their effects on males as well as females, for the purpose of making a false case that women are being unfairly attacked.
As Quadlin observes, “[M]any women earn high grades in college.” That is partly because many women study hard, and partly because grade inflation is rampant in college. To the extent that women outnumber men among students with the best GPAs, they — especially those just emerging from school, into the kinds of jobs that Quadlin studied — may ironically be disproportionately disadvantaged by an unrealistic expectation that success in real life will be like success in school. Having had the experience of teaching college students who lost their minds when confronted by an unfamiliar type of question on an exam, I for one would have been interested to see Quadlin actually explore the interface between the high-GPA woman and the difficulties of her workplace, or at least cite other research on it, rather than rest upon tedious assumptions of sexism.
Here’s another Quadlin quote, reflecting on the topic of likeability from another perspective:
Although moderate-achieving women receive a premium in hiring, these women’s long-term career prospects are less promising. Because moderate-achieving women benefit as a result of their personalities — and not their ability — they may not achieve the same level of pay, responsibility, and general esteem as other workers, allowing subtle forms of gender inequality to persist.
In this instance, Quadlin appears to accept that moderate-GPA women probably do have more appealing personalities. Her argument here seems to be that likeability is different from ability; that ability rather than likeability is important for purposes of achieving “the same level of pay, responsibility, and general esteem” as men; and, thus, that the hiring of women with moderate GPAs disserves womanhood. Are these beliefs accurate?
We would be more able to answer that question if Quadlin had collected any data on it. Her study, including her review of previous literature, is focused on hiring, not on career development. There seem to be a couple of problems with her belief, though. One is that it seems contradictory to say that women who are more likeable may somehow not receive as much “general esteem” as “other workers.” Another is that the assumption fails in fields like sales, where likeability may be an essential ingredient of career success.
What Quadlin probably means, there, is that likeable women with mediocre grades are not going to wind up in positions of leadership. That, too, is probably false for those who are highly motivated to succeed. They may have been going to school just to get a degree, meanwhile working hard on developing their own independent careers. In that sense, it would be interesting to see a study of female graduates’ personality traits, GPAs, and career outcomes five years post-graduation.
Narrowing it further, maybe Quadlin was speaking of what it takes for a woman to make it to the executive suite within a typical corporation. There, she may have a point. For instance, Wille et al. (2018) found that “men and women in executive positions demonstrate a similar pattern of classically masculine personality traits.” In other words, if male leaders manage to be perceived as simultaneously competent and likeable, then perhaps female leaders have to figure out how to do that too. This would be a logical expectation, in a society that continues to value the most predatory males. But it is ironic in the extreme to arrive at the conclusion that feminism will only have succeeded when it produces women who excel at behaving like men.
That said, it appears that Quadlin may be uncertain as to how those highly successful women do, or should, behave. Her concern is that women are at risk of being perceived as “uptight or ‘bitchy’” if they master such skills as “being organized, meeting deadlines, and following rules.” Those are indeed among the skills of a successful student and of a good “worker bee.” They may be what Quadlin has needed, or what any bureaucrat would need, for a midlevel position in a university or government agency. But it should not be surprising if those traits seem smallminded or fussy, given the finding, by Wille et al. (2018), that “executives (male and female) are consistently characterized by mainly agentic personality features” such as dominance, responsibility, achievement, and self-assurance. Thus, Quadlin may herself illustrate a consequence of the feminization of school, by which academia finds it convenient to encourage docile, stereotypically female behavior, rather than the potentially brutal personality traits needed for the kind of career success that Quadlin thinks women might, or should, prefer (e.g., Steinmayr & Kessels, 2017).
Summary
Natasha Quadlin of Ohio State published a social science journal article reporting on her research. The concern addressed in that research was that employers discriminate against women with high GPAs, especially those who major in math. Quadlin theorized that this discrimination is due to a sexist determination to suppress women who intrude upon male-dominated fields like math. That theory seemed unsupportable, not only because women have successfully intruded upon many traditional male fields in recent decades, but also because math as a field of study has not been dominated by males since the 1960s.
Quadlin provided evidence that employers discriminate against women with high GPAs. Regrettably, her evidence was limited to employers’ reactions to resumes from fictitious job applicants who majored in English, math, and business and who were responding to ads in the catchall “entry level” and “general” job categories, explicitly ruling out more career-oriented majors and ads. Sadly, even within that narrow scope, her argument was not supported by her own data regarding employer responses to high-GPA females with degrees in business. To the contrary, those females were received as well as or better than males, at all but the lowest GPA level.
The narrowness of Quadlin’s actual work did not prevent her from making absurdly broad statements about female college graduates as a whole, as quoted above. Regardless of Quadlin’s own ethicality, it is hard to understand how the editor of the American Sociological Review could have approved for publication an article whose abstract made claims about the entire population of male and female college graduates, when Quadlin did not even collect data on that population.
If Quadlin wanted to make broad claims about female job applicants, she should have studied employers’ reactions to applications from a representative sample of college majors, seeking a representative selection of jobs. It would have made sense to pay particular attention to the job application results of graduates with degrees in extremely sexist fields like social work, as well as gendered but less sexist career-oriented fields like nursing and engineering, along with reportedly gay-friendly fields like criminology and journalism. Quadlin’s writeup pales by comparison against the openminded and nuanced discussion of the complex tapestry of gender in employment that such an exploration might have produced.
In addition to problems of representation across majors, Quadlin’s research had problems of logic. Her primary argument seemed to be that employers were biased against high-GPA females due to concerns for likeability. Remarks in her text suggested that while Quadlin resented the suspicion that high-GPA females might be less likeable than high-GPA males and moderate-GPA females, she simultaneously recognized that it might be true. In any case, Quadlin did not investigate actual likeability. But that did not prevent her from arguing that likeable women with moderate grades had lower odds of achieving high levels of career success and that, in effect, hiring them would perpetuate patriarchal suppression of women as a whole. Such reasoning would obviously be mistaken where likeability is highly conducive to career success (e.g., in sales). It would also be mistaken for many women whose mediocre grades were a consequence of extracurricular motivation (e.g., developing valuable work experience and contacts, or actually building their own businesses on the side, during college).
Ironically, it seemed that Quadlin’s argument on likeability was most credible for those women whose high GPAs would facilitate their rise to the executive suite in a typical corporation. This was ironic because Quadlin, herself, did not choose that path. To the contrary, she expressed an appreciation for worker-bee qualities that might help her succeed as a professor or midlevel manager, but that appeared not to be among the primary traits associated with excellence in executive leadership. In this regard, her advocacy on behalf of the most ambitious women seemed especially abstract.
Quadlin’s work did raise a number of interesting questions, including some that she apparently did not intend. I sympathize with the experience of finding that the number of questions, and their ramifications, can far outpace the scholar’s ability to explore and resolve them, one at a time. Without denying that multimethod research can help the researcher to explore diverse facets of an issue, there is also the risk that it can become something of a stunt. As in the circus, of course, sometimes stunts do work. They can contribute, for instance, to the odds of academic publication and of promotion through websites like ScienceDaily — especially where the researcher chooses to exploit an established academic bias, such as the prolonged indulgence of the aggrieved feminist.
Notwithstanding such marketing calculations, it became apparent, in a number of instances, that Quadlin’s work would have been much improved if she had been more determined to address and resolve a single question. One way to narrow the focus would have been to drop the sexism of second-wave feminist ideology and try to view the relevant issues through gender-neutral eyes. For instance, it is extremely obvious that women are not the only ones who are disadvantaged in a labor market that essentially denigrates the pursuit of a well-rounded or balanced life.
When all you’ve got is a hammer, everything starts to look like a nail. But a researcher who was less religiously devoted to feminism über alles and more concerned about people, individually and as a society, might stop to ask whether it makes sense to perpetuate the common fondness for predatory males. A fair case can be made that strongman leadership is responsible for, and thrives upon, much of the dysfunctionality of contemporary America. People who care about women, and about the influence that women can have on this world, may rightly conclude that it has been inordinately stupid to treat ordinary, non-predatory males as enemies rather than as allies with largely similar priorities vis-à-vis such predators.
That topic arises here because, as just indicated, feminists like Quadlin dwell upon the question of how the most talented women can become more like those men in the executive suite. One might hope or imagine that, once in power, such women would change their stripes and depart significantly from the behaviors that got them there. I would certainly like to see research supporting that wish. What appears more likely is that so-called feminists like Quadlin are actually working to neutralize the potential impact of women by discouraging women from developing traits (e.g., likeability) that may be natural strengths.
Science news outlets like ScienceDaily serve the important purpose of sifting through the torrent of research to find and summarize works of particular importance. And yet, like tuna and other top predators in the animal kingdom, ScienceDaily also accumulates in itself the corruption that adheres to so-called research as it passes through the processes of academic review and publication. In other words, Quadlin’s article demonstrates that ScienceDaily has become so thoroughly fouled by the endless complaining of second-wave feminists as to accept more of the same without question.
Looking back through the publication process, we can just tick off the boxes, as they accrue along the way: published in a supposedly scientific journal? Check. Identification of another way in which women (or at least the most successful women) are allegedly repressed? Check. Verify that the reported research actually supports the claims being made? Never mind. Send propaganda and hype back for a rewrite? Uh, no, that won’t be necessary. Fire professors who produce such palpable falsehood? Almost never, except maybe if it entails financial abuses covered by the New York Times. Treat character as an essential trait of the scholar, throughout the process of enrolling and training future PhDs? That would require principled faculty, and university administrations with backbone. Ha! Maybe in another lifetime.
One can fairly hope that someone at ScienceDaily, smelling the coffee, will someday wake up and realize that the truth is not the enemy. Until then, it seems, the reader who encounters feminist propaganda masquerading as science must ask him/herself, What are they hiding? What would I be seeing, if I weren’t forced to see this?