Author: Marianne Franklin
Bruce Berg puts it well when he notes that a more productive way of proceeding is working with a ‘model that encompasses both the research-before-theory and theory-before-research models’ (Berg 2009: 26). This is, not unlike Holmes in fiction and many scientists and scholars in everyday life, a mixture of inductive and deductive modes of reasoning with preferences and their consequences emerging along the way.
In the above section we looked at the differences between inductive and deductive modes of reasoning, and at how these approaches diverge in theory yet overlap in practice over the course of research projects, where the gathering of primary data is a core activity that largely, but not exclusively, precedes its analysis.
Alongside these two paths runs another set of debates about the nature of, and the ‘special status’ claimed by, scholarly research. Chapter 3 addressed some of these issues in terms of distinctive worldviews (Chapter 3, 000 section). Chapter 6 and the first part of this chapter also looked at the practical implications of not only divisions but also overlaps between how quantitative and qualitative sorts of evidence relate to particular rules and procedures for gathering and then analysing this evidence, including the sorts of conclusions, generalizations or specific details, that a researcher can expect to draw from their findings.
There is a larger issue, one that could be seen as the quintessential marker of the quantitative–qualitative divide if we were to step back and take a bird’s eye view: the difference between research based on behaviouralist understandings of human nature and behaviour, and research based on constructivist ones. How these differences become evident within and across disciplines, even departments, differs from place to place and generation to generation, their more starkly drawn incommensurability often emerging during public debates, peer-assessment procedures, or in research seminars.
What are the key differences and why do they matter?
Working example: if we take sex-gender roles, indeed the whole way in which the term gender is conceptualized within these two schools, as an example, the implications of these differences are that:
Both approaches could rely on the accumulation and use of quantitative or qualitative sorts of evidence, although constructivist research tends to favour the latter. The key difference boils down to diverging ways of drawing conclusions about first how people, individuals or groups, and societies operate and, second, explanations for why they do so.
Let’s zoom in on the behavioural approach. For the purposes of this discussion the focus is on the historical link between behaviouralist psychological models and behaviouralist approaches to social research. Both are based on a core question: ‘Why do people behave the way they do?’ (Sanders, cited in Marsh and Stoker 2002: 58). This is different from asking another core question: why do people think the way they do? Or: why do people think or say one thing yet behave in quite another way? Moreover, answers to the first question will differ according to the underlying, if not explicit, psychological approaches to why human beings behave the way they do.
Two operating principles distinguish behaviouralist from non-behaviouralist takes:
Any theory purporting to explain the political (or social) behaviour under study must be empirically, in this case meaning quantitatively, verifiable. In behaviouralist psychology, political science, and media and communications studies we see then how this interest in why people behave the way they do (based on what they say and observations of what they do) has become equated with quantitative methods and large-scale data-gathering techniques.
However, the behavioural movement’s embrace of quantitative techniques, and with that its claiming of the latter as the only way to understand the notion of empirical (and so ‘objective’) knowledge, has not gone unchallenged from within those disciplines where it has pride of place. For example, in political research, theorists and political philosophers were quick to suggest that those working in this tradition had lost touch with the real world of politics and the best way to study it, because of their increased reliance on refining data-gathering techniques (see Barber 2006, Berns 1962, Lippmann 1998 [1922]). As far back as the 1960s the tendency to create sophisticated survey instruments was to some commentators tantamount to ‘the sacrifice of political relevance on the altar of methodology. The questions asked and pursued are determined by the limits of the scientific method rather than by the subject matter’ (Berns 1962: 55).
This view has been echoed in other parts of academe. Critiques have been developed, and rich literatures and research traditions have evolved out of this fundamental difference about the aim of scholarly research and its claims to produce a particular sort of superior knowledge: from feminists and postcolonial scholars coming of age in the 1960s to generations today who stake their scientific reputation, and thereby their claim to engage in scholarship as a socially and politically worthwhile enterprise, on treating the ‘subject matter’ as the primary objective; method follows these matters.
Whether quantitative or qualitative approaches to investigating these matters do, or should be confined to the behaviouralist side of the divide is a moot point. As I note in the opening chapters of this book and all the way through, endeavouring to quantify something is not in itself a pointless exercise for a non-behaviouralist research project. Nor should the incorporation of survey data or quantitative content analysis necessarily relegate your project to the ‘enemy camp’; conversely, neither should
undertaking an ethnographic study that includes questionnaires or other uses of statistics turn you into a ‘woolly postmodern’ thinker. In many respects, research in the humanities and social sciences has moved past these polarizations as the emergence of mixed method, multi-sited analysis, or hybrid research designs and guides show.
BOX 7.2 COMPOSITE APPROACHES TO COMPLEX REALITIES – WORKING EXAMPLE
One response to ‘reality out there’ is to be pragmatic: much research today actually incorporates quantitative and qualitative elements. Termed mixed method or hybrid research, this approach has been gaining ground in social science departments. For some, this straddling of the divide points to a third methodology that goes some way towards reconciling many imagined differences (see Burn and Parker 2003: 3–4).
For example: in the Guardian newspaper (Hattenstone 2009), research on teenage boys actually combines three specific methods: a content analysis (quantitative and qualitative) of terminology and relationships in media reporting on teenage boys; a survey – questionnaire to 1,000 boys (quantitative); and semi-structured interviews (qualitative).
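The quantitative strand of such a content analysis can be made concrete in a few lines of code. The sketch below is illustrative only and not part of the study reported in the Guardian: the sample headlines and the coded terms are invented for demonstration, standing in for whatever coding scheme a real project would justify.

```python
from collections import Counter

# Hypothetical sample of headlines about teenage boys (invented for illustration)
headlines = [
    "Teenage boys blamed for rise in street crime",
    "Hoodie culture: are teenage boys misunderstood?",
    "Teen yobs terrorise town centre",
    "Teenage boys volunteer at record rates",
]

# An assumed coding scheme of terms of interest (not the study's own)
terms = ["yobs", "hoodie", "crime", "volunteer"]

def count_terms(texts, vocabulary):
    """Count occurrences of each coded term across all texts (case-insensitive)."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for term in vocabulary:
            counts[term] += lowered.count(term)
    return counts

print(count_terms(headlines, terms))
```

The counting is the easy, quantitative part; deciding which terms belong in the coding scheme, and what a given frequency means, is where the qualitative judgment of the researcher comes back in.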
That said, renaming your research design as ‘mixed method’ rather than either quantitative or qualitative does still require a rationale in the context of your inquiry; less may indeed be more in some projects, and methodologies made up of data-gathering ‘clip-ons’ added for their own sake serve little purpose. For this reason, now is the time to consider why you are undertaking this sort of data-gathering/analysis as opposed to another one; are you able to articulate the trade-offs, to show an understanding of what it is you are not going to research?
The polemics surrounding the rights and wrongs of particular methods reduce these procedures to their (mis)application in certain instances, thus caricaturing the subtleties and corresponding debates occurring within any mode of research. Such standoffs actually obfuscate the practical realities of research design decisions, given that no data-gathering, as process or product, is above reproach. Stronger still, popular imaginaries and media discussions of ‘good’ or ‘bad’ science perpetuate caricatures of academic research rather than facilitating productive debate, reproducing some longstanding stereotypes along the way.
In the meantime, academic research continues as debates proliferate. Let’s return to some working examples of where practice belies political differences.
For research projects pursued as part of university degrees, the narrow notion of empirical as synonymous with quantitative modes of data-gathering, of quantitative data as facts pure and simple, and of their statistical analysis as ipso facto ‘scientific’ can result in either overly reverent or overly dismissive views of their usefulness before a research project has even got off the ground. Many research students hobble themselves all too early in the day by rushing to characterize their project as either quantitative or qualitative per se, and then launch into the corresponding ways of data-gathering without considering whether these are actually best suited to their research question or even match the purpose of the inquiry. This is effectively research design by prejudice rather than by consideration.
Putting it another way: whether you are interested in quantifying in order to make predictions and generalizations about the social world, or interested in symbolic, representational and experiential dimensions to the world, the way you go about gathering and making sense of material pertinent to the research question at hand may require more pragmatism than dogmatism, more use of available knowledge and research findings than overly ambitious claims of originality.
First, how do these conundrums, and the dead-ends they can create during a project, play out in the real world of academic research?
Second, what do such conventions mean for the design and successful execution of the research part of your project?
Third, what do these principles mean at the analysis stage and how is the handling and eventual presentation of any data gathered, however defined, to make sense in the final report, whether or not your conclusions are negative, positive, categorical or qualified? By way of illustration, the three cases below, drawing on studies of voters and political elites from quantitative research literature, show the ways in which notions of qualitative data are applied within quantitative research traditions. They also highlight how not all sorts of qualitative data lend themselves to quantification.
(a) Assigning qualities: A commonly used item in large-scale surveys of citizens that attempts to measure how ‘warmly’ a voter feels about a political leader is the ‘feeling thermometer’. Respondents to the survey are shown a picture of a thermometer marked from 0 to 100 degrees and given the name of a party leader. The respondent then indicates the warmth of feeling he or she has toward the party leader, with 100 being the warmest. The thermometer here is a heuristic device to elicit how favourable a voter is toward a party leader, to capture emotion or affection for the party leader rather than how respondents evaluate a party leader based on the policy positions of the party.
This approach recognizes a truism for qualitative research: in practice it is difficult to separate affective judgments from policy-based judgments. The feeling thermometer attempts to measure the former along a quantitative scale so as to make
comparisons across a larger sample in order to conduct a statistical analysis (by calculating the mean thermometer ranking for a party leader or candidates across all respondents).
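The statistical step described above is straightforward to illustrate. The sketch below is a minimal example with invented respondent scores and invented leader names; it simply computes the summary statistic mentioned in the text, the mean thermometer ranking per leader across all respondents.

```python
# Hypothetical feeling-thermometer scores (0-100), one per respondent,
# for two invented party leaders; the numbers are for illustration only.
thermometer = {
    "Leader A": [85, 60, 72, 90, 40],
    "Leader B": [35, 55, 48, 60, 30],
}

def mean_rating(scores):
    """Mean thermometer ranking across all respondents for one leader."""
    return sum(scores) / len(scores)

for leader, scores in thermometer.items():
    print(f"{leader}: mean warmth = {mean_rating(scores):.1f}")
# Leader A: mean warmth = 69.4
# Leader B: mean warmth = 45.6
```

The point of the device is precisely this: once warmth of feeling has been assigned a number, it can be averaged and compared across a large sample, which the underlying affective judgment itself cannot be.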
(b) Gender as an empirical category: Gender is considered a qualitative category by quantitative researchers as well. It is impossible to talk about an ‘average gender’. However, it is possible to talk about the ‘quantity’ or percentage of women in the population. Researching gender as an empirical category (part of a demographic) is also where research drawing on a range of feminist approaches parts company (see Hughes and Cohen 2010); gender is construed as a relational and analytical category in which quantifying is not the main objective (see Butler 1990, Peterson and Runyan 1999, True 2001).
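The contrast between what can and cannot be computed for a categorical variable can be sketched in code. The sample below is invented for illustration: a percentage of women in a sample is a meaningful quantity, whereas an ‘average gender’ is not.

```python
# Hypothetical sample: gender recorded as a categorical variable
sample = ["woman", "man", "woman", "woman", "man", "woman"]

# Meaningful: the percentage of women in the sample
pct_women = 100 * sample.count("woman") / len(sample)
print(f"Women: {pct_women:.1f}% of sample")  # Women: 66.7% of sample

# Not meaningful: there is no 'average gender'. Even if we coded
# woman=1 and man=0, the arithmetic mean would merely restate the
# proportion above; it would not describe any individual's gender.
```

This is the sense in which gender remains a qualitative category even inside a quantitative research design: the numbers count members of categories, they do not measure gender itself.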
(c) Qualitative databases: The Economic and Social Data Service (ESDS) has been running a project to capture qualitative data and make it available to secondary users.
While the ESDS has been known for providing access to quantitative data, this newer initiative provides access to a range of social science qualitative data sets. Electoral Reform and British Members of the European Parliament, 1999–2004 (designated in the archive as SN 5372) comprises qualitative interviews with sixty British MEPs of the 1999–2004 European Parliament. The interviews focus on the representative role of the MEP and the impact upon this of the change in the electoral system (from single-member district representation to multi-member regions) in Britain for the 1999 European elections. The available data, if downloaded from the data archive, consist of the transcribed interviews with the sixty MEPs.
The latter example underscores how transcribed interview material is not amenable to the same mathematical transformations as the quantitative indicators used in the thermometer example. Moreover, analysis that understands gender in other ways than as a countable empirical category would not be content with numerical assignations such as ‘0’ or ‘1’, particularly for research projects looking at sexuality, at social relations based on people living with or being assigned multiple (rather than two) genders, and for feminist research approaches that regard gender as a lens, or as a facet of power relations (Peterson and Runyan 1999, Shepherd 2009).
BOX 7.3 SEX, GENDER, AND CHROMOSOMES
There are many topical examples of how the above differences in academic terms work in everyday life; in our daily lives and work we are often confronted with questions about how we ‘know’ something for sure. In this case, how we know for sure who is female and who is male. As feminist theorists and women’s rights activists point out, behaviour and appearance that count as masculine or feminine have differed over time and across cultures. Where quantitative indicators have a powerful influence is in the sporting arena, for centuries divided along male/female lines of physical prowess – a debate in itself. It is a longstanding issue in the Olympic Games, where both gender and race have played major roles (for example, the 1936 games held in Berlin during the Nazi era; the effect of drugs on female but also male athletes during the Cold War era; whether African/African American men and women jump higher or run faster for genetic reasons, and so on). Sex tests have been a feature of the modern Olympics for female athletes.
One well-known case of gender ambiguity in athletics was only discovered upon the athlete’s death. The Polish-born athlete Stella Walsh (Stanislawa Walasiewicz was her Polish name) was an American national champion and won Olympic medals in the 1932 and 1936 Olympics competing for Poland. She was shot dead during an armed robbery in 1980. The autopsy revealed that she possessed male genitalia along with female characteristics; further investigation revealed that she had both an XX and an XY set of chromosomes. The 2009 athletics World Championships presented a more recent illustration of the complex issues of evidence, method, and interpretation addressed in this and other chapters. During the competition it was reported that South African runner Caster Semenya had been asked to undergo ‘gender tests’ in South Africa and further tests in Berlin during the competition, to establish whether she was eligible to compete in the women’s 800-metre track race.
The goal of gender verification is to prevent an unfair advantage in gender-restricted sports (male athletes have to date an advantage in most events, but not all – for example, equestrian). The earliest gender verification tests for athletes amounted simply to verification of the existence of the appropriate genitalia.
The sex chromatin test, which identifies only the sex chromosome component of gender, was used but later discredited as misleading, particularly in cases of individuals who could be considered intersexual. Individuals with genetic anomalies can present a male genetic makeup while having female physiological characteristics. The International Association of Athletics Federations (IAAF) stopped gender verification testing in 1991, while gender determination tests were abandoned for Olympic competition in 2000. However, the IAAF has reserved the right to invoke the test, and Semenya was the first athlete to be tested by the IAAF since 1991. What we see here is that even when resorting to the evidence provided by the chromosomal model of biological sex there are crucial ambiguities. For example, transgender adults or technical ‘hermaphrodites’ can be brought up as girl or boy children without knowing otherwise; others undergo ‘sexual reassignment at birth’, which means that if a gender test reveals their ambiguous chromosomal structure, their gender identity – and, in the case of sporting events, their integrity – becomes a public matter.
Back to academic debates. In this context, gender theorists and feminist biologists such as Judith Butler (1990) and Donna Haraway (1990) respectively argue that neither human – gender-based – behaviour nor the human genome is outside cultural or political influences. These things ‘matter’, in terms of the social costs for individuals and families, when even scientists cannot categorically support one interpretation (male) over another (female). Given the huge national and financial stakes in global sporting spectacles such as the Olympics, we see how the tension between fact, observation, and interpretation, when applied through a dominant interpretation in everyday life and politics, is anything but an arcane question of semantics. These are also cultural and political questions about the correct and proper way for boy and girl children to be brought up, and how they (should) behave as adults.