Moving on now to another loaded term for many: method/s. The characterization of method offered by two social epidemiologists below, who also know how to bake, encapsulates this second dimension to the ‘how’ of research designs for most of us. Namely that methods are
rules and procedures employed by those trying to accomplish a task. Sometimes such rules and procedures are written down. For example cookbooks provide recipes for baking better cookies and cakes. In much the same way research methods are rules and procedures that researchers working within a disciplinary framework employ to improve the validity of their inferences. . . . [R]esearchers who abide by good research methods may more reliably produce valid inferences. . . . There are always exceptions but the point seems to hold generally.
(Oakes and Kaufman 2006: 5, emphasis added)
Exactly which unwritten and written ‘rules and procedures’ are considered to be the tried and true ways to accomplish the task is where paths diverge once again; philosophers speak of ‘philosophical research’ (Radder 2006), linguists of ‘semiotic methods’ (see Scolari 2009), political scientists of ‘critical realism’ (Burnham et al. 2004), and feminists of ‘gender mainstreaming’ (True and Ackerly 2010).
The point of knowing which recipe you are following, and what sort of dish you are making, to extend this analogy, is a first-base distinction in terms of method/s. A second-base one, to continue the baseball analogy, is that by learning how to do things a certain way we are also learning to know things a certain way. Immediately, this takes us into the terrain of what counts as the best way to follow a particular recipe, leaving aside for the moment questions of why this recipe (see section above). For instance, two key criteria for empirical researchers, criteria for assessing the mettle of any research design and the inferences its results rest upon, are replicability and transparency.
For less rules-and-procedures-bound ways of conducting research, these general criteria hold as well. For most, though, to speak of method is about how you went about things: I went here to talk to whomever; I accessed this space with permission from whom; I took part in an internet discussion group with permission or prior knowledge; those ideas from others I cite can be found on such-and-such a page; this many respondents completed my survey. However, the term also entails more complex criteria, and issues about the strengths and weaknesses – and appropriateness – of the chosen method/s given the stated aims and objectives of the project and the research question.
This is why method/s are not innocent in the sense of being neutral. One size does not fit all, because all methods (recipes) can be applied and used in various ways. They also arise from various layers of understanding of the processes and outcomes they lay out; e.g. various sampling techniques, questionnaire formats, experimental parameters, dependent versus independent variables, primary versus secondary sources, software analytical tools, and on-the-ground participant-observation versus – or alongside – online (web-based) ones.
In addition, certain broad categories of methods are distinctive, ‘brand names’ in their own right; for example, regression analysis, semiotics, psychoanalysis. Their defining role in particular debates, lines of intellectual allegiance, and professional qualifications also means that opting to use particular sorts of methods for the ‘data-gathering’ pertinent to your project brings certain conceptual vocabularies, authors, and expectations with it. In these cases, your nominal method speaks for itself even though nuances reside within debates generic to these approaches.
BOX 2.2
CLIMATE CHANGE OR GLOBAL WARMING?
In the first decade of this century, supporters of a positive correlation between global warming and human activities over the centuries, and those who hold the view that global warming is a gradual, autonomous aspect of climate change over time, have been locking horns in academic conferences, UN meetings, and the media; prominent scientific reports (e.g. the Stern Review in the UK and the IPCC reports) and their authors, ‘sceptical environmentalists’, economists, politicians, governments from the Global South such as India, and representatives of major petrochemical industries all take diametrically opposed positions. Where scientists differ is not only on the underlying criteria by which they analyse the data, and the evidence they then use in making predictions about the future of the planet in order to offer recommendations to governments and industry about ways to minimize the human element as an integral causal factor. They have also been hotly debating the very integrity of the data used: the methods by which it was collected, and how it was then conveyed as statistical probabilities.
The key point here is to note how the scientific community is deeply divided about whether global warming and its corollary effects are something that ‘naturally’ occurs or the outcome of industrialization, large-scale agriculture, deforestation, and other activities over the last few centuries. Whilst there have been exposés of sloppy research methods, the main contentions also pivot on philosophical issues and political stances about the ecosystem and humanity’s legacy. Today’s debates have their precursors in the 1970s and 1980s when ecological thought first made inroads into public debate, and ensuing research; James Lovelock’s Gaia notion, first presented in the 1960s, which posited the earth’s various components (from the oceans to the atmosphere) as a large, complex ecosystem in which all parts work in delicate balance, has been influential (Lovelock 2000).
This example highlights how the methodological integrity of truth claims and the high social status enjoyed by Science (with a big S) straddle public and scholarly debates. When fundamental errors or ambiguous modelling – some call it ‘fudging’ – come to light in areas as politically and economically sensitive as this one, the debate becomes quickly polarized, pitting not only environmental scientists and environmentalists against one another but also different schools of thought within these respective camps. Who is right or wrong about the causes, speed, and responsibility for climate change shows how predictive modelling, where computers generate graphs based on an array of complex data sets, informs governmental budgets, industry research and development, domestic energy bills, and even big power politics at the United Nations (see Maslin 2009: 60 passim). It could be that both sides have got it right, as well as wrong (see Schulz 2010).
Nonetheless, this second side to the ‘how’ question is actually not that mysterious; it involves taking the time and space to provide simple, though not mundane, explanations of what exactly we intend to do (proposals) or what we did do (afterwards), advantages and disadvantages included. What these techniques, tools, or combinations will provide in terms of ‘facts’, ‘data’, ‘insights’, or ‘experiences’, and what they cannot do, are the baselines for any project’s claims and achievements.
But what is the distinction between the method/s you are employing, your theory – conceptual framework – and the need then to talk about methodology? At times there is little point, in that the latter is used synonymously with methods; both refer to a ‘description of the methods or procedures used in some activity’ (Sloman 1977: 387). Then there are those instances where methodology is pronounced ‘merely as a more impressive-sounding synonym for method’ (ibid.: 388), its often unintentional use as a synonym for theorizing notwithstanding. But there is more to it than this.
This term also refers to a particular sort of undertaking, an ‘investigation of the aims, concepts, and principles of reasoning of some discipline, and the relationships between its sub-disciplines’ (ibid.: 388). In this wider sense, methodology can also be an object of study, an academic discipline in itself. There are theorists and philosophers whose specialization is methodology. Moreover, every discipline generates its own set of methodological conundrums, and in turn those who specialize in asking each other and working researchers awkward questions about research practice, particularly about procedures that become standard, or ways of talking about the ‘right’ and ‘correct’ procedures in any domain as if they were beyond question (see Oakes and Kaufman 2006: 7 passim).
Moses and Knutsen distinguish between the two ‘m’s’ in likening methodology to the toolbox and the respective methods to the tools in the box (Moses and Knutsen 2007: 4–7); different tools need different sorts of toolboxes. Creswell opts for the expression ‘strategies of inquiry’ instead, which may help those who are not into DIY (Creswell 2009: 11), when he distinguishes between methodology as strategies and the particular techniques – methods – used to conduct the research . . .
How do these nuances actually pan out in general practice? In the day-to-day grind of getting a research project done, do we need to be so concerned about such analytical distinctions? Whatever the response, you need to engage in some level of methodological explication, including the pros and cons of your chosen approach, sooner rather than later.
Here are some general things to aim for even before you know exactly how you want to go about your investigation:
How do these rules of thumb play out within, and across, the quantitative–qualitative divide, particularly in the early decision-making and planning stages?
For those working in the quantitative tradition, and even those working with qualitative data and relying on an understanding of scientific reasoning (see Chapter 3), the above privileging of methodological discussions, with a big M, is less evident in either a research proposal or final report. The following provisos may well apply to your project, if not to expectations arising from your disciplinary setting:
There are several bottlenecks, however, no matter which way you look at things. To recapitulate: research proposals and the reports that ensue often stand or fall on what peers, supervisors, and examiners make of the practical side of things; how the data was gathered and then analysed (see Gray 2009: 57–8). Before considering ways of dealing with a number of recurring bottlenecks, let’s look at them more closely:
Either way, not having an idea about what position to take, and why, leaves you open not only to tough but also to fundamental criticism about the very point of your work: the relevance of the data-gathering approach you opted for and, by implication, the quality of the findings and conclusions drawn. These need not come from less sympathetic audiences on the ‘other side’, or from hostile external reviewers. They may well be valid points raised from within your own scholarly circle-of-choice or affiliation.
Next: how to deal with these headaches, especially early on when positioned on either side of the divide?
In short, some baseline rules apply to most of us when setting out: