Understanding Research
Marianne Franklin
What do you think might be the problem with relying entirely on search engines on the open web, Google in particular? What is good about them?
‘To google or not to google, that is the question.’ In a very short time, Google has established itself as the predominant (‘free’) search engine on the world-wide web, superseding predecessors like Gopher, File Transfer Protocol (FTP), and Usenet, and contemporaneous search engines like AltaVista and Yahoo!, inter alia. However, Google’s ‘spider’-based, algorithmic search functionality, with its hierarchy of top ‘hits’, is based on ordering results by citation frequencies rather than the random filtering of keyword frequencies. Incredibly efficient, this principle means that keyword placement and citation by other websites have become increasingly strategic for website designers/owners and advertisers alike (parties pay for a position in the top ten hits) as they tailor their websites’ content accordingly. Web surfing and information access are now inseparable from ‘product placement’. The upshot for researchers in particular is that relying entirely on Google leads you, more often than not, to a select few websites or highly cited online sources, and often the same ones in different guises, rather than to a range of websites or online sources that also relate to your keyword selections. There is a host of sound, high-quality material that does not always make it to the top of the Google hit parade.
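To make the contrast concrete, here is a deliberately simplified, hypothetical sketch (in Python) of the two ordering principles just mentioned: ranking pages by keyword frequency versus ranking them by how often other pages cite (link to) them. The page names, texts, and link structure are invented for illustration only; a real search engine combines many more signals than this.

# Toy comparison of two ranking principles: keyword frequency vs. citations.
pages = {
    # hypothetical pages: (page text, set of other pages linking to it)
    "blog-post":       ("media policy media policy media policy", {"forum"}),
    "journal-article": ("media policy and internet governance", {"blog-post", "ngo-site", "forum"}),
    "ngo-site":        ("policy briefing on internet governance", {"journal-article"}),
    "forum":           ("chat about media", set()),
}

def keyword_rank(keyword):
    # order pages by how often the keyword appears in their own text
    return sorted(pages, key=lambda p: pages[p][0].split().count(keyword), reverse=True)

def citation_rank():
    # order pages by how many other pages link to (cite) them
    return sorted(pages, key=lambda p: len(pages[p][1]), reverse=True)

print("By keyword frequency:        ", keyword_rank("media"))
print("By citations (inbound links):", citation_rank())

Note how the page stuffed with the keyword tops the first list, while the most-cited page tops the second; this is the opening that strategic keyword placement and citation-seeking exploit.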
Online researching has opened new environments to researchers that move beyond traditional research and challenge some of our notions of what it means to research, how people engage online, and so forth. . . . These new environments not only offer new ways for learning, but also new ways in which to conduct research, creating simulations and testing conclusions.
(Gaiser and Schreiner 2009: 5)
More and more research students are engaging in doing research into, or gathering information from, domains that are entirely or largely web-based (see Gaiser and Schreiner 2009: 83–9, 90–2; Ó Dochartaigh 2009: 98 passim). This emerging field has two dimensions: (1) specific sorts of digital tools used to mine, map, or collate ‘raw’ data directly from the web; (2) web-based domains, groups, and activities that are the object of research and/or the designated field in which a researcher carries out their data-gathering by various methods (for example, observation, interviewing). These emerging research environments include:
Gaining access to, observing, and/or participating in the various sorts of communities, readerships, or documentary resources made available in these domains brings with it a host of familiar and new practical and ethical challenges for the researcher. But before taking a closer look at these issues, we need to consider how web-based data can be collected and then analysed: by what means (digital and automated, or manual), and to what ends.
This section looks forward to a more in-depth discussion in Chapters 6 and 7, in light of the growing market in software designed to aid and speed up qualitative data-analysis, particularly when large amounts of (digitized or web-embedded) material are under investigation.
Quantitative analysis tools such as SPSS have been a staple of quantitative modes of research for some time, as statistical analysis is integral to modern computing. Standard home-office software suites all include spreadsheet and database programs for collating and crunching figures, so basic statistical functions can be used without resorting to more extensive software. Qualitative sorts of content analysis can be done manually, indeed quite reasonably so for smaller to medium-sized fields or data-sets such as interview transcripts, smaller-scale focus group work, or a well-defined set of policy documents.[10]
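As a small illustration of that point, the following minimal sketch (assuming Python is to hand) uses only the standard-library statistics module to summarise a handful of invented survey scores; no dedicated statistics package is needed for basics like these.

# Basic descriptive statistics without specialized software.
import statistics

scores = [3, 4, 4, 5, 2, 5, 4, 3, 5, 4]   # invented responses on a 1-5 scale

print("n      :", len(scores))
print("mean   :", statistics.mean(scores))
print("median :", statistics.median(scores))
print("mode   :", statistics.mode(scores))
print("st.dev :", round(statistics.stdev(scores), 2))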
For long-standing email lists/listservs, internet communities, or intersecting communities on a larger portal, the quantity of text can be too large to handle by manual means alone. Here, new qualitative digital tools are being launched and refined all the time. That said, any software program will have a certain amount of ‘default’ built in, which will have an impact on your eventual findings. If you don’t know what you want to do with the material, and to what ends, you will be putting the cart before the horse and more than likely spending a lot of time for little added benefit. So, first do a self-evaluation; ask yourself the following questions:
Making use of software programs is a digitized way of collating, organizing, and then analysing textual and visual content: bringing order to an array of material gathered during the research so that some sort of conclusion can be drawn and an argument substantiated. These products have been gaining momentum in academic research as the quality and flexibility of the programs improve. One thing to remember, however, is that automated searches and subsequent coding of any content are not in themselves a method. They may do a lot of work for us, such as sorting terms according to criteria like the ‘nearest neighbour’ or particular word combinations (collocations), but they need to be in service of our analytical framework. Coming up with findings based on word frequencies, syntax, or patterns of placement in a document is the outcome of criteria that are built into the program and then put into action by the researcher according to her/his priorities and respective analytical method.
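To see what such automated sorting amounts to in its most basic form, here is a minimal, hypothetical sketch that counts word frequencies and pairs of neighbouring words (a crude stand-in for ‘nearest neighbour’ and collocation criteria) in an invented snippet of text. Dedicated packages apply far more refined criteria, and the counts still mean nothing until the researcher’s analytical framework gives them meaning.

# Word frequencies and neighbouring word pairs in an invented text snippet.
from collections import Counter

text = ("the community discussed the policy and the community "
        "debated the policy again")
words = text.lower().split()

word_freq = Counter(words)                  # single-word frequencies
bigrams = Counter(zip(words, words[1:]))    # pairs of neighbouring words

print("Most frequent words:     ", word_freq.most_common(3))
print("Most frequent word pairs:", bigrams.most_common(3))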
Qualitative software tools are designed for ‘any kind of research that produces findings not arrived at by statistical procedures or other means of quantification’ (Strauss and Corbin 1990: 17, cited in Gaiser and Schreiner 2009: 114). Quantitative tools can accompany this sort of research in that counting keywords, tagging, or figuring out significance entails a level of quantification. In methodological terms, however, the key distinction remains for quantitative content analysis tools: these are ‘by definition about numbers’ and are analysed by using statistical techniques (Gaiser and Schreiner 2009: 121). So, once again, which is best for your project depends on your research question, research design, and rationale.
So, what qualitative software for collating and then analysing large amounts of text is currently available? Remember to try out these various offers before committing time and resources to them if they are not made available to you by your institution. If they are, check how long your access rights last.
The above are currently available and in use. Content analysis software is improving steadily, and with that the complexity of the programs is also increasing. Licensing costs, for students as for full-time researchers, depend on institutional and commercial factors.
Looking ahead to Chapter 7 for a moment, note that these programs are automated, albeit sophisticated, versions of what many people do when reading a book or article: using post-its or colour highlighters to mark significant passages for later reference. When we want to analyse texts and images using a particular coding scheme or analytical approach, these programs can be put to good use.
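By way of illustration, the following hypothetical sketch mimics in a few lines the highlighting-and-tagging that these packages automate: passages are assigned codes from a simple coding scheme and can then be retrieved by code. The passages and codes are invented; in a real project both would come from your data and your analytical framework.

# A hand-rolled version of the highlighting these packages automate:
# assign codes to passages, then retrieve passages by code.
from collections import defaultdict

coded_passages = [
    ("I mostly read the news on my phone now.", ["media_use"]),
    ("The forum felt like a real community to me.", ["community", "belonging"]),
    ("I stopped posting after the moderation row.", ["community", "conflict"]),
]

index = defaultdict(list)           # code -> passages carrying that code
for passage, codes in coded_passages:
    for code in codes:
        index[code].append(passage)

for passage in index["community"]:  # retrieve everything coded 'community'
    print(passage)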
When considering whether these software packages are feasible, indeed even appropriate, for the sort of research you are doing, it bears reiterating that you are strongly advised to take the time to try them out: trial offers provide a first taste, many universities offer free tutorials, and the user manuals are an invaluable way to test your willingness to persevere, as well as providing insight into how user-friendly the program really is. Sometimes trying out various software packages can help move you forward in designing your analytical framework, even if you end up not using a package extensively.
Still convinced that these will solve your data-gathering and analysis puzzle? Perhaps, but first note:
It is better not to be too hasty about adopting the latest software tool (at the time of writing, NVivo was top of that hit parade) without being sure what role it will play in your project, and what its disadvantages are. Committing yourself, and your raw data, to a software package without your supervisor knowing, without due consideration of the methodological pros and cons, or without preliminary practice (better still, a practical workshop if offered by your graduate school or support services), is more often than not a recipe for disaster.
In this area quantitative data-processing tools have been around a long time, since before the web in fact. However, their online availability has also developed with the web. Again, these are tools, not a methodology, and as always you need to learn how to use them, know why you want to use them for your inquiry, and then apply their functionalities in analysing your material and drawing eventual conclusions. The main ones used are:
In sum, all these packages are computing tools that consist of ‘powerful data management tools, a wide variety of statistical analysis and graphical procedures’.[11]
Again, before committing, if you have not been introduced to these tools as part of a methods training course, you need to trial them or take a course beforehand. Some products offer online training, or university computing departments make their learning resources available on the open web, as is the case with UCLA above.
A final point: bear in mind that student access to and use of these tools, namely via their institution, may come with terms and a shelf-life; think about how and where you intend to store your data (for example, survey results), and the way these tools process them, for later reference.
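On that storage point, a minimal, hypothetical sketch: exporting results to a plain, widely readable format such as CSV keeps the data usable even after access to a licensed package lapses. The file name and records below are invented for illustration.

# Save survey results as a plain CSV file for long-term reference.
import csv

results = [
    {"respondent": 1, "age": 24, "answer": "agree"},
    {"respondent": 2, "age": 31, "answer": "disagree"},
]

with open("survey_results.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["respondent", "age", "answer"])
    writer.writeheader()
    writer.writerows(results)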
There are a number of software tools that are useful for more than one sort of research methodology: online and partially free survey tools and low-cost tools developed by academic researchers.