Getting past Google and why bother

What do you think might be the problem with relying entirely on search engines on the open web, Google in particular? What is good about them?

‘To google or not to google, that is the question.’ In a very short time, Google has established itself as the predominant (‘free’) search engine on the world-wide web, superseding earlier internet services such as Gopher, File Transfer Protocol (FTP), and Usenet, and contemporaneous search engines like AltaVista and Yahoo!, inter alia. Google’s ‘spider’-crawled index and algorithmic ranking, with its hierarchy of top ‘hits’, orders results by citation frequency (how often other websites link to a page) rather than by keyword frequency alone. Incredibly efficient as this principle is, it means that keyword placement and citation by other websites have become increasingly strategic for website designers/owners and advertisers alike (parties pay for a position in the top ten hits) as they situate their websites’ content accordingly. Web surfing and information access are now inseparable from ‘product placement’. The upshot for researchers in particular is that relying entirely on Google leads you, more often than not, to a select few websites or highly cited online sources, often the same ones in different guises, rather than to the fuller range of websites or online sources that also relate to your keywords. There is a host of sound, high-quality material that never gets to the top of the Google hit parade.
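
To see the contrast in miniature, here is a toy sketch in Python (a hypothetical illustration with invented pages and numbers, not Google’s actual algorithm, which is far more elaborate) of ranking by keyword frequency versus ranking by citations:

    # A toy illustration (not Google's actual algorithm): ranking a handful
    # of invented pages by inbound links ('citations') versus keyword counts.
    pages = {
        "blog-post":    {"keyword_hits": 42, "cited_by": ["forum-thread"]},
        "journal-site": {"keyword_hits": 7,  "cited_by": ["blog-post", "news-portal", "ngo-site"]},
        "news-portal":  {"keyword_hits": 15, "cited_by": ["blog-post", "ngo-site"]},
        "forum-thread": {"keyword_hits": 30, "cited_by": []},
        "ngo-site":     {"keyword_hits": 3,  "cited_by": ["journal-site"]},
    }

    # Ranking by keyword frequency alone favours pages stuffed with the term.
    by_keywords = sorted(pages, key=lambda p: pages[p]["keyword_hits"], reverse=True)

    # Ranking by citation counts favours pages that other sites link to.
    by_citations = sorted(pages, key=lambda p: len(pages[p]["cited_by"]), reverse=True)

    print("By keyword frequency:", by_keywords)
    print("By citations:        ", by_citations)

The same five pages come out in quite different orders, which is precisely why highly cited sources dominate the top hits.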

DIGITAL TOOLS FOR ONLINE DATA-GATHERING AND ANALYSIS

Online researching has opened new environments to researchers that move beyond traditional research and challenge some of our notions of what it means to research, how people engage online, and so forth. . . . These new environments not only offer new ways for learning, but also new ways in which to conduct research, creating simulations and testing conclusions.

(Gaiser and Schreiner 2009: 5)

More and more research students are engaging in doing research into, or gathering information from, domains that are entirely or largely web-based (see Gaiser and Schreiner 2009: 83–9, 90–2; Ó Dochartaigh 2009: 98 passim). This emerging field has two dimensions: specific sorts of digital tools used to mine, map, or collate ‘raw’ data directly from the web; and web-based domains, groups, and activities that are the object of research and/or the designated field in which a researcher carries out their data-gathering by various methods (for example, observation, interviewing). These emerging research environments include:

  • Social networking sites: Facebook, YouTube, MySpace, and other sorts of social media-based portals, and their constituent groups.
  • Internet communities: These predate and co-exist alongside the above. Often using a mixture of ‘old-school’ bulletin board services (BBS), Newsgroups, and listserv set-ups, they are core constituents of many grassroots activist groups, NGOs, and special interest communities. Moderators and passwords keep track of ‘who’s who’ in varying ways.
  • News and entertainment portals.
  • Blogs: the many individual blogs, professional and amateur, that make up the ‘blogosphere’.
  • Web portals and websites of intergovernmental organizations (the UN, WTO, or World Bank for instance) and those of NGOs, large and small.
  • Political party websites along with politicians’ blogs or home pages.
  • Computer games and virtual worlds: for example, Second Life, computer gaming communities, and games like World of Warcraft and The Sims.

Entering these domains to access, observe, and/or participate in the various sorts of communities, readerships, or documentary resources they make available brings with it a host of familiar and new practical and ethical challenges for the researcher. But before taking a closer look at these issues, we need to consider how web-based data can be collected and then analysed: by what means (digital and automated, or manual), and to what ends.

Making sense of the data: web-based and software tools

This section looks forward to a more in-depth discussion in Chapters 6 and 7, in light of the growing market in software designed to aid and speed up qualitative data-analysis, particularly when large amounts of (digitized or web-embedded) material are under investigation.

Quantitative analysis tools such as SPSS have been a staple of quantitative modes of research for some time; statistical analysis has been integral to computing from the start. Home office suites all include spreadsheet and database programs for collating and crunching figures, so basic statistical functions can be carried out without more specialized software. Qualitative sorts of content analysis can be done manually, indeed quite reasonably so for small to medium-sized fields or datasets such as interview transcripts, smaller-scale focus group work, or a well-defined set of policy documents.
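
By way of illustration, the basic descriptive statistics a spreadsheet supplies can be reproduced in a few lines of general-purpose code; a minimal sketch using Python’s built-in statistics module, with invented scores for a single survey item:

    import statistics

    # Hypothetical responses to one survey item, scored 1-5.
    scores = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

    print("mean:  ", statistics.mean(scores))
    print("median:", statistics.median(scores))
    print("stdev: ", statistics.stdev(scores))  # sample standard deviation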

For long-standing email/listserv archives, internet communities, or several intersecting ones on a larger portal, the quantity of text can be too large to handle by manual means alone. Here, new qualitative digital tools are being launched and refined all the time. That said, any software program has a certain degree of ‘default’ built in, which will have an impact on your eventual findings. If you don’t know what you want to do with the material, and to what ends, you will be putting the cart before the horse and more than likely spending a lot of time for little added benefit. So, first do a self-evaluation; ask yourself the following questions:

  1. What is my realistic level of computer-use, skills, and equipment?
  2. What length of time will I spend doing this research? Answers will differ according to whether this is an undergraduate, postgraduate, or (post)doctoral project.
  3. Where will I be storing the raw data, and in what format? As software formats are often proprietary and not always compatible with one another, the best advice is to keep your data in a widely supported format (e.g. plain-text or RTF) rather than locked into a high-end or proprietary format (e.g. Photoshop). You need to be able to access this data after you have updated your operating system, laptop, or mobile device (see the short sketch below).
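
On the storage point, one widely supported option is saving tabular data as plain-text CSV, which remains readable in any spreadsheet, text editor, or statistics package long after upgrades; a minimal Python sketch with hypothetical survey results:

    import csv

    # Hypothetical survey results saved as plain-text CSV: an open format
    # that will still open after operating systems and software change.
    rows = [
        ["respondent", "q1", "q2"],
        ["r01", 4, "agree"],
        ["r02", 2, "disagree"],
    ]

    with open("survey_results.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)
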
Software analysis tools

Making use of software programs is a digitized way of collating, organizing, and then analysing textual and visual content: bringing order to the array of material gathered during the research so that conclusions can be drawn and an argument substantiated. These products have been gaining momentum in academic research as the quality and flexibility of the programs improve. One thing to remember, however, is that automated searches, and the coding of content that follows, are not in themselves a method. They may do a lot of work for us, such as sorting terms according to criteria like the ‘nearest neighbour’ or particular word combinations (collocations). But they need to be in service of our analytical framework. Coming up with findings based on word frequencies, syntax, or patterns of placement in a document is the outcome of criteria that are built into the program and then put into action by the researcher according to her/his priorities and respective analytical method.
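
What this automated sorting amounts to can be seen in miniature below: a Python sketch that counts word frequencies and crude two-word collocations in an invented scrap of text. Commercial packages do the same kind of thing at far greater scale, and just as mechanically:

    from collections import Counter

    # An invented fragment of transcript text, for illustration only.
    text = ("online communities share information online and moderate "
            "information carefully so communities can share safely")
    words = text.split()

    # Word frequencies: the raw material of many automated 'findings'.
    frequencies = Counter(words)

    # Adjacent word pairs: a crude stand-in for collocation analysis.
    collocations = Counter(zip(words, words[1:]))

    print(frequencies.most_common(3))
    print(collocations.most_common(3))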

Qualitative software tools are designed for ‘any kind of research that produces findings not arrived at by statistical procedures or other means of quantification’ (Strauss and Corbin 1990: 17, cited in Gaiser and Schreiner 2009: 114). Quantitative tools can accompany this sort of research in that counting keywords, tagging, or figuring out significance entails a level of quantification. In methodological terms, however, the key distinction holds for quantitative content analysis tools: these are ‘by definition about numbers’ and analyse by means of statistical techniques (Gaiser and Schreiner 2009: 121). So, once again, which is best for your project depends on your research question, research design, and rationale.

Qualitative content analysis

So, what kind of qualitative software for collating and then analysing large amounts of text is currently available? Remember to try out the various offerings before committing time and resources to them, especially if they are not made available to you by your institution; if they are, check how long your access rights last.

A range of such packages is currently available and in use. Content analysis software is improving steadily, and with that the complexity of the programs is also increasing. Licensing costs, for students as for full-time researchers, depend on institutional and commercial factors.

Looking ahead to Chapter 7 for a moment, note that these programs are automated, albeit sophisticated, versions of what many people do when reading a book or article: using post-its or colour highlighters to mark significant passages for later reference. When we want to analyse texts and images using a particular coding scheme or analytical approach, these programs can be put to good use.
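
In effect, the digital equivalent of those highlighters is a mapping from analyst-defined codes to marked passages. A minimal, hypothetical Python sketch of such a structure (the file names, codes, and passages are invented for illustration):

    # A bare-bones version of what qualitative coding software stores:
    # a mapping from analyst-defined codes to the passages they mark.
    coded_segments = {}  # code -> list of (source, passage) pairs

    def code_passage(code, source, passage):
        """File a passage under a code, much as a highlighter marks a page."""
        coded_segments.setdefault(code, []).append((source, passage))

    code_passage("trust", "interview_01.txt", "I only post under a pseudonym")
    code_passage("trust", "interview_04.txt", "the moderators know who we are")
    code_passage("access", "interview_01.txt", "you need an invitation to join")

    # Retrieval for later reference: every passage coded 'trust'.
    for source, passage in coded_segments["trust"]:
        print(source, "->", passage)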

  • That said, setting them up in order to have them work for us, in the way that we want them to and to serve the purposes of our analytical method, is more time-consuming than many would believe.
  • The promise of automation and digital formatting belies this preparatory and decision-making phase, particularly for qualitative content analysis programs, where manual forms of interpretation are still more than a match.
  • For statistical analysis, software tools have a longer pedigree, both general and more specialized. Their effective application also requires preliminary knowledge and understanding of the principles of statistical work.

When considering whether these software packages are feasible, or indeed appropriate for the sort of research you are doing, it bears reiterating that you should take the time to try them out: trial versions offer a first taste, many universities offer free tutorials, and the user manuals are an invaluable way to test your willingness to persevere as well as to gain insight into how user-friendly the program really is. Sometimes trying out various software packages can help move you forward in designing your analytical framework, even if you end up not using a package extensively.

Still convinced that these will solve your data-gathering and analysis puzzle? Perhaps, but first note:

  • It bears reiterating: these tools are not in themselves a method. Their use is not a methodological shortcut, or an explanation per se.
  • Find out how others have found any of these packages, but only once you have taken a look yourself, so that you are not wasting other people’s time with basic queries. Here user-group listservs and websites are invaluable sources of information.
  • If you are someone who checks out the reviews and comments for new consumer products and services online, then why not do the same with something as important as a piece of research software?
  • Make sure you confer with your supervisor about this option, in principle and practically. If they do not show much enthusiasm, and there could be many reasons for this, try and have a better response to the question ‘Why use this tool instead of doing the analysis manually?’ than ‘It is free’ or ‘Because it can do the work for me’. It can only do the work for you up to a point. Then you still have to make sense of the findings, and justify them, on your own.

It is better not to be too hasty about adopting the latest software tool (at the time of writing, NVivo was top of that hit parade) without being sure of its role in your project, and the disadvantages thereof. Committing yourself, and your raw data, to a software package without your supervisor knowing, without due consideration of the methodological pros and cons, or without preliminary practice (better still, a practical workshop if offered by your graduate school or support services) is more often than not a recipe for disaster.

Quantitative analytical tools

Quantitative data-processing tools have been around a long time, since before the web in fact, although their availability online has also developed with the web. Again, these are tools, not a methodology: as always, you need to learn how to use them, know why you want to use them for your inquiry, and then apply their functionalities in analysing your material and drawing eventual conclusions. The main ones used are:

  • SPSS, a program developed by the IBM corporation, is a mainstay in many social science departments. Apart from the company website, consult university resources such as www.ats.ucla.edu/stat/spss/. The point here is that powerful tools such as these require an understanding of basic statistical analysis to make proper use of them. For smaller-scale surveys that generate quantitative results, you could rely on the functionalities provided by your own spreadsheet program or by online survey tools.
  • Stata (from statistics/data) is the trade name of another popular package. See www.ats.ucla.edu/stat/stata/ for a useful online resource that includes tutorials and specific sorts of operations. If you are still not sure, or even if you are absolutely sure, that statistical analysis is what your inquiry requires, then consult the useful link on this page entitled ‘What statistical analysis should I use?’ at www.ats.ucla.edu/stat/stata/whatstat/default.htm. The table there provides a comprehensive overview of terms and parameters, with links to the relevant tool.
  • SAS (Statistical Analysis System) is another statistical computing package with specific functions. See www.ats.ucla.edu/stat/sas/default.htm for an online primer and links.

In sum, all these packages are computing tools that consist of ‘powerful data management tools, a wide variety of statistical analysis and graphical procedures’.
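As a taste of the elementary operations such packages automate, the following sketch (in Python rather than SPSS, Stata, or SAS syntax, and with invented records) builds a simple cross-tabulation, the starting point of many survey analyses:

    from collections import Counter

    # Invented survey records: (age_group, uses_social_media) pairs.
    records = [
        ("under_30", "yes"), ("under_30", "yes"), ("under_30", "no"),
        ("over_30", "yes"), ("over_30", "no"), ("over_30", "no"),
    ]

    # A cross-tabulation: counting how often each combination occurs.
    crosstab = Counter(records)
    for (group, answer), count in sorted(crosstab.items()):
        print(f"{group:>9} | {answer:>3} | {count}")

Dedicated packages layer significance tests, weighting, and graphics on top of counts like these, which is exactly why you need the statistical grounding to interpret their output.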

Again, before committing, if you have not been introduced to these tools as part of a methods training course, you need to trial them or take a course beforehand. Some products offer online training, and some university computing departments make their learning resources available on the open web, as is the case with UCLA above.

A final point: be aware that student access to and use of these tools, namely via your institution, may have a shelf-life; think about how and where you intend to store your data (for example, survey results), and the way these tools process them, for later reference.

Crossover tools

There are a number of software tools that are useful for more than one sort of research methodology: online and partially free survey tools and low-cost tools developed by academic researchers.

  • Surveys: for smaller (up to ten questions) and more complex surveys, Survey Monkey, www.surveymonkey.com/, is an attractive and easy-to-use tool. For those on a limited budget and doing a BA- or MA-level project, it is a good exercise to restrict yourself to ten questions. Before sending out the survey to your target group (something that needs preparation), try out the survey yourself, and with a pilot group. Only then will you see whether your questions are well formulated and in line with your aims.
  • Web-mapping: for those looking for connections and linkages, or undertaking any kind of network analysis, one readily available and low-cost tool is Issue Crawler, at www.govcom.org/. The online tutorials and research results are both useful ways to consider whether this approach to researching the internet might offer you an avenue for your topic (a generic sketch of the underlying idea follows below). That said, the maps that get produced with this tool are not in themselves self-explanatory; they often raise as many questions as they answer. This particular product is one of several sorts of web-mapping tools available.
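
To make the web-mapping idea concrete, the sketch below is a generic Python illustration of co-link counting (not Issue Crawler’s actual method; the sites and links are invented). It builds a tiny link network and reports which sites are most linked to, and which pairs are linked from the same sources:

    from collections import Counter
    from itertools import combinations

    # Hypothetical hyperlinks harvested from a handful of sites: (source, target).
    links = [
        ("ngo-a.org", "un.org"), ("ngo-b.org", "un.org"),
        ("ngo-a.org", "ngo-b.org"), ("blog.example", "ngo-a.org"),
        ("ngo-b.org", "worldbank.org"), ("ngo-a.org", "worldbank.org"),
    ]

    # In-degree: how often each site is linked to - a crude measure of centrality.
    in_degree = Counter(target for _, target in links)
    print(in_degree.most_common())

    # Co-links: pairs of targets linked from the same source, the kind of
    # relation that web-mapping tools draw as a network map.
    outlinks = {}
    for source, target in links:
        outlinks.setdefault(source, set()).add(target)

    colinks = Counter()
    for targets in outlinks.values():
        for pair in combinations(sorted(targets), 2):
            colinks[pair] += 1
    print(colinks.most_common())

As with the tools themselves, the counts are mechanical; deciding what a densely linked cluster means for your research question is still interpretive work.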
