It is not surprising to see the Web (and the Internet as a whole) becoming predominantly non-English as communications infrastructure develops in Asia, Africa and South America. This is where the bulk of the people are. The Web is increasingly reflecting the distribution of language presence in the real world, and many sites provide the evidence. There are thousands of businesses now doing their best to present a multilingual identity, and hundreds of major sites collecting all kinds of data on the languages themselves. Call up the font archive at the University of Oregon, for example: you'll find 112 printing fonts there for over forty languages. They have a nice sense of humour too – because you'll also find some data there on alien languages, such as Klingon, and folklore languages, such as Elvish, which Tolkien invented for The Lord of the Rings.
Spend an hour hunting for languages on the World Wide Web and you will find hundreds. In 2001 I spent a few days tracking down as many examples as I could find, for my book Language and the Internet. I found one site, called World Language Resources, which listed products for 728 languages. I found an African resource list which covered several local languages: Yoruba, for example, was illustrated by some 5,000 words, along with proverbs, naming patterns and greetings. Another site dealt with no fewer than eighty-seven European minority languages. Some of the sites were very small in content, of course, but nonetheless extensive in range: one gave the Lord's Prayer in nearly 500 languages.

Nobody has yet worked out just how many languages have obtained a modicum of presence on the Web. I found over 1,000 quite quickly. It is not difficult to find evidence of an Internet presence for all the more frequently used languages in the world, and for a large number of minority languages too. My guess is that at least a quarter of the world's languages – about 1,500 – have some sort of cyber-existence now. And this is language presence in a real sense. These are not sites which only analyse or talk about languages, from the point of view of linguistics or some other academic subject. They are sites which allow us to see languages as they are. In many cases, the total Web presence, in terms of number of pages, is quite small.
The crucial point is that the languages are out there, even if they are represented by only a sprinkling of sites.

The Internet is the ideal medium for minority languages, and this is a lifeline which could prove to be important for some of the languages whose plight was described in chapter 2. If you are a speaker or supporter of an endangered language – an aboriginal language, say, or one of the Celtic languages – you are keen to give the language some publicity, to draw its plight to the attention of the world. Previously, this was very difficult to do. It was hard to attract a newspaper article on the subject, and the cost of a newspaper advertisement was prohibitive. It was virtually impossible to get a radio or television programme devoted to it. But now, with Web pages and e-mail waiting to be used, you can get your message out in next to no time, in your own language – with a translation as well, if you want – and in front of a global audience whose potential size makes traditional media audiences look minuscule by comparison. The Web message stays around, too, in a way that newspaper and broadcasting references do not. Chat rooms, moreover, are a boon to speakers living in isolation from each other, as now there can be a virtual speech community to which they can belong. Several of the world's ‘smaller' languages that have access to Internet technology – such as the minority languages of Europe and many North American Indian languages – now have Web sites and foster virtual speech communities.

On the other hand, I have to recognize that developing a significant cyber-presence for a language is not easy. To begin with, the infrastructure has to be there – and with so many endangered languages existing in parts of the world where the electricity supply is unreliable or non-existent, the priorities are clear. Then, to use present-day Web technology, the language has to be written down, and as we have seen in chapter 2, this excludes some 2,000
languages which have not been documented at all. Another complication is that the distinctive letters of some languages (especially those which make use of a range of accent marks) are often not easily encodable so that they can be routinely ‘read' by computers everywhere. Lastly, even if there is the technology and the literacy, there is a hurdle of motivation to be crossed. There seems to be a sort of ‘critical mass' of Internet penetration which has to build up in a country or community before a language develops a vibrant cyber-life. It is not much use, really, to have just one or two sites in a local language on the Web. People wanting to use or find out about the language would soon get bored. The number of sites has to build up until, suddenly, everybody is using them and adding to them and talking about them. That is a magic moment, and only a few hundred languages have so far reached it. In the jargon of the Internet, there needs to be lots of good ‘content' in the local languages out there, and until there is, people will stay using the languages that have managed to accumulate content – English, in particular.

So the character of a multilingual Internet is still evolving, and likely to be one of the main points of development in the next few years. Everything depends on how quickly new sites can build up a local language momentum. And we must not underestimate the practical difficulties. Take, for example, the apparently simple issue of representing a language's letters accurately. Until quite recently there were real problems in using the characters of the keyboard to cope with the alphabetical diversity of the world's languages. Because it was the English alphabet that was the standard, only a very few non-English symbols could be handled. If a foreign word arrived with some strange-looking accent marks, the Internet software would simply ignore them, and assume they weren't important. This can still happen – but there have been important developments.
First, the basic set of keyboard characters, the so-called ASCII set, was extended, so that the commoner non-English accents could be included. Even then it allowed only up to 256 characters – and there are far more letter or word shapes in the world than that. Just think of the array of shapes you find in Arabic, Hindi, Chinese, Korean and the many other languages which do not use the Roman alphabet. Today, a newer encoding, the Unicode system, is much more sophisticated: in its latest version it allows the representation on screen of over 94,000 characters – though that is still well short of the total number of written characters in all the world's languages, which has been estimated to be about 175,000.
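
For readers who want to see the difference in practice, here is a minimal sketch in Python. It is my own illustration, not something from the book: the sample word and the choice of language are assumptions made purely for demonstration. It shows how a word with accented letters is mangled when forced into plain ASCII, but survives intact in UTF-8, one encoding of the Unicode character set.

    # A toy demonstration (not from the book) of ASCII's limits and Unicode's reach.
    word = "Yorùbá"  # an accented word, chosen only as an example

    # Plain ASCII has no codes for the accented letters, so they are simply lost.
    ascii_only = word.encode("ascii", errors="ignore").decode("ascii")
    print(ascii_only)   # -> "Yorb"

    # UTF-8, an encoding of the Unicode character set, keeps every letter intact.
    restored = word.encode("utf-8").decode("utf-8")
    print(restored)     # -> "Yorùbá"

    # Unicode gives each character its own code point, far beyond ASCII's 256 slots;
    # the same scheme covers non-Roman scripts such as Chinese or Devanagari.
    for ch in word + "中अ":
        print(ch, hex(ord(ch)))

The point of the sketch is simply that, once text is stored as Unicode, accented and non-Roman characters travel across the Internet as reliably as plain English letters do.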

My feeling is that the future looks good for Web multilingualism, and this opinion seems to be becoming widespread. Ned Thomas, for instance, is editor of a bulletin called Contact – the quarterly publication of the European Bureau of Lesser Used Languages. In an editorial in 2000 he said: ‘It is not the case … that all languages will be marginalized on the Net by English. On the contrary, there will be a great demand for multilingual Web sites, for multilingual data retrieval, for machine translation, for voice recognition systems to be multilingual.'8 And Tyler Chambers, the creator of various Web language projects, agrees: ‘the future of the Internet', he says, ‘is even more multilingualism and cross-cultural exploration and understanding than we've already seen.'9 I concur. The Web offers a World Wide Welcome for global linguistic diversity. And in an era when so many languages of the world are dying, such optimism is truly revolutionary.

4

After the Revolution

The three trends described in earlier chapters – the emergence of a global language, the phenomenon of language endangerment and the arrival of the Internet – have had consequences for our developing notions of linguistic diversity. Global English has given extra purpose to a variety of Standard English in the way it guarantees a medium of international intelligibility; but it has also fostered the growth of local varieties as a means of expressing regional identity, and some of these new varieties will, in due course, evolve into new languages. The Internet has provided us with a new linguistic medium offering a completely fresh range of expressive possibilities, as well as novel dimensions of stylistic variation and new ways of focusing on language use. There is even an up-side to language endangerment: the manifestation of language death on such a scale has sharpened the minds of minority language users wonderfully, and fresh initiatives are now everywhere – not least the one which led to the European Year of Languages – to influence public opinion about what linguistic identity means and how it can be fostered. The potential is present for great things to happen. But, as always with revolutions, it is up to individuals to capitalize on them. And to do this we have to rethink several of our
long-established notions about the nature of language. It is not always a comfortable process.

The most important rethinking arises out of what happens if we take the axiom of the European Year of Languages seriously and really think it through. I take this axiom to be the recognition that multilingualism (often referred to as plurilingualism) in general, and bilingualism in particular, is an intrinsic good. I relate this axiom to the postulate that multilingualism is the normal human condition. Depending on what we mean by bilingualism, which I discuss below, estimates for the number of people in the world who are bilingual range from 50 per cent (for a high-level competence) to 80 per cent (for some level of competence). A significant number use three or more languages. This seems to be prima facie evidence for the view that children are born not just with a LAD (= Language Acquisition Device), as Chomsky argued, but with a MAD (= Multilingual Acquisition Device), the acronym avoiding the ambiguity over whether it is just one language that children are ready to acquire. Rather, the reality seems to be that there is no limit to the number of languages that a child will pick up once exposed to them. From the young child's point of view, of course, the fact that they are different languages is immaterial. They are simply different ways of speaking. We adults know they are different languages, but it is not until children are in the fourth year of life that they become aware of this and start to manipulate the different languages to personal advantage.

Thinking through the notion of multilingualism means, first of all, recognizing that it is not homogeneous. Learning a language is a multi-tasking experience, involving in its fullest form four modes – listening, speaking, reading and writing (deaf signing, of course, is a fifth mode in certain circumstances). It is perfectly possible to develop a multilingual competence in only the first two of these
modes – indeed, in some 40 per cent of the world's languages, as we have seen, the users have no choice, because their languages have never been written down. It is also possible to develop just a ‘reading' knowledge of a language. And differentials between the active and passive modes within spoken and written language are also common: people who listen better than they speak, and who read better than they write. The notion of multilingualism cannot be restricted to people who are fluent in all four modes, as this would exclude a significant proportion of the world's population whose lives actually function through the use of more than one language. Rather, multilingualism has to allow for ability in any subset of the modes.

It also has to allow for varying levels of ability within a mode. Learning a language involves minimally the learning of pronunciation, grammar and vocabulary (to restrict the point to just these three traditional domains). Let us call the total acquisition of each of these domains ‘100 per cent fluency' – that is, a speaker can pronounce all the sounds, use all the grammatical constructions, and know all the vocabulary available in a (dialect of a) language. On that basis, of course, no-one is totally fluent, for no-one knows the million or so words in English, for example; and even some of its 3,500 or so grammatical constructions will not be comfortably used by everyone (e.g. some of the more complex constructions of literary or legal English), nor will some of the more sophisticated tone-of-voice effects (e.g. those used by actors). Plainly we make all kinds of allowances in talking about fluency, and operate with a notional scale from 0 to 100 per cent within each of these areas. We then (also notionally) synthesize a combined total for the language as a whole, so that we are prepared to rate Ms X as being ‘more fluent' than Mr Y. But there is no way of evaluating whether Mr A, who is strong in
grammar and weak in vocabulary, is ‘more' or ‘less' fluent than Ms B, who is weak in grammar and strong in vocabulary. The number of possibilities is immense. Both Mr A and Ms B are bilingual, to an extent – and are certainly ‘more bilingual' than Mr C, who has no ability in any area. Only this relativistic conception of bilingualism makes sense of what we actually see in the world.

And what we see, when we look around, is a world where different levels of linguistic demand are made on people. Commonplace notions, for example, include ‘survival ability' in a language, or ‘getting by'. People use these notions all the time, assessing their strengths as greater or weaker in certain areas and languages. We all know how difficult it is to answer the question, ‘How many languages do you speak?' or ‘… do you know?' We all want to hedge straight away. This is to recognize the reality of bilingualism, that it is not an all-or-none phenomenon, but a dynamic mixture of different levels of ability, constantly changing as we change our circumstances, gain or lose our opportunities to use a language, or, quite simply, grow old. When we hear everyone hedging in this way when asked apparently straightforward questions such as ‘Do you speak X?' or ‘Are you bilingual?', then we must be asking the wrong questions. Any theory of bilingualism that wants to be taken seriously has to recognize this indeterminacy.

The recognition of indeterminacy brings to centre-stage a notion that has been much neglected, but whose significance is bound to grow in the twenty-first century – semilingualism. The term has been used in several ways. It can mean people who have not achieved high levels of native fluency in any language – one is reminded of Salvatore, in Umberto Eco's The Name of the Rose, who spoke ‘all languages and no language' – usually because they have been extremely mobile as children, and never lived
long enough in a place to have a stable family or community background. Thousands of migrant families, travellers, asylum seekers and refugees fall into this category. They must not be excluded from our notion of multilingualism just because their linguistic world is different. More common are those people who live their lives in a multilingual community but who for some reason are unable (or unwilling) to achieve high levels of proficiency in all the languages of that community. A common situation is a youngster who learns a second language (L2) at home or in primary school, then leaves home to find work in an area where L2 is not used, and returns in later life with a semilingual command of L2. This is a form of bilingualism too. A third situation is illustrated by the typical scenario in Africa, where a community may make routine use of several languages, but the use of each is related to a particular social situation. One language might be used at home, another in the market-place, a third in church, a fourth in school, and so on. However, the point is that the ‘amount' of language someone might need to ‘survive' or ‘perform' in any one of these contexts might be very different from the corresponding amount needed in the others. Indeed it may be very little – as in the days when a very restricted range of Latin expressions was actively used in the Roman Catholic Church. But someone who competently uses a language in a restricted way cannot be excluded from our multilingualism tally. Quite considerable levels of language ability may be present – but still a long way from what we would count as 100 per cent fluency. Such limited levels would not have much survival value in a context like the European Union, for example, where there is a demand for total translation equivalence. But the European situation is a rather special case.

This demand for total translation equivalence – the principle that everything which can be said in one language
should be available in another – also needs some rethinking. It is common for someone to have an experience in one language which they are unable to talk about in another, because they do not know the relevant vocabulary or idiom (as already noted in chapter 1, with the example of the French-speaking mother in England). In the African case above, people who routinely experience the marketplace context might have a strongly developed vegetable vocabulary, for example, which they lack in the language that they encounter in church. It simply would not be possible for the people to carry on a sophisticated conversation in their church language about cabbages – nor, one imagines, would they ever need to. Only in certain circumstances – where there are certain legal constraints, for example, or where people are worried about competition between languages in a public arena – does the demand for total translational equivalence make sense. The idea of ‘translating everything' is an unusual one. Multilingualism has not evolved to enable us to translate everything into everything else. It has evolved to meet the pragmatic communicative needs of individual people and communities. Sometimes translation is useful; sometimes it is unnecessary; sometimes it is positively undesirable; and sometimes it is absolutely impracticable.

It is the last criterion, of course, which produces the dilemma faced by the European Union as its membership grows into the mid-twenties. There is no solution to such dilemmas if our mindset is conditioned by a ‘translate everything' paradigm. Solutions can be found only if this paradigm is replaced by one which recognizes some sort of pragmatically guided selectivity in the context of a lingua franca. A pragmatic paradigm asserts that we translate when it is useful to do so, and not because ‘everything must be translated'. The various criteria for defining ‘useful' need to be thought about, of course. Some items
(documents, speeches) will be crucial because they relate to a country's perception of its identity. Some will be crucial because they encapsulate legal content which needs to be present in every language. Some will be useful only to certain countries (e.g. a document about coastal defences is presumably of limited interest to countries which have no coastline). It is an axiom that every country has the status of its language respected. But it does not follow from this that everything has to be translated. As a theoretical case: if there are twenty documents, and four language communities (who share a lingua franca, of course), and documents 1–5 are translated into L1, documents 6–10 into L2, and so on, then everyone is being treated equally, and respect is shared, though none have all documents translated. How far such a model can be implemented in practice, given political sensitivities, is unclear; but it is plain that respect, like translation, is a pragmatic notion.
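
To make the arithmetic of that theoretical case concrete, here is a small Python sketch. It is my own illustration, not drawn from the book or from any real EU procedure: the document numbers and the labels L1 to L4 are invented. It simply shares twenty documents equally among four language communities, five translations each, so that no community is favoured and none receives everything.

    # A toy model (not from the book) of the rotation described above:
    # twenty documents, four languages, equal shares of translation.
    documents = [f"document {n}" for n in range(1, 21)]
    languages = ["L1", "L2", "L3", "L4"]

    share = len(documents) // len(languages)   # 20 // 4 = 5 documents per language
    allocation = {
        lang: documents[i * share:(i + 1) * share]
        for i, lang in enumerate(languages)
    }

    for lang, docs in allocation.items():
        print(lang, "gets", docs[0], "to", docs[-1])
    # L1 gets document 1 to document 5,
    # L2 gets document 6 to document 10, and so on: equal shares, but no total translation.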

This kind of reasoning scares people, because the brave new world it points towards is unfamiliar and untested. But it is the nature of revolutions to present people with the need for new paradigms. And currently we are experiencing a linguistic revolution in which old models are being replaced by new ones, and a transitional period which is inevitably one of great uncertainty. People are unclear about the role of a truly global lingua franca, because they have never experienced one before. They are seeing the loss of languages around the world, and are not sure what to do. And they are faced with new and unexplored technologies which they have limited experience in handling. Teachers, at the cutting-edge of language work, routinely bemoan their plight. A typical remark: ‘In the old days there was American English and British English, and I knew where I was; now, I've no idea where I am.' But everyone, not just teachers, is faced with
the uncertainties of a rapidly changing linguistic world. As a result there is an understandable tendency to dig the heels in, to take up extreme positions, and to make traditional notions (such as the notion of a language having ‘official status') bear a weight which they were never designed to carry. The result is what we see: huge quantities of unread translations; vast amounts of time wasted and points left unsaid because people feel the need to say everything twice in a speech (once in their own language and again in the lingua franca); and the covert use of ‘relay languages' and ‘working languages' to make sure jobs get done (which insidiously eats away at the principle of respecting linguistic diversity). Far better, it seems to me, is for people to work towards replacing absolutist conceptions by relativistic ones – the concept of ‘official language', for example, being replaced by ‘official for a particular purpose', and to spend the time trying to work out what these purposes might be.

These directions of thinking are uncomfortable, also, because it is the nature of linguistic reality to be uncomfortable, especially in a revolutionary era, where change is so rapid and universal. Relativistic notions bear little resemblance to the black-and-white world that linguistic purists inhabit. And the world of multilingualism is full of purists – people who believe that there exists some form of a language which is intrinsically superior to all others and which it is their duty to protect against change, especially against the influence of other languages (and most especially against English). There is an element of the purist in all of us, but it is an element which we have to control, for the historical reality is clear: that all languages change, that all borrow from each other, and that there is no such thing as a ‘pure' language, and never has been. English, indeed, is the borrower par excellence, as it were, as we have seen in chapter 1. But borrowing is often
viewed as anathema by puristically minded supporters of a language, because they feel that their language is somehow debased if it uses words from other languages. Purists have a very short community memory: they forget that, a generation before, several forms of the language that they now accept as standard were contentious.
