Sign Languages

A sign language (also signed language or simply signing) is a language which uses manual communication and body language to convey meaning, as opposed to acoustically conveyed sound patterns. This can involve simultaneously combining hand shapes, orientation and movement of the hands, arms or body, and facial expressions to fluidly express a speaker's thoughts. They share many similarities with spoken languages (sometimes called "oral languages", which depend primarily on sound), which is why linguists consider both to be natural languages, but there are also some significant differences between signed and spoken languages.

Wherever communities of deaf people exist, sign languages develop. Signing is also done by persons who can hear, but cannot physically speak. While they utilize space for grammar in a way that spoken languages do not, sign languages exhibit the same linguistic properties and use the same language faculty as do spoken languages.[1][2] Hundreds of sign languages are in use around the world and are at the cores of local deaf cultures. Some sign languages have obtained some form of legal recognition, while others have no status at all.

A common misconception is that all sign languages are the same worldwide or that sign language is international. Aside from the pidgin International Sign, each country has its own native sign language, though it may share similarities with signs used in other countries. In the United States, sign language is generally termed ASL (American Sign Language).

History


Groups of deaf people have used sign languages throughout history. One of the earliest written records of a sign language is from the fifth century BC, in Plato's Cratylus, where Socrates says: "If we hadn't a voice or a tongue, and wanted to express things to one another, wouldn't we try to make signs by moving our hands, head, and the rest of our body, just as dumb people do at present?"[3]

For the period before the 19th century, most of what we know about sign languages is limited to the manual alphabets (fingerspelling systems) that were invented to facilitate the transfer of words from a spoken to a signed language; documentation of the rest of these languages is scarce.

In 1620, Juan Pablo Bonet published Reducción de las letras y arte para enseñar a hablar a los mudos (‘Reduction of letters and art for teaching mute people to speak’) in Madrid.[4] It is considered the first modern treatise of sign language phonetics, setting out a method of oral education for deaf people and a manual alphabet.


In Britain, manual alphabets were also in use for a number of purposes, such as secret communication,[5] public speaking, or communication by deaf people.[6] In 1648, John Bulwer described "Master Babington", a deaf man proficient in the use of a manual alphabet, "contryved on the joynts of his fingers", whose wife could converse with him easily, even in the dark through the use of tactile signing.[7]

In 1680, George Dalgarno published Didascalocophus, or, The deaf and dumb mans tutor,[8] in which he presented his own method of deaf education, including an "arthrological" alphabet, where letters are indicated by pointing to different joints of the fingers and palm of the left hand. Arthrological systems had been in use by hearing people for some time;[9] some have speculated that they can be traced to early Ogham manual alphabets.[10][11]

The vowels of this alphabet have survived in the contemporary alphabets used in British Sign Language, Auslan and New Zealand Sign Language. The earliest known printed pictures of consonants of the modern two-handed alphabet appeared in 1698 with Digiti Lingua, a pamphlet by an anonymous author who was himself unable to speak.[12] He suggested that the manual alphabet could also be used by mutes, for silence and secrecy, or purely for entertainment. Nine of its letters can be traced to earlier alphabets, and 17 letters of the modern two-handed alphabet can be found among the two sets of 26 handshapes depicted.

Charles de La Fin published a book in 1692 describing an alphabetic system where pointing to a body part represented the first letter of the part (e.g. Brow=B), and vowels were located on the fingertips as with the other British systems.[13] He described codes for both English and Latin.

By 1720, the British manual alphabet had found more or less its present form.[14] Descendants of this alphabet have been used by deaf communities (or at least in classrooms) in former British colonies India, Australia, New Zealand, Uganda and South Africa, as well as the republics and provinces of the former Yugoslavia, Grand Cayman Island in the Caribbean, Indonesia, Norway, Germany and the USA.

Frenchman Charles-Michel de l'Épée published his manual alphabet in the 18th century, which has survived basically unchanged in France and North America until the present time. In 1755, Abbé de l'Épée founded the first school for deaf children in Paris; Laurent Clerc was arguably its most famous graduate. Clerc went to the United States with Thomas Hopkins Gallaudet to found the American School for the Deaf in Hartford, Connecticut, in 1817.[15] Gallaudet's son, Edward Miner Gallaudet founded a school for the deaf in 1857 in Washington, D.C., which in 1864 became the National Deaf-Mute College. Now called Gallaudet University, it is still the only liberal arts university for deaf people in the world.

Sign languages generally do not have any linguistic relation to the spoken languages of the lands in which they arise. The correlation between sign and spoken languages is complex and varies depending on the country more than the spoken language. For example, the US, Canada, UK, Australia and New Zealand all have English as their dominant language, but American Sign Language (ASL), used in the US and most parts of Canada, is derived from French Sign Language whereas the other three countries sign dialects of British, Australian and New Zealand Sign Language.[16] Similarly, the sign languages of Spain and Mexico are very different, despite Spanish being the national language in each country,[17] and the sign language used in Bolivia is based on ASL rather than any sign language that is used in a Spanish-speaking country.[18] Variations also arise within a 'national' sign language which don't necessarily correspond to dialect differences in the national spoken language; rather, they can usually be correlated to the geographic location of residential schools for the deaf.[19][20]

International Sign, formerly known as Gestuno, is used mainly at international Deaf events such as the Deaflympics and meetings of the World Federation of the Deaf. Recent studies claim that while International Sign is a kind of pidgin, it is more complex than a typical pidgin and is in fact more like a full sign language.[21]

Linguistics of sign languages

In linguistic terms, sign languages are as rich and complex as any spoken language, despite the common misconception that they are not "real languages". Professional linguists have studied many sign languages and found that they exhibit the fundamental properties that exist in all languages.[22][23]

Sign languages are not mime – in other words, signs are conventional, often arbitrary and do not necessarily have a visual relationship to their referent, much as most spoken language is not onomatopoeic. While iconicity is more systematic and widespread in sign languages than in spoken ones, the difference is not categorical.[24] The visual modality allows the human preference for close connections between form and meaning, present but suppressed in spoken languages, to be more fully expressed.[25] This does not mean that sign languages are a visual rendition of a spoken language. They have complex grammars of their own, and can be used to discuss any topic, from the simple and concrete to the lofty and abstract.

Sign languages, like spoken languages, organize elementary, meaningless units (phonemes; once called cheremes in the case of sign languages) into meaningful semantic units. As in spoken languages, these meaningless units are represented as (combinations of) features, although cruder distinctions are also often made in terms of Handshape (or Handform), Orientation, Location (or Place of Articulation), Movement, and Non-manual expression.
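The parameter decomposition described above can be made concrete with a small data model. The sketch below is hypothetical and purely illustrative: the five parameter classes follow the text, but the specific values and the two example signs are invented. It shows how two signs can form a "minimal pair" differing in exactly one parameter, parallel to minimal pairs in spoken languages.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    """A sign modeled as a bundle of the five parameter classes
    commonly distinguished in sign language phonology."""
    handshape: str    # e.g. flat hand, fist, extended index finger
    orientation: str  # direction the palm faces
    location: str     # place of articulation on or near the body
    movement: str     # path or internal movement of the hands
    non_manual: str   # facial expression, head or body posture

# Two invented signs forming a "minimal pair": they differ in exactly
# one parameter (location), just as spoken-language minimal pairs
# differ in a single phoneme.
sign_a = Sign("flat", "palm-down", "chin", "tap", "neutral")
sign_b = Sign("flat", "palm-down", "forehead", "tap", "neutral")

differing = [f for f in ("handshape", "orientation", "location",
                         "movement", "non_manual")
             if getattr(sign_a, f) != getattr(sign_b, f)]
print(differing)  # ['location']
```

Representing each parameter class as a separate field mirrors the featural analysis in the text: contrasts between signs can then be located in individual parameters rather than in the sign as an unanalyzed whole.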

More generally, both sign and spoken languages share the following common features that linguists have found in all natural human languages:

  1. Mode of communication
  2. Semanticity
  3. Pragmatic function
  4. Interchangeability
  5. Cultural transmission
  6. Arbitrariness
  7. Discreteness
  8. Displacement
  9. Productivity

These nine features serve to define the notion "language".

Common linguistic features of many sign languages are the occurrence of classifiers, a high degree of inflection, and a topic-comment syntax. More than spoken languages, sign languages can convey meaning by simultaneous means, e.g. by the use of space, two manual articulators, and the signer's face and body. Though there is still much discussion on the topic of iconicity in sign languages, classifiers are generally perceived to be highly iconic, as these complex constructions "function as predicates that may express any or all of the following: motion, position, stative-descriptive, or handling information".[26] Note, however, that the term classifier is not used by everyone working on these constructions; across the field of sign language linguistics, the same constructions are also referred to with other terms.

Sign languages' relationships with spoken languages


A common misconception is that sign languages are somehow dependent on spoken languages, that is, that they are spoken language spelled out in gesture, or that they were invented by hearing people.[27] Hearing teachers in deaf schools, such as Thomas Hopkins Gallaudet, are often incorrectly referred to as “inventors” of sign language.

Although not part of sign languages, elements from the Manual alphabets (fingerspelling) may be used in signed communication, mostly for proper names and concepts for which no sign is available at that moment. Elements from the manual alphabet can sometimes be a source of new signs (e.g. initialized signs, in which the shape of the hand represents the first letter of the word for the sign).

On the whole, sign languages are independent of spoken languages and follow their own paths of development. For example, British Sign Language and American Sign Language (ASL) are quite different and mutually unintelligible, even though the hearing people of Britain and America share the same spoken language. The grammars of sign languages do not usually resemble that of spoken languages used in the same geographical area; in fact, in terms of syntax, ASL shares more with spoken Japanese than it does with English.[28]

Similarly, countries which use a single spoken language throughout may have two or more sign languages; whereas an area that contains more than one spoken language might use only one sign language. South Africa, which has 11 official spoken languages and a similar number of other widely used spoken languages, is a good example of this. It has only one sign language with two variants due to its history of having two major educational institutions for the deaf which have served different geographic areas of the country.

Spatial grammar and simultaneity

Sign languages exploit the unique features of the visual medium (sight), but may also exploit tactile features (tactile sign languages). Spoken language is by and large linear; only one sound can be made or received at a time. Sign language, on the other hand, is visual and, hence, can use simultaneous expression, although this is limited articulatorily and linguistically. Visual perception allows processing of simultaneous information.

One way in which many sign languages take advantage of the spatial nature of the language is through the use of classifiers. Classifiers allow a signer to spatially show a referent's type, size, shape, movement, or extent.

The large focus on the possibility of simultaneity in sign languages in contrast to spoken languages is sometimes exaggerated, though. The use of two manual articulators is subject to motor constraints, resulting in a large extent of symmetry[29] or signing with one articulator only.

Non-manual signs

Sign languages convey much of their prosody through non-manual signs. Postures or movements of the body, head, eyebrows, eyes, cheeks, and mouth are used in various combinations to show several categories of information, including lexical distinction, grammatical structure, adjectival or adverbial content, and discourse functions.

In ASL (American Sign Language), some signs have required facial components that distinguish them from other signs. An example of this sort of lexical distinction is the sign translated 'not yet', which requires that the tongue touch the lower lip and that the head rotate from side to side, in addition to the manual part of the sign. Without these features it would be interpreted as 'late'.[30]

Grammatical structure that is shown through non-manual signs includes questions, negation, relative clauses,[31] boundaries between sentences,[32] and the argument structure of some verbs.[33] ASL and BSL use similar non-manual marking for yes/no questions, for example. They are shown through raised eyebrows and a forward head tilt.[34][35]

Some adjectival and adverbial information is conveyed through non-manual signs, but what these signs are varies from language to language. For instance, in ASL a slightly open mouth with the tongue relaxed and visible in the corner of the mouth means 'carelessly,' but a similar sign in BSL means 'boring' or 'unpleasant.'[35]

Discourse functions such as turn taking are largely regulated through head movement and eye gaze. Since the addressee in a signed conversation must be watching the signer, a signer can avoid letting the other person have a turn by not looking at them, or can indicate that the other person may have a turn by making eye contact.[36]

Iconicity in sign languages

The first studies on iconicity in ASL were published in the late 1970s and early 1980s. Many early sign language linguists rejected the notion that iconicity was an important aspect of the language.[37][38] Though they recognized that certain aspects of the language seemed iconic, they considered this to be merely extralinguistic, a property which did not influence the language. Frishberg (1975) wrote a very influential paper addressing the relationship between arbitrariness and iconicity in ASL. She concluded that though originally present in many signs, iconicity is degraded over time through the application of grammatical processes. In other words, over time, the natural processes of regularization in the language obscure any iconically motivated features of the sign.

Some researchers have suggested that the properties of ASL give it a clear advantage in terms of learning and memory.[39] Brown, a psychologist by trade, was one of the first to document this benefit. In his study, Brown found that when children were taught signs that had high levels of iconic mapping they were significantly more likely to recall the signs in a later memory task than when they were taught signs that had little or no iconic properties.

The pioneers of sign language linguistics were yoked with the task of trying to prove that ASL was a real language and not merely a collection of gestures or “English on the hands.” One of the prevailing beliefs at this time was that ‘real languages’ must consist of an arbitrary relationship between form and meaning. Thus, if ASL consisted of signs that had iconic form-meaning relationship, it could not be considered a real language. As a result, iconicity as a whole was largely neglected in research of sign languages.

The cognitive linguistics perspective rejects a more traditional definition of iconicity as a relationship between linguistic form and a concrete, real-world referent. Rather it is a set of selected correspondences between the form and meaning of a sign.[40] In this view, iconicity is grounded in a language user's mental representation ("construal" in Cognitive Grammar). It is defined as a fully grammatical and central aspect of a sign language rather than a peripheral phenomenon.[41]

The cognitive linguistics perspective allows for some signs to be fully iconic or partially iconic given the number of correspondences between the possible parameters of form and meaning.[42] In this way, the Israeli Sign Language (ISL) sign for ASK has parts of its form that are iconic (“movement away from the mouth” means “something coming from the mouth”), and parts that are arbitrary (the handshape, and the orientation).[43]

Many signs have metaphoric mappings as well as iconic or metonymic ones. For these signs there are three-way correspondences between a form, a concrete source and an abstract target meaning. The ASL sign LEARN has this three-way correspondence. The abstract target meaning is "learning." The concrete source is putting objects into the head from books. The form is a grasping hand moving from an open palm to the forehead. The iconic correspondence is between form and concrete source. The metaphorical correspondence is between concrete source and abstract target meaning. Because the concrete source is connected to two correspondences, linguists refer to metaphorical signs as "double mapped."[40][42][43]

Classification of sign languages

Although sign languages have emerged naturally in deaf communities alongside or among spoken languages, they are unrelated to spoken languages and have different grammatical structures at their core.

Sign languages may be classified by how they arise.

Home sign is not a full language, but closer to a pidgin. Home sign is amorphous and generally idiosyncratic to a particular family, where a deaf child does not have contact with other deaf children and is not educated in sign. Such systems are not generally passed on from one generation to the next. Where they are passed on, creolization would be expected to occur, resulting in a full language.

A village sign language is a local indigenous language that typically arises over several generations in a relatively insular community with a high incidence of deafness, and is used both by the deaf and by a significant portion of the hearing community, who have deaf family and friends.[44] The most famous of these is probably Martha's Vineyard Sign Language of the US, but there are also numerous village languages scattered throughout Africa, Asia, and the Americas.

Deaf-community sign languages, on the other hand, arise where deaf people come together to form their own communities. These include school sign, such as Nicaraguan Sign Language, which develop in the student bodies of deaf schools which do not use sign as a language of instruction, as well as community languages such as Bamako Sign Language, which arise where generally uneducated deaf people congregate in urban centers for employment. At first, Deaf-community sign languages are not generally known by the hearing population, in many cases not even by close family members. However, they may grow, in some cases becoming a language of instruction and receiving official recognition, as in the case of ASL.

Both contrast with speech-taboo languages such as the various Aboriginal Australian sign languages, which are developed by the hearing community and only used secondarily by the deaf. It is doubtful whether any of these are languages in their own right, rather than manual codes of spoken languages. Hearing people may also develop sign to communicate with speakers of other languages, as in Plains Indian Sign Language; this was a contact signing system or pidgin that was evidently not used by deaf people in the Plains nations, who used home sign.

Language contact and creolization is common in the development of sign languages, making clear family classifications difficult – it is often unclear whether lexical similarity is due to borrowing or a common parent language, or whether there was one or several parent languages, such as several village languages merging into a Deaf-community language. Contact occurs between sign languages, between sign and spoken languages (contact sign, a kind of pidgin), and between sign languages and gestural systems used by the broader community. One author has speculated that Adamorobe Sign Language, a village sign language of Ghana, may be related to the "gestural trade jargon used in the markets throughout West Africa", in vocabulary and areal features including prosody and phonetics.[45]

The only comprehensive classification along these lines going beyond a simple listing of languages dates back to 1991.[48] The classification is based on the 69 sign languages from the 1988 edition of Ethnologue that were known at the time of the 1989 conference on sign languages in Montreal and 11 more languages the author added after the conference.[49]

Wittmann classification of sign languages

                     Primary    Primary   Auxiliary   Auxiliary
                     language   group     language    group
  Prototype-A[50]        5         1          7           2
  Prototype-R[51]       18         1          1
  BSL-derived            8
  DGS-derived         1 or 2
  JSL-derived            2
  LSF-derived           30
  LSG-derived            1?

In his classification, the author distinguishes between primary and auxiliary sign languages[52] as well as between single languages and names that are thought to refer to more than one language.[53] The prototype-A class of languages includes all those sign languages that seemingly cannot be derived from any other language.[50] Prototype-R languages are languages that are remotely modelled on a prototype-A language (in many cases thought to have been FSL) by a process Kroeber (1940) called "stimulus diffusion".[51] The families of BSL, DGS, JSL, LSF (and possibly LSG) were the products of creolization and relexification of prototype languages.[54] Creolization is seen as enriching overt morphology in sign languages, as compared to reducing overt morphology in spoken languages.[55]

Typology of sign languages

Linguistic typology (going back to Edward Sapir) is based on word structure and distinguishes morphological classes such as agglutinating/concatenating, inflectional, polysynthetic, incorporating, and isolating ones.

Sign languages vary in word-order typology as there are different word orders in different languages. For example, ÖGS, Japanese Sign Language and so-called Indo-Pakistani Sign Language are Subject-Object-Verb while ASL is Subject-Verb-Object. Influence from the surrounding spoken languages is not improbable.

Sign languages tend to be incorporating classifier languages, where a classifier handshape representing the object is incorporated into those transitive verbs which allow such modification. For a similar group of intransitive verbs (especially motion verbs), it is the subject which is incorporated. Only in a very few sign languages (for instance Japanese Sign Language) are agents ever incorporated. In this way, since subjects of intransitives are treated similarly to objects of transitives, incorporation in sign languages can be said to follow an ergative pattern.

Brentari[56][57] classifies sign languages as a whole group, determined by the medium of communication (visual instead of auditory), as one group with the features monosyllabic and polymorphemic. That means that via one syllable (i.e. one word, one sign) several morphemes can be expressed; for example, the subject and object of a verb determine the direction of the verb's movement (inflection).

Acquisition of sign languages

Children who are exposed to a sign language from birth will acquire it, just as hearing children acquire their native spoken language.[58]

The acquisition of non-manual features follows an interesting pattern: When a word that always has a particular non-manual feature associated with it (such as a wh- question word) is learned, the non-manual aspects are attached to the word but don’t have the flexibility associated with adult use. At a certain point the non-manual features are dropped and the word is produced with no facial expression. After a few months the non-manuals reappear, this time being used the way adult signers would use them.[59]

Written forms of sign languages

Sign languages do not have a traditional or formal written form. Many deaf people do not see a need to write their own language.[60]

Several ways to represent sign languages in written form have been developed.

  • Stokoe notation, devised by Dr. William Stokoe for his 1965 Dictionary of American Sign Language,[1] is an abstract phonemic notation system. Designed specifically for representing the use of the hands, it has no way of expressing facial expression or other non-manual features of sign languages. However, it was designed for research, particularly for use in a dictionary, not for general use.
  • The Hamburg Notation System (HamNoSys), developed in the early 1990s, is a detailed phonetic system, not designed for any one sign language, and intended as a transcription system for researchers rather than as a practical script.
  • David J. Peterson has attempted to create a phonetic transcription system for signing, the Sign Language International Phonetic Alphabet (SLIPA).
  • SignWriting, developed by Valerie Sutton in 1974, is a system for representing sign languages phonetically (including mouthing, facial expression and dynamics of movement). The script is sometimes used for detailed research, language documentation, as well as publishing texts and works in sign languages.
  • Si5s is another orthography which is largely phonemic. However, a few signs are logographs and/or ideographs due to regional variation in sign languages.
  • ASL-phabet is a system designed primarily for education of deaf children by Dr. Sam Supalla which uses a minimalist collection of symbols in the order of Handshape-Location-Movement. Many signs can be written the same way (homograph).

So far, there is no formal acceptance of any of these writing systems for any sign language, or even any consensus on the matter. None are widely used.
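The homography that a minimalist system like ASL-phabet permits can be sketched schematically. The example below is a toy system, not ASL-phabet itself: the symbol inventory and the two example signs are invented. It shows how a Handshape-Location-Movement transcription can assign the same written form to distinct signs, because parameters such as palm orientation are not recorded.

```python
# A toy transcription in the spirit of a minimalist
# Handshape-Location-Movement orthography. All symbols and sign
# descriptions here are invented for illustration.
SYMBOLS = {
    "handshape": {"flat": "B", "fist": "S", "index": "1"},
    "location":  {"chin": "c", "chest": "t", "neutral-space": "n"},
    "movement":  {"tap": ".", "arc": ")", "straight": "-"},
}

def transcribe(sign):
    """Reduce a sign to three coarse symbols, ignoring every other
    parameter (orientation, non-manual features, and so on)."""
    return "".join(SYMBOLS[p][sign[p]]
                   for p in ("handshape", "location", "movement"))

# Two hypothetical signs that differ only in palm orientation,
# a parameter this minimalist system does not record:
sign_a = {"handshape": "flat", "location": "chin", "movement": "tap",
          "orientation": "palm-down"}
sign_b = {"handshape": "flat", "location": "chin", "movement": "tap",
          "orientation": "palm-up"}

print(transcribe(sign_a), transcribe(sign_b))  # Bc. Bc.  (homographs)
```

Distinct signs collapsing onto one written form is the trade-off such minimalist systems accept in exchange for a small, learnable symbol set; readers are expected to resolve the homographs from context.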

Sign perception

For a native signer, sign perception influences how the mind makes sense of their visual language experience. For example, a handshape may vary based on the other signs made before or after it, but these variations are arranged into perceptual categories during language development. The mind detects handshape contrasts but groups similar handshapes together in one category.[61][62][63] Different handshapes are stored in separate categories. The mind ignores some of the similarities between different perceptual categories, while preserving the visual information within each perceptual category of handshape variation.

Sign languages in society

Telecommunications


One of the first demonstrations of the ability for telecommunications to help sign language users communicate with each other occurred when AT&T's videophone (trademarked as the "Picturephone") was introduced to the public at the 1964 New York World's Fair – two deaf users were able to freely communicate with each other between the fair and another city.[64] However, video communication did not become widely available until sufficient bandwidth for the high volume of video data became available in the early 2000s.

The Internet now allows deaf people to talk via a video link, either with a special-purpose videophone designed for use with sign language or with "off-the-shelf" video services designed for use with broadband and an ordinary computer webcam. The latter technology, though more widely available, often does not provide sufficient quality for sign language communication, although high-definition units are increasingly available. The special videophones that are designed for sign language communication typically provide more frames per second than 'off-the-shelf' services and may use data compression methods specifically designed to maximize the intelligibility of sign languages. Some advanced equipment enables a person to remotely control the other person's video camera, in order to zoom in and out or to point the camera better to understand the signing.

Sign language interpretation


In order to facilitate communication between deaf and hearing people, sign language interpreters are often used. Such activities involve considerable effort on the part of the interpreter, since sign languages are distinct natural languages with their own syntax, different from any spoken language.

The interpretation flow is normally between a sign language and a spoken language that are customarily used in the same country, such as French Sign Language (LSF) to spoken French in France, Spanish Sign Language (LSE) to spoken Spanish in Spain, British Sign Language (BSL) to spoken English in the U.K., and American Sign Language (ASL) to spoken English in the U.S.A. (since BSL and ASL are distinct sign languages both used in English-speaking countries), etc. Sign language interpreters who can translate between signed and spoken languages that are not normally paired (such as between LSE and English), are also available, albeit less frequently.

Remote interpreting

Main articles: Video Remote Interpreting and Video Relay Service

Interpreters may be physically present with both parties to the conversation, but since the technological advancements in the early 2000s, provision of interpreters in remote locations has become available. In Video Remote Interpreting (VRI), the two clients (a sign-language user and a hearing person who wish to communicate with each other) are in one location, and the interpreter is in another. The interpreter communicates with the sign-language user via a video telecommunications link, and with the hearing person by an audio link. VRI can be used for situations in which no on-site interpreters are available.

However, VRI cannot be used for situations in which all parties are speaking via telephone alone. In Video Relay Service (VRS), the sign-language user, the interpreter, and the hearing person are in three separate locations, thus allowing the two clients to talk to each other on the phone through the interpreter.

Home sign

Main article: Home sign

Sign systems are sometimes developed within a single family. For instance, when hearing parents with no sign language skills have a deaf child, an informal system of signs will naturally develop, unless repressed by the parents. The term for these mini-languages is home sign (sometimes homesign or kitchen sign).[65]

Home sign arises due to the absence of any other way to communicate. Within the span of a single lifetime and without the support or feedback of a community, the child naturally invents signals to meet his or her communication needs. Although this kind of system is grossly inadequate for the intellectual development of a child and it comes nowhere near meeting the standards linguists use to describe a complete language, it is a common occurrence. No type of home sign is recognized as an official language.

Use of signs in hearing communities

On occasion, where the prevalence of deaf people is high enough, a deaf sign language has been taken up by an entire local community. Famous examples of this include Martha's Vineyard Sign Language in the USA, Kata Kolok in a village in Bali, Adamorobe Sign Language in Ghana and Yucatec Maya sign language in Mexico. In such communities deaf people are not socially disadvantaged.

Many Australian Aboriginal sign languages arose in a context of extensive speech taboos, such as during mourning and initiation rites. They are or were especially highly developed among the Warlpiri, Warumungu, Dieri, Kaytetye, Arrernte, and Warlmanpa, and are based on their respective spoken languages.

A pidgin sign language arose among tribes of American Indians in the Great Plains region of North America (see Plains Indian Sign Language). It was used to communicate among tribes with different spoken languages. Today it is used especially among the Crow, Cheyenne, and Arapaho. Unlike other sign languages developed by hearing people, it shares the spatial grammar of deaf sign languages.

Signs may also be used for manual communication in noisy or secret situations.

Sign language and children

Main article: Baby sign language

Sign language is becoming a popular way for hearing parents to communicate with their young hearing children. Since the muscles in babies' hands develop more quickly than those involved in speech, sign language can enable earlier communication.[66] Babies can usually produce signs before they can speak, which reduces frustration when parents try to work out what their child wants. When the child begins to speak, signing is usually abandoned.

Gestural theory of human language origins

The gestural theory states that vocal human language developed from a gestural sign language.[67] An important question for gestural theory is what caused the shift to vocalization.[68]

Primate use of sign language

There have been several notable examples of scientists teaching non-human primates basic signs in order to communicate with humans,[69] but the degree to which these basic signs relate to human sign language and the ability of the animals in question to actually communicate is a matter of substantial controversy and dispute.[70][71] Notable examples include:

Deaf communities and deaf culture

Deaf communities are widespread around the world, and the cultures that have developed within them are rich. These cultures sometimes do not intersect with the culture of the hearing population at all, because various barriers prevent hard-of-hearing people from perceiving aurally conveyed information.

There are several theories about what Native American sign languages were used for. One holds that the sign systems developed to make it easier for local inhabitants to talk with each other: in the 1500s, the Spanish explorer Cabeza de Vaca observed natives in the western part of modern-day Florida using sign language, and in the mid-16th century Francisco de Coronado mentioned that communication with the Tonkawa by signs was possible without an interpreter.

Other accounts tie signing to trade, with sign serving as a common, mutually intelligible language; exaggerated notions that Native Americans signed because they were perceived as an "exotic" and "uncivilized" group have also circulated. Nevertheless, the signs used by these peoples served primarily for communication between tribes and during hunting. Whether the gestures used by these groups fully reached the status of complete languages, permitting full communication without speech, is still a matter of debate. Estimates suggest that as many as 15 in 650 Native Americans have serious hearing loss or are completely deaf, more than twice the national average.

Legal recognition

Some sign languages have obtained some form of legal recognition, while others have no status at all. Sarah Batterbury has argued that sign languages should be recognized and supported not merely as an accommodation for the disabled, but as the communication medium of language communities.[72]

See also

References

Bibliography

  • Branson, J., D. Miller, & I G. Marsaja. (1996). "Everyone here speaks sign language, too: a deaf village in Bali, Indonesia." In: C. Lucas (ed.): Multicultural aspects of sociolinguistics in deaf communities. Washington, Gallaudet University Press, pp. 39–5.
  • Brentari, D. (1998). A prosodic model of sign language phonology. Cambridge, MA: MIT Press.
  • Brown R. (1980). “Why are signed languages easier to learn than spoken languages?” in Proceedings of the First National Symposium on Sign Language Research and Teaching, ed. Stokoe W. C., editor. (Washington, DC: National Association of the Deaf), 9–24.
  • Canlas, Loida (2006). "Laurent Clerc: Apostle to the Deaf People of the New World." The Laurent Clerc National Deaf Education Center, Gallaudet University.
  • Deuchar, Margaret (1987). "Sign languages as creoles and Chomsky's notion of Universal Grammar." Essays in honor of Noam Chomsky, 81–91. New York: Falmer.
  • Emmorey, Karen; & Lane, Harlan L. (Eds.). (2000). The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum Associates. ISBN 0-8058-3246-7.
  • Fischer, Susan D. (1974). "Sign language and linguistic universals." Actes du Colloque franco-allemand de grammaire générative, 2.187–204. Tübingen: Niemeyer.
  • Frishberg, N. (1975). Arbitrariness and Iconicity: Historical Change in America. Language, 51(3), 696–719.
  • Frishberg, Nancy (1987). "Ghanaian Sign Language." In: Cleve, J. Van (ed.), Gallaudet encyclopaedia of deaf people and deafness. New York: McGraw-Hill Book Company.
  • Goldin-Meadow, Susan (2003). The Resilience of Language: What Gesture Creation in Deaf Children Can Tell Us About How All Children Learn Language. New York: Psychology Press, a subsidiary of Taylor & Francis.
  • Gordon, Raymond, ed. (2008). [8].
  • Groce, Nora E. (1988). Everyone here spoke sign language: Hereditary deafness on Martha's Vineyard. Cambridge, MA: Harvard University Press. ISBN 0-674-27041-X.
  • Healy, Alice F. (1980). Can Chimpanzees learn a phonemic language? In: Sebeok, Thomas A. & Jean Umiker-Sebeok, eds, Speaking of apes: a critical anthology of two-way communication with man. New York: Plenum, 141–143.
  • Johnston, Trevor A. (1989). Auslan: The Sign Language of the Australian Deaf community. The University of Sydney: unpublished Ph.D. dissertation.
  • Kamei, Nobutaka (2004). The Sign Languages of Africa, "Journal of African Studies" (Japan Association for African Studies) Vol.64, March, 2004. [NOTE: Kamei lists 23 African sign languages in this article].
  • Kegl, Judy (1994). "The Nicaraguan Sign Language Project: An Overview." Signpost 7:1.24–31.
  • Kegl, Judy, Senghas A., Coppola M (1999). "Creation through contact: Sign language emergence and sign language change in Nicaragua." In: M. DeGraff (ed), Comparative Grammatical Change: The Intersection of Language Acquisition, Creole Genesis, and Diachronic Syntax, pp. 179–237. Cambridge, MA: MIT Press.
  • Kegl, Judy (2004). "Language Emergence in a Language-Ready Brain: Acquisition Issues." In: Jenkins, Lyle, (ed), Biolinguistics and the Evolution of Language. John Benjamins.
  • Kendon, Adam. (1988). Sign Languages of Aboriginal Australia: Cultural, Semiotic and Communicative Perspectives. Cambridge: Cambridge University Press.
  • Kimura, Doreen (1993). Neuromotor Mechanisms in Human Communication. Oxford: Oxford University Press.
  • Klima, Edward S.; & Bellugi, Ursula. (1979). The signs of language. Cambridge, MA: Harvard University Press. ISBN 0-674-80795-2.
  • Kolb, Bryan, and Ian Q. Whishaw (2003). Fundamentals of Human Neuropsychology, 5th edition, Worth Publishers.
  • Krzywkowska, Grazyna (2006). "Przede wszystkim komunikacja", an article about a dictionary of Hungarian sign language on the internet (Polish).
  • Lane, Harlan L. (Ed.). (1984). The Deaf experience: Classics in language and education. Cambridge, MA: Harvard University Press. ISBN 0-674-19460-8.
  • Lane, Harlan L. (1984). When the mind hears: A history of the deaf. New York: Random House. ISBN 0-394-50878-5.
  • Madell, Samantha (1998). Warlpiri Sign Language and Auslan – A Comparison. M.A. Thesis, Macquarie University, Sydney, Australia.
  • Madsen, Willard J. (1982), Intermediate Conversational Sign Language. Gallaudet University Press. ISBN 978-0-913580-79-0.
  • Meir, I. (2010). Iconicity and metaphor: Constraints on metaphorical extension of iconic forms. Language, 86(4), 865–896.
  • O'Reilly, S. (2005). Indigenous Sign Language and Culture; the interpreting and access needs of Deaf people who are of Aboriginal and/or Torres Strait Islander in Far North Queensland. Sponsored by ASLIA, the Australian Sign Language Interpreters Association.
  • Padden, Carol; & Humphries, Tom. (1988). Deaf in America: Voices from a culture. Cambridge, MA: Harvard University Press. ISBN 0-674-19423-3.
  • Pfau, Roland, Markus Steinbach & Bencie Woll (eds.), Sign language. An international handbook (HSK - Handbooks of linguistics and communication science). Berlin: Mouton de Gruyter.
  • Poizner, Howard; Klima, Edward S.; & Bellugi, Ursula. (1987). What the hands reveal about the brain. Cambridge, MA: MIT Press.
  • Premack, David, & Ann J. Premack (1983). The mind of an ape. New York: Norton.
  • Sacks, Oliver W. (1989). Seeing voices: A journey into the world of the deaf. Berkeley: University of California Press. ISBN 0-520-06083-0.
  • Sandler, Wendy (2003). Sign Language Phonology. In William Frawley (Ed.), The Oxford International Encyclopedia of Linguistics.
  • Sandler, Wendy; & Lillo-Martin, Diane. (2001). Natural sign languages. In M. Aronoff & J. Rees-Miller (Eds.), Handbook of linguistics (pp. 533–562). Malden, MA: Blackwell Publishers. ISBN 0-631-20497-0.
  • Sandler, Wendy; & Lillo-Martin, Diane. (2006). Sign Language and Linguistic Universals. Cambridge: Cambridge University Press
  • Stiles-Davis, Joan; Kritchevsky, Mark; & Bellugi, Ursula (Eds.). (1988). Spatial cognition: Brain bases and development. Hillsdale, NJ: L. Erlbaum Associates. ISBN 0-8058-0046-8; ISBN 0-8058-0078-6.
  • Stokoe, William C. (1960, 1978). Sign language structure: An outline of the visual communication systems of the American deaf. Studies in linguistics, Occasional papers, No. 8, Dept. of Anthropology and Linguistics, University at Buffalo. 2nd ed., Silver Spring, MD: Linstok Press.
  • Stokoe, William C. (1974). Classification and description of sign languages. Current Trends in Linguistics 12.345–71.
  • Taub, S. (2001). Language from the body. New York : Cambridge University press.
  • Valli, Clayton, Ceil Lucas, and Kristin Mulrooney. (2005) Linguistics of American Sign Language: An Introduction. 4th Ed. Washington, DC: Gallaudet University Press.
  • Van Deusen-Phillips S.B., Goldin-Meadow S., Miller P.J., 2001. Enacting Stories, Seeing Worlds: Similarities and Differences in the Cross-Cultural Narrative Development of Linguistically Isolated Deaf Children, Human Development, Vol. 44, No. 6.
  • Wilbur, R. B. (1987). American Sign Language: Linguistic and applied dimensions. San Diego, CA: College-Hill.
  • Wilcox, P. (2000). Metaphor in American Sign Language. Washington D.C.: Gallaudet University Press.
  • Wilcox, S. (2004). Conceptual spaces and embodied actions: Cognitive iconicity and signed languages. Cognitive Linguistics, 15(2), 119–147.

Further reading

  • Fox, Margalit (2007) Talking Hands: What Sign Language Reveals About the Mind , Simon & Schuster ISBN 978-0-7432-4712-2
  • Quenqua, Douglas. The New York Times, December 4, 2012, p.D1 and published online at NYTimes.com on December 3, 2012. Retrieved on December 7, 2012.

External links

Note: the articles for specific sign languages (e.g. ASL or BSL) may contain further external links, e.g. for learning those languages.

  • International Dictionary of Sign Languages Community compiled dictionary of various sign languages from around the world.
  • Signes du Monde, directory for all online Sign Languages dictionaries (French) / (English)
  • List Serv for Sign Language Linguistics
  • The MUSSLAP Project Multimodal Human Speech and Sign Language Processing for Human-Machine Communication.
  • Smithsonian Institution, 1879–1880
  • Pablo Bonet, J. de (1620) , Biblioteca Digital Hispánica (BNE).
  • Science in Sign (Video, 3 min. 48 secs.), by Davis, Leslye & Huang, Jon & Xaquin, G.V.; interpreted by Callis, Lydia, on NYTimes.com website, December 4, 2012. Retrieved December 13, 2012. The video translates a shortened version of a N.Y. Times science article on how new signs are being developed to enhance communication in the sciences, extracted from:
    • Quenqua, Douglas. The New York Times, December 4, 2012, p.D1 and published online at NYTimes.com on December 3, 2012. Retrieved on December 7, 2012.

This article was sourced from Creative Commons Attribution-ShareAlike License; additional terms may apply.