The methods of linguistic research used in lexicology





MINISTRY OF HIGHER EDUCATION SCIENCE AND INNOVATION

1.2 Types of analysis in lexicology
Contrastive Analysis
Contrastive linguistics as a systematic branch of linguistic science is of fairly recent date, though what is new is not the idea itself but rather its systematisation and underlying principles. It is common knowledge that comparison is the basic principle of comparative philology. However, the aims and methods of comparative philology differ considerably from those of contrastive linguistics. The comparativist compares languages in order to trace their phylogenetic relationships. The material he draws on for comparison consists mainly of individual sounds, sound combinations and words, and the aim is to establish family relationships. The term used to describe this field of investigation is historical linguistics or diachronic linguistics.
Comparison is also applied in typological classification and analysis. Such comparison classifies languages by types rather than by origins and relationships. One of the purposes of typological comparison is to arrive at language universals — those elements and processes that, despite their surface diversity, all languages have in common.
Contrastive linguistics attempts to find out similarities and differences in both phylogenetically related and non-related languages.
Linguistic scholars working in the field of applied linguistics assume that the most effective teaching materials are those based upon a scientific description of the language to be learned, carefully compared with a parallel description of the learner's native language.
Contrastive analysis can be carried out at three linguistic levels: phonology, grammar (morphology and syntax) and lexis (vocabulary). In what follows we shall try to give a brief survey of contrastive analysis mainly at the level of lexis.
Contrastive analysis is applied to reveal the features of sameness and difference in the lexical meaning and the semantic structure of correlated words in different languages.
It is commonly assumed by non-linguists that all languages have vocabulary systems in which the words themselves differ in sound-form but refer to reality in the same way. From this assumption it follows that for every word in the mother tongue there is an exact equivalent in the foreign language. This belief is reinforced by small bilingual dictionaries, where single-word translations are often offered. Language learning, however, cannot be just a matter of learning to substitute a new set of labels for the familiar ones of the mother tongue.
Firstly, it should be borne in mind that though objective reality exists outside human beings and irrespective of the language they speak, every language classifies reality in its own way by means of its vocabulary units. In English, e.g., the word foot is used to denote the extremity of the leg. In Russian there is no exact equivalent for foot: the word нога denotes the whole leg, including the foot.
The classification of the real world around us provided by the vocabulary units of our mother tongue is learned and assimilated together with our first language. Because we are used to the way in which our own language structures experience, we are often inclined to think of this as the only natural way of handling things, whereas in fact it is highly arbitrary. One example is provided by the words watch and clock. It would seem natural for Russian speakers to have a single word to refer to all devices that tell us what time it is; yet in English they are divided into two semantic classes depending on whether or not they are customarily portable. We also find it natural that kinship terms should reflect the difference between male and female: brother or sister, father or mother, uncle or aunt, etc.; yet in English we fail to make this distinction in the case of cousin (cf. the Russian двоюродный брат, двоюродная сестра). Contrastive analysis also brings to light what can be labelled problem pairs, i.e. cases in which one language denotes two entities by a single word that corresponds to two different words in another language.
Compare, for example, часы in Russian and clock, watch in English; художник in Russian and artist, painter in English.
Each language contains words which cannot be translated directly from it into another language. For example, favourite examples of untranslatable German words are gemütlich (something like ‘easy-going’, ‘humbly pleasant’, ‘informal’) and Schadenfreude (‘pleasure over the fact that someone else has suffered a misfortune’). Traditional examples of untranslatable English words are sophisticated and efficient.
This is not to say that the lack of word-for-word equivalents implies the lack of what is denoted by these words. If this were true, we would have to conclude that speakers of English never indulge in Schadenfreude and that there are no sophisticated Germans, or that there is no efficient industry in any country outside England or the USA.
If we abandon the primitive notion of word-for-word equivalence, we can safely assume, firstly, that anything which can be said in one language can be translated more or less accurately into another, secondly, that correlated polysemantic words of different languages are as a rule not co-extensive. Polysemantic words in all languages may denote very different types of objects and yet all the meanings are considered by the native speakers to be obviously logical extensions of the basic meaning. For example, to an Englishman it is self-evident that one should be able to use the word head to denote the following:

head: of a person, of a bed, of a coin, of a cane, of a match, of a table, of an organisation,
whereas in Russian different words have to be used: голова, изголовье, сторона, головка, etc.
The very real danger for the Russian learner of English here is that, having learned first that head is the English word denoting a part of the body, he will assume that it can be used in all the cases where the Russian word голова is used, e.g. голова сахара (‘a loaf of sugar’), городской голова (‘mayor of the city’), он парень с головой (‘he is a bright lad’), в первую голову (‘in the first place’), погрузиться во что-л. с головой (‘to throw oneself into smth.’), etc., but will never think of using the word head in connection with ‘a bed’ or ‘a coin’.
Thirdly, the meaning of any word depends to a great extent on the place it occupies in the set of semantically related words: its synonyms, the constituents of the lexical field the word belongs to, other members of the word-family which the word enters, etc.
Thus, e.g., in the English synonymic set brave, courageous, bold, fearless, audacious, valiant, valorous, doughty, undaunted, intrepid each word differs in a certain component of meaning from the others: brave usually implies resolution and self-control in meeting without flinching a situation that inspires fear; courageous stresses stout-heartedness and firmness of temper; bold implies either a temperamental liking for danger or a willingness to court danger or to dare the unknown, etc. Comparing the corresponding Russian synonymic set храбрый, бесстрашный, смелый, мужественный, отважный, etc. we see that the Russian word смелый, e.g., may be considered a correlate of either brave, valiant or valorous, and also that no member of the Russian synonymic set can be viewed as an exact equivalent of any single member of the English synonymic set in isolation, although all of them denote ‘having or showing fearlessness in meeting that which is dangerous, difficult, or unknown’. Different aspects of this quality are differently distributed among the words making up the synonymic set.
This absence of one-to-one correspondence can also be observed if we compare the constituents of the same lexico-semantic group in different languages. Thus, for example, let us assume that an Englishman has in his vocabulary the following words for evaluating mental aptitude: apt, bright, brilliant, clever, cunning, intelligent, shrewd, sly, dull, stupid, slow, foolish, silly. Each of these words has a definite meaning for him, and therefore each word actually represents a value judgement. As the Englishman sees a display of mental aptitude, he attaches one of these words to the situation, and in so doing he attaches a value judgement. The corresponding Russian semantic field of mental aptitude is different (cf. способный, хитрый, умный, глупый, тупой, etc.), and therefore the meaning of each word is slightly different too.
What Russian speakers would describe as хитрый might be described by English speakers as either cunning or sly depending on how they evaluate the given situation.
The problem under discussion may also be illustrated by an analysis of the members of correlated word-families, e.g. голова, головка, etc. and head, heady, etc., which are differently connected with the main word of the family in each of the two languages and have different denotational and connotational components of meaning. This can be easily observed in words containing diminutive and endearing suffixes: the English words head, grandfather, girl and others do not possess the connotative component which is part of the meaning of the Russian words головка, головушка, головёнка, дедушка, дедуля, etc.
Thus, on the lexical level, or more exactly on the level of lexical meaning, contrastive analysis reveals that correlated polysemantic words are not co-extensive, and it shows the teacher where to expect an unusual degree of learning difficulty. Such analysis may also point out effective ways of overcoming the anticipated difficulty, as it shows which of the new items will require more extended and careful presentation and practice.
Difference in the lexical meaning (or meanings) of correlated words accounts for the difference of their collocability in different languages. This is of particular importance in developing speech habits as the mastery of collocations is much more important than the knowledge of isolated words.
Thus, e.g., the English adjective new and the Russian adjective новый when taken in isolation are felt as correlated words as in a number of cases new stands for новый, e.g. новое платье — a new dress, Новый Год — New Year. In collocation with other nouns, however, the Russian adjective cannot be used in the same meaning in which the English word new is currently used. Compare, e.g., new potatoes — молодая картошка, new bread — свежий хлеб, etc.
The lack of co-extension may be observed in collocations made up by words belonging to different parts of speech, e.g. compare word-groups with the verb to fill:
to fill a lamp — заправлять лампу
to fill a truck — загружать машину
to fill a pipe — набивать трубку
to fill a gap — заполнять пробел
As we see, the verb to fill in different collocations corresponds to a number of different verbs in Russian. Conversely, one Russian word may correspond to a number of English words.
For instance, compare:
тонкая книга — a thin book
тонкая ирония — subtle irony
тонкая талия — a slim waist
Perhaps the greatest difficulty for the Russian learners of English is the fact that not only notional words but also function words in different languages are polysemantic and not co-extensive. Quite a number of mistakes made by the Russian learners can be accounted for by the divergence in the semantic structure of function words. Compare, for example, the meanings of the Russian preposition до and its equivalents in the English language.
(Он работал) до 5 часов — till 5 o'clock
(Это было) до войны — before the war
(Он дошел) до угла — to the corner
Contrastive analysis on the level of the grammatical meaning reveals that correlated words in different languages may differ in the grammatical component of their meaning.
To take a simple instance, Russians are liable to say *the news are good, *the money are on the table, *her hair are black, etc., as the words новости, деньги, волосы have the grammatical meaning of plurality in the Russian language.
Of particular interest in contrastive analysis are the compulsory grammatical categories which foreign language learners may find in the language they are studying and which are different from or nonexistent in their mother tongue. These are the meanings which the grammar of the language “forces” us to signal whether we want it or not.
One of the compulsory grammatical categories in English is the category of definiteness/indefiniteness. We know that English signals this category by means of the articles. Compare the meaning of the word man in the man is honest and man is honest.
As this category is non-existent in the Russian language it is obvious that Russian learners find it hard to use the articles properly.
Contrastive analysis brings to light the essence of what is usually described as idiomatic English, idiomatic Russian etc., i.e. the peculiar way in which every language combines and structures in lexical units various concepts to denote extra-linguistic reality.
The outstanding Russian linguist Academician L. V. Ščerba repeatedly stressed the fact that it is an error in principle to suppose that the notional systems of any two languages are identical. Even in those areas where the two cultures overlap and where the material extralinguistic world is identical, the lexical units of the two languages are not merely different labels appended to identical concepts. In the overwhelming majority of cases the concepts denoted are differently organised by verbal means in the two languages. Different verbal organisation of concepts in different languages may be observed not only in differences in the semantic structure of correlated words but also in structural differences in the word-groups commonly used to denote identical entities.
For example, a typical Russian word-group used to describe the way somebody performs an action, or the state in which a person finds himself, has a structure that may be represented by the formula adverb + finite form of a verb (or verb + adverb), e.g. он крепко спит, он быстро /медленно/ усваивает, etc. In English we can also use structurally similar word-groups and say he smokes a lot, he learns slowly (fast), etc. The structure of idiomatic English word-groups, however, is different. The formula of this word-group can be represented as adjective + deverbal noun, e.g. he is a heavy smoker, a poor learner; cf. “the Englishman is a slow starter but there is no stronger finisher” (Galsworthy). Another English word-group used in similar cases has the structure to be + adjective + infinitive, e.g. (He) is quick to realise, (He) is slow to cool down, etc., which is practically non-existent in the Russian language. Commonly used English words of the type (he is) an early riser, a music-lover, etc. have no counterparts in the Russian language and as a rule correspond to phrases of the type (Он) рано встает, (он) очень любит музыку, etc.
Last but not least, contrastive analysis deals with the meaning and use of situational verbal units, i.e. words, word-groups and sentences which are commonly used by native speakers in certain situations.
For instance, when we answer a telephone call and hear somebody asking for a person whose name we have never heard, the usual answer for the Russian speaker would be Вы ошиблись (номером), Вы не туда попали. The Englishman in an identical situation is likely to say Wrong number. When somebody apologises for inadvertently pushing you or treading on your foot and says Простите! (I beg your pardon. Excuse me.), the Russian speaker in reply to the apology would probably say Ничего, пожалуйста, whereas the verbal reaction of an Englishman would be different: It’s all right. It does not matter. *Nothing or *Please in this case cannot be viewed as words correlated with Ничего, Пожалуйста.
To sum up, the value of contrastive analysis can hardly be overestimated: it is an indispensable stage in the preparation of teaching material, in the selection of lexical items to be extensively practised and in the prediction of typical errors. It is also of great value for the efficient teacher, who knows that to have a native-like command of a foreign language, to be able to speak what we call idiomatic English, words, word-groups and whole sentences must be learned within the lexical, grammatical and situational restrictions of the English language.
Statistical Analysis
An important and promising trend in modern linguistics, one which has been making progress during the last few decades, is the quantitative study of language phenomena and the application of statistical methods in linguistic analysis.
Statistical linguistics is nowadays generally recognised as one of the major branches of linguistics. Statistical inquiries have considerable importance not only because of their precision but also because of their relevance to certain problems of communication engineering and information theory.
Probably one of the most important things for modern linguistics was the realisation of the fact that non-formalised statements are as a matter of fact unverifiable, whereas any scientific method of cognition presupposes verification of the data obtained. The value of statistical methods as a means of verification is beyond dispute.
Though statistical linguistics has a wide field of application, here we shall discuss mainly the statistical approach to vocabulary.
The statistical approach has proved essential in the selection of vocabulary items of a foreign language for teaching purposes.
It is common knowledge that very few people know more than 10% of the words of their mother tongue. It follows that if we do not wish to waste time on committing to memory vocabulary items which are never likely to be useful to the learner, we have to select only lexical units that are commonly used by native speakers. Of the about 500,000 words listed in the OED, the “passive” vocabulary of an educated Englishman comprises no more than 30,000 words, and of these 4,000 — 5,000 are presumed to be amply sufficient for the daily needs of an average member of the English speech community. Thus it is evident that the problem of selecting a teaching vocabulary is of vital importance. It is also evident that by far the most reliable single criterion is that of frequency, as presumably the most useful items are those that occur most frequently in language use.
As far back as 1927, recognising the need for information on word frequency for sound teaching materials, E. L. Thorndike brought out a list of the 10,000 words occurring most frequently in a corpus of five million running words drawn from forty-one different sources. In 1944 the list was extended to 30,000 words.
Statistical techniques have been successfully applied in the analysis of various linguistic phenomena: different structural types of words, affixes, the vocabularies of great writers and poets and even in the study of some problems of historical lexicology.
Statistical regularities, however, can be observed only if the phenomena under analysis are sufficiently numerous and their occurrence very frequent. Thus the first requirement of any statistical investigation is the evaluation of the size of the sample necessary for the analysis.
To illustrate this statement we may consider the frequency of word occurrences.
It is common knowledge that a comparatively small group of words makes up the bulk of any text. It was found that approximately the 1,300 — 1,500 most frequent words make up 85% of all words occurring in a text. If, however, we analyse a sample of only 60 words, it is hard to predict the number of occurrences of the most frequent words: as the sample is so small, it may contain comparatively very few or very many such words. The size of the sample sufficient for reliable information as to the frequency of the items under analysis is determined by mathematical statistics by means of certain formulas.
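The coverage figure quoted above is straightforward to compute. The sketch below, in Python, counts how much of a running text the n most frequent word-forms account for; the toy corpus is an assumption for illustration only, since Thorndike-style counts rest on millions of running words.

```python
from collections import Counter

def coverage_of_top(tokens, n):
    """Fraction of all running words accounted for by the n most frequent word-forms."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return sum(freq for _, freq in counts.most_common(n)) / total

# Toy corpus (an assumption for illustration; real counts such as Thorndike's
# rest on millions of running words).
text = ("the cat sat on the mat and the dog sat on the rug "
        "while the cat and the dog slept").split()

print(coverage_of_top(text, 3))   # the 3 most frequent forms cover half this sample
```

On a real corpus the same function would show the steep coverage curve described above: a small core of forms covering most of the running text.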
It goes without saying that to be useful in teaching, statistics should deal with meanings as well as sound-forms, as not all word-meanings are equally frequent. Besides, the number of meanings exceeds by far the number of words. The total number of different meanings recorded and illustrated in the OED for the first 500 words of the Thorndike Word List is 14,070; for the first thousand it is nearly 25,000. Naturally not all of these meanings should be included in a list of the first two thousand most commonly used words. Statistical analysis of meaning frequencies resulted in the compilation of A General Service List of English Words with Semantic Frequencies. The semantic count is a count of the frequency of occurrence of the various senses of the 2,000 most frequent words as found in a study of five million running words. It is based on the differentiation of the meanings in the OED, and the frequencies are expressed as percentages, so that the teacher and textbook writer may find it easier to understand and use the list. An example will make the procedure clear.

room (‘space’): takes less room, not enough room to turn round (in), make room for, (figurative) room for improvement — 12%
room (‘part of a house’): come to my room, bedroom, sitting room, drawing room, bathroom — 83%
room (plural = ‘suite, lodgings’): my room in college, to let rooms — 2%

It can be easily observed from the semantic count above that the meaning ‘part of a house’ (sitting room, drawing room, etc.) makes up 83% of all occurrences of the word room and should be included in the list of meanings to be learned by beginners, whereas the meaning ‘suite, lodgings’ is not essential, as it makes up only 2% of all occurrences of this word.
Statistical methods have also been applied to various theoretical problems of meaning. An interesting attempt was made by G. K. Zipf to study the relation between polysemy and word frequency by statistical methods. Having discovered that there is a direct relationship between the number of different meanings of a word and its relative frequency of occurrence, Zipf proceeded to find a mathematical formula for this correlation. He came to the conclusion that the number of different meanings of a word will tend to be equal to the square root of its relative frequency (with the possible exception of the few dozen most frequent words). This was summed up in the formula m = F^(1/2), i.e. m = √F, where m stands for the number of meanings and F for the relative frequency. This formula is known as Zipf’s law.
Though numerous corrections to this law have been suggested, still there is no reason to doubt the principle itself, namely, that the more frequent a word is, the more meanings it is likely to have.
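Zipf's correlation can be sketched directly: the function below simply implements m = √F. The frequency figures fed to it are illustrative assumptions, not data from an actual count.

```python
import math

def predicted_meanings(relative_frequency):
    """Zipf's correlation: the number of meanings m tends toward the
    square root of the word's relative frequency F, i.e. m = F ** 0.5."""
    return math.sqrt(relative_frequency)

# Illustrative frequency figures (assumed, not taken from an actual count):
# by the formula, a word sixteen times as frequent as another is predicted
# to have about four times as many meanings.
for freq in (400, 100, 25):
    print(freq, predicted_meanings(freq))
```

The qualitative point survives the numerous corrections to the law: the prediction grows monotonically with frequency, i.e. the more frequent a word is, the more meanings it is expected to have.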
One of the most promising trends in statistical enquiries is the analysis of the collocability of words. It has been observed that words are joined together according to certain rules. The linguistic structure of any string of words may be described as a network of grammatical and lexical restrictions.
The set of lexical restrictions is very complex. On the standard probability scale the possibilities of combination of lexical units range from zero (impossibility) to one (certainty).
Of considerable significance in this respect is the fact that a high frequency value of individual lexical items does not forecast a high frequency of the word-group formed by these items. Thus, e.g., the adjective able and the noun man are both included in the list of the 2,000 most frequent words; the word-group an able man, however, is very rarely used.
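This observation is easy to check on any sample by counting individual items and adjacent pairs separately. The sketch below uses an assumed toy token list in which able and man are each frequent while the pair able man never occurs.

```python
from collections import Counter

def unigram_and_bigram_counts(tokens):
    """Count single word-forms and adjacent word pairs in a token list."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

# Assumed toy sample: 'able' and 'man' are each frequent here, yet the
# word-group 'able man' never occurs.
tokens = ("the man was able to go and the man said the boy was able "
          "to stay for the man was old").split()

uni, bi = unigram_and_bigram_counts(tokens)
print(uni["able"], uni["man"], bi[("able", "man")])   # 2 3 0
```

A Counter returns 0 for an unseen pair, which is exactly the point: item frequency and word-group frequency must be counted independently.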
The importance of frequency analysis of word-groups is indisputable as in speech we actually deal not with isolated words but with word-groups. Recently attempts have been made to elucidate this problem in different languages both on the level of theoretical and applied lexicology and lexicography.
It should be pointed out, however, that the statistical study of vocabulary has some inherent limitations.
Firstly, the statistical approach is purely quantitative, whereas most linguistic problems are essentially qualitative. To put it in simpler terms, quantitative research implies that one knows what to count, and this knowledge is reached only through a long period of qualitative research carried out on the basis of certain theoretical assumptions.
For example, even simple numerical word counts presuppose a qualitative definition of the lexical items to be counted. In this connection different questions may arise: e.g., is the orthographical unit work to be considered as one word or as two different words, work n and (to) work v? Are all word-groups to be viewed as consisting of so many words, or are some of them to be counted as single, self-contained lexical units? We know that in some dictionaries word-groups of the type by chance, at large, in the long run, etc. are counted as one item, though they consist of at least two words; in others they are not counted at all but viewed as peculiar cases of usage of the notional words chance, large, run, etc. Naturally the results of word counts largely depend on the basic theoretical assumptions, i.e. on the definition of the lexical item.
We also need a qualitative description of the language in deciding whether we are dealing with one item or more than one, e.g. in sorting out two homonymous words and different meanings of one word. It follows that before counting homonyms one must have a clear idea of what difference in meaning is indicative of homonymy. From the discussion of the linguistic problems above we may conclude that an exact and exhaustive definition of the qualitative linguistic aspects of the items under consideration must precede the statistical analysis.
Secondly, we must admit that not all linguists have the mathematical equipment necessary for applying statistical methods. In fact, what is often referred to as statistical analysis is a purely numerical count of this or that linguistic phenomenon, not involving the use of any mathematical formula, which in some cases may be misleading.
Thus, statistical analysis is applied in different branches of linguistics, including lexicology, as a means of verification and as a reliable criterion for the selection of language data, provided a qualitative description of the lexical items is available.
Distributional Analysis

The essential difference between grammar and lexis is that grammar deals with an obligatory choice between a comparatively small and limited number of possibilities, e.g. between the man and men depending on the form of the verb to be, cf. The man is walking, The men are walking, where the selection of the singular number excludes the selection of the plural number. Lexis accounts for the much wider possibilities of choice between, say, man, soldier, fireman and so on. Lexis is thus said to be a matter of choice between open sets of items, while grammar is one between closed systems. The possibilities of choice between lexical items are not limitless, however. Lexical items containing certain semantic components are usually observed only in certain positions. In phrases such as all the sun long, a grief ago and farmyards away the deviation consists in the use of the nouns sun, grief, farmyards in a position where normally only members of a limited list of words appear (in this case nouns of linear measurement such as inches, feet, miles). The difference between the normal lexical paradigm and the ad hoc paradigm can be represented as follows:

inches, feet, yards, etc. + away (normal)
farmyards, griefs, etc. + away (deviant)

Cf. also “half an hour and ten thousand miles ago” (Arthur C. Clarke), “She is feeling miles better today.” (Nancy Milford)
Distribution, defined as the occurrence of a lexical unit relative to other lexical units, can be interpreted as the co-occurrence of lexical items, and the two terms can thus be viewed as synonyms.
It follows that by the term distribution we understand the aptness of a word in one of its meanings to collocate or co-occur with a certain group, or certain groups, of words having some common semantic component. In this case distribution may be treated on the level of semantic classes or subclasses of lexical units. Thus, e.g., it is common practice to subdivide animate nouns into nouns denoting human beings and non-humans (animals, birds, etc.). Inanimate nouns are usually subdivided into concrete and abstract (cf., e.g., table, book, flower and joy, idea, relation), which may be further classified into lexico-semantic groups, i.e. groups of words joined together by a common concept, e.g. nouns denoting pleasurable emotions (joy, delight, rapture, etc.) or nouns denoting mental aptitude (cleverness, brightness, shrewdness, etc.).
We observe that the verb to move followed by nouns denoting inanimate objects (move + Nin) as a rule has the meaning ‘cause to change position’; when, however, this verb is followed by nouns denoting human beings (move + Nanim pers), it will usually have another meaning, i.e. ‘arouse, work on the feelings of’. In other cases the classification of nouns into animate / inanimate may be insufficient for the semantic analysis, and it may be necessary to single out different lexico-semantic groups, as, e.g., in the case of the adjective blind. Any collocation of this adjective with a noun denoting a living being (blind + Nan) will bring out the meaning ‘without the power to see’ (blind man, blind cat, etc.). Blind followed by a noun denoting inanimate objects or abstract concepts may have different meanings depending on the lexico-semantic group the noun belongs to. Thus blind will have the meaning ‘reckless, thoughtless, etc.’ when combined with nouns denoting emotions (blind passion, blind love, blind fury, etc.) and the meaning ‘hard to discern, to see’ in collocation with nouns denoting written or typed signs (blind handwriting, blind type, etc.).
In the analysis of word-formation patterns, investigation on the level of lexico-semantic groups is commonly used to find out the word-meaning, the part of speech, the lexical restrictions of the stems, etc. For example, the analysis of the derivational pattern n + -ish -> A shows that the suffix -ish is practically never combined with noun-stems which denote units of time, units of space, etc. (*hourish, *mileish, etc.). The overwhelming majority of adjectives in -ish are formed from noun-stems denoting living beings (wolfish, clownish, boyish, etc.).
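Distribution-based sense selection of the kind described for blind can be sketched as a simple lookup: the semantic class of the co-occurring noun selects the sense of the adjective. The tiny hand-made lexicon and the class labels below are assumptions made for illustration only.

```python
# A hand-made toy lexicon of semantic classes -- an assumption for
# illustration, not an actual classification from the literature.
SEMANTIC_CLASS = {
    "man": "animate", "cat": "animate",
    "passion": "emotion", "love": "emotion", "fury": "emotion",
    "handwriting": "written_sign", "type": "written_sign",
}

# Senses of 'blind' keyed by the semantic class of the co-occurring noun.
BLIND_SENSES = {
    "animate": "without the power to see",
    "emotion": "reckless, thoughtless",
    "written_sign": "hard to discern",
}

def sense_of_blind(noun):
    """Select the sense of 'blind' from the class of the noun it modifies."""
    return BLIND_SENSES.get(SEMANTIC_CLASS.get(noun), "sense undetermined")

print(sense_of_blind("cat"))          # blind cat
print(sense_of_blind("fury"))         # blind fury
print(sense_of_blind("handwriting"))  # blind handwriting
```

The same shape of lookup could serve move + Nin versus move + Nanim pers: the sense is a function of the semantic class of the co-occurring noun, which is exactly what treating distribution on the level of semantic classes amounts to.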
Transformational Analysis
Transformational analysis in lexicological investigations may be defined as the re-patterning of various distributional structures in order to discover difference or sameness of meaning in practically identical distributional patterns.
As distributional patterns are in a number of cases polysemantic, transformational procedures are of help not only in the analysis of semantic sameness / difference of the lexical units under investigation but also in the analysis of the factors that account for their polysemy.
For example, if we compare the two compound words dogfight and dogcart, we shall see that the distributional pattern of stems is identical and may be represented as n + n. The meaning of these words, broadly speaking, is also similar, as the first of the stems modifies, describes, the second, and we understand these compounds as ‘a kind of fight’ and ‘a kind of cart’ respectively. The semantic relationship between the stems, however, is different, and hence the lexical meaning of the words is also different. This can be shown by means of a transformational procedure which shows that a dogfight is semantically equivalent to ‘a fight between dogs’, whereas a dogcart is not ‘a cart between dogs’ but ‘a cart drawn by dogs’.
Word-groups of identical distributional structure when re-patterned also show that the semantic relationship between words, and consequently the meaning of the word-group, may be different. For example, in word-groups consisting of a possessive pronoun followed by a noun, e.g. his car, his failure, his arrest, his goodness, etc., the relationship between his and the following noun is in each instance different, which can be demonstrated by means of transformational procedures.
his car (pen, table, etc.) may be re-patterned into he has a car (a pen, a table, etc.) or in a more generalised form may be represented as A possesses B.
his failure (mistake, attempt, etc.) may be represented as he failed (was mistaken, attempted) or A performs B, which is impossible in the case of his car (pen, table, etc.).
his arrest (imprisonment, embarrassment, etc.) may be re-patterned into he was arrested (imprisoned, embarrassed, etc.) or A is the goal of the action B.
his goodness (kindness, modesty, etc.) may be represented as he is good (kind, modest, etc.) or B is the quality of A.
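The four re-patternings above can be sketched as a small lookup that pairs each noun with its underlying relation and transform. This is a minimal illustration; the inventory of nouns and the wording of the transforms are assumptions taken directly from the examples in the text:

```python
# Each noun from the 'his N' examples, paired with the semantic relation
# between 'his' and the noun, and the corresponding re-patterned sentence.
PATTERNS = {
    "car":      ("A possesses B",              "he has a car"),
    "failure":  ("A performs B",               "he failed"),
    "arrest":   ("A is the goal of action B",  "he was arrested"),
    "goodness": ("B is the quality of A",      "he is good"),
}

def repattern(phrase):
    """Re-pattern a 'his N' word-group into its underlying predication."""
    noun = phrase.split(" ", 1)[1]  # drop the possessive pronoun
    return PATTERNS[noun]

print(repattern("his car"))
print(repattern("his arrest"))
```

The point of the sketch is that distributionally identical phrases (possessive pronoun + noun) map onto four different relations, which is exactly what the transformational procedure brings out.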
It can also be inferred from the above that two phrases which are transforms of each other (e.g. his car -> he has a car; his kindness -> he is kind, etc.) are correlated in meaning as well as in form.
Regular correspondence and interdependence of different patterns is viewed as a criterion of sameness or difference of meaning. When the direction of conversion was discussed, it was pointed out that the transformational procedure may be used as one of the criteria enabling us to decide which of the two words in a conversion pair is the derived member.
Transformational analysis may also be described as a kind of translation. If we understand by translation the transference of a message by different means, we may assume that there exist at least three types of translation: 1. interlingual translation, or translation from one language into another, which is what we traditionally call translation; 2. intersemiotic translation, or the transference of a message from one kind of semiotic system to another (for example, a verbal message may be transmitted as a flag message by hoisting the proper flags in the right sequence); and 3. intralingual translation, which consists essentially in rewording a message within the same language — a kind of paraphrasing. Thus, e.g., the same message may be transmitted by the following transforms: his work is excellent -> his excellent work -> the excellence of his work.
The rules of transformational analysis, however, are rather strict and should not be identified with paraphrasing in the usual sense of the term. There are many restrictions both on the syntactic and the lexical level. An exhaustive discussion of these restrictions is unnecessary and impossible within the framework of the present textbook. We shall confine our brief survey to the transformational procedures commonly used in lexicological investigation. These are as follows:
1. permutation — the re-patterning of the kernel transform on condition that the basic subordinative relationships between words and the word-stems of the lexical units are not changed. In the example discussed above the basic relationships between lexical units and the stems of the notional words are essentially the same: cf. his work is excellent -> his excellent work -> the excellence of his work -> he works excellently.
2. replacement — the substitution of a component of the distributional structure by a member of a certain strictly defined set of lexical units, e.g. replacement of a notional verb by an auxiliary or a link verb, etc. Thus, in the two sentences of identical distributional structure He will make a bad mistake and He will make a good teacher, the verb to make can be replaced by become or be only in the second sentence (he will become / be a good teacher) but not in the first (*he will become a bad mistake), which is a formal proof of the intuitively felt difference in the meaning of the verb to make in each of the sentences. In other words, the impossibility of identical transformations of distributionally identical structures is a formal proof of the difference in their meaning.
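The replacement procedure can be sketched as a substitution test against a stock of attested sentences. The set below stands in for native-speaker acceptability judgements and is of course an illustrative assumption:

```python
# A toy stand-in for native-speaker acceptability judgements.
ATTESTED = {
    "He will make a bad mistake",
    "He will make a good teacher",
    "He will become a good teacher",
}

def replacement_possible(sentence, old="make", new="become"):
    """Replacement test: substitute a link verb for 'make' and check
    whether the result is still an acceptable sentence."""
    return sentence.replace(old, new) in ATTESTED

print(replacement_possible("He will make a good teacher"))  # substitution succeeds
print(replacement_possible("He will make a bad mistake"))   # substitution fails
```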
3. addition (or expansion) — may be illustrated by applying the procedure of addition to the classification of adjectives into two groups: adjectives denoting inherent and non-inherent properties. For example, if to the two sentences John is happy (popular, etc.) and John is tall (clever, etc.) we add, say, in Moscow, we shall see that *John is tall (clever, etc.) in Moscow is utterly nonsensical, whereas John is happy (popular, etc.) in Moscow is a well-formed sentence. Evidently this may be accounted for by the difference in meaning between adjectives denoting inherent (tall, clever, etc.) and non-inherent (happy, popular, etc.) properties.
4. deletion — a procedure which shows whether one of the words is semantically subordinated to the other or others, i.e. whether the semantic relations between the words are identical. For example, the adjective in the word-group red flowers may be deleted without making the sentence nonsensical, cf. I love red flowers -> I love flowers, whereas I hate red tape cannot be transformed into either I hate tape or I hate red.
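Like replacement, the deletion procedure can be sketched as an acceptability test. Again, the small set of well-formed sentences is an illustrative stand-in for speaker judgements:

```python
# A toy stand-in for native-speaker acceptability judgements.
WELL_FORMED = {
    "I love red flowers",
    "I love flowers",
    "I hate red tape",
}

def deletion_possible(sentence, word="red "):
    """Deletion test: drop the modifier and check whether the result is
    still an acceptable sentence. Failure (as with 'red tape') signals
    that the word-group is an indivisible, idiomatic unit."""
    return sentence.replace(word, "") in WELL_FORMED

print(deletion_possible("I love red flowers"))  # free word-group
print(deletion_possible("I hate red tape"))     # idiomatic word-group
```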
Transformational procedures may be of use in practical classroom teaching as they bring to light the so-called sentence paradigm or to be more exact different ways in which the same message may be worded in modern English.
It is argued, e.g., that certain paired sentences, one containing a verb and one containing an adjective, are understood in the same way, e.g. sentence pairs where there is form similarity between the verb and the adjective.
Cf.: I desire that. . . — I am desirous that . . .; John hopes that . . . — John is hopeful that . . .; His stories amuse me . . . — are amusing to me; Cigarettes harm people — are harmful to people.
Such sentence pairs occur regularly in modern English, are used interchangeably in many cases and should be taught as two equally possible variants.
It is also argued that certain paired sentences, one containing a verb and one a deverbal noun, are also a common occurrence in Modern English. Cf., e.g., I like jazz -> my liking for jazz; John considers Mary’s feelings -> John’s consideration of Mary’s feelings.
Learning a foreign language one must memorise, as a rule, several commonly used structures with similar meaning. These structures make up what can be described as a paradigm of the sentence, just as a set of forms (e.g. go — went — gone, etc.) makes up a word paradigm. Thus, a sentence of the type John likes his wife to eat well makes up part of the sentence paradigm which may be represented as follows: John likes his wife to eat well -> John likes his wife eating well -> what John likes is his wife eating well, etc., as any sentence of this type may be re-patterned in the same way.
Transformational procedures are also used as will be shown below in componental analysis of lexical units.
Componental Analysis
In recent years problems of semasiology have come to the fore in the research work of linguists of different schools of thought, and a number of attempts have been made to find efficient procedures for the analysis and interpretation of meaning. An important step forward was taken in the 1950s with the development of componental analysis. In this analysis linguists proceed from the assumption that the smallest units of meaning are sememes (or semes) and that sememes and lexemes (or lexical items) are usually not in one-to-one but in one-to-many correspondence. For example, in the lexical item woman several components of meaning, or sememes, may be singled out, namely ‘human’, ‘female’, ‘adult’. This one-to-many correspondence may be represented as woman -> ‘human’, ‘female’, ‘adult’.
The analysis of the word girl would also yield the sememes ‘human’ and ‘female’, but instead of the sememe ‘adult’ we shall find the sememe ‘young’, distinguishing the meaning of the word woman from that of girl. The comparison of the results of the componental analysis of the words boy and girl would also show a difference in just one component, i.e. the sememes denoting ‘male’ and ‘female’ respectively.
It should be pointed out that componental analysis deals with individual meanings. Different meanings of polysemantic words have different componental structure. For example, the comparison of two meanings of the noun boy (1. a male child up to the age of 17 or 18 and 2. a male servant (any age) esp. in African and Asian countries) reveals that though both of them contain the semantic components ‘human’ and ‘male’ the component ‘young’ which is part of one meaning is not to be found in the other. As a rule when we discuss the analysis of word-meaning we imply the basic meaning of the word under consideration.
In its classical form componental analysis was applied to the so-called closed subsystems of vocabulary, mostly to kinship and colour terms. The analysis as a rule was formalised only as far as the symbolic representation of meaning components is concerned. Thus, e.g., in the analysis of kinship terms, the component denoting sex may be represented by A — male, Ā — female; B may stand for one generation above ego, B̄ for the generation below ego; C for direct lineality, C̄ for indirect lineality, etc. Accordingly the clusters of symbols ĀBC and ABC represent the semantic components of the words mother and father respectively.
In its more elaborate form componental analysis also proceeds from the assumption that word-meaning is not an unanalysable whole but can be decomposed into elementary semantic components. It is assumed, however, that these basic semantic elements which might be called semantic features can be classified into several subtypes thus ultimately constituting a highly structured system. In other words it is assumed that any item can be described in terms of categories arranged in a hierarchical way; that is a subsequent category is a subcategory of the previous category.
The most inclusive categories are parts of speech — the major word classes are nouns, verbs, adjectives, adverbs. All members of a major class share a distinguishing semantic feature and involve a certain type of semantic information. More revealing names for such features might be “thingness” or “substantiality” for nouns, “quality” for adjectives, and so on.
All other semantic features may be classified into semantic markers — semantic features which are present also in the lexical meaning of other words — and distinguishers — semantic features which are individual, i.e. which do not recur in the lexical meaning of other words. Thus, the distinction between markers and distinguishers is that markers refer to features which the item has in common with other items, while distinguishers refer to what differentiates an item from all other items. The componental analysis of the word spinster, for example, runs: noun, count-noun, human, adult, female, who has never married. Noun is, of course, the part of speech, i.e. the most inclusive category; count-noun is a marker: it represents a subclass within nouns and refers to the semantic feature which the word spinster has in common with all other countable nouns (boy, table, flower, idea, etc.) but which distinguishes it from all uncountable nouns, e.g. salt, bread, water, etc.; human is also a marker, which refers the word spinster to a subcategory of countable nouns, i.e. to nouns denoting human beings; adult is another marker, pointing at a specific subdivision of human beings into adult and young (not grown up). The word spinster possesses still another marker — female — which it shares with such words as woman, widow, mother, etc., and which represents a subclass of adults, namely females. Last comes the distinguisher who has never married, which differentiates the meaning of the word from that of other words which have all the other semantic features in common. Thus, the componental analysis may be represented as a hierarchical structure with several subcategories, each of which stands in a relation of subordination to the preceding subclass of semantic features.
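The hierarchical analysis of spinster can be sketched as an ordered path from the most inclusive category down to the distinguisher. The data structure below is a minimal illustration of the marker / distinguisher split described above:

```python
# The componental analysis of 'spinster' as an ordered hierarchy:
# each step is tagged as part of speech, marker, or distinguisher.
SPINSTER = [
    ("part of speech", "noun"),
    ("marker",         "count-noun"),
    ("marker",         "human"),
    ("marker",         "adult"),
    ("marker",         "female"),
    ("distinguisher",  "who has never married"),
]

def markers(analysis):
    """Markers are the recurrent features shared with other lexical items."""
    return [value for kind, value in analysis if kind == "marker"]

def distinguishers(analysis):
    """Distinguishers are the features unique to this item."""
    return [value for kind, value in analysis if kind == "distinguisher"]

print(markers(SPINSTER))
print(distinguishers(SPINSTER))
```

The ordering of the list matters: each feature subcategorises the one before it, which is exactly the relation of subordination the text describes.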
Componental analysis with the help of markers and distinguishers may be used in the analysis of hyponymic groups. In the semantic analysis of such groups we find that they constitute a series with an increasingly larger range of inclusion. For example, bear, mammal, animal represent three successive markers in which bear is subordinated to mammal and mammal to animal. As one ascends the hierarchical structure the terms generally become fewer and the domains larger, i.e. the shift is from greater specificity to greater generic character. Words that belong to the same step in the hierarchical ladder are of the same degree of specificity and have at least one marker — one component of meaning — in common. They constitute a series where the relationship between the members is essentially identical.
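The bear / mammal / animal hierarchy can be sketched as inclusion of feature sets: a hyponym carries every component of its superordinate plus at least one more. The feature names below are illustrative assumptions introduced only for the sketch:

```python
# Illustrative feature sets: each hyponym inherits all the features of
# its superordinate and adds at least one of its own.
FEATURES = {
    "animal": {"animate"},
    "mammal": {"animate", "mammalian"},
    "bear":   {"animate", "mammalian", "ursine"},
}

def is_hyponym_of(word, superordinate):
    """word is a hyponym of superordinate if it carries all of the
    superordinate's semantic components (set inclusion)."""
    return FEATURES[superordinate] <= FEATURES[word]

print(is_hyponym_of("bear", "mammal"))   # bear is subordinated to mammal
print(is_hyponym_of("mammal", "animal")) # mammal to animal
print(is_hyponym_of("animal", "bear"))   # but not the other way round
```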
Componental analysis is also used in the investigation of the semantic structure of synonyms. There is always a certain component of meaning which makes one member of a synonymic set different from any other member of the same set. Thus, though brave, courageous, fearless, audacious, etc. are all traditionally cited as making up a set of synonymic words, each member of the set has a component of meaning not to be found in any other member of this set. In a number of cases this semantic component may be hard to define; nevertheless, it is intuitively felt by all native speakers. For instance, that is how the difference in the meaning components of the words like, enjoy, appreciate, etc. is described. Analysing the difficulty of finding an adequate translation for John appreciates classical music; he doesn't appreciate rock, the author argues that “... appreciate is not quite the same as enjoy or like or admire or take an interest in, though quite a number of the semantic components making up their meaning are identical. To appreciate is to be attuned to the real virtue X is presupposed to have, and not to appreciate is to fail to be attuned. It is not to deny that X has virtues. In short, appreciate seems to presuppose in the object qualities deserving admiration in a way that like, admire, and so on do not.”
Componental analysis is currently combined with other linguistic procedures used for the investigation of meaning. For example, contrastive analysis supplemented by componental analysis yields very good results, as it brings out not only the lack of one-to-one correspondence between the semantic structures of correlated words (the number and types of meanings) but also the difference between the seemingly identical and correlated meanings of contrasted words.
For example, the correlated meanings of the Russian word толстый and the English words thick, stout, buxom, though they all denote, broadly speaking, the same property (of great or specified depth between opposite surfaces), are not semantically identical, because the Russian word толстый is used to describe both humans and objects indiscriminately (cf. толстая женщина, толстая книга), whereas the English adjective thick does not contain the semantic component human. Conversely, stout in this meaning does not contain the component object (cf. a thick book but a stout man). The English adjective buxom possesses, in addition to human, the sex component, namely female, which is to be found neither in the English stout nor in the Russian толстый. It can be inferred from the above that analysis into the components animate / inanimate, human, male / female reveals the difference in the comparable meanings of correlated words of two different languages — Russian and English — and also the difference in the meaning of synonyms within the English language.
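The contrastive comparison of толстый with thick / stout / buxom can be sketched with the same set machinery: intersection shows what the correlated meanings share, symmetric difference shows where they diverge. The selectional components below are those named in the discussion above:

```python
# Selectional components of the correlated words, as given in the text:
# what kind of referent each adjective can describe.
APPLIES_TO = {
    "толстый": {"human", "object"},
    "thick":   {"object"},
    "stout":   {"human"},
    "buxom":   {"human", "female"},
}

def shared_and_distinct(w1, w2):
    """Return (shared components, diverging components) for two words."""
    a, b = APPLIES_TO[w1], APPLIES_TO[w2]
    return a & b, a ^ b

print(shared_and_distinct("толстый", "thick"))  # share 'object', differ in 'human'
print(shared_and_distinct("stout", "buxom"))    # share 'human', differ in 'female'
```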
The procedure of componental analysis is also combined with the semantic analysis through collocability or co-occurrence as the components of the lexical (or the grammatical) meaning may be singled out by the co-occurrence analysis. It is assumed that certain words may co-occur in a sentence, others may not. The co-occurrence of one word with another may be treated as a clue to the criterial feature of the concept denoted by the word. Thus, for example, if one learns that a puffin flies, one can assume that a puffin is animate and is probably a bird or an insect.
A close inspection of the words with which prepositions occur brings out the components of their meaning. Thus, e.g., down the stairs is admitted but *down the day is not; during the day is admitted but *during the stairs is not. We may infer that the feature time is to be found in the meaning of the preposition during but not in that of down. We can also see that some prepositions share the features of space and time because of their regular co-occurrence with nouns denoting space and time, e.g. in the city.
A completion test in which the subjects have a free choice of verb to complete the sentences shows that, though in the dictionary definitions of a number of verbs one cannot find any explicit indication of constraints which would point at a semantic component, e.g. animate — inanimate, human — nonhuman, etc., the co-occurrence of the verbs with certain types of nouns functioning as subjects can be viewed as a reliable criterion of such components. For example, in sentences of the type The cows — through the fields, The boys — through the fields, etc., various verbs were offered: stray, wander, run, lumber, walk, hurry, stroll, etc. The responses of the subjects showed, however, the difference in the components of the verb-meanings: for example, for all of them stroll is constrained to human subjects, though no dictionary includes this component (‘of human beings’) in the definition of the verb.
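The inference step in this completion test can be sketched as follows: if a verb co-occurs only with human subjects, we posit a covert +human component for it. The subject classes and the tiny set of observed pairings are illustrative assumptions standing in for the experimental responses:

```python
# Illustrative classification of the subject nouns from the test frames.
SUBJECT_CLASS = {"cows": "animal", "boys": "human"}

# Subject nouns each verb was actually paired with by the subjects
# (a toy stand-in for the responses in the completion test).
OBSERVED = {
    "stray":  {"cows", "boys"},
    "stroll": {"boys"},
}

def inferred_constraint(verb):
    """If a verb co-occurs exclusively with human subjects, infer a
    covert +human component; otherwise treat it as unconstrained."""
    classes = {SUBJECT_CLASS[noun] for noun in OBSERVED[verb]}
    return "human" if classes == {"human"} else "unconstrained"

print(inferred_constraint("stroll"))  # constrained to human subjects
print(inferred_constraint("stray"))   # no such constraint
```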
The semantic peculiarities of the subcategories within nouns are revealed in their specific co-occurrence. For example, the combination of nouns with different pronouns specifies the sex of the living being denoted by the noun. Cf. The baby drank his bottle and The baby drank her bottle where the sex-component of the word-meaning can be observed through the co-occurrence of the noun baby with the possessive pronouns his or her.
Componental analysis may also be arrived at through transformational procedures. It is assumed that sameness / difference of transforms is indicative of sameness / difference in the componental structure of the lexical unit. The example commonly analysed is the difference in the transforms of structurally identical lexical units, e.g. puppydog, bulldog, lapdog, etc. The difference in the semantic relationship between the stems of the compounds, and hence the difference in the components of the word-meaning, is demonstrated by the impossibility of the same type of transforms for all these words. Thus, a puppydog may be transformed into ‘a dog (which) is a puppy’; a bulldog, however, is not ‘a dog which is a bull’, neither is a lapdog ‘a dog which is a lap’. A bulldog may be transformed into ‘a bull-like dog’ or ‘a dog which looks like a bull’, but a lapdog is not ‘a dog like a lap’, etc.
Generally speaking, one may assume that practically all classifications of lexical units implicitly presuppose the application of the theory of semantic components. For instance, the classification of nouns into animate — inanimate, human — nonhuman proceeds from the assumption that there is a common semantic component found in such words as, e.g., man, boy, girl, etc., whereas this semantic component is nonexistent in other words, e.g. table, chair, pen, etc., or dog, cat, horse, etc.
Thematic classification of vocabulary units for teaching purposes is in fact also based on componental analysis.
Thus, e.g., we can observe the common semantic component in the lexico-semantic group entitled ‘food-stuffs’ and made up of such words as sugar, pepper, salt, bread, etc., or the common semantic component ‘non-human living being’ in cat, lion, dog, tiger, etc.
