
Re: [glosalist] Re: natural semantic metalanguage and Glosa

William T. Branch (<bill@...>) on August 18, 2006

Kevin Smith wrote:

--- In glosalist@yahoogroups.com, “William T. Branch” wrote:

I’ve been studying the NSM or natural semantic metalanguage

If this claim holds up (and so far it seems sound to me), then the implications are profound for all languages, and especially for constructed languages. It means a simple word list won’t and can’t cut it for an artificial language, since many words don’t really translate well across languages.

Could you explain what you mean? Based on what you wrote and a quick read of the NSM material, I would have reached the opposite conclusion. If there really is a finite set of primitives from which all other words can be derived, it would seem that a language could have a vocabulary of just those primitives and still function (although awkwardly). Something like Toki Pona, I suppose.

A language of 1,000 words, built around these universal primitives plus extra words for convenience, seems very practical based on this research.

Kevin


Sorry about the ambiguity. By “simple word list” I didn’t mean “small word list”; I meant a word list with no definitions other than a direct word-for-word translation from language A to language B. A word list like that should work only for the sixty-three primes. All other words are subject to cultural variations in understanding, as well as to differences in the underlying predicate behavior of verbs. All words beyond the primes should be defined using the sixty-three primes, where reasonably possible.

An example from the website is “umbrella”. It is defined using the primes plus the words “hand” and “rain”. “Hand” is considered a level-one molecule because it can be described directly from the primes. “Rain” is currently two levels up and thus a level-two molecule. In theory, all words can be explicated directly from the primes, but the resulting explications would get tedious. Certain prime combinations are used so often in the explications of other words that it makes sense to first define them as a word, then use that word in further definitions. “Hand”, “water”, and “rain” are such words: “rain” uses the primes plus the molecule “water”, while “water” is explicated directly from the primes.
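To make the molecule levels concrete, here is a little Python sketch. This is my own toy illustration; the prime list and the definitions in it are placeholders, not real NSM explications, and the level numbers simply fall out of which definitions mention which words.

```python
# Toy illustration only: a lexicon where each non-prime word lists the
# words used in its explication. The primes and definitions here are
# invented placeholders, not genuine NSM explications.

PRIMES = {"someone", "something", "do", "happen", "good",
          "big", "small", "above", "inside", "move"}

# word -> the set of words appearing in its explication
EXPLICATIONS = {
    "hand":     {"something", "do", "someone", "move"},   # primes only
    "water":    {"something", "good", "inside"},          # primes only
    "rain":     {"water", "happen", "above"},             # uses molecule "water"
    "umbrella": {"something", "hand", "rain", "above"},   # uses "hand" and "rain"
}

def level(word: str) -> int:
    """Level 0 = prime; otherwise one more than the deepest word used."""
    if word in PRIMES:
        return 0
    return 1 + max(level(part) for part in EXPLICATIONS[word])

for w in ("hand", "water", "rain", "umbrella"):
    print(w, level(w))   # hand 1, water 1, rain 2, umbrella 3
```

Under this way of counting, “umbrella” comes out at level three, since it depends on the level-two molecule “rain”.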

It is my (so far unproven) suspicion that Glosa written by people who understand English can easily be understood by others who understand English, while those who don’t, even if they understood much of the written Glosa, would remain confused by the way the words are used and by the intended final meanings. I suspect this is the case even when the author carefully keeps his writing free of idioms and adheres perfectly to the grammar set out by Gaskell.

The reason I suspect this is that at the time Interglossa was being developed, much of what we now know in linguistics had not yet been discovered. The two main areas I’m specifically referring to are LFG (Lexical Functional Grammar) and NSM. Modern explications of NSM words actually take LFG into account as well. One recent example is a flurry of debate over the explication of the English word “left”, as in “John left to go to the store”. The final explication had to show that the subject of the predicate must be a person for that particular use of the word. The explication could then continue with further variants for other subject categories.
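To show the kind of condition I mean, here is a schematic sketch; the subject categories and the explication texts are invented for illustration, not the actual explications from that debate.

```python
# Schematic only: choosing an explication variant for "left" by the
# category of the subject. Categories and texts are invented.

def explicate_left(subject_category: str) -> str:
    variants = {
        # "John left to go to the store": the subject must be a person
        "person": ("X was somewhere; X wanted to not be in this place; "
                   "X did something; because of this, X was not in this place anymore"),
        # a separate variant would be needed for, e.g., "the train left"
        "vehicle": ("X was somewhere; X moved; "
                    "after this, X was not in this place"),
    }
    return variants.get(subject_category,
                        "no explication yet for this subject category")

print(explicate_left("person"))
```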

Furthermore, the way LFG, NSM, and Chomsky’s deep grammar operate when a native speaker uses language is invisible to the speaker, yet these things are tacitly at work in every sentence the speaker utters. A non-native speaker exposed to the language eventually picks up enough of this to get by, without ever becoming consciously aware of it.

I think this is why artificial languages are more difficult to pick up than it seems they should be. A word list just isn’t enough; actual exposure to the language in use lets a learner pick up its hidden side. The problem with Glosa, and most if not all other artificial languages, is an inherent catch-22. If an actual body of written or spoken examples must exist to transmit this side of the language to learners, then authors can of course just start building one up. But those authors have no choice except to use the lexicon and the LFG the way their own language uses them, and for the most part the fact that they are doing this is invisible to them, as it is to other speakers of their native language. What you end up with is the author’s own language with word substitutions and a different surface grammar.

I think part of what is necessary in a modern artificial language such as Glosa is a reference showing how each verb’s LFG works, as well as how each word is defined in terms of the language’s own primitives. This can be done in a way a non-linguist understands; after all, language is pretty natural. The LFG and NSM of Glosa could also be defined in a very culturally neutral way. A sketch of what such a reference entry might look like follows.
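Here is a rough sketch of a possible entry format; the field names, the category labels, and the explication text are all my own invention, and the Glosa word “ki” (“go”) is just the example I happen to remember from Gaskell’s lists, so treat it as a placeholder if I’ve misremembered it.

```python
# Hypothetical entry format: each verb carries an LFG-style argument
# frame plus an NSM-style explication. All content here is invented
# for illustration.

GLOSA_LEXICON = {
    "ki": {                                   # "go", used here as a placeholder
        "pos": "verb",
        # LFG-style frame: what may fill each grammatical role
        "args": {
            "SUBJ": {"category": "someone or something that can move"},
            "OBL":  {"category": "a place", "optional": True},
        },
        # NSM-style explication, using only primes and low-level molecules
        "explication": [
            "X ki Y =",
            "  before this, X was somewhere",
            "  X did something",
            "  because of this, after this, X was somewhere else (near Y)",
        ],
    },
}

def describe(word: str) -> str:
    """Render an entry as plain text that a non-linguist could read."""
    entry = GLOSA_LEXICON[word]
    roles = ", ".join(f"{role}: {spec['category']}"
                      for role, spec in entry["args"].items())
    return f"{word} ({entry['pos']}) | roles: {roles}\n" + "\n".join(entry["explication"])

print(describe("ki"))
```

An entry like this would give a learner both the verb’s argument behavior and its meaning in primes at a glance, without requiring any linguistics background.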

Like you, Kevin, I believe that a very small, carefully chosen lexicon is all that is necessary for an auxiliary language. This is desirable because it lightens the load on the language learner. The load should instead fall on the authors using the language to write: they must try to keep their writing within the minimal lexicon while keeping their text effortless to read. Where this is not possible, and the reading becomes tedious because of the small lexicon, the author can draw on an expanded word list, or take the liberty of adding his own words, with the caveat that these are always defined within the text the first few times they are used. Such an in-body definition can be slipped in cleverly, without sounding like a formal definition; the reader may not even be aware of having just picked up a new word.

Just my two cents.

-bill
