
Re: [glosalist] Re: natural semantic metalanguage and Glosa

Robin Fairbridge Gaskell <drought-breaker@...> on August 22, 2006

At 12:44 PM 8/18/06, Bill Branch grafo:

Kevin Smith grafo:

— In glosalist@yahoogroups.com, “William T. Branch” grafo:

I’ve been studying the NSM or natural semantic metalanguage

If this claim holds up (and so far it seems sound to me), then the implications are profound for all languages, and especially for constructed languages. It means a simple word list won’t and can’t cut it for an artificial language, since many words don’t really translate across very well.

Could you explain what you mean? Based on what you wrote and a quick read of the NSM material, I would have reached the opposite conclusion. If there really is a finite set of primitives from which all other words can be derived, it would seem that a language could have a vocabulary of just those primitives and still function (although awkwardly). Something like Toki Pona, I suppose.

A language of 1000 words, built around these universal primitives, plus extra words for convenience, seems very practical, based on this research.

Kevin

Sorry about the ambiguity. By “simple word list” I didn’t mean “small word list”. I meant word lists with no definitions other than a direct word-for-word translation from language A to B. A simple word list should only work for the sixty-three primitives. All other words are subject to cultural variations and understandings, as well as to differences in the underlying predicate behavior of the verbs. All words beyond the primes should be defined using the sixty-three primes where reasonably possible. *** Yes, such a “simple, short” word list of the 63 primitives - as they manifest in Glosa - would need its own distinctive name. And, yes, I did imagine writing a basic Glosa dictionary with the words defined in Glosa. I suspect that, using the NSM/LFG theories, there could emerge a tension between a technical, clinical derivation from primes - in brackets at the start of each entry - and a more folksy definition in conversational style, for ordinary folk to make sense of, following it. Remember, Clark and Ashby were restricted by space (and money) constraints when printing their paper-based dictionaries; when, however, we release electronic versions of similar learners’ dictionaries for people in the ‘First World’, “spelling out” the definition in phrasal form costs us, and the reader, virtually nothing extra. But now that we have the NSM theory of primes and derivative words, we ought to use it … and spell out our meanings in our dictionaries.
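As a rough sketch of how such a two-part dictionary entry might be stored electronically - the field names and the sample entry text below are invented for illustration, not taken from any actual Glosa dictionary:

    # A minimal sketch of a learner's dictionary entry pairing a technical,
    # prime-based explication with a conversational gloss. All field names
    # and sample text are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class DictEntry:
        word: str          # the Glosa headword
        explication: str   # definition built, as far as possible, from the primes
        gloss: str         # folksy, conversational definition for ordinary readers

    entry = DictEntry(
        word="pluvi",  # intended as the Glosa word for "rain"; treat as illustrative
        explication="water; it moves from above to below",  # invented sample
        gloss="water falling from the sky; rain",
    )

    # An electronic dictionary can afford to print both parts in full.
    print(f"{entry.word}: ({entry.explication}) {entry.gloss}")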

An example from the website is “umbrella”. This is defined using the primes plus the words “hand” and “rain”. “Hand” is considered to be a level-one molecule because it can be described directly from the primes. “Rain” is currently two levels up, and thus a level-two molecule. In theory, all words can be explicated directly from the primes, but their interpretation could get tedious. Certain prime combinations would be used so often as part of the explication of other words that it makes sense to first define them as a word, then use that word in further definitions. “Hand”, “water” and “rain” are such words. “Rain” uses the primes plus the molecule “water”, while “water” is directly explicated from the primes. *** The new language about language in relation to both NSM and LFG (primes, molecules, etc.) needs to be spelled out clearly and quickly [not discussed], showing how it relates to Glosa, so that Glosa-pe might ‘click in’ to the new explanations of how language, including Planned Language, works, and be where the action is - rather than spending all of our time chasing up, and defending Glosa against, challenges from the Esperanto establishment.
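A rough sketch of these prime/molecule levels as a small data structure - the prime list and dependency table below are invented stand-ins, and the level computed for “umbrella” is my own inference from the description above:

    # Primes sit at level 0; a defined word sits one level above the deepest
    # molecule used in its explication. All data here is illustrative.
    PRIMES = {"I", "you", "something", "big", "do", "happen"}  # stand-ins for the 63

    # Each defined word maps to the non-prime molecules its explication uses.
    EXPLICATION_USES = {
        "hand": [],                   # explicated directly from primes -> level 1
        "water": [],                  # likewise level 1
        "rain": ["water"],            # uses the molecule "water" -> level 2
        "umbrella": ["hand", "rain"], # uses level-1 and level-2 molecules -> level 3
    }

    def level(word: str) -> int:
        if word in PRIMES:
            return 0
        molecules = EXPLICATION_USES[word]
        if not molecules:
            return 1
        return 1 + max(level(m) for m in molecules)

    for w in ("hand", "water", "rain", "umbrella"):
        print(w, level(w))  # hand 1, water 1, rain 2, umbrella 3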

It is my (so far unproven) suspicion that Glosa written by those who understand English can be understood easily by others who understand English. Others, while understanding much of written Glosa, would remain confused by the way the words are used and the intended final meanings. This, I suspect, is the case even when the author carefully keeps his writings free of idioms and adheres perfectly to the grammar set out by Gaskell. *** Break away from National Language patterns!!! E.g. when hitting my first heavy translation job, producing a Glosa version of “The Three Bears”, I learnt a lot through trial and error:
     ! I tried word-for-word, and even phrase-by-phrase, translation from an English-language version of the story = FAIL. It was too stilted.
     @ Next I tried interpreting the English-language ideas into simple Glosa-language sentences. This seemed to be working until I tried reading over my first Glosa paragraph, and found I had to keep back-tracking to find out what I was saying. Perhaps, I thought, I was gabbling on too quickly.
     # After that, I decided to read over my work one sentence at a time. I failed to pick up immediately the meaning of much of what I had said. And reading silently, hearing the Glosa in my head, did not seem to be the best way of catching the meaning I had intended.
     $ Next came the noisy approach. I read a sentence out aloud, to discover if I could understand what I had heard. And that was when I realised I was starting to get somewhere: with most sentences, I found I did not instantly get the gist of what I heard.
     % So then came the “reading aloud/instant understanding” test: as soon as I found a sentence that was not instantly understandable when I heard it read out, it was scrapped and recast. Through this process, I broke my patterns of thinking in English while writing in Glosa.

     ^ Subsequently, I found I was moving over to "thinking in Glosa" while writing in Glosa: I considered that this realisation was a hidden pearl of wisdom; it helped me to observe the 'fault' of "thinking in English while writing in Glosa" in others.
     & It was at about this stage in the process that I started to question the syntax(es) I was using. Just how different is the syntax of English from the syntax of Glosa? And, if Glosa is supposed to be a language with a Syntax-based Grammar, ought there not to be a standardised (and correct) syntax for Glosa?

     Needless to say, all of this thinking was going on against a backdrop of Ron Clark's saying that we ought not to impose a rigid set of grammatical rules on Glosa learners, because we would be likely to confuse them, and lose them. I held out for the thought that a Syntax-based Grammar should have a good, clean syntax to start with. NOTE: while the idea of good syntax features in neither NSM nor LFG, I hope that, through Glosa joining them, it might.

[Bill Branch] The reason I suspect this is that at the time Interglossa was being developed, much of what we know now in linguistics had not been discovered. The two main areas I’m specifically referring to are LFG (Lexical Functional Grammar) and NSM. Modern explications of NSM words are actually taking LFG into account as well. One recent example of this is a flurry of debate over the explication of the English word “left”, as in “John left to go to the store”. The final explication had to show that the subject of the predicate must be a person for that particular way the word was being explicated. The explication could continue with further variants for various other subject categories.
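A sketch of how explication variants conditioned on the subject's category might be represented - the category names and explication texts below are invented, not the published explication of “left”:

    # Variants of one word's explication, keyed by the category of the
    # predicate's subject. Texts and categories are invented examples.
    EXPLICATIONS_OF_LEFT = {
        "person": ("this someone was somewhere; this someone did something; "
                   "after this, this someone was not in this place any more"),
        # further variants could be added, e.g. for "the train left the station"
    }

    def explicate_left(subject_category: str) -> str:
        try:
            return EXPLICATIONS_OF_LEFT[subject_category]
        except KeyError:
            raise ValueError(
                f"no explication of 'left' for subject category {subject_category!r}"
            )

    print(explicate_left("person"))  # fits "John left to go to the store"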

Furthermore, the way LFG, NSM, and Chomsky’s deep grammar are used when a native speaker uses language is invisible to the speaker. But these things are tacitly used in every sentence the speaker utters. A non-native speaker exposed to this eventually picks up enough of it to get by, without ever being consciously aware of it.

I think this is why artificial languages are more difficult to pick up than it seems they should be. A word list just isn’t enough. Actual exposure to the use of the language allows a learner to pick up the hidden side of the language. The problem with Glosa, and most if not all other artificial languages, is that there is an inherent catch-22. If an actual body of written or spoken examples of the language must exist to transmit this side of the language to the learner, then authors can, it seems, just start building one up. But these authors have no choice but to use the lexicon and LFG the way their own language uses them, and for the most part the fact that they are doing this would be invisible to them, as it is to other speakers of their native language. What you end up with is the author’s own language with word substitutions and a different surface grammar. *** While I am basically agreeing with Bill here, I’d say that Ron Clark did say that Glosa had its own inherent logic: this was what I sought. Right from the start, I knew that Glosa had a life of its own … or none at all. Yes, I really did strive, right from the beginning, to speak in pure Glosa.

I think some of what is necessary in a modern artificial language such as Glosa is a reference to how each verb’s LFG works, as well as to how the word is defined by the language’s own primitives. This can be done in a way that a non-linguist understands. After all, language is pretty natural. The LFG and NSM of Glosa could be defined in a very culturally neutral way as well. *** Wow! Have we broken the Syntax Barrier to reach the previously hidden Semantic Level of Glosa, at last?
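As a sketch of what such a culturally neutral lexicon entry might look like, combining a verb's argument frame with a prime-based definition in plain terms - the field names, the sample verb and its definition text are all invented for illustration:

    # One lexicon entry recording both an LFG-style argument frame and a
    # definition in the language's own primitives, worded for non-linguists.
    from dataclasses import dataclass

    @dataclass
    class VerbEntry:
        word: str
        arguments: list[str]   # who/what the verb needs, in everyday terms
        prime_definition: str

    dona = VerbEntry(
        word="dona",  # intended as the Glosa word for "give"; treat as illustrative
        arguments=["someone who gives", "something given", "someone it is given to"],
        prime_definition=("someone does something with something; "
                          "because of this, another someone now has it"),
    )

    print(f"{dona.word}: needs {', '.join(dona.arguments)}")
    print(f"  meaning: {dona.prime_definition}")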

I, like you, Kevin, believe that a very small, carefully chosen lexicon is all that is necessary for an auxiliary language. This is desirable to lighten the load on the language learner. The load should be heavier on the authors using the language to write. They must try to keep their writings within the minimal lexicon while keeping the reading of their text effortless. Where this is not possible, and the reading becomes tedious because of the small lexicon, the author can use an expanded word list, or take the liberty of adding their own words, with the caveat that these are always defined within the text the first few times the word is used. This in-body definition can be cleverly put in without sounding like a formal definition. The reader may even be unaware that they have just picked up a new word. *** Well, lexicon size is a rather elastic sort of thing. At the start, when someone is learning the mechanics of Glosa, they need only enough “nuts and bolts” to be able to put sentences together. But people with good memories and a love of the beauty of language will not be satisfied with using a “skeleton language” for long. They will seek ways of finding sophistication and nuance in what they are saying, and will automatically seek to extend the lexicon. Likewise, if Ron Clark’s original raison d'être for Glosa, as a ‘Language for Science’, is still valid, then I can see satellite “subject matter lexicons” forming to surround and interlock with the global, basic level of Glosa’s implementation. And I’d say that a Basic Global Implementation of Glosa would include not only Primes, but also prime/prime combinations and derivative words to at least two levels of derivation. Thinking ‘Metalanguage’, at the very pointy end of the Glosa spectrum, I could imagine an extremely technical, functional language … something like a computer machine code … which was little more than Glosa Primes, designed as a humanly-readable control language for machines, or for machine-machine interaction.
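A sketch of the kind of author's tool this layered-lexicon idea suggests: check a draft against the core word list and flag anything that would need an in-body definition or a satellite lexicon. The word lists below are tiny invented stand-ins, not Glosa's actual core vocabulary:

    # Flag words in a draft that fall outside the core lexicon, so the
    # author knows what to define in-body or move to a satellite lexicon.
    import re

    CORE_LEXICON = {"mi", "tu", "an", "fe", "id", "pa", "fu", "ne"}  # tiny stand-in

    def out_of_core(text: str) -> set[str]:
        words = set(re.findall(r"[a-z']+", text.lower()))
        return words - CORE_LEXICON

    draft = "Mi pa grafo u bibli"  # illustrative Glosa-like sentence
    print(sorted(out_of_core(draft)))  # ['bibli', 'grafo', 'u']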

Just my two cents.

-bill *** And thanks from me to Bill and the Glosa-pe for the opportunity to have a good think,

Robin Gaskell

P.S. Previous thinks on my part have not done very much for the cause of human communication; maybe things will be different this time. Now I have to do some research to find out what the Hell NSM and LFG really mean. R.


