[comp.ai] Early Language Learning & Ancient Language

osborn@ultima.cs.uts.oz (Tomasso Osborn) (05/25/90)

baez@ucr.edu (john baez) writes:

>In article <2246@bruce.cs.monash.OZ.AU> frank@bruce.cs.monash.OZ.AU (francis John breen) writes:
>>From article <17809@ultima.cs.uts.oz>, by dcorbett@ultima.cs.uts.oz (Dan Corbett):

>>> ...  Three different consonants in Navajo may get mapped to only one
>>> consonant in my brain, making it impossible for me to _ever_ tell the
>>> difference among them.  The structure of my brain was determined by my
>>> English-speaking parents, and cannot now be altered by memorizing foreign
>>> words.

>This seems very unlikely unless you're somehow specially
>disabled.  Don't blame your brain!  :-)  It's very flexible
>if one works hard.  I used to lisp, but was trained out of
>it by a speech therapist.  I can now play things on the piano
>that would have been impossible at one time.  I would only
>think the mental task you describe is impossible for you
>if you said that you'd spend several months of daily practice
>on it without making any headway.

What John Baez is talking about is learning to do things (being trained
out of a lisp, or playing the piano) - producing actions, effective stuff.
Dan is talking about something way up at the front end of perception
(mapping speech sounds to consonants). [Think how you would engineer
this - pre-filters are not available, so do you re-wire, or try to post-
process?]

The former (effective) learning is easily accommodated by neural plasticity,
but how far this can go towards the front end is something I haven't
seen discussed (definitively). Seems like the front end gets pretty well
hard-wired. Of course, after you have (adequate) categories wired in,
the cortex(es) have a lot of scope for adaptation (even learning new
languages if they fit the old phonetics OK). But what if the front end
can't cope?
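
To put the many-to-one worry in concrete terms, here is a minimal sketch
(Python, with invented feature values - real phonetic measurements would
be multi-dimensional and messier) of a hard-wired front end: a nearest-
prototype classifier whose categories were fixed by one language, fed
tokens from another.

import math

# A hard-wired "front end": one prototype per category, fixed by the
# native language, as a point in a toy 2-D acoustic feature space.
# All numbers are invented for illustration.
english_prototypes = {
    "k": (1.0, 0.2),
    "g": (1.0, 0.8),
    "t": (0.2, 0.2),
}

def perceive(features, prototypes):
    # Map an incoming sound to the nearest wired-in category.
    return min(prototypes, key=lambda p: math.dist(features, prototypes[p]))

# Three distinct (hypothetical) Navajo consonants that all land nearest
# the same English prototype: the three-way contrast is collapsed
# before anything downstream ever sees it.
navajo_tokens = [(0.9, 0.15), (1.05, 0.25), (1.1, 0.1)]
print([perceive(tok, english_prototypes) for tok in navajo_tokens])
# -> ['k', 'k', 'k']

Re-wiring would mean moving the prototypes; post-processing has nothing
to work with, because the contrast is already gone by then.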

Tomasso.
-- 
Tom Osborn,                        "HEY! There is something more important
School of Computing Sciences,       than money... 
University of Technology, Sydney,                ...it's called breathing."
PO Box 123 Broadway 2007,  AUSTRALIA.

news@athena.mit.edu (News system) (05/25/90)

Many cognitive psych studies have been done (in humans and animals) concerning
how early learning affects later learning.  The classic studies in babies
looked at the ability to differentiate sounds.  After a certain age, if the
babies have not been exposed to those sounds, they are no longer able to
learn to differentiate ('l' and 'r' in Japanese children, for example).  The
extreme example is cats raised in darkness.  When they are eventually taken
out of the darkness, they are blind.  They never "learn" to see.
From: ccimino@hstbme.mit.edu (c cimino)
Path: hstbme.mit.edu!ccimino

Chris Cimino, MD   MGH Lab of Computer Science

sdutcher@netxdev.DHL.COM (Sylvia Dutcher) (05/25/90)

In article <1990May22.122714.14445@hri.com> rolandi@sparc9.hri.com (Walter Rolandi) writes:
|In article <17809@ultima.cs.uts.oz>, dcorbett@ultima.cs.uts.oz (Dan
|Corbett) writes:
|>
|> It is also possible that there are consonants in Navajo which my brain cannot
|> hear.  Three different consonants in Navajo may get mapped to only one
|> consonant in my brain, making it impossible for me to _ever_ tell the
|> difference among them.  The structure of my brain was determined by my
|> English-speaking parents, and cannot now be altered by memorizing foreign
|> words.
|> 
|>
|
|This is commonly said to be the case but I wouldn't underestimate the power
|of learning. 
|
|Does anyone remember the 40-something US Army deserter who recently came
|back to the US after spending over 20 years in East Germany?  He was born
|in the US to native speakers of American English and lived in the US
|throughout the thought-to-be-crucial developmental periods, up to his late
|teens.
|
|Yet, after 20-some years of hearing and speaking nothing but German, the guy
|spoke broken English with a profound German accent.  I saw him interviewed
|on a morning news show and he could scarcely respond to the simplest questions.
|
|If the structure of one's brain is determined by cultural effects and the
|structure of one's brain determines what you can and cannot hear and say,
|what happened with this guy?  

I think Walter is missing the point here.  If you can hear the different sounds 
of a foreign language, and replicate them, then you should be able to speak 
that language.  If you can't _hear_ the differences, then you will not be
able to replicate those sounds.  The ability to differentiate sounds appears
to be present in all babies, but fades at an early age so only the baby's
native language(s) become wired into the brain.

There was an interesting program on TV that showed how a baby could hear
sounds in a language (some Eskimo dialect, I think) that were not
perceptible to adults who did not speak the language.  

As for your example, complete immersion in a language is the best way to
learn it.  You get to the point where you think in the language, and after
20 years, I'm not surprised this person lost most of his English.  He 
probably made a point of sticking strictly to German to hide his identity.
Sylvia Dutcher			    *   The likeliness of things
NetExpress Communications, Inc.	    *   to go wrong is in direct
1953 Gallows Rd.		    *   proportion to the urgency
Vienna, Va. 22180		    *   with which they shouldn't.

kp@uts.amdahl.com (Ken Presting) (05/25/90)

In article <1990May25.142052.16989@athena.mit.edu> ccimino@hstbme.mit.edu.UUCP (c cimino) writes:
>. . .   After a certain age, if the babies
>have not been exposed to those sounds, they are no longer able to learn to 
>differentiate ('l' and 'r' in Japanese children, for example).

Negative results are very difficult to prove in cases like these.  I was
told by a speech researcher about some animal studies on color vision
in cats.  For a long time it was believed that cats were color blind.
Nobody could get a cat to respond differentially to differently colored
stimuli.

It turned out that cats have pretty good color vision - they were just
bored to tears by the dumb experiments, and didn't *care* about the
stimuli.


Ken Presting

daugher@cs.tamu.edu (Dr. Walter C. Daugherity) (05/26/90)

In article <1990May25.142052.16989@athena.mit.edu> ccimino@hstbme.mit.edu.UUCP (c cimino) writes:
>....  After a certain age, if the babies
>have not been exposed to those sounds, they are no longer able to learn to 
>differentiate ('l' and 'r' in Japanese children, for example).  ...

Nonetheless, Japanese speakers can pronounce the difference when it is
important: when Emperor Akihito was Crown Prince, I never heard a Japanese
speaker say "Clown Plince,"
which would have been extremely disrespectful.  I also find it ironic that
some provincial Americans who make jokes about the Japanese 'l' and 'r' seem
unaware of how they (the Americans) pronounce "colonel"!

Walter Daugherity
Texas A&M University
daugher@cs.tamu.edu (Internet)
uunet!cs.tamu.edu!daugher (uucp)
DAUGHER@TAMVENUS (BITNET)

aipdc@castle.ed.ac.uk (Paul D. Crowley) (05/26/90)

Please, everybody, stop throwing absolutes around! "Impossible" is
rarely applicable to the human brain.  Try "hard". 

It _is_ possible to learn to differentiate.  English speakers will
generally hear voiced and unvoiced "l" the same way, but if someone says
"No, listen.  Blank, Clank, Blank.  Hear the difference?" then after a
while you will. 

There is a part of the brain that is designed for the job of learning
languages which stops working at age 7 or so.  But we can still learn
languages using other parts of our brain - it's just harder, and never
quite as natural. 

-- 
\/ o\ Paul Crowley aipdc@uk.ac.ed.castle
/\__/ "Trust me, I know what I'm doing" - Sledge Hammer

rschmidt@silver.ucs.indiana.edu (roy schmidt) (05/26/90)

>In article <1990May25.142052.16989@athena.mit.edu> ccimino@hstbme.mit.edu.UUCP (c cimino) writes:
>>....  After a certain age, if the babies
>>have not been exposed to those sounds, they are no longer able to learn to 
>>differentiate ('l' and 'r' in Japanese children, for example).  ...
>
This is one of those age-old myths.  There is no such 'l' and 'r' thing
in oriental languages!  In Japanese, the 'r' sound is very soft, very
close to an 'l'.  It is true that orientals have trouble with these
sounds in English, but only because in Japanese and Chinese the l and r
are used only as initial consonants, and it confuses them to no end to
fit such sounds in unaccustomed places.  Still, with a good teacher
(which many of them lack) they do learn to make the sounds.
	In Chinese, there are both normal 'l' sounds and soft 'r'
sounds.  Some Americans have a lot of trouble making the 'r' sound, and
think it's an 'l' when they hear it....  Now who has the problem?

Roy

P.S.  Forgot my sig on the last posting:
Roy Schmidt
Indiana University
Graduate School of Business
Bloomington              rschmidt@silver.ucs.indiana.edu 

lee@uhccux.uhcc.Hawaii.Edu (Greg Lee) (05/27/90)

From article <1990May25.142052.16989@athena.mit.edu>, by news@athena.mit.edu (News system):
>...  After a certain age, if the babies
>have not been exposed to those sounds, they are no longer able to learn to 
>differentiate ('l' and 'r' in Japanese children, for example).  The extreme
>example is cats raised in darkness.  When they are eventually taken out of the
>darkness, they are blind.  They never "learn" to see.

If what you mean is that after having learned to ignore the l/r
difference in Japanese (as they must), it takes Japanese speakers a
while to distinguish l/r in a language like English which requires it,
that's true.  There's nothing surprising about that.  If you mean that
adult Japanese speakers cannot learn to distinguish l/r, given practice
(and motivation), that is just not true.  There is the possibility that
they do not distinguish the two in the same way as native English
speakers, once they have acquired the distinction.  The comparison
to cats raised in darkness is really absurd.

				Greg, lee@uhccux.uhcc.hawaii.edu

cs4g6at@maccs.dcss.mcmaster.ca (Shelley CP) (05/28/90)

  The statement that people cannot produce sounds which are not differentiated
in their own language seems odd to me.  I have "spoken" with people who have
been deaf from birth and yet have been trained to talk out loud so that they
are at least understood.  They have never "heard" a word of any language but
can still speak well enough!  It requires competent training and many hours
of hard work, but it is possible.

  Perhaps the reason most people who learn a foreign language speak with a
"thick" accent, or never learn to produce the foreign sounds properly, is
that they do not spend enough time on it, or their teachers are not equipped
to overcome this.  Speech acquisition is naturally very fast in infants, but
adults can be trained, by the right methods, to speak foreign languages with
only a slight accent - it just takes much longer.  The fact that most
people don't do this does not mean it isn't possible.


-- 
******************************************************************************
* Cameron Shelley   *    Return Path: cs4g6at@maccs.dcss.mcmaster.ca         *
******************************************************************************
*  /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\ *

news@athena.mit.edu (News system) (05/29/90)

The comment about the difficulty of proving negative results is well taken.
Still, the example of cats raised in darkness shows that at least one
demonstration of the brain's non-plasticity exists, however much people
would like to believe they can transcend their limits.
From: ccimino@hstbme.mit.edu (c cimino)
Path: hstbme.mit.edu!ccimino

mmt@client1.DRETOR.UUCP (Martin Taylor) (05/29/90)

Greg Lee says:

If what you mean is that after having learned to ignore the l/r
difference in Japanese (as they must), it takes Japanese speakers a
while to distinguish l/r in a language like English which requires it,
that's true.  There's nothing surprising about that.  If you mean that
adult Japanese speakers cannot learn to distinguish l/r, given practice
(and motivation), that is just not true.  There is the possibility that
they do not distinguish the two in the same way as native English
speakers, once they have acquired the distinction.  The comparison
to cats raised in darkness is really absurd.

=============

It is interesting to notice that the English titles and abstracts of
Japanese-language journals often have the l-r confusion as a typographical
error.  Likewise, my psycholinguist wife, Korean born (where l and r are
supposed to be distinct) but English-speaking for nearly 30 years, makes
occasional l-r errors both in speech and in writing.  It isn't easy to
change your basic sound discriminations.

But it is possible.  One of the founders of modern psychoacoustics, Wilson
P. (Spike) Tanner, in the late 50s did an experiment to test his idea that
people could learn to discriminate anything for which there was a consistent
discriminatory feedback.  He devised a signal pair that no-one initially
could discriminate between, and asked subjects to make a forced-choice
discrimination (with correctness feedback after every trial) between them.
To begin with, everyone got 50% correct, and this went on for some time--
for one subject, for 40 days or more at (?) an hour a day of listening.
But eventually, everyone learned how to hear the difference between the
two signals.  Once the score went up a bit, to, say, 60%, it would very
quickly (within a day or two) go up to 100% correct discrimination.  So
people can learn to discriminate sounds that they initially claim to
be identical.
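
For the shape of that result, here is a toy simulation (Python; every
parameter is invented to mimic the reported pattern, not Tanner's data):
the listener's sensitivity sits at zero for many sessions and then grows,
and forced-choice percent correct follows it from chance to near-perfect.

import math, random

random.seed(1)

def p_correct(d_prime):
    # 2AFC percent correct from signal detection theory:
    # P = Phi(d'/sqrt(2)), with Phi the standard normal CDF,
    # which simplifies to 0.5 * (1 + erf(d'/2)).
    return 0.5 * (1 + math.erf(d_prime / 2.0))

def sensitivity(day, onset=40, rate=0.5):
    # Hypothetical d': flat at zero until the cue is found, then growing.
    return 0.0 if day < onset else rate * (day - onset)

for day in (1, 20, 39, 41, 43, 46):
    p = p_correct(sensitivity(day))
    hits = sum(random.random() < p for _ in range(400))   # 400 trials/"day"
    print("day %2d: %5.1f%% correct" % (day, 100.0 * hits / 400))
# Roughly: ~50% through day 39, ~64% on day 41, ~85% on day 43, ~98% by
# day 46 - chance for a long time, then a fast climb, as described above.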

The situation with babies and the native language seems to be a bit different.
Up to the age of about 10 months, babies seem to be able to discriminate
sound pairs from just about any language, but after that they lose the
ability for discriminations not in their own language while they enhance
the ability for discriminations that are in their own language.  All the
same, adults *can* make the discriminations they seem to have lost, given
appropriate testing procedures.  What seems to happen is a drift into
"categorical perception" which might be described as "hearing" the label
for a sound rather than its acoustic pattern.
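
That "hearing the label" idea is easy to caricature in code.  A sketch
(Python; the boundary and steepness are made-up numbers for a voice-
onset-time continuum): the same 10 ms of acoustic difference is invisible
inside a category and decisive across the boundary.

import math

def p_heard_as_pa(vot_ms, boundary=25.0, steepness=0.4):
    # Probability of hearing "pa" rather than "ba" along a voice-onset-
    # time continuum: a logistic curve around the category boundary.
    return 1.0 / (1.0 + math.exp(-steepness * (vot_ms - boundary)))

def label(vot_ms):
    return "pa" if p_heard_as_pa(vot_ms) > 0.5 else "ba"

# Only the label is reported, so a 10 ms step within a category is
# "inaudible" while the same step across the boundary flips the percept.
print(label(5), label(15))    # -> ba ba
print(label(20), label(30))   # -> ba pa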

And no, I don't know what this is doing on comp.ai, except that there has
been a long string of speculation on the topic.

Ref: Psycholinguistics: Learning and Using Language, Taylor and Taylor,
Prentice-Hall, 1990, pp240-242. (Plug;-)
-- 
Martin Taylor (mmt@zorac.dciem.dnd.ca ...!uunet!dciem!mmt) (416) 635-2048
"Viola, the man in the room doesn't UNDERSTAND Chinese. Q.E.D."  (R. Kohout)

edwards@cogsci.berkeley.edu (Jane Edwards) (05/29/90)

In article <1990May28.184332.25927@athena.mit.edu> ccimino@hstbme.mit.edu.UUCP (c cimino) writes:
>The comment about the difficulty of proving negative results is well taken.
>Still, the example of cats raised in darkness shows that at least one
>demonstration of the brain's non-plasticity exists, however much people
>would like to believe they can transcend their limits.

Ignoring for the moment the ideological stand, there is a purely practical
matter of which assumption is likely to generate the most careful scientific
inquiry.  Just because there may be developmental constraints on plasticity
in the visual system doesn't mean the same is true of other capabilities,
especially not those which are known to be more diffusely represented, i.e.,
less strictly regionalized.  (Note that speech recognition and production
involve more than just the mechanical speech centers.)  It seems almost 
mystical and far too easy scientifically to assume that just because there 
are some developmental constraints in one system there must necessarily be
the same kinds of constraints in others, especially when the domains and
tasks are so different.  Careful empirical work is needed, while keeping 
an open mind on what is to be found until the data are actually in.   

Concerning the differences in domains and tasks, a particularly important 
issue relevant to the r/l discussion, in my opinion, is the fact that the 
speech sounds which we categorize when listening to speech are not the 
same sounds from word to word nor from speaker to speaker, but rather involve
a constant re-calibration with reference to the context (e.g., word) in which 
they are embedded and the size and shape of the vocal tract of the person
producing them.  Pretty amazing process.  This process is not isomorphic
with such classification tasks as identifying vertical vs. horizontal lines, 
or other physically definable visual stimuli.  Adjustments and calibrations
are also needed in the latter, but without the necessity of engaging 
language understanding mechanisms relating to semantics, pragmatics, possible
alternatives in the lexicons, etc., i.e., aspects of processing which seem
likely to be more diffusely represented in the brain and hence not as
vulnerable to localized limits on plasticity.
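
One way to see how much work that re-calibration does: later phonetics
practice often approximates it with per-speaker formant normalization,
for instance Lobanov's method of z-scoring each speaker's formant values.
A sketch, with rough illustrative F1 values in Hz rather than measured
data:

from statistics import mean, stdev

def lobanov(formants):
    # Z-score one speaker's formant values: (F - mean) / stdev.
    m, s = mean(formants), stdev(formants)
    return [(f - m) / s for f in formants]

# "The same" three vowels from a small and a large vocal tract sit at
# very different absolute frequencies...
speaker_a_f1 = [300.0, 500.0, 700.0]
speaker_b_f1 = [390.0, 650.0, 910.0]

# ...but coincide once each speaker is normalized: the vowel contrast
# survives while the speaker difference is factored out.
print([round(z, 2) for z in lobanov(speaker_a_f1)])   # [-1.0, 0.0, 1.0]
print([round(z, 2) for z in lobanov(speaker_b_f1)])   # [-1.0, 0.0, 1.0]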

The necessary research has not yet been done to substantiate the claims
one hears that adults are biologically unable to acquire non-native phonologies,
so it seems to me important to keep the issue open to encourage the type of
rigorous research which is needed, namely the type which pushes language 
learners to their limits rather than assuming in advance that they are 
biologically incapable. This would involve such things as bombarding them 
with native speaker input (rather than the sizeable amount of non-native 
input they get from classmates in foreign language courses), under 
high-motivation circumstances (presumably using people who are not timid 
about leaving behind the hallmarks of their cultural identity 
while speaking the foreign language, or of feeling "silly" when making all
of these new sounds which are not yet habitual to them), and, finally,
monitoring separately their ability to *distinguish* the range of
variation associated with an r vs. an l (or whatever sounds are relevant),
from their ability to *produce* them in their own speech.  Until this
is done, it seems more practical in the long run in terms of progress in
our understanding to leave the issue open and avoid a stance of premature
biological determinism.

-Jane Edwards (edwards@cogsci.berkeley.edu)

zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) (05/30/90)

>(Shelley CP) writes:
> [...people deaf from birth who learn to speak...]

You bring up a good point here.  However, I think the heart of the problem
lies in learning to _hear_ subtle differences in foreign languages.  As far
as speaking goes, we can train the muscles in the vocal tract to reproduce
the desired formations, and a talented-teacher/diligent-student pair can
succeed with the help of imitating examples.  It is very difficult, and 
especially so for deaf people, who do not receive the immediate, direct
feedback that a hearing person does; but it can be done, and I do not
doubt that physical imitation plays an important role.  

It seems that learning to _hear_ subtle differences in another language
suffers because there is no way to "watch-and-mimic" the process.  I don't
think we can exercise the same intentional control over which "sub-nets"
"recognize" a given sound pattern as we might consciously direct the
muscles of the vocal tract.  In fact, I imagine the main difficulty arises
from suspending the disbelief that there is a difference at all, much less
reliably identifying it.  I think bio-feedback with printed traces of the
voice-prints would probably speed the process.  (Any takers? :)  In a 
similar vein, I have heard that Asiatics learn to distinguish quarter-tones,
which Westerners fail to identify.  

Conversely, I have also heard that some Spanish-speaking people in Latin
America swear that there is a distinction between their pronunciation of 
"b" and "v" -- despite the fact that voice-prints show no such difference!

 
---Paul...   (you know, 'the impossible just takes a little longer'...)

rwojcik@bcsaic.UUCP (Rick Wojcik) (06/01/90)

Paul Steven Mccarthy (PM) writes:

PM>...However, I think the heart of the problem
PM>lies in learning to _hear_ subtle differences in foreign languages.  As far
PM>as speaking goes, we can train the muscles in the vocal tract to reproduce
PM>the desired formations, and a talented-teacher/diligent-student pair can
PM>succeed with the help of imitating examples.  It is very difficult, and 
PM>especially so for deaf people, who do not receive the immediate, direct
PM>feedback that a hearing person does; but it can be done, and I do not
PM>doubt that physical imitation plays an important role.  

Martin Taylor brought up a very nice point about the difference between
*hearing* and the *categorical perception* of speech sounds.  What we are
really touching on here is the theory of the psychological phoneme, which was
first developed by Jan Baudouin de Courtenay in the late 19th century.  It is
still fashionable among linguists to say that speakers of a given language
don't 'hear' allophonic distinctions, but the fact is that they hear
everything.  They just don't use the perceived differences to categorize
speech phonemically (where phonemes may be taken essentially as minimal phonic
segments used to distinguish words in memory).  The best known article on the
subject is Edward Sapir's 1933 "The Psychological Reality of Phonemes."
Chomsky and Halle, despite claims to the contrary, never resurrected the old
view of the phoneme.  The only modern theory that currently recognizes it is
David Stampe's theory of Natural Phonology (cf. Donegan and Stampe, "The Study
of Natural Phonology," in D. Dinnsen, ed., Current Approaches to Phonological
Theory, Indiana U. Press, 1979).  The Moscow School of Phonology (cf. M. V.
Panov.  1979.  Sovremennyi russkii iazyk.  Fonetika.  Moscow: Vysshaia Shkola)
has a similar, but allegedly 'nonpsychological,' approach to phonological
representations.

Your point about pronunciation through imitation is particularly important.
Perhaps the major function of the phonological system is its role in muscular
coordination during speech.  It is what allows speakers to speed up and slow
down their articulation (via deletions, insertions, assimilations, etc., of
sounds grouped into rhythmic prosodic units).  The phonemic representation of
a word remains constant, while the pronunciation may vary in a bewildering
variety of ways.  Phonology plays several roles in perception.  It makes
variant pronunciations fairly predictable to listeners.  But it also
emphasizes perceptual saliency in emphatic speech ("Stee-rike!" for 'strike'
and "nah-yun" for 'nine').  Speech recognition/understanding systems currently
tend to treat phonological variation as 'noise' in the signal, and there have
so far been only a few serious attempts to address it in the computational
literature.
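
A concrete instance of variation that is rule-governed rather than noise
is American English flapping, where /t/ and /d/ between vowels surface as
the same flap (Rick's 'reading' example below).  A sketch of applying
such a rule (Python; the "phonemic" spellings are crude stand-ins for
real transcriptions):

import re

VOWELS = "aeiouAEIOU"

def flap(phonemic):
    # Rewrite intervocalic t/d as a flap (written 'D' here).
    return re.sub("([%s])[td]([%s])" % (VOWELS, VOWELS), r"\1D\2", phonemic)

# The phonemic form stays constant in memory; the surface form varies.
for word in ("riting", "riding", "atom"):    # crude phonemic spellings
    print(word, "->", flap(word))
# riting -> riDing, riding -> riDing, atom -> aDom: two phonemes, one
# surface sound.  A recognizer that models the rule can undo it; one
# that treats it as noise cannot.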

You should, however, distinguish nonlinguistic from linguistic 'imitation.'
Adult speakers can articulate any speech sound produced in any human language.
The problem is that they can't use those sounds in talking, because that
activity requires precise movements in the speech tract that are governed by a
'phonological system.'  Physically, anyone with a normal mouth can trill [r].
Try getting them to do it while pronouncing foreign words--well, the brain
just doesn't let it happen so easily.  Spanish speakers make the initial sound
in 'the'--a voiced interdental fricative--every time they try to pronounce a
[d] that occurs between vowels.  But they can't pronounce it easily when they
are trying to pronounce the English word 'the' because the initial sound is
not a [d] between vowels.  Similarly, Americans have trouble pronouncing the
flapped Spanish [r] in 'pero' ('but').  Nevertheless, they make an almost
identical sound when they pronounce the [d] between vowels in 'reading.'

PM>Conversely, I have also heard that some Spanish-speaking people in
PM>Latin America swear that there is a distinction between their pronunciation
PM>of "b" and "v" -- despite the fact that voice-prints show no such
PM>difference! 

Actually, one of my Spanish teachers (a graduate student) at Columbia insisted
on this, and she was from Gibraltar, of all places.  Of course, there is no
truth to this impression, which seems to be based solely on spelling.  I had a
very amusing time in one class, where the teacher, who was Cuban, consistently
confused alveolar and velar nasals (the final sounds in "bin" and "bing").
These sounds vary freely at the ends of syllables and are phonologically
indistinct in Cuban Spanish.  One American student, in frustration, finally
demanded to know whether she was saying "naranja" or "nara[ng]ja" for
'orange.'  She thought he was being perverse because she couldn't hear the
difference in either pronunciation.  They never did straighten out the
problem, but it served to point up why an education in linguistics ought to be
mandatory for all language teachers.
-- 
Rick Wojcik   csnet:  rwojcik@atc.boeing.com	   
              uucp:   uw-beaver!bcsaic!rwojcik 

cs4g6at@maccs.dcss.mcmaster.ca (Shelley CP) (06/01/90)

In article <1990May30.000350.20070@caen.engin.umich.edu> zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) writes:
>>(Shelley CP) writes:
>> [...people deaf from birth who learn to speak...]
>
>You bring up a good point here.  However, I think the heart of the problem
>lies in learning to _hear_ subtle differences in foreign languages.  As far
>as speaking goes, we can train the muscles in the vocal tract to reproduce
>the desired formations, and a talented-teacher/diligent-student pair can
>succeed with the help of imitating examples.  It is very difficult, and 
>especially so for deaf people, who do not receive the immediate, direct
>feedback that a hearing person does; but it can be done, and I do not
>doubt that physical imitation plays an important role.  
>
>It seems that learning to _hear_ subtle differences in another language
>suffers because there is no way to "watch-and-mimic" the process.  I don't
>think we can exercise the same intentional control over which "sub-nets"
>"recognize" a given sound pattern as we might consciously direct the
>muscles of the vocal tract.  In fact, I imagine the main difficulty arises
>from suspending the disbelief that there is a difference at all, much less
>reliably identifying it.  I think bio-feedback with printed traces of the
>voice-prints would probably speed the process.  (Any takers? :)  In a 
>similar vein, I have heard that Asiatics learn to distinguish quarter-tones,
>which Westerners fail to identify.  
>
Actually, the same training process used for deaf people could be (and, I
believe, has been) applied to the non-deaf.  Deaf people are trained using
voice 'spectrographs'.  A hearing person who is being trained to produce new
sound types must also simultaneously train himself to hear the distinction
in what he is producing.  All hearing speakers use this feedback circuit
during normal speech anyway, to correct any errors they make in their own
language.  Certainly, this kind of learning takes place at a very high
cognitive level and is therefore much slower, but this is precisely what
has brought Homo sapiens to its present 'intelligent' condition.  This is
the quibble I would raise over the 'kittens raised in the dark' example
which came up earlier.  The whole thing is obviously difficult and subject
to individual variation, but not impossible, as some people have stated.
'Nuff said. :>
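
For what that feedback circuit might look like as a program, here is a
sketch (Python with numpy/scipy; the synthetic one-tone "vowels" stand in
for real recordings) that compares a learner's attempt to a target
spectrogram and reports a distance to drive down:

import numpy as np
from scipy.signal import spectrogram

RATE = 16000
t = np.arange(RATE) / RATE    # one second of "speech"

def spec(signal):
    # Power spectrogram, normalized so overall loudness drops out.
    _, _, s = spectrogram(signal, fs=RATE)
    return s / s.sum()

def feedback(a, b):
    # Smaller = spectra more alike; this number is the visual feedback
    # standing in for the hearing the deaf speaker doesn't have.
    return float(np.abs(spec(a) - spec(b)).sum())

target  = np.sin(2 * np.pi * 300 * t)    # target resonance
attempt = np.sin(2 * np.pi * 420 * t)    # learner's first try
closer  = np.sin(2 * np.pi * 310 * t)    # a later, better try

print("first try: %.3f" % feedback(target, attempt))   # larger distance
print("later try: %.3f" % feedback(target, closer))    # smaller distance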

>Conversely, I have also heard that some Spanish-speaking people in Latin
>America swear that there is a distinction between their pronunciation of 
>"b" and "v" -- despite the fact that voice-prints show no such difference!
>
Well, I would think then that perhaps the perceived difference exists (in
reality) at a higher level than just the phonetic.  The phonological 
context of the 'b' and 'v' may be different and just come out the same
in speech.  In this case, perception may have been abstracted away from
the actual stimulus.  Mind you, I don't know anything about Spanish
phonology....

> 
>---Paul...   (you know, 'the impossible just takes a little longer'...)


-- 
******************************************************************************
* Cameron Shelley   *    Return Path: cs4g6at@maccs.dcss.mcmaster.ca         *
******************************************************************************
*  /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\    /\\ *