
Revolutionary approach to learning languages

chelovek
Diglot
Senior Member
United States
Joined 6090 days ago

413 posts - 461 votes 
5 sounds
Speaks: English*, French
Studies: Russian

 
 Message 25 of 129
29 January 2009 at 9:02pm | IP Logged 
Cainntear wrote:

As I said, there is no course that I'm aware of that builds up the speaker's lexicon by the gradual, controlled introduction of new phonemes -- therefore there is no baseline for him to begin from.

If he had genuinely explored this variable, it would have been newsworthy, as it would have meant developing such a course -- a potentially massive undertaking, involving quite a lot of statistical work. It would have been mentioned. In fact, it would probably have been considered even more significant than what the actual article talks about.


What on earth are you talking about? As I told you before, stop trying to extrapolate so much from the article. The fact of the matter is you don't even know the specifics of what was being studied, and yet you're sitting here arrogantly talking about what he must have done, and must not have done.

That said, it's clear from the article that he was interested in whether the brain grew tissue from lots of audio exposure, and whether it was subsequently easier to learn new words. You certainly wouldn't need to design a specialized, epic course to test this.

More than likely, the experiment involved an uncommon foreign language that most participants would never have heard before (e.g. Icelandic). They'd take a baseline measure of brain tissue (probably an MRI scan of brain volume) and have a control group and a test group. The control group would be given a period of time to learn a set of words and then be tested, while the test group would listen to the audio sample for a period of time and then go through the same study/testing procedure.

They would likely go through this pattern for several weeks, do another brain scan, and compare the test results of participants in each group.

Have you become so obsessed with the "course" aspect of this just because you couldn't understand his remark about an hour of listening being more beneficial than an hour of studying French text? It was ambiguous, sure, but when there are two possible meanings and one can be ruled out by common sense, why get hung up on it?

Edited by chelovek on 29 January 2009 at 9:09pm

1 person has voted this message useful



Cainntear
Pentaglot
Senior Member
Scotland
linguafrankly.blogsp
Joined 6014 days ago

4399 posts - 7687 votes 
Speaks: Lowland Scots, English*, French, Spanish, Scottish Gaelic
Studies: Catalan, Italian, German, Irish, Welsh

 
 Message 26 of 129
30 January 2009 at 3:13am | IP Logged 
chelovek wrote:
Have you become so obsessed with the "course" aspect of this just because you couldn't understand his remark...

No, I'm responding because you were extremely rude to another poster about him not understanding it.

If it's not clear, it's the writer's fault, not the reader's.
1 person has voted this message useful



Raчraч Ŋuɲa
Triglot
Senior Member
New Zealand
Joined 5821 days ago

154 posts - 233 votes 
Speaks: Bikol languages*, Tagalog, English (C1)
Studies: Spanish, Russian, Japanese

 
 Message 27 of 129
30 January 2009 at 6:41am | IP Logged 
I think everyone can agree that no one amongst us has read the actual thesis, so we are all limited to referencing the article only.

I've re-read the article and I've changed my mind: I think Sulzberger did make that claim. I'm inclined to believe that the article writer did not make a mistake in attributing to Sulzberger the claim that frequent "listening" is the best way to learn a language, even when it is incomprehensible, because Sulzberger added: "A lot of language teachers may not accept that." If he were merely saying that listening is critical, that remark would be odd. But if he actually said that the best way to learn a language is through frequent exposure to its sound patterns, even if you haven't a clue what it all means, that would make a lot of sense. The probable reason he is not quoted saying so directly is the writer's technique: he probably reported what the researcher said faithfully, but did not quote him again to avoid redundancy. Yes, I am speculating, as everyone has been.

Now, I still do not agree with this article, not just for claiming that "the best way to learn a language is through frequent exposure to its sound patterns (read: listening)—even if you haven't a clue what it all means" but also for claiming that "Our ability to learn new words is directly related to how often we have been exposed to the particular combinations of the sounds which make up the words." There is actually research that contradicts this statement.


Quote:

"Children can easily hear how the same word can be pronounced in different ways. We might say, 'Is that your kiiiiiitty"' or, 'Show me the kitty.' In English, we're still talking about the same cat. But children have to figure this out. In other languages, like Japanese or Finnish, those two versions of "kitty" could mean completely different things. Our study showed that 18-month-olds have already learned this and apply that knowledge when learning new words."

Psychologists tested vowel duration ("kitty" versus "kiiiitty") in three experiments comparing Dutch- and English-learning 18-month-olds. Children were shown two different toys. With one toy, researchers repeated a word dozens of times, naming it a "tam." The other toy was named too, with the same label only with the vowel acoustically longer in duration ("taam").

Dutch children, learning a language that includes words differentiated by how long the vowel is pronounced, interpret the variations as meaningful and learn which word goes with each object. English speakers ignored the elongation of vowel sounds.

English learners did not somehow lack the cognitive power to learn both words. They can hear the difference between the words, and they succeed on words that really are different in English ("tam" vs. "tem"). The difference arose from the phonological generalizations children had already made from their brief experience with English: "tam" and "taam", like "kitty" and "kiiiitty", mean the same thing. Dutch children, on the other hand, interpreted vowel duration as lexically contrastive in keeping with the properties of their language.


The English children can hear the differences in vowel length and had been exposed to them before the experiment, but they still had not learned the second word. Listening to incomprehensible sounds will fare even worse. New words are learned not by listening alone, but by simultaneously working out their meanings. Meanings must be matched to words (series of sounds), and minimal differences in sound help us distinguish between words. To differentiate sounds, a person must also figure out whether a difference in pronunciation is lexically contrastive, and so whether it should be ignored or retained. But how are we going to do that if what we're listening to is so incomprehensible as to be gibberish?

I think the researcher should have tested this himself by continuously listening to a radio station (he mentioned a Spanish-language radio station as an example) broadcasting in an unidentified aboriginal language and seeing how many meaningful sounds he could identify and how many words he could learn. By using radio as an example, he is implying that sound is all that matters, not meaning derived from other cues or context. Why not !Xóõ, which has the most phonemes of any language, with the input limited to aural stimulus only?

----

EDIT:

According to Wikipedia:
Taa, also known as ǃXóõ, is a Khoisan language with a very large number of phonemes (speech sounds), with at least 58 consonants, 31 vowels, and four tones (Traill 1985, 1994 on East !Xoon), or at least 87 consonants, 20 vowels, and two tones (DoBeS 2008 on West !Xoon), by many counts the most of any known language. These include 20 (Traill) or 43 (DoBeS) click consonants and several vowel phonations, though opinions vary as to which of the 130 (Traill) or 164 (DoBeS) consonant sounds are single segments and which are consonant clusters.

As of 2002, Taa is spoken by about 4,200 people worldwide. These are mainly in Botswana (approximately 4,000 people), but some are in Namibia.



Edited by Raчraч Ŋuɲa on 30 January 2009 at 6:19pm

1 person has voted this message useful



slucido
Bilingual Diglot
Senior Member
Spain
https://goo.gl/126Yv
Joined 6678 days ago

1296 posts - 1781 votes 
4 sounds
Speaks: Spanish*, Catalan*
Studies: English

 
 Message 28 of 129
30 January 2009 at 9:42am | IP Logged 
Raчraч Ŋuɲa wrote:

Now, I still do not agree with this article, not just for claiming that "the best way to learn a language is through frequent exposure to its sound patterns (read: listening)—even if you haven't a clue what it all means" but also for claiming that "Our ability to learn new words is directly related to how often we have been exposed to the particular combinations of the sounds which make up the words." There is actually research that contradicts this statement.


This article you quote doesn't contradict the other article. Actually, both are journalistic articles, not scientific papers. I gave you scientific articles in previous posts.

Here you have another scientific article about subliminal learning:

http://www.neuron.org/content/article/abstract?uid=PIIS0896627308005758

Mathias Pessiglione, Predrag Petrovic, Jean Daunizeau, Stefano Palminteri, Raymond J. Dolan, and Chris D. Frith. Subliminal Instrumental Conditioning Demonstrated in the Human Brain. Neuron, 2008; 59: 561-567

Here you have a journalistic commentary about this research:

Subliminal Learning Demonstrated In Human Brain

http://www.sciencedaily.com/releases/2008/08/080827163810.htm



Edited by slucido on 30 January 2009 at 12:58pm

1 person has voted this message useful



slucido
Bilingual Diglot
Senior Member
Spain
https://goo.gl/126Yv
Joined 6678 days ago

1296 posts - 1781 votes 
4 sounds
Speaks: Spanish*, Catalan*
Studies: English

 
 Message 29 of 129
30 January 2009 at 10:11am | IP Logged 
This one is very very interesting:

Subliminal Speech Priming

Psychological Science
Volume 16 Issue 8, Pages 617 - 625
Published Online: 8 Aug 2005

Sid Kouider (1, 2) and Emmanuel Dupoux (1)
(1) Laboratoire de Sciences Cognitives et Psycholinguistique, EHESS-ENS-CNRS, Paris, France; (2) Cognitive Neuroimaging Unit, INSERM, IFR 49, SHFJ, Orsay, France

http://www3.interscience.wiley.com/journal/118661809/abstract

Quote:

Abstract—We present a novel subliminal priming technique that operates in the auditory modality. Masking is achieved by hiding a spoken word within a stream of time-compressed speechlike sounds with similar spectral characteristics. Participants were unable to consciously identify the hidden words, yet reliable repetition priming was found. This effect was unaffected by a change in the speaker's voice and remained restricted to lexical processing. The results show that the speech modality, like the written modality, involves the automatic extraction of abstract word-form representations that do not include nonlinguistic details. In both cases, priming operates at the level of discrete and abstract lexical entries and is little influenced by overlap in form or semantics.




Edited by slucido on 30 January 2009 at 10:17am

1 person has voted this message useful



DaraghM
Diglot
Senior Member
Ireland
Joined 6154 days ago

1947 posts - 2923 votes 
Speaks: English*, Spanish
Studies: French, Russian, Hungarian

 
 Message 30 of 129
30 January 2009 at 10:22am | IP Logged 
Cainntear wrote:

..he did also say that "One hour a day of studying French text in a classroom is not enough—but an extra hour listening to it on the iPod would make a huge difference," which is about as unclear as it gets. We have the apparent fixed variable of time spent (1 hour)


The article has a number of ambiguities, but this isn't one of them. He seems to be saying that one hour of French class isn't enough; you need two hours: the hour of French class and an additional hour of listening.

The article mentions his seven years of experience teaching Russian to New Zealand students. If he chose this language, or French, as the basis of his thesis, then adding a listening component would make a huge difference to the students' success rate. If you hear an unknown Russian word and then see it written, it's a lot easier to remember than trying to memorise an unknown Russian word you've never heard before.

Edited by DaraghM on 30 January 2009 at 10:24am

1 person has voted this message useful



slucido
Bilingual Diglot
Senior Member
Spain
https://goo.gl/126Yv
Joined 6678 days ago

1296 posts - 1781 votes 
4 sounds
Speaks: Spanish*, Catalan*
Studies: English

 
 Message 31 of 129
30 January 2009 at 1:55pm | IP Logged 
Here you have a few interesting scientific articles.

Long-Term Auditory Word Priming in Preschoolers: Implicit Memory Support for Language Acquisition.

Journal of Memory and Language
Volume 39, Issue 4, November 1998, Pages 523-542
Barbara A. Church (a) and Cynthia Fisher (b)
(a) State University of New York at Buffalo
(b) University of Illinois


http://linkinghub.elsevier.com/retrieve/pii/S0749596X98926018

Quote:

Abstract

Three experiments explored long-term auditory word priming in young preschoolers. Children 2.5 and 3 years old and adults more accurately identified low-pass-filtered words that had been presented once in an initial study phase (Experiment 1) than words that had not been presented. The auditory priming effect showed no significant change from 2.5 years of age to college age. An effect of similar magnitude was also found in 2-year-olds (Experiment 2). Similar to findings with adults, auditory word priming in 3-year-olds did not significantly increase following a semantic encoding task, although explicit recognition memory improved under semantic study conditions (Experiment 3). The similar auditory word priming in preschoolers and adults suggests that the same learning mechanisms are at work in both groups. We argue that the powerful perceptual learning mechanism underlying auditory word priming has just the right properties to play a crucial role in the development of an auditory lexicon.



Learners' Production of Passives during Syntactic Priming Activities
Applied Linguistics 2008 29(1):149-154; doi:10.1093/applin/amn004
Youjin Kim and Kim McDonough
Northern Arizona University

Quote:

http://applij.oxfordjournals.org/cgi/content/full/29/1/149

http://www.ingentaconnect.com/content/oup/applij/2008/00000029/00000001/art00007

Abstract:
Previous research has shown that during syntactic priming activities, L1 speakers produce more target structures when they are prompted by a lexical item that occurred in their interlocutor's previous utterance. This preliminary study investigated whether L2 speakers are similarly influenced by lexical items during syntactic priming activities. Korean EFL learners from three proficiency levels carried out a picture description activity with a researcher whose interactional contributions were scripted with passive sentences. The results indicated that the learners produced more passives when they were prompted by verbs that had occurred in the researcher's passives. Directions for future research to investigate the relationships among syntactic priming, lexical items, and L2 development are suggested.




Edited by slucido on 30 January 2009 at 1:58pm

1 person has voted this message useful



Raчraч Ŋuɲa
Triglot
Senior Member
New Zealand
Joined 5821 days ago

154 posts - 233 votes 
Speaks: Bikol languages*, Tagalog, English (C1)
Studies: Spanish, Russian, Japanese

 
 Message 32 of 129
30 January 2009 at 2:45pm | IP Logged 
OK, it's also an article about research, though a more transparent one in describing the research it is writing about.

It does contradict it. He said, "Our ability to learn new words is directly related to how often we have been exposed to the particular combinations of the sounds which make up the words." If that were true, the English 18-month-olds could have learned, as the Dutch children did, that "taam", unlike "tam", has a different meaning (goes with a different object); yet the "English speakers ignored the elongation of vowel sounds", meaning they had not learned the other word. The neural tissue had not automatically developed to learn it.

English children, like all babies, have the capacity to "distinguish most of the phonetic contrasts used by all the world's languages" when they are young. Even before the experiment, they had been exposed to differences in vowel length for several months, since no one can guarantee speech of equal vowel length; those vowel-length differences simply do not affect the meaning of words in English. Yet that exposure did not help the English children learn just two words, repeated 12 times each, across three experiments. How simple is that?

I do agree with the researcher that we need a lot of exposure, but I can't agree with the unqualified statements that "just listening to the language, even though you don't understand it, is critical" and that "simply listening to a new language sets up the structures in the brain required to learn the words", especially when preceded by "the best way to learn a language is through frequent exposure to its sound patterns—even if you haven't a clue what it all means". It implies that even wholly incomprehensible exposure will benefit me, and that phonetics alone, never semantics, is what sets up those word-learning structures. That is an idea I find unacceptable, as it goes against my own personal experience.

There is another article, describing a different study of children, that presents a more believable account of vocabulary acquisition than the one described by Sulzberger, and it does not advocate the use of incomprehensible language.

Quote:

Researchers have long known that at about 18 months children experience a vocabulary explosion, suddenly learning words at a much faster rate. They have theorized that complex mechanisms are behind the phenomenon. But new research by a University of Iowa professor suggests far simpler mechanisms may be at play: word repetition, variations in the difficulty of words and the fact that children are learning multiple words at once.

But a series of computational simulations that he conducted suggest that simpler explanations - such as the repetition of words over time, the fact that children learn many words at the same time and the fact that words vary in difficulty - are sufficient to account for the vocabulary explosion.

"Children are going to get that word spurt guaranteed, mathematically, as long as a couple of conditions hold," McMurray said. "They have to be learning more than one word at a time, and they must be learning a greater number of difficult or moderate words than easy words. Using computer simulations and mathematical analysis, I found that if those two conditions are true, you always get a vocabulary explosion."

McMurray's simulations are analogous to a series of jars of different sizes, each representing a word, with more difficult words represented by larger jars. As individual units of time passed, a chip is dropped into each jar. Once the jar is filled, the word is learned.

McMurray's mathematical analysis suggests that the word spurt is largely driven by the number of small jars (easy words) relative to large jars (difficult words). As long as there are more difficult words than easy ones, the vocabulary explosion is guaranteed.

Few words in any language are used an overwhelming number of times in ordinary speech. So, if frequency of use is considered as a measure of degree of difficulty, languages have many more difficult than easy words, McMurray said.

Experts have long thought that once a child learns a word, it is easier for him or her to learn more words. Or in the case of McMurray's simulation, the jars become smaller. But McMurray also simulated a model in which the jars became larger once a word was learned and found that the vocabulary explosion still occurred."
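
(To illustrate the mechanism in the quote above, here is a minimal sketch of the "jars" model in Python. This is my own toy illustration, not McMurray's actual code; the vocabulary size, the mean difficulty of 300 time units, and the Gaussian spread are arbitrary assumptions chosen only to satisfy his two conditions: many words being learned in parallel, with easy words rarer than moderate and difficult ones.)

import random

random.seed(1)

NUM_WORDS = 10000
# Jar size = word difficulty, in "chips" needed before the word is learned.
# Drawing sizes from a Gaussian makes small jars (easy words) rarer than
# moderate and large ones.
jar_size = [max(1, int(random.gauss(300, 80))) for _ in range(NUM_WORDS)]

total_learned = 0
for t in range(1, 301):            # each loop pass = one unit of time
    # One chip falls into every jar per time step, so a word whose jar
    # holds exactly t chips is learned at time t.
    newly_learned = sum(1 for size in jar_size if size == t)
    total_learned += newly_learned
    if t % 50 == 0:
        print("t=%3d  learned this step: %3d  total: %5d"
              % (t, newly_learned, total_learned))

Running it, the "learned this step" column climbs steeply as t approaches the typical jar size: that acceleration is the vocabulary explosion, and nothing about the simulated learner changes along the way; only the distribution of word difficulty drives it.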


Now, that article implies an ability to differentiate easy words from difficult ones based on frequency of usage. Another article says words are learned best by working out their meanings, and yet another says we learn nouns faster than verbs. How can incomprehensible input account for any of that?

chelovek wrote:
I can't believe you wrote all of that crap.


Now chelovek, which of us two has a crappy idea?

---

slucido:

I've realized you've been posting about subliminal learning a lot, and that you're using that article to support it. I have nothing to add on that topic; it doesn't interest me at all, simply because it is really hard to decide either way. Cheers.



Edited by Raчraч Ŋuɲa on 30 January 2009 at 3:05pm



1 person has voted this message useful








