129 messages over 17 pages
Maikl Tetraglot Senior Member Germany Joined 6227 days ago 121 posts - 145 votes Speaks: German*, Dutch, English, Spanish Studies: Turkish
| Message 33 of 129 30 January 2009 at 3:06pm | IP Logged |
Just want to throw in my 2 pence here:
in most teaching environments listening may still be largely underrated.
Why?
I don't know. Probably because we live in a text based culture...
I don't think there's any kind of approach that merits being called revolutionary.
The most efficient learning strategy I've come across so far is listening to a text in the target language with a word-by-word translation at hand.
But this only works within certain limits... some elements of the target language may have no equivalent in your language and would therefore resist any back-translation.
Do you know what I mean?
I'm probably going to open a thread on this one..
slucido Bilingual Diglot Senior Member Spain https://goo.gl/126Yv Joined 6677 days ago 1296 posts - 1781 votes 4 sounds Speaks: Spanish*, Catalan* Studies: English
| Message 34 of 129 30 January 2009 at 4:29pm | IP Logged |
Raчraч Ŋuɲa wrote:
OK, it's an article about research, though more transparent in describing the research it is writing about.
It does contradict it. He said "Our ability to learn new words is directly related to how often we have been exposed to the particular combinations of the sounds which make up the words." Now if it didn't contradict it, the English 18-month-olds could have learned that "taam", unlike "tam", has a different meaning (goes with a different object), as the Dutch children did; yet the English speakers ignored the elongation of the vowel sound, meaning they haven't learned the other word. Neural tissue has not developed automatically to learn that. |
It doesn't contradict it, because the first article talks about ADULT second language acquisition and this one talks about children. On the other hand, listening isn't the only factor in either adults or children. Nobody denies that. For example, differential reinforcement is very important. By the way, in the last message I gave you evidence about priming effects in children as well.
Regarding your quoted article, it explains that children fail to discriminate many non-native sounds because they haven't heard them from their parents, i.e., listening is the most important factor.
Here you have two extracts:
Quote:
The study shows how important the child's first year is in acquiring language. By listening to their parents and learning words, children discover how speech in their language works, a process that is vital for gaining command of vocabulary and grammar.
...
Previous research showed that at birth infants can distinguish most of the phonetic contrasts used by all the world's languages. This ''universal'' capacity shifts over the first year to a language-specific pattern in which infants retain or improve categorization of native-language sounds but fail to discriminate many non-native sounds. Eventually, they learn to ignore subtle speech distinctions that their language does not use.
Here you can add differential REINFORCEMENT to the listening factor. These two factors overlap.
Quote:
I do agree with the researcher that we need a lot of exposure, but I can't agree with the unqualified statement that "just listening to the language, even though you don't understand it, is critical" and "simply listening to a new language sets up the structures in the brain required to learn the words", especially if preceded by "the best way to learn a language is through frequent exposure to its sound patterns—even if you haven't a clue what it all means". It implies that even wholly incomprehensible exposure will benefit me and that phonetics is the only factor needed to set up those word-learning structures, never semantics, which is an unacceptable, repugnant idea as it goes against my own personal experience.
I don't have any doubt about that.
Children begin to learn from wholly incomprehensible input. Regarding adults, I gave you a lot of evidence. It works through priming and facilitation processes. When you begin to study the language more consciously, you will go faster.
The practical advice is: listen to native materials as much as you can from the very beginning, even without paying attention.
Quote:
There is another article describing another research among children that describes a more believable way to acquire vocabulary than the one described by Sulzberger, but this one does not advocate use of incomprehensible language. |
Sulzberger, or the article, doesn't talk about children. I don't know what you are talking about. I think you are missing the point.
Where does this article contradict anything I am saying?
Quote:
Now that article implies the ability to differentiate easy from difficult words based on frequency of usage. Another article says words are learned best by working out their meanings, and yet another says we learn nouns faster than verbs. How can incomprehensible input account for that?
These articles talk about children learning their own language and not about adult SLA, but they don't contradict the facts about repetitive listening as the main factor. I don't see any contradiction with all the implicit, subliminal or unconscious language learning.
These articles talk about meanings, but SEMANTIC PRIMING is another working factor in all this unconscious listening stuff. If you activate these priming and facilitation processes (lexical, syntactic, semantic), you will go faster.
Raчraч Ŋuɲa Triglot Senior Member New Zealand Joined 5820 days ago 154 posts - 233 votes Speaks: Bikol languages*, Tagalog, EnglishC1 Studies: Spanish, Russian, Japanese
| Message 35 of 129 30 January 2009 at 6:07pm | IP Logged |
If you re-read the article, he is not just talking about adult language learning, but babies as well.
Quote:
Dr Sulzberger's research challenges existing language learning theory. His main hypothesis is that simply listening to a new language sets up the structures in the brain required to learn the words.
"Neural tissue required to learn and understand a new language will develop automatically from simple exposure to the language—which is how babies learn their first language," Dr Sulzberger says.
Dr Sulzberger says he was interested in what makes it so difficult to learn foreign words when we are constantly learning new ones in our native language. He found the answer in the way the brain develops neural structures when hearing new combinations of sounds. |
Therefore, his hypothesis applies to language learning in general, adults or babies alike, so the other article about children contradicts it.
slucido wrote:
Regarding your quoted article, it explains children fail to discriminate many non-native sounds because of the fact they haven't heard them from their parents, |
If you notice, what was tested was vowel length, not some other sound unheard from the parents.
slucido wrote:
Children begin to learn from wholly incomprehensible input. Regarding adults I gave you a lot of evidence. It works through priming and facilitation processes. When you begin to study the language more consciously, you will go faster.
The practical advice is: listen native materials as much as you can from the very beginning, even without paying attention.
Is this subliminal/subconscious learning? It looks more like conscious than subconscious to me, just with reduced attention. This is why "subliminal learning" is unappealing to me. It's hard to pin down. Quick question: if I am multitasking three tasks, and my attention at the moment is on just one of them, are the other two tasks in my conscious or my subconscious?
slucido wrote:
The practical advice is: listen native materials as much as you can from the very beginning, even without paying attention.
I'm doing that while learning Castellano, but not for Sulzberger's reasons or yours. But I do give my full, undivided attention.
Cheers.
Edited by Raчraч Ŋuɲa on 30 January 2009 at 6:16pm
chelovek Diglot Senior Member United States Joined 6089 days ago 413 posts - 461 votes 5 sounds Speaks: English*, French Studies: Russian
| Message 36 of 129 30 January 2009 at 6:18pm | IP Logged |
Edited.
Edited by chelovek on 30 January 2009 at 6:28pm
chelovek Diglot Senior Member United States Joined 6089 days ago 413 posts - 461 votes 5 sounds Speaks: English*, French Studies: Russian
| Message 37 of 129 30 January 2009 at 6:25pm | IP Logged |
Raчraч Ŋuɲa wrote:
If you re-read the article, he is not just talking about adult language learning, but babies as well.
Quote:
Dr Sulzberger's research challenges existing language learning theory. His main hypothesis is that simply listening to a new language sets up the structures in the brain required to learn the words.
"Neural tissue required to learn and understand a new language will develop automatically from simple exposure to the language—which is how babies learn their first language," Dr Sulzberger says.
Dr Sulzberger says he was interested in what makes it so difficult to learn foreign words when we are constantly learning new ones in our native language. He found the answer in the way the brain develops neural structures when hearing new combinations of sounds. |
Therefore, his hypothesis applies to language learning in general, adults or babies alike, so the other article about children contradicts it.
He mentions babies, yes, but only as a reference to how they learn their first language. He was stating that the process that adults need to go through to properly learn a language is similar to the process that babies go through when they learn their first language.
How do we learn our first language? As babies, we listen to language all day. After several months, we learn to distinguish between the dominant language and foreign ones. The key here is that babies get lots of aural exposure, and the researcher says that that is what adults tend to lack. He is NOT saying that listening is the ONLY thing required to learn and understand a language, but he is saying that it's a required thing that we don't tend to get enough of as adults.
That's all there is to it.
Edited by chelovek on 30 January 2009 at 6:32pm
Cainntear Pentaglot Senior Member Scotland linguafrankly.blogsp Joined 6013 days ago 4399 posts - 7687 votes Speaks: Lowland Scots, English*, French, Spanish, Scottish Gaelic Studies: Catalan, Italian, German, Irish, Welsh
| Message 38 of 129 30 January 2009 at 7:12pm | IP Logged |
slucido wrote:
Sulzberger, or the article, doesn't talk about children. I don't know what you are talking about. I think you are missing the point.
Where does this article contradict anything I am saying?
Raчraч Ŋuɲa is making an important point which may or may not be applicable here -- either way, you'd do well to try to understand it rather than dismissing it out of hand. You don't have to agree with it, but if you're going to reject it, please do so in a reasoned manner.
Raчraч's point is about the perceptual difference between phonetics and phonemics. The child has been exposed to lots of different (phonetic) sounds, including both aa and a, but even by an early age he has reduced these to a distinct number of phonemes.
I.e., the child's mind no longer perceives any difference between two (or more) sounds.
How does that transfer to this case, and for adults?
Well, the suggestion would be that the brain genuinely doesn't hear the difference between (for example) the Castilian phonemes R and RR, as there is only one equivalent phoneme in English.
This is standard thinking, and basically one of the points that the study challenges. As yet the overwhelming weight of evidence supports current thinking (naturally -- otherwise current standard thinking would be different!). Regardless, this is the context for the study, and you can't understand the study without understanding its context.
Speaking from personal experience, I have a problem with "intrusive Rs" in some languages. You may be aware that a large number of English people drop non-prevocalic Rs, whereas most Scottish people don't. Today I was trying to learn some Polish from an audio course, and when I was supposed to say "ciekawy", I kept wanting to say "cierkawy" -- the vowel quality (the fact that it was non-schwa despite being in an unstressed syllable directly adjacent to a stressed syllable) activated the "er" phoneme rather than the appropriate "e" phoneme.
Conversely, there are many English people who fail to realise that they drop the "R" and "RR" phonemes when speaking Spanish, because to them it's a non-phonemic difference.
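The many-to-one collapse Cainntear describes can be caricatured in a few lines of code. This is a toy sketch only, not a model from the thread or from any study; the listener mappings and the `hears_contrast` function are invented for illustration:

```python
# Toy sketch: a listener's phoneme inventory acts as a many-to-one mapping
# from phonetic sounds to perceived categories. An English listener collapses
# the Spanish tap "r" and trill "rr" into one category, so the pero/perro
# contrast is lost; a Spanish listener keeps the two sounds distinct.
english_listener = {"r": "r", "rr": "r"}   # both sounds map to one phoneme
spanish_listener = {"r": "r", "rr": "rr"}  # contrast preserved

def hears_contrast(mapping, sound_a, sound_b):
    """True if the listener's mapping keeps the two sounds distinct."""
    return mapping[sound_a] != mapping[sound_b]

print(hears_contrast(english_listener, "r", "rr"))  # False
print(hears_contrast(spanish_listener, "r", "rr"))  # True
```

The point of the caricature is that the information is lost before any conscious comparison happens: once both inputs land in the same category, there is nothing left for the learner to attend to.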
Raчraч Ŋuɲa Triglot Senior Member New Zealand Joined 5820 days ago 154 posts - 233 votes Speaks: Bikol languages*, Tagalog, EnglishC1 Studies: Spanish, Russian, Japanese
| Message 39 of 129 30 January 2009 at 11:34pm | IP Logged |
chelovek wrote:
He is NOT saying that listening is the ONLY thing required to learn and understand a language, but he is saying that it's a required thing that we don't tend to get enough of as adults.
I agree that listening is not the ONLY thing he is saying is required for language learning. Rather, what I disagree with is his claim that listening-only will be beneficial even if one has no idea of what is being heard. That is actually what makes his research "revolutionary" and "ground-breaking", and what slucido uses for his subliminal/subconscious learning thing. Without that, it is hardly "revolutionary" and "ground-breaking".
But another study compared reading-only, reading-while-listening, and listening-only for their rates of vocabulary acquisition and decay, and concluded that listening-only produced the least vocabulary acquisition, for the reasons explained below. I think the value of listening is only for the correct vocalization of learned words.
Quote:
The MC test results for the reading-while-listening mode across all texts indicate that an impressive 48% (13.31) of the 28 words were learned (compare gains of 22% in the study by Horst et al., 1998). MC gains made in the reading-only mode were similarly impressive standing at 45% (12.54). Gains made in the listening-only mode, however, were less remarkable standing at 29% (8.20).
Of the two tests, the meaning-translation test is probably the one that most closely indicates whether a subject actually knew the meaning of the word while reading and listening. This is because it shows that the subject is not only capable of recognizing the word but can also assign a meaning to it without being prompted. In Table 4, the meaning-translation test results across all texts show that 16% (4.39) of the 28 words were learned in the reading-while-listening mode. This rate of acquisition is followed closely in the reading-only mode, which yielded gains of 15% (4.10) of the 28 target words. This reading-only rate matches that in the Waring and Takaki (2003) study, in which the meaning-translation test scores showed that 18% of the 25 target words were learned. In the present study, gains in the listening-only mode were minimal with only 2% (0.56) of the 28 words learned.
Reading-only mode versus reading-while-listening mode. The scores the subjects attained in these two modes were similar across the tests. The mean test scores for the three books varied relatively little depending on the test type (even after 3 months). Given the almost equal expected learning outcome from each of these modes, it would seem that the selection of preferred input mode should rest with the learner.
Listening-only mode. It seems rather obvious that the listening-only mode should be the most difficult to acquire new vocabulary from (especially given the length of the listening task). In this study, the results of the meaning-translation test at the immediate posttest for the listening-only mode showed that only 2% (0.56) of the 28 target words were learned (compared with 15% and 16% in the other two modes). Moreover, as we shall see in detail later, when asked which input mode they preferred, 0% of the subjects chose listening-only.
The subjects, it seems, displayed a critical lack of familiarity with spoken English. As they listened to the story, they had to pay constant attention to a stream of speech whose speed they could not control. Because they were incapable of processing the phonological information as fast as the stream of speech, they may have failed to recognize many of the spoken forms of words that they already knew in their written forms.
A possible reason for this is that the subjects’ phonological knowledge of English varied from the phonological system employed by native speakers. The Japanese language has a different syllable structure to English and is often said to be mora-timed; therefore, Japanese learners may expect to hear words pronounced in this manner and thus may have considerable problems interpreting spoken English. McArthur (2003) claimed that Japanese learners have great difficulty in speaking and listening to English because of this “tendency not only to pronounce English in terms of Japanese syllable structure but also to adapt English words syllabically into Japanese” (p. 21).
A second reason might have been a lack of skill in detecting word boundaries in connected speech (i.e., skill in the lexical segmentation of the input signal). On reviewing the comments made by the subjects regarding the listening-only mode, it became apparent that a major challenge for them was negotiating the seamless nature of connected speech. Because of the way one word runs into the next seamlessly “without any little silences between the spoken words compared with the way there are white spaces between written words” (Pinker, 1994, p. 159), subjects may have found it particularly difficult to tell where one word ended and the next began. In terms of second-language listening, Field (2003) characterized the lexical segmentation of streams of speech as “arguably the commonest perceptual cause of breakdown of understanding” (p. 327).
A third reason might have been that the subjects were required to listen at a coverage rate (95%) that was set for reading and not listening. The data suggest that the coverage rate was too low for the listening-only mode, rendering the task of inferring the meanings of the 28 target words as too great a challenge. Although no statistical data was provided, Nation (2001) claimed that “it is likely that for extensive listening the ratio of unknown words to known words should be around 1 in 100” (p. 118).
........
The data suggest that the acquisition of words through listening is considerably slower than from reading, and as such more recurrences of words are needed for acquisition (as defined by a correct score on the meaning-translation test) to take place.
Ultimately, this suggests that there is little or no chance a new word will be picked up from listening unless the word is met considerably more than 20 times. Extrapolation of these data shows that maybe 50 or even 100 meetings may not be enough to acquire a word’s meaning from listening-only. As has recently been shown, even partial knowledge such as the ability to recognise a word’s form is hard to pick up from listening alone (Donkaewbua, 2008). It also suggests that far more listening than reading needs to be done for vocabulary learning through extensive exposure. It should also be noted that in this study more uptake of vocabulary might have been possible if the listening treatment had been in shorter, more manageable sessions.
The reading-only mode data in this study replicate the Waring and Takaki (2003) findings, which showed that (a) unless words are met a sufficient number of times and (b) are met again soon after reading, then the word knowledge gained will decay. Recent research indicates that a sufficient number is likely to be much higher than 7–9 times for long term retention, and in fact may be closer to 30–50 times or higher (Waring, 2008) for new words met through graded reading.
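For anyone checking the numbers in the quoted study: the percentages are simply the mean test scores divided by the 28 target words. A minimal sketch (the variable and function names are mine, not the study's):

```python
# Verify that the quoted percentages match the reported mean scores out of
# the 28 target words (MC = multiple choice, MT = meaning translation).
TARGET_WORDS = 28

mc_scores = {"reading-while-listening": 13.31, "reading-only": 12.54,
             "listening-only": 8.20}
mt_scores = {"reading-while-listening": 4.39, "reading-only": 4.10,
             "listening-only": 0.56}

def pct(mean_score, total=TARGET_WORDS):
    """Mean score as a whole-number percentage of the target words."""
    return round(100 * mean_score / total)

for mode, score in mc_scores.items():
    print(f"MC {mode}: {pct(score)}%")  # 48%, 45%, 29%
for mode, score in mt_scores.items():
    print(f"MT {mode}: {pct(score)}%")  # 16%, 15%, 2%
```

The striking gap is in the meaning-translation rows: listening-only yields 2% against 15-16% for the reading modes, which is the basis of the argument above.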
slucido Bilingual Diglot Senior Member Spain https://goo.gl/126Yv Joined 6677 days ago 1296 posts - 1781 votes 4 sounds Speaks: Spanish*, Catalan* Studies: English
| Message 40 of 129 31 January 2009 at 4:19am | IP Logged |
Raчraч Ŋuɲa wrote:
Is this subliminal/subconscious learning? It looks more like conscious than subconscious to me, just with reduced attention. This is why "subliminal learning" is unappealing to me. It's hard to pin down. Quick question: if I am multitasking three tasks, and my attention at the moment is on just one of them, are the other two tasks in my conscious or my subconscious? |
Is it hard to pin down? Obviously, but that is a scientific problem, not a practical one. As language learners we only need to know that listening is good, even if we don't pay attention and even if we don't understand. And we do know this, because scientists are pinning it down in their studies. I gave you some of them.
Raчraч Ŋuɲa wrote:
slucido wrote:
The practical advice is: listen native materials as much as you can from the very beginning, even without paying attention.
I'm doing that while learning Castellano, but not for Sulzberger's reasons or yours. But I do give my full, undivided attention.
Cheers.
You are doing what I recommend based on growing scientific evidence... Good :0)
Edited by slucido on 31 January 2009 at 4:37am