ChiaBrain Bilingual Diglot Senior Member United States Joined 5807 days ago 402 posts - 512 votes Speaks: English*, Spanish* Studies: Portuguese, Italian, French, German
| Message 82 of 129 08 February 2009 at 8:43pm | IP Logged |
ZOMBIE THREAD!!! Kill it with FIRE!!!
| Jar-ptitsa Triglot Senior Member Belgium Joined 5897 days ago 980 posts - 1006 votes Speaks: French*, Dutch, German
| Message 83 of 129 08 February 2009 at 9:10pm | IP Logged |
ChiaBrain wrote:
ZOMBIE THREAD!!! Kill it with FIRE!!!
Good idea.
| Raчraч Ŋuɲa Triglot Senior Member New Zealand Joined 5817 days ago 154 posts - 233 votes Speaks: Bikol languages*, Tagalog, English C1 Studies: Spanish, Russian, Japanese
| Message 84 of 129 10 February 2009 at 2:52am | IP Logged |
I finally had the opportunity to listen to Paul Sulzberger's radio interview, and it's a real revelation. Here's the link from the Google cache, as the original link doesn't work anymore.
After listening, I came to realize that he views his research results as counter-evidence against the "most important tenet in modern language learning", namely "comprehensible input, make sure the kids understand what's going on". He said "It seems it's pretty important for incomprehensible input to happen first", as the brain is already analyzing the sound input, laying down new sound patterns. He said further that in the first few minutes of watching a foreign film, you will have heard virtually all the words you're ever going to hear in a normal conversation, and you need to hear those sounds over and over, hundreds and hundreds of times.
Before I listened, I was curious about how he tested his hypothesis. The short answer is found at 4:20 of the sound file: he still has to set up a real language-learning situation to test it out. What kind of testing did he do, then? Well, he said he took a bunch of Russian words and analyzed them from the point of view of English. He analyzed all those Russian words and calculated the frequency with which the sound combinations in those Russian words actually occur in English, and from there he could predict how well English-speaking people will be able to learn those words. The "Russian words which were most like English words are of course the ones they remembered; the difficult ones, they had no neural machinery to remember them", he said. The test seems to me to have been rote memorization of a sample of Russian words followed by a recall test. Well, we still have to see/read the details of his "test". I'm only guessing here that that is the kind of test he conducted.
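Just to make that concrete, here is a rough sketch (in Python) of what such a frequency analysis might look like, assuming a simple letter-pair model over transliterated words. The tiny word list, the names and the scoring scheme are my own placeholders, since we haven't seen his actual method:

from collections import Counter
from math import log

# Toy stand-in for an English corpus; a real analysis would use a large word list.
english_words = ["start", "brat", "most", "stick", "grab", "tram", "rats"]

def pairs(word):
    word = "#" + word + "#"              # mark word boundaries
    return [word[i:i+2] for i in range(len(word) - 1)]

# How often each two-letter combination occurs in the (toy) English corpus
english_counts = Counter(p for w in english_words for p in pairs(w))
total = sum(english_counts.values())

def englishlikeness(word):
    # Average log-probability of the word's letter pairs in English,
    # with add-one smoothing; higher (less negative) = more English-like.
    probs = [(english_counts[p] + 1) / (total + len(english_counts)) for p in pairs(word)]
    return sum(log(p) for p in probs) / len(probs)

# The prediction would be that a word like "brat" (English-like combinations)
# is easier to remember than one like "vzglyad" (un-English clusters).
for w in ["brat", "vzglyad"]:
    print(w, round(englishlikeness(w), 2))

His real analysis presumably worked on phonemes rather than letters and on a proper corpus, but the idea of scoring Russian words by how familiar their sound combinations are to an English speaker would be the same.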
I don't know how frequently each word was repeated, or the interval between the memorization and the testing. I don't know what Russian words he used to test his hypothesis; whether he limited the testing to Russian sound combinations built only from English phonemes, or also included sounds which do not exist in English, like hard and soft (palatalized) consonants, the voiceless velar fricative /x/, the retroflex post-alveolar fricatives /ʂ/ and /ʐ/, the affricate /ts/, and the close central vowel /ɨ/; or whether the subjects knew the meaning of the Russian words or memorized them without knowing their meaning. There are a lot of unknowns that could influence how we interpret the results.
What is the problem I see, if this is the kind of test he did, based only on his interview and the university press release? The result can also be interpreted in another way. What he was actually testing, more or less, is which Russian sound combinations are more easily remembered by English-speaking people. His research did not test whether frequent listening to incomprehensible Russian would make the subjects able to recognize un-English-like Russian sound combinations on a par with English-like ones. He did not have them listen to a Russian radio station for a fixed duration over several sessions, then ask them to repeat that bunch of Russian words he analyzed, with all sorts of sound combinations, and see which ones were spoken with the correct sound combinations, including phonemes not found in English. With that, I think his research would have hit the nail on the head.
How can he then say that "Neural tissue required to learn and understand a new language will develop automatically from SIMPLE (emphasis mine) exposure to the language -- which is how babies learn their first language", i.e. through frequent exposure to the sound patterns, when he did not actually test that?
parasitius wrote:
http://72.14.235.132/search?q=cache:is-U0di00mkJ:www.victoria.ac.nz/lals/research/phdma-students.aspx+Paul+Sulzberger+language&hl=en&ct=clnk&cd=9&gl=sg&client=firefox-a
Here is some extra info:
Second Language Learning (PhD research).
This research considers the hypothesis that the acquisition of vocabulary in a second language is (inter alia) dependent on the acquisition of a knowledge of the phonotactic structure of the second language. The observation that children acquire considerable knowledge of the phonotactic structure of their native language before they begin to speak, coupled with the finding that phonological memory in both children and adults is correlated with native language "wordlikeness", suggests that implicit knowledge of the phonotactic structure of the native language is implicated in vocabulary development - in particular the ability to rapidly acquire ("fast-mapping") the form of novel, native (but typically not foreign) words. This thesis considers the argument that the lack of such experientially-derived, implicit phonotactic knowledge can explain many of the difficulties experienced by second language learners in the acquisition of vocabulary in the early stages. Email Paul.
I wonder how he was able to test "wordlikeness" for incomprehensible Russian input as well. I doubt he could. In an unknown spoken language there is actually no hint of where one word ends and the next begins, much less whether a given syllable is a word, apart from meaning and a few pauses here and there.
Edited by Raчraч Ŋuɲa on 10 February 2009 at 3:30am
| icing_death Senior Member United States Joined 5860 days ago 296 posts - 302 votes Speaks: English*
| Message 85 of 129 10 February 2009 at 5:08am | IP Logged |
Wow, what a stretch. Thanks for clearing that up, Raчraч Ŋuɲa. Incomprehensible input does close to nothing for me, so I'm glad to hear Krashen hasn't been disproved.
Iversen Super Polyglot Moderator Denmark berejst.dk Joined 6702 days ago 9078 posts - 16473 votes Speaks: Danish*, French, English, German, Italian, Spanish, Portuguese, Dutch, Swedish, Esperanto, Romanian, Catalan Studies: Afrikaans, Greek, Norwegian, Russian, Serbian, Icelandic, Latin, Irish, Lowland Scots, Indonesian, Polish, Croatian Personal Language Map
| Message 86 of 129 11 February 2009 at 4:50am | IP Logged |
So far I haven't written in this thread because I found the original claim too absurd to be taken seriously. Raчraч Ŋuɲa's summary made it somewhat more likely that Paul Sulzberger's project wasn't just a practical joke, but it also showed that he hasn't really proved his idea - or even tested it. And now the claim seems even more limited: Sulzberger just wanted to show that listening to the sounds of an unknown language will prepare your brain for decoding speech in that language. Maybe that's a hypothesis that could be proven if somebody did an experiment, but not with me as a guinea pig.
However, the outcome of that experiment would neither prove nor disprove Krashen's claim, namely that listening to utterances that are just a bit too difficult (comprehensible input) will in the long run teach you a language - and, in its strong form, that anything else (including grammar) is irrelevant. Probably nobody in their right mind would deny the relevance of listening to comprehensible input (the more the better), but I know from my own language studies how much the concurrent study of grammar and word lists has sped up my learning, so I really have no reason to take Krashen's claims in their extreme form seriously. And even less reason to waste my time trying out his ideas.
Edited by Iversen on 11 February 2009 at 4:56am
| slucido Bilingual Diglot Senior Member Spain https://goo.gl/126Yv Joined 6674 days ago 1296 posts - 1781 votes 4 sounds Speaks: Spanish*, Catalan* Studies: English
| Message 88 of 129 11 February 2009 at 1:18pm | IP Logged |
Iversen wrote:
So far I haven't written in this thread because I found the original claim too absurd to be taken seriously. Raчraч Ŋuɲa's summary made it somewhat more likely that Paul Sulzberger's project wasn't just a practical joke, but it also showed that he hasn't really proved his idea - or even tested it. And now the claim seems even more limited: Sulzberger just wanted to show that listening to the sounds of an unknown language will prepare your brain for decoding speech in that language. Maybe that's a hypothesis that could be proven if somebody did an experiment, but not with me as a guinea pig.
However, the outcome of that experiment would neither prove nor disprove Krashen's claim, namely that listening to utterances that are just a bit too difficult (comprehensible input) will in the long run teach you a language - and, in its strong form, that anything else (including grammar) is irrelevant. Probably nobody in their right mind would deny the relevance of listening to comprehensible input (the more the better), but I know from my own language studies how much the concurrent study of grammar and word lists has sped up my learning, so I really have no reason to take Krashen's claims in their extreme form seriously. And even less reason to waste my time trying out his ideas.
People learn languages from input and output.
Regarding input, people don't learn languages from comprehensible input; they learn languages by making input understandable.
People achieve this using several means: readers, dictionaries, word lists, grammar books, Pimsleur, Thomas, Assimil, teachers, CONTEXT, or whatever.
That's the reason we read so many discussions about the best methods.
For example, my main method is to read native websites using pop-up dictionaries. This input is always above my level. You prefer dictionaries and word lists, and that's perfectly feasible as long as it helps you make your input understandable.
Any input method that helps us make input understandable is good, as long as you feel good about it.
Incomprehensible or passive input is very useful as well, because it facilitates this conscious process.
Edited by slucido on 11 February 2009 at 1:21pm