35 messages over 5 pages
vermillon Triglot Senior Member United Kingdom Joined 4678 days ago 602 posts - 1042 votes Speaks: French*, EnglishC2, Mandarin Studies: Japanese, German
| Message 33 of 35 09 March 2012 at 2:51pm | IP Logged |
Hi Lucky Charms,
I'll try to answer your questions:
0. It may be similar to LingQ, but I use it only for Chinese. If I had to adapt it to other languages, I would certainly not work the way LingQ does, which I find a bit odd (do I really want one card for the singular and one for the plural, when all the words simply form their plural in "-s"? Perhaps I don't remember it very well, but that was my impression). Instead I would have one card per "dictionary entry", so all conjugated forms of a verb would collapse into one word, for instance. Now for the real questions.
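The "one card per dictionary entry" idea can be sketched as a crude lemmatizer that collapses surface forms before cards are created. The suffix rules below are toy placeholders for English, not anyone's actual implementation; a real tool would use a proper lemmatizer.

```python
# Toy lemma rules: strip a few common English suffixes. Purely illustrative.
LEMMA_RULES = [("ies", "y"), ("ed", ""), ("ing", ""), ("s", "")]

def lemma(word):
    """Map a surface form to a crude lemma using the toy suffix rules."""
    for suffix, replacement in LEMMA_RULES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)] + replacement
    return word

def collapse(words):
    """Group surface forms under one lemma each, keeping first-seen order,
    so 'walks' and 'walked' yield a single 'walk' card."""
    seen = []
    for w in words:
        l = lemma(w)
        if l not in seen:
            seen.append(l)
    return seen

print(collapse(["walks", "walked", "walk", "cities"]))  # → ['walk', 'city']
```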
1. First I display a list and look through it for the words I want to pick up and learn. That way I tend to learn nouns and verbs and leave out the descriptive adverbs (I don't need to know 10 words for describing the wind blowing). Then I add the ones I want to Anki. I agree that this isn't ideal, but I mostly don't "study" them, since many of these words will only ever be passive knowledge: I just want to be able to recognize them rather than use them. NB: this is because I started doing it after several thousand words, when most of the words I would ever use actively were already known.
If you wanted to get examples without "spoiling" your reading, one could keep a corpus and retrieve example sentences from it. With an extra bit of magic (which I'm trying to implement, though I don't spend much time on it), the examples can even be chosen to match the meaning of the word's occurrence in your text. That way, if a word has two very different meanings, you can get examples related to what you're about to read, which matters if you're concerned with efficiency and want to read the book more than learn vocabulary.
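One crude way to approximate that "meaning-matched example" retrieval is to pick the corpus sentence whose surrounding words overlap most with the context the target word appears in, as a proxy for sense. The corpus and context below are invented English placeholders, not the poster's actual system:

```python
def best_example(word, context_words, corpus):
    """Return the corpus sentence containing `word` whose other words
    share the most vocabulary with the given context (bag-of-words overlap)."""
    def score(sentence):
        tokens = set(sentence.lower().split()) - {word}
        return len(tokens & context_words)
    candidates = [s for s in corpus if word in s.lower().split()]
    return max(candidates, key=score, default=None)

corpus = [
    "the bank raised its interest rates",
    "we sat on the bank of the river",
]
# The book's context around "bank" suggests the river sense:
context = {"river", "water", "fishing"}
print(best_example("bank", context, corpus))  # → "we sat on the bank of the river"
```

A real implementation would likely use word embeddings or a sense inventory rather than raw overlap, but the selection principle is the same.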
2. No. As I mentioned, I have a tokenizer which first separates each sentence into its words; working only at the character level would indeed be pointless. I still list the unknown characters separately, but that's more for fun than for learning, really. And then I learn words/chengyu... With a bit more work it would be possible to detect longer idiomatic patterns, but I haven't worked on that at all.
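The tokenizing step can be sketched as a greedy longest-match (maximum matching) segmenter. The tiny dictionary here is a made-up placeholder; a real tool would load a full word list (e.g. CC-CEDICT) or use a trained segmenter such as jieba, and the poster's actual tokenizer may work differently:

```python
# Toy dictionary for illustration only.
DICTIONARY = {"我", "喜欢", "学习", "中文"}
MAX_WORD_LEN = 4

def segment(sentence):
    """Split a Chinese sentence into words by always taking the longest
    dictionary match at the current position; characters with no match
    fall back to single-character tokens."""
    words, i = [], 0
    while i < len(sentence):
        for length in range(min(MAX_WORD_LEN, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in DICTIONARY:
                words.append(candidate)
                i += length
                break
    return words

print(segment("我喜欢学习中文"))  # → ['我', '喜欢', '学习', '中文']
```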
3. I believe you are entirely right. When I started using this technique, it popped up words like 他们、她们... The solution is either to add them to Anki (and mark them "easy" so you rarely see them) or to keep a sort of exclusion list, so that they never reach Anki but the program knows you know them. It is a bit annoying for the first few days (a week at most), but after that the problem almost never occurs.
A more elaborate solution, which I have only thought about rather than implemented, is to first evaluate the user's level to estimate the size of their vocabulary ("the 3000 most common words", for instance) and focus only on rarer words. This can make you miss some very common words that you happened not to know, but overall it should be a good compromise.
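The two filters described above, an explicit exclusion list plus a frequency-rank cutoff, can be combined in a few lines. The word ranks and lists here are made-up placeholders for illustration:

```python
# Hypothetical frequency ranks (1 = most common) and user data.
FREQUENCY_RANK = {"他们": 120, "她们": 450, "蝴蝶": 4200, "彷徨": 9800}
KNOWN_WORDS = {"他们", "她们"}   # personal exclusion list
ESTIMATED_VOCAB = 3000           # "the 3000 most common words"

def words_to_learn(tokens):
    """Keep tokens that are not on the exclusion list and whose frequency
    rank falls outside the estimated known-vocabulary range."""
    out = []
    for w in tokens:
        if w in KNOWN_WORDS:
            continue  # user has explicitly marked this as known
        if FREQUENCY_RANK.get(w, float("inf")) <= ESTIMATED_VOCAB:
            continue  # common enough that the user likely knows it
        if w not in out:
            out.append(w)
    return out

print(words_to_learn(["他们", "蝴蝶", "彷徨", "蝴蝶"]))  # → ['蝴蝶', '彷徨']
```

Words missing from the frequency table are treated as rare (rank infinity), so they always get suggested, which errs on the safe side.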
Then, as you said, for languages closely related to your native tongue it may not be suitable. That's an interesting point I hadn't thought of; if you have any idea how it could be tackled, let me know.
Hope this helps. I've mentioned quite a few ideas that I haven't implemented yet (retrieving appropriate examples being the main one), but the core (getting a list of vocabulary for pre-learning) is quite straightforward and works well.
| Serpent Octoglot Senior Member Russian Federation serpent-849.livejour Joined 6597 days ago 9753 posts - 15779 votes 4 sounds Speaks: Russian*, English, FinnishC1, Latin, German, Italian, Spanish, Portuguese Studies: Danish, Romanian, Polish, Belarusian, Ukrainian, Croatian, Slovenian, Catalan, Czech, Galician, Dutch, Swedish
| Message 34 of 35 09 March 2012 at 3:30pm | IP Logged |
A good idea could be to look for glossaries/shared decks for books you're reading.
| frenkeld Diglot Senior Member United States Joined 6943 days ago 2042 posts - 2719 votes Speaks: Russian*, English Studies: German
| Message 35 of 35 10 March 2012 at 5:04pm | IP Logged |
A. Arguelles has two videos (video 1, video 2) discussing vocabulary acquisition through extensive reading.
He states that the most suitable texts for acquiring vocabulary from context are those where you already know 98 percent of the vocabulary, i.e. one in every fifty words is new. This is interesting because the figure from a study I saw quoted on this forum several years ago was 95 percent, or one new word in twenty, which is quite a different density. Watching this prerequisite creep makes me wonder just how effective the approach really is.
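The gap between the two figures is easy to see in absolute terms. Assuming a nominal 300-word page (my figure, not from either source), the two coverage levels work out to:

```python
# Convert vocabulary-coverage percentages into unknown-word density.
PAGE_WORDS = 300  # assumed typical page length, for illustration

for coverage in (0.98, 0.95):
    one_in = round(1 / (1 - coverage))          # 1 new word in N
    per_page = round(PAGE_WORDS * (1 - coverage))
    print(f"{coverage:.0%} known -> 1 new word in {one_in}; "
          f"~{per_page} per {PAGE_WORDS}-word page")
# 98% known -> 1 new word in 50; ~6 per 300-word page
# 95% known -> 1 new word in 20; ~15 per 300-word page
```

So the stricter 98 percent figure more than halves the number of lookups per page, which is exactly why it feels like a much higher prerequisite.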
Edited by frenkeld on 19 March 2012 at 8:04pm