
Is comprehension measurable?

Serpent
Octoglot
Senior Member
Russian Federation
serpent-849.livejour
Joined 6408 days ago

9753 posts - 15779 votes 
4 sounds
Speaks: Russian*, English, FinnishC1, Latin, German, Italian, Spanish, Portuguese
Studies: Danish, Romanian, Polish, Belarusian, Ukrainian, Croatian, Slovenian, Catalan, Czech, Galician, Dutch, Swedish

 
 Message 73 of 211
13 August 2014 at 11:49pm
s_allard wrote:
Reading skills tend to be very low because they are not formally educated in the language.

I'd go even further and suggest that an overwhelming number of heritage learners could turn into speakers with just some reading, probably even less than what the Super Challenge requires. Obviously, those who CAN watch TV/movies/etc. but DON'T actually do so should also listen/watch more.
1 person has voted this message useful



s_allard
Triglot
Senior Member
Canada
Joined 5241 days ago

2704 posts - 5425 votes 
Speaks: French*, English, Spanish
Studies: Polish

 
 Message 74 of 211
13 August 2014 at 11:53pm
luke wrote:
s_allard wrote:
"It showed that the sophistication in vocabulary use of high-proficiency
candidates was characterized by the fluent use of various formulaic expressions, often composed of high-
frequency words, perhaps more so than any
noticeable amount of low-frequency words in their speech."

For those readers who still believe that high-proficiency means large vocabulary, here you have it from the expert
himself.


The modifiers Nation uses are important in representing his position accurately.

This post is a little case study in understanding or misunderstanding. We read that the modifiers - what I understand to be the words made bold by luke, not by Nation - are important in representing Nation's position accurately.

What did Nation say and how did I represent it? Well, we know what Nation said; he said it himself. It seems pretty clear to me. If I summarize or paraphrase it, he is saying that high-proficiency candidates demonstrated sophisticated vocabulary use, but that this sophistication seems to stem more from the use of formulaic expressions than from the use of low-frequency, i.e. uncommon, words.

My interpretation of this is that the sophistication of speech did not necessarily lie in using a large number of different words but in the "sophisticated" use of common words in formulaic expressions. This, of course, is what I have been saying for years.

In my understanding of English grammar, I wouldn't call "characterized" and "perhaps" modifiers. "Noticeable"
modifies amount and I understand what that means. Frankly, I don't understand what this post is saying.

Edited by s_allard on 13 August 2014 at 11:55pm

1 person has voted this message useful



luke
Diglot
Senior Member
United States
Joined 7016 days ago

3133 posts - 4351 votes 
Speaks: English*, Spanish
Studies: Esperanto, French

 
 Message 75 of 211
14 August 2014 at 1:51am
s_allard wrote:
In my understanding of English grammar, I wouldn't call "characterized" and "perhaps"
modifiers. "Noticeable" modifies amount and I understand what that means. Frankly, I don't understand what
this post is saying.


Grammatical modifiers
2 persons have voted this message useful



s_allard
Triglot
Senior Member
Canada
Joined 5241 days ago

2704 posts - 5425 votes 
Speaks: French*, English, Spanish
Studies: Polish

 
 Message 76 of 211
14 August 2014 at 3:29am
emk wrote:
s_allard wrote:
For those people who think these tests are easy and that one can easily guess one's way through, I'll remind them that there are 60 questions that must be answered in 90 minutes. And the texts get progressively harder, that is, more complex, more subtle and longer. Note that there are 10 pilot or dummy tests that are used for research by the test makers.

I like this kind of "functional comprehension" test, because it focuses on what I consider important: Can a learner
use the language in the real world? But there are some problems.

For example, a B2 student with good test-taking skills should be able to answer concrete questions about most
real-world texts, and do so with a high degree of accuracy. A C1 student should be able to do all that, and do it
quickly. This means that test makers need to use very sophisticated texts, and ask students tricky questions that
require careful analysis of the text.

The usual problem is that, past a certain point, the texts become so dense, and the questions become so subtle,
that even native speakers have to think long and hard before choosing the correct answer. Even when preparing
for the DELF B2, there were sample questions where both my tutor (a university-educated native speaker who
knew the exam format very well) and I would agree on an answer, but the official answer key would require a
different one.

...

This is a problem of test design. When we say that the texts get progressively harder or more difficult, it is not that they are necessarily more dense - I'm not sure what that means. In fact, they become more complex: the grammatical constructions are more complicated and use less frequent combinations, the words are longer and more abstract with more suffixes and prefixes, and there is more formulaic language, different levels of formality and more varied word order. There are certainly more nuances and subtleties that are often hard to distinguish.

To return to something of a leitmotiv for me, and something that Nation pointed out, the sophistication of speaking - and I think this also applies to writing and reading - does not lie only in the number of different words but in how the words are used.
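
To make "more complex" slightly more concrete, here is a rough Python sketch of the kind of crude proxies one could compute for a text - average word length, average sentence length and the share of words outside a high-frequency list. It is purely illustrative; the tiny high-frequency set is a made-up placeholder, not any official list, and these proxies are not a validated measure of complexity.

import re

def complexity_profile(text, high_frequency):
    # Crude proxies only: average word length, average sentence length,
    # and the share of words outside a given high-frequency list.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z'-]+", text.lower())
    low_frequency = [w for w in words if w not in high_frequency]
    return {
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "low_frequency_share": len(low_frequency) / len(words),
    }

# Made-up example; a real list might be the most common 2,000 word families.
high_frequency = {"the", "cat", "sat", "on", "mat", "it", "its"}
print(complexity_profile(
    "The cat sat on the mat. It contemplated its ontological predicament.",
    high_frequency))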

Edited by s_allard on 14 August 2014 at 3:35pm

1 person has voted this message useful



shapd
Senior Member
United Kingdom
Joined 5960 days ago

126 posts - 208 votes 
Speaks: English*
Studies: German, Italian, Spanish, Latin, Modern Hebrew, French, Russian

 
 Message 77 of 211
15 August 2014 at 11:04pm
@ s_allard
The paper was Hu and Nation. I am not sure what your objection to it is. They asked questions to elicit the main ideas of the story, in two different ways. The results agreed with each other. There was also a reasonable agreement between how much the subjects thought they had understood and the formal tests, showing that people can judge how much comprehension they have, at least roughly. Interestingly, they did not quiz words but removed words from the text by replacing them with nonsense, rarest first. A fiction piece with a strong narrative was chosen deliberately to give the subjects the best chance of following the thread.

They cannot be criticised for choosing learners with a good grasp of grammar, since that was not what they wanted to test. They also made sure that they knew the common words. They quote another researcher, Laufer, who has apparently looked at the effects of other variables apart from vocabulary.

They showed a linear relation between percentage unknown (in this case actually missing) words and test scores, allowing them to arrive at the 98% figure.
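
As a rough illustration of that kind of calculation (a sketch with invented numbers, not Hu and Nation's actual data or analysis), you can fit a line to pairs of coverage and comprehension scores and solve for the coverage at which the fitted line reaches whatever cutoff counts as adequate comprehension. In Python:

import numpy as np

# Invented (coverage %, comprehension score %) pairs - illustrative only,
# not the data reported by Hu and Nation.
coverage = np.array([80.0, 90.0, 95.0, 100.0])
scores = np.array([40.0, 55.0, 65.0, 75.0])

slope, intercept = np.polyfit(coverage, scores, 1)   # least-squares line

adequate = 70.0   # assumed cutoff for "adequate" comprehension
needed = (adequate - intercept) / slope
print(f"score ~ {slope:.2f} * coverage + {intercept:.2f}")
print(f"coverage needed for a score of {adequate:.0f}: about {needed:.1f}%")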

As to the greater difficulty of assessing active knowledge, Nation makes that claim in all his books, as does every other book I have read on the subject. It is straightforward, if tedious, to make up frequency lists and ask people whether they recognise the words on them. If you are sophisticated, you can add fake words to exclude guessing and ask subjects to rate the words on a scale of how well they think they know them. But how can you tell how many of these would ever be used in practice? The CEFR tests do show differences, partly depending on vocabulary, but they are still coarse-grained, with a limited number of categories.
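
A minimal sketch of such a checklist test, with one simple adjustment for guessing (the items and the adjustment are my own illustration, not any specific published instrument):

# Invented items for a yes/no vocabulary check; the pseudowords exist only
# to estimate how often a subject says "yes" to words they cannot know.
real_words = ["house", "window", "scaffold", "parsimony"]
pseudowords = ["florp", "trazzle", "quimber"]

responses = {"house": True, "window": True, "scaffold": True, "parsimony": False,
             "florp": True, "trazzle": False, "quimber": False}

hit_rate = sum(responses[w] for w in real_words) / len(real_words)
false_alarm_rate = sum(responses[w] for w in pseudowords) / len(pseudowords)

# One simple adjustment: subtract the false-alarm rate from the yes-rate.
# More elaborate corrections have been proposed; this is only an illustration.
adjusted = max(hit_rate - false_alarm_rate, 0.0)
print(f"yes-rate {hit_rate:.2f}, false alarms {false_alarm_rate:.2f}, adjusted {adjusted:.2f}")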

I think we both agree that it is possible to function with a limited vocabulary by making good use of formulaic phrases, islands and avoidance techniques, but for ease of function and full comprehension of what you read or listen to, I am convinced a large vocabulary is needed. That is the essence of Benny's approach: make the best use of what you do know, but still study like crazy for the higher levels.
3 persons have voted this message useful



Cavesa
Triglot
Senior Member
Czech Republic
Joined 4820 days ago

3277 posts - 6779 votes 
Speaks: Czech*, FrenchC2, EnglishC1
Studies: Spanish, German, Italian

 
 Message 78 of 211
16 August 2014 at 11:13pm
I think several things are being mixed up here. In short: I don't think it is possible to objectively and accurately measure comprehension, but I don't think we need it, as our guesses are perfectly usable for our purposes: 1) self-assessment of our own progress; 2) informing people who use a similar subjective scale so that they can give us advice.

I believe it doesn't matter whether we say 70%, 7 on a scale from 1 to 10, fairly good but without details, etc. We aren't competing against anyone, and we aren't trying to get any profit from the guess, just to make ourselves understood. No one would be foolish enough to put 90% comprehension on their CV.

I believe the number of understood words and understanding the text are two different things, as not only grammar but also lots of context of various kinds comes into the process. And the active skills are a totally separate matter that has nothing to do with the assessment of comprehension, as it is quite common, and not only for heritage speakers, to have a huge gap between passive and active skills. It is quite common for people where I live to write in their CVs that they "know German passively" or something like that, and it is quite common among htlalers as well to learn some skills faster than others. Such a gap could be the subject of several other threads.

The exams are totally different from our self-assessment because their purpose is different and the method is different. And there are large differences between various kinds of tests. I believe s_allard that the Canadian tests are really well thought out and reflect real comprehension much more than test-taking abilities, but that is not true of all such exams. DELF B2 was a lot about test-taking skills and about having gone through preparatory books and past papers. I know people who had much better scores than I did but were much worse at understanding texts other than the typical tested kind.

So, when I am self-assessing my comprehension, I take several things into account:
1. Understanding the meaning is the key. Did I understand nuances and details? Was I ever unsure about the meaning of a sentence?
2. Were there words I didn't know passively? I have a much larger passive vocabulary than an active one, so it is important to keep the difference in mind.
3. How comfortably and fast did I read? Was it much different from my stronger languages? This is a point no one has mentioned in the thread so far, but I find it important. Do you understand automatically, or do you spend time translating things in your head?

And I end up with two possibilities for the self-assessed result:
1. A guessed percentage, though it could be a number on a scale from 1 to 10 or whatever.
2. A worded comment.
Both are self-assessed and based on my feeling about the text and my comprehension. And both can be misunderstood. When I want more objective feedback, I'll pay for formal testing, knowing what it can and cannot offer me.

My scale, which I find to be quite similar to the scale used by many other htlalers:
around 10%..... I understand small bits here and there.
under 50%...... I understand most of the main ideas but hardly any details.
70-80%......... I understand many details, but still far from everything, and I read slowly and without much comfort.
95%-99%........ I understand everything; the unknown words are perfectly clear from the context and there are few of them - maybe five to ten words in a whole 400-page book that are totally new and not clear from context - and reading is just as comfortable and fast as reading in my native language.
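
If anyone wants to apply a scale like this consistently, it is easy enough to write down as a lookup. The little Python sketch below just encodes the bands above as I use them - my own subjective thresholds, not an official scale - and percentages between the bands are simply not covered.

def describe(self_assessed_percent):
    # My own subjective bands from the scale above; percentages that fall
    # between the bands are not covered and return None.
    bands = [
        (0, 15, "small bits here and there"),
        (15, 50, "most main ideas, hardly any details"),
        (70, 80, "many details, slow and not very comfortable"),
        (95, 100, "everything, unknown words clear from context"),
    ]
    for low, high, label in bands:
        if low <= self_assessed_percent <= high:
            return label
    return None

print(describe(75))   # many details, slow and not very comfortable
print(describe(88))   # None - between the bands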
4 persons have voted this message useful



s_allard
Triglot
Senior Member
Canada
Joined 5241 days ago

2704 posts - 5425 votes 
Speaks: French*, English, Spanish
Studies: Polish

 
 Message 79 of 211
17 August 2014 at 4:44am
shapd wrote:
@ s_allard
The paper was Hu and Nation. I am not sure what your objection to it is. They asked questions to elicit the main
ideas of the story, in two different ways. The results agreed with each other. There was also a reasonable
agreement between how much the subjects thought they had understood and the formal tests, showing that
people can judge how much comprehension they have, at least roughly. Interestingly, they did not quiz words
but removed words from the text by replacing them with nonsense, rarest first. A fiction piece with a strong
narrative was chosen deliberately to give the subjects the best chance of following the thread.

They cannot be criticised for choosing learners with a good grasp of grammar, since that was not what they wanted to test. They also made sure that they knew the common words. They quote another researcher, Laufer, who has apparently looked at the effects of other variables apart from vocabulary.

They showed a linear relation between percentage unknown (in this case actually missing) words and test scores,
allowing them to arrive at the 98% figure.

As to the greater difficulty of assessing active knowledge, Nation makes that claim in all his books, as does every other book I have read on the subject. It is straightforward, if tedious, to make up frequency lists and ask people whether they recognise the words on them. If you are sophisticated, you can add fake words to exclude guessing and ask subjects to rate the words on a scale of how well they think they know them. But how can you tell how many of these would ever be used in practice? The CEFR tests do show differences, partly depending on vocabulary, but they are still coarse-grained, with a limited number of categories.

I think we both agree that it is possible to function with a limited vocabulary by making good use of formulaic phrases, islands and avoidance techniques, but for ease of function and full comprehension of what you read or listen to, I am convinced a large vocabulary is needed. That is the essence of Benny's approach: make the best use of what you do know, but still study like crazy for the higher levels.


I don't understand this post. I have no objection to the paper by Hu and Nation. After all, I went to the trouble of searching for the paper since shapd did not give the reference. I then read it and even quoted from it extensively. Here is what I wrote:
Here is what I wrote:

"The authors conclude: "This study shows that the density of unknown words has a marked effect on reading
comprehension...This (research) provides support for the position taken by Hirsch and Nation (1992) namely that
learners need to have around 98% coverage of words of a text to be able to read for pleasure."

It should be noted that the text in question was a piece of fiction.

What I found interesting in the conclusion was the next paragraph that begins as follows:

"This conclusion must not be interpreted as say that with 98% coverage of the vocabulary no other skills or
knowledge are needed to gain adequate comprehension. All of the subjects in this study were readers in their
first language, had considerable knowledge of English grammar, were experienced in reading English, and
brought considerable background knowledge to their reading. These all contribute to their skill in
comprehending text and account for some learners reading the 95% and 90% versions getting high scores.
However, as readability studies show, vocabulary knowledge is a critical component in reading."

I certainly agree with the observations of this study. As I have mentioned, word-counting methods, of which Paul Nation is certainly the best-known exponent, can have all kinds of uses, especially for language teaching. Nation's work has focused on the vocabulary size necessary for comprehension and not on measuring comprehension."

As I said, I agree with the observations of this study. I don't agree with shapd's interpretation of what Hu and
Nation said. In my opinion, the fundamental conclusion of this study is "learners need to have around 98%
coverage of words of a text to be able to read for pleasure." I agree with this. But I don't see what this has to do
with the thread. Hu and Nation did not seek to measure comprehension. They attempted to see how much
vocabulary is necessary to comprehend a text. They arrive at the figure of 98% of the vocabulary of the text. This
is in line with most studies of vocabulary. I agree with this.

But this is not what the thread is about. The thread is about how to measure comprehension and attempts to do
so using the word-counting method. This is not what Hu and Nation did. Let me quote Hu and Nation:

"The purpose of this study is to see what percentage coverage of text is needed for unassisted reading for
pleasure, where learners are able to read with the interruption of looking up words. "

How did Hu and Nation measure comprehension? Interestingly, they used methods similar to what the Canadian government uses in its tests, i.e. multiple choice and cued recall. They did not count the words known or unknown. What they pointed out, I'll repeat, is that to understand a text well, you have to know most of the words in the text. Duh. Is this a big surprise?
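
For concreteness, here is a rough Python sketch of what such scoring amounts to (purely illustrative - the questions and answer key are invented, not taken from Hu and Nation or from the Canadian tests): the comprehension score comes from answers to questions about the text, not from counting known words.

# Invented answer key and responses - not items from any real test.
answer_key = {"q1": "b", "q2": "d", "q3": "a", "q4": "c"}
candidate = {"q1": "b", "q2": "d", "q3": "c", "q4": "c"}

correct = sum(candidate.get(q) == a for q, a in answer_key.items())
print(f"comprehension score: {correct / len(answer_key):.0%}")   # 75% here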

What I conclude from this study is that for this kind of text (fiction), enjoying reading is pretty much an all-or-nothing proposition: you either know 98% of the words or you don't bother reading the text. I agree, and I also agree that there are different degrees of comprehension. In my case, it's simple: All, Some, Nothing.



1 person has voted this message useful



s_allard
Triglot
Senior Member
Canada
Joined 5241 days ago

2704 posts - 5425 votes 
Speaks: French*, English, Spanish
Studies: Polish

 
 Message 80 of 211
17 August 2014 at 5:12am
Cavesa wrote:
I think several things are being mixed up here. In short: I don't think it is possible to objectively and accurately measure comprehension, but I don't think we need it, as our guesses are perfectly usable for our purposes: 1) self-assessment of our own progress; 2) informing people who use a similar subjective scale so that they can give us advice.

I believe it doesn't matter whether we say 70%, 7 on a scale from 1 to 10, fairly good but without details, etc. We aren't competing against anyone, and we aren't trying to get any profit from the guess, just to make ourselves understood. No one would be foolish enough to put 90% comprehension on their CV.

...
And I end up with two possibilities for the self-assessed result:
1. A guessed percentage, though it could be a number on a scale from 1 to 10 or whatever.
2. A worded comment.
Both are self-assessed and based on my feeling about the text and my comprehension. And both can be misunderstood. When I want more objective feedback, I'll pay for formal testing, knowing what it can and cannot offer me.

My scale, which I find to be quite similar to the scale used by many other htlalers:
around 10%..... I understand small bits here and there.
under 50%...... I understand most of the main ideas but hardly any details.
70-80%......... I understand many details, but still far from everything, and I read slowly and without much comfort.
95%-99%........ I understand everything; the unknown words are perfectly clear from the context and there are few of them - maybe five to ten words in a whole 400-page book that are totally new and not clear from context - and reading is just as comfortable and fast as reading in my native language.


I have no problem with this position. I use a three-category scale, but I admit that others may want to make more nuanced assessments. Here Cavesa proposes four categories. I like it and may adopt it myself.

The reason I prefer not to use percentages myself is that they convey a false sense of accuracy. I should point
out that in Cavesa's scale there is nothing for 81%-94%. Is this just an oversight?

My beef in this debate is with the observer who says that if they know 30% of the words that means they
understand 30% of the text.

And there is still the problem of interpreting these statements. When I read " I understand small bits here and
there", my question is: What's the value of small bits here and there? I don't see the difference between that and
nothing. Where's the enjoyment, if it's a work of fiction, or in the case of non-fiction, where's the actionable
information?

And suppose I understand 50%. Again, where's the enjoyment? Or, in the case of my Canadian government tests, I will probably not pass at the required level.

And how can we compare other people's percentages? In another thread, a poster claims that he used to understand Spanish TV at 50% and that after a few years' break he could now understand 90%. What I do understand is that the person saw an improvement. I don't disagree with that. I disagree with the assessment of 50%, picked out of a hat, and 90%, also picked out of a hat. I would have said: before, I could understand some Spanish; now I understand Spanish completely.


1 person has voted this message useful








