
The jigsaw of the day…

In a course, we have a final quiz of 24 random questions. These questions are drawn from six different question categories (4 questions x 6 categories = 24 random questions).

After the course ends, we want to see which questions were the most difficult, in order to improve our course.

How will we find them?

Don’t rush to answer… If we go through the Attempts link, we can see the attempts made by each user; however, the questions are not the same for everyone. Q3 may be question A for Paul, but Q3 may be question Z for Simon… remember, the questions are random, so each attempt is different.

So what about this…

  • Quiz administration -> Results -> Statistics

Now, according to the Moodle docs, “This report gives a statistical (psychometric) analysis of the quiz, and the questions within it.” http://docs.moodle.org/26/en/Quiz_statistics_report

WOW! I was really impressed with the psychometric analysis.

However, I would need a psychometrician or statistician to translate the data…

Luckily, the Moodle docs are really powerful, so with some more research I found this article http://docs.moodle.org/dev/Quiz_statistics_calculations and this one http://docs.moodle.org/dev/Quiz_report_statistics

Scanning the pages and skipping the mathematics, here are the gems I collected that make sense to me:

Facility index: This is the average score on the item, expressed as a percentage (the mean score of students on the item). The higher the facility index, the easier the question is (for this cohort of students).

F (%)        Interpretation
5 or less    Extremely difficult, or something wrong with the question.
6-10         Very difficult.
11-20        Difficult.
21-34        Moderately difficult.
35-64        About right for the average student.
65-80        Fairly easy.
81-89        Easy.
90-94        Very easy.
95-100       Extremely easy.
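
Since the definition is just a mean expressed as a percentage, here is a minimal sketch of the calculation (my own illustration with made-up attempt data, not Moodle’s internal code):

```python
def facility_index(scores, max_mark):
    """Mean score on one question, as a percentage of its maximum mark."""
    return 100 * sum(scores) / (len(scores) * max_mark)

# Hypothetical data: 10 attempts at a question marked out of 2.
attempt_scores = [2, 1, 2, 0, 2, 1, 2, 2, 1, 2]
print(f"Facility index: {facility_index(attempt_scores, max_mark=2):.1f}%")
# -> Facility index: 75.0%
```

With these invented scores the question would land in the “fairly easy” band of the table above.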

Intended question weight: How much this question was supposed to contribute to determining the overall test score.

Random guess score (RGS): This is the mean score students would be expected to get for a random guess at the question. Random guess scores are only available for questions that use some form of multiple choice. All random guess scores are for deferred feedback only and assume the simplest situation, e.g. for multiple response questions students will be told how many answers are correct. Values above 40% are unsatisfactory, and show that True/False questions must be used sparingly in summative tests.
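
To make that True/False warning concrete, here is a tiny illustration (my own helper, not a Moodle function) of the expected score for blind guessing on a single-answer multiple choice question:

```python
def random_guess_score(num_options):
    """Expected percentage mark for a blind guess, one correct answer."""
    return 100 / num_options

print(random_guess_score(4))  # 25.0 -- a typical 4-option multiple choice
print(random_guess_score(2))  # 50.0 -- True/False: well above the 40% threshold
```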

Discrimination index: This is the correlation between the weighted scores on the question and those on the rest of the test. It indicates how effective the question is at sorting out able students from those who are less able. The results should be interpreted as follows…

Index         Interpretation
50 and above  Very good discrimination
30-49         Adequate discrimination
20-29         Weak discrimination
0-19          Very weak discrimination
Negative      Question probably invalid
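
If you want to see roughly what this correlation looks like, here is a sketch using plain Pearson correlation (Moodle’s actual calculation weights the scores slightly differently, and all the data below is invented):

```python
from statistics import correlation  # Python 3.10+

question = [1, 0, 1, 1, 0, 1, 0, 1]        # per-student marks on this question
total    = [18, 9, 20, 15, 11, 17, 8, 19]  # per-student whole-test scores
rest = [t - q for t, q in zip(total, question)]  # rest of test, question excluded

index = 100 * correlation(question, rest)
print(f"Discrimination index: {index:.0f}")  # ~92: very good discrimination
```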

Discrimination efficiency: This statistic attempts to estimate how good the discrimination index is relative to the difficulty of the question.

An item which is very easy or very difficult cannot discriminate between students of different ability, because most of them get the same score on that question. Maximum discrimination requires a facility index in the range 30% – 70% (although such a value is no guarantee of a high discrimination index).

The discrimination efficiency will very rarely approach 100%, but values in excess of 50% should be achievable. Lower values indicate that the question is not nearly as effective at discriminating between students of different ability as it might be and therefore is not a particularly good question.
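
My reading of the calculations page, boiled down to a sketch: the efficiency compares the observed question/rest-of-test covariance with the best covariance you could get if the same question marks were handed out in perfect rank order with the rest-of-test scores. This is a simplification for illustration, not a port of Moodle’s code:

```python
from statistics import covariance  # Python 3.10+

def discrimination_efficiency(question, rest):
    """Observed covariance as a percentage of the best achievable covariance."""
    observed = covariance(question, rest)
    # Best case: same question marks, reassigned in rank order with rest-of-test.
    best = covariance(sorted(question), sorted(rest))
    return 100 * observed / best

question = [1, 0, 1, 0, 1, 1, 0, 1]        # invented per-student marks
rest     = [17, 9, 19, 14, 11, 16, 8, 18]  # invented rest-of-test scores
print(f"Discrimination efficiency: "
      f"{discrimination_efficiency(question, rest):.0f}%")  # ~79%
```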

Well, I think I could spend day after day investigating these data. I am actually going to check this in practice, by comparing a number of quizzes from different runs (same course, same quiz, different class) to see whether it’s the same questions that students get wrong… Yay! I love this work!
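
For what it’s worth, here is the kind of comparison I have in mind, sketched with pandas (all question names and numbers are made up):

```python
import pandas as pd

# Per-question facility indices exported from two runs of the same quiz.
run_a = pd.DataFrame({"question": ["Q1", "Q2", "Q3"], "facility": [82, 31, 55]})
run_b = pd.DataFrame({"question": ["Q1", "Q2", "Q3"], "facility": [79, 28, 61]})

merged = run_a.merge(run_b, on="question", suffixes=("_a", "_b"))
# Flag questions below "about right" (facility < 35) in both runs.
hard_both = merged[(merged["facility_a"] < 35) & (merged["facility_b"] < 35)]
print(hard_both)  # -> Q2 was difficult in both classes: worth reviewing
```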


Anna Krassa

I am an educator specialising in distance education, living and working in Greece. I have been working with HRDNZ for a long time, since I completed my Moodle Teacher Certificate in 2007. My key role is to manage the Moodle Educator Certificate program for HRDNZ.

3 thoughts on “The jigsaw of the day…”

  • I am really glad you found it helpful, Paula and Stuart!

    It took me a day, and seeing your comments I somehow feel it was more worthwhile than I initially thought.

    Stuart, I think we could organise this research for MCCC 2.7… I am really fascinated by research… well, it takes some time, but isn’t it great to see how things are connected?

    Reply
  • Thanks, Anna for doing all that research. It was on my list for the summer. I appreciate the help!

    Reply
    • Wow Anna, you have gone further into this area than I have.

      I’m interested by “Discrimination efficiency”, and hadn’t really thought about this before.

      I’m thinking that the Moodle Course Creator Certificate (MCCC) exam could use this.
      It might be interesting to see how our question banks, and our exam selection of 60 random questions, stand up to an in-depth analysis like this?
