The research behind Eedi

Learn about the foundations of what we do and the research that supports it


The use of Diagnostic Questions both in and out of the classroom, and the reporting tools of Eedi, are based on solid empirical foundations. They have been carefully designed using key principles from research into memory, cognitive science, formative assessment, and parental engagement to ensure they have the greatest possible impact on your teaching and your students’ learning. Our Director of Education, Craig Barton, explains more 👇

What are Diagnostic Questions?

Diagnostic Questions are multiple choice questions designed so that each incorrect answer reveals a specific misconception. Wylie and Wiliam (2006) found that, when designed carefully, Diagnostic Questions are a key tool in a teacher’s formative assessment strategy. We have worked with experienced question writers from the likes of AQA, Edexcel, OCR, White Rose, AET, CEM and Oxford University to ensure our questions are of the highest quality, so that you can learn from your students’ answers.

Should I use Diagnostic Questions in lessons?

In short, yes. I use at least three Diagnostic Questions every day, in every lesson, no matter what class I am teaching. For Hodgen and Wiliam (2006) and Wiliam (2011), identifying where the learner is in terms of their understanding of a concept, and responding accordingly, is one of the most important parts of teaching. As Wiliam (2016) explains, we need to start from where the learner is, not where we would like them to be. For me, there is no quicker or more accurate way to do this than to use Diagnostic Questions. Whether I am projecting a question on the board and asking students to vote with their fingers, or setting it as part of our online quizzes, the data is fast, reliable and informative.

When should I use Diagnostic Questions in lessons?

Originally I would only ask Diagnostic Questions at the start of lessons, but the more I have learned about misconceptions (e.g. Wylie and Wiliam, 2006) and the key role of retrieval in the learning process (e.g. see Bjork, 2011), the more I have expanded their role, as follows:

  • At the beginning of the lesson to assess prerequisite knowledge, and to tap into the benefits of priming and the Testing Effect.

  • In the middle of the lesson as a hinge-point to determine where to take the lesson next.

  • At the end of the lesson to inform my future planning.

Can Diagnostic Questions help me to plan my lessons?

Absolutely! Have you ever asked a question in class and been completely surprised by one of your students’ answers? I know I have. And I also know that it is incredibly difficult to work out where the answer came from - in other words, the cause of the misconception - and how best to resolve it, whilst simultaneously trying to deal with the million other things that a teacher needs to attend to in a lesson. I am now much better prepared for this, because when I am planning each lesson I log into Eedi, find quizzes on the topic I'm teaching and view the answer statistics from thousands of other students who have already answered those questions. I can look at the data behind which incorrect answers were the most popular, and read the misconception explanations to better understand their causes. I can then think carefully about how I am going to respond to these misconceptions if they come up in my lessons. What explanation will I use? Which supporting resources will I show my students? Which examples will I pick? Will I actually confront this misconception head-on? In short, I can do what Lemov (2015) calls Plan for Error - the majority of my thinking can be done before the lesson, when I have more time, a calm environment, and colleagues to call upon for help, instead of in the heat of the lesson, with 30 expectant faces staring up at me.

Why do we ask students to explain their answers?

Each time a student answers one of our online multiple choice diagnostic questions, they are prompted to give an explanation. This has two benefits. Firstly, it gives us teachers far greater insight into our students’ thought processes - have they really understood the concept, or just taken an educated guess? Secondly, the students themselves benefit via the Self-Explanation Effect. This is where learners who attempt to establish a rationale for the solution steps by pausing to explain the examples to themselves appear to learn more than those who do not. Numerous studies have demonstrated this effect, notably Chi (2000), Siegler (2002) and Rittle-Johnson (2006). But can’t we just rely on students explaining the concept to themselves? Well, according to Renkl (1997), no we cannot. His study found that the majority of learners do not spontaneously engage in successful self-explanation strategies, and so miss out on this highly effective way of learning, which is why our system prompts students to provide an explanation.

Finally, what happens if students provide an incorrect explanation? Well, as their teacher you will be able to see that immediately using our powerful data insights engine, and it also turns out that it might not matter so much. Chi (2000) argues that incorrect self-explanations, so long as they are pointed out, lead to cognitive conflict, which can lead to long-term learning. We provide students with model explanations that they can compare to their own in order to check everything tallies up.

Why don’t we immediately tell students the right answer?

When students complete an online quiz, they have a couple of attempts at a question (if they got it incorrect the first time around) before we offer them a lesson. This is based on one shrewd observation, and one surprising research finding. The observation comes from my first podcast interview with Dylan Wiliam, who noted that the only good feedback is that which makes students think. This is supported by Soderstrom and Bjork (2015), who find that delaying or even reducing feedback can have a long-term benefit for students’ learning. By simply telling students the correct answer immediately, we deny them this opportunity to contemplate and consider. By delaying this support, we prompt students to think, which is likely to be beneficial to their learning.

The exam is not multiple choice, so what is the point in giving students multiple choice questions to practise?

There are many studies that demonstrate the Testing Effect (e.g. Bjork and Bjork, 2011), whereby testing acts as a tool of learning, not just of assessment. Marsh et al (2007) find that the positive benefits of testing are still achieved via multiple choice questions, and that these benefits are not limited to simple definition or fact-recall multiple choice questions, but extend to those that promote higher-order thinking. Indeed, there is evidence that the specific use of multiple choice diagnostic questions may actually improve future performance on non-multiple choice exams. Following my podcast interview with Robert and Elizabeth Bjork, I contacted Robert for extra clarity on this point, as it seemed so important. He replied: “In total, the positives of using good multiple choice diagnostic questions are huge. In most of the experiments that have been carried out the final test is a cued recall test—without alternatives available—but initial multiple choice testing has far greater benefits for that test than does initial testing that matches such cued recall testing.” Little et al (2012) provide an explanation for this. They found that good diagnostic multiple choice questions actually have an advantage over other question types, in that to get the question correct students must consider not only why their choice of answer is right, but also why the other answers are wrong. They explain: “our findings suggest that when multiple-choice tests are used as practice tests, they can provide a win-win situation: specifically, they can foster test-induced learning not only of previously tested information, but also of information pertaining to the initially incorrect alternatives. This latter advantage is especially important because, typically, few if any practice-test items are repeated verbatim on the subsequent real test.”

Are the distractors dangerous - can they actually cause students to develop misconceptions?

This is something I have long worried about! Marsh et al (2007) do find that multiple-choice distractors (or “lures” as the authors refer to them) may become integrated into subjects’ more general knowledge and lead to erroneous reasoning about concepts. However, they also find that misconceptions usually exist before the student sees the multiple choice question. They explain: “rarely did students select the correct answer on the initial test and then produce a lure on the final test. Nor were students likely to select Lure A on the first test and then produce Lure B on the final test.” So, it seems that the majority of the time diagnostic questions do not cause misconceptions, but in fact highlight misconceptions that were already present. Moreover, diagnostic questions have the advantage of highlighting the specific nature of the misconception so it can be corrected.

Butler and Roediger (2008) offer more reassurance. They conducted an experiment where subjects studied passages and then received a multiple choice test with immediate feedback, delayed feedback, or no feedback. Feedback in this sense meant informing the subjects whether they were right or wrong, not the lengthy individualised process we teachers normally associate with the term. In comparison with the no-feedback condition, both immediate and delayed feedback increased the proportion of correct responses and reduced the proportion of intrusions (i.e. lure responses from the initial multiple-choice test) on a delayed cued recall test. There is the obvious danger of generalising these findings to the world of maths, but the implication for teachers from this study is very simple - if using multiple choice questions, ensure students are told whether they are right or wrong in order to reduce any negative effects from exposure to plausible incorrect answers. We can do this in class via a discussion of students’ choices, and on our online platform students are shown clearly what the correct answer is when they review their answers to a quiz.

Surely you don’t learn as much from automated marking as you do from marking by hand?

I used to spend approximately three hours marking a set of 30 homeworks, and by the end of it what had I learned? Often, not a great deal. Marking by hand is time consuming, and it is incredibly difficult to build up an overall picture of both a class’s and individual students’ understanding and specific areas of weakness. A 2016 report from the Education Endowment Foundation and Oxford University found many of the marking practices employed by teachers to have little positive effect. By automating a time-consuming part of the marking process and presenting data clearly and concisely using our powerful data analytics engine, we make the marking not only quicker, but also more effective. We automatically identify trends, students who are struggling, problem questions and recurring misconceptions. This gives teachers more precious time to spend planning great lessons, writing great feedback, or maybe just getting their energy back.

Is it possible to give written feedback to students based on their answers?

Writing feedback can be incredibly time-consuming, and very difficult to get right (e.g. see Butler, 1987). Wiliam (2016) argues that feedback is only successful if students use it to improve their performance, and Hattie and Timperley (2007) explain that feedback is one of the most powerful influences on learning and achievement, but that its impact can be either positive or negative. We have tried to radically improve the process of giving feedback by allowing teachers to direct it based on misconceptions. By automatically grouping together all the students in a class who answered, for example, C, and by providing an explanation to teachers as to the likely misconception behind that choice of answer, we allow teachers to provide high-quality, task-focussed feedback more quickly than ever before. We believe this will not only save teachers approximately 8 hours per week, but also allow their feedback to be more beneficial than ever.

References

  • Bjork, Elizabeth L., and Robert A. Bjork. "Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning." Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society 2 (2011): 59-68.

  • Bjork, Robert A. "On the symbiosis of remembering, forgetting, and learning." Successful Remembering and Successful Forgetting: A Festschrift in Honor of Robert A. Bjork (2011): 1-22.

  • Butler, Andrew C., and Henry L. Roediger. "Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing." Memory and Cognition 36.3 (2008): 604-616.

  • Butler, Ruth. "Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interest, and performance." Journal of Educational Psychology 79.4 (1987): 474.

  • Chi, M. T. H. "Self-explaining: The dual processes of generating inference and repairing mental models." In R. Glaser (Ed.), Advances in Instructional Psychology: Educational Design and Cognitive Science, Vol. 5 (2000): 161-238.

  • Education Endowment Foundation. "A marked improvement? A review of the evidence on marking." (2016).

  • Hattie, John, and Helen Timperley. "The power of feedback." Review of Educational Research 77.1 (2007): 81-112.

  • Hodgen, Jeremy, and Dylan Wiliam. "Mathematics inside the black box: Assessment for learning in the mathematics classroom." (2006).

  • Lemov, Doug. Teach Like a Champion 2.0: 62 Techniques That Put Students on the Path to College. John Wiley & Sons, 2015.

  • Little, Jeri L., et al. "Multiple-choice tests exonerated, at least of some charges: Fostering test-induced learning and avoiding test-induced forgetting." Psychological Science 23.11 (2012): 1337-1344.

  • Marsh, Elizabeth J., et al. "The memorial consequences of multiple-choice testing." Psychonomic Bulletin and Review 14.2 (2007): 194-199.

  • Renkl, Alexander. "Learning from worked-out examples: A study on individual differences." Cognitive Science 21.1 (1997): 1-29.

  • Rittle-Johnson, Bethany. "Promoting transfer: Effects of self-explanation and direct instruction." Child Development 77.1 (2006): 1-15.

  • Siegler, Robert S. "Microgenetic studies of self-explanation." Microdevelopment: Transition Processes in Development and Learning (2002): 31-58.

  • Soderstrom, Nicholas C., and Robert A. Bjork. "Learning versus performance: An integrative review." Perspectives on Psychological Science 10.2 (2015): 176-199.

  • Wiliam, Dylan. Embedded Formative Assessment. Solution Tree Press, 2011.

  • Wiliam, Dylan. "Five “key strategies” for effective formative assessment." NCTM Research Brief (2007).

  • Wiliam, Dylan. "The secret of effective feedback." Educational Leadership 73.7 (2016): 10-15.

  • Wylie, E. C., and Dylan Wiliam. "Diagnostic questions: Is there value in just one?" Annual Meeting of the National Council on Measurement in Education, San Francisco, CA, 2006.
