AI quiz generation.
Interestingly enough, I had been looking at AI quiz generation systems earlier this year. I tested a few and, at face value, they looked promising; however, I soon started to realise they had a number of limitations.
I ended up trialling QuestGen.AI after reading a number of online reviews. I have identified the pros and cons of the platform below:
Advantages
The platform can produce large volumes of question/answer sets, making it possible to generate a substantial question bank.
Disadvantages
Among the platforms I tested, QuestGen.AI appeared to be the most effective. However, to meet my specific needs, I had to subscribe to a paid version.
While the platform shows promise, I remain uncertain about its reliability for summative assessments in life-critical certifications or highly complex content areas, such as public safety and rescue. It is unclear whether QuestGen.AI can fully satisfy the rules of evidence and principles of assessment in these contexts.
An improvement would be to expand the complexity settings to a scale of 1-10, aligning with the ten levels of the Australian Qualifications Framework (AQF). This would help ensure that questions and answers are generated in accordance with the Rules of Evidence, particularly validity. By creating questions directly from the core text, such as the course manual, the platform minimises unexpected content for students, thereby adhering to the principles of fair assessment. Using the current text also helps maintain the currency of the assessment. Additionally, the platform's capacity to generate a large volume of questions supports the sufficiency of assessment by providing a comprehensive question bank for each learning objective.
However, a limitation of this and similar online assessment tools is the potential reduction in assessment flexibility, which could pose challenges for individuals who are not computer literate. Reasonable adjustments should therefore be considered for these learners.
The potential of this technology is significant. It can already generate question banks and on-demand quizzes, enhancing the validity and reliability of assessments. When new core texts are published, they can be uploaded to automatically update the examination bank, which supports the currency of assessments.
Despite this potential, it is important to note that while AI can reduce educator workload in assessment design, it cannot replace it. Questions and answers must still be reviewed for technical accuracy and compliance with assessment standards. The platform may be particularly useful for formative assessments, helping students prepare for summative evaluations through learning progress checks.
In conclusion, AI quiz generators hold potential, but their suitability is context-dependent. They may be effective for simple formative assessments and can alleviate educator workload. However, for highly technical content, I recommend using AI as a starting point, ensuring there is sufficient capacity for quality assurance before implementation.