Best Practice: Foster Critical Engagement and Evaluation

Instructors should design learning experiences that foster critical engagement with AI tools, including how and when to use AI appropriately. They should develop guidelines for verifying suggestions produced by generative AI, provide training on how and why hallucinations occur and what steps learners can take to mitigate them, and identify LLMs trained on trusted, human-vetted data that learners can run queries against.

Applicable Principles (the why)

Generative AI Literacy

  • Instructors and learners need to understand how generative AI works.
  • Learners need to know how and when to use AI appropriately.

Complementing Creativity

  • Learners should use AI to build on their own original, creative ideas and experiences, rather than replace them.

Keeping Humans in an Active Role

  • Instructors and learners should balance AI and human work.
  • Instructors should facilitate learning with AI.

Practical Application (the what)

Knowing When to Use AI: For the classroom context, instructors should share their expectations for when to use AI and when not to, as discussed in the Best Practice: Notify Learners of Expectations for Using Generative AI to Support Learning section. Beyond the classroom, instructors can also provide guidelines that learners can apply to new situations on their own.

Evaluating AI Output: Instructors should design assignments that guide learners in evaluating AI-generated information, including verifying its accuracy by connecting AI-generated content to published research.

Background information on AI hallucinations should be provided as part of the information-vetting process. In addition to defining approved citation styles (APA, MLA, etc.), course guidelines should emphasize how to verify citations to published research, such as requiring DOIs that link to the full text. As generative AI tools evolve, instructors should identify what sources each LLM is trained on and give learners a brief overview of each model's strengths and weaknesses. Learners should be aware that summaries may include hallucinations even from models trained on trusted data.
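One simple first step instructors can teach is checking whether a citation contains a well-formed DOI at all. The sketch below is a minimal illustration, not a complete verifier; it assumes a regex based on Crossref's recommended DOI-matching pattern, which catches most modern DOIs but not every edge case.

```python
import re

# Heuristic pattern for modern DOIs (assumption: adapted from Crossref's
# recommended matcher; it covers most but not all registered DOIs).
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(citation: str) -> list[str]:
    """Return any DOI-like strings found in a citation."""
    return DOI_PATTERN.findall(citation)

# A citation with a DOI yields a match; one without a DOI yields none.
print(extract_dois("Smith, J. (2023). Example study. https://doi.org/10.1234/abcd.5678"))
# → ['10.1234/abcd.5678']
print(extract_dois("Jones, A. (2022). A paper with no DOI at all."))
# → []
```

A format match is only the first step: hallucinated citations can include well-formed but nonexistent DOIs, so learners should still resolve each DOI (for example, via doi.org) and confirm that the cited work exists and says what the AI claims.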

In addition to accuracy and reliability, generative AI output should be evaluated for relevance, usability, coherence, consistency, and fairness (see “Evaluating the Output” from Tips for Interacting with GenAI). 

Resources