
Week 2 Reflection: Trusting but Verifying AI-Generated Learning Support

Inquiry Connection

In my previous reflection, I explored how students are already using AI tools to support their learning, often through self-directed experimentation without formal guidance. Building on that discussion, this week I wanted to reflect more specifically on how learners evaluate AI-generated explanations and study materials, and what digital literacy skills are required to use these tools effectively without over-reliance.


What?

This week, I reflected on how AI tools are used as study supports, particularly when generating explanations or practice material. As in the approaches discussed in my previous post, AI is often used to break down lecture content or to generate additional practice materials that resemble course content.

In my own experience, AI-generated explanations are often clear, structured, and confident, which makes them feel trustworthy at first glance. However, I have also noticed that clarity does not always guarantee accuracy. Some explanations may oversimplify concepts, omit important assumptions, or present information in a way that does not fully align with how material is taught or assessed in a course.

To explore alternative approaches, I also watched a video in which someone with a background in medicine demonstrates how they use AI to support studying. In this example, AI is primarily used to organize textbook notes, summarize information, and generate multiple-choice and short-answer questions focused on definitions. The emphasis of this strategy is on efficiency and organization, allowing the learner to spend less time formatting notes and more time reviewing material.

Video by Tom Watchman on YouTube
  • This video demonstrates an AI-supported study strategy shared by someone with a background in medicine, focusing on summarization, organization, and generating definition-based practice questions from textbook notes. While this approach may be well-suited to concept-heavy disciplines, it also highlights how the effectiveness of AI-assisted study strategies depends on disciplinary context and learning goals.

So What?

Comparing these approaches highlighted how differently AI can be used depending on learning goals and disciplinary context. Using AI to generate explanations or practice materials can support active learning and repetition, but it also requires learners to evaluate whether the content is accurate and appropriate. When AI produces content confidently, the responsibility to verify correctness rests almost entirely with the user.

The background of the person in the video helped contextualize why this strategy focused heavily on summarization and definition-based practice. In concept-heavy disciplines such as medicine, where large volumes of factual and conceptual information must be retained, this approach may be especially effective. As a computer science student, however, I found that this strategy may not translate directly to my own learning needs, which often emphasize applied problem-solving and working through scenarios rather than memorization.

While similar prompting techniques could theoretically be adapted to a computer science textbook, the value of AI-generated material still depends on whether it supports hands-on reasoning rather than surface-level recall. This comparison reinforced for me that effective AI use is highly context-dependent and that digital literacy involves selecting strategies that align with specific learning objectives rather than applying tools uniformly.

This reflection also raises broader concerns about digital literacy. Without clear guidance on AI use, learners are left to develop their own habits through trial and error. While this can foster independence, it also increases the risk of over-reliance, particularly when AI-generated materials are treated as authoritative rather than as starting points for learning.


Now What?

Reflecting on these examples has made me more intentional about how I want to integrate AI into my own learning. Moving forward, I want to treat AI as a support tool rather than an authority by:
  • verifying explanations against lecture notes or textbooks,
  • using AI to practice problem-solving rather than to generate final answers, and
  • remaining aware of when AI use may limit my own engagement with the material.

These reflections also contribute to my broader inquiry into responsible AI use. Understanding how learners evaluate AI-generated content is an important step toward identifying best practices for using tools like Microsoft Copilot efficiently and responsibly, particularly in academic and professional environments where accuracy, accountability, and judgment matter.

Featured photo by Growtika on Unsplash