AI in Education: RAG, Information Literacy, and Responsible Use
In my previous reflections and inquiry work, I focused on how AI tools are used and evaluated, especially in learning and professional contexts. Reflecting on Week 6’s course content, including the discussion with my professor and Dr. Normand Roy, I found that many of the ideas presented did not change my perspective so much as support and help explain observations I had already made while working with AI tools.
What?
In the Week 6 session, Dr. Roy discussed how generative AI is being used in education and how both students and instructors are adapting to it. One key concept he introduced was Retrieval-Augmented Generation (RAG), where AI systems are given additional context or access to external information to improve the accuracy of their responses.
He also talked about “deep research” features in tools like ChatGPT and Google Gemini, where the model retrieves and processes information from many sources before generating a response. This involves collecting data from multiple websites, organizing it, and synthesizing it into an answer. In comparison, Microsoft Copilot currently has more limited capabilities in this area.
Another major topic was information literacy, and how students should not lose their ability to think critically and evaluate information even when AI tools are available. He also briefly mentioned the environmental impact of AI, noting that large-scale data retrieval and repeated queries require significant computational resources.
So What?
What stood out to me is that these ideas closely matched what I had already noticed during my inquiry work. For example, I had found that AI tools are not very reliable when generating answers from scratch, which aligns with the idea behind RAG. Providing more context leads to better results, which shows that the quality of the output depends heavily on the input.
- To better understand this concept, I looked at an explanation in a post from Amazon Web Services (AWS), which clearly describes how retrieval-augmented generation works by combining external data retrieval with AI-generated responses. Instead of relying only on what the model already “knows,” RAG lets the system reference additional sources to improve accuracy and relevance. This reinforced why providing more context leads to better outputs, matching what I observed during my own inquiry work.

- The diagram shows how a user’s prompt is first used to retrieve relevant information from external sources, which is then added as context before the AI generates its final response, helping reduce hallucinations and improve accuracy.
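That retrieve-then-generate flow can be sketched in a few lines of Python. This is only a toy illustration of the idea, not a real system: the document store is a hardcoded list, the "retrieval" step is naive word overlap rather than a proper search index, and the final prompt would be handed to an actual language model in practice.

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by naive word overlap with the query; return the best matches.
    A real RAG system would use a vector index or search engine here."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, context_docs):
    """Prepend the retrieved passages so the model answers from that context,
    not from its internal knowledge alone."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return f"Use the following context to answer.\nContext:\n{context}\n\nQuestion: {query}"

# Hypothetical mini document store for illustration.
documents = [
    "RAG combines external data retrieval with AI-generated responses.",
    "Information literacy means critically evaluating sources.",
    "Large-scale retrieval requires significant computational resources.",
]

query = "How does RAG improve accuracy?"
prompt = build_prompt(query, retrieve(query, documents))
```

The key point the sketch makes is the one from the session: the model's answer quality depends on what gets placed into the prompt, so better retrieval and richer context lead directly to better outputs.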
I also found it interesting that tools like ChatGPT and Gemini were described as having stronger research capabilities. This matches my own experience, where they performed better than Microsoft Copilot during my inquiry work. While Copilot is useful in certain contexts, it did not seem as strong when it came to generating detailed or well-supported responses.
The discussion around information literacy also reinforced something I had already been thinking about. Even though AI can make it easier to get answers quickly, there is a risk that users stop questioning those answers. Since AI outputs often sound confident, it becomes easy to accept them without verifying whether they are correct. This connects directly to my earlier reflections on over-reliance and the importance of critical evaluation.
The environmental impact was something I had not really considered before. If deep research involves retrieving and processing large amounts of data from many sources, then repeated or inefficient prompting could have a larger impact than expected. This adds another layer to responsible AI use, where efficiency and intentional use also matter.
Now What?
Reflecting on these ideas, I want to be more intentional in how I use AI tools. One thing I plan to focus on is improving how I structure prompts by including more context upfront, rather than relying on multiple follow-up questions. This should help produce more accurate responses while also being more efficient.
I also want to continue developing my information literacy skills by treating AI outputs as starting points rather than final answers. This means verifying important information and staying actively engaged with the material instead of relying on AI to do the thinking.
Finally, considering the environmental aspect has made me more aware of how I use AI overall. While the impact of a single query may be small, repeated and inefficient use can add up. Being more deliberate with prompts and reducing unnecessary queries is something I want to keep in mind moving forward.