Digital interface with "ask anything" prompt.

Week 1 Reflection: How Do Students Actually Use AI?

Inquiry Focus

As AI tools become more common in both academic and professional settings, I am interested in exploring how tools like Microsoft Copilot can be used efficiently and responsibly without over-reliance. To begin this inquiry, I wanted to understand how students are already using AI in their everyday learning and study practices.


What?

This week, I had an informal conversation with a friend, a third-year computer science student, about how he uses AI as part of his studies. He shared that he primarily uses free tools such as ChatGPT and interacts with them in a highly conversational way, often treating the tool as if he were talking to another person.

One of his main study strategies involves giving the AI practice problems from course materials and asking it to generate similar problems to help him prepare for midterms and exams. He also uses AI to break down complex concepts from his lectures, asking follow-up questions until the material makes sense.

He also mentioned that he has not followed any formal guides or tutorials on how to leverage AI tools; instead, his approach has developed entirely through personal experimentation. While he finds this method effective, he noted that the tool sometimes produces inaccurate or hallucinated responses, which he has learned to be cautious about.

Another point he raised was that AI provides a low-pressure learning environment. Being able to ask numerous or basic questions without feeling embarrassed or judged makes it easier for him to engage with difficult material and keep asking until he genuinely understands it.


So What?

This conversation highlighted both the strengths and limitations of AI as a learning support tool. Generating similar practice problems can encourage active learning, repetition, and deeper engagement with course material. Similarly, using AI to break down concepts allows students to learn at their own pace and revisit explanations as needed, which can be especially helpful when lecture time or office hours are limited.

At the same time, relying on AI without structured guidance raises important digital literacy concerns. AI tools often present information with confidence, even when the content is incomplete or incorrect. Without clear guidelines or training, students must rely on their own judgment to evaluate accuracy and relevance. This places a significant responsibility on the user, particularly when AI-generated explanations or practice problems closely resemble authoritative learning resources.

Even for a technically inclined student, this self-directed approach reflects a broader issue in education: students are increasingly expected to navigate AI tools independently. While this can foster autonomy, it also increases the risk of developing inefficient habits or over-relying on AI at the expense of critical thinking and independent problem-solving.


Now What?

This experience has shaped how I want to approach my inquiry throughout this course. Moving forward, I want to explore what responsible and effective AI use looks like in both academic and professional environments, especially when formal guidelines are limited or unclear.

In particular, I am interested in identifying practical strategies for verifying AI-generated content, understanding when human judgment should take priority, and developing personal boundaries around AI use. These insights will help inform how tools like Microsoft Copilot can be leveraged to enhance productivity and learning without replacing critical thinking or accountability.

As AI continues to be integrated into everyday learning and work, developing strong digital literacy around intentional and responsible use feels increasingly important.

Featured photo by Zulfugar Karimov on Unsplash.