[Featured image: a whiteboard with "AI?" written on it]

Week 7 Reflection: Developing a Personal Approach to AI and Digital Literacy

What?

Throughout this course, I explored different aspects of AI use and digital literacy across academic and professional contexts. I started by looking at how students use AI tools, often in informal and self-directed ways. I then reflected on how important it is to evaluate AI-generated content, especially since outputs can sound confident even when they are incorrect.

I also explored how expectations change in professional environments, where tools like Microsoft Copilot are used in workflows that require higher levels of accuracy, accountability, and awareness of data privacy. In addition, I looked at how AI can support accessibility through features such as captions, transcripts, and summarization, while also recognizing that these tools are not always reliable or equally accessible.

Finally, I reflected on ethical concerns such as over-reliance and the importance of maintaining independent thinking when using AI tools.


So What?

In my Week 2 reflection, I focused on the importance of verifying AI-generated outputs and how easily users can trust information that appears clear and confident. In Week 3, I explored how this becomes more important in professional settings, where mistakes can have real consequences. Looking across these reflections, one of the main things I learned is that AI itself is not the problem or the solution. The impact depends on how it is used. AI can improve efficiency, accessibility, and productivity, but it can also create problems if users rely on it without thinking critically.

A key takeaway for me is that digital literacy today goes beyond just knowing how to use technology. It includes being able to evaluate AI-generated outputs, recognize when something might be incorrect, and understand when verification is necessary. It also involves being aware of ethical considerations such as accountability and data privacy.

I also realized that context matters a lot. In academic settings, over-reliance can limit learning and reduce opportunities to develop problem-solving skills. In professional settings, it can lead to mistakes that have real consequences. In accessibility contexts, AI can help reduce barriers, but it can also introduce new ones if it is not used carefully.

Another important takeaway is that clear communication still matters, even when AI is involved. For example, learning about frameworks like SCIPAB showed me that having a clear structure when explaining ideas makes a real difference, especially with complex topics.


Now What?

Moving forward, I want to be more intentional about how I use AI tools. Instead of treating AI as something that gives final answers, I want to treat it as something that supports my thinking.

To guide this, I developed a simple approach for how I want to use AI:

| Principle | How I Will Apply It |
| --- | --- |
| Verify outputs | Check important information against reliable sources |
| Use AI as support | Use AI to assist with ideas, not replace my thinking |
| Maintain accountability | Take responsibility for anything I submit or produce |
| Protect data | Avoid sharing sensitive or confidential information |
| Consider context | Adjust how I use AI depending on the situation |

This approach reflects how I plan to use AI in both my coursework and future professional work. As AI continues to become more common, I think the most important thing is to stay aware, think critically, and use these tools responsibly.

Featured photo by Nahrizul Kadri