[Featured image: a model of a brain, representing cognitive thinking]

AI Workflows, Tools, and the Shift Toward Dialogue

In my previous reflection, I explored how concepts like retrieval-augmented generation (RAG), information literacy, and prompting affect the accuracy and reliability of AI outputs, based on a recorded discussion from Week 6. Revisiting another Week 6 video, this time a fireside chat between my professor and Lucas Wright, I found that his discussion of AI workflows and tools built on those ideas by showing how they are applied in real-world practice. In particular, his emphasis on evaluation, prompting, and judgment reinforced the importance of using AI intentionally rather than relying on it passively.


What?

During the fireside chat, Lucas Wright explained how he integrates AI tools into his daily workflow. He described using ChatGPT with customized GPTs designed for specific tasks, allowing him to streamline repetitive or structured work.

He also demonstrated NotebookLM, where users can upload their own materials and interact with that content using prompts. This allows the AI to generate responses based only on the provided information, making outputs more focused and relevant.

Another workflow he described involved using Google Gemini for deep research to gather information, then using Napkin AI to convert that information into diagrams for workshops. This showed how multiple AI tools can be combined across different stages of a task.

He also emphasized that using AI regularly has helped him develop new skills, particularly around evaluating outputs, designing workflows, and understanding when it is appropriate to automate tasks.


So What?

What stood out to me is that AI use is not just about using individual tools, but about building workflows and making decisions about how those tools are used together. This connects directly to my earlier reflections on digital literacy, where I focused on evaluating outputs and avoiding over-reliance.

Lucas highlighted that one of the most important skills is judgment. This includes deciding when it is appropriate to use AI, which tool to use, and how much to rely on it. This matches what I observed in my own inquiry, where the effectiveness of AI depends heavily on how intentional the user is.

The idea of cognitive offloading also stood out. While AI can reduce effort by automating tasks, there is a trade-off if users begin to rely on it too much and stop engaging with the material. This reinforces the importance of maintaining critical thinking, which I discussed in my earlier reflections.

  • This concept is also supported by research by Ginto Chirayath, K. Premamalini, and Jeena Joseph, who describe AI as a ā€œdouble-edgedā€ tool that can either support coping by reducing cognitive load or contribute to over-reliance and reduced introspection if users depend on it too heavily.

This made me think more carefully about the balance between the benefits and risks of using AI tools. The key trade-offs can be summarized as follows:

| Aspect of AI Use | Benefit | Risk if Overused |
| --- | --- | --- |
| Information retrieval | Quickly gathers and summarizes large amounts of data | Users may stop verifying information or thinking critically |
| Workflow automation | Saves time and reduces repetitive tasks | Over-reliance may reduce engagement with the task |
| Cognitive offloading | Reduces mental effort and decision fatigue | Can weaken problem-solving and independent thinking |
| Prompt-based interaction | Allows flexible and interactive learning | Poor prompting can lead to misleading or incomplete results |
| Tool integration (multiple AI tools) | Improves efficiency across different stages of work | Complexity may lead to blind trust in outputs across tools |

Another important point was the shift from static content to dialogue-based interaction. Instead of focusing on documents or websites, users are now interacting with AI through prompts and conversations. This suggests that prompting and communication are becoming essential digital literacy skills, not just technical abilities.


Now What?

Reflecting on this discussion, I want to be more intentional about how I use AI tools by thinking in terms of workflows rather than individual tasks. Instead of relying on a single tool, I can consider how different tools might be combined to support different stages of a process, such as research, analysis, and presentation.

I also want to continue developing my ability to evaluate outputs and make decisions about when AI should or should not be used. This includes being mindful of privacy, avoiding unnecessary automation, and making sure that I remain actively engaged with the work.

Overall, this reflection reinforced the idea that responsible AI use is less about the tools themselves and more about the judgment, critical thinking, and decision-making behind how they are used.


Featured photo by Robina Weermeijer
