Here are five key takeaways.
1. Change your mindset when approaching AI
The core takeaway from the session is foundational to how we approach using AI: it’s about your mindset. AI shouldn’t just be used for producing outputs per your command; instead, it should help you expand your thinking and understanding – ask it to explore topics, uncover connections, or challenge your current thinking.
Copilot, for example, can be particularly powerful for discovery and retrieval. It can quickly search large collections of research, documents, or past files to help users surface relevant insights they might otherwise not consider.
The goal is to learn more about a problem and explore new ideas – the mindset shift is to collaborate with AI as a thinking partner.
2. Ask exploration questions instead of direct commands
While a change in mindset is important, practical techniques can also enhance how you use AI. One simple tip from the webinar: start your requests with “How might I…”
For example:
How might I approach this research problem?
How might I structure this analysis?
How might I explore this topic further?
This framing opens the floor for multiple approaches and perspectives, helping users break out of their own assumptions and biases.
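The reframing above can be sketched as a small helper. This is a minimal illustration, not something from the webinar – the function name and phrasing template are assumptions for demonstration purposes:

```python
# Sketch: reframing a direct command as an exploration question.
# The helper name and template are illustrative, not a prescribed format.

def as_exploration(task: str) -> str:
    """Turn a direct task statement into a 'How might I...' prompt,
    which invites the model to offer multiple approaches."""
    task = task.strip().rstrip(".")
    # Lowercase the leading verb so it reads naturally after "How might I".
    return f"How might I {task[0].lower() + task[1:]}?"

print(as_exploration("Structure this analysis."))
# -> How might I structure this analysis?
```

The point is less the code than the habit: before sending a prompt, check whether it reads as an order (“Structure this analysis”) or an invitation (“How might I structure this analysis?”).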
3. Practice “model empathy”
Another key concept introduced in the webinar was model empathy.
Large language models (LLMs) are extremely capable; however, they don’t know the context you’re working in, and they lack the common sense that seems straightforward to humans. To work effectively with them, think about the problem from the model’s perspective.
This means asking yourself questions like:
What information does the model have access to?
What context does it need to understand the task?
What details am I assuming that the model doesn’t know?
Providing that context helps the AI produce far more useful responses, tailored to your situation and needs.
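One way to make the questions above concrete is to package that context into the prompt itself. The sketch below is illustrative – the field names (background, audience, constraints) and wording are assumptions, not a fixed schema from the webinar:

```python
# Sketch: bundling the context only you know into the prompt,
# answering the "model empathy" questions explicitly.
# Field names and layout are illustrative assumptions.

def build_prompt(task: str, background: str, audience: str,
                 constraints: list[str]) -> str:
    """Assemble a prompt that states background, audience, and
    constraints before the task, so the model isn't guessing."""
    lines = [
        f"Background: {background}",
        f"Audience: {audience}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Task: {task}",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarise the key findings for the steering committee.",
    background="Customer-churn research across three markets.",
    audience="Non-technical executives.",
    constraints=["One page maximum", "Plain language, no jargon"],
)
print(prompt)
```

Even if you never script your prompts, running through those fields mentally before writing one has the same effect.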
4. Be clear about what you want and mean
Another important aspect of prompting that may be overlooked is clarifying definitions and providing structure. As with context and common sense, the model doesn’t have all the information you have about the task it’s been given.
For example, terms such as “attribute”, “benefit” or “insight” may have different meanings depending on context. If the model doesn’t know how you’re defining them, the output may not align with what you need.
Giving the model definitions, templates, or frameworks doesn’t just help guide its responses; it also puts guardrails on its thinking, ultimately producing more consistent outputs that serve your needs.
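Using the example terms above, a definitions block can be prepended to the task. The specific definitions and the helper below are illustrative assumptions, not a template from the webinar:

```python
# Sketch: pinning down ambiguous terms before the model sees the task.
# These particular definitions are illustrative examples only.

DEFINITIONS = {
    "attribute": "a factual product characteristic (e.g. battery life)",
    "benefit": "the customer outcome an attribute enables",
    "insight": "a non-obvious conclusion supported by the data",
}

def with_definitions(task: str, definitions: dict[str, str]) -> str:
    """Prepend working definitions so the model's use of each term
    matches yours, making outputs more consistent."""
    defs = "\n".join(f'- "{term}" means {meaning}'
                     for term, meaning in definitions.items())
    return f"Use these definitions:\n{defs}\n\nTask: {task}"

print(with_definitions(
    "List one attribute, its benefit, and one insight.", DEFINITIONS))
```

The same pattern works for templates: paste the output structure you want (headings, column names) into the prompt alongside the definitions.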
5. Move beyond prompts – Think systems that serve your needs
Prompting is only the starting point…
To truly unlock AI’s potential, you should begin thinking about systems, workflows and specialised tools tailored to your specific needs, rather than relying on a single general-purpose model.
You’ll see the biggest gains when AI becomes more than a tool for mundane tasks. Once it’s integrated into your existing workflows, learning, decision-making, and problem-solving all become more multi-dimensional and connected.