In this workshop, you will learn how to write effective prompts for generative AI models such as ChatGPT, Copilot, Claude, and Gemini. We start by investigating how these models actually work: how they were trained, what goes on behind the scenes when we interact with them, and why they respond the way they do. We also look at the tasks Large Language Models (LLMs) currently excel at, as well as the areas where they still struggle.
Next, the hands-on part of the workshop gives an overview of key prompting strategies for getting the most out of LLMs. You will be introduced to prompt and context engineering, grounded in the latest scientific insights. You will also work with GenAI tools yourself to experience how you can teach the underlying models new tasks, improve their reasoning, and reduce their tendency to hallucinate.
Finally, we wrap up with some important challenges that come with using generative AI, including its environmental impact, copyright, and the risk of plagiarism.
Learning outcomes
After attending this workshop, you will...
Competences
An important part of preparing for any further step in your professional career is becoming (more) aware of the competences you have developed and/or want to develop. In this workshop, the following competences from the UHasselt competency overview are actively addressed:
For whom?
When and where?
Preparation?
Please don't forget to bring your laptop to the session!
Registration?
Acknowledged as?