Do you work with sensitive text data, and does that make you hesitant about using AI models such as ChatGPT? Are you unsure about what really happens to your data, and are you concerned about privacy and security? And most importantly: are you looking for a safer alternative for processing IP-sensitive texts such as research proposals?
This course, brought to you by Dr. Pieter Fivez (TEXTUA), explores the use of open-source large language models as a more secure alternative for working with sensitive data. But does their performance match that of the likes of ChatGPT? Through a lecture and a hands-on workshop, you will gain insight into the potential of current large language models and their implications for data safety, and you will tackle case studies that you can afterwards apply to your own data.
Learning outcomes
After attending this workshop, you will...
Competences
An important part of preparing for any further professional step is becoming (more) aware of the competences you have developed and/or want to develop. In this workshop, the following competences from the UHasselt competency overview are actively addressed:
For whom?
When and where?
Preparation?
Before the training, participants should have access to:
Microsoft Copilot
ChatGPT (e.g. via chatgpt.com)
Gemini (e.g. via gemini.google.com)
Claude (e.g. via https://claude.ai)
Ollama is available through the Software Center. To install the software, your laptop must be connected to the UHasselt network. If you are not on campus, you must first connect via EduVPN. Only when your device is connected to the UHasselt network (directly or through EduVPN) will the software become visible in the Software Center.
Important tips:
Registration?
Acknowledged as?