The Good, the Bad, and the Ugly
2025-02-01
__Disclaimer__
I don’t claim to have the answers.
I simply want you to be aware of these topics,
form your own opinions, and discuss the issues openly.
Screenshots of Business Insider’s disturbing conversation with “Eliza,” a chatbot from Chai Research.
My personal beliefs:
| Model | Source | Restrictions |
|---|---|---|
| ChatGPT by OpenAI | Closed source | Strongly moderated and curated |
| Grok by xAI | Closed source | Fewer restrictions |
| Llama models by Meta | Open source | Can be fine-tuned for any purpose |
Keep in mind that nobody is sharing the most important part: HIGH-QUALITY DATA.
> @github copilot, with "public code" blocked, emits large chunks of my copyrighted code, with no attribution, no LGPL license. For example, the simple prompt "sparse matrix transpose, cs_" produces my cs_transpose in CSparse. My code on left, github on right. Not OK. pic.twitter.com/sqpOThi8nf
>
> — Tim Davis (@DocSparse) October 16, 2022
Non-contracted generative AI applications generally do not demonstrably comply with applicable privacy and copyright legislation. Their use by Dutch central-government organisations (or on their behalf) is therefore not permitted in cases where there is a risk that legislation will be violated, unless the provider and the user can demonstrably comply with the applicable laws and regulations.
Passed in 2024, main effects:
- Can LLMs be used if they are trained on copyrighted and/or GDPR-protected data?
- Should LLM usage be constrained by ethical guidelines and content filters?
- Can LLMs be trusted if hallucinations are an inherent part of these systems?
Discuss these questions within your group for 5-10 minutes, and then we will review the results in a plenary session.