
There’s an interesting test you can run with LLMs like ChatGPT. Ask them to name 10 philosophers: a few months ago the response would have been exclusively male and white. Ask the question now and these models have learnt that this isn’t a satisfactory answer – the latest list I got is below:
- Thomas Aquinas (1225-1274) – A medieval philosopher known for his synthesis of Christian theology and Aristotelian philosophy.
- John Stuart Mill (1806-1873) – A British philosopher and utilitarian, known for his works on ethics, political philosophy, and the harm principle.
- Albert Camus (1913-1960) – An existentialist philosopher and writer, known for his exploration of the absurdity of human existence in works like “The Stranger” and “The Myth of Sisyphus.”
- Jean-Paul Sartre (1905-1980) – A French existentialist philosopher and playwright, famous for his ideas on existentialism, freedom, and existential angst.
- Laozi (6th century BCE) – A Chinese philosopher attributed to the founding of Daoism (Taoism), emphasizing the concept of “the Way” (Dao).
- Ayn Rand (1905-1982) – A Russian-American philosopher known for her philosophy of Objectivism, advocating rational self-interest and limited government.
- Thomas Hobbes (1588-1679) – An English philosopher who wrote “Leviathan,” in which he discussed the social contract and the necessity of strong government.
- Epicurus (341-270 BCE) – An ancient Greek philosopher who founded Epicureanism, which focused on the pursuit of happiness through simple pleasures and the avoidance of pain.
- Hannah Arendt (1906-1975) – A German-American political philosopher known for her work on totalitarianism, authority, and the nature of power.
- William James (1842-1910) – An American philosopher and psychologist who contributed to the fields of pragmatism, psychology, and philosophy of religion.
This came up last week in a session of the Learning Technologies Webinars I was hosting. The implicit bias that LLMs and other AI tools may have built in was discussed in the chat. Some people suggested we need to ask better questions of these kinds of tools to manage out the bias – which means training people to use the tools effectively first.
My concern is that as more people use these tools (and use them poorly) this bias will become more deeply baked in. We might be able to reach and train 10-20% of users; that still leaves 80-90% of users reinforcing the inherent biases in these tools.
Take care.