Three lecturers from the University of Glasgow, writing in Scientific American, have suggested that when ChatGPT or similar AI programs provide false information – or, in their words, ‘make stuff up’ – this should be called ‘bullshit’ rather than ‘hallucinations’.
I posted an item in November last year headed ‘Artificial Intelligence: Hallucinations, Delusions, Confabulations, Lies, or something else?’ I didn’t think of ‘bullshit’.
The authors – lecturers in moral and political philosophy, political theory, and philosophy of science and technology – object to the term ‘hallucinations’, as do I. They object largely because using this term falsely attributes human qualities to AI. They emphasise that the Large Language Models (LLMs) which underlie this form of AI do not ensure ‘that the outputs accurately depict anything about the world’. ‘A well-trained chatbot will produce humanlike text, but nothing about the process checks that the text is true’. The text is based on the probability of words occurring in a particular context, rather than on the real world which the words merely represent. They make the case that ChatGPT is bullshitting all the time, whether it produces a true statement or not, because it doesn’t ‘know’ what the truth is.
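The point about probability rather than truth can be made concrete with a toy sketch. What follows is my own illustration, not anything from the authors and far simpler than how ChatGPT is actually built: a crude bigram model that picks each next word purely from how often words follow one another in its training text, so the more frequent phrase becomes the more probable output regardless of whether it is true.

```python
import random
from collections import defaultdict

# Toy corpus: the "model" only ever sees text, never the world it describes.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese ."
).split()

# Count which word follows which (a crude bigram model).
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start, length=6):
    """Pick each next word by its relative frequency in the corpus alone."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # probability = how often it appeared
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Most runs end in "... made of cheese ." simply because that phrase is more
# frequent in the training text - no step ever checks what the moon is made of.
```

Nothing in that process distinguishes a true continuation from a false one, which is exactly the authors’ point about why ‘hallucination’ is the wrong metaphor.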
The authors refer to a meaning of ‘bullshit’ popularised by the philosopher Harry Frankfurt, who wrote that the bullshitter ‘just doesn’t care whether what they say is true’.
I think the authors (Slater, Humphries and Townsen Hicks), using all the subtle linguistic nuance of a Glasgow kiss, make a good point.
However, I suspect that LLMs are not so dissimilar to human (and other) brains, which enlist a comparable process – making and using associations – though in our case not just associations of words but also of experiences, concepts and emotions.