Lately, I’ve been experimenting with some of these Large Language Model (LLM) artificial intelligence services, particularly ChatGPT. Several readers have taken issue with my categorization of ChatGPT as “artificial intelligence”. The reason, they argue, is that ChatGPT is not really an artificial intelligence system at all. It is a language model that has looked at a massive amount of data and strings words together without any understanding of what they actually mean. Technologically, it has more in common with the grammar checker in Microsoft Word than with HAL from 2001: A Space Odyssey.
You can ask ChatGPT for the difference between apples and bananas, and it will give you a credible response, but under the covers, it has no idea what an apple or a banana actually is.
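To make that point concrete, here is a deliberately tiny sketch of my own (a toy bigram model, not anything resembling ChatGPT’s actual architecture): it “writes” by reusing whichever word followed the previous one in its training text. Real LLMs predict the next word with vastly more sophistication, but the underlying move is the same kind of statistical pattern-matching, with no concept of apples, bananas, or anything else.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model" that generates text by
# picking words that followed the previous word in its training text.
# It has no idea what an apple or a banana is -- only word-adjacency counts.

corpus = (
    "an apple is a fruit . a banana is a fruit . "
    "an apple is red . a banana is yellow ."
).split()

# Record, for each word, every word that immediately followed it.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        candidates = followers.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # pure statistics, zero meaning
    return " ".join(words)

print(generate("apple"))  # e.g. "apple is a fruit . a banana is yellow"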
One reader wrote in to explain that her mother’s medical professional actually had the nerve to ask ChatGPT about medical dosages. ChatGPT’s understanding of what medicine does is about the same as its understanding of what a banana is: zilch.
While some may argue that ChatGPT is a form of artificial intelligence, I find the argument that it is not more compelling. Moreover, calling it artificial intelligence gives us barely evolved monkeys the impression that it actually is some sort of artificial intelligence that understands and can recommend medical dosages. That is bad.
So going forward, I will be referring to things like ChatGPT as an LLM, not artificial intelligence. I would urge you to do the same.
(I want to give particular thanks to reader Lisa, who first made the case to me on this point.)