ChatGPT’s Greatest Accomplishment: Former British Prime Minister Benjamin Disraeli is cited in American author Mark Twain’s autobiography as saying: “There are three kinds of lies: lies, damned lies, and statistics.”
However, it is possible that Twain misattributed the line to Disraeli. Artificial intelligence now brings all three together in one neat package, which is a fantastic advance.
ChatGPT and other generative AI chatbots are trained on massive datasets collected from the Internet to generate the statistically most likely response to a prompt. Their responses are based on the wording, spelling, syntax, and even style of other websites rather than on any understanding of what makes something humorous, important, or truthful.
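A toy illustration of that statistical machinery, using a bigram model (a drastic simplification of the transformer networks behind ChatGPT, but the same underlying idea: predict the most likely next word from training text):

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "massive datasets collected from the Internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Return the statistically most likely continuation seen in training.
    # No understanding of cats or mats is involved, only counting.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" (it followed "the" most often)
```

The model never “knows” anything about cats; it simply reproduces whichever continuation was most frequent in its training data, which is the point the article is making at a vastly larger scale.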
It conveys its responses through what is known as a “conversational interface,” carrying on a back-and-forth with the user by employing context cues and deft gambits. The problem is that it combines statistical pastiche with statistical panache.
Unquestioning but effective
Whenever I speak with another person, a lifetime of interpersonal communication experience comes into play. It is also very hard to respond to a program that speaks like a person without behaving as though one is in a real conversation: listening, reflecting, and then reacting in the context of both parties’ thoughts.
But that is not at all what is happening with an AI interlocutor. Chatbots cannot think, and they lack any form of cognition.
When AI converses with us to convey information, it becomes more convincing than it should be. By imitating human rhetorical strategies, the software projects a reliability, competence, and comprehension far beyond what it is actually capable of.
The interface side of the software promises more than the algorithm side can deliver, and the developers know it. Sam Altman, CEO of OpenAI, the company behind ChatGPT, admits that “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” yet that hasn’t stopped a stampede of companies from rushing to adopt it.
Fact and fiction
Even when the AI gets something wrong, the conversational interface delivers the result with the same assurance and polish. As science-fiction author Ted Chiang has noted, the tool makes mistakes when adding larger numbers because it lacks any actual model of mathematics.
It merely pattern-matches examples of arithmetic from the Internet. It has seen plenty of training text covering simple problems with small numbers, but little covering bigger ones.
It doesn’t “know” the explicit math rules that a 10-year-old could apply. Yet, as this chat with ChatGPT shows, the conversational interface presents its answer with full assurance, no matter how wrong it is.
User: What’s the capital of Malaysia?
ChatGPT: The capital of Malaysia is Kuala Lumpur.
User: What is 27 × 7338?
ChatGPT: 27 × 7338 is 200,526.
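Checking that answer the way a 10-year-old would, with the explicit grade-school rules the model lacks, shows how far off it is (assuming the question in the transcript was the product 27 × 7338):

```python
def grade_school_multiply(a: int, b: int) -> int:
    # Long multiplication as taught in school: multiply by each digit of b,
    # shift by its place value, and add up the partial products.
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        total += a * int(digit) * 10 ** place
    return total

correct = grade_school_multiply(27, 7338)
print(correct)            # 198126
print(correct == 200526)  # False: the chatbot's confident answer is wrong
```

A few lines of explicit rules get the right answer every time; a model that only pattern-matches examples of arithmetic does not.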
In a biography of a famous person, generative AI can mix real facts with fiction, or attribute unpublished research to credible-sounding sources.
That makes a certain kind of sense: papers typically include references, and websites frequently report that prominent people have won honors. ChatGPT is simply doing what it was designed to do, assembling content that may or may not be true.
Computer scientists refer to this as an AI hallucination. We common folk might call it lying.
Intimidating outputs
When I teach design to my students, I stress the importance of matching the output to the process. Conceptual ideas shouldn’t be presented in a way that makes them appear more polished than they actually are, for example by rendering them in 3D or printing them on glossy paper. A rough pencil sketch makes it clear that the concept is unfinished, subject to change, and not expected to address every aspect of a problem.
The same is true of conversational interfaces: when technology “speaks” to us in carefully constructed, grammatically correct, or chatty prose, we tend to assume far more thought and reasoning behind it than is actually there. It’s a trick better suited to a con artist than to a computer.
Since we may already be predisposed to trust whatever the machine says, it is the job of AI developers to manage user expectations. The mathematician Jordan Ellenberg has written about how the mere invocation of algebra can overwhelm our better judgment.
With its hundreds of billions of parameters, AI wields the same power of computational intimidation, and it can use that power to disarm us.
Making the algorithms produce ever-better content matters, but we also need to watch out for the interface’s tendency to make unwarranted claims. Perhaps conversational AI could show a little humility instead of the overconfidence and arrogance that already permeate the tech sector.