On November 30, 2022, OpenAI released a chatbot called ChatGPT. Converging conversational AI with generative AI, ChatGPT can chat, generate creative or functional content, solve math problems, give advice, play games, and even write code.
Unsurprisingly, it's been popular. When it was initially released for free (with registration), ChatGPT reached 1 million users in five days. For comparison, Instagram took two and a half months to reach the same milestone; Twitter took two years.
But to what extent is it true artificial intelligence (AI)? ChatGPT does not seem to display signs of consciousness (yet), but then, science doesn't really understand what consciousness is or how it arises. In any case, generative-AI software such as ChatGPT clearly matters to ongoing AI innovation. Before approaching the matter philosophically, we have to look at it behaviorally.
The Turing Test
Alan Turing, an early British computer scientist who helped crack the Germans' Enigma code during World War II, developed a simple behavioral test for AI. The Turing test involves an interactive chat-like interface; according to Turing, if, over time, based on the responses you get, you cannot tell whether you are conversing with a human or a machine, then what you have is AI.
To do this, the software would need to speak conversationally, understand common idioms, pick up on sarcasm, convert from words to symbols (and vice versa), and learn. AI defined this way would need to be able to, say, solve a word problem at perhaps an 11th-grade level, and then ask for help if it got stuck.
ChatGPT isn't the first AI to try to do this sort of thing. In 1966, Joseph Weizenbaum designed a conversational-AI computer program called ELIZA. ELIZA could recognize certain keywords and forms of speech. As part of ELIZA's programming, Weizenbaum incorporated aspects of Rogerian therapy, in which the therapist waits for the patient to talk and then asks simple questions (for instance, "How do you feel about that?"). When ELIZA couldn't precisely recognize what the user typed, it could default to Rogerian responses.
ELIZA has been reimplemented in a half-dozen pages of BASIC and easily ported to other programming platforms, including JavaScript and HTML; there are free versions of ELIZA that you can experiment with today. It was easy enough to "trick" ELIZA and realize it was a program, primarily because of words that have different meanings in different contexts (such as "son of a" as an insult versus "son" as a literal familial relation). That made ELIZA brittle in conversation. Also, ELIZA could not learn.
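To give a flavor of how ELIZA-style programs work, here is a minimal Python sketch (an illustration of the approach, not Weizenbaum's original code): a few keyword patterns reflect the user's words back, and anything unrecognized falls through to a canned Rogerian prompt.

```python
import random
import re

# A minimal ELIZA-style responder (illustrative only, not Weizenbaum's code):
# match a few keywords, reflect the user's words back, and fall back to
# Rogerian prompts when nothing matches.
RULES = [
    (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father|son|daughter)\b", re.I),
     "Tell me more about your {0}."),
]

ROGERIAN_DEFAULTS = [
    "How do you feel about that?",
    "Please go on.",
    "What does that suggest to you?",
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return random.choice(ROGERIAN_DEFAULTS)

if __name__ == "__main__":
    print(respond("I am worried about my son"))  # keyword rule fires
    print(respond("The weather was odd today"))  # falls back to a Rogerian prompt
```

A sketch this small also shows exactly why ELIZA breaks: any input that slips past the patterns gets a generic prompt, and nothing the user says ever changes the rules.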
A second popular AI implementation is 20Q, a computerized 20 Questions game developed in 1988. The game starts with the computer asking whether the thing the user is thinking of is animal, vegetable, or mineral; it then proceeds with yes-or-no questions. While 20Q wouldn't pass a Turing test, the game can, in a way, learn: 20Q cumulatively builds a decision tree from the answers players provide, and that tree grows as more games are played. As long as players answer honestly, 20Q gets more accurate over time. Like ELIZA, 20Q is available online for free.
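That learning mechanic can be sketched in a few lines. The toy guessing game below (a hypothetical illustration, not the commercial 20Q engine) walks a small decision tree of yes-or-no questions and, whenever it guesses wrong, asks the player for a new question and answer to graft onto the tree, so it grows with every game.

```python
# A toy learning guessing game in the spirit of the decision tree described
# above (a simplified illustration, not the actual 20Q engine).
class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text  # a yes/no question, or a guess if it's a leaf
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None


def play(root):
    node = root
    # Walk the tree, asking questions until we reach a guess.
    while not node.is_leaf():
        answer = input(node.text + " (y/n) ").strip().lower()
        node = node.yes if answer.startswith("y") else node.no

    if input(f"Is it a {node.text}? (y/n) ").strip().lower().startswith("y"):
        print("I win!")
        return

    # Wrong guess: learn by splitting this leaf into a new question node.
    correct = input("What were you thinking of? ")
    question = input(f"Give me a yes/no question that is true for a {correct} "
                     f"but not for a {node.text}: ")
    old_guess = node.text
    node.text, node.yes, node.no = question, Node(correct), Node(old_guess)
    print("Thanks -- I'll remember that for next time.")


if __name__ == "__main__":
    tree = Node("Does it live in water?", yes=Node("fish"), no=Node("dog"))
    while True:
        play(tree)
        if not input("Play again? (y/n) ").strip().lower().startswith("y"):
            break
```

Every wrong guess makes the tree one question deeper, which is the essence of the "gets more accurate over time" behavior, provided players answer honestly.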
ChatGPT combines 20Q's ability to learn with ELIZA's chat interface, and it adds fluent conversation, idioms, sarcasm, and more. Under the hood, ChatGPT relies on a pre-release scan of a significant portion of the Internet to drive its sentence completion. Whereas our phones and search engines can suggest the next word, ChatGPT goes so far as to suggest the next paragraph.
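The real model is a massive neural network, but the "suggest what comes next" idea itself is easy to illustrate. The toy snippet below (a deliberately crude stand-in, not how GPT works internally) counts which word tends to follow which in a tiny corpus and suggests the most likely next word, the phone-keyboard version of completion that ChatGPT extends to whole paragraphs.

```python
from collections import Counter, defaultdict

# A toy next-word suggester: count which word follows which in a small corpus
# and suggest the most frequent follower. This is a crude stand-in for
# phone-keyboard suggestion -- ChatGPT's model is vastly more sophisticated,
# predicting whole passages in context.
corpus = (
    "the turing test involves a chat interface "
    "the turing test is a behavioral test for ai "
    "the chat interface feels conversational"
).split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def suggest(word: str) -> str:
    """Return the most common word seen after `word`, or '?' if unseen."""
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else "?"

print(suggest("turing"))  # -> "test"
print(suggest("chat"))    # -> "interface"
```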
It can also be trained.
Making Sense of ChatGPT
Academics use Bloom's Taxonomy to describe levels of mastery. The lowest level of the pyramid is memorization: simply recalling the relevant facts and terms. Above that comes understanding, followed by application.
Google is an example of memorization; you ask it for a keyword and it finds that keyword. Programmers who haven't yet mastered a subject frequently use "Google-driven development," searching for how to solve a coding problem. Between StackOverflow, Medium, support forums, and anything a search engine might turn up, programmers can often figure out how to do something simple. Because it was trained on a subset of the Internet and communicates conversationally, ChatGPT seems capable of doing that search for you and returning a guess at how to do something.
Think of ChatGPT as a naive and lazy college freshman: willing to Google and summarize things (perhaps with some prodding), and maybe able to produce an essay. That essay may sound compelling to a layperson, but it won't stand up to scrutiny. In the coding context, ChatGPT may blend advice that applies to a half-dozen different versions of the same open-source project, while the person asking may not yet know enough to specify the exact version of the tool or the programming language.
That said, the most powerful piece of ChatGPT may be its ability to learn—to accept feedback and to change. That means that over time, like the 20Q game, it can have its errors corrected.
Today, this leaves us with a tool that can produce skeleton code in nearly any programming language, generate proof-of-concept code that might even work, and summarize legal arguments at a selected grade level (see image). That demonstrates at least the understanding level of Bloom's Taxonomy. Given the right inputs, it might be possible to train ChatGPT to apply those ideas. For example, Jason Perlow took four hours of transcripts from a technology show and asked ChatGPT to summarize them. The results aren't terrible, but they are superficial.
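To make the skeleton-code point concrete, here is the kind of proof-of-concept scaffold such a tool might return for a prompt like "write a minimal to-do list API in Python" (a hypothetical illustration, not actual ChatGPT output): it runs, but validation, persistence, and security are left to the developer.

```python
# A hypothetical example of the kind of proof-of-concept skeleton a generative
# AI assistant might produce for a "to-do list API" prompt. It runs, but the
# real work (validation, persistence, auth) is left for the developer.
from flask import Flask, jsonify, request

app = Flask(__name__)
todos = []  # in-memory only; a real service would use a database

@app.route("/todos", methods=["GET"])
def list_todos():
    return jsonify(todos)

@app.route("/todos", methods=["POST"])
def add_todo():
    item = request.get_json(force=True)
    todos.append(item)  # TODO: validate input before storing it
    return jsonify(item), 201

if __name__ == "__main__":
    app.run(debug=True)
```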
Presently, this sort of generative AI seems akin to having a research assistant who doesn't know the domain but can do a wicked Google search. Training the tool to understand a subject properly the first time still appears to require notable effort. But what will happen once ChatGPT has been trained to understand specific domains? I, for one, can't wait to find out.