Buckle your seat belts. The newest computer technology, Artificial Intelligence or AI, has hit the mainstream. In November 2022, a company called OpenAI launched a web application called ChatGPT (Generative Pre-trained Transformer). This program generates conversational responses to all sorts of inquiries and is quite popular. When I went to the website, ChatGPT displayed a cute poem (written by ChatGPT) about why I couldn’t use the website because it was already at capacity.

What does ChatGPT do? It can write music, poems, student essays, and articles, analyze computer code, and play simple games. On many tests, it produces better answers than the average test taker.

How does it work? ChatGPT depends on AI technology, which is extremely good at finding acceptable patterns in various situations by “learning” which patterns are good and which are bad. To do this, programmers input a question or challenge, like “What move should I make in this game of chess?” The AI then produces a command, like “Move a particular pawn one square ahead.” The output is then graded as better or worse by comparing that move to similar moves made by good chess players. Depending on the grade, the AI program (misleadingly called a “neural network”) is tweaked. When this same process is repeated—for example, by having the program play chess millions of times—it “learns” successful patterns or moves.
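For readers curious what “grade and tweak” looks like in practice, here is a deliberately toy sketch in Python. It is not real AI—the “network” is reduced to a single number, and all the names are invented for illustration—but it shows the same loop: propose an output, grade it against a standard, and keep the tweaks that score better.

```python
import random

TARGET = 7.0  # stand-in for "the move a good chess player would make"

def grade(output):
    """Higher is better: outputs closer to the target earn a higher grade."""
    return -abs(output - TARGET)

parameter = 0.0  # the "neural network," reduced to one adjustable number
step = 0.5       # how far each random tweak can move the parameter

# Repeat the propose-grade-tweak cycle many times.
for _ in range(10000):
    candidate = parameter + random.uniform(-step, step)  # tweak
    if grade(candidate) > grade(parameter):              # keep improvements
        parameter = candidate

print(round(parameter, 2))  # ends up very close to 7.0
```

Real systems adjust millions of parameters at once using calculus rather than random tweaks, but the logic—repeat, grade, adjust—is the same one the paragraph above describes.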

ChatGPT is a complex form of AI that takes ordinary-language questions about any topic as its input, not just chess moves. During the training phase, the program receives a question and produces a written answer. It compares that answer to other writings it can access and receives some human editorial corrections. It then grades itself against those standards of writing and modifies its neural network. Rinse and repeat. A gazillion times. For example, suppose you need a New York Times editorial on ChatGPT. The program has already trained on the New York Times editorial archive and incorporated examples of acceptable patterns. It has learned the stylistic conventions of New York Times editorials. It has also learned about ChatGPT by comparing the sentences it produced in training with other sentences where “ChatGPT” has appeared in print. Its neural network is optimized to accept the input parameters and spit out the article.

How well does it work? The great strength of all AI is also its weakness. It finds patterns within the set of “acceptable” results it is provided. However, the results it learns from are always limited. It has no ability to make connections to a much broader context. In other words, it has no real understanding. For instance, an AI self-driving car can “recognize” a stop sign in many situations since it has been given lots of stop sign images to process. However, if a tree branch partially blocks the sign (a simple problem for a human driver), the AI system may fail to recognize the stop sign because it has not had a chance to “learn” about that situation.

Similarly, ChatGPT cannot go much beyond the examples it has to work with because the program has no real understanding or judgment. If you tell it to write a New York Times editorial about St. Augustine’s social media account, it will create an article on the subject. On the other hand, if the question is more reasonable and there were ample resources available on the topic during training, ChatGPT does a reasonable job creating essays and reports. It makes report writing easier.

In one sense, this is nothing new. Technology has always aimed at accomplishing our tasks more efficiently. Roads, microwave ovens, manufacturing machines, phones, and word processing programs all make life “easier.” However, each new technological advance changes the way we go about living and reframes our relationship to the world around us. With the invention of backhoes and other such devices, many more workers spend their days sitting in front of a screen instead of exercising their bodies. Medical technology hides the reality of death from our daily lives. Phones and TV addict us to entertainment.

ChatGPT, like other technologies, enhances or replaces tasks we do, similar to the way that a backhoe replaces the person with a shovel. But it also changes us. It creates new problems and new modes of interacting with each other and the world. Students can use ChatGPT instead of learning to write and organize their thoughts. Our society may slowly diminish its creative skills by taking computer shortcuts. Easy answers that “look good” may exacerbate our culture’s interest in appearance over truth. Technology changes us. It may make things easier, but it does not always make things better.

This article first appeared in the Winter 2023 issue of Colloquy, Gutenberg College’s free quarterly newsletter.