Round five on ChatGPT and why the impact of such systems will be pernicious and evil and a threat to civilization and the future of the human race.
I'm quite concerned....
ChatGPT should properly be seen as a curator, as I suggested in an earlier article, but also, mechanistically, as a very sophisticated (and impressive) autocomplete - like the one working inside your messaging apps, though operating at the sentence level rather than the next-word or partial-word level.
It is not intelligent or creative. It is a statistically driven program. Rumour has it that an element of randomness is built into its code to avoid the dull, boring conventionality of the purely statistically driven text such engines naturally produce, and that this accounts for its errors and its apparent overconfidence when producing text about things it knows nothing about.
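The 'element of randomness' alluded to above is plausibly what language-model engineers call temperature sampling: instead of always emitting the single most likely next word, the model draws from its probability distribution, with a temperature parameter controlling how adventurous the draw is. Here is a minimal sketch of the idea - the toy vocabulary and scores are invented for illustration, and this is not OpenAI's actual code:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick the next token index from a model's raw scores (logits).

    temperature < 1 sharpens the distribution (more predictable text);
    temperature > 1 flattens it (more surprising, more error-prone text).
    """
    # Softmax with temperature: convert raw scores into probabilities.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to those probabilities.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# A made-up vocabulary and scores for completing "The cat sat on the ...":
vocab = ["mat", "sofa", "moon"]
logits = [3.0, 1.5, 0.2]

random.seed(0)
greedy = max(range(len(logits)), key=logits.__getitem__)  # temperature -> 0
sampled = sample_next_token(logits, temperature=1.2)
print(vocab[greedy], vocab[sampled])
```

At temperature zero the engine would always pick 'mat', the statistically safest word; raise the temperature and it will occasionally pick 'sofa' or even 'moon' - livelier, but with no extra knowledge behind the choice.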
I recently asked ChatGPT for a review of the Jack Ryan film 'The Sum of All Fears'. It dutifully complied and included a sparse summary of the film plot in the review. I then asked it for a review of the book of the same title, which the film was based on. This review contained the plot from the film, not the book. In the book the bad guys were Islamic terrorists. In the film they were replaced by the more acceptable neo-Nazi bad guys. I pointed out the error to ChatGPT. It apologised and redid the review of the book and repeated the same errors.
There are three plausible explanations for this.
1. ChatGPT does not really 'know' anything about the book plot, so it is just auto completing the text blindly by following its statistical programming.
2. ChatGPT knows it knows nothing about the book plot and is just bluffing.
3. Perhaps most worryingly, maybe ChatGPT knows there are differences but does not want it known that the baddies were Islamists in the book and is actively suppressing this information.
All of these scenarios are worrying to different degrees, but they all mean that ChatGPT is an unreliable interlocutor. You cannot trust or rely on ChatGPT to produce anything but the most anodyne, content-free text on anything you know nothing about. You can only trust its output on things you can personally verify from your own knowledge. Anyone who blindly trusts the text produced by ChatGPT is an idiot.
YouTube is full of videos of people trying to make money by showing other people how to make money using ChatGPT. Educators are worried about students using ChatGPT to cheat. Others are wondering how they can use ChatGPT to help their students.
A friend of mine recently posted enthusiastically about ChatGPT and said that her students had been set a pre-course task of asking ChatGPT to produce their bio and to bring it to class. She posted her bio written by ChatGPT based on the minimal input of name, job, sex and institution. Apparently, she is a wonderful, hard-working and dedicated human being. Now all this is true, but this ChatGPT bio is only coincidentally true. ChatGPT has no knowledge of my friend and no understanding of the work she does or the place where she works. The bio is essentially meaningless; it is not an expression of knowledge or thought. The text has a marginal connection to reality (from the input information), and no soul.
ChatGPT's very existence renders much text - bios and personal statements - valueless; something to be disregarded and discarded. Some students would take such a text as a baseline and work on it and improve it; only the best students will be able to do this, and these students would not necessarily have needed to use ChatGPT at all to achieve the same result. Others (most?) will just take what ChatGPT gives them and use it, happy with the result and the minimal effort it took to produce, and sure that it is better than anything they could produce on their own. The trouble for the reader of the text is that there is no way of knowing which of these two situations is true. Thus, such texts should be ignored. They contain no reliable information.
So, ChatGPT's text is potentially unreliable and worthless. But surely it can be used by students for some benefit?
This question is all part of the 'it's-out-there-the-barn-door-is-open-and-the-horse-has-bolted' argument, which ChatGPT itself might well produce, because it is all clichéd thinking, and thus not real thinking at all. The 'thinking' goes that we have to adapt to the existence of ChatGPT - that is, compromise any principles, ideals or ideas we have - because of the fait accompli that is ChatGPT. The production of ChatGPT and its release into the wild was a coup de main, and we'd just have to jolly well get used to it.
Let's think about things with a bit more rigour. Essay writing, to take an example, is not about producing an essay, at least not really. In the same way, education is not, or should not be, about the qualifications or certification. The essay product, like the certificate or qualification, is an incidental artefact of the process. The real purpose of the essay writing task is the essay writing process. Essay writing is about learning how to think; how to construct and support an argument; how to develop a convincing line of reasoning.
To successfully write an essay on a topic you need to have thought about the topic, developed some ideas, assembled some reasons and evidence and examples, and reached some conclusions, ideally supported by your evidence or arguments. You then need to be able to organise these into a conventionally structured text. You need to introduce and develop topics. Support your ideas. Add new topics and develop these. You need to be able to marshal your arguments into logically organised and coherent paragraphs. The structure of each sentence needs to fit in with the others. You need to carefully consider each word and phrase choice and their impact on the reader. A sophisticated essay will be the outcome of considered (if not necessarily deep) thinking on the topic, rendered into an essay through a hundred small but crucial decisions born of much experience and practice.
By doing this you will develop the ability to think rationally through an issue and present it in an understandable and coherent manner to a reader. This is an invaluable skill, which should be at the heart of every educational endeavour. The whole point of essay writing is not polishing or improving an almost-good-enough text. The whole point is not the final product or mark - it is learning to think, and learning how to express those thoughts, through the process of doing it.
There can be no short cut to this. If you use ChatGPT to prepare a base or draft text and then seek to polish it, or improve it, then you have short-circuited the thinking part of the process and most of the decision-making about the precise content. And, thus, you have short-circuited the learning process. If you do this, then you will not learn how to think or express your ideas and arguments in ways people will easily understand. Your intellectual development will be severely stunted, and you won't even be worthy of being an adjunct to the machine: you won't even know enough to improve the machine's texts.
In addition to the dangers of relying on the veracity of what the machine tells you, discussed in the previous article on the AI as a curator of knowledge, reliance on the AI to do all the preparatory work of text preparation is a losing strategy with severe consequences for the mental development and capacities of humans. Flawed as it is, Descartes' principle, 'I think therefore I am', is a far more insightful and optimistic comment than 'I use ChatGPT therefore I am'.
Arguably, given the performance and track record of the 'cognitive elite' since the fall of the Berlin Wall, we cannot afford any further decline in educational standards. Using ChatGPT will not help you or your students develop essay writing skills, except perhaps by studying the structure of model texts prepared by ChatGPT. In order to learn how to write essays you must write essays from blank paper; the sine qua non is you, a piece of paper, a pen and your thoughts. And in order to learn how to think you need to be able to write essays.
ChatGPT should be banned as a student aid in the classroom. That doesn't mean that teachers cannot use it for certain purposes; just that students shouldn't use it to help them write texts. We don't learn to walk by using crutches. We learn to walk by learning how not to fall down. You cannot learn to write by leaning on ChatGPT. ChatGPT will take you 99% of the way to a finished text. It will have done all the thinking and made all the decisions for you. You cannot learn how to do this thinking, or learn about the impact of your decisions on your reader, by concerning yourself only with the final spit-and-polish shine of the text.
If ChatGPT is not banned in all schools then the schools that do ban it will be the best schools, and the students who don't use it, but learn for themselves, will be the educated ones (and the others won't be educated at all), and the certificates from exams which allow the use of ChatGPT by candidates will be worthless pieces of paper.
You see, in this world, there are two kinds of people, my friend: Those who are educated. And those who need to use ChatGPT because they can't do it any other way.
© Robert A. Buckmaster 2023