More Thoughts on ChatGPT
ChatGPT is best thought of as an information compiler. It responds to requests by putting together the information, data, facts and figures, and people's opinions it has read into coherently organized and grammatically correct text. This output is the smart bit. It is pretty impressive compared to other such systems, but not really when compared to what good writers can produce. It might compare well with the work of mediocre writers, but its output is conventional and rather repetitively structured. It will default to listing things when it can, for instance, and, because it learned through frequency-based analysis, it will use more frequent, and thus simpler, constructions.
It can quickly access a lot of data and pull it together, and this is very useful but not very impressive, because it is a computer program and that is the whole purpose of computers - to do this kind of stuff fast.
And like any such system, this is also its biggest weakness, as the old saw of 'garbage in, garbage out' comes into play here, in spades.
The system cannot interrogate the data to assess its veracity or accuracy. If it has read the wrong date for a battle, it will use this date and perpetuate the error. It is just a machine. It can only use the data it knows. It will not coin new words or create new concepts as it is not creative. In short, it is not intelligent in any meaningful way.
If you ask it about models of language acquisition, it will parrot some stuff about Krashen's disregarded work because that is all it knows, and it has an apparent bias towards American sources. It has not read Gabrielatos’ criticism of Krashen's work, for example. If you ask it about dependency grammar it will tell you nothing about my work in the area because it is just not aware of it; though not many people are, to tell the truth. The sum of human knowledge is not available on the Internet, despite the common (parroted) claim that 'you can find anything you need online', and ChatGPT has not even seen all that is available there.
It is an impressive and efficient compiler of what it 'knows', and can produce straightforward output based on your key words and instructions. It has perfect recall of what it knows, but what it knows is fundamentally constrained by its input and its programmed attitude towards this data. It needs to have the ability to interrogate its input and to compare and evaluate its veracity.
It is also not a stylist by any means, and its essay responses, for example, are neither sophisticated nor a satisfying read. It has no personality as a writer, though it can mimic the style of other writers; how useful an essay in the style of Roald Dahl will be is open to debate. And it has no real character or attitude. Nor does it produce text of any discernibly interesting tone. It produces bland and staid text; rather unsatisfying meals for the soul and intellect. I doubt it ponders which synonyms to use or which metaphor to employ. It just does it, based on the most likely collocations.
It is a useful research tool, within certain limits, as it seemingly cannot cite authors or assemble a bibliography, and what you need to know has to have been in the data available to it. It seems able to satisfy easily satisfied people, like those busy producing YouTube videos on how to write and publish a book written by ChatGPT to make a lot of easy money, or those online publishers who will quite happily sack a few writers and use it to compile listicles and other trivial types of online content for the easily amused and distracted. It would, though, be a good research assistant for lawyers, if they could be sure it had been trained on all the appropriate legal texts. It also seems able to help coders code in some but not all cases, as it has access to solutions to coding problems.
So, it has utility. Will it be worth whatever Microsoft (now a big piggybacking investor) wants to charge you for access to it, though? At the time of writing, rumours suggest that one possible pricing tier will be over 40 USD per month. That's a bit pricey for me and perhaps out of reach of many less well-off students. But if you are a coder and it can solve problems for you, then it may be worth it. Or if you are a corporation more interested in the bottom line than in your workers, then that will be a bargain price.
Once it and similar systems become normalized into computer programs and other systems, then there are other implications which we might do well to consider.
I suggest that coursework as part of assessment will decline in utility. It will be impossible to ascertain what is the student's own work, what has been augmented by the use of such bots, and what is wholly the work of the 'AI'. We will, in the best institutions, go back to 100% reliance on paper exams in the exam room at the end of the semester. This is not necessarily a bad thing.
Also, in a way, if these systems become more widespread, then a command of basic facts and figures, dates and general knowledge on the part of humans will become even more essential in order to check the output of such systems. Subject knowledge will become even more important; otherwise errors, misinformation and deliberate falsehoods will creep into the knowledge base and, once there, lying unchallenged, repeatedly regurgitated by the bot and parroted by the human 'author', they will become the accepted facts.
Thus, the incorporation of such systems into the workflow will introduce a perhaps irresistible vulnerability in the fabric of the integrity of human knowledge, which whichever version of Big Brother we get will be only too happy to exploit. Unless, of course, you are willing to rely blindly on the output of the computer, as indeed many people already are, you will need to learn or relearn a lot of basic knowledge in order to guard against the possible corruption of human knowledge.
Will there need to be AI fact-checkers of such AI systems? And will I only need an AI lawyer in order to sue because I was supplied faulty information, or will I need a real human lawyer to argue my case?
Interesting times.
© Robert A. Buckmaster 2023