AI... What just happened?
Ian Oppermann, Government Chief Data Scientist, NSW Department of Customer Service
AI has been with us since the 1950s, but it has always been a bit underwhelming. Yes, IBM’s Deep Blue played some pretty impressive games of chess against a human champion in the late 90s, IBM’s Watson won at Jeopardy! in 2011, and Google’s AI beat a Go grand master in 2016. All of these were impressive in the sense of moving the boundary of what we thought AI was, but each was specific to one game or one application, even if that application was a complex one. Recently, however, AI seems to have exploded onto the scene, with ChatGPT and Bard showing just how good and how “general” AI can get, and these applications driven by large language models seem to have surprised everyone.
We really should not have been surprised if we had been paying attention. What impressed me most in recent years was AI completing Beethoven’s unfinished Tenth Symphony in 2021, and then AI accurately predicting gene expression in a type of yeast in 2022. That second neural network predicted the effectiveness and evolution of gene promoter sequences, something that until then had been an amazingly complex computational task. AI has been quietly “generating” art, music, speeches, and images for some time. What just happened is that it got a whole lot better.
By now, everyone has probably had a play with one of the latest AI tools based on large language models or their image-based equivalents. Initially, it seems uncannily like magic. You can ask for speeches, poems, or songs “in the style of…,” or art “in the style of….” There is enough information out there to train these models that they can do a pretty good job of generating a jazz song in the style of Miles Davis. The trick is to ensure the original artist is not left out. If their work is used as input, they should get credit and possibly royalties, depending on how the derived art is used.
After a while, however, you do start to see some serious limitations, even some “hallucinations,” from these tools. I recently asked ChatGPT to write a speech in the style of me.
What it generated was pretty ordinary in my view – lacking real substance and, overall, somewhat bland. When I passed the speech to my wife, her response was “Yep, that’s you alright.” I was even more disappointed! What it showed (at least in my opinion) was that, while the tool did a great job of finding, connecting, and synthesizing source material, it lacked real “judgment” about the important issues and the context of such a speech.
Of course, what you can do is keep adding context by providing additional input. You can refine and refine until it does largely hit the mark. This is apparently how the Beethoven Symphony was completed. It is a bit like conducting with your keyboard, or even your voice. In that sense, the tools can be used as the assistant or co-pilot as we see more and more with code generation, or even suggesting the next few words in a sentence you are writing in an email or document (just like what happened as I was writing this sentence). A co-pilot mode helps us be more productive, as opposed to replacing us (thanks again AI for completing that last sentence).
But we do need to keep our wits about us. It is hard to imagine using any tool that “hallucinates” for serious purposes. Imagine using a ruler, a pen, or a calculator that hallucinates. Imagine using a hallucinating tool for air traffic control. Nonetheless, you can use any tool by acknowledging its limitations, being clear about just how far you can rely on it, and recognizing where we humans still need to be in the loop. By “being in the loop,” I mean that we are still using the tools, rather than blindly accepting an output.
Moral and ethical challenges concerning the use of AI have been debated for some time, but now we also need to consider issues such as whether AI can “own” an invention or a patent, whether the style of a human creation should be protected, and even whether the use of AI should be prevented in certain domains.
We also need to think about how we can appropriately use tools that are inherently unreliable and unexplainable but are still powerful. A hallucinating AI can still find a lot of useful connections, but we might use a different AI tool to check on the first one, essentially providing independent assurance. We also need to deal with the rising tide of “noise,” or deliberate misinformation. Over the years, people have generated a lot of useful material for the internet, as well as a lot of cat videos. AI could outpace us by orders of magnitude. Already we have trouble determining the validity of the information out there. If AI starts generating hallucinated, or deliberately false, information at a million times the rate of humans, finding the “signal” in the “noise” may be something that only AI can do.
In the last few paragraphs, I have been accepting many of the sentence completion suggestions from my AI-powered document editor. So, after using AI to check spelling and grammar, I leave you with this final thought: could AI have written this opinion piece?
It most definitely could have.
Would it have been as good?
I will leave you to decide for yourself.