Since its launch in November of last year, the AI tool ChatGPT has drawn both acclaim and controversy. Just five days after release, the application garnered its first million users. However, many people have raised concerns that the tool could be used by students to cheat on their papers, or that it could put writers out of a job.
But are apps like ChatGPT really a threat to human creatives, or are they meant to enhance their work and make it less of a struggle? And are there any practical applications for conversational AI?
The AI Controversy
Over the past year, AI technology – particularly conversational apps like ChatGPT and AI-driven art generators like Wombo and Lensa – has courted controversy, especially within the creative community. Visual artists have protested that tools such as Stable Diffusion are trained on images gleaned from the web, using human-created art without the artists’ permission.
In terms of conversational AI, ChatGPT came under fire when it was shown to be capable of producing a passable academic paper in mere seconds. This was enough to drive school officials in New York City – the largest school district in the United States – to ban the app from schools, and it also prompted its creators at OpenAI to seek ways of determining whether the app was being misused.
An Undercurrent of Fear
For Johannes Eichstaedt, a professor at Stanford University, the current fascination with ChatGPT is driven by fears that AI will eventually take over a number of occupations – a fear that feels all too real in light of present economic upheavals. Indeed, the chatbot is already in use among application developers and real estate agents, the latter mostly using it as a conversational FAQ to field common buyers’ questions.
But are human jobs in danger of being taken over by AI anytime soon? Probably not. When asked about the topic, the ChatGPT bot is programmed to reply that it simply enhances tasks already being done by humans. For one thing, it cannot create genuinely original poetry or music.
It also appears to be rather sexist. Ask it about the best athletes and the subjects it cites are predominantly male; ask it to tell you a love story and it skews heteronormative, as if questions of gender inclusion were irrelevant. Essentially, ChatGPT reproduces biases already embedded in its training data, which is drawn from a recorded human history that, unfortunately, is dominated by male figures.
Will It Ever Become More “Human?”
This programmed “gender bias” raises a number of ethical questions regarding the use of ChatGPT. Indeed, Jeff Wong, global chief innovation officer at the multinational professional services firm Ernst & Young, observes that ChatGPT tends to be confidently inaccurate – a trait that could see it used to spread factual errors and potentially harmful biases.
But fudged facts aside, the big question for most people is how human-like developers can make ChatGPT and the similar applications that may follow. Scientists confidently say that genuine sentience is highly unlikely in the near future.
As Karina Vold, a professor of philosophy at the University of Toronto, puts it, sentience means that the organism in question has the capacity to feel. No matter how skilled a programmer may be, it is nigh on impossible to code something so complex – and, in any case, any flaws ChatGPT has now or develops in the future are entirely attributable to human error.