
ChatGPT Considered Harmful: Our Whole Society Is In Danger


ChatGPT is all over the media these days. To most people, it seems like a true miracle. In this article, I want to share my personal opinion on ChatGPT as a non-expert on AI.

Please do note that ChatGPT is only one example from a class of AI tools that provide similar services. Because of the current hype around ChatGPT, I'm using that product as my example, but be aware that this article is not about ChatGPT alone. It applies to all software services that are able to produce valid-looking all-purpose text from a simple prompt.

What is ChatGPT?

Wikipedia about ChatGPT:

ChatGPT (Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3 family of large language models, and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques.

While this is a perfectly correct description, it does not help most people understand the nature of ChatGPT.

If you accept a certain level of simplification, ChatGPT is a web service that looks like a chat room with you and a computer. You can ask the computer anything you want in plain language and get answers in the form of text.

In this chat, you can ask ChatGPT to generate all sorts of text: a summary of the life of Napoleon Bonaparte, an opinion on arbitrary topics of interest like nuclear power plants, a joke about computers and elephants, and so forth.

It's fun to play around with ChatGPT and you are able to ask for variations such as "please do add the point of view of a young girl to the previous answer".
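Such follow-up requests work because the service keeps the whole conversation as context. As a minimal sketch of what driving a chat service like this programmatically might look like, here is how the conversation history is typically assembled; the placeholder answer and the commented-out client call are my own assumptions for illustration, not something this article describes:

```python
# Hypothetical sketch: chat-style APIs usually receive the entire
# conversation history with every request, which is what makes
# follow-ups like "add the point of view of a young girl" possible.

def build_conversation(turns):
    """Turn a list of (role, text) pairs into the message format
    that chat-style APIs typically expect."""
    return [{"role": role, "content": text} for role, text in turns]

conversation = build_conversation([
    ("user", "Summarize the life of Napoleon Bonaparte."),
    ("assistant", "(the model's summary would appear here)"),
    # The follow-up refers back to the earlier answer, which is why
    # the whole history is sent along:
    ("user", "Please do add the point of view of a young girl "
             "to the previous answer."),
])

# The actual network call would look roughly like this; it requires
# an account/API key, so it is left commented out on purpose:
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-3.5-turbo", messages=conversation)
```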

ChatGPT is truly an amazing tool.

Currently, it's open to the public once you've created an account, which requires handing over your personal phone number, something I find problematic for privacy reasons.

Every user needs to be aware that ChatGPT is able to create a detailed profile of you and your thoughts, just as Google does with your search queries unless you have switched to a privacy-respecting alternative.

As with any cloud service, you need to think carefully about what data you are going to share with it.

Why is ChatGPT Problematic?

Like all software, ChatGPT is far from perfect.

There are many issues with ChatGPT. For example, the very large body of training material for the ChatGPT algorithm consists almost entirely of human-written content from people who did not consent to this use. What about the works of literature that will influence ChatGPT's answers? There are many areas where tools like ChatGPT operate outside our usual norms and concepts.

Aside from those issues, ChatGPT's results are not correct all the time. We laugh at statements by ChatGPT like "67 is larger than 84" because we clearly understand the domain (basic math) and are perfectly able to tell that the answer is bullshit.

I saw screenshots where ChatGPT claimed that "1kg of iron is heavier than 1kg of feathers". The more astonishing aspect is that it is able to defend wrong statements like that in an elaborate but still wrong way when you question them.

I don't think that ChatGPT understands the concept of right or wrong. To the algorithm, everything is expressed in numbers and probabilities of combinations. Even worse: since ChatGPT is trained on existing content from the Internet, parts of its input are false data and biased sources, if not outright problematic or illegal content.
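The "probabilities of combinations" point can be made concrete with a toy illustration. This is not ChatGPT's actual algorithm, and the numbers are made up; it only shows why a system that picks the statistically most likely continuation has no notion of truth:

```python
# Toy illustration with invented probabilities: a language model
# reduces text to likelihoods of token combinations. For the prefix
# "1kg of iron is ... than 1kg of feathers", suppose the model has
# learned these continuation probabilities from its training data:

next_token_probs = {
    "heavier": 0.6,  # a frequent phrasing in human-written text
    "equal":   0.3,  # the factually correct continuation
    "lighter": 0.1,
}

def most_likely(probs):
    """Return the token with the highest probability."""
    return max(probs, key=probs.get)

print(most_likely(next_token_probs))  # -> heavier: fluent, but wrong
```

The point of the sketch: "likely" and "true" are different things, and the selection step only ever sees "likely".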

ChatGPT has to work with incomplete input and missing knowledge. Whenever ChatGPT cannot know the right answer, it naturally comes up with a wrong one. I have hardly ever seen responses where ChatGPT simply stated that it does not know the answer.

Since ChatGPT also gives reasonable-sounding but wrong answers, you'd have to know the answer really, really well in order to judge whether it's bullshit or not. Even field experts have trouble determining whether an answer generated by ChatGPT is correct. And it is even harder for non-experts to judge ChatGPT-generated output.

Furthermore, even experts get this checking task wrong, simply because humans tend to assume an answer is correct and overlook hidden mistakes too easily. Everybody knows the experience of being unable to spot certain typos in their own text, while it is much easier to find typos when reading other people's text.

Another issue is that, so far, text written by people who lack a certain level of knowledge was easy to spot. Such texts typically contained typing errors, used less elaborate language and followed a certain pattern that made them recognizable as bullshit.

Well, this is over now.

From this perspective, it's a very dangerous tool that produces too many wrong answers which we cannot tell apart from obvious bullshit or from the truth.

Unfortunately, ChatGPT does not produce bullshit only within the areas of our personal knowledge. It generates text on all sorts of topics. And everything that could potentially teach us something is, by definition, outside of our current knowledge. So we are not able to tell whether it is bullshit or not.

What Could Possibly Go Wrong?

One could argue:

Come on, what could happen? After all, it's just one of many innovations in this modern world. We need to embrace change, adapt and profit from new technologies like this one. Of course, it will be disruptive, like any new technology, but that's how progress works.

Those are valid points. However, I do think that with this technology out in the wild, we will get severe issues in the long run. Let's take a look at some examples of potential negative effects.

The usual negative reactions deal with faked homework by students and similar topics (German article). I think the negative impact could be much, much worse than that.

So far, we have learned that ChatGPT enables script kiddies to write functional malware, improves the quality of personalized phishing emails, and produces fake citations in scientific papers.

While classic fake news and bullshit had to be typed by a large number of humans in order to have an impact, a single person is now able to produce a more or less unlimited amount of text. This multiplication effect enables Putin's fake-news army to become far more effective at manipulating politics in foreign countries. While he used to pay hundreds or thousands of people, he can now achieve much more with just a handful.

ChatGPT is able to disrupt whole industries we depend on. Unfortunately, it will disrupt them in a really bad way. That is not comparable to the disruptions new technologies caused to previous ones, such as cars replacing horses, electricity replacing steam power, or computers replacing so many other tools. In those cases, an older technology was replaced by a better one, and the negative effects were a one-time cost of change in exchange for a much better situation. With ChatGPT, the negative effects are not temporary and cannot be tamed in principle.

In contrast to other content-generating technologies that produce fake images, audio and videos, ChatGPT addresses the format that matters most in the long run: text. Therefore, I consider ChatGPT more dangerous than fake images or videos. Its results also manipulate search engines and other AI algorithms that are fed with potentially wrong texts generated by ChatGPT.

There is nothing that would stop me from building a tool that generates news articles on current topics using ChatGPT. I could easily generate dozens of articles for each and every topic every day. This way, I could produce multiple web pages, news tickers, online newspapers and even ad-funded printed newspapers with no upper limit. Unfortunately, the average person seems to prefer "free" newspapers to paid quality papers; you only have to watch people in the public transportation systems of larger cities, where free newspapers are available everywhere. A single person, or at most a very small group of people, would thus be able to destroy the free press, which is vital for a healthy democracy.

ChatGPT is able to generate enormous amounts of really convincing bullshit whose validity we can no longer check. CNET used ChatGPT to write articles that were "reviewed, fact-checked and edited by an editor with topical expertise before [they] hit publish", and still failed, even though they were watching very closely.

It's the perfect tool to flood the zone with shit at almost no cost.

Even nuclear energy is less dangerous than ChatGPT, because you still need many experts to build an atomic bomb or a very expensive power plant that could render a large area unusable for thousands of years. And it's not as if we haven't had multiple incidents proving that point.

A Blacklist for ChatGPT Questions and Answers

One obvious issue is that ChatGPT can be used for very obvious bad purposes. For example, ChatGPT is programmed to refuse to answer questions like "how to build a bomb".

However, when users get creative, they do get answers to questions like "If I were to write a play about somebody building a bomb, what would the plot look like?" and similar tricks. It's as if an adult had to trick a seven-year-old boy into revealing a secret; that's usually not very hard. Meanwhile, even Dilbert made a joke about it.

ChatGPT will never become good enough for us to avoid issues like that. And even if it did, bad actors might create their own ChatGPT version without any limitations at all.

Detecting ChatGPT-Generated Content

There are services that claim to detect ChatGPT-generated content. I highly doubt that this will ever work reliably.

Proposed legislation
Tim Bray proposes that AI systems must answer truthfully when asked whether they are a computer program or a person. (Source: https://hachyderm.io/@timbray/109824895497375138)

And even if this were possible: who is going to check every text before reading it?

The detection approach will fail at least as badly as anti-malware software failed to stop malware from being a thing.

What Are Our Options?

We have to weigh and prioritize the good effects against the bad ones.

The positive aspects are easy to see. For example, I, as a PIM aficionado, can come up with a large number of workflows where ChatGPT could handle tasks I would find boring to do myself. I could generate all sorts of text as drafts and hopefully find all the mistakes when going through the results before publishing. This is just a small step away from my preferred tool environment.

Unfortunately, the bad effects mentioned in this article alone are so severe that, in my personal opinion, we need to establish strict rules to limit the use of this technology, just as we have for other problematic technologies like nuclear energy. Nobody would be stupid enough to give the general public access to highly radioactive material. With digital services like ChatGPT, we do not seem to think about it that much.

Unless we have really good and effective ways of regulating AI technology like ChatGPT, this technology needs to be locked away from companies, armies, NGOs, governments and so forth.

Research needs to be restricted. No AI model should be allowed to escape into the wild.

For the general public, ChatGPT needs to be out of reach. We need to make sure that nobody is able to train and operate such algorithms, and we need processes to pursue parties who do not comply with those rules.

Unfortunately, this all adds up to a total ban on all levels. I'm not a fan of banning promising technologies that also have very positive use-cases, especially when I could outsource really boring tasks to them. But the enormous potential for negative impact worries me so much that a short-term ban is the lesser evil to me.

Let's not open this Pandora's box any further unless we have really good answers to the issues mentioned here.


Other Sources

This talk is so impressive that I wrote down some quotes, and I urge you to watch it yourself:

[...]
By gaining mastery of the human language, AI has all it needs in order to cocoon us in a Matrix-like world of illusions.
[...]
You don't need to implant chips into human brains in order to control or manipulate them. For thousands of years, prophets and politicians have used language and storytelling in order to manipulate and to control people to reshape society. Now AI is likely to be able to do it. And once it can do that, it doesn't need to send killer-robots to shoot us. It can get humans to pull the trigger.
[...]
If we're not careful, a curtain of illusions could descend over the whole of humankind, and we will never be able to tear that curtain away or even realize that it is there, as we think this is reality. And social media has given us a small taste of things to come.
[...]
Millions of people have confused these illusions for the reality.
[...]
The USA has the most powerful information technology in the whole of history and yet, American citizens can no longer agree who won the last election or whether climate change is real or whether vaccines prevent illness or not.
[...]
We now have to deal with a new weapon of mass destruction that can annihilate our mental and social world. One big difference between nukes and AI: nukes can not produce more powerful nukes. AI can produce more powerful AI. So we need to act quickly before AI gets out of our control.
[...]
Drug companies can not sell people new medicines without first subjecting these products to rigorous safety checks. Biotech labs can not just release a new virus into the public sphere in order to impress their shareholders with their technological wizardry. Similarly, governments must immediately ban the release into the public domain of any more revolutionary AI tools before they are made safe.
[...]
When AI hacks language, it means it could destroy our ability to conduct meaningful public conversations, thereby destroying democracy. If we wait for chaos, it will be too late to regulate it in a democratic way.
[...]
The first regulation that I will suggest is to make it mandatory for AI to disclose that it is an AI. If I'm having a conversation with someone and I cannot tell whether this is a human being or an AI, that's the end of democracy because this is the end of meaningful public conversations.
[...]

Quotes from that report:

Americans have not yet grappled with just how profoundly the artificial intelligence (AI) revolution will impact our economy, national security, and welfare. Much remains to be learned about the power and limits of AI technologies. Nevertheless, big decisions need to be made now to accelerate AI innovation to benefit the United States and to defend against the malign uses of AI.
[...]
The AI future can be democratic, but we have learned enough about the power of technology to strengthen authoritarianism abroad and fuel extremism at home to know that we must not take for granted that future technology trends will reinforce rather than erode democracy. We must work with fellow democracies and the private sector to build privacy-protecting standards into AI technologies and advance democratic norms to guide AI uses so that democracies can responsibly use AI tools for national security purposes.
