ChatGPT is on everyone’s lips today: its advocates tout it as the solution to many problems, while its detractors see it as the seven plagues of Egypt arriving all at once. Either way, it is nothing more than a tool, and as such it can be used for good or for ill. And that means AI can also be used by criminals.
This isn’t just our claim: it is Europol itself that warns criminals could use ChatGPT to carry out their acts. The warning comes from a report prepared by European Union police analysts on the risks the famous conversational AI poses to public safety. What conclusions did they reach? They are not encouraging, as the report considers ChatGPT a tool that can assist in a range of different crimes.
How can criminals use ChatGPT?
Europol gave several examples of how ChatGPT can help criminals commit their acts, making the tool much like a knife: although it is used for cooking, it can also be used to hurt or kill another person.
- ChatGPT can be used to generate far more believable texts for phishing scams. In particular, its abilities could be exploited by foreign criminals specializing in this type of scam to make their messages read more naturally, so that the victim perceives them as fluent, native-sounding language.
- The report also explains how this ability can be used to create propaganda and disinformation, for example to recruit people into terrorist groups and destructive cults. All of this can be combined with image-generating AIs to produce convincing fake news.
- There have been cases where the AI has provided confidential private data, which can pose a serious danger.
- Finally, ChatGPT can be used by criminals to create malware. Although not everything the AI generates is functional or correct, it can understand and write code, which opens the door to creating malware of all kinds.
The conclusion to all this? We think Europol may be exaggerating, but then again, we don’t know crime and criminality the way they do, so they must know something we don’t.
These aren’t the only examples of ChatGPT misuse.
For many years, viral marketing has harnessed word of mouth to influence purchase intent. ChatGPT-like AIs are now asked for purchase recommendations for various products, and what they do is digest huge volumes of reviews to reach a conclusion. While writing reviews is not illegal in itself, there are already people posting fake ones, both positive and negative, specifically to sway the recommendations the AI produces.
This can cause a huge loss of trust in the system and damage the image of big online stores like Amazon or AliExpress. While it shouldn’t affect brands’ own after-sales support channels, it does affect digital superstores and can become a serious issue. Let’s remember that not everyone has enough knowledge to make the best decision when buying certain products.
ChatGPT depends on the information it is given as input, and unfiltered data is a problem. These platforms, and the brands selling on them, may need to hire staff to remove misinformation and FUD about their products.