AI is already being used to commit crimes

NinFan


There is no doubt about the potential of AI that ChatGPT has revealed to us, but that potential can be used not only for positive purposes but also for malicious ones.


In other words, many people have seen what ChatGPT is capable of, and for some the first thought was to use it to take advantage of others. We have already seen cases of fraud carried out with artificial intelligence, but we did not imagine the problem was serious enough that Europol itself was already looking into the matter.

Europol warns about the misuse of ChatGPT

The European police organization this week published its first report on the use of language models such as GPT, on which ChatGPT is based, for criminal purposes. While it did not share information about any alleged crimes already committed using AI, it did detail the possible dangers of the technology.

Europol is pessimistic about the improved capabilities of ChatGPT and other language models, saying they can be exploited by criminals, who are often very quick to adopt new technologies. Specifically, Europol identifies three areas where ChatGPT can be of great help in committing crimes.

For starters, ChatGPT’s ability to create “realistic” text makes it a very useful tool for phishing, the technique in which an attacker impersonates another person or organization, for example an email that appears to come from your bank asking you to log in to approve a large transfer to your account.

ChatGPT lets you generate text that looks like it was written by a human

Phishing emails can often be detected by the way they are formatted and written, but GPT-4 could learn to write them in the style commonly used in legitimate emails, making it easier to fall into the trap. These fake emails can also be created very quickly, which would make them harder to detect.

Similarly, ChatGPT’s ability to create text could be used to create propaganda and disinformation, and even to facilitate terrorist activities, according to Europol. We’ve already seen other generative AI projects used to create fake images related to breaking news, and it wouldn’t be surprising if ChatGPT could be used in this way as well.

Finally, Europol warns that the measures taken by OpenAI to prevent ChatGPT from being used for cybercrime are insufficient, because they only work if the model understands what it is doing. Criminals can still produce malicious code by having the AI write malware for them in small chunks, so that the model never grasps what it is creating as a whole.
