Much has been said about the new artificial intelligences and their potential, thanks to the popularity of ChatGPT, but perhaps not as much about the new dangers that come with them.
The creator of ChatGPT himself has already warned of the dangers of AI, but curiously, the project's first big controversy was not about possible malicious uses, but about the handling of personal data and how easily a bug exposed it.
ChatGPT’s Biggest Bug So Far
Last Tuesday, OpenAI was forced to temporarily shut down the ChatGPT service after discovering a bug that caused some users to see other people's usage history. Specifically, in the sidebar that lists our most recent questions to the chatbot, conversation titles began to appear in other languages, from questions we had never asked.
Today, OpenAI explained exactly what happened and, in passing, confessed that the problem was more serious than it initially appeared. The company's investigation revealed that, for a few hours, it was possible to view the private data of other registered ChatGPT users, including information that could be used to impersonate them or make fraudulent payments with their credit cards.
The data published in error was as follows:
- The user's first and last name
- Email address
- The last four digits of the credit card associated with the account
- The credit card's expiration date
- The billing address associated with the payment
Obviously, the only users affected are those who have linked a credit card to their account to pay for ChatGPT Plus, the paid service that grants access to various AI benefits.
Engineers discovered that the problem was a bug in the source code of a library used by the ChatGPT website, redis-py. This is why OpenAI tried to calm things down, stating that the problem did not directly affect the chatbot itself, and that a very specific sequence of steps was actually required for personal information to leak.
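In essence, bugs of this kind are a connection-reuse race: a request is cancelled after being sent but before its reply is read, the connection returns to the pool with the stale reply still buffered, and the next user's request on that connection reads the previous user's data. The following is a minimal illustrative sketch of that failure mode, not actual redis-py code; the `FakeConnection` class is hypothetical:

```python
from collections import deque

class FakeConnection:
    """Toy stand-in for a pooled connection: server replies queue up in order."""
    def __init__(self):
        self.replies = deque()

    def send(self, command, simulated_reply):
        # Sending a command queues the server's eventual reply on the connection.
        self.replies.append(simulated_reply)

    def read_reply(self):
        # Replies are read in FIFO order, with no check of which request they belong to.
        return self.replies.popleft()

# One shared connection, as in a connection pool reused across requests.
conn = FakeConnection()

# Request from user A: sent, but cancelled before its reply was read.
conn.send("GET session:userA", "userA-private-data")
# (cancellation happens here: nobody reads the reply, and the connection
#  goes back to the pool with a stale reply still buffered)

# Request from user B reuses the same connection.
conn.send("GET session:userB", "userB-private-data")
reply_for_b = conn.read_reply()  # reads A's stale reply instead of B's

print(reply_for_b)  # userA-private-data
```

The fix for this class of bug is typically to discard (rather than reuse) any connection whose request was interrupted mid-flight, so a stale reply can never be paired with the wrong request.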
Specifically, there were two possible ways to obtain private data. In the first, ChatGPT sent confirmation emails containing personal information to the wrong users, during a window between 9 a.m. and 6 p.m. (Spanish peninsular time) on March 20. In the second, the user had to log into their account and open the subscription management section of the settings; during the same window, it displayed incorrect information belonging to other users.
OpenAI has already contacted the users affected by this data breach, and claims to have taken the necessary measures to prevent it from happening again, adding redundant checks to the source code.