A ChatGPT user suddenly discovered unknown conversations in his history. These conversations, initiated by other people, contain very sensitive information, such as passwords. OpenAI denies the existence of a security flaw.
A user named Chase Whiteside had a startling experience with ChatGPT. As reported by our colleagues at Ars Technica, he found he could read other users’ conversations with the generative AI. The conversations suddenly appeared in his history on January 29, 2024.
“I went to make a request […] and when I came back to access it a few moments later, I noticed the additional conversations. They weren’t there when I used ChatGPT last night (I’m a pretty regular user). No request was made — they just showed up in my history, and are definitely not mine (and I don’t think they’re from the same user either),” Whiteside told the outlet, providing screenshots to support his account.
The mysteriously appearing conversations apparently originate from one or more employees of a pharmaceutical company. In one exchange, an employee used ChatGPT to troubleshoot an issue on an internal prescription platform. More worrying still, the conversations contain a wealth of sensitive information, including passwords and usernames, along with the name of the internal portal and the name of the pharmacy. Judging by the exchange, the employee chatting with ChatGPT could not get the platform to work, and began venting his frustration at it in his conversation with the AI.
Other items found by Ars Technica include “the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language”. It is obviously very worrying that information of this kind ends up in the hands of other users.
At first glance, the case is reminiscent of a data leak that occurred in March 2023, a few months after the start of the ChatGPT wave. For a few hours, sensitive data belonging to a handful of users — names, addresses, and the last four digits of credit card numbers — was exposed to other people. This malfunction, though limited in scale, forced OpenAI to take ChatGPT offline temporarily.
OpenAI has an explanation
However, OpenAI insists that this is not the same problem at all. In a statement sent to Ars Technica, the start-up claims that Chase Whiteside’s account had simply been hacked by a third party. The hacker then used the account to communicate with ChatGPT, which explains why unknown conversations ended up in Whiteside’s history. The attacker then shared access to the chatbot with a community of users — many online services rely on compromised accounts to provide free access to GPT-4.
“Based on what we have discovered, we consider this to be an account takeover in the sense that it is consistent with the activity we are recording […] The investigation showed that conversations were created recently from Sri Lanka. These conversations occurred in the same time frame as logins from Sri Lanka.”
In any case, we strongly recommend that you never share sensitive personal information with ChatGPT. Due to a bug or vulnerability, this data could theoretically end up in the wrong hands. In addition, the GPT language model is continually enriched with data provided by its users, so your data could ultimately end up in the generative AI’s training corpus. This is why companies like Apple and Samsung restrict their employees’ use of AI tools. Aware of these concerns, OpenAI has been offering an incognito mode on ChatGPT for several months.