The release of ChatGPT and similar chatbots raised concerns that AI technology could be exploited for cyberattacks. It did not take long for threat actors to find ways to circumvent security measures and use ChatGPT to write malicious code.
Over time, however, the situation has changed. Instead of simply using ChatGPT to launch cyber incidents, attackers began targeting the technology itself. OpenAI, the chatbot's developer, recently confirmed a data breach caused by a vulnerability in an open-source library used in its code. The breach resulted in the service being suspended until the issue was resolved.
ChatGPT has grown rapidly in popularity since its launch in late 2022. Users of all kinds, especially writers and software developers, were eager to try the chatbot. Despite problems such as garbled text and obvious plagiarism, ChatGPT made history as the fastest-growing application ever, surpassing 100 million monthly users by January. Within a month of its release, it was seeing approximately 13 million daily users; by comparison, TikTok took nine months to reach similar figures.
Cybersecurity experts have likened ChatGPT, which owes its rapid popularity to its functionality and wide range of uses, to a Swiss army knife.
It is no surprise that a popular application or technology becomes a prime target for threat actors. The ChatGPT breach occurred through a vulnerability in the Redis open-source library that allowed users to view the chat history of other active users.
Open-source libraries are used to build dynamic interfaces by storing frequently used routines and resources. OpenAI uses Redis to cache user information for faster retrieval. Threat actors know that vulnerabilities can easily slip under the radar due to the collaborative nature of open-source development; attacks on open-source libraries have increased by 742% since 2019.
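To illustrate the caching pattern involved, the sketch below shows how a service might cache per-user data in Redis using the redis-py client. It is a minimal, hypothetical example, not OpenAI's actual implementation; the key scheme, TTL, and load_profile_from_db helper are all illustrative assumptions.

```python
import json

import redis

# Connect to a local Redis instance (illustrative defaults).
cache = redis.Redis(host="localhost", port=6379, db=0)

CACHE_TTL_SECONDS = 300  # hypothetical: expire cached entries after 5 minutes


def load_profile_from_db(user_id: str) -> dict:
    # Stand-in for a real database lookup.
    return {"user_id": user_id, "plan": "plus"}


def get_user_profile(user_id: str) -> dict:
    """Return a user's profile, serving it from the Redis cache when possible."""
    cache_key = f"user:{user_id}:profile"  # the key ties the entry to one user
    cached = cache.get(cache_key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database entirely
    profile = load_profile_from_db(user_id)
    cache.setex(cache_key, CACHE_TTL_SECONDS, json.dumps(profile))
    return profile
```

The breach shows what is at stake in this pattern: the cache is only as safe as the guarantee that each lookup returns data belonging to the requesting user. When the library bug broke that guarantee, one user's cached data could be served to another.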
While the breach was relatively minor, it took OpenAI a few days to fix the bug. Only after deepening its investigation did OpenAI realize that payment information might also have been exposed through the same vulnerability during the few hours before the service was suspended. OpenAI announced that affected users might have seen another user's first and last name, e-mail address, payment address, and credit card expiration date, along with only the last four digits of the credit card number; the full 16-digit card number was never exposed.
The leak, which affected less than 1% of users, with paying subscribers the main victims, was contained quickly and with minimal damage. Still, the attack was a warning about the risks that chatbots and their users may face in the future.
The recent rise in chatbot use has brought data privacy concerns with it. Mark McCreary, co-head of the data privacy and security practice at law firm Fox Rothschild LLP, likens ChatGPT and chatbots to the black box of an airplane. Because these AI technologies store large amounts of data to inform their responses, the recorded information becomes an easy target. Chatbots can save and summarize a user's notes on various topics or search for additional details, so the user can lose control over that information from the moment it enters the chatbot's library.
Due to privacy concerns, some businesses and even countries have already started to impose restrictions. JPMorgan Chase, for instance, has restricted employees' use of ChatGPT as part of the company's controls over third-party software and applications, amid widespread concern about the security of financial information entered into the chatbot. Italy blocked the application nationwide, citing its citizens' data privacy and compliance with the General Data Protection Regulation.
Experts predict that threat actors will use ChatGPT to create sophisticated and realistic phishing emails. Phishing attacks can no longer be recognized by poor grammar and broken sentence structure; targeted messages can now read as if written by a native speaker of the language. In this respect, ChatGPT's strong translation skills could be a game-changer for foreign attackers as well.
The use of artificial intelligence to spread misinformation and conspiracy theories is another worrisome prospect, and its implications go beyond cybersecurity. Researchers found that when asked to write an opinion column, ChatGPT produced content similar to that found on InfoWars and other well-known websites that spread conspiracy theories.
As chatbots evolve, their improving language skills and growing popularity create new cyber threats while also making them attractive attack vectors. Aiming to prevent future data breaches within the application, OpenAI offers rewards of up to $20,000 to those who discover and report previously unknown vulnerabilities.
It should be noted, however, that this program does not cover model safety, malicious code generation, or erroneous output. OpenAI appears intent on hardening the technology against external attacks while also taking care to keep the chatbot from becoming a source of cyberattacks.
It therefore seems likely that ChatGPT and other chatbots will play an important role in the future of cybersecurity. Only time will tell whether these technologies become primarily victims of attacks or sources of them.
Source: Security Intelligence