Last year, The New York Times reported a breach of OpenAI’s internal messaging systems, in which a hacker gained access to and stole details about the design of the company’s artificial intelligence (AI) technologies.
The breach occurred in an online forum where OpenAI employees discussed the company’s latest technologies, according to two anonymous sources familiar with the matter.
The attacker infiltrated the internal communication systems of OpenAI, the maker of the popular AI chatbot ChatGPT. While the intruder did not reach the company’s core AI development infrastructure, details of employees’ internal discussions were compromised. OpenAI, which is backed by Microsoft, declined to comment when contacted.
OpenAI executives disclosed the breach to employees at a company-wide meeting in April of last year and also informed the board of directors. They chose to keep the incident confidential because no customer or partner data had been exposed, and they judged the hacker to be a lone actor with no ties to a foreign government, posing no national security risk. Consequently, law enforcement was not notified.
In May, however, OpenAI disclosed that it had thwarted five covert operations attempting to exploit its AI models for deceptive activity online. The revelation heightened anxieties about the potential misuse of AI. In response, the Biden administration has been formulating strategies to shield American AI technology from foreign rivals such as China and Russia, with early proposals suggesting safeguards around powerful AI models like ChatGPT.
Amid rapid advancements and looming threats, sixteen AI companies pledged at a global summit in May to develop the technology responsibly.