Early last year, a hacker broke into OpenAI's internal messaging systems and stole details about the design of its AI technologies, though not the core code. The breach, discussed internally in April 2023, was not publicly disclosed because no customer or partner information was compromised. OpenAI executives, believing the hacker was a private individual with no ties to a foreign government, did not inform law enforcement.
The incident raised concerns among some OpenAI employees about potential threats from foreign adversaries such as China, according to The New York Times. Leopold Aschenbrenner, an OpenAI technical program manager, sent a memo to the board arguing that the company was not doing enough to prevent foreign espionage. Aschenbrenner, who was later fired for leaking information, has publicly expressed his concerns about OpenAI's security.
Security Measures and Industry Context
OpenAI spokeswoman Liz Bourgeois stated that the company had addressed the incident and shared details with the board. She emphasized OpenAI's commitment to building safe artificial general intelligence (AGI) but disputed Aschenbrenner's characterization of its security practices.
The broader tech industry faces similar concerns. Microsoft President Brad Smith testified about Chinese hackers targeting federal government networks. At the same time, federal and California laws bar companies from discriminating against employees based on nationality, and barring foreign talent could significantly hinder AI progress in the U.S.
Today's AI systems, while capable of spreading disinformation and automating some jobs, are not considered significant national security risks. Studies by OpenAI and others suggest that current AI is not significantly more dangerous than search engines. Longer term, however, researchers worry that AI could be used to help create bioweapons or break into government computer systems.
Conclusion
While current AI technologies are not deemed a major threat, the potential future risks warrant serious consideration and proactive measures. National security leaders and researchers advocate tighter controls on AI development to mitigate those risks.