OpenAI’s Sam Altman Warns of Ceasing Operations in Europe

OpenAI CEO Sam Altman recently warned that the ChatGPT maker could be forced to cease operations in Europe if the European Union implements its proposed rules on artificial intelligence. During a tour of several European capitals, Altman told reporters that “the details really matter” and that OpenAI would try to comply with the regulations, but would have to stop operating if it could not.

EU’s AI Act

The EU’s AI Act is set to become the first law on AI passed by a major regulator anywhere, according to its website. It focuses on regulating AI and protecting Europeans from AI risks, which it divides into three categories. The European Parliament voted in favor of adopting the act by a large majority, and June 14 has been set as the tentative date for its adoption.

OpenAI’s Systems and the AI Act

Altman is reportedly concerned that OpenAI’s systems, such as ChatGPT and GPT-4, could be designated as “high risk” under the regulation, according to Time. That designation would require the company to meet certain safety and transparency requirements, such as disclosing that its content was AI-generated. Systems in the highest risk category would be banned outright, those in the second category would be subject to specific legal requirements, and those in the third would be largely left unregulated. Additionally, companies would have to design their AI models so they do not generate illegal content and publish summaries of the copyrighted data used for training.

OpenAI’s GPT-4 Model

When OpenAI released GPT-4 in March, some in the AI community were disappointed that the company did not disclose what data was used to train the model, how much it cost, or how it was created. Ilya Sutskever, OpenAI’s cofounder and chief scientist, previously told The Verge that this was due to competition and safety concerns. He said it took nearly all of OpenAI working together for a very long time to produce GPT-4, and many companies want to do the same thing. He also noted that while competition is top of mind now, safety will become more important in the future.

Altman’s Suggestion for Government Oversight

Altman has expressed concern about how the AI Act will affect OpenAI’s presence in Europe, but he recently told US senators that there should be a government agency overseeing AI projects “above a certain scale of capabilities.” He suggested granting licenses to AI companies and revoking them if the companies overstep safety rules.

It remains unclear how exactly these proposed regulations will affect OpenAI’s operations in Europe, but one thing is certain: the details really do matter when it comes to regulating artificial intelligence. Companies like OpenAI must comply with all safety requirements or face potentially serious consequences.
