
OpenAI strengthens its security measures to keep prying eyes at bay.

[Image: futuristic artificial intelligence technology against a green background, with black binary characters.]

Image credits: cundra / Getty Images

OpenAI has reportedly overhauled its security operations to protect against corporate espionage. According to the Financial Times, the company accelerated an existing security clampdown after Chinese startup DeepSeek released a competing model in January, with OpenAI alleging that DeepSeek improperly copied its models using “distillation” techniques.

The beefed-up security includes “information tenting” policies that limit staff access to sensitive algorithms and new products. For example, during development of OpenAI’s o1 model, only verified team members who had been read into the project could discuss it in shared office spaces, according to the FT.

And there’s more. OpenAI now isolates proprietary technology on offline computer systems, enforces biometric access controls for office areas (scanning employees’ fingerprints), and maintains a “deny-by-default” internet policy requiring explicit approval for external connections, per the report, which adds that the company has also increased physical security at its data centers and expanded its cybersecurity personnel.

The changes are said to reflect broader concerns about foreign adversaries attempting to steal OpenAI’s intellectual property, though given the ongoing poaching wars among American AI companies and increasingly frequent leaks of CEO Sam Altman’s comments, OpenAI may be attempting to address internal security issues, too.

We’ve reached out to OpenAI for comment.

This post is licensed under CC BY 4.0 by the author.