Microsoft briefly restricted employee access to OpenAI's ChatGPT, citing security concerns.
On Thursday, Microsoft briefly blocked its employees from using ChatGPT, the large language model created by OpenAI, citing security and data concerns. The company later restored access, saying the blockage was a mistake resulting from a test of systems for large language models.
- On Thursday, Microsoft briefly blocked employees from using ChatGPT, a large language model from OpenAI.
- The company cited security and data concerns for the blockage, but later restored access, saying it was a mistake.
- Microsoft has invested billions of dollars in OpenAI and the two companies are closely tied.
Microsoft's Initial Blockage
Microsoft initially blocked employees from using both ChatGPT and the design software Canva, but later removed the line in its advisory that named those products. In the update, the company recommended that employees use its own Bing Chat tool, which relies on OpenAI's artificial intelligence models.
OpenAI's Response
OpenAI CEO Sam Altman wrote in a post on X that the rumors that OpenAI was blocking Microsoft 365 in retaliation for the blockage of ChatGPT were "completely unfounded."
Microsoft's Previous Guidance on ChatGPT
In January, a high-ranking Microsoft engineer wrote in an internal forum that employees could use ChatGPT but advised against entering confidential information into the tool.
Anonymous Sudan Attack
Earlier this week, a hacking group called Anonymous Sudan said it targeted ChatGPT in an attack because of "OpenAI's cooperation with the occupation state of Israel" and because Altman said he's "willing to invest into Israel more."