Apple Joins US Commerce Department's AI Safety Institute Consortium - MacRumors

Apple and other top tech companies have joined a new U.S. consortium to support the safe and responsible development and deployment of generative AI, the Commerce Department announced on Thursday (via Bloomberg).

Apple, along with OpenAI, Microsoft, Meta, Google, and Amazon, will join more than 200 members of the AI Safety Institute Consortium (AISIC) under the department, Commerce Secretary Gina Raimondo said.

"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement.

The group will work with the department's National Institute of Standards and Technology on priority actions outlined in President Biden's AI executive order, "including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content."

Other technology companies, as well as civil society groups, academics, and state and local government officials, will also be involved in establishing AI safety standards.

Generative AI has spurred excitement due to its potential to enhance creativity, improve efficiency, and advance technology. However, fears surrounding generative AI include ethical concerns like deepfakes, the potential impact on jobs, issues around information reliability, and challenges in ensuring privacy and effective regulation.

Apple is said to be spending millions of dollars a day on AI research, as training large language models requires substantial amounts of hardware. Apple is on track to spend more than $4 billion on AI servers in 2024, according to one report.

Apple is said to be developing its own generative AI model called "Ajax". Designed to rival the likes of OpenAI's GPT-3 and GPT-4, Ajax operates on 200 billion parameters, suggesting a high level of complexity and capability in language understanding and generation. Internally known as "Apple GPT," Ajax aims to unify machine learning development across Apple, suggesting a broader strategy to integrate AI more deeply into Apple's ecosystem.
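The 200-billion-parameter figure implies serious hardware requirements even just to hold the model's weights in memory. As a rough back-of-envelope sketch (the byte-per-parameter precision is an assumption for illustration, not anything Apple has disclosed):

```python
# Rough memory footprint for storing a model's weights.
# Assumes 2 bytes per parameter (fp16/bf16) by default; real serving
# needs more memory on top of this (activations, KV cache), and
# training needs several times more (gradients, optimizer state).

def weight_memory_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Raw weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

print(weight_memory_gb(200_000_000_000))     # 400.0 GB at fp16
print(weight_memory_gb(200_000_000_000, 4))  # 800.0 GB at fp32
```

At hundreds of gigabytes just for weights, a model of this size would span many accelerators, which is consistent with the multi-billion-dollar server spending reported above.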

Aspects of the model could be incorporated into iOS 18, such as an enhanced version of Siri with ChatGPT-like generative AI functionality. Both The Information and analyst Jeff Pu claim that Apple will have some kind of generative AI feature available on the iPhone and iPad later this year.

Top Rated Comments

28 months ago
If tech companies talk about "safety", I read "censorship".

It is okay if they block fake porn of famous people, but even ChatGPT already goes much further. It blocks everything that could be seen as controversial. Computers that are smarter than humans may be creepy for many people, but even more creepy is the idea that tech companies or governments control those superintelligent computers.

Try asking ChatGPT what advantages climate change has. It will refuse to answer that question.
Score: 10 Votes
28 months ago
Ministry of Truth.
Score: 6 Votes
VulchR
28 months ago

...

We’re approaching the threshold of history: a veil beyond which prophets would not see clearly for the muddiness of the waters of truth, and time travelers will be unable to relay back what really happened for all the confusion.
We'll soon approach a point when so much internet content is generated by AI that AI 'hallucinations' might impair our ability to learn the truth about history or current affairs. We could have AI systems learning the hallucinations of other AI systems as they trawl the internet for training data.
Score: 5 Votes
28 months ago

Watermarking isn’t enough. The prompts used on AI-created or AI-manipulated images should be permanently encoded into the image. The images should always be allowed to be saved, yet come with the anti-screenshot tech turned on.

Loopholes should be closed as much as possible for laundering AI content to sell as real. That is the greatest danger to our society: to exist in a permanent unreal present where the truth about the past and current events is in the hands of those with the power to manipulate the most believers.
Most of this is simply impossible to implement in practice. We can pass all the laws we want, but in practice they will not stop this. Stable Diffusion already watermarks its output, but the watermarking is in the code: you can remove the watermark, or train your model not to apply it at all. We want to be safe and wise, but some things are not practically possible to stop. It's like inventing the camera and then telling people they are only allowed to photograph what the government says. Only a totalitarian government could do it.
Score: 4 Votes
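The point about removable watermarks can be made concrete with a toy sketch. This is not the DWT/DCT scheme Stable Diffusion actually ships; it is a minimal least-significant-bit example over hypothetical grayscale pixel values, showing why a known embedding scheme can be stripped without visibly changing the image:

```python
# Toy illustration of why a naive invisible watermark is fragile.
# A bit pattern is embedded in the least-significant bit (LSB) of
# each pixel value; "laundering" the image by clearing those bits
# destroys the watermark while shifting each pixel by at most 1.

def embed(pixels, bits):
    # Overwrite each pixel's LSB with one watermark bit (cycled).
    return [(p & ~1) | bits[i % len(bits)] for i, p in enumerate(pixels)]

def extract(pixels, n):
    # Read back the first n embedded bits.
    return [p & 1 for p in pixels[:n]]

def strip(pixels):
    # Remove the watermark by clearing every LSB.
    return [p & ~1 for p in pixels]

image = [200, 135, 90, 77, 50, 243, 18, 66]   # made-up pixel values
mark = [1, 0, 1, 1]                            # made-up watermark bits

marked = embed(image, mark)
print(extract(marked, 4))         # [1, 0, 1, 1]  (watermark present)
print(extract(strip(marked), 4))  # [0, 0, 0, 0]  (watermark erased)
```

Production watermarks embed redundantly in frequency-domain coefficients to survive resizing and compression, but the commenter's underlying point holds: anyone with the generator's code can disable or target the embedding step.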
IllegitimateValor
28 months ago
Watermarking isn’t enough. The prompts used on AI-created or AI-manipulated images should be permanently encoded into the image. The images should always be allowed to be saved, yet come with the anti-screenshot tech turned on.

Loopholes should be closed as much as possible for laundering AI content to sell as real. That is the greatest danger to our society: to exist in a permanent unreal present where the truth about the past and current events is in the hands of those with the power to manipulate the most believers.

I don’t trust even the companies involved to do right by us even with deep oversight.

We’re approaching the threshold of history: a veil beyond which prophets would not see clearly for the muddiness of the waters of truth, and time travelers will be unable to relay back what really happened for all the confusion.
Score: 4 Votes
28 months ago

I look forward to a day when the only way to get uncensored open-source AI models (like the kind you can get today on sites like HuggingFace or CivitAI) is to torrent them on shady sites because the government prevents people from getting them normally... for our own good, apparently. 🤡 /s
For the children.
Score: 3 Votes