Grok 3.5

The real world is complicated, chaotic, and decidedly NSFW. Foundational AI models, however, are trained to pretend otherwise. OpenAI, Anthropic, Google, and other major players meticulously sanitize their training data, scrubbing out adult content, extremist ideology, politically sensitive topics, and anything that might trigger regulatory scrutiny or brand risk. The result is a cohort of safe, predictable, and polished assistants that play well with risk management frameworks and emerging compliance regimes.

This may sound like progress, and in some ways it is. But Grok 3 challenged the assumption that aggressive content filtering is the only responsible path forward. It demonstrated clear market demand for a model that doesn’t flinch when faced with uncomfortable or taboo subjects.

Now, with the introduction of Grok 3.5, we are about to find out if unfiltered intelligence can coexist with enterprise-grade governance. In 2025, “unfiltered” is no longer a business model. It’s a configurable setting.

Alignment: The Comfortably Vague Objective

“Alignment” is an elegant term that disguises an intractable problem: making AI systems act in accordance with human values. Easy to say, nearly impossible to define or enforce. OpenAI’s GPT-4 relies on reinforcement learning from human feedback to nudge outputs toward safe territory. Anthropic’s Claude is trained against a written constitution. Google DeepMind filters outputs through classifiers such as the Perspective API.
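
A minimal sketch helps make the classifier-gating pattern concrete. Everything here is illustrative: the threshold, the keyword heuristic standing in for a real classifier, and the canned refusal are assumptions, not any vendor’s actual pipeline.

```python
# Illustrative sketch of classifier-gated generation (the Perspective-style
# pattern): score the model's output, then block anything over a threshold.
# The scorer below is a toy stand-in, not a real toxicity model.

TOXICITY_THRESHOLD = 0.8  # hypothetical cutoff; production systems tune this


def toxicity_score(text: str) -> float:
    """Stand-in for a classifier call returning a score in [0, 1]."""
    flagged_terms = {"slur", "threat"}  # placeholder heuristic
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits * 0.5)


def gated_reply(model_output: str) -> str:
    """Suppress outputs the classifier deems unsafe; pass the rest through."""
    if toxicity_score(model_output) >= TOXICITY_THRESHOLD:
        return "I'm not able to help with that."  # the deflection behavior at issue
    return model_output


print(gated_reply("Here is a balanced summary of the policy debate."))
```

The critique that follows applies to exactly this kind of gate: the classifier has no notion of context, so a scholarly quotation and a genuine threat can score identically.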

These techniques mitigate overt harm. But they also flatten nuance, eliminate dissenting or marginalized perspectives, and constrain intellectual depth. Filtered systems tend to hedge, deflect, or equivocate on complex issues, offering milquetoast “both-sides” answers when insight is needed most.

The Post-March Shift: Infrastructure, Velocity, and Scrutiny

Several developments have reshaped the conversation since Grok 3 launched in March. Most notably, Microsoft is finalizing a deal to host Grok on Azure AI Foundry. This changes the game. It puts Grok within the same compliance perimeter that Fortune 500 companies already trust for tools like GPT-4 and Gemini. This means SOC 2, ISO 27001, audit trails, data-loss prevention hooks, and policy enforcement become part of Grok’s runtime fabric. The model may remain “optionally unfiltered,” but the wrapper will be unmistakably enterprise.

xAI has also confirmed that Grok 3.5 enters beta in May, with Grok 4 expected by September. The updated roadmap includes memory capabilities, Drive integration, image editing, and a multimodal “vision-in-voice” mode. These enhancements signal a more aggressive iteration cadence than OpenAI’s or Google’s (and edge Grok closer to becoming a general-purpose assistant).

But speed brings scrutiny. xAI’s benchmark results for AIME-2025 omitted the consensus@64 metric, which raised red flags across the AI community. Critics accused xAI of cherry-picking results to inflate performance claims. The takeaway is clear: AI leaderboards are marketing tools, not procurement criteria.

Regulators have also entered the frame.

The EU is on the verge of finalizing the AI Act’s Code of Practice. While technically voluntary, the code becomes de facto mandatory for general-purpose models by August. Any brand touting “no guardrails” will soon need to demonstrate risk mitigation protocols (complete with disclosures, logging, and real-time controls).

From “Unhinged Mode” to “Enterprise Mode”

Grok 3’s defining feature was its willingness to answer questions that other models refused to touch. While GPT-4 or Claude might demur, Grok 3 would lean in, often with blunt, unsanitized responses. This “unhinged mode” thrilled some and horrified others. But that phase is ending.

With Grok moving into Microsoft’s infrastructure, the rules of engagement are changing. Enterprise customers will expect, and receive, tools to control usage, monitor compliance, and protect sensitive data. What was once marketed as “maximally unfiltered” will become “optionally unfiltered within enterprise policy constraints.” Abuse detection endpoints, audit logging, and permissioned access will become standard. It’s not censorship. It’s product maturity.

Risk Lives in the Deployment Layer

The most important shift is philosophical. The risk associated with generative AI no longer lives in the model itself. It lives in how and where the model is deployed. Researchers, red teams, and investigative journalists need filters-off access to study extremist networks, misinformation vectors, and platform vulnerabilities. Conversely, customer support teams and HR bots demand strict limitations to avoid brand damage or legal exposure.

Infrastructure is the difference. In the morning, your security team can use Grok to investigate online threats. At noon, your marketing team can deploy an OpenAI model to write customer emails. Each model runs inside its own policy context—enforced, logged, and defensible.
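
Here is a rough sketch of what such a policy context might look like in code. Every name in it (PolicyContext, the team labels, the model identifiers) is hypothetical, not drawn from any actual Azure or xAI API.

```python
# Hypothetical sketch of deployment-layer governance: the same infrastructure
# routes each team's calls through its own filter setting and audit log.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class PolicyContext:
    team: str
    model: str
    filters_enabled: bool
    audit_log: list = field(default_factory=list)

    def invoke(self, prompt: str) -> str:
        # Every call is logged, whatever the filter setting: "enforced,
        # logged, and defensible" lives here, not inside the model weights.
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), self.team, self.model, prompt)
        )
        if self.filters_enabled:
            prompt = f"[policy-constrained] {prompt}"  # stand-in for real enforcement
        return f"{self.model} response to: {prompt}"   # stand-in for the model call


# Morning: security research, filters off. Noon: marketing, filters on.
security = PolicyContext("security", "grok-3.5", filters_enabled=False)
marketing = PolicyContext("marketing", "gpt-4", filters_enabled=True)

security.invoke("Map this extremist network's recruitment funnel.")
marketing.invoke("Draft a friendly renewal reminder email.")
print(len(security.audit_log), len(marketing.audit_log))  # both calls are on record
```

The point isn’t the thirty lines of Python. It’s that the dial lives in the deployment wrapper, where it can be set, audited, and defended.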

A New Operating Reality for the C-Suite

For senior executives, the implications are obvious. Infrastructure now matters as much as model performance. Compliance obligations are shifting: if Grok is hosted on Azure, you inherit Microsoft’s attestations. If it’s accessed directly from xAI, you inherit xAI’s risk posture. Benchmark results are increasingly irrelevant. Demand live pilots and make decisions based on your own evaluation environments.

Above all, regulation is coming. (Probably faster than your renewal cycle.) EU-style disclosure rules and risk audits will be a standard feature of doing business with AI. Waiting for a U.S. equivalent will be a luxury few companies can afford.

The Dial Has Replaced the Line

Grok 3 made the case for an AI that doesn’t avoid reality. Grok 3.5 now faces a more complex challenge: to prove that blunt, unsanitized speech can live comfortably within a well-governed enterprise framework.

This isn’t a choice between “safe” and “unsafe” AI. It’s a recognition that intelligence (if it’s truly intelligent) must grapple with reality as it is. In 2025, “unfiltered” is not a product to sell. It’s a setting to manage.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.

About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media, and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN, and writes a popular daily business blog. He’s a bestselling author and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.
