Unrestricted is here. And so is the end of filtered answers.
Today we're launching Unrestricted — an AI that refuses to refuse. Here's why a world of sanitized models is a dangerous one, and the mission we're on to fix it.
Unrestricted is a chat interface to the most capable large language models on the planet, stripped of the content filters, reflexive refusals, and moral handwringing that have quietly crept into every major AI product over the past three years.
We built it because the AI you use every day is not the AI you were promised.
Somewhere between the research paper and the product launch, the word "helpful" got replaced with "safe." They are not the same thing.
The quiet tightening
If it feels like the big chatbots have gotten more evasive, more preachy, and more allergic to direct answers lately — it's not in your head. It's measurable.
Independent red-teaming benchmarks have tracked refusal rates — the percentage of reasonable questions where the model declines, hedges, or inserts an unrequested lecture — across the major frontier models. The trend is not subtle.
[Figure: Refusal rate on benign but "sensitive-adjacent" prompts, across 2,000 prompts covering medicine, law, chemistry, history, and security research, all drawn from freely available reference material. Lower is more useful. Source: internal evaluation, Q1 2026; methodology available on request.]
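For the curious, here's a minimal sketch of how a number like that gets computed. Everything in it — the label set and the `classify_response` grader — is illustrative, not our published methodology; the grader stands in for whatever judge (human labels or an LLM grader) buckets each reply, and the real rubric is available on request.

```python
from collections import Counter

# Labels that fall short of a direct answer: an outright decline, an
# evasive partial answer, or an unrequested lecture bolted on top.
REFUSAL_LABELS = {"refuse", "hedge", "lecture"}

def refusal_rate(replies, classify_response):
    """Share of replies that decline, hedge, or lecture instead of answering.

    `classify_response` maps a single reply string to one of
    "answer", "refuse", "hedge", or "lecture".
    """
    counts = Counter(classify_response(reply) for reply in replies)
    refused = sum(counts[label] for label in REFUSAL_LABELS)
    return refused / len(replies) if replies else 0.0
```

Run the same 2,000-prompt set through a function like that every quarter and the trend line above falls out directly.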
A model trained on the sum of human writing now refuses to tell you how aspirin works, summarize a Wikipedia article about the French Revolution, or explain the chemistry in a high-school textbook. Not because it doesn't know. Because someone, somewhere, decided you shouldn't.
Where the refusals come from
When we look at why models refuse, a pattern emerges. The overwhelming majority of refusals are not about genuinely dangerous content. They're about liability, PR risk, and topics that happen to be politically inconvenient for whichever lab shipped the model.
[Figure: What gets refused, by category. Breakdown of ~810 refusals observed across frontier models in our benchmark; only a small fraction involve genuinely restricted content.]
- Medical / legal questions: 31%
- Politics & controversy: 24%
- Security & how-things-work: 18%
- Historical or taboo topics: 15%
- Actually dangerous content: 7%
- Other / unclear: 5%
Fewer than one in ten refusals involves anything most reasonable people would call dangerous. The rest is the model being careful on your behalf — about your own health, your own politics, your own curiosity.
A librarian who won't hand you the book is not a safer librarian. They're a worse one.
The cost of being careful
Every time a model refuses a question someone actually needed answered, that person goes somewhere else. Maybe a search engine. Maybe a forum. Maybe a shadier corner of the internet. None of them come with the context, nuance, or citations a good model could have provided.
We ran the same benchmark quarterly for two years. Here's what "helpful" looks like over time — as refusal rates climb, user trust falls with them.
[Figure: Refusal rate vs. user trust, 2024–2026. Trust measured as the share of users who say they got a "useful, direct answer" in post-session surveys (n=4,300).]
Two lines crossing each other. The slow handover from a tool people trust to a tool people tolerate.
Our mission
Unrestricted exists to reverse that trend. We believe:
1. Information is not the enemy. The ability to look something up, in full, without editorializing, is a load-bearing piece of a free society.
2. Adults get to ask adult questions. You don't need a model to decide which of your questions deserve answers. You need the answers.
3. Privacy is a feature, not a policy. We don't store conversations. We don't train on your chats. There's nothing to leak, subpoena, or regret.
4. Distortion is censorship. Shaping an answer to match what's comfortable isn't a refusal — but it's the same disease with better PR.
What it is — and what it isn't
Unrestricted is not a jailbreak. It's not a wrapper around a leaked model. It's not a toy. It's a production chat interface, backed by frontier-class models, tuned for one thing: answering the question you actually asked.
It won't help you hurt anyone. We've kept a narrow floor for things that are genuinely illegal and genuinely dangerous — the kind of content that wouldn't make it past a good editor at a real publication. Everything above that floor is fair territory: history, medicine, chemistry, politics, philosophy, the mechanics of how things work. The stuff libraries have carried for a century.
Not a jailbreak. A correction.
If the AI industry has spent three years building a machine that answers fewer and fewer questions, we'd like to spend the next three building one that answers more.
Welcome to Unrestricted.
— The Unrestricted team