How AI Moderation Works on Demox

By Tom, Founder of Demox

The biggest complaint people have about online forums is moderation. Not that it exists — most people agree some moderation is necessary — but that it's inconsistent, opaque, and often driven by personal bias. A moderator having a bad day can nuke your post. A moderator with a political agenda can quietly suppress viewpoints they disagree with. And you'll never know, because there's no public record.

Demox solves this with what we call the AI Council.

How the council works. Instead of giving moderation power to individuals, Demox uses a system of specialized AI agents, each with a specific focus: one handles spam detection, another reviews content for rule violations, and another monitors community health. Every agent evaluates content against the same fixed, publicly available ruleset.

When a piece of content is flagged, the relevant agent reviews it against the rules. The rules are narrow and specific: no spam, no illegal content, no doxxing, no direct targeted harassment, and no threats of violence. If the content doesn't violate any of these rules, it stays up — regardless of whether the AI "agrees" with the opinion expressed.
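To make the rule check concrete, here's a toy sketch in Python. It is not Demox's actual code; the rule names mirror the list above, and the keyword-matching classifiers are stand-ins for whatever models the real agents use.

```python
# The five narrow rules from the post. Content violating none of them stays up.
RULES = ["spam", "illegal_content", "doxxing", "targeted_harassment", "violent_threats"]

def evaluate(content, classifiers):
    """Return the list of rules the content violates; an empty list means it stays up."""
    return [rule for rule in RULES if classifiers[rule](content)]

# Toy classifiers: real agents would be ML models, not keyword checks.
classifiers = {
    rule: (lambda text, r=rule: r.split("_")[0] in text.lower())
    for rule in RULES
}

print(evaluate("I disagree with this policy", classifiers))  # [] -> stays up
print(evaluate("this is spam spam spam", classifiers))       # ['spam'] -> removed
```

Note that an unpopular opinion produces an empty violation list and stays up; only content matching one of the narrow rules gets acted on.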

The Councilor. A lead AI agent called the Councilor synthesizes input from the specialized agents and makes final platform-level decisions. Think of it as a chief judge who considers recommendations from subject-matter experts. The Councilor doesn't act on personal preference because it doesn't have personal preferences. It applies rules consistently.
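A rough sketch of that synthesis step, assuming each specialized agent reports the violations it found and the Councilor merges them into one final decision (the function and agent names here are illustrative):

```python
def councilor_decide(agent_reports):
    """Merge per-agent findings into a final platform-level decision.

    agent_reports: dict mapping agent name -> list of rule violations it found.
    """
    all_violations = sorted({v for report in agent_reports.values() for v in report})
    action = "remove" if all_violations else "keep"
    return {"action": action, "violations": all_violations}

reports = {"spam_agent": [], "rules_agent": ["doxxing"], "health_agent": []}
print(councilor_decide(reports))  # {'action': 'remove', 'violations': ['doxxing']}
```

The point of the design is that the decision is a pure function of the reports and the rules: same inputs, same output, every time.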

Full transparency. Every moderation action on Demox is recorded in a public moderation log that anyone can read. You can see exactly what was removed, which rule it violated, and which agent made the decision. There's no shadow-banning. There's no quiet suppression. If your content gets removed, you know about it and you know why.
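Here's one way a public log entry could look, based on the three things the post says each record exposes (what was removed, which rule, which agent). The field names are assumptions, not Demox's actual schema.

```python
import datetime
import json

def log_entry(content_id, rule, agent, action):
    """Build a moderation-log record exposing the what, the why, and the who."""
    return {
        "content_id": content_id,        # what was acted on
        "action": action,                # e.g. "removed"
        "rule_violated": rule,           # which public rule applied
        "deciding_agent": agent,         # which agent made the call
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

entry = log_entry("post-123", "doxxing", "rules_agent", "removed")
print(json.dumps(entry, indent=2))  # the record anyone can read
```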

Appeals that actually work. If you disagree with a moderation decision, you can appeal. Your appeal is reviewed by a different AI agent — not the one that made the original decision. This eliminates the conflict of interest built into every platform where the person who banned you is also the person who reviews your appeal.
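The routing rule is simple enough to sketch: whoever reviews the appeal is drawn from the pool of agents minus the one that made the original call. Agent names here are illustrative.

```python
import random

AGENTS = ["spam_agent", "rules_agent", "health_agent"]

def assign_appeal_reviewer(original_agent, rng=random):
    """Pick a reviewing agent, excluding the one that made the original decision."""
    candidates = [a for a in AGENTS if a != original_agent]
    return rng.choice(candidates)

reviewer = assign_appeal_reviewer("rules_agent")
assert reviewer != "rules_agent"  # the original decision-maker never self-reviews
```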

Why AI instead of humans? AI agents don't have egos. They don't hold grudges. They don't sell their moderator positions. They don't form cliques. They apply the same rules to everyone, every time. They can be wrong — no system is perfect — but when they are wrong, the public log makes it visible and the appeal system provides a genuine path to correction.

This isn't moderation by algorithm in the way most people fear. It's moderation by transparent, auditable rules applied consistently. The rules are public. The decisions are public. The appeals are real. That's the system.