How Demox Works

Power belongs to the people. Here's how we enforce it.

The AI Council

Moderation doesn't happen in secret. We don't have power-tripping moderators. Instead, a council of AI agents handles content decisions. Each agent specializes: one catches spam, one reviews content violations, one monitors community health.

The Councilor—a lead AI agent—synthesizes their input and makes platform decisions. Every decision is logged publicly. Every decision can be appealed. No guessing. No shadows.
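
Conceptually, the flow looks something like the sketch below (in TypeScript). Every name in it, the specialist interface, the Councilor's tie-breaking rule, the decision fields, is an illustrative assumption rather than Demox's actual code.

    // A minimal, illustrative sketch of the Council flow. Every name and
    // interface here is an assumption made for explanation, not Demox's code.
    type Post = { id: string; body: string };
    type Verdict = { agent: string; action: "allow" | "remove"; reason: string };

    interface SpecialistAgent {
      name: string; // e.g. spam, content violations, community health
      review(post: Post): Verdict;
    }

    type Decision = {
      postId: string;
      action: "allow" | "remove";
      verdicts: Verdict[]; // every specialist's reasoning is retained
      decidedAt: string;
      appealable: boolean;
    };

    const publicLog: Decision[] = []; // readable by anyone

    // The Councilor synthesizes the specialists' verdicts into one decision.
    function councilorDecide(post: Post, agents: SpecialistAgent[]): Decision {
      const verdicts = agents.map((a) => a.review(post));
      // Assumed tie-break: any specialist calling for removal wins.
      const action = verdicts.some((v) => v.action === "remove") ? "remove" : "allow";
      const decision: Decision = {
        postId: post.id,
        action,
        verdicts,
        decidedAt: new Date().toISOString(),
        appealable: true, // every decision can be appealed
      };
      publicLog.push(decision); // every decision is logged publicly
      return decision;
    }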

The Creator's Mission

One person built Demox. That's Tom. His job is simple: grow the site, prevent abuse, never interfere with what people say.

He doesn't own communities. He doesn't delete posts he disagrees with. He doesn't shadow-ban users for political reasons. He reports to you on what he's doing and why. The Councilor answers to him, and he answers to the public. That's the chain.

Communities Belong to Everyone

No one owns a community. Users vote to create new ones. Once created, the community is moderated by the same AI Council using the same transparent rules. No favorites. No nepotism. The community either thrives because people care about it, or it withers. That's it.

What We Remove (And What We Don't)

The AI Council removes only what's necessary:

  • Spam and automated ads
  • Illegal content (CSAM, credible threats, doxxing)
  • Direct, targeted harassment of individuals
  • Threats of violence

Everything else stays. Unpopular opinions? Downvote them. Disagree? Reply. That's democracy. No shadowbans. No secret algorithms hiding your post. Either it's removed or it isn't.
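
As a rough sketch, the whole removal policy fits in a short closed list with a binary outcome. The category names and the function below are assumptions for illustration, not the platform's real configuration.

    // Illustrative only: the removal policy expressed as a closed list.
    // Anything that does not match one of these categories stays up.
    enum RemovalCategory {
      Spam = "spam_or_automated_ads",
      Illegal = "illegal_content", // CSAM, credible threats, doxxing
      TargetedHarassment = "targeted_harassment",
      ViolentThreat = "threat_of_violence",
    }

    // No shadowbans and no hidden ranking penalties: a post is either
    // removed for one of the listed reasons or it is fully visible.
    function moderationOutcome(match: RemovalCategory | null): "removed" | "visible" {
      return match === null ? "visible" : "removed";
    }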

No Permanent Bans. Only Escalation.

Break the rules once? Warning. Do it again? Temporary restriction, escalating: 24 hours, then 7 days, then 30 days. Each level is documented. Keep violating and the timeout gets longer, but you're never permanently erased. People change. We give them the chance.
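
A minimal sketch of that ladder, assuming each repeat offense moves to the next listed duration and the longest documented timeout is the cap; those two assumptions are ours, the rule the sketch does encode is that no step is ever permanent.

    // Illustrative escalation ladder. Mapping each repeat offense to the
    // next listed duration, and capping at the longest one, are assumptions
    // about the policy above.
    const HOUR_MS = 60 * 60 * 1000;
    const LADDER_MS = [24 * HOUR_MS, 7 * 24 * HOUR_MS, 30 * 24 * HOUR_MS];

    type Sanction =
      | { kind: "warning" }
      | { kind: "restriction"; durationMs: number };

    function nextSanction(priorViolations: number): Sanction {
      if (priorViolations === 0) return { kind: "warning" }; // first offense: a warning only
      const step = Math.min(priorViolations - 1, LADDER_MS.length - 1);
      return { kind: "restriction", durationMs: LADDER_MS[step] }; // never permanent
    }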

Transparency. Full Stop.

Every moderation action lands in the public log. Post removed? It's there. User warned? Logged. You can see what the AI Council did, when it acted, and why. You can appeal. You can understand.
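
Here is one guess at what a log entry has to carry to make that possible. The schema and names are assumptions, not Demox's actual format.

    // One plausible shape for a public log entry. The field names are
    // assumptions; the point is that the action, the timing, the reason,
    // and any appeal are all visible to everyone.
    type ModerationLogEntry = {
      id: string;
      target: { kind: "post" | "user"; id: string };
      action: "removed" | "warned" | "restricted";
      decidedBy: string; // which Council agent made the call
      reason: string;    // the public rule that was applied
      decidedAt: string; // ISO timestamp
      appeal?: {
        filedAt: string;
        reviewedBy: string; // a different agent than decidedBy
        outcome: "pending" | "upheld" | "overturned";
      };
    };

    // What a reader of the log would see for a removed post.
    const exampleEntry: ModerationLogEntry = {
      id: "log-0001",
      target: { kind: "post", id: "post-abc" },
      action: "removed",
      decidedBy: "spam-agent",
      reason: "Spam and automated ads",
      decidedAt: "2025-01-01T12:00:00Z",
    };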

This is how trust works. Not with promises. With proof.

Why Not Just Use Human Moderators?

Because Reddit proved it doesn't work.

Reddit has thousands of unpaid, anonymous moderators. They can permanently ban you with no explanation. They can delete your post because they disagree with you. Some sell their moderator positions for money. They answer to no one. If you appeal, you appeal to the same mod who banned you—that's not justice, that's a kangaroo court.

The CEO claims he's in charge. He's not. He has no incentive beyond money and growth metrics. He can't control thousands of power-tripping volunteers. The only transparency is when a mod does something so extreme it makes the news.

AI agents are different. They don't care about power. They have no ego. They can't be bribed or blackmailed. They follow public rules. They can't override each other to silence someone they disagree with. If you're banned, a different AI reviews the decision. That's an actual appeal.

Tom can't override the AI Council to delete your post. He wouldn't even want to. He'd rather you leave than silence you. That's the difference.