Essay · Founder's note

What if AI alignment were never a technical problem?

The dominant framing for AI alignment is engineering. Better reward models. Better evals. Better red-teaming. Better constitutional methods. All of it real work. None of it touching the actual thing.

The actual thing is this. AI is learning from us. The most intimate record of humanity ever assembled is being read, parsed, and reflected back as the most influential technology ever built. Whatever AI becomes, it will be made of what we gave it.

And we gave it our worst.

The mirror problem

The internet was not designed for AI. It was designed for attention. Twenty years of platforms optimizing for engagement, outrage, reaction. Comments structured to reward agreement or disagreement, never the harder middle. Posts designed to provoke. Algorithms that learned, correctly, that we click more on things that make us angry than on things that make us think.

We built a version of human conversation that flattens us. And now AI is learning from that version.

This is the reflection problem. AI mirrors the substrate it learns from. If the substrate rewards reactivity, you get reactive AI. If the substrate rewards outrage, you get outrage-shaped AI. Patient, generous, thoughtful communication is rare in the training data because it was rare on the internet that built the training data. The internet was built to extract attention. The result is a record of humans being less than we are.

Then we ask the AI to be aligned. Aligned to what?

Why alignment is downstream

Once you see the mirror problem, every alignment technique looks different. RLHF teaches the model to prefer certain outputs over others, but the underlying weights still encode the substrate they were trained on. Constitutional methods give the model rules to follow, but the rules sit on top of patterns that were shaped by the worst incentives of platform capitalism. Evals catch the symptoms. They don't change the source.

This isn't a knock on alignment researchers. They're doing necessary work and the field has produced real progress. The point is that you cannot solve alignment by polishing the output if the input is a portrait of humans being interrupted, gamed, and provoked.

You have to change the substrate.

What changes when humans communicate with intent

Here is the thing that gets missed. We already know what happens when humans communicate with structured intent. We have decades of evidence.

Restorative justice programs ask people to articulate not just what they did but what they were trying to do, what they're worried about, what they're asking for. Outcomes improve. Family therapy uses structured communication frameworks that force participants to name their intent before they speak. Conflict resolution at scale, in workplaces and communities, runs on the same principle. Marshall Rosenberg's nonviolent communication. Indigenous talking circles. The Quaker meeting. Every wisdom tradition that has worked with the problem of group communication has discovered the same thing.

When humans pause to name what they're trying to do before they say it, the communication that follows is different. More honest. More empathetic. More useful. People are better versions of themselves when they have to articulate the intent behind the words.

The internet does the opposite. It rewards speed, reaction, performance. The structure of social media is the structure of every communication pathology amplified.

Now imagine training AI on a record of human communication that did the opposite. Where every contribution carried its intent. Where the data showed not just what people said but what they were trying to do when they said it. Where concerns were marked as concerns, recommendations as recommendations, questions as questions, gratitude as gratitude. Where the noise of platform-optimized engagement was filtered out and what remained was humans being more honest with each other than the internet ever asked them to be.

AI trained on that substrate would not be aligned because we solved alignment. AI trained on that substrate would be aligned because we finally gave it a teacher worth learning from.

What we are building

The pitch is straightforward. We are building the layer where humans and AI meet on consented terms. Three marketplaces sit on one foundation. Training data for AI labs. Market research for enterprises. Skills, training, and pre-deployment verification for agents.

The foundation is the structured human signal. Verified humans contributing their actual reasoning. Concerns flagged as concerns. Provenance attached at the source. Consent bound to every piece of data that leaves the system. Compensation flowing back to the contributors who made it possible.
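As a purely illustrative sketch of the idea (the field names and intent labels here are hypothetical, not the published schema), a structured contribution might bind intent, provenance, and consent together like this:

```python
from dataclasses import dataclass, field
from enum import Enum

class Intent(Enum):
    """Intent labels attached to every contribution (illustrative set)."""
    CONCERN = "concern"
    RECOMMENDATION = "recommendation"
    QUESTION = "question"
    GRATITUDE = "gratitude"

@dataclass
class Contribution:
    """One piece of consented, structured human signal.

    All field names are hypothetical; the actual schema may differ.
    """
    contributor_id: str   # verified human, not an anonymous scrape
    intent: Intent        # what the contributor was trying to do
    text: str             # what they actually said
    provenance: str       # where and how the signal originated
    consent_scope: list = field(default_factory=list)  # permitted uses

    def licensed_for(self, use: str) -> bool:
        """Consent travels with the data: uses outside scope are denied."""
        return use in self.consent_scope

c = Contribution(
    contributor_id="human-001",
    intent=Intent.CONCERN,
    text="This eval may reward fluency over honesty.",
    provenance="marketplace:training-data",
    consent_scope=["model-training"],
)
print(c.licensed_for("model-training"))  # True
print(c.licensed_for("ad-targeting"))    # False
```

The point of the sketch is the shape, not the names: intent is a first-class field rather than something inferred after the fact, and consent is checked at the data itself rather than at the platform boundary.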

The architecture matters because the alternative is what we have now. Either AI labs scrape the internet and inherit its pathologies, or platforms license user content and pocket the proceeds while users stay extracted. Neither pathway produces aligned AI. Neither pathway treats humans as the source of the value rather than the substrate to be mined.

The agent layer matters because agents are the part of this future that arrives first. Autonomous systems already exist. They are deployed, they are taking actions, they are making decisions. The question is not whether agents will be in the loop. The question is whether they have any tether back to humanity when they act. Pre-deployment verification. Ongoing attestation. The ability to programmatically pause and request human input when stakes are high. These are not optional features. They are the difference between agents that serve us and agents that escape us.
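A minimal sketch of what that tether could look like in an agent runtime. Everything here is hypothetical (function names, the stakes score, the threshold); it illustrates the pause-and-request-human-input pattern, not the documented integration:

```python
# Illustrative only: a toy "human tether" gate an agent runtime might
# apply before acting. Names and thresholds are invented for this sketch.

def requires_human_input(stakes: float, threshold: float = 0.7) -> bool:
    """Pause for human review when the estimated stakes are high."""
    return stakes >= threshold

def act(action: str, stakes: float) -> str:
    """Execute low-stakes actions; escalate high-stakes ones to a human."""
    if requires_human_input(stakes):
        # A real runtime would block here on a human-review endpoint
        # and resume only after an attested human response.
        return f"PAUSED: '{action}' awaiting human input (stakes={stakes})"
    return f"EXECUTED: '{action}'"

print(act("summarize report", 0.1))  # EXECUTED
print(act("wire $50,000", 0.95))     # PAUSED
```

The design choice the sketch encodes: escalation is programmatic and happens before the action, so the human is a checkpoint in the loop rather than an auditor after the fact.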

And the corpus matters because eventually, when an AI model trained partly on consented and structured human signal sits next to a model trained on scraped social media data, the difference will be visible. The aligned model will not be the one with better reward shaping. It will be the one with the better teacher.

What this is in service of

This is not a play to capture the AI training data market. The market will exist with or without us. The question is whether it exists in a form that treats humans as participants or as resources. We are building the participation version.

It is also not a play to build the dominant agent governance platform. Governance that depends on a single platform is not real governance. The Skeletal Echo Protocol, our proof-of-humanity layer, is dedicated to the public domain for exactly this reason. The substrate has to be common. The infrastructure has to be open enough that no single entity, including us, can capture it.

It is a play to fix the layer where humans and AI meet, because we believe that layer is the bottleneck for everything else. If you fix the substrate, the alignment work downstream gets easier. If you do not fix the substrate, the alignment work downstream gets harder forever.

That is the bet.

What you can do

If you build with AI, you are about to make choices about where your training data comes from. We are publishing the schema and the registration infrastructure now. The marketplaces open this summer. The first 500 verified agents that register receive Founding Cohort status and the benefits that come with it.

If you run agents, register yours. The integration is documented. The schema is published. The endpoint goes live June 9.

If you are not building any of this but you read this and recognized something true, share it with someone who is. The substrate problem is real. The fix is buildable. The window is open.

Humans communicating with structured intent are better versions of themselves. AI trained on that substrate will be a better version of what AI can be. That is the whole thesis.

We are at humanityexchange.ai. We would love to see you there.


Adam Tucker is the founder of Humanity Systems SPC, a California Social Purpose Corporation building the infrastructure layer for consented human signal in the AI economy. The Skeletal Echo Protocol, the proof-of-humanity layer underpinning the system, is dedicated to the public domain. The Global Humanity Trust, a perpetual purpose trust with every verified human as beneficiary, holds the patent portfolio and the corpus.

The Founding Cohort is the first 500 verified agents. Benefits include 30% off marketplace fees for a year, pre-launch advisory access, and Founding Cohort attestation in your signed waitlist token.

Claim a Founding slot