GenLayer had built a new kind of blockchain where multiple AI models reach consensus on subjective decisions. The tech was strong, but there was no product experience around it. I joined to design Rally from scratch: a platform where creators are paid based on the quality of their submissions, as evaluated by AI.
The hard part was not drawing screens. The hard part was designing trust for a multi-model AI system where established patterns did not yet exist.

The product faces two sides of the same trust problem. Creators need to feel that AI is evaluating their work fairly, and campaign managers need to feel that AI is spending budget responsibly. Both groups need enough visibility into decisions they do not fully control.
I was the sole product designer. I owned research, wireframes, prototypes, final UI, and the design system, working directly with AI researchers and the CPO.
Figma for design, Claude Code, v0, and Replit for rapid prototyping and implementation.
Interviews with creators and campaign managers showed one consistent pattern: people do not trust AI decisions they cannot inspect. Transparency was not a nice-to-have; it was a baseline requirement.
Another useful insight: creators pushed back on fixed-price framing but responded well to pool-based competition and relative ranking. That directly shifted the reward model.
“I trust the fairness of AI more than human review, as long as the criteria are clear.”
CONTENT CREATOR, USER INTERVIEW
“$5,000–10,000 USDC minimum pool size for meaningful participation. Anything less and creators won’t bother.”
CAMPAIGN MANAGER, USER INTERVIEW
The pool-size signal shaped campaign setup decisions. I added minimum budget gates and clear visibility into the total reward pool before creators committed effort to submissions.
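To make the pool-based, relative-ranking reward model concrete, here is a minimal sketch of splitting a campaign pool by rank. The function name, the harmonic rank weights, and the data shapes are illustrative assumptions, not Rally's actual payout logic; only the minimum budget gate figure comes from the interview signal above.

```python
# Hypothetical sketch: splitting a campaign's reward pool by relative rank.
# The harmonic weighting scheme is an assumption for illustration only.

MIN_POOL_USDC = 5_000  # minimum budget gate from the interview signal


def split_pool(pool_usdc: float, scores: dict[str, float]) -> dict[str, float]:
    """Distribute the pool proportionally to harmonic rank weights (1, 1/2, 1/3, ...)."""
    if pool_usdc < MIN_POOL_USDC:
        raise ValueError("Pool below the minimum budget gate")
    ranked = sorted(scores, key=scores.get, reverse=True)
    weights = {creator: 1 / (i + 1) for i, creator in enumerate(ranked)}
    total = sum(weights.values())
    return {c: round(pool_usdc * w / total, 2) for c, w in weights.items()}
```

The point of a relative scheme like this is that payouts depend on how submissions rank against each other, not on a fixed per-submission price, which matches what creators responded to in research.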

Rally is a two-sided marketplace. Campaign managers publish briefs and budgets, creators submit content, and multiple AI models evaluate each submission before consensus. Payouts are tied to performance, not follower count.
The central interface was the evaluation criteria system. It maps AI gates like alignment, accuracy, compliance, originality, engagement, and technical quality into a format managers can configure and creators can read at a glance.
I treated the decision flow as a glass box instead of a black box. Each evaluation shows what passed, what failed, and by how much, rather than only showing approved or rejected.
From discovery to payout in 24 hours

From budget to a scaled creator network

When several models review the same submission, disagreement is normal. I designed a consensus view that shows each model's score before the final decision so users can see where agreement came from.
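The shape of that consensus view can be sketched as data: per-model scores stay visible alongside the aggregate rather than being collapsed away. Taking the median is an assumption for illustration; the actual consensus mechanism on GenLayer may work differently.

```python
from statistics import median

# Illustrative sketch of the consensus view's data shape. "median" as the
# aggregation rule is an assumption, not GenLayer's actual mechanism.


def consensus_view(model_scores: dict[str, float], threshold: float) -> dict:
    """Keep per-model scores visible alongside the final decision."""
    final = median(model_scores.values())
    return {
        "per_model": model_scores,  # shown so users see where agreement came from
        "final_score": final,
        "decision": "approved" if final >= threshold else "rejected",
    }
```

Keeping `per_model` in the payload is the design choice: the UI can lead with the final decision and let users drill into where the models disagreed.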
Campaign managers control 11 criteria with sliders. On the creator side, the same data appears as progress indicators, so one system supports both control and transparency.
Trust usually breaks at rejection, so the failure state had to be specific. If a creator scores 72% on originality against an 80% threshold, they immediately know what to fix.
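A minimal sketch of that per-gate feedback logic, assuming scores and thresholds arrive as simple percentage maps (the criteria names come from the case study; the data shapes and function name are hypothetical):

```python
# Hypothetical sketch of per-criterion gate evaluation producing the kind of
# specific failure feedback described above. Data shapes are assumptions.


def evaluate_gates(scores: dict[str, float], thresholds: dict[str, float]) -> list[str]:
    """Return specific, actionable feedback for each failing gate."""
    feedback = []
    for criterion, threshold in thresholds.items():
        score = scores.get(criterion, 0.0)
        if score < threshold:
            feedback.append(
                f"{criterion}: {score:.0f}% vs {threshold:.0f}% required "
                f"({threshold - score:.0f} points short)"
            )
    return feedback
```

Rather than a binary rejection, the creator from the example above would see "originality: 72% vs 80% required (8 points short)" and know exactly what to fix.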
The temptation was to expose every score and every weight. In practice, people want confidence that details are available, not a wall of diagnostics by default. So the UI leads with summary and lets users drill deeper.
The product is still in alpha, but early signals were strong. People came back, kept submitting, and did not drop after rejection. They iterated and tried again.
I would run usability tests earlier on the criteria interface. The first version surfaced too many gates at once, and that should have been validated sooner. I would also bring real creator content into prototypes earlier because the emotional response is very different from sample data.
The biggest lesson was that trust does not come from dumping model logic on users. Trust comes from clear outcomes, with deeper rationale available when people choose to inspect it.
As AI systems influence hiring, moderation, evaluation, and spending, the core design question stays the same: how do we keep people informed and in control without slowing the system to a halt? Rally was my first deep pass at that problem, and it continues to shape how I design.