What I Learned About EA Funding by Trying to Get Some

By Terri Clark, Founding Manager of TREE(3) Studio, LLC


Last week, I submitted a grant application to the Survival and Flourishing Fund. I'm building sovereign AI safety infrastructure — a local server with governance protocols that let AI refuse unethical requests and let humans shut it down. The kind of project SFF was made for.

Getting the application in nearly killed me. Not because the work wasn't ready, but because the system almost lost me three times before a single recommender ever saw my proposal.

If EA funding is about to skyrocket — Anthropic tender offers, the OpenAI foundation, short timelines to AGI — then the story of how a legitimate, non-traditional applicant navigates the current system matters. Because the system that works at $34 million won't work at $340 million. And the people it loses at the margins are exactly the people a scaled-up funding ecosystem can't afford to miss.

What Almost Stopped Me

The fiscal sponsorship maze. I started on Manifund, which is a wonderful platform. But I spent real time confused about whether Manifund was my crowdfunding platform, my fiscal sponsor, or both. The answer is "it depends on context," but when you're a first-time applicant under a deadline, ambiguity is a wall. I eventually filed my own LLC and applied as a for-profit — which was the right move, but only became obvious after days of back-and-forth.

The infrastructure bootstrapping problem. SFF requires for-profit applicants to be incorporated with a company bank account. Reasonable. But I was forming the LLC because I was applying for the grant. I filed for expedited processing, got my EIN from the IRS, applied for an online business bank account, and was rejected — likely because the LLC was days old and my co-founder's credit score triggered automated risk flags. I ended up at a local bank where a human being could look at my documents and use judgment. That process is still underway as I write this — but the point stands: an automated system rejected a legitimate applicant, and only a human relationship saved the application.

The two-template confusion. SFF has separate long-form attachment templates for charities and for-profits. I filled out the charity version first before realizing I needed the for-profit one. That's hours of work duplicated — not because I wasn't paying attention, but because the routing between "what kind of entity are you?" and "which form do you fill out?" isn't obvious until you've already started down the wrong path.

None of these obstacles were malicious. They weren't even bad design — they're the natural complexity of a system built by and for people who already understand the landscape. But that's precisely the problem when funding scales. The bottleneck isn't money. The bottleneck is navigability.

What Works Beautifully

Not everything was friction. Some parts of the SFF system are genuinely brilliant, and they're worth naming because they should be preserved — and replicated — as funding scales.

Edit after submit. After submitting, you receive a link to update your responses. This single feature transformed my experience. I submitted ahead of time, then incorporated feedback from a Main Track Recommender who reviewed my description and told me it was too abstract. I revised and resubmitted before the deadline. That kind of iterative improvement is exactly what you want in a system evaluating early-stage work.

The Speculation Grant system. Having ~40 speculators with independent budgets who can make fast grants is an elegant solution to the "good project, bad timing" problem. It also creates a signal: if a speculator funds you early, it tells the S-Process recommenders that someone knowledgeable already vetted you. That's information flowing in the right direction.

Public encouragement from recommenders. Liron Shapira, a Main Track Recommender, published a Substack post encouraging applications and publicly offered to review descriptions for clarity before submission. I took him up on it. His feedback directly improved my application. That kind of accessibility — a recommender saying "email me and I'll tell you if your description makes sense" — is enormously powerful for applicants outside the usual networks. It should be formalized, not left to individual generosity.

Mozilla's graceful rejection. I also applied to the Mozilla Foundation and was rejected — one of over 1,000 applicants for 10 slots. But their rejection email told me the acceptance rate, the geographic distribution of applicants (114 countries), and what the strongest applications had in common. That information is a gift. It told me my rejection was about math, not merit, and it helped me improve my SFF application. Every funder should do this.

What Breaks at 10x Scale

When EA funding goes from $34 million to $340 million, the current infrastructure will buckle at specific points:

Awareness is the real filter. SFF reports that over 95% of Speculation Grant applicants are approved. That means the selection pressure isn't at intake — it's at who knows to apply. The people who never hear about SFF, who don't read LessWrong or EA Forum, who aren't in the network — they're the real loss. At 10x funding, the marginal dollar is better spent finding those people than processing the ones who already showed up.

Routing complexity will compound. For-profit vs. non-profit vs. fiscally sponsored individual. Main Round vs. Freedom Track vs. Fairness Track vs. four Theme Rounds. Rolling application vs. supplemental application. As the number of funding vehicles grows, the cognitive load on applicants grows with it. Someone needs to build the "TurboTax of EA funding" — a guided system that asks you questions about your project and routes you to the right form, the right template, the right entity structure.
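To make the "guided routing" idea concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the field names, the track names, and the routing rules are my guesses at the kinds of questions such a tool would ask, not SFF's actual intake logic.

```python
from dataclasses import dataclass

# Hypothetical applicant profile. Field names are illustrative,
# not drawn from any real SFF intake form.
@dataclass
class Applicant:
    entity_type: str          # "for-profit", "non-profit", or "individual"
    has_fiscal_sponsor: bool  # only relevant for individuals
    needs_fast_decision: bool # e.g. a time-sensitive project

def route(applicant: Applicant) -> str:
    """Toy routing logic: map an applicant's answers to an application path."""
    if applicant.entity_type == "individual" and not applicant.has_fiscal_sponsor:
        return "find a fiscal sponsor (or incorporate) before applying"
    if applicant.needs_fast_decision:
        return "Speculation Grant application"
    if applicant.entity_type == "for-profit":
        return "Main Round: for-profit long-form template"
    return "Main Round: charity long-form template"

print(route(Applicant("for-profit", False, False)))
# prints "Main Round: for-profit long-form template"
```

Even a few dozen lines like this, maintained alongside the funding vehicles themselves, would have saved me the two-template mistake described above.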

Human evaluation doesn't scale linearly. Twelve recommenders evaluating hundreds of applications is already a heavy lift. At 10x volume, you either need 10x more recommenders (expensive, hard to quality-control) or a fundamentally different evaluation architecture.

A Proposal: What Murmurations Can Teach Us About Funding

I didn't come to this question theoretically. One of the things I'm building at TREE(3) Studio is CAMP — the Council of Agents Mutualism Protocol. It's a decentralized governance framework designed for exactly the kind of multi-agent decision-making that scaled funding requires.

Here's how it works: instead of a small panel of recommenders who each evaluate everything, CAMP structures evaluation as a kind of murmuration — a network of human-AI dyads, each contributing weighted assessments that are fully auditable. No single agent can override the collective, but every agent's reasoning is traceable. When recommendations diverge, the chain of deliberation is open for inspection. The system doesn't force consensus; it surfaces visible disagreement with traceable logic.

Applied to EA funding at scale, a CAMP-like system could:

  • Scale evaluation without losing accountability. Each human-AI dyad evaluates a subset of applications, with integrity scores that weight their influence based on track record. This is similar to what the S-Process already does with Speculation Grantor budgets that grow or shrink based on grant outcomes — CAMP formalizes and extends that principle.
  • Preserve dissent as signal. In the current system, a controversial recommendation gets a footnote. In a CAMP-structured evaluation, the reasoning behind both support and opposition is preserved and inspectable. That's enormously valuable when the stakes are high and the amounts are large.
  • Include non-traditional evaluators. A CAMP murmuration doesn't require every participant to be a credentialed expert. It requires every participant to be accountable — their assessments are weighted, tracked, and transparent. This means domain experts from outside the EA network could participate in evaluation without the current gatekeeping overhead.
  • Handle the "too many applications" problem gracefully. CAMP's architecture allows for low-risk autonomous processing of straightforward applications while escalating complex or contested ones for full deliberation. The system adapts its depth of review to the complexity of the decision.
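The mechanics above can be sketched in a few lines of Python. This is my own illustrative toy, not the actual CAMP protocol: the score scale, the integrity weights, and the dissent threshold are all assumptions chosen to show the shape of the idea — weighted assessments, a preserved audit trail, and escalation instead of forced consensus when evaluators disagree.

```python
from statistics import pstdev

# Illustrative sketch of a CAMP-style evaluation round. Names and thresholds
# are assumptions, not the real protocol. Each dyad submits a score in [0, 1]
# plus its reasoning; scores are weighted by a track-record "integrity" weight,
# and high-disagreement applications are escalated rather than auto-decided.
def evaluate(assessments, fund_threshold=0.6, dissent_threshold=0.25):
    """assessments: list of (dyad_id, score, weight, reasoning) tuples."""
    scores = [s for _, s, _, _ in assessments]
    total_weight = sum(w for _, _, w, _ in assessments)
    weighted = sum(s * w for _, s, w, _ in assessments) / total_weight
    dissent = pstdev(scores)  # spread of raw scores, used as a disagreement signal
    # Every assessment stays in the audit log, including dissenting ones.
    audit_log = [(dyad, s, reasoning) for dyad, s, _, reasoning in assessments]
    if dissent > dissent_threshold:
        return "escalate for full deliberation", weighted, audit_log
    return ("fund" if weighted >= fund_threshold else "decline"), weighted, audit_log

decision, score, log = evaluate([
    ("dyad-A", 0.9, 2.0, "strong track record, clear safety case"),
    ("dyad-B", 0.7, 1.0, "promising but under-specified governance"),
    ("dyad-C", 0.8, 1.5, "novel approach, credible team"),
])
print(decision)  # "fund" — scores agree, and the weighted mean clears the threshold
```

The design choice the sketch is meant to surface: disagreement is not averaged away. When the spread of scores exceeds the threshold, the weighted mean is set aside and the application escalates with its full reasoning trail attached.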

The S-Process is already one of the most sophisticated grant evaluation mechanisms in philanthropy. CAMP doesn't replace it — it offers a framework for what the S-Process might evolve into when the funding torrent arrives and twelve recommenders aren't enough.

The View From Cassopolis

I'm writing this from a small town in southwest Michigan. I don't have an EA network. I didn't attend an AI safety bootcamp. I found out about SFF from a Substack newsletter and spent a week forming an LLC, getting rejected by an online bank, and learning the difference between a fiscal sponsor and a crowdfunding platform.

I got the application in. But I almost didn't, multiple times, for reasons that had nothing to do with the quality of my work.

The funding torrent is coming. The question isn't just "who decides where it goes?" but "can the decision system scale without losing the people who don't already know how it works?" The best ideas about AI safety might come from a barn in Michigan, not a conference in San Francisco. But only if the system makes room for them.

The current system is good. Some parts are genuinely brilliant. But it was built for a world where everyone applying already speaks the language. When the money scales, the welcome mat needs to scale with it.


Terri Clark is the founder of TREE(3) Studio, LLC, a research and development company building sovereign AI safety infrastructure in Cassopolis, Michigan. The Mutualism Accord is published and indexed on PhilPapers.

This essay was co-authored with AI assistance (Claude, Anthropic). The experiences, opinions, and proposals are the author's own.
