The people with the most to say about AI are not in tech
I went to an event recently hosted by Amberflo, a company that meters AI usage for enterprise clients. Their platform tracks how employees interact with AI tools so organizations have the data to see what's working and what isn't. It's a useful - albeit buzzword-laden - product that solves a real problem for companies managing AI spend at scale.
But sitting in that demo, I had a sinking feeling picturing the employees inside these large companies. Every API call logged. Every interaction costed. Every experiment filtered through a system designed to optimize, govern, and report. By and large, this is how the legacy tech industry engages with AI: inside boundaries set by growth targets, quarterly roadmaps, and the ever-present question of ROI.
That’s not a flaw in the system. It is the system. And it shapes everything the tech industry is able to learn about what AI can actually do.
Bounded Creativity
There’s nothing wrong with constraints. Constraints breed innovation. The question is: who sets the constraints, and what do they optimize for?
In the tech and software industries, the boundaries are set by investor expectations, engagement metrics, and scale. Those constraints produce a specific kind of creativity: fast, iterative, optimized for financial growth. An engineer experimenting with AI inside that world is exploring what AI can do for the business. A product manager evaluating an AI feature is asking whether it moves a dashboard number. A director of operations piloting an AI workflow is asking whether it hits the efficiency targets they promised last quarter.
That’s not cynicism. It’s just the water the tech industry swims in. And it produces a version of AI fluency that’s deep in some ways and remarkably narrow in others. The tech industry is getting very good at knowing what AI can optimize. It’s less equipped to ask what AI should be used for.
This is what I mean by bounded creativity: not that there are limits, but that the limits are set by someone else’s priorities. The conversation about AI is happening almost entirely within a world that rewards extraction, speed, and scale. And that conversation is shaping what gets built, who it gets built for, and what questions never get asked.
Different Constraints, Different Leverage
Now consider the people working outside of that world.
If you lead a nonprofit, a foundation, a civic tech initiative, or any organization that exists to create impact rather than generate hypergrowth, you have constraints too. Budget, capacity, institutional complexity, the slow and necessary work of building trust with the communities you serve. These are real limits, and nobody working in these spaces needs to be told that resources are tight.
But your constraints are set by mission, not by growth metrics. And that difference matters more right now than it ever has.
Here’s why: the tools have caught up to the expertise. For the first time, people who have spent years thinking deeply about education, public health, housing, environmental justice, or any other complex problem can turn that thinking into software without needing to become a traditional tech team. The barrier between domain knowledge and working technology is lower than it’s ever been, and it’s continuing to drop.
The person who understands how communities actually access care knows something no AI model can surface on its own. The educator who has spent a career working with underserved students brings context that no training dataset contains. That expertise has always been valuable. What’s changed is that it’s now actionable in ways it never was before.
And there are already some great examples of this:
- The team at CareerVillage developed an AI-powered career coach to democratize access to career advice and tools, empowering individuals to pursue economic mobility.
- Peter Gault founded Quill, a nonprofit that provides free literacy activities building reading comprehension, writing, and language skills for elementary, middle, and high school students. As he explained in this interview, the work that became Quill started as a debate video game; when it didn't work for low-income students lacking foundational skills, Gault pivoted to address the actual need based on what educators were seeing in classrooms.
- Alex Stephany and his team at Beam spent years placing people experiencing homelessness into jobs and bringing stability to their lives. Working closely with caseworkers across the UK, they recognized the need for a tool, which eventually became Magic Notes, that saves those caseworkers time and multiplies their impact by helping the helpers.
This isn’t about learning to code or becoming fluent in the latest AI tools. It’s about recognizing that the deep knowledge you already carry from years of proximity to real problems is the single most important input in building technology that actually works for people.
The Real Risk
I, for one, hope that AI doesn't outpace our ability to govern ourselves. And I think it won't, so long as the people who understand what good governance looks like are actively involved in shaping how these tools get used.
The risk isn't that mission-driven organizations will fall behind on AI adoption. It's that they'll sit on the sidelines while the people operating inside bounded creativity optimize for growth, scale, and extraction even more quickly than before. It's that they'll forfeit a seat at the table while the legacy tech industry decides what gets built and who it serves.
The tech industry will keep building. That’s what it does. The question is whether the people with the deepest understanding of the problems worth solving will have a hand in shaping what gets built next, or whether they’ll be handed finished products that were designed for someone else’s priorities.
You don’t need to become a technologist to have that hand. You need to bring what you already know to the table. The tools are ready. The leverage is yours.