Bright Nonprofit
Your AI just gave you a "recommendation." If you follow it blindly, you aren't being efficient—you're being replaced. In this episode, we look at the critical failure point in nonprofit AI adoption: the moment pattern recognition is mistaken for understanding. We walk through a common donor data scenario where the AI identifies a trend but misses the underlying cause. Following the tool would have been a disaster; ignoring it required a level of judgment the model simply doesn't possess. We discuss: Why "looks right" is the most dangerous phrase in your office. The difference between a...
In this episode, we examine the structural wreckage of the "Responsible AI Policy." Most nonprofit leadership teams are currently celebrating the completion of a static PDF that outlines disclosure and human review. They are celebrating a "success" that is actually a catastrophic misdiagnosis. The friction we are seeing today isn't caused by "rogue" employees using unapproved tools; it is caused by the Sovereignty Gap—the space where AI makes autonomous inferences about intake criteria, data sets, and outcomes that no human ever vetted. The old way of governing—writing a rule and...
If your AI implementation is delivering results, you should be looking for the cracks. Most leaders assume that if output is up and the team is keeping pace, the implementation is a success. They’re wrong. In this episode, we diagnose why AI-driven acceleration is currently colliding with two layers of your organization that weren't built for speed: Authority and Governance. When a tool produces 500 outputs instead of 50, the informal "who says this is okay" process evaporates. You don't have a volume problem—you have an ownership problem. Meanwhile, boards are still governing budgets and...
Many nonprofit leaders believe their AI challenges begin at the moment of implementation — choosing tools, preparing staff, or establishing policies. But most AI adoption failures start earlier than that. They begin with the first question leadership asks. When organizations respond to pressure by asking, “What are we doing about AI?”, the conversation begins with urgency and an assumed solution. What is missing is the step that makes the decision defensible: naming the specific problem the technology is supposed to solve. This episode examines how pressure-driven conversations convert...
A recent benchmark report surveying hundreds of nonprofit organizations found that 92% are already using AI tools, yet only 7% report major strategic impact. The report describes this as an “AI readiness” gap and recommends stronger governance, clearer policies, and more structured workflows. In this episode, we take a closer look at that diagnosis. The data reveals real coordination and governance challenges, but it may still miss the deeper structural condition that determines whether AI produces meaningful results. For nonprofit leaders responsible for strategy, operations, and...
Most organizations believe they already know who is responsible when AI is used: the person who used the tool. But that answer assumes something that often isn't true — that the authority underneath that responsibility is clearly defined. In practice, many nonprofits operate with informal decision structures. Authority settles into roles, trusted individuals, compressed processes, and software systems over time. The org chart stays the same, but the real decision rights slowly move somewhere else. This episode explores four patterns of authority drift that exist in most organizations long...
Many nonprofits are adopting AI tools expecting efficiency gains. But when those gains fail to materialize, the problem often isn’t the technology. It’s the structure of the organization itself. In this episode, we examine three structural conditions that AI tends to expose: undesigned handoffs, ownership without authority, and hidden maintenance work. These are not new problems. They’ve existed quietly inside organizations for years. What AI changes is the speed and pressure at which those weaknesses surface. For executive directors, board members, and operations leaders, this is less...
I’m back behind the mic. In the last episode, you heard an AI-generated overview of this topic. But in a world of automated content, the most important conversations require a human touch. I’m reclaiming the show to talk to you directly about the "AI Efficiency vs. Capacity" trap. The Reality: Most nonprofits are using AI to become more efficient: drafting faster and analyzing instantly. But for many leaders, the promised relief never arrives. The Problem: Efficiency is about rate, but Capacity is about resilience. When your execution speed accelerates through AI, but your governance and...
Most nonprofits are working hard to become more efficient. AI makes that easier than ever. Drafts are faster. Analysis is instant. Throughput increases. But for many leaders, the promised relief never arrives. This episode examines why. It explores the structural shift that happens when execution speed accelerates but governance capacity does not. Efficiency is about rate. Capacity is about resilience — the ability to absorb variability, maintain oversight, and protect decision quality as volume increases. For executive directors, board members, and operations or development leaders, this...
"AI readiness" is often framed as a technology milestone — something to purchase, install, or train around. But in this episode, the focus shifts to a more uncomfortable question: can your governance structure remain accountable as organizational capacity increases? For executive directors, board members, and operations leaders, this conversation reframes readiness as a structural issue. It explores how data trust, process clarity, systems coherence, and governance boundaries determine whether AI increases...
Risk Doesn’t Have To Be a Four-Letter Word.
Risk can be unsettling. It is easier to focus on what’s urgent while ignoring what’s necessary and important. But if you create ways to make threats and opportunities easier to see and address, you can:
- Increase clarity
- Reduce costs
- Simplify tasks
- Streamline processes
- Develop new initiatives, and
- Increase sustainability and resilience
Knowing your risks sharpens your awareness of the threats and opportunities your organization faces. You can identify unnecessary costs and find fixes that unlock additional resources.
Do any of these sound familiar?
“Too much of our knowledge is stuck in the minds of our key personnel. If we lose any of them, we’re sunk.”
“If we’re honest, we move from crisis to crisis and can’t get ahead of the curve.”
“We want to grow, but we need a repeatable model that doesn’t require constant supervision.”
“We are on the cusp of great things, but we need to make sure we look and act professional to the outside world. More than that, we actually need to be professional.”
Then listen to this episode to learn how knowing your risks can give you your best insights.
About Ted
Before founding Risk Alternatives LLC, Ted was a Distinguished Visiting Professor from Practice at Georgetown University Law Center. At Georgetown, his research focused on dispute resolution, complex litigation, preventive law, legal training, risk management, governance, and compliance.
Prior to full-time teaching, Ted served for more than 20 years in the Washington, DC office of the international law firm Jones Day. At Jones Day, Ted represented clients in successful high-profile lawsuits and investigations, and worked closely with parties with divergent interests to craft workable settlements involving businesses, consumers, and government agencies. While at Jones Day, Ted also taught at Georgetown for many years as an adjunct professor.
Ted's website is at https://risk-alternatives.com