In my LinkedIn articles and blog posts, I have discussed some of AI’s powerful capabilities and covered ways nonprofits can carefully evaluate and introduce its benefits to improve fundraising, strengthen relationships and ease administrative burdens.
Out of the 30-plus resources I reviewed and summarized, one of the most powerful was a podcast conversation on generative AI between three technology pioneers at the 2024 World Economic Forum, which describes AI as the ability to absorb vast amounts of information, generate new kinds of information, and take action on that information. My favorite description of AI came from that same conversation, which likened AI’s capabilities to “a wizard that knows how to write new spells by studying existing ones.”
The Pros
One of the most powerful ways AI can help nonprofits is by understanding donor behavior at a level of granularity that allows for fine-grained segmentation and personalization, which is invaluable to anyone in a development or marketing function. The catch? AI needs TONS of healthy and accurate data to work properly. The specifics are covered in my second LinkedIn article on using AI to manage donors.
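To make the segmentation idea concrete, here is a minimal Python sketch of one common approach: grouping donors by recency, frequency and monetary value with k-means clustering. The file name, column names and number of segments are assumptions to adapt to your own CRM export, not a prescription.

```python
# Minimal donor-segmentation sketch: cluster donors on recency, frequency,
# and monetary features. Assumes a hypothetical "donors.csv" export with
# columns: donor_id, days_since_last_gift, gifts_per_year, total_given.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

donors = pd.read_csv("donors.csv")
features = donors[["days_since_last_gift", "gifts_per_year", "total_given"]]

# Scale the features so dollar amounts do not dominate the distance measure.
scaled = StandardScaler().fit_transform(features)

# Group donors into four segments; the right number depends on your data.
kmeans = KMeans(n_clusters=4, random_state=42, n_init=10)
donors["segment"] = kmeans.fit_predict(scaled)

# Review each segment's averages to decide how to personalize outreach.
print(donors.groupby("segment")[features.columns].mean())
```

The point of a sketch like this is not the math but the takeaway above: segments are only as trustworthy as the donor data behind them.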
The second way AI can help nonprofits is through the use of large language models (LLMs) to create content. With a bit of prompting experience, AI can generate content drafts in half the time, and those drafts can then be reviewed and adapted for specific needs.
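As an illustration of that drafting workflow, here is a hedged sketch using the OpenAI Python library (openai 1.x). It assumes an OPENAI_API_KEY is set in your environment; the model name and prompt wording are placeholders, and the output is only a first draft for a human to review.

```python
# Minimal content-drafting sketch with the OpenAI Python library (openai>=1.0).
# The model name and prompt are placeholders to adapt to your organization.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draft a warm, 150-word thank-you email to a first-time donor to a "
    "community food bank. Mention that their gift funds weekend meal kits "
    "and invite them to an upcoming volunteer day."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)

# Treat the result as a starting point: a person reviews, edits, and approves
# the message before it ever reaches a donor.
print(response.choices[0].message.content)
```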
AI also expedites numerous administrative tasks, including writing meeting recaps, developing agendas and finding electronic files, freeing resource-constrained nonprofits to focus on relationship building.
In the book The Networked Nonprofit: Connecting with Social Media to Drive Change, authors Beth Kanter and Allison Fine give examples of nonprofits like the Brooklyn Museum and the Smithsonian using technology to engage the wider community. While the book was written before AI became mainstream, its examples of community-curated photography exhibits and other ways of working with the outside world can be adapted to how AI is introduced.
The Cons
The potential for bias is present if AI is built and trained on historical data that incorporates injustices. Dr. Brené Brown calls this “scaling injustice” in her Unlocking Us podcast conversation with Dr. S. Craig Watkins, where the solution comes down to practices such as having computer engineers talk to a wide spectrum of subject matter experts so that programs are intentionally built to be inclusive. Any nonprofit leader or small business owner should understand what is under the hood of any AI model before it is implemented.
This conversation also touches on a big human fear: being irrelevant. New technology raises all kinds of questions about who owns the content, and generative AI feels like displacement to anyone who puts words on paper to make a living.
Yet the technology is here, and it is inexpensive, so it is an opportune time to figure out how to incorporate it while keeping a “human in the loop” and building on our qualities and skills. In this week’s LinkedIn article, “A Human-Centered AI Policy for Nonprofits,” I talk about how nonprofits can evaluate and incorporate AI while upholding their commitment to protecting their stakeholders and serving others.
Friend or Foe?
It is predicted that humans will adapt by moving toward “augmented intelligence,” a much more palatable future for anyone looking to hold onto what makes us human.
On the bright side, we can imagine walking around with a super-powerful reasoning brain in an app like Copilot or ChatGPT. (GPT stands for Generative Pre-trained Transformer.)
On the dark side, I hope the technology is programmed with traps and red alerts for anyone planning to use its powers for divisive purposes that create more hurt than good.
If used with the best of intentions and documented “guardrails,” I’d say AI for nonprofits is a friend that is here to stay.
