You’ve got the data. Getting answers out of it is another story.
For a lot of nonprofit digital teams, pulling a quick report means waiting on a data person, or wrestling with spreadsheets.
And when your boss needs a number by end of the day, “let me run a SQL query” isn’t usually in your toolkit.
Action Network is trying to change that.
I spoke with Mari Vangen-Strom, Product Director at Action Network, about two new products: ActionBot, an AI-powered reporting tool, and Boost, a new add-on package with advanced features and deliverability support.
If you’re an Action Network administrator, both are available now through the free Boost beta.
Action Network is not a sponsor of this newsletter.
Sara Cederberg: ActionBot lets users query their data in plain language. Can you walk us through an example?
Mari Vangen-Strom: One of my favorite use cases came from a partner. You can ask ActionBot to identify commonly misspelled email domains, like gmal.com instead of gmail.com, then pull a list of subscribed email addresses with those misspellings.
You edit the list, reupload, and expand your audience in about 20 minutes.
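For anyone doing that cleanup by hand today, the manual equivalent is roughly the sketch below, written against a hypothetical CSV export with assumed file and column names, not ActionBot or Action Network's API:

```python
# Hypothetical sketch: fix common email-domain typos in an exported
# subscriber CSV before re-uploading. File and column names are assumptions.
import csv

TYPO_FIXES = {
    "gmal.com": "gmail.com",
    "gmial.com": "gmail.com",
    "yaho.com": "yahoo.com",
    "hotmial.com": "hotmail.com",
}

def fix_domain(address: str) -> str:
    """Return the address with a known domain typo corrected, else unchanged."""
    local, _, domain = address.rpartition("@")
    fixed = TYPO_FIXES.get(domain.lower())
    return f"{local}@{fixed}" if fixed else address

with open("subscribers.csv", newline="") as src, \
     open("subscribers_fixed.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["email"] = fix_domain(row["email"])
        writer.writerow(row)
```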
My other favorite is reporting on the ROI of list acquisition, which has been a barrier for years.
Instead of running multiple reports, you can ask how much was raised from a specific source code or tag within a range of time.
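The manual version of that ROI question, sketched here against a hypothetical donations export rather than Action Network's actual data model, looks something like this:

```python
# Hypothetical sketch: total raised from one source code in a date window,
# over an exported donations CSV. The column names here are assumptions.
import csv
from datetime import date

def total_raised(path: str, source_code: str, start: date, end: date) -> float:
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["source"] != source_code:
                continue
            donated_on = date.fromisoformat(row["date"])
            if start <= donated_on <= end:
                total += float(row["amount"])
    return total

# e.g. everything raised from a spring acquisition source code
print(total_raised("donations.csv", "spring_ads_2025",
                   date(2025, 3, 1), date(2025, 5, 31)))
```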
SC: Boost includes email deliverability monitoring and access to experts. What does that support look like in practice?
MVS: Pretty hands-on! Obed Ventura, our Senior Partner Success Specialist, is our resident deliverability expert. He’s been taking calls with Boost members and actively troubleshooting any deliverability issues they’re experiencing.
We’re also starting to provide weekly deliverability reports with more in-depth data than we have in the interface. If you want to sign up for those, email [email protected]!
SC: Automated A/B testing is part of Boost. For organizations still doing manual subject line tests, where should they start?
MVS: It’s really a matter of preference. Some people like to be fully in control, view their email stats, and pick a winner themselves.
But a lot of the groups we talked to wanted to save time.
The process of selecting a winner, duplicating, targeting, and sending can be cumbersome, and that’s where automated testing comes in.
Before automating, you want to have a good feel for your current program and be able to define what a winner looks like.
That allows you to be confident about your key stat (clicks, for example) and comfortable with whatever numbers you put in the circuit breaker.
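To make "define what a winner looks like" concrete, here is a rough sketch, not Action Network's implementation, of picking a winner by click rate with a simple volume-and-rate circuit breaker; the thresholds and field names are made up for illustration:

```python
# Hypothetical sketch of automated A/B winner selection. The "circuit
# breaker" is a volume and click-rate floor: if either isn't met,
# no winner is declared.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Variant:
    name: str
    sends: int
    clicks: int

    @property
    def click_rate(self) -> float:
        return self.clicks / self.sends if self.sends else 0.0

def pick_winner(variants: List[Variant],
                min_sends: int = 500,
                min_click_rate: float = 0.01) -> Optional[Variant]:
    """Return the winning variant, or None if the circuit breaker trips."""
    if any(v.sends < min_sends for v in variants):
        return None  # not enough volume yet to trust the test
    leader = max(variants, key=lambda v: v.click_rate)
    return leader if leader.click_rate >= min_click_rate else None

winner = pick_winner([Variant("Subject A", 1000, 12), Variant("Subject B", 1000, 21)])
print(winner.name if winner else "No winner declared; fall back to manual review")
```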
SC: You mentioned wanting to keep Action Network pricing accessible while offering advanced features through Boost. How did you think about that balance?
MVS: It’s always been our commitment to provide accessible tools to the movement. Being a nonprofit helps a lot with this!
Boost comes out of a recognition that not every group needs the same feature set, and sometimes adding advanced features detracts from usability.
Our goal isn’t to make money from Boost, but to sustainably provide an advanced version for users who need those features.
We’re developing pricing in collaboration with our partners, so if you’re worried about being priced out, please reach out!
SC: Development partners get automatic access to Boost features. How does the cooperative development model work?
MVS: We have a pretty involved cooperative development process with our members: the AFL-CIO, CLC, DLCC, and MoveOn.
They pay for the development of the toolset and contractually own what we build and when we build it.
Every spring, our members brainstorm, prioritize, and vote on a year-long development roadmap. Almost every feature we’ve built — from ladders to surveys to ActionBot — came from a development partner’s need.
Our goal is to expand this model to include Boost users. I’ve been really excited about the feature requests from folks in the beta.
SC: Some nonprofit professionals are cautious about AI tools and accuracy. What would you say to someone skeptical about letting an AI query their supporter data?
MVS: A little caution is warranted. We’ve all had experiences of AI getting things wrong. ActionBot is not foolproof, so I recommend gut-checking your numbers and reporting answers that look off.
We’re monitoring reports and correcting errors in the model, so the goal is that ActionBot only gets something wrong once and is constantly improving.
Security-wise, the AI only sees database columns, tables, and definitions. It never sees your actual data beyond summarizing results. (More on ActionBot’s privacy and security here.)
At the end of the day, there’s a lot of utility here. ActionBot saves time and frees your data people for more interesting work.
I’d say experiment. It’s pretty fun once you get the hang of it.
Industry events
Thu, Feb 19, 10:00 AM ET
Thu, Feb 19, 3:00 PM ET
Paid: 2026 Nonprofit Technology Conference
Mar 10-13 - Detroit, MI
Check our events list for more or reply to this email to submit one for consideration.
‘Til next time!
Sara

