Running a nonprofit insights company means I spend my days in deep, human conversations. As AI reshapes research, I’ve been grappling with a broad question—can AI ever draw out the kind of truth that people reveal only when another person is there to listen and help them make sense of what they truly believe?
At Real Path, we seek the truth behind assumptions that nonprofits have about their supporters. We do this by having deep one-on-one conversations with existing supporters, lapsed supporters and potential supporters. Our bread and butter lies in the ability to build rapport, ask the right questions, and listen thoughtfully to gain deep, rich, true insights from real people. We’ve been doing this kind of work in different spaces for decades, and in the last few years, AI tools have enabled us to make certain parts of our work much more efficient.
I love what we do at Real Path. I love hearing people’s stories firsthand, probing to get deeper into their brains, and collecting enough personal stories and insights to build a full understanding of what’s happening with a product, an organization and a sector. I love that in many cases the questions people are answering are new to them, so you are not only a listener, you’re also helping them get a better grasp on their own beliefs. (The friend who probes you to listen to your heart a little more deeply.)
The problem with this type of research is that it’s time-consuming and requires a considered approach to each conversation. The interviewer has to meet many demands to draw out different types of people on a variety of topics. At times, you are deep in the weeds, for many, many hours, to get to this kind of understanding.
Broad quantitative research
Another area of research for nonprofits is broad quantitative research. This is the data that has been important for corporations for decades: metrics-driven “big data”. It often tells us the “what” but not always the “why”. This is the kind of public data that AI can access, or be fed by organizations in the case of private data. We know AI can do some really impressive things with this kind of big data.
Our assumption has been that AI can’t do qualitative data well. It can’t take the responses to a conversation and understand what that pause means. It can’t watch someone’s expressions and change tack to ask a different series of questions. It can’t read between the lines of an answer and decide to go a little bit deeper, because you can feel the nugget of insight is just behind that response.
But we wanted to test this assumption. Finding out the truth behind assumptions is what we do after all.
Research-focused AI
Recently I added Perplexity Enterprise to the suite of tools I use. Perplexity focuses on source-backed web research and live information. I had been using ChatGPT for various things, but I wanted to test what value a research-focused AI could bring to Real Path’s research projects.
It was perfect timing for a new Real Path project that focused on understanding how to get young people to move from environmental interest to conservation advocacy. The project included five areas: existing research, market examples, a target market survey, 1-on-1 conversations, and analysis.
I used both ChatGPT and Perplexity in each area, as the difference between them felt sufficient for our purpose: understanding whether the insights you can get from human conversations can be replaced with AI.
1. Existing research
Perplexity really nails this. With not too much work on the user’s side to put in the right prompts, a lot of wonderful research done by private companies and universities is at our fingertips in a matter of seconds. Everything is cited, and it’s dead easy to check its accuracy. Note that we weren’t asking it to analyze or use the data in any way, just to show us the relevant information. There might be more relevant information out there that it didn’t find, but we would never have found that ourselves either, and we were unlikely to have found what it did. This was a wonderful starting point: let’s not start the project from scratch, let’s very quickly see what we collectively already know about this. We could also filter for location, publication type, recency, etc.
2. Market examples
This was harder to prompt. It’s not very straightforward to ask either Perplexity or ChatGPT to find relevant examples. The way it understands relevance is not very … human. It required a lot of re-prompting. Perplexity and ChatGPT brought up different examples of varying usefulness. In the end, the examples we used were ones we had found ourselves.
3. Target market survey
There were two elements to this. First, creating the survey and making sure the questions were going to be useful. We use these surveys to decide which individuals we want to speak to in our interviews, but also to glean broad insights. Both Perplexity and ChatGPT gave very basic survey questions that wouldn’t have allowed us to select the right people.
The second element is the analysis of the survey results. Here, AI was helpful with quick graphs and charts, eliminating the need to deeply understand spreadsheet graph-making. We asked things like “Take the column ‘political affiliation’ and the responses to ‘x broad question’ and analyze the difference across political affiliations.” For answers that were checkboxes, this was no problem. When people could write in their own answers, AI could analyze the findings, but it felt a lot less true. When we did our own deep dive into the answers, the results we found were similar but different in nuanced and important ways.
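For the curious, here is a minimal sketch of the kind of cross-tab we were asking for, if you wanted to do it directly in Python with pandas instead of through an AI chat. The file and column names are hypothetical stand-ins, not our actual survey fields:

```python
# A rough sketch of the cross-tab analysis described above.
# "survey_results.csv", "political_affiliation" and "interested_in_advocacy"
# are hypothetical stand-ins for real survey export fields.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey_results.csv")

# Proportion of each answer to a checkbox question, broken out by affiliation.
crosstab = pd.crosstab(
    df["political_affiliation"],
    df["interested_in_advocacy"],
    normalize="index",  # proportions within each affiliation
)
print(crosstab.round(2))

# A quick chart, similar to what the AI tools generated for us.
crosstab.plot(kind="bar", stacked=True)
plt.tight_layout()
plt.show()
```

For checkbox answers this kind of counting is mechanical, which is exactly why AI handled it well; the free-text answers are where it wobbled.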
4. 1-on-1 conversations
We didn’t attempt actual conversations with supporters, but we did try creating conversation guides (the result felt like it had been written by a junior person who didn’t quite understand the project) and analyzing post-conversation transcripts.
Again, there was a general sense of what was happening, but the detail, nuance, and nugget of insight were missing. Very often what came back was close to correct but actually incorrect. That kind of thing can lead an organization down a path that simply won’t work for supporters. AI can’t hear sarcasm, doesn’t understand ESL speakers the way a human can, and can’t see a smile that changes the context of words.
We did try out AI in a simulated conversation. We started with the pre-existing conversation guide, then gave it some real responses and asked where it would go next: what should the next question be? Generally, AI found it hard to go deeper into where the supporter was taking the conversation. It was typically trying to get back on script. If explicitly asked to go off script, it would go into strange places where the relevance to the project wasn’t clear. And, in a real conversation, you only have so much time, so asking questions that don’t provide relevant answers doesn’t work.
5. Analysis
Overall, the feeling I got, no matter which way I sliced it, what prompts I used, or which platform I was on (ChatGPT seemed better at reading conversations than Perplexity), was that the key to the research, the insights, was just out of reach.
I felt like I was speaking to a new colleague who didn’t know much about the space and was trying to “fake it”. It was helpful for sparking ideas, but I couldn’t trust it to get to the right insights.
One example: AI’s statement on Canadian new immigrants’ relationship to nature was “Newcomers connect to Canada through nature first”. The true insight was “a move to BC ignites delight in the outdoors”. The difference lies in something causal versus correlative. That difference can have a huge impact on the actions an organization decides to take as a result.
AI can be good at clustering insights once you already have them. When you have a list of insights, it can be tedious to figure out how to organize them, and both Perplexity and ChatGPT can give clusters that make sense. However, if you don’t like how they’ve clustered at first, it’s hard to get the AI out of its initial approach without starting all over.
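If you’d rather keep control of the clustering yourself, here is a minimal sketch of one way to do it with text embeddings, assuming the sentence-transformers and scikit-learn libraries. The insight strings are placeholders for illustration, not our real findings, and this is not how we actually worked:

```python
# A rough sketch of clustering a list of insights with text embeddings.
# The insight strings below are placeholder examples.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

insights = [
    "A move to BC ignites delight in the outdoors",
    "Young supporters want advocacy to feel social",
    "Lapsed donors still identify with the cause",
    # ...the rest of the insight list
]

# Turn each insight into a vector so similar statements land close together.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(insights)

# Group the vectors; changing n_clusters and re-running is one line,
# which avoids the problem of an AI chat stuck on its first clustering.
kmeans = KMeans(n_clusters=2, random_state=0, n_init="auto")
labels = kmeans.fit_predict(embeddings)

for label, insight in sorted(zip(labels, insights)):
    print(label, insight)
```

The advantage over prompting is that a different grouping is one parameter change away, rather than a battle to pull the AI out of its first answer.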
Overall, I was happy to see that deep conversations can’t be replaced by AI. How nice to know that real conversations not only still matter, but seem to matter even more when so much of what's happening now is not real.
We need the human interpretation of what someone’s eyes say when you’re in conversation, of hearing the awkward laughs and knowing there’s more, of knowing when to wait to ask the next question. There is so very much that we humans know, and we often take for granted, in how we understand each other. That beautiful, innate and learned, part of being human can’t, and shouldn’t, be replaced.
Leigh Sandison started her career in fundraising in the UK—first in corporate partnerships, then product development and innovation in mass market fundraising. She then moved agency side with a focus on strategy and insight-led experience design for top global companies. Drawing on experiences from both the nonprofit and for-profit worlds, Leigh is a founding partner of Real Path, an insights and strategy consultancy focused on helping nonprofits maximize the lifetime value of their supporters. Contact her at leigh@realpath.ca.