AI Skepticism High, Embrace Low, Among Nonprofits
Nonprofit workers are exploring the use of artificial intelligence (AI) in their jobs, but their reactions to its value and hazards are decidedly mixed. While some are reaping its benefits, others believe they lack the knowledge or training needed to realize those benefits.
Some workers also expressed concerns about data protection and programming bias, according to a new report from GivingTuesday’s Generosity AI Working Group titled “AI Readiness and Adoption in the Nonprofit Sector in 2024.”
Just under seven in 10 (68%) of the 930 individuals surveyed for the report have used AI in their workplace, with more than half applying the technology to at least two use cases. But these levels do not necessarily indicate continued use beyond a few trials.
That is due, in part, to the fact that wariness regarding these systems remains high. Half of those surveyed expressed concerns about AI facilitating breaches of data or intellectual property, or producing skewed results due to data bias. A similar number said they either do not know how to evaluate the risks of using AI or believe the risks and benefits of doing so are equal.
According to the research team’s findings, the nonprofit community mostly falls in the middle of the AI adoption continuum, with a more-or-less bell-shaped curve regarding implementation readiness. Evaluations regarding readiness to adopt AI were based on the organizations’ data collection practices, data policies, presence of technical staff and current use of AI.
Respondents were clustered into three organizational personas. More than half (56%) were labeled AI Consumers: those who are most active in collecting and using data in their work, are using AI now and want to use more AI in the future. Another 28% were grouped as Late AI Adopters, who collect and use data but have not tried many, if any, AI tools, although they are interested in exploring further. The remaining 15% fell under the heading AI Skeptics: those who by and large do not collect or use data in their work and, while they might have experimented with AI in the past, are not eager to continue exploring its use.
“Once an organization reaches at least 15 paid staff, they are more likely to be part of the AI Consumers group,” the report authors wrote. “However, more established organizations, having been around for 30 or 50 years, were far more likely to be part of the Late AI Adopters group, despite typically having larger staff sizes. We found that organizations founded less than 10 years ago are basically equally split among our clusters; about three-quarters of newly established organizations are found in each group.”
The single most significant indicator of whether an organization fell into one of the more advanced categories was whether its leaders had hired a “Monitoring, Evaluation, Research, and Learning” (MERL) staffer or a tech person. Nonprofits that had hired both were the most likely to be on the advanced end of the continuum, regardless of organization size.
The primary AI technology used by organizations appears to be generative AI tools such as ChatGPT, Microsoft Copilot or the image generator DALL-E 3, which create content based on text or image prompts. Beyond these, interactive chatbots were the most commonly used AI tools. Only one in six nonprofit workers attempted to interpret data using AI or used an AI-based task manager, and roughly one in seven tried using AI for predictive purposes.
When respondents were asked to consider future tech implementation, across the board they stressed the need for tools that would help them better organize their data. Respondents also mentioned implementing AI virtual assistants, continuing their use of generative AI and using interpretive and predictive tools.
One of the higher hurdles to AI implementation might be the patchwork of legal and ethical standards for its use. In open-ended response areas, this concern came up again and again, with respondents offering sentiments such as: “We do not yet know how to evaluate AI systems for safety and negative impact. … We also don’t know how to govern these systems and have mostly left them to the devices of the market, which does not have social well being at heart. We desperately need better policy and governance systems” and “We would love to do more with AI, but due to privacy concerns and compliance regulations we would need enterprise AI tools accessible across the organization with data privacy agreements, an understanding of how the LLM(s) [large language models] were trained and created (framework, design, bias, etc.).”
Among the 930 respondents, 549 were from the GivingTuesday network of nonprofits, most of which are based in the United States, 251 were based in India, 86 were from partner networks that focus on AI and technology and 44 were from GivingTuesday’s global network outside of North America. A full copy of the report is available here: https://ai.givingtuesday.org/ai-readiness-report-2024/