AI for Nonprofits and Social Good: Link Roundup | Beth's Blog



Allison Fine and I have been actively researching and writing about AI for social good and nonprofits with an eye towards our next book. Most recently, we co-authored a policy brief for the Toda Peace Institute on the age of automation and its implications for civil society.

We are tracking the conversation and resources about AI, nonprofits, and social good. Here’s this month’s roundup.

Philanthropy and Fundraising

Google.org is giving away $25 million to humanitarian projects that use Google’s machine learning technology. The “A.I. Impact Challenge” is based on Google’s belief that artificial intelligence can provide new solutions to old problems and improve people’s lives.

The Association of Advancement Services Professionals (AASP) Conference featured a keynote on AI and Fundraising by David Lawson and Laurie Hood Lawson. They are the co-authors of the book Big Good: Philanthropy in the Age of Big Data and Cognitive Computing. Here’s an overview of the book from the authors’ blog. They also published this “AI for Good Guide,” intended to help nonprofits and social enterprises learn how to apply artificial intelligence and machine learning to social, humanitarian, and environmental challenges. It includes a section on recommended responsible practices for AI.

Social Change

How Cancer Research UK Is Exploring AI and Voice Tech: This UK cancer charity has been testing chatbots, voice-enabled devices, and natural language processing, looking at how those could be useful and where the impact lies. They piloted a chatbot on their fundraising pages to help answer the most common questions people had. They are also using chatbots internally to make communications processes more efficient. Charity Digital News published this interview with Rob Leyland, Innovation Manager, about their recent efforts.

Google AI Claims 99% Accuracy in Metastatic Breast Cancer Detection: The impact of AI is also being felt on the patient diagnostic side. Researchers at the Naval Medical Center San Diego and Google AI, a division within Google dedicated to artificial intelligence research, have developed a promising solution: cancer-detecting algorithms that autonomously evaluate lymph node biopsies for metastatic breast cancer. Their AI system, dubbed Lymph Node Assistant, or LYNA, was able to detect cancer better than human pathologists, identifying pathology faster and more accurately. VentureBeat.

Facebook Increasingly Reliant on A.I. To Predict Suicide Risk: According to this NPR piece, a year ago Facebook started using artificial intelligence to scan people’s accounts for danger signs of imminent self-harm and to contact emergency responders. The algorithm works not only on what the person posts but on how their friends react. Facebook’s AI monitoring has led to contacting emergency responders an average of about 10 times a day to check on someone, and that number does not include wellness checks that originate from people who report suspected suicidal behavior online. NPR interviewed ethics expert Mason Marks, a medical doctor and research fellow at Yale and NYU law schools who recently wrote about Facebook’s system, to discuss the ethical and privacy concerns.

AI and Ethics

4 Human-Caused Biases We Need to Fix for Machine Learning: What the heck is algorithmic bias in practice? To the extent that humans build and train algorithms, human-sourced bias will inevitably creep into AI models. There are four distinct types of machine learning bias, but “prejudice bias,” the result of training data influenced by cultural or other stereotypes, is the one we most need to understand and avoid. Recent examples include Amazon’s AI recruiting software that learned to penalize résumés that included the word “women,” Google’s photo identification mislabeling black people as gorillas in 2015, and Microsoft’s AI-powered social chatbot that started tweeting racial slurs. The Next Web.

Profile of Fei-Fei Li: Fei-Fei Li, an expert in AI, has served as chief AI scientist at Google Cloud and is director of the Stanford Artificial Intelligence Lab. Her work focuses on how AI will affect the way people work and live, and how it will impact the human experience, not necessarily for the better. Wired Magazine.

AI and Workplace Impact

Automation and the Changing Demand for Workforce Skills: Blog post by Irving Wladawsky-Berger summarizing various studies about how AI will impact workforce skills.

How Artificial Intelligence Can Humanize Workforce Communications and Customer Engagement: We’re starting to see artificial intelligence augment and enhance the human element, rather than replace it, to achieve greater results. This article summarizes how some corporations are doing this with AI and other automation tools.

AI Technology

AI Is Getting Better at Conversation: A recent article in The New York Times describes how new AI-driven systems are getting better at natural language. Google researchers have unveiled a system called BERT that is viewed as a significant development in artificial intelligence because it can learn the nuances of language in general ways and then apply what it has learned to a variety of specific tasks. This will help voice assistants like Alexa improve, as well as software that automatically analyzes documents inside law firms and other organizations. It also raises the concern of how we tell the difference between machine and human, although California recently signed a bill requiring chatbots to disclose that they are not human.

AI Tools

How AI Is Transforming Content Creation reviews the impact that AI tools are having on writing content. Producing consistent, high-quality content is a pain point for almost every nonprofit organization, whether it is annual reports, website copy, or social media content. As the need for content grows across many more channels, budget-constrained nonprofits might find themselves turning to artificial intelligence (AI) to make content creation more efficient. This CMS Wire article describes the impact and highlights six tools your nonprofit can start using today.

Woven Is a Calendar Assistant You Might Actually Use: AI-enhanced meeting schedulers are a growing category of tools; the idea is to take the hassle out of scheduling meetings. Calendar tools like Meetingbird and Calendly let users create meeting slots for attendees to choose from, while virtual assistants like X.ai can schedule entire meetings on the user’s behalf. Woven is a new tool that addresses the problem of coordinating a meeting time over email. It is a stand-alone calendar app that integrates with Google’s G Suite and combines a full-blown calendar experience with smarter scheduling features.

2 Responses

  1. Luke D says:

    Thank you for the resource list. As a general question, AI can certainly be used for social good, but should resources that could otherwise be diverted to basic needs be diverted to cutting edge technologies? The advancement of AI seems more promising for large, profit maximizing corporations than for any individuals interested in expanding social good. Social problems that exist are fundamental to our societal organization, and no level of technology will be able to address the root causes of these issues.

  2. Beth Kanter says:

    Hi Luke,

    You bring up an important point. There has always been some tension with technology as “shiny object” versus tool. I believe that nonprofits need to be open to exploring new technologies and how they move forward their missions. I am not saying that every nonprofit has to be an early adopter – there are also merits to coming in right after the early adopters. But as a sector, we should not ignore the potential benefits and challenges of AI and these other technologies to solve big hairy social change problems.
