Nonprofits and the Age of Automation: AI, Machine Learning, Bots Link Roundup | Beth's Blog


AI4Good, Bots, Digital Strategy

My Networked Nonprofit co-author Allison Fine and I have been actively researching and writing about the Age of Automation with an eye towards our next book. This past year, we wrote an op-ed for the Chronicle of Philanthropy on the age of automation and its implications for fundraisers, as well as other articles, including Leveraging the Power of Bots for Civil Society on the SSIR Blog and The Robots Have Arrived: How Nonprofits Can Put Them To Work on the Guidestar Blog. As part of our ongoing research, we are tracking the conversation about AI and nonprofits. Here is a recent roundup of articles, blog posts, and research on the Age of Automation and Nonprofits.

Machine Made Goods: Charities, Philanthropy & Artificial Intelligence Report:  The Charities Aid Foundation (GivingTuesday UK partner) recently released a report that discusses the profound implications Artificial Intelligence (AI) will have for civil society and the work of charities. The report provides an overview of the technology in easy-to-understand language. There is also a section that points out the benefits, along with examples of UK charities using the technology, including WaterAid's bot and an AI-powered virtual assistant for people living with arthritis. The report also looks at challenges that are common across the nonprofit sector when adopting new technologies: skills, leadership, and risk appetite. A section of the report is devoted to how AI and automation might change communications with donors. One unique idea is in the area of donor advice: for example, developing a personal assistant that helps donors decide where to donate.

The report also suggests that a more sophisticated way to offer philanthropy advice would be to use machine learning to combine data on social and environmental needs with data on the social impact of CSOs and their interventions. This would enable identification of where the most pressing needs are at any given time, as well as the most effective ways of addressing those needs through philanthropy, and thus allow a rational matching of supply and demand. CAF has coined the term "philgorithms" for algorithms of this kind. The report ends with a summary of high-level trends that might impact civil society organizations, including a discussion of inequality, algorithmic bias, and the future of work.
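To make the "philgorithm" idea concrete, here is a toy sketch of what such a matching algorithm might look like. The cause names, need scores, and effectiveness scores are entirely hypothetical; a real system would learn these values from needs data and impact evaluations rather than hard-coding them.

```python
# Toy "philgorithm": rank causes for a donor by blending a need score
# (how pressing the problem is) with an effectiveness score (how well
# interventions work). All data below is made up for illustration.

causes = [
    {"name": "clean water",    "need": 0.9, "effectiveness": 0.7},
    {"name": "adult literacy", "need": 0.6, "effectiveness": 0.8},
    {"name": "food security",  "need": 0.8, "effectiveness": 0.5},
]

def philgorithm(causes, need_weight=0.5):
    """Score each cause as a weighted blend of need and effectiveness,
    highest score first."""
    scored = [
        (c["name"],
         need_weight * c["need"] + (1 - need_weight) * c["effectiveness"])
        for c in causes
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

for name, score in philgorithm(causes):
    print(f"{name}: {score:.2f}")
```

A donor-advice assistant could let the donor adjust `need_weight` to express whether they care more about urgency or about evidence of impact.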

Are Small Charities Being Left Out?  This Foundation Center blog post by the Social Impact Exchange is based on a global scan highlighting how data and algorithms are being used in different ways for social good, the challenges emerging in the field, and how philanthropy can be and is engaged in this work. It highlights different initiatives in the AI for social good area, for example this Machine Learning for Good program that gives nonprofits access to technical experts, which can especially help small nonprofits.

Can Machine Learning Predict Social Impact?  This webinar on Sept. 12, hosted by Data Analysts for Social Good, features Peter York, an evaluation expert. In today's big data world, more and more social programs are using database technologies to track, monitor, and manage cases. This database proliferation is fueling rapid growth in participant, program-delivery, and outcome data. Alongside these data advances has come the development, refinement, and advancement of cutting-edge analytics, including machine learning algorithms, that can produce remarkably accurate predictive and prescriptive insights. The session will present an overview of how machine learning can be trained to evaluate the causality of social programs, including case studies where machine-guided causal modeling has been applied to program data from child welfare, workforce development, mental health, and juvenile justice agencies.

How Artificial Intelligence Is Transforming the World:  This report from the Brookings Institution discusses AI's application across a variety of sectors, addresses issues in its development, and offers recommendations for getting the most out of AI while still protecting important human values. The report explores ethical issues such as: How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm?

The Two Sides of the AI and Humanity Debate:  I've been following Shel Israel for over ten years, since he interviewed me for a chapter about nonprofits and Twitter in his 2009 book, Twitterville. For the last five years, he has been researching and writing about AI. He announced on his blog that he is working on another book, Augmenting People: Why AI Should Back Us Up, Not Push Us Out. He points out in this post that the two sides of the robot-versus-human debate can be characterized as autonomy versus augmentation. The autonomy viewpoint paints a bleak future of robots taking over human jobs. The augmentation argument is more optimistic, suggesting that AI will help humans augment their work. Shel has been a champion of new technologies, and he will take a deep dive into how AI is being used to enhance human work, not replace the humans who do it, although he will also look at the dark-side potential of lost jobs and disruption to humanity. As he notes in the blog post, the book will be prescriptive, arguing that in many cases where humans could be totally replaced, they should not be.

Designing a Future for AI & Automation That We Can Live With:  Amber Case is another technology voice that I have been following for about ten years. She is a tech design advocate, speaker, tech founder, and research fellow. She recently published a three-part series on her Medium blog that looks at the question: How will artificial intelligence and automation change the way we live and work? Or, as she restates it: How can we design automated systems that help us become heroic, elevated centaurs, rather than demoralized, disposable gnomes working within the gears of an automated but inefficient machine? She makes the case that if we understand the power of technology and the unique character of humanity, we can work to build systems that amplify each. By carefully considering which systems we automate and which systems we keep human, we can design an approach that amplifies the best of automation and the best of humanity.

In part 3 of her essay, she focuses on design principles. These are:

  1. Use the least amount of automation to get the job done
  2. Improve efficiency before introducing automation
  3. Avoid imposing automation on the wrong people
  4. Recognize that automation can never account for every real-world scenario
  5. Amplify what humans do best, and amplify what machines do best

You can read her essay here: Part 1, Part 2, and Part 3.

What Happens When An Algorithm Labels You Mentally Ill?  This article from the Washington Post discusses the ethical issues of AI and algorithms in the mental health field. In more mature fields of medical research, such as pharmacology and therapeutics, physicians and other health professionals know that before releasing any new drug or device to the public, whether curative or diagnostic, they must test it extensively to prove that it demonstrates significant benefits and minimizes the likelihood of harm. The medical field is also subject to strict health and privacy laws for safeguarding patient data. Medical applications of artificial intelligence and algorithm-based diagnostic tools are not subject to this level of scrutiny and regulation, even when they affect a person's well-being.

AI to Analyze Employee Emails:  This article looks at the potential of using AI to scan employee emails to better understand morale. Text-analytics technology has been around for a while; for example, it powers the spam filter you rely on to keep your inbox manageable. As the tools have grown in sophistication, however, so have their uses. They can monitor brand reputation on social media, in online reviews, and elsewhere on the web. Is the next frontier mining internal emails? One obvious application of language analysis is as a tool for human-resources departments. HR teams have their own old-fashioned ways of keeping tabs on employee morale, but people aren't necessarily honest when asked about their work, even in anonymous surveys. Our grammar, syntax, and word choices might betray more about how we really feel.
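To give a feel for how this kind of language analysis works at its simplest, here is a minimal lexicon-based sketch. Real morale-analysis tools use trained language models rather than fixed word lists; the word lists and example sentences below are hypothetical stand-ins.

```python
# Minimal sketch of lexicon-based text analytics: score the tone of a
# message by counting "positive" vs. "negative" words. The word lists
# here are illustrative only; production tools learn these signals.

POSITIVE = {"great", "thanks", "excited", "happy", "win"}
NEGATIVE = {"frustrated", "blocked", "overworked", "unhappy", "quit"}

def tone_score(text):
    """Return (positive - negative) word count, normalized by message
    length. Positive result = upbeat tone, negative = downbeat."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / len(words)

print(tone_score("Thanks team, excited about the win!"))   # positive tone
print(tone_score("Feeling overworked and frustrated."))    # negative tone
```

Even this crude version shows why the approach raises privacy questions: aggregated over thousands of messages, simple word choices become a measurable signal about how a workforce feels.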

Bot Funnels: Now for some practical advice on using bots strategically in your digital marketing and communications strategy. Designing a bot can be so much fun that we forget we need a measurable objective. So, if your nonprofit has decided to deploy a Facebook Messenger bot, think about where in the funnel your bot will deliver results. This podcast and article feature Mary Kathryn Johnson, a Messenger bot expert who advises businesses and helps them build bots.
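Thinking about the funnel means measuring the drop-off between stages, so here is a small sketch of that calculation. The stage names and counts are hypothetical examples, not data from the podcast.

```python
# Sketch of funnel measurement for a Messenger bot, so the bot ties
# back to a measurable objective. Stages and counts are hypothetical.

funnel = [
    ("reached",    1000),  # people the bot's message reached
    ("engaged",     400),  # replied to the bot
    ("subscribed",  150),  # opted in to updates
    ("donated",      30),  # completed a donation
]

def conversion_rates(funnel):
    """Conversion rate from each funnel stage to the next."""
    rates = {}
    for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
        rates[f"{stage} -> {next_stage}"] = next_count / count
    return rates

for step, rate in conversion_rates(funnel).items():
    print(f"{step}: {rate:.0%}")
```

Whichever stage shows the steepest drop-off is where redesigning the bot's conversation flow is likely to deliver the most results.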

How is your nonprofit preparing itself for the age of automation?
