#AI4Good: Artificial Intelligence & Wellbeing, Ethical Dilemmas, and More | Beth's Blog

AI4Giving

Allison Fine and I have been looking at artificial intelligence, nonprofits, and philanthropy. Our most recent research, Unlocking Generosity with Artificial Intelligence: The Future of Giving, was supported by the Bill and Melinda Gates Foundation. While that research focused on giving and fundraising, we have been looking at the topic through a broader lens. As we continue to explore it, I’ll be posting more regular updates about new developments.

AI and Wellbeing 

I was really excited to discover research on artificial intelligence that intersects with my work on nonprofit workplace wellbeing (The Happy Healthy Nonprofit). The Partnership on AI’s “Framework for Promoting Workforce Wellbeing in the AI-Integrated Workplace” provides a framework and practices to guide employers, workers, and other stakeholders towards promoting workforce wellbeing as AI becomes integrated into the workplace.

There are six areas to consider: human rights; physical, financial, intellectual, and emotional well-being; and purpose and meaning.

A few insights about the need to focus on humanizing AI:

  • Organizations should recognize that AI, like other technology, could exacerbate the blurring of work/life boundaries by pushing workers to remain constantly ‘on’. Organizations should commit to not using AI for the purpose of driving workers to higher levels of productivity at the cost of well-being. Organizations need structures and policies that support and enhance an individual’s work/life balance and enable healthy technology use habits, including a right to disconnect from the workplace.
  • Organizations need to understand the impact that AI systems may have on workers’ physical health and encourage an active, not sedentary workplace.  (Think standing desks, ergonomics, and walking meetings) 
  • Organizations should protect against the ‘datafication’ of workers and work. Priority should be given to retaining and emphasizing human characteristics in decision making processes and evaluations. 

Read the full paper here

Link Between Artificial Intelligence and Wellbeing 

This is a more macro lens. A recent study from two researchers affiliated with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) has challenged the public perception that AI harms society because of “robots taking over jobs.” The study found a relationship between AI-related jobs and increases in economic growth, which in turn improved the well-being of society. Read more about the methodology and findings here, and download the full paper here.

The Key to Successful A.I.? Continual Human Intervention

Douglas Rushkoff, author of Team Human, has been serializing chapters from his book on Medium. This excerpt discusses the perils of being seduced by shiny object syndrome and artificial intelligence. Good quote: “To a hammer, everything is a nail. To an A.I., everything is a computational challenge. We must not accept any technology as the default solution for our problems. When we do, we end up trying to optimize ourselves for our machines, instead of optimizing our machines for us. Whenever people or institutions fail, we assume they are simply lacking the appropriate algorithms or upgrades.” Read here.

AI and Ethical Dilemmas   

UNESCO has a paper and a call to action for international and national policies and regulatory frameworks to ensure that artificial intelligence and other emerging technologies benefit humanity as a whole. It is convening groups to work on a comprehensive global standard-setting process to give AI a strong ethical basis that protects human rights and dignity. It envisions this as an ethical guiding compass and a global normative bedrock for building strong respect for the rule of law in the digital world. To understand the ethical dilemmas, see these scenarios.

Here’s just one example of the ethical issues we need to anticipate as we embrace AI.

Chronic Homelessness Artificial Intelligence model (CHAI)  

Launched in August by a local government social service agency in Canada, the AI system analyzes the personal data of participants to calculate who is likely to have nowhere to sleep for an extended period. As a pilot, the system followed a group of individuals for six months before its formal launch, achieving a 93% success rate in predicting when someone would become chronically homeless. By using the system to anticipate who is likely to become chronically homeless, the city can prioritize how it works with those individuals to try to get them into safe housing or connect them with health services they might need.

The AI program is applied only to consenting individuals, and participants can quit at any time, at which point their data is removed from the model. Data has been “scrubbed” of identifiers like real names; instead, each person is given an identifying number, which is run through the system along with other data, including their age, race, gender, military status, the kinds of city services they have accessed, and how often they sleep in shelters. There is also transparency about how the algorithm reached its assessment.
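The data-handling approach described above — replacing names with opaque identifiers before features enter the model, and dropping a participant’s records if they withdraw consent — can be sketched roughly as follows. This is a minimal illustration, not the actual CHAI system: the field names and the `pseudonymize`/`withdraw` helpers are assumptions for the sake of the example.

```python
import uuid

def pseudonymize(records):
    """Replace direct identifiers (real names) with opaque IDs.

    Returns the scrubbed records (only model features plus the ID)
    and a separate ID-to-name map, so re-identification requires
    access to both stores.
    """
    id_map = {}
    scrubbed = []
    for rec in records:
        pid = uuid.uuid4().hex  # opaque identifying number
        id_map[pid] = rec["name"]
        scrubbed.append({
            "id": pid,
            "age": rec["age"],
            "shelter_stays": rec["shelter_stays"],
            "services_accessed": rec["services_accessed"],
        })
    return scrubbed, id_map

def withdraw(scrubbed, id_map, name):
    """Honor a participant's right to quit: remove their record
    and their entry in the ID map."""
    pids = [p for p, n in id_map.items() if n == name]
    for p in pids:
        del id_map[p]
    remaining = [r for r in scrubbed if r["id"] not in pids]
    return remaining, id_map
```

The key design point the article highlights is separation: the model only ever sees the scrubbed records, while the mapping back to real identities is held apart and deleted when someone opts out.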

There are concerns. What could happen to sensitive data on vulnerable residents going forward?  What if it was used to determine who is taking up large amounts of resources and where funding could be slashed?  (link)

AI in the Humanitarian Sector  

Nethope has been convening and conducting research with global NGOs and technology experts to share, learn, and collaborate on all aspects of AI application in the humanitarian sector. This research was supported by Microsoft’s AI for Humanitarian Action initiative.

Nethope’s research identifies the benefits of using AI systems as well as many examples of how AI is deployed in the humanitarian sector. AI for humanitarian response shares some of the same challenges we identified in our AI4Giving report, including expertise, data, sustainability, inclusion, funding, and oversight. Since many NGOs are not far along on their digital transformation journey, the bar for AI adoption is fairly high. See the Nethope blog post that summarizes how AI is currently being used in the humanitarian sector.

 
