Utilizing AI for decision support rather than determination – opening the black box, learning from our past and reducing bias

This weekend I watched the Netflix documentary “The Trials of Gabriel Fernandez,” which inspired the following submission for presentation at the 2020 American Public Health Association annual meeting. I believe that we can do better. I don’t agree that we should continue to rely on gut feelings because we are afraid of the biases in our historical data. New staff have not yet developed gut feelings or clinical judgment to a degree they can rely on. Sadly, some staff never do. Perhaps it is simply not their calling. The very fact that the past decision processes represented in our data are flawed and biased is the reason we need sophisticated approaches like artificial intelligence (AI) and machine learning (ML) to learn from our past mistakes and improve our processes. It is true that some first attempts at using these approaches in social services were wielded like blunt swords – simply reproducing bias – but hiding from progress altogether is not the answer. We must learn from our mistakes, sharpen our swords and move forward. In this case, the cost of continuing to do what we have always done outweighs the risk of innovation. Those we could help continue to run out of time while we stand still.

Below is the abstract as I would like to present it at APHA. I hope my proposal is accepted.

My Presentation Abstract Submission:

Title: Utilizing AI for decision support rather than determination – opening the black box, learning from our past and reducing bias

Administrators of social services and behavioral health have expressed trepidation about the new frontier of analytics: artificial intelligence (AI) and machine learning (ML). Grocery stores use AI/ML to learn buying patterns and reformat product placement to influence buying behavior; financial institutions use AI/ML to predict the likelihood of default and automate lending decisions. Yet social services and behavioral health continue to rely on gut feelings for the majority of decisions.

Why are these fields afraid of AI/ML? The reason is simple: AI/ML models are built on the historical patterns that exist in the data, and our historical gut feelings have often been biased and flawed. This means that predictive analytics will also be biased and flawed. AI/ML models mimic our historical processes to reduce burden and increase consistency with the norm. But what if we don’t trust the norm? Should we avoid AI/ML altogether? No.

We have a responsibility to understand our past mistakes in order to improve quality; continuing to rely on gut feelings will only perpetuate our poor decisions and bias. I will present three methods that use AI/ML to model current processes outside of a black box, so that we can inspect and modify the factors that drive our decisions today. First, I will show how to use a random forest to reverse engineer those gut feelings into a decision tree, allowing us to inspect the true drivers of gut feelings and make adjustments that improve outcomes and reduce bias. Next, for deciding the appropriate level of intervention, I will show how to use factor analysis to identify the individuals and families most similar to those served by the system in the past, allowing decision makers to inspect the intervention levels most frequently selected while simultaneously summarizing the success of those decisions. Lastly, I will demonstrate how neural networks can improve the quality of the data upon which future decisions are made.
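
To make the first idea concrete, here is a minimal sketch, not the actual presentation code: it trains a random forest on historical case decisions and then fits a shallow “surrogate” decision tree to the forest’s predictions so the drivers of those decisions can be read directly. The feature names and the synthetic data are hypothetical placeholders.

```python
# Illustrative sketch: random forest on past decisions, distilled into an inspectable tree.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = ["prior_referrals", "caregiver_age", "substance_use_flag", "housing_instability"]
X = pd.DataFrame(rng.random((500, len(features))), columns=features)
# Stand-in for historical "gut feeling" decisions recorded in the data.
y = (X["prior_referrals"] + 0.5 * X["housing_instability"] > 0.9).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank the factors that actually drive the historical decisions.
importances = pd.Series(forest.feature_importances_, index=features).sort_values(ascending=False)
print(importances)

# Distill the forest into a small, human-readable decision tree for review and adjustment.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, forest.predict(X))
print(export_text(surrogate, feature_names=features))
```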

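A similarly hedged sketch of the second idea, again with hypothetical field names: reduce historical case records to a few latent factors with factor analysis, then retrieve the most similar past families for a new referral so decision makers can see which intervention levels were chosen before and how those cases turned out.

```python
# Illustrative sketch: latent factors plus nearest-neighbor lookup over past cases.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
history = pd.DataFrame(
    rng.random((1000, 6)),
    columns=["age", "prior_referrals", "income", "household_size", "school_attendance", "prior_services"],
)
history["intervention_level"] = rng.integers(1, 4, size=len(history))   # stand-in for past decisions
history["successful_outcome"] = rng.integers(0, 2, size=len(history))   # stand-in for recorded outcomes

scaler = StandardScaler().fit(history.iloc[:, :6])
fa = FactorAnalysis(n_components=3, random_state=1).fit(scaler.transform(history.iloc[:, :6]))
latent = fa.transform(scaler.transform(history.iloc[:, :6]))

nn = NearestNeighbors(n_neighbors=25).fit(latent)
new_case = scaler.transform(history.iloc[:1, :6])   # pretend this row is a new referral
_, idx = nn.kneighbors(fa.transform(new_case))

similar = history.iloc[idx[0]]
print(similar["intervention_level"].value_counts())                          # most frequent levels for similar families
print(similar.groupby("intervention_level")["successful_outcome"].mean())    # how well each level worked
```

And a last, equally hypothetical sketch of the third idea: a small neural network that predicts a frequently missing field from the fields that are present, filling gaps before any downstream decision support runs.

```python
# Illustrative sketch: neural-network imputation of a missing field.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
records = pd.DataFrame(rng.random((800, 4)), columns=["age", "prior_referrals", "household_size", "income"])
records.loc[rng.random(len(records)) < 0.2, "income"] = np.nan   # simulate ~20% missing values

known = records.dropna(subset=["income"])
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=2)
model.fit(known[["age", "prior_referrals", "household_size"]], known["income"])

missing = records["income"].isna()
records.loc[missing, "income"] = model.predict(records.loc[missing, ["age", "prior_referrals", "household_size"]])
```
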
#socialservices #childwelfare #behavioralhealth #mentalhealth #machinelearning #ai4good #ai #ml