3Sixty Insights

#HRTechChat: Informing Artificial Intelligence

November 01, 2021

For this episode of the #HRTechChat video podcast, AbilityMap CEO and Co-Founder Mike Erlin and Mike Bollinger, vice president of strategic initiatives at Cornerstone OnDemand, joined me to discuss a crucially important area of focus: ensuring, at this still-early stage of artificial intelligence's development, that we inform AI with the best human-centric data possible. After all, most of us would like to think that the behavior of AI, as it eventually grows to an exceptionally high level of sophistication and begins to take over higher-level decision-making, will continue to reflect what we hold dear as "humanness."

Both Bollinger and Erlin are vendor-side members of our Global Executive Advisory Council and repeat guests on the podcast. The episode you're reading about here has its origins in an unrecorded conversation the three of us had several weeks ago. It all began when Bollinger alerted us to "Bias in AI: Are People the Problem or the Solution?" by John Sumser, principal analyst for HRExaminer. The article acknowledges two camps with diverging viewpoints on the development of AI. "One group says people are the problem; the other sees them as the solution," according to Sumser, who also writes, "All tools contain embedded biases. Bias can be introduced long before the data is examined and at other parts of the process."

We commenced this episode by agreeing with Sumser. The way forward, in our opinion, is to flood AI with as much human perspective as possible. The alternative, for developers to work overtime attempting to ensure that AI remains devoid of human bias, may be the wrong way to go and, moreover, may simply be impossible. This is my own inference from Sumser's article. That approach is counterproductive if we wish to avoid the generally dystopian future AI has the potential to produce should we fail, right now, to shepherd it in a direction humans would recognize as desirable.

This does not mean a direction that humans necessarily would set on their lonesome, by the way. And, yes, there are implications for the future of work specifically. Erlin made great points here. In the world of work, when we test for cultural fit and soft skills, the best candidate for a role is often someone we never would have predicted. What manager anywhere would guess that a former daycare worker would be the best fit for a role in debt collection, for example? I may be getting the details slightly wrong, but modern psychometrics have produced findings very much like this one.

Imagine a future of work where AI lacks this perspective, drawing instead solely on conventional decision-making metrics such as credentials and past work experience. That's where we're headed: a future where AI for talent acquisition, for example, will have been developed with data that deprives it entirely of the ability to unearth delightfully unintended, unexpected relevance. In an additional twist, that is a particularly human outcome that mere humans would never reach on their own. A minimal sketch of this mechanic follows.
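To make the point concrete, here is a small, purely hypothetical Python sketch. Nothing in it comes from AbilityMap or Cornerstone OnDemand; the candidates, traits, and weights are all invented for illustration. The mechanic is simple: a ranking function built only on credentials and tenure cannot surface the daycare-worker match, because the relevant psychometric signal never enters its inputs.

```python
# Hypothetical illustration: a candidate-ranking function can only surface
# patterns present in the features we give it. All names, traits, and
# weights below are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    years_experience: float   # conventional metric
    credential_score: float   # conventional metric, 0-1
    empathy: float            # psychometric trait, 0-1
    resilience: float         # psychometric trait, 0-1

candidates = [
    Candidate("Former debt collector", 6.0, 0.9, 0.3, 0.5),
    Candidate("Former daycare worker", 1.0, 0.4, 0.9, 0.9),
]

def conventional_score(c: Candidate) -> float:
    # Credentials and tenure only: the resume-driven ranking.
    return 0.5 * c.credential_score + 0.5 * min(c.years_experience / 10, 1.0)

def fit_aware_score(c: Candidate) -> float:
    # Blends in psychometric fit for a debt-collection role, where the
    # research Erlin alludes to suggests empathy and resilience matter.
    return 0.3 * conventional_score(c) + 0.35 * c.empathy + 0.35 * c.resilience

for score in (conventional_score, fit_aware_score):
    ranked = sorted(candidates, key=score, reverse=True)
    print(score.__name__, "->", [c.name for c in ranked])
```

Run it and the rankings flip: the credentials-only score favors the experienced collector, while the fit-aware score surfaces the unexpected candidate. Whatever the model architecture, it can discover only the relationships present in the data we feed it.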

Erlin expounded further on the idea. Incorporating quantitative evidence of human bias -- think inherent human preferences -- into the referenceable data sets available to AI generates higher-quality, human-centric choices for humanity, now and in the future, he suggests. I agree. Feeding this type of information to AI, which should then provide us with suggested courses of action, is a continual process. Furthermore, we must think deeply about the questions we ask AI to answer. For example, rather than ask, "How can we reduce crime?" we should consider asking, "How do we create an enriching community?" -- lest AI return answers that only exacerbate human suffering or frustration.
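As a closing illustration of Erlin's suggestion, here is another small, hypothetical sketch. Rather than scrubbing human preference out of a data set, it records that preference explicitly as a measurable field, so a downstream model can reason about the bias instead of silently absorbing it through the outcome labels. All field names and scores are invented.

```python
# Hypothetical sketch: annotating training records with quantified human
# preference data so the bias is visible to downstream AI rather than
# implicit. Field names and values are invented for illustration.

records = [
    {"candidate_id": 101, "hired": True},
    {"candidate_id": 102, "hired": False},
]

# Measured human preference evidence, e.g. from structured interviewer
# debriefs or psychometric instruments (hypothetical scores, 0-1).
reviewer_preferences = {
    101: {"similarity_to_reviewer": 0.8},  # a known source of affinity bias
    102: {"similarity_to_reviewer": 0.2},
}

def annotate(records, prefs):
    """Attach explicit bias evidence to each record instead of leaving it
    buried, unmeasured, inside the outcome label."""
    return [{**r, **prefs.get(r["candidate_id"], {})} for r in records]

training_set = annotate(records, reviewer_preferences)
print(training_set)
```

A model trained on the annotated set can weigh, discount, or report the preference signal; a model trained on the raw outcomes alone simply inherits it.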