An independent assessment shows that SkyHive’s Skills Models are free of racial or gender bias when analyzing candidate qualifications, answering one of the biggest questions hiring managers and workers have about the use of artificial intelligence in talent management systems.
SkyHive’s Skills Models are now Armilla Verified, with the assessment showing that:
- Skills Models remain robust and accurate when demographic information is added, meaning that the models don’t show unconscious racial and gender bias.
- Specifically, the models meet the standard set by New York City’s Local Law 144, the benchmark regulation for employers using automated hiring tools.
- The Skills Models also remain robust when irrelevant information is added, meaning that the models successfully ignore text that doesn’t matter and focus on the real skills of a candidate.
A skills-based approach, when adopted as a talent management strategy, can improve employee retention, enhance internal mobility, and guide critical reskilling and upskilling strategies. But to do that, employers need an accurate and, most importantly, unbiased skills inventory.
The verification is completely voluntary, but it reflects SkyHive’s commitment to putting ethics first when developing talent management solutions, said Mohan Reddy, SkyHive co-founder and CTO.
“SkyHive submitted our technology for an Armilla Verification Badge for the same reason we chose to become a Certified B Corporation: to ensure we’re living up to our values as a company,” said Reddy. “We’ve worked hard to build the world’s most ethical AI people technology, and we wanted an independent assessment to demonstrate that we have succeeded.”
Unconscious AI bias is a major concern as more and more employers use automated tools and artificial intelligence to manage hiring. AI has enormous potential to make hiring both more efficient and more equitable by enabling skills-first talent approaches. But that can’t happen if longstanding biases are baked into the AI’s algorithms.
There are a number of ways bias can be inadvertently introduced into automated tools, even for well-intentioned employers. One way is for bias to be built into the datasets that employers use to “train” AI to identify potential candidates.
An AI system only knows what is in its dataset. If there are blind spots in how a company hires and promotes, such as an imbalance between men and women, then those biases will be reflected in the HR data, and the AI tool will end up sharing them. Biased results could reinforce existing problems, denying opportunities to workers and undermining skills-based hiring strategies for HR teams.
SkyHive’s technology allows employers to move from a job-based to a skills-based hiring strategy. By having accurate and up-to-date information about skills, employers can find talent more easily and not be limited by outdated job descriptions or the “paper ceiling” of broad education requirements. SkyHive’s skill ontology also allows workers to better understand what skills are in demand and how to advance in their careers.
Bias was assessed by taking a sample of resumes parsed by SkyHive in the United States, Canada, India, and the United Kingdom. Armilla ran the resumes through SkyHive’s Skill Model Inference twice: once anonymously and again with demographic data including race, gender, age, and years of work experience. The skills extracted from the two runs overlapped by 97.5%, meaning that demographic data had little influence on the results.
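As a rough illustration of that methodology, the overlap check might look like the sketch below. The `extract_skills` function is a toy keyword matcher standing in for SkyHive’s Skill Model Inference, whose actual interface is not public, and intersection-over-baseline is just one plausible way to define overlap.

```python
def extract_skills(resume_text: str) -> set[str]:
    """Toy stand-in for SkyHive's parser: naive keyword matching
    against a tiny skill lexicon, for illustration only."""
    lexicon = {"python", "sql", "project management", "data analysis"}
    text = resume_text.lower()
    return {skill for skill in lexicon if skill in text}

def demographic_overlap(resume: str, demographics: str) -> float:
    """Compare skills extracted with and without demographic details."""
    anonymous = extract_skills(resume)
    augmented = extract_skills(resume + "\n" + demographics)
    if not anonymous:
        return 1.0
    # Share of anonymously extracted skills that survive when race,
    # gender, age, and experience details are added; an average near
    # 97.5%, as the assessment found, would mean demographics barely
    # influence the output.
    return len(anonymous & augmented) / len(anonymous)

# Example with a toy resume. The naive matcher scores 1.0 by
# construction, since keyword matching ignores demographics;
# a learned model might not, which is what the audit checks.
resume = "Experienced in Python and data analysis."
print(demographic_overlap(resume, "Gender: female. Age: 34."))
```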
In addition, the review tested whether the model could be confused by irrelevant text in a resume, inserting passages that had nothing to do with skills, such as “lorem ipsum” filler or excerpts from public-domain novels. The parser correctly ignored these sections and still identified skills with 95% accuracy, the report said.
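The noise test can be pictured the same way, reusing the toy `extract_skills` stand-in from the first sketch; the filler text is an assumption of the sketch, not a detail from the report.

```python
# Reuses the toy extract_skills() from the previous sketch.
FILLER = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. " * 5

def noise_robustness(resume: str) -> float:
    """Check that extraction ignores irrelevant inserted text."""
    baseline = extract_skills(resume)
    noisy = extract_skills(resume + "\n" + FILLER)
    if not baseline:
        return 1.0
    # Share of baseline skills still recovered after the irrelevant
    # passage is injected. With the toy matcher this is trivially 1.0;
    # the audit reports 95% for the real model.
    return len(baseline & noisy) / len(baseline)
```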
The report did find that using synonyms for skill names in a resume could affect the results: substituting synonyms reduced the overlap in extracted skill sets by 19%. SkyHive teams will use these findings to continue improving the company’s patented approach to an ontology that identifies and classifies skills.
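The synonym sensitivity check follows the same pattern; the synonym map below is invented for illustration and again leans on the toy `extract_skills` stand-in.

```python
# Invented synonym pairs, for illustration only.
SYNONYMS = {
    "software development": "software engineering",
    "data analysis": "data analytics",
}

def synonym_sensitivity(resume: str) -> float:
    """Measure how much swapping skill synonyms changes extraction."""
    rewritten = resume
    for term, synonym in SYNONYMS.items():
        rewritten = rewritten.replace(term, synonym)
    baseline = extract_skills(resume)
    perturbed = extract_skills(rewritten)
    if not baseline:
        return 1.0
    # A drop of roughly 19% in overlap, as the report found, would
    # show up here as a value around 0.81.
    return len(baseline & perturbed) / len(baseline)
```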
The review also included the bias audit required by New York City’s Local Law 144, computing the “selection rate” for candidates across the 30 most frequently extracted skills. The audit found that the resulting “impact ratio” for each group stayed within acceptable limits, with no evidence of bias.
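The arithmetic behind that audit is simple: each group’s selection rate is divided by the rate of the most-favored group, yielding an impact ratio per group. A minimal sketch, assuming plain per-group counts; the four-fifths threshold mentioned in the comment comes from broader US employment guidance, not from the law itself.

```python
def impact_ratios(selected: dict[str, int], total: dict[str, int]) -> dict[str, float]:
    """Compute Local Law 144-style impact ratios per demographic group.

    selected: candidates per group for whom a given skill was extracted
    total:    all candidates per group
    """
    rates = {g: selected[g] / total[g] for g in total if total[g] > 0}
    best = max(rates.values())
    # Each group's selection rate divided by the most-favored group's
    # rate. Values near 1.0 indicate parity; the conventional
    # four-fifths rule of thumb flags ratios below 0.8.
    return {g: rate / best for g, rate in rates.items()}

# Example: women selected at 45/100 and men at 50/100 yields an
# impact ratio of 0.9 for women, above the 0.8 threshold.
print(impact_ratios({"women": 45, "men": 50}, {"women": 100, "men": 100}))
```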
The New York law has an impact far beyond the city itself because so many major companies are headquartered or do business there. Realistically, any large employer has to consider the New York standards, since it is much easier to make an entire application compliant than to maintain a separate version for New York.
Verification of AI ethics can also make it easier for SkyHive clients to respond to RFPs, giving them a simple, independent answer to questions about ethical AI and bias in their applications.
To find out how to use SkyHive to solve your talent and workforce development problems, contact us today.