Leading the way in ethical AI and workforce management
As artificial intelligence (AI) and machine learning (ML) continue to shape the future of workforce management, questions about fairness, transparency, and ethical use are becoming more pressing. Recent regulations, like New York City’s Automated Employment Decision Tool (AEDT) law, are prompting companies to reexamine their AI practices.
As the CTO of a company whose ML is used by companies and governments to manage their workforces, you might expect me to be worried about the new laws coming out of New York and elsewhere. The truth is, I'm not.
In fact, I truly believe that some of these rules might not be all that bad when it comes to ensuring that your skills inventories, talent inventories, and skills-based organizations are built without bias. And yes, I'm skeptical of some of the assessment tools on the market. When someone judges me by how I play an online game, for example, I don't believe it leads to the best hire. I have similar feelings about some of the personality tests that hiring teams use, too.
All told, having some rules about the ethics of AI is probably a good thing.
The push for ethical technology
There has been a wave of regulatory action aimed at tightening the rules surrounding AI and introducing oversight for AI tools used in workforce management.
These rules, such as New York’s AEDT law that we mentioned earlier, mandate that machine-learning systems undergo audits for bias, that job candidates be informed when AI is used in their evaluation, and that companies provide alternative options for individuals with disabilities.
It’s not just American companies that will have to deal with the implications of AI laws. The European Union is also taking steps to regulate AI with the proposed AI Act, which classifies AI systems based on their potential risk and imposes stricter requirements on high-risk applications, including those used in employment. These laws represent a growing trend toward AI accountability, aiming to ensure that AI benefits underrepresented populations rather than entrenching bias against them.
Although some companies may view these regulations with trepidation, the new laws aim to promote fairness and safety in AI by ensuring that AI tools do not disadvantage specific groups, especially underrepresented populations, or put people at unnecessary risk.
The knock-on effect is that companies are now tasked with ensuring that their AI systems operate transparently, avoiding the "black box" nature that often conceals how decisions are made. This shift in regulatory expectations signals a broader trend toward holding AI providers accountable for ethical concerns, from bias prevention to transparency in decision-making processes. I don’t have a problem with any of that.
Although it remains to be seen exactly what upcoming AI standards will entail, New York’s law, in brief, requires employers to:
- Have ML or similar hiring tools audited for bias.
- Notify candidates that the company is using ML.
- Tell candidates what qualifications and characteristics are being assessed.
- Provide accommodations for people who need an alternative selection process.
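To make the first requirement concrete: AEDT-style bias audits center on "impact ratios" — each group's selection rate divided by the selection rate of the most-selected group, with ratios below 0.8 commonly flagged under the four-fifths rule used in US employment-selection analysis. Here is a minimal Python sketch of that calculation; the group labels and counts are invented for illustration, not real audit data:

```python
from collections import Counter

def impact_ratios(candidates, selected):
    """Selection rate per group, plus each group's ratio to the
    highest-rate group (what AEDT-style audits call the impact ratio)."""
    totals = Counter(candidates)          # applicants per group
    picks = Counter(selected)             # hires/advancements per group
    rates = {g: picks[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: (rates[g], rates[g] / top) for g in rates}

# Hypothetical audit data: one group label per applicant, and per hire.
applicants = ["A"] * 100 + ["B"] * 100
hires = ["A"] * 30 + ["B"] * 21

for group, (rate, ratio) in impact_ratios(applicants, hires).items():
    flag = "" if ratio >= 0.8 else "  <- below four-fifths threshold"
    print(f"group {group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

In this made-up example, group B is selected at 70% of group A's rate, so an auditor would flag it for closer review. A real audit covers every protected category the law names and, of course, much larger samples.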
This raises obvious questions. For example, who is required to do the audit? We don’t yet know whether an organization, such as the company using the technology, needs to audit it, or whether it’ll be a requirement to enlist the help of an outside company such as Mercer or Accenture. This should become clear soon.
At SkyHive, we’ve been auditing our technology regardless. Any time we release a new feature, it is tested for bias. The purpose of our features, our enterprise technology, our platform, and our Skill Passport in the first place is to provide opportunities for people to be measured on their skills and the transferability of those skills, not on gender, ethnicity, pedigree, or any other potential sources of bias. This reflects our commitment to ethical AI.
SkyHive’s commitment to ethical AI
At SkyHive, we’ve been a champion of an ethics-by-design approach since day one. Unlike many companies that are now scrambling to comply with new AI regulations, we have always prioritized transparency and fairness.
We built our patented technology from the ground up with ethical principles in mind, focusing on transparency, explainability, and fairness, so that bias is prevented by design. This is known by those of us in the AI space as a “Whitebox/Glassbox” AI approach, and it means that we disclose what we do and why we do it without compromising our intellectual property.
Our commitment to AI ethics isn’t a one-and-done situation, either. SkyHive’s internal auditing process ensures that any new feature or algorithm update is rigorously tested for potential bias before being rolled out. In a world where AI’s opacity often fuels mistrust, SkyHive’s model leads the way in ethical AI.
MIDAS: Our gold-standard ethical AI certification
To further reinforce our commitment to ethical AI, we’ve developed a bespoke certification in AI ethics, known as the MIDAS (Machine Intelligence Data Analysis Standards) certification. MIDAS covers areas like data lineage and machine-learning model reliability, reinforcing ethical AI practices.
By adhering to MIDAS, we can guarantee the highest data quality standards in the world’s largest skills dataset. Every day, we process more than 24 terabytes of raw data, including anonymized worker profiles and job descriptions from over 180 countries in multiple languages.
“How does more data make you more ethical?” you might be thinking. Well, for one, a dataset this large helps us remove bias by ensuring that no single organization is over-represented, since over-representation is itself a source of bias. It also enables us to uncover skills that people are often unaware of, leading to more upskilling and reskilling possibilities, particularly for people with non-traditional backgrounds. Or, as one of our Skills Passport users said regarding their employee skills assessment, “I got to know things about myself I hadn’t thought about.”
The result? Our commitment to ethical AI has garnered recognition from various industry bodies. We’re currently working with the Responsible Artificial Intelligence Institute and the Global Partnership on AI (GPAI), and in 2022 we won the RAISE ‘Leading Start-Up Award’ for excellence in ethical AI. I hope you can see why we’re comfortable being held accountable to a standard of AI ethics.
Your role in promoting ethical AI
Talent leaders looking to reinvent their workflows through sensible AI implementations should look for providers with a clear, proven history of ethical standards rather than those who are complying just because the law says they have to.
Another important factor is transparency around data usage. Companies should be open about where their data comes from and how it is used. For example, as we mentioned earlier, SkyHive processes more than 24 terabytes of raw data daily. We ensure that this data is anonymized and that no one organization is overrepresented, which helps prevent bias.
A provider with a diverse, well-curated dataset like this is essential for building fairer and more equitable AI systems. The larger the dataset, the easier it becomes to identify and mitigate biases that may arise from over-reliance on specific data sources.
In addition to transparency and data quality, businesses should verify that their AI providers undergo third-party audits to assess the fairness and accuracy of their systems. Regular, independent audits help ensure that AI models continue to operate without bias and remain compliant with ethical standards over time.
Compliance with global data privacy regulations like GDPR is similarly important for maintaining trust and ethical AI use. The National Institute of Standards and Technology (NIST) has developed an AI Risk Management Framework that can guide companies in assessing and mitigating AI-related risks. Any talent leader considering the adoption of AI or ML tooling would be well-advised to at least read it.
Paving the way for a skills-based future
There are some nervous people and companies right now, and I can understand why. Their technologies are a black box. The software may or may not be fair or compliant. Ethics has not been a top priority for these companies. However, as AI regulations evolve, the focus on ethics and transparency will continue to grow. Companies need to be prepared to address this.
At SkyHive, our ethics-by-design culture requires that we create products that can be justified, and for which the output can be interpreted and explained. We are a Certified B Corporation. The very purpose of our company is to help democratize work, help bring opportunities to people left out of the workforce, and help communities and companies optimize the world's transition from jobs-based to skills-based.
By focusing on fairness and accountability, we’re also ensuring that AI benefits everyone and helps democratize access to job opportunities. Companies must adopt ethical AI principles to foster more equitable, skills-based, and agile workforces, where skill development, continuous learning, and internal mobility are the norm. In many ways, our approach serves as a best-practice model for the industry, and we won’t be going away any time soon.
We’re looking forward to hearing more about how the New York law and others like it will be enforced, and what the particulars will be. In the meantime, we will keep doing what we’re doing, because it’s the right thing to do.