Sometimes it feels like everything there is to say and ask about AI has already been said and asked.
So, leading a panel at last week's Applied Intelligence Live in Austin, SkyHive Co-founder and Chief Technical Officer Mohan Reddy had the group spend the bulk of their time on a less common question: How do we know whether AI will do a better job at something than a human?
The panelists (representing themselves, not their employers) included:
- Coran Darling, AI & Data Analytics, DLA Piper US LLP
- Xiaochen Zhang, Chief Executive Officer, FinTech4Good
- Lauren Loera, Technical Recruiting Researcher, Netflix
The conversation began in earnest when Zhang said that AI “makes you a better lawyer.”
Reddy asked how we know this for sure. Could artificial intelligence handle a task quickly and efficiently, only for someone to have to spend time later correcting its work?
To that, most of the panelists said that AI doesn’t prevent errors. Instead, it helps curate and summarize. Darling, for example, said that it could find all legal contracts with an indemnity clause; a human could then look over those clauses.
Zhang agreed, saying that AI could take a longer piece and turn it into a short argument or summary, as opposed to making a decision for you.
Loera, similarly, emphasized the power of AI to reformat, modify existing tools, and build frameworks.
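Darling’s contract example captures the pattern the panel kept returning to: AI narrows the pile, and a human makes the judgment call. Here is a minimal sketch of what that might look like in code; the folder path, file format, and simple keyword heuristic are assumptions made purely for illustration, and a real contract-review tool would use far more robust clause detection while still routing results to a person.

```python
# Minimal sketch of the "AI curates, a human decides" pattern: flag contracts
# that appear to contain an indemnity clause so a person can review them.
# The directory name, file format, and keyword list are illustrative assumptions.

from pathlib import Path

INDEMNITY_TERMS = ("indemnify", "indemnification", "hold harmless")


def flag_for_review(contracts_dir: str) -> list[Path]:
    """Return the contracts that likely contain an indemnity clause."""
    flagged = []
    for path in Path(contracts_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        if any(term in text for term in INDEMNITY_TERMS):
            flagged.append(path)
    return flagged


if __name__ == "__main__":
    # The tool narrows the pile; a person still reads every flagged clause.
    for contract in flag_for_review("contracts/"):
        print(f"Needs human review: {contract.name}")
```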
When using AI and considering how much confidence to place in it, Reddy suggested the following:
- Analyze where your checks and balances are, rather than relying on a post-mortem review. You may have a chief AI officer or chief ethics officer, but if you haven’t thought through everything that could go wrong with your AI (such as bias), could it already be too late by the time your ethics officer finds a problem? It’s essential to use technology with built-in checks and balances, not just after-the-fact checkups (a simple sketch of this idea follows this list).
- Examine the AI model carefully. “Common sense is not so common,” Reddy says. “We need to bring in the old-school reasoning to these models, we need more responsible models,” maybe even including some rules. Another way of putting it: if your AI technology isn’t compliant with security and data-privacy laws, hasn’t been audited, isn’t built on a large enough data pool, hasn’t gone through strict bias checks, or doesn’t meet other ethical standards, don’t use it. (More on AI ethics in this recording.)
- Ask how it works. If you don’t know how your AI arrives at the results it delivers, and it can’t be clearly explained to you, Reddy says, don’t use it.
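To make the first point a bit more concrete, here is a minimal sketch of what “built-in checks and balances” could look like in practice: a rule-based gate that runs on every model output before anyone acts on it, rather than waiting for a post-mortem review. The Recommendation structure, the prohibited-feature rule, and the score check are hypothetical placeholders; real checks would come out of your own compliance, privacy, and bias reviews.

```python
# Sketch of built-in checks and balances: simple rules applied to every model
# output before it is acted on, not just during an after-the-fact audit.
# The data structure and rules below are illustrative placeholders only.

from dataclasses import dataclass


@dataclass
class Recommendation:
    candidate_id: str
    score: float
    features_used: list[str]


PROHIBITED_FEATURES = {"age", "gender", "zip_code"}  # example rule set only


def passes_checks(rec: Recommendation) -> bool:
    """Old-school rules applied to every output, not just in audits."""
    if PROHIBITED_FEATURES & set(rec.features_used):
        return False  # possible bias: block automatically
    if not (0.0 <= rec.score <= 1.0):
        return False  # malformed output: block automatically
    return True


def decide(rec: Recommendation) -> str:
    # The model narrows and ranks; a person still makes the final call,
    # and anything that fails a check is escalated rather than used.
    return "accept for human review" if passes_checks(rec) else "escalate to ethics review"


if __name__ == "__main__":
    rec = Recommendation(candidate_id="c-123", score=0.87,
                         features_used=["skills", "experience"])
    print(decide(rec))
```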