More organizations are embracing the concept of responsible AI, but faulty assumptions can impede success.
Ethical AI. Responsible AI. Trustworthy AI. More companies are talking about AI ethics and its facets, but can they apply them? Some organizations have articulated responsible AI principles and values, but they're having trouble translating those principles into something that can be implemented. Other companies are further along because they started earlier, but some of them have faced considerable public backlash for making mistakes that could have been avoided.
The reality is that most organizations don't intend to do unethical things with AI. They do them inadvertently. However, when something goes wrong, customers and the public care less about the company's intent than what happened as the result of the company's actions or failure to act.
Following are a few reasons why companies are struggling to get responsible AI right.
They're focusing on algorithms
Business leaders have become concerned about algorithmic bias because they realize it's become a brand issue. However, responsible AI requires more.
"An AI product is never just an algorithm. It's a full end-to-end system and all the [related] business processes," said Steven Mills, managing director, partner and chief AI ethics officer at Boston Consulting Group (BCG). "You could go to great lengths to ensure that your algorithm is as bias-free as possible but you have to think about the whole end-to-end value chain from data acquisition to algorithms to how the output is being used within the business."
By narrowly focusing on algorithms, organizations miss a lot of sources of potential bias.
They're expecting too much from principles and values
More organizations have articulated responsible AI principles and values, but in some cases they're little more than marketing veneer. Principles and values reflect the belief system that underpins responsible AI. However, companies aren't necessarily backing up their proclamations with anything real.
"Part of the challenge lies in the way principles get articulated. They're not implementable," said Kjell Carlsson, principal analyst at Forrester Research, who covers data science, machine learning, AI, and advanced analytics. "They're written at such an aspirational level that they often don't have much to do with the topic at hand."
BCG calls the disconnect the "responsible AI gap" because its consultants run across the issue so frequently. To operationalize responsible AI, Mills recommends:
- Having a responsible AI leader
- Supplementing principles and values with training
- Breaking principles and values down into actionable sub-items
- Putting a governance structure in place
- Doing responsible AI reviews of products to uncover and mitigate issues
- Integrating technical tools and methods so outcomes can be measured
- Having a plan in place in case there's a responsible AI lapse that includes turning the system off, notifying customers and enabling transparency into what went wrong and what was done to rectify it
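One of the recommendations above, integrating technical tools so outcomes can be measured, can be made concrete with a simple fairness metric. The sketch below is illustrative and not from the article; the function names and data are hypothetical. It computes demographic parity difference, the gap in positive-prediction rates between demographic groups, which is one common way teams quantify algorithmic bias:

```python
# Hypothetical sketch of a measurable responsible-AI check: demographic
# parity difference, i.e. the largest gap in positive-prediction rates
# across groups. All names and data here are illustrative.

def positive_rate(predictions):
    """Share of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_by_group):
    """Largest gap in positive-prediction rate across groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Example: model approvals (1) / denials (0), split by a sensitive attribute
preds = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 62.5% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25.0% approved
}
gap = demographic_parity_difference(preds)
print(f"demographic parity difference: {gap:.3f}")  # 0.375
```

A metric like this is only one input; as Mills notes, it has to sit inside a governance process that decides what gap is acceptable and who reviews it.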
They've created separate responsible AI processes
Ethical AI is sometimes treated as a separate category, much like privacy and cybersecurity. However, as those two functions have demonstrated, such programs can't be effective when they operate in a vacuum.
"[Organizations] put a set of parallel processes in place as sort of a responsible AI program. The challenge with that is adding a whole layer on top of what teams are already doing," said BCG's Mills. "Rather than creating a bunch of new stuff, inject it into your existing process so that we can keep the friction as low as possible."
That way, responsible AI becomes a natural part of a product development team's workflow, and there's far less resistance to what would otherwise be perceived as another risk or compliance function that just adds overhead. According to Mills, the companies realizing the greatest success are taking the integrated approach.
They've created a responsible AI board without a broader plan
Ethical AI boards are necessarily cross-functional groups because no one person, regardless of their expertise, can foresee the entire landscape of potential risks. Companies need to understand from legal, business, ethical, technological and other standpoints what could possibly go wrong and what the ramifications could be.
Be mindful of who is selected to serve on the board, however, because their political views, what their company does, or something else in their past could derail the endeavor. For example, Google dissolved its AI ethics board after one week because of complaints about one member's anti-LGBTQ views and the fact that another member was the CEO of a drone company whose AI was being used for military applications.
More fundamentally, these boards may be formed without an adequate understanding of what their role should be.
"You need to think about how to put reviews in place so that we can flag potential issues or potentially risky products," said BCG's Mills. "We may be doing things in the healthcare industry that are inherently riskier than advertising, so we need those processes in place to elevate certain things so the board can discuss them. Just putting a board in place doesn't help."
"Companies should have a plan and strategy for how to implement responsible AI within the organization [because] that's how they can effect the greatest amount of change as quickly as possible," said Mills. "I think people have a tendency to do point things that seem interesting, like standing up a board, but they're not weaving it into a comprehensive strategy and approach."
There's more to responsible AI than meets the eye, as evidenced by the relatively narrow approaches companies take. It's a comprehensive endeavor that requires planning, effective leadership, implementation and evaluation, enabled by people, processes and technology.
Originally written by Lisa Morgan for InformationWeek, March 19, 2021.