3 Conversations Companies Need to Have

In recent years, concerns about AI ethics have become mainstream. The concerns, and the outcomes everyone wants to avoid, are largely agreed upon and well documented. Nobody wants to put out discriminatory or biased AI. No one wants to be the subject of a lawsuit or government investigation for invasion of privacy. But if we all agree that biased, privacy-violating, black-box AI is bad, where do we go from here? The question most leaders ask is: How can we take action to mitigate these ethical risks?

It is admirable to act quickly to address concerns, but given the complexities of machine learning, ethics, and their intersection, there are no quick fixes. To implement, scale, and sustain effective AI ethical risk mitigation strategies, organizations should start with a deep understanding of the problems they are trying to solve. One challenge, however, is that conversations about AI ethics can seem nebulous. The first step, then, is learning how to talk about it in a concrete and actionable way. Here is how to set the table for AI ethics conversations so that the next steps are clear.

Who needs to be involved?

We recommend creating a high-level working group responsible for promoting AI ethics in your organization. Its members should have the skills, experience, and knowledge to ensure the conversations are informed by business needs, technical capacity, and operational know-how. At a minimum, we recommend engaging four types of people: technologists, legal and compliance professionals, ethicists, and business leaders who understand the problems you’re trying to solve through the use of AI. Their common goal is to understand the sources of ethical risk in general, for the industry they belong to, and for their specific company. After all, there are no good solutions without a deep understanding of the problem itself and of the potential obstacles to proposed solutions.

You need technologists to assess what is technologically feasible, not only at the product level but also at the organizational level. Different ethical risk mitigation plans require different technical tools and skills, so knowing where your organization stands from a technology perspective is crucial to identifying and closing the biggest gaps.

Legal and compliance experts help ensure that any new risk mitigation plan is compatible with, and not redundant alongside, existing risk mitigation practices. They play a particularly important role given that it is not clear how existing laws and regulations will apply to new technologies, nor what new regulations or laws are in the pipeline.

Ethicists are there to ensure a systematic and thorough investigation of the ethical and reputational risks you should address, not only those arising from AI development and procurement in general but also those specific to your industry and company. Their role is especially important because complying with existing, often outdated regulations does not ensure the ethical and reputational security of your organization.

Finally, business leaders should help ensure that all risks are mitigated in a manner consistent with business needs and objectives. Zero risk is impossible as long as the organization is doing anything at all, but unnecessary risk is a threat to the bottom line, and risk mitigation strategies should be selected with a view to what is economically feasible.

Three conversations to move things forward

Once the team is in place, here are three important conversations to have. The first is about coming to a common understanding of what goals an AI ethical risk program should aim for. The second involves identifying the gaps between where the organization is today and where it wants to be. The third aims to understand the sources of those gaps so that they can be addressed comprehensively and effectively.

1) Define your organization’s ethical standard for AI.

Every conversation should recognize that both legal compliance (e.g., with anti-discrimination law) and regulatory compliance (e.g., with GDPR and/or CCPA) are at stake, but that they are not the whole story. The question to be answered is: Given that the set of ethical risks is not identical to the set of legal and regulatory risks, what do we identify as the ethical risks for our industry and organization, and where do we stand on them?

There are many difficult questions to answer here. For example, what counts as a discriminatory model in the eyes of your organization? Suppose your AI hiring software discriminates against women, but it discriminates against them less than they have been discriminated against historically. Is your benchmark for being sufficiently unbiased “better than humans have done over the last 10 years”? Or is there another benchmark you think is appropriate? Those in the self-driving car space know this question well: Are we going to deploy self-driving cars at scale when they are better than the average human driver, or only when they are at least as good as (or better than) our best human drivers?

Similar questions arise in connection with black-box models. Where does your organization stand on explainability? Are there cases where you think using a black box is acceptable (e.g., as long as it measures well against your chosen benchmark)? What are the criteria for determining whether explainable outputs are unnecessary, a nice-to-have, or a need-to-have?

By delving deeply into these questions, you can develop frameworks and tools for your product teams and for the executives who green-light product delivery. For example, you may decide that every product must go through an ethical risk due diligence process before deployment, or even in the earliest stages of product design. You can also set policies on when, if at all, black-box models may be used. Being able to articulate the minimum ethical standards all your AI must meet is a good sign that progress has been made. Those standards are also important for gaining the trust of customers and clients, and they demonstrate that due diligence was carried out should regulators investigate whether your organization has deployed a discriminatory model.

2) Identify the gaps between where you are now and what your standards require.

There are various technical “solutions” or “fixes” to AI ethics issues. A range of software products, from big tech companies to startups to nonprofits, helps data scientists apply quantitative fairness metrics to their models’ outputs. Tools like LIME and SHAP help data scientists explain how those outputs come about in the first place. But virtually nobody believes that these technical solutions, or any technological solution, will by themselves sufficiently mitigate ethical risk and transform your company into one that meets its AI ethical standards.
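To make concrete what these quantitative tools do (and where they stop), here is a minimal sketch, assuming only scikit-learn and NumPy and using entirely synthetic data with a made-up binary “group” attribute. It computes two common fairness metrics on a trained model’s predictions: the selection-rate (“disparate impact”) ratio and the true-positive-rate gap. It illustrates the genre of check such products automate; it is not the API of any particular product.

```python
# A minimal sketch, assuming scikit-learn and NumPy, on synthetic data with a
# hypothetical binary "group" attribute. It computes two common quantitative
# fairness metrics: the selection-rate ("disparate impact") ratio and the
# true-positive-rate gap (equal opportunity difference).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, size=n)                   # hypothetical protected attribute
X = rng.normal(size=(n, 3)) + 0.3 * group[:, None]   # features mildly correlated with group
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.7, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

def selection_rate(pred, mask):
    """Share of the group that receives the positive (e.g., 'hire') prediction."""
    return pred[mask].mean()

def true_positive_rate(pred, y, mask):
    """Share of the group's actual positives that the model correctly flags."""
    return pred[mask & (y == 1)].mean()

rates = [selection_rate(pred, group == g) for g in (0, 1)]
tprs = [true_positive_rate(pred, y, group == g) for g in (0, 1)]

print(f"selection rates by group: {rates[0]:.3f} vs {rates[1]:.3f}")
print(f"disparate impact ratio:   {min(rates) / max(rates):.3f}")  # 4/5-rule heuristic
print(f"equal opportunity gap:    {abs(tprs[0] - tprs[1]):.3f}")   # TPR difference
```

Numbers like these are where the quantitative tools stop. Deciding whether a given ratio or gap is acceptable, and what to do about it, is exactly the kind of qualitative judgment the questions below are meant to surface.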

Your AI ethics team should identify where the limits of those tools lie and how the team’s own skills and knowledge can complement them. That means asking:

  1. What exactly is the risk we are trying to mitigate?
  2. How does software/quantitative analysis help us mitigate this risk?
  3. What gaps do the software/quantitative analyses leave?
  4. What types of qualitative assessments do we need to make, when do we need to make them, on what basis do we make them, and who should make them so that these gaps are adequately filled?

These conversations should also include a crucial element that is usually left out: what level of technological maturity is required to meet certain ethical requirements. To have productive conversations about which ethical risk management goals are achievable with AI, you need to keep an eye on what is technologically feasible for your organization.

The answers to these questions can provide clear guidance for next steps: assess which quantitative solutions can be meshed with product teams’ existing practices, assess the organization’s capacity for the necessary qualitative assessments, and assess how the quantitative and qualitative approaches at work in your organization can be combined effectively and seamlessly.

3) Understand the complex causes of the problems and operationalize solutions.

Many conversations about bias in AI start with examples and move immediately to talk of “biased datasets.” Sometimes this leads to talk of “implicit bias” or “unconscious bias,” terms borrowed from psychology that lack a clear and direct application to biased datasets. But it is not enough to say “the models are trained on biased datasets” or “the AI reflects our historical societal discriminatory actions and policies.”

The problem isn’t that these claims aren’t (sometimes, often) true; it’s that they can’t be the whole picture. Understanding bias in AI requires talking about the different sources of discriminatory outputs. Training data is one of them, but exactly how a dataset is biased matters, if for no other reason than that it informs which bias-mitigation strategy is optimal. Other sources are numerous: how inputs are weighted, where thresholds are set, which objective function is chosen, and so on. In short, the conversation about discriminatory algorithms needs to dig deep into the sources of the problem and how those sources relate to different risk mitigation strategies.
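As a hypothetical illustration of one non-dataset source named above, where the decision threshold is set, the following sketch uses made-up score distributions for two groups. The only point is that a single global cutoff can yield different selection rates for the two groups, and that moving the cutoff changes the size of the gap; all numbers are synthetic and chosen purely for illustration.

```python
# A hypothetical sketch of one non-dataset source of disparity: the decision
# threshold. The score distributions below are synthetic; the point is that a
# single global cutoff can produce different selection rates for two groups,
# and that moving the cutoff changes the size of that gap.
import numpy as np

rng = np.random.default_rng(1)
scores_a = rng.normal(loc=0.55, scale=0.15, size=10_000)  # model scores, group A
scores_b = rng.normal(loc=0.50, scale=0.15, size=10_000)  # model scores, group B

for threshold in (0.40, 0.50, 0.60):
    rate_a = (scores_a >= threshold).mean()
    rate_b = (scores_b >= threshold).mean()
    print(f"threshold {threshold:.2f}: "
          f"selected A = {rate_a:.2f}, B = {rate_b:.2f}, gap = {rate_a - rate_b:.2f}")
```

The same kind of probing applies to how inputs are weighted and which objective function is chosen; each source points toward a different mitigation strategy.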

. . .

Productive conversations about ethics should go deeper than the broad examples cited by specialists and laypeople alike. Your organization needs the right people around the table so that its standards can be defined and refined. It needs to combine quantitative and qualitative approaches to mitigating ethical risk so that it can bridge the gaps between where it is today and where it wants to go. And it should recognize the complexity of the sources of its ethical risks from AI. Ultimately, the ethical risk of AI is not nebulous or theoretical. It is concrete. And it deserves, and demands, attention that goes far beyond repeating chilling headlines.
