

How to Operationalize Digital Ethics in Your Organization

Published 12 December 2022 – ID G00773494 – 12 min read

By Analyst(s): Frank Buytendijk, Lydia Clougherty Jones, Jim Hare, Svetlana Sicular

Initiatives: Executive Leadership: Innovation and Disruption Management; Data and

Analytics Programs and Practices

Digital ethics is now a mainstream topic. Gartner client inquiries

have moved from “Why should we care?” to “How do I make this

practical for my organization?” Executive leaders can guide their

teams to resist prescriptive checklists and approach digital ethics

with a use-case-by-use-case process.

Overview

Key Findings

■ Ethics are ambiguous, pluralistic and context-specific, rendering the decision-making process for every use case difficult to predict in advance.

■ Ethics are both about intentions (what we want to happen) and consequences (how the plan works out), yet most strategies we see focus only on the intentions side.

Recommendations

Executive leaders responsible for disruption and innovation management should:

■ Avoid creating definitive, all-encompassing and complete digital ethics policies.

■ Task their teams to develop and maintain a digital ethics procedure by creating a use-case-by-use-case process.

■ Learn to trust this procedure and consistently follow it, as the discourse that the digital ethics process triggers leads to the right results.

Strategic Planning Assumption

By 2025, ethical reviews will be as common as privacy reviews, eliminating all excuses for irresponsible use of data and technology.


Introduction

Super! Everyone is convinced that a responsible, transparent and intentional use of data and technology is a serious matter and requires constant attention. As a result, digital ethics is put on the agenda. But now what? How? This research describes a set of best practices that has been implemented by organizations in both the public sector and commercial enterprises. First, we describe an approach that is often tried but fails. Then, we show a more successful, four-step approach (see Figure 1).

Figure 1: Four-Step Process for Digital Ethics Implementation

Analysis

Don’t Attempt to Create a Comprehensive Digital Ethics Policy

It has been tried many times, particularly in heavily regulated industries, the public sector and more formal organizations. A working group studies other organizations, conducts interviews and comes up with a good set of principles for digital ethics. A small team is then ready to turn these principles into a policy that is rolled out, so that all stakeholders know exactly what is expected of them and how to decide in advance on choices given specific potential dilemmas.

This approach invariably fails. In one case, we saw a document that was over 120 pages

long, before the team gave up. Why? For a number of reasons:

■ Ethics are ambiguous: How values are weighted and applied to each individual circumstance may vary. Cases are often not clear-cut right or wrong; there is a lot of "gray area" in between.

■ Ethics are pluralistic: There are multiple, sometimes contradicting, schools of thought on how to determine what is right or wrong.

■ Ethics are context-dependent: Even a small difference between use cases can lead to entirely different outcomes.


Moreover, a concrete policy, while trying to provide the psychological comfort of certainty, is often positioned or intended as a universal checklist. Checklists lead to a checklist mentality: all the boxes are ticked and the team has complied, so we must be doing the right thing now, we might think. But that prevents us from seeing other perspectives and varying context, which creates the risk of doing the wrong thing instead of limiting or avoiding risk.

Universal checklists are unattainable, and policies based on them often silence awareness

when you most need it. They dampen the ethical dialogue in your mind that questions

whether you are doing the right thing.

Embrace the uncertainty in the process, especially when it raises further questions instead of giving a concrete answer. The little voice in the back of your mind asking whether you are doing the right thing is not to be silenced by a checklist or policy; it is to be embraced. It is trying to protect you from doing the wrong thing.

Of course, there are best practices that can be prescribed, particularly when it comes to regulatory compliance. But more complex conversations, such as those around ethics and risk management, creating a value proposition or linking ethics to the values of the organization, quickly become too context-dependent to fit in a policy.

So, what is a better approach?

Create a Digital Ethics Process on a Use-Case-by-Use-Case Basis

A better way to implement digital ethics, as we have seen across industries and regions, is by creating a four-step process to be followed for each use case as it occurs.

Step 1: Define Your Principles or Values

Many organizations have determined and documented their principles or values for digital

ethics. Figure 2 shows an example of the digital values of the city of Utrecht in The

Netherlands, translated into English.



Figure 2: Digital Values of City of Utrecht, The Netherlands

Adapted From City of Utrecht

The two most common specific areas of technology where organizations have defined

their principles or values are data and artificial intelligence (AI). Across industries and

regions, these principles tend to be remarkably similar.

For AI, these are:

■ AI should be human-centric and socially beneficial.

■ AI should be fair.

■ AI should be explainable and transparent.

■ AI should be secure and safe.

■ The accountability for AI should be clear.


(See AI Ethics: Use 5 Common Principles as Your Starting Point.)

For data, common principles are:

■ People should have control over their personal data.

■ The use of data should be transparent.

■ Privacy and security should be taken care of.

■ Data should be used for legitimate purposes only.

■ Data should be handled with skill and competence.

Be as open and vocal as possible about these principles. At the very least, make them well-known within your organization. Even better, publish them on your website, and feel what being accountable for them means. After all, that is what principles are for!
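Where principles live only in a document, they can be hard to reference consistently in later review steps. As a minimal sketch, assuming you want the principles in machine-readable form so that use-case reviews can cite them, the Python below defines a small register; the Principle structure, the ids and the lookup helper are illustrative assumptions, not a prescribed format.

```python
# A hypothetical, machine-readable register of digital ethics principles.
# The ids and field names are illustrative assumptions; populate the list
# with your own organization's principles from Step 1.
from dataclasses import dataclass

@dataclass(frozen=True)
class Principle:
    id: str         # short handle that later review steps can cite
    statement: str  # the principle as published

PRINCIPLES = [
    Principle("ai-human-centric", "AI should be human-centric and socially beneficial."),
    Principle("ai-fair", "AI should be fair."),
    Principle("ai-explainable", "AI should be explainable and transparent."),
    Principle("data-control", "People should have control over their personal data."),
    Principle("data-transparent", "The use of data should be transparent."),
]

def lookup(principle_id: str) -> Principle:
    """Resolve a principle id, e.g., when a use-case review cites it."""
    return next(p for p in PRINCIPLES if p.id == principle_id)
```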

Step 2: Operationalize Your Principles or Values

Instead of a singular checklist policy, create a coordinated, repeatable review process for individual use cases. This may come across as too elaborate, creating "yet even more hoops to jump through," but with growing experience, this process can often be completed in 1 to 1.5 hours.

The review process consists of three steps:

1. Determine which values or principles are relevant for this particular use case.

2. Define which underlying dilemmas play a role for each of these values or principles.

3. Discuss how you can resolve these dilemmas or even improve the values at seeming

odds (for more on dilemmas, see How to Manage Digital Ethics Dilemmas).
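The outcome of such a review can be captured as a structured record, so that it can be signed off, audited and later searched. The sketch below builds on the hypothetical principle register above; the UseCaseReview and Dilemma structures are illustrative assumptions, not a schema this research prescribes.

```python
# A hypothetical record of one use-case review, mirroring the three steps:
# which principles are relevant, which dilemmas they raise, and how each
# dilemma was resolved. All names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Dilemma:
    principle_id: str  # which principle is at stake (see the register above)
    description: str   # the tension, e.g., personalization vs. privacy
    resolution: str    # how the team resolved or mitigated it

@dataclass
class UseCaseReview:
    use_case: str
    reviewed_on: date
    relevant_principles: list[str]                         # review step 1
    dilemmas: list[Dilemma] = field(default_factory=list)  # review steps 2 and 3
    signed_off_by: str | None = None                       # manager sign-off

review = UseCaseReview(
    use_case="Chatbot for benefits applications",
    reviewed_on=date(2022, 12, 12),
    relevant_principles=["ai-explainable", "data-transparent"],
    dilemmas=[Dilemma(
        principle_id="ai-explainable",
        description="Faster automated answers vs. the ability to explain each outcome",
        resolution="Offer a human alternative for contested answers",
    )],
)
review.signed_off_by = "Line manager"  # confirms the review was taken seriously
```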



The review process can take place on three levels. First, the project team itself carries out the review process, which is signed off by the manager, who checks whether the review was taken seriously. The digital ethics advisory board has access to the library of reviews, to check on the rigor and validity of particular reviews. For this, the project teams need some training (see Tool: How to Build a Digital Ethics Curriculum, and the callout below).

Advanced Way of Training People in Digital Ethics

Some organizations have instituted a training, communication and policy program based on cases. These cases can be real cases from the organization, cases that occurred in other organizations, or plausible but fictional ones. Each case story features a person (for example, "Katherine") who discovers something, recognizes a dilemma and discusses the situation with a few others, discovering multiple angles. Each story ends with the same question: "What would you do?" All involved stakeholders are invited to comment. In the end, there is a process of seeking reconciliation of all points of view.

Among the multiple advantages of this approach (over writing a comprehensive, top-down policy), it:

■ Creates a better understanding of dilemmas

■ Shows how involving multiple perspectives leads to better outcomes

■ Builds institutional knowledge on how to deal with dilemmas

The collective of these cases ultimately forms a bottom-up, case-based digital ethics policy.

Next, for dilemmas that are broader than an individual use case or for particularly

complex dilemmas, a common best practice is to have a digital ethics advisory board (see

callout below). The advisory board provides the project team with recommendations on

how to proceed with their use case responsibly.

Creating a Digital Ethics Advisory Board



One best practice is to institute a digital ethics advisory board. It is important that it is an advisory board, rather than an authoritative decision-making committee. Project teams and line managers should feel in control of their project and actively involve the advisory board; "asking for a ruling or permission" would create an unnecessary barrier. Advisory boards have told us that their recommendations are essentially always followed, which proves their effectiveness.

The advisory board should have a diverse composition, consisting of people from different domains (such as legal, operations, IT and marketing), and with different cognitive problem-solving styles and perspectives. Be careful about having too many executives join the advisory board, as the hierarchy involved may lead to less open discussions.

Some organizations, particularly technology firms and public sector organizations, also involve external people in their digital ethics advisory board. The advisory board helps project teams phrase the right dilemmas and deal with them.

Finally, some decisions are so impactful (for example, around the use of biometric data in

the workplace), that they require executive-level attention. In this case, the review needs to

be discussed and approved by executive management.

Ethics are about intent and consequence. With the best of intentions, plans sometimes work out horribly, and the result is not positive. And sometimes actions with bad intentions work out well, but that does not make the result an inherently ethical one. Most of the plans we review focus too much on the intent side, trying to make sure you are doing the right thing upfront. But managing and monitoring consequences is equally important. Hence, a digital ethics implementation requires an additional two steps.

Step 3: Monitor for Unintended Consequences

The first two steps of the process help you think through the consequences of your actions to a reasonable extent, but there are often unintended consequences in the use of technology or data. Machine learning (ML) can take a model in undesirable directions, introducing all kinds of undesired or uncontrolled bias. Data may be used outside of the original purpose boundaries. The main risks in the use of AI models include data drift, model drift, scope and function creep, and overreliance on a product that is insufficiently monitored. People may respond unfavorably to new digital security measures and try to avoid them. Examples include duplicating data onto unsecured, unsanctioned or uncontrolled platforms and devices, or using nonconfidential platforms to exchange confidential information.

Monitor continuously for these unintended consequences:


■ The digital ethics advisory board should routinely check with project teams on:

  - How their initiative is going

  - What monitoring actions are consistently being undertaken

  - To what extent the project continues to perform within the original intention and boundaries

  - How any inadequacies found are remediated

■ Interact with your systems directly yourself, and try to exploit them for all kinds of undesirable results (a form of "white hat" hacking) in a controlled environment. Contract specialists to do so periodically. Consider operating a responsible disclosure policy that allows the general public to do the same.

■ In your automated customer interactions driven by chatbots, you may want to introduce a button that people can click if they feel something is not going right. In several jurisdictions, it is mandatory to indicate the use of automated or intelligent response systems like chatbots, and a human alternative must be offered to prevent frustrating loopholes in case of error.

■ Numerous data protection regulations worldwide grant people the right to ask for nonautomated processing of their request, or a human review of an automated process (e.g., the right not to be subjected to automated decision making).

■ While operating an AI model, deploy techniques that continue to validate its functioning and outcomes for desirability. Explainable AI or explainability technologies can provide granular control and insights here to prevent a model from drifting. Create a process for automated testing, and retest your predictive models every month for "model drift" (see the sketch after this list). In model outcomes, make sure to differentiate between what might be unexpected (yet perfectly explainable) and what is undesired (missing the original intent). Learn from the former; always remediate the latter immediately. For more information, see Market Guide for AI Trust, Risk and Security Management.

■ From a security perspective, monitoring adherence to security policies is a primary indicator of where requirements may have led to deviating or risky employee behavior. Monitor potential access to or usage of company data to ensure no data is used in unauthorized situations, internally or externally.
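As one concrete way to implement the monthly drift retesting mentioned above, the sketch below compares the distribution of a model's prediction scores at deployment time against the current month's scores, using the population stability index (PSI), a common drift heuristic. The 0.2 alert threshold and all function names are illustrative assumptions, not values this research prescribes.

```python
# A minimal monthly "model drift" check: compare the baseline distribution
# of prediction scores (captured at deployment) with this month's scores.
# PSI binning, the 1e-6 floor and the 0.2 threshold are illustrative choices.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Higher PSI means the score distribution has shifted more."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions so empty bins do not produce log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

def monthly_drift_check(baseline_scores, current_scores, threshold=0.2):
    psi = population_stability_index(baseline_scores, current_scores)
    if psi >= threshold:
        # In practice: open a ticket for the digital ethics review process.
        print(f"ALERT: PSI={psi:.3f} exceeds {threshold}; investigate model drift.")
    else:
        print(f"OK: PSI={psi:.3f} is within tolerance.")
```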


Step 4: Take Responsibility for Unintended Consequences

In project-based organizations, the dedicated project team is often dissolved after completing a project. Currently, centers of excellence are popular, where accumulated resources and lessons learned remain available. Make sure there is an escalation process in place for when an unintended consequence occurs. For instance, make sure that:

■ You have the skills to detect different types of bias in AI, and know how to retrain a predictive model.

■ The legal department and the privacy office outline detailed guidance, and information and technology security teams can help deter or correct unauthorized usage of data.

■ Specialists in other departments are aware of the sensitive nature of using digital technologies, and prioritize issues when you need to draw them in.
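The escalation path itself can also be made explicit. As a sketch, assuming the three levels described under Step 2 (project team, advisory board, executive management), the routine below routes a reported unintended consequence by severity; the Severity levels and routing rules are illustrative assumptions.

```python
# A hypothetical escalation router for unintended consequences, mirroring
# the three levels from Step 2: project team, advisory board, executives.
from enum import Enum

class Severity(Enum):
    LOW = 1     # e.g., caught internally in testing, no real-world harm yet
    MEDIUM = 2  # e.g., a broad or complex dilemma beyond one use case
    HIGH = 3    # e.g., impact on individuals or groups in the public domain

def escalate(issue: str, severity: Severity) -> str:
    """Return who should handle a reported unintended consequence."""
    if severity is Severity.HIGH:
        return f"Executive management: review and approve the response to {issue!r}"
    if severity is Severity.MEDIUM:
        return f"Digital ethics advisory board: advise the project team on {issue!r}"
    return f"Project team: remediate {issue!r} and record it in the review library"

print(escalate("data reused outside its original purpose", Severity.MEDIUM))
```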

More generally, realize that when an error occurs as an unintended consequence in testing, model training, development and similar (internal) stages, you have an early chance to remedy it before it leads to any real-world harm. When unintended consequences occur in the public domain, in operations, however, they generally have an immediate impact on an individual or group. Rather than staying silent or trying to ignore whatever just happened, demonstrate that active monitoring allows you to engage and correct quickly and adequately. Intervene decisively and correct errors at the earliest possible moment, as any delay or failure to do so often leads to greater damage at scale.

Finally, keep in mind that no matter how well-intended a development or architecture may

have been, there should always be a threshold where you must consider temporarily

pausing an activity, or pulling the plug. For more, see What Executives Need to do to

Support the Responsible Use of AI.

Learn to Trust the Process

The digital ethics process triggers the right discussions with the right stakeholders. Ethics are ambiguous, pluralistic and context-sensitive, so the range of potential outcomes is potentially as diverse as the number of use cases. But as long as the process is consistently followed, it will very likely lead to the right outcome for each individual use case.



Digital ethics is a muscle that you can train. Over time, the number of "edge cases" will decrease, and even where the context differs, you can leverage your existing base of reviews more and more. Keep track of all your use case reviews, and make them searchable and categorized, so you can refer to them and make sure that the process followed in similar cases leads to similar results.
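As a minimal sketch of such a searchable library, the Python below tags each past review with categories and matches new use cases against them; the ReviewEntry fields and tag-based search are illustrative assumptions, not a prescribed design.

```python
# A hypothetical, searchable library of past use-case reviews, so teams
# can check how similar cases were handled. Tags and matching are kept
# deliberately simple; these names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ReviewEntry:
    use_case: str
    categories: set[str]  # e.g., {"chatbot", "biometrics"}
    summary: str          # dilemmas found and how they were resolved

@dataclass
class ReviewLibrary:
    entries: list[ReviewEntry] = field(default_factory=list)

    def add(self, entry: ReviewEntry) -> None:
        self.entries.append(entry)

    def search(self, *categories: str) -> list[ReviewEntry]:
        """Find prior reviews sharing any of the given category tags."""
        wanted = set(categories)
        return [e for e in self.entries if e.categories & wanted]

library = ReviewLibrary()
library.add(ReviewEntry(
    use_case="Chatbot for benefits applications",
    categories={"chatbot", "automated-decisions"},
    summary="Offered a human alternative; resolved an explainability dilemma.",
))
for match in library.search("chatbot"):
    print(match.use_case, "->", match.summary)
```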

Evidence

Digital ethics is now a mainstream topic. See Hype Cycle for Artificial Intelligence, 2022.

Recommended by the Authors

Some documents may not be available as part of your current Gartner subscription.

How to Manage Digital Ethics Dilemmas

Activate Responsible AI Principles Using Human-Centered Design Techniques

Tool: Assess How You Are Doing With Your Digital Ethics

Tool: How to Build a Digital Ethics Curriculum

AI Ethics: Use 5 Common Principles as Your Starting Point

Every Executive Leader Should Challenge Their Teams on Digital Ethics

© 2023 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of

Gartner, Inc. and its affiliates. This publication may not be reproduced or distributed in any form

without Gartner's prior written permission. It consists of the opinions of Gartner's research

organization, which should not be construed as statements of fact. While the information contained in

this publication has been obtained from sources believed to be reliable, Gartner disclaims all warranties

as to the accuracy, completeness or adequacy of such information. Although Gartner research may

address legal and financial issues, Gartner does not provide legal or investment advice and its research

should not be construed or used as such. Your access and use of this publication are governed by

Gartner’s Usage Policy. Gartner prides itself on its reputation for independence and objectivity. Its

research is produced independently by its research organization without input or influence from any

third party. For further information, see "Guiding Principles on Independence and Objectivity."
