How Merck created its own digital code of ethics

The advent of AI has been accompanied by a growing awareness of the potential of digital technology to cause harm, whether through discrimination or privacy violations. Policy makers are still debating how to address these harms but, in the absence of laws, some institutions have taken an ethical approach, defining principles of ethical behavior and committing to them – with varying degrees of credibility.

Merck Group, a German science and technology company best known for its work in the pharmaceutical field, is among the few organizations that have developed and implemented their own digital code of ethics. Drawing on its experience in bioethics, Merck has defined a set of ethical principles to guide digital innovation, appointed a Digital Ethics Advisory Board, and is now putting the code into practice. It’s an approach to technology governance that others can choose to follow, even once regulation comes into force.

Merck established a Bioethics Advisory Board to address ethical questions around stem cell research; now the company is tackling AI and digital ethics. (Photo by © Merck KGaA, Darmstadt, Germany)

Developing an AI Code of Ethics at Merck

In 2011, Merck established a Bioethics Advisory Board to help answer ethical questions about its use of stem cells. The panel draws on established bioethical principles, such as those set out in Beauchamp and Childress’s Principles of Biomedical Ethics, to guide Merck’s innovation in areas that pose ethical risks, such as genome editing.

An ethics-based approach enables companies to reduce the risk of harm when regulation has not yet caught up with technology, Merck ethicists recently wrote. “Furthermore, many ethical ‘should’ questions go beyond the scope usually provided for in legal regulations, which primarily provide practitioners with answers to ‘could’ questions.”

Ethical questions have since emerged around the use of data and AI in medical research, which often incorporates highly sensitive patient data. For Merck, these questions came to a head in 2019, when it began developing digital health solutions, including Syntropy, a cancer research joint venture with controversial U.S. data analytics provider Palantir. “We thought, ‘Maybe we need an ethical framework for this new type of business model and collaboration,’” recalls Jean-Enno Charton, director of digital ethics and bioethics at Merck.

First, the company consulted its bioethics panel. Its advice was twofold: first, that digital ethics requires specialist expertise; and second, that Merck’s priority should be to foster trust among patients and other external stakeholders. “We must fight against the mistrust that has accumulated” around digital technology, explains Charton.

Merck struggled, however, to find digital ethicists. The company appointed a Digital Ethics Advisory Board, made up of experts in technology, regulation and other relevant fields, but decided that an ethical framework was needed to guide the group’s work. “The panel needs an idea of what is and is not ethical,” Charton says.

To do this, Charton and his team analyzed the many ethical AI frameworks that have been developed by regulators, trade bodies and other institutions, settling on 42 that they considered relevant to Merck. Interestingly, Merck focused only on frameworks from Europe – partly to save time, Charton says, but also because “the European discussion on data and AI ethics is much more advanced.”

Transparency is the only principle mentioned in all the digital ethics frameworks we analyzed.
Jean-Enno Charton, Director of Digital Ethics and Bioethics at Merck

From these frameworks, Merck extracted five fundamental ethical principles for digital innovation. Four of them – justice, autonomy, beneficence (doing good) and non-maleficence (doing no harm) – correspond to the core principles of bioethics. The fifth – transparency – is of particular importance in the context of digital ethics, Charton explains. “Transparency is the only principle that was mentioned in all the digital ethics frameworks we analyzed, so it is clearly a major issue in itself,” he says. “It’s about building trust, which is the biggest issue in digital ethics.”

These fundamental principles, and the values that make them up, were then translated into a series of guidelines. These are designed to be applicable in any context and to be understood by all – for example, “We uphold justice in our digital offerings” and “We assign clear responsibilities for our digital offerings”. These guidelines form Merck’s Digital Code of Ethics (CoDE).

Putting AI Ethics into Action: Implementing Merck CoDE

However, a code of ethics is worth nothing if it is not respected and applied within an organization.

Charton began the rollout of the CoDE in 2020 by engaging senior decision-makers from the data and digital functions. “The first thing I had to do was convince all of our stakeholders that this was something they needed,” he recalls. “Some people disputed it; some of them needed to be convinced that this was not going to be an obstacle to innovation. We explained how bioethics has helped the business, establishing a framework for innovation that helps you think about the consequences of your work.”

Next, Charton presented the CoDE to Merck’s board of directors. “I knew there was no way to get this adopted without approval from our board,” he says. But with senior stakeholders already on side, the code won the directors’ approval when they consulted their teams.

As a result, the CoDE became one of the company’s four “charters”, the highest status a document can hold at Merck. This means it applies to all employees and can be discussed publicly, promoting transparency and accountability.

Charton has implemented a process for employees to flag projects that may present ethical risks. Once flagged, a project goes through a “principles at risk” exercise – a checklist of questions that examines whether it risks violating the CoDE. If so, it is reviewed by the Digital Ethics Advisory Board, which provides advice to the project’s business owner.
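Merck has not published the checklist itself, but a minimal sketch of how such a “principles at risk” triage might be modelled is shown below. The questions, the Project class and the escalation rule are all illustrative assumptions, not Merck’s actual process.

```python
from dataclasses import dataclass, field

# The five CoDE principles named in the article. The questions are invented
# for illustration only; they are not Merck's actual checklist.
CHECKLIST = {
    "justice": "Could the offering treat some user groups less favourably than others?",
    "autonomy": "Can users give (and withdraw) meaningful consent?",
    "beneficence": "Is the intended benefit to patients or users clearly defined?",
    "non-maleficence": "Could errors or misuse of the system cause harm?",
    "transparency": "Can the system's data use and decisions be explained to those affected?",
}

@dataclass
class Project:
    name: str
    # Answers keyed by principle: True means "yes, this principle is at risk".
    answers: dict = field(default_factory=dict)

def principles_at_risk(project: Project) -> list:
    """Return the CoDE principles that a flagged project may put at risk."""
    return [p for p, at_risk in project.answers.items() if at_risk]

def needs_board_review(project: Project) -> bool:
    """Escalate to the Digital Ethics Advisory Board if any principle is at risk."""
    return bool(principles_at_risk(project))

if __name__ == "__main__":
    demo = Project("patient-data analytics pilot",
                   answers={"transparency": True, "autonomy": False})
    if needs_board_review(demo):
        print(f"Escalate '{demo.name}': principles at risk ->", principles_at_risk(demo))
```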

This process is not mandatory for all projects, lest it become a tick-box exercise. “If we were to make it a mandatory part of the development process, we would be losing people,” Charton says. This means the process depends on stakeholder awareness and engagement. “I make sure everyone knows who I am and that they can talk to me about ethical issues.”

Nor is the Digital Ethics Advisory Board’s advice binding. “A business leader can ignore it, but they would have a hard time justifying that decision,” explains Charton. If a project is retrospectively found to have violated the CoDE, “there will be an internal learning process to ensure it doesn’t happen again.”

Charton is now developing basic CoDE training for all employees, along with a dedicated course for teams working with data and algorithms. He is also exploring the possibility of automating the evaluation of ethical risks. He has developed “digital ethics checkpoints” that can be applied to new products in development, and is examining how they could be integrated with Palantir’s Foundry data platform, which Merck uses for analytics, so that ethical risks can be flagged automatically and proactively.
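Neither the checkpoints nor the Foundry integration have been described in technical detail, so the sketch below is purely speculative: a rule-based checkpoint that inspects a dataset’s metadata before a pipeline step runs. The DatasetMeta fields, the checks and the flagged principles are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class DatasetMeta:
    """Hypothetical metadata a pipeline might record about a dataset."""
    name: str
    contains_patient_data: bool
    consent_documented: bool
    declared_purpose: str  # why the data is being processed

def ethics_checkpoint(meta: DatasetMeta) -> list:
    """Return warnings where the metadata suggests a CoDE principle is at risk.

    These rules are placeholders; a real checkpoint would be defined with
    Merck's ethicists and hooked into the data platform's pipeline stages.
    """
    warnings = []
    if meta.contains_patient_data and not meta.consent_documented:
        warnings.append(f"{meta.name}: patient data without documented consent (autonomy)")
    if not meta.declared_purpose:
        warnings.append(f"{meta.name}: no declared purpose of processing (transparency)")
    return warnings

if __name__ == "__main__":
    flags = ethics_checkpoint(DatasetMeta(
        name="oncology_cohort_v2",
        contains_patient_data=True,
        consent_documented=False,
        declared_purpose="",
    ))
    for warning in flags:
        print("ETHICS FLAG:", warning)
```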

Next steps for putting AI ethics into practice

Ethical AI frameworks backed by governments or industry bodies have become widespread, says David Barnard-Wills, senior research director at Trilateral Research, a consultancy focused on the social impact of technology. A few years ago, one of the company’s projects identified 70 such frameworks; more recent studies have found hundreds.

Codes of ethics for individual organizations are less common. Existing frameworks tend to cluster around the same core principles, Barnard-Wills says, so developing them for just one organization could be a wasted effort. But “I can see why you would want to develop [a code] yourself because you can make it very specific to your business and the issues it faces,” he says.

Senior management buy-in is critical to the success of any internal ethics initiative, says Barnard-Wills. “If you don’t have a leadership culture that champions [the code] and makes tough decisions based on it, or if it is constantly ignored in the pursuit of profit, then the code is meaningless.”

You can think of a code of ethics like any business change process…there must be organizational roles and responsibilities.
David Barnard-Wills, Trilateral Research

“You can think of a code of ethics like any business change process,” he adds. “It’s not just a statement or a vision; there must be organizational roles and responsibilities.”

While Merck has so far refrained from making ethical risk assessments mandatory for all projects, Trilateral Research is examining ways to embed ethics into software development. These include appointing ethicists to development teams and including ethical considerations in the requirements capture phase.

A future step could be an “ethical design repository,” says Barnard-Wills. “Design choices at the product or feature level can make a huge [ethical] difference, down to the way a text is presented to a user,” he says. The way a privacy policy is presented, for example, can determine whether an individual can be considered to have actually consented to their data being used in a certain way. A repository of worked examples could help developers quickly and easily incorporate ethical design, suggests Barnard-Wills.

If the current wave of AI ethics frameworks reflects a “regulatory gap”, will they become redundant when regulations such as the EU’s AI Act come into force? Barnard-Wills thinks not. “Ethical commitments will always be important,” he says, “because the law will never cover every edge case.”

For Merck, the development of the CoDE is just the beginning of its digital ethics initiative. One of the reasons for making the document public, Charton says, is so that it can be openly debated. “We want people to read it and talk about it, and if they report something wrong, we can update it,” he explains. “It’s not set in stone – ethics always follows progress.”

Pete Swabey is editor of Tech Monitor.
