Leading your organization to responsible AI

CEOs often live by the numbers—profit, earnings before interest and taxes, shareholder returns. These data often serve as hard evidence of CEO success or failure, but they’re certainly not the only measures. Among the softer, but equally important, success factors: making sound decisions that not only lead to the creation of value but also “do no harm.”

While artificial intelligence (AI) is quickly becoming a new tool in the CEO tool belt to drive revenues and profitability, it has also become clear that deploying AI requires careful management to prevent unintentional but significant damage, not only to brand reputation but, more important, to workers, individuals, and society as a whole.

Legions of businesses, governments, and nonprofits are starting to cash in on the value AI can deliver. Between 2017 and 2018, McKinsey research found that the percentage of companies embedding at least one AI capability in their business processes more than doubled, and nearly all companies that use AI reported achieving some level of value.

Not surprisingly, though, as AI supercharges business and society, CEOs are under the spotlight to ensure their company’s responsible use of AI systems beyond complying with the spirit and letter of applicable laws. Ethical debates are well underway about what’s “right” and “wrong” when it comes to high-stakes AI applications such as autonomous weapons and surveillance systems. And there’s an outpouring of concern and skepticism regarding how we can imbue AI systems with human ethical judgment, when moral values frequently vary by culture and can be difficult to code in software.

While these big moral questions touch a select number of organizations, nearly all companies must grapple with another stratum of ethical considerations, because even seemingly innocuous uses of AI can have grave implications. Numerous instances of AI bias, discrimination, and privacy violations have already littered the news, leaving leaders rightly concerned about how to ensure that nothing bad happens as they deploy their AI systems.

The best solution is almost certainly not to avoid the use of AI altogether—the value at stake can be too significant, and there are advantages to being early to the AI game. Organizations can instead ensure the responsible building and application of AI by taking care to confirm that AI outputs are fair, that new levels of personalization do not translate into discrimination, that data acquisition and use do not occur at the expense of consumer privacy, and that their organizations balance system performance with transparency into how AI systems make their predictions.

It may seem logical to delegate these concerns to data-science leaders and teams, since they are the experts when it comes to understanding how AI works. However, we are finding through our work that the CEO’s role is vital to the consistent delivery of responsible AI systems and that the CEO needs to have at least a strong working knowledge of AI development to ensure he or she is asking the right questions to prevent potential ethical issues. In this article, we’ll provide this knowledge and a pragmatic approach for CEOs to ensure their teams are building AI that the organization can be proud of.

Sharpening and unpacking company values

In today’s business environment, where organizations often have a lot of moving parts, distributed decision making, and workers who are empowered to innovate, company values serve as an important guide for employees—whether it is a marketing manager determining what ad campaign to run or a data scientist identifying where to use AI and how to build it. However, translating these values into practice when developing and using AI is not as straightforward as one might think. Short, high-level value statements do not always provide crystal-clear guidance in a world where “right” and “wrong” can be ambiguous and the line between innovative and offensive is thin. CEOs can provide critical guidance here in three key areas (Exhibit 1).

1. Clarify how values translate into the selection of AI applications.

Leaders must sharpen and unpack high-level value statements, using examples that show how each value translates into the real-world choices that analytics teams make on which processes (and decisions) should be candidates for automation.

We have seen some great examples of companies using “mind maps” to turn corporate values into concrete guidance, both in terms of when to use AI and how. One European financial-services organization systematically mapped its corporate values to AI reputational risks….
