Artificial Intelligence and Reputation Risk Management

This piece was originally published in the Journal of Business Strategy, Vol. 39 Issue: 1, pp.61-64.

The explosion of machine learning in the overall context of Artificial Intelligence (AI) is transforming the relationship between humans and machines in ever more pervasive ways. At the same time, expanding concerns about the use of social media and search to influence the US elections are shining a new light on the algorithms used to deliver targeted content via Facebook and Google and the data about every aspect of human behavior that these platforms are harvesting. This scrutiny has prompted a massive campaign by these and other social media giants to address public concerns about how they are selling personal data and to whom. It is clear that we need a new regulatory regime covering data capture, storage, usage, sale and deletion, as well as machine autonomy. These troubling issues are only the tip of a vast iceberg that covers everything from ethics in autonomous vehicle use to the prospect of autonomous warfare.

The hysteria about killer drones has even caused Elon Musk, the technology entrepreneur, and Alphabet’s Mustafa Suleyman to bring together 116 experts from 26 countries to call preemptively for an outright ban on autonomous weapons. In the context of this dark foreboding, we wanted to identify some positive applications of AI across different industry sectors. Given that crises of corporate ethics are almost always preceded by patterns of behavior and communication signaling that systems are veering off track, is there something we can learn from AI to put to use even in the day-to-day defense of corporate reputation?

One of the main reasons for AI’s negative reputation is simple ignorance, but legislators and regulators around the world are finally beginning to come to grips with AI, the use of which has accelerated in almost every field of human endeavor. Representative John Delaney, a Democrat from Maryland, launched a bipartisan caucus on AI in May 2017, and the UK Parliament’s Select Committee on AI published a call for evidence that July. In early 2017, Members of the European Parliament called on the European Commission to devise rules creating the new status of “electronic persons” to strengthen liability laws, and businesses are already grappling with the ramifications of the EU’s General Data Protection Regulation, due to take effect on 25 May 2018.

Amidst all of this frenzy, driven largely, we suspect, by too many science fiction films, it is hard to find anyone willing to have a balanced discussion about the ways in which big data and machine learning can be put to good use. One of the biggest challenges to harnessing AI for beneficial purposes, in an environment in which AI has become the universal bogeyman, is a lack of clarity about exactly what AI is. Fortunately, we can now turn to Stanford University’s “One Hundred Year Study on Artificial Intelligence” for good answers to this question. The study, which will issue reports every five years, describes the current state of play in AI as follows:

  • large-scale machine learning: algorithms designed to work with, and scale to, ever larger data sets;
  • deep learning: a variant of machine learning that is expanding the boundaries of object recognition, video labeling and audio, speech and natural language processing;
  • reinforcement learning: machines moving on from mere pattern recognition to experience-based decision-making;
  • robotics;
  • computer vision: reading X-rays, for example, more accurately than radiologists;
  • natural language processing: the ability to interact with people through real dialogue, not stylized scripts;
  • collaborative systems: autonomous systems working with each other and with humans;
  • algorithmic game theory and computational social choice: this area focuses on the social and economic aspects of AI and tries to model potential misaligned incentives or conflicts of interest, whether machine- or human-driven; and
  • the internet of things: sensors drawing data from every device we use, from power plants to toothbrushes.

Even this very high-level description of AI makes it clear how powerful and pervasive a force it has become. We are already seeing its benefits in cyber security, where traditional firewall management is being replaced by machine learning. As Nicole Egan, the chief executive officer of a cyber security firm, describes it, machine learning helps define what is “normal” in any network and all of its connected devices, so that the system can report on abnormalities and deviations in real time. She argues that this is the only effective way to counter new and unknown cyber threats.
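
To make the approach concrete, here is a minimal sketch of this kind of anomaly detection using scikit-learn’s IsolationForest. The traffic features, numbers and thresholds are our own illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch of network-behavior anomaly detection. Feature names and
# values are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical per-device traffic features: [bytes_out_kb, connections, distinct_ports]
normal_traffic = rng.normal(loc=[500, 20, 5], scale=[50, 4, 1], size=(1000, 3))

# Learn what "normal" looks like for this network and its devices.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new observations in (near) real time; a prediction of -1 flags a deviation.
new_events = np.array([
    [510, 21, 5],      # ordinary traffic
    [50000, 300, 90],  # sudden exfiltration-like spike
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{event} -> {status}")
```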

AI is also transforming agriculture in positive ways, enabling farmers to use data analytics on soil and weather to maximize crop yields by understanding microclimates and weather patterns down to the individual acre. In healthcare, chat bots form the leading edge of AI, with natural language processing, knowledge management and sentiment analysis becoming critical tools in assessing patients and their needs to improve care and patient satisfaction. Predictive modeling is also being used in spine surgery to drive strategic decisions and enable personalized surgery.
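
By way of illustration, a per-acre yield model of the kind described might, in its simplest form, look like the sketch below. The features, readings and yields are invented for the example; real systems ingest sensor and satellite feeds at far higher resolution.

```python
# Toy sketch of per-acre crop-yield prediction from soil and weather features.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-acre records: [soil_moisture_pct, avg_temp_c, rainfall_mm]
features = np.array([
    [22, 18, 80],
    [30, 21, 120],
    [15, 25, 40],
    [28, 19, 100],
])
yields = np.array([3.1, 4.0, 2.2, 3.7])  # tonnes per acre (invented)

model = LinearRegression().fit(features, yields)

# Predict the yield for one acre's current microclimate reading.
print(model.predict([[25, 20, 90]]))
```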

Another unexpected use of AI is in combating child exploitation, which technology has also facilitated in many different ways (Stone et al., 2016). The National Center for Missing and Exploited Children (NCMEC) received 8.2 million reports to its CyberTipLine in 2016. Today, AI is transforming its ability to respond to these reports, scanning for suspicious content, storing massive volumes of data and running a variety of queries to uncover malicious actors and find children. As NCMEC’s Chief Executive Officer told Intel: “We’re still in the initial phases, but results so far promise to reduce the typical 30-day turnaround time (to handle a report) to just a day or two. And for a child in a vulnerable situation those 28 or 29 days can literally be a lifetime.”

What then of the potential use of AI in reputation risk management? There are a number of ways in which AI could be deployed in this field, but the most important use is to create a kind of predictive analytics to identify ethical problems inside an organization that could lead to reputational damage if left unchecked. Large organizations already deploy sophisticated cyber threat detection based on continuous real-time surveillance of information technology systems and the content received from outside the network and generated inside the network by employees. We propose the creation of software tools based on machine learning that could scan employee email communications at scale for keywords linked to various kinds of ethical breaches. The now immense archive of email traffic made public in legal discovery involving fraud, price-fixing and other bad corporate behavior could be subjected to machine learning programs to identify patterns of words and the types of communications associated with ethical misjudgments and criminal behavior. We believe that these software tools will be able to detect problems long before they turn into major corporate crises.
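
As a rough illustration of what such a tool might look like, the sketch below trains a toy text classifier to score messages for language associated with past breaches. The corpus, labels and phrases are invented placeholders; a production system would be trained on the large discovery archives described above and tuned with compliance experts.

```python
# Illustrative sketch of the proposed email-scanning tool. All training
# examples and labels here are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: 1 = language associated with past ethical breaches, 0 = benign.
emails = [
    "delete these files before the audit next week",
    "keep this off the books and between us",
    "quarterly report attached for your review",
    "team lunch scheduled for friday at noon",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(emails, labels)

# Score new traffic; high-risk messages would be routed to compliance review.
incoming = ["please delete the files and keep this between us"]
risk = classifier.predict_proba(incoming)[0][1]
print(f"Breach-risk score: {risk:.2f}")
```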

Undoubtedly, some critics will object to any such process as an invasion of privacy, and there would indeed need to be clear rules about how a company handles insights generated in this way. However, it is already well-established law that companies and other organizations have considerable freedom to monitor and review all communications traffic on their own networks and devices. Indeed, automated monitoring of email correspondence is already common in the financial services industry to detect insider trading and other illegal activities. We believe, however, that the use of machine learning will take the predictive power of this surveillance to a much higher level of effectiveness.

Another aspect of AI that could be used in reputation risk management is algorithmic game theory and computational social choice, originally developed to analyze the behavior of groups of players with different and conflicting incentives. The reason this could be a powerful tool inside corporations is that bad behavior by employees and managers can often be tied to financial incentives. In the case of a recent banking scandal, for example, employees opened thousands of fake accounts and issued thousands of credit cards in customers’ names for a simple reason: their compensation was directly tied to the volume of new accounts they generated. In the case of cheating on diesel emissions tests by a car manufacturer, the root cause was the inability of engineers to achieve the desired reduction in emission levels within the company’s timeframe or budget.

Each of these examples reflects all-too-human responses to the pressures of performance, and those responses are eminently predictable. Using algorithmic game theory, it should be possible to stress test major business decisions in advance to determine whether any incentives for unethical behavior have inadvertently been embedded in the process. It could be argued that such a scheme represents the outsourcing of morality to a machine, but in reality it simply represents the use of our most sophisticated tools to protect ourselves against the human weaknesses of fear and greed. By understanding the potential ethical implications of business decisions before they are enacted, companies have an opportunity to modify those decisions to eliminate the vulnerability, or to build in additional monitoring and other precautions to ensure that the predicted ethical lapses do not occur.
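
A minimal sketch of what such a stress test might look like follows: it models an employee’s expected payoff from opening fake accounts under a volume-based bonus scheme. The payoff numbers are invented assumptions of our own; the point is the comparison, not the values. If the expected payoff rises with the number of fake accounts, the incentive design itself should be flagged for revision.

```python
# Hedged sketch of stress-testing an incentive scheme before rollout.
# All payoff parameters below are invented for illustration.

def expected_payoff(fake_accounts: int, bonus: float,
                    detection_prob: float, penalty: float,
                    genuine_accounts: int = 100) -> float:
    """Expected payoff of opening `fake_accounts` on top of genuine ones."""
    reward = bonus * (genuine_accounts + fake_accounts)
    expected_penalty = detection_prob * penalty * fake_accounts
    return reward - expected_penalty

# Compare a weak and a strong enforcement regime.
for detection_prob in (0.05, 0.5):
    payoffs = [expected_payoff(n, bonus=10, detection_prob=detection_prob,
                               penalty=50) for n in (0, 20, 50)]
    cheating_pays = payoffs[-1] > payoffs[0]
    print(f"p(detect)={detection_prob}: payoffs={payoffs}, "
          f"incentive to cheat: {cheating_pays}")
# If cheating pays, raise detection or penalties, or decouple pay from volume,
# until honesty is the dominant strategy.
```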

There are undoubtedly many ethical quandaries yet to be explored as we move deeper into the use of AI across every type of human activity. But this should not deter us from exploring innovative ways to use these new tools to reduce reputation risk and protect ourselves against the inevitable vulnerabilities created by human decision-making. Perhaps the best analogy for this use of AI has been suggested by Christopher Graves, founder and director of Ogilvy’s Center for Behavioral Science. He likens the use of AI in these preemptive ways to the plan of action Circe suggests to Ulysses in Homer’s tale, enabling him to listen to the beautiful song of the Sirens without being lured to his death: she recommended that he give his sailors beeswax to put in their ears so they could not hear the song, and have them tie him to the mast so he could listen without jumping overboard. With the right protections in place, companies can reap the benefits of their business decisions without falling prey to the monsters of moral failure they themselves have conjured into existence.
