
The Caliphate Of Numbers

Ninth-century polymath Muhammad ibn Musa al-Khwarizmi is enjoying his moment in the sun. But who knows what al-Khwarizmi, known as the father of algebra and of the decimal numbering system, would have made of the twenty-first-century impact of the algorithm, the mathematical tool that took his name (Brezina, 2005). Everywhere one looks, from e-commerce to ride-sharing platforms, the algorithm reigns supreme. It dictates medical care, prison sentences, job seeking, remote welfare eligibility, even the growing practice of predictive policing. With the explosion of data from the Internet of Things, the algorithm, a set of instructions for automated decision-making, has metastasized across practically every area of human life.

There are many reasons to celebrate the increasing reliance on evidence-based decision making that the algorithm makes possible. Faced with a difficult medical diagnosis, most people would prefer that it be based on the evidence of thousands of similar cases rather than on the experience of one doctor (Winters-Miner, 2014). Machine learning and artificial intelligence, love children of the humble algorithm, have simplified and streamlined a vast range of processes across human society, from finding companionship to avoiding a predatory plumber.

An increasing number of voices are being raised, however, suggesting that algorithm-enabled automated decision-making, rather than compensating for the frailties of human judgment, actually drives those frailties underground, amplifying the effects of human prejudice, ignorance and fear (Angwin, 2016). They argue, with growing concern, that the assumptions encoded in mathematical formulae simply reproduce the injustice and inequity of the real world, but in a language inaccessible to correction or mitigation. As automated decision-making enters new spheres at a rapid pace, we need to develop new ethical tools and standards to foster the positive impact of Muhammad ibn Musa’s legacy and mitigate the many forms of potential harm.

One of the more problematic applications of the algorithm is its use in a number of US states to help judges set the length of prison terms for convicted felons. As documented by ProPublica reporter Julia Angwin and her colleagues (Angwin et al., 2016), “risk scores” developed by Northpointe, a private company, are being used by judges in Illinois to gauge a convicted offender’s potential future criminality and therefore the length of the sentence and eligibility for parole. These scores are now also provided to judges in advance of sentencing in Arizona, Colorado, Delaware, Kentucky, Louisiana, Virginia, Washington and Wisconsin.

ProPublica’s analysis found that these risk scores were highly unreliable in predicting future criminality. Their study also determined that the “formula wrongly labeled black defendants as future criminals at almost twice the rate as white defendants”. Although the scores were supposed primarily to determine what types of treatment an offender might need, judges are inevitably using them to determine sentences, as at least one judge has readily admitted. These evolving practices present a clear challenge to fundamental principles of both law and ethics, effectively undermining the presumption of innocence.
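To make concrete what a disparity of that kind looks like, here is a minimal sketch, in Python, of the sort of false positive rate comparison that underpins such a finding. The records, score threshold and group labels are invented for illustration; this is not ProPublica’s dataset or methodology.

```python
# Minimal sketch: comparing false positive rates of a risk score across groups.
# The records and threshold below are hypothetical, not ProPublica's actual data.

records = [
    # (group, risk_score, reoffended_within_two_years)
    ("A", 8, False), ("A", 7, False), ("A", 9, True),  ("A", 3, False),
    ("B", 2, False), ("B", 6, False), ("B", 8, True),  ("B", 4, False),
]

HIGH_RISK = 7  # scores at or above this value are treated as "high risk"

def false_positive_rate(group):
    """Share of people in `group` who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1] >= HIGH_RISK]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for g in ("A", "B"):
    print(f"Group {g}: false positive rate = {false_positive_rate(g):.0%}")
```

In this toy data, group A’s non-reoffenders are flagged as high risk far more often than group B’s, which is the shape of disparity the ProPublica reporting describes.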

Algorithms are also becoming universal in the world of employment. This is not news to job seekers, and there are dozens of articles purporting to teach them how to “outwit the robots” (Perman, 2012), including such gems as plucking keywords from job listings, eschewing all formatting, and picking a machine-friendly font such as Arial or Times New Roman. Behind this relatively benign-sounding contest between man and machine, however, is a much more troubling reality, namely, that the assumptions baked into these résumé-reading algorithms actually perpetuate disadvantages for women and minorities. One concern already surfaced by recruitment algorithm company Evolv is the use of commuting distance as a criterion for hiring customer service workers (Peck, 2012). Since different towns and neighborhoods often have distinctive racial profiles, there is an inherent vulnerability to racial bias in the use of this criterion as part of a hiring attractiveness score.
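To illustrate how such a proxy can creep in, here is a minimal sketch of a hiring “attractiveness score” that penalizes commuting distance. The weights, ZIP codes and distances are hypothetical and do not represent Evolv’s actual model; the point is simply that two identically qualified candidates end up with different scores because of where they live.

```python
# Minimal sketch of a hiring "attractiveness score" that uses commuting distance
# as one input. All weights, ZIP codes and distances are hypothetical.

CALL_CENTER_ZIP = "30301"  # hypothetical workplace location

def commute_miles(candidate_zip):
    # Stand-in for a real geocoding lookup; distances here are invented.
    distances = {"30301": 2, "30318": 5, "30349": 18}
    return distances.get(candidate_zip, 25)

def attractiveness_score(keyword_hits, years_experience, candidate_zip):
    """Toy score: rewards keyword matches and experience, penalizes long commutes."""
    score = 10 * keyword_hits + 2 * years_experience
    score -= 1.5 * commute_miles(candidate_zip)   # the problematic proxy feature
    return score

# Two equally qualified candidates; only their neighborhoods differ.
print(attractiveness_score(keyword_hits=4, years_experience=3, candidate_zip="30318"))
print(attractiveness_score(keyword_hits=4, years_experience=3, candidate_zip="30349"))
```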

Nor is the danger confined to inbound résumés. A noted Carnegie Mellon study found that Google showed advertisements for executive-level, higher-salaried positions more often to users its algorithm deemed to be male than to those it deemed female (Carpenter, 2015). Research from Harvard demonstrated that searches for people with names more commonly carried by African Americans returned results accompanied by ads for arrest record inquiries (Bray, 2013). As Ifeoma Ajunwa of the District of Columbia Law School points out, standardized test scores, housing status and credit scores, all used in hiring algorithms, reflect underlying inequities in our culture and are poor predictors of fitness for a specific job or role (Alexander, 2016).

This is excerpted from the full article, which appears in the journal cited below.

“Writings on the Bathroom Wall” appeared in The Journal of Business Strategy, Vol. 37, Issue 6, 2016, pp. 51-55, and is reprinted with permission from Emerald Publishing Group Ltd.
