Algorithmic Accountability

The purpose of this post is simply to test an idea, and to do so by a two-pronged approach. Like the two sides of a vinyl record, the following sections can be consumed in either order.

Side A: A Speculative Vignette

It’s early Monday morning and Andrea is still feeling unpleasantly chilled from the commute to work. The October wind had been tearing at the cable car as it traversed the river. At the busy changeover to the trams, she was caught off guard by a sudden gust of wind and rain that showered her horizontally from top to toe. Her woollen coat is now damp, and she can sense the ripe smell of sheep as she takes a seat in the large hospital lecture hall.

Andrea and her colleagues are all specialists in radiology, and they are gathering to review the latest insights generated by the artificial intelligence system DeepDoc. The name was given by the technicians. Informally, however, the system has jestingly been nicknamed the Augur, a name that calls to mind the haruspices: the ancient Roman priests who rummaged through animal entrails for omens.

[Image: Haruspex]

Andrea has worked as a thoracic radiologist for 30 years, and her main task is to find and track the development of pathologies in the lungs of patients. After reviewing the information, she documents her opinion on the most suitable course of action and sends it back to the referring physician. In this profession, the work is technically mediated through and through. Without the tools, Andrea would have nothing to do. Everything revolves around the digital visualizations produced by an assortment of imaging technologies, all set to reveal the deepest recesses of the human body in ever increasing detail. For this reason, the profession has also been continually challenged by technical advancements in medical imaging. As practising radiologists, Andrea and her colleagues have had to reconsider, over and over again, what they know and how they know it.

Today the system has finished analyzing a large batch of data. No ram or ewe has been sacrificed for this assembly, but the internal organs of thousands of people have been processed digitally. In fact, every patient scanned in the imaging facility since it opened a few years ago has been included. But that is not all. A different kind of material has also been produced and fed into the system. Painstakingly put together by an interdisciplinary research team, it mainly features Andrea and her colleagues as they work through case after case and explain how to do the diagnostic work. This is diagnostic reasoning made by and for radiologists. Performed as a public display of knowledge, it has been recorded, transcribed, segmented, and processed so that the AI can develop an additional set of skills. The procedure aims to give the neural networks the ability to explain themselves, and so help the radiologists better understand the system’s decisions.

It’s not that Andrea has been without support before; she’s well used to the current artificial assistance systems. Like following a connect-the-dots drawing, the technology will typically highlight certain aspects of the data, and based on her expertise Andrea will then connect the dots to make the final decision.

However, what they are presented with today is different. The system has started to address them directly. At first, it comes off as somewhat quaint, like expressions in a strange dialect or a radiological pidgin of sorts. As the initial surprise settles, it becomes clear that what the system is speaking of is still recognizable as a form of radiological reasoning. It feels commonplace, like something belonging to their life-world. But what is more, it is instructing them, the professionals, in new ways to think about the data, portions of which they have all seen before. DeepDoc starts out by asking a few gentle questions. Specific regions are mentioned, and some participants are called on to point out structures of interest on the large screen. This helps everyone find a shared focal point for the discussion. Gradually, the complexity increases as they are taken on a tour of increasingly rare diseases that had previously remained undiscovered in the digital scans.

The second wave of surprise and wonder is more profound than the first. The insights being laid out by the system go well beyond what she and her colleagues have been able to ascertain during several years’ work. The combination of originality and familiarity is confounding; it’s like being taught by aliens.


Side B: The Algorithmic Accountability Problem

Computer algorithms are widely employed throughout our society to make decisions with extensive impacts, including decisions guiding education, access to credit, healthcare, and employment. One approach to computing, called machine learning, is now gaining considerable traction, due in part to a surge in cheaply available computational power. With this approach, programmers don’t encode computers with direct instructions for solving problems. Rather, they define and assemble algorithms into neural networks that are trained on existing datasets. For example, the system ‘Deep Patient’ was trained using data from about 700,000 individuals, and when tested on new patient records, it proved remarkably good at predicting disease.
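To make the shift in paradigm concrete, here is a minimal sketch of learning from data rather than from hand-coded rules. Everything in it is an illustrative assumption: the synthetic ‘patient records’, the network size, and the evaluation are stand-ins, not a description of Deep Patient or any real system.

```python
# A minimal sketch of the machine-learning paradigm: no hand-coded
# diagnostic rules, just a small neural network fitted to labelled examples.
# All data below are synthetic stand-ins, not anything from Deep Patient.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic "patient records": 1,000 cases, 20 numeric features,
# and a binary label standing in for a later disease diagnosis.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "program" is never written out; it is fitted to the training data.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                      random_state=0)
model.fit(X_train, y_train)

# Evaluate on unseen records, as Deep Patient was tested on new patients.
print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```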

The problem with this development is that machine-learning systems tend to become so complicated that even the engineers who designed them struggle to isolate the reasons behind any single action. Currently, there are no viable methods for getting systems to account for their reasoning, or to explain how a particular output has been produced. Both the European Union and the Association for Computing Machinery US Public Policy Council have issued statements to the effect that any use of algorithmic decision-making should be complemented with explanations, regarding both the procedures followed by the algorithm and the specific decisions made.
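For a sense of how far current practice is from such accounts, the ‘explanations’ available today are typically numeric attributions over input features. The sketch below is illustrative only (none of it comes from the statements above); it uses permutation importance, which answers ‘which inputs mattered on average’ rather than ‘why this particular decision’.

```python
# Illustrative only: today's "explanations" are typically numeric input
# attributions like these, not accounts of reasoning in any professional sense.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                      random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a crude answer to "which inputs mattered", not "how was this produced".
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: {score:.3f}")
```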

This development calls for research that addresses and attempts to bridge this rift between algorithmic decision-making and the currently human-centered accounting practices found in various professions.

A particularly suitable point of departure for research into this phenomenon is the domain of diagnostic imaging. Here, the advances in computation are so far-reaching that some have even started to question the need for training any new radiologists. It is also an area where we have extensive experience of working with professional learning. In a series of studies, we have addressed the methods through which diagnostic reasoning can be pulled into view and made into an object of scrutiny. This has been accomplished by deploying the format of after-action reviews, in which targeted incidents are subjected to critical analysis by a group of peers. What these reviews reveal are the domain-specific and contingent considerations that make up practical diagnostic reasoning, and the normalization procedures through which mistakes are corrected, i.e., the (non-rule-based but) artful practices of rational inquiry in radiology. So far, these results have been geared towards the advancement of professional reasoning itself, mainly towards strengthening radiologists in the conduct of their own professional tasks.

However, this material could potentially be put to a different use. From a traditional sociological as well as computational perspective, the studied accounting practices of radiologists would appear ‘messy’ and lacking in procedural logic. That real-life expertise is difficult to formalize is a lesson learnt long ago. But such unruliness may be of little consequence to a machine-learning approach. The hypothesis at work here is that it would be possible to combine different data sources and train networks simultaneously on the image sets analyzed by teams of radiologists and, if formatted appropriately, on the reasoning exhibited by the same teams. The aimed-for outcome of such a system would be a network that can produce a diagnosis of new materials accompanied by some form of account or description. Whatever shape this second part of the outcome assumes, it is crucial that it be intelligible to practitioners themselves. By mimicking and exhibiting the recognizably rational properties of the practitioners’ own common-sense inquiries, a system could be seen as attaining accountability as part of their work.
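As a thought experiment, one way to cash out this hypothesis is a network with a shared image encoder and two heads: a conventional diagnosis classifier, and a small text generator trained on the transcribed team reasoning. The PyTorch sketch below is entirely speculative; every shape, vocabulary size, and loss term is an assumption made for illustration, not a description of any existing system.

```python
# A speculative sketch of the dual-output network hypothesized above: one
# shared image encoder feeding (a) a diagnosis head and (b) a text head
# meant to produce a practitioner-readable account. All of it is assumed.
import torch
import torch.nn as nn

class AccountableDiagnoser(nn.Module):
    def __init__(self, n_diagnoses=10, vocab_size=5000, d_model=256):
        super().__init__()
        # Shared encoder over single-channel scans (e.g. CT slices).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # Head (a): conventional diagnostic classification.
        self.diagnosis_head = nn.Linear(d_model, n_diagnoses)
        # Head (b): a GRU that generates an account token by token,
        # conditioned on the image embedding as its initial hidden state.
        self.embed = nn.Embedding(vocab_size, d_model)
        self.gru = nn.GRU(d_model, d_model, batch_first=True)
        self.word_head = nn.Linear(d_model, vocab_size)

    def forward(self, scans, account_tokens):
        z = self.encoder(scans)                     # (B, d_model)
        logits_dx = self.diagnosis_head(z)          # (B, n_diagnoses)
        h0 = z.unsqueeze(0)                         # (1, B, d_model)
        out, _ = self.gru(self.embed(account_tokens), h0)
        logits_txt = self.word_head(out)            # (B, T, vocab_size)
        return logits_dx, logits_txt

# Joint training on both data sources: image labels, plus the transcribed
# reasoning of the radiology teams (here random stand-ins).
model = AccountableDiagnoser()
scans = torch.randn(4, 1, 128, 128)
labels = torch.randint(0, 10, (4,))
tokens = torch.randint(0, 5000, (4, 12))   # transcript token ids
logits_dx, logits_txt = model(scans, tokens[:, :-1])
loss = nn.functional.cross_entropy(logits_dx, labels) + \
       nn.functional.cross_entropy(logits_txt.reshape(-1, 5000),
                                   tokens[:, 1:].reshape(-1))
loss.backward()
```

Whether such a next-token head could ever yield accounts that practitioners recognize as rational, rather than plausible-sounding text, is precisely the open question the hypothesis raises.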

If a system of accountable reasoning is at all possible, a further question is whether such systems could also bring about or visualize new methods, and thus instruct professionals in the insights generated by their networks. While medical imaging would be used as a testbed, the further aim would be to articulate and describe a scalable model that could be applied in different professional settings. This would have the potential to reshape the very foundation of how we currently conceive of scientific and conceptual development at a societal level.

Jonas Ivarsson, Professor

Author: Jonas Ivarsson

Jonas Ivarsson is a professor of education. His research interests concern learning and the development of knowledge in higher education and other technology-intensive environments. Much of his work examines the ways in which the introduction of new technologies changes the conditions of various professional practices.
