
The accumulation of enormous quantitative data sets via digital social media and other systems, paired with recent developments in neural networks and increases in computing power, has delivered unexpectedly rapid improvements in what artificial intelligence technologies can achieve (Holmquist, 2017). As a result, the influence of algorithmic decision making and machine learning on digital products, from social media to financial management and healthcare, has increased significantly. As these systems start to pervade everyday life, they present a challenge to human understanding. We risk developing highly influential technologies of such complexity and opacity that they surpass our ability to shape them into forces for the common good.

The consequences for culture and society are profound. Firstly, when personal data captured in public digital spaces is used to train algorithms designed by private corporations, for purposes shielded by commercial secrecy, a dramatic imbalance of power results. Secondly, the invisibility and opacity of machine learning technologies mean that access to the means of production is limited to the few people trained and skilled in creating them. Finally, the conscious or automatic manipulation of flows of information via digital products has been shown to endanger democratic processes and information equity.

Involving designers in the development of practices that help to understand and explain what is going on in the interfaces and interactions of digital products that depend on artificially intelligent systems means making the case for design in this context. How can designers of digital products make the workings of artificial intelligence more apparent to users? What practices and methods can help designers of digital products to reveal the workings of artificial intelligence? How can a set of practical design principles help to counter some of the negative effects of cognitive technologies in digital products?

Design research has paid little detailed attention to this topic, although it has been covered in studies of the ethics and politics of machine learning (de Bruin and Floridi, 2017; Mittelstadt et al., 2016) and in theoretical approaches to interaction design (van Allen and Marenko, 2016).

Designers have tended to respond to AI technologies in deterministic, defensive, or opportunistic ways: describing new applications for machine learning technologies, arguing for the preservation of human voices in the design process, or describing how to deploy AI more fully in the design of digital products. The technologically deterministic approach is seen in design research that emphasises technical solutions to AI communication problems (Feldman et al., 2017) or AI integration with new types of hardware (Vidaurre et al., 2011). The defensive reaction responds to the threat of human designers being superseded by computers (Teixeira, 2017); this strand of thinking focuses on what human designers can do that AI is suggested to be incapable of. The opportunistic reading of AI in design works in two ways: firstly, by identifying ways AI technologies can improve user experience, for example by increasing personalisation or analysing huge amounts of user data; secondly, by providing usability guidelines for how to use AI in design (Holbrook and Lovejoy, 2017; van Hoof, 2016) and proposing a set of skills designers will need in the emerging age of AI. None of these approaches considers the ethical, moral, and political implications for designers involved in creating AI-driven systems. They do not attempt to explain how AI is used in their designs, nor how the AI may be influencing the choices people can make, or are subject to. People are understood as subjects of the technology, for whom designers optimise the user experience by harnessing the power of machine learning.

Instead, my research proposes pragmatic responses to the argument put forward by Holmquist (2017), who suggests ways designers can make the behaviour of artificial intelligence understandable. For example, designing for transparency means showing in a design what the AI is doing at any given time; designing for opacity recognises that the intricate workings of AI-driven systems may be beyond immediate understanding; and designing for unpredictability takes this further, emphasising that the nature of machine learning makes it subject to error and uncertainty.
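To make these principles concrete, here is a minimal sketch, assuming a simple classifier embedded in a digital product; the names and confidence thresholds are hypothetical illustrations, not drawn from Holmquist's work. It shows one way an interface could surface what the AI is doing (transparency) and admit uncertainty rather than present its output as fact (unpredictability):

```python
# A hypothetical sketch of "designing for transparency" and
# "designing for unpredictability": the product shows the user what
# the model concluded and how certain it is, instead of presenting
# the output as fact.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # what the AI concluded
    confidence: float  # the model's own probability estimate, 0.0 to 1.0

def present(prediction: Prediction) -> str:
    """Render a prediction with its uncertainty made visible to the user."""
    if prediction.confidence >= 0.9:
        return f"Suggested: {prediction.label} (high confidence)"
    if prediction.confidence >= 0.6:
        return f"Possibly: {prediction.label} (the system is unsure)"
    # Designing for unpredictability: admit error rather than guess.
    return "The system could not make a reliable suggestion."

print(present(Prediction(label="spam", confidence=0.72)))
# -> "Possibly: spam (the system is unsure)"
```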
