The relationship between user experience (UX) designers and machine learning (ML) data scientists has emerged as a site of research since 2017. Central to recent findings is the limited ability of UX designers to conceive of new ways to use ML (Yang et al., 2018). This is due to a number of factors. Firstly, human intelligence is very different from machine intelligence: rather than using a heuristic or associative model, ML uses statistical inference to produce outputs that can often seem nonsensical or confusing (Yang, 2018). Secondly, the types of data that UX and ML depend on can be mutually incompatible. UX designers have developed an extensive set of research techniques to produce qualitative insight into how people experience digital systems, whereas ML data scientists use mathematically derived automation to deliver quantitative findings (Girardin and Lathia, 2017). Thirdly, ML technologies are difficult to understand; UX designers have found it hard to grasp the limits of ML and how to apply it appropriately. Finally, the kinds of designs that ML technologies facilitate can be unfamiliar to UX designers, because ML-driven systems evolve according to human behaviours and are constantly updated as models are fed new streams of data (Girardin and Lathia, 2017).
The call from HCI researchers and design researchers in this context (Yang et al., 2018) has been for ‘sensitising concepts’ intended to make ML available to UX practitioners as a new design material. Sensitising concepts reach beyond their immediate material manifestation to sensitise designers to the possibilities of the suggested design resource, expanding the field of practice of a particular design domain by demonstrating how new materials encountered within that domain may be used. Sensitising concepts are embodied in ‘designerly abstractions’, which free designers from having to fully grasp the technical constraints of ML technologies (after all, data scientists are rarely expected to understand even the most basic conventions and practices of user experience design) and instead allow them to explore alternative forms. Designerly abstractions act as boundary objects between UX design and ML data science, fostering new ideas and bridging the gap between design possibility and technical capability (Yang et al., 2018), thereby making ML available as a design resource. This appeal for boundary objects also acts as a call to mobilise designers in the field of ML, and of artificial intelligence more generally, positioning design as an intermediary between data science on the one hand and regulatory or legalistic readings of the field on the other.
Examples of these abstractions include responses from research participants suggesting that ML results in personalised experiences, evolving relationships, and uncertain outcomes. The emphasis overwhelmingly follows a transactional model of exchange, in which ML technologies are seen as providing a new aspect of service in return for the data that enable those services to develop. To date there has been little reflection in the research on the possible negative social effects, or unforeseen consequences, of an increase in human experiences determined by ML algorithms. It therefore seems necessary to include the ethical and moral aspects of designing for ML technologies in sensitising concepts and in the designerly abstractions in which they are embodied. These aspects may include the observations that ML-driven systems reproduce inequality by reinforcing the biases inherent in their training data, and that they bring about a loss of control and transparency. This last point finds validation in the recent House of Lords report AI in the UK: ready, willing and able?, which identifies explainability as a desirable characteristic of future ML systems.