In response to the call from Yang and others for sensitising concepts in the form of designerly abstractions, intended to make machine learning technologies available as a design material to user experience designers, I ran a workshop at the London design agency Venture 3. The aims of the workshop were to develop ways of thinking about machine learning that would help the team there work in new ways and respond creatively and ethically to the challenges and opportunities machine learning presents.

Participants started by listing some of the positive and negative aspects of digital systems driven by machine learning. The intention here was to ask the question: what is going on? That is, what work does machine learning perform in digital systems? These attributes were listed from the perspective of users, rather than designers or developers, and participants wrote them onto sets of cards. Positive attributes of machine learning in digital systems included the removal of human error, the potential for an increase in leisure time, more emphasis on human abilities, personalised medical diagnosis, and an infinite amount of content. Negative aspects included an inability to opt out, the threat to jobs, a reduction in human contact, loss of physical experience, and the monopolistic dominance of companies rich in data. Participants reported that their analysis took the form of an axis with efficiency and autonomy as its opposing values, saying ‘every time you make a move to make your life more efficient there’s a necessary sacrifice of autonomy’. Others pointed out that machine learning driven systems are content agnostic and derive value from interactions: ‘the user has more negatives and the platform gains more positives from the existence of machine learning’. As users deliver their preferences and choices for free, the platform uses machine learning to tailor its service ever more accurately to other users.

Some participants chose a particular consumer system, such as Uber, on which to focus their analysis, mentioning the phenomenon of the misplaced surge pricing that took place in the aftermath of the Manchester Arena bomb attack, and the Greyball scandal in which Uber used algorithmic processing to avoid regulatory inspection. Other observations related to a potential future AI deficit, ‘what does a life outside AI look like? Will there ever be an opt out?’, and how it may be paradoxically satisfying to see machine learning making mistakes and failing to live up to the all-seeing, omniscient system it is often sold as. From the perspective of design, these observations raise the possibilities of building in a certain amount of error, of a designed opt-out, of shared experiences to counterbalance hyper-personalisation, and of a nuanced understanding of how personal identities are subverted or reinforced over time.

Phase two of the workshop involved participants making representations of their ideas of the effects of machine learning: designerly abstractions using physical materials. As framing concepts we used the categories of transparency, unpredictability, opacity, evolution, learning, and shared control identified by Holmquist as the main factors in designing for AI. The aim for this phase was to get some of the ideas off the page and into tangible, substantial forms around which a new set of understandings could coalesce. Design consists of making things; at Venture 3 this means the creation of systems, campaigns, identities, and interfaces. Increasingly, the materials these products are made of will include machine learning. How can it be made tractable for designers?

Identity emerged as a major theme of this phase with one group observing that the siloed subjectivising effect of machine learning could be counterbalanced through data merging; ‘what we should be doing is combining your data with lots of other versions of yourself… and maybe you could have almost alter egos of yourself’. The defining metaphor here is the bubble, as suggested by Pariser (2012). Participants proposed ‘bubbles could expand or decrease… different things fluctuate based on your experiences and your interactions with the AI and with other people’. The effect of this would be that ‘you cannot distinguish yourself anymore because the complexity of the information that you’re giving to the bubbles doesn’t allow them to clearly identify you as a single person’, an engineered counter-strategy to extreme personalisation. ‘The version of you that exists is relative depending on all the other things that are in and around you. So there is no one you, there are twenty versions of you’.

The need to account for the complexity of entanglement in systems that depend on machine learning was also expressed in the designerly abstractions produced in this phase. Thinking through making was evident here as participants worked to articulate their thoughts in physical representations. The idea that a single piece of data, such as a name or date of birth, is processed so that it produces a particular result in, say, a digital profile was challenged by the idea that multiple pieces of data (bearing in mind a single tweet contains 32 metadata items) add up to more than the sum of their parts: ‘Me telling them that my name is Lily and I’m 23 essentially means that what I thought were two yellow and red pieces of information make an orange piece of information’. The metaphor of colour mixing is used here to explore the notion of algorithmic complexity: ‘because it’s all built on assumption and generalisations… although the pictures might be clearer, it actually becomes cloudier’. This approach raises the possibility that designers should work to subvert the illusion of efficiency and accuracy of machine learning. ‘You have these entities that enter into a relationship with each other, which seems like a much more sophisticated thing to communicate than… “this is who we think you are”’.
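The participants’ point that pieces of data ‘add up to more than the sum of their parts’ can be sketched in a few lines of code. This toy example uses invented data (not from the workshop) to show how two attributes that are individually shared by several people can, in combination, single one person out:

```python
# Hypothetical population: each attribute alone matches several people,
# but the combination narrows to a single individual.
population = [
    {"name": "Lily", "age": 23, "city": "London"},
    {"name": "Lily", "age": 31, "city": "Leeds"},
    {"name": "Sam",  "age": 23, "city": "London"},
    {"name": "Sam",  "age": 40, "city": "Bristol"},
]

def matches(**attrs):
    """Return the people whose profiles match every given attribute."""
    return [p for p in population if all(p[k] == v for k, v in attrs.items())]

print(len(matches(name="Lily")))          # 2 people share the name
print(len(matches(age=23)))               # 2 people share the age
print(len(matches(name="Lily", age=23)))  # 1: together they identify one person
```

Each ‘yellow’ or ‘red’ piece of information is ambiguous on its own; combined, they yield an ‘orange’ inference that neither contained by itself.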

The final phase of the workshop required participants to imagine speculative digital products in which their observations and thoughts about machine learning could be implemented. The first of these was a layered costume, ‘a shroud of who you are… something deeper about who you want to be today… it evokes a sense that your data is embodied around you, it’s not a distant cloud thing.’ The metaphor of bodies, layers, and textiles is at work here representing how data-driven descriptions are interdependent and can have very real tangible effects on people. It is noticeable in this project that a spiritual dimension is expressed as surrounding the data self.

Another project of this phase focused on the design of a machine learning augmented judicial system. The initial suggestion was that machine learning could help to remove human bias in a jury by accounting for a wide range of precedents and legal details. This idea evolved into the realisation that the more complex problem was considering the many types of circumstantial evidence, contextual factors, and emotional consequences, and that machine learning could be useful here. The group then suggested that ‘removing the jury and using this super holistic lawmaker’ may help in creating laws that address behaviours that are not yet crimes: examples such as upskirting and revenge pornography, where the moral and legal situation seems clear but for which the judicial system has not yet developed legislation. The algorithmic lawmaker ‘creates precedence for new laws… by using machine learning to cross reference similar cases’. The system would do this by using vocal analysis to recognise shame, embarrassment or signs of depression in witness statements.

At the end of the day we reflected on the findings and methods used, particularly from the perspective of using tangible materials to create the designerly abstractions Yang calls for. These are intended to be sensitising concepts for designers and to act as boundary objects in the development of a shared language between user experience designers and data scientists. Participants said the methods were useful for revealing hidden characteristics: ‘prototyping quickly uncovers problems that you maybe hadn’t thought of sooner’. One of the benefits of the phased approach was speed: ‘You’re just getting things down in a really quick way – out of the sketch pad too’. There was also a direct mapping between the metaphorical and the physical: ‘You do literally look at the problem from different angles and that makes it different, you start to look at the relationships’. So making physical representations closes the gap between abstract and concrete while allowing complexity to emerge. Participants acknowledged a cognitive dimension to the exercise: ‘it just felt really different, you feel like you’re using a different part of the brain’. Making things at low resolution was useful: ‘it removes the barrier for people like myself who are less in the know (about machine learning)… it helps you be a little bit more open about how you talk about it’. So there is a real benefit in terms of communicating with physical abstractions.

Resolution of representations also emerged as an important factor. ‘If we’re creating products for people who aren’t creatives or designers it has to be even more simple to understand… I think it helps in being quite primitive with our prototyping’. The physical nature of the materials allowed participants to bypass usual design reasoning: ‘It’s not about making a perfect outcome, it’s about communicating the core of what I think’; ‘It doesn’t have to be right it just has to communicate right now!’ The limitations of materials and time were also liberating for participants: ‘I can only hope to get something that kind of communicates a thought and that’s quite freeing’.

For one participant, using playful physical materials had a positive emotional effect: ‘using different childlike materials to talk about (machine learning) does… make you feel like (designing for it) could be a bit more joyful’. It also allowed for reflection on working practices, ending with the question: ‘What criteria would we as a company use to make AI joyful rather than efficient?’

In conclusion, workshop outcomes had a range of effects that could be summarised as foregrounding complexity, countering illusions, revealing hidden effects, and challenging design practices. I will be exploring these further in upcoming workshops.
