Robots Should (Not) be Viruses

Last week, I was introduced by Pip Thornton (@Pip__T) to a conversation initiated by David Gunkel on Twitter, in which he took up a suggestion from Noel Sharkey on whether robots should be classified as viruses (see the tweet below). As I have been focusing on ‘computer viruses’ as part of my PhD on malware, this strikes me as something to be avoided, and as I said on Twitter, I would write a blog post (which has morphed into a short draft essay) to go into more depth than 280 characters can provide. So, here, I try to articulate a response to why Robots Should Not be Viruses, and why this would be an inadequate theorisation of robotics (in both their material form and their manifestation in this discussion as ‘intelligent’ agents that use extensive machine learning; note that I avoid saying AI, and I will address this later too). This runs along two axes: first, that viruses are loaded with biological and negative resonance; and second, that they should not be seen as a liminal form between (non)life.

A bit of background

I’ve been working on malicious software, and on what can be considered ‘computer viruses’, throughout my PhD. This has involved seven months of (auto)ethnographic work in which I developed a method of ‘becoming-analyst’ in a malware analysis laboratory, where I actively learnt to analyse and produce detections for an ‘anti-virus’ business. I have been particularly interested in how the cultural construct of the virus has been formed by a confluence between cybernetics and cyberpunk, so that we typically see malware, and its subset viruses, as comparable to biological viruses and as something with maliciousness somehow embedded within it. This construct has fed various works on the viral nature of society, particularly in the sense of widespread propagation and how it ‘infects’ society. Closest to my work, Jussi Parikka, in Digital Contagions (2007), considers how the virus is representative of culture today: viral capitalism, affect, distribution and so on. However, to aid my understanding of the confluence made present in malware between biology and computing, I was a research assistant on an interdisciplinary project, ‘Good Germs, Bad Germs’ (click here for the website, and the most recent paper from this here), a piece of participatory research to understand how people relate to germs, viruses, and bacteria (in the biological sense) in their kitchens. So, I have researched DNA sequencing techniques, perceptions of viruses (biological/technical), and how these get transmitted outside of their formal scientific domains to become active participants in how we relate to the world. This is true of both biological and computing viruses.

It is from these dual experiences that I have attempted to partially understand what could be considered immunological and epidemiological approaches to the study of viruses; whether that be in practices of hygiene, prevention, or social knowledges about viruses. However, in both I have sought to counter these broadly ‘pathological’ approaches with something that can be considered ecological. This is not an ecology based on an ecosystems perspective, where every part of the ‘web’ of relations somehow has a place and a somewhat stable state. It is one based on a coagulation of different environments and bodies (organic/inorganic/animate/inanimate, and things that go beyond these dualisms) that do not have forms of equilibrium, but emergent states that are part of new ‘occasions’ (to follow Alfred Whitehead) yet maintain some semblance of a system, as they exist within material limits. These can come together to produce new things, but materials cannot have unlimited potentiality; in similar ways, Deleuze and Guattari have a dark side (Culp, 2016) with reference to the Body without Organs. So, ecology allows what could be seen as outside the system to become a central part of how we understand something, and allows us to consider how ‘vibrant matter’ (Bennett, 2010) plays a role in our lives. This allows me to turn to why viruses (in both their cultural and material existence) are important players in our society, and why labelling robots as viruses should be avoided.

 

Viruses as loaded

Popular culture sees viruses as broadly negative. They are the archetype of the ‘end of the world’ in Hollywood, something ready to destabilise the world with indiscriminate impact (think of the preparedness of governments against this threat). This is not to say that the threat of certain events is unwarranted. I would quite like to avoid dying in an ‘outbreak’, so procedures in place to capture and render knowable a biological virus are not a bad thing. Yet, after the large malware incidents of 2017, WannaCry and (Not)Petya (I wrote a short piece on the NHS, environments and WannaCry here), there is also a widening recognition of the power of malware (which inherits much of the baggage of previous viruses) to disrupt our worlds. So, here we have powerful actors that many would deem to be different according to the inorganic/organic distinction. Before I move on to two more detailed points, I want to set aside the good/bad morality that sits beneath the whole debate, at the level of both malware and the biological virus; it is not helpful, as in the former it suggests that software has intent, and in the latter that a virus works in the same way in all environments. I have two things to say about how I think we should rethink both:

  1. Ecology: As I said in the previous section, this is where my research has focused. In my research on malware, I argue for what could be called a ‘more-than-pathological’ approach to its study. This means I respect, and think we benefit from, how we treat biological viruses or detect malware in our society, as these are essential and important tasks that enable society to live well. But we could shift our attention to environments and social relations (human and non-human) as a way to better understand how to work together with viruses (biological and computing). So, for example, if we look at the environment in malware detection (as contemporary contextual approaches now do, with behavioural detections, reputational systems, and machine learning techniques), then ecology becomes a better way of recognising malware (a minimal sketch of this contextual logic follows this list). This is similar to new thinking in epidemiology that looks at how the environments and cultures of different ‘germs’ come together, which may mean that the presence of a certain pathogen is not necessarily bad, but that excessive cleaning practices, for example, can cause infection.
  2. More-than-Human: You may, by now, be questioning why I combine biological and technical viruses in the same discussion. Clearly, they are not the same; they have different materialities, which allow for different possibilities. However, I refrain from working on an animate/inanimate or organic/inorganic basis. For my core interest, malware may be created by humans (by specialised hackers, or through its formation in malware ‘kits’), but that does not mean it is somehow a direct extension of the hacker. It works in different environments, and the human cannot and will not be able to understand all the environments it works within. There are also frequently some very restricted ‘choices’ that it makes: it takes one path of action, produces some random IP, sets some timing. In similar ways, biological viruses must ‘choose’ which cell to infiltrate, albeit in very limited ways, and without ‘thinking’ in the way we would understand. However, when you compare computing technologies (and malware) and biological viruses, both make choices, even if in the most limited way, unlike, let’s say, ocean waves and rocks. I will explain in more detail in the next section why this is so important, and set out the distinction more fully.
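
To make the contrast in point 1 a little more concrete, here is a minimal, hypothetical sketch (in Python, not drawn from any actual anti-virus product) of the difference between a purely ‘pathological’ signature check and a contextual, behavioural scoring of the same sample. All of the names, weights, and thresholds are invented for illustration.

```python
# A toy illustration (not a real detection engine): the same file hash means
# little on its own; a judgement emerges from the environment it acts within.
from dataclasses import dataclass, field

KNOWN_BAD_HASHES = {"9f86d081884c7d65"}  # hypothetical signature database


@dataclass
class Observation:
    """What a sample *does* in a given environment, not just what it *is*."""
    file_hash: str
    parent_process: str
    spawned_processes: list[str] = field(default_factory=list)
    files_encrypted: int = 0
    outbound_hosts: int = 0
    seen_on_machines: int = 1  # reputational signal: prevalence across a fleet


def signature_verdict(obs: Observation) -> bool:
    """The 'pathological' view: malicious if the hash is on a known-bad list."""
    return obs.file_hash in KNOWN_BAD_HASHES


def contextual_score(obs: Observation) -> float:
    """A crude ecological/behavioural score built from relations, not essence.

    The weights are invented; real engines learn or tune such weightings
    from large corpora of telemetry.
    """
    score = 0.0
    if obs.parent_process in {"winword.exe", "outlook.exe"}:
        score += 0.3  # documents rarely need to launch other programs
    if "powershell.exe" in obs.spawned_processes:
        score += 0.2
    if obs.files_encrypted > 100:
        score += 0.4  # mass encryption resembles ransomware behaviour
    if obs.outbound_hosts > 50:
        score += 0.2  # scanning, worm-like propagation
    if obs.seen_on_machines < 5:
        score += 0.1  # rare files carry less reputational trust
    return min(score, 1.0)


if __name__ == "__main__":
    sample = Observation(
        file_hash="unknown",  # no signature match at all
        parent_process="winword.exe",
        spawned_processes=["powershell.exe"],
        files_encrypted=450,
        outbound_hosts=120,
    )
    print("signature verdict:", signature_verdict(sample))            # False
    print("contextual score: ", round(contextual_score(sample), 2))   # ~1.0
```

The point is not the particular numbers, but that the verdict emerges from the sample’s relations to its environment (parent process, propagation, prevalence) rather than from anything inherent in the file itself.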

Viruses are therefore loaded with a variety of different understandings of their place in regard to the human: cultural, organisational, hierarchical, caught in a nature-technology bifurcation, as some form of liminal (non)life. I think moving away from centring the human helps address the moral questions by understanding what these things are prior to ethics. Ethics is a human construct, with human values, and one we (often) cherish; but other cognizers, as I shall explain now, do not work on a register that is comparable to ours. This has implications for robotics (I shall add the disclaimer here that this is not my research specialism!) and other computing technologies, meaning that we have to develop a whole different framework that may run parallel to human ethics and law but cannot fool itself into being the same.

Moving beyond the liminal virus

What I have found intriguing in the ongoing Twitter debate are the discussions about control (and often what feels like fear) of robots. Maybe this is one way to approach the topic. But as I have said, I do not wish here to engage in the good/bad part of the (important) debate, as I think it leads us down the path I’ve had trouble disentangling in my own research on the apparent inherent badness of malicious software. Gunkel suggested in an early tweet, and in a piece he kindly provided, Resistance is Futile (which is truly wonderful), that Donna Haraway’s Cyborg (from the Cyborg Manifesto) could offer a comparison of how this could work with robots as viruses. Much of Gunkel’s piece I agree with, especially on understanding the breakdown of the Human and how many of us can be seen as cyborgs. It also exposes the creaky human-ness that holds us together with a plurality of things that augment us. However, I do not think the virus can perform for robots the function that the cyborg performs in the sense Haraway deploys it. The virus carries the baggage of the negative without an understanding of its ecology. It also artificially suggests that there is an important distinction in the debate between animate and inanimate; does this really matter, apart from holding onto ‘life’ as something which only organic entities can have? I think the liminality that could be the virus’s strength is also its greatest weakness. I would be open to hearing of options for queering the virus to embrace its positive resonances, but I still think the latter point I make on the (in)animate holds.

Instead, I follow N. Katherine Hayles in her most recent work, Unthought (2017), where she starts to disentangle some of these precise issues in relation to cognition. She says it may be better to think of a distinction between cognizers and noncognizers. The former are those that make choices (no matter how limited) through the processing of signs. This allows humans, plants, computing technologies, and animals to be seen as cognizers. The noncognizers are those that do not make choices, such as a rock, an ocean wave, or an atom. These are guided by processes and have no cognizing ability. This does not mean that they don’t have great impact on our world, but an atom reacts according to certain laws and conditions that can be forged (albeit this frequently generates resistance, as many scientists will attest). Focusing on computing technologies, this means that certain choices are made within a limited cognitive ability to do things. They process signs (something Hayles called cybersemiosis in a lecture I attended a couple of months ago) and are therefore cognitive actors in the world. There are transitions from machine code, through assembly and object-oriented programming, to visualisations on a screen, that are not determined by physics. This is why I like to call computers more-than-human: something not wholly distinct from us, but not completely controlled by humans either. Below is a simple graphic. Where the different things go is complicated by the broader ecological debate, such as whether plants are indeed non-human given the impacts of climate change and so on. But that is a different debate.
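
To give a small, concrete illustration of what I mean by sign processing across layers (my own toy example, not one from Hayles), the snippet below takes a single 32-bit pattern and reads it as bytes, as an integer, as text, and as a floating-point number. Which ‘sign’ it becomes depends on the interpretive layer, not on the physics of the bits alone.

```python
# A minimal sketch: the same bit pattern is interpreted as different signs
# at different layers of the computing stack.
import struct

bits = 0b01000001_01000010_01000011_01000100  # one 32-bit pattern

raw_bytes = bits.to_bytes(4, byteorder="big")
as_integer = int.from_bytes(raw_bytes, byteorder="big")
as_text = raw_bytes.decode("ascii")           # the same bytes read as characters
as_float = struct.unpack(">f", raw_bytes)[0]  # the same bytes read as an IEEE-754 float

print(raw_bytes)   # b'ABCD'      : bytes in memory or on a wire
print(as_integer)  # 1094861636   : an arithmetic quantity
print(as_text)     # 'ABCD'       : text rendered on a screen
print(as_float)    # ~12.14       : a floating-point measurement
```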

So, when we look at computing technologies (and their associated software, machine learning techniques, and robots), they are more-than-human cognizers. This means they have their own ability to cognize, which is independent of and different to human cognition. This is critical. What I most appreciate from Hayles is the careful analysis of how cognition is different in computing; I won’t say any more on this, as her book is wonderful and worth the read. However, it means equating humans and robots on the same cognitive plane is impossible: they may seem aligned, yes, but there will be divergences, and ones that we can only start to think of as machine learning increases its cognitive capacities.

Regarding robots more explicitly, what ‘makes’ a robot: is it its materialities (in futuristic human-looking AI, in swarming flies, or in UAVs), or its cognitive abilities that are structured and conditioned by these? Clearly there will be an interdependency, due to the different sensory environments they come into contact with, as there is for all cognitive actors in our world. What we’re talking about with robots is an explicitly geographical question: in what spaces and materialities will these robots cognize? As they work on a different register to us, I think it is worth suspending discussions of human ethics, morals, and laws to go beyond their capacities for good or bad (though they will carry human influence, as much as hackers do with malware). I do not think we should leave these important discussions behind, but how we create laws for a cognition different to ours is currently beyond me. I find it inappropriate to talk of ‘artificial intelligence’ (AI) because of these alternative cognitive abilities: computing technologies will never acquire an intelligence that is human, only one that is parallel, to the side, even if its cognitive capacities exceed those of humans. They work on a register that is neither good nor bad, but one of noise, signal, and anomaly, rather than the normal/abnormal abstraction on which humans tend to work. Can these two alternative logics work together? I argue that they can work in parallel but not together, and this has important ramifications for anyone working in this area.
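
As a crude illustration of that register (my own sketch, not a claim about how any particular robot or detection system is built), consider a machine that flags ‘anomalies’ purely as statistical deviations from a learned baseline. Nothing in the computation corresponds to the human categories of normal/abnormal, let alone good/bad; the numbers and threshold below are invented for the example.

```python
# A toy anomaly detector: the machine's 'judgement' is only distance from a
# learned baseline, carrying none of the moral weight of normal/abnormal.
import statistics


def fit_baseline(observations: list[float]) -> tuple[float, float]:
    """Learn a baseline as the mean and standard deviation of past signal."""
    return statistics.mean(observations), statistics.stdev(observations)


def is_anomaly(value: float, baseline: tuple[float, float], threshold: float = 3.0) -> bool:
    """Flag a value whose z-score exceeds the threshold; ordinary noise stays silent."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold


if __name__ == "__main__":
    # e.g. bytes sent per minute by some process, with ordinary noise
    history = [102.0, 98.5, 110.2, 95.7, 101.3, 99.9, 104.1, 97.6]
    baseline = fit_baseline(history)

    for reading in [103.0, 612.0]:
        label = "anomaly" if is_anomaly(reading, baseline) else "signal/noise"
        print(reading, "->", label)
```

Whether that deviation is ‘malicious’, ‘broken’, or simply ‘interesting’ is a translation humans make afterwards; the parallel registers meet only at that moment of translation.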

Going back to my point on why Robots Should Not be Viruses: it is not on the animate/inanimate distinction that robots (and, more broadly, computing technologies) need to be understood, but on cognition (where the virus loses its ‘liminal’ quality). So, although I have two problems with robots as viruses, the first being that the virus is a loaded term, as I have found in studying both biological and computing viruses, it is the second problem, on cognition, which is the real reason Robots Should Not be Viruses.

*And now back to writing my thesis…*