Seminar – TORCH Oxford – 27 November – A Computational Reckoning: Calculating the Anthropocene

This will be happening in Trinity College’s Sutro Room, 27 November 2-3pm

I’ve got another talk coming up in addition to a talk at King’s College London (7 November), this time in a seminar session at the research network ‘Life Itself in Theory and Practice’, which is now in its second year. I had a great time attending these sessions last year, and I’m very happy to be able to speak on a piece of research I’ve been working on which did not necessarily work its way into my thesis (though it broadly pulls on similar arguments around computation and choice), but which sits in the broader context of the Anthropocene debates (an interest that began in early 2015, when I contributed to a ‘Future Fossils’ exhibit at Society and Space).

I hope to have an interesting conversation from the other side of the spectrum to King’s, and this will hopefully inform how I take forward my arguments around choice and computation. I also get to play around with some media (such as the Netflix film Tau). Find the abstract for the seminar below:

Computation has deepened its integration, actualisation, and sensoriality among, and through, life, environments, and the ‘anthro’ of this era. Various works on geological media (Parikka, 2014) and technologically-mediated futures (Gabrys, 2016) have opened up how computation impacts this so-called Anthropocene. In this seminar, however, we will explore how computation not only unsettles our sense of dominance, but how, from the emergence of the general-purpose electronic digital computer as the first non-organic ‘cogniser’ (Hayles, 2017), computers surrender the last vestiges of human authority over ‘decision-making’ and ‘choice’. This is not restricted to machine learning or ‘AI’ but is at the core of all computation. Unlike much debate around ‘learning data’ and the biases of machine learning – all crucial for social justice – there is something more, something far more nebulous, which cannot be attributed to us. This is the putare, the reckoning of computation: computation is political. What, therefore, is political? How does the supposed ‘calculative machine’ make political choices beyond and through us? Together, we will seek to deepen our exploration to reckon with computation’s choices and decisions that are no longer – or, more precisely perhaps, become exposed as not being – our own. Yet, in delegating, even outsourcing, choices and decisions, are we not committing an act of calculative injustice? Is this not another frontier of neoliberalism, a movement away from human politics and intervention itself? How does this reckon with the huge energy required for such choices and decisions, and the implications for our climate? The Anthropocene is then not only about the demise or dominance of humanity, but about a new actor, one emerging from the 1940s onwards, and the formulation and articulation of a new politics in our midst.

Talk at King’s College London – November 7 – Deciding on Choice: Cybersecurity, Politics and (Cyber)War

I am speaking on November 7 2019 at KCL’s Cyber Security Research Group at their Strand Campus, London. I am going to be substantively developing my doctoral research on malware, complementing it with a broader appreciation of other computational ‘materialities’ and my critiques of ‘artificial intelligence’. In order to do so, I will be blending and weaving Hayles, Whitehead, and Derrida, among others, to establish my thoughts on decision and choice. In particular, this will be orientated to questions of intentionality and politics in discussions of cybersecurity and, more so, of ‘cyberwar’. I am thoroughly looking forward to this event – and to the (inevitable and welcome) critique that I hope comes forth from a variety of perspectives.

There is an Eventbrite link if you wish to attend and it is open to all: https://www.eventbrite.co.uk/e/deciding-on-choice-cybersecurity-politics-and-cyberwar-tickets-74267759869

The abstract for the talk is below (and I will try to keep it as faithful to this as possible on the day):

Artificial intelligence will solve cybersecurity! It is an existential threat! It will make better decisions for us!

That is at least what we are commonly told.

In this talk, I instead unpick why we talk of decision-making in machine learning, its inherent failings, and the implications of this for the future of cybersecurity. To do so, I develop on my doctoral work on malware to explore the intricacies of choice-making in computation as one of its core foundations. I argue that we must see malware – and other computational architectures – as political and active negotiators in the formation of (in)security.

This means our contemporary notions of weapon and (cyber)war sit on shaky foundations in an age which is experiencing an explosion in computational choice. We have to decide on the role of choice in our societies, what makes something ‘political’, and what happens when we have alternative cognitive human and computational registers working together, on parallel and increasingly divergent paths.

Dangerous Gaming: Cyber-Attacks, Air-Strikes and Twitter

Yesterday, a short piece by Jan Silomon and myself went up on E-IR – accessible here. Below I offer some of my own thoughts on the article, and some interpretations that I hope will be taken away and discussed further – on dehumanisation below life, on the role of attribution and ‘quasi-state’ actors in cyber-attacks, and ultimately on an attempt at a ‘postcolonial’ security studies that sees this event as central to our understanding of cybersecurity.

As ever with writing, this was a much more complex piece than either of us initially imagined it would be – it was meant to be a ‘quick’ response to the events that happened in May 2019. In this post, I’ll move into the singular ‘I’ to discuss the article, as this is very much my interpretation and cannot be attributed to both of us.

The IDF tweet of an air-strike against Hamas’ ‘CyberHQ’. See tweet at: https://twitter.com/IDF/status/1125066395010699264?s=20

After the posting of this tweet, it seemed to resonate strongly with what has happened elsewhere with the gamification (and trivialisation) of warfare. However, this had a much more distinctive edge compared to other ‘conventional’ responses. First was the connection to malware, and second, it is *exceptionally* unusual to see a kinetic response to a cyber-attack (at least one that has been publicly attributed).

For me, that is what was exceptional – and this is what the short article tried to navigate. In particular, I express a serious worry over the pathologisation of Hamas as malware (regardless of their actions) – which informs a dehumanisation of certain bodies to a plane of abstraction through computer code that can easily be ‘wiped’ clean. It is also perhaps an ‘interesting’ play from the IDF on how malware was perhaps used by Hamas (in what we can only call an alleged attack). It was Jan(tje) who most clearly picked up on this, and I was happy to explore this side in greater detail. However, this abstraction to malware or other nonhuman ‘things’ is a common trope used to dehumanise the other throughout (post)colonial thought (and indeed racist thought) – as ‘rabid’ or animalistic. As we say in the article, this is unlikely to be intentional. But for me, comparisons to malware push dehumanisation even further – to something below life itself, something that can be ‘created’ by humans and thus easily ‘deleted’ and rendered disposable. I think this is a concerning movement in how cybersecurity and cyber-attacks fit into the broader spheres of security and the rationale for attacks. This is something that is very much under-explored and requires much more thought and conceptualisation (and indeed is something I would like to pick up on further in something I am organising with others in Gießen, Germany next year).

Then there are the more ‘conventional’ IR concerns about the development of norms and theorising how ‘kinetic’ responses to cyber-attacks occur. The IDF’s attack against Hamas’ ‘CyberHQ’ was only the second confirmed kinetic response to a cyber-attack that we know of. This raises further questions (that I hope others will explore) around why both have been against what we call ‘quasi-state’ actors that control an extent of territory (which I think is a core part of the story), and what this means for the broader conceptualisation of how one justifies such an attack and what evidence (or not, in this case) is required. As the recent release of the French government’s strategy indicates, states do not have to explicitly set out nor publish their levels of attribution before launching a (non)kinetic response. This means that when attacks do happen (as they almost inevitably will in the future), any ‘public’ justification is likely to arrive through Twitter or other media. As our ‘case’ shows, this may rely on gamified language itself, as a way to obscure technical details and strategic purpose. This is a dangerous path to follow – as it is likely to lead to further strategies to dehumanise or render ‘others’ permissible to kill or injure. These are only thoughts now – but they may become important parts of a state’s ‘arsenal’ with regard to cyber ‘kinetic’ responses. Because these ‘quasi-state’ actors are unlikely to have a de facto state in ‘control’, and symmetric ‘kinetic’ responses against them are far less likely to occur, they have become inadvertent ‘test-beds’ for such action. The lack of response to the IDF’s tweet is perhaps part of this – in its ‘ordinariness’ it becomes part of a ‘new’ normal – and is one of the reasons why I wanted to write the article.

As in any co-authored piece, there are compromises. One that didn’t make the final cut was a core issue of Eurocentrism in (cyber)security – one raised by many ‘postcolonial’ scholars such as Barkawi and Laffey (2006). That is, those in non-European contexts have been seen as peripheral to the ‘core’ concerns of security: places such as Gaza are treated as peripheral to ‘true’ security concerns. In this short article, I hope (though perhaps unsuccessfully) that we have at least partially reorientated this centre, so that we see this conflict as central to understanding contemporary cybersecurity and international relations. I don’t think we can cast it off as some ‘other’ case that is not central to global (in)securities. Indeed, this is not simply a case of the ‘Israel-Palestine’ conflict (though that is an essential basis). The IDF’s tweet opens a door to understanding the differential politics and powers at play (between Hamas and the IDF), permitting an understanding of the justificatory mechanisms for cyber-attacks and of how Twitter and air-strikes may be used in the future elsewhere.

(Re)Cognition, computation, and war

I haven’t blogged for a long time, especially while doing the last edits to my thesis, which I will hopefully submit in the next few weeks. So, this is a bit of a break for me to think about what I have been doing during my time as a visiting fellow for the past five months at the SFB ‘Dynamics of Security’ between Marburg and Gießen, Germany.

Primarily, I have been continuing with my DPhil thesis, where the time as a visiting fellow in Gießen has given me the time to relax and think. This has delayed the submission but has, I hope, improved the quality of my thought and its application. However, I have not been solely focussed on this (and perhaps my supervisors would not be so happy about this). I have been developing my knowledge and reading on such things as ‘autonomous weapons systems,’ drones, and other computational media to think about some of the core insights from my (auto)ethnographic work with malware in 2017. I have also been part of a reading group here on ‘postcolonial securities’ which will hopefully lead to a conference in spring 2020, and been exploring more deeply the relationships between software transparency, Huawei, and ‘5G’ telecommunications infrastructures. This is where I see some of the work heading, with perhaps the Huawei paper being more of an end in itself, but this could morph as many projects do.

I guess those (if any) who have been keeping up with my work know I have been busy doing an (auto)ethnography of a malware analysis lab, that I’m interested in the relationship between anomalies and abnormalities, and in how pathology and ecology can be thought of with regard to malware. Yet a rather unexpected turn came after I had been really struggling with issues I have with some ‘vitalist’ material and approaches (see Ian Klinke’s paper on ‘Vitalist Temptations‘ and Ingrid Medby’s paper ‘Political geography and language‘, which both attempt some of this critique). Pip Thornton zipped me away down the motorway to Winchester, where we went to see a talk by N. Katherine Hayles at the Winchester School of Art in May 2018. Before this, I had read little of her work (and indeed, I am still working my way through some of her lesser-read works), but what caught me was her particular rendition of computational ‘layers’ (see image below). It really engaged me in dealing with the ‘logics’ of computation, but also with agency, which I think has been relatively underworked in new materialist literature in particular.

N. Katherine Hayles’ talk – ‘Cybersemiosis’ – at the Winchester School of Art, May 2018. Link to event: https://www.southampton.ac.uk/amt/news/events/2018/05/katherine-hayles-guest-talk-on-cybersemiosis.page

As Hayles detailed, signs can be translated across these computational layers. I found this exceptionally provocative, and I use it as a foundation for my interpretation of ‘malware’ and ‘computational’ politics; she has a forthcoming paper on this, so I do not wish to elaborate or steal away from the insights that it will provide. However, I have been using this to really think about the role of agency, intent, and how malware relates to both in computational ecologies. My reading of Hayles’ work – such as her book Unthought (2017) – along with a reading of Yuk Hui’s Recursivity and Contingency, is that computational infrastructures make choices that exceed the explicit intervention of humans (Louise Amoore’s work towards her forthcoming book Cloud Ethics has also been essential in all of this development from day one).

So, if computation can exceed human intent, is this something specific to machine learning, or ‘artificial intelligence’? I don’t think so, and I am now developing a paper on what I see as a more foundational principle of computation, one which has real implications for security studies and for what is (re)cognised as war – drawing on talks I did in Gießen this week and what I am doing in a couple of weeks at the SGRI conference in Trento. The latter looks like an absolutely fantastic panel in which to experiment with some of these thoughts. I am trying to rethink what agency is; perhaps this may prove to be too egregious, but I think it is necessary – and I hope that by doing these talks, I’ll find some inevitable blindspots in my knowledge and reading. You can find my abstract for that particular conference below.

I guess this is where my work is heading, as an extension of the insights from my DPhil thesis – Malware Ecologies: A Politics of Cybersecurity – which I hope will act as a bridge and be supplementary, rather than being singularly transformed into papers (as I do think there is value in it as a stand-alone piece of work). I would put more here if I were not worried about recent incidents in academia of work being taken without due credit. Of course, if you’re interested please contact me via the details on Oxford’s geography website, but I won’t be making these public until I have submitted and have a good idea that a paper will be published, unfortunately. Right, so back to thesis-writing!

Reflections on Data’s Dirty Tricks: The case of ‘value’

Last week, Oxford’s School of Geography and the Environment’s Political Worlds research cluster (thanks for funding the event!) hosted an event that I and several others organised around ‘data’s dirty tricks’. As chair, I had no idea what each panellist was going to speak on, which made it both a challenge and equally thought-provoking – there is more information in a previous blog post. Below is my summary as chair; it is not necessarily a faithful account of the event, and any errors are mine.

We had three panellists: James Ball (journalist and author of Post-Truth: How Bullshit Conquered the World), Dr Lina Dencik (Director of the Data Justice Lab at Cardiff University and co-author of Digital Citizenship in a Datafied Society), and Dr Vidya Narayanan (Director of Research at the Oxford Internet Institute’s Computational Propaganda Project). Each contributed a different dimension – broadly, from the media, from tracking propaganda, and from issues of surveillance and privacy.

What came out most strongly for me is the question: what is value? This was a common theme, whether in James Ball’s account of fake news and how the media attempts to discern what is valuable or not (i.e. more likely to be truthful, deceptive, or indeed an outright lie); in Vidya Narayanan’s discussion of whether Russian bots, for example, had a decisive influence in the ‘Brexit’ process (it seems, contra popular perception, they did not – they only amplified, or in her words led to further polarisation of, the existing views of groups); or in Lina Dencik’s more fundamental critique of the value of global monopolies such as Facebook and whether they provide value at all (her discussion of how these companies steer the debate, and thus shut down political debate of their actions, was exceptional – why do we have to respect their value frameworks at all?). All three provided an excellent overview of different aspects of data’s dirty tricks – and of how, I think, data has become a question of values. In this sense, how are the collection, processing, and decisions made upon data laden with different forms of value – and how do these interact and come into conflict in different spaces? The values of Silicon Valley are different to those of Westminster, and both are different again to how those on Universal Credit see their data being used, tracked, and analysed.

The core ‘surprise’ was that data’s dirty tricks are actually quite tricky to pull off. Cambridge Analytica, the firm that took masses of personal data from Facebook and used it in political campaigning, was actually pretty poor at changing people’s minds and votes – no more so than on Brexit, with perhaps a better influence on the 2016 US Presidential election. Trying to convert selected data into particular forms of value is hard – whether you wish someone to buy a product or vote a certain way in an election. No doubt there are some avenues in which data have been used adversely – but as Ball pointed out, it was the hacking of the US Democratic National Committee (DNC) by the Russian state hacking group CozyBear (APT29), which released emails relating to Hillary Clinton, that more likely swung the election. This is not to say that computing and hacking cannot be influential, but that data’s dirty tricks may not be all they’re cracked up to be. This is reinforced by Narayanan’s work on Russian bots, which showed they are semi-automated and rather poor at directing people in certain ways – only polarising different groups away from one another. But maybe that’s enough, to cause polarisation?

Whether we have ‘strong’ organisations also cropped up, with Dencik arguing that, due to austerity, there has been a weakening of the state’s capacity to counteract the demands of tech companies. This leads governments and other organisations to accept those demands; she cited NHS contracts with Alphabet’s DeepMind that took data with little to no patient consent. It is therefore not only about individual consent over data, but about the collective privacy issues that emerge when data is used in these ways. Yet Ball was keen to emphasise that the mainstream media is actually the main proponent of fake news, not social media, and that it is up to them to do more fact-checking – though the stresses of journalism make this hard. One thing he said was that we should be proud that the UK has the BBC, as this provides a pivotal place in which to challenge inaccuracies and restrict filter bubbles. However, what is distinctive about data is its ability to move in ways previously impossible – and that is the new challenge, one of speed and distribution rather than of distinct difference.

I think the discussion left us with two avenues: one where the contortions of social media data do very little, and another where the political economies (riffing off Dencik) of the use of data are challenging conventional political decision-making. What I find interesting is the recent focus on the former (Facebook, elections, and so on), but little on the everyday issues of Universal Credit, NHS contracts, and outsourcing that are based on the sharing of data without appropriate consent. Hence the dominant focus of data’s dirty tricks should perhaps switch to the latter – to asking what the politics behind the use of data are, rather than how data can influence elections (as it turns out, they do very little). Data’s dirty tricks, to me, seem to be laden with power as much as they ever have been.

Data’s Dirty Tricks – Oxford, 15 November 2018

Data’s dirty tricks: The new spaces of fake news, harvesting, and contortion

As part of the new Dialogues series in the Political Worlds research cluster at the University of Oxford’s School of Geography and the Environment, we are hosting a panel on ‘Data’s Dirty Tricks.’ I have helped organise this event (with Dr Ian Klinke and Dr Daniel Bos) and shall be chairing, with three fantastic speakers. The official link is now available on Oxford Talks.

These are James Ball (journalist and author of Post-Truth: How Bullshit Conquered the World), Dr Lina Dencik (Director of the Data Justice Lab at Cardiff University and co-author of the forthcoming book Digital Citizenship in a Datafied Society), and Dr Vidya Narayanan (a researcher at the Oxford Internet Institute’s Computational Propaganda Project, looking at the impact of AI).

This is being held in the Herbertson Room of the School of Geography and the Environment at 16:30, Thursday 15 November 2018 (Week 6). The blurb is below, and everyone is welcome.

In this panel we invite three individuals from different backgrounds, within and outside of the University of Oxford’s School of Geography and the Environment, to offer their take on data’s dirty tricks. In an age where fake news is on the rise and data is harvested from social media platforms and beyond, what is the impact upon us all? We ask, what are the landscapes of fake news, harvesting and its contortions to conventional democratic spaces? How is it possible to respond, tie together, and understand new forms of geopolitical strategy? How do democracies respond to big data and what should be done? This panel seeks to explore this from people who take alternative approaches and offer insights into how it has impacted us so far, what is being done to tackle it, and what should be done in the future.

Searching for maliciousness using newspapers and Google trends data

So, I thought I would do a quick blog post, as I have reached a block in writing and thought this would help get me back into the mood. A couple of years ago now(!), I did some archival research on how certain malware are consumed and practiced in the media, and tied this to Google Trends data (search and news) to see if there were any correlations with malware events (such as a release, a malware report, and so on) – and whether there was anything interesting in this.

How I did it

As one could expect, there is a strong correlation between news and Google searches. I took articles published in any language using the search terms ‘stuxnet’, ‘conficker’, ‘dridex’, and ‘cryptolocker’. The first three are case studies in my thesis; I have subsequently dropped Cryptolocker. I turned to Nexis (LexisNexis), a newspaper database, to search for these terms in articles globally (which captured publications beyond English, due to the uniqueness of the words, though unfortunately only those indexed in Nexis). In particular, I searched for the terms in the title, byline, and first paragraph, so as to avoid picking up residual stories as much as possible. This required a substantial clean-up of newspaper editions, stories that did not make sense, mistakes in the database, and other issues. Clearly there is a lot of noise in this data, and I took some but not all precautions to keep it to a minimum, as this was not a statistical check but a more qualitative exercise to look for ‘spikes’ in the data.

I used the Google Trends data that are freely available from their website for each malware. Frustratingly, however, these only come out as an index of 0-100 (0 = least, 100 = most) on the quantity of searches. So, I had to scale each malware’s newspaper article counts from Nexis to 0-100 to ensure that each malware was comparable to a certain level, and to make sense of the two different sources I was using. I also did this globally, so that I had as close a comparison to the Nexis data as possible. This produced some interesting results, of which I cover one instance of interest below.
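For illustration, here is a minimal sketch of that rescaling step (this is not my original script, and the weekly counts below are hypothetical):

```python
import pandas as pd

def rescale_to_trends_index(counts: pd.Series) -> pd.Series:
    """Rescale raw article counts to a 0-100 index, Google Trends-style."""
    return (counts / counts.max() * 100).round()

# Hypothetical weekly Nexis article counts for one malware term
nexis_counts = pd.Series(
    [2, 15, 240, 80, 12],
    index=pd.to_datetime(
        ["2008-11-02", "2008-11-09", "2008-11-16", "2008-11-23", "2008-11-30"]
    ),
)

# The peak week becomes 100 and everything else is scaled proportionally,
# making the Nexis series directly comparable with a Trends series.
print(rescale_to_trends_index(nexis_counts))
```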

What does it show

Though I hold little confidence in what this proves, as it was more of a qualitative investigation, I think there are a few points that were clear.

Combined Graph

This takes all the malware terms I was looking at and scales them to 100 across all of the equivalent data points. It shows spikes for the release of each malware: Conficker, Stuxnet, Cryptolocker, and Dridex in succession.


Though this may be a little hard to read, what it shows is how Stuxnet absolutely dominates the other three malware in terms of newspaper content, yet barely registers in Google Trends data when compared to the worm Conficker, which emerged in 2008 and 2009. This suggests that, though we in cybersecurity may have been greatly concerned about Stuxnet, the majority of searches on Google in fact concern the malware that directly impacts people. In addition (and I do have graphs for this), it is clear that newspapers reacted very strongly to the publication of stories in June 2012 – such as an article in the New York Times by David Sanger – reporting that Stuxnet was part of a broader operation by the US and Israel dubbed the ‘Olympic Games’.

Emergence

When we turn to the difference between search and news data, there are some interesting phenomena – something I would like to delve into further – where searches sometimes predate news reporting. This is particularly stark with Cryptolocker and Conficker, suggesting that people may have been going online ahead of the reporting, to search for what to do about ransomware and worms. Hence, focusing in cybersecurity purely on news articles may not accurately reflect how people come into contact with malware and how they engage with media.
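For the curious, a minimal sketch of how such a lead/lag relationship could be checked (the series below are hypothetical, and this simplifies what was, for me, a more qualitative reading of the spikes):

```python
import pandas as pd

# Hypothetical weekly 0-100 indices for one malware term
search = pd.Series([5, 40, 100, 60, 20, 10])  # Google Trends search index
news = pd.Series([0, 5, 60, 100, 50, 15])     # rescaled Nexis article index

# Correlate news against the search series shifted by k weeks:
# a peak at positive k suggests searches lead reporting by k weeks.
for k in range(-2, 3):
    corr = news.corr(search.shift(k))
    print(f"search shifted by {k:+d} weeks: correlation = {corr:.2f}")
```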

Concluding Thoughts

I am not claiming here that I have found something special, but it was interesting to see my assumptions confirmed through this work. I have a lot of data on this, and I may try to put it together as a paper at some point, but I thought it would be nice to share at least a bit of how conventional media (newspapers) and Google Trends data can tell us something about how malware and its effects are consumed and practiced through different media globally, and how this varies over time and according to the materiality and performance of malware.

Reflecting on Engagement: Malware and a Social Scientist

I have recently completed a piece for my centre’s yearbook on my experience researching a malware analysis laboratory. I place it here as a sort of ‘preprint’ for those who may wish to think about (auto)ethnography and the role it has, especially as I try to reflect on the difficulties associated with the process, as well as some of the insights this method offers.

Reflecting on Engagement: Malware and a Social Scientist: click here for the .pdf

It comes at a time when there has been a lot of chatter around Dr Julian Kirchherr’s recent contribution in The Guardian – A PhD should be about improving society, not chasing academic kudos – which asks students not to chase citations and to work with practitioners from day one. No problem with that. However, my own work explores the distance, careful working, and reflection required that doesn’t fit into a consultant-speak ‘efficient’ or ‘lean’ PhD [confession: I was a management consultant at one point]. So, I’m wary of these calls to ‘speed up’ the process. The article identifies worrying trends, yes, around mental health and citational practices, but its resolution seems to be to work ever further within the logic that is causing these issues. I posted a Twitter thread about it:

https://twitter.com/andrewcdwyer/status/1027571730112499717

Robots Should (Not) be Viruses

Last week, I was introduced to a conversation by Pip Thornton (@Pip__T) that was initiated by David Gunkel on Twitter, where he was taking up a suggestion from Noel Sharkey on whether robots should be classified as viruses (see the tweet below). As I have been focusing on ‘computer viruses’ as part of my PhD on malware, this strikes me as something to be avoided, and as I said on Twitter, I would write a blog post (which has morphed into a short draft essay) to go into more depth than 280 characters can provide. So, here, I try to articulate a response to why Robots Should Not be Viruses, and why this would be an inadequate theorisation of robotics (in both their material form and their manifestation in this discussion as ‘intelligent’ agents that use extensive machine learning – note that I avoid saying AI, and I will address this later too). This goes along two axes: first, that viruses are loaded with biological and negative resonance; and second, that they should not be seen as a liminal form between (non)life.

A bit of background

I’ve been working on malicious software, and on what can be considered ‘computer viruses’, throughout my PhD. This has involved seven months of (auto)ethnographic work in which I developed a method of ‘becoming-analyst’ in a malware analysis laboratory, where I actively learnt to analyse and produce detections for an ‘anti-virus’ business. I have been particularly interested in how the cultural construct of the virus has been formed by a confluence of cybernetics and cyberpunk, such that we typically see malware, and its subset viruses, as comparable to biological viruses, with maliciousness somehow embedded. This confluence has informed various works on the viral nature of society, particularly in the sense of widespread propagation and how it ‘infects’ society. Closest to my work, Jussi Parikka, in Digital Contagions (2007), considers how the virus is representative of culture today – viral capitalism, affect, distribution, and so on. However, to aid my understanding of the confluence made present in malware between biology and computing, I was a research assistant on an interdisciplinary project, ‘Good Germs, Bad Germs’ (click here for the website, and the most recent paper from this here), a piece of participatory research to understand how people relate to germs, viruses, and bacteria (in the biological sense) in their kitchens. So, I have researched DNA sequencing techniques, perceptions of viruses (biological/technical), and how these get transmitted outside their formal scientific domains to become active participants in how we relate to the world. This is true of both biological and computing viruses.

It is from these dual experiences that I have attempted to partially understand what could be considered immunological and epidemiological approaches to the study of viruses, whether in practices of hygiene, prevention, or social knowledges about viruses. However, in both I have sought to counter these broadly ‘pathological’ approaches with something that can be considered ecological. This is not an ecology based on an ecosystems perspective, where every part of the ‘web’ of relations somehow has a place and a somewhat stable state. It is one based on a coagulation of different environments and bodies (organic/inorganic/animate/inanimate, and things that go beyond these dualisms) that do not have forms of equilibrium, but emergent states that are part of new ‘occasions’ (to follow Alfred Whitehead) yet maintain some semblance of a system, as they exist within material limits. These can come together to produce new things, but materials cannot have unlimited potentiality – in similar ways to how Deleuze and Guattari’s Body without Organs has a dark side (Culp, 2016). So, ecology allows what could be seen as outside the system to be embraced as a central part of how we understand something – and allows us to consider how ‘vibrant matter’ (Bennett, 2010) plays a role in our lives. This lets me turn to why viruses, in their cultural and real existence, are an important player in our society, and why labelling robots as viruses should be avoided.


Viruses as loaded

Popular culture sees viruses as broadly negative. They are the archetype of the ‘end of the world’ in Hollywood, or something ready to destabilise the world with indiscriminate impact (think of the preparedness of governments against this threat). This is not to say that the threat of certain events is unwarranted. I would quite like to avoid dying in an ‘outbreak’, so procedures in place to capture and render knowable a biological virus are not a bad thing. Yet, after the large malware incidents of 2017 – WannaCry and (Not)Petya (I wrote a short piece on the NHS, environments, and WannaCry here) – there is also a widening recognition of the power of malware (which inherits much of the baggage of previous viruses) to disrupt our worlds. So, here we have powerful actors that many would deem different according to the inorganic/organic distinction. Before I move on to two more detailed points, I want to disregard the good/bad morality that sits beneath the whole debate at the level of malware or the biological virus; it is not helpful, as in the former case it suggests that software has intent, and in the latter that a virus works in the same way in all environments. I have two things to say about how I think we should rethink both:

  1. Ecology: As I said in the previous section – this is where my research has focused. In my research on malware, I argue for what could be called a ‘more-than-pathological’ approach to its study. This means I respect, and think we benefit from, how we treat biological viruses or the detection of malware in our society – these are essential and important tasks that enable society to live well. But we could shift our attention to environments and social relations (human and non-human) as a way to better understand how to work together with viruses (biological and computing). So, for example, if we look at the environment in malware detection (as contextual approaches now do – behavioural detections, reputational systems, machine learning techniques), then the ecology becomes a better way of recognising malware. This is similar to new thinking in epidemiology that looks at how the environments and cultures of different ‘germs’ can come together, which may mean that the existence of a certain pathogen is not necessarily bad – and that excessive cleaning practices, for example, can cause infection.
  2. More-than-Human: You may, by now, be questioning why I combine biological and technical viruses in the same discussion. Clearly, they are not the same; they have different materialities, which allow for different possibilities. However, I refrain from working on an animate/inanimate or organic/inorganic basis. For my core interest, malware may be created by humans (by specialised hackers or through its formation in malware ‘kits’), but that does not mean it is somehow a direct extension of the hacker. Malware works in different environments, and the human cannot and will not be able to understand all the environments it works within. There are also frequently some very restricted ‘choices’ that it may make – taking one path, producing some random IP address, setting some time. In similar ways, biological viruses must ‘choose’ what cell to infiltrate, albeit in very limited ways and without ‘thinking’ in the way we would understand. However, when you compare computing technologies (and malware) and biological viruses, both make choices, even if in the most limited way, compared to the likes of, let’s say, ocean waves and rocks. I will explain in the next section why this is so important, and set out the distinction more fully.

Therefore viruses are loaded with a variety of different understandings of their place in regard to the human – cultural, organisational, hierarchical, within a nature-technical bifurcation, as some form of liminal (non)life. I think moving away from centring the human helps address the moral questions by understanding what these things are prior to ethics. Ethics is a human construct, with human values, and one we (often) cherish – but other cognizers, as I shall explain now, do not work on a register comparable to ours. This has implications for robotics (I shall add the disclaimer here that this is not my research specialism!) – and other computing technologies – meaning that we have to develop a whole different framework that may run parallel to human ethics and law but cannot fool itself into being the same.

Moving beyond the liminal virus

What I have found intriguing in the ongoing Twitter debate are the discussions about control (and often what feels like fear) of robots. Maybe this is one way to approach the topic. But as I have said, I do not wish to engage here in the good/bad part of the (important) debate – as I think it leads us down the path I’ve had trouble disentangling in my own research on the apparent inherent badness of malicious software. Gunkel suggested in an early tweet that Donna Haraway’s cyborg (from the Cyborg Manifesto), and a piece he kindly provided, Resistance is Futile – which is truly wonderful – could offer a comparison of how this could work with robots as viruses. Much of Gunkel’s piece I agree with – especially on understanding the breakdown of the Human and how many of us can be seen as cyborgs. It also exposes the creaky human-ness that holds us together with a plurality of things that augment us. However, I do not think the cyborg performs the same function as the virus does for robots, in the sense Haraway deploys it. The virus holds the baggage of the negative without an understanding of its ecology. It also artificially suggests that there is an important distinction in the debate between animate and inanimate – does this really matter, apart from holding onto ‘life’ as something which organic entities can have? I think the liminality that could be the virus’s strength is also its greatest weakness. I would be open to hearing options for queering the virus to embrace its positive resonances, but I still think my latter point on the (in)animate holds.

Instead, I follow N. Katherine Hayles in her most recent work, Unthought (2017), where she starts to disentangle some of these precise issues in relation to cognition. She says it may be better to think of a distinction between cognizers and noncognizers. The former are those that make choices (no matter how limited) through the processing of signs. This allows humans, plants, computing technologies, and animals to be seen as cognizers. The noncognizers are those that do not make choices, such as a rock, an ocean wave, or an atom. These are guided by processes and have no cognizing ability. This does not mean that they don’t have great impact on our world, but an atom reacts according to certain laws and conditions that can be forged (albeit this frequently generates resistance – as many scientists will attest). Focusing on computing technologies, this means that certain choices are made within a limited cognitive ability to do things. They process signs – something Hayles called cybersemiosis in a lecture I attended a couple of months ago – and are therefore cognitive actors in the world. There are transitions from machine code and assembly, to object-oriented programming, to visualisations on a screen, that are not determined by physics. This is why I like to call computers more-than-human – something not wholly distinct from, but not completely controlled by, humans. Below is a simple graphic. Where the different things go is complicated by the broader ecological debate – such as whether plants are indeed non-human given the impacts of climate change and so on. But that is a different debate.

So, when we see computing technologies (and their associated software, machine learning techniques, robots), they are more-than-human cognizers. This means they have their own ability to cognize, which is independent of and different to human cognition. This is critical. What I most appreciate from Hayles is the careful analysis of how cognition is different in computing – I won’t say any more on this, as her book is wonderful and worth the read. However, it means equating humans and robots on the same cognitive plane is impossible – they may seem aligned, yes, but there will be divergences, ones we can only start to think of as machine learning increases its cognitive capacities.

Regarding robots more explicitly, what ‘makes’ a robot – is it its materialities (in futuristic human-looking AI, in swarming flies, or in UAVs) or its cognitive abilities that are structured and conditioned by these? Clearly there will be an interdependency, due to the different sensory environments they come into contact with, as all cognitive actors in our world have. What we’re talking about with robots is an explicitly geographical question – in what spaces and materialities will these robots cognize? As they work on a different register to us, I think it is worth suspending discussions of human ethics, morals, and laws in order to go beyond capacities of good or bad (though robots will carry human influence, as much as hackers do with malware). I do not think we should leave these important discussions behind, but how we create laws for a cognition different to ours is currently beyond me. I find it inappropriate to talk of ‘artificial intelligence’ (AI) because of these alternative cognitive abilities: computing technologies will never acquire an intelligence that is human, only one that is parallel, to the side, even if its cognitive capacities exceed those of humans. They work on a register that is neither good nor bad, but on noise, signal, and anomaly, rather than on the normal/abnormal abstraction humans tend to work with. Can these two alternative logics work together? I argue that they can work in parallel but not together – and this has important ramifications for anyone working in this area.

Going back to my point on why Robots Should Not be Viruses – it is because robots (and, more broadly, computing technologies) need to be understood not on the animate/inanimate distinction but on cognition (where the virus loses its ‘liminal’ quality). So, though I have two problems with robots as viruses – the first being that the virus is a loaded term, as I have studied in both its biological and computing forms – it is the second problem, on cognition, which is the real reason Robots Should Not be Viruses.

*And now back to writing my thesis…*


Rethinking Space in Cybersecurity + ‘Algorithmic Dimensionality’

After initially volunteering to give a ‘lightning’ talk at the CDT in Cyber Security joint conference (Programme) at Royal Holloway next week (3 & 4 May), I was given the opportunity to speak at greater length, for 30 minutes. This has provided me the breathing space to consider how I have been conceptualising space in cybersecurity – which is likely to form the basis of the last chapter of my thesis and a subsequent paper I wish to develop out of it (and of what I see myself doing post-PhD).

This draws further upon the talk I gave just over a week ago at Transient Topographies at NUI Galway, Ireland. There, I explored the formation of the software ⇄ malware object, and how this relates to concepts of topography and topology in geography and beyond. I also explored how space is thought through in cybersecurity, whether through cartesian representations, cyberspace, or the digital. In my re-engagement with material in software studies and new media, I have intensified the political spheres of my (auto)ethnographic work in a malware analysis lab – namely, how we come to analyse, detect, and thus curate malware (in public opinion, in visualisations, in speeches and geopolitical manoeuvres) as something that affects security and society. This is not something I claim as anything new, by the way: Jussi Parikka has done this in Digital Contagions on malware and ‘viral capitalism’, and there are multiple works on the relation between objects and security.

Instead, I wish to trace, through my own engagements with malware and security organisations, how space has been thought of. This is in no way (yet) a genealogy that would come anywhere near some contributions on space and security – but I see it as a start on that path. In particular, how have computer science, mathematics, cybernetics, cyberpunk literatures, the domestication of computing, and growing national security anticipatory action conditioned spatial understandings of malware? This has both helpful and unhelpful implications for how we consider collective cybersecurity practices – whether through government intervention, paid-for endpoint detection (commonly known as anti-virus), surveillant protection through scanning and monitoring behaviours of malware, attribution, senses of scale, or threat actors – among a variety of others.

Representation of a ‘deep-learning’ neural network. Each layer is connected to the next with different weights, according to each particular application. Some neurons become activated according to certain features.

This working of space in cybersecurity is tied to what I term ‘algorithmic dimensionality‘ in our epoch – where algorithms, and primarily neural networks, produce dimensional relations. What I mean by dimensions are the different layers that come together to determine what follows at each consecutive layer, generating relationships that are non-linear and that can be used for malware detection, facial recognition, and a variety of other potential security applications. These dimensions exist beyond human comprehension; even if we can split individual neuron layers and observe what may be happening, this does not adequately explain how the layers interact. Hence, this is a question that extends beyond, and through, an ethics of the algorithm – see Louise Amoore‘s forthcoming work, which I’m sure will attend to many of these questions – to something that is more-than-human.

We cannot simply see ethics as adjusting bias. As anyone who has written neural networks knows (including myself, for a bit of ‘fun’), weights are required to make them work. Algorithms require bias. Therefore, reducing bias is an incomplete answer to ethics. We need to consider how dimensionality, which geographers can engage with, is the place (even cyberspatial) in which decisions are made. Therefore, auditing algorithms may not be possible without the environments in which dimensionality becomes known and becomes part of the generation of connection and relationality. Simply feeding a black box and observing its outputs does not work in multi-dimensional systems. Without developing this understanding, I believe we are very much lacking. In particular, I see this as a rendition of cyberspace – a concept much maligned as something to be avoided in social science. However, dimensionality shows where real, political striations are formed that affect how people of colour, gender, and sexual orientation become operationalised within the neural network. This has dimensional affects that produce concrete issues – whether in credit ratings, adverts shown, or other variables where discrimination is hard to grasp or prove.
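To make the weights-and-bias point concrete, here is a minimal sketch of a single neural network layer (a toy illustration in the spirit of the networks I mention writing for ‘fun’; the shapes and input are hypothetical). The layer simply cannot compute without its weights and bias terms:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single dense layer: weights and bias terms are structurally required.
# 'Removing bias' in the ethical sense is not the same as removing these
# parameters - without them the layer computes nothing at all.
W = rng.normal(size=(3, 4))  # weights connecting 3 inputs to 4 neurons
b = rng.normal(size=4)       # one bias term per neuron

def layer(x: np.ndarray) -> np.ndarray:
    """One feed-forward layer: weighted sum, plus bias, then ReLU."""
    return np.maximum(0, x @ W + b)

x = np.array([0.2, -1.0, 0.5])  # a hypothetical input feature vector
print(layer(x))  # which neurons 'activate' depends entirely on W and b
```

Stacking such layers is what generates the non-linear, multi-dimensional relations described above – and why observing any single layer in isolation explains so little about how the whole interacts.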

Going back to my talk at Royal Holloway (which may seem far from the neural network), I will attempt to enrol this within the conference theme of the ‘smart city’, and how certain imaginaries (drawing heavily from Gillian Rose’s recent thoughts on her blog) are generative of certain conditions of security. By this I mean: how do the futuristic, clean, bright images of the city obscure and dent alternative ways of living, and of living with difference? The imaginings of space and place, mixed with algorithmic dimensionality, produce affects that must be thought of in any future imagining of the city. This draws not only from the insights of my PhD research on malware ecologies, in which I attempt to open up what cybersecurities are and should include (and part of an article I am currently putting together), but also from feminist and queer perspectives, to question what the technologically-mediated city will ex/in/clude.

I think space has been an undervalued concept in cybersecurity. Space and geography have been reduced to something of the past (due to imaginaries of the battlefield disappearing), something not applicable in an ‘all-connected’ cyberspace. Instead, I wish to challenge this and bring critical analysis to cyberspace, to explore the geographies which are performed and the resultant securities and inequalities that come from them. This allows for a maturity in space and cybersecurity – one that appreciates that space is an intrinsic component of interactions at all levels of thinking. We cannot abandon geography when it is ever more important in the securities of everyday urban areas, in malware analysis, in geopolitics, and even in the multi-dimensionality of the neural network. Hence space is an important, and fundamental, thing to engage with in cybersecurity – one which does not reduce it to the distance between two geometrically distant places.