(Re)Cognition, computation, and war

I haven’t blogged for a long time, especially as I have been making the last edits to my thesis, which I hope to submit in the next few weeks. So, this is a bit of a break for me to think about what I have partially been doing during my time as a visiting fellow for the past five months at the SFB ‘Dynamics of Security’ between Marburg and Gießen, Germany.

Primarily, I have been continuing with my DPhil thesis, where the fellowship in Gießen has given me the space to relax and think. This has delayed the submission but has, I hope, improved the quality of my thought and its application. However, I have not been solely focussed on this (and perhaps my supervisors would not be so happy about that). I have been developing my knowledge and reading on such things as ‘autonomous weapons systems,’ drones, and other computational media to think about some of the core insights from my (auto)ethnographic work with malware in 2017. I have also been part of a reading group here on ‘postcolonial securities,’ which will hopefully lead to a conference in spring 2020, and I have been exploring more deeply the relationships between software transparency, Huawei, and ‘5G’ telecommunications infrastructures. This is where I see some of the work heading, with perhaps the Huawei paper being more of an end in itself, though this could morph as many projects do.

I guess people (if any) who have been keeping up with my work know that I have been busy doing an (auto)ethnography of a malware analyst lab, that I’m interested in the relationship between anomalies and abnormalities, and also in how pathology and ecology can be thought of with regard to malware. Yet, a rather unexpected turn came after I had been really struggling with issues I have with some ‘vitalist’ material and approaches (see Ian Klinke’s paper ‘Vitalist Temptations‘ and Ingrid Medby’s paper ‘Political geography and language‘, which both attempt some of this critique). Pip Thornton zipped me away down the motorway to Winchester, where we went to see a talk by N. Katherine Hayles at the Winchester School of Art in May 2018. Before this, I had read little of her work (and indeed, I am still working my way through some of her lesser-read works), but what caught me was her particular rendition of computational ‘layers’ (see image below). It really engaged me in dealing with the ‘logics’ of computation, but also with agency, which I think has been relatively underworked in new materialist literature in particular.

N. Katherine Hayles’ talk – ‘Cybersemiosis’ – at the Winchester School of Art, May 2018. Link to event: https://www.southampton.ac.uk/amt/news/events/2018/05/katherine-hayles-guest-talk-on-cybersemiosis.page

As Hayles detailed, signs can be translated across these computational layers. I found this exceptionally provocative, and I use it as a foundation for my interpretation of ‘malware’ and ‘computational’ politics. She has a forthcoming paper on this, so I do not wish to elaborate or steal away from the insights that it will provide. However, I have been using this to really think about the role of agency, intent, and how malware relates to both in computational ecologies. The reading I make of Hayles’ work – such as in her book Unthought (2017) – along with a reading of Yuk Hui’s Recursivity and Contingency, is that computational infrastructures make choices that exceed the explicit intervention of humans (Louise Amoore’s work contributing to her forthcoming book Cloud Ethics has also been essential in all of this development from day one).

So, if computation can exceed human intent, is this something specific to machine learning, or ‘artificial intelligence’? I don’t think so, and I am now developing a paper on what I see as a more foundational principle of computation, one which has real implications for security studies and for what is (re)cognised as war. This draws on talks I did in Gießen this week and on what I am doing in a couple of weeks at the SGRI conference in Trento. The latter looks like an absolutely fantastic panel in which to experiment with some of these thoughts. I am trying to rethink what agency is; perhaps this may prove a step too far, but I think it is necessary – and I hope that by doing these talks, I’ll find some inevitable blindspots in my knowledge and reading. You can find below my abstract for that particular conference.

I guess this is where my work is heading, as an extension of insights from my DPhil thesis – Malware Ecologies: A Politics of Cybersecurity – which I hope will act as a bridge, with any papers supplementing it rather than singularly transforming it (as I do think there is value in the thesis as a stand-alone piece of work). I would put more here if I were not worried about recent incidents in academia of work being taken without due credit. Of course, if you’re interested please contact me via the details on Oxford’s geography website, but I won’t be making these public until I have submitted and have a good idea that a paper will be published, unfortunately. Right, so back to thesis-writing!

Reflections on Data’s Dirty Tricks: The case of ‘value’

Last week, Oxford’s School of Geography and the Environment’s Political Worlds research cluster (thanks for funding the event!) hosted an event that several others and I organised around ‘data’s dirty tricks’. As chair, I had no idea what each panellist was going to speak on, which made it both a challenge and equally thought-provoking – more information here on a previous blog post. Below is my summary as chair; it does not claim to be a complete account of the event, and any errors are mine.

We had three panellists: James Ball (Journalist and author of Post-Truth: How Bullshit Conquered the World), Dr Lina Dencik (Director of the Data Justice Lab at Cardiff University and co-author of Digital Citizenship in a Datafied Society), and Dr Vidya Narayanan (Director of Research at the Oxford Internet Institute’s Computational Propaganda Project). Each contributed a different dimension – broadly from the media, from tracking propaganda, and from issues of surveillance and privacy. Some tweets from the event:

What came out most for me was the question: what is value? This was a common theme, whether in James Ball’s understanding of fake news and how the media attempts to discern what is valuable or not (i.e. more likely to be truthful, deceptive, or indeed outright lies). From Vidya Narayanan, a question of value emerged in discussing whether Russian bots, for example, had been able to have a decisive influence in the ‘Brexit’ process (it seems, contra popular perception, they did not; they only amplified – or in her words led to further polarisation of – the existing views of groups). And from Lina Dencik came a more fundamental critique of the value of global monopolies such as Facebook, and whether they provide any value at all (her discussion of how these companies are steering the debate, and are thus shutting down political debate of their actions, was exceptional – why do we have to respect their value frameworks at all?). All three provided an excellent overview of different aspects of data’s dirty tricks – and of how, I think, data has become a question of values. In this sense, how are the collection, processing, and decisions made upon data all laden with different forms of value – and how do these interact and come into conflict in different spaces? The values of Silicon Valley are different to those of Westminster, and even these are different to the values of those on Universal Credit who see their data being used, tracked, and analysed.

The core ‘surprise’ was that data’s dirty tricks are actually quite tricky. Cambridge Analytica, the firm that took masses of personal data from Facebook and used it in political campaigning, was actually pretty poor at changing people’s minds and votes. Nowhere more so than on Brexit, with perhaps a greater influence on the 2016 US presidential election. Trying to convert selected data and derive particular forms of value is hard – whether you wish someone to buy a product or vote a certain way in an election. No doubt, there are perhaps some avenues in which data have been used adversely – but as Ball pointed out, it was the hacking of the US Democratic National Committee (DNC) by the Russian state hacking group CozyBear (APT29), which released emails relating to Hillary Clinton, that more likely swung the election. This is not to say that computing and hacking cannot be influential, but that data’s dirty tricks may not be all they’re cracked up to be. This is reinforced by Narayanan’s work on Russian bots, which showed they are semi-automated and rather poor at directing people in certain ways – only polarising different groups away from one another. But maybe that’s enough, to cause polarisation?

Whether we have ‘strong’ organisations also cropped up, with Dencik arguing that, due to austerity, there had been a weakening of the state’s capacity to counteract the demands of tech companies. This leads governments and other organisations to accept those demands; she cited NHS contracts with Alphabet’s DeepMind that took data with little to no patient consent. Therefore it is not only about individual consent over data, but about thinking through the collective privacy issues that emerge when data is used in these ways. Yet Ball was keen to emphasise that the mainstream media is actually the main proponent of fake news, not social media, and that it is up to them to do more fact-checking, though the stresses of journalism make this hard. One thing he said was that we should be proud that the UK has the BBC – as this provides a pivotal place in which to challenge inaccuracies and restrict filter bubbles… However, what is distinctive about data is its ability to move in ways previously impossible – and that is the new challenge, one of speed and distribution, rather than one of distinct difference.

I think the discussion left us with two avenues: one where the contortions of social media data do very little, and another where the political economies (riffing off Dencik) of the use of data are challenging conventional political decision-making. What I find interesting is the recent focus on the former (Facebook, elections, and so on), but little on the everyday issues of Universal Credit, NHS contracts, and outsourcing that are based on the sharing of data without appropriate consent. Hence, the dominant focus of data’s dirty tricks should perhaps switch to the latter – to asking what the politics behind the use of data are, rather than how data can influence elections (as it turns out, they do very little there). Data’s dirty tricks, to me, seem to be laden with power as much as they ever have been.

Data’s Dirty Tricks – Oxford, 15 November 2018

Data’s dirty tricks: The new spaces of fake news, harvesting, and contortion

As part of the new Dialogues series in the Political Worlds research cluster at the University of Oxford’s School of Geography and the Environment, we are hosting a panel on ‘Data’s Dirty Tricks.’ I have helped organise this event (with Dr Ian Klinke and Dr Daniel Bos) and shall be chairing, with three fantastic speakers. The official link is now available on Oxford Talks.

These are James Ball (Journalist and author of Post-Truth: How Bullshit Conquered the World), Dr Lina Dencik (Director of the Data Justice Lab at Cardiff University and co-author of the forthcoming book Digital Citizenship in a Datafied Society), and Dr Vidya Narayanan (a researcher at the Oxford Internet Institute’s Computational Propaganda Project, looking at the impact of AI).

This is being held in the Herbertson Room of the School of Geography and the Environment at 16:30, Thursday 15 November 2018 (Week 6). The blurb is below, and everyone is welcome.

In this panel we invite three individuals from different backgrounds, within and outside of the University of Oxford’s School of Geography and the Environment, to offer their take on data’s dirty tricks. In an age where fake news is on the rise and data is harvested from social media platforms and beyond, what is the impact upon us all? We ask, what are the landscapes of fake news, harvesting and its contortions to conventional democratic spaces? How is it possible to respond, tie together, and understand new forms of geopolitical strategy? How do democracies respond to big data and what should be done? This panel seeks to explore this from people who take alternative approaches and offer insights into how it has impacted us so far, what is being done to tackle it, and what should be done in the future.

Searching for maliciousness using newspapers and Google trends data

So, I thought I would do a quick blog post, just as I have reached a block in writing and thought this would help get me back into the mood. A couple of years ago now(!), I did some archival research on how certain malware are consumed and practiced in the media, and tied this to Google Trends data (search and news) to see if there were any correlations with malware events (such as a malware’s release, a malware report, and so on), and whether there was anything interesting in this.

How I did it

As one could expect, there is a strong correlation between news and Google searches. I took articles published in any language using the search terms ‘stuxnet’, ‘conficker’, ‘dridex’, and ‘cryptolocker’. The first three are case studies in my thesis; I have subsequently dropped Cryptolocker. I turned to Nexis (LexisNexis), a newspaper database, to search for these terms in articles globally (which caught publications beyond English, due to the uniqueness of the words, though unfortunately only those indexed in Nexis). In particular, I searched for the terms in the title, byline, and first paragraph, to avoid picking up residual stories as much as possible. This still required a substantial clean-up of duplicate newspaper editions, stories that did not make sense, mistakes in the database, and other issues. Clearly there is a lot of noise in this data, and I took some but not all precautions to keep it to a minimum, as this was not a statistical check but a more qualitative activity to look for ‘spikes’ in the data.

I used the Google Trends data freely available from its website for each malware. Frustratingly, however, these only come as an index from 0 to 100 (0 = least, 100 = most) of search volume, not raw counts. So, I scaled each malware’s newspaper article counts from Nexis to 0–100 as well, to ensure that each malware was comparable to a certain level and to make sense of the two different sources I was using. I also did this globally, so that I had as close a comparison to the Nexis data as possible. This produced some interesting results, of which I cover one instance below.
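As a minimal sketch of that rescaling step (with invented counts, not my actual data), in Python with pandas it amounts to little more than:

import pandas as pd

# Hypothetical monthly Nexis article counts for one search term (invented numbers).
articles = pd.Series({"2010-06": 3, "2010-07": 41, "2010-08": 12, "2012-06": 58})

# Google Trends reports an index of 0-100, so rescale the article counts the
# same way: the busiest month becomes 100 and everything else is relative to it.
scaled = (articles / articles.max() * 100).round().astype(int)
print(scaled)   # now directly comparable with the Trends index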

What does it show

Though I hold little confidence in what this proves, as it was more of a qualitative investigation, I think a few points were clear.

Combined Graph

This takes all the malware terms I was looking at and scales them to 0–100 on all of the equivalent data points. It shows spikes for the release of each malware: Conficker, Stuxnet, Cryptolocker, and Dridex in succession.


Though this may be a little hard to read, what it shows is how Stuxnet absolutely dominates the other three malware in terms of newspaper content, yet barely registers on Google Trends data when compared to the worm Conficker, which emerged in 2008 and 2009. This suggests that though we in cybersecurity may have been greatly concerned about Stuxnet, the majority of searches on Google in fact point to the malware that directly impacts people. In addition, though I do have graphs for this, it is clear that newspapers reacted very strongly to the publication of stories in June 2012 – such as an article in the New York Times by David Sanger – reporting that Stuxnet was related to a broader operation by the US and Israel dubbed ‘Olympic Games’.

Emergence

When we turn to the difference between search and news data, there are some interesting phenomena – something I would like to delve into more – where search interest sometimes predates news coverage. This is particularly stark with Cryptolocker and Conficker, suggesting that people may have been going online, ahead of any reporting, to search for what to do about ransomware and worms. Hence, focusing in cybersecurity purely on news articles may not accurately reflect how people come into contact with malware and how they engage with media.
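One rough way to check this lead/lag effect – a sketch with invented weekly values, not the analysis I actually ran – is to correlate the search series against shifted copies of the news series and see where the correlation peaks:

import pandas as pd

idx = pd.date_range("2013-09-01", periods=8, freq="W")
searches = pd.Series([2, 5, 30, 80, 100, 60, 30, 15], index=idx)   # invented values
news = pd.Series([0, 1, 4, 25, 70, 100, 55, 20], index=idx)        # invented values

# Correlate searches at week t with news at week t+k; a peak at k > 0 would
# suggest that search interest leads news coverage rather than follows it.
for k in range(-2, 3):
    print(f"k={k:+d}: r={searches.corr(news.shift(-k)):.2f}")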

Concluding Thoughts

I am not claiming here that I have found something special, but it was interesting to see my assumptions confirmed through this work. I have a lot of data on this, and I may try to put it together as a paper at some point. But I thought it would be nice to share at least a little of how conventional media (newspapers) and Google Trends data can tell us something about how malware and its effects are consumed and practiced through different media globally, and how this varies over time and according to the materiality and performance of malware.

Reflecting on Engagement: Malware and a Social Scientist

I have recently completed a piece for my centre’s yearbook on my experience researching a malware analysis laboratory. I place it here as a sort of ‘preprint’ for those who may wish to think about (auto)ethnography and the role it has – especially as I try to reflect on the difficulties associated with the process, as well as some of the insights this method offers.

Reflecting on Engagement: Malware and a Social Scientist: click here for the .pdf

It comes at a time when there has been a lot of chatter around Dr Julian Kirchherr’s recent contribution in The Guardian – A PhD should be about improving society, not chasing academic kudos – which asks students not to chase citations, and to work with practitioners from day one. No problem with that. However, my own work explores the distance, careful working, and reflection required that doesn’t fit into a consultant-speak ‘efficient’ or ‘lean’ PhD [confession: I was a management consultant at one point]. So, I’m wary of these calls to ‘speed up’ the process. The article identifies worrying trends, yes, around mental health and citational practices, but its resolution seems to be to work ever further within the logic that is causing these issues. I posted a Twitter thread about it:

https://twitter.com/andrewcdwyer/status/1027571730112499717

Robots Should (Not) be Viruses

Last week, I was introduced by Pip Thornton (@Pip__T) to a conversation initiated by David Gunkel on Twitter, where he took up a suggestion from Noel Sharkey on whether robots should be classified as viruses (see the tweet below). As I have been focusing on ‘computer viruses’ as part of my PhD on malware, this strikes me as something to be avoided, and, as I said on Twitter, I would write a blog post (which has morphed into a short draft essay) to go into more depth than 280 characters can provide. So, here, I try to articulate a response on why Robots Should Not be Viruses, and why this would be an inadequate theorisation of robotics (in both their material form and their manifestation in this discussion as ‘intelligent’ agents that use extensive machine learning – note that I avoid saying AI, and I will address this later too). This goes along two axes: first, that viruses are loaded with biological and negative resonance; and second, that they should not be seen as a liminal form between (non)life.

A bit of background

I’ve been working on malicious software, and on what can be considered ‘computer viruses’, throughout my PhD. This has involved seven months of (auto)ethnographic work in which I developed a method of ‘becoming-analyst’ in a malware analysis laboratory, where I actively learnt to analyse and produce detections for an ‘anti-virus’ business. I have been particularly interested in how the cultural construct of the virus has been formed by a confluence of cybernetics and cyberpunk, such that we typically see malware, and its subset of viruses, as comparable to biological viruses – as something with maliciousness somehow embedded. This confluence has fed various works on the viral nature of society, particularly in the sense of widespread propagation and how it ‘infects’ society. Closest to my work, Jussi Parikka, in Digital Contagions (2007), considers how the virus is representative of culture today: viral capitalism, affect, distribution, and so on. However, to aid my understanding of the confluence made present in malware between biology and computing, I was a research assistant on an interdisciplinary project, ‘Good Germs, Bad Germs’ (click here for the website, and the most recent paper from this here), a piece of participatory research to understand how people relate to germs, viruses, and bacteria (in the biological sense) in their kitchens. So, I have researched DNA sequencing techniques, perceptions of viruses (biological/technical), and how these get transmitted outside of their formal scientific domains to become active participants in how we relate to the world. This is true of both biological and computing viruses.

It is from these dual experiences that I have attempted to partially understand what could be considered immunological and epidemiological approaches to the study of viruses, whether in practices of hygiene, prevention, or social knowledges about viruses. In both, however, I have sought to counter these broadly ‘pathological’ approaches with something that can be considered ecological. This is not an ecology based on an ecosystems perspective, where every part of the ‘web’ of relations somehow has a place and a somewhat stable state. It is one based on a coagulation of different environments and bodies (organic/inorganic/animate/inanimate and things that go beyond these dualisms) that do not have forms of equilibrium, but emergent states that are part of new ‘occasions’ (to follow Alfred North Whitehead) while maintaining some semblance of a system, as they exist within material limits. These can come together to produce new things, but materials cannot have unlimited potentiality – in similar ways to how Deleuze and Guattari have a dark side (Culp, 2016) with reference to the Body without Organs. So, ecology allows what could be seen as outside the system to be embraced as a central part of how we understand something – and allows us to consider how ‘vibrant matter’ (Bennett, 2010) plays a role in our lives. This lets me turn to why viruses, in their cultural and real existence, are an important player in our society, and why labelling robots as viruses should be avoided.


Viruses as loaded

Popular culture sees viruses as broadly negative. They are the archetype of the ‘end of the world’ in Hollywood, or something ready to destabilise the world with indiscriminate impact (think of the preparedness of governments against this threat). This is not to say that such threat perceptions are unwarranted. I would quite like to avoid dying in an ‘outbreak’, so procedures in place to capture and render knowable a biological virus are not a bad thing. Yet, after the large malware incidents of 2017 – WannaCry and (Not)Petya (I wrote a short piece on the NHS, environments, and WannaCry here) – there is also a widening recognition of the power of malware (which inherits much of the baggage of previous viruses) to disrupt our worlds. So, here we have powerful actors that many would deem to be different according to the inorganic/organic distinction. Before I move on to two more detailed points, I want to disregard the good/bad morality that sits beneath the whole debate at the level of malware or the biological virus; it is not helpful, as in the former it suggests that software has intent, and in the latter that a virus works in the same way in all environments. I have two things to say about how I think we should rethink both:

  1. Ecology: As I said in the previous section – this is where my research has focused. In my research on malware, I argue for what could be called a ‘more-than-pathological’ approach to its study. This means I respect, and think we benefit from, how we treat biological viruses and the detection of malware in our society – these are essential and important tasks that enable society to live well. But we could shift our attention to environments and social relations (human and non-human) as a way to better understand how to work together with viruses (biological and computing). So, for example, if we look at the environment in malware detection (as contextual approaches now do – behavioural detections, reputational systems, machine learning techniques), then ecology becomes a better way of recognising malware (see the sketch after this list). This is similar to new thinking in epidemiology that looks at how the environments and cultures of different ‘germs’ can come together, which may mean that the existence of a certain pathogen is not necessarily bad – but that excessive cleaning practices, for example, can cause infection.
  2. More-than-Human: You may be, by now, questioning why I combine biological and technical viruses in the same discussion. Clearly, they are not the same; they have different materialities, which allow for different possibilities. However, I refrain from working on an animate/inanimate or organic/inorganic basis. For my core interest: malware may be created by humans (by specialised hackers or through its formation in malware ‘kits’), but that does not mean it is somehow a direct extension of the hacker. Malware works in different environments, and the human cannot and will not be able to understand all the environments it works within. There are also, frequently, some very restricted ‘choices’ that it may make – it takes one path, produces some random IP address, sets some time. In similar ways, biological viruses must ‘choose’ what cell to infiltrate, albeit in very limited ways, without ‘thinking’ as we would understand it. Yet when you compare computing technologies (and malware) with biological viruses, both make choices, even in the most limited way, compared to the likes of, let’s say, ocean waves and rocks. I will explain in the next section why this is so important, and set out the distinction more fully.
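To make the contrast between static and contextual (environmental) detection concrete, here is a toy sketch – entirely my own construction, not any vendor’s engine – of how a verdict can weigh the environment a file appears in, rather than its bytes alone:

from dataclasses import dataclass

@dataclass
class Context:
    signature_match: bool   # static: a known-bad pattern in the file's code itself
    vendor_flags: int       # reputation: how many external feeds flag the file's hash
    prevalence: int         # how many machines worldwide have seen this file
    writes_registry: bool   # behaviour observed during a sandboxed run

def verdict(ctx: Context) -> str:
    score = 0
    if ctx.signature_match:
        score += 3
    score += min(ctx.vendor_flags, 3)
    if ctx.prevalence < 50:         # rare files are treated as more suspect
        score += 2
    if ctx.writes_registry:
        score += 2
    return "detect" if score >= 5 else "clean"

# The environment alone can tip the verdict, with no static signature at all:
print(verdict(Context(False, 4, 12, True)))  # -> detect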

Viruses, therefore, are loaded with a variety of different understandings of their place in regard to the human – cultural, organisational, hierarchical, within a nature-technical bifurcation, as some form of liminal (non)life. I think moving away from centring the human helps address the moral questions, by understanding what these things are prior to ethics. Ethics is a human construct, with human values, and one we (often) cherish – but other cognizers, as I shall explain now, do not work on a register that is comparable to ours. This has implications for robotics (I shall add the disclaimer here that this is not my research specialism!) – and other computing technologies – meaning that we have to develop a whole different framework that may run parallel to human ethics and law but cannot fool itself into being the same.

Moving beyond the liminal virus

What I have found intriguing in the ongoing Twitter debate are the discussions about control (and often what feels like fear) of robots. Maybe this is one way to approach the topic. But as I have said, I do not wish here to engage in the good/bad part of the (important) debate – as I think it leads us down a path I have had trouble disentangling in my own research on the apparent inherent badness of malicious software. Gunkel suggested, in an early tweet and in a piece he kindly provided – Resistance is Futile, which is truly wonderful – that Donna Haraway’s cyborg (from the Cyborg Manifesto) could offer a comparison for how this could work with robots as viruses. Much of Gunkel’s piece I agree with – especially on understanding the breakdown of the Human, and how many of us can be seen as cyborgs. It also exposes the creaky human-ness that holds us together with a plurality of things that augment us. However, I do not think the cyborg, in the sense Haraway deploys it, performs the same function for robots as the virus does. The virus holds the baggage of the negative without an understanding of its ecology. It also artificially suggests that there is an important distinction in the debate between animate and inanimate – does this really matter, apart from holding onto ‘life’ as something which organic entities can have? I think the liminality that could be its strength is also its greatest weakness. I would be open to hearing options for queering the virus to embrace its positive resonances, but I still think my latter point on the (in)animate holds.

Instead, I follow N. Katherine Hayles in her most recent work, Unthought (2017), where she starts to disentangle some of these precise issues in relation to cognition. She says it may be better to think of a distinction between cognizers and noncognizers. The former are those that make choices (no matter how limited) through the processing of signs. This allows humans, plants, computing technologies, and animals to be seen as cognizers. The noncognizers are those that do not make choices, such as a rock, an ocean wave, or an atom. These are guided by processes and have no cognizing ability. This does not mean that they don’t have great impact on our world, but an atom reacts according to certain laws and conditions that can be forged (albeit this frequently generates resistance – as many scientists will attest). Focusing on computing technologies, this means that certain choices are made within a limited cognitive ability to do things. They process signs – something Hayles called cybersemiosis in a lecture I attended a couple of months ago – and are therefore cognitive actors in the world. There are transitions from machine code to assembly, to object-oriented programming, to visualisations on a screen, that are not determined by physics. This is why I like to call computers more-than-human – something not wholly distinct from, but not completely controlled by, humans. Below is a simple graphic. Where the different things go is complicated by the broader ecological debate – such as whether plants are indeed non-human given the impacts of climate change and so on. But that is a different debate.
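As a small, everyday illustration of these layer crossings (my example, not Hayles’), Python will happily show the bytecode layer that sits beneath a function’s source text:

import dis

def add_one(x):
    return x + 1

# The same 'sign' rendered at another layer: the bytecode the interpreter runs,
# itself executed by machine code further down the stack.
dis.dis(add_one)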

So, when we see computing technologies (and their associated software, machine learning techniques, robots), they are more-than-human cognizers. This means they have their own ability to cognize, which is independent of and different to human cognition. This is critical. What I most appreciate from Hayles is the careful analysis of how cognition is different in computing – I won’t say any more on this, as her book is wonderful and worth the read. However, it means equating humans and robots on the same cognitive plane is impossible – they may seem aligned, yes, but there will be divergences, and ones that we can only start to think of as machine learning increases its cognitive capacities.

Regarding robots more explicitly, what ‘makes’ a robot – is it its materialities (in futuristic human-looking AI, in swarming flies, or in UAVs) or its cognitive abilities that are structured and conditioned by these? Clearly there will be an interdependency, due to the different sensory environments they come into contact with, as all cognitive actors in our world have. What we’re talking about with robots is an explicitly geographical question: in what spaces and materialities will these robots cognize? As they work on a different register to us, I think it is worth suspending discussions of human ethics, morals, and laws to go beyond their capacities of good or bad (though they will carry human influence, as much as hackers do with malware). I do not think we should leave these important discussions behind, but how we create laws for a cognition different to ours is currently beyond me. I find it inappropriate to talk of ‘artificial intelligence’ (AI) because of these alternative cognitive abilities: computing technologies will never acquire an intelligence that is human, only one that is parallel, to the side, even if its cognitive capacities exceed those of humans. They work on a register that is neither good nor bad, but on noise, signal, and anomaly, rather than on the normal/abnormal abstraction humans tend to work with. Can these two alternative logics work together? I argue that they can work in parallel but not together – and this has important ramifications for anyone working in this area.

Going back to my point on why Robots Should Not be Viruses – it is because robots (and more broadly computing technologies) need to be understood not on the animate/inanimate distinction, but on cognition (where the virus loses its ‘liminal’ quality). So, though I have two problems with robots as viruses – the first being that the virus is a loaded term, as I have studied in both its biological and computing senses – it is the second problem, on cognition, which is the real reason Robots Should Not be Viruses.

*And now back to writing my thesis…*


Rethinking Space in Cybersecurity + ‘Algorithmic Dimensionality’

After initially volunteering to give a ‘lightning’ talk at the CDT in Cyber Security joint conference (Programme) at Royal Holloway next week (3 & 4 May), I was given the opportunity to speak at greater length, for 30 minutes. This has provided me the breathing space to consider how I have been conceptualising space in cybersecurity – which is likely to form the basis for the last chapter of my thesis and a subsequent paper I wish to develop out of it (and which I see as what I’ll be doing post-PhD).

This draws further upon the talk I gave just over a week ago at Transient Topographies at NUI Galway, Ireland. There, I explored the formation of the software ⇄ malware object, and how this relates to concepts of topography and topology in geography and beyond. I also explored how space is thought through in cybersecurity, whether through cartesian representations, cyberspace, or the digital. In my re-engagement with material in software studies and new media, I have intensified the political spheres of my (auto)ethnographic work in a malware analysis lab: namely, how we come to analyse, detect, and thus curate malware (in public opinion, in visualisations, in speeches and geopolitical manoeuvres) as something that affects security and society. This is not something I claim as anything new, by the way, with Jussi Parikka in Digital Contagions doing this on malware and ‘viral capitalism’, alongside multiple works on the relation between objects and security.

Instead, I wish to trace, through my own engagements with malware and security organisations, how space has been thought of. This is in no way a genealogy – it would come nowhere near some contributions on space and security (yet) – but I see it as a start on that path. In particular, how have computer science, mathematics, cybernetics, cyberpunk literatures, the domestication of computing, and growing national security anticipatory action conditioned spatial understandings of malware? This has both helpful and unhelpful implications for how we consider collective cybersecurity practices – whether by government intervention, paid-for endpoint detection (commonly known as anti-virus), surveillant protection through scanning and monitoring the behaviours of malware, attribution, senses of scale, or threat actors – among a variety of others.

Representation of a ‘deep-learning’ neural network. Each layer is connected to each other with different weights, according to each particular application. Some neurons become activated according to certain features.

This working of space in cybersecurity is tied to what I term ‘algorithmic dimensionality‘ in our epoch – where algorithms, and primarily neural networks, produce dimensional relations. What I mean by dimensions are the different layers that come together to determine what follows at each consecutive layer, generating relationships that are non-linear; these can be used for malware detection, facial recognition, and a variety of other potential security applications. These dimensions exist beyond human comprehension; even if we can individually split neuron layers and observe what may be happening, this does not adequately explain how the layers interact. Hence, this is a question that extends beyond, and through, an ethics of the algorithm – see Louise Amoore‘s forthcoming work, which I’m sure will attend to many of these questions – to something that is more-than-human.

We cannot simply see ethics as a matter of correcting bias. As anyone who has written neural networks knows (I have, for a bit of ‘fun’), weights are required to make them work. Algorithms require bias. Therefore reducing bias is an incomplete answer to ethics. We need to consider how dimensionality, which geographers can engage with, is the place (even cyberspatial) in which decisions are made. Therefore, auditing algorithms may not be possible without the environments in which dimensionality becomes known and becomes part of the generation of connection and relationality. Simply feeding a black box and observing its outputs does not work in multi-dimensional systems. Without developing this understanding, I believe we are very much lacking. In particular, I see this as a rendition of cyberspace – a term that has been much maligned as something to be avoided in social science. However, dimensionality shows where real, political striations are formed that affect how people of colour, and people of different genders and sexual orientations, become operationalised within the neural network. This has dimensional affects that produce concrete issues – in credit ratings, in adverts shown, among other variables where discrimination is hard to grasp or prove.
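A minimal sketch of why bias is structurally baked in (an illustration of this point only, not of any security system): every layer of a network carries weights and a bias vector as trainable parameters, so ‘removing bias’ in this structural sense would leave nothing to learn.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # weights: 4 inputs feeding 3 neurons
b = rng.normal(size=3)        # biases: shift each neuron's activation threshold

def layer(x):
    # One dense layer with a ReLU activation; without W and b there is
    # no learnable function here at all.
    return np.maximum(0.0, x @ W + b)

print(layer(rng.normal(size=4)))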

Going back to my talk at Royal Holloway (which may seem far from the neural network), I will attempt to enrol this within the conference theme of the ‘smart city’, and how certain imaginaries (drawing heavily from Gillian Rose’s recent thoughts on her blog) are generative of certain conditions of security. By this I mean: how do the futuristic, clean, bright images of the city obscure and dent alternative ways of living, and of living with difference? The imaginings of space and place, mixed with algorithmic dimensionality, produce affects that must be thought of in any future imagining of the city. This draws not only on insights from my PhD research on malware ecologies, in which I attempt to open up what cybersecurities are and should include (and part of an article I am currently putting together), but also on feminist and queer perspectives, to question what the technologically-mediated city will ex/in/clude.

I think space has been an undervalued concept in cybersecurity. Space and geography have been reduced to something of the past (due to imaginaries of the battlefield disappearing), and to something not applicable in an ‘all-connected’ cyberspace. Instead, I wish to challenge this and bring critical analysis to cyberspace, to explore the geographies which are performed and the resultant securities and inequalities that come from them. This allows for a maturity in space and cybersecurity – one that appreciates that space is an intrinsic component of interactions at all levels of thinking. We cannot abandon geography when it is ever more important in the securities of everyday urban areas, in malware analysis, in geopolitics, and even in the multi-dimensionality of the neural network. Hence space is an important, and fundamental, thing to engage with in cybersecurity, in a way that does not reduce it to the distance between two geometrically distant places.


Topographies and Automation: Directions of my malicious research

So I’m pleased to say that I’ve been accepted at two pretty different conferences, which each disseminate different parts of my DPhil (PhD) project that I intend to structure into more formal papers after these events.

The first is ‘Transient Topographies‘, which will be held in April in Galway, Ireland, and will help explore the complex word ‘topography’. This is an exceptionally complicated term for me, coming from the dual lineages of human geography and computer science, which frequently have alternative and conflicting interpretations of what the word means. Yet this conference allows me to explore in some depth the reason I became interested in malicious software in the first place: its spatial implications. For me, malicious software allows us to blend the spatial understandings that come with ecology in both more-than-human geographies and the broad study of technologies, or technics. This also allows me to explicitly move on from an awkward experience at last year’s RGS-IBG, when I was just initiating these ideas. I had not given enough thought to this topic, and ultimately the core argument I wished to put forward was lost. To put it clearly: I think our division between human and technic is unsustainable. I hope this will become clearer in this talk.

The second is an exceptionally interesting session put together by Sam Kinsley for this year’s RGS-IBG in Cardiff on the ‘New Geographies of Automation’. This tackles the more historical (and somewhat genealogical) aspect of my work, which I never anticipated I would do in my PhD. However, it is something which captured my attention during my ethnographic work in a malware analysis laboratory: the complex array of tools and technologies with which we have come to sense, capture, analyse, and detect malware through both ‘static’ (conventional) and ‘contextual’ (data-based) approaches. On the whole, these are tools that have primarily automated the way we comprehend malware. Even the most basic rendering of malware requires assumptions that are built into the automation of displaying malware – such as assuming a certain path which the software follows. Yet these fundamental approaches to malware analysis, and the entire endpoint industry, are ever-present in contemporary developments. Though I claim there has always been a more-than-human collective in the analysis of malware, developments in machine learning offer something different. If we look to both Louise Amoore and Luciana Parisi, there is a movement of the decision that is (at least) less linear than we have previously assumed. Thus automation is entering some form of new stage, which means we not only have more-than-human collectives, but now more-than-human decision-making that is non-deterministic.

Both abstracts are below:

Transient Topographies – NUI Galway


The Malicious Transience: a malware ecology

Keep the Computer Clean! Eradicate the Parasite!

Exporting_Malicious
{
    Lab_Check("Software_Form, Tools, Human_Analyst")    ==1
    Flag("machine_learning")                            ==1

    ; Check feeds from vendors and internal reputation scoring
    Rep("vendors")      ==1
    Rep("age")          ==>5000
    Rep("prevalence")   ==>300

    ; Intuition of human analyst to find matching structure
    e89a500500      ; call sub_routine
    48c1e83f        ; shr rax, 3fh
    84c0            ; test al, al
    7414            ; jz short loc_16475f
    bf50384f00      ; mov edi, offset dangerous_thing

    ; Environmental behavioural checks
    Env_Chk("bad_access, registry_change, files_dumped")    ==1 detect

    ; terminate unsuccessfully
    tu

    ; Detect as malware, report infection (Ri) and clean
    :detect
    Ri
    Clean
}

Contemporary malware analysis is pathological. Analysis and detection are infused with a medical-biological discourse that entangles the technological with good and bad, malicious and clean, normal and abnormal. The subjectivities of the human analyst entwine with the technical agencies of big data, the (de)compiler, the Twitter feed. Playing with static renderings and contextual behaviour comes alongside the tensed body, a moment of laughter. Enclosed, sanitised, manipulated ‘sandboxed’ environments simulate those out-there, in the ‘wild’. Logics of analysis and detection are folded into algorithms that (re)define the human analyst. Algorithm and human learn, collaborate, and misunderstand one another. Knowledges are fleeting, temporary, flying away from objective truths. The malicious emerges in the entanglement of human and technology, in this play. The monster, the joker, sneaks through the stoic rigour of mathematics and computation, full of glitch and slippage. Software is ranked, sorted, sifted. It is controlled, secured, cleansed, according to its maliciousness. Yet it is transient; its map forever collapsing. Time and space continue, environments are refigured, maliciousness (re)modified. Drawing on ideas of technē, between communication, art, and technology, this paper queries current pathological logics. It looks at what a broader grasp of more-than-humans through an ecological approach – one that includes the subjectivities, environments, and social relations after Félix Guattari – could achieve.


New Geographies of Automation – RGS-IBG


Automating the laboratory? Folding securities of malware

Folding, weaving, and stitching are crucial to contemporary analyses of malicious software, generated and maintained through the spaces of the malware analysis laboratory. Technologies entangle (past) human analysis, action, and decision into the ‘static’ and ‘contextual’ detections that we depend on today. A large growth in suspect software on which to draw decisions about maliciousness has driven a movement into (seemingly omnipresent) machine learning. Yet this is not the first intermingling of human and technology in malware analysis. It draws on a history of automation, enabling interactions to ‘read’ code in stasis; build knowledges in more-than-human collectives; allow ‘play’ through a monitoring of behaviours in ‘sandboxed’ environments; and draw on big data to develop senses of heuristic reputation scoring.

Though we can draw on past automation to explore how security is folded, made known, rendered as something knowable, contemporary machine learning performs something different. Drawing on Louise Amoore’s recent work on the ethics of the algorithm, this paper queries how points of decision are now more-than-human. Automation has always extended the human, led to loops, and driven alternative ways of living. Yet the contours, the multiple dimensions of the neural net, produce the malware ‘unknowns’ that have become the narrative of the endpoint industry. This paper offers a history of the automation of malware analysis, from static to contextual detection, to ask how automation is changing the way cyberspace becomes secured and made governable; and how automation is not something to be feared, but tempered with the opportunities and challenges of our current epoch.

Strava, Sweat, Security

Wearable tech – the ability to share your fitness stats, suggest routes, follow them, and so on – has been a growing feature of (certain) everyday lifestyles. This ability to share how the body moves, performs, and expresses itself gives many people much satisfaction.

One of the popular methods is Strava, which is primarily used by runners and cyclists to measure performance (maybe to improve), and also to share that information with others publicly. There are individual privacy settings that allow you to control what you do and do not share. All seems good and well. An individual can express their privacy settings in the app: that should be the end of the story. Yet Strava’s temptation is to share. Otherwise we could just use other wearable tech that does not have such a user-friendly sharing ability, and be done with it.

Strava has recently shared a ‘Global Heatmap‘ that amalgamates all of these different individuals sharing their exercise, their sweat, their pace, to Strava, for ‘all’ to access. Hence here we have a collective (yet dividualised – in the Deleuzian sense) body that has been tracked by GPS, the sweat allowing for an expression of joy, an affective disposition to share. This sharing permits a comparison to what a normative Strava body may be, generating yet more sweaty bodies. Yet in the generation of these sweats, security becomes entangled.

This is where privacy and security become entangled in the midst of a fallacy of the individual. The immediate attention after Strava’s release quite literally mapped onto concerns over ‘secret’ locations, such as secret military bases, but also some more trivial ones, such as how many people use it around GCHQ in the UK. This has led to calls for bans for those in military units, to reduce this exposure. However, this does not address how multiple individual choices aggregate in an app where privacy amounts only to anonymisation once data is ‘publicly’ shared. The aggregated picture is ‘fuzzy’, full of traces of the dividual sweaty body. These sweaty bodies are flattened, treated as data points, then recalibrated as though all points are the same. In fact, they are not. Privacy and security are inherently collective.

So, why does this matter? If individuals shared certain information, then they were free to do so, and their individual routes are still not (re)identified. Yet privacy in this conception is based on a western canon of law, where the individual is prime. There is a form of proprietary ownership over our data. This is not something I disagree with, as much in feminist studies informs us of the importance of control over our bodies (and therefore over the affects they produce; the sweat, on our mobile devices, on Strava). Yet there has to be a simultaneous sense of collective privacy at work. In this case, it is rather trivial (unless you run a secret military base). Yet it explodes any myths around the use of ‘big data’. Did Strava make clear it would be putting this data together? Was explicit consent sought beyond the initial terms and conditions? Just because something becomes aggregated does not mean we lose our ability to deny this access.

The use of more internet-connected devices will allow for maps that gain commercial attention, but at the expense of any sense of collective privacy. Previously, only states were able to produce this information, through a bureaucracy, and there have been public, democratically agreed steps to protect it (whether these are effective is another question). Now we live in a world where data is seen as something to be utilised, to be open, freely accessible. Yet we need far more conversation that extends beyond the individual to the collective bodies that we occupy. To do one without the other is a fallacy.

My, and your, sweaty body is not there for anyone to grab.

RGS-IBG18: A Critical Geopolitics of Data? Territories, topologies, atmospherics?

Nick Robinson and I have put together a CfP for the RGS-IBG Conference, which is below. This movement towards geopolitics is becoming far more dominant in my work, and such a session helps to bring together ideas I’ve been having for many years – particularly since the start of my PhD – on data and its relationship to territory (not least after some excellent lecturing in my undergraduate days from Stuart Elden). I look forward to understanding how data is/are constructive of geopolitics, and how this may tie into some of the historical genealogies of the term.

A Critical Geopolitics of Data? Territories, topologies, atmospherics?

Sponsored by the Political Geography Research Group (PolGRG) and the Digital Geographies Working Group (DGWG)

Convened by Andrew Dwyer (University of Oxford) and Nick Robinson (Royal Holloway, University of London)

This session aims to invigorate lively discussions emerging at the intersection between political and digital geographies on the performativities of data and geopolitics. In particular, we grant an attentiveness to the emergent practices, performances, and perturbations of the potentials of the agencies of data. Yet, in concerning ourselves with data, we must not recede from the concrete technologies that assist in technological agencements explicitly partaking in a relationship with data, such as through drone warfare (Gregory, 2011), in cloud computing (Amoore, 2016), or even through the Estonian government’s use of ‘data embassies’ (Robinson and Martin, 2017). Recent literature from critical data studies has supported an acute awareness of the serious and contentious politics of the everyday and the personal, with geographers utilising this in areas such as surveillance and anxiety (Leszczynski, 2015). In recent scholarship, a geopolitical sensitivity has considered the contingent nature of data, the possibilities of risk and the performances of sovereignties (Amoore, 2013), or even certain dichotomies found in data’s ‘mobility’ and ‘circulation’, and their subsequent impact upon governing risk (O’Grady, 2017). Here, we wish to draw together insights from those working on affective and more-than-human approaches in their many guises, to experiment and ‘map’ new trajectories that emulsify with the more conventional concerns of geopolitics, and to express what a critical attention to data brings forth.


In this broadening of scope, how we question, and even attempt, to capture and absorb the complex ‘landscapes’ of data is fluid. How do our current theorisations and trajectories of territory, topology and atmospheres both elude and highlight data? Do we need to move beyond these terms to something new, turn to something else such as ‘volume’ (Elden, 2013) or indeed move away from a ‘vertical geopolitics’ in the tenor of Amoore and Raley (2017)? Do we wish to work and make difficult the complex lineages and histories that our current analogies provide us? Has geopolitical discourse, until now, negated the multitude of powers and affects that data exude? In this session, we invite submissions that offer a more critical introspection of data – its performativity, its affectivities, its more-than-human agencies – upon geopolitical scholarship, and even reconfigure what geopolitics is/are/should be.


Themes might include:

  • Data mobilities
  • Data through, across, and beyond the border
  • Data and its reconfigurations upon notions of sovereignty and territory
  • Vibrancies and new materialism
  • Legalities and net neutrality
  • Affectivities and non-representational approaches
  • Visceralities and broader attentiveness to the body
  • Atmospheres
  • Infrastructures
  • Diplomacy
  • Popular geopolitics


Session information

This session will take the form of multiple paper presentations of 15 minutes, each followed by 5 minutes of questions.

Please send a 200-word abstract, along with name and affiliation, to Andrew (andrew.dwyer@ouce.ox.ac.uk) AND Nick (Nicholas.Robinson.2014@live.rhul.ac.uk) by Monday 5th February 2018.

Further information can be found on the RGS-IBG website here.