Nat O’Grady asked me a while ago to contribute to a new section on ‘Cyber Security’ in the recently released International Encyclopedia of Human Geography (2nd edition). This was an interesting challenge, as relatively little has been written in human geography on cybersecurity – perhaps due to its connection with ‘cyberspace’.
Indeed, when writing something new, there is a lot of scope to outline potential avenues. We started initially with alternative ideas of what ‘cyber security’ is and could be. This tension may be apparent in the text, and I think it is a productive one. We tried to be broad enough to incorporate perspectives not typically associated with cybersecurity, yet contained enough to maintain some ‘bounds’.
Hopefully we have achieved some balance. I’m sure we have missed some perspectives, or not focused on others enough. However, I think it is a good start on understanding what geography’s contributions to cybersecurity are, and what they could be in the future.
Yesterday, a short piece by Jan Silomon and myself went up on E-IR – accessible here. Here I offer some of my own thoughts on the article, and hopefully some interpretations that I would like to be taken away and discussed further – on dehumanisation below life, the role of attribution and ‘quasi-state’ actors in cyber-attacks, and ultimately an attempt to tread a path towards a ‘postcolonial’ security studies that sees this event as central to our understanding of cybersecurity.
As ever with writing, this was a much more complex piece than either of us initially imagined it would be – and it was meant to be a ‘quick’ response to the events of May 2019. In this piece, I’ll move into the singular ‘I’ to discuss the article, as this is very much my interpretation and cannot be attributed to both of us.
After the posting of this tweet, it seemed to resonate so strongly with what has happened elsewhere with the gamification (and trivialisation) of warfare. However, this had a much more distinctive edge compared to other ‘conventional’ responses. First was the connection to malware, and second, it is *exceptionally* unusual to see a kinetic response to a cyber-attack (at least one that has been publicly attributed).
For me, that is what was exceptional – and this is what the short article tried to navigate. In particular, I express a serious worry over the pathologisation of Hamas as malware (regardless of their actions) – which informs a dehumanisation of certain bodies to a plane of abstraction through computer code that can easily be ‘wiped’ clean. It is also perhaps an ‘interesting’ play from the IDF on how malware was perhaps used by Hamas (in what we can only call an alleged attack). It was Jan(tje) who most clearly picked up on this, and I was happy to explore this side in greater detail. However, this abstraction to malware or other nonhuman ‘things’ – as ‘rabid’ or animalistic – is a common trope used to dehumanise the other throughout (post)colonial thought (and indeed racist thought). As we say in the article, this is unlikely to be intentional. But for me, comparisons to malware push dehumanisation even further – to something below life itself; something that can be ‘created’ by humans and thus easily ‘deleted’ and rendered disposable. I think this is a concerning movement in how cybersecurity and cyberattacks fit into the broader spheres of security and the rationale for attacks. This is something that is very much under-explored and requires much more thought and conceptualisation (and is something I would like to pick up on further in something I am organising with others in Gießen, Germany next year).
Then there are the more ‘conventional’ IR concerns on the development of norms and how to theorise ‘kinetic’ responses to cyber-attacks. The IDF’s attack against Hamas’ ‘CyberHQ’ was only the second confirmed kinetic response to a cyber-attack that we know of. This raises further questions (that I hope others will explore) around why both have been against what we call ‘quasi-state’ actors that control an extent of territory (which I think is a core part of the story), and what this means for the broader conceptualisation of how one justifies such an attack and what evidence (or not, in this case) is required. As the recent release of the French government’s strategy indicates, states do not have to explicitly set out nor publish their levels of attribution before launching a (non)kinetic response. This means that if attacks do happen (as they almost inevitably will in the future), any ‘public’ justification is likely to arrive through Twitter or other media. As our ‘case’ shows, this may rely on gamified language itself, as a way to obscure technical details and strategic purpose. This is a dangerous path to follow – as it is likely to lead to further strategies to dehumanise or render ‘others’ as permissible to kill or injure. These are only thoughts now – but they may become important parts of a state’s ‘arsenal’ with regard to cyber ‘kinetic’ responses. Because ‘quasi-state’ actors are unlikely to have a de facto state in ‘control’, symmetric ‘kinetic’ responses against them are far less likely to occur, and they have thus become inadvertent ‘test-beds’ for such action. The lack of response to the IDF’s tweet is perhaps part of this – in its ‘ordinariness’ – meaning that this becomes part of a ‘new’ normal, and it is one of the reasons why I wanted to write the article.
As in any co-authored piece there are compromises. One that didn’t make the final cut was a core issue of Eurocentrism in (cyber)security – one that has been raised by many ‘postcolonial’ scholars such as Barkawi and Laffey (2006). That is, those in non-European contexts have been seen as peripheral to the ‘core’ concerns of security: places such as Gaza are seen as peripheral to ‘true’ security concerns. In this short article, I hope (though perhaps unsuccessfully) that we have at least partially reoriented this centre, so that we see this conflict as central to understanding contemporary cybersecurity and international relations. I don’t think we can cast it off as some form of ‘other’ case that is not central to global (in)securities. Indeed, this is not simply a case of the ‘Israel-Palestine’ conflict (though this is an essential basis). The IDF’s tweet opens a door to understanding the differential politics and powers at play (between Hamas and the IDF), permitting an understanding of the justificatory mechanisms for cyber-attacks and how Twitter and air-strikes may be used in the future elsewhere.
I haven’t blogged for a long time, especially with doing the last edits to my thesis, which I will hopefully submit in the next few weeks. So, this is a bit of a break for me to think about what I have partially been doing in my time as a visiting fellow for the past five months at the SFB ‘Dynamics of Security’ between Marburg and Gießen, Germany.
Primarily, I have been continuing with my DPhil thesis, where the time as a visiting fellow in Gießen has given me the time to relax and think. This has delayed the submission but has, I hope, improved the quality of my thought and its application. However, I have not been solely focussed on this (and perhaps my supervisors would not be so happy about this). I have been developing my knowledge and reading on such things as ‘autonomous weapons systems,’ drones, and other computational media to think about some of the core insights from my (auto)ethnographic work with malware in 2017. I have also been part of a reading group here on ‘postcolonial securities’ which will hopefully lead to a conference in spring 2020, and been exploring more deeply the relationships between software transparency, Huawei, and ‘5G’ telecommunications infrastructures. This is where I see some of the work heading, with perhaps the Huawei paper being more of an end in itself, but this could morph as many projects do.
I guess people (if any) who have been keeping up with my work know I have been busy doing an (auto)ethnography of a malware analysis lab, that I’m interested in the relationship between anomalies and abnormalities, and in how pathology and ecology can be thought of with regard to malware. Yet a rather unexpected turn came after I had been really struggling with issues I have with some ‘vitalist’ material and approaches (see Ian Klinke’s paper on ‘Vitalist Temptations‘ and Ingrid Medby’s paper ‘Political geography and language‘, both of which attempt some of this critique). Pip Thornton zipped me away down the motorway to Winchester, where we went to see a talk by N. Katherine Hayles at the Winchester School of Art in May 2018. Before this, I had read little of her work (and indeed, I am still working my way through some of her lesser-read works), but what caught me was this particular rendition of computational ‘layers’ (see image below). It really engaged me in dealing with the ‘logics’ of computation, but also agency, which I think has been relatively underworked in new materialist literature in particular.
As Hayles detailed, signs can be translated over these computational layers. I found this exceptionally provocative and I use it as a foundation for my interpretation of ‘malware’ and ‘computational’ politics; she will have a forthcoming paper on this, so I do not wish to elaborate or steal away from the insights that it will provide. However, I have been using this to really think about the role of agency, intent, and how malware relates to both in computational ecologies. The reading I make of Hayles’s work – such as in her book Unthought (2017) – along with a reading of Yuk Hui’s Recursivity and Contingency, is that computational infrastructures make choices that exceed the explicit intervention of humans (Louise Amoore’s work contributing to her forthcoming book Cloud Ethics has also been essential in all of this development from day one).
So, if computation can exceed human intent, is this something specific to machine learning, or ‘artificial intelligence’? I don’t think so, and I am now developing a paper on what I see as a more foundational principle of computation, one which has real implications for security studies and what is (re)cognised as war – drawing on talks I did in Gießen this week and what I am doing in a couple of weeks at the SGRI conference in Trento. The latter looks like an absolutely fantastic panel in which to experiment with some of these thoughts. I am trying to rethink what agency is; perhaps this may prove too egregious, but I think it is necessary – and I hope that by doing these talks, I’ll find some inevitable blindspots in my knowledge and reading. You can find below my abstract for that particular conference.
I guess this is where my work is heading, as an extension of the insights from my DPhil thesis – Malware Ecologies: A Politics of Cybersecurity – which I hope will act as a bridge and be supplementary to the papers, rather than being singularly transformed into them (as I do think there is value in the thesis as a stand-alone piece of work). I would put more here if I were not worried about recent incidents in academia of work being taken without due credit. Of course, if you’re interested please contact me via the details on Oxford’s geography website, but I won’t be making these public until I have submitted and have a good idea that a paper will be published, unfortunately. Right, so back to thesis-writing!
So, I thought I would do a quick blog post, just as I have reached a block in writing and thought this would help get me back into the mood. A couple of years ago now(!), I did some archival research on how certain malware are consumed and practiced in the media, and tied this to Google Trends data (search and news) to see if there were any correlations with malware events (such as a malware’s release, a malware report, and so on) and whether there was anything interesting in this.
How I did it
As one could expect, there is a strong correlation between news and Google searches. I took articles published in any language using the search terms ‘stuxnet’, ‘conficker’, ‘dridex’, and ‘cryptolocker’. The first three are case studies in my thesis, and I subsequently dropped cryptolocker. I turned to Nexis (LexisNexis), a newspaper database, to search for these terms in articles globally (which captured publications beyond English, due to the uniqueness of the words, but unfortunately only those captured in Nexis). In particular, I searched for the terms in the title, byline, and first paragraph so as to avoid picking up residual stories as much as possible. This required a substantial clean-up of newspaper editions, stories that did not make sense, mistakes in the database, and other issues. Clearly there is a lot of noise in this data, and I took some but not all precautions to keep this to a minimum, as this was not a statistical check but a more qualitative exercise to see any form of ‘spikes’ in the data.
I used the Google Trends data freely available from their website for each malware. Frustratingly, however, these only come out as a ratio of 0-100 (0 = least, 100 = most) of the quantity of searches. So, I had to scale each malware’s newspaper article counts from Nexis to 0-100 to ensure that each malware was comparable to a certain level, and to make sense of the two different sources I was using for this data. I also did this globally so that I had as close a comparison to the Nexis data as possible. This produced some interesting results, of which I cover one incidence of interest below.
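The scaling step can be sketched in a few lines; this is a minimal illustration of the approach rather than the actual script I used, and the monthly counts below are hypothetical numbers, not real Nexis data.

```python
def scale_to_trends(counts):
    """Scale raw article counts to 0-100, matching Google Trends' ratio format.

    The busiest period is assigned 100 and everything else is proportional,
    so the Nexis and Trends series can be plotted on a shared axis.
    """
    peak = max(counts)
    if peak == 0:
        return [0] * len(counts)
    return [round(100 * c / peak) for c in counts]

# Hypothetical monthly Nexis article counts for one malware term
monthly_counts = [2, 5, 40, 12, 3, 0]
print(scale_to_trends(monthly_counts))  # → [5, 12, 100, 30, 8, 0]
```

Note that this makes each series comparable only in shape, not in volume: a term with ten articles at its peak and one with ten thousand both top out at 100, which is exactly the limitation of the Trends format itself.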
What does it show
Though I hold little confidence in what this proves, as it was more of a qualitative investigation, I think there are a few points that were clear.
This takes all the malware terms I was looking at and scales them to 0-100 across all of the equivalent data points. This shows spikes for the release of each malware: Conficker, Stuxnet, Cryptolocker, and Dridex in succession.
Though this may be a little hard to read, what it shows is how Stuxnet absolutely dominates the other three malware in terms of newspaper content; however, it barely registers on the Google Trends data when compared to the worm Conficker, which emerged in 2008 and 2009. This suggests that though we in cybersecurity may have been greatly concerned about Stuxnet, the majority of searches on Google in fact point to malware which directly impacts people. In addition, though I do have graphs for this, it is clear that newspapers reacted very strongly to the publication of stories in June 2012 – such as an article in the New York Times by David Sanger – reporting that Stuxnet was related to a broader operation by the US and Israel dubbed the ‘Olympic Games’.
When we turn to the difference between search and news data, there are some interesting phenomena – something I would like to delve into more – where searches sometimes predate news coverage. This is particularly stark with Cryptolocker and Conficker, suggesting that people may have been going online ahead of the reporting to search for what to do about ransomware and worms. Hence, focusing in cybersecurity purely on news articles may not accurately reflect how people come into contact with malware and how they engage with media.
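One simple way to make the ‘searches predate news’ observation concrete is a lagged correlation between the two scaled series. This is a sketch of that idea, not the method I actually used; the weekly series below are made up to show the mechanics.

```python
def best_lag(search, news, max_lag=4):
    """Return the lag (in periods) at which `search` best correlates with `news`.

    A positive lag means search interest peaks *before* news coverage,
    i.e. people were googling the malware ahead of the reporting.
    Pearson correlation is computed by hand to stay dependency-free.
    """
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    best, best_r = 0, -2.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:  # pair search[t] with news[t + lag]: search leads news
            xs, ys = search[:len(search) - lag], news[lag:]
        else:         # news leads search instead
            xs, ys = search[-lag:], news[:len(news) + lag]
        if len(xs) < 3:  # too few overlapping points to correlate
            continue
        r = pearson(xs, ys)
        if r > best_r:
            best, best_r = lag, r
    return best

# Hypothetical scaled (0-100) weekly series: searches spike two weeks early
search = [0, 10, 100, 40, 5, 0, 0, 0]
news = [0, 0, 0, 10, 100, 40, 5, 0]
print(best_lag(search, news))  # → 2
```

Even this toy version shows why the qualitative reading matters: a single ‘best’ lag flattens out exactly the kind of event-by-event differences (release, report, outbreak) that the spikes reveal.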
It is not that I am claiming to have found something special here, but it was interesting to see my assumptions confirmed through this work. I have a lot of data on this, and I may try to put it together as a paper at some point, but I thought it would be nice to share at least a bit about how conventional media (newspapers) and Google Trends data can tell us something about how malware and its effects are consumed and practiced through different media globally, and how this varies over time and according to the materiality and performance of malware.
After initially volunteering to give a ‘lightning’ talk at the CDT in Cyber Security joint conference (Programme) at Royal Holloway next week (3 & 4 May), I was given the opportunity to speak at greater length for 30 minutes. This has provided me with the breathing space to consider how I have been conceptualising space in cybersecurity – which is likely to form the basis for the last chapter of my thesis and a subsequent paper I wish to develop out of this (and what I see myself doing post-PhD).
This draws further upon the talk I gave just over a week ago at Transient Topographies at NUI Galway, Ireland. There, I explored the formation of the software ⇄ malware object, and how this relates to concepts of topography and topology in geography and beyond. I also explored how space is thought through in cybersecurity; whether through cartesian representations, cyberspace, or the digital. In my re-engagement with material in software studies and new media, I have intensified the political spheres of my (auto)ethnographic work in a malware analysis lab. Namely, how we come to analyse, detect, and thus curate malware (in public opinion, in visualisations, in speeches and geopolitical manoeuvres) as something that affects security and society. This is not something I claim as anything new, by the way, with Jussi Parikka in Digital Contagions doing this on malware and ‘viral capitalism’, along with the multiple works on the relation between objects and security.
Instead, I wish to trace, through my own engagements with malware and security organisations, how space has been thought of. This is in no way a genealogy that comes anywhere near some contributions on space and security (yet) – but I see it as a start on this path. In particular, how have computer science, mathematics, cybernetics, cyberpunk literatures, the domestication of computing, and growing national security anticipatory action conditioned spatial understandings of malware? This has both helpful and unhelpful implications for how we consider collective cybersecurity practices – whether by government intervention, paid-for endpoint detection (commonly known as anti-virus) offering surveillant protection through scanning and monitoring the behaviours of malware, attribution, senses of scale, or threat actors – among a variety of others.
This working of space in cybersecurity is tied to what I term ‘algorithmic dimensionality‘ in our epoch – where algorithms, and primarily neural networks, produce dimensional relations. What I mean by dimensions are the different layers that come together to determine what to follow at each consecutive layer, generating relationships that are non-linear; these can be used for malware detection, facial recognition, and a variety of other potential security applications. These dimensions exist beyond human comprehension; even if we can individually split neuron layers and observe what may be happening, this does not adequately explain how the layers interact. Hence, this is a question that extends beyond, and through, an ethics of the algorithm – see Louise Amoore‘s forthcoming work, which I’m sure will attend to many of these questions – to something that is more-than-human.
We cannot simply see ethics as adjusting bias. As anyone who has written neural networks knows (including myself, for a bit of ‘fun’), weights are required to make them work. Algorithms require bias. Therefore reducing bias is an incomplete answer to ethics. We need to consider how dimensionality, which geographers can engage with, is the place (even cyberspatial) in which decisions are made. Auditing algorithms may therefore not be possible without the environments in which dimensionality becomes known and becomes part of the generation of connection and relationality. Simply feeding a black box and observing its outputs does not work in multi-dimensional systems. Without developing this understanding, I believe we are very much lacking. In particular, I see this as a rendition of cyberspace – something that has been much vented as a concept to be avoided in social science. However, dimensionality shows where real, political striations are formed that affect how people of colour, gender, and sexual orientation become operationalised within the neural network. This has dimensional effects that produce concrete issues; whether in credit ratings, adverts shown, or other variables where discrimination is hard to grasp or prove.
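The claim that ‘algorithms require bias’ is literal at the level of code. A single artificial neuron, sketched below with illustrative (not trained) numbers, carries a bias term as a structural component: remove it and the decision boundary is forced through the origin, changing what the network can express at all.

```python
def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum plus bias, through a step.

    The bias shifts the decision boundary; it is not an optional
    impurity to be audited away but part of how the unit computes.
    """
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

# Toy AND gate: fires only when both inputs are 1.
# These weights and bias are illustrative, not from any real model.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), neuron([a, b], weights=[1.0, 1.0], bias=-1.5))
```

This is of course the trivial end of the problem – the statistical sense of ‘bias’ in training data is a different matter – but it illustrates why ‘reducing bias’ cannot exhaust the ethical question: the weighting that striates outcomes is constitutive of the computation, not incidental to it.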
Going back to my talk at Royal Holloway (which may seem far from the neural network), I will attempt to enrol this within the conference theme of the ‘smart city’, and how certain imaginaries (drawing heavily from Gillian Rose’s recent thoughts on her blog) are generative of certain conditions of security. By this, how do the futuristic, clean, bright images of the city obscure and dent alternative ways of living and living with difference? The imaginings of space and place, mixed with algorithmic dimensionality, produce affects that must be thought of in any future imagining of the city. This draws not only on my insight from my PhD research on malware ecologies, in which I attempt to open up what cybersecurities are and should include (and part of an article I am currently putting together), but also includes feminist and queer perspectives to question what the technologically-mediated city will ex/in/clude.
I think space has been an undervalued concept in cybersecurity. Space and geography have been reduced to something of the past (due to imaginaries of the battlefield disappearing), and something that is not applicable in an ‘all-connected’ cyberspace. Instead, I wish to challenge this and bring critical analysis to cyberspace to explore the geographies which are performed and the resultant securities and inequalities that come from them. This allows for a maturity in space and cybersecurity – one that appreciates that space is an intrinsic component of interactions at all levels of thinking. We cannot abandon geography when it is ever more important in the securities of everyday urban areas, in malware analysis, in geopolitics, and even in the multi-dimensionality of the neural network. Hence space is an important, and fundamental, thing to engage with in cybersecurity – one that should not be reduced to the distance between two geometrically distant places.
Wearable tech – the ability to share your fitness stats, suggest routes, follow them, and so on – has been a growing feature of (certain) everyday lifestyles. This ability to share how the body moves, performs, and expresses itself gives many people much satisfaction.
One of the popular methods is Strava, which is primarily used by runners and cyclists to measure performance (maybe to improve), and also to share that information with others publicly. There are individual privacy settings that allow you to control what you do and do not share. All seems good and well: an individual can express their privacy settings in the app, and that should be the end of the story. Yet Strava’s temptation is to share. Otherwise we could just use other wearable tech that does not have such a user-friendly sharing ability, and be done with it.
Strava has recently shared a ‘Global Heatmap‘ that amalgamates all these different individuals sharing their exercise, their sweat, their pace, to Strava for ‘all’ to access. Hence here we have a collective (yet dividualised, in the Deleuzian sense) body that has been tracked by GPS, the sweat allowing for an expression of joy, an affective disposition to share. This sharing allows for a comparison to what a normative Strava body may be, allowing for a further generation of sweaty bodies. Yet in the generation of these sweats, security becomes entangled.
This is where privacy and security become entangled in the midst of a fallacy of the individual. The immediate attention on Strava’s release quite literally maps to concerns over ‘secret’ locations, such as secret military bases, but also to some more trivial ones, such as how many people use it around GCHQ in the UK. This has led to calls for bans for those in military units to reduce this exposure. However, this does not address how privacy, in the aggregation of multiple individual choices, is reduced to the anonymisation of what each person ‘publicly’ shares through the app. This aggregated picture is ‘fuzzy’, full of traces of the dividual sweaty body. These sweaty bodies are flattened, treated as data points, then recalibrated as though all points are the same. In fact, they are not. Privacy and security are inherently collective.
So, why does this matter? If individuals shared certain information then they are free to do so, and their individual routes are still not (re)identified. Yet privacy in this conception is based on a western canon of law, where the individual is prime. There is a form of proprietary sense of ownership over our data. This is not something I disagree with, as much in feminist studies informs us of the importance of control over our bodies (and therefore the affects they produce; the sweat, on our mobile devices, on Strava). Yet there has to be a simultaneous sense of the collective privacy at work. In this case, it is rather trivial (unless you run a secret military base). Yet it explodes any myths around the use of ‘big data’. Did Strava make clear it would be putting together this data? Was explicit consent sought beyond the initial terms and conditions? Just because something becomes aggregated does not mean we lose our ability to deny this access.
The use of more internet-connected devices will allow for maps that gain commercial attention, but at the expense of any sense of collective privacy. Previously, only states were able to produce this information, through a bureaucracy, and there have been public, democratically-agreed steps to protect it (whether these are effective is another question). Now we live in a world where data is seen as something to be utilised, to be open, freely accessible. Yet we need far more conversation that extends beyond the individual to the collective bodies that we occupy. To do one without the other is a fallacy.
My, and your, sweaty body is not there for anyone to grab.
So, this year will hopefully be the one where I gain a driving licence, pass the DPhil, and at least have a clear idea of what I intend to do in 2019 (if not already doing it).
One of my resolutions is to start writing blog posts alongside the writing of my thesis chapters, as a way to start digesting the astounding amount of stuff I have collected over the past 27 months. However, I am aware of recent turns and troubles of releasing information too early, only for it then to be published by someone else. Although it is rather depressing that this happens – it does, and one should be aware of it (especially as an early career researcher). This will also tie in with another resolution to keep a diary of my broader life alongside the ‘day-job’. I kept one throughout all of my autoethnographic research, and it has been equally enthralling and disturbing to read of the banality of one’s life. Yet the usefulness of keeping a record of what I believe will be an interesting 2018 warrants, I believe, the time and attention it requires.
I’m not exactly sure how these will play out, but this is so that I can express some thoughts that may never go into papers in their full form, or may inform other bits that I am writing. Because, in all honesty, much work never rises to the point of publication – not because of quality, but often because of time and interest. Then there is the more cynical reason that I need to be able to demonstrate thinking beyond malware, to show my versatility as a researcher. These two objectives are complementary, and can be enjoyable, even if the thrust of one goes against what I would like to admit to myself. So I look forward to the rest of this year, and intend to leave some insightful comments and points along the way for anyone interested. I think my January post will likely revolve around something on the ‘Ecological Thought’ and its implications for cybersecurity.
This week I attended the #NATOtalk16 conference held at the (infamous) Hotel Adlon, along with a pre-session discussion with the youth arm of the German Atlantic Association (YATA). It was a great few days with a dedicated ‘cyber security’ group. Recommendations were written by all participants (available here), including my short paper on the future of NATO and deterring digital ‘warriors’ (a term I don’t like, but worked with). This is also shown below.
It was interesting to see NATO respond to the election of Trump and consider the future of Germany within Europe in the context of Brexit. The event was in partnership with the British Embassy, Berlin, and there was a clear emphasis that Brexit should not damage the relationship with NATO. Meanwhile, Germany’s new security doctrine, published in the Weißbuch (White Book, German/English), takes a more assertive stance on Germany’s positioning, with a growth in spending and, in particular, citing Russia as a core concern. This has provided me with some interesting background and context for my thesis project on malware ecology, and how this is being thought of in more international relations circles.
Fluid Operations: NATO and Cyberdeterrence
Multiple actors, lack of attribution, and hybrid action are all part of modern warfare. The growth of the internet and other digital systems has rapidly led to cyber security becoming a serious concern, from individual users to (inter)national security. This short piece examines NATO and its ability to deter actors who attempt to subvert its collective security. It follows an analysis of current difficulties in deterrence, namely difficulties with attribution, low engagement barriers, and multiple actors. These concerns are then folded into avenues for further exploration in defensive and offensive operations, and what blended or hybrid responses may entail. An exploration of these issues concludes that the distinction between defensive and offensive operations in cyberspace is fluid, where ‘active defence’ utilising situational awareness provides the best deterrence against most actors.
Alertness to cyber security sharpened with attacks against Estonia in 2007. Although never fully attributed to Russia, it exposed the potential vulnerabilities that existed among allies as dependence on assets in cyberspace has grown. Additional events in Georgia in 2008 and more recently in Ukraine have demonstrated how cyberattacks can be blended in forms of hybrid attacks that aim to destabilise states before more conventional incursions occur. NATO has responded through developing a coordinated cyber security apparatus and the formalisation of doctrine that declares that international norms of engagement apply to cyberspace.
Yet, in comparison to previous decades, there has been considerable difficulty in engaging in forms of deterrence. I identify three of the most pressing:
Attribution: Due to the ability to mask location and to lay decoys to the origin of an attack, conventional forms of deterrence are often not applicable.
Low Engagement Barrier: The pervasiveness of digital systems across allied and non-allied states increases the vectors and opportunities for low-skilled actors to engage.
Multiple Actors: Due to the low engagement barrier, it is not only states that have interest in subverting NATO, but also criminals, terrorists, and hired mercenaries that may sell their services to the highest bidder.
Current policy options
We often divide defensive and offensive capacity, which enables clear doctrinal policy, but is of little use to cyber security strategy. NATO is responsible only for its own internal systems and ensuring that these integrate with allied systems. Yet, it currently has no offensive capacity of its own apart from those developed by allies.
Defensive: In all scenarios, defence of critical systems provides the best deterrence against actors in cyberspace. This includes everyday management of critical national infrastructure, ensuring good education, and the monitoring of networks, along with other recognised good cyber security ‘hygiene’. My PhD research on malware ecology demonstrates that maintaining a good cyber security posture often prevents many subversions at entry points to the system. Yet due to interdependencies between systems, and between governments and business, there will always be deficiencies in cyber security, including the opening up of previously unknown vulnerabilities such as zero-day exploits.
Offensive: Discussions of offensive capacity in NATO often focus on the trigger for Article 5, and on what constitutes an armed cyberattack. This frequently descends into theoretical debate over ‘cyber weapons’, which I will not go into here. Setting that aside, the options are either a symmetric or an asymmetric (conventional) response. The former is often difficult given the time needed to develop a sophisticated response after an attack. The latter could be considered disproportionate, but it is an essential part of the deterrence arsenal.
There is a false dichotomy between defence and offence in cyberspace. Ensuring security often requires scanning for threats prior to an attack or subversion. This means maintaining a high degree of situational awareness, of the kind that espionage has traditionally provided. Developing potential offensive operations, to be deployed in case of attack, therefore provides the most appropriate avenue for deterrence. Publicly disclosing an arsenal of non-specific advanced defensive preparations may deter some attacks. This addresses proportionality, enhances situational awareness, and allows for preparedness. It also aids attribution: with greater situational awareness, an array of actors can be pinpointed with greater accuracy, enabling responses that do not wrongly attribute the actions of non-state actors to a state.
Further enhance defensive capacity through good practices of cyber security that harmonise across allied states.
I wish I could have attended my centre’s open day, which I hear was a major success! It’s great to be part of a group of individuals pursuing some very different areas of cyber security across computer science, international relations, law, philosophy, and geography (well, only me, so far). Below is the poster that summarises my current DPhil on a single A1 sheet. I haven’t seen it printed, but it should have been on display yesterday.
I’m currently in the process of organising my fieldwork (still), but hopefully I will get there. I am also still obsessing and dragging my feet over a piece on ‘objects’ I am writing, which I will be presenting first at the ISA conference in February – this is definitely the furthest in advance I’ve been writing and thinking in depth for a conference, and subsequent paper, so I think that must be progress?
I’m very excited about what looks like a fascinating workshop with a great set of people (I don’t believe the list is finalised yet, so will hold off on that) on living with algorithms. This is being hosted by Royal Holloway, University of London. I’m particularly excited by the short 5–10 minute provocations that the call for papers asked for – I’m sure there’ll be some pretty diverse, and contentious, contributions on the day.
Below is my abstract for the workshop:
The kiss of death: an algorithmic curse.
Malicious softwares slither through the noise of systems, of cyberspaces, attempting to avoid the signal, to defy their abnormality to their surrounding ecological condition. It’s a parasitical existence, to avoid the kiss of death.
Algorithmic detection: The curse of malwares. The curse of humans?
As with similar techniques deployed against humans (the collection of data, its analysis), abnormality breathes from ever-modulating normalised abstractions. Or that is the intention, at least. Modern malware mostly emerges as malicious through the deployment of the detecting algorithm. Circulation and mobility are absolutely necessary for malwares to carry out their deeds of infection, exfiltration, and so on. Yet precisely this circulation is their downfall. To be malware, it must move. Yet in moving it changes its ecological condition. Two cultures emerge at the point of software’s algorithmic detection: one becoming-human, one becoming-malware. Indeed, it is tempting to focus on human responses, looking at things in relation to ourselves. Yet how does algorithmic detection expose the malicious intentionality of otherwise ‘normal’ software? What human-malware normality is required?
We, malwares and humans, are rightly concerned with algorithmic detection. This is where our cultures converge in a more-than-human political project. We are unlikely to ever sense each other in that way however. Humans and malwares develop everyday practice at certain sites, sometimes technological, other times not. These include anti-virus programs, the secure connection to banking credentials, stealing ‘confidential’ big data, in organisational practice, in the virtue of the software programmer, in the chatter of politicians. When malware is detected, when it becomes known, it is sealed-off, destroyed, deleted. As humans, can similar algorithmic detection mechanisms come from our dividualisation? Can looking to a more-than-human offer potential futures of hope and resistance to the dominance of algorithms? A way for us to slither through our spatial registers?
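As a purely illustrative aside, the kind of algorithmic detection the abstract gestures at, flagging deviation from an ever-modulating baseline of ‘normal’ behaviour, can be sketched in a few lines of Python. This is a minimal toy, not any real anti-virus mechanism; the class name, window size, and threshold are all hypothetical choices for illustration:

```python
from collections import deque
import statistics

class BaselineDetector:
    """Toy anomaly detector: learns a rolling 'normal' from recent
    observations and flags values that stray too far from it."""

    def __init__(self, window=50, threshold=3.0, warmup=10):
        self.window = deque(maxlen=window)  # recent 'normal' observations
        self.threshold = threshold          # z-score cut-off for anomalies
        self.warmup = warmup                # observations assumed normal at start

    def observe(self, value):
        """Return True if `value` deviates anomalously from the baseline."""
        if len(self.window) < self.warmup:
            self.window.append(value)       # warm-up: assume normal
            return False
        mean = statistics.fmean(self.window)
        stdev = statistics.pstdev(self.window) or 1e-9
        if abs(value - mean) / stdev > self.threshold:
            return True                     # anomalous: keep out of baseline
        self.window.append(value)           # normal: baseline keeps modulating
        return False

# e.g. bytes written per second by some process: steady, then a spike
detector = BaselineDetector()
rates = [100, 102, 99, 101, 98, 100, 103, 97, 100, 101, 99, 5000]
anomalies = [r for r in rates if detector.observe(r)]
print(anomalies)
```

Note how the baseline only absorbs observations it deems normal: the detector’s ‘normal’ is itself a moving product of what it has already let pass, which is the ever-modulating abstraction the abstract describes.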