Digital Decisions (DIGIDec): Artificial Intelligence and the Automation of Cybersecurity – a new research fellowship

It is a rather odd time to be announcing that I have secured a new research fellowship when I know many exceptional colleagues who are currently struggling to find employment in higher education. This has been weighing on my mind, and it points to the bizarre (and still discriminatory) system that persists in UK HE and beyond. This is a very strange time, not least with protests raging across the US and around the world.

However, I thought I would post a blog to say that I have been awarded a three-year Addison Wheeler Fellowship at the Institute of Advanced Study at Durham University in the UK, starting in October 2020. This came after another fellowship at Harvard fell through due to COVID-19 (though I hope I will be able to use this fellowship to visit Harvard and do some of the research I proposed there). The fellowship will give me the freedom, without administration or teaching duties, to explore questions that stretch from my DPhil thesis to newer interests in machine learning and decision, culminating in the research project below. I am also lucky that it comes with a generous research allowance for experimental computer science modelling and in-depth social science research.

Digital Decisions: The Research Project

As I have started to articulate elsewhere (see “Algorithms Don’t Make Decisions!”), and as I began to explore in my DPhil thesis at Oxford, I am interested in how machine learning (frequently articulated as artificial intelligence) is transforming decision in cybersecurity – nowhere more so than in offensive cyber operations. There is a burgeoning, if still emergent, research area on understanding ‘Cyber AI’ (Cybersecurity and Artificial Intelligence), and I hope to contribute to this, albeit in ways that I think often diverge from the discussion in International Relations. In particular, I will investigate how the “automation” of security means staying with the trouble (à la Donna Haraway) of ‘posthuman’ relationships. By this I mean that decision is complicated by what I deem to be computational choices, and I critique the future possibility of the ‘human in the loop’ that features in AI ethics debates.

Although some may argue that ‘automated decisions’ do not exist in cybersecurity today, my DPhil thesis argues that we are indeed seeing their implications. I am therefore going to excavate the implications of such thought and practice, as machine learning may force the hands of states and other organisations to adopt responses in which a human cannot be in the loop, and which will not wholly correspond with human expectation (no matter how well engineered a system is).

My work during this research fellowship will be exploratory, trying to understand today’s emergent machine learning techniques, offensive cyber operations, and how these are understood in relevant policy areas. Work is being done elsewhere along similar lines, such as at ‘The Cyber AI Project‘ at Georgetown University in the US, and I hope to start contributing a ‘European’ perspective. However, as part of this, I realise that these discussions have become exceptionally Eurocentric – so I will also be exploring this in non-traditional areas for cybersecurity, bringing forth postcolonial threads that may contradict and counter a US- and European-centric view on the automation of cybersecurity (I am doing some work on this with colleagues at the German ‘Dynamics of Security‘ collaborative research centre, where I was a visiting fellow last year, as well as an article for a forthcoming special issue). Likewise, I will be delving into archival material on ‘decision’ and technological change with reference to computation, to understand how policy has dealt with these difficulties before.

Why Durham?

I will be returning to this stunning university city a decade after beginning my undergraduate studies there. One of the key reasons to return is its fantastic geography department and its interdisciplinary focus on conceptualising and researching political technologies. I have found sitting with ‘critical’ geography and security extremely productive alongside colleagues in other areas of cybersecurity – and I couldn’t think of a better fit in this regard for how I can be stretched and challenged.

Likewise, my mentor for this fellowship is Prof. Louise Amoore, who I think is one of the most innovative contemporary thinkers on algorithms, ethics, and politics, and who will be a wonderful person to work with. If you haven’t read her work, I seriously recommend you go to her profile and read some. She has also (wonderfully) received a European Research Council (ERC) Advanced Grant, “Algorithmic Societies: Ethical Life in the Machine Learning Age”, which I think situates me in the best possible place to think about cybersecurity cohesively alongside other transformations fostered and generated by machine learning.

Durham also does varied work around what I term ‘computational security’ (also the name of this blog), of which I think cybersecurity can be considered a part – and perhaps not distinguishable from it. So, I look forward to working with colleagues in Durham to hopefully set up a network on such thinking, giving those who would not conventionally be regarded as ‘cybersecurity’ theorists the space and time to come together to consider such issues (in a hopefully less socially distant future).

The Abstract

Below is the abstract I submitted as part of this research fellowship proposal – to give you a glimpse into what I will be doing!

Digital Decisions (DIGIDec): Artificial Intelligence and the Automation of Cybersecurity

The application of machine learning to cybersecurity – known as ‘Cyber AI’ – is revolutionising security practice through computation’s ability to make choices that deviate from human intent. This challenges who and what makes a security decision. To grasp this, I look at differing forms of (re)cognition (Hayles, 2017), where computation renders the world through digital, calculative anomaly, in contrast to our own senses of abnormality, threat, and vulnerability. Security cannot be ‘hard-coded’, so a question emerges: are we really delegating or automating our notions of (in)security? As machine learning develops, it may choose an attribute and identify it as malicious in ways unrecognisable to us, thereby becoming an active digital security actor. This opens up a world of securities that rely on neither the human nor computation alone for their production, but on a mixture of the two. I then seek to investigate the implications of differing forms of (re)cognition and ‘digital decisions’ for cyber-weapons, adversarial machine learning, and the future practices of war and security.
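To make the abstract’s claim that machine learning “may choose an attribute and identify it as malicious” a little more concrete, here is a minimal illustrative sketch (not part of the proposal) of an unsupervised anomaly detector flagging a network flow as suspicious from a combination of features that no human rule specified. It uses scikit-learn’s IsolationForest; the flow features, values, and contamination setting are entirely hypothetical assumptions.

```python
# Illustrative sketch only: an unsupervised model drawing its own boundary of
# (in)security, with no human-authored rule defining what counts as malicious.
# The flow features and values are hypothetical; the IsolationForest API is real.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical flow features: [bytes sent, duration (s), distinct ports touched]
normal_flows = rng.normal(loc=[500.0, 2.0, 3.0],
                          scale=[150.0, 0.5, 1.0],
                          size=(1000, 3))

# A flow that is unremarkable on any single feature, but an unusual
# *combination* of all three: the kind of attribute selection no
# rule-writer anticipated.
odd_flow = np.array([[900.0, 0.4, 6.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# predict() returns -1 for anomalies and 1 for inliers: the model has, in
# effect, made its own 'digital decision' about this flow.
print(model.predict(odd_flow))            # e.g. [-1] -> flagged as anomalous
print(model.decision_function(odd_flow))  # lower scores = more anomalous
```

The point is not the particular model but the locus of the decision: the boundary between normal and anomalous is produced computationally from the data, rather than specified in advance by a human.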

Last thoughts…

I have to thank those who read my proposal, my referees, and everyone who helped me get this research proposal through to funding. Thanks to you all – and now the hard work begins! I’m seriously looking forward to this, and I am extremely privileged and humbled to be given this opportunity.

Also, I must thank my current Cyber Security lab at the University of Bristol – and Prof. Awais Rashid – for giving me the chance to “kick off” my academic career. I have some fond memories, and I know we’ll end up working together in the future!
