Topographies and Automation: Directions of my malicious research

So I’m pleased to say that I’ve been accepted to two pretty different conferences, each of which disseminates a different part of my DPhil (PhD) project, and which I intend to work up into more formal papers after these events.

The first is ‘Transient Topographies’, which takes place in April in Galway, Ireland, and which will help me explore the complex word ‘topography’. This is an exceptionally complicated term for me, coming from the dual lineages of human geography and computer science, which frequently hold alternative and conflicting interpretations of what the word means. Yet this conference allows me to explore in some depth the reason I became interested in malicious software in the first place: its spatial implications. For me, malicious software allows us to blend the spatial understandings that come with ecology in both more-than-human geographies and the broad study of technologies or technics. It also allows me to explicitly move on from an awkward experience at last year’s RGS-IBG, when I was just initiating these ideas. I had not given the topic enough thought, and ultimately the core argument I wished to put forward was lost. To put it clearly: I think our division between human and technic is unsustainable. I hope this will become clearer in this talk.

The second is an exceptionally interesting session put together by Sam Kinsley for this year’s RGS-IBG in Cardiff on the ‘New Geographies of Automation’. This tackles the more historical (and somewhat genealogical) aspect of my work, which I never anticipated I would pursue in my PhD. However, it is something that captured my attention during my ethnographic work in a malware analysis laboratory: the complex array of tools and technologies with which we have come to sense, capture, analyse, and detect malware, through both ‘static’ (conventional) and ‘contextual’ (data-driven) approaches (a toy sketch of this distinction follows below). On the whole, these are tools that have automated the way we comprehend malware. Even the most basic rendering of malware requires assumptions that are built into that automation, such as assuming a certain path the software follows. Yet these fundamental approaches to malware analysis, and the entire endpoint industry, remain ever-present in contemporary developments. Though I claim there has always been a more-than-human collective in the analysis of malware, developments in machine learning offer something different. If we look to both Louise Amoore and Luciana Parisi, there is a movement of the decision that is (at least) less linear than we have previously assumed. Thus automation is entering some form of new stage: we have not only more-than-human collectives, but now more-than-human decision-making that is non-deterministic.
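To make the ‘static’/‘contextual’ distinction concrete, here is a minimal Python sketch of the two styles of check. It is purely illustrative: every signature, behaviour name, weight, and threshold is hypothetical rather than taken from any real analysis tooling.

# Illustrative only: contrasts 'static' detection (matching known byte
# signatures in a sample) with a toy 'contextual' detection (scoring
# behaviours observed while the sample runs). All values are hypothetical.

STATIC_SIGNATURES = {
    "e89a500500": "call to a known-bad subroutine",
    "bf50384f00": "loads the offset of a suspicious payload",
}

def static_scan(sample_hex: str) -> list[str]:
    """Return descriptions of any known byte signatures found in the sample."""
    return [desc for sig, desc in STATIC_SIGNATURES.items() if sig in sample_hex]

# Hypothetical weights for behaviours observed in a sandboxed run.
BEHAVIOUR_WEIGHTS = {
    "bad_access": 0.5,
    "registry_change": 0.3,
    "files_dumped": 0.4,
}

def contextual_score(observed: set[str]) -> float:
    """Sum the weights of observed behaviours into a rough maliciousness score."""
    return sum(BEHAVIOUR_WEIGHTS.get(b, 0.0) for b in observed)

def detect(sample_hex: str, observed: set[str], threshold: float = 0.6) -> bool:
    """Flag as malicious if a static signature matches or the behaviour score is high."""
    return bool(static_scan(sample_hex)) or contextual_score(observed) >= threshold

if __name__ == "__main__":
    sample = "48c1e83f84c07414bf50384f00"   # hex fragment echoing the abstract below
    behaviours = {"registry_change", "files_dumped"}
    print(detect(sample, behaviours))        # True: one signature hit, score 0.7

Running it flags the toy sample both ways: one byte signature matches, and the behaviour score crosses the (arbitrary) threshold. The point is only to show where human judgement gets folded into automated checks, not to reproduce any vendor’s detection logic.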

Both abstracts are below:

Transient Topographies – NUI Galway


The Malicious Transience: a malware ecology

Keep the Computer Clean! Eradicate the Parasite!

Exporting_Malicious
{
Lab_Check("Software_Form, Tools, Human_Analyst")     ==1
Flag("machine_learning")                             ==1
; Check feeds from vendors and internal reputation scoring
Rep("vendors")                                       ==1
Rep("age")                                           ==>5000
Rep("prevalence")                                    ==>300
; Intuition of human analyst to find matching structure
e89a500500      ; call sub_routine
48c1e83f        ; shr rax, 3fh
84c0            ; test al, al
7414            ; jz short loc_16475f
bf50384f00      ; mov edi, offset dangerous_thing
; Environmental behavioural checks
Env_Chk("bad_access, registry_change, files_dumped") ==1 detect
; terminate unsuccessfully
tu
; Detect as malware, report infection (Ri) and clean
:detect
Ri
Clean
}

Contemporary malware analysis is pathological. Analysis and detection are infused with a medical-biological discourse that entangles the technological with good and bad, malicious and clean, normal and abnormal. The subjectivities of the human analyst entwine with the technical agencies of big data, the (de)compiler, the Twitter feed. Playing with the static rendering and contextual behaviour comes alongside the tensed body, a moment of laughter. Enclosed, sanitised, manipulated ‘sandboxed’ environments simulate those out-there, in the ‘wild’. Logics of analysis and detection are folded into algorithms that (re)define the human analyst. Algorithm and human learn, collaborate, and misunderstand one another. Knowledges are fleeting, temporary, flying away from objective truths. The malicious emerges in the entanglement of human and technology, in this play. The monster, the joker, sneaks through the stoic rigour of mathematics and computation, full of glitch and slippage. Software is ranked, sorted, sifted. It is controlled, secured, cleansed, according to its maliciousness. Yet it is transient; its map forever collapsing. Time and space continue, environments refigured, maliciousness (re)modified. Drawing on ideas of technē, between communication, art and technology, this paper queries current pathological logics. It asks what a broader grasp of more-than-humans through an ecological approach, one that includes the subjectivities, environments, and social relations after Félix Guattari, could achieve.


New Geographies of Automation – RGS-IBG


Automating the laboratory? Folding securities of malware

Folding, weaving, and stitching are crucial to contemporary analyses of malicious software, generated and maintained through the spaces of the malware analysis laboratory. Technologies entangle (past) human analysis, action, and decision into the ‘static’ and ‘contextual’ detections that we depend on today. A large growth in suspect software on which decisions about maliciousness must be drawn has driven a movement towards (seemingly omnipresent) machine learning. Yet this is not the first intermingling of human and technology in malware analysis. It draws on a history of automation, enabling interactions to ‘read’ code in stasis; build knowledges in more-than-human collectives; allow ‘play’ through a monitoring of behaviours in ‘sandboxed’ environments; and draw on big data to develop senses of heuristic reputation scoring.

Though we can draw on past automation to explore how security is folded, made known, rendered as something knowable, contemporary machine learning performs something different. Drawing on Louise Amoore’s recent work on the ethics of the algorithm, this paper queries how points of decision are now more-than-human. Automation has always extended the human, led to loops, and driven alternative ways of living. Yet the contours, the multiple dimensions of the neural net, produce the malware ‘unknown’ that has become the narrative of the endpoint industry. This paper offers a history of the automation of malware analysis across static and contextual detection, to ask how automation is changing the way cyberspace becomes secured and made governable; and how automation is not something to be feared, but something to be tempered with the opportunities and challenges of our current epoch.
