
Algorithmic impropriety in UK policing?


Jamie Grace

Sheffield Hallam University, GB


There are concerns that UK policing could soon be awash with 'algorithmic impropriety'. Big(ger) data and machine learning-based algorithms combine to produce opportunities for better intelligence-led management of offenders, but they also create regulatory risks and some threats to civil liberties, even though these can be mitigated. In constitutional and administrative law terms, the use of predictive intelligence analysis software to serve up 'algorithmic justice' presents varying human rights and data protection problems, depending on the manner in which the output of the tool concerned is deployed. Regardless of exact context, however, all uses of algorithmic justice in policing raise linked fears: of risks around the potential fettering of discretion, arguable biases, possible breaches of natural justice, and troubling failures to take relevant information into account. The potential for 'data discrimination' in the growth of algorithmic justice is a real and pressing problem. This paper sets out a number of arguments, using grounds of judicial review as a structuring tool, that could be deployed against algorithmically-based decision-making processes that one might conceivably object to when encountered in the UK criminal justice system. Such arguments could enhance and augment data protection and/or human rights grounds of review in this emerging algorithmic era, for example where a campaign group or an individual claimant seeks a remedy from the courts in relation to a particular algorithmically-based decision-making process or outcome.

How to Cite: Grace, J., 2019. Algorithmic impropriety in UK policing? Journal of Information Rights, Policy and Practice, 3(1). DOI:
Published on 01 Apr 2019.
Peer Reviewed
