Every year, the world is inundated with news about government data collection programs. In addition to these programs, governments collect data from third-party sources to gather information about individuals. This data, in conjunction with machine learning, helps governments determine where crime will be committed and who has committed a crime. Could this data also serve as a means for governments to predict whether a particular individual will commit a crime? This talk will examine the use of big data in the context of predictive policing. Specifically, how does the data collected inform suspicion about a particular individual? In the context of U.S. law, can big data alone establish reasonable suspicion, or should it merely factor into the totality of the circumstances? How do we mitigate the biases that may exist in large data sets?
This talk will examine the big data programs currently used by governments and police departments around the world and discuss how they factor into individualized suspicion. Can big data sets, paired with the right algorithm, effectively predict who will commit a crime? What are the appropriate margins of error, if any? I will discuss the use of algorithms on big data sets to predict both where crime will occur and who might commit it.
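As a point of reference for the margin-of-error question, the arithmetic below is a minimal sketch using hypothetical numbers (the base rate, sensitivity, and specificity are assumptions for illustration, not figures from any real program): when the predicted behavior is rare, even a highly accurate model flags far more innocent people than actual offenders.

# Toy illustration (hypothetical numbers): how a rare base rate inflates
# false positives even for a highly accurate predictive model.
population = 1_000_000
base_rate = 0.001          # assume 1 in 1,000 people will commit the crime
sensitivity = 0.99         # assumed true positive rate
specificity = 0.99         # assumed true negative rate

offenders = population * base_rate
non_offenders = population - offenders

true_positives = offenders * sensitivity
false_positives = non_offenders * (1 - specificity)

precision = true_positives / (true_positives + false_positives)
print(f"Flagged individuals: {true_positives + false_positives:,.0f}")
print(f"Share of flagged individuals who are actual offenders: {precision:.1%}")
# Under these assumptions, roughly 9% of those flagged are actual offenders;
# about 91% are innocent people placed under suspicion.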
Additionally, I will discuss the types of data that exist in these databases and compare several ways in which computer algorithms are applied to big data sets to predict something about a particular individual. Should predictive policing algorithms more closely resemble those used to predict disease from DNA samples, or those used in the clearance process? Should they be used at all?