Monday, July 22, 2013

Using Software To Assist In Police Work

As glamorized on the TV series Numb3rs, math can be a terrific asset in the field of law enforcement, and software can certainly be a part of that. There are some potential pitfalls, though: law enforcement can get too reliant on software and forget how to do actual police work, both police and the public can put too much faith in the software's abilities, and the potential for privacy abuse is significant.

I'm reminded of this post I wrote almost 7 years ago about going to the National Training Center with older equipment vs. newer equipment:
One of the NTC evaluators once said to us that brigades from the 4th Division often performed better than brigades with M1's and Bradleys. The reason, he said, was because units so equipped expected their superior equipment to win the battles for them, whereas units from the 4th, because our equipment was not noticeably superior to the faux Soviet equipment (except in night operations), relied on our tactics and battle plans to win the day.
It's certainly easy to fall into that trap, though, and while I worry about that, the privacy concerns obviously loom larger.

The Economist has a lengthy article on the subject, which I'll excerpt here:
Criminal offences, like infectious disease, form patterns in time and space. A burglary in a placid neighbourhood represents a heightened risk to surrounding properties; the threat shrinks swiftly if no further offences take place. These patterns have spawned a handful of predictive products which seem to offer real insight. During a four-month trial in Kent, 8.5% of all street crime occurred within PredPol’s pink boxes, with plenty more next door to them; predictions from police analysts scored only 5%. An earlier trial in Los Angeles saw the machine score 6% compared with human analysts’ 3%.

Intelligent policing can convert these modest gains into significant reductions in crime. Cops working with predictive systems respond to call-outs as usual, but when they are free they return to the spots which the computer suggests. Officers may talk to locals or report problems, like broken lights or unsecured properties, that could encourage crime. Within six months of introducing predictive techniques in the Foothill area of Los Angeles, in late 2011, property crimes had fallen 12% compared with the previous year; in neighbouring districts they rose 0.5% (see chart). Police in Trafford, a suburb of Manchester in north-west England, say relatively simple and sometimes cost-free techniques, including routing police driving instructors through high-risk areas, helped them cut burglaries 26.6% in the year to May 2011, compared with a decline of 9.8% in the rest of the city...

Predicting and forestalling crime does not solve its root causes. Positioning police in hotspots discourages opportunistic wrongdoing, but may encourage other criminals to move to less likely areas. And while data-crunching may make it easier to identify high-risk offenders—about half of American states use some form of statistical analysis to decide when to parole prisoners—there is little that it can do to change their motivation.

Misuse and overuse of data can amplify biases. It matters, for example, whether software crunches reports of crimes or arrests; if the latter, police activity risks creating a vicious circle. And report-based systems may favour rich neighbourhoods which turn to the police more readily rather than poor ones where crime is rife. Crimes such as burglary and car theft are more consistently reported than drug dealing or gang-related violence.
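The near-repeat pattern described in the excerpt above--a burglary raises the risk to surrounding properties, and the threat fades if nothing further happens--is the intuition behind hotspot-ranking tools like PredPol. PredPol's actual model is proprietary, so this is only a minimal illustrative sketch under assumed parameters: each past offence contributes risk that decays exponentially with elapsed time and with distance, and the highest-scoring grid cells become the "pink boxes." The function name, grid, and decay scales are all made up for the example.

```python
import math

def risk_score(cell, crimes, time_scale=7.0, dist_scale=0.2):
    """Aggregate near-repeat risk at a grid cell (x, y).

    crimes: list of (x, y, days_ago) tuples for past offences.
    time_scale: assumed e-folding time of the threat, in days.
    dist_scale: assumed e-folding distance, in km.
    """
    x, y = cell
    total = 0.0
    for cx, cy, days_ago in crimes:
        dist = math.hypot(x - cx, y - cy)
        # Recent, nearby offences dominate the score; old or
        # distant ones contribute almost nothing.
        total += math.exp(-days_ago / time_scale) * math.exp(-dist / dist_scale)
    return total

# Two recent burglaries cluster near (1, 1); one stale offence sits at (4, 4).
crimes = [(1.0, 1.0, 1), (1.1, 0.9, 3), (4.0, 4.0, 30)]

# Rank a small grid of half-km cells and flag the top "pink boxes".
grid = [(x * 0.5, y * 0.5) for x in range(10) for y in range(10)]
ranked = sorted(grid, key=lambda c: risk_score(c, crimes), reverse=True)
print(ranked[:3])  # cells nearest the recent cluster score highest
```

Note how the sketch also makes the bias problem concrete: feed it arrest locations instead of reported crimes, and the "high-risk" cells simply echo wherever police were already patrolling.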
It's vital that both the software itself and the uses to which law enforcement puts it be absolutely transparent.  "Black box" solutions don't do anyone any good in math class, and I don't see how they'll serve law enforcement any better.
But mathematical models might make policing more equitable by curbing prejudice. A suspicious individual’s presence in a “high-crime area” is among the criteria American police may use to determine whether a search is acceptable: a more rigorous definition of those locations will stop that justification being abused. Detailed analysis of a convict’s personal history may be a fairer reason to refuse parole than similarity to a stereotype.
Can the police use what you put on social media without a warrant?  On the one hand, it's public, and you made it public; on the other hand, they can't tail you in public without a warrant--though they can listen to your conversations if you speak loudly enough.  Which analogy is the correct one for social media?
The legal limits on using social media to fish out likely wrongdoers, or create files on them, are contested. Most laws governing police investigations pre-date social networking, and some forces assert that all information posted to public forums is fair game. But Jamie Bartlett of Demos, a British think-tank, says citizens and police forces need clearer guidance about how to map physical-world privacy rights onto online spaces. He thinks gathering information about how someone behaves on social sites ought to require the same clearance needed to monitor them doggedly in public places. Officers who register anonymously or pseudonymously to read content, or send web crawlers to trawl sites against their owner’s wishes, would require yet more supervision.
Very vexing issues, which I'm sure will keep the next generation of lawyers well-paid and the next generation of law enforcement, lawmakers, and the judicial system very busy.

1 comment:

Mike Thiac said...

Darren

You may wanna be specific about "tail". I don't need a warrant to follow you by foot or car. If I want to put a GPS tracker on your car, yes, I do need a warrant.

This looks like Intelligence Led Policing.