Japanese police will begin testing a draconian network of AI-enhanced security cameras — hoping to stop major crimes before they happen.
The pre-crime monitoring tests, reminiscent of the 2002 sci-fi film Minority Report, will intentionally avoid using the tech’s ‘facial recognition’ capabilities, according to Japan’s National Police Agency.
Instead, the AI cameras will focus on machine-learning pattern recognition of three types: ‘behavior detection’ for suspicious activities, ‘object detection’ for guns and other weapons, and ‘intrusion detection’ for the protection of restricted areas.
Japanese police officials said they intend to launch their AI test program sometime during this fiscal year, which in Japan ends in March 2024.
While some counterterrorism experts maintain that the new AI-powered cameras will ‘help to deploy police officers more efficiently,’ providing ‘more means for vigilance,’ others worry about introducing hidden algorithmic biases into police work.
Terrified by last year’s surprise assassination of Japanese Prime Minister Shinzo Abe, and shocked by a failed attempt on the life of Japan’s new Prime Minister Fumio Kishida this April, the nation’s police have struggled to prevent high-profile crimes, which are often committed by individuals they call ‘lone offenders.’
Police have used the term ‘lone offenders’ to describe a growing sector of Japanese society: lonely and disaffected young people, sometimes called ‘otaku’ for ‘nerd’ or ‘shut-in,’ who have proven violent despite having no known criminal history.
Japan’s National Police Agency’s AI-camera tests come on the one-year anniversary of Prime Minister Abe’s fatal shooting.
Advocates say the AI’s so-called ‘behavior detection’ machine-learning algorithm would be capable of training itself by observing the patterns of individuals deemed suspicious: activities like looking around in a repetitious and nervous fashion.
While Japanese police officials did not get into details, past efforts at AI-enhanced security cameras in the far eastern nation have focused on fidgeting, restlessness, rapid eye movement and other behaviors flagged as products of a guilty mind.
Police officials hope that the software can pick out these warning signs amid large crowds and other distracting conditions that make spotting risks difficult even for highly trained law-enforcement officers.
AI shape analysis will also help the system detect suspicious items like firearms and other weapons (object detection), while certain protected locations will be programmed in to detect malicious trespassers (intrusion detection).
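In simplified form, the three detection modes described above amount to rule-based checks layered over a computer-vision detector’s per-frame output. The sketch below is purely illustrative: the labels, thresholds, and zone coordinates are invented for demonstration, and the actual internals of the National Police Agency system have not been made public.

```python
from dataclasses import dataclass

# Illustrative sketch of the three detection modes described in the
# article. All labels, thresholds, and zone coordinates are invented;
# the real system's internals are not public.

WEAPON_LABELS = {"gun", "knife"}          # 'object detection' targets
RESTRICTED_ZONE = (100, 100, 300, 300)    # x1, y1, x2, y2 of a protected area

@dataclass
class Detection:
    label: str            # e.g. "person" or "gun", from an object detector
    x: int                # position of the detection in the frame
    y: int
    glance_count: int = 0  # times a tracked person looked around nervously

def flag_alerts(detections, glance_threshold=5):
    """Return (alert_type, detection) pairs for one video frame."""
    alerts = []
    x1, y1, x2, y2 = RESTRICTED_ZONE
    for d in detections:
        # 'object detection': suspicious items such as weapons
        if d.label in WEAPON_LABELS:
            alerts.append(("object", d))
        if d.label == "person":
            # 'behavior detection': repetitive nervous glancing
            if d.glance_count >= glance_threshold:
                alerts.append(("behavior", d))
            # 'intrusion detection': a person inside a protected zone
            if x1 <= d.x <= x2 and y1 <= d.y <= y2:
                alerts.append(("intrusion", d))
    return alerts
```

Real systems would of course rely on trained models rather than fixed rules, but the division of labor — classify objects, score behavior over time, check positions against protected zones — matches the three categories the police agency describes.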
For now, the National Police Agency’s use of this ‘crime prediction’ tech will only be a test — an effort to evaluate the AI-assisted cameras’ accuracy before deciding whether to officially adopt the system.
The police agency will not employ the technology’s ‘facial recognition’ features, according to Nikkei, focusing only on generic behaviors and suspicious objects.
Isao Itabashi, chief analyst for the Tokyo-based Council for Public Policy, told Nikkei that Japan is far from the first nation to deploy this kind of AI pre-crime tech.
‘AI cameras are already being used widely in Europe, the U.S. and Asia, and behavior detection technology is being studied by Japanese companies,’ said Itabashi, who is also an expert on counterterrorism defense strategy.
A 2019 survey conducted by the Carnegie Endowment for International Peace, in fact, reported that AI security camera tech was already in use by 52 of the 176 countries covered in its research.
France has recently adopted legislation authorizing the installation of AI security systems to protect Paris in advance of the 2024 Olympics and Paralympics to be held in the capital city.
Japan’s private sector has been years ahead of its national police force on the use of AI-equipped security cameras.
Last May, at the G7 summit in Hiroshima, Japanese railway firm JR West implemented a system that would notify security teams of activity the AI deemed suspicious, following train closures and evacuations the preceding month over a ‘suspicious object’ that has yet to be publicly identified.
And in 2019, Japanese startup Vaak unveiled a controversial new software designed to identify potential shoplifters based on their body language.
While aspects of Vaak’s software resemble the promises behind Japan’s National Police Agency’s AI tests, officials have not confirmed that Vaak’s product has been contracted for these trials.
Vaak’s criminal-detecting AI is trained to recognize ‘suspicious’ activities such as fidgeting or restlessness in security footage, according to Bloomberg Quint.
Vaak says its AI can distinguish between normal customer behavior and ‘criminal behavior,’ such as tucking a product away into a jacket without paying.
And, in fact, the Minority Report-style system was reportedly used successfully in 2018 to track down a shoplifter who had struck a convenience store in Yokohama.
Vaak has said that its software can alert staff to suspicious behavior via smartphone app once it’s spotted something in the CCTV stream, Bloomberg said.
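The workflow Bloomberg describes — watch the CCTV stream, score behavior, push an alert to staff phones — is essentially a detect-then-notify loop. The sketch below is a generic illustration of that pattern only; the scoring function, threshold, and alert format are invented and do not reflect Vaak’s actual code or API.

```python
import queue

# Generic detect-then-notify loop of the kind described for Vaak's
# product: score each CCTV frame, and queue an alert (standing in for
# a smartphone push notification) when the 'suspicion' score crosses
# a threshold. The threshold and scores here are invented.

ALERT_THRESHOLD = 0.8

def notify_loop(frames, score_fn, alerts=None):
    """Scan frames and queue an alert for each suspicious one."""
    alerts = alerts if alerts is not None else queue.Queue()
    for frame_id, frame in enumerate(frames):
        score = score_fn(frame)  # e.g. a model's suspicious-behavior probability
        if score >= ALERT_THRESHOLD:
            alerts.put({"frame": frame_id, "score": score})
    return alerts

# Example with a stand-in scoring function (frames are already scores):
example_scores = [0.1, 0.95, 0.4, 0.85]
pending = notify_loop(example_scores, score_fn=lambda s: s)
```

In a deployed system the queue would feed a push-notification service rather than an in-process buffer, but the control flow — continuous scoring with a human notified only above a threshold — is the same.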
But, while it’s designed to crack down on theft, both predictive policing and predictive private security efforts have sparked concerns that people may be unfairly targeted as a result of racial and other biases.
An MIT study published in 2018 found that many popular AI systems exhibit racist and sexist leanings.
Researchers have urged others to use better data to ensure biases are eliminated.
‘Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms,’ said lead author Irene Chen, a PhD student, when the study was published in November.
‘But algorithms are only as good as the data they’re using, and our research shows that you can often make a bigger difference with better data.’