AI Challenges: Surveillance
This article is part of the series "How AI challenges humanity". In it we’ll look at how machine learning is likely to affect surveillance, first through a pessimistic and then through an optimistic lens.
Threats
Even with today’s face recognition performance it’s easy to see how people could completely lose their privacy in public. Add automatic action detection, and every instance of jaywalking, attending a political rally or simply going somewhere will be registered. Speech recognition and text understanding complete the picture governments can assemble of their people.
Today most societies accept all of the above being done manually when there is reasonable suspicion. With AI automating most of the data processing in a surveillance operation, it suddenly becomes feasible to monitor every single person all the time. China’s Social Credit System may already be a push in that direction.
Companies may also use your data against you. Movement patterns can signal an unhealthy lifestyle. Facial features may be used to predict not only your sexuality but also how safe a driver you are or how successful a salesperson you will be.
Every prejudice people hold could be tested for correlation, regardless of the actual causation. What happens when such correlations are found and programmed into algorithms? What happens when this is done unwittingly, simply because the training data was already biased? Job applicants might get rejected for reasons that are ridiculous and immoral, yet statistically significant.
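To make the last point concrete, here is a minimal sketch, assuming scikit-learn and entirely synthetic data, of how a model trained on prejudiced historical decisions picks that prejudice up through an innocent-looking proxy feature:

```python
# A toy illustration of bias leaking from training data into a model.
# All data is synthetic; "zip_code" stands in for any innocent-looking
# proxy attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)                    # what we actually care about
group = rng.integers(0, 2, size=n)            # a protected attribute (0 or 1)
zip_code = group + rng.normal(0.0, 0.3, n)    # correlates with group, not skill

# Historical decisions: partly skill, partly prejudice against group 1.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

# Train only on seemingly neutral features; the proxy smuggles the bias in.
X = np.column_stack([skill, zip_code])
model = LogisticRegression().fit(X, hired)

print("weight on skill:   ", round(float(model.coef_[0][0]), 2))
print("weight on zip_code:", round(float(model.coef_[0][1]), 2))  # clearly negative
```

No one programmed discrimination here; the model found it on its own because it was present in the labels.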
Opportunities
As data processing becomes more automated, the processing also becomes easier to document. Automated decisions could be logged and explained, if regulation required it.
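A minimal sketch of what such documentation could look like, assuming an append-only JSON-lines audit log; all field names and the file path are hypothetical:

```python
# Record every automated decision as it is made, so it can be
# explained and audited later.
import json
import time

def log_decision(path, model_version, inputs, output):
    record = {
        "timestamp": time.time(),       # when the decision was made
        "model_version": model_version, # which model made it
        "inputs": inputs,               # what the model saw
        "output": output,               # what it decided
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.jsonl", "v1.3",
             {"speed_kmh": 62, "speed_limit": 50}, "issue_fine")
```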
What if cameras didn’t send video feeds to a central location but instead raised an alarm when they saw something previously defined as suspicious? The rest of the time they would stay quiet. Suspicious doesn’t always have to mean criminal: a person dropping to the floor unconscious would raise an alarm, as would a person robbing someone. A black-box surveillance AI might be less intrusive than a person watching a video stream.
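Here is a sketch of that architecture, assuming on-device inference. Every name is a hypothetical stand-in: `detect_events` would wrap a real action-recognition model, `send_alert` a real notification channel. The point is the data flow, not the model.

```python
# Alert-only edge camera: raw frames never leave the device.
from dataclasses import dataclass

ALERT_LABELS = {"person_collapsed", "robbery"}  # assumed label set

@dataclass
class Event:
    label: str
    timestamp: float

def detect_events(frame):
    """Stand-in for on-device inference over a single frame."""
    return []  # a real model would return detected Events here

def send_alert(event):
    """Stand-in for the only network call the camera ever makes."""
    print(f"ALERT: {event.label} at {event.timestamp:.0f}")

def surveillance_loop(camera):
    for frame in camera:                 # frames stay in local memory
        for event in detect_events(frame):
            if event.label in ALERT_LABELS:
                send_alert(event)        # a few bytes leave, not the video
```

The design choice is that the alarm, not the footage, is the unit of communication; everything else can be discarded on the device.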
When prejudice is baked into an algorithm, it at least becomes measurable, visible and correctable.
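Measurable can be taken literally. Here is a minimal sketch comparing a model’s positive-decision rate across groups (demographic parity); the arrays are hypothetical outputs of any deployed classifier, and the 0.8 threshold echoes the "four-fifths rule" used in US employment law:

```python
# Make baked-in prejudice visible by comparing decision rates per group.
import numpy as np

def positive_rate_by_group(predictions, groups):
    """Share of positive decisions per group; large gaps flag possible bias."""
    return {int(g): float(predictions[groups == g].mean())
            for g in np.unique(groups)}

predictions = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # toy model decisions
groups      = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute

rates = positive_rate_by_group(predictions, groups)
print(rates)                                     # {0: 0.75, 1: 0.25}
print("disparate impact:", rates[1] / rates[0])  # 0.33, far below 0.8
```

The same check is much harder to run on the heads of a thousand human decision-makers.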
Ways Forward
The main opportunity in this area might be public discourse. Societies should have open discussions and decide on basic rules: What data about a person may be used, and in which ways? Do we want to enforce every law all the time? Every region will answer these questions differently. The EU, for example, has begun to answer one of them quite restrictively with the GDPR. The rest of the world will follow.
Contrary to popular belief, the decisions of neural networks and many other machine learning models can be traced: for any concrete input, the computation that produced the output is fully inspectable. So when a model calculates a decision, it is possible to show the affected person how that decision was reached. Governments might additionally require the use of open source software in especially sensitive areas, so that the general public can audit it.
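For a linear model the trace is literal, as the sketch below shows: each feature’s contribution to the score can simply be listed for the affected person. Deep networks need attribution methods such as saliency maps or SHAP to play the same role. Data and feature names here are hypothetical.

```python
# Trace one credit decision back to its per-feature contributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt", "years_employed"]
X = np.array([[5.0, 1.0, 5.0],
              [2.0, 3.0, 1.0],
              [8.0, 0.5, 10.0],
              [3.0, 4.0, 0.0]])
y = np.array([1, 0, 1, 0])                   # toy past decisions

model = LogisticRegression().fit(X, y)

applicant = np.array([2.5, 3.5, 2.0])
contributions = model.coef_[0] * applicant   # per-feature share of the score

for name, value in zip(features, contributions):
    print(f"{name:15s} {value:+.2f}")
print(f"{'intercept':15s} {model.intercept_[0]:+.2f}")
print("decision:", "approve" if model.predict([applicant])[0] == 1 else "reject")
```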