
Opinion

State surveillance: why AI needs boundaries

Yes, once again, it's China that's showing us where the world of modern technology might lead us. Let's take a look at two recent examples of what AI-supported systems can do in China.

In many Chinese cities, traffic is monitored by cameras that are increasingly equipped with facial recognition. The established social credit system ensures that rule violations (not only in traffic) are automatically recorded in personal files.

Dong Mingzhu, president of a large appliance manufacturer and ranked number one on Forbes' list of China's top 100 businesswomen, was probably surprised to be accused of jaywalking at an intersection. In fact, the cameras had caught her portrait on an advertisement on the side of a bus that had just crossed the street. She had done nothing wrong, yet she was automatically fined via AI.

The responsible traffic police later admitted the mistake on Weibo and cleared Dong's name in the system. The system has also been upgraded so that this kind of mistake won't happen again in the future.

Chinese policemen with camera glasses. / © Reuters

The second example is even more blatant and affects millions of people. The Chinese government has put countless people on blacklists as part of its social credit system. If you land on the blacklist, you can't book any flights or train tickets. The reason? These millions of people have been deemed "untrustworthy," and the measure is meant to make it difficult or impossible for them to move freely. This is tough stuff, especially since it is likely that some people ended up on the list due to errors in the system.

Without limits, things will get uncomfortable

Both of these instances show that, as a society, we have to be careful when we develop a technology as powerful as artificial intelligence. Without sensible regulation and proper boundaries, with respect to both capabilities and areas of application, things can quickly get out of hand. This won't be easy, especially in our globalized world, where technology doesn't develop in just one country. That makes it all the more important for governments to play a strong role in AI research and regulation.

How do you imagine the future of AI? Let us know in the comments.

3 comments


  • I imagine it's the technology the anti christ will be using.


  • Why do you think Google is actively working with the Communist Chinese to develop AI? The Chinese are the "beta testers". You know if they tried something like this pretty much anywhere else, the public would be outraged. But, in Commie china, the state tells you how to live (and die). Once it is perfected, you can bet it will be rolled out elsewhere, under the guise of "safety & security".


  • The problem is that the only places where boundaries can be set are places that don't need them. The US or EU can pass any law they want, but no one can protect the Chinese people from their government, or North Korea's or Iran's or etc. There is no international body to protect the people who need it. So regulations will have primarily symbolic value.
