The 5 greatest dangers of artificial intelligence


Artificial intelligence makes life easier and more comfortable, or at least it should in many cases. But the technology has real downsides: AI comes with a number of dangers. Of course, we don't need to fear machines enslaving us or trying to kill us all, but in some areas the risks posed by artificial intelligence are very real.

Loss of control

Until now, developing a new technology has always meant that researchers, engineers, investors and other decision-makers control the direction in which it moves forward. The situation with artificial intelligence is different, because AI can - within a certain framework - develop itself further without humans determining its path.

Unsupervised learning, i.e. a learning process in which the AI analyses and processes a set of data and derives rules from it without human help, is still very time-consuming and error-prone. But this state of affairs will not last long, and then it will be crucial to teach AI limits and boundaries - and to hope that it sticks to them.
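To make the idea concrete, the unsupervised process described above can be sketched with k-means clustering, the textbook example of an algorithm that derives structure from data without any human-provided labels. This is a hypothetical toy implementation for illustration only, not code from any system mentioned in this article:

```python
import random

random.seed(0)  # fixed seed so this toy run is reproducible

def kmeans(points, k, iters=20):
    """Toy k-means: group 2-D points into k clusters with no human labels."""
    centers = random.sample(points, k)  # start from k random points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # assign each point to its nearest center
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # move each center to the mean of its cluster
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated blobs; the algorithm finds them without being told.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, k=2)
```

No one tells the algorithm what the groups are; it discovers them from the data alone - which is exactly why the rules an AI derives this way need human-set boundaries.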

Unemployment

Let's not kid ourselves: artificial intelligence will cost a lot of people their jobs. There are areas in which AI methods work far better than anything humans can contribute. Many stock market and banking transactions, for example, already run completely automatically, without human intervention.

Of course, many new professions are emerging around artificial intelligence, probably even more than AI will eliminate. But in these early stages, that is little consolation to those who lose their jobs: most of the new positions AI creates have completely different, and often higher, requirements than the ones being eliminated.

Artificial intelligence costs jobs in some areas / © Phonlamai Photo/Shutterstock

The humanization of machines

This danger stems mainly, but not exclusively, from the digital assistants springing up everywhere. We talk to our devices, ask questions, give orders and get answers. Sociologists see a very real danger in this humanization of technical devices and complex systems.

A machine always remains a machine, no matter how naturally Alexa, Cortana or Siri talk to us these days. Humanization erodes the distance between human and machine and creates a one-sided emotional dependence. That may sound detached and abstract, but in the long term it can lead to psychological problems for many people.

Companies determine the development

Important technologies need rules and boundaries, and this is perhaps more important for AI than for any field that came before it. At the breakneck speed at which artificial intelligence is being developed inside the world's major corporations, the government bodies responsible for those rules and boundaries can barely keep up.

This means that the development of perhaps the most important technology in the world is de facto in the hands of privately run and profit-oriented companies. This can turn out well, as some projects have shown, but there is a great danger that ethical and moral principles will be overlooked in the pursuit of profit.

Suppression in society

Artificial intelligence can recognize individuals in a matter of seconds, even in large crowds. And not only that: some companies can identify employees within seconds by the way they move the mouse pointer on a computer. Sound recordings can also be used for monitoring, as can traces of internet usage.

All this data can be searched, filtered and analyzed many times faster and more efficiently than ever before using machine learning and other AI methods. Governments, despots and authoritarian regimes thus have much more powerful tools at their disposal - to control and monitor their own people and deprive them of their freedom to a certain extent.

Chinese policemen with smart glasses. / © Reuters

How do you see the development of artificial intelligence? What are your concerns? Let us know in the comments.

1 Comment


  • On the ethics side, I'm more concerned about different standards being set for "us" and "them". Let's say the EU or America decides to legislate that all autonomous cars must choose to save 3 pedestrians even if it kills the car's passenger. Does anyone on earth honestly believe that <<insert name of your least favorite CEO of a car company>> will decide to keep those settings for his/her personal car?