“Jurassic AI – Enter at your own risk” – An Interview with an experienced Park Ranger

Yvonne Hofstetter is Managing Director of Teramark Technologies GmbH and the author of several books on Big Data and AI.

Mrs. Hofstetter has more than 25 years of practical experience in building and applying AI. She once described herself as keeping her own Jurassic Park of AI in her basement. Start-ups in Silicon Valley and Germany seek her advice on the successful development and implementation of AI.

As an author, Mrs. Hofstetter critically discusses the effects of digitization, big data and AI on our free and democratic society.

 

Frank Schirrmacher once described you as “one of the key figures in the debate about our social future”. You may be considered one of Germany’s most prominent observers of digitization and artificial intelligence. You are regarded as a critic of digital business models, and in interviews and highly respected publications you regularly warn against sharing private data all too willingly on the internet.

What, in your view, makes the difference between good and bad artificial intelligence?

Technologists would never call AI “good” or “bad”. There is, however, weak versus strong AI. Weak AI is often used in image recognition, language translation, search engines, the creation of “news feeds” for social media, and “People Analytics” as used by HR departments. Strong AI has self-awareness; research is still far away from developing it. AI is only as good as the data on which it is trained, and sometimes, even in times of big data, you have too little data to train an AI. Take the example of algorithmic currency risk management: if an AI-based control strategy is to calculate when it is useful to hedge the portion of revenue a European company generates in US dollars, it has to be trained – not solely, but also – on the EUR/USD market. There, the AI looks for correlations in the market data since the introduction of the Euro. It could determine, for instance, that every 17th leg following an ECB press conference comes with a higher probability that the EUR price will rise.

This is nonsense, because there is no scientific explanation for the relevance of this 17th leg. It is a “spurious effect” that a person recognizes (or overlooks), but an AI does not. So, if you ask me about “good” or “bad” AI, I would say it is a matter of whether you push mass data into an AI to train it “bottom-up”, with all the nonsensical correlations it “discovers”, or whether an expert scientifically models an AI “top-down” and then looks for the data with which to train it.
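To see how such a “17th leg” effect arises, consider a minimal sketch (purely illustrative, with synthetic data; not Teramark’s actual method): if enough arbitrary “every k-th leg” rules are data-mined against one small noisy sample, one of them will look predictive by chance alone.

```python
# Minimal sketch of a spurious correlation found by bottom-up data mining.
# The "market data" here are pure noise, so any pattern found is meaningless.
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0, 1, size=500)   # stand-in for EUR/USD moves: pure noise
up = returns > 0

# Test 100 arbitrary "every k-th leg rises" rules against the same small sample.
best_k, best_rate = None, 0.0
for k in range(2, 102):
    hits = up[k - 1::k]                # the k-th, 2k-th, ... observations
    rate = hits.mean()
    if rate > best_rate:
        best_k, best_rate = k, rate

print(f"'Best' rule: every {best_k}th leg rises {best_rate:.0%} of the time")
# The winning rule looks impressive, yet the data are random: a spurious
# effect a domain expert would reject, but a bottom-up-trained AI exploits.
```

The more candidate patterns are tried on too little data, the more certain it is that one of them passes by accident; a top-down expert model never tests the nonsensical candidates in the first place.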

AI is also bad when it is applied to humans. Today’s standard AI task is classification and categorization. Applied to humans, AI becomes their profiler. The data from which it learns to profile are never neutral; they are often prejudiced. “Predictive policing” in Chicago keeps showing the same picture: people of color are criminal; white people are not. This technological racism is already present in the training data – and we technologists do not (yet) know how to really master this bias.
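A hypothetical sketch can show how such prejudice travels from data into a model (the labels below are biased by construction; this is an illustration, not a real policing dataset):

```python
# Sketch: a classifier trained on prejudiced labels learns the prejudice.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)          # 0/1 protected attribute (illustrative)
behavior = rng.normal(0, 1, n)         # the genuinely relevant signal

# Prejudiced historical labels: group 1 is flagged far more often
# at the same behavior, mirroring biased records.
label = (behavior + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([group, behavior]), label)
print("learned weight on the protected attribute:", model.coef_[0][0])
# The weight is positive and substantial: the profiler has learned the
# prejudice encoded in the labels, not behavior alone.
```

Nothing in the training procedure went wrong; the bias was in the data before the first line of code ran, which is exactly why it is so hard to master.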

With your company Teramark Technologies GmbH you are developing applications for artificial intelligence. You mentioned in an interview that you keep a sort of Jurassic Park of artificial intelligence in your basement.

What do you advise your customers about using these new technologies? How should customers deal with them, and how could a possible misuse of these technologies be prevented?

The danger lies in the hands of laymen using these technologies. It is easy for them to get access, because Google and others provide off-the-shelf AI libraries for everyone. The truth is that only a few users understand what such a pre-packaged AI does when it processes data or how information is propagated through a neural network. The thoughtless use of such libraries by laymen is not without risk, especially when it comes to the classification of people. Classification may only be a standard task, but it can have dire consequences for people’s sovereignty – for example, if you are classified as criminal, as having a certain sexual orientation, or as not creditworthy. This has immediate consequences for your future, which should lie in your own hands and not in those of a machine.
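How low the barrier is can be seen in a sketch like the following (illustrative only; it assumes TensorFlow/Keras is installed, and “photo.jpg” is a placeholder path): a pre-trained network classifies an image in a handful of calls, with no insight into what happens inside.

```python
# A few lines suffice to run a pre-packaged classifier -- no understanding
# of how information propagates through the network is required.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")            # downloaded, not understood
img = image.load_img("photo.jpg", target_size=(224, 224))  # placeholder path
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Three calls later, the library hands back confident labels.
print(decode_predictions(model.predict(x), top=3)[0])
```

Classifying photos of objects is harmless; the same ease of use becomes risky the moment the inputs are people and the outputs decide something about them.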

Prof. Jürgen Schmidhuber, one of the forerunners of machine learning who, through his work at the Faculty of Computer Science at the TU Munich in the 1990s, became one of the pioneers of today’s everyday applications such as speech and image recognition, has regularly stated in interviews that in a few decades AI will be more intelligent than mankind, and that mankind is only an intermediate step of evolution on the way to a universe that is not merely imbued with artificial intelligence but consists of it. Mankind is, therefore, a pioneer and a creator, but plays no role in the final intergalactic scenario.

How do you see the future of mankind? How will mankind live with AI and, vice versa, AI with mankind?

Jürgen Schmidhuber looks far into the future, and I, too, do not rule out that Homo sapiens will become number two on planet Earth.

In the next 20 or 30 years, however, this will not happen. During this period we will not be confronted with general or super-intelligence, but with millions of highly specialized AIs, each assigned to very specific tasks – and only those – which they perform superhumanly well. Add to this the Internet of Things: with it we activate our environment and make it cognitive. Our environment will gather data about us everywhere, profile us, predict our behavior and always be a step ahead. A super-nanny state, a super-nanny company, a super-nanny church – this can easily become the future scenario.

It would be better, however, to develop hybrid systems that do not patronize humans but support them – and that is precisely not what the advertising pitch of the American company x.ai promises: “Using artificial intelligence to program people to behave better.” Worse than Big Brother – yet it is the mainstream among American AI manufacturers. Coming back to your initial question: this is what I would call a “bad” AI.

What should we say to our children who do not even leave their room without their smartphone? How should they participate in social life with their friends when they are cut off from communicating via Instagram and WhatsApp? Have we raised them wrongly? And what do we have to do today to avoid the potential horrors of a data dominion (I do not call it the rule of machines)?

Man is what technology makes of him. Martin Heidegger described this quite accurately in his twentieth-century critique of technology; it is worth reading again. Erich Fromm adds: “For Luther, the question was still whether the devil sits in the saddle and rides man. (…) For us today it is no longer a question of the devil riding us. Our problem is that things ride us – the things and circumstances that we have created ourselves. A future for modern man exists only when man is back in the saddle.”

We need to get technology under control and humanize it. This includes legal regulation, but above all a general social discourse about what man wants to become tomorrow: a pile of data? The ultimate machine? A biological algorithm? More or less, these descriptions fit the image that Silicon Valley, today’s driver of digitization, has of us. This is one reason why man no longer plays any role in the intergalactic goal: the tech giants already regard him today as a “mistake of nature” rather than the crown of creation.

We should revive philosophy thoroughly, so that it can offer a worthwhile alternative to this reduced, mechanistic image of the human being. In natural science, reason becomes inhuman if we do not balance it with the spiritual, the invisible, the non-measurable: the ego, consciousness, self-consciousness, the soul. Yes, we have brought up our children wrongly, training them to follow the mainstream. And the mainstream is: what cannot be measured does not exist. So the future will be rather poor.

 

In your book “The End of Democracy” you describe the fictional data scientist Scott and his experiment of simulating a free and democratic society.

How close are we to this simulation scenario today?

Scott is not fictional; you can get to know him – he is one of my Australian colleagues.

We already saw the scenario described in “The End of Democracy” play out in 2016: Brexit and the American presidential election bear witness to it. “The End of Democracy” describes an algorithmic control strategy (like the one used in currency risk management) that “optimally” influences voters’ opinions and spreads suitable information among the electorate. What the book describes as “information runners” are now called “social bots”. The strategy behind the use of social bots may not yet be optimized, may still be manual, and – in much the same way – may bear an American or Russian name. But the automation of such a strategy is just a matter of time and of (military) budgets.

