Uwe Weinreich, the author of this blog, usually coaches teams and managers on topics related to strategy, innovation and digital transformation. Now he is seeking a direct confrontation with Artificial Intelligence.
The outcome is uncertain.
Stay informed via Twitter or LinkedIn.
1. AI and Me – Diary of an experiment
2. Maths, Technology, Embarrassment
3. Learning in the deep blue sea - Azure
4. Experimenting to the Bitter End
8. Bad Jokes and AI Psychos

We already know the principle of a chatbot from a recent blog entry. You also had the opportunity to speak very confidentially with a replica of the very first chatbot on earth.
Today's final Azure lesson is about creating an interactive system powered by AI. Graeme Malcolm has come up with a bad-joke messenger for this. He even warns us that the jokes are really sick. And he's so right:
The only exciting thing is how easily the bot can be created. It uses a Q&A service into which question-answer pairs are simply entered. Entering them manually is the exception, though: it is much easier to upload a file that contains the questions and answers. The bot can then be trained and will give the correct answer even if a question does not exactly match a stored one. So the sentence "The chicken crossed the road, but why?" also yields the right answer.
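The principle can be sketched in a few lines: store question-answer pairs, then match an incoming question against the stored ones with a similarity measure, so that a rephrased question still finds the right answer. This is only an illustrative toy, not the actual Azure Q&A service; the pairs and the threshold are made up for the example.

```python
# Minimal sketch of a Q&A bot: fuzzy matching over stored question-answer
# pairs. The real service trains a model; here a simple string-similarity
# ratio stands in for that step.
from difflib import SequenceMatcher

qa_pairs = {
    "Why did the chicken cross the road?": "To get to the other side.",
    "What do you call a fish with no eyes?": "A fsh.",
}

def answer(user_question: str, threshold: float = 0.5) -> str:
    """Return the answer of the best-matching stored question."""
    best_score = 0.0
    best_answer = "Sorry, I don't know that one."
    for question, reply in qa_pairs.items():
        score = SequenceMatcher(
            None, user_question.lower(), question.lower()
        ).ratio()
        if score >= threshold and score > best_score:
            best_score, best_answer = score, reply
    return best_answer

# A rephrased question still finds the stored answer:
print(answer("The chicken crossed the road, but why?"))
```

Even this toy version shows why such a bot also reacts to merely similar input: anything above the similarity threshold triggers the nearest stored answer, fitting or not.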
However, the bot also reacts to any similar content, even if it does not fit or is nonsense:
This can only be avoided if the bot is refined and trained further. It can be done, for example, with larger amounts of training data or through a continuous self-learning mechanism.
This takes us to the really exciting methods of artificial intelligence, which lead to so-called "deep learning". Up to now, all the methods described in this blog have been based more or less on statistical analysis. Artificial neural networks go a step further: they modify themselves and thus the structure of the analytical system. That makes them suitable for realising autonomous learning processes.
The networks are modeled on the functioning of human brain cells and thus show their typical functionality and changeability (neuroplasticity). Neurons can reinforce or inhibit each other; connections can be newly established, changed or broken down. In other words, these processes not only teach the network something, they also allow it to mature, so to speak, by completing its assigned task. An astonishing consequence is that a neural network that has solved a complex task will arrive at a slightly different solution on a second attempt at the same task. The new solution incorporates the "new knowledge" gained from the first run.
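The effect can be illustrated with the smallest possible example: a single artificial neuron that adjusts its weight while working on a task. After solving the task once, its internal state has changed, so a second pass over the very same task starts from a matured state and behaves slightly differently. The numbers and the learning rate below are arbitrary, chosen only for illustration.

```python
# Toy illustration: one neuron learning y = 2x by gradient descent.
# Each pass over the data changes the weight, so a second attempt at
# the same task starts from the "knowledge" gained in the first run.

def train_once(weight, data, target, lr=0.1):
    """One pass of gradient descent on a squared-error objective."""
    for x in data:
        prediction = weight * x
        # Gradient of (prediction - target)^2 with respect to the weight
        weight -= lr * 2 * (prediction - target(x)) * x
    return weight

data = [0.5, 1.0, 1.5]
target = lambda x: 2 * x              # the "task": learn to double the input

w1 = train_once(0.0, data, target)    # first attempt at the task
w2 = train_once(w1, data, target)     # second attempt, from the matured state
print(w1, w2)
```

After the first run the weight has moved toward 2.0; the second run starts from that state and ends even closer, so the two attempts produce different solutions to the same task.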
With such an adaptive structure, it is possible to perfect systems and interactions with humans more and more. This can go so far that the behaviour of the machine is experienced as almost natural. However, Microsoft has learned that it is not that simple.
In March 2016, Microsoft launched Tay to interact with people on Twitter. Tay was an AI-based, self-learning and self-perfecting chatbot. She was conceived as a hipster girl. However, she went through such a metamorphosis that Microsoft had to remove her from the net after only 16 hours. She became insulting, sexist and racist.
She learned all this through unreflective adaptation to her interaction partners. This shows that learning itself is no longer a major problem for AI. What remains very difficult for an AI, however, is judging and valuing things in a way we would call "morality" or "character".
"We present Norman, world's first psychopath AI," MIT researchers proudly announce. With the Norman project, they have developed an AI that was deliberately trained with the worst images from the hidden depths of the Internet. Compared to a system trained with "normal" images, Norman gave significantly more negative interpretations of images from a Rorschach test. The experiment shows how crucial the choice of training data is for the behaviour of an AI.
AI can develop prejudices! The training phase of an AI is therefore a highly sensitive matter whose importance must not be underestimated. It can be exhausting. I still remember when a corporate manager first told me that the company now had an artificial intelligence that gave them not only the answers to unresolved questions but also the questions themselves. A short time later, he complained that the company had completely underestimated the effort needed to train the system. In the end, the necessary effort was immense and tied up many employees.
An additional protection against biased, prejudiced and malfunctioning AI is to employ a diverse team for development. AI does not develop bias on its own or through algorithms, but through what it learns through interaction with people, as the examples of Tay and Norman show.
In the meantime, AI has penetrated another area that was previously considered reserved for human beings: art. As an intern at Nvidia, Robbie Barrat first trained an artificial intelligence with nude paintings and then had it create its own paintings on the basis of photos. The tastefulness of the results can certainly be disputed, but a kind of style is already recognisable.
Made it!
In the end, I managed to complete the course within the given time and, above all, within the Azure credit. The certificate doesn't help much, but it looks somewhat official.
In the next blog post I will discuss and summarize the findings from a management perspective. Opportunities and consequences are substantial.
If you want to try AI yourself, you might still find the Azure course on edX. Here is the link to edx.org.
Meanwhile, there is also a free offering from the University of Helsinki:
published: December 12, 2018, © Uwe Weinreich