
Part 3: Learning in the deep blue sea - Azure

Human Learning, Machine Learning, and


Uwe Weinreich, the author of this blog, usually coaches teams and managers on topics related to strategy, innovation and digital transformation. Now he is seeking a direct confrontation with Artificial Intelligence.

The outcome is uncertain.

Stay informed via Twitter or LinkedIn.

Already published:

1. AI and Me – Diary of an experiment

2. Maths, Technology, Embarrassment

3. Learning in the deep blue sea - Azure

4. Experimenting to the Bitter End

5. The difficult path towards a webservice

6. Text analysis demystified

7. Image Recognition and Surveillance

8. Bad Jokes and AI Psychos

10. Interview with Dr. Zeplin (Otto Group)

Incidentally, it's Graeme Malcolm who'll be leading us through the introductory videos. Anyone who likes Scottish accents will certainly get their money's worth. But don't worry: the texts are easily comprehensible and even thoughtfully transcribed, so that everyone, including those taken out of their comfort zone by our tutor's pronunciation, can follow along easily.

Malcolm explains that later in the course we will develop a data-classification model ourselves, which will then be made available online.



Stages of Learning

Malcolm prefers the term "Machine Learning", by the way. That seems appropriate, since there's certainly a difference between learning and intelligence. When Pavlov's dog reacts to a bell by salivating, this admittedly counts as a learning process, but it demands just as much intelligence as is asked of a bacterium feeding chemotactically: in other words, none whatsoever. In this respect, we must distinguish between learning and intelligence. In the following table, I've tried to do so for both organisms and machines.



| Learning form | Organism | Machine learning |
| --- | --- | --- |
| Not learning: programming | Genetic characteristics, imprinting | Deterministic program code: f(x) = y |
| Sensitisation / habituation | Increase / decrease in reaction to a stimulus, e.g. acclimatisation to noise | Calibration of sensor sensitivity |
| Associative learning (classical / operant conditioning) | Stimulus-response connection (e.g. Pavlov's dog, learning through rewards) | Analysis of correlations: regression, pattern recognition etc. |
| Learning through models | Mimicry, social learning, learning from reflected experience | Modelling based on sample data: supervised machine learning, deep learning |
| Learning through insight | Problem-solving, creativity | Artificial intelligence |

Yes, this classification leaves room for debate, not only regarding the determination of the types of Machine Learning, but also regarding the original classification of the different forms of learning themselves. One thing, however, is clear: learning is already present even when no intelligence is required. In this respect, Machine Learning is a broader concept than that of AI.
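The gap between the table's first row and its associative-learning row can be shown in a few lines of Python. This is a minimal sketch with made-up numbers: a deterministic function whose mapping is fixed by the programmer, next to a regression that estimates the same mapping from noisy observations.

```python
import numpy as np

# Not learning: deterministic program code, f(x) = y.
# The mapping is fixed once by the programmer and never changes.
def f(x):
    return 2 * x + 1

# Associative learning: the mapping is *estimated* from observed
# stimulus-response pairs, here via a least-squares regression.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(0, 0.1, size=100)  # noisy observations

slope, intercept = np.polyfit(x, y, deg=1)    # "learned" parameters
print(round(slope, 1), round(intercept, 1))   # close to 2 and 1
```

The deterministic function is right by construction; the regression is only approximately right, but it got there from data alone, which is the whole point of the table's lower rows.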

Working with Azure

How are these forms of learning organised in Azure? Let's find out through an example: building a model that predicts calorie consumption.

Malcolm shows how an Azure Machine Learning (ML) Workspace is set up. One annoying thing: because the software recognises my location, I'm automatically redirected to the German Azure site, where the terms are different. The ML Workspace, for example, is called "Machine Learning Studio-Arbeitsbereich". OK, I managed that. Even more annoying: the set-up procedure differs from the one shown in the video, which cost me both time and nerves.



Building the model

First, a relation between data sets has to be established, which then is represented visually. Using a graphic interface, it's possible to combine different analytical operations just with the click of a mouse. So far this is neither AI, nor indeed anything new. More than 10 years ago, such visual analysis editors were already in use, for example Clementine from SPSS, now known as IBM's SPSS Modeler. Back then, the whole thing went by the name of Data Mining.


Of course, at this point a bit of background knowledge of data analysis is required, otherwise one will very quickly get lost. One realisation that everyone can probably arrive at independently, however, is that the user ID cannot predict calorie consumption.

The procedure here is clever, and necessary for supervised learning: the data is randomly split into two parts, 70% and 30%. The model is built using the 70% portion; the remaining 30% is then used to determine whether it works reliably.
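What Azure's visual split-and-train step does can be sketched in plain Python. The calorie data below is synthetic, invented purely for illustration; only the 70/30 logic mirrors the course.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the course data: exercise minutes vs. calories.
minutes = rng.uniform(10, 120, size=200)
calories = 7.0 * minutes + rng.normal(0, 25, size=200)

# Random 70/30 split, as in Azure's split step.
idx = rng.permutation(len(minutes))
cut = int(0.7 * len(minutes))
train, test = idx[:cut], idx[cut:]

# Train on the 70% ...
slope, intercept = np.polyfit(minutes[train], calories[train], deg=1)

# ... then check on the held-out 30% whether the model generalises.
pred = slope * minutes[test] + intercept
rmse = np.sqrt(np.mean((pred - calories[test]) ** 2))
print(f"train={len(train)} test={len(test)} rmse={rmse:.1f}")
```

The important detail is that the 30% never touches the fitting step, so the error measured on it is an honest estimate of how the model behaves on new subjects.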

Unsurprisingly, the analytical experiments show that a model which predicts calorie consumption in test subjects reasonably well is possible. It works just as well in the second experiment with another data set, in which the analysis should predict which subjects have diabetes: an exercise in categorisation. And, of course, example three works too, in which blood donors are to be divided into groups.
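The diabetes and blood-donor experiments are categorisation tasks rather than regression. As a minimal sketch of the same idea, here is a nearest-centroid classifier on invented data; Azure's two-class modules are of course far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented two-class data: one measurement per subject (e.g. a lab
# value), class 0 centred at 4.0 and class 1 centred at 7.0.
x0 = rng.normal(4.0, 0.5, size=50)
x1 = rng.normal(7.0, 0.5, size=50)

# "Training": remember one centroid per class.
c0, c1 = x0.mean(), x1.mean()

def classify(value):
    """Assign the class whose centroid is nearer."""
    return 0 if abs(value - c0) < abs(value - c1) else 1

print(classify(3.8), classify(7.2))  # → 0 1
```

Regression answers "how much?", categorisation answers "which group?"; the train/test discipline from the previous example applies to both.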

It's quite cool that as a result of the experiment – the training of the model, and the evaluation – a web service is automatically created in Azure. The system takes a lot of weight off our shoulders here.
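Once published, such a web service is called over HTTPS with a JSON body. From what I recall of classic Azure ML Studio endpoints, the request looked roughly like the sketch below; treat the column names, values and wrapping structure here as assumptions for illustration, not as the exact contract of our model.

```python
import json

# Hypothetical input row for the calorie model; the column names
# are invented for this sketch.
payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["Age", "Weight", "Duration", "Heart_Rate"],
            "Values": [["30", "75", "45", "120"]],
        }
    },
    "GlobalParameters": {},
}

# This JSON body would be POSTed to the workspace's scoring URL,
# together with an "Authorization: Bearer <api key>" header.
body = json.dumps(payload)
print(body[:40])
```

The convenient part is that Azure generates the endpoint, the key handling and even sample client code; the sketch only shows what travels over the wire.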



We haven't arrived at true Artificial Intelligence yet, but we have probably reached Machine Learning, or rather an automation of analytical sequences. In addition, the ability to group analytical instruments together intuitively offers a lot of convenience and can considerably speed up development. What the system cannot do for us, however, is understand the analysis itself. Without the relevant knowledge, even the cleverest procedure won't deliver useful results, and even if it does, interpretation will be difficult. The old GIGO rule applies here too: "garbage in, garbage out". In other words, if you put rubbish in, you'll only get rubbish out.

See you next time.



published: June 18, 2018, © Uwe Weinreich
