The Machine Learning Myth

“I was recently at a demonstration of walking robots, and you know what? The organizers had spent a good deal of time preparing everything the day before. However, in the evening the cleaners had come to polish the floor. When the robots started during the real demonstration the next morning, they had a lot of difficulty getting up. Amazingly, after half a minute or so, they walked perfectly on the polished surface. The robots could really think and reason, I am absolutely sure of it!”

Somehow the conversation had ended up here. I stared with a blank look at the lady who was trying to convince me of the robot’s self-awareness. I was trying to figure out how to tell her that she was a little ‘off’ in her assessment.

As science-fiction writer Arthur C. Clarke put it: “Any sufficiently advanced technology is indistinguishable from magic.” However, conveying this to someone with little knowledge of ‘the magic’, other than that gleaned from popular fiction, is hard. Despite trying several angles, I was unable to convince this lady that what she had seen the robots do had nothing to do with actual consciousness.

Machine learning is all the rage these days; demand for data scientists has risen to levels similar to that for software engineers in the late nineties. Jobs in this field are among the best paying relative to the number of years of working experience. Is machine learning magic, mundane or perhaps somewhere in between? Let’s take a look.

A Thinking Machine

Austria, 1770. The court of Maria Theresa buzzes. The chandeliers mounted on the ceiling of the throne room cast long shadows on the carpeted floor. Flocks of nobles arrive in anticipation of the demonstration about to take place. After a long wait, a large cabinet is wheeled in, overshadowed by something attached to it. It looks like a human-sized doll. Its arms lean over the top of the cabinet. In between those puppet arms is a chess board.

The Mechanical Turk by Wolfgang von Kempelen

The cabinet is accompanied by its maker, Wolfgang von Kempelen. He opens the cabinet doors, revealing cogs, levers and other machinery. After a dramatic pause he reveals the true name of this device: the Turk. He explains it is an automated chess player and invites Empress Maria Theresa to play. The crowd sneers and giggles. However, scorn turns to fascination as Maria Theresa’s opening chess move is answered by a clever counter move from the Turk’s mechanical hand.

To anyone in the audience the Turk looked like an actual thinking machine. It would move its arm just like people do, it took time between moves to think just like people do, it even corrected invalid moves of the opponent by reverting the faulty moves, just like people do. Was the Turk, so far ahead of its time, really the first thinking machine?

Unfortunately: no, the Turk was a hoax, an elaborate illusion. Hidden inside the cabinet was a real person: a skilled chess player who controlled the arms of the doll-like figure. The Turk shows that people can see and understand an effect, yet fail to correctly infer its cause. The Turk’s chess skills were not due to its mechanics. Instead they were the result of clever concealment of ‘mundane’ human intellect.

The Birth of Artificial Intelligence

It would take until the mid-1950s before a group of researchers at Dartmouth started the field of artificial intelligence. They believed that if the process of learning something can be precisely described in sufficiently small steps, a machine should be able to execute these steps as well as a human can. Building on existing ideas that emerged in the preceding years, the group set out to lay some of the groundwork for breakthroughs to come in the ensuing two decades. Those breakthroughs did indeed come in the form of path finding, natural language understanding and even mechanical robots.

Arthur Samuel

Around the same time, Arthur Samuel of IBM wrote a computer program that could play checkers. A computer-based checkers opponent had been developed before. However, Samuel’s program could do something unique: adapt based on what it had seen before. Roughly, he did this by making the program store the moves that had led to wins in past games, and then replaying those moves in appropriate situations in the current game. Samuel referred to this self-adapting process as machine learning.

What, then, is the difference between artificial intelligence and machine learning? Machine learning is best thought of as a more practically oriented subfield of artificial intelligence. With mathematics at its core, it could be viewed as a form of automated applied statistics. At the core of machine learning is finding patterns in data and exploiting those patterns to automate specific tasks: finding people in a picture, recognizing your voice or predicting the weather in your neighbourhood.

In contrast, at the core of artificial intelligence is the question of how to make entities that can perceive the environment, plan and take action in that environment to reach goals, and learn from the relation between actions and outcomes. These entities need not be physical or sentient, though that is often the way they are portrayed in (science) fiction. Artificial intelligence intersects with other fields like psychology and philosophy, as discussed next.

Philosophical Intermezzo: Turing and the Chinese room

Say a machine can convince a real person that it – the machine – is a human being. By this very act of persuasion, you could say the machine is at least as cognitively able as that human. This famous claim was made by mathematician Alan Turing in the early fifties.

Turing’s claim just did not sit well with philosopher John Searle. He proposed the following thought experiment: imagine Robert, an average Joe who speaks only English. Let’s put Robert in a lighted room with a book and two small openings for a couple of hours. The first opening is for people to put in slips of paper with questions in Chinese: the input. The second opening is to deposit the answers to these questions written on new slips, also in Chinese: the output.

Searle’s Chinese Room

Robert does not know Chinese at all. To help him he has a huge book in this ‘Chinese’ room. In this book he can look up what symbols he needs to write on the output slips, given the symbols he sees on the input slips. Searle argued that no matter how many slips Robert processes, he will never truly understand Chinese. After all, he is only translating input questions to output answers without understanding the meaning of either. The book he uses also does not ‘understand’ anything, as it contains just a set of rules for Robert to follow. So, this Chinese room as a whole can answer questions. However, none of its components actually understands or can reason about Chinese!

Replace Robert in the story with a computer, and you get a feeling for what Searle is trying to point out: while a translation algorithm may be able to translate one language into the other, being able to do that is insufficient for really understanding either language.

The root of Searle’s argument is that knowing how to transform information is not the same as actually understanding it. Taking that one step further: in contrast with Turing, Searle’s claim is that merely being able to function on the outside like a human being is not enough for actually being one.

The lady I talked with about the walking robots had a belief. Namely, that the robots were conscious based on their adaptive response to the polished floor. We could say the robots were able to ‘fool’ her into this. Her reasoning is valid under Turing’s claim: from her perspective the robots functioned like a human. However, it is invalid under Searle’s, as his claim implies ‘being fooled’ is simply not enough to prove consciousness.

As you let this sink in, let’s get back to something more practical that shows the strength of machine learning.

Getting Practical with Spam

In the early years of this century spam emails were on the rise, and there was no defense against the onslaught. So thought Paul Graham, a computer scientist, whose mailbox was overflowing like everyone else’s. He approached it like an engineer: by writing a program. His program looked at an email’s text and filtered out those that met certain criteria. This is similar to making filter rules to sort emails into different folders, something you have likely set up for your own mailbox.

Graham spent half a year manually coding rules for detecting spam. He found himself in an addictive arms race with the spammers, each trying to outsmart the other. One day he figured he should look at the problem in a different way: using statistics. Instead of coding manual rules, he simply labeled each email as spam or not spam. This resulted in two labeled sets of emails: one consisting of genuine mails, the other of spam.

He analyzed the sets and found that spam emails contained many typical words, like ‘promotion’, ‘republic’, ‘investment’, and so forth. Graham no longer had to write rules manually. Instead he approached this with machine learning. Let’s get some intuition for how he did that.

Identifying Spam Automagically

Imagine that you want to train someone to recognize spam emails. Your first step is showing the person many example emails, each explicitly labeled as either genuine or spam. After this training phase, you put the person to work classifying new, unlabeled emails. The person assigns each new email a label, spam or genuine, based on its resemblance to the emails seen during the training phase.

Replace the person in the previous paragraph with a machine learning model and you have a conceptual understanding of how machine learning works. Graham took both email sets he created: one consisting of spam emails, the other of genuine ones. He then used these to train a machine learning model that looks at how often words occur in text. After training he used the model to classify new incoming mails as being either spam or genuine. The model would, for example, mark an email as spam if the word ‘promotion’ appeared in it often: a relation it ‘learned’ from the labeled emails. This approach made his hand-crafted rules obsolete.
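To get a feel for the mechanics, consider this toy sketch in Python. It is not Graham’s actual filter (he used word probabilities rather than raw counts); the scoring rule, example emails and function names are invented purely for illustration:

```python
from collections import Counter

def train(spam_texts, genuine_texts):
    # Count how often each word occurs in the spam and genuine examples.
    spam_counts = Counter(w for t in spam_texts for w in t.lower().split())
    genuine_counts = Counter(w for t in genuine_texts for w in t.lower().split())
    return spam_counts, genuine_counts

def classify(text, spam_counts, genuine_counts):
    # A word seen more often in the spam examples pushes the score up;
    # a word seen more often in genuine mail pushes it down.
    score = sum(spam_counts[w] - genuine_counts[w] for w in text.lower().split())
    return "spam" if score > 0 else "genuine"

spam_counts, genuine_counts = train(
    ["free promotion investment offer", "promotion for your investment"],
    ["lunch tomorrow at noon", "meeting notes attached"],
)
print(classify("a promotion on this investment", spam_counts, genuine_counts))  # -> spam
```

Every labeled email you feed to `train` shifts the counts, and with them the verdicts: no hand-written rules, just patterns inferred from examples.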

Your mailbox still works with this basic approach. By explicitly labeling spam messages as spam you are effectively participating in training a machine learning model that can recognize spam. The more examples it has, the better it will become. This type of application goes to the core of what machine learning can do: find patterns in data and bind some kind of consequence to the presence, or absence, of a pattern.

This example also reveals the difference between software engineering and data science. A software engineer builds a computer program explicitly by coding rules of the form: if I see this then do that. Much like Graham tried to initially combat spam. In contrast, a data scientist collects a large amount of things to see, and a large amount of subsequent things to do, and then tries to infer the rules using a machine learning method. This results in a model: essentially an automatically written computer program.

Software Engineering versus Machine Learning

If you would like a deeper conceptual understanding and don’t shy away from something a bit more abstract, let’s dive a little further into the difference between software engineering and machine learning. If not, feel free to skip to the conclusion.

As a simple intuition you can think of the difference between software engineering and machine learning as the difference between writing a function explicitly and inferring a function from data implicitly. As a minimal contrived example: imagine you have to write a function f that adds two numbers. You could write it like this in an imperative language:

function f(a, b):
    c = a + b
    return c

This is the case where you explicitly code a rule. Contrast this with the case where you no longer write f yourself. Instead you train a machine learning model that produces the function f based on many examples of inputs a and b and output c.

train([(a, b, c), (a, b, c), (...)]) -> f

That is effectively what machine learning boils down to: training a model is like writing a function implicitly, by inferring it from the data. After this, f can be used on new, previously unseen (a, b) inputs. If the model has seen enough input examples, it will perform addition on those unseen inputs. This is exactly what we want. However, consider what happens if the model were fed only one training sample: (2, 2, 4). Since 2 * 2 = 4 and 2 + 2 = 4, it might yield a function that multiplies its inputs instead of adding them!
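To make this concrete, here is a minimal sketch of what such a train function could look like in Python. The gradient-descent learner and the example data are my own illustration of the idea, not a standard method or library:

```python
import random

def train(examples):
    # Infer f(a, b) = w1*a + w2*b from labeled examples (a, b, c)
    # by repeatedly nudging the weights to reduce the prediction error.
    random.seed(0)  # fixed seed so the sketch is reproducible
    w1, w2 = 0.0, 0.0
    for _ in range(2000):
        a, b, c = random.choice(examples)
        error = (w1 * a + w2 * b) - c
        w1 -= 0.01 * error * a
        w2 -= 0.01 * error * b
    return lambda a, b: w1 * a + w2 * b

# Varied examples of addition; both weights settle near 1.
f = train([(1, 2, 3), (2, 5, 7), (4, 1, 5), (3, 3, 6)])
print(f(10, 20))  # close to 30: the model has 'learned' to add
```

Note that this particular sketch can only ever represent weighted sums. A more flexible model fed only the ambiguous sample (2, 2, 4) could just as well settle on multiplication, which is exactly the pitfall described above.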

There are roughly two types of functions you can generate, corresponding to two types of tasks. The first, called regression, returns some continuous value, as in the addition example above. The second, called classification, returns a value from a limited set of options, as in the spam example. The simplest set of options is ‘true’ or ‘false’.

Effectively we are giving up precise control over the function’s exact definition. Instead we move to the level of specifying what should come out given what we put in. What we gain from this is the ability to infer much more complex functions than we could reasonably write ourselves. This is what makes machine learning a powerful approach for many problems. Nevertheless, every convenience comes at a cost and machine learning is no exception.


The main challenge for using a machine learning approach is that large amounts of data are required to train models, which can be costly to obtain. Luckily, recent years have seen an explosion in available data. More people are producing content than ever, and wearable electronics yield vast streams of data. All this available data makes it easier to train useful models, at least for some cases.

A second caveat is that training models on large amounts of data requires a lot of processing power. Fortunately, rapid advances in graphics hardware have led to orders of magnitude faster training of complex models. Additionally, these hardware resources can be easily accessed through cloud services, making things much easier.

A third downside is that, depending on the machine learning method, what happens inside the resulting model can be opaque. What does the generated function do to get from input to output? It is important to understand the resulting model, to ensure it performs sanely on a wide range of inputs. Tweaking a model is more an art than a science, and the risk of harmful hidden biases in models is real.

Applying machine learning is no replacement for software engineering, but rather an augmentation for specific challenges. Many problems are far more cost effective to approach by writing a simple set of explicit rules instead of endlessly tinkering with a machine learning model. Machine learning practitioners are best off knowing not only the mechanics of each specific method, but also whether machine learning is appropriate to use at all.


Many modern inventions would look like magic to people from one or two centuries ago. Yet knowing how these really work shatters the illusion. The last decade has seen a huge increase in the use of machine learning to solve a wide range of problems. People unfamiliar with the underlying techniques often both under- and overestimate the potential and the limitations of machine learning methods. This leads to unrealistic expectations, fears and confusion. From apocalyptic prophecies to shining utopias: machine learning myths abound where we are better served staying grounded in reality.

This is not helped by naming. Many people associate the term ‘learning’ with the way humans learn, and ‘intelligence’ with the way people think. Though sometimes a useful conceptual analog, it is quite different from what these methods currently actually do. A better name would have been data-based function generation. However, that admittedly sounds much less sexy.

Nevertheless, machine learning at its core is not much more than generating functions based on input and, usually, output data. With this approach it can deliver fantastic results on narrowly defined problems. This makes it an important and evolving tool, but like a hammer for a carpenter: it is really just a tool. A tool that augments, rather than replaces, software engineering. Like a hammer is limited by laws of physics, machine learning is fundamentally limited by laws of mathematics. It is no magic black box, nor can it solve all problems. However, it does offer a way forward for creating solutions to specific real-world challenges that were previously elusive. In the end it is neither magic nor mundane, but is grounded in ways that you now have a better understanding of.


  1. Singh, S. (2002). Turk’s Gambit.
  2. Russell, S. & Norvig, P. (2003). Artificial Intelligence.
  3. Halevy, A., Norvig, P. & Pereira, F. (2010). The Unreasonable Effectiveness of Data.
  4. Lewis, T. G. & Denning, P. J. (2018). Learning Machine Learning, CACM 61:12.
  5. Graham, P. (2002). A Plan for Spam.
  6. O’Neil, C. (2016). Weapons of Math Destruction.
  7. Stack Overflow Developer Survey (2019).



If you do not sleep you will die within months. Though it is unlikely that you would voluntarily stay awake twenty-four hours a day for days at a time, modern society, with its many distractions, makes it tempting to trade a little bit of sleep for a little bit of something else every day. Is this a wise trade-off to make? To answer that question, let’s dive deeper into sleep.


A decade ago I visited the music festival Sziget in Budapest. I camped there with a group of friends near the northern tip of the ‘party island’. After several days, thanks to a combination of staying up very late and sleeping with an eye mask and earplugs, my usual waking time had settled around noon. Despite the intense heat buildup in my tent during the morning hours, I still had an actual, though shifted, sleep rhythm. Where does this rhythm come from?

Living for several days on a music festival terrain, while an unforgettable experience in many ways, was not very convenient. Nevertheless, this inconvenience does not come remotely close to that of living in a cave without exposure to daylight for a month. This is what the father of modern sleep research, Nathaniel Kleitman, did in 1938 together with one of his students. Experimenting on himself was not new to Kleitman: he had also kept himself awake on benzedrine for up to one hundred and eighty hours at a time just to measure the effect [2]. Little did he know that a form of this would later become a popular party drug at festivals for staying awake and boosting energy: speed.

Student Bruce Richardson (Left) and Nathaniel Kleitman (Right) in the cave

Back to Kleitman’s cave experiment: it revealed two important things. Firstly, if there is no sunlight, our bodies can still self-regulate sleep and wakefulness. Secondly, our sleep cycle is not exactly twenty-four hours, but a little bit longer. This observation gave rise to the concept of what we nowadays call the rhythm of approximately (circa) a day (diem): the circadian rhythm.

The presence of sunlight resets the rhythm, and keeps it from shifting. This is quite similar to the way that your computer, whose clock is not entirely accurate and suffers from drift, synchronizes its time at regular intervals by contacting computers with more accurate clocks using the Network Time Protocol. Indeed, some alarm clocks are equipped with radio signal based synchronization which essentially does the same thing.
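The analogy can be captured in a small toy simulation. The 0.2 hours of daily drift below is an illustrative number, not a measured one:

```python
def clock_offset_after(days, daily_sync):
    # Toy model: an internal clock that drifts about 0.2 hours per day
    # (illustrative value). A daily 'sync' (sunlight for the body,
    # NTP for a computer) resets the accumulated drift to zero.
    drift_per_day = 0.2  # hours
    offset = 0.0
    for _ in range(days):
        offset += drift_per_day
        if daily_sync:
            offset = 0.0
    return offset

print(clock_offset_after(30, daily_sync=False))  # about 6 hours adrift
print(clock_offset_after(30, daily_sync=True))   # stays at 0.0
```

Without the daily reset the clock wanders further every day, which is exactly what happened to Kleitman’s rhythm in the cave, and to mine at the festival.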


The first few days at the Sziget festival, when my rhythm had not shifted as much yet, I felt very sleepy near the end of the day when I stayed up late. But I noticed that when I stayed up long enough, there was a moment, usually in the early morning, where I got over the sleepiness and started feeling more alert again. Since my rhythm had not shifted yet: what caused this?

Besides the circadian rhythm, our bodies have another mechanism to regulate sleep. As long as you are awake, your sleep pressure rises through the buildup of the neurochemical adenosine in your brain. The higher the sleep pressure, the more you feel like sleeping. This pressure is referred to as the sleep drive, whereas the circadian rhythm is called the wake drive.

If you were ever in a lecture where another student fell asleep on their desk with a loud thud you know what high sleep pressure can do: it will make you nid-nod, falling in and out of slumber, even if you really want to consciously pay attention. While this is fairly harmless during a lecture it can be extremely dangerous in other situations, for example: when driving.

Rising sleep urge as result of staying awake. The arc is the rising sleep pressure (sleep drive). The dotted line is the circadian rhythm (wake drive). The distance between the two represents how sleepy you feel, expressed in words as the urge to sleep.

The interesting thing is that the circadian rhythm and sleep pressure are independent ‘systems’. As your sleep pressure continues to build up, your circadian rhythm may already be swinging back up, making you feel awake and more alert: exactly what I was experiencing the first couple of days at the festival. The figure above shows this: once you stay awake long enough the distance between the sleep pressure arc (top) and the circadian rhythm (bottom) becomes less, only to hit you much harder again as the circadian rhythm comes back down, resulting in a very strong urge to sleep.

Reset your brain

A good night consists of seven to nine hours of sleep opportunity; subtract from that half an hour to an hour to get the actual time slept. Back to the Sziget festival: not everyone had the fortune of a good night’s sleep. One evening we met a group of girls who had set up their tent next to the twenty-four hour DJ booth. I probably don’t need to tell you that pitching their tent there did not bode well for their sleep opportunity. Hence, we spent the night socializing with them. As we sat and talked on a set of wooden benches, time flew by and before we knew it the sun came up again. The festival terrain was nearly silent, until we heard a loud rumbling sound.

It was around six in the morning, and the cleaning crew rolled in. A literal wall of people that moved from the front to the back of the festival terrain picking up garbage, followed by truck mounted water sprayers that cleaned the streets. An impressive sight.

Using water to clean and cool down the streets at Sziget [3]

In the deep stages of sleep something similar to what that cleaning crew did happens in your brain. Electrical waves sweep from the front to the back of your brain. This consolidates what you experienced and learned the previous day, and at the same time helps you prepare for sponging up new things the next day. In fact, not sleeping for one night reduces your ability to form new memories the next day by a whopping forty percent.

Better sleeping through chemistry?

Sitting on those benches at the festival, pulling an all-nighter, was not good for retention of memories, but also not good for overall wakefulness. The next day I had serious problems staying awake, thanks to the aforementioned buildup of sleep pressure. As a quick fix I turned to coffee: the stimulant of choice for staying awake under such circumstances.

The caffeine in coffee blocks the binding of adenosine in the brain, which makes you feel more alert while your body is still tired. A negative effect is that once you do sleep after having coffee, you will have trouble reaching the deep stages of sleep and thus get less deep sleep overall. Ironically, this lack of deep sleep can trap you in a dependency cycle where you actually need coffee in the morning to feel refreshed after poor quality sleep.

It was a festival, so not only did we drink coffee. There was also plenty of cheap alcohol to go around. Beer, wine, shots, you name it, they had it. This stuff had the opposite effect of coffee: it made me feel sleepy. Alcohol, after all, is a depressant.

Alcohol makes you fall ‘asleep’ faster by sedating you, but as a side effect it heavily fragments your sleep. You wake up much more often than you usually would, though you do not consciously register this. This is the reason why the day after an evening of drinking a lot, you feel extremely drowsy. A secondary effect is that you do not reach the REM stage of sleep, which is vital for learning and memory formation. The debilitating effect of alcohol on memory formation is similar to that of not sleeping at all.


Regardless of the cause: what else, besides memory and learning problems, happens when you do not sleep enough? When I came back from the festival I had a throat infection. Not surprising: sleeping only four hours for just one night, instead of eight, reduces the presence of natural killer cells by over seventy percent. Doing this repeatedly left me much more vulnerable to infections, and there were plenty of germs floating around at the festival. For such a short time, and the great fun I had, that was an acceptable trade-off. However, that luxury of choice does not apply to everyone. Involuntary chronic sleep deprivation incurs a heavy toll.

A chronic lack of sleep increases the risk of physical diseases, like cancer, heart disease and diabetes, and also of mental diseases, including depression, anxiety and suicidality. Sleeping too little generally means that your life expectancy goes down dramatically. The evidence is strong enough to say that if you were to consistently sleep only 6.75 hours per night, you would likely live only into your early sixties.


What can you do to improve sleep? Three tips: firstly, maintain a regular schedule, which means: go to sleep and wake up at the same time every day, also during the weekends. Secondly, take care of your sleep ‘hygiene’: make sure your room is sufficiently dark and cool, around eighteen degrees Celsius, and avoid screens before you go to bed. Thirdly, avoid drinking coffee and alcohol, especially shortly before you go to bed, but also in the late afternoon and early evening.

Going to a music festival and losing some sleep by choice like I did is not a big deal. The real problem is that two thirds of all adults worldwide structurally fail to get the recommended minimum amount of sleep opportunity of seven to nine hours per night. Please, don’t be one of them. Sleeping is the single most important thing you can do to reset your brain and body health every day. Trading a little bit of sleep for a little bit of something else is not worth it in the end.


  1. Walker, M. (2017). Why We Sleep.
  2. Levine, J. (2016). Self-experimentation in Science.
  3. Forbes, G. (2008). Sziget Festival Photos.



“How can I save what I have made?” This is the question a young girl asked me in her school’s classroom recently. In the past hour she had created her own website, about her and the books that she loved. Standing in front of me with her hands clamped around the loan laptop, she wanted to know how she could save her website. Her protectiveness revealed that she did not ask because she wanted to show it off, but because she wanted to continue building it at home.

Primary schools are not my day-to-day work environment. That day I participated in a voluntary event, organized by my work, with the goal of making societal impact. Among the various available options, I had chosen to teach children in disadvantaged neighbourhoods about computer programming. One of the assignments was making a website.

The girl in question on the left

The girl that came to me with the question about how to save her website was one of a group of four that I was actively helping that day. All the girls in that small group were eager to learn, but she took the assignments quite seriously. She had the potential, the motivation and the attention span. This experience got me thinking: what would her chances be of actually making a career in IT?

There is a huge demand for skilled IT personnel in the Netherlands: forty-two percent of employers in IT struggle to find people, compared to only eighteen percent of employers in general [1]. Given this demand, it should be easy for her to launch a career in IT, right?


Many people want equal career opportunities for everyone, but the sad truth is that our society is not meritocratic enough to meet that ideal. I see three major challenges for this girl: where she lives, the ethnic group she is part of, and her gender.

Firstly, she lives in a disadvantaged neighbourhood. Sadly, the statistics about this are not at all uplifting [2]. Children that grow up in these neighbourhoods score lower on tests, experience more health problems and more psychological problems, and their poor socio-economic status tends to extend across generations. The only real exception to this is kids that have a resilient personality. The effects on them are smaller, but they also move out of such neighbourhoods much earlier in life [3, 4]. Nevertheless, even if she has a resilient personality, the fact remains that her neighbourhood provides less access to modern technology.

Secondly, even if she were to go into IT and become, say, a software developer, she is part of an ethnic minority. The unfortunate fact is that minorities, also in the Netherlands, have a harder time finding a job and holding on to it: eight percent of highly educated minorities have no job, whereas this is only about three percent for the majority group: a telling sign. Even disregarding all this, the question remains: would she end up in IT at all?

The Gender Trap

When I was choosing a school where I could learn the craft of software development, in the late nineties, I attended various information events. I distinctly remember one such event. Like almost everyone there, I was accompanied by a parent: my mother. At the end of the presentation, she was the one parent in the room who dared ask the question: “are there any girls studying here?” After a confused stare and some silence, the unsatisfying answer was: “no, there aren’t any, this is a technical school.”

This brings me to my third point: has that imbalance improved at all since then? We might expect half of the people working in IT to be female and the other half male. However, present day workforce numbers show otherwise. The country that seems to do best is Australia, with nearly three out of ten IT workers being female, whereas the Netherlands scores poorly at only half that: less than two out of ten [5].

Is that due to a difference in capability? Are males ‘naturally’ better at these more technical subjects than females? There indeed is a bit of a developmental difference between boys and girls, but it may not be what you think.

Let’s take the most abstract subject as example: math. It may appear that boys are better at math, but that is not really true. Averaging out over their entire childhood into adolescence: boys and girls really are equally good at math. However, girls tend to develop language skills faster and earlier. Since boys lag behind in that development, it appears that they are better at math. Don’t be fooled by this deception: girls are not worse at math, they are better at language [6].

This difference between the genders disappears throughout adolescence, but by that time, paths have already been chosen. This is a sad consequence of our education systems pressuring children to make life affecting choices early on, which is not helped by reinforced stereotypes through gender specific toys and activities.


This brings me to another question: to what extent can parents influence their children? Specifically, can average parents, by parenting, influence their children’s personality and intelligence? How much do you think that influence is? Think about it for a minute.

This may come as a shock, but the effect of parents on their children’s personality and intelligence does not exist. To better understand this, consider: if anything parents do affected their children in any systematic way, then children growing up with the same parents would turn out more similar than children growing up with different parents, and that is simply not the case [7].

So, what does influence a child’s personality and intelligence? Half of it is genes; the other half is experiences that are unique to the child. Those experiences are shaped largely by their peers. To some extent we all already know this: whether adolescents smoke, get into trouble with the law, or commit serious crimes depends far more on what their peers do than on what their parents do [8].

Don’t misunderstand me. Parents can inflict serious harm on their children in myriad ways, leaving scars for a lifetime, but that’s not what the average parent does. On the flip side, parents can also provide their children with the opportunity to acquire skills and knowledge, for example through reading, playing a musical instrument or even exploring computer programming. Hence, they can influence the conditions for their children to get the right kinds of unique experiences.


The odds, then, are stacked against the girl who asked me how she could save her website. Looking back, why was that volunteering day so important? Was I really there to teach her computer programming? No, not really. I was there to provide her with a unique experience that is rare in her regular day-to-day environment. It is my hope that this has cemented a positive association with IT in her, which may just tip the balance the right way.

As a final word I think that these kinds of volunteering efforts are important. Whether you work in IT or elsewhere, please consider giving others unique inspirational experiences to save, cherish and build upon. Because despite her odds, what she experienced may in the end actually make the difference for both her and her friends.


  1. Kalkhoven, F. (2018). ICT Beroepen Factsheet.
  2. Broeke, A. ten (2010). Opgroeien in een slechte wijk.
  3. Mayer & Jencks (1989). Growing Up in Poor Neighborhoods.
  4. Achterhuis, J. (2015). Jongeren in Achterstandswijk.
  5. Honeypot (2018). Women in Tech Index 2018.
  6. Oakley, B. (2018). Make your Daughter Practice Math.
  7. Turkheimer, E. (2000). Three Laws of Behavior Genetics and What They Mean.
  8. Pinker, S. (2002). The Blank Slate.