I once took part in a class where the instructor performed an interesting experiment. He asked us all to close our eyes, then raise our hands and open our eyes when each of us thought a minute had passed. Afterwards he told us how far off we were. To my amazement there was quite a spread, with some people raising their hands quite early, some quite late, and some nearly spot on. Now, this was not a test of aptitude at timing, it was a test of a specific type of perception: chronoception.

I remember being quite bored at times as a child. Many mundane things seemed to take very long. Yet, the older I have become, the faster time seems to pass. Asking around, I found out that I am not the only one with that experience. During that class I raised my hand slightly later than the one-minute mark. However, now, many years later, I am convinced that if I took it again, I'd raise my hand quite a bit later than the minute mark.

Time of course passes at a steady rate for everyone, that is: time in the physical world. However, that is not the same rate at which time appears to pass: our chronoception. How do physical and perceived time relate? Let’s dive deeper.

Fraction of Life Argument

When you were one, that one year represented one hundred percent of your life. When you turned two, the first year constituted half of your life, and the second year the other half. Following this logic, by the time you turn eighteen, that eighteenth year adds only about five and a half percent to your life up to that point.

Going ahead in time, the hundredth year of your life would add only one percent. The basic idea of this fractional argument is that each additional year you live is a smaller part of your life. If we discount everything before age five, as most people have little recollection of those years, and look at this strictly numerically, we get the graph shown below.

Life in Quarters: relative age as we get older [5]

Let’s interpret: according to this graph, your teenage years are roughly as long as your twenties and thirties combined. Although mathematically attractive, this perspective has some problems.
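That claim is easy to check numerically. The sketch below is my own illustration, not code from the cited source: it weights year n by 1/n and discounts everything before age five, as the argument above does.

```python
# Fraction-of-life model: the felt length of year n shrinks as 1/n.
# Years before age five are discounted, since few memories survive them.
def felt_length(first_year, last_year):
    """Summed perceived length of the years first_year..last_year, inclusive."""
    return sum(1.0 / n for n in range(max(first_year, 5), last_year + 1))

teens = felt_length(10, 19)                  # ages ten through nineteen
twenties_and_thirties = felt_length(20, 39)  # ages twenty through thirty-nine
```

Under this model the two spans indeed come out within a few percent of each other, which is the pattern the graph shows.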

Consider that this theory implies that time at age ten passes five times more slowly than time at age fifty, and that is not quite what actually happens. A ten-year-old does not see his fifty-year-old uncle respond in slow motion, and conversely, the fifty-year-old uncle does not see his ten-year-old niece dart around five times faster. Of course there are differences in time perception, but a fivefold difference seems like an unlikely stretch.

In addition, there is one other major problem with this fractional argument: it does not accurately represent perceived time, because perceived time does not pass at a constant rate. Our chronoception is variable, as we’ll see next.

Flow Control

Waiting in line at the supermarket, particularly when you are in a hurry, seemingly takes forever. You notice the old lady fidgeting with her hands to get the cash out of her wallet. Then a kid that just cannot seem to stop screaming. Followed by someone next to you nervously tapping his foot. However, when you finally exit the supermarket and drive home, taking that quieter route you know all too well, time passes by very quickly.

Gears of Time by Majentta

This example already shows that our perception of time is relative to what occurs around us. When we are bored or blocked, time seems to slow down. Contrast this with when we perform routine tasks or are deeply engaged in something: time seems to fly by. So the fractional argument is easy to disprove on a moment-to-moment basis, and in fact this holds even for longer spans of time.


There is a difference between how we experience time in the moment and how we remember it when we look back. Waiting in line seemingly takes forever in the moment, but after a day or two, in hindsight, it was really just a very small part of that day.

In a similar vein: holidays always seem to go by very quickly. At least: that is what many conclude as soon as theirs are over. However, during your holiday, time actually seems to slow down. There is a good reason for that: new experiences.

In your daily life you see many of the same things every day, you do many of the same routine tasks every day, and if you enjoy your work you are likely quite engaged in it. In this day-to-day life you have become highly skilled at filtering out distractions. Contrast this with your vacation, where you have to do all kinds of non-routine tasks just to get to your destination, and then have entire days to fill by yourself.

If on those days you do all kinds of activities you do not usually do, that is all novelty for your brain. These novel things take more mental processing power and occupy more mental space. Your filters don’t work there, and hence everything seems to last longer. This is noticeable in the moment, but also afterwards, when you recount your novel experiences to others.

The reverse is also true: if you do nothing at all on your holiday, you will experience boredom, which also makes time appear to pass more slowly, at least in the moment, perhaps not in the retelling. Hence the benefit of holidays for altering your perception of time: whether you do something or just sit there, either way it slows down time as you experience it in the moment.


This same phenomenon, of things seeming to take much longer than they actually do, also occurs when something physically exciting is happening. People can overestimate the actual duration of an event by orders of magnitude.

I once had the genius idea to step into a wooden roller coaster, after not having been in one for many years and not remembering how much I actually disliked such experiences. While the cars were being pulled up, I started to remember that roller coasters were not a pleasant experience, but by then it was too late. As the cars were released at the apex and my stomach started to turn, I had no option but to simply endure it. That ride probably took only a minute or two, but it seemed way longer than that.

The Brain

Like anything in the reality you experience, time perception is a construct of your brain. And as your body becomes less agile with age, so does your brain. In fact, your brain spends the most energy on perceiving new things when you are about five years old, and this tapers off from that point onward.

Consider that as you get older, you have had more opportunity to learn. The more you learn, the more complex the networks in your brain that represent what you have soaked up. The size and complexity of the webs of connected neurons in your brain increase, which leads to longer paths that signals need to traverse.

When these paths themselves start to age, they degrade, offering more resistance to the flow of signals. This causes the rate at which mental images are acquired and processed to decrease as you get older: chronoception changes. Since your brain perceives fewer new images in the same amount of time, it seems as though time is passing more quickly, while in fact it is your own brain slowing down. This is an interesting form of perceptual relativity: the world around you is not going faster, you are going slower relative to it.

Your brain also becomes better at filtering out signals irrelevant to whatever you are doing. This shows, for example, when something small changes in an environment you have been in for a long time. It is very common not to notice the change for a while, since you have tuned out certain details of your surroundings. The net effect is not only that you see fewer images, but that you also see less detail in those images. A complete change of environment can of course work wonders here.


We know that the older we get, the faster time seems to pass, but the question is: by how much? For people in their early twenties, physical time and chronoception are almost equal: they experience time approximately as it passes in physical reality. Seniors, between sixty and eighty, are off in their estimates by approximately twenty to twenty-five percent.

This leads me to a rough rule of thumb: what on average feels like a week to a twenty-year-old feels like about five and a half days to a senior. However, that is an average; it strongly fluctuates based on the moment-to-moment experience.

Like anything you experience, chronoception is a construct of your brain. It seems that as we get older, we gradually, literally, lose track of time. One of the few ways to mitigate this to some extent is to expose yourself to novelty in any form. In short: go to new places, learn new things and meet new people. But most of all: enjoy your time.


  1. Kingery, K. (2019). It’s Spring Already? Physics Explains Why Time Flies as We Age.
  2. Muller, D. (2016). Why Life Seems to Speed Up as We Age.
  3. Livni, E. (2019). Physics explains why time passes faster as you age.
  4. Haden, J. (2017). Science Says Time Really Does Seem to Fly as We Get Older.
  5. Bonwit, H. (2012). Time Dilation & Back to the Future.
  6. Kiener, M. (2015). Why Time Flies.
  7. Spencer, B. (2017). Time Perception.

The Machine Learning Myth

“I was recently at a demonstration of walking robots, and you know what? The organizers had spent a good deal of time preparing everything the day before. However, in the evening the cleaners had come to polish the floor. When the robots started, during the real demonstration the next morning, they had a lot of difficulty getting up. Amazingly, after half a minute or so, they walked perfectly on the polished surface. The robots could really think and reason, I am absolutely sure of it!”

Somehow the conversation had ended up here. I stared with a blank look at the lady who was trying to convince me of the robot’s self-awareness. I was trying to figure out how to tell her that she was a little ‘off’ in her assessment.

As science-fiction writer Arthur C. Clarke said: any sufficiently advanced technology is indistinguishable from magic. However, conveying this to someone with little knowledge of ‘the magic’, other than that gleaned from popular fiction, is hard. Despite trying several angles, I was unable to convince this lady that what she had seen the robots do had nothing to do with actual consciousness.

Machine learning is all the rage these days; demand for data scientists has risen to levels similar to that for software engineers in the late nineties. Jobs in this field are among the best paying relative to the number of years of working experience. Is machine learning magic, mundane, or perhaps somewhere in between? Let’s take a look.

A Thinking Machine

Austria, 1770. The court of Maria Theresa buzzes. The chandeliers mounted on the ceiling of the throne room cast long shadows on the carpeted floor. Flocks of nobles arrive in anticipation of the demonstration about to take place. After a long wait, a large cabinet is wheeled in, overshadowed by something attached to it. It looks like a human-sized doll. Its arms lean over the top of the cabinet. In between those puppet arms is a chess board.

The Mechanical Turk by Wolfgang von Kempelen

The cabinet is accompanied by its maker, Wolfgang von Kempelen. He opens the cabinet doors, revealing cogs, levers and other machinery. After a dramatic pause he reveals the true name of this device: the Turk. He explains it is an automated chess player and invites Empress Maria Theresa to play. The crowd sneers and giggles. However, scorn turns to fascination as Maria Theresa’s opening move is answered by a clever counter move from the Turk’s mechanical hand.

To anyone in the audience the Turk looked like an actual thinking machine. It moved its arm just like people do, it took time between moves to think just like people do, it even corrected invalid moves of its opponent by reverting them, just like people do. Was the Turk, so far ahead of its time, really the first thinking machine?

Unfortunately: no, the Turk was a hoax, an elaborate illusion. Inside the cabinet a real person was hidden. A skilled chess player, who controlled the arms of the doll-like figure. The Turk shows that people can see and have an understanding of an effect, but fail to correctly infer its cause. The Turk’s chess skills were not due to its mechanics. Instead they were the result of clever concealment of ‘mundane’ human intellect.

The Birth of Artificial Intelligence

It would take until the mid-1950s before a group of researchers at Dartmouth started the field of artificial intelligence. They believed that if the process of learning something can be precisely described in sufficiently small steps, a machine should be able to execute these steps as well as a human can. Building on existing ideas that emerged in the preceding years, the group set out to lay some of the groundwork for breakthroughs to come in the ensuing two decades. Those breakthroughs did indeed come, in the form of path finding, natural language understanding and even mechanical robots.

Arthur Samuel

Around the same time, Arthur Samuel of IBM wrote a computer program that could play checkers. Computer-based checkers opponents had been developed before, but Samuel’s program could do something unique: adapt based on what it had seen before. Roughly, he made the program store the moves that had led to won games in the past, and replay those moves in appropriate situations in the current game. Samuel referred to this self-adapting process as machine learning.
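A toy sketch of that idea might look like the following. The `MoveMemory` class is a hypothetical construction of my own for illustration; Samuel’s actual program was far more sophisticated, using board evaluation and look-ahead search as well.

```python
import random

class MoveMemory:
    """Toy self-adapting player: replays moves that previously led to wins."""

    def __init__(self):
        self.winning_moves = {}  # position -> move that once led to a win

    def record_win(self, game):
        """game: a list of (position, move) pairs from a game that was won."""
        for position, move in game:
            self.winning_moves[position] = move

    def choose(self, position, legal_moves):
        """Prefer a remembered winning move if it is legal here, else guess."""
        move = self.winning_moves.get(position)
        if move in legal_moves:
            return move
        return random.choice(legal_moves)
```

The more won games it records, the more positions it can answer from memory instead of chance: the program adapts based on what it has seen before.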

What then really is the difference between artificial intelligence and machine learning? Machine learning is best thought of as a more practically oriented subfield of artificial intelligence. With mathematics at its core, it could be viewed as a form of automated applied statistics. At the core of machine learning is finding patterns in data and exploiting those to automate specific tasks. Tasks like finding people in a picture, recognizing your voice or predicting the weather in your neighbourhood.

In contrast, at the core of artificial intelligence is the question of how to make entities that can perceive the environment, plan and take action in that environment to reach goals, and learn from the relation between actions and outcomes. These entities need not be physical or sentient, though that is often the way they are portrayed in (science) fiction. Artificial intelligence intersects with other fields like psychology and philosophy, as discussed next.

Philosophical Intermezzo: Turing and the Chinese room

Say a machine can convince a real person that it, the machine, is a human being. By this very act of persuasion, you could say the machine is at least as cognitively able as that human. This famous claim was made by mathematician Alan Turing in the early fifties.

Turing’s claim did not sit well with philosopher John Searle. He proposed the following thought experiment: imagine Robert, an average Joe who speaks only English. Let’s put Robert in a lit room with a book and two small openings for a couple of hours. The first opening is for people to put in slips of paper with questions in Chinese: the input. The second opening is for depositing the answers to these questions, written on new slips, also in Chinese: the output.

Searle’s Chinese Room

Robert does not know Chinese at all. To help him, there is a huge book in this ‘Chinese’ room. In this book he can look up which symbols he needs to write on the output slips, given the symbols he sees on the input slips. Searle argued that no matter how many slips Robert processes, he will never truly understand Chinese. After all, he is only mapping input questions to output answers without understanding the meaning of either. The book he uses does not ‘understand’ anything either, as it contains just a set of rules for Robert to follow. So, this Chinese room as a whole can answer questions, yet none of its components actually understands or can reason about Chinese!

Replace Robert in the story with a computer, and you get a feeling for what Searle tries to point out. Consider that while a translation algorithm may be able to translate one language to the other, being able to do that is insufficient for really understanding the languages themselves.

The root of Searle’s argument is that knowing how to transform information is not the same as actually understanding it. Taking that one step further: in contrast with Turing, Searle’s claim is that merely being able to function on the outside like a human being is not enough for actually being one.

The lady I talked with about the walking robots had a belief. Namely, that the robots were conscious based on their adaptive response to the polished floor. We could say the robots were able to ‘fool’ her into this. Her reasoning is valid under Turing’s claim: from her perspective the robots functioned like a human. However, it is invalid under Searle’s, as his claim implies ‘being fooled’ is simply not enough to prove consciousness.

As you let this sink in, let’s get back to something more practical that shows the strength of machine learning.

Getting Practical with Spam

In the early years of this century, spam emails were on the rise and there seemed to be no defense against the onslaught. So too thought Paul Graham, a computer scientist whose mailbox was overflowing like everyone else’s. He approached the problem like an engineer: by writing a program. His program looked at an email’s text and filtered out those that met certain criteria. This is similar to making filter rules to sort emails into different folders, something you have likely set up for your own mailbox.

Graham spent half a year manually coding rules for detecting spam. He found himself in an addictive arms race with the spammers, each trying to outsmart the other. One day he figured he should look at the problem in a different way: using statistics. Instead of coding manual rules, he simply labeled each email as spam or not spam. This resulted in two labeled sets of emails: one consisting of genuine emails, the other of spam.

He analyzed the sets and found that spam emails contained many typical words, like ‘promotion’, ‘republic’, ‘investment’, and so forth. Graham no longer had to write rules manually. Instead he approached this with machine learning. Let’s get some intuition for how he did that.

Identifying Spam Automagically

Imagine that you want to train someone to recognize spam emails. Your first step is showing the person many examples of emails labeled either genuine or spam. That is: for each of these emails, the person is told explicitly whether it is genuine or spam. After this training phase, you put the person to work classifying new, unlabeled emails. The person assigns each new email a label, spam or genuine, based on its resemblance to the emails seen during the training phase.

Replace the person in the previous paragraph with a machine learning model and you have a conceptual understanding of how machine learning works. Graham took both email sets he had created: one consisting of spam emails, the other of genuine ones. He then used these to train a machine learning model that looks at how often words occur in text. After training, he used the model to classify new incoming emails as either spam or genuine. The model would, for example, mark an email as spam if the word ‘promotion’ appeared in it often: a relation it ‘learned’ from the labeled emails. This approach made his hand-crafted rules obsolete.
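As a rough sketch of this word-frequency idea, consider the following. It is a heavily simplified cousin of Graham’s actual Bayesian filter; the function names and the smoothing constants are my own choices.

```python
from collections import Counter

def train(emails):
    """emails: a list of (text, label) pairs, label is 'spam' or 'genuine'."""
    counts = {'spam': Counter(), 'genuine': Counter()}
    totals = {'spam': 0, 'genuine': 0}
    for text, label in emails:
        for word in text.lower().split():
            counts[label][word] += 1  # how often each word occurs per label
            totals[label] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Crude per-word likelihood ratio: above 1 leans spam, below 1 genuine."""
    score = 1.0
    for word in text.lower().split():
        # +1 / +2 smoothing keeps unseen words from zeroing out the score.
        p_spam = (counts['spam'][word] + 1) / (totals['spam'] + 2)
        p_genuine = (counts['genuine'][word] + 1) / (totals['genuine'] + 2)
        score *= p_spam / p_genuine
    return score
```

An email full of words that appeared mostly in the spam set, like ‘promotion’, scores high; an email resembling the genuine set scores low.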

Your mailbox still works with this basic approach. By explicitly labeling spam messages as spam you are effectively participating in training a machine learning model that can recognize spam. The more examples it has, the better it will become. This type of application goes to the core of what machine learning can do: find patterns in data and bind some kind of consequence to the presence, or absence, of a pattern.

This example also reveals the difference between software engineering and data science. A software engineer builds a computer program explicitly by coding rules of the form: if I see this then do that. Much like Graham tried to initially combat spam. In contrast, a data scientist collects a large amount of things to see, and a large amount of subsequent things to do, and then tries to infer the rules using a machine learning method. This results in a model: essentially an automatically written computer program.

Software Engineering versus Machine Learning

If you would like a deeper conceptual understanding and don’t shy away from something a bit more abstract: let’s dive a little bit deeper into the difference between software engineering and machine learning. If you don’t: feel free to skip to the conclusion.

As a simple intuition you can think of the difference between software engineering and machine learning as the difference between writing a function explicitly and inferring a function from data implicitly. As a minimal contrived example: imagine you have to write a function f that adds two numbers. You could write it like this in an imperative language:

function f(a, b):
    c = a + b
    return c

This is the case where you explicitly code a rule. Contrast this with the case where you no longer write f yourself. Instead you train a machine learning model that produces the function f based on many examples of inputs a and b and output c.

train([(a, b, c), (a, b, c), (...)]) -> f

That is effectively what machine learning boils down to: training a model is like writing a function implicitly, by inferring it from the data. Afterwards, f can be used on new, previously unseen (a, b) inputs. If it has seen enough examples, it will perform addition on those unseen inputs, which is exactly what we want. However, consider what happens if the model were fed only one training sample: (2, 2, 4). Since 2 * 2 = 4 and 2 + 2 = 4, it might yield a function that multiplies its inputs instead of adding them!
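To make this concrete, here is one minimal way such a train function could work for the addition example: fit a linear function c = w1 * a + w2 * b to the samples with gradient descent. This is a sketch of the idea under that assumed model, not of any particular library.

```python
def train(samples, steps=2000, rate=0.01):
    """Infer f(a, b) = w1*a + w2*b from (a, b, c) samples by gradient descent."""
    w1, w2 = 0.0, 0.0
    for _ in range(steps):
        for a, b, c in samples:
            error = (w1 * a + w2 * b) - c  # how far off the prediction is
            w1 -= rate * error * a         # nudge the weights to shrink it
            w2 -= rate * error * b
    return lambda a, b: w1 * a + w2 * b

# Four addition examples are enough to pin down w1 = w2 = 1 here.
f = train([(1, 2, 3), (2, 5, 7), (4, 1, 5), (3, 3, 6)])
```

Note that a single sample such as (2, 2, 4) would leave infinitely many (w1, w2) pairs that fit, echoing the ambiguity described above.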

There are roughly two types of functions you can generate, corresponding to two types of tasks. The first, called regression, returns a continuous value, as in the addition example above. The second, called classification, returns a value from a limited set of options, as in the spam example, the simplest set of options being ‘true’ or ‘false’.

Effectively we are giving up precise control over the function’s exact definition. Instead we move to the level of specifying what should come out given what we put in. What we gain from this is the ability to infer much more complex functions than we could reasonably write ourselves. This is what makes machine learning a powerful approach for many problems. Nevertheless, every convenience comes at a cost and machine learning is no exception.


The main challenge for using a machine learning approach is that large amounts of data are required to train models, which can be costly to obtain. Luckily, recent years have seen an explosion in available data. More people are producing content than ever, and wearable electronics yield vast streams of data. All this available data makes it easier to train useful models, at least for some cases.

A second caveat is that training models on large amounts of data requires a lot of processing power. Fortunately, rapid advances in graphics hardware have led to orders of magnitude faster training of complex models. Additionally, these hardware resources can be easily accessed through cloud services, making things much easier.

A third downside is that, depending on the machine learning method, what happens inside the created model can become opaque. What does the generated function do to get from input to output? It is important to understand the resulting model, to ensure it behaves sanely on a wide range of inputs. Tweaking a model is more art than science, and the risk of harmful hidden biases in models is real.

Applying machine learning is no replacement for software engineering, but rather an augmentation for specific challenges. Many problems are far more cost-effective to solve with a small set of explicitly written rules than with endless tinkering on a machine learning model. Machine learning practitioners are best off knowing not only the mechanics of each specific method, but also whether machine learning is appropriate to use at all.


Many modern inventions would look like magic to people from one or two centuries ago. Yet knowing how they really work shatters the illusion. The last decade has seen a huge increase in the use of machine learning to solve a wide range of problems. People unfamiliar with the underlying techniques often both underestimate and overestimate what machine learning methods can do. This leads to unrealistic expectations, fears and confusion. From apocalyptic prophecies to shining utopias: machine learning myths abound where we would be better served staying grounded in reality.

This is not helped by naming. Many people associate the term ‘learning’ with the way humans learn, and ‘intelligence’ with the way people think. Though sometimes a useful conceptual analog, it is quite different from what these methods currently actually do. A better name would have been data-based function generation. However, that admittedly sounds much less sexy.

Nevertheless, machine learning at its core is not much more than generating functions based on input and, usually, output data. With this approach it can deliver fantastic results on narrowly defined problems. This makes it an important and evolving tool, but like a hammer for a carpenter: it is really just a tool. A tool that augments, rather than replaces, software engineering. Like a hammer is limited by laws of physics, machine learning is fundamentally limited by laws of mathematics. It is no magic black box, nor can it solve all problems. However, it does offer a way forward for creating solutions to specific real-world challenges that were previously elusive. In the end it is neither magic nor mundane, but is grounded in ways that you now have a better understanding of.


  1. Singh, S. (2002). Turk’s Gambit.
  2. Russell, S. & Norvig, P. (2003). Artificial Intelligence.
  3. Halevy, A. & Norvig, P. & Pereira, F. (2010). The Unreasonable Effectiveness of Data.
  4. Lewis, T. G. & Denning, P. J. (2018). Learning Machine Learning, CACM 61:12.
  5. Graham, P. (2002). A Plan for Spam.
  6. O’Neil, C. (2016). Weapons of Math Destruction.
  7. Stack Overflow Developer Survey (2019).


If you do not sleep, you will die within months. And though it is unlikely that you would voluntarily stay awake for days at a time, modern society, with its many distractions, makes it tempting to trade a little bit of sleep for a little bit of something else every day. Is this a wise trade-off? To answer that question, let’s dive deeper into sleep.


A decade ago I visited the music festival Sziget in Budapest. I camped there with a group of friends near the northern tip of the ‘party island’. After several days, thanks to a combination of staying up very late and sleeping with an eye mask and earplugs, my usual waking time had settled around noon. Despite the intense heat buildup in my tent during the morning hours, I still had an actual, though shifted, sleep rhythm. Where does this rhythm come from?

Living for several days on a music festival terrain, while an unforgettable experience in many ways, was not very convenient. Still, this inconvenience does not come remotely close to that of living in a cave without exposure to daylight for a month. This is what the father of modern sleep research, Nathaniel Kleitman, did in 1938, together with one of his students. Experimenting on himself was not new to Kleitman: he had also kept himself awake on Benzedrine for up to a hundred and eighty hours at a time just to measure the effects [2]. Little did he know that a form of this substance would later become a popular party drug for staying awake and boosting energy at festivals: speed.

Student Bruce Richardson (Left) and Nathaniel Kleitman (Right) in the cave

Back to Kleitman’s cave experiment: it revealed two important things. Firstly, even without sunlight, our bodies still self-regulate sleep and wakefulness. Secondly, our sleep cycle is not exactly twenty-four hours, but a little longer. This observation gave rise to the concept of a rhythm of approximately (circa) a day (diem): the circadian rhythm.

The presence of sunlight resets the rhythm and keeps it from shifting. This is quite similar to the way your computer, whose clock is not entirely accurate and suffers from drift, synchronizes its time at regular intervals by contacting computers with more accurate clocks through the Network Time Protocol. Indeed, some alarm clocks come equipped with radio-signal-based synchronization, which does essentially the same thing.


The first few days at the Sziget festival, when my rhythm had not shifted as much yet, I felt very sleepy near the end of the day when I stayed up late. But I noticed that when I stayed up long enough, there was a moment, usually in the early morning, where I got over the sleepiness and started feeling more alert again. Since my rhythm had not shifted yet: what caused this?

Besides the circadian rhythm, our bodies have another mechanism to regulate sleep. As long as you are awake, your sleep pressure rises through the buildup of the neurochemical adenosine in your brain. The higher the sleep pressure, the more you feel like sleeping. This pressure is referred to as the sleep drive, whereas the circadian rhythm is called the wake drive.

If you were ever in a lecture where another student fell asleep on their desk with a loud thud, you know what high sleep pressure can do: it makes you nod off, drifting in and out of slumber, even if you really want to pay attention. While fairly harmless during a lecture, this can be extremely dangerous in other situations, for example when driving.

Rising sleep urge as result of staying awake. The arc is the rising sleep pressure (sleep drive). The dotted line is the circadian rhythm (wake drive). The distance between the two represents how sleepy you feel, expressed in words as the urge to sleep.

The interesting thing is that the circadian rhythm and sleep pressure are independent ‘systems’. As your sleep pressure continues to build up, your circadian rhythm may already be swinging back up, making you feel awake and more alert: exactly what I was experiencing the first couple of days at the festival. The figure above shows this: once you stay awake long enough the distance between the sleep pressure arc (top) and the circadian rhythm (bottom) becomes less, only to hit you much harder again as the circadian rhythm comes back down, resulting in a very strong urge to sleep.

Reset your brain

A good night consists of seven to nine hours of sleep opportunity; subtract half an hour to an hour from that to get the actual time slept. Back at the Sziget festival, not everyone had the fortune of a good night’s sleep. One evening we met a group of girls who had set up their tent next to the twenty-four-hour DJ booth. I probably don’t need to tell you that pitching a tent there did not bode well for their sleep opportunity. Hence, we spent the night socializing with them. As we sat and talked on a set of wooden benches, time flew by, and before we knew it the sun came up again. The festival terrain was nearly silent, until we heard a loud rumbling sound.

It was around six in the morning, and the cleaning crew rolled in: a literal wall of people that moved from the front to the back of the festival terrain picking up garbage, followed by truck-mounted water sprayers that cleaned the streets. An impressive sight.

Using water to clean and cool down the streets at Sziget [3]

In the deep stages of sleep, something similar to what that cleaning crew did happens in your brain. Electrical waves sweep from the front to the back of your brain. This consolidates what you experienced and learned the previous day, and at the same time prepares you to sponge up new things the next day. In fact, not sleeping for one night reduces your ability to form new memories the next day by a whopping forty percent.

Better sleeping through chemistry?

Sitting on those benches at the festival, pulling an all-nighter, was bad not only for memory retention, but also for overall wakefulness. The next day I had serious problems staying awake, thanks to the aforementioned buildup of sleep pressure. As a quick fix I turned to coffee: the stimulant of choice to keep one awake under such circumstances.

The caffeine in coffee blocks adenosine, the sleep-pressure chemical, from binding to its receptors in the brain. This makes you feel more alert while your body is still tired. A downside is that once you do sleep after having coffee, you will have trouble reaching the deep stages of sleep and thus acquire less deep sleep overall. Ironically, this lack of deep sleep can trap you in a dependency cycle where you actually need coffee in the morning to feel refreshed after poor-quality sleep.

It was a festival, so not only did we drink coffee. There was also plenty of cheap alcohol to go around. Beer, wine, shots, you name it, they had it. This stuff had the opposite effect of coffee: it made me feel sleepy. Alcohol, after all, is a depressant.

Alcohol makes you fall ‘asleep’ faster by sedating you, but as a side effect it heavily fragments your sleep. You wake up much more often than you usually would, though you do not consciously register this. This is why you feel extremely drowsy the day after an evening of heavy drinking. A secondary effect is that you do not reach the REM stage of sleep, which is vital for learning and memory formation. The debilitating effect of alcohol on memory formation is similar to that of not sleeping at all.


Regardless of the cause: what else, besides memory and learning problems, happens when you do not sleep enough? When I came back from the festival, I had a throat infection. Not surprising: sleeping only four hours for just one night, instead of eight, reduces the presence of natural killer cells by over seventy percent. Doing this repeatedly left me much more vulnerable to infections, and there were plenty of germs floating around. For such a short time, and the great fun I had there, that was an acceptable trade-off. However, that luxury of choice does not apply to everyone. Involuntary chronic sleep deprivation incurs a heavy toll.

A chronic lack of sleep increases the risk of physical diseases, like cancer, heart disease and diabetes, as well as mental illnesses, including depression, anxiety and suicidality. Sleeping too little generally means that your life expectancy goes down dramatically. The evidence is strong enough to suggest that if you were to consistently sleep only 6.75 hours per night, you would likely live only into your early sixties.


What can you do to improve sleep? Three tips: firstly, maintain a regular schedule, which means: go to sleep and wake up at the same time every day, also during the weekends. Secondly, take care of your sleep ‘hygiene’: make sure your room is sufficiently dark and cool, around eighteen degrees Celsius, and avoid screens before you go to bed. Thirdly, avoid drinking coffee and alcohol, especially shortly before you go to bed, but also in the late afternoon and early evening.

Going to a music festival and losing some sleep by choice, like I did, is not a big deal. The real problem is that two thirds of all adults worldwide consistently fail to get the recommended minimum sleep opportunity of seven to nine hours per night. Please, don’t be one of them. Sleeping is the single most important thing you can do to reset your brain and body every day. Trading a little bit of sleep for a little bit of something else is not worth it in the end.


  1. Walker, M. (2017). Why We Sleep.
  2. Levine, J. (2016). Self-experimentation in Science.
  3. Forbes, G. (2008). Sziget Festival Photos.


“How can I save what I have made?” This is the question a young girl asked me in her school’s classroom recently. In the past hour she had created her own website, about her and the books that she loved. Standing in front of me with her hands clamped around the loaner laptop, she wanted to know how she could save her website. Her protectiveness revealed that she was not asking because she wanted to show it off, but because she wanted to continue building it at home.

Primary schools are not my day-to-day work environment. That day I participated in a voluntary event, organized by my employer, with the goal of making a societal impact. Among the various available options, I had chosen to teach children in disadvantaged neighbourhoods about computer programming. One of the assignments was making a website.

The girl in question on the left

The girl who came to me with the question about how to save her website was one of a group of four that I was actively helping that day. All the girls in that small group were eager to learn, but she took the assignments especially seriously. She had the potential, the motivation and the attention span. This experience got me thinking: what would her chances be of actually making a career in IT?

There is a huge demand for skilled IT personnel in the Netherlands: forty-two percent of employers in IT struggle to find people, compared to only eighteen percent of employers in general [1]. Given this demand, it should be easy for her to launch a career in IT, right?


Many people want equal career opportunities for everyone, but the sad truth is that we do not live in a society that is meritocratic enough to meet that ideal. I see three major challenges for this girl: where she lives, the ethnic group she is part of, and her gender.

Firstly, she lives in a disadvantaged neighbourhood. Sadly, the statistics about this are not at all uplifting [2]. Children who grow up in these neighbourhoods score lower on tests, experience more health and psychological problems, and their poor socio-economic status tends to extend across generations. The only real exception is children with resilient personalities: the effects on them are smaller, but they also move out of such neighbourhoods much earlier in life [3, 4]. Nevertheless, even if she has a resilient personality, the fact remains that her neighbourhood provides less access to modern technology.

Secondly, even if she were to go into IT and become, say, a software developer, she is part of an ethnic minority. The unfortunate fact is that minorities, also in the Netherlands, have a harder time finding a job and holding on to it: eight percent of highly educated minorities have no job, whereas this is only about three percent for the majority group: a telling sign. Even disregarding all this, the question remains: would she end up in IT at all?

The Gender Trap

When I was choosing a school at which to learn the craft of software development, in the late nineties, I attended various information events. I distinctly remember one such event. Like almost everyone there, I was accompanied by a parent: my mother. At the end of the presentation, she was the one parent in the room who dared to ask the question: “are there any girls studying here?” The unsatisfying answer, after a confused stare and some silence, was: “no, there aren’t any, this is a technical school.”

This brings me to my third point: has that imbalance improved at all since then? We would perhaps expect that half of the people working in IT are female and the other half male. However, present-day workforce numbers show otherwise. The country that seems to do best is Australia, with nearly three out of ten IT workers being female, whereas the Netherlands scores poorly at only half that: fewer than two out of ten [5].

Is that due to a difference in capability? Are males ‘naturally’ better at these more technical subjects than females? There indeed is a bit of a developmental difference between boys and girls, but it may not be what you think.

Let’s take the most abstract subject as an example: math. It may appear that boys are better at math, but that is not really true. Averaged out over their entire childhood into adolescence, boys and girls are equally good at math. However, girls tend to develop language skills faster and earlier. Since boys lag behind in that development, it merely appears that they are better at math. Don’t be fooled: girls are not worse at math, they are better at language [6].

This difference between the genders disappears throughout adolescence, but by that time, paths have already been chosen. This is a sad consequence of our education systems pressuring children to make life-affecting choices early on, which is not helped by stereotypes reinforced through gender-specific toys and activities.


This brings me to another question: to what extent can parents influence their children? Specifically, can average parents, by parenting, influence their children’s personality and intelligence? How much do you think that influence is? Think about it for a minute.

This may come as a shock, but the effect of parenting on children’s personality and intelligence is effectively nonexistent. To better understand this, consider: if anything parents do affected their children in a systematic way, then children growing up with the same parents would turn out more similar than children growing up with different parents, and that is simply not the case [7].

So, what does influence a child’s personality and intelligence? Half of it is genes; the other half is experiences unique to the child, shaped largely by their peers. To some extent we all already know this: whether adolescents smoke, get into trouble with the law, or commit serious crimes depends far more on what their peers do than on what their parents do [8].

Don’t misunderstand me. Parents can inflict serious harm on their children in myriad ways, leaving scars for a lifetime, but that is not what the average parent does. On the flip side, parents can provide their children with opportunities to acquire skills and knowledge, for example through reading, playing a musical instrument, or even exploring computer programming. Hence, they can shape the conditions for their children to get the right kinds of unique experiences.


The odds, then, are stacked against the girl who asked me how she could save her website. Looking back, why was that volunteering day so important? Was I really there to teach her computer programming? No, not really. I was there to provide her with a unique experience that is rare in her regular day-to-day environment. It is my hope that this has cemented a positive association with IT in her, which may just tip the balance the right way.

As a final word: I think these kinds of volunteering efforts are important. Whether you work in IT or elsewhere, please consider giving others unique, inspirational experiences to save, cherish and build upon. Because despite her odds, what she experienced may in the end actually make the difference for both her and her friends.


  1. Kalkhoven, F. (2018). ICT Beroepen Factsheet.
  2. Broeke, A. ten (2010). Opgroeien in een slechte wijk.
  3. Mayer & Jencks (1989). Growing Up in Poor Neighborhoods.
  4. Achterhuis, J. (2015). Jongeren in Achterstandswijk.
  5. Honeypot (2018). Women in Tech Index 2018.
  6. Oakley, B. (2018). Make your Daughter Practice Math.
  7. Turkheimer, E. (2000). Three Laws of Behavior Genetics and What They Mean.
  8. Pinker, S. (2002). The Blank Slate.


When I lived at my parents’ place, I was always amazed by how much time my father spent on the daily news. Once during breakfast, usually in the form of a newspaper; a second time during lunch; and a third time watching the evening news. He must have spent upwards of an hour every day consuming news. Why did he do that?

In this post I will dig into attention mechanisms. Specifically: where they come from, how they work and what you can do to better cope with them. To get a better grip on this, we will first dive into some history.


In 1833 the leading newspaper of New York was the New York Enquirer, costing about six cents a copy. Not very expensive by today’s standards, but at the time it was considered a luxury item, an amount not many ordinary people had to spare. Hence, it is no surprise that someone saw a business opportunity in this [1].

Benjamin Day

That someone was Benjamin Day. In that same year he launched a new newspaper: the New York Sun. By late 1834 his new paper had become the leading one in the city, amassing five thousand daily readers. Why? A copy of the New York Sun cost only one cent, vastly more affordable for the average person back then. His rivals were amazed: how could he produce a newspaper that cheaply, below the cost of printing? One word: advertising. Day was not in the business of selling a newspaper to his readers; he was in the business of selling the attention of his readers to his advertisers. He did this by mixing news, often the sensational kind that easily catches attention, with advertisements.

Advertising, of course, is nothing new. Back then, though, it was mostly just text, reminiscent of personal ads. That quickly changed over the years. During the forties and fifties, large billboards became increasingly common. Our homes were still a sanctuary devoid of much of that visual onslaught, but this quickly changed as televisions became commonplace. Advertisements sneaked into our living rooms, and that was not the end of it. The last two decades have seen ads move even physically closer: into our very hands, in the form of one of the most effective advertisement delivery devices ever invented: the smartphone.

Britney Spears in a Pepsi television ad

Nevertheless, let’s be honest: no one really wants to see those ads. People do not voluntarily choose to watch advertisements. So, how do we get them to do this anyway?


To understand the mechanism behind this, we go back in history once again, this time to the middle of the twentieth century. Meet B. F. Skinner, working at the time at Harvard University. Skinner had ambitions to become a writer, but grew disillusioned with the craft and instead ended up becoming an influential figure in behavioural psychology. During the forties and fifties he conducted experiments with animals, one of which focused on pigeons.

B. F. Skinner and his pigeons

In this particular experiment Skinner placed a pigeon in a box. This box contained a small circular disc that could be pecked. In the first condition, pecking the disc would always release a little food pellet for the pigeon to eat. In the second condition, it would sometimes release a food pellet and sometimes not: an unpredictable pattern. Which condition do you think made the pigeon learn the pecking behaviour fastest? That’s right: the second one, where it was unpredictable whether the behaviour would be rewarded with a food pellet.

Do those results on pigeons translate to humans? What’s the closest analogue we have? That would be gambling. Remember those rows of old ladies sitting at one-armed bandits in casinos? The reason they keep sitting there, gambling away their money, is the exact same reason those pigeons kept pecking that circular disc: a variable reward. Sometimes you get something, sometimes you lose something, and you don’t know which one is coming next.
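The difference between the two schedules is easy to simulate. A minimal sketch with arbitrary numbers: both schedules pay out at the same average rate, but only the variable one is unpredictable from one peck to the next, which is what makes it so much more reinforcing.

```python
import random

def fixed_ratio(n_pecks, ratio=3):
    """Reward exactly every third peck: fully predictable."""
    return [1 if (i + 1) % ratio == 0 else 0 for i in range(n_pecks)]

def variable_ratio(n_pecks, p=1/3, seed=42):
    """Reward each peck with probability p: same average rate, unpredictable."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n_pecks)]

fixed = fixed_ratio(9000)
variable = variable_ratio(9000)

# Average payout is (nearly) identical...
print(f"fixed: {sum(fixed) / len(fixed):.3f}, variable: {sum(variable) / len(variable):.3f}")
# ...but the fixed schedule repeats a rigid 0,0,1 pattern, while the
# variable schedule gives no clue about when the next reward comes.
print(fixed[:9], variable[:9])
```

The pigeon on the fixed schedule can predict exactly when food arrives; the one on the variable schedule cannot, and it is that uncertainty that keeps the pecking going.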


Now you may be thinking: I don’t gamble, I never go to the casino, so: no risk for me. Well, you may not go to the casino, but there is a game you play every day. I am referring to the game of scrolling through news feeds, responding to notifications on your telephone, and checking if you have any new e-mails. All of these share the same characteristic: sometimes there’s something interesting for you, sometimes there’s not, and you don’t know which one is coming next.

I think it is this variable reward that got my father hooked on watching the news so often. Even if most of it is not interesting, or repeats what he had already seen or read that same day, that is not the point: every so often there is something new in there, and that is what kept him coming back for more.

Ads in the mix

Now, think back to the advertisements. Like a news reel or newspaper, a news feed is the prime example of something that gives you a variable reward. It keeps you scrolling so you can get a small dopamine reward for finding those interesting things. If you put some advertisements in that news feed, regardless of whether they are interesting or relevant, people will be exposed to them, giving you the perfect ‘voluntary’ ad delivery mechanism.

A rule of advertising is that the more often people see your ad, the better the memorization effect. That is why the same commercial is repeated on broadcast television within the same block. It does not matter what you think of the ad; it matters that you start recognizing it and that it starts to feel familiar.

The combination of advertising with other content of mixed relevance to the reader is what drives many modern social media platforms: Facebook, Instagram, LinkedIn, anything with a feed. I hope you see that this is essentially no different from the mechanism Benjamin Day used with the New York Sun to outwit his competition. If you are not paying for it, you can be sure that it is your attention that is being sold to advertisers.


Diving deeper: how would you design a product that exploits this variable-reward vulnerability that people have, both in news feeds and beyond? There is a recipe for this that has proven to work time and time again, called the hook model. It consists of four steps: trigger, action, reward and investment. Let’s go through these steps using an example.

The Hook Model [2]

A trigger may be the notification sound made by your phone: a message that pops up telling you that you have a new message from someone, possibly even showing part of that message to draw you in further. This is your first little hit of dopamine, similar to what you get when you scroll through a feed.

The action is your response. You open the message, read it and start typing something back. You re-read what you typed, consider whether it is an appropriate response, and then finally hit the send button.

Now comes the reward. You get a message back, or if you have posted something on a social network, you may get likes or comments on your post. That feels good: getting approval from others also gives you little shots of dopamine.

All of this feels so good, that it is now time for the last step: you invest. You post something new: a picture, a video or a message to get more of that dopamine. Your content is now in the app that you are using, regardless of whether it is Facebook, Instagram or WhatsApp. All use the same basic mechanism, some in more elaborate ways than others.

Since you are now invested, you are more likely to respond to the next trigger, and you go through the loop again, ad infinitum, reinforcing your behaviour. Sounds creepy, perhaps, but also familiar?
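The self-reinforcing nature of the loop can be caricatured in a few lines of code. Everything here is illustrative: the starting value and the boost per cycle are made up; the point is only that each completed pass makes a response to the next trigger more likely.

```python
def run_hook_loop(cycles, engagement=0.2, boost=0.1):
    """Trace how likely a user is to respond to the next trigger
    after each trigger -> action -> reward -> investment pass."""
    history = []
    for _ in range(cycles):
        # trigger: a notification arrives
        # action: open the app and respond
        # reward: likes, replies, a small hit of dopamine
        # investment: post something new, deepening the habit
        engagement = min(1.0, engagement + boost)
        history.append(round(engagement, 2))
    return history

print(run_hook_loop(5))  # engagement ratchets upward with every pass
```

Each iteration of the loop nudges the response probability up until it saturates, which is exactly the habit-forming ratchet the hook model is designed to produce.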


Now, you may ask: okay, but is it really all that bad? Because I get a lot of value out of the news and social media. Well, that depends on how you look at it. A graph from The Economist shows that the more time people spend in their social media apps, the less happy they are about it.

The findings are based on a time-tracking app called Moment. The chart is admittedly not the easiest to interpret, but there are some clear patterns. Most social media score quite poorly in terms of how happy people are with the amount of time they spend on these platforms; for example, a full two thirds of Instagram users are unhappy with the amount of time they spend there. Furthermore, we see the more general trend that more time spent translates to less happiness for nearly all apps.

Graphs like this always raise the valid question: what is the direction of causality? Perhaps unhappy people are simply drawn to these platforms and stay there for a longer amount of time, or is it the other way around: does staying on these platforms make people less happy? There seems to be at least some evidence towards this latter conclusion [5].

What can you do?

Going back in time, what would I recommend my father do to break his news-watching habit? Well, the first step is awareness. Nowadays, that awareness applies not only to television and news served on the remains of dead trees, but particularly to the kind that comes to you via screens, be it laptops, tablets or smartphones. You have already taken the first step by reading this post, so you are at least aware of the what and the how: good.

In a broader sense, there is genuine concern nowadays about people having short(er) attention spans and showing addictive behaviours towards their social media feeds, all based on this mass-exploitable vulnerability to variable rewards [7]. Fortunately, there are also some concrete things that you can do. I’ll leave you with three tips.

Infinite Feed by Attila Kristó

Three tips

Firstly, limit your time scrolling through feeds. Recognize when you are doing it, realize you are reinforcing the behaviour, and ask yourself: what am I accomplishing by doing this? People far too often grab their phone or tablet because they don’t dare allow themselves to get bored. Do the opposite: embrace boredom.

Secondly, turn off as many notifications as you can, particularly those not generated by another human being, such as notifications produced by an algorithm, and those not specifically directed at you, like messages from app groups that you are part of.

Thirdly, I realize that it may not be easy, but it really does not hurt to put away your smartphone or tablet for a while and do something else. The best way to cure yourself and others of these reinforced behaviours is simply to stop responding, and have something better to do.

In closing, remember: your life is what you pay attention to, so make sure you pay attention to the right things, those things that really matter to you.


  1. Wu, T. (2016). The Attention Merchants.
  2. Eyal, N. (2013). Hooked.
  3. Holiday, R. (2014). If you watch the news …
  4. Cain, D. (2016). Five things you notice when you quit the news.
  5. Economist (2018). How heavy use of social media is linked to mental illness.
  6. Tigelaar, A. S. (2016). Breaking free: how to rewire your brain.
  7. Center for Humane Technology (2018). App Ratings.