Sleep

If you do not sleep you will die within months. Though it is unlikely that you would voluntarily stay awake for days at a time, modern society, with its many distractions, makes it tempting to trade a little bit of sleep for a little bit of something else every day. Is this a wise trade-off to make? To answer that question, let's dive deeper into sleep.

Rhythm

A decade ago I visited the music festival Sziget in Budapest. I camped there with a group of friends near the northern tip of the ‘party island’. After several days, thanks to a combination of staying up very late and sleeping with an eye mask and earplugs, my usual waking time had settled around noon. Despite the intense heat buildup in my tent during the morning hours, I still had an actual, though shifted, sleep rhythm. Where does this rhythm come from?

Living for several days on a music festival terrain, while an unforgettable experience in many ways, was not very convenient. Nevertheless, this inconvenience does not come remotely close to that of living in a cave without exposure to daylight for a month. This is what the father of modern sleep research, Nathaniel Kleitman, did in 1938 together with one of his students. Experimenting on himself was not new to Kleitman: he had also kept himself awake on benzedrine for up to a hundred and eighty hours at a time just to measure the effect [2]. Little did he know that a form of this substance would later become a popular party drug at festivals for staying awake and boosting energy: speed.

Student Bruce Richardson (Left) and Nathaniel Kleitman (Right) in the cave

Back to Kleitman’s cave experiment: it revealed two important things. Firstly, even without sunlight, our bodies can still self-regulate sleep and wakefulness. Secondly, our sleep cycle is not exactly twenty-four hours, but a little bit longer. This observation gave rise to the concept of what we nowadays call a rhythm of approximately (circa) a day (diem): the circadian rhythm.

The presence of sunlight resets the rhythm and keeps it from shifting. This is quite similar to the way your computer, whose clock is not entirely accurate and suffers from drift, synchronizes its time at regular intervals by contacting computers with more accurate clocks using the Network Time Protocol (NTP). Indeed, some alarm clocks are equipped with radio-signal-based synchronization, which essentially does the same thing.
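The clock-offset calculation at the heart of NTP is simple enough to sketch. The timestamps below are made up for illustration; a real client would read them from the NTP packet exchange:

```python
# Simplified sketch of how an NTP client estimates its clock offset.
# t0: client send time, t1: server receive time,
# t2: server send time, t3: client receive time (all in seconds).

def ntp_offset(t0: float, t1: float, t2: float, t3: float) -> float:
    """Clock offset, assuming the network delay is symmetric (RFC 5905)."""
    return ((t1 - t0) + (t2 - t3)) / 2

def ntp_delay(t0: float, t1: float, t2: float, t3: float) -> float:
    """Round-trip network delay, excluding server processing time."""
    return (t3 - t0) - (t2 - t1)

# Client clock runs 5 seconds ahead; one-way delay is 0.1 s.
print(ntp_offset(t0=105.0, t1=100.1, t2=100.2, t3=105.3))
# A negative offset: the client should shift its clock back 5 seconds.
```

Just like sunlight nudging the circadian rhythm back into place, the client applies this offset periodically so the drift never accumulates.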

Pressure

The first few days at the Sziget festival, when my rhythm had not shifted as much yet, I felt very sleepy near the end of the day when I stayed up late. But I noticed that when I stayed up long enough, there was a moment, usually in the early morning, where I got over the sleepiness and started feeling more alert again. Since my rhythm had not shifted yet: what caused this?

Besides the circadian rhythm, our bodies have another mechanism to regulate sleep. As long as you are awake, your sleep pressure rises through the buildup of the neurochemical adenosine in your brain. The higher the sleep pressure, the more you actually feel like sleeping. This pressure is referred to as the sleep drive, whereas the circadian rhythm is called the wake drive.

If you were ever in a lecture where another student fell asleep on their desk with a loud thud, you know what high sleep pressure can do: it will make you nod off, falling in and out of slumber, even if you really want to consciously pay attention. While this is fairly harmless during a lecture, it can be extremely dangerous in other situations, for example when driving.

Rising sleep urge as result of staying awake. The arc is the rising sleep pressure (sleep drive). The dotted line is the circadian rhythm (wake drive). The distance between the two represents how sleepy you feel, expressed in words as the urge to sleep.

The interesting thing is that the circadian rhythm and sleep pressure are independent ‘systems’. As your sleep pressure continues to build up, your circadian rhythm may already be swinging back up, making you feel awake and more alert: exactly what I was experiencing the first couple of days at the festival. The figure above shows this: once you stay awake long enough, the distance between the sleep pressure arc (top) and the circadian rhythm (bottom) becomes smaller, only to hit you much harder again as the circadian rhythm comes back down, resulting in a very strong urge to sleep.
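The interplay of the two processes can be sketched numerically. All constants below are illustrative assumptions chosen to make the shape visible, not values from the sleep literature:

```python
import math

# Toy sketch of the two-process model of sleep regulation.

def sleep_pressure(hours_awake: float) -> float:
    """Process S: adenosine-driven pressure, saturating while awake."""
    return 1.0 - math.exp(-hours_awake / 18.0)

def circadian_drive(clock_hour: float) -> float:
    """Process C: sinusoidal wake drive, peaking in the early evening."""
    return 0.5 + 0.4 * math.sin(2 * math.pi * (clock_hour % 24 - 12.0) / 24.0)

def sleep_urge(hours_awake: float, clock_hour: float) -> float:
    """The felt urge to sleep: the gap between the two processes."""
    return sleep_pressure(hours_awake) - circadian_drive(clock_hour)

# Waking at 08:00 and pulling an all-nighter:
for hours in (16, 22, 28):  # midnight, 06:00, and noon the next day
    clock = (8 + hours) % 24
    print(f"{hours:2d}h awake (clock {clock:02.0f}:00): "
          f"urge {sleep_urge(hours, clock):+.2f}")
```

In this toy model the urge peaks near the circadian trough in the early morning and then eases off towards noon, even though the pressure itself keeps rising: the ‘second wind’ from the story above.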

Reset your brain

A good night consists of seven to nine hours of sleep opportunity; subtract from that half an hour to an hour to get the actual time slept. Back to the Sziget festival: not everyone had the fortune of obtaining a good night’s sleep. One evening we met a group of girls who had set up their tent next to the twenty-four-hour DJ booth. I probably do not need to tell you that pitching their tent there did not bode well for their sleep opportunity. Hence, we spent the night socializing with them. As we sat and talked on a set of wooden benches, time flew by, and before we knew it the sun came up again. The festival terrain was nearly silent, until we heard a loud rumbling sound.

It was around six in the morning, and the cleaning crew rolled in: a literal wall of people that moved from the front to the back of the festival terrain picking up garbage, followed by truck-mounted water sprayers that cleaned the streets. An impressive sight.

Using water to clean and cool down the streets at Sziget [3]

In the deep stages of sleep something similar to what that cleaning crew did happens in your brain. Electrical waves sweep from the front to the back of your brain. This consolidates what you experienced and learned the previous day, and at the same time helps you prepare for sponging up new things the next day. In fact, not sleeping for one night reduces your ability to form new memories the next day by a whopping forty percent.

Better sleeping through chemistry?

Sitting on those benches at the festival, pulling an all-nighter, was not only bad for the retention of memories, but also for overall wakefulness. The next day I had serious problems staying awake, thanks to the aforementioned buildup of sleep pressure. As a quick fix I turned to coffee: the stimulant of choice to keep one awake under such circumstances.

The caffeine in coffee blocks the adenosine receptors in your brain, masking the sleep pressure that has built up. This makes you feel more alert while your body is still tired. A negative effect is that once you do sleep after having coffee, you will have problems reaching the deep stages of sleep and thus acquire less deep sleep overall. Ironically, this lack of deep sleep can trap you in a dependency cycle where you actually need coffee in the morning to feel refreshed after poor-quality sleep.

It was a festival, so not only did we drink coffee. There was also plenty of cheap alcohol to go around. Beer, wine, shots, you name it, they had it. This stuff had the opposite effect of coffee: it made me feel sleepy. Alcohol, after all, is a depressant.

Alcohol makes you fall ‘asleep’ faster by sedating you, but as a side effect it heavily fragments your sleep. You wake up much more often than you usually would, though you do not consciously register this. This is why you feel extremely drowsy the day after an evening of heavy drinking. A secondary effect is that you do not reach the REM stage of sleep, which is vital for learning and memory formation. The debilitating effect of alcohol on memory formation is similar to that of not sleeping at all.

Consequences

Regardless of the cause: what else, besides memory and learning problems, really happens when you do not sleep enough? When I came back from the festival I had a throat infection. Not surprising: sleeping only four hours for just one night, instead of eight, reduces the activity of your natural killer cells by over seventy percent. Doing this repeatedly left me much more vulnerable to infections, and there were plenty of germs floating around there. For such a short time, and the great fun I had there, that was an acceptable trade-off. However, that luxury of choice does not apply to everyone. Involuntary chronic sleep deprivation incurs a heavy toll.

A chronic lack of sleep increases the risk of physical diseases, like cancer, heart disease and diabetes, and also of mental diseases, including depression, anxiety and suicidality. Sleeping too little generally means that your life expectancy goes down dramatically: the epidemiological evidence suggests that if you were to consistently sleep for only 6.75 hours per night, you would likely live only into your early sixties [1].

Conclusion

What can you do to improve sleep? Three tips: firstly, maintain a regular schedule, which means: go to sleep and wake up at the same time every day, also during the weekends. Secondly, take care of your sleep ‘hygiene’: make sure your room is sufficiently dark and cool, around eighteen degrees Celsius, and avoid screens before you go to bed. Thirdly, avoid drinking coffee and alcohol, especially shortly before you go to bed, but also in the late afternoon and early evening.

Going to a music festival and losing some sleep by choice, like I did, is not a big deal. The real problem is that two thirds of all adults worldwide routinely fail to get the recommended minimum of seven to nine hours of sleep opportunity per night. Please, don’t be one of them. Sleeping is the single most important thing you can do to reset your brain and body health every day. Trading a little bit of sleep for a little bit of something else is not worth it in the end.

Sources

  1. Walker, M. (2017). Why We Sleep.
  2. Levine, J. (2016). Self-experimentation in Science.
  3. Forbes, G. (2008). Sziget Festival Photos.

Attention

When I lived at my parents’ place, I was always amazed by how much time my father spent on the daily news. Once during breakfast, usually in the form of a newspaper. A second time during lunch, and a third time by watching the evening news. He must have spent upwards of an hour every day consuming news. Why did he do that?

In this post I will dig into attention mechanisms. Specifically: where they come from, how they work and what you can do to better cope with them. To get a better grip on this, we will first dive into some history.

History

In 1833 the leading newspaper of New York was the New York Enquirer, costing about six cents a copy. Not very expensive by today’s standards, but for that time it was considered a luxury: an amount not many ordinary people had to spare. Hence, it is no surprise that someone saw a business opportunity in this [1].

Benjamin Day

That someone was Benjamin Day. In that same year he launched a new newspaper: the New York Sun. By late 1834 his new paper had become the leading one in the city, amassing five thousand daily readers. Why? A copy of the New York Sun cost only one cent, vastly more affordable for the average person back then. His rivals were amazed: how could he produce a newspaper that cheaply, below the cost of printing? One word: advertising. Day was not in the business of selling a newspaper to his readers; he was in the business of selling the attention of his readers to his advertisers. He did this by mixing news, often news of the sensational kind that easily catches attention, with advertisements.

Advertising, of course, is nothing new. Though back then it was mostly just text, reminiscent of personal ads. That quickly changed throughout the years. During the forties and fifties, large billboards became increasingly common. Though our homes were still a sanctuary devoid of much of that visual onslaught, this quickly changed as televisions became commonplace. Advertisements sneaked into our living rooms, and that was not the end of it. The last two decades have seen ads move even physically closer: into our very hands, in the form of one of the most effective advertisement delivery devices ever invented: the smartphone.

Britney Spears in a Pepsi television ad

Nevertheless, let’s be honest: no one really wants to see those ads. People do not voluntarily choose to watch advertisements. So, how do we get them to do so anyway?

Mechanism

To understand the mechanism behind this, we go back in history once again, this time to the early nineteen-sixties. Meet B. F. Skinner, working at the time at Harvard University. Skinner had ambitions to become a writer, but became disillusioned with the craft and instead ended up an influential figure in behavioural psychology. During the sixties he conducted experiments with animals, and one such experiment focused on pigeons.

B. F. Skinner and his pigeons

In this particular experiment Skinner placed a pigeon in a box. This box contained a small circular disc that could be pecked. In the first condition, pecking the disc would release a little food pellet for the pigeon to eat. In the second condition it would sometimes release a food pellet and sometimes it would not: an unpredictable pattern. Which condition do you think made the pigeon learn the pecking behaviour the fastest? That’s right: the second one, where whether the behaviour would be rewarded with a food pellet was unpredictable.

Do those results on pigeons translate to humans? What is the closest analogue we have? That would be gambling. Remember those rows of old ladies sitting at one-armed bandits in casinos? The reason they keep sitting there, gambling away their money, is the exact same reason those pigeons kept pecking that circular disc: a variable reward. Sometimes you get something, sometimes you lose something, and you don’t know which one is coming next.

Immunity?

Now you may be thinking: I don’t gamble, I never go to the casino, so: no risk for me. Well, you may not go to the casino, but there is a game you play every day. I am referring to the game of scrolling through news feeds, responding to notifications on your telephone, and checking if you have any new e-mails. All of these share the same characteristic: sometimes there’s something interesting for you, sometimes there’s not, and you don’t know which one is coming next.

I think it is this variable reward that got my father addicted to watching the news so often. Even if most of it is not interesting, or something he had already seen or read earlier that day, that is not the point: every so often there is something new in there, and that is what kept him coming back for more.

Ads in the mix

Now, think back to the advertisements. Like a news reel or newspaper, a news feed is the prime example of something that gives you a variable reward. It keeps you scrolling so you can get a small dopamine reward for finding those interesting things. If you put some advertisements in that news feed, regardless of whether they are interesting or relevant, people will be exposed to them: the perfect ‘voluntary’ ad delivery mechanism.

A rule of advertising is that the more often people see your ad, the better the memorization effect. That is the reason why the same commercial is repeated on broadcast television within the same block. It does not matter what you think of the ad; it matters that you start recognizing it and that it starts to feel familiar.

The combination of advertising with other content of mixed relevance to the reader is what drives many modern social media platforms: Facebook, Instagram, LinkedIn, anything with a feed. I hope you see that this is essentially no different from the mechanism Benjamin Day used with the New York Sun to outwit his competition. If you are not paying for it, you can be sure that it is your attention that is being sold to advertisers.

Hooked

Diving deeper: how would you design a product that exploits this variable-reward vulnerability that people have, both in news feeds and beyond? There is a recipe for this that has proven itself time and time again: the hook model. It consists of four steps: trigger, action, reward and investment. Let’s go through these steps using an example.

The Hook Model [2]

A trigger may be the notification sound made by your telephone: a message that pops up telling you that you have a new message from someone, possibly even showing part of that message to draw you in further. This is your first little hit of dopamine, similar to what you get when you scroll through a feed.

The action is your response. You open the message, read it and start typing something back. You re-read what you typed, think if it is an appropriate response, and then finally hit the send button.

Now comes the reward. You get a message back, or if you have posted something on a social network, you may get likes or comments on your post. That feels good: getting approval from others also gives you little shots of dopamine.

All of this feels so good, that it is now time for the last step: you invest. You post something new: a picture, a video or a message to get more of that dopamine. Your content is now in the app that you are using, regardless of whether it is Facebook, Instagram or WhatsApp. All use the same basic mechanism, some in more elaborate ways than others.

Since you are now invested, you are more likely to respond to the next trigger, and you go through the loop again, ad infinitum, reinforcing your behaviour. It may feel creepy, but does it sound familiar?

Effect

Now, you may ask: okay, but is it all that bad? Because: I get a lot of value out of the news and social media. Well, that depends on how you look at it. A graph from The Economist shows that the more time people spend in their social media apps, the less happy they are about it [5].

The findings are based on a time-tracking app called Moment. The chart is admittedly not the easiest to interpret, but there are some clear patterns to spot. Most social media score quite poorly in terms of how happy people are with the amount of time they spend on these platforms; for example, a full two thirds of Instagram users are unhappy with the amount of time they spend there. Furthermore, we see the more general trend that more time spent translates to less happiness for nearly all apps.

Graphs like this always raise the valid question: what is the direction of causality? Perhaps unhappy people are simply drawn to these platforms and stay there longer, or is it the other way around: does staying on these platforms make people less happy? There seems to be at least some evidence for this latter conclusion [5].

What can you do?

Going back in time, what would I recommend my father do to break his news-watching habit? Well, the first step is awareness. Nowadays, that awareness applies not only to television and news served on the remains of dead trees, but particularly to the kind that comes to you via screens, be it laptops, tablets or smartphones. You have already taken the first step by reading this post, so you are at least aware of the what and the how: good.

In a broader sense: there is genuine concern nowadays for people having short(er) attention spans, and showing addictive behaviours to their social media feeds, all based on this exploitable mass vulnerability to variable rewards [7]. Fortunately, there are also some concrete things that you can do. I’ll leave you with three tips.

Infinite Feed by Attila Kristó

Three tips

Firstly, limit your time scrolling through feeds. Recognize when you are doing it, realize you are reinforcing the behaviour, and ask yourself: what am I accomplishing by doing this? People far too often grab their phone or tablet because they do not dare allow themselves to get bored. Do the opposite: embrace boredom.

Secondly, turn off as many notifications as you can, particularly those not generated by another human being (that is, generated by an algorithm) and those not specifically directed at you, like messages in app groups that you are part of.

Thirdly, I realize that it may not be easy, but it really does not hurt to put away your smartphone or tablet for a while and do something else. The best way to cure yourself and others of these reinforced behaviours is simply to stop responding, and have something better to do.

In closing, remember: your life is what you pay attention to, so make sure you pay attention to the right things, those things that really matter to you.

Sources

  1. Wu, T. (2016). The Attention Merchants.
  2. Eyal, N. (2013). Hooked.
  3. Holiday, R. (2014). If you watch the news …
  4. Cain, D. (2016). Five things you notice when you quit the news.
  5. Economist (2018). How heavy use of social media is linked to mental illness.
  6. Tigelaar, A. S. (2016). Breaking free: how to rewire your brain.
  7. Center for Humane Technology (2018). App Ratings.


Passwords and Security

After a long journey he was nearly there. In the distance there was the outline of the city wall. Moments later he approached the city gate.
“Halt!”, shouted a heavily armed guard.
He had grown used to this ritual, so he went through the motions.
“What is the pass word?”, the guard asked.
He spoke the phrase he had memorized. The guard nodded, lowered his hands from his weapon, and stepped aside to allow him entry.

The above is how I imagine passwords came into common usage long ago. Passwords are not very practical in that scenario, which is probably why we now have passports: literally a document to pass through some port, such as a city gate or a border. Checks at the border can also be done using fingerprints: if the guard took your fingerprints and quickly compared them to a set of known prints, he could decide whether to let you pass based on a matching print.

Consider what these three things fundamentally represent:

  1. A password is something that you know, you need to memorize it.
  2. A passport is something that you have, you need to take it with you.
  3. A fingerprint is something that you are, you always have it with you.

Most security systems combine at least two of these three factors:

Access to your bank transactions requires two things. Firstly, your debit card: something that you have. Secondly, your Personal Identification Number (PIN): something that you know. Entering a modern house also requires two things: the key to your door and the access code to disable the alarm, which again combines something that you have with something that you know. Finally, entering a foreign country may even combine all three ingredients: a border guard may ask why you are entering the country and where you will be staying, ask for your passport, and scan your fingerprints.

Where am I going with this? Good security systems combine at least two of the three factors above. Think about how you access all your on-line accounts like Google, Facebook and LinkedIn. Do you use a password? Is that the only thing that you use to gain access? The answer to that is likely yes, and that is not a good thing.

Of the three fundamental ingredients above, the password, something you memorize, is likely the easiest to bypass. Not so much because of technical issues, although those do occur, but because of completely understandable human limitations.

The problem with passwords is that a complex password is hard to remember, and a simple password is easy to guess. Most people err on the side of making their passwords too simple. Why are such passwords easily too weak? For that we have to do some calculations.

Let us assume that you pick a single number between 1 and 10 as your password. Let me think: you likely picked either a seven or a three, am I right? Even if I am not, people prefer some numbers over others, and that is exactly the root of the problem. Consider that with a single-digit password I would need at most ten guesses to certainly be right. If I make my guesses a bit smarter, starting with the digits that are chosen more often, I may be able to guess ninety percent of single-digit passwords within five tries.

Obviously we need something a little longer. A four-digit password has 10^4 = 10000 possible combinations, which is already much harder to guess. This is in fact the search space of the famous PIN code. Some banks allow their customers to choose their own four-digit code, which is a bad idea. Four digits are, from a memorization point of view, ideal for representing a birth year or some other significant year. Consider that many such years start with either 19 or 20, and we are left with only two digits to guess: 10^2 = 100 is a much smaller space of possibilities.

Digits are often not the only part of a password; letters are usually allowed as well. This seems sound: adding twenty-six letters gives us an additional fifty-two possibilities, since letters can be either lower or upper case, yielding (10+52)^4 = 14776336 possible passwords of length four. If we add in special characters, this number grows even larger.
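The search-space arithmetic above is easy to verify. A quick sketch (the special-character count assumes a US keyboard layout):

```python
import string

# Size of the password search space for different alphabets and lengths.
digits = len(string.digits)            # 10
letters = len(string.ascii_letters)    # 52: lower plus upper case
specials = len(string.punctuation)     # 32 on a US layout

print(digits ** 4)                     # 10000: the four-digit PIN space
print((digits + letters) ** 4)         # 14776336: four alphanumeric chars
print((digits + letters + specials) ** 4)  # larger still with specials

# Length beats alphabet size: every extra character multiplies the space.
print((digits + letters) ** 8)         # 62^8, roughly 2.18 * 10^14
```

Note how going from four to eight alphanumeric characters grows the space by a factor of almost fifteen million, far more than any widening of the alphabet at fixed length.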

Adding extra symbols (digits, letters, other characters) to the password alphabet may seem like a good idea. However, just as we saw with numbers: if the patterns are predictable, they are easy to guess. Consider that if we make a word of two characters in English, there are a limited number of actually valid words: ‘of’, ‘it’ and ‘to’ are all valid. In contrast, ‘tj’, ‘gh’ and ‘lq’ are not valid words. Sequences of letters that are not words are difficult to remember; hence, people rarely use them. This leads to predictable passwords that usually consist of nouns combined with predictable number sequences: ‘Ghost2012’, ‘lipgloss’ and even ‘password’.

Indeed, the top five passwords are: ‘123456’, ‘password’, ‘12345’, ‘12345678’ and ‘qwerty’. Fortunately, few people actually use these passwords. If you were to guess someone’s password using the top ten most popular passwords, you would succeed about sixteen times in a thousand tries. Which, while not spectacular, is still ridiculously high.

A thousand tries may seem like a lot, and it is if you had to type all those passwords yourself. However, this can be automated quite easily. Trying all possible passwords is called ‘brute-forcing’. A modern computer can easily do this at a rate of five thousand guesses per second. Using some statistical insights, such as those mentioned above, this process can be made highly effective. In fact, most passwords under ten characters can be broken in a matter of hours using off-the-shelf computer hardware.
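To get a feel for these numbers, here is a back-of-the-envelope sketch at the five-thousand-guesses-per-second rate mentioned above. The rate is only a rough figure: offline attacks on stolen password hashes can be vastly faster, and dictionary-based guessing needs far fewer tries than exhaustive search:

```python
# Worst-case time to exhaust a password search space by brute force.
GUESSES_PER_SECOND = 5_000
SECONDS_PER_DAY = 86_400

def days_to_exhaust(alphabet_size: int, length: int) -> float:
    """Days needed to try every possible password of the given length."""
    return alphabet_size ** length / GUESSES_PER_SECOND / SECONDS_PER_DAY

print(days_to_exhaust(10, 4))   # a four-digit PIN falls within seconds
print(days_to_exhaust(62, 8))   # eight random alphanumerics: centuries
```

This is exactly why predictability matters: a truly random eight-character password holds up for centuries at this rate, while a password drawn from a small pool of popular choices falls in the first handful of guesses.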

I hope it is clear by now that using only a password that you can memorize to secure your on-line accounts is a bad idea. So, how can we improve this?

There are at least three things that you can quite easily do with respect to passwords alone:

  1. Generate passwords, instead of making them up yourself. No offense, but: a password randomly generated by a computer is almost certainly better than anything you can think of.
  2. Use long passwords: as we have seen, the length of a password is an easy way to increase the difficulty of guessing it. A minimal password consists of ten characters, but as computing power increases, this may rapidly become too short. A password of twelve characters is a more realistic minimum nowadays, and sixteen to thirty-two characters is a safe range.
  3. Use a different password for each service that you use. This way, when one account is breached, you do not get a domino effect.
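The first two tips can be combined in a few lines. A minimal sketch using Python's cryptographically secure `secrets` module; the sixteen-character default is simply the lower end of the safe range mentioned above:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password over letters, digits and specials."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```

Use `secrets` rather than the `random` module here: the latter is designed for simulations and its output is predictable to an attacker who sees enough of it.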

Using a very long password is one of the few exceptions where you could suffice with choosing your own. Consider that a long sentence as a password is quite hard to guess: there are so many possible sentences! Even though a completely random password of the same length is harder to guess, this matters less if the password is sufficiently long.

If you are not into long passwords, then the best solution is using a password manager of some sort. KeePass and LastPass are popular solutions that are easy to use. There are two caveats to these tools:

  1. They usually use one strong ‘master’ password, which gives access to all the site-specific passwords. This is a single point of failure in some sense, and can also lead to a domino effect, but this is not a major problem if you have a sufficiently strong master password combined with two-factor authentication: more on that later.
  2. Some of these services may store your passwords ‘in the cloud’ in encrypted form. Understandably not everyone is okay with that. Fortunately, there are also variants which store your passwords locally on your own machine.

In a sense, using a password manager may feel like ‘writing down your password on a piece of paper’. This is true, but a strong password written down on a piece of paper that you keep in a safe place is much better than a weak password that you have memorized. The same applies to password managers: the benefits outweigh the risks.

Improvements to your password alone do not address the most pressing concern: remember that good security systems combine at least two of the three factors: something you know, something you have and something you are. A password is still only one of those ingredients. Hence, where possible, you should add another one.

Almost all major on-line service providers – Microsoft, Google, Facebook, Yahoo, et cetera – offer some form of two-factor authentication. One popular mechanism, called TOTP, consists of codes that are generated using an app on your phone. How does this work? You scan a QR code on the screen once, and a security app uses the data in this image to generate access codes that change every thirty seconds. You can set things up so that you are asked for a code only once a month on computers that you regularly use. So the effort is minimal and the security benefit is huge: in addition to guessing your password, an attacker would have to gain access to your phone, which is way more difficult.
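Under the hood, the TOTP scheme described above boils down to a few lines of standard-library code. A minimal sketch; the QR code simply transfers the shared secret, and the key below is the RFC 4226 test key, not something you would use in practice:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: pick 4 bytes of the MAC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based variant (RFC 6238): the counter is the 30-second window."""
    return hotp(secret, int(time.time()) // period, digits)

secret = b"12345678901234567890"  # RFC 4226 test key
print(hotp(secret, 0))            # '755224' per the RFC 4226 test vectors
print(totp(secret))               # changes every thirty seconds
```

Because both your phone and the server derive the code from the same secret and the same clock window, the server can check your code without any network connection on the phone's side.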

Some other services may rely on sending you an SMS with a code, or an e-mail with a clickable link. This is a bit less secure, but still way better than only using a password, and thus certainly worth it. If you use a password manager, then securing it with some type of two-factor authentication is an absolute must.

Say that you want to secure some other service X that does not offer two-factor authentication.
What to do? Well, the service may offer logging in via OpenID. This means that you can log in to the service using one of your main on-line accounts, like Google or Facebook. If you have secured that account with two-factor authentication, then transitively the account at service X is now also protected by it.

To wrap up: I recommend that you:

  1. Always use two-factor authentication wherever it is offered.
  2. Always construct sufficiently long passwords.
  3. Seriously consider using a password manager.

After a long journey the data packet, the first in a long stream, was nearly there. From inside the last switch, the faint hum of a server could be heard in the distance. Moments later the packet had entered the server system. The server unwrapped the data packet and found a password inside. But it knew the password was not enough. The server generated the code it was expecting. It unwrapped the next packet in the stream and found the exact same code it had generated just a moment ago. It allowed the rest of the stream of packets to enter.

Renewed Keyboard Joy: Dvorak

Typing: you do it every day, nearly unconsciously. You think of what you want to appear on the screen. This is followed by some rattling sound, and the next instant it is there. The blinking cursor stares at you as if to encourage you to keep going. Handwriting feels mostly like a thing of the past, since typing is so much faster for you, likely two or three times as fast. So, what would it be like if you were stripped of this ‘magical’ ability to type?

If you are like me, you probably learned how to type all by yourself. I never took a touch typing class, since it seemed like a waste of time. After all: I could already type, so why take a course to learn something I could already do?

Many self-taught typists adopt a hunt-and-peck style, meaning they need to look at the keyboard to find the keys. Usually this is done with only two fingers, since using more fingers obscures the view of the keyboard, making it harder to ‘hunt’. I did not adopt this style, but rather used a three-finger approach: both hands hover over the keyboard and type using the three strongest fingers: the thumb, index finger and middle finger. Occasionally I used the ring finger as well, though not consistently. Observing my typing style, I noticed that my hands positioned themselves in anticipation of the next key to strike. This all went seamlessly, achieving speeds of about eighty-five to a hundred words per minute, which is not bad at all.

Though my self-taught typing style worked for me, I did try to switch to touch typing several times, particularly because my hands would feel strained after intense typing sessions. However, switching never worked out. I would concentrate intensely for one day, keeping my fingers on the QWERTY home row of ‘ASDF-JKL;’, touch typing as one should. Nevertheless, the next day the years of acquired muscle memory would take over and I would be thrown back to my ‘own’ style. My hands seemed to have no incentive to touch type, however much I consciously wanted to. Had I only taken that typing class when I had the chance, then I would be better off today, or … perhaps not?

The famous QWERTY layout, referring to the six top left keys on most standard keyboards, is not the only way to arrange the keys. Firstly, there are many small variations such as AZERTY, common in Belgium, and QWERTZ, common in Germany. Secondly, there are alternative keyboard layouts such as Colemak, Workman and Dvorak. Of these alternatives, Dvorak has been around the longest, since the 1930s, and is also an official ANSI standard. The story behind QWERTY and Dvorak, both developed for typewriters, is interesting in its own right and explained very well in the Dvorak zine.

The standardized simplified Dvorak layout is much less random than the QWERTY layout: it notably places the vowels on the left side of the keyboard and frequently used consonants on the right:

2015-07-Simplified_Dvorak

The simplified Dvorak layout

Several years ago I tried switching to Dvorak cold turkey. I relabeled all my keys and forced myself to type using the Dvorak layout. It was a disaster. I would constantly hit the wrong keys, and my typing slowed to a near-grinding halt. I would spend fifteen minutes typing an e-mail that I could previously write in under a minute. Frustrated, I stopped after three days.

Fast forward to several months ago. I caught a bit of a summer flu and although I was recovering, I could not really think straight. Since learning a new keyboard layout is rather mechanical and repetitious in nature, I figured the timing was right to have another stab at this. My main motivation was to increase typing comfort and reduce hand fatigue. Secondary motivations included load balancing better suited to my hands, reducing the number of typing errors and being able to reach a higher sustained typing speed. Finally, I also picked this up as a challenge: it is good to force your brain to rewire things every once in a while. I had wanted to switch layouts for these reasons for quite a while, and this time I decided I would go about it the ‘right’ way.

Firstly, I had to choose a layout. Hence, I determined the following criteria:

  1. Since my left hand is a bit weaker I should opt for a right hand dominant layout, meaning one that utilizes the right hand to control more keys than the left in terms of both count and striking frequency.
  2. The layout should differ sufficiently from QWERTY, as to prevent me from relapsing into my ‘own’ typing style.
  3. As I do a fair bit of software development, the layout should be programming friendly.

Based on these criteria I chose the Programmer Dvorak layout. This layout is similar to simplified Dvorak, but has a different number row. It looks like this:

2015-07-Programmers_Dvorak

Programmer Dvorak

The main difference between this Dvorak layout and the simplified layout shown previously is that the number row is entirely different. Instead of numbers, the keys on the number row contain many characters that are often used in source code, such as parentheses and curly braces. To enter numbers the shift key needs to be pressed. This sounds cumbersome, but it makes sense if you count how many times you actually enter numbers using the number row. The numeric pad on the keyboard is much better suited to batch entry of numbers.

Awkwardly, the numbers are not laid out in a linear progression. Rather, the odd numbers appear on the left side and the even numbers on the right. This can be quite confusing at first, but interestingly this is also how the numbers were arranged on the original, non-simplified version of Dvorak. So there is some statistical basis for doing so.

If you are considering alternative keyboard layouts you should know that Dvorak and Colemak are the two most popular ones. Dvorak is said to ‘alternate’, as the left and right hand mostly alternate when pressing keys, whereas Colemak is said to ‘roll’, because adjacent fingers mostly strike keys in succession. One of the main reasons that Colemak is preferred by some is that it does not radically change the location of most keys with respect to QWERTY and, as a result, keeps several common keyboard shortcuts, particularly those for copy, cut and paste, in the same positions. This means that those shortcuts can still be operated with one hand. As I am an Emacs user, used to typing three- or four-key chords to do comparatively trivial things – more on that later – this was not really an argument for me. I also read that the way in which you more easily roll your fingers can help with making the choice between Dvorak and Colemak. I think this is conjecture and I have no good rational explanation for it, but perhaps it helps you: tap your fingers in sequence on a flat surface. First from outwards in: strike the surface with your pinky first and roll off until you end with your thumb. Then do it from inwards out: strike with your thumb first and roll back until you end with your pinky. If the inward roll feels more natural, then Dvorak is likely a better choice for you, whereas if the outward roll feels better, Colemak may be the better choice. Again, this is conjecture; interpret it as you wish.

Whichever alternative layout you choose: anything other than QWERTY, or a close variant thereof, will generally be an improvement in terms of typing effort. Dvorak cuts effort by about a third with respect to QWERTY. This means that entering a hundred characters using QWERTY feels the same, in terms of the strain on your hands, as entering about sixty-six characters in Dvorak. If your job requires typing all day, that difference is huge. Even more so if you factor in that the number of typing errors is usually halved when you use an alternative layout, due to the more sensible and less error-prone arrangement of the keys. Most alternative layouts are as good as Dvorak or better, depending on the characteristics of the text that you type. Different layouts can be easily compared here.
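To make the quoted numbers concrete, here is a back-of-the-envelope calculation in Python; the daily character count and the exact effort ratio are illustrative assumptions, not measurements:

```python
# Rough effort model: QWERTY normalized to 1.0 effort per character,
# Dvorak at roughly two thirds of that, per the estimate above.
QWERTY_EFFORT = 1.00
DVORAK_EFFORT = 0.66

chars_per_day = 30_000  # hypothetical workload for a full-time typist

saved = chars_per_day * (QWERTY_EFFORT - DVORAK_EFFORT)
print(f"Feels like typing {saved:,.0f} fewer QWERTY characters per day")
```

Over a working year that difference compounds, which is why the comfort argument carries more weight the more you type.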

Now that I had chosen a layout, it was time to practice, so I set some simple rules:

  1. Practice the new layout daily for at least half an hour using on-line training tools.
  2. Do not switch layouts completely, rather keep using QWERTY as primary layout until you are confident you can switch effectively.
  3. Train on all three different keyboards that you regularly use. Do not buy any new physical keyboard, do not relabel keys, but simply switch between layouts in software.
  4. Focus on accuracy and not on speed.

Before starting I measured my raw QWERTY typing speed, which hovered around ninety words per minute sustained and about a hundred words per minute as top speed. Unfortunately, raw typing speed is a bit of a deceptive measure, as it does not factor in errors. Hitting backspace and then retyping what you intended to type contributes to your overall speed, yet it does not contribute at all to your effectiveness. So it is the effective typing speed which is of interest: how fast you type what you actually intended to type. Effective typing speed is a reasonable proxy for typing proficiency. My effective QWERTY typing speed was a bit lower than the raw speed, by about five to ten percent. This gives a sustained speed of eighty to eighty-five words per minute and a top speed of around ninety-five words per minute.
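The relation between raw and effective speed can be captured in a small model. The correction cost below, two words’ worth of typing per error, is my own illustrative assumption, not a standard metric:

```python
def effective_wpm(raw_wpm: float, error_rate: float,
                  correction_cost: float = 2.0) -> float:
    """Estimate effective typing speed from raw speed and error rate.

    error_rate: fraction of words containing an error (0.05 means 5%).
    correction_cost: words' worth of typing wasted on fixing each error
    (backspacing plus retyping).
    """
    wasted = raw_wpm * error_rate * correction_cost
    return raw_wpm - wasted

# A 5% error rate turns 90 raw wpm into roughly 81 effective wpm,
# consistent with the five-to-ten-percent gap mentioned above.
print(effective_wpm(90, 0.05))
```

The model also shows why halving the error rate matters: the waste term shrinks proportionally, so effective speed moves closer to raw speed.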

As I started with my daily Dvorak training sessions, I also started seeing a decrease in my effective QWERTY typing speed. My fingers started tripping up over simple words and key combinations, even though I still used my ‘own’ typing style for QWERTY, and touch typed only in Dvorak. The effect was subtle, but noticeable, lowering my effective QWERTY speed by about ten to fifteen percent. I deemed this acceptable, so I persevered, but it does show that using two keyboard layouts definitely messes up muscle memory. I think this effect can be mitigated to some extent by using specific layouts on specific keyboards, but I did not test this, as I would be breaking my own rules.

The first sessions in Dvorak were slow, with effective speeds of about five to ten words per minute. In fact, the first days were highly demotivating: it felt like learning to walk or ride a bike from scratch again. I started out with my fingers on the home row and consciously moved my fingers into position. That process took a lot of concentration; you can think of it as talking by spelling out each word. Furthermore, every time I hit a wrong key, my muscle memory would stare me in the face, full of tears, and proclaim it had triggered the right motion. It did … just not for this new layout I was learning.

So, what did I use to train? I started out using a site called 10fastfingers, but I found it a bit cumbersome and it did not have a lot of variance. In the end, I can really recommend only two sites, namely learn.dvorak.nl and keybr.com. The latter has the nice property that it adapts the lessons to your proficiency level and is quite effective for improving weak keys. /r/dvorak is also good for inspiration and tips.

Some other basic tips: start typing e-mails and chats with your new layout before making a complete switch, as it will give you some training in thinking and typing, rather than just copying text. Furthermore, switching the keyboard layout of your smartphone may help as well, not for efficiency, as Dvorak is really a two-handed layout, but for memorization. Dvorak is not really designed for phones and other layouts may be better there; I have not looked deeply into this, as I generally dislike using phones for entering text, so it does not seem worth the trouble of optimization. I do not recommend switching the keys on your computer keyboard, or relabeling them, as doing so will tempt you to look at the keyboard as you type, which will slow you down. It is better to type ‘blind’.

It took some discipline to keep at it the first few days, but after about a week or two I was able to type at an average speed of about twenty-five words per minute. Still not even a third of my original QWERTY speed, but there was definitely improvement. After this there was a bit of a plateau. I spent more time on the combinations and key sequences that were problematic, which helped. Six weeks in I was able to type with an average speed of around forty words per minute. Since this was half of my QWERTY speed, I deemed it was time to switch to Programmer Dvorak completely.

In contrast with my previous attempt several years ago, this time the switch was not a frustrating experience. The rate of learning increased as my muscle memory no longer had to deal with two layouts. Typing became increasingly unconscious. The only things that remained difficult were special characters and numbers, for the sole reason that these do not appear often and thus learning them is slower.

Currently I am about ten weeks in. I did not use the same training tools during that entire time, but I do have data from the last eight weeks. Let us first take a look at the average typing speed:

2015-07-Typing_Speed_Smooth

Average smoothed typing speed

The graph shows two lines spanning a time of eight weeks, a green one which shows the raw speed and a purple one that shows the effective speed. You can see that both speeds go up over time and the lines are converging, which implies the error rate is going down. My average speed is currently around seventy words per minute, which is close to my original QWERTY speed.
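Smoothing of the kind shown in the graph can be done with a simple trailing moving average; here is a sketch with made-up per-session speeds (the window size and the numbers are arbitrary):

```python
def moving_average(values, window=3):
    """Smooth a noisy series by averaging each point with its predecessors."""
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Hypothetical effective speeds (wpm) from successive practice sessions:
speeds = [25, 28, 24, 30, 33, 31, 38, 40]
print(moving_average(speeds))
```

Smoothing trades responsiveness for readability: a larger window hides day-to-day noise but also delays the visible effect of genuine improvement.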

We can also look at the non-smoothed data, which gives a feeling for the top speed. In the second graph, shown below, we see that the top speed is about a hundred words per minute, which is actually about the same as my QWERTY top speed.

2015-07-Typing_Speed_Raw

Raw typing speed

There is still quite a bit of variation, as is to be expected: not every character sequence can be entered at a high speed and some keys have a higher error rate than others. Most errors are mechanical in nature, which means: simply hitting the wrong key. This is particularly prevalent when the same finger needs to move to press a subsequent key; for example, for the word ‘pike’ one finger needs to move thrice to hit the first three letters. More generally, my slowest keys are Q, J and Z, and the keys with the highest error rate are K, X and Z. Luckily these are not high-frequency keys, and they are also underrepresented during training, so over time the errors will likely decrease and the speed will increase for these keys.

With respect to my original goals: firstly, I can say that typing in Dvorak is more comfortable than QWERTY, particularly at higher speeds my fingers feel much less jumbled up. The hand alternation is very pleasant, though it took some time for my hands to get synchronized. Secondly, in terms of speed: after about ten weeks I am very close to my QWERTY speed, which is great. It shows that switching layouts is possible, even though it takes effort and discipline to do so. It was frustrating at first, but I feel that it was a good opportunity to purge many bad typing habits that had accumulated over the years.

There are also some downsides, the main one being that typing QWERTY is slow for me now, and that will likely continue to deteriorate. I do not see this as a major issue, as I do about ninety-nine percent of my typing on my own machines. For the other one percent, it is possible to switch layouts on each and every computer out there. Some people may dislike the moving of keyboard shortcuts, and that can really be an issue, but for the most part it is just a matter of getting used to it. As an Emacs user, I took the opportunity to switch to the ergoemacs layout, which I can recommend. It significantly reduces the number and length of chords: combinations of keys that need to be pressed together, and it is also more compatible with broadly adopted shortcuts.

Do I recommend that you switch to Dvorak, or another alternative layout? That really depends on how frequently you type. If you type rarely, switching may not be worth the effort. However, if you have to type a lot every day then I think it is worth it purely for the increase in typing comfort. The only argument against it is if you often need to switch computers and cannot easily change the keyboard layout on those machines.

Dvorak definitely feels a lot more natural than QWERTY, and so will most other optimized layouts. I am relieved I never took a touch typing course: it would have taken much more effort to unlearn touch-typed QWERTY if I had. Thanks to that, I have been able to learn and become proficient in a layout suited to my hands in just ten weeks. So, if you type frequently, are willing to make the jump and have enough discipline to get through the initially steep learning curve, then I can definitely recommend it. Even if just for the challenge.

The Origins of Copyright

2013-01-15-Printing-Press

The most profound characteristic of the networked era we live in today is that ‘ordinary’ people can easily create and distribute their own content. Sites like YouTube and Vimeo are more than Internet versions of America’s Funniest Home Videos. Indeed, they enable budding filmmakers to showcase their works, singers to attract their own following, and artists to use the screens in people’s homes as their canvas. Most importantly, this direct form of broadcasting eliminates the need for a slew of middlemen, moderation and the accompanying politics. It does not end there, writers can publish their novels as e-books, journalists can report the news by using web logs, and everyone can share their day-to-day activities via social networks like Facebook, Twitter or Google+.

Never before have there existed so many options to create and distribute one’s own content. Not all of it is of high quality, nor does it really need to be, as the ability to publish taps into the basic human needs of self-expression and sharing. However, there are limits, because whatever anyone creates, there is one thing that protects every creative expression: copyright.

The origin of copyright traces back to a very old invention: the printing press. Prior to this, literacy rates were fairly low and reproducing texts was labor intensive: manual letter-by-letter copying was performed by educated monks in monasteries. The mechanical printing press changed everything, as it made it significantly easier to produce exact duplicates of a text. And this is exactly where the tension between authors of original works, publishers/distributors and consumers originated.

An unprotected work, one that is in the public domain, can be reproduced without the original author’s consent. Indeed, one may even take a work in the public domain, change nothing and republish it under one’s own credentials. There are no limits to how works in the public domain can be used and there is no legal protection for them. Luckily, a work is not in the public domain by default. Anything you create is at least protected by copyright automatically, with some minor exceptions. However, as we will see later, this is also a double-edged sword.

As Europe became more literate, the demand for books increased and concerns grew over the monopoly of the large printing companies. After all, they could republish any work without consent, and profit solely from the act of printing without compensating the original author. To remedy this, copying restrictions were introduced, first through self-regulation at the industry level, and later through government-enforced copyright law.

The duration of copyright is limited, sharing some resemblance to the patent system. After you have created some original work, you have a set time to profit from it by displaying, selling or transmitting it. You also have the exclusive right to produce derivative works and naturally: to copy it. The (expected) return based on all these rights is intended to cover the costs it took in terms of your own labor as well as other resources to produce the work. This incentive may be an explanation for the proliferation of creative works over the last couple of centuries. After the copyright expires, your work enters the public domain. So, for how long is a work covered by copyright? Initially, this was about 14 to 28 years, but most countries now use the lifespan of the author plus 50 to 70 (!) years.

The expiration of copyright and the existence of works in the public domain give rise to a somewhat odd dichotomy. One can produce a derivative of a work in the public domain, and then produce a new original copyright-protected work from it. Do you know of anyone who has done this? I bet you do, since one of the most famous examples is Walt Disney.

This brief clip shows that Disney relied on many existing works. He created original derivative works by creating modern adaptations of them. In some sense this is exactly how human culture works: you build upon the works of others. Whether those derivative works should then be fiercely protected by a profit-driven entertainment company is another interesting debate.

In some way copyright seems entirely fair. After all, it provides an incentive for creating new works, and enables the author to make a living. However, there are some consequences of copyright law which have nothing to do with such lofty goals. These affect both the ownership and duration of copyright.

It is common to transfer ownership from the original author to some other entity, usually a publishing company. For example, many scientists have to transfer copyright of the final manuscript version of a publication to the publishing entity, like ACM or Springer. As scientific articles are usually paywalled, this leads to the odd construction that an author may have to pay to be able to view his original work in its final published form, although commonly, and commonsensically, such access is provided free of charge. Open source projects are another example: some require the author to sign over copyright to the project itself. One reason to do this is to prevent the situation where all copyright holders have to agree to a future license change; another is a stronger legal position. These examples show that copyright does not necessarily remain with the original creators, but rather is transferred based on either direct monetary gain or legal convenience.

I think the legal convenience argument is somewhat weak: indeed, it can be better to have a slew of copyright holders involved. When a license change is proposed, they can all democratically vote on it. With respect to monetary gain, science is the odd one out. Scientists produce their articles, and then publish them for free at conferences or in journals. Yet, the publishers are the only ones profiting in this system. Indeed, through scientific grants, the taxpayer indirectly funds those large publishing companies when scientists publish their works. However, this is just one route through which the public sponsors those companies: universities pay licensing costs for access to repositories of publications and printed journals, which is also public money. While the publishing companies do add value, through editorial means, providing infrastructure and promotion as well as actual printed works, it is doubtful whether such a cozy money-sandwich is justified for this rather limited contribution. Contrast this with the early days of printing, when publishers were also responsible for the actual printing and brought to the table considerable knowledge and craftsmanship concerning typesetting and replication. These tasks have quietly shifted to dedicated printing companies and the authors themselves, weakening the role of traditional publishers.

The situation is different outside of science. Most artists get some form of compensation when they sign over their copyright. This can either be some fixed amount or a share of the profit. However, the lion’s share still goes to the publishers and distributors themselves. This also raises the interesting question: why should copyright last longer than the author’s lifetime? Who owns the rights, and the profits, after the author has died? In some cases these may be the heirs of the author; for example, the Tolkien Estate holds the rights to The Lord of the Rings, which makes some sense. However, this is not always the case. For example, who owns the music of the Beatles (two of whom are still alive)? That would be Sony Music. So, they can profit from the Beatles’ catalog, consisting of over 250 songs, until probably the end of this century. Wait a minute, I thought the point of copyright was to fairly compensate the authors, so that they could make a living off what they had created? In fairness, the Beatles and their estates still receive some royalties (though most profits now go to Sony Music). However, the effort it took to create those original works has already been compensated many times over by this point, and will be even more so by the turn of the century.

This clip illustrates some of the points made thus far:

So far, we have learned about the origin of copyright and seen the deviation between the original goal of copyright: protecting the individual authors, and the present-day situation: publishers and large corporations owning vast amounts of intellectual property. However, computers and the Internet are changing the game. In the follow-up article we look at copyright in the digital age.