The principle of immediate feedback

“Getting feedback is good” seems like a commonsense principle everyone agrees with. Code reviews are important, and getting a second opinion makes a programmer’s life easier. And yet programmers often act as if they wanted to make giving feedback hard (if not impossible). Instead of showing incremental steps of their work as they implement it, they make changes locally on their branches and push out a swarm of commits as a single pull request to review.

There are a couple of problems with that approach. Firstly, it makes the reviewer’s job difficult, and avoiding that is not just about being kind to others – it’s also a matter of self-interest. It is not enough to just get feedback. In order to make use of it, one needs to get it quickly. If an error is discovered at an early stage of work, it’s easier to correct it.

In this post I would like to show you a couple of trivial and non-trivial examples of using that rule, which I call “the principle of immediate feedback”.

Immediate feedback helps us learn faster. There are countless accounts of that, some more surprising than others. An application providing immediate feedback to children proved more effective at teaching kids in overcrowded Malawian classes than qualified teachers in the UK with class sizes of no more than 30.

A question remains – how can I use the principle of immediate feedback in my day-to-day work?

  1. Spellchecker in emacs
    When I started blogging, I didn’t use any spellchecker. After I wrote a whole post, people pointed out to me that I had made spelling errors. Instead of discussing the actual content, I was focused on getting the spelling right. The cost of fixing mistakes later was relatively high. So I started using a spellchecker after I was done with writing. This was much better, but it still took me some time to get everything straight. I also sometimes made mistakes while fixing things, so another spellchecker run uncovered new errors. Gosh, this was frustrating! Now I use an interactive spellchecker in Spacemacs with flycheck-mode, and it is wonderfully simple. Even though I get feedback as I type, I still want to perform a full check after I am done. I use M-x flyspell-buffer – it usually doesn’t uncover new errors.

  2. zsh colours
    There are plugins for your command line that colour your commands while you are typing. They can make a command red if you made a spelling mistake, even before you start typing arguments. This may save only seconds of time, but it also makes it easier to stay in flow. It was a little bit annoying to type entire sed expressions just to get an error because I had started with sd instead of sed. Now I am using oh-my-zsh with the syntax highlighting plugin. If everything is OK, my command is green; otherwise it is red, so I can spot errors even before I hit enter.

  3. compile errors in editor
    If my editor doesn’t show me syntax errors while I am writing, I lose serious amounts of time on the save – compile – fix cycle. Some people like to turn the colours off, because it forces them to focus on the code. That is fine, as long as something flashes when they use invalid syntax. Without syntax highlighting, you get the same problem as without a spellchecker: you can introduce errors while fixing errors.

  4. browser auto reload
    If I am working on a web project, automating browser reloading can sometimes save huge amounts of time. Some frameworks make it the default behaviour now. Recently I have been using the Phoenix framework, which is a web framework written in the Elixir programming language. Every time I save a file, it gets recompiled and reloaded, and the browser is refreshed to reflect the changes. This isn’t a new idea, but it is awesome!

  5. TDD
    TDD is a great example of applying the immediate feedback principle. At a low level, there is the red – green – refactor cycle. Why do I have to start with a failing test case? To get immediate feedback that the test case works. Otherwise it may happen that the test case always passes, and if it was written after the code, I have no way to know that. Then I work until the test is green, so I actually worked on my feedback loop before I started the actual work. This is genius! I put effort into making my feedback loop as tight as possible (see the sketch after this list). Fixing mistakes right where I made them is much less time consuming than hunting for them later.

  6. Code Reviews
    As I said before – code reviews are great, but can you make them even better? Can you make the feedback instantaneous? Yes! Pair programming to the rescue! It works like a constant code review. Pair programming produces higher quality code with fewer bugs. If your manager says it is too costly to do pair programming, you can reply that it costs more not to pair program.

  7. CI
    Continuous integration also provides immediate feedback, answering the question: “Do my changes work with everybody else’s changes?”. If I push code to CI every half an hour, it is simpler to retract the whole commit from the repository than to fix or merge code. Unless you really like merging.

  8. CD
    Continuous delivery is like testing applied to your company’s clients. You get immediate feedback about your changes, and if something doesn’t work properly or clients don’t like it, you quickly roll back. Compare it with monthly releases, where after a month’s worth of work the clients don’t like the changes. It looks like immediate feedback enhances learning not only for individuals, but also for entire companies.

  9. Elon Musk making rockets
    Almost everyone knows that Elon Musk built the first reusable rocket, but do you know why? Money wasn’t the only reason. An even more important one was to iterate quickly: make changes and get feedback as fast as possible. Research always requires many failed attempts. Making the feedback loop tight requires making the cost of mistakes as low as possible. “Fail fast!”

  10. Pitching startup ideas.
    Founders who develop their product in secret before a big launch usually have much worse results than those who start by pitching the idea. They can get feedback before they even start any costly or time-consuming tasks. Success lies in details that you can get right only by asking many people what they think of them. Don’t ever be afraid of someone stealing your idea. Even if someone does steal it, you are way ahead of them if you have more insight into it. And you do, because it is your idea. This insight comes from other people’s feedback.
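
Coming back to point 5: as a minimal sketch of the red – green – refactor loop, here is a hypothetical EUnit test in Erlang. The price module and its total/1 function are made up for illustration; running the test before implementing them is the “red” step, which is immediate feedback that the assertion itself can fail.

-module(price_tests).
-include_lib("eunit/include/eunit.hrl").

%% Run this before implementing price:total/1 – it fails ("red"),
%% then turns green once the implementation is correct.
total_sums_item_prices_test() ->
    Items = [{apple, 3}, {pear, 5}],
    ?assertEqual(8, price:total(Items)).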

All of those things rely on one principle: make your feedback loop as tight as possible so you can iterate quickly. Next time you make a mistake, stop and think: “Could I have caught the error faster? How?”

I would like to finish with a link to one more resource. There is a great talk by Bret Victor called “Inventing on Principle”, where he preaches the principle of immediate feedback. He has more interesting examples, like “time travelling debuggers”. He is also a great speaker, so the talk is a real pleasure to watch. Enjoy!

Failing fast and slow in Erlang and Elixir

I have recently been teaching programming with Elixir and Phoenix in Kraków. During classes, I saw that new Erlang and Elixir programmers have problems with the concept of “failing fast”, so I’ll explain it with examples. But first I need to show you…

The Golden Trinity of Erlang

In many of his webinars about the Erlang programming language, Torben Hoffman mentions “The Golden Trinity of Erlang”:
(slide from “Thinking Like an Erlanger”, NDC London 2014)

The three principles are:

  • Fail fast
  • Share nothing
  • Failure handling

There are many articles explaining how sharing nothing is great for concurrency. Failure handling is usually explained when teaching OTP supervisors. But what does it mean to “fail fast”?

Failing fast principle

The “fail fast” principle isn’t exclusive to Erlang. In agile methodologies, it expands to:

  1. don’t be afraid to try something new;
  2. evaluate it quickly;
  3. if it works – stick with it;
  4. if not – abandon it fast, before it sucks up too much money/energy.

This business approach translates almost directly to programming practice in Erlang.

Happy path programming

When I write in Erlang, I usually don’t program for errors. I can treat most errors or edge cases as if they don’t exist. For example:

{ok, Data} = file:read_file(Filename),
do_something_with_data(Data)

There is no code for handling the situation where the file does not exist or I don’t have permission to open it. That makes the code more readable. I specify only the business logic instead of a myriad of edge cases.

Of course, I can match on some errors. Maybe I want to create a file if it doesn’t exist. But in that case it becomes application logic, so my argument about not programming for edge cases still holds.

case file:read_file(Filename) of
  {error, enoent} -> create_file();
  {ok, Data} -> do_something_with_data(Data)
end,

This style of programming is called “happy path programming”. It doesn’t mean that I don’t anticipate errors. It just means that I handle them somewhere else (with supervision trees and restarting), as in the sketch below.
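
As a minimal sketch of that “somewhere else” (the module and worker names are made up for illustration), a one_for_one supervisor simply restarts the worker with a clean state whenever one of those happy-path matches fails:

-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% one_for_one: only the crashed child is restarted, with fresh state.
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Worker = #{id => my_worker,
               start => {my_worker, start_link, []},
               restart => permanent},
    {ok, {SupFlags, [Worker]}}.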

Failing fast case study 1 – reading from a file

This style of programming requires the code to fail quickly when a problem occurs. Consider this code:

{_, Data} = file:read_file(Filename),
do_something_with_data(Data)

Reading the file could actually return {error, Reason}, and then I would treat the Reason atom as Data. This propagates the error further, where it is harder to debug and can pollute the state of other processes. Erlang is a dynamically typed language, so do_something_with_data/1 can pass the atom many levels down the call stack. The displayed error will say that it can’t treat an atom as text, and the bug gets tricky to find. Even functions that are used purely for their side effects should match on something to check that they worked, so instead of:

file:write_file(FileName, Bytes)

it is usually better to use:

ok = file:write_file(FileName, Bytes)

Failing fast case study 2 – calling gen_server

It is even more important to fail before sending anything wrong to another process. I once wrote about it in this blog post. Sending messages through module interfaces helps keep the damage done by errors contained. Crashing the caller instead of the server is “quicker”, so it doesn’t contaminate the application state. It allows failure handling strategies that are much simpler than preparing for all possible edge cases. Most of the time those strategies are based on restarting processes with a clean state. Processes and computation are cheap and can be restarted, but data is sacred. A sketch of such an interface is shown below.
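
As a minimal sketch (the counter module and its API are hypothetical), the public interface of a gen_server can validate arguments with a guard. The guard runs in the calling process, so a bad call crashes the caller and never reaches the server’s state:

-module(counter).
-behaviour(gen_server).
-export([start_link/0, add/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, 0, []).

%% counter:add(oops) raises function_clause in the caller's process;
%% the server keeps its state intact.
add(N) when is_integer(N), N > 0 ->
    gen_server:call(?MODULE, {add, N}).

init(Total) -> {ok, Total}.

handle_call({add, N}, _From, Total) ->
    {reply, ok, Total + N}.

handle_cast(_Msg, Total) -> {noreply, Total}.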

Failing fast case study 3 – tight assertions

Let’s consider another example. A test suite:

ok = db:insert(Value),
Value = hd(db:get(Query))

It tests database code by inserting a single value into an empty database and then retrieving it. However, if we assume that the database was empty before the test execution, we can also make sure that it doesn’t return anything else. The second line above is equivalent to:

[Value | _] = db:get(Query)

But I can make the assertion stronger by writing:

[Value] = db:get(Query)

It asserts both the value and the number of elements in the list. Sweet!

“Fail fast” is another example of applying the “immediate feedback principle” in programming. It allows happy path programming, which makes programs more readable, but requires treating each line as an assertion. Pattern matching makes that easy.

Failing fast and supervision trees = ♥♥♥

Building docker images for Elixir applications

TL;DR: Use exrm to speed up working with Elixir and Docker. The time of running docker pull dropped from 5 minutes to 16 seconds.

Docker, among many other things, solves the problem of deploys. It makes them easy to perform and repeatable. I can deploy the same Docker image many times on different machines. Developers, testers, QAs and Ops can work with an almost identical environment. Performing manual tests, automated tests and stress tests in parallel saves a lot of time.

Up to last week, during docker pull, Docker had to perform a number of steps:

  • Get the base image
  • Pull the Erlang layer
  • Pull the Elixir layer
  • Pull our application files

During the pull, Docker waits for a layer if the next layer depends on it. That means it can’t pull the Elixir layer before the Erlang layer is ready, and it can’t pull the application before Elixir is ready.
On my development machine (with the base image precached), docker pull took about five minutes to complete. In case you are interested, we used this docker image, which installed Elixir and Erlang.

As I said before, those 5-minute pulls are performed many times during the day.

5 minutes * number of environments * number of features we want to push to production adds up quickly.

Can we somehow speed up the process? Yes! Erlang introduces the concept of releases. A release is a minimal, self-contained build. Releases include the Erlang runtime, so you don’t have to have Elixir or Erlang installed on the target machine.

Releases were historically painful to build, so there are tools that do it for you using sane defaults: relx for Erlang and exrm for Elixir.
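
For the Erlang side, a relx configuration can be as small as the hypothetical sketch below (the application name and version are made up); include_erts is what bundles the Erlang runtime into the release:

%% relx.config – minimal, hypothetical example
{release, {myapp, "0.1.0"},
 [myapp, sasl]}.

%% Bundle the Erlang runtime system, so the target machine
%% does not need Erlang (or Elixir) installed.
{include_erts, true}.
{extended_start_script, true}.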

Now docker pull performs only two steps:

  • Get the base image
  • Pull our application files

And it takes only 16s!

Is there a catch? Yes, there is.

We also liked to run unit tests in the Docker image with mix test, but an exrm release contains only the application code. No test code, no mix at all.

We use the old image with all the dependencies for unit tests. After they finish, we build a release and create a new Docker image, which is then pulled many times by other teams.

If you are working with the Phoenix web framework, there is a great step-by-step guide for setting up relx with Phoenix.