How to make your console sing?


Sometimes I am running a command and I have no idea how long it will take to execute.
There is this project Makefile which you think only compiles the sources,
but it actually fetches dependencies from git repositories and takes a minute to complete.
Or I start copying things and they are heavier than I expected.
So I just go to other windows to do some other work and simply forget about the previous task.


Add a completion sound after every command.

Additional rationale:

“Pragmatic Thinking and Learning: Refactor Your Wetware” [1]
teaches that to use your full brain potential, you have to use multisensory input.
That is why mechanical keyboards with a nice click sound and tactile button feedback are just nicer to type on.
If you spend a lot of time in the terminal, you should make it pretty and maybe even add some sounds.


Before you yell at me that I overcomplicated the final solution, I will walk you through the intermediate steps.
If you are in a hurry, just grep for “Solution2”.
I use Mac OS X, iTerm2, oh-my-zsh with the bira theme and very often work with tmux over ssh.

First things first: how do you play sounds in a terminal on a Mac?
afplay /System/Library/Sounds/Submarine.aiff
OS X has a couple of nice system sounds.
I decided to use Submarine for successful command completion and Blow for failure.


My first idea was to use .zshrc.
There is a function called precmd() that is always run just before displaying the prompt.
I could use the $? variable to decide which sound I should play.

play_success() { afplay /System/Library/Sounds/Submarine.aiff }
play_failure() { afplay /System/Library/Sounds/Blow.aiff }
precmd() { if [ $? -eq 0 ]; then play_success; else play_failure; fi }


The sounds were played in the same thread of execution as the terminal.
This meant that after typing ls, I had to wait an entire second for the sound to stop playing
before I could write anything.
Not cool!
I had to put it in the background.


play_sound() { afplay $1 }
play_sound_in_background() { play_sound $1 & }
play_success() { play_sound_in_background /System/Library/Sounds/Submarine.aiff }
play_failure() { play_sound_in_background /System/Library/Sounds/Blow.aiff }
precmd() { if [ $? -eq 0 ]; then play_success; else play_failure; fi }


Doing something in the background displayed job-control notifications in my console:

[1] 14099
[1]  + 14099 done       afplay /System/Library/Sounds/Submarine.aiff


Surrounding the play_sound call with “()” makes it run in a subshell, so there is no output.

play_sound() { afplay $1 }
play_sound_in_background() { ( play_sound $1 & ) }
play_success() { play_sound_in_background /System/Library/Sounds/Submarine.aiff }
play_failure() { play_sound_in_background /System/Library/Sounds/Blow.aiff }
precmd() { if [ $? -eq 0 ]; then play_success; else play_failure; fi }


It obviously doesn’t work over ssh.
Even if I install my zsh config on a remote machine,
I don’t have the sounds there.
Abandon this solution.


The zsh config is local to the machine, but I always use iTerm2,
so maybe it has some helpful features.
Yes, it does!
Triggers [2] can do something every time some pattern is printed to the terminal.
In the bira [3] zsh theme, if a command fails, it displays the error code and a unicode sign: 1 ↵
Triggers use regular expressions, so it is easy to match on a character that rarely appears in normal work.
I went to iTerm2 Preferences -> Profiles -> Advanced -> Triggers -> Edit and put there something like this:

0 ↵$          Run Command    afplay /System/Library/Sounds/Submarine.aiff &
[1-9]\d* ↵$   Run Command    afplay /System/Library/Sounds/Blow.aiff &

The first pattern catches exit code 0 and the second catches the non-zero exit codes.
Mind the & at the end of the commands. It is easy to overlook, but without it they run in the same thread
and cause the same problem as in solution 1a.
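The two trigger patterns can be sanity-checked with any regex engine. A quick sketch in Python (note that the failure pattern relies on an escaped digit class, [1-9]\d*):

```python
# Check the iTerm2 trigger patterns against sample prompt endings
# (the "↵" character comes from the bira theme's return_code segment).
import re

success = re.compile(r"0 ↵$")
failure = re.compile(r"[1-9]\d* ↵$")

assert success.search("0 ↵")          # clean exit -> Submarine
assert failure.search("1 ↵")          # failure -> Blow
assert failure.search("127 ↵")        # command not found -> Blow
assert not failure.search("0 ↵")      # a clean exit never plays Blow
assert success.search("10 ↵")         # caveat: "10 ↵" also ends with "0 ↵"
print("patterns behave as expected")
```

The last assertion shows a small caveat: exit codes ending in 0 (such as 10) match both patterns, so both sounds would fire; requiring a non-digit before the 0 in the success pattern would avoid that.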

I also had to modify bira to start displaying the 0 exit code as well:

local return_code="%(?.%{$fg[green]%}%? ↵%{$reset_color%}.%{$fg[red]%}%? ↵%{$reset_color%})"


In zsh, I can cycle through the files in a directory using tab.
Every time I did that, the line with the status code of the previous command was reprinted,
which triggered the sound.


Bira splits the prompt into two lines, so I simply moved the status code to the first line:

PROMPT="╭─${user_host} ${current_dir} ${rvm_ruby} ${git_branch} ${return_code}
╰─%B$%b "
# RPS1="${return_code}"


Now I am quite close to what I want.
I have sounds that work even over ssh.
The problem is tmux.
When I scroll in tmux, the entire pane gets repainted
and iTerm2 plays sounds for all the status codes that are visible!


Use iTerm2’s tmux integration.
I installed tmux 2.0 on the server
(from source; it was the easiest method).
Now I am able to connect to it via ssh with:

ssh user@host -t 'tmux -CC attach'


No problems yet, TBD.

Wrapping up:

Terminals don’t like sounds.
I can set 256 colors for foregrounds and backgrounds in my terminal emulator,
but I can’t change the beep sound easily.
After a couple of days, I think it was worth it.
I really got used to having this additional feedback,
even when I am listening to music.

Please comment if you find it useful.
If you don’t, let me know why.



The unintuitive latency over throughput problem

Since 15.01.2015, I have been teaching a course about the Elixir programming language. Elixir was created by José Valim, a Rails Core Team member. He knew that the Erlang virtual machine easily solves problems that are hard to solve in Ruby, mainly concurrency. During my first presentation, I had a couple of slides showing what is so great about the Erlang VM that José picked it for implementing a Ruby successor.

I will not go into details about all the technical aspects, but there was one that was particularly hard to understand: latency over throughput. To understand this problem better, you have to know one thing about scheduling. Erlang gives you very lightweight processes, but this feature is not unique: Go has goroutines, Scala has the Akka library, and other programming languages are starting to provide libraries that mimic this. But Erlang also gives you preemptive scheduling, which is a really unique feature.

I tried to find something about preemptive scheduling in other languages. I found articles about plans to add it to Go and Akka, but as far as I know, it is not quite there yet (correct me in the comments if I am wrong!).

But what is preemptive scheduling? Without going into details: it means that a long-running process can be paused to give other processes CPU time. Like in an operating system, but using lightweight processes and with much, much less overhead :)

Why is this detail important? Because it can greatly reduce latency, which makes users happy. :) How exactly does it work? To answer that, let’s ask another question.

We have a single-core CPU with no hyper-threading. There is no parallelism involved. We are testing two web server implementations. We fire 1 000 000 requests and wait until the web server returns all responses. The first server has no preemptive scheduling. It returns the last response after 60 seconds. All responses are correct. Next, we do the same to the server with a preemptive scheduler. It finishes processing the last request after 90 seconds. Again, all responses are correct.

Which web server has higher throughput? Which one has lower latency? Which one would you choose?

The throughput question is easy: the first one serves 1 000 000 / 60 requests per second, which is about 16 667 rps. The second one serves about 11 111 rps, which is worse.

The latency question: it might be tempting to say that the first server has lower latency. If processing all the requests was faster, then the average processing time must be lower, right? WRONG! And I will prove it to you using a counterexample consisting of only two requests!


Let’s say there is one CPU-intensive request that lasts for 5 time units and one quick one that lasts for 1 time unit. The longer one is first in the processing pipeline. In the web server without preemptive scheduling, it has to be processed from start to end. Only after that can we get to the second one. We also count the switching time between requests (one unit). Let’s calculate the average response time. The first response was sent after 5 tu, the second one after 7 tu. The average latency is 6 tu.

In the preemptive web server, the first request is processed for only one time unit and then it gets preempted, so that the second one has a chance. That one gets processed quickly, and then we come back to the first one. Here, the first request is finished after 8 tu and the second one after 3 tu, giving an average latency of 5.5 tu.
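The worked example can be checked with a tiny simulation. This is only a sketch of the scenario above (the 1 tu switch cost and the job sizes are taken from the text; the round-robin quantum of 1 tu is my assumption):

```python
# Reproduce the example: two requests of 5 and 1 time units,
# with a context switch costing 1 time unit.

SWITCH = 1

def run_to_completion(jobs):
    """No preemption: each request runs start-to-end, paying one switch
    between consecutive requests. Returns completion times from t=0."""
    t, done = 0, []
    for i, work in enumerate(jobs):
        if i > 0:
            t += SWITCH
        t += work
        done.append(t)
    return done

def round_robin(jobs, quantum=1):
    """Preemptive round-robin; a switch is paid only when the CPU actually
    moves to a different request."""
    remaining = list(jobs)
    done = [0] * len(jobs)
    ready = list(range(len(jobs)))
    t = 0
    while ready:
        i = ready.pop(0)
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] == 0:
            done[i] = t
        else:
            ready.append(i)
        if ready and ready[0] != i:
            t += SWITCH
    return done

fifo = run_to_completion([5, 1])   # [5, 7] -> average 6.0
fair = round_robin([5, 1])         # [8, 3] -> average 5.5
print(fifo, sum(fifo) / 2)
print(fair, sum(fair) / 2)
```

The preemptive run finishes everything later (8 tu vs 7 tu, worse throughput) yet its average response time is lower, exactly the trade-off described above.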

We have worse throughput and better latency at the same time! This is possible because preemptive scheduling is fair. It minimises the waiting time of requests and makes sure that quick requests are served faster.

In the real world, the longer request might be not 5 times but 1000 times longer. Also, the context-switch time is usually a really small fraction of the minimal processing time. This means that you can get MUCH better results with preemptive scheduling.

Of course, it is not a silver bullet. If your application processes data and you need all the data points for the next step of the computation, you will just end up with the lower throughput. There is no need to optimise for latency in that case. But if you are building a website where you serve independent users and you want to be fair with them, check out the Erlang VM. You will not regret it!

Managing documentation with GitHub and Jekyll

In my company, we have an open source project that has been running for some time: MongooseIM, a chat server written in Erlang. Recently, we stumbled on a problem. All of our documentation was on wiki pages on GitHub. The MongooseIM team puts great effort into keeping it always up to date. But what if someone needs the docs for an old version of Mongoose?

The solution seemed easy: let’s generate an html version of the docs for every new git tag!

The current docs already used Markdown, so to get better versioning, we moved them into the repository. This way, when we update the code, we can update the docs in the same commit. If someone checks out an old tag, they will have the matching docs in the doc folder. Cool!

Now, the hard part! We would like to generate html docs from the markdown ones. Why? They are easier to read and give us greater control over presentation. We can also show the docs for a couple of recent versions without the need to check out the repo or switch between tags.

The choice of static page generator was a no-brainer. Jekyll is not only easy to set up and generates static html from markdown, but it also has native GitHub Pages support. We used categories to indicate releases and changed the permalinks to use the scheme /:categories/:title/. After prepending dates to the file names, our docs automatically became posts. That was easy!

Then we realised that not everything is so cool…

1. Links between files were broken. In GitHub Flavoured Markdown, when you write a link like [some title](other.md) in a file, GitHub searches for other.md in the same directory as that file. But the html generated by Jekyll contains <a href=”other.md”>some title</a>, which means that instead of going from a page like /MongooseIM/1.5/some-page/ to /MongooseIM/1.5/other/, the browser goes to /MongooseIM/1.5/some-page/other.md. It just appends the href to the existing url, which does not make any sense.

We thought about changing the base_url, but there is only one for the entire site, so we would have problems with the different docs versions, and versioning was one of the main points of doing the whole html generation!

The easiest solution we could think of was going through all the markdown files and changing every relative link target from (file) to (../file). We simply used sed for that. It looks more like an ugly hack than a solution, though.

2. Erlang code quite often looks like this: {{atom, Something}, Options}. This is the syntax for tuples nested in each other. Jekyll uses Liquid as its templating language, and two curly braces are used to print variables. The problem is that we wanted to use unmodified markdown files. We don’t have any variables! We don’t even have the Jekyll front matter with the title, date and other stuff. We were able to squeeze all that into _config.yml.

This led us to another hack: we added {% raw %} and {% endraw %} to every file before the generation step.
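Both hacks can be sketched as a single pre-generation pass. This is a hypothetical Python rendering; the original used sed, and the flat docs directory layout here is an assumption:

```python
# Pre-generation pass over the markdown docs:
# 1. rewrite relative link targets "](file)" to "](../file)" so that
#    Jekyll's /:categories/:title/ permalinks resolve to sibling pages,
# 2. wrap each file in {% raw %} ... {% endraw %} so Liquid ignores
#    Erlang tuples like {{atom, Something}, Options}.
import pathlib
import re

def preprocess(docs_dir):
    for path in pathlib.Path(docs_dir).glob("*.md"):
        text = path.read_text()
        # leave absolute urls and already-relative "../" links alone
        text = re.sub(r"\]\((?!https?:|/|\.\./)", "](../", text)
        if not text.startswith("{% raw %}"):
            text = "{% raw %}\n" + text + "\n{% endraw %}\n"
        path.write_text(text)
```

Running this once per release directory before `jekyll build` mirrors the two sed-style hacks described above.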

3. All the titles got capitalised. Instead of mod_mam, we now have Mod_mam. This is problematic, because Erlang is case sensitive, so those titles mean two different things…

We haven’t found an easy solution to this problem yet. Let’s call it a day; we will think about it tomorrow.

How do you manage documentation for your open source projects? Do you use some other tool? Maybe you know how to make Jekyll generate links in some clever way? Do you know how to disable Liquid for all posts in a category? Where do you configure Jekyll to stop capitalising post titles? If you know any of those things, please post a comment :)

Lisp for Erlangers

I was always curious about the Lisp family of programming languages.
There are two reasons for that.
Firstly, there are Scheme[1], Clojure[2], Common Lisp[3] and Shen[4],
all different flavours of Lisp,
so there must be some really valuable property in Lisp
that people want to have in their language.
Secondly, many people say that you can “expand your programming horizons”
and that the best way to do it is to learn about Lisp macros[5].

There is one book that is particularly about Lisp and its macros:
“On Lisp” by Paul Graham[6].
I am an Erlang programmer, so chapters 2-5 were pretty basic for me.
Functions as data? Used every day. Closures? Easy.
Interactive shell? Can’t live without it now!
Recursion? Even looping over a list is done this way.
Chapter 6 is called “Functions as Representation”
and I was curious what that means,
so instead of jumping straight to “Macros” (chapter 7),
I started reading about how you can use functions to model networks.

A simple problem was chosen to illustrate the idea:
write a 20 questions game.
The user has to think of a person and then
is asked questions with only “yes” or “no” answers about that person.
Based on the answers, he is either asked another question
or he gets the person’s name from the program.

The tree of questions and answers is represented as a list of nodes.
Based on that data structure, there are two ways to solve the problem:
1. The usual way, where each time we search for the node in the list,
ask the question and, based on the answer, search for the next node.
2. The clever way, where we first preprocess the list into a tree of functions.

As an exercise, I wanted to implement it in Erlang
and this is what I came up with:


%% node is either:
%% {name, question, node for yes answer, node for no answer}
%% {name, answer, undefined, undefined}
%% undefined is a special value for not implemented parts of the tree
list_of_nodes() ->
    [{people, "Is the person a man?", male, undefined},
     {male, "Is he living?", undefined, deadman},
     {deadman, "Was he American?", us, undefined},
     {us, "Is he on a coin?", coin, undefined},
     {coin, "Is the coin a penny?", penny, undefined},
     {penny, "Lincoln!", undefined, undefined}].

usual_way() ->
    Nodes = list_of_nodes(),
    run_nodes(people, Nodes).

%% searches for node and runs actual code
run_nodes(NodeName, Nodes) ->
    Node = get_node(NodeName, Nodes),
    run_nodes1(Node, Nodes).

%% if both "yes node name" and "no node name" is undefined
%% this must be a leaf with answer
run_nodes1({_Name, Answer, undefined, undefined}, _Nodes) ->
    Answer;
%% ask question and run recursively
run_nodes1({_Name, Question, YesNodeName, NoNodeName}, Nodes) ->
    {ok, [Answer]} = io:fread(Question, "~s"),
    case Answer of
        "yes" -> run_nodes(YesNodeName, Nodes);
        _ -> run_nodes(NoNodeName, Nodes)
    end.
%% searching for undefined? it is either not implemented
%% or we simply don't know
get_node(undefined, _Nodes) ->
    {undefined, "I have no idea!", undefined, undefined};
%% if there is no such NodeName - we made a spelling mistake
%% otherwise, return the node
get_node(NodeName, Nodes) ->
    case lists:keyfind(NodeName, 1, Nodes) of
        false -> error({unknown_node, NodeName});
        Node -> Node
    end.

clever_way() ->
    Nodes = list_of_nodes(),
    RootFun = compile_net(people, Nodes),
    RootFun().

%% After compilation, a node is a fun()
%% which has pointers to other nodes.
compile_net(Root, Nodes) ->
    %% it uses get_node just like the previous solution
    Node = get_node(Root, Nodes),
    case Node of
        %% if this is a leaf node,
        %% we have to return Answer, but it has to be wrapped in a fun()
        {_Name, Answer, undefined, undefined} ->
            fun() -> Answer end;
        %% if this is some other node,
        %% we immediately call compile_net on both children
        %% this means that we will process the whole tree
        {_Name, Question, YesNodeName, NoNodeName} ->
            YesFun = compile_net(YesNodeName, Nodes),
            NoFun = compile_net(NoNodeName, Nodes),
            %% this entire fun() is returned
            %% it asks the question when called
            %% but it doesn't have to look for the node in the node list,
            %% because both children were precomputed above;
            %% even in another scope, YesFun and NoFun
            %% will still point to the right functions
            %% based on the answer, it runs one of the children
            fun() ->
                    {ok, [Answer]} = io:fread(Question, "~s"),
                    Fun = case Answer of
                              "yes" -> YesFun;
                              _ -> NoFun
                          end,
                    Fun()
            end
    end.

You can see that the clever way seems way more complicated.
One of the reasons for writing this program was to show
that you can represent data with closures.
What are the advantages of this representation?
Firstly, it runs faster after compiling
(from now on, by “compiling” I mean calling compile_net).
The usual implementation searches the list for the node every time
the user is asked a question.
In a balanced tree with N elements, we ask log(N) questions
and each lookup costs O(N), so the runtime of one game is O(N*log N).
The clever implementation takes only O(log N) to run a single game,
because all the nodes were found earlier.
The compilation itself has to search the entire node list for each node it compiles,
so it takes O(N^2).
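The closure trick is not Erlang-specific. Here is a minimal Python rendering of compile_net (the shortened tree and the ask-oracle interface are my own simplifications):

```python
# Minimal Python analog of compile_net: "compile" the node list into
# nested closures once, so a game never searches the list again.
# Node layout mirrors the Erlang version: (question/answer, yes, no).
NODES = {
    "people":  ("Is the person a man?", "male", None),
    "male":    ("Is he living?", None, "deadman"),
    "deadman": ("Lincoln!", None, None),
}

def compile_net(name):
    if name is None:                       # unimplemented branch
        return lambda ask: "I have no idea!"
    text, yes, no = NODES[name]            # a typo fails here, at compile time
    if yes is None and no is None:
        return lambda ask: text            # leaf: the answer itself
    yes_fun, no_fun = compile_net(yes), compile_net(no)
    # The returned closure holds its children directly; no lookup at play time.
    return lambda ask: yes_fun(ask) if ask(text) else no_fun(ask)

game = compile_net("people")               # misspelled names blow up right here
answer = game(lambda q: q != "Is he living?")  # oracle: "no" only to liveness
print(answer)  # -> Lincoln!
```

The two advantages discussed below are visible even in this toy: lookups are paid once during compile_net, and a misspelled node name raises during compilation rather than mid-game.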

This pattern of precomputing something expensive first,
so that later calculations can be performed faster, is really important.
Think of LU decomposition[7] for solving systems of linear equations.
You need O(N^3) operations to use Gaussian elimination[8],
but you can first decompose the matrix, which also takes O(N^3) operations,
and after that you can solve the system
for different right-hand-side vectors in O(N^2) time each.

The second advantage of precompilation is finding bugs earlier.
In the usual way, if I misspell a node name,
I’ll get an error at runtime.
In the clever way, I’ll get the error during compilation.
This is a huge win for me.

Last but not least, at the end of the chapter
you can read:

“To represent with closures” is another way of saying “to compile”
and since macros do their work at compile-time,
they are a natural vehicle for this technique.

For now, I don’t fully understand why representing with closures
is the same as compilation.
I can see that it is useful for precomputing things.
During my studies at AGH University of Science and Technology[9],
I had an entire semester about compilers:
scanners, parsers, grammars and final code optimisation,
but I have never thought about it this way.

This makes me even more interested in Lisp macros!


Why Erlang modules have long names, or how to troll an Erlang developer?

Yesterday my friend, who is learning Erlang, asked me to show him how to use funs in Erlang. I could have just typed the answer into an Adium window, but I like to be sure that everything I send compiles and works, so I quickly created an Erlang module, scribbled an example and tried to compile it.

While in a rush, I didn’t think about the file name and just called the file file.erl. When I compiled it, I got an error saying:

121> c(file).

=ERROR REPORT==== 20-Sep-2014::10:08:33 ===
Can't load module that resides in sticky dir

A sticky dir is something to do with file permissions, right?
So I quit the Erlang shell, checked the permissions and restarted it:

122> q().
$ erl
Erlang R16B02 (erts-5.10.3)  [smp:2:2] [async-threads:10] [hipe] [kernel-poll:false]

{"init terminating in do_boot",{undef,[{file,path_eval,[[".","/Users/tomaszkowal"],".erlang"],[]},{c,f_p_e,2,[{file,"c.erl"},{line,474}]},{init,eval_script,8,[]},{init,do_boot,3,[]}]}}

Crash dump was written to: erl_crash.dump
init terminating in do_boot ()

WTF?! I am opening a fresh Erlang shell and it crashes?! Did I just break the Erlang VM?! How? And then, while rereading the error, it struck me: “.” is in the code path, so my module called file shadowed the standard file module, which is used during boot to search for .erlang and execute its contents [1].

Of course, I didn’t come up with that idea during the first reading. Somehow my brain did not associate the file from the error message with the file I had just created. They were in different contexts: the error comes from the guts of the Erlang VM, while my module was just 4 lines of code including the module declaration and exports.

So, to answer the question from the post title: send an Erlang developer a module called file and let them compile it. If they are not careful enough to delete the .beam file, they won’t be able to use erl in that directory and will get a cryptic message that has puzzled a couple of people [2]. It took me a couple of minutes to realise how silly I had been, so maybe someone will fall for it too!

This also explains why in most Erlang applications module names are so long and prefixed with the application name. There are no namespaces in Erlang, so all modules should have unique names. It is not as bad as it seems: I have been working with Erlang for a couple of years now, and this was the first time I had this kind of problem. Next time, I’ll name my file asdf.


Erlang OTP gen_server boilerplate.

gen_server [1] is the most basic OTP behaviour. It is also very convenient and used in almost every bigger project. Nevertheless, people sometimes ask: “Why is there so much boilerplate?” [2] Usually, we can find three sections in a module implementing gen_server:

1. At the top are the API functions of the server.
2. In the middle are the callbacks
that you have to implement for the gen_server behaviour.
3. At the end are the helper functions.

The useful stuff happens in the second section. There you can see the actual operations on the data and the state management. So why do we want to repeat everything that is already in handle_call and handle_cast just to provide an API? Why can’t gen_server generate the API for us?

Let’s take an example from Learn You Some Erlang [3]: a gen_server for storing cats. It is described in detail here [4].


-module(kitty_gen_server).
-behaviour(gen_server).

-export([start_link/0, order_cat/4, return_cat/2, close_shop/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-record(cat, {name, color=green, description}).

%%% Client API
start_link() ->
    gen_server:start_link(?MODULE, [], []).

%% Synchronous call
order_cat(Pid, Name, Color, Description) ->
   gen_server:call(Pid, {order, Name, Color, Description}).

%% This call is asynchronous
return_cat(Pid, Cat = #cat{}) ->
    gen_server:cast(Pid, {return, Cat}).

%% Synchronous call
close_shop(Pid) ->
    gen_server:call(Pid, terminate).

%%% Server functions
init([]) -> {ok, []}. %% no treatment of info here!

handle_call({order, Name, Color, Description}, _From, Cats) ->
    if Cats =:= [] ->
        {reply, make_cat(Name, Color, Description), Cats};
       Cats =/= [] ->
        {reply, hd(Cats), tl(Cats)}
    end;
handle_call(terminate, _From, Cats) ->
    {stop, normal, ok, Cats}.

handle_cast({return, Cat = #cat{}}, Cats) ->
    {noreply, [Cat|Cats]}.

handle_info(Msg, Cats) ->
    io:format("Unexpected message: ~p~n",[Msg]),
    {noreply, Cats}.

terminate(normal, Cats) ->
    [io:format("~p was set free.~n", [C#cat.name]) || C <- Cats],
    ok.

code_change(_OldVsn, State, _Extra) ->
    %% No change planned. The function is there for the behaviour,
    %% but will not be used. Only a version upgrade would need it.
    {ok, State}.

%%% Private functions
make_cat(Name, Col, Desc) ->
    #cat{name=Name, color=Col, description=Desc}.

Not a single line of code was changed; in the original post, the only thing I added was a background color highlighting the client API section.

That highlighted code, the client API, is executed in the client process. Sometimes it is easy to think about all the code in a gen_server module as code that runs on the server side, but this is wrong.

The API functions are called in the client process, and it is the client process that sends messages to the server.
Why is this important?

Let’s look closer at the return_cat function. The second parameter must be a valid cat record. If you try to call something like this:

return_cat(Pid, dog).

your client will crash, but if you call it like this:

gen_server:cast(Pid, {return, dog}).

your server will crash.

This makes a huge difference. The “let it crash” philosophy provides confidence that errors will not propagate, so it is really important to crash as fast as possible. If you can do some validation on the client side, do it. Let the programmers know that it was the client that sent bad data, not the gen_server that has a bug.

What is even more important, you will not lose the precious state. Sometimes it is good to crash the gen_server, for example when its internal state has somehow become invalid. I once had a gen_server that was a frontend to a database connection. It kept the connection in its state. If some operation failed, the server crashed and the supervisor tried to restart it a couple of times, creating a new connection in init. A failed operation usually required reconnecting anyway, so this worked when the connection problem was temporary; but when the database was down permanently, the supervisor gave up and shut down the entire application. More often than not, though, you would like to preserve the state and not let invalid data contaminate it.

So the API functions, which at first glance look like boilerplate, are really important for your application.


Functional JavaScript – passing additional arguments to callback.

This post will show a perfect use case for closures and higher order functions.

At work, I develop websites in Erlang, but sometimes there are things that just have to be done on the client side using JavaScript.
I knew the JavaScript basics, but I wanted to truly know what I am doing, so I picked up “Programming JavaScript Applications” [1].
For a quick reference, I also use “JavaScript: The Good Parts” [2].
I like to get my hands dirty early, so I decided to make a simple game.
Searching for resources, I found a tutorial on building a game main loop [3].

The main function looked like this:

var mainloop = function() {
    updateGame();
    drawGame();
};

and it was called like this:

var animFrame = window.requestAnimationFrame;
var recursiveAnim = function(timestamp) {
    mainloop();
    animFrame( recursiveAnim );
};
animFrame( recursiveAnim );

Actually, it was more complicated, but that is not important for making my point about functional programming.
If you want to see the details, check out the original blog post [3].

There are two things you have to know:

1. animFrame [4] takes a callback that you want to be called before repainting the window.
This callback takes exactly one argument: a timestamp.

2. mainloop, updateGame and drawGame take no parameters,
so they just operate on variables that are defined outside their scope.

I don’t like the second point. I would like to write functions
that don’t depend on external state: pure functions [5].
They are easier to test and reason about.
For example, the updateGame function could take the game state as an argument
and return a new game state.
drawGame should also take the canvas context and the game state as arguments.
It won’t be pure, because it will draw to the canvas,
but at least it will always work the same way given the same inputs.

In short, I wanted something like this:

var mainloop = function mainloop(gameState, ctx) {
    var newGameState = updateGame(gameState);
    drawGame(newGameState, ctx);
    return newGameState;
};

But there is a problem: how do I pass it to animFrame?
animFrame doesn’t know anything about the game state or the other variables
that I want to pass to the mainloop.
This is a perfect job for closures [6].

I can create a function that takes exactly one argument
but “carries” two more from the outer scope:

function (timestamp) {
    //uses arguments from outside its scope
    //use only inside another function that has gameState and ctx defined
    recursiveAnim(timestamp, gameState, ctx);
}

and use it like this:

var recursiveAnim = function(timestamp, gameState, ctx) {
    var newGameState = mainloop(gameState, ctx);
    animFrame( function (timestamp) {
        recursiveAnim(timestamp, newGameState, ctx);
    });
};

// start the mainloop
animFrame( function (timestamp) {
    recursiveAnim(timestamp, gameState, ctx);
});

This allows passing additional arguments to the callback
without exposing newGameState or ctx to any other code.
It’s already great, but it can be better.
Let’s extract the pattern that wraps a three-argument function into a one-argument function:

var toRecursiveAnim = function toRecursiveAnim(callback, gameState, ctx) {
    return function(ts) {
        callback(ts, gameState, ctx);
    };
};

This one takes a callback and two additional arguments
and creates a one-argument function which
calls the given callback with those arguments.
Now the code can be simplified:

var recursiveAnim = function(timestamp, gameState, ctx) {
    var newGameState = mainloop(gameState, ctx);
    animFrame( toRecursiveAnim(recursiveAnim, newGameState, ctx) );
};

// start the mainloop
animFrame( toRecursiveAnim(recursiveAnim, gameState, ctx) );
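The wrapper translates directly to any language with closures; here is a Python rendering of the same idea (the names and the stand-in main_loop are mine, not from the original tutorial):

```python
# Same pattern in Python: wrap a three-argument callback into the
# one-argument function a scheduler expects; the extra arguments are
# carried by the closure, invisible to the caller.

def to_recursive_anim(callback, game_state, ctx):
    def wrapped(timestamp):
        return callback(timestamp, game_state, ctx)
    return wrapped

def main_loop(timestamp, game_state, ctx):
    # stand-in for update + draw; just returns what it was given
    return (timestamp, game_state, ctx)

tick = to_recursive_anim(main_loop, {"score": 0}, "canvas")
print(tick(16))  # -> (16, {'score': 0}, 'canvas')
```

Only `tick` is handed to the scheduler; the game state and context never leak into any other scope, which is exactly the property the JavaScript version achieves.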

JavaScript allows extracting patterns and writing code that is really easy to test.
This example showed a good use case for closures and higher order functions in JavaScript.
Hope you enjoyed it :)