26 programming languages in 25 days, Part 1: Strategy, tactics and logistics

Since making a sudden leap from computer science to academic medicine about seven years ago, I haven’t programmed as much.

I love what I do in medicine and biology, and I love helping patients.

But, I have missed programming – and programming languages.

Then I came across the Advent of Code on Mastodon – a series of daily two-part puzzles for programmers that runs for 25 days.

On a whim, I solved the Day 1 puzzle using awk.

I solved Day 2 in LaTeX (which is built on TeX) to reboot an old skill.

After that, I wondered if I could solve each of the 25 puzzles using a different programming language every day.

So, I did.

In the end, I used 26 languages, because I combined two on Day 21 (sed and bc), turning the experience into a rapid-fire “breadth-first search” of programming-language space.

Learning how to learn a new language quickly became the key meta-lesson.

And, it’s a great way to build an appreciation of the relative strengths and intended domains of different languages.

If you’d like to try the breadth-first search yourself, I have distilled advice on the strategy, tactics and logistics involved in using a new programming language every day for 25 days.

Strategic planning

On Day 3, I looked back at some previous years of Advent of Code to get a sense of the problems.

I noticed the problems escalate in difficulty (on average) each day.

The completion statistics support that observation too.

So, as a contest-wide strategy, I ranked languages by their power (relative to my ability in them), where ranking was roughly the product of:

  1. my experience and comfort with the language; and
  2. its expressiveness.

For example, I’ve done a lot of programming in Scala (over 10 years ago, anyway), and Scala is a highly expressive language.

So, I pushed Scala toward the end of my list (using it ultimately on Day 24).

In contrast, I moved C toward the beginning of the list, because even though I have plenty of experience writing C code, it’s a painful language in which to do much of anything.

Similarly, I moved MATLAB toward the very beginning because I didn’t have any experience in it, and by reputation, it seemed clunky and difficult for tasks outside its core domain.

Were I to do it over, I’d use the truly novel languages on the first 15 days and save my strongest languages for Day 16 onward.

The difficulty on the first 15 days was almost perfectly calibrated for kicking the tires on a new language.

A daily sorting

Each day, I did some re-sorting of the list, trying to use the “minimum viable language” for that day.

In the end, here’s how it unfolded, along with my experience in that language:

Day 01: Awk          [minor experience]
Day 02: LaTeX        [minor experience (as programming language)] 
Day 03: C            [extensive experience]
Day 04: Java         [extensive experience]
Day 05: MATLAB       [no experience]

Day 06: C#           [no experience]
Day 07: Ruby         [no experience]
Day 08: Julia        [no experience]
Day 09: Bash         [extensive experience]
Day 10: vimscript    [no experience]

Day 11: C++          [no experience (beyond meta-programming)]
Day 12: R            [no experience]
Day 13: JavaScript   [extensive experience]
Day 14: Erlang       [no experience]
Day 15: Go           [no experience]

Day 16: Python       [minor experience]
Day 17: Standard ML  [extensive experience] 
Day 18: PHP          [extensive experience]
Day 19: Common Lisp  [no experience]
Day 20: TypeScript   [no experience]

Day 21: sed & bc     [minor experience]
Day 22: Lua          [no experience]
Day 23: Haskell      [extensive experience]
Day 24: Scala        [extensive experience]
Day 25: Racket       [extensive experience]

The reserves

Throughout the contest, I also updated a “primary reserve” of languages I could pull out for a particularly hard problem if the language I’d planned for that day proved too tedious or cumbersome.

Here are the languages I still had in the primary reserve at the end:


For example, Standard ML had been in my primary reserve on Day 17.

I had tentatively penciled in PHP for Day 17.

But, when I looked at the problem, my hunch was that PHP was going to turn painful quickly, whereas a more functional approach seemed well suited to the problem.

It seemed worth using up a “high-value” language.

(And, when part 2 was revealed for Day 17, this proved to be a wise choice.)

I had a “secondary reserve” too – languages that might be well-suited for a particular problem, and these were the ones left at the end:

Emacs Lisp

In retrospect, I was probably overly cautious, since plenty of “high-value” languages remained at the end.

I kept a “probably not” list too, of languages that could be fun to use but probably not practical:


These probably would have been fine for the early problems (Day 10 or earlier).

Logistics for a language a day

Using a different language every day meant solving some recurring logistical problems: installing a new toolchain, picking a development environment, and bootstrapping a new project for each language.

Homebrew to the rescue

For the most part, Homebrew solved the installation problem.

Almost every language had a Homebrew-based option.

vim as universal IDE

I knew I didn’t have time to learn a new language and a new IDE every day, so I used vim and make as my universal IDE.

The only exception was C#: I couldn’t figure out how to avoid Visual Studio, so I just went with it.

I used to use emacs, and it would have been a fine choice as well.
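As a sketch, assuming a compiled language like C, each day’s stub project had a Makefile along these lines (the target and file names here are hypothetical, and the recipe changes per language):

```make
# Hypothetical per-day Makefile: `make run` builds and runs the solution.
run: solution
	./solution < input.txt

solution: solution.c
	cc -O2 -o solution solution.c
```

For interpreted languages, the build step disappears and the run target just invokes the interpreter.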

Writing cat in every language in advance

For languages I had a reasonable chance of using at some point, I wrote a simple program that printed out the contents of a file to stdout.

This let me pre-explore each language, and it gave me stub projects with skeleton Makefiles from which to start the puzzles.
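As an illustration, here is what that warm-up program might look like in Python (one of the languages on my list); the details vary by language, but the shape is the same everywhere:

```python
import sys

# Minimal "cat": copy the contents of each named file to stdout.
# Writing this first in every candidate language exercises the whole
# toolchain: installing it, building/running, reading command-line
# arguments, opening files and printing.
def cat(path):
    with open(path) as f:
        sys.stdout.write(f.read())

if __name__ == "__main__":
    for path in sys.argv[1:]:
        cat(path)
```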

Tactics for using a new language quickly

I wasn’t going for the fastest completion time on any of these days, nor was I going for the shortest solution.

However, given personal and professional constraints, I had only one to two hours to work on the puzzles each day, and those constraints drove the development of specific tactics.

Tactic: Sleep

Well-timed sleep was probably the most important thing I did.

I stayed up each night until the problem was released (11pm my time), but I didn’t try to code up the solution right away.

Instead, I read the problem description before bed and then thought about how to solve it while falling asleep.

I usually woke up in the morning with a full sketch of the solution in my head, or something close to it.

Using sleep to do the heavy lifting on algorithm design meant I could spend my waking hours learning the relevant bits of the language.

Tactic: Focus on the algorithm

With at least a rough sketch of the solution in my head from sleep, I focused on describing the solution – usually in pseudocode or in discrete mathematics.

Where possible, I tried to describe the algorithm in purely functional terms, so that I could make a direct translation from the math into code.
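As a hypothetical illustration (in Python, and not one of the actual puzzles): a count written in set-builder notation – |{ i : aᵢ < aᵢ₊₁ }| – translates almost symbol-for-symbol into a functional expression.

```python
# Count the positions where a sequence strictly increases.
# In math: |{ i : a[i] < a[i+1] }|.  The code is a direct transliteration.
def count_increases(a):
    return sum(1 for x, y in zip(a, a[1:]) if x < y)
```

Having the math settled in advance means the only thing left to learn on the day itself is the target language’s syntax for comprehensions, folds or loops.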

Tactic: Pick the language family first

Once I had the algorithm, I tried to figure out the best language family.

Would it be easier in a statically typed language or a dynamically typed language?

Would it be amenable to a purely (or mostly) functional solution, or would it be more naturally solved using side effects?

Was it possible or advantageous to use a logic-programming language?

Could it be solved with a domain-specific language?

Tactic: Select the minimum viable language

After I had a sense of what kind of language would make it easiest to implement the solution, I tried to pick the minimum viable member of that family that remained on my lists.

Tactic: Exploit universal data structures

When writing down the algorithm, I tended to focus on the universal data structures I knew I could either find in any language or quickly recreate in any language: tuples, sets, lists, (multidimensional) vectors/arrays, associative arrays/dictionaries, ordered-key maps, functions and hash tables.

And, I usually launched a rapid-fire series of Google queries like “how to implement X in language Y” for every data structure X that I used in the design.
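For concreteness, here is how those universal structures surface in Python; the point is that nearly every language offers a close analogue of each, even if the spelling differs:

```python
from collections import defaultdict

t = (3, 4)                          # tuple
s = {1, 2, 3}                       # set
xs = [1, 2, 3]                      # list
grid = [[0] * 3 for _ in range(2)]  # (multidimensional) vector/array
d = {"a": 1, "b": 2}                # associative array / hash table
tally = defaultdict(int)            # dictionary with a default value
inc = lambda x: x + 1               # first-class function
ordered = sorted(d)                 # ordered view of the keys
```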

Tactic: Do translation more than optimization

I focused first on getting a translation of the algorithm up and running in the target language, rather than on learning the best way to express that algorithm in it.

If there was time remaining, I might go back and try to learn better ways to do what I’d done in that language, but only if there was time.

This meant I didn’t always get to exploit or learn all of the cool features of a language.

Tactic: Proactively learn linguistic quirks

I was often tripped up by quirks in a language.

In fact, I was tripped up so often that I benefited more from proactively searching for a language’s quirks than from looking up its cool or expressive features.

I made a habit of searching for the known warts of each language, just to know what to guard against – and where to look first for bugs when something didn’t work.

Frequent offenders were implicit value conversions, what counted as false (or true), and 1-based versus 0-based array indexing.
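Even a relatively clean language has a few; these Python examples show the kind of behavior worth knowing about before the first bug rather than after:

```python
# Implicit conversions and truthiness quirks, Python edition.
assert bool([]) is False        # empty containers are falsy
assert 0 == False               # bool is a subclass of int...
assert "0" != 0                 # ...but strings never equal numbers
assert [10, 20, 30][-1] == 30   # 0-based, and negative indices wrap
assert 1 / 2 == 0.5             # "/" is float division (unlike C)
```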

Tactic: Hard-code the input to start; then parse it later

For most of the problems, it was much faster to hard-code the problem input as a literal value (after some manipulation in vim or with sed) than it was to write a parser.

Early on, I spent a lot of time writing a parser or tokenizer before starting on the puzzle proper.

Hard-coding the input let me start solving the real problem, building up a working familiarity with the language before going back (or not!) to properly parse the input from a file.
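A hypothetical sketch of the pattern, in Python (the input values here are invented for illustration):

```python
# Step 1: paste the input in as a literal, after a little massaging
# in the editor, and start solving immediately.
INPUT = [199, 200, 208, 210, 200, 207]

# Step 2 (later, or never): replace the literal with a real parser.
def parse(path):
    with open(path) as f:
        return [int(line) for line in f if line.strip()]
```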

Final thoughts

I highly recommend Advent of Code to anyone looking to sharpen (or re-sharpen) their programming skills.

It is exceptionally well done.

And, if you want to attempt your own breadth-first search of programming languages, it is an excellent way to do so!