On Day 3, I looked back at some previous years of Advent of Code to get a sense of the problems.
I noticed the problems escalate in difficulty (on average) each day.
The completion statistics support that observation too.
So, as a contest-wide strategy, I ranked languages by their power (relative to my ability in them), where ranking was roughly the product of:
- my experience and comfort with the language; and
- its expressiveness.
For example, I’ve done a lot of programming in Scala (over 10 years ago, anyway), and Scala is a highly expressive language.
So, I pushed Scala toward the end of my list (using it ultimately on Day 24).
In contrast, I moved C toward the beginning of the list, because even though I have plenty of experience writing C code, it’s a painful language in which to do much of anything.
Similarly, I moved MATLAB toward the very beginning because I didn’t have any experience in it, and by reputation, it seemed clunky and difficult for tasks outside its core domain.
Were I to do it over, I’d probably save the truly novel languages for days 1 through 15, and go with my best languages from day 16 onward.
The difficulty on the first 15 days was almost perfectly calibrated for kicking the tires on a new language.
A daily sorting
Each day, I did some re-sorting of the list, trying to use the “minimum viable language” for that day.
In the end, here’s how it unfolded, along with my prior experience in each language:
Throughout the contest, I also maintained a “primary reserve” of languages I could pull out for a particularly hard problem, in case the language I’d planned for that day proved too tedious or cumbersome.
Here are the languages I still had in the primary reserve at the end:
- F#
- OCaml
- Rust
- Perl
- Swift
- Clojure
- Smalltalk
For example, Standard ML had been in my primary reserve on Day 17.
I had tentatively penciled in PHP for Day 17.
But, when I looked at the problem, my hunch was that PHP was going to turn painful quickly, whereas a more functional approach was well-suited.
It seemed worth using up a “high-value” language.
(And, when part 2 was revealed for Day 17, this proved to be a wise choice.)
I had a “secondary reserve” too – languages that might be well-suited for a particular problem, and these were the ones left at the end:
- Elixir
- Perl6/Raku
- Elm
- D
- Emacs Lisp
- Groovy
- Tcl
- Kotlin
- Dart
- Objective-C
- Prolog
In retrospect, I was probably overly cautious, since plenty of “high value” languages remained at the end.
I kept a “probably not” list too, of languages that could be fun to use but probably not practical:
- APL / J
- Prolog
- Forth
- m4
- COBOL
- Fortran
- Ada
These probably would have been fine for the early problems (day 10 or earlier).
Logistics for a language a day
Using a different language every day meant:
- getting these languages installed on my machine;
- finding a suitable environment in which to program them; and
- pre-programming a little in advance in each language.
Homebrew to the rescue
For the most part, homebrew solved the installation problem.
Almost every language had a homebrew-based option.
vim as universal IDE
I knew I didn’t have time to learn a new language and a new IDE every day, so I used vim plus make as my universal IDE.
The only exception was that I couldn’t figure out how to not use Visual Studio for C#, so I just went with it.
I used to use emacs, which would have been a fine choice as well.
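As a sketch of that setup, each day’s stub project had a small Makefile so that a single command built and ran the solution regardless of language. The targets and file names below are illustrative, not my actual files:

```make
# Hypothetical skeleton for one day's puzzle (a C day, in this case).
run: solve
	./solve < input.txt

solve: solve.c
	cc -O2 -o solve solve.c

clean:
	rm -f solve
```

With that in place, the edit-build-run loop was the same in every language: edit in vim, then `make run`.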
cat in every language in advance
For languages I had a reasonable chance of using at some point, I wrote a simple program that printed out the contents of a file to stdout.
This allowed a pre-exploration of the languages, and it gave me stub projects with skeleton Makefiles from which to start the puzzles.
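Here is the shape of that warm-up program, sketched in Python for concreteness:

```python
import sys

def cat(path):
    """Print the contents of a file to stdout, like the Unix cat."""
    with open(path) as f:
        sys.stdout.write(f.read())

if __name__ == "__main__" and len(sys.argv) > 1:
    cat(sys.argv[1])
```

Trivial as it is, writing this exercises file I/O, string handling, and the build-and-run loop, which covers most of what a day-one stub needs.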
Tactics for using a new language quickly
I wasn’t going for the fastest completion time on any of these days, nor was I going for the shortest solution.
However, given personal and professional constraints, I had only one to two hours to work on the puzzles each day, and those constraints drove the specific tactics below.
Well-timed sleep was probably the most important thing I did.
I stayed up each night until the problem was released (11pm my time), but I didn’t try to code up the solution right away.
Instead, I read the problem description before bed and then thought about how to solve it while falling asleep.
I usually woke up every morning with a full sketch of the solution in my head, or something close to it.
Using sleep to do the heavy lifting on algorithm design meant I could spend my waking hours learning the relevant bits of the language.
Tactic: Focus on the algorithm
With at least a rough sketch of the solution in my head from sleep, I focused on describing the solution – usually in pseudocode or in discrete mathematics.
Where possible, I tried to describe the algorithm in purely functional terms, so that I could make a direct translation from the math into code.
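As a made-up example of that math-to-code translation: a specification like “the sum of the squares of the even inputs” is already in functional form, and carries over almost verbatim (Python here, purely for illustration):

```python
from functools import reduce

def sum_even_squares(xs):
    # Direct translation of: fold(+, 0, { x*x : x in xs, x even })
    return reduce(lambda acc, x: acc + x * x,
                  (x for x in xs if x % 2 == 0),
                  0)
```

So `sum_even_squares([1, 2, 3, 4])` gives 20, exactly as the math says.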
Tactic: Pick the language family first
Once I had the algorithm, I tried to figure out the best language family.
Would it be easier in a statically typed language or a dynamically typed language?
Would it be amenable to a purely (or mostly) functional solution, or would it be more naturally solved using side effects?
Was it possible or advantageous to use a logic-programming language?
Could it be solved with a domain-specific language?
Tactic: Select the minimum viable language
After I had a sense of what kind of language would make it easiest to implement the solution, I tried to pick the minimum viable member of that family that remained on my lists.
Tactic: Exploit universal data structures
When writing down the algorithm, I tended to focus on the universal data structures I knew I could either find in any language or quickly recreate in any language: tuples, sets, lists, (multidimensional) vectors/arrays, associative arrays/dictionaries, ordered-key maps, functions and hash tables.
And, I usually launched a rapid-fire series of Google queries like “how to implement X in language Y” for every data structure X that I used in the design.
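To make that inventory concrete, here is how those universal structures map onto one language’s built-ins (Python, chosen only as an example):

```python
# Tuples: small, fixed-size, immutable records.
point = (3, 4)

# Sets: fast membership tests and de-duplication.
seen = {point}

# Lists: ordered, growable sequences.
path = [point, (5, 6)]

# Multidimensional arrays, improvised as nested lists (3x3 grid of zeros).
grid = [[0] * 3 for _ in range(3)]

# Associative arrays / dictionaries (hash tables under the hood).
dist = {point: 0}

# An ordered-key map, improvised by sorting a dictionary's keys on demand.
ordered = {k: dist[k] for k in sorted(dist)}

# First-class functions.
def neighbors(x, y):
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
```

When a language lacked one of these natively, a quick improvisation like the nested-list grid or the sorted-keys map was usually enough.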
Tactic: Do translation more than optimization
I focused my attention on getting a translation of the algorithm up and running in the target language first rather than trying to learn the best way to represent that algorithm in the target language.
If there was time remaining, I might go back and try to learn better ways to do what I’d done in that language.
This meant I didn’t always get to exploit or learn all of the cool features of a language.
Tactic: Proactively learn linguistic quirks
I was often tripped up by quirks in a language. In fact, this happened so often that I benefited more from proactively searching out a language’s quirks than from looking up its cool or expressive features.
I made a habit of searching for known warts or quirks in each language, just to know what to guard against, and where to look first for bugs when something didn’t work.
Frequent offenders for wartiness were implicit value conversions; what counted as true (or false); and 0-based versus 1-based array indexing.
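Those offenders vary surprisingly between languages, so a few minutes of experimentation up front paid off. The snippet below (Python, as one data point) shows the kind of behavior worth pinning down before writing any real code:

```python
# Implicit conversions: Python rejects "1" + 1 outright, but booleans
# convert to integers silently.
assert True + True == 2

# Truthiness: 0 and empty containers are false, but the string "0" is
# true (unlike, say, PHP or Perl, where "0" counts as false).
assert not 0 and not [] and not ""
assert bool("0")

# Indexing: Python is 0-based; MATLAB and Fortran are 1-based.
assert ["a", "b", "c"][0] == "a"
```

Running a handful of assertions like these in a fresh language surfaces exactly the warts that would otherwise become midnight debugging sessions.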
Tactic: Hard-code the input to start; then parse it later
For most of the problems, it was much faster to hard-code the problem input as a literal value (after some manipulation in vim or with sed) than it was to write a parser.
Early on, I had spent a lot of time writing a parser or tokenizer before starting on the puzzle proper.
Hard-coding the input let me start solving the real problem, building up a working familiarity with the language before going back (or not!) to properly parse the input from a file.
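Concretely, the two stages look something like this (the numeric input and the counting puzzle are invented for illustration):

```python
# Stage 1: paste the puzzle input in as a literal (after a little
# manipulation in the editor) and start solving immediately.
DEPTHS = [199, 200, 208, 210, 200]

def count_increases(depths):
    # Count how many readings are larger than the previous one.
    return sum(1 for a, b in zip(depths, depths[1:]) if b > a)

# Stage 2 (later, or never): swap the literal for a real parser.
def parse_input(path):
    with open(path) as f:
        return [int(line) for line in f if line.strip()]
```

Everything downstream of the literal works unchanged when the parser finally replaces it, so nothing is lost by deferring that chore.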
I highly recommend Advent of Code to anyone looking to sharpen (or re-sharpen) their programming skills.
It is exceptionally well done.
And, if you want to attempt your own breadth-first search of programming languages, it is an excellent way to do so!