Excerpt from the Advent of Code website:
"Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets
and skill levels that can be solved in any programming language you like. People use them as a
speed contest, interview prep, company training, university coursework, practice problems, or
to challenge each other."
After much deliberation I (anticlimactically) decided to take the same approach as last year and take my time with each puzzle, using a more enterprise-style, test-driven approach in Kotlin, as I really enjoyed it. I did, however, manage to complete both parts every day (except Day 22 Part 2 - I was ill!) on the day of their release. This included writing a passing solution, cleaning it up, improving its performance, testing and documenting it.
Simply clone or download the repository into your local environment and import it as a Gradle project in your IDE. Running Solutions.kt will run both parts of every completed day, reporting the answers and runtimes in the console.
Task | Description |
---|---|
test | Runs the unit tests with coverage for the implementation, solutions and common sub-projects. |
detekt | Runs Detekt with the custom configuration rules defined in detekt-config.yml. |
The .run directory contains XML run-configuration files from IntelliJ. Included are configurations for running the unit tests in the common, implementation and solutions Gradle sub-projects, as well as for each specific day.
The package structure was something that changed almost every day. My goal was "package-by-feature". For the first few days it seemed like I'd just end up having 25 different packages whose names were relevant keywords from that day. However, towards the latter half of the days, consistencies in the naming started to emerge that made the domain language more ubiquitous. This allowed me to start grouping packages together and abstracting out common code.
I created two Gradle root projects, implementation and solutions, with the former having two sub-projects: common, for behaviour that is shared across multiple days, and test-support, for unit test utility classes.
```
- implementation
  - src
  - common
  - test-support
- solutions
```
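One hypothetical way to wire this up is a Gradle composite build; this is only a sketch, and the repository's actual build configuration may differ.

```kotlin
// solutions/settings.gradle.kts (hypothetical sketch, not the actual file):
// the solutions root project pulls in the sibling implementation build, which
// itself declares the common and test-support sub-projects.
rootProject.name = "solutions"

includeBuild("../implementation")
```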
This year I decided to take a different approach and write my own custom solution and benchmarking infrastructure.
I started by writing a simple Solution interface that accepts two formal generic-type parameters for the return types of part1() and part2() respectively. I originally set the default implementation to throw an exception, but later changed this to return null once I implemented the benchmarker. I then wrote a SolutionRunner that accepts a vararg array of Solutions and runs both parts of all the provided solutions. Finally, to make it runnable, I created Solutions.kt, which simply contains a main() function and invokes SolutionRunner.run().
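To make that concrete, here is a minimal sketch of what such an interface and runner could look like; the shapes below are assumptions for this write-up, not the repository's exact code.

```kotlin
// A minimal sketch, assuming this shape for the interface and runner; the
// actual repository code may differ. The two generic parameters are the
// return types of part1() and part2(), and the default implementations
// return null so unfinished parts can still pass through the benchmarker.
interface Solution<P1, P2> {
    fun part1(): P1? = null
    fun part2(): P2? = null
}

object SolutionRunner {
    fun run(vararg solutions: Solution<*, *>) {
        solutions.forEach { solution ->
            println("${solution::class.simpleName} Part 1: ${solution.part1()}")
            println("${solution::class.simpleName} Part 2: ${solution.part2()}")
        }
    }
}

// Solutions.kt would then just contain a main() that invokes the runner, e.g.
// fun main() = SolutionRunner.run(/* ...day instances... */)
```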
When I reached the days whose problems were orientated around runtime complexity, I integrated Kotlin's measureNanoTime into my SolutionRunner and wrapped the part1() and part2() invocations in it. To store and display my results, I integrated Jackson Data Format XML to easily serialise and de-serialise the results, and then created some reporting classes to format them and push them through my AdventLogger. I also exposed a JVM argument that can be injected to change the verbosity of the report. The flag is report and can have the values verbose or compact, e.g. -Dreport=verbose.
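A minimal sketch of that wrapping is shown below; the benchmark helper, its output format and the compact default are illustrative assumptions, not the repository's actual code.

```kotlin
import kotlin.system.measureNanoTime

// A minimal sketch, not the repository's actual code: wrap a part invocation
// in measureNanoTime and report the answer alongside its runtime. The
// 'report' system property mirrors the -Dreport flag described above.
fun <T> benchmark(description: String, part: () -> T): T? {
    var answer: T? = null
    val runtimeNanos = measureNanoTime { answer = part() }
    val verbosity = System.getProperty("report", "compact") // verbose or compact
    if (verbosity == "verbose") {
        println("$description: $answer (${runtimeNanos}ns)")
    } else {
        println("$description: $answer (${runtimeNanos / 1_000_000}ms)")
    }
    return answer
}

fun main() {
    benchmark("Example Part 1") { (1..1_000).sum() } // placeholder workload
}
```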
I used the Detekt Gradle plugin to perform static code analysis on the codebase. It produces a report containing all the code smells it found based on the configured rules. The custom configuration is defined in detekt-config.yml and can be found in the implementation resources.
The JUnit 5 Jupiter API exposes some really nice functionality in the form of annotations and offers callback interfaces for each of the life-cycle events. Utilising @ParameterizedTest can significantly reduce the quantity of test methods in your unit test suites, while @Nested allows you to organise your tests in nested classes.
AssertK is an assertion library inspired by AssertJ but for Kotlin. I'm familiar with AssertJ and prefer the naming conventions and variety of assertions over the default JUnit offering.
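As a hedged illustration of how these pieces fit together (the test class, test names and seatId function below are hypothetical examples for this post, not code from the repository):

```kotlin
import assertk.assertThat
import assertk.assertions.isEqualTo
import org.junit.jupiter.api.Nested
import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.CsvSource

// A minimal sketch combining @Nested, @ParameterizedTest and AssertK.
class BinaryBoardingTest {

    @Nested
    inner class SeatParsing {
        @ParameterizedTest
        @CsvSource("FBFBBFFRLR,357", "BFFFBBFRRR,567")
        fun `calculates the seat ID from a boarding pass`(pass: String, expected: Int) {
            assertThat(seatId(pass)).isEqualTo(expected)
        }
    }

    // Illustrative function under test: F/L are 0-bits, B/R are 1-bits.
    private fun seatId(pass: String): Int =
        pass.fold(0) { acc, c -> (acc shl 1) or (if (c == 'B' || c == 'R') 1 else 0) }
}
```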
I'm a huge advocate of testing in general, but particularly test-driven development. Some days were easy to test-drive as they provided lots of examples that generally covered all branches of the algorithm and sometimes even edge cases. Given I was taking a proper object-orientated approach, and not just hacking it into a single function, I still had to write more tests than just the example inputs, but they were a nice baseline to run against while writing a solution.
Day 12 - Rain Risk had us navigating a ferry across the ocean based on instructions that translated and rotated the position of the ship and its waypoint. The ship could move in any of the cardinal directions for a given number of steps, rotate clockwise or anti-clockwise by the given number of degrees, or move forward in the direction it is currently facing for the given number of steps.
Part 1 was easy. Simply translating a coordinate while maintaining the current direction the ship is facing.
Part 2 introduced the ship's waypoint, which moves relative to the ship, and whenever it moves, the ship follows. Now the directional translation and rotation commands affect the waypoint relative to the ship. This logic introduced some interesting math functionality that I implemented into my libs JAR for the Point2D class. I created a function that rotates the current point around another, meaning I could invoke ship.rotateAbout(waypoint).
This was the kind of puzzle that reduced me to a piece of paper, drawing graphs and rotating points about the origin to figure out how their ordinates translated.
Here, xs, the starting point, rotates 90° clockwise and becomes xt, the translated point.

```
          |
          |      xs (180, 42)
          |       |
----------|-------|---------
          |       v
          |      xt (174, 28)
          |
```
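For illustration, here is a minimal sketch of such a rotation in standard cartesian coordinates; the Point2D shape and rotateAbout signature are assumptions for this post, not the actual libs JAR API.

```kotlin
// A minimal sketch, assuming standard cartesian coordinates and a 90° clockwise
// rotation; the real Point2D class in the libs JAR will differ.
data class Point2D(val x: Int, val y: Int) {
    // Rotating (x, y) by 90° clockwise about (cx, cy) gives
    // (cx + (y - cy), cy - (x - cx)).
    fun rotateAbout(centre: Point2D): Point2D =
        Point2D(centre.x + (y - centre.y), centre.y - (x - centre.x))
}

fun main() {
    val waypoint = Point2D(10, 4)
    val origin = Point2D(0, 0)
    println(waypoint.rotateAbout(origin)) // Point2D(x=4, y=-10)
}
```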
Day 15 - Rambunctious Recitation was an interesting day about an Elven memory game which was a facade for a mathematical sequence called the 'Van Eck' sequence. The sequence starts with 0 and then each term considers the last. If that was the first occurrence of its value, the next term is 0. If not, the next term is equal to the number of terms ago that the value previously occurred. For example, the first 30 terms are:
0, 0, 1, 0, 2, 0, 2, 2, 1, 6, 0, 5, 0, 2, 6, 5, 4, 0, 5, 3, 0, 3, 2, 9, 0, 4, 9, 3, 6, 14
There's a great video by Numberphile that explains the sequence. The most interesting (and contextually relevant) attribute of the sequence is that it is not cyclical. This means that there is no fancy maths solution; our algorithm must be exhaustive. This day was a test of data structures. Part 1 was to simply find the 2020th number spoken in the game, no big deal. Part 2, however, was to find the 30,000,000th number.
My initial implementation was slow: I got the solution in about 30s. It involved several maps tracking the turn each number was last spoken on and the number of times it had been spoken, plus a LinkedList of all the numbers. I then refactored one of the maps' values to an IntArray as opposed to a List, which improved the solution runtime to about 20s.
On the next pass, I realised we didn't need to store as much data as we were currently doing. I refactored to use only a single map of the term value against the turn it was last spoken on, improving the runtime to about 5s.
Finally, I refactored the lone map into a primitive IntArray where the index was the term value, and the value was the last turn it was spoken on. Less overhead and contiguous memory allocation improved the performance to ~600ms.
And that was about as far as I could go. Due to the exhaustive nature of the algorithm, we have to perform 30 million iterations one way or another. I think at this point it's limited by the language it's written in and maybe even the hardware.
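For reference, a minimal sketch of that final IntArray approach follows; it is a generic reimplementation of the technique described above rather than the repository's actual code.

```kotlin
// A minimal sketch of the IntArray technique: lastSpoken[n] holds the 1-based
// turn on which n was last spoken, or 0 if it has never been spoken before.
fun memoryGame(startingNumbers: List<Int>, targetTurn: Int): Int {
    val lastSpoken = IntArray(targetTurn)
    startingNumbers.dropLast(1).forEachIndexed { index, n -> lastSpoken[n] = index + 1 }

    var current = startingNumbers.last()
    for (turn in startingNumbers.size until targetTurn) {
        val previousTurn = lastSpoken[current]
        lastSpoken[current] = turn
        current = if (previousTurn == 0) 0 else turn - previousTurn
    }
    return current
}

fun main() {
    // Example starting numbers from the puzzle description.
    println(memoryGame(listOf(0, 3, 6), 2020))       // 436
    println(memoryGame(listOf(0, 3, 6), 30_000_000)) // 175594
}
```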
Day 20 - Jurassic Jigsaw gave us a bunch of tiles which were part of a greater image that could be pieced together like a jigsaw. The problem was that the pieces were in a random orientation, which included both rotation and flipping in both axes. So although the solution is a simple one, it was finicky to implement. Once the image was assembled, we then had to search it for sea monsters and work out the roughness of the water. Each tile had a unique ID associated with it and consisted of . and # characters. For example:
```
Tile 2311:
..##.#..#.
##..#.....
#...##..#.
####.#...#
##.##.###.
##...#.###
.#.#.#..##
..#....#..
###...#.#.
..###..###
```
One of the rules is that the edges of tiles that join together must match, meaning that there will only be one orientation of one tile that matches any given edge. Due to the two-colour nature of the tiles, I converted each edge to binary, where # is a 1 and . is a 0. I then generated all 8 possible orientations of each tile, chose a random corner candidate and started joining them together in a cartesian grid. Once I had one possible orientation of the built image, I then generated the 8 orientations of it. Now I could start searching for sea monsters, which look like the image below.
```
                  #
#    ##    ##    ###
 #  #  #  #  #  #
```
Now that I had the 8 orientations of the assembled image, I searched all of them for sea monsters and updated the backing cartesian grids to mark the points that were part of the monsters. The final step was to count the number of waves (#) to determine the water's roughness. Although this day wasn't the most mathematically challenging, I chose it as the hardest day because of the complexity of the parsing and algorithms required.
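As a small illustration of the edge-to-binary idea (the function name here is hypothetical, not the repository's API):

```kotlin
// A minimal sketch: treat '#' as a 1-bit and '.' as a 0-bit so that a tile
// edge becomes an integer that can be compared cheaply against other edges.
// Note that a flipped tile reverses its edges, which is why all 8 tile
// orientations still need to be generated, as described above.
fun edgeToBinary(edge: String): Int =
    edge.fold(0) { acc, char -> (acc shl 1) or (if (char == '#') 1 else 0) }

fun main() {
    println(edgeToBinary("..##.#..#.")) // 0b0011010010 = 210
}
```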
Day | Part 1 Answer | Avg Time | Part 2 Answer | Avg Time | Documentation |
---|---|---|---|---|---|
01 | 802011 | 12ms | 248607374 | 58ms | Report Repair |
02 | 660 | 21ms | 530 | 7ms | Password Philosophy |
03 | 169 | 3ms | 7560370818 | 2ms | Toboggan Trajectory |
04 | 192 | 25ms | 101 | 8ms | Passport Processing |
05 | 904 | 8ms | 669 | 4ms | Binary Boarding |
06 | 6911 | 9ms | 3473 | 24ms | Custom Customs |
07 | 300 | 41ms | 8030 | 14ms | Handy Haversacks |
08 | 1832 | 9ms | 662 | 24ms | Handheld Halting |
09 | 20874512 | 149ms | 3012420 | 60ms | Encoding Error |
10 | 2176 | 1ms | 18512297918464 | 1ms | Adapter Array |
11 | 2427 | 2s 869ms | 2199 | 2s 504ms | Seating System |
12 | 319 | 2ms | 50157 | 605μs | Rain Risk |
13 | 102 | 654μs | 327300950120029 | 71μs | Shuttle Search |
14 | 9967721333886 | 6ms | 4355897790573 | 239ms | Docking Data |
15 | 1259 | 1ms | 689 | 687ms | Rambunctious Recitation |
16 | 28882 | 3ms | 1429779530273 | 11ms | Ticket Translation |
17 | 269 | 146ms | 848 | 2s 238ms | Conway Cubes |
18 | 11297104473091 | 19ms | 185348874183674 | 15ms | Operation Order |
19 | 171 | 47ms | 369 | 133ms | Monster Messages |
20 | 20899048083289 | 79ms | 2476 | 930ms | Jurassic Jigsaw |
21 | 2315 | 26ms | cf,h,t,b,l,cb,cm,d | 2ms | Allergen Assessment |
22 | 306 | 6ms | 291 | 1s 299ms | Crab Combat |
23 | 58427369 | 955μs | 111057672960 | 815ms | Crab Cups |
24 | 326 | 431μs | 3979 | 2s 714ms | Lobby Layout |
25 | 19774660 | 104ms | 49 Stars | 15μs | Combo Breaker |
Average Execution Time: 646ms
Total Execution Time: 15s 441ms
i7 5820K - OpenJDK 14.0.2 - Ubuntu 20.04