Commit: Improved docs

derailed-dash committed Dec 18, 2023
1 parent 5171444 commit 0d9a5c1
Showing 1 changed file with 128 additions and 56 deletions.
184 changes: 128 additions & 56 deletions src/AoC_2023/Dazbo's_Advent_of_Code_2023.ipynb
@@ -84,7 +84,8 @@
"import ast\n",
"import unittest\n",
"import requests\n",
"import imageio\n",
"import imageio.v2 as imageio\n",
"import imageio.v3 as iio\n",
"import math\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib.animation import FuncAnimation\n",
@@ -3447,9 +3448,14 @@
"\n",
"- Finally, we can return the furthest distance achieved.\n",
"\n",
"Not too bad.\n",
"\n",
"#### Visualisation\n",
"Not too bad."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Visualisation\n",
"\n",
"For a bit of fun, I decided to plot a heatmap that shows the journey from start to furthest, using [matplotlib](https://aoc.just2good.co.uk/python/matplotlib). I do this in my `plot_grid()` function. It works by:\n",
"\n",
@@ -3697,9 +3703,14 @@
"\n",
"For each region, I can now arbitrarily pick any point in that region and see if that point is enclosed by the main loop. In the end, I cheated a little, and made use of `matplotlib` `contains_points()` to determine which regions are contained by the loop. Where any region has any point that is contained by the loop, we can conclude that all the points that region are contained by the loop. So we can add all these points to the list of included points.\n",
"\n",
"And that's it!\n",
"\n",
"#### Visualisation\n",
"And that's it!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Visualisation\n",
"\n",
"I decied to plot the loops and _internal_ tiles. It's an interesting example of superimposing a scatter graph on a line graph.\n",
"\n",
@@ -3719,7 +3730,7 @@
"\n",
"![Finding loops, real data](https://aoc.just2good.co.uk/assets/images/contained_points_real.png)\n",
"\n",
"#### Final Remarks and Useful Resources\n",
"### Final Remarks and Useful Resources\n",
"\n",
"There are a couple of other ways to solve this problem. \n",
"\n",
@@ -5516,6 +5527,15 @@
"That's all there is to it!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Visualisation\n",
"\n",
"I figured now would be a good time for an animation! I've created an animated gif by creating many frames [Matplotlib plots](https://aoc.just2good.co.uk/python/matplotlib)."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -5680,6 +5700,19 @@
" \n",
" dir_sets = [set() for _ in range(4)]\n",
" \n",
" # Plot the path\n",
" for point, dirn in self.path_taken:\n",
" for dir_set, arrow in zip(dir_sets, (\"^\", \">\", \"v\", \"<\")):\n",
" if LightGrid.VECTORS_TO_ARROWS[dirn] == arrow:\n",
" dir_set.add(point)\n",
" continue\n",
"\n",
" for dir_set, arrow in zip(dir_sets, (\"^\", \">\", \"v\", \"<\")):\n",
" if dir_set:\n",
" dir_set_x, dir_set_y = zip(*((point.x, point.y) for point in dir_set))\n",
" axes.scatter(dir_set_x, dir_set_y, marker=arrow, s=mkr_size*0.5, color=\"white\") \n",
"\n",
" # Plot the infra\n",
" vert_splitters, horz_splitters, forw_mirrors, back_mirrors = set(), set(), set(), set()\n",
" infra_mappings = {\n",
" '|': vert_splitters, \n",
@@ -5692,25 +5725,12 @@
" for char_num, char in enumerate(row):\n",
" point = Point(char_num, row_num)\n",
" if char in infra_mappings:\n",
" infra_mappings[char].add(point)\n",
" \n",
" for point, dirn in self.path_taken:\n",
" value = self.value_at_point(point)\n",
" # if value not in infra_mappings:\n",
" for dir_set, arrow in zip(dir_sets, (\"^\", \">\", \"v\", \"<\")):\n",
" if LightGrid.VECTORS_TO_ARROWS[dirn] == arrow:\n",
" dir_set.add(point)\n",
" continue\n",
"\n",
" for dir_set, arrow in zip(dir_sets, (\"^\", \">\", \"v\", \"<\")):\n",
" if dir_set:\n",
" dir_set_x, dir_set_y = zip(*((point.x, point.y) for point in dir_set))\n",
" axes.scatter(dir_set_x, dir_set_y, marker=arrow, s=mkr_size*0.5, color=\"white\") \n",
"\n",
" infra_mappings[char].add(point) \n",
" \n",
" for infra_type, marker in [(vert_splitters, r'$\\vert$'), \n",
" (horz_splitters, r'$-$'), \n",
" (forw_mirrors, r'$\\slash$'), \n",
" (back_mirrors, r'$\\backslash$')]:\n",
" (horz_splitters, r'$-$'), \n",
" (forw_mirrors, r'$\\slash$'), \n",
" (back_mirrors, r'$\\backslash$')]:\n",
" \n",
" if infra_type: # check not empty\n",
" x, y = zip(*((point.x, point.y) for point in infra_type))\n",
@@ -5731,7 +5751,7 @@
"metadata": {},
"outputs": [],
"source": [
"def solve(data, animate=False, out_name=\"lava_floor.gif\"):\n",
"def solve_part1(data, animate=False, out_name=\"lava_floor.gif\"):\n",
" if animate:\n",
" output_file = Path(locations.output_dir, out_name)\n",
" animator = Animator(file=output_file, duration=75)\n",
@@ -6242,9 +6262,18 @@
" - With each point we find, we add it to a set called `region`.\n",
" - If our BFS touches the boundary of the grid, then we mark `region` as NOT `is_internal`. We then add the entire region to `exterior`, and continue to the next candidate.\n",
" - If our BFS never touches the boundary, then the `region` remains `is_internal`, and we add all the points from this region to `interior`.\n",
" - Finally, we return `interior`.\n",
"- Now I plot the `perimeter` and the `interior` points visually.\n",
"- And finally, I add up the counts of points from the `perimeter_set` to the `interior` set, and return this as the answer."
" - I return `interior`.\n",
" \n",
" Finally, I add up the counts of points from the `perimeter_set` to the `interior` set, and return this as the answer."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Visualisation\n",
"\n",
"I'm plot the `perimeter` and the `interior` points using [matplotlib](https://aoc.just2good.co.uk/python/matplotlib)."
]
},
{
Expand Down Expand Up @@ -6364,19 +6393,7 @@
" plt.ylabel('Y-axis')\n",
" plt.gca().invert_yaxis() # Invert the y-axis\n",
" plt.grid(True)\n",
" plt.show()\n",
" \n",
"def parse_plan_hex(data) -> list[tuple]:\n",
" dirs = { 0: \"R\", 1: \"D\", 2: \"L\", 3: \"U\" }\n",
" \n",
" plan = []\n",
" for line in data:\n",
" instr = line[-7:-1]\n",
" dirn = dirs[int(instr[-1])]\n",
" path_len = int(instr[0:-1], base=16)\n",
" plan.append((dirn, int(path_len)))\n",
" \n",
" return plan"
" plt.show()"
]
},
{
Expand All @@ -6385,20 +6402,16 @@
"metadata": {},
"outputs": [],
"source": [
"def solve(data, part:int=1) -> int:\n",
" if part==1:\n",
" plan = parse_plan(data)\n",
" else:\n",
" plan = parse_plan_hex(data)\n",
" \n",
"def solve_part1(data) -> int:\n",
" plan = parse_plan(data)\n",
" perimeter_path, interior_candidates = process_plan(plan)\n",
" perimeter_set = set(perimeter_path)\n",
"\n",
" bounds = get_bounds(perimeter_path)\n",
" all_interior = flood_fill(perimeter_set, interior_candidates, bounds)\n",
" plot_path(perimeter_path, all_interior)\n",
" \n",
" return len(perimeter_set) + len(all_interior)\n"
" return len(perimeter_set) + len(all_interior)"
]
},
{
@@ -6426,11 +6439,11 @@
"sample_answers = [62]\n",
"\n",
"for curr_input, curr_ans in zip(sample_inputs, sample_answers):\n",
" validate(solve(curr_input.splitlines()), curr_ans) # test with sample data\n",
" validate(solve_part1(curr_input.splitlines()), curr_ans) # test with sample data\n",
"\n",
"logger.info(\"Tests passed!\")\n",
"\n",
"soln = solve(input_data)\n",
"soln = solve_part1(input_data)\n",
"logger.info(f\"Part 1 soln={soln}\")"
]
},
@@ -6449,11 +6462,70 @@
"\n",
"**My solution:**\n",
"\n",
"Even with the sample data, we're told that our lagoon will hold `952408144115` cubic metres of lava. So clearly we have far too many points to go storing them in sets. Our Part 1 solution isn't going to scale!\n",
"\n",
"We need to do something smarter. I'm thinking... 2D coordinate compression, to come up with a bunch of rectangles that ew can add together.\n",
"\n",
"_sigh_\n",
"\n",
"I described how this works back in [AoC 2021 Day 22](https://aoc.just2good.co.uk/2021/22):\n",
"\n",
"Coordinate compression is a technique where we take a large number of coordinates, and compress them down to fewer coordinates by eliminating all adjacent points where nothing interesting happens. As a really noddy example in one dimension:\n",
"\n",
"![1D coordinate compression](https://aoc.just2good.co.uk/assets/images/coord-compression.png)\n",
"\n",
"Here's a 2D example:\n",
"\n",
"![2D coordination compression](https://aoc.just2good.co.uk/assets/images/2d-reactor-compression.png)\n",
"\n",
"It's easy enough to parse the hex strings and convert to direction and length.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def parse_plan_hex(data) -> list[tuple]:\n",
" dirs = { 0: \"R\", 1: \"D\", 2: \"L\", 3: \"U\" }\n",
" \n",
" plan = []\n",
" for line in data:\n",
" instr = line[-7:-1]\n",
" dirn = dirs[int(instr[-1])]\n",
" path_len = int(instr[0:-1], base=16)\n",
" plan.append((dirn, int(path_len)))\n",
" \n",
" return plan\n",
"\n",
"But the regions created are huge. The solution we've written for Part 1 is never going to scale. We need to do something smarter. I'm thinking... 2D coordinate compression, to come up with a bunch of rectangles that ew can add together.\n",
"def process_turns(plan):\n",
" # initialise, from current\n",
" curr_x = 0\n",
" curr_y = 0\n",
" x_vals = [curr_x]\n",
" y_vals = [curr_y]\n",
"\n",
"_sigh_\n"
" for instr_num, (dirn_char, path_len) in enumerate(plan):\n",
" logger.debug(f\"[{instr_num}]: {(dirn_char, path_len)}\")\n",
" dirn = VectorDicts.DIRS[dirn_char]\n",
" curr_x += dirn[0]*path_len\n",
" curr_y += dirn[1]*path_len\n",
" x_vals.append(curr_x)\n",
" y_vals.append(curr_y) \n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def solve_part2(data) -> int:\n",
" plan = parse_plan_hex(data)\n",
" perimeter_path = process_turns(plan)"
]
},
{
@@ -6466,11 +6538,11 @@
"sample_answers = [952408144115]\n",
"\n",
"for curr_input, curr_ans in zip(sample_inputs, sample_answers):\n",
" validate(solve(curr_input.splitlines(), part=2), curr_ans) # test with sample data\n",
" validate(solve_part2(curr_input.splitlines()), curr_ans) # test with sample data\n",
"\n",
"logger.info(\"Tests passed!\")\n",
"\n",
"soln = solve(input_data, part=2)\n",
"soln = solve(solve_part2)\n",
"logger.info(f\"Part 2 soln={soln}\")"
]
},
