Part 1

We are given a “report” consisting of a list of fixed-length bitstrings and are asked to compute the “gamma rate” and “epsilon rate” as follows. The first of these numbers is found by concatenating the most common bits in each position over all the bitstrings; the second, by concatenating the least common bits.

This is an interesting one. The first approach that comes to mind is simple counting. The simplest case is for determining the most common bit of “single column” input—i.e. a list of bits:

mcb : [ℕ] → ℕ
mcb bs = let { ⟨k, m⟩ = count_bits bs } in
           if k ≤ m then 1 else 0

count_bits : [ℕ] → (ℕ × ℕ)
count_bits = foldl (λ ⟨k, m⟩ b → if b == 0
                                    then ⟨k + 1, m⟩
                                    else ⟨k, m + 1⟩)
                   ⟨0, 0⟩
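
To make this concrete, here is a minimal Haskell rendering of the two functions (the camel-case names are mine; ties go to 1):

-- Count the zeros and ones in a list of bits.
countBits :: [Int] -> (Int, Int)
countBits = foldl (\(k, m) b -> if b == 0 then (k + 1, m) else (k, m + 1)) (0, 0)

-- Most common bit, ties broken in favor of 1.
mcb :: [Int] -> Int
mcb bs = let (k, m) = countBits bs in if k <= m then 1 else 0

For instance, mcb [0,1,1,0,1] evaluates to 1.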

It’s a run-of-the-mill induction to show that mcb gives us the most common bit for a list of bits. Now we want a version which can handle k-bitstrings. We specify this in terms of a variant of count_bits:

-- Specialized to position i
bpᵢ : [[ℕ]] → (ℕ × ℕ)
bpᵢ = foldr (λ bs ⟨m, n⟩ →
                if (bs !! i) == 0
                   then ⟨m + 1, n⟩
                   else ⟨m, n + 1⟩)
             ⟨0, 0⟩

cbs : [[ℕ]] → (ℕ × ⋯ × ℕ)
cbs bss = let { ⟨p₁, p₂, …, pₖ⟩ = ⟨bp₁ bss, …, bpₖ bss⟩
                com ⟨m, n⟩      = if m ≤ n then 1 else 0 }
            in ⟨com p₁, com p₂, …, com pₖ⟩
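
As a quick list-based sketch of this specification in Haskell (the names and the explicit word-length parameter are my own):

-- One traversal of the input per bit position, exactly as specified.
cbsSpec :: Int -> [[Int]] -> [Int]
cbsSpec k bss = [ com (countAt i) | i <- [0 .. k - 1] ]
  where
    countAt i = foldr (\bs (m, n) -> if bs !! i == 0
                                        then (m + 1, n)
                                        else (m, n + 1))
                      (0, 0) bss
    com (m, n) = if m <= n then 1 else 0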

The heart of cbs is the split, ⟨bp₁, …, bpₖ⟩, of type [[ℕ]] → ((ℕ × ℕ) × ⋯ × (ℕ × ℕ)). This applies bp at each position to compute the number of occurrences of each bit. This is a specification; as an algorithm, it traverses the input k times. Converting this to a single traversal is an application of the k-ary banana split theorem (see Fokkinga 1991 or many books on functional programming from the Richard Bird/Phil Wadler sphere of influence), which states that

⟨⦇h₁⦈, …, ⦇hₖ⦈⟩ = ⦇⟨h₁ ∘ F π₁, …, hₖ ∘ F πₖ⟩⦈

where the πs are the projections for a k-ary product. The type functor in question (underlying [[ℕ]]) is given on objects and arrows by:

F A = 1 + ([ℕ] × A)
F f = id + (id × f)
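
For intuition, the familiar two-component instance of the theorem in Haskell is the fusion of sum and length into a single traversal (an illustration only, not part of the solution):

-- Specification: a split of two list catamorphisms, hence two traversals.
sumAndLength :: [Int] -> (Int, Int)
sumAndLength xs = (sum xs, length xs)

-- After banana split: one traversal computing both components.
sumAndLength' :: [Int] -> (Int, Int)
sumAndLength' = foldr (\x (s, n) -> (x + s, n + 1)) (0, 0)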

Regarding each bpᵢ as a catamorphism with “gene” hᵢ, we have:

  ⟨bp₁, …, bpₖ⟩
= ⟨⦇h₁⦈, …, ⦇hₖ⦈⟩
= ⦇⟨h₁ ∘ F π₁, …, hₖ ∘ F πₖ⟩⦈
= ⦇⟨h₁ ∘ (id + (id × π₁)), …, hₖ ∘ (id + (id × πₖ))⟩⦈

We don’t have to write out the gene hᵢ in point-free form to know that it is of the form [const ⟨0, 0⟩, fᵢ] (the bpᵢ differ only in their “inductive case combinator”). Absorbing all of the coproducts in the expression above, using the absorption law [g, h] ∘ (p + q) = [g ∘ p, h ∘ q], thus gives:

= ⦇⟨ [const ⟨0, 0⟩, f₁ ∘ (id × π₁)], …,
     [const ⟨0, 0⟩, fₖ ∘ (id × πₖ)] ⟩⦈

The usual tactic when applying banana split is now to use the exchange law to convert the “split” gene to an “either” gene, thus putting the catamorphism into a form that follows the type’s underlying bifunctor. That’s indeed the next step; we obtain:

= ⦇[⟨const ⟨0, 0⟩, …, const ⟨0, 0⟩⟩,
    ⟨f₁ ∘ (id × π₁), …, fₖ ∘ (id × πₖ)⟩]⦈

This catamorphism has type [[ℕ]] → ((ℕ × ℕ) × ⋯ × (ℕ × ℕ)). Each fᵢ has type ([ℕ] × ((ℕ × ℕ) × ⋯ × (ℕ × ℕ))) → (ℕ × ℕ), and can be given in point-wise terms as follows:

fᵢ (bs, v) = let { ⟨m, n⟩ = πᵢ v } in
               if (bs !! i) == 0
                  then ⟨m + 1, n⟩
                  else ⟨m, n + 1⟩

Expressing the whole thing as a point-wise fold is now easy. We use a variant of fᵢ called count_pos which abstracts over the index:

bp_all : [[ℕ]] → ((ℕ × ℕ) × ⋯ × (ℕ × ℕ))
bp_all = foldr ⟨count_pos 0, …, count_pos (k - 1)⟩
               ⟨⟨0, 0⟩, …, ⟨0, 0⟩⟩

count_pos : ℕ → [ℕ] → ((ℕ × ℕ) × ⋯ × (ℕ × ℕ)) → (ℕ × ℕ)
count_pos i bs v = let { ⟨m, n⟩ = v !! i } in
                     if (bs !! i) == 0
                        then ⟨m + 1, n⟩
                        else ⟨m, n + 1⟩

In most programming languages, we’ll have to use some kind of vector (rather than kosher tuple) type to implement count_pos. We’ll do that here to present the entire program for the case of 5-bit words (with count_pos renamed add_pos), lazily assuming we can treat k-vectors and k-tuples as the same type:

bp_all : [[ℕ]] → Vector 5 (ℕ × ℕ)
bp_all = foldr ⟨add_pos 0, add_pos 1, add_pos 2, add_pos 3,
                add_pos 4⟩
               (make_vector 5 ⟨0, 0⟩)

add_pos : ℕ → [ℕ] → Vector 5 (ℕ × ℕ) → (ℕ × ℕ)
add_pos i bs v = let { ⟨m, n⟩ = v !! i } in
                   if (bs !! i) == 0
                      then ⟨m + 1, n⟩
                      else ⟨m, n + 1⟩

cbs : [[ℕ]] → Vector 5 ℕ
cbs bss = let { ⟨p₀, p₁, p₂, p₃, p₄⟩ = bp_all bss
                com ⟨m, n⟩      = if m ≤ n then 1 else 0 }
            in ⟨com p₀, com p₁, com p₂, com p₃, com p₄⟩

The repetitive indexed names can be eliminated by replacing the split of add_pos i by a vector traversal. The specification is just

add_all_pos : [ℕ] → Vector k (ℕ × ℕ) → Vector k (ℕ × ℕ)
add_all_pos bs v = ⟨add_pos 0 bs v, …, add_pos (k - 1) bs v⟩

For dealing with the realities of an array type, we may want to adapt this so that the per-position function takes a single counter pair (together with the bit at that position) instead of indexing into the entire array; see the sketch below.
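
With plain Haskell lists standing in for the vector type, that adaptation might look like this (addBit, addAllPos and the explicit word-length parameter are my own names):

-- Update a single counter pair with a single bit.
addBit :: Int -> (Int, Int) -> (Int, Int)
addBit 0 (m, n) = (m + 1, n)
addBit _ (m, n) = (m, n + 1)

-- The per-word update is then a positionwise zip, with no explicit indexing.
addAllPos :: [Int] -> [(Int, Int)] -> [(Int, Int)]
addAllPos = zipWith addBit

-- Counts for every position, for words of length k.
bpAll :: Int -> [[Int]] -> [(Int, Int)]
bpAll k = foldr addAllPos (replicate k (0, 0))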

In addition to this simplification, we’d also want to replace foldr with a strict left fold; we can make this replacement thanks to the second fold duality theorem (right, er, left as an exercise).
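
In terms of the list-based sketch above, the strict version would be along these lines (the per-word updates commute, so the order in which words are consumed doesn’t matter):

import Data.List (foldl')

-- Strict left-fold variant of bpAll.  Note that foldl' forces only the
-- accumulator's spine; in practice the counter pairs themselves may also
-- need forcing.
bpAll' :: Int -> [[Int]] -> [(Int, Int)]
bpAll' k = foldl' (flip addAllPos) (replicate k (0, 0))
  where
    addAllPos = zipWith addBit
    addBit 0 (m, n) = (m + 1, n)
    addBit _ (m, n) = (m, n + 1)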

Executable Haskell implementation

Short Haskellized test puzzle input

Part 2

(Note: In this part, bitstrings are referred to as words, and Word is used as a type synonym for [ℕ].)

To search for the solution values of this part, we have to progressively refine our collection of words according to certain criteria, evaluated at successive positions in the words. Each of the two criteria requires us to find the most common bit in the current position among the remaining words:

To find oxygen generator rating, determine the most common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 1 in the position being considered.

To find CO2 scrubber rating, determine the least common value (0 or 1) in the current bit position, and keep only numbers with that bit in that position. If 0 and 1 are equally common, keep values with a 0 in the position being considered.

One way to tackle this is to see it in terms of function iteration, in which the wordlist is recursively refined until we have a singleton list. However, this involves lots of list index bookkeeping. A simpler approach is to think of these functions as building the rating value up a bit at a time.

Each time we determine the most common bit for a position, we learn which bit will be in that position in the final rating. Since we proceed from left to right, we need only consider the first bit of each remaining word at each step:

Index 0:
       ↓
Words: 0 0 1 0 0
       1 1 1 1 0
       1 0 1 1 0
       ...

Most common bit: 1

--------

Index 1:
       ↓
Words: 1 1 1 0
       0 1 1 0
       ...

Most common bit: 0

--------

Index 2:
       ↓
Words: 1 1 0
       ...

Most common bit: 1

And so on. Here’s the specification:

oxygen_rating : [Word] → Word
oxygen_rating ws =
  if null (head ws)
     then []
     else
       let { b = com 0 ws } in
         b ∷ oxygen_rating (reduce b ws)

reduce : ℕ → [Word] → [Word]
reduce b = fmap tail ∘ filter (λ(c ∷ _) → b == c)

We use null ∘ head as our termination condition, since the words are all of the same length. The reduce function filters for words with the correct bit, then takes the tail of each of them.

com i ws is a function that gives the most common bit in position i in the list ws, breaking ties in favor of 1.
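
For concreteness, com could be rendered in Haskell along these lines (a sketch; any definition with the same tie-breaking behavior will do):

-- Most common bit at position i across the words, ties broken in favor of 1.
com :: Int -> [[Int]] -> Int
com i ws
  | zeros <= ones = 1
  | otherwise     = 0
  where
    column = map (!! i) ws
    zeros  = length (filter (== 0) column)
    ones   = length column - zeros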

Clearly oxygen_rating is a list unfold (anamorphism). We use Gibbons’s version of the unfold universal property:

h = unfold p f g
  ⇔
h b = if p b then [] else (f b) ∷ (h (g b))
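
In Haskell, this three-argument unfold is a direct transcription of the property:

-- p detects the end of the seed, f produces an element, g the next seed.
unfold :: (b -> Bool) -> (b -> a) -> (b -> b) -> b -> [a]
unfold p f g b
  | p b       = []
  | otherwise = f b : unfold p f g (g b)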

Applying this to oxygen_rating gives

p = null ∘ head
f = com 0
g = λws → reduce (com 0 ws) ws

Computing com 0 ws twice is wasteful; this is an excellent use for “single-function unfold” (called unfoldr in Haskell).

oxygen_rating = unfoldr gen_and_filter

gen_and_filter : [Word] → Maybe (ℕ × [Word])
gen_and_filter ([] ∷ _) = Nothing
gen_and_filter ws       = let { b = com 0 ws } in
                            Just ⟨b, reduce b ws⟩

This works for computing the oxygen rating, but we get an interesting bug when trying to adapt it to find the scrubber rating:

scrubber_rating = unfoldr gen_and_filter′

gen_and_filter′ : [Word] → Maybe (ℕ × [Word])
gen_and_filter′ ([] ∷ _) = Nothing
gen_and_filter′ ws       = let { b = bit_flip (com 0 ws) } in
                            Just ⟨b, reduce b ws⟩

This goes off the rails and produces an infinite list. Why? We end up filtering out the last word in the list, because now we’re keeping only those (sub)words that begin with the least common bit; once the word list is empty, the termination pattern ([] ∷ _) never matches, so the unfold never stops.

reduce 0 [[1,0]] ⇒ []

This wasn’t a problem with oxygen_rating, since filtering for the words with the most common bit in head position will leave a singleton list unchanged:

reduce 1 [[1,0]] ⇒ [[1,0]]

This makes me think that the recursive refinement approach might be preferable, after all—we need that final word! But we can fix scrubber_rating by generalizing to an apomorphism, which allows us to tack a “tail” onto the unfolded list. Here’s the apo function (based on Gibbons 2003):

apo : (β → Maybe (α × (β + [α]))) → β → [α]
apo f b = case f b of
            Nothing            → []
            Just (x, Left v)   → x ∷ (apo f v)
            Just (x, Right xs) → x ∷ xs

We then have:

scrubber_rating = apo gen_and_filter″

gen_and_filter″ : [Word] → Maybe (ℕ × ([Word] + Word))
gen_and_filter″ ([] ∷ _)        = Nothing
gen_and_filter″ ((b ∷ bs) ∷ []) = Just (b, Right bs)
gen_and_filter″ ws              =
  let { b = bit_flip (com 0 ws) } in
    Just ⟨b, Left (reduce b ws)⟩

All that’s left is to convert the words to decimal numbers and to multiply them.
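
The conversion is the usual left fold over the bits, most significant bit first; a quick sketch:

-- Interpret a bit list, most significant bit first, as a number.
-- For example, toDecimal [1,0,1,1,0] == 22.
toDecimal :: [Int] -> Int
toDecimal = foldl (\n b -> 2 * n + b) 0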

Executable Haskell implementation
