Overview
Fewer people showed up for this SRM than usual, possibly because it was between 12AM and 5AM in most of Europe. I haven't yet looked at the Div 2 problems, but the Div 1 medium and hard problems were easier than usual.
Whenever a ball is painted, it is completely repainted; therefore, its final colour is simply whatever colour is chosen by the last stage that paints it. Define \(s_i\) as the last stage which paints ball \(i\), or \(0\) if no stage ever paints ball \(i\). Then the answer is just \(2^N\), where \(N = |\{s_1, s_2, \ldots, s_M\} \setminus \{0\}|\) is the number of distinct stages that are the last to paint some ball.
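In code, computing \(N\) is straightforward. The sketch below assumes the stages are given as lists of 0-indexed painted ball indices -- a representation chosen purely for illustration, not the actual problem signature:

```cpp
// Sketch of computing N = |{s_1, ..., s_M} \ {0}| and the answer 2^N.
// The input representation (one vector of painted ball indices per stage)
// is an assumption made for this illustration.
#include <bits/stdc++.h>
using namespace std;

long long solve(int M, const vector<vector<int>>& stages) {
    vector<int> last(M, 0);                  // s_i: last stage painting ball i (1-indexed), 0 if never
    for (int st = 0; st < (int)stages.size(); ++st)
        for (int ball : stages[st]) last[ball] = st + 1;

    set<int> distinctLast(last.begin(), last.end());
    distinctLast.erase(0);                   // ignore balls that are never painted
    int N = distinctLast.size();
    return 1LL << N;                         // 2^N; assumes N is small enough not to overflow
}
```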
This is one of those problems where there is an obvious but inefficient way to compute the solution (simply enumerating all tuples and checking in \(O(N^5)\) time, where \(N\) is the length of the string); thus the challenge is simply to optimise the solution so that it runs in time. In this case \(N \leq 2500\), so an \(O(N^2)\) solution should work.
Since we are counting pairs of disjoint intervals, and all intervals can be enumerated in \(O(N^2)\), a natural approach is to break the problem into two parts. The first part counts the candidate first intervals in \(O(N^2)\), and the second part counts the candidate second intervals, also in \(O(N^2)\). As it turns out, this decomposition is sufficient to solve the problem.
For each nice \((a, b, c, d)\) tuple, either
- \(S[a\ldots b]\) is nice
- \(S[c\ldots d]\) is nice
- or finally, there is some sequence of \(m\) `G`s which spans both intervals (where \(m\) is minGreen).
Now suppose we want to iterate over all choices of \((c, d)\). (This is the second part of the decomposition.) If \(S[c\ldots d]\) is nice, then we're done. Otherwise, there must be some sequence of `G`s which starts at the end of \(S[a\ldots b]\) and ends at the beginning of \(S[c\ldots d]\). In particular, if \(S[c\ldots d]\) has a prefix of \(g\) `G`s, then \(S[a\ldots b]\) must have a suffix of at least \(m - g\) `G`s. (If \(S[c\ldots d]\) is nice, we adopt the convention that \(g = m\).)
This suggests that the first part of the decomposition is computing \(count(b, g)\), the number of intervals which end before position \(b\) and have a suffix of at least \(g\) `G`s (or are nice). Then our answer is just
\[\sum_{c, d} count(c, m - prefixLength(c, d))\]
where \(prefixLength(c, d)\) is the number of `G`s at the beginning of \(S[c\ldots d]\), or \(m\) if \(S[c\ldots d]\) is nice.
To compute \(count\), it suffices to count the intervals which end at exactly \(b\) and have a suffix of exactly \(g\) `G`s. Call this \(count'(b, g)\); then we can generate \(count\) from \(count'\) by just doing partial sums. To compute \(count'\), we can simply enumerate over all possible \((a, b)\) in \(O(N^2)\) time.
Note that there are some additional bookkeeping details (i.e. how to keep track of prefix and suffix lengths), but I'll leave those up to the reader.
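For concreteness, here is a minimal \(O(N^2)\) sketch of the whole approach in C++. It assumes that a tuple is nice exactly when the concatenation \(S[a\ldots b] + S[c\ldots d]\) contains a run of at least \(m\) consecutive `G`s (my reading of the problem), that \(m \geq 1\), and that indices are 0-based; the function and variable names are mine:

```cpp
// O(N^2) sketch of the interval-pair counting described above.
// Assumes a tuple (a, b, c, d) with a <= b < c <= d is nice exactly when
// S[a..b] + S[c..d] contains a run of at least m consecutive 'G's.
#include <bits/stdc++.h>
using namespace std;

long long countNiceTuples(const string& S, int m) {
    int n = S.size();
    // runEnd[i]: length of the maximal run of 'G's ending at i.
    // lastFit[i]: largest a such that S[a..i] contains a run of m 'G's, or -1.
    vector<int> runEnd(n, 0), lastFit(n, -1);
    for (int i = 0; i < n; ++i) {
        if (S[i] == 'G') runEnd[i] = (i > 0 ? runEnd[i - 1] : 0) + 1;
        lastFit[i] = (i > 0 ? lastFit[i - 1] : -1);
        if (runEnd[i] >= m) lastFit[i] = max(lastFit[i], i - m + 1);
    }

    // bucket[g] accumulates count'(b, g) over all b < c: intervals [a, b]
    // that are nice by themselves (g = m) or end with exactly g < m 'G's.
    vector<long long> bucket(m + 1, 0), atLeast(m + 1, 0);
    long long ans = 0;
    for (int c = 1; c < n; ++c) {
        // Fold in the intervals ending at b = c - 1, i.e. count'(c - 1, *).
        int b = c - 1;
        for (int a = 0; a <= b; ++a) {
            if (a <= lastFit[b]) bucket[m]++;                           // nice by itself
            else if (S[b] == 'G') bucket[min(runEnd[b], b - a + 1)]++;  // 'G' suffix of length < m
            else bucket[0]++;
        }
        // atLeast[g] = count(c, g): intervals ending before c that are nice
        // or have a suffix of at least g 'G's.
        atLeast[m] = bucket[m];
        for (int g = m - 1; g >= 0; --g) atLeast[g] = atLeast[g + 1] + bucket[g];

        // Enumerate second intervals [c, d], tracking their 'G' prefix length.
        int prefix = 0, run = 0;
        bool nice = false;
        for (int d = c; d < n; ++d) {
            if (S[d] == 'G') { ++run; if (run == d - c + 1) prefix = run; }
            else run = 0;
            if (run >= m) nice = true;
            int g = nice ? m : min(prefix, m);  // convention: g = m when [c, d] is nice
            ans += atLeast[m - g];
        }
    }
    return ans;
}
```

Under that reading, for example, `countNiceTuples("GRG", 2)` returns 1: the only valid pair is \(([0, 0], [2, 2])\), whose single-`G` suffix and single-`G` prefix glue into a run of length 2.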
As with many problems involving expected value, one should immediately jump to thinking about linearity of expectation. This means that we should try to decompose the expected value of the area as a sum of some other variables; one possible way is as follows:
\[A = \sum_{r\in R} I_r A_r\]
where \(R\) is the set of all subregions of the polygon, \(A_r\) is the area of subregion \(r\), and \(I_r\) is an indicator variable for whether \(r\) is present or not. Then the expected area is just
\[\mathbb{E}(A) = \mathbb{E}\left[\sum_{r\in R} I_r A_r\right] = \sum_{r\in R} P_r A_r\]
where \(P_r\) is the expected value of \(I_r\), which is the probability of \(r\) being present. Note that I haven't quite defined what a subregion is -- that turns out to be the hard part of this problem.
It's tempting to try to decompose the area into triangles whose vertices are stars in the constellation, and then sum up the expected area of the triangles. Unfortunately, this doesn't quite work. Consider the second example -- there are 4 separate triangles, but each triangle intersects 2 other triangles. As a result, we overcount the area when all the triangles are present. Thus, the expected value of the area is too large.
One way to see the solution is to have a good understanding of the polygon area algorithm, a.k.a. the shoelace formula. This formula computes the area of a polygon in a clever way: first, we take each pair of adjacent points \((i, j)\) on the convex hull and extend them to a triangle by including the origin, \(o\). We then sum up the signed areas of the triangles \((i, j, o)\) -- positive if \((i, j, o)\) are in counterclockwise order, negative if clockwise. This can be done elegantly with cross products -- the signed area of triangle \((i, j, o)\) is just \((x_i y_j - x_j y_i)/2\). The expected contribution of the triangle is then its signed area multiplied by the probability that edge \((i, j)\) lies on the convex hull of the visible stars.
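As a quick illustration of this decomposition (the point representation below is just for the example):

```cpp
// Shoelace formula: the polygon's area is the sum of signed areas of the
// triangles (i, j, o), taken over consecutive vertices i, j with o the origin.
#include <bits/stdc++.h>
using namespace std;

// Cross product x_a * y_b - x_b * y_a; twice the signed area of (a, b, origin).
double cross(const pair<double, double>& a, const pair<double, double>& b) {
    return a.first * b.second - a.second * b.first;
}

// Returns a positive area when the vertices are listed in counterclockwise
// order and a negative one when they are listed clockwise.
double shoelaceArea(const vector<pair<double, double>>& poly) {
    double area = 0;
    int n = poly.size();
    for (int i = 0; i < n; ++i) {
        int j = (i + 1) % n;
        area += cross(poly[i], poly[j]) / 2.0;  // signed area of triangle (i, j, o)
    }
    return area;
}
```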
\((i, j)\) is on the convex hull if and only if there is at least one visible star on the right side of \((i, j)\), and no visible stars on the left side. Again, we can use cross products to determine whether a point is on the left or right of \((i, j)\) -- if \((j - i) \times (k - i)\) is positive, then \(k\) is on the right; if it's negative, it's on the left side. Thus, the probability that \((i, j)\) is on the convex hull is just
\[p_{i,j} = p_i \, p_j \left[\prod_{k \in L} (1 - p_k)\right] \left[1 - \prod_{k \in R} (1 - p_k)\right]\]
where the leading factor \(p_i p_j\) accounts for \(i\) and \(j\) themselves being visible.
Lastly, there is one additional case -- if there is some point \(k\) such that \(j\) is between \(i\) and \(k\), then \((i, k, o)\) contains \((i, j, o)\). In this case we'll overcount triangles again. The way to solve this is to consider \(k\) to be on the 'left' side; thus, whenever \(k\) appears, we won't add \((i, j, o)\) to our total. This gives us the following definitions of \(L\) and \(R\):
\[L = \{k \mid (j - i) \times (k - i) < 0, \text{ or } k \text{ is collinear with } i, j \text{ and lies outside the segment } ij\}\]
\[R = \{k \mid (j - i) \times (k - i) > 0\}\]
Then we can use \(p_{i,j}\) as before, along with the signed area, to compute the expected area of the polygon.
Pseudocode for the solution is then as follows:
- For all points \(i\) and \(j\):
- Let \(L = \{k \mid (j - i) \times (k - i) < 0, \text{ or } k \text{ is collinear with } i, j \text{ and lies outside the segment } ij\}\)
- Let \(R = \{k \mid (j - i) \times (k - i) > 0\}\)
- \(ans \leftarrow ans + p_i \cdot p_j \cdot \left[\prod_{k \in L} (1 - p_k)\right] \cdot \left[1 - \prod_{k \in R} (1 - p_k)\right] \cdot (i \times j) / 2\)
- Return \(ans\)
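A direct translation of this pseudocode might look like the following. The signature, container choices, and the assumption of integer star coordinates are mine; the sign conventions follow the text above:

```cpp
// Expected area of the convex hull of the visible stars, following the
// pseudocode above.  Coordinates are assumed to be integers so that the
// collinearity test is exact; p[i] is the probability that star i is visible.
#include <bits/stdc++.h>
using namespace std;

double expectedArea(const vector<long long>& x, const vector<long long>& y,
                    const vector<double>& p) {
    int n = x.size();
    double ans = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            double probLeftEmpty = 1.0;   // Prod_{k in L} (1 - p_k)
            double probRightEmpty = 1.0;  // Prod_{k in R} (1 - p_k)
            for (int k = 0; k < n; ++k) {
                if (k == i || k == j) continue;
                long long cr = (x[j] - x[i]) * (y[k] - y[i])
                             - (y[j] - y[i]) * (x[k] - x[i]);    // (j - i) x (k - i)
                // k is collinear with i, j and outside the segment ij when the
                // cross product is 0 and k is not strictly between i and j.
                bool collinearOutside = (cr == 0) &&
                    ((x[k] - x[i]) * (x[k] - x[j]) + (y[k] - y[i]) * (y[k] - y[j]) > 0);
                if (cr < 0 || collinearOutside) probLeftEmpty *= 1.0 - p[k];   // k in L
                else if (cr > 0) probRightEmpty *= 1.0 - p[k];                 // k in R
            }
            double signedArea = (x[i] * y[j] - x[j] * y[i]) / 2.0;  // area of (i, j, o)
            ans += p[i] * p[j] * probLeftEmpty * (1.0 - probRightEmpty) * signedArea;
        }
    return ans;
}
```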