tag:blogger.com,1999:blog-88885528474045859562018-03-06T00:27:58.319-08:00bababadalgharaghtakamminarronnkonnbroRichard Kempenoreply@blogger.comBlogger15125tag:blogger.com,1999:blog-8888552847404585956.post-32243110530743656292014-12-03T23:15:00.000-08:002014-12-03T23:15:25.180-08:00<p>From <a href="http://www.newyorker.com/magazine/2014/12/01/quiet-german">a recent article on Angela Merkel</a>: <blockquote><p>Germans and Russians are bound together by such terrible memories that any suggestion of conflict leads straight to the unthinkable. Michael Naumann put the Ukraine crisis in the context of "this enormous emotional nexus between perpetrator and victim," one that leaves Germans perpetually in the weaker position. In 1999, Naumann, at that time the culture minister under Schröder, tried to negotiate the return of five million artifacts taken out of East Germany by the Russians after the Second World War. During the negotiations, he and his Russian counterpart, Nikolai Gubenko, shared their stories. Naumann, who was born in 1941, lost his father a year later, at the Battle of Stalingrad. Gubenko was also born in 1941, and his father was also killed in action. Five months later, Gubenko's mother was hanged by the Germans. </p><p>"Checkmate," the Russian told the German. Both men cried. </p><p>"There was nothing to negotiate," Naumann recalled. "He said, 'We will not give anything back, as long as I live.'" </p></blockquote></p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com1tag:blogger.com,1999:blog-8888552847404585956.post-9884364815158380492014-01-29T10:56:00.000-08:002014-01-29T10:56:05.743-08:00On Plural-Data Prescriptivism in Academia<p>I really have no problems with people using 'data' in the plural. Rather, I'm annoyed at the small and sometimes vocal minority within the statistics community (and in the wider academic community) which likes to proselytise the plural-data rule as if there were some sort of authority to it. 
Unfortunately, that's not how language works; language is democratic, and its 'rules' naturally evolve through the usage of its speakers. Excepting a dead civilisation, mediæval European scholars, and some modern-day scholars, hardly anyone uses 'data' as a plural noun any more. The fact that some scientists or statisticians would claim that there is a 'correct' way of using this word, then, strikes me as rather contrary to the goal of science itself -- namely, to describe the mechanisms underlying a system by examining it empirically. </p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com0tag:blogger.com,1999:blog-8888552847404585956.post-15221508907781020172014-01-08T10:14:00.000-08:002014-02-07T21:14:09.140-08:00Arch + XMonad / GNOME on MacBook Pro Retina 11,1 (13-inch, late 2013)<p>For the past few months, I've tried running OS X as my main OS on my MacBook Pro Retina (late 2013). I eventually got tired of it, so a week ago I started trying to figure out how to dual-boot Arch on this machine. This is a log of the steps I had to go through. (Before doing this, I reset OS X via the recovery partition, so this is a clean install.) 
The following additional pages may be helpful as reference: <ul><li><a href="https://vec.io/posts/use-arch-linux-and-xmonad-on-macbook-pro-with-retina-display">https://vec.io/posts/use-arch-linux-and-xmonad-on-macbook-pro-with-retina-display</a></li><li><a href="http://codylittlewood.com/arch-linux-on-macbook-pro-installation/">http://codylittlewood.com/arch-linux-on-macbook-pro-installation/</a></li><li><a href="https://help.ubuntu.com/community/MacBookPro11-1/Saucy">https://help.ubuntu.com/community/MacBookPro11-1/Saucy</a></li><li><a href="https://wiki.archlinux.org/index.php/Installation_Template">https://wiki.archlinux.org/index.php/Installation_Template</a></li><li><a href="https://bbs.archlinux.org/viewtopic.php?pid=1363803">https://bbs.archlinux.org/viewtopic.php?pid=1363803</a></li><li><a href="https://wiki.archlinux.org/index.php/MacBookPro11,x">https://wiki.archlinux.org/index.php/MacBookPro11,x</a></li></ul>I'm assuming that the reader has some basic level of familiarity with Linux and can understand most of what I'm doing (or look it up if need be). If anyone has any updates or helpful info, feel free to email me and/or comment below, and I will update the guide accordingly. </p> <p><h3>Boot disk</h3>In OS X, download the standard Arch install ISO, find your USB drive with <code>diskutil list</code>, unmount it with <code>diskutil unmountDisk /dev/diskN</code>, and run <pre>sudo dd if=arch.iso of=/dev/rdiskN bs=1m</pre>to create the boot disk. (OS X exposes disks as <code>/dev/diskN</code> rather than <code>/dev/sdX</code>; writing to the raw <code>rdiskN</code> device is considerably faster.) </p> <p><h3>Bootloader</h3>We will use rEFInd because it seems slightly easier than hooking up GRUB to EFI. In OS X, download the rEFInd package and run <pre>sudo ./install.sh --alldrivers</pre>Edit <code>/EFI/refind/refind.conf</code> and enable the <code>scan_all_linux_kernels</code> option. (I also enabled <code>textonly</code> mode because I thought the GUI looked tacky.) In the folder <code>/EFI/refind/drivers_x64</code>, remove all the drivers except the one for EXT4. 
<em>(Not sure if the last step is necessary -- can someone confirm?)</em></p> <p><h3>Partitioning, Part 1</h3>While still in OS X, use the partition manager to resize your OS X partition. I left it with 50 GB, but this is up to personal preference. Note that OS X actually has 3 partitions (EFI boot partition, main partition, recovery partition), but it only shows you one of them. </p> <p><h3>Internet</h3>Boot into your Arch live USB (rEFInd should be able to detect it automatically). You need an internet connection to install the base system. I used USB tethering via an Android phone, but other options may work as well. </p> <p><h3>Partitioning, Part 2</h3>Use <code>cgdisk /dev/sda</code> to set up <code>/</code> on <code>/dev/sda4</code> and <code>/home</code> on <code>/dev/sda5</code>. Then run <pre><br />mkfs.ext4 /dev/sda4<br />mkfs.ext4 /dev/sda5<br />mount /dev/sda4 /mnt<br />mkdir /mnt/home && mount /dev/sda5 /mnt/home<br /></pre>to create new EXT4 file systems on those partitions and mount them. </p> <p><h3>Base System / Partitions, Part 3</h3>Run the following to set up the base system and <code>fstab</code>: <pre><br />pacstrap /mnt base base-devel<br />genfstab /mnt >> /mnt/etc/fstab<br /></pre>At this step we have a basic working Linux system that you can <code>chroot</code> or reboot into: <pre><br />arch-chroot /mnt /bin/bash<br /></pre></p> <p><h3>SSD Performance issues</h3>We should edit <code>/etc/fstab</code> to improve SSD performance: <pre><br />/dev/sda4 / ext4 defaults,noatime,discard 0 1<br />/dev/sda5 /home ext4 defaults,noatime,discard 0 2<br /></pre>The default boot options may cause the SSD to hang. 
Add the following line to <code>/boot/refind_linux.conf</code> to prevent this: <pre>"Default" "ro root=/dev/sda4 libata.force=noncq"</pre></p> <p><h3>Keyboard / Locale / Time</h3><pre><br />echo "somehostname" > /etc/hostname<br /><br /># set up keyboard layout and system locale<br />echo "KEYMAP=dvorak" > /etc/vconsole.conf<br />vi /etc/locale.gen # uncomment what you want<br />locale-gen<br />echo "LANG=en_US.UTF-8" > /etc/locale.conf<br /><br /># set local time and set hardware clock to UTC<br />ln -sf /usr/share/zoneinfo/US/Pacific /etc/localtime<br />hwclock --systohc --utc<br /></pre></p> <p><h3>Users</h3><pre><br /># set root password<br />passwd<br /><br /># add user<br />useradd -m -g users -G wheel -s /bin/bash someusername<br />passwd someusername<br />visudo # uncomment wheel<br /></pre>One last thing -- we should set up wifi before rebooting and logging in to the new account. </p> <p><h3>Yaourt, Wifi</h3>Follow the instructions <a href="http://archlinux.fr/yaourt-en">here</a> to get <code>yaourt</code>. Then use it to install <code>broadcom-wl</code> from the AUR to get wifi drivers: <pre><br />yaourt -S broadcom-wl<br /></pre>I like to manage my connection with <code>wicd</code> so I'll get that as well: <pre><br />pacman -S wicd<br />systemctl enable wicd.service<br /></pre>Now we can reboot and start playing around with the system! </p> <p><h3>XMonad with GNOME</h3>Normally I would forgo GNOME, but they currently have (as far as I know) the best solution to the high-DPI scaling issue. Grab the following packages: <pre><br />pacman -S xorg-server xorg-xinit xorg-server-utils xf86-video-intel xf86-input-synaptics \<br /> gnome-session gnome-settings-daemon<br />yaourt -S xmonad-darcs xmonad-contrib-darcs<br /></pre>Then follow the instructions from my <a href="/2014/01/linux-retina-high-density-displays-and.html">previous post</a> to make XMonad and GNOME play nicely with each other. 
</p> <p><h3>Additional steps / troubleshooting</h3>The above steps will get a minimal working setup on your Retina MacBook Pro. However, there are still a lot of useful things you could configure to make your life easier: <p><h4>Firefox</h4>Change <code>layout.css.devPixelsPerPx</code> to 2 to make it scale everything up for the retina display. </p><p><h4>Missing Cursor</h4>Try running <pre><br />gsettings set org.gnome.settings-daemon.plugins.cursor active false<br /></pre></p><p><h4>USB Automounting</h4><a href="https://wiki.archlinux.org/index.php/udiskie">udiskie</a> is a simple daemon to automount disks with very few dependencies, which is great for a minimal setup. </p><p><h4>Fonts</h4>OS X ships with some great built-in fonts. I strongly recommend grabbing these fonts from <a href="http://support.apple.com/kb/ht2435">these directories</a> on your OS X install and copying them over to Linux. </p></p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com1tag:blogger.com,1999:blog-8888552847404585956.post-58327380385280229932014-01-07T21:50:00.000-08:002014-07-16T20:51:49.857-07:00Linux, retina / high-density displays, and external monitors<p><h3>The issue</h3>Currently most Linux desktop environments work rather poorly with high-density displays, and require a lot of manual tweaking. Recently, <a href="https://help.gnome.org/misc/release-notes/3.10/more-core-ux.html.en">GNOME 3 added support for high-density displays</a>, so the UI should automatically rescale itself properly for GNOME versions 3.10 and higher. Unfortunately, there's one slight issue: If you connect an external monitor, the text and icons are massive. This is because (as far as I know) GTK, X, and much of the substrata of renderers underlying the GUI only support a global DPI or scale setting for all monitors. </p> <p><h3>Workaround</h3>However, there is (sort of) a workaround. 
We can manually rescale so that the external monitor is correct, but the retina monitor has small text. (Presumably if you are using an external monitor, then your laptop monitor is not your main work monitor.) In <a href="https://git.gnome.org/browse/gnome-settings-daemon//commit/?id=047f030235972fdab5e15aff484006caf914216a&utm_source=anzwix">this patch</a>, we see that GNOME rescales windows based on the parameter <code>scaling-factor</code> in schema <code>org.gnome.desktop.interface</code>. Thus, all we have to do to manually set the scale factor is to run <pre><br />gsettings set org.gnome.desktop.interface scaling-factor X<br /></pre>where <code>X</code> is 1 or 2, depending on whether we want pixels to be normal-sized or enlarged. (If <code>X</code> is 0, then GNOME decides for you, but this may or may not be what you want.) For expedience, you can attach a hotkey to these commands, or you can detect <code>xrandr</code> events and rescale automatically. </p> <p><h3>Required packages</h3>It turns out that only a very minimal set of packages need to be installed for this to work -- <code>gnome-session</code> and <code>gnome-settings-daemon</code>, version 3.10 or higher. Because of the sparse requirements, this can be adapted to work for all GTK applications in lightweight desktop environments / window managers (e.g. XMonad or awesome) so long as you start them via a minimal GNOME session. I also strongly recommend building <code>xmonad-darcs</code> and <code>xmonad-contrib-darcs</code> from the AUR because of a bug (see edit below). 
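To sketch the 'rescale automatically' idea mentioned above (a hedged sketch, not part of the original setup: the polling loop, the helper names, and the assumption that the internal panel shows up as an <code>eDP</code> output in <code>xrandr</code> are all mine):

```python
import subprocess
import time

def choose_scaling_factor(connected_outputs):
    # Assumption: the internal retina panel appears as an "eDP" output.
    # Scale 2x only when it is the sole connected output; with any external
    # monitor attached, fall back to 1x so the external monitor looks right.
    externals = [o for o in connected_outputs if not o.startswith("eDP")]
    return 1 if externals else 2

def rescale_loop(interval=5):
    # Poll xrandr (a simple stand-in for listening to RandR events) and push
    # the chosen factor into GNOME's scaling-factor key when it changes.
    last = None
    while True:
        out = subprocess.check_output(["xrandr"]).decode()
        # " connected" (with the leading space) does not match "disconnected".
        connected = [line.split()[0] for line in out.splitlines()
                     if " connected" in line]
        factor = choose_scaling_factor(connected)
        if factor != last:
            subprocess.call(["gsettings", "set",
                             "org.gnome.desktop.interface",
                             "scaling-factor", str(factor)])
            last = factor
        time.sleep(interval)
```

Run something like this in the background from your <code>.xinitrc</code>; a fancier version would subscribe to RandR events instead of polling.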
</p> <p><h3>Minimal .xinitrc + XMonad + GNOME example</h3>A minimal way to get the above method working involves creating the following files: <pre><br />~/.local/share/applications/xmonad.desktop:<br />[Desktop Entry]<br />Type=Application<br />Name=Xmonad<br />Exec=xmonad<br />NoDisplay=true<br />X-GNOME-WMName=XMonad<br />X-GNOME-Autostart-Phase=WindowManager<br />X-GNOME-Provides=windowmanager<br /><br />~/.config/gnome-session/sessions/xmonad.session:<br />[GNOME Session]<br />Name=Xmonad/GNOME<br />RequiredComponents=gnome-settings-daemon;xmonad<br /></pre>Make sure you use <code>gnomeConfig</code> in your <code>xmonad.hs</code>. Then end your <code>~/.xinitrc</code> with <pre><br />exec gnome-session --session=xmonad<br /></pre>This will execute a minimal GNOME session instance which only runs <code>gnome-settings-daemon</code>, <code>xmonad</code>, and almost nothing else. </p> <p><em>Edit (2014-07-16):</em> With the latest version of D-Bus and with XMonad 0.11, <code>gnomeConfig</code> from <code>XMonad.Config.Gnome</code> doesn't properly register XMonad to GNOME, so you'll see a GNOME error dialog upon starting your XMonad-Gnome session. This has been fixed in XMonad-Contrib 0.12 from the AUR. </p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com1tag:blogger.com,1999:blog-8888552847404585956.post-46885803640363800902013-11-04T22:51:00.000-08:002013-11-05T06:44:47.242-08:00Mid-Atlantic Regional ACM Programming Contest 2013, problem F<p>In the Mid-Atlantic Regional ACM Programming Contest this year, one of the more interesting problems was the following: <blockquote>Suppose there are \( N \leq 10^5 \) stars such that star \( i \) has coordinates \( x_i, y_i, z_i \), where \( |x_i|, |y_i|, |z_i| \leq 10^9 \). Given \( K \leq 10^9 \), count the number of pairs of stars which are within distance \( K \) from each other. 
The cluster of stars is sparse -- that is, there are at most \( M = 10^5 \) pairs of stars within \( K \) of each other. </blockquote></p> <p>One simple solution is to partition the space into cubes of side length \( K \). Then, for each star, it suffices to check the number of stars in the same cube and adjacent cubes and see whether they're within \( K \). Surprisingly enough, this actually runs in \( O(M) \) operations. </p> <p>First we consider the case where space is one-dimensional, and our cubes are just intervals of length \( K \). Let the intervals contain \( a_1, a_2, \ldots, a_n \) elements. Note that two stars within the same interval are guaranteed to be within \( K \) of each other, so each interval alone contributes \( a_i (a_i - 1) / 2 \) pairs. Thus, up to small constant factors (which we suppress from here on), \[\sum_{i=1}^n a_i^2 \leq M\] What is the number of operations that this will take? Well, for each star in interval \(i\), we examine all stars in intervals \(i - 1\), \(i\), and \(i + 1\). This is just \[\sum_{i=1}^n a_i (a_{i-1} + a_i + a_{i+1})\] Note that the \(a_i^2\) terms are \(\leq M\); furthermore, the \(a_ia_{i-1}\) terms and the \(a_ia_{i+1}\) terms are symmetric. Thus, it suffices to show that \[\sum_{i=1}^n a_i a_{i-1} \leq M\] Here we apply something known as the <a href="http://en.wikipedia.org/wiki/Rearrangement_inequality">rearrangement inequality</a>*, which roughly says that if you have two sequences of numbers \(x_i\) and \(y_i\), and you want to rearrange the \(x_i\) and \(y_i\) to maximise \(\sum_i x_i y_i\), then it's best to rearrange them in ascending order. In this case, our sequences are \(a_i\) and \(a_{i-1}\), and sorting both into ascending order bounds the sum by \(\sum_{i=1}^n a_i^2\), which we already showed was \(\leq M\). 
This means that the whole algorithm takes \[\sum_{i=1}^n a_i (a_{i-1} + a_i + a_{i+1}) \leq 3 M\] </p> <p>The same argument can be extended to cubes -- however, in order to guarantee that \( \sum_{i=1}^n a_i^2 \leq M \), we need to make our cubes of side length \(K / \sqrt3\) so that all points within them are guaranteed to be at most \(K\) from each other. This implies that if we instead use cubes of size \(K\), then \(\sum_{i=1}^n a_i^2 \leq 3^3 M\). We can then apply the rearrangement argument as before, except with \(27\) adjacent cubes instead of \(3\). This gives us an algorithm which runs in at most \(729 M\) operations. </p> <p>In general, for dimension \(D\) we'll need \((3D)^DM\) operations for the same algorithm, so in high dimensions we'd likely need to exploit other tricks, like the fact that the <a href="http://en.wikipedia.org/wiki/Volume_of_an_n-ball#High_dimensions">volume of an \(n\)-ball disappears as</a> \(n\) increases. We could then (maybe?) approximate the sphere with small cubes of size \(\epsilon K\) instead of simply summing over \(3^D\) adjacent, larger cubes of size \(K\). </p> <p>* -- You could also prove this using <a href="http://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality">Cauchy-Schwarz</a> or some other famous inequality. Rearrangement just happened to be the first one I thought of. </p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com0tag:blogger.com,1999:blog-8888552847404585956.post-55652995430714406932013-10-25T14:59:00.002-07:002013-10-26T21:35:15.024-07:00TopCoder SRM 595 Editorial<h3>Overview</h3> <p>Fewer people showed up for this SRM than usual, possibly because it was between 12AM and 5AM in most of Europe. I haven't yet looked at the Div 2 problems, but the Div 1 medium and hard problems were easier than usual. 
</p> <h3><a href="http://community.topcoder.com/stat?c=problem_statement&pm=12822">LittleElephantAndIntervalsDiv1</a></h3> <p>Whenever a ball is painted, it is <em>completely</em> repainted; therefore, its final colour depends only on the final colour chosen to paint it. Define \(s_i\) as the last stage which paints ball \(i\), or \(0\) if no stage ever paints ball \(i\). Then our solution is just \(2^N\), where \(N = |\{s_1, s_2, \ldots, s_M\} \setminus \{0\}|\). </p> <h3><a href="http://community.topcoder.com/stat?c=problem_statement&pm=12113">LittleElephantAndRGB</a></h3> <p>This is one of those problems where there is an obvious but inefficient way to compute the solution (simply enumerating all tuples and checking in \(O(N^5)\) time, where \(N\) is the length of the string); thus the challenge is simply to optimise the solution so that it runs in time. In this case \(N \leq 2500\), so an \(O(N^2)\) solution should work. </p> <p>Since we are looking to count pairs of two disjoint intervals and all intervals can be enumerated in \(O(N^2)\), a natural solution to consider is to break the problem into two parts. The first part counts the number of first intervals in \(O(N^2)\), and the second part counts the number of second intervals in \(O(N^2)\) as well. As it turns out, this decomposition is sufficient to solve the problem. </p> <p>For each nice \((a, b, c, d)\) tuple, either <ul><li>\(S[a\ldots b]\) is nice</li><li>\(S[c\ldots d]\) is nice</li><li>or finally, there is some sequence of \(m\) <code>G</code>s which spans both intervals (where \(m\) is <code>minGreen</code>)</li></ul>Now suppose we want to iterate over all choices of \((c, d)\). (This is the second part of the decomposition.) If \(S[c\ldots d]\) is nice, then we're done. Otherwise, there must be some sequence of <code>G</code>s which starts at the end of \(S[a\ldots b]\) and ends at the beginning of \(S[c\ldots d]\). 
In particular, if \(S[c\ldots d]\) has a prefix of \(g\) <code>G</code>s, then \(S[a\ldots b]\) must have a suffix of at least \(m - g\) <code>G</code>s. (If \(S[c\ldots d]\) is nice, we adopt the convention that \(g = m\).) </p> <p>This suggests that the first part of the decomposition is computing \(count(b, g)\), the number of intervals which end before \(b\) and have a suffix of at least \(g\) <code>G</code>s (or are nice). Then our answer is just \[\sum_{c, d} count(c, m - prefixLength(c, d))\] where \(prefixLength(c, d)\) is the number of <code>G</code>s at the beginning of \(S[c\ldots d]\), or \(m\) if \(S[c\ldots d]\) is nice. </p> <p>In this case, it suffices to compute the number of intervals which end at <em>exactly</em> \(b\) and have a suffix of <em>exactly</em> \(g\) <code>G</code>s. Call this \(count'(b, g)\); then we can generate \(count\) from \(count'\) by just doing partial sums. To compute \(count'\), we can simply enumerate over all possible \((a, b)\) in \(O(N^2)\) time. </p> <p>Note that there are some additional bookkeeping details (i.e. how to keep track of prefix and suffix lengths), but I'll leave those up to the reader. </p> <h3><a href="http://community.topcoder.com/stat?c=problem_statement&pm=12805">Constellation</a></h3> <p>As with many problems involving expected value, one should immediately jump to thinking about <a href="http://en.wikipedia.org/wiki/Expected_value#Linearity">linearity of expectation</a>. This means that we should try to decompose the expected value of the area as a sum of some other variables; one possible way is as follows: \[A = \sum_{r\in R} I_r A_r\] where \(R\) is the set of all subregions of the polygon, \(A_r\) is the area of subregion \(r\), and \(I_r\) is a binary variable indicating whether \(r\) is present or not. 
Then the expected area is just \[\mathbb{E}(A) = \mathbb{E}\left[\sum_{r\in R} I_r A_r\right] = \sum_{r\in R} P_r A_r\] where \(P_r\) is the expected value of \(I_r\), which is the probability of \(r\) being present. Note that I haven't quite defined what a subregion is -- that turns out to be the hard part of this problem. </p> <p>It's tempting to try to decompose the area into triangles whose vertices are stars in the constellation, and then sum up the expected area of the triangles. Unfortunately, this doesn't quite work. Consider the second example -- there are 4 separate triangles, but each triangle intersects 2 other triangles. As a result, we overcount the area when all the triangles are present. Thus, the expected value of the area is too large. </p> <p>One way to see the solution is to have a good understanding of the <a href="http://en.wikipedia.org/wiki/Shoelace_formula">polygon area algorithm, AKA shoelace formula</a>. This formula computes the area of a polygon in a clever way: first, we take each pair of adjacent points on the convex hull \((i, j)\) and extend them to a triangle by including the origin, \(o\). We then sum up the <em>signed</em> areas of the \((i, j, o)\)s -- which is positive or negative depending on whether \((i,j,o)\) are in clockwise or counterclockwise order. This can be done elegantly with cross products -- the area of triangle \((i, j, o)\) is just \((x_i y_j - x_j y_i)/2\). The expected signed area of the triangle is then its area multiplied by the probability that edge \((i, j)\) will be on the convex hull of the polygon. </p> <p>\((i, j)\) is on the convex hull if and only if there is at least one visible star on the right side of \((i, j)\), and no visible stars on the left side. Again, we can use cross products to determine whether a point is on the left or right of \((i, j)\) -- if \((j - i) \times (k - i)\) is positive, then \(k\) is on the right; if it's negative, it's on the left side. 
Thus, conditioned on both \(i\) and \(j\) being visible (the overall probability also carries a factor \(p_i p_j\), as in the pseudocode below), the probability that \((i, j)\) is on the convex hull is just \[p_{i,j} = \left[\prod_{k \in L} (1 - p_k)\right] \left[1 - \prod_{k \in R} (1 - p_k)\right]\] Lastly, there is one additional case -- if there is some point \(k\) such that \(j\) is between \(i\) and \(k\), then \((i, k, o)\) contains \((i, j, o)\). In this case we'll overcount triangles again. The way to solve this is to consider \(k\) to be on the 'left' side; thus, whenever such a \(k\) is visible, we won't add \((i, j, o)\) to our total. This gives us the following definitions of \(L\) and \(R\): \[L = \{k \mid (j - i) \times (k - i) < 0 \text{ or } k \text{ outside and collinear to } i, j\}\] \[R = \{k \mid (j - i) \times (k - i) > 0\}\] Then we can use \(p_{i,j}\) as before, along with the signed area, to compute the expected area of the polygon. </p> <p>Pseudocode for the solution is then as follows: <ol><li>For all points \(i\) and \(j\):</li><ol><li>Let \(L = \{k \mid (j - i) \times (k - i) < 0 \text{ or } k \text{ outside and collinear to } i, j\}\)</li><li>Let \(R = \{k \mid (j - i) \times (k - i) > 0\}\)</li><li>\(ans \leftarrow ans + p_i \cdot p_j \cdot \left[\prod_{k \in L} (1 - p_k)\right] \cdot \left[1 - \prod_{k \in R} (1 - p_k)\right] \cdot (i \times j) / 2\)</li></ol><li>Return \(ans\)</li></ol></p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com0tag:blogger.com,1999:blog-8888552847404585956.post-61080550432144250032013-10-13T17:06:00.000-07:002013-10-25T14:26:59.652-07:00TopCoder SRM 591 Editorial: PyramidSequences<h3><a href="http://community.topcoder.com/stat?c=problem_statement&pm=12334">PyramidSequences</a></h3> <p>First, fix some \( n \) and \( m \). We begin by looking at an easier problem: what if, instead of sequences of the form \( 1, 2, \ldots, n - 1, n, n - 1, \ldots, 3, 2 \), all the sequence elements were distinct (e.g. if the sequence was of the form \( 1, 2, \ldots, 2 n - 2 \))? 
The <a href="https://en.wikipedia.org/wiki/Chinese_remainder_theorem">Chinese remainder theorem</a> guarantees that a number \( x \) from the \( n \)-sequence will be paired with a number \( y \) from the \( m \)-sequence if and only if \( x \equiv y \pmod{2g} \), where \( g = \gcd( n - 1, m - 1 ) \). We can partition the \( n \)- and \( m \)-sequences into equivalence classes modulo \( 2g \); each class contains \( (n - 1) / g \) or \( (m - 1) / g \) elements, so there are \( (n - 1) (m - 1) / g^2 \) pairings per class; summing over the \( 2g \) classes gives \( 2 (n - 1) (m - 1) / g \) pairings in total. </p> <p>Unfortunately, the pairings are not unique. Each \( k \) in the \( n \)-sequence has two possible indices modulo \( 2 g \): \( k \% (2 g) \) and \( (2 n - k) \% (2 g) \). Let us first consider the case where the two indices are equal, i.e. \( k \equiv (2 n - k) \pmod{2g} \). Simplifying, this gives us \( k \equiv n \pmod g \). Since \( n - 1 \) is a multiple of \( g \), we have \( n \equiv 1 \pmod g \), so our final expression is \( k \equiv 1 \pmod g \); i.e. \( k \equiv 1 \text{ or } g + 1 \pmod{2g} \). Note that this condition is independent of \( n \) or \( m \), so it suffices to simply find the number of \( k \) equivalent to \( 1 \) or \( g + 1 \) modulo \( 2g \) in each sequence and multiply the counts accordingly. </p> <p>Now consider the case in which \( k \) has two distinct indices, \( k \% (2 g) \) and \( (2 n - k) \% (2 g) \). Again, since \( n \equiv 1 \pmod g \), the second index can be written \( (2 - k) \% (2 g) \). If \( k \% (2 g) \in [2, g] \), then \( (2 - k) \% (2 g) \in [g + 2, 2g] \) and vice versa. Thus, each \( k \) appears exactly once in the range \( [2, g] \) and exactly once in \( [g + 2, 2g] \). It suffices, then, to compute the result for one range. This is simply \( (g - 1) (n - 1) (m - 1) / g^2 \). </p> <p>Putting the two cases (\( k \equiv 1 \pmod g \) or not) together, we have the following Python code. 
Some care must be taken to account for the special cases \( 1 \), \( n \), and \( m \). <pre><br />from math import gcd<br /><br />class PyramidSequences(object):<br /> def distinctPairs(self, n, m):<br /> g = gcd(n - 1, m - 1)<br /> an, am = [(x - 1) // g for x in (n, m)]<br /> bn, bm = [x // 2 + 1 for x in (an, am)]<br /> cn, cm = [x // 2 + (x % 2) for x in (an, am)]<br /> return (g - 1) * an * am + bn * bm + cn * cm<br /></pre></p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com0tag:blogger.com,1999:blog-8888552847404585956.post-14801932495922452912012-12-09T23:38:00.000-08:002013-10-25T14:27:06.151-07:00TopCoder SRM 563 Editorial<h3>Overview</h3> <p>This SRM featured three pairs of problems with easy and hard versions: FoxAndHandle, CoinsGame, and SpellCards. The two CoinsGame problems could be solved with graph search, and the two SpellCards problems could be solved with dynamic programming. Notably, the harder version of SpellCards reduced to a trivial application of knapsack, but proving this reduction was tricky and most coders used a more obvious, but longer, dynamic programming solution to solve it. In fact, the author himself stated in <a href="http://apps.topcoder.com/forums/?module=Thread&threadID=772544&start=0&mc=12#1652219">this forum post</a> that he did not know about the knapsack solution until after the contest. </p> <h3><a href="http://community.topcoder.com/stat?c=problem_statement&pm=12334">FoxAndHandleEasy</a></h3> <p>This is a useful exercise in using the <code>substr()</code> function. Let \( n \) be the length of \( s \). Iterate over all length-\( n \) substrings of \( t \). Let this substring be called \( u \). If \( u \) is equal to \( s \) and if the string formed by removing \( u \) from \( t \) is also equal to \( s \), then return true. </p> <h3><a href="http://community.topcoder.com/stat?c=problem_statement&pm=12332">CoinsGameEasy</a></h3> <p>There are several approaches which work. 
The simplest one is probably just to brute force all possible moves. The following is pseudocode for a recursive solution which implements that idea: <pre><br />// xi, yi are the coordinates of coin i.<br />// steps is the number of moves that we have made so far.<br />rec( x0, y0, x1, y1, steps )<br />{<br /> if( ( x0, y0 is off the grid ) && ( x1, y1 is off the grid ) )<br /> return INFINITY;<br /><br /> if( ( x0, y0 is off the grid ) || ( x1, y1 is off the grid ) )<br /> return steps;<br /><br /> if( steps == 10 ) return INFINITY;<br /><br /> ans = INFINITY;<br /> for each direction d<br /> let x0', y0' = move in direction d from x0, y0;<br /> let x1', y1' = move in direction d from x1, y1;<br /> ans = min( ans, rec( x0', y0', x1', y1', steps + 1 ) );<br /><br /> return ans;<br />}<br /></pre>The recursion depth is at most 10, and each recursive call branches 4 times, so the function is called at most \( 4^{10} \approx 1 \text{ million } \) times. This is fast enough to pass on TopCoder. </p> <p>Alternatively, you could use <a href="http://en.wikipedia.org/wiki/Breadth-first_search">breadth-first search</a> or some fancier algorithms, but they aren't necessary in this case. </p> <h3><a href="http://community.topcoder.com/stat?c=problem_statement&pm=12333">SpellCardsEasy</a></h3> <p>Let's start with a simple solution, figure out why it doesn't work, and then improve on it. Suppose we have the following recurrence: <pre><br />f( i )<br />{<br /> if( i == n ) return 0;<br /><br /> // use card<br /> use = 0;<br /> if( i + lvl[i] <= n ) use = dmg[i] + f( i + lvl[i] );<br /><br /> // don't use card<br /> no_use = f( i + 1 );<br /><br /> return max( use, no_use );<br />}<br /></pre>This seems simple enough: We look at each card \( i \) and decide whether to use it or not. If we use it, then we remove cards \( i, i + 1, \ldots, i + lvl_i - 1 \), and then recursively proceed from there. 
The problem, unfortunately, is that the cards we decide to remove may not necessarily be contiguous. Consider the following example: \[ lvl = [ 2, 1, 1 ] \\ dmg = [ 100, 1, 0 ] \] Here it's optimal to first use the middle card, then use the first card, dealing a total of 101 damage. Our algorithm, on the other hand, would return 100 because it uses the first card, and then discards the first two cards. </p> <p>Suppose we keep a count of cards that need to be removed, but we don't remove them right away<a href="#srm_563_footnote1">*</a>. Call this value \( need \). Now, examining each card from left to right, we can do the following: <ul><li>Don't use the card. This decreases our current \( need \) value by 1.</li><li>Use the card. This increases the damage we do by \( dmg_i \) and increases \( need \) by \( lvl_i - 1 \) (the card itself immediately absorbs one of its own \( lvl_i \) removals). </li></ul><p><b>Claim:</b> If, after examining all the cards, we have \( need = 0 \), then the cards we chose to use are a valid combination. </p><p><b>Proof:</b> Suppose the rightmost card that we decided to use was card \( i \). When we processed card \( i \), \( need = k \) for some \( k \), and afterwards, we had \( need = k + lvl_i - 1 \). Since we managed to decrement \( need \) to 0 by the time we processed all the cards, then there must be at least \( k + lvl_i - 1 \) cards to the right of \( i \). Now let's use card \( i \) and then remove cards \( i, i + 1, \ldots, i + lvl_i - 1 \) from the deck. This new deck also satisfies the \( need = 0 \) property. That is, if you look at all the cards we decided to use and increment or decrement \( need \), then you still get to \( need = 0 \) after examining all the cards. We can then inductively apply the above process. Use the rightmost card, and then remove it, along with some cards to the right of it. Eventually we will have used all the cards we chose originally without breaking any rules of the game. 
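The claim is easy to sanity-check with a brute force over all use/skip choices, scoring only those choices for which \( need \) ends at exactly 0 (an illustrative sketch, not the contest solution; the function name is mine):

```python
from itertools import product

def max_damage_bruteforce(lvl, dmg):
    # Enumerate every use/skip choice, processed left to right, tracking
    # `need`, the running count of pending removals. Per the claim, a choice
    # is realisable if and only if `need` is exactly 0 after the last card.
    n = len(lvl)
    best = 0
    for choice in product([False, True], repeat=n):
        need = total = 0
        for i in range(n):
            if choice[i]:
                total += dmg[i]
                need += lvl[i] - 1   # card i removes itself plus lvl[i]-1 more
            else:
                need = max(need - 1, 0)  # card i absorbs one pending removal
        if need == 0:
            best = max(best, total)
    return best
```

On the example above (\( lvl = [2, 1, 1] \), \( dmg = [100, 1, 0] \)) this returns 101, matching the optimal play.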
</p></p> <p>Based on our above observation, we can write a new recurrence relation: <pre><br />f( i, need )<br />{<br /> if( i == n ) return 0;<br /><br /> // use card i<br /> use = 0;<br /> if( need + lvl[i] <= n - i ) use = dmg[i] + f( i + 1, need + lvl[i] - 1 );<br /><br /> // don't use card i<br /> no_use = f( i + 1, max( need - 1, 0 ) );<br /><br /> return max( use, no_use );<br />}<br /></pre>(Note the bound in the 'use' branch: after using card \( i \), the \( need + lvl_i - 1 \) pending removals must be covered by the \( n - i - 1 \) cards to its right; otherwise the base case would be reached with removals still owed.) Now we just need to use <a href="http://community.topcoder.com/tc?module=Static&d1=features&d2=040104">dynamic programming or memoisation</a> to make sure that each recursive call gets evaluated at most once. The complexity of this algorithm is \( O( N^2 ) \) for \( N \) cards, which is more than fast enough for the time limit. </p> <h3><a href="http://community.topcoder.com/stat?c=problem_statement&pm=12331">FoxAndHandle</a></h3> <p>There are several approaches to solve this problem. In this tutorial, I'll go over a simple semi-greedy approach. In general, problems that require a 'lexicographically least' solution suggest some sort of smart greedy solution, but this is not always the case. </p> <p>We begin with a simple algorithm and fill in the blanks. The algorithm below keeps track of the position just after the last character we used, and at each step, searches all subsequent characters and picks the 'best' one: <pre><br />f( s )<br />{<br /> ans = "";<br /> last = 0;<br /> for t from 1 to length(s) / 2<br /> {<br /> best = -1;<br /><br /> for i from last to length(s) - 1<br /> if ( we can append s[i] to ans ) && ( best == -1 || s[i] < s[best] )<br /> best = i;<br /><br /> ans += s[best];<br /> last = best + 1;<br /> }<br /> return ans;<br />}<br /></pre></p> <p>The only missing piece is determining whether, at each step, we can append some \( s_i \) to \( ans \). As it turns out, the following condition is sufficient: <blockquote>Let \( cnt( c, str ) \) denote the number of occurrences of \( c \) in some string \( str \).
Then we can append \( s_i \) to \( ans \) if and only if \( cnt( s_i, ans ) < cnt( s_i, s ) / 2 \) and, for all \( c \), \[ cnt( c, ans ) + cnt( c, s_{ i, i + 1, \ldots, n - 1 } ) \geq cnt( c, s ) / 2 \] </blockquote>The rationale for the above condition is the following. The first inequality says that we never take more copies of a character than the final string needs. For the second, let's examine each character that we have already added to \( ans \) (i.e. \( cnt( c, ans ) \)), as well as each character which we could potentially add to \( ans \) in the future (i.e. \( cnt( c, s_{ i, i + 1, \ldots, n - 1 } ) \) ). In order for us to produce a valid string in the end, these two numbers together must be at least equal to the number of characters we need in the final string, which is \( cnt( c, s ) / 2 \). Together, these checks are both necessary and sufficient for \( s_i \) to be a valid addition to \( ans \). </p> <p>Once we have the above condition, the pseudocode now looks like this: <pre><br />f( s )<br />{<br /> ans = "";<br /> last = 0;<br /> for t from 1 to length(s) / 2<br /> {<br /> best = -1;<br /><br /> for i from last to length(s) - 1<br /> {<br /> ok = cnt( s[i], ans ) < cnt( s[i], s ) / 2;<br /> for c from 'a' to 'z'<br /> if cnt( c, ans ) + cnt( c, s.substr( i ) ) < cnt( c, s ) / 2<br /> ok = false;<br /><br /> if ok && ( best == -1 || s[i] < s[best] )<br /> best = i;<br /> }<br /><br /> ans += s[best];<br /> last = best + 1;<br /> }<br /> return ans;<br />}<br /></pre></p> <h3><a href="http://community.topcoder.com/stat?c=problem_statement&pm=12329">SpellCards</a></h3> <p>We can solve this problem in a similar manner as we solved its division 2 version: rotate the deck of cards, compute the same recurrence as above for each rotation, and take the best result. However, there is a niftier solution that's trickier to see, but much, much easier to code. It turns out that SpellCards is equivalent to the knapsack problem, where each object \( i \) has weight \( lvl_i \) and value \( dmg_i \).
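Taking the knapsack equivalence on faith for a moment (it is proved below), the entire solution is a few lines long. A Python sketch, with a function name of my own choosing:

```python
def spell_cards(lvl, dmg):
    """0/1 knapsack: item i has weight lvl[i] and value dmg[i]; capacity n = len(lvl)."""
    n = len(lvl)
    best = [0] * (n + 1)               # best[w] = max damage using total level at most w
    for l, d in zip(lvl, dmg):
        for w in range(n, l - 1, -1):  # iterate downwards so each card is used at most once
            best[w] = max(best[w], best[w - l] + d)
    return best[n]
```

On the earlier division 2 example, spell_cards([2, 1, 1], [100, 1, 0]) returns 101, the optimal total damage.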
</p> <p><b>Proof: </b> It suffices to show that a set of cards \( S = \{ x_1, x_2, \ldots, x_m \} \) can be played if and only if \[ \sum_{ x \in S } lvl_x \leq n \] One direction is clear -- all solutions to SpellCards must also be solutions to knapsack. For the converse, assume the above inequality holds for some set of cards \( S = \{ x_1, x_2, \ldots, x_m \} \). </p> <p><b>Claim: </b> There must be at least one card \( x_i \) that we can use without removing any other cards in \( S \) in the process. </p> <p><b>Proof of claim: </b> Assume the contrary -- that for each \( x_i \), there is some other \( x_j \) within the range \( [ x_i + 1, \ldots, x_i + lvl_{x_i} - 1 ] \) modulo \( n \). Then, starting from any \( x_i \) and repeatedly jumping to a chosen card inside its interval, we must eventually wrap all the way around the circle of \( n \) cards; the intervals \( [ x_i, x_i + 1, \ldots, x_i + lvl_{x_i} - 1 ] \) thus chain together to cover the entire circle while overlapping at each jump. This implies that \[ \sum_{ x \in S } lvl_x > n \] which contradicts the assumption. Thus, there must be at least one card which we can use. </p> <p>Given the claim, for every knapsack-satisfying set \( S = \{ x_1, \ldots, x_m \} \), we can use some \( x_i \in S \) without removing any of the other \( x_j \). This leaves us with \( n - lvl_{x_i} \) cards total. Furthermore, \[ \sum_{ x \in S \setminus \{ x_i \} } lvl_x = \left[ \sum_{ x \in S } lvl_x \right] - lvl_{x_i} \leq n - lvl_{x_i} \] I.e., after using \( x_i \), the set \( S \setminus \{ x_i \} \) still satisfies the knapsack inequality with the new set of cards. By induction, we can use all the cards if the initial set \( S \) satisfies the inequality. This completes the proof: we can solve SpellCards by simply applying a knapsack dynamic program with weights \( lvl_i \) and values \( dmg_i \). </p> <h3><a href="http://community.topcoder.com/stat?c=problem_statement&pm=12330">CoinsGame</a></h3> <p>The notion of an <a href="http://en.wikipedia.org/wiki/Equivalence_relation">equivalence relation</a> is a very useful tool from basic mathematics which can help us solve this problem.
Equivalence relations allow us to define relationships between arbitrary subsets of a particular set \( S \) in terms of binary relations between individual elements, in effect turning \( O( 2^{|S|} ) \) amount of data into \( O( |S|^2 ) \). </p> <p>In this problem, we would like to consider the following relation: for two given non-obstacle squares in the grid \( a \) and \( b \), place coins on both \( a \) and \( b \). We say \( a \sim b \) (or \( a \) is <em>similar</em> to \( b \)) if every sequence of moves that causes \( a \)'s coin to fall off also causes \( b \)'s coin to fall off, and vice versa. I.e., the starting configuration consisting of coins on \( a \) and \( b \) is not a <em>good</em> configuration. (The proof that this is an equivalence relation is left to the reader.) </p> <p>Now suppose that from the above relation, we have equivalence classes \( C_1, C_2, \ldots, C_m \) that partition \( S \), the set of non-obstacle squares. By the way we defined the relation, an initial configuration is good if and only if it contains squares in at least two classes. Then the total number of good configurations is the number of non-empty configurations minus the number of configurations confined to a single class, namely \[ 2^{|S|} - 1 - \sum_{i} \left( 2^{|C_i|} - 1 \right) \] </p> <p>It remains to determine how we actually compute the equivalence classes. We do this by computing which pairs \( a, b \) are <em>not</em> similar, i.e. there exists some sequence of moves which causes \( a \)'s coin to fall off, but not \( b \)'s, or vice versa. It turns out that we can do this with a simple breadth-first search (DFS will overflow the stack). Let \( prev( s, d ) \) denote the set of all squares \( t \) such that moving in direction \( d \) from \( t \) takes you to \( s \).
Then the following algorithm computes whether or not pairs of squares are distinct: <pre><br />boolean distinct[N][N];<br /><br />queue< pair< square, square > > q;<br />for( square s0 )<br />for( square s1 )<br />{<br /> distinct[s0][s1] = false;<br /> for( each direction d )<br /> {<br /> let s0' = move in direction d from s0;<br /> let s1' = move in direction d from s1;<br /> if( ( s0' is off the board ) ^ ( s1' is off the board ) )<br /> distinct[s0][s1] = true;<br /> }<br /> if( distinct[s0][s1] )<br /> q.add( s0, s1 );<br />}<br /><br />while( q is not empty )<br />{<br /> s0, s1 = q.pop();<br /> for( each direction d )<br /> for( t0 in prev( s0, d ) )<br /> for( t1 in prev( s1, d ) )<br /> {<br /> if( !distinct[t0][t1] )<br /> {<br /> distinct[t0][t1] = true;<br /> q.add( t0, t1 );<br /> }<br /> }<br />}<br /></pre>This algorithm runs in \( O( |S|^2 ) \); since \( |S| \leq 1600 \), this will complete in under the time limit. </p> <p>Once we have computed which pairs are distinct, it is quite simple to figure out the equivalence classes and then compute the final answer. </p> <h3>Footnotes</h3><ol><li><a name="srm_563_footnote1"></a>This is a very standard technique in designing <a href="http://en.wikipedia.org/wiki/Amortized_analysis">amortised algorithms</a>: you keep track of the 'costs' you incur, and then you figure out how to 'distribute' them optimally later. <a href="http://en.wikipedia.org/wiki/Dynamic_array">Resizing ArrayLists or vectors</a> is a classic example of a commonly used amortised algorithm. </li></ol>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com0tag:blogger.com,1999:blog-8888552847404585956.post-82115334805469719352012-11-17T17:20:00.000-08:002012-11-17T22:19:09.190-08:00General Impressions about InterviewStreet<p>As a longtime participant of programming competitions (ACM-ICPC, TopCoder, Code Jam, Codeforces, SPOJ, etc...), I initially found the idea of InterviewStreet to be promising.
Like TopCoder and other similar competitions, it builds connections between companies and a wide talent pool. However, I feel the attention that InterviewStreet receives is undeserved. While it excels at attracting industry partners to sponsor it and use its platform, the quality of the competitor-facing side of the website is lacking. </p> <p><b>The main problem that I have with InterviewStreet is that the website's problem statements are very rough and unpolished, often lacking crucial details and containing contradictory statements or examples.</b> While it seems like the people behind InterviewStreet are familiar with other competitions, they are rather poor at producing problem statements of the same quality. The writers seem to have a poor grasp of English, and are very bad at expressing things in an unambiguous, mathematical manner, as these types of competition site require. For example, consider the following screenshot of a problem statement: </p> <div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-qu4wOEcKkTA/UKgpdU59C2I/AAAAAAAAADk/z1pzQgACowc/s1600/scrot.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="243" width="400" src="http://1.bp.blogspot.com/-qu4wOEcKkTA/UKgpdU59C2I/AAAAAAAAADk/z1pzQgACowc/s400/scrot.png" /></a></div> <p>When I see this image, I immediately notice a few things: <ol><li>The final sentence of the first paragraph asks the reader to "find out the number of zombies at every junction after 'k' timesteps". Many problem-solvers will realise that the problem asks for the <em>expected value</em> of the number of zombies, but to new users or to users for whom English is not a native language, this sentence can seem very confusing. </li><li>It's not specified what a zombie will do in the case where there are no neighbouring junctions.
</li><li>The input format doesn't specify whether the numbers are separated by spaces, newlines, commas, or arbitrary characters. Once again, one can read the example input to figure this out, but this should not be necessary. The input format should completely describe all possible input. </li><li>There are lots of stylistic issues. All the input variables are uppercase, except for 'k', which is lowercase. Word choice is not the best, and the grammar and spelling are terrible. All in all, these small details make the statement much harder to parse, unlike statements on other websites like TopCoder. </li></ol></p> <p>This is not an isolated case -- in the InterviewStreet Collegiate Challenge last spring, there were contradictions between the problem descriptions and the input, causing participants to lose valuable coding time. (Strangely enough, the results of the collegiate challenge are no longer posted on InterviewStreet's website.) Many of the other challenges on the website suffer from the same problem. <b>The result of having poor problem statements is like having students take a poorly-written exam in a class -- less-talented students may stumble upon correct answers, while smarter students may fail.</b> In this case success on InterviewStreet is as much dependent on being able to decipher poorly-written English as it is on having true programming skills. <b>I do not exaggerate when I say that some of these problems contain the worst writing that I've ever seen in competitive programming.</b> For comparison, look at the ACM-ICPC, TopCoder, Google Code Jam, or Kaggle. Should InterviewStreet wish to remain competitive (or be taken seriously, for that matter), it should put more effort into producing higher-quality problem statements. </p> <p>InterviewStreet also fails to leverage the power of the programming community. One of Codeforces' first features when it launched in 2010 was to provide a blogging framework for its participants.
Likewise, TopCoder has a wiki and forum, and hosts an extensive series of community-written articles. InterviewStreet has a forum, but few people seem to use it (perhaps because of the problems above). </p> <p>Despite all of this, InterviewStreet does a good job of attracting industry partners, which I would attribute mostly to hype and rhetoric about its appeal as a 'startup' with potential to 'disrupt' the talent search process. If I were a company or startup looking for talent, I would turn instead to Kaggle, TopCoder, or Codeforces -- websites which provide great platforms for similar competitions, with well-established communities and some of the most impressive talent pools in the world. In the past I have heard that InterviewStreet is better for job-searching than TopCoder, but if the current trend continues, that will no longer be the case. </p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com3tag:blogger.com,1999:blog-8888552847404585956.post-55695598731165252812012-10-04T22:13:00.000-07:002012-10-04T22:13:02.534-07:00Algebraic Type Systems / Combinatorial Species 101 <p>This post is not meant to be a formal introduction to type theory or combinatorial species, but rather attempts to introduce some of the intuition behind the theory. Readers should have a basic understanding of naïve set theory, calculus, and data types in modern programming languages. </p> <h3>Introduction</h3><p>Let us begin by examining how types manifest themselves in programming. Many C-like programming languages provide basic types called <code>int</code>, <code>float</code>, <code>char</code>, and <code>bool</code> representing integers, floating point numbers, character literals, and booleans, respectively. Many languages also provide more complicated <em>composite</em> types, which are built up from simpler types.
Simple examples include <code>pair< x, y ></code>, which consists of a value of type <code>x</code> and a value of type <code>y</code>, or a <code>list< x ></code> containing elements of type <code>x</code>. (In this article we will use uppercase letters \( X, Y, Z, \ldots \) to denote arbitrary types; feel free to substitute these abstract types with concrete types like <code>int</code> or <code>char</code> if it makes it easier to understand.) </p> <p>In mathematical language, we could equivalently define \( P(X,Y) \) (representing a <code>pair< x, y ></code>) to be the set \( X \times Y \). Algebraic type theory allows us to represent these mathematical definitions as algebraic expressions, which we will call <em>type expressions</em>. In this case, the type expression for a pair is $$ P( X, Y ) = X \times Y = X Y $$ (Such a type is called a <a href="http://en.wikipedia.org/wiki/Product_type">product type</a>.) These expressions can be manipulated like normal algebraic expressions -- for example, a pair of two pairs <code>pair< pair< x, x >, pair< y, z > ></code> could be represented as $$ P( P( X, X ), P( Y, Z ) ) = ( X \times X ) \times ( Y \times Z ) = X^2 Y Z $$ </p> <p>A more complicated question is how to represent lists. If a <code>list< x ></code> contains exactly \( n \) elements, then it has the type \( X^n \); however, \( n \) is not fixed, so it is not clear how to represent lists of arbitrary length. We solve this problem by introducing another bit of notation, namely the \( + \) operator, representing <em>alternation</em>. The type expression \( T( A, B ) = A + B \) means that a variable of type \( T( A, B ) \) can take on a value of <em>either</em> type \( A \) or type \( B \). Mathematically, we could write \( T( A, B ) = A \sqcup B \), where \( \sqcup \) denotes the <a href="http://en.wikipedia.org/wiki/Disjoint_union">disjoint union</a> operator. 
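The 'algebra' here is really just counting inhabitants. If we model small finite types as Python sets of values (purely my own illustration), a product type has \( |X| \times |Y| \) inhabitants and a sum type has \( |X| + |Y| \):

```python
from itertools import product

X = {0, 1, 2}          # a toy type with |X| = 3 values
Y = {'a', 'b'}         # a toy type with |Y| = 2 values

# Product type X × Y: one value from each component.
pair_xy = set(product(X, Y))
assert len(pair_xy) == len(X) * len(Y)    # 3 * 2 = 6

# Sum type X + Y: a value from either side, tagged so the sides stay disjoint.
sum_xy = {('left', x) for x in X} | {('right', y) for y in Y}
assert len(sum_xy) == len(X) + len(Y)     # 3 + 2 = 5
```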
These <a href="http://en.wikipedia.org/wiki/Tagged_union">sum types</a> manifest themselves in the rarely-used <a href="http://en.wikipedia.org/wiki/Union_(computer_science)">union</a> structure in C (with the caveat that C's unions are untagged -- the programmer must track which alternative is stored). </p> <p>Now, equipped with this notation, we can write a type expression for <code>list< x ></code>: $$ L(X) = X^0 + X^1 + X^2 + X^3 + \ldots $$ Since \( X^n \) denotes a structure containing \( n \) values, the above expression indicates that lists can contain any nonnegative number of values. But wait, what does \( X^0 \) mean here? In normal arithmetic, \( X^0 = 1 \); as it turns out, this definition also makes sense in type expressions. We define the symbol \( 1 \) to be a type which can take on exactly one, constant value. This is often called a <a href="http://en.wikipedia.org/wiki/Unit_type">unit type</a>. Here \( X^0 = 1 \) denotes an empty list; since all empty lists can be regarded as equal (mathematically, at least), it makes sense that they form a unit type. </p> <p>A unit type is trivial in the sense that it yields no additional information -- it can only be a constant value. Thus, when we write $$ Pair( 1, X ) = 1 \times X = X $$ the expression indicates that a pair consisting of a constant and an \( X \) can be completely described by specifying a value in \( X \). </p> <p>What else can we do with unit types? For starters, we could consider adding them together. As we all learned in kindergarten, $$ 1 + 1 = 2 $$ But what does this mean in terms of types? Well, \( 2 \) seems to be a type which can take on either one constant or a different constant; if we call the first constant <code>true</code> and the second constant <code>false</code>, then we see that \( 2 \) is the same as the boolean type.
We can similarly represent the set of \( 32 \)-bit integers with the expression \( 1 + 1 + \ldots + 1 = 2^{32} \); we can view this expression as being able to choose one value out of \( 4294967296 \) possible options, or alternatively, fixing \( 32 \) bits, each of which can take on \( 2 \) values. </p> <p>Just as we can define \( 1 \) to be the multiplicative identity, we can also define \( 0 \) to be the additive identity. I.e., for any \( X \), $$ 0 + X = X $$ \( 0 \) is referred to as the <a href="http://en.wikipedia.org/wiki/Bottom_type">bottom type</a>, and it is a type with no values whatsoever. Thus, the expression \( 0 + X \), meaning "either nothing or \( X \)", is the same as just \( X \). </p> <h3>Recursive Data Types and Generating Functions</h3><p>Earlier we defined \( L( X ) \) with the following type expression: $$ L( X ) = 1 + X + X^2 + X^3 + \ldots $$ It is also possible to define lists recursively in the following manner: $$ L( X ) = 1 + X \times L( X ) $$ We read the expression thus: a list is either an empty list (the \( 1 \)), or a node which holds one element (the \( X \)) and points to a successor list (the \( L( X ) \)). This is the standard way of defining singly-linked lists in Lisp. Alternatively, in C++, this could be written <pre><br />class node<br />{<br /> int value;<br /> node* next;<br />};<br />typedef node* list;<br />list EMPTY = NULL;<br /></pre></p> <p>What is interesting about this recursive definition is that it is <em>exactly</em> the same as the previous definition. If we repeatedly expand the expression, we get $$ L( X ) = 1 + X \times L( X ) = 1 + X ( 1 + X \times L( X ) ) = 1 + X + X^2 \times L( X ) = 1 + X + X^2 + X^3 \times L( X ) = \ldots $$ </p> <p>Furthermore, the type expression is a special polynomial called a <a href="http://en.wikipedia.org/wiki/Generating_function">generating function</a>. A generating function is a formal power series in which the coefficient of \( X^n \) is some meaningful function of \( n \).
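We can watch the repeated expansion above happen numerically. The sketch below (my own illustration) represents a truncated power series as a list of coefficients and fixpoint-iterates \( L = 1 + X \times L \); multiplying by \( X \) is just a shift of the coefficients:

```python
def expand_list_type(degree):
    """Fixpoint-iterate L(X) = 1 + X * L(X) as a power series truncated at `degree`."""
    L = [0] * (degree + 1)            # start from the zero series
    for _ in range(degree + 1):
        L = [1] + L[:degree]          # 1 + X * L: prepend the constant, shift the rest
    return L
```

Every coefficient converges to 1: there is exactly one list shape of each length.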
Whenever we write a type expression for some container \( C( X ) \), the coefficient of \( X^n \) usually denotes the number of 'differently-shaped' containers that hold exactly \( n \) \( X \)'s. In this case, every singly linked list of length \( n \) has the same shape. Therefore, the coefficient of \( X^n \) is 1. </p> <p>A simple data structure which could have many different shapes for some fixed size is a binary tree. For example, a binary tree of size \( 3 \) could have the following shapes: <div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-w8elDpKZvKg/UG4r0d-w79I/AAAAAAAAADQ/ie-RPVGwfro/s1600/trees.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="250" width="400" src="http://2.bp.blogspot.com/-w8elDpKZvKg/UG4r0d-w79I/AAAAAAAAADQ/ie-RPVGwfro/s400/trees.png" /></a></div>If we let \( T( X ) \) denote the type expression for binary trees, then we know that the coefficient of \( X^3 \) in \( T( X ) \) is \( 5 \), since there are exactly \( 5 \) binary trees of size \( 3 \). </p> <p>As with lists, we can write type expressions for binary trees. One possible definition is this: a binary tree is either an empty node, or a node containing an element and two children. This gives us the expression $$ T( X ) = 1 + X \times T( X )^2 $$ Notice that this is a standard quadratic equation in terms of \( T( X ) \). Suspending disbelief for a moment, we can actually solve for a solution to \( T( X ) \): $$ T( X ) = \frac{ 1 \pm \sqrt{ 1 - 4 X } }{ 2 X } $$ Here, the square root of \( 1 - 4 X \) obviously makes no sense in the standard numerical interpretation of the square root; rather, we take \( \sqrt{ 1 - 4 X } \) to mean the power series in \( X \) that, when squared, is equal to \( 1 - 4 X \).
In this case, we can simply take the Taylor series of the expression. The 'minus' root is the only one that makes sense here (the 'plus' root expands with a \( 1 / X \) term, which has no meaning as a type expression), and it gives us $$ T( X ) = \frac{ 1 - \sqrt{ 1 - 4 X } }{ 2 X } = 1 + X + 2 X^2 + 5 X^3 + 14 X^4 + 42 X^5 + \ldots $$ </p> <p>As a sanity check, we can verify that there is exactly \( 1 \) tree of size \( 0 \), \( 1 \) tree of size \( 1 \), \( 2 \) trees of size \( 2 \), and \( 5 \) trees of size \( 3 \). Astute readers may have already recognised that \( 1, 1, 2, 5, 14, 42 \) are the first few <a href="http://en.wikipedia.org/wiki/Catalan_numbers">Catalan numbers</a>, a sequence which appears often in combinatorics. One of the interesting properties of the Catalan numbers is that the \( n \)-th Catalan number corresponds exactly to the number of binary trees of size \( n \). By expanding a type expression for binary trees, we have come up with a generating function for Catalan numbers! </p> <h3>Differentiating Data Types</h3><p>Surprisingly enough, we can actually perform a variation of <em>differentiation</em> on data structures. In functional programming, it is often useful to consider a data structure with a 'hole' punched in it; i.e., we take a single value out of a container and mark the spot with a placeholder. For example, in a <code>list< int ></code>, we can take out an <code>int</code> at some point in the list and keep a pointer to the position where we removed the <code>int</code>. These modified data structures are called <em>one-hole contexts</em> because once you 'fill in' the hole with a concrete value, you get a data structure of the same shape as the original. </p> <p>We begin by examining the lowly pair \( P( X, X ) \), which has the type \( X^2 \). Suppose we pluck out one of the values in the pair -- how do we represent what remains? If we remove either element, only one element remains; this can be represented with an \( X \).
However, we also have to keep track of <em>which</em> element we removed -- a hole in the left element is not the same as a hole in the right element. Thus, the one-hole context of \( P( X, X ) \) is \( X + X = 2 X \). You may alternatively think of \( 2 X \) as a tuple containing a boolean (denoting whether the left or right element was removed) and an \( X \) (representing the remaining element). Interestingly enough, \( 2 X \) happens to be the derivative of \( X^2 \) with respect to \( X \). As it turns out, this is true in general -- it has been proven that one-hole contexts have an <em>exact correspondence</em> with the derivatives of type expressions! </p> <p>A more complicated example is the list \( L( X ) \). First, let's compute the derivative of \( L( X ) \); we begin by rewriting $$ 1 + X + X^2 + X^3 + \ldots = \frac{ 1 }{ 1 - X } $$ Then, $$ L'( X ) = \frac{ d }{ d X } \frac{ 1 }{ 1 - X } = \left( \frac{ 1 }{ 1 - X } \right)^2 = L( X )^2 $$ Whenever we punch a hole in a list, what remains are the sublists before and after the hole. Thus, the two lists in \( L( X )^2 \) represent, respectively, the prefix and suffix of the one-hole context. We can thus 'fill in' the context and construct a new list by concatenating the prefix, some value \( X \), and the suffix. </p> <p>Likewise, we can perform differentiation on binary trees: $$ T'( X ) = T( X )^2 + 2 X \times T( X ) \times T'( X ) $$ This definition is somewhat more cryptic, but it can be deciphered with the same type of analysis that we used earlier. Given the root node of a binary tree with a hole in it, either <ol><li>The hole lies in the root node, in which case what remains is the pair of subtrees, of form \( T( X )^2 \).</li><li>Or the hole lies in either the left or right subtree; in this case the root node can be described by specifying its content \( X \), a complete subtree \( T( X ) \), and an incomplete subtree \( T'( X ) \).
Since there are two choices for where the hole lies, we multiply this term by 2.</li></ol></p> <h3>Further Reading</h3><ol><li>Conor McBride's <a href="http://strictlypositive.org/diff.pdf">original paper</a> demonstrating that one-hole contexts are equivalent to derivatives.</li><li>Wikipedia's articles on <a href="http://en.wikipedia.org/wiki/Algebraic_type_theory">algebraic type theory</a> and <a href="http://en.wikipedia.org/wiki/Combinatorial_species">combinatorial species</a>, an equivalent way of defining types.</li><li>A series of two articles <a href="http://blog.lab49.com/archives/3011">(1)</a>, <a href="http://blog.lab49.com/archives/3027">(2)</a> on Lab49 about differentiating data structures.</li></ol>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com0tag:blogger.com,1999:blog-8888552847404585956.post-14045510647955147612012-07-22T16:54:00.000-07:002012-12-02T22:39:00.662-08:00TopCoder Arena Resizing Bug in xmonad/dwm/awesome/ratpoison (AKA The Java "Empty Grey Window" Bug)<h3>What happened:</h3><p>I recently switched to using <a href="http://en.wikipedia.org/wiki/Xmonad">xmonad</a> as my main window manager, and I immediately noticed some issues with resizing the TopCoder Arena. Specifically, the main content area stayed the same size no matter how I resized the window itself. If I expanded the window, everything outside the content area would be blank grey pixels. As I soon found out, I wasn't the only person to have encountered this issue before -- the same situation is described on the TopCoder forums <a href="http://apps.topcoder.com/forums/?module=Thread&threadID=581887&start=0&mc=4#835345">here</a> and <a href="http://apps.topcoder.com/forums/?module=Thread&threadID=687927&start=0&mc=3#1290858">here</a> (screenshot <a href="https://sites.google.com/site/hujiazhen/Screenshot.png">here</a>). 
</p> <h3>The reason:</h3><p>As it turns out, this bug applies to all Java applications running on many <a href="http://en.wikipedia.org/wiki/Re-parenting_window_manager">non-reparenting window managers</a> (including dwm, awesome, ratpoison, etc...). According to the <a href="https://wiki.archlinux.org/index.php/Xmonad#Problems_with_Java_applications">Arch wiki</a>, Java has a hardcoded list of non-reparenting window managers; if you use anything outside of this list, then strange things happen. </p> <h3>Common fixes:</h3><p><ul><li>If you're using OpenJDK, fixing the problem should be relatively simple. You simply have to set <code>_JAVA_AWT_WM_NONREPARENTING=1</code> in whichever shell you use and try logging out and logging back in. This method is independent of window manager, but I'm not sure if it works for Sun/Oracle Java. </li><li>You could try pretending to Java that you're using a supported non-reparenting window manager, such as LG3D. Tools such as <a href="http://xmonad.org/xmonad-docs/xmonad-contrib/XMonad-Hooks-SetWMName.html">SetWMName</a> (xmonad only) and <a href="http://tools.suckless.org/wmname">wmname</a> allow you to do this. </li></ul></p> <h3>Moar info:</h3><p><ul><li>As it turns out, there is another bug in xmonad which messes up window focus (using hotkeys, typing in text areas, etc...). A summary of the bug and some fixes can be found <a href="http://code.google.com/p/xmonad/issues/detail?id=177">here</a>. (Updated on 2012-12-02: Apparently this has been fixed by the latest commit to the DARCS version of XMonad.) 
</li><li>HaskellWiki's <a href="http://www.haskell.org/haskellwiki/Xmonad/Frequently_asked_questions#Problems_with_Java_applications.2C_Applet_java_console">FAQ on xmonad</a></li><li>ArchWiki's pages on <a href="https://wiki.archlinux.org/index.php/Xmonad#Problems_with_Java_applications">xmonad</a>, <a href="https://wiki.archlinux.org/index.php/Awesome#Fix_Java_.28GUI_appears_gray_only.29">awesome</a>, and <a href="https://wiki.archlinux.org/index.php/dwm#Fixing_misbehaving_Java_applications">dwm</a></li></ul></p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com1tag:blogger.com,1999:blog-8888552847404585956.post-64517162686347586352012-06-25T20:00:00.000-07:002012-07-19T17:35:17.662-07:00InterviewStreet: Going Office (AKA The Most Vital Arc Problem)<h3>Overview</h3><p><a href="https://www.interviewstreet.com/challenges/dashboard/#problem/4f6522ab2f425">Going Office</a> is definitely one of the tougher problems on InterviewStreet, but the main idea<a href="#goingoffice_footnote1">*</a> behind the problem is simple: given a weighted undirected graph \( G = ( V, E ) \) and two nodes \( s, t \in V \), we wish to answer \( Q \) queries of the following form: <blockquote>What is the shortest path from \( s \) to \( t \) given that edge \( e \in E \) is removed from the graph? </blockquote>(This type of problem is also known as the "most vital arc" problem -- see the first footnote below.) </p><p>Let \( N = | V | \) and \( M = | E | \) as usual. In this case, we are guaranteed that \( N, M, Q \leq 200000 \). </p> <h3>Prerequisites</h3><p>You should definitely know <a href="http://en.wikipedia.org/wiki/Dijkstra's_algorithm">Dijkstra's algorithm</a> before attempting to solve this problem. Also, knowing about segment trees with lazy propagation would be good, but is not required to understand the solution. 
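Since everything below is built on shortest-path distances from \( s \) and from \( t \), here is a standard heap-based Dijkstra sketch for reference (the adjacency-list representation is an assumption of mine):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src; adj maps u -> list of (v, weight) pairs."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry; skip it
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Running this once from \( s \) and once from \( t \) yields the \( d_s \) and \( d_t \) values used throughout the rest of the post.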
</p> <h3>Building Bridges</h3><p>First of all, it should be clear that naively running Dijkstra's isn't going to run in time, because it requires \( O( Q ( N + M ) \log N ) \) operations. In fact, the size of the input suggests that we should look for a solution not much slower than \( O( ( N + M + Q ) \log N ) \). What can we do then? We may first try to solve an easier problem: can we determine whether an edge \( e \), when removed, will increase the shortest path length between \( s \) and \( t \)? </p> <p><blockquote><b>Definitions:</b><ol><li>An edge is <strong>optimal</strong> if it is on a shortest path from \( s \) to \( t \). </li><li>An edge is a <strong>bridge</strong><a href="#goingoffice_footnote2">*</a> if it is on <em>all</em> shortest paths from \( s \) to \( t \). (Here we refer only to shortest paths in \( G \), without having removed any edges.) </li></ol></blockquote></p> <p>Let \( d_s( u ) \) (resp. \( d_t( u ) \)) be the shortest-path distance from \( s \) (resp. \( t \)) to \( u \), and let \( OPT \) be the shortest-path distance from \( s \) to \( t \). Then we can show the following properties: <blockquote><b>Lemmata:</b><ol><li>\( e = ( u, v ) \) is optimal if and only if \( d_s( u ) + length( e ) + d_t( v ) = OPT \) (for a suitable orientation of the undirected edge \( e \)). </li><li>\( e = ( u, v ) \) is a bridge if and only if it is optimal and for all other optimal \( e' = ( u', v' ) \), we have either \( d_s( v' ) \leq d_s( u ) \) or \( d_s( v ) \leq d_s( u' ) \). In other words, when we travel along an optimal path from \( s \) to \( t \), then between lengths \( d_s( u ) \) and \( d_s( v ) \), we <em>must</em> be traveling along \( e \). By convention we denote a bridge with the letter \( b \). </li><li>As a corollary of the above statement, \( e \) is a bridge if and only if, when removed from \( G \), the shortest path distance from \( s \) to \( t \) increases. </li></ol></blockquote>(Proofs are left to the reader.) These properties should give you some idea of how to find all the bridges in \( G \).
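Lemma 1 translates directly into code: given distance maps ds and dt (hypothetical names) and the shortest-path length opt, one linear pass collects the optimal edges, checking both orientations of each undirected edge:

```python
def optimal_edges(edges, ds, dt, opt):
    """Return the edges (u, v, w) lying on some shortest s-t path (Lemma 1)."""
    result = []
    for u, v, w in edges:
        # An undirected edge can be traversed in either direction.
        if ds[u] + w + dt[v] == opt or ds[v] + w + dt[u] == opt:
            result.append((u, v, w))
    return result
```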
If you still need some hints, an algorithm for computing bridges is included at the end of this post. </p> <h3>Islands</h3><p>Now we have a way of identifying which edges, when removed, affect the shortest path length. This still leaves one major question unanswered: given a bridge \( b \), how exactly do we determine the shortest path length from \( s \) to \( t \) once \( b \) is removed? Running Dijkstra's repeatedly is too slow, since we can easily construct examples in which <em>every</em> edge is a bridge. (Consider a graph consisting of only a path from \( s \) to \( t \).) </p> <p>Let \( K \) denote the number of bridges. Note that Lemma 2 above implies that we can uniquely order the bridges \( b_0, b_1, \ldots, b_{K-1} \) based on their relative distances from \( s \). Could we use this to help us? </p> <blockquote><b>Definition:</b> Let the \( i \)-th <b>island</b> be defined as the set of all vertices \( v \), such that there exists a shortest path from \( s \) to \( v \) using no more than \( i \) bridges. (If \( v \) is not connected to either \( s \) or \( t \), then we can ignore it -- it lies on some strange continent somewhere far away.) </blockquote> The intuition behind <em>islands</em> and <em>bridges</em> is this: bridges are the fastest way to "travel" between the islands when going from \( s \) to \( t \). If we remove an edge that is not a bridge, our shortest path is not affected. However, when we take out a bridge, we are forced to travel along another (possibly much longer) path. <blockquote><b>Proposition:</b> Consider the bridge \( b_i \), connecting the \( i \)-th and \( i + 1 \)-st islands. Let \( E_i \) be the set of all edges \( e \) connecting an island with index \( \leq i \) to an island with index \( > i \). 
Then the shortest path from \( s \) to \( t \) that bypasses \( b_i \) must have length \[ \min_{ e = ( u, v ) \in E_i } \{ d_s( u ) + length( e ) + d_t( v ) \} \] </blockquote> <blockquote><b>Proof:</b> Any path \( P \) from \( s \) to \( t \) bypassing \( b_i \) must include some edge in \( E_i \). Now consider some edge \( e = ( u, v ) \in E_i \) -- the shortest path from \( s \) to \( t \) that goes through \( e \) must have length \( d_s( u ) + length( e ) + d_t( v ) \). Lastly, by the way we have constructed the islands, we know that \( b_i \) is not on the shortest paths from \( s \) to \( u \) or from \( v \) to \( t \). </blockquote> <h3>A Data Structure for Bypass Length</h3><p>Let \( bypass( i ) \) denote the length of the shortest path that bypasses \( b_i \). The above observations suggest that we can run the following algorithm to compute \( bypass( i ) \): <blockquote>For each non-bridge edge \( e = ( u, v ) \) connecting islands \( i \) and \( j \), where \( i < j \): <blockquote>For each \( k \in \{ i, i + 1, \ldots, j - 1 \} \): <blockquote>\( bypass( k ) \leftarrow \min \{ bypass( k ), d_s( u ) + length( e ) + d_t( v ) \} \) </blockquote></blockquote></blockquote> However, there is a slight problem with this algorithm: it's too slow. Through some tricky analysis, you can show that in the worst case, we can construct a graph that will cause the above subroutine to take \( O( M^{3/2} ) \) operations. </p> <p>What we need is a data structure that supports the following two operations efficiently: <ol><li>\( update( i, j, val ) \), which runs \( bypass( k ) \leftarrow \min \{ bypass( k ), val \} \) for all \( k \in \{ i, i+1, \ldots, j-1 \} \) </li><li>\( query( i ) \), which returns \( bypass( i ) \) </li></ol></p> <p>One possible data structure that can support both operations in \( O( \log K ) \) time is called a <em>segment tree with lazy propagation</em> or a <em>segment tree with range update</em>. 
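In fact, because \( query \) here only ever inspects a single index, full lazy propagation is more machinery than we need: it suffices to store a "min tag" in each node of a complete binary tree, push a range update onto the \( O( \log K ) \) nodes governing the range, and answer a point query by taking the minimum of the tags along the root-to-leaf path. A sketch of that simpler variant (the struct and its names are mine):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

typedef long long ll;
const ll INF_LEN = (ll) 1e18;

struct BypassTree
{
    int N;
    vector<ll> tag;   // tag[x] = best value pushed onto the whole range governed by node x

    BypassTree( int n )
    {
        N = 1;
        while( N < n ) N *= 2;
        tag.assign( 2 * N, INF_LEN );
    }

    // bypass( k ) <- min( bypass( k ), val ) for all k in { i, i+1, ..., j-1 }
    void update( int i, int j, ll val )
    {
        for( i += N, j += N - 1; i <= j; i = ( i + 1 ) / 2, j = ( j - 1 ) / 2 )
        {
            if( i % 2 == 1 ) tag[i] = min( tag[i], val );
            if( j % 2 == 0 ) tag[j] = min( tag[j], val );
        }
    }

    // returns bypass( i )
    ll query( int i )
    {
        ll ans = INF_LEN;
        for( int x = i + N; x; x /= 2 ) ans = min( ans, tag[x] );
        return ans;
    }
};
```

Both operations touch \( O( \log K ) \) tree nodes, which is all the bypass computation needs.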
There are several tutorials online about this type of tree, and it's a good data structure to be familiar with. (The main problem right now is that the structure is rather complicated and the online tutorials are not very good. I may do an article on it some time, if there is sufficient demand.) </p> <p>However, if you are lazy, there is another data structure called the <a href="http://bababadalgharaghtakamminarronnkonnbro.blogspot.com/2012/06/kempe-tree-data-structure-for.html">Kempe tree</a>, which is a bit simpler and can support the same sort of queries, also using \( O( \log K ) \) operations. Check the link for more information. </p> <h3>Putting It All Together</h3><p>The previous observations suggest the following algorithm for solving the problem: <ol><li>Compute \( d_s \) and \( d_t \) using Dijkstra's algorithm. </li><li>Find the bridges: <ol><li>First find all the optimal edges by iterating over all edges \( e = ( u, v ) \), and storing the edges such that \( d_s( u ) + length( e ) + d_t( v ) = OPT \).</li><li>Sort all the optimal edges by \( d_s( u ) \) and \( d_s( v ) \). Use the criterion from above to find the bridges.</li></ol></li><li>Find the islands. <ol><li>One way (the hard way) is to run a modified version of Dijkstra.</li><li>A smarter and shorter alternative is to just run multiple depth-first searches from \( s \) and from the far end of each bridge. It's up to the reader to figure out how to do so.</li></ol></li><li>Find the bypassing paths. <ol><li>Initialise your favourite range-update data structure.</li><li>Iterate over all the non-bridge edges. If \( e = ( u, v ) \) is an edge crossing from island \( i \) to island \( j > i \), then range-update \( bypass(i), bypass(i+1), \ldots, bypass(j-1) \) such that they are no more than \( d_s( u ) + length( e ) + d_t( v ) \).</li></ol></li><li>Process all the queries. If the edge in question is not a bridge, then we simply return the optimal length. 
Otherwise, we query the range-update data structure to find the best bypass length. </li></ol>All of the above steps require \( O( R \log R ) \) operations, where \( R = \max\{ N, M, Q \} \), so the algorithm has complexity \( O( R \log R ) \). </p> <p>The solution took me around 150 lines to implement with a decent amount of spacing and a relatively clean coding style. I'm somewhat suspicious, though, that I've overcomplicated the solution, and that a nicer solution exists. Message me or comment here if you have any other alternatives. </p> <h3>Footnotes</h3><ol><li><a name="goingoffice_footnote1"></a>See <a href="http://www.sciencedirect.com/science/article/pii/0167637789900655">this paper</a> for an example of previous work on the subject. </li><li><a name="goingoffice_footnote2"></a>See also <a href="http://en.wikipedia.org/wiki/Bridge_(graph_theory)">this page</a>. </li></ol>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com8tag:blogger.com,1999:blog-8888552847404585956.post-80063231496420632392012-06-25T19:18:00.000-07:002012-07-06T07:00:41.221-07:00The Kempe Tree: A Simple Data Structure for Supporting Range Updates or Queries<h3>Introduction</h3><p>This post describes two variations on a tree structure that I've used in some programming problems for solving variations of the range query/update problem. I haven't seen anyone else use it before, so for the moment I'm going to call it the <em>Kempe tree</em>, named after my great-grandfather who was also named Richard Kempe. </p> <p>Suppose we are given a list of data \( a_0, a_1, \ldots, a_{n-1} \).
In general terms, a <em>range update</em> data structure supports the following type of operation: \[ update( i, j, x ): a_k \leftarrow x \text{ for all } k \in \{ i, i+1, \ldots, j \} \] Similarly, a <em>range query</em> data structure supports the following type of operation: \[ query( i, j ) = f( a_i, a_{i+1}, \ldots, a_{j} ) \] (Here \( f \) is taken to be some function we can aggregate over a large number of values, like \( \min \), \( + \), etc.) Both of these operations are expected to be fast: certainly faster than the naive \( O( n ) \) loop, and usually \( O( \log n ) \). </p> <p>As an example, <a href="http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=lowestCommonAncestor">segment trees</a> and <a href="http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=binaryIndexedTrees">Fenwick trees</a> (or binary-indexed trees) are range query data structures. Segment trees with lazy propagation support both range queries and range updates. </p> <p>The Kempe tree is a data structure that supports <em>either</em> fast range updates and single-element queries, <em>or</em> fast range queries and single-element updates, <em>but not both simultaneously</em>. (Okay, fine, it can support both, provided one is \( O( \log^2 n ) \). But more on that later.) Why, then, should I use a Kempe tree, you ask? The main reason is that it uses numerical tricks to make it easier to code, the same reason why the Fenwick tree is sometimes used instead of the segment tree. (This also means that, for all intents and purposes, Kempe trees will never be used outside of competition programming.) </p> <h3>Range Traversals</h3><p>The Kempe tree, like the Fenwick tree, is an array-based tree. Suppose we have \( n \) data entries, <code>a[0], a[1], ..., a[n-1]</code>; then we require an array of size \( N + n \), where \( N \) is the smallest power of \( 2 \) that is \( \geq n \). Let <code>tree[i]</code> denote this array.
The entries of <code>tree[i]</code> are allocated as follows: <ul><li>Elements <code>a[0], a[1], ..., a[n-1]</code> are stored in entries <code>tree[N], tree[N+1], ..., tree[N+n-1]</code>, respectively. </li><li>The parent node for any entry <code>tree[i]</code> is <code>tree[i/2]</code>. </li><li>Likewise, a non-leaf node <code>tree[i]</code> has children <code>tree[2*i]</code> and <code>tree[2*i+1]</code>. </li><li>The root of the tree is <code>tree[1]</code>. </li></ul>See the following image for a description of the mapping. Numbers in the tree nodes denote position in the array <code>tree[i]</code>. </p> <div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-yqSUQXxpLKY/T-kVVLPFBLI/AAAAAAAAACY/f-iCQOwm6kc/s1600/tree.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="250" width="400" src="http://1.bp.blogspot.com/-yqSUQXxpLKY/T-kVVLPFBLI/AAAAAAAAACY/f-iCQOwm6kc/s400/tree.png" /></a></div> <p>We say a tree node <strong>governs</strong> the array entries that are its descendants. For some range <code>a[i], a[i+1], ..., a[j]</code>, the following algorithm finds the minimal set of tree nodes <code>tree[x[0]], tree[x[1]], ..., tree[x[m-1]]</code> that govern <code>a[i], a[i+1], ..., a[j]</code>: </p> <pre><br />vector<int> find_governing_nodes( int i, int j )<br />{<br /> vector<int> ans;<br /> i += N, j += N;<br /> while( i <= j )<br /> {<br /> if( i % 2 == 1 ) ans.push_back( i );<br /> if( j % 2 == 0 ) ans.push_back( j );<br /> i = ( i + 1 ) / 2;<br /> j = ( j - 1 ) / 2;<br /> }<br /> return ans;<br />}<br /></pre> <p>A sketch of the proof for the correctness of the algorithm is the following. We start with <code>i</code> and <code>j</code> at the ends of the interval we are interested in. We process the interval by moving <code>i</code> and <code>j</code> up the tree, and by 'pulling' them together. 
If <code>i</code> or <code>j</code> would jump out of its current subtree during the next move, we first add it to the list of governing nodes. The following image illustrates the process in action: </p> <div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-xA9yxsPYnWQ/T-kVFmbWdnI/AAAAAAAAACI/qCWmB6Quc6E/s1600/traversal.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="250" width="400" src="http://4.bp.blogspot.com/-xA9yxsPYnWQ/T-kVFmbWdnI/AAAAAAAAACI/qCWmB6Quc6E/s400/traversal.png" /></a></div> <ol><li>We wish to query the range \( [1, 5] \). <code>i</code> and <code>j</code> are initialised to <code>9</code> and <code>13</code>, respectively.</li><li>Since <code>9</code> would jump out of its subtree during the next move, we add <code>9</code> to the list of governing nodes.</li><li>We move <code>i</code> to <code>5</code> and <code>j</code> to <code>6</code>. Both <code>i</code> and <code>j</code> will jump out of their respective ranges during the next move, so we add both <code>5</code> and <code>6</code> to the list of governing nodes.</li><li><code>i</code> gets moved to <code>3</code>, and <code>j</code> to <code>2</code>. This is an invalid range, so we break and return the list <code>[9, 5, 6]</code>.</li></ol> <h3>Range Queries and Singleton Updates</h3><p>We now present an example of a tree structure that can handle updates to single elements and range queries. The function we will try to query is the sum function. </p> <pre><br />// define MAX to be some really big number<br /><br />int tree[MAX], N;<br /><br />// initialises a tree for n data entries<br />void init( int n )<br />{<br /> N = 1;<br /> while( N < n ) N *= 2;<br /> for( int i = 1; i < N + n; i++ ) tree[i] = 0;<br />}<br /><br />// compute the sum a[i] + a[i+1] + ... + a[j]<br />int range_sum( int i, int j )<br />{<br /> int ans = 0;<br /> for( i += N, j += N; i <= j; i = ( i + 1 ) / 2, j = ( j - 1 ) / 2 )<br /> {<br /> if( i % 2 == 1 ) ans += tree[i];<br /> if( j % 2 == 0 ) ans += tree[j];<br /> }<br /> return ans;<br />}<br /><br />// set a[i] = val<br />void update( int i, int val )<br />{<br /> int diff = val - tree[i+N];<br /> for( int j = i + N; j; j /= 2 ) tree[j] += diff;<br />}<br /></pre> <h3>Range Updates and Singleton Queries</h3><p>The following example will allow us to increment a range, while allowing us to query the result on a singleton value. </p> <pre><br />// define MAX to be a really big number<br /><br />int tree[MAX], N;<br /><br />// initialises a tree for n data entries<br />void init( int n )<br />{<br /> N = 1;<br /> while( N < n ) N *= 2;<br /> for( int i = 1; i < N + n; i++ ) tree[i] = 0;<br />}<br /><br />// increments a[k] by val for all k between i and j, inclusive<br />void range_increment( int i, int j, int val )<br />{<br /> for( i += N, j += N; i <= j; i = ( i + 1 ) / 2, j = ( j - 1 ) / 2 )<br /> {<br /> if( i % 2 == 1 ) tree[i] += val;<br /> if( j % 2 == 0 ) tree[j] += val;<br /> }<br />}<br /><br />// compute a[i]<br />int query( int i )<br />{<br /> int ans = 0;<br /> for( int j = i + N; j; j /= 2 ) ans += tree[j];<br /> return ans;<br />}<br /></pre> <h3>Final Remarks</h3><p>We can easily extend the tree to support many types of functions, such as the product function, <code>min</code>, <code>max</code>, set union, set intersection, and so forth. </p> <p>Unfortunately, under the current scheme, the tree cannot simultaneously support range updates and queries. There is a way to support both operations, provided that one operation takes \( O( \log n ) \) and the other takes \( O( \log^2 n ) \). (The proof of this is left to the reader.) Thus, Kempe trees do not provide as much functionality as segment trees, but are generally somewhat easier to code.
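To experiment with the structure, here is the range-increment variant repackaged as a small self-contained struct (the struct wrapper and its names are mine; the underlying logic is exactly the code above):

```cpp
#include <vector>
using namespace std;

struct KempeTree   // range-increment / point-query variant
{
    int N;
    vector<int> tree;

    KempeTree( int n )
    {
        N = 1;
        while( N < n ) N *= 2;
        tree.assign( 2 * N, 0 );
    }

    // a[k] += val for all k between i and j, inclusive
    void range_increment( int i, int j, int val )
    {
        for( i += N, j += N; i <= j; i = ( i + 1 ) / 2, j = ( j - 1 ) / 2 )
        {
            if( i % 2 == 1 ) tree[i] += val;
            if( j % 2 == 0 ) tree[j] += val;
        }
    }

    // returns a[i]
    int query( int i )
    {
        int ans = 0;
        for( int x = i + N; x; x /= 2 ) ans += tree[x];
        return ans;
    }
};
```

For example, on eight entries, after <code>range_increment( 1, 5, 3 )</code> and <code>range_increment( 4, 6, 2 )</code>, <code>query( 4 )</code> returns \( 5 \) while <code>query( 0 )</code> returns \( 0 \).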
</p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com3tag:blogger.com,1999:blog-8888552847404585956.post-42586789112259585202012-05-31T15:45:00.000-07:002012-06-08T20:28:40.418-07:00Numbers Are Friends, Not Food: Unfriendly Numbers from InterviewStreet<p>Today we're going to go over <a href="https://www.interviewstreet.com/challenges/dashboard/#problem/4f7272a8b9d15">Unfriendly Numbers</a>, an interesting InterviewStreet problem involving a little number theory. </p> <br/> <h1>Review</h1> <h4>Prerequisites</h4><p>This article assumes that you are familiar with the Euclidean algorithm, as well as the naive \( O(\sqrt{N}) \) algorithms for integer factorisation and prime sieving. </p> <h4>Bounding the number of prime factors</h4><p>Suppose we are given some number \(N\) between \(1\) and \(MAX\), inclusive, and we want to know the maximum number of distinct prime factors that \(N\) can have. </p> <p>We define the function \(primorial(K)\) (see <a href="http://en.wikipedia.org/wiki/Primorial">here</a>) as the product of the first \(K\) primes. It should be clear that \(primorial(K)\) is the smallest number with exactly \(K\) distinct prime factors. Thus, the maximum number of distinct prime factors that \(N\) can have is simply the largest value of \(K\) such that \(primorial(K) \leq MAX\). </p> <p>Since \(primorial(K)\) grows more quickly than \(K!\), it should be clear that the bound on the number of unique prime factors grows extremely slowly. I.e., we can expect very large numbers to have relatively few prime factors. </p> <h4>Bounding the total number of divisors</h4><p>Now let us look at a harder problem: suppose we are given a number \(N\) between \(1\) and \(MAX\), and we want to determine the maximum number of divisors that \(N\) can have. Define the <em>divisor function</em>, \(d(N)\), as the number of divisors of \(N\).
According to <a href="http://en.wikipedia.org/wiki/Divisor_function#Approximate_growth_rate">this Wikipedia article</a>, the divisor function grows more slowly than \( N^\epsilon \) for all \( \epsilon > 0 \). </p> <p>However, in programming competitions, this may not be a good enough estimate: more specifically, can we actually compute \( \max\{ d(N) \} \) for all \(N\) in our range? Let us begin by introducing some more terminology: a <a href="http://en.wikipedia.org/wiki/Highly_composite_number">highly composite</a> number is an integer which has more factors than any smaller integer. We can then restate the problem as computing the number of factors of \(M\), where \(M\) is the largest highly composite number between \(1\) and \(MAX\). </p> <p>As it turns out, there is a fast algorithm for computing the largest highly composite number. We begin by examining a few lemmata. Here we adopt the convention that \(p_i\) refers to the \(i\)-th prime. I.e. \( p_1 = 2 \), \( p_2 = 3 \), \( p_3 = 5 \), and so forth. <ol><li><blockquote><p><b>Lemma 1:</b> Suppose \(M\) is an integer with factorisation \( p_1^{e_1} p_2^{e_2} p_3^{e_3} \ldots \). Then $$ d(M) = \prod_i ( e_i + 1 ) $$ </p><p><b>Proof:</b> If \(K\) is a divisor of \(M\) with factorisation \( p_1^{f_1} p_2^{f_2} p_3^{f_3} \ldots \), then for all \(i\), we have \( 0 \leq f_i \leq e_i \). There are thus \( e_i + 1 \) possibilities for each \( f_i \), so a total of \( \prod_i ( e_i + 1 ) \) possible factors. </p></blockquote></li><li><blockquote><p><a name="highlycomposite"><b>Lemma 2:</b></a> Suppose \(M\) is a highly composite number with factorisation \( p_1^{e_1} p_2^{e_2} p_3^{e_3} \ldots \). Then \( e_i \geq e_{i+1} \) for all \( i \). </p><p><b>Proof:</b> Suppose not: i.e. \( e_i < e_{i+1} \) for some \(i\). Swap the exponents and let \(K = p_1^{e_1} p_2^{e_2} \ldots p_i^{e_{i+1}} p_{i+1}^{e_i} \ldots \). Then \( K < M \), but by our previous lemma, \( d(K) = d(M) \). Thus, \(M\) is not highly composite, contradicting our assumption.
</p></blockquote></li></ol></p> <p>By <a href="#highlycomposite">Lemma 2</a>, we know that all highly composite numbers, when factored, have non-increasing sequences of exponents. Could we, then, take a dual approach and search over all non-increasing sequences of exponents to find highly composite numbers? The following recursive algorithm is an implementation of that idea in C++: <blockquote><pre><br />// algorithm for finding highly composite numbers<br /><br />// rec returns the pair ( best number found, divisor-count factor contributed<br />// by the primes from position index onward ), searching over non-increasing<br />// exponent sequences capped at max_pow<br />pair< long long, int > rec( long long bound, int index, long long prod, int max_pow )<br />{<br /> long long best = prod;<br /> int num_div = 1;<br /> prod *= primes[index];<br /> for( int i = 1; i <= max_pow && prod <= bound; i++, prod *= primes[index] )<br /> {<br /> pair< long long, int > next = rec( bound, index + 1, prod, i );<br /> if( ( i + 1 ) * next.second > num_div )<br /> {<br /> best = next.first;<br /> num_div = ( i + 1 ) * next.second;<br /> }<br /> }<br /> return make_pair( best, num_div );<br />}<br /><br />pair< long long, int > largest_highly_composite( long long bound )<br />{<br /> return rec( bound, 0, 1, INF );<br />}<br /></pre></blockquote>Here, the first term returned, \(M\), is the largest highly composite number less than or equal to <code>bound</code>, and the second term returned is \(d(M)\). The array <code>primes</code> stores a pre-generated array of primes. </p> <p>The recursion depth is bounded by the number of unique prime factors of any number less than or equal to <code>bound</code>, which we determined earlier was very small. Furthermore, the branching factor is bounded by the logarithm of <code>bound</code>, as well as some other factors. In practice, the algorithm runs almost instantaneously for any long long bound, though there are some overflow issues with the larger numbers.
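To run the routine stand-alone, it only needs the <code>primes</code> array; here is a self-contained version (the naive trial-division sieve and the constant choices are my own scaffolding around the code above):

```cpp
#include <utility>
#include <vector>
using namespace std;

const int INF = 1000000000;

vector<int> primes;

// trial-division sieve; 100 primes is far more than any long long bound requires
void init_primes()
{
    primes.clear();
    for( int n = 2; (int) primes.size() < 100; n++ )
    {
        bool is_prime = true;
        for( int p = 2; p * p <= n; p++ )
            if( n % p == 0 ) { is_prime = false; break; }
        if( is_prime ) primes.push_back( n );
    }
}

// identical to the recursion above: returns ( best number, divisor-count
// factor contributed by the primes from position index onward )
pair< long long, int > rec( long long bound, int index, long long prod, int max_pow )
{
    long long best = prod;
    int num_div = 1;
    prod *= primes[index];
    for( int i = 1; i <= max_pow && prod <= bound; i++, prod *= primes[index] )
    {
        pair< long long, int > next = rec( bound, index + 1, prod, i );
        if( ( i + 1 ) * next.second > num_div )
        {
            best = next.first;
            num_div = ( i + 1 ) * next.second;
        }
    }
    return make_pair( best, num_div );
}

pair< long long, int > largest_highly_composite( long long bound )
{
    init_primes();
    return rec( bound, 0, 1, INF );
}
```

For instance, <code>largest_highly_composite( 10000000000000LL )</code> reproduces the pair \( ( 9316358251200, 10752 ) \) quoted in the next section.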
</p> <br/> <h1>InterviewStreet Challenge - Unfriendly Numbers</h1><a href="https://www.interviewstreet.com/challenges/dashboard/#problem/4f7272a8b9d15">Problem Link</a> <p>The problem statement is simple enough: count the number of factors of \(K\) that are not factors of any number in \(B_1, B_2, B_3, \ldots, B_N\). We are given the following constraints: <ul><li>\(1 \leq N \leq 10^6\)</li><li>\(1 \leq K \leq 10^{13}\)</li><li>\(1 \leq B_i \leq 10^{18}\) for all \(i\)</li></ul></p> <p>First, as always, we consider the naive solution: simply generate all the factors of \(K\) and for each factor \(F_i\), we check whether or not it is a factor of each \(B_i\). Unfortunately, there can be up to a million \(B_i\), so checking each \(F_i\) is going to be relatively slow. Let us try using our previous code to estimate the maximum number of \(F_i\). Plugging in <code>largest_highly_composite(1e13)</code> reveals that the largest highly composite number in the range is \(9316358251200\), which has \(10752\) factors. This is a small number, but certainly not small enough to check every pair of \(F_i\) and \(B_j\). We have to try something smarter. </p> <h4>A digression: numbers as vectors</h4><p>There are many ways to see the next step of the solution; in my case, I relied on geometric intuition, though I'm sure other people did it differently. We begin by considering each integer \( N = p_1^{e_1} p_2^{e_2} \ldots \) as a vector, where the \(i\)-th entry of the vector is \( e_i \). For example, consider numbers that are products of powers of \(2\) and \(3\) only. 
The vectors corresponding to these numbers have only two nonzero components, and thus can be plotted on the coordinate plane: <div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-guEIyM0Phsk/T8aToIoN_nI/AAAAAAAAABA/RRqhsSIGcro/s1600/numbers_as_vectors.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="320" width="320" src="http://2.bp.blogspot.com/-guEIyM0Phsk/T8aToIoN_nI/AAAAAAAAABA/RRqhsSIGcro/s320/numbers_as_vectors.png" /></a></div>Here \(3\) is plotted at \( (0,1) \) indicating that \( 3 \) has the factorisation \( 2^0 3^1 \). Likewise, \( 12 = 2^2 3^1 \) is plotted at \( (2,1) \), and so on. If we include more prime factors, then we'd need more dimensions to represent the numbers. Unfortunately, I'm not too good at drawing things in 12-dimensional space, so I'll leave it as an exercise for you, the reader. </p> <p>Using this geometric representation, saying that \(a\) is a divisor of \(b\) is equivalent to saying that \(a\) is contained in the box with one corner at \(0\), and another corner at \(b\). That is, if \(F(n)\) is the set of factors of \(n\) then \(F(n)\) appears on the grid as the box bounded by \(0\) and \(n\). Similarly, the greatest common divisor of \(a\) and \(b\) is the `largest' point which appears in the boxes \(F(a)\) and \(F(b)\), and the least common multiple is the `smallest' point whose box contains \(a\) and \(b\). </p> <p>Now let's consider the problem statement again: we need to find the number of points in \(K\)'s box that are completely outside of the box of any \(B_i\). This is still a hard problem, but now we have a way of visualising it. 
Let us return to the diagram; suppose \( K = 12 \) and \( B_1 = 18 \): <div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-1RXYE3hjOIQ/T8dlkf-k7pI/AAAAAAAAABg/CWZThx9031E/s1600/numbers_as_vectors2.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="320" width="320" src="http://3.bp.blogspot.com/-1RXYE3hjOIQ/T8dlkf-k7pI/AAAAAAAAABg/CWZThx9031E/s320/numbers_as_vectors2.png" /></a></div>We want to count the number of points in \(F(12)\) (the green region) that are not in \(F(18)\) (the blue region). In this case, the valid points are \( 12 \) and \( 4 \), and the invalid points are \(1\), \(2\), \(3\), \(6\), \(9\), and \(18\). </p> <p>Note, however, that we didn't really need to consider <em>all</em> the points in \(F(18)\), as some of them are outside of \( F(12) \) already. Instead, it would have been equivalent to count the points inside \(F(12)\) which were outside of \( F(18) \cap F(12) = F(6) \). More generally, this means that we need to count the points in \(F(K)\) that are not in \( F(K) \cap F(B_i) \) for any \(i\). How does this help us? Well, we showed earlier that \(F(K)\) contained around \(10000\) points. Thus, all the one million \(F(B_i)\) get mapped into at most \(10000\) distinct \( F(K) \cap F(B_i) \). </p> <p>By this point, you, the astute reader, may have already realised that \( F(K) \cap F(B_i) \) is simply \( F( \gcd( K, B_i ) ) \). This suggests the following algorithm: <ol><li>First, generate all the factors of \(K\). This can be done naively in \(O(\sqrt{K})\) time.</li><li>For each \( i \), compute \( \gcd( K, B_i ) \) and store it in a set called \( bad \). This takes \( O(N \log K ) \) for \( N \) bad numbers.</li><li>Lastly, we compute the answer by finding the number of factors in \( F(K) \) that are factors of no numbers in \( bad \).
Since both sets contain at most \( |F(K)| \) numbers, this takes \( O( |F(K)|^2 ) \) time.</li></ol></p> <p>And there you have it! As usual, being able to solve problems like these requires a solid understanding of the background material, as well as a little bit of insight or elbow grease. A little bit of number-theoretical intuition helps a great deal in programming contests, and as always, the best way to gain said intuition is just to solve more problems. </p> <br/> <h1>Homework</h1> <p>Originally I intended to also do a writeup of another excellent problem, <a href="http://community.topcoder.com/stat?c=problem_statement&pm=11787&rd=14727">EllysNumbers</a>, the Division 1, Level 2 problem from <a href="http://community.topcoder.com/stat?c=round_overview&rd=14727">TopCoder SRM 534</a>. However, I am lazy and the people at TopCoder have already written up a good tutorial <a href="http://apps.topcoder.com/wiki/display/tc/SRM+534">here</a>. Try solving it yourself! </p>Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com3tag:blogger.com,1999:blog-8888552847404585956.post-53901389461900127132012-05-22T03:07:00.000-07:002012-06-25T08:20:16.056-07:00"Мне до лампочки": Cold War Propaganda, Language Learning, and Light BulbsIn learning a language, it's often the case that we encounter new idioms or expressions that make little sense without historical background or further context. (E.g. <a href="http://www.snopes.com/language/phrases/farm.asp">this expression</a>, and for a more extreme example, <a href="http://en.wikipedia.org/wiki/Darmok">this episode</a> of Star Trek: TNG.) Such was the case when I was learning Russian a few years ago, and I happened upon the expression "(это) мне до лампочки", lit. "to me, (it's) up to the light bulb". Figuratively, this is taken to mean that the person does not care much about the thing to which he is referring.
This idiom is apparently relatively new, taking hold within the past century or so, and two of my professors have given me rather differing information about the etymology of this expression. <ol><li>The first professor, an American, explained to me that during one of the many Soviet industrialisation programmes, there was a drive to provide modern lighting systems to the general populace. However, because of inadequacies in the Soviet system, this programme took far longer to complete than originally envisioned, so the phrase "мне до лампочки" entered Russian vernacular as an expression describing something pointless or worthless.</li><li>In a later encounter, a second professor, a Russian, refuted the first professor's claim as American propaganda leftover from the Cold War. According to him, the true origin of the phrase is the following: In earlier times, miners often fastened mining lamps to themselves via straps or harnesses. In Russia, one method of wearing the lamp involved fastening the lamp near one's crotch. Thus, "мне до лампочки" actually arose as a miners' euphemism for a more vulgar phrase, later being adopted by the Russian-speaking population as a whole.</li></ol> While both explanations are plausible, a quick Google search for "мне до лампочки этимология [etymology]" lends more support to the latter theory. This naturally leads to the question: supposing the first explanation was indeed false, how and why did it arise? This may be hard to determine without much more research, but it seems clear that even in fields such as etymology, political and cultural biases can surface.Richard Kempehttps://plus.google.com/103113120090509801305noreply@blogger.com0