A brief history of ACM-SPPC T-shirts, and more

Yesterday was the South-Pacific region event for the annual ACM ICPC. You can read more about the contest in general at http://cm.baylor.edu/welcome.icpc and about our regional at http://sppcontest.org/.

Every year, among the IBM-branded schwag that the members of competing teams receive for their five hours of keyboard-mashing, swearing, headdesking and facepalming, is a t-shirt. Shirts are also given to the organisers and are generally a different colour. This is an attempt to chronicle the various colours of competitor shirt given out over recent years.

2004 and before: I have only a vague recollection of what colour one of these shirts was. Back in 2005, prior to driving up to Launceston (because that’s where the site was) we stopped at Matt Armsby’s place (since he was a competitor back then) to grab some last-minute things. While rummaging around we stumbled across one of his old shirts, which from memory was an odd combination of light blue with yellow writing. Ask him about it sometime.

2005: My first ACM shirt, still in one piece. The main body is a nice navy blue, though mine has faded noticeably, with grey sleeves. There is an OK photo of Matt accepting a certificate in 2006 wearing the 2005 shirt at the site history page. (I’m also there but the shirt isn’t as visible.) My actual shirt is hanging on the line right now so I won’t bother putting up another photo of it here.

2006: This shirt marked the start of the trend towards lighter combinations. A black-sleeved, yet mottled light-grey middle shirt. It makes me feel soft and slightly depressed.

ACM-SPPC Shirt 2006

2007: This shirt completed the colour transition: a white torso with dark navy blue sleeves. Oddly, this is the only ACM shirt of mine that has developed a small hole; the others are still perfectly intact.

ACM-SPPC Shirt 2007

2008: Once on a theme, it is sometimes difficult to get off it. The 2008 contestant shirt was the same as the 2007 shirt, but with bright, cheery red for the sleeves and collar.

ACM-SPPC Shirt 2008

2009: It is perhaps a measure of the feebleness of the economy in these troubling times that this year’s shirt is completely white, except for the print. This shirt marks the end of the transition to lighter styles; the only remaining step is to remove the print. I shan’t post a photo; suffice it to say that the shirt really is just a pure white t-shirt with the print as above.

So about this year’s problem set.

The nice thing about the contest is that it is over, so I and others can now blog about it. Imagine how much less rich your life would be if you couldn’t read how we solved the problems right now and instead had to wait for it! Chris beat me to it, but that’s OK because we were on the same team. My contribution was the solutions to problems B and E. My team placed 3rd on the day and, by Chris’s reckoning, that looks to be stable under rejudging.

Problem B: Four submissions, 297th minute (that’s 3 minutes before the close of the competition). What was this odd problem? Why did it have a bug that was only found at the last moment? What circumstances led to this supposedly easy problem being an exercise in facepalming, headdesking, swearing, groaning and, very ultimately, succeeding?

It reads as a pretty easy input-munging problem. Given some date (in a fairly easy-to-parse format) for a chess tournament and a list of participants’ names, birthdays and scores on the day, bucketise them into year groups and print the person with the highest score in each group. Oh, and don’t forget the leap-year thing where people born on 29 Feb are considered a year older on 1 Mar in non-leap years.
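
For flavour, the leap-year rule can be sketched like this (my own Python, not the contest code; the function name and interface are my assumptions):

```python
from datetime import date

def age_on(birthday: date, today: date) -> int:
    """Whole years elapsed from birthday to today, for year-group bucketing.

    Per the problem statement, a 29 Feb birthday is treated as 1 Mar in
    non-leap years, so those people "age up" on 1 Mar.
    """
    b_month, b_day = birthday.month, birthday.day
    if (b_month, b_day) == (2, 29):
        try:
            date(today.year, 2, 29)      # does 29 Feb exist this year?
        except ValueError:
            b_month, b_day = 3, 1        # non-leap year: birthday shifts to 1 Mar
    age = today.year - birthday.year
    if (today.month, today.day) < (b_month, b_day):
        age -= 1                         # birthday hasn't happened yet this year
    return age
```

So someone born on 29 Feb 2000 is still 8 on 28 Feb 2009 but turns 9 on 1 Mar 2009.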

So it didn’t take too long to get working code. Ah sorry, “working.” But it wasn’t until afterwards that we knew the early code wasn’t working.

What happened was this: I am cursed. I always get the problem that is not totally specified, or is a bit ambiguous, and also has the dodgy judging data that has to be fixed during the comp. This has happened consistently during practices and competitions, and I knew it was going to happen again, and it did. :‘(

When the chief judge realises there is a test-data issue, everybody from Perth to Dunedin is held up while the judges scramble to fix it, test it, push it out and rejudge. But before everybody is told about the issue, it is possible to tell that the annual fudgeup has occurred by looking at the leaderboard, and from the complete lack of successful submissions to problem B we inferred that there was an issue. This was confirmed later on in an official announcement.

So thinking that I had stumbled into whatever pothole everybody else in the leaderboard had seemingly stumbled into, we put debugging the submission on hold while we worked on other things. This wasn’t too bad an approach: for instance it let me solve E, and the others solved a few other problems in that time.

Then things took another turn: they fixed the data and other teams started being more successful with their submissions to B. Mine wasn’t rejudged as successful. Hmmm.

So we politely let Simon hack away until the last five minutes of the comp on his (unfortunately TLE) solution to problem H (it only needed to be a bit faster) and in the meantime I came up with every possible fix to my code I could think of. There were two. The idea was to submit a lot in the last five minutes of the comp—what could we possibly lose by firing off as many submissions to B as possible at this point, just to be more likely to be rejudged correct later on? It was already close to 300 minutes penalty for the submission if it was correct anyway.

It so happened that the second fix I came up with—a modified calculation that should have been identical in every way to the last one—fixed it and we got B from Wrong Answer to Accepted in the 297th minute. Yes! I broke the curse!

I’m planning on seeing a specialist about the nerve damage.

Problem E: Rewards for Maths indeed.

Given the (real, integer) coefficients of a degree-\(M\) polynomial, determine whether or not all \(M\) (complex) factors, and hence all \(M\) complex roots, are distinct.

We had a look at this at first, threw our hands up in the air and said “Meh! It’s maths!” Judging by the leaderboard, most teams must have done the same. “But hang on,” said Simon (these aren’t really his words), “all we need is some reliable root-finding method; then we can factor out each root with synthetic division to get a polynomial of degree \(M - 1\), and repeat on the smaller one until we have found all the roots.” I would expect a few teams to have realised this as well.
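
The deflation step is just synthetic division, and it really is only a few lines. A Python sketch (names are mine, not our contest code):

```python
def deflate(coeffs, root):
    """Synthetic division: divide the polynomial by (x - root).

    coeffs are listed highest-degree first; returns the quotient, whose
    degree is one less. The remainder (dropped here) should be ~0 if
    root really is a root of the polynomial.
    """
    quotient = [coeffs[0]]
    for c in coeffs[1:-1]:
        quotient.append(c + root * quotient[-1])
    return quotient
```

For example, deflating \(x^2 - 3x + 2\) by the root 1 leaves \(x - 2\).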

A bit of background: Simon and I are maths honours students this year. Last year I did a numerical methods unit with the head of Maths as lecturer; I think Simon had done it too. Either way, one way to solve it was obvious at that point, even if in a numeric-approximation fashion rather than a whole-integer programming fashion. The trick was to have a fairly reliable method for finding roots in the complex plane. The bisection method won’t help there. Newton’s method is famous for not converging. Who you gonna call?

I pulled out Laguerre’s method and bashed it out in under 15 minutes, only to be held up by a boneheaded mistake in the loop that checked for distinct roots. I fixed it. It worked. It was an easy win. And it gave us one more problem that everybody else in the top 20 at that point hadn’t even tried to solve.
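
For the curious, Laguerre’s iteration really is compact enough to bash out quickly. A rough Python sketch of a single-root solver (my own illustration, not our actual submission):

```python
import cmath

def laguerre_root(coeffs, x=0j, tol=1e-12, max_iter=200):
    """One complex root of a polynomial via Laguerre's method.

    coeffs are listed highest-degree first. Laguerre's method converges
    to some root from almost any starting point, which is what makes it
    attractive over Newton's method here.
    """
    n = len(coeffs) - 1
    for _ in range(max_iter):
        # One Horner pass gives p(x), p'(x) and p''(x)/2 simultaneously.
        p = dp = ddp = 0j
        for c in coeffs:
            ddp = ddp * x + dp
            dp = dp * x + p
            p = p * x + c
        if abs(p) < tol:
            break
        g = dp / p
        h = g * g - 2 * ddp / p          # ddp holds p''/2, so 2*ddp = p''
        sq = cmath.sqrt((n - 1) * (n * h - g * g))
        denom = max(g + sq, g - sq, key=abs)   # pick the larger denominator
        x = x + 0.5 if denom == 0 else x - n / denom
    return x
```

Combined with the synthetic-division deflation above, this finds all \(M\) roots; a pairwise distance check with a small tolerance then decides distinctness.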

There was another technique we had available which is slightly more reliable for polynomials over the reals, and it goes like this: a matrix and its eigenvalues both satisfy the matrix’s characteristic polynomial. Turning it around, it is possible to build a matrix (the companion matrix) whose characteristic polynomial is the polynomial we are trying to find roots for, and then use eigenvalue-finding methods such as the QR algorithm on that matrix to get the roots.
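
As a sketch of that idea (assuming NumPy is to hand; this is essentially what numpy.roots does internally, with NumPy supplying the QR-based eigenvalue solver):

```python
import numpy as np

def companion_roots(coeffs):
    """Roots of a polynomial as eigenvalues of its companion matrix.

    coeffs are listed highest-degree first, leading coefficient nonzero.
    """
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                    # normalise to a monic polynomial
    n = len(c) - 1
    m = np.zeros((n, n), dtype=complex)
    m[1:, :-1] = np.eye(n - 1)      # ones on the subdiagonal
    m[:, -1] = -c[::-1][:-1]        # last column: -a_0, ..., -a_{n-1}
    return np.linalg.eigvals(m)     # QR-algorithm eigenvalue solver
```

For \(x^2 - 3x + 2\) this hands back the eigenvalues 1 and 2, as expected.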

Comparing the code in Numerical Recipes in C++ between their Laguerre’s method implementation and their eigenvalue-finding approach, the eigenvalue code is roughly three times longer. Good stuff.