Fri, 30 Nov 2007
Lazy square roots of power series
I wanted to solve the differential equation

$$(f(x))^2 + (f'(x))^2 = 1$$

where f' is the derivative of f. This equation has a couple of obvious solutions (f(x) = 1; f(x) = sin(x)) and a bunch of not-so-obvious ones. Since I couldn't solve the equation symbolically, I decided to fall back on power series. Representing f(x) as $a_0 + a_1x + a_2x^2 + \cdots$, one can manipulate the power series and solve for $a_0$, $a_1$, $a_2$, etc. In fact, this is exactly the application for which mathematicians first became interested in power series. The big question is "once you have found $a_0$, $a_1$, etc., do these values correspond to a real function? And for what x does the power series expression actually make sense?" This question, springing from a desire to solve intractable differential equations, motivates a lot of the theoretical mathematics of the last hundred and fifty years.
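For instance, writing $f(x) = a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots$ and $f'(x) = a_1 + 2a_2x + 3a_3x^2 + \cdots$, equating the coefficients of $1$, $x$, and $x^2$ on the two sides of the equation gives

$$a_0^2 + a_1^2 = 1, \qquad 2a_0a_1 + 4a_1a_2 = 0, \qquad 2a_0a_2 + a_1^2 + 6a_1a_3 + 4a_2^2 = 0, \quad \ldots$$

and choosing the solution with $a_0 = 0$ and $a_1 = 1$ forces $a_2 = 0$ and $a_3 = -1/6$, the beginning of the series for $\sin x$.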
The manipulations can be done with lazy streams. For example, this defines the series of powers of 2 in terms of itself:

# Perl
my $series;
$series = node(1, promise { scale(2, $series) });

This is perfectly well-defined code; it runs fine and sets $series to be the series [1, 2, 4, 8, 16, ...]. In Haskell this is standard operating procedure:

-- Haskell
series = 1 : scale 2 series

But in Perl it's still a bit outré.
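The scale function is not defined above; presumably it multiplies every coefficient of a series by a constant. A minimal Haskell sketch of it, under that assumption:

-- Haskell (sketch): scale multiplies each coefficient by a constant.
scale :: Num a => a -> [a] -> [a]
scale c = map (c *)

-- With this in scope, the one-line definition of series above works:
--   take 6 series  ==>  [1,2,4,8,16,32]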
Similarly, the book shows, on page 323, how to calculate the reciprocal of a series s. Any series can be expressed as the sum of the first term and the rest of the terms:

s = head(s) + x·tail(s)

Now suppose that r = 1/s, and write r the same way:
r = head(r) + x·tail(r)

Then, since r = 1/s, we have:
rs = 1

This shows (equating the constant terms on both sides) that head(r) = 1/head(s). And equating the non-constant terms then gives:
x·(1/head(s))·tail(s) + x·tail(r)·head(s) + x²·tail(r)·tail(s) = 0

Dividing through by x and noting that head(s) + x·tail(s) is just s, whose reciprocal is r, this solves to tail(r) = -tail(s)·r / head(s), and we win: the head and tail of r are both expressed in terms of things we already have. This same calculation appears on page 323, in a somewhat more confused form. (Also, I considered only the special case where head(s) = 1.) The code works just fine.
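The code below relies on series-arithmetic helpers such as add and divide that are not shown here. Here is a rough Haskell sketch of how they might be written, using the same head/tail reasoning; the formulation and the name recipSeries are mine, not necessarily the book's.

-- Haskell (sketch): arithmetic on series represented as lazy coefficient lists.

-- Addition pads the shorter list, so a finite series like [r0]
-- can be added to an infinite one.
add :: Num a => [a] -> [a] -> [a]
add (a:as) (b:bs) = (a + b) : add as bs
add as     []     = as
add []     bs     = bs

-- (a + x·A)(b + x·B) = ab + x·(a·B + A·(b + x·B))
mul :: Num a => [a] -> [a] -> [a]
mul (a:as) bbs@(b:bs) = a * b : add (map (a *) bs) (mul as bbs)
mul _      _          = []

-- u/v, assuming head v is nonzero:
-- head q = head u / head v, and tail u = head q·tail v + tail q·v,
-- so tail q = (tail u - head q·tail v) / v.
divide :: Fractional a => [a] -> [a] -> [a]
divide (u0:ut) v@(v0:vt) = q0 : divide (add ut (map (negate . (q0 *)) vt)) v
  where q0 = u0 / v0
divide _ _ = []

-- The reciprocal derived above is just division of 1 by s.
recipSeries :: Fractional a => [a] -> [a]
recipSeries = divide (1 : repeat 0)

-- take 5 (recipSeries (1 : 1 : repeat 0))   -- i.e. 1/(1+x)
--   ==>  [1.0,-1.0,1.0,-1.0,1.0]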
To solve the differential equation f² + (f')² = 1, I want to do something like this:

$$f = \sqrt{1 - {(f')}^{2}}$$

so I need to be able to take the square root of a power series. This does not appear in the book, and I have not seen it elsewhere. Here it is. Say we want r² = s, where s is known. Then write, as usual:
s = head(s) + x·tail(s)

as before, and, since r² = s, we have:
(head(r))² + 2x·head(r)·tail(r) + x²·(tail(r))² = head(s) + x·tail(s)

so, equating coefficients on both sides, (head(r))² = head(s), and head(r) = √(head(s)). Subtracting head(s) from both sides and dividing by x:
2·head(r)·tail(r) + x·(tail(r))² = tail(s)

and we win: since 2·head(r) + x·tail(r) is just head(r) + r, this says that tail(r) = tail(s) / (head(r) + r). Or rather, we win once we write the code, which would be something like this:
# Perl
sub series_sqrt {
  my $s = shift;
  my ($s0, $st) = (head($s), tail($s));
  my $r0 = sqrt($s0);

  my $r;
  $r = node($r0, promise { divide($st, add2(node($r0, undef), $r)) });
  return $r;
}

I confess I haven't tried this in Perl yet, but I have high confidence that it will work. I actually did the implementation in Haskell:
-- Haskell
series_sqrt (s0:st) = r
  where r  = r0 : divide st (add [r0] r)
        r0 = sqrt s0

And when I asked it for the square root of [1,1,0,0,0,...] (that is, of 1+x) it gave me back [1, 0.5, -0.125, 0.0625, ...], which is indeed correct.

The Perl code is skankier than I wish it were. A couple of years ago I said in an interview that "I wish Perl's syntax were less verbose." Some people were surprised by this at the time, since Perl programmers consider Perl's syntax to be quite terse. But comparison of the Perl and Haskell code above demonstrates the sort of thing I mean. Part of the issue here, of course, is that the lazy list data structure is built into Haskell, but I have to do it synthetically in Perl, and so every construction of a lazy list structure in Perl is accompanied by a syntactic marker (such as node(...) or promise { ... }) that is absent, or nearly absent, from the Haskell.

But when I complained about Perl's verbose syntax in 2005, one thing I had specifically in mind was Perl's argument acquisition syntax, here represented by my $s = shift;. Haskell is much terser, with no loss of expressiveness. Haskell gets another win from the automatic destructuring bind: instead of explicitly calling head() and tail() to acquire the values of s0 and st, as in the Perl code, they are implicitly called by the pattern match (s0:st) in the Haskell code, which never mentions s at all. It is quite fair to ascribe this to a failure of Perl's syntax, since there's no reason in principle why Perl couldn't support this, at least for built-in data structures. For example, consider the Perl code:
sub blah {
  my $href = shift();
  my $a = $href->{this};
  my $tmp = $href->{that};
  my $b = $tmp->[0];
  my $c = $tmp->[2];
  # Now do something with $a, $b, $c
}

It would be much more convenient to write this as:

sub blah {
  my { this => $a, that => [$b, undef, $c] } = shift();
  # Now do something with $a, $b, $c
}

This is a lot easier to understand. There are a number of interesting user-interface issues to ask about here: What if the assigned value is not in the expected form? Are $a, $b, and $c copied from $href, or are they aliases into it? And so on. One easy way to dispense with all of these interesting questions (perhaps not in the best way) is to assert that this notation is just syntactic sugar for the long version. I talked to Chip Salzenberg about this at one time, and he said he thought it would not be very hard to implement. But even if he was right, what is not very hard for Chip Salzenberg to do can turn out to be nearly impossible for us mortals.

[ Addendum 20071209: There's a followup article that shows several different ways of solving the differential equation, including the power-series method. ]

[ Addendum 20071210: I did figure out how to get Haskell to solve the differential equation. ]
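For the curious, here is one way the power-series method might look in Haskell, reusing series_sqrt from above together with the add, mul, and divide sketches, plus a term-by-term integration operator; this is a reconstruction of mine, not necessarily what the followup article does. It picks the solution with f(0) = 0, the sin-like one.

-- Haskell (sketch): solve f² + (f')² = 1 lazily, taking the branch
-- f' = sqrt(1 - f²) and the initial condition f(0) = 0.

-- Integrate a series term by term, with constant of integration c:
-- the integral of a0 + a1·x + a2·x² + ... is c + a0·x + a1·x²/2 + ...
integrate :: Fractional a => a -> [a] -> [a]
integrate c as = c : zipWith (/) as (map fromInteger [1 ..])

-- f = f(0) + the integral of f', where f' = sqrt(1 - f²).
f :: [Double]
f = integrate 0 (series_sqrt (add (1 : repeat 0) (map negate (mul f f))))

-- take 8 f  ==>  approximately [0, 1, 0, -1/6, 0, 1/120, 0, -1/5040],
-- the Taylor coefficients of sin x.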