Tuesday, August 31, 2004

Bananas, Lenses, Envelopes, and Barbed Wire --- Section 3

Apparently the paper gives all its definitions in terms of the category CPO (Complete Partial Orders); I'm not entirely sure what the difference between a complete and an incomplete partial order is, so the significance probably escapes me. A lot of this section is also reminiscent of Pierce's Introduction to Category Theory, so I should probably go back and reread that.

Functors: I remember functors being fun reading in Pierce. In this case we are interested in bifunctors and monofunctors: the former map pairs of types to types and pairs of functions to functions, the latter map types to types and functions to functions. Both cases preserve id and compose.
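To make the "preserve id and compose" part concrete, here is a minimal Python sketch (my own example, not the paper's), using lists as the functor and a map-like function as its action on functions:
# Python --- sketch of the functor laws, using the list functor.
def fmap(f):
    # the functor's action on functions; the action on types is "list of"
    return lambda xs: [f(x) for x in xs]

def compose(f, g):
    return lambda x: f(g(x))

identity = lambda x: x
f = lambda n: n + 1
g = lambda n: n * 2
xs = [1, 2, 3]

assert fmap(identity)(xs) == identity(xs)                        # preserves id
assert fmap(compose(f, g))(xs) == compose(fmap(f), fmap(g))(xs)  # preserves compose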

Products: Standard category fare. We have the bifunctor (at least I assume it's a bifunctor; the paper doesn't say, but it meets the definition above) || which takes two types and maps them to the type of pairs of those two types (the paper's example: D||D' = {(d,d') | d <- D, d' <- D'} --- I'm using <- instead of the 'element of' symbol; I really need to get a better blog editor, one I can type symbols into easily), with (f||g)(x,x') = (f x, g x'), plus the expected projection and tupling combinators.
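In Python terms (a rough sketch with my own names, not the paper's notation), the product bifunctor and its combinators might look like:
# Python --- sketch of the product bifunctor, projections, and tupling.
def prod(f, g):               # f || g : acts componentwise on pairs
    return lambda p: (f(p[0]), g(p[1]))

def exl(p): return p[0]       # projection onto the first component
def exr(p): return p[1]       # projection onto the second component

def split(f, g):              # tupling: <f, g> x = (f x, g x)
    return lambda x: (f(x), g(x))

assert prod(abs, str)((-3, 7)) == (3, '7')
assert split(abs, str)(-3) == (3, '-3')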

Sum: Interesting, latent-typing as a functor, or at least something that looks like latent-typing to me. Also interesting to note the use of bottom (_|_) to include undecidability.

 D|D'          = ({tag0}||D) U ({tag1}||D') U {_|_}
(f|g) _|_      = _|_
(f|g) (tag0,x) = (tag0, f x)
(f|g) (tag1,x) = (tag1, g x)
Also worth noting the injection and selection operators:
i0 x = (tag0, x)
i1 y = (tag1, y)
f v g --- which applies f to arguments tagged 0,
                and g to arguments tagged 1.

Arrow: Some type of 'wrapping' function. The example given is (f -> g) h = g . h . f, i.e. h gets pre-composed with f and post-composed with g. This suggests some sort of injection into the function type, but I'm not entirely sure how. Apparently this is related to curry, uncurry, and eval, but again I don't see the relationship.
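My reading of it as a rough Python sketch (the names are mine): the arrow functor wraps a function h by pre-composing f and post-composing g:
# Python --- sketch of the arrow functor: (f -> g) h = g . h . f
def arrow(f, g):
    return lambda h: lambda x: g(h(f(x)))

inc = lambda n: n + 1
double = lambda n: n * 2
square = lambda n: n * n
wrapped = arrow(inc, double)(square)   # double . square . inc
assert wrapped(3) == 32                # ((3 + 1) ** 2) * 2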

Identity, Constants: Pretty much as expected. Lifting and Sectioning will require a little more thought.

Interesting definition of a constant as a function 1 -> A, 1 of course having only the single member void. Also, given p::A->bool, and p? a = i0 a or i1 a according to whether p a is true or false, you have (f v g) . p? modeling if p then f else g.
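A quick Python sketch of that conditional reading (tag values and names are mine):
# Python --- sketch of the injections, p?, and f v g modelling if p then f else g.
def i0(x): return ('tag0', x)
def i1(x): return ('tag1', x)

def test(p):                  # p? : tags its argument according to p
    return lambda a: i0(a) if p(a) else i1(a)

def junc(f, g):               # f v g : f on tag0 values, g on tag1 values
    return lambda t: f(t[1]) if t[0] == 'tag0' else g(t[1])

# (f v g) . p?  behaves as  if p then f else g
conditional = lambda p, f, g: lambda a: junc(f, g)(test(p)(a))
assert conditional(lambda n: n > 0, abs, lambda n: -n)(-5) == 5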

The rest of section 3 easily exceeds my grasp of category theory. We define a representation for categorical recursion, then move on to recursive types, specifically the least-fixed-point of a monofunctor. Three examples given are cons-lists, binary trees, and natural numbers.

Having covered the background theory, the paper moves on to definitions of our operators over arbitrary data types. That will have to wait until I've recovered from section 3 ;).

Monday, August 30, 2004

Bananas, Lenses, Envelopes, and Barbed Wire.

Read the first two sections of "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire" over lunch. So far a very interesting paper, although looking ahead, I suspect it will probably end up exhausting my understanding of category theory reasonably soon.

The paper has 6 sections. Section 1 is a short introduction, motivation, and review of the literature; section 2, a short introduction to the four recursion operators cata, ana, hylo, and para; section 3 draws from category theory to outline a theory of algebraic data types used in the rest of the paper; section 4, a formal definition of the operators introduced in section 2 using the theory developed in section 3; section 5 extends some of section 4 to parameterised types; section 6 is the conclusion, acknowledgements, and references.

The paper aims to define a series of four 'recursion operators'; cata, ana, hylo, and para, and then use them as the basis for a functional programming calculus. These operators correspond to constructors for cata-, ana-, hylo-, and paramorphisms respectively. The reason for doing this is that unstructured recursion isn't amenable to structural analysis and manipulation, so it is preferable to generalise various recursive forms into various forms of structured recursion before trying to develop the calculus.

The examples given are all with respect to your standard inductive list data-type:

A* ::= Nil | Cons(A A*)
(It's possibly worth noting that this is a direct algebraic analogue to your standard linked-list definition in an imperative language)
Cata
Constructs a catamorphism, more generally known as a right fold. Think of it as collapsing a list into a value. It takes two parameters: a value to replace Nil (typically the operation's identity) and a binary function to combine each element with the folded tail.
h = cata(b, .) 
  ->  h Nil          = b
      h (Cons(a,as)) = a . h(as)

and

h :: A* -> B
b :: B
. :: A -> B -> B
cata is denoted using 'banana brackets': (|b, .|). A good example of a catamorphism is length().
length = cata(0, (+) $ 1)  
  note: (+) $ 1  is the right assoc composition of addition and the 1
        function (which ignores its argument and returns 1)
 
proof by substitution:

length Nil = 0
length (Cons(a,as)) =  ((+) $ 1) a length(as)
                    => (+) (1 a) length(as)
                    => 1 + length(as)
Which is of course the standard definition of length::A* -> N
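A quick Python sketch of cata over ordinary Python lists (my own encoding, not the paper's; Nil is the empty list, Cons is head/tail):
# Python --- sketch of cata (right fold).
def cata(b, op):
    def h(lst):
        if not lst:                    # Nil
            return b
        return op(lst[0], h(lst[1:]))  # Cons(a, as) -> op(a, h(as))
    return h

length = cata(0, lambda a, n: 1 + n)
total  = cata(0, lambda a, n: a + n)
assert length([7, 8, 9]) == 3
assert total([7, 8, 9]) == 24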
Ana
Constructs an anamorphism, more generally known as an unfold. This is the dual of a fold: it expands a value into a list. It takes two parameters, a generating function and a termination predicate.
h = ana(g, p) 
  ->  h b = | p b  -> Nil
            | else -> Cons(a, h b') where (a, b') = g b

and

h :: B -> A*
g :: B -> (A B)
p :: B -> boolean
ana is denoted with 'lens brackets': [(g, p)]. A good example of an anamorphism is zip; an easier one to follow is count(n), which returns the (infinite) list of numbers incrementing from n.
count n = ana(g, p) where p a = false,
                          g a = (a, a + 1)
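Since count as written never terminates (it unfolds forever), here is a Python sketch of ana with a predicate that does terminate (my own example):
# Python --- sketch of ana (unfold).
def ana(g, p):
    def h(b):
        if p(b):                       # p b  -> Nil
            return []
        a, b2 = g(b)                   # else -> Cons(a, h b')
        return [a] + h(b2)
    return h

count_down = ana(lambda n: (n, n - 1), lambda n: n == 0)
assert count_down(4) == [4, 3, 2, 1]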
Hylo
Constructs a hylomorphism. This is your classic inductive recursion from one value to another (think factorial, or fibonacci). This can be represented as an anamorphism that builds the recursive call-stack, followed by a catamorphism that folds the stack into a result.
h = hylo(c, ., g, p) 
  ->  h a = | p a  -> c
            | else -> b . (h a') where (b, a') = g a

and

h :: A -> C
c :: C
. :: B -> C -> C
g :: A -> (B A)
p :: A -> boolean
hylo is denoted using 'envelope brackets': [|(c, .), (g, p)|]. A good example of a hylomorphism is factorial.
fact n = [|(1, *),(g, p)|] where g n -> (n, n - 1),
                                 p = | 0    -> true
                                     | else -> false

substitute:

fact = hylo(c, ., g, p) 
  ->  fact n = | n == 0  -> 1
               | else    -> b * (fact a') where (b, a') = (n, n - 1)
            => | else    -> n * (fact (n - 1)).

Which is of course the classic factorial
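A Python sketch of hylo (again my own encoding), which fuses the unfold and the fold so the intermediate list is never actually built:
# Python --- sketch of hylo.
def hylo(c, op, g, p):
    def h(a):
        if p(a):                       # p a  -> c
            return c
        b, a2 = g(a)                   # else -> op(b, h(a'))
        return op(b, h(a2))
    return h

fact = hylo(1, lambda b, r: b * r, lambda n: (n, n - 1), lambda n: n == 0)
assert fact(5) == 120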
Para
Constructs a paramorphism. This is the weakest description of any of the four. I can't really claim to understand para, but it is immediately recognisable as structured recursion over inductive datatypes. I'm hoping that the definition will be provided later in the paper, but unlike the other three, this intro section only provided examples. It does however cite another paper by one of the authors (Meertens), Paramorphisms. The example for numbers (Num ::= 0 | 1 + Num):
h = para(b, .) 
  ->  h 0       = b
      h (1 + n) = n . (h n)
para is denoted using 'barbed-wire brackets': [<b, .>]. Factorial can be written as a paramorphism as:
fact = [< 1, g >] where g n m = (1 + n) * m
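As best I understand it, a Python sketch of para over the naturals (my own encoding): the operator sees both the predecessor and the recursive result:
# Python --- sketch of para over Num ::= 0 | 1 + Num.
def para(b, op):
    def h(n):
        if n == 0:
            return b
        return op(n - 1, h(n - 1))     # h(1 + m) = op(m, h(m))
    return h

fact = para(1, lambda n, m: (1 + n) * m)
assert fact(5) == 120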

Friday, August 27, 2004

Object.equals()

Like many languages Java provides two equality methods to test objects for equivalence. The first is accessed via the '==' operator and does a simple heap reference comparison. The other is a standard method on Object, equals(), which is intended to be overridden by the programmer to perform a value-based comparison. Now the '==' operator is defined by the language spec and ultimately implemented by the JVM developers, hence not our concern. Unfortunately the Java community has failed to properly document the subtleties involved in writing a correct equals method, and so I continue to come across incorrect implementations and, worse, incorrect recommendations.

But first we have to understand what we are dealing with. Fundamentally when we are defining an equals method on a class A, what we are defining is an Equivalence Relation over the set of all objects of class A. This is important because it highlights that we can define our equals method to return anything we want provided we meet three rules.

  • Reflexivity: a.equals(a) -> true
  • Symmetry: a.equals(b) <-> b.equals(a)
  • Transitivity: a.equals(b) && b.equals(c) -> a.equals(c)
If we break any of these rules someone will suffer subtle and devious bugs.

Note: To save typing I'm going to leave out the hashCode method in each of these examples, but be aware that if you override equals() you *must* override hashCode() to match.

One consequence of the requirements is that we must ensure that any test we use in our equals implementation meets these requirements as well. So let's consider the standard mistake:

class A {
  int n1;
  public boolean equals(Object o) {
    if (!(o instanceof A)) {
      return false;
    } else {
      A rhs = (A)o;
      return this.n1 == rhs.n1;
    }
  }
}
This is a very common implementation of equals, it has even made its way into a number of Java books. Worse, the bug only manifests in the presence of implementation inheritance of concrete classes.
class B extends A { 
  int n2;
  public boolean equals(Object o) {
    if (!(o instanceof B)) {
      return false;
    } else {
      B rhs = (B)o;
      return this.n1 == rhs.n1 && this.n2 == rhs.n2;
    }
  }
}
To demonstrate the bug let's define a simple function on A's:
static void printEqual(A a1, A a2) {
  if (a1.equals(a2)) {
    System.out.println("EQUAL");
  } else {
    System.out.println("NOT-EQUAL");
  }
}
And test it:
  A test1 = new A();
  A test2 = new B();
  printEqual(test1, test2);
      => Prints "EQUAL"
  printEqual(test2, test1);
      => Prints "NOT-EQUAL" !!!!
And we've broken symmetry. This will bite you. There is no situation when an asymmetrical equality is not a bug. Unfortunately it is scary how many places on the net this is presented as the exemplar equals() implementation.

So what does a correct equals method look like? Well, there are two. The first is very straightforward, and should be the default choice.

class A {
  int n1;
  public boolean equals(Object o) {
    if (o == this) {
      return true;
    } else if (o == null || !getClass().equals(o.getClass())) {
      return false;
    } else {
      A rhs = (A)o;
      return this.n1 == rhs.n1;
    }
  }
}

class B extends A { 
  int n2;
  public boolean equals(Object o) {
    if (!super.equals(o)) {
      return false;
    } else {
      B rhs = (B)o;
      return this.n2 == rhs.n2;
    }
  }
}
Now test1 != test2 AND test2 != test1. Also, by delegating from B to A at the start of B::equals() we ensure any inaccessible state in A is handled appropriately, and more importantly, that the equality code for A isn't duplicated in B. If you can't delegate from B->A like this, then it suggests that your B isn't actually an A, and you should check your class design (you probably should be using delegation rather than inheritance).

There are times however when you would really like a B to compare equal to an A. The classic examples are the circle/ellipse and square/rectangle shape hierarchies. It would be really nice if an ellipse with equal axes compared equal to an equivalent instance of a circle sub-class. The way we can achieve this is to recognise that the core requirement of a correct equals implementation is that the operations we use to implement it meet the same reflexivity/symmetry/transitivity requirements as the method itself. Next note that instanceof is only asymmetric when it is used in a subclass's own equals(); in fact any test added in the subclass will lead to asymmetry! So if we do need subclassed equality, we can't permit a subclass to override the equals() method.

abstract class Shape {
  public boolean equals(Object o) {
    if (o == this) {
      return true;
    } else if (o instanceof Shape) {
      // !! Note: instanceof is safe here as Shape is abstract.
      Shape rhs = (Shape)o;
      return [shape specific comparison --- maybe position];
    } else {
      return false;
    }
  }
}

class Ellipse extends Shape { 
  int minor, major;

  public int getMinor() { return minor; }
  public int getMajor() { return major; }

  // !!! NOTE THE USE OF *final* to prevent asymmetry!
  // Also note the exclusive use of accessors.
  public final boolean equals(Object o) {
    if (!super.equals(o)) {
      return false;
    } else if (o instanceof Ellipse) {
      // instanceof is safe here too: equals() is final, so no subclass
      // can reintroduce asymmetry.
      Ellipse rhs = (Ellipse)o;
      return this.getMajor() == rhs.getMajor()
          && this.getMinor() == rhs.getMinor();
    } else {
      return false;
    }
  }
}

public class Circle extends Ellipse {
  public Circle(int radius) { major = minor = radius; }
  public int getRadius()    { return major; }
}
Note that this is probably a rather cumbersome hierarchy. A better approach would be:
public abstract class Ellipse {
  public abstract int getMajor();
  public abstract int getMinor();
  public final boolean equals(Object o) {
    if (!(o instanceof Ellipse)) {
      // instanceof is safe here: Ellipse is abstract and equals() is final.
      return false;
    } else {
      Ellipse rhs = (Ellipse)o;
      return this.getMajor() == rhs.getMajor()
          && this.getMinor() == rhs.getMinor();
    }
  }
}

public class EllipseImpl extends Ellipse {
  int major, minor;
  ...
}

public class CircleImpl extends Ellipse {
  int radius;
  public int getMajor() { return radius; }
  ...
}
Note that a CircleImpl will still compare equal with an appropriate EllipseImpl.

Please. If you are programming Java, believe me, these really are the only two options open to you. I really am very tired of tracking down the subtle bugs that result when this advice is ignored.

Thursday, August 12, 2004

SICP Lectures.

A friend of mine is studying CS at UQ, and this year they are using The Structure and Interpretation of Computer Programs as a first year text. I am personally pleased to see UQ buck the 'vocational education' trend that has been decimating IT degree courses recently. OTOH it does make life a little tougher on the students, and although this is a good thing, there are ways of making life a little easier. One is to point students at the videos of the SICP lectures given by Abelson and Sussman at MIT. I have watched them and they are very good; I highly recommend them to anyone approaching the book looking for some extra insight into the material. It is also worth pointing at the online version of the book in case anyone wasn't aware it was available.

Cale the Corporatist Corsair

Apparently the BSA are looking for a new name for their new mascot. I'm a little surprised at their honesty, personally I find a corporate extortionist weasel who attempts to indoctrinate tech-savvy kids into acquiescing to the corporatist annexation of their public-domain rights, rather appropriate for the BSA.

Personally I'm pushing for "Cale the Corporatist Corsair".

Monday, August 09, 2004

Why Java sucks.

Ok, so this could just as easily have been titled 'Why [C/C++/Ada/...] sucks', but I use Java more than any of them atm, so I went with Java. Back on lambda the 'Why types are interesting' thread continues unabated. However this post by Paul Snively grabbed my attention. It is definitely worth reading in its entirety, but two paragraphs in particular stand out.

[Alan] Kay takes a hard opposite perspective to mine on the static typing vs. late binding question, saying, in summary (not a direct quote), "What to do until software engineering is invented... do late binding, which means being able to make the changes easily." Actually, perhaps it's not really opposite, since he could be interpreted to mean "either do software engineering if you can or make the cost of changes as low as possible, which is best done at the current state of the art with late binding." That is, get out of these tools that do neither software engineering nor dynamic change well (C, C++, Java...) and either head for software engineering (SML, O'Caml, Haskell, Concurrent Clean, Epigram, Alice...) or head for late binding (Common Lisp, Scheme, Smalltalk, Oz...). Formulated that way, I can't help but agree with him, and merely choose to focus on heading towards software engineering after spending decades in late binding.
At the end of the day, I want modern type theoretic ideas in my programming language so that, at compile time, I can:
  • Know that my code is deadlock-free.
  • Know that my code is race-condition free
  • Know that my code doesn't leak authority.
  • Know that my code is upgradable without violating encapsulation or leaking authority.
  • Will parallelize across multiple processors automagically when possible and serialize when it's not.
  • Will not throw an exception that will not be caught and handled.
  • Will have inferred all memory allocation and deallocation and do it automagically for me, letting me know if the dynamic characteristics of my code require the linking of a garbage collector.
  • Will let me know if I attempt to take the tail of an empty list. :-)
In case anyone doesn't realise, all the above points are already possible, although AFAIK they have not yet all been demonstrated at the same time.

Interesting scripting language for Java.

I had a quick look at groovy a few months back when it was announced on lambda. To be perfectly honest I wasn't particularly impressed. If I want a latent-typed, 'agile' scripting language, and don't want to fight too hard to be allowed to use it, I will use Jython; or if I'm writing for myself, or have a tech-savvy boss, I'll use the scripting language, aka scheme (specifically: SISC or possibly JScheme). As far as I can tell groovy is just Yet Another Scripting Language. I mean, what's the point!?

Then I came across a comparison between groovy and another JVM scripting language, Nice. Bryn Keller does a quick comparison between the two on his blog. So what makes Nice interesting, when groovy is so yawn-worthy? Well, Nice is statically typed! It looks like a serious attempt at dragging the Java world into the world of real compile-time type safety --- a world without NullPointer/ClassCastExceptions, and with the free-flowing syntactic style of your traditional latent-typed languages.

As an example compare the following (from the xoltar post):

Groovy
if (customer.orders.any { it.amount > 1000 && 
	it.product.type == "cheese"} ) {
  doSomething();
}
Nice
if (customers.orders.any(Order ord => ord.amount > 1000 && 
	ord.product.type == "cheese")) {
  doSomething();
}
Which, given that the type of ord can be inferred from its use, is intended to shortly become:
if (customers.orders.any(ord => ord.amount > 1000 && 
	ord.product.type == "cheese")) {
  doSomething();
}
Which differs from the groovy example in only the trivial matter of being able to name the parameter to the closure; and in the not-at-all-trivial matter that the compiler will ensure static type safety at compile time!

Just skimming the Nice website, I still suspect I will find ML a nicer language; still, a language targeted at interoperation with Java that supports real parametric types, proper closures, multiple dispatch (goodbye visitor pattern ;), and tuples, all statically typed with HM inference --- what's not to like?

Saturday, August 07, 2004

Tired.

I just finished my third first-aid duty at the Queensland Royal Show (aka the Ekka). It was only 6 hours, so I don't know why I'm feeling so exhausted. For some reason duties seem to tire me out, and this one wasn't even particularly eventful (fortunately); I wrote up incident reports for two almost identical injuries from one of the sideshow rides, and then learnt there had been more than my two cases. Hopefully something will be done about it; I hate to see kids hurt. I was assigned ringside during the lead-up and through the fireworks, which was nice; although the fireworks were rather disappointing, they were still fun.

I wrote some quick examples of continuation passing style over lunch before I went on duty, but I'm too tired to write the commentary now, so I'll do that tomorrow or Monday and post part 3 then. Byron will be pleased though: one of them is a simple regex engine, which I think comes out rather nicely using CPS.

For now it's bedtime I think ;).

Friday, August 06, 2004

Continuations - part 2

Back to my attempt to explain continuations.

What is a continuation? The short answer is "What the program should do next". Another way of putting it would be "Everything the program hasn't done yet, wrapped up in something that looks sort-of like a function". Now I'm going to make the possibly arrogant assumption that, as that wasn't a sufficient explanation for me, it won't be sufficient here either.

Let's go back and reconsider an LCO example. This time we want to write a sigma function: it takes a non-empty list of numbers and returns their sum.

; (sigma '(1 2 3 4)) -> 10
(define (sigma lst)
  (if (null? (cdr lst))
      (car lst)
      (+ (car lst) (sigma (cdr lst)))))
Now the astute reader will be wondering why I've defined sigma like this, associating to the right, rather than to the left (which would have been simpler). The reason for this is that this function cannot be trivially converted into the accumulator form we used in the previous LCO example. More importantly, there are a number of recursive functions which are easier to define associating to the right than the left (I recommend trying to use append in place of + to define map in this manner). Now it would be nice to be able to manage this sort of function in constant stack, and this is a nice simple example to demonstrate how we can do exactly this. In case any readers are wondering what I mean by associating left vs. right, I suggest you compare the above to the left-associative version below and consider the order in which the elements of the list are combined.
(1 + (2 + (3 + 4))) - right assoc
(((1 + 2) + 3) + 4) - left assoc
; Left associative
(define (sigma lst)
  (letrec ((sigma-iter
    (lambda (lst accum)
       (if (null? lst)
           accum
           (sigma-iter (cdr lst) (+ accum (car lst)))))))
  (sigma-iter (cdr lst) (car lst))))
(Note we could have defined sigma-iter the same way we did earlier, as a global with (define...), however it really is a helper function, and so is better defined locally to sigma. --- besides, it also gives me a chance to introduce lambda, which we will be using later)

The secret to converting the right-associative version into LCO is to recognise that the last thing we want to do before returning is call sigma on the tail of the list. Now this looks difficult, as the last thing we need to do is the addition. Still, I'm a great believer in wishful thinking, so I'm going to pretend I have a function that will take the result of the recursive call and finish off everything I was supposed to do next for me. Then I can pass that into the next call to sigma and have it finish for me.

(define (sigma lst finisher)
  (if (null? (cdr lst))
      (finisher (car lst))
      (finisher (sigma (cdr lst) finishing-function))))
Oops, now we have a call to 'finisher' in the road. Still, that's ok: we can just pretend that we have a new finishing-function each time that will both handle the addition for this call, and the call to the finisher we were passed.
(define (sigma lst finisher)
  (if (null? (cdr lst))
      (finisher (car lst))
      (sigma (cdr lst) new-finishing-function)))
Now this is looking better; all we have to do is work out how to define an appropriate finishing function and we're done. Before we do that however it is worth noting (as most of you will have noticed by now anyway) that the new-finishing-function's definition, "finish off everything I want to do next", is just a rephrasing of the definition I gave for a continuation at the start.

Lets compare the non-tail-recursive call to the continuation based tail-recursive call:

; non-tail-recursive
(+ (car lst) (sigma (cdr lst)))
; tail-recursive
(sigma (cdr lst) continuation)
Remember the definition of the continuation is that it will be receiving the result of the recursive call to sigma as a parameter, so let's call that x and substitute into the non-tail-recursive version:
; let x <- (sigma (cdr lst))
(+ (car lst) x)
This will calculate the result of our call, but we have one thing left to do: we have to return the result to our caller. However our caller has finished (its call to us was a tail-call, remember), and the destination of our result was passed as a continuation. So we don't actually return so much as make a tail-call to the continuation (note this is a major reason why LCO is so useful; if we didn't have LCO our stack frame would be leaked, but with LCO there is no leak).
; pass (+ (car lst) x) to the continuation.
(continuation (+ (car lst) x))
So what we need is to define a small function that can capture this idea and pass this into the recursive call to sigma.
; Scheme
(lambda (x) (continuation (+ (car lst) x)))

# Python - just in case it's useful understanding the closure above.
# NOTE: As python doesn't provide LCO this *will* leak stack.
lambda x: continuation(lst[0] + x)
Now put the pieces together and we're almost there:
(define (sigma lst continuation)
  (if (null? (cdr lst))
      (continuation (car lst))
      (sigma (cdr lst)
	     (lambda (x) (continuation (+ (car lst) x))))))
The next step is to restore the original signature, and make this CPS version a helper function (along the way we rename continuation to k, it's shorter ;).
(define (sigma-helper lst k)
  (if (null? (cdr lst))
      (k (car lst))
      (sigma-helper (cdr lst) (lambda (x) (k (+ (car lst) x))))))

(define (sigma lst)
  (sigma-helper lst initial-continuation))
Final step.... what do we want as the initial-continuation? Well it will be passed the result of the outer sigma, and what we want is the result of the outer sigma.... sounds like a job for the most trivial function you can think of; also known as the I combinator, let me introduce id()
(define (id x) x)

(define (sigma lst)
  (sigma-helper lst id))
And we're done. Let's step back and take a look at what we've done. We have taken a non-tail-recursive function and split it into two pieces separated by a tail-call. We execute the first half, and pass the second half as a parameter, a continuation, to the tail-call. It is worth noting three points:
  • There is nothing special about the precise point we decided to split the function, at any point we could have wrapped the rest of the function in a continuation and passed it into another function, we'll come back to this when we discuss call-with-current-continuation later
  • The continuation ends up being an ordinary function. We can save this function and call it as many times as we like, from places other than inside the sigma call. After all, once we have it, it's just a function. This doesn't make much sense with sigma, but other algorithms do make use of this trick. Again, we will come back to this with call/cc.
  • The sigma-helper function never returns! In every case the helper function substitutes return with a call to its continuation parameter. For this reason the sort of continuation used in this example is called a return-continuation. Every return in every function can be treated as a call to an implicit return-continuation. There are other uses for continuations, return-continuations are just the easiest to grasp, hence the subject of this post. Again, I'll come back to other uses when I discuss call/cc.

This technique, where you pass return-continuations around instead of returning normally, is called continuation-passing style (CPS). It is a powerful approach to programming and a common transformation, especially in compilers for functional languages.

I would also appreciate any feedback on this introduction, both positive and suggestions for improvement. Hopefully I'll have time to do a worked example and a discussion on call/cc soon.

Thursday, August 05, 2004

Python and LCO.

Clinton makes a valid point I should have made clear myself. Python doesn't support LCO, and probably never will. I am only using python as a convenient language to provide anyone who doesn't already know scheme with a parallel implementation to help them pick up enough of the language to follow my later discussions on continuations.

I apologise to anyone who may have been confused.

Continuations - part 1.

Ok, so why did I write a post on last-call-optimisation? Well, I find it very difficult to explain continuations to someone who doesn't understand LCO, so I thought I should post a quick example of LCO before this. Of course the reason behind this sudden urge to actually fulfil my promise to explain continuations here is Lambda again. This time an extended discussion on continuations sparked by Keith Devens' request for help understanding the things, itself sparked by a particularly interesting analogy by Luke Palmer in news:perl.perl6.language, now forever dubbed the "Continuation Sandwich" --- the relevant part of his post is:

Say you're in the kitchen in front of the refrigerator, thinking about a sandwitch. You take a continuation right there and stick it in your pocket. Then you get some turkey and bread out of the refrigerator and make yourself a sandwitch, which is now sitting on the counter. You invoke the continuation in your pocket, and you find yourself standing in front of the refrigerator again, thinking about a sandwitch. But fortunately, there's a sandwitch on the counter, and all the materials used to make it are gone. So you eat it. :-)

Now before we begin I'm going to deal with continuation-passing-style and call-with-current-continuation separately, but as a taste of what we're in for, interested parties can check my CPS-in-Java example on the haskell.org wiki, or the most lucid description of call/cc in Chapter 13 of Teach Yourself Scheme in Fixnum Days.

In the meantime, it's 0021 hours here, so I'm going to sleep first, and finish this later.

Last-Call-Optimisation

First let's consider the old hoary chestnut, the factorial.

; Scheme
(define (fact n)
  (if (zero? n)
      1
      (* n (fact (- n 1)))))

# Python
def fact(n):
  if not n:
    return 1
  else:
    return n * fact(n - 1)
Note: I am going to have to use scheme later, so for the next few examples I'll be using scheme and python together to give non-schemers a basis for comparison.

Now this is a very straightforward recursive definition of the function. The only trouble is that we are going to grow the stack proportional to n. So we end up with the following (abbreviated) call-tree:

fact 4
  fact 3
    fact 2
      fact 1
        fact 0
        * 1 1
      * 2 1
    * 3 2
  * 4 6
-> 24
Note that for (fact 4) we end up with five stack frames on our call stack, and unfortunately there is no alternative as the code is written. We can however eliminate this inefficiency by converting to an iterative version:
; Scheme
(define (fact-iter n accum)
  (if (zero? n)
      accum
      (fact-iter (- n 1) (* n accum))))

(define (fact n)
  (fact-iter n 1))

# Python
def factiter(n, accum):
  if n == 0:
    return accum
  else:
    return factiter(n - 1, n * accum)

def fact(n):
  return factiter(n, 1)
Now the call stack looks like:
fact 4
  fact-iter 4 1
    fact-iter 3 4
      fact-iter 2 12
        fact-iter 1 24
          fact-iter 0 24
-> 24
Which doesn't look very useful until you consider that as each call to fact-iter returns, its caller simply passes the resulting value up the call-chain. In other words, the caller's stack-frame is never used after it calls the next fact-iter in the chain; and what is never used doesn't need to be kept. This is called last-call-optimisation, and is a very common optimisation used by many languages to make the above code as efficient as the while loop solution traditionally taught in imperative languages:
/* C */
int fact(int n) {
  int accum = 1;
  while (n) {
    accum = accum * n;
    n--;
  }
  return accum;
}
In fact the traditional code generated by an LCO-supporting compiler looks a lot like this:
/* MIML ;) */
int fact(int n) {
  int accum = 1;
factiter:
  if (n == 0) {
    return accum;
  } else {
    accum = n * accum;
    n = n - 1;
    goto factiter;
  }
}
Which is pretty much exactly what the while loop would compile to. This still leaves the question: "Why?" What's wrong with the while loop?

The answer to this is that factorial is defined recursively:

       / 0 -> 1
  n! = |
       \ n -> n * (n - 1)!
and it is trivial to compare this definition to the first recursive example and check it is correct. The tail-recursive expression in the second example isn't as easy to confirm, but it is at least in a form amenable to a trivial inductive proof by inspection. Because the while loop doesn't abstract away the inductive step but incorporates it into the loop pre/post-conditions and invariants, it is less amenable to proof. In fact the three examples can be seen as reducing levels of abstraction: moving away from the problem domain (a maths definition) towards the raw metal; and, almost a tautology, the closer we can stay to the problem domain the better.

Now this is great for linear algorithms, or algorithms that process linearly recursive datastructures (fancy name for linked-lists ;). However, consider a pre-order traversal of a binary tree. This can be made tail-recursive, but it is a fair bit more involved, and the subject of my next post.

Wednesday, August 04, 2004

RDF, indices, hypergraphs, and iTql

An RDF database is a set of declarative statements: [subject predicate object]. Now the traditional way of representing this is as a directed graph, with nodes for the subjects and objects and 'predicate' edges linking them. This is a useful representation which successfully visualises most RDF graphs. Unfortunately for visualisers, for RDF to be useful you need to be able to attach semantics to predicates, and the logical way to do this is to make statements. A quick example:

Fred parentOf John
John parentOf Lucy
John parentOf Jane
This becomes much more useful if we can utilise statements such as
parentOf   subsetOf    relativeOf
relativeOf hasProperty transitive
relativeOf hasProperty symmetric
which allows us to match against match(Jane relativeOf $x) and get back { x=Fred, John, Lucy, Jane }
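As a toy Python sketch of what that match involves (representation and names are mine, nothing to do with Kowari's actual implementation): treat the statements as triples, apply the subsetOf/symmetric/transitive schema statements to saturation, then match:
# Python --- toy sketch: saturating relativeOf and matching (Jane relativeOf $x).
facts = {('Fred', 'parentOf', 'John'),
         ('John', 'parentOf', 'Lucy'),
         ('John', 'parentOf', 'Jane')}

# parentOf subsetOf relativeOf
rel = {(s, 'relativeOf', o) for (s, p, o) in facts if p == 'parentOf'}

# relativeOf hasProperty symmetric, transitive: iterate to a fixed point.
changed = True
while changed:
    changed = False
    new = {(o, 'relativeOf', s) for (s, _, o) in rel}                   # symmetric
    new |= {(s1, 'relativeOf', o2)
            for (s1, _, o1) in rel for (s2, _, o2) in rel if o1 == s2}  # transitive
    if not new <= rel:
        rel |= new
        changed = True

print(sorted(o for (s, _, o) in rel if s == 'Jane'))
# -> ['Fred', 'Jane', 'John', 'Lucy']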

This is all well and good until you try and visualise the union of these two graphs (often called base-facts and schema). The first problem you face is multiple edges between the vertices, forcing you to move to a 'directed graph with named edges'; not too hard to visualise, but it isn't looking too much like a traditional graph anymore. The real kicker comes when you try and visualise the relationship between parentOf and relativeOf: now you have an edge between two edges -- and we're not in Kansas anymore, Toto.

So what do we have? It is certainly 'graph-like', but it long ago ceased to be a traditional graph. More importantly, we haven't started considering 'statings', 'named-graphs', or 'reification'. The real problem is that a predicate is a valid vertex in the graph; an equal partner (conceptually) with its more numerous subject and object brethren. Fortunately we're not the first to consider this problem. First consider the definition of a graph (preempting my summary of Trudeau Ch 2).

A graph is an object consisting of two sets called its vertex set and its edge set. The vertex set is a finite nonempty set. The edge set is a set of two-element subsets of the vertex set.
Looking at the definition, it becomes quite obvious that what we need is for the edge set to contain three-element subsets of the vertex set. This conveniently has a name: a 3-hypergraph. More importantly it can be easily expanded to support named-graphs by using a 4-hypergraph, and as Paul Gearon noted, this has immediate relevance to the indices Kowari uses. It also has immediate relevance to crystallising iTql's semantics, allowing us to integrate the trans(), walk(), and the new ???() graph constraints --- we're still arguing over a name for what I like to call exclude(); and the original Constraint constraint (include()???). Each of these then returns an element of the powerset of VxVxV (where V is the vertex set). The elements of these resolved constraints are then statements, and as such amenable to predicate logic; and iTql provides conjunction and disjunction operations (and, or) and returns a sum-of-products solution to the resulting compound predicate.

So iTql provides four levels of operations:

Graph Operations
union, intersection, and disjoint-union of 3-hypergraphs in the FROM clause. Note the result of this is actually a 4-hypergraph.
Graph Constraints
inclusive, transitive, reachability, and soon exclusive constraints against a specific 4-hypergraph (by default the graph returned from the FROM clause). Note these are parameterised by variables, and the result is a set of variable bindings that substitute to satisfy the constraint. These form the primitive constraints in an iTql WHERE clause.
Predicate Logic Expressions
conjunction and disjunction of variable binding sets, and type specific boolean functions (ie. comparison operators >, <, ==, etc) forming the rest of the WHERE clause. The result of this is a product-of-sums we represent as an implementation of the 'Tuples' interface. It is probably worth noting two things. First, a Tuples fails to be a relation only because the individual variable binding sets have differing cardinalities; we fake this by padding with a unique UNBOUND value which carries DONT-CARE semantics. Second, this can only happen when the expression includes a non-union compatible disjunction.
Projection, Aggregation, and Subqueries
ie. the SELECT clause: a list of functions to apply to the padded-product-of-sums returned by the WHERE clause; the results of which are zip'ed into the conventional tabular format most application developers find most useful.