Birthday operators

Today’s exercise in C++ is to implement various kinds of matrix products. Of course, since this is a C++ exercise, I must implement a matrix class (I feel that this is somehow a rite of passage). Naturally I’ve got mine constructing matrices of all kinds of dimensions, adding and scalar-multiplying them, assigning and comparing. Also naturally, I did not define matrix multiplication in this class!

The first product I defined was the naive one (that is, the definition itself), and the next was the recursive-descent one, where you break the matrix in half in both dimensions and block-multiply using the naive rule (but recursively for each product of blocks). One then learns that this gains nothing asymptotically (it is still Θ(n³) arithmetic operations), and moves on to Strassen’s algorithm (one learns this from the book “Introduction to Algorithms”, also known as “CLR”, after its authors).

One way to write this would be to hand-code the product of a 2x2 block matrix, but that is silly, since it is algebraically the same rule as the naive product, which I already defined (as a template, no less). One learns in math that matrix multiplication makes sense over any ring, for example the ring of matrices itself, so I’d like to just call the naive product function to do this. Alas, the definition of that function uses the * operator, which I never defined for matrices, since I didn’t want to be bound to one particular version for these tests.

Fortunately, in C++ it is not necessary to define overloaded operators as member functions, so I could just define one right there. Unfortunately, it is illegal to define one function inside another. Even worse, I can’t simply make the definition at global scope either, since I don’t want it to affect anything else (the template compilation model forces me to put all my code into one file if I want template functions, which as I’ve said I do; that is perhaps worthy of another rant, but this is not the place). So how do I do it?

The solution: a birthday present. It is for precisely this purpose that C++ provides namespaces, which are effectively compiler-enforced prefixes on all names defined within them, distinguishing names that would otherwise collide. You can also ask the compiler to disregard the prefix by saying you are using a name or a namespace. The functions therein then enjoy a birthday, since that’s when they get their names.

I actually always thought that the namespace mechanism was a little overwrought: for example, the entire standard library has std:: plastered on everything, and the first thing many people write in their programs is using namespace std;. But then I saw a piece of code from the Boost library that does a clever thing with the += operator to let you assign to lists in a natural-looking way; of course, they stick it all in the namespace boost::assign.

Now, for most functions or objects in a namespace, it doesn’t matter whether you say using or not: the std::cout stream is just as good as a bare cout would be, if a little wordier (fortunately, it does not prefix std:: to everything printed to it!). If that Boost facility had been an ordinary function called add_to(), then I could call it as boost::assign::add_to() without first using the namespace. But alas, it is an operator.

Operator overloading in C++ is sort of subtle. The definition looks much like any overloaded function: it has a name, say operator* for multiplication or operator+= for the sum-assignment operator, and it has arguments and a return type. But the semantics are not flexible: for example, both of those operators take exactly two arguments, the one on the left of the symbol and the one on the right, and you can’t change where they go. The brackets operator operator[], which is supposed to mean array indexing, can take only one argument in the brackets; even if you put commas in and try to pretend you are indexing a multidimensional array, the built-in comma operator will evaluate each expression in turn and discard all but the last, so only the final value is used as the index. (There are tricky ways around this by overloading the comma operator, of course.)

That last one was a problem for me in defining the matrix class, by the way, since of course I want multidimensional indexing, and furthermore I don’t want the overhead of first copying out an entire row for the first index and then looking up the desired column in it for the second (which is what would happen if my operator[] returned rows by value, C-style: in the convention array[i][j], array[i] is the whole i-th row). I ended up overloading operator() instead, the function-call operator, which alone among them all has the same semantics as an actual function call: it does not treat commas as a signal to ignore the first few values, but as separators in a genuine argument list.

Anyway, the point is that operators, even when overloaded, are written like and behave like the native versions, though they may do something different. In particular, you can’t attach a namespace prefix when calling them: if you define operator* inside some namespace, you cannot invoke it with the * symbol from outside (and argument-dependent lookup won’t rescue it here, since my matrix class does not live in that namespace). Unless, of course, you say you are using that namespace or that name: then suddenly the function loses its prefix, the compiler can call operator* directly, and it overloads whatever multiplication operators happen to be present, for the types you defined it on. For example, matrices.

So, to review: the code

template <typename T>
matrix<T> binary_recursive (matrix<T> first, matrix<T> second)
{
    matrix<T> operator*(matrix<T> m, matrix<T> n)
    {
        return naive_product<T>(m,n);
    }
    ...
}

is invalid (C++ forbids defining one function inside another), but to simulate it, I can just write

namespace binary_recursive_ops {
    template <typename T>
    matrix<T> operator*(matrix<T> m, matrix<T> n)
    {
        return naive_product<T>(m,n);
    }
}

template <typename T>
matrix<T> binary_recursive(matrix<T> first, matrix<T> second)
{
    using binary_recursive_ops::operator*;
    ...
}

and get the overloaded matrix product anyway. (The namespace needs a name of its own, by the way: a namespace and a function with the same name cannot coexist in the same scope. And part of that other rant about templates is how there is no way of templating more than one declaration at a time, so you have those stupid template <typename T>'s all over the place.)

As a coda to this, I should observe that nested function definitions are invalid in C as well, and C doesn’t have namespaces. However, this is no problem, since C also doesn’t have templates that force you to put things in the global scope and doesn’t have overloaded operators that require a particular syntax to call. So you can just define the function you want to be defined only in one place right there, immediately before the function that uses it, and not use it anywhere else. In many ways, C is a much more elegant language than C++.

This entry was posted in C++, Computers, Programming.

3 Responses to Birthday operators

  1. Sam I. says:

    Recursive-descent multiplication is faster than naive matrix multiplication on a machine with a cache (in fact, it is cache-oblivious). Memory lookups are not O(1) these days …

    • ryancreich says:

      Hi Sam! One of the fun features of OOP is that you can never tell where your slowdowns are, because the boundary between any two symbols or words may be some deep operation. That is to say, my recursive-descent algorithm is embarrassingly slow, certainly much slower than naive multiplication (perhaps for 1e5x1e5 matrices things would be different, but it would take an hour to check). I’ve already learned how to use a debugger; perhaps now I will learn how to use a profiler?

      • Sam I. says:

        Profilers are useful. In my experience, the first place to look for inefficiencies is in the allocator. If you’re creating a lot of instances of a class only to immediately destroy them, and the class’s constructor calls new, then you’re in trouble. Unfortunately, the semantics for copying objects in C++ is a mess. It might be better to tune the algorithm in C. Of course, the best thing to do in this case (although the least instructive) is to use a library. Your computer probably has a GPU tuned for very fast linear algebra. As long as you don’t need to copy stuff on and off it all the time, it can be a big win.

        There are some MIT lectures online about cache-oblivious algorithms (http://www.catonmat.net/blog/mit-introduction-to-algorithms-part-fourteen/). Even matrix transposition is faster if you divide and conquer!
