
Tuesday, April 01, 2014

Pi, Harbor Steps

Several years ago, Seattle's Harbor Steps temporarily hosted a giant sculpture of the letter/number pi. I think it was connected to the nearby Seattle Art Museum somehow, but I haven't been able to figure out much of anything else about it: who created it, what it was officially called, where it went, etc. Anyway, I was going to post this on March 14th (3/14, in the US date format) for Pi Day, but I was on a tough work deadline & totally forgot about Pi Day. So then I figured, hey, I could use it for April Fool's Day instead and say it's because pi actually equals 4. I can't seem to figure out how to make the joke work, though, probably because it's a dumb joke. It's barely a joke at all, to be honest. Besides, "pi equals 4" is a longstanding cheesy internet gag, and there's a nice Vi Hart video that explains how it works:

Monday, February 20, 2006

SuperHyperRedux

Some followup notes about my recent math-heavy Hercules vs. Hydra post. If you didn't find that one very interesting, this one will definitely bore you to tears.

  • Here's a link I ought to have included, giving an explanation of Goodstein Sequences, the basic idea behind the Hydra problem. (There's a small code sketch of a Goodstein sequence just after this list.)
  • A point I don't think I made sufficiently clear in that post is that, while the basic notion of using hierarchies of functions to define ordinals is widely shared, the higher up you go the less agreement there is on what the hierarchies ought to look like. There's no one natural or universally accepted representation, at least not one that's been identified to date. Veblen's phi hierarchy, introduced back in 1908, seems to have remained in general use, but there's no reason you couldn't ignore it entirely and define your own equally valid representation if it suits your needs better. This recent paper discusses a number of concerns, including the ongoing problem of "proper" ordinal notation. As you can see in that paper, the state of the art as of ten years ago was well beyond anything I mentioned in that previous post. I'll try to fill that gap if I ever get a better handle on the material. Right now it's Greek to me. One interesting tidbit is that some recent notational innovations go beyond the existing trick of using 'big' ordinals (w1/Omega/aleph-1 and larger) to help describe "normal", countable ordinals (generally beyond Gamma_0), which makes some mathematicians uneasy, or at least they find the practice unsatisfying. I think the justification has something to do with the big ordinals merely being indexes into the hierarchy of functions, and not being part of the functions themselves. I think. Regardless, the state of the art moves beyond alephs into the land of large cardinals, up to at least supercompact cardinals. Which is pretty damn big, if you ask me.
  • I'd intended to spend some time pondering the relationship (if any) between ordinals and hyperreal numbers in that post, and just didn't get to it. It's not a topic I've seen a lot of discussion about, although here's a somewhat recent Usenet thread discussing the idea. I think for the most part it just doesn't come up in the course of normal mathematical work. If you're using hyperreals, you don't really need or care about ordinals, and vice versa, so positing any relationship between the two would just be unnecessary.

    To me, things would seem generally nicer and tidier if you could regard the countable ordinals as a subset of the hyperreals, for example. Or more precisely, as a subset of the hypernaturals, themselves a subset of the hyperreals. Identifying the sequence (1,2,3,...,n,...) with the ordinal omega (denoted here by 'w') doesn't seem like that big of a stretch. Venturing off to one side for a moment, the surreal numbers are intended to be all-encompassing, and it's been stated that the surreals include the class of all ordinals, plus the set of hyperreals, with the surreal number (1,2,3,...|) being explicitly identified as 'w'. Surreals and hyperreals are similar enough that it seems logical (to me) to identify the surreal 'w' with the hypernatural (1,2,3...n...), and call it 'w' as well. Further, you could identify the sequence f(n) = (a_1,a_2,a_3,...) as f(w), so that n+1 = (2,3,4,...) = w+1, n^2 = (1,4,9,16,25...) = w^2, n^^n = epsilon_0, and so forth.

    So you could do this, but let's take a step back and merely argue that ordinals and hypernaturals look pretty similar in a lot of ways, so that there may be interesting ideas to carry back and forth between them. I say "merely" because they aren't exactly the same, and you run into a few rough edges if you try to fit them together.

    Recall that arithmetic works differently on hyperreals than it does on ordinals, and then there's a third arithmetic for cardinal numbers. And further, IIRC arithmetic on surreals isn't quite the same as arithmetic on hyperreals, either. One could take the position that these are merely different views of the same objects, expressing various properties that happen to coincide and give the same answers so long as we limit ourselves to finite numbers. But because these arithmetics genuinely differ, there are several kinds of hyperreal that aren't also ordinals. Obviously infinitesimals are out (no 1/w), as are any non-"integers" (no w+pi). 2^w and w!, in hyperreal terms, both define "integers" that aren't also ordinals, since in ordinal terms both expressions are equal to 'w'. No such thing as a negative ordinal, either. And infinite numbers less than 'w' are right out, like w-1. Meanwhile, the ordinals stretch far, far past w1, the first uncountable ordinal, which appears to be the upper bound on the usual realm of hyperreals, which is why I'm limiting the discussion to countable ordinals.

    A correspondence like this does let you ask some (possibly) interesting questions, like what hyperreal sequence, in other words what function from N to N, would correspond to Gamma_0, or various other 'large' ordinals. And going the other way, one of the common objects of study in the area of function hierarchies is the so-called "slow growing hierarchy", where G_0(n) = n+1, G_(m+1)(n) = G_m(n)+1. Clearly, if you can increment by less than 1, you can get a hierarchy that increases much more slowly, even infinitesimally if you like, say G_0(n) = n + 1/n, G_(m+1)(n) = G_m(n) + 1/G_m(n), so that I suppose you'd eventually get w+1/w, w^w+(1/w^w), Gamma_0+(1/Gamma_0), etc., unless perhaps that hierarchy converges to a limit somewhere. Either way, I don't know how useful it would really be.

    Also, it's possible everything I just said is complete nonsense.
  • On a different note, I've finally managed to piece together part of the puzzle about "supernatural numbers", for which I'd come across several apparently incompatible definitions. The PlanetMath definition just referenced doesn't mention this, but the same product-of-primes thing is used for Godel numbering. So it becomes clear that these are the same supernatural numbers that Hofstadter talks about, but doesn't properly explain or give references for, in Godel, Escher, Bach. GEB goes on to mention infinitely large supernatural numbers as an example of nonstandard arithmetic or number theory. Again, the book doesn't explain what's meant by "nonstandard", but thanks to the magic of the Internet it's possible to find out. The axioms of Peano arithmetic were intended to give a succinct, complete, and unique definition of the natural numbers, but the Lowenheim-Skolem theorem demonstrates that the axioms do not, and cannot, uniquely define the natural numbers, and it's possible in principle to create sets of objects, of any cardinality you like, that fulfill precisely the same axioms. And you can't get to a unique definition by adding more axioms, or using a different set of axioms. It's just an unavoidable fact of life. Instead of freaking out or getting depressed by this result, the math world simply decided to call the desired model "standard" and all the others "nonstandard", and study all of them. The supernaturals are just one nonstandard model, one which includes the naturals as a subset. The word "nonstandard" as it appeared later in connection with hyperreals is an analogy: Nonstandard analysis is to regular analysis as nonstandard number theory is to regular number theory. Both add additional "useful" numbers to the usual set, supernaturals being "useful" because, as infinite products of primes, by definition they fall under the fundamental theorem of arithmetic, so you can do all the usual number-theoretical stuff with them, I guess. Maybe some of them are even prime themselves. That's a weird idea.

    If we're going to assume the supernaturals fit into the hyperreals (which may or may not be justifiable, see the previous discussion), I think they'd all fit within the range starting at 2^w and less than w^w, which would mean there'd be no overlap between them and the ordinals. Although again, this is just a guess.
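
As promised in the first bullet, here's a minimal Python sketch of how a Goodstein sequence gets computed (the function names are my own; note that starting values of 4 or more produce sequences that climb to absurd heights before their eventual, theorem-guaranteed collapse to zero):

    def bump(n, b):
        """Write n in hereditary base-b notation, then replace every b with b+1."""
        if n == 0:
            return 0
        e = 0
        while b ** (e + 1) <= n:
            e += 1
        coeff = n // b ** e
        rest = n - coeff * b ** e
        # the exponent e is itself rewritten in hereditary base-b notation
        return coeff * (b + 1) ** bump(e, b) + bump(rest, b)

    def goodstein(n, steps):
        """The first few terms of the Goodstein sequence starting at n."""
        seq, b = [n], 2
        for _ in range(steps):
            if n == 0:
                break
            n = bump(n, b) - 1   # bump the base, then subtract 1
            b += 1
            seq.append(n)
        return seq

    print(goodstein(3, 6))   # [3, 3, 3, 2, 1, 0] -- dies almost immediately
    print(goodstein(4, 4))   # [4, 26, 41, 60, 83] -- won't hit zero for a very, very long time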

Wednesday, February 08, 2006

Hercules vs. Hydra



While we're on the topic of Greek mythology, which we still sort of are due to the recent post about Echidnas, I recently came across a fun bit of math known as the "Battle of Hercules and the Hydra". As the ancient myth goes, one of the 12 labors of Hercules was to go kill the Hydra, a nasty creature with lots and lots of heads. Whenever you cut its heads off, the Hydra would just grow more of them. It's one of the better-known Hercules stories; snakes and snake-like creatures look good in art, and they're easy to draw, so the battle's been a popular topic for artists over the centuries. Here's John Singer Sargent's take on it. Here's a version by Francisco de Zurbarán, a Spanish painter of the 1630s. And one by the Italian Renaissance artist Antonio del Pollaiuolo. And that's not even the tip of the iceberg.

But the big reason I think the story's so well known is that we've all been in situations like that at some time. Well, not precisely like that, but situations where the problem seems to keep piling up and getting worse despite, or possibly because of, one's best efforts to solve it. It turns out that the situation is less hopeless than you might think, at least in theory: Hercules always wins eventually, by simply standing there lopping off Hydra heads, if he just sticks to it long enough, and doesn't get discouraged by the rapidly mounting number of heads early on. Hercules didn't actually need that torch-wielding assistant, after all.

This paper quotes the exact mathematical definition for the Hercules vs. Hydra game:

At stage n (n a positive integer), Hercules chops one head h of the hydra H. If the predecessor of h is the root, nothing happens. Otherwise let h1 and h2 be respectively the father and the grandfather of h. The hydra sprouts n copies of the principal subtree determined by h1 without the head h from the node h2 (the roots of the new copies are immediate successors of the node h2). Hercules wins the battle if he reduces in a finite number of attacks the hydra to its root.


Another paper mentioning the problem can be found here.

In short, most of the time you end up with more heads than before, but not always, and that's apparently enough to guarantee that Hercules will win in the end. It'll take him a good while, but hey, he's a demigod and all; he's got the time. You'll notice that the problem as formulated isn't precisely the same as the mythological account. If anything, it's even more of a challenge, since each lopped head can be replaced with ever more copies of an entire subtree (n copies at stage n), not just two new heads. The reason for the difference is that the problem was formulated not to resolve a hair-splitting dispute over an ancient myth, but to express certain ideas in the mathematical discipline of proof theory. Namely, the battle is a "natural" example of a theorem which is true but not provable within the normal realm of addition, multiplication, and exponentiation (known in the trade as "Peano Arithmetic"), therefore requiring the use of stronger axioms. If I understand the explanation properly, anyway. The whole discipline is highly esoteric, and discussions aimed at anyone below grad student level are few and far between. And even then it seems to be assumed you've got a faculty advisor holding your hand and guiding you through the material.
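
Just to make those rules concrete, here's a small Python sketch of the battle (the encoding is my own: a hydra is a nested list, every sub-list is a node, every empty list a head, and this particular Hercules always lops the leftmost head):

    import copy

    def chop_leftmost(root, n):
        """Chop the leftmost head and apply the stage-n regrowth rule."""
        path, node = [], root
        while node:                 # descend first-children until we hit a head
            path.append(node)
            node = node[0]
        parent = path[-1]
        del parent[0]               # off with its head
        if len(path) >= 2:          # head wasn't attached to the root:
            grandparent = path[-2]  # sprout n copies of the maimed subtree
            for _ in range(n):
                grandparent.append(copy.deepcopy(parent))

    def battle(hydra):
        """Fight to the death; returns the number of stages needed."""
        stage = 0
        while hydra:
            stage += 1
            chop_leftmost(hydra, stage)
        return stage

    print(battle([[[]]]))      # 3: a three-node hydra dies in three chops
    print(battle([[[[]]]]))    # 23: one level taller already takes a couple dozen

The theorem says the battle terminates no matter which head Hercules picks at each stage; the leftmost-head strategy here is just the simplest one to code.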

It's reasonable to wonder why I'm bothering to poke around in all this mysterious abstract stuff. Well, it turns out that proof theorists are among the very few people on the planet who have a genuine need for really large infinite ordinal numbers, which is a subject I find weirdly fascinating. I'm not even going to try to explain what these numbers are used for, since I'm sure I'd bungle the attempt. For the time being, we're just interested in the numbers themselves, not so much what they're used for. Think of it as a grownup version of the playground "who can count the highest?" game.

First a quick note on notation: When you see a 'w', it's really supposed to be a lower-case 'omega', which looks sort of like a curly 'w', and which represents the first infinite ordinal. It's the plain vanilla one that I imagine everyone has in mind when they say "infinity", and it's followed directly by w+1, the legendary "infinity plus one" you knew existed back on the playground in third grade, before your math teacher told you otherwise. Following typical comp-sci notation, w*2 is omega times two, and w^w is omega to the omegath power. When you see an underscore, it indicates something's a subscript. And when Greek letters are named, the naming's sort of case sensitive: "epsilon_0" indicates a lower-case epsilon with a subscripted zero, while "Gamma_0" indicates an upper-case gamma, for example.

Most discussions of ordinal numbers tail off after you get to epsilon_0, like the Wikipedia article referenced above. If you're lucky, you'll get a brief mention that there are additional epsilon numbers -- epsilon_1, epsilon_2, epsilon_w, epsilon_epsilon_0, and so forth, just to reinforce that the process goes on forever. Recall that epsilon_0 is defined as w^w^w^w... , and it's the point where you can't get any further with just finite ordinals, 'w', '+', '*' and '^'. Your system's run out of steam, and adding another w to the already-infinite stack of w's doesn't change anything. Epsilon_0 = w^epsilon_0. We've reached what's known as a fixed point. You'd think this would be a bad thing, but far from it. They're solid gold. The ordinals, even just the countable ordinals, are an unimaginably vast collection, and fixed points of various functions are among the rare guideposts in the territory. So instead of bailing out, we just introduce the new symbol epsilon_0, and we can take its successor (epsilon_0+1) and otherwise add, multiply, and exponentiate to our hearts' content, until we reach a new fixed point, namely epsilon_1. And so on, until the inevitable, infinitely nested epsilon_epsilon_epsilon... , also known as the first "critical epsilon number", and as phi_2(0). What you're looking at there is an example from the hierarchy of Veblen functions, which basically take the fixed point thing and abstract it up another level. Each phi function enumerates the fixed points of the previous phi function. You can start out a Veblen-style function hierarchy with whatever function you like, but classically you start out with phi_0(a) = w^a, i.e. 1, w, w^2, ... w^w, etc. And phi_1 enumerates the fixed points of phi_0: epsilon_0, epsilon_1, epsilon_2, and so forth, on and on, until finally we hit that infinite stack of epsilons mentioned earlier, the first fixed point of phi_1, giving us phi_2(0). And then we've got phi_3, phi_4, phi_w, phi_(w^w), and so on.
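
For reference, here's the usual modern definition of the Veblen hierarchy, written out in LaTeX since the ASCII notation gets cramped at this level (this is the standard textbook formulation, nothing of mine):

    \begin{align*}
    \varphi_0(\alpha) &= \omega^{\alpha} \\
    \varphi_{\beta+1}(\alpha) &= \text{the } \alpha\text{-th fixed point of } \varphi_{\beta} \\
    \varphi_{\lambda}(\alpha) &= \text{the } \alpha\text{-th common fixed point of all } \varphi_{\beta},\ \beta < \lambda \quad (\lambda \text{ a limit})
    \end{align*}

In this scheme phi_1(0) = epsilon_0, phi_2(0) is the first critical epsilon number, and Gamma_0 is the first ordinal a satisfying phi_a(0) = a.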

This may be a good point to pause a moment and try to imagine, for example, the size of the gap between phi_42(666) and phi_42(667). That's 41 levels of nested fixed points, which themselves are an incredibly-widely-spaced set of little corks bobbing in a vast sea of 'normal' ordinals. Ok, maybe it's better not to try to imagine that. Perhaps the only reasonable thing to do at this point is forget that we're talking about infinity at all, and just think of it as an exercise in abstract symbol manipulation, with levels upon countless levels of nested recursion. Which is still fun to think about, at least for a computer geek like myself. We certainly aren't going to resolve the question of whether these rarefied numbers we're talking about here are "real" or not. It's not that I'm against philosophical debates, per se. It's just that I usually have very little to add. If I was forced to pick a position, I'd incline toward a sort of Platonist notion that infinitely large and small numbers are somehow reflective of an underlying reality independent of human observers. Real, but perhaps not truly graspable by anybody with a mere few pounds of grey matter to work with. But that's just a personal hunch, and I can't imagine how you'd go about testing it.

In any case, the phi_n hierarchy itself runs out of steam eventually, and its limit is the ordinal Gamma_0. Gamma_0 is a milestone in that it's the first "impredicative" ordinal. I don't really understand the implications of this very well, but I gather that at least some schools of thought hold it's something to be avoided if possible. Or at least not overly enjoyed.

Beyond Gamma_0, the landscape is less clear (to me, anyway). Well, for starters you have Gamma_0+1, I guess, and then you repeat the whole process that got you to Gamma_0 until you run out of steam again, at which point you're at Gamma_1, then Gamma_2, and so forth. Proper ordinal notation for "large" ordinals seems to be the subject of ongoing research and debate. A Google search on the term "ordinal notation" will bring up lots and lots of hits. Here's one somewhat comprehensible treatment of the subject. Here's a decent article titled "Transfinite Ordinals and their Notations: For the Uninitiated". [Note: Like a lot of academic papers, it's on the net in PostScript (not PDF) format, which is sort of inconvenient for most people. One way to go is to save it to a file and run it through a PS-to-PDF converter, like this one.]

The referenced article describes Bachmann's theta notation, which is a more powerful extension of what we've looked at so far. Theta_0(0) = 1, Theta_1(0) = epsilon_0, Theta_2(0) = phi_2(0), Theta_Omega(0) = Gamma_0, and that's just the beginning. Note the capital 'O' in Omega there. This is not a reference to the familiar 'w', the first infinite ordinal, but rather the first uncountable ordinal, a.k.a. the cardinal number Aleph_1. Note: I'd originally had a comment here saying that all depended on what you thought of the Continuum Hypothesis, but that was a momentary bit of boneheadedness on my part. Ordinals actually don't have anything to do with CH, as far as I can tell. In any case, cardinal numbers as such aren't important to the present topic. The important, and weird, thing here is that Omega (alternately known as w1/omega_1) is much larger than the countable, recursive ordinals we're talking about right now, but it, and even larger ordinals, are necessary as indexes into the massive Theta hierarchy. We're no longer defining numbers as combinations of smaller, simpler numbers; no, at this point to describe a given number we need to assume the existence of a vastly larger number, and use that number to help us describe the number we really care about. Some people are ok with this, others get the philosophical heebie-jeebies.

Since not everyone is comfortable with monkeying around in the higher number classes just to come up with names for plain old countable, recursive ordinals, there are a number of other approaches one can use. The simplest is just to extend the classical phi hierarchy and create ternary, or even n-place Veblen functions. The notation usually gets modified a little, dropping the function notation and subscripts and such, so it looks like you're naming a number, not a function, i.e. instead of something like phi_1(0,0) we just have phi100 -- which just happens to be yet another name for our friend Gamma_0. It might help to think of the levels of nesting as the ordinal equivalent of decimal places, except that each 'place' can be as big as we like: phi110 (= Gamma_1), phi[w]00, phi[epsilon_0]00, etc., where the square brackets are just something I'm adding for clarity since I've been trying to avoid using actual greek letters. Anything in square brackets is really a single "digit". You don't often see phi extended past 3 places, but apparently it can be done if you're so inclined. I suppose the next logical step would be to give a count of the number of places, assuming zeros for trailing "digits". This is kind of the equivalent of scientific notation, so let's say phi100 = phi1^3 (in a trivial case), with phi6^4 = phi6000, phi6.6^4 = phi6600, phi1^10, phi1^100, ad infinitum. I don't know enough about the n-place Veblen thing to be able to say if there's a limit to this madness. Can one have a non-finite number of places? Is there such a beast as phi1^w? phi1^[phi100]? phi1^[phi1^[phi1...]]]? I really don't know.

There are also more complicated notations out there: Something called "Klammersymbolen", which translates simply to "bracket symbols", for one. I don't really understand how they work, but the notation is quite complex: There's an upper and lower row, each of which specifies a sequence of ordinals, with the whole assembly surrounded with large round brackets. There's a further extension of the idea called "ordinal systems". Concrete examples of either are hard to come by, so I don't know what the equivalent Klammer notation would be for, say, phi[epsilon_0]^[10^100].

Continuing on, we run into a few semi-important ordinals: The Ackermann ordinal (Theta_Omega^2(0)), and the small (Theta_Omega^w(0)) and large (Theta_Omega^Omega(0)) Veblen ordinals. The literature varies on the exact notation for all three, and it's unclear to me whether it's the Ackermann ordinal or the small Veblen ordinal that marks the limit of finite n-place Veblen notation -- in other words, which of the two is equal to phi1^w. Either way, this does indicate that the bigger-cardinal notation is way more extensible than the idea of simply adding more places to your phi notation, all philosophical objections aside.

The next really important landmark out there is an entity known as the Howard (or Bachmann-Howard) Ordinal, not to be confused with "Bachmann Turner Overdrive", of course. A lot of the discussion I've seen treats this number like some sort of Holy Grail, without explaining why. So I know they think it's very important for some reason; I can say that much. In the Theta notation, it's Theta_(Omega^Omega^Omega...)(0), or Theta_epsilon_(Omega+1)(0). Which really tells me nothing useful, and leaves me scratching my head, I have to say.

As an example of how the notation can trip you up, this post used to assert that the Bachmann-Howard ordinal was Theta_(Omega_Omega_Omega...)(0) rather than the correct Theta_(Omega^Omega^Omega...)(0). That may seem like a trivial difference, but the former is vastly larger, and gets us into the thrilling world of large cardinals, specifically (I think) the first inaccessible cardinal (also known as "Aleph_Aleph_Aleph...").
If we're going to take the step into large cardinal land, we can go on iterating the Theta hierarchy as long as we need to, letting the big cardinal do all the heavy lifting. Alternately, Setzer's "ordinal systems" approach is supposed to get you well beyond the Howard ordinal without ever referring to any number larger than the one being described. If I had examples of this, I'd show them, but I don't. I understand that the ordinal systems notation is an extension of the already-complex Klammersymbolen, so even if there are examples out there, they probably aren't concise.

And far beyond that we have the Church-Kleene ordinal, often denoted by w1CK. This is the first non-recursive, or non-computable, ordinal, the absolute limit of what it's possible to do recursively. Think of a rough analogy with the Busy Beaver functions discussed a few days back (although BB(x) is merely an example of a noncomputable function, not the actual upper bound on recursive functions). Any scheme you come up with for counting really, really high is guaranteed to run out of steam before you get to w1CK. It's kind of like the lightspeed of counting. The higher you go, the harder it gets to go even higher, and you'll still never get to w1CK by counting, no matter how hard you try, no matter what clever tricks you devise. Which is not to say we're out of ordinals. Oh, no, far from it. It seems that w1CK is just the first of something called "admissible ordinals", so that there's also a w2CK, w3CK, and so on. The fundamental thing here is that while you can count countable ordinals as long as you like, the set as a whole is uncountable. Unlike the (also uncountable) real numbers, which are uncountable due to their overwhelming density everywhere on the number line, ordinals stick the uncountability out on one end. The reals you can't count anywhere, but ordinals you can count, and count, and count, and it'll just never be enough. Not only can you not count to Omega (the next higher cardinal), you can't even count to a lesser milestone on the way there (w1CK). And then there's still an unimaginably huge sea of ordinals separating w1CK and Omega.

Even then, the ordinal Omega (or omega_1) (a.k.a. Aleph_1) still lies off in the distance, far, far away, and then there's omega_2, omega_3, omega_omega, stack_of_omegas, and on, and on, and on.

All of which really makes me want a beer. And I see -- in a remarkable coincidence -- it's right about time for happy hour. So I think I'll stop right here for the time being.

Updated: Two subsequent posts of mine sort of elaborate on the material here. So if you liked this one, you'll love SuperHyperRedux and No Name. To teh Megist(r)on is also somewhat related.

Sunday, January 29, 2006

To teh Megist(r)on


In a number of recent posts, I've gone and totally geeked out over several theories of infinitely large and small numbers. In comparison, the topic of extremely large finite numbers just doesn't seem all that interesting. But I recently remembered an incident from when I was a small child, when a grade school teacher asked our class to name the largest number we could think of. The then-current Guinness Book of World Records insisted the "largest number" was something called "megiston", which was denoted by the number 10 in a circle. So that's what I answered, and I seem to recall that I either got a lower grade, or was publicly scolded by the teacher for making up numbers that didn't exist. Apparently the teacher didn't regard the Guinness Book as the final authority on these matters, even though I think I'd actually gotten my dog-eared copy at one of those Scholastic book fairs they used to have all the time. I remember the incident because it was one of those early inklings that either a.) grownups aren't always right, and/or b.) the stuff you read in books isn't always true. I also remember being frustrated because at the time I was absolutely sure I was right, but nobody would take me seriously, just because I was a little kid, and everybody knows kids don't know anything.

So I was thinking about this recently, and it occurred to me that I'd never seen the word "megiston", or any numbers in circles, in any context since that time. Which made me curious. Was this exotic-sounding number just something dreamed up in the Guinness-drenched offices of the official world record people? Was the Book wrong? Is it possible that I just imagined the whole thing, or misremembered it, or misunderstood what the Book was talking about? Thanks to the magic of Google, I now know that I was right, and my idols at the massive beer conglomerate had not betrayed me. What a relief that was, let me tell you. There is such a number as "megiston", though for some reason people occasionally call it "megistron" with an 'r', which is obviously incorrect. The ten-in-a-circle is one example of something called Steinhaus-Moser Notation, one of several different ways of expressing really humongous numbers, ones that are so big they can't be easily expressed in terms of exponents. Others include Donald Knuth's up-arrows, John Conway's chained sideways arrows, hyper operators, and Ackermann functions. The basic idea in each case is that, since mere exponents aren't up to the job, we'll just define more powerful operations, generally by iterated exponentiation, with the first and simplest higher-level operator sometimes known as tetration. This is just the logical extension of the process where exponentiation is iterated multiplication, and multiplication, in turn, is just iterated addition. These various notations are attempts to define a symbolism general enough to express incredibly large numbers in a reasonable number of symbols. Although, since the integers are infinite and all, any finite symbolism you come up with is eventually going to run out of steam, no matter how many nested levels of recursion you use, and no matter what kind of clever notational tricks you can come up with. In the end, the best you can really hope for is that your notation will be sufficient for any numbers you think you're likely to need, and won't be too clumsy to use for actual work. Despite the memorable, mysterious-sounding names and mystical-looking notation, Steinhaus-Moser notation seems like it would be awkward in "everyday" usage, in the unlikely event that you had to deal with quantities that big on a daily basis. Kind of a shame, really. And then, none of the notations seem to be entirely adequate for expressing certain really big numbers, like Graham's number. (Which, incidentally, I think has since replaced megiston in the Guinness Book.)
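
Here's roughly how the Steinhaus-Moser polygon scheme works, as a Python sketch (function names are mine; in Steinhaus's original version the circle is the top-level shape, which is all we need for megiston):

    def triangle(n):     # n in a triangle: n^n
        return n ** n

    def square(n):       # n in a square: n inside n nested triangles
        m = n
        for _ in range(n):
            m = triangle(m)
        return m

    def circle(n):       # n in a circle: n inside n nested squares
        m = n
        for _ in range(n):
            m = square(m)
        return m

    print(triangle(2))   # 4
    print(square(2))     # triangle(triangle(2)) = triangle(4) = 256
    # mega = circle(2) and megiston = circle(10) are hopeless to evaluate:
    # even square(3) = triangle(triangle(triangle(3))) has roughly 10^40 digits.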

Another way to think about things like the Ackermann function and the like is that they're functions that increase much more rapidly than mundane ones like x^2, x!, or even x^x. Conversely, you've got things like the inverse Ackermann function (also discussed in the Wikipedia article linked to above) that increase far more slowly than, say, your garden-variety logarithms. And remarkably, the inverse Ackermann function a(m,n) has real-world applications, including some in the computer science world. Here's a report about a fancy minimum spanning tree algorithm that runs in O(m a(m,n)) time, which is pretty schweeet. If you can't find an O(n) or better algorithm, and usually you can't, this is pretty much the next best thing you can hope for. If you have no idea what I'm talking about, here's a primer on Big O notation from a CS standpoint, and another one given from a more purely mathematical perspective. My impression has been that algorithms coming from a simple, "naive" attempt to solve a problem tend to be O(n^2), bubble sort being the classic example, and it can take a fair bit of head-scratching to come up with something more efficient. O(n^2) isn't always unacceptable, depending on the values of n you're likely to run into, but as they drum into you as a first-year CS student, you shouldn't settle for it if you don't have to. However, back when I had the inevitable C coding assignment to write a little program that implements 4 or 5 sort algorithms, for fun I decided one of them would be bogosort, known as "the archetypal perversely awful algorithm", which runs in O(n*n!). That's pretty bad. I ended up putting in lots of printfs so the user could see it "working", because it's so unlikely they'd have enough patience to wait for it to complete. The Wikipedia article referenced mentions an even worse variant known as "bozosort", although it probably still runs in some variant of factorial time. At the office a while back we got into a discussion about whether it's possible to write a sort that runs in worse than factorial time, without having the code take any pathological steps to deliberately avoid sorting the data properly. We couldn't come up with anything off the tops of our heads that might be worse than the repeated shuffle-and-test process. I think you'd have to be fairly ingenious to invent an O(n^n), or (even better) O(A(n)) sort. You probably wouldn't get a medal, but you'd certainly merit a writeup on Slashdot. And probably here as well, FWIW.
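
For the curious, here's roughly what that shuffle-and-test monstrosity amounts to, redone as a Python sketch (minus the progress printfs):

    import random

    def is_sorted(xs):
        return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

    def bogosort(xs):
        """Shuffle and test until sorted: expected running time O(n * n!)."""
        shuffles = 0
        while not is_sorted(xs):
            random.shuffle(xs)
            shuffles += 1
        return xs, shuffles

    print(bogosort([3, 1, 4, 1, 5]))   # fine for 5 elements; don't try 15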

Ahh, but it gets better, and weirder. A good article I came across is titled "Who can name the largest number?" The author, Scott Aaronson, starts out discussing the answers you get when you ask kids this question -- all of whom gave answers vastly smaller than mine, I'm happy to say -- and then moves on to discuss things like our friend the Ackermann function, and then brings up a family of functions collectively known as Busy Beaver functions, one of the goofier names you're likely to run across in the math or CS worlds. The exact definition is fairly technical, involving various questions about Turing machines, but the really key point is that the functions are non-computable, meaning that although a function BB(n) is well-defined, there's no algorithm that will calculate its exact value for an arbitrary n. All you can do is analyze the small cases, by hand or with software, and try to determine the values experimentally. The really bizarre part: I'm not sure if this is the cause of, or an effect of, the functions' non-computability, but it's been proven that BB functions increase more rapidly than any function that can be recursively defined (exponents, Ackermann, numbers-in-polygons, whatever). I find this extremely puzzling and counterintuitive, and I have a lot of trouble imagining any sort of upper bound on how rapidly a "normal" function can increase, no matter how many nestings and such that you do. But so be it, I guess. The Aaronson article mentions that once you've got BB functions, you can define even bigger functions in terms of them, but everything past BB is squarely in the realm of non-computability, continuing on up forever from there. So it seems that our familiar world of well-behaved, easily computed mathematical expressions is just the merest tip of the iceberg, and our difficulty in grasping what else is out there isn't just due to difficulties in notation. Wow. Freaky.
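
To make the Turing machine business slightly more concrete, here's a toy machine runner in Python, loaded with the known 2-state, 2-symbol champion: it writes 4 ones and halts after 6 steps, which is the best any 2-state machine can do.

    # (state, symbol read) -> (symbol to write, head move, next state)
    CHAMPION_2 = {
        ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
        ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),   # 'H' = halt
    }

    def run(rules, max_steps=10**6):
        tape, pos, state, steps = {}, 0, 'A', 0
        while state != 'H' and steps < max_steps:
            write, move, state = rules[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
            steps += 1
        return sum(tape.values()), steps   # (ones written, steps taken)

    print(run(CHAMPION_2))   # (4, 6): BB(2) = 4

The trouble, of course, is that finding the champion for a given n means somehow ruling out all the machines that never halt, and that's precisely the part no algorithm can do in general.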

Some related functions with far less astronomical values are discussed here. The author of that page is Australian, hence the names "placid platypus" and "weary wombat", and where BB functions look for the maximal values of various things relating to Turing machines, his family of Australian fauna functions look for the minimal values. And then of course you've got inverse Busy Beaver functions. The link is to the sole research paper I can find (via Google) that mentions the subject. If they need a flashier, sillier name, I'd propose "Sleepy Sloth", but then, I'm not a mathematician, so what do I know? :) Anyway, as the inverse of a BB, it seems like an SS would increase more slowly than any computable function. It would still tend to infinity, but at a nearly imperceptible rate.

Now, the main underlying reason I've been doing all this reading lately is that I've been trying to get a better handle on hyperreal numbers and related beasties. And it turns out that one way you can look at the hyperreals is to view them in a purely algebraic sense, as a field of real-valued functions. It's a little hard to get your brain around at first, but basically each real number can be thought of as a constant function with the value of that real number, while functions that increase or decrease without bound are infinite nonstandard numbers, and functions that converge on some horizontal asymptote are infinitesimals. And functions that oscillate get us into that confusing business with ultrafilters, so let's not go there for right now. Really exploring this will have to wait for another post some time, but it's interesting to consider how BB and related functions fit in. In the world of the hyperreals, BB(n) and SS(n) (or BB-1(n), if you prefer) both define infinitely large numbers, while 1/BB(n) and 1/SS(n) both give us infinitesimals. 1/SS(n) would seem to be an especially "big" infinitesimal, with SS(n) being a particularly "small" infinite number, but neither is a limit. No, in the hyperreal universe (as well as that of the surreal numbers), there's no such thing as a least infinite number, or a greatest infinitesimal. So the domains of the finite, the infinitely large, and the infinitesimal stretch out towards each other without bound, but they never actually touch. Weird, huh?
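
Here's a crude Python sketch of that functions-as-numbers view, with the ultrafilter machinery waved away entirely (all the names are my own inventions, and the "eventually" test below is just a spot check, not a real construction):

    # hyperreal-ish numbers as functions of n, with arithmetic done pointwise
    const7 = lambda n: 7            # a plain real: the constant function 7
    omega  = lambda n: n            # grows without bound: an infinite number
    eps    = lambda n: 1 / (n + 1)  # sinks toward 0: an infinitesimal

    def add(f, g): return lambda n: f(n) + g(n)
    def mul(f, g): return lambda n: f(n) * g(n)

    def eventually_less(f, g, start=10**6, window=100):
        """Spot-check that f(n) < g(n) for all large n. A real construction
        would consult an ultrafilter to settle ties; this just samples."""
        return all(f(n) < g(n) for n in range(start, start + window))

    print(eventually_less(const7, omega))             # True: 7 < omega
    print(eventually_less(eps, const7))               # True: infinitesimal < 7
    print(eventually_less(omega, mul(omega, omega)))  # True: omega < omega^2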

And the rest I'll save for another post, when I get around to it. I've rambled enough for now.

Thursday, January 12, 2006

Ultrareal nanoniblets




  • As proof there's no justice in the world, look at the shabby way the Tampa Bay Lightning treated poor Dave Andreychuk. Maybe he really was too old and too slow for the "new NHL", but when team officials openly say that to the media, that's just the height of tackiness. Maybe I'm being oversensitive, having just had a 30-somethingth birthday myself. The guy's not that much older, so he can't possibly be old. I mean, we went to see a hockey game on that birthday, and when I went on a beer run, I got carded. On my birthday. Which proves, incontrovertibly, that I am Not Old, therefore anyone who's not that much older than me is also Not Old. Note that the "Not Old" property is not fully transitive, so that someone who's, say, a mere 8 years older than the 42-year-old Andreychuk would still be a plausible candidate for oldness, even though Andreychuk himself is Not Old.
  • The San Diego Zoo has a baby tapir. I don't find it all that appealing, actually, but if beady little eyes are your thing, enjoy!
  • From the usually-sedate world of classical music comes this weird legal soap opera. One member of a quartet had been fired for "incompatibility", so he sued, and two of the remaining members nearly lost their instruments to help pay legal bills, until the inevitable anonymous donor stepped in at the last moment, as they always do. You'd expect classical musicians to be calm and mature about resolving their differences, but maybe that's just because the music itself tends to be on the sedate side, and is performed for an affluent, educated, "mature" audience. But take away the tuxes and evening wear, and they're just another bunch of crazy, emotional musicians. They should all count their lucky stars that there weren't any drive-by shootings, and nobody went all Salieri on their quartet-mates.
  • It's official: Trees are bastards!!! In particular, they've been pumping out methane gas in far higher quantities than expected, and methane is a major greenhouse gas. The sneaky botanical malefactors have been doing this for years without anyone suspecting what was going on. Their motive is unclear as of yet, but they're obviously up to no good. Could Ronald Reagan have been right after all?
  • I'm getting very, very sick of cold, dark, wet winter weather. All you can really do right now is sit indoors, drum your fingers on the wall, and wait impatiently for spring. Until that happens, here are some pics of crocuses. I've always liked crocuses, primarily because they come up early, before it seems like it's really a wise idea, just because they're full of enthusiasm and are unable to hold back. Or maybe I'm anthropomorphizing. Note that at least some crocus species are highly inedible. Furthermore, a crocus of any species should not be confused with the unrelated Krokus, which is an 80's metal band from Switzerland. No foolin'. They're still together, and they're touring right now. It must be tough being a Swiss metal band. You can't shock people by wearing tight leather pants, for one thing, and your fans all want to grow up to be -- or by now they've already grown up to be -- bankers or pharmaceutical executives, or possibly midlevel UN bureaucrats. You can sing to them all you like about your uniquely deep and tortured soul, and how you're one wild-n-crazy rebellious outsider, and nobody will have the foggiest clue what you're talking about.
  • A couple more fun names for peculiar types of numbers. I've come across a book with a brief treatment of ultrareal numbers, but I've only just skimmed the chapter and I can't say I understand what it's getting at so far. Meanwhile, here's a good article that mentions hypernatural numbers, which are a subset of the hyperreals (which are what the article's really about). There are a number of uses of the term "unreal number" [pdf] as well. The linked-to page uses the term as a synonym for p-adic numbers, and you have to admit "unreal" is a vastly more evocative name.

Wednesday, January 04, 2006

Surcomplex

Today's cool word is "Surcomplex", as in "surcomplex number". Surcomplex numbers are to surreal numbers what plain-vanilla complex numbers are to real numbers. The name combines "surreal" and "complex", and I can think of lots of everyday situations where this word would be a perfect description.

As you may have noticed with the stubby little Wikipedia article referenced above, there's not a lot of info on the net about these babies. And I've seen nothing at all about "surreal-hypercomplex" numbers, which would seem to be a logical extension of the idea. With the use of something called the Cayley-Dickson construction, you can go on indefinitely, creating ever-more-hypercomplex (and ill-behaved) numbers. It's interesting to note that the number of dimensions of a number created this way is always going to be a power of 2, which suggests a sort of parallel with the binary-tree way that surreals are constructed. Well, it does to me, anyway, but then I'm a computer geek, not a mathematician, and besides, by no means are all mathematicians happy with Conway and his surreals (you'll want to read a few posts in the thread to get the gist of the argument). Anyway, a surreal-style tree that also gains dimensions on the way down is quite an interesting thing to contemplate. At omega, assuming the analogy holds this far, you'd start encountering weird beasts like numbers with an infinite number of dimensions. Just try to imagine that for a moment. And each dimension a surreal number, possibly infinite or infinitesimal. Wheee!
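
Mechanically, the Cayley-Dickson construction is simple enough to fit in a few lines of Python, so here's a sketch (my own naming, and one of several equivalent sign conventions): each doubling treats a number as an ordered pair from the level below.

    def conj(x):
        """Conjugate: reals are self-conjugate; (a, b)* = (a*, -b)."""
        return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

    def neg(x):
        return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

    def add(x, y):
        return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

    def sub(x, y):
        return add(x, neg(y))

    def mul(x, y):
        """(a, b)(c, d) = (ac - d b*, a* d + c b) -- one common convention."""
        if not isinstance(x, tuple):
            return x * y
        a, b = x
        c, d = y
        return (sub(mul(a, c), mul(d, conj(b))),
                add(mul(conj(a), d), mul(c, b)))

    # complex numbers are pairs of reals; quaternions are pairs of pairs
    i = (0.0, 1.0)
    print(mul(i, i))                  # (-1.0, 0.0): i^2 = -1
    qi = ((0.0, 1.0), (0.0, 0.0))
    qj = ((0.0, 0.0), (1.0, 0.0))
    print(mul(qi, qj), mul(qj, qi))   # ij = -ji: multiplication no longer commutes

Pairs of reals multiply exactly like complex numbers, pairs of pairs anticommute the way quaternions should, and nesting further gives octonions, sedenions, and the rest of the increasingly ill-behaved menagerie.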

I'd hope there'd be no way to use something as esoteric as a surcomplex number to design a new kind of bomb, but you never can tell. GH Hardy took a great deal of pride in the notion that his beloved number theory was of no practical use to anyone, not foreseeing the rise of math-intensive cryptography. Now, cryptography clearly isn't a bomb, and has lots of wholesome, nonmilitary applications, but in this day and age it's also safe to assume that numerous large numbers have been factored using the Cheney algorithm, which is to find someone who already knows the factors, and torture them until they cough 'em up.

Speaking of Cheney, here's another fun article about his (and Rummy's) longstanding love of surveillance and hatred of Constitutional government. Meanwhile, the talk radio / think tank / conservative blog echo chamber is full of the usual suspects, each trying to outdo the next in heaping scorn on outdated and subversive notions like search warrants, habeas corpus, the ban on cruel and unusual punishment, and freedom of speech. We're told again and again that anyone who still believes in that old-fashioned stuff is a commie pinko liberal who hates America and loves the evildoers. I ask again, what's this country coming to, when to be considered a "true patriot" you're expected to abandon, and actively oppose, the entirety of the country's founding principles?

Well, actually today the wingnut universe has other, more immediate concerns, namely yesterday's UN-imposed ban on trade in wild caviar. Here's one example of the conservative hysteria this is evoking. This story's got everything. The UN, which they hate, and have all sorts of fun conspiracy notions about. The environment and endangered species, which they're also 100% against. Left-wing-inspired "persecution" of poor innocent rich folks who just want to enjoy their bland, salty fish eggs in peace.

Interestingly, we have a similar fishy situation here in Oregon, except with sea urchins instead of sturgeon. People are often surprised that sea urchins are commercially fished along the Pacific coast. It's not like you see them at the grocery store, or even in seafood markets on the coast. No, they're all exported to Japan, where sea urchin eggs are a delicacy. They only started catching the things here in the early 70's, and the sea urchin population seems to have been in a steady and ongoing decline ever since due to overfishing. What makes this even more fun is that the urchin population started out at an unnaturally inflated size, due to the prior local elimination of its main predator, the sea otter. And when the urchin population explodes, the little beasties mow down the coastal kelp forests that provide shelter to numerous species of fish, hurting those populations as well. So some might argue that overfishing of sea urchins is good news for kelp and fish. Well, until the population crashes, which is something fisheries of all types are prone to do, and then there aren't enough urchins around to keep the kelp in check. Then I suppose the answer to that problem will be some kind of anti-kelp herbicide, or an ongoing Federal kelp mitigation program at taxpayer expense.

Here's a pic of a sea otter eating a sea urchin.

Here's an even cuter sea otter picture. Awwwwww.....