Saturday 7 February 2009

Mathematics, Certainty, and the Mathematical Impossibility of the Saturn V

Back in the early twentieth century, after Konstantin Tsiolkovsky had first tried to get people interested in the idea of liquid-fuelled rockets (LFRs), the story goes that an elite mathematical society decided to study the problem, and concluded that LFRs were impractical. They didn't just decide that LFRs were unlikely to be useful for launching payloads into space, they proved mathematically that such a device wasn't workable. Or so they thought.

The argument was a nice one. It said that launching a rocketship to any given height requires a certain amount of fuel, and, with an LFR, a certain amount of additional infrastructure (fuel tanks, pumps, piping, the rocket engine itself, and so on). The more infrastructure you have, the more weight you have to carry, and the more “dead weight” you end up with at the end. If you want to get higher, you have to carry more fuel, which means that you start with more weight, which in turn means that you have to carry even more fuel to compensate.
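The compounding effect being described here is captured by the ideal rocket equation, which Tsiolkovsky himself had already worked out (the version below is the standard textbook form, not anything specific to the society's calculation): for an effective exhaust velocity $v_e$, launch mass $m_0$ and burnout mass $m_f$,

$$
\Delta v = v_e \ln\frac{m_0}{m_f}
\qquad\text{or, equivalently,}\qquad
\frac{m_0}{m_f} = e^{\Delta v / v_e} .
$$

Each extra increment of velocity multiplies the launch mass you need; it doesn't just add to it.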


Solid-fuelled rockets were simpler. They were basically big fireworks, and when you used solid fuel, the “fuel problem” wasn't intractable: if you calculated “weight of fuel required plus weight of payload” for a given height, the sums came out solvable. It helped that the weight of fuel being carried was constantly falling as the flight progressed. With a solid rocket, you might start off with a monstrous amount of rocket propellant on the launchpad, but by the time you got into space you had something much more lightweight.

With a liquid-fuelled spaceship (it was argued) the equation was different. Once again, the higher you wanted to go, the greater the amount of fuel you had to carry ... but with a liquid-fuelled craft, all that propellant had to be carried in tanks, and fed through pipes. Although the tankage became more efficient as you built larger tanks, the total weight of tankage and piping was always greater for a larger craft, and since this weight had to be subtracted from the final payload weight, you found that beyond a certain height, your payload was effectively 100% infrastructure.
If you wanted to lift a person into space, with accompanying life-support, liquid rocketry obviously wasn't the way to go. It was an impractical idea, and funding liquid-fuelled rocket research was clearly a waste of time.
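The ceiling they presumably derived can be reconstructed in modern terms (the structural fraction $s$ and the numbers below are illustrative assumptions, not the society's actual figures). If the tanks, pipes and engine weigh a fraction $s$ of the propellant mass $m_p$ they handle, and the payload is $m_L$, then

$$
m_0 = m_L + (1+s)\,m_p , \qquad m_f = m_L + s\,m_p ,
$$

so even with zero payload the mass ratio can never exceed $(1+s)/s$, and a single liquid-fuelled stage is capped at

$$
\Delta v_{\max} = v_e \ln\!\left(1 + \frac{1}{s}\right) .
$$

With, say, $s = 0.1$ and $v_e \approx 3\ \mathrm{km/s}$, that works out to roughly 7.2 km/s, short of the 9 km/s or so that actually reaching orbit costs once gravity and drag losses are included. On this model, the conclusion really does follow.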

A few decades later, when we landed on the Moon, we got there using LFRs.

Where Things Went Wrong


When Apollo 11 went to the Moon, NASA didn't attempt to lift the entire Saturn V rocket into space. That would have been silly. They took a rocket that could lift a heavy payload pretty high, and instead of enlarging it, they sat a second rocket on top of it. The payload for the first rocket was another rocket. Rocket #2 was smaller, was launched from the new height (with a good starting velocity), and didn't have to carry the dead weight of rocket #1, which simply fell back and crashed into the sea. The Saturn V had multiple stages, with only the last stage(s) leaving Earth with the final payload.
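Here's a small sketch of the staging arithmetic, using the same ideal-rocket-equation model as above; the exhaust velocity, structure fraction and mass figures are round numbers chosen for illustration, not the Saturn V's actual specifications:

```python
import math

def stage_dv(ve, payload, propellant, structure_frac):
    """Delta-v of a single stage via the ideal rocket equation.

    structure_frac is the mass of tanks/pipes/engine as a fraction of
    the propellant mass they handle.
    """
    structure = structure_frac * propellant
    m0 = payload + propellant + structure   # mass at ignition
    mf = payload + structure                # mass at burnout (tanks still attached)
    return ve * math.log(m0 / mf)

VE = 3000.0             # effective exhaust velocity, m/s (illustrative)
S = 0.10                # structure fraction (illustrative)
PAYLOAD = 1000.0        # kg of actual payload
PROP_TOTAL = 100000.0   # kg of propellant available in total

# Option 1: one big stage carrying all the propellant.
single = stage_dv(VE, PAYLOAD, PROP_TOTAL, S)

# Option 2: split the propellant 80/20. The whole of stage 2 (plus payload)
# is stage 1's "payload", and stage 1's empty tanks are dropped before
# stage 2 fires.
prop1, prop2 = 80000.0, 20000.0
stage2_all_up = PAYLOAD + prop2 + S * prop2
two_stage = stage_dv(VE, stage2_all_up, prop1, S) + stage_dv(VE, PAYLOAD, prop2, S)

print(f"single stage: {single:5.0f} m/s")     # ~6900 m/s
print(f"two stages:   {two_stage:5.0f} m/s")  # ~9900 m/s
```

Same propellant, same payload, same structural assumptions, and the two-stage version comes out about 3 km/s ahead, simply because the upper stage never has to accelerate the lower stage's empty tanks. In this toy model, that difference is roughly the gap between falling back into the sea and making orbit.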

---


The mathematicians would probably say that they didn't do anything wrong. Their calculations were exactly correct for the problem that they were given. The trouble was that the question hadn't been asked carefully enough ... or rather, that they'd asked a version of the question that was too careful, phrased in such a way as to be easily solvable and to give a definite answer.
And by doing that, they got an answer that was simple, straightforward, provably correct ... and physically wrong.

There are three main lessons here:

1. The "test particle" fallacy

Firstly, the physical behaviour of compound systems is sometimes quantitatively or even qualitatively different to the results that we predict from simply studying individual components in isolation. The limits of a two-rocket system aren't the same as those of a one-rocket system. If we take a “test particle” approach to physics, derive a set of laws based on the idealised behaviour of single, non-interacting objects, and then turn those predictions into elegant mathematics and beautiful self-contained geometrical models, it doesn't follow that the answers given by those models are going to be correct. We can have beautiful, rigorously-derived geometry and mathematics that allows just one solution, that provably has zero errors in any of its derivations, and that creates breathtakingly elegant resonances across multiple fields of mathematical theory ... and physically, it can still be quite wrong.

This isn't always appreciated.


2. Physics vs. mathematics

Secondly, mathematics is not physics. It might well be that "all physics is mathematics", but that can mean that physics is a subset of mathematics, rather than the thing itself ... physics is obliged to correspond to reality, while mathematics is not, so the two disciplines aren't automatically interchangeable. Modern physics is now so strongly math-based that researchers can spend their lives learning to manipulate the textbook mathematical machinery, without necessarily realising that the resulting answers aren't guaranteed to be physically meaningful. These guys are liable to tell you that "You can't argue with the math", but sometimes, if you're a physicist, your ability to argue with (and occasionally overturn) mathematically-proven results is what makes you worth your salary and your job title. There are situations where failing to see the importance of being able to dispute the math, regardless of the apparent strength of the proofs, means that perhaps you haven't really understood the idea of physics at all.


3. Proof vs. certainty

Thirdly, in physics, there's sometimes a tradeoff between calculability and correctness. Sometimes the things that you do to a problem to make it well-defined and easily modellable destroy delicate-but-critical characteristics of the original problem. Instead of a "correctly vague" answer to an "indistinct" problem, you then end up with an unambiguous answer to a well-defined problem ... that doesn't actually correspond to the thing that you're trying to model.

In a worst-case scenario, a desire to be able to definitively solve a problem can, if your tools aren't up to the job, lead to a process of successive approximation that converges more and more definitely on an answer that's emphatically wrong. In everyday situations a physicist will use common sense to ignore answers that obviously aren't right, but when we're working at the edge of known theory, the selection process becomes more dangerous.


If we're not careful, all we end up doing is generating mathematically rigorous retrospective justifications for whatever it is that we already happen to believe. But what we believe is based on our cumulative experiences to date. Our current belief system is a logically-perfect inverse projection of an imperfect dataset, designed to recreate the particular set of rules that we inherited from the generation before us.

And it's wrong.
