June 2014


Yep, I had a problem here (and I may change my mind back again) as to what to call this. I made the (in retrospect) obvious decision to title the post with what I was writing about. That being said, I’ll still talk about computation later, but I think in a separate topic covering things like logic, complexity, etc.

So we have a nice clean definition of what we are going to talk about – computers. So, what is a computer? Whoops, problems already. Loosely, and somewhat circularly, we can say it is anything that computes. So is a pencil and paper, accompanied by the human holding the pencil, a computer? That's stupid, you say, but that is exactly what the people (mostly women) at Bletchley Park did and were called. That's unsatisfactory to me, and it's my blog, so that ain't happenin'. (Much, much more about that and them later, but if you want to read ahead, see the references – no extra credit on the final grade.) So is it something mechanical/electrical that aids in mathematics? In one sense that's too narrow – we probably want to include any "machine" that processes information. In another sense it's too broad – is an abacus a computer? Again, just a thingie with a carbon-based I/O system. Or even the Egyptians with 3 sticks tied together in lengths of 3, 4, and 5, so they could run around creating right angles? I think not.

I’m taking the coward’s way out here – to paraphrase U.S. Supreme Court Justice Potter Stewart on pornography: “I know it when I see it.” Suffice it to say, we will see all kinds of embodiments of “computers”, some of which will make you say, “Huh?”, and post a scathing comment. To those of you who feel like doing that, all I can say is: “Bring it on!!!” I thrive on controversy.

So let’s get started: what was the first “computer” (I will henceforth drop the quotes with the understanding that there could (will) be some disagreement)? Let me propose the Jacquard loom (~1800). Another “Huh?” moment. I chose this because it is the first device I know of whose behavior was “programmable” – that is, it didn’t just kind of stand there waiting for someone to tell it what to do. Note the distinction here: no one is standing over it telling it what to do. An abacus needs fingers (and nimble ones, at that) to do any computing. Here, in the case of the loom, instructions were prepared off-line, loaded into the machine, and the “GO” button pushed. In its simplest manifestation, the pattern to be woven was encoded into a set of punched cards. Without a full Weaving 101 class, the basic idea was that the cards told the loom what to do with the various threads that were being woven – whether to go over or under, or some such. But, as Martin Davis points out, that’s how a player piano works, and no one pretends that it’s any sort of a computer, so we’re back to square one. As a side note, Napoleon I stuck his two cents in here, as he did with so many scientific ventures.

Next up, Charles Babbage’s Analytic Engine.

Weave, weave, weave me the sunshine out of the falling rain

Weave me the hope of a new tomorrow and fill my cup again. – Peter Yarrow (1998?), Around the Campfire

References:

Bletchley Park – The Secret Lives of Codebreakers  by Sinclair McKay

Colossus: The secrets of Bletchley Park’s code-breaking computers by B. Jack Copeland

Jacquard’s Loom – Jacquard’s Web: How a hand-loom led to the birth of the information age by James Essinger

Any discussion of mathematics and the history thereof must obviously start with a discussion of what numbers are – or must it? On the surface, it would seem that all mathematics is based on numbers, and more specifically the (positive) integers: 1, 2, … Negative numbers and even zero come much later in the game, and the ellipsis (pedantry alert – that’s those 3 dots meaning etc.), wow, what a can of worms that opened in the late 19th century. But I digress – why did I say, “or must it”?

For those of us paying attention in high school, Algebra (after the hassle of trying to get word problems formulated as equations) seemed just to consist of moving things around from one side of an equals sign to the other, adding this number to both sides, dividing both sides by something, and, for the more advanced of us, taking square (and yes, even cube) roots: in short, just following some rules guaranteed not to “change” anything, in some fundamental meaning of change. The rules we used were the ones that kept the validity of the equation intact, so that if what we started with was true, what we ended with was true.
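That “rules preserve truth” idea can be sketched in a few lines of code (my illustration, not from any textbook): solving 2x + 3 = 11, each algebraic move applies the same operation to both sides, so equality is never broken along the way.

```python
# Solve 2x + 3 = 11 by applying each "move" to both sides at once.
# We check the equation at a known solution, x = 4, after every step.
x = 4
lhs, rhs = 2 * x + 3, 11
assert lhs == rhs            # original equation holds at x = 4

lhs, rhs = lhs - 3, rhs - 3  # subtract 3 from both sides
assert lhs == rhs            # still true

lhs, rhs = lhs / 2, rhs / 2  # divide both sides by 2
assert lhs == rhs            # still true: the equation now reads x = 4
```

Every step follows one rule applied identically to both sides, which is exactly why the truth of the equation survives the whole manipulation.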

Then (or before, depending on your particular school district) came Geometry, which was much different. Here we drew pretty (or in my case, not so pretty) pictures and proved theorems about these little pictures. You know, this picture can hold twice as many squares as that one. Or if all the angles of one triangle are the same as the angles of another, the triangles kinda look like each other, or some such. Completely different stuff. Or is it? Again, we are starting with something true, whether axioms/postulates (Geometry) or a formulation of an existing condition for a word problem (Algebra), applying some rules to symbols (pay attention, SYMBOLS, not necessarily numbers), and coming up with a new theorem, or a statement of the form x = 7 in Algebra. Similarly, we can view Trigonometry and its crazily named functions the same way: all we are really doing is manipulating symbols (not necessarily numbers) according to some rules that, in some sense, “work”, meaning that if we start with true stuff, we end up with true stuff. And so on, through Calculus, and areas of mathematics that none of us, save the people working in that field, know even exist. Cobordant Manifolds and Thurston’s Geometrization Conjecture? Really, mathematicians?

So it seems that all we are doing, in essence, is manipulating symbols according to some set of rules. It all makes sense to us because we “know” what the symbols mean. So when we see 2 + 1 = 3, it makes sense; we know it’s true. But wait, if we wrote 3 = 2 + 1 (same guy, just rewritten), can’t we take that as a definition of 3? That is, “3” is the number we get when we add 1 to 2. Or 3 is the number after 2, the successor to 2. (Just a bit more of this and I hope I’ll have made my point. Hang in there.) Please try to recall here what a function is. Simply, you put something into the function and get something out. So if I say f(x) = x², when I put 5 in I get 25, or f(5) = 25. Back to 3 = 2 + 1. Let me define S(x) to be the successor function, so that S(x) = x + 1; then I have S(2) = 2 + 1 = 3. Here’s the real point – this makes sense because we have associated the symbols with our experience of what numbers are. We could just as consistently use a successor function to define the next letter in the Latin (or any other) alphabet, with proper finagling. But, regardless, we would still be deriving true statements. And S(x) itself might have nothing to do with any of these semantic interpretations.
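To make the point concrete, here is a small sketch (the function names are mine, just for illustration): the same “successor” rule S can be read over numbers or over letters. The syntax is identical; only the semantics we attach to the symbols differs.

```python
def successor_number(x):
    """S(x) = x + 1, with the symbols read as integers."""
    return x + 1

def successor_letter(c):
    """The same successor idea read over letters: the next letter in the
    alphabet (sticking to a–y, ignoring wraparound)."""
    return chr(ord(c) + 1)

print(successor_number(2))    # 3, i.e. S(2) = 2 + 1 = 3
print(successor_letter("b"))  # c, the letter after b
```

Either interpretation makes every derived statement true; the rule itself never cared whether its symbols were numbers or letters.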

The takeaway here, expressed in a somewhat roundabout way, is that mathematics exists independent of the semantics we apply to our symbols. We are only interested in the syntax, the rules for doing things the right way. We will see a lot more discussion of this when we get to logic in the late 1800s and early 1900s, and people like Gottlob Frege, Giuseppe Peano, and Bertrand Russell (if you want to read ahead). We will now go back to discussing our main thread, the History of Mathematics. Fear not, we will be dealing with familiar stuff like 1, 3, 8, +, /, etc. for quite a while. Stay tuned.

I think math is a hugely creative field, because there are some very well-defined operations that you have to work within. You are, in a sense, straitjacketed by the rules of the mathematics. But within that constrained environment, it’s up to you what you do with the symbols. Brian Greene – The Hidden Reality (2011)