[net.philosophy] The Nature of Mathematical Knowledge

weemba@brahms.BERKELEY.EDU (Matthew P. Wiener) (01/20/86)

I read P Kitcher's _The Nature of Mathematical Knowledge_ last October.
I immediately sent the letter below to him.  I never received a response,
so if any Kitcher fans want to respond for him, go ahead.  In general,
while I can be quite certain his mathematics was bad, I am only an outsider
to philosophy.  Still, it seemed rather shoddy philosophy as philosophy to
me.  I guess I was disappointed since the topic of the book is so rarely
discussed.

WARNING: The letter may be somewhat incomprehensible without first reading
the book, or at least without having a copy for page references in front of
you, and as I don't recommend the book in the first place, you might want to
skip this article entirely.  It's quite long.

================================================================================

					19 Oct 1985

Dear Professor Kitcher,

Having just read your book The Nature of Mathematical Knowledge, I feel obliged
as a mathematician to point out several errors.  I will also object to some un-
perceived difficulties in your presentation.  I will also suggest some other
arguments in favor of ontological apriorism, ones I have not seen in print.  But
to make this letter worth your time, I will conclude with some examples that you
could use to bolster your empirical epistemology, examples that I was surprised
not to find in your book and had, while reading, psyched myself up to reinterpret.

On page 41, you quote Hume favorably: but his first sentence is untrue.  I've
seen mathematical truth at the moment of mathematical discovery myself, but I'm
sure you know the story of Hamilton's discovery of quaternions.  Proof, quite
often, is a mere epilogue to direct insight.  Indeed, many mathematicians leave
the proofs to their advisees in need of a thesis.  The psychological relation-
ship between truth, proof and certainty is quite touchy, and I will discuss it
further below.

On page 42, you claim Stokes' theorem is a good example of a theorem suffering
from nebulous formal status.  That cloud may be present in the engineering and
physics texts, but a few mathematical texts present it in all its gory detail,
including the cases with corners.  It is true that most mathematical texts do
not, for the trivial reason that the mathematical applications are to nice mani-
folds.  (See, for example, M Spivak's Calculus on Manifolds, p. 137, last para-
graph, where the generalization to corners is left as a challenging project for
the ambitious reader.  Notice how he even leaves the proper definitions to the
reader.  This is a perfect introduction to doing research, suitable for his
undergraduate audience.)
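
(For reference, and in notation of my own choosing, the modern statement is a
single line: for a smooth (n-1)-form omega on a compact oriented n-manifold-
with-boundary M, the integral of d(omega) over M equals the integral of omega
over the boundary of M.)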

On pages 43-6, you discuss long proofs and the Cartesian course in consciousness
raising.  The two examples you mention, Stokes' theorem and the unsolvability of
quintics, are considered rather easy to keep simultaneously in mind.  To call
that feat staggering is an unintended compliment to our years of practice.  If
I were to venture what cognitive science might demonstrate some day, I propose
that it could lead, when combined with bioengineering, to vastly improved mental
capacities.  But I don't venture, and propose that philosophers who grab assent
to their theories from one line arguments based on what cognitive science might
someday do be banished to the tenth circle of Hell, or at least join the staff
of TIME magazine.

(Of course, it is not made clear what you mean by keeping a proof simultaneously
in mind.  But if you are not saying something true for trivial reasons, namely that
it is impossible to keep two things in mind simultaneously by some definition or
other, then your statement is just wrong.

(The hullabaloo that computer proofs have generated, by the way, is mostly due
to ignorance.  Among those I've asked, for example, those with deep familiarity
with computers and years of programming experience have no qualms, while those
without that experience are nervous of the proofs.  But that is because they are
nervous of the actual machines!  I've actually seen a famous algebraist, who
wanted to know if a certain arithmetical identity involving 100! was true, flee
the multiprecision machine verification.

(If you want explicit examples of what I consider astounding mental capacities,
I mention great chess masters.  Reuben Fine describes how he once saw 40 moves
ahead, instantaneously.  He also once played four simultaneous blindfold BLITZ
games against master/expert opponents.  Bobby Fischer was once approached by
someone who said, I'm sure you don't remember me, but you totally demolished
me during a simultaneous exhibition some years back, while never taking more
than a glance at my board.  Fischer replied, yes I remember you, and then pro-
ceeded to give a move-by-move analysis of the more interesting aspects of the
game.  Of course, I am a mere patzer, so maybe these are not astounding.

(One of the greatest minds ever was John von Neumann's.  He was once asked by
someone to solve the problem of the distance travelled by the fly, flying at
(say) 100 mph between two cars approaching each other at 60 mph each, assuming a
starting separation of the cars of 20 miles, and that the fly instantly switches
back and forth when it reaches one of the cars.  He replied instantly with the
correct answer.  The questioner was disappointed that von Neumann had found the
clever method, namely it flies for the time it takes the cars to crash, but was
then amazed to learn that von Neumann had summed the obvious infinite series.)
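
(For the record, both routes give the same figure: the cars close at 120 mph, so
they crash in 20/120 = 1/6 hour, and the fly covers 100 x 1/6 = 16 2/3 miles; the
obvious series has legs 12 1/2 + 3 1/8 + 25/32 + ..., geometric with ratio 1/4,
and it too sums to 16 2/3 miles.)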

On page 59, you refer to Benacerraf's point (b) that there is no causal relation
between mathematical objects and other entities.  How absurd!  No Platonist, not
even Plato, believes that.  I would call the relationship direct mathematical
perception, as important and as inherent to me as listening is to a musician.
(I was surprised that what I thought was crudely obvious is considered most
sophisticated on page 148.)

On pages 65ff you discuss the question of whether groups contain a unit element
in virtue of the concept or meaning of group.  First, a howler you should avoid:
"unit" is ambiguous, and you should use "invertible" or "identity".  Second, all
mathematicians would reply groups contain a (whatever) in virtue of the defini-
tion of a group.  (In particular, the sentence on page 66 'To say ... invites
criticism' invites derision from mathematicians.)  It is not clear to me whether
you view mathematical definition as stipulation.  Overall, the Quinish arguments
wither when dealing with mathematics, because the language mathematicians use is
highly refined, and does not suffer the defects that natural language suffers.
In other words, Quine is not permitted to argue with Humpty Dumpty.

On page 85, you say 'Conceptualism makes a priori knowledge come too cheap'.
This may be true in chemistry and physics, but so what?  If the thrust of your
book is to show that mathematics is much like chemistry and physics, then to
reject conceptualism in mathematics at this point is to beg the argument.  And
one sentence later you say 'The risk that what we know will prove useless would
be greatly increased'.  This is a daily risk in almost all mathematics, one
that most of us take fearlessly.

On page 87, you claim triumph over conceptualism because of the example of
'acid'.  Whether this example works is irrelevant, since the question is whether
conceptualism is valid within the philosophy of mathematics, not chemistry.
Again, I must point out meaning within mathematics is highly arbitrary, and
while specific words may change over time, that is just an artifact of human
language, not the underlying mathematical concepts.  Thus in France, fields
need not be commutative, but compact spaces are always Hausdorff.  In Russian
a normed ring is what Westerners call a Banach algebra.  To read anything into
these vagaries is to not understand mathematics.  There is nothing in mathem-
atics that could cause confusion even remotely close to what duck-billed platypi
caused eighteenth century taxonomists.

(I don't understand your discussion of 'acid' at all, by the way.  As far as I
can tell, it looks like chemists settled on the more useful concept.  It is
true that if somebody introduces new terminology, somebody else might amend it
because he wants a variant of the first use.  But that fools no one.  The under-
lying concepts have not changed in the least.  Are you mixing in here your later
discussion of rational changes in science and mathematics?)

On pages 99-100, you conclude your chapter with a paragraph that suggests the
only choice for a history of mathematics is between haphazard random development
and a well explained theory of knowledge.  I wish these Hegelian portents
would be left with less serious topics, say the philosophy of nematology.  I
see no reason why the overall history of mathematics must be rational in order
to account for its development.  Finding a proof by luck does not make a proof
less valid than finding a proof by rational methods.  I always thought mathem-
atics developed because doing mathematics is a lot of fun to some people.  I
certainly have no rational reasons for doing the kind of mathematics I do.

On page 103, while discussing intuition and sense perception you call the former
mysterious while the latter well-understood.  I find both of them equally weird
if I think about them, and equally sensible when I don't think about them.  Nor
do I see how scientific knowledge about the latter and scientific ignorance on
the former makes a difference: if anything, the physiological and physical know-
ledge we have about vision makes the ability to see seem even more miraculous.

On pages 104 and 134-5 you point out the serious difficulty a Platonist has in
choosing which of several arbitrary identifications to make of various mathem-
atical objects within set theory.  You seem to have missed the entire point of
Platonic mathematical ontology: the real line, for example, exists out there,
and is nothing more nor less than the real line.  For someone to say that sqrt(2)
IS { r rational | r < 0 or r^2 < 2 } is silly and absurd.  But for him to say that
sqrt(2) can be identified with that Dedekind cut, however, is profound and serious.

On page 125, near the top, your use of 'Unfortunately' is rather unfortunate: it
suggests meanings I don't think you intended.

On pages 128-9, the diagrams that illustrate your approach sure look like Platonism
to me!

On page 130, you ask why the stuck mathematician does not renew his acquaintance
with his Platonic ideals, and you wonder why he instead so often engages in calcula-
tion.  This whole paragraph, and its discussion of mathematical notation, is
very misguided.  Many mathematicians, when stuck in a proof, put their pencil
down and go to the park and ask the ducks their opinion.  Others juggle three
balls simultaneously.  Some prepare their upcoming lecture.  In short, anything
to let their subconscious solve the problem.  And sometimes they engage in sym-
bolic manipulation: John Horton Conway is a famous example of a mathematician
who computes and computes and computes forever.  Your assertions are just too
sweeping.  Concerning notation and its analogies, I think you are misled by the
efficacy of modern notation: it is designed so that proofs flow, so that brains
can concentrate on the ideas behind the proof, so that the reader has no idea of
the struggle to find the proof, etc.  'To solve a problem is to discover a truth
about mathematical operations'?  No.  To solve a problem is usually to discover
a proof.  The truth had been discovered when the problem was posed, but no one
knows it is a truth until the proof comes later.  So says the apriorist.

On pages 149ff, you discuss a Kuhnique view of mathematical change.  I won't
claim any points are wrong here or wrong there, but want to mention that Kuhn's
views seem at times too weak an interpretation and at times too strong to me.
I just don't see how this is supposed to favor your views.  Are you trying to
say that math and science went through similar tribulations, and so should be
viewed as similar epistemologically?  I would expect such a non-obvious claim
to be defended, so I demur from objecting to what might not be the case.  But
I can't think why else the chapter exists.

On page 156, you state we no longer care for systematic exploration of special
functions.  Most of us no longer care, but some of us do.  David Mumford works
with Jacobi theta functions.  Jacobi modular functions, as presented in an old
paper written in Latin(!), have played a central role in one of the most amaz-
ing, deep, profound and bizarre developments of the last decade, the so-called
"Monster Moonshine" question.  Modular functions in general were in deep freeze
until around 1960, but they have been growing rapidly again ever since.  And
the solution last year of the Bieberbach conjecture depended on some special
function identities, both old and new.  In general, what mathematicians study
goes through periods of fashion.

On page 157, you say that our concept of number has changed since the ancients.
While true, you are a victim of the language objection you stated earlier in that
very paragraph.  The ancient concept of number is virtually identical with the
modern concept of natural number, the difference being one of precision and
surrounding knowledge (and zero).

On page 180, your sentences about Hamilton and quaternions are rather mis-
leading, if literally correct.  Nothing studied last century plays as large a
role as it did then, for the trivial reason that mathematics has mushroomed
exponentially, and very little from then plays a large role, although much plays
a strong undercurrent.  And I am bothered by the parenthetic comment that the
quaternions play nothing like the role Hamilton envisioned: is it a derogatory
remark on how futile Hamilton's actual creations were compared with the fruit-
fulness of the algebraic freedom they crystallized into existence, or was it an
insightful remark that quaternions are used in number theory, gauge fields,
symplectic geometry, Lie group theory, polytopes, etc., most of which he never
heard of?  You give the impression that you find quaternions a failure because
they did not get the name 'number'.  If so, I should point out that that is just
a random vagary of language.  The eight dimensional octonions are often called
Cayley numbers, and then there are p-adic numbers, supernatural numbers, etc.

On page 188, you claim the classical questions about ratios of infinitesimals
and logarithms of quaternions are now viewed as wrongheaded.  Both questions
make a lot of sense in modern mathematics, and can be dealt with effortlessly.
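
(For instance, writing a nonzero quaternion in polar form q = |q|(cos t + u sin t),
with u a unit pure imaginary quaternion, one may take log q = log|q| + ut, just as
for complex numbers; and non-standard analysis, mentioned below, makes ratios of
infinitesimals perfectly respectable.)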

On page 191, you claim the current successors of Newton and Bolzano agree that
proofs generate knowledge and advance understanding.  I am one of those current
successors, but must disagree.  Some proofs generate knowledge, some generate
understanding, some do both, and some do neither.  And I don't think my col-
leagues would always agree with me as to which proofs do which.  As to the
question whether proofs can generate a priori knowledge, I feel that the phrase
"generate a priori knowledge" is self-contradictory, but I see no reason that
proofs cannot help us realize what was a priori all along.  That, I believe, is
the point of Meno's slave and the Pythagorean theorem.  (I am presuming an ideal
reader of Plato who does not endlessly quibble over irrelevancies.)

On pages 211-2, you claim Cantor's development of set theory was rational.  He
certainly conceived of the countable ordinals while trying, unsuccessfully, to
solve certain questions concerning Fourier series.  And he did come up with a
new proof of the existence of transcendentals.  But that is all Cantor's set
theory had to say regarding prior mathematics!  Cantor stopped doing Fourier
analysis because his set theory was a lot more interesting to him.  It led, at
the time, to no further insights into anything but set theory!  To say it led
to 'a clear way to state conditions on two sets' having the same size or on a
set's being finite' is true but pointless, since that is, prima facie, a very
weak and basically useless concept.  Cantor's famous defense DOES refer to the
fact that his subject was basically devoid of applications to prior mathematics.
Why else would he defend the subject as he did?

(Mathematicians have always worked wherever their whimsy takes them.  Fermat,
for example, was certainly not trying to understand any prior mathematics when
he did number theory.  Mathematicians throw their problems at each other rather
randomly.  The van der Waerden theorem seems a prime example of a problem solved
for no other reason than that it was there.  But now it has become a central
part of modern Ramsey theory.  And why did Steiner think of Steiner triple
systems?  Beats me.)

On page 220, you say Cayley offered a characterization of the fundamental
properties of groups, and remark that Cayley did not explicitly recognize the
need for inverses.  Today one would say Cayley defined a monoid.  (I am not
familiar with this work, but am venturing a guess that Cayley was running through
the objects that were then called groups, and was trying to systematize what
was confusing at the time.)  Your sentence that Cayley was prompted to allow
for non-commutativity by the examples of matrices and quaternions should be
deleted.  After all, most of the permutation groups studied were also non-
commutative, and as you say, they were the well-known examples.

On page 229ff, you present the history of calculus written with your empirical
ontology and epistemology in mind, as other accounts of the history have an
apriorist bias.  Until your concluding remarks, I cannot see the difference.
Moreover, a much more serious difficulty in drawing conclusions from your ac-
count is the question of how to disentangle the physics from the mathematics.
To proclaim that your account shows much empiricism in mathematics, when much
of the account is physics to begin with, and then give no careful delineation
of the boundary, invites criticism.

On page 254, you claim that it takes considerable talent to spot Cauchy's error.
Any mathematician of Cauchy's day who was alert when reading his text would spot
the error in reasoning immediately: infinitely many infinitesimals need not add
up to an infinitesimal.  That was a well known fact about infinitesimals since
Leibniz.

On pages 258-9, you quote Weierstrass out of context.  The algebraic truths he
is referring to are I believe the algebraic properties of holomorphic functions,
not the delta-epsilon methods.  Things like the Weierstrass preparation theorem,
for example, are part of Weierstrass' approach to the algebraic properties of
elliptic functions, as opposed to Riemann's analytic approach.  He is certainly
not commenting on Riemann's rigor.  (Riemann always went straight to the heart
of a problem, leaving several generations stunned as they tried to catch up to
his thoughts.  His papers were extremely condensed, packed with rich mathematics
on every line.  The question of rigor just does not apply!)

On page 269, you claim that we should subtract the irony.  Why?

On page 271, you claim that traditional philosophical accounts have presumed
a priori knowledge of axioms and definitions.  The traditional accounts you
refer to presumably discuss Euclid, and you wish to show the limitations of
such accounts when compared with analysis.  So what?  You succeed, but I can
only draw a different conclusion from your case analysis.  To wit, the mathem-
aticians had direct insight into the true properties of the real line, and then
labored mightily in arithmeticizing their notions.  Platonism would explain why
they could succeed with faulty reasoning for so long: it always matched their
direct perception.

On the flyleaf, which I realize is possibly beyond your control, you quote
Morris Kline as recommending the book.  Bad idea.  And the summary of the book
is extremely silly: every mathematician I have read it to cracked up in utter
disbelief that someone would pronounce the purpose of a book is to announce
so profoundly something we have known all along.  It's somewhat like telling
a Cambodian about Nixon's secret bombing of his country.

Enough of particular errors.  I will present various objections not considered
in chapter 6 section V.

You claim that Millian arithmetic naturally accounts for its applicability to
the empirical methods that it idealizes, whereas Platonic arithmetic is left
with a great mystery as to why it is applicable.  You are quite accurate, but
you can't claim triumph yet for your idea.  Millian arithmetic makes it doubly
mysterious that arithmetic has OTHER applications.  First, the standard mystery
inherent in Platonic accounts regarding the use of arithmetic in playing bridge,
for example, is still inherent in your book, but then furthermore the mystery of
how playing bridge is unnecessarily and rather bizarrely linked to the original
application (Mesopotamian bricklaying or what?) is introduced.

You imply that abstract intelligence could not develop mathematics.  How could
God, for example, create such a mathematically ordered universe without knowing
the mathematics?  Was it his precognition concerning the Mesopotamian bricks
that his universe would one day contain?  But how could he work out all the
structure of Mesopotamian bricks without first working out the mathematics and
the implied physics and chemistry of these bricks?  There seems to be an irreducible
loop here, rather troublesome to say the least, like the classical time travel
paradoxes.  Or does one conclude from your book that there is no God?

You claim that your ideal subject, who doesn't exist, has incredible powers.
Why are you allowed to idealize your subject, dealing with a concrete reality,
but not the concrete reality?  Your argument strikes me as being the same one
as this one between two nine year olds concerning the existence of God:
    "The universe had to come from somewhere, and that somewhere is what we
     mean by God."
    "Alright then, where did God come from?"
    "Don't be silly, God comes from nowhere..."
More precisely, your argument is more like the first assertion above, and my
objection is like the succeeding question.  And just pointing out that your
ideal subject doesn't exist will not answer my objection, because I already
believe that your ideal subject does not exist.

You claim that mathematics is the human emulation of the ideal subject.  Are you
willing to generalize that to other abstract entities, whose ontological status
is often as puzzling as cohomology?  IS Beethoven's Ninth Symphony the symphony
as performed by the ideal subject (that's pretty talented!)?  (Or perhaps to be
realistic, you would allow it to BE the ideal performance by the ideal orchestra
in the ideal auditorium, as listened to by the ideal subject with the ideal aud-
ience, none of which exist, but somehow, rather non-mysteriously, now that it is
all explained, the Ninth does exist?)  If you are not willing, why not?  Mathem-
aticians often compare themselves to artists rather than to scientists.  Why?
But back to ontological status.  What about Moby Dick, Hamlet, The London Times,
a word, a single letter?  I find letters and numbers about on par ontologically.
(Rather amusingly, the Mona Lisa has fewer ontological problems.)  And who does
the ideal subject play chess with when he wants to play an ideal game of chess?
In short, is your proposed mathematical ontology fecund or sterile?

You claim that an empirical account of mathematical ontology is supportive of
your empirical account of mathematical epistemology.  I would think somewhat
the opposite is true: the more you reify the ontology, the more you empiricize
the epistemology.  That's why the sciences are considered empirical.  I also
believe, somewhat less surely, that an empiricized ontology favors an a priori
epistemology.  For example, firewalking seems highly mysterious unless one does
indeed firewalk, after which one knows, if not a priori, then at least highly
viscerally, that mind over matter etc. is possible.  It is the empirical nature
of firewalking that requires one to participate in it in order to gain knowledge
of it, and then the new knowledge of the firewalker is practically unrevisable.
(This last example is very weak, but I feel something like it can be concocted.)
Regardless of my objection, you at least need to explain how the one empiricism
aids the other.

You claim, perhaps implicitly, that mathematicians have good reasoning powers.
You even suggest that some of them were correct in their reasoning.  Is it
unreasonable to suppose that mathematics grew out of these reasoning powers
directly, perhaps sometimes letting physical objects aid their intuition, but
certainly never letting physical objects restrain their intuition?  In short,
why do you grant logical reasoning without hesitation?  And if you can grant it
so freely, what happens to the formalist, who claims that all he has done is
pure logical reasoning, that is, shown that certain assumptions lead to certain
conclusions, while denying any knowledge or ontology concerning the assumptions?

You claim that the mathematician solving the Schroedinger equation is emulating
the ideal subject, several levels of collective operations beyond the empty set.
Or maybe you claim he is emulating the ideal subject's manipulation of electrons
in the hydrogen atom.  (He is ideal, isn't she?)  Or is it that you claim he is
emulating the ideal subject's direct collapsing of wave packets in free space?
And what do you claim is going on when a physicist solves the wave equation?
I bail out here, because the point, I think, is clear.  Mathematics is NOT tied
down to physical reality, and to even pretend it is, for even an ontological or
epistemological moment, can lead to some even stranger beliefs.  What if quantum
mechanics is overthrown?  What if it were shown internally inconsistent?   As
Cantor said, the essence of mathematics is its freedom.  And I experience that
sentence quite literally.  (There is a similar objection to F Capra's The Tao of
Physics.  If physics is revised, does the Buddha lose his enlightened status?)

You do not discuss one extremely important ontological question.  What is the
status of non-existent objects?  Proof by contradiction is as valid a proof as
any, and while such can often be recast as positive proofs, indeed giving one
more information, non-existent objects still play a key role in mathematics.
Platonists generally assign them to some unspecified limbo, even vaguer than
the usual mathematical realms, but your account leaves me no clue as to how you
would consider them.

An even more touchy objection is the fact that Zermelo-Fraenkel set theory is
well-known to be incomplete.  That is, the iterative concept of a set, and your
reformulation of it, cannot touch the ontology of large cardinals.  If your on-
tology is complete, then there are no models of ZFC, by the G"odel incompleteness
theorems.  And even if you idealize your subject even further, so that he can
see all of what he has done at once, so that he can ideally realize he left out
something, this process can never work.  Inaccessibles, Mahlos, weakly compacts,
Ramseys, measurables, Mitchells, supercompacts, Martins, and Woodins are way up
there, but ONLY BY ASSUMING they exist.  Their mathematical status is that they
can never be constructed or proved to exist or even to be consistent.  Which of
them are consistent is unknown and could forever be unknowable.  In short, they
are mathematical theology.  Your ideal subject cannot in any sense manipulate
measurable cardinals, unless he is a closet formalist or a closet realist!  For
example, in the preface to F Drake's Set Theory, the author emphasizes his
uncompromising realist attitude.

And this is just the sharp edge of a more fundamental objection.  You assume
happily that to understand the ontology and epistemology of mathematics, one
just needs to understand the ontology and epistemology of set theory.  That is
completely absurd.  Mathematics CAN be embedded within set theory, but we do
that for our own technical reasons.  K-theory, differential geometry, abelian
group theory, etc. all could survive without ZFC.  Set theory is a powerful
precision tool that forces high standards of rigor and high purity of language
on all of us.  But the idea that K-theory, differential geometry, abelian group
theory, etc. depend on set theory for their EXISTENCE and our KNOWLEDGE is just
not true: the discovery that ZFC had been inconsistent all these years would
only induce a major yawn in most mathematicians.  ("See, we told you it wasn't
very interesting.")  And none would say, K-theory totters.

(Actually, some fine mathematicians have fallen for the ZFC propaganda and have
an unusual nervousness about proper classes as a result, spending an extra page
in a book or fifteen minutes in the classroom making sure that Russell's ghost
won't spook their proofs.  But then, most mathematicians are nervous about any
alien fields.

(It took a while for logicians and set theorists too to realize their job was
to do logic and set theory, and not to secure mathematics.  Since then, we have
created paradises far beyond Hilbert's wildest imagination.

(There are other foundations of mathematics that have been proposed, some having
nothing to do with sets.  It's a game we play that emphasizes that the true deep
foundations of mathematics are far more fundamental and firm than anything that
has been proposed, indeed they seem almost mystically beyond our ken.

(Once upon a time there was an ancient castle in the lost forest.  In its deep-
est, darkest, dreariest dungeon lived the spiders, who had been spinning their
webs for ages unknown.  And then, the great flood swept through the countryside.
Some spiders, the lucky few, survived by burying themselves deep in the cracks.
When the violent waters receded, they came out and saw in great horror the
complete devastation wreaked upon their demesne.  And with a furious energy,
almost a panic, they immediately started spinning their webs again.  For you
see, they thought it was the webs that had been holding the castle up.)

Enough of objections.  I will proceed to discuss some examples that I feel are
extremely mysterious without some sort of realistic ontology.  They concern the
problems of deep coincidence, structural coherence, classifications, and in-
dependent discovery.

Deep coincidence:
In the early 1970's, as part of the classification of finite simple groups,
R Griess and B Fischer independently found evidence that there was a very
large simple group out there, with about 10^54 elements.  The more they looked
around, the more such an object seemed to exist, since they could derive an incredible
amount of information about the group.  Indeed, character tables were quickly
drawn up and studied for information.  One thing noticed, rather early, was the
following: the dimensions of the representations were 1, 196883, 21296876, ....
Notice the pattern?  Not at all, until one recalls that 744, 196884, 21493760, ...
are the coefficients of the elliptic modular function.  Ignoring the 744, it
turned out that all the coefficients come from adding the dimensions of the
hypothetical group's representations!  And that was just the beginning.  Hundreds of other deep
coincidences were quickly discovered.  The whole question became known as
"Monster Moonshine".  (Monster became the name of the group rather early on.)
It was clear to all that the group existed, and even more importantly, that
something deep and important was going on.  Five years ago, R Griess found the
Monster for real.  And just last year Frenkel, Lepowsky and Meurman made the
first step at explaining moonshine.  (Remarkably enough, their methods were
based on vertex operators, from recent mathematical physics.)  It had never been
a question of would someone, only of when.
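
(For the curious, the first two coincidences read
        196884  =  1 + 196883
      21493760  =  1 + 196883 + 21296876
with the left sides coefficients of the modular function and the right sides sums
of dimensions of the Monster's irreducible representations.)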

Structural coherence:
In the early 1930's Banach and Ulam analyzed a certain infinite game of pure
strategy.  In 1953 Gale and Stewart proved some minor little theorems about
certain related infinite games.  In the early 1960's, Mycielski and Steinhaus
proposed the axiom of determinacy (AD) concerning winning strategies for these
infinite games.  It contradicted the axiom of choice (AC) but apparently gave a
cleaner real line.  (For example, AD implies that all sets are Lebesgue measur-
able and rules out the Banach-Tarski paradox; non-measurable sets and the
paradoxical decomposition both depend upon AC.)
Their proposal was considered Loony Tunes.  And then in the late 1960's, two
events, one a surprising theorem, the other an odd AD-inspired reproof of an
old theorem, turned this obscure and bizarre proposal into one of the central
concerns of modern set theorists.  A weaker form, called projective determinacy
(PD), has been extensively investigated, and leads to the only known coherent
and natural theory of the projective subsets of the real line.  But PD, although
it does not seem to violate AC, is a very strong axiom whose fate is intricately
linked with the consistency of large cardinals.  It is astonishing that an
elementary question concerning the reals should be so related to the theological
assumptions one makes regarding the entire universe of sets, and even more
astonishing that that exact same question leads to the only natural solution to
the original problems concerning projective sets as proposed back in the 1920's.
AD, by the way, is still considered Loony Tunes.

Classification:
What does one make of a statement that the only semisimple complex Lie algebras
are the classical ones and five others, the so-called exceptional Lie algebras?
Why five others, and not four or seventeen?  What does one make of the statement
that the only finite simple groups are the cyclic ones, the alternating ones,
the groups of Lie type and then 26 others, the so-called sporadic groups?  Why
26 and not 1001?  It seems a mystery, unless, of course, there happen to be 26
sporadic groups, just sitting out there, swimming in some mathematical sea from
time to time.

Independent discovery:
Two novelists, asked to tell a story with the same plot and same characters,
would come up with two very different stories.  Two physicists, assured that a
certain interesting phenomenon was within their experimental grasp, would, in
fact, find the same core results.  The difference comes from the ontology of
their subject matter.  Mathematical discoveries always go one way, even when
more than one person is involved, and centuries separate the efforts.  The Radon
transform was rediscovered seven different times this century.  Connections on
principal fiber bundles in mathematics and gauge field theories in physics were
developed separately for decades until the surprising discovery in the early
1960's that they were one and the same thing.  The mutual astonishment gave way
to the modern relinking of deep mathematics and deep physics, which had parted
ways around 1915.  The list is endless.

Enough of arguments.  I want to remove misconceptions you seem to have about
proof, axioms, and mathematics in general.

Some mathematicians love proofs.  Some mathematicians hate them.  Some love them
when they are short and clever, others love them best when they are long and
difficult.  One I know of likes them best when they explain why their probanda
are true.  I'd love to take twenty years off someday and learn the proof of the
classification of finite simple groups.  Some mathematicians like only the key
insights that motivate the proof, others have to dot every i before they believe
it.  And the amount of detail needed depends on the background of the mathema-
tician and the subject area of the proof.  I've often had seminars, devoted to
proving a major theorem in the professor's field, get tied up in little details
concerning the small lemma that comes from an alien field.  I've even seen the
speaker implicitly asking the one or two experts from the alien field whether he
is overdoing it.  The possible examples are endless.

My own feeling about proofs can vary.  I've seen some rather tricky proofs.  The
best way I have to understand such is to work out a particular example, and then
convince myself that the example was used in a sufficiently generic way that I
know that the same sort of reasoning would apply to the general case, which I
then skip.  This method is quite common.  Euclid, for example, proves that there
are infinitely many primes by assuming there are three and deriving a fourth
(spelled out just below).  I have sometimes, as a challenge problem, taken a tricky
proof, given in the book only by how it works in a particular case, and expanded
it out to its full case.  In many cases, I was amazed at how my original delight
at the insight was
lost in the careful twists I had to work out, or replaced by different delights
as slightly subtle difficulties had to be accounted for.  But to say that the
full proof compared with the sketch increased my understanding or my insight or
my knowledge is only sometimes correct.  (And by full proof, I do not mean the
full formal proof, but a proof whose steps are considered elementary within the
given field.  We mathematicians do an incredible amount of Cartesian chunking
without even being the slightest bit aware of it.)
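
(To spell out the Euclid example above: if 2, 3 and 5 were the only primes, then
2 x 3 x 5 + 1 = 31 would be divisible by none of them, so any prime dividing it is
a fourth prime; the same argument plainly works however many primes one starts
with.)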

What is my point?  My first point is that any sweeping claims about proofs and
what one thereby knows are extremely tenuous.  My second is more serious: proofs
are not what mathematicians are always interested in.  Proofs are part of our
field, and are the prerequisite of theoremhood, but they can seem a nuisance, a source of
insight, or just an aesthetic joy.  There are theorems whose only point seems
to be they have such incredible proofs.  And there are serious theorems which
have only dull proofs.

If the job of the mathematician is not to generate proofs, then what does he do?
He tries to understand.  He tries to understand whatever it is he's working on
at the moment.  If he's a graduate student, he may be working on Galois theory
or on Poincare duality until he gets it perfect.  If he's retired from research,
he may become the wise source of knowledge in his field, surveying, summarizing
and even proselytizing for the benefit of the field.  If he's somewhere in be-
tween, he is probably working on some problems somewhere, thinking about them,
trying to understand what makes something tick.  And the best of us have deep
insights, insights so deep and profound, about structures equally deep and pro-
found, that it may take several generations of mathematicians for some of them
to even realize that the insights are deep and profound, and only then does the
hard work begin.

But insight and its kid brother intuition are themselves nice and nebulous.  Still,
I must reject any need, even within epistemology, to give sharp definitions of
these things: such a requirement seems impossible to meet generally, the difficulty
lying within the nature of language, but by staying alert we can be careful about
what we delineate by our terminology anyway.  (This is why I find Quine's "Two Dogmas
of Empiricism" bothersome.  Its discussion is beautiful, its point is serious,
but the actual objections are unfair!)

So how does a mathematician know something is really true?  He knows some state-
ment is true for many reasons.  Because a qualified expert proved it.  Because
he's checked a special case.  Because it fits in with the needed structures.
Because it just has to be true.  I have to ask, was your book written to point
this out?  If so, I'm greatly disappointed.

I would like to point out that the word belief is used in many different ways
by mathematicians, and you do not distinguish these ways.  For example:
    a)  I believe 2+2=4.
    b)  I believe the Fundamental Theorem of Algebra.
    c)  I believe that e^-x^2  has no elementary antiderivative.
    d)  I believe the Riemann hypothesis.
    e)  I believe the Poincare conjecture in dimensions 5 and up.
    f)  I believe the Poincare conjecture in dimension 4.
    g)  I believe the Poincare conjecture in dimension 3.
    h)	I believe the Mordell conjecture.
    i)	I believe Projective Determinacy.
    j)  I believe the Classification of Semisimple Lie Algebras.
    k)  I believe the Classification of Finite Simple Groups.
    l)  I believe the Four Color Map Theorem.
    m)  I believe the two Enflo counterexamples.
    n)	I believe the last theorems of Harish-Chandra.
 Ad a:  I've known this since I can remember.  I've also seen a formal proof.
 Ad b:  I've known this since I can remember.  I've also seen several full
	proofs, and could generate several of them at the drop of a hat.
 Ad c:	I know of two proofs, neither of which I have read more than the highest
	level rough sketch.  The first one is classical, a kind of generalized
	Galois theory.  The second gives an explicit algorithm for integrating,
	which has been implemented on the computer I use here, and which I ran,
	getting, as expected, the reply that the integration could not be done
	with the elementary transcendental functions.
 Ad d:	This is a deep open problem.  Various generalized forms of it are deep
	theorems.  A lot of people who have thought about it long and hard 
	think it is true.
 Ad e:	This was proven in the 1960's.  It has become a standard second year
	topic, one I've never learned, but all my topologist friends have.  I
	have seen tiny pieces of it, and it seems to make good sense.
 Ad f:	This was proven in 1982.  It made a lot of people very excited, with
	seminars in quantity about it.  A lot of people have gone over the
	proof, and they find it valid.  Indeed, in 1983 the superficially un-
	related Donaldson's Theorem in gauge field theory from physics (!),
	when combined with Freedman's proof, led to one of the most astonishing
	discoveries of the century: the existence of fake R^4s.  (The discovery
	of quasars and pulsars had some of the flavor of this astonishment.)  The
	number of people who have jumped on the bandwagon because of this is
	even larger.  Someday I will learn all this.  At the moment, I just
	know the roughest sketch of the Donaldson's theorem.  Even that much
	amazes me.
 Ad g:	This is a deep open problem.  I believe it because Bill Thurston will
	prove it one of these days.  Until he came along, there was no hope.
	(His work, by the way, has been on a much more difficult problem, which
	gives the Poincare conjecture trivially.  He has been at it for about
	fifteen years.  His approach was known to be impossible in dimension 4,
	so the sudden announcement of f above was part of its surprise.  When
	Thurston announces his proof, it will have been expected all along.)
	(Thurston, by the way, has a big edge over most of us, because, not only
	is he smart as hell, he can see directly, in his mind's eye, the higher
	dimensional objects that he needs to manipulate.)
 Ad h:	This used to be a deep open problem, with the general feeling that it
	was true but could easily go centuries without proof or even intuition.
	But Faltings surprised the mathematical community in 1983 with a proof,
	not only of the conjecture, but a number of other important theorems
	along the way.  He was immediately offered tenure anywhere he wanted.
 Ad i:	This is treading very thin theological ice, which I discussed above.
 	But I will mention that Martin, Steel and Woodin's work over the past
	two years has firmed up the theology to an almost respectable degree.
	The final step, by the way, is only in rumor stage.
 Ad j:	I've taken a course which culminated in this.  Some steps in the early
	part of the general theory were too boring for me.  I'll learn those
	steps when I teach a course on the subject.  But the actual classifica-
	tion was pretty exciting.  This theorem was proved around the turn of
	the century by Cartan, but the original insights of Killing, which saw
	directly the final result, were twenty years older.
 Ad k:	When 50 mathematicians work for 20 years on the same problem, when their
	entire field becomes just one problem, I just have to believe they know
	what they are talking about.  (The result was announced late in 1980,
	although the last steps weren't completed until 1981.)  The proof,
	scattered in dozens of journals, rough manuscripts, computer calcula-
	tions, etc. is about 5000-10000 pages long.  They are not sure, since
	there is an incredible amount of redundancy.  Hopefully, the revision-
	ists will get it down to a reasonable 3000 pages, with a clear outline
	to the proof, and some computer programs in the appendix.  At least,
	that is Gorenstein's announced goal.  (The actual proof, apparently,
	has a rather natural organization.  Rather astonishingly, the proof
	yields very little information about finite simple groups in general,
	until the very last line of the very last lemma, and then vast hordes of
	information come tumbling out.)  (The problem, by the way, has the feel
	of much of 19th century mathematics.)
 Ad l:	I've read a discussion of their proof and its methods, and even went
	through a hand verification of some of the easy cases.  It seemed mech-
	anical enough, so I think the proof is quite airtight.  Even better,
	a second proof exists, requiring only 1000 cases instead of the original
	2000.  (But what many people do not realize, is that not only was the
	proof computer verified, the actual breakdown into 2000 cases was found
	by extensive computer calculation, so that the proof was partly computer
	generated.)
 Ad m:  These are both very difficult, the latter about 80 pages.  The first
	one many people claim is a fine piece of mathematics, and so I believe
	it must be correct.  As for the second one, it has been unpublished for
	ten years, because no referee has been able to read it through to the
	end!  I want to volunteer, but my friends won't let me.  But I think
	Enflo knows what he's done, although no doubt he's forgotten most of
	it by now.  As a much simpler counterexample was produced in the first
	case, I think people are waiting for a simpler one in the second case.
 Ad n:	Harish-Chandra's theorems are in an area I know nothing about, not even
	the most elementary examples.  His style was always the same throughout
	his long life: he would announce a theorem that he had discovered, and
	then within a year or two work out the actual proof.  He never once re-
	tracted an announcement.  As he was fully active until his sudden death
	the other year, his latest announcements' proofs are lost forever.
There are many fine nuances of belief present in the above, and by not giving
even a coarse spectrum your discussion of epistemology seems confusing at times.
I feel fully warranted in believing theorems of which I have never seen proofs.
Why?  Because it is in the nature of mathematics.  Because I have seen, over and
over again, how proofs work, and I know my colleagues have.  The more interest-
ing a theorem is, the more people look over it.  There are numerous erroneous
proofs published in journals every year.  I don't believe one of them, for the
simple reason that I have not even looked at the titles of most papers, and so
do not even know the statement of the erroneous results.

I would like to point out a common feature of some of the more difficult proofs.
They require broad knowledge of the area they are from, sometimes pulling in
everything from the field, and then more.  They usually require a good deal of
mental stamina.  A few other cases of extra difficulty: the counterexample to
one of the Burnside conjectures used a simultaneous induction on a hundred
propositions.  The proof of the Smith conjecture required hard work from three
disparate fields.  The fundamental objects in Jensenlehre are called morasses,
for reasons painfully clear to those who study them.  (And the published versions
are the trivial case!)  Studying them is hard enough just as an object of pure
thought, but when all of Jensen's handwritten notes (he almost never publishes)
are riddled with gaps, errors and traps, it takes much faith before one reaches
the promised land.  (But correct versions are now seeing the light.)

So what can one make of mathematical epistemology?  While it may seem that I am
in fact pushing towards your claim that it is, after all, rather empirical, I
favor a different view.  Since any individual is fallible, and so I agree with
you, any discussion gets bogged down in questions of psychology, I find the
question as you discuss it impossible to reply to, since psychology is far too
inaccurate for me to draw any conclusions.  I favor thus a return to apsycholo-
gistic epistemology, by discussing, instead of the knowledge of an individual
mathematician, the knowledge of the mathematical community.  That knowledge
seems grounded far more securely than any other large body of human knowledge.
But I leave this question to you and your colleagues, since thinking this much
gives me nightmares.

Indeed, it seems much of the discussion you presented turned on what I find is
irrelevant: human fallibility in the actual proofs.  As I mentioned above, the
relation between truth, proofs and certainty is quite involved.  But even deeper
is a trend that has just barely begun, but which I think within fifty to a hun-
dred years will be a commonplace: the computer verification of proofs.  Not just
specialized examples like the Four Color Map Theorem or the complicated algebra
of a MACSYMA system, but the computer verification of ALL proofs.  While the AI
community is busy trying to generate theorems, mathematicians are increasingly
having their papers computer typeset.  It seems entirely reasonable that the AI
methods can be steered to the simpler task of checking already given proofs, and
these proofs will be in computer readable format more and more frequently.  Who
knows?

Enough of epistemology.  I now want to discuss examples that I felt were missing
from your book.  I do not feel they favor your argument, but maybe they do.

(The examples all concern 20th century mathematics, which I notice is virtually
non-existent in your book.  Perhaps this is partly why my reaction is so strong,
since you say your book is about mathematics, yet it hardly seems so to me, as
most of mathematics is missing.

(Indeed, the flavor of much of today's work seems to have nothing in common with
classical mathematics.  The things we study are so much more sophisticated than
mere numbers, layers and layers of structure beyond what was done before.  A
common feature in much of modern mathematics is that the various transformations
between standard objects are taken as new fundamental objects of study, and that
this process is iterated to a dizzying degree.)

The axiom of choice (AC) was adopted with much dispute.  Today it is almost
impossible to understand our predecessors' concerns about it.  We take it for
granted that it is true, indeed obviously true, and use it when needed.  The
fact that it was crucial for so many serious theorems was an important factor
in its general acceptance.  Yet if one looks around hard enough, one can find
lingering suspicion, even among the new generation.

Leibniz' infinitesimals have been vindicated.  Non-standard analysis is the name
given to the modern approach to their usage, and it has played a key role in the
proof of several modern theorems.  Yet the field is looked upon with great sus-
picion, and most mathematicians, while granting that such proofs are technically
correct, always translate them into one of the standard molds.

There are many ways of dealing with divergent series and getting numbers out of
them.  Leibniz' trivial averaging is just the first of many schemes for dealing
with series, which is now a small cottage industry.  In physics such methods
are extremely important (renormalization).  But mathematicians as a whole avoid 
all but the simplest of these methods.  Indeed our field tends to look down
with scorn at the physicists, on the grounds that what they do does not make any
mathematical sense.  And they don't seem to want to help us, proud of their
secret ability to add and subtract and divide lots of infinite quantities so as
to make their theories fit nature.  We both know this is actually a regrettable
state of affairs, but few can jump the canyon separating our fields.
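
(The trivial averaging in question: for Leibniz' series 1 - 1 + 1 - 1 + ..., the
partial sums run 1, 0, 1, 0, ..., and their running averages tend to 1/2, the
value Leibniz assigned to the series.)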

Enough of this letter.  I'd like to suggest various items worth reading.

Some history of Stokes' theorem, and worthwhile comments on mathematics:
	M Spivak, Calculus on Manifolds
	preface,pages 104,137
The Mathematicians' Liberation Movement:
	J H Conway, On Numbers and Games
	appendix to Part Zero
The difference between mathematics and physics:
	R Sachs, H Wu, General Relativity for Mathematicians
	preface,guidelines,sections 0.1,0.2,2.1,small print above 3.3.5,6.1,6.2
On adopting new axioms for set theory:
	Y Moschovakis, Descriptive Set Theory
	preface,section 8J
You might like to find out what a mathematician means by 'arithmetic':
	J P Serre, A Course in Arithmetic
Some books with lots of discussion by mathematicians:
	D J Albers, G L Alexanderson, Mathematical People
	P J Davis, R Hersh, The Mathematical Experience
	Serge Lang, The Beauty of Doing Mathematics

Yours truly-


ucbvax!brahms!weemba	Matthew P Wiener/UCB Math Dept/Berkeley CA 94720