Thursday, March 29, 2012

I Came To A Realization Today

I'm going to share a deep, dark secret with you:

I don't know as much math as you might think I do. And I've always been afraid of being found out.

I've always known where my education was deficient, even when I was receiving that education. In high school I never learned about hyperbolas, or about factoring a cubic polynomial. I'd never heard of the Rational Root Theorem or Descartes's Rule of Signs. In college I earned a degree in applied math, and could calculate my butt off, but so often didn't fully understand what I was doing. I knew what to do, and I could understand why one step logically led to the next, but I didn't have a "big picture" understanding. As an example, in differential equations I could calculate eigenvalues all day long, but to this day I don't know what an eigenvalue is or what it does for me or why I need to calculate it. I've taught myself plenty--sometimes just days before I had to teach it to my students.

This came to a head today when I was talking to one of my students who's heading to Cal Poly. I told him not to make the mistake I made; ask the questions, go for the deeper understanding.

I've never understood the Fundamental Theorem of Calculus. Why, exactly, are an integral and an antiderivative the same thing? I've followed the steps in my calculus books, and understood each step, but never really understood how they all fit together. So today I pulled a different calculus book out of my closet and I started studying. I found one that provided a very user-friendly explanation, which then allowed me to understand the very rigorous (read: dry and difficult) proof in a second text. It took a few minutes to replace decades of deficit.

I learned something today. Tackling the Fundamental Theorem of Algebra, which one book says is "beyond the scope of this textbook", is next.

I remember my senior project at West Point. I was writing a computer program that would aim a gun at an airplane, and an instructor asked me, "Why are you using that algorithm? There are others that will converge much more quickly." I knew that there were others, and I knew what "converge much more quickly" meant, but what I didn't know was what others there were or how they'd converge more quickly. I was able to throw him off, but I remember the fear of being caught that day.

So now I want to learn. I want to understand. I'm looking forward to that masters program I'll be starting in the fall, a Masters in Teaching Math through the University of Idaho's Engineering Outreach Program.

Explaining this to another teacher today, I was told that now I'm experiencing the difference between learning and merely completing a degree.

I'm looking forward to this.


KauaiMark said...

I wonder if Khan "can" help with a little prep work beforehand.

mazenko said...

Excellent and honest insight, D. If only more in our profession were similarly motivated, and subsequently able to inspire students. Best of luck with the program.

Tulip said...

It is funny that you blogged about this today, because I was talking with my husband about a similar instance that happened with his grandfather when he was in high school. They were working on the farm and his grandfather asked him a simple geometry question, but it wasn't on paper and there were no numbers. There was just a problem that needed to be solved. My husband knew all the formulas, but couldn't apply them to a real-life situation. I wish you many blessings on your new endeavor and hope you are able to transfer that real-life application and understanding to your students. Just curious: is it an online program?

Darren said...

Distance learning, yes, but not online. I'll be watching DVDs of classes conducted at the university, doing all the same homework, and then scanning it and emailing it to the instructor.

Anonymous said...

Good on you, Darren. Here's a simple explanation of the fundamental theorem of algebra: a graph of a polynomial can cross the x-axis at most as many times as its degree (the highest power of x). That means the graph of a quadratic equation (which has x^2) can cross the axis twice or fewer. A cubic can cross it at most three times. A 5th-degree equation can cross the line 5 times or fewer, never more. Note: only the highest power matters.
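You can watch this happen on a computer, too. Here's a small sketch (the cubic and the sampling grid are just example choices of mine) that counts x-axis crossings by looking for sign changes:

```python
# Sketch: a degree-n polynomial crosses the x-axis at most n times.
# We count sign changes of a sample cubic over a grid -- a numeric
# illustration, not a proof.

def poly(x):
    # f(x) = (x - 1)(x + 2)(x - 3) = x^3 - 2x^2 - 5x + 6: a cubic with 3 real roots
    return x**3 - 2*x**2 - 5*x + 6

def count_crossings(f, lo, hi, steps=1000):
    """Count sign changes of f on [lo, hi] by sampling."""
    crossings = 0
    prev = f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        cur = f(x)
        if prev * cur < 0:
            crossings += 1
        prev = cur
    return crossings

print(count_crossings(poly, -4.5, 4.5))  # 3 crossings, the maximum for a cubic
```

The same counter applied to a quadratic never reports more than 2 crossings, a cubic never more than 3, and so on.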

Simple! You do know the fundamental theorem of arithmetic, correct? One of its results is that the number 1 is not prime. Want me to explain why?

Darren said...

Conceptually I understand the fundamental theorem of algebra. I just want to see a proof of it.

Off the top of my head I do not know what the fundamental theorem of arithmetic is. One isn't a prime because it doesn't conform to the definition of a prime number :)

Scott McCall said...

You don't know English much either. Second paragraph: "I'm don't know"

Darren said...


And Scott? I'll be watching :)

Ellen K said...

Any good teacher recognizes their own weaknesses. Only mediocre teachers presume to know all the answers. I am not the best painter (although many people like my work, I am highly critical) but I know how to teach skills, techniques, composition and how to think so that my students can do better. I hope I live to see some of them become famous.

Anna A said...

I can appreciate your situation. I managed to get through 3 semesters of college calculus, because I needed them for my chemistry major, but never got an intuitive handle on it.

I've ended up in an area of chemistry, formulating, that really works better for an intuitive, see-what-happens type than for other, more mathematical areas.

Anonymous said...

The fundamental theorem of arithmetic says that every integer greater than 1 has one, and only one, prime factorization (up to the order of the factors). Take 6: there is only one way to factor it into primes, 2x3. Out of the infinitely many primes, that's the only combination that gives 6.

However, if 1 is prime, then: 6=2x3, 6=2x3x1, 6=2x3x1x1x1x1x1, etc. But the theorem says that there is only ONE unique factorization. Therefore, 1 cannot be prime. It's a special case number.
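For the curious, trial division makes the uniqueness concrete. This helper is just an illustrative sketch:

```python
# Sketch: trial-division factorization. Every integer > 1 yields exactly
# one multiset of prime factors (the Fundamental Theorem of Arithmetic).

def prime_factors(n):
    """Return the prime factors of n (with multiplicity), smallest first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(6))    # [2, 3]
print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```

Note the algorithm never emits 1: admitting 1 as a prime would let you pad any factorization with extra 1s, destroying uniqueness.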

Eric Jablow said...

Eigenvalues are part of linear algebra. Given a square matrix T, an eigenvalue is a number λ such that there is a non-zero vector v with Tv = λv. The vector v is called an eigenvector. Now, if T is an n by n matrix and has n distinct eigenvalues, you can take their corresponding eigenvectors, and they form a basis of the ambient space. By throwing some extra constants in, you can make this an orthonormal basis.
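A tiny numeric check of that definition (the matrix and vectors below are just example choices):

```python
# Sketch: verify Tv = λv by hand for a 2x2 matrix, no libraries needed.
# T = [[2, 1], [1, 2]] is symmetric; its eigenvalues are 3 and 1, with
# eigenvectors (1, 1) and (1, -1) -- note those two are orthogonal.

def matvec(T, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [T[0][0]*v[0] + T[0][1]*v[1],
            T[1][0]*v[0] + T[1][1]*v[1]]

T = [[2, 1], [1, 2]]

for lam, v in [(3, [1, 1]), (1, [1, -1])]:
    Tv = matvec(T, v)
    assert Tv == [lam * v[0], lam * v[1]]  # Tv = λv holds exactly
    print(lam, v, Tv)
```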

Now, let's relate this to differential equations. What sort of vector space would you need? A vector space of functions. You want to apply differentiation to them, and do so repeatedly, so you want infinitely-differentiable functions. You want some control over this, so let's force the functions to be periodic with period 2π. Okay, I'm hand-waving here, but there's a reason for that. Similarly, we'll make these functions complex-valued. You'll see why.

What are the eigenvalues of the differential operator D? Solve Df = λf; the answer is f = e^{λx}. Easy, huh? But f is periodic: f(0) = f(2π), so e^{2πλ} = 1. Remember Euler's formula? λ is an integer times i. So λ = in for each integer n, and its eigenvector is e^{inx} = cos nx + i sin nx.

Where's the connection? Well, just as the eigenvectors of an n by n matrix form a basis, these functions form a 'basis' of the function space. Do a little algebraic manipulation, and you get that 1 and the functions cos nx and sin nx form a basis of the vector space of 'nice periodic functions'. This is one of the motivations for Fourier series and Fourier analysis.

Now, you can write formulas with dot products to express a vector v in terms of an orthonormal basis. You can do the same thing here, but you replace dot product with various integrals. Remember--you can integrate more functions than you can differentiate. You can apply these formulas to many more functions than just the infinitely-differentiable ones. Keep the periodicity, however. So, we can take a periodic function and write its Fourier series (and we'll go back to real functions here, not complex):

f(x) has series 1/2 a_0 + a_1 cos x + b_1 sin x + a_2 cos 2x + b_2 sin 2x + ....

(Why 1/2, you may ask? A technical detail.)
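If you want to see coefficients actually come out of those integrals, here's a rough numeric sketch. The square-wave example and the sample count are my own illustrative choices:

```python
import math

# Sketch: compute Fourier coefficients numerically for a square wave,
# f(x) = 1 on (0, π), -1 on (π, 2π). Theory gives b_n = 4/(πn) for odd n
# and 0 for even n (and every a_n = 0, since f is odd).

def f(x):
    return 1.0 if (x % (2 * math.pi)) < math.pi else -1.0

def fourier_b(n, samples=10000):
    """b_n = (1/π) ∫₀^{2π} f(x) sin(nx) dx, by a midpoint Riemann sum."""
    total = 0.0
    dx = 2 * math.pi / samples
    for i in range(samples):
        x = (i + 0.5) * dx  # midpoint of each subinterval
        total += f(x) * math.sin(n * x) * dx
    return total / math.pi

print(round(fourier_b(1), 4))  # ≈ 4/π ≈ 1.2732
print(round(fourier_b(2), 4))  # ≈ 0
print(round(fourier_b(3), 4))  # ≈ 4/(3π) ≈ 0.4244
```

The same replace-dot-product-with-an-integral idea works for any (reasonable) periodic f, which is exactly the point of the comment above.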

You see this sort of material physically when you analyze the motion of a violin string, or try to synthesize music.

Why didn't I say that f equals its Fourier series? Well, in general it doesn't have to. A lot of the history of differentiation and integration comes from questions like "When and where does the Fourier series of a function equal it?" and "Where can a function fail to be continuous and still have a Fourier series that equals it where it is continuous?" And this led to "What is the correct form of the Fundamental Theorem of Calculus?" This led first to the Riemann integral, and then to the Lebesgue integral.

Eric Jablow said...

One correction--in the first paragraph, T needs to be symmetric. Otherwise, the eigenvectors need not be normal to each other. Fortunately, D is 'symmetric'.

Michael Anderson said...

Bravo, Darren!

I went through a similar experience after blasting my way through an MS in Statistics, then immediately teaching "bonehead" statistics at the university level. My basic approach was to revisit my textbooks, and work out all the problems that had NOT been assigned to start filling in the gaps.

I was amazed at how much I had not understood! Better yet, I was surprised at how much better I could teach once I had a deeper understanding of the basic math and statistics.

Bonus: by doing this, I was totally overprepared for my PhD comps. Cool.

Joshua Sasmor said...

Congrats Darren! I think this is the first step to you getting a Ph.D. - wanting to understand _why_! It's my favorite part of teaching math :)

James said...

As an engineering student, I learned how to use various mathematical techniques for solving engineering problems. Unfortunately, many of us used math as a tool and often didn't understand why it worked. I knew why a partial differential equation was set up the way it was to solve a heat transfer problem, but everything after that consisted of marching through mathematical processes that I could use but never fully understood.

Part of the problem was that my math professors had a tendency to stand with their backs to the class while they worked through a proof that only the tiniest minority of students understood. The professors didn't even see our looks of confusion. (I've noticed that many Engineering students have difficulty seeing the connection between proving a technique and actually applying it.) When some brave soul actually admitted to being confused, the professors would simply re-prove the theorem.

Our engineering professors were a lot better at actually teaching the students and explaining concepts, but either they had too much other material to cover to compensate for the gaps in our knowledge or they themselves couldn't explain why the techniques worked.

GS said...

Ooh, Scott, you forgot to finish your sentence with a period. Ouch, that must be embarrassing.

David said...

This realization of yours, and a similar realization I had, leads me to believe that we need to spend more time on conceptual understanding in mathematics, and less time on the procedures. To be sure, the procedures are important, but they aren't necessarily the goal. If you don't understand WHY you are doing something, why bother?

Salviati said...

Darren, the various aspects of mathematics that you do not understand will not be addressed in an education course on mathematics.

What you need to do is actually take some mathematics classes. Mathematics is first and foremost proof. This is a painful process and one that I also avoided in my youth. You need to take a proof-based Linear Algebra class and a class in Real Analysis. Two books I strongly recommend are Linear Algebra Done Right by Sheldon Axler and Introduction to Real Analysis by Robert Bartle. Both of these books will require taking a course and a tremendous investment of time. But you will come out feeling like you really understand the answers to your questions, and if you don't, you might try proving them yourself.

Darren said...

The program is 8 math courses and 2 pedagogy courses.

Anonymous said...

Which 8 math courses?

Darren said...

I haven't decided yet, but there's a long list from which to choose. I definitely need a course in linear algebra, though; my undergraduate course was horrible and I learned essentially nothing. It was one of those courses wherein I got the highest grade in the class, and got the A, but had a percentage around 60. It was all curve (and no road!).

Anonymous said...

You need to consider what might be of interest, since your motivation is different this time around. Linear algebra isn't typically considered a graduate-level class, however, unless this is a graduate-level treatment of the undergraduate class.

Be careful of taking graduate-level work in undergraduate courses that you have forgotten, or never really learned. I got my @ss handed to me in a graduate-level numerical analysis course for that reason. I'd taken numerical analysis as an undergrad, but the instructor was very weak and I was pretty lazy. Also, I took the undergrad course about 8 years before I took the graduate course, and forgot all of it. I thought I was smart enough to power through the graduate-level course, but I have to understand things from the bottom up, and it was really hard to follow.

Eric Jablow said...

The difference between the basic linear algebra course and the advanced course is this:

In the basic linear algebra course, one studies properties of matrices. Pretty much everything is tied to those as arrays of numbers.

In the advanced course, one studies properties of vector spaces and of linear transformations between them. Given a basis of the domain and codomain (I'd say range, but that's not precisely the correct word), one can give the matrix form of the linear transformation, but one tends to worry about those properties of the transformation that do not depend on particular bases. And then one can generalize these:

One can look at infinite-dimensional vector spaces, and then one has questions about spaces with norms (lengths), called Banach spaces, spaces with inner products (dot products), called Hilbert spaces, and other generalizations, some with extra conditions.

One can generalize the vector spaces algebraically; you might replace scalar multiplication by real (or complex) numbers with multiplication by elements of a commutative ring. This leads to commutative algebra and module theory, and after a while leads to algebraic geometry through sheaves and schemes and deeper structures. The last is not for undergraduates, and is a specialized subject for graduate students.

One can take functions from the plane to the plane, or any real Euclidean space to another real Euclidean space. If a function is differentiable, then locally it can be approximated by a linear function:

f(v) ≈ f(v₀) + J (v - v₀) where J is the Jacobian matrix.
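A small numeric sketch of that approximation (the function and the base point are arbitrary example choices):

```python
# Sketch: first-order (Jacobian) approximation of f(x, y) = (x², xy)
# near v₀ = (1, 2). The Jacobian is [[∂(x²)/∂x, ∂(x²)/∂y], [∂(xy)/∂x, ∂(xy)/∂y]]
# = [[2x, 0], [y, x]], which at (1, 2) is [[2, 0], [2, 1]].

def f(x, y):
    return (x * x, x * y)

def approx(x, y, x0=1.0, y0=2.0):
    """Linear approximation f(v₀) + J·(v - v₀) at the base point v₀."""
    fx0, fy0 = f(x0, y0)
    J = [[2 * x0, 0.0],
         [y0, x0]]
    dx, dy = x - x0, y - y0
    return (fx0 + J[0][0] * dx + J[0][1] * dy,
            fy0 + J[1][0] * dx + J[1][1] * dy)

exact = f(1.1, 2.05)
linear = approx(1.1, 2.05)
print(exact, linear)  # exact ≈ (1.21, 2.255), linear ≈ (1.2, 2.25)
```

The two agree to first order; the leftover error shrinks quadratically as the point approaches v₀, which is exactly what "locally linear" means.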

Now get away from flat spaces like the plane or 3-space, and consider general surfaces like spheres and ellipsoids, or 3-dimensional spaces like the solid ball. We call them manifolds if we can define coordinate systems for portions of each space such that every part of the space is covered by a coordinate patch, and where two patches overlap, the different coordinate systems are related nicely. How nice? It depends on what you want.

In this case, the space has a regular vector space associated to it at each point, called the tangent space, and a nice (differentiable) function between two such spaces leads to linear transformations between these tangent spaces. This leads to differential geometry, to Riemannian geometry under nice assumptions about lengths, and then to pseudo-Riemannian geometry in physics--relativity theory.

Again, most of the generalizations are for very advanced and dedicated students. Someone teaching high school students might never need to know any of them, except that it always helps to know more, and to know the ways of thinking behind what one is teaching.