Like many of us in the education community, I found my first year of teaching to be a disaster. Five periods of Algebra I with 10th and 11th graders at an urban Los Angeles high school was a tough start, and on top of that I made pretty much every mistake in the book. Classroom management was one thing, but the bigger frustration was knowing I was making no headway in actually teaching math to my students.

Mercifully, sometime during my second year, a fellow teacher gave me some life-changing advice: “The thing is,” she said, “you’re focusing on what you are doing, not on what the students are doing.” And she was right. I was operating under the mindset of “I taught it, so they must have learned it,” and, not surprisingly, it doesn’t work that way.

Over time I radically changed my classroom instruction. I got rid of the homework review where students watched me do math for twenty minutes, the lectures through which students sat and listened, and the worksheets of practice problems on which students received little or no meaningful feedback. In their place, I created group activities where students grappled with problems like “Why is a Toblerone a triangular prism?” or “How many knots could you tie in this rope?”, complete with chocolate bars and ropes for the whole class.

My teaching now involved asking questions, empowering students to make mistakes, asking more questions, refereeing classroom discussions, and above all, transforming learning into a guided process of active inquiry that was intriguing and made connections between previously learned and new mathematics. The results were awesome, especially on standardized tests.

And yet now I’m more puzzled than ever. I’m a passionate advocate of education technology, but when I look at most online mathematics programs I see that they replicate the same passive learning that proved so ineffective during my first year of teaching: students watch videos of someone else doing math, read text about how to do math, and then answer multiple-choice questions to see if they understood it. Even more perversely, I see educators who are committed to producing a generation of “critical thinkers” assume that because a lecture is delivered via the internet, or as text on an iPad, it will somehow be the game changer we’re all looking for. I don’t think we could be more wrong.

Technology or not, the advice I received over coffee cake in the teachers’ lounge is still as pertinent today as it was back then: “What are the students doing?” Let’s face it, if all students could learn math through watching videos and reading text, they’d have already graduated high school!

The same rules of good pedagogy that apply to classroom teaching must also be applied to technology-based learning. Students need to be active participants in their learning experience: They need to touch, feel, and see the mathematics. They need their curiosity engaged, and above all they need to become problem solvers — the Common Core standards even say so! And the potential of internet-connected touch-screen devices to do this is unparalleled. In fact, they could hardly be better designed to engage students in interactive, multisensory mathematical experiences. So why do we seem so doggedly determined to use them for purely passive learning?

There are many reasons for this, not least of which is that it’s easier, and thus considerably cheaper, to develop math programs that are the pedagogical equivalent of a lecture followed by a worksheet. But until we as educators consistently apply the rules we know to be true about good instruction to technology, we will continue to be offered programs that deliver only passive engagement. Currently, the most frequently asked questions about a new math program are:

- Does it cover the standards?
- Does it have reports for the teacher?
- How much time will it take?

Upon receiving the answers “Yes,” “Yes,” and “Not too much,” educators are convinced that the program will be successful, but then are surprised when it isn’t! Sound familiar? The real question we need to ask when evaluating ANY learning experience is the same: “What are the students going to be doing?”

The best math software answers this in two ways: **interactivity** and **informative feedback**. However, there are plenty of programs that *claim* to be interactive, but when examined with a discerning eye the interactivity is really just the online equivalent of turning a page or rewinding a video: a forward and back button does not make a math lesson interactive!

An excellent example of genuine interactivity is GeoGebra, a free downloadable program that allows teachers and students to create virtual manipulative mathematical tools. For example, an algebra class could build a parabola with sliders to control the values of the coefficients *a*, *b*, and *c*; observe the connections between the quadratic equation and the graph; and use this tool to answer questions and solve problems. Currently, weaving a stand-alone tool like GeoGebra into classroom instruction does, of course, require an extremely high degree of skill on the part of the teacher, but this type of tactile interactivity, which makes lasting connections between the concepts and symbols of mathematics, is increasingly available in other, more complete math programs.
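To make concrete what such a slider-driven sketch lets students watch, here is a minimal illustration in Python (not GeoGebra itself; the function name and example values are my own, chosen only to show the quantities that change as *a*, *b*, and *c* move):

```python
import math

def parabola_features(a, b, c):
    """For y = ax^2 + bx + c, return the vertex and the real roots,
    the features a slider-driven parabola sketch lets students watch change."""
    if a == 0:
        raise ValueError("a = 0 gives a line, not a parabola")
    vx = -b / (2 * a)                # axis of symmetry
    vy = a * vx**2 + b * vx + c      # height of the vertex
    disc = b**2 - 4 * a * c          # discriminant decides how many roots
    if disc < 0:
        roots = []
    else:
        r = math.sqrt(disc)
        roots = sorted({(-b - r) / (2 * a), (-b + r) / (2 * a)})
    return (vx, vy), roots

# Sliding c upward lifts the whole graph: same axis of symmetry, fewer roots.
print(parabola_features(1, -2, -3))  # vertex (1.0, -4.0), roots [-1.0, 3.0]
print(parabola_features(1, -2, 2))   # vertex (1.0, 1.0), no real roots
```

Dragging a slider in a real manipulative recomputes exactly these features continuously, which is what lets students connect the symbols to the shape.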

And so to feedback. Imagine this: you’re driving a car and you brake too hard. What if, instead of feeling your body lunge into the seatbelt, you received a pop-up warning that said, “Wrong! Try again. Do you need a hint? The angle of depression of the brake pedal should be between 55 and 60 degrees.” How much harder would learning to drive be now? This confusing feedback assumes you already know what an angle of depression is and how to measure it, and, even worse, gives you zero indication of how far off you were from getting it right in the first place.

Sadly, this is how practically all math programs currently handle feedback; even the majority of math gaming software does the same thing. When confronted with this type of feedback, divorced from the context and assuming that you pretty much already know what it is you’re trying to learn, most students quickly give up and resort to guessing.

Imagine if, instead of the textual feedback above, you had a gauge that showed you how far you had pushed the brake pedal, indicated the “safe braking” region, and then allowed you to try again: you would learn very quickly. You would also have a context within which to understand the technical language of the mathematics. Feedback: it’s all about what happens when you go wrong! Learning by making mistakes is good, but for this to occur there must be immediate feedback within the context of the game or problem, informing students precisely how and why their answer was wrong and allowing them to self-correct.
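The contrast between the two kinds of feedback can be sketched in a few lines of Python. The “safe braking” band and the pedal readings here are invented purely for illustration:

```python
SAFE_LOW, SAFE_HIGH = 55, 60  # hypothetical "safe braking" band, in degrees

def binary_feedback(angle):
    """The unhelpful kind: right or wrong, with no direction and no magnitude."""
    return "Correct!" if SAFE_LOW <= angle <= SAFE_HIGH else "Wrong! Try again."

def gauge_feedback(angle):
    """The informative kind: how far off, and in which direction,
    so the learner can self-correct on the next attempt."""
    if angle < SAFE_LOW:
        return f"{SAFE_LOW - angle} degrees short of the safe band: press harder."
    if angle > SAFE_HIGH:
        return f"{angle - SAFE_HIGH} degrees past the safe band: ease off."
    return "In the safe band; nice braking."

print(binary_feedback(40))  # Wrong! Try again.
print(gauge_feedback(40))   # 15 degrees short of the safe band: press harder.
```

The second function gives the learner a context and a correction, which is exactly what the gauge on the dashboard would do.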

So technology, especially internet-enabled, touch-screen devices, *can* be used to create engaging learning experiences with rich interactivity and informative visual feedback. Software *can* be designed to ensure students consistently apply critical thinking skills to solve problems, enabling them to be active participants in the learning experience. But the next time you’re thinking about bringing software into the classroom, or if you’re sitting on a committee evaluating a math program, just ask yourself one question, “What are the students actually going to be doing?”

One walk-through of almost any school would show you that no one teaches the way you are advocating. You looked at the software, but you did not look at the data. Look at the data put out by the National Center for Academic Transformation (http://thencat.org/RedMathematics.htm). You are right that students learn math by doing math; software programs have students do that. The best situation is when there are people with the students who can provide tutoring on demand when they get stuck. Technology will be a major part of instruction in all subjects, but especially in math and science education, where there are so few teachers qualified or capable of teaching the way you described.

Good post. Good advice, “think about what the students are doing.” Chewy goodness there.

On feedback. A problem with having a computer do it is that you’re limited to using exercises that computers can grade. That puts most content creation exercises out-of-bounds.

I teach a course where students learn how to build simple Web sites. It’s a skills course. It’s an intro college course, but the principles below apply to K12, and to topics other than Web tech.

Practice is key. Students do more than 100 exercises that start easy, and get harder over time. Students get formative feedback for each solution, that is, a list of things they could improve. Students resubmit as many times as they like, until they get it right.

Almost all of the exercises involve creating content. A nav bar, a program, a slide show, etc. There is no way that student work can be graded automatically.

If you want students to get the best experience, you have to bite the bullet, and have people do the feedback (“It’s people!”).

I wrote software to make this work. The project is called CoreDogs. There are lots of cool aspects to it, but the feedback features are key.

CoreDogs books are feedback heavy. 40 students, 100 exercises each – that’s a lot of grading! I needed to make giving feedback as quick and easy as possible. The feedback system reduces the number of mouse clicks, keystrokes, and cognitive operations required of graders. I can grade a simple exercise solution in as little as 30 seconds.

The key to this is an innovation called “clickable rubrics.” Clickable rubrics also standardize things, so there can be consistency across graders. That means the grading can be outsourced.
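One plausible shape for a clickable rubric can be sketched in Python. The criteria, messages, and function names here are invented for illustration; this is a guess at the idea, not CoreDogs’s actual design:

```python
# Each criterion pairs a label with canned feedback, so a grader clicks
# rather than retypes. Contents are hypothetical examples.
RUBRIC = [
    ("Nav bar links work",      "One or more nav links are broken."),
    ("HTML validates",          "Run the page through a validator and fix the errors."),
    ("Layout matches the spec", "Compare your layout against the exercise spec."),
]

def grade(clicked_ok):
    """clicked_ok[i] is True if the grader clicked 'met' on criterion i.
    Returns (score, feedback lines); the canned text keeps feedback
    consistent across graders."""
    feedback = [fix for (_, fix), ok in zip(RUBRIC, clicked_ok) if not ok]
    return sum(clicked_ok), feedback

score, notes = grade([True, False, True])
print(score)  # 2
print(notes)  # ['Run the page through a validator and fix the errors.']
```

Because every unmet criterion maps to the same prewritten note, two graders clicking the same boxes produce identical feedback, which is what makes outsourcing the grading feasible.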

I’d like to release the software as open source, so that anyone could use it. But the s/w should really be rewritten, to make it scalable, multilingual, and usable by non-geeks. I’m applying for grants for that now.

Kieran

kieran@coredogs.com

Interesting insight into the learning experience. No wonder I found math to be confusing and difficult. I always assumed it was me . . .