Philosophy of science, science and social values, food systems. I'm a postdoc at the Rotman Institute of Philosophy, Western University, Ontario, Canada.


In this post, I’m going to discuss MOOCs, based on my recent experience in a Coursera course on Social and Economic Networks. I’m going to start with a brief explanation of my reasons for taking the course, give a quick overview of the structure of the course, then explain my criticism. I’ll end up arguing that, perhaps surprisingly, MOOCs are not effective, even for the math courses that, it would seem, they’re perfect for.

I decided to take this course for two primary reasons:

- MOOCs are the subject of a lot of discussion in higher education right now. I’m deeply skeptical of them, but so far my worries haven’t really been empirically grounded.
- I’m quite interested in the new field of network science, and this seemed like a good opportunity to learn a bit about it.

Let’s start with a brief, purely descriptive overview of the course. It was an introduction to network theory, with an emphasis on economics-style model building and analysis. (The lectures were given by Matthew Jackson, who’s in the economics department at Stanford.) While we did cover some empirical tools, and the lectures occasionally gestured in the direction of data analysis, for the most part this was essentially a math class, and the lectures relied heavily on advanced undergraduate or master’s level linear algebra and probability theory.
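To give a sense of the mathematical level, here is a minimal sketch of the kind of material involved: computing a network's eigenvector centrality by power iteration, in plain Python. (This is an illustrative sketch of a standard technique, not code or an assignment from the course.)

```python
def power_iteration(adj, steps=100):
    """Approximate eigenvector centrality: repeatedly multiply by the
    adjacency matrix and renormalize, so the vector converges to the
    dominant eigenvector (for a connected, non-bipartite network)."""
    n = len(adj)
    v = [1.0 / n] * n  # start from a uniform vector
    for _ in range(steps):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(w)  # L1 normalization keeps the entries comparable
        v = [x / norm for x in w]
    return v

# A small undirected network: a triangle on nodes 0, 1, 2,
# with node 3 attached to node 0.
adj = [
    [0, 1, 1, 1],  # node 0: linked to 1, 2, 3
    [1, 0, 1, 0],  # node 1: linked to 0, 2
    [1, 1, 0, 0],  # node 2: linked to 0, 1
    [1, 0, 0, 0],  # node 3: linked to 0 only
]
centrality = power_iteration(adj)
```

Node 0, with the most connections, ends up with the highest centrality; the pendant node 3 gets the lowest. The point is just that even a "simple" exercise like this leans on eigenvectors and convergence arguments.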

As you probably know if you’re already familiar with MOOCs, the lectures for each week were short videos, with the instructor greenscreened onto PowerPoint slides. Our videos were significantly longer than I had been led to expect, with individual videos typically 15-20 minutes long and a total of 90-120 minutes of video each week. The videos were interrupted every few minutes with a multiple-choice quiz question. The question could be skipped, and wasn’t graded. The assigned work for each week was a problem set of multiple-choice questions. There was also a final exam. “Passing” the class required earning at least 70% of the total possible credit. Note that, because I spent last week moving and then went out of town for a conference, I didn’t take the final exam; but I assume it was also multiple-choice questions. Also, since I ended up at 65% of the total possible points, I didn’t “pass” the class.

Interaction between the instructor and students was basically nonexistent; this is going to be the core of my criticism below. While there was a text-based asynchronous discussion board, I only saw the instructor post there once. There was also a Google Hangouts chat (similar to Skype) near the end of the 8-week session, in which a handful of students got to ask the instructor questions live for about half an hour. That session is available on YouTube. There was also a TA, apparently one of the instructor’s graduate students at Stanford. The TA interacted a bit more with the students on the discussion board, but seemed primarily to handle technical issues (e.g., mid-video quiz questions that had been coded incorrectly) and to write the quiz and problem set questions, along with the explanations of the correct answers. As far as I know, he never provided additional explanations of lectures or answers to questions; this was pretty much left to other students.

This course worked well for me. I found most of the material interesting, never hit a problem in the assigned work that I couldn’t puzzle out in a few minutes (note again that I didn’t take the final exam), and learned a lot. However, based in part on my own background teaching math and my interactions with the other students on the discussion boards, I think my experience was exceptional. I have a master’s degree in math and lots of practice learning new areas of math on my own. (For example, I took graduate courses in mathematical logic without ever taking undergraduate logic.) I also taught myself the programming language Python this past winter, and I learned a lot by writing Python routines to implement various ideas presented in the lectures.
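As an illustration of the sort of routine I mean (a sketch of a standard exercise in this area, not code from the course): sample an Erdős–Rényi random network G(n, p) and check that its average degree comes out near the expected value (n − 1)p.

```python
import random

def random_graph_degrees(n, p, seed=0):
    """Sample an Erdos-Renyi G(n, p) network and return each node's degree.
    Each of the n*(n-1)/2 possible edges is included independently with
    probability p."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    degrees = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                degrees[i] += 1
                degrees[j] += 1
    return degrees

degrees = random_graph_degrees(n=1000, p=0.01)
mean_degree = sum(degrees) / len(degrees)
# Theory predicts a mean degree of (n - 1) * p = 9.99;
# the sample mean should land close to that.
```

Writing and running little checks like this against the lectures' formulas is how I convinced myself I actually understood the models.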

By contrast, my fellow students had a lot of questions, and early on one of the most active discussion threads comprised complaints about the difficulty of the course. The lectures were given at a high level of abstraction, and often both the definitions of new terminology and the examples used to illustrate those ideas were shortchanged. In a few cases, upon closer examination (by the students), it wasn’t clear how the example was supposed to illustrate the definition or principle, or the example appeared to involve a typo or miscalculation. I’m going to come back to these points in a few paragraphs.

My critique of this course — and, by extension, MOOCs in general — can be summarized as follows:

- A successful math course requires communication from the students to the instructor.
- MOOCs all but close the channels for this communication.
- Hence, math MOOCs cannot be successful.

I’m going to speak specifically about math courses here. I’m doing this for two reasons. First, while my course was taught by an economist, it was basically a math course, and I think it’s safe to generalize from it to other math courses. Second, it seems that, if MOOCs are successful in any field, they would be successful in math. Humanities courses obviously require too much moderated discussion and their writing assignments can’t be graded automatically. (Despite edX’s efforts, they simply aren’t going to be able to develop a computer program that can judge how accurately a student has reconstructed Descartes’ argument for mind-body dualism, or how insightful their criticism of this argument is.) There’s no replacement for a physical lab in science courses. On the other hand, insofar as students in math classes are learning how to solve certain kinds of problems, it seems like there’s no difference between a live lecture and a MOOC. Thus my conclusion — that math MOOCs cannot be successful — is especially surprising.

I could offer some general reasons to support my first premise, i.e., reasons why *any* successful course requires student-instructor communication. But instead I’m going to give a math-specific reason.

In a live math class — as well as a sufficiently small online math class — the instructor can interact with the students regularly, and consequently can develop a good understanding of what math teachers (especially at the primary and secondary level) call the **mathematical maturity** of the students, both as a whole class and individually. Mathematical maturity includes both the mathematical knowledge that students already have — whether or not they know what eigenvectors are, say — but also their ability to think abstractly. For example, junior high algebra students think of functions as *operations that are done to numbers*. By the time they’re high school seniors, they’re starting to think of functions as *things*, namely *curves on a plane*, and they’re ready to think of derivatives and integrals as *operations that are done to these curves*. It’s a few more years before they’re ready to think of both functions and derivatives as *algebraic objects*, in the way that studying differential forms requires.

Learning to gauge mathematical maturity has been crucial in my summer work, teaching math and logic to gifted and talented teenagers at Johns Hopkins’ Center for Talented Youth. These kids are certainly bright, but generally aren’t very far beyond their age level in terms of mathematical maturity. I was initially hired to teach a class on chaos theory and fractals. This class failed pretty much completely, all three times that I taught it, because (for example) the students weren’t sufficiently mathematically mature to understand the concept of phase space, which is crucial to understanding chaos theory. By contrast, because one can teach formal logic as pushing around symbols — like high school algebra — it works beautifully with these teenagers.

**In short, any successful math class requires that the instructor accurately gauge the students’ mathematical maturity. And, obviously, this requires communication from the students to the instructor.**

For the second premise, start by considering the channels for student-instructor communication in a traditional live lecture course. In this setting, students can interrupt the lecture — they can ask for additional clarification, ask for an example (or an additional example) to illustrate the point under discussion, or point out possible mistakes. (Math teachers make mistakes regularly; it’s all too easy to copy numbers incorrectly out of one’s notes or make a mental arithmetic error.)

In any asynchronous online course — whether it’s a 30-person online section or a MOOC — the lectures will be prerecorded, and students won’t be able to interrupt the lecture for any of the things I listed in the last paragraph. But in both a live course and a smaller online course students can also talk to the instructor outside of class, whether in person or using email or an asynchronous online discussion board. This isn’t quite as good as a question asked during a live lecture — typically more students will be confused about something than just the one who asks the question — but it still creates a channel by which the instructor can gauge the students’ mathematical maturity.

A third channel is through graded work. Students in math courses hate being told to “show their work” — especially the students with the most “natural” understanding of the problems — but this often gives instructors valuable insight into how students are understanding the problems, and by extension the concepts relevant to the problems. Students who try to solve an algebra problem by “guess and check” — plugging in values until they get one that works — are probably at a lower level of mathematical maturity than students who use symbolic manipulations.
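To make that contrast concrete, here are the two strategies applied to a toy equation, 3x + 5 = 20 (my own illustrative example, not one from the course):

```python
def guess_and_check(lo=0, hi=100):
    """The lower-maturity strategy: plug in integer values until one works."""
    for x in range(lo, hi + 1):
        if 3 * x + 5 == 20:
            return x
    return None  # no integer solution found in the search range

def solve_symbolically():
    """The higher-maturity strategy: undo each operation in reverse order --
    subtract 5 from both sides, then divide by 3."""
    return (20 - 5) / 3
```

Both strategies find x = 5 here, but guess-and-check only works because the solution happens to be a small integer — and that limitation is exactly the kind of thing an instructor learns from seeing the work, not just the answer.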

So, we have three main channels for student-instructor communication: questions during lecture, questions outside of lecture, and the work done on graded assignments. MOOCs, as per my second premise, close off all three of these channels.

It’s clear that MOOCs close off the first channel: the lectures are prerecorded videos. The students can pause and rewind, but can’t ask a question or point out a problem. As I mentioned above, it seemed to me and a few other students that there were a few typos and bad examples in the lectures, and these were never corrected or given more adequate explanations. Because this channel has, in some sense, a significantly higher bitrate than the other two, and both MOOCs and smaller asynchronous online courses close it off, it seems to me that this is a devastating problem for both kinds of online courses.

I’ll come back to the second channel, questions asked outside of class time. The third channel is through graded work. At least in my course, graded work consisted exclusively of answering multiple-choice questions. There was no way to “show your work,” and indeed it seems to me that computer evaluation of student efforts to solve math problems would be about as difficult to implement as computer evaluation of written assignments for a humanities MOOC. Furthermore, the logistics of a MOOC require several intermediaries between the instructor and the students’ submitted work, including the automated grading system and the TA. Perhaps student submissions could be randomly sampled, but of course it would take an enormous number of person-hours to review a genuinely representative sample. Hence, MOOCs close off this third channel even more severely than smaller online asynchronous courses, at least in principle.

Finally we have questions asked outside of class time, which in the case of a MOOC means the discussion board. As with graded work, the logistics of MOOCs prevent the instructor from being heavily involved in the boards: they simply can’t respond to every question from the thousands of students in the course. Even a handful of TAs couldn’t keep up if the course has more than a couple hundred students; and then there’s the problem of effectively feeding these questions back into the lectures.

Note that a smaller online asynchronous course does not close off this channel. In a 30-person course, the instructor can still take questions from the students on the discussion board. For this reason, while I think that MOOCs as such cannot be successful, smaller online courses might still be. As I said above, I think this channel has a much lower bitrate than live questions, and so I remain skeptical about smaller online asynchronous courses. But I’m willing to give them the benefit of the doubt for the purposes of this post.

**So, all together, MOOCs close the three channels of student-instructor communication found in a traditional live math course.** They do not, as far as I can tell, open any alternative channels. And hence, combined with the first premise, it follows that MOOCs cannot be successful. MOOCs prevent instructors in math courses from accurately gauging the mathematical maturity of the students, and hence prevent these courses from succeeding.

It does not seem to me that there is any way for MOOCs to avoid this problem. If it were possible for them to create a new channel for student-instructor communication, then perhaps they could. But, by definition, we’re talking about a student-teacher ratio in the thousands. There’s no way for the instructor to keep up with the deluge of student questions.
