By Craig Forcese

Full Professor
Faculty of Law

Email: cforcese[at]uottawa.ca

Twitter: @cforcese

 

Wednesday, May 8, 2013

Flipping the Classroom: The Results

 

As I have noted in prior posts, a key question when I started my "flipped" classroom experiment was whether the new teaching methodology could close the "gap" between raw performance on the exam and the upward adjustment of grades required to meet faculty course average guidelines. Put another way, would a "flipped" format that shifted passive learning to podcasts and re-tasked classroom time as practical problem-solving "practices" change outcomes on a classic IRAC exam?

Faculty Council met this morning and approved grades.  I am now, therefore, in a position to comment on this question. A caveat at the outset: this is far from a scientific study.  Nevertheless, I believe it has some qualitative value.

I have compared exam performance in Winter 2013 with the last time I taught my administrative law course using conventional lecturing, in Winter 2011. (The Winter 2011 results were typical of the 8 or 9 times I have taught this course.)

Put simply, the difference in outcome was marked. They were different exams, of course, which makes any comparison imperfect. But I certainly didn't mark any more leniently this year than before, and both were classic IRAC exams (albeit of different duration).

It is clear to me that whatever other utility my conventional approach had, my new approach produces different and perhaps more representative results. This may be particularly true since I suspect there was a selection effect favouring comparatively weaker exam writers — students most fearful of exams may have gravitated to my class (in which in-class work counted for 30% of the grade).

However, even when the mathematical effect of this 30% in-class component is removed from the final grade, relative exam performance improved markedly in 2013 on average. The adjustment I made to the raw final grade to bring the average within the Faculty's B guidelines was one of the smallest (if not the smallest) I have ever made. Indeed, the raw average was five percentage points higher in 2013, coming in at 70%.
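For readers who want the arithmetic made concrete, here is a minimal sketch of how an in-class component can be backed out of a blended final grade. The 70/30 weighting comes from the post itself; the function names and the sample marks are purely illustrative, not actual student data.

```python
# Illustrative sketch only: the 70/30 split is from the post,
# but the sample marks below are hypothetical.

EXAM_WEIGHT = 0.70
IN_CLASS_WEIGHT = 0.30

def final_grade(exam_mark: float, in_class_mark: float) -> float:
    """Blend exam and in-class marks under the 70/30 weighting."""
    return EXAM_WEIGHT * exam_mark + IN_CLASS_WEIGHT * in_class_mark

def exam_only(final_mark: float, in_class_mark: float) -> float:
    """Strip out the in-class component to recover the raw exam mark."""
    return (final_mark - IN_CLASS_WEIGHT * in_class_mark) / EXAM_WEIGHT

# A hypothetical student: 65 on the exam, 80 on in-class work.
blended = final_grade(65, 80)       # 0.7 * 65 + 0.3 * 80 = 69.5
recovered = exam_only(blended, 80)  # (69.5 - 24) / 0.7 = 65.0
assert round(recovered, 1) == 65.0
```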

More notably, the mark distribution was radically different.  In 2011, a solid majority of the class -- 60% -- scored sub-B on the raw exam.  This number fell to 40% when the marks were adjusted upwards to meet the Faculty's B average guidelines. 

In 2013, 43% of students were sub-B on the raw exam. Following adjustment and inclusion of the in-class component (where many students outperformed their exam result), only 25% of students were sub-B.

Mark distribution has been of great concern to me given that several Ontario (and other) law schools now place a “quota” on sub-B (and above-B) grades. We do not -- our guidelines prescribe an average GPA but are silent on the mark distribution producing this average. It is possible to meet these guidelines with a bevy of very strong grades and a long tally of sub-B grades -- that has been my typical grade distribution, by happenstance rather than design. This year, as a natural result of the new format, my grade distribution fell into a pattern typical under these quota systems, with a much reduced number of sub-B grades (and, interestingly, a less top-heavy cluster of extremely high grades, both on the raw exam and in the final grades).

My take-away from this is that my conventional teaching tended to favour strong exam-writing students, clustered at the top range, and produced a long tail of other students (struggling with issue spotting in particular). My GPAs always fell within guidelines (albeit always after an upwards mark adjustment), but the spread was wide.

This year, the grades still fall within guidelines, but the spread is much closer to a smooth bell curve. This is not to say that every result was stellar. There were still failures on the exam (although not in the final grade, once the 30% in-class component was factored in). But generally, the flipped class seemed to help weaker exam-writing students disproportionately. I take some comfort from this, as these same students often produced excellent in-class work.

All of this is to say that my grades this year are consistent with the following hypothesis: they may be a closer (albeit still imperfect) representation of overall student aptitude, as opposed to a potentially idiosyncratic talent for “hacking” law school and doing well on IRAC exams administered after a passive-learning class.

Time will tell – I will likely be participating in a US project that aims to measure the relative impact of active versus passive learning strategies in law school. But so far, I am pleased by the outcome of my experiment, and will expand the approach to my other classes.