So some people said some stuff about a thing you taught
For the past few weeks, I've been reading and ruminating on the course evaluations we got from this round of our biocomputing working group. The previous post introducing the course is here, and my thoughts on writing a good evaluation are here.
We've changed the course a lot since last year based on evaluations. The big change is that we added homework, which many students requested. What we wanted to learn was as follows:
- What do students enjoy about the class?
- What do they not like?
- What can we do to support learning?
So let's take a look at the results.
We moved both too fast and too slow
There's wide variance in whether students felt the course moved too fast: novices think it did, while more experienced people think it didn't. Hardly a surprise.
What this suggests: There is some demand in the UT community for intermediate Python. I'm not super shocked at this, as there's no real formal curriculum for learning specific languages at UT (beyond a CS course that is literally impossible for non-CS majors to enroll in). The CCBB is ramping up its training programs, and there might be some room there for development of courses that assume a higher baseline of knowledge and cover more breadth.
Homework helps. So do code fragments.
Students who looked at code outside of class, did homework, and reviewed notes are more confident in their ability to complete programming tasks on their own or with a little Google. Hardly a revelation.
What this suggests: We currently exist outside of any sort of incentive structure. We can't give credit, nor can we levy consequences for not doing your homework. If we're going to continue to operate outside the normal academic structure, we need to find some way to motivate students to think about this material outside of class. Perhaps some sort of certificate?
Students who need to use programming for their work learned more stuff.
It's very clear that "I need to graduate someday" or "I don't wanna post-doc forever" are great motivators. But early-stage graduate students (who have the time to be in class) don't have the data to apply their learning off the bat.
What this suggests: We considered making it mandatory to have a data set that you're working with in order to join our class/support group. But we have no way of enforcing this. I think the thing to do is to make a bank of example data sets, and assign people without data a buddy and a data set. Part of the homework could be to apply some aspect of the class material to that example data.
People love Python, Pandas and in-class exercises.
What this suggests: #Pythonista4Life. Also, perhaps we should switch to a two-hour session. Right now, we do a 1.5-hour session: we talk about some topic for 30 to 45 minutes, and then take an exercise application break. Then we talk some more. Perhaps we could switch to a format with two half-hour talk sessions and two half-hour exercise breaks.
We were mostly good on jargon, except when we weren't.
Most students said that we didn't use jargon and that they didn't feel lost very often. A couple of students have a more expansive definition of jargon, including concepts like loops.
What this suggests: I'm not really sure. There are some words you just have to know if you want to program. We generally provide some kind of cheat sheet with lessons. There's a hard floor on how simple we can make things, and I think we've about hit it.
There's the five-cent tour of our survey results, folks. Overall, we did well this year, particularly given some tough circumstances toward the start (two lectures lost to snow days). I feel positive about our evaluation format as well: we got the information we set out to get, and now we know what to work on for next year. I'll definitely be using these evaluation forms again. I don't want to share the results publicly, since some are signed, but ping me if you want to talk in more depth about evaluations and results.