Monday, March 28, 2011
I let the students in my genetics pilot class choose a topic for me to teach about, and they chose The Genetics of Behaviour. The class comes up next Monday (the second-last class of the term), so I need to get to work on it. Of course I know nothing about this, but that's not a big problem - I can certainly learn.
The problem is that all I've found is various individual studies that connect particular behaviours with particular genotypes. All very well, each reasonably good science about an interesting behaviour, but essentially just a series of anecdotes. What I'm hoping to come up with is a lesson, or a take-away message, or some unifying principle.
What form might such a principle take? That behaviour is a biochemical phenomenon? That it isn't? That we still have free will? That we don't? That nature and nurture interact to determine behaviour, as they do everything else?
Maybe I should start the class by raising the big questions (after I start preparing for the class by finding out what these are). What is consciousness? No, too big. Do we have 'free will'? Also too big. In fact, I think both of these questions are better handled by asking "What do we want the words 'consciousness' or 'free will' to mean?" Once we've settled on clear definitions we'll be in a better position to answer questions about them.
So I won't start with such big questions. How about 'What can genetics teach us about why we behave the way we do?' A bit vague. Should I pick one behaviour that's been well studied and report on the findings? Not if it's a boring behaviour like the ability of rats to remember the location of an underwater platform. What about all those Drosophila learning mutants with the vegetable names (rutabaga, cabbage, turnip) - and dunce?
Sexual behaviour is much more interesting, especially to college kids, and it's an area where natural and sexual selection have probably acted very strongly, but in complex ways. Maybe the genetics of sexual orientation - review the studies and tell them whether there is any good data at all? Wikipedia looks like a good starting point, but it tells me that there's very little good data, certainly not enough to base a whole lecture on.
What about human pheromones? There's the whole question of whether we pick our mates partly because they have different HLA alleles than we do - all those T-shirt-sniffing experiments are thought to be detecting these genetic differences. And there's that oxytocin study, and lots of sleazy marketing. And then I could end with the new Drosophila study showing that the smells used for mate choice can come from bacteria, which in turn come from the food source...
Tuesday, March 15, 2011
The 'calibration' part works!
My students have finished their Calibrated Peer Review assignment, and now I'm discovering the wealth of instructor resources for interpreting the outcomes and correcting discrepancies.
First is a Student Results page, listing each student's total score on the assignment (text plus reviewing activities), the points earned by the text they submitted, and their Reviewer Competency Index. The latter is a measure of how well they reviewed the three calibration texts, and is used to modulate the ratings they assigned to the three student submissions they reviewed. From this screen you can click on any student's name to see all the details of the scores they assigned and received.
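I don't know exactly how CPR combines the ratings, but conceptually it must be something like a competency-weighted average, in which ratings from reviewers with low Reviewer Competency Indexes count for less. Here's a minimal sketch of that idea in Python (my own guess at the arithmetic, not CPR's actual formula):

    def weighted_report_score(reviews):
        # reviews is a list of (rating, reviewer_competency_index) pairs;
        # ratings from more competent reviewers carry more weight
        total_weight = sum(rci for _, rci in reviews)
        if total_weight == 0:
            return None  # no usable reviews to base a score on
        return sum(rating * rci for rating, rci in reviews) / total_weight

    # e.g. two good reviews plus one harsh review from a low-competency reviewer
    print(weighted_report_score([(9, 5.0), (8, 4.5), (2, 0.5)]))  # 8.2

With weights like these, the harsh rating barely moves the final score, which matches the behaviour I describe below for the first 'problem' report.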
Next is a Problems List page. This again lists the students, this time flagging in red any problems the system noted with their assignment. Students who failed to complete the calibrations on time are flagged and their Reviewer Competency Index is zero. Reports that were reviewed by only a single student, or not reviewed at all, are flagged. So are reports that received discordant reviews (discrepancies exceeding a pre-set tolerance), and reports that were reviewed by students with very low Reviewer Competency Indexes.
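Again, I don't know the exact rules behind the red flags, but they seem to boil down to a few simple checks like the ones sketched here (the tolerance and cutoff values are placeholders of my own, not CPR's):

    DISCREPANCY_TOLERANCE = 3   # stand-in for the pre-set tolerance
    LOW_RCI_CUTOFF = 1.0        # stand-in for a 'very low' competency index

    def problem_flags(ratings, reviewer_rcis):
        # ratings: the scores a report received; reviewer_rcis: its reviewers' competency indexes
        flags = []
        if len(ratings) <= 1:
            flags.append("reviewed by only one student, or not at all")
        elif max(ratings) - min(ratings) > DISCREPANCY_TOLERANCE:
            flags.append("discordant reviews")
        if any(rci < LOW_RCI_CUTOFF for rci in reviewer_rcis):
            flags.append("reviewed by very-low-competency reviewers")
        return flags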
Only two reports had seriously discordant reviews, and rather than being evidence of problems with CPR, both of these demonstrate how well the CPR system works. The first was a very good report that received one very low score because the reviewer had given too much weight to very minor flaws. But because this reviewer had also performed badly on the calibration reviews, they had a low Reviewer Competency Index and this poor review didn't drag down the good report's score. The second problem report also had one very low score and two good scores. But this time the low score was from a very competent reviewer, and when I read the report I discovered strong evidence of plagiarism, which would certainly justify that low score. (The other two reviewers evidently assumed that the professional writing was the student's own work.)
The third set of resources is on a Tools page. Here you can change deadlines for individual students to allow them to submit after the original deadline has passed (I've done this for one student), change students' ratings and scores, and have the whole assignment re-marked to incorporate adjustments you've made to a Reviewer Competency Index. You can also download the complete texts of submissions and evaluations. You can even access the system as if you were any one of the students (this gets you a caution that any changes you make will appear to have been made by the student).
Overall I'm very pleased with the CPR system; I've encountered only a few minor problems. One is that many of my students earned low Reviewer Competency Indexes - I suspect because so many of my calibration questions asked about specific details of the submissions. Several students had minor problems submitting the various components by their deadlines, I think mostly because they overlooked a final step needed to complete the submission. The text-only interface didn't seem to cause many problems - some students had trouble with the word limit (because HTML tags are counted as words), but only one student's submission had text-formatting errors.
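For anyone curious about the word-limit quirk: the counter apparently treats each raw HTML tag as a word, so heavily marked-up text 'reads' longer than it really is. A rough illustration of what I think is happening (my guess, not CPR's actual counting code):

    import re

    text = ("The <i>white</i> mutation gives <b>white eyes</b> in "
            "<i>Drosophila</i> males.<br /><br />It is X-linked and recessive.")
    tags = re.findall(r"<[^>]+>", text)            # 8 tags, each apparently counted as a word
    words = re.sub(r"<[^>]+>", " ", text).split()  # the words the student actually wrote
    print(len(tags), "tags;", len(words), "real words")  # 8 tags; 14 real words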
Now it's time to start spreading the word around campus about this new resource. Hopefully there'll be enough interest that UBC will decide to purchase it once our free trial ends.
Saturday, March 12, 2011
Peerwise let my students write their own midterm problems!
(Of course they didn't know it at the time!)
The students in my introductory genetics pilot class have had weekly Peerwise assignments all term. In alternate weeks they had to either create at least one multiple-choice genetics problem suitable for an open-book exam (i.e. not based on memorization) or answer and critique at least two problems previously created by other students.
I had used Peerwise a couple of years ago in a first-year class, where each student had to create only one question and critique four. I found their questions to be full of confusion, poorly written and lacking important information, but the critiques were quite good.
This time, my students have told me that they don't mind answering the problems other students have posed but find creating their own quite difficult. The first problems I looked at hadn't been very good, but last week I started going through the more recent ones, looking for ideas I might use in creating problems for the upcoming midterm, and I was pleasantly surprised by how good many of them were.
I was so impressed by the students' questions that I decided to use them for the midterm rather than writing my own. I downloaded a range of questions to consider - some were very good but much too difficult for a 45-minute midterm. I had thought that the good questions might all have been written by one or a few students, but when I checked the usernames I found that the 17 questions had 16 different authors; this means that most, and maybe all, of the students are creating good problems!
I used 14 questions for the exam - most of them I was able to leave unchanged, but some required minor editing for clarity. The students must have recognized their own questions, but I haven't seen anything on the course discussion board to suggest that they've realized that they collectively created all the questions on the exam. I'll tell them on Monday.