Papers as puzzles

October 02, 2012

Last week I served up a puzzle at the weekly section for the class I’m teaching this semester at Princeton, Integrated Sciences 235, the first half of the sophomore biology course. By puzzle I don’t mean fancy-schmancy like FoldIt, the popular crowdsourced protein-folding game, or contrived like a problem from that moldering problem set archive that has been handed down from teaching assistant to teaching assistant. Instead, I’m talking old-school deductive reasoning: take data from a real-life paper and synthesize a model.

 

Science instructors, it’s simple. The paper-as-puzzle approach involves only three steps and 1–2 hours of lead time. First, select a concise, seminal, and accessible paper from a domain of expertise, in my case genetics, and understand it backwards and forwards. Second, copy and paste all of the figures and tables from said paper into PowerPoint and present them to your class, datum by datum (approximately 30 minutes). Third, have the students form breakout groups in which they deliberate (for another 30 minutes) and formulate a three-minute summation that one volunteer from each team presents at the blackboard in front of the entire class.

 

Think journal club, but more didactic, more interactive, and without the participants getting the benefit of reading the paper beforehand! Now I admit that there’s nothing revolutionary about the idea of incorporating papers into science pedagogy. My first-year graduate seminar at Harvard – MCB100 – consisted of reading assigned papers on our own and discussing them with a professor in class. There must be countless courses just like it in every biology department.

 

The novelty comes from the timing. Based on my own early exposure to the primary scientific literature when I was still in high school, I believe that it’s never too early for budding scientists to start interpreting experiments. But later, when I was an undergrad at Columbia, my sophomore biology recitations consisted of a review of that week’s lecture material by the teaching assistant, and an anemic group problem-solving session.

 

Once upon a time those problems were derived from honest-to-God papers. However, years of editing and drift have concealed their origins and diluted their complexity. It’s to be expected: grading problem sets falls to overworked and underpaid graduate students, who want to simplify and standardize evaluation as much as possible, and it takes time and commitment to revitalize a problem set archive. Has this arrangement changed much in 10 years? I doubt it…

 

So, the paper I selected as my puzzle was originally published in Science in 1991 by Prof. Michael Hall’s lab and entitled “Targets for cell cycle arrest by the immunosuppressant rapamycin in yeast.” This paper describes the isolation of yeast mutants that are resistant to the natural product and therapeutic drug rapamycin. From the set of genetic observations in the paper, one can reasonably propose a physical model of how rapamycin works, i.e., what its targets are and how it interacts with them.

 

I chose the rapamycin case study because I worked extensively with rapamycin in graduate school, and so I understand the mechanism well. (They say you should teach what you know). Also, I did a trial run with this paper last year, and I received some of the best feedback from students afterwards. Although none of the students this year or last year solved the puzzle and explained how rapamycin works, some teams got really close.

 

I’ll go step by step through my presentation of the data, adding commentary and annotation along the way. I opened with this figure, featuring yeast cells treated with rapamycin (A) vs. untreated yeast cells (B):

 

 

The students should discern, or be alerted to, two observations. First, the rapamycin-treated cells aren’t budding. Second, the enlarged rapamycin-treated cells appear to be arrested at a pre-mitotic step in the cell cycle, either G1 or G2. It’s not evident from the data, so I told the students that the effects of rapamycin are reversible.

 

Next I showed them this table:

 

 

These data depict the growth responses of rapamycin-resistant mutants. The drug resistance selection yielded three complementation groups: fpr1, tor1 and tor2. I explained that the fpr1 mutants are recessive, while tor1 and tor2 mutants are dominant. Most students picked up on the fact that amino acid residue 65 was targeted by multiple independent mutations. Some students also noticed that the nonsense allele (fpr1-12) exhibited stronger resistance than the missense mutants.

 

Next up was this figure:

 

 

In this experiment, the wildtype FPR1 strain was compared to the fpr1 whole-gene deletion mutant. The students know that the fpr1 mutants are recessive for rapamycin resistance, yet here they learn that the FPR1 gene is completely dispensable for yeast cell growth. They can only reconcile these observations by concluding that rapamycin requires the FPR1 protein for its growth inhibitory effects, and that loss of FPR1 renders rapamycin harmless. Consistent with that interpretation is the fact that loss of FPR1 protects yeast cells across a 1000-fold concentration range of rapamycin.

 

Here comes an essential clue to the puzzle, which most of the students failed to appreciate fully:

 

 

FK506 is a structural sibling of rapamycin; in fact, one half of each of the two molecules is completely identical. From the plates above, the students should glean that FK506 and rapamycin suppress each other. At this point many of the students realize that FK506 and rapamycin both bind the same target, namely FPR1. TRP1 is a bit of obfuscation, to borrow from the Car Talk guys. Loss of TRP1, a gene required for tryptophan biosynthesis, sensitizes yeast cells to FK506 but has nothing to do with rapamycin’s mechanism of action.

 

For good measure, I showed them this sequence alignment data for FPR1:

 

 

What’s clear is that residue 65, which the students already knew was the frequent target of resistance-conferring mutations, is evolutionarily conserved and therefore functionally important. Most students appreciated that this suggests residue 65 forms part of the binding site for rapamycin on FPR1.

 

However, in the end students seemed to forget about the other rapamycin-resistant mutants, tor1 and tor2, which display dominant resistance. In fairness, the data in the paper revolved mostly around FPR1, but the puzzle cannot be solved without the TOR proteins and an explanation of the dominant resistance phenotype, which I’ll leave to you, astute reader.

 

The solution to the puzzle is that rapamycin (lavender) binds to FPR1 (blue), and this drug-protein co-complex then binds to TOR (red), which is a kinase required for passage through the G1 cell cycle checkpoint:

 

  • http://davebridges.github.com/ Dave Bridges

    Excellent idea and an even better paper. How do you evaluate understanding of the materials (especially for the second groups)? Are students able to check on the current state of the research within the classes, and would that help or hinder the discussion?

    • http://twitter.com/eperlste Ethan Perlstein

      Thanks! Evaluation of 16 students by one person in real time is hard. What I look for are patterns of confusion. Has the same point been misunderstood by more than one student? Does a student’s misunderstanding appear to originate in a deeper flaw of logic or lacuna?

      Also, I can evaluate whether the genetic concepts are sinking in by walking around the room and listening in on the deliberations. If I overhear something incorrect or if students have a focused question, I chime in.

      I like the idea of follow-up. I didn’t stress it to my class at the end; I probably should have pointed them to follow-up references. I think it’s something you can provide self-motivated students, but I found that 90 minutes was just enough time for data presentation, breakout groups and blackboard summations.

      • http://davebridges.github.com/ Dave Bridges

        I think a lot can be learned from systematically evaluating new pedagogical techniques.