LCM-FFPE


Adam Passman

Apr 1, 2021, 9:01:46 AM
to Smart-3SEQ
Hi and thank you for setting up this group - brilliant.

I've got big plans for SMART-3SEQ if I can get more consistent results. Hopefully someone can help me out. Sorry if this is long-winded, but I would really love to get this working well, so I've tried to give as much info as possible to help you help me.

I'm doing LCM-SMART-3SEQ for FFPE from 150-500 cells. I typically get a library at the end that looks like the image attached. I've only ever been doing "option 1 - individual library", not pre-SPRI pooling. I'm usually in the region of 18-22 cycles (I add an extra 1 or 2 if I'm LCMing stroma) and this usually gets me to 5-20 nM (not including the PCR dimers, which I get to varying degrees). So typically my peak insert is 30-40 bp. Seems quite small! My mappable reads seem to be equivalent to what you publish, but my duplication level is more similar to what you get from single cells! I had a first run where I detected 2-8000 genes (we've been using a cutoff of UMI > 4 as a gene detection threshold) and that was great, but all recent runs have been closer to 0-500 and it's driving me a bit crazy trying to figure out what the issue is. My current thinking is that my RNA is getting too degraded at some stage.
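To be concrete, the "UMI > 4" detection count amounts to counting genes with at least 5 deduplicated UMIs; a sketch in Python on a hypothetical per-gene vector of deduplicated UMI counts:

```python
def genes_detected(umi_counts, min_umis=5):
    """Count genes passing a UMI threshold. 'UMI > 4' as used above
    means at least 5 deduplicated UMIs per gene."""
    return sum(1 for count in umi_counts if count >= min_umis)

# hypothetical per-gene UMI counts for one sample
example = [0, 1, 4, 5, 12, 100]
n_detected = genes_detected(example)  # genes with >= 5 UMIs
```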

1) What would you say are the most crucial parts of getting good FFPE LCM data from 150-500 cells? 

2) Perhaps related - are slides fairly stable if cut but not yet dewaxed and kept in the fridge for long periods, or do I need to always rush from getting slides cut to staining to LCM to prep? Is the staining stage / exposure to aqueous solutions the most detrimental to RNA integrity?

3) Your supplementary protocol suggests a bead ratio of 0.7 but I just completely lose my whole library if I do this. I can't really go far below 0.9 for my libraries. This seems to be optimal for me to get rid of as much PCR dimer as I can (quite close to my library peak, as you can see in the attached) whilst maintaining enough library.

4) I noticed in your oligonucleotide dilution worksheet, and elsewhere in this group, that the P5 universal + i7 index is now legacy. Do you have a document you can share showing your new i5 index + P7 universal setup? I have actually been using 8 nt i7 indexes for a run of 50 samples and have been sequencing on a NextSeq.

I sincerely appreciate any help you can suggest and I'm happy to provide more information if needed.

Cheers,

Adam

 
1.jpg

Joe Foley

Apr 1, 2021, 3:56:16 PM
to smart...@googlegroups.com
I'm usually in the region of 18-22 cycles (I add on an extra 1 or 2 if I'm LCMing stroma) and this usually gets me to 5-20nM (not including the PCR dimers which I get varying degrees of). So typically my peak insert is 30-40bp. Seems quite small!
This sounds normal for FFPE LCM, though the library in the electropherogram looks slightly overamplified.


My mappable reads seem to be equivalent to what you publish, but my duplication level is more similar to what you get from single cells! I had a first run where I detected 2-8000 genes (we've been using a cutoff of UMI > 4 as a gene detection threshold) and that was great, but all recent runs have been closer to 0-500 and it's driving me a bit crazy trying to figure out what the issue is.
The number of genes detected is not a very meaningful measure, because it's a function of so many other things besides the detection threshold: cell type, cell state at the time of measurement/fixation, sequencing depth, proportion of alignable reads, etc. If you need a QC metric for this I would recommend something more robust like the estimated library size.
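A minimal sketch of that estimator, using the standard Lander-Waterman model (the same idea behind Picard's ESTIMATED_LIBRARY_SIZE metric; the function name and bisection approach here are my own):

```python
import math

def estimate_library_size(total_reads, unique_reads):
    """Estimate the number of distinct molecules in the library from the
    duplication level: solve unique = X * (1 - exp(-total / X)) for X,
    the library size, by bisection."""
    if not 0 < unique_reads < total_reads:
        raise ValueError("need 0 < unique reads < total reads")
    f = lambda x: x * (1.0 - math.exp(-total_reads / x)) - unique_reads
    lo, hi = float(unique_reads), 2.0 * unique_reads
    while f(hi) < 0:          # expand upper bound until the root is bracketed
        hi *= 2.0
    for _ in range(100):      # bisection; f(lo) < 0 <= f(hi) throughout
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Unlike a genes-detected count, this rises smoothly with true library complexity and is much less sensitive to sequencing depth.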


1) What would you say are the most crucial parts of getting good FFPE LCM data from 150-500 cells? 
Well, the simplest answer is to aim closer to 500 cells or even more. Otherwise, check the caps under the microscope afterward to ensure you're getting complete lysis; some tissues are tougher than others. If possible, design your experiment with plenty of replicates so it will have enough sample size even if you lose a few.


2) Perhaps related - are slides fairly stable if cut, but not dewaxed yet and kept in the fridge for long periods or do I need to always rush from getting slides cut to staining to LCM to prep. Is the staining stage/ exposure to aqueous solutions the most detrimental to the RNA integrity?
We have found that sections of FFPE tissue produce similar libraries after 1.5 years of storage; in an abundance of caution we store ours at room temp in a nitrogen tank, but it's unclear whether that's necessary. However, we always do our dissections immediately after staining, because things are probably more risky once the tissue gets wet. We put the LCM caps on dry ice immediately after dissection and store them at -80 C until library prep, though again that may be overcautious.


3) Your supplementary protocol suggests a bead ratio of 0.7 but I just completely lose my whole library if I do this. I can't really go far below 0.9 for my libraries. This seems to be the optimal for me to get rid of as much PCR dimer (quite close to my library peak as you can see in the attached) as I can, whilst maintaining enough library.  
That's strange but from the electropherogram it looks like your adjusted protocol is working. Depending on where you get your bead mix it could just be batch-to-batch variation, e.g. Beckman Coulter's SPRIselect is quality-controlled for size selection but AMPure XP is not.


4) I noticed in your oligonucleotide dilution worksheet, and elsewhere in this group, that the P5 universal + i7 index is now legacy. Do you have a document you can share showing your new i5 index + P7 universal setup? I have actually been using 8 nt i7 indexes for a run of 50 samples and have been sequencing on a NextSeq.
Use the universal P7 and the UDI P5's from this post: https://groups.google.com/g/smart-3seq/c/gtwfYxNJQKw/m/l0BYI1KQAgAJ
--
You received this message because you are subscribed to the Google Groups "Smart-3SEQ" group.
To unsubscribe from this group and stop receiving emails from it, send an email to smart-3seq+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/smart-3seq/8c500b0f-fc52-40c4-9ade-67da30fe4b77n%40googlegroups.com.


jm googlecalendar

Feb 28, 2022, 2:25:51 PM
to Smart-3SEQ
As posted previously, we have determined experimentally that we were getting what we considered to be good RNA yields (~1 ng per sample) from LCM of FFPE tissues (https://groups.google.com/g/smart-3seq/c/g6CVddnvCUg).

So, aiming to do a preliminary MiSeq run, we collected a dozen similar samples, amplified the library by PCR as per the equation provided for SMART-3Seq (adding two cycles for FFPE -- 16 cycles) and ran them through the pre-SPRI pooling protocol using a bead ratio of 0.7X. Unfortunately, we saw nothing on the Tapestation electropherogram for this run, so we repeated the experiment using 21 cycles, which produced a measurable amount of cDNA with the expected distribution of cDNA fragments (Fig 1).
Test2 21 PCR cycles.JPG

We noted, however, that the calculated concentration of cDNA seemed rather low (for peaks between 200-700 bp, 6.3 nM), and a Kapa qPCR test (using an average size of 281 bp) reported a concentration of only 0.5 nM.

Worrying that we may have thrown the baby out with the bathwater, we analysed the (fortunately) saved SPRI bead washings. A Nanodrop measurement revealed a total of nearly 900 ng of cDNA in the bead washings, suggesting quite a lot of the DNA either didn't stick or was washed off the SPRI beads.

After re-reading this thread, which suggested others had similar problems when using bead ratios <0.9X, we undertook an experiment to verify this with the AMPure XP SPRI beads, using a commercial 50 bp DNA ladder (Thermo Scientific GeneRuler 50 bp ladder #SM0373) and various bead ratios (0.8X, 0.9X, and 1.1X), none of which were visible on the Tapestation (Figure 2).
Ladder Test1 AMPure XP BeadRatios 0.8,0.9,1.1X.JPG
This annoying and difficult-to-explain result (noting that there are some lane dye markers in this DNA ladder) led us to repeat with SPRIselect beads (from the same company) at 0.8X and 1.1X bead ratios. A _very_ faint trace of lines corresponding to 50 bp, 300 bp, and 600 bp can be seen (Fig 3).
SPRIselect Bead ratios 0.8,1.1X.JPG


Our approach for testing DNA retention on the beads may, somehow, be flawed, but any advice on how to gain confidence in the SPRI purification of these library fragments would be appreciated. I still have a strong sense we are getting decent RNA from our LCM samples and (based on Tapestation) getting the quality, i.e., the size distribution, expected, yet we are somehow losing library (based on Nanodrop) during the SPRI purification step(s). We thought we had done all the hard parts, but are feeling a bit stuck right now about how best to proceed. Any advice would be valued.

John Matyas
University of Calgary
Faculty of Veterinary Medicine.

Joe Foley

Mar 1, 2022, 2:51:52 PM
to smart...@googlegroups.com
Interesting experiment but I'm not sure the results are easy to interpret. UV spectrophotometry (NanoDrop) doesn't directly tell you that you've thrown away good sequenceable dsDNA molecules; it would also pick up leftover primers and dNTPs, in addition to unwanted but sequenceable dsDNA byproducts. Those kinds of things would have different 260/280 ratios but it's hard to say what to expect when the solution contains so many different species of molecules.

As for SPRI calibration, you really have to do it in the same conditions as the reaction you're cleaning up, rather than a nice clean ladder in low-ionic storage buffer. There are a lot of salts in the final reaction mix that change the dynamics of the SPRI phenomenon. We prefer to calibrate it by running entire parallel library preps with the same input (reference RNA), and then look at the final products on a MiSeq so the QC metrics we're comparing are the ones that directly matter. When we do that, we find ratios above 0.7X increase the amount of unusable byproduct reads (not just adapter dimers but also cDNAs too short to align uniquely) without gaining any new usable reads - it's all bathwater, no baby.

However, I'm a little confused about your PCR cycles vs. input amount in the first place. Do you think the amount of total RNA going into the reaction is 1 ng? If so, then our calibration on human reference RNA indicates 19 cycles for that, and allowing for lower efficiency from FFPE material, 21 cycles seems reasonable with a yield at the low end of what's usable. So even though I don't think SPRI is your problem here, it's not clear to me how much of a problem there actually is to solve.
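For what it's worth, a back-of-envelope version of that cycle arithmetic, assuming one extra cycle per halving of input and anchored at the 1 ng -> 19 cycles calibration point mentioned above, plus two cycles for FFPE (this is my own heuristic, not the official protocol equation):

```python
import math

def suggested_pcr_cycles(input_ng, ref_ng=1.0, ref_cycles=19,
                         ffpe=True, ffpe_extra=2):
    """Rough PCR cycle count: each halving of input RNA needs ~1 extra
    cycle. Anchored at 1 ng intact human reference RNA -> 19 cycles (a
    calibration point taken from the discussion above), with a couple of
    extra cycles to compensate for lower efficiency from FFPE material."""
    cycles = ref_cycles - math.log2(input_ng / ref_ng)
    if ffpe:
        cycles += ffpe_extra
    return round(cycles)

# 1 ng of FFPE-derived RNA -> 21 cycles, consistent with the thread above
cycles_for_1ng_ffpe = suggested_pcr_cycles(1.0)
```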

jm googlecalendar

Mar 2, 2022, 3:46:05 PM
to Smart-3SEQ
Interesting experiment but I'm not sure the results are easy to interpret. UV spectrophotometry (NanoDrop) doesn't directly tell you that you've thrown away good sequenceable dsDNA molecules; it would also pick up leftover primers and dNTPs, in addition to unwanted but sequenceable dsDNA byproducts. Those kinds of things would have different 260/280 ratios but it's hard to say what to expect when the solution contains so many different species of molecules.

While we recognized that UV spectrophotometry is a rather opaque diagnostic, we were still surprised to see such large quantities (of whatever) end up in the wash, given the pittance that remained in the eluate. This is the primary reason we chose to use as pure a sample as possible (a commercial DNA ladder) to test the efficiency of SPRI purification (a quest now seeming an unfortunate crusade).

Yesterday's experiment adds another few datapoints on the performance of SPRI purification, using the same 50 bp DNA ladder with AMPure XP beads at ratios of 1.4X, 1.6X, and 1.8X (the recommended ratio based on the manufacturer's protocol), and with SPRIselect beads at a ratio of 1.2X (cutoff of ~200 bp).

AMPure 1.4_1.6_1.8X Select 1.2X.JPG

Control DNA ladder in lane 2 (200 ng)
So, as foretold by the manufacturer and product support staff, the AMPure XP beads return the expected product when using the recommended AMPure XP bead ratio of 1.8X, but no appreciable yield at the 1.4X and 1.6X bead ratios. Interestingly, the size-selective SPRIselect beads used at a ratio of 1.2X (recommended for a 200 bp cutoff) also yield a visible DNA ladder (with an effective cutoff at 200 bp).
It is noteworthy that even for this rather pure sample of DNA (without pools of primers, fragments, and various solutes), the yields are disappointingly low no matter the method of evaluation (<10%).

Regardless of the method of assessment, it is clear from the electropherogram that, under "nearly ideal" conditions, even when using the manufacturer's recommended bead ratio of 1.8X, there is substantial loss of DNA.
A review of the manufacturer's AMPure bead protocol (their figure 4) shows that eluate volume is a rather important variable, with an expected recovery of <20% based on extrapolation of their data:
AMPure eluate volume.JPG


As for SPRI calibration, you really have to do it in the same conditions as the reaction you're cleaning up, rather than a nice clean ladder in low-ionic storage buffer. There are a lot of salts in the final reaction mix that change the dynamics of the SPRI phenomenon. We prefer to calibrate it by running entire parallel library preps with the same input (reference RNA), and then look at the final products on a MiSeq so the QC metrics we're comparing are the ones that directly matter. When we do that, we find ratios above 0.7X increase the amount of unusable byproduct reads (not just adapter dimers but also cDNAs too short to align uniquely) without gaining any new usable reads - it's all bathwater, no baby.

This is all sensible to me, and I know it is based on your experience and careful experiments.  Yet the fear of losing our pooled libraries of very precious samples is a powerful motivator to try and be sure we understand each step of the process.  We are contemplating such an experiment to see how this all works in our hands.

However, I'm a little confused about your PCR cycles vs. input amount in the first place. Do you think the amount of total RNA going into the reaction is 1 ng? If so, then our calibration on human reference RNA indicates 19 cycles for that, and allowing for lower efficiency from FFPE material, 21 cycles seems reasonable with a yield at the low end of what's usable. So even though I don't think SPRI is your problem here, it's not clear to me how much of a problem there actually is to solve.

Although I was pretty happy with the Tapestation distribution of our first library amplified at 21 cycles, I was disappointed by the yield, which prevented our core facility from launching our first MiSeq run. Hence we focused on the yield of cDNA and the SPRI purification step, wondering whether the baby was hiding in the beads or swimming in the bathwater. I would be delighted if this all simply disappeared and we could go our merry way, but, again, lingering fears have led us to these experiments. I have resisted the urge to simply increase the PCR cycle number further due to anticipated problems of overamplification. I am hopeful we may be able to achieve success by adding samples to the original pool (our initial pools had about a dozen samples; we expect to make it 96 samples), though worries linger that we may end up short.

Thanks again for the feedback.


jm googlecalendar

Apr 20, 2022, 3:56:42 PM
to Smart-3SEQ

SMART-3Seq Yield from Zeiss PALM samples of rat dorsal root ganglia

 

This follows up on the last post in this thread, wherein we did two experiments with 17 or 18 caps using the Zeiss PALM system and following the SMART-3Seq protocol. Using 17 LCM caps and 10 uL of lysis solution per cap (i.e., MicroCap column) in the SMART-3Seq workflow for FFPE sections equates to a pre-SPRI pool volume that fits neatly into a 1.5 mL microfuge tube.

We had previously worried about losing cDNA at the 0.7X SPRI bead ratio (AMPure SPRI beads), so we had to bite down hard to see whether we would generate sufficient cDNA for an initial QC sequencing test using the i5 indexing strategy on the MiSeq platform. Part of our worry was obtaining sufficient cDNA in the small final elution volume of 15 uL (the minimum specified by our university NGS facility).

Indeed, our sequencing facility specifies a minimum of 10 uL of a 2 nM pool or 5 uL of a 4 nM pool, which is our target yield (noting that the SMART-3Seq protocol suggests a final yield of 10 uL of amplified library ranging between 5 and 50 nM, with most of the fragments between 200 and 600 bp).

Our first experiment used 17 LCM caps and 22 PCR cycles, which yielded approximately 32 nM in Elution 1 (15 uL water) and 2 nM in Elution 2 (an additional 35 uL water chaser), based on Tapestation analysis of the 200-300 bp band; evaluating the 165-500 bp and 50-165 bp segments gives a ratio of 3.0:

Fig1_20Apr22.JPG 

Based on our first experiment, we repeated the SMART-3Seq protocol on another set of 17 LCM caps but reduced the PCR to 21 cycles, with the same SPRI purification (0.7X bead ratio). Elution 1 yielded 18.1 nM in 15 uL and Elution 2 yielded 4.74 nM in 35 uL, based on Tapestation analysis of the 257 bp peak, which gives a 165-500 bp / 50-165 bp ratio of 11 (an improvement likely related to the absence of the 550 bp peak indicative of overamplification at 22 cycles).

 Fig2_20Apr22.JPG
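One way to sanity-check what the second elution actually contributes: since 1 nM = 1 fmol/uL, concentration times volume gives total material directly, and that total is what matters when pooling. A quick sketch using the numbers from the first experiment above:

```python
def pooled_molarity(elutions):
    """elutions: list of (concentration_nM, volume_uL) pairs.
    1 nM = 1 fmol/uL, so nM x uL gives fmol directly; the pooled
    concentration is total fmol over total volume."""
    total_fmol = sum(conc * vol for conc, vol in elutions)
    total_vol = sum(vol for _, vol in elutions)
    return total_fmol, total_fmol / total_vol

# first experiment above: 32 nM in 15 uL plus 2 nM in 35 uL
fmol1, nM1 = pooled_molarity([(32.0, 15.0), (2.0, 35.0)])
```

By this arithmetic the 35 uL chaser adds only ~70 fmol to the ~480 fmol already in Elution 1, at the cost of more than tripling the volume.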

While these results seemingly gave sufficient yields of cDNA, we found the relationship between the Tapestation and Qubit assays (which agreed with each other) and the Kapa qPCR quantification to be remarkably different, by nearly an order of magnitude:

 Fig3_20Apr22.JPG

Acknowledging that we have not yet followed Dr. Foley's suggestion of quantification using dual-labeled hydrolysis PCR probes, our cDNA yields are disappointingly meagre (acknowledging also that poly-adenylated sequences form a small fraction of the overall RNA pool). Still, it seems we will have (just) sufficient cDNA for a MiSeq QC run. Should the MiSeq confirm we have readable sequences, we hope similar yields will ultimately be sufficient for NovaSeq 6000 sequencing (we have 96 samples).

Given our thin yields, we have combined the cDNAs from the two experiments illustrated above and slightly concentrated them using a refrigerated SpeedVac evaporation system, which seems to work predictably and satisfactorily. This raises the question of whether we should elute from the AMPure SPRI beads in larger volumes of water to maximize yields, then concentrate down to volumes suitable for sequencing. If anyone else has experience with this approach, or any other advice for maximizing yields, it would be valuable to learn.

Joe Foley

Apr 21, 2022, 9:02:12 PM
to smart...@googlegroups.com
It looks like you're on the right track! These libraries look normal except perhaps the yield.

Re TapeStation vs. qPCR yield estimates, I would have strong confidence in qPCR and none in the TapeStation. The electropherograms are crucial for QC but the concentration estimates from electrophoresis are usually far off like this, while qPCR reliably predicts performance in the sequencer.

Re overamplification, since the yields are borderline for your purposes, you might err on the side of allowing a little bit of overamplification rather than risking a yield too low to use. We see no difference in the sequencing data from mild overamplification (a couple of cycles too high). The library just looks different in electrophoresis, which is basically a cosmetic problem. However, it does affect the accuracy of SYBR qPCR: overamplification falsely increases the reported average molecule length from electrophoresis (artifacts migrate more slowly), so when you normalize the qPCR data by the average molecule length, you'll underestimate the true molarity of the library.
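To make that bias concrete: SYBR qPCR kits report molarity after a size adjustment against a standard of known length (I believe the Kapa Illumina kit's standard is 452 bp; treat that figure as an assumption). Scaling the raw reading by the apparent average library length is exactly where overamplification artifacts distort the result:

```python
def size_adjusted_molarity(raw_pM, avg_library_bp, standard_bp=452):
    """Kapa-style size adjustment (standard_bp of 452 is an assumption):
    the raw qPCR reading is scaled by standard length over library length.
    Overestimating the library's average length, e.g. from slow-migrating
    overamplification artifacts in electrophoresis, shrinks the result."""
    return raw_pM * standard_bp / avg_library_bp

# same raw qPCR signal, two different apparent average lengths
honest = size_adjusted_molarity(2.0, avg_library_bp=281)
inflated = size_adjusted_molarity(2.0, avg_library_bp=550)  # underestimate
```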

Re elution & SpeedVac, I haven't tried that, but you probably don't need a whole LCM and library prep to find out if it works. Just take a few aliquots of some non-precious DNA sample and test both methods on it, then measure the final yields with the Qubit. My guess is it won't help.

jm googlecalendar

Apr 22, 2022, 12:19:01 PM
to Smart-3SEQ
Thanks for your continued enthusiasm, and for your comments on qPCR, which reinforce our experience; we note the relationship between Qubit and qPCR is remarkably stable even if the values are far apart.
And we appreciate your thoughts on overamplification, as this is the obvious way to generate more cDNA, which, as long as it doesn't confound interpretation, seems a reasonable hedge.

We just received the Kapa qPCR values for the combined libraries described above (17+18 caps), which were calculated at 3.43 nM (in 10 uL), which our NGS facility claims is still insufficient to run a MiSeq. I am curious about your opinion of how risky it would be to run a MiSeq on this sample for QC purposes. I read that lower concentrations are actually required for NovaSeq (our ultimate target) than for MiSeq, and that there are a number of steps in both procedures that require quite a bit of dilution before the run, so even though we are a little low there seem to be steps that could be adjusted. Obviously we don't want to risk overclustering, but we want to be sure we avoid underloading as well.

Thanks as usual.

Joe Foley

Apr 22, 2022, 2:01:55 PM
to smart...@googlegroups.com
I've never set up a NovaSeq myself, but the MiSeq Denature & Dilute protocol calls for 5 μL @ 4 nM, and I've found it works fine with 10 μL @ 2 nM, 20 @ 1, 40 @ 0.5, just like the NextSeq. Or if you want to preserve more of your pool for the NovaSeq, underloading is no problem since you just want QC, not real data. You could also make up the missing concentration with extra phiX spike-in, which might make the core facility more comfortable running unusual libraries but it's really not necessary. I would just ask for the cheapest possible run (the cheapest kit Illumina sells is the 300-cycle Nano but your facility might not have that one in stock) and let the facility disavow responsibility for weird results from going off the official protocol.
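Those options are all the same total amount of material: since 1 nM = 1 fmol/uL, 5 uL @ 4 nM is 20 fmol, as is 10 @ 2 and so on. A quick sketch of the volume a pool needs to supply that, using the 3.43 nM figure from above:

```python
def volume_needed_uL(target_fmol, pool_nM):
    """Volume of pool needed to supply a fixed amount of library.
    1 nM = 1 fmol/uL, so vol = fmol / nM; the MiSeq protocol's
    5 uL @ 4 nM, 10 @ 2, 20 @ 1, and 40 @ 0.5 are all the same 20 fmol."""
    return target_fmol / pool_nM

# a 3.43 nM pool needs about 5.8 uL to supply the same 20 fmol
vol = volume_needed_uL(20.0, 3.43)
```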