Four questions about reliability of a wet lab action


Dan Kolis

Aug 31, 2022, 9:40:46 AM
to DIYbio
                        

Greetings, reader(s):

I posted this question buried in a deeper thread; I think it needs to bubble up to get noticed enough to snag opinions, answers, and maybe some direct-experience 'stories':

The question began 'sort of' addressed to Sean Sullivan, but it is really a general question.

Here are four questions about creating a gene edit and expressing it, e.g. in yeast or even E. coli: some plate-and-Petri-dish wet lab style work. All are about framing expectations and resources; the assumption is that the technology is the 'best we can get', and/or that the technology almost isn't the point, anyway.


Reference Situation:

*) Gene length is 3K nt; promoter, tails, introns (if any), etc. are known to work generally.

*) Expression system is known, e.g. a locally managed yeast strain or 'whatever'.

*) Selection itself is reliable, e.g. UV fluorescence or some other property.


Questions:

Q1) How many labor hours are typical? No paperwork time, just wet lab work including end-game verification.

Q2) What's the success rate, in percent, of a new germ line emerging?

Q3) How often is an apparent success ultimately wrong? Not because the gene didn't do what was expected, but because some 'other' flaw kept it from really being expressed?

Q4) Is sequencing the presumed-usable final modified colony invariably reliable enough to make Q3 irrelevant?


For Q2), is the answer closer to 30% or 80%?

For Q1), I mean hands-on time with endless revisits to the plates, wells, dishes, tubes, and so on, not including incubation times, centrifuge runs, etc.

Thanks in advance to anyone who answers. For people like me who want to make this into technology, not science, this matters hugely.

Thanks!

Daniel B Kolis

my ref: nafl, 31 Aug 2022, diybio



bioscisam

Sep 19, 2022, 10:57:45 AM
to DIYbio
I think the answer to Q1 is that it can be highly variable depending on the user, but if you're using a known transformation system, i.e. a common host/chassis, vector, and insert, then the turnaround can be a couple of days if everything lines up, i.e. primers/PCRs work, transformation is reasonably efficient, etc. There are often waiting times for some of the steps during which you can be doing other things.
Q2: transformation efficiency (I'm assuming this is what you mean by producing a new germ line) is something you can measure throughout the process. If you're using a well-characterised system there may be historical data on this. My experience in a DIY setting is that efficiencies can be lower for various reasons.
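For reference, the usual bookkeeping is colony-forming units per microgram of DNA, corrected for the fraction of the recovery volume you actually plated. A rough Python sketch with made-up numbers, just to show the arithmetic:

def transformation_efficiency(colonies, dna_ug, plated_ul, recovery_ul, dilution=1.0):
    # CFU per microgram of transformed DNA.
    # colonies    -- colonies counted on the plate
    # dna_ug      -- micrograms of plasmid DNA added to the cells
    # plated_ul   -- microlitres of recovery culture spread on the plate
    # recovery_ul -- total microlitres of recovery culture
    # dilution    -- extra dilution factor applied before plating (1.0 = none)
    dna_plated_ug = dna_ug * (plated_ul / recovery_ul) / dilution
    return colonies / dna_plated_ug

# Hypothetical numbers: 42 colonies from 10 ng of plasmid,
# 100 ul of a 1000 ul recovery plated, no further dilution.
print(f"{transformation_efficiency(42, 0.01, 100, 1000):.2e} CFU/ug")

With those made-up numbers it comes out around 4.2e4 CFU/ug; commercial high-efficiency competent cells are usually quoted several orders of magnitude higher, which is the sort of gap you can see in a DIY setting.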
Q3: I think you mean false positives? It depends on what you're testing for and what you consider success. If you've selected for an insert on the plate, and tested for the right size of insert (maybe through colony PCR), you've probably narrowed it down a lot. If you get that sequenced and it's exactly the right thing with high-confidence base calls, you're probably doing well.
Q4: If you're intending for the gene product to be expressed, there's further characterisation you can do, e.g. protein chromatography or metabolite analysis. Do you intend to isolate the product? Nickel chelation with a poly-His tag, followed by checking for a product of the expected size on SDS-PAGE, is a common approach.
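As a back-of-the-envelope check for the 'expected size' part: your ~3K nt ORF translates to roughly 1,000 residues, and at an average of about 110 Da per residue that predicts a band somewhere around 110 kDa on the gel. A throwaway Python sketch of that rule of thumb (rough averages only, not a real calculation from the actual sequence):

AVG_RESIDUE_MASS_DA = 110.0   # rough average mass of one amino acid residue
WATER_MASS_DA = 18.0          # one water added back per chain

def predicted_protein_kda(orf_length_nt, includes_stop=True):
    # Rule-of-thumb molecular weight of the protein encoded by an ORF.
    codons = orf_length_nt // 3
    residues = codons - 1 if includes_stop else codons
    return (residues * AVG_RESIDUE_MASS_DA + WATER_MASS_DA) / 1000.0

print(f"~{predicted_protein_kda(3000):.0f} kDa expected on the gel")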

Dan Kolis

Sep 19, 2022, 1:07:33 PM
to DIYbio
Thanks for the answers.

My context is that I've made an extensive machine-readable nomenclature for syn bio and have some extremely supportive views from top-line workers, so I started an IDE to support it. Another group here, tinkering with improving outcomes by mixing and matching manual protocols and automation, made me realise what a huge commitment it is to yank the rug out from under a group and expect them to use an amped-up lab notebook system instead of how they progress daily now. That is, the human being who dreams it up has to be wired right into the daily life of the group, or it's crazy to risk a loss of continuity. Every 'group' attempting some serious biotech already has some Python and DB programmers with a system in place.

Consider that BioXXX in Germany, etc., had 950 people lean into getting a page of mRNA to end up in a vaccine. It is really hard to get to the punchlines in the real world, not just a western blot with nanograms on a piece of paper...

The bills of materials for some of these projects are simply immense.

It doesn't bother me greatly, but it makes me realise that even the sweetest whole-cell simulation (e.g. yeast) does not make a bucket of previously rare stuff come into existence with ease...

Still, vastly more transferable know-how and far fewer simple clerical errors may easily justify more software. A careful reading of Venter's labs shows huge multi-year diversions which are almost SNP hits plus some mishandling of data, etc. Paying 1,000 people for a year is somewhat ouchy in the accounting department.

The pseudo-batch-oriented WWW database thing with CSV files is pretty comical. Can you imagine stocking the shelves of a Walmart with 30K spreadsheets and copy-and-paste?

Thanks for the answers; they gave me more to think about in what I am attempting. One target area is de-extinction. Let's guess that at 300 new base edits; if each is repeated 6 times, that's maybe 2,000 files, and each is flavored 3 ways. This is at the limits of manual hacking. Possible, but it must be near the edge, creeping into simply impossible...
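For what it's worth, the back-of-the-envelope arithmetic in Python, where the 300 / 6 / 3 figures are nothing more than the guesses above:

edits, replicates, flavors = 300, 6, 3        # guessed numbers from the paragraph above

files_per_flavor = edits * replicates         # 1,800 -- roughly the 'maybe 2,000 files'
total_artifacts = files_per_flavor * flavors  # 5,400 things to name, store, and cross-reference

print(f"{files_per_flavor} files per flavor, {total_artifacts} artifacts in total")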

my ref: 19 Sep 2022, 15:00z, NAFL, https://groups.google.com/g/diybio/c/3wPJbiAz9xU