Assessing Temporal Signal and Clock Model Selection in Mtb WGS Data


Opeyemi Alayande

Mar 24, 2025, 1:15:42 PM
to beast-users

[Four screenshot attachments: correlation plots from TempEst and BactDating]


Dear BEAST2 Community,

I am analyzing Mycobacterium tuberculosis (Mtb) whole-genome sequencing (WGS) data and assessing the presence of a temporal signal in my dataset. I have attached the correlation plots from TempEst and BactDating, which I interpret as indicating no temporal signal under a strict molecular clock assumption.

A previous BEAST2 community response from Artem B on a similar topic reads:
  

“TempEst has no cut-off values to make this decision. Low R² values do not necessarily mean the lack of temporal signal. Root-to-tip regression analysis implies a strict clock model (substitution rate for all branches is the same).”

Given this, I suspect that among-branch rate variation may be present in my dataset, making a relaxed clock model more appropriate. I am currently performing model selection using Nested Sampling to compare relaxed vs. strict clock models, but I have several questions about interpreting and acting on the results:


1. Temporal Signal and Relaxed Clock Models
  • If a relaxed clock model is preferred based on marginal likelihood/Bayes factor comparison, does that confirm the presence of a temporal signal and thus allow me to proceed with downstream analyses?

  • Alternatively, are there additional methods to assess temporal signal in datasets that do not follow a strict molecular clock?

  • Can tip randomization tests or other sensitivity analyses help evaluate support for temporal signal in such cases?

2. Interpreting a Worse Fit for the Relaxed Clock Model
  • If the model selection result still favors a strict clock model over a relaxed clock model, given a poor root-to-tip regression result, does that suggest the dataset entirely lacks temporal signal and is unsuitable for BEAST2 analyses (e.g., coalescent and birth-death models)?

  • If so, are there any strategies to remedy or validate such datasets before concluding that they are unsuitable for Bayesian phylogenetic inference?


3. Working with Slowly Evolving Pathogens Like Mtb
  • Mtb is a slow-evolving pathogen. Some studies analyzing datasets spanning only a few years (e.g., 2 years) do not explicitly test for temporal signal. Instead, they assume a strict molecular clock and fix the mean clock rate at an accepted value from the literature.

  • Is this an equally valid approach, or should explicit tests for temporal signal always be performed before selecting a clock model?


Given my dataset and results so far (as communicated above), are there any best practices, additional tests, or alternative approaches you would recommend?

I appreciate any insights or guidance you can provide. Thank you in advance!



Artem B

Mar 25, 2025, 2:06:59 AM
to beast-users

Hi Opeyemi,

Based on the R², I would also assume a relaxed clock. Besides, the root-to-tip distances of the contemporaneous samples vary widely, which also indicates a high degree of among-branch rate variation. However, there are too many contemporaneous samples, which is a problem for root-to-tip (RTT) regression analysis.
So I agree that marginal likelihood estimation is the best option for this complex case.

If a relaxed clock model is preferred based on marginal likelihood/Bayes factor comparison, does that confirm the presence of a temporal signal and thus allow me to proceed with downstream analyses?

To test for temporal signal in the data with a marginal likelihood estimator, a model without tip dates must be included in the comparison. In other words, you should estimate the log marginal likelihood for the model without tip dates as well. This is called the Bayesian evaluation of temporal signal (BETS, https://academic.oup.com/mbe/article/37/11/3363/5867920). To build a model without tip dates, simply leave the tip dates box unchecked in BEAUti. The rate can be fixed at 1.0, but in their latest publication Tay et al. advise fixing the substitution rate at the order of magnitude expected for the species analyzed (details here: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012371). If the log Bayes factor favors the dated models, then temporal signal is present in the data. It is common practice to create dated and undated strict clock models (SC_dated, SC_undated) and dated and undated relaxed clock models (UCLD_dated, UCLD_undated; four models in total) and calculate log marginal likelihoods for all of them. That way you compare strict and relaxed clocks and test for temporal signal at the same time!
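
Just to make the bookkeeping concrete, here is a minimal sketch of how the comparison works once the four log marginal likelihoods are estimated (the values below are invented purely for illustration):

```python
# Minimal sketch of interpreting a BETS comparison.
# The log marginal likelihood values are invented for illustration only.
log_ml = {
    "SC_dated":     -10250.3,   # strict clock, tip dates used
    "SC_undated":   -10295.8,   # strict clock, tip dates ignored
    "UCLD_dated":   -10221.6,   # relaxed clock, tip dates used
    "UCLD_undated": -10280.4,   # relaxed clock, tip dates ignored
}

# log Bayes factor (dated vs undated) for each clock model:
# a clearly positive value supports the presence of temporal signal.
for clock in ("SC", "UCLD"):
    log_bf = log_ml[f"{clock}_dated"] - log_ml[f"{clock}_undated"]
    print(f"{clock}: log BF (dated vs undated) = {log_bf:.1f}")

# The overall best-fitting model is the one with the highest log marginal likelihood.
print("Best model:", max(log_ml, key=log_ml.get))
```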

As for the date-randomization test, contemporaneous samples are problematic for it too, for similar reasons. If I understand your case correctly, the section 'Effect of Nonuniform Temporal Sampling on the Date-Randomization Test' from the following article by Duchene et al. may be helpful: https://academic.oup.com/mbe/article/32/7/1895/1016979?login=false

If the model selection result still favors a strict clock model over a relaxed clock model, given a poor root-to-tip regression result, does that suggest the dataset entirely lacks temporal signal and is unsuitable for BEAST2 analyses (e.g., coalescent and birth-death models)?

If BETS provides evidence for the presence of temporal signal, it does not matter what the RTT regression shows you. RTT regression is not a formal test.

If so, are there any strategies to remedy or validate such datasets before concluding that they are unsuitable for Bayesian phylogenetic inference?

Mtb is a slow-evolving pathogen. Some studies analyzing datasets spanning only a few years (e.g., 2 years) do not explicitly test for temporal signal. Instead, they assume a strict molecular clock and fix the mean clock rate at an accepted value from the literature. Is this an equally valid approach, or should explicit tests for temporal signal always be performed before selecting a clock model?

I think you have nothing to lose if you check signal with BETS and only gain if you find it :)

Best,
Artem

Tuesday, March 25, 2025 at 01:15:42 UTC+8, Opeyemi Alayande:

Opeyemi Alayande

Mar 25, 2025, 1:36:46 PM
to beast...@googlegroups.com

Dear Artem,

Thank you so much for providing this detailed explanation. I'm seeing https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012371 for the first time, and it was a very interesting read. I really appreciate your help.

I have made several attempts to perform BETS using GSS (I’m particularly interested in using GSS for BETS as described in https://academic.oup.com/mbe/article/37/11/3363/5867920), but I have not been able to figure out how to set it up. While Nested Sampling provides the "MCMC to NS converter" and the "Nested Sampling Log Analyzer" apps to process logs and calculate Bayes factors, the tutorials on GSS are not very straightforward.

Here are the specific issues I’m facing:

  • MCMC Tab Options:
    I was instructed to select GSS in the MCMC tab, but I only see options for NS and PS/SS.

  • Tree Prior Options:
    I was told to choose either “Product of exponential distributions”, “Matching coalescent model”, or “Matching speciation model” as the tree prior, but I can’t find these options.


  • Additional details from previous community conversations have made performing BETS with GSS even more complicated.

Although I’m not entirely sure how it works, I’d like to know if the Path Sampler (MODEL_SELECTION) and Path Sampler Analyzer (MODEL_SELECTION) can be used to implement GSS and compare marginal likelihoods for BETS in BEAST2.

I would be extremely grateful if you could walk me through this process or direct me to further information that might help clarify these points.

Thank you very much for your assistance!




Artem B

Mar 25, 2025, 9:49:45 PM
to beast-users
Hi Opeyemi,

You're welcome. The fact is that GSS is implemented only in BEAST1. However, you can use any marginal likelihood estimator. GSS is the most accurate, it's true, but it is also more computationally expensive than the other methods. My understanding is that Duchene et al. chose GSS because they were carrying out a methodological validation, and GSS removes any question about the accuracy of the estimator.
As for me, I prefer MODEL_SELECTION in BEAST2, since it allows me to resume a run after an accidental abort (a really big advantage). Also, I find Path Sampling simpler than Nested Sampling in terms of settings. The rule of thumb is that the pre-burn-in MCMC length should be the same as in the main BEAST run where you get ESS > 200 for all parameters. Next, the product of the number of steps and the MCMC length of each step should again equal the length of the main BEAST run where you get ESS > 200 for all parameters. For example, if the length of the main run is 50,000,000, then you can set 100 steps, hence 50,000,000 / 100 = 500,000 iterations for the MCMC length of each step. To check your settings, you can increase the number of steps or the MCMC length of each step and compare the results, but these settings are already thought to be redundant. Besides, all of the above is also true for GSS.
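
To spell out that arithmetic as a tiny sketch (the numbers are only the example above, not recommendations):

```python
# Sketch of the rule of thumb above; the lengths are just the example values.
main_chain_length = 50_000_000   # length of the main run that reached ESS > 200
n_steps = 100                    # number of path-sampling / stepping-stone steps

pre_burnin = main_chain_length                        # pre-burn-in: same as the main run
chain_length_per_step = main_chain_length // n_steps  # 50,000,000 / 100 = 500,000

print("pre-burn-in:", pre_burnin)
print("chain length per step:", chain_length_per_step)
```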

You can find more about PS/GSS at the BEAST1 website: https://beast.community/model_selection_1, https://beast.community/model_selection_2, https://beast.community/bets_tutorial

Also, my strong advice is to read the works of Sebastian Duchene and his students on temporal signal and molecular clocks; they are all clearly written.

Best,
Artem
Wednesday, March 26, 2025 at 01:36:46 UTC+8, Opeyemi Alayande:

Artem B

Mar 25, 2025, 9:52:12 PM
to beast-users
* For example, the length of the main run

Wednesday, March 26, 2025 at 09:49:45 UTC+8, Artem B:

Opeyemi Alayande

Mar 26, 2025, 2:25:30 PM
to beast-users

Hi Artem,

Thank you for your detailed and insightful response. Your explanation and references clarified many aspects of GSS implementation, and I now have a much better understanding of the differences between BEAST1 and BEAST2. I only wish there weren’t two parallel BEAST projects!

1. GSS Implementation

Based on your recommendations and the findings of Tay et al. (2024), I have begun implementing GSS. Before proceeding further, I would appreciate your feedback on my trace logs. My first run was set up as follows:

  • Data: Isochronous tips (dated tips are next)

  • Clock Model: Strict molecular clock

  • Tree Prior: Coalescent constant size

  • Priors: All lognormal (default values), except the clock rate, which is fixed at 5.0 × 10⁻⁷ (calibrated from thousands of global M. tuberculosis datasets). I will subsequently try other distributions based on the recommendations of Tay et al. (2024).

  • MCMC Settings:

    • Chain length = 10⁸

    • MLE estimation method = GSS

    • Number of stepping stones = 100

    • Chain length per stepping stone = 2.0 × 10⁶ (following Tay et al., 2024)

    • Tree working prior = Matching coalescent model

    • Stepping stone distribution = Beta

Trace Log Observations
  • Most parameters have ESS > 200, including those I consider most critical.

  • However, joint, likelihood, meanRate, and treeLikelihood have lower ESS values.

    [Screenshot: ESS values from the main MCMC trace log]

  • In the MLE trace log, only pathLikelihood.destination has an ESS ≥ 200.

    [Screenshot: ESS values from the MLE (GSS) trace log]

I came across one of your previous community responses stating:

“How do you measure ESS for marginal likelihood? As far as I know, it is not a good idea.”

From this, I infer that a well-mixed MCMC run with ESS > 200 for key parameters is more important than achieving high ESS for marginal likelihood estimation. Does this imply that my MLE estimate (-203580.65) is still reliable despite the low ESS values in the MLE trace log? I’d greatly appreciate your clarification on this.

2. Ascertainment Bias Correction

I am working only with variant sites, as including all sites makes the alignment file excessively large (in GBs). Given this, I attempted to correct for ascertainment bias by modifying the XML file to include constant sites, similar to the approach used in BEAST2 (as described at https://www.beast2.org/2019/07/18/ascertainment-correction.html). However, this modification failed in BEAST1.

Instead, I set the site heterogeneity model to Gamma (equal weights). Would the lack of explicit ascertainment bias correction impact my marginal likelihood estimation or other parameter estimates? I’ve searched extensively for a BEAST1-compatible method but haven’t found any guidance. Do you have any suggestions for handling this?
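
For context, the constant-site counts that the BEAST2-style correction expects can be derived roughly as follows (a simplified sketch; the reference FASTA and the list of 0-based variant positions are placeholder file names):

```python
# Minimal sketch: count constant A/C/G/T sites for an ascertainment-bias
# correction of the kind described on the BEAST2 website.
# "reference.fasta" and "variant_positions.txt" are placeholder file names.
from collections import Counter

def read_fasta_sequence(path):
    """Read a single-sequence FASTA file and return the sequence as one string."""
    parts = []
    with open(path) as handle:
        for line in handle:
            if not line.startswith(">"):
                parts.append(line.strip().upper())
    return "".join(parts)

reference = read_fasta_sequence("reference.fasta")
with open("variant_positions.txt") as handle:          # one 0-based position per line
    variant_sites = {int(line) for line in handle if line.strip()}

# Count the bases at all non-variant (constant) positions; these counts,
# here printed in A, C, G, T order, are the constant-site weights.
counts = Counter(
    base
    for pos, base in enumerate(reference)
    if pos not in variant_sites and base in "ACGT"
)
print(counts["A"], counts["C"], counts["G"], counts["T"])
```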

Thank you again for your time and assistance—I truly appreciate it.

Artem B

Mar 27, 2025, 10:10:52 PM
to beast-users

Hi Opeyemi,

Does this imply that my MLE estimate (-203580.65) is still reliable despite the low ESS values in the MLE trace log? I’d greatly appreciate your clarification on this.

As far as I know, yes; you shouldn’t check MLE logs for convergence. I remember reading about that on the BEAST1 website, but unfortunately I cannot find the mention now. As I said earlier, the best way to validate your GSS settings is to rerun with more steps and/or a longer MCMC length per step. The log marginal likelihoods should not differ drastically.

As I understand from your first screenshot, your main MCMC run didn’t converge (the ESSs for the posterior and likelihood are about zero). The trace of the posterior should look like a hairy centipede:


Maybe you should increase the burn-in size to skip the burn-in phase of the trace. All of this is MCMC basics, so maybe you should go through several basic BEAST tutorials before attempting GSS.
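
If it helps, here is a minimal sketch of discarding burn-in and checking ESS for a BEAST trace log outside of Tracer; it assumes the Python packages pandas and ArviZ are installed, and the file name, burn-in fraction, and column names are placeholders to adapt to your own log:

```python
# Minimal sketch: discard burn-in from a BEAST .log file and check ESS.
# "beast_run.log", the 10% burn-in, and the column names are placeholders.
import pandas as pd
import arviz as az

trace = pd.read_csv("beast_run.log", sep="\t", comment="#")

burnin = int(0.10 * len(trace))   # e.g. discard the first 10% of samples
post_burnin = trace.iloc[burnin:]

for param in ("joint", "likelihood", "treeLikelihood"):
    if param in post_burnin.columns:
        ess = az.ess(post_burnin[param].to_numpy())
        print(param, "ESS =", ess)
```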

  2. Ascertainment Bias Correction

I’m not an expert in this, sorry. I work with viruses that have much simpler genomes.

Best,
Artem

Thursday, March 27, 2025 at 02:25:30 UTC+8, Opeyemi Alayande:

Artem B

Mar 27, 2025, 10:13:40 PM
to beast-users
For some reason, the figure was not uploaded. Here is a second try (sorry if I can't get this to work).

[Attached image: example of a well-mixed posterior trace]

Friday, March 28, 2025 at 10:10:52 UTC+8, Artem B:

Opeyemi Alayande

Mar 30, 2025, 2:40:09 PM
to beast-users
Thank you very much, Artem, for providing these details.

Sincere appreciation