Hi LDSC experts,
I ran a GWAS in the UK Biobank (with relatively few cases), and I'm getting a very low h2 estimate, similar to what is described
here. More specifically, I have two scenarios (two munge stats files and two h2 files):
GWAS #1 Munge stats file:
Mean chi^2 = 0.999
WARNING: mean chi^2 may be too small.
Lambda GC = 1.001
Max chi^2 = 25.326
0 Genome-wide significant SNPs (some may have been removed by filtering).
GWAS #1 h2 file:
Total Observed scale h2: 0.0015 (0.0015)
Lambda GC: 1.0075
Mean Chi^2: 1.003
Intercept: 0.9943 (0.0069)
Ratio < 0 (usually indicates GC correction).
GWAS #2 Munge stats file:
Mean chi^2 = 0.999
WARNING: mean chi^2 may be too small.
Lambda GC = 0.997
Max chi^2 = 23.817
GWAS #2 h2 file:
Total Observed scale h2: 0.0002 (0.0015)
Lambda GC: 1.0016
Mean Chi^2: 1.0017
Intercept: 1.0007 (0.0067)
Ratio: 0.3995 (3.9369)
In GWAS #2, I'm actually getting "NA"s when I run genetic correlation analyses (via LD Hub), as in the posts
here and
here, which is apparently due to the very low h2.
Long story short, my question is this: if I do not care about heritability, may I use the "Lambda GC" values shown above as evidence of a lack of inflation in the GWAS? If so, which "Lambda GC" value should I use (the one from the munge stats file or the one from the h2 file)?
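For reference, here is how I understand Lambda GC to be computed (a minimal sketch, not taken from the LDSC source): the median of the observed 1-df chi-square statistics divided by the median expected under the null (~0.4549). If this is right, the small differences between the munge-stats and h2-file values would just reflect the different SNP sets retained at each step.

```python
import numpy as np
from scipy.stats import chi2

def lambda_gc(pvals):
    # Convert two-sided GWAS p-values to 1-df chi-square statistics,
    # then take the ratio of the observed median to the median expected
    # under the null, chi2.ppf(0.5, df=1) ~= 0.4549.
    chisq = chi2.isf(np.asarray(pvals), df=1)
    return np.median(chisq) / chi2.ppf(0.5, df=1)

# Sanity check: under the null, p-values are uniform and Lambda GC ~ 1.
rng = np.random.default_rng(0)
null_p = rng.uniform(size=100_000)
print(round(lambda_gc(null_p), 3))
```

(The variable names here are mine; only the formula itself is standard.)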
Thanks!
P.S.: Should I still *trust* the genetic correlation results of GWAS #1 versus multiple other traits? (Things run smoothly on LD Hub.)