Yep, the title speaks for itself.
The references are the same as the "A Former Shade Tree Mechanic Talks About Causality in Health Claims" 2 Episode Video Series.
Sorry for getting a little sentimental there a couple times.
https://nutritionj.biomedcentral.com/articles/10.1186/1475-2891-3-9
https://deepai.org/machine-learning-glossary-and-terms/probabilistic-causation
https://plato.sydney.edu.au/archives/spr2004/entries/causation-probabilistic/
https://www.oxfordreference.com/display/10.1093/oi/authority.20110803095523346
https://sci-hub.se/10.1007/s12170-018-0566-9
https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation
https://www.amazon.com/Structure-Scientific-Revolutions-50th-Anniversary/dp/0226458121
Mendelian Randomization cannot prove cause and effect.
Why?
Because it's an ATTEMPT at using measured variation in genes to TRY to establish a POSSIBLE cause from correlations found in epidemiology, based on strong associations assumed between those genes, their variations, and health outcomes.
The issue is it cannot, because what they're doing is using gene variants and associations to try to infer causality. If you understand epigenetics enough to factor it in (making you two pubic hairs more competent), you'd see that it muddies the water of that method. That's the first of 4 fundamental weaknesses of Mendelian Randomization.
The second weakness is pleiotropy, which is when a single gene or gene variant causes multiple possible outcomes or effects. Epigenetics compounds this, I'm PRETTY sure, but I could be wrong.
The third weakness is linkage disequilibrium (LD for short), which is a statistical association between different genetic variants caused by the tendency of alleles (pronounced "a-leels") that sit close together on a chromosome to be inherited together. It occurs when there is non-random association of variants at different loci (pronounced "low-sye").
Loci in this case is the plural of the biology term locus: the position on a chromosome of a particular gene or allele.
By this principle, two SNPs (pronounced "snips"; single nucleotide polymorphisms, the most common type of gene variation) are in LD if they are observed to be inherited together more often than expected, so the likelihood of LD increases when the SNPs sit close to one another on the chromosome.
A SNP affecting the expression of gene A may be in linkage disequilibrium with a SNP that affects the expression of gene B. If the product of gene B is causally related to the disease outcome, it would be wrong to conclude that gene A is responsible for the phenotype, even though associations COULD be found. As a result, to limit the influence of LD, the ideal gene variants for Mendelian Randomization are those not close to other genes that can influence the outcome through other pathways. Even then, it's still an estimation effort.
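To make the LD idea concrete, here's a minimal sketch of the two standard LD statistics (D and r²) computed from allele and haplotype frequencies. The function name and the example frequencies are my own illustration, not from any study cited here; the formulas are the textbook definitions.

```python
def ld_stats(p_a, p_b, p_ab):
    """LD between two biallelic SNPs.

    p_a  : frequency of allele A at the first locus
    p_b  : frequency of allele B at the second locus
    p_ab : frequency of the haplotype carrying both A and B

    Returns (D, r^2). D = 0 and r^2 = 0 means the variants are
    inherited independently; r^2 = 1 means perfect LD, i.e. seeing
    one allele tells you the other is there too.
    """
    d = p_ab - p_a * p_b                          # deviation from independence
    r2 = d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, r2

# Perfect LD: allele A and allele B always travel together
d1, r2_perfect = ld_stats(0.3, 0.3, 0.3)     # r2_perfect == 1.0

# Independence: haplotype frequency is just the product of allele frequencies
d0, r2_none = ld_stats(0.3, 0.3, 0.09)       # r2_none == 0.0
```

This is exactly why a SNP near gene A can "predict" a disease driven by gene B: if r² between them is high, their association signals are nearly interchangeable in the data.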
The FOURTH weakness: the precise estimates used to give statistical strength, or meaningfulness, to a claim that something is likely causal (or not) are heavily biased, and that happens really often.
For example, causal effect estimates from Mendelian randomization studies can be thought of as a population-average effect (i.e., as if the intervention were applied to the entire population) and could differ from the effect of interventions applied to specific subgroups. On the other hand, weak genetic instruments, which explain too little variation in the exposure, can bias causal estimates or fail to establish causal relationships due to a lack of power.
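To show what "weak instrument" means in numbers, here's a minimal sketch of the Wald ratio, the simplest single-SNP MR estimator, with its first-order delta-method standard error. The function name and the summary-statistic values are made up for illustration; the point is how fast the uncertainty blows up as the gene-exposure association shrinks.

```python
def wald_ratio(beta_gx, beta_gy, se_gy):
    """Single-instrument Mendelian randomization estimate.

    beta_gx : SNP -> exposure association (instrument strength)
    beta_gy : SNP -> outcome association
    se_gy   : standard error of beta_gy

    Returns (causal_estimate, approx_standard_error). The first-order
    delta-method SE divides by |beta_gx|, so a weak instrument
    (beta_gx near zero) makes the estimate wildly imprecise.
    """
    estimate = beta_gy / beta_gx
    se = se_gy / abs(beta_gx)
    return estimate, se

# Strong instrument: tight estimate
strong = wald_ratio(beta_gx=0.5, beta_gy=0.1, se_gy=0.02)   # (0.2, 0.04)

# Weak instrument: same outcome data, 50x the uncertainty
weak = wald_ratio(beta_gx=0.01, beta_gy=0.1, se_gy=0.02)    # (10.0, 2.0)
```

Same outcome association in both cases, but the weak instrument turns a precise-looking 0.2 into an essentially uninformative 10.0 ± 2.0, which is the bias/power problem described above.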
Using large sample sizes, or a genetic score combining multiple SNPs with additive or robust associations with the outcome of interest, partially alleviates this concern. You can also try to account for canalization, which refers to the development of counter-regulating mechanisms in response to gene variants. But by the time you take all this shit into consideration, the problem is it still will not be as rigorous as the types of trials I mentioned above at eliminating as many probabilities and counter-factors as possible.
That's the fourth weakness. Even if you reduce these weaknesses as much as possible, or eliminate them (which is probably impossible), it will never be as robust as a proper set of direct clinical intervention experiments, or at least the other trials I listed.
If you want to make a consensus, it absolutely has to be a **CONSENSUS OF REPEATED FINDINGS THAT CAN ALSO BE COUNTER-TESTED FOR PROCESS OF ELIMINATION TO ENSURE IT'S THE TRUTH, WITH NO DATA ALTERING IN ANY STUDY, OR YOU ARE FABRICATING THE STUDY, PERIOD**.