{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Institute for Behavioral Genetics International Statistical Genetics 2023 Workshop \n", "\n", "# Rare Variant Analysis of Sequencing Data with Hail\n", "\n", "Learning objectives:\n", "\n", "1. Understand statistical models of rare variant effects on phenotype.\n", "2. Understand how to use Hail to perform a Burden test.\n", "3. Understand how to use Hail to perform a SKAT test." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Burden Test: Theory\n", "\n", "The phrase \"genome-wide association study\" (GWAS) usually refers to independently testing every variant in a dataset against a phenotype. For a continuous phenotype, we symbolically state that as:\n", "\n", "$$\n", "\\begin{align*}\n", "N &: \\mathbb{N} &\\textrm{The number of samples} \\\\\n", "M &: \\mathbb{N} &\\textrm{The number of variants} \\\\\n", "G &: \\{0, 1, 2\\}^{N \\times M} &\\textrm{The genotypes represented as the number of alternate alleles} \\\\\n", "y &: \\mathbb{R}^{N} &\\textrm{The value of the phenotype for each sample}\\\\\n", "\\beta &: \\mathbb{R}^{M} &\\textrm{The unknown effect of each variant on the phenotype}\\\\\n", "\\\\\n", "\\varepsilon_i &\\sim \\mathcal{N}(0, \\sigma^2) &\\textrm{Normally distributed measurement error of unknown variance, }\\sigma^2\\\\\n", "y_i &= \\beta_j G_{ij} + \\varepsilon_i\n", "\\end{align*}\n", "$$\n", "\n", "This model lacks sufficient statistical power to detect rare variants _because of_ their rarity. There are two ways to address this problem: collect more samples or combine multiple variants into a single association test. In this notebook, we explore two tests that combine multiple variants: the burden test and the squence kernel association test (SKAT).\n", "\n", "The burden test considers the sum of effects of a set of variants on a phenotype. When the set of variants is a gene, this test is called a gene burden set. Analagously to testing every variant in GWAS, we typically test many variant-sets. We symbolically state this model as:\n", "\n", "$$\n", "\\begin{align*}\n", "S_k && \\textrm{The } k\\textrm{-th set of variants} \\\\\n", "\\\\\n", "\\varepsilon_i &\\sim \\mathcal{N}(0, \\sigma^2) \\\\\n", "y_i &= \\beta_k \\left( \\sum_{j \\in S_k} G_{ij} \\right) + \\varepsilon_i\n", "\\end{align*}\n", "$$\n", "\n", "This model is well-powered for rare variants whose effects have the same direction. For example, if all the variants in a gene increase the chance of disease, a burden test is well-powered. If the direction of effect of variants in the set is random and the effects size are all of similar magnitude, the sum of effects will trend towards zero. 
We can simulate and visualize this effect:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%config InlineBackend.figure_format = 'retina'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "plt.style.use('ggplot')\n", "\n", "effects = np.random.normal(0, 1, size=100)\n", "\n", "sum_of_effects = np.sum(effects)\n", "magnitude_of_effects = np.sqrt(effects.T @ effects)\n", "sum_of_abs_of_effects = np.sum(np.abs(effects))\n", "\n", "plt.bar(list(range(100)), effects,\n", " label=(f'sum(y) = {sum_of_effects}\\n'\n", " f'sqrt(y.T @ y) = {magnitude_of_effects}\\n'\n", " f'sum(abs(y)) = {sum_of_abs_of_effects}'))\n", "plt.legend(loc='upper right')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Burden Test: Practice\n", "### Setup\n", "\n", "Import Hail and configure the plotting system for Notebooks." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import hail as hl" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hl.plot.output_notebook()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 1: Quality Control & Sample Annotation\n", "\n", "The last notebook covered these steps in detail. We'll do them quickly here:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt = hl.read_matrix_table('resources/hgdp-tgp-rare-variants.mt')\n", "\n", "# remove non-PASS variants\n", "mt = mt.filter_rows(hl.len(mt.filters) == 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Remove Common Variants\n", "\n", "Next, we will keep variants with an allele frequency of under 1%. Including common variants will only reduce the power of a burden test.\n", "\n", "We could rerun `hl.variant_qc` here, or use an aggregator designed to compute allele frequencies and counts:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt = mt.filter_rows(\n", " hl.agg.call_stats(mt.GT, mt.alleles).AF[1] < 0.01\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We also remove variants without any non-reference calls:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt = mt.filter_rows(\n", " hl.agg.all(mt.GT.is_hom_ref()),\n", " keep=False\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 2: Group by gene\n", "\n", "We imported gene names and intervals from GENCODE and created a Hail table keyed by interval. We'll use this table to annotate our genetic data with gene information. After annotation, we can group our variants and perform a linear regression." 
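, "\n", "As an aside, a gene-interval table like `genes.ht` can be built directly in Hail from a GENCODE export. The sketch below is illustrative only -- the file path and column names are hypothetical and would need to match your own export:\n", "\n", "```\n", "# hypothetical tab-separated GENCODE export with columns: chrom, start, end, gene_name\n", "gtf = hl.import_table('resources/gencode_genes.tsv', impute=True)\n", "gtf = gtf.annotate(\n", "    interval = hl.locus_interval(gtf.chrom, gtf.start, gtf.end,\n", "                                 reference_genome='GRCh38')\n", ")\n", "genes = gtf.select('interval', 'gene_name').key_by('interval')\n", "```"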
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "genes = hl.read_table('resources/genes.ht')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "genes.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall how we annotated sample phenotypes earlier in the common variant tutorial -- this looks very similar:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt = mt.annotate_rows(gene_name = genes[mt.locus].gene_name)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "phenos = hl.read_table('resources/rare-variant-phenotypes.ht')\n", "mt = mt.annotate_cols(\n", " pheno1 = phenos[mt.s].pheno1,\n", " pheno2 = phenos[mt.s].pheno2\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's `show` the resulting annotations on the matrix table to make sure everything worked:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt.gene_name.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 3: Aggregate by gene\n", "\n", "Hail's modularity makes it easy to perform non-kernel-based burden tests.\n", "\n", "We'll compose two general tools:\n", " - [group_rows_by](https://hail.is/docs/0.2/hail.MatrixTable.html#hail.MatrixTable.group_rows_by) / [aggregate](https://hail.is/docs/0.2/hail.GroupedMatrixTable.html#hail.GroupedMatrixTable.aggregate)\n", " - [hl.linear_regression_rows](https://hail.is/docs/0.2/methods/stats.html#hail.methods.linear_regression_rows).\n", " \n", "This means that you can flexibly specify the way genotypes are summarized per gene. Using other tools, you may have a few ways to aggregate, but if you want to do something different you are out of luck!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "burden_mt = mt.group_rows_by(mt.gene_name).aggregate(\n", " n_variants = hl.agg.count_where(mt.GT.n_alt_alleles() > 0)\n", ")\n", "\n", "# filter to genes with at least one rare variant!\n", "burden_mt = burden_mt.filter_rows(hl.agg.sum(burden_mt.n_variants) > 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's explore this new matrix table!\n", "\n", "We always start exploring a new matrix table with `describe`. The describe command does not perform any time-consuming or expensive operations. It just introspects on the fields and their \"types\" (meaning the kind of data, e.g. `float`, `int`, and `str`)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "burden_mt.describe(widget=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also `show` a Matrix Table (or Table). This operation has to actually load and process the data so it might be take some time! We can limit the amount of data processed by specifying `n_cols` and `n_rows`. In the following cell, we look at the top-left, 5x5, corner of the Matrix Table." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "burden_mt.show(n_cols=5, n_rows=5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Even this small dataset is too large for us to inspect the value of `n_variants` for every sample at every variant. Instead, we need to use methods to aggregate or summarize data. 
Hail has some automagical summarization methods such as:\n", "\n", "- [`hl.summarize_variants(mt)`](https://hail.is/docs/0.2/methods/genetics.html#hail.methods.summarize_variants)\n", "- [`mt.field.summarize()`](https://hail.is/docs/0.2/hail.expr.Expression.html#hail.expr.Expression.summarize)\n", "\n", "If your scientific questions are not answered by those methods, you probably need to use an [aggregator](https://hail.is/docs/0.2/aggregators.html). Aggregators collapse many values into one value. For example, `hl.agg.mean(mt.field)` computes the mean of all the values of `mt.field`. We can calculate the mean depth for each variant:\n", "\n", "```\n", "mt = mt.annotate_rows(mean_DP_per_variant = hl.agg.mean(mt.DP))\n", "```\n", "\n", "as well as the mean depth for each sample:\n", "\n", "```\n", "mt = mt.annotate_cols(mean_DP_per_sample = hl.agg.mean(mt.DP))\n", "```\n", "\n", "and the mean depth over all genotypes (here `aggregate_entries` returns a Python value rather than adding an annotation):\n", "\n", "```\n", "mean_DP_overall = mt.aggregate_entries(hl.agg.mean(mt.DP))\n", "```\n", "\n", "In the next exercise, you will need to use either a summarize function or an aggregator.\n", "\n", "### Exercise 1\n", "\n", "Is this a dense (mostly non-zero) or sparse (mostly zero) matrix? Is this expected? How many variants are in our dataset, and how many genes are there?\n", "\n", "There are a variety of ways to interrogate this. A few are listed below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xx = burden_mt\n", "xx.aggregate_entries(hl.agg.fraction(xx.n_variants == 0))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xx = burden_mt\n", "xx = xx.annotate_rows(frac_zero = hl.agg.fraction(xx.n_variants == 0))\n", "xx.aggregate_rows(hl.agg.hist(xx.frac_zero, 0, 1, 20))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xx = burden_mt\n", "xx = xx.annotate_rows(n_zero_variants = hl.agg.count_where(xx.n_variants == 0))\n", "xx.aggregate_rows(hl.agg.counter(xx.n_zero_variants))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to using `aggregate_entries` to compute dataset-wide aggregations, we can combine row-wise and column-wise aggregations with `hail.ggplot` to produce visualizations. Instead of relying on our brains to make sense of summaries like the mean and variance, we can let them consume the entire distribution of the data!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "from hail.ggplot import *\n", "import plotly\n", "import plotly.io as pio\n", "pio.renderers.default = 'iframe'\n", "\n", "xx = burden_mt\n", "# count, per gene, the samples that carry at least one rare variant in that gene\n", "xx = xx.annotate_rows(\n", "    n_nonzero = hl.agg.count_where(xx.n_variants != 0)\n", ")\n", "\n", "ggplot(xx) + geom_col(aes(x=xx.gene_name, y=xx.n_nonzero))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Step 4: Run linear regression per gene\n", "\n", "Hail is designed as a set of reusable modules and functions. In this section, we will re-use several functions from the first notebook but apply them to our `burden_mt`, which is keyed by gene instead of locus and contains combined variants rather than genotype calls."
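, "\n", "Because the per-gene aggregation in Step 3 is ordinary Hail code, the score that enters the regression is easy to change. For example, here is a minimal sketch (not used in the rest of this notebook) that sums alternate alleles per gene instead of counting carrier genotypes; the regression call below would stay exactly the same:\n", "\n", "```\n", "# per-gene dosage burden: total number of rare alternate alleles carried by each sample\n", "dosage_mt = mt.group_rows_by(mt.gene_name).aggregate(\n", "    total_alt_alleles = hl.agg.sum(mt.GT.n_alt_alleles())\n", ")\n", "```"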
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, pca_scores, _ = hl.hwe_normalized_pca(mt.GT)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "burden_mt = burden_mt.annotate_cols(pca = pca_scores[burden_mt.s])\n", "\n", "burden_results = hl.linear_regression_rows(\n", " y=burden_mt.pheno1, \n", " x=burden_mt.n_variants,\n", " covariates=[1.0, \n", " burden_mt.pca.scores[0], \n", " burden_mt.pca.scores[1], \n", " burden_mt.pca.scores[2]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use Hail's new plotting system, `hl.ggplot`, to show a bar graph of the burden results. Notice that the genes are sorted alphabetically, not by genomic location!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from hail.ggplot import *\n", "import plotly\n", "import plotly.io as pio\n", "pio.renderers.default = 'iframe'\n", "\n", "ht = burden_results\n", "ggplot(ht) + geom_col(aes(x=ht.gene_name, y=-hl.log(ht.p_value, base=10)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also look at the first ten results in ascending order of p-value." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "burden_results.order_by(burden_results.p_value).show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, a Q-Q plot is meaningful on genes. Let's plot one:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p = hl.plot.qq(burden_results.p_value)\n", "hl.plot.show(p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With fewer tests performed (one per gene, instead of one per variant), the X and Y range of the Q-Q plot is much smaller than in the common variant association practical.\n", "\n", "Let's compare the burden test to a standard GWAS. Recall that a standard GWAS performs a large number of tests and therefore must overcome a substantial multiple testing burden. We also look at the genomic locations for some of our top burden genes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "genes.filter(hl.set(['MREG', 'TFB2M']).contains(genes.gene_name)).show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "mt = mt.annotate_cols(pca = pca_scores[mt.s])\n", "\n", "\n", "linreg_results = hl.linear_regression_rows(\n", " y=mt.pheno1, \n", " x=mt.GT.n_alt_alleles(),\n", " covariates=[1.0, \n", " mt.pca.scores[0], \n", " mt.pca.scores[1], \n", " mt.pca.scores[2]])\n", "ht = linreg_results\n", "hl.plot.show(hl.plot.manhattan(ht.p_value))\n", "linreg_results.order_by(linreg_results.p_value).show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Weighted Burden Test: Theory\n", "\n", "If we can confidently predict the directions of effects (while the effect sizes themselves are still unknown), we can encode that knowledge as a \"weight\". A burden test with weights is known as a weighted burden test. 
We symbolically represent it as:\n", "\n", "$$\n", "\\begin{align*}\n", "S_k && \\textrm{The } k\\textrm{-th set of variants} \\\\\n", "w &: \\mathbb{R}^M &\\textrm{The weights for each variant}\\\\\n", "\\\\\n", "\\varepsilon_i &\\sim \\mathcal{N}(0, \\sigma^2) \\\\\n", "y_i &= \\beta_k \\left( \\sum_{j \\in S_k} w_j G_{ij} \\right) + \\varepsilon_i\n", "\\end{align*}\n", "$$\n", "\n", "## The Weighted Burden Test: Practice\n", "\n", "An effective choice of weights can increase the power of a burden test. For example, we may give variants that are predicted to be damaging a larger weight than synonymous variants. The HGDP+1kG subset dataset we have here, `mt`, contains a few different annotations. Your tasks in this section are:\n", "\n", "### Exercise 2\n", "\n", "1. Explore these annotations using `show` and aggregations.\n", "2. Use a numeric annotation as a weight or compute a new numeric annotation from a non-numeric annotation (you might need [`hl.case`](https://hail.is/docs/0.2/functions/core.html#hail.expr.functions.case)).\n", "3. Perform a new burden test using `mt.group_rows_by(...).aggregate(...)`, aggregators, `hl.linear_regression_rows`, and your new weight annotation. Do not use `burden_mt` again!\n", "\n", "Again, there are many ways to approach this. We provide a few options below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt.describe(widget=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt.splice_ai.summarize()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt.cadd.summarize()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt.aggregate_rows(hl.agg.counter(mt.vep.most_severe_consequence))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt = mt.annotate_rows(\n", "    weight1 = (hl.case()\n", "               .when(mt.vep.most_severe_consequence == \"synonymous_variant\", 2)\n", "               .when(mt.vep.most_severe_consequence == \"intron_variant\", 3)\n", "               .when(mt.vep.most_severe_consequence == \"missense_variant\", 5)\n", "               .default(1)),\n", "    weight2 = mt.cadd.phred\n", ")\n", "\n", "mt = mt.annotate_cols(pca = pca_scores[mt.s])\n", "\n", "# weighted burden score per gene: the sum of weight times alternate-allele count\n", "xx = mt.group_rows_by(mt.gene_name).aggregate(\n", "    burden_score = hl.agg.sum(mt.weight2 * mt.GT.n_alt_alleles())\n", ")\n", "xx = xx.filter_rows(hl.agg.sum(xx.burden_score) > 0)\n", "\n", "weighted_burden = hl.linear_regression_rows(\n", "    y=xx.pheno1,\n", "    x=xx.burden_score,\n", "    covariates=[1.0,\n", "                xx.pca.scores[0],\n", "                xx.pca.scores[1],\n", "                xx.pca.scores[2]])\n", "ht = weighted_burden\n", "ggplot(ht) + geom_col(aes(x=ht.gene_name, y=-hl.log(ht.p_value, base=10)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ht.order_by(ht.p_value).show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Sequence Kernel Association Test (SKAT): Theory\n", "\n", "If the directions of effect are unpredictable, then neither a burden test nor a weighted burden test is well-powered. Instead, we can test for _excess variance_ of the effect sizes of a set of variants. The sequence kernel association test (SKAT) is one such test. It does not report an effect size because it does not test the strength of the association. 
Instead, SKAT reports a $p$-value for rejecting its null hypothesis: that the effect of the genotypes on the phenotype is zero. The SKAT test involves two models, a null model and a full model. Both models include a set of covariates per sample. The full model is:\n", "\n", "$$\n", "\\begin{align*}\n", "K && \\textrm{The number of covariates} \\\\\n", "X &: \\mathbb{R}^{N \\times K} &\\textrm{The covariates for each sample} \\\\\n", "\\\\\n", "\\varepsilon_i &\\sim \\mathcal{N}(0, \\sigma^2) \\\\\n", "\\vec{y} &= X \\vec{\\alpha} + G \\vec{\\beta} + \\vec{\\varepsilon}\n", "\\end{align*}\n", "$$\n", "\n", "The null model considers only the covariates:\n", "\n", "$$\n", "\\begin{align*}\n", "\\vec{y} &= X \\vec{\\alpha}_{\\textrm{null}} + \\vec{\\varepsilon}\n", "\\end{align*}\n", "$$\n", "\n", "The null hypothesis supposes that $\\vec{\\beta} = \\vec{0}$. The test of the null hypothesis essentially investigates the likelihood that the residuals (i.e. $\\vec{y} - X \\widehat{\\vec{\\alpha}_{\\textrm{null}}}$) are truly independently, identically, and normally distributed noise. The details of how to test that are somewhat complex and involve a distribution without a closed form. We refer the interested reader to the [SKAT paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3135811/).\n", "\n", "## The Sequence Kernel Association Test (SKAT): Practice\n", "\n", "The sequence kernel association test is one of Hail's built-in methods. SKAT also permits a non-negative weight parameter for each variant. The SKAT paper suggests using weights taken from the density of a Beta(1, 25) distribution evaluated at the allele frequency of the variant, which is what `hl.dbeta` computes below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "skat_mt = mt\n", "skat_mt = skat_mt.annotate_cols(\n", "    pca = pca_scores[skat_mt.s]\n", ")\n", "skat_mt = hl.variant_qc(skat_mt)\n", "skat_mt = skat_mt.annotate_rows(\n", "    weight = hl.dbeta(skat_mt.variant_qc.AF[1], 1, 25)\n", ")\n", "\n", "skat_results = hl.skat(\n", "    skat_mt.gene_name,\n", "    skat_mt.weight,\n", "    y = skat_mt.pheno2,\n", "    x = skat_mt.GT.n_alt_alleles(),\n", "    covariates = [1.0,\n", "                  skat_mt.pca.scores[0],\n", "                  skat_mt.pca.scores[1],\n", "                  skat_mt.pca.scores[2]]\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "skat_results.describe()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ht = skat_results\n", "# where SKAT reports a numerical fault, fall back to a p-value of 1\n", "ht = ht.annotate(\n", "    p_value = hl.if_else(ht.fault == 0, ht.p_value, 1)\n", ")\n", "ggplot(ht) + geom_col(aes(x=ht.id, y=-hl.log(ht.p_value, base=10)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "skat_results.order_by(skat_results.p_value).show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Again, let's compare to a standard GWAS on this phenotype."
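, "\n", "As a rough sketch of the difference in multiple-testing burden (assuming the `skat_results` table and the `mt` matrix table defined above), we can compare Bonferroni-corrected significance thresholds for the per-gene and per-variant tests:\n", "\n", "```\n", "# Bonferroni thresholds: one test per gene versus one test per variant\n", "n_genes = skat_results.count()\n", "n_variants = mt.count_rows()\n", "print(f'per-gene threshold:    {0.05 / n_genes:.2e}')\n", "print(f'per-variant threshold: {0.05 / n_variants:.2e}')\n", "```"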
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "genes.filter(hl.set(['KLHL5', 'SFT2D2']).contains(genes.gene_name)).show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mt = mt.annotate_cols(pca = pca_scores[mt.s])\n", "\n", "linreg_results = hl.linear_regression_rows(\n", " y=mt.pheno2, \n", " x=mt.GT.n_alt_alleles(),\n", " covariates=[1.0, \n", " mt.pca.scores[0], \n", " mt.pca.scores[1], \n", " mt.pca.scores[2]])\n", "ht = linreg_results\n", "hl.plot.show(hl.plot.manhattan(ht.p_value))\n", "linreg_results.order_by(linreg_results.p_value).show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Logistic Phenotypes\n", "\n", "For binary phenotypes, for example from a case-control study, [`hl.logistic_regression_rows`](https://hail.is/docs/0.2/methods/stats.html#hail.methods.logistic_regression_rows) and [`hl.skat(..., logistic=True)`](https://hail.is/docs/0.2/methods/genetics.html#hail.methods.skat) can be used instead of their linear analogues. No other changes to the code are necessary." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" } }, "nbformat": 4, "nbformat_minor": 4 }