{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Practical 3: Annotation, query and plotting\n", "\n", "## The cells of this practical can be entered (by cut and paste) into the IPython console.\n", "\n", "## Before entering the first cell, make sure you have changed to the directory hail-practical. Skip the first cell haven't closed IPython console since running the last practical." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import seaborn\n", "from math import log, isnan\n", "from pprint import pprint\n", "import matplotlib.patches as mpatches\n", "\n", "from hail import *\n", "\n", "%matplotlib inline\n", "\n", "def qqplot(pvals):\n", " spvals = sorted([x for x in pvals if x and not(isnan(x))])\n", " exp = [-log(float(i) / len(spvals), 10) for i in np.arange(1, len(spvals) + 1, 1)]\n", " obs = [-log(p, 10) for p in spvals]\n", " plt.scatter(exp, obs)\n", " plt.plot(np.arange(0, max(max(exp), max(obs))), c=\"red\")\n", " plt.xlabel(\"Expected p-value (-log10 scale)\")\n", " plt.ylabel(\"Observed p-value (-log10 scale)\")\n", " plt.xlim(xmin=0)\n", " plt.ylim(ymin=0)\n", "\n", "hc = HailContext()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Start by importing and writing a larger dataset, then reading the result" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# dataset already imported to VDS in hail-practical\n", "# hc.import_vcf('1kg.vcf.bgz').filter_multi().variant_qc().sample_qc().write('1kg.vds', overwrite=True)\n", "\n", "vds = hc.read('1kg.vds')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Let's print the schemas: we've seen this before, but should always know what's in our data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Let's add QC statistics. Note that although we're going to be running a method called `variant_qc` this doesn't actually filter variants from our dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "vds = vds.variant_qc()\n", "vds = vds.sample_qc()\n", "vds.persist() # We'll use these results several times, so checkpoint our computations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### These methods added quite a few annotations!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "scrolled": false }, "outputs": [], "source": [ "print(vds.variant_schema)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### There's a lot here, and it can certainly be confusing. Our documentation explains what these values are, however: see [sample_qc](https://hail.is/hail/hail.VariantDataset.html#hail.VariantDataset.sample_qc) and [variant_qc](https://hail.is/hail/hail.VariantDataset.html#hail.VariantDataset.variant_qc) for more information.\n", "\n", "### Let's use `matplotlib` to look at some of these values. Both `sa.qc.callRate` and `sa.qc.gqMean` may be useful metrics for filtering." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "[call_rate, gq_mean] = vds.query_samples(['samples.map(s => sa.qc.callRate).collect()',\n", " 'samples.map(s => sa.qc.dpMean).collect()'])\n", "plt.scatter(gq_mean, call_rate, alpha=0.25)\n", "plt.xlabel('Mean DP')\n", "plt.ylabel('Call Rate')\n", "plt.title('Sample call rate vs mean DP')\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "[call_rate, gq_mean] = vds.query_variants(['variants.map(v => va.qc.callRate).collect()',\n", " 'variants.map(v => va.qc.dpMean).collect()'])\n", "plt.scatter(gq_mean, call_rate, alpha=0.2)\n", "plt.xlabel('Mean DP')\n", "plt.ylabel('Call Rate')\n", "plt.title('Variant call rate vs mean DP')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### It's a good idea to get rid of samples and variant with a low call rate. Let's see how a cutoff of 95% affects the included variants and samples." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "original_counts = vds.count()\n", "filtered_vds = (vds.filter_variants_expr('va.qc.callRate > 0.95')\n", " .filter_samples_expr('sa.qc.callRate > 0.95'))\n", "new_counts = filtered_vds.count()\n", "print(original_counts)\n", "print(new_counts)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Annotations can be added from files as well." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The next cell uses something called \"ipython magic\". These \"magics\" begin with one or two percent signs, and can do various helpful things. The `%%sh` magic runs the contents of the cell as Unix commands." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%sh\n", "head sample_annotations.txt | column -t" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%%sh\n", "head variant_annotations.txt | column -t" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "filtered_vds = filtered_vds.annotate_samples_table('sample_annotations.txt', \n", " sample_expr='Sample',\n", " root='sa.metadata',\n", " config=TextTableConfig(impute=True))\n", "filtered_vds = filtered_vds.annotate_variants_table('variant_annotations.txt', \n", " variant_expr='Variant',\n", " code='va.consequence = table.Consequence',\n", " config=TextTableConfig(impute=True))\n", "print('Variant schema:')\n", "print(filtered_vds.variant_schema)\n", "print('Sample schema:')\n", "print(filtered_vds.sample_schema)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Query methods are good ways to explore datasets" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "filtered_vds.query_variants('variants.map(v => va.consequence).counter()')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "filtered_vds.query_samples('samples.map(s => sa.metadata.SuperPopulation).counter()')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Mutation rate refresher\n", "Not all base pairs are created equal. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "## Mutation rate refresher\n", "Not all base pairs are created equal. A \"CG\" motif ([see Wikipedia](https://en.wikipedia.org/wiki/CpG_site)) has a high probability of mutating to \"TG\".\n", "\n", "On the opposite strand, the complementary \"GC\" motif is likely to mutate to \"AC\", which shows up as a G/A change.\n", "\n", "We expect to see many C/T and G/A SNPs in our dataset. Do we?" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from collections import Counter\n", "\n", "# Count how many variants carry each alternate allele (ref/alt pair).\n", "alleles = vds.query_variants('variants.map(v => v.altAllele).counter()')\n", "pprint(Counter(alleles).most_common())" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Counts from the full 1000 Genomes dataset:\n", "```\n", "G/A\t| 15689986\n", "C/T\t| 15685250\n", "A/G\t| 10779869\n", "T/C\t| 10736805\n", "G/T\t| 3735538\n", "C/A\t| 3727525\n", "C/G\t| 3436122\n", "G/C\t| 3425025\n", "T/G\t| 2861302\n", "A/C\t| 2860428\n", "A/T\t| 2715606\n", "T/A\t| 2713458\n", "```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Plot the alternate allele frequency distribution" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# collect() pulls every allele frequency back to the local machine.\n", "af = vds.query_variants('variants.map(v => va.qc.AF).collect()')\n", "plt.hist(af, bins=250)\n", "plt.xlabel('Alternate allele frequency')\n", "plt.ylabel('Count')\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## The code below will work at any scale, because the histogram is computed by Hail rather than by collecting every value locally. Given the choice, always use a built-in Hail aggregator function!" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# This will always work!\n", "af_hist = vds.query_variants('variants.map(v => va.qc.AF).hist(0, 1, 250)')\n", "# 250 bins across [0, 1], so each bar is 1 / 250 = 0.004 wide.\n", "plt.bar(af_hist.binEdges[:-1], af_hist.binFrequencies, width=0.004)\n", "plt.xlim(0, 1)\n", "plt.xlabel('Alternate allele frequency')\n", "plt.ylabel('Count')\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Warning! Don't exit the IPython console. You will need `filtered_vds` for the next practical." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 1 }