{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Practical 4: Aggregables: working with massive data\n", "\n", "## The cells of this practical can be entered (by cut and paste) into the IPython console.\n", "\n", "## Before entering the first cell, make sure you have changed to the directory hail-practical. Skip the first cell haven't closed IPython console since running the last practical." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import seaborn\n", "from math import log, isnan\n", "import hail\n", "import matplotlib.patches as mpatches\n", "\n", "%matplotlib inline\n", "\n", "def qqplot(pvals):\n", " spvals = sorted([x for x in pvals if x and not(isnan(x))])\n", " exp = [-log(float(i) / len(spvals), 10) for i in np.arange(1, len(spvals) + 1, 1)]\n", " obs = [-log(p, 10) for p in spvals]\n", " plt.scatter(exp, obs)\n", " plt.plot(np.arange(0, max(max(exp), max(obs))), c=\"red\")\n", " plt.xlabel(\"Expected p-value (-log10 scale)\")\n", " plt.ylabel(\"Observed p-value (-log10 scale)\")\n", " plt.xlim(xmin=0)\n", " plt.ylim(ymin=0)\n", "\n", "hc = hail.HailContext()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Aggregables\n", "\n", "Terrible name, we know.\n", "\n", "In the practical on the Hail expression language, you learned about arrays, filter, map, and aggregating opeprations like sum, max, etc.\n", "\n", "Hail is designed to work with very large datasets, datasets with 100s of millions (billions?) of variants and 10s or 100s of trillions of genotypes. More data than one could hope to store on a single computer. Therefore, Hail stores and computes on data in a distributed fasion. But we still want a simple programming model that allows us to query and transform such distributed data. Thats where Aggregables come in.\n", "\n", "And `Aggregable[T]` is distributed collection of elements of type `T`. The interface is modeled on `Array[T]`, but aggregables can be arbitrarily large and they are unordered, so they don't support operations like indexing.\n", "\n", "Aggregables support map and filter. Like sum, max, etc. on arrays, aggregables support operations which we call \"aggregators\" that operate on the entire aggregable collection and produce a summary or derived statistic. See the [documentation](https://hail.is/hail/types.html#aggregable) for a complete list of aggregators.\n", "\n", "Aggregables are available in expressions on various methods on [VariantDataset](https://hail.is/hail/hail.VariantDataset.html). For example, [query_samples](https://hail.is/hail/hail.VariantDataset.html#hail.VariantDataset.query_samples) has an aggregable `samples: Aggregable[String]`, [query_variants](https://hail.is/hail/hail.VariantDataset.html#hail.VariantDataset.query_variants) has an aggregable `variants: Aggregable[Variant]` and [query_genotypes](https://hail.is/hail/hail.VariantDataset.html#hail.VariantDataset.query_genotypes) has an aggregable `gs: Aggregable[Genotype]` which is the collection of *all* the genotypes in the dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's load a small VCF to play with." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "vds = hc.import_vcf('sample.vcf')\n", "print(vds.variant_schema)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are several aggregators for obtaining elements of an aggregable. `.take(n)` returns `n` elements of an aggregable sampled randomly. `takeBy(x => , n)` returns the `n` elements with highest value for ``. `` must be a type that can be compared: numeric or `String`. Let's grab a few variants and genotypes." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# five random variants\n", "vds.query_variants('variants.take(5)')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# return five genotypes with the largest gq\n", "vds.query_genotypes('gs.takeBy(g => g.gq, 5)')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`collect()` returns an array of *all* the elements of an array.\n", "\n", "**Warning**: This is very dangerous for large data! If you collect 100 trillion genotypes, it will fail, probably with an out of memory error (or something more obscure).\n", "\n", "In these practicals, we routinely use collect to return values for plotting. Here's an example using `collect` to return the allele number (number of called alternate alleles) per variant.\n", "\n", "Note: We're doing *two* aggregations here. In `annotate_variants_expr`, `gs: Aggregable[Genotype]` is the collection of all genotypes for the given variant (ranging over sample), and `variants: Aggregable[Variant]` in `query_variants` is the collection of all variants in the dataset.\n", "\n", "`g.nNonRefAlleles` returns number of called non-reference alleles (0/0 = 0, 0/1 = 1, 1/1 = 2). Note, we're using `map` to transform aggregables." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ANs = (vds.annotate_variants_expr('va.AN = gs.map(g => g.nNonRefAlleles).sum()')\n", " .query_variants('variants.map(v => va.AN).collect()'))\n", "ANs[:5] # print the first 5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You might have noticed there is some tricky magic here! In `variants.map(v => va.AN)`, we're mapping over variants but we're accessing the variant annotations `va`. How is that possible?\n", "\n", "Aggregables differ from arrays in one other important respect: in addition to the elements of the aggregable, each element has an associated **context**. `map` and `filter` transform the elements of the collection, but do not change the context. The documentation for each method which provides an aggregable will document the context for the aggregable. For example, see the documentation for [query_genotypes_typed](https://hail.is/hail/hail.VariantDataset.html#hail.VariantDataset.query_genotypes_typed). In `query_genotypes`, `gs: Aggregable[Genotype]` is the collection of all genotypes in the dataset. Each genotype is associated with a variant `v` and a sample `s`, so those are in the context of `gs`. In addition, associated to each variant is its variant annotations `va` and to each sample `s` its sample annotations `sa`, so those are in the context, too. Finally, the genotype `g` itself is in the context, so if you map away the genotype, it is still accessible as `g`! 
, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's an example that samples 8 `dp` fields from het genotypes.\n", "\n", "Note that the genotype schema is fixed: `g: Genotype`. To see the accessible fields and methods on `Genotype`, see the [documentation](https://hail.is/hail/types.html#genotype)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# you can still access g since it is in the context, even though we've mapped the gs aggregable to the genotype depth\n", "vds.query_genotypes('gs.map(g => g.dp).filter(dp => g.isHet()).take(8)')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Hard Exercise\n", "\n", "Compute, for each variant, the top 5 non-reference genotypes by depth, and then collect a few variants to show the output. Hint: like the `ANs` example above, you'll have to annotate variants with the genotypes per variant using `takeBy`, then query variants to show a few examples. Think in terms of the `VariantDataset` to understand the aggregable context and what variables are available.\n", "\n", "Fill in the empty cell below with code. Good luck!" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## A few more useful aggregators\n", "\n", "See the [documentation](https://hail.is/hail/types.html#aggregable) for the full list.\n", "\n", "You can use `count` to count the number of elements of an aggregable." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# get the number of variants\n", "vds.query_variants('variants.count()')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "You can compute a collection of simple statistics with `stats`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# depth statistics for all genotypes\n", "vds.query_genotypes('gs.map(g => g.dp).stats()')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "You can use `hist(min, max, bins)` ([documentation](https://hail.is/hail/types.html#aggregable-double)) to compute a histogram of a numeric aggregable. Let's use `hist` to compute the histogram of genotype depth (DP) over all genotypes." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "dp_hist = vds.query_genotypes('gs.map(g => g.dp).hist(0, 450, 45)')\n", "dp_hist" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# plot the histogram\n", "plt.bar(dp_hist.binEdges[:-1], dp_hist.binFrequencies, width=10)\n", "plt.show()" ] }
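, { "cell_type": "markdown", "metadata": {}, "source": [ "Genotype quality works the same way. Here's a minimal added sketch that computes and plots a histogram of GQ over all genotypes; it assumes GQ values lie in the usual 0-99 range." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# histogram of genotype quality (GQ) over all genotypes\n", "# GQ is typically capped at 99, so 20 bins of width 5 over [0, 100) cover the range\n", "gq_hist = vds.query_genotypes('gs.map(g => g.gq).hist(0, 100, 20)')\n", "plt.bar(gq_hist.binEdges[:-1], gq_hist.binFrequencies, width=5)\n", "plt.xlabel('GQ')\n", "plt.ylabel('Number of genotypes')\n", "plt.show()" ] }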
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "vds.query_genotypes('gs.map(g => g.nNonRefAlleles).counter()')" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# Exercises\n", "\n", "Recall the structure of `filtered_vds` by printing out the variant and sample schemas in the next cell. Look at the first practical if you've forgotten how to print `VariantDataset` schemas. Fill in the `` with code." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "## Number 1: What fraction of British samples (`sa.metadata.Population == \"GBR\"`) have purple hair?\n", "\n", "Fill in the `` with code." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "filtered_vds.query_samples('samples.filter(s => ).fraction(s => )')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Number 2: What is the mean caffeine consumption among South Asian samples (`sa.metadata.SuperPopulation == \"SAS\"`)?\n", "\n", "Fill in the `` with code.\n", "\n", "Hint: `.stats()` will compute a variety of useful metrics from a numeric aggregable." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "filtered_vds.query_samples('samples.filter(s => ).map(s => ).')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# You don't need to run this cell\n", "\n", "## This can be used to recreate `filtered_vds` if you exited the IPython console" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "vds = hc.read('1kg.vds')\n", "\n", "vds = vds.variant_qc()\n", "vds = vds.sample_qc()\n", "vds.persist() # We'll use these results several times, so checkpoint our computations\n", "\n", "filtered_vds = (vds.filter_variants_expr('va.qc.callRate > 0.95')\n", " .filter_samples_expr('sa.qc.callRate > 0.95'))\n", "\n", "filtered_vds = filtered_vds.annotate_samples_table('sample_annotations.txt', \n", " sample_expr='Sample',\n", " root='sa.metadata',\n", " config=TextTableConfig(impute=True))\n", "filtered_vds = filtered_vds.annotate_variants_table('variant_annotations.txt', \n", " variant_expr='Variant',\n", " code='va.consequence = table.Consequence',\n", " config=TextTableConfig(impute=True))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [conda root]", "language": "python", "name": "conda-root-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 1 }