Wednesday, September 22, 2021

A New Tool

We’ve added a new tool to WhatIsMyGene.com called “Cell Types.” The idea is fairly simple. You enter a gene name, submit, and the output will be a list of keywords associated with your gene. The keywords primarily relate to cell type. Under the hood, a binomial probability calculation compares the frequency of each keyword across the complete database with its frequency in the lists in which your gene appears. A strongly positive “binomial” output indicates a positive correlation, and a strongly negative number indicates a negative correlation. If you choose filters, both the gene-specific data and the larger database will be filtered.
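In rough terms, the calculation resembles the sketch below (Python here purely for illustration; the exact-tail formulas and the signed log10 convention are our guesses at the scoring, not the site's actual code):

```python
import math

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def keyword_score(k_hits, n_gene, k_db, n_db):
    """Signed log10 binomial score for one keyword (illustrative only).

    k_hits: lists containing the gene that also carry the keyword
    n_gene: total lists containing the gene
    k_db, n_db: the same two counts over the whole database
    """
    p0 = k_db / n_db                          # background keyword frequency
    if k_hits / n_gene >= p0:                 # enriched: upper-tail p-value
        p, sign = binom_sf(k_hits, n_gene, p0), 1
    else:                                     # depleted: lower-tail p-value
        p, sign = 1 - binom_sf(k_hits + 1, n_gene, p0), -1
    return sign * -math.log10(max(p, 1e-300))
```

With made-up counts, a keyword seen in 30 of a gene's 50 lists against a 10% background rate scores strongly positive, while a keyword the gene never co-occurs with scores negative.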

It was difficult to get this tool up and running. I won’t bore you with the details of programming. But let me know if it crashes on you.

It’s possible to get zero or minimal output by improper filter selection. For example, our “cell type” data is largely composed of genes that are not “upregulated” or “downregulated” on some perturbation…that’s not the nature of the typical clustering result based on single-cell sequencing. So if you select “Cell Type” (in the “experiment” box, NOT the new tool we're talking about), and “upregulated” (as opposed to “Any”), you may not receive any output. It’s also possible, of course, to enter rare genes and get zero output, or very un-insightful output.

In keeping with our previous discussion of the “perturbome”, please note that the output you’ll receive is probably not relevant to abundance. Most (not all) of the lists in our database are not abundance lists. Rather, they are tagged as “upregulated” or “downregulated” under particular conditions. There’s little or no correlation between a list of genes that are abundant in liver and a list of genes that are commonly perturbed in liver.

Plugging in some well-known cell-type markers, the tool works quite nicely! Below, we take a common marker for lymphocytes, CD8, run it through the tool, and use a few lines of R to generate a graph (the code appears at the end of this post). Bear in mind that the standard 0.05 cutoff for significance would be found at binomial values of +/- 1.3. We output 100 keywords, so an adjusted cutoff would be +/- 3.3. In the graph below, we tossed out the 25th-75th tissues (the not-so-interesting ones).
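Those cutoffs follow from reading the binomial output as a signed log10 p-value (our reading of the scale), with a Bonferroni correction for the 100 keywords:

```python
import math

raw_cutoff = -math.log10(0.05)        # single-test significance threshold
adj_cutoff = -math.log10(0.05 / 100)  # Bonferroni-adjusted for 100 keywords

# raw_cutoff is about 1.3; adj_cutoff is about 3.3
```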



Not surprisingly, CD8 is primarily perturbed in tissues with keywords like “blood” or “lymphocytes” or “spleen.” Perhaps more interesting is the fact that one rarely sees this perturbation in tissues labeled “adherent” or “epithelial.” Stem cells just miss the adjusted cutoff for absence of CD8 perturbations.

We plugged in ACE2, a protein everyone knows to be expressed in lungs. However, judging from the cell type tool’s output, it’s not commonly perturbed in lungs, which may offer one explanation why lung tissue is a handy-dandy, dependable target for Covid-19.1 More commonly, ACE2 is altered in the colon and intestine (log(P) < -10 and -7, respectively). It’s particularly difficult to tweak in the case of blood and the brain (both with log(P) < -4). The rarity of tweaked ACE2 in the blood and brain may be because it’s not there to begin with. However, we know that ACE2 is found in the lungs…it’s simply difficult to alter its expression. To probe further, one could use filters to see if drugs or knockdowns (or whatever) alter these probabilities.

Actually, a quick peek at ACE2 expression (genecards.org) shows that the transcripts are indeed commonly found in blood and brain. Quite interestingly, however, ACE2 protein is rare in blood and brain, while ACE2 protein is common in the kidney, as well as heart and ovary (which also ranked high as tissues in which ACE2 is commonly perturbed). The pattern is broken with the colon, however, where the protein is rare. Nevertheless, we wonder if there’s a relationship between perturbability and protein levels that differs from the perturbability/transcript-levels relationship.

The above ACE2 results have implications for anyone who wishes to decrease lung ACE2 expression via some treatment. Another practical implication would be in the choice of cell lines for experiments. If you want to perform a knockdown of some transcript, you’ll obviously want to choose a cell type in which the transcript is expressed. However, it might also be prudent to choose a cell line in which the transcript can be perturbed!

We had a lot of fun entering our favorite genes into the tool. Guess the tissue in which APP (the Alzheimer’s amyloid gene) is most difficult to perturb! Compare the perturbability of PD-1 and PD-L1 over tissues. Compare the HLA-I and HLA-II perturbomes.

One of my favorite genes is DDX6. I’ve oft-noted how the genes it regulates overlap with the genes another helicase, DHX9, regulates. It seemed a bit redundant. But the Cell-Type tool makes it fairly obvious that this regulation happens in very different cell types. DHX9 loves to do its job in epithelial cells and DDX6 hates it!

One idiosyncrasy is the following: cell lines are either male or female. Huh7, for example, is male. Whenever possible, we’ve labeled cell lines with a “male” or “female” keyword. You may thus find that your gene is strongly enriched with the “male” designation. You may wish to ignore this, as it may reflect the fact that the cell lines that represent certain tissues are overwhelmingly male or female, not a broad tendency for a gene to be perturbed, for example, only in males.

A few other keywords bear explanation. “3d” refers to organoids. “Cancer_tissue” refers to in-vivo cancer tissue, not cell culture (after all, the majority of cell culture lines are generated from cancers). “Resistance” relates to studies where resistance to a treatment (e.g. cisplatin) was examined. Such studies can be in-vitro (performing cell culture until resistant strains emerge) or in-vivo (e.g. from studies of patients who respond, vs don’t respond, to particular therapies).

If you don’t want to examine cell line data at all, one trick is to exclude the keyword “ line” (include the space) in all studies. We’re currently retroactively labeling all cell line studies (there are a lot, of course) with this keyword…the trick won’t work optimally until we’re finished with this task. This trick applies to many of our tools, actually. Another way to de-emphasize cell line data is to examine only mouse data, not human data. This is because, with the exception of blood, muscle, and cancer, it’s difficult to access human in-vivo tissue; researchers use mice for those.

One might imagine a sort of “inverse” cell-type tool. Here, you’d select from a list of keywords and the output would be the genes that are most enriched (or depleted) for the keyword. I’m guessing this task would be computationally expensive…you’d need to “stack” all the genes in the database into a frequency table, then stack all the keyword-relevant genes into another frequency table, merge the tables, and then perform something like a hundred thousand binomial calculations. All this stacking would have to be performed on the fly (as opposed to using a one-time table that resides on the hard drive), because the user might apply filters to the database. However, we may embark on this little exercise in the future on our local machine, and report on the outcome. For now, the big task is to increase/refine/improve the keywords in our database.
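The stacking just described might look something like the following sketch (Python; the list-of-lists representation and the pluggable score function are hypothetical, purely to show the shape of the computation):

```python
from collections import Counter

def inverse_tool(all_lists, keyword_lists, score):
    """Sketch of an 'inverse' cell-type tool: rank genes by keyword enrichment.

    all_lists:     every gene list in the (possibly filtered) database
    keyword_lists: the subset of those lists carrying the chosen keyword
    score(k, n, K, N): any binomial-style scoring function, where the gene
        appears in k of the n keyword lists and K of all N lists
    """
    db_freq = Counter(g for lst in all_lists for g in lst)      # stack the database
    kw_freq = Counter(g for lst in keyword_lists for g in lst)  # stack the subset
    n, N = len(keyword_lists), len(all_lists)
    scored = {g: score(kw_freq.get(g, 0), n, K, N) for g, K in db_freq.items()}
    return sorted(scored.items(), key=lambda kv: -kv[1])        # most enriched first
```

With a toy scorer such as `lambda k, n, K, N: k/n - K/N`, a gene present in every keyword-tagged list but only half the database rises to the top; in practice one would plug in a proper binomial calculation, which is exactly the per-gene expense described above.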

 

***

Initially, we considered outputting results in graphical format, as opposed to a table. In the end, we decided to stick with tables. You can generate graphics based on the table output in any way you like, rather than being stuck with a limited set of color schemes, labels, graphics formats, etc. If you’re familiar with R, the code below might be useful.

library(ggplot2)

tissue_data <- read.csv("D:/your_table.csv")
tissue_data$stacked_tissues <- factor(tissue_data$stacked_tissues, levels = tissue_data$stacked_tissues)
tissue_data$fill <- ifelse(abs(tissue_data$binomials) > 3.3, "red", ifelse(abs(tissue_data$binomials) > 1.3, "purple", "gray"))
g <- ggplot(tissue_data, aes(x = binomials, y = stacked_tissues, fill = fill))
g + geom_col() + ylab(NULL) + scale_fill_identity()

#The "factor" line prevents ggplot from reordering the tissues alphabetically.

 

1) Gotta be careful with this kind of logic, of course. If the virus has the capacity to alter the expression of a target (such as ACE2), the perturbability of the target might work to the benefit of the virus, not to the detriment.

whatismygene.com 


Thursday, September 16, 2021

Why Standard Gene Enrichment Tools Can Fail to Produce Insight

Why isn’t the gene enrichment tool you’re using spurring further insight and hypothesis? Or, worse yet, is it possible the gene enrichment tool you’re using is spurring unjustified insight and hypothesis? Without mentioning specific tools, here are some potential causes:

*With some standard tools, relatively few studies may be combined to create a “canonical response” list of genes, with a majority of relevant studies being ignored or excluded. Why? I note, for example, that relatively few studies are embodied in a particular “response to hypoxia” list, whereas there are certainly more than a hundred deep transcriptomic/proteomic studies on the subject.

*Conversely, it’s possible that the response to hypoxia, for example, is strongly dependent on cell type. In this case, a researcher may fail to recognize that a particular drug indeed induced a hypoxia response if he/she compares results against a “canonical regulation” list. It might have been better to examine a single previous study that more closely mirrored the researcher’s own study setup.

*“Low quality” genes may be excluded from canonical lists. That is good, of course, and one assumes that there are reasonably stringent criteria for such exclusion. If, however, a “low quality” gene (for example, a probe that lacks a well-described transcript) appears again and again in studies of, say, hypoxia, perhaps it’s more relevant than had been thought.

*Researchers are not immune to fads in their own fields. I’ve seen it myself…RNA-seq is performed and the 25th most significantly altered transcript is selected for functional studies, not the first.1 Why? It could be because the 25th gene rings a bell for the researcher. Or…it fits his/her preconceptions as to what should be altered in the experiment. Or…it’s a “hot” gene that would be more likely to draw in grant money. Or…it’s easy to study, as antibodies against the protein are already in the lab freezer. Or…the 25th gene is the subject of previous studies, making it easier to formulate a hypothesis for its involvement in a process. What other factors unrelated to biological significance cause researchers to mention one entity versus another in papers? The point here is that the folks who screen studies for genes that can be incorporated into lists will be victims of these biases.

*To what extent are human screeners subject to their own biases? Do they examine supplemental data?2

*A single study may contribute an excess of data to a transcriptomic database. You could examine the effects of viral infection on a cell line at 1, 2, 3, 4….72 hours and compare the transcriptomic results against controls for each timepoint. Such studies could inflate the size of a database to an impressive degree. However, does insight follow? Does one really expect that the result at 16 hours is going to be interestingly different from the result at 18 hours? Inclusion of multiple highly similar studies will also confound large-scale co-expression analysis (e.g. gene ABC could be lumped together with gene XYZ 72 times, even though the two genes aren’t associated in other studies, in other cell types, under other infection conditions).

*Rare entities may be excluded from canonical lists. Consider two transcripts. Transcript ABC is upregulated in hypoxia in 6 out of 10 studies. ABC is abundant and also tends to be altered in numerous non-hypoxia studies. That is, if you perturb a cell, there’s a good chance you’re perturbing ABC. Transcript XYZ, which may not even be represented by probes in some microarrays, is upregulated in 2 out of 10 hypoxia studies. It’s never mentioned in the body of hypoxia papers, and it’s rarely seen in non-hypoxia studies. Shall we exclude XYZ from our list of transcripts altered in hypoxia?

*Some enrichment tools do not incorporate estimates of the “background” of an experiment. Even if a background is incorporated, shall we assume that all gene ontology lists share the same background? As we’ve noted previously, some of these lists are heavily overloaded with extremely abundant proteins/transcripts. In these cases, it would appear that the genes that compose these lists are more likely to be drawn from a pool of 2,000, as opposed to 20,000, possible genes. In other cases, a gene ontology list does not over-represent abundant entities, meaning that a background of 20,000 might be appropriate for comparison against your own list of genes.3,4

*You add a drug to cell culture and perform transcriptomics against controls. Performing “pathway analysis” on your list of up- and down-regulated transcripts could certainly prove insightful. However, is that all you wish to do with respect to enrichment analysis? Bear in mind that your significantly altered transcripts may be more likely to be bundled in “modules” than in groups of genes found in particular pathways. In other words, your transcripts may contain a large dose of genes downregulated in autophagy, a moderate dose of mitochondrial process, a smattering of genes upregulated in antiviral response, and a heavy dose of genes upregulated in an esoteric process that isn’t even represented in popular gene ontology lists. If there are other studies that match up with your results, will you know?

*Try entering a standard gene-enrichment list (GO, Reactome, whatever), into our Fisher app. Despite the fact that a majority of the lists in our database are derived from individual studies, not mere copies of gene-enrichment lists that other folks have created, you'll probably find that the output is dominated by other gene-enrichment lists (be sure to set the "regulation" filter to "Any"). Basically: GO lists (and the like) best overlap with other GO lists, not data generated from studies involving specific tests versus controls.

The solution to the above concerns is not necessarily tricky. All you need is a database of results from specific studies, as opposed to (or in addition to) compiled lists of genes. To maximize the chance that your own results will strongly align with results from another specific study, the database should be large. This large database should contain a roughly randomized set of studies, rather than, say, a strong focus on cancer. Inclusion of multiple results from a single study should be avoided. Rare and/or uncharacterized genes should not be eliminated without very good cause.

The above describes our database fairly well. Have we fully eliminated all the above concerns? No. In addition to specific studies, we do offer some compiled lists, described in some of our previous blog posts. On some occasions, we do include multiple results from one study. However, we take steps to make sure that such studies do not confound results from our co-expression app.

 

1) Yes, I’ve got a particular study in mind. In fact, the single most significantly altered transcript was not even mentioned in this study.

2) Plenty of biologists believe that confirmation of a protein alteration requires a Western blot. The mass-spec community scoffs at this, believing that blots are vastly inferior to MS and that antibody studies are a waste of time if MS is performed properly. I side with the MS folks. In any case, though, where do the screeners draw their particular lines? Even if they’re consistently following particular criteria, can we assume the criteria are reasonable?

3) If this bit seems difficult to understand, my apologies. It might help to bear in mind that Fisher’s exact test, or similar tests, require a “background” figure. Strictly speaking, this should be the intersection of ALL identified entities in study A with ALL identified entities in study B, regardless of metrics like significance and fold-change. This is not so difficult if you’re comparing results from two studies that used, say, the same brand of microarray. But what if Study A is generated by compiling multiple studies, or if study A is generated by humans who screen papers for genes involved in various processes? What is the sum of all identified (not simply "mentioned") transcripts/proteins in study A? This gets tricky. Things get particularly tricky if the process of compilation results in an excess of highly abundant entities. And we certainly do see cases where abundant entities are strongly over-represented.

4) If you've tinkered with Fisher's exact test, you know that small/moderate errors in the background figure don't necessarily make much difference. However, some potential errors go way beyond the "small/moderate" level. In gene enrichment analysis, the output often consists of a list of enriched groups ranked from most to least significant. Here, one naturally pays most attention to the top ranked groups. In the case of a significantly tweaked background, however, perhaps the top-ranked study should really belong at the 20th position.
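The background sensitivity described in these footnotes can be made concrete with a toy calculation (Python; a hand-rolled one-sided Fisher's exact test, with invented list sizes):

```python
import math

def fisher_right_tail(overlap, size_a, size_b, background):
    """One-sided Fisher's exact test: P(overlap >= observed) under the
    hypergeometric null, for two lists drawn from `background` genes."""
    denom = math.comb(background, size_a)
    return sum(
        math.comb(size_b, k) * math.comb(background - size_b, size_a - k)
        for k in range(overlap, min(size_a, size_b) + 1)
    ) / denom

# Two 100-gene lists sharing 20 genes; only the assumed background differs.
p_broad = fisher_right_tail(20, 100, 100, 20000)  # expected overlap: 0.5 genes
p_narrow = fisher_right_tail(20, 100, 100, 2000)  # expected overlap: 5 genes
```

Both results look “significant,” but the broad-background p-value is smaller by many orders of magnitude; an error of this size is easily enough to shuffle a ranked list of enriched groups.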



