Paul McLellan

Tags: statistical significance, cellphones, significance, p-hacking, mobile, brain cancer, p-value

Do Cellphones Cause Brain Cancer?

27 Nov 2018 • 8 minute read

One subject that regularly gets some traction in the news is whether cellphones cause brain cancer. Every so often a jury awards damages and the press makes a big deal of it, but usually it turns out that a sympathetic plaintiff, like a cute kid with brain cancer, got a sort of sympathy verdict from the jury because the defendant is a big, rich telecom company. Or some scientific study finds a connection that just passes a test for statistical significance, and the press, which is statistically illiterate by and large, runs with it (more on that below).

However, since mobile is the largest market segment for semiconductors, this is potentially important.

Graphs

Let's start by looking at a few graphs. This is the most recent graph I can find of the number of cellphones and brain cancer cases:

Just glancing at this graph, the immediate conclusion is either that cellphones do not cause brain cancer, or, if they do, that the effect is extremely small. Cellphones have exploded from zero to billions, and brain cancer cases have not exploded in the same way. In fact, eyeballing this graph, brain cancer cases seem to have grown when there were very few cellphones, and to have declined once the number of cellphones got large. So if you had to draw any conclusion from this single graph (which obviously you should not), it would be that cellphones have some sort of protective effect against brain cancer!

Just for comparison, here's a graph showing cigarette consumption against lung cancer.

Again, just eyeballing the graph, you would immediately say that smoking clearly causes lung cancer. You can even say more: it seems to take about 25 years of smoking before lung cancer happens. In the first half of the 20th century, smoking rates rose, and 25 years later lung cancer rates rose. Then, in about 1960, people began to give up smoking (and fewer started), and 25 years later, lung cancer rates fell, too.

But just as a warning about reading too much into single graphs, here's another one. You might expect that non-commercial space launches would correlate with the number of rocket scientists, or maybe the number of PhD physicists. But apparently, it's all down to sociologists!

Statistical Significance

It would take a lot more than a single blog post to explain all the details of statistical significance, but I'll give a summary. In general, if you want to test something like whether cellphones cause cancer, a probability called p is calculated. By tradition, if p is less than 0.05 (or 5%, which is the same thing), then the result is regarded as statistically significant. This is actually quite a large value. For studies on smoking and lung cancer, for example, values like 0.0001 are more normal (for example, if you want to test whether smoking for five years and then giving up is worse than never smoking at all). But what is p?

If you want to test something, like the cellphone-cancer link, you have a hypothesis, sometimes called H1. In this case, the hypothesis is that cellphones cause brain cancer. Set against this is another hypothesis called the null hypothesis, often called H0. This is the hypothesis that you are wrong, and whatever you are trying to measure happens just by chance. So in this case, the null hypothesis is that cellphones have no effect on brain cancer rates. The p-value is this: if the null hypothesis is true (so cellphones don't cause brain cancer), what is the probability of getting results at least as extreme as yours anyway? If p is less than your threshold, typically 0.05 as I said above, then you reject the null hypothesis and conclude that, in this case, cellphones do cause brain cancer (and you publish the result). If it is greater than the threshold, you fail to reject the null hypothesis. If you are honest, you don't publish, but read on...
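
To make the mechanics concrete, here is a minimal sketch in Python using scipy's Fisher exact test. The counts are invented for illustration (they are not from any real study): imagine 500 exposed rats and 500 controls, and ask how surprising the observed tumor split would be if exposure did nothing.

```python
# A minimal sketch of a p-value calculation. The counts below are
# made up for illustration, not taken from any real study.
# Suppose 500 exposed rats develop 12 tumors and 500 unexposed
# controls develop 5. Fisher's exact test asks: if exposure has no
# effect (the null hypothesis H0), how likely is a split at least
# this lopsided?
from scipy.stats import fisher_exact

exposed = [12, 488]   # [tumors, no tumors] in the exposed group
control = [5, 495]    # [tumors, no tumors] in the control group

odds_ratio, p = fisher_exact([exposed, control])
print(f"p-value: {p:.3f}")

if p < 0.05:
    print("p < 0.05: reject H0, report a 'statistically significant' link")
else:
    print("p >= 0.05: fail to reject H0 (no significant effect found)")
```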

The big problem with these so-called p-tests is that researchers have learned how to game the system and produce statistically significant (and so publishable) results even when there is no effect. I like Scott Alexander's term for this, the "elderly Hispanic woman effect". If your drug has no effect on the general population, because it doesn't work, then just break up the population by age, by gender, by race, by income, whatever. Since p<0.05 is only a 1 in 20 chance, it is likely (in fact there are papers showing that it is over 60% likely) that one of these groups will come out statistically significant. This is known disparagingly as p-hacking. It is actually statistical malfeasance: if you run lots of tests on the same data like this, you have to adjust the way you calculate the p-values (the Bonferroni correction, for example, divides the threshold by the number of tests). But since all the tests that failed don't show up in your published paper, this isn't obvious to the reader (nor the referees). If you want to be honest, you must decide in advance what tests you will apply, and make the adjustments in advance, not look at the results and then try to find a way to squint at them until something falls out.
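
To see just how easy this is, here is a small simulation sketch, pure noise rather than any real trial data: a "drug" with no effect at all, tested across 20 subgroups, still comes out "significant" somewhere about 64% of the time, which is exactly the 1 - 0.95^20 behind the "over 60% likely" figure above.

```python
# A sketch of the "elderly Hispanic woman effect" described above,
# using simulated noise (no real study). The drug has NO effect:
# both arms are pure noise. We still slice the population into 20
# subgroups and test each one with a 0.05 threshold.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials, n_subgroups, per_group = 1000, 20, 50

hacked_hits = 0       # trials where naive subgroup-mining "finds" an effect
corrected_hits = 0    # same trials with a Bonferroni-adjusted threshold

for _ in range(n_trials):
    p_values = []
    for _ in range(n_subgroups):
        treated = rng.normal(size=per_group)   # no real effect in either arm
        placebo = rng.normal(size=per_group)
        p_values.append(ttest_ind(treated, placebo).pvalue)
    if min(p_values) < 0.05:
        hacked_hits += 1
    if min(p_values) < 0.05 / n_subgroups:     # Bonferroni correction
        corrected_hits += 1

print(f"naive (p-hacked): {hacked_hits / n_trials:.0%} of trials 'significant'")
print(f"Bonferroni:       {corrected_hits / n_trials:.0%} of trials significant")
print(f"theory for naive: {1 - 0.95**n_subgroups:.0%}")
```

The Bonferroni line shows the adjustment mentioned above doing its job: with the corrected threshold, the false-positive rate drops back to roughly the intended 5%.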

As always, there is an XKCD for everything. I'll put it at the end of the post since it takes up so much vertical space.

Let's go back to the cellphone-cancer thing. Mother Jones breathlessly reports on “Game-Changing” Study Links Cellphone Radiation to Cancer. Here is the actual study: Report of Partial Findings from the National Toxicology Program Carcinogenesis Studies of Cell Phone Radiofrequency Radiation in Hsd: Sprague Dawley SD Rats (Whole Body Exposures). I'm not an expert on this stuff; I'm relying on this guy, who is. A quote from him:

To summarize the results, there were actually two studies, one in mice and one in rats. The researchers exposed some of the animals to intermittent (10 minutes on, 10 minutes off) radio frequencies in varying strengths, whole body, from the womb until death. They found a statistically significant increase in some types of brain and heart tumor in male rats only exposed to the radio frequencies, but not in female rats, and not in male or female mice.

This immediately reeks of p-hacking, the "elderly Hispanic woman" effect. And even if you believe the result is not just chance, if it only affects male rats, and not female rats or mice of either sex, then why should we think it has any relevance for humans?

Breaking News

After I'd written this post, I happened to come across a piece about this chart. It's not actually breaking news; it's three years old. But it is new to me (and probably to you).

I said above that if you are going to test for lots of things, you have to decide what to test in advance; you can't just look at the results and go p-hacking. Every study has its equivalent of the elderly Hispanic woman effect. Until 2000, researchers in drug studies could just get the results, find something juicy, and publish. But in 2000, the rules changed, and researchers had to register what tests they planned to run before they collected the data. If you had some reason to believe that your drug might work better on elderly Hispanic women, then you had to say so in advance (and some drugs really do work better on some races for good reasons, not that "Hispanic" is a biological "race"). The results of this rule change were dramatic:

Prior to 2000, researchers could do just about anything they wanted. All they had to do was run the study, collect the data, and then look to see if they could pull something positive out of it. And they did! Out of 22 studies, 13 showed significant benefits. That’s 59 percent of all studies. Pretty good!

Then, in 2000, the rules changed. Researchers were required before the study started to say what they were looking for. They couldn’t just mine the data afterward looking for anything that happened to be positive. They had to report the results they said they were going to report.

And guess what? Out of 21 studies, only two showed significant benefits. That’s 10 percent of all studies. Ugh. And one of the studies even demonstrated harm, something that had never happened before 2000.

Other Effects

Although it seems unlikely to me that cellphones cause cancer, there are other worrying and somewhat unexplainable results. The official position of organizations like the National Institutes of Health is that only local heating due to microwave-oven-type effects can cause problems in the human body, and at the power output used in cellphones this is trivially small. But there are anomalies showing that this can't be the whole story. For example, wearing your cellphone on your hip can cause lower bone density in the femur on that side of your body, so there is something other than local heating going on. If you are a young man hoping to have children one day, it might not be the best place to keep your phone. Women typically keep their cellphones in their back pocket or their purse, so it is probably less problematic.

Another one is the suspicion that the weird auditory effects experienced by US embassy staff in Cuba and China may be due to low-level microwave radiation causing brain damage (link is to the NYT and may require $), and nobody is claiming that this is any sort of thermal effect.

There is also evidence that electromagnetic radiation (cellphones, WiFi, and more) may affect the voltage-gated calcium channels in the outer membranes of our cells, allowing large numbers of calcium ions in. I have no expertise to evaluate the biology here, but it is one possible mechanism for low-level (non-ionizing, non-heating) radiation to affect animal biology.

That XKCD Comic

[XKCD comic]

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.