haggholm: (Default)

PSA: In spite of a recent press release from an organisation no less respectable than the WHO, and the consequent media furor from less respectable sources, there is in fact no good reason to think that cell phones are responsible for increases in cancer. A few facts:

  • The WHO has reported on the same research before. Last time, they interpreted it as meaning that cell phones don’t increase risk of cancer. This time, they seem to interpret it as meaning that they do. As far as anyone knows, nothing’s changed except their reporting.

  • There have been many studies on this, the general consensus of which is that there’s no link between cell phone use and the incidence of brain cancers. In fact, a causal link isn’t even plausible on a basic-science level.

  • The “smoking gun” study has an enormous flaw: It’s a retrospective study based on asking brain cancer patients about their cell phone usage a decade ago. Would it be surprising, given their circumstances, and given that they know that the possibility of causation has been floated about, if they unintentionally overestimated their usage? (This is the well-known problem of recall bias.)

  • Even after what looks like jumping to conclusions, cell phone usage has been placed in the same rather tentative risk category for cancer causation as—pay attention—pickled vegetables and coffee.

More here (good and succinct), and here (comprehensive and not at all brief).

haggholm: (Default)

Nothing much to say, but an interesting link to share. A while back I wrote about phages, the viruses that infect bacteria (and got some good comments, I might add). Today, I happened upon this exchange between science writer Carl Zimmer (author of several great books) and Timothy Lu, an MIT researcher on phages. I suggest you go read it if this stuff interests you at all.

Fortunately, in the past few decades, there has been a renaissance brewing in the phage world. Commercial, government, and academic labs have begun to tackle the fundamental issues that have held back phage therapy using rigorous molecular tools. To use phages to effectively treat bacterial contaminations, these labs have been developing technologies for classifying bacterial populations, identifying the right combination of phages to use, and optimizing phage properties using evolutionary or engineering approaches.

haggholm: (Default)

These days scientists have a much clearer picture of our inner ecosystem. We know now that there are a hundred trillion microbes in a human body. You carry more microbes in you this moment than all the people who ever lived. Those microbes are growing all the time. So try to imagine for a moment producing an elephant’s worth of microbes. I know it’s difficult, but the fact is that actually in your lifetime you will produce five elephants of microbes. You are basically a microbe factory.

The microbes in your body at this moment outnumber your cells by ten to one. And they come in a huge diversity of species—somewhere in the thousands, although no one has a precise count yet. By some estimates there are twenty million microbial genes in your body: about a thousand times more than the 20,000 protein-coding genes in the human genome. So the Human Genome Project was, at best, a nice start. If we really want to understand all the genes in the human body, we have a long way to go.

[…]

In the September 2010 issue of the journal Microbiology and Molecular Biology Reviews, a team of researchers looked over this sort of research and issued a call to doctors to rethink how they treat their patients. One of the section titles sums up their manifesto: “War No More: Human Medicine in the Age of Ecology.” The authors urge doctors to think like ecologists, and to treat their patients like ecosystems.

[…]

Here’s one crude but effective example of what this kind of ecosystem engineering might look like. A couple years ago, Alexander Khoruts, a gastroenterologist at the University of Minnesota, found himself in a grim dilemma. He was treating a patient who had developed a runaway infection of Clostridium difficile in her gut. She was having diarrhea every 15 minutes and had lost sixty pounds, but Khoruts couldn’t stop the infection with antibiotics. So he performed a stool transplant, using a small sample from the woman’s husband. Just two days after the transplant, the woman had her first solid bowel movement in six months. She has been healthy ever since.

Carl Zimmer, The Human Lake

haggholm: (Default)

From the SBM article Compare and Contrast by Mark Crislip:

There were two reviews concerning chiropractic safety published recently. Safety of chiropractic interventions: a systematic review, which found

[…] The search identified 46 articles that included data concerning adverse events… Most of the adverse events reported were benign and transitory, however, there are reports of complications that were life threatening, such as arterial dissection, myelopathy, vertebral disc extrusion, and epidural hematoma. The frequency of adverse events varied between 33% and 60.9%, and the frequency of serious adverse events varied between 5 strokes/100,000 manipulations to 1.46 serious adverse events/10,000,000 manipulations and 2.68 deaths/10,000,000 manipulations.

That is impressive complication rates, although the authors suggest the data to support the rates are not robust, for an intervention that only has at best proven efficacy for low back pain and safer alternatives. Also published recently was Deaths after chiropractic: a review of published cases.

Twenty six fatalities were published in the medical literature and many more might have remained unpublished. The alleged pathology usually was a vascular accident involving the dissection of a vertebral artery.

That is about three times the number of deaths from trovafloxacin, an excellent antibiotic that we abandoned in the U.S. as too dangerous.

Emphasis added. Note that the evidence generally seems to support chiropractic as a moderately efficacious treatment for low back pain, on par with massage, but ineffective for any other indication (though it is touted as allegedly effective for all manner of improbable symptoms). Thus, when weighing risk against benefit, keep in mind that these are at best deaths in exchange for temporary relief of back pain, and at worst deaths in exchange for profiteering by means of bogus treatments.

haggholm: (Default)

As I was re-reading Jerry Coyne’s Why Evolution Is True the other night, an aside about bacterial resistance to antibiotics set me to thinking. It’s well known, and to be expected from evolutionary theory, that bacteria—and other pathogenic organisms, such as viruses and eukaryotic parasites—tend to evolve resistance to the drugs designed to wipe them out. It’s fairly obvious: Those drugs exert an extreme selection pressure, as strong as we can make it…thus if the variation exists to produce resistance, we’re positively selecting for it and are guaranteed to end up with it in the end.

As Coyne also pointed out, the response is pretty variable: Diseases like polio and measles haven’t seemed to evolve any resistance at all, while common flu strains evolve so quickly that every flu season is a gamble, where the vaccines available are but a best guess as to what will crop up next. HIV takes the prize as the grand champion of rapid evolution: It evolves so quickly inside each carrier that it eventually evolves its way around the immune system.

This of course reminded me of Orgel’s Second Rule: Evolution is cleverer than you are. As long as medical scientists invent drugs to eradicate pathogenic organisms, they will be exerting selective pressures and will tend to select for resistance. I don’t see that there’s a feasible way around this—short of a drug so astonishingly good that it is always 100% effective (even in the face of imperfect patient compliance!), or a set of rotating drugs so large that no strain can evolve resistance to all at once, it seems impossible; and those conditions seem extremely implausible.
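
To make the selection argument concrete, here is a toy simulation—my own sketch, not anything from Coyne, with made-up survival probabilities—of a rare resistance mutation under sustained drug pressure:

    import random

    random.seed(1)
    POP = 10_000
    SURV_RESISTANT = 0.9    # assumed per-generation survival under the drug
    SURV_SUSCEPTIBLE = 0.1  # assumed survival without the resistance allele

    # Start with resistance in roughly 0.1% of the population.
    pop = ['R' if random.random() < 0.001 else 'S' for _ in range(POP)]

    for gen in range(1, 11):
        # The drug kills: each cell survives with its genotype's probability.
        survivors = [b for b in pop if random.random() <
                     (SURV_RESISTANT if b == 'R' else SURV_SUSCEPTIBLE)]
        # Survivors repopulate back to carrying capacity.
        pop = [random.choice(survivors) for _ in range(POP)]
        print(f"generation {gen:2d}: resistant fraction = {pop.count('R') / POP:.3f}")

Even starting from one cell in a thousand, the resistant genotype takes over within a few generations; the drug itself does the selecting.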

So I wonder, how much medicine is being done that is not only aware of evolution (and the dangers it causes in adaptation to resist drugs), but actually uses it? What follows is very speculative—I’m no biologist or medical expert; I’m just thinking out loud, wondering how much of this is done or has been done; and if not, then why.


One example of a strategy that harnesses evolution would be to design phages. A phage (or bacteriophage), if you don’t know, is a virus that infects a bacterium. This may sound like a rare and exotic sort of being, but phages may well be one of the largest groups of organisms on Earth. Phage therapy makes a sort of intuitive and beautiful sense: Instead of giving a patient a drug that may operate on the basis of differential toxicity (it will damage the bacteria a lot more than the patient), give them something that is absolutely specific to the infectious agent. (In fact, if Wikipedia is to be believed on this, phage therapy is so specific that this is actually one of the problems: Each phage targets a very specific strain of bacteria, so if the infecting strain is different…)

But it seems to me that there’s room for so much more. Now that we stand at the early dawn of the age of genetic engineering, perhaps a time will come when phages can be designed to target pathogens for which no useful phages are known. And if the phages we have access to are less than ideal, then why should we not selectively breed them? If we have a disease for which all of Koch’s postulates hold, for instance, and we can retrieve the real, infectious strains and grow them in culture, then surely we can use those cultures to grow phages—evolving phages better able to attack those strains. By discarding each bacterial culture and growing subsequent generations of phages on fresh, unexposed bacteria, we deny the bacteria any opportunity to evolve resistance, and we should be able to essentially breed phages to wipe out whatever bacteria we like.

Of course, eventually resistant strains would spread throughout the population. But so what? It’s no big deal. We’d just isolate those strains and selectively breed our phages to target them.

I don’t know whether anything similar could work against viruses. Viruses don’t metabolise, so virophages are pretty damned rare (I’ve only ever heard of one). Are there bacteria that eat viruses? If so, those bacteria could presumably be selectively “bred”…


Another, more modest, evolution-aware strategy would be to deliberately select against virulence. Consider, for example, a hypothetical drug that targets some, but only some strains of flu. Specifically, it suppresses particularly dangerous strains, but is deliberately designed not to target the less harmful ones. Such a strategy would not wipe out flu, of course—but then, we couldn’t do that in the first place. What it might do is give more “benign” strains of flu a competitive advantage over their deadlier cousins. Since all strains of flu work in similar ways, they could be viewed as competitors for the same ecological niche. (I could be wrong about this; I’m speculating, after all!) If we allow a relatively benign strain to fill this niche, but do our best to keep the deadlier version down, then we basically co-operate with the more benign flu at the expense of the more dangerous one. There will be a selective benefit in being benign.

(The inspiration for this speculation is actually evolution in HIV. HIV tends to spread pretty slowly—its epidemic proportions may not make it seem that way, but you really can’t spread HIV as quickly as you can spread a cold; an airborne disease simply reaches more people with every sneeze. As a result, a strain of HIV that caused the carrier to drop dead within the week would never get off the ground; it would kill faster than it spread, which limits a disease to a small group of carriers. In other words, a disease has to kill slowly enough to infect a bunch of other carriers first, and how slowly is enough depends on the disease: In the case of HIV, pretty slowly. I have read, though I cannot now find a source for this claim, that some widespread strains of HIV are evolving toward slower development to full-blown AIDS, for these reasons.)


I harbour no illusion that any of this is revolutionary to anyone at all professionally competent in biology or medical science—I am at best a reasonably well-read layman. But the notion of evolutionary strategies in medicine is intriguing to me, and I really do wonder how much is being done, how much has been done, how much can and will be done with it. For now, I merely jot down these thoughts and speculations. When I have a little more time, I should do some digging and some reading to see what’s out there.

haggholm: (Default)

An article I recently read boldly claims that The G-spot 'doesn't appear to exist', say researchers. I read this with a sigh, as I know from experience how greatly distorted any research findings can get when they are published in mainstream media. Clearly, this was an instance of such distortion. I was curious to see what the actual study said, and went off to find it. You may read it here, if you are curious.

Sadly, it wasn’t very distorted after all.

In fact, the press release from the Department of Twin Research & Genetic Epidemiology was worse than the articles I had read. It presents the following conclusion from their study:

The complete absence of genetic contribution to the G-Spot, an allegedly highly sensitive area in the anterior wall of the vagina which when stimulated produces powerful orgasm, casts serious doubt on its existence, suggests a study by the Department of Twin Research to be published in the Journal of Sexual Medicine.

The investigators carried out this study by recruiting 1804 female volunteers from the TwinsUK registry aged 23-83 years. All completed questionnaires detailing their general sexual behavior and functioning, and a specific question on self-perception of the G- Spot. The researchers found no evidence for a genetic basis. This led to the conclusion that – given that all anatomical and physiological traits studied so far have been shown to be at least partially influenced by genes – the G-Spot does not exist and is more a fiction created by other factors e.g. an individual’s own sexual and relationship satisfaction or self-report is an inadequate way to assess the G-Spot and researchers should in future focus more on ultrasound studies.

The impression I took from the mainstream press articles, and which was reinforced by the institute’s press release, was that the existence of the G-spot was inferred to correspond to study participants’ reports of whether they had one. If this were so—if we could determine anatomy by poll—I expect I could find some people with more spleens than kidneys and more livers than lymph nodes.


I took the trouble to read the actual paper (it’s fairly short and quite accessible). The reality turns out not to be quite so bad. The main point—well, let me make an aside here and say that I find it extremely odd that what seemed to be the main point emphasised in the paper was considerably de-emphasised in the press release and consequent mainstream articles, seriously reducing their credibility. Anyway, back to the point:

The main point of the paper is that if the G-spot exists, it is an anatomical structure; if it is an anatomical structure, it is presumably genetically inherited. Even if some women have it while some don’t, we expect to find a strong correlation in twins. Since dizygotic (‘fraternal’) twins share on average 50% of their segregating genes, and monozygotic (‘identical’) twins share 100%, if the trait is heritable at all, we should see a correlation within twin pairs, especially monozygotic ones: If one twin has it, the other should (more often than is the case with unrelated people); if one twin does not, the other shouldn’t. Because twins are typically raised in extremely similar environments, even environmental factors should be similar. In particular, monozygotic twins should resemble each other more than dizygotic twins do on heritable (but not on merely environmental) traits.
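
The standard back-of-the-envelope version of this comparison is Falconer’s formula, which estimates heritability from the two within-pair correlations as h² ≈ 2(r_MZ − r_DZ). Here is a minimal sketch—with illustrative numbers of my own, not figures from the paper:

    def falconer_h2(r_mz, r_dz):
        """Rough heritability estimate from twin-pair correlations."""
        return 2 * (r_mz - r_dz)

    # Hypothetical numbers, for illustration only:
    print(falconer_h2(0.25, 0.24))  # ~0.02: essentially no genetic contribution
    print(falconer_h2(0.50, 0.25))  # 0.50: a substantially heritable trait

If monozygotic pairs are scarcely more alike than dizygotic pairs, the estimate lands near zero—which is exactly the paper’s argument against a heritable G-spot.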

Well, this turned out not to be the case: Dizygotic twin pairs agreed about having (or not having) G-spots about as often as monozygotic pairs did, and this is the real point of the paper. It’s not as spectacular as the mainstream news articles, but I’m surprised that they so failed to emphasise this in their own press release. Ah well, such is the hunt for fame, I suppose.

In the conclusion of the real, scientific paper, the authors are of course forced to admit that

A possible explanation for the lack of heritability may be that women differ in their ability to detect their own (true) G-spots.

They, of course, do not believe this to be the case. We may reasonably ask, why not? And how good is your evidence? My thoughts will be very tentative, because I’m not an expert in any related field; but we may at least reason about it.


First, I will note that the study’s exclusion criteria were, at times, a bit puzzling.

Women who reported that they were homo- or bisexual were excluded from the study because of the common use of digital stimulation among these women, which may bias the results.

I daresay it may bias them! For example, if the G-spot exists, it’s a specific anatomical location inside the vagina. Because it is postulated to be a very specific location, it may be difficult to stimulate with the penis, which is after all not prehensile and may not be angled so as to optimally stimulate a specific location. This postulated spot could perhaps be more easily located and stimulated with the fingers. Therefore, if it does exist, and if we are restricted to self-reporting as evidence, I would expect to find much stronger evidence for this in a population with common use of digital stimulation. The people I would ask first are the people whose answers they discarded. I would be very curious to see how their data are affected if they include this population. What was their rationale for the exclusion criterion? Was it determined beforehand, or after the data were in? Would it contradict their conclusion? What if this population were considered exclusively?

This looks like a very serious weakness to me, as the exclusion criterion seems to be specifically geared towards reaching a particular conclusion. (I can’t think of anything much more damning I could possibly say about a study.) It’s not the only thing that makes me raise an eyebrow, though (but it is the strongest).

Another thing is that, well, some traits just aren’t very heritable. (This is why we measure heritability; if there weren’t variation in how strongly phenotypic traits are associated with genes, there’d be no need.) I suppose the authors may reasonably expect their readership to be familiar with not just the concept of heritability (as I am), but also what kind of numbers we should expect (as I am not). Is a “close to 0” heritability common, or unusual, or rare, or impossible in variable phenotypic traits? Still, it is possible that heritability of the G-spot—not necessarily its existence, but perhaps its precise location and orientation, or its sensitivity—is relatively low. Is the study still powered to detect it? How does this render it more vulnerable to other confounders?

There are various criticisms levelled against twin studies in general. Twin studies are potentially wonderful tools because monozygotic twins offer unique opportunities to investigate heritability. (Personally, I think the most interesting ones are of that rarity of rarities, pairs of monozygotic twins raised apart; the surprising similarities they show in a very wide range of behavioural traits are strong evidence of genetic conditioning.) But they are not perfect.

And finally, I make the observation that the institute—the Department of Twin Research & Genetic Epidemiology—maintains a database of twins (an awful lot of them: Some 11,000 people). This is great; it enables them to perform twin studies efficiently. However, studying the same sample over and over again is problematic. If you look at the same N people, examining them for different properties over and over again, you’re bound to find an apparent correlation eventually. Think about it: If you pick 100 names at random from a phone book, you’ll expect about half of them to be male, half female, and about 8–15 of them to be left-handed…but if you examined them for blood pressure, and dietary habits, and sexual preferences, and number of children, and so on for any number of questions, it would be bizarre if they were an average sample in every respect. This is a problem with data mining. Clearly, the department’s database is pretty large, but then they’ve already published over 400 research papers. At what number of papers should we statistically expect spurious correlations to turn up?
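
The arithmetic behind that worry is straightforward. If we crudely treat each published paper as one independent test at a 5% significance level—a gross simplification, but it makes the point—the chance of at least one spurious “finding” rises toward certainty very quickly:

    alpha = 0.05
    for n_tests in (1, 10, 50, 100, 400):
        p_spurious = 1 - (1 - alpha) ** n_tests
        print(f"{n_tests:3d} tests: P(at least one false positive) = {p_spurious:.4f}")

By 400 papers, the probability of at least one false positive is, for all practical purposes, 1.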


All in all, the study was a bit more sensible than mainstream media had me thinking at first, but as research papers go, I found it surprisingly unimpressive. In particular, the exclusion criterion that discarded answers from gay and bisexual women smells very fishy, and I wouldn’t be terribly surprised if it “biased” the results so far as to invalidate their conclusion.

In a general sense, I trust science—I trust the scientific method, and (to a lesser but considerable degree) I trust that scientific consensus will move toward the right answers: Science is often characterised as an asymptotic approach to the truth (we may never know it exactly, but we will get ever closer). However, when considering a single study, one should be cautious. Never trust what the mainstream press says about it at all, whether you like what it says or not—ordinary reporters lack scientific savvy, good science reporters are rare, and after the editors have their say, it’s often dubious whether the scientists behind a finding would agree with anything the press has to say about them except, perhaps, the scientists’ names.

And while the scientific method is excellent, and the scientific consensus is the best approach we have to knowledge, some studies just aren’t worth the paper of the webpages they’re published on. If you want to adjust your opinions according to a single study, read it. Read it critically.

haggholm: (Default)

Doing my part to echo reason in the skeptical blogosphere, I’ll make a brief mention of what I’ve read about the new USPSTF guidelines, which you may have heard of. If not, Dr. David Gorski explains and deconstructs. The short version is, a group belonging to (but not setting policy for) the US government has altered its recommendations for mammographic screening to

  1. not screen women aged 40–49 anymore (rather, wait until 50)
  2. screen once every two years, instead of annually

Naturally, a lot of people misunderstand this, and some of the less reasonable among them start crying about misogyny and the Obama administration’s “death panels”. These people miss a lot of obvious points.

  • These are screening guidelines. They have nothing to do with recommendations for symptomatic women (examining them is not screening, but diagnosis). It also does not apply to women with known risk factors. It’s a change in how they suggest screening of asymptomatic women should happen.

  • The group was not set up by the Obama administration. The US authorities have not endorsed these guidelines; in fact, most medical groups do not, though if Dr. Gorski is right, a gradual shift in this general direction may happen over time. Either way, the USPSTF just makes recommendations; they have no power to dictate policy.

  • The tricky one: The guidelines actually make a lot of sense. Excessive screening does more harm than good.

It may sound bizarre that more cancer screening could be harmful, but it’s true. Apart from the discomfort and anxiety caused by false positive diagnoses, there’s very real pain and even a small danger in performing biopsies on harmless lumps (even good medical interventions are never completely risk free). And not all cancers will kill you—a few may spontaneously regress, but much more significantly, many cancers are so slow-growing that they shouldn’t be on your list of worries. With average life expectancy around 80, a tumour that would kill you by your 110th birthday is…really nothing to worry about. You’re likely to live longer without the harsh regimen of surgery, chemotherapy, and radiation needed to treat it—even though that same regimen is an absolute life-saver if you have the sort of cancer that would otherwise kill you before natural causes did.

There are also other factors, such as lead time bias (highly recommended reading). It’s easy to say, “If we screen 40-year-olds, most diagnosed cancer patients survive on average 15 years; if we only screen 50-year-olds, our average patient survives only 5 years,” and conclude that early screening lets people live longer (15 years versus 5!)…but I’ve just described two scenarios in which people die at age 55; the difference is how long they live with the knowledge that they have cancer. This sort of thing happens, it is significant, and it confounds trials and policy making. Detecting cancers earlier is only helpful if interventions actually turn out to save lives.
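
A minimal sketch of those numbers (mine—it merely restates the scenario above): the death age is fixed, so earlier detection inflates measured “survival” without extending life at all.

    death_age = 55  # the same biological outcome in both scenarios

    for screening_age in (40, 50):
        survival_after_diagnosis = death_age - screening_age
        print(f"diagnosed at {screening_age}: "
              f"'survives' {survival_after_diagnosis} years after diagnosis")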

I won’t say much further, because this is obviously not my area of expertise, but because a moral panic has sprung up around the internet, I figured I would say something in case you stumble across my blog. If these issues concern you, I highly recommend David Gorski’s write-up, the SkepChick counter to a bad Feministing report, and Orac’s direct deconstruction of the canards and conspiracy theories.

haggholm: (someone is wrong on the internet)

It would be disingenuous to imply that non-vaccination might not lead to an increased incidence in vaccine-preventable illness. It would be equally disingenuous to state that this possibility poses a great threat to America's children.

Dr. Jay Gordon, quoted at Respectful Insolence

It would be…disingenuous to state that this possibility poses a great threat to America’s children.

Never mind polio, which killed or crippled thousands of children every year before it was eradicated by vaccines, the fear of which ruled some people’s childhoods.

Never mind smallpox, an epidemic disease with an average fatality rate of 30%, also eradicated by vaccines.

Never mind Haemophilus influenzae type b (Hib), a disease now nearly forgotten in pediatric wards thanks to vaccination, but which used to cause invasive disease in one of every 200 children under the age of 5—whereof ½–⅔ developed meningitis, with a mortality rate of 5% and a rate of permanent brain damage of 30%.

No—none of these, nor any of the other among the dozens of vaccine-preventable diseases now eradicated or dramatically reduced, pose a great threat; thus, because there’s no great threat, we should cautiously withhold vaccination just in case we ever find evidence that they cause any harm. We have no such evidence, but why jump the gun? It’s not like they prevent any great threat.

haggholm: (Default)

The National Center for Complementary and Alternative Medicine (NCCAM), a branch of the US National Institutes of Health whose aim is to find evidence for alternative medicine, has found to its chagrin that alternative medicine doesn’t work. Key snippets:

Ten years ago the government set out to test herbal and other alternative health remedies to find the ones that work. After spending $2.5 billion, the disappointing answer seems to be that almost none of them do.

Echinacea for colds. Ginkgo biloba for memory. Glucosamine and chondroitin for arthritis. Black cohosh for menopausal hot flashes. Saw palmetto for prostate problems. Shark cartilage for cancer. All proved no better than dummy pills in big studies funded by the National Center for Complementary and Alternative Medicine. The lone exception: ginger capsules may help chemotherapy nausea.

As for therapies, acupuncture has been shown to help certain conditions [though if I read it aright, that finding was called into question when a later, larger study found that sham treatment worked just as well], and yoga, massage, meditation and other relaxation methods may relieve symptoms like pain, anxiety and fatigue.

…Critics say that unlike private companies that face bottom-line pressure to abandon a drug that flops, the federal center is reluctant to admit a supplement may lack merit — despite a strategic plan pledging not to equivocate in the face of negative findings.

"There's been a deliberate policy of never saying something doesn't work. It's as though you can only speak in one direction," and say a different version or dose might give different results, said Dr. Stephen Barrett, a retired physician who runs Quackwatch, a web site on medical scams.

Critics also say the federal center's research agenda is shaped by an advisory board loaded with alternative medicine practitioners. They account for at least nine of the board's 18 members, as required by its government charter. Many studies they approve for funding are done by alternative therapy providers; grants have gone to board members, too.

[The Centre’s methodology] is opposite how other National Institutes of Health agencies work, where scientific evidence or at least plausibility is required to justify studies, and treatments go into wide use after there is evidence they work — not before.

In a federally funded pilot study, 30 dieters who were taught acupressure regained only half a pound six months later, compared with over three pounds for a comparison group of 30 others. However, the study widely missed a key scientific standard for showing that results were not a statistical fluke.

In other words, NCCAM—which was founded with the intent of finding evidence for the quackery its sponsoring Senators were already convinced by (to look for a yes, in other words, rather than to objectively assess credibility)—is perfectly happy to spend millions upon millions of US tax dollars on investigating ludicrous fantasies like distance faith healing, energy healing, and homeopathy (dollars that could be spent on valid research); is biased by a board of proponents; tends to publish lacklustre studies with missing controls…and still can’t come up with a single positive result beyond noting that ginger may (may) help with nausea.

If that’s the best they can come up with when the cards are stacked unreasonably in their favour, then it’s time to pull the plug and spend the next $2.5 billion on something useful.

haggholm: (Default)

Although people all over the spectrum—layman bloggers like me and medical experts like those at the American CDC—agree that the link between MMR and other vaccines on the one hand and autism on the other is spurious, many people still ask: If not this, then how shall we explain the autism epidemic?

Well, the best answer I have seen is in this, one of Orac’s most succinct articles. I do not, myself, have much to add, so I shall merely provide that link and provide a summary for the most impatient among you (though if you are that impatient, why read someone so wordy as me?).

The one piece of irony I wish to add is that I have seen pre-emptive protests by those who do buy into this stuff that claiming that diagnostics have improved radically in the past few decades won’t cut it—irony, because I have never seen anyone claim that we are better now than previously at diagnosing autism. Instead—and here I go into brief summary mode for that article—what has happened is that the diagnostic criteria have changed. In a very real sense, the definition of autism has changed.

What seems to have happened is this: Various sources of statistics, like those for students with any kind of significant learning disabilities, classify those students by their primary diagnosis. A couple of decades ago, autism wasn’t even a category. Sixteen years ago, people were diagnosed as autistic if they met a specific set of criteria. More recently, the criteria have expanded, and autism has been broadened into autism spectrum disorder (many who are diagnosed on that spectrum are defined as high-functioning: They may have ‘peculiarities’, but are not ‘disabled’ in any serious sense)…and of course the number of people diagnosed as autistic has gone up.

Well, of course they have! Thirty years ago they’d have been diagnosed as something else entirely. And this is not because doctors have become better at making the diagnoses: No one is claiming that. Instead, the medical community has changed the definition of what it means to be autistic. (This may very well be for good reason: Unifying similar conditions, etc.) Thirty years ago, perhaps, you were diagnosed with autism if you showed symptoms X, Y, or Z; now you may be diagnosed on the autism spectrum if you show two or more out of the symptoms X, Y, Z, U, V, or W.


There is one additional twist to the story: Because the diagnostic criteria have changed (and because diagnosing disorders like autism is a lot trickier than, say, bacterial diseases, where a pathogen is or is not present in a pretty concrete way), it may be impossible to figure out whether the prevalence of autism has really changed at all. This is unfortunate because it makes it that much harder to study the condition and figure out what the causes really are; and while some high-functioning people with autistic spectrum disorders are fine just the way they are, low-functioning autism can be a pretty awful thing. It’s bad enough that researchers are sidetracked by demands to study these spurious vaccine-danger claims (certainly a worthwhile topic to study!—but it’s been done again, and again, and again…).

haggholm: (Default)

During my coffee break, I read an article in Scientific American Mind called Knowing Your Chances (available online). I think it is an outstanding article, and you should read it. The most evocative part may have been a simple example:

Consider a woman who has just received a positive result from a mammogram and asks her doctor: Do I have breast cancer for sure, or what are the chances that I have the disease? In a 2007 continuing education course for gynecologists, Gigerenzer asked 160 of these practitioners to answer that question given the following information about women in the region:

  • The probability that a woman has breast cancer (prevalence) is 1 percent.
  • If a woman has breast cancer, the probability that she tests positive (sensitivity) is 90 percent.
  • If a woman does not have breast cancer, the probability that she nonetheless tests positive (false-positive rate) is 9 percent.

What is the best answer to the patient’s query?

  A. The probability that she has breast cancer is about 81 percent.
  B. Out of 10 women with a positive mammogram, about nine have breast cancer.
  C. Out of 10 women with a positive mammogram, about one has breast cancer.
  D. The probability that she has breast cancer is about 1 percent.

Before you read on, take a brief moment to think about it, but also note your gut feeling. Done? Let’s continue:

Gynecologists could derive the answer from the statistics above, or they could simply recall what they should have known anyhow. In either case, the best answer is C; only about one out of every 10 women who test positive in screening actually has breast cancer. The other nine are falsely alarmed. Prior to training, most (60 percent) of the gynecologists answered 90 percent or 81 percent, thus grossly overestimating the probability of cancer. Only 21 percent of physicians picked the best answer—one out of 10.

Doctors would more easily be able to derive the correct probabilities if the statistics surrounding the test were presented as natural frequencies. For example:

  • Ten out of every 1,000 women have breast ­cancer.
  • Of these 10 women with breast cancer, nine test positive.
  • Of the 990 women without cancer, about 89 nonetheless test positive.

Thus, 98 women test positive, but only nine of those actually have the disease. After learning to translate conditional probabilities into natural frequencies, 87 percent of the gynecologists understood that one in 10 is the best answer.
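
The arithmetic is easy to verify both ways—here is a minimal sketch using exactly the numbers quoted above, once via Bayes’ theorem and once via natural frequencies:

    prevalence = 0.01   # P(cancer)
    sensitivity = 0.90  # P(positive | cancer)
    false_pos = 0.09    # P(positive | no cancer)

    # Bayes' theorem:
    p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
    print(f"P(cancer | positive) = {sensitivity * prevalence / p_positive:.3f}")  # ~0.092

    # Natural frequencies, per 1,000 women:
    with_cancer = 1000 * prevalence                     # 10 women
    true_positives = with_cancer * sensitivity          # 9 test positive
    false_positives = (1000 - with_cancer) * false_pos  # ~89 test positive anyway
    print(f"{true_positives:.0f} of {true_positives + false_positives:.0f} "
          f"positives actually have cancer")            # 9 of 98, about 1 in 10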

I’m happy to say that I did get it right on the first try, but I strongly agree with the authors’ opinion that it is not intuitive when the statistics are cited as probabilities rather than natural frequencies. The reason I got it right is that I’ve done a bit of math and a wee bit of stats, I enjoy reading some blogs that talk about medical statistics, I know some of the not-quite-obvious ground rules of probabilities; I know what Type I and Type II errors are (even if I occasionally mix them up)…

…And, perhaps crucially, I’ve spent time thinking about false positives in medical testing before. When I get my periodic routine screenings for STIs (I’ve never had symptoms or tested positive for any, I’m glad to say, but I feel a responsible person should get tested anyway!), I’ve asked myself the hypothetical question What if it did show positive for, say, HIV? What are the odds that I would actually have it? (It turns out that if you’re a heterosexual male, and if you test positive for HIV, there’s about a 50% chance that you don’t have it! You should play it safe, but get re-tested and don’t panic. Some people commit suicide when they get positive test results, even though they’re as likely as not to be healthy.)

Still, while my gut told me the answer was not A (wherein I did better than most of the gynecologists), I had to think about it for a minute to figure out which was the proper answer. People need to be educated on this stuff. Meanwhile, if you haven’t had the benefit of statistical education, keep this one thing in mind: The obvious answer is not always correct, so if you’re unsure, ask someone who can do the maths. And, sadly, even your doctor may not know. I actually find it rather sad that even after learning to translate conditional probabilities into natural frequencies, only 87 percent of the gynecologists understood that one in 10 is the best answer—meaning that even after the simplification, more than 1 in 10 gynecologists still didn’t get it. Your doctor can spot the symptoms and order the right tests, but you may need a mathematically inclined friend to actually calculate the risks.

haggholm: (Default)

In areas like alternative medicine and the anti-vaccine movement, one argument that is frequently brought up is that the status quo looks the way it does because Big Pharma is suppressing the arguer’s favoured research results—they suppress evidence that vaccines cause health problems because they are greedy and want to make money from vaccines even if they are harmful; they suppress evidence that homeopathy works because they are greedy and do not want to lose the drug market to homeopaths…

First, let me state quite baldly that I firmly believe that the large pharmaceutical companies fall pretty squarely in the Big Evil Corporation category and frequently engage in questionable or reprehensible behaviour. Certainly many of their executives are as motivated by greed, and as ruthless, as executives of other Big Evil Corporations, like oil companies. I do not dismiss out of hand claims that Big Pharma are doing questionable things. But obviously, that doesn’t mean that they are guilty of all the evils of which they are accused, and we have to look at the actual claims, and corroborating evidence, in order to figure out what’s what.

Frankly, I find the vaccine claim outright puzzling. I have taken vaccines for a number of different things. Every vaccine shot I have taken, even the ones I had to pay for entirely out of my own pocket, has cost around $50 or less. Assuming that a vaccine requires one booster shot, that represents a sales potential of $100 over my entire lifetime—a statistical 80 years or so. That’s not a lot of profit.

On the other hand, if no vaccine is available for a disease, the disease has to be treated and controlled. Consider polio, a disease eradicated by vaccines:

There is no cure for polio. The focus of modern treatment has been on providing relief of symptoms, speeding recovery and preventing complications. Supportive measures include antibiotics to prevent infections in weakened muscles, analgesics for pain, moderate exercise and a nutritious diet. Treatment of polio often requires long-term rehabilitation, including physical therapy, braces, corrective shoes and, in some cases, orthopedic surgery.

Portable ventilators may be required to support breathing. Historically, a noninvasive negative-pressure ventilator, more commonly called an iron lung, was used to artificially maintain respiration during an acute polio infection until a person could breathe independently (generally about one to two weeks). Today many polio survivors with permanent respiratory paralysis use modern jacket-type negative-pressure ventilators that are worn over the chest and abdomen.

Suppose that some utterly ruthless Big Pharma executive sits down and does the math on this. We can either sell $100 worth of vaccines to quite a lot of people, but once only per patient lifetime…or, if we make no vaccine available (or allow it to be banned due to spurious health concerns) we can sell antibiotics, analgesics, braces, corrective shoes, surgical equipment, iron lungs…

I don’t claim to be an expert on the market, but I postulate that vaccines just aren’t big money makers compared to after-the-fact treatments, and obviously vaccines compete with curative and palliative drugs. As someone said in agreement with my opinion,

I had a friend working as an assistant on big pharm sponsored vaccine research project back in the early 90s. The pharm company eventually pulled funding for the research, and the researchers suspected that the motivation was that producing drugs to treat the illness in question was a better moneymaker than funding relatively expensive research to develop a vaccine. The vaccine would have essentially killed a bunch of the company's product lines.

Let me make this very explicit: This quote is pure anecdote and is not intended to be used as evidence, but presented as an example of why my argument is plausible. Nor do I have the market research and relative cost/profit analyses for vaccines versus conventional drugs. However, my point is that in order for the “Big Pharma” conspiracy theory to hold any water at all, this argument has to be addressed. In short, conspiracy theorists who view vaccines as poisoning for profit must believe that

  1. Big Pharma executives are so ruthless and greedy that they are willing to poison millions of children (including their own) for money;
  2. They do this, and get away with it with no sign of internal whistle-blowers (the critics are always outside critics, with no sign of leaked memos as is usually the case in attempts at corporate cover-ups); and
  3. Vaccine production is so profitable that even after R&D costs, it earns the company more money than selling curative and palliative treatments.
…And if they wish to be believed, they have to substantiate that.


Similar claims are often raised by supporters of “alternative” medical treatments like homeopathy and naturopathy. It’s not quite so sinister—they tend to accuse Big Pharma not of mass poisoning campaigns, but merely suppressing their own (surely superior) treatments for profit.

Once again, however, these economic accusations are very fast and loose and vague. Even if homeopathic remedies worked, would it really profit Big Pharma to suppress it? I would rather imagine that they would attempt to take over that market and drive the smaller players out. Simply by pushing for increased regulation (requiring similar standards of evidence of effectiveness and safety for “alternative” drugs as for conventional ones), they would kill a lot of companies that lack the R&D resources to run the necessary studies. (Why don’t they do it already? Well, since these treatments don’t work, the studies would never pass muster.)

Do I know that this is the way the finances would work out? Absolutely not! But the careful evasion of even raising the question makes me think that the alt-med advocates would rather no one think it through—it’s much easier, after all, to go with a knee-jerk Big corporate evil! reaction. There’s no reason to take the greed accusation seriously unless it can be shown to be logically coherent.

This argument has a second irony, of course: Alternative medicine is a huge industry. Billions upon billions of dollars are spent on “alternative” treatments every year—without all the R&D costs that real pharmaceutical companies have to battle with; freed of the expenses and vast time commitments of running large-scale, double-blind medical trials to show that the drugs work. Tu quoque is a logical fallacy, but when the argument amounts largely to character assault (Big Pharma is greedy and evil), it may be worth keeping in mind that “alternative medicine” is no more innocent of the character flaw at hand.

haggholm: (Default)

“Orac” over at Respectful Insolence has a writing style that’s fairly prone to offend—definitely pugnacious, and very fond of side swipes at those he dislikes (primarily alternative medicine quacks)—and I don’t blame him for his distaste, which in fact I share, but it does sometimes make his essays a bit harder to slog through. (He also has an inordinate fondness for beginning sentences with Indeed. This is one area where I can tentatively claim superiority: I can also be pugnacious and come off as offensive, but while I am no less prone than Orac to complicated sentence structure, I’ve never been accused of any such repetitive verbal tic.)

However, those foibles aside, he has written some very good stuff (he’s on my list of blogs I read daily for a reason), and this article, summarising and explaining the work of John Ioannidis, was very interesting indeed. The claim it examines is an interesting and puzzling one: Given a set of published clinical studies reporting positive outcomes, all at a confidence level of 95%, we should expect more than 5% of them to report wrong results; and, furthermore, studies of phenomena with low prior probability are more likely to report false positives than studies where the prior probabilities are high. He has often cited this result as a reason why we should be even more skeptical of trials of quackery like homeopathy than the confidence levels and study powers suggest, but I have to confess I never quite understood it.

I would suggest that you go read the article (or this take, referenced therein), but at the risk of being silly in summarising what is essentially a summary to begin with…here’s the issue, along with some prefatory matter for the non-statisticians:

A Type I error is a false positive: We seem to see an effect where there is no effect, simply due to random chance. This sort of thing does happen. Knowing how dice work, I may hypothesise that if you throw a pair of dice, you are not likely to throw two sixes, but one time out of every 36 (¹/₆×¹/₆), you will. I can confidently predict that you won’t roll double sixes twice in a row, but about one time in 1,296, you will. Any time we perform any experiment, we may get this sort of effect, so a statistical test, such as a medical trial, has a confidence level, where a confidence level of 95% means there’s a 5% chance of a Type I error.
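
A trivial check of the dice arithmetic, for anyone so inclined:

    from fractions import Fraction

    p_double_six = Fraction(1, 6) ** 2     # two sixes on one throw: 1/36
    print(p_double_six, p_double_six ** 2)  # 1/36 1/1296 (twice in a row)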

There’s also a Type II error, or false negative, where the hypothesis is true but the results just aren’t borne out on this occasion. The counterpart of the confidence level here is a study’s power—the probability of detecting a true effect—but there is no fixed convention analogous to the 95% level; power depends heavily on things like sample size and effect size.

This latter observation is a bit problematic, and leads into what Ioannidis observed:

Suppose there are 1000 possible hypotheses to be tested. There are an infinite number of false hypotheses about the world and only a finite number of true hypotheses so we should expect that most hypotheses are false. Let us assume that of every 1000 hypotheses 200 are true and 800 false.

It is inevitable in a statistical study that some false hypotheses are accepted as true. In fact, standard statistical practice [i.e. using a confidence level of 95%] guarantees that at least 5% of false hypotheses are accepted as true. Thus, out of the 800 false hypotheses 40 will be accepted as "true," i.e. statistically significant.

It is also inevitable in a statistical study that we will fail to accept some true hypotheses (Yes, I do know that a proper statistician would say "fail to reject the null when the null is in fact false," but that is ugly). It's hard to say what the probability is of not finding evidence for a true hypothesis because it depends on a variety of factors such as the sample size but let's say that of every 200 true hypotheses we will correctly identify 120 or 60%. Putting this together we find that of every 160 (120+40) hypotheses for which there is statistically significant evidence only 120 will in fact be true or a rate of 75% true.

Did you see that magic? Our confidence level was 95%, no statistics were abused, no mistakes were made (beyond the ones falling into that 5% gap, which we accounted for), and yet only 75% of our statistically significant results were correct.
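
Nothing up the sleeve—here is the same arithmetic restated as a sketch, using the numbers from the quoted passage:

    true_hyps, false_hyps = 200, 800
    alpha, power = 0.05, 0.60  # the quote assumes 60% of true hypotheses are detected

    false_positives = false_hyps * alpha  # 40 false hypotheses accepted as "true"
    true_positives = true_hyps * power    # 120 true hypotheses correctly confirmed

    share_true = true_positives / (true_positives + false_positives)
    print(f"{share_true:.0%} of statistically significant results are true")  # 75%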

The root of the problem is, of course, the ubiquitous problem of publication bias: Researchers like to publish—and people like to read, so journals like to print—positive-outcome studies rather than negative ones, because a journal detailing a long list of ideas that turned out to be wrong isn’t very exciting. The problem, obviously, is that published studies are therefore biased in favour of positive outcomes. (If negative results were published just as readily, all 800 studies of false hypotheses would be visible, and the 40 false positives would stand out as the small minority they are.)

Definition time again: A prior probability is essentially a plausibility measure before we run an experiment. Plausibility sounds very vague and subjective, but can be pretty concrete. If I know that it rains on (say) 50% of all winter days in Vancouver, I can get up in the morning and assign a prior probability of 50% to the hypothesis that it’s raining. (I can then run experiments, e.g. by looking out a window, and modify my assessment based on new evidence to come up with a posterior probability.)

Now we can go on to look at why Orac is so fond of holding hypotheses with low prior probabilities to higher standards. It’s pretty simple, really: Recall that the reason why we ended up with so many false positives above—the reason why false positives were such a large proportion of the published results—is because there were more false hypotheses than true hypotheses. The more conservative we are in generating hypotheses, the less outrageous we make them, the more likely we are to be correct, and the fewer false hypotheses we will have (in relation to true hypotheses). Put slightly differently, we’re more likely to be right in medical diagnoses if we go by current evidence and practice than if we make wild guesses.

Now we see that modalities with very low prior probability, such as ones with no plausible mechanism, should be regarded as more suspect. Recall that above, we started out with 800 false hypotheses (out of 1000 total hypotheses), ended up accepting 5% = 40 of them, and that

It's hard to say what the probability is of not finding evidence for a true hypothesis because it depends on a variety of factors such as the sample size but let's say that of every 200 true hypotheses we will correctly identify 120 or 60%. Putting this together we find that of every 160 (120+40) hypotheses for which there is statistically significant evidence only 120 will in fact be true or a rate of 75% true.

That is, the proportion of true hypotheses to false hypotheses affects the accuracy of our answer. This is very easy to see—let’s suppose that only half of the hypotheses were false; now we accept 5% of 500, that is 25 false studies, and keeping the same proportions,

…Let's say that of every 500 true hypotheses we will correctly identify 300, or 60%. Putting this together we find that of every 325 (300+25) hypotheses for which there is statistically significant evidence, only 300 will in fact be true, or a rate of 92% true.

We’re still short of that 95% figure, but we’re way better than the original 75%, simply by making more plausible guesses (within each study, we were still equally likely to make Type I or Type II errors). The less plausible an idea is, the higher the proportion of false hypotheses among all the hypotheses it generates—that is, the worse its ratio of true to false hypotheses. Wild or vague ideas (homeopathy, reiki, …) are very likely to generate false hypotheses along with any true ones they might conceivably generate. More conventional ideas will tend to generate a higher proportion of true hypotheses—if we know from long experience that Aspirin relieves pain, it’s very likely that a similar drug does likewise.
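
Generalising that arithmetic into a function of the prior (my own extension, keeping the quoted 60% detection rate and 5% false-positive rate) makes the pattern explicit:

    def share_true(prior, power=0.60, alpha=0.05):
        """Fraction of statistically significant results that are actually true,
        given the prior fraction of tested hypotheses that are true."""
        return prior * power / (prior * power + (1 - prior) * alpha)

    for prior in (0.01, 0.20, 0.50, 0.90):
        print(f"prior plausibility {prior:.0%}: "
              f"{share_true(prior):.0%} of positive results are true")

At a 1% prior—arguably generous for something like homeopathy—only about one positive result in ten reflects a real effect, even when every individual study is conducted impeccably.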

This is not to say that no wild ideas are ever right. Of course they sometimes are (though of course they usually aren’t). What it does mean is that not only should we be skeptical and demand evidence for them, there are sound statistical reasons to set the bar of evidence even higher for implausible than for plausible modalities.

It is also a good argument for the move away from strict EBM (evidence-based medicine) toward SBM (science-based medicine), where things like prior probability are taken into account. Accepting double-blind trials at a 95% confidence level at face value isn’t good enough.
