Friday, July 6th, 2012

Evaluating scientific credibility in an age of misinformation: lessons from Cogmed, a new non-medication-based ADHD treatment


Questions you will be able to answer by the end of this blog:

-What analytical questions can you ask to evaluate the scientific credibility of a healthcare treatment?
-What are key characteristics you should look for in scientific studies to know if a treatment has real credibility?
-What is an example of a well-researched and scientifically credible treatment that is non-medication based and used to treat ADHD-like problems with focusing?

Science for some of us is about as exciting as a root canal, and about as understandable as Egyptian hieroglyphics. Well, at least the Egyptians used human symbols! Aren’t scientists those nerdy people who excitedly whisper in corners about statistical facts? Well, yes, some of them. I admit I get excited about statistics sometimes too. But one day you too will get excited about statistics. When you find out your father has Alzheimer’s, or your sister has breast cancer, you may suddenly find yourself up at 2 a.m. reading web pages full of scientific information.

How will you know how to evaluate competing scientific opinions?

We have all been confused by the dizzying complexity of science, but science is actually a basic process you can learn more about, and the quality of your life (or a family member’s) may depend on learning what we are about to go over here. As I talk about science, I will share my own journey as a clinician who felt like he was swimming upstream: I wanted to treat ADHD but felt there were no powerful methods to use. I was very frustrated by the shortage of non-medication-based treatments for ADHD, and then I heard about a new treatment that showed scientific promise. Real promise. Let’s back up a little first.

Let’s start with a reality about ADHD treatments that is true for almost any healthcare problem out there: there are over 50 treatments marketed as “effective”, “scientifically proven”, “research-based”, or “highly successful”. The problem is, anyone can use these words; no “word police” will drop by a company’s marketing agency to penalize it for overselling its product. Their job is to hypnotize you! The only real weapon you have is your ability to use SCIENTIFIC REASONING to make your final decision about how you will treat that tumor they find, or which knee treatment you choose when you tear some ligaments like I did. Yes, we seek out the most trusted healthcare person we can find for their opinion, but it is often better to also review the research for yourself, isn’t it?

Even doctors and other healthcare professionals increasingly want their patients to make informed decisions. To do that you must understand estimated risks and benefits; to do that, you must understand the probability of a treatment working or not; and to do that, you must understand what makes some scientific data more convincing than other data. Do you see where this is leading? Yes, that’s right: you should not passively and blindly say, “Doc, tell me what to do.” When people come to me with attention problems, the first thing out of my mouth is, “Go review the research and make a list of questions for me!” Why? Because I want them to be an active part of how we decide to move forward.

Our changing healthcare culture means you must become a better scientist

Have you noticed that the healthcare relationship is increasingly a partnership, where the professional and the patient examine things like treatment options together? When I work with people as a psychologist, I discuss how we will make decisions together, because that is healthcare at its best. Have you ever walked out of a doctor’s office and asked yourself, “What just happened?” because you were shuttled out before you even had a chance to think about what was being said? I have, and I NEVER want any of my patients to feel that way.
People should feel like they are a co-pilot in the healthcare process, and you must become a critical thinker if you want to be part of the best decision-making process.

Another aspect of healthcare that is changing is its pace of advancement. Scientists around the world are working on similar problems, so your doctor or professional may not be aware that just yesterday a new approach to a condition you have been diagnosed with was found to be less risky AND more effective! But you may learn about it first, because you may have more time to research a new finding that has not even been reported at a professional conference yet. Yet another reason to spend a few minutes reading the rest of this blog.

As a behavioral scientist, I am always open to considering scientific information a client may have discovered, because I want them to be fully informed before they make decisions. My goal as I work with people is to teach them to be better scientists, because the internet must be treated as an enemy weapons system, as my computer guy JR Guthrie likes to point out jokingly. Why? The internet does not teach you how to increase your scientific horsepower; rigorous scientific thinking does. In other words, scientific thinking skills, not the internet, make you more powerful.

Dealing with uncertainty – how you calculate the probability of success of anything

There is no certainty, save death and taxes, a wise person once said. Not even with science. What science does is help us make decisions according to calculations we make about the probability or trustworthiness of something. For example, as a child specialist I commonly have parents ask me if their child’s attention problem can be improved without resorting to medication. They know medication can cause its own set of problems and does not teach their child skills. Medication sometimes is necessary, but I use it only as a last resort. Instead, I want to help people use their natural cognitive skill sets to develop better focusing skills.

As I researched ways to help people with their focusing problems without medication, I came upon Cogmed and found its creators to be brutally honest about what this new computer-game-like method could and could not do. This was a refreshing change from meeting professionals who oversold their approach.

You might not be interested in Cogmed, but the moral of my story is this: the more you learn about what a genuinely scientific process looks like, the finer-grained judgments you can make about the relative risks and benefits of treatments you are considering for your own problems.

Well, what a great time for me to end with a bullet point list of scientific criteria you might look for in treatments, whether it is for ADHD (in the case of Cogmed) or other treatments.

And before we get to the bullet points, don’t forget you can always use local experts as consultants to find the best sources of information on any topic. Many people consult psychologists like myself because psychologists must undergo rigorous training in statistics, research methodology, and scientific thinking. The best psychologists are not just compassionate and inspiring, but use what I call “Great People Science”. That’s one GPS that will save you over and over again from getting lost in the research jungle.

If after reviewing these bullet points you want to become smarter than your doctor regarding probability (of failure or success) and statistics, or you must make a serious, life-threatening decision where competing scientific opinions will not make it easy to know what to do, there is one book you need to read: “Calculated Risks” by Gerd Gigerenzer, PhD.

First, some reminders…

-Just because you saw it on TV or in the newspaper, a celebrity endorses it, there are hundreds of testimonials, or the brochure looks great does not mean it works.

-Tucsonans are especially prone to thinking “alternative” necessarily means “helpful”. It doesn’t! Visit the National Institutes of Health’s alternative treatments website!

-Cultures differ in their tolerance for ambiguity and their need for certainty. In the US, there is a very low tolerance for uncertainty. Pseudoscience promises certainty, science hands us probability and doubt.

-Americans tend to be optimists and pragmatists: we want to believe, and we want things that work, with less emphasis on why. This can be a fatal flaw.

Questions you can try to get answered to determine the scientific credibility of a treatment or product:

-How well regarded are the creators of the treatment in their own scientific community?

Find out by noting how many peer-reviewed journal articles the creator(s) have published, and what level of scientific credibility those journals have. Many fields now rank the prestige of their journals (yes, scientists are blunt about what they think of themselves). You can also find out how often that research is being cited by other researchers. Let a reference librarian at your local university help you find this out.

-What percentage of the research is funded by the company selling the product?

Researchers who are funded by a drug company might have a harder time reporting data suggesting a drug does not work. A key adage I always follow is “follow the money”: not just who is funding the research, but also how much of the company’s own money is going back into research, which helps you know whether the company is deeply invested in improvement or in making a quick buck. Cogmed has shown that its major investment concern is research, not sales. Moreover, the vast majority of the research on Cogmed is not funded by Cogmed. And scientists tend to be very careful with the little money they often receive.

-How strong or credible is the research?

All studies can be placed on a continuum of trustworthiness, based on certain variables. For example, some of the strongest research has a placebo-controlled, double-blind method built in. Placebo control accounts for how people report feeling better just because something was done, even taking a sugar pill that in itself is not actually helpful. Double-blind means that neither the participants nor the staff who administer the study know who is receiving the real treatment. What I found impressive about Cogmed was their focus on stimulating outside researchers to use placebo-controlled conditions. Credibility also goes up as a finding is replicated, especially across different populations of people. Look for the degree of replication: more replication, more credibility.

-How carefully is the treatment being delivered?

What I appreciated about Cogmed was that they have not simply released their product to the open market for anyone with a working memory/focusing problem to use, like many other computer-based treatments. Why does this matter? Because focusing problems are much more complex than a common headache, for which you SHOULD be able to get over-the-counter medication.

-How objectively can you measure progress?

I think the future of healthcare partly resides in its ability to use technology to precisely measure changes that everyone can see, whether they are subtle or dramatic. What impressed me about Cogmed is that they focused on developing a program that tracks every single effort a person makes while using Cogmed to improve their attention. So instead of relying on people’s subjective opinions, we see changes over time that are precisely measured.

-Is the pace of research being done on the treatment you are considering picking up or slowing down?

You know the scientific community is beginning to take an idea or treatment very seriously when the pace of research picks up. For example, determine how many research papers have been published in the last 6 months, or in the last year, on a treatment you are considering. Are the intervals between publications shortening or lengthening? If a particular treatment is truly a breakthrough (versus an over-hyped claim) that is gaining serious scientific credibility, you will often see an increase in the pace of peer-reviewed journal articles being published. If you are comparing two or more treatments, compare how many studies have been conducted on each.

For example, many people call me and ask about the use of biofeedback for treatment of ADHD. Well, there was only one study on that in 2011, while there were many more studies on Cogmed that same year. Moreover, there are 50+ studies being done on Cogmed at the time of this blog’s publication.

-What do reviews of the literature tell you?

A review of the literature is when someone tries to summarize what we know after a body of studies has been completed. However, reviews are rarely done well, so beware! One type of review is called a meta-analysis: someone tries to amalgamate all of the statistical data in a body of literature that focuses on one topic. Again, beware! Scientists often do a poor job of this as well. You want not only to read the bottom line of the review or meta-analysis, but also to read the responses from those who critique it.
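To make the idea of "amalgamating statistical data" concrete, here is a minimal sketch of the most common pooling method, inverse-variance (fixed-effect) weighting. The study names, effect sizes, and standard errors below are entirely made up for illustration; they are not real Cogmed data.

```python
# Hypothetical results from three studies of the same treatment
# (illustrative numbers only, not real data).
studies = [
    {"name": "Study A", "effect": 0.45, "se": 0.15},
    {"name": "Study B", "effect": 0.30, "se": 0.20},
    {"name": "Study C", "effect": 0.60, "se": 0.25},
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by
# 1 / SE^2, so more precise studies count for more in the average.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect size: {pooled:.2f} (SE {pooled_se:.2f})")
```

Notice how the pooled estimate sits closest to the most precise study: that weighting is exactly what critics of a meta-analysis scrutinize, which is why reading the critiques matters.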

-Are “effect sizes” (this is a statistical concept) reported that can help you understand the magnitude of the treatment’s effect?

An effect size is now considered a benchmark statistic that should be reported because it captures the real-world effects of a treatment.

I cannot explain the importance of effect sizes any better than Dr. Robert Coe, whom I quote:

“ ‘Effect size’ is simply a way of quantifying the size of the difference between two groups. It is easy to calculate, readily understood and can be applied to any measured outcome in Education or Social Science. It is particularly valuable for quantifying the effectiveness of a particular intervention, relative to some comparison. It allows us to move beyond the simplistic, ‘Does it work or not?’ to the far more sophisticated, ‘How well does it work in a range of contexts?’ Moreover, by placing the emphasis on the most important aspect of an intervention – the size of the effect – rather than its statistical significance (which conflates effect size and sample size), it promotes a more scientific approach to the accumulation of knowledge. For these reasons, effect size is an important tool in reporting and interpreting effectiveness”

You may read his excellent paper on why effect size is important here:
http://www.leeds.ac.uk/educol/documents/00002182.htm
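For the statistically curious, here is a minimal sketch of the most common effect size, Cohen's d: the difference between a treatment group and a control group, expressed in standard-deviation units. The attention scores below are invented for illustration, not drawn from any real study.

```python
from statistics import mean, stdev

# Hypothetical attention-test scores (made up for illustration)
treatment = [14, 16, 15, 18, 17, 16, 19, 15]  # after training
control = [13, 15, 14, 16, 15, 14, 17, 13]    # no training

# Pooled standard deviation across the two groups
n1, n2 = len(treatment), len(control)
s1, s2 = stdev(treatment), stdev(control)
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5

# Cohen's d: group difference in standard-deviation units
d = (mean(treatment) - mean(control)) / pooled_sd
print(f"Cohen's d = {d:.2f}")  # rough benchmarks: 0.2 small, 0.5 medium, 0.8 large
```

Because d is in standard-deviation units, it lets you compare the magnitude of two very different treatments on an even footing, which a bare "statistically significant" verdict cannot do.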

-How well does the treatment mirror a common-sense approach that prepares someone for the real-life challenges they struggle with because of the problem being treated?

This is a key question, especially when you are trying to compare the various techniques professionals use to treat attention and focusing problems, Cogmed among them, that all consider themselves credible. For example, some practitioners use neurofeedback, but when I reviewed the science behind it, I had many questions that the evidence supporting it could not answer.

Moreover, when I put Cogmed head to head against neurofeedback, I recognized that Cogmed has people practice mental tasks that most closely approximate the REAL-LIFE challenges they are having. For example, Cogmed has you practice remembering visual and auditory information, much as you must every day. This is what those with focusing problems often struggle with most. Meanwhile, neurofeedback focuses more on a person learning to control brain-wave activity. In other words, I found Cogmed to be a more direct approach to improving attention and concentration.


