The Cellar  

Old 11-29-2013, 07:56 PM   #1
Clodfobble
UNDER CONDITIONAL MITIGATION
 
Join Date: Mar 2004
Location: Austin, TX
Posts: 20,012
You misunderstand me completely, ortho. I am not "turning away from science." Quite the opposite. I desire for science's mechanism for ruling out flawed data to work faster and more efficiently, that's all. I am impatient.

I wasn't the one who questioned your qualifications, and I have no desire to get into some sort of personal pissing match with you. I'll readily admit I don't have the qualifications you are looking for. I do hold two separate bachelor's degrees, one of which included upper-division classes in physics, biology, and chemistry. I got a nearly perfect score on my SATs, and when they totaled up all the college credits I accrued through various testing (including AP tests in physics, biology, and chemistry), I entered as a sophomore halfway through my second year. I finished my two degrees in 3 years total. But no, they weren't hard science degrees.

There are smart researchers out there. The best man at my wedding got his PhD studying the physics of muon spin resonance. He's brilliant. I also know he regularly complained about how several of the other grad students he had to work with were idiots. "Idiot" is relative. If I find a flaw in a study's design in less than 5 minutes, that researcher is an idiot to me, regardless of how smart they may seem to you. Not all researchers are brilliant, and if you think they are, you are missing the point of the system that allows for their work to be weeded out over time.
Old 11-29-2013, 09:04 PM   #2
tw
Read? I only know how to write.
 
Join Date: Jan 2001
Posts: 11,933
Quote:
Originally Posted by Clodfobble View Post
Case in point: this recent study, that claims to have found a trend whereby people with a "negative gut reaction" to a picture of their new spouse are more likely to be unhappy/divorced in the coming years than those with a "positive gut reaction."
The complaint is that the study has no control group. But that was not the point. They were looking for trends - something on which other researchers could build more accurate experiments. As the BBC reporter notes,
Quote:
He pointed out that overall the scientists found a trend, but some of those who had a negative response stayed happy, while others who had a positive gut reaction became unhappy.
Second, the report is from the BBC - a reporter's summary. So we really do not know if the researchers had a control group.

However a much larger topic applies. Some researchers are literally falsifying facts - as Dr Wakefield did to claim autism is caused by the MMR vaccine. (And Jenny McCarthy still refuses to admit she was brainwashed by the fraud.) Or as Dr Schön did in research on organic transistors at Bell Labs. In that case, it took five years for the fraud to finally be exposed - a major setback for once-promising research that could have made tablets (ie the iPad) out of an electronic sheet of paper.

The problem is discussed in much greater detail in various publications. For example, the social sciences (ie psychology) suffer from an often misunderstood use of statistics. Not necessarily fraud; just bad science.

The confusion extends to a controversy over free publications ("minimal-threshold" journals) versus the mainstay peer-reviewed publications (ie the Lancet). The former are about getting as much science out to others as fast as possible; peer review is about getting it right. The problem is that many who peer review papers do not do a good job, especially since reviewers are not paid and get little credit for the work.

The Economist defines two types of errors. Type 1 is the false positive: concluding something is true when it is not. Type 2 is the false negative: missing a real effect - for example, by assuming the 5% of data that contradicts a conclusion are only outliers and can be ignored.
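Both error types can be demonstrated with a short simulation - a minimal sketch using only Python's standard library. The setup (30 subjects per group, a half-standard-deviation effect, 2,000 simulated experiments) is illustrative, not taken from the Economist piece:

```python
import random
import statistics

def two_sample_z(a, b):
    # Normal-approximation z statistic for two equal-size samples.
    n = len(a)
    se = ((statistics.pvariance(a) + statistics.pvariance(b)) / n) ** 0.5
    return (statistics.fmean(a) - statistics.fmean(b)) / se

def significant_fraction(effect, n=30, trials=2000, crit=1.96, seed=1):
    # Fraction of simulated experiments declared 'significant' at |z| > crit.
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(trials)
        if abs(two_sample_z(
            [rng.gauss(0.0, 1.0) for _ in range(n)],
            [rng.gauss(effect, 1.0) for _ in range(n)],
        )) > crit
    )
    return hits / trials

# With zero true effect, every 'significant' result is a Type 1 error.
type1 = significant_fraction(effect=0.0)
# With a real effect, every missed detection is a Type 2 error.
type2 = 1 - significant_fraction(effect=0.5)
print(f"Type 1 (false positive) rate: {type1:.3f}")
print(f"Type 2 (false negative) rate: {type2:.3f}")
```

Note that even when a real effect exists, an underpowered study design means a large fraction of the simulated experiments miss it - the Type 2 side of the problem.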

Worse is the amount of data that must be crunched to verify a conclusion. This makes peer review even more difficult and expensive - especially in big pharma, where trial data is not provided (the exception being GlaxoSmithKline). In another study, the authors of only 143 of 351 randomly selected papers would share their raw data for review. Granted, this is mostly in the pharmaceutical industry, where lawyers and business school graduates now dominate top management. It may not be as flagrant in other sciences.

Also not required to be shared is software necessary to make the research possible.

About 1.4 million papers are published annually. The number of retractions has increased and yet still remains at about 0.2% - roughly 2,800 papers a year. The problem is compounded by what too many of us do: we don't like people who contradict. Too many of us want to feel good rather than grasp reality. And yet what science really needs is more papers that describe failed experiments. But if you want your paper published, you had better have positive (cheery) results.

A Harvard biologist (John Bohannon) wrote a paper chock full of what were described as clangers - intentionally fraudulent claims written to be obviously wrong. He submitted it to 304 publications; 157 approved it for publication.

Fiona Godlee of the British Medical Journal submitted a paper with eight glaring and obvious mistakes to 200 of the journal's regular reviewers. No one identified all eight; the average reviewer found only two, and some found none.

In another study spanning 14 years, 92% of reviewers found steadily fewer mistakes with each passing year. With 14 years of experience, shouldn't one identify more mistakes? Apparently not.

Some years ago, Amgen tried to replicate the results of 53 published studies it considered relevant. Only six could be confirmed.

Fraud is not widespread or universal. But there is real concern about the research time lost to so much flawed science. The essential remedy is to reproduce the study.

In April 2013 the NY Times described Diederik Stapel as a fraudster in perhaps 55 psychology papers; 30 were definitely found fraudulent. The fraud was confirmed in November 2011 after two students blew the whistle. He returned his PhD to the University of Amsterdam. Worse still are the dissertations of 10 PhD candidates whose research and reputations are now tarnished by the former Dr Stapel.

He got away with it for so long because he did extensive preliminary research and created hypotheses that were credible, so no one questioned his results. Dr Schön, on the other hand, was making breathtaking claims - yet it still took almost five years to discover his fraud. IBM's Watson labs (and others) repeatedly tried and failed to reproduce his results. Science takes time to discredit as well as to confirm.

The questions are whether we have become more suspicious, and whether new 'low threshold' internet publications have put a spotlight on the entire peer review process. One hypothesis is that 'softer' science (ie psychology, sociology, pharma) is subjective enough to make fraud easier. Also critically important is the expression "publish or perish" - a reference to what professors must do to protect their university positions. Nobody wants to hear why some lines of research fail, since most of us want positive and cheery results rather than the hard reality that most experiments in revolutionary fields fail.

Last edited by tw; 11-29-2013 at 09:17 PM.
Old 11-29-2013, 11:59 PM   #3
orthodoc
Not Suspicious, Merely Canadian
 
Join Date: Oct 2006
Posts: 3,774
Quote:
Originally Posted by Clodfobble View Post
I wasn't the one who questioned your qualifications, and I have no desire to get into some sort of personal pissing match with you. I'll readily admit I don't have the qualifications you are looking for. I finished my two degrees in 3 years total. But no, they weren't hard science degrees.

There are smart researchers out there. The best man at my wedding got his PhD studying the physics of muon spin resonance. He's brilliant. I also know he regularly complained about how several of the other grad students he had to work with were idiots. "Idiot" is relative. If I find a flaw in a study's design in less than 5 minutes, that researcher is an idiot to me, regardless of how smart they may seem to you. Not all researchers are brilliant, and if you think they are, you are missing the point of the system that allows for their work to be weeded out over time.
What I'm looking for is some justification for your blanket statement that 'Most studies are designed by stupid people'. A pissing contest isn't necessary; only some information that assures us that you are qualified to make the above generalization. But then, if you were qualified to do so, you wouldn't make such a statement.

I have already pointed out that errors in research are identified and corrected with time. It's only the frauds and charlatans, like Andrew Wakefield and his ilk, who persist in the face of contradictory evidence and lead people who pride themselves on being intelligent and knowledgeable, in spite of having no education in the area, down false paths. And, no - having good SAT scores and/or some AP courses doesn't take the place of actual graduate level courses in the hard sciences. Nor does having had a best man with a PhD.

Most intelligent people accept that they don't have expert knowledge about everything - law, for example - and will consult an expert about matters of importance outside their area of education. You have admitted that your education is not in the sciences, but you nevertheless view yourself as an expert in scientific research, qualified to dictate who is, and is not, an idiot.

Can you describe the different types of studies, the advantages and drawbacks of each, and the situations in which each is appropriate, clod? Do you understand what makes a good study? Do you understand how to analyze the data from a given study - which statistical tests are appropriate and which can't be used, and whether the results are statistically significant? Can you tell us whether this study was designed with sufficient power to render significant results? Do you know what the term 'power' refers to, in this context?
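For readers unfamiliar with the term, "power" can be made concrete with the standard normal-approximation sample-size formula. This is an editorial sketch, not from either poster; the defaults (alpha=0.05, 80% power) are conventional textbook choices:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma=1.0, alpha=0.05, power=0.80):
    # Smallest n per group for a two-sided, two-sample test to detect a mean
    # difference of delta with the given power (normal approximation).
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the significance level
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

print(n_per_group(delta=0.5))  # a 'medium' effect of half a standard deviation
print(n_per_group(delta=0.2))  # a small effect demands far more subjects
```

Because n scales with 1/delta², detecting an effect 2.5 times smaller requires roughly six times as many subjects per group - which is why underpowered studies of small effects so often produce unreliable results.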

If you knew anything in this area, you wouldn't have made such a statement in the first place.
__________________
The greatness of a nation and its moral progress can be judged by the way its animals are treated. - Gandhi