Hydroxychloroquine study


Michi

Following up on an earlier post on the advantages and failings of peer review

The Guardian just published this article about the Lancet's retraction of a hydroxychloroquine study.

If you don't want to read it in full, the TLDR is that
  • The Lancet published a study claiming to show that hydroxychloroquine causes a higher death rate and more heart-related complications in Covid-19 patients.
  • The study relied on a data set from the Surgisphere database.
  • Multiple researchers pointed out that figures cited by the authors of the paper did not line up with official data.
  • One of the co-authors is the founder of the database.
  • Following the complaints, information about Surgisphere was deleted from the Internet.
  • None of the other authors had seen the data first-hand and were denied access to the dataset.
  • Publication of the study led to a temporary stop of controlled studies into the efficacy of the drug.
  • The Lancet has changed its peer review policy to be more stringent.
Note that the above does not mean that hydroxychloroquine is efficacious. Other studies have shown that it is not. But the publication of the paper—most likely based on fabricated data—did halt studies that, otherwise, would not have been delayed.

More importantly, this episode shows that the scientific method and peer review process are, in the long run, self-correcting and effective. There will always be some idiots who fabricate results and sneak them past reviewers for ulterior motives. But, in the end, the truth comes out on top.
 
Following up on an earlier post on the advantages and failings of peer review

The Guardian just published this article about the Lancet's retraction of a hydroxychloroquine study.

If you don't want to read it in full, the TLDR is that
  • The Lancet published a study claiming to show that hydroxychloroquine causes a higher death rate and more heart-related complications in Covid-19 patients.
  • The study relied on a data set from the Surgisphere database.
  • Multiple researchers pointed out that figures cited by the authors of the paper did not line up with official data.
  • One of the co-authors is the founder of the database.
  • Following the complaints, information about Surgisphere was deleted from the Internet.
  • None of the other authors had seen the data first-hand and were denied access to the dataset.
  • Publication of the study led to a temporary stop of controlled studies into the efficacy of the drug.
  • The Lancet has changed its peer review policy to be more stringent.
Note that the above does not mean that hydroxychloroquine is efficacious. Other studies have shown that it is not. But the publication of the paper—most likely based on fabricated data—did halt studies that, otherwise, would not have been delayed.

More importantly, this episode shows that the scientific method and peer review process are, in the long run, self-correcting and effective. There will always be some idiots who fabricate results and sneak them past reviewers for ulterior motives. But, in the end, the truth comes out on top.

The sentence in bold (that none of the other authors had seen the data first-hand) should have made the 'other authors' wary of the data...

I keep being amazed at how easy it seems for HCPs to publish articles where we find out only much later (this one was relatively fast!) that the data was misrepresented, altered, manipulated, or even 'invented'. I work in 'the industry', where it rains audits and inspections to ensure data quality, and where the repercussions for 'incidents' like this are very real and severe (as they should be).
 
I am in a different academic discipline than medicine and I am always amazed by how amazingly bad the empirical work of doctors is.

The sad fact is that almost no doctors know any stats or have any "data science" skills whatsoever. They rely on external data managers and external biostatistics people to work their data and run their statistical analysis. Their contribution to the paper is some "subject matter expertise" which often ends up being half baked implausible theories from half understood biology concepts.

The Lancet has a history of horribly insufficient peer review, an inability to check authors' claims, and a lack of understanding of basic statistical methodology.

As my stats professor used to joke, "the people most opposed to evidence-based medicine are doctors."
 
I am in a different academic discipline than medicine and I am always amazed by how amazingly bad the empirical work of doctors is.

The sad fact is that almost no doctors know any stats or have any "data science" skills whatsoever. They rely on external data managers and external biostatistics people to work their data and run their statistical analysis. Their contribution to the paper is some "subject matter expertise" which often ends up being half baked implausible theories from half understood biology concepts.

The Lancet has a history of horribly insufficient peer review, an inability to check authors' claims, and a lack of understanding of basic statistical methodology.

As my stats professor used to joke, "the people most opposed to evidence-based medicine are doctors."

While I'm not sure I agree with you on the first part on doctors, the Lancet is a dumpster fire of a journal.

Guess which journal published (and then retracted) the paper associating autism with the MMR vaccine...

This is an interesting article hypothesizing the role of bradykinins in severe COVID cases: Bradykinin and the Coronavirus
 
More importantly, this episode shows that the scientific method and peer review process are, in the long run, self-correcting and effective.
This is pretty dependent on the field, though, which I think is an important point. This got caught because the breaches were incredibly flagrant, and it should NEVER have got to the publishing stage. This is less a triumph of the system than a cautionary tale about taking some fields at face value when it comes to "peer-reviewed" studies.

As @ian can attest, hard sciences like maths are far more rigorous and less open to mates being able to "peer-review" papers.
 
I am in a different academic discipline than medicine and I am always amazed by how amazingly bad the empirical work of doctors is.

The sad fact is that almost no doctors know any stats or have any "data science" skills whatsoever. They rely on external data managers and external biostatistics people to work their data and run their statistical analysis. Their contribution to the paper is some "subject matter expertise" which often ends up being half baked implausible theories from half understood biology concepts.

The Lancet has a history of horribly insufficient peer review, an inability to check authors' claims, and a lack of understanding of basic statistical methodology.

As my stats professor used to joke, "the people most opposed to evidence-based medicine are doctors."
The Lancet is one of the top journals in medicine; getting a paper published is difficult in most high-ranking journals, but even more so in The Lancet. Editors and reviewers for those journals are typically highly renowned experts in their field, and yes, that review includes statisticians. Is there some selection bias, or are their numbers of fraud cases/retractions actually higher than those of other journals?

MDs get a pretty solid background in stats; the few MDs I frequently work with can tell most biostatisticians a thing or two, simply because they are subject matter experts in their area of expertise. The biggest FU-s in protocol writing that I have recently seen were made by statisticians designing analysis plans for topics they had no clue about. By itself, statistics does not add much to science other than methodology; 'statistics can prove that statistics can prove anything'... Don't get me wrong, you need a good statistician to write a decent protocol.

Articles in high-ranking journals such as The Lancet are most definitely not based on empirical data (you'd be hard pressed to get even a case report published based on empirical data alone); the problem IMO is caused by a lack of backbone in academia to resist the pressure to publish (the most cynical incentive for universities and governments to use, if data quality and integrity are of any importance) and a practical issue of repeatability and ethics.

Doing clinical studies means you need patients, an experimental drug or method, approvals, consents, (lots of) money. The days of Mengele and Tuskegee are behind us, even if not that long ago.
Imagine what happens if you want to see whether the results published by Dr X are any good and want to repeat the experiment: you somehow get a bucket of funding, write up a protocol (BTW, including a statistical analysis plan, background, hypothesis, etc.; see the relevant ICH chapter), submit it to an ethics committee... and likely get a NO-GO, because the experiment has already been done (as you explained in the background section of your protocol) and the research adds nothing new, so the risk-benefit ratio just is not favourable for patients.

Unless the data used for the original paper is checked and scrutinized in a similar way as when a pharmaceutical company initiates clinical research to get a drug approved, the data may be subject to fraud that is difficult to detect. Is it fair to blame a journal for someone who wants 5 minutes of fame at the cost of science and patients, or needs to publish to keep their position/status/faculty ranking up? I don't think so. At the same time, scientific fraud within academia is hardly ever prosecuted for the crime against humanity it really is...
 
As @ian can attest, hard sciences like maths are far more rigorous and less open to mates being able to "peer-review" papers.

Absolutely!

Although of the two review jobs I have to do in the next couple weeks, one is for a paper written by my mathematical grandma (advisor’s advisor), and the other is written by one of my former coauthors and good friends, collaborating with a guy I’ve hosted multiple times at my house and a woman that I’ve hung out with a few times at bars at conferences.

It’s a small field. 😂

I’ve seen a couple papers in my field eventually be exposed as wrong, but that’s a couple out of hundreds or thousands. Many papers have small errors: that’s kind of expected, and not a big deal if it doesn’t kill the main results. Most arguments are robust enough to survive small modifications.

If a paper in hyperbolic geometry is wrong, though, the consequences for the rest of the world are rather minimal. 😁
 
Nepotism is pretty much out of the equation; peer review is not done by 'mates' but by peers with their own professional attitude and reputation. Neither is it a requirement that the reviewers do not know or do not like the author; in most cases the author does not know who reviewed the manuscript anyway.
I've done 'some' bar time with folks that happily drill holes, real or perceived, in manuscripts I'm one of the authors of... Peer review is not infallible; reviewers do not necessarily have all the knowledge the authors have, but they usually are quite capable of checking the methodology and the conclusions drawn. The biggest gap, as I see it, is in quality control of the raw data, from collection to analysis. Things are improving, but slowly.
 
In lots of disciplines peer review is not double-blind -- the reviewers know the identity of the authors. And in many disciplines, the set of "peers" (i.e. people with the subject matter expertise to competently evaluate a paper) is really small. So you often end up with situations where even the authors more or less know who the referees are, because it could only be one of five different people, and they already know the opinion of three of them because they discussed the paper at a conference.

So nepotism is most definitely not out of the question for many many disciplines and subfields.

Back to MDs and stats -- that's great that you work with competent MDs. Good that some subfields of medicine do proper work.

But the Lancet study in question had such humongously obvious flaws that I don't think we can trust any of the referees or editors to ever participate in proper peer review again. The data set was obviously fake -- the sample sizes did not even remotely correspond to what was going on in the areas studied. Any referee could have cross-checked the summary stats in the paper against what is public info regarding COVID and hospitals... as a lot of independent third-party researchers did! Why the Lancet referees didn't do that is a mystery to me.

I understand that, in general, you have to go on faith that the data isn't made up when reviewing. You can't, as a referee, check each row of the data set, go back to the lab and ask if the study was actually run, etc. That is infeasible. But when the data was not even shared with most of the authors on the paper, the paper is incredibly consequential, and the data stems from an unknown new company, maybe at least check whether the data passes the most basic smell test?
 
I do agree that a reviewer cannot and should not even begin to check each data line; I'd advocate a system of data quality control at its source, like it's done with sponsor-initiated clinical studies. I think another major issue is that publication pressure is incredibly high, which may lower people's threshold for checking what they sign off on... I do wonder how they ever qualified for authorship according to the ICMJE guidelines...
 
I do agree that a reviewer cannot and should not even begin to check each data line; I'd advocate a system of data quality control at its source, like it's done with sponsor-initiated clinical studies. I think another major issue is that publication pressure is incredibly high, which may lower people's threshold for checking what they sign off on... I do wonder how they ever qualified for authorship according to the ICMJE guidelines...

In other disciplines, you need to upload "replication packages" that reproduce the entire paper from the raw data (if the raw data can be shared), or at least the statistical code to do so plus whatever intermediate data is sharable. Something like that would have already helped here, because it would make it much easier and more transparent for a reviewer to check whether the authors even ran the code for their analysis, inspect (not by hand, but statistically) the intermediate data, etc.

The higher the stakes, the better and more stringent the peer review needs to be, up to and including doing basic "accounting fraud" checks on the data, such as: does the number of people in the sample exceed the number of positive COVID cases in a locality?
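To make that concrete, here is a minimal sketch (in Python) of the kind of "accounting" check a referee could run: compare the per-region patient counts a paper claims against publicly reported case counts for the same period. All region names and numbers below are made-up placeholders for illustration, not figures from the actual study.

# Minimal plausibility check: a study cannot include more hospitalized
# COVID-19 patients from a region than the region reported cases.
# All numbers below are hypothetical placeholders.

claimed_patients = {       # per-region sample sizes claimed by a study
    "Region A": 4500,
    "Region B": 1200,
    "Region C": 800,
}

public_case_counts = {     # officially reported cases over the same period
    "Region A": 3900,      # fewer reported cases than the study claims to include!
    "Region B": 25000,
    "Region C": 7100,
}

for region, n_claimed in claimed_patients.items():
    n_public = public_case_counts[region]
    if n_claimed > n_public:
        print(f"{region}: claimed {n_claimed} patients but only "
              f"{n_public} reported cases -- fails the smell test")
    else:
        print(f"{region}: {n_claimed} of {n_public} reported cases -- plausible")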
 
Clinical studies are a bit more complicated than that; there are all kinds of restrictions. Data privacy may sound like a welcome thing for us all, yet there are areas where it works against us. This is one of them.

It's one thing to bash one study where misconduct was discovered (if far too late); that does not mean that medical science does not work. There are S%^&loads of articles on the treatment of COVID patients that actually help to advance treatment.
 
I mean, I work in an empirical discipline. I don't want to sound like I am bashing all research. Lots of important and good research gets done every day by super diligent honest researchers that put all their hours and all their energy into getting it right.

But that only RAISES the bar for the top medicine journal, for example, to ensure that what it publishes goes through proper review. The flip side of bad peer review is that you not only promote bad science, you also fail to reward good science. And what's the quickest way to turn an honest, diligent researcher into a fraudster? I'd guess having your paper rejected because the referee doesn't understand basic stats and/or was too lazy to read your methods section, while seeing some inflammatory, flashy BS get published in the same week. Why put in your early mornings and late evenings when you can get a higher chance of a publication with a bogus statistical method, just dumb enough for the average bored and lazy referee, and made-up data?

(As an aside: even if the Lancet study's data had been real, the paper was junk... which it shares with the Lancet vaccine paper. That paper was fraudulent in terms of data, but even dumber in terms of stats. It should never have been published even if the data had been real.)
 
The publish or perish mentality has been a real detriment to science and proper scientific research.

[Attached image: SMBC comic on publish-or-perish in science. Image credit: SMBC]
 
Yeah, as if putting out articles helps to advance science. Look in any lower-ranking journal and you'll see plenty of articles on stuff already published several times, but now with a minor, insignificant tweak; heck, usually the originals or first copies are referenced in the articles... Keeps plenty of folks real busy, but it does nothing for actual progress.
 
Yeah, as if putting out articles helps to advance science. Look in any lower-ranking journal and you'll see plenty of articles on stuff already published several times, but now with a minor, insignificant tweak; heck, usually the originals or first copies are referenced in the articles... Keeps plenty of folks real busy, but it does nothing for actual progress.

This is a really dismissive comment. It’s true that a small number of papers are responsible for most of the progress in a field, but you don’t necessarily know which they’re going to be before they’re completed, or even immediately after they’re published. And even those eventually important papers are influenced by lots of other smaller papers. I’ve had multiple experiences with publishing papers in great journals, while citing smaller works published in lesser journals, and I’ve also published a lot of stuff in lesser journals too. Regarding your objections to trivial modifications of existing studies, I thought reproducing the results of important studies was supposed to be important too? Wasn’t that kind of the argument above?

Also, it’s not unreasonable to think that universities and agencies issuing research grants should evaluate a researcher’s previous output when considering future funding.... are you suggesting they just give everyone in the field equal funding and 20 years to complete their dream projects? Sounds like a massively inefficient way to allocate funds. And many scientists need some sort of external motivation to get themselves moving, too. I know I do most of my work when there’s a deadline somewhere in the future.

All that said, tenure does take pressure off some scientists, allowing them to focus their attention on things they really care about. I’ve definitely felt that in the past few years, since I was lucky enough to get tenure and also a 5 yr government grant around the same time. But at the same time, I haven’t been as productive as I was when deadlines were looming. Now that I’m getting closer to having to reapply for a grant, the motivation has returned in full force. (The tenure exception also doesn’t really apply in fields where all research requires massive funding, though.)

Finally, one should realize that “publish or perish” doesn’t mean “publish some trivial crap and you’ll get all the funding you want”. The quality of your output determines the kind of job and amount of funding you’ll get. So there is actually motivation to work on quality projects, not just drivel.

TLDR: yeah, publish or perish probably results in some trivial papers being published, but do you have a better idea for how to allocate funds and motivate scientists?
 
Yeah, as if putting out articles helps to advance science. Look in any lower-ranking journal and you'll see plenty of articles on stuff already published several times, but now with a minor, insignificant tweak; heck, usually the originals or first copies are referenced in the articles... Keeps plenty of folks real busy, but it does nothing for actual progress.

I don't want to come across as being needlessly confrontational to each of your messages but .. this is just such a weird thing to complain about.

In literally every single empirical discipline, the largest problem standing in the way of progress is a LACK of independent replication, i.e. "trivial modification of what is already known". That's literally the thing we lack the most as it stands.

Every single empirical field has a huge "false positive" issue, with hundreds of studies being published that have almost no chance of being replicated. And it's not just the usual suspects, say, social psychology, that pump out false findings by the minute. Cognitive psych, medicine, sociology, economics, education, etc. are only marginally better (and some, like nutrition and criminology, are even worse).

What we need is people taking grandiose claims in papers with a p-value of 0.045 and subjecting them to independent replication. If they then have an idea of how to "marginally" extend the analysis or experiment, even better. But what we need is more robust research, not even more "just the most grandiose idea gets published" incentives that just lead to a ton of spurious false positives, each of which has the potential to throw a discipline years backwards (think about how social psych has made virtually no systematic progress over the last twenty years, thanks to entire research teams dedicating thousands of research hours and hundreds of thousands in research funding to now-debunked "big ideas", from positive psychology, over power poses, to virtually every other big idea in psych...).
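As a side note on why independent replication matters so much: even with perfectly honest data and no real effect at all, roughly 5% of studies will clear the p < 0.05 bar by chance. Here is a toy simulation that makes the point (Python, assuming numpy and scipy are available); the sample sizes and study count are arbitrary and not tied to any particular field or paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 10_000      # hypothetical independent studies
n_per_group = 30        # participants per arm
false_positives = 0

for _ in range(n_studies):
    # Both groups are drawn from the SAME distribution: there is no real effect.
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(control, treatment)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' results with no true effect: "
      f"{false_positives / n_studies:.1%}")   # about 5%, by construction

Without replication, that 5% of spurious "findings" is indistinguishable from the real ones in the published record.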
 
I was merely responding to the cartoon Ian posted, which to me came across as dismissive. I agree with your points Ian, so if I was dismissive (and I was), it was about the cartoon.

That said, repetition of experiments is a key part of science. I was trying to say that I observe (not complain) that there also is an abundance of articles written because folks need to write papers, so for want of a new idea, or even a new insight, something existing gets reused; just follow the references in some articles.

VicVox, all true. I do not advocate for how scientists are currently 'managed'; there must be a middle ground between giving them a nice place to work and X amount of funding and seeing what comes out, and chasing them to death to publish. We live in an illusion of control; we cannot really administrate science forward. It happens: people discover major things while tinkering and messing around.
 
Also needing to be borne in mind is the fact that "science" is a broad church as a word in general usage, covering actual verifiable hard sciences like Ian's and spreading all the way out to areas like nutritional epidemiology (i.e. the personal belief systems of the researcher), while still being described by that same word, much as "car" covers everything from the Trabant to the Veyron.
 
I agree with some sentiments from a few prior posts. I've had collaborations with several MDs on biological research, & published papers with some of them. I was surprised that their experimental designs were full of holes. I realized that they weren't trained to do real scientific research. Thank god that the biopharmaceutical medicines are mostly developed by Ph.D.s; the MDs are left to “practice“ medicine, i.e. match symptoms with available prescriptions.
 
I’ve had collaborations with several MDs on biological research, & published papers with them. I was surprised that their experimental designs were full of holes. Thank god that the biopharma medicines are mostly developed by Ph.D.s, the MDs are left to “practice“ medicine, i.e. match symptoms with available prescriptions.


They just don't learn much about stats, causal inference, experiment design in school.

There's this well known anecdote that "even" doctors are very bad at doing Bayesian updating about test results (ie, given prevalence x, false positive rate m, false negative rate k, how likely am I to have disease Y if I get a positive result?)

I find that very unsurprising. In fact, I am sure their training actually makes them worse at it than the average highly educated person.

Next time your doctor tells you result X means Y, ask them what their best guess is of the probability whether you ACTUALLY have Y. I'd bet it's 50% chance they don't think that's even a meaningful question that they can process, 49% that they look up the % of positive cases that test positive and tell you that, 1% they actually know the correct way to answer.

I have done this about multiple diagnoses and never gotten the third case. Always been first or second....
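For anyone who wants to see the arithmetic behind that question, here is a minimal sketch of the Bayes calculation in Python. The prevalence and error rates below are made-up illustrative numbers, not figures for any real test.

def p_disease_given_positive(prevalence, false_pos_rate, false_neg_rate):
    """P(disease | positive test) via Bayes' theorem."""
    sensitivity = 1.0 - false_neg_rate                    # P(positive | disease)
    p_positive = (sensitivity * prevalence                # true positives
                  + false_pos_rate * (1.0 - prevalence))  # false positives
    return sensitivity * prevalence / p_positive

# Hypothetical numbers: 1% prevalence, 5% false positive rate, 10% false negative rate.
print(p_disease_given_positive(0.01, 0.05, 0.10))  # ~0.15

With a rare condition, a positive result here still means only about a 15% chance of actually having the disease, which is exactly the kind of counterintuitive answer the anecdote is about.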
 
Sick burn, dude. You should probably go see an MD about it.

I am a puppy dog at the Dr.'s office now: I keep my mouth shut. I learned my lesson the hard way. I challenged my doctors a bit too much & one of them fired me, wrote me a letter saying that she can no longer be my doctor 😂
 
the MDs are left to “practice“ medicine, i.e. match symptoms with available prescriptions.
Sadly all too true. What's lifestyle intervention? Can you write a script for that? No, that's not an income generator, and takes way too long.

I challenged my doctors a bit too much & one of them fired me, wrote me a letter saying that she can no longer be my doctor
It's for the best, moves you along towards a doctor with a clue, or at least an open mind.
 
They just don't learn much about stats, causal inference, experiment design in school.

There's this well known anecdote that "even" doctors are very bad at doing Bayesian updating about test results (ie, given prevalence x, false positive rate m, false negative rate k, how likely am I to have disease Y if I get a positive result?)

I find that very unsurprising. In fact, I am sure their training actually makes them worse at it than the average highly educated person.

Next time your doctor tells you result X means Y, ask them what their best guess is of the probability whether you ACTUALLY have Y. I'd bet it's 50% chance they don't think that's even a meaningful question that they can process, 49% that they look up the % of positive cases that test positive and tell you that, 1% they actually know the correct way to answer.

I have done this about multiple diagnoses and never gotten the third case. Always been first or second....

I think that varies quite a bit from medical school to medical school. A prior primary care physician of mine (in a different geographic location) was one of the rare ones who actually keeps up on the literature and reads it with a critical eye. He regularly complained about the general quality of published research in his field, but singled out pharmacology in particular as the absolute nadir of research quality. The guys in that realm may have PhDs, but they mostly work for for-profit companies that are far from dispassionate about the fiscal implications of the work of their research divisions. If you believe there is a firewall dividing research from the money guys, perhaps you'd like to buy a bridge?

Also needing to be borne in mind is the fact that "science" is a broad church as a word in general usage, covering actual verifiable hard sciences like Ian's and spreading all the way out to areas like nutritional epidemiology (i.e. the personal belief systems of the researcher), while still being described by that same word, much as "car" covers everything from the Trabant to the Veyron.

I have to take some issue with this. Certainly, pop nutrition fads and a small number of people with some semblance of credentials who feed this nonsense should be called out, but there is quite a bit of legit work being done in nutritional epidemiology. IME the social sciences, taken broadly, have much lower research standards, but as in any field there are exceptions that try the rule. In the interest of full disclosure, I used to work in research on heavy metal contamination in urban areas with respect to public health and have dealt with a number of epidemiologists, including one focused on diet and cancer, Australia-educated I might add...
 
They just don't learn much about stats, causal inference, experiment design in school.

There's this well known anecdote that "even" doctors are very bad at doing Bayesian updating about test results (ie, given prevalence x, false positive rate m, false negative rate k, how likely am I to have disease Y if I get a positive result?)

Is anyone good at these though? I thought I read somewhere that humans are just bad at probabilities. I know I'm awful so every time I see it, I have to sit down and write it out.

I find that very unsurprising. In fact, I am sure their training actually makes them worse at it than the average highly educated person.

Next time your doctor tells you result X means Y, ask them what their best guess is of the probability whether you ACTUALLY have Y. I'd bet it's 50% chance they don't think that's even a meaningful question that they can process, 49% that they look up the % of positive cases that test positive and tell you that, 1% they actually know the correct way to answer.

I have done this about multiple diagnoses and never gotten the third case. Always been first or second....

Of course it would be better to be accurate, but I wonder how damaging it is. For instance, if doctors always underestimate the severity of diseases that are common and easily diagnosable (low false-positive rate), is that bad? I think I want my doctor to overestimate the severity of uncommon and hard-to-detect diseases.
 