(comments)

Original link: https://news.ycombinator.com/item?id=40589675

This text describes a problem facing academic science: potential fraud and low-quality research driven by the current funding structure. It compares the situation to the state of computer security in the late 1990s, suggesting that a cultural shift in addressing those problems made improvement possible. Academic science, however, remains insulated from market forces even as its reputation deteriorates. Critics argue that the absence of market pressure may block necessary change while allowing academic leaders who rose through fraud to preserve the status quo. Suggested solutions include more replication, self-directed and pure blue-sky research, and greater transparency and accountability in academic publishing and the research process. Challenges remain, including worries that market intervention might not improve research overall and that market forces might not allocate funding optimally. There is also disagreement about the extent and causes of fraudulent activity in academic research, and about the role and influence of external versus internal funding sources.


Original text


> published in Nature in 2006

So around 18 years ago. That's a long time for researchers who believed in this paper's conclusions to be going down the wrong rabbit hole. What a huge waste of effort and of the lives of those with Alzheimer's.



Not everything gets retracted either. There's a surprisingly deep rot in many parts of science. There are strong incentives to publish, and a lot of the methods you can use to inflate statistical significance (i.e. p-hacking) are hard to distinguish from publication bias and other innocent explanations for falling outside of statistical expectations.

Preregistration might help, but it doesn't really address the misaligned incentives that are at the heart of academic fraud.



Even articles that publish legit findings tend to embellish data. I do this for a living, I often try to reproduce prominent results, and I regularly see things that are too good to be true. This is bad because it pushes everyone to do the same, as reviewers are now used to seeing perfect and pristine data.

I have been asked to manipulate data a few times in my career. I have always refused, but this came at the cost of internal fights, getting excluded from other projects for being "too idealistic", or missed promotions. Incentives are just perverse. Fraud and dishonesty are rewarded, pretty depressing.



Tragedy of the Commons Ruins Everything Around Me.

Everyone wants answers, so anyone who provides an answer is elevated, regardless of whether the answer is right or not. To wit: the current AI push.

Clips from The Big Short surfaced in my YouTube feed recently, and the way you worded this reminded me of the scene with the rating agency.

Is scientific research and publishing headed for its own CDS/MBS-esque implosion?



I think academic research is becoming very inefficient, and traditional Academia might eventually become stagnant. If you don't play the game I described above, it is really hard to stay afloat. I guess industrial labs, where incentives are better aligned, might become more attractive. I have seen lots of prominent scientists moving into industrial labs recently, which would be something hard to imagine even a few years back.



Yep. Worse, p-hacking can be done by accident. I mean, the term implies intent, but a dogshit null hypothesis is problematic regardless of whether it is dogshit on purpose or merely due to lack of skill on the part of the researcher. Either way, it pumps publication numbers and dumps publication quality. If 100% of researchers were 100% honest, we would still see this effect boost low-quality research.
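To make that concrete, here is a minimal sketch (my own illustration, not from the thread) of how a sloppy analysis inflates false positives: test 20 noise predictors against a noise outcome at alpha = 0.05 and some predictor looks "significant" in roughly 1 - 0.95^20 ≈ 64% of experiments, with no intent to deceive anywhere.

```python
# Minimal sketch: "accidental p-hacking" via testing many candidate predictors.
# All data are pure noise; numbers of subjects/predictors are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_subjects, n_predictors, n_experiments = 0.05, 30, 20, 2000

false_positive_runs = 0
for _ in range(n_experiments):
    outcome = rng.normal(size=n_subjects)                      # noise outcome
    predictors = rng.normal(size=(n_predictors, n_subjects))   # noise predictors
    # Test every predictor and keep the best p-value, as a careless analysis might.
    p_values = [stats.pearsonr(x, outcome)[1] for x in predictors]
    if min(p_values) < alpha:
        false_positive_runs += 1

print(f"runs with at least one 'significant' predictor: "
      f"{false_positive_runs / n_experiments:.0%}")            # ~64% expected
```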



this isn't a big deal since no one reads most of those papers, it's mostly invisible sacrifice to the metrics gods. Very prominent papers like the one discussed, on the other hand, have much bigger consequences



I’m 100% in agreement that there is a massive reproducibility crisis in science and that the publish-or-perish model is broken.

But, for completeness, paper retractions can happen for many reasons, not all of them nefarious, though it could be that most retractions are from authors trying to game the system and getting called out. For example, if the terms of using a certain data set change, you could be required to retract your paper and remove that data from the analysis.



Not only effort: millions and millions of public and private dollars wasted on followup research on a false premise, all so that an author could get citations. What a cost.



Playing devil's advocate: perhaps the authors strongly believed in the method but have failed to produce strong enough evidence. Then the goal was to renew financing in hopes of getting somewhere with the research, not to just get citations.

This of course is pure speculation



I think the point you're making is that there need not have been any malicious intent, and I accept that, but I don't really consider this a devil's advocate position, since strongly believing something will work is still not grounds enough to lie about outcomes.



It's really not an excuse. This has directly led to decades of wasted time/money/effort searching for a cure of a very prevalent disease.

It does lend to an interesting thought experiment, though.

Imagine the researchers were right after all. They just needed more time, and so they faked some results. We might have ended up with an underdog/feel-good movie about how the researchers bravely persevered in spite of the drying up of their funding.



This paper is certainly shameful.

But one should not exaggerate the impact it had. The strongest support for the amyloid hypothesis of Alzheimer's comes from multiple sources of evidence that are completely unrelated to this paper.



The problem is we don't know how many of those other lines of evidence are also fraudulent.

What we do know is that finding academic fraud is like shooting fish in a barrel. It's way too easy even for totally unfunded volunteers. The situation in science right now is reminiscent of the situation in computer security circa 1999. Back then there were so many easily exploited RCE vulns in foundational infrastructure that "hacked by teenagers" became a TV trope. Almost anyone could sit down and find a way to hack into a network without much effort. Vendors engaged in a lot of screaming and denial, sometimes even attacking the people who found exploits for them.

But, the industry changed. White hats became widespread, companies hired those ex-teenage hackers in their thousands. They put every employee through trainings. Then when that wasn't enough they started paying out millions in bug bounties, buying sandboxing companies (like Google did for Chrome) and more. Companies did this because bad security was hurting their reputations and in a competitive market that is an exploitable weakness. It was a form of self defense.

Academic science is excluded from market mechanisms by legal fiat, supposedly on the basis that this will yield better research outcomes. What it's actually done is shield universities from any need to make hard changes. Their reputation is tanking but they just don't seem to care and why should they? They'll get grants from the NIH anyway because they're all as bad as each other, and nobody in politics is talking about a total defunding of the sector yet.

Unfortunately even if the grants were cut off tomorrow they probably would still find it hard to change. It's becoming apparent that if a culture tolerates fraud for decades then eventually the leadership of these institutions are people who got there through fraud. How are universities or journals going to reform when the people who run them know full well they can't crack down on fraud without exposing their own career to audit risk?



You've diagnosed the issue, then recommended leeches and bleeding.

'Market mechanisms' already exist - no company is precluded from doing research. We can discuss the deficit in research investment that short-termist 'market mechanisms' have led to in contemporary corporations, but it's absurd to claim that the market can or should replace academic research.

It's pretty clear that in practice the 'publish or perish' academic culture fostered by the financialisation of the university, the rise of the adjunct and the death of tenure have contributed heavily to the falsification of research. In practice encouraging replication, self directed research and significantly more 'pure' 'blue sky' research than governments and universities are inclined to do, would significantly alleviate this problem.

Academia needs far less exposure to 'market mechanisms'. They're perverse incentives for good science.



To say there's a deficit in research spending pre-supposes there's a correct absolute level of resource allocation that is easily discoverable. But the reason we need markets in the first place is that governments aren't able to discover the right levels of resource allocation on their own. That's why the USSR ended up with lots of steel foundries but no software companies, nor even any competitor to the internet.

If anything research is clearly over-funded right now, especially in some fields like medical research. That's why so many scientists turn to fraud in desperation. There just aren't enough genuine discoveries to go round.

> encouraging replication, self directed research and significantly more 'pure' 'blue sky' research

Replication is the opposite of self directed blue sky research, so how do you suggest it be encouraged whilst simultaneously reducing the power of the money givers even further? BTW, many bogus papers will replicate whilst still being wrong. Replicability is not a synonym for good science.

Really, this is a very common claim that needs to go away. The solution to fraud and incompetence is not to give incompetent fraudsters even more freedom and money.



> But the reason we need markets in the first place is that governments aren't able to discover the right levels of resource allocation on their own.

Markets don't discover "the right" (i.e. social welfare maximizing) resource allocation, they discover, under ideal conditions, a pareto-optimal allocation. This is a very weak condition: "Jeff Bezos owns everything" is pareto-optimal. You need redistribution if you want to turn that into a socially optimal one.

In realistic conditions, they don't even do that: market mechanisms will reliably underinvest in things with positive externalities, and the larger the externality the greater the degree of misallocation. The positive externalities of research are enormous compared to the value captured by the researchers.



No market system anywhere ends up with one person owning everything. That is the standard under non-market systems, though.

> The positive externalities of research

... are only relevant if it's actually research, and not something that looks a lot like research without actually being so.



> Academic science is excluded from market mechanisms by legal fiat.

This to me just seems untrue. What is your basis for this claim? There is plenty of research privately funded by corporations, some of which is very influential. Often this work is published by university researchers. Ask any university researcher about the numerous compliance courses we all have to take about funding and conflict of interest.

It is true that the biggest funders (NSF, NIH) are not market-focused, but for good reason. The market does not prioritize the public good. I know first hand -- my son has a rare disease (1 out of 20,000 people). There are many drug companies putting vast resources into drug development in the hopes of a huge payoff. In reality this benefits a small number of people (I remain grateful for how improvements have helped us). I'm grateful our major scientific funding bodies are not swayed entirely by market influences because it would lead to us focusing on a narrow set of scientific problems which would ultimately limit the way it helps the public good.

In any event, I work in biomedical research. I think your diagnosis (incentives, process) is correct, but the way you discuss the attitudes and motives of researchers is wrong-headed.

You say:

> Their reputation is tanking but they just don't seem to care and why should they? They'll get grants from the NIH anyway because they're all as bad as each other, and nobody in politics is talking about a total defunding of the sector yet.

You're talking about hundreds of thousands of researchers as if they're all psychotic citation fanatics with no care for truth. That is not reality. I think the kind of psychotic, data-manipulating researcher who would put people's health and lives at risk for citations -- or fabricate data sets out of thin air -- is vanishingly rare. We can point to a handful of them -- the author of this paper, and the Daniel Arielys and Francesca Ginos of the world -- but there are tens of thousands of people in every field working on research in good faith, with utmost care. The vast, vast majority never have any scandal, never get caught up in data manipulation, and so on.

No field I know of out-right tolerates fraud (and I follow all the retraction stories fairly closely). I think the closest we get to "toleration" is researchers dealing with scientific problems who more or less say "we're not going to publicly flay you but behind your back we're all going to know what you did and your future is limited when it comes to big grants, prestigious invitations, and so on." People who are credibly accused of fraud become pariahs and often targets of scorn not only within the research community but in the press and wider community.

The most serious issue IMO is not outright fraud but poor norms around scientific practice, leading to p-hacking, HARKing, and other "forking paths" problems. Calling that type of behavior "fraudulent" is perhaps justifiable under some ways of thinking, but I think the word fraud mischaracterizes what is going on. There are, in fact, many serious efforts to root out this type of behavior and put in transparency rules to open up research to scrutiny, including among funders like the NIH.



> What is your basis for this claim?

Well, you answered your own question: "It is true that the biggest funders (NSF, NIH) are not market-focused." A lot of research is funded by taxes. That's an exclusion from market mechanisms. They don't have to convince the actual consumers of the research to buy it; we are all collectively forced to buy it by law.

> No field I know of out-right tolerates fraud

I know quite a few such fields, so we might have a different definition of "tolerate". After all this story contains the following paragraph:

“It’s unfortunate that it has taken 2 years to make the decision to retract,” says Donna Wilcock, an Indiana University neuroscientist and editor of the journal Alzheimer’s & Dementia. “The evidence of manipulation was overwhelming.”

We're talking about a retraction here, which is the weakest response possible. So ... it took two years of "investigation" to do nearly nothing, after other people did all the investigative work for free, and one of the authors continues to be employed with no consequences whatsoever even though his co-author admitted the figures were tampered with. I'd argue this is what institutional tolerance of fraud looks like.



The "consumer" who "buys" research in a commercial setting is usually the executive who is funding the department. In that context stuff like P-hacking, HARKing etc doesn't happen much because at some point your bosses boss is going to read your internal paper and notice that your claimed discovery has nothing to do with what you were originally asked to investigate. In academia that doesn't matter, it still counts as a discovery because nobody is really checking your pre-registrations. In corporate research it'll either be checked by the people paying your salary, or at a larger scale, it'll be checked by a regulator who is forcing you to pre-register your clinical trials.



There's no "consumer" who "buys" research into whether protons decay. That's basic research that will only occur if funded by national agencies that are not motivated by return on investment.

As for commercial research, similar problems of fraud exist as in academic research. Instead of prestige, the motivations are things like bonuses and promotions.

Academics actually care a great deal about fraud, funding agencies hate fraud and punish it, journals hate fraud - everyone dislikes it. Competing labs have every incentive to catch fraud conducted by their rivals. The idea that academia is rife with fraud and that nobody cares is just not true.

There will always be a certain level of fraud not just in research, but in every economic sector, every intellectual pursuit, and every sport, commercial or not. There's no system that will perfectly eliminate it, but because of its empiricism and openness, science is fairly good at correcting itself over time.



> Academic science is excluded from market mechanisms by legal fiat...

And a world class propaganda game. Go onto mainstream social media, pick a science fan at random out of the crowd and pick an argument with them about science. The average fan would have you believe that science is just short of perfect.

In a sense it's hard to blame these people, considering what two years of the media machine's full court press of pro-science culture war propaganda during a global pandemic does to a mind of average intelligence.

I mean, I suppose "winning" is important folks, but is losing your soul in the process worth it?



Absolutely. It’s not just a bad path, it’s taken researchers down a road that robbed approaches that could have helped people of resources because they lied.



A lot of science is fake, if not most of it. I learned the hard way in academia that it really seems the majority of PhD level researchers are there because they are willing to stretch the truth to "tell a good story".



If one wants less fraud, change incentives.

1. Grants and faculty positions should be less dependent on number of publications, and more on the quality of publications.

2. Place more emphasis on basic research and less on translational research. The latter encourages hyperbole and p-hacking. In the late 90s, the NIH initiated a strong emphasis that all research should have an immediate practical value. This is not reasonable, and as a result, researchers grossly exaggerate the impact of their work, and some even engage in outright fraud.



How do you determine quality? Right now, publishing below Q2 or even below Q1 in some cases is the same as not publishing at all. I've seen grants that only accept D1 papers. As a curiosity, Gregor Mendel's original work was published in a small and newly created local Brno journal. It was cited three times in the following 35 years. By all metrics, it was a low quality work. Only 40 years after being published was it rediscovered as a fundamental work.

That's the clean side of it. I've also seen papers published well above their merit just because the authors know the editors, or the paper comes from a reputable lab so it must be good. Also, the opposite is true: your work is from a small or unknown lab, or goes against the grain, and you'll be lucky if you get published at all.



> How do you determine quality?

Certainly not by citation counts. Citations have more to do with the authors' social or scientific network than the worth of the papers.

> It was cited three times in the following 35 years. By all metrics, it was a low quality work.

It was not low quality work (although it did spark quite a bit of controversy due to its perceived or actual issues). It was just an article written by an unknown author in an unknown journal.



FYI: Some have suggested that Mendel's data might be too perfect, indicating possible manipulation or fraud. (1) Mendel's results are unusually close to the expected ratios of 3:1 for dominant and recessive traits. Some argue that real-world data often show more variation. (2) In 1936, statistician Ronald A. Fisher analyzed Mendel's data and suggested it was "too good to be true." Fisher believed the results might have been altered or selectively reported to better match Mendel's hypothesis. (3) Despite these concerns, many of Mendel's experiments have been replicated, and his fundamental findings remain valid. Most scientists believe any discrepancies in Mendel's data were not due to intentional fraud but possibly unconscious bias or error.
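For the curious, a rough sketch of the kind of goodness-of-fit check behind Fisher's "too good to be true" argument, using the commonly quoted round vs. wrinkled seed totals (treat the exact numbers as illustrative). A single experiment fitting the 3:1 ratio this closely is unremarkable; Fisher's point was that across many such experiments the fits were collectively closer to expectation than chance variation predicts.

```python
# Chi-square goodness-of-fit against the expected 3:1 dominant:recessive ratio.
from scipy.stats import chisquare

observed = [5474, 1850]                         # round, wrinkled (commonly quoted totals)
total = sum(observed)
expected = [total * 3 / 4, total * 1 / 4]       # 3:1 Mendelian expectation

stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.3f}, p = {p:.3f}")  # a tiny statistic = a very close fit
```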



If money is the problem, maybe money is the solution?

Like open a betting market for study replication. If no methodological errors are found and the study can be successfully replicated, the authors get a percentage of the pool; the replication effort is run by a red team that gets paid the same percentage of the pool regardless of outcome, so their incentive is simply to pick studies that attract a lot of bets.

This would incentivize scrutinizing big findings like the one in the OP where a failure would be big, but also act as a force for unearthing dark horse findings in journals off the beaten path where replication success would be big.
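A toy sketch of the payout rule as I read it (the split and the numbers are illustrative, not a worked-out market design):

```python
# Hypothetical settlement rule for a replication betting pool:
# the red team earns a fixed cut of the pool regardless of outcome;
# the authors earn the same cut only if the study replicates cleanly.
def settle_pool(pool: float, cut: float, replicated: bool) -> dict[str, float]:
    red_team = pool * cut                        # paid either way
    authors = pool * cut if replicated else 0.0  # paid only on successful replication
    bettors = pool - red_team - authors          # remainder settles the bets
    return {"red_team": red_team, "authors": authors, "bettors": bettors}

print(settle_pool(pool=100_000, cut=0.05, replicated=True))
# {'red_team': 5000.0, 'authors': 5000.0, 'bettors': 90000.0}
```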



> Also, the opposite is true: your work is from a small or unknown lab, or goes against the grain, and you'll be lucky if you get published at all.

Unfortunately, many papers that come from obscure labs and go against the grain are both bad and wrong. It's a hard problem.



Why not criminal charges for fraud? Besides the fact that this kind of stuff is used to obtain funding in the first place (defrauding the funding agency), it’s theft (stealing funds that would otherwise be allocated to competitors), and breaking the public trust since usually these kinds of funds are allocated by government agencies.



E.g., return to the rigor and quality-aspects of research standards c. 1940 before it declined into the Sears, Roebuck and Co. model we find today.

The commercialization of academia rewards churn and volume rather than $upporting (a semi-orthogonal constraint) and promoting landmark hard-work that takes longer to gather data or develop.

Perhaps data-oriented research needs to consider extraordinary rigor as the new normal, perhaps with internal and external audits to assure standards and practices are sound.



Are you arguing that Alzheimer's research isn't basic research? That it's got so much fraud because people demand immediate practical value from it? How does that square with the fact that they've been researching it for decades, haven't found a cure or even a solid understanding of the cause, and still get funded? Surely this is an exemplar of basic research?



If the authors are pushing a novel or shaky hypothesis that purports to provide a potential cure (as they often do), yes this might not fall under basic research. For instance, arguing a particular pathway is the cause of AD without first characterizing the pathway and how it affects humans can waste tens of millions of dollars and cost years.

In the balance between trying to quickly develop a therapeutic and understanding the basic mechanisms driving the disease etiology, we see a lot of research groups with underpowered studies playing fast and loose and hyping their research. Is it outright fraud? Rarely. But it's not good science either.



How do we measure quality of publications? I know there have been attempts to do so based on journal impact factor and quantity of citations but those metrics are also manipulated.



I know we would have to change a lot from how it is set up today, but here are some ideas:

- How many times has the results been reproduced by others? - Add another layer of blindness: the person doing the lab work is not the person crunching the results. It could even be where two different groups crunch the numbers, and all groups are unknown to the others. - Avoid p hacking with pre determined (and registered!) p values - Register the experiment with hypothesis before the start of the experiment - Register authors as they are added and removed, and have that history be publicly available - All results have to be uploaded to a website before publication - The method of calculation has to be public on a website before publication.

So a high-quality paper is one where:

- the experiment was logged in advance
- the history of authors is known
- there is a distinct separation of experimenter and cruncher
- the public can get the results themselves and run the same analysis to confirm (or not!) the results
- the experiment is repeated and confirmed by others. Even if the first experiment is bad or a fraud, and the second one doesn't confirm it, a third experiment could be the tie breaker. It would be more traceable to understand whether it was the lab or the cruncher that made a mistake or was committing fraud.
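To make the registration ideas above concrete, here is a hypothetical sketch of what a machine-checkable pre-registration record could look like, so that "does the paper match its registration?" becomes an automatable comparison rather than an honor-system claim. Field names are made up, not any real registry's schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PreRegistration:
    registry_id: str
    hypothesis: str
    alpha: float                    # significance threshold fixed in advance
    primary_outcome: str
    analysis_plan_url: str          # frozen description of the calculation method
    registered_on: date
    authors: list[str] = field(default_factory=list)  # history appended, never rewritten

    def matches(self, reported_alpha: float, reported_outcome: str) -> bool:
        """Crude check that the published analysis used what was registered."""
        return reported_alpha == self.alpha and reported_outcome == self.primary_outcome
```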

Overall, we have to incentivize good science.



Many of these ideas have been tried already. Unfortunately they don't work.

1. Pre-registration. Great idea, it's often being done. Doesn't work because universities don't enforce it. They'd have to proactively check that papers match their pre-registrations and then fire people when they don't.

2. Reproducibility. Nobody gets funded to do this but even if they did, there are lots of bogus studies that can easily be reproduced. The problems are in the methods rather than the data. If you specify a logically invalid or unscientific method, then people following your instructions will calculate the same results and still be wrong.

3. Blindness/splitting work up. This is already how many papers are made and academics turn it around as a defense. Notice how in every single case where this stuff happens, the people whose names are on the paper claim the fraud was done by someone else and they had no idea. Universities invariably accept this claim without question.

4. All results have to be uploaded before publication. Did you mean raw data? Results are the primary content of the paper, so it's unclear what this would mean. Researchers in some fields heavily resist publishing raw data for various (bad) reasons, like academic culture rewarding papers but not data collection efforts, so they're incentivized to squeeze as many papers out of a dataset as possible before revealing it to competitors. In a few fields (like climatology) they often refuse to publish raw data because they're afraid of empowering people who might check their papers for errors, who they see as a sort of shadowy enemy.

5. Authorship history. Which frauds are you thinking of that this would fix?

I spent a lot of lockdown time looking at this question. You've listed five ideas, I churned through maybe 15-20. None of them work. On investigation they've usually been tried before and academics immediately work around them. Science is littered with integrity theatre in which systems are put in place in response to some fraud, and they appear to be operating on the surface, but nothing is actually being checked or enforced.

> Overall, we have to incentivize good science.

I'm by now 95% convinced the only way to do this is for scientific research to be done commercially. No government or charitable funding of science at all. As long as science is being firehosed with money by people who don't actually use or consume the resulting papers, the incentive will always be to provide the appearance of doing science without actually doing it. It's just so much more effective for fundraising. Commercial science that's purchased by someone has a customer who will actually try to check they got what they paid for at some point, and can vote with their wallet if they didn't. Also legal protections and systems to stop fraud in markets are well developed. Notice that Elizabeth Holmes went to prison for defrauding investors. If she'd done the same thing at a university she'd have gotten away with it scot free.



> If she'd done the same thing at a university she'd have gotten away with it scot free.

But she didn't though. She was rejected from academia, which is why she turned to private capital. Her fraud worked on private investors, not government funding agencies. She couldn't even convince her professor to get behind her idea, but private funds threw millions of dollars at her.



I haven't read her bio, so would be interested to know more about her being rejected from academia. The official story is that she dropped out during her undergrad and immediately formed a company. Or are you referring to her professors telling her they didn't think her idea would work? If so then they were right, but her profs are not the people handing out grants. There's no sign she wouldn't have got a grant for such a thing given that government funding is justified by the fact that it can fund long shot ideas that some say won't work, and people doing things that clearly can't work regularly get grants.



As it is now, bad researchers get rewarded and good researchers who speak up get pushed out. One of the most despicable people I've ever met is a long-term "successful" academic researcher at a major university.



Yes, but AI tools also make it easier for bad actors to generate false images that are harder to detect as false going forward. The ones that get caught now seem to be incredibly simplistic manipulations.



Karen Ashe attended Harvard for both her undergrad and MD. With all the academic fraud committed by Harvard connected people that has come to light in the past few years, one really has to wonder what is up with that institution (and higher ed more generally).



Harvard wants academic celebrities. It's not enough to do solid work, you need awards, plaudits, research that punches through the noise, articles in the mainstream press, and so on. Harvard is a brand first and foremost. If what you want most in life is to be a tenured Harvard professor in your chosen field, you need to publish research that makes a big splash.

The most surefire way to do this was to simply commit fraud rather than conduct experiments and pray you get lucky and find an outlier result worthy of significant rewards. We are witnessing the fallout now as people no longer assume honesty and integrity, and begin to analyze the actual papers for accuracy.



Indeed. Another example of this is Forbes featuring outlier successful people on its front cover who have later turned out to be involved in criminal activities or scandals. (1) Elizabeth Holmes, founder of Theranos, who was once celebrated as a tech visionary but later convicted of fraud. (2) Sam Bankman-Fried, founder of FTX, who faced multiple fraud charges following the collapse of his cryptocurrency exchange.



Forbes is just another junk brand looking to turn a profit. With Harvard and other higher education institutions there is the expectation that they are vehement proponents of scientific rigor.



It was her "protégé", Sylvain Lesné, that did the dirty work. And apparently has been rewarded with continuing employment.

Perhaps it was Ashe's Harvard education that led her to not consider double-checking the work...

Edit to add: iirc, it was the University of Minnesota that is also responsible for the "food fat is bad" advice that has made obesity an American epidemic...



I met a Harvard grad the other day. I was astounded at how below average this person was. They just got their PhD. However, it was in a soft science.

I asked him for advice on a problem in his field, and he basically couldn't answer. He talked about his degrees.

I think this person must have cheated. It's the only thing I can imagine.



> Lesné, who did not reply to requests for comment, remains a UMN professor and receives National Institutes of Health funding. The university has been investigating his work since June 2022. A spokesperson says UMN recently told Nature it had reviewed two images in question, and “has closed this review with no findings of research misconduct pertaining to these figures.” The statement did not reference several other questioned figures in the same paper. UMN did not comment on whether it had reached conclusions about other Lesné papers with apparently doctored images.

Horrifying. 2 years investigating and they’ve commented on two images?



It's standard. Universities always drag these things out for years, and often acquit either without explanation or with an explanation like "this unfortunate event was caused merely by enormous incompetence", even when that can't possibly be true.

Universities cannot/will not investigate their own staff properly.



> Neuroscientist Karen Ashe plans to retract her team’s landmark Alzheimer’s paper after acknowledging that it contains manipulated images. The 2006 study, which suggested that the disease could be caused by an amyloid beta protein, has been cited nearly 2,500 times. “I had no knowledge of any image manipulations in the published paper until it was brought to my attention two years ago,” Ashe wrote on the discussion site PubPeer, adding that she stands by the paper’s conclusions.

> Scientists are divided over whether the problems with the paper undermine the dominant, yet controversial, theory that beta-amyloid plaques are a root cause of Alzheimer’s disease.



> Scientists are divided over whether the problems with the paper undermine the dominant, yet controversial, theory that beta-amyloid plaques are a root cause of Alzheimer’s disease.

How could it not at least be a wake up call that studies that use this paper as a base could be fundamentally flawed?



Part of the problem is that studying the brain is hard. We have lots of in vitro or mouse models that tell a story, but nothing is conclusive. There have been many orthogonal studies which can replicate some of the associations, but nothing that can precisely point out underlying cause and effect. Yet people are suffering today, so researchers are following the best leads we have.



The problem is that the altered images represent a still plausible theory. There is a reason why they were believed in the first place. It isn’t that these false images are negative evidence. Instead, they don’t help or refute the underlying hypothesis at all.

The real question (that I'm not sure about) is whether anyone else has been able to replicate the original experiment.



There have been many subsequent studies which generate evidence in support of the underlying idea.

As they say, it only takes one result to disprove a theory, but nobody has been able to do so to date.



I'm amazed at how this one specific, easily detectable type of fraud is so common. One has to wonder about all the other, less obvious, ways of committing fraud and how common they must be.



This is why you're going to see an explosion of fraud cases stemming from the 90s and early-to-mid 00s. That was the period where PCs were widespread so it was pretty easy to copy words and images, but actually looking at any individual set of words or images and asking "Were these copied from somewhere?" was much more difficult. A lot of people copied/manipulated then because they thought it would be too hard to catch them. Well, technology caught up.

Just look at the multiple plagiarism cases against the former Harvard president. It was clear, at least to me, she copied liberally more out of laziness/lack of confidence because she didn't think she would get caught for small phrases here and there. I mean, who goes through all of the trouble to write a dissertation and then plagiarizes the acknowledgements???



In the Claudine Gay case, she didn't actually steal anyone's work in any of the publicized examples I saw. She clearly attributed the ideas and statements she was using, but then sometimes proceeded to paraphrase them too closely, without using quotation marks. You could argue that it's technically plagiarism, but morally, it isn't. The reason she got nailed was because billionaire donors were upset that she wasn't cracking down on pro-Palestinian student protesters.

Ironically, the guy who led the charge against Gay, Bill Ackman, is married to a celebrity academic who committed real, bona fide plagiarism. Once that came out, Ackman suddenly had all sorts of excuses for his wife's actual misconduct, which makes it clear that he never actually cared about the issue of plagiarism. He just wants kids who protest against the war in Gaza to get expelled.



I agree with everything you've said about Ackman's rationale and actions.

I don't necessarily agree with your overall characterization of Gay's plagiarism. While some of it is clearly of the kind you cite (e.g. she's clearly referencing other work in a lot of her analysis, so the fact that she doesn't just reword some phrases a little more seems like a very minor transgression to me), there are other cases that are more than just sloppiness and are outright weird, like the acknowledgements issue. This opinion article from Ruth Marcus (a generally left-leaning writer) of the Washington Post highlighted the issues very well IMO: https://archive.vn/h8lqM



I don't think anyone cares what is written in acknowledgment sections. That's where people thank their mom and dad. If they thank their mom and dad in exactly the same way as someone else, who cares?

Ruth Marcus may be liberal, but she's also extremely pro-Israeli. The entire motivation for going after Claudine Gay was that she didn't stop students from protesting against Israel's war in Gaza, so Marcus' political stance on that issue likely colors her view on the plagiarism accusations.



I disagree; I inspected Gay's writing closely and compared it to the original and from what I can tell, she had specific intent to spend the least amount of effort and skill to make something that barely passed the threshold for being publishable, or she lacked the skill entirely to write her own unique text.

I expect somebody who reaches her level of achievement to adhere to the rules of academic plagiarism to the letter.



There's a strong incentive to get a high citation count for your papers. This encourages behavior like the manipulations we're seeing here, on the part of fraudsters. But it also fails to incentivize caution on the part of researchers who cite existing works. If there were a "bamboozled count" that showed how many times a researcher cited a work that was later retracted, that would incentivize people to be a little more cautious, and perhaps avoid citing work of people who are suspected to be fudging the numbers.
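A minimal sketch of how such a "bamboozled count" could be computed, assuming citation and retraction data are already available (the authors and paper IDs below are made up):

```python
from collections import defaultdict

def bamboozled_counts(citations: dict[str, set[str]],
                      retracted: set[str]) -> dict[str, int]:
    """citations maps an author to the set of paper IDs they have cited."""
    counts: dict[str, int] = defaultdict(int)
    for author, cited_papers in citations.items():
        counts[author] = len(cited_papers & retracted)   # citations to later-retracted work
    return dict(counts)

# Toy usage with made-up IDs:
citations = {"alice": {"p1", "p2", "p3"}, "bob": {"p2", "p4"}}
retracted = {"p2", "p9"}
print(bamboozled_counts(citations, retracted))           # {'alice': 1, 'bob': 1}
```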



Making the system more elaborate would also make it more complex, and people would just find more complex ways to game it. It might even make things worse.

I am sure most citations are not by people who suspected the numbers were fudged.

If anything, the way to go is to stop using flawed metrics. Most people want to do a good job and build a good reputation. Incentives just distort this.



While we're at it, let's also add a "rickrolled counter" for people who open the link and close it in less than 10 seconds. That should incentivize people to be a little more cautious, and perhaps avoid clicking on links without first double-checking the URL.



One red flag should be that nowhere in this news article is the reader made aware of the exact nature of the manipulated images or their implications. If you go to the linked pubpeer review page you'll find why -- it's much less dramatic and all the findings were also replicated following this inquiry to show that any image alteration, that might have been made for editorial purposes, does not affect conclusions....

Drama drama drama, feed the people more drama... sigh



>One red flag should be that nowhere in this news article is the reader made aware of the exact nature of the manipulated images or their implications.

Because that was covered in detail when the manipulations were first reported and those articles are linked to in the above article. This is just reporting on the resulting retraction two years after that initial report.

>it's much less dramatic and all the findings were also replicated following this inquiry to show that any image alteration, that might have been made for editorial purposes, does not affect conclusions

Other groups had issues replicating the results with the same oligomer (often just chalked up to its instability), it's not like someone just happened to stumble upon these manipulations casually. This retraction only happened because Nature rejected the author's attempt to publish a correction. This whole thing is a black mark on Nature's record as well so if it really was just some minor change to make a picture look prettier for publishing purposes, I doubt they would have insisted on this action



“All the findings were replicated” is a claim by the accused, which is disputed by the researcher who originally found the issues, and he detailed all the contradicting claims right in that thread https://pubpeer.com/publications/8FF7E6996524B73ACB4A9EF5C0A.... Image alteration “that might have been made for editorial purposes” is a laughable euphemism for fraud, even the accused didn’t dare to use that phrasing. Not sure what’s in it for you to seriously misrepresent scientific fraud.


I disagree with this assessment. If Bik sees "shockingly blatant" copying, it's almost certain the author (or one of the authors) specifically, with intent, committed fraud. The other main explanation is incompetence (it's not impossible to misattribute a specific figure if your data handling is poor).



People have been using these mice as mouse model of alzheimer's for decades, so there must be something here. It's not like the whole research direction was "wrong"



Too late, the tens of thousands of papers citing this paper and drawing conclusions based on it won't be retracted. Nor would be retracted papers based on papers based on this paper.



It's dubious that a lot of the papers citing this paper are actually drawing conclusions based on this paper. Per Derek Lowe [1]:

> I could be wrong about this, but from this vantage point the original Lesné paper and its numerous follow-ups have largely just given people in the field something to point at when asked about the evidence for amyloid oligomers directly affecting memory. [...] The expressions in the literature about the failure to find *56 (as in the Selkoe lab’s papers) did not de-validate the general idea for anyone - indeed, Selkoe’s lab has been working on amyloid oligomers the whole time and continues to do so.

[1] https://www.science.org/content/blog-post/faked-beta-amyloid...



It's interesting that Lowe said this:

> When I review a paper, I freely admit that I am generally not thinking "What if all of this is based on lies and fakery?" It's not the way that we tend to approach scientific manuscripts. Rather, you ask whether the hypothesis is a sound one and if it was tested in a useful way: were the procedures used sufficient to trust the results and were these results good enough to draw conclusions that can in turn be built upon by further research?

I have, over time, come to treat every paper I read as being based on lies and fakery (or incompetence, unconscious bias, or intentional omission of key details), and I work to convince myself that the paper is not fraudulent or false. That is, my null hypothesis is that published work is wrong.

After chatting with many people about this, I've found that most people default to believing a paper is right, and if the figures and conclusion agree with their bias, they just move on, believing the paper to be true. I've been guilty of this in the past as well.



The only papers that are likely to draw conclusions directly from the cited papers are meta-analysis and reviews. Any pure research will have its own hypothesis, its own experiments, its own results, and its own conclusions. Most papers will simply cite it as background or related works. Even if the evidence in the cited work is doctored, the hypothesis can still be true and future papers based on it can still find valid, positive results.

This paper being manipulated doesn't imply anything about the hypothesis or any subsequent studies - it simply fails to be useful as any form of evidence.



There should be some sort of "taint" checking that will indicate how many retracted papers any given one is based on. It shouldn't be too difficult since publications are highly structured.

With LLMs we can read all the papers at once and flag the most strongly derived for further review.
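As a rough sketch of the taint idea, you can score every paper by its citation distance from any retracted paper with a simple breadth-first pass over the citation graph; the graph and IDs below are hypothetical, and a real version would pull from a citation database plus a retraction list.

```python
from collections import deque

def taint_depths(cites: dict[str, list[str]], retracted: set[str]) -> dict[str, int]:
    """cites[p] lists the papers that cite p; returns citation distance to a retraction."""
    depth = {p: 0 for p in retracted}
    queue = deque(retracted)
    while queue:
        paper = queue.popleft()
        for citing in cites.get(paper, []):
            if citing not in depth:               # keep the shortest citation path
                depth[citing] = depth[paper] + 1
                queue.append(citing)
    return depth

cites = {"retracted_2006": ["followup_a", "followup_b"], "followup_a": ["review_c"]}
print(taint_depths(cites, {"retracted_2006"}))
# {'retracted_2006': 0, 'followup_a': 1, 'followup_b': 1, 'review_c': 2}
```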



I have been reading Outlive by Dr. Peter Attia. I believe this research is used in the book to talk about Alzheimer's. I'm guessing the book will have to be revised. Gonna have to check in the acknowledgements of the book now.



I may be alone in this, but I think these threats like jailtime for fraudulent research would have the side effect of discouraging researchers that don't even engage in that stuff. I think pulling out the pitchforks is a misguided idea, and it's a shame that it's so common today.

On one hand, these people have a big responsibility to do good work and stay out of trouble for the sake of the ones they're helping (i.e. Alzheimer's patients), but on the other hand their impact depends a lot on their mental/emotional state -- if you put more bs in their mental buffer, it's naturally going to mess with them and lead to a decline in quality. They have to focus.



Does that same reasoning apply to people who work in finance or accounting?

Distrust in science is at an all time high and these frauds cause incalculable damage to scientific integrity. They are literally poisoning the well of human knowledge. That should be considered a high crime.



I know 2 people, including my own mother, who died of Alzheimer's, and one who has it right now. Who knows where we would be right now if we hadn't gone down the wrong track with this fraudulent paper? Things like this are why trust in scientists is at an all-time low. There's corruption at every single layer of our society and I don't know what we can do to get honor and honesty back as top moral goals for citizens.

This amyloid theory has been completely debunked at this point, but almost two decades of progress have been wasted because of it. It's so infuriating, but I don't see this stopping because there's no accountability.



> this is why trust in scientists is at an all-time low

People need to stop making excuses to avoid science, use their brain cells, and develop some critical sense. When you feel afraid of science, stop and think that the other option is worse.



Everyone in power is in on it for revenue maximization. Treating someone at an early stage is never the goal; making money and winning awards is the goal. The FDA is 100% in on it.



> Treating someone at an early stage is never the goal

This claim is just an opinion. I bet that if we check the real statistics there are lots of cases that show exactly the opposite behavior.

> making money and winning awards is the goal

We should remember that making money and winning awards are perfectly legit goals, as long as the work is done honestly.

Working with Alzheimer's patients probably ranks among the most emotionally hard and depressing work I could imagine. We should stop treating scientists as missionary figures who were born to save all of us for free.



Those who don't even see the problem are already corrupted. Early stage treatments at scale can be very possible with cheap preventative bloodwork and inexpensive small molecules. Instead, we have no diagnostics until it's already too late, and then we have 50K/year "treatments" that don't even work. The ones you referred to are not scientists; they're scientific mercenaries.



Billionaires and powerful politicians (and their parents and other loved ones) die of Alzheimer's too.

If it was just a big cartel conspiracy, you would see a different disease pattern when comparing the elites to everyone else.



I still firmly believe a large portion of alzheimer cases are due to candida or other forms of gut dysbiosis, leading to the breakdown of the blood-brain barrier. Candida then enters the brain and the glial cells break down the candida producing amyloid plaques. But the root cause is from the gut.



Olfactory in what way? I think toxins could definitely get in by breathing it in, e.g. like mold, which is a generally underestimated, but in my opinion significant source of complex chronic conditions. So for sure those are possibilities too.

I suspect Alzheimer's (which is a subset of dementia) is a cluster of diseases similar to cancer. I think a large subset is due to candida, but I wouldn't be surprised if mold and other environmental toxins cause other subsets.



This person Lesné shouldn’t have a job at a university after this. In fact, I strongly believe this person should be behind bars for having swindled the government and tax payers out of many millions of dollars in wasted grant money. The incentives for perpetuating this kind of fraudulent research have to be utterly devastating to those who do it to even hope for a deterrence against future malfeasance.



These and your past few posts are obviously ChatGPT, but you posted something real only an hour ago, and your account is years old. So what's the deal? Why are you doing this?



The audacity you have to claim causality here is wild. I’m a geneticist and boy let me tell you that it is not easy to unravel causality. Even in the systems I’ve studied for years, I would never make claims like this.



If you are a geneticist, you would know genetic correlates. The only causality of AD that I assert is with lifestyle factors and genetic factors, e.g. APOE4, that govern neuroinflammation, as is strongly established.



With regard to lifestyle factors, here is some evidence to get you started:

Healthy lifestyle and the risk of Alzheimer dementia (PMC7455318)

> Compared to participants with 0 to 1 healthy lifestyle factor, the risk of Alzheimer dementia was 37% lower (pooled HR 0.63, 95% CI 0.47–0.84) in those with 2 to 3 healthy lifestyle factors and 60% lower (pooled HR 0.40, 95% CI 0.28–0.56) in those with 4 to 5 healthy lifestyle factors.



It's not opinion when there is evidence:

APOE4 homozygozity represents a distinct genetic form of Alzheimer's disease (PMID: 38710950)

> The study concludes that APOE4 homozygotes represent a genetic form of AD

Healthy lifestyle and the risk of Alzheimer dementia (PMC7455318)

> Compared to participants with 0 to 1 healthy lifestyle factor, the risk of Alzheimer dementia was 37% lower (pooled HR 0.63, 95% CI 0.47–0.84) in those with 2 to 3 healthy lifestyle factors and 60% lower (pooled HR 0.40, 95% CI 0.28–0.56) in those with 4 to 5 healthy lifestyle factors.



I tried reading your post with an open mind, but starting with your first citation of PMID 38753870 you neglect to mention it was a study done in genetically modified mice. For you to claim it is "the single most powerful supplement for lowering its risk might be urolithin-A" right after you confidently assert "Beta-amyloid is a symptom of AD, not the cause" looks like you're just as bad at jumping to conclusions as the beta-amyloiders you "don't know how could mix this up."



In effect I asserted that neuroinflammation is at the root of AD, and that factors that lower it will in turn lower the risk of AD. This I stand by completely, and so the lifestyle factors that lower neuroinflammation are relevant. As for PMID 38753870, it is just one of many relevant studies showing brain health benefits from urolithin A. In particular, urolithin A works by inducing mitophagy (among its many other effects), with mitophagy improving mitochondrial health and therefore metabolic health. Unhealthy lifestyle choices compromise the excitatory/inhibitory balance in the brain, and it goes downhill from there.



What we have today in the field of drug development, and we have had this for many years now, is the development of cost-inefficient late-stage treatments that maximize revenue exploitation, but often don't even work. Instead, what we need is early-stage and preventative diagnostic measures, and cheap scalable generic small molecules or vaccines, coupled with lifestyle and environmental changes, that together reverse or treat the condition when it's still reversible or treatable. We will not get this from the FDA because it's 100% in bed with the scientific mercenaries that like to call themselves scientists. Moreover, I assert that most of the ones downvoting my comments here represent the mercenaries that work in the industry to earn some of this exploitative revenue for themselves, and they don't want the size of the pie to be reduced.



Please consider presenting the evidence without the personal swipes, and please consider reviewing the HN Guidelines: https://news.ycombinator.com/newsguidelines.html

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Your comments in this thread would be more constructive without the swipes.



It's all well and good for you to discipline me, and I will be happy to edit and abide, but what's not good is you letting others unfairly chew me alive. It's just wrong.



Yes, you're right. Multiple people in the thread needed to be reminded of this.

I wrote my comment in reply to you since you seemed to be the most active, and several of your replies were the most recent at the time I was responding.



Things to consider:

* Alzheimer's might not be a disease, but a syndrome (several diseases that look about the same).
* Alzheimer's onset might be multifactorial (like diabetes, unlike Huntington's).
* Everything we know about the pathophysiology of Alzheimer's might be wrong. What we believe is right would actually prevent us from making progress.

It doesn't help that very loud idiots crave recognition rather than discovery.

"I know that I know nothing" ~Socrates(?)



I actually know one of the guys who faked this data from a young age. He was basically losing his mind because of rage issues due to his extreme phobia of being poor/ falling into poverty. It’s interesting to me how these companies basically played into his fear of losing his funding to push this BS.
