> This doesn’t mean that the authors of that paper are bad people!
> We should distinguish the person from the deed. We all know good people who do bad things
> They were just in situations where it was easier to do the bad thing than the good thing
I can't believe I just read that. What's the bar for a bad person if you haven't passed it at "it was simply easier to do the bad thing"?
In this case, it seems not owning up to the issues is the bad part. That's a choice they made. Actually, multiple choices at different times, it seems. If you keep choosing the easy path instead of the path that is right for those that depend on you, it's easier for me to just label you a bad person.
Labeling people as villains (as opposed to condemning acts), in particular those you don’t know personally, is almost always an unhelpful oversimplification of reality. It obscures the root causes of why the bad things are happening, and stands in the way of effective remedy.
As with anything, it's just highly subjective. What some call a heinous act is another person's heroic act. Likewise, where I draw the line between an unlucky person and a villain is going to be different from where someone else does.
Personally, I do believe that there are benefits to labelling others as villains if a certain threshold is met. It reduces cognitive strain by allowing us to blanket-label all of their acts as evil [0] (although with the drawback of occasionally accidentally labelling acts of good as evil), allowing us to prioritise more important things in life than the actions of what we call villains.
I'm not sure the problems we have at the moment stem from a lack of accountability. I mean, let's go a little overboard on holding people to account first, then wind it back if that happens. The crisis at the moment is managerialism across all of our institutions, which serves to displace accountability.
Just to add on: armchair quarterbacking is a thing. It's easy in hindsight to label decisions as the result of bad intentions, but that is completely different from whatever might have been at play in the moment, and retrospective judgement is often unrealistic.
I would argue that villainy and "bad people" is an overcomplication of ignorance.
If we equate being bad to being ignorant, then those people are ignorant/bad (with the implication that if people knew better, they wouldn't do bad things)
I'm sure I'm oversimplifying something; looking forward to reading responses.
It’s possible to take two opposing and flawed views here, of course.
On the one hand, it is possible to become judgmental, habitually jumping to unwarranted and even unfair conclusions about the moral character of another person. On the other, we can habitually externalize the “root causes” instead of recognizing the vice and bad choices of the other.
The latter (externalization) is obvious when people habitually blame “systems” to rationalize misbehavior. This is the same logic that underpins the fantastically silly and flawed belief that under the “right system”, misbehavior would simply evaporate and utopia would be achieved. Sure, pathological systems can create perverse incentives, even ones that put extraordinary pressure on people, but moral character is not just some deterministic mechanical response to incentive. Murder doesn’t become okay because you had a “hard life”, for example. And even under “perfect conditions”, people would misbehave. In fact, they may even misbehave more in certain ways (think of the pathologies characteristic of the materially prosperous first world).
So, yes, we ought to condemn acts, we ought to be charitable, but we should also recognize human vice and the need for justice. Justly determined responsibility should affect someone’s reputation. In some cases, it would even be harmful to society not to harm the reputations of certain people.
I'm guessing you believe that a person is always completely responsible for their actions. If you are doing root cause analysis you will get nowhere with that attitude.
IMHO, you should deal with actual events, or even ideas, instead of people. No two people share the exact same values.
For example, you assume the guy trying to cut the line is a horrible person and a megalomaniac because you've seen this a thousand times. He really may be that, or maybe he's having an extraordinarily stressful day, or maybe he's just not integrated with the values of your society ("cutting the line is bad, no matter what"), or anything else. BUT none of that really helps you think clearly. You just get angry and maybe raise your voice when you warn him, because "you know" he won't understand otherwise. So now you've left your own values too, because you are busy fighting a stereotype.
IMHO, the correct course of action is assuming good faith even with bad actions, even with persistent bad actions, and thinking about the productive things you can do to change the outcome, or deciding that you cannot do anything.
You can perhaps warn the guy, and then if he ignores you, you can even go to security or pick another hill to die on.
I'm not saying that I can do this myself. I fail a lot, especially when driving. It doesn't mean I'm not working on it.
I used to think like this, and it does seem morally sound at first glance, but it has the big underlying problem of creating an excellent context in which to be a selfish asshole.
Turns out that calling someone on their bullshit can be a perfectly productive thing to do, it not only deals with that specific incident, but also promotes a culture in which it's fine to keep each other accountable.
I honestly think this would qualify as "ruinous empathy"
It's fine and even good to assume good faith, extend your understanding, and listen to the reasons someone has done harm - in a context where the problem was already redressed and the wrongdoer is labelled.
This is not that. This is someone publishing a false paper, deceiving multiple rounds of reviewers, manipulating evidence, knowingly and for personal gain. And they still haven't faced any consequences for it.
I don't really know how to bridge the moral gap with this sort of viewpoint, honestly. It's like you're telling me to sympathise with the arsonist whilst he's still running around with gasoline
I think they're actually just saying bad actors are inevitable, inconsistent, and hard to identify ahead of time, so it's useless to be a scold when instead you can think of how to build systems that are more resilient to bad acts
What if the root cause is that because we stopped labeling villains, they no longer fear being labeled as such. The consequences for the average lying academic have never been lower (in fact they usually don’t get caught and benefit from their lie).
Surely the public discourse over the past decades has been steadily moving from substantive towards labeling each other villains, not the other way around.
People are afraid to sound too critical. It's very noticeable how every article that points out a mistake anywhere in a subject that's even slightly politically charged, has to emphasize "of course I believe X, I absolutely agree that Y is a bad thing", before they make their point. Criticising an unreplicable paper is the same thing. Clearly these people are afraid that if they sound too harsh, they'll be ignored altogether as a crank.
> Clearly these people are afraid that if they sound too harsh, they'll be ignored altogether as a crank.
This is true though, and one of those awkward times where good ideals like science and critical feedback brush up against potentially ugly human things like pride and ego.
I read a quote recently, and I don't like it, but it's stuck with me because it feels like it's dancing around the same awkward truth:
"tact is the art of making a point without making an enemy"
I guess part of being human is accepting that we're all human and will occasionally fail to be a perfect human.
Sometimes we'll make mistakes in conducting research. Sometimes we'll make mistakes in handling mistakes we or others made. Sometimes these mistakes will chain together to create situations like the post describes.
Making mistakes is easy - it's such a part of being human we often don't even notice we do it. Learning you've made a mistake is the hard part, and correcting that mistake is often even harder. Providing critical feedback, as necessary as it might be, typically involves putting someone else through hardship. I think we should all be at least slightly afraid and apprehensive of doing that, even if it's for a greater good.
The fountain is charity. This is no mere matter of sentiment. Charity is willing the objective good of the other. This is what should inform our actions. But charity does not erase the need for justice.
American culture has this weird thing about avoiding blame and direct feedback. It's never appropriate to say "yo, you did a shit job, can you not fuck it up next time?". For example, I have a guy on my team who takes 10 minutes every standup - if everyone did this, standup would turn into an hour-long meeting - but telling him "bro what the fuck, get your shit together" is highly inappropriate, so we all just sit and suffer. Soon I'll have my yearly review and I have no clue what to expect, because my manager only gives me feedback when strictly and explicitly required, so the entire cycle "I do something wrong" -> "I get reprimanded" -> "I get better" can take literal years. Unless I accidentally offend someone, then I get a 1:1 within an hour. One time I was upset about the office not having enough monitors and posted this on Slack, and my manager told me not to do that because calling out someone's shit job makes them lose face and that's a very bad thing to do.
Whatever happens, avoid direct confrontation at all costs.
I'll be direct with you, this sounds like an issue specific to your workplace. Get a better job with a manager who can find the middle ground between cursing in frustration and staying silent.
On one hand, I totally agree - soliciting and giving feedback is a weak spot in our culture.
On the other hand, it sounds like this workplace has weak leadership - have you considered leaving for some place better? If the manager can’t do their job enough to give you decent feedback and stop a guy giving 10 min stand ups, LEAVE.
Reasons for not leaving? Ok, then don’t be a victim. Tell yourself you’re staying despite the management and focus on the positive.
I agree. If the company culture is not even helping or encouraging people to give pragmatic feedback, the war is already lost. Even the CEO and the board are in for a few years of stress.
The biggest reason for not leaving is that I understand that perfect things don't exist and everything is about tradeoffs. My current work is complete dogshit - borderline retarded coworkers, hilariously incompetent management. But on the other hand they pay me okay salary while having very little expectations, which means that if I spend entire day watching porn instead of working, nobody cares. That's a huge perk, because it makes the de facto salary per hour insanely huge. Moreover, I found a few people from other teams I enjoy talking to, which means it's a rare opportunity for me to build a social life. Once they start requiring me to actually put in the effort, I'll bounce.
While I agree there’s a childish softness in our culture in many respects, you don’t need to go to extremes and adopt thuggish or boorish behavior (which is also a problem, one that is actually concomitant with softness, because soft people are unable to bear discomfort or things not going their way). Proportionality and charity should inform your actions. Loutish behavior makes a person look like an ill-mannered toddler.
In general Western society has effectively outlawed "shame" as an effective social tool for shaping behavior. We used to shame people for bad behavior, which was quite effective in incentivizing people to be good people (this is overly reductive but you get the point). Nowadays no one is ever at fault for doing anything because "don't hate the player hate the game".
A blameless organization can work, so long as people within it police themselves. As a society this does not happen, thus making people more steadfast in their anti-social behavior
I guess he means that the authors can still be decent people in their private and even professional lives and not general scoundrels who wouldn't stop at actively harming other people to gain something.
At which point do you cross the line? Somebody who murders to take someone else's money is ultimately just too lazy to provide value in return for money, so they're not evil?
There are extremely competent coworkers I wouldn't want as neighbours. Some of my great neighbours would make very sloppy and annoying coworkers.
These people are terrible at their job, perhaps a bit malicious too. They may be great people as friends and colleagues.
I think calling someone a "bad person" (which is itself a horribly vague term) for one situation where you don't have all the context is something most people should be loath to do. People are complicated and in general normal people do a lot of bad things for petty reasons.
Other than the label being difficult to apply, these factors also make the argument over who is a "bad person" not really productive, and I will put those sorts of caveats into my writings because I just don't want to waste my time arguing the point. Like, what does "bad person" even mean, and is it even consistent across people? I think it makes a lot more sense to apply clearer labels for which we have a lot more evidence, like "untrustworthy scientist" (which you might think inherently makes someone a bad person, or not).
Not sure if this in jest referring to the inherently sanctimonious nature of the framing, but this is actually exactly what I was gesturing towards. If it didn't feel good, then it would be either an unintentional action (random or coerced), or an irrational one (go against their perceived self-interest).
The whole "bad vs good person" framing is probably not a very robust framework, never thought about it much, so if that's your position you might well be right. But it's not a consideration that escaped me, I reasoned under the same lens the person above did on intention.
It's 2026, and social media brigading and harassment is a well-known phenomenon. In light of that, trying to preemptively de-escalate seems like a Good Thing.
But there is a concern which goes beyond the "they" here. Actually, "they" could just as well not exist, and the whole narrative in the article could be some LLM hallucination; we are still training ourselves in how we respond to this or that behavior we observe, and influencing how we will act in the future.
If we go with the easy path of labeling people as the root cause, that's the habit we are forging for ourselves. We are missing the opportunity to hone our sense of nuance and critical thought about the wider context, which might be a better starting point for tackling the underlying issue.
Of course, name-and-shame is still there in the rhetorical toolbox, and everyone and their dog is able to use it, even when rage and despair are all that remain in control of one's mouth. Using it with the relevant parsimony, however, is not going to happen from mere reactive habits.
Nowadays high citation numbers no longer mean what they used to. I've seen too many highly cited papers with issues that keep getting referenced, probably because people don't really read the sources anymore and just copy-paste the citations.
On my side-project todo list, I have an idea for a scientific service that overlays a "trust" network over the citation graph. Papers that uncritically cite other work that contains well-known issues should get tagged as "potentially tainted". Authors and institutions that accumulate too many of such sketchy works should be labeled equally. Over time this would provide an additional useful signal vs. just raw citation numbers. You could also look for citation rings and tag them. I think that could be quite useful but requires a bit of work.
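The core of that overlay could start as something very simple. Here is a rough sketch (function and data names are all made up for illustration) of the tagging step: propagate a "potentially tainted" flag upward through the citation graph from papers with known issues to everything that cites them, directly or transitively.

```python
from collections import defaultdict, deque

def find_tainted(citations, known_bad):
    """Return the set of papers that are known bad or (transitively)
    cite a known-bad paper. `citations` maps paper -> list of papers
    it cites; `known_bad` is a set of papers with documented issues."""
    # Build the reverse graph: paper -> papers that cite it.
    cited_by = defaultdict(list)
    for paper, refs in citations.items():
        for ref in refs:
            cited_by[ref].append(paper)
    # BFS from the known-bad seeds, following "is cited by" edges.
    tainted = set(known_bad)
    queue = deque(known_bad)
    while queue:
        current = queue.popleft()
        for citer in cited_by[current]:
            if citer not in tainted:
                tainted.add(citer)
                queue.append(citer)
    return tainted

# Toy graph: B cites A, C cites B, D cites only E.
graph = {"B": ["A"], "C": ["B"], "D": ["E"]}
print(sorted(find_tainted(graph, {"A"})))  # ['A', 'B', 'C']
```

A real version would need the critical-vs-uncritical distinction discussed below this idea, plus some decay with citation distance, rather than this all-or-nothing flag.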
Interesting idea. How do you distinguish between critical and uncritical citation? It’s also a little thorny—if your related work section is just describing published work (which is a common form of reviewer-proofing), is that a critical or uncritical citation? It seems a little harsh to ding a paper for that.
Going to conferences and seeing researchers who've built a career doing subpar (sometimes blatantly 'fake') work has made me grow increasingly wary of experts. Worst of all, lots of people just seem to go along with it.
Still I'm skeptical about any sort of system trying to figure out 'trust'. There's too much on the line for researchers/students/... to the point where anything will eventually be gamed. Just too many people trying to get into the system (and getting in is the most important part).
The current system is already getting gamed. There's already too much on the line for researchers/students, so they don't admit any wrongdoing or retract anything. What's the worst that could happen by adding a layer of trust on top of the h-index?
I think it could end up helping a bit in the short term. But in the end an even more complicated system (even if in principle better) will reward those spending time gaming it even more.
The system ends up promoting an even more conservative culture. What might start great will end up with groups and institutions being even more protective of 'their truths' to avoid getting tainted.
Don't think there's any system which can avoid these sorts of things; people were talking about this before WW1, and globalisation just put it in overdrive.
Pretty much all fields have shit papers, but if you ever feel the need to develop a superiority complex, take a vacation from your STEM field and have a look at what your university offers under the "business"-anything label. If anyone in those fields manages to produce anything of quality, they're defying the odds and should be considered one of the greats along the line of Euclid, Galileo Galilei, or Isaac Newton - because they surely didn't have many shoulders to stand on either.
This is exactly how I felt when studying management as part of ostensibly an Engineering / Econ / Management degree.
When you added it up, most of the hard parts were Engineering, and a bit of Econ. You would really struggle to work through tough questions in engineering, spend a lot of time on economic theory, and then read the management stuff like you were reading a newspaper.
Management you could spot a mile away as being soft. There's certainly some interesting ideas, but even as students we could smell it was lacking something. It's just a bit too much like a History Channel documentary. Entertaining, certainly, but it felt like false enlightenment.
I suppose it's to be expected, the business department is built around the art of generating profit from cheap inputs. It's business thinking in action!
The root of the problem is referred to implicitly: publish or perish. To get tenure, you need publications, preferably highly cited, and money, which comes from grants that your peers (mostly from other institutions) decide on. So the mutual back scratching begins, and the publication mill keeps churning out papers whose main value is the career of the author and --through citation-- influential peers, truth be damned.
> Stop citing single studies as definitive. They are not. Check if the ones you are reading or citing have been replicated.
And from the comments:
> From my experience in social science, including some experience in management studies specifically, researchers regularly believe things – and will even give policy advice based on those beliefs – that have not even been seriously tested, or have straight up been refuted.
Sometimes people don't even have a single non-replicable study to cite. They invent studies and use those! An example is the "Harvard Goal Study" that is often trotted out at self-review time at companies. The supposed study suggests that people who write down their goals are more likely to achieve them than people who do not. However, Harvard itself cannot find any such study.
Definitely ignore single studies, no matter how prestigious the journal or numerous the citations.
Straight-up replications are rare, but if a finding is real, other PIs will partially replicate and build upon it, typically as a smaller step in a related study. (E.g., a new finding about memory comes out, my field is emotion, I might do a new study looking at how emotion and your memory finding interact.)
If the effect is replicable, it will end up used in other studies (subject to randomness and the file drawer effect, anyway). But if an effect is rarely mentioned in the literature afterwards...run far, FAR away, and don't base your research off it.
A good advisor will be able to warn you off lost causes like this.
There is a surprisingly large amount of bad science out there. And we know it.
One of my favourite write-ups on the subject: John P. A. Ioannidis, "Why Most Published Research Findings Are False"
John Ioannidis is a weird case. His work on the replication crisis across many domains was seminal and important. His contrarian, even conspiratorial take on COVID-19 not so much.
Ugh, wow, somehow I missed all this. I guess he joins the ranks of the scientists who made important contributions and then leveraged that recognition into a platform for unhinged diatribes.
What’s happening here? “Most Published Research Findings Are False” -> “Most Published COVID-19 Research Findings Are False” -> “Uh oh, I did a wrongthink, let’s backtrack a bit”. Is that it?
This likely represents only a fragment of a larger pattern. Research contradicting prevailing political narratives faces significant professional obstacles, and as this article shows, so do critiques of research that doesn't.
The webpage of the journal [1] only says 109 citations of the original article; this counts only "indexed" journals, which are not guaranteed to be ultra-high quality but at least filter out the worst "pay us to publish crap" journals.
ResearchGate says 3936 citations. I'm not sure what they are counting, probably all the PDFs uploaded to ResearchGate.
I'm not sure how they count 6000 citations, but I guess they are counting everything, including quotes by the vice president. Probably 6001 after my comment.
Quoted in the article:
>> 1. Journals should disclose comments, complaints, corrections, and retraction requests. Universities should report research integrity complaints and outcomes.
All comments, complaints, corrections, and retraction requests? Unmoderated? Einstein articles will be full of comments explaining why he is wrong, from racists to people who can't spell Minkowski to save their lives. In /newest there is like one post per week from someone who has discovered a new physics theory with the help of ChatGPT. Sometimes it's the same guy, sometimes it's a new one.
> I'm not sure how they count 6000 citations, but I guess they are counting everything, including quotes by the vice president. Probably 6001 after my comment.
The number appears to be from Google Scholar, which currently reports 6269 citations for the paper
> All comments, complaints, corrections, and retraction requests? Unmoderated? Einstein articles will be full of comments explaining why he is wrong, from racists to people who can't spell Minkowski to save their lives. In /newest there is like one post per week from someone who has discovered a new physics theory with the help of ChatGPT. Sometimes it's the same guy, sometimes it's a new one.
Judging from PubPeer, which allows people to post all of the above anonymously and with minimal moderation, this is not an issue in practice.
They mentioned a famous work, which will naturally attract cranks to comment on it. I’d also expect to get weird comments on works with high political relevance.
Not enough is understood about the replication crisis in the social sciences. Or indeed in the hard sciences. I do wonder whether this is something that AI will rectify.
I appreciate the convenience of having the original text on hand, as opposed to having to download it off Dropbox of all places.
But if you're going to quote the whole thing, it seems easier to just say so rather than quoting it bit by bit interspersed with "King continues" and annotating each "I" with "[King]".
Social fame is fundamentally unscalable, as it operates in the limited room on the stage and the even scarcer spotlights.
The benefits we can get from collective works, including scientific endeavors, are indefinitely large, as in far more important than what can be held in the head of any individual.
Incentives are just irrelevant as far as global social good is concerned.
It's harder to do social/human science because it's just easier to make mistakes that lead to bias. Such mistakes are harder to make in maths, physics, biology, medicine, astronomy, etc.
I often say that "hard sciences" have often progressed much more than social/human sciences.
You get a replication crisis on the bleeding edge between replication being possible and impossible. There's never going to be a replication crisis in linear algebra, and there's never going to be one in theology; there definitely was a replication crisis in psych, and a replication crisis in nutrition science is distinctly plausible, and would be extremely good news for the field as it moves through the edge.
I agree. Most of the time people think STEM is harder but it is not. Yes, it is harder to understand some concepts, but in social sciences we don't even know what the correct concepts are. There hasn't been so much progress in social sciences in the last centuries as there was for STEM.
I'm not sure if you're correct. In fact there has been a revolution in some areas of social science in the last two decades due to the availability of online behavioural data.
Being practical, and understanding the gamification of citation counts and research metrics today, instead of going for a replication study and trying to prove a negative, I'd instead go for contrarian research which shows a different result (or possibly excludes the original result; or possibly doesn't even if it does not confirm it).
These probably have a bigger chance of being published, as you are providing a "novel" result instead of fighting the get-along culture (which is, honestly, present in the workplace as well). But ultimately, they are harder to do (research-wise, not politically!) because they possibly mean you have figured out an actual thing.
Not saying this is the "right" approach, but it might be a cheaper, more practical way to get a paper turned around.
Whether we can work this out in research in a proper way is linked to whether we can work this out everywhere else. How many times have you seen people pat each other on the back despite lousy performance and no results? It's just easier to switch jobs in the private sector than in research, so you'll have more people there who aren't afraid to call out a bad job, and, well, there's that profit that needs to pay your salary too.
Most of these studies get published based on elaborate constructions of essentially t-tests for differences in means between groups. Showing the opposite means showing no statistical difference, which is almost impossible to get published, for very human reasons.
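To make the "difference in means" point concrete, here is a stdlib-only sketch of the two-sample (Welch's) t statistic such studies typically reduce to. The data and variable names are invented for illustration:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for a difference in means between two
    groups with possibly unequal variances and sample sizes."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Hypothetical profitability scores for two groups of companies.
sustainable = [5.1, 4.8, 5.5, 5.0, 4.9]
control     = [5.0, 4.7, 5.6, 4.9, 5.1]
t = welch_t(sustainable, control)
# A |t| near zero supports "no difference between the groups",
# which is exactly the hard-to-publish result described above.
print(round(t, 3))
```

The asymmetry follows: a large |t| makes a publishable "effect found" claim, while a t near zero is a null result that journals rarely print.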
My point was exactly not to do that (which is really an unsuccessful replication), but instead to find an actual, live correlation between the same input, rigorously documented and justified, and a new "positive" conclusion.
As I said, harder from a research perspective, but if you can show, for instance, that sustainable companies are less profitable with a better study, you have basically contradicted the original one.
Does it bug anyone else when an article has so many quotes it's practically all italics? Change the formatting style so we don't have to read pages of italic quotes.
Not even surprised. My daughter tried to reproduce a well-cited paper a couple of years back as part of her research project. It was not possible. They pushed for a retraction, but the university didn't want to do it because it would cause political issues, as one of the peer-reviewers is tenured at another closely associated university. She almost immediately fucked off and went to work in the private sector.
That's not right; retractions should only be for research misconduct cases. It is a problem with the article's recommendations too. Even if a correction is published that the results may not hold, the article should stay where it is.
But I agree with the point about replications, which are much needed. That was also the best part in the article, i.e. "stop citing single studies as definitive".
Isn't at least part of the problem with replication that journals are businesses? They're selling based partly on limited human attention, and on the desire to see something novel, to see progress in one's chosen field. Replications don't fit a commercial publication's goals.
Institutions could do something, surely. Require one-in-n papers be a replication. Only give prizes to replicated studies. Award prize monies split between the first two or three independent groups demonstrating a result.
The 6k citations though ... I suspect most of those instances would just assert the result if a citation wasn't available.
Not in academia myself, but I suspect the basic issue is simply that academics are judged by the number of papers they publish.
They are pushed to publish a lot, which means journals have to review a lot of stuff (and they cannot replicate findings on their own). Once a paper is published in a decent journal, other researchers may not "waste time" replicating all its findings, because they also want to publish a lot. The result is papers getting popular even though no one has actually bothered to replicate the results, especially if those papers are quoted by a lot of people and/or are written by otherwise reputable people or universities.
Could you also provide your critical appraisal of the article so this can be more of a journal club for discussion vs just a paper link? I have no expertise in this field so would be good for some insights.
Once, back around 2011 or 2012, I was using Google Translate for a speech I was to deliver in church. It was shorter than one page printed out.
I only needed the Spanish translation. Now I am proficient in spoken and written Spanish, and I can perfectly understand what is said, and yet I still ran the English through Google Translate and printed it out without really checking through it.
I got to the podium and there was a line where I said "electricity is in the air" (a metaphor, obviously) and the Spanish translation said "electricidad no está en el aire" and I was able to correct that on-the-fly, but I was pissed at Translate, and I badmouthed it for months. And sure, it was my fault for not proofing and vetting the entire output, but come on!
Family member tried to do work relying on previous results from a biotech lab. Couldn’t do it. Tried to reproduce. Doesn’t work. Checked work carefully. Faked. Switched labs and research subject. Risky career move, but. Now has a career. Old lab is in mental black box. Never to be touched again.
For original research, a researcher is supposed to replicate studies that form the building blocks of their research. For example, if a drug is reported to increase expression of some mRNA in a cell, and your research derives from that, you will start by replicating that step, but it will just be a note in your introduction and not published as a finding on its own.
When a junior researcher, e.g. a grad student, fails to replicate a study, they assume it's technique. If they can't get it after many tries, they just move on, and try some other research approach. If they claim it's because the original study is flawed, people will just assume they don't have the skills to replicate it.
One of the problems is that science doesn't have great collaborative infrastructure. The only way to learn that nobody can reproduce a finding is to go to conferences and have informal chats with people about the paper. Or maybe if you're lucky there's an email list for people in your field where they routinely troubleshoot each other's technique. But most of the time there's just not enough time to waste chasing these things down.
I can't speak to whether people get blackballed. There's a lot of strong personalities in science, but mostly people are direct and efficient. You can ask pretty pointed questions in a session and get pretty direct answers. But accusing someone of fraud is a serious accusation and you probably don't want to get a reputation for being an accuser, FWIW.
I haven't identified an outright fake one but in my experience (mainly in sensor development) most papers are at the very least optimistic or are glossing over some major limitations in the approach. They should be treated as a source of ideas to try instead of counted on.
I've also seen the resistance that results from trying to investigate or even correct an issue in a key result of a paper. Even before it's published the barrier can be quite high (and I must admit that since it's not my primary focus and my name was not on it, I did not push as hard as I could have on it)
I've read of a few cases like this on Hacker News. There's often that assumption, sometimes unstated: if a junior scientist discovers clear evidence of academic misconduct by a senior scientist, it would be career suicide for the junior scientist to make their discovery public.
The replication crisis is largely particular to psychology, but I wonder about the scope of the "don't rock the boat" issue.
I will not go into the details of the topic but the "What to do" is the most obvious thing.
If an impactful paper cannot be backed up by other works, that should be a smell.
And thus everyone citing it has a fatally flawed paper, if it's central to their thesis. So whoever proves the root is rotten should gain their funding from that point forward.
I see this approach as a win win for science. Debunking bad science becomes a for profit enterprise, rigorous science becomes the only one sustainable, the paper churn gets reduced, as even producing a good one becomes a financial risk, when it becomes foundational and gets debunked later.
That's a very good point. Some of what's called "science" today, in popular media and coming from governments, is religion. "We know all, do not question us." It's the common problem of headlines along the lines of "scientists say" or "The Science says", which should always be a red flag - but the majority of people believe it.
Yes, that's the problem, many do, and they swear by these oversimplified ideas and one-liners that litter the field of popular management books, fully believing it's all "scientific" and they'll laugh at you for questioning it. It's nuts.
There is a difference between popular management books and academic publications.
For example there is a long history of studies of the relationship between working hours and productivity which is one of the few things that challenges the idea that longer hours means more output.
Yes, but the books generally take their ideas from the academic publications. And the replication problems, and general incentives around academic publishing, show that all too often, the academic publications in the social sciences are unfortunately no more rigorous than the populist books.
> This doesn’t mean that the authors of that paper are bad people!
> We should distinguish the person from the deed. We all know good people who do bad things
> They were just in situations where it was easier to do the bad thing than the good thing
I can't believe I just read that. What's the bar for a bad person if you haven't passed it at "it was simply easier to do the bad thing?"
In this case, it seems not owning up to the issues is the bad part. That's a choice they made. Actually, multiple choices at different times, it seems. If you keep choosing the easy path instead of the path that is right for those that depend on you, it's easier for me to just label you a bad person.
Labeling people as villains (as opposed to condemning acts), in particular those you don’t know personally, is almost always an unhelpful oversimplification of reality. It obscures the root causes of why the bad things are happening, and stands in the way of effective remedy.
As with anything, it's just highly subjective. What some call an heinous act is another person's heroic act. Likewise, where I draw the line between an unlucky person and a villain is going to be different from someone else.
Personally, I do believe that there are benefits to labelling others as villains if a certain threshold is met. It cognitively reduces strain by allowing us to blanket-label all of their acts as evil [0] (although with the drawback of occasionally accidentally labelling acts of good as evil), allowing us to prioritise more important things in life than the actions of what we call villains.
[0]: https://en.wikipedia.org/wiki/Halo_effect#The_reverse_halo_e...
I'm not sure the problems we have at the moment are a lack of accountability. I mean, I think let's go a little overboard on holding people to account first, then wind it back when that happens. The crisis at the moment is managerialism across all of our institutions, which serves to displace accountability.
Just to add on, armchair quarterbacking is a thing, it’s easy in hindsight to label decisions as the result of bad intentions. This is completely different than whatever might have been at play in the moment and retrospective judgement is often unrealistic.
I would argue that villainy and "bad people" are an overcomplication of what is really ignorance.
If we equate being bad to being ignorant, then those people are ignorant/bad (with the implication that if people knew better, they wouldn't do bad things)
I'm sure I'm over simplifying something, looking forward to reading responses.
It’s possible to take two opposing and flawed views here, of course.
On the one hand, it is possible to become judgmental, habitually jumping to unwarranted and even unfair conclusions about the moral character of another person. On the other, we can habitually externalize the “root causes” instead of recognizing the vice and bad choices of the other.
The latter (externalization) is obvious when people habitually blame “systems” to rationalize misbehavior. This is the same logic that underpins the fantastically silly and flawed belief that under the “right system”, misbehavior would simply evaporate and utopia would be achieved. Sure, pathological systems can create perverse incentives, even ones that put extraordinary pressure on people, but moral character is not just some deterministic mechanical response to incentive. Murder doesn’t become okay because you had a “hard life”, for example. And even under “perfect conditions”, people would misbehave. In fact, they may even misbehave more in certain ways (think of the pathologies characteristic of the materially prosperous first world).
So, yes, we ought to condemn acts, we ought to be charitable, but we should also recognize human vice and the need for justice. Justly determined responsibility should affect someone’s reputation. In some cases, it would even be harmful to society not to harm the reputations of certain people.
The person is inseparable from the root cause.
I'm guessing you believe that a person is always completely responsible for their actions. If you are doing root cause analysis you will get nowhere with that attitude.
In that case let's just shut down the FAA and any accident investigations.
It's not processes that can be fixed, it's just humans being stupid.
Then "root cause" means basically nothing
> Labeling people as villains is almost always an unhelpful oversimplification of reality
This is effectively denying the existence of bad actors.
We can introspect into the exact motives behind bad behaviour once the paper is retracted. Until then, there is ongoing harm to public science.
IMHO, you should deal with actual events, when not ideas, instead of people. No two people share the exact same values.
For example, you assume that guy trying to cut the line is a horrible person and a megalomaniac because you've seen this like a thousand times. He really may be that, or maybe he's having an extraordinarily stressful day, or maybe he's just not integrated with the values of your society ("cutting the line is bad, no matter what") or anything else, BUT none of that really helps you think clearly. You just get angry and maybe raise your voice when you're warning him, because "you know" he won't understand otherwise. So now you've left your values behind too, because you're busy fighting a stereotype.
IMHO, correct course of action is assuming good faith even with bad actions, and even with persistent bad actions, and thinking about the productive things you can do to change the outcome, or decide that you cannot do anything.
You can perhaps warn the guy, and then if he ignores you, you can even go to security or pick another hill to die on.
I'm not saying that I can do this myself. I fail a lot, especially when driving. It doesn't mean I'm not working on it.
I used to think like this, and it does seem morally sound at first glance, but it has the big underlying problem of creating an excellent context in which to be a selfish asshole.
Turns out that calling someone on their bullshit can be a perfectly productive thing to do, it not only deals with that specific incident, but also promotes a culture in which it's fine to keep each other accountable.
I honestly think this would qualify as "ruinous empathy"
It's fine and even good to assume good faith, extend your understanding, and listen to the reasons someone has done harm - in a context where the problem was already redressed and the wrongdoer is labelled.
This is not that. This is someone publishing a false paper, deceiving multiple rounds of reviewers, manipulating evidence, knowingly and for personal gain. And they still haven't faced any consequences for it.
I don't really know how to bridge the moral gap with this sort of viewpoint, honestly. It's like you're telling me to sympathise with the arsonist whilst he's still running around with gasoline
I think they're actually just saying bad actors are inevitable, inconsistent, and hard to identify ahead of time, so it's useless to be a scold when instead you can think of how to build systems that are more resilient to bad acts
To which my reply would be, we can engage in the analysis after we have taken down the paper.
It's still up! Maybe the answer to building a resilient system lies in why it is still up.
What if the root cause is that because we stopped labeling villains, they no longer fear being labeled as such. The consequences for the average lying academic have never been lower (in fact they usually don’t get caught and benefit from their lie).
Are we living on the same planet?
Surely the public discourse over the past decades has been steadily moving from substantive towards labeling each other villains, not the other way around.
But that kind of labeling happens because of having the wrong political stances, not because of the moral character of the person.
I'm not a bad person, I just continuously do bad things, none of which is my fault - there is always a deeper root cause \o/
On the flip side, even if you punish the villain, garbage papers still get printed. Almost like there is a root cause.
Both views are maximalistic.
People are afraid to sound too critical. It's very noticeable how every article that points out a mistake anywhere in a subject that's even slightly politically charged, has to emphasize "of course I believe X, I absolutely agree that Y is a bad thing", before they make their point. Criticising an unreplicable paper is the same thing. Clearly these people are afraid that if they sound too harsh, they'll be ignored altogether as a crank.
> Clearly these people are afraid that if they sound too harsh, they'll be ignored altogether as a crank.
This is true though, and one of those awkward times where good ideals like science and critical feedback brush up against potentially ugly human things like pride and ego.
I read a quote recently, and I don't like it, but it's stuck with me because it feels like it's dancing around the same awkward truth:
"tact is the art of making a point without making an enemy"
I guess part of being human is accepting that we're all human and will occasionally fail to be a perfect human.
Sometimes we'll make mistakes in conducting research. Sometimes we'll make mistakes in handling mistakes we or others made. Sometimes these mistakes will chain together to create situations like the post describes.
Making mistakes is easy - it's such a part of being human we often don't even notice we do it. Learning you've made a mistake is the hard part, and correcting that mistake is often even harder. Providing critical feedback, as necessary as it might be, typically involves putting someone else through hardship. I think we should all be at least slightly afraid and apprehensive of doing that, even if it's for a greater good.
The fountain is charity. This is no mere matter of sentiment. Charity is willing the objective good of the other. This is what should inform our actions. But charity does not erase the need for justice.
American culture has this weird thing to avoid blame and direct feedback. It's never appropriate to say "yo, you did shit job, can you not fuck it up next time?". For example, I have a guy in my team who takes 10 minutes every standup - if everyone did this, standup would turn into an hour-long meeting - but telling him "bro what the fuck, get your shit together" is highly inappropriate so we all just sit and suffer. Soon I'll have my yearly review and I have no clue what to expect because my manager only gives me feedback when strictly and explicitly required so the entire cycle "I do something wrong" -> "I get reprimanded" -> "I get better" can take literal years. Unless I accidentally offend someone, then I get 1:1 within an hour. One time I was upset about the office not having enough monitors and posted this on slack and my manager told me not to do that because calling out someone's shit job makes them lose face and that's a very bad thing to do.
Whatever happens, avoid direct confrontation at all costs.
I'll be direct with you, this sounds like an issue specific to your workplace. Get a better job with a manager who can find the middle ground between cursing in frustration and staying silent.
On one hand, I totally agree - soliciting and giving feedback is a weakness.
On the other hand, it sounds like this workplace has weak leadership - have you considered leaving for some place better? If the manager can’t do their job enough to give you decent feedback and stop a guy giving 10 min stand ups, LEAVE.
Reasons for not leaving? Ok, then don’t be a victim. Tell yourself you’re staying despite the management and focus on the positive.
I agree. If the company culture is not even helping or encouraging people to give pragmatic feedback, the war is already lost. Even the CEO and the board are in for a few years of stress.
The biggest reason for not leaving is that I understand that perfect things don't exist and everything is about tradeoffs. My current work is complete dogshit - borderline retarded coworkers, hilariously incompetent management. But on the other hand they pay me okay salary while having very little expectations, which means that if I spend entire day watching porn instead of working, nobody cares. That's a huge perk, because it makes the de facto salary per hour insanely huge. Moreover, I found a few people from other teams I enjoy talking to, which means it's a rare opportunity for me to build a social life. Once they start requiring me to actually put in the effort, I'll bounce.
What you're describing is mostly a convergence on the methods of "nonviolent communication".
While I agree there’s a childish softness in our culture in many respects, you don’t need to go to extremes and adopt thuggish or boorish behavior (which is also a problem, one that is actually concomitant with softness, because soft people are unable to bear discomfort or things not going their way). Proportionality and charity should inform your actions. Loutish behavior makes a person look like an ill-mannered toddler.
In general Western society has effectively outlawed "shame" as an effective social tool for shaping behavior. We used to shame people for bad behavior, which was quite effective in incentivizing people to be good people (this is overly reductive but you get the point). Nowadays no one is ever at fault for doing anything because "don't hate the player hate the game".
A blameless organization can work, so long as people within it police themselves. As a society this does not happen, thus making people more steadfast in their anti-social behavior
"I was just following orders" comes to mind.
Yes, the complicity is normal. No the complicity isn't right.
The banality of evil.
> I can't believe I just read that. What's the bar for a bad person if you haven't passed it at "it was simply easier to do the bad thing?"
This actually doesn't surprise much. I've seen a lot of variety in the ethical standards that people will publicly espouse.
I guess he means that the authors can still be decent people in their private and even professional lives and not general scoundrels who wouldn't stop at actively harming other people to gain something.
I'd rather if the article would stick to the facts
Hmm. I wonder how he knows these bad-doers are good people.
Most people aren’t evil, just lazy.
In real life, not disney movies made for simple minded children, lazy apathy is what most real evil looks like. Please see "the banality of evil."
At which point do you cross the line? Somebody who murders to take someone else's money is ultimately just too lazy to provide value in return for money, so they're not evil?
https://tvtropes.org/pmwiki/pmwiki.php/Main/PragmaticVillain...
There are extremely competent coworkers I wouldn't want as neighbours. Some of my great neighbours would make very sloppy and annoying coworkers.
These people are terrible at their job, perhaps a bit malicious too. They may be great people as friends and colleagues.
I think calling someone a "bad person" (which is itself a horribly vague term) for one situation where you don't have all the context is something most people should be loath to do. People are complicated and in general normal people do a lot of bad things for petty reasons.
Other than just the label being difficult to apply, these factors also make the argument over who is a "bad person" not really productive and I will put those sorts of caveats into my writings because I just don't want to waste my time arguing the point. Like what does "bad person" even mean and is it even consistent across people? I think it makes a lot more sense to label them clearer labels which we have a lot more evidence for, like "untrustworthy scientist" (which you might think is a bad person inherently or not).
> What's the bar for a bad person if you haven't passed it at "it was simply easier to do the bad thing?"
When the good thing is easier to do and they still knowingly pick the bad one for the love of the game?
It feels good to be bad.
Not sure if this in jest referring to the inherently sanctimonious nature of the framing, but this is actually exactly what I was gesturing towards. If it didn't feel good, then it would be either an unintentional action (random or coerced), or an irrational one (go against their perceived self-interest).
The whole "bad vs good person" framing is probably not a very robust framework, never thought about it much, so if that's your position you might well be right. But it's not a consideration that escaped me, I reasoned under the same lens the person above did on intention.
It's 2026, and social media brigading and harassment is a well-known phenomenon. In light of that, trying to preemptively de-escalate seems like a Good Thing.
"It was easier for me to just follow orders than do the right thing." – Fictional SS officer, 1945. Not a bad person.
/s
If you defend a bad person, you are a bad person.
Seems fair in the frame of what it responds to.
But there is a concern that goes beyond the "they" here. Actually, "they" could just as well not exist, and the whole narrative in the article could be an LLM hallucination; we are still training ourselves in how we respond to this or that behavior we observe, and influencing how we will act in the future.
If we go down the easy path of labeling people as the root cause, that's the habit we are forging for ourselves. We are missing the opportunity to hone our sense of nuance and critical thought about the wider context, which might be a better starting point for tackling the underlying issue.
Of course, name-and-shame is still there in the rhetorical toolbox, and everyone and their dog is able to use it, even when rage and despair are all that remain in control of one's mouth. Using it with relevant parsimony, however, is not going to happen from mere reactive habits.
Nowadays high citation numbers no longer mean what they used to. I've seen too many highly cited papers with issues that keep getting referenced, probably because people don't really read the sources anymore and just copy-paste the citations.
On my side-project todo list, I have an idea for a scientific service that overlays a "trust" network over the citation graph. Papers that uncritically cite other work that contains well-known issues should get tagged as "potentially tainted". Authors and institutions that accumulate too many of such sketchy works should be labeled equally. Over time this would provide an additional useful signal vs. just raw citation numbers. You could also look for citation rings and tag them. I think that could be quite useful but requires a bit of work.
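The taint-propagation part of that idea is essentially a reverse reachability query on the citation graph. A minimal sketch, assuming citation data is already available as an adjacency mapping (the function and input names here are hypothetical; a real version would pull from a citation database and would still need a policy for distinguishing critical from uncritical citations):

```python
from collections import defaultdict, deque

def tainted_papers(citations, flagged):
    """Return the set of papers that transitively cite a flagged paper.

    citations: dict mapping a paper id to the list of paper ids it cites.
    flagged:   set of paper ids with known integrity issues.
    """
    # Build reverse edges: cited paper -> papers that cite it.
    cited_by = defaultdict(list)
    for paper, refs in citations.items():
        for ref in refs:
            cited_by[ref].append(paper)

    # Breadth-first propagation of the "potentially tainted" label.
    tainted = set()
    queue = deque(flagged)
    while queue:
        current = queue.popleft()
        for citer in cited_by[current]:
            if citer not in tainted:
                tainted.add(citer)
                queue.append(citer)
    return tainted

# Toy example: B cites A, C cites B, D cites E. Flagging A taints B and C.
graph = {"B": ["A"], "C": ["B"], "D": ["E"]}
print(tainted_papers(graph, {"A"}))  # {'B', 'C'}
```

In practice you would probably want weights (a citation of a flagged paper in a dismissive "known issues" context should not propagate taint the way an uncritical reliance on its results does), but the plain reachability version already surfaces citation clusters worth a closer look.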
Interesting idea. How do you distinguish between critical and uncritical citation? It’s also a little thorny—if your related work section is just describing published work (which is a common form of reviewer-proofing), is that a critical or uncritical citation? It seems a little harsh to ding a paper for that.
Going to conferences seeing researchers who've built a career doing subpar (sometimes blatantly 'fake') work has made me grow increasingly wary of experts. Worst is lots of people just seem to go along with it.
Still I'm skeptical about any sort of system trying to figure out 'trust'. There's too much on the line for researchers/students/... to the point where anything will eventually be gamed. Just too many people trying to get into the system (and getting in is the most important part).
The current system is already getting gamed. There's already too much on the line for researchers/students, so they don't admit any wrongdoing or retract anything. What's the worst that could happen by adding a layer of trust on top of the h-index?
I think it could end up helping a bit in the short term. But in the end an even more complicated system (even if in principle better) will reward those spending time gaming it even more.
The system ends up promoting an even more conservative culture. What might start great will end up with groups and institutions being even more protective of 'their truths' to avoid getting tainted.
I don't think there's any system which can avoid these sorts of things; people were talking about this before WW1, globalisation just put it in overdrive.
Those citation rings are becoming rampant in my country, along with the author count inflation.
Pretty much all fields have shit papers, but if you ever feel the need to develop a superiority complex, take a vacation from your STEM field and have a look at what your university offers under the "business"-anything label. If anyone in those fields manages to produce anything of quality, they're defying the odds and should be considered one of the greats along the line of Euclid, Galileo Galilei, or Isaac Newton - because they surely didn't have many shoulders to stand on either.
This is exactly how I felt when studying management as part of ostensibly an Engineering / Econ / Management degree.
When you added it up, most of the hard parts were Engineering, and a bit Econ. You would really struggle to work through tough questions in engineering, spend a lot of time on economic theory, and then read the management stuff like you were reading a newspaper.
Management you could spot a mile away as being soft. There's certainly some interesting ideas, but even as students we could smell it was lacking something. It's just a bit too much like a History Channel documentary. Entertaining, certainly, but it felt like false enlightenment.
I suppose it's to be expected, the business department is built around the art of generating profit from cheap inputs. It's business thinking in action!
The root of the problem is referred to implicitly: publish or perish. To get tenure, you need publications, preferably highly cited, and money, which comes from grants that your peers (mostly from other institutions) decide on. So the mutual back scratching begins, and the publication mill keeps churning out papers whose main value is the career of the author and --through citation-- influential peers, truth be damned.
something something Goodhart's Law
> Stop citing single studies as definitive. They are not. Check if the ones you are reading or citing have been replicated.
And from the comments:
> From my experience in social science, including some experience in management studies specifically, researchers regularly believe things – and will even give policy advice based on those beliefs – that have not even been seriously tested, or have straight up been refuted.
Sometimes people cite fewer than one non-replicable study: they invent studies and cite those! An example is the "Harvard Goal Study" that is often trotted out at self-review time at companies. The supposed study suggests that people who write down their goals are more likely to achieve them than people who do not. However, Harvard itself cannot find any such study:
https://ask.library.harvard.edu/faq/82314
Check out the “Jick Study,” mentioned in Dopesick.
https://en.wikipedia.org/wiki/Addiction_Rare_in_Patients_Tre...
Definitely ignore single studies, no matter how prestigious the journal or numerous the citations.
Straight-up replications are rare, but if a finding is real, other PIs will partially replicate and build upon it, typically as a smaller step in a related study. (E.g., a new finding about memory comes out, my field is emotion, I might do a new study looking at how emotion and your memory finding interact.)
If the effect is replicable, it will end up used in other studies (subject to randomness and the file drawer effect, anyway). But if an effect is rarely mentioned in the literature afterwards...run far, FAR away, and don't base your research off it.
A good advisor will be able to warn you off lost causes like this.
There is a surprisingly large amount of bad science out there. And we know it. One of my favourite writeup on the subject: John P. A. Ioannidis: Why Most Published Research Findings Are False
https://pmc.ncbi.nlm.nih.gov/articles/PMC1182327/pdf/pmed.00...
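The core of Ioannidis's argument can be sketched numerically. Absent bias, the positive predictive value of a claimed finding (the chance a "significant" result is actually true) follows from the pre-study odds R that a probed relationship is real, the significance threshold alpha, and the type II error rate beta:

```python
def ppv(R, alpha, beta):
    """Positive predictive value of a claimed finding, following the
    simple (bias-free) model in Ioannidis (2005):
        PPV = (1 - beta) * R / (R - beta * R + alpha)
    """
    return (1 - beta) * R / (R - beta * R + alpha)

# Well-powered study (power 0.8) in a field where 1 in 5 probed
# relationships is true: most published claims hold up.
print(ppv(R=0.25, alpha=0.05, beta=0.2))   # 0.8

# Underpowered study (power 0.2) in an exploratory field where only
# 1 in 20 probed relationships is true: most published "findings"
# are false.
print(ppv(R=0.05, alpha=0.05, beta=0.8))   # ~0.167
```

The point of the paper is that fields combining low pre-study odds, low power, flexible analyses, and bias sit firmly in the second regime.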
John Ioannidis is a weird case. His work on the replication crisis across many domains was seminal and important. His contrarian, even conspiratorial take on COVID-19 not so much.
Ugh, wow, somehow I missed all this. I guess he joins the ranks of the scientists who made important contributions and then leveraged that recognition into a platform for unhinged diatribes.
What's happening here? "Most Published Research Findings Are False" -> "Most Published COVID-19 Research Findings Are False" -> "Uh oh, I did a wrongthink, let's backtrack a bit". Is that it?
Sounds like the Watergate Scandal. The crime was one thing, but it was the cover-up that caused the most damage.
Once something enters The Canon, it becomes “untouchable,” and no one wants to question it. Fairly classic human nature.
> "The most erroneous stories are those we think we know best -and therefore never scrutinize or question."
-Stephen Jay Gould
This likely represents only a fragment of a larger pattern. Research contradicting prevailing political narratives faces significant professional obstacles, and as this article shows, so do critiques of research that doesn't.
The webpage of the journal [1] reports only 109 citations of the original article; this counts only "indexed" journals, which are not guaranteed to be ultra high quality but at least filter out the worst "pay us to publish crap" journals.
ResearchGate says 3936 citations. I'm not sure what they are counting, probably all the PDFs uploaded to ResearchGate.
I'm not sure how they count 6000 citations, but I guess they are counting everything, including quotes by the vice president. Probably 6001 after my comment.
Quoted in the article:
>> 1. Journals should disclose comments, complaints, corrections, and retraction requests. Universities should report research integrity complaints and outcomes.
All comments, complaints, corrections, and retraction requests? Unmoderated? Einstein articles will be full of comments explaining why he is wrong, from racists to people who can't spell Minkowski to save their lives. In /newest there is about one post per week from someone who has discovered a new physics theory with the help of ChatGPT. Sometimes it's the same guy, sometimes it's a new one.
[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1964011
[2] https://www.researchgate.net/publication/279944386_The_Impac...
> I'm not sure how they count 6000 citations, but I guess they are counting everything, including quotes by the vicepresident. Probably 6001 after my comment.
The number appears to be from Google Scholar, which currently reports 6269 citations for the paper.
> All comments, complaints, corrections, and retraction requests? Unmoderated? Einstein articles will be full of comments explaining why he is wrong, from racists to people who can't spell Minkowski to save their lives. In /newest there is about one post per week from someone who has discovered a new physics theory with the help of ChatGPT. Sometimes it's the same guy, sometimes it's a new one.
Judging from PubPeer, which allows people to post all of the above anonymously and with minimal moderation, this is not an issue in practice.
Link to PubPeer: https://pubpeer.com/publications/F9538AA8AC2ECC7511800234CC4...
It has 0 comments, for an article that forgot a "not" in "the result is *** statistically significant".
They mentioned a famous work, which will naturally attract cranks to comment on it. I’d also expect to get weird comments on works with high political relevance.
did “not impact the main text, analyses, or findings.”
Made me think of the black plastic spoon error, off by a factor of 10, where the author also said it didn't impact the main findings.
https://statmodeling.stat.columbia.edu/2024/12/13/how-a-simp...
Not enough is understood about the replication crisis in the social sciences. Or indeed in the hard sciences. I do wonder whether this is something that AI will rectify.
How would AI do anything to rectify it?
it will not, ai reads and "believes" the heavily cited but incorrect papers.
I appreciate the convenience of having the original text on hand, as opposed to having to download it off Dropbox of all places.
But if you're going to quote the whole thing, it seems easier to just say so rather than quoting it bit by bit interspersed with "King continues" and annotating each "I" with "[King]".
Social fame is fundamentally unscalable: there is only limited room on the stage, and even less in the few spotlights.
The benefits we can get from collective works, including scientific endeavors, are indefinitely large, as in far more important than what can be held in the head of any individual.
Incentives are just irrelevant as far as global social good is concerned.
It's harder to do social/human science because it's just easier to make mistakes that lead to bias. Such mistakes are harder to make in maths, physics, biology, medicine, astronomy, etc.
This is why I often say that the "hard sciences" have progressed much more than the social/human sciences.
Funny you say that, as medicine is one of the epicenters of the replication crisis[1].
[1] https://en.wikipedia.org/wiki/Replication_crisis#In_medicine
you get a replication crisis on the bleeding edge between replication being possible and impossible. There’s never going to be a replication crisis in linear algebra, there’s never going to be a replication crisis in theology, there definitely was a replication crisis in psych and a replication crisis in nutrition science is distinctly plausible and would be extremely good news for the field as it moves through the edge.
I agree. Most of the time people think STEM is harder, but it is not. Yes, it is harder to understand some concepts, but in the social sciences we don't even know what the correct concepts are. There hasn't been as much progress in the social sciences over the last few centuries as there has been in STEM.
I'm not sure if you're correct. In fact there has been a revolution in some areas of social science in the last two decades due to the availability of online behavioural data.
Being practical, and understanding the gamification of citation counts and research metrics today, instead of going for a replication study and trying to prove a negative, I'd go for contrarian research that shows a different result (one that possibly excludes the original result, or at least does not confirm it).
These probably have a better chance of being published, since you are providing a "novel" result instead of fighting the get-along culture (which, honestly, is present in the workplace as well). But ultimately they are harder to do (research-wise, though not politically), because they may mean you have figured out an actual thing.
Not saying this is the "right" approach, but it might be a cheaper, more practical way to get a paper turned around.
Whether we can work this out properly in research is linked to whether we can work it out everywhere else. How many times have you seen people pat each other on the back despite lousy performance and no results? It's just easier to switch jobs in the private sector than in research, so you'll have more people there who aren't afraid to call out bad work, and, well, there's that profit that needs to pay your salary too.
Most of these studies get published based on elaborate constructions of essentially t-tests for differences in means between groups. Showing the opposite means showing no statistical difference, which is almost impossible to get published, for very human reasons.
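To make the point concrete, here is a minimal sketch (with made-up synthetic data, not data from any real study) of the kind of two-group comparison these papers are built on, using Welch's t statistic for a difference in means:

```python
# Sketch of a two-sample comparison: do groups A and B differ in mean?
# Data here is simulated; group B's true mean is shifted by 0.2.
import math
import random
from statistics import mean, stdev

random.seed(0)
group_a = [random.gauss(0.0, 1.0) for _ in range(50)]
group_b = [random.gauss(0.2, 1.0) for _ in range(50)]

# Welch's t statistic: difference in means over pooled standard error
na, nb = len(group_a), len(group_b)
va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2
t = (mean(group_a) - mean(group_b)) / math.sqrt(va / na + vb / nb)
print(f"t = {t:.2f}")  # |t| above roughly 2 is about p < 0.05 at this sample size
```

A large |t| gets written up as a publishable "effect"; a small one is a null result, which, as noted above, is almost impossible to publish even when it is the more informative outcome.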
My point was exactly not to do that (which is really an unsuccessful replication), but instead to find the actual, live correlation between the same input, rigorously documented and justified, and a new "positive" conclusion.
As I said, harder from a research perspective, but if you can show, for instance, that sustainable companies are less profitable with a better study, you have basically contradicted the original one.
Does it bug anyone else when an article has so many quotes it's practically all italics? Change the formatting style so we don't have to read pages of italic quotes.
Not even surprised. My daughter tried to reproduce a well-cited paper a couple of years back as part of her research project. It was not possible. They pushed for a retraction, but the university didn't want to do it because it would cause political issues, as one of the peer reviewers is tenured at another closely associated university. She almost immediately fucked off and went to work in the private sector.
> They pushed for a retraction ...
That's not right; retractions should only be for research misconduct cases. It is a problem with the article's recommendations too. Even if a correction is published that the results may not hold, the article should stay where it is.
But I agree with the point about replications, which are much needed. That was also the best part in the article, i.e. "stop citing single studies as definitive".
Isn't at least part of the problem with replication that journals are businesses? They're selling in part based on limited human attention, and on the desire to see something novel, to see progress in one's chosen field. Replications don't fit a commercial publication's goals.
Institutions could do something, surely. Require one-in-n papers be a replication. Only give prizes to replicated studies. Award prize monies split between the first two or three independent groups demonstrating a result.
The 6k citations though ... I suspect most of those instances would just assert the result if a citation wasn't available.
Not in academia myself, but I suspect the basic issue is simply that academics are judged by the number of papers they publish.
They are pushed to publish a lot, which means journals have to review a lot of material (and they cannot replicate findings on their own). Once a paper is published in a decent journal, other researchers may not "waste time" replicating all its findings, because they also want to publish a lot. The result is papers becoming popular even though no one has actually bothered to replicate the results, especially if those papers are quoted by a lot of people and/or written by otherwise reputable people or universities.
Could you also provide your critical appraisal of the article so this can be more of a journal club for discussion vs just a paper link? I have no expertise in this field so would be good for some insights.
> They intended to type “not significant” but omitted the word “not.”
This one is pretty egregious.
Once, back around 2011 or 2012, I was using Google Translate for a speech I was to deliver in church. It was shorter than one page printed out.
I only needed the Spanish translation. Now I am proficient in spoken and written Spanish, and I can perfectly understand what is said, and yet I still ran the English through Google Translate and printed it out without really checking through it.
I got to the podium and there was a line where I said "electricity is in the air" (a metaphor, obviously) and the Spanish translation said "electricidad no está en el aire" (that electricity is *not* in the air), and I was able to correct that on-the-fly, but I was pissed at Translate, and I badmouthed it for months. And sure, it was my fault for not proofing and vetting the entire output, but come on!
Family member tried to do work relying on previous results from a biotech lab. Couldn’t do it. Tried to reproduce. Doesn’t work. Checked work carefully. Faked. Switched labs and research subject. Risky career move, but. Now has a career. Old lab is in mental black box. Never to be touched again.
Talked about it years ago https://news.ycombinator.com/item?id=26125867
Others said they’d never seen it. So maybe it’s rare. But no one will tell you even if they encounter. Guaranteed career blackball.
For original research, a researcher is supposed to replicate studies that form the building blocks of their research. For example, if a drug is reported to increase expression of some mRNA in a cell, and your research derives from that, you will start by replicating that step, but it will just be a note in your introduction and not published as a finding on its own.
When a junior researcher, e.g. a grad student, fails to replicate a study, they assume it's technique. If they can't get it after many tries, they just move on, and try some other research approach. If they claim it's because the original study is flawed, people will just assume they don't have the skills to replicate it.
One of the problems is that science doesn't have great collaborative infrastructure. The only way to learn that nobody can reproduce a finding is to go to conferences and have informal chats with people about the paper. Or maybe if you're lucky there's an email list for people in your field where they routinely troubleshoot each other's technique. But most of the time there's just not enough time to waste chasing these things down.
I can't speak to whether people get blackballed. There's a lot of strong personalities in science, but mostly people are direct and efficient. You can ask pretty pointed questions in a session and get pretty direct answers. But accusing someone of fraud is a serious accusation and you probably don't want to get a reputation for being an accuser, FWIW.
I haven't identified an outright fake one but in my experience (mainly in sensor development) most papers are at the very least optimistic or are glossing over some major limitations in the approach. They should be treated as a source of ideas to try instead of counted on.
I've also seen the resistance that results from trying to investigate or even correct an issue in a key result of a paper. Even before it's published, the barrier can be quite high (and I must admit that since it's not my primary focus and my name was not on it, I did not push as hard as I could have).
I've read of a few cases like this on Hacker News. There's often that assumption, sometimes unstated: if a junior scientist discovers clear evidence of academic misconduct by a senior scientist, it would be career suicide for the junior scientist to make their discovery public.
The replication crisis is largely particular to psychology, but I wonder about the scope of the don't rock the boat issue.
Maybe that's why it gets cited? People starting with an answer and backfilling?
I will not go into the details of the topic, but the "What to do" is the most obvious thing. If an impactful paper cannot be backed by other works, that should be a smell.
And thus all who cite it have fatally flawed their papers, if it is central to their thesis; therefore, whoever proves the root is rotten should receive their funding from that point forward.
I see this approach as a win-win for science. Debunking bad science becomes a for-profit enterprise, rigorous science becomes the only sustainable kind, and the paper churn gets reduced, as even producing a good paper becomes a financial risk if it becomes foundational and gets debunked later.
The title alone is sus. I guess there are a lot of low quality papers out there in sciencey sounding fields.
The journal name ("Management Science") is a bit of a giveaway too.
Join me in my new business endeavor where we found the Journal for Journal Science.
In the past the elite would rule the plebs by saying "God says so, so you must do this".
Today the elites rule the plebs by saying "Science says so, so you must do this".
The author doesn't seem to understand this: the purpose of research papers is to be gospel, something to be believed, not scrutinized.
In fact, religious ideas (at least in Europe) were often in opposition to the ruling elite (and still are) and even inspired rebellion: https://en.wikipedia.org/wiki/John_Ball_(priest)
There is a reason scriptures were kept away from the oppressed, or only made available to them in a heavily censored form (e.g. the Slaves Bible).
That's a very good point. Some of what's called "science" today, in popular media and coming from governments, is religion. "We know all, do not question us." It's the common problem of headlines along the lines of "scientists say" or "The Science says", which should always be a red flag - but the majority of people believe it.
Do people actually take papers in "management science" seriously?
Yes, that's the problem, many do, and they swear by these oversimplified ideas and one-liners that litter the field of popular management books, fully believing it's all "scientific" and they'll laugh at you for questioning it. It's nuts.
There is a difference between popular management books and academic publications.
For example there is a long history of studies of the relationship between working hours and productivity which is one of the few things that challenges the idea that longer hours means more output.
Yes, but the books generally take their ideas from the academic publications. And the replication problems, and general incentives around academic publishing, show that all too often, the academic publications in the social sciences are unfortunately no more rigorous than the populist books.