The popularity of EA always seemed pretty obvious to me: here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems. Exploiting people to make money is fine, as long as some portion of that money is going toward "a good cause." There's really no element of personal virtue the way virtue ethics has; it's just pure calculation.
It's the perfect philosophy for morally questionable people with a lot of money. Which is exactly who got involved.
That's not to say that all the work they're doing/have done is bad, but it's not really surprising that bad actors attached themselves to the movement.
>The popularity of EA always seemed pretty obvious to me: here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems. Exploiting people to make money is fine, as long as some portion of that money is going toward "a good cause."
I don't think this is a very accurate interpretation of the idea - even with how flawed the movement is. EA is about donating your money effectively, i.e. ensuring the donation gets used well. On its face, that's kind of obvious. But when you take it to an extreme you blur the line between "donation" and something else. It has selected for very self-righteous people. But the idea itself is not really about excusing you being a bad person, and the donation target is definitely NOT unimportant.
You claim OP's interpretation is inaccurate, while it tracks perfectly with many of EA's most notorious supporters.
Given that contrast, I'd ask what evidence do you have for why OP's interpretation is incorrect, and what evidence do you have that your interpretation is correct?
> many of EA's most notorious supporters.
The fact they're notorious makes them a biased sample.
My guess is for the majority of people interested in EA - the typical supporter who is not super wealthy or well known - the two central ideas are:
- For people living in wealthy countries, giving some % of your income makes little difference to your life, but can potentially make a big difference to someone else's
- We should carefully decide which charities to give to, because some are far more effective than others.
That's pretty much it - essentially the message in Peter Singer's book: https://www.thelifeyoucansave.org/.
I would describe myself as an EA, but all that means to me is really the two points above. It certainly isn't anything like an indulgence that morally offsets poor behaviour elsewhere.
I would say the problem with EA is the "E". Saying you're doing 'effective' altruism is another way of saying that everyone else's altruism is wasteful and ineffective. Which of course isn't the case. The "E" might as well stand for "Elitist", given the vibe it gives off. All truly altruistic acts would aim to be effective; otherwise it wouldn't be altruism, it would just be waste. Not to say there is no waste in some altruistic acts, but I'm not convinced it's actually any worse than EA. Given the fraud associated with some purported EA advocates, I'd say EA might even be worse. The EA movement reeks of the optimize-everything mindset of people convinced they are smarter than everyone else who just gives money to charity A, when they could have been 13% more effective by sending the money directly to this particular school in country B with the condition they only spend it on X. The origins of EA may not be that, but that's what it has evolved into.
It's like libertarianism. There is a massive gulf between the written goals and the actual actions of the proponents. It might be more accurately thought of as a vehicle for plausible deniability than an actual ethos.
The problem is that that creates a kind of epistemic closure around yourself where you can't encounter such a thing as a sincere expression of it. I actually think your charge against Libertarians is basically accurate. And I think it deserves a (limited) amount of time and attention directed at its core contentions for what they are worth. After all, Robert Nozick considered himself a libertarian and contributed some important thinking on things like justice and retribution and equality and any number of subjects, and the world wouldn't be bettered by dismissing him with Twitter-style ridicule.
I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract, but not to the point of epistemic closure in response to their subject matter.
When a term becomes loaded enough then people will stop using it when they don't want to be associated with the loaded aspects of the term. If they don't then they already know what the consequences are, because they will be dealing with them all the time. The first and most impactful consequence isn't 'people who are not X will think I am X' it is actually 'people who are X will think I am one of them'.
I think social dynamics are real and must be answered for, but I don't think any self-correction or lack thereof has anything to do with subject matter, which can be understood independently.
I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements and they would have to be blind to ignore it. But the book is wrong for reasons intrinsic to its analysis and it would be catastrophic to treat that point as moot.
I am saying that those who actually believe something won't stick around and associate themselves with the original movement if that movement has taken on traits that they don't agree with.
You risk catastrophe if you let social dynamics stand in for truth.
You risk catastrophe if you ignore social indicators as a valid heuristic.
I actually think I agree with this, but nevertheless people can refer to EA and mean by it the totality of sociological dynamics surrounding it, including its population of proponents and their histories.
I actually think EA is conceptually perfectly fine within its scope of analysis (once you start listing examples, e.g. mosquito nets to prevent malaria, I think they're hard to dispute), and the desire to throw out the conceptual baby with the bathwater of its adherents is an unfortunate demonstration of anti-intellectualism. I think it's like how some predatory pickup artists do the work of being proto-feminists (or perhaps more to the point, how actual feminists can nevertheless be people who engage in the very kinds of harms studied by the subject matter). I wouldn't want to make feminism answer for such creatures as if they were definitionally built into the core concept.
I don't see anything in your comment that directly disagrees with the one that you've replied to.
Maybe you misinterpreted it? To me, it was simply saying that the flaw in the EA model is that a person can be 90% a dangerous sociopath, and as long as the 10% goes to charity (effectively) they are considered morally righteous.
It's the 21st century version of Papal indulgences.
> EA is about donating your money effectively
For most, it seems, EA is an argument that despite no charitable donations being made at all, and despite gaining wealth through questionable means, it's still all ethical, because it's theoretically "just more effective" if the person keeps claiming that in the far future they will put some money toward hypothetical "very effective" charitable causes - which just never seem to materialize, and which of course shouldn't be pursued "until you've built your fortune".
If you're going to assign a discount rate for cash, you also need to assign a similar "discount rate" for future lives saved. Just like investments compound, giving malaria medicine and vitamins to kids who need them should produce at least as much positive compounding return.
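To make that concrete, here's a minimal sketch of the compounding argument; all the rates and the $1,000 figure are invented for illustration, not taken from any study:

```python
# Minimal sketch: if the good done by a donation compounds socially at
# rate g (healthier kids earn and contribute more over time) while cash
# compounds financially at rate r, "invest now, give later" only wins
# when r > g. All numbers here are made up for illustration.

def give_now(amount, g, years):
    return amount * (1 + g) ** years   # benefit of donating today, compounded

def give_later(amount, r, years):
    return amount * (1 + r) ** years   # cash invested, donated at horizon end

amount, years = 1_000, 10
for g, r in [(0.07, 0.05), (0.05, 0.05), (0.03, 0.05)]:
    print(f"social {g:.0%} vs market {r:.0%}: "
          f"give now ~{give_now(amount, g, years):,.0f}, "
          f"give later ~{give_later(amount, r, years):,.0f}")
```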
I’m skeptical of any consequentialist approach that doesn’t just boil down to virtue ethics.
Aiming directly at consequentialist ways of operating always seems to either become impractical in a hurry, or get fucked up and kinda evil. Like, it's so consistent that anyone thinking they've figured it out needs to have a good hard think about it for several years before tentatively attempting action based on it, I'd say.
I partly agree with you but my instinct is that Parfit Was Right(TM) that they were climbing the same mountain from different sides. Like a glove that can be turned inside out and worn on either hand.
I may be missing something, but I've never understood the punch of the "down the road" problem with consequentialism. I consider myself kind of neutral on it, but I think if you treat moral agency as only extending so far as consequences you can reasonably estimate, there's a limit to your moral responsibility that's basically in line with what any other moral school of thought would attest to.
You still have cause-and-effect responsibility; if you leave a coffee cup on the wrong table and the wrong Bosnian assassinates the wrong Archduke, you were causally involved, but the nature of your moral responsibility is different.
What does "virtue ethics" mean?
One of the three traditional European philosophy approaches to ethics:
https://en.wikipedia.org/wiki/Virtue_ethics
EA being a prime example of consequentialism.
… and I tend to think of it as the safest route to doing OK at consequentialism, too, myself. The point is still basically good outcomes, but it short-circuits the problems that tend to come up when one starts trying to maximize utility/good, by saying “that shit’s too complicated, just be a good person” (to oversimplify and omit the “draw the rest of the fucking owl” parts)
Like you’re probably not going to start with any halfway-mainstream virtue ethics text and find yourself pondering how much you’d have to be paid to donate enough to make it net-good to be a low-level worker at an extermination camp. No dude, don’t work at extermination camps, who cares how many mosquito nets you buy? Don’t do that.
The best statement of virtue ethics is contained in Alasdair MacIntyre's _After Virtue_. It's a metaethical foundation that argues that both deontology and utilitarianism are incoherent and have failed to explain what some unitary "the good" is, and that ancient notions of "virtues" (some of which have filtered down to the present day) can capture facets of that good better.
The big advantage of virtue ethics from my point of view is that humans have unarguably evolved cognitive mechanisms for evaluating some virtues (“loyalty”, “friendship”, “moderation”, etc.) but nobody seriously argues that we have a similarly built-in notion of “utility”.
Probably a topic for a different day, but it's rare to get someone's nutshell version of ethics so concise and clear. For me, my concern would be letting the evolutionary tail wag the dog, so to speak. Utility has the advantage of sustaining moral care toward people far away from you, which may not convey an obvious evolutionary advantage.
And I think the best that can be said of evolution is that it mixes moral, amoral and immoral thinking in whatever combinations it finds optimal.
MacIntyre doesn't really involve himself with the evolutionary parts. He tends to be oriented toward historical/social/cultural explanations instead. But yes, this is an issue that any virtue ethics needs to handle.
> Utility has the advantage of sustaining moral care toward people far away from you
Well, in some formulations. There are well-defined and internally consistent choices of utility function that discount or redefine “personhood” in anti-humanist ways. That was more or less Rawls’ criticism of utilitarianism.
> It's the perfect philosophy for morally questionable people with a lot of money.
The perfect philosophy for morally questionable people would just be to ignore charity altogether (e.g. Russian oligarchs) or to use charity to strategically launder their reputations (e.g. Jeffrey Epstein). SBF would fall into that second category as well.
SBF has entered the chat
I'm tired of every other discussion about EA online assuming that SBF is representative of the average EA member, instead of being an infamous outlier.
It's basically the same thing as the church selling indulgences. It didn't matter if you stole the money; pay the church and go to heaven.
Is there a term for what I had previously understood Effective Altruism to be? I don't want to reference EA in a conversation and have the other person think I'm associated with these sorts of people.
I had assumed it was just simple mathematics and the belief that cash is the easiest way to transfer charitable effort. If I can readily earn 50 USD/hour, rather than doing a volunteering job that I could pay someone else 25 USD/hour to do, I simply do my job and pay for 2 people to volunteer.
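For what it's worth, that's just a comparative-advantage calculation; a throwaway sketch, using only the wage numbers from the comment above:

```python
# Earning-to-give arithmetic from the comment above: one hour worked at
# $50/hour funds two hours of volunteer labor priced at $25/hour.
my_wage = 50.0         # USD/hour I can earn at my job
volunteer_rate = 25.0  # USD/hour to pay someone to do the volunteer work

hours_funded = my_wage / volunteer_rate  # volunteer-hours per hour worked
print(f"1 hour of my work funds {hours_funded:.0f} hours of volunteering")
```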
That's just called utilitarianism/consequentialism. It's a perfectly respectable ethical framework. Not the most popular in academic philosophy, but prominent enough that you have to at least engage with it.
Effective altruism is a political movement, with all the baggage implicit in that.
Is there a term for looking at the impact of your donations, rather than the process (like the percentage spent on "overhead")? I like discussing that, but have the same problem as GP.
> In the past, there was nothing we could do about people in another country. Peter Singer says that’s just an evolutionary hangover, a moral error.
This is sadly still true, given the share of money that goes into the process of getting someone help versus the amount dedicated to actually helping.
Certainly charities exist that are ineffective, but there is very strong evidence that there exist charities that do enormous amounts of direct, targeted good.
givewell.org is probably the most prominent org recommended by many EAs; it conducts and aggregates research on charitable interventions and shows, with strong RCT evidence, that a marginal charitable donation can save a life for between $3,000 and $5,500. This estimate has uncertainty, but there's extremely strong evidence that money given to good charities like the ones GiveWell recommends massively improves people's lives.
GiveDirectly is another org that's much more straightforward - giving money directly to people in extreme poverty, with very low overheads. The evidence that that improves people's lives is very very strong (https://www.givedirectly.org/gdresearch/).
It absolutely makes sense to be concerned about "is my hypothetical charitable donation actually doing good", which is more or less a premise of the EA movement. But the answer seems to be "emphatically, yes, there are ways to donate money that do an enormous amount of good".
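As a back-of-the-envelope illustration of what those figures imply (a sketch; the $3,000-$5,500 range is the only input taken from the comment above, and the $10,000 donation is hypothetical):

```python
# Rough reading of the GiveWell-style cost-per-life range quoted above.
low, high = 3_000, 5_500   # USD per life saved (quoted estimate range)
donation = 10_000          # hypothetical donation

print(f"${donation:,} saves roughly {donation / high:.1f} "
      f"to {donation / low:.1f} lives")
# -> $10,000 saves roughly 1.8 to 3.3 lives
```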
> giving money directly to people in extreme poverty, with very low overheads. The evidence that that improves people's lives is very very strong
When you see the return on money spent this way other forms of aid start looking like gatekeeping and rent-seeking.
GiveWell actually benchmarks their charity recommendations against direct cash transfers and will generally only recommend charities whose benefits are Nx cash for some N that I don't remember off the top of my head. I buy that lots of charities aren't effective, but some are!
That said, I also think that longer-term research and investment in things like infrastructure matter too and can't easily be measured with an RCT. GiveWell-style giving is great and it's awesome that the evidence is so strong (and it's most of my charitable giving), but that doesn't mean other charities with less easily researched goals are necessarily bad.
You can pretty reliably save a life in a 3rd world country for about $5k each right now.
How? I'm curious because the numbers are so specific ($5000 = 1 human life), unclouded by the usual variances of getting the money to people at a macro scale and having it go through many hands and across borders. Is it related to treating a specific illness that just objectively costs that much to treat?
Here is a detailed methodology: https://www.givewell.org/impact-estimates. It convinced me that $5k is a reasonable estimate.
Peter Singer is the LAST person I would go to for advice on morality or ethics.
The fundamental problem is that Effective Altruism is a political movement that spun out of a philosophical one. If you want to talk about the relative strengths and weaknesses of consequentialism, go right ahead. If you want to assume consequentialism is true and discuss specific ethical questions via that framing, power to you.
If you want to form a movement, you now have a movement, with all that entails: leaders, policies, politics, contradictions, internecine struggles, money, money, more money, goals, success at your goals, failure at your goals, etc.
I expect the book itself (Death in a Shallow Pond: A Philosopher, a Drowning Child, and Strangers in Need, by David Edmonds) is good, as the author has written a lot of other solid books making philosophy accessible. The title of the article though, is rather clickbaity: it’s hardly “recovering” the origins of EA to say that it owes a huge debt to Peter Singer, who is only the most famous utilitarian philosopher of the late 20th century!
(Peter Singer’s books are also good: his Hegel: A Very Short Introduction made me feel kinda like I understood what Hegel was getting at. I probably don’t of course, but it was nice to feel that way!)
Ok, we've de-recovered the origins in the title above.
Man this is such a loaded term. Even in a comment section about the origins of it, everyone is silently using their own definition. I think all discussions of EA should start with a definition at the top. I'll give it a whirl:
>Effective altruism: Donating with a focus on helping the most people in the most effective way, using evidence and careful reasoning, and personal values.
What happens in practice is a lot worse than this may sound at first glance, so I think people are tempted to change the definition. You could argue EA in practice is just a perversion of the idea in principle, but I don't think it's even that. I think the initial assumption that that definition is good and harmless is just wrong. It's basically just spending money to change the world into what you want. It's similar to regular donations, except you're way more invested and strategic in advancing the outcome. It's going to invite all sorts of interests and be controversial.
> I think the initial assumption that that definition is good and harmless is just wrong.
Why? The alternative is to donate to sexy causes that make you feel good:
- disaster relief that you then forget about once it's not in the news anymore
- school uniforms for children when they can't even do their homework because they can't afford lighting at home
- a literal team of full-time bodyguards for the last member of some species
That's a strawman alternative.
The problem with "helping the most people in the most effective way" is that these two goals are often at odds with each other.
If you donate to a local / neighborhood cause, you are helping few people, but your donation may make an outsized difference: it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.
The EA movement is built around the idea that you can somehow, scientifically, mathematically, compare these benefits - and that the math works out to the latter case being objectively better. Which leads to really weird value systems, including various "longtermist" stances: "you shouldn't be helping the people alive today, you should be maximizing the happiness of the people living in the far future instead". Preferably by working on AI or blogging about AI.
And that's before we get into a myriad of other problems with global aid schemes, including the near-impossibility of actually, honestly understanding how they're spending money and how effective their actions really are.
>it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.
I think you intended to reproduce utilitarianism's "repugnant conclusion". But strictly speaking, I think the real-world dynamics you mentioned don't map onto that. What's abstract in your examples is our grasp of the meaning of the impact on the people being helped. But it doesn't follow that the causes are fractional changes to large populations. The beneficiaries of UNICEF are completely invisible to me (in fact I had to look it up to recall what UNICEF even does), but its work is still critically important to those who benefit from it: things like food for severe malnutrition and maternal health support absolutely are pivotal, make-or-break differences in the lives of the people who get them.
So as applied to global initiatives with nearly anonymous beneficiaries, I don't think they actually reproduce the so-called repugnant conclusion, though it's still perfectly fair as a challenge to the utilitarian calculus EA relies on. I just think it cashes out as a conceptual problem, and the uncomfortable truth for aspiring EA critics is that their stock recommendations are not that different from Carter Foundation or UN-style initiatives.
The trouble is their judgment of global catastrophic risks, which, interestingly, I think does map onto your criticism.
On one hand, it is an example of the total-order mentality which permeates society, and businesses in general: "there exists a single optimum". That is wrong on so many levels, especially with regard to charities. ETA: the real world has optima, not an optimum.
Then it easily becomes a slippery slope of “you are wrong if you are not optimizing”.
ETA: it is very harmful to oneself and to society to think that one is obliged to “do the best”. The ethical rule is “do good and not bad”, no more than that.
Finally, it is a recipe for whatever you want to call it: fascism, communism, totalitarianism… "There is an optimum way, hence if you are not doing it, you must be corrected".
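The "optima, not an optimum" point is the distinction between a total order and a partial order: once you compare charities on more than one dimension, several can be maximal with no single maximum. A toy sketch (all names and scores below are invented):

```python
# Toy partial order over charities scored on two invented dimensions.
# A dominates B only if A >= B on both dimensions and differs from it;
# several undominated ("maximal") options can coexist with no maximum.
charities = {
    "mosquito nets": (9, 2),   # (global-health score, local-community score)
    "local shelter": (2, 9),
    "mixed program": (5, 5),
    "weak program":  (1, 1),
}

def dominates(a, b):
    return a != b and all(x >= y for x, y in zip(a, b))

maximal = [name for name, score in charities.items()
           if not any(dominates(other, score) for other in charities.values())]
print("undominated options:", maximal)   # three maximal options, no maximum
```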
It's a layer above even that: it's a way to justify doing unethical shit to earn obscene amounts of money by convincing themselves (and attempting to convince others) that the ends justify the means, because the entire world will somehow be a better place "if I'm allowed to become Very Rich".
Anyone who has to call themselves altruistic simply isn't lol
> Inspired by Singer, Oxford philosophers Toby Ord and Will MacAskill launched Giving What We Can in 2009, which encouraged members to pledge 10 percent of their incomes to charity.
Congratulations, you rediscovered tithing.
I find it to be a dangerous ideology since it can effectively be used to justify anything. I joined an EA group online (from a popular YouTube channel) and the first conversation I saw was a thread by someone advocating for eugenics. And it only got worse from there.
> A paradox of effective altruism is that by seeking to overcome individual bias through rationalism, its solutions sometimes ignore the structural bias that shapes our world.
Yes, this just about sums it up. As a movement they seem to be attracting listless contrarians who are entirely too willing to dig up old demons of the past.
Agreed. It's firmly an "ends justify the means" ideology, reliant on accurately predicting future outcomes to justify present actions. This sort of thing gives free license to any sociopath with enough creativity to spin some yarn with handwavy math about the bad outcome their malicious actions are meant to be preventing.
> through rationalism,
When they write "rationalism" you should read "rationalization".
Yes! It's a crucial distinction. Rationalism is about being rational / logical -- moving closer to neutrality and "truth". Whereas to rationalize something is often about masking selfish motives, making excuses, or (self-)deception -- moving away from "truth".
It's a variant of how you instantly know what a government will be like depending on how much democracy they put in their name.
> I think they’re recovering. They’ve learned a few lessons, including not to be too in hock to a few powerful and wealthy individuals.
I do not believe the EA movement to be recoverable; it is built on flawed foundations and its issues are inherent. The only way I see out of it is total dissolution; it cannot be reformed.
Man, EA is so close to getting it. They are right that we have a moral obligation to help those in need but they are wrong about how to do it.
Don't outsource your altruism by donating to some GiveWell-recommended nonprofit. Be a human, get to know people, and ask if/how they want help. Start close to home where you can speak the same language and connect with people.
The issues with EA all stem from the fact that the movement centralizes power into the hands of a few people who decide what is and isn't worthy of altruism. Then similar to communism, that power gets corrupted by self-interested people who use it to fund pet projects, launder reputations, etc.
Just try to help the people around you a bit more. If everyone did that, we'd be good.
If everyone did that, lots of people would still die of preventable causes in poor countries. I think GiveWell does a good job of identifying areas of greatest need in public health around the world. I would stop trusting them if they turned out to be corrupt or started misdirecting funds to pet projects. I don’t think everyone has to donate this way as it’s a very personal decision, nor does it automatically make someone a good person or justify immoral ways of earning money, but I think it’s a good thing to help the less fortunate who are far away and speak a different language.
> Just try to help the people around you a bit more. If everyone did that, we'd be good.
This describes a generally wealthy society with some people doing better than average and others worse. Redistributing wealth/assistance from the first group to the second will work quite well for this society.
It does nothing to address the needs of a society in which almost everyone is poor compared to some other potential aid-giving society.
Supporting your friends and neighbors is wonderful. It does not, in general, address the most pressing needs in human populations worldwide.
If you live in a wealthy society it's possible to travel or move or get to know people in a different society and offer to help them.
The GP said:
> Just try to help the people around you a bit more. If everyone did that, we'd be good.
That's what I was replying to. Obviously, if you are willing to "do more", then you can potentially get more done.
That's the thing, though: if EA had said "find 10 people in your life and help them directly", it wouldn't have appealed to the well-off white-collar workers who want to spend money but not actually do anything. The movement became popular because it didn't require one to do anything other than spend money in order to be lauded.
Better, it’s a small step to “being a small part of something that’s doing a little evil to a shitload of people (say, working on Google ~scams targeting the vulnerable and spying on everybody~ Ads) is not just OK, but good, as long as I spend a few grand a year buying mosquito nets to prevent malaria, saving a bunch of lives!”
Which obviously has great appeal.
What studies can you point to demonstrating your approach is more effective than donating to a GiveWell-recommended nonprofit?
> . . . but also what’s called long-termism, which is worrying about the future of the planet and existential risks like pandemics, nuclear war, AI, or being hit by comets. When it made that shift, it began to attract a lot of Silicon Valley types, who may not have been so dedicated to the development part of the effective altruism program.
The rationalists thought they understood time discounting and thought they could correct for it. They were wrong. Then the internal contradictions of long-termism allowed EA to get suckered by the Silicon Valley crew.
Alas.
Effective Altruism and Utilitarianism are just a couple of the presentations authoritarians sometimes make for convenience. To me they decode simply as "if I had everything now, that would eventually be good for everybody."
The arguments always feel to me too similar to "it is good Carnegie called in the Pinkertons to suppress labor, as it allowed him to build libraries." Yes, what Carnegie did later is good, but it doesn't completely paper over what he did earlier.
> The arguments always feel to me too similar to "it is good Carnegie called in the Pinkertons to suppress labor
Is that an actual EA argument?
The value is all at the margins. Like Carnegie had legitimate, functional businesses that would be profitable without the Pinkertons. So without the Pinkertons he'd still be able to afford probably every philanthropic thing he did, so it doesn't justify it.
I don't really follow the EA space, but the actual arguments I've heard are largely about working in FANG to make 3x the money you'd make outside of FANG, allowing you to donate 1x-1.5x that money. Which to me is very justifiable.
But to stick with the article. I don't think taking in billions via fraud to donate some of it to charity is a net positive on society.
> I don't think taking in billions via fraud to donate some of it to charity is a net positive on society.
it could be though, if by first centralizing those billions, you could donate more effectively than the previous holders of that money could. the fraud victims may have never donated in the first place, or have donated to the wrong thing, or not enough to make the right difference.
"The ends justify the means" is a terrible, and terribly dangerous, argument.
That is the point. Much clearer than I was. Thank you.
> Is that an actual EA argument?
you missed this part: "The arguments always feel to me too similar"
> The value is all at the margins. Like Carnegie had legitimate, functional businesses that would be profitable without the Pinkertons. So without the Pinkertons he'd still be able to afford probably every philanthropic thing he did, so it doesn't justify it.
That isn't what OP was engaging with, though. They aren't asking you to answer the question 'what could Carnegie have done better?'; they are saying 'the philosophy seems to be arguing this particular thing'.
When you work for something that directly contradicts peaceful civil society, you are basically saying the mass murder of today is OK because it allows you to assuage your guilt by giving to your local charity - it's only effective if altruism is not your goal.
It still depends on the marginal contribution.
A janitor at the CIA in the 1960s is certainly working at an organization that was disrupting a peaceful Iranian society and turning it into a "death to America" one. But I would not agree that they're doing a net negative for society, because the janitor's marginal contribution toward that objective is 0.
It might not be the best thing the janitor could do for society (as compared to running a soup kitchen).
I'm leery of any philosophy that is popular in tech circles because they all seem to lead to eugenics, hyperindividualism, ignoring systemic issues, deregulation and whatever the latest incarnation of prosperity gospel is.
Utilitarianism suffers from the same problem it always had: time frames. What's the best net good 10 minutes from now might be vastly different 10 days, 10 months, or 10 years from now. So whatever arbitrary time frame you choose affects the outcome. Taken further, you can choose a time frame that suits your desired outcome.
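A toy illustration of that time-frame objection (the payoff schedules are invented): the same pair of actions ranks differently depending on the horizon you pick.

```python
# Invented payoffs: action A gives a quick one-off benefit, action B a
# slow benefit that accrues steadily. The "best" action flips with the
# arbitrary evaluation horizon.
def utility_a(years):
    return 100 if years >= 1 else 0

def utility_b(years):
    return 12 * years

for horizon in (1, 5, 10, 20):
    better = "A" if utility_a(horizon) > utility_b(horizon) else "B"
    print(f"{horizon:>2}-year horizon: action {better} looks best")
# -> A wins at short horizons, B wins at long ones.
```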
"What can I do?" is a fine question to ask. This crops up a lot in anarchist schools of thought too. But you can't mutual aid your way out of systemic issues. Taken further, focusing on individual action often becomes a fig leaf to argue against any form of taxation (or even regulation) because the government is limiting your ability to be altruistic.
I expect the effective altruists have largely moved on to transhumanism as that's pretty popular with the Silicon Valley elite (including Peter Thiel and many CEOs) and that's just a nicer way of arguing for eugenics.
Effective altruism and transhumanism are kinda the same thing, along with other stuff like longtermism. There is even a name for the whole bundle: TESCREAL. Very slightly different positions, invented I guess for branding.
"Shouldn't you feed the lepers, Supply Side Jesus?" "No, that would only make them lazy!"
https://www.beliefnet.com/news/2003/09/the-gospel-of-supply-...