SBF and the EA/Longtermist “Moral Reasoning”

The MSM is picking up on the lunatic idea that one can extrapolate the future of moral choices by simple reductive reasoning, stripped of context, and then justify any decision one takes based on that “logic”.

https://www.washingtonpost.com/technology/2022/11/17/effective-altruism-sam-bankman-fried-ftx-crypto/

This is like Mein Kampf’s careful logical justification built on a reductive theory of history, or the careful logic of Lenin’s “What Is to Be Done?”. The problem with EA and longtermism is obvious to a modern mathematician with some understanding of the philosophy of science, in particular the issues around Platonism.

A future in a *theory*, in a model, is not a fact. It is merely a deduction from assumptions. But all models are reductive, and reduction is always bias.

You don’t see without prior belief. A being cannot make moral decisions outside a set of beliefs.

You can’t estimate the future, because you can’t know the whole of the now, here and everywhere.

Unfortunately, SBF cannot see reality at all. He chooses his reductionist view based on self-centered models.

Bill Gates, likewise, sees “education” only as what he got in his own school years. That makes his recommendations extremely dangerous. Yet he is lauded for “education reform”, despite the obvious fact that he draws his reforms from biased models that claim to predict perfectly what will happen if a change is implemented.
