An Effective Altruism Curriculum Using Only Free Online Resources đź’»

Here are some of the best links that I have come across to learn about the ideas behind EA and how they can change lives — or even the future of humanity.

Damla Ozdemir
5 min read · May 27, 2021

In my previous post, I gave an overview of the lessons that I learned from doing an EA (Effective Altruism) fellowship at Duke University this past semester. There, I detailed the meaning of EA, and why it was so influential in shaping my decision-making. 🧠

Now, it is time for some links so that you too can dive into this world, explore it to your heart’s content, and decide how you feel about the controversial and not-so-controversial statements that are made in the EA community.

Some initial links to get started

80,000 Hours is THE organization that you should check out if you want to gain a deep appreciation for EA and, just as importantly, get clear, actionable resources for how to get started.

The premise of the website and all that they do is very simple: There are 80,000 hours in an average career, and this nonprofit helps those of us who want to make as big an impact as possible during those hours, while doing the things that we find purposeful and rewarding. In this way, they focus on the career possibilities that await someone who decides to pursue Effective Altruism, and what that decision could mean for humanity as a whole.
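If you are curious where that number comes from, it is a simple back-of-the-envelope calculation, roughly the one the organization itself uses:

40 hours per week × 50 weeks per year × 40 years = 80,000 hours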

They have a Job Board on their site that highlights EA-centered career options you can apply to right away and start changing your life. According to their site:

“Some of these roles directly address some of the world’s most pressing problems, while others may help you build the career capital you need to have a big impact later.”

I would say that one of the most exciting resources they provide is an extensive guide to “using your career to help solve the world’s most pressing problems”. This page might take an hour or longer to read, but having read the contents, I can assure you that setting aside the time to absorb guidance that is offered at NO COST will be well worth it.

The Sentience Institute is another core resource for a key research area of EA: “Expanding the Moral Circle”. In short, the moral circle represents humanity’s idea of who (or what) deserves moral concern. The term was introduced by historian William Lecky in the 1860s and popularized by philosopher Peter Singer in the 1980s. The latter is a very influential, albeit controversial, thinker in the field of Effective Altruism.

The expansion of this moral circle is where a lot of people get uncomfortable with the concept. Thinkers in this field observe that the moral circle has grown outward throughout human history: most people in the West would now say, for example, that women and men deserve the same rights, a statement that, a few hundred years ago, most would have found absurd and unthinkable.

Much the same is happening with the inclusion of people of other races into our circle of concern. Researchers believe that the next two groups whose inclusion will shape the size of our circle will be animals and artificial intelligences.

Do non-humans deserve the same rights as humans?

Do we have a moral responsibility to an algorithm, the same way that we do to a human child?

You can see how this can quickly become a contentious issue.

An engaging and clarifying article on this concept asks the question: Should animals, plants, and robots have the same rights as you?


The Effective Altruism Forum has a great article on Longtermism. It is slightly more academic and less accessible for the EA beginner, since it is harder to connect to any one individual’s day-to-day life, but it nevertheless provides a treasure trove of crucial information for grasping the idea of “Longtermism”: the approach of asking how we can do the most good over the long run.

“The long run” here is not what you think. People in this field consider impacts on the scale of millennia, or even millions of years into the future, and ask what we owe our far-removed descendants. At this point in the EA journey, you might feel overwhelmed just trying to fathom what humanity might look like that far down the line, and nobody would blame you.

Longtermism is more the approach of keeping our influence on the far-off future in mind while making decisions, and less a conviction about what exactly those future humans might want or need. As you can probably see, there is a lot of research still to be done in this area due to the vague nature of the discussions. It is all more of an attitude than anything concrete at the moment.

However, Longtermism also ties into the expansion of the moral circle, as it requires agreeing that we humans have the same moral responsibility for future generations that we have for our own. Although this is intuitive for many, it is also a difficult position to defend, as some people argue that those who have not yet been born do not, and might never, exist, and therefore should not be stakeholders in any major decision.

What do you think? These are the kinds of things that the EA community regularly discusses. There is no consensus on these topics, so you must decide many aspects of your EA journey for yourself.
