Anthropic and Effective Altruism [Eureka]
Anthropic, a U.S. AI company founded on the philosophy of effective altruism, is embroiled in disputes over the ethical use of AI, particularly concerning its model Claude and its implications for humanity's future.
Anthropic, an American artificial intelligence company whose name derives from the adjective meaning 'of humankind,' has risen to prominence in Silicon Valley. The company recently declared that its AI model, Claude, may not be used for autonomous weapons or mass surveillance systems, drawing attention as it wages a public dispute, dubbed the 'AI control war,' with the Trump administration. This follows an earlier incident in which the company faced astronomical damage claims for training Claude on countless books without authorization. This duality, at once 'a guardian of humanity' and 'a thief of human intellectual property,' reflects the company's founding ethos: effective altruism, the philosophy of doing the greatest good for the greatest number of people with minimal resources.
Peter Singer laid the philosophical groundwork for effective altruism, and William MacAskill, a professor at Oxford, is regarded as a co-founder of the movement. Rather than working directly in social welfare, its adherents are urged to pursue high-income careers so they can donate more, and to focus on long-term threats to humanity. The Amodei siblings, who co-founded Anthropic after leaving OpenAI in 2021, embody this benevolent efficiency, prioritizing 'the safety and ethics of AI' over profit. To ensure safe responses from the model, they developed 'constitutional AI,' a technique that trains the AI against a written set of constitutional and ethical principles.
While this philosophy aims at the common good and efficiency, its consequentialism and elitism can prove detrimental. A notable example is the downfall of Sam Bankman-Fried of the cryptocurrency exchange FTX, who misappropriated customer deposits while promising to generate wealth for donation. Similarly, Anthropic's commitment to 'AI development for humanity's safety' may have led it to treat copyright infringement as a secondary concern. Anthropic's effective altruism now stands in stark contrast to Trump's nationalist populism, highlighting the tensions between competing visions of ethical responsibility in technology.