Don’t Pit AI Against People

Target: Dario Amodei, CEO of Anthropic PBC

Goal: Maintain planned guardrails and protections for wide-scale AI uses.

A high-profile legal fight is playing out between the federal government and one of the country’s leading artificial intelligence companies, Anthropic. Government officials, including the president, have ceased use of Anthropic’s Claude AI technology, have threatened to label Claude a “supply chain risk,” and have strong-armed other businesses into cutting ties as well. At issue is the potential use of the AI for mass surveillance of American citizens and for the development of dangerous autonomous weapons. But in the upside-down reality of current politics, it is not elected and appointed leaders who are concerned about safeguards against AI abuse. It is the AI company itself, and the federal government is punishing that company because certain politicians do not want any AI guardrails.

In a 2024 policy statement, leaders of Anthropic spoke at length in favor of regulation because they – more than most – understood the risks that come along with the benefits of AI. The statement stressed, “Governments should urgently take action on AI policy in the next eighteen months. The window for proactive risk prevention is closing fast.” Since then, the company has dedicated extensive research to a number of risk factors, from AI’s impact on jobs and the economy to how humans can maintain proper control over the rapidly accelerating technology. Recently, however, reports have emerged that Anthropic may consider walking back some of its commitments to ease the mounting pressure government officials have placed on its bottom line.

Sign the petition below to urge this company to hold the line and not give in to an intimidation campaign that could hurt millions of Americans in the long term.

PETITION LETTER:

Dear Mr. Amodei,

In Anthropic’s “Case for Targeted Regulation,” this company’s leadership outlined in stark terms the many ways in which artificial intelligence can be misused and abused. In relation to cyber and chemical, biological, radiological, and nuclear misuses alone, the statement warned, “we are now substantially closer to such risks. Surgical, careful regulation will soon be needed.” The statement ended with the following: “getting this right is essential to realizing AI’s benefits and addressing its risks.”

To Anthropic’s credit, it has honored this pledge by taking a decisive stance against surveillance and weapons-development abuses within the Pentagon. That stance has been met with extensive retaliation. Please do not let this pressure from powerful people and entities change your core mission and responsibility.

Stand strong, and help protect all your consumers from the irreversible impacts of a failure to act. Do not become the Frankenstein destroyed by its out-of-control creation.

Sincerely,

[Your Name Here]

Photo Credit: Francesco Ungaro

Please share and discuss this cause on social media. Spreading the word is essential to the success of this petition:

6 Signatures

  • Petra Jones
  • Patti Chapman
  • Judith Embry
  • Jacqueline Lavanchy
  • Jenn Day
  • marilyn evenson