March 8, 2026

Banned, Beloved, and Still Running Airstrikes

Five stories about AI. One of them is a lie. Can you spot it?

BREAKING

The Pentagon designated Anthropic a threat to national security on a Thursday. By the weekend, Claude was the most downloaded app in the country. Being blacklisted by the federal government turns out to be the single best marketing event in the history of the AI industry, outperforming every product launch, benchmark announcement, and open letter the company has ever put out. Hundreds of employees at Google and OpenAI signed petitions demanding their own employers take the same stand. The government's message to the public: this company is dangerous. The public's response: where do I subscribe. washingtonpost.com

IMPACT

ChatGPT subscriptions dropped seventeen percent in the week following OpenAI's Pentagon deal, according to internal metrics described to the Financial Times by people familiar with the company's growth data. OpenAI disputed the figure, saying its numbers showed continued growth. That both sides are now watching subscription counts as a real-time referendum on their military relationships is itself a new and genuinely strange situation for the AI industry. A year ago, these companies competed on benchmark scores and context windows. Today the question apparently on the mind of the median customer is: does this chatbot help build weapons? The answer, depending on which chatbot you ask, is increasingly yes.

ETHICS

Steve Bannon and the AI researcher who helped invent modern deep learning walked into a New Orleans hotel in January and signed the same document. That sentence is not a setup. The Pro-Human AI Declaration, released this week by the Future of Life Institute, assembled labor unions, evangelical churches, MAGA media figures, progressive advocacy groups, SAG-AFTRA, and Nobel laureates under a five-point framework demanding that humans remain in charge of AI systems. The signatories agree on almost nothing else. What they apparently agree on is that a small number of companies should not get to decide unilaterally what kind of civilization comes next. A poll released alongside the declaration found Americans favor human control over AI development speed by an eight-to-one margin. Congress has not yet responded to the document, which is to say Congress is doing exactly what you expected Congress to do. techcrunch.com

APPLICATION

Nature published a detailed account this week of how AI is shaping the ongoing US-Israeli military campaign in Iran, with large language models — the same kind that power your phone's autocomplete and your company's customer service chatbot — being used for intelligence analysis and battlefield decision support. The story ran three days after the Pentagon tried to ban the only AI company currently embedded in its classified systems. The irony appears to have been lost on no one except the Pentagon. Researchers in Geneva are meeting to negotiate international rules on autonomous weapons. The weapons, meanwhile, are not waiting for the meeting to end. nature.com

TOOL

OpenAI launched a tool called Codex Security this week, an AI agent that scans software code for vulnerabilities, confirms that those vulnerabilities are real rather than phantom warnings, and proposes fixes — all without a human having to sift through thousands of false alerts first. The company says it cut false positives by more than fifty percent during testing, and it is rolling the tool out free to enterprise customers for the first month. The timing is not nothing: OpenAI is currently under fire for its Pentagon contract, and releasing a product that makes software more secure rather than more lethal is presumably not entirely a coincidence. The company that just signed a weapons deal is also, it turns out, in the business of patching holes. Whether those two things ever come into tension is a question OpenAI has not yet been asked loudly enough. axios.com