  • The White House is developing guidance to let federal agencies bypass Anthropic's supply-chain risk designation and onboard its most powerful model, Mythos — a dramatic reversal after the Pentagon blacklisted the company and the administration ordered a government-wide phaseout.

  • The standoff began when Anthropic refused to remove guardrails prohibiting its AI from being used for autonomous weapons or domestic surveillance; the Pentagon designated Anthropic an unprecedented "supply chain risk" — a label typically reserved for foreign adversary companies.

  • Anthropic sued the federal government in two courts over the designation; a California judge issued a preliminary injunction in late March temporarily halting the ban, while a D.C. appeals court rejected Anthropic's separate request for an emergency stay.

  • Mythos — Anthropic's most advanced model — can autonomously identify zero-day vulnerabilities across every major operating system and browser, making it extremely valuable to cybersecurity-focused agencies; the NSA is already using it.

  • A draft executive action now in circulation could walk back the OMB directive banning Anthropic products government-wide, with one source describing the effort as a way to "save face and bring em back in."

  • OpenAI and Google have both signed agreements allowing Pentagon use under an "all lawful purposes" standard — the same standard Anthropic refused — making this dispute a defining test case for AI safety red lines versus government contracting.

  • A landmark ruling by the Northern District of California found that when a platform's AI exercises "ultimate authority" over assembled ad content, the platform itself may be considered a maker of fraudulent statements under Rule 10b-5 securities law.

  • The decision creates significant new legal exposure for Meta, Alphabet, Snap, TikTok, and X Corp — every major platform deploying generative AI in advertising products.

  • The ruling turns on a key question AI-native founders need to understand: when your AI autonomously composes or assembles output for users, you may be the "maker" of that content under federal law — not just a neutral conduit.

  • For SaaS and AI-native startups building ad-tech, marketing automation, or content-generation products, this ruling signals that autonomous AI output is not legally equivalent to user-generated content, and standard platform liability shields may not apply.

  • Founders using AI to generate outward-facing commercial content — especially anything touching financial products, securities, or investment-adjacent services — should immediately audit whether their AI's autonomous decisions could be attributed to them as the speaker under securities law standards.

  • Rogo, a New York startup building an agentic AI platform for financial services, closed a $160 million Series D led by Kleiner Perkins, with Sequoia Capital, Thrive, and Khosla Ventures also participating — bringing total funding beyond $300 million.

  • The platform is used daily by over 35,000 bankers at more than 250 institutions, including Rothschild & Co., Jefferies, Lazard, and Moelis, automating complex deal workflows and financial modeling.

  • Today's broader funding slate also included Aidoc raising $150M Series E for clinical AI adoption and Scout AI raising $100M Series A for defense AI systems — signaling that late-stage capital is concentrating on AI platforms embedded in mission-critical workflows.

  • For AI-native SaaS founders, Rogo's round is a benchmark: vertical AI that owns a specific high-value workflow, drives daily professional usage, and integrates deeply with regulated-sector processes is commanding top-tier valuations and brand-name VC backing even in a tighter 2026 market.

BUILDING AN AI-FIRST TECH STARTUP?
Hoag Law.ai provides flat-rate pricing.
Schedule your FREE 30-min call here.

Looking for past newsletters? You can find them all here.
