🎙️ Check out my new Marc Hoag AI Law Pod, a podcast "hosted" by Google's NotebookLM. Now that NotebookLM allows customization, the AI "guests" actually address me by name at times, directly reference the newsletter itself, and dive quite a bit deeper than the bounds of each issue's content. If nothing else, it's a very cool alternative way to enjoy Future Perfect while driving. Episodes typically go live 30-60 minutes after the newsletter is published.
Capability Thresholds: Anthropic has established benchmarks that, when crossed, require enhanced safeguards. These thresholds focus on high-risk areas, like bioweapons or autonomous AI research, to prevent misuse.
AI Safety Levels (ASLs): The tiered ASL system (ASL-2 to ASL-3) scales safety measures according to a model's capability. The higher the risk, the stricter the controls.
Stricter Governance: ASL-3 models, for instance, require red-teaming (simulated adversarial attacks) and independent audits before deployment.
Responsible Scaling Officer (RSO): Anthropic assigns this officer to ensure policies are enforced, including pausing AI models that surpass safety thresholds until all required safeguards are met.
Industry Blueprint: Anthropic's policy sets an industry precedent by promoting a "race to the top" in AI safety. It aims to inspire other developers to adopt similar frameworks, preventing AI misuse as models grow more powerful.
Risk Mitigation Focus: By establishing clear guidelines, Anthropic seeks to prevent AI models from crossing into dangerous territory, like being weaponized or advancing harmful research areas.
Preemptive Regulation: This policy is timed with increasing pressure on the AI industry to regulate, providing a potential framework that governments may look to as they create legal standards.
Broader Influence: Anthropic hopes that public disclosures of their Capability Reports and Safeguard Assessments will foster greater transparency across the AI field.
Future Evolution: Anthropic plans to continuously update its policy to adapt as AI systems become more advanced, ensuring that evolving capabilities remain within safe, manageable bounds.
Key Idea: Noam Brown emphasizes a paradigm shift in AI, focusing on "system two thinking," a slower, more deliberate form of reasoning, which Brown argues can outperform traditional AI scaling.
Poker AI Example: Brown illustrates this with Libratus, a poker-playing AI that achieved massive performance gains by thinking for just 20 seconds per hand, a boost he equates to scaling up the model by 100,000x.
OpenAI's o1 Model: The new o1 models integrate system two thinking, excelling at complex tasks like coding, scientific research, and strategic decision-making.
Applications: Brown highlights the potential of o1 in industries like healthcare, energy, and finance, where slower, more deliberate AI can improve decision-making and outcomes.
Costs and Challenges: While o1 offers high accuracy, it's slower and more expensive, raising concerns about its accessibility for broader enterprise use.
Strategic Shift: Brown concludes by framing this as the beginning of a new AI race focused on deep reasoning, rather than sheer processing power, to deliver more accurate insights.

Your central, quick-glance resource to understand all AI tools' data privacy policies.
No artificial ingredients. Just intelligence. Understand how your data is used by AI. If it's used for training. Who owns the output. Straight from companies' Terms of Service and Privacy Policies. One stop. Know all the AI.
Allegation of Copyright Infringement: Former OpenAI researcher Suchir Balaji argues that OpenAI's model training violates U.S. copyright law by improperly using copyrighted material in AI outputs.
Fair Use Criticism: Balaji believes OpenAI's "fair use" defense does not hold, as too much copyrighted information appears in the model's outputs.
OpenAIβs Response: OpenAI maintains that their data usage falls within fair use, citing legal precedents.
Ongoing Lawsuits: OpenAI faces multiple lawsuits, including from the New York Times and various artists, for unauthorized use of their work.
✨ PowerPrompt™
Transform the next 24 hours of my life into a video game design document. Create an achievement system for my daily tasks, define the 'power-ups' and 'level-ups' in my environment, identify the 'boss battles' (major challenges), and explain how to 'min-max' my stats for optimal performance. Then outline the exact optimal strategy guide, including time-based power spike opportunities and key decision points.

This is an amusing prompt with surprisingly practical benefits; give it a try and let me know how it goes for you!
🧰 AI Tools & Resources
Delle: Upload garment images and get stunning product photos without hiring models or studios.
Paperguide: Discover, read, write, and manage research with ease.
Trag: Superlinter for any language: connect your repo and Trag can review your code with very specific instructions.
🎉 THAT'S ALL FOR TODAY!
See you next time! 👋
What'd you think of today's email?
Looking for past newsletters? You can find them all here.


