#119. 🔒 Anthropic's updated AI safety protocols 🤯 20 seconds of thinking = 100K more data ⚠️ Former OpenAI employee says company is violating copyright
Plus, AI tools: Clothing modeling without models; research management; code review
🎙️ Check out my new Marc Hoag AI Law Pod, a podcast "hosted" by Google's NotebookLM. Now that NotebookLM allows customization, the AI "guests" actually address me by name at times, directly reference the newsletter itself, and dive quite a bit beyond the bounds of each issue's content. If nothing else, it's a very cool alternative way to enjoy Future Perfect while driving. Episodes typically go live 30-60 minutes after the newsletter is published.
🔒 Anthropic's updated AI safety protocols

Capability Thresholds: Anthropic has established benchmarks that, when crossed, require enhanced safeguards. These thresholds focus on high-risk areas, like bioweapons or autonomous AI research, to prevent misuse.
AI Safety Levels (ASLs): The tiered ASL system (ASL-2 to ASL-3) scales safety measures according to a model's capability. The higher the risk, the stricter the controls.
Stricter Governance: ASL-3 models, for instance, require red-teaming (simulated adversarial attacks) and independent audits before deployment.
Responsible Scaling Officer (RSO): Anthropic assigns this officer to ensure policies are enforced, including pausing AI models that surpass safety thresholds until all required safeguards are met (see the sketch after this list).
Industry Blueprint: Anthropic's policy sets an industry precedent by promoting a "race to the top" in AI safety. It aims to inspire other developers to adopt similar frameworks, preventing AI misuse as models grow more powerful.
Risk Mitigation Focus: By establishing clear guidelines, Anthropic seeks to prevent AI models from crossing into dangerous territory, like being weaponized or advancing harmful research areas.
Preemptive Regulation: The policy arrives amid increasing pressure to regulate the AI industry, offering a potential framework that governments may look to as they create legal standards.
Broader Influence: Anthropic hopes that public disclosures of its Capability Reports and Safeguard Assessments will foster greater transparency across the AI field.
Future Evolution: Anthropic plans to continuously update its policy to adapt as AI systems become more advanced, ensuring that evolving capabilities remain within safe, manageable bounds.
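To make the tiered logic concrete, here's a minimal sketch of how capability evaluations might map to ASL levels and deployment gates. This is purely illustrative, not Anthropic's actual implementation: the domain names echo the policy's high-risk areas, but the numeric thresholds and function names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical capability thresholds (scores in 0.0-1.0); the values are
# invented for illustration, but the domains mirror the policy's high-risk areas.
ASL_3_THRESHOLDS = {
    "bioweapons_uplift": 0.5,
    "autonomous_ai_research": 0.5,
}

@dataclass
class EvalResult:
    scores: dict[str, float]  # domain -> measured capability score

def required_asl(result: EvalResult) -> int:
    """Return the AI Safety Level implied by pre-deployment eval scores."""
    crossed = [d for d, t in ASL_3_THRESHOLDS.items()
               if result.scores.get(d, 0.0) >= t]
    return 3 if crossed else 2

def may_deploy(result: EvalResult, safeguards_ready: bool) -> bool:
    """ASL-3 models stay paused until the required safeguards (red-teaming,
    independent audits) are in place -- the RSO's enforcement role."""
    return not (required_asl(result) == 3 and not safeguards_ready)

model_eval = EvalResult(scores={"bioweapons_uplift": 0.62,
                                "autonomous_ai_research": 0.30})
print(required_asl(model_eval))                        # 3
print(may_deploy(model_eval, safeguards_ready=False))  # False: paused
```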
🤯 20 seconds of thinking = 100K more data

Key Idea: Noam Brown emphasizes a paradigm shift in AI, focusing on "system two thinking," a slower, more deliberate form of reasoning, which Brown argues can outperform traditional AI scaling (a toy sketch follows this list).
Poker AI Example: Brown illustrates this with Libratus, a poker-playing AI that achieved massive performance gains by thinking for just 20 seconds, a boost he equates to scaling training data 100,000x.
OpenAI's o1 Model: The new o1 models integrate system two thinking, excelling in complex tasks like coding, scientific research, and strategic decision-making.
Applications: Brown highlights the potential of o1 in industries like healthcare, energy, and finance, where slower, more deliberate AI can improve decision-making and outcomes.
Costs and Challenges: While o1 offers high accuracy, it's slower and more expensive, raising concerns about its accessibility for broader enterprise use.
Strategic Shift: Brown concludes by framing this as the beginning of a new AI race focused on deep reasoning, rather than sheer processing power, to deliver more accurate insights.
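For a rough feel of the inference-time scaling idea, here's a toy best-of-N "deliberation" loop: spend more thinking time, sample more candidate answers, keep the best one. This is a generic sketch of test-time compute, not o1's actual mechanism, and both the generator and the scorer below are placeholders.

```python
import random

def generate_candidate(prompt: str, rng: random.Random) -> str:
    """Placeholder for one sampled model answer."""
    return f"answer-{rng.randint(0, 10_000)}"

def score(prompt: str, answer: str, rng: random.Random) -> float:
    """Placeholder verifier/value model; higher is better."""
    return rng.random()

def deliberate(prompt: str, think_seconds: float,
               seconds_per_sample: float = 0.5) -> str:
    # "System two" as best-of-N: more thinking time buys more candidates,
    # trading latency and compute cost for answer quality.
    n = max(1, int(think_seconds / seconds_per_sample))
    rng = random.Random(0)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a, rng))

# 20 seconds of "thinking" evaluates 40 candidates instead of 1.
print(deliberate("Plan the river bet.", think_seconds=20.0))
```

The cost concern in the bullets above falls straight out of this loop: accuracy scales with the number of samples, but so do latency and compute.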
Your central, quick-glance resource to understand all AI tools' data privacy policies.
No artificial ingredients. Just intelligence. Understand how your data is used by AI. If it's used for training. Who owns the output. Straight from companies' Terms of Service and Privacy Policies. One stop. Know all the AI.
⚠️ Former OpenAI employee says company is violating copyright

Allegation of Copyright Infringement: Former OpenAI researcher Suchir Balaji argues that OpenAI's model training violates U.S. copyright law by improperly using copyrighted material in AI outputs.
Fair Use Criticism: Balaji believes OpenAI's defense of "fair use" does not hold, as too much copyrighted information appears in the model's outputs.
OpenAIās Response: OpenAI maintains that their data usage falls within fair use, citing legal precedents.
Ongoing Lawsuits: OpenAI faces multiple lawsuits, including from the New York Times and various artists, for unauthorized use of their work.
✨ PowerPrompt™
Transform the next 24 hours of my life into a video game design document. Create an achievement system for my daily tasks, define the 'power-ups' and 'level-ups' in my environment, identify the 'boss battles' (major challenges), and explain how to 'min-max' my stats for optimal performance. Then outline the exact optimal strategy guide, including time-based power spike opportunities and key decision points.
This is an amusing prompt with surprisingly practical benefits; give it a try and let me know how it goes for you!
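If you'd rather run it programmatically, here's a minimal sketch using the OpenAI Python SDK; the model name is just an assumption, so swap in whichever chat model you use.

```python
from openai import OpenAI  # pip install openai

POWER_PROMPT = (
    "Transform the next 24 hours of my life into a video game design "
    "document. Create an achievement system for my daily tasks, define the "
    "'power-ups' and 'level-ups' in my environment, identify the 'boss "
    "battles' (major challenges), and explain how to 'min-max' my stats for "
    "optimal performance. Then outline the exact optimal strategy guide, "
    "including time-based power spike opportunities and key decision points."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[{"role": "user", "content": POWER_PROMPT}],
)
print(response.choices[0].message.content)
```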
🧰 AI Tools & Resources
Delle: Upload garment images and get stunning product photos without hiring models or studios.
Paperguide: Discover, read, write, and manage research with ease.
Trag: A super-linter for any language; connect your repo and Trag reviews your code against very specific instructions.
🎉 THAT'S ALL FOR TODAY!

See you next time! 👋
What'd you think of today's email? This helps me improve it for you!
Looking for past newsletters? You can find them all here.