
#126. 📚 Targeted AI regulation 🛠️ AI inventing AI 🛍️ Perplexity's Amazon competitor

Plus, AI tools: ChatGPT to help you vote; 3D from a few 2D photos; professional AI headshots

🎙️ Check out my new Marc Hoag AI Law Pod, a podcast with AI “guests” courtesy of Google's NotebookLM. Now that NotebookLM allows customization, the AI “guests” actually interact with me at times, and deep dive quite a bit beyond the bounds of each issue’s content. If nothing else, it’s a very cool alternative way to enjoy Future Perfect while driving.

  • The Case for Urgent, Targeted AI Regulation: AI is advancing rapidly, with powerful systems poised to transform fields like healthcare and science, but also posing cybersecurity and biosecurity risks. Governments must act within the next 18 months to manage these risks effectively.

  • Benefits of Focused Regulation: Narrow, well-designed regulation can maximize AI benefits while minimizing risks. Delay risks poorly designed, sweeping policies that stifle innovation and fail to prevent key issues.

  • Recent AI Advancements and Potential Misuse: AI’s improvements in coding, reasoning, and biological understanding heighten misuse risks in cybersecurity and hazardous domains if left unchecked.

  • Anthropic’s Responsible Scaling Policy (RSP): RSPs enforce stronger safety measures as AI capabilities grow, providing a scalable model that adjusts as technology evolves.

  • Core RSP Principles: RSPs are proportionate and iterative, adjusting safety requirements based on a model’s capabilities to allow safe AI deployment while encouraging innovation.

  • Benefits of RSPs in AI Companies: RSPs guide companies to develop strong security practices, improve transparency, and align with voluntary safety commitments, supporting regulatory goals.

  • Key Elements for Effective AI Regulation: AI regulation should focus on transparency, incentivize robust safety practices, and be simple to manage risks without hindering progress.

  • The Need for a Collaborative Approach: Policymakers, industry, and advocates must work together to create a balanced framework that supports AI innovation while managing essential safeguards.

  • The Case for Self-Improving AI: The concept of AI developing better AI—a recursive intelligence explosion—has long been a topic of speculation. This could soon become reality, with powerful implications for industries and risks to global safety.

  • Leopold Aschenbrenner’s Predictions: In a widely discussed manifesto, Aschenbrenner projects that AGI could arrive by 2027, consuming massive energy and reshaping global dynamics through self-improving capabilities.

  • Early Steps in AI Researching AI: AI systems are beginning to conduct their own research, as seen in Sakana’s “AI Scientist” project. This AI can autonomously read literature, design experiments, conduct peer reviews, and write publishable research papers.

  • The Potential for Rapid Feedback Loops: AI automating AI research could accelerate its own development, setting off cycles of exponential improvement. Sakana’s AI Scientist, while still limited, shows how AI could soon perform end-to-end research tasks autonomously.

  • Proof of Concept with Sakana’s AI Scientist: The AI Scientist has produced research that competently addresses real AI problems, even publishing papers of NeurIPS-conference quality. This work suggests AI could soon conduct much of AI research independently.

  • Future Enhancements: Sakana’s system was built with modest resources, hinting at massive potential if paired with multimodal capabilities, internet access, and greater compute power.

  • Comparison with Early AI Milestones: Sakana’s work is akin to “GPT-1” for AI research; it may seem limited now, but the rapid advancements seen in models like GPT-3 suggest we may soon see dramatic improvements.

  • Implications for AI Safety and Control: Frontier labs like OpenAI and Anthropic are preparing for self-improving AI, aware of both the transformative potential and the risks, including in fields like bioterrorism and nuclear safety.

  • Preparing for a New Research Landscape: Automated AI research could multiply the global talent pool in AI by thousands, accelerating advancements in fields like life sciences, materials, and climate tech but also heightening risks.

  • Perplexity’s Pro Shop Launch: Jeff Bezos-backed Perplexity has introduced “Pro Shop,” a feature allowing users to shop directly on its platform without leaving for external sites.

  • Shopping Experience Features: Pro Shop offers free shipping for eligible items and a “Buy with Pro” button to streamline purchases, requiring only billing and shipping details for transactions.

  • Aiming at Ecommerce Giants: With aspirations to challenge Amazon and Google Shopping, Perplexity aims to double its valuation to $8 billion by establishing itself in the competitive ecommerce market.

  • Early Brand Interest: Perplexity is reportedly in talks with brands like Nike for potential partnerships and “sponsored questions” ads, signaling plans for monetizing its platform through targeted advertising.

  • High Stakes in AI-Driven Shopping: As Perplexity gears up for ads in Q4, success with Pro Shop and brand partnerships will be key to the small company's push to disrupt AI-powered retail.

✨ PowerPrompt™

This is a fun, fascinating thing to try for the elections: use the prompt below, or a variation of it, with ChatGPT*:

How many questions would you need to ask me to reach 95% confidence about where I lean politically—specifically whether I’d lean toward voting for Trump or Kamala? Start by asking me nuanced questions that reveal my stance on specific issues (avoid broad topics like taxes or healthcare or abortion, etc). After each response, analyze where I might fall on the spectrum. Challenge any inconsistencies, and guide me logically to see if I lean left, right, or dead center.

After you've gone through the questions, you can keep pressing it with further questions until you reach diminishing returns; at some point, ChatGPT will arrive at a fairly conclusive determination of where you stand.

At worst, this will be fun, interesting, and amusing; at best, you might learn some interesting things about yourself.

*Note that I used the Pro version, which unlocks both ChatGPT-4 and o1-preview. I used the faster but less advanced ChatGPT-4, since o1-preview would have proven too time-consuming; I'd like to try the latter, along with Claude 3.5 Sonnet.
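If you'd rather script this than type into the ChatGPT app, the back-and-forth boils down to a message-history loop. Here's a minimal Python sketch of that structure; `ask_model` is a placeholder I'm using to stand in for a real chat-completion call (e.g. via OpenAI's API), and the abbreviated prompt text is just the PowerPrompt above:

```python
# Sketch of the PowerPrompt as a multi-turn conversation loop.
# ask_model() is a placeholder: swap it for a real chat API call
# (e.g. OpenAI's client.chat.completions.create) to run this against ChatGPT.

POWERPROMPT = (
    "How many questions would you need to ask me to reach 95% confidence "
    "about where I lean politically? Ask nuanced questions, analyze each "
    "response, challenge inconsistencies, and guide me logically to see "
    "if I lean left, right, or dead center."
)

def ask_model(history):
    """Placeholder for a chat-completion call; returns the model's next question."""
    n_asked = sum(1 for m in history if m["role"] == "assistant")
    return f"Question {n_asked + 1}: ..."

def run_session(answers):
    """Feed each of your answers in turn, keeping the full message history
    so the model can build on everything said so far."""
    history = [{"role": "user", "content": POWERPROMPT}]
    for answer in answers:
        history.append({"role": "assistant", "content": ask_model(history)})
        history.append({"role": "user", "content": answer})
    return history

history = run_session(["Answer one.", "Answer two."])
```

The key design point is that the entire history is re-sent on every turn; that's what lets the model "remember" your earlier answers and press on inconsistencies.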

🧰 AI Tools & Resources

  • VoteGPT: An implementation of ChatGPT specifically for the elections. (I tried it, and think my prompt works better, however.)

  • NoPoSplat: Create 3D scenes from sparse collections of 2D images.

  • Aragon: Professional AI photos from your own headshots.

🎉 THAT’S ALL FOR TODAY!

See you next time! 👋


Looking for past newsletters? You can find them all here.
