“Capital, AGI, and Human Ambition” by L. Rudolf L. & “What’s the Short Timeline Plan?” by Marius Hobbhahn

Post by Michael Michalchik (michael-michalchik) · 2025-01-03


ACXLW Meetup 82

“Capital, AGI, and Human Ambition” by L. Rudolf L. & “What’s the Short Timeline Plan?” by Marius Hobbhahn

Date: Saturday, January 4
Time: 2:00 PM
Location: 1970 Port Laurent Place, Newport Beach, CA 92660

Host: Michael Michalchik
Contact: michaelmichalchik@gmail.com | (949) 375-2045

We’re excited to continue our exploration of how advanced technology, AI governance, and human ambition intersect. This session features two compelling readings that consider the real-world implications of transformative AI on both macro-level power structures and immediate practical safety measures.

Conversation Starter 1

Topic: “Capital, AGI, and Human Ambition” by L. Rudolf L.

Summary
This article explores a future in which AI drastically reduces the value of human labor, thereby elevating the importance of “capital.” The author posits that as advanced AI becomes a near-complete substitute for workers, current power structures—governments and corporations—face less incentive to maintain public welfare beyond superficial measures. Themes include entrenchment of existing elites, universal basic income’s potential (and limitations), the risk of a stagnant society locked into inherited advantage, and the pressing need to preserve genuine human ambition. Ultimately, the author warns about an “existential stasis” if society fails to ensure that humans retain meaningful agency, and he calls for near-term action to safeguard our collective future.

Discussion Questions

  1. Incentives & Governance: If states and corporations no longer depend on labor, what factors might still drive them to care about broad human well-being?
  2. Stasis vs. Dynamism: How can society remain dynamic and innovative rather than letting power become permanently entrenched under AI-driven “capital”?
  3. Entrepreneurship & Mobility: Could humans still find disruptive paths to influence in a world where capital can simply “buy” or clone the best AI talent?
  4. Locking in Benevolence: Are there frameworks to codify ethical or benevolent norms now, before AI-based power undermines them?
  5. Redistribution & Complacency: If UBI becomes widespread, would that be enough to preserve social stability, or does it risk complacency (and reduced public scrutiny of AI’s role)?

Conversation Starter 2

Topic: “What’s the Short Timeline Plan?” by Marius Hobbhahn

Summary
This piece considers a scenario in which AGI capable of surpassing top-level researchers arrives as early as 2027. The author offers a “bare minimum” safety plan focusing on two pillars: (1) secure model weights so rogue actors or the models themselves cannot exploit them, and (2) ensure the first powerful AI used for research is not scheming or deceptive. The proposal recommends a layered approach—monitoring chain-of-thought (CoT) if possible, or using robust “control” techniques otherwise—alongside advanced evaluations, a security-first organizational culture, and transparent planning. Although the plan is partial and conservative, it underscores the urgency of developing real, actionable strategies for safe AI deployment under rapid timelines.

Discussion Questions

  1. Tradeoffs in Transparency: To what extent should labs slow down capabilities for the sake of interpretable, faithful AI chain-of-thought?
  2. Detecting Deception: Are black-box monitors, white-box probes, or “model organisms” truly sufficient to catch hidden power-seeking or scheming behavior?
  3. Security Triage: If time is short, which cybersecurity and model protection measures should labs prioritize first?
  4. Global Governance & Racing: If an AI arms race emerges, can national or international bodies effectively mandate safety steps, or will labs feel pressured to cut corners?
  5. “Safety-First” Culture: How realistic is it for commercial labs to adopt near-military levels of caution, and who enforces these norms?

Walk & Talk

After our main discussion, we’ll do our usual hour-long walk around the area. Feel free to grab takeout at Gelson’s or Pavilions nearby if you like.

Share a Surprise

We’ll also have an open floor for anyone who wants to share something unexpected or perspective-shifting—an article, a personal anecdote, or a fun fact.

Looking Ahead

As always, we welcome ideas for future topics, activities, or guest discussions. Don’t hesitate to reach out if you’d like to host or propose a new theme.

We look forward to seeing you all on January 4 for another engaging ACXLW meetup!
