LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

Slack Group: Rationalist Startup Founders
Adam Zerner (adamzerner) · 2023-04-03T00:44:54.791Z · comments (0)
The Friendly Drunk Fool Alignment Strategy
JenniferRM · 2023-04-03T01:26:13.999Z · comments (19)
Rules for living in a 99.9+% lizardman world
at_the_zoo · 2023-04-03T02:39:15.690Z · comments (12)
[question] I'm just starting to grasp Shard Theory. Is that a normal feeling?
twkaiser · 2023-04-03T03:08:44.351Z · answers+comments (1)
Planecrash Podcast
planecrashpodcast · 2023-04-03T04:34:07.848Z · comments (5)
Board Game Theory
Optimization Process · 2023-04-03T06:23:47.081Z · comments (0)
Effective Altruism Virtual Programs Apr-May 2023
Yve Nichols-Evans (yve-nichols-evans) · 2023-04-03T06:40:06.526Z · comments (0)
Repeated Play of Imperfect Newcomb's Paradox in Infra-Bayesian Physicalism
Sven Nilsen (bvssvni) · 2023-04-03T10:06:37.388Z · comments (0)
Orthogonality is expensive
beren · 2023-04-03T10:20:43.964Z · comments (8)
AI-kills-everyone scenarios require robotic infrastructure, but not necessarily nanotech
avturchin · 2023-04-03T12:45:01.324Z · comments (47)
Superintelligence will outsmart us or it isn't superintelligence
Neil (neil-warren) · 2023-04-03T15:01:00.900Z · comments (4)
Mati's introduction to pausing giant AI experiments
Mati_Roy (MathieuRoy) · 2023-04-03T15:56:58.542Z · comments (0)
AWS Has Raised Prices Before
jefftk (jkaufman) · 2023-04-03T16:00:01.900Z · comments (3)
[link] Exploratory Analysis of RLHF Transformers with TransformerLens
Curt Tigges (curt-tigges) · 2023-04-03T16:09:12.671Z · comments (2)
Do we have a plan for the "first critical try" problem?
Christopher King (christopher-king) · 2023-04-03T16:27:50.821Z · comments (14)
[question] GJP on AGI
Suh_Prance_Alot · 2023-04-03T17:21:23.335Z · answers+comments (0)
Exploring non-anthropocentric aspects of AI existential safety
mishka · 2023-04-03T18:07:27.932Z · comments (0)
Monthly Roundup #5: April 2023
Zvi · 2023-04-03T18:50:01.056Z · comments (12)
[link] Towards empathy in RL agents and beyond: Insights from cognitive science for AI Alignment
Marc Carauleanu (Marc-Everin Carauleanu) · 2023-04-03T19:59:00.057Z · comments (6)
If interpretability research goes well, it may get dangerous
So8res · 2023-04-03T21:48:18.752Z · comments (10)
Communicating effectively under Knightian norms
Richard_Ngo (ricraz) · 2023-04-03T22:39:58.350Z · comments (54)
Twin Cities ACX Meetup - April 2023
Timothy M. (timothy-bond) · 2023-04-03T23:07:40.966Z · comments (3)
[link] Apply to the Cavendish Labs Fellowship (by 4/15)
agg (ag) · 2023-04-03T23:09:07.309Z · comments (0)
Complex Systems are Hard to Control
jsteinhardt · 2023-04-04T00:00:13.634Z · comments (5)
ChatGPT Suggests Listening To Russell & Yudkowsky
JenniferRM · 2023-04-04T00:30:08.951Z · comments (1)
Steering systems
Max H (Maxc) · 2023-04-04T00:56:55.407Z · comments (1)
How to respond to the recent condemnations of the rationalist community
Christopher King (christopher-king) · 2023-04-04T01:42:49.225Z · comments (7)
AI Summer Harvest
Cleo Nardo (strawberry calm) · 2023-04-04T03:35:58.473Z · comments (10)
[link] On how it feels generating art with DALL-E
cortrinkau (Brunhild) · 2023-04-04T04:13:54.317Z · comments (0)
[link] Consider applying to a 2-week alignment project with former GitHub CEO
jacobjacob · 2023-04-04T06:20:49.532Z · comments (0)
[question] Best Ways to Try to Get Funding for Alignment Research?
RGRGRG · 2023-04-04T06:35:05.356Z · answers+comments (6)
ACX Meetup Madrid
Pablo Villalobos (pvs) · 2023-04-04T08:53:18.424Z · comments (2)
Strategies to Prevent AI Annihilation
[deleted] · 2023-04-04T08:59:53.867Z · comments (0)
Where Free Will and Determinism Meet
David Bravo (davidbravocomas) · 2023-04-04T10:59:36.474Z · comments (0)
Wanted: Mental Health Program Manager at Rethink Wellbeing
Inga G. (inga-g) · 2023-04-04T11:49:40.547Z · comments (0)
[link] Invocations: The Other Capabilities Overhang?
Robert_AIZI · 2023-04-04T13:38:14.315Z · comments (4)
Running many AI variants to find correct goal generalization
avturchin · 2023-04-04T14:16:34.422Z · comments (3)
Given the Restrict Act, Don’t Ban TikTok
Zvi · 2023-04-04T14:40:03.162Z · comments (9)
[question] Steelman / Ideological Turing Test of Yann LeCun's AI X-Risk argument?
Aryeh Englander (alenglander) · 2023-04-04T15:53:19.099Z · answers+comments (14)
[link] Play My Futarchy/Prediction Market Mafia Game
Arjun Panickssery (arjun-panickssery) · 2023-04-04T16:12:30.730Z · comments (2)
Giant (In)scrutable Matrices: (Maybe) the Best of All Possible Worlds
1a3orn · 2023-04-04T17:39:39.720Z · comments (35)
[link] The One Heresy to Rule Them All
rogersbacon · 2023-04-04T18:23:54.263Z · comments (0)
Penalize Model Complexity Via Self-Distillation
research_prime_space · 2023-04-04T18:52:41.063Z · comments (7)
Excessive AI growth-rate yields little socio-economic benefit.
Cleo Nardo (strawberry calm) · 2023-04-04T19:13:51.120Z · comments (22)
LW Team is adjusting moderation policy
Raemon · 2023-04-04T20:41:07.603Z · comments (182)
Recontextualizing the Risks of AI in More Predictable Outcomes
ignorepeter · 2023-04-04T23:28:31.343Z · comments (2)
Is AGI suicidality the golden ray of hope?
Alex Kirko · 2023-04-04T23:29:21.607Z · comments (4)
[link] New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development
Akash (akash-wasil) · 2023-04-05T01:26:51.830Z · comments (9)
[link] "Corrigibility at some small length" by dath ilan
Christopher King (christopher-king) · 2023-04-05T01:47:23.246Z · comments (3)
Empathy bandaid for immediate AI catastrophe
installgentoo · 2023-04-05T02:12:54.713Z · comments (2)