LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

ChatGPT tells stories, and a note about reverse engineering: A Working Paper
Bill Benzon (bill-benzon) · 2023-03-03T15:12:05.544Z · comments (0)
Measuring Ads Opt-Out Compliance
jefftk (jkaufman) · 2023-03-03T16:00:03.586Z · comments (2)
Veganism and Acausal Trade
elbow921 (elbow921@aol.com) · 2023-03-03T17:44:46.483Z · comments (1)
AI Governance & Strategy: Priorities, talent gaps, & opportunities
Akash (akash-wasil) · 2023-03-03T18:09:26.659Z · comments (2)
Situational awareness in Large Language Models
Simon Möller · 2023-03-03T18:59:32.148Z · comments (2)
Why are counterfactuals elusive?
Martín Soto (martinsq) · 2023-03-03T20:13:48.981Z · comments (6)
Comments on OpenAI's "Planning for AGI and beyond"
So8res · 2023-03-03T23:01:29.665Z · comments (2)
Acausal normalcy
Andrew_Critch · 2023-03-03T23:34:33.971Z · comments (30)
How popular is ChatGPT? Part 2: slower growth than Pokémon GO
Richard Korzekwa (Grothor) · 2023-03-03T23:40:04.494Z · comments (4)
Bite Sized Tasks
Johannes C. Mayer (johannes-c-mayer) · 2023-03-04T03:31:30.404Z · comments (2)
Contra Hanson on AI Risk
Liron · 2023-03-04T08:02:02.375Z · comments (23)
Predictive Performance on Metaculus vs. Manifold Markets
nikos (followtheargument) · 2023-03-04T08:10:40.941Z · comments (0)
Switching to Electric Mandolin
jefftk (jkaufman) · 2023-03-04T15:40:05.218Z · comments (0)
Why don't more people talk about ecological psychology?
Ppau · 2023-03-04T17:03:17.588Z · comments (10)
Optimal Music Choice
mbazzani · 2023-03-04T17:26:57.915Z · comments (0)
The Benefits of Distillation in Research
Jonas Hallgren · 2023-03-04T17:45:22.547Z · comments (2)
[link] The Practitioner's Path 2.0: A new framework for structured self-improvement
[deleted] · 2023-03-04T19:19:04.153Z · comments (2)
Contra "Strong Coherence"
DragonGod · 2023-03-04T20:05:28.346Z · comments (24)
More money with less risk: sell services instead of model access
lukehmiles (lcmgcd) · 2023-03-04T20:51:36.480Z · comments (3)
Consciousness is irrelevant - instead solve alignment by asking this question
Oliver Siegel (oliver-siegel) · 2023-03-04T22:06:46.424Z · comments (6)
Why Not Just... Build Weak AI Tools For AI Alignment Research?
johnswentworth · 2023-03-05T00:12:33.651Z · comments (17)
[link] Forecasts on Moore v Harper from Samotsvety
gregjustice · 2023-03-05T00:47:37.217Z · comments (0)
Why We MUST Build an (aligned) Artificial Superintelligence That Takes Over Human Society - A Thought Experiment
twkaiser · 2023-03-05T00:47:51.884Z · comments (12)
[question] Does polyamory at a workplace turn nepotism up to eleven?
Viliam · 2023-03-05T00:57:52.087Z · answers+comments (11)
Substitute goods for leisure are abundant
Adam Zerner (adamzerner) · 2023-03-05T03:45:32.705Z · comments (7)
Selective, Corrective, Structural: Three Ways of Making Social Systems Work
Said Achmiz (SaidAchmiz) · 2023-03-05T08:45:45.615Z · comments (13)
Why kill everyone?
arisAlexis (arisalexis) · 2023-03-05T11:53:23.823Z · comments (5)
Halifax Monthly Meetup: AI Safety Discussion
Ideopunk · 2023-03-05T12:42:40.511Z · comments (0)
[link] QACI: the problem of blob location, causality, and counterfactuals
Tamsin Leake (carado-1) · 2023-03-05T14:06:09.372Z · comments (1)
The Inner-Compass Theorem
Tristan Miano (tristan-miano-2) · 2023-03-05T15:21:00.797Z · comments (12)
Do humans derive values from fictitious imputed coherence?
TsviBT · 2023-03-05T15:23:04.065Z · comments (8)
Abusing Snap Circuits IC
jefftk (jkaufman) · 2023-03-05T17:00:08.585Z · comments (3)
Research proposal: Leveraging Jungian archetypes to create values-based models
MiguelDev (whitehatStoic) · 2023-03-05T17:39:27.126Z · comments (2)
[question] What are some good ways to heighten my emotions?
[deleted] · 2023-03-05T18:06:30.693Z · answers+comments (5)
The Ogdoad
Tristan Miano (tristan-miano-2) · 2023-03-05T20:01:07.746Z · comments (1)
🤔 Coordination explosion before intelligence explosion...?
tailcalled · 2023-03-05T20:48:55.995Z · comments (8)
Is it time to talk about AI doomsday prepping yet?
bokov (bokov-1) · 2023-03-05T21:17:54.270Z · comments (6)
[link] Steven Pinker on ChatGPT and AGI (Feb 2023)
Evan R. Murphy · 2023-03-05T21:34:14.846Z · comments (8)
A concerning observation from media coverage of AI industry dynamics
Justin Olive · 2023-03-05T21:38:18.473Z · comments (3)
Startups are like firewood
Adam Zerner (adamzerner) · 2023-03-05T23:09:35.066Z · comments (2)
Who Aligns the Alignment Researchers?
Ben Smith (ben-smith) · 2023-03-05T23:22:27.107Z · comments (0)
What should we do about network-effect monopolies?
benkuhn · 2023-03-06T00:50:07.554Z · comments (7)
Cap Model Size for AI Safety
research_prime_space · 2023-03-06T01:11:59.617Z · comments (4)
Why did you trash the old HPMOR.com?
AnnoyedReader · 2023-03-06T01:55:48.304Z · comments (68)
[link] AGI's Impact on Employment
TheUnkown · 2023-03-06T01:56:16.934Z · comments (1)
The Type II Inner-Compass Theorem
Tristan Miano (tristan-miano-2) · 2023-03-06T02:35:16.224Z · comments (0)
EA & LW Forum Weekly Summary (27th Feb - 5th Mar 2023)
Zoe Williams (GreyArea) · 2023-03-06T03:18:32.184Z · comments (0)
Honesty, Openness, Trustworthiness, and Secrets
NormanPerlmutter · 2023-03-06T09:03:12.139Z · comments (0)
The idea
JNS (jesper-norregaard-sorensen) · 2023-03-06T13:42:38.714Z · comments (0)