LessWrong 2.0 Reader


← previous page (newer posts) · next page (older posts) →

Degeneracies are sticky for SGD
Guillaume Corlouer (Tancrede) · 2024-06-16T21:19:53.362Z · comments (1)
some thoughts on LessOnline
Raemon · 2024-05-08T23:17:41.372Z · comments (5)
Coalitional agency
Richard_Ngo (ricraz) · 2024-07-22T00:09:51.525Z · comments (6)
Experiment on repeating choices
KatjaGrace · 2024-04-19T04:20:03.992Z · comments (1)
A "Bitter Lesson" Approach to Aligning AGI and ASI
RogerDearnaley (roger-d-1) · 2024-07-06T01:23:22.376Z · comments (39)
Calculating Natural Latents via Resampling
johnswentworth · 2024-06-06T00:37:42.127Z · comments (4)
[link] Let's Design A School, Part 1
Sable · 2024-04-23T21:50:20.937Z · comments (5)
[link] Datasets that change the odds you exist
dynomight · 2024-06-29T18:45:14.385Z · comments (4)
How the AI safety technical landscape has changed in the last year, according to some practitioners
tlevin (trevor) · 2024-07-26T19:06:47.126Z · comments (6)
[link] Questions are usually too cheap
Nathan Young · 2024-05-11T13:00:54.302Z · comments (19)
AI things that are perhaps as important as human-controlled AI
Chi Nguyen · 2024-03-03T18:07:24.291Z · comments (4)
What if a tech company forced you to move to NYC?
KatjaGrace · 2024-06-09T06:30:03.329Z · comments (22)
[link] Building intuition with spaced repetition systems
Jacob G-W (g-w1) · 2024-05-12T15:49:04.860Z · comments (6)
Spatial attention as a “tell” for empathetic simulation?
Steven Byrnes (steve2152) · 2024-04-26T15:10:58.040Z · comments (12)
Why Care About Natural Latents?
johnswentworth · 2024-05-09T23:14:30.626Z · comments (3)
It's OK to be biased towards humans
dr_s · 2023-11-11T11:59:16.568Z · comments (69)
Conditional prediction markets are evidential, not causal
philh · 2024-02-07T21:52:47.476Z · comments (10)
Why I no longer identify as transhumanist
Kaj_Sotala · 2024-02-03T12:00:04.389Z · comments (33)
[link] AI Safety Hub Serbia Official Opening
DusanDNesic · 2023-10-28T17:03:34.607Z · comments (0)
Skepticism About DeepMind's "Grandmaster-Level" Chess Without Search
Arjun Panickssery (arjun-panickssery) · 2024-02-12T00:56:44.944Z · comments (13)
[link] A model of research skill
L Rudolf L (LRudL) · 2024-01-08T00:13:12.755Z · comments (6)
Unpacking the dynamics of AGI conflict that suggest the necessity of a preemptive pivotal act
Eli Tyre (elityre) · 2023-10-20T06:48:06.765Z · comments (2)
The Best of Don’t Worry About the Vase
Zvi · 2023-12-13T12:50:02.510Z · comments (4)
How do you actually obtain and report a likelihood function for scientific research?
Peter Berggren (peter-berggren) · 2024-02-11T17:42:49.956Z · comments (4)
[link] In Defense of Epistemic Empathy
Kevin Dorst · 2023-12-27T16:27:06.320Z · comments (19)
A quick investigation of AI pro-AI bias
Fabien Roger (Fabien) · 2024-01-19T23:26:32.663Z · comments (1)
Genetic fitness is a measure of selection strength, not the selection target
Kaj_Sotala · 2023-11-04T19:02:13.783Z · comments (43)
Seeking Collaborators
abramdemski · 2024-11-01T17:13:36.162Z · comments (14)
A Path out of Insufficient Views
Unreal · 2024-09-24T20:00:27.332Z · comments (46)
Safe Predictive Agents with Joint Scoring Rules
Rubi J. Hudson (Rubi) · 2024-10-09T16:38:16.535Z · comments (10)
[link] Demis Hassabis — Google DeepMind: The Podcast
Zach Stein-Perlman · 2024-08-16T00:00:04.712Z · comments (8)
Secret Collusion: Will We Know When to Unplug AI?
schroederdewitt · 2024-09-16T16:07:01.119Z · comments (7)
Owain Evans on Situational Awareness and Out-of-Context Reasoning in LLMs
Michaël Trazzi (mtrazzi) · 2024-08-24T04:30:11.807Z · comments (0)
[question] Could orcas be (trained to be) smarter than humans?
Towards_Keeperhood (Simon Skade) · 2024-11-04T23:29:26.677Z · answers+comments (11)
[Intuitive self-models] 5. Dissociative Identity (Multiple Personality) Disorder
Steven Byrnes (steve2152) · 2024-10-15T13:31:46.157Z · comments (7)
[link] On the Role of Proto-Languages
adamShimi · 2024-09-22T16:50:34.720Z · comments (1)
Thiel on AI & Racing with China
Ben Pace (Benito) · 2024-08-20T03:19:18.966Z · comments (10)
[link] How Likely Are Various Precursors of Existential Risk?
NunoSempere (Radamantis) · 2024-10-28T13:27:31.620Z · comments (4)
[Closed] PIBBSS is hiring in a variety of roles (alignment research and incubation program)
Nora_Ammann · 2024-04-09T08:12:59.241Z · comments (0)
Monthly Roundup #17: April 2024
Zvi · 2024-04-15T12:10:03.126Z · comments (4)
On “first critical tries” in AI alignment
Joe Carlsmith (joekc) · 2024-06-05T00:19:02.814Z · comments (8)
[link] OpenAI releases GPT-4o, natively interfacing with text, voice and vision
Martín Soto (martinsq) · 2024-05-13T18:50:52.337Z · comments (23)
Towards a formalization of the agent structure problem
Alex_Altair · 2024-04-29T20:28:15.190Z · comments (5)
[link] Come to Manifest 2024 (June 7-9 in Berkeley)
Saul Munn (saul-munn) · 2024-03-27T21:30:17.306Z · comments (2)
[link] Unlocking Solutions—By Understanding Coordination Problems
James Stephen Brown (james-brown) · 2024-07-27T04:52:13.435Z · comments (4)
Math-to-English Cheat Sheet
nahoj · 2024-04-08T09:19:40.814Z · comments (5)
[link] Land Reclamation is in the 9th Circle of Stagnation Hell
Maxwell Tabarrok (maxwell-tabarrok) · 2024-01-12T13:36:27.159Z · comments (6)
AI #44: Copyright Confrontation
Zvi · 2023-12-28T14:30:10.237Z · comments (13)
[link] Theories of Change for AI Auditing
Lee Sharkey (Lee_Sharkey) · 2023-11-13T19:33:43.928Z · comments (0)