LessWrong 2.0 Reader


Run Posts By Orgs
jefftk (jkaufman) · 2023-03-29T02:40:01.586Z · comments (74)
Desensitizing Deepfakes
Phib · 2023-03-29T01:20:41.540Z · comments (0)
[link] Large language models aren't trained enough
sanxiyn · 2023-03-29T00:56:13.925Z · comments (4)
Job Board (28 March 2033)
dr_s · 2023-03-28T22:44:41.568Z · comments (1)
[link] Four lenses on AI risks
jasoncrawford · 2023-03-28T21:52:55.191Z · comments (5)
Some common confusion about induction heads
Alexandre Variengien (alexandre-variengien) · 2023-03-28T21:51:12.411Z · comments (4)
Draft: The optimization toolbox
Alex_Altair · 2023-03-28T20:40:38.165Z · comments (1)
Inching “Kubla Khan” and GPT into the same intellectual framework @ 3 Quarks Daily
Bill Benzon (bill-benzon) · 2023-03-28T19:50:09.538Z · comments (0)
A rough and incomplete review of some of John Wentworth's research
So8res · 2023-03-28T18:52:50.553Z · comments (17)
[question] How do you manage your inputs?
Mateusz Bagiński (mateusz-baginski) · 2023-03-28T18:26:36.979Z · answers+comments (3)
[link] Chatbot convinces Belgian to commit suicide
Jeroen De Ryck · 2023-03-28T18:14:02.529Z · comments (18)
A Primer On Chaos
johnswentworth · 2023-03-28T18:01:30.702Z · comments (9)
[question] How likely are scenarios where AGI ends up overtly or de facto torturing us? How likely are scenarios where AGI prevents us from committing suicide or dying?
JohnGreer · 2023-03-28T18:00:47.221Z · answers+comments (4)
How do we align humans and what does it mean for the new Conjecture's strategy
Igor Ivanov (igor-ivanov) · 2023-03-28T17:54:23.982Z · comments (4)
[link] Governing High-Impact AI Systems: Understanding Canada’s Proposed AI Bill. April 15, Carleton University, Ottawa
Liav Koren (liav-koren) · 2023-03-28T17:48:11.531Z · comments (1)
I had a chat with GPT-4 on the future of AI and AI safety
Kristian Freed (kristian-freed) · 2023-03-28T17:47:36.171Z · comments (0)
LessWrong Hangout
Raymond Koopmanschap · 2023-03-28T17:47:12.391Z · comments (0)
Half-baked alignment idea
ozb · 2023-03-28T17:47:09.528Z · comments (27)
[question] Solving Mysteries -
Phib · 2023-03-28T17:46:12.795Z · answers+comments (0)
Some of My Current Impressions Entering AI Safety
Phib · 2023-03-28T17:46:12.718Z · comments (0)
[question] Why do the Sequences say that "Löb's Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent."?
Thoth Hermes (thoth-hermes) · 2023-03-28T17:19:12.089Z · answers+comments (30)
[link] Corrigibility, Self-Deletion, and Identical Strawberries
Robert_AIZI · 2023-03-28T16:54:59.760Z · comments (2)
[question] Why no major LLMs with memory?
Kaj_Sotala · 2023-03-28T16:34:37.272Z · answers+comments (15)
Response to Tyler Cowen’s Existential risk, AI, and the inevitable turn in human history
Zvi · 2023-03-28T16:00:02.088Z · comments (27)
Adapting to Change: Overcoming Chronostasis in AI Language Models
RationalMindset · 2023-03-28T14:32:50.905Z · comments (0)
[link] Feeling Progress as Motivation
Sable · 2023-03-28T09:11:48.458Z · comments (1)
Be Not Afraid
Alex Beyman (alexbeyman) · 2023-03-28T08:12:48.108Z · comments (0)
Creating a family with GPT-4
Kaj_Sotala · 2023-03-28T06:40:06.412Z · comments (3)
[link] Some 2-4-6 problems
abstractapplic · 2023-03-28T06:32:02.946Z · comments (9)
[question] Deep folding docs site?
mcint · 2023-03-28T06:01:23.567Z · answers+comments (2)
[question] Why does advanced AI want not to be shut down?
RedFishBlueFish (RedStateBlueState) · 2023-03-28T04:26:23.914Z · answers+comments (19)
100 Dinners And A Workshop: Information Preservation And Goals
Stephen Fowler (LosPolloFowler) · 2023-03-28T03:13:06.362Z · comments (0)
[link] Demons from the 5&10verse!
Slimepriestess (Hivewired) · 2023-03-28T02:41:11.487Z · comments (15)
[question] [link] Can GPT-4 play 20 questions against another instance of itself?
Nathan Helm-Burger (nathan-helm-burger) · 2023-03-28T01:11:46.601Z · answers+comments (1)
Geoffrey Hinton - Full "not inconceivable" quote
WilliamKiely · 2023-03-28T00:22:01.626Z · comments (2)
[link] An A.I. Safety Presentation at RIT
NicholasKross · 2023-03-27T23:49:59.657Z · comments (0)
Which AI outputs should humans check for shenanigans, to avoid AI takeover? A simple model
Tom Davidson (tom-davidson-1) · 2023-03-27T23:36:39.701Z · comments (3)
[link] The Prospect of an AI Winter
Erich_Grunewald · 2023-03-27T20:55:35.619Z · comments (24)
[question] Best arguments against the outside view that AGI won't be a huge deal, thus we survive.
Noosphere89 (sharmake-farah) · 2023-03-27T20:49:24.728Z · answers+comments (7)
EA & LW Forum Weekly Summary (20th - 26th March 2023)
Zoe Williams (GreyArea) · 2023-03-27T20:46:55.269Z · comments (0)
[link] Three of my beliefs about upcoming AGI
Robert_AIZI · 2023-03-27T20:27:57.234Z · comments (0)
Nobody knows how to reliably test for AI safety
marcusarvan · 2023-03-27T19:48:23.531Z · comments (0)
[link] New blog: Planned Obsolescence
Ajeya Cotra (ajeya-cotra) · 2023-03-27T19:46:25.429Z · comments (7)
South Bay ACX/SSC Spring Meetups Everywhere
allisona · 2023-03-27T19:39:20.483Z · comments (0)
[question] Resources to see how people think/approach mathematics and problem-solving
zef (uzpg) · 2023-03-27T19:12:44.090Z · answers+comments (2)
Staggering Hunters
Screwtape · 2023-03-27T19:11:50.333Z · comments (2)
[link] Neurotechnology is Critical for AI Alignment
Milan Cvitkovic (milan-cvitkovic-1) · 2023-03-27T18:27:09.228Z · comments (3)
[question] Best resources to learn philosophy of mind and AI?
Sky Moo (sky-moo) · 2023-03-27T18:22:12.872Z · answers+comments (0)
[link] the tensor is a lonely place
jml6 (joshua-lovins) · 2023-03-27T18:22:00.515Z · comments (0)
[question] Bermudez Interface Problem
Motor Vehicle (michael-violette) · 2023-03-27T18:11:48.133Z · answers+comments (2)