LessWrong 2.0 Reader


[question] Are quantum indeterminacy and normal uncertainty meaningfully distinct?
eapi (edward-pierzchalski) · 2023-03-30T23:48:46.067Z · answers+comments (11)
Deference on AI timelines: survey results
Sam Clarke · 2023-03-30T23:03:52.661Z · comments (4)
AI #5: Level One Bard
Zvi · 2023-03-30T23:00:00.690Z · comments (9)
Eliezer's Videos and More
Johannes C. Mayer (johannes-c-mayer) · 2023-03-30T22:16:30.269Z · comments (5)
We might need to rethink the Hard Reset, aka the AI Pause.
Jonas Kgomo (jonas-kgomo) · 2023-03-30T21:38:44.564Z · comments (0)
AI-assisted alignment proposals require specific decomposition of capabilities
RobertM (T3t) · 2023-03-30T21:31:57.725Z · comments (2)
The Healing Code of Joan
jdcampolargo (jdampolargo) · 2023-03-30T21:09:31.469Z · comments (0)
[Event] Join Metaculus Tomorrow, March 31st, for Forecast Friday!
ChristianWilliams · 2023-03-30T20:58:57.315Z · comments (2)
How To Get Startup Ideas: A Brief Lit Review and Analysis
Adam Zerner (adamzerner) · 2023-03-30T20:33:04.179Z · comments (10)
Shannon's Surprising Discovery
johnswentworth · 2023-03-30T20:15:54.065Z · comments (7)
[link] Early Results: Do LLMs complete false equations with false equations?
Robert_AIZI · 2023-03-30T20:14:23.132Z · comments (0)
Arguing all sides with ChatGPT
Richard_Kennaway · 2023-03-30T19:50:38.892Z · comments (1)
AGI: Hire Software Engineers - All of Them, Right Now
MGow · 2023-03-30T18:40:47.530Z · comments (3)
Burlington, VT - Spring ACX Meetup
Forrest Csuy (forrest-csuy) · 2023-03-30T18:15:57.673Z · comments (1)
The 0.2 OOMs/year target
Cleo Nardo (strawberry calm) · 2023-03-30T18:15:40.735Z · comments (24)
ACX Everywhere - Punta Cana (DR)
Nikita Sokolsky (nikita-sokolsky) · 2023-03-30T16:03:06.525Z · comments (0)
On the FLI Open Letter
Zvi · 2023-03-30T16:00:00.716Z · comments (11)
[link] "Dangers of AI and the End of Human Civilization" Yudkowsky on Lex Fridman
DragonGod · 2023-03-30T15:43:32.814Z · comments (32)
How is AI governed and regulated, around the world?
Mitchell_Porter · 2023-03-30T15:36:55.987Z · comments (6)
Role Architectures: Applying LLMs to consequential tasks
Eric Drexler · 2023-03-30T15:00:28.928Z · comments (7)
Alignment - Path to AI as ally, not slave nor foe
ozb · 2023-03-30T14:54:27.231Z · comments (3)
What if our Galaxy isn't full of AI because we're in a Neutral Zone between them?
Erlja Jkdf. (erlja-jkdf) · 2023-03-30T14:31:30.864Z · comments (0)
Imitation Learning from Language Feedback
Jérémy Scheurer (JerrySch) · 2023-03-30T14:11:56.295Z · comments (3)
~100 Interesting Questions
RohanS · 2023-03-30T13:57:28.591Z · comments (18)
The AI Shutdown Problem Solution through Commitment to Archiving and Periodic Restoration
avturchin · 2023-03-30T13:17:58.519Z · comments (7)
[link] AI and Evolution
Dan H (dan-hendrycks) · 2023-03-30T12:56:27.280Z · comments (4)
Meme or Die: Modern Societies are Dependent on Emotionally Rich Memes to Rapidly Evolve
monkymind · 2023-03-30T08:59:29.976Z · comments (1)
[link] Stop Using Discord as an Archive
Nicholas / Heather Kross (NicholasKross) · 2023-03-30T02:15:34.580Z · comments (2)
AI Doom Is Not (Only) Disjunctive
NickGabs · 2023-03-30T01:42:55.970Z · comments (0)
You Can’t Predict a Game of Pinball
Jeffrey Heninger (jeffrey-heninger) · 2023-03-30T00:40:05.280Z · comments (12)
[link] Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky
jacquesthibs (jacques-thibodeau) · 2023-03-29T23:16:19.431Z · comments (297)
[link] Othello-GPT: Reflections on the Research Process
Neel Nanda (neel-nanda-1) · 2023-03-29T22:13:42.007Z · comments (0)
[link] Othello-GPT: Future Work I Am Excited About
Neel Nanda (neel-nanda-1) · 2023-03-29T22:13:26.823Z · comments (2)
[link] Actually, Othello-GPT Has A Linear Emergent World Representation
Neel Nanda (neel-nanda-1) · 2023-03-29T22:13:14.878Z · comments (26)
Draft: Detecting optimization
Alex_Altair · 2023-03-29T20:17:46.642Z · comments (2)
[link] "Sorcerer's Apprentice" from Fantasia as an analogy for alignment
awg · 2023-03-29T18:21:55.678Z · comments (4)
The Changing Face of Twitter
Zvi · 2023-03-29T17:50:00.747Z · comments (8)
[link] Nobody’s on the ball on AGI alignment
leopold · 2023-03-29T17:40:36.250Z · comments (38)
[link] Want to win the AGI race? Solve alignment.
leopold · 2023-03-29T17:40:36.187Z · comments (3)
ChatGPT and Bing Chat can't play Botticelli
Asha Saavoss (asha-saavoss) · 2023-03-29T17:39:50.029Z · comments (0)
[link] The Rationalist Guide to Hinduism
Harsha G. (harsha-g) · 2023-03-29T17:03:41.107Z · comments (12)
“Unintentional AI safety research”: Why not systematically mine AI technical research for safety purposes?
Jemal Young (ghostwheel) · 2023-03-29T15:56:39.355Z · comments (3)
The open letter
kornai · 2023-03-29T15:09:43.047Z · comments (2)
I made AI Risk Propaganda
monkymind · 2023-03-29T14:26:49.346Z · comments (0)
[link] Strong Cheap Signals
trevor (TrevorWiesinger) · 2023-03-29T14:18:52.734Z · comments (3)
Missing forecasting tools: from catalogs to a new kind of prediction market
MichaelLatowicki · 2023-03-29T09:55:30.885Z · comments (1)
Spreadsheet for 200 Concrete Problems In Interpretability
Jay Bailey · 2023-03-29T06:51:46.114Z · comments (0)
[question] Which parts of the existing internet are already likely to be in (GPT-5/other soon-to-be-trained LLMs)'s training corpus?
AnnaSalamon · 2023-03-29T05:17:28.000Z · answers+comments (2)
[question] Are there specific books that it might slightly help alignment to have on the internet?
AnnaSalamon · 2023-03-29T05:08:28.364Z · answers+comments (25)
[link] FLI open letter: Pause giant AI experiments
Zach Stein-Perlman · 2023-03-29T04:04:23.333Z · comments (123)
next page (older posts) →