LessWrong 2.0 Reader


[link] Bengio's FAQ on Catastrophic AI Risks
Vaniver · 2023-06-29T23:04:49.098Z · comments (0)
AGI & War
Calecute · 2023-06-29T22:20:58.453Z · comments (1)
Biosafety Regulations (BMBL) and their relevance for AI
Štěpán Los (stepan-los) · 2023-06-29T19:22:41.196Z · comments (0)
Nature Releases A Stupid Editorial On AI Risk
omnizoid · 2023-06-29T19:00:58.170Z · comments (1)
AI Safety without Alignment: How humans can WIN against AI
vicchain (vic-cheng) · 2023-06-29T17:53:03.194Z · comments (1)
Challenge proposal: smallest possible self-hardening backdoor for RLHF
Christopher King (christopher-king) · 2023-06-29T16:56:59.832Z · comments (0)
AI #18: The Great Debate Debate
Zvi · 2023-06-29T16:20:05.569Z · comments (9)
[link] Bruce Sterling on the AI mania of 2023
Mitchell_Porter · 2023-06-29T05:00:18.326Z · comments (1)
Cheat sheet of AI X-risk
momom2 (amaury-lorin) · 2023-06-29T04:28:32.292Z · comments (1)
Anthropically Blind: the anthropic shadow is reflectively inconsistent
Christopher King (christopher-king) · 2023-06-29T02:36:26.347Z · comments (40)
One path to coherence: conditionalization
porby · 2023-06-29T01:08:14.527Z · comments (4)
AXRP announcement: Survey, Store Closing, Patreon
DanielFilan · 2023-06-28T23:40:02.537Z · comments (0)
Metaphors for AI, and why I don’t like them
boazbarak · 2023-06-28T22:47:54.427Z · comments (18)
Transforming Democracy: A Unique Funding Opportunity for US Federal Approval Voting
Aaron Hamlin (aaron-hamlin) · 2023-06-28T22:07:35.971Z · comments (6)
AGI x Animal Welfare: A High-EV Outreach Opportunity?
simeon_c (WayZ) · 2023-06-28T20:44:25.836Z · comments (0)
A "weak" AGI may attempt an unlikely-to-succeed takeover
RobertM (T3t) · 2023-06-28T20:31:46.356Z · comments (17)
[link] Progress links and tweets, 2023-06-28: “We can do big things again in Pennsylvania”
jasoncrawford · 2023-06-28T20:23:55.927Z · comments (1)
[question] What money-pumps exist, if any, for deontologists?
Daniel Kokotajlo (daniel-kokotajlo) · 2023-06-28T19:08:54.890Z · answers+comments (35)
[question] What is your financial portfolio?
Algon · 2023-06-28T18:39:15.284Z · answers+comments (11)
[link] Levels of safety for AI and other technologies
jasoncrawford · 2023-06-28T18:35:52.933Z · comments (0)
LeCun says making a utility function is intractable
Iknownothing · 2023-06-28T18:02:13.721Z · comments (3)
My research agenda in agent foundations
Alex_Altair · 2023-06-28T18:00:27.813Z · comments (9)
AI Incident Sharing - Best practices from other fields and a comprehensive list of existing platforms
Štěpán Los (stepan-los) · 2023-06-28T17:21:13.991Z · comments (0)
[link] The Case for Overconfidence is Overstated
Kevin Dorst · 2023-06-28T17:21:06.160Z · comments (13)
[link] When do "brains beat brawn" in Chess? An experiment
titotal (lombertini) · 2023-06-28T13:33:23.854Z · comments (82)
[link] Giving an evolutionary explanation for Kahneman and Tversky's insights on subjective satisfaction
Lionel (lionel) · 2023-06-28T12:17:18.609Z · comments (1)
[link] Nature: "Stop talking about tomorrow’s AI doomsday when AI poses risks today"
Ben Smith (ben-smith) · 2023-06-28T05:59:49.015Z · comments (8)
Request: Put Carl Shulman's recent podcast into an organized written format
Aryeh Englander (alenglander) · 2023-06-28T02:58:40.011Z · comments (4)
[link] Prediction Market: Will I Pull "The One Ring To Rule Them All?"
Connor Tabarrok · 2023-06-28T02:41:39.414Z · comments (0)
[link] Carl Shulman on The Lunar Society (7 hour, two-part podcast)
ESRogs · 2023-06-28T01:23:52.541Z · comments (17)
[link] Brief summary of ai-plans.com
Iknownothing · 2023-06-28T00:33:36.309Z · comments (4)
[link] Catastrophic Risks from AI #6: Discussion and FAQ
Dan H (dan-hendrycks) · 2023-06-27T23:23:58.846Z · comments (1)
[link] Catastrophic Risks from AI #5: Rogue AIs
Dan H (dan-hendrycks) · 2023-06-27T22:06:11.029Z · comments (0)
AISN #12: Policy Proposals from NTIA’s Request for Comment and Reconsidering Instrumental Convergence
Dan H (dan-hendrycks) · 2023-06-27T17:20:55.185Z · comments (0)
[link] The Weight of the Future (Why The Apocalypse Can Be A Relief)
Sable · 2023-06-27T17:18:30.944Z · comments (14)
Aligning AI by optimizing for "wisdom"
JustinShovelain · 2023-06-27T15:20:00.682Z · comments (8)
Freedom under Naturalistic Dualism
Arturo Macias (arturo-macias) · 2023-06-27T14:34:41.148Z · comments (36)
Munk AI debate: confusions and possible cruxes
Steven Byrnes (steve2152) · 2023-06-27T14:18:47.694Z · comments (21)
Ateliers: Motivation
Stephen Fowler (LosPolloFowler) · 2023-06-27T13:07:06.129Z · comments (0)
Self-Blinded Caffeine RCT
niplav · 2023-06-27T12:38:55.354Z · comments (9)
[link] An overview of the points system
Iknownothing · 2023-06-27T09:09:54.881Z · comments (4)
AISC team report: Soft-optimization, Bayes and Goodhart
Simon Fischer (SimonF) · 2023-06-27T06:05:35.494Z · comments (2)
Epistemic spot checking one claim in The Precipice
Isaac King (KingSupernova) · 2023-06-27T01:03:57.553Z · comments (3)
[link] nuclear costs are inflation
bhauth · 2023-06-26T22:30:52.341Z · comments (42)
Man in the Arena
Richard_Ngo (ricraz) · 2023-06-26T21:57:45.353Z · comments (6)
[link] Catastrophic Risks from AI #4: Organizational Risks
Dan H (dan-hendrycks) · 2023-06-26T19:36:41.333Z · comments (0)
The fraught voyage of aligned novelty
TsviBT · 2023-06-26T19:10:42.195Z · comments (0)
[question] Deceptive AI vs. shifting instrumental incentives
Aryeh Englander (alenglander) · 2023-06-26T18:09:08.306Z · answers+comments (2)
On the Cost of Thriving Index
Zvi · 2023-06-26T15:30:05.160Z · comments (6)
[link] "Safety Culture for AI" is important, but isn't going to be easy
Davidmanheim · 2023-06-26T12:52:47.368Z · comments (2)