LessWrong 2.0 Reader


← previous page (newer posts) · next page (older posts) →

[question] Nonlinear limitations of ReLUs
magfrump · 2023-10-26T18:51:24.130Z · answers+comments (1)
AI Alignment Problem: Requirement not optional (A Critical Analysis through Mass Effect Trilogy)
TAWSIF AHMED (tawsif-ahmed-1) · 2023-10-26T18:02:31.748Z · comments (0)
[Thought Experiment] Tomorrow's Echo - The future of synthetic companionship.
Vimal Naran (vimal-naran) · 2023-10-26T17:54:17.404Z · comments (2)
Disagreements over the prioritization of existential risk from AI
Olivier Coutu (olivier-coutu) · 2023-10-26T17:54:12.014Z · comments (0)
[question] What if AGI had its own universe to maybe wreck?
mseale (mason-seale) · 2023-10-26T17:49:27.018Z · answers+comments (2)
Changing Contra Dialects
jefftk (jkaufman) · 2023-10-26T17:30:10.387Z · comments (2)
5 psychological reasons for dismissing x-risks from AGI
Igor Ivanov (igor-ivanov) · 2023-10-26T17:21:48.580Z · comments (6)
5. Risks from preventing legitimate value change (value collapse)
Nora_Ammann · 2023-10-26T14:38:35.136Z · comments (1)
4. Risks from causing illegitimate value change (performative predictors)
Nora_Ammann · 2023-10-26T14:38:26.140Z · comments (3)
3. Premise three & Conclusion: AI systems can affect value change trajectories & the Value Change Problem
Nora_Ammann · 2023-10-26T14:38:14.916Z · comments (4)
2. Premise two: Some cases of value change are (il)legitimate
Nora_Ammann · 2023-10-26T14:36:53.511Z · comments (7)
1. Premise one: Values are malleable
Nora_Ammann · 2023-10-26T14:36:41.168Z · comments (1)
0. The Value Change Problem: introduction, overview and motivations
Nora_Ammann · 2023-10-26T14:36:15.466Z · comments (0)
[link] EPUBs of MIRI Blog Archives and selected LW Sequences
mesaoptimizer · 2023-10-26T14:17:11.538Z · comments (6)
[link] UK Government publishes "Frontier AI: capabilities and risks" Discussion Paper
A.H. (AlfredHarwood) · 2023-10-26T13:55:16.841Z · comments (0)
AI #35: Responsible Scaling Policies
Zvi · 2023-10-26T13:30:02.439Z · comments (10)
RA Bounty: Looking for feedback on screenplay about AI Risk
Writer · 2023-10-26T13:23:02.806Z · comments (6)
Sensor Exposure can Compromise the Human Brain in the 2020s
trevor (TrevorWiesinger) · 2023-10-26T03:31:09.835Z · comments (6)
Notes on "How do we become confident in the safety of a machine learning system?"
RohanS · 2023-10-26T03:13:56.024Z · comments (0)
Apply to the Constellation Visiting Researcher Program and Astra Fellowship, in Berkeley this Winter
Nate Thomas (nate-thomas) · 2023-10-26T03:07:34.118Z · comments (10)
CHAI internship applications are open (due Nov 13)
Erik Jenner (ejenner) · 2023-10-26T00:53:49.640Z · comments (0)
Architects of Our Own Demise: We Should Stop Developing AI
Roko · 2023-10-26T00:36:05.126Z · comments (74)
EA Infrastructure Fund: June 2023 grant recommendations
Linch · 2023-10-26T00:35:07.981Z · comments (0)
[link] Responsible Scaling Policies Are Risk Management Done Wrong
simeon_c (WayZ) · 2023-10-25T23:46:34.247Z · comments (33)
[question] What are the long-term outcomes for Bitcoin and other cryptocurrencies?
Auspicious · 2023-10-25T21:12:03.017Z · answers+comments (1)
AI as a science, and three obstacles to alignment strategies
So8res · 2023-10-25T21:00:16.003Z · comments (79)
My hopes for alignment: Singular learning theory and whole brain emulation
Garrett Baker (D0TheMath) · 2023-10-25T18:31:14.407Z · comments (5)
[question] Lying to chess players for alignment
Zane · 2023-10-25T17:47:15.033Z · answers+comments (54)
[link] Anthropic, Google, Microsoft & OpenAI announce Executive Director of the Frontier Model Forum & over $10 million for a new AI Safety Fund
Zach Stein-Perlman · 2023-10-25T15:20:52.765Z · comments (8)
"The Economics of Time Travel" - call for reviewers (Seeds of Science)
rogersbacon · 2023-10-25T15:13:59.647Z · comments (2)
Compositional preference models for aligning LMs
Tomek Korbak (tomek-korbak) · 2023-10-25T12:17:28.990Z · comments (2)
[question] Should the US House of Representatives adopt rank choice voting for leadership positions?
jmh · 2023-10-25T11:16:14.223Z · answers+comments (6)
[link] Researchers believe they have found a way for artists to fight back against AI style capture
vernamcipher (michael-flood-1) · 2023-10-25T10:54:23.652Z · comments (1)
Why We Disagree
zulupineapple · 2023-10-25T10:50:26.420Z · comments (2)
Beyond the Data: Why aid to poor doesn't work
Lyrongolem (david-xiao) · 2023-10-25T05:03:39.402Z · comments (31)
[link] Announcing Epoch's newly expanded Parameters, Compute and Data Trends in Machine Learning database
Robi Rahman (robirahman) · 2023-10-25T02:55:07.440Z · comments (0)
What is a Sequencing Read?
jefftk (jkaufman) · 2023-10-25T02:10:11.443Z · comments (2)
Verifiable private execution of machine learning models with Risc0?
mako yass (MakoYass) · 2023-10-25T00:44:48.643Z · comments (1)
[question] How to Resolve Forecasts With No Central Authority?
Nathan Young · 2023-10-25T00:28:32.332Z · answers+comments (6)
Thoughts on responsible scaling policies and regulation
paulfchristiano · 2023-10-24T22:21:18.341Z · comments (33)
Poker, Religion and Peace
sleno · 2023-10-24T21:21:56.859Z · comments (2)
The Screenplay Method
Yeshua God (lord-god) · 2023-10-24T17:41:02.812Z · comments (0)
Blunt Razor
fryolysis (fryloysis) · 2023-10-24T17:27:19.537Z · comments (0)
Halloween Problem
Saint Blasphemer (Licht) · 2023-10-24T16:46:50.772Z · comments (1)
Who is Harry Potter? Some predictions.
Donald Hobson (donald-hobson) · 2023-10-24T16:14:17.860Z · comments (7)
Book Review: Going Infinite
Zvi · 2023-10-24T15:00:02.251Z · comments (109)
[Interview w/ Quintin Pope] Evolution, values, and AI Safety
fowlertm · 2023-10-24T13:53:06.146Z · comments (0)
[link] Lying is Cowardice, not Strategy
Connor Leahy (NPCollapse) · 2023-10-24T13:24:25.450Z · comments (73)
[question] Anyone Else Using Brilliant?
Sable · 2023-10-24T12:12:27.540Z · answers+comments (0)
Announcing #AISummitTalks featuring Professor Stuart Russell and many others
otto.barten (otto-barten) · 2023-10-24T10:11:34.527Z · comments (1)