LessWrong 2.0 Reader

Applied Linear Algebra Lecture Series
johnswentworth · 2022-12-22T06:57:26.643Z · comments (8)
LW Beta Feature: Side-Comments
jimrandomh · 2022-11-24T01:55:31.578Z · comments (47)
Solving Math Problems by Relay
bgold · 2020-07-17T15:32:00.985Z · comments (26)
[link] The Witness
Richard_Ngo (ricraz) · 2023-12-03T22:27:16.248Z · comments (4)
Announcing Encultured AI: Building a Video Game
Andrew_Critch · 2022-08-18T02:16:26.726Z · comments (27)
[link] Scaling Laws for Reward Model Overoptimization
leogao · 2022-10-20T00:20:06.920Z · comments (13)
PSA: The community is in Berkeley/Oakland, not "the Bay Area"
maia · 2023-09-11T15:59:47.132Z · comments (7)
Biosecurity Culture, Computer Security Culture
jefftk (jkaufman) · 2023-08-30T16:40:03.101Z · comments (10)
Testing PaLM prompts on GPT3
Yitz (yitz) · 2022-04-06T05:21:06.841Z · comments (14)
Money Stuff
Jacob Falkovich (Jacobian) · 2021-11-01T16:08:02.700Z · comments (18)
Introducing Leap Labs, an AI interpretability startup
Jessica Rumbelow (jessica-cooper) · 2023-03-06T16:16:22.182Z · comments (12)
Dreams of AI alignment: The danger of suggestive names
TurnTrout · 2024-02-10T01:22:51.715Z · comments (59)
[link] Financial Times: We must slow down the race to God-like AI
trevor (TrevorWiesinger) · 2023-04-13T19:55:26.217Z · comments (17)
[link] Carl Sagan, nuking the moon, and not nuking the moon
eukaryote · 2024-04-13T04:08:50.166Z · comments (8)
Experimentally evaluating whether honesty generalizes
paulfchristiano · 2021-07-01T17:47:57.847Z · comments (24)
[link] [Linkpost] The Story Of VaccinateCA
hath · 2022-12-09T23:54:48.703Z · comments (4)
Frequent arguments about alignment
John Schulman (john-schulman) · 2021-06-23T00:46:38.568Z · comments (17)
Toward A Bayesian Theory Of Willpower
Scott Alexander (Yvain) · 2021-03-26T02:33:55.056Z · comments (28)
[link] Reproducing ARC Evals' recent report on language model agents
Thomas Broadley (thomas-broadley) · 2023-09-01T16:52:17.147Z · comments (17)
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness
J Bostock (Jemist) · 2021-10-29T11:09:20.559Z · comments (120)
Rapid Increase of Highly Mutated B.1.1.529 Strain in South Africa
dawangy · 2021-11-26T01:05:49.516Z · comments (15)
How likely is deceptive alignment?
evhub · 2022-08-30T19:34:25.997Z · comments (28)
[link] A review of Where Is My Flying Car? by J. Storrs Hall
jasoncrawford · 2020-11-06T20:01:55.074Z · comments (23)
[link] Turning air into bread
jasoncrawford · 2019-10-21T17:50:00.117Z · comments (12)
Yes, AI research will be substantially curtailed if a lab causes a major disaster
lc · 2022-06-14T22:17:01.273Z · comments (31)
[link] Can crimes be discussed literally?
Benquo · 2020-03-22T20:17:05.545Z · comments (38)
Value systematization: how values become coherent (and misaligned)
Richard_Ngo (ricraz) · 2023-10-27T19:06:26.928Z · comments (48)
Safety Implications of LeCun's path to machine intelligence
Ivan Vendrov (ivan-vendrov) · 2022-07-15T21:47:44.411Z · comments (18)
Analysis: US restricts GPU sales to China
aogara (Aidan O'Gara) · 2022-10-07T18:38:06.517Z · comments (58)
Trauma, Meditation, and a Cool Scar
Logan Riggs (elriggs) · 2019-08-06T16:17:39.912Z · comments (17)
SAE reconstruction errors are (empirically) pathological
wesg (wes-gurnee) · 2024-03-29T16:37:29.608Z · comments (16)
Contra shard theory, in the context of the diamond maximizer problem
So8res · 2022-10-13T23:51:29.532Z · comments (19)
On the FLI Open Letter
Zvi · 2023-03-30T16:00:00.716Z · comments (11)
A Theory of Laughter
Steven Byrnes (steve2152) · 2023-08-23T15:05:59.694Z · comments (14)
[link] Poker is a bad game for teaching epistemics. Figgie is a better one.
rossry · 2024-07-08T06:05:20.459Z · comments (47)
Another Way to Be Okay
Gretta Duleba (gretta-duleba) · 2023-02-19T20:49:31.895Z · comments (14)
Thoughts on AGI organizations and capabilities work
Rob Bensinger (RobbBB) · 2022-12-07T19:46:04.004Z · comments (17)
[link] Nobody’s on the ball on AGI alignment
leopold · 2023-03-29T17:40:36.250Z · comments (38)
[link] Alignment 201 curriculum
Richard_Ngo (ricraz) · 2022-10-12T18:03:03.454Z · comments (3)
Vegan Nutrition Testing Project: Interim Report
Elizabeth (pktechgirl) · 2023-01-20T05:50:03.565Z · comments (37)
Large language models learn to represent the world
gjm · 2023-01-22T13:10:38.837Z · comments (19)
What happens if you present 500 people with an argument that AI is risky?
KatjaGrace · 2024-09-04T16:40:03.562Z · comments (7)
Perishable Knowledge
lsusr · 2021-12-18T05:53:03.343Z · comments (6)
I Don’t Know How To Count That Low
Elizabeth (pktechgirl) · 2021-10-22T22:00:02.708Z · comments (10)
Lsusr's Rationality Dojo
lsusr · 2024-02-13T05:52:03.757Z · comments (17)
[link] Notes from a Prompt Factory
Richard_Ngo (ricraz) · 2024-03-10T05:13:39.384Z · comments (19)
[link] A Chess-GPT Linear Emergent World Representation
Adam Karvonen (karvonenadam) · 2024-02-08T04:25:15.222Z · comments (14)
2022 was the year AGI arrived (Just don't call it that)
Logan Zoellner (logan-zoellner) · 2023-01-04T15:19:55.009Z · comments (60)
Against "blankfaces"
philh · 2021-08-08T23:00:04.126Z · comments (12)
Response to nostalgebraist: proudly waving my moral-antirealist battle flag
Steven Byrnes (steve2152) · 2024-05-29T16:48:29.408Z · comments (29)