LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

If a little is good, is more better?
DanielFilan · 2023-11-04T07:10:05.943Z · comments (15)
Being good at the basics
dominicq · 2023-11-04T14:18:50.976Z · comments (1)
We are already in a persuasion-transformed world and must take precautions
trevor (TrevorWiesinger) · 2023-11-04T15:53:31.345Z · comments (14)
[link] [Linkpost] Concept Alignment as a Prerequisite for Value Alignment
Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2023-11-04T17:34:36.563Z · comments (0)
[link] The Soul Key
Richard_Ngo (ricraz) · 2023-11-04T17:51:53.176Z · comments (9)
Genetic fitness is a measure of selection strength, not the selection target
Kaj_Sotala · 2023-11-04T19:02:13.783Z · comments (43)
The 6D effect: When companies take risks, one email can be very powerful.
scasper · 2023-11-04T20:08:39.775Z · comments (40)
Stuxnet, not Skynet: Humanity's disempowerment by AI
Roko · 2023-11-04T22:23:55.428Z · comments (23)
Utility is not the selection target
tailcalled · 2023-11-04T22:48:20.713Z · comments (1)
Lightning Talks
Screwtape · 2023-11-05T03:27:19.267Z · comments (3)
Life of GPT
Odd anon · 2023-11-05T04:55:06.124Z · comments (2)
Go flash blinking lights at printed text right now
lukehmiles (lcmgcd) · 2023-11-05T07:29:44.630Z · comments (9)
The Assumed Intent Bias
silentbob · 2023-11-05T16:28:03.282Z · comments (13)
Pivotal Acts might Not be what You Think they are
Johannes C. Mayer (johannes-c-mayer) · 2023-11-05T17:23:50.464Z · comments (13)
[link] Eric Schmidt on recursive self-improvement
nikola (nikolaisalreadytaken) · 2023-11-05T19:05:15.416Z · comments (3)
EA orgs' legal structure inhibits risk taking and information sharing on the margin
Elizabeth (pktechgirl) · 2023-11-05T19:13:56.135Z · comments (17)
AI as Super-Demagogue
RationalDino · 2023-11-05T21:21:13.914Z · comments (9)
Disentangling four motivations for acting in accordance with UDT
Julian Stastny · 2023-11-05T21:26:22.514Z · comments (3)
[link] xAI announces Grok, beats GPT-3.5
nikola (nikolaisalreadytaken) · 2023-11-05T22:11:15.274Z · comments (6)
On Overhangs and Technological Change
Roko · 2023-11-05T22:58:51.306Z · comments (19)
When and why should you use the Kelly criterion?
Garrett Baker (D0TheMath) · 2023-11-05T23:26:38.952Z · comments (25)
Ramble on progressively constrained agent design
Iris of Rosebloom (Qualia Inqueria) · 2023-11-05T23:34:29.208Z · comments (0)
Taboo Wall
Screwtape · 2023-11-06T03:51:09.968Z · comments (0)
[link] Announcing TAIS 2024
Blaine (blaine-rogers) · 2023-11-06T08:38:37.226Z · comments (0)
Tips, tricks, lessons and thoughts on hosting hackathons
gergogaspar (gergo-gaspar) · 2023-11-06T11:03:40.040Z · comments (0)
[link] Are language models good at making predictions?
dynomight · 2023-11-06T13:10:36.379Z · comments (14)
Proposal for improving state of alignment research
Iknownothing · 2023-11-06T13:55:39.015Z · comments (0)
An illustrative model of backfire risks from pausing AI research
Maxime Riché (maxime-riche) · 2023-11-06T14:30:58.615Z · comments (3)
What is true is already so. Owning up to it doesn't make it worse.
RamblinDash · 2023-11-06T15:49:10.153Z · comments (2)
[link] Why building ventures in AI Safety is particularly challenging
Heramb · 2023-11-06T16:27:36.535Z · comments (0)
[question] Does bulemia work?
lc · 2023-11-06T17:58:27.612Z · answers+comments (18)
ACX Meetup Leipzig
Roman Leipe (roman-leipe) · 2023-11-06T18:33:05.537Z · comments (0)
[link] LQPR: An Algorithm for Reinforcement Learning with Provable Safety Guarantees
MadHatter · 2023-11-06T20:17:05.790Z · comments (0)
[link] Askesis: a model of the cerebellum
MadHatter · 2023-11-06T20:19:09.001Z · comments (2)
Job listing: Communications Generalist / Project Manager
Gretta Duleba (gretta-duleba) · 2023-11-06T20:21:03.721Z · comments (7)
A bet on critical periods in neural networks
kave · 2023-11-06T23:21:17.279Z · comments (1)
[link] cost estimation for 2 grid energy storage systems
bhauth · 2023-11-06T23:32:03.764Z · comments (12)
How to (hopefully ethically) make money off of AGI
habryka (habryka4) · 2023-11-06T23:35:16.476Z · comments (75)
The Perils of Professionalism
Screwtape · 2023-11-07T00:07:33.213Z · comments (1)
AI Alignment Research Engineer Accelerator (ARENA): call for applicants
CallumMcDougall (TheMcDouglas) · 2023-11-07T09:43:41.606Z · comments (0)
Box inversion revisited
Jan_Kulveit · 2023-11-07T11:09:36.557Z · comments (3)
On the UK Summit
Zvi · 2023-11-07T13:10:04.895Z · comments (6)
[link] AI Alignment [Progress] this Week (11/05/2023)
Logan Zoellner (logan-zoellner) · 2023-11-07T13:26:21.995Z · comments (0)
[link] What I've been reading, November 2023
jasoncrawford · 2023-11-07T13:37:20.077Z · comments (1)
Preface to the Sequence on LLM Psychology
Quentin FEUILLADE--MONTIXI (quentin-feuillade-montixi) · 2023-11-07T16:12:07.742Z · comments (0)
The Stochastic Parrot Hypothesis is debatable for the last generation of LLMs
Quentin FEUILLADE--MONTIXI (quentin-feuillade-montixi) · 2023-11-07T16:12:20.031Z · comments (20)
AMA: Earning to Give
jefftk (jkaufman) · 2023-11-07T16:20:10.972Z · comments (8)
Symbiotic self-alignment of AIs.
Spiritus Dei (spiritus-dei) · 2023-11-07T17:18:21.230Z · comments (0)
Mirror, Mirror on the Wall: How Do Forecasters Fare by Their Own Call?
nikos (followtheargument) · 2023-11-07T17:39:49.988Z · comments (5)
Implementing Decision Theory
justinpombrio · 2023-11-07T17:55:43.313Z · comments (12)