LessWrong 2.0 Reader

← previous page (newer posts) · next page (older posts) →

Cross-Void Optimization
pneumynym · 2022-11-04T17:47:44.499Z · comments (1)
Follow up to medical miracle
Elizabeth (pktechgirl) · 2022-11-04T18:00:01.858Z · comments (5)
[link] Ethan Caballero on Broken Neural Scaling Laws, Deception, and Recursive Self Improvement
Michaël Trazzi (mtrazzi) · 2022-11-04T18:09:04.759Z · comments (11)
Toy Models and Tegum Products
Adam Jermyn (adam-jermyn) · 2022-11-04T18:51:41.688Z · comments (7)
For ELK truth is mostly a distraction
c.trout (ctrout) · 2022-11-04T21:14:52.279Z · comments (0)
[link] How much should we care about non-human animals?
bokov (bokov-1) · 2022-11-04T21:36:57.836Z · comments (8)
[link] Should we “go against nature”?
jasoncrawford · 2022-11-04T22:14:44.172Z · comments (3)
[link] Metaculus is seeking Software Engineers
dschwarz · 2022-11-05T00:42:24.909Z · comments (0)
[link] Interpreting systems as solving POMDPs: a step towards a formal understanding of agency [paper link]
the gears to ascension (lahwran) · 2022-11-05T01:06:39.743Z · comments (2)
Spectrum of Independence
jefftk (jkaufman) · 2022-11-05T02:40:03.822Z · comments (7)
Review of the Challenge
SD Marlow (sd-marlow) · 2022-11-05T06:38:58.899Z · comments (5)
My summary of “Pragmatic AI Safety”
Eleni Angelou (ea-1) · 2022-11-05T12:54:53.532Z · comments (0)
When can a mimic surprise you? Why generative models handle seemingly ill-posed problems
David Johnston (david-johnston) · 2022-11-05T13:19:37.384Z · comments (4)
The Slippery Slope from DALLE-2 to Deepfake Anarchy
scasper · 2022-11-05T14:53:54.556Z · comments (9)
How to store human values on a computer
Oliver Siegel (oliver-siegel) · 2022-11-05T19:17:56.595Z · comments (17)
Should AI focus on problem-solving or strategic planning? Why not both?
Oliver Siegel (oliver-siegel) · 2022-11-05T19:17:56.783Z · comments (3)
Provably Honest - A First Step
Srijanak De (srijanak-de) · 2022-11-05T19:18:45.655Z · comments (2)
Instead of technical research, more people should focus on buying time
Akash (akash-wasil) · 2022-11-05T20:43:45.215Z · comments (45)
Recommend HAIST resources for assessing the value of RLHF-related alignment research
Sam Marks (samuel-marks) · 2022-11-05T20:58:06.511Z · comments (9)
Unpricable Information and Certificate Hell
eva_ · 2022-11-05T22:56:58.081Z · comments (2)
Takeaways from a survey on AI alignment resources
DanielFilan · 2022-11-05T23:40:01.917Z · comments (10)
[question] Has anyone increased their AGI timelines?
Darren McKee · 2022-11-06T00:03:11.756Z · answers+comments (12)
Response
Jarred Filmer (4thWayWastrel) · 2022-11-06T01:03:36.151Z · comments (2)
[link] What is epigenetics?
Metacelsus · 2022-11-06T01:24:05.350Z · comments (4)
Apple Cider Syrup
jefftk (jkaufman) · 2022-11-06T02:10:06.270Z · comments (6)
[link] Word-Distance vs Idea-Distance: The Case for Lanoitaring
Sable · 2022-11-06T05:25:46.950Z · comments (7)
You won’t solve alignment without agent foundations
Mikhail Samin (mikhail-samin) · 2022-11-06T08:07:12.505Z · comments (3)
[question] Should I Pursue a PhD?
DragonGod · 2022-11-06T10:58:51.241Z · answers+comments (8)
[question] Has Pascal's Mugging problem been completely solved yet?
EniScien · 2022-11-06T12:52:17.811Z · answers+comments (11)
Playground Game: Monster
jefftk (jkaufman) · 2022-11-06T16:00:04.255Z · comments (4)
[link] Democracy Is in Danger, but Not for the Reasons You Think
ExCeph · 2022-11-06T21:15:43.962Z · comments (4)
Exams-Only Universities
Mati_Roy (MathieuRoy) · 2022-11-06T22:05:39.373Z · comments (40)
[question] Is there any discussion on avoiding being Dutch-booked or otherwise taken advantage of one's bounded rationality by refusing to engage?
shminux · 2022-11-07T02:36:36.826Z · answers+comments (29)
A philosopher's critique of RLHF
ThomasW (ThomasWoodside) · 2022-11-07T02:42:51.234Z · comments (8)
2022 LessWrong Census?
SurfingOrca · 2022-11-07T05:16:33.207Z · comments (13)
Counterfactability
Scott Garrabrant · 2022-11-07T05:39:05.668Z · comments (4)
[Book Review] "Station Eleven" by Emily St. John Mandel
lsusr · 2022-11-07T05:56:19.994Z · comments (1)
Google Search as a Washed Up Service Dog: "I HALP!"
shminux · 2022-11-07T07:02:40.469Z · comments (8)
4 Key Assumptions in AI Safety
Prometheus · 2022-11-07T10:50:40.211Z · comments (5)
Opportunities that surprised us during our Clearer Thinking Regrants program
spencerg · 2022-11-07T13:09:14.523Z · comments (0)
How to Make Easy Decisions
lynettebye · 2022-11-07T13:17:10.075Z · comments (3)
What’s the Deal with Elon Musk and Twitter?
Zvi · 2022-11-07T13:50:00.991Z · comments (11)
Hacker-AI – Does it already exist?
Erland Wittkotter (Erland) · 2022-11-07T14:01:23.915Z · comments (13)
[link] AI Safety Unconference NeurIPS 2022
Orpheus · 2022-11-07T15:39:57.231Z · comments (0)
Loss of control of AI is not a likely source of AI x-risk
squek · 2022-11-07T18:44:30.720Z · comments (0)
[question] Ideas for tiny research projects related to rationality?
Frej (emil-frej-brunbjerg) · 2022-11-07T18:45:44.682Z · answers+comments (1)
Thinking About Mastodon
jefftk (jkaufman) · 2022-11-07T19:40:04.285Z · comments (17)
Distillation Experiment: Chunk-Knitting
DirectedEvolution (AllAmericanBreakfast) · 2022-11-07T19:56:39.905Z · comments (1)
The biological function of love for non-kin is to gain the trust of people we cannot deceive
chaosmage · 2022-11-07T20:26:29.876Z · comments (3)
[link] Intercept article about lab accidents
ChristianKl · 2022-11-07T21:10:19.559Z · comments (9)