Posts

Features that make a report especially helpful to me 2022-04-14T01:12:19.111Z
Preliminary thoughts on moral weight 2018-08-13T23:45:13.430Z
Quick thoughts on empathic metaethics 2017-12-12T21:46:08.834Z
MIRI's 2014 Summer Matching Challenge 2014-08-07T20:03:24.171Z
Will AGI surprise the world? 2014-06-21T22:27:31.620Z
Some alternatives to “Friendly AI” 2014-06-15T19:53:20.340Z
An onion strategy for AGI discussion 2014-05-31T19:08:24.784Z
Can noise have power? 2014-05-23T04:54:32.829Z
Calling all MIRI supporters for unique May 6 giving opportunity! 2014-05-04T23:45:25.469Z
Is my view contrarian? 2014-03-11T17:42:49.788Z
Futurism's Track Record 2014-01-29T20:27:24.738Z
Tricky Bets and Truth-Tracking Fields 2014-01-29T08:52:38.889Z
MIRI's Winter 2013 Matching Challenge 2013-12-17T20:41:28.303Z
A model of AI development 2013-11-28T13:48:20.083Z
Gelman Against Parsimony 2013-11-24T15:23:32.773Z
From Philosophy to Math to Engineering 2013-11-04T15:43:55.704Z
The Inefficiency of Theoretical Discovery 2013-11-03T21:26:52.468Z
Intelligence Amplification and Friendly AI 2013-09-27T01:09:15.978Z
AI ebook cover design brainstorming 2013-09-26T23:49:03.319Z
Help us Optimize the Contents of the Sequences eBook 2013-09-19T04:31:20.391Z
Help us name a short primer on AI risk! 2013-09-17T20:35:34.895Z
Help MIRI run its Oxford UK workshop in November 2013-09-15T03:13:36.553Z
How well will policy-makers handle AGI? (initial findings) 2013-09-12T07:21:30.255Z
How effectively can we plan for future decades? (initial findings) 2013-09-04T22:42:05.195Z
Which subreddits should we create on Less Wrong? 2013-09-04T17:56:33.729Z
Artificial explosion of the Sun: a new x-risk? 2013-09-02T06:12:39.019Z
Transparency in safety-critical systems 2013-08-25T18:52:07.757Z
How Efficient is the Charitable Market? 2013-08-24T05:57:48.169Z
Engaging Intellectual Elites at Less Wrong 2013-08-13T17:55:05.719Z
How to Measure Anything 2013-08-07T04:05:58.366Z
Algorithmic Progress in Six Domains 2013-08-03T02:29:21.928Z
MIRI's 2013 Summer Matching Challenge 2013-07-23T19:05:56.873Z
Model Combination and Adjustment 2013-07-17T20:31:08.687Z
Writing Style and the Typical Mind Fallacy 2013-07-14T04:47:48.167Z
Four Focus Areas of Effective Altruism 2013-07-09T00:59:40.963Z
Responses to Catastrophic AGI Risk: A Survey 2013-07-08T14:33:50.800Z
Start Under the Streetlight, then Push into the Shadows 2013-06-24T00:49:22.961Z
Elites and AI: Stated Opinions 2013-06-15T19:52:36.207Z
Will the world's elites navigate the creation of AI just fine? 2013-05-31T18:49:10.861Z
Help us name the Sequences ebook 2013-04-15T19:59:13.969Z
Estimate Stability 2013-04-13T18:33:23.799Z
Fermi Estimates 2013-04-11T17:52:28.708Z
Explicit and tacit rationality 2013-04-09T23:33:29.127Z
Critiques of the heuristics and biases tradition 2013-03-18T23:49:57.035Z
Decision Theory FAQ 2013-02-28T14:15:55.090Z
Improving Human Rationality Through Cognitive Change (intro) 2013-02-24T04:49:48.976Z
Great rationality posts in the OB archives 2013-02-23T23:33:51.624Z
Great rationality posts by LWers not posted to LW 2013-02-16T00:31:20.077Z
A reply to Mark Linsenmayer about philosophy 2013-01-05T11:25:25.242Z
Ideal Advisor Theories and Personal CEV 2012-12-25T13:04:46.889Z

Comments

Comment by lukeprog on AI #13: Potential Algorithmic Improvements · 2023-05-30T17:44:39.323Z · LW · GW

no one is currently hard at work drafting concrete legislative or regulatory language

I'd like readers to know that, fortunately, this hasn't been true for a while now. But yes, such efforts continue to be undersupplied with talent.

Comment by lukeprog on AI #12: The Quest for Sane Regulations · 2023-05-19T13:59:50.856Z · LW · GW

Where is the Arnold Kling quote from?

Comment by lukeprog on Shut Up and Divide? · 2022-09-02T18:48:38.747Z · LW · GW

I haven't read the other comments here and I know this post is >10yrs old, but…

For me, (what I'll now call) effective-altruism-like values are mostly second-order, in the sense that much of my revealed behavior shows that, much of the time, I don't want to help strangers, animals, future people, etc. But I think I "want to want to" help strangers, and sometimes the more goal-directed, rational side of my brain wins out and I do something to help strangers at some personal sacrifice (though I do this less than e.g. Will MacAskill). But I don't really detect in myself a symmetrical second-order want to NOT want to help strangers. So that's one thing that "shut up and multiply" has over "shut up and divide," at least for me.

That said, I realize now that I'm often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor's occasional desire to help strangers and suggest they generalize it, but I don't symmetrically appeal to their clearer and more common disinterest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that's a more complicated conversation.

Comment by lukeprog on Humans are very reliable agents · 2022-06-19T21:48:22.954Z · LW · GW

Somewhat related: Estimating the Brittleness of AI.

Comment by lukeprog on Clem's Memo · 2022-04-17T02:10:43.699Z · LW · GW

See also e.g. Stimson's memo to Truman of April 25, 1945.

Comment by lukeprog on Ideal governance (for companies, countries and more) · 2022-04-07T13:50:19.330Z · LW · GW

Some other literature OTOH:

Comment by lukeprog on Epistemic Legibility · 2022-02-09T21:06:48.020Z · LW · GW

Lots of overlap between this concept and what Open Phil calls reasoning transparency.

Comment by lukeprog on List of Probability Calibration Exercises · 2022-01-25T16:48:19.425Z · LW · GW

The Open Philanthropy and 80,000 Hours links are for the same app, just at different URLs.

Comment by lukeprog on Forecasting Newsletter: December 2021 · 2022-01-13T20:42:42.157Z · LW · GW

On Foretell moving to ARLIS… There's no way you could've known this, but as it happens, Foretell is moving from one Open Phil grantee (CSET) to another (UMD ARLIS). TBC, I wasn't involved in the decision for Foretell to make that transition, but it seems fine to me, and Foretell is essentially becoming another part of the project I funded at ARLIS.

Comment by lukeprog on Forecasting Newsletter: December 2021 · 2022-01-11T21:02:23.873Z · LW · GW

Someone with a newsletter aimed at people interested in forecasting should let them know. :)

Comment by lukeprog on Forecasting Newsletter: December 2021 · 2022-01-10T21:29:06.158Z · LW · GW

$40k feels like a significant share of all the funding there is for small experiments in the forecasting space.

Seems like a fit for the EA Infrastructure Fund, no?

Comment by lukeprog on My Overview of the AI Alignment Landscape: Threat Models · 2022-01-09T14:17:51.809Z · LW · GW
Comment by lukeprog on Great Power Conflict · 2021-09-17T19:59:40.840Z · LW · GW
Comment by lukeprog on Multitudinous outside views · 2020-08-18T16:33:30.875Z · LW · GW

Previously: Model Combination and Adjustment.

Comment by lukeprog on Predictions/questions about conquistadors? · 2020-05-29T16:23:26.425Z · LW · GW

Very cool that you posted these quantified predictions in advance!

Comment by lukeprog on Peter's COVID Consolidated Brief - 29 Apr · 2020-04-30T17:22:58.547Z · LW · GW

Nice write-up!

A few thoughts re: Scott Alexander & Rob Wiblin on prediction.

  • Scott wrote that "On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were)." I just want to note that while this was indeed a badly failed prediction, in a sense the supers were wrong by just two days. (WHO-counted cases only reached >200k on March 18th, two days before the question closed.)
  • One interesting pre-coronavirus probabilistic forecast of global pandemic odds is this: From 2016 through Jan 1st 2020, Metaculus users made forecasts about whether there would be a large pandemic (≥100M infections or ≥10M deaths in a 12mo period) by 2026. For most of the question’s history, the median forecast was 10%-25%, and the special Metaculus aggregated forecast was around 35%. At first this sounded high to me, but then someone pointed out that 4 pandemics from the previous 100 years qualified (I didn't double-check this), suggesting a naive base rate of roughly 40% per decade (this arithmetic is sketched below). So the median and aggregated forecasts on Metaculus were actually lower than the naive base rate (maybe by accident, or maybe because forecasters adjusted downward given today's better surveillance and mitigation tools?), but I'm guessing they were still higher than the probabilities most policymakers and journalists would've given if they were in the habit of making quantified, falsifiable forecasts. Moreover, the Tetlockian strategy of just predicting the naive base rate with minimal adjustment would've yielded an even more impressive in-advance prediction of the coronavirus pandemic.
  • More generally, the research on probabilistic forecasting makes me suspect that prediction polls/markets with highly-selected participants (e.g. via GJI or HyperMind), or perhaps even those without highly-selected participants (e.g. via GJO or Metaculus), could achieve pretty good calibration (though not necessarily resolution) on high-stakes questions (e.g. about low-probability global risks) with 2-10 year time horizons, though this has not yet been checked.
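
For concreteness, here is a minimal sketch of the base-rate arithmetic from the second bullet above, assuming (as the comment does, without double-checking) the count of 4 qualifying pandemics in the previous century. The Poisson variant at the end is my own addition, not something from the original comment.

```python
# Rough sketch of the base-rate arithmetic in the bullet above. It assumes,
# as the comment does (without double-checking), that 4 pandemics in the
# previous 100 years met the >=100M-infections or >=10M-deaths bar.
import math

pandemics_last_century = 4
years_observed = 100
horizon_years = 10  # the Metaculus question ran roughly a decade out

# Naive base rate: expected events per decade, read directly as a probability.
naive_per_decade = pandemics_last_century / years_observed * horizon_years
print(f"Naive base rate per decade: {naive_per_decade:.0%}")  # ~40%

# Slightly less naive: treat pandemics as a Poisson process at 0.04/year
# and ask for the probability of at least one event in the next decade.
rate_per_year = pandemics_last_century / years_observed
p_at_least_one = 1 - math.exp(-rate_per_year * horizon_years)
print(f"P(at least one pandemic in {horizon_years} years): {p_at_least_one:.0%}")  # ~33%

# For comparison, the forecasts described in the comment:
metaculus_median_range = (0.10, 0.25)  # median over most of the question's history
metaculus_aggregate = 0.35             # the special Metaculus aggregated forecast
```
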
Comment by lukeprog on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-02T05:43:30.154Z · LW · GW

Nice post. Were there any sources besides Wikipedia that you found especially helpful when researching this post?

Comment by lukeprog on In Defense of the Arms Races… that End Arms Races · 2020-01-16T00:08:56.132Z · LW · GW
If the U.S. kept racing in its military capacity after WW2, the U.S. may have been able to use its negotiating leverage to stop the Soviet Union from becoming a nuclear power: halting proliferation and preventing the build-up of world-threatening numbers of high-yield weapons.

BTW, the most thorough published examination I've seen of whether the U.S. could've done this is Quester (2000). I've been digging into the question in more detail and I'm still not sure whether it's true or not (but "may" seems reasonable).

Comment by lukeprog on How common is it for one entity to have a 3+ year technological lead on its nearest competitor? · 2019-12-03T01:59:28.379Z · LW · GW

I'm very interested in this question, thanks for looking into it!

Comment by lukeprog on If you had to pick one thing you've read that changed the course of your life, what would it be? · 2019-09-15T00:59:26.455Z · LW · GW

My answer from 2017 is here.

Comment by lukeprog on Preliminary thoughts on moral weight · 2019-07-31T21:37:22.822Z · LW · GW

Interesting historical footnote from Louis Francini:

This issue of differing "capacities for happiness" was discussed by the classical utilitarian Francis Edgeworth in his 1881 Mathematical Psychics (pp. 57-58, and especially 130-131). He doesn't go into much detail at all, but this is the earliest discussion of which I am aware. Well, there's also the Bentham-Mill debate about higher and lower pleasures ("It is better to be a human being dissatisfied than a pig satisfied"), but I think that may be a slightly different issue.

Comment by lukeprog on Which scientific discovery was most ahead of its time? · 2019-05-16T14:42:59.024Z · LW · GW

Cases where scientific knowledge was in fact lost and then rediscovered provide especially strong evidence about the discovery counterfactuals, e.g. Hero's aeolipile and al-Kindi's development of relative frequency analysis for decoding messages. Probably we underestimate how common such cases are, because the knowledge of the lost discovery is itself lost — e.g. we might easily have simply not rediscovered the Antikythera mechanism.

Comment by lukeprog on Preliminary thoughts on moral weight · 2018-10-24T19:38:04.818Z · LW · GW

Apparently Shelly Kagan has a book coming out soon that is (sort of?) about moral weight.

Comment by lukeprog on A Proper Scoring Rule for Confidence Intervals · 2018-08-29T17:48:22.643Z · LW · GW

This scoring rule has some downsides from a usability standpoint. See Greenberg 2018, a whitepaper prepared as background material for a (forthcoming) calibration training app.
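
For readers who want a concrete example of what a proper scoring rule for confidence intervals can look like, here is a minimal sketch of the interval score from Gneiting & Raftery (2007), a standard rule for central prediction intervals (not necessarily the specific rule discussed in the post above).

```python
def interval_score(lower: float, upper: float, actual: float, alpha: float = 0.1) -> float:
    """Interval score for a central (1 - alpha) confidence interval.

    Lower scores are better: you pay for the width of the interval, plus a
    penalty proportional to how far the realized value falls outside it.
    This is the generic interval score (Gneiting & Raftery 2007), shown as
    an illustrative example rather than the exact rule from the post above.
    """
    score = upper - lower
    if actual < lower:
        score += (2 / alpha) * (lower - actual)
    elif actual > upper:
        score += (2 / alpha) * (actual - upper)
    return score

# A 90% interval of [10, 20] when the true value turns out to be 25:
print(interval_score(10, 20, 25, alpha=0.1))  # 10 + (2/0.1) * 5 = 110.0
```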

Comment by lukeprog on Preliminary thoughts on moral weight · 2018-08-16T02:33:24.309Z · LW · GW

Some other people at Open Phil have spent more time thinking about two-envelope effects than I have, and fwiw some of their thinking on the issue is in this post (e.g. see section 1.1.1.1).

Comment by lukeprog on Preliminary thoughts on moral weight · 2018-08-14T19:03:58.337Z · LW · GW

My own take on this is described briefly here, with more detail in various appendices, e.g. here.

Comment by lukeprog on Preliminary thoughts on moral weight · 2018-08-14T19:02:27.195Z · LW · GW

Yes, I meant to be describing ranges conditional on each species being a moral patient at all. I previously gave my own (very made-up) probabilities for that here. Another worry to consider, though, is that many biological/cognitive and behavioral features of a species are simultaneously (1) evidence about their likelihood of moral patienthood (via consciousness), and (2) evidence about features that might affect their moral weight *given* consciousness/patienthood. So, depending on how you use that evidence, it's important to watch out for double-counting.

I'll skip responding to #2 for now.

Comment by lukeprog on Preliminary thoughts on moral weight · 2018-08-14T18:57:48.389Z · LW · GW

For anyone who is curious, I cite much of the literature arguing over criteria for moral patienthood/weight in the footnotes of this section of my original moral patienthood report. My brief comments on why I've focused on consciousness thus far are here.

Comment by lukeprog on Announcement: AI alignment prize winners and next round · 2018-01-15T21:42:23.386Z · LW · GW

Cool, this looks better than I'd been expecting. Thanks for doing this! Looking forward to next round.

Comment by lukeprog on Oxford Prioritisation Project Review · 2017-10-14T00:12:05.575Z · LW · GW

Hurrah failed project reports!

Comment by lukeprog on Ten small life improvements · 2017-08-24T15:55:33.787Z · LW · GW

One of my most-used tools is very simple: an Alfred snippet that lets me paste-as-plain-text using Cmd+Opt+V.

Comment by lukeprog on Rescuing the Extropy Magazine archives · 2017-07-01T21:36:21.850Z · LW · GW

Thanks!

Comment by lukeprog on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-07-01T06:43:09.823Z · LW · GW

From a user's profile, be able to see their comments in addition to their posts.

Dunno about others, but this is actually one of the LW features I use the most.

(Apologies if this is listed somewhere already and I missed it.)

Comment by lukeprog on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-06-23T23:37:09.865Z · LW · GW

Probably not suitable for launch, but given that the epistemic seriousness of the users is the most important "feature" for me and some other people I've spoken to, I wonder if some kind of "user badges" system might be helpful, especially if it influences the weight that upvotes and downvotes from those users carry. E.g. one badge could be "has read >60% of the Sequences, as 'verified' by one of the 150 people the LW admins trust to verify such a thing about someone," another could be "verified superforecaster," and there are probably other options I'm not immediately thinking of.

Comment by lukeprog on Book recommendation requests · 2017-06-03T23:32:00.328Z · LW · GW
  1. Constantly.
  2. Frequently.

Comment by lukeprog on Book recommendation requests · 2017-06-02T19:40:42.748Z · LW · GW

Best Textbooks on Every Subject

Comment by lukeprog on AGI and Mainstream Culture · 2017-05-23T19:28:54.174Z · LW · GW

Thanks for briefly describing those Doctor Who episodes.

Comment by lukeprog on The Best Textbooks on Every Subject · 2017-03-07T22:24:42.993Z · LW · GW

Lists of textbook award winners like this one might also be useful.

Comment by lukeprog on The Best Textbooks on Every Subject · 2017-02-25T21:44:11.720Z · LW · GW

Fixed, thanks.

Comment by lukeprog on Can the Chain Still Hold You? · 2017-01-27T14:41:03.746Z · LW · GW

Today I encountered a real-life account of the chain story — involving a cow rather than an elephant — around 24:10 into the "Best of BackStory, Vol. 1" episode of the podcast BackStory.

Comment by lukeprog on CFAR’s new focus, and AI Safety · 2016-12-07T18:06:38.611Z · LW · GW

"Accuracy-boosting" or "raising accuracy"?

Comment by lukeprog on Paid research assistant position focusing on artificial intelligence and existential risk · 2016-05-03T18:28:24.969Z · LW · GW

Source. But the non-cached page says "The details of this job cannot be viewed at this time," so maybe the job opening is no longer available.

FWIW, I'm a bit familiar with Dafoe's thinking on the issues, and I think it would be a good use of time for the right person to work with him.

Comment by lukeprog on Audio version of Rationality: From AI to Zombies out of beta · 2016-04-22T14:44:38.240Z · LW · GW

Hi Rick, any updates on the Audible version?

Comment by lukeprog on [link] Simplifying the environment: a new convergent instrumental goal · 2016-04-22T14:43:10.592Z · LW · GW

See also: https://scholar.google.com/scholar?cluster=9557614170081724663&hl=en&as_sdt=1,5

Comment by lukeprog on Why CFAR? The view from 2015 · 2015-12-20T19:57:54.441Z · LW · GW

Just donated!

Comment by lukeprog on Audio version of Rationality: From AI to Zombies out of beta · 2015-12-01T04:51:13.682Z · LW · GW

Hurray!

Comment by lukeprog on Audio version of Rationality: From AI to Zombies out of beta · 2015-11-27T15:24:23.787Z · LW · GW

Any chance you'll eventually get this up on Audible? I suspect that, in the long run, it could find a wider audience there.

Comment by lukeprog on The Best Textbooks on Every Subject · 2015-10-03T05:52:52.571Z · LW · GW

Another attempt to do something like this thread: Viva la Books.

Comment by lukeprog on Estimate Stability · 2015-08-20T17:30:53.991Z · LW · GW

I guess subjective logic is also trying to handle this kind of thing. From Jøsang's book draft:

Subjective logic is a type of probabilistic logic that allows probability values to be expressed with degrees of uncertainty. The idea of probabilistic logic is to combine the strengths of logic and probability calculus, meaning that it has binary logic’s capacity to express structured argument models, and it has the power of probabilities to express degrees of truth of those arguments. The idea of subjective logic is to extend probabilistic logic by also expressing uncertainty about the probability values themselves, meaning that it is possible to reason with argument models in the presence of uncertain or incomplete evidence.

Though maybe this particular formal system has really undesirable properties; I don't know.
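
To make the quoted idea concrete, here is a minimal sketch of a binomial opinion in subjective logic: belief, disbelief, and uncertainty masses plus a base rate, which project down to an ordinary probability. The structure follows the standard presentation of subjective logic; the field names and example numbers are my own, not taken from Jøsang's book.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A binomial opinion in Jøsang-style subjective logic.

    belief + disbelief + uncertainty should sum to 1. The base rate is the
    prior probability used when projecting uncertainty away. Field names
    and example values are my own, for illustration only.
    """
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def projected_probability(self) -> float:
        # Standard projection in subjective logic: P = b + a * u
        return self.belief + self.base_rate * self.uncertainty

# "Probably true, but based on thin evidence":
thin = Opinion(belief=0.4, disbelief=0.1, uncertainty=0.5)
print(thin.projected_probability())  # 0.65

# The same projected probability, backed by much stronger evidence:
firm = Opinion(belief=0.65, disbelief=0.35, uncertainty=0.0)
print(firm.projected_probability())  # 0.65
```

The second example is the point of the quote: two opinions can project to the same probability while encoding very different amounts of uncertainty about that probability.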

Comment by lukeprog on MIRI's 2015 Summer Fundraiser! · 2015-07-21T01:56:19.191Z · LW · GW

Donated $300.