Comment by lukeprog on Which scientific discovery was most ahead of its time? · 2019-05-16T14:42:59.024Z · score: 26 (12 votes) · LW · GW

Cases where scientific knowledge was in fact lost and then rediscovered provide especially strong evidence about the discovery counterfactuals, e.g. Hero's aeolipile and al-Kindi's development of relative frequency analysis for decoding messages. We probably underestimate how common such cases are, because knowledge of the lost discovery is itself often lost — e.g. we might easily have simply not rediscovered the Antikythera mechanism.

Comment by lukeprog on Preliminary thoughts on moral weight · 2018-10-24T19:38:04.818Z · score: 6 (3 votes) · LW · GW

Apparently Shelly Kagan has a book coming out soon that is (sort of?) about moral weight.

Comment by lukeprog on A Proper Scoring Rule for Confidence Intervals · 2018-08-29T17:48:22.643Z · score: 13 (3 votes) · LW · GW

This scoring rule has some downsides from a usability standpoint. See Greenberg 2018, a whitepaper prepared as background material for a (forthcoming) calibration training app.

Comment by lukeprog on Preliminary thoughts on moral weight · 2018-08-16T02:33:24.309Z · score: 17 (6 votes) · LW · GW

Some other people at Open Phil have spent more time thinking about two-envelope effects than I have, and fwiw some of their thinking on the issue is in this post (e.g. see section 1.1.1.1).

Comment by lukeprog on Preliminary thoughts on moral weight · 2018-08-14T19:03:58.337Z · score: 12 (4 votes) · LW · GW

My own take on this is described briefly here, with more detail in various appendices, e.g. here.

Comment by lukeprog on Preliminary thoughts on moral weight · 2018-08-14T19:02:27.195Z · score: 13 (3 votes) · LW · GW

Yes, I meant to be describing ranges conditional on each species being moral patients at all. I previously gave my own (very made-up) probabilities for that here. Another worry to consider, though, is that many biological/cognitive and behavioral features of a species are simultaneously (1) evidence about their likelihood of moral patienthood (via consciousness), and (2) evidence about features that might affect their moral weight *given* consciousness/patienthood. So, depending on how you use that evidence, it's important to watch out for double-counting.

I'll skip responding to #2 for now.

Comment by lukeprog on Preliminary thoughts on moral weight · 2018-08-14T18:57:48.389Z · score: 25 (10 votes) · LW · GW

For anyone who is curious, I cite much of the literature arguing over criteria for moral patienthood/weight in the footnotes of this section of my original moral patienthood report. My brief comments on why I've focused on consciousness thus far are here.

Preliminary thoughts on moral weight

2018-08-13T23:45:13.430Z · score: 73 (31 votes)
Comment by lukeprog on Announcement: AI alignment prize winners and next round · 2018-01-15T21:42:23.386Z · score: 25 (7 votes) · LW · GW

Cool, this looks better than I'd been expecting. Thanks for doing this! Looking forward to next round.

Quick thoughts on empathic metaethics

2017-12-12T21:46:08.834Z · score: 34 (11 votes)
Comment by lukeprog on Oxford Prioritisation Project Review · 2017-10-14T00:12:05.575Z · score: 17 (6 votes) · LW · GW

Hurrah failed project reports!

Comment by lukeprog on Ten small life improvements · 2017-08-24T15:55:33.787Z · score: 1 (1 votes) · LW · GW

One of my most-used tools is very simple: an Alfred snippet that lets me paste-as-plain-text using Cmd+Opt+V.

Comment by lukeprog on Rescuing the Extropy Magazine archives · 2017-07-01T21:36:21.850Z · score: 9 (9 votes) · LW · GW

Thanks!

Comment by lukeprog on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-07-01T06:43:09.823Z · score: 10 (5 votes) · LW · GW

From a user's profile, be able to see their comments in addition to their posts.

Dunno about others, but this is actually one of the LW features I use the most.

(Apologies if this is listed somewhere already and I missed it.)

Comment by lukeprog on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-06-23T23:37:09.865Z · score: 6 (8 votes) · LW · GW

Probably not suitable for launch, but given that the epistemic seriousness of the users is the most important "feature" for me and some other people I've spoken to, I wonder if some kind of "user badges" system might be helpful, especially if it influences the weight that upvotes and downvotes from those users carry. E.g. one badge could be "has read >60% of the sequences, as 'verified' by one of the 150 people the LW admins trust to verify such a thing about someone," another could be "verified superforecaster," and there are probably other options I'm not immediately thinking of.

Comment by lukeprog on Book recommendation requests · 2017-06-03T23:32:00.328Z · score: 0 (0 votes) · LW · GW
  1. Constantly.
  2. Frequently.
Comment by lukeprog on Book recommendation requests · 2017-06-02T19:40:42.748Z · score: 12 (12 votes) · LW · GW

Best Textbooks on Every Subject

Comment by lukeprog on AGI and Mainstream Culture · 2017-05-23T19:28:54.174Z · score: 1 (1 votes) · LW · GW

Thanks for briefly describing those Doctor Who episodes.

Comment by lukeprog on The Best Textbooks on Every Subject · 2017-03-07T22:24:42.993Z · score: 10 (7 votes) · LW · GW

Lists of textbook award winners like this one might also be useful.

Comment by lukeprog on The Best Textbooks on Every Subject · 2017-02-25T21:44:11.720Z · score: 0 (0 votes) · LW · GW

Fixed, thanks.

Comment by lukeprog on Can the Chain Still Hold You? · 2017-01-27T14:41:03.746Z · score: 3 (3 votes) · LW · GW

Today I encountered a real-life account of the chain story — involving a cow rather than an elephant — around 24:10 into the "Best of BackStory, Vol. 1" episode of the podcast BackStory.

Comment by lukeprog on CFAR’s new focus, and AI Safety · 2016-12-07T18:06:38.611Z · score: 0 (0 votes) · LW · GW

"Accuracy-boosting" or "raising accuracy"?

Comment by lukeprog on Paid research assistant position focusing on artificial intelligence and existential risk · 2016-05-03T18:28:24.969Z · score: 2 (2 votes) · LW · GW

Source. But the non-cached page says "The details of this job cannot be viewed at this time," so maybe the job opening is no longer available.

FWIW, I'm a bit familiar with Dafoe's thinking on the issues, and I think it would be a good use of time for the right person to work with him.

Comment by lukeprog on Audio version of Rationality: From AI to Zombies out of beta · 2016-04-22T14:44:38.240Z · score: 0 (0 votes) · LW · GW

Hi Rick, any updates on the Audible version?

Comment by lukeprog on [link] Simplifying the environment: a new convergent instrumental goal · 2016-04-22T14:43:10.592Z · score: 2 (2 votes) · LW · GW

See also: https://scholar.google.com/scholar?cluster=9557614170081724663&hl=en&as_sdt=1,5

Comment by lukeprog on Why CFAR? The view from 2015 · 2015-12-20T19:57:54.441Z · score: 18 (18 votes) · LW · GW

Just donated!

Comment by lukeprog on Audio version of Rationality: From AI to Zombies out of beta · 2015-12-01T04:51:13.682Z · score: 3 (3 votes) · LW · GW

Hurray!

Comment by lukeprog on Audio version of Rationality: From AI to Zombies out of beta · 2015-11-27T15:24:23.787Z · score: 5 (5 votes) · LW · GW

Any chance you'll eventually get this up on Audible? I suspect that in the long run, it can find a wider audience there.

Comment by lukeprog on The Best Textbooks on Every Subject · 2015-10-03T05:52:52.571Z · score: 1 (1 votes) · LW · GW

Another attempt to do something like this thread: Viva la Books.

Comment by lukeprog on Estimate Stability · 2015-08-20T17:30:53.991Z · score: 1 (1 votes) · LW · GW

I guess subjective logic is also trying to handle this kind of thing. From Jøsang's book draft:

Subjective logic is a type of probabilistic logic that allows probability values to be expressed with degrees of uncertainty. The idea of probabilistic logic is to combine the strengths of logic and probability calculus, meaning that it has binary logic’s capacity to express structured argument models, and it has the power of probabilities to express degrees of truth of those arguments. The idea of subjective logic is to extend probabilistic logic by also expressing uncertainty about the probability values themselves, meaning that it is possible to reason with argument models in presence of uncertain or incomplete evidence.

Though maybe this particular formal system has really undesirable properties, I don't know.
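
For readers who want a more concrete feel for the formalism, here is a minimal sketch of a binomial opinion and its projected probability, following the standard definitions; the variable names and numbers are just illustrative, not taken from Jøsang's book.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A binomial opinion in subjective logic: belief, disbelief,
    uncertainty, and a base rate, with belief + disbelief + uncertainty = 1."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float

    def projected_probability(self) -> float:
        # The uncertainty mass is folded back in according to the base rate.
        return self.belief + self.base_rate * self.uncertainty

# A dogmatic opinion (uncertainty = 0) behaves like an ordinary probability...
dogmatic = Opinion(belief=0.7, disbelief=0.3, uncertainty=0.0, base_rate=0.5)
# ...while this opinion has the same projected probability but leaves most of
# its mass as uncertainty about the probability value itself.
uncertain = Opinion(belief=0.2, disbelief=0.0, uncertainty=0.8, base_rate=0.625)
print(dogmatic.projected_probability())   # 0.7
print(uncertain.projected_probability())  # 0.7
```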

Comment by lukeprog on MIRI's 2015 Summer Fundraiser! · 2015-07-21T01:56:19.191Z · score: 37 (37 votes) · LW · GW

Donated $300.

Comment by lukeprog on Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields · 2015-07-17T13:07:17.541Z · score: 1 (1 votes) · LW · GW

Never heard of him.

Comment by lukeprog on [link] FLI's recommended project grants for AI safety research announced · 2015-07-02T07:08:14.186Z · score: 4 (4 votes) · LW · GW

For those who haven't been around as long as Wei Dai…

Eliezer tells the story of coming around to a more Bostromian view, circa 2003, in his coming of age sequence.

Comment by lukeprog on GiveWell event for SF Bay Area EAs · 2015-06-25T20:42:51.429Z · score: 4 (4 votes) · LW · GW

Just FYI, I plan to be there.

Comment by lukeprog on A map: Typology of human extinction risks · 2015-06-24T17:43:12.734Z · score: 1 (1 votes) · LW · GW

Any idea when the book is coming out?

Comment by lukeprog on [link] Baidu cheats in an AI contest in order to gain a 0.24% advantage · 2015-06-08T01:57:10.615Z · score: 2 (2 votes) · LW · GW

Just FYI to readers: the source of the first image is here.

Comment by lukeprog on Learning to get things right first time · 2015-05-30T00:17:06.638Z · score: 5 (5 votes) · LW · GW

I don't know if this is commercially feasible, but I do like this idea from the perspective of building civilizational competence at getting things right on the first try.

Comment by lukeprog on Request for Advice : A.I. - can I make myself useful? · 2015-05-29T18:17:43.531Z · score: 19 (19 votes) · LW · GW

Might you be able to slightly retrain so as to become an expert on medium-term and long-term biosecurity risks? Biological engineering presents serious global catastrophic risk over the next 50 years (and of course after that as well), and very few people are trying to think through the issues on more than a 10-year time horizon. FHI, CSER, GiveWell, and perhaps others each have a decent chance of wanting to hire people into such research positions over the next few years. (GiveWell is looking to hire a biosecurity program manager right now, but I assume you can't acquire the requisite training and background immediately.)

Comment by lukeprog on CFAR-run MIRI Summer Fellows program: July 7-26 · 2015-04-29T23:57:55.306Z · score: 5 (5 votes) · LW · GW

I think it's partly not doing enough far-advance planning, but also partly just a greater-than-usual willingness to Try Things that seem like good ideas even if the timeline is a bit rushed. That's how the original minicamp happened, which ended up going so well that it inspired us to develop and launch CFAR.

Comment by lukeprog on The Effective Altruism Handbook · 2015-04-26T03:02:30.605Z · score: 1 (1 votes) · LW · GW

People have complained about Sumatra not working with MIRI's PDF ebooks, too. It was hard enough already to get our process to output the links we want on most readers, so we decided not to make the extra effort to additionally support Sumatra. I'm not sure what it would take.

Comment by lukeprog on The Best Textbooks on Every Subject · 2015-04-15T00:19:58.404Z · score: 1 (1 votes) · LW · GW

Updated, thanks!

Comment by lukeprog on The Best Textbooks on Every Subject · 2015-04-15T00:07:56.197Z · score: 0 (0 votes) · LW · GW

Fixed, thanks.

Comment by lukeprog on How urgent is it to intuitively understand Bayesianism? · 2015-04-07T02:27:47.038Z · score: 9 (9 votes) · LW · GW

Maybe just use odds ratios. That's what I use when I'm trying to make updates on the spot.
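
For concreteness, here's a minimal sketch of what an on-the-spot update in odds form looks like; the numbers are made up for illustration.

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds: float) -> float:
    return odds / (1.0 + odds)

# Prior odds of 1:4 (i.e. 20%), then evidence that's 3x as likely
# if the hypothesis is true as if it's false.
posterior_odds = update_odds(prior_odds=0.25, likelihood_ratio=3.0)
print(posterior_odds)                       # 0.75, i.e. 3:4
print(odds_to_probability(posterior_odds))  # ~0.43
```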

Comment by lukeprog on Just a casual question regarding MIRI · 2015-03-22T22:05:42.493Z · score: 22 (22 votes) · LW · GW

Working on MIRI's current technical agenda mostly requires a background in computer science with an unusually strong focus on logic: see details here. That said, the scope of MIRI's research program should be expanding over time. E.g. see Patrick's recent proposal to model goal stability challenges in a machine learning system, which would require more typical AI knowledge than has usually been the case for MIRI's work so far.

MIRI's research isn't really what a mathematician would typically think of as "math research" — it's more like theory-heavy computer science research with an unusually significant math/logic component, as is the case with a few other areas of computer science research, e.g. program analysis.

Also see the "Our recommended path for becoming a MIRI research fellow" section on our research fellow job posting.

Comment by lukeprog on The Best Textbooks on Every Subject · 2015-03-19T23:08:36.185Z · score: 1 (1 votes) · LW · GW

Fixed, thanks!

Comment by lukeprog on Best Explainers on Different Subjects · 2015-03-18T23:37:39.168Z · score: 8 (8 votes) · LW · GW

I tried this earlier, with Great Explanations.

Comment by lukeprog on Rationality: From AI to Zombies · 2015-03-15T18:27:14.800Z · score: 5 (5 votes) · LW · GW

"I can't mail that address, I get a failure message from Google"

Oops. Should be fixed now.

Comment by lukeprog on Calibration Test with database of 150,000+ questions · 2015-03-13T21:57:50.093Z · score: 2 (2 votes) · LW · GW

Thanks! BTW, I'd prefer to have 1% and 0.1% and 99% and 99.9% as options, rather than skipping over the 1% and 99% options as you have it now.

Comment by lukeprog on Calibration Test with database of 150,000+ questions · 2015-03-13T21:55:55.311Z · score: 1 (1 votes) · LW · GW

Fair enough. I've edited my original comment.

(For posterity: the text for my original comment's first hyperlink originally read "0 and 1 are not probabilities".)

Comment by lukeprog on Rationality: From AI to Zombies · 2015-03-13T21:48:28.029Z · score: 11 (11 votes) · LW · GW

Which is roughly the length of War and Peace or Atlas Shrugged.

Comment by lukeprog on Calibration Test with database of 150,000+ questions · 2015-03-13T18:03:23.961Z · score: 7 (7 votes) · LW · GW

0% probability is my most common answer as well, but I'm using it less often than I was choosing 50% on the CFAR calibration app (which forces a binary answer choice rather than an open-ended one). The CFAR app has lots of questions like "Which of these two teams won the Super Bowl in 1978?" where I just have no idea. The trivia database Nanashi is using has, for me, a greater proportion of questions on which my credence is something more interesting than an ignorance prior.

Comment by lukeprog on Calibration Test with database of 150,000+ questions · 2015-03-13T17:59:17.922Z · score: 0 (0 votes) · LW · GW

I'd prefer not to allow 0 and 1 as available credences. But if 0 remained as an option, I would just interpret it as "very close to 0" and keep using the app. That said, if a future version of the app showed me my Bayes score, the difference between what the app lets me choose (0%) and what I'm interpreting 0 to mean ("very close to 0") could matter.
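
To illustrate why that difference could matter, here is a minimal sketch of a logarithmic scoring rule (assuming that's what "Bayes score" refers to here); it assigns an unboundedly bad score to a credence of exactly 0% on a claim that turns out to be true.

```python
import math

def log_score(credence: float, claim_is_true: bool) -> float:
    """Logarithmic score: ln(p) for a true claim, ln(1 - p) for a false one.
    Scores are <= 0; closer to 0 is better."""
    p = credence if claim_is_true else 1.0 - credence
    return math.log(p) if p > 0 else float("-inf")

# "Very close to 0" earns a large but finite penalty if the claim is true...
print(log_score(0.001, claim_is_true=True))  # about -6.9
# ...whereas exactly 0% is penalized infinitely.
print(log_score(0.0, claim_is_true=True))    # -inf
```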

MIRI's 2014 Summer Matching Challenge

2014-08-07T20:03:24.171Z · score: 17 (18 votes)

Will AGI surprise the world?

2014-06-21T22:27:31.620Z · score: 12 (13 votes)

Some alternatives to “Friendly AI”

2014-06-15T19:53:20.340Z · score: 19 (24 votes)

An onion strategy for AGI discussion

2014-05-31T19:08:24.784Z · score: 13 (16 votes)

Can noise have power?

2014-05-23T04:54:32.829Z · score: 9 (10 votes)

Calling all MIRI supporters for unique May 6 giving opportunity!

2014-05-04T23:45:25.469Z · score: 20 (29 votes)

Is my view contrarian?

2014-03-11T17:42:49.788Z · score: 22 (23 votes)

Futurism's Track Record

2014-01-29T20:27:24.738Z · score: 12 (13 votes)

Tricky Bets and Truth-Tracking Fields

2014-01-29T08:52:38.889Z · score: 16 (16 votes)

MIRI's Winter 2013 Matching Challenge

2013-12-17T20:41:28.303Z · score: 20 (21 votes)

A model of AI development

2013-11-28T13:48:20.083Z · score: 18 (21 votes)

Gelman Against Parsimony

2013-11-24T15:23:32.773Z · score: 5 (12 votes)

From Philosophy to Math to Engineering

2013-11-04T15:43:55.704Z · score: 20 (22 votes)

The Inefficiency of Theoretical Discovery

2013-11-03T21:26:52.468Z · score: 19 (24 votes)

Intelligence Amplification and Friendly AI

2013-09-27T01:09:15.978Z · score: 14 (19 votes)

AI ebook cover design brainstorming

2013-09-26T23:49:03.319Z · score: 3 (6 votes)

Help us Optimize the Contents of the Sequences eBook

2013-09-19T04:31:20.391Z · score: 11 (12 votes)

Help us name a short primer on AI risk!

2013-09-17T20:35:34.895Z · score: 7 (12 votes)

Help MIRI run its Oxford UK workshop in November

2013-09-15T03:13:36.553Z · score: 6 (9 votes)

How well will policy-makers handle AGI? (initial findings)

2013-09-12T07:21:30.255Z · score: 15 (18 votes)

How effectively can we plan for future decades? (initial findings)

2013-09-04T22:42:05.195Z · score: 11 (12 votes)

Which subreddits should we create on Less Wrong?

2013-09-04T17:56:33.729Z · score: 24 (25 votes)

Artificial explosion of the Sun: a new x-risk?

2013-09-02T06:12:39.019Z · score: 3 (22 votes)

Transparency in safety-critical systems

2013-08-25T18:52:07.757Z · score: 4 (11 votes)

How Efficient is the Charitable Market?

2013-08-24T05:57:48.169Z · score: 16 (19 votes)

Engaging Intellectual Elites at Less Wrong

2013-08-13T17:55:05.719Z · score: 11 (28 votes)

How to Measure Anything

2013-08-07T04:05:58.366Z · score: 53 (55 votes)

Algorithmic Progress in Six Domains

2013-08-03T02:29:21.928Z · score: 24 (29 votes)

MIRI's 2013 Summer Matching Challenge

2013-07-23T19:05:56.873Z · score: 23 (30 votes)

Model Combination and Adjustment

2013-07-17T20:31:08.687Z · score: 51 (55 votes)

Writing Style and the Typical Mind Fallacy

2013-07-14T04:47:48.167Z · score: 27 (32 votes)

Four Focus Areas of Effective Altruism

2013-07-09T00:59:40.963Z · score: 43 (47 votes)

Responses to Catastrophic AGI Risk: A Survey

2013-07-08T14:33:50.800Z · score: 11 (12 votes)

Start Under the Streetlight, then Push into the Shadows

2013-06-24T00:49:22.961Z · score: 31 (34 votes)

Elites and AI: Stated Opinions

2013-06-15T19:52:36.207Z · score: 10 (23 votes)

Will the world's elites navigate the creation of AI just fine?

2013-05-31T18:49:10.861Z · score: 22 (24 votes)

Help us name the Sequences ebook

2013-04-15T19:59:13.969Z · score: 14 (15 votes)

Estimate Stability

2013-04-13T18:33:23.799Z · score: 6 (9 votes)

Fermi Estimates

2013-04-11T17:52:28.708Z · score: 57 (56 votes)

Explicit and tacit rationality

2013-04-09T23:33:29.127Z · score: 40 (45 votes)

Critiques of the heuristics and biases tradition

2013-03-18T23:49:57.035Z · score: 19 (20 votes)

Decision Theory FAQ

2013-02-28T14:15:55.090Z · score: 67 (64 votes)

Improving Human Rationality Through Cognitive Change (intro)

2013-02-24T04:49:48.976Z · score: 11 (11 votes)

Great rationality posts in the OB archives

2013-02-23T23:33:51.624Z · score: 9 (13 votes)

Great rationality posts by LWers not posted to LW

2013-02-16T00:31:20.077Z · score: 28 (33 votes)

A reply to Mark Linsenmayer about philosophy

2013-01-05T11:25:25.242Z · score: 19 (20 votes)

Ideal Advisor Theories and Personal CEV

2012-12-25T13:04:46.889Z · score: 24 (25 votes)

Noisy Reasoners

2012-12-13T07:53:29.193Z · score: 11 (16 votes)