Posts

Jimrandomh's Shortform 2019-07-04T17:06:32.665Z · score: 29 (4 votes)
Recommendation Features on LessWrong 2019-06-15T00:23:18.102Z · score: 61 (18 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 82 (35 votes)
User GPT2 is Banned 2019-04-02T06:00:21.075Z · score: 64 (18 votes)
User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines 2019-04-01T20:23:11.705Z · score: 50 (18 votes)
LW Update 2019-03-12 -- Bugfixes, small features 2019-03-12T21:56:40.109Z · score: 17 (2 votes)
Karma-Change Notifications 2019-03-02T02:52:58.291Z · score: 95 (25 votes)
Two Small Experiments on GPT-2 2019-02-21T02:59:16.199Z · score: 55 (21 votes)
How does OpenAI's language model affect our AI timeline estimates? 2019-02-15T03:11:51.779Z · score: 51 (16 votes)
Introducing the AI Alignment Forum (FAQ) 2018-10-29T21:07:54.494Z · score: 89 (32 votes)
Boston-area Less Wrong meetup 2018-05-16T22:00:48.446Z · score: 4 (1 votes)
Welcome to Cambridge/Boston Less Wrong 2018-03-14T01:53:37.699Z · score: 4 (2 votes)
Meetup : Cambridge, MA Sunday meetup: Lightning Talks 2017-05-20T21:10:26.587Z · score: 0 (1 votes)
Meetup : Cambridge/Boston Less Wrong: Planning 2017 2016-12-29T22:43:55.164Z · score: 0 (1 votes)
Meetup : Boston Secular Solstice 2016-11-30T04:54:55.035Z · score: 1 (2 votes)
Meetup : Cambridge Less Wrong: Tutoring Wheels 2016-01-17T05:23:05.303Z · score: 1 (2 votes)
Meetup : MIT/Boston Secular Solstice 2015-12-03T01:14:02.376Z · score: 1 (2 votes)
Meetup : Cambridge, MA Sunday meetup: The Contrarian Positions Game 2015-11-13T18:08:19.666Z · score: 1 (2 votes)
Rationality Cardinality 2015-10-03T15:54:03.793Z · score: 21 (22 votes)
An Idea For Corrigible, Recursively Improving Math Oracles 2015-07-20T03:35:11.000Z · score: 5 (5 votes)
Research Priorities for Artificial Intelligence: An Open Letter 2015-01-11T19:52:19.313Z · score: 23 (24 votes)
Petrov Day is September 26 2014-09-18T02:55:19.303Z · score: 24 (18 votes)
Three Parables of Microeconomics 2014-05-09T18:18:23.666Z · score: 25 (35 votes)
Meetup : LW/Methods of Rationality meetup 2013-10-15T04:02:11.785Z · score: 0 (1 votes)
Cambridge Meetup: Talk by Eliezer Yudkowsky: Recursion in rational agents 2013-10-15T04:02:05.988Z · score: 7 (8 votes)
Meetup : Cambridge, MA Meetup 2013-09-28T18:38:54.910Z · score: 4 (5 votes)
Charity Effectiveness and Third-World Economics 2013-06-12T15:50:22.330Z · score: 7 (12 votes)
Meetup : Cambridge First-Sunday Meetup 2013-03-01T17:28:01.249Z · score: 3 (4 votes)
Meetup : Cambridge, MA third-Sunday meetup 2013-02-11T23:48:58.812Z · score: 3 (4 votes)
Meetup : Cambridge First-Sunday Meetup 2013-01-31T20:37:32.207Z · score: 1 (2 votes)
Meetup : Cambridge, MA third-Sunday meetup 2013-01-14T11:36:48.262Z · score: 3 (4 votes)
Meetup : Cambridge, MA first-Sunday meetup 2012-11-30T16:34:04.249Z · score: 1 (2 votes)
Meetup : Cambridge, MA third-Sundays meetup 2012-11-16T18:00:25.436Z · score: 3 (4 votes)
Meetup : Cambridge, MA Sunday meetup 2012-11-02T17:08:17.011Z · score: 1 (2 votes)
Less Wrong Polls in Comments 2012-09-19T16:19:36.221Z · score: 79 (82 votes)
Meetup : Cambridge, MA Meetup 2012-07-22T15:05:10.642Z · score: 2 (3 votes)
Meetup : Cambridge, MA first-Sundays meetup 2012-03-30T17:55:25.558Z · score: 0 (3 votes)
Professional Patients: Fraud that ruins studies 2012-01-05T00:20:55.708Z · score: 16 (25 votes)
[LINK] Question Templates 2011-12-23T19:54:22.907Z · score: 1 (1 votes)
I started a blog: Concept Space Cartography 2011-12-16T21:06:28.888Z · score: 6 (9 votes)
Meetup : Cambridge (MA) Saturday meetup 2011-10-20T03:54:28.892Z · score: 2 (3 votes)
Another Mechanism for the Placebo Effect? 2011-10-05T01:55:11.751Z · score: 8 (22 votes)
Meetup : Cambridge, MA Sunday meetup 2011-10-05T01:37:06.937Z · score: 1 (2 votes)
Meetup : Cambridge (MA) third-Sundays meetup 2011-07-12T23:33:01.304Z · score: 0 (1 votes)
Draft of a Suggested Reading Order for Less Wrong 2011-07-08T01:40:06.828Z · score: 29 (30 votes)
Meetup : Cambridge Massachusetts meetup 2011-06-29T16:57:15.314Z · score: 1 (2 votes)
Meetup : Cambridge Massachusetts meetup 2011-06-22T15:26:03.828Z · score: 2 (3 votes)
The Present State of Bitcoin 2011-06-21T20:17:13.131Z · score: 7 (12 votes)
Safety Culture and the Marginal Effect of a Dollar 2011-06-09T03:59:28.731Z · score: 23 (36 votes)
Cambridge Less Wrong Group Planning Meetup, Tuesday 14 June 7pm 2011-06-08T03:41:41.375Z · score: 1 (2 votes)

Comments

Comment by jimrandomh on Sayan's Braindump · 2019-09-19T01:23:15.432Z · score: 2 (1 votes) · LW · GW
  • Multiple large monitors, for programming.
  • Waterproof paper in the shower, for collecting thoughts and making a morning todo list
  • Email filters and Priority Inbox, to prevent spurious interruptions while keeping enough trust that urgent things will generate notifications, so that I don't feel compelled to check too often
  • USB batteries for recharging phones - one to carry around, one at each charging spot for quick-swapping
Comment by jimrandomh on Focus · 2019-09-16T20:07:29.706Z · score: 7 (4 votes) · LW · GW

Yep, one of us edited it to fix the link. Added a GitHub issue for dealing with relative links in RSS in general: https://github.com/LessWrong2/Lesswrong2/issues/2434 .

Comment by jimrandomh on Raemon's Scratchpad · 2019-09-14T05:26:16.569Z · score: 4 (2 votes) · LW · GW

Note that this would be a very non-idiomatic way to use jQuery. More typical architectures don't do client-side templating; they do server-side rendering and client-side incremental mutation.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-09-13T22:24:50.632Z · score: 22 (4 votes) · LW · GW

I'm kinda confused about the relation between cryptography people and security mindset. Looking at the major cryptographic algorithm classes (hashing, symmetric-key, asymmetric-key), it seems pretty obvious that the correct standard algorithm in each class is probably a compound algorithm -- hash by xor'ing the results of several highly-dissimilar hash functions, etc, so that a mathematical advance which breaks one algorithm doesn't break the overall security of the system. But I don't see anyone doing this in practice, and also don't see signs of a debate on the topic. That makes me think that, to the extent they have security mindset, it's either being defeated by political processes in the translation to practice, or it's weirdly compartmentalized and not engaged with any practical reality or outside views.
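
To make the compound-algorithm idea concrete, here's a minimal sketch (my illustration, not a vetted construction) of combining two structurally dissimilar hash functions, so that a break in either one alone doesn't let an attacker control the combined digest:

```python
import hashlib

def combined_hash(data: bytes) -> bytes:
    """Toy combiner: XOR the digests of two dissimilar hash designs
    (SHA-256 is Merkle-Damgard-based, SHA3-256 is sponge-based).
    Purely illustrative: for collision resistance specifically,
    concatenating the digests is the better-studied combiner."""
    d1 = hashlib.sha256(data).digest()
    d2 = hashlib.sha3_256(data).digest()
    return bytes(a ^ b for a, b in zip(d1, d2))

print(combined_hash(b"example input").hex())
```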

Comment by jimrandomh on Bíos brakhús · 2019-09-12T18:13:03.208Z · score: 4 (3 votes) · LW · GW

In my experience, the motion that seems to prevent mental crowding-out is intervening on the timing of my thinking: if I force myself to spend longer on a narrow question/topic/idea than is comfortable, eg with a timer, I'll eventually run out of cached thoughts and spot things I would have otherwise missed.

Comment by jimrandomh on An1lam's Short Form Feed · 2019-09-12T02:01:17.861Z · score: 2 (1 votes) · LW · GW
By generativity do you mean "within-domain" generativity?

Not exactly, because Carmack has worked in more than one domain (albeit not as successfully; Armadillo Aerospace never made orbit).

On those dimensions, it seems entirely fair to compare across topics and assert that Pearl was solving more significant and more difficult problem(s) than Carmack

Agree on significance, disagree on difficulty.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-09-12T01:19:07.010Z · score: 22 (6 votes) · LW · GW

Eliezer has written about the notion of security mindset, and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.

An1lam's recent shortform post talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that this is because engineer-types get to actually write software that might have security holes, and have the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don't have security mindset, at least not of the type Eliezer described.

My hypothesis is that to acquire security mindset, you have to:

  • Practice optimizing from a red team/attacker perspective,
  • Practice optimizing from a defender perspective, and
  • Practice modeling the interplay between those two perspectives.

So a software engineer can acquire security mindset because they practice writing software which they don't want to have vulnerabilities, they practice searching for vulnerabilities (usually as an auditor simulating an attacker rather than as an actual attacker, but the cognitive algorithm is the same), and they practice going meta when they're designing the architecture of new projects. This explains why security mindset is very common among experienced senior engineers (who have done each of the three many times), and rare among junior engineers (who haven't yet). It explains how Eliezer can have security mindset: he alternates between roleplaying a future AI-architect trying to design AI control/alignment mechanisms, roleplaying a future misaligned-AI trying to optimize around them, and going meta on everything-in-general. It also predicts that junior AI scientists won't have this security mindset, and probably won't acquire it except by following a similar cognitive trajectory.

Which raises an interesting question: how much does security mindset generalize between domains? Ie, if you put Theo de Raadt onto a hypothetical future AI team, would he successfully apply the same security mindset there as he does to general computer security?

Comment by jimrandomh on G Gordon Worley III's Shortform · 2019-09-12T00:20:15.389Z · score: 11 (5 votes) · LW · GW

Outside observer takeaway: There's a bunch of sniping and fighting here, but if I ignore all the fighting and look at only the ideas, what we have is that Gordon presented an idea, Duncan presented counterarguments, and Gordon declined to address the counterarguments. Posting on shortform doesn't come with an obligation to follow up and defend things; it's meant to be a place where tentative and early stage ideas can be thrown around, so that part is fine. But I did come away believing the originally presented idea is probably wrong.

(Some of the meta-level fighting seemed not-fine, but that's for another comment.)

Comment by jimrandomh on ike's Shortform · 2019-09-10T01:39:50.959Z · score: 3 (2 votes) · LW · GW

Yes, it implies that. The exact level of fidelity required is less straightforward; it's clear that a perfect simulation must have qualia/consciousness, but small imperfections make the argument not hold, so to determine whether an imperfect simulation is conscious we'd have to grapple with the even-harder problem of neuroscience.

Comment by jimrandomh on Eli's shortform feed · 2019-09-10T01:34:00.749Z · score: 14 (6 votes) · LW · GW

In There’s No Fire Alarm for Artificial General Intelligence Eliezer argues:

A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, you know you won’t lose face if you proceed to exit the building.

If I have a predetermined set of tests, this could serve as a fire alarm, but only if you've successfully built a consensus that it is one. This is hard, and the consensus would need to be quite strong. To avoid ambiguity, the test itself would need to be demonstrably resistant to being clever Hans'ed. Otherwise it would be just another milestone.

Comment by jimrandomh on An1lam's Short Form Feed · 2019-09-10T01:05:42.962Z · score: 4 (3 votes) · LW · GW

I think the engineer mindset is more strongly represented here than you think, but that the nature of nonspecialist online discussion warps things away from the engineer mindset and towards the scientist mindset. Both types of people are present, but the engineer-mindset people tend not to put that part of themselves forward here.

The problem with getting down into the details is that there are many areas with messy details to get into, and it's hard to appreciate the messy details of an area you haven't spent enough time in. So deep dives in narrow topics wind up looking more like engineer-mindset, while shallow passes over wide areas wind up looking more like scientist-mindset. LessWrong posts can't assume much background, which limits their depth.

I would be happy to see more deep-dives; a lightly edited transcript of John Carmack wouldn't be a prototypical LessWrong post, but it would be a good one. But such posts are necessarily going to exclude a lot of readers, and LessWrong isn't necessarily going to be competitive with posting in more topic-specialized places.

Comment by jimrandomh on An1lam's Short Form Feed · 2019-09-10T00:36:57.021Z · score: 8 (3 votes) · LW · GW
Yet I also feel like John Carmack probably isn't remotely near the level of Pearl (I'm not that familiar with Carmack's work): pushing forward video game development doesn't compare to neatly figuring out what exactly causality itself is.

You're looking at the wrong thing. Don't look at the topic of their work; look at their cognitive style and overall generativity. Carmack is many levels above Pearl. Just as importantly, there's enough recorded video of him speaking unscripted that it's feasible to absorb some of his style.

Comment by jimrandomh on benwr's unpolished thoughts · 2019-09-10T00:11:35.857Z · score: 3 (2 votes) · LW · GW

I'm not sure relationship-strength on a single axis is quite the right factor. At the end of a workshop, the participants don't have that much familiarity, if you measure it by hours spent talking; but those hours will tend to have been focused on the sort of information that makes a Doom circle work, ie, people's life strategies and the things they're struggling with. If I naively tried to gather a group with strong relationship-strength, I expect many of the people I invited would find out that they didn't know each other as well as they thought they did.

Comment by jimrandomh on Eli's shortform feed · 2019-09-10T00:05:13.511Z · score: 7 (2 votes) · LW · GW

A slightly different spin on this model: it's not about the types of strategies people generate, but the number. If you think about something and only come up with one strategy, you'll do it without hesitation; if you generate three strategies, you'll pause to think about which is the right one. So people who can't come up with as many strategies are impulsive.

Comment by jimrandomh on Rob B's Shortform Feed · 2019-09-09T23:56:10.224Z · score: 10 (2 votes) · LW · GW

Somewhat more meta level: Heuristically speaking, it seems wrong and dangerous for the answer to "which expressed human preferences are valid?" to be anything other than "all of them". There's a common pattern in metaethics which looks like:

1. People seem to have preference X

2. X is instrumentally valuable as a source of Y and Z. The instrumental-value relation explains how the preference for X was originally acquired.

3. [Fallacious] Therefore preference X can be ignored without losing value, so long as Y and Z are optimized.

In the human brain algorithm, if you optimize something instrumentally for a while, you start to value it terminally. I think this is the source of a surprisingly large fraction of our values.

Comment by jimrandomh on Chris_Leong's Shortform · 2019-09-09T23:41:39.829Z · score: 8 (4 votes) · LW · GW

+1 for book-distillation, probably the most underappreciated and important type of post.

Comment by jimrandomh on Bíos brakhús · 2019-09-09T23:37:02.240Z · score: 2 (1 votes) · LW · GW

In theory you might, but in practice you can't. Distraction-avoidant behavior favors things that you can get into quickly, on the order of seconds--things like checking for Facebook notifications, or starting a game which has a very fast load time. Most intellectual work has a spin-up period, while you recreate mental context, before it provides rewards, so distraction-avoidant behavior doesn't choose it.

Comment by jimrandomh on ozziegooen's Shortform · 2019-09-09T23:33:09.576Z · score: 2 (1 votes) · LW · GW

One way to look at this is, where is the variance coming from? Any particular forecasting question has implied sub-questions, which the predictor needs to divide their attention between. For example, given the question "How much value has this organization created?", a predictor might spend their time comparing the organization to others in its reference class, or they might spend time modeling the judges and whether they tend to give numbers that are higher or lower.

Evaluation consistency is a way of reducing the amount of resources that you need to spend modeling the judges, by providing a standard that you can calibrate against. But there are other ways of achieving the same effect. For example, if you have people predict the ratio of value produced between two organizations, then a judge who consistently rates high or rates low no longer matters, since the bias affects both organizations equally.
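
A toy numerical sketch of that last point (made-up numbers, and it assumes the judge's bias is multiplicative): the bias shifts both absolute estimates but cancels out of the ratio.

```python
# Hypothetical true values for two organizations, and a judge who
# consistently scores everything 30% low.
true_values = {"org_a": 100.0, "org_b": 40.0}
judge_bias = 0.7  # multiplicative bias applied to every judgment

judged = {name: value * judge_bias for name, value in true_values.items()}

print(judged["org_a"], judged["org_b"])             # 70.0 28.0 -- both shifted
print(judged["org_a"] / judged["org_b"])            # 2.5 -- ratio unchanged
print(true_values["org_a"] / true_values["org_b"])  # 2.5
```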

Comment by jimrandomh on Hazard's Shortform Feed · 2019-09-09T23:14:41.750Z · score: 4 (2 votes) · LW · GW

Yep, I notice this sometimes when other people are doing it. I don't notice myself doing it, but that's probably because it's easier to notice from the receiving end.

In writing, it makes me bounce off. (There are many posts competing for my attention, so if the first few sentences fail to say anything interesting, my brain assumes that your post is not competitive and moves on.) In speech, it makes me get frustrated with the speaker. If it's in speech and it's an interruption, that's especially bad, because it's displacing working memory from whatever I was doing before.

Comment by jimrandomh on Open & Welcome Thread - September 2019 · 2019-09-09T21:53:57.828Z · score: 13 (3 votes) · LW · GW

It's not promoted as a first-class feature since most people don't have enough time to read quite so many comments, and need more filtering, but some people requested it and use it, and the code-implementation is simple, so it won't be going away.

The reason negatively-voted comments don't appear is that this page once shared code with the All Posts page, which has a checkbox for controlling that, but this page doesn't have such a checkbox wired up. GitHub issue: https://github.com/LessWrong2/Lesswrong2/issues/2415 . Hiding negative-karma content used to be important because the most-recent content was often spam, and displaying it in between when it was posted and when the mods deleted it made for a bad experience; but we now have enough other anti-spam measures in place that this isn't really a concern.

The way pagination is currently handled is something we inherited from our framework, which is pretty suboptimal. At some point we're going to redo the way pagination is handled, not for allComments in particular but at a lower level which will affect multiple places, allComments included. This is likely to take a while, though, since it's a somewhat involved piece of development and there are more important things in the queue in front of it.

Comment by jimrandomh on Does anyone else feel LessWrong is slow? · 2019-09-06T20:21:20.537Z · score: 15 (6 votes) · LW · GW

Yes, it's slower than old-LessWrong. When we rewrote, we built on a collection of libraries and frameworks which were... not exactly production-ready. This has required a lot of compensatory engineering work to bring performance up to par, and there's still a bunch more compensatory engineering work left to do.

In the few months immediately after the rewrite, it was catastrophically slow, to the point where if you opened a bunch of sequence-posts quickly in new tabs, whichever server from the pool handled the requests would die. It's much better than that now, but it's still not where I want it to be. When it is as fast as I'd like, I plan to write a long post with technical details of what we had to change. In the mean time, yes, we know and we're working on it.

Comment by jimrandomh on How Specificity Works · 2019-09-06T00:59:09.206Z · score: 11 (6 votes) · LW · GW

I suspect the true skill is neither going up nor down the ladder of abstraction; it's "taking the ladder of abstraction as object". From that perspective, this post, and most of the posts it links to, teach the skill, but in a weird, indirect way: by making claims about the ladder of abstraction, they force you to notice and think about it, and that practice is valuable independent of the specific claims.

Comment by jimrandomh on What Programming Language Characteristics Would Allow Provably Safe AI? · 2019-08-28T21:46:53.454Z · score: 2 (2 votes) · LW · GW

In the context of programming languages, "proof" means "machine-checkable proof from mathematical axioms". While there is difficult work involved in bridging between philosophy and mathematics, a programming language is only going to help on the math side, and on the math side, verifying proofs (once written in programming-language form) is trivial.
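
For concreteness, a minimal example (mine, not from the comment) of a machine-checkable proof, in Lean 4; the claim is the type, the proof is the term, and the kernel checks the two against each other mechanically:

```lean
-- `Nat.add_comm` is a lemma from Lean's standard library; the kernel
-- verifies that applying it to `a` and `b` really produces a proof of
-- the stated claim.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```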

Comment by jimrandomh on Dual Wielding · 2019-08-27T20:27:27.424Z · score: 5 (3 votes) · LW · GW

If the phone has a removable main battery, and you're swapping that, then yes. If it's a standalone power bank with a USB port, then it's the cable that varies rather than the battery, and you only need a few varieties for complete coverage (micro-USB, USB-C and Lightning will charge pretty much any device you can find these days).

Comment by jimrandomh on Dual Wielding · 2019-08-27T16:21:58.906Z · score: 41 (17 votes) · LW · GW
A battery pack is an alternative, but they seem to be similar in size to phones and don’t allow the charging swap tactic

That's because you haven't used enough dakka. In addition to carrying an extra battery, you also leave an identical battery plugged in at each of the spots where you have a charger. You charge your phone by connecting it to the battery (with both in your inventory); you charge the battery by swapping the one in your inventory with the one that lives at the charger. This has the additional benefit that, if you find yourself with a group, you can solve the phone-battery problem for other people as well, by lending or gifting them a charged battery.

Comment by jimrandomh on adam_scholl's Shortform · 2019-08-12T20:50:47.695Z · score: 5 (5 votes) · LW · GW

On the other hand: half of mouse studies working in humans is an extremely good success rate. We should be quite suspicious of file-drawer effects and p-hacking.

Comment by jimrandomh on Dony's Shortform Feed · 2019-08-12T20:40:15.849Z · score: 6 (3 votes) · LW · GW

Unclear, but see Zvi's Slack sequence for some good reasons why we should act as though we need breaks, even if we technically don't.

Comment by jimrandomh on Keeping Beliefs Cruxy · 2019-08-07T19:43:06.093Z · score: 4 (2 votes) · LW · GW

Pedantic correction: We didn't spend 4 weeks on it; while 4 weeks did pass, there was a lot of other stuff going on during that interval.

Comment by jimrandomh on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T17:31:29.546Z · score: 21 (11 votes) · LW · GW

If the true cost were double or triple the $10B estimate, this wouldn't significantly change the implications; $30B is not significantly less feasible.

Comment by jimrandomh on Drive-By Low-Effort Criticism · 2019-07-31T22:21:42.160Z · score: 20 (8 votes) · LW · GW

Yes, we should discourage low-quality criticism which is wrong, and encourage high-quality criticism which is right. (I already said this, in the grandparent.) Having accounted for this, it makes no sense at all to prefer longer critical comments to shorter ones. (Quite the opposite preference would be sensible, in fact.)

I think that, compared to high-effort criticisms, low-effort criticisms are much more likely to be based on misunderstandings or to be otherwise low quality. I interpret Lionhearted as saying that criticism should, on the margin, be held to a higher bar than it is now.

Comment by jimrandomh on Shortform Beta Launch · 2019-07-29T03:25:35.210Z · score: 7 (3 votes) · LW · GW

Before we implemented shortform as a feature, some people created posts for themselves to put comments on and called them "shortform feeds". This is a misnomer, because they're not feeds in any sense of the word, so we decided not to call them that. But it looks like there were some residual linguistic habits.

Comment by jimrandomh on Nutrition heuristic: Cycle healthy options · 2019-07-17T21:47:00.058Z · score: 5 (3 votes) · LW · GW

This works in the case of {steak,fish,chicken} because those options are pretty close to identical in their overall role in a diet. But there are also nutrition strategies for which cycling is worse than any of the options. A straightforward example is keto. Conventional wisdom is that its effects are highly nonlinear; a diet which is low-carb but not low-carb enough is substantially worse than either a strictly low-carb or a full-carb diet. And there's a switchover period with some adverse effects, so cycling between keto-days and non-keto-days would be bad.

Comment by jimrandomh on Please give your links speaking names! · 2019-07-12T22:29:48.995Z · score: 5 (3 votes) · LW · GW

This is a bug in Vulcan, the framework we're built on; https://github.com/LessWrong2/Lesswrong2/issues/638 . We'll come up with a workaround at some point.

Comment by jimrandomh on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-10T05:57:44.751Z · score: 6 (3 votes) · LW · GW

That link doesn't have enough information to find the study, which is likely to contain important methodological caveats.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-07-09T02:20:15.059Z · score: 13 (6 votes) · LW · GW

Among people who haven't learned probabilistic reasoning, there's a tendency to push the (implicit) probabilities in their reasoning to the extremes; when the only categories available are "will happen", "won't happen", and "might happen", too many things end up in the will/won't buckets.

A similar, subtler thing happens to people who haven't learned the economics concept of elasticity. Some example (fallacious) claims of this type:

  • Building more highway lanes will cause more people to drive (induced demand), so building more lanes won't fix traffic.
  • Building more housing will cause more people to move into the area from far away, so additional housing won't decrease rents.
  • A company made X widgets, so there are X more widgets in the world than there would be otherwise.

This feels like it's in the same reference class as the traditional logical fallacies, and giving it a name - "zero elasticity fallacy" - might be enough to significantly reduce the rate at which people make it. But it does require a bit more concept-knowledge than most of the traditional fallacies, so, maybe not? What happens when you point this out to someone with no prior microeconomics exposure, and does logical-fallacy branding help with the explanation?
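
As a toy illustration of the nonzero-elasticity point (my own made-up curves, purely illustrative): in a linear supply-and-demand model, adding capacity moves the equilibrium partway -- the price falls and quantity rises, but by less than the full amount added.

```python
# Toy linear market. Adding 1000 units of new housing shifts the supply
# curve; induced demand absorbs some of the effect, but rents still fall
# and total quantity still rises -- partially, not zero and not one-for-one.

def equilibrium(supply_shift: float):
    # Hypothetical curves: demand q = 10000 - 2p, supply q = 4p + shift.
    # Solve 10000 - 2p = 4p + shift  =>  p = (10000 - shift) / 6
    price = (10000 - supply_shift) / 6
    quantity = 10000 - 2 * price
    return price, quantity

p0, q0 = equilibrium(0)
p1, q1 = equilibrium(1000)
print(f"price: {p0:.0f} -> {p1:.0f}")     # ~1667 -> 1500: falls, despite induced demand
print(f"quantity: {q0:.0f} -> {q1:.0f}")  # ~6667 -> 7000: rises by ~333, not 0 and not 1000
```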

Comment by jimrandomh on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-08T22:59:22.186Z · score: 5 (4 votes) · LW · GW
What's the difference between motivated errors and lies?

They're implemented by very different cognitive algorithms, which differently constrain the sorts of falsehoods and strategies they can generate.

Motivated cognition is exclusively implemented in pre-conscious mechanisms: distortion of attention, distortion of intuition, selective forgetting. Direct lying, on the other hand, usually refers to lying which has System 2 involvement, which means a wider range of possible mistruths and a wider (and more destructive) range of supporting strategies.

For example: A motivated reasoner will throw out some of their data inappropriately, telling themself a plausible but false story about how that data didn't mean anything, but they'll never compose fake data from scratch. But a direct liar will do both, according to what they can get away with.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-07-04T20:46:52.061Z · score: 4 (2 votes) · LW · GW

I'm pretty uncertain how the arrangements actually work in practice, but one possible arrangement is: You have two organizations, one of which is a traditional pharmaceutical company with the patent for an untested drug, and one of which is a contract research organization. The pharma company pays the contract research organization to conduct a clinical trial, and reports the amount it paid as the cost of the trial. They have common knowledge of the chance of success, of the probability distribution of future revenue for the drug, of how much it costs to conduct the trial, and of how much it costs to insure away the risks. So the amount the first company pays to the second is the costs of the trial, plus a share of the expected profit.

Pharma companies making above-market returns are subject to political attack from angry patients, but contract research organizations aren't. So if you control both of these organizations, you would choose to allocate all of the profits to the second organization, so you can defend yourself from claims of gouging by pleading poverty.

Comment by jimrandomh on Crisis of Faith · 2019-07-04T17:34:27.145Z · score: 4 (2 votes) · LW · GW

Yep; in the time since this was written, the LW community has gone pretty heavily in the direction of "let's figure out how to reclaim the coordination and community benefits of religion separately from the weird belief stuff", and (imo) done pretty well at it.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-07-04T17:22:49.463Z · score: 16 (8 votes) · LW · GW

The discussion so far on cost disease seems pretty inadequate, and I think a key piece that's missing is the concept of Hollywood Accounting. Hollywood Accounting is what happens when you have something that's extremely profitable, but which has an incentive to not be profitable on paper. The traditional example, which inspired the name, is when a movie studio signs a contract with an actor to share a percentage of profits; in that case, the studio will create subsidiaries, pay all the profits to the subsidiaries, and then declare that the studio itself (which signed the profit-sharing agreement) has no profits to give.

In the public contracting sector, you have firms signing cost-plus contracts, which are similar; the contract requires that profits don't exceed a threshold, so they get converted into payments to de-facto-but-not-de-jure subsidiaries, favors, and other concealed forms. Sometimes this involves large dead-weight losses, but the losses are not the point, and are not the cause of the high price.

In medicine, there are occasionally articles which try to figure out where all the money is going in the US medical system; they tend to look at one piece, conclude that that piece isn't very profitable so it can't be responsible, and move on. I suspect this is what's going on with the cost of clinical trials, for example; they aren't any more expensive than they used to be, they just get allocated a share of the profits from R&D ventures that're highly profitable overall.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-07-04T17:09:37.876Z · score: 20 (7 votes) · LW · GW

Bullshit jobs are usually seen as an absence of optimization: firms don't get rid of their useless workers because that would require them to figure out who they are, and risk losing or demoralizing important people in the process. But alternatively, if bullshit jobs (and cover for bullshit jobs) are a favor to hand out, then they're more like a form of executive compensation: my useless underlings owe me, and I will get illegible favors from them in return.

What predictions does the bullshit-jobs-as-compensation model make, that differ from the bullshit-jobs-as-lack-of-optimization model?

Comment by jimrandomh on How/would you want to consume shortform posts? · 2019-07-04T17:04:56.615Z · score: 4 (2 votes) · LW · GW

The current, hacky solution to shortform is: you make a post named "[Name]'s Shortform Posts", and write comments on it.

We're planning to promote this to a first-class site feature; we're going to make some UI that auto-generates a post like that for you, and gives the comments on it visibility on a special shortform page and on the All Posts page.

Comment by jimrandomh on Causal Reality vs Social Reality · 2019-06-30T19:02:07.600Z · score: 20 (6 votes) · LW · GW

If someone is wrong, this should definitely be made legible, so that no one leaves believing the wrong thing. The problem is with the "obviously" part. Once the truth of the object-level question is settled, there is the secondary question of how much we should update our estimate of the competence of whoever made a mistake. I think we should by default try to be clear about the object-level question and object-level mistake, and by default glomarize about the secondary question.

I read Ruby as saying that we should by default glomarize about the secondary question, and also that we should be much more hesitant about assuming an object-level error we spot is real. I think this makes sense as a conversation norm, where clarification is fast, but is bad in a forum, where asking someone to clarify their bad argument frequently leads to a dropped thread and a confusing mess for anyone who comes across the conversation later.

Comment by jimrandomh on How to deal with a misleading conference talk about AI risk? · 2019-06-27T21:54:54.714Z · score: 4 (2 votes) · LW · GW

Moderators are discussing this with each other now. We do not have consensus on this.

Comment by jimrandomh on What is the evidence for productivity benefits of weightlifting? · 2019-06-19T19:44:59.119Z · score: 12 (7 votes) · LW · GW

I think this answer was good, but also feel like curating it (and skipping the team-discussion that usually goes with curation) was a mistake. This answer really needed, at a minimum, a formatting cleanup, before it was ready for curation. I tried to read it, and I just... can't. Too many fonts, too much inconsistent indentation. And I would've appreciated a chance to make the curation email work right (ie, make it include the actual answer), before this went out.

Comment by jimrandomh on Mistakes with Conservation of Expected Evidence · 2019-06-18T19:34:17.361Z · score: 17 (6 votes) · LW · GW

Promoted to curated. One of LessWrong's main goals is to advance the art of rationality, and spotting patterns in the ways we process and misprocess evidence is a central piece of that. I also appreciated the Bayesian grounding, the epistemic statuses and the recapping and links to older work. I'm pretty sure most have made these errors before, and I expect that fitting them into a pattern will make them easier to recognize in the future.

Comment by jimrandomh on Recommendation Features on LessWrong · 2019-06-15T19:17:36.315Z · score: 5 (3 votes) · LW · GW

It's ambiguous whether to recommend the first unread post or the next post after the last read, and I suspect neither answer will satisfy everyone. You can at least click through to the sequence table of contents, and go from there, though.
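
To make the ambiguity concrete, a small sketch (hypothetical code, not the actual recommender) of the two policies; they differ exactly when the user has skipped something:

```python
def first_unread(sequence_posts, read_ids):
    """Recommend the earliest post in the sequence the user hasn't read."""
    return next((p for p in sequence_posts if p not in read_ids), None)

def next_after_last_read(sequence_posts, read_ids):
    """Recommend the post immediately after the latest one the user has read."""
    last = max((i for i, p in enumerate(sequence_posts) if p in read_ids), default=-1)
    return sequence_posts[last + 1] if last + 1 < len(sequence_posts) else None

posts = ["post1", "post2", "post3", "post4"]
read = {"post1", "post3"}                 # the user skipped post2
print(first_unread(posts, read))          # post2
print(next_after_last_read(posts, read))  # post4
```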

Comment by jimrandomh on Recommendation Features on LessWrong · 2019-06-15T04:04:10.045Z · score: 8 (4 votes) · LW · GW

Is this adjusted by post date? Posts from before the relaunch are going to have much less karma, on average (and as user karma grows and the karma weight of upvotes grows with it, average karma will increase further). A post from last month with 50 karma, and a post from 2010 with 50 karma, are really not comparable…

This is one of a number of significant problems with using karma for this. My ideal system - which we probably won't do soon, because of the amount of effort involved - would be something like:

  • Periodically, users get a list of posts that they read over the past week, and are asked to pick their favorite and to update their votes
  • This is converted into pairwise comparisons and used to generate an Elo rating for each post (see the sketch below)
  • The recommender has a VOI factor to increase the visibility of posts where it doesn't have a precise enough estimate of the rating
  • We separately have trusted raters compare posts from a more random sampling, compute a separate set of ratings that way, and use it as a ground truth to set the tuning parameters and see how well it's working.

In this world, karma would still be displayed and updated in response to votes the same way it is now, to give people an estimate of visibility and reception and to get a quick initial estimate of quality, but it would be superseded as a measurement of post quality for older content.
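
A minimal sketch (mine, not the planned implementation) of the pairwise-comparisons-to-Elo step from the list above, with arbitrary parameters:

```python
# Convert "user picked post A as their favorite over post B" events into
# Elo-style ratings. Parameters (K, base rating) are arbitrary.
K = 16  # update step size

def expected_score(r_winner: float, r_loser: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    ratings.setdefault(winner, 1500.0)
    ratings.setdefault(loser, 1500.0)
    e = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e)
    ratings[loser] -= K * (1 - e)

ratings = {}
for winner, loser in [("post_a", "post_b"), ("post_a", "post_c"), ("post_c", "post_b")]:
    update(ratings, winner, loser)
print(ratings)
```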

Comment by jimrandomh on Does Bayes Beat Goodhart? · 2019-06-04T23:23:09.592Z · score: 2 (1 votes) · LW · GW

However, I think it is reasonable to at least add a calibration requirement: there should be no way to systematically correct estimates up or down as a function of the expected value.

Why is this important? If the thing with the highest score is always the best action to take, why does it matter if that score is an overestimate? Utility functions are fictional anyway right?

As a very high level, first-pass approximation, I think the right way to think of this is as a sort of unit test; even if we can't directly see a reason why systematically incorrect estimates would cause problems in an AI design, this is an obvious enough desideratum that we should by default assume a system which breaks it is bad, unless we can prove otherwise.

Closer to the object level--yes, the highest-scoring action is the correct action to take, and if you model miscalibration as a single, monotonic function applied as the last step before deciding, then it can't change any decisions. But if miscalibration can affect any intermediate steps, then this doesn't hold. As a simple example: suppose the AI is deciding whether to pay to preserve its access to a category of options which it knows are highly subject to Regressional Goodhart. Whether that's worth paying for depends on value estimates made before the final comparison, so a systematic overestimate at that intermediate step can change the decision.
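
A toy sketch of that distinction (mine, not from the post): an order-preserving miscalibration applied only at the final comparison can't change which option wins, but the same miscalibration applied to an intermediate estimate -- like the value of paying to keep an option open -- can flip a decision.

```python
# Three options with true expected utilities.
true_utilities = {"a": 10.0, "b": 7.0, "c": 3.0}

def overestimate(u: float) -> float:
    # A monotone (order-preserving) miscalibration: everything is inflated.
    return 2 * u + 5

# Applied at the final step, the best option is unchanged:
best_true = max(true_utilities, key=true_utilities.get)
best_distorted = max(true_utilities, key=lambda k: overestimate(true_utilities[k]))
assert best_true == best_distorted == "a"

# But applied to an intermediate estimate -- say, deciding whether to pay 8
# to preserve future access to option "b" -- it flips the decision:
cost_to_preserve = 8.0

def worth_paying(estimated_value_of_b: float) -> bool:
    return estimated_value_of_b > cost_to_preserve

print(worth_paying(true_utilities["b"]))                # False: 7 < 8
print(worth_paying(overestimate(true_utilities["b"])))  # True: 19 > 8
```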

Comment by jimrandomh on Comment section from 05/19/2019 · 2019-05-20T20:31:29.692Z · score: 6 (3 votes) · LW · GW

I believe this is currently mostly manual (ie, Oli created a new post, did a database operation to move comments over, then posted a comment in the old place). Given that it seems to have worked out well in this case, if it comes up a few more times, we'll look into automating it (and making small details like old-post comment permalinks work).

Comment by jimrandomh on Feature Request: Self-imposed Time Restrictions · 2019-05-20T20:19:26.753Z · score: 6 (3 votes) · LW · GW

We (the LW team) are definitely thinking about this issue, and I at least strongly prefer that people use the site in ways that reflect decisions which they would endorse in retrospect; ie, reading things that are valuable to them, at times and in quantities that make sense, and not as a way to avoid other things that might be more important. I'm particularly thinking about this in the context of the upcoming Recommendations system, which recommends older content; that has the potential to be more of an unlimited time sink, in contrast to reading recent posts (which are limited in number) or reading sequences (which is more like reading a book, which people have existing adaptations around).

A big problem with naively implemented noprocrast/leechblock-style features at the site level is that they can backfire by shunting people into workarounds which make things worse. For example, if someone is procrastinating on their computer, noprocrast kicking in when they don't want to stop might make them start reading on their phone, creating bad habits around phone use. Cutting off access in the middle of reading a post (as opposed to between posts) is especially likely to do this; but enforcing a restriction only at load-time encourages opening lots of tabs, which is bad. And since people are likely to invest in setting personal rules around whatever mechanisms we build, there are switching costs if the first mechanism isn't quite right.

So: I definitely want us to have something in this space, and for it to be good. But it may take a while.