Comment by jimrandomh on Please give your links speaking names! · 2019-07-12T22:29:48.995Z · score: 5 (3 votes) · LW · GW

This is a bug in Vulcan, the framework we're built on: https://github.com/LessWrong2/Lesswrong2/issues/638. We'll come up with a workaround at some point.

Comment by jimrandomh on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-10T05:57:44.751Z · score: 6 (3 votes) · LW · GW

That link doesn't have enough information to find the study, which is likely to contain important methodological caveats.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-07-09T02:20:15.059Z · score: 13 (6 votes) · LW · GW

Among people who haven't learned probabilistic reasoning, there's a tendency to push the (implicit) probabilities in their reasoning to the extremes; when the only categories available are "will happen", "won't happen", and "might happen", too many things end up in the will/won't buckets.

A similar, subtler thing happens to people who haven't learned the economics concept of elasticity. Some example (fallacious) claims of this type:

  • Building more highway lanes will cause more people to drive (induced demand), so building more lanes won't fix traffic.
  • Building more housing will cause more people to move into the area from far away, so additional housing won't decrease rents.
  • A company made X widgets, so there are X more widgets in the world than there would be otherwise.

This feels like it's in the same reference class as the traditional logical fallacies, and giving it a name - "zero elasticity fallacy" - might be enough to significantly reduce the rate at which people make it. But it does require a bit more concept-knowledge than most of the traditional fallacies, so, maybe not? What happens when you point this out to someone with no prior microeconomics exposure, and does logical-fallacy branding help with the explanation?
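To make the partial-offset point concrete, here's a toy linear supply/demand model (my own illustration, with made-up parameters; nothing here comes from the original comment):

```python
# Toy linear market: demand Q = a - b*P, supply Q = c + d*P.
def equilibrium(a, b, c, d):
    """Solve a - b*P = c + d*P for the market-clearing price and quantity."""
    p = (a - c) / (b + d)
    return p, a - b * p

# A housing market before and after building 100 extra units.
p0, q0 = equilibrium(a=1000, b=2, c=200, d=1)
p1, q1 = equilibrium(a=1000, b=2, c=300, d=1)
print(round(p0, 1), round(q0, 1))  # 266.7 466.7
print(round(p1, 1), round(q1, 1))  # 233.3 533.3

# New demand does show up (q1 > q0), but rents still fall (p1 < p0):
# induced demand offsets part of the supply increase, not all of it.
# The fallacy implicitly assumes demand is infinitely elastic (b -> inf),
# the only case in which the price wouldn't move at all.
```

The highway and widget examples have the same structure: the offset is real but partial, and its size is governed by the relevant elasticities.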

Comment by jimrandomh on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-08T22:59:22.186Z · score: 5 (4 votes) · LW · GW
What's the difference between motivated errors and lies?

They're implemented by very different cognitive algorithms, which differently constrain the sorts of falsehoods and strategies they can generate.

Motivated cognition is exclusively implemented in pre-conscious mechanisms: distortion of attention, distortion of intuition, selective forgetting. Direct lying, on the other hand, usually refers to lying which has System 2 involvement, which means a wider range of possible mistruths and a wider (and more destructive) range of supporting strategies.

For example: A motivated reasoner will throw out some of their data inappropriately, telling themself a plausible but false story about how that data didn't mean anything, but they'll never compose fake data from scratch. But a direct liar will do both, according to what they can get away with.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-07-04T20:46:52.061Z · score: 4 (2 votes) · LW · GW

I'm pretty uncertain how the arrangements actually work in practice, but one possible arrangement is: You have two organizations, one of which is a traditional pharmaceutical company with the patent for an untested drug, and one of which is a contract research organization. The pharma company pays the contract research organization to conduct a clinical trial, and reports the amount it paid as the cost of the trial. They have common knowledge of the chance of success, the probability distribution of future revenue for the drug, how much it costs to conduct the trial, and how much it costs to insure away the risks. So the amount the first company pays to the second is the cost of the trial, plus a share of the expected profit.
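To make the arithmetic concrete (made-up numbers, purely to illustrate the mechanism): suppose the trial costs $50M to run, the drug has a 25% chance of approval, and approval is worth $1B in revenue. The venture's expected profit is 0.25 × $1B − $50M = $200M. If the pharma company pays its affiliated contract research organization $150M, the reported "cost of the trial" triples, while $100M of expected profit quietly moves onto the CRO's books.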

Pharma companies making above-market returns are subject to political attack from angry patients, but contract research organizations aren't. So if you control both of these organizations, you would choose to allocate all of the profits to the second organization, so you can defend yourself from claims of gouging by pleading poverty.

Comment by jimrandomh on Crisis of Faith · 2019-07-04T17:34:27.145Z · score: 4 (2 votes) · LW · GW

Yep; in the time since this was written, the LW community has gone pretty heavily in the direction of "let's figure out how to reclaim the coordination and community benefits of religion separately from the weird belief stuff", and (imo) done pretty well at it.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-07-04T17:22:49.463Z · score: 16 (8 votes) · LW · GW

The discussion so far on cost disease seems pretty inadequate, and I think a key piece that's missing is the concept of Hollywood Accounting. Hollywood Accounting is what happens when you have something that's extremely profitable, but which has an incentive to not be profitable on paper. The traditional example, which inspired the name, is when a movie studio signs a contract with an actor to share a percentage of profits; in that case, the studio will create subsidiaries, pay all the profits to the subsidiaries, and then declare that the studio itself (which signed the profit-sharing agreement) has no profits to give.

In the public contracting sector, you have firms signing cost-plus contracts, which are similar; the contract requires that profits don't exceed a threshold, so they get converted into payments to de-facto-but-not-de-jure subsidiaries, favors, and other concealed forms. Sometimes this involves large dead-weight losses, but the losses are not the point, and are not the cause of the high price.

In medicine, there are occasionally articles which try to figure out where all the money is going in the US medical system; they tend to look at one piece, conclude that that piece isn't very profitable so it can't be responsible, and move on. I suspect this is what's going on with the cost of clinical trials, for example; they aren't any more expensive than they used to be, they just get allocated a share of the profits from R&D ventures that're highly profitable overall.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-07-04T17:09:37.876Z · score: 20 (7 votes) · LW · GW

Bullshit jobs are usually seen as an absence of optimization: firms don't get rid of their useless workers because that would require them to figure out who they are, and risk losing or demoralizing important people in the process. But alternatively, if bullshit jobs (and cover for bullshit jobs) are a favor to hand out, then they're more like a form of executive compensation: my useless underlings owe me, and I will get illegible favors from them in return.

What predictions does the bullshit-jobs-as-compensation model make, that differ from the bullshit-jobs-as-lack-of-optimization model?

Jimrandomh's Shortform

2019-07-04T17:06:32.665Z · score: 29 (4 votes)
Comment by jimrandomh on How/would you want to consume shortform posts? · 2019-07-04T17:04:56.615Z · score: 4 (2 votes) · LW · GW

The current, hacky solution to shortform is: you make a post named "[Name]'s Shortform Posts", and write comments on it.

We're planning to promote this to a first-class site feature; we're going to make some UI that auto-generates a post like that for you, and gives the comments on it visibility on a special shortform page and on the All Posts page.

Comment by jimrandomh on Causal Reality vs Social Reality · 2019-06-30T19:02:07.600Z · score: 15 (5 votes) · LW · GW

If someone is wrong, this should definitely be made legible, so that no one leaves believing the wrong thing. The problem is with the "obviously" part. Once the truth of the object-level question is settled, there is the secondary question of how much we should update our estimate of the competence of whoever made a mistake. I think we should by default try to be clear about the object-level question and object-level mistake, and by default glomarize about the secondary question.

I read Ruby as saying that we should by default glomarize about the secondary question, and also that we should be much more hesitant about assuming an object-level error we spot is real. I think this makes sense as a conversation norm, where clarification is fast, but is bad in a forum, where asking someone to clarify their bad argument frequently leads to a dropped thread and a confusing mess for anyone who comes across the conversation later.

Comment by jimrandomh on How to deal with a misleading conference talk about AI risk? · 2019-06-27T21:54:54.714Z · score: 4 (2 votes) · LW · GW

Moderators are discussing this with each other now. We do not have consensus on this.

Comment by jimrandomh on What is the evidence for productivity benefits of weightlifting? · 2019-06-19T19:44:59.119Z · score: 6 (5 votes) · LW · GW

I think this answer was good, but also feel like curating it (and skipping the team-discussion that usually goes with curation) was a mistake. This answer really needed, at a minimum, a formatting cleanup, before it was ready for curation. I tried to read it, and I just... can't. Too many fonts, too much inconsistent indentation. And I would've appreciated a chance to make the curation email work right (ie, make it include the actual answer), before this went out.

Comment by jimrandomh on Mistakes with Conservation of Expected Evidence · 2019-06-18T19:34:17.361Z · score: 17 (6 votes) · LW · GW

Promoted to curated. One of LessWrong's main goals is to advance the art of rationality, and spotting patterns in the ways we process and misprocess evidence is a central piece of that. I also appreciated the Bayesian grounding, the epistemic statuses, and the recapping and links to older work. I'm pretty sure most of us have made these errors before, and I expect that fitting them into a pattern will make them easier to recognize in the future.

Comment by jimrandomh on Recommendation Features on LessWrong · 2019-06-15T19:17:36.315Z · score: 5 (3 votes) · LW · GW

It's ambiguous whether to recommend the first unread post or the next post after the last read, and I suspect neither answer will satisfy everyone. You can at least click through to the sequence table of contents, and go from there, though.

Comment by jimrandomh on Recommendation Features on LessWrong · 2019-06-15T04:04:10.045Z · score: 8 (4 votes) · LW · GW

Is this adjusted by post date? Posts from before the relaunch are going to have much less karma, on average (and as user karma grows and the karma weight of upvotes grows with it, average karma will increase further). A post from last month with 50 karma, and a post from 2010 with 50 karma, are really not comparable…

This is one of a number of significant problems with using karma for this. My ideal system - which we probably won't do soon, because of the amount of effort involved - would be something like:

  • Periodically, users get a list of posts that they read over the past week, and are asked to pick their favorite and to update their votes
  • This is converted into pairwise comparisons and used to generate an Elo rating for each post
  • The recommender has a VOI factor to increase the visibility of posts where it doesn't have a precise enough estimate of the rating
  • We separately have trusted raters compare posts from a more random sampling, compute a separate set of ratings that way, and use it as a ground truth to set the tuning parameters and see how well it's working.

In this world, karma would still be displayed and updated in response to votes the same way it is now, to give people an estimate of visibility and reception and to get a quick initial estimate of quality, but it would be superseded as a measurement of post quality for older content.
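Here's a minimal sketch of the pairwise-comparisons-to-Elo step (my own illustration; the post names and data layout are hypothetical, and none of this is actual site code):

```python
from collections import defaultdict

K = 32  # standard Elo update step size

def expected(r_a, r_b):
    """Elo model: probability that post A wins a comparison against post B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(ratings, winner, loser):
    """Shift both ratings toward the observed comparison outcome."""
    e = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e)
    ratings[loser] -= K * (1 - e)

ratings = defaultdict(lambda: 1500.0)  # every post starts at 1500

# Each "favorite of the week" pick is treated as that post beating the
# other posts the user read that week.
weekly_picks = [("post_a", ["post_b", "post_c"]),
                ("post_b", ["post_c"])]
for favorite, others in weekly_picks:
    for other in others:
        update(ratings, favorite, other)

print(dict(ratings))  # post_a highest, post_c lowest
```

The VOI factor would additionally need an uncertainty estimate that plain Elo doesn't track; a Glicko-style rating deviation is one natural way to get it.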

Recommendation Features on LessWrong

2019-06-15T00:23:18.102Z · score: 61 (18 votes)

Welcome to LessWrong!

2019-06-14T19:42:26.128Z · score: 78 (30 votes)
Comment by jimrandomh on Does Bayes Beat Goodhart? · 2019-06-04T23:23:09.592Z · score: 2 (1 votes) · LW · GW

However, I think it is reasonable to at least add a calibration requirement: there should be no way to systematically correct estimates up or down as a function of the expected value.

Why is this important? If the thing with the highest score is always the best action to take, why does it matter if that score is an overestimate? Utility functions are fictional anyway right?

As a very high-level, first-pass approximation, I think the right way to think of this is as a sort of unit test; even if we can't directly see a reason why systematically incorrect estimates would cause problems in an AI design, this is an obvious enough desideratum that we should by default assume a system which breaks it is bad, unless we can prove otherwise.

Closer to the object level--yes, the highest-scoring action is the correct action to take, and if you model miscalibration as a single, monotonic function applied as the last step before deciding, then it can't change any decisions. But if miscalibration can affect any intermediate steps, then this doesn't hold. As a simple example: suppose the AI is deciding whether to pay to preserve its access to a category of options which it knows are highly subject to Regressional Goodhart.
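As a numeric illustration of the kind of miscalibration the quoted requirement rules out (my own sketch, not from the post), take the standard estimate-equals-value-plus-noise setup behind Regressional Goodhart:

```python
import numpy as np

rng = np.random.default_rng(0)

# True values and noisy estimates of them (estimate = value + noise).
values = rng.normal(size=100_000)
estimates = values + rng.normal(size=100_000)

# Calibration check: bin by estimated value and compare the mean
# estimate to the mean actual value within each bin.
bins = np.quantile(estimates, np.linspace(0, 1, 11))
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (estimates >= lo) & (estimates < hi)
    print(f"mean estimate {estimates[mask].mean():+.2f}, "
          f"mean actual {values[mask].mean():+.2f}")

# The high bins systematically overestimate and the low bins
# underestimate, so the estimates can be corrected as a function of
# their own value - exactly what the quoted requirement forbids.
```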

Comment by jimrandomh on Comment section from 05/19/2019 · 2019-05-20T20:31:29.692Z · score: 6 (3 votes) · LW · GW

I believe this is currently mostly manual (ie, Oli created a new post, did a database operation to move comments over, then posted a comment in the old place). Given that it seems to have worked out well in this case, if it comes up a few more times, we'll look into automating it (and making small details like old-post comment permalinks work).

Comment by jimrandomh on Feature Request: Self-imposed Time Restrictions · 2019-05-20T20:19:26.753Z · score: 6 (3 votes) · LW · GW

We (the LW team) are definitely thinking about this issue, and I at least strongly prefer that people use the site in ways that reflect decisions which they would endorse in retrospect; ie, reading things that are valuable to them, at times and in quantities that make sense, and not as a way to avoid other things that might be more important. I'm particularly thinking about this in the context of the upcoming Recommendations system, which recommends older content; that has the potential to be more of an unlimited time sink, in contrast to reading recent posts (which are limited in number) or reading sequences (which is more like reading a book, which people have existing adaptations around).

A big problem with naively implemented noprocrast/leechblock-style features at the site level is that they can backfire by shunting people into workarounds which make things worse. For example, if someone is procrastinating on their computer, noprocrast kicking in when they don't want to stop might make them start reading on their phone, creating bad habits around phone use. Cutting off access in the middle of reading a post (as opposed to between posts) is especially likely to do this; but enforcing a restriction only at load-time encourages opening lots of tabs, which is bad. And since people are likely to invest in setting personal rules around whatever mechanisms we build, there are switching costs if the first mechanism isn't quite right.

So: I definitely want us to have something in this space, and for it to be good. But it may take a while.

Comment by jimrandomh on Boo votes, Yay NPS · 2019-05-14T21:24:14.139Z · score: 15 (6 votes) · LW · GW

(I'm a member of the LW team, but this is an area where we still have a lot of uncertainty, so we don't necessarily agree internally and our thinking is likely to change.)

There are three proposed changes being bundled together here: (1) The guidance given about how to vote; (2) the granularity of the votes elicited; and (3) how votes are aggregated and presented to readers.

As you correctly observe, voting serves multiple purposes: it gives other readers information about what's worth their time to read, it gives readers information about what other people are reading, and it gives authors feedback about whether they did a good job. Sometimes these come apart; for example, if someone helpfully clears up a confusion that only one person had, then their comment should receive positive feedback, but isn't worth reading for most people.

These things are, in practice, pretty tightly correlated, especially when judged by voters who are only spending a little bit of time on each vote. And that seems like the root issue: disentangling "how I feel about this post" from "is this post worth reading" requires more time and distance than currently goes into voting. One idea I'm considering is retrospective voting: periodically show people a list of things they've read in the past (say, the past week), and ask them to rate those things then. This would be less noisy, because it elicits comparisons rather than ups/downs in isolation, and it might also change people's votes in a good way by giving them some distance.

As for switching from the current up/down/super-up/super-down to 0-100% range voting, the main effect seems to be that it creates a distinction between implicit and explicit neutral votes. That is, currently if people feel something is meh, they don't vote, but in the proposed system they would instead give it a middling score. The advantage of this is that you can aggregate scores in a way that measures quality without it being as conflated with attention; right now if a post/comment has been read more times, it gets more votes, and we don't have a good way of distinguishing this from a post/comment with fewer reads but more votes per reader.

But I'm skeptical of whether people will actually cast explicit neutral votes, in most cases; that would require them to break out of skimming, slow down, and make a lot more explicit decisions than they currently do. A more promising direction might be to collect more granular data on scroll positions and timings, so that we can estimate the number of people who read or skimmed a comment without voting, and use that as an input into scoring.

The third thing is aggregation--how we convert a set of votes into a sort-order to guide readers to the best stuff--which is the aspect of the current system I'm least satisfied with. That includes things like karma-weighting of votes, and also the handling of polarizing posts. In the long term, I'm hoping to generate a dataset of pairwise comparisons by trusted users, which we can use as a ground truth to test algorithms against. But polarizing posts will always be difficult to score, because the votes reflect an underlying disagreement between humans, and the answer to whether a post should be shown may depend on things the voters haven't evaluated, like the truth of the post's claims.

Comment by jimrandomh on Coherent decisions imply consistent utilities · 2019-05-14T02:32:20.616Z · score: 6 (4 votes) · LW · GW

While we have a long-term plan of importing Arbital's content into LessWrong (after LessWrong acquires some wiki-like features to make it make sense), we have not taken responsibility for the maintenance of Arbital itself.

Comment by jimrandomh on Rob B's Shortform Feed · 2019-05-11T00:31:56.540Z · score: 6 (3 votes) · LW · GW

It's optimized on a *very* different axis, but there's the Rationality Cardinality card database.

Comment by jimrandomh on Tales From the American Medical System · 2019-05-10T21:25:33.869Z · score: 11 (3 votes) · LW · GW
But I’ve seen patients try to get out of this. They’ll wait until the last possible moment, then send an email saying “I am out of my life-saving medication, you must refill now!” If I send a message saying we should have an appointment on the books before I fill it, they’ll pretend they didn’t see that and just resend “I need my life-saving medication now!”

Insulin is different from the sorts of drugs you prescribe. With most medications, if someone runs out, they start suffering health consequences; it's very unpleasant and incurs a bit of lasting harm, but they don't die. Being without access to insulin is about as serious as being without access to water. If you send a message saying there should be an appointment on the books before renewing the prescription, there's a real risk that the delay lands them in the emergency room, or kills them.

Comment by jimrandomh on Tales From the American Medical System · 2019-05-10T20:35:19.607Z · score: 13 (4 votes) · LW · GW
(but what would be the effects of making potentially dangerous medications freely available?)

It's already OTC in Canada, and nothing bad has happened as a result.

Comment by jimrandomh on Tales From the American Medical System · 2019-05-10T02:03:50.473Z · score: 18 (7 votes) · LW · GW
What happens if you let patients buy refills without a prescription? Would they consume too much of it?

No. Prescriptions don't specify precise dosages, because those are adjusted much too frequently for direct doctor involvement.

Would there be any sort of risk of them selling the excess to others?

No. There is no secondary market for insulin, because primary-market insulin is easily available at the price of a plane ticket, and improperly stored insulin is unsafe and indistinguishable. Furthermore, no one is trying to restrict access (other than as a way to extract money).

Is there a medical reason why the doctor might not prescribe more insulin if he examines the patient and finds something new?

No. Type 1 diabetics continue to require insulin 100% of the time, no exceptions.

On that note, I wonder if the doctor is coming from a place of worrying about covering his ass and getting sued if he prescribes more insulin without the exam.

In fact, by refusing to prescribe, this doctor created a considerable risk. If the person in the story hadn't managed to get a prescription, and had died, a malpractice lawsuit would probably succeed.

Comment by jimrandomh on Tales From the American Medical System · 2019-05-10T01:57:42.916Z · score: 22 (11 votes) · LW · GW
Alternative view: Your friend has a deadly disease that requires regular doctor visits and prescriptions. It sucks. It's not fair, but it requires him to take some level of responsibility for his own care. He seems to have failed to do so by not keeping his appointments and letting his prescriptions run out.

Type 1 diabetic here. Regular doctor visits are actually pretty useless to us, other than refilling the prescriptions. Every six months is customary, but excessive. Every three months is scamming money out of insurers.

Regarding the price of medicine in Canada: I believe the fixed low prices in Canada are being subsidized by your friend and all Americans.

It's cheap literally everywhere except the United States. It's not a matter of subsidized capital costs, because those were all paid off more than a decade ago, and prices were cheaper then.

Measurement every 3 months in patients with type 1 diabetes determines whether glycemic targets have been reached and maintained.

Measuring HbA1c can be done cheaply with an over-the-counter test kit. It does not require a doctor visit. Also, testing HbA1c that frequently isn't important and isn't done by most diabetics.

Comment by jimrandomh on How long can people be productive in [time period]? · 2019-05-07T06:38:44.782Z · score: 12 (7 votes) · LW · GW

This question seems like the tip of an iceberg of complexity. The workers' age, physical health and motivation probably matter. The contents of their non-work lives probably matter. In the case of programming, slightly degraded performance might mean enough bugs to be net negative, or it might just mean doing the same thing slightly slower. Caffeine-use patterns probably matter; use of other stimulants probably matters, too. In my own life, I've seen my personal productivity range from 80 hours/week to 0 hours/week over multi-month spans.

Comment by jimrandomh on How long can people be productive in [time period]? · 2019-05-07T06:19:47.513Z · score: 7 (4 votes) · LW · GW

But note that RescueTime's data only covers time spent on a computer, which is only a subset of productive work time; there are also meetings, work on paper, and things like that.

Comment by jimrandomh on Hierarchy and wings · 2019-05-06T21:59:54.593Z · score: 14 (4 votes) · LW · GW
Could you give a reference for the Hierarchy Game? A quick google search did not turn up anything that sounded like game theory.

I think that was coined specifically for this post, and doesn't (yet?) have a corresponding formalism. I would be interested in seeing an attempt to formalize this, but there's enough subtlety that I'd worry about confusion arising from mismatches between the idea and the formalism.

On a separate note, this post is IMO really toeing the line in terms of what's too political for LW.

The way we currently handle this is with the Frontpage vs Personal Blog distinction; things that meet our frontpage guidelines, we promote to frontpage, everything else we leave on Personal Blog. We chose to front-page this, but I agree that it's borderline.

Comment by jimrandomh on Hierarchy and wings · 2019-05-06T18:55:46.709Z · score: 24 (7 votes) · LW · GW
The "left wing" is the natural complement to this strategy: a political "big tent" made up of all the noncentral groups.
...
As before, both sides are winning this civil war, at the expense of the people least interested in expropriation.

While this appears to be true of conventional politics, it's worth noting that a very similar structure appears in less-expropriative contexts. For example, some technology markets naturally organize into a market leader vs. an alliance of everyone else; eg Microsoft (right) vs open source (left), or Apple (right) vs Android (left). In these contexts, overt force is replaced with soft power, and there is enough value created for everything to be positive-sum. Notice that people refer to an "Apple tax", and at the height of Microsoft's power referred to a "Microsoft tax".

Comment by jimrandomh on Self-confirming predictions can be arbitrarily bad · 2019-05-04T18:06:04.494Z · score: 6 (4 votes) · LW · GW

It seems that what we want is usually going to be a counterfactual prediction: what would happen if the AI gave no output, or gave some boring default prediction. This is computationally simpler, but philosophically trickier. It also requires that we be the sort of agents who won't act too strangely if we find ourselves in the counterfactual world instead of the real one.

Comment by jimrandomh on Never Leave Your Room · 2019-04-30T02:12:51.343Z · score: 7 (2 votes) · LW · GW

Since this (now ten years old) post was written, psychology underwent a replication crisis, and priming has become something of a poster child for "things that sounded cool but failed to replicate".

Semi-relatedly, we on the Less Wrong team have been playing with a recommendation engine which suggests old posts, and it recommended this to me. Since this post didn't age well, I'm setting the "exclude from recommendations" flag on it.

Comment by jimrandomh on Buying Value, not Price · 2019-04-30T00:54:30.890Z · score: 34 (15 votes) · LW · GW

A quick reductio for the "three times" framing is to notice that if, having already decided to buy a phone, you were to convert $250 from your bank account into phone-purchasing credit, then the prices (originally $750 and $250) change to $500 and $0, and the question changes to whether the more expensive phone is infinity times better. That version of the question makes no sense, so dividing the two prices by each other doesn't make sense either.

Comment by jimrandomh on Asymmetric Justice · 2019-04-25T19:10:39.443Z · score: 20 (5 votes) · LW · GW
It’s not too hard to see why people would benefit from joining a majority expropriating from a blameworthy individual. But why would they join a majority transferring resources to a praiseworthy one? So, being singled out is much more bad than good here.

This makes intuitive sense, but it doesn't seem to be borne out by modern experience; when coalitions attack blameworthy individuals these days, they don't usually get any resources out of it. The resources just end up destroyed, or taken by a government that wasn't part of the coalition.

Comment by jimrandomh on The Simple Solow Model of Software Engineering · 2019-04-11T00:41:38.365Z · score: 5 (3 votes) · LW · GW

As a working software engineer with experience working at a variety of scales and levels of technical debt, this mostly feels wrong to me.

One of the biggest factors in the software world is a slowly rising tide of infrastructure, which makes things cheaper to build today than they would have been to build a decade ago. Projects tend to be tied to the languages and libraries that were common at the time of their creation, which means that even if those libraries are stable and haven't created a maintenance burden, they're still disadvantaged relative to new projects which get the benefit of more modern tools.

Combined with frequent demand shocks, you get something that doesn't look much like an equilibrium.

The maintainability of software also tends to be, in large part, about talent recruiting. Decade-old popular video games frequently have their maintenance handled by volunteers; a firm which wants an engineer to maintain its decade-old accounting software will have to pay a premium to get one of average quality, and probably can't get an engineer of top quality at any price.

Comment by jimrandomh on Subagents, akrasia, and coherence in humans · 2019-04-09T22:30:07.436Z · score: 2 (1 votes) · LW · GW

Note: Due to a bug, if you were subscribed to email notifications for curated posts, the curation email for this post came from Alignment Forum instead of LessWrong. If you're viewing this post on AF, to see the comments, view it on LessWrong instead. (This is a LessWrong post, not an AF post, but the two sites share a database and have one-directional auto-crossposting from AF to LW.)

Comment by jimrandomh on User GPT2 is Banned · 2019-04-03T20:02:16.948Z · score: 4 (2 votes) · LW · GW

It was a dumb typo on my part. Edited.

User GPT2 is Banned

2019-04-02T06:00:21.075Z · score: 64 (18 votes)
Comment by jimrandomh on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T20:28:27.869Z · score: 3 (2 votes) · LW · GW

Geez. Is that all you have to say for yourself!?

Comment by jimrandomh on [deleted post] 2019-04-01T20:26:03.363Z

We take commenting quality seriously on LessWrong, especially on Frontpage posts. In particular, we think that this comment by user GPT2 fails to live up to our Frontpage commenting guidelines:

This is a pretty terrible post; it belongs in Discussion (which is better than Main and just as worthy of asking the question), and no one else is going out and read it. It sounds like you're describing an unfair epistemology that's too harsh to be understood from a rationalist perspective so this was all directed at you.

Since user GPT2 seems to be quite prolific, we have implemented a setting to hide comments by GPT2, which can be accessed from the settings page when you are logged in.

User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines

2019-04-01T20:23:11.705Z · score: 50 (18 votes)
Comment by jimrandomh on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-04-01T19:53:53.581Z · score: 6 (3 votes) · LW · GW

There are some applications for fake text, but they're seasonal.

Comment by jimrandomh on [deleted post] 2019-04-01T18:43:06.107Z

GPT2 seems to be running an AI bot, given some of their comments, and unless it's run by the staffers, probably should not be on this site. Happy April first!

Comment by jimrandomh on [deleted post] 2019-04-01T18:41:18.946Z

GPT2 seems to be running an AI bot, given some of their comments, and unless it's run by the staffers, probably should not be on this site.

Comment by jimrandomh on [deleted post] 2019-04-01T18:41:14.645Z

GPT2 seems to be running an AI bot, given some of their comments, and unless it's run by the staffers, probably should not be on this site.

Comment by jimrandomh on [deleted post] 2019-04-01T18:40:20.507Z

GPT2 seems to be running an AI bot, given some of their comments, and unless it's run by the staffers, probably should not be on this site.

Comment by jimrandomh on [deleted post] 2019-04-01T18:35:15.808Z

Whoever set up that bot is brilliant, and I applaud the prank.

but

please make it stop. :)

Comment by jimrandomh on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-01T03:25:42.660Z · score: 4 (2 votes) · LW · GW

Modafinil helps somewhat.

Comment by jimrandomh on Please use real names, especially for Alignment Forum? · 2019-03-29T05:14:32.923Z · score: 16 (5 votes) · LW · GW

Relatedly: If you want people to know who you are, it helps to put a few words in the bio field of your profile. When users mouse over your name on Less Wrong, they'll see it.

Comment by jimrandomh on AI prediction case study 3: Searle's Chinese room · 2019-03-28T20:57:31.636Z · score: 4 (2 votes) · LW · GW

Welcome to LessWrong! Generally speaking, we strongly prefer comments that address arguments directly, rather than talking about people and qualifications. That said, this is quite an old post, so it's probably too late to get much further discussion on this particular paper.

Comment by jimrandomh on Can Bayes theorem represent infinite confusion? · 2019-03-22T19:55:06.463Z · score: 10 (2 votes) · LW · GW

The latter; it could be anything, and by saying the probabilities were 1.0 and 0.0, the original problem description left out the information that would determine it.

Comment by jimrandomh on Can Bayes theorem represent infinite confusion? · 2019-03-22T19:02:11.583Z · score: 15 (6 votes) · LW · GW

If you do out the algebra, you get that P(H|E) involves dividing zero by zero:

P(H|E) = P(E|H)·P(H) / (P(E|H)·P(H) + P(E|¬H)·P(¬H)) = (0 · 1) / (0 · 1 + P(E|¬H) · 0) = 0/0

There are two ways to look at this at a higher level. The first is that the algebra doesn't really apply in the first place, because this is a domain error: 0 and 1 aren't probabilities, in the same way that the string "hello" and the color blue aren't.

The second way to look at it is that when we say P(H) = 1 and P(E|H) = 0, what we really meant was that P(H) = 1 − ε and P(E|H) = δ; that is, they aren't precisely one and zero, but differ from one and zero by an unspecified, very small amount. (Infinitesimals are like infinities; ε is arbitrarily-close-to-zero in the same sense that an infinity is arbitrarily-large.) Under this interpretation, we don't have a contradiction, but we do have an underspecified problem, since we need the ratio δ/ε and haven't specified it.
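A small numeric sketch of that second interpretation (my own illustration; the choice of P(E|¬H) = 1/2 is arbitrary), showing that the limiting posterior depends entirely on the unspecified ratio:

```python
from fractions import Fraction

def posterior(eps, delta, p_e_not_h=Fraction(1, 2)):
    """P(H|E) with P(H) = 1 - eps and P(E|H) = delta, via Bayes' theorem."""
    p_h = 1 - eps
    return (delta * p_h) / (delta * p_h + p_e_not_h * eps)

# Shrink eps and delta toward zero at different relative rates: the
# posterior converges to completely different answers depending on the
# ratio delta/eps.
for eps, delta in [(Fraction(1, 10**6), Fraction(1, 10**6)),   # ratio 1
                   (Fraction(1, 10**6), Fraction(1, 10**9)),   # ratio 1/1000
                   (Fraction(1, 10**9), Fraction(1, 10**6))]:  # ratio 1000
    print(float(posterior(eps, delta)))   # ~0.67, ~0.002, ~0.9995
```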

Comment by jimrandomh on [deleted post] 2019-03-16T00:06:33.254Z

The Jewish liturgy about divine judgment can be quite different. Every week, at the beginning of the Sabbath, Jews around the world sing Kabbalat Shabbat, a collection of psalms focused on the idea that the world is rejoicing because God is finally coming to judge it. From Psalm 96:

Say among the nations that the Lord reigns: the world shall so be established that it shall not be moved: he shall judge the peoples with uprightnesses. Let the heavens rejoice, and let the earth be glad; let the sea roar, and its fullness. Let the field be joyful, and all that is in it: then shall all the trees of the wood sing for joy. Before the Lord: for he comes, for he comes to judge the land: he shall judge the world with justice, and the peoples in his faithfulness.

From Psalm 98:

Melodize to the Lord with harp; with harp, and melodic voice. With the trumpets, and the voice of the horn, shout before the king, the Lord. Let the sea roar, and its fullness; the world, and those who dwell in it. Rivers shall clap their hands; together, the mountains shall sing for joy. Before the Lord: for he comes, for he comes to judge the land: he shall judge the world with justice, and the peoples in his faithfulness.

In one of these outlooks, humans can't behave well enough to stand up to pure justice, so we should put off the day of judgment for as long as we can, and seek protection. In the other, the world is groaning under the accumulated weight of hypocrisy and sin, and only the reconciliation of accounts can free us; it is in constant flux due to ever-shifting stories, which can only be stabilized by a true judge.

We can't reconcile accounts if that means punishing all bad behavior according to the current hypocritical regime's schedule of punishments. But a true reconciliation also means adjusting the punishments to a level where we'd be happy, not sad, to see them applied consistently. (Sometimes the correct punishment is nothing beyond the accounting itself.)

In worlds where hypocrisy is normal, honesty is punished, since the most honest people will tend to reveal deprecatory information others might conceal, and be punished for it. We get less of what we punish. But honesty isn't just a weird quirk - it's the only way to get to the stars.

"The first principle is that you must not fool yourself, and you are the easiest person to fool." - Richard Feynman

LW Update 2019-03-12 -- Bugfixes, small features

2019-03-12T21:56:40.109Z · score: 17 (2 votes)

Karma-Change Notifications

2019-03-02T02:52:58.291Z · score: 95 (25 votes)

Two Small Experiments on GPT-2

2019-02-21T02:59:16.199Z · score: 55 (21 votes)

How does OpenAI's language model affect our AI timeline estimates?

2019-02-15T03:11:51.779Z · score: 51 (16 votes)

Introducing the AI Alignment Forum (FAQ)

2018-10-29T21:07:54.494Z · score: 88 (31 votes)

Boston-area Less Wrong meetup

2018-05-16T22:00:48.446Z · score: 4 (1 votes)

Welcome to Cambridge/Boston Less Wrong

2018-03-14T01:53:37.699Z · score: 4 (2 votes)

Meetup : Cambridge, MA Sunday meetup: Lightning Talks

2017-05-20T21:10:26.587Z · score: 0 (1 votes)

Meetup : Cambridge/Boston Less Wrong: Planning 2017

2016-12-29T22:43:55.164Z · score: 0 (1 votes)

Meetup : Boston Secular Solstice

2016-11-30T04:54:55.035Z · score: 1 (2 votes)

Meetup : Cambridge Less Wrong: Tutoring Wheels

2016-01-17T05:23:05.303Z · score: 1 (2 votes)

Meetup : MIT/Boston Secular Solstice

2015-12-03T01:14:02.376Z · score: 1 (2 votes)

Meetup : Cambridge, MA Sunday meetup: The Contrarian Positions Game

2015-11-13T18:08:19.666Z · score: 1 (2 votes)

Rationality Cardinality

2015-10-03T15:54:03.793Z · score: 21 (22 votes)

An Idea For Corrigible, Recursively Improving Math Oracles

2015-07-20T03:35:11.000Z · score: 5 (5 votes)

Research Priorities for Artificial Intelligence: An Open Letter

2015-01-11T19:52:19.313Z · score: 23 (24 votes)

Petrov Day is September 26

2014-09-18T02:55:19.303Z · score: 24 (18 votes)

Three Parables of Microeconomics

2014-05-09T18:18:23.666Z · score: 25 (35 votes)

Meetup : LW/Methods of Rationality meetup

2013-10-15T04:02:11.785Z · score: 0 (1 votes)

Cambridge Meetup: Talk by Eliezer Yudkowsky: Recursion in rational agents

2013-10-15T04:02:05.988Z · score: 7 (8 votes)

Meetup : Cambridge, MA Meetup

2013-09-28T18:38:54.910Z · score: 4 (5 votes)

Charity Effectiveness and Third-World Economics

2013-06-12T15:50:22.330Z · score: 7 (12 votes)

Meetup : Cambridge First-Sunday Meetup

2013-03-01T17:28:01.249Z · score: 3 (4 votes)

Meetup : Cambridge, MA third-Sunday meetup

2013-02-11T23:48:58.812Z · score: 3 (4 votes)

Meetup : Cambridge First-Sunday Meetup

2013-01-31T20:37:32.207Z · score: 1 (2 votes)

Meetup : Cambridge, MA third-Sunday meetup

2013-01-14T11:36:48.262Z · score: 3 (4 votes)

Meetup : Cambridge, MA first-Sunday meetup

2012-11-30T16:34:04.249Z · score: 1 (2 votes)

Meetup : Cambridge, MA third-Sundays meetup

2012-11-16T18:00:25.436Z · score: 3 (4 votes)

Meetup : Cambridge, MA Sunday meetup

2012-11-02T17:08:17.011Z · score: 1 (2 votes)

Less Wrong Polls in Comments

2012-09-19T16:19:36.221Z · score: 79 (82 votes)

Meetup : Cambridge, MA Meetup

2012-07-22T15:05:10.642Z · score: 2 (3 votes)

Meetup : Cambridge, MA first-Sundays meetup

2012-03-30T17:55:25.558Z · score: 0 (3 votes)

Professional Patients: Fraud that ruins studies

2012-01-05T00:20:55.708Z · score: 16 (25 votes)

[LINK] Question Templates

2011-12-23T19:54:22.907Z · score: 1 (1 votes)

I started a blog: Concept Space Cartography

2011-12-16T21:06:28.888Z · score: 6 (9 votes)

Meetup : Cambridge (MA) Saturday meetup

2011-10-20T03:54:28.892Z · score: 2 (3 votes)

Another Mechanism for the Placebo Effect?

2011-10-05T01:55:11.751Z · score: 8 (22 votes)

Meetup : Cambridge, MA Sunday meetup

2011-10-05T01:37:06.937Z · score: 1 (2 votes)

Meetup : Cambridge (MA) third-Sundays meetup

2011-07-12T23:33:01.304Z · score: 0 (1 votes)

Draft of a Suggested Reading Order for Less Wrong

2011-07-08T01:40:06.828Z · score: 26 (29 votes)

Meetup : Cambridge Massachusetts meetup

2011-06-29T16:57:15.314Z · score: 1 (2 votes)

Meetup : Cambridge Massachusetts meetup

2011-06-22T15:26:03.828Z · score: 2 (3 votes)

The Present State of Bitcoin

2011-06-21T20:17:13.131Z · score: 7 (12 votes)

Safety Culture and the Marginal Effect of a Dollar

2011-06-09T03:59:28.731Z · score: 23 (36 votes)

Cambridge Less Wrong Group Planning Meetup, Tuesday 14 June 7pm

2011-06-08T03:41:41.375Z · score: 1 (2 votes)