Posts

Jimrandomh's Shortform 2019-07-04T17:06:32.665Z · score: 29 (4 votes)
Recommendation Features on LessWrong 2019-06-15T00:23:18.102Z · score: 62 (19 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 100 (54 votes)
User GPT2 is Banned 2019-04-02T06:00:21.075Z · score: 64 (18 votes)
User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines 2019-04-01T20:23:11.705Z · score: 50 (18 votes)
LW Update 2019-03-12 -- Bugfixes, small features 2019-03-12T21:56:40.109Z · score: 17 (2 votes)
Karma-Change Notifications 2019-03-02T02:52:58.291Z · score: 96 (26 votes)
Two Small Experiments on GPT-2 2019-02-21T02:59:16.199Z · score: 56 (22 votes)
How does OpenAI's language model affect our AI timeline estimates? 2019-02-15T03:11:51.779Z · score: 51 (16 votes)
Introducing the AI Alignment Forum (FAQ) 2018-10-29T21:07:54.494Z · score: 91 (34 votes)
Boston-area Less Wrong meetup 2018-05-16T22:00:48.446Z · score: 4 (1 votes)
Welcome to Cambridge/Boston Less Wrong 2018-03-14T01:53:37.699Z · score: 4 (2 votes)
Meetup : Cambridge, MA Sunday meetup: Lightning Talks 2017-05-20T21:10:26.587Z · score: 0 (1 votes)
Meetup : Cambridge/Boston Less Wrong: Planning 2017 2016-12-29T22:43:55.164Z · score: 0 (1 votes)
Meetup : Boston Secular Solstice 2016-11-30T04:54:55.035Z · score: 1 (2 votes)
Meetup : Cambridge Less Wrong: Tutoring Wheels 2016-01-17T05:23:05.303Z · score: 1 (2 votes)
Meetup : MIT/Boston Secular Solstice 2015-12-03T01:14:02.376Z · score: 1 (2 votes)
Meetup : Cambridge, MA Sunday meetup: The Contrarian Positions Game 2015-11-13T18:08:19.666Z · score: 1 (2 votes)
Rationality Cardinality 2015-10-03T15:54:03.793Z · score: 21 (22 votes)
An Idea For Corrigible, Recursively Improving Math Oracles 2015-07-20T03:35:11.000Z · score: 5 (5 votes)
Research Priorities for Artificial Intelligence: An Open Letter 2015-01-11T19:52:19.313Z · score: 23 (24 votes)
Petrov Day is September 26 2014-09-18T02:55:19.303Z · score: 24 (18 votes)
Three Parables of Microeconomics 2014-05-09T18:18:23.666Z · score: 25 (35 votes)
Meetup : LW/Methods of Rationality meetup 2013-10-15T04:02:11.785Z · score: 0 (1 votes)
Cambridge Meetup: Talk by Eliezer Yudkowsky: Recursion in rational agents 2013-10-15T04:02:05.988Z · score: 7 (8 votes)
Meetup : Cambridge, MA Meetup 2013-09-28T18:38:54.910Z · score: 4 (5 votes)
Charity Effectiveness and Third-World Economics 2013-06-12T15:50:22.330Z · score: 7 (12 votes)
Meetup : Cambridge First-Sunday Meetup 2013-03-01T17:28:01.249Z · score: 3 (4 votes)
Meetup : Cambridge, MA third-Sunday meetup 2013-02-11T23:48:58.812Z · score: 3 (4 votes)
Meetup : Cambridge First-Sunday Meetup 2013-01-31T20:37:32.207Z · score: 1 (2 votes)
Meetup : Cambridge, MA third-Sunday meetup 2013-01-14T11:36:48.262Z · score: 3 (4 votes)
Meetup : Cambridge, MA first-Sunday meetup 2012-11-30T16:34:04.249Z · score: 1 (2 votes)
Meetup : Cambridge, MA third-Sundays meetup 2012-11-16T18:00:25.436Z · score: 3 (4 votes)
Meetup : Cambridge, MA Sunday meetup 2012-11-02T17:08:17.011Z · score: 1 (2 votes)
Less Wrong Polls in Comments 2012-09-19T16:19:36.221Z · score: 79 (82 votes)
Meetup : Cambridge, MA Meetup 2012-07-22T15:05:10.642Z · score: 2 (3 votes)
Meetup : Cambridge, MA first-Sundays meetup 2012-03-30T17:55:25.558Z · score: 0 (3 votes)
Professional Patients: Fraud that ruins studies 2012-01-05T00:20:55.708Z · score: 16 (25 votes)
[LINK] Question Templates 2011-12-23T19:54:22.907Z · score: 1 (1 votes)
I started a blog: Concept Space Cartography 2011-12-16T21:06:28.888Z · score: 6 (9 votes)
Meetup : Cambridge (MA) Saturday meetup 2011-10-20T03:54:28.892Z · score: 2 (3 votes)
Another Mechanism for the Placebo Effect? 2011-10-05T01:55:11.751Z · score: 8 (22 votes)
Meetup : Cambridge, MA Sunday meetup 2011-10-05T01:37:06.937Z · score: 1 (2 votes)
Meetup : Cambridge (MA) third-Sundays meetup 2011-07-12T23:33:01.304Z · score: 0 (1 votes)
Draft of a Suggested Reading Order for Less Wrong 2011-07-08T01:40:06.828Z · score: 29 (30 votes)
Meetup : Cambridge Massachusetts meetup 2011-06-29T16:57:15.314Z · score: 1 (2 votes)
Meetup : Cambridge Massachusetts meetup 2011-06-22T15:26:03.828Z · score: 2 (3 votes)
The Present State of Bitcoin 2011-06-21T20:17:13.131Z · score: 7 (12 votes)
Safety Culture and the Marginal Effect of a Dollar 2011-06-09T03:59:28.731Z · score: 23 (36 votes)
Cambridge Less Wrong Group Planning Meetup, Tuesday 14 June 7pm 2011-06-08T03:41:41.375Z · score: 1 (2 votes)

Comments

Comment by jimrandomh on Voting Phase of 2018 LW Review (Deadline: Sun 19th Jan) · 2020-01-16T19:22:38.147Z · score: 10 (5 votes) · LW · GW

The list is now shuffled (as a tiebreak after sorting by your own vote). The shuffle is done once per user, so each user should see the posts in a random order, but it'll be the same order each time you revisit it. This change went live around the 13th.
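
For readers curious how a stable per-user shuffle like that can work: one common approach (a sketch only, not the actual LessWrong implementation; the field names are made up) is to seed a PRNG with the user's id, shuffle, and then do a stable sort on the user's own vote.

```python
import hashlib
import random

def tiebreak_order(posts, user_id):
    """Sort posts by the user's own vote, breaking ties with a shuffle
    that is random across users but stable for any one user."""
    # Seed a PRNG deterministically from the user id, so the same user
    # always gets the same "random" tiebreak order on every page load.
    seed = int.from_bytes(hashlib.sha256(user_id.encode()).digest()[:8], "big")
    rng = random.Random(seed)

    shuffled = posts[:]
    rng.shuffle(shuffled)  # stable-per-user random order

    # Python's sort is stable, so ties keep the shuffled order.
    return sorted(shuffled, key=lambda p: p["my_vote"], reverse=True)

# Example: posts the user voted on equally keep a per-user random order.
posts = [{"title": "A", "my_vote": 1}, {"title": "B", "my_vote": 1}, {"title": "C", "my_vote": 4}]
print([p["title"] for p in tiebreak_order(posts, "user-123")])
```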

Comment by jimrandomh on Repossessing Degrees · 2020-01-14T02:22:11.239Z · score: 4 (3 votes) · LW · GW

The two of us could sign a contract where I pay you $100 and you agree not to disclose what you ate for breakfast this morning, and agree not to disclose the existence of the contract.

The relevant difference between this and an NDA is that this has the restriction on speech coming from a statute, rather than a contract between nongovernmental entities.

In practice I think this is unlikely to matter much for most people. If you're applying for a job, and the job asks for your resume, they're not going to go poking around dusty corners of the web looking to see if you had some other version with different contents.

Actually, I expect this will be discovered with nearly 100% reliability by ordinary due diligence on hires. Bankruptcies are necessarily very public and there are APIs for finding out whether someone has declared bankruptcy, so you just check whether each candidate has declared bankruptcy, and if so, you take the resume-URL they gave you and check that URL on archive.org just prior to their bankruptcy.
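
The archive.org half of that check is easy to automate; the Wayback Machine has a public availability endpoint that returns the snapshot closest to a given date. A rough sketch (the bankruptcy-records lookup itself is not included, since no specific court-records API is named here):

```python
import requests

def resume_snapshot_near(resume_url, date_yyyymmdd):
    """Find the Wayback Machine snapshot of a resume URL closest to a
    given date (e.g. just before a bankruptcy filing)."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": resume_url, "timestamp": date_yyyymmdd},
        timeout=30,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

# Hypothetical usage: compare the archived resume from around the bankruptcy
# date against the version the candidate submitted.
print(resume_snapshot_near("example.com/resume.pdf", "20190101"))
```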

Comment by jimrandomh on Repossessing Degrees · 2020-01-13T22:02:22.336Z · score: 6 (3 votes) · LW · GW

This doesn't work, legally or practically speaking, because it's trying to restrict speech-acts between parties that both want the information to be shared. You can't legally stop people from truthfully disclosing that they have a repossessed degree, because of the First Amendment. You can't practically stop them either, because they will have left many archived traces of that information (for example, copies of their resume in the Internet Archive), they have an incentive to leave those traces in place, and removing those traces is too difficult and involves too many third parties to be a workable legal requirement.

Comment by jimrandomh on George's Shortform · 2020-01-08T22:28:04.635Z · score: 3 (3 votes) · LW · GW

But I think that my disagreement with this first class of alarmist is not very fundamental; we can probably agree on a few things such as:

1. In principle, the kind of intelligence needed for AGI is a solved problem, all that we are doing now is trying to optimize for various cases.

2. The increase in computational resources is enough to get us closer and closer to AGI even without any more research effort being allocated to the subject.

This is definitely not something you will find agreement on. Thinking that this is something that alarmists would agree with you on suggests you are using a different definition of AGI than they are, and may have other significant misunderstandings of what they're saying.

Comment by jimrandomh on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-08T20:23:40.627Z · score: 6 (3 votes) · LW · GW

"Sealioning" is attempting to participate in "reasoned discourse" in a way that is insensitive to the appropriateness of the setting and to the buy-in of the other party. (Importantly, not "costs" of reasoned discourse; they are polite in some ways, like "oh sure, we can take an hour break for breakfast".) People who have especially low buy-in to reasoned discourse use the word to paint the person asking for clarification as the oppressor, and themselves the victim. Importantly, they view attempting to have reasoned discourse as oppression. Thus it blends "not tracking buy-in" and "caring about reasoning over feelings" in a way that makes them challenging to unblend.

The part of sealioning that's about setting can't really apply to comments on LW. In the comic that originated the term, a sealion intrudes on a private conversation, follows the participants around, and trespasses in their house; but the LessWrong frontpage is a public space for public dialogue, so a LessWrong comment can't have that problem no matter what it is.

So, conversational dynamics are worth talking about, and I do think there's something in this space worth reifying with a term, preferably in a more abstract setting.

Comment by jimrandomh on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-08T20:22:22.812Z · score: 11 (3 votes) · LW · GW

There was a mention of moderation regarding the term sealioning, so I'm addressing that. (We're not yet addressing the thread-as-a-whole, but may do so later).

In general, it's important to be able to give names to things. I looked into how the term sealioning seems to be defined and used on the internet-as-a-whole. It seems to have a lot of baggage, including (if used to refer to comments on LessWrong) false connotations about what sort of place LessWrong is and what behavior is appropriate on LessWrong. However, this baggage was not common knowledge, and I see little reason to think those connotations were known or intended by Duncan. So, this looks to me like a good-faith proposal of terminology, but the terminology itself seems bad.

Comment by jimrandomh on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T01:56:03.929Z · score: 8 (3 votes) · LW · GW

I fixed it.

Comment by jimrandomh on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-04T01:07:27.766Z · score: 10 (4 votes) · LW · GW

Moderator hat on.

In general, I don't think we're going to have a moderator response time of ~4 hours (which is about how long Duncan's comment had been up when you wrote yours). However, seeing a call for moderator action, we are going to be reviewing this thread and discussing what if anything to do here.

I've spent the last few hours catching up on the comments here. While Vaniver and Habryka have been participating in this thread and are site moderators, this seems like a case where moderation decisions should be made by people with more distance.

Comment by jimrandomh on [deleted post] 2019-12-31T22:44:55.454Z

I reason (as is standard) that the only real way that my machine would be compromised is if someone has physical access; and if that's the case there's absolutely nothing you can do about it.

This is incorrect. The main ways computers get compromised are as part of broadly-targeted attacks using open ports, trojanized downloads, and vulnerabilities in the web browser, email client and other network-facing software. For physical-access attacks, the main one is that the computer gets physically stolen, powered off in the process, and never returned, in which case having encrypted the hard disk matters a lot.

Comment by jimrandomh on Programmers Should Plan For Lower Pay · 2019-12-31T17:51:25.836Z · score: 7 (3 votes) · LW · GW

How? Is the model that, as the field matures, programmers will get more fungible? Because it actually seems like programmers have gotten less fungible over time (as both projects and tech stacks have increased in size) rather than more.

Comment by jimrandomh on Programmers Should Plan For Lower Pay · 2019-12-30T23:55:37.275Z · score: 7 (3 votes) · LW · GW

If a janitor quits, a new janitor can be hired the next day with minimal disruption. If a programmer quits, it will be half a year before a newly hired replacement can have acquired the context, they may bring expertise about your business to a competitor, and there's a significant risk that the replacement hire will be bad. Projects and businesses do sometimes fail because their programmers quit. This means that even if there were an oversupply of programmers, it would still be worth paying them well in order to increase retention.

Comment by jimrandomh on Programmers Should Plan For Lower Pay · 2019-12-29T20:14:39.837Z · score: 15 (6 votes) · LW · GW

The barrier to entry is higher than you think, it just takes the form of a talent requirement rather than a training requirement.

Comment by jimrandomh on Perfect Competition · 2019-12-29T20:03:27.355Z · score: 23 (6 votes) · LW · GW

So, to summarize: Systems are a mix of a steady state plus exogenous shocks (disasters and whalefalls and sideways shifts); and within each system and subsystem, there is some balance of competition (Moloch) and slack (Elua). Steady state favors those which are most competitive, while shocks favor those with the most slack. More frequent, larger, and more varied types of shocks favor Elua, while less frequent, smaller, and more predictable types of shocks favor Moloch. Since the world's rate of change and the weirdness of its changes have recently increased, slack is currently favored; under a narrative of an increasing rate of technological progress, slack will continue to be favored more over time.

There's a big caveat, though, which is that in the long run, only those types of slack which can be deployed to respond to shocks count. This means that interconvertibility of stocks and flows is very bad, and that anything which can be sacrificed permanently during a temporary shock is very much at risk.

Comment by jimrandomh on Programmers Should Plan For Lower Pay · 2019-12-29T18:56:50.327Z · score: 29 (7 votes) · LW · GW

Things are pretty good now, and seem to have gotten even better since Dan's 2015 post, but something could change. Given how poorly we understand this, and the wide range of ways the future might be different, I think we should treat collapse as a real possibility:

Poor understanding is in the map, not the territory. I started to write a comment arguing that this is incorrect, that the factors which cause programmers to be well paid are straightforward and aren't going to go away. But instead of that, how about a bet.

Here's the US Bureau of Labor Statistics series for 2nd quartile nominal weekly earnings of software developers (applications and systems software): https://fred.stlouisfed.org/series/LEU0254530600A. They didn't seem to have mean, median, or other quartiles. There are other series for different subsets of programmers, like web developers; I chose this one arbitrarily. The series is not inflation adjusted.

I will bet up to $1k at 4:1 odds that the 2030 value in this series will be greater than or equal to the 2018 value, which was 1864. So $1k of my dollars against $250 of other peoples' dollars.

(I'll accept bets in order of their comment timestamps, up to a maximum of $1k of my dollars. Bet is only confirmed if I reply to accept it. Winner must remember to email to claim bet winnings. Bet is cancelled if the US Bureau of Labor Statistics discontinues the series or doesn't publish a 2030 number for any reason. Non-anonymous counterparties only.)
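
If anyone wants to settle this mechanically in 2030, the series can be pulled programmatically; here's a sketch that assumes FRED's CSV export endpoint (fredgraph.csv) and its column naming stay as they are today:

```python
import csv
import io
import requests

SERIES = "LEU0254530600A"   # 2nd-quartile weekly earnings, software developers
BASELINE_2018 = 1864.0      # the 2018 value named in the bet

def annual_value(year):
    """Fetch the series as CSV from FRED and return the value for a given year."""
    resp = requests.get(
        "https://fred.stlouisfed.org/graph/fredgraph.csv",
        params={"id": SERIES},
        timeout=30,
    )
    resp.raise_for_status()
    for row in csv.DictReader(io.StringIO(resp.text)):
        date = row.get("DATE") or row.get("observation_date", "")
        if date.startswith(str(year)) and row.get(SERIES, ".") != ".":
            return float(row[SERIES])
    return None  # not yet published, or series discontinued

value_2030 = annual_value(2030)
if value_2030 is None:
    print("No 2030 value published: bet cancelled per its terms.")
else:
    print("jimrandomh wins" if value_2030 >= BASELINE_2018 else "counterparties win")
```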

Comment by jimrandomh on We need to revisit AI rewriting its source code · 2019-12-27T20:36:05.334Z · score: 16 (5 votes) · LW · GW

In practice, self-modification is a special case of arbitrary code execution; it's just running a program that looks like yourself, with some changes. That means there are two routes to get there: either communicate with the internet (to, eg, pay Amazon EC2 to run the modified program), or use a security vulnerability. In the context of computer security, preventing arbitrary code execution is an extremely well-studied problem. Unfortunately, the outcome of all that study is that it's really hard, and new vulnerabilities are discovered every year, with little prospect of that ever stopping.

Comment by jimrandomh on ialdabaoth is banned · 2019-12-13T20:06:13.673Z · score: 13 (5 votes) · LW · GW

I participated in the LW team discussion about whether to ban, but not in the details of this announcement. I agree that, in your hypothetical, we probably wouldn'tve banned. In another hypothetical where he were accused of sex crimes but everyone was fine with his epistemic tactics, we probably wouldn'tve banned either.

Comment by jimrandomh on Affordance Widths · 2019-12-06T02:54:05.151Z · score: 47 (15 votes) · LW · GW

This is an unusually difficult post to review. In an ideal world, we'd like to be able to review things as they are, without reference to who the author is. In many settings, reviews are done anonymously (with the author's name stricken off), for just this reason. This post puts that to the test: the author is a pariah. And ordinarily I would say, that's irrelevant, we can just read the post and evaluate it on its own merits.

Other comments have mentioned that there could be PR concerns, ie, that making the author's existence and participation on LessWrong salient is embarrassing. I don't think this is an appropriate basis for judging the post, and would prefer to judge it based on its content.

The problem is, I think this post may contain a subtle trap, and that understanding its author, and what he was trying to do with this post, might actually be key to understanding what the trap is.

Ialdabaoth had a metaproblem, which was this: he had conspicuous problems, in a community full of people who would try to start conversations in which they helped analyze his problems for him; but if those people truly understood him, they might turn on him. So he created narratives to explain why those conversations were so confusing, why he wouldn't follow the advice, and why the people trying to help him were actually wronging him, and therefore indebted. This post is one such narrative. Here's another.

The core idea of this post is that spectrum-direction advice is structured as a pair of failure modes, which may have either a variably-sized gap or a variably-sized overlap, depending on the person. This is straightforwardly true. But I think that the next inferential step the post takes after that, about how people do and should respond to that, is wrong. Charles, David, and Edgar should all be rejecting the frame in which they're tuning {B}, and instead be looking for third options which make {B} irrelevant. This is easy to overlook when {B} is a generic placeholder rather than a specific behavior, but becomes clear when applied to specific examples. Edgar, in particular, is described as doing a probably-catastrophically-wrong thing, presented as though it were the obvious reaction to circumstances.

I suspect that, if this concept were widespread and salient, especially presented in its current form, the main effect would be to help people rationalize their way out of doing the obvious things to solve their problems, and to explain their confusion when other people seem to not be doing the obvious things. I think there's a next-inferential-step post that I would be happy with, but this one isn't it.

Comment by jimrandomh on What makes people intellectually active? · 2019-12-02T19:58:07.017Z · score: 4 (2 votes) · LW · GW

The answers on this question contain a lot of good analysis, from an angle and at a level of meta that otherwise seem somewhat neglected.

Comment by jimrandomh on Mandatory Obsessions · 2019-12-02T19:53:01.670Z · score: 4 (2 votes) · LW · GW

This post crystallizes an important problem, which seems to be hijacking a lot of people lately and has turned several people I know into scrupulosity wrecks. I would like to see it built upon, because this problem is unlikely to go away any time soon.

Comment by jimrandomh on The Pavlov Strategy · 2019-12-02T19:47:38.542Z · score: 4 (2 votes) · LW · GW

This post bridges two domains, game theory and reinforcement learning, which I previously thought of as mostly separate; and it caused a pretty big shift in my model of how intelligence-in-general works, since this is much simpler than my previous simplest model of how reinforcement learning would do game theory.

Comment by jimrandomh on Prediction Markets: When Do They Work? · 2019-12-02T19:42:29.994Z · score: 5 (3 votes) · LW · GW

There's a set of recurring prediction-market-flavored project ideas which people keep having, and a fair amount of resources going into attempts. This post gives a model which predicts which ones have failed, which will fail, and why, so that we (collectively) can either start looking for clever workarounds or stop wasting resources on the attempts.

Comment by jimrandomh on CO2 Stripper Postmortem Thoughts · 2019-12-01T00:34:25.126Z · score: 18 (6 votes) · LW · GW

Back of the envelope: a person exhales about as much carbon as they eat, and a plant removes carbon from the air only by increasing its size, so to remove one person's CO2 exhalations, you would need to grow as much plant matter as they ate. That's not impossible, but at that point you're looking at something more like a greenhouse than like a grow room.
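
Putting very rough numbers on that back-of-envelope (every figure below is an order-of-magnitude assumption, not a measurement):

```python
# Order-of-magnitude sketch of the "plants to absorb one person's CO2" estimate.
co2_exhaled_kg_per_day = 1.0        # assume roughly 1 kg CO2 exhaled per person per day
carbon_fraction_of_co2 = 12 / 44    # mass fraction of carbon in CO2
carbon_kg_per_day = co2_exhaled_kg_per_day * carbon_fraction_of_co2

carbon_fraction_of_dry_plant = 0.5  # assume dry plant matter is ~50% carbon
dry_matter_fraction = 0.15          # assume fresh plants are ~15% dry matter (mostly water)

dry_plant_kg_per_day = carbon_kg_per_day / carbon_fraction_of_dry_plant
fresh_plant_kg_per_day = dry_plant_kg_per_day / dry_matter_fraction

print(f"Carbon exhaled: ~{carbon_kg_per_day:.2f} kg/day")
print(f"Dry plant growth needed: ~{dry_plant_kg_per_day:.2f} kg/day")
print(f"Fresh plant growth needed: ~{fresh_plant_kg_per_day:.1f} kg/day")
# Roughly 0.27 kg of carbon, 0.55 kg of dry matter, and 3.6 kg of fresh growth
# per person per day: greenhouse-scale rather than grow-room-scale.
```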

Comment by jimrandomh on Market Rate Food Is Luxury Food · 2019-11-23T23:19:33.474Z · score: 8 (4 votes) · LW · GW

It doesn't; this post is satire.

Comment by jimrandomh on Attach Receipts to Credit Card Transactions · 2019-11-12T19:14:04.346Z · score: 2 (1 votes) · LW · GW

Right, they would certainly do it if you paid them enough (and lowering the fee is a form of payment); this is a reason why the price would be higher.

Comment by jimrandomh on Ban the London Mulligan · 2019-11-12T19:10:27.512Z · score: 7 (5 votes) · LW · GW

It sounds like the problem is that mulligans are necessary to ensure there's a game to play, which depends mostly on having a reasonable number of lands, but that they have bad side-effects which are mostly the result of giving too much control over which nonland cards you have. So I propose the following mulligan rule:

  • On your first mulligan, draw five, then choose one: draw a card, or search your library for a basic land card, reveal it, and put it into your hand.
  • On your second mulligan, draw three, then choose as before, twice.
  • On your third mulligan, draw one, then choose as before three times.
  • There is no fourth mulligan.

This makes mulligans much better at ensuring you can play, and much worse at ensuring you can find a particular card or combo that you're looking for.

(For complexity reasons, this rule would work better if there were a keyword for "either draw or fetch a land", and if it were introduced in advance.)
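
To check the "better at ensuring you can play" claim, one could Monte-Carlo the proposed rule. A minimal sketch, with an assumed 24-land 60-card deck, an assumed draw-or-fetch decision heuristic, and an assumed (crude) definition of a playable hand:

```python
import random

DECK_SIZE, LANDS = 60, 24   # assumed: a typical 60-card deck running 24 lands

def proposed_mulligan_hand(mull_count, rng):
    """Deal a hand under the proposed rule. The draw-or-fetch decision uses a
    simple stand-in heuristic (an assumption): fetch a basic land while holding
    fewer than two lands, otherwise draw."""
    draws, choices = {1: (5, 1), 2: (3, 2), 3: (1, 3)}[mull_count]
    deck = [True] * LANDS + [False] * (DECK_SIZE - LANDS)   # True = land
    rng.shuffle(deck)
    hand = [deck.pop() for _ in range(draws)]
    for _ in range(choices):
        if sum(hand) < 2 and True in deck:
            deck.remove(True)        # fetch a basic land from the library
            hand.append(True)
        else:
            hand.append(deck.pop())  # draw a card
    return hand

def playable(hand):
    """Crude stand-in for 'there's a game to play': two to four lands."""
    return 2 <= sum(hand) <= 4

rng = random.Random(0)
trials = 20_000
for mull in (1, 2, 3):
    hits = sum(playable(proposed_mulligan_hand(mull, rng)) for _ in range(trials))
    print(f"Mulligan #{mull}: ~{hits / trials:.0%} of hands playable")
```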

Comment by jimrandomh on Attach Receipts to Credit Card Transactions · 2019-11-12T17:19:41.292Z · score: 2 (1 votes) · LW · GW

Stores don't want to do this for the same reason they make prices that change frequently, are one cent off from round numbers, and are in the least legible font that is legally permissible. They want paying attention to prices to be inconvenient, because paying attention decreases spending and shifts that spending towards lower margin items.

Comment by jimrandomh on How do you assess the quality / reliability of a scientific study? · 2019-11-02T06:25:58.214Z · score: 28 (11 votes) · LW · GW

1. For health-related research, one of the main failure modes I've observed when people I know try to do this, is tunnel vision and a lack of priors about what's common and relevant. Reading raw research papers before you've read broad-overview stuff will make this worse, so read UpToDate first and Wikipedia second. If you must read raw research papers, find them with PubMed, but do this only rarely and only with a specific question in mind.

2. Before looking at the study itself, check how you got there. If you arrived via a search engine query that asked a question or posed a topic without presupposing an answer, that's good; if there are multiple studies that say different things, you've sampled one of them at random. If you arrived via a query that asked for confirmation of a hypothesis, that's bad; if there are multiple studies that say different things, you've sampled in a way that was biased towards that hypothesis. If you arrived via a news article, that's the worst; if there are multiple studies that say different things, you've sampled in a way that was biased away from reality.

3. Don't bother with studies in rodents, animals smaller than rodents, cell cultures, or undergraduate psychology students. These studies are done in great numbers because they are cheap, but they have low average quality. The fact that they are so numerous makes the search-sampling problems in (2) more severe.

4. Think about what a sensible endpoint or metric would be before you look at what endpoint/metric was reported. If the reported metric is not the metric you expected, this will often be because the relevant metric was terrible. Classic examples are papers about battery technologies reporting power rather than capacity, and biomedical papers reporting effects on biomarkers rather than symptoms or mortality.

5. Correctly controlling for confounders is much, much harder than people typically give it credit for. Adding extra things to the list of things controlled for can create spurious correlations, and study authors are not incentivized to handle this correctly. The practical upshot is that observational studies only count if the effect size is very large.
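
One mechanism behind point 5 is conditioning on a collider (a variable caused by both the exposure and the outcome), which manufactures a correlation where none exists. A toy simulation of that failure mode, with made-up variable names and effect sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

exercise = rng.normal(size=n)            # exposure
genetics = rng.normal(size=n)            # independent cause of the outcome
health = genetics + rng.normal(size=n)   # outcome; exercise has zero true effect here

# Hospitalization is a collider: caused both by poor health and by exercise
# (injuries). "Controlling for" it by restricting to hospitalized patients
# manufactures a spurious negative exercise-health association.
hospitalized = (0.7 * exercise - 0.7 * health + rng.normal(size=n)) > 1.0

print("Full population corr(exercise, health):",
      round(float(np.corrcoef(exercise, health)[0, 1]), 3))
print("Hospitalized-only corr(exercise, health):",
      round(float(np.corrcoef(exercise[hospitalized], health[hospitalized])[0, 1]), 3))
```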

Comment by jimrandomh on [Site Update] Subscriptions, Bookmarks, & Pingbacks · 2019-10-30T18:59:09.806Z · score: 3 (2 votes) · LW · GW

(We forgot to run one of the migration scripts)

Comment by jimrandomh on [Site Update] Subscriptions, Bookmarks, & Pingbacks · 2019-10-30T02:17:07.527Z · score: 10 (5 votes) · LW · GW

Same cadence, but separating them does make sense and I might add that option in the future.

Comment by jimrandomh on Climate technology primer (1/3): basics · 2019-10-29T03:07:07.098Z · score: 9 (3 votes) · LW · GW

Nuclear power is typically located close to power demands, ie cities, because of the costs and losses in transporting power over long distances. This also limits the size/scale of nuclear power plants, since if you build larger than the demands of a city, you have to transport the power over a long distance to find more sinks.

On the other hand, suppose a city were built specifically for the purpose of hosting nuclear power, carbon-capture, and CO2-to-fuel plants. Such a city might be able to have significantly cheaper nuclear power, since being far away from existing population centers would lower safety and regulatory costs, and concentrating production in one place might enable new economies of scale.

It seems like there are two worlds we could be in, here. In one world, nuclear power right now is like the space launch industry of a decade ago: very expensive, but expensive because of institutional failure and a need for R&D, rather than fundamental physics. In the other world, some component of power plants (steam turbines, for example) is already optimized close to reasonable limits, so an order-of-magnitude improvement is not possible. Does anyone with engineering knowledge of this space have a sense of which is likely?

Comment by jimrandomh on Climate technology primer (1/3): basics · 2019-10-29T02:11:38.054Z · score: 7 (4 votes) · LW · GW

Something that confuses me. This (and also other) discussions of afforestation focus on planting trees (converting non-forest biomes into forest). But it seems like trees do a reasonably good job of spreading themselves; why not instead go to existing forests, cut down and bury any trees that are past their peak growth phase, and let the remaining trees plant the replacements? How much does the cutting and burial itself cost? (I see an estimate of $50/t for burying crop residues, which seems like it would be similar.)

Comment by jimrandomh on Climate technology primer (1/3): basics · 2019-10-29T01:37:06.391Z · score: 13 (3 votes) · LW · GW

I expect to have some questions and ideas, but I'm still working my way through this, as I suspect are others. I really appreciate how in depth this is!

Comment by jimrandomh on Jacy Reese (born Jacy Anthis)? · 2019-10-27T03:56:23.153Z · score: 2 (1 votes) · LW · GW

Under Wikipedia's rules, yes.

Comment by jimrandomh on Endogenous Epinephrine for Anaphylaxis? · 2019-10-20T19:23:12.093Z · score: 13 (4 votes) · LW · GW

I expect that knowing you're having anaphylaxis without a solution is already reasonably close to the upper end of psychological stress, and you can't add that much more. The reason the epinephrine concentrations are so much higher in cardiac arrest patients is not because cardiac arrest is psychologically stressful, it's because epinephrine release is triggered by hypoxia.

Comment by jimrandomh on Noticing Frame Differences · 2019-10-05T02:30:23.798Z · score: 4 (2 votes) · LW · GW

Curated. I think the meta-frame (frame of looking at frames) is key to figuring out a lot of important outstanding questions. In particular, some ideas are hard to parse or generate in some frames and easy to parse or generate in others. Most people have frame-blind-spots, and lose access to some ideas that way. There also seem to be groups within the rationality community that are feeling alienated, because too many of their conversations suffer from frame-mismatch problems.

Comment by jimrandomh on Follow-Up to Petrov Day, 2019 · 2019-09-28T18:58:54.774Z · score: 27 (11 votes) · LW · GW

Lizardman's Constant is an observation seen in polls of unfiltered groups of people, but the people who were given the launch codes were selected for trustworthiness.

Comment by jimrandomh on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T02:56:53.227Z · score: 42 (14 votes) · LW · GW

(This thread is our collective reenactment of the conversations about nuclear safety that happened during the cold war.)

Comment by jimrandomh on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T01:27:54.304Z · score: 7 (4 votes) · LW · GW




?


!


Comment by jimrandomh on Sayan's Braindump · 2019-09-19T01:23:15.432Z · score: 4 (3 votes) · LW · GW

  • Multiple large monitors, for programming.
  • Waterproof paper in the shower, for collecting thoughts and making a morning todo list.
  • Email filters and Priority Inbox, to prevent spurious interruptions while keeping enough trust that urgent things will generate notifications, so that I don't feel compelled to check too often.
  • USB batteries for recharging phones: one to carry around, one at each charging spot for quick-swapping.

Comment by jimrandomh on Focus · 2019-09-16T20:07:29.706Z · score: 7 (4 votes) · LW · GW

Yep, one of us edited it to fix the link. Added a GitHub issue for dealing with relative links in RSS in general: https://github.com/LessWrong2/Lesswrong2/issues/2434 .

Comment by jimrandomh on Raemon's Scratchpad · 2019-09-14T05:26:16.569Z · score: 4 (2 votes) · LW · GW

Note that this would be a very non-idiomatic way to use jQuery. More typical architectures don't do client-side templating; they do server-side rendering and client-side incremental mutation.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-09-13T22:24:50.632Z · score: 22 (4 votes) · LW · GW

I'm kinda confused about the relation between cryptography people and security mindset. Looking at the major cryptographic algorithm classes (hashing, symmetric-key, asymmetric-key), it seems pretty obvious that the correct standard algorithm in each class is probably a compound algorithm -- hash by xor'ing the results of several highly-dissimilar hash functions, etc, so that a mathematical advance which breaks one algorithm doesn't break the overall security of the system. But I don't see anyone doing this in practice, and also don't see signs of a debate on the topic. That makes me think that, to the extent they have security mindset, it's either being defeated by political processes in the translation to practice, or it's weirdly compartmentalized and not engaged with any practical reality or outside views.
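
For concreteness, the kind of compound construction described above might look like the sketch below (illustrating the idea only; combiner design has its own literature, and e.g. concatenating digests has better-studied collision-resistance guarantees than XOR):

```python
import hashlib

def compound_hash(data: bytes) -> bytes:
    """XOR the digests of two structurally dissimilar hash functions
    (SHA-256 is Merkle-Damgard, SHA3-256 is a sponge), so that a
    mathematical break of one doesn't by itself break the output."""
    a = hashlib.sha256(data).digest()
    b = hashlib.sha3_256(data).digest()
    return bytes(x ^ y for x, y in zip(a, b))

print(compound_hash(b"hello world").hex())
```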

Comment by jimrandomh on Bíos brakhús · 2019-09-12T18:13:03.208Z · score: 4 (3 votes) · LW · GW

In my experience, the motion that seems to prevent mental crowding-out is intervening on the timing of my thinking: if I force myself to spend longer on a narrow question/topic/idea than is comfortable, eg with a timer, I'll eventually run out of cached thoughts and spot things I would have otherwise missed.

Comment by jimrandomh on NaiveTortoise's Short Form Feed · 2019-09-12T02:01:17.861Z · score: 2 (1 votes) · LW · GW

By generativity do you mean "within-domain" generativity?

Not exactly, because Carmack has worked in more than one domain (albeit not as successfully; Armadillo Aerospace never made orbit.)

On those dimensions, it seems entirely fair to compare across topics and assert that Pearl was solving more significant and more difficult problem(s) than Carmack

Agree on significance, disagree on difficulty.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-09-12T01:19:07.010Z · score: 22 (6 votes) · LW · GW

Eliezer has written about the notion of security mindset, and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.

An1lam's recent shortform post talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that this is because engineer-types get to actually write software that might have security holes, and get the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don't have security mindset, at least of the type Eliezer described.

My hypothesis is that to acquire security mindset, you have to:

  • Practice optimizing from a red team/attacker perspective;
  • Practice optimizing from a defender perspective; and
  • Practice modeling the interplay between those two perspectives.

So a software engineer can acquire security mindset because they practice writing software which they don't want to have vulnerabilities, they practice searching for vulnerabilities (usually as an auditor simulating an attacker rather than as an actual attacker, but the cognitive algorithm is the same), and they practice going meta when they're designing the architecture of new projects. This explains why security mindset is very common among experienced senior engineers (who have done each of the three many times), and rare among junior engineers (who haven't yet). It explains how Eliezer can have security mindset: he alternates between roleplaying a future AI-architect trying to design AI control/alignment mechanisms, roleplaying a future misaligned-AI trying to optimize around them, and going meta on everything-in-general. It also predicts that junior AI scientists won't have this security mindset, and probably won't acquire it except by following a similar cognitive trajectory.

Which raises an interesting question: how much does security mindset generalize between domains? Ie, if you put Theo de Raadt onto a hypothetical future AI team, would he successfully apply the same security mindset there as he does to general computer security?

Comment by jimrandomh on G Gordon Worley III's Shortform · 2019-09-12T00:20:15.389Z · score: 11 (5 votes) · LW · GW

Outside observer takeaway: There's a bunch of sniping and fighting here, but if I ignore all the fighting and look at only the ideas, what we have is that Gordon presented an idea, Duncan presented counterarguments, and Gordon declined to address the counterarguments. Posting on shortform doesn't come with an obligation to follow up and defend things; it's meant to be a place where tentative and early stage ideas can be thrown around, so that part is fine. But I did come away believing the originally presented idea is probably wrong.

(Some of the meta-level fighting seemed not-fine, but that's for another comment.)

Comment by jimrandomh on ike's Shortform · 2019-09-10T01:39:50.959Z · score: 3 (2 votes) · LW · GW

Yes, it implies that. The exact level of fidelity required is less straightforward; it's clear that a perfect simulation must have qualia/consciousness, but small imperfections make the argument not hold, so to determine whether an imperfect simulation is conscious we'd have to grapple with the even-harder problem of neuroscience.

Comment by jimrandomh on Eli's shortform feed · 2019-09-10T01:34:00.749Z · score: 14 (6 votes) · LW · GW

In There’s No Fire Alarm for Artificial General Intelligence Eliezer argues:

A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, you know you won’t lose face if you proceed to exit the building.

If I have a predetermined set of tests, this could serve as a fire alarm, but only if you've successfully built a consensus that it is one. This is hard, and the consensus would need to be quite strong. To avoid ambiguity, the test itself would need to be demonstrably resistant to being clever Hans'ed. Otherwise it would be just another milestone.

Comment by jimrandomh on NaiveTortoise's Short Form Feed · 2019-09-10T01:05:42.962Z · score: 4 (3 votes) · LW · GW

I think the engineer mindset is more strongly represented here than you think, but that the nature of nonspecialist online discussion warps things away from the engineer mindset and towards the scientist mindset. Both types of people are present, but the engineer-mindset people tend not to put that part of themselves forward here.

The problem with getting down into the details is that there are many areas with messy details to get into, and it's hard to appreciate the messy details of an area you haven't spent enough time in. So deep dives in narrow topics wind up looking more like engineer-mindset, while shallow passes over wide areas wind up looking more like scientist-mindset. LessWrong posts can't assume much background, which limits their depth.

I would be happy to see more deep-dives; a lightly edited transcript of John Carmack wouldn't be a prototypical LessWrong post, but it would be a good one. But such posts are necessarily going to exclude a lot of readers, and LessWrong isn't necessarily going to be competitive with posting in more topic-specialized places.