Posts

Jimrandomh's Shortform 2019-07-04T17:06:32.665Z · score: 29 (4 votes)
Recommendation Features on LessWrong 2019-06-15T00:23:18.102Z · score: 62 (19 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 94 (48 votes)
User GPT2 is Banned 2019-04-02T06:00:21.075Z · score: 64 (18 votes)
User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines 2019-04-01T20:23:11.705Z · score: 50 (18 votes)
LW Update 2019-03-12 -- Bugfixes, small features 2019-03-12T21:56:40.109Z · score: 17 (2 votes)
Karma-Change Notifications 2019-03-02T02:52:58.291Z · score: 96 (26 votes)
Two Small Experiments on GPT-2 2019-02-21T02:59:16.199Z · score: 56 (22 votes)
How does OpenAI's language model affect our AI timeline estimates? 2019-02-15T03:11:51.779Z · score: 51 (16 votes)
Introducing the AI Alignment Forum (FAQ) 2018-10-29T21:07:54.494Z · score: 91 (34 votes)
Boston-area Less Wrong meetup 2018-05-16T22:00:48.446Z · score: 4 (1 votes)
Welcome to Cambridge/Boston Less Wrong 2018-03-14T01:53:37.699Z · score: 4 (2 votes)
Meetup : Cambridge, MA Sunday meetup: Lightning Talks 2017-05-20T21:10:26.587Z · score: 0 (1 votes)
Meetup : Cambridge/Boston Less Wrong: Planning 2017 2016-12-29T22:43:55.164Z · score: 0 (1 votes)
Meetup : Boston Secular Solstice 2016-11-30T04:54:55.035Z · score: 1 (2 votes)
Meetup : Cambridge Less Wrong: Tutoring Wheels 2016-01-17T05:23:05.303Z · score: 1 (2 votes)
Meetup : MIT/Boston Secular Solstice 2015-12-03T01:14:02.376Z · score: 1 (2 votes)
Meetup : Cambridge, MA Sunday meetup: The Contrarian Positions Game 2015-11-13T18:08:19.666Z · score: 1 (2 votes)
Rationality Cardinality 2015-10-03T15:54:03.793Z · score: 21 (22 votes)
An Idea For Corrigible, Recursively Improving Math Oracles 2015-07-20T03:35:11.000Z · score: 5 (5 votes)
Research Priorities for Artificial Intelligence: An Open Letter 2015-01-11T19:52:19.313Z · score: 23 (24 votes)
Petrov Day is September 26 2014-09-18T02:55:19.303Z · score: 24 (18 votes)
Three Parables of Microeconomics 2014-05-09T18:18:23.666Z · score: 25 (35 votes)
Meetup : LW/Methods of Rationality meetup 2013-10-15T04:02:11.785Z · score: 0 (1 votes)
Cambridge Meetup: Talk by Eliezer Yudkowsky: Recursion in rational agents 2013-10-15T04:02:05.988Z · score: 7 (8 votes)
Meetup : Cambridge, MA Meetup 2013-09-28T18:38:54.910Z · score: 4 (5 votes)
Charity Effectiveness and Third-World Economics 2013-06-12T15:50:22.330Z · score: 7 (12 votes)
Meetup : Cambridge First-Sunday Meetup 2013-03-01T17:28:01.249Z · score: 3 (4 votes)
Meetup : Cambridge, MA third-Sunday meetup 2013-02-11T23:48:58.812Z · score: 3 (4 votes)
Meetup : Cambridge First-Sunday Meetup 2013-01-31T20:37:32.207Z · score: 1 (2 votes)
Meetup : Cambridge, MA third-Sunday meetup 2013-01-14T11:36:48.262Z · score: 3 (4 votes)
Meetup : Cambridge, MA first-Sunday meetup 2012-11-30T16:34:04.249Z · score: 1 (2 votes)
Meetup : Cambridge, MA third-Sundays meetup 2012-11-16T18:00:25.436Z · score: 3 (4 votes)
Meetup : Cambridge, MA Sunday meetup 2012-11-02T17:08:17.011Z · score: 1 (2 votes)
Less Wrong Polls in Comments 2012-09-19T16:19:36.221Z · score: 79 (82 votes)
Meetup : Cambridge, MA Meetup 2012-07-22T15:05:10.642Z · score: 2 (3 votes)
Meetup : Cambridge, MA first-Sundays meetup 2012-03-30T17:55:25.558Z · score: 0 (3 votes)
Professional Patients: Fraud that ruins studies 2012-01-05T00:20:55.708Z · score: 16 (25 votes)
[LINK] Question Templates 2011-12-23T19:54:22.907Z · score: 1 (1 votes)
I started a blog: Concept Space Cartography 2011-12-16T21:06:28.888Z · score: 6 (9 votes)
Meetup : Cambridge (MA) Saturday meetup 2011-10-20T03:54:28.892Z · score: 2 (3 votes)
Another Mechanism for the Placebo Effect? 2011-10-05T01:55:11.751Z · score: 8 (22 votes)
Meetup : Cambridge, MA Sunday meetup 2011-10-05T01:37:06.937Z · score: 1 (2 votes)
Meetup : Cambridge (MA) third-Sundays meetup 2011-07-12T23:33:01.304Z · score: 0 (1 votes)
Draft of a Suggested Reading Order for Less Wrong 2011-07-08T01:40:06.828Z · score: 29 (30 votes)
Meetup : Cambridge Massachusetts meetup 2011-06-29T16:57:15.314Z · score: 1 (2 votes)
Meetup : Cambridge Massachusetts meetup 2011-06-22T15:26:03.828Z · score: 2 (3 votes)
The Present State of Bitcoin 2011-06-21T20:17:13.131Z · score: 7 (12 votes)
Safety Culture and the Marginal Effect of a Dollar 2011-06-09T03:59:28.731Z · score: 23 (36 votes)
Cambridge Less Wrong Group Planning Meetup, Tuesday 14 June 7pm 2011-06-08T03:41:41.375Z · score: 1 (2 votes)

Comments

Comment by jimrandomh on Attach Receipts to Credit Card Transactions · 2019-11-12T19:14:04.346Z · score: 2 (1 votes) · LW · GW

Right, they would certainly do it if you paid them enough (and lowering the fee is a form of payment); this is a reason why the price would be higher.

Comment by jimrandomh on Ban the London Mulligan · 2019-11-12T19:10:27.512Z · score: 7 (5 votes) · LW · GW

It sounds like the problem is that mulligans are necessary to ensure there's a game to play, which depends mostly on having a reasonable number of lands, but that they have bad side-effects which are mostly the result of giving too much control over which nonland cards you have. So I propose the following mulligan rule:

  • On your first mulligan, draw five, then choose one: draw a card, or search your library for a basic land card, reveal it, and put it into your hand.
  • On your second mulligan, draw three, then choose as before, twice.
  • On your third mulligan, draw one, then choose as before three times.
  • There is no fourth mulligan.

This makes mulligans much better at ensuring you can play, and much worse at ensuring you can find a particular card or combo that you're looking for.

(For complexity reasons, this rule would work better if there were a keyword for "either draw or fetch a land", and if it were introduced in advance.)
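As a toy sketch (not part of the proposal), the proposed rule can be simulated to sanity-check hand sizes; the card names and the land-fetch decision heuristic below are my own invented assumptions:

```python
import random

def mulligan_hand(deck, mulligans):
    """Toy simulation of the proposed rule: after m mulligans, draw
    7 - 2*m cards, then make m choices of either 'draw a card' or
    'search your library for a basic land'."""
    assert 0 <= mulligans <= 3, "there is no fourth mulligan"
    deck = deck[:]
    random.shuffle(deck)
    hand = [deck.pop() for _ in range(7 - 2 * mulligans)]
    for _ in range(mulligans):
        # Invented heuristic: fetch a land while short on lands, else draw.
        if hand.count("Land") < 2 and "Land" in deck:
            deck.remove("Land")      # search library for a basic land
            hand.append("Land")
        else:
            hand.append(deck.pop())  # or just draw a card
    return hand
```

Note that the net hand size after m mulligans works out to 7 - m, matching the pre-London "Paris" rule, while the choices steer the hand towards playability rather than towards any particular card.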

Comment by jimrandomh on Attach Receipts to Credit Card Transactions · 2019-11-12T17:19:41.292Z · score: 2 (1 votes) · LW · GW

Stores don't want to do this, for the same reason they set prices that change frequently, sit one cent below round numbers, and are printed in the least legible font that is legally permissible. They want paying attention to prices to be inconvenient, because paying attention decreases spending and shifts that spending towards lower-margin items.

Comment by jimrandomh on How do you assess the quality / reliability of a scientific study? · 2019-11-02T06:25:58.214Z · score: 22 (7 votes) · LW · GW

1. For health-related research, one of the main failure modes I've observed when people I know try to do this, is tunnel vision and a lack of priors about what's common and relevant. Reading raw research papers before you've read broad-overview stuff will make this worse, so read UpToDate first and Wikipedia second. If you must read raw research papers, find them with PubMed, but do this only rarely and only with a specific question in mind.

2. Before looking at the study itself, check how you got there. If you arrived via a search-engine query that asked a question or posed a topic without presupposing an answer, that's good: if there are multiple studies that say different things, you've sampled one of them at random. If you arrived via a query that asked for confirmation of a hypothesis, that's bad: if there are multiple studies that say different things, you've sampled in a way that was biased towards that hypothesis. If you arrived via a news article, that's worst of all: if there are multiple studies that say different things, you've sampled in a way that was biased away from reality, since surprising results get disproportionate coverage.

3. Don't bother with studies in rodents, animals smaller than rodents, cell cultures, or undergraduate psychology students. These studies are done in great numbers because they are cheap, but they have low average quality. The fact that they are so numerous makes the search-sampling problems in (2) more severe.

4. Think about what a sensible endpoint or metric would be before you look at what endpoint/metric was reported. If the reported metric is not the one you expected, this will often be because the result on the relevant metric was terrible. Classic examples are papers about battery technologies reporting power rather than capacity, and biomedical papers reporting effects on biomarkers rather than on symptoms or mortality.

5. Correctly controlling for confounders is much, much harder than people typically give it credit for. Adding extra things to the list of things controlled for can create spurious correlations, and study authors are not incentivized to handle this correctly. The practical upshot is that observational studies only count if the effect size is very large.
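The search-sampling effect in (2) can be shown with a toy simulation; the 30/70 split and the two query behaviors are made-up assumptions purely for illustration:

```python
import random

random.seed(0)

# Hypothetical literature: 30% of studies find an effect, 70% don't.
studies = ["effect"] * 30 + ["no effect"] * 70

def neutral_query():
    """'Does X affect Y?' -- samples the literature roughly uniformly."""
    return random.choice(studies)

def confirmation_query():
    """'Evidence that X affects Y' -- only surfaces confirming studies."""
    return random.choice([s for s in studies if s == "effect"])

n = 10_000
neutral_rate = sum(neutral_query() == "effect" for _ in range(n)) / n
biased_rate = sum(confirmation_query() == "effect" for _ in range(n)) / n
# neutral_rate lands near the 0.30 base rate; biased_rate is exactly 1.0.
```

The confirmation-shaped query makes a 30%-supported hypothesis look unanimously supported, without any individual study being wrong.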

Comment by jimrandomh on [Site Update] Subscriptions, Bookmarks, & Pingbacks · 2019-10-30T18:59:09.806Z · score: 3 (2 votes) · LW · GW

(We forgot to run one of the migration scripts)

Comment by jimrandomh on [Site Update] Subscriptions, Bookmarks, & Pingbacks · 2019-10-30T02:17:07.527Z · score: 10 (5 votes) · LW · GW

Same cadence, but separating them does make sense and I might add that option in the future.

Comment by jimrandomh on Climate technology primer (1/3): basics · 2019-10-29T03:07:07.098Z · score: 9 (3 votes) · LW · GW

Nuclear power is typically located close to power demands, ie cities, because of the costs and losses in transporting power over long distances. This also limits the size/scale of nuclear power plants, since if you build larger than the demands of a city, you have to transport the power over a long distance to find more sinks.

On the other hand, suppose a city were built specifically for the purpose of hosting nuclear power, carbon-capture, and CO2-to-fuel plants. Such a city might be able to have significantly cheaper nuclear power, since being far away from existing population centers would lower safety and regulatory costs, and concentrating production in one place might enable new economies of scale.

It seems like there are two worlds we could be in, here. In one world, nuclear power right now is like the space launch industry of a decade ago: very expensive, but expensive because of institutional failure and a need for R&D, rather than fundamental physics. In the other world, some component of power plants (steam turbines, for example) is already optimized close to reasonable limits, so an order of magnitude is not possible. Does anyone with engineering knowledge of this space have a sense of which is likely?

Comment by jimrandomh on Climate technology primer (1/3): basics · 2019-10-29T02:11:38.054Z · score: 7 (4 votes) · LW · GW

Something that confuses me: this discussion of afforestation (like others I've seen) focuses on planting trees (converting non-forest biomes into forest). But it seems like trees do a reasonably good job of spreading themselves; why not instead go to existing forests, cut down and bury any trees that are past their peak growth phase, and let the remaining trees plant the replacements? How much does the cutting and burial itself cost? (I see an estimate of $50/t for burying crop residues, which seems like it would be similar.)

Comment by jimrandomh on Climate technology primer (1/3): basics · 2019-10-29T01:37:06.391Z · score: 13 (3 votes) · LW · GW

I expect to have some questions and ideas, but I'm still working my way through this, as I suspect are others. I really appreciate how in depth this is!

Comment by jimrandomh on Jacy Reese (born Jacy Anthis)? · 2019-10-27T03:56:23.153Z · score: 2 (1 votes) · LW · GW

Under Wikipedia's rules, yes.

Comment by jimrandomh on Endogenous Epinephrine for Anaphylaxis? · 2019-10-20T19:23:12.093Z · score: 13 (4 votes) · LW · GW

I expect that knowing you're having anaphylaxis without a solution is already reasonably close to the upper end of psychological stress, and you can't add that much more. The reason the epinephrine concentrations are so much higher in cardiac arrest patients is not because cardiac arrest is psychologically stressful, it's because epinephrine release is triggered by hypoxia.

Comment by jimrandomh on Noticing Frame Differences · 2019-10-05T02:30:23.798Z · score: 4 (2 votes) · LW · GW

Curated. I think the meta-frame (frame of looking at frames) is key to figuring out a lot of important outstanding questions. In particular, some ideas are hard to parse or generate in some frames and easy to parse or generate in others. Most people have frame-blind-spots, and lose access to some ideas that way. There also seem to be groups within the rationality community that are feeling alienated, because too many of their conversations suffer from frame-mismatch problems.

Comment by jimrandomh on Follow-Up to Petrov Day, 2019 · 2019-09-28T18:58:54.774Z · score: 27 (11 votes) · LW · GW

Lizardman's Constant is an observation seen in polls of unfiltered groups of people, but the people who were given the launch codes were selected for trustworthiness.

Comment by jimrandomh on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T02:56:53.227Z · score: 42 (14 votes) · LW · GW

(This thread is our collective reenactment of the conversations about nuclear safety that happened during the cold war.)

Comment by jimrandomh on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T01:27:54.304Z · score: 7 (4 votes) · LW · GW




?


!


Comment by jimrandomh on Sayan's Braindump · 2019-09-19T01:23:15.432Z · score: 4 (3 votes) · LW · GW

  • Multiple large monitors, for programming.
  • Waterproof paper in the shower, for collecting thoughts and making a morning todo list.
  • Email filters and Priority Inbox, to prevent spurious interruptions while keeping enough trust that urgent things will generate notifications that I don't feel compelled to check too often.
  • USB batteries for recharging phones - one to carry around, one at each charging spot for quick-swapping.

Comment by jimrandomh on Focus · 2019-09-16T20:07:29.706Z · score: 7 (4 votes) · LW · GW

Yep, one of us edited it to fix the link. Added a GitHub issue for dealing with relative links in RSS in general: https://github.com/LessWrong2/Lesswrong2/issues/2434 .

Comment by jimrandomh on Raemon's Scratchpad · 2019-09-14T05:26:16.569Z · score: 4 (2 votes) · LW · GW

Note that this would be a very non-idiomatic way to use jQuery. More typical architectures don't do client-side templating; they do server-side rendering and client-side incremental mutation.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-09-13T22:24:50.632Z · score: 22 (4 votes) · LW · GW

I'm kinda confused about the relation between cryptography people and security mindset. Looking at the major cryptographic algorithm classes (hashing, symmetric-key, asymmetric-key), it seems pretty obvious that the correct standard algorithm in each class is probably a compound algorithm -- hash by xor'ing the results of several highly-dissimilar hash functions, etc, so that a mathematical advance which breaks one algorithm doesn't break the overall security of the system. But I don't see anyone doing this in practice, and also don't see signs of a debate on the topic. That makes me think that, to the extent they have security mindset, it's either being defeated by political processes in the translation to practice, or it's weirdly compartmentalized and not engaged with any practical reality or outside views.
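The xor-combination idea can be sketched in a few lines using two structurally dissimilar standard hash functions. This is the comment's suggested construction, not a vetted one; combiners deployed in practice more often concatenate digests rather than xor them:

```python
import hashlib

def compound_hash(data: bytes) -> bytes:
    """XOR the digests of two dissimilar hash functions, so that a
    mathematical break of one doesn't by itself reveal the output."""
    a = hashlib.sha3_256(data).digest()  # Keccak sponge construction
    b = hashlib.blake2s(data).digest()   # BLAKE, ChaCha-derived design
    return bytes(x ^ y for x, y in zip(a, b))
```

Both digests are 32 bytes, so the combined output keeps the same length as either component.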

Comment by jimrandomh on Bíos brakhús · 2019-09-12T18:13:03.208Z · score: 4 (3 votes) · LW · GW

In my experience, the motion that seems to prevent mental crowding-out is intervening on the timing of my thinking: if I force myself to spend longer on a narrow question/topic/idea than is comfortable, eg with a timer, I'll eventually run out of cached thoughts and spot things I would have otherwise missed.

Comment by jimrandomh on An1lam's Short Form Feed · 2019-09-12T02:01:17.861Z · score: 2 (1 votes) · LW · GW

By generativity do you mean "within-domain" generativity?

Not exactly, because Carmack has worked in more than one domain (albeit not as successfully; Armadillo Aerospace never made orbit.)

On those dimensions, it seems entirely fair to compare across topics and assert that Pearl was solving more significant and more difficult problem(s) than Carmack

Agree on significance, disagree on difficulty.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-09-12T01:19:07.010Z · score: 22 (6 votes) · LW · GW

Eliezer has written about the notion of security mindset, and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.

An1lam's recent shortform post talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that this is because engineer-types actually get to write software that might have security holes, and so have the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don't have security mindset, at least of the type Eliezer described.

My hypothesis is that to acquire security mindset, you have to:

  • Practice optimizing from a red team/attacker perspective,
  • Practice optimizing from a defender perspective; and
  • Practice modeling the interplay between those two perspectives.

So a software engineer can acquire security mindset because they practice writing software which they don't want to have vulnerabilities, they practice searching for vulnerabilities (usually as an auditor simulating an attacker rather than as an actual attacker, but the cognitive algorithm is the same), and they practice going meta when they're designing the architecture of new projects. This explains why security mindset is very common among experienced senior engineers (who have done each of the three many times), and rare among junior engineers (who haven't yet). It explains how Eliezer can have security mindset: he alternates between roleplaying a future AI-architect trying to design AI control/alignment mechanisms, roleplaying a future misaligned AI trying to optimize around them, and going meta on everything-in-general. It also predicts that junior AI scientists won't have this security mindset, and probably won't acquire it except by following a similar cognitive trajectory.

Which raises an interesting question: how much does security mindset generalize between domains? Ie, if you put Theo de Raadt onto a hypothetical future AI team, would he successfully apply the same security mindset there as he does to general computer security?

Comment by jimrandomh on G Gordon Worley III's Shortform · 2019-09-12T00:20:15.389Z · score: 11 (5 votes) · LW · GW

Outside observer takeaway: There's a bunch of sniping and fighting here, but if I ignore all the fighting and look at only the ideas, what we have is that Gordon presented an idea, Duncan presented counterarguments, and Gordon declined to address the counterarguments. Posting on shortform doesn't come with an obligation to follow up and defend things; it's meant to be a place where tentative and early stage ideas can be thrown around, so that part is fine. But I did come away believing the originally presented idea is probably wrong.

(Some of the meta-level fighting seemed not-fine, but that's for another comment.)

Comment by jimrandomh on ike's Shortform · 2019-09-10T01:39:50.959Z · score: 3 (2 votes) · LW · GW

Yes, it implies that. The exact level of fidelity required is less straightforward; it's clear that a perfect simulation must have qualia/consciousness, but small imperfections make the argument not hold, so to determine whether an imperfect simulation is conscious we'd have to grapple with the even-harder problem of neuroscience.

Comment by jimrandomh on Eli's shortform feed · 2019-09-10T01:34:00.749Z · score: 14 (6 votes) · LW · GW

In There’s No Fire Alarm for Artificial General Intelligence Eliezer argues:

A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, you know you won’t lose face if you proceed to exit the building.

If I have a predetermined set of tests, this could serve as a fire alarm, but only if you've successfully built a consensus that it is one. This is hard, and the consensus would need to be quite strong. To avoid ambiguity, the test itself would need to be demonstrably resistant to being clever Hans'ed. Otherwise it would be just another milestone.

Comment by jimrandomh on An1lam's Short Form Feed · 2019-09-10T01:05:42.962Z · score: 4 (3 votes) · LW · GW

I think the engineer mindset is more strongly represented here than you think, but that the nature of nonspecialist online discussion warps things away from the engineer mindset and towards the scientist mindset. Both types of people are present, but the engineer-mindset people tend not to put that part of themselves forward here.

The problem with getting down into the details is that there are many areas with messy details to get into, and it's hard to appreciate the messy details of an area you haven't spent enough time in. So deep dives in narrow topics wind up looking more like engineer-mindset, while shallow passes over wide areas wind up looking more like scientist-mindset. LessWrong posts can't assume much background, which limits their depth.

I would be happy to see more deep-dives; a lightly edited transcript of a John Carmack talk wouldn't be a prototypical LessWrong post, but it would be a good one. But such posts are necessarily going to exclude a lot of readers, and LessWrong isn't necessarily going to be competitive with posting in more topic-specialized places.

Comment by jimrandomh on An1lam's Short Form Feed · 2019-09-10T00:36:57.021Z · score: 8 (3 votes) · LW · GW

Yet I also feel like John Carmack probably probably isn't remotely near the level of Pearl (I'm not that familiar Carmack's work): pushing forward video game development doesn't compare to neatly figuring what exactly causality itself is.

You're looking at the wrong thing. Don't look at the topic of their work; look at their cognitive style and overall generativity. Carmack is many levels above Pearl. Just as importantly, there's enough recorded video of him speaking unscripted that it's feasible to absorb some of his style.

Comment by jimrandomh on benwr's unpolished thoughts · 2019-09-10T00:11:35.857Z · score: 3 (2 votes) · LW · GW

I'm not sure relationship-strength on a single axis is quite the right factor. At the end of a workshop, the participants don't have that much familiarity, if you measure it by hours spent talking; but those hours will tend to have been focused on the sort of information that makes a Doom circle work, ie, people's life strategies and the things they're struggling with. If I naively tried to gather a group with strong relationship-strength, I expect many of the people I invited would find out that they didn't know each other as well as they thought they did.

Comment by jimrandomh on Eli's shortform feed · 2019-09-10T00:05:13.511Z · score: 7 (2 votes) · LW · GW

A slightly different spin on this model: it's not about the types of strategies people generate, but the number. If you think about something and only come up with one strategy, you'll do it without hesitation; if you generate three strategies, you'll pause to think about which is the right one. So people who can't come up with as many strategies are impulsive.

Comment by jimrandomh on Rob B's Shortform Feed · 2019-09-09T23:56:10.224Z · score: 10 (2 votes) · LW · GW

Somewhat more meta level: Heuristically speaking, it seems wrong and dangerous for the answer to "which expressed human preferences are valid?" to be anything other than "all of them". There's a common pattern in metaethics which looks like:

1. People seem to have preference X

2. X is instrumentally valuable as a source of Y and Z. The instrumental-value relation explains how the preference for X was originally acquired.

3. [Fallacious] Therefore preference X can be ignored without losing value, so long as Y and Z are optimized.

In the human brain algorithm, if you optimize something instrumentally for a while, you start to value it terminally. I think this is the source of a surprisingly large fraction of our values.

Comment by jimrandomh on Chris_Leong's Shortform · 2019-09-09T23:41:39.829Z · score: 8 (4 votes) · LW · GW

+1 for book-distillation, probably the most underappreciated and important type of post.

Comment by jimrandomh on Bíos brakhús · 2019-09-09T23:37:02.240Z · score: 2 (1 votes) · LW · GW

In theory you might, but in practice you can't. Distraction-avoidant behavior favors things that you can get into quickly, on the order of seconds--things like checking for Facebook notifications, or starting a game which has a very fast load time. Most intellectual work has a spinup, while you recreate mental context, before it provides rewards, so distraction-avoidant behavior doesn't choose it.

Comment by jimrandomh on ozziegooen's Shortform · 2019-09-09T23:33:09.576Z · score: 2 (1 votes) · LW · GW

One way to look at this is, where is the variance coming from? Any particular forecasting question has implied sub-questions, which the predictor needs to divide their attention between. For example, given the question "How much value has this organization created?", a predictor might spend their time comparing the organization to others in its reference class, or they might spend time modeling the judges and whether they tend to give numbers that are higher or lower.

Evaluation consistency is a way of reducing the amount of resources that you need to spend modeling the judges, by providing a standard that you can calibrate against. But there are other ways of achieving the same effect. For example, if you have people predict the ratio of value produced between two organizations, then if the judges consistently predict high or predict low, this no longer matters since it affects both equally.
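The judge-bias cancellation can be checked with a toy model; the organization names and values are invented, and the judge's consistent high/low tendency is modeled as a multiplicative factor:

```python
import random

random.seed(1)

true_value = {"org_a": 100.0, "org_b": 25.0}

def judge_estimate(org, bias):
    """A judge who consistently scores high or low by a constant factor."""
    noise = random.uniform(0.9, 1.1)
    return true_value[org] * bias * noise

# Whether the judge runs hot (2.0) or cold (0.5), the bias factor
# cancels in the ratio, which stays near the true 100/25 = 4.
ratios = [judge_estimate("org_a", b) / judge_estimate("org_b", b)
          for b in (0.5, 1.0, 2.0)]
```

Only the per-estimate noise remains in the ratio; the judge's systematic bias has no effect on it at all.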

Comment by jimrandomh on Hazard's Shortform Feed · 2019-09-09T23:14:41.750Z · score: 4 (2 votes) · LW · GW

Yep, I notice this sometimes when other people are doing it. I don't notice myself doing it, but that's probably because it's easier to notice from the receiving end.

In writing, it makes me bounce off. (There are many posts competing for my attention, so if the first few sentences fail to say anything interesting, my brain assumes that your post is not competitive and moves on.) In speech, it makes me get frustrated with the speaker. If it's in speech and it's an interruption, that's especially bad, because it's displacing working memory from whatever I was doing before.

Comment by jimrandomh on Open & Welcome Thread - September 2019 · 2019-09-09T21:53:57.828Z · score: 13 (3 votes) · LW · GW

It's not promoted as a first-class feature since most people don't have enough time to read quite so many comments, and need more filtering, but some people requested it and use it, and the code-implementation is simple, so it won't be going away.

The reason negatively-voted comments don't appear is that this page once shared code with the All Posts page, which has a checkbox for controlling that; this page doesn't have that checkbox wired up. GitHub issue: https://github.com/LessWrong2/Lesswrong2/issues/2415 . Hiding negative-karma content used to be important because the most-recent content was often spam, and displaying it between when it was posted and when the mods deleted it made for a bad experience; but we now have enough other anti-spam measures in place that this isn't really a concern.

The way pagination is currently handled is something we inherited from our framework, and it's pretty suboptimal. At some point we're going to redo how pagination is handled, not for allComments in particular but at a lower level that will affect multiple places, allComments included. This is likely to be a while, though, since it's a somewhat involved piece of development and there are more important things in the queue ahead of it.

Comment by jimrandomh on Does anyone else feel LessWrong is slow? · 2019-09-06T20:21:20.537Z · score: 15 (6 votes) · LW · GW

Yes, it's slower than old-LessWrong. When we rewrote, we built on a collection of libraries and frameworks which were... not exactly production-ready. This has required a lot of compensatory engineering work to bring performance up to par, and there's still a bunch more compensatory engineering work left to do.

In the few months immediately after the rewrite, it was catastrophically slow, to the point where if you opened a bunch of sequence-posts quickly in new tabs, whichever server from the pool handled the requests would die. It's much better than that now, but it's still not where I want it to be. When it is as fast as I'd like, I plan to write a long post with technical details of what we had to change. In the mean time, yes, we know and we're working on it.

Comment by jimrandomh on How Specificity Works · 2019-09-06T00:59:09.206Z · score: 11 (6 votes) · LW · GW

I suspect the true skill is neither going up nor down the ladder of abstraction; it's taking the ladder of abstraction as object. From that perspective, this post (and most of the posts it links to) teaches the skill, but in a weird, indirect way: by making claims about the ladder of abstraction, it forces you to notice and think about it, and practicing this is valuable independent of the specific claims.

Comment by jimrandomh on What Programming Language Characteristics Would Allow Provably Safe AI? · 2019-08-28T21:46:53.454Z · score: 4 (3 votes) · LW · GW

In the context of programming languages, "proof" means "machine-checkable proof from mathematical axioms". While there is difficult work involved in bridging between philosophy and mathematics, a programming language is only going to help on the math side, and on the math side, verifying proofs (once written in programming-language form) is trivial.
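To illustrate how mechanical the checking step is once a claim is written formally, here is a trivial Lean 4 example; the checker verifies it instantly, and the hard work lies in writing nontrivial proofs and in formalizing the right statement in the first place:

```lean
-- Both proofs are verified by the kernel by pure computation:
-- `rfl` succeeds exactly when both sides reduce to the same term.
example : 2 + 2 = 4 := rfl

theorem n_add_zero (n : Nat) : n + 0 = n := rfl
```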

Comment by jimrandomh on Dual Wielding · 2019-08-27T20:27:27.424Z · score: 5 (3 votes) · LW · GW

If the phone has a removable main battery, and you're swapping that, then yes. If it's a standalone power bank with a USB port, then it's the cable that varies rather than the battery, and you only need a few varieties for complete coverage (micro-USB, USB-C and Lightning will charge pretty much any device you can find these days).

Comment by jimrandomh on Dual Wielding · 2019-08-27T16:21:58.906Z · score: 41 (17 votes) · LW · GW

A battery pack is an alternative, but they seem to be similar in size to phones and don’t allow the charging swap tactic

That's because you haven't used enough dakka. In addition to carrying an extra battery, you also leave an identical battery plugged in at each of the spots where you have a charger. You charge your phone by connecting it to the battery (with both in your inventory); you charge the battery by swapping the one in your inventory with the one that lives at the charger. This has the additional benefit that, in addition to solving the battery problem for yourself, if you find yourself with a group, you can solve the phone-battery problem for other people by lending or gifting them a charged battery.

Comment by jimrandomh on adam_scholl's Shortform · 2019-08-12T20:50:47.695Z · score: 5 (5 votes) · LW · GW

On the other hand: half of mouse studies working in humans is an extremely good success rate. We should be quite suspicious of file-drawer effects and p-hacking.

Comment by jimrandomh on Dony's Shortform Feed · 2019-08-12T20:40:15.849Z · score: 6 (3 votes) · LW · GW

Unclear, but see Zvi's Slack sequence for some good reasons why we should act as though we need breaks, even if we technically don't.

Comment by jimrandomh on Keeping Beliefs Cruxy · 2019-08-07T19:43:06.093Z · score: 4 (2 votes) · LW · GW

Pedantic correction: We didn't spend 4 weeks on it; while 4 weeks did pass, there was a lot of other stuff going on during that interval.

Comment by jimrandomh on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T17:31:29.546Z · score: 23 (13 votes) · LW · GW

If the true cost were double or triple the $10B estimate, the implications would barely change; $30B is not significantly less feasible.

Comment by jimrandomh on Drive-By Low-Effort Criticism · 2019-07-31T22:21:42.160Z · score: 20 (8 votes) · LW · GW

> Yes, we should discourage low-quality criticism which is wrong, and encourage high-quality criticism which is right. (I already said this, in the grandparent.) Having accounted for this, it makes no sense at all to prefer longer critical comments to shorter ones. (Quite the opposite preference would be sensible, in fact.)

I think that compared to high-effort criticisms, low-effort criticisms are much more likely to be based on misunderstandings or otherwise low quality. I interpret Lionhearted as saying that criticism should, on the margin, be held to a higher bar than it is now.

Comment by jimrandomh on Shortform Beta Launch · 2019-07-29T03:25:35.210Z · score: 7 (3 votes) · LW · GW

Before we implemented shortform as a feature, some people created posts for themselves to put comments on and called them "shortform feeds". This is a misnomer, because they're not feeds in any sense of the word, so we decided not to call them that. But it looks like there were some residual linguistic habits.

Comment by jimrandomh on Nutrition heuristic: Cycle healthy options · 2019-07-17T21:47:00.058Z · score: 5 (3 votes) · LW · GW

This works in the case of {steak, fish, chicken} because those options play nearly identical roles in a diet. But there are also nutrition strategies for which cycling is worse than any of the individual options. A straightforward example is keto. Conventional wisdom holds that its effects are highly nonlinear: a diet which is low-carb but not low-carb enough is substantially worse than either a strict low-carb or a full-carb diet. And there's a switchover period with some adverse effects, so cycling between keto days and non-keto days would be bad.

Comment by jimrandomh on Please give your links speaking names! · 2019-07-12T22:29:48.995Z · score: 5 (3 votes) · LW · GW

This is a bug in Vulcan, the framework we're built on; https://github.com/LessWrong2/Lesswrong2/issues/638 . We'll come up with a workaround at some point.

Comment by jimrandomh on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-10T05:57:44.751Z · score: 6 (3 votes) · LW · GW

That link doesn't have enough information to find the study, which is likely to contain important methodological caveats.

Comment by jimrandomh on Jimrandomh's Shortform · 2019-07-09T02:20:15.059Z · score: 13 (6 votes) · LW · GW

Among people who haven't learned probabilistic reasoning, there's a tendency to push the (implicit) probabilities in their reasoning to the extremes; when the only categories available are "will happen", "won't happen", and "might happen", too many things end up in the will/won't buckets.

A similar, subtler thing happens to people who haven't learned the economics concept of elasticity. Some example (fallacious) claims of this type:

  • Building more highway lanes will cause more people to drive (induced demand), so building more lanes won't fix traffic.
  • Building more housing will cause more people to move into the area from far away, so additional housing won't decrease rents.
  • A company made X widgets, so there are X more widgets in the world than there would be otherwise.
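The widget example can be made concrete with a toy linear supply-and-demand model (hypothetical parameters, purely for illustration). When one firm adds X widgets to the market, the price falls, other producers cut back, and total quantity rises by less than X:

```python
def equilibrium(demand_intercept, demand_slope,
                supply_intercept, supply_slope, supply_shift=0.0):
    """Solve a linear market for price and quantity.

    Demand: Q = demand_intercept - demand_slope * P
    Supply: Q = supply_intercept + supply_slope * P + supply_shift
    """
    price = (demand_intercept - supply_intercept - supply_shift) \
            / (demand_slope + supply_slope)
    quantity = demand_intercept - demand_slope * price
    return price, quantity

# Baseline market (made-up numbers): P = 50, Q = 50.
p0, q0 = equilibrium(100, 1.0, 0, 1.0)

# One firm adds 10 widgets at every price level.
p1, q1 = equilibrium(100, 1.0, 0, 1.0, supply_shift=10)

# Price falls from 50 to 45; total quantity rises by only 5, not 10,
# because the lower price induces other producers to make fewer widgets.
```

The gap between X and the actual change in total quantity depends on the relative elasticities of supply and demand; the fallacy is assuming that gap is zero.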

This feels like it's in the same reference class as the traditional logical fallacies, and giving it a name - "zero elasticity fallacy" - might be enough to significantly reduce the rate at which people make it. But it does require a bit more concept-knowledge than most of the traditional fallacies, so, maybe not? What happens when you point this out to someone with no prior microeconomics exposure, and does logical-fallacy branding help with the explanation?