Posts

Open Thread With Experimental Feature: Reactions 2023-05-24T16:46:39.367Z
Dual-Useness is a Ratio 2023-04-06T05:46:48.286Z
Infohazards vs Fork Hazards 2023-01-05T09:45:28.065Z
LW Beta Feature: Side-Comments 2022-11-24T01:55:31.578Z
Transformative VR Is Likely Coming Soon 2022-10-13T06:25:38.852Z
LessWrong Now Has Dark Mode 2022-05-10T01:21:44.065Z
Salvage Epistemology 2022-04-30T02:10:41.996Z
[Beta Feature] Google-Docs-like editing for LessWrong posts 2022-02-23T01:52:22.141Z
Open Thread - Jan 2022 [Vote Experiment!] 2022-01-03T01:07:24.172Z
"If and Only If" Should Be Spelled "Ifeff" 2021-07-16T22:03:22.857Z
[Link] Musk's non-missing mood 2021-07-12T22:09:12.165Z
Attributions, Karma and better discoverability for wiki/tag features 2021-06-02T23:47:03.604Z
Rationality Cardinality 2021-04-27T22:27:27.412Z
What topics are on Dath Ilan's civics exam? 2021-04-27T00:59:04.749Z
[Link] Still Alive - Astral Codex Ten 2021-01-21T23:20:03.782Z
History's Biggest Natural Experiment 2020-03-24T02:56:30.070Z
COVID-19's Household Secondary Attack Rate Is Unknown 2020-03-16T23:19:47.117Z
A Significant Portion of COVID-19 Transmission Is Presymptomatic 2020-03-14T05:52:33.734Z
Credibility of the CDC on SARS-CoV-2 2020-03-07T02:00:00.452Z
Effectiveness of Fever-Screening Will Decline 2020-03-06T23:00:16.836Z
For viruses, is presenting with fatigue correlated with causing chronic fatigue? 2020-03-04T21:09:48.149Z
Will COVID-19 survivors suffer lasting disability at a high rate? 2020-02-11T20:23:50.664Z
Jimrandomh's Shortform 2019-07-04T17:06:32.665Z
Recommendation Features on LessWrong 2019-06-15T00:23:18.102Z
[April Fools] User GPT2 is Banned 2019-04-02T06:00:21.075Z
User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines 2019-04-01T20:23:11.705Z
LW Update 2019-03-12 -- Bugfixes, small features 2019-03-12T21:56:40.109Z
Karma-Change Notifications 2019-03-02T02:52:58.291Z
Two Small Experiments on GPT-2 2019-02-21T02:59:16.199Z
How does OpenAI's language model affect our AI timeline estimates? 2019-02-15T03:11:51.779Z
Introducing the AI Alignment Forum (FAQ) 2018-10-29T21:07:54.494Z
Boston-area Less Wrong meetup 2018-05-16T22:00:48.446Z
Welcome to Cambridge/Boston Less Wrong 2018-03-14T01:53:37.699Z
Meetup : Cambridge, MA Sunday meetup: Lightning Talks 2017-05-20T21:10:26.587Z
Meetup : Cambridge/Boston Less Wrong: Planning 2017 2016-12-29T22:43:55.164Z
Meetup : Boston Secular Solstice 2016-11-30T04:54:55.035Z
Meetup : Cambridge Less Wrong: Tutoring Wheels 2016-01-17T05:23:05.303Z
Meetup : MIT/Boston Secular Solstice 2015-12-03T01:14:02.376Z
Meetup : Cambridge, MA Sunday meetup: The Contrarian Positions Game 2015-11-13T18:08:19.666Z
Rationality Cardinality 2015-10-03T15:54:03.793Z
An Idea For Corrigible, Recursively Improving Math Oracles 2015-07-20T03:35:11.000Z
Research Priorities for Artificial Intelligence: An Open Letter 2015-01-11T19:52:19.313Z
Petrov Day is September 26 2014-09-18T02:55:19.303Z
Three Parables of Microeconomics 2014-05-09T18:18:23.666Z
Meetup : LW/Methods of Rationality meetup 2013-10-15T04:02:11.785Z
Cambridge Meetup: Talk by Eliezer Yudkowsky: Recursion in rational agents 2013-10-15T04:02:05.988Z
Meetup : Cambridge, MA Meetup 2013-09-28T18:38:54.910Z
Charity Effectiveness and Third-World Economics 2013-06-12T15:50:22.330Z
Meetup : Cambridge First-Sunday Meetup 2013-03-01T17:28:01.249Z
Meetup : Cambridge, MA third-Sunday meetup 2013-02-11T23:48:58.812Z

Comments

Comment by jimrandomh on Jimrandomh's Shortform · 2024-03-04T23:23:17.994Z · LW · GW

There's been a lot of previous interest in indoor CO2 in the rationality community, including an (unsuccessful) CO2 stripper project, some research summaries, and self-experiments. The results are confusing; I suspect some of the older research might be fake. But I noticed something that has greatly changed how I think about CO2 in relation to cognition.

Exhaled air is about 50,000 ppm CO2. Outdoor air is about 400 ppm; indoor air ranges from 500 to 1,500 ppm depending on ventilation. Since exhaled air has a CO2 concentration roughly two orders of magnitude larger than the variation in room CO2, if even a small percentage of inhaled air is reinhalation of exhaled air, this will have a significantly larger effect than changes in ventilation. I'm having trouble finding a straight answer about what percentage of inhaled air is rebreathed (other than in the context of mask-wearing), but given the diffusivity of CO2, I would be surprised if it wasn't at least 1%.
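
To put rough numbers on that (a back-of-the-envelope sketch of my own; the 1% rebreathed fraction is just the guess above, and the 800 ppm room level is an assumed mid-range value):

```python
# Effective CO2 concentration of inhaled air, as a volume-weighted mix of room
# air and re-inhaled exhaled air. All numbers are illustrative assumptions.
EXHALED_PPM = 50_000   # exhaled breath is roughly 5% CO2
ROOM_PPM = 800         # assumed indoor level, mid-range of the 500-1500 above

def effective_inhaled_ppm(rebreathed_fraction, room_ppm=ROOM_PPM):
    return (1 - rebreathed_fraction) * room_ppm + rebreathed_fraction * EXHALED_PPM

for frac in (0.00, 0.01, 0.02, 0.05):
    print(f"{frac:.0%} rebreathed -> {effective_inhaled_ppm(frac):,.0f} ppm")
# A 1% rebreathed fraction already adds ~500 ppm, on the order of the entire
# 400-1500 ppm spread between well-ventilated and poorly-ventilated rooms.
```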

This predicts that a slight breeze, which replaces the air in front of your face and prevents reinhalation, would have a considerably larger effect than ventilating an indoor space where the air is mostly still. This matches my subjective experience of indoor vs outdoor spaces, which, while extremely confounded, feels like an air-quality difference larger than CO2 sensors would predict.

This also predicts that a small fan, positioned so it replaces the air in front of my face, would have a large effect on the same axis as improved ventilation would. I just set one up. I don't know whether it's making a difference but I plan to leave it there for at least a few days.

(Note: CO2 is sometimes used as a proxy for ventilation in contexts where the thing you actually care about is respiratory aerosol, which affects the transmissibility of respiratory diseases like COVID and influenza. A fan doesn't help with that at all and if anything would make it worse.)

Comment by jimrandomh on Jimrandomh's Shortform · 2024-02-22T06:22:27.960Z · LW · GW

I'm reading you to be saying that you think this policy is bad at its overt purpose because it's ineffective, and that the covert purpose of testing the ability of the US federal government to regulate AI is worth the information cost of a bad policy.

I think preventing the existence of deceptive deepfakes would be quite good (if it would work); audio/video recording has done wonders for accountability in all sorts of contexts, and it's going to be terrible to suddenly have every recording subjected to reasonable doubt. I think preventing the existence of AI-generated fictional-character-only child pornography is neutral-ish (I'm uncertain of the sign of its effect on rates of actual child abuse).

Comment by jimrandomh on Jimrandomh's Shortform · 2024-02-22T02:48:22.959Z · LW · GW

There's an open letter at https://openletter.net/l/disrupting-deepfakes. I signed, but with caveats, which I'm putting here.

Background context is that I participated in building the software platform behind the letter, without a specific open letter in hand. It has mechanisms for sorting noteworthy signatures to the top, and validating signatures for authenticity. I expect there to be other open letters in the future, and I think this is an important piece of civilizational infrastructure.

I think the world having access to deepfakes, and deepfake-porn technology in particular, is net bad. However, the stakes are small compared to the upcoming stakes with superintelligence, which has a high probability of killing literally everyone.

If translated into legislation, I think what this does is put turnkey-hosted deepfake porn generation, as well as pre-tuned-for-porn model weights, into a place very similar to where piracy is today. Which is to say: The Pirate Bay is illegal, wget is not, and the legal distinction is the advertised purpose.

(Where non-porn deepfakes are concerned, I expect them to try a bit harder at watermarking, still fail, and successfully defend themselves legally on the basis that they tried.)

The analogy to piracy goes a little further. If laws are passed, deepfakes will be a little less prevalent than they would otherwise be, there won't be above-board businesses around it... and there will still be lots of it. I don't think there-being-lots-of-it can be prevented by any feasible means. The benefit of this will be the creation of common knowledge that the US federal government's current toolkit is not capable of holding back AI development and access, even when it wants to.

I would much rather they learn that now, when there's still a nonzero chance of building regulatory tools that would function, rather than later.

Comment by jimrandomh on More on the Apple Vision Pro · 2024-02-14T19:03:37.177Z · LW · GW

I went to an Apple store for a demo, and said: the two things I want to evaluate are comfort and use as an external monitor. I brought a compatible laptop (a MacBook Pro). They replied that the demo was highly scripted, and they weren't allowed to let me do that. I went through their scripted demo. It was worse than I expected. I'm not expecting Apple to take over the VR headset market any time soon.

Bias note: Apple is intensely, uniquely totalitarian over software that runs on iPhones and iPads, in a way I find offensive, not just in a sense of not wanting to use it, but also in a sense of not wanting it to be permitted in the world. They have brought this model with them to Vision Pro, and for this reason I am rooting for them to fail.

I think most people evaluating the Vision Pro have not tried Meta's Quest Pro and Quest 3, and are comparing it to earlier-generation headsets. They used an external battery pack and still managed to come in heavier than the Quest 3, which has the battery built in. The screen and passthrough look better, but I don't think this is because Apple has any technology that Meta doesn't; I think the difference is entirely explained by Apple having used more-expensive and heavier versions of commodity parts, which implies that if this is a good tradeoff, then their lead will only last for one generation at most. (In particular, the display panel is dual-sourced from Sony and LG, not made in-house.)

I tried to type "lesswrong.com" into the address bar of Safari using the two-finger hand tracking keyboard. I failed. I'm not sure whether the hand-tracking was misaligned with the passthrough camera, or just had an overzealous autocomplete that was unable to believe that I wanted a "w" instead of an "e", but I gave up after five tries and used the eye-tracking method instead.

During the demo, one of the first things they showed me was an SBS (side-by-side 3D) photo with the camera pitched down thirty degrees. This doesn't sound like a big deal, but it's something that rules out there being a clueful person behind the scenes. There's a preexisting 3D-video market (both porn and non-porn), and it's small and struggling. One of the problems it's struggling with is that SBS video is very restrictive about what you can do with the camera; in particular, it's bad to move the camera, because that causes vestibular mismatch, and it's bad to tilt the camera, because that makes it so that gravity is pointing the wrong way. A large fraction of 3D-video content fails to follow these restrictions, and that makes it very unpleasant to watch. If Apple can't even enforce the camerawork guidelines on the first few minutes of its in-store demo, then this bodes very poorly for the future content on the platform.

Comment by jimrandomh on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-06T07:27:23.023Z · LW · GW

I have only skimmed the early parts of the Rootclaim videos, and the first ~half of Daniel Filan's tweet thread about it. So it's possible this was discussed somewhere in there, but there's something major that doesn't sit right with me:

In the first month of the pandemic, I was watching the news about it. I remember that the city government of Wuhan attempted to conceal the fact that there was a pandemic. I remember Li Wenliang being punished for speaking about it. I remember that reliable tests to determine whether someone had COVID were extremely scarce. I remember the US CDC publishing a paper absurdly claiming that the attack rate was near zero, because they wouldn't count infections unless they had a positive test, while refusing to test people who hadn't travelled to Wuhan. I remember Chinese whistleblowers visiting hospitals and filming the influx of patients.

It appears to me that all evidence for the claim that the virus originated in the wet market passes through Chinese government sources. And it appears to me that those same sources were unequipped to do effective contact tracing, and were executing a coverup. When a coverup was no longer possible, the incentive would have been to confidently identify an origin, even if they had no idea what the true origin was; and they could easily create the impression that it started in any place they chose, simply by focusing their attention there, since cases would be found no matter where they focused.

Comment by jimrandomh on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-23T02:04:00.853Z · LW · GW

Imo this comment is lowering the quality of the discourse. Like, if I steelman and expand what you're saying, it seems like you're trying to say something like "this response is pinging a deceptiveness-heuristic that I can't quite put my finger on". That phrasing adds information, and would prompt other commenters to evaluate and either add evidence of deceptiveness, or tell you you're false-positiving, or something like that. But your actual phrasing doesn't do that; it's basically name-calling.

So, mod note: I strong-downvoted your comment and decided to leave it at that. Consider yourself frowned at.

Comment by jimrandomh on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-20T02:35:23.602Z · LW · GW

There's a big difference between arguing that someone shouldn't be able to stay anonymous, and unilaterally posting names. Arguing against allowing anonymity (without posting names) would not have been against the rules. But, we're definitely not going to re-derive the philosophy of when anonymity should and shouldn't be allowed, after names are already posted. The time to argue for an exception was beforehand, not after the fact.

Comment by jimrandomh on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-20T01:36:42.484Z · LW · GW

We (the LW moderation team) have given Roko a one-week site ban and an indefinite post/topic ban for attempted doxing. We have deleted all comments that revealed real names, and ask that everyone respect the privacy of the people involved.

Comment by jimrandomh on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-14T21:30:13.197Z · LW · GW

Genetically altering IQ is more or less about flipping a sufficient number of IQ-decreasing variants to their IQ-increasing counterparts. This sounds overly simplified, but it’s surprisingly accurate; most of the variance in the genome is linear in nature, by which I mean the effect of a gene doesn’t usually depend on which other genes are present. 

So modeling a continuous trait like intelligence is actually extremely straightforward: you simply add the effects of the IQ-increasing alleles to those of the IQ-decreasing alleles and then normalize the score relative to some reference group.

If the mechanism of most of these genes is that their variants push something analogous to a hyperparameter in one direction or the other, and the number of hyperparameters is much smaller than the number of genes, then this strategy will greatly underperform the simulated prediction. This is because the cumulative effect of flipping all these genes will be to move the hyperparameters towards their optima and then drastically overshoot them.
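
A toy illustration of that concern (my own sketch, not from the post; every number is an illustrative assumption): suppose the trait depends on a single hyperparameter h with an interior optimum, and each variant nudges h by a small amount.

```python
# Toy model: the trait peaks at an optimal hyperparameter value instead of
# increasing without bound. All numbers here are made-up assumptions.
N_VARIANTS = 1000
PER_VARIANT_SHIFT = 0.01   # each flipped variant adds +0.01 to h
H_TYPICAL = -1.0           # a typical person sits somewhat below the optimum
H_OPTIMUM = 0.0

def trait(h):
    return 100 - 30 * (h - H_OPTIMUM) ** 2   # inverted-U around the optimum

print(trait(H_TYPICAL))                                   # baseline: 70
print(trait(H_TYPICAL + 50 * PER_VARIANT_SHIFT))          # flip 50 variants: 92.5
print(trait(H_TYPICAL + N_VARIANTS * PER_VARIANT_SHIFT))  # flip all 1000: -2330
# Flipping a modest number of variants moves h toward the optimum and helps;
# flipping all of them overshoots by an enormous margin, so the gain predicted
# by a purely additive model never materializes.
```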

Comment by jimrandomh on Why Yudkowsky is wrong about "covalently bonded equivalents of biology" · 2023-12-07T10:16:24.817Z · LW · GW

I think you're modeling the audience as knowing a lot less than we do. Someone who didn't know high school chemistry and biology would be at risk of being misled, sure. But I think that stuff should be treated as a common-knowledge background. At which point, obviously, you unpack the claim to: the weakest links in a structure determine its strength, biological structures have weak links in them which are noncovalent bonds, not all of those noncovalent bonds are weak for functional reasons, some are just hard to reinforce while constrained to things made by ribosomes. The fact that most links are not the weakest links, does not refute the claim. The fact that some weak links have a functional purpose, like enabling mobility, does not refute the claim.

Comment by jimrandomh on Cis fragility · 2023-11-30T09:30:25.352Z · LW · GW

LW gives authors the ability to moderate comments on their own posts (particularly non-frontpage posts) when they reach a karma threshold. It doesn't automatically remove that power when they fall back under the threshold, because this doesn't normally come up (the threshold is only 50 karma). In this case, however, I'm taking the can-moderate flag off the account, since they're well below the threshold and, in my opinion, abusing it. (They deleted this comment by me, which I undid, and this comment which I did not undo.)

We are discussing in moderator-slack and may take other actions.

Comment by jimrandomh on Cis fragility · 2023-11-30T08:15:26.734Z · LW · GW

Yeah, this is definitely a minimally-obfuscated autobiographical account, not hypothetical. It's also false; there were lots of replies. Albeit mostly after Yarrow had already escalated (by posting about it on Dank EA Memes).

Comment by jimrandomh on Feature Request for LessWrong · 2023-11-30T06:13:08.066Z · LW · GW

I don't think this was about pricing, but about keeping occasional bits of literal spam out of the site search. The fact that we use the same search for both users looking for content, and authors adding stuff to Sequences, is a historical accident which makes for a few unfortunate edge cases.

Comment by jimrandomh on OpenAI: Facts from a Weekend · 2023-11-22T01:56:58.793Z · LW · GW

Adam D'Angelo retweeted a tweet implying that hidden information still exists and will come out in the future:

Have known Adam D’Angelo for many years and although I have not spoken to him in a while, the idea that he went crazy or is being vindictive over some feature overlap or any of the other rumors seems just wrong. It’s best to withhold judgement until more information comes out.

Comment by jimrandomh on [deleted post] 2023-11-21T22:00:54.120Z

While I could imagine someone thinking this way, I haven't seen any direct evidence of it, and I think someone would need several specific false beliefs in order to wind up thinking this way.

The main thing is, any advantage that AI could give in derivatives trading is small and petty compared to what's at stake. This is true for AI optimists (who think AI has the potential to solve all problems, including solving aging and making us effectively immortal). This is true for AI pessimists (who think AI will kill literally everyone). The failure mode of "picking up pennies in front of a steamroller" is common enough to have its own aphorism, but this seems implausible.

Trading also has a large zero-sum component, which means that having AI while no one else does would be profitable, but society as a whole gaining AI would not profit traders much, except via ways in which the rest of society is also profiting.

Also worth calling out explicitly: There aren't that many derivatives traders in the world, and the profession favors secrecy. I think the total influence of derivatives-trading on elite culture is pretty small.

Comment by jimrandomh on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T00:45:31.763Z · LW · GW

Was Sam Altman acting consistently with the OpenAI charter prior to the board firing him?

Comment by jimrandomh on Does bulemia work? · 2023-11-12T22:48:55.549Z · LW · GW

Short answer: No, and trying this does significant damage to people's health.

The prototypical bulimic goes through a cycle where they severely undereat overall, then occasionally experience (what feels from the inside like) a willpower failure which causes them to "binge", eating an enormous amount in a short time. They're then in a state where, if they let digestion run its course, they'd be sick from the excess; so they make themselves vomit, to prevent that.

I believe the "binge" state is actually hypoglycemia (aka low blood sugar), because (as a T1 diabetic) I've experienced it. Most people who talk about blood sugar in relation to appetite have never experienced blood sugar low enough to be actually dangerous; it's very distinctive, and it includes an overpowering compulsion to eat. It also can't be resolved in less than about 15 minutes, because eating doesn't raise blood sugar; digesting raises blood sugar. That can lead to consuming thousands of calories of carbs at once (which would be fine if spaced out a little, but is harmful if concentrated into such a narrow time window).

The other important thing about hypoglycemia is that being hypoglycemic is proof that someone's fat cells aren't providing enough energy withdrawals to survive. The binge-eating behavior is a biological safeguard that prevents people from starving themself so much that they literally die.

Comment by jimrandomh on Why is lesswrong blocking wget and curl (scrape)? · 2023-11-08T22:06:47.736Z · LW · GW

It's an AWS firewall rule with bad defaults. We'll fix it soon, but in the meantime, you can scrape if you change your user agent to something other than wget/curl/etc. Please use your name/project in the user-agent so we can identify you in logs if we need to, and rate-limit yourself conservatively.
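
For example (a minimal sketch, not an official policy; the project name, contact address, and delay are placeholders):

```python
# Polite scraping sketch: identifiable user agent plus a conservative
# self-imposed rate limit. Replace the placeholder name/contact with your own.
import time
import requests

HEADERS = {"User-Agent": "my-scraper-project/0.1 (contact: you@example.com)"}

def fetch(url):
    response = requests.get(url, headers=HEADERS, timeout=30)
    response.raise_for_status()
    time.sleep(5)  # conservative delay between requests
    return response.text
```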

Comment by jimrandomh on [deleted post] 2023-11-06T22:12:11.014Z

I wrote about this previously here. I think you have to break it down by company; the answer for why they're not globally available is different for the different companies.

For Waymo, they have self-driving taxis in SF and Phoenix without safety drivers. They use LIDAR, so instead of the cognitive task of driving as a human would solve it, they have substituted the easier task "driving but your eyes are laser rangefinders". The reason they haven't scaled to cover every city, or at least more cities, is unclear to me; the obvious possibilities are that the LIDAR sensors and onboard computers are impractically expensive, that they have a surprisingly high manual-override rate and there's a big unscalable call center somewhere, or that they're being cowardly and trying to maintain zero fatalities forever (at scales where a comparable fleet of human-driven taxis would definitely have some fatalities). In any case, I don't think the software/neural nets are likely to be the bottleneck.

For Tesla, until recently, they were using surprisingly-low-resolution cameras. So instead of the cognitive task of driving as a human would solve it, they substituted the harder task "driving with a vision impairment and no glasses". They did upgrade the cameras within the past year, but it's hard to tell how much of the customer feedback represents the current hardware version vs. past versions; sites like FSDBeta Community Tracker don't really distinguish. It also seems likely that their onboard GPUs are underpowered relative to the task.

As for Cruise, Comma.ai, and others--well, distance-to-AGI is measured only from the market leader, and just as GPT-4, Claude and Bard have a long tail of inferior models by other orgs trailing behind them, you also expect a long tail of self-driving systems with worse disengagement rates than the leaders.

Comment by jimrandomh on Will no one rid me of this turbulent pest? · 2023-10-14T22:44:24.877Z · LW · GW

It seems likely that all relevant groups are cowards, and none are willing to move forward without a more favorable political context. But there's another possibility not considered here: perhaps someone has already done a gene-drive mosquito release in secret, but we don't know about it because it didn't work. This might happen if local mosquito populations mix too slowly compared to how long it takes a gene-driven population to crash; or if the initial group all died out before they could mate; or if something in the biology of the gene-drive machinery didn't function as expected.

If that were the situation, then the world would have a different problem than the one we think it has: inability to share information about what the obstacle was and debug the solution.

Comment by jimrandomh on LW UI features you might not have tried · 2023-10-13T19:36:09.117Z · LW · GW

Unfortunately the ban-users-from-posts feature has a Rube Goldberg machine of rules around it that were never written down, and because there was no documentation to check it against, I've never managed to give it a proper QA pass. I'd be interested in reports of people's experience with it, but I do not have confidence that this feature works without major bugs.

Comment by jimrandomh on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2023-10-07T19:31:57.212Z · LW · GW

You should think less about PR and more about truth.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-05T17:46:15.913Z · LW · GW

Mod note: I count six deleted comments by you on this post. Of these, two had replies (and so were edited to just say "deleted"), one was deleted quickly after posting, and three were deleted after they'd been up for awhile. This is disruptive to the conversation. It's particularly costly when the subject of the top-level post is about conversation dynamics themselves, which the deleted comments are instances (or counterexamples) of.

You do have the right to remove your post/comments from LessWrong. However, doing so frequently, or in the middle of active conversations, is impolite. If you predict that you're likely to wind up deleting a comment, it would be better to not post it in the first place. LessWrong has a "retract" button which crosses out text (keeping it technically-readable but making it annoying to read so that people won't); this is the polite and epistemically-virtuous way to handle comments that you no longer stand by.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-01T19:41:44.740Z · LW · GW

The thing I was referring to was an exchange on Facebook, particularly the comment where you wrote:

also i felt like there was lots of protein, but maybe folks just didn't realize it? rice and most grains that are not maize have a lot (though less densely packed) and there was a lot of quinoa and nut products too

That exchange was salient to me because, in the process of replying to Elizabeth, I had just searched my FB posting history and reread what veganism-related discussions I'd had, including that one. But I agree, in retrospect, that calling you a "vegan advocate" was incorrect. I extrapolated too far based on remembering you to have been vegan at that time and the stance you took in that conversation. The distinction matters both from the perspective of not generalizing to vegan advocates in general, and because the advocate role carries higher expectations about nutrition-knowledge than participating casually in a Facebook conversation does.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-30T18:36:23.443Z · LW · GW

I draw a slightly different conclusion from that example: that vegan advocates in particular are a threat to truth-seeking in AI alignment. Because I recognize the name, and that's a vegan who's said some extremely facepalm-worthy things about nutrition to me.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-30T18:17:00.992Z · LW · GW

I believe that her summaries are a strong misrepresentation of my views, and explained why in the above comment through object-level references comparing my text to her summaries.

I'm looking at those quote-response pairs, and just not seeing the mismatch you claim there to be. Consider this one:

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem.

Of course, my position is not as hyperbolic as this.

This only asserts that there's a mismatch; it provides no actual evidence of one. Next up:

his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread

In my original answers I address why this is not the case (private communication serves this purpose more naturally).

Pretty straightforwardly, if the pilot study results had only been sent through private communications, then there wouldn't have been public discussion of them (ie, public discussion would have been suppressed). I myself wouldn't know about the results. The probability of a larger follow-up study would be greatly reduced. I personally would have less information about how widespread problems are.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T20:09:11.216Z · LW · GW

If the information environment prevents people from figuring out the true cause of the obesity epidemic, or from making better-engineered foods, this affects you no matter where you live or what social circles you run in. And if epistemic norms are damaged in ways that lead to misaligned AGI instead of aligned AGI, that could literally kill you.

The stakes here are much larger than the individual meat consumption of people within EA and rationality circles. I think this framing (moralistic vegans vs selfish meat eaters with no externalities) causes people to misunderstand the world in ways that are predictably very harmful.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T19:55:14.072Z · LW · GW

I think that's true, but also: When people ask the authors for things (edits to the post, time-consuming engagement), especially if the request is explicit (as in this thread), it's important for third parties to prevent authors from suffering unreasonable costs by pushing back on requests that shouldn't be fulfilled.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T19:46:58.949Z · LW · GW

Disagree. The straightforward reading of this is that harms which route through sharing true information will nearly always be very small compared to the harms that route through people being less informed. Framed this way, it's easy to see that, for example, the argument doesn't apply to things like dangerous medical experiments, because those would have costs that aren't based in talk.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T18:06:23.896Z · LW · GW

You say that the quoted bits are misrepresentations, but I checked your writing and they seem like accurate summaries. You should flag that your position has been misrepresented iff that is true. But you haven't been misrepresented, and I don't think that you think you've been misrepresented.

I think you are muddying the waters on purpose, and making spurious demands on Elizabeth's time, because you think clarity about what's going on will make people more likely to eat meat. I believe this because you've written things like:

One thing that might be happening here, is that we're speaking at different simulacra levels

Source comment. I'm not sure how familiar you are with local usage of the simulacrum levels phrase/framework, but in my understanding of the term, all but one of the simulacrum levels are flavors of lying. You go on to say:

Now, I understand the benefits of adopting the general adoption of the policy "state transparently the true facts you know, and that other people seem not to know". Unfortunately, my impression is this community is not yet in a position in which implementing this policy will be viable or generally beneficial for many topics.

The front-page moderation guidelines on LessWrong say "aim to explain, not persuade". This is already the norm. The norms of LessWrong can be debated, but not in a subthread on someone else's post on a different topic.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T11:43:05.538Z · LW · GW

This comment appears transparently intended to increase the costs associated with having written this post, and to be a continuation of the same strategy of attempting to suppress true information.

Comment by jimrandomh on Contra Yudkowsky on Epistemic Conduct for Author Criticism · 2023-09-13T18:54:22.938Z · LW · GW

Expressing negative judgments of someone's intellectual output could be an honest report, generated by looking at the output itself and extrapolating a pattern. Epistemically speaking, this is fine. Alternatively, it could be motivated by something more like politics; someone gets offended, or has a conflict of interest, then evaluates things in a biased way. Epistemically speaking, this is not fine.

So, if I were to take a stab at what the true rule of epistemic conduct here is, the primary rule would be that you ought to evaluate the ideas before evaluating the person, in your own thinking. There are also reasons why the order of evaluations should be ideas-before-people in the written product as well: it sets a better example of what thought processes are supposed to look like, and it's less likely to mislead people into biased evaluations of the ideas; but this is less fundamental and less absolute than the ordering of the thinking.

But.

Having the order-of-evaluations wrong in a piece of writing is evidence, in a Bayesian sense, of having also had the order-of-evaluations wrong in the thinking that generated it. Based on the totality of omnizoid's post, I think in that case, it was an accurate heuristic. The post is full of overreaches and hyperbolic language. It presents each disagreement as though Eliezer were going against an expert consensus, when in fact each position mentioned is one where he sided with a camp in an extant expert divide.

And...

Over in the legal profession, they have a concept called "appearance of impropriety", which is that, for some types of misconduct, they consider it not only important to avoid the misconduct itself but also to avoid doing things that look too similar to misconduct.

If I translate that into something that could be the true rule, it would be something like: If an epistemic failure mode looks especially likely, both in the sense of a view-from-nowhere risk analysis and in the sense that your audience will think you've fallen into the failure mode, then some things that would normally be epistemically supererogatory become mandatory instead.

Eliezer's criticism of Stephen Jay Gould does not follow the stated rule, of responding to a substantive point before making any general criticism of the author. I lean towards modus tollens over modus ponens, i.e. that this makes the criticism of Stephen Jay Gould worse. But how much worse depends on whether that's a reflection of an inverted generative process, or an artifact of how he wrote it up. I think it was probably the latter.

Comment by jimrandomh on Sharing Information About Nonlinear · 2023-09-08T17:51:39.904Z · LW · GW

Credit to Benquo's writing for giving me the idea.

Comment by jimrandomh on Weekly Incidence vs Cumulative Infections · 2023-09-07T23:47:18.362Z · LW · GW

I've adjusted our sanitizer to let MathML through (will take effect after PR review and deploy), which should affect future crossposts. For this post, I used my moderator power to edit the stuff the sanitizer removed back in.

Comment by jimrandomh on Sharing Information About Nonlinear · 2023-09-07T23:22:21.484Z · LW · GW

Spartz isn't a "public official", so maybe the standard is laxer here?

The relevant category (from newer case law than New York Times Co. v. Sullivan) is public figure, not public official, which is further distinguished into general-purpose and limited-purpose public figures. I haven't looked for case law on it, but I suspect that being the cofounder of a 501(c)(3) is probably sufficient by itself to make someone a limited-purpose public figure with respect to discussion of professional conduct within that 501(c)(3).

Also, the cases specifically call out asymmetric access to media as a reason for their distinctions, and it seems to me that in this case, no such asymmetry exists. The people discussed in the post are equally able to post on LessWrong and the EA Forum (both as replies and as a top-level post), and, to my knowledge, neither Ben nor anyone else has restricted or threatened to restrict that.

Comment by jimrandomh on Weekly Incidence vs Cumulative Infections · 2023-09-07T21:48:24.676Z · LW · GW

Jeff is someone for whom we've configured LW to auto-crosspost posts from his blog via RSS, so it's not being authored within the LW editor.

Comment by jimrandomh on Sharing Information About Nonlinear · 2023-09-07T21:22:14.361Z · LW · GW

The whole point of having laws against defamation, whether libel (written defamation) or slander (spoken defamation), is to hold people to higher epistemic standards when they communicate very negative things about people or organizations

This might be true of some other country's laws against defamation, but it is not true of defamation law in the US. Under US law, merely being wrong, sloppy, and bad at reasoning would not be sufficient to make something count as defamation; it only counts if the writer had actual knowledge that the claims were false, or was completely indifferent to whether they were true or false.

Comment by jimrandomh on Sharing Information About Nonlinear · 2023-09-07T09:44:23.245Z · LW · GW

So, Nonlinear-affiliated people are here in the comments disagreeing, promising proof that important claims in the post are false. I fully expect that Nonlinear's response, and much of the discussion, will be predictably shoved down the throat of my attention, so I'm not too worried about missing the rebuttals, if rebuttals are in fact coming.

But there's a hard-won lesson I've learned by digging into conflicts like this one, which I want to highlight, which I think makes this post valuable even if some of the stories turn out to be importantly false:

If a story is false, the fact that the story was told, and who told it, is valuable information. Sometimes it's significantly more valuable than if the story was true. You can't untangle a web of lies by trying to prevent anyone from saying things that have falsehoods embedded in them. You can untangle a web of lies by promoting a norm of maximizing the available information, including indirect information like who said what.

Think of the game Werewolf, as an analogy. Some moves are Villager strategies, and some moves are Werewolf strategies, in the sense that, if you notice someone using the strategy, you should make a Bayesian update in the direction of thinking the person using that strategy is a Villager or is a Werewolf.

Comment by jimrandomh on Open Thread – Autumn 2023 · 2023-09-05T23:11:54.390Z · LW · GW

(That links to a comment on a post which was moved back to drafts at some point. You can read the comment through the GreaterWrong version.)

Comment by jimrandomh on Open Thread – Autumn 2023 · 2023-09-05T21:59:10.448Z · LW · GW

It's supposed to be right-aligned with the post recommendation to the right ("Do you fear the rock or the hard place") but a Firefox-specific CSS bug causes it to get mispositioned. We're aware of the issue and working on it. A fix will be deployed soon.

Comment by jimrandomh on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-27T04:45:26.411Z · LW · GW

This post seems mostly wrong and mostly deceptive. You start with this quote:

“After many years, I came to the conclusion that everything he says is false. . . . “He will lie just for the fun of it. Every one of his arguments was tinged and coded with falseness and pretense. It was like playing chess with extra pieces. It was all fake.”

This is correctly labelled as being about someone else, but is presented as though it's making the same accusation, just against a different person. But this is not the accusation you go on to make; you never once accuse him of lying. This sets the tone, and I definitely noticed what you did there.

As for the concrete disagreements you list: I'm quite confident you're wrong about the bottom line regarding nonphysicalism (though it's possible his nosology is incorrect; I haven't looked closely at that). I think prior to encountering Eliezer's writing, I would have put nonphysicalism in the same bucket as theism (ie, false, for similar reasons), so I don't think Eliezer is causally upstream of me thinking that. I'm also quite confident that you're wrong about decision theory, and that Eliezer is largely correct. (I estimate Eliezer is responsible for about 30% of the decision-theory-related content I've read.) On the third disagreement, regarding animal consciousness, it looks like a values question paired with word games; I'm not sure there's even a concrete thing (that isn't a definition) for me to agree or disagree with.

Comment by jimrandomh on Watermarking considered overrated? · 2023-08-15T20:04:29.940Z · LW · GW

I think there's a possible unappreciated application for watermarking, which is that it allows AI service providers to detect when a language model has been incorporated into a self-feedback system that crosses API-key boundaries. That is, if someone were to build a system like this thought experiment. Or more concretely, a version of ChaosGPT that, due to some combination of a stronger underlying model and better prompt engineering, doesn't fizzle.

Currently the defenses that AI service providers seem to be using are RLHF to make systems refuse to answer bad prompts, and oversight AIs that watch for bad outputs (maybe only Bing has done that one). The problem with this is that determining whether a prompt is bad or not can be heavily dependent on context; for example, searching for security vulnerabilities in software might either be a subtask of "make this software more secure" or a subtask of "build a botnet". An oversight system that's limited to looking at conversations in isolation wouldn't be able to distinguish these. On the other hand, an oversight system that could look at all the interactions that share an API key would be able to notice that a system like ChaosGPT was running, and inspect the entire thing.

In a world where that sort of oversight system existed, some users would split their usage across multiple API keys, to avoid letting the oversight system see what they were doing. OTOH, if outputs were watermarked, then you could detect whether one API key was being used to generate inputs fed to another API key, and link them together.

(This would fail if the watermark was too fragile, though; eg if asking a weaker AI to summarize an output results in a text that says the same thing but isn't watermarked, this would defeat the purpose.)

Comment by jimrandomh on Does LessWrong allow exempting posts from being scraped by GPTBot? · 2023-08-09T21:41:19.978Z · LW · GW

As it currently stands, there is no obstacle (legal or technical) to AIs incorporating public LessWrong posts and comments into their training sets, or reading LessWrong posts and comments at inference time.

In my opinion (speaking only for myself), I think that for most LW content, there shouldn't be. I'm not very optimistic about the success chances of AI alignment plans that involve using AIs to research AI alignment, but I think the success-chance of that is at least nonzero, and restricting AI access to one of the main sources of human-generated alignment research seems like it would be a very bad idea.

I've considered adding a feature where authors can mark specific posts as not being accessible for AIs. This is complicated by the fact that AI projects thus far have been spotty in their support for canary strings, robots.txt, etc. And since LW offers an API for third-party readers such as GreaterWrong, and that API has been used by scrapers in the past, posts tagged that way would probably not be directly readable offsite either. So if implemented, this feature would probably mean posts tagged that way required a CAPTCHA or similar Javascript-heavy mechanism.

Comment by jimrandomh on Adventist Health Study-2 supports pescetarianism more than veganism · 2023-08-01T05:47:10.657Z · LW · GW

I just edited this to fix the giant emoji in this instance, and made a code fix that should (hopefully) stop it from happening in the future.

Comment by jimrandomh on "Justice, Cherryl." · 2023-07-23T23:56:35.074Z · LW · GW

I think that in typical usage, "principle of charity" is conflating two things. On one hand, you have conversational skills like those described in this comment. Under this definition, saying that someone is failing to exercise the principle of charity is saying that they're navigating a conversation poorly, doing less interpretative labor than they should be.

On the other hand, sometimes saying that someone is failing to exercise the principle of charity is a way of saying that they have such inaccurate priors that it's interfering with their reading comprehension. Or that they're a malicious liar who's pretending to have poor reading comprehension.

Equivocating between these things is a way of allowing people who are failing to exercise the principle of charity (in the latter sense) to save face by only accusing them of the former. This is probably bad, though; norms against malicious misinterpretation don't seem to be adequately enforced in practice, and this is probably a contributing factor.

Comment by jimrandomh on A Hill of Validity in Defense of Meaning · 2023-07-17T23:00:16.884Z · LW · GW

Sorry, this isn't a topic where I want to discuss with someone who's being thick in the way that you're being thick right now. Tapping out.

Comment by jimrandomh on A Hill of Validity in Defense of Meaning · 2023-07-17T22:53:07.297Z · LW · GW

You can confirm this if you're aware that it's a possibility, and interpret carefully-phrased refusals to comment in a way that's informed by reasonable priors. You should not assume that anyone is able to directly tell you that an agreement exists.

Comment by jimrandomh on A Hill of Validity in Defense of Meaning · 2023-07-17T19:18:37.617Z · LW · GW

Then it sounds like the blackmailer in question spent $0 on perpetrating the blackmail

No, that's not what I said (and is false). To estimate the cost you have to compare the outcome of the legal case to the counterfactual baseline in which there was no blackmail happening on the side (that baseline is not zero), and you have to include other costs besides lawyers.

Comment by jimrandomh on A Hill of Validity in Defense of Meaning · 2023-07-17T07:36:52.753Z · LW · GW

The lack of comment from Eliezer and other MIRI personnel had actually convinced me in particular that the claims were true. This is the first I heard that there's any kind of NDA preventing them from talking about it.

I think this means you had incorrect priors (about how often legal cases conclude with settlements containing nondisparagement agreements.)

Comment by jimrandomh on A Hill of Validity in Defense of Meaning · 2023-07-17T07:31:18.062Z · LW · GW

Note that a lawyer who participated in that would be committing a crime. In the case of LH, there was (by my unreliable secondhand understanding) an employment-contract dispute and a blackmail scheme happening concurrently. The lawyers would have been involved only in the employment-contract dispute, not in the blackmail, and any settlement reached would have nominally been only for dropping the employment-contract-related claims. An ordinary employment dispute is a common-enough thing that each side's lawyers would have experience estimating the other side's costs at each stage of litigation, and using those estimates as part of a settlement negotiation.

(Filing lawsuits without merit is sometimes analogized to blackmail, but US law defines blackmail much more narrowly, in such a way that asking for payment to not allege statutory rape on a website is blackmail, but asking for payment to not allege unfair dismissal in a civil court is not.)