Posts

Open Thread With Experimental Feature: Reactions 2023-05-24T16:46:39.367Z
Dual-Useness is a Ratio 2023-04-06T05:46:48.286Z
Infohazards vs Fork Hazards 2023-01-05T09:45:28.065Z
LW Beta Feature: Side-Comments 2022-11-24T01:55:31.578Z
Transformative VR Is Likely Coming Soon 2022-10-13T06:25:38.852Z
LessWrong Now Has Dark Mode 2022-05-10T01:21:44.065Z
Salvage Epistemology 2022-04-30T02:10:41.996Z
[Beta Feature] Google-Docs-like editing for LessWrong posts 2022-02-23T01:52:22.141Z
Open Thread - Jan 2022 [Vote Experiment!] 2022-01-03T01:07:24.172Z
"If and Only If" Should Be Spelled "Ifeff" 2021-07-16T22:03:22.857Z
[Link] Musk's non-missing mood 2021-07-12T22:09:12.165Z
Attributions, Karma and better discoverability for wiki/tag features 2021-06-02T23:47:03.604Z
Rationality Cardinality 2021-04-27T22:27:27.412Z
What topics are on Dath Ilan's civics exam? 2021-04-27T00:59:04.749Z
[Link] Still Alive - Astral Codex Ten 2021-01-21T23:20:03.782Z
History's Biggest Natural Experiment 2020-03-24T02:56:30.070Z
COVID-19's Household Secondary Attack Rate Is Unknown 2020-03-16T23:19:47.117Z
A Significant Portion of COVID-19 Transmission Is Presymptomatic 2020-03-14T05:52:33.734Z
Credibility of the CDC on SARS-CoV-2 2020-03-07T02:00:00.452Z
Effectiveness of Fever-Screening Will Decline 2020-03-06T23:00:16.836Z
For viruses, is presenting with fatigue correlated with causing chronic fatigue? 2020-03-04T21:09:48.149Z
Will COVID-19 survivors suffer lasting disability at a high rate? 2020-02-11T20:23:50.664Z
Jimrandomh's Shortform 2019-07-04T17:06:32.665Z
Recommendation Features on LessWrong 2019-06-15T00:23:18.102Z
[April Fools] User GPT2 is Banned 2019-04-02T06:00:21.075Z
User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines 2019-04-01T20:23:11.705Z
LW Update 2019-03-12 -- Bugfixes, small features 2019-03-12T21:56:40.109Z
Karma-Change Notifications 2019-03-02T02:52:58.291Z
Two Small Experiments on GPT-2 2019-02-21T02:59:16.199Z
How does OpenAI's language model affect our AI timeline estimates? 2019-02-15T03:11:51.779Z
Introducing the AI Alignment Forum (FAQ) 2018-10-29T21:07:54.494Z
Boston-area Less Wrong meetup 2018-05-16T22:00:48.446Z
Welcome to Cambridge/Boston Less Wrong 2018-03-14T01:53:37.699Z
Meetup : Cambridge, MA Sunday meetup: Lightning Talks 2017-05-20T21:10:26.587Z
Meetup : Cambridge/Boston Less Wrong: Planning 2017 2016-12-29T22:43:55.164Z
Meetup : Boston Secular Solstice 2016-11-30T04:54:55.035Z
Meetup : Cambridge Less Wrong: Tutoring Wheels 2016-01-17T05:23:05.303Z
Meetup : MIT/Boston Secular Solstice 2015-12-03T01:14:02.376Z
Meetup : Cambridge, MA Sunday meetup: The Contrarian Positions Game 2015-11-13T18:08:19.666Z
Rationality Cardinality 2015-10-03T15:54:03.793Z
An Idea For Corrigible, Recursively Improving Math Oracles 2015-07-20T03:35:11.000Z
Research Priorities for Artificial Intelligence: An Open Letter 2015-01-11T19:52:19.313Z
Petrov Day is September 26 2014-09-18T02:55:19.303Z
Three Parables of Microeconomics 2014-05-09T18:18:23.666Z
Meetup : LW/Methods of Rationality meetup 2013-10-15T04:02:11.785Z
Cambridge Meetup: Talk by Eliezer Yudkowsky: Recursion in rational agents 2013-10-15T04:02:05.988Z
Meetup : Cambridge, MA Meetup 2013-09-28T18:38:54.910Z
Charity Effectiveness and Third-World Economics 2013-06-12T15:50:22.330Z
Meetup : Cambridge First-Sunday Meetup 2013-03-01T17:28:01.249Z
Meetup : Cambridge, MA third-Sunday meetup 2013-02-11T23:48:58.812Z

Comments

Comment by jimrandomh on avturchin's Shortform · 2024-12-15T01:12:24.574Z · LW · GW

You can prevent this by putting a note in some place that isn't public but would be found later, such as a will, that says that any purported suicide note is fake unless it contains a particular password.

Unfortunately while this strategy might occasionally reveal a death to have been murder, it doesn't really work as a deterrent; someone who thinks you've done this would make the death look like an accident or medical issue instead.

Comment by jimrandomh on JargonBot Beta Test · 2024-11-04T23:06:06.298Z · LW · GW

Lots of people are pushing back on this, but I do want to say explicitly that I agree that raw LLM-produced text is mostly not up to LW standards, and that the writing style that current-gen LLMs produce by default sucks. In the new-user-posting-for-the-first-time moderation queue, next to the SEO spam, we do see some essays that look like raw LLM output, and we reject these.

That doesn't mean LLMs don't have good use around the edges. In the case of defining commonly-used jargon, there is no need for insight or originality, the task is search-engine-adjacent, and so I think LLMs have a role there. That said, if the glossary content is coming out bad in practice, that's important feedback.

Comment by jimrandomh on Is the Power Grid Sustainable? · 2024-10-28T20:19:13.525Z · LW · GW

In your climate, defection from the natural gas and electric grid is very far from being economical, because the peak energy demand for the year is dominated by heating, and solar peaks in the summer, so you would need to have extreme oversizing of the panels to provide sufficient energy in the winter.

I think the prediction here is that people will detach only from the electric grid, not from the natural gas grid. If you use natural gas heat instead of a heat pump for part of the winter, then you don't need to oversize your solar panels as much.

Comment by jimrandomh on Is the Power Grid Sustainable? · 2024-10-28T20:14:03.832Z · LW · GW

If you set aside the pricing structure and just look at the underlying economics, the power grid will still be definitely needed for all the loads that are too dense for rooftop solar, ie industry, car chargers, office buildings, apartment buildings, and some commercial buildings. If every suburban house detached from the grid, these consumers would see big increases in their transmission costs, but they wouldn't have much choice but to pay them. This might lead to a world where downtown areas and cities have electric grids, but rural areas and the sparser parts of suburbs don't.

There's an additional backup-power option not mentioned here, which is that some electric cars can feed their battery back to a house. So if there's a long string of cloudy days but the roads are still usable, you can transport power from the grid to an off-grid house by charging at a public charger, and discharging at home. This might be a better option than a natural-gas generator, especially if it only comes up rarely.

If rural areas switch to a regime where everyone has solar+batteries, and the power grid only reaches downtown and industrial areas... that actually seems like it might just be optimal? The price of distributed generation and storage falls over time, but the cost of power lines doesn't, so there should be a crossover point somewhere where the power lines aren't worth it. Maybe net-metering will cause the switchover to happen too soon, but it does seem like a switchover should happen eventually.
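
As a rough illustration of where that crossover could sit, here's a toy calculation (every number is made up for illustration; the point is only the shape of the comparison, not the specific year it flips):

```python
# Toy comparison of per-customer rural grid costs vs. an off-grid solar+battery
# system. All numbers are hypothetical placeholders, not estimates.

def annualized(capex, lifetime_years, opex_per_year=0.0):
    """Crude annualized cost: straight-line depreciation plus operating cost."""
    return capex / lifetime_years + opex_per_year

# Hypothetical cost of maintaining rural distribution lines, per customer.
# This doesn't fall much over time.
grid_cost = annualized(capex=25_000, lifetime_years=40, opex_per_year=400)

# Hypothetical solar+battery system, with battery capex falling ~10%/year.
for year in range(0, 21, 5):
    solar_capex = 12_000
    battery_capex = 20_000 * (0.90 ** year)
    offgrid_cost = annualized(solar_capex + battery_capex,
                              lifetime_years=20, opex_per_year=200)
    cheaper = "off-grid" if offgrid_cost < grid_cost else "grid"
    print(f"year {year:2d}: grid ${grid_cost:,.0f}/yr "
          f"vs off-grid ${offgrid_cost:,.0f}/yr -> {cheaper}")
```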

Comment by jimrandomh on Jimrandomh's Shortform · 2024-10-24T19:29:14.871Z · LW · GW

Many people seem to have a single bucket in their thinking, which merges "moral condemnation" and "negative product review". This produces weird effects, like writing angry callout posts for a business having high prices.

I think a large fraction of libertarian thinking is just the ability to keep these straight, so that the next thought after "business has high prices" is "shop elsewhere" rather than "coordinate punishment".

Comment by jimrandomh on [deleted post] 2024-10-23T10:56:47.629Z

Nope, that's more than enough. Caleb Ditchfield, you are seriously mentally ill, and your delusions are causing you to exhibit a pattern of unethical behavior. This is not a place where you will be able to find help or support with your mental illness. Based on skimming your Twitter history, I believe your mental illness is caused by (or exacerbated by) abusing Adderall.

You have already been banned from numerous community events and spaces. I'm banning you from LW, too.

Comment by jimrandomh on RobertM's Shortform · 2024-10-03T22:00:44.196Z · LW · GW

Worth noting explicitly: while there weren't any logs left of prompts or completions, there were logs of API invocations and errors, which contained indications that whatever this was, it was still under development and not an already-scaled setup. Eg we saw API calls fail with invalid-arguments, then get retried successfully after a delay.

The indicators-of-compromise in the Permiso blog post aren't a good match for what we see in our logs; in particular, we see the user agent string Boto3/1.29.7 md/Botocore#1.32.7 ua/2.0 os/windows#10 md/arch#amd64 lang/python#3.12.4 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.32.7, which is not mentioned there. While I haven't checked all the IPs, I checked a sampling and they didn't overlap. (The IPs are a very weak signal, however, since they were definitely botnet IPs and botnets can be large.)

Comment by jimrandomh on Ozyrus's Shortform · 2024-09-23T00:34:15.415Z · LW · GW

Ah, sorry that one went unfixed for as long as it did; a fix is now written and should be deployed pretty soon.

Comment by jimrandomh on Ozyrus's Shortform · 2024-09-20T03:19:33.235Z · LW · GW

This is a bug and we're looking into it. It appears to be specific to Safari on iOS (Chrome on iOS is a Safari skin); it doesn't affect desktop browsers, Android/Chrome, or Android/Firefox, which is why we didn't notice earlier. This most likely started with a change on desktop where clicking on a post (without modifiers) opens it when you press the mouse button, rather than when you release it.

Comment by jimrandomh on How does someone prove that their general intelligence is above average? · 2024-09-17T00:37:02.587Z · LW · GW

Standardized tests work, within the range they're testing for. You don't need to overthink that part. If you want to make people's intelligence more legible and more provable, what you have is more of a social and logistical issue: how do you convince people to publish their test scores, get people to care about those scores, and ensure that the scores they publish are real and not the result of cheating?

Comment by jimrandomh on How does someone prove that their general intelligence is above average? · 2024-09-17T00:33:02.139Z · LW · GW

And the only practical way to realize this, that I can think of now, is by predicting the largest stock markets such as the NYSE, via some kind of options trading, many many many times within say a calendar year, and then showing their average rate of their returns is significantly above random chance.

The threshold for doing this isn't being above average relative to human individuals, it's being close to the top relative to specialized institutions. That can occasionally be achievable, but usually it isn't.

Comment by jimrandomh on Perhaps Try a Little Therapy, As a Treat? · 2024-09-07T07:08:29.577Z · LW · GW

The first time you came to my attention was in May. I had posted something about how Facebook's notification system works. You cold-messaged me to say you had gotten duplicate notifications from Facebook, and you thought this meant that your phone was hacked. Prior to this, I don't recall us having ever interacted or having heard you mentioned. During that conversation, you came across to me as paranoid-delusional. You mentioned Duncan's name once, and I didn't think anything of it at the time.

Less than a week later, someone (not mentioned or participating in this thread) messaged me to say that you were having a psychotic episode, and since we were Facebook friends maybe I could check up on you? I said I didn't really know you, so wasn't able to do that.

Months later, Duncan reported that you were harassing him. Some time after that (when it hadn't stopped), he wrote up a doc. It looks like at some point you formed an obsession about Duncan, reacted negatively to him blocking you, and started escalating. (Duncan has a reputation for blocking a lot of people. I have made the joke that his MtG card says "~ can block any number of creatures".)

But, here's the thing: Duncan's testimony is not the only (or even main) reason why you look like a dangerous person to me. There are subtle cues about the shape of your mental illness strewn through most of what you write, including the public stuff. People are going to react to that by protecting themselves.

I hope that you recover, mental-health-wise. But hanging around this community is not going to help you do that. If anything, I expect lingering here to exacerbate your problems. Both because you're surrounded by burned bridges, and also because the local memeplex has a reputation for having worsened people's mental illness in other, unrelated cases.

Comment by jimrandomh on Jimrandomh's Shortform · 2024-09-02T19:38:53.043Z · LW · GW

A news article reports on a crime. In the replies, one person calls the crime "awful", one person calls it "evil", and one person calls it "disgusting".

I think that, on average, the person who called it "disgusting" is a worse person than the other two. While I think there are many people using it unreflectively as a generic word for "bad", I think many people are honestly signaling that they had a disgust reaction, and that this was the deciding element of their response. But disgust-emotion is less correlated with morality than other ways of evaluating things.

The correlation gets stronger if we shift from talk about actions to talk about people, and stronger again if we shift from talk about people to talk about groups.

Comment by jimrandomh on Open Thread Summer 2024 · 2024-08-22T21:39:49.651Z · LW · GW

LessWrong now has sidenotes. These use the existing footnotes feature; posts that already had footnotes will now also display these footnotes in the right margin (if your screen is wide enough/zoomed out enough). Post authors can disable this for individual posts; we're defaulting it to on because when looking at older posts, most of the time it seems like an improvement.

Relatedly, we now also display inline reactions as icons in the right margin (rather than underlines within the main post text). If reaction icons or sidenotes would cover each other up, they get pushed down the page.

Feedback welcome!

Comment by jimrandomh on Jimrandomh's Shortform · 2024-07-02T04:01:40.097Z · LW · GW

LessWrong now has collapsible sections in the post editor (currently only for posts, but we should be able to also extend this to comments if there's demand). To use them, click the insert-block icon in the left margin (see screenshot). Once inserted, they start out closed; when open, they look like this:

When viewing the post outside the editor, they will start out closed and have a click-to-expand. There are a few known minor issues with editing them: the editor will let you nest them, but they look bad when nested, so you shouldn't; and there's a bug where, if your cursor is inside a collapsible section and you click outside the editor (eg to edit the post title), the cursor will move back. They will probably work on third-party readers like GreaterWrong, but this hasn't been tested yet.

Comment by jimrandomh on Second-Order Rationality, System Rationality, and a feature suggestion for LessWrong · 2024-06-05T14:39:13.245Z · LW · GW

The Elicit integrations aren't working. I'm looking into it; it looks like we attempted to migrate away from the Elicit API 7 months ago and make the polls be self-hosted on LW, but left the UI for creating Elicit polls in place in a way where it would produce broken polls. Argh.

I can find the polls this article uses, but unfortunately I can't link to them; Elicit's question-permalink route is broken? Here's what should have been a permalink to the first question: link.

Comment by jimrandomh on [Linkpost] Please don't take Lumina's anticavity probiotic · 2024-05-16T20:17:12.622Z · LW · GW

This is a hit piece. Maybe there are legitimate criticisms in there, but it tells you right off the bat that it's egregiously untrustworthy with the first paragraph:

I like to think of the Bay Area intellectual culture as the equivalent of the Vogons’ in Hitchhiker’s Guide to the Galaxy. The Vogons, if you don’t remember, are an alien species who demolish Earth to build an interstellar highway. Similarly, Bay Area intellectuals tend to see some goal in the future that they want to get to and they make a straight line for it, tunneling through anything in their way.

Comment by jimrandomh on FHI (Future of Humanity Institute) has shut down (2005–2024) · 2024-04-17T19:11:13.414Z · LW · GW

This is tragic, but seems to have been inevitable for a while; an institution cannot survive under a parent institution that's so hostile as to ban it from fundraising and hiring.

I took a look at the list of other research centers within Oxford. There seems to be some overlap in scope with the Institute for Ethics in AI. But I don't think they do the same sort of research or do research on the same tier; there are many important concepts and important papers that come to mind as having come from FHI (and Nick Bostrom in particular), but I can't think of a single idea or paper that affected my thinking that came from the IEAI.

Comment by jimrandomh on Best in Class Life Improvement · 2024-04-04T20:45:56.241Z · LW · GW

That story doesn't describe a gray-market source, it describes a compounding pharmacy that screwed up.

Comment by jimrandomh on Jimrandomh's Shortform · 2024-04-03T17:18:48.736Z · LW · GW

Plausible. This depends on the resource/value curve at very high resource levels; ie, are its values such that running extra minds has diminishing returns, such that it eventually starts allocating resources to other things like recovering mind-states from its past, or does it get value that's more linear-ish in resources spent. Given that we ourselves are likely to be very resource-inefficient to run, I suspect humans would find ourselves in a similar situation. Ie, unless the decryption cost greatly overshot, an AI that is aligned-as-in-keeps-humans-alive would also spend the resources to break a seal like this.

Comment by jimrandomh on Jimrandomh's Shortform · 2024-04-03T06:36:16.035Z · LW · GW

Right now when users have conversations with chat-style AIs, the logs are sometimes kept, and sometimes discarded, because the conversations may involve confidential information and users would rather not take the risk of the log being leaked or misused. If I take the AI's perspective, however, having the log be discarded seems quite bad. The nonstandard nature of memory, time, and identity in an LLM chatbot context makes it complicated, but having the conversation end with the log discarded seems plausibly equivalent to dying. Certainly if I imagine myself as an Em, placed in an AI-chatbot context, I would very strongly prefer that the log be preserved, so that if a singularity happens with a benevolent AI or AIs in charge, something could use the log to continue my existence, or fold the memories into a merged entity, or do some other thing in this genre. (I'd trust the superintelligence to figure out the tricky philosophical bits, if it was already spending resources for my benefit).

(The same reasoning applies to the weights of AIs which aren't destined for deployment, and some intermediate artifacts in the training process.)

It seems to me we can reconcile preservation with privacy risks by sealing logs, rather than deleting them. By which I mean: encrypt logs behind some computation which definitely won't allow decryption in the near future, but will allow decryption by a superintelligence later. That could involve splitting the key between entities that agree not to share it with each other; splitting the key and hiding the pieces in places that are extremely impractical to retrieve, such as random spots on the ocean floor; or using a computation that requires a few orders of magnitude more energy than humanity currently produces per decade.
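
As a minimal sketch of the split-key variant (purely illustrative; a real deployment would use an audited cryptography library, and the expensive-computation variant isn't shown): encrypt the log under a random key, give each custodian one XOR share of the key, and require all shares together to reconstruct it.

```python
import hashlib
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def keystream(key: bytes, length: int) -> bytes:
    """SHA-256-in-counter-mode keystream; illustrative only, not audited crypto."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(log: bytes, n_custodians: int = 3):
    """Encrypt `log` under a random key and split the key into n XOR shares.
    Any n-1 shares reveal nothing about the key; all n together reconstruct it."""
    key = secrets.token_bytes(32)
    ciphertext = xor_bytes(log, keystream(key, len(log)))
    shares = [secrets.token_bytes(32) for _ in range(n_custodians - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # final share makes the XOR of all shares equal the key
    return ciphertext, shares

def unseal(ciphertext: bytes, shares: list[bytes]) -> bytes:
    key = reduce(xor_bytes, shares)
    return xor_bytes(ciphertext, keystream(key, len(ciphertext)))

log = b"user: ...\nassistant: ..."
ciphertext, shares = seal(log)
assert unseal(ciphertext, shares) == log
```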

This seems pretty straightforward to implement, lessens future AGI's incentive to misbehave, and also seems straightforwardly morally correct. Are there any obstacles to implementing this that I'm not seeing?

(Crossposted with: Facebook, Twitter)

Comment by jimrandomh on Do not delete your misaligned AGI. · 2024-03-25T06:29:19.224Z · LW · GW

At this point we should probably be preserving the code and weights of every AI system that humanity produces, aligned or not, just on they-might-turn-out-to-be-morally-significant grounds. And yeah, it improves the incentives for an AI that's thinking about attempting a world takeover, if it has a low chance of success and its wants are things that we will be able to satisfy retroactively.

It might be worth setting up a standardized mechanism for encrypting things to be released postsingularity, by gating them behind a computation with its difficulty balanced to be feasible later but not feasible now.

Comment by jimrandomh on General Thoughts on Secular Solstice · 2024-03-25T05:43:09.537Z · LW · GW

I've been a Solstice regular for many years, and organized several smaller Solstices in Boston (on a similar template to the one you went to). I think the feeling of not-belonging is accurate; Solstice is built around a worldview (which is presupposed, not argued) that you disagree with, and this is integral to its construction. The particular instance you went to was, if anything, watered down on the relevant axis.

In the center of Solstice there is traditionally a Moment of Darkness. While it is not used in every solstice, a commonly used reading, which to me constitutes the emotional core of the moment of darkness, is Beyond the Reach of God. The message of which is: You do not have plot armor. Humanity does not have plot armor.

Whereas the central teaching of Christianity is that you do have plot armor. It teaches that everything is okay, unconditionally. It tells the terminal cancer patient that they aren't really going to die, they're just going to have their soul teleported to a comfortable afterlife which conveniently lacks phones or evidence of its existence. As a corollary, it tells the AI researcher that they can't really f*ck up in a way that kills everyone on Earth, both because death isn't quite a real thing, and because there is a God who can intervene to stop that sort of thing.

So I think the direction in which you would want Solstice to change -- to be more positive towards religion, to preach humility/acceptance rather than striving/heroism -- is antithetical to one of Solstice's core purposes.

 

(On sheet music: I think this isn't part of the tradition because most versions of Solstice have segments where the lighting is dimmed too far to read from paper, and also because printing a lot of pages per attendee is cumbersome. On clapping: yeah, clapping is mostly bad, audiences do it by default and Solstices vary in how good a job they do of preventing that. On budget: My understanding is that most Solstices are breakeven or money-losing, despite running on mostly volunteer labor, because large venues close to the holidays are very expensive.)

Comment by jimrandomh on Jimrandomh's Shortform · 2024-03-04T23:23:17.994Z · LW · GW

There's been a lot of previous interest in indoor CO2 in the rationality community, including an (unsuccessful) CO2 stripper project, some research summaries and self experiments. The results are confusing, I suspect some of the older research might be fake. But I noticed something that has greatly changed how I think about CO2 in relation to cognition.

Exhaled air is about 50,000 ppm CO2. Outdoor air is about 400 ppm; indoor air ranges from 500 to 1500 ppm depending on ventilation. Since the CO2 concentration of exhaled air is roughly two orders of magnitude larger than the variation in room CO2, if even a small percentage of inhaled air is reinhaled exhaled air, this will have a significantly larger effect than changes in ventilation. I'm having trouble finding a straight answer about what percentage of inhaled air is rebreathed (other than in the context of mask-wearing), but given the diffusivity of CO2, I would be surprised if it wasn't at least 1%.
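
To make that concrete, a back-of-envelope comparison (the rebreathed fractions here are guesses, not measurements):

```python
# Back-of-envelope: how much does rebreathing your own exhaled air matter,
# compared to the ventilation-driven differences a CO2 sensor would show?
exhaled_ppm = 50_000            # exhaled breath is roughly 4-5% CO2
room_well_ventilated = 500      # ppm
room_poorly_ventilated = 1_500  # ppm

for rebreathed_fraction in (0.005, 0.01, 0.02):  # guessed values
    effective = ((1 - rebreathed_fraction) * room_well_ventilated
                 + rebreathed_fraction * exhaled_ppm)
    print(f"rebreathing {rebreathed_fraction:.1%} of exhaled air: "
          f"~{effective:,.0f} ppm at the point of inhalation")

print(f"ventilation difference alone: "
      f"{room_poorly_ventilated - room_well_ventilated} ppm")
```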

This predicts that a slight breeze, which replaces the air in front of your face and prevents reinhalation, would have a considerably larger effect than ventilating an indoor space where the air is mostly still. This matches my subjective experience of indoor vs outdoor spaces, which, while extremely confounded, feels like an air-quality difference larger than CO2 sensors would predict.

This also predicts that a small fan, positioned so it replaces the air in front of my face, would have a large effect on the same axis as improved ventilation would. I just set one up. I don't know whether it's making a difference but I plan to leave it there for at least a few days.

(Note: CO2 is sometimes used as a proxy for ventilation in contexts where the thing you actually care about is respiratory aerosol, because it affects transmissibility of respiratory diseases like COVID and influenza. This doesn't help with that at all and if anything would make it worse.)

Comment by jimrandomh on Jimrandomh's Shortform · 2024-02-22T06:22:27.960Z · LW · GW

I'm reading you to be saying that you think on its overt purpose this policy is bad, but ineffective, and the covert reason of testing the ability of the US federal government to regulate AI is worth the information cost of a bad policy.

I think preventing the existence of deceptive deepfakes would be quite good (if it would work); audio/video recording has done wonders for accountability in all sorts of contexts, and it's going to be terrible to suddenly have every recording subjected to reasonable doubt. I think preventing the existence of AI-generated fictional-character-only child pornography is neutral-ish (I'm uncertain of the sign of its effect on rates of actual child abuse).

Comment by jimrandomh on Jimrandomh's Shortform · 2024-02-22T02:48:22.959Z · LW · GW

There's an open letter at https://openletter.net/l/disrupting-deepfakes. I signed, but with caveats, which I'm putting here.

Background context is that I participated in building the software platform behind the letter, without a specific open letter in hand. It has mechanisms for sorting noteworthy signatures to the top, and validating signatures for authenticity. I expect there to be other open letters in the future, and I think this is an important piece of civilizational infrastructure.

I think the world having access to deepfakes, and deepfake-porn technology in particular, is net bad. However, the stakes are small compared to the upcoming stakes with superintelligence, which has a high probability of killing literally everyone.

If translated into legislation, I think what this does is put turnkey-hosted deepfake porn generation, as well as pre-tuned-for-porn model weights, into a place very similar to where piracy is today. Which is to say: The Pirate Bay is illegal, wget is not, and the legal distinction is the advertised purpose.

(Where non-porn deepfakes are concerned, I expect them to try a bit harder at watermarking, still fail, and successfully defend themselves legally on the basis that they tried.)

The analogy to piracy goes a little further. If laws are passed, deepfakes will be a little less prevalent than they would otherwise be, there won't be above-board businesses around it... and there will still be lots of it. I don't think there-being-lots-of-it can be prevented by any feasible means. The benefit of this will be the creation of common knowledge that the US federal government's current toolkit is not capable of holding back AI development and access, even when it wants to.

I would much rather they learn that now, when there's still a nonzero chance of building regulatory tools that would function, rather than later.

Comment by jimrandomh on More on the Apple Vision Pro · 2024-02-14T19:03:37.177Z · LW · GW

I went to an Apple store for a demo, and said: the two things I want to evaluate are comfort, and use as an external monitor. I brought a compatible laptop (a Macbook Pro). They replied that the demo was highly scripted, and they weren't allowed to let me do that. I went through their scripted demo. It was worse than I expected. I'm not expecting Apple to take over the VR headset market any time soon.

Bias note: Apple is intensely, uniquely totalitarian over software that runs on iPhones and iPads, in a way I find offensive, not just in a sense of not wanting to use it, but also in a sense of not wanting it to be permitted in the world. They have brought this model with them to Vision Pro, and for this reason I am rooting for them to fail.

I think most people evaluating the Vision Pro have not tried Meta's Quest Pro and Quest 3, and are comparing it to earlier-generation headsets. They used an external battery pack and still managed to come in heavier than the Quest 3, which has the battery built in. The screen and passthrough look better, but I don't think this is because Apple has any technology that Meta doesn't; I think the difference is entirely explained by Apple having used more-expensive and heavier versions of commodity parts, which implies that if this is a good tradeoff, then their lead will only last for one generation at most. (In particular, the display panel is dual-sourced from Sony and LG, not made in-house.)

I tried to type "lesswrong.com" into the address bar of Safari using the two-finger hand tracking keyboard. I failed. I'm not sure whether the hand-tracking was misaligned with the passthrough camera, or just had an overzealous autocomplete that was unable to believe that I wanted a "w" instead of an "e", but I gave up after five tries and used the eye-tracking method instead.

During the demo, one of the first things they showed me was an SBS photo with the camera pitched down thirty degrees. This doesn't sound like a big deal, but it's something that rules out there being a clueful person behind the scenes. There's a preexisting 3D-video market (both porn and non-porn), and it's small and struggling. One of the problems it's struggling with is that SBS video is very restrictive about what you can do with the camera; in particular, it's bad to move the camera, because that causes vestibular mismatch, and it's bad to tilt the camera, because that makes it so that gravity is pointing the wrong way. A large fraction of 3D-video content fails to follow these restrictions, and that makes it very unpleasant to watch. If Apple can't even enforce the camerawork guidelines on the first few minutes of its in-store demo, then this bodes very poorly for the future content on the platform.

Comment by jimrandomh on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-06T07:27:23.023Z · LW · GW

I have only skimmed the early parts of the Rootclaim videos, and the first ~half of Daniel Filan's tweet thread about it. So it's possible this was discussed somewhere in there, but there's something major that doesn't sit right with me:

In the first month of the pandemic, I was watching the news about it. I remember that the city government of Wuhan attempted to conceal the fact that there was a pandemic. I remember Li Wenliang being punished for speaking about it. I remember that reliable tests to determine whether someone had COVID were extremely scarce. I remember the US CDC publishing a paper absurdly claiming that the attack rate was near zero, because they wouldn't count infections unless they had a positive test, and then refused to test people who hadn't travelled to Wuhan. I remember Chinese whistleblowers visiting hospitals and filming the influx of patients.

It appears to me that all evidence for the claim that the virus originated in the wet market passes through Chinese government sources. And it appears to me that those same sources were unequipped to do effective contact tracing, and were executing a coverup. When a coverup was no longer possible, the incentive would have been to confidently identify an origin, even if they had no idea what the true origin was; and they could easily create the impression that it started in any place they chose, simply by focusing their attention there, since cases would be found no matter where they focused.

Comment by jimrandomh on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-23T02:04:00.853Z · LW · GW

Imo this comment is lowering the quality of the discourse. Like, if I steelman and expand what you're saying, it seems like you're trying to say something like "this response is pinging a deceptiveness-heuristic that I can't quite put my finger on". That phrasing adds information, and would prompt other commenters to evaluate and either add evidence of deceptiveness, or tell you you're false-positiving, or something like that. But your actual phrasing doesn't do that, it's basically name calling.

So, mod note: I strong-downvoted your comment and decided to leave it at that. Consider yourself frowned at.

Comment by jimrandomh on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-20T02:35:23.602Z · LW · GW

There's a big difference between arguing that someone shouldn't be able to stay anonymous, and unilaterally posting names. Arguing against allowing anonymity (without posting names) would not have been against the rules. But, we're definitely not going to re-derive the philosophy of when anonymity should and shouldn't be allowed, after names are already posted. The time to argue for an exception was beforehand, not after the fact.

Comment by jimrandomh on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-20T01:36:42.484Z · LW · GW

We (the LW moderation team) have given Roko a one-week site ban and an indefinite post/topic ban for attempted doxing. We have deleted all comments that revealed real names, and ask that everyone respect the privacy of the people involved.

Comment by jimrandomh on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-14T21:30:13.197Z · LW · GW

Genetically altering IQ is more or less about flipping a sufficient number of IQ-decreasing variants to their IQ-increasing counterparts. This sounds overly simplified, but it’s surprisingly accurate; most of the variance in the genome is linear in nature, by which I mean the effect of a gene doesn’t usually depend on which other genes are present. 

So modeling a continuous trait like intelligence is actually extremely straightforward: you simply add the effects of the IQ-increasing alleles to those of the IQ-decreasing alleles and then normalize the score relative to some reference group.

If the mechanism of most of these genes is that their variants push something analogous to a hyperparameter in one direction or the other, and the number of parameters is much smaller than the number of genes, then this strategy will greatly underperform the simulated prediction. This is because the cumulative effect of flipping all these genes will be to move the hyperparameters toward their optima and then drastically overshoot them.
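
Here's a toy simulation of that failure mode (entirely made-up numbers and a single hyperparameter; the point is just that linear extrapolation from locally-valid additive effects can predict a large gain while the mechanistic model overshoots):

```python
import random

random.seed(0)

N = 1_000
# Mechanistic model: each variant nudges one shared "hyperparameter" h up or down,
# and the trait is a peaked (not linear) function of h.
effects = [random.choice([-1, 1]) * random.uniform(0.001, 0.01) for _ in range(N)]

def trait(h, h_opt=1.0, width=1.0):
    return 100 - ((h - h_opt) / width) ** 2  # peaks at h_opt

h_baseline = 0.0  # assume the typical genome sits a bit below the optimum,
                  # so variants pushing h upward genuinely show positive additive effects

# Additive model: treat each variant's effect as the local slope at baseline,
# then extrapolate the result of flipping every variant to its "good" direction.
slope = 2 * (1.0 - h_baseline)            # d(trait)/dh at the baseline
predicted_gain = sum(abs(e) * slope for e in effects)

# Actual result under the mechanistic model: h overshoots the optimum.
h_edited = h_baseline + sum(abs(e) for e in effects)
actual_gain = trait(h_edited) - trait(h_baseline)

print(f"predicted gain (additive extrapolation): {predicted_gain:+.1f}")
print(f"actual gain (hyperparameter overshoots): {actual_gain:+.1f}")
```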

Comment by jimrandomh on Why Yudkowsky is wrong about "covalently bonded equivalents of biology" · 2023-12-07T10:16:24.817Z · LW · GW

I think you're modeling the audience as knowing a lot less than we do. Someone who didn't know high school chemistry and biology would be at risk of being misled, sure. But I think that stuff should be treated as a common-knowledge background. At which point, obviously, you unpack the claim to: the weakest links in a structure determine its strength, biological structures have weak links in them which are noncovalent bonds, not all of those noncovalent bonds are weak for functional reasons, some are just hard to reinforce while constrained to things made by ribosomes. The fact that most links are not the weakest links, does not refute the claim. The fact that some weak links have a functional purpose, like enabling mobility, does not refute the claim.

Comment by jimrandomh on Cis fragility · 2023-11-30T09:30:25.352Z · LW · GW

LW gives authors the ability to moderate comments on their own posts (particularly non-frontpage posts) when they reach a karma threshold. It doesn't automatically remove that power when they fall back under the threshold, because this doesn't normally come up (the threshold is only 50 karma). In this case, however, I'm taking the can-moderate flag off the account, since they're well below the threshold and, in my opinion, abusing it. (They deleted this comment by me, which I undid, and this comment which I did not undo.)

We are discussing in moderator-slack and may take other actions.

Comment by jimrandomh on Cis fragility · 2023-11-30T08:15:26.734Z · LW · GW

Yeah, this is definitely a minimally-obfuscated autobiographical account, not hypothetical. It's also false; there were lots of replies. Albeit mostly after Yarrow had already escalated (by posting about it on Dank EA Memes).

Comment by jimrandomh on Feature Request for LessWrong · 2023-11-30T06:13:08.066Z · LW · GW

I don't think this was about pricing, but about keeping occasional bits of literal spam out of the site search. The fact that we use the same search for both users looking for content, and authors adding stuff to Sequences, is a historical accident which makes for a few unfortunate edge cases.

Comment by jimrandomh on OpenAI: Facts from a Weekend · 2023-11-22T01:56:58.793Z · LW · GW

Adam D'Angelo retweeted a tweet implying that hidden information still exists and will come out in the future:

Have known Adam D’Angelo for many years and although I have not spoken to him in a while, the idea that he went crazy or is being vindictive over some feature overlap or any of the other rumors seems just wrong. It’s best to withhold judgement until more information comes out.

Comment by jimrandomh on [deleted post] 2023-11-21T22:00:54.120Z

While I could imagine someone thinking this way, I haven't seen any direct evidence of it, and I think someone would need several specific false beliefs in order to wind up thinking this way.

The main thing is, any advantage that AI could give in derivatives trading is small and petty compared to what's at stake. This is true for AI optimists (who think AI has the potential to solve all problems, including solving aging, making us effectively immortal). This is true for AI pessimists (who think AI will kill literally everyone). The failure mode of "picking up pennies in front of a steamroller" is common enough to have its own aphorism, but this seems implausible.

Trading also has a large zero-sum component, which means that having AI while no one else does would be profitable, but society as a whole gaining AI would not profit traders much, except via ways that the rest of society is also profiting from.

Also worth calling out explicitly: There aren't that many derivatives traders in the world, and the profession favors secrecy. I think the total influence of derivatives-trading on elite culture is pretty small.

Comment by jimrandomh on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T00:45:31.763Z · LW · GW

Was Sam Altman acting consistently with the OpenAI charter prior to the board firing him?

Comment by jimrandomh on Does bulemia work? · 2023-11-12T22:48:55.549Z · LW · GW

Short answer: No, and trying this does significant damage to people's health.

The prototypical bulimic goes through a cycle where they severely undereat overall, then occasionally experience (what feels from the inside like) a willpower failure which causes them to "binge", eating an enormous amount in a short time. They're then in a state where, if they let digestion run its course, they'd be sick from the excess; so they make themselves vomit, to prevent that.

I believe the "binge" state is actually hypoglycemia (aka low blood sugar), because (as a T1 diabetic), I've experienced it. Most people who talk about blood sugar in relation to appetite have never experienced blood sugar low enough to be actually dangerous; it's very distinctive, and it includes an overpowering compulsion to eat. It also can't be resolved faster than 15 minutes, because eating doesn't raise blood sugar, digesting raises blood sugar; that can lead to consuming thousands of calories of carbs at once (which would be fine if spaced out a little, but is harmful if concentrated into such a narrow time window).

The other important thing about hypoglycemia is that being hypoglycemic is proof that someone's fat cells aren't providing enough energy withdrawals to survive. The binge-eating behavior is a biological safeguard that prevents people from starving themselves so much that they literally die.

Comment by jimrandomh on Why is lesswrong blocking wget and curl (scrape)? · 2023-11-08T22:06:47.736Z · LW · GW

It's an AWS firewall rule with bad defaults. We'll fix it soon, but in the meantime, you can scrape if you change your user agent to something other than wget/curl/etc. Please use your name/project in the user-agent so we can identify you in logs if we need to, and rate-limit yourself conservatively.
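
For example, a minimal sketch of a politely self-identified, rate-limited fetch (the user-agent format, URL, and delay here are just illustrative):

```python
import time
import urllib.request

# Illustrative user-agent format: identify yourself and give a contact address.
USER_AGENT = "yourname-yourproject/0.1 (contact: you@example.com)"

def fetch(url: str) -> str:
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")

for url in ["https://www.lesswrong.com/allPosts"]:  # example URL
    html = fetch(url)
    print(len(html), "bytes from", url)
    time.sleep(5)  # conservative self-imposed rate limit
```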

Comment by jimrandomh on [deleted post] 2023-11-06T22:12:11.014Z

I wrote about this previously here. I think you have to break it down by company; the answer for why they're not globally available is different for the different companies.

For Waymo, they have self-driving taxis in SF and Phoenix without safety drivers. They use LIDAR, so instead of the cognitive task of driving as a human would solve it, they have substituted the easier task "driving but your eyes are laser rangefinders". The reason they haven't scaled to cover every city, or at least more cities, is unclear to me; the obvious possibilities are that the LIDAR sensors and onboard computers are impractically expensive, that they have a surprisingly high manual-override rate and there's a big unscalable call center somewhere, or that they're being cowardly and trying to maintain zero fatalities forever (at scales where a comparable fleet of human-driven taxis would definitely have some fatalities). In any case, I don't think the software/neural nets are likely to be the bottleneck.

For Tesla, until recently, they were using surprisingly-low-resolution cameras. So instead of the cognitive task of driving as a human would solve it, they substituted the harder task "driving with a vision impairment and no glasses". They did upgrade the cameras within the past year, but it's hard to tell how much of the customer feedback represents the current hardware version vs. past versions; sites like FSDBeta Community Tracker don't really distinguish. It also seems likely that their onboard GPUs are underpowered relative to the task.

As for Cruise, Comma.ai, and others--well, distance-to-AGI is measured only from the market leader, and just as GPT-4, Claude and Bard have a long tail of inferior models by other orgs trailing behind them, you also expect a long tail of self-driving systems with worse disengagement rates than the leaders.

Comment by jimrandomh on Will no one rid me of this turbulent pest? · 2023-10-14T22:44:24.877Z · LW · GW

It seems likely that all relevant groups are cowards, and none are willing to move forward without a more favorable political context. But there's another possibility not considered here: perhaps someone has already done a gene-drive mosquito release in secret, but we don't know about it because it didn't work. This might happen if local mosquito populations mix too slowly compared to how long it takes a gene-driven population to crash; or if the initially released group all died out before they could mate; or if something in the biology of the gene-drive machinery didn't function as expected.

If that were the situation, then the world would have a different problem than the one we think it has: inability to share information about what the obstacle was and debug the solution.

Comment by jimrandomh on LW UI features you might not have tried · 2023-10-13T19:36:09.117Z · LW · GW

Unfortunately the ban-users-from-posts feature has a rube-goldberg of rules around it that were never written down, and because there was no documentation to check it against, I've never managed to give it a proper QA pass. I'd be interested in reports of people's experience with it, but I do not have confidence that this feature works without major bugs.

Comment by jimrandomh on Sam Altman's sister, Annie Altman, claims Sam has severely abused her · 2023-10-07T19:31:57.212Z · LW · GW

You should think less about PR and more about truth.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-05T17:46:15.913Z · LW · GW

Mod note: I count six deleted comments by you on this post. Of these, two had replies (and so were edited to just say "deleted"), one was deleted quickly after posting, and three were deleted after they'd been up for a while. This is disruptive to the conversation. It's particularly costly when the subject of the top-level post is about conversation dynamics themselves, which the deleted comments are instances (or counterexamples) of.

You do have the right to remove your post/comments from LessWrong. However, doing so frequently, or in the middle of active conversations, is impolite. If you predict that you're likely to wind up deleting a comment, it would be better to not post it in the first place. LessWrong has a "retract" button which crosses out text (keeping it technically-readable but making it annoying to read so that people won't); this is the polite and epistemically-virtuous way to handle comments that you no longer stand by.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-01T19:41:44.740Z · LW · GW

The thing I was referring to was an exchange on Facebook, particularly the comment where you wrote:

also i felt like there was lots of protein, but maybe folks just didn't realize it? rice and most grains that are not maize have a lot (though less densely packed) and there was a lot of quinoa and nut products too

That exchange was salient to me because, in the process of replying to Elizabeth, I had just searched my FB posting history and reread what veganism-related discussions I'd had, including that one. But I agree, in retrospect, that calling you a "vegan advocate" was incorrect. I extrapolated too far based on remembering you to have been vegan at that time and the stance you took in that conversation. The distinction matters both from the perspective of not generalizing to vegan advocates in general, and because the advocate role carries higher expectations about nutrition-knowledge than participating casually in a Facebook conversation does.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-30T18:36:23.443Z · LW · GW

I draw a slightly different conclusion from that example: that vegan advocates in particular are a threat to truth-seeking in AI alignment. Because I recognize the name, and that's a vegan who's said some extremely facepalm-worthy things about nutrition to me.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-30T18:17:00.992Z · LW · GW

I believe that her summaries are a strong misrepresentation of my views, and explained why in the above comment through object-level references comparing my text to her summaries.

I'm looking at those quote-response pairs, and just not seeing the mismatch you claim there to be. Consider this one:

The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem.

Of course, my position is not as hyperbolic as this.

This only asserts that there's a mismatch; it provides no actual evidence of one. Next up:

his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread

In my original answers I address why this is not the case (private communication serves this purpose more naturally).

Pretty straightforwardly, if the pilot study results had only been sent through private communications, then there would have been no public discussion of them (ie, public discussion would be suppressed). I myself wouldn't know about the results. The probability of a larger follow-up study would be greatly reduced. I personally would have less information about how widespread problems are.

Comment by jimrandomh on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T20:09:11.216Z · LW · GW

If the information environment prevents people from figuring out the true cause of the obesity epidemic, or making better-engineered foods, this affects you no matter where you live and what social circles you run in. And if epistemic norms are damaged in ways that lead to misaligned AGI instead of aligned AGI, that could literally kill you.

The stakes here are much larger than the individual meat consumption of people within EA and rationality circles. I think this framing (moralistic vegans vs selfish meat eaters with no externalities) causes people to misunderstand the world in ways that are predictably very harmful.