Posts

clone of saturn's Shortform 2020-03-08T21:27:47.577Z
GreaterWrong Arbital Viewer 2019-06-28T06:32:22.278Z
What societies have ever had legal or accepted blackmail? 2019-03-17T09:16:55.560Z
An alternative way to browse LessWrong 2.0 2018-02-19T01:52:06.462Z

Comments

Comment by clone of saturn on Heresies in the Shadow of the Sequences · 2024-11-15T12:15:49.669Z · LW · GW

Any agent that makes decisions has an implicit decision theory, it just might not be a very good one. I don't think anyone ever said advanced decision theory was required for AGI, only for robust alignment.

Comment by clone of saturn on Seven lessons I didn't learn from election day · 2024-11-15T07:08:28.194Z · LW · GW

The second reason that I don’t trust the neighbor method is that people just… aren’t good at knowing who a majority of their neighbors are voting for.

This seems like a point in favor of the neighbor method, not against it. You would want people to find "who are my neighbors voting for?" too difficult to readily answer and so mentally replace it with the simpler question "who am I voting for?", thus giving them a plausibly deniable way to admit to voting for Trump.

Comment by clone of saturn on The Alignment Trap: AI Safety as Path to Power · 2024-10-31T04:06:56.853Z · LW · GW

Can anyone lay out a semi-plausible scenario where humanity survives but isn't dominated by an AI or posthuman god-king? I can't really picture it. I always thought that's what we were going for since it's better than being dead.

Comment by clone of saturn on Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more · 2024-09-18T02:44:13.451Z · LW · GW

I would guess most of them just want their screen readers to work, but a badly written law assigns the responsibility for fixing it to the wrong party, probably due to excessive faith in Coase's theorem.

Comment by clone of saturn on Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more · 2024-09-18T01:55:24.728Z · LW · GW

I would guess it's because the Americans with Disabilities Act provides a private right of action against businesses whose websites are not accessible to people with disabilities, but doesn't say anything about screen reader software bugs.

Comment by clone of saturn on Altruism and Vitalism Aren't Fellow Travelers · 2024-08-09T23:26:50.744Z · LW · GW

Why is it assumed that there's a dichotomy between expressing strength or creative genius and helping others? It seems like the truly excellent would have no problem doing both, and if the only way you can express your vitality is by keeping others in poverty, that actually seems kind of sad and pathetic and not very excellent.

Comment by clone of saturn on Relativity Theory for What the Future 'You' Is and Isn't · 2024-07-29T09:20:08.455Z · LW · GW

Note that the continuity you feel is strictly backwards-looking; we have no way to call up the you of a year ago to confirm that he still agrees that he's continuous with the you of now. In fact, he is dead, having been destructively transformed into the you of now. So what makes one destructive transformation different from another, as long as the resulting being continues believing he is you?

Comment by clone of saturn on Universal Basic Income and Poverty · 2024-07-29T00:25:45.768Z · LW · GW

From what I understand, they are using a forked version of Nitter which uses fully registered accounts rather than temporary anonymous access tokens, and sourcing those accounts from various shady websites that sell them in bulk.

Comment by clone of saturn on Universal Basic Income and Poverty · 2024-07-26T23:13:08.289Z · LW · GW

Please be aware of https://xcancel.com/ESYudkowsky/ https://nitter.privacydev.net/ESYudkowsky/ https://nitter.poast.org/ESYudkowsky/

Comment by clone of saturn on What are the actual arguments in favor of computationalism as a theory of identity? · 2024-07-21T05:46:32.663Z · LW · GW

Based on this comment I guess by "existing" you mean phenomenal consciousness and by "awareness" you mean behavior? I think the set of brainlike things that have the same phenomenal consciousness as me is a subset of the brainlike things that have the same behavior as me.

Comment by clone of saturn on What are the actual arguments in favor of computationalism as a theory of identity? · 2024-07-21T04:20:03.687Z · LW · GW

There seems to generally be a ton of arbitrary path-dependent stuff everywhere in biology that evolution hasn't yet optimized away, and I don't see a reason to expect the brain's implementation of consciousness to be an exception.

Comment by clone of saturn on What are the actual arguments in favor of computationalism as a theory of identity? · 2024-07-21T04:06:51.261Z · LW · GW

If it's immediate enough that all the copies end up indistinguishable, with the same memories of the copying process, then uniform, otherwise not uniform.

Comment by clone of saturn on What are the actual arguments in favor of computationalism as a theory of identity? · 2024-07-20T04:37:07.761Z · LW · GW

I think the standard argument that quantum states are not relevant to cognitive processes is The importance of quantum decoherence in brain processes. This is enough to convince me that going through a classical teleporter or copying machine would preserve my identity, and in the case of a copying machine I would experience an equal subjective probability of coming out as the original or the copy. It also seems to strongly imply that mind uploading into some kind of classical artificial machine is possible, since it's unlikely that all or even most of the classical properties of the brain are essential. I agree that there's an open question about whether mind emulation on any arbitrary substrate (like, for instance, software running on CMOS computer chips) preserves identity even if it shows the same behavior as the original.

Comment by clone of saturn on What Other Lines of Work are Safe from AI Automation? · 2024-07-13T04:50:41.171Z · LW · GW

You missed what I think would be by far the largest category, regulatory capture: jobs where the law specifically requires a human to do a particular task, even if it's just putting a stamp of approval on an AI's work. There are already a lot of these, but it seems like it would be a good idea to create even more, and add rate limits to existing ones.

Comment by clone of saturn on What if a tech company forced you to move to NYC? · 2024-06-10T01:19:59.574Z · LW · GW

A big difference is that assuming you’re talking about futures in which AI hasn’t caused catastrophic outcomes, no one will be forcibly mandated to do anything.

Why do you believe this? It seems to me that in the unlikely event that the AI doesn't exterminate humanity, it's much more likely to be aligned with the expressed values of whoever has their hands on the controls at the moment of no return than with an overriding commitment to universal individual choice.

Comment by clone of saturn on Real Life Sort by Controversial · 2024-05-29T02:09:33.517Z · LW · GW

None of these seem like actual scissor statements, just taking a side in well-known controversies using somewhat obnoxious language. This seems to be a general property of RLHF-trained models: they are more interested in playing up an easily recognizable stereotype related to the question, one that triggers cognitively lazy users to click the thumbs-up via the mere-exposure effect, than in actually doing what was asked.

Comment by clone of saturn on Intransitive Trust · 2024-05-28T17:23:47.715Z · LW · GW

The mammogram problem is different because you're only trying to determine whether a specific woman has cancer, not whether cancer exists at all as a phenomenon. If Bob was abducted by aliens, it implies that alien abduction is real, but the converse isn't true. You either need to do two separate Bayesian updates (what's the probability that Bob was abducted given his experience, and then what's the probability of aliens given the new probability that Bob was abducted), or you need a joint distribution covering all possibilities (Bob not abducted, aliens not real; Bob not abducted, aliens real; Bob abducted, aliens real).
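The joint-distribution approach described above can be sketched numerically. This is only an illustration: every probability below is a made-up assumption, chosen just to show the mechanics of updating over the three possible states.

```python
# Joint prior over (aliens real, Bob abducted).
# "Abducted but no aliens" is impossible, so only three states remain.
# All numbers are illustrative assumptions, not claims about real probabilities.
prior = {
    ("no aliens", "not abducted"): 0.90,
    ("aliens",    "not abducted"): 0.0999,
    ("aliens",    "abducted"):     0.0001,
}

# Assumed likelihood of Bob reporting a vivid abduction experience under each state.
likelihood = {
    ("no aliens", "not abducted"): 0.001,  # false memory, dream, hoax, etc.
    ("aliens",    "not abducted"): 0.001,
    ("aliens",    "abducted"):     0.99,
}

# Bayes' rule: posterior is proportional to prior times likelihood, then normalize.
unnorm = {state: prior[state] * likelihood[state] for state in prior}
z = sum(unnorm.values())
posterior = {state: p / z for state, p in unnorm.items()}

# Marginalize to answer both questions at once.
p_abducted = posterior[("aliens", "abducted")]
p_aliens = posterior[("aliens", "not abducted")] + p_abducted
print(f"P(Bob abducted | report) = {p_abducted:.3f}")
print(f"P(aliens real  | report) = {p_aliens:.3f}")
```

With these toy numbers, Bob's report moves P(aliens real) well above P(Bob abducted), since "aliens real but Bob not abducted" also gains probability mass; doing a single naive update on just one of the two propositions would miss that.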

Comment by clone of saturn on Some "meta-cruxes" for AI x-risk debates · 2024-05-20T01:28:11.992Z · LW · GW

I would add

Conflict theory vs. comparative advantage

Is it possible for the wrong kind of technological development to make things worse, or does anything that increases aggregate productivity always make everyone better off in the long run?

Cosmopolitanism vs. human protectionism

Is it acceptable, or good, to let humans go extinct if they will be replaced by an entity that's more sophisticated or advanced in some way, or should humans defend humanity simply because we're human?

Comment by clone of saturn on What you really mean when you claim to support “UBI for job automation”: Part 1 · 2024-05-14T08:19:08.940Z · LW · GW

You're equivocating between real economic costs and nominal amounts of money transferred. Most of that $4 trillion is essentially fictional, taxed back again as soon as it's paid.

Comment by clone of saturn on Do I count as e/acc for exclusion purposes? · 2024-04-05T00:53:56.046Z · LW · GW

I'm not in Berkeley and I have no direct knowledge of Berkeley parties, but a certain level of contempt or revulsion toward e/acc seems pretty universal among the LW-aligned people I know. I have no reason to doubt that there's no explicit rule against e/accs showing up at Berkeley parties, as others have said. I personally wouldn't feel entirely comfortable at a party with a lot of e/accs.

Comment by clone of saturn on Do I count as e/acc for exclusion purposes? · 2024-04-02T03:09:32.447Z · LW · GW

I think the desire to exclude e/accs is mainly because of their attitude that human extinction is acceptable or even desirable, not because of the specifics of what regulatory actions they support. So how do you feel about human extinction?

Comment by clone of saturn on Toward a Broader Conception of Adverse Selection · 2024-03-15T01:35:44.909Z · LW · GW

I use eBay somewhat regularly, and I've found that most of the time I get what I expected at a reasonable price. So I find the theory that I should always regret participation in any auction somewhat dubious.

Comment by clone of saturn on [deleted post] 2024-01-08T05:40:56.087Z
Comment by clone of saturn on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-25T08:00:46.042Z · LW · GW

I think the distinction is that even for plant cultivars and pharmaceuticals, we can straightforwardly circumscribe the potential danger, e.g. a pharmaceutical will not endanger people unless they take it, and a new plant cultivar will not resist our attempts to control it outside of the usual ways plants behave. That's not necessarily the case with an AI that's smarter than us.

Comment by clone of saturn on [deleted post] 2023-12-14T17:53:21.410Z
Comment by clone of saturn on [deleted post] 2023-12-14T17:53:08.206Z
Comment by clone of saturn on [deleted post] 2023-12-14T17:46:50.909Z
Comment by clone of saturn on Do websites and apps actually generally get worse after updates, or is it just an effect of the fear of change? · 2023-12-11T22:32:13.010Z · LW · GW

As a control, you could look at Craigslist, which hasn't changed its appearance for about 25 years, but is still the most popular website in its category according to SimilarWeb.

Comment by clone of saturn on [deleted post] 2023-11-20T02:32:05.333Z
Comment by clone of saturn on Book Review: Going Infinite · 2023-10-26T20:07:19.456Z · LW · GW

I think most organizations the size of EA have formal accountability mechanisms that attempt to investigate claims of fraud and abuse in some kind of more-or-less fair and transparent way. Of course, the actual fairness and effectiveness of such mechanisms can be debated, but at least the need for them is acknowledged. The attitude among EAs, on the other hand, seems to be that EAs are all too smart and good to have any real need for accountability.

Comment by clone of saturn on Book Review: Going Infinite · 2023-10-26T02:25:07.164Z · LW · GW

I guess I should know better by now, but it still astonishes me that EAs can set such abysmally low standards for themselves while simultaneously representing themselves as dramatically more ethical than everyone else.

Comment by clone of saturn on The U.S. is becoming less stable · 2023-08-19T03:03:33.128Z · LW · GW

It seems like anger against the exact kind of neoliberal technocracy you propose was a major source of energy for the 2016 Trump campaign, as well as the highly contentious Sanders campaigns.

Comment by clone of saturn on Why You Should Never Update Your Beliefs · 2023-07-29T01:59:59.197Z · LW · GW

After carefully considering your arguments, I've decided that you are right. Therefore, I won't update my current belief that I should sometimes update my beliefs.

Comment by clone of saturn on How to use DMT without going insane: On navigating epistemic uncertainty in the DMT memeplex · 2023-07-24T09:16:14.776Z · LW · GW

Why would a sane person want to use DMT?

Comment by clone of saturn on Some reasons to not say "Doomer" · 2023-07-10T00:09:39.768Z · LW · GW

Unfortunately, I think the tribalization and politicization are caused by the share-with-followers social media model, not by specific words, so using or not using the word "doomer" will have a negligible effect on the amount of tribalization. You just have to accept that people who insist on using Twitter will have their sanity eroded in this way, and do what you can to compartmentalize the damage and avoid becoming a target.

Comment by clone of saturn on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2023-06-25T19:39:57.857Z · LW · GW

I think the causality runs the other way though; people who are crazy and grandiose are likely to come up with spurious theories to justify actions they wanted to take anyway. Experience and imitation show us that non-crazy people successfully use theories to do non-crazy things all the time, so much so that you probably take it for granted.

Comment by clone of saturn on Why am I Me? · 2023-06-25T18:43:05.031Z · LW · GW

I feel like I'm still the same person as I was before I learned how many humans were born earlier than me. I think that's all you need for the Doomsday Argument to go through.

Comment by clone of saturn on AI Rights: In your view, what would be required for an AGI to gain rights and protections from the various Governments of the World? · 2023-06-11T07:56:59.210Z · LW · GW

AI should never have rights. Any AI that would have moral patienthood should not be created.

Comment by clone of saturn on What is it like to be a compatibilist? · 2023-05-06T00:25:48.580Z · LW · GW

It doesn't bother me, because I'm me, with the propensity to make the choices I'm determined to make. If I had chosen otherwise, I would not be me.

Suppose I love chocolate ice cream and hate vanilla ice cream. When I choose to eat chocolate ice cream, it's an expression of the fact that I prefer chocolate ice cream. I have free will in the sense that if I preferred vanilla instead, I could have chosen vanilla, but in fact I prefer chocolate so I won't choose vanilla.

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-16T07:50:20.600Z · LW · GW

Ok, I edited the comment.

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-15T05:46:58.906Z · LW · GW

Why not just use the original sentence, with only the name changed? I don't see what is supposed to be accomplished by the other substitutions.

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-15T03:54:50.711Z · LW · GW

I see. If the issue here is only with extended socratic dialogues, rather than any criticism which is perceived as low-effort, that wasn't clear to me. I wouldn't be nearly as opposed to banning the former, if that could be operationalized in a reasonable way.

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-15T03:30:49.802Z · LW · GW

I can't read Duncan's mind and have no direct access to facts about his ultimate motivations. I can be much more confident that a person who is currently getting away with doing X has reason to dislike a rule that would prevent X. So the "I suspect" was much more about the second clause than the first. I find this so obvious that it never occurred to me that it could be read another way.

I don't accept Duncan's stand-in sentence "I suspect that Eric won't like the zoo, because he wants to stay out of the sun." as being properly analogous, because staying out of the sun is not something people typically need to hide or deny.

To be honest, I think I have to take this exchange as further evidence that Duncan is operating in bad faith. (Within this particular conflict, not necessarily in general.)

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-15T00:54:07.624Z · LW · GW

I'll go along with whatever rules you decide on, but that seems like an extremely long time to wait for basic clarifications like "what did you mean by this word" or "can you give a real-world example".

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-14T20:24:56.224Z · LW · GW

One technical solution that occurs to me is to allow explicitly marking a post as half-baked, and therefore only open to criticism that comes along with substantial effort towards improving the post, or fully-baked and open to any criticism. However, I suspect that Duncan won't like this idea, because [edit: I suspect that] he wants to maintain a motte-and-bailey where his posts are half-baked when someone criticizes them but fully-baked when it's time to apportion status.

Comment by clone of saturn on What problems do African-Americans face? An initial investigation using Standpoint Epistemology and Surveys · 2023-03-14T08:23:55.170Z · LW · GW

Do you think the original proponents of Standpoint Epistemology would agree that it's simply a logical consequence of Aumann's agreement theorem?

Comment by clone of saturn on Short Notes on Research Process · 2023-02-23T07:13:04.880Z · LW · GW
Comment by clone of saturn on Escape Velocity from Bullshit Jobs · 2023-01-11T23:37:45.844Z · LW · GW

These statements seem awfully close to being unfalsifiable. The amount of research and development coming from Twitter in the 5 years before the acquisition was already pretty much negligible, so there's no difference there. How long do we need to wait for lawsuits or loss of clients to cause observable consequences?

Comment by clone of saturn on Escape Velocity from Bullshit Jobs · 2023-01-11T07:28:50.185Z · LW · GW

Twitter recently fired a majority of its workforce (I've seen estimates from 50% to 90%) and seems to be chugging along just fine. This strongly implies that at least that many jobs were bullshit, but it's unlikely that the new management was able to perfectly identify all bullshitters, so it's only a lower bound. Sometimes contributions can be illegible, but there are also extremely strong incentives to obfuscate.

Comment by clone of saturn on Escape Velocity from Bullshit Jobs · 2023-01-11T06:37:44.468Z · LW · GW

Suppose the IRS requires 100 pages of paperwork per employee. This used to take 10 hours. Now with GPT-4, as a thought experiment, let’s say it takes 1 hour.

The long run result might be 500 pages of more complicated paperwork that takes 10 hours even with GPT-4, while accomplishing nothing. That still will take time. It is not so easy or fast to come up with 400 more pages. I’d assume that would take at least a decade.

This seems to neglect the possibility that GPT-4 could be used, not just to accomplish bullshit tasks, but also to invent new bullshit tasks much faster than humans could.