Posts

clone of saturn's Shortform 2020-03-08T21:27:47.577Z
GreaterWrong Arbital Viewer 2019-06-28T06:32:22.278Z
What societies have ever had legal or accepted blackmail? 2019-03-17T09:16:55.560Z
An alternative way to browse LessWrong 2.0 2018-02-19T01:52:06.462Z

Comments

Comment by clone of saturn on Do I count as e/acc for exclusion purposes? · 2024-04-05T00:53:56.046Z · LW · GW

I'm not in Berkeley and have no direct knowledge of Berkeley parties, but a certain level of contempt or revulsion toward e/acc seems pretty universal among the LW-aligned people I know. As others have said, there's no explicit rule against e/accs showing up at Berkeley parties, and I have no reason to doubt that; still, I personally wouldn't feel entirely comfortable at a party with a lot of e/accs.

Comment by clone of saturn on Do I count as e/acc for exclusion purposes? · 2024-04-02T03:09:32.447Z · LW · GW

I think the desire to exclude e/accs is mainly because of their attitude that human extinction is acceptable or even desirable, not because of the specifics of what regulatory actions they support. So how do you feel about human extinction?

Comment by clone of saturn on Toward a Broader Conception of Adverse Selection · 2024-03-15T01:35:44.909Z · LW · GW

I use eBay somewhat regularly, and I've found that most of the time I get what I expected at a reasonable price. So I find the theory that I should always regret participation in any auction somewhat dubious.

Comment by clone of saturn on [deleted post] 2024-01-08T05:40:56.087Z
Comment by clone of saturn on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-25T08:00:46.042Z · LW · GW

I think the distinction is that even for plant cultivars and pharmaceuticals, we can straightforwardly circumscribe the potential danger, e.g. a pharmaceutical will not endanger people unless they take it, and a new plant cultivar will not resist our attempts to control it outside of the usual ways plants behave. That's not necessarily the case with an AI that's smarter than us.

Comment by clone of saturn on [deleted post] 2023-12-14T17:53:21.410Z
Comment by clone of saturn on [deleted post] 2023-12-14T17:53:08.206Z
Comment by clone of saturn on [deleted post] 2023-12-14T17:46:50.909Z
Comment by clone of saturn on Do websites and apps actually generally get worse after updates, or is it just an effect of the fear of change? · 2023-12-11T22:32:13.010Z · LW · GW

As a control, you could look at Craigslist, which hasn't changed its appearance for about 25 years, but is still the most popular website in its category according to SimilarWeb.

Comment by clone of saturn on [deleted post] 2023-11-20T02:32:05.333Z
Comment by clone of saturn on Book Review: Going Infinite · 2023-10-26T20:07:19.456Z · LW · GW

I think most organizations the size of EA have formal accountability mechanisms that attempt to investigate claims of fraud and abuse in some kind of more-or-less fair and transparent way. Of course, the actual fairness and effectiveness of such mechanisms can be debated, but at least the need for them is acknowledged. The attitude among EAs, on the other hand, seems to be that EAs are all too smart and good to have any real need for accountability.

Comment by clone of saturn on Book Review: Going Infinite · 2023-10-26T02:25:07.164Z · LW · GW

I guess I should know better by now, but it still astonishes me that EAs can set such abysmally low standards for themselves while simultaneously representing themselves as dramatically more ethical than everyone else.

Comment by clone of saturn on The U.S. is becoming less stable · 2023-08-19T03:03:33.128Z · LW · GW

It seems like anger against the exact kind of neoliberal technocracy you propose was a major source of energy for the 2016 Trump campaign, as well as the highly contentious Sanders campaigns.

Comment by clone of saturn on Why You Should Never Update Your Beliefs · 2023-07-29T01:59:59.197Z · LW · GW

After carefully considering your arguments, I've decided that you are right. Therefore, I won't update my current belief that I should sometimes update my beliefs.

Comment by clone of saturn on How to use DMT without going insane: On navigating epistemic uncertainty in the DMT memeplex · 2023-07-24T09:16:14.776Z · LW · GW

Why would a sane person want to use DMT?

Comment by clone of saturn on Some reasons to not say "Doomer" · 2023-07-10T00:09:39.768Z · LW · GW

Unfortunately, I think the tribalization and politicization are caused by the share-with-followers social media model, not by specific words, so using or not using the word "doomer" will have a negligible effect on the amount of tribalization. You just have to accept that people who insist on using Twitter will have their sanity eroded in this way, and do what you can to compartmentalize the damage and avoid becoming a target.

Comment by clone of saturn on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2023-06-25T19:39:57.857Z · LW · GW

I think the causality runs the other way, though; people who are crazy and grandiose are likely to come up with spurious theories to justify actions they wanted to take anyway. Experience and imitation show us that non-crazy people successfully use theories to do non-crazy things all the time, so much so that you probably take it for granted.

Comment by clone of saturn on Why am I Me? · 2023-06-25T18:43:05.031Z · LW · GW

I feel like I'm still the same person as I was before I learned how many humans were born earlier than me. I think that's all you need for the Doomsday Argument to go through.
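
For reference, a minimal sketch of the standard calculation, under the usual self-sampling assumption (my gloss, not something spelled out in the comment): if my birth rank $r$ is uniformly distributed among the $N$ humans who will ever live, then

$$P\!\left(\frac{r}{N} \ge 0.05\right) = 0.95 \quad\Longrightarrow\quad N \le 20r \ \text{with 95% confidence},$$

so a birth rank of roughly $10^{11}$ bounds the total number of humans ever born at about $2 \times 10^{12}$ at that confidence level.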

Comment by clone of saturn on AI Rights: In your view, what would be required for an AGI to gain rights and protections from the various Governments of the World? · 2023-06-11T07:56:59.210Z · LW · GW

AI should never have rights. Any AI that would have moral patienthood should not be created.

Comment by clone of saturn on What is it like to be a compatibilist? · 2023-05-06T00:25:48.580Z · LW · GW

It doesn't bother me, because I'm me, with the propensity to make the choices I'm determined to make. If I had chosen otherwise, I would not be me.

Suppose I love chocolate ice cream and hate vanilla ice cream. When I choose to eat chocolate ice cream, it's an expression of the fact that I prefer chocolate ice cream. I have free will in the sense that if I preferred vanilla instead, I could have chosen vanilla, but in fact I prefer chocolate so I won't choose vanilla.

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-16T07:50:20.600Z · LW · GW

Ok, I edited the comment.

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-15T05:46:58.906Z · LW · GW

Why not just use the original sentence, with only the name changed? I don't see what is supposed to be accomplished by the other substitutions.

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-15T03:54:50.711Z · LW · GW

I see. If the issue here is only with extended Socratic dialogues, rather than any criticism which is perceived as low-effort, that wasn't clear to me. I wouldn't be nearly as opposed to banning the former, if that could be operationalized in a reasonable way.

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-15T03:30:49.802Z · LW · GW

I can't read Duncan's mind and have no direct access to facts about his ultimate motivations. I can be much more confident that a person who is currently getting away with doing X has reason to dislike a rule that would prevent X. So the "I suspect" was much more about the second clause than the first. I find this so obvious that it never occurred to me that it could be read another way.

I don't accept Duncan's stand-in sentence "I suspect that Eric won't like the zoo, because he wants to stay out of the sun." as being properly analogous, because staying out of the sun is not something people typically need to hide or deny.

To be honest, I think I have to take this exchange as further evidence that Duncan is operating in bad faith. (Within this particular conflict, not necessarily in general.)

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-15T00:54:07.624Z · LW · GW

I'll go along with whatever rules you decide on, but that seems like an extremely long time to wait for basic clarifications like "what did you mean by this word" or "can you give a real-world example".

Comment by clone of saturn on Moderation notes re: recent Said/Duncan threads · 2023-04-14T20:24:56.224Z · LW · GW

One technical solution that occurs to me is to allow explicitly marking a post as half-baked, and therefore only open to criticism that comes along with substantial effort towards improving the post, or fully-baked and open to any criticism. However, I suspect that Duncan won't like this idea, because [edit: I suspect that] he wants to maintain a motte-and-bailey where his posts are half-baked when someone criticizes them but fully-baked when it's time to apportion status.

Comment by clone of saturn on What problems do African-Americans face? An initial investigation using Standpoint Epistemology and Surveys · 2023-03-14T08:23:55.170Z · LW · GW

Do you think the original proponents of Standpoint Epistemology would agree that it's simply a logical consequence of Aumann's agreement theorem?

Comment by clone of saturn on Short Notes on Research Process · 2023-02-23T07:13:04.880Z · LW · GW
Comment by clone of saturn on Escape Velocity from Bullshit Jobs · 2023-01-11T23:37:45.844Z · LW · GW

These statements seem awfully close to being unfalsifiable. The amount of research and development coming from Twitter in the 5 years before the acquisition was already pretty much negligible, so there's no difference there. How long do we need to wait for lawsuits or loss of clients to cause observable consequences?

Comment by clone of saturn on Escape Velocity from Bullshit Jobs · 2023-01-11T07:28:50.185Z · LW · GW

Twitter recently fired a majority of its workforce (I've seen estimates from 50% to 90%) and seems to be chugging along just fine. This strongly implies that at least that many jobs were bullshit, but it's unlikely that the new management was able to perfectly identify all bullshitters, so it's only a lower bound. Sometimes contributions can be illegible, but there are also extremely strong incentives to obfuscate.

Comment by clone of saturn on Escape Velocity from Bullshit Jobs · 2023-01-11T06:37:44.468Z · LW · GW

> Suppose the IRS requires 100 pages of paperwork per employee. This used to take 10 hours. Now with GPT-4, as a thought experiment, let's say it takes 1 hour.
>
> The long run result might be 500 pages of more complicated paperwork that takes 10 hours even with GPT-4, while accomplishing nothing. That still will take time. It is not so easy or fast to come up with 400 more pages. I'd assume that would take at least a decade.

This seems to neglect the possibility that GPT-4 could be used, not just to accomplish bullshit tasks, but also to invent new bullshit tasks much faster than humans could.

Comment by clone of saturn on Preventing, reversing, and addressing data leakage: some thoughts · 2022-11-15T04:56:04.740Z · LW · GW

This post is currently tagged "security mindset" but the advice seems close to the opposite of security mindset; it amounts to just trying to be extra careful, and if that doesn't work, hoping the damage isn't too bad. Security mindset would require strategies to make a leak impossible or at least extremely unlikely.

Comment by clone of saturn on Weekly Roundup #3 · 2022-11-01T18:33:55.091Z · LW · GW

Remember when Google Shopping used to be an actual search index of pretty much every online store? You could effortlessly find even the most obscure products and comparison shop between literally thousands of sellers. Then one day they decided to make it pay-to-play and put advertisers in control of what appears on there. Now it's pretty much useless to me. I think a similar process has happened with Search, just more gradually. Your experience with it probably has a lot to do with how well your tastes and preferences happen to align with what advertisers want to steer people toward.

Comment by clone of saturn on What LessWrong/Rationality/EA chat-servers exist that newcomers can join? · 2022-11-01T18:11:12.158Z · LW · GW

Sorry, I've edited my comment with the working link.

Comment by clone of saturn on The harms you don't see · 2022-10-18T23:54:57.127Z · LW · GW

It's absurd to equate the shaky and informal coalition of Russia, China, Iran, and Syria with the 750+ extraterritorial bases, worldwide naval dominance, and global surveillance network of the US Military.

Comment by clone of saturn on Ways to increase working memory, and/or cope with low working memory? · 2022-08-22T00:15:44.487Z · LW · GW

Method of loci?

Comment by clone of saturn on chinchilla's wild implications · 2022-08-08T08:00:55.700Z · LW · GW

Language models seem to do a pretty good job at judging text "quality" in a way that agrees with humans. And of course, they're good at generating new text. Could it be useful for a model to generate a bunch of output, filter it for quality by its own judgment, and then continue training on its own output? If so, would it be possible to "bootstrap" arbitrary amounts of extra training data?
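
For concreteness, here is a minimal sketch of the loop being proposed, using GPT-2 via the Hugging Face transformers library as the model and its own perplexity as the quality filter; both are stand-ins I chose, since the comment leaves the model and the judging mechanism unspecified.

```python
# A sketch only: GPT-2 and perplexity-as-quality are assumptions,
# not anything specified in the comment above.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sample_texts(prompt: str, n: int) -> list[str]:
    """Generate n candidate continuations of a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        max_new_tokens=64,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def perplexity(text: str) -> float:
    """The model's own quality judgment: lower perplexity = 'better' text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# One bootstrap round: generate, filter by the model's own judgment,
# keep the best half as candidate extra training data.
candidates = sample_texts("The key idea is", n=8)
candidates.sort(key=perplexity)
new_training_data = candidates[: len(candidates) // 2]
# A real experiment would fine-tune on new_training_data and repeat,
# then check whether quality actually improves across rounds.
```

Whether repeated rounds add real information or merely amplify the model's existing biases seems to be the crux of whether such bootstrapping could work.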

Comment by clone of saturn on Don't take the organizational chart literally · 2022-07-24T08:57:47.809Z · LW · GW

> But you'd have to be one really stupid correctional officer to get an order to disable the cameras around Epstein's cell the night he was murdered, and not know who killed him after he dies. Even if you were that dumb, it seems like something you would mention unless you were threatened, in which case you obviously are now a possible defecting member of the plot.

If I were a prison guard who had just seen a well-connected group of conspirators murder someone who had become inconvenient to them and easily get away with it, it seems to me that one of the stupidest things I could possibly do would be to tell anyone about it. Why would they need to explicitly threaten me? We both understand there's no one I could "defect" to who could stop them or protect me.

Comment by clone of saturn on A Bias Against Altruism · 2022-07-24T08:09:47.075Z · LW · GW
Comment by clone of saturn on [deleted post] 2022-07-13T01:53:01.033Z
Comment by clone of saturn on [deleted post] 2022-06-30T07:22:25.594Z
Comment by clone of saturn on [deleted post] 2022-06-30T07:21:24.988Z
Comment by clone of saturn on [deleted post] 2022-06-30T07:15:23.123Z
Comment by clone of saturn on [deleted post] 2022-06-30T07:13:12.582Z
Comment by clone of saturn on [deleted post] 2022-06-30T07:11:43.816Z
Comment by clone of saturn on [deleted post] 2022-06-30T07:00:51.999Z
Comment by clone of saturn on Security Mindset: Lessons from 20+ years of Software Security Failures Relevant to AGI Alignment · 2022-06-22T08:45:43.869Z · LW · GW

> That said, it took the software industry a long time to learn all the ways to NOT solve XSS before people really understood what a correct fix looked like. It often takes many many examples in the reference class before a clear fundamental solution can be seen.

This is true about the average software developer, but unlike in AI alignment, the correct fix was at least known to a few people from the beginning.

Comment by clone of saturn on [deleted post] 2022-06-08T01:17:43.521Z
Comment by clone of saturn on What is Going On With CFAR? · 2022-06-05T21:45:28.241Z · LW · GW

As someone who has watched "Century of the Self", I'd guess it's more along the lines of:

  • What people want is not what they need. People don't need much help to self-improve in ways which are already consonant with their natural desires and self-image. So any safe and effective self-improvement program would be a nonstarter in the free market because it would immediately repel the very people who could benefit from it.

Comment by clone of saturn on Preview On Hover · 2022-05-22T03:19:03.187Z · LW · GW

There is an icon in the lower right that toggles previews on or off. Do they come back if you click on it?