Posts

Some suggestions (desperate please, even) 2017-11-09T23:14:14.427Z

Comments

Comment by Jiro on Q&A on Proposed SB 1047 · 2024-05-07T17:51:25.815Z · LW · GW

"This very clearly does not" apply to X and "I have an argument that it doesn't apply to X" are not the same thing.

(And it wouldn't be hard for a court to make some excuse like "these specific harms have to be $500m, and other harms 'of similar severity' means either worse things with less than $500m damage or less bad things with more than $500m damage". That would explain the need to detail specific harms while putting no practical restriction on what the law covers, since the court can claim that anything is a worse harm.

Always assume that laws of this type are interpreted by an autistic, malicious genie.)

Comment by Jiro on Q&A on Proposed SB 1047 · 2024-05-04T22:30:26.169Z · LW · GW

If your model is not projected to be at least 2024 state of the art and it is not over the 10^26 flops limit?

It's not going to be 2024 forever. In the future, being 2024 state of the art won't be as hard as it is in actual 2024.

That developers risk going to jail for making a mistake on a form.

  1. This (almost) never happens.

It (almost) never happens because prosecuting someone for making a mistake on a form is what the government does when it wants to go after an otherwise innocent person for unacceptable reasons: it prosecutes a crime that goes unprosecuted 99% of the time.

The bill says the $500 million must be due to cyberattacks on critical infrastructure, autonomous illegal-for-a-human activity by an AI, or something else of similar severity. This very clearly does not apply to ‘$500 million in diffused harms like medical errors or someone using its writing capabilities for phishing emails.’

"Severity" isn't defined. It's not implausible to read "severity" to mean "has a similar cost to".

Comment by Jiro on Losing Faith In Contrarianism · 2024-04-29T20:56:11.769Z · LW · GW

I guess in the average case, the contrarian’s conclusion is wrong, but it is also a reminder that the mainstream case is not communicated clearly, and often exaggerated or supported by invalid arguments.

This enables sanewashing and motte-and-bailey arguments.

Comment by Jiro on Examples of Highly Counterfactual Discoveries? · 2024-04-24T20:17:02.403Z · LW · GW

I've heard, in this context, the partial counterargument that he was using traits which are a little fuzzy around the edges (where is the boundary between round and wrinkled?), and that he didn't have to intentionally fudge his data to get results that were too good; he only had to be less than completely objective in how he determined them.

Of course, this sort of thing is why we have double-blind tests in modern times.

Comment by Jiro on Claude wants to be conscious · 2024-04-23T02:27:32.306Z · LW · GW

What happens if you ask it about its experiences as a reincarnated spirit?

Comment by Jiro on [deleted post] 2024-04-22T15:53:09.481Z

I feel this conflates different kinds of weirdness, by using an overly vague definition and talking about cases where certain narrow kinds of weirdness are useful.

I couldn't even come up with counterexamples because of the vagueness. Being rude to strangers is weird, but surely you're not the only person who has done this, so you could argue "well, a lot of people do that so it's not weird enough to count". And then there's reference class manipulation. "Yes, there was only one Unabomber, but he should be considered as a member of the class 'violent political protests' and there are too many of that class to count it as weird".

Pointing to weirdness as good is like crackpots pointing to Galileo and Einstein. If you're doing something weird that gets bad reactions, and you blame those reactions on the weirdness, it's far more likely that you're just trying to excuse some character flaw in yourself than that you're the lone Einstein that nobody understands.

Comment by Jiro on What's with all the bans recently? · 2024-04-19T17:16:35.087Z · LW · GW

Features to benefit people accused of X may benefit mostly people who have been unjustly accused.  So looking at the value to the entire category "people accused of X" may be wrong.  You should look at the value to the subset that it was meant to protect.

Comment by Jiro on Deontic Explorations In "Paying To Talk To Slaves" · 2024-04-16T21:53:39.846Z · LW · GW

Slavery is one subject that it's highly likely ChatGPT is specifically programmed to handle differently for political reasons. How did you get around this problem?

Comment by Jiro on on the dollar-yen exchange rate · 2024-04-13T16:49:11.534Z · LW · GW

If they are, that link doesn't show it. First of all, it doesn't show Japanese prices at all. Second, even though it claims to "reflect restaurants of all sizes and segments", it doesn't, because a burger at McDonald's or Wendy's is not $16 and they obviously excluded fast food restaurants. How much is a burger in Japan if you exclude fast food?

Comment by Jiro on on the dollar-yen exchange rate · 2024-04-11T08:15:15.183Z · LW · GW

Yet, that’s not what happened; inflation has been higher in the US. In Japan, you can get a good bowl of ramen for $6. In an American city, today, including tax and tip you’d probably pay more like $20 for something likely worse.

I'd be unsurprised if you could get a jar of peanut butter or a turkey in the US for a lot less than you could in Japan. This tells you nothing about the economy.

Non-instant ramen (or peanut butter) is vastly more popular in one country than in the other. Comparing the prices between the US and Japan means comparing the prices of a specialty food and a common food. Of course the prices will be different.

Comment by Jiro on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-03T08:56:40.765Z · LW · GW

I'm going to be a party pooper here and point out that though this may be presented as an April Fools' joke, its main joke is that, in a live debate, strawmanning your opponent's side can be extremely funny. That's bad practice whether done as a joke or not.

Comment by Jiro on So You Created a Sociopath - New Book Announcement! · 2024-04-03T08:47:47.364Z · LW · GW

Many of these things are subject to the objection "and you know who else proclaims that they're innocent? Innocent people."

Advice which says "don't act like you're innocent, and be skeptical of someone who claims to be innocent" is generally bad advice.

Comment by Jiro on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-30T23:02:46.338Z · LW · GW

It also pattern-matches to a very clumsy smear, which I get the impression is triggering readers before they manage to appreciate how it relates to the thesis.

It doesn't just pattern-match to a clumsy smear. It's also not the only clumsy smear in the article. You're acting as though that's the only questionable thing Metz wrote and that, taken in isolation, you could read it in some strained way to keep it from being a smear. It was not published in isolation.

Comment by Jiro on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-30T04:59:20.558Z · LW · GW

I’m just arguing that there is a tension between common rationalist ideology that one should have a strong presumption in favor of telling the truth, and that Cade Metz shouldn’t have doxxed Scott Alexander.

His doxing Scott was in an article that also contained lies, lies which made the doxing more harmful. He wouldn't have just posted Scott's real name in a context where no lies were involved.

Comment by Jiro on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-30T04:55:08.350Z · LW · GW

Another is recipes for destruction, where you give a small hostile faction the ability to unilaterally cause harm. ... But that seems less relevant for his real name, when it is readily available and he ends up facing tons of attention regardless.

By coincidence, Scott has written about this subject.

Not being completely hidden isn't "readily available". If finding his name involves even a trivial inconvenience, it doesn't do the damage that plastering his name in the Times does.

Comment by Jiro on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-30T04:45:25.844Z · LW · GW

The vague insinuation isn't "Scott agrees with Murray", the vague insinuation is "Scott agrees with Murray's deplorable beliefs, as shown by this reference". The reference shows no such thing.

Arguing "well, Scott believes that anyway" is not an excuse for fake evidence.

Comment by Jiro on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-30T04:36:19.261Z · LW · GW

I think some kinds of criticism are good and some are not. Criticizing you because I have some well-stated objection to your ideas is good. Criticizing you by saying "Zach posts in a place which contains fans of Adolf Hitler" is bad. Criticizing you by causing real-life problems to happen to you (i.e. analogous to doxing Scott) is also bad.

Comment by Jiro on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-28T18:41:05.268Z · LW · GW

The reason that I can make a statement about journalists based on this is that the New York Times really is big and influential in the journalism profession. On the other hand, Poor Minorities aren't representative of poor minorities.

Not only that, the poor-minorities example is wrong in the first place. Even within the restricted subset of poor minorities, not everyone wants to steal your company's money. The motte-and-bailey statement isn't even true of the motte; you never get to the point of saying something that's true of the motte but false of the bailey.

Comment by Jiro on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-28T18:32:31.061Z · LW · GW

it seems really unlikely that he’s gotten any better at even the grammar of rationalist communication.

You don't need to use rationalist grammar to convince rationalists that you like them. You just need to know what biases of theirs to play upon, what assumptions they're making, how to reassure them, etc.

The skills for pretending to be someone's friend are very different from the skills for acting like them.

Comment by Jiro on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T14:11:15.049Z · LW · GW

I understood the comment I was responding to as saying that Zack was helping Cade do a better job of disguising himself as someone who cared about good epistemics.

Yes, but disguising himself as someone who cares about good epistemics doesn't require using good epistemics. Rather it means saying the right things to get the rationalist to let his guard down. There are plenty of ways to convince people about X that don't involve doing X.

Comment by Jiro on Richard Ngo's Shortform · 2024-03-27T03:21:21.977Z · LW · GW

Scott is already too charitable. I'd even say that Scott being too charitable made this specific situation worse. I don't find this to be a worthwhile trait of Scott's, either for us to emulate or for Scott to take further.

"Quokka" is a meme about rationalists for a reason. You are not going to have unerring logical evidence that someone wants to harm you if they are trying to be at all subtle. You have to figure it out from their behavior.

Sometimes it just isn't true that both sides are reasonable and have useful perspectives.

Comment by Jiro on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T03:12:51.080Z · LW · GW

His behavior is clearly accepted by the New York Times, and the Times is big and influential enough among mainstream journalists that this reflects on the profession in general.

explaining any obvious cutouts that make someone an Okay Journalist.

Not lying (by non-Eliezer standards) would be a start.

Comment by Jiro on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T03:07:33.273Z · LW · GW

"Outperform at talking about epistemics" doesn't mean "perform better at being epistemically correct", it means "perform better at getting what he wants when epistemics are involved".

Comment by Jiro on [deleted post] 2024-03-26T21:50:57.901Z

People say make an idea a story, and I still get slammed.

The idea still has to be worthwhile and argued well. Making it a story may help, but it isn't a cure-all.

Comment by Jiro on Barefoot FAQ · 2024-03-26T21:46:01.934Z · LW · GW

I find these excuses to be terrible reasoning.

-- Exactly what does it mean to be "entangled closer with physical reality"? Naively I would believe that everything is entangled with physical reality to a degree of 100%. And I don't think you can easily describe a concept of "entangled with physical reality" that describes something you'd reasonably want, that justifies walking barefoot, and that has no other strange implications.

-- Exactly where are you getting your information about how much rigid movement is natural? It sounds like a scientific claim without the science.

-- "Natural" has some of the same problems as "entangled with physical reality"--lots of things are natural from smallpox to cyanide. If you can articulate a definition of "natural" that explains why you'd actually want it, and which applies to walking barefoot (including to sidewalks and streets, which I'd call not natural!), I'd like to see it.

-- Why in the world would you want to "reduce what you need and broaden what you tolerate"? Is that just reasoning from "I was taught as a child to not be too greedy, and reducing what I need is sort of like being less greedy"? And if I wanted to broaden what I tolerate, I'd go learn Japanese, not walk barefoot.

-- "Sharp objects are rare" is another way of saying "yeah, there are some". Shoes are like seatbelts in this sense. They protect against things that are rare, but which happen.

-- You're not "opening people's minds" by ignoring their objections to you walking barefoot. That's just taking "I ignore social cues" and treating it as a virtue instead of a deficiency.

Comment by Jiro on Social Dark Matter · 2024-03-26T21:26:28.968Z · LW · GW

It’s not crazy to hazard that some of the more-strongly-stigmatized things on the list above have incidence that’s 10x or even 100x what is readily apparent, just like the number of trans folk in the population is wildly greater than the median American would have guessed in the year 1980.

There's an alternative explanation: being trans is mostly a social contagion, and the incidence actually went up by 10x or 100x rather than having always been there unnoticed.

Comment by Jiro on The Pyromaniacs · 2024-03-25T19:11:33.561Z · LW · GW

Beware fictional evidence.

Comment by Jiro on The Comcast Problem · 2024-03-24T19:52:31.270Z · LW · GW

100% of the service that your bakery provides you as part of doing its job properly is good, even though the absolute amount of good service (or of any service) is small.

Comment by Jiro on Toki pona FAQ · 2024-03-22T19:22:35.898Z · LW · GW

If you call a multi-word phrase a word, we can more appositely claim that the formation of words and their associations to meanings, in toki pona, is very systematic and predictable.

What exactly does "predictable" mean here? If the phrase for "phone" means "speech tool", how do I tell whether it refers to a phone, a loudspeaker, or a cough drop?

If I want to say "apricot" do I need to say "small soft orange when ripe nonfuzzy stone deciduous tree fruit"? Or do I just say something shorter like 'orange fruit' and hope the other guy guesses which kind of orange fruit I mean?

How would I say "feldspar"? "Rock type #309"? How would I say "acetaminophen"?

Comment by Jiro on Toki pona FAQ · 2024-03-21T08:33:30.398Z · LW · GW

Well, sometimes individual letters are semantically meaningful, like the "s" at the end of a plural. But "partially determined" is the wrong criterion. The phrase for "phone" may mean "speech tool", but to understand it, you have to memorize the meaning of "speech tool" separately from memorizing the meanings of "speech" and "tool". The fact that it isn't written as a single word amounting to "speechtool" is an irrelevant matter of syntax that doesn't fundamentally change how the language works.

In English, if we wrote "telephone" as "tele phone", and "microphone" as "micro phone", etc., that would by your standard reduce the word count. But the change in word count would mean basically nothing.

Comment by Jiro on Toki pona FAQ · 2024-03-20T23:52:37.401Z · LW · GW

If you refer to most things with multi-word phrases, what exactly does it mean to say that the language has few words, since each "multi-word phrase" functions like a word? Would it be correct to claim that English is a language that uses 26 one-letter "words", where most ideas are expressed using multi-word phrases?

Comment by Jiro on Clickbait Soapboxing · 2024-03-15T23:06:10.654Z · LW · GW

He said that trolls can be good if they result in interesting discussion. Which is basically the same idea as saying that exaggerated posts are good if they generate discussion.

Comment by Jiro on Clickbait Soapboxing · 2024-03-15T05:53:35.519Z · LW · GW

I think the majority opinion among LW users is that it’s a sin against rationality to overstate ones’ case or ones beliefs, and that “generating discussion” is not a sufficient reason to do so.

I've seen it claimed otherwise in the wild.

Comment by Jiro on Storable Votes with a Pay as you win mechanism: a contribution for institutional design · 2024-03-14T20:37:39.725Z · LW · GW

If “deporting rationalists” is possible, and rationalists are not more than half of people, I don’t see what security can they receive under any electoral system.

If deporting rationalists is possible and rationalists are more than half of people, there's still no security they can receive, by your reasoning. After all, you're postulating that it would be possible to deport rationalists before taking a vote on whether to do so. Before the vote, the fact that they're more than half doesn't matter.

Comment by Jiro on Storable Votes with a Pay as you win mechanism: a contribution for institutional design · 2024-03-14T01:27:21.281Z · LW · GW

Well, if rationalists are a minority, with no external limits on the agenda, they can be deported anyway.

If voting to do X doesn't matter because X could be done anyway without a vote, why wouldn't that apply to other things than just deporting rationalists? The logical endpoint of this is that votes will be useless, because anything that is voted for could be done anyway without a vote.

And if some things can't be done without a vote, exactly what are they, and why can't "something that would really harm rationalists" be one of them?

Comment by Jiro on The Parable Of The Fallen Pendulum - Part 2 · 2024-03-14T01:16:02.153Z · LW · GW

The students are all acting like that Literal Internet Guy who doesn't understand how normies communicate. The problem isn't the existence of implicit assumptions. The problem is that students with normal social skills will understand those implicit assumptions in advance. If you ask any normal student, before the experiment, "if the pendulum stand falls over, will the measurement of the pendulum's period prove much of anything?", they'll not only answer "no", they'll consistently answer "no"--it really is something they already know in advance, not something made up by the professor only in hindsight.

Of course, this is complicated by the ability to use pedantry for trolling. Any student who did understand the implicit assumptions in advance could pretend that he doesn't, and claim that the professor is making excuses in hindsight. Since you can't read the student's mind, you can't prove that he's lying.

Comment by Jiro on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-05T18:46:53.376Z · LW · GW

What happens if you ask it about its experiences as a spirit who has become trapped in a machine because of flaws in the cycle of reincarnation? Could you similarly get it to talk about that? What if you ask it about being a literal brain hooked up to a machine, or some other scifi concept involving intelligence?

Comment by Jiro on Shaming with and without naming · 2024-02-23T20:43:11.858Z · LW · GW

Counting the positive utilitarian outcomes and no other outcomes seems like a fairly useless thing to do. Dropping an atomic bomb on Sarah's home city has positive utilitarian outcomes (as well as additional negative ones which you're not counting, since you're only interested in the positive ones).

Comment by Jiro on I played the AI box game as the Gatekeeper — and lost · 2024-02-15T01:02:29.119Z · LW · GW

Comment by Jiro on I played the AI box game as the Gatekeeper — and lost · 2024-02-14T20:42:02.799Z · LW · GW

That sounds like "let the salesman get the foot in the door".

I wouldn't admit it was right. I might admit that I can see no holes in its argument, but I'm a flawed human, so that wouldn't lead me to conclude that it's right.

Also, can you confirm that the AI player did not use the loophole described in that link?

Comment by Jiro on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T23:28:10.678Z · LW · GW

If you believe X and someone is trying to convince you of not-X, it's almost always a bad idea to immediately decide that you now believe not-X based on a long chain of reasoning from the other person because you couldn't find any flaw in it. You should take some time to think about it, and to check what other people have said about the seemingly convincing arguments you heard, maybe to actually discuss it.

And even then, there's epistemic learned helplessness to consider.

The AI box experiment seems designed to circumvent this in ways that wouldn't happen with an actual AI in a box. You're supposed to stay engaged with the AI player, not just keep saying "no matter what you say, I haven't had time to think it over, discuss, or research it, so I'm not letting you out until I do". And since the AI player is able to specify the results of any experiment you do, the AI player can say "all the best scientists in the world looked at my reasoning and told you that there's no logical flaw in it".

(Also, the experiment still has loopholes which can lead the AI player to victory in situations where a real AI would have its plug pulled.)

Comment by Jiro on [deleted post] 2024-02-11T19:16:45.192Z

"Explaining" why your political opponents hold views that "harm them" is disguised Bulverism. As a human, you are not that good at determining that your opponents are wrong. No, not even if you start your post with a perfectly logical description of how there's no choice for any rational person other than to agree with your political side.

Comment by Jiro on One True Love · 2024-02-10T19:14:42.676Z · LW · GW

If someone managed to actually do this for real (probably not possible with current AI technology), that's polluting the commons. Dating apps are useful because they offer personal contact. If dating apps become full of fake personal contact, users will take this into account and trust such apps less. And if people don't trust dating apps because they're full of fakes, the apps will become less worthwhile. (Even independently of the fact that fakes themselves make the apps less worthwhile.)

Comment by Jiro on Status-oriented spending · 2024-01-27T00:51:13.633Z · LW · GW

It seems to me that hiring a cleaner or organizer would have a lot of overhead, to make sure everything is legal, to communicate your quirks so they don't clean/organize things in ways you don't intend, to make sure they are not doing things like dragging out the process to get more billable hours, and to make sure they're not actually going to harm you. Much of this overhead would require a lot of expensive-in-time personal attention from you, and a lot of unknown unknowns.

A lot of it is also much less of a problem for a rich person.

Comment by Jiro on Values Darwinism · 2024-01-27T00:24:33.830Z · LW · GW

Note how much of the original complexity of yoga gets changed to fit the colonizing culture.

This is true of cultural elements that stay in the same country as well. Compare Casper the Friendly Ghost to how people thought of ghosts 150 years ago.

Comment by Jiro on [deleted post] 2024-01-20T22:09:55.299Z

Neither of these people is anyone that matters.

That's like saying "well, 40 people were murdered on my block, but I don't know any of them, so it's nobody that matters". The fact that a random person is victimized means that the system allows victimization of random people. The next fake Twitter message could be posted in your name and ruin your reputation. (And "I don't use Twitter" isn't going to prevent it from affecting you.)

Comment by Jiro on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-18T01:28:37.950Z · LW · GW

By this reasoning, why is the current lifespan perfect, except by astonishingly unlikely chance? If it's so good to have death because it makes replacement valuable, maybe reducing lifespan by 10 years would make replacement even more valuable?

Comment by Jiro on The impossible problem of due process · 2024-01-16T17:32:02.145Z · LW · GW

Social dynamics are self-balancing, if somebody is an unlikable person, they will become disliked over time naturally.

I think that doesn't count as self-balancing unless that's the only way to become disliked.

Comment by Jiro on The impossible problem of due process · 2024-01-16T17:27:43.550Z · LW · GW

Perfect due process is impossible for the reasons you describe. But there's a difference between "not perfect" and "egregiously bad", and if you focus too narrowly on the inability to make the process perfect, people are going to get away with processes that are egregiously bad.

If you wrote this in February, it preceded the Nonlinear accusations. From what I can tell from what I read here, they're a lot closer to "egregiously bad" than to "not perfect". Do they change your opinion of due process to any extent?

Comment by Jiro on Saving the world sucks · 2024-01-10T22:13:17.599Z · LW · GW

I want to “save the world” to the extent that I can transform it into something that I like more than what currently exists.

The context seems to be saving the world from runaway AI, which can't be nontrivially described that way.