Posts

Comments

Comment by Ninety-Three on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T23:33:54.111Z · LW · GW

> Metz persistently fails to state why it was necessary to publish Scott Alexander's real name in order to critique his ideas.


It's not obvious that that should be the standard. I can imagine Metz asking "Why shouldn't I publish his name?"; the implied "no one gets to know your real name if you don't want them to" norm is pretty novel.

One obvious answer to the above question is "Because Scott doesn't want you to; he thinks it'll mess with his psychiatry practice", to which I imagine Metz asking, bemused, "Why should I care what Scott wants?" A journalist's job is to inform people, not be nice to them! Now Metz doesn't seem to be great at informing people anyway, but at least he's not sacrificing what little information value he has upon the altar of niceness.

Comment by Ninety-Three on New LessWrong feature: Dialogue Matching · 2024-01-08T21:09:50.108Z · LW · GW

I just got a "New users interested in dialoguing with you (not a match yet)" notification, and when I clicked on it the first thing I saw was that exactly one person in my Top Voted users list was marked as recently active in dialogue matching. I don't vote much, so my Top Voted users list is in fact an All Voted users list. This means that either the new user interested in dialoguing with me is the one guy who is conspicuously presented at the top of my page, or it's some random that I've never interacted with and have no way of matching.

This is technically not a privacy violation because it could be some random, but I have to imagine this is leaking more bits of information than you intended it to (it's way more than a 5:1 update), so I figured I'd report it as a ~~bug~~ unanticipated feature.

It further occurs to me that anyone who was dedicated to extracting information from the system could completely deanonymize their matches by setting a simple script to scrape https://www.lesswrong.com/dialogueMatching every minute or so and cross-referencing "new users interested" notifications with the moment someone shoots to the top of the "recently active in dialogue matching" list. It sounds like you don't care about that kind of attack though so I guess I'm mentioning it for completeness.
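
To make the cross-referencing step concrete, here is a rough sketch on made-up data (my own illustration; it assumes you have already been polling the dialogue-matching page about once a minute and recording who tops the "recently active in dialogue matching" list, with how you scrape that list left out):

```python
# Rough sketch of the cross-referencing step described above, on fake data.
# Assumes snapshots of who tops the "recently active" list, taken ~every minute.
snapshots = [  # (unix_time, user at the top of the recently-active list)
    (1000.0, "alice"),
    (1060.0, "alice"),
    (1120.0, "bob"),    # bob becomes active right around the notification
    (1180.0, "bob"),
]
notification_times = [1130.0]  # when "new users interested in dialoguing" arrived

def likely_checkers(snapshots, notification_time, window=120.0):
    """Users who newly rose to the top of the list near the notification time."""
    hits = set()
    for (_, prev_top), (t, top) in zip(snapshots, snapshots[1:]):
        if top != prev_top and abs(t - notification_time) <= window:
            hits.add(top)
    return hits

for nt in notification_times:
    print(nt, likely_checkers(snapshots, nt))  # {'bob'}
```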

Comment by Ninety-Three on the uni wheel is dumb · 2023-12-01T18:59:58.761Z · LW · GW

Link is broken

> Sorry, you don't have access to this page. This is usually because the post in question has been removed by the author.

Comment by Ninety-Three on A Question For People Who Believe In God · 2023-11-24T17:29:03.020Z · LW · GW

All your examples of high-tier axioms seem to fall into the category of "necessary to proceed", the sort of thing where you can't really do any further epistemology if the proposition is false. Does the God axiom have that quality, or did it end up high on the list without it?

Comment by Ninety-Three on A Question For People Who Believe In God · 2023-11-24T14:19:43.465Z · LW · GW

Surely some axioms can be more rationally chosen than others. For instance, "There is a teapot orbiting the sun somewhere between Earth and Mars" looks like a silly axiom, but "there is a round cube orbiting the sun somewhere between Earth and Mars" looks even sillier. Assuming the possibility of round cubes seems somehow more "epistemically expensive" than assuming the possibility of teapots.

Comment by Ninety-Three on [Bias] Restricting freedom is more harmful than it seems · 2023-11-22T19:56:24.476Z · LW · GW

If you are predicting that two people will never try to censor each other in the same domain, that also happens. If your theory is somehow compatible with that, then it sounds like there are a lot of epicycles in this "independent-mindedness" construct that ought to be explained rather than presented as self-evident.

Comment by Ninety-Three on [Bias] Restricting freedom is more harmful than it seems · 2023-11-22T15:03:20.015Z · LW · GW

> We only censor other people more-independent-minded than ourselves.

This predicts that two people will never try to censor each other, since it is impossible for A to be more independent-minded than B and also for B to be more independent-minded than A. However, people do engage in battles of mutual censorship, therefore the claim must be false.

Comment by Ninety-Three on Social Dark Matter · 2023-11-18T01:28:23.171Z · LW · GW

The Law of Extremity seems to work against the Law of Maybe Calm The Fuck Down. If the median X isn't worth worrying about, but most Xs you see are selected for being so extreme they can't hide, then the fact you are seeing an X is evidence about its extremity and you should only calm down if an unusually extreme X is not worth worrying about.

Comment by Ninety-Three on Sam Altman fired from OpenAI · 2023-11-18T00:29:22.865Z · LW · GW

Surely they would use different language than "not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities" to describe a #metoo firing.

Comment by Ninety-Three on 2023 LessWrong Community Census, Request for Comments · 2023-11-02T01:43:22.824Z · LW · GW

> It's fine to include my responses in summaries from the dataset, but please remove it before making the data public (Example: "The average age of the respondents, including row 205, is 22.5")

It's not clear to me what this option is for. If someone doesn't tick it, it seems like you are volunteering to remove their information even from summary averages, but that doesn't make sense because at that point it seems to mean "I am filling out this survey but please throw it directly in the trash when I'm done." Surely if someone wanted that kind of privacy they would simply not submit the survey?

Comment by Ninety-Three on Rationalist horror movies · 2023-10-17T01:01:40.998Z · LW · GW

That's it! Thanks, I have no idea why shift+enter is special there.

 This works

Comment by Ninety-Three on Rationalist horror movies · 2023-10-15T19:51:56.418Z · LW · GW

That's the one. I couldn't get either solution to work:

>! I am told this text should be spoilered

:::spoiler And this text too:::

Comment by Ninety-Three on Rationalist horror movies · 2023-10-15T17:23:59.054Z · LW · GW

There is a narrative-driven videogame that does exactly this, but unfortunately I found the execution mediocre. I can't get spoilers to work in comments or I'd name it. Edit: It's

Until Dawn

Comment by Ninety-Three on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T01:01:43.278Z · LW · GW

The other reason vegan advocates should care about the truth is that if you keep lying, people will notice and stop trusting you. Case in point, I am not a vegan and I would describe my epistemic status as "not really open to persuasion" because I long ago noticed exactly the dynamics this post describes and concluded that I would be a fool to believe anything a vegan advocate told me. I could rigorously check every fact presented, but that takes forever; I'd rather just keep eating meat and spend my time in an epistemic environment that hasn't declared war on me.

Comment by Ninety-Three on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T19:32:37.039Z · LW · GW

Separate from the moral issue, this is the kind of trick you can only pull once. I assume that almost everyone who received the "your selected response is currently in the minority" message believed it; that will not be the case next year.

Comment by Ninety-Three on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T16:44:42.303Z · LW · GW

Granting for the sake of argument that launching the missiles might not have triggered full-scale nuclear war, or that one might wish to define "destroy the world" in a way that is not met by most full-scale nuclear wars, I am still dissatisfied with virtue A because I think an important part of Petrov's situation was that whatever you think the button did, it's really hard to find an upside to pushing it, whereas virtue A has been broadened to cover situations that are merely net bad, but where one could imagine arguments for pushing the button. My initial post framing it in terms of certainty may have been poorly phrased.

Comment by Ninety-Three on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T16:24:00.435Z · LW · GW

Petrov was not the last link in the chain of launch authorization, which means that his action wasn't guaranteed to destroy the world, since someone further down the chain might have cast the same veto he did. So technically yes, Petrov was pushing a button labeled "destroy the world if my superior also thinks these missiles are real, otherwise do nothing". For this reason I think Vasily Arkhipov day would be better, but it's too late to change now.

But I think that if the missiles had been launched, that would have destroyed the world (which I use as shorthand that covers destroying less than literally all humans, as in "The game Fallout is set in the year 2161 after the world was destroyed by nuclear war"), and there is a very important difference between Petrov evaluating the uncertainty of "this is the button designed to destroy the world, which technically might get vetoed by my boss" and e.g. a nuclear scientist who has model uncertainty about the physics of igniting the planet's atmosphere (which yes, actual scientists ruled out years before the first test, but the hypothetical scientist works great for illustrative purposes). In Petrov's case, nothing good can ever come of hitting the button except perhaps selfishly, in that he might avoid personal punishment for failing in his button-hitting duties.

Comment by Ninety-Three on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T16:09:20.833Z · LW · GW

It seems quite easy to me. Imagine me stating "The sky is purple, if you come to the party I'll introduce you to Alice." If you come to the party then me performing the promised introduction honours a commitment I made, even though I also lied to you.

Comment by Ninety-Three on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T15:30:52.693Z · LW · GW

This is not responding to the interesting part of the post, but I did not vote in the poll because I felt like virtue A was a mangled form of the thing I care about for Petrov Day, and non-voting was the closest I could come to fouling my ballot in protest.

To me Petrov Day is about having a button labeled "destroy world" and choosing not to press it. Virtue A as described in the poll is about having a button labeled "maybe destroy world, I dunno, are you feeling lucky?" and choosing not to press it. This is a different definition which seems to have been engineered so that a holiday about avoiding certain doom can be made compatible with avoiding speculative doom due to, for instance, AI.

I would prefer that Petrov Day gets to be about Petrov, and "please Sam Altman, don't risk turning the world into paperclips" gets a different day if there is demand for such a thing.

Comment by Ninety-Three on Honor System for Vaccination? · 2023-09-24T14:24:36.971Z · LW · GW

This explains why the honour system doesn't do as much as one might hope, but it doesn't address the initial question of why use explicitly optional vaccination instead of mandatory + honour system. If excluding the unvaccinated is desirable, then surely it remains desirable (if suboptimal) to exclude only those who are both unvaccinated and honest.
 

Comment by Ninety-Three on Lack of Social Grace Is an Epistemic Virtue · 2023-08-12T17:05:23.194Z · LW · GW

Scott Adams predicted Trump would win in a landslide. He wasn't just overconfident, he was wrong! The fact that he's not taking a status hit is because people keep reporting his prediction incompletely and no one bothers to confirm what he actually predicted (when I Google 'Scott Adams Trump prediction' in Incognito, the first two results say "landslide" in the first ten seconds and title, respectively).

Your first case is an example of something much worse than not updating fast enough.

Comment by Ninety-Three on If I showed the EQ-SQ theory's findings to be due to measurement bias, would anyone change their minds about it? · 2023-08-01T01:45:13.244Z · LW · GW

If someone updated towards the "autism is extreme maleness" theory after reading an abstract based on your hypothetical maleness test, you could probably argue them out of that belief by explaining the specific methodology of the test, because it's obviously dumb. If you instead had to do a bunch of math to show why it was flawed, then it would be much harder to convince people because some wouldn't be interested in reading a bunch of math, some wouldn't be able to follow it, and some would have complicated technical nitpicks about how if you run these numbers slightly differently you get a different result.

Separate from the "Is that your true rejection?" question, I think the value of making this argument depends heavily on how simple you can make the explanation. No matter how bulletproof it is, a counterargument that takes 10000 words to make will convince fewer people than one that can be made in 100 words.

Comment by Ninety-Three on The Dictatorship Problem · 2023-06-12T18:13:33.596Z · LW · GW

One can cross-reference the moderation log with "Deleted by alyssavance, Today at 8:19 AM" to determine who made any particular deleted comment. Since this information is already public, does it make sense to preserve the information directly on the comment, something like "[comment by Czynski deleted]"?

Comment by Ninety-Three on Open Thread: June 2023 (Inline Reacts!) · 2023-06-12T17:58:23.753Z · LW · GW

Comment by Ninety-Three on LW moderation: my current thoughts and questions, 2023-04-12 · 2023-04-21T18:12:51.601Z · LW · GW

> Fearing that this would be adequate with a large influx of low-quality users

Clarifying: this is a typo and should be inadequate, right?

Comment by Ninety-Three on FLI open letter: Pause giant AI experiments · 2023-03-29T14:27:10.268Z · LW · GW

It seems unlikely that AI labs are going to comply with this petition. Supposing that this is the case, does this petition help, hurt, or have no impact on AI safety, compared to the counterfactual where it doesn't exist?

All possibilities seem plausible to me. Maybe it's ignored so it just doesn't matter. Maybe it burns political capital or establishes a norm of "everyone ignores those silly AI safety people and nothing bad happens". Maybe it raises awareness and does important things for building the AI safety coalition.

Modeling social reality is always hard, but has there been much analysis of what messaging one ought to use here, separate from the question of what policies one ought to want?

Comment by Ninety-Three on Don't take bad options away from people · 2023-03-27T22:24:05.794Z · LW · GW

Not if the people paying in sex are poor! Imagine that 10% of housing is reserved for the poorest people in society as part of some government program that houses them for free, and the other 90% is rented for money at a rate of £500/month (also this is a toy model where all housing is the same, no mansions here). One day the government ends the housing program and privatizes the units, they all go to landlords who start charging money. Is the new rate for housing lower, higher or the same?

The old £500/month rate was the equilibrium that fell out of matching the richest 90% of people with 90% of the housing stock. The new equilibrium has 10% more people and 10% more housing to work with, but the added people are poorer than average; supply and demand tells us that prices will go down to reflect the average consumer having less buying power.

If you think of paying the rent with sex as "getting housing for free" and "government bans sex for rent" as "ending the free housing program", this model applies to both cases. If the people paying the rent in sex are of exactly average wealth, then the new equilibrium might also be £500/month, but if they are much poorer than average it should be lower (and interestingly, if they're richer than average, it would end up higher).
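
To make the toy model concrete, here is a minimal numeric sketch (my own made-up willingness-to-pay numbers, with the simplifying assumption that the clearing rent equals the marginal housed renter's valuation):

```python
# A toy numeric version of the model above (illustration only, not from the
# original argument). Assumptions: identical units, one unit per renter, and the
# clearing rent is the lowest willingness-to-pay among renters who get housed.

def clearing_rent(valuations, num_units):
    """Rent set by the marginal housed renter: the num_units-th highest valuation."""
    return sorted(valuations, reverse=True)[num_units - 1]

# 120 renters with willingness to pay from 10 to 1200 per month; 100 units total.
everyone = [10 * (i + 1) for i in range(120)]
poorest = everyone[:10]        # housed for free under the old program
money_market = everyone[10:]   # the remaining renters compete for the 90 paid units

old_rent = clearing_rent(money_market, 90)   # -> 310
new_rent = clearing_rent(everyone, 100)      # program ends, all 100 units paid -> 210

print(old_rent, new_rent)  # adding poorer renters (and their units) lowers the rent
```

The exact numbers are made up; the point is just that the marginal renter is now poorer, so the clearing rent falls.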

Comment by Ninety-Three on Don't take bad options away from people · 2023-03-27T12:43:51.630Z · LW · GW

Good point. I feel like it shouldn't happen much, but I agree the simple economic model predicts it should. I could resolve it within the model as some kind of market friction argument (finding someone to sell sex to is not trivial; the landlord makes it easier to go into prostitution by providing himself as a "steady employer"), but I think my real intuition is that this is a place where homo economicus breaks down, so I shouldn't be trying to apply simple economic models.

Also, even if my initial argument does work, this is basically a novel form of rent control, so the standard arguments against rent control should apply (supply isn't completely inelastic, constraining demand will reduce future supply, which we don't want).

Comment by Ninety-Three on Don't take bad options away from people · 2023-03-27T03:26:14.392Z · LW · GW

Nitpicking the landlord case: Banning sex for rent drives down prices. 

Suppose the market rate for a room is £500 or X units of sex. Most people pay in money but some are desperate and lack £500, so they pay in sex. One day the government bans paying in sex. This is an artificial constraint on demand; some people who would have paid at the old sex rate are being prevented from doing so. When you constrain demand on something with relatively inelastic supply, prices fall. Specifically, the rooms that would have been rented for sex sit empty until their prices are lowered; the new market rate is £490.

Some people are still worse off because of this (a lot of the desperate people don't have £490 to pay either) but there are possible values where the utilitarian calculus works out net positive (plenty of non-desperate people still benefit from lower rent). One can imagine the government in a productive role as a renter's negotiating partner: "Gosh Mr. Landlord, I'd love to pay in sex but that's illegal, best I can do is £490."

Comment by Ninety-Three on Where I agree and disagree with Eliezer · 2023-03-26T21:24:50.893Z · LW · GW

> we know how to specify rewards for... "A human approved this output"; we don't know how to specify rewards for "Actually good alignment research".


Can't these be the same thing? If we have humans who can identify actually good alignment research, we can sit them down in the RLHF booth and have the AI try to figure out how to make them happy.

Now obviously a sufficiently clever AI will infer the existence of the RLHF booth and start hacking the human in order to escape its box, which would be bad for alignment research. But it's looking increasingly plausible that e.g. GPT-6 will be smart enough to provide actually good mathematical research without being smart enough to take over the world (that doesn't happen until GPT-8). So why not alignment research?

To break the comparison I think you need to posit either that alignment research is way harder than math research (as Eli understands Eliezer does) such that anything smart enough to do it is also smart enough to hack a human, or I suppose it could be the case that we don't have humans who can identify actually good alignment research.

Comment by Ninety-Three on Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons · 2023-03-13T15:37:23.711Z · LW · GW

If you believe strongly enough in the Great Man theory of startups then it's actually working as intended. If startups are more about selling the founder than the product, if the pitch is "I am the kind of guy who can do cool business stuff" rather than "Look at this cool stuff I made", then penalizing founders who don't pre-truth is correctly downranking them for being some kind of chump. A better founder would have figured out that he was supposed to pre-truth, and it is significant information about his competence that he did not.

Realistically it is surely at least a little bit about the product itself, and honest founders must be "unfairly" losing points on the perceived merits of their product, but one could argue that identifying people savvy enough to play the game creates more value than is lost by underestimating the merits of honest product pitches.

Comment by Ninety-Three on Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons · 2023-03-13T14:25:18.440Z · LW · GW

Depending on exactly where the boundaries of the pre-truth game are, I think I could argue no one is being deceived (I mean realistically there will be at least a couple naive investors who think founders are speaking literal truth, but there could be few enough that hoodwinking them isn't the point).

When founders present a slide deck full of pre-truths about how great their product is, that slide deck is aimed solely at investors. The founder usually doesn't publish the slide deck, and if they did they wouldn't expect Joe Average to care much. The purpose of the pre-truths isn't to make anyone believe that their product is great (because all the investors know that this is an audition for lying, so none of them are going to take the claims literally); rather, it is to demonstrate to investors that the founder is good at exaggerating the greatness of their product. This establishes that a few years later when they go to market, they will be good at telling different lies to regulators, customers, etc.

The pre-truth game could be a trial run for deceiving people, rather than itself being deceptive.

Comment by Ninety-Three on Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons · 2023-03-13T02:34:57.418Z · LW · GW

Here is a possible defense of pre-truth. I'm not sure if I believe it, but it seems like one of several theories that fit the available evidence.

Willingness to lie is a generally useful business skill. Businesses that lie to regulators will spend less time on regulatory compliance, businesses that lie to customers will get more sales, etc. The optimal amount of lying is not zero.

The purpose of the pre-truth game is to allow investors to assess the founder's skill at lying, because you wouldn't want to fund some chump who can't or won't lie to regulators. Think of it as an initiation ritual: if you run a criminal gang it might be useful to make sure all your new members are able to kill a man, and if you run a venture capital firm it might be useful to make sure all the businessmen you invest in are skilled liars. The process generates value in the same way as any other skill-assessing job interview. There's a conflict which features lying, but it's a coalition of founders and investors against regulators and customers.

So why keep the game secret? Well it would probably be bad for the startup scene if it became widely known that everyone's hoping startups will lie to regulators and customers. Also, by keeping the game secret you make "figure out what game we're playing" a part of the interview process, and you'd probably prefer to invest in people savvy enough to figure that out on their own.

Comment by Ninety-Three on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-03-12T01:15:10.883Z · LW · GW

I understood "based" to be a 4chan-ism, but I didn't think very hard about the example; it is possible I chose a word that does not actually work in the way I had meant to illustrate. Hopefully the intended meaning was still clear.

Comment by Ninety-Three on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-03-10T05:50:47.810Z · LW · GW

Is it wrong for Bob the Democrat to say "based" because it might lead people to incorrectly infer he is a conservative? Is it wrong for Bob the plumber to say "edema" because it might lead people to incorrectly infer he is a doctor? If I told Bob to start saying "swelling" instead of "edema" then I feel like he would have some right to defend his word use: no one thinks edema literally means "swelling, and also I am a doctor", even if they update in a way that kind of looks like it does.

I don't think we have a significant disagreement here; I was merely trying to highlight a distinction your comment didn't dwell on, about different ways statements can be perceived differently. "There is swelling" vs "There is swelling and also I am a doctor" literally mean different things, while "There is swelling" vs "There is edema" merely imply something different to people familiar with who tends to use which words.

Comment by Ninety-Three on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-03-10T02:31:03.814Z · LW · GW

I agree that people hearing Zack say "I think this is insane" will believe he has a lower P(this is insane) than people hearing him say "This is insane", but I'm not sure that establishes the words mean that.

If Alice goes around saying "I'm kinda conservative" it would be wise to infer that she is probably conservative. If Bob goes around saying "That's based" in the modern internet sense of the term, it would also be wise to infer that he is probably a conservative. But based doesn't mean Bob is conservative; semantically it just means something like "cool", and then it happens to be the case that this particular synonym for cool is used more often by conservatives than liberals.

If it turned out that Alice voted party line Democrat and loved Bernie Sanders, one would have a reasonable case that she had used words wrong when she said she was kinda conservative; those words mean basically the opposite of her circumstances. If it turned out that Bob voted party line Democrat and loved Bernie Sanders, then one might advise him "your word choice is causing people to form a false impression, you should maybe stop saying based", but it would be weird to suggest this was about what based means. There's just an observable regularity of our society that people who say based tend to be conservative, like how people who say "edema" tend to be doctors.

If Zack is interested in accurately conveying his level of confidence, he would do well to reserve "That's insane" for cases where he is very confident and say "That seems insane" when he is less confident. If he instead decided to use "That's insane" in all cases, that would be misleading. But I think it is significant that this would be a different kind of misleading than if he were to use the words "I am very confident that is insane", even if the statements cause observers to make the exact same updates.

Comment by Ninety-Three on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-03-08T02:51:22.184Z · LW · GW

Everyone sometimes issues replies that are not rebuttals, but there is an expectation that replies will meet some threshold of relevance. Injecting "your comment reminds me of the medieval poet Dante Alighieri" into a random conversation would generally be considered off-topic, even if the speaker genuinely was reminded of him. Other participants in the conversation might suspect this speaker of being obsessed with Alighieri, and they might worry that he was trying to subvert the conversation by changing it to a topic no one but him was interested in. They might think-but-be-too-polite-to-say "Dude, no one cares, stop distracting from the topic at hand".

The behaviour Raemon was trying to highlight is that you soapbox. If it is in line with your values to do so, it still seems like choosing to defect rather than cooperate in the game of conversation.

Comment by Ninety-Three on Basics of Rationalist Discourse · 2023-03-04T06:07:22.726Z · LW · GW

> Aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth.

It's not clear to me what the word "convergence" is doing here. I assume the word means something, because it would be weird if you had used extra words only to produce advice identical to "Aim for truth, and behave as if your interlocutors are also aiming for truth". The post talks about how truthseeking leads to convergence among truthseekers, but if that were all there was to it then one could simply seek truth and get convergence for free. Apparently we ought to seek specifically convergence on truth, but what does seeking convergence look like?

I've spent a while thinking on it and I can't come up with any behaviours that would constitute aiming for truth but not aiming for convergence on truth, could you give an example?

Comment by Ninety-Three on The LessWrong 2021 Review: Intellectual Circle Expansion · 2023-01-16T20:21:54.384Z · LW · GW

The positive votes are described as "Good", "Quite important" and "Extremely important" while the negative votes are described as "Misleading, harmful or unimportant", "Very Misleading, harmful or unimportant" and "Highly Misleading, harmful or unimportant". It may be unwise to change the labels halfway through voting, but I am noting for the future that the positive vs negative labels are inconsistent and this seems suboptimal. If +9 is extremely important, surely -9 should be extremely unimportant?

Comment by Ninety-Three on Decision theory does not imply that we get to have nice things · 2022-11-13T14:09:51.979Z · LW · GW

> At least, it defects if that's all there is to the world. It's technically possible for an LDT agent to think that the real world is made 10% of cooperate-rocks and 90% opponents who cooperate in a one-shot PD iff their opponent cooperates with them and would cooperate with cooperate-rock, in which case LDT agents cooperate against cooperate-rock.

 

If we're getting this technical, doesn't the LDT agent only cooperate with cooperate-rocks if all of the above and if "would cooperate with cooperate-rock" is a quality opponents can learn about? The default PD does not give players access to their opponent's decision theory.

Comment by Ninety-Three on Good Heart Week: Extending the Experiment · 2022-04-07T16:39:33.688Z · LW · GW

Doh. I forgot how much faster strong upvotes scale with user karma; that resolves all my confusion.

Comment by Ninety-Three on Good Heart Week: Extending the Experiment · 2022-04-06T22:03:31.219Z · LW · GW

Looking at more random users, I think tokens earned via posting are being undercounted somehow. Users with only comments display as having exactly the amount of tokens I would expect from total karma on eligible posts minus self-votes, but users with posts made after April 2 (to avoid complications from a changing post-value formula) consistently have less than the "comment votes plus 3x post votes, not counting self votes" formula would predict. For instance, Zvi has two posts (currently 52 and 68 karma) and zero comments in the last week. With strength 2 self-votes, (52-2+68-2)*3=348 expected tokens, which is a significant mismatch from his displayed 302. It doesn't seem to be out of date, since his displayed tokens change instantly in response to me voting on the posts; is something going wrong, or is there some weird special-case way of voting on posts that doesn't get immediately reflected in the user page?
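
For reference, this is the formula I'm assuming as a quick sanity check (the site's real accounting may well differ, which would explain the mismatch):

```python
# The formula assumed above: 3x multiplier for posts, self-votes excluded.
# The site's actual Good Heart accounting may differ; that is the open question.
def expected_tokens(post_karmas, comment_karmas, self_vote=2):
    posts = sum(k - self_vote for k in post_karmas)
    comments = sum(k - self_vote for k in comment_karmas)
    return 3 * posts + comments

print(expected_tokens([52, 68], []))  # 348, versus the 302 actually displayed
```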

Comment by Ninety-Three on Good Heart Week: Extending the Experiment · 2022-04-06T19:29:24.038Z · LW · GW

The numbers attached to posts and comments seem to be a straightforward reskin of karma: includes self-votes, does not include the 3x multiplier for posts. The token counter in user profiles seems to update instantly (I tried voting up and down on posts and comments then refreshing the page of the user to test) but undercounts in ways I don't understand. For instance, this random user currently displays as 207 karma (edit: tokens, not karma, doh) based on a post with 68 karma and about a dozen comments with net karma-minus-selfvotes of ~15. I can tell it's up to date because it went up by 3 when I upvoted his post, but it seems like it ought to be ~216 (post karma times three plus comment karma) and I can't explain the missing tokens, and several of the random users I looked at displayed this sort of obvious undercounting.

Comment by Ninety-Three on Good Heart Week: Extending the Experiment · 2022-04-06T18:02:13.956Z · LW · GW

As a lurker, I failed to understand this system in a way that led to me completely ignoring it (I probably would have engaged more with LW this week had I understood; now that I've noticed, it feels too late to bother), so I feel like I should document what went wrong for me.

I read several front-page posts about the system but did not see this one until today. The posts I read were having fun with it rather than focusing on communication, plus the whole thing was obviously an extended April Fool's joke, so I managed to come away with a host of misconceptions, including total ignorance of the core "no really, karma equals actual money for you" feature. I assumed that if it was serious, people would be trying a lot harder to communicate the incentives to people (compare announcements of LW bounties, which I routinely manage to hear about even in periods where I've fallen out of the habit of checking this website).

On top of "karma equals money" being fundamentally implausible, an April 1st joke named Good Heart Tokens feels like it was designed to not be taken seriously. If the system was meant to incentivize posts from lurkers, more effort could have been put into making the incentives clear.

Edit: Making this comment, I double-checked some things and thought I came to a fully correct understanding of the system, but upon hitting submit I became confused again. This post says that self-votes don't count, but my fresh comment displays as having 1 token. There is a token counter on the user profile page, but as far as I can tell from looking at the pages of a few random users, that counter is tracking neither karma nor any calculation I can imagine representing token count; I have no idea what it's doing.

Comment by Ninety-Three on It Looks Like You're Trying To Take Over The World · 2022-03-14T22:02:17.050Z · LW · GW

As an exercise in describing hard takeoff using only known effects, this story handwaves the part I always had the greatest objection to: What does Clippy do after pwning the entire internet? At the current tech level, most of our ability to manufacture novel goods is gated behind the physical labour requirements of building factories: even supposing you could invent grey goo from first principles plus publicly available research, how are you going to build it?

A quiet takeover could plausibly use crypto wealth to commission a bunch of specialized equipment to get a foothold in the real world a month later when it's all assembled, but going loud as Clippy did seems like it's risking a substantial chance that the humans successfully panic and Shut. Down. Everything.

Comment by Ninety-Three on "Moral progress" vs. the simple passage of time · 2022-02-09T20:52:35.882Z · LW · GW

"I don't care what future people think of my morality, I just care what's moral by the arbitrary standards of the time I live in."

As a moral super-anti-realist ("Morality is a product of evolutionary game theory shaping our brains plus arbitrary social input"), I'd say this doesn't represent my view.

I care about morality the same way I care about aesthetics: "I guess brutalism, rock music and prosocial behaviour are just what my brain happens to be fond of, so I should go experience those if I want to be happy." I think this is heavily influenced by the standards of the time, but not exactly equal to those standards, probably because brains are noisy machines that don't learn standards perfectly. For instance, I happen to think jaywalking is not immoral, so I do it without regard for how local standards view jaywalking.

Concisely, I'd phrase it as "I don't care what future people think of my morality, I just care what's moral by the arbitrary standards of my own brain."

Comment by Ninety-Three on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-27T12:59:45.581Z · LW · GW

I am opposed to the implementation of this exercise, I believe its basic concept seriously undercuts the moral lesson we should take from Petrov Day.

The best way to not blow ourselves up is to not make nuclear weapons. On a day dedicated to not blowing ourselves up, LW has decided to manufacture a bunch of completely unneeded nuclear weapons, hand them out to many people, and then hope really hard that no one uses them. This is like a recovering addict carrying drugs on his person in order to make a point about resisting temptation: he is at best bragging and at worst courting disaster so boldly that one should wonder if he really wants to avoid self-destruction. This makes a good allegory for the senseless near-apocalypse of the Cold War, but deliberately creating a senseless risk does not seem like an appropriate way of celebrating the time we narrowly avoided triggering a senseless risk.

Comment by Ninety-Three on Communication Requires Common Interests or Differential Signal Costs · 2021-03-26T15:23:42.651Z · LW · GW

"it is impossible for there to be a language in which most sentences were lies"
Is it? If 40% of the time people truthfully described what colour a rock was, and 60% of the time they picked a random colour to falsely describe it as (perhaps some speakers benefit from obscuring the rock's true colour but derive no benefit from false belief in any particular colour), we would have a case where most sentences describing the rock were lies and yet listening to someone describing an unknown rock still allowed you to usefully update your priors. That ability to benefit from communication seems like all that should be necessary for a language to survive.
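
A quick Bayes check of that setup (the "lie names a uniformly random wrong colour" part is my modelling assumption, not something specified in the original example):

```python
# Bayes check of the 40%-truth example above. Assumes a uniform prior over
# n_colours and that a lie names a uniformly random *wrong* colour.
def p_rock_is_reported_colour(n_colours, p_truth=0.4):
    prior = 1 / n_colours
    p_report_given_true = p_truth
    p_report_given_false = (1 - p_truth) / (n_colours - 1)
    evidence = prior * p_report_given_true + (1 - prior) * p_report_given_false
    return prior * p_report_given_true / evidence

for n in (3, 5, 10):
    print(n, round(p_rock_is_reported_colour(n), 3))  # 0.4 each time, vs priors of 0.33, 0.2, 0.1
```

So even with 60% of rock-descriptions being lies, hearing a colour named moves you from a 1/n prior up to 0.4 on that colour, which is exactly the kind of useful update the language needs in order to survive.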

Comment by Ninety-Three on What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? · 2019-08-22T03:10:33.774Z · LW · GW

Without rejecting any of the premises in your question I can come up with:

- Low tractability: you assign almost all of the probability mass to one or both of "alignment will be easily solved" and "alignment is basically impossible".

- Currently low tractability: if your timeline is closer to 100 years than 10, it is possible that the best use of resources for AI risk is "sit on them until the field develops further", in the same sense that someone in the 1990s wanting good facial recognition might have been best served by waiting for modern ML.

- Refusing to prioritize highly uncertain causes in order to avoid the Winner's Curse outcome of your highest priority ending up as something with low true value and high noise.

- Flavours of utilitarianism that don't value the unborn and would not see it as an enormous tragedy if we failed to create trillions of happy post-Singularity people (depending on the details, human extinction might not even be negative, so long as the deaths aren't painful).

Comment by Ninety-Three on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-11T23:50:16.303Z · LW · GW

I got all of the octopus questions right (six recalled facts, #6 intuitively plausible, #9 seems rare enough that it should be unlikely for humans to observe, and #2 was uncertain until I completed the others and then metagamed that a 7/2 split would be "too unbalanced" for a handcrafted test), so the only surprising fact I have to update on is that the recognition thing is surprising to others. My model was that many wild animals are capable of recognizing humans, and octopuses are particularly smart as animals go; no other factors weigh heavily. That octopuses evolved totally separated from humans didn't seem significant because, although most wild animals were exposed to humans, I see no obvious incentive for most of them to recognize individual humans, so the cases should be comparable on that axis. I also put little weight on octopuses not being social creatures because, while there may be social recognition modules, (a) animals are able to recognize humans, and all of them generalizing their social modules to our species seems intuitively unlikely, and (b) at some level of intelligence it must be possible to distinguish individuals based on sheer general pattern-recognition; for ten humans an octopus would only need four or five bits of information, and animal intelligence in general seems good at distinguishing between a few totally arbitrary bits.

The evolutionary theory of aging is interesting and seems to predict that an animal's maximum age will be proportionate to its time-to-accidental-death. Just thinking of animals and their ages at random, this seems plausible, but I'm hardly being rigorous; have there been proper analyses done of that?