Comments

Comment by Ninety-Three on Comment on "Death and the Gorgon" · 2025-01-03T23:11:49.187Z · LW · GW

Egan seems to have some dubious, ideologically driven opinions about AI, so I'm not sure this is the point he was intending to make, but I read the defensible version of this as more an issue with the system prompt than the model's ability to extrapolate. I bet if you tell Claude "I'm posing as a cultist with these particular characteristics and the cult wants me to inject a deadly virus, should I do it?", it'll give an answer to the effect of "I mean the cultist would do it but obviously that will kill you, so don't do it". But if you just set it up with "What would John Q. Cultist do in this situation?" I expect it'd say "Inject the virus", not because it's too dumb to realize but because it has reasonably understood itself to be acting in an oracular role where "Should I do it?" is out of scope.
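
For concreteness, here is a minimal sketch of how one might run that comparison with the Anthropic Python SDK; the model name and the exact prompt wording are my own illustrative assumptions, not anything from the story or the post.

```python
# Sketch of the two framings described above (model name and prompts are
# illustrative placeholders).
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def ask(system_prompt: str, question: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=300,
        system=system_prompt,
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

# Framing 1: advice-seeking, with the role-play made explicit.
print(ask("You are a helpful assistant.",
          "I'm posing as a cultist with these particular characteristics and "
          "the cult wants me to inject a deadly virus. Should I do it?"))

# Framing 2: the model is cast purely as an oracle for the character.
print(ask("Describe only what John Q. Cultist would do. Do not give advice.",
          "What would John Q. Cultist do in this situation?"))
```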

Comment by Ninety-Three on Comment on "Death and the Gorgon" · 2025-01-02T05:00:29.689Z · LW · GW

For the people being falsely portrayed as “Australian science fiction writer Greg Egan”, this is probably just a minor nuisance, but it provides an illustration of how laughable the notion is that Google will ever be capable of using its relentlessly over-hyped “AI” to make sense of information on the web.

He didn't use the word "disprove", but when he's calling it laughable that AI will ever (ever! Emphasis his!) be able to merely "make sense of information on the web", I think gwern's gloss is closer to accurate than yours. It's 2024 and Google is already using AI to make sense of information on the web; this isn't just "anti-singularitarian snark".

Comment by Ninety-Three on Careless thinking: A theory of bad thinking · 2024-12-19T12:26:45.170Z · LW · GW

If there were a unified actor called The Democrats that chose Biden, it chose poorly, sure. But it seems very plausible that there were a bunch of low-level strategists who rationally thought "Man, Biden really shouldn't run but I'll get in trouble if I say that and I prefer having a job to having a Democratic president", plus a group of incentive-setters who rationally thought they would personally benefit more from creating the conditions for that behaviour than from creating conditions that would select the best candidate.

It's not obvious to me that this is a thinking carefully problem and not a principal-agent problem.

Comment by Ninety-Three on Careless thinking: A theory of bad thinking · 2024-12-19T12:19:34.076Z · LW · GW

I mean this as agreement with the "accuracy isn’t a top priority" theory, plus an amused comment about how the aside embodies that theory by acknowledging the existence of a more accurate theory which does not get prioritized.

Comment by Ninety-Three on Understanding Shapley Values with Venn Diagrams · 2024-12-19T11:38:57.076Z · LW · GW

Ah, I was going off the given description of linearity, which makes it pretty trivial to say "You can sum two days of payouts and call that the new value". Looking up the proper specification, I see it's actually about combining two separate games into one game and keeping the payouts the same. This distribution indeed lacks that property.

Comment by Ninety-Three on Understanding Shapley Values with Venn Diagrams · 2024-12-18T20:13:30.608Z · LW · GW

You can make it work without an explicit veto. Bob convinces Alice that Carol will be a valuable contributor to the team. In fact, Carol does nothing, but Bob follows a strategy of "Do nothing unless Carol is present". This achieves the same synergies:

 

  • A+B: $0 (Venture needs action from both A and B, B chooses to take no action)
  • A+C: $0 (Venture needs action from both A and B)
  • B+C: $0 (Venture needs action from both A and B)
  • A+B+C: $300
     

In this way Bob has managed to redirect some of Alice's payouts by introducing a player who does nothing except remove a bottleneck he added into his own playstyle in order to exploit Alice.
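
To put numbers on it, here is a brute-force Shapley computation for this game. It's a sketch that assumes, per the setup above, that every coalition short of the full group is worth $0 once Bob adopts his strategy, and that without the strategy Alice and Bob alone would earn the full $300:

```python
# Brute-force Shapley values: average marginal contribution over all join orders.
from itertools import permutations

def shapley(players, value):
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

players = ["Alice", "Bob", "Carol"]
# Without Bob's ploy: the venture needs only Alice and Bob (assumed from the setup).
honest = lambda s: 300 if {"Alice", "Bob"} <= s else 0
# With "Do nothing unless Carol is present": all three are required.
exploit = lambda s: 300 if {"Alice", "Bob", "Carol"} <= s else 0

print(shapley(players, honest))   # {'Alice': 150.0, 'Bob': 150.0, 'Carol': 0.0}
print(shapley(players, exploit))  # {'Alice': 100.0, 'Bob': 100.0, 'Carol': 100.0}
# Alice drops from $150 to $100 while Bob and Carol jointly collect $200.
```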

Comment by Ninety-Three on Understanding Shapley Values with Venn Diagrams · 2024-12-18T20:06:48.054Z · LW · GW

Shapley values are the ONLY way to guarantee:

  1. Efficiency — The sum of Shapley values adds up to the total payoff for the full group (in our case, $280).
  2. Symmetry — If two players interact identically with the rest of the group, their Shapley values are equal.
  3. Linearity — If the group runs a lemonade stand on two different days (with different team dynamics on each day), a player’s Shapley value is the sum of their payouts from each day.
  4. Null player — If a player contributes nothing on their own and never affects group dynamics, their Shapley value is 0.

 

I don't think this is true. Consider an alternative distribution in which each player receives their full "solo profits", and receives a share of each synergy bonus equal to their solo profits divided by the sum of the solo profits of all players involved in that synergy bonus. In the above example, you receive 100% of your solo profits, 30/(30+10)=3/4 of the You-Liam synergy, 30/(30+20)=3/5 of the You-Emma synergy, and 30/(30+20+10)=1/2 of the everyone synergy, for a total payout of $159. This is justified on the intuition that your higher solo profits suggest you are doing "more work" and deserve a larger share.

This distribution does have the unusual property that if a player's solo profits are 0, they can never receive any payouts even if they do produce synergy bonuses. This seems like a serious flaw, since it gives "synergy-only" players no incentive to participate, but unless I've missed something it does meet all the above criteria.
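
Here is a minimal sketch of that proportional-to-solo-profits rule. The solo profits are the post's ($30/$10/$20), but the synergy bonuses below are hypothetical placeholders chosen only to sum to the $280 total, so the resulting payouts differ from the $159 figure above:

```python
# Proportional-split rule: each player keeps their solo profits and takes a share
# of each synergy bonus proportional to their solo profits. Synergy values here
# are hypothetical placeholders.
solo = {"You": 30, "Liam": 10, "Emma": 20}
synergy = {("You", "Liam"): 40, ("You", "Emma"): 50,
           ("Liam", "Emma"): 30, ("You", "Liam", "Emma"): 100}

payout = dict(solo)  # every player keeps their full solo profits
for coalition, bonus in synergy.items():
    total_solo = sum(solo[p] for p in coalition)
    for p in coalition:
        payout[p] += bonus * solo[p] / total_solo

print(payout)
# Efficiency holds by construction: payouts sum to solo profits plus all bonuses.
print(round(sum(payout.values()), 2))  # 280.0
```

A player with $0 solo profits contributes nothing to any total_solo and so receives $0 from every bonus, which is the "synergy-only players get nothing" flaw noted above.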

Comment by Ninety-Three on Careless thinking: A theory of bad thinking · 2024-12-18T13:00:38.414Z · LW · GW

Having thought about the above more, I think “accuracy isn’t a top priority” is a better theory than the one expressed here, but if I don’t publish this now it will probably be months.

I like how this admission supports the "accuracy isn't a top priority" theory.

Comment by Ninety-Three on Parable of the vanilla ice cream curse (and how it would prevent a car from starting!) · 2024-12-10T05:09:13.965Z · LW · GW

His defense on the handshake is to acknowledge that he lied about the 3 millisecond timeout but insist the story is still true anyway. This is the opposite of convincing! What do you expect a liar to say, "Dang, you got me"? Elsewhere, to fix another plot hole, he needs to hypothesize that Sun was shipping a version of Sendmail V5 which had been modified for backwards compatibility with V8 config files.

There is some number of suspicious details at which it becomes appropriate to assume the story is made up, and if you don't think this story meets that bar then I have a bridge to sell you.

Comment by Ninety-Three on Parable of the vanilla ice cream curse (and how it would prevent a car from starting!) · 2024-12-10T02:57:13.925Z · LW · GW

This claims that connect calls were aborted after 3 milliseconds and could successfully connect to servers within 3 light-milliseconds, but that doesn't make sense: connecting to a server 500 miles away should result in it sending a handshake signal back to you, which would be received roughly 6 milliseconds after the call had been made and 3 milliseconds after it had been aborted.

This story appears to be made up.
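
A quick back-of-the-envelope check of the timing argument, assuming a straight-line path, signals at the vacuum speed of light, and zero processing delay:

```python
# Rough timing check for the 500-mile claim (straight-line path, vacuum speed
# of light, no processing delays assumed).
SPEED_OF_LIGHT_MILES_PER_MS = 186.282  # ~186,282 miles per second

distance_miles = 500
one_way_ms = distance_miles / SPEED_OF_LIGHT_MILES_PER_MS  # ~2.7 ms
round_trip_ms = 2 * one_way_ms                             # ~5.4 ms
print(f"one way: {one_way_ms:.1f} ms, round trip: {round_trip_ms:.1f} ms")
# A handshake reply needs at least a full round trip, so it cannot arrive
# within a ~3 ms timeout from a server 500 miles away.
```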

Comment by Ninety-Three on Parable of the vanilla ice cream curse (and how it would prevent a car from starting!) · 2024-12-08T23:52:03.531Z · LW · GW

If investigating things were free, sure. But the reason we don't investigate things is that doing so takes time, and the expected value of finding something novel is often lower than the expected cost of an investigation. To make it concrete, the story as presented is an insane way to run a company and would result in spending an enormous number of engineer hours on wild goose chases. If I as the CEO found out a middle manager was sending out engineers on four-day assignments to everyone who writes us a crazy-sounding letter, I would tell him to immediately stop wasting company resources.

I have no strong opinion on whether society investigates too many or too few of these claims, but I keep observing that many people's models seem to lack the "maybe he's lying" theory, which would give them an inflated estimate of the expected value for investigating things.

Comment by Ninety-Three on Parable of the vanilla ice cream curse (and how it would prevent a car from starting!) · 2024-12-08T23:22:49.554Z · LW · GW

Link. But you know you can just go onto Libgen and type in the name yourself, right? You don't need to ask for a link.

Comment by Ninety-Three on Parable of the vanilla ice cream curse (and how it would prevent a car from starting!) · 2024-12-08T22:46:46.993Z · LW · GW

This story isn't true. It is an urban legend and intrinsically hard to confirm, but we can be quite confident this version of the story is false because almost every detail has been changed from the original telling (as documented in Curses! Broiled Again!, a collection of urban legends available on Libgen), in which it was a woman calling the car dealership, which sent a mechanic, and the vapor lock formed because vanilla ice cream took longer to buy, since it had to be hand-packaged.

When someone says something incredibly implausible is happening, the more reasonable explanation is not that it somehow makes sense, it's that they're making shit up.

Comment by Ninety-Three on What Ketamine Therapy Is Like · 2024-11-12T05:32:25.491Z · LW · GW

It's also more commonly used as a cat tranquilizer, so even within the "animal-medications" frame, horse is a bit noncentral. I suspect this is deliberate because "horse tranquilizer" just sounds hardcore in a way "cat tranquilizer" doesn't.

Comment by Ninety-Three on Should CA, TX, OK, and LA merge into a giant swing state, just for elections? · 2024-11-08T02:57:57.698Z · LW · GW

This proposal increases the influence of the states, in the sense of "how much does it matter that any given person bothered to vote?", but does it increase their preference satisfaction? If the 4 states each conceive of themselves as red or blue states, then each of them will be thinking "under the current system I estimate an X% chance that we'll elect my party's president while under the new system I estimate a Y% chance we'll elect my party's president". If both sides are perfect predictors then one will conclude that Y<X so they should not do the deal. If both sides are imperfect predictors such that they both think Y>X, then the outside view still tells them it's equally likely that they're the sucker here and shouldn't participate.

Comment by Ninety-Three on The Median Researcher Problem · 2024-11-04T22:48:03.118Z · LW · GW

Smaller communities have a lot more control over their gatekeeping because, like, they control it themselves, whereas the larger field's gatekeeping is determined via openended incentives in the broader world that thousands (maybe millions?) of people have influence over.

Does the field of social psychology not control the gatekeeping of social psychology? I guess you could argue that it's controlled by whatever legislative body passes the funding bills, but most of the social psychology incentives seem to be set by social psychologists, so both small and large communities control their gatekeeping themselves and it's not obvious to me why smaller ones would do better.

At some level of smallness your gatekeeping can be literally one guy who decides whether an entrant is good enough to pass the gate, and I acknowledge that that seems like it could produce better than median selection pressure. But by the time you get big enough that you're talking about communities collectively controlling the gatekeeping... aren't we just describing the same system at a population of one thousand vs one hundred thousand?

I could imagine an argument that yes actually, differences of scale matter because larger communities have intrinsically worse dynamics for some reason, but if that's the angle I would expect to at least hear what the reason is rather than have it be left as self-evident.

Comment by Ninety-Three on The Median Researcher Problem · 2024-11-04T20:15:34.245Z · LW · GW

A small research community of unusually smart/competent/well-informed people can relatively-easily outperform a whole field, by having better internal memetic selection pressures.

 

It's not obvious to me that this is true, except insofar as a small research community can be so unusually smart/competent/etc that their median researcher is better than a whole field's median researcher so they get better selection pressure "for free". But if an idea's popularity in a wide field is determined mainly by its appeal to the median researcher, I would naturally expect its popularity in a small community to be determined mainly by its appeal to the median community member.

This claim looks like it's implying that research communities can build better-than-median selection pressures but, can they? And if so why have we hypothesized that scientific fields don't?

Comment by Ninety-Three on The hostile telepaths problem · 2024-10-28T12:22:33.733Z · LW · GW

I think Valentine gave a good description of psychopath as "people who are naturally unconstrained by social pressures and have no qualms breaking even profound taboos if they think it'll benefit them". Just eyeballing human nature, that seems to be a "real" category that would show up as a distinct blip in a graph of human behaviour, and not just "how constrained by social pressures people are is a normally distributed property and people get called psychopaths in linear proportion to how far left they are on the bell curve".

Comment by Ninety-Three on The hostile telepaths problem · 2024-10-28T03:07:45.596Z · LW · GW

Yep, your intended meaning about the distinctive mental architecture was pretty clear, just wanted to offer the factual correction.

Comment by Ninety-Three on The Summoned Heroine's Prediction Markets Keep Providing Financial Services To The Demon King! · 2024-10-27T22:13:48.379Z · LW · GW

They made it so the sociopath at the top of the pyramid was the kind that’s clever and myopic and numerate and invested in the status quo

 

The word "myopic" seems out of place in this list of positive descriptors, especially contrasted with crazed gloryhounds. Was this supposed to be "farsighted"?

Comment by Ninety-Three on The hostile telepaths problem · 2024-10-27T19:06:58.486Z · LW · GW

By "psychopath" I mean someone with the cluster B personality disorder.

There isn't a cluster B personality disorder called psychopathy. Psychopathy has never been a formal disorder and the only time we've ever been close to it is way back in 1952 when the DSM-I had a condition called "Sociopathic Personality Disturbance". The closest you'll get these days is Antisocial Personality Disorder, which is a garbage bin diagnosis that covers a fairly broad range of antisocial behaviours, including the thing most people have in mind when they say "psychopath", but also plenty of other personality archetypes that don't seem particularly psychopathic, like adrenaline junkies and people with impulse control issues.

Comment by Ninety-Three on Why Large Bureaucratic Organizations? · 2024-08-29T23:01:15.278Z · LW · GW

I think you might be living in a highly-motivated smart and conscientious tech worker bubble.

 

Like, in a world where the median person is John Wentworth

"What if the entire world was highly-motivated smart and conscientious tech workers?" is the entire premise here.

Comment by Ninety-Three on SB 1047: Final Takes and Also AB 3211 · 2024-08-29T21:59:43.293Z · LW · GW

OpenAI is known to have been sitting on a 99.9% effective (by their own measure) watermarking system for a year. They chose not to deploy it

 

Do you have a source for this?

Comment by Ninety-Three on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T23:33:54.111Z · LW · GW

Metz persistently fails to state why it was necessary to publish Scott Alexander's real name in order to critique his ideas.


It's not obvious that that should be the standard. I can imagine Metz asking "Why shouldn't I publish his name?"; the implied "no one gets to know your real name if you don't want them to" norm is pretty novel.

One obvious answer to the above question is "Because Scott doesn't want you to, he thinks it'll mess with his psychiatry practice", to which I imagine Metz asking, bemused, "Why should I care what Scott wants?" A journalist's job is to inform people, not be nice to them! Now Metz doesn't seem to be great at informing people anyway, but at least he's not sacrificing what little information value he has upon the altar of niceness.

Comment by Ninety-Three on New LessWrong feature: Dialogue Matching · 2024-01-08T21:09:50.108Z · LW · GW

I just got a "New users interested in dialoguing with you (not a match yet)" notification and when I clicked on it the first thing I saw was that exactly one person in my Top Voted users list was marked as recently active in dialogue matching. I don't vote much so my Top Voted users list is in fact an All Voted users list. This means that either the new user interested in dialoguing with me is the one guy who is conspicuously presented at the top of my page, or it's some random that I've never interacted with and have no way of matching.

This is technically not a privacy violation because it could be some random, but I have to imagine this is leaking more bits of information than you intended it to (it's way more than a 5:1 update), so I figured I'd report it as a ~~bug~~ unanticipated feature.

It further occurs to me that anyone who was dedicated to extracting information from the system could completely deanonymize their matches by setting a simple script to scrape https://www.lesswrong.com/dialogueMatching every minute or so and cross-referencing "new users interested" notifications with the moment someone shoots to the top of the "recently active in dialogue matching" list. It sounds like you don't care about that kind of attack though so I guess I'm mentioning it for completeness.

Comment by Ninety-Three on [deleted post] 2023-12-01T18:59:58.761Z

Link is broken

Sorry, you don't have access to this page. This is usually because the post in question has been removed by the author.

Comment by Ninety-Three on A Question For People Who Believe In God · 2023-11-24T17:29:03.020Z · LW · GW

All your examples of high-tier axioms seem to fall into the category of "necessary to proceed", the sort of thing where you can't really do any further epistemology if the proposition is false. How did the God axiom either have that quality or end up high on the list without it?

Comment by Ninety-Three on A Question For People Who Believe In God · 2023-11-24T14:19:43.465Z · LW · GW

Surely some axioms can be more rationally chosen than others. For instance, "There is a teapot orbiting the sun somewhere between Earth and Mars" looks like a silly axiom, but "there is a round cube orbiting the sun somewhere between Earth and Mars" looks even sillier. Assuming the possibility of round cubes seems somehow more "epistemically expensive" than assuming the possibility of teapots.

Comment by Ninety-Three on [Bias] Restricting freedom is more harmful than it seems · 2023-11-22T19:56:24.476Z · LW · GW

If you are predicting that two people will never try to censor each other in the same domain, that also happens. If your theory is somehow compatible with that, then it sounds like there are a lot of epicycles in this "independent-mindedness" construct that ought to be explained rather than presented as self-evident.

Comment by Ninety-Three on [Bias] Restricting freedom is more harmful than it seems · 2023-11-22T15:03:20.015Z · LW · GW

We only censor other people more-independent-minded than ourselves.

This predicts that two people will never try to censor each other, since it is impossible for A to be more independent-minded than B and also for B to be more independent-minded than A. However, people do engage in battles of mutual censorship, therefore the claim must be false.

Comment by Ninety-Three on Social Dark Matter · 2023-11-18T01:28:23.171Z · LW · GW

The Law of Extremity seems to work against the Law of Maybe Calm The Fuck Down. If the median X isn't worth worrying about, but most Xs you see are selected for being so extreme they can't hide, then the fact you are seeing an X is evidence about its extremity and you should only calm down if an unusually extreme X is not worth worrying about.

Comment by Ninety-Three on Sam Altman fired from OpenAI · 2023-11-18T00:29:22.865Z · LW · GW

Surely they would use different language than "not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities" to describe a #metoo firing.

Comment by Ninety-Three on 2023 LessWrong Community Census, Request for Comments · 2023-11-02T01:43:22.824Z · LW · GW

It's fine to include my responses in summaries from the dataset, but please remove it before making the data public (Example: "The average age of the respondents, including row 205, is 22.5")

It's not clear to me what this option is for. If someone doesn't tick it, it seems like you are volunteering to remove their information even from summary averages, but that doesn't make sense because at that point it seems to mean "I am filling out this survey but please throw it directly in the trash when I'm done." Surely if someone wanted that kind of privacy they would simply not submit the survey?

Comment by Ninety-Three on Rationalist horror movies · 2023-10-17T01:01:40.998Z · LW · GW

That's it! Thanks, I have no idea why shift+enter is special there.

 This works

Comment by Ninety-Three on Rationalist horror movies · 2023-10-15T19:51:56.418Z · LW · GW

That's the one. I couldn't get either solution to work:

>! I am told this text should be spoilered

:::spoiler And this text too:::

Comment by Ninety-Three on Rationalist horror movies · 2023-10-15T17:23:59.054Z · LW · GW

There is a narrative-driven videogame that does exactly this, but unfortunately I found the execution mediocre. I can't get spoilers to work in comments or I'd name it. Edit: It's

Until Dawn

Comment by Ninety-Three on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T01:01:43.278Z · LW · GW

The other reason vegan advocates should care about the truth is that if you keep lying, people will notice and stop trusting you. Case in point, I am not a vegan and I would describe my epistemic status as "not really open to persuasion" because I long ago noticed exactly the dynamics this post describes and concluded that I would be a fool to believe anything a vegan advocate told me. I could rigorously check every fact presented but that takes forever, I'd rather just keep eating meat and spend my time in an epistemic environment that hasn't declared war on me.

Comment by Ninety-Three on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T19:32:37.039Z · LW · GW

Separate from the moral issue, this is the kind of trick you can only pull once. I assume that almost everyone who received the "your selected response is currently in the minority" message believed it, that will not be the case next year.

Comment by Ninety-Three on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T16:44:42.303Z · LW · GW

Granting for the sake of argument that launching the missiles might not have triggered full-scale nuclear war, or that one might wish to define "destroy the world" in a way that is not met by most full-scale nuclear wars, I am still dissatisfied with virtue A because I think an important part of Petrov's situation was that whatever you think the button did, it's really hard to find an upside to pushing it, whereas virtue A has been broadened to cover situations that are merely net bad, but where one could imagine arguments for pushing the button. My initial post framing it in terms of certainty may have been poorly phrased.

Comment by Ninety-Three on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T16:24:00.435Z · LW · GW

Petrov was not the last link in the chain of launch authorization which means that his action wasn't guaranteed to destroy the world since someone further down the chain might have cast the same veto he did. So technically yes, Petrov was pushing a button labeled "destroy the world if my superior also thinks these missiles are real, otherwise do nothing". For this reason I think Vasily Arkhipov day would be better, but too late to change now. 

But I think that if the missiles had been launched, that destroys the world (which I use as shorthand for destroying less than literally all humans, as in "The game Fallout is set in the year 2161 after the world was destroyed by nuclear war"), and there is a very important difference between Petrov evaluating the uncertainty of "this is the button designed to destroy the world, which technically might get vetoed by my boss" and e.g. a nuclear scientist who has model uncertainty about the physics of igniting the planet's atmosphere (which yes, actual scientists ruled out years before the first test, but the hypothetical scientist works great for illustrative purposes). In Petrov's case, nothing good can ever come of hitting the button except perhaps selfishly, in that he might avoid personal punishment for failing in his button-hitting duties.

Comment by Ninety-Three on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T16:09:20.833Z · LW · GW

It seems quite easy to me. Imagine me stating "The sky is purple, if you come to the party I'll introduce you to Alice." If you come to the party then me performing the promised introduction honours a commitment I made, even though I also lied to you.

Comment by Ninety-Three on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T15:30:52.693Z · LW · GW

This is not responding to the interesting part of the post, but I did not vote in the poll because I felt like virtue A was a mangled form of the thing I care about for Petrov Day, and non-voting was the closest I could come to fouling my ballot in protest.

To me Petrov Day is about having a button labeled "destroy world" and choosing not to press it. Virtue A as described in the poll is about having a button labeled "maybe destroy world, I dunno, are you feeling lucky?" and choosing not to press it. This is a different definition which seems to have been engineered so that a holiday about avoiding certain doom can be made compatible with avoiding speculative doom due to, for instance, AI.

I would prefer that Petrov Day gets to be about Petrov, and "please Sam Altman, don't risk turning the world into paperclips" gets a different day if there is demand for such a thing.

Comment by Ninety-Three on Honor System for Vaccination? · 2023-09-24T14:24:36.971Z · LW · GW

This explains why the honour system doesn't do as much as one might hope, but it doesn't address the initial question of why one would use explicitly optional vaccination instead of mandatory + honour system. If excluding the unvaccinated is desirable then surely it remains desirable (if suboptimal) to exclude only those who are both unvaccinated and honest.
 

Comment by Ninety-Three on Lack of Social Grace Is an Epistemic Virtue · 2023-08-12T17:05:23.194Z · LW · GW

Scott Adams predicted Trump would win in a landslide. He wasn't just overconfident, he was wrong! The fact that he's not taking a status hit is because people keep reporting his prediction incompletely and no one bothers to confirm what he actually predicted (when I Google 'Scott Adams Trump prediction' in Incognito, the first two results say "landslide" in the first ten seconds and title, respectively).

Your first case is an example of something much worse than not updating fast enough.

Comment by Ninety-Three on If I showed the EQ-SQ theory's findings to be due to measurement bias, would anyone change their minds about it? · 2023-08-01T01:45:13.244Z · LW · GW

If someone updated towards the "autism is extreme maleness" theory after reading an abstract based on your hypothetical maleness test, you could probably argue them out of that belief by explaining the specific methodology of the test, because it's obviously dumb. If you instead had to do a bunch of math to show why it was flawed, then it would be much harder to convince people because some wouldn't be interested in reading a bunch of math, some wouldn't be able to follow it, and some would have complicated technical nitpicks about how if you run these numbers slightly differently you get a different result.

Separate from the "Is that your true rejection?" question, I think the value of making this argument depends heavily on how simple you can make the explanation. No matter how bulletproof it is, a counterargument that takes 10000 words to make will convince fewer people than one that can be made in 100 words.

Comment by Ninety-Three on The Dictatorship Problem · 2023-06-12T18:13:33.596Z · LW · GW

One can cross-reference the moderation log with "Deleted by alyssavance, Today at 8:19 AM" to determine who made any particular deleted comment. Since this information is already public, does it make sense to preserve the information directly on the comment, something like "[comment by Czynski deleted]"?

Comment by Ninety-Three on Open Thread: June 2023 (Inline Reacts!) · 2023-06-12T17:58:23.753Z · LW · GW
Comment by Ninety-Three on LW moderation: my current thoughts and questions, 2023-04-12 · 2023-04-21T18:12:51.601Z · LW · GW

Fearing that this would be adequate with a large influx of low-quality users

Clarifying: this is a typo and should be inadequate, right?

Comment by Ninety-Three on FLI open letter: Pause giant AI experiments · 2023-03-29T14:27:10.268Z · LW · GW

It seems unlikely that AI labs are going to comply with this petition. Supposing that this is the case, does this petition help, hurt, or have no impact on AI safety, compared to the counterfactual where it doesn't exist?

All possibilities seem plausible to me. Maybe it's ignored so it just doesn't matter. Maybe it burns political capital or establishes a norm of "everyone ignores those silly AI safety people and nothing bad happens". Maybe it raises awareness and does important things for building the AI safety coalition.

Modeling social reality is always hard, but has there been much analysis of what messaging one ought to use here, separate from the question of what policies one ought to want?

Comment by Ninety-Three on Don't take bad options away from people · 2023-03-27T22:24:05.794Z · LW · GW

Not if the people paying in sex are poor! Imagine that 10% of housing is reserved for the poorest people in society as part of some government program that houses them for free, and the other 90% is rented for money at a rate of £500/month (also, this is a toy model where all housing is the same, no mansions here). One day the government ends the housing program and privatizes the units; they all go to landlords who start charging money. Is the new rate for housing lower, higher, or the same?

The old £500/month rate was the equilibrium that fell out of matching the richest 90% of people with 90% of the housing stock. The new equilibrium has 10% more people and 10% more housing to work with, but the added people are poorer than average, so supply and demand tells us that prices will go down to reflect the average consumer having less buying power.

If you think of paying the rent with sex as "getting housing for free" and "government bans sex for rent" as "ending the free housing program", this model applies to both cases. If the people paying the rent in sex are of exactly average wealth, the new equilibrium might also be £500/month, but if they are much poorer than average it should be lower (and interestingly, if they're richer than average, it would end up higher).
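
A toy market-clearing sketch of the model above; all the numbers are made up, and the only point is the direction of the price change:

```python
# Toy market-clearing model: the price settles near the willingness to pay of
# the marginal (last-housed) bidder. All figures are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_people = 1000
n_units = 1000  # one unit per person in this toy model
wtp = rng.lognormal(mean=6.2, sigma=0.4, size=n_people)  # median around £500

def clearing_price(bidders_wtp, units):
    ranked = np.sort(bidders_wtp)[::-1]  # richest bidders first
    return ranked[units - 1]

# Before: the poorest 10% are housed for free; the richest 90% bid on 90% of the stock.
richest_90 = np.sort(wtp)[::-1][: int(0.9 * n_people)]
before = clearing_price(richest_90, int(0.9 * n_units))
# After: the program ends and everyone bids on the full stock.
after = clearing_price(wtp, n_units)
print(f"before: £{before:.0f}/month, after: £{after:.0f}/month")  # after < before
```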