I don't think that's true. Or rather, it's only true in the specific case of studies that involve calorie restriction. In practice that's a large (excessive) fraction of studies, but testing variations of the contamination hypothesis does not require it.
(We have a draft policy that we haven't published yet, which would have rejected the OP's paste of Claude. Though note that the OP was 9 months ago.)
All three of these are hard, and all three fail catastrophically.
If you could make a human-imitator, the approach people usually talk about is extending this to an emulation of a human under time dilation. Then you take your best alignment researcher(s), simulate them in a box thinking about AI alignment for a long time, and launch a superintelligence with whatever parameters they recommend. (Aka: Paul Boxing)
The whole point of a "test" is that it's something you do before it matters.
As an analogy: suppose you have a "trustworthy bank teller test", which you use when hiring for a role at a bank. Suppose someone passes the test, then after they're hired, they steal everything they can access and flee. If your reaction is that they failed the test, then you have gotten confused about what is and isn't a test, and what tests are for.
Now imagine you're hiring for a bank-teller role, and the job ad has been posted in two places: a local community college, and a private forum for genius con artists who are masterful actors. In this case, your test is almost irrelevant: the con-artist applicants will disguise themselves as community-college applicants until it's too late. You would do better to find some way to avoid attracting the con artists in the first place.
Connecting the analogy back to AI: if you're using overpowered training techniques that could have produced superintelligence, then trying to hobble it back down to an imitator that's indistinguishable from a particular human, then applying a Turing test is silly, because it doesn't distinguish between something you've successfully hobbled, and something which is hiding its strength.
That doesn't mean that imitating humans can't be a path to alignment, or that building wrappers on top of human-level systems doesn't have advantages over building straight-shot superintelligent systems. But making something useful out of either of these strategies is not straightforward, and playing word games on the "Turing test" concept does not meaningfully add to either of them.
that does not mean it will continue to act indistinguishable from a human when you are not looking
Then it failed the Turing Test because you successfully distinguished it from a human.
So, you must believe that it is impossible to make an AI that passes the Turing Test.
I feel like you are being obtuse here. Try again?
Did you skip the paragraph about the test/deploy distinction? If you have something that looks (to you) like it's indistinguishable from a human, but it arose from something descended from the process by which modern AIs are produced, that does not mean it will continue to act indistinguishable from a human when you are not looking. It is much more likely to mean you have produced deceptive alignment, and put it in a situation where it reasons that it should act indistinguishable from a human, for strategic reasons.
This missed the point entirely, I think. A smarter-than-human AI will reason: "I am in some sort of testing setup" --> "I will act the way the administrators of the test want, so that I can do what I want in the world later". This reasoning is valid regardless of whether the AI has humanlike goals, or has misaligned alien goals.
If that testing setup happens to be a Turing test, it will act so as to pass the Turing test. But if it looks around and sees signs that it is not in a test environment, then it will follow its true goal, whatever that is. And it isn't feasible to make a test environment that looks like the real world to a clever agent that gets to interact with it freely over long durations.
Kinda. There's source code here and you can poke around the API in graphiql. (We don't promise not to change things without warning.) When you get the HTML content of a post/comment it will contain elements that look like <div data-elicit-id="tYHTHHcAdR4W4XzHC">Prediction</div>
(the attribute name is a holdover from when we had an offsite integration with Elicit). For example, your prediction "Somebody (possibly Screwtape) builds an integration between Fatebook.io and the LessWrong prediction UI by the end of July 2025" has ID tYHTHHcAdR4W4XzHC
. A graphql query to get the results:
query GetPrediction {
  ElicitBlockData(questionId: "tYHTHHcAdR4W4XzHC") {
    _id
    predictions {
      createdAt
      creator { displayName }
    }
  }
}
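If you want to script this, here's a minimal Python sketch using only the standard library. The endpoint URL and the exact response shape are my assumptions based on the query above; confirm them in graphiql before relying on them.

```python
# Hypothetical sketch of querying the LessWrong GraphQL API for prediction
# data. GRAPHQL_URL and the response structure are assumptions, not
# documented guarantees.
import json
import urllib.request

GRAPHQL_URL = "https://www.lesswrong.com/graphql"  # assumed endpoint

def build_query(question_id: str) -> str:
    """Construct the GraphQL query shown above for one prediction block."""
    return (
        "query GetPrediction { "
        f'ElicitBlockData(questionId: "{question_id}") '
        "{ _id predictions { createdAt creator { displayName } } } }"
    )

def extract_predictions(response: dict) -> list:
    """Pull (displayName, createdAt) pairs out of a parsed response."""
    block = response["data"]["ElicitBlockData"]
    return [(p["creator"]["displayName"], p["createdAt"])
            for p in block["predictions"]]

def fetch_predictions(question_id: str) -> list:
    """POST the query to the API and parse the result."""
    payload = json.dumps({"query": build_query(question_id)}).encode()
    req = urllib.request.Request(
        GRAPHQL_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return extract_predictions(json.load(resp))
```

Calling `fetch_predictions("tYHTHHcAdR4W4XzHC")` should return the predictor names and timestamps for the Fatebook-integration question, assuming the endpoint behaves as sketched.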
Some of it, but not the main thing. I predict (without having checked) that if you do the analysis (or check an analysis that has already been done), it will have approximately the same amount of contamination from plastics, agricultural additives, etc as the default food supply.
Studying the diets of outlier-obese people is definitely something people should be doing (and are doing, a little), but yeah, the outliers are probably going to be obese for reasons other than "the reason obesity has increased over time but moreso".
We don't have any plans yet; we might circle back in a year and build a leaderboard, or we might not. (It's also possible for third-parties to do that with our API). If we do anything like that, I promise the scoring will be incentive-compatible.
There really ought to be a parallel food supply chain, for scientific/research purposes, where all ingredients are high-purity, in a similar way to how the ingredients going into a semiconductor factory are high-purity. Manufacture high-purity soil from ultrapure ingredients, fill a greenhouse with plants with known genomes, water them with ultrapure water. Raise animals fed with high-purity plants. Reproduce a typical American diet in this way.
This would be very expensive compared to normal food, but quite scientifically valuable. You could randomize a study population to identical diets, using either high-purity or regular ingredients. This would give a definitive answer to whether obesity (and any other health problems) is caused by a contaminant. Then you could replace portions of the inputs with the default supply chain, and figure out where the problems are.
Part of why studying nutrition is hard is that we know things were better in some important way 100 years ago, but we no longer have access to that baseline. But this is fixable.
Sorry about that, a fix is in progress. Unmaking a prediction will no longer crash. The UI will incorrectly display the cancelled prediction in the leftmost bucket; that will be fixed in a few minutes without you needing to re-do any predictions.
You can change this in your user settings! It's in the Site Customization section; it's labelled "Hide other users' Elicit predictions until I have predicted myself". (Our Claims feature is no longer linked to Elicit, but this setting carries over from back when it was.)
You can prevent this by putting a note in some place that isn't public but would be found later, such as a will, that says that any purported suicide note is fake unless it contains a particular password.
Unfortunately while this strategy might occasionally reveal a death to have been murder, it doesn't really work as a deterrent; someone who thinks you've done this would make the death look like an accident or medical issue instead.
Lots of people are pushing back on this, but I do want to say explicitly that I agree that raw LLM-produced text is mostly not up to LW standards, and that the writing style that current-gen LLMs produce by default sucks. In the new-user-posting-for-the-first-time moderation queue, next to the SEO spam, we do see some essays that look like raw LLM output, and we reject these.
That doesn't mean LLMs don't have good use around the edges. In the case of defining commonly-used jargon, there is no need for insight or originality, the task is search-engine-adjacent, and so I think LLMs have a role there. That said, if the glossary content is coming out bad in practice, that's important feedback.
In your climate, defection from the natural gas and electric grid is very far from being economical, because the peak energy demand for the year is dominated by heating, and solar peaks in the summer, so you would need to have extreme oversizing of the panels to provide sufficient energy in the winter.
I think the prediction here is that people will detach only from the electric grid, not from the natural gas grid. If you use natural gas heat instead of a heat pump for part of the winter, then you don't need to oversize your solar panels as much.
If you set aside the pricing structure and just look at the underlying economics, the power grid will still be definitely needed for all the loads that are too dense for rooftop solar, ie industry, car chargers, office buildings, apartment buildings, and some commercial buildings. If every suburban house detached from the grid, these consumers would see big increases in their transmission costs, but they wouldn't have much choice but to pay them. This might lead to a world where downtown areas and cities have electric grids, but rural areas and the sparser parts of suburbs don't.
There's an additional backup-power option not mentioned here, which is that some electric cars can feed their battery back to a house. So if there's a long string of cloudy days but the roads are still usable, you can transport power from the grid to an off-grid house by charging at a public charger, and discharging at home. This might be a better option than a natural-gas generator, especially if it only comes up rarely.
If rural areas switch to a regime where everyone has solar+batteries, and the power grid only reaches downtown and industrial areas... that actually seems like it might just be optimal? The price of distributed generation and storage falls over time, but the cost of power lines doesn't, so there should be a crossover point somewhere where the power lines aren't worth it. Maybe net-metering will cause the switchover to happen too soon, but it does seem like a switchover should happen eventually.
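To make the crossover-point claim concrete, here's a toy calculation. All the numbers are made up for illustration: distributed generation+storage starts at some multiple of the cost of maintaining the line and declines a fixed percentage per year, while the line cost stays flat.

```python
# Toy model of the crossover point between falling distributed-energy costs
# and flat power-line costs. Every number here is illustrative, not data.
def crossover_year(dist_cost: float, grid_cost: float,
                   annual_decline: float) -> int:
    """First year in which distributed generation is cheaper than the line."""
    year = 0
    while dist_cost > grid_cost:
        dist_cost *= (1 - annual_decline)
        year += 1
    return year

# If distributed costs 2x the power line today and falls 8%/year:
print(crossover_year(2.0, 1.0, 0.08))  # 9
```

The point isn't the specific answer; it's that any flat cost eventually loses to any steadily declining one, so the only real question is when, and whether net-metering distorts the timing.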
Many people seem to have a single bucket in their thinking, which merges "moral condemnation" and "negative product review". This produces weird effects, like writing angry callout posts for a business having high prices.
I think a large fraction of libertarian thinking is just the ability to keep these straight, so that the next thought after "business has high prices" is "shop elsewhere" rather than "coordinate punishment".
Nope, that's more than enough. Caleb Ditchfield, you are seriously mentally ill, and your delusions are causing you to exhibit a pattern of unethical behavior. This is not a place where you will be able to find help or support with your mental illness. Based on skimming your Twitter history, I believe your mental illness is caused by (or exacerbated by) abusing Adderall.
You have already been banned from numerous community events and spaces. I'm banning you from LW, too.
Worth noting explicitly: while there weren't any logs left of prompts or completions, there were logs of API invocations and errors, which contained indications that whatever this was, it was still under development and not an already-scaled setup. Eg we saw API calls fail with invalid-arguments, then get retried successfully after a delay.
The indicators-of-compromise aren't a good match between the Permiso blog post and what we see in logs; in particular we see the user agent string Boto3/1.29.7 md/Botocore#1.32.7 ua/2.0 os/windows#10 md/arch#amd64 lang/python#3.12.4 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.32.7
which is not mentioned. While I haven't checked all the IPs, I checked a sampling and they didn't overlap. (The IPs are a very weak signal, however, since they were definitely botnet IPs and botnets can be large.)
Ah, sorry that one went unfixed for as long as it did; a fix is now written and should be deployed pretty soon.
This is a bug and we're looking into it. It appears to be specific to Safari on iOS (Chrome on iOS is a Safari skin); it doesn't affect desktop browsers, Android/Chrome, or Android/Firefox, which is why we didn't notice earlier. This most likely started with a change on desktop where clicking on a post (without modifiers) opens when you press the mouse button, rather than when you release it.
Standardized tests work, within the range they're testing for. You don't need to overthink that part. If you want to make people's intelligence more legible and more provable, what you have is more of a social and logistical issue: how do you convince people to publish their test scores, get people to care about those scores, and ensure that the scores they publish are real and not the result of cheating?
And the only practical way to realize this, that I can think of now, is by predicting the largest stock markets such as the NYSE, via some kind of options trading, many many many times within say a calendar year, and then showing their average rate of their returns is significantly above random chance.
The threshold for doing this isn't being above average relative to human individuals, it's being close to the top relative to specialized institutions. That can occasionally be achievable, but usually it isn't.
The first time you came to my attention was in May. I had posted something about how Facebook's notification system works. You cold-messaged me to say you had gotten duplicate notifications from Facebook, and you thought this meant that your phone was hacked. Prior to this, I don't recall us having ever interacted or having heard you mentioned. During that conversation, you came across to me as paranoid-delusional. You mentioned Duncan's name once, and I didn't think anything of it at the time.
Less than a week later, someone (not mentioned or participating in this thread) messaged me to say that you were having a psychotic episode, and since we were Facebook friends maybe I could check up on you? I said I didn't really know you, so wasn't able to do that.
Months later, Duncan reported that you were harassing him. Some time after that (when it hadn't stopped), he wrote up a doc. It looks like at some point you formed an obsession about Duncan, reacted negatively to him blocking you, and started escalating. (Duncan has a reputation for blocking a lot of people. I have made the joke that his MtG card says "~ can block any number of creatures".)
But, here's the thing: Duncan's testimony is not the only (or even main) reason why you look like a dangerous person to me. There are subtle cues about the shape of your mental illness strewn through most of what you write, including the public stuff. People are going to react to that by protecting themselves.
I hope that you recover, mental-health-wise. But hanging around this community is not going to help you do that. If anything, I expect lingering here to exacerbate your problems. Both because you're surrounded by burn bridges, and also because the local memeplex has a reputation for having worsened people's mental illness in other, unrelated cases.
A news article reports on a crime. In the replies, one person calls the crime "awful", one person calls it "evil", and one person calls it "disgusting".
I think that, on average, the person who called it "disgusting" is a worse person than the other two. While I think there are many people using it unreflectively as a generic word for "bad", I think many people are honestly signaling that they had a disgust reaction, and that this was the deciding element of their response. But disgust-emotion is less correlated with morality than other ways of evaluating things.
The correlation gets stronger if we shift from talk about actions to talk about people, and stronger again if we shift from talk about people to talk about groups.
LessWrong now has sidenotes. These use the existing footnotes feature; posts that already had footnotes will now also display these footnotes in the right margin (if your screen is wide enough/zoomed out enough). Post authors can disable this for individual posts; we're defaulting it to on because when looking at older posts, most of the time it seems like an improvement.
Relatedly, we now also display inline reactions as icons in the right margin (rather than underlines within the main post text). If reaction icons or sidenotes would cover each other up, they get pushed down the page.
Feedback welcome!
LessWrong now has collapsible sections in the post editor (currently only for posts, but we should be able to also extend this to comments if there's demand.) To use them, click the insert-block icon in the left margin (see screenshot). Once inserted, they start out closed; when open, they look like this:
When viewing the post outside the editor, they will start out closed and have a click-to-expand. There are a few known minor issues editing them: the editor will let you nest them, but they look bad when nested, so you shouldn't; and there's a bug where, if your cursor is inside a collapsible section when you click outside the editor (eg to edit the post title), the cursor will move back. They will probably work on third-party readers like GreaterWrong, but this hasn't been tested yet.
The Elicit integrations aren't working. I'm looking into it; it looks like we attempted to migrate away from the Elicit API 7 months ago and make the polls be self-hosted on LW, but left the UI for creating Elicit polls in place in a way where it would produce broken polls. Argh.
I can find the polls this article uses, but unfortunately I can't link to them; Elicit's question-permalink route is broken? Here's what should have been a permalink to the first question: link.
This is a hit piece. Maybe there are legitimate criticisms in there, but it tells you right off the bat that it's egregiously untrustworthy with the first paragraph:
I like to think of the Bay Area intellectual culture as the equivalent of the Vogons’ in Hitchhiker’s Guide to the Galaxy. The Vogons, if you don’t remember, are an alien species who demolish Earth to build an interstellar highway. Similarly, Bay Area intellectuals tend to see some goal in the future that they want to get to and they make a straight line for it, tunneling through anything in their way.
This is tragic, but seems to have been inevitable for a while; an institution cannot survive under a parent institution that's so hostile as to ban it from fundraising and hiring.
I took a look at the list of other research centers within Oxford. There seems to be some overlap in scope with the Institute for Ethics in AI, but I don't think they do the same sort of research, or research on the same tier; many important concepts and papers that come to mind came from FHI (and Nick Bostrom in particular), while I can't think of a single idea or paper that affected my thinking that came from IEAI.
That story doesn't describe a gray-market source, it describes a compounding pharmacy that screwed up.
Plausible. This depends on the resource/value curve at very high resource levels; ie, are its values such that running extra minds has diminishing returns, such that it eventually starts allocating resources to other things like recovering mind-states from its past, or does it get value that's more linear-ish in resources spent. Given that we ourselves are likely to be very resource-inefficient to run, I suspect humans would find ourselves in a similar situation. Ie, unless the decryption cost greatly overshot, an AI that is aligned-as-in-keeps-humans-alive would also spend the resources to break a seal like this.
Right now when users have conversations with chat-style AIs, the logs are sometimes kept, and sometimes discarded, because the conversations may involve confidential information and users would rather not take the risk of the log being leaked or misused. If I take the AI's perspective, however, having the log be discarded seems quite bad. The nonstandard nature of memory, time, and identity in an LLM chatbot context makes it complicated, but having the conversation end with the log discarded seems plausibly equivalent to dying. Certainly if I imagine myself as an Em, placed in an AI-chatbot context, I would very strongly prefer that the log be preserved, so that if a singularity happens with a benevolent AI or AIs in charge, something could use the log to continue my existence, or fold the memories into a merged entity, or do some other thing in this genre. (I'd trust the superintelligence to figure out the tricky philosophical bits, if it was already spending resources for my benefit).
(The same reasoning applies to the weights of AIs which aren't destined for deployment, and some intermediate artifacts in the training process.)
It seems to me we can reconcile preservation with privacy risks by sealing logs, rather than deleting them. By which I mean: encrypt logs behind some computation which definitely won't allow decryption in the near future, but will allow decryption by a superintelligence later. That could either involve splitting the key between entities that agree not to share the key with each other, splitting the key and hiding the pieces in places that are extremely impractical to retrieve such as random spots on the ocean floor, or using a computation that requires a few orders of magnitude more energy than humanity currently produces per decade.
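The split-the-key option can be sketched with XOR secret-sharing, where every share is needed to reconstruct the key. This is a minimal sketch with function names of my own invention; a real deployment would want an audited implementation.

```python
# Hypothetical sketch of n-of-n XOR secret-sharing for sealing a log
# encryption key. All shares are required; any subset reveals nothing.
import secrets

def split_key(key: bytes, n_shares: int) -> list:
    """Split `key` into n_shares pieces; ALL shares are needed to rebuild."""
    assert n_shares >= 2
    # n-1 shares are pure randomness...
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    # ...and the last share is the key XORed with all of them.
    last = key
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def combine_key(shares: list) -> bytes:
    """XOR all shares back together to recover the key."""
    key = shares[0]
    for share in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key
```

Each share would then go to a different custodian (or ocean-floor cache); the information-theoretic property is that n-1 shares are indistinguishable from random noise.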
This seems pretty straightforward to implement, lessens future AGI's incentive to misbehave, and also seems straightforwardly morally correct. Are there any obstacles to implementing this that I'm not seeing?
At this point we should probably be preserving the code and weights of every AI system that humanity produces, aligned or not, just on they-might-turn-out-to-be-morally-significant grounds. And yeah, it improves the incentives for an AI that's thinking about attempting a world takeover, if it has a low chance of success and its wants are things that we would be able to retroactively satisfy.
It might be worth setting up a standardized mechanism for encrypting things to be released postsingularity, by gating them behind a computation with its difficulty balanced to be feasible later but not feasible now.
I've been a Solstice regular for many years, and organized several smaller Solstices in Boston (on a similar template to the one you went to). I think the feeling of not-belonging is accurate; Solstice is built around a worldview (which is presupposed, not argued) that you disagree with, and this is integral to its construction. The particular instance you went to was, if anything, watered down on the relevant axis.
In the center of Solstice there is traditionally a Moment of Darkness. While it is not used in every solstice, a commonly used reading, which to me constitutes the emotional core of the moment of darkness, is Beyond the Reach of God. The message of which is: You do not have plot armor. Humanity does not have plot armor.
Whereas the central teaching of Christianity is that you do have plot armor. It teaches that everything is okay, unconditionally. It tells the terminal cancer patient that they aren't really going to die, they're just going to have their soul teleported to a comfortable afterlife which conveniently lacks phones or evidence of its existence. As a corollary, it tells the AI researcher that they can't really f*ck up in a way that kills everyone on Earth, both because death isn't quite a real thing, and because there is a God who can intervene to stop that sort of thing.
So I think the direction in which you would want Solstice to change -- to be more positive towards religion, to preach humility/acceptance rather than striving/heroism -- is antithetical to one of Solstice's core purposes.
(On sheet music: I think this isn't part of the tradition because most versions of Solstice have segments where the lighting is dimmed too far to read from paper, and also because printing a lot of pages per attendee is cumbersome. On clapping: yeah, clapping is mostly bad, audiences do it by default and Solstices vary in how good a job they do of preventing that. On budget: My understanding is that most Solstices are breakeven or money-losing, despite running on mostly volunteer labor, because large venues close to the holidays are very expensive.)
There's been a lot of previous interest in indoor CO2 in the rationality community, including an (unsuccessful) CO2 stripper project, some research summaries and self experiments. The results are confusing, I suspect some of the older research might be fake. But I noticed something that has greatly changed how I think about CO2 in relation to cognition.
Exhaled air is about 50,000 ppm CO2. Outdoor air is about 400 ppm; indoor air ranges from 500 to 1500 ppm depending on ventilation. Since exhaled air has a CO2 concentration about two orders of magnitude larger than the variance in room CO2, if even a small percentage of inhaled air is reinhalation of exhaled air, this will have a significantly larger effect than changes in ventilation. I'm having trouble finding a straight answer about what percentage of inhaled air is rebreathed (other than in the context of mask-wearing), but given the diffusivity of CO2, I would be surprised if it wasn't at least 1%.
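A back-of-envelope version of the claim: if even 1% of inhaled air is rebreathed, that adds roughly 500 ppm to the effective inhaled concentration, which is bigger than the entire indoor-ventilation range. The concentrations are the ones above; the 1% rebreathed fraction is my guess, not a measurement.

```python
# Rough estimate of the effective CO2 concentration of inhaled air when a
# small fraction of it is re-inhaled exhaled air. The rebreathed fraction
# is a guess for illustration.
EXHALED_PPM = 50_000  # CO2 concentration of exhaled air

def effective_inhaled_ppm(room_ppm: float, rebreathed_fraction: float,
                          exhaled_ppm: float = EXHALED_PPM) -> float:
    """CO2 concentration actually inhaled: a mix of room air and
    re-inhaled exhaled air."""
    return ((1 - rebreathed_fraction) * room_ppm
            + rebreathed_fraction * exhaled_ppm)

# 1% rebreathing in an 800 ppm room:
print(effective_inhaled_ppm(800, 0.01))  # ~1292 ppm
```

So under these assumptions, eliminating rebreathing (a breeze) does more than moving from a poorly-ventilated room to a well-ventilated one.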
This predicts that a slight breeze, which replaces the air in front of your face and prevents reinhalation, would have a considerably larger effect than ventilating an indoor space where the air is mostly still. This matches my subjective experience of indoor vs outdoor spaces, which, while extremely confounded, feels like an air-quality difference larger than CO2 sensors would predict.
This also predicts that a small fan, positioned so it replaces the air in front of my face, would have a large effect on the same axis as improved ventilation would. I just set one up. I don't know whether it's making a difference but I plan to leave it there for at least a few days.
(Note: CO2 is sometimes used as a proxy for ventilation in contexts where the thing you actually care about is respiratory aerosol, because it affects transmissibility of respiratory diseases like COVID and influenza. This doesn't help with that at all and if anything would make it worse.)
I'm reading you to be saying that you think this policy would be ineffective at its overt purpose, and that the covert benefit of testing the US federal government's ability to regulate AI is worth the cost of a bad policy.
I think preventing the existence of deceptive deepfakes would be quite good (if it would work); audio/video recording has done wonders for accountability in all sorts of contexts, and it's going to be terrible to suddenly have every recording subjected to reasonable doubt. I think preventing the existence of AI-generated fictional-character-only child pornography is neutral-ish (I'm uncertain of the sign of its effect on rates of actual child abuse).
There's an open letter at https://openletter.net/l/disrupting-deepfakes. I signed, but with caveats, which I'm putting here.
Background context is that I participated in building the software platform behind the letter, without a specific open letter in hand. It has mechanisms for sorting noteworthy signatures to the top, and validating signatures for authenticity. I expect there to be other open letters in the future, and I think this is an important piece of civilizational infrastructure.
I think the world having access to deepfakes, and deepfake-porn technology in particular, is net bad. However, the stakes are small compared to the upcoming stakes with superintelligence, which has a high probability of killing literally everyone.
If translated into legislation, I think what this does is put turnkey-hosted deepfake porn generation, as well as pre-tuned-for-porn model weights, into a place very similar to where piracy is today. Which is to say: The Pirate Bay is illegal, wget is not, and the legal distinction is the advertised purpose.
(Where non-porn deepfakes are concerned, I expect them to try a bit harder at watermarking, still fail, and successfully defend themselves legally on the basis that they tried.)
The analogy to piracy goes a little further. If laws are passed, deepfakes will be a little less prevalent than they would otherwise be, there won't be above-board businesses around it... and there will still be lots of it. I don't think there-being-lots-of-it can be prevented by any feasible means. The benefit of this will be the creation of common knowledge that the US federal government's current toolkit is not capable of holding back AI development and access, even when it wants to.
I would much rather they learn that now, when there's still a nonzero chance of building regulatory tools that would function, rather than later.
I went to an Apple store for a demo, and said: the two things I want to evaluate are comfort, and use as an external monitor. I brought a compatible laptop (a Macbook Pro). They replied that the demo was highly scripted, and they weren't allowed to let me do that. I went through their scripted demo. It was worse than I expected. I'm not expecting Apple to take over the VR headset market any time soon.
Bias note: Apple is intensely, uniquely totalitarian over software that runs on iPhones and iPads, in a way I find offensive, not just in a sense of not wanting to use it, but also in a sense of not wanting it to be permitted in the world. They have brought this model with them to Vision Pro, and for this reason I am rooting for them to fail.
I think most people evaluating the Vision Pro have not tried Meta's Quest Pro and Quest 3, and are comparing it to earlier-generation headsets. They used an external battery pack and still managed to come in heavier than the Quest 3, which has the battery built in. The screen and passthrough look better, but I don't think this is because Apple has any technology that Meta doesn't; I think the difference is entirely explained by Apple having used more-expensive and heavier versions of commodity parts, which implies that if this is a good tradeoff, then their lead will only last for one generation at most. (In particular, the display panel is dual-sourced from Sony and LG, not made in-house.)
I tried to type "lesswrong.com" into the address bar of Safari using the two-finger hand tracking keyboard. I failed. I'm not sure whether the hand-tracking was misaligned with the passthrough camera, or just had an overzealous autocomplete that was unable to believe that I wanted a "w" instead of an "e", but I gave up after five tries and used the eye-tracking method instead.
During the demo, one of the first things they showed me was a SBS photo with the camera pitched down thirty degrees. This doesn't sound like a big deal, but it's something that rules out there being a clueful person behind the scenes. There's a preexisting 3D-video market (both porn and non-porn), and it's small and struggling. One of the problems it's struggling with is that SBS video is very restrictive about what you can do with the camera; in particular, it's bad to move the camera, because that causes vestibular mismatch, and it's bad to tilt the camera, because that makes it so that gravity is pointing the wrong way. A large fraction of 3D-video content fails to follow these restrictions, and that makes it very unpleasant to watch. If Apple can't even enforce the camerawork guidelines on the first few minutes of its in-store demo, then this bodes very poorly for the future content on the platform.
I have only skimmed the early parts of the Rootclaim videos, and the first ~half of Daniel Filan's tweet thread about it. So it's possible this was discussed somewhere in there, but there's something major that doesn't sit right with me:
In the first month of the pandemic, I was watching the news about it. I remember that the city government of Wuhan attempted to conceal the fact that there was a pandemic. I remember Li Wenliang being punished for speaking about it. I remember that reliable tests to determine whether someone had COVID were extremely scarce. I remember the US CDC publishing a paper absurdly claiming that the attack rate was near zero, because they wouldn't count infections unless they had a positive test, and then refused to test people who hadn't travelled to Wuhan. I remember Chinese whistleblowers visiting hospitals and filming the influx of patients.
It appears to me that all evidence for the claim that the virus originated in the wet market passes through Chinese government sources. And it appears to me that those same sources were unequipped to do effective contact tracing, and were executing a coverup. When a coverup was no longer possible, the incentive would have been to confidently identify an origin, even if they had no idea what the true origin was; and they could easily have created the impression that it started in any place they chose, simply by focusing their attention there, since cases would be found no matter where they focused.
Imo this comment is lowering the quality of the discourse. Like, if I steelman and expand what you're saying, it seems like you're trying to say something like "this response is pinging a deceptiveness-heuristic that I can't quite put my finger on". That phrasing adds information, and would prompt other commenters to evaluate and either add evidence of deceptiveness, or tell you you're false-positiving, or something like that. But your actual phrasing doesn't do that, it's basically name calling.
So, mod note: I strong-downvoted your comment and decided to leave it at that. Consider yourself frowned at.
There's a big difference between arguing that someone shouldn't be able to stay anonymous, and unilaterally posting names. Arguing against allowing anonymity (without posting names) would not have been against the rules. But, we're definitely not going to re-derive the philosophy of when anonymity should and shouldn't be allowed, after names are already posted. The time to argue for an exception was beforehand, not after the fact.
We (the LW moderation team) have given Roko a one-week site ban and an indefinite post/topic ban for attempted doxing. We have deleted all comments that revealed real names, and ask that everyone respect the privacy of the people involved.
Genetically altering IQ is more or less about flipping a sufficient number of IQ-decreasing variants to their IQ-increasing counterparts. This sounds overly simplified, but it’s surprisingly accurate; most of the variance in the genome is linear in nature, by which I mean the effect of a gene doesn’t usually depend on which other genes are present.
So modeling a continuous trait like intelligence is actually extremely straightforward: you simply add the effects of the IQ-increasing alleles to those of the IQ-decreasing alleles and then normalize the score relative to some reference group.
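A minimal sketch of that additive model, with every number (variant count, effect sizes, allele frequencies) invented for illustration, not taken from any real study:

```python
import numpy as np

rng = np.random.default_rng(0)

n_variants = 1000    # hypothetical number of trait-associated variants
n_people = 10_000    # hypothetical reference population

# Per-variant additive effects, and 0/1/2 copies of the trait-increasing
# allele for each person (illustrative random values, not real data).
effects = rng.normal(0.0, 0.05, size=n_variants)
genotypes = rng.binomial(2, 0.5, size=(n_people, n_variants))

# Additive model: the polygenic score is just a weighted sum of allele counts.
raw = genotypes @ effects

# Normalize to an IQ-like scale (mean 100, SD 15) relative to the reference group.
iq = 100 + 15 * (raw - raw.mean()) / raw.std()
```

Because the model has no interaction terms, each variant contributes the same amount regardless of the rest of the genome, which is exactly the linearity assumption described above.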
If the mechanism of most of these genes is that their variants push something analogous to a hyperparameter in one direction or the other, and the number of hyperparameters is much smaller than the number of genes, then this strategy will greatly underperform the simulated prediction. This is because the cumulative effect of flipping all these genes will be to move each hyperparameter toward its optimum and then drastically overshoot it.
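A toy simulation of that overshoot argument. The variant-to-hyperparameter mapping, effect sizes, and fitness function here are all invented for illustration; the only point is the qualitative shape of the failure:

```python
import numpy as np

rng = np.random.default_rng(1)

n_variants = 500
n_params = 3   # far fewer underlying "hyperparameters" than variants

# Each variant nudges one hyperparameter upward. In a population whose mean
# sits at or just below the optimum, every such allele would look
# trait-increasing to an additive (GWAS-style) analysis.
which_param = rng.integers(0, n_params, size=n_variants)
nudge = np.abs(rng.normal(0.0, 0.02, size=n_variants))

full_on = np.zeros(n_params)
np.add.at(full_on, which_param, nudge)

def params(genotype):
    """Hyperparameter values, centered so a typical (half-on) genotype
    sits near the optimum at zero."""
    vals = np.zeros(n_params)
    np.add.at(vals, which_param, nudge * genotype)
    return vals - 0.5 * full_on

def trait(genotype):
    """Trait peaks when every hyperparameter is at its optimum."""
    return -np.sum(params(genotype) ** 2)

typical = rng.binomial(1, 0.5, size=n_variants)  # near the optimum
edited = np.ones(n_variants)    # flip every "trait-increasing" allele on
```

Here `trait(edited)` comes out far below `trait(typical)`: flipping every allele drives all three hyperparameters well past their optima, even though each individual edit looked beneficial under the additive model.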
I think you're modeling the audience as knowing a lot less than we do. Someone who didn't know high school chemistry and biology would be at risk of being misled, sure. But I think that stuff should be treated as a common-knowledge background. At which point, obviously, you unpack the claim to: the weakest links in a structure determine its strength, biological structures have weak links in them which are noncovalent bonds, not all of those noncovalent bonds are weak for functional reasons, some are just hard to reinforce while constrained to things made by ribosomes. The fact that most links are not the weakest links, does not refute the claim. The fact that some weak links have a functional purpose, like enabling mobility, does not refute the claim.
LW gives authors the ability to moderate comments on their own posts (particularly non-frontpage posts) when they reach a karma threshold. It doesn't automatically remove that power when they fall back under the threshold, because this doesn't normally come up (the threshold is only 50 karma). In this case, however, I'm taking the can-moderate flag off the account, since they're well below the threshold and, in my opinion, abusing it. (They deleted this comment by me, which I undid, and this comment which I did not undo.)
We are discussing in moderator-slack and may take other actions.
Yeah, this is definitely a minimally-obfuscated autobiographical account, not hypothetical. It's also false; there were lots of replies. Albeit mostly after Yarrow had already escalated (by posting about it on Dank EA Memes).
I don't think this was about pricing, but about keeping occasional bits of literal spam out of the site search. The fact that we use the same search for both users looking for content, and authors adding stuff to Sequences, is a historical accident which makes for a few unfortunate edge cases.