Comments
Why is it a sickness of soul to abuse an animal that's been legally defined as a "pet", but not to abuse an identical animal that has not been given this arbitrary label?
Eliezer's argument is the primary one I'm thinking of as an obvious rationalization.
https://benthams.substack.com/p/against-yudkowskys-implausible-position
I'm not confident about fetuses either, hence why I generally oppose abortion after the fetus has started developing a brain.
Different meanings of "bad". The former is making a moral claim, the second presumably a practical one about the person's health goals. "Bad as in evil" vs. "bad as in ineffective".
Hitler was an evil leader, but not an ineffective one. He was a bad person, but he was not bad at gaining political power.
It seems unlikely to me that the amount of animal-suffering-per-area goes down when a factory farm replaces a natural habitat; natural selection is a much worse optimizer than human intelligence.
And that's a false dichotomy anyway; even if factory farms did reduce suffering per area, you could instead pay for something else to be there that has even less suffering.
I agree with the first bullet point in theory, but see the Corrupted Hardware sequence of posts. It's hard to know the true impact of most interventions, and easy for people to come up with reasons why whatever they want to do happens to have large positive externalities. "Don't directly inflict pain" is something we can be very confident is actually a good thing, without worrying about second-order effects.
Additionally, there's no reason why doing bad things should be acceptable just because you also do unrelated good things. Sure, it's net positive from a consequentialist frame, but ceasing the bad things while continuing to do the good things is even more positive! Giving up meat is not some ultimate hardship like martyrdom, nor is there any strong argument that meat-eating is necessary in order to keep doing the other good things. It's more akin to quitting a minor drug addiction; hard and requires a lot of self-control at first, but after the craving goes away your life is pretty much the same as it was before.
As for the rest of your comment, any line of reasoning that would equally excuse slavery and the holocaust is, I think, pretty suspect.
Do you also find it acceptable to torture humans you don't personally know, or a pet that someone purchased only for the joy of torturing it and not for any other service? If not, the companionship explanation is invalid and likely a rationalization.
I agree that this is technically a sound philosophy; the is-ought problem makes it impossible to say as a factual matter that any set of values is wrong. That said, I think you should ask yourself why you oppose the mistreatment of pets and not other animals. If you truly do not care about animal suffering, shouldn't the mistreatment of a pet be morally equivalent to someone damaging their own furniture? It may not have been a conscious decision on your part, but I expect that your oddly specific value system is at least partially downstream of the fact that you grew up eating meat and enjoy it.
Meat-eating (without offsetting) seems to me like an obvious rationality failure. Extremely few people actually take the position that torturing animals is fine; that it would be acceptable to do to a pet or even a stray. Yet people are happy to pay others to do it for them, as long as it occurs where they can't see it happening.
Attempts to point this out to them are usually met with deflection or anger, or among more level-headed people, with elaborate rationalizations that collapse under minimal scrutiny. ("Farming creates more animals, so as long as their lives are net positive, farming is net positive" relies on the extremely questionable assumption that their lives are net positive, and these people would never accept the same argument in favor of forcible breeding of humans. "Animals aren't sentient" relies on untested and very speculative ideas about consciousness, akin to the just-so stories that have plagued psychology. There's no way someone could justifiably be >95% confident of such a thing, and I highly doubt these people would accept a 5% chance of torturing a human in return for tastier food.)
So with the exception of hardcore moral relativists who reject any need to care about any other beings at all, I find it hard to take seriously any "rationalist" who continues to eat meat. It seems to me that they've adopted "rationalism" in the "beliefs as attire" sense, as they fail to follow through on even the most straightforward implications of their purported belief system as soon as those implications do not personally benefit them.
Change my mind?
The images appear to be broken.
Fixed, thank you.
There is no one Overton window, it's culture-dependent. "Sleeping in a room with a fan on will kill you" is within the Overton window in South Korea, but not in the US. Wikipedia says this is false rather than adopting a neutral stance because that's the belief held by western academia.
I didn't claim that the far-left generally agrees with the NYT, or that the NYT is a far-left outlet. It is a center-left outlet, which makes it cover far-left ideas much more favorably than far-right ideas, while still disagreeing with them.
This is not an idiosyncrasy of Gerard and people like him, it is core to Wikipedia's model. Wikipedia is not an arbiter of fact, it does not perform experiments or investigations to determine the truth. It simply reflects the sources.
This means it parrots the majority consensus in academia and journalism. When that consensus is right, as it usually is, Wikipedia is right. When that consensus is wrong, as happens more frequently than its proponents would like to admit but still pretty rarely overall, Wikipedia is wrong. This is by design.
Wikipedia is not objective, it is neutral. It is an average of everyone's views, skewed towards the views of the WEIRD people who edit Wikipedia and the people respected by those people.
In the linked Wikipedia discussion, someone asked David to provide sources for his claim and he refused to do so, so I would not consider them to be relevant evidence.
As for the factual question, I've come across one article from Quillette that seemed significantly biased and misleading, and I wouldn't be surprised if there were more. There was one hoax that they briefly fell for and then corrected within hours, which was the main reason that Wikipedia considers them unreliable, but this says more about Wikipedia than Quillette. (I'm sure many of Wikipedia's "reliable sources" have gone much longer without correcting errors.)
Quillette definitely has an anti-woke bent, and this colors its coverage. But I haven't seen anything to indicate that its bias is worse than that of the NYT in the other direction. I have no problem trusting its articles to the same extent I would trust one in the mainstream media.
I think Michael's response to that is that he doesn't oppose that. He only opposes a lawyer who tries to prevent their client from getting a punishment that the lawyer believes would be justified. From his article:
It is not wrong per se to represent guilty clients. A lawyer may represent a factually guilty client for the purpose of preventing unjust punishments or rights-violations. What is unethical is to represent a person who you know committed a crime that was really wrong and really deserves to be punished, and to attempt to stop that person from getting the punishment he deserves.
Oh weird, apparently all my running pm2 jobs cancelled themselves at the end of the month. No idea what caused that. Thanks, fixed now.
Oh whoops, thank you.
Did you confirm with the doctor that this actually occurred? I'd be worried about a false memory.
Ideally, this would eliminate [...] the “learning the test” issues.
How would it do that? If they learned the test in advance, it would be in their long-term memory, and they'd still remember it when tested on the drug.
They didn't change their charter.
https://forum.effectivealtruism.org/posts/2Dg9t5HTqHXpZPBXP/ea-community-needs-mechanisms-to-avoid-deceptive-messaging
Hmm, interesting. The exact choice of decimal place at which to cut off the comparison is certainly arbitrary, and that doesn't feel very elegant. My thinking is that within the constraint of using floating point numbers, there fundamentally isn't a perfect solution. Floating point representation changes some numbers into other numbers, so there are always going to be some cases where number comparisons are wrong. What we want to do is define a problem domain and check if floating point will cause problems within that domain; if it doesn't, go for it; if it does, maybe don't use floating point.
In this case my fix solves the problem for what I think is the vast majority of the most likely inputs (in particular it solves it for all the inputs that my particular program was going to get), and while it's less fundamental than e.g. using arbitrary-precision arithmetic, it does better on the cost-benefit analysis. (Just like how "completely overhaul our company" addresses things on a more fundamental level than just fixing the structural simulation, but may not be the best fix given resource constraints.)
The main purpose of my example was not to argue that my particular approach was the "correct" one, but rather to point out the flaws in the "multiply by an arbitrary constant" approach. I'll edit that line, since I think you're right that it's a little more complicated than I was making it out to be, and "trivial" could be an unfair characterization.
In the general case I agree it's not necessarily trivial; e.g. if your program uses the whole range of decimal places to a meaningful degree, or performs calculations that can compound floating point errors up to higher decimal places. (Though I'd argue that in both of those cases pure floating point is probably not the best system to use.) In my case I knew the input would never be precise enough for its meaningful digits to overlap with floating point error, so I could just round anything past the 15th decimal place down to 0.
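To make that concrete, here's roughly the shape of the comparison I have in mind; this is a sketch in JavaScript rather than my actual code, and the helper names are made up:

```javascript
// Sketch only: compare after rounding away digits past the 15th decimal
// place, so representation noise can't flip the result.
const roundTo15 = x => Number(x.toFixed(15));
const nearlyEqual = (a, b) => roundTo15(a) === roundTo15(b);

console.log(0.1 + 0.2 === 0.3);           // false: the classic surprise
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```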
If we figure out how to build GAI, we could build several with different priors, release them into the universe, and see which ones do better. If we give them all the same metric to optimize, they will all agree on which of them did better, thus determining one prior that is the best one to have for this universe.
I don't understand what "at the start" is supposed to mean for an event that lasts zero time.
I don't think you understand how probability works.
https://outsidetheasylum.blog/understanding-subjective-probabilities/
Ok now I'm confused about something. How can it be the case that an instantaneous perpendicular burn adds to the craft's speed, but a constant burn just makes it go in a circle with no change in speed?
...Are you just trying to point out that thrusting in opposite directions will cancel out? That seems obvious, and irrelevant. My post and all the subsequent discussion are assuming burns of epsilon duration.
I don't understand how that can be true? Vector addition is associative; it can't be the case that adding many small vectors behaves differently from adding a single large vector equal to the small vectors' sum. Throwing one rock off the side of the ship followed by another rock has to do the same thing to the ship's trajectory as throwing both rocks at the same time.
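A trivial numerical check of that point, with made-up 2D vectors (numbers chosen to be exact in binary so rounding doesn't muddy it):

```javascript
// Adding the rocks one at a time vs. adding their combined impulse in one go.
const add = (a, b) => [a[0] + b[0], a[1] + b[1]];
const ship = [5, 0], rock1 = [0.25, 0.5], rock2 = [0.5, -0.25];
console.log(add(add(ship, rock1), rock2)); // [5.75, 0.25]
console.log(add(ship, add(rock1, rock2))); // [5.75, 0.25]
```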
How is that relevant? In the limit where the retrograde thrust is infinitesimally small, it also does not increase the length of the main vector it is added to. Negligibly small thrust results in negligibly small change in velocity, regardless of its direction.
Unfortunately I already came across that paradox a day or two ago on Stack Exchange. It's a good one though!
Yeah, my numerical skill is poor, so I try to understand things via visualization and analogies. It's more reliable in some cases, less in others.
when the thrust is at 90 degrees to the trajectory, the rocket's speed is unaffected by the thrusting, and it comes out of the gravity well at the same speed as it came in.
That's not accurate; when you add two vectors at 90 degrees, the resulting vector has a higher magnitude than either. The rocket will be accelerated to a faster speed.
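A quick worked example with made-up numbers:

```javascript
// A craft moving at 10 units/s gets a 1 unit/s perpendicular kick.
const v = 10, dv = 1;
console.log(Math.hypot(v, dv)); // ≈ 10.0499: slightly faster, not unchanged
```

The effect is small when the kick is small relative to the existing speed, but it's never zero.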
I don't think so. The difference in the gravitational field between the bottom point of the swing arc and the top is negligible. The swing isn't an isolated system, so you're able to transmit force to the bar as you move around.
There's a common explanation you'll find online that swings work by you changing the height of your center of mass. It's wrong, since it would imply that a swing with rigid bars wouldn't work. But they do.
The actual explanation seems to be something to do with changing your angular momentum at specific points by rotating your body.
I'm still confused about some things, but the primary framing of "less time spent subject to high gravitational deceleration" seems like the important insight that all other explanations I found were missing.
Probability is a geometric scale, not an additive one. An order of magnitude centered on 10% covers ~1% - 50%.
https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities
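A quick sketch of the arithmetic behind that range, reading "an order of magnitude" as a factor of ten in odds each way (which is the reading I have in mind):

```javascript
// 10% probability is odds of 1:9. Scaling those odds by a factor of ten
// in each direction gives roughly 1% and 50%.
const toOdds = p => p / (1 - p);
const toProb = o => o / (1 + o);
const centre = toOdds(0.10);        // ≈ 0.111
console.log(toProb(centre / 10));   // ≈ 0.011 (~1%)
console.log(toProb(centre * 10));   // ≈ 0.526 (~50%)
```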
Feel free to elaborate on the mistakes and I'll fix them.
That article isn't about e/acc people and doesn't mention them anywhere, so I'm not sure why you think it's intended to be. The probability theory denial I'm referencing is mostly on Twitter.
Great point! I focused on AI risk since that's what most people I'm familiar with are talking about right now, but there are indeed other risks, and that's yet another potential source of miscommunication. One person could report a high p(doom) due to their concerns about bioterrorism, and another interprets that as them being concerned about AI.
Oh I agree the main goal is to convince onlookers, and I think the same ideas apply there. If you use language that's easily mapped to concepts like "unearned confidence", the onlooker is more likely to dismiss whatever you're saying.
It's literally an invitation to irrelevant philosophical debates about how all technologies are risky and we are still alive and I don't know how to get out of here without reference to probabilities and expected values.
If that comes up, yes. But then it's them who have brought up the fact that probability is relevant, so you're not the one first framing it like that.
This kinda misses greater picture? "Belief that here is a substantial probability of AI killing everyone" is 1000x stronger shibboleth and much easier target for derision.
Hmm. I disagree, not sure exactly why. I think it's something like: people focus on short phrases and commonly-used terms more than they focus on ideas. Like how the SSC post I linked gives the example of republicans being just fine with drug legalization as long as it's framed in right-wing terms. Or how talking positively about eugenics will get you hated, but talking positively about embryo selection and laws against incest will be taken seriously. I suspect that most people don't actually take positions on ideas at all; they take positions on specific tribal signals that happen to be associated with ideas.
Consider all the people who reject the label of "effective altruist", but try to donate to effective charity anyway. That seems like a good thing to me; some people don't want to be associated with the tribe for some political reason, and if they're still trying to make the world a better place, great! We want something similar to be the case with AI risk; people may reject the labels of "doomer" or "rationalist", but still think AI is risky, and using more complicated and varied phrases to describe that outcome will make people more open to it.
I don't see how they would be. If you do see a way, please share!
I don't understand how either of those are supposed to be a counterexample. If I don't know what seat is going to be chosen randomly each time, then I don't have enough information to distinguish between the outcomes. All other information about the problem (like the fact that this is happening on a plane rather than a bus) is irrelevant to the outcome I care about.
This does strike me as somewhat tautological, since I'm effectively defining "irrelevant information" as "information that doesn't change the probability of the outcome I care about". I'm not sure how to resolve this; it certainly seems like I should be able to identify that the type of vehicle is irrelevant to the question posed and discard that information.
No, I think what I said was correct? What's an example that you think conflicts with that interpretation?
I think that's accurate, yeah. What's your objection to it?
Yeah that was a mistake, I mixed frequentism and propensity together.
I don't have an answer for you, as this is also something I'm confused about. I felt bad seeing 0 answers here, so I just wanted to mention that I asked about this on Manifold and got some interesting discussion, see here:
No, I'm using the WYSIWYG editor. It was for a post, not a comment, and definitely the right link.
Edit: Huh, I tried it again and it worked this time. My bad for not reloading to test on a fresh page before posting here, sorry.
This doesn't seem to work anymore? I'm posting the link in the editor and nothing happens, there's just a text link.
It's conceptually pretty simple; 240 characters isn't room for a lot. Here's how the writer explained it:
Here's the annotated version of my bot: https://pastebin.com/1a9UPKQk
The basic strategy is:
Simulate what the opponent will do on the current turn, and what they would do on the next two turns if I defect twice in a row.
If the results of the simulations are [cooperate, defect, defect], play tit-for-tat. Otherwise, defect.
This will defect against DefectBots, CooperateBots, and most of the silly bots that don't pay attention to the opponent's moves. Meanwhile, it cooperates against tit-for-tats and grim triggers and the like.
There are a bunch of details and optimizations and code-golf-ifications, but the big one is that instead of simulating the opponent using the provided function f, I use a code-golfed version of the code used to create bots in the tournament program (in createBot()). This prevents the anti-simulator bots from realizing they're in a simulation - arguments.callee.name is "f", and the variable a isn't defined.
The simulation this bot is doing pits the opposing bot against tit-for-tat, rather than against the winning bot itself, to avoid the infinite regress that would occur if the opposing bot also runs a simulation.
The last paragraph is because I wrote the tournament code poorly, and the function that's provided to the bots to let them simulate other bots was slightly different from the way the top-level bots were run, which allowed bots to tell if they were in a simulation and output different behavior. (In particular someone submitted a "simulator killer" that would call process.exit() any time it detected it was in a simulation, which would shut down the entire tournament, disqualifying the top-level bot that had run the simulation.) This submission modifies the simulation function to make it indistinguishable.
The greek letters as variable names were for style points.
The winner was the following program:
try{eval(`{let f=(d,m,c,s,f,h,i)=>{let r=9;${c};return +!!r};r=f}`);let θ='r=h.at(-1);r=!r||r.o',λ=Ω=>r(m,d,θ,c,f,Ω,Ω.map(χ=>({m:χ.o,o:χ.m}))),Σ=(μ,π)=>[...μ,{m:π,o:+!1}],α=λ([...i]),β=λ(Σ(i,α));r=f(θ)&α&!β&!λ(Σ(Σ(i,α),β))|d==m}catch{r = 1}
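For anyone who doesn't want to untangle the golfed version, here's a rough, readable sketch of the strategy described above. The names (move, simulateOpponent, the history entry fields) are illustrative rather than the tournament's actual interface, and it glosses over the perspective-swapping and anti-simulator details:

```javascript
// Readable sketch of the winning strategy (illustrative interface, not the real one).
const COOPERATE = 1, DEFECT = 0;

// history: [{ me, them }, ...] from this bot's point of view.
// simulateOpponent(history): what the opponent would play given that history
// (assumed to handle swapping perspectives internally).
function move(history, simulateOpponent) {
  const now = simulateOpponent(history);
  const afterOneDefect = simulateOpponent([...history, { me: DEFECT, them: now }]);
  const afterTwoDefects = simulateOpponent([
    ...history,
    { me: DEFECT, them: now },
    { me: DEFECT, them: afterOneDefect },
  ]);

  // Opponent cooperates now but would punish defection: it's responsive,
  // so play tit-for-tat with it.
  if (now === COOPERATE && afterOneDefect === DEFECT && afterTwoDefects === DEFECT) {
    const last = history.at(-1);
    return last ? last.them : COOPERATE; // tit-for-tat
  }
  // Otherwise (DefectBots, CooperateBots, bots that ignore my moves): defect.
  return DEFECT;
}
```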
We're running a sequel, see here to participate.
Done!
Well, I'd encourage you to submit this strategy and see how it does. :)
Ringer tit-for-tat will always beat tit-for-tat since it will score the same as tit-for-tat except when it goes up against shill tit-for-tat, where it will always get the best possible score.
It will do worse than tit-for-tat against any bot that cooperates with tit-for-tat and defects against ringer tit-for-tat.