Posts

Isaac King's Shortform 2024-10-15T02:03:08.013Z
Did Christopher Hitchens change his mind about waterboarding? 2024-09-15T08:28:09.451Z
In Defense of Lawyers Playing Their Part 2024-07-01T01:32:58.695Z
Mistakes people make when thinking about units 2024-06-25T03:39:20.138Z
Duct Tape security 2024-04-26T18:57:05.659Z
An Actually Intuitive Explanation of the Oberth Effect 2024-01-10T20:23:17.216Z
Stop talking about p(doom) 2024-01-01T10:57:28.636Z
Understanding Subjective Probabilities 2023-12-10T06:03:27.958Z
FTL travel summary 2023-12-04T05:17:21.422Z
A free to enter, 240 character, open-source iterated prisoner's dilemma tournament 2023-11-09T08:24:43.277Z
An attempt at a "good enough" solution for human two-party negotiations 2023-09-16T00:38:47.594Z
Rational Agents Cooperate in the Prisoner's Dilemma 2023-09-02T06:15:15.720Z
Epistemic spot checking one claim in The Precipice 2023-06-27T01:03:57.553Z
What are some of the best introductions/breakdowns of AI existential risk for those unfamiliar? 2023-05-29T17:04:40.384Z
What's the best way to streamline two-party sale negotiations between real humans? 2023-05-19T23:30:26.244Z
The Hidden Complexity of Thought 2023-03-19T21:59:21.696Z
Quantum Suicide and Aumann's Agreement Theorem 2022-10-27T01:32:43.282Z
Can DALL-E understand simple geometry? 2022-06-18T04:37:46.471Z
Looking for someone to run an online seminar on human learning 2022-05-03T01:54:54.129Z
My Approach to Non-Literal Communication 2022-05-01T02:47:46.821Z
List of Probability Calibration Exercises 2022-01-23T02:12:41.798Z
Question Gravity 2022-01-13T06:30:56.013Z
"Rational Agents Win" 2021-09-23T07:59:09.072Z

Comments

Comment by Isaac King (KingSupernova) on Isaac King's Shortform · 2024-10-17T16:18:23.236Z · LW · GW

Why is it a sickness of soul to abuse an animal that's been legally defined as a "pet", but not to abuse an identical animal that has not been given this arbitrary label?

Comment by Isaac King (KingSupernova) on Isaac King's Shortform · 2024-10-16T21:04:12.045Z · LW · GW

Eliezer's argument is the primary one I'm thinking of as an obvious rationalization.

https://www.lesswrong.com/posts/KFbGbTEtHiJnXw5sk/i-really-don-t-understand-eliezer-yudkowsky-s-position-on

https://benthams.substack.com/p/against-yudkowskys-implausible-position

I'm not confident about fetuses either, hence why I generally oppose abortion after the fetus has started developing a brain.

Comment by Isaac King (KingSupernova) on Isaac King's Shortform · 2024-10-16T20:54:07.354Z · LW · GW

Different meanings of "bad". The former is making a moral claim, the latter presumably a practical one about the person's health goals. "Bad as in evil" vs. "bad as in ineffective".

Hitler was an evil leader, but not an ineffective one. He was a bad person, but he was not bad at gaining political power.

Comment by Isaac King (KingSupernova) on Isaac King's Shortform · 2024-10-16T20:51:59.965Z · LW · GW

It seems unlikely to me that the amount of animal-suffering-per-area goes down when a factory farm replaces a natural habitat; natural selection is a much worse optimizer than human intelligence.

And that's a false dichotomy anyway; even if factory farms did reduce suffering per area, you could instead pay for something else to be there that has even less suffering.

Comment by Isaac King (KingSupernova) on Isaac King's Shortform · 2024-10-16T20:49:27.644Z · LW · GW

I agree with the first bullet point in theory, but see the Corrupted Hardware sequence of posts. It's hard to know the true impact of most interventions, and easy for people to come up with reasons why whatever they want to do happens to have large positive externalities. "Don't directly inflict pain" is something we can be very confident is actually a good thing, without worrying about second-order effects.

Additionally, there's no reason why doing bad things should be acceptable just due to also doing unrelated good things. Sure, it's net positive from a consequentialist frame, but ceasing the bad things while continuing to do the good things is even more positive! Giving up meat is not some ultimate hardship like martyrdom, nor is there any strong argument that meat-eating is necessary in order to keep doing the other good things. It's more akin to quitting a minor drug addiction; hard and requires a lot of self-control at first, but after the craving goes away your life is pretty much the same as it was before.

As for the rest of your comment, any line of reasoning that would equally excuse slavery and the holocaust is, I think, pretty suspect.

Comment by Isaac King (KingSupernova) on Isaac King's Shortform · 2024-10-16T20:40:26.783Z · LW · GW

Do you also find it acceptable to torture humans you don't personally know, or a pet that someone purchased only for the joy of torturing it and not for any other service? If not, the companionship explanation is invalid and likely a rationalization.

Comment by Isaac King (KingSupernova) on Isaac King's Shortform · 2024-10-16T20:37:51.176Z · LW · GW

I agree that this is technically a sound philosophy; the is-ought problem makes it impossible to say as a factual matter that any set of values is wrong. That said, I think you should ask yourself why you oppose the mistreatment of pets and not other animals. If you truly do not care about animal suffering, shouldn't the mistreatment of a pet be morally equivalent to someone damaging their own furniture? It may not have been a conscious decision on your part, but I expect that your oddly specific value system is at least partially downstream of the fact that you grew up eating meat and enjoy it.

Comment by Isaac King (KingSupernova) on Isaac King's Shortform · 2024-10-15T02:03:08.288Z · LW · GW

Meat-eating (without offsetting) seems to me like an obvious rationality failure. Extremely few people actually take the position that torturing animals is fine; that it would be acceptable to do to a pet or even a stray. Yet people are happy to pay others to do it for them, as long as it occurs where they can't see it happening.

Attempts to point this out to them are usually met with deflection or anger, or among more level-headed people, with elaborate rationalizations that collapse under minimal scrutiny. ("Farming creates more animals, so as long as their lives are net positive, farming is net positive" relies on the extremely questionable assumption that their lives are net positive, and these people would never accept the same argument in favor of forcible breeding of humans. "Animals aren't sentient" relies on untested and very speculative ideas about consciousness, akin to the just-so stories that have plagued psychology. There's no way someone could justifiably be >95% confident of such a thing, and I highly doubt these people would accept a 5% chance of torturing a human in return for tastier food.)

So with the exception of hardcore moral relativists who reject any need to care about any other beings at all, I find it hard to take seriously any "rationalist" who continues to eat meat. It seems to me that they've adopted "rationalism" in the "beliefs as attire" sense, as they fail to follow through on even the most straightforward implications of their purported belief system as soon as those implications do not personally benefit them.

Change my mind?

Comment by Isaac King (KingSupernova) on Optimizing crop planting with mixed integer linear programming in Stardew Valley · 2024-09-30T23:24:31.502Z · LW · GW

The images appear to be broken.

Comment by Isaac King (KingSupernova) on Did Christopher Hitchens change his mind about waterboarding? · 2024-09-16T17:49:15.194Z · LW · GW

Fixed, thank you.

Comment by Isaac King (KingSupernova) on Reliable Sources: The Story of David Gerard · 2024-07-13T20:44:22.390Z · LW · GW

There is no one Overton window, it's culture-dependent. "Sleeping in a room with a fan on will kill you" is within the Overton window in South Korea, but not in the US. Wikipedia says this is false rather than adopting a neutral stance because that's the belief held by western academia.

Comment by Isaac King (KingSupernova) on Reliable Sources: The Story of David Gerard · 2024-07-11T20:32:49.232Z · LW · GW

I didn't claim that the far-left generally agrees with the NYT, or that the NYT is a far-left outlet. It is a center-left outlet, which makes it cover far-left ideas much more favorably than far-right ideas, while still disagreeing with them.

Comment by Isaac King (KingSupernova) on Reliable Sources: The Story of David Gerard · 2024-07-11T20:27:49.751Z · LW · GW

This is not an idiosyncrasy of Gerard and people like him, it is core to Wikipedia's model. Wikipedia is not an arbiter of fact, it does not perform experiments or investigations to determine the truth. It simply reflects the sources.

This means it parrots the majority consensus in academia and journalism. When that consensus is right, as it usually is, Wikipedia is right. When that consensus is wrong, as happens more frequently than its proponents would like to admit but still pretty rarely overall, Wikipedia is wrong. This is by design.

Wikipedia is not objective, it is neutral. It is an average of everyone's views, skewed towards the views of the WEIRD people who edit Wikipedia and the people respected by those people.

Comment by Isaac King (KingSupernova) on Reliable Sources: The Story of David Gerard · 2024-07-10T22:30:06.547Z · LW · GW

In the linked Wikipedia discussion, someone asked David to provide sources for his claim and he refused to do so, so I would not consider them to be relevant evidence.

As for the factual question, I've come across one article from Quillette that seemed significantly biased and misleading, and I wouldn't be surprised if there were more.  There was one hoax that they briefly fell for and then corrected within hours, which was the main reason that Wikipedia considers them unreliable, but this says more about Wikipedia than Quillette. (I'm sure many of Wikipedia's "reliable sources" have gone much longer without correcting errors.)

Quillette definitely has an anti-woke bent, and this colors its coverage. But I haven't seen anything to indicate that its bias is worse than that of the NYT in the other direction. I have no problem trusting its articles to the same extent I would trust one in the mainstream media.

Comment by Isaac King (KingSupernova) on In Defense of Lawyers Playing Their Part · 2024-07-06T22:27:53.684Z · LW · GW

I think Michael's response would be that he doesn't oppose that. He only opposes a lawyer who tries to prevent their client from getting a punishment that the lawyer believes would be justified. From his article:

It is not wrong per se to represent guilty clients. A lawyer may represent a factually guilty client for the purpose of preventing unjust punishments or rights-violations. What is unethical is to represent a person who you know committed a crime that was really wrong and really deserves to be punished, and to attempt to stop that person from getting the punishment he deserves.

Comment by Isaac King (KingSupernova) on In Defense of Lawyers Playing Their Part · 2024-07-01T08:22:38.314Z · LW · GW

Oh weird, apparently all my running pm2 jobs cancelled themselves at the end of the month. No idea what caused that. Thanks, fixed now.

Comment by Isaac King (KingSupernova) on Mistakes people make when thinking about units · 2024-06-25T06:56:18.716Z · LW · GW

Oh whoops, thank you.

Comment by Isaac King (KingSupernova) on My hour of memoryless lucidity · 2024-06-24T23:30:45.365Z · LW · GW

Did you confirm with the doctor that this actually occurred? I'd be worried about a false memory.

Comment by Isaac King (KingSupernova) on Some Experiments I'd Like Someone To Try With An Amnestic · 2024-06-24T05:48:01.538Z · LW · GW

Ideally, this would eliminate [...] the “learning the test” issues.


How would it do that? If they learned the test in advance, it would be in their long-term memory, and they'd still remember it when tested on the drug.

Comment by Isaac King (KingSupernova) on simeon_c's Shortform · 2024-05-15T07:55:19.778Z · LW · GW

They didn't change their charter.

https://forum.effectivealtruism.org/posts/2Dg9t5HTqHXpZPBXP/ea-community-needs-mechanisms-to-avoid-deceptive-messaging

Comment by Isaac King (KingSupernova) on Duct Tape security · 2024-04-27T01:56:24.135Z · LW · GW

Hmm, interesting. The exact choice of decimal place at which to cut off the comparison is certainly arbitrary, and that doesn't feel very elegant. My thinking is that within the constraint of using floating point numbers, there fundamentally isn't a perfect solution. Floating point notation changes some numbers into other numbers, so there are always going to be some cases where number comparisons are wrong. What we want to do is define a problem domain and check if floating point will cause problems within that domain; if it doesn't, go for it, if it does, maybe don't use floating point.

In this case my fix solves the problem for what I think is the vast majority of the most likely inputs (in particular it solves it for all the inputs that my particular program was going to get), and while it's less fundamental than e.g. using arbitrary-precision arithmetic, it does better on the cost-benefit analysis. (Just like how "completely overhaul our company" addresses things on a more fundamental level than just fixing the structural simulation, but may not be the best fix given resource constraints.)

The main purpose of my example was not to argue that my particular approach was the "correct" one, but rather to point out the flaws in the "multiply by an arbitrary constant" approach. I'll edit that line, since I think you're right that it's a little more complicated than I was making it out to be, and "trivial" could be an unfair characterization.

Comment by Isaac King (KingSupernova) on Duct Tape security · 2024-04-27T01:09:59.545Z · LW · GW

In the general case I agree it's not necessarily trivial; e.g. if your program uses the whole range of decimal places to a meaningful degree, or performs calculations that can compound floating point errors up to higher decimal places. (Though I'd argue that in both of those cases pure floating point is probably not the best system to use.) In my case I knew that the intended precision of the input would never be precise enough to overlap with floating point errors, so I could just round anything past the 15th decimal place down to 0.
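A minimal sketch of that fix (the helper name is invented, and the 15-decimal-place cutoff is the domain-specific judgment call described above, not a universal solution):

```javascript
// Sketch of the fix described above: treat two floats as equal when they
// agree to 15 decimal places, on the assumption that legitimate inputs are
// never that precise. The cutoff is a domain-specific choice.
function roughlyEqual(a, b, places = 15) {
  return Number((a - b).toFixed(places)) === 0;
}

console.log(roughlyEqual(0.1 + 0.2, 0.3)); // true, despite 0.1 + 0.2 !== 0.3
console.log(roughlyEqual(0.1, 0.2));       // false
```

This fails for inputs whose meaningful precision overlaps with floating point error, or for calculations that compound errors up into higher decimal places, which is exactly the domain check described above.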

Comment by Isaac King (KingSupernova) on Probability is Subjectively Objective · 2024-02-05T21:13:25.134Z · LW · GW

If we figure out how to build GAI, we could build several with different priors, release them into the universe, and see which ones do better. If we give them all the same metric to optimize, they will all agree on which of them did better, thus determining one prior that is the best one to have for this universe.

Comment by Isaac King (KingSupernova) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-17T21:56:54.078Z · LW · GW

I don't understand what "at the start" is supposed to mean for an event that lasts zero time.

Comment by Isaac King (KingSupernova) on Stop talking about p(doom) · 2024-01-16T19:36:21.009Z · LW · GW

I don't think you understand how probability works.

https://outsidetheasylum.blog/understanding-subjective-probabilities/

Comment by Isaac King (KingSupernova) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-14T23:24:38.281Z · LW · GW

Ok now I'm confused about something. How can it be the case that an instantaneous perpendicular burn adds to the craft's speed, but a constant burn just makes it go in a circle with no change in speed?

Comment by Isaac King (KingSupernova) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-13T04:03:43.334Z · LW · GW

...Are you just trying to point out that thrusting in opposite directions will cancel out? That seems obvious, and irrelevant. My post and all the subsequent discussion are assuming burns of epsilon duration.

Comment by Isaac King (KingSupernova) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-13T01:22:59.356Z · LW · GW

I don't understand how that can be true? Vector addition is associative; it can't be the case that adding many small vectors behaves differently from adding a single large vector equal to the small vectors' sum. Throwing one rock off the side of the ship followed by another rock has to do the same thing to the ship's trajectory as throwing both rocks at the same time.

Comment by Isaac King (KingSupernova) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-12T20:13:55.980Z · LW · GW

How is that relevant? In the limit where the retrograde thrust is infinitesimally small, it also does not increase the length of the main vector it is added to. Negligibly small thrust results in negligibly small change in velocity, regardless of its direction.

Comment by Isaac King (KingSupernova) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-12T10:06:30.499Z · LW · GW

Unfortunately I already came across that paradox a day or two ago on Stack Exchange. It's a good one though!

Yeah, my numerical skill is poor, so I try to understand things via visualization and analogies. It's more reliable in some cases, less in others.

Comment by Isaac King (KingSupernova) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-11T19:49:39.154Z · LW · GW

when the thrust is at 90 degrees to the trajectory, the rocket's speed is unaffected by the thrusting, and it comes out of the gravity well at the same speed as it came in.


That's not accurate; when you add two vectors at 90 degrees, the resulting vector has a higher magnitude than either. The rocket will be accelerated to a faster speed.
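To put numbers on this (a quick sketch; values arbitrary):

```javascript
// Adding a perpendicular delta-v to an existing velocity: the new speed is
// the hypotenuse of the two components, so it is always larger than either.
const v = 10.0;                     // current speed
const dv = 1.0;                     // perpendicular delta-v
const newSpeed = Math.hypot(v, dv); // sqrt(v^2 + dv^2)
console.log(newSpeed);              // ~10.05, faster than 10.0
```

Note that the gain is roughly dv²/(2v) for small dv, i.e. second-order, which is why a continuous perpendicular burn (an infinitesimal dv at each instant) bends the trajectory without changing speed, while any finite perpendicular burn does add some.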

Comment by Isaac King (KingSupernova) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-11T02:07:29.061Z · LW · GW

I don't think so. The difference in the gravitational field between the bottom point of the swing arc and the top is negligible. The swing isn't an isolated system, so you're able to transmit force to the bar as you move around.

There's a common explanation you'll find online that swings work by you changing the height of your center of mass, which is wrong, since it would imply that swings with rigid bars wouldn't work. But they do.

The actual explanation seems to be something to do with changing your angular momentum at specific points by rotating your body.

Comment by Isaac King (KingSupernova) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-10T21:15:42.332Z · LW · GW

I'm still confused about some things, but the primary framing of "less time spent subject to high gravitational deceleration" seems like the important insight that all other explanations I found were missing.

Comment by Isaac King (KingSupernova) on Stop talking about p(doom) · 2024-01-03T10:01:27.581Z · LW · GW

Probability is a geometric scale, not an additive one; distances are naturally measured in odds. An order of magnitude to either side of 10% covers roughly 1% to 50%.

https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities
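That arithmetic can be sketched by converting through odds (a quick illustration; numbers rounded):

```javascript
// An "order of magnitude" of probability, measured in odds. Reproduces the
// ~1%-50% range around 10% mentioned above.
const probToOdds = (p) => p / (1 - p);
const oddsToProb = (o) => o / (1 + o);

const centerOdds = probToOdds(0.10);      // 1:9 odds, ~0.111
console.log(oddsToProb(centerOdds / 10)); // ~0.011 -> about 1%
console.log(oddsToProb(centerOdds * 10)); // ~0.526 -> about 50%
```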

Comment by Isaac King (KingSupernova) on Stop talking about p(doom) · 2024-01-03T09:47:45.509Z · LW · GW

Feel free to elaborate on the mistakes and I'll fix them.

That article isn't about e/acc people and doesn't mention them anywhere, so I'm not sure why you think it's intended to be. The probability theory denial I'm referencing is mostly on Twitter.

Comment by Isaac King (KingSupernova) on Stop talking about p(doom) · 2024-01-03T09:40:53.476Z · LW · GW

Great point! I focused on AI risk since that's what most people I'm familiar with are talking about right now, but there are indeed other risks, and that's yet another potential source of miscommunication. One person could report a high p(doom) due to their concerns about bioterrorism, and another interprets that as them being concerned about AI.

Comment by Isaac King (KingSupernova) on Stop talking about p(doom) · 2024-01-01T19:13:02.935Z · LW · GW

Oh I agree the main goal is to convince onlookers, and I think the same ideas apply there. If you use language that's easily mapped to concepts like "unearned confidence", the onlooker is more likely to dismiss whatever you're saying.

It's literally an invitation to irrelevant philosophical debates about how all technologies are risky and we are still alive and I don't know how to get out of here without reference to probabilities and expected values. 

If that comes up, yes. But then it's them who have brought up the fact that probability is relevant, so you're not the one first framing it like that.

This kinda misses greater picture? "Belief that here is a substantial probability of AI killing everyone" is 1000x stronger shibboleth and much easier target for derision. 

Hmm. I disagree, not sure exactly why. I think it's something like: people focus on short phrases and commonly-used terms more than they focus on ideas. Like how the SSC post I linked gives the example of republicans being just fine with drug legalization as long as it's framed in right-wing terms. Or how talking positively about eugenics will get you hated, but talking positively about embryo selection and laws against incest will be taken seriously. I suspect that most people don't actually take positions on ideas at all; they take positions on specific tribal signals that happen to be associated with ideas.

Consider all the people who reject the label of "effective altruist", but try to donate to effective charity anyway. That seems like a good thing to me; some people don't want to be associated with the tribe for some political reason, and if they're still trying to make the world a better place, great! We want something similar to be the case with AI risk; people may reject the labels of "doomer" or "rationalist", but still think AI is risky, and using more complicated and varied phrases to describe that outcome will make people more open to it.

Comment by Isaac King (KingSupernova) on An attempt at a "good enough" solution for human two-party negotiations · 2023-12-18T10:11:27.045Z · LW · GW

I don't see how they would be. If you do see a way, please share!

Comment by Isaac King (KingSupernova) on Understanding Subjective Probabilities · 2023-12-12T12:04:22.920Z · LW · GW

I don't understand how either of those are supposed to be a counterexample. If I don't know what seat is going to be chosen randomly each time, then I don't have enough information to distinguish between the outcomes. All other information about the problem (like the fact that this is happening on a plane rather than a bus) is irrelevant to the outcome I care about.

This does strike me as somewhat tautological, since I'm effectively defining "irrelevant information" as "information that doesn't change the probability of the outcome I care about". I'm not sure how to resolve this; it certainly seems like I should be able to identify that the type of vehicle is irrelevant to the question posed and discard that information.

Comment by Isaac King (KingSupernova) on Understanding Subjective Probabilities · 2023-12-11T17:39:21.986Z · LW · GW

No, I think what I said was correct? What's an example that you think conflicts with that interpretation?

Comment by Isaac King (KingSupernova) on Understanding Subjective Probabilities · 2023-12-11T16:40:39.110Z · LW · GW

I think that's accurate, yeah. What's your objection to it?

Comment by Isaac King (KingSupernova) on Understanding Subjective Probabilities · 2023-12-11T16:39:57.870Z · LW · GW

Yeah that was a mistake, I mixed frequentism and propensity together.

Comment by Isaac King (KingSupernova) on How to quantify uncertainty about a probability estimate? · 2023-12-10T04:38:47.762Z · LW · GW

I don't have an answer for you, as this is also something I'm confused about. I felt bad seeing 0 answers here, so I just wanted to mention that I asked about this on Manifold and got some interesting discussion, see here: 

Comment by Isaac King (KingSupernova) on View and bet in Manifold prediction markets on Lesswrong · 2023-12-04T06:58:13.928Z · LW · GW

No, I'm using the WYSIWYG editor. It was for a post, not a comment, and definitely the right link.

Edit: Huh, I tried it again and it worked this time. My bad for not reloading to test on a fresh page before posting here, sorry.

Comment by Isaac King (KingSupernova) on View and bet in Manifold prediction markets on Lesswrong · 2023-12-04T05:04:59.057Z · LW · GW

This doesn't seem to work anymore? I'm posting the link in the editor and nothing happens, there's just a text link.

Comment by Isaac King (KingSupernova) on A free to enter, 240 character, open-source iterated prisoner's dilemma tournament · 2023-11-30T09:57:24.320Z · LW · GW

It's conceptually pretty simple; 240 characters isn't room for a lot. Here's how the writer explained it:

Here's the annotated version of my bot: https://pastebin.com/1a9UPKQk

The basic strategy is:

1. Simulate what the opponent will do on the current turn, and what they would do on the next two turns if I defect twice in a row.

2. If the results of the simulations are [cooperate, defect, defect], play tit-for-tat. Otherwise, defect.

This will defect against DefectBots, CooperateBots, and most of the silly bots that don't pay attention to the opponent's moves. Meanwhile, it cooperates against tit-for-tats and grim triggers and the like.

There are a bunch of details and optimizations and code-golf-ifications, but the big one is that instead of simulating the opponent using the provided function f, I use a code-golfed version of the code used to create bots in the tournament program (in createBot()). This prevents the anti-simulator bots from realizing they're in a simulation - arguments.callee.name is "f", and the variable a isn't defined.

The simulation this bot is doing pits the opposing bot against tit-for-tat, rather than against the winning bot itself, to avoid the infinite regress that would occur if the opposing bot also runs a simulation.

The last paragraph is because I wrote the tournament code poorly, and the function that's provided to the bots to let them simulate other bots was slightly different from the way the top-level bots were run, which allowed bots to tell if they were in a simulation and output different behavior.  (In particular someone submitted a "simulator killer" that would call process.exit() any time it detected it was in a simulation, which would shut down the entire tournament, disqualifying the top-level bot that had run the simulation.) This submission modifies the simulation function to make it indistinguishable.

The greek letters as variable names were for style points.
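De-golfed, the core strategy reads roughly like this (a sketch, not the actual submission: all names are invented, the simulate-against-tit-for-tat and anti-simulator details are stripped out, and `simulate(bot, history)` stands in for the tournament machinery, returning that bot's move, "C" or "D", given the game history):

```javascript
// Sketch of the winning strategy described above, minus the golfing and
// simulation hardening. simulate(bot, history) is a hypothetical stand-in
// for the tournament machinery, perspective-swapping included.
function winnerMove(simulate, opponent, history) {
  // Probe the opponent: what do they play now, and on the next two turns
  // if we were to defect twice in a row?
  const now = simulate(opponent, history);
  const oneDefect = [...history, { myMove: "D", theirMove: now }];
  const afterOne = simulate(opponent, oneDefect);
  const afterTwo = simulate(opponent, [...oneDefect, { myMove: "D", theirMove: afterOne }]);

  // [cooperate, defect, defect] marks a retaliatory bot (tit-for-tat, grim
  // trigger, etc.): cooperate with it via tit-for-tat. Anything else
  // (DefectBot, CooperateBot, bots ignoring our moves) gets defected against.
  if (now === "C" && afterOne === "D" && afterTwo === "D") {
    return history.length === 0 ? "C" : history[history.length - 1].theirMove;
  }
  return "D";
}
```

Against a tit-for-tat opponent the probe returns [C, D, D], so this cooperates; against a CooperateBot the probe returns [C, C, C], so it defects and takes the free points.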

Comment by Isaac King (KingSupernova) on A free to enter, 240 character, open-source iterated prisoner's dilemma tournament · 2023-11-29T19:46:25.115Z · LW · GW

The winner was the following program:

try{eval(`{let f=(d,m,c,s,f,h,i)=>{let r=9;${c};return +!!r};r=f}`);let θ='r=h.at(-1);r=!r||r.o',λ=Ω=>r(m,d,θ,c,f,Ω,Ω.map(χ=>({m:χ.o,o:χ.m}))),Σ=(μ,π)=>[...μ,{m:π,o:+!1}],α=λ([...i]),β=λ(Σ(i,α));r=f(θ)&α&!β&!λ(Σ(Σ(i,α),β))|d==m}catch{r = 1}

We're running a sequel, see here to participate.

Comment by Isaac King (KingSupernova) on A free to enter, 240 character, open-source iterated prisoner's dilemma tournament · 2023-11-12T18:36:23.212Z · LW · GW

Done!

Comment by Isaac King (KingSupernova) on A free to enter, 240 character, open-source iterated prisoner's dilemma tournament · 2023-11-11T23:49:30.162Z · LW · GW

Well, I'd encourage you to submit this strategy and see how it does. :)

Comment by Isaac King (KingSupernova) on A free to enter, 240 character, open-source iterated prisoner's dilemma tournament · 2023-11-10T15:35:07.429Z · LW · GW

Ringer tit-for-tat will always beat tit-for-tat since it will score the same as tit-for-tat except when it goes up against shill tit-for-tat, where it will always get the best possible score.

It will do worse than tit-for-tat against any bot that cooperates with tit-for-tat and defects against ringer tit-for-tat.