Posts

Open & Welcome Thread - December 2020 2020-12-01T17:03:48.263Z
Pattern's Shortform Feed 2019-05-30T21:21:23.726Z

Comments

Comment by Pattern on Collisteru's Shortform · 2021-09-26T02:33:04.370Z · LW · GW

That is a great point. I would like to add:

'The optimal' X is not static. (Even if there's only one.)

The optimal supply of some items - including toilet paper, and probably water - has changed in my lifetime.

Comment by Pattern on GPT-3 + GAN · 2021-09-24T21:12:06.621Z · LW · GW

You might find this interesting:

https://www.gwern.net/GPT-2-preference-learning#bradley-terry-preference-learning

Comment by Pattern on Jitters No Evidence of Stupidity in RL · 2021-09-19T18:50:26.146Z · LW · GW

Was this comment meant to be here? It's not quite clear what it means.

Comment by Pattern on Jitters No Evidence of Stupidity in RL · 2021-09-19T18:44:07.962Z · LW · GW

I've also seen something sort of jittery used as a way to be robust against lag - i.e. sending a bunch of signals (not super precisely) to ensure that at least one gets through, so as to avoid getting stuck.

Comment by Pattern on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-19T18:37:29.291Z · LW · GW
Lsusr: On the one hand, that seems really practical. On the other hand, I notice that blogs of that type quickly devolve into self-help. I don't like writing self-help. Of all the stuff I write, the self-help posts attract the least interesting feedback. Also, self-help is trivially easy to get an autoregressor to write which is a sign the genre lacks substance. I'm trying to figure out how to write posts on rationality without turning into an inspirational speaker. (You know the kind I'm talking about.)

Could a self-help autoregressor actually make an impact?

Comment by Pattern on MikkW's Shortform · 2021-09-19T18:29:55.388Z · LW · GW

Alternative: Something like Condorcet voting, where voters receive a random subset of pairs to compare. For a simple analysis, the number of pairs could be 1. (Or instead of pairs, a voter could be asked to choose 'the best'.)
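
A minimal sketch of the random-pairs variant (Python; the candidate names, voter model, and win-rate tiebreak are illustrative assumptions, not a worked-out voting rule):

    import random
    from collections import defaultdict

    def random_pair_election(candidates, voters):
        # Each voter sees one random pair and picks whichever member of the
        # pair ranks higher in their (full, private) preference ranking.
        wins = defaultdict(int)
        appearances = defaultdict(int)
        for ranking in voters:
            a, b = random.sample(candidates, 2)
            winner = a if ranking.index(a) < ranking.index(b) else b
            wins[winner] += 1
            appearances[a] += 1
            appearances[b] += 1
        # Elect the candidate with the best win rate over their sampled pairs.
        return max(candidates, key=lambda c: wins[c] / max(appearances[c], 1))

    # Example: three candidates, 300 voters with random preference rankings.
    candidates = ["A", "B", "C"]
    voters = [random.sample(candidates, len(candidates)) for _ in range(300)]
    print(random_pair_election(candidates, voters))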

Comment by Pattern on MikkW's Shortform · 2021-09-19T18:26:54.578Z · LW · GW
(One last note: yes, I have considered whether the anecdote I give is a result of the culture or the more diverse ethnic makeup of the US compared to Denmark, and I am unconvinced by that hypothesis. Those effects, while real, are nearly trivial compared to the effects from the voting system. I have written too much here already, so I will not comment further on why this is the case)

I was thinking that 'size dynamics' seem like 'a more obvious reason for delay' than 'diverse ethnic makeup'. Not 'this dynamic makes a lot of sense' but 'this other dynamic would make more sense'.

Comment by Pattern on Is there a beeminder without the punishment? · 2021-09-19T18:19:35.269Z · LW · GW

What was the blog post?

Comment by Pattern on MikkW's Shortform · 2021-09-19T04:03:40.555Z · LW · GW

At first glance, the obvious difference would be size. (But voting, so that the office of vital records is staffed properly and does not take years... does seem the obvious answer.)

Comment by Pattern on The noncentral fallacy - the worst argument in the world? · 2021-09-19T02:42:57.212Z · LW · GW
If he can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this: "X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member."
Call it the Noncentral Fallacy. It sounds dumb when you put it like that. Who even does that, anyway?

One could go further, and say its basis is often wrong - call it the central fallacy. Why would our initial, instinctive reaction be the be-all, end-all?

Comment by Pattern on Why didn't we find katas for rationality? · 2021-09-17T17:29:38.526Z · LW · GW

So katas are more 'exercise' than mastery?

Comment by Pattern on Why didn't we find katas for rationality? · 2021-09-17T16:25:01.882Z · LW · GW

It's also helpful to practice when you're first learning something.

Comment by Pattern on Why didn't we find katas for rationality? · 2021-09-17T16:23:58.758Z · LW · GW

0. You assume there's one optimal solution, and that it has already been found, i.e. no room for improvement, and that nothing changes that would affect the solution.


1. You can change the math problem. (Coming up with or finding novel problems is also useful, but with work, an old problem can be extended. You've heard of the Monty Hall problem, but how does it change if you have more doors, and more time with doors being opened? If another person was playing a similar game, given the option, would you gain in expected value if you switched with them? See the simulation sketch following this list.)

2. You may find your (old) solution is wrong. (A 'solution' you find online can be wrong.)

3. You can try to solve a problem using a different technique. (Try to find 1 minus the probability first. Exact solution instead of approximate, or the other way around. Numeric solution, or functional solution (in terms of parameters or variables). Often, the set of parameters can be extended.)

4.

Therefore you don't get the benefit from repeating the same exercises again and again.

If you've forgotten something, then you can learn it again. Likewise, review (though with decreasing frequency) allows retention.


5. (See below.)
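
As mentioned under point 1, here's a minimal Monte Carlo sketch of the extended Monty Hall variant (Python; the door counts and trial count are illustrative):

    import random

    def monty_hall_trial(n_doors, doors_opened, switch):
        # One round: a prize behind a random door; you pick one; the host
        # opens `doors_opened` goat doors from the rest; you may then switch
        # to a random unopened door. Returns True if you win.
        # (Requires doors_opened <= n_doors - 2.)
        prize = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        openable = [d for d in range(n_doors) if d != pick and d != prize]
        opened = set(random.sample(openable, doors_opened))
        if switch:
            remaining = [d for d in range(n_doors)
                         if d != pick and d not in opened]
            pick = random.choice(remaining)
        return pick == prize

    def win_rate(n_doors, doors_opened, switch, trials=100_000):
        return sum(monty_hall_trial(n_doors, doors_opened, switch)
                   for _ in range(trials)) / trials

    # Classic: 3 doors, 1 opened -> stay ~1/3, switch ~2/3.
    print(win_rate(3, 1, False), win_rate(3, 1, True))
    # Extension: 10 doors, 8 opened -> switching wins ~9/10.
    print(win_rate(10, 8, False), win_rate(10, 8, True))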

Comment by Pattern on When Hindsight Isn't 20/20: Incentive Design With Imperfect Credit Allocation · 2021-09-14T05:58:50.838Z · LW · GW

1)

The tricky part in real life is that memory isn't perfect.

(However, instead of throwing the chest overboard, you could attempt to catch one major defector, toss them overboard, and distribute the remaining gold (even if the sums didn't quite work out before, and you couldn't catch the remaining defectors):

If everyone is honest:

then coins_1 + coins_2 + ... + coins_n = coins_chest.

If there's only one major defector, then not only is coins_1 + ... + coins_n > coins_chest, but

coins_1 + ... + coins_n - coins_major-defector <= coins_chest. (More generally, any sum of entirely honest (and entirely accurate) entries will be at most the number of coins in the chest.))
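
A small sketch of that check (Python; the pirate names and coin amounts are made up):

    def catch_major_defector(claims, chest):
        # If the reported claims exceed the chest, tentatively blame the
        # largest claimant: removing their claim should bring the sum back
        # to at most the chest if they were the only major defector.
        total = sum(claims.values())
        if total <= chest:
            return None  # Consistent with everyone being honest.
        suspect = max(claims, key=claims.get)
        if total - claims[suspect] <= chest:
            return suspect  # Consistent with a single major defector.
        return "multiple defectors"  # One removal can't explain the excess.

    claims = {"Anne": 30, "Bart": 25, "Cora": 70}  # Cora over-claims.
    print(catch_major_defector(claims, chest=80))  # -> Cora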


2)

If you split people up into groups ahead of time, you could keep track of the amount that a group has (more cheaply than keeping track of everyone).

I'm going to stop this train of thought here before recreating Bitcoin. (If everyone keeps track of a random partition involving everyone else, will that probably be enough to figure everything else out afterwards?)

Might work against fraud detection though.

(Simpler to just remember what one other person's amount is though.)


3)

Enough people who have more (and this is common knowledge) agree to an equal distribution. (For example 'We were all going to use this gold to get drunk anyway. So why don't we just spend this chest of gold on drinks?')


4)

Not sure it has an effect but:

Multiple rounds. First round: not everyone is honest, so half the gold in the chest is dumped out. Restart.

(Can be combined with method 2.)


Also, if the amounts submitted add up to a multiple of the amount in the chest, then it can be split proportionally.


5)

Pick a random order to go in. Once the amount in the chest is exceeded, it is divided among those people (minus the person who went over). If someone cheats by a lot, they will probably not get it.
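
A sketch of one reading of this method (Python; splitting the leftover equally among those already paid is an assumption - the description above leaves that open):

    import random

    def random_order_split(claims, chest):
        # Claimants go in random order; each is paid their claim while the
        # chest lasts. The first claimant whose claim would overshoot what's
        # left is excluded, and the leftover is split among those already paid.
        order = random.sample(list(claims), len(claims))
        paid = []
        remaining = chest
        for pirate in order:
            if claims[pirate] > remaining:
                break  # This pirate "went over" and gets nothing.
            remaining -= claims[pirate]
            paid.append(pirate)
        payouts = {p: claims[p] for p in paid}
        if paid:  # Distribute whatever is left among the included claimants.
            extra = remaining / len(paid)
            for p in paid:
                payouts[p] += extra
        return payouts

    print(random_order_split({"Anne": 30, "Bart": 25, "Cora": 70}, chest=80))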


6)

Throw out the person who said the most, if the sum is too great.

Comment by Pattern on When Hindsight Isn't 20/20: Incentive Design With Imperfect Credit Allocation · 2021-09-14T05:33:09.389Z · LW · GW

Perhaps experience with saboteurs* (or conditions with high rates of natural accidents) could have that effect. (Though the original story would be rather nice to know.)

*Especially if the saboteurs weren't part of the unit - stealing stuff, messing things up, etc. Then it's obviously a good idea to implement such a rule.

Comment by Pattern on When Hindsight Isn't 20/20: Incentive Design With Imperfect Credit Allocation · 2021-09-14T05:30:43.137Z · LW · GW
And, if there’s an end-to-end mismatch, it will often be expensive to figure out where communication failed, even in hindsight.

Might be true of this type of production (O-ring), less clear that it's the case with a game of telephone.

(Implicitly you're able to compare the messages at the ends, to know there's a mismatch.)

Ask the person halfway along what they heard. If what they heard is fine, then the error(s) occurred after the halfway point; if not, before it. (Repeat to narrow it down.)

(Also, this is what redundancy and error-correcting codes are for.)
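
A sketch of the bisection idea (Python; `heard` is a hypothetical record of what each person in the chain received, and it assumes a corrupted message never reverts on its own):

    def first_error(heard, original):
        # Binary search for the first person in the chain whose received
        # message differs from the original.
        lo, hi = 0, len(heard) - 1
        if heard[hi] == original:
            return None  # No mismatch at the end of the chain.
        while lo < hi:
            mid = (lo + hi) // 2
            if heard[mid] == original:
                lo = mid + 1  # The error happened after the midpoint.
            else:
                hi = mid  # The error happened at or before the midpoint.
        return lo

    chain = ["purple monkey"] * 4 + ["purple monkey dishwasher"] * 3
    print(first_error(chain, "purple monkey"))  # -> 4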

Comment by Pattern on The Best Software For Every Need · 2021-09-12T03:23:37.076Z · LW · GW

Upvoted. I don't see competitors being mentioned in this thread.

Comment by Pattern on Open & Welcome Thread September 2021 · 2021-09-12T00:11:05.949Z · LW · GW

Coming across Scott Aaronson by way of searching for info about P=NP - that happened to me a long time ago. At the time, my reaction to 'I think we should add "physics doesn't enable P=NP" as a law' was something like 'What? Don't you need some reason to assert that it's impossible?' (Though I did wonder if that's where thermodynamics came from.)

Comment by Pattern on Bayeswatch 9: Zombies · 2021-09-12T00:05:03.998Z · LW · GW
A righteous government is of all the most to be wished for.

I'm not sure if this is right. (It could work as a sentence, though it's not super clear.)

Comment by Pattern on Bayeswatch 8: Antimatter · 2021-09-11T00:08:12.955Z · LW · GW

He plugged the Z-Day AI it into his laptop.

Something about this sentence (above) is wrong.

(Probably the unnecessary "it".)

Comment by Pattern on Rationality Is Expertise With Reality · 2021-09-06T17:27:31.828Z · LW · GW
I was going for a style of writing I'm familiar with in explaining useful things - question to conclusion, I don't

So the style is based around making assertions, and if people don't think the point is obvious, they ask for evidence/ask what you mean?

Do you have any (other) examples of the style?

Comment by Pattern on Can you control the past? · 2021-09-03T17:17:13.708Z · LW · GW

(A lot of this has been covered before.)


Showing that you can control such things* doesn't seem to disprove CDT. It seems to motivate different CDT dynamics. (In case that's a source of confusion, it could be called something else like Control Decision Theory.)

*taking this as given


Instead of picking one option you could randomize. (If Omega can read my mind, then a coin flip should be no problem.)


Are you really supposed to just leave it there, sitting in the attic? What sort of [madness] is that?

If it's for someone else...


Sometimes, one-boxers object: if two-boxers are so rational, why do the one-boxers end up so much richer? But two-boxers can answer: because Omega has chosen to give better options to agents who will choose irrationally. Two-boxers make the best of a worse situation: they almost always face a choice between nothing or $1K, and they, rationally, choose $1K. One-boxers, by contrast, make the worse of a better situation: they almost always face a choice between $1M or $1M+$1K, and they, irrationally, choose $1M.

There are also evolutionary approaches. You could always switch strategies. (Especially if you get to play multiple times. Also, what is all this inflation doing to the economy?)


If Omega will give you millions if you believe that Paris is in Ohio,

Paris, Ohio. (Might have been easier/cheaper to build in the past if it didn't exist.)


Parfit’s hitchhiker

Violence is the answer.

Gratitude? $11,000 is a ridiculous sum - daylight robbery. (An entirely different question is:

a) It's a charity. They save the lives of people stuck in the desert for free (for the beneficiaries). They run on donations, though.

b) It's an Uber. How much do you tip? (A trip into the desert is awful during the day, terribly hot. It's a long trip, you were way out there. And you suspect the powerful (and expensive) air conditioning saved your life. The food and water definitely did.))


Counterfactual mugging: Omega doesn’t know whether the X-th digit of pi is even or odd. Before finding out, she makes the following commitment. If the X-th digit of pi is odd, she will ask you for a thousand dollars. If the X-th digit is even, she will predict whether you would’ve given her the thousand had the X-th digit been odd, and she will give you a million if she predicts “yes.” The X-th digit is odd, and Omega asks you for the thousand. Should you pay?

$1,000? Try $100, scale the cost and the payout down by a factor of 10. (Also, by studying math, I might conclude those are bad odds, for pi-digits.)


The issue is that you’re trying to improve the past at all.

The framing leading up to it was weird, but this makes sense. (There's also wanting to improve the future as a motivation. And if population increases over time, a small effect could have a huge impact. (Even before we adjust for impacts that won't be linear.))*

*This also leads to 'should you be nice to people after an apocalypse (before which, population was higher), because in a large population, even a small effect on the past would...'


Also 'be nice'? What about...breaking people out? Be the change you wanted to see in the world.


Once you’ve started trying to acausally influence the behavior of aliens throughout the multiverse, though, one starts to wonder even more about the whole lost-your-marbles thing.

Randomize for reduced cost. (Also might allow for improving expected value, while adjusting for transaction costs in a reasonable fashion.)

Comment by Pattern on D&D.Sci Pathfinder: Return of the Gray Swan · 2021-09-03T16:49:20.777Z · LW · GW

Secrets(?):

Some things more probable than a history-eating monster, although not based on me looking at any data:

a) Insurance fraud. (Yes, this trip is very risky, but it's very lucrative if we succeed! Oh, no! Well, insurance will cover it. Wait, they don't? Uh... what if the ship didn't go there? What if it went somewhere else, where it's not our fault?) Issues: requires data showing that something is a bad idea, or stories.

b) Secrets are hidden by having people not go where they could find them. (May require centralized control, without alternatives; not too unlikely in this scenario, though.)

Comment by Pattern on [deleted post] 2021-09-01T21:14:36.161Z
I would say, however, the Danish system is better designed for that goal

What is the Danish system?


I want to live in the truly free world.

Is that just "New Zealand...Denmark...Ireland...Germany"?

Comment by Pattern on Living with a homeopath - how? · 2021-08-27T18:03:08.561Z · LW · GW
more leverage

Then don't.

Comment by Pattern on Is top-down veganism unethical? · 2021-08-27T18:01:16.925Z · LW · GW

Edited to fix that.

Comment by Pattern on Is top-down veganism unethical? · 2021-08-27T17:58:23.182Z · LW · GW

I am now sure I wasn't.

Comment by Pattern on Randal Koene on brain understanding before whole brain emulation · 2021-08-26T17:40:38.643Z · LW · GW
I'm not sure whether this part of your comment is referring to the normative question or the forecasting question.

Normative. Say that aligning something much smarter/more complicated than you (or your processing**) is difficult. The obvious fix would be: can we make people smarter?* Digital enhancement might be the easier route (at least in crude ways - more computers, more processing), though this may be more difficult than it sounds - serial work has to be finished, parallel work has to be communicated, etc.

This might help with analysis, or something like bandwidth. (Being able to process more information might make it easier to analyze the output of a process - if we wanted to evaluate a GPT-like thing's ability to generate poetry, and it can generate 'poetry' faster than we can read or rate it, then we're the bottleneck.)

*Or algorithms simpler/easier to understand.


**Better ways of analyzing chess might help someone understand a (current) position in a chess game better (or just allow phones to compete with humans (though I don't know how much of this is better algorithms, versus more powerful phones)).

Comment by Pattern on Living with a homeopath - how? · 2021-08-26T17:18:19.379Z · LW · GW

This might be a dumb idea but:

If you're sure that the 'boost' will not make things worse (find out about it, talk to multiple actual doctors who are not affiliated with it, etc.) and aren't worried about a slippery slope:

Would going for both work? ("I want all the help I can get.")

Comment by Pattern on Living with a homeopath - how? · 2021-08-26T17:16:24.152Z · LW · GW

Have you ever changed someone's mind this way? (Seriously asking, I haven't tried that.)

Comment by Pattern on Living with a homeopath - how? · 2021-08-26T17:14:01.677Z · LW · GW

I don't know exactly what you mean by the term 'injections', but I am afraid of them, in that:

  • I believe that injecting anything (even water) into your veins can kill you
  • Injecting stuff in other places...sounds like a horrible idea.
  • I've had vaccines that were painful because...they were injections.
  • From what I've heard about homeopathy, some of it is actively harmful. (If I were in your shoes, I would be worried about a) dying, b) things going badly and not getting medical treatment, c) things going badly and getting 'treatment' that makes things worse, etc.)
Comment by Pattern on Buck's Shortform · 2021-08-26T17:05:37.824Z · LW · GW

Hm. Maybe there's something to be gained from navigating 'trade-offs' differently? I thought perpetual motion machines were impossible (because thermodynamics), aside from 'launch something into space, pointed away from stuff it would crash into'. I'd read that trying to build one is a good way to learn about physics, but I didn't really try because I thought it'd be pointless. And then this happened.

Comment by Pattern on Is top-down veganism unethical? · 2021-08-26T17:02:10.140Z · LW · GW
in the end it will be a similar issue to GMO foods; its wide-spread adoption depending on whether businesses have to explicitly label plant-based alternatives as alternatives

Unless it becomes better. It might be far off, but if we can consistently create tastier meat, then people will seek it out (if it becomes known for quality, and the price is reasonable).


Edits:

removed footnotes that would have been better as comments (on the original post)

added quote for context

Comment by Pattern on Is top-down veganism unethical? · 2021-08-26T16:54:48.174Z · LW · GW
On the other hand, if [happiness] cannot be measured or even defined in any meaningful way, maybe it does not matter that much, after all.

I think people would disagree with this (above).

Comment by Pattern on Randal Koene on brain understanding before whole brain emulation · 2021-08-26T16:49:37.972Z · LW · GW
I think you're overly confident that WBE would be irrelevant to the timeline of AGI capabilities research

Ah, I wrote this around the same time as another comment responding to something about 'alignment work is a good idea even if the particular alignment method won't work for a superintelligence'. (A positive utility argument is not a max utility argument.)

So, I wasn't thinking about the timeline (and how relevant it would be) when I wrote that, just that it seems far out to me.


On reflection:

would be feasible to get WBE without incidentally first understanding brain algorithms well enough to code an AGI from scratch using similar algorithms.

I should have just responded to something like this (above).


I can see this being right (similar understanding required for both), although I'm less sure of the idea that one must be easier than the other. Mostly in the sense that: I don't know how small an AGI can be. Yes, brains are big (and complicated), but I don't know how much of that can be avoided. So I think a working, understood, digital mind is a sufficiently large task that:

  • it seems far off (in terms of time and effort)
  • If a brain is harder, the added time and difficulty isn't quite so big in comparison*

*an alternative would be that we'll get an answer to this question before we get AGI:

As we start understanding minds, our view of the brain starts to recognize the difficulty. Like 'we know something like X is required, but since evolution was proceeding in a very random/greedy way, in order to get something like X, a lot of unneeded complexity is added (because continuous improvement is a difficult constraint to fulfill, and relentless focus on improvement through that view is a path that does more 'visiting every local maximum along the way' than 'going for the global maximum'), and figuring this out will be a lot harder than figuring out (how to make) a digital** mind.'

**I don't know how much 'custom/optimized hardware for architectures/etc.' addresses the difficulties I mentioned. This might make your point about AGI before WBE a lot stronger - if the brain is an architecture optimized for minimizing power consumption in ways that make it way harder to emulate, timewise, than 'AGI more optimized for 'computers'', then that could be a reason WBE would take longer.

I'd have thought that the main reason WBE would come up would be 'understandability' or 'alignment' rather than speed, though I can see why at first glance people would say 'reverse engineering the brain (which exists) seems easier than making something new' (even if that is wrong).

Comment by Pattern on Randal Koene on brain understanding before whole brain emulation · 2021-08-25T15:44:28.233Z · LW · GW
Would that actually help solve the problem? I mean, other things equal, if flesh-and-blood humans have probability P of accidentally creating catastrophically-out-of-control AGIs, well, emulated human brains would do the exact same thing with the exact same probability…. Right? Well, maybe. Or it might be more complicated than that.

1. Accidentally creating AGI seems unlikely.

2. If you only have one emulated brain, it's less likely to do so than humans (base rate).

3. Emulating brains in order to increase capability is currently...an idea. Even if you could run it 'at the same speed' (to the extent that such a thing makes sense - and remember we interact with the world through bodies; a brain can't see, etc.), running faster would take correspondingly more effort and be even more expensive. (I've heard brains are remarkably power-efficient, physically. The cost of running a supercomputer to simulate a human brain seems high. (Working out better software might improve this some amount.))


Interviewer (Paul Middlebrooks): Engineering and science-for-understanding are not at odds with each other, necessarily, but they are two different things

Practically, progress requires doing both; e.g. better equipment to create and measure electricity is needed to understand it better, which helps us understand how to direct, contain, and generate it better, etc.

Comment by Pattern on Buck's Shortform · 2021-08-25T15:28:53.859Z · LW · GW
They also believe that covid is a hoax, plus have lots of less fringe but still quite irrational beliefs.

It seems like the more people you know, the less likely this is.


Of course I dream about a group that would have all the advantages and none of the disadvantages.

Of both? (This sentence didn't have a clear object.)

Comment by Pattern on Buck's Shortform · 2021-08-25T15:25:13.513Z · LW · GW
One extreme is that we can find a way such that there is no tradeoff whatsoever between safety and capabilities—an "alignment tax" of 0%.

I think that was the idea behind 'oracle AIs'. (Though I'm aware there were arguments against that approach.)


One of the arguments I didn't see for

sorting out the practical details of how to implement them:

was:

"As we get better at this alignment stuff we will reduce the 'tradeoff'. (Also, arguably, getting better human feedback improves performance.)

Comment by Pattern on Buck's Shortform · 2021-08-25T15:19:59.332Z · LW · GW
The best we could possibly hope for with transparency techniques is: For anything that a neural net is doing, we are able to get the best possible human understandable explanation of what it’s doing, and what we’d have to change in the neural net to make it do something different. But this doesn’t help us if the neural net is doing things that rely on concepts that it’s fundamentally impossible for humans to understand, because they’re too complicated or alien. It seems likely to me that these concepts exist. And so systems will be much weaker if we demand interpretability.

That may be 'the best we could hope for', but I'm more worried about 'we can't understand the neural net (with the tools we have)' than "the neural net is doing things that rely on concepts that it’s fundamentally impossible for humans to understand". (Or, solving the task requires concepts that are really complicated to understand (though maybe easy for humans to understand), and so the neural network doesn't get it.)


And so systems will be much weaker if we demand interpretability.

Whether or not "empirical contingencies work out nicely", I think the concern about 'fundamentally impossible to understand concepts" is...something that won't show up in every domain. (I also think that things do exist that people can understand, but it takes a lot of work, so people don't do it. There's an example from math involving some obscure theorems that aren't used a lot for that reason.)

Comment by Pattern on Yet More Modal Combat · 2021-08-25T15:06:35.029Z · LW · GW
Where C= My opponent cooperates against me.

Cooperates with you, or against you?


Lobian

lob style reasoning

Shouldn't that be:

Löbian

Löb-style reasoning

Comment by Pattern on What was my mistake evaluating risk in this situation? · 2021-08-21T05:58:44.684Z · LW · GW
How could I have done better?

Well, you don't know what you don't know. Would you have found it less absurd if you had known more about polio? Other lung diseases? Other diseases that caused permanent brain damage?

Comment by Pattern on Open and Welcome Thread – August 2021 · 2021-08-21T05:52:21.853Z · LW · GW

Well, it might make it easier for someone to steal your credit card info if you're wearing one of these headsets.

Comment by Pattern on Open and Welcome Thread – August 2021 · 2021-08-21T05:49:37.234Z · LW · GW
Is there any advantage

You probably use your shortform more, so it might get more attention as a comment here.

Comment by Pattern on Covid 8/19: Cracking the Booster · 2021-08-21T05:44:43.582Z · LW · GW

Base Meme:

My Summer* Plans:

[random picture of literally anything, I haven't seen a lot of high quality versions of this meme]

Delta:

[another random picture; seriously, these would be way better if they were that general, versus Darth Vader force-choking him]


The Oko, Thief of Crowns version is an actually good instance of the meme/image macro:

My Summer plans:

[picture of the back of a magic card]

Delta:

[a magical thief, that has stolen...your summer]


*Summer plans, Fall plans,...same meme.

Comment by Pattern on Covid 8/19: Cracking the Booster · 2021-08-21T05:39:22.273Z · LW · GW

There are these people called "commenters" that can do that.

Comment by Pattern on Covid 8/19: Cracking the Booster · 2021-08-21T05:38:25.042Z · LW · GW

16.7% of people polled (biased sample) appear comfortable with murder by means of magic. All it takes is a couple of ceilings that are only 15.5 inches high, or lower, while the space is occupied.

Comment by Pattern on Agency in Conway’s Game of Life · 2021-08-16T19:32:42.356Z · LW · GW

If by 'time reversibility' you mean 'a cellular automaton in which every configuration has a unique predecessor', then is that important to thermodynamics? And does our world/universe have this property?

Comment by Pattern on Working With Monsters · 2021-08-16T19:06:52.224Z · LW · GW

While they are dead, no. While they were alive - yes, they could. (This is interesting in that, properly performed, (a specified) computation gets the same result, whatever the circumstances. More generally, Fermat argued that, for positive integers a, b, c, and n, where n > 2, a^n + b^n = c^n:

  • had no solutions
  • was provable

He might have been wrong about the difficulty of proving it, but he was right about the above. If perhaps for the wrong reasons. (Can we prove Fermat didn't have a proof?))
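
A brute-force check of the 'no solutions' claim for small cases (Python; the search bounds are illustrative - a finite search obviously can't prove the theorem, which needed Wiles's proof):

    # Search for positive-integer solutions to a^n + b^n = c^n, n = 3..5,
    # with a, b up to 50 (c's range comfortably covers 2 * 50^n).
    counterexamples = [
        (a, b, c, n)
        for n in range(3, 6)
        for a in range(1, 51)
        for b in range(a, 51)
        for c in range(b, 101)
        if a**n + b**n == c**n
    ]
    print(counterexamples)  # -> [] : no small counterexamples, as Fermat claimed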

Comment by Pattern on Agency in Conway’s Game of Life · 2021-08-15T18:41:53.891Z · LW · GW
It's remarkable that googling "thermodynamics of the game of life" turns up zero results. 

It's not obvious that thermodynamics generalizes to the game of life, or what the equivalents of energy or order would be: at first glance it has perpetual motion machines ("gliders").

Comment by Pattern on Covid 6/17: One Last Scare · 2021-08-15T18:38:18.644Z · LW · GW
Eliezer several times expressed the view

Can you link to 3 times?