Posts

Newton's law of cooling from first principles 2024-01-16T04:21:11.094Z
Inflection AI: New startup related to language models 2022-04-02T05:35:24.759Z
My take on higher-order game theory 2021-11-30T05:56:00.990Z
Nisan's Shortform 2021-09-12T06:05:04.965Z
April 15, 2040 2021-05-04T21:18:08.912Z
What is a VNM stable set, really? 2021-01-25T05:43:59.496Z
Why you should minimax in two-player zero-sum games 2020-05-17T20:48:03.770Z
Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern) 2020-05-11T09:47:00.773Z
Conflict vs. mistake in non-zero-sum games 2020-04-05T22:22:41.374Z
Beliefs at different timescales 2018-11-04T20:10:59.223Z
Counterfactuals and reflective oracles 2018-09-05T08:54:06.303Z
Counterfactuals, thick and thin 2018-07-31T15:43:59.187Z
An environment for studying counterfactuals 2018-07-11T00:14:49.756Z
Logical counterfactuals and differential privacy 2018-02-04T00:17:43.000Z
Oracle machines for automated philosophy 2015-02-17T15:10:04.000Z
Meetup : Berkeley: Beta-testing at CFAR 2014-03-19T05:32:26.521Z
Meetup : Berkeley: Implementation Intentions 2014-02-27T07:06:29.784Z
Meetup : Berkeley: Ask vs. Guess (vs. Tell) Culture 2014-02-19T20:16:30.017Z
Meetup : Berkeley: The Twelve Virtues 2014-02-12T19:56:53.045Z
Meetup : Berkeley: Talk on communication 2014-01-24T03:57:50.244Z
Meetup : Berkeley: Weekly goals 2014-01-22T18:16:38.107Z
Meetup : Berkeley meetup: 5-minute exercises 2014-01-15T21:02:26.223Z
Meetup : Meetup at CFAR, Wednesday: Nutritionally complete bread 2014-01-07T10:25:33.016Z
Meetup : Berkeley: Hypothetical Apostasy 2013-06-12T17:53:40.651Z
Meetup : Berkeley: Board games 2013-06-04T16:21:17.574Z
Meetup : Berkeley: The Motivation Hacker by Nick Winter 2013-05-28T06:02:07.554Z
Meetup : Berkeley: To-do lists and other systems 2013-05-22T01:09:51.917Z
Meetup : Berkeley: Munchkinism 2013-05-14T04:25:21.643Z
Meetup : Berkeley: Information theory and the art of conversation 2013-05-05T22:35:00.823Z
Meetup : Berkeley: Dungeons & Discourse 2013-03-03T06:13:05.399Z
Meetup : Berkeley: Board games 2013-01-29T03:09:23.841Z
Meetup : Berkeley: CFAR focus group 2013-01-23T02:06:35.830Z
A fungibility theorem 2013-01-12T09:27:25.637Z
Proof of fungibility theorem 2013-01-12T09:26:09.484Z
Meetup : Berkeley meetup: Board games! 2013-01-08T20:40:42.392Z
Meetup : Berkeley: How Robot Cars Are Near 2012-12-17T19:46:33.980Z
Meetup : Berkeley: Boardgames 2012-12-05T18:28:09.814Z
Meetup : Berkeley meetup: Hermeneutics! 2012-11-26T05:40:29.186Z
Meetup : Berkeley meetup: Deliberate performance 2012-11-13T23:58:50.742Z
Meetup : Berkeley meetup: Success stories 2012-10-23T22:10:43.964Z
Meetup : Different location for Berkeley meetup 2012-10-17T17:19:56.746Z
[Link] "Fewer than X% of Americans know Y" 2012-10-10T16:59:38.114Z
Meetup : Different location: Berkeley meetup 2012-10-03T08:26:09.910Z
Meetup : Pre-Singularity Summit Overcoming Bias / Less Wrong Meetup Party 2012-09-24T14:46:05.475Z
Meetup : Vienna meetup 2012-09-22T13:14:23.668Z
Meetup report: How harmful is cannabis, and will you change your habits? 2012-09-09T04:50:10.943Z
Meetup : Berkeley meetup: Cannabis, Decision-Making, And A Chance To Change Your Mind 2012-08-29T03:50:23.867Z
Meetup : Berkeley meetup: Operant conditioning game 2012-08-21T15:07:36.431Z
Meetup : Berkeley meetup: Discussion about startups 2012-08-14T17:09:10.149Z
Meetup : Berkeley meetup: Board game night 2012-08-01T06:40:27.322Z

Comments

Comment by Nisan on The Worst Form Of Government (Except For Everything Else We've Tried) · 2024-03-18T03:12:35.059Z · LW · GW

Maybe the right word for this would be corporatism.

Comment by Nisan on Simple versus Short: Higher-order degeneracy and error-correction · 2024-03-11T23:43:19.022Z · LW · GW

I'm surprised to see an application of the Banach fixed-point theorem as an example of something that's too implicit from the perspective of a computer scientist. After all, real quantities can only be represented in a computer as a sequence of approximations — and that's exactly what the theorem provides.

I would have expected you to use, say, the Brouwer fixed-point theorem instead, because Brouwer fixed points can't be computed to arbitrary precision in general.

(I come from a mathematical background, fwiw.)
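
To make the constructive content concrete, here is a minimal sketch of fixed-point iteration with the standard contraction error bound (the function and constants are purely illustrative):

    import math

    # Minimal sketch: the Banach fixed-point theorem is constructive. For a
    # contraction f with Lipschitz constant q < 1, iterating from any starting
    # point converges, and |x_{n+1} - x*| <= q/(1-q) * |x_{n+1} - x_n| gives a
    # computable stopping criterion for any desired precision.
    def banach_fixed_point(f, x0, q, tol=1e-12):
        x = x0
        while True:
            x_next = f(x)
            if q / (1 - q) * abs(x_next - x) < tol:
                return x_next
            x = x_next

    # Illustrative example: cos is a contraction on [0.6, 0.9] with q < 0.8.
    print(banach_fixed_point(math.cos, 0.7, 0.8))  # ~0.7390851, the Dottie number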

Comment by Nisan on Prediction market: Will John Wentworth's Gears of Aging series hold up in 2033? · 2024-02-22T21:14:45.184Z · LW · GW

For reference, here's the Gears of Aging sequence.

Comment by Nisan on Importing a Python File by Name · 2024-02-02T19:11:01.154Z · LW · GW

This article saved me some time just now. Thanks!

Comment by Nisan on Newton's law of cooling from first principles · 2024-01-18T00:20:00.801Z · LW · GW

Scaling temperature up by a factor of 4 scales up all the velocities by a factor of 2 [...] slowing down the playback of a video has the effect of increasing the time between collisions [....]

Oh, good point! But hm, scaling up temperature by 4x should increase velocities by 2x and energy transfer per collision by 4x. And it should increase the rate of collisions per time by 2x. So the rate of energy transfer per time should increase 8x. But that violates Newton's law as well. What am I missing here?

Comment by Nisan on Newton's law of cooling from first principles · 2024-01-17T20:01:08.689Z · LW · GW

constant volume

Ah, so I'm working at a level of generality that applies to all sorts of dynamical systems, including ones with no well-defined volume. As long as there's a conserved quantity E, we can define the entropy S(E) as the log of the number of states with that value of E. This is a univariate function of E, and temperature can be defined as the multiplicative inverse of the derivative dS/dE.
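
In symbols, writing Ω(E) for the number of states with a given value of E:

    S(E) = \log \Omega(E), \qquad \beta(E) = \frac{dS}{dE}, \qquad T(E) = \frac{1}{\beta(E)}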

if the proportionality depends on thermodynamic variables

By

dE₁/dt ∝ β₁ − β₂

I mean

dE₁/dt = k (β₁ − β₂)

for some constant k that doesn't vary with time. So it's incompatible with Newton's law.

This asymmetry in the temperature dependence would predict that one subsystem will heat faster than the other subsystem cools

Oh, the asymmetric formula relies on the assumption I made that subsystem 2 is so much bigger than subsystem 1 that its temperature doesn't change appreciably during the cooling process. I wasn't clear about that, sorry.

Comment by Nisan on Newton's law of cooling from first principles · 2024-01-17T08:28:22.304Z · LW · GW

Yeah, as Shankar says, this is only for conduction (and maybe convection?). The assumption about transition probabilities is abstractly saying there's a lot of contact between the subsystems. If two objects contact each other in a small surface area, this post doesn't apply and you'll need to model the heat flow with the heat equation. I suppose radiative cooling acts abstractly like a narrow contact region, only allowing photons through.
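
For reference, the heat equation in question, with α the thermal diffusivity:

    \frac{\partial T}{\partial t} = \alpha \nabla^2 T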

Comment by Nisan on Newton's law of cooling from first principles · 2024-01-17T08:17:55.361Z · LW · GW

I am suspicious of this "Lambert's law". Suppose the environment is at absolute zero -- nothing is moving at all. Then "Lambert's law" says that the rate of cooling should be infinite: our object should itself instantly drop to absolute zero once placed in an absolute-zero environment. Can that be right?

We're assuming the environment carries away excess heat instantly. In practice the immediate environment will warm up a bit and the cooling rate will become finite right away.

But in the ideal case, yeah, I think instant cooling makes sense. The environment's coldness is infinite!
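
In terms of the coldness-difference law from the post (with β = 1/T), the limit is immediate:

    \beta_{env} = \frac{1}{T_{env}} \to \infty \ \text{ as } \ T_{env} \to 0^{+},
    \qquad \text{so} \qquad
    \frac{dE_{obj}}{dt} \propto \beta_{obj} - \beta_{env} \to -\infty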

Comment by Nisan on Newton's law of cooling from first principles · 2024-01-17T08:07:13.742Z · LW · GW

Oh neat! Very interesting. I believe your argument is correct for head-on collisions. What about glancing blows, though?

Assume two rigid, spherical particles with the same mass and radius.

Pick a coordinate system (at rest) where the collision normal vector is aligned with the x-axis.

Then move the coordinate system along the x axis so that the particles have equal and opposite x-velocities. (The y-velocities will be whatever.) In this frame, the elastic collision will negate the x-velocities and leave the y-velocities untouched.

Back in the rest frame, this means that the collision swaps the x-velocities and keeps the y-velocities the same. Thus the energy transfer is half the difference of the squared x-velocities, (m/2)(v₁ₓ² − v₂ₓ²).

I'm not sure that's proportional to T₁ − T₂? The square of the x-velocity does increase with temperature, but I'm not sure it's linear. If there's a big temperature difference, the collisions are ~uniformly distributed on the cold particle's surface, but not on the hot particle's surface.
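
A quick numerical check of the glancing-collision claim (a sketch with made-up velocities; the collision normal plays the role of the x-axis above):

    import numpy as np

    # Elastic collision of two equal-mass discs: the velocity components along the
    # collision normal are exchanged, and the energy transferred from particle 1
    # to particle 2 equals (m/2) * (v1n^2 - v2n^2).
    rng = np.random.default_rng(0)
    m = 1.0
    v1 = rng.normal(size=2)           # velocity of particle 1
    v2 = rng.normal(size=2)           # velocity of particle 2
    n = rng.normal(size=2)
    n /= np.linalg.norm(n)            # unit collision normal

    v1n, v2n = v1 @ n, v2 @ n
    v1_post = v1 + (v2n - v1n) * n    # swap the normal components,
    v2_post = v2 + (v1n - v2n) * n    # keep the tangential components

    ke = lambda v: 0.5 * m * (v @ v)
    transfer = ke(v1) - ke(v1_post)   # energy lost by particle 1
    print(transfer, 0.5 * m * (v1n**2 - v2n**2))  # these two numbers agree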

Comment by Nisan on Nisan's Shortform · 2024-01-08T01:41:03.499Z · LW · GW

I'd love if anyone can point me to anywhere this cooling law (proportional to the difference of coldnesses) has been written up.

Also my assumptions about the dynamical system are kinda ad hoc. I'd like to know assumptions I ought to be using.

Comment by Nisan on Nisan's Shortform · 2024-01-08T01:38:08.150Z · LW · GW

We can derive Newton's law of cooling from first principles.

Consider an ergodic discrete-time dynamical system and group the microstates into macrostates according to some observable variable X. (X might be the temperature of a subsystem.)

Let's assume that if X = x, then in the next timestep X can be one of the values x − 1, x, or x + 1.

Let's make the further assumption that the transition probabilities for these three possibilities are in the same ratio as the numbers of microstates of the corresponding macrostates.

Then it turns out that the rate of change of X over time is proportional to dS/dX, where S(X) is the entropy, which is the logarithm of the number of microstates.

Now suppose our system consists of two interacting subsystems with energies E₁ and E₂. Total energy is conserved. How fast will energy flow from one system to the other? By the above lemma, dE₁/dt is proportional to β₁ − β₂.

Here β₁ = dS₁/dE₁ and β₂ = dS₂/dE₂ are the coldnesses of the subsystems. Coldness is the inverse of temperature, and is more fundamental than temperature.

Note that Newton's law of cooling says that the rate of heat transfer is proportional to T₁ − T₂. For a narrow temperature range this will approximate our result.
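
Here's a minimal numerical sketch of the coldness-difference law (the toy entropies Sᵢ = Nᵢ log Eᵢ are an assumption for illustration, not part of the derivation above):

    # Two toy subsystems with entropies S_i(E_i) = N_i * log(E_i), so that the
    # coldness is beta_i = dS_i/dE_i = N_i / E_i and the temperature is E_i / N_i.
    # Integrate dE1/dt = k * (beta1 - beta2): energy flows until the coldnesses match.
    N1, N2 = 10.0, 1000.0      # subsystem 2 is a large reservoir
    E1, E2 = 50.0, 1000.0      # initial temperatures: T1 = 5, T2 = 1
    k, dt = 5.0, 1e-3
    for step in range(100_000):
        beta1, beta2 = N1 / E1, N2 / E2
        dE1 = k * (beta1 - beta2) * dt
        E1 += dE1
        E2 -= dE1
        if step % 20_000 == 0:
            print(f"T1 = {E1 / N1:.3f}, T2 = {E2 / N2:.3f}")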

Comment by Nisan on What Helped Me - Kale, Blood, CPAP, X-tiamine, Methylphenidate · 2024-01-03T23:35:18.573Z · LW · GW

Wow, that's a lot of kale. Do you eat 500g every day? And 500g is the mass of the cooked, strained kale?

Comment by Nisan on You are probably underestimating how good self-love can be · 2023-12-16T19:34:17.433Z · LW · GW

What a beautiful illustration of how a Humanist's worldview differs from a Cousin's!

Comment by Nisan on Google Gemini Announced · 2023-12-10T17:35:13.073Z · LW · GW

I wonder why Gemini used RLHF instead of Direct Preference Optimization (DPO). DPO was written up 6 months ago; it's simpler and apparently more compute-efficient than RLHF.

  • Is the Gemini org structure so sclerotic that it couldn't switch to a more efficient training algorithm partway through a project?
  • Is DPO inferior to RLHF in some way? Lower quality, less efficient, more sensitive to hyperparameters?
  • Maybe they did use DPO, even though they claimed it was RLHF in their technical report?
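
For reference, here is roughly what the DPO objective looks like (a sketch after Rafailov et al. 2023; the tensor names are placeholders). Part of the "simpler" claim is that this is a single supervised-style loss over preference pairs, with no reward model or PPO loop:

    import torch
    import torch.nn.functional as F

    # logp_* are log-probabilities of the chosen/rejected responses (summed over
    # tokens) under the policy being trained; ref_logp_* are the same quantities
    # under the frozen reference model. beta controls how far the policy may drift.
    def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
        chosen_margin = logp_chosen - ref_logp_chosen
        rejected_margin = logp_rejected - ref_logp_rejected
        return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

    # Tiny example with fake numbers:
    print(dpo_loss(torch.tensor([-5.0]), torch.tensor([-7.0]),
                   torch.tensor([-6.0]), torch.tensor([-6.5])))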
Comment by Nisan on Sum-threshold attacks · 2023-10-22T18:15:11.938Z · LW · GW

Another example is the obfuscated arguments problem. As a toy example:

For every cubic centimeter in Texas, your missing earring is not in the cubic centimeter.

Therefore, your missing earring is not in Texas.

Even if the conclusion of the argument is a lie, each premise is spot-checkable and most likely true. The lie has been split up into many statements each of which is only slightly a lie.

Comment by Nisan on My take on higher-order game theory · 2023-10-21T19:44:35.436Z · LW · GW

Thanks! For convex sets of distributions: If you weaken the definition of fixed point to , then the set has a least element which really is a least fixed point.

Comment by Nisan on Nisan's Shortform · 2023-10-21T02:25:13.586Z · LW · GW

Hyperbolic growth

The differential equation dy/dt = y^(1+s), for positive y and s, has solution

y = (t_s − t)^(−1/s)

(after changing the units). The Roodman report argues that our economy follows this hyperbolic growth trend, rather than an exponential one.

While exponential growth has a single parameter — the growth rate or interest rate — hyperbolic growth has two parameters: t_s is the time of the singularity, and s is the "hardness" of the takeoff.

A value of s close to zero gives a "soft" takeoff where the derivative gets high well in advance of the singularity. A large value of s gives a "hard" takeoff, where explosive growth comes all at once right at the singularity. (Paul Christiano calls these "slow" and "fast" takeoff.)

Paul defines "slow takeoff" as "There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles." This corresponds to s ≤ 2. (At s = 2, the first four-year doubling starts 16/3 ≈ 5.3 years before the singularity and the first one-year doubling starts 4/3 ≈ 1.3 years before the singularity.)

So the simple hyperbola with s = 1 counts as "slow takeoff". (This is the "naive model" mentioned in footnote 31 of Intelligence Explosion Microeconomics.)

Roodman's estimates of historical s are closer to 1/2 (see Table 3).
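
A quick check of the doubling-time arithmetic (a sketch; for y = (t_s − t)^(−1/s), the doubling time at τ years before the singularity works out to τ(1 − 2^(−s))):

    # For y(t) = (t_s - t)**(-1/s): at remaining time tau, the output doubles when
    # the remaining time shrinks to tau * 2**(-s), so the doubling time is
    # tau * (1 - 2**(-s)).
    def doubling_interval_start(s, length):
        """Remaining time (years before the singularity) at which the doubling time equals `length`."""
        return length / (1 - 2 ** (-s))

    for s in [0.5, 1.0, 2.0, 4.0]:
        t4 = doubling_interval_start(s, 4)
        t1 = doubling_interval_start(s, 1)
        slow = t4 - 4 >= t1  # the 4-year doubling completes before the 1-year doubling starts
        print(f"s = {s}: 4-year doubling starts {t4:.2f} years out, "
              f"1-year doubling starts {t1:.2f} years out, slow takeoff: {slow}")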

Comment by Nisan on Features of Emacs that I only recently discovered · 2023-09-25T22:13:36.526Z · LW · GW

Ah, beginning-of-line-text is nice. It skips over the initial # or // of comments and the initial * of Org headings. I've now bound it to M-m.

Comment by Nisan on I'm consistently overwhelmed by basic obligations. Are there any paradigm shifts or other rationality-based tips that would be helpful? · 2023-07-21T21:29:59.255Z · LW · GW

Consider seeing a doctor about the panicky and stressed feelings. They may test you for hormone imbalances or prescribe you antianxiety medication.

Comment by Nisan on Nisan's Shortform · 2023-07-17T06:56:25.279Z · LW · GW

Conception is a startup trying to do in vitro gametogenesis for humans!

Comment by Nisan on Open Thread - July 2023 · 2023-07-06T19:29:27.679Z · LW · GW

A long reflection requires new institutions, and creating new institutions requires individual agency. Right? I have trouble imagining a long reflection actually happening in a world with the individual agency level dialed down.

A separate point that's perhaps in line with your thinking: I feel better about cultivating agency in people who are intelligent and wise rather than people who are not. When I was working on agency-cultivating projects, we targeted those kinds of people.

Comment by Nisan on We Are Less Wrong than E. T. Jaynes on Loss Functions in Human Society · 2023-06-05T20:04:53.981Z · LW · GW

What's more, even selfish agents with de dicto identical utility functions can trade: If I have two right shoes and you have two left shoes, we'd trade one shoe for another because of decreasing marginal utility.
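
With toy numbers (say utility = number of complete pairs, so a second same-side shoe has zero marginal utility):

    # Illustrative numbers only: utility = number of complete pairs of shoes.
    pairs = lambda left, right: min(left, right)

    # Before: I hold two right shoes, you hold two left shoes.
    print(pairs(0, 2), pairs(2, 0))   # 0 0
    # After trading one of my right shoes for one of your left shoes:
    print(pairs(1, 1), pairs(1, 1))   # 1 1  -- the trade benefits both of us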

Comment by Nisan on Nisan's Shortform · 2023-05-11T05:27:19.608Z · LW · GW

Recent interviews with Eliezer:

Comment by Nisan on Looking for a post I read if anyone recognizes it · 2023-05-10T04:28:21.923Z · LW · GW

The bug patches / epiphanies / tortoises / wizardry square from Small, Consistent Effort: Uncharted Waters In the Art of Rationality

Comment by Nisan on Predictable updating about AI risk · 2023-05-09T03:43:15.440Z · LW · GW

The nanobots, from the bloodstream, in the parlor, Professor Plum.

You could have written Colonel Mustard!

Comment by Nisan on AI #10: Code Interpreter and Geoff Hinton · 2023-05-04T14:18:41.873Z · LW · GW
Comment by Nisan on Features of Emacs that I only recently discovered · 2023-04-17T00:57:47.149Z · LW · GW

I did not know about M-m, thanks!

Comment by Nisan on A Tale of Two Intelligences: xRisk, AI, and My Relationship · 2023-04-11T01:17:09.683Z · LW · GW
  • Figure out why it's important to you that your romantic partner agree with you on this. Does your relationship require agreement on all factual questions? Are you contemplating any big life changes because of x-risk that she won't be on board with?

  • Would you be happy if your partner fully understood your worries but didn't share them? If so, maybe focus on sharing your thoughts, feelings, and uncertainties around x-risk in addition to your reasoning.

Comment by Nisan on [New LW Feature] "Debates" · 2023-04-01T18:09:41.862Z · LW · GW

I have to click twice on the Reply link, which is unintuitive. (Safari on iOS.)

Comment by Nisan on [New LW Feature] "Debates" · 2023-04-01T18:08:05.829Z · LW · GW

I tried a couple other debates with GPT-4, and they both ended up at "A, nevertheless B" vs. "B, nevertheless A".

Comment by Nisan on Ethical AI investments? · 2023-03-25T17:35:16.726Z · LW · GW

I expressed some disagreement in my comment, but I didn't disagree-vote.

Comment by Nisan on Ethical AI investments? · 2023-03-25T00:17:40.208Z · LW · GW

I like your upper bound. The way I'd put it is: If you buy $1 of Microsoft stock, the most impact that can have is if Microsoft sells it to you, in which case Microsoft gets one more dollar to invest in AI today.

And Microsoft won't spend the whole dollar on AI. Although they'd plausibly spend most of a marginal dollar on AI, even if they don't spend most of the average dollar on AI.

I'm not sure what to make of the fact that Microsoft is buying back stock. I'd guess it doesn't make a difference either way? Perhaps if they were going to buy back $X worth of shares but then you offer to buy $1 of shares from them at market price, they'd buy back $X and sell you $1 for a net buyback of $(X-1) and you still have an impact of $1.

I like the idea that buying stock only has a temporary effect on price. If the stock price is determined by institutional investors that take positions on the price, then maybe when you buy $1 of stock, these investors correct the price immediately, and the overall effect is to give those investors $1, which is ethically neutral? James_Miller makes this point here. But I'd like to have a better understanding of where the boundary lies between tiny investors who have zero impact and big investors who have all the impact.

Or maybe the effect of buying $1 of stock is giving $1 to early Microsoft investors and employees? The ethics of that are debatable since the early investors didn't know they were funding an AGI lab.

Comment by Nisan on Zach Stein-Perlman's Shortform · 2023-03-24T23:31:52.214Z · LW · GW

That could be, but also maybe there won't be a period of increased strategic clarity. Especially if the emergence of new capabilities with scale remains unpredictable, or if progress depends on finding new insights.

I can't think of many games that don't have an endgame. These examples don't seem that fun:

  • A single round of musical chairs.
  • A tabletop game that follows an unpredictable, structureless storyline.
Comment by Nisan on Ethical AI investments? · 2023-03-17T22:15:00.844Z · LW · GW

I don't think this is a good argument. A low probability of impact does not imply the expected impact is negligible. If you have an argument that the expected impact is negligible, I'd be happy to see it.

Comment by Nisan on Podcast with Oli Habryka on LessWrong / Lightcone Infrastructure · 2023-02-06T09:32:35.251Z · LW · GW

Is there a transcript available?

Comment by Nisan on Transcript of Sam Altman's interview touching on AI safety · 2023-01-21T04:40:58.845Z · LW · GW

We had the model for ChatGPT in the API for I don't know 10 months or something before we made ChatGPT. And I sort of thought someone was going to just build it or whatever and that enough people had played around with it.

 

I assume he's talking about text-davinci-002, a GPT 3.5 model supervised-finetuned on InstructGPT data. And he was expecting someone to finetune it on dialog data with OpenAI's API. I wonder how that would have compared to ChatGPT, which was finetuned with RL and can't be replicated through the API.

Comment by Nisan on Master plan spec: needs audit (logic and cooperative AI) · 2022-12-05T10:18:53.108Z · LW · GW

I agree that institutional inertia is a problem, and more generally there's the problem of getting principals to do the thing. But it's more dignified to make alignment/cooperation technology available than not to make it.

Comment by Nisan on Master plan spec: needs audit (logic and cooperative AI) · 2022-12-05T10:15:51.018Z · LW · GW

I'm a bit more optimistic about loopholes because I feel like if agents are determined to build trust, they can find a way.

Comment by Nisan on Master plan spec: needs audit (logic and cooperative AI) · 2022-12-05T10:07:53.960Z · LW · GW

I agree those nice-to-haves would be nice to have. One could probably think of more.

I have basically no idea how to make these happen, so I'm not opinionated on what we should do to achieve these goals. We need some combination of basic research, building tools people find useful, and stuff in-between.

Comment by Nisan on Lessons learned from talking to >100 academics about AI safety · 2022-10-11T02:32:59.862Z · LW · GW

Your poster talks about "catastrophic outcomes" from "more-powerful-than-human" AI. Does that not count as alarmism and x-risk? This isn't meant to be a gotcha; I just want to know what counts as too alarmist for you.

Comment by Nisan on Do bamboos set themselves on fire? · 2022-09-20T15:34:36.973Z · LW · GW

Setting aside tgb's comment, shouldn't it be ? The formula in the post would have positive growth even if , which doesn't seem right.

Comment by Nisan on A Request for Open Problems · 2022-08-24T07:46:28.008Z · LW · GW

It only took 7 years to make substantial progress on this problem: Logical Induction by Garrabrant et al.

Comment by Nisan on What should you change in response to an "emergency"? And AI risk · 2022-07-21T02:36:35.746Z · LW · GW

Taking on a 60-hour/week job to see if you burn out seems unwise to me. Some better plans:

  • Try lots of jobs on lots of teams, to see if there is a job you can work 60 hours/week at.
  • Pay attention to what features of your job are energizing vs. costly. Notice any bad habits that might cause burnout.
  • Become more productive per hour.
Comment by Nisan on Failing to fix a dangerous intersection · 2022-07-01T23:51:56.917Z · LW · GW

Hi Bob, I noticed you have some boxes of stuff stacked up in the laundry room. I can't open the washing machine door all the way because the boxes are in the way. Could you please move them somewhere else?

Dear Alice,

Some of the boxes in that stack belong to my partner Carol, and I'd have to ask her if she's okay with them being moved.

In theory I could ask Carol if she's all right with the idea of moving the boxes. If Carol were to agree to the idea, I would need to find a new place for the boxes, then develop a plan for how to actually move the boxes from one place to another, then get Carol to approve of the plan, then find someone to help me with the bigger boxes, and finally implement the plan.

Though it seems simple enough as an idea, no one would be able to get in or out of the laundry room while I'm maneuvering boxes in there. I would have to coordinate with anyone who wants to do laundry that day to make sure we don't get in each other's way.

Overall, it would be a significant resource-intensive task for me to make and execute such a plan.

I regret I'm unable to proceed any further with your request at this time, as it currently doesn't fit into my to-do list for this week.

I do keep a "someday-maybe" list of projects I can draw from should I ever have some free time, for example if my job unexpectedly gives everyone the day off for some reason.

I already have "empty the lint trap" on this wish list, and will add your suggestion about moving the boxes to the list.

Unfortunately, this is all I can do at this time.

Comment by Nisan on Will working here advance AGI? Help us not destroy the world! · 2022-05-30T01:50:03.800Z · LW · GW

Thanks for sharing your reasoning. For what it's worth, I worked on OpenAI's alignment team for two years and think they do good work :) I can't speak objectively, but I'd be happy to see talented people continue to join their team.

I think they're reducing AI x-risk in expectation because of the alignment research they publish (1 2 3 4). If anyone thinks that research or that kind of research is bad for the world, I'm happy to discuss.

Comment by Nisan on Will working here advance AGI? Help us not destroy the world! · 2022-05-29T20:12:00.234Z · LW · GW

Why do you think the alignment team at OpenAI is contributing on net to AI danger?

Comment by Nisan on Why Go is a Better Game than Chess · 2022-04-18T07:03:44.357Z · LW · GW

Also, chess usually ends in a draw, which is lame. Go rarely if ever ends in a draw.

Comment by Nisan on Shah and Yudkowsky on alignment failures · 2022-03-01T01:24:59.015Z · LW · GW

CFAR used to have an awesome class called "Be specific!" that was mostly about concreteness. Exercises included:

  • Rationalist taboo
  • A group version of rationalist taboo where an instructor holds an everyday object and asks the class to describe it in concrete terms.
  • The Monday-Tuesday game
  • A role-playing game where the instructor plays a management consultant whose advice is impressive-sounding but contentless bullshit, and where the class has to force the consultant to be specific and concrete enough to be either wrong or trivial.
  • People were encouraged to make a habit of saying "can you give an example?" in everyday conversation. I practiced it a lot.

IIRC, Eliezer taught the class in May 2012? He talks about the relevant skills here and here. And then I ran it a few times, and then CFAR dropped it; I don't remember why.

Comment by Nisan on Nisan's Shortform · 2022-02-06T11:50:05.654Z · LW · GW

Agents who model each other can be modeled as programs with access to reflective oracles. I used to think the agents have to use the same oracle. But actually the agents can use different oracles, as long as each oracle can predict all the other oracles. This feels more realistic somehow.

Comment by Nisan on Jimrandomh's Shortform · 2022-01-24T01:37:19.270Z · LW · GW

Ok, I think in the OP you were using the word "secrecy" to refer to a narrower concept than I realized. If I understand correctly, if Alice tells Carol "please don't tell Bob", and then five years later when Alice is dead or definitely no longer interested or it's otherwise clear that there won't be negative consequences, Carol tells Bob, and Alice finds out and doesn't feel betrayed — then you wouldn't call that a "secret". I guess for it to be a "secret" Carol would have to promise to carry it to her grave, even if circumstances changed, or something.

In that case I don't have strong opinions about the OP.