Posts

Linkpost: Memorandum on Advancing the United States’ Leadership in Artificial Intelligence 2024-10-25T04:37:00.828Z
[Retracted] Newton's law of cooling from first principles 2024-01-16T04:21:11.094Z
Inflection AI: New startup related to language models 2022-04-02T05:35:24.759Z
My take on higher-order game theory 2021-11-30T05:56:00.990Z
Nisan's Shortform 2021-09-12T06:05:04.965Z
April 15, 2040 2021-05-04T21:18:08.912Z
What is a VNM stable set, really? 2021-01-25T05:43:59.496Z
Why you should minimax in two-player zero-sum games 2020-05-17T20:48:03.770Z
Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern) 2020-05-11T09:47:00.773Z
Conflict vs. mistake in non-zero-sum games 2020-04-05T22:22:41.374Z
Beliefs at different timescales 2018-11-04T20:10:59.223Z
Counterfactuals and reflective oracles 2018-09-05T08:54:06.303Z
Counterfactuals, thick and thin 2018-07-31T15:43:59.187Z
An environment for studying counterfactuals 2018-07-11T00:14:49.756Z
Logical counterfactuals and differential privacy 2018-02-04T00:17:43.000Z
Oracle machines for automated philosophy 2015-02-17T15:10:04.000Z
Meetup : Berkeley: Beta-testing at CFAR 2014-03-19T05:32:26.521Z
Meetup : Berkeley: Implementation Intentions 2014-02-27T07:06:29.784Z
Meetup : Berkeley: Ask vs. Guess (vs. Tell) Culture 2014-02-19T20:16:30.017Z
Meetup : Berkeley: The Twelve Virtues 2014-02-12T19:56:53.045Z
Meetup : Berkeley: Talk on communication 2014-01-24T03:57:50.244Z
Meetup : Berkeley: Weekly goals 2014-01-22T18:16:38.107Z
Meetup : Berkeley meetup: 5-minute exercises 2014-01-15T21:02:26.223Z
Meetup : Meetup at CFAR, Wednesday: Nutritionally complete bread 2014-01-07T10:25:33.016Z
Meetup : Berkeley: Hypothetical Apostasy 2013-06-12T17:53:40.651Z
Meetup : Berkeley: Board games 2013-06-04T16:21:17.574Z
Meetup : Berkeley: The Motivation Hacker by Nick Winter 2013-05-28T06:02:07.554Z
Meetup : Berkeley: To-do lists and other systems 2013-05-22T01:09:51.917Z
Meetup : Berkeley: Munchkinism 2013-05-14T04:25:21.643Z
Meetup : Berkeley: Information theory and the art of conversation 2013-05-05T22:35:00.823Z
Meetup : Berkeley: Dungeons & Discourse 2013-03-03T06:13:05.399Z
Meetup : Berkeley: Board games 2013-01-29T03:09:23.841Z
Meetup : Berkeley: CFAR focus group 2013-01-23T02:06:35.830Z
A fungibility theorem 2013-01-12T09:27:25.637Z
Proof of fungibility theorem 2013-01-12T09:26:09.484Z
Meetup : Berkeley meetup: Board games! 2013-01-08T20:40:42.392Z
Meetup : Berkeley: How Robot Cars Are Near 2012-12-17T19:46:33.980Z
Meetup : Berkeley: Boardgames 2012-12-05T18:28:09.814Z
Meetup : Berkeley meetup: Hermeneutics! 2012-11-26T05:40:29.186Z
Meetup : Berkeley meetup: Deliberate performance 2012-11-13T23:58:50.742Z
Meetup : Berkeley meetup: Success stories 2012-10-23T22:10:43.964Z
Meetup : Different location for Berkeley meetup 2012-10-17T17:19:56.746Z
[Link] "Fewer than X% of Americans know Y" 2012-10-10T16:59:38.114Z
Meetup : Different location: Berkeley meetup 2012-10-03T08:26:09.910Z
Meetup : Pre-Singularity Summit Overcoming Bias / Less Wrong Meetup Party 2012-09-24T14:46:05.475Z
Meetup : Vienna meetup 2012-09-22T13:14:23.668Z
Meetup report: How harmful is cannabis, and will you change your habits? 2012-09-09T04:50:10.943Z
Meetup : Berkeley meetup: Cannabis, Decision-Making, And A Chance To Change Your Mind 2012-08-29T03:50:23.867Z
Meetup : Berkeley meetup: Operant conditioning game 2012-08-21T15:07:36.431Z
Meetup : Berkeley meetup: Discussion about startups 2012-08-14T17:09:10.149Z

Comments

Comment by Nisan on OpenAI Email Archives (from Musk v. Altman and OpenAI blog) · 2024-12-14T03:37:34.622Z · LW · GW

Exhibit 13 is a sort of Oppenheimer-meets-Truman email thread in which Ilya Sutskever says:

Yesterday while we were considering making our final commitment given the non-solicit agreement, we realized we'd made a mistake.

Today, OpenAI republished that email (along with others) on its website (archived). But the above sentence is different in OpenAI's version of the email:

Yesterday while we were considering making our final commitment (even the non-solicit agreement), we realized we’d made a mistake.

I wonder which sentence is the one Ilya actually wrote.

Comment by Nisan on Habryka's Shortform Feed · 2024-11-16T10:57:45.648Z · LW · GW

check out exhibit 13...

Comment by Nisan on Linkpost: Memorandum on Advancing the United States’ Leadership in Artificial Intelligence · 2024-10-25T04:48:30.555Z · LW · GW

Section 3.3(f)(iii):

Within 120 days of the date of this memorandum, DOE, acting primarily through the National Nuclear Security Administration (NNSA) and in close coordination with AISI and NSA, shall seek to develop the capability to perform rapid systematic testing of AI models’ capacity to generate or exacerbate nuclear and radiological risks. This initiative shall involve the development and maintenance of infrastructure capable of running classified and unclassified tests, including using restricted data and relevant classified threat information. This initiative shall also feature the creation and regular updating of automated evaluations, the development of an interface for enabling human-led red-teaming, and the establishment of technical and legal tooling necessary for facilitating the rapid and secure transfer of United States Government, open-weight, and proprietary models to these facilities.

It sounds like the plan is for AI labs to transmit models to government datacenters for testing. I anticipate at least one government agency will quietly keep a copy for internal use.

Comment by Nisan on 2024 Petrov Day Retrospective · 2024-09-29T06:56:09.991Z · LW · GW

So was the launch code really 000000?

Comment by Nisan on Nisan's Shortform · 2024-09-28T02:31:55.747Z · LW · GW

A fun lesson on the scientific method: A few days ago, OpenAI announced a rollout of "Advanced Voice Mode". Confused by the messaging, I ran a test to see if their phone app could respond to an interruption:

Nisan: Can you count to 10, and then I'll interrupt you in the middle and see how that works.

ChatGPT 4o: Sure! I'll start counting now: One, two, three, four, fiv—

Nisan: Stop. Interruption.

ChatGPT 4o: How'd the interruption go?

Nisan: Oh, that went great. You stopped at five.

ChatGPT 4o: Awesome! Glad it worked smoothly. Do you want to try another test or experiment?

Nisan: Yeah, let's do another interruption test. This time, please count to 20.

ChatGPT 4o: Got it! Starting now: One, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirt—

Nisan: Stop. Interruption.

ChatGPT 4o: How was that?

Nisan: Wow. [Thinks: It seemed to stop slightly before I interrupted it!]

ChatGPT 4o: Glad that impressed you! Want to try something else?

Nisan: Uh, hold on.

ChatGPT 4o: Take your time!

Nisan: [Generates a random number.] Okay, please count to 30, and I'll interrupt at a random point.

ChatGPT 4o: Alright, starting now: One, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, fourteen, fifteen—

[Nisan interrupts at some point; I forgot exactly where.]

ChatGPT 4o: —sixteen, seventeen, eighteen, nineteen, twenty, twenty-one, twenty-two, twenty-three, twenty-four, twenty-five, twenty-six, twenty-seven, twenty-eight, twenty-nine, thirty. Looks like I made it to the end! Did you mean to interrupt somewhere?

Nisan: Yeah, I did. It must not have worked. Let's try that again. [Generates a random number.] Please count up to 30.

ChatGPT 4o: No problem, let's try again! Starting now: One, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, fourteen, fifteen—

[Nisan interrupts at some point.]

ChatGPT 4o: —sixteen, seventeen, eighteen, nineteen, twenty, twenty-one, twenty-two, twenty-three, twenty-four, twenty-fiv— Did it work this time?

Nisan: Uh, no.

I believe ChatGPT can't hear me when it's talking. But it correctly guessed when I would interrupt on the first two trials!

Comment by Nisan on Nisan's Shortform · 2024-09-04T21:11:03.720Z · LW · GW

The coin flip is a brilliant piece of technology for generating trustworthy random noise:

  • Making a two-headed coin is forgery, which is a crime.
  • Such trick coins can be foiled anyway by calling the toss in the air.

Thus when teaching the concept of a Bernoulli variable, we use the example of coin flips, because everyone already knows what they are. This is unfortunate because the very next concept we introduce is a biased Bernoulli variable, which corresponds to a "weighted" coin. But weighted coins don't exist! If it were practical to manufacture trick coins with arbitrary biases, coin flipping wouldn't be as popular as it is.
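
A minimal sketch of the two concepts in code (the 0.7 bias is an arbitrary example value):

```python
import random

def bernoulli(p: float) -> bool:
    """Sample a Bernoulli variable with success probability p."""
    return random.random() < p

fair = bernoulli(0.5)    # the honest coin everyone already knows
biased = bernoulli(0.7)  # the "weighted coin" that's trivial in software but not in metal
```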

Comment by Nisan on You don't know how bad most things are nor precisely how they're bad. · 2024-08-23T05:14:29.654Z · LW · GW

If there were a consensus among the 8 as to which tuning is better, that would be significant, right? Since the chance of that is 1/128 if they can't tell the difference. You can even get p < 0.05 with one dissenter if you use a one-tailed test (which is maybe dubious). Of course we don't know what the data look like, so I'm just being pedantic here.
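
To spell out the arithmetic (a sketch; the 8-listener consensus scenario is the hypothetical one above):

```python
from math import comb

n = 8  # listeners

# Null hypothesis: no one can tell the difference, so each vote is a fair coin.
total = 2 ** n

# Unanimous consensus on either tuning:
print(2 * comb(n, n) / total)             # 2/256 = 1/128 ~ 0.008

# One dissenter, one-tailed: P(7 or more of 8 prefer the favored tuning)
print((comb(n, 7) + comb(n, 8)) / total)  # 9/256 ~ 0.035 < 0.05
```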

Comment by Nisan on You don't know how bad most things are nor precisely how they're bad. · 2024-08-04T17:18:15.426Z · LW · GW

Progress towards a robotic piano tuner: The Entropy Piano Tuner attempts to accommodate "variations in string thickness, stretching, corrosion, dents, the harp flexing", etc. by minimizing the entropy of the power spectrum. Using it should be better than mindlessly tuning to a digital guitar tuner.
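
I haven't read the tuner's source, so this is only a sketch of the idea as I understand it: treat the normalized power spectrum as a probability distribution and compute its Shannon entropy.

```python
import numpy as np

def spectral_entropy(samples: np.ndarray) -> float:
    """Shannon entropy (bits) of the normalized power spectrum of a signal."""
    power = np.abs(np.fft.rfft(samples)) ** 2
    p = power / power.sum()  # treat the spectrum as a distribution
    p = p[p > 0]             # drop zero bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# The tuner's idea, roughly: among candidate tunings, prefer the one whose
# recorded spectrum has the lowest entropy (the cleanest set of partials).
```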

According to the website, professional pianists still prefer a human-tuned piano, but no one else can tell the difference. And the general opinion on piano tuner message boards seems to be that it's not quite good enough to replace a professional tuner's judgment.

Comment by Nisan on [Retracted] Newton's law of cooling from first principles · 2024-07-20T00:42:58.428Z · LW · GW

This post is wrong. Thanks to SymplecticMan for the thought experiment demonstrating that a mixture of ideal gases follows a cooling law different from the one I proposed. (It's also different from Newton's law.)

I made a pretty but unjustified assumption — that a cooling baking sheet can be modeled as a dynamical system where each possible transition is equally likely and in which heat is transferred in fixed quanta, one at a time. This contradicted Newton's law, and I got excited when I realized that Newton's law was merely a first-order approximation.

My mistake was not noticing that Newton's law is a first-order approximation to any model of cooling where heat transfer increases with temperature difference, so I had no particular reason to favor my model over any other.

In penance I have acquired a copy of Non-Equilibrium Thermodynamics by de Groot and Mazur, with the intention of eventually reading it.

Comment by Nisan on Opinions on Eureka Labs · 2024-07-17T00:50:48.390Z · LW · GW

This is the perfect time to start an AI + education project. AI today is not quite reliable enough to be a trustworthy teacher, and in the near future generic AI assistants will likely be smart enough to teach anything well (if they want to).

In the meantime, Eureka Labs faces an interesting alignment problem: Can they ensure that their AI teachers teach only true things? It will be tempting to make teachers that only seem to teach well. I hope they figure out how to navigate that!

Comment by Nisan on Nisan's Shortform · 2024-07-11T01:04:43.411Z · LW · GW

On 2018-04-09, OpenAI said[1]:

OpenAI’s mission is to ensure that artificial general intelligence (AGI) [...] benefits all of humanity.

In contrast, in 2023, OpenAI said[2]:

[...] OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity.


  1. Archived ↩︎

  2. This archived snapshot is from 2023-05-17, but the document didn't get much attention until November that year. ↩︎

Comment by Nisan on Evaporation of improvements · 2024-06-20T21:19:28.489Z · LW · GW

Another example is risk compensation: You make an activity safer (yay) and participants compensate by taking more risks (oh no).

Comment by Nisan on How was Less Online for you? · 2024-06-05T04:42:34.775Z · LW · GW

Interesting, it felt less messy to me than, say, rationalist-adjacent research retreats.

lsusr says that as a result of his spiritual journey, "now if there is so much as a cardboard box on my kitchen counter, it bothers me". Has your spiritual practice changed your tolerance of clutter?

Comment by Nisan on How likely is it that AI will torture us until the end of time? · 2024-05-31T19:45:01.754Z · LW · GW

In other words, the zero-information oblivion that produced you once can produce you again, maybe in a different form.

Huh, that's Epicurus's argument against fearing death. But while Epicurus assumed there is no afterlife, you're using it to argue there is one!

Comment by Nisan on keltan's Shortform · 2024-05-29T21:06:01.836Z · LW · GW

Re: safety, it depends on exactly where you are, your skill in assessing strangers' intentions from a distance, and probably the way you carry yourself.

Speaking of which, I'd be interested in playing some improv games with you at less.online, if you want to do that!

Comment by Nisan on Stephen Fowler's Shortform · 2024-05-21T00:57:51.883Z · LW · GW

I'd like to know what Holden did while serving on the board, and what OpenAI would have done if he hadn't joined. That's crucial for assessing the grant's impact.

But since board meetings are private, this will remain unknown for a long time. Unfortunately, the best we can do is speculate.

Comment by Nisan on Nisan's Shortform · 2024-05-14T05:03:27.941Z · LW · GW

Of course, Karpathy's post could be in the multimodal training data.

Comment by Nisan on Nisan's Shortform · 2024-05-14T05:01:06.553Z · LW · GW

12 years ago, in The state of Computer Vision and AI: we are really, really far away, Andrej Karpathy wrote:

The picture above is funny.

But for me it is also one of those examples that make me sad about the outlook for AI and for Computer Vision. What would it take for a computer to understand this image as you or I do? [...]

In any case, we are very, very far and this depresses me. What is the way forward? :(

I just asked gpt-4o what's going on in the picture, and it understood most of it:

In this image, a group of men in business attire are seen in a locker room or a similar setting. The focus is on two men, where the taller man is standing on a scale. The shorter man, who appears to be playfully pressing down on the taller man's shoulders to increase his weight on the scale, is creating a humorous situation. Both men and those observing in the background are smiling or laughing, indicating that they are enjoying the lighthearted moment. The man pressing down seems to be taking part in a playful prank or joke, adding a sense of camaraderie and fun to the scene.

Comment by Nisan on Should I Finish My Bachelor's Degree? · 2024-05-12T17:16:30.728Z · LW · GW

That does look like a rough commute, the kind that can use up the mental energy you want to spend on learning. One thing you could consider is staying in a hotel overnight near your school sometimes.

Also, consider wearing ear protection on the Transbay Tube. I wish I had done that when I commuted that way for a year.

Comment by Nisan on Use the Try Harder, Luke · 2024-04-25T01:51:19.462Z · LW · GW

Comment by Nisan on Transformers Represent Belief State Geometry in their Residual Stream · 2024-04-18T17:05:57.156Z · LW · GW

I suppose if you had more hidden states than observables, you could distinguish hidden-state prediction from next-token prediction by the dimension of the fractal.

Comment by Nisan on Transformers Represent Belief State Geometry in their Residual Stream · 2024-04-18T16:07:03.871Z · LW · GW

If I understand correctly, the next-token prediction of Mess3 is related to the current-state prediction by a nonsingular linear transformation. So a linear probe showing "the meta-structure of an observer's belief updates over the hidden states of the generating structure" is equivalent to one showing "the structure of the next-token predictions", no?

Comment by Nisan on The Waluigi Effect (mega-post) · 2024-04-03T17:43:57.961Z · LW · GW

The subject of this post appears in the "Did you know..." section of Wikipedia's front page (archived) right now.

Comment by Nisan on Modern Transformers are AGI, and Human-Level · 2024-03-27T06:49:50.983Z · LW · GW

I'm saying "transformers" every time I am tempted to write "LLMs" because many modern LLMs also do image processing, so the term "LLM" is not quite right.

"Transformer" isn't quite right either, because you can train a transformer on a narrow task. How about "foundation model": "models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks".

Comment by Nisan on Modern Transformers are AGI, and Human-Level · 2024-03-27T06:42:53.033Z · LW · GW

I agree 100%. It would be interesting to explore how the term "AGI" has evolved, maybe starting with Goertzel and Pennachin 2007 who define it as:

a software program that can solve a variety of complex problems in a variety of different domains, and that controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions

On the other hand, Stuart Russell testified that AGI means

machines that match or exceed human capabilities in every relevant dimension

so the experts seem to disagree. (Then again, Russell and Norvig's textbook cites Goertzel and Pennachin 2007 when mentioning AGI. Confusing.)

In any case, I think it's right to say that today's best language models are AGIs for any of these reasons:

  • They're not narrow AIs.
  • They satisfy the important parts of Goertzel and Pennachin's definition.
  • The tasks they can perform are not limited to a "bounded" domain.

In fact, GPT-2 is an AGI.

Comment by Nisan on The Worst Form Of Government (Except For Everything Else We've Tried) · 2024-03-18T03:12:35.059Z · LW · GW

Maybe the right word for this would be corporatism.

Comment by Nisan on Simple versus Short: Higher-order degeneracy and error-correction · 2024-03-11T23:43:19.022Z · LW · GW

I'm surprised to see an application of the Banach fixed-point theorem as an example of something that's too implicit from the perspective of a computer scientist. After all, real quantities can only be represented in a computer as a sequence of approximations — and that's exactly what the theorem provides.

I would have expected you to use, say, the Brouwer fixed-point theorem instead, because Brouwer fixed points can't be computed to arbitrary precision in general.
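For instance, the Banach theorem is already an algorithm (a sketch; cos is a contraction on [0, 1] with constant q = sin(1) < 1):

```python
from math import cos, sin

def banach_fixed_point(f, x0, q, tol=1e-12):
    """Iterate a contraction f with constant q < 1 until the a posteriori
    error bound q/(1-q) * |x_next - x| guarantees tol accuracy."""
    x = x0
    while True:
        x_next = f(x)
        if q / (1 - q) * abs(x_next - x) < tol:
            return x_next
        x = x_next

print(banach_fixed_point(cos, 1.0, sin(1)))  # ~0.739085, the Dottie number
```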

(I come from a mathematical background, fwiw.)

Comment by Nisan on Prediction market: Will John Wentworth's Gears of Aging series hold up in 2033? · 2024-02-22T21:14:45.184Z · LW · GW

For reference, here's the Gears of Aging sequence.

Comment by Nisan on Importing a Python File by Name · 2024-02-02T19:11:01.154Z · LW · GW

This article saved me some time just now. Thanks!

Comment by Nisan on [Retracted] Newton's law of cooling from first principles · 2024-01-18T00:20:00.801Z · LW · GW

Scaling temperature up by a factor of 4 scales up all the velocities by a factor of 2 [...] slowing down the playback of a video has the effect of increasing the time between collisions [...]

Oh, good point! But hm, scaling up temperature by 4x should increase velocities by 2x and energy transfer per collision by 4x. And it should increase the rate of collisions per time by 2x. So the rate of energy transfer per time should increase 8x. But that violates Newton's law as well. What am I missing here?

Comment by Nisan on [Retracted] Newton's law of cooling from first principles · 2024-01-17T20:01:08.689Z · LW · GW

constant volume

Ah, so I'm working at a level of generality that applies to all sorts of dynamical systems, including ones with no well-defined volume. As long as there's a conserved quantity $E$, we can define the entropy $S(E)$ as the log of the number of states with that value of $E$. This is a univariate function of $E$, and temperature can be defined as the multiplicative inverse of the derivative $dS/dE$.

if the proportionality depends on thermodynamic variables

By $\frac{dE_1}{dt} \propto \beta_1 - \beta_2$ I mean $\frac{dE_1}{dt} = k(\beta_1 - \beta_2)$ for some constant $k$ that doesn't vary with time. So it's incompatible with Newton's law.

This asymmetry in the temperature dependence would predict that one subsystem will heat faster than the other subsystem cools

Oh, the asymmetric formula relies on the assumption I made that subsystem 2 is so much bigger than subsystem 1 that its temperature doesn't change appreciably during the cooling process. I wasn't clear about that, sorry.

Comment by Nisan on [Retracted] Newton's law of cooling from first principles · 2024-01-17T08:28:22.304Z · LW · GW

Yeah, as Shankar says, this is only for conduction (and maybe convection?). The assumption about transition probabilities is abstractly saying there's a lot of contact between the subsystems. If two objects contact each other in a small surface area, this post doesn't apply and you'll need to model the heat flow with the heat equation. I suppose radiative cooling acts abstractly like a narrow contact region, only allowing photons through.

Comment by Nisan on [Retracted] Newton's law of cooling from first principles · 2024-01-17T08:17:55.361Z · LW · GW

I am suspicious of this "Lambert's law". Suppose the environment is at absolute zero -- nothing is moving at all. Then "Lambert's law" says that the rate of cooling should be infinite: our object should itself instantly drop to absolute zero once placed in an absolute-zero environment. Can that be right?

We're assuming the environment carries away excess heat instantly. In practice the immediate environment will warm up a bit and the cooling rate will become finite right away.

But in the ideal case, yeah, I think instant cooling makes sense. The environment's coldness is infinite!

Comment by Nisan on [Retracted] Newton's law of cooling from first principles · 2024-01-17T08:07:13.742Z · LW · GW

Oh neat! Very interesting. I believe your argument is correct for head-on collisions. What about glancing blows, though?

Assume two rigid, spherical particles with the same mass and radius.

Pick a coordinate system (at rest) where the collision normal vector is aligned with the x-axis.

Then move the coordinate system along the x axis so that the particles have equal and opposite x-velocities. (The y-velocities will be whatever.) In this frame, the elastic collision will negate the x-velocities and leave the y-velocities untouched.

Back in the rest frame, this means that the collision swaps the x-velocities and keeps the y-velocities the same. Thus the energy transfer is half the difference of the squared x-velocities, $\frac{m}{2}(v_{1x}^2 - v_{2x}^2)$.
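
A quick numeric check of the swap claim (for equal unit masses, collision normal along x):

```python
import random

def collide(v1, v2):
    """Equal-mass elastic collision with normal along x: swap x-velocities."""
    (v1x, v1y), (v2x, v2y) = v1, v2
    return (v2x, v1y), (v1x, v2y)

v1 = (random.gauss(0, 2), random.gauss(0, 2))  # "hot" particle
v2 = (random.gauss(0, 1), random.gauss(0, 1))  # "cold" particle
w1, w2 = collide(v1, v2)

ke = lambda v: 0.5 * (v[0] ** 2 + v[1] ** 2)  # kinetic energy per unit mass
gained = ke(w2) - ke(v2)                      # energy gained by particle 2
assert abs(gained - 0.5 * (v1[0] ** 2 - v2[0] ** 2)) < 1e-9
assert abs((ke(w1) + ke(w2)) - (ke(v1) + ke(v2))) < 1e-9  # energy conserved
```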

I'm not sure that's proportional to $T_1 - T_2$? The square of the x-velocity does increase with temperature, but I'm not sure it's linear. If there's a big temperature difference, the collisions are ~uniformly distributed on the cold particle's surface, but not on the hot particle's surface.

Comment by Nisan on Nisan's Shortform · 2024-01-08T01:41:03.499Z · LW · GW

I'd love it if anyone could point me to anywhere this cooling law (proportional to the difference of coldnesses) has been written up.

Also my assumptions about the dynamical system are kinda ad hoc. I'd like to know assumptions I ought to be using.

Comment by Nisan on Nisan's Shortform · 2024-01-08T01:38:08.150Z · LW · GW

We can derive Newton's law of cooling from first principles.

Consider an ergodic discrete-time dynamical system and group the microstates into macrostates according to some observable variable $X$. ($X$ might be the temperature of a subsystem.)

Let's assume that if $X = x$, then in the next timestep $X$ can be one of the values $x - \delta$, $x$, or $x + \delta$, for some fixed quantum $\delta$.

Let's make the further assumption that the transition probabilities for these three possibilities have the same ratios as the numbers of microstates of the corresponding macrostates.

Then it turns out that the expected rate of change $\frac{dx}{dt}$ is proportional to $\frac{dS}{dx}$, where $S(x)$ is the entropy, which is the logarithm of the number of microstates.

Now suppose our system consists of two interacting subsystems with energies $E_1$ and $E_2$. Total energy is conserved. How fast will energy flow from one system to the other? By the above lemma, $\frac{dE_1}{dt}$ is proportional to $\frac{dS}{dE_1} = \beta_1 - \beta_2$.

Here $\beta_1 = \frac{dS_1}{dE_1}$ and $\beta_2 = \frac{dS_2}{dE_2}$ are the coldnesses of the subsystems. Coldness $\beta = 1/T$ is the inverse of temperature, and is more fundamental than temperature.

Note that Newton's law of cooling says that the rate of heat transfer is proportional to $T_1 - T_2$. For a narrow temperature range this will approximate our result.
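
A toy Monte Carlo sketch of this argument (the ideal-gas-like entropies $S_i = N_i \log E_i$, with $T_i = E_i / N_i$, are my assumption for illustration):

```python
import math, random

N1, N2, E_TOTAL = 20, 200, 4000  # two subsystems sharing a conserved energy

def log_states(e1):
    # Total entropy S(E1) = S1(E1) + S2(E_TOTAL - E1) with S_i = N_i log E_i
    return N1 * math.log(e1) + N2 * math.log(E_TOTAL - e1)

def step(e1):
    # Transition probabilities in the same ratios as the microstate counts
    # of the three reachable macrostates E1 - 1, E1, E1 + 1.
    moves = (e1 - 1, e1, e1 + 1)
    weights = [math.exp(log_states(e) - log_states(e1)) for e in moves]
    return random.choices(moves, weights)[0]

e1 = 2000  # subsystem 1 starts hot: T1 = 100 vs T2 = 10
for _ in range(200_000):
    e1 = step(e1)
print(e1 / N1, (E_TOTAL - e1) / N2)  # both -> ~18, i.e. E_TOTAL / (N1 + N2)
```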

Comment by Nisan on What Helped Me - Kale, Blood, CPAP, X-tiamine, Methylphenidate · 2024-01-03T23:35:18.573Z · LW · GW

Wow, that's a lot of kale. Do you eat 500g every day? And 500g is the mass of the cooked, strained kale?

Comment by Nisan on You are probably underestimating how good self-love can be · 2023-12-16T19:34:17.433Z · LW · GW

What a beautiful illustration of how a Humanist's worldview differs from a Cousin's!

Comment by Nisan on Google Gemini Announced · 2023-12-10T17:35:13.073Z · LW · GW

I wonder why Gemini used RLHF instead of Direct Preference Optimization (DPO). DPO was written up 6 months ago; it's simpler and apparently more compute-efficient than RLHF. (Its objective is sketched below the list.)

  • Is the Gemini org structure so sclerotic that it couldn't switch to a more efficient training algorithm partway through a project?
  • Is DPO inferior to RLHF in some way? Lower quality, less efficient, more sensitive to hyperparameters?
  • Maybe they did use DPO, even though they claimed it was RLHF in their technical report?
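
For reference, the published DPO objective is short enough to state in full (a sketch assuming per-sequence log-probabilities under the policy and a frozen reference model are precomputed as tensors):

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO (Rafailov et al. 2023): widen the policy-vs-reference log-ratio
    margin between the preferred (w) and dispreferred (l) completions."""
    logits = beta * ((pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l))
    return -F.logsigmoid(logits).mean()
```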

Comment by Nisan on Sum-threshold attacks · 2023-10-22T18:15:11.938Z · LW · GW

Another example is the obfuscated arguments problem. As a toy example:

For every cubic centimeter in Texas, your missing earring is not in that cubic centimeter.

Therefore, your missing earring is not in Texas.

Even if the conclusion of the argument is a lie, each premise is spot-checkable and most likely true. The lie has been split up into many statements each of which is only slightly a lie.

Comment by Nisan on My take on higher-order game theory · 2023-10-21T19:44:35.436Z · LW · GW

Thanks! For convex sets of distributions: If you weaken the definition of fixed point to $f(x) \le x$, then the set of such points has a least element which really is a least fixed point.

Comment by Nisan on Nisan's Shortform · 2023-10-21T02:25:13.586Z · LW · GW

Hyperbolic growth

The differential equation $\frac{dx}{dt} = x^{1+s}$, for positive $x$ and $s$, has solution

$$x(t) = (t_\ast - t)^{-1/s}$$

(after changing the units). The Roodman report argues that our economy follows this hyperbolic growth trend, rather than an exponential one.

While exponential growth has a single parameter — the growth rate or interest rate — hyperbolic growth has two parameters: $t_\ast$ is the time of the singularity, and $s$ is the "hardness" of the takeoff.

A value of $s$ close to zero gives a "soft" takeoff where the derivative gets high well in advance of the singularity. A large value of $s$ gives a "hard" takeoff, where explosive growth comes all at once right at the singularity. (Paul Christiano calls these "slow" and "fast" takeoff.)

Paul defines "slow takeoff" as "There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles." This corresponds to $s \le 2$. (At $s = 2$, the first four-year doubling starts at $16/3$ years and the first one-year doubling starts at $4/3$ years before the singularity.)

So the simple hyperbola $x = \frac{1}{t_\ast - t}$, with $s = 1$, counts as "slow takeoff". (This is the "naive model" mentioned in footnote 31 of Intelligence Explosion Microeconomics.)

Roodman's estimates of historical $s$ are closer to $1/2$ (see Table 3).
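
A quick check of these numbers from the closed form above:

```python
def doubling_start(s, duration):
    """Years before the singularity at which x(t) = (t* - t)**(-1/s) begins
    a doubling lasting `duration` years. x doubles when t* - t shrinks by
    a factor 2**-s, so a doubling starting tau years out lasts tau * (1 - 2**-s)."""
    return duration / (1 - 2 ** -s)

s = 2
print(doubling_start(s, 4))  # 16/3 ~ 5.33 years before the singularity
print(doubling_start(s, 1))  # 4/3  ~ 1.33 years
# The four-year doubling ends at (16/3) * 2**-2 = 4/3 years out, exactly
# when the one-year doubling starts: s = 2 is the slow/fast boundary.
```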

Comment by Nisan on Features of Emacs that I only recently discovered · 2023-09-25T22:13:36.526Z · LW · GW

Ah, beginning-of-line-text is nice. It skips over the initial # or // of comments and the initial * of Org headings. I've now bound it to M-m.

Comment by Nisan on I'm consistently overwhelmed by basic obligations. Are there any paradigm shifts or other rationality-based tips that would be helpful? · 2023-07-21T21:29:59.255Z · LW · GW

Consider seeing a doctor about the panicky and stressed feelings. They may test you for hormone imbalances or prescribe you antianxiety medication.

Comment by Nisan on Nisan's Shortform · 2023-07-17T06:56:25.279Z · LW · GW

Conception is a startup trying to do in vitro gametogenesis for humans!

Comment by Nisan on Open Thread - July 2023 · 2023-07-06T19:29:27.679Z · LW · GW

A long reflection requires new institutions, and creating new institutions requires individual agency. Right? I have trouble imagining a long reflection actually happening in a world with the individual agency level dialed down.

A separate point that's perhaps in line with your thinking: I feel better about cultivating agency in people who are intelligent and wise rather than people who are not. When I was working on agency-cultivating projects, we targeted those kinds of people.

Comment by Nisan on We Are Less Wrong than E. T. Jaynes on Loss Functions in Human Society · 2023-06-05T20:04:53.981Z · LW · GW

What's more, even selfish agents with de dicto identical utility functions can trade: If I have two right shoes and you have two left shoes, we'd trade one shoe for another because of decreasing marginal utility.

Comment by Nisan on Nisan's Shortform · 2023-05-11T05:27:19.608Z · LW · GW

Recent interviews with Eliezer:

Comment by Nisan on Looking for a post I read if anyone recognizes it · 2023-05-10T04:28:21.923Z · LW · GW

The bug patches / epiphanies / tortoises / wizardry square from Small, Consistent Effort: Uncharted Waters In the Art of Rationality

Comment by Nisan on Predictable updating about AI risk · 2023-05-09T03:43:15.440Z · LW · GW

The nanobots, from the bloodstream, in the parlor, Professor Plum.

You could have written Colonel Mustard!