Comments

Comment by Pongo on Why We Launched LessWrong.SubStack · 2021-04-01T14:16:45.096Z · LW · GW

LW must really need the money, having decided to destroy a non-trivial communal resource

Comment by Pongo on How You Can Gain Self Control Without "Self-Control" · 2021-03-25T01:22:11.641Z · LW · GW

The Thought Saver at the end of the heritability section asks you to remember some strategies for self-control, but they've not been introduced yet

Comment by Pongo on Demand offsetting · 2021-03-23T19:40:13.197Z · LW · GW

Presumably the welfare premium is reduced if the ethical egg providers can recoup some costs from a quality premium

Comment by Pongo on Demand offsetting · 2021-03-22T04:49:16.294Z · LW · GW

Not sure at all! It still seems like the ordering is tricky. They don't know how many ethical eggs they've sold when selling to the consumer. There's no guarantee of future ethical eggs when buying the certificate.

Maybe it works out OK, and they can sell 873,551 eggs at a regular price after that many certificates were bought, and the rest at the higher price. I know very little about how the food supply chain works

Comment by Pongo on Demand offsetting · 2021-03-22T02:03:47.029Z · LW · GW

IIUC, this exposes the high-welfare egg co to more risk. It's hard to sell 1 million eggs for one price, and 1 million for another price. So they probably have to choose to sell at the low welfare price constantly. But this means they build up a negative balance that they're hoping ethical consumers will buy them out of.

Comment by Pongo on A Semitechnical Introductory Dialogue on Solomonoff Induction · 2021-03-05T23:58:01.524Z · LW · GW

Thanks!

Comment by Pongo on A Semitechnical Introductory Dialogue on Solomonoff Induction · 2021-03-05T19:10:05.034Z · LW · GW

Was this actually cross-posted by EY, or by Rob or Ben? If the latter, I'd prefer that to be mentioned

Comment by Pongo on Takeaways from one year of lockdown · 2021-03-02T22:37:33.154Z · LW · GW

To add more color to the inadequate equilibrium: I didn’t want to hang out with people with a lot of risk, not because of how bad COVID would be for me, but because of how it would limit which community members would interact with me. But this also meant I was a community member who was causing other people to take less risk.

Comment by Pongo on Making Vaccine · 2021-02-08T22:46:35.214Z · LW · GW

I didn’t mean to predict on this; I was just trying to see the number of predictions on the first one. Turns out that causes a prediction on mobile

Comment by Pongo on So8res' Shortform Feed · 2021-02-02T01:33:08.731Z · LW · GW

Hoping, I guess, that the name was bad enough that others would call it an Uhlmann Filter

Comment by Pongo on A few thought on the inner ring · 2021-01-22T15:59:35.844Z · LW · GW

Oh, and I also notice that a social manoeuvring game (the game that governs who is admitted) is a task where performance is correlated with performance on (1) and (2)

Comment by Pongo on Covid 1/21: Turning the Corner · 2021-01-21T21:29:05.167Z · LW · GW

First time I’ve seen a highlighted mod comment. I like it!

Comment by Pongo on A few thought on the inner ring · 2021-01-21T17:38:53.322Z · LW · GW

Most of the Inner Rings I've observed are primarily selected on (1) being able to skilfully violate the explicit local rules to get things done without degrading the structure the rules hold up and (2) being fun to be around, even for long periods and hard work.

Lewis acknowledges that Inner Rings aren't necessarily bad, and I think the above is a reason why.

Comment by Pongo on Public selves · 2021-01-19T06:07:57.708Z · LW · GW

Making correct decisions is hard. Sharing more data tends to make them easier. Whether you'll thrive or fail in a job may well depend on parts you are inclined to hide. Also, though we may be unwilling to change many parts of ourselves, other times we are getting in our own way, and it can help to have more eyes on the part of the territory that's you

Comment by Pongo on A Healthy News Diet · 2021-01-03T03:58:36.451Z · LW · GW

All uses of the second person "you" and "your" in this post are in fact me talking to myself

I wanted to upvote just for this note, but I decided it's not good to upvote things based on the first sentence or so. So I read the post, and it's good, so now I can upvote guilt-free!

Comment by Pongo on A Healthy News Diet · 2021-01-03T03:57:25.387Z · LW · GW

Comment by Pongo on Motive Ambiguity · 2020-12-21T01:43:31.347Z · LW · GW

Reminds me of The Costs of Reliability

Comment by Pongo on [deleted post] 2020-12-18T06:43:49.822Z

I would also be salty

Comment by Pongo on [deleted post] 2020-12-18T04:49:15.768Z

I think this tag should cover, for example, auction mechanics. Auctions don't seem much like institutions to me

Comment by Pongo on What confusions do people have about simulacrum levels? · 2020-12-15T03:20:12.090Z · LW · GW

Note also Odin was "Woden" in Old English

Comment by Pongo on Hermione Granger and Newcomb's Paradox · 2020-12-14T23:52:01.822Z · LW · GW

90% autocorrect of "Chauna"

Comment by Pongo on Hermione Granger and Newcomb's Paradox · 2020-12-14T07:26:31.979Z · LW · GW

Should I be reading all the openings of transparent envelopes as actual openings, or are they sometimes looking at the sealed envelope and seeing what it contains (the burnings incline me towards the second interpretation, but I'm not sure)?

EDIT: Oh, I think I understand better now

Comment by Pongo on D&D.Sci · 2020-12-10T07:13:32.434Z · LW · GW

Made a quick neural network (reaching about 70% accuracy), and checked all available scores.

Its favorite result was: +2 Cha, +8 Wis. It would have liked +10 Wis if it were possible.

For at least the top few results, it wanted to (a) apportion as much to Wis as possible, then (b) as much to Cha, then (c) as much to Con. So we have, for (Wis, Cha, Con):

1. (8, 2, 0)
2. (8, 1, 1)
3. (8, 0, 2)
4. (7, 3, 0)
5. (7, 2, 1)
...
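
Here's roughly the shape of that pipeline, as a minimal sketch rather than the code I actually ran (the file name dnd_sci.csv, the column names, the base stats, and the sklearn MLP standing in for my quick network are all placeholder assumptions):

```python
# Sketch of the approach: fit a small classifier on past adventurers, then score
# every way of allocating the +10 stat points and rank the allocations.
import pandas as pd
from sklearn.neural_network import MLPClassifier

STATS = ["str", "con", "dex", "int", "wis", "cha"]
BASE = {"str": 6, "con": 14, "dex": 13, "int": 13, "wis": 12, "cha": 4}  # hypothetical base stats

# Assumed data layout: one row per adventurer, six stat columns plus a 0/1 "succeeded" column.
df = pd.read_csv("dnd_sci.csv")
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(df[STATS], df["succeeded"])

def allocations(points, n):
    """Yield every way to split `points` across `n` stats."""
    if n == 1:
        yield (points,)
        return
    for first in range(points + 1):
        for rest in allocations(points - first, n - 1):
            yield (first,) + rest

scored = []
for alloc in allocations(10, len(STATS)):
    stats = {s: BASE[s] + a for s, a in zip(STATS, alloc)}
    p_success = model.predict_proba(pd.DataFrame([stats])[STATS])[0, 1]
    scored.append((p_success, alloc))

for p_success, alloc in sorted(scored, reverse=True)[:5]:
    print(f"{p_success:.3f}", dict(zip(STATS, alloc)))
```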

Comment by Pongo on Final Version Perfected: An Underused Execution Algorithm · 2020-11-28T23:20:50.030Z · LW · GW

Seems like it's a lazy sort to me (with obvious wrinkles from the fact that the list can grow). It also seems to be a variant of drop sort (which is O(n) via cheating) designed for repeated passes on the remaining list
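
To make the "lazy sort" claim concrete, here's a minimal sketch, assuming the only primitive is a pairwise "do I want to do X more than Y?" comparison (and ignoring both the growing-list wrinkle and the fact that real FVP resumes from mid-list rather than the top):

```python
# Sketch: one FVP-style pass is a lazy max-selection; repeating passes on the
# remaining items yields a full ordering, like selection sort done on demand.
def fvp_order(items, prefer):
    remaining = list(items)
    done = []
    while remaining:
        marked = remaining[0]
        for item in remaining[1:]:
            # Each item is only ever compared against the current mark
            if prefer(item, marked):
                marked = item
        done.append(marked)       # "do" the most-wanted item from this pass
        remaining.remove(marked)  # then repeat on what's left
    return done

# With a fixed numeric "want", the repeated passes recover an ordinary sort:
print(fvp_order([3, 1, 4, 1, 5, 9, 2, 6], prefer=lambda a, b: a > b))
# -> [9, 6, 5, 4, 3, 2, 1, 1]
```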

Comment by Pongo on Public transmit metta · 2020-11-15T23:56:40.746Z · LW · GW

Any favored resources on metta?

Comment by Pongo on Spend twice as much effort every time you attempt to solve a problem · 2020-11-15T19:26:57.891Z · LW · GW

Comment by Pongo on Misalignment and misuse: whose values are manifest? · 2020-11-13T20:00:32.204Z · LW · GW

If we solve the problem normally thought of as "misalignment", it seems like this scenario would now go well. If we solve the problem normally thought of as "misuse", it seems like this scenario would now go well. This argues for continuing to use these categories, as fixing the problems they name is still sufficient to solve problems that do not cleanly fit in one bucket or another

Comment by Pongo on How can I bet on short timelines? · 2020-11-10T00:10:20.221Z · LW · GW

Sure!

I see people on twitter, for example, doing things like having GPT-3 provide autocomplete or suggestions while they're writing, or doing grunt work of producing web apps. Plausibly, figuring out how to get the most value out of future AI developments for improving productivity is important.

There's an issue that it's not very obvious exactly how to prepare for various AI tools in the future. One piece of work could be thinking more about how to flexibly prepare for AI tools with unknown capabilities, or predicting what the capabilities will be.

Other things that come to mind are:

  • Practice getting up to speed in new tool setups. If you are very bound to a setup that you like, you might have a hard time leveraging these advances as they come along. Alternatively, try and be sure you can extend your current workflow
  • Increase the attention you pay to new (AI) tools. Get used to trying them out, both for the reasons above and because it may be important to act fast in picking up very helpful new tools

To be clear, it's not super clear to me how much value there is in this direction. It is pretty plausible to me that AI tooling will be essential for competitive future productivity, but maybe there's not much of an opportunity to bet on that

Comment by Pongo on Three Open Problems in Aging · 2020-11-08T22:56:41.177Z · LW · GW

Now, it's still possible that accumulation of slow-turnover senescent cells could cause the increased production rate of fast-turnover senescent cells.

Reminds me of this paper, in which they replaced the blood of old rats with a neutral solution (not the blood of young rats), and found large rejuvenative effects. IIRC, they attributed it to knocking the old rats out of some sort of "senescent equilibrium"

Comment by Pongo on How can I bet on short timelines? · 2020-11-08T21:51:53.621Z · LW · GW

If timelines are short, where does the remaining value live? Some fairly Babble-ish ideas:

  • Alignment-by-default
    • Both outer alignment and inner by default
      • With full alignment by default, there's nothing to do, I think! One could be an accelerationist, but the reduction in suffering and lives lost now doesn't seem large enough for the cost in probability of aligned AI
      • Possibly value could be lost if values aren't sufficiently cosmopolitan? One could try and promote cosmopolitan values
    • Inner alignment by default
      • Focus on tools for getting good estimates of human values, or an intent-aligned AI
        • Ought's work is a good example
        • Possibly trying to experiment with governance / elicitation structures, like quadratic voting
        • Also thinking about how to get good governance structures actually used
  • Acausal trade
    • In particular, expand the ideas in this post. (I understand Paul to be claiming he argues for tractability somewhere in that post, but I couldn't find it)
    • Work through the details of UDT games, and how we could effect proper acausal trade. Figure out how to get the relevant decision makers on board
  • Strong, fairly late institutional responses
    • Work on making, for example, states strong enough to (coordinately) restrict or stop AI development

Other things that seem useful:

  • Learn the current hot topics in ML. If timelines are short, it's probably the case that AGI will use extensions of the current frontier
  • Invest in leveraging AI tools for direct work / getting those things that money cannot buy. This may be a little early, but if the takeoff is at all soft, maybe there are still >10 years left of 2020-level intellectual work before 2030 if you're using the right tools

Comment by Pongo on Where do (did?) stable, cooperative institutions come from? · 2020-11-03T22:53:54.989Z · LW · GW

Even more so, I would love to see your unjustifiable stab-in-the-dark intuitions as to where the center of all this is

Curious why this in particular (not trying to take umbrage with wanting this info; I agree that there’s a lot of useful data here. Would be a thing I’d also want to ask for, but wouldn’t have prioritised)

Comment by Pongo on Where do (did?) stable, cooperative institutions come from? · 2020-11-03T22:47:54.136Z · LW · GW

Seems like you’re missing an end to the paragraph that starts “Related argument”

Comment by Pongo on Kelly Bet or Update? · 2020-11-02T23:52:43.363Z · LW · GW

I liked your example of being uncertain of your probabilities. I note that if you are trying to make an even money bet with a friend (as this is a simple Schelling point), you should never Kelly bet if you discount to 2/3 or less of your naïve probabilities.

The maximum bet for  is when  is 1, which is  which crosses below 0 at 

Comment by Pongo on The Darwin Game - Rounds 0 to 10 · 2020-10-24T17:14:02.666Z · LW · GW

In the pie chart in the Teams section, you can see "CooperateBot [Larks]" and "CooperateBot [Insub]"

Comment by Pongo on Should we use qualifiers in speech? · 2020-10-23T19:15:15.095Z · LW · GW

Yeah, that's what my parenthetical was supposed to address

(particularly because there is large interpersonal variation in the strength of hedging a given qualifier is supposed to convey)

Perhaps you are able to get more reliable information out of such statements than I am.

Comment by Pongo on Should we use qualifiers in speech? · 2020-10-23T19:06:28.973Z · LW · GW

I like qualifiers that give information on the person's epistemic state or, even better, process. For example:

  • "Technological progress happens exponentially (70%)"
  • "My rough intuition is that technological progress happens exponentially"
  • "Having spent a year looking into the history of technological progress, I concluded it happens exponentially: [here are specific data sources, or links, or some sort of pointer]"

Given that I don't think anyone can directly report the state of the world (rather than their beliefs and understanding of it) in the first place, "From what I understand, technological progress happens exponentially" does not provide much information (particularly because there is large interpersonal variation in the strength of hedging a given qualifier is supposed to convey).

Sometimes I feel forced to add qualifiers because it makes it more likely that someone will keep engaging with me. That is, I am confused about something, and they are helping refine my model. By adding qualifiers, I signal that I'm reasonable / understand I should be uncertain of my conclusions / am open to updating.

Comment by Pongo on The date of AI Takeover is not the day the AI takes over · 2020-10-22T16:54:19.193Z · LW · GW

You can steer a bit away from catastrophe today. Tomorrow you will be able to do less. After years and decades go by, you will have to be miraculously lucky or good to do something that helps. At some point, it's not the kind of "miraculous" you hope for, it's the kind you don't bother to model.

Today you are blind, and are trying to shape outcomes you can't see. Tomorrow you will know more, and be able to do more. After years and decades, you might know enough about the task you are trying to accomplish to really help. Hopefully the task you find yourself faced with is the kind you can solve in time.

Comment by Pongo on Open & Welcome Thread – October 2020 · 2020-10-18T22:50:58.268Z · LW · GW

I think you're right. I think inline comments are a good personal workflow when engaging with a post (when there's a post I want to properly understand, I copy it to a Google Doc and comment on it), but not for communicating the engagement

Comment by Pongo on The Solomonoff Prior is Malign · 2020-10-16T07:13:14.993Z · LW · GW

My understanding is that the first thing is what you get with UDASSA, and the second thing is what you get if you think the Solomonoff prior is useful for predicting your universe for some other reason (i.e. not because you think the likelihood of finding yourself in some situation covaries with the Solomonoff prior's weight on that situation)

Comment by Pongo on Hiring engineers and researchers to help align GPT-3 · 2020-10-05T23:35:16.817Z · LW · GW

It is at least the case that OpenAI has sponsored H1Bs before: https://www.myvisajobs.com/Visa-Sponsor/Openai/1304955.htm

Comment by Pongo on Postmortem to Petrov Day, 2020 · 2020-10-04T02:25:07.736Z · LW · GW

He treated it like a game, even though he was given the ability to destroy a non-trivial communal resource.

I want to resist this a bit. I actually got more value out of the "blown up" frontpage than I would have from a normal frontpage that day. A bit of "cool to see what the LW devs prepared for this case", a bit of "cool, something changed!", and some excitement about learning something.

Comment by Pongo on Covid 10/1: The Long Haul · 2020-10-01T21:31:06.075Z · LW · GW

That’s a very broad definition of ‘long haul’ on duration and on severity, and I’m guessing this is a large underestimate of the number of actual cases in the United Kingdom

If the definition is broad, shouldn't it be an overestimate?

Comment by Pongo on “Unsupervised” translation as an (intent) alignment problem · 2020-09-30T01:53:29.083Z · LW · GW

My attempt to summarize the alignment concern here. Does this seem a reasonable gloss?

It seems plausible that competitive models will not be transparent or introspectable. If you can't see how the model is making decisions, you can't tell how it will generalize, and so you don't get very good safety guarantees. Or to put it another way, if you can't interact with the way the model is thinking, then you can't give a rich enough reward signal to guide it to the region of model space that you want

Comment by Pongo on “Unsupervised” translation as an (intent) alignment problem · 2020-09-30T01:47:55.355Z · LW · GW

Most importantly, the success of the scheme relies on the correctness of the prior over helper models (or else the helper could just be another copy of GPT-Klingon)

I'm not sure I understand this. My understanding of the worry: what if there's some equilibrium where the model gives wrong explanations of meanings, but I can't tell using just the model to give me meanings.

But it seems to me that having the human in the loop doing prediction helps a lot, even with the same prior. Like, if the meanings are wrong, then the user will just not predict the correct word. But maybe this is not enough corrective data?

Comment by Pongo on On Destroying the World · 2020-09-29T22:06:43.002Z · LW · GW

As someone who didn't receive the codes, but read the email on Honoring Petrov Day, I also got the sense it wasn't too serious. The thing that would most give me pause is "a resource thousands of people view every day".

I'm not sure I can say exactly what seems lighthearted about the email to me. Perhaps I just assumed it would be, and so read it that way. If I were to pick a few concrete things, I would say the phrase "with our honor intact" seems like a joke, and also "the opportunity to not destroy LessWrong" seems like a silly phrase (kind of similar to "a little bit the worst thing ever"). On reflection, yep, you are getting an opportunity you don't normally get. But it's also weird to have an opportunity to perform a negative action.

Also, it still seems to me that there's no reason anyone who was taking it seriously would blow up LW (apart from maybe Jeff Kauffman). So if there's a real risk of someone blowing it up, it must not be that serious.

Comment by Pongo on On Destroying the World · 2020-09-29T07:05:41.246Z · LW · GW

Suppose phishing attacks do have an 80%+ success rate. I have been the target of phishing attempts tens of times and never fallen for one (and I imagine this is not unusual on LW). This suggests the average LWer should not expect to fall victim to a phishing attempt with 80% probability, even if that is the global average

Comment by Pongo on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems · 2020-09-24T01:14:12.076Z · LW · GW

This summary was helpful for me, thanks! I was sad cos I could tell there was something I wanted to know from the post but couldn't quite get it

In a Stag Hunt, the hunters can punish defection and reward cooperation

This seems wrong. I think the argument goes: "the essential difference between a one-off Prisoner's Dilemma and an iterated Prisoner's Dilemma (IPD) is that players can punish and reward each other in-band (by future behavior). In the real world, they can also reward and punish out-of-band (in other games). Both these forces help create another equilibrium where people cooperate and punishment makes defecting a bad idea (though an equilibrium of constant defection still exists). This payoff matrix is like that of a Stag Hunt rather than a one-off Prisoner's Dilemma"

Comment by Pongo on Matt Goldenberg's Short Form Feed · 2020-09-24T00:46:29.775Z · LW · GW

I think it's probably true that the Litany of Gendlin is irrecoverably false, but I feel drawn to apologia anyway.

I think the central point of the litany is its equivocation between "you can stand what is true (because, whether you know it or not, you already are standing what is true)" and "you can stand to know what is true".

When someone thinks, "I can't have wasted my time on this startup. If I have I'll just die", they must really mean "If I find out I have I'll just die". Otherwise presumably they can conclude from their continued aliveness that they didn't waste their life, and move on. The litany is an invitation to allow yourself to have less fallout from acknowledging or finding out the truth because you finding it out isn't what causes it to be true, however bad the world might be because it's true. A local frame might be "whatever additional terrible ways it feels like the world must be now if X is true are bucket errors".

So when you say "Owning up to what's true makes things way worse if you don't have the psychological immune system to handle the negative news/deal with the trauma or whatever", you're not responding to the litany as I see it. The litany says (emphasis added) "Owning up to it doesn't make it worse". Owning up to what's true doesn't make the true thing worse. It might make things worse, but it doesn't make the true thing worse (though I'm sure there are, in fact, tricky counterexamples here)

(The Litany of Gendlin is important to me, so I wanted to defend it!)

Comment by Pongo on Matt Goldenberg's Short Form Feed · 2020-09-24T00:35:49.019Z · LW · GW

I wonder why it seems like it suggests dispassion to you, but to me it suggests grace in the presence of pain. The grace for me I think comes from the outward- and upward-reaching (to me) "to be interacted with" and "to be lived", and grace with acknowledgement of pain comes from "they are already enduring it"

Comment by Pongo on Sunday September 20, 12:00PM (PT) — talks by Eric Rogstad, Daniel Kokotajlo and more · 2020-09-20T16:21:53.786Z · LW · GW

Wondering if these weekly talks should be listed in the Community Events section?