Posts

Is there a good way to simultaneously read LW and EA Forum posts and comments? 2020-06-25T00:15:00.711Z · score: 8 (4 votes)

Comments

Comment by pongo on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Battle of the Sexes · 2020-09-24T01:14:12.076Z · score: 3 (2 votes) · LW · GW

This summary was helpful for me, thanks! I was sad because I could tell there was something I wanted to know from the post but couldn't quite get it

In a Stag Hunt, the hunters can punish defection and reward cooperation

This seems wrong. I think the argument goes: "The essential difference between a one-off Prisoner's Dilemma and an iterated Prisoner's Dilemma (IPD) is that players can punish and reward each other in-band (by future behavior). In the real world, they can also reward and punish out-of-band (in other games). Both these forces help create another equilibrium where people cooperate and punishment makes defecting a bad idea (though an equilibrium of constant defection still exists). This payoff matrix is like that of a Stag Hunt rather than a one-off Prisoner's Dilemma."
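A minimal sketch of that transformation, with invented payoff numbers (not from the post):

```python
# One-off Prisoner's Dilemma: (row payoff, column payoff); "D" strictly dominates.
pd = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

PUNISHMENT = 4  # out-of-band cost (future rounds, other games) for defecting on a cooperator

def punish(matrix, p):
    """Subtract p from any player who defected against a cooperator."""
    return {
        (a, b): (
            pa - (p if (a, b) == ("D", "C") else 0),
            pb - (p if (a, b) == ("C", "D") else 0),
        )
        for (a, b), (pa, pb) in matrix.items()
    }

print(punish(pd, PUNISHMENT))
# {('C', 'C'): (3, 3), ('C', 'D'): (0, 1), ('D', 'C'): (1, 0), ('D', 'D'): (1, 1)}
```

With the punishment applied, (C, C) and (D, D) are both equilibria: cooperate if you expect cooperation, defect if you expect defection. That is the payoff structure of a Stag Hunt.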

Comment by pongo on Matt Goldenberg's Short Form Feed · 2020-09-24T00:46:29.775Z · score: 2 (2 votes) · LW · GW

I think it's probably true that the Litany of Gendlin is irrecoverably false, but I feel drawn to apologia anyway.

I think the central point of the litany is its equivocation between "you can stand what is true (because, whether you know it or not, you already are standing what is true)" and "you can stand to know what is true".

When someone thinks, "I can't have wasted my time on this startup. If I have I'll just die", they must really mean "If I find out I have I'll just die". Otherwise they could presumably conclude from their continued aliveness that they didn't waste their life, and move on. The litany is an invitation to suffer less fallout from acknowledging or discovering the truth, because finding it out isn't what causes it to be true, however bad the world might be because it's true. A local frame might be: "whatever additional terrible ways it feels like the world must be now, if X is true, are bucket errors".

So when you say "Owning up to what's true makes things way worse if you don't have the psychological immune system to handle the negative news/deal with the trauma or whatever", you're not responding to the litany as I see it. The litany says (emphasis added) "Owning up to it doesn't make it worse". Owning up to what's true doesn't make the true thing worse. It might make things worse, but it doesn't make the true thing worse (though I'm sure there are, in fact, tricky counterexamples here)

(The Litany of Gendlin is important to me, so I wanted to defend it!)

Comment by pongo on Matt Goldenberg's Short Form Feed · 2020-09-24T00:35:49.019Z · score: 3 (2 votes) · LW · GW

I wonder why it suggests dispassion to you, when to me it suggests grace in the presence of pain. For me, I think the grace comes from the outward- and upward-reaching "to be interacted with" and "to be lived", and the grace that acknowledges pain comes from "they are already enduring it"

Comment by pongo on Sunday September 20, 12:00PM (PT) — talks by Eric Rogstad, Daniel Kokotajlo and more · 2020-09-20T16:21:53.786Z · score: 1 (1 votes) · LW · GW

Wondering if these weekly talks should be listed in the Community Events section?

Comment by pongo on In praise of pretending to really try · 2020-09-18T01:12:26.743Z · score: 1 (1 votes) · LW · GW

I like this claim about the nature of communities. One way people can Really Try in a community is by taking stands against the way the community does things while remaining part of the community. I can’t think of any good solutions for encouraging this without assuming closed membership (or other cures worse than the disease)

Comment by pongo on Sunday September 20, 12:00PM (PT) — talks by Eric Rogstad, Daniel Kokotajlo and more · 2020-09-17T22:21:22.300Z · score: 3 (2 votes) · LW · GW

I vote for GWP or your favorite timelines model

Comment by pongo on capybaralet's Shortform · 2020-09-17T17:30:56.185Z · score: 3 (2 votes) · LW · GW

But that is indeed a clunkier statement

I once heard someone say, "I'm curious about X, but only want to ask you about it if you want to talk about it" and thought that seemed very skillful.

Comment by pongo on Capturing Ideas · 2020-09-14T19:02:31.761Z · score: 3 (2 votes) · LW · GW

A recurring block I hit when trying to increase capture is the tension between having enough notebooks to be convenient and having one's notes not be hopelessly scattered.

To expand: I strongly prefer paper notes to digital. I want to have notetaking materials with me everywhere. I want maintaining access to notetaking to be convenient (robust to changing from gym clothes to jeans, etc.). I want to be able to trust that I will look at / can find a given note in the future.

I've never quite cracked getting all of these lined up. The closest I've come is having a pocket notebook everywhere I can think of, but laundry or removing notebooks to read them at a desk tends to break this system.

Comment by pongo on The Wiki is Dead, Long Live the Wiki! [help wanted] · 2020-09-12T23:28:43.723Z · score: 3 (2 votes) · LW · GW

I expect "x imported out of y", or "x imported, y remain" to be more motivating than the current "y remain" on the import progress bar.

Comment by pongo on ‘Ugh fields’, or why you can’t even bear to think about that task (Rob Wiblin) · 2020-09-12T23:27:32.493Z · score: 6 (4 votes) · LW · GW

My peevish reaction to this is (sarcastically) "Finally, there's a 1400-word popularization of that 850-word blog post". I dunno why I found that so annoying. It does seem valuable to have multiple explanations available for good concepts

Comment by pongo on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-04T23:24:27.519Z · score: 1 (1 votes) · LW · GW

On the literature that addresses this: here is a classic LW post on this sort of question.

You point out that length of a description in English and length in code don't necessarily correlate. I think for English sentences that are actually constraining expectations, there is a fairly good correlation between length in English and length in code.

There's the issue that the high-level concepts we use in English can be short, but if we were writing a program from scratch using those concepts, the expansion of the concepts would be large. When I appeal to the concept of a buffer overflow when explaining how someone knows secrets from my email, the invocatory phrase "buffer overflow" is short, but the expansion out in terms of computers and transistors and semiconductors and solid state physics is rather long.

But I'm in the game of trying to explain all of my observations. I get to have a dictionary of concepts that I pay the cost for, and then reuse the words and phrases in the dictionary in all my explanations nice and cheaply. Similarly, the computer program that I use to explain the world can have definitions or a library of code, as long as I pay the cost for those definitions once.

So, I'm already paying the cost of the expansion of "buffer overflow" in my attempt to come up with simple explanations for the world. When new data has to be explained, I can happily consider explanations using concepts I've already paid for as rather simple.
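A toy sketch of that accounting, with invented numbers (not from the original comment):

```python
CONCEPT_NAME_COST = len("buffer overflow")  # cheap to invoke: 15 characters
CONCEPT_EXPANSION_COST = 10_000             # stand-in for the transistors-and-physics expansion
NUM_EXPLANATIONS = 50

# Expanding the concept from scratch inside every explanation:
cost_without_dictionary = NUM_EXPLANATIONS * CONCEPT_EXPANSION_COST

# Paying for the expansion once in a shared dictionary, then reusing the name:
cost_with_dictionary = CONCEPT_EXPANSION_COST + NUM_EXPLANATIONS * CONCEPT_NAME_COST

print(cost_without_dictionary, cost_with_dictionary)  # 500000 vs 10750
```

The one-time definition cost is amortized across every explanation that reuses the concept, which is why explanations built from already-paid-for concepts can count as simple.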

Comment by pongo on Sunday August 23rd, 12pm (PDT) – Double Crux with Buck Shlegeris and Oliver Habryka on Slow vs. Fast AI Takeoff · 2020-09-03T20:23:01.506Z · score: 1 (1 votes) · LW · GW

Any update on the likelihood of a transcript?

Comment by pongo on romeostevensit's Shortform · 2020-08-29T04:00:14.482Z · score: 1 (1 votes) · LW · GW

Seems like a cool insight here, but I've not quite managed to parse it. Best guess at what's meant: the higher the stakes / the more people care about an issue, the more skilled the arguers that people pay attention to in that space. This is painful because arguing right at the frontier of your ability does not often give cathartic opinion shifts

Comment by pongo on Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19? · 2020-08-27T03:17:09.256Z · score: 2 (2 votes) · LW · GW

Found some evidence about fatigue in the linked paper

Compared with pre–COVID-19 status, 36 patients (36%) reported ongoing shortness of breath and general exhaustion, of whom 25 noted symptoms during less-than-ordinary daily activities, such as a household chore.

That said, recently recovered patients are not a particularly helpful group to look at (the average period since diagnosis was ~70 days, which might mean they'd generally been recovered for less than a month)

Comment by pongo on On Suddenly Not Being Able to Work · 2020-08-27T01:44:46.868Z · score: 2 (2 votes) · LW · GW

I've never had a work environment where I could do it, but I've always wanted to tackle problems like this by restricting the amount of time I can work: start by only allowing myself to work an hour a day, and then slowly expand the window

Comment by pongo on Mathematical Inconsistency in Solomonoff Induction? · 2020-08-25T19:34:11.121Z · score: 3 (2 votes) · LW · GW

A hypothesis can't exclude things, only make positive predictions

Internally, the algorithm could work by ruling things out ("There are no black swans, so the world can't be X"), but it must still completely specify everything. This may be clearer once you have the answer to your question, "What counts as a hypothesis for Solomonoff induction?": a halting program for some universal Turing machine. And the possible worlds are (in correspondence with) the elements of the space of possible outputs of that machine. So every "hypothesis" pins down everything exactly.

You may have also read some things about the Minimum Message Length (MML) formalization of Occam's razor, and it may be affecting your intuitions. In that formalization, it's more natural to use logical operations as part of your message. That is, you could say something like "It's the list of all primes OR the list of all squares. Compressed data: first number is zero". Here, we've used a logical operation on the statement of the model, but it's made our lossless compression of the data longer. This is a meaningful thing to do in that formalization (whereas it's not really in Solomonoff induction), but the thing we ended up with is definitely not the message with the shortest length. That means it doesn't affect the prior, because the prior depends only on the minimum message length.
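A rough length count for that example (my notation; the original comment doesn't quantify this): let $a$ and $b$ be the lengths of the "all primes" and "all squares" models. The correct single model explains the data at cost $\min(a, b)$ plus essentially nothing extra, while the disjunctive message costs

$$a + b + \underbrace{L(\text{``first number is zero''})}_{\text{selector}} \;>\; \min(a, b),$$

so the disjunction is never the minimum-length message, and hence never moves the MML prior.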

Comment by pongo on Sunday August 23rd, 12pm (PDT) – Double Crux with Buck Shlegeris and Oliver Habryka on Slow vs. Fast AI Takeoff · 2020-08-22T22:50:17.684Z · score: 3 (2 votes) · LW · GW

Oh no! Would’ve loved to attend this, but it was too short notice for me

Comment by pongo on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-22T21:40:29.463Z · score: 3 (2 votes) · LW · GW

This is an aside, but I remain really confused by the claim that RL algorithms will tend to find policies close to the optimal one. Is inductive bias not a thing for RL?

Comment by pongo on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-22T18:06:11.769Z · score: 1 (3 votes) · LW · GW

Agreed, which is why I focused on the AF karma rather than the LW karma

I think it's worth pointing out that I originally saw this posted just to LW; it must have been manually promoted to AF by a mod. I partly want to point this out because possibly one of the main errors is people updating too much on promotion as a signal of quality

Comment by pongo on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-20T23:18:56.833Z · score: 15 (6 votes) · LW · GW

I imagine this was not your intention, but I'm a little worried that this comment will have an undesirable chilling effect. I think it's good for people to share when members of DeepMind / OpenAI say something that sounds a lot like "we found evidence of mesaoptimization".

I also think you're right that we should be doing a lot better at pushing back against such claims. I hope LW/AF gets better at being as skeptical of AI researchers' assertions that support risk as they are of those that undermine risk. But I also hope that when those researchers claim something surprising and (to us) plausibly risky is going on, we continue to hear about it.

Comment by pongo on ricraz's Shortform · 2020-08-20T20:21:34.835Z · score: 5 (3 votes) · LW · GW

It seems really valuable to have you sharing how you think we’re falling epistemically short and probably important for the site to integrate the insights behind that view. There are a bunch of ways I disagree with your claims about epistemic best practices, but it seems like it would be cool if I could pass your ITT more. I wish your attempt to communicate the problems you saw had worked out better. I hope there’s a way for you to help improve LW epistemics, but also get that it might be costly in time and energy.

Comment by pongo on Alex Irpan: "My AI Timelines Have Sped Up" · 2020-08-20T00:40:37.788Z · score: 24 (8 votes) · LW · GW

Whoa, I hadn’t noticed that. The old predictions put 40% probability on AGI being developed within a 5-year window

Comment by pongo on Multitudinous outside views · 2020-08-19T22:07:50.170Z · score: -1 (2 votes) · LW · GW

Isn't this just messing up Bayes' rule on their part? AFAIU, the multiplicative increase in the log odds is not particularly meaningful. [Edit: I'm currently interpreting the downvote as either my explanation not being sufficiently charitable or me being wrong about the multiplicative increase in log odds. Would be down to hear more about the mistake I'm making]
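For reference, Bayes' rule in log-odds form (a standard identity, not from the thread):

$$\log\frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \log\frac{P(H)}{P(\lnot H)} \;+\; \log\frac{P(E \mid H)}{P(E \mid \lnot H)}$$

Evidence contributes an additive term, so the ratio of posterior to prior log odds depends on where the prior happened to sit, which is why a multiplicative increase in log odds isn't a meaningful measure of evidence strength.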

Comment by pongo on Alignment By Default · 2020-08-12T22:59:39.147Z · score: 3 (2 votes) · LW · GW

Contrast to e.g. an AI which is optimizing for human approval. [...] When that AI designs its successor, it will want the successor to be even better at gaining human approval, which means making the successor even better at deception.

Is the idea that the AI is optimizing for humans approving of things, as opposed to humans approving of its actions? It seems that if it's optimizing for humans approving of its actions, it doesn't necessarily have an incentive to make a successor that optimizes for approval (though I admit it's not clear why it would make a successor at all in this case; perhaps it's designed not to plan against being deactivated after some time)

Comment by pongo on Infinite Data/Compute Arguments in Alignment · 2020-08-05T20:18:08.192Z · score: 3 (2 votes) · LW · GW

Interesting to compare/contrast with "The Ideal Fades into the Background" from What does it mean to apply decision theory? (to be clear, I don't think the two posts are opposed)

Comment by pongo on Buck's Shortform · 2020-08-04T19:34:10.421Z · score: 1 (1 votes) · LW · GW

Because I'm dumb, I would have found it easier to interpret the graph if the takeoff curves crossed at "current capabilities" (and thus reached high levels of capability at different times)

Comment by pongo on Three mental images from thinking about AGI debate & corrigibility · 2020-08-03T20:46:35.251Z · score: 1 (1 votes) · LW · GW

Perhaps an aside, but it seems worse for an AI to wander into "riskiness" and "incorrigibility" for a while than it is good for it to be able to wander into "risklessness" and "corrigibility" for a while. I expect we would be wiped out in the risky period, and it's not clear enough information would be preserved such that we could be reinstantiated later (and even then, it seems a shame to waste the whole period where the Universe is being used for ends we wouldn't endorse -- a sort of 'periodic astronomical waste')

Comment by pongo on Why isn't there a LessWrong App? Is a "blog-app" sustainable/useful for a community? · 2020-08-03T18:33:38.947Z · score: 1 (1 votes) · LW · GW

I’m still not sold, but I think it’s pretty awesome you wrote this out in response to what is, ultimately, someone poking their nose into your internal high-context LW dev models

Comment by pongo on Why isn't there a LessWrong App? Is a "blog-app" sustainable/useful for a community? · 2020-08-03T07:29:06.722Z · score: 2 (2 votes) · LW · GW

though performance wise it would be worse, since progressive web-apps run on JavaScript and native apps get to be written in much faster languages

I don't know if progressive web apps are slower than native apps, but if they are, I doubt it's because of the speed of the languages ("JavaScript used to be slow and weird, and now it's fast and weird")

Comment by pongo on Abstraction = Information at a Distance · 2020-08-03T07:06:26.528Z · score: 1 (1 votes) · LW · GW

The informal section at the beginning is the piece of your writing that clicked the most for me. I really like only caring about the part of the local information that matters for the global information.

I remain confused about how to think about abstraction leaks in this world (like row hammer messing with our ability to ignore the details of charges in circuitry)

Comment by pongo on AllAmericanBreakfast's Shortform · 2020-08-01T22:46:57.624Z · score: 9 (4 votes) · LW · GW

Like to explain why I think as I do, I'd need to go through some basic rational concepts.

I believe that if the rational concepts are pulling their weight, it should be possible to explain the way the concept is showing up concretely in your thinking, rather than justifying it in the general case first.

As an example, perhaps your friend is protesting your use of anecdotes as data, but you wish to defend it as Bayesian, if not scientific, evidence. Rather than explaining the difference in general, I think you can say "I think that it's more likely that we hear this many people complaining about an axe murderer downtown if that's in fact what's going on, and that it's appropriate for us to avoid that area today. I agree it's not the only explanation and you should be able to get a more reliable sort of data for building a scientific theory, but I do think the existence of an axe murderer is a likely enough explanation for these stories that we should act on it"

If I'm right that this is generally possible, then I think this is a route around the feeling of being trapped on the other side of an inferential gap (which is how I interpreted the 'weird tension')

Comment by pongo on Basic Conversational Coordination: Micro-coordination of Intention · 2020-07-28T00:47:01.329Z · score: 1 (1 votes) · LW · GW

Yeah, this can make it very hard not to be in “formulating” mode as opposed to “listening” mode when not speaking

Comment by pongo on Lessons on AI Takeover from the conquistadors · 2020-07-25T22:43:41.097Z · score: 1 (1 votes) · LW · GW

There's also never been someone with 100% of the money, even though getting money tends to make it easier to get more money.

Oh yeah! What's up with that?

Nevertheless it's still true that, in most cases, conquering something gives you more resources, military force, etc.

Yeah, that seems to be true. My intuition is still having trouble with the success of converting the conquered military. Wikipedia tells me it was a big deal, and it remains surprising to me

Comment by pongo on Lessons on AI Takeover from the conquistadors · 2020-07-25T22:31:52.904Z · score: 1 (1 votes) · LW · GW

OK, so maybe the idea is "Conquered territory has reified net production across however long a period -> take all the net production and spend it on ships / horses / mercenaries"?

I expect that the administrative parts of states expand to be about as expensive as the resources they can get under their direct control. (Perhaps this is the dumb part, and ancient states regularly stored >90% of tax revenue as treasure?) Then, when you make the state more expensive to run, you have less of a surplus. You also can't really make the state do something different from what it was doing before if you have low-fidelity control, and the state doing what it was doing before probably wasn't helping you conquer more territory.

Comment by pongo on Lessons on AI Takeover from the conquistadors · 2020-07-25T17:20:01.992Z · score: 1 (1 votes) · LW · GW

That’s a good point!

Edit: I think my confusion is that there’s never been a whole world empire. Shouldn’t that happen if conquering a neighbouring region tends to make you more able to conquer other regions?

Alexander’s empire didn’t last. It seems like shortly after being founded, the Roman Empire had a lot of trouble, and then (from my skim of Wikipedia) Diocletian sort of split it in four?

Comment by pongo on Mark Xu's Shortform · 2020-07-25T04:28:23.273Z · score: 1 (1 votes) · LW · GW

Decision-theoretic uncertainty seems easier to deal with, because you’re trying to maximise expected value in each case, just changing what you condition on in calculating the expectation. So you can just take the overall expectation given your different credences in the decision theories
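Spelling that out (standard notation, not from the shortform): with credences $P(T_i)$ over decision theories $T_i$, the overall value of an action $a$ is

$$V(a) \;=\; \sum_i P(T_i)\; \mathbb{E}[\,U \mid a,\, T_i\,],$$

where each theory only changes what the inner expectation conditions on.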

Comment by pongo on Lessons on AI Takeover from the conquistadors · 2020-07-24T07:45:25.263Z · score: 1 (1 votes) · LW · GW

(2) It seems expensive to run a state (maintain power structures, keep institutions intact for future benefit, keep everything running well enough that the stuff that depends on other things running well keeps running). Increasing the cost by a large factor seems like it would reduce the net resources extracted.

It seems even more expensive if the native population will continue intermittently fighting you for 400 years (viz. your rebellion fact)

Comment by pongo on Can you get AGI from a Transformer? · 2020-07-23T21:09:55.114Z · score: 4 (3 votes) · LW · GW

To check, I think you're saying:

You believe it is inefficient to simulate some algorithms with a DNN. But the algorithms only matter for performance on various tasks. You give an example where a hard-to-simulate algorithm was used in an AI system, but when we got rid of the hard-to-simulate algorithm, it performed comparably well.

Comment by pongo on How good is humanity at coordination? · 2020-07-23T20:22:19.502Z · score: 2 (2 votes) · LW · GW

Really appreciate you taking the time to go through this!

To establish some language for what I want to talk about: your setup has two world sets (each with a prior of 50%) and six worlds (three in each world set). A possible error I was making was just thinking in terms of one world set (or, one hypothesis: C), and not thinking about the competing hypotheses.

I think in your SSA argument, you treat all observers in the conditioned-on world set as "actually existing". But shouldn't you treat only the observers in a single world as "actually existing"? That is, you notice you're in a world where everyone survives. If C is true, the probability of this, given that you survived, is (0.7/0.9)/(0.7/0.9 + 0.2/0.9) = 7/9.
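Spelling out that arithmetic (the 0.9 normalizations cancel):

$$\frac{0.7/0.9}{\,0.7/0.9 + 0.2/0.9\,} \;=\; \frac{0.7}{0.7 + 0.2} \;=\; \frac{7}{9}$$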

And then what I wanted to do with SIA is to use a similar structure to the not-C branch of your SSA argument to say "Look, we have a 10/11 chance of being in an everyone-survived world even given not-C. So it isn't strong evidence for C to find ourselves in an everyone-survived world".

It's not yet clear to me (possibly because I am confused) that I definitely shouldn't do this kind of reasoning. It's tempting to say something like "I think the multiverse might be such that measure is assigned in one of these two ways to these three worlds. I don't know which, but there's not an anthropic effect about which way they're assigned, while there is an anthropic effect within any particular assignment". Perhaps this is more like ASSA than SIA?

Comment by pongo on How good is humanity at coordination? · 2020-07-22T22:10:30.028Z · score: 2 (2 votes) · LW · GW

Yep, sorry, looks like we do disagree.

Not sure I'm parsing your earlier comment correctly, but I think you say "SIA says there should be more people everywhere, because then I'm more likely to exist. More people everywhere means I think my existence is evidence for people handling nukes correctly everywhere". I'm less sure what you say about SSA, either "SSA still considers the possibility that nukes are regularly mishandled in a way that kills everyone" or "SSA says you should also consider yourself selected from the worlds with no observers".

Do I have you right?

I say, "SIA says that if your prior is '10% everyone survives, 20% only 5% survive, 70% everyone dies', and you notice you're in a 'survived' world, you should think you are in the 'everyone survives' world with 90% probability (as that's where 90% of the probability-weighted survivors are)".
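A quick check of that arithmetic, as a sketch (the survivor fractions follow the stated prior; absolute population sizes don't matter, only the fractions):

```python
priors = {"everyone survives": 0.10, "5% survive": 0.20, "everyone dies": 0.70}
survivors = {"everyone survives": 1.00, "5% survive": 0.05, "everyone dies": 0.00}

# SIA: weight each world by its probability-weighted share of survivors.
weights = {w: priors[w] * survivors[w] for w in priors}
total = sum(weights.values())  # 0.11
posterior = {w: weights[w] / total for w in weights}

print(posterior["everyone survives"])  # 0.909... = 10/11, i.e. roughly 90%
```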

Comment by pongo on Alignment As A Bottleneck To Usefulness Of GPT-3 · 2020-07-22T19:50:51.795Z · score: 3 (2 votes) · LW · GW

I agree that most of my concern has moved to inner (and, in particular, deceptive) alignment. I still don't quite see how to get enough outer alignment to trust an AI with the future lightcone, but I am much less worried about it.

Comment by pongo on How good is humanity at coordination? · 2020-07-22T17:23:25.037Z · score: 1 (1 votes) · LW · GW

I think you misread which direction the ‘“much weaker” evidence’ is supposed to be going, and that we agree (unless the key claim is about SIA exactly balancing selection effects)

Comment by pongo on Lessons on AI Takeover from the conquistadors · 2020-07-22T02:48:54.448Z · score: 2 (2 votes) · LW · GW

Thanks for sharing this data.

The lesson I draw from (1) is that in fact I should not think that conquering some areas helps you conquer others. Rather, when entering some area, it is possible to draw local support in the first stages of a war. This updates me back towards thinking it's costly to control a newly conquered area.

The lesson I draw from (2) is that you can continue to make use of some of the state capacity of the native power structures. But it seems like you have fairly low fidelity control (at least in the language barrier case, and probably in all cases, because you lack a lot of connections to informal power structures). This seems like mostly a wash?

Are these the same as the lessons you draw from this data?

Comment by pongo on How good is humanity at coordination? · 2020-07-22T01:24:21.433Z · score: 7 (4 votes) · LW · GW

Seems like it's "much weaker" evidence if you buy something like SIA, and only a little weaker evidence if you buy something like SSA.

To expand: imagine a probability distribution over the amount of person-killing power that gets released as a consequence of nukes. Imagine it's got a single bump well past the boundary where total extinction is expected. That means worlds where more people die are more likely[1].

If you sample, according to its probability mass, some world where someone survived, then our current world is quite surprising.

If instead you upweight the masses by how many people are in each, then you aren't that surprised to be in our world

[1]: Well, there might be a wrinkle here with the boundary at 0 and a bunch of probability mass getting "piled up" there.

Comment by pongo on Lessons on AI Takeover from the conquistadors · 2020-07-21T16:29:12.346Z · score: 1 (1 votes) · LW · GW

Yeah, I’m confused about how quickly one can convert plunder into reinforcements. But I certainly update on the conversion of the conquered. Thanks!

Comment by pongo on Lessons on AI Takeover from the conquistadors · 2020-07-21T05:45:16.419Z · score: 1 (1 votes) · LW · GW

In my model, to retain control of a conquered area you must commit military forces for some time, and you are not much able to convert the conquered populace into additional military force.

This is not a very data-informed model though.

Comment by pongo on Lessons on AI Takeover from the conquistadors · 2020-07-21T00:42:39.442Z · score: 1 (1 votes) · LW · GW

My naïve model is that within ~a generation, maintaining control of a newly conquered region is a net cost, or at least near zero

Comment by pongo on Solving Math Problems by Relay · 2020-07-19T01:25:51.801Z · score: 1 (1 votes) · LW · GW

To get sufficient training data, it must surely be “a human” (in a generic, smushed-together, ‘modelling an ensemble of humans’ sense)

Comment by pongo on bgold's Shortform · 2020-07-17T17:20:58.784Z · score: 36 (7 votes) · LW · GW

Looks like the Monkey's Paw curled a finger here ...

Comment by pongo on benwr's unpolished thoughts · 2020-07-16T22:34:28.963Z · score: 2 (2 votes) · LW · GW

Oh no, ".projekt" can't be played on recent versions of MacOS! :(