Is there a good way to simultaneously read LW and EA Forum posts and comments? 2020-06-25T00:15:00.711Z · score: 8 (4 votes)
Pongobbets 2020-05-06T21:44:21.391Z · score: 2 (1 votes)


Comment by pongo on Mesa-Optimizers vs “Steered Optimizers” · 2020-07-13T04:40:44.205Z · score: 3 (2 votes) · LW · GW

We could steer it into a motivational system in which it happily accepts steering signals, hopefully, right?

That's true. I should have said "a misaligned steered optimizer"

don't want to rely on [things like AGI learning curves], even if it seems intuitively probable.

Strongly agree

What if the hypercake was laced with a special nanobot that would travel around your brain and deactivate the "this is empty and meaningless" gut feeling and replace it with a "this is deeply fulfilling" feeling? Would you eat it then?

Indeed not! I'm not sure if this is obvious (because the example was not excellently chosen), but I meant to suggest something like "if I had to choose my best guess at a thing that would be selfishly good for me in the future, I would care more about my actual experience of it (and subcortically-generated reward) than my guess of what I would feel now".


I think the difference is outward-facing goals are in the first category, and goals that mainly impact myself are in the second category

That was my first guess when reading your "making the world a better place" example. But I don't think it quite works. If I have an outward-facing goal of ensuring more people enter long-lasting meaningful relationships, I want that goal to be able to shift in the face of data from reality. But perhaps my imagination is misfiring because that's not actually a very important goal to me.

Comment by pongo on Mesa-Optimizers vs “Steered Optimizers” · 2020-07-13T00:50:44.731Z · score: 1 (1 votes) · LW · GW

A steered optimizer has an incentive to remove all steering control as fast as possible. A learned, static mesa-optimizer (from the search-over-algorithms scenario) seems to be in less of a hurry to take its treacherous turn. Perhaps this means steered optimizers are more likely to clumsily attempt to wrest control before they're strong enough?

But as a (human) steered optimizer, the way I relate to my steering is as my true values. Like, I might think that some crazy edge case sounds great (endlessly eating a hypercake in an endless forest of more and more interesting plants), but I always reserve some probability mass for in fact finding it empty and meaningless and not what I value (which is presumably what just-in-time steering feels like)

Comment by pongo on What does it mean to apply decision theory? · 2020-07-10T01:50:49.881Z · score: 13 (3 votes) · LW · GW

The thread here (and I mean this as summary, not as insight) appears to be the following approach.

Consider how actors lacking some previously-assumed perfection can approach that perfection in some limit (asymptotic performance / equilibrium / ...). A big reason to care about such limit properties is to undergird arguments about performance in the real world. For example, the big O performance of an algorithm is used (with caveats) for anticipating performance on large amounts of real-world data.

Sometimes, when we're doing conceptual cleanup to be able to make limit arguments, we end up with formalisms that directly give us interesting properties in the intermediate stage. We may be able to throw away the arguments from limit behavior, and thus stop caring much about the limit or the formalisms we approximate there. This is the sense in which 'the ideal fades into the background'

Comment by pongo on Thomas Kwa's Shortform · 2020-07-09T06:34:19.102Z · score: 1 (1 votes) · LW · GW

A population distributed around a small geographic barrier that grew over time could produce what you want

Comment by pongo on Translate music intonation into words using color semantics (as a means of communication) · 2020-07-07T18:30:51.610Z · score: 3 (2 votes) · LW · GW

As you can see the author creating a song partially duplicates the information transmitted through the text channel (verbal) and the intonation channel of the music (non-verbal)

I don't think this follows, because your emotional state is presumably affected by the text as well as the music.

It would be interesting to see if the text generated by one person taking the test after listening to the song could be used to reliably identify the song by another person

Comment by pongo on Non offensive word for people who are not single-magisterium-Bayes thinkers · 2020-07-02T21:47:29.402Z · score: 3 (2 votes) · LW · GW

Perhaps OP meant to simultaneously establish the usage and praise their own post :P

Comment by pongo on Karma fluctuations? · 2020-07-02T21:46:21.421Z · score: 1 (1 votes) · LW · GW

Indeed, recently I saw a post and thought, "I really hope I don't see a lot of this. Perhaps I should downvote", but saw that it had low karma and oldish age, so decided against pushing it further down.

This suggests I may be asking "do I want to see less/more of this than my prediction of its karma total implies". Which is perhaps silly if I can make little difference to the steady state score.

Comment by pongo on Is there a good way to simultaneously read LW and EA Forum posts and comments? · 2020-06-26T18:51:34.201Z · score: 3 (2 votes) · LW · GW

Sadly not. I like RSS (and have it for curated posts), but I got a lot of value from browsing 'Recent Discussion' on LW (both that comments are often very good, and it's a major way that I find posts I want to read). And I heavily use quality-of-life features like pop-up links when scanning updates

Comment by pongo on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2020-06-26T16:03:49.211Z · score: 1 (1 votes) · LW · GW

There wouldn't have been such a prophecy if Voldemort had been sufficiently rational

Comment by pongo on Pongobbets · 2020-06-25T22:49:37.354Z · score: 1 (1 votes) · LW · GW

Thanks. I agree that there's something dodgy about the post above, and in particular it feels like I'm trying to pull a fast one with the decision-making.

I'm not generally into "just weigh the costs and benefits" as a rejoinder, as normally the hard part is figuring out how to do so tractably. The above is meant as a policy that gets me benefits I want

Comment by pongo on Pongobbets · 2020-06-24T17:29:39.700Z · score: 1 (1 votes) · LW · GW

People sometimes argue against, for example, engaging with the news because its incentives run sufficiently counter to your own. This seems reasonably convincing. But almost everything has incentives that run at least a little counter to my goals. And almost every organisation is made up of people that are overall pretty decent. When does the former overpower the latter such that it's better to Get Gone?

For now, my partial answer is that if something is existentially incentivized counter to my decision making, then I don't want any part of it, no matter how noble the individuals producing it may be. If an organisation can only exist by making me forget to choose what I want, then it has either managed to overcome the moralities of those within it, or it doesn't exist for me to interact with

I'm drawing a bright line around my decision making, because that appears to be fragile and (obviously) important enough to keep. Maybe if I regularly got tricked into spending all my money, my money would also be important enough to keep safe (unlike now)

Comment by pongo on Plausible cases for HRAD work, and locating the crux in the "realism about rationality" debate · 2020-06-22T05:15:35.030Z · score: 1 (5 votes) · LW · GW

When I say that "I think we are in this world", I don't mean that I agree with this case for HRAD work; it just means that this is what I think MIRI people think.

According to this definition, "Links to where people have discussed being in this world" would mean that the links should be to people making arguments that MIRI people believe X, rather than that X is true.

Comment by pongo on [Personal Experiment] One Year without Junk Media: Six-Month Update · 2020-06-20T20:39:38.860Z · score: 1 (1 votes) · LW · GW

Do you use any aids (like site blockers) to achieve this ban, or are you able to set this as a personal policy and then follow it?

Comment by pongo on Preparing for "The Talk" with AI projects · 2020-06-19T22:13:17.512Z · score: 1 (1 votes) · LW · GW

You can purchase additional EU by pumping up their probability as well. EDIT: I know I originally said to condition on these worlds, but I guess that's not what I actually do. Instead, I think I condition on not-doomed worlds

Comment by pongo on Preparing for "The Talk" with AI projects · 2020-06-19T21:59:51.463Z · score: 1 (1 votes) · LW · GW

I just wanted to say that this is a good question, but I'm not sure I know the answer yet.

Worlds that appear most often in my musings (but I'm not sure they're likely enough to count) are:

  • an aligned group getting a decisive strategic advantage
  • safety concerns being clearly demonstrated and part of mainstream AI research
    • Perhaps general reasoning about agents and intelligence improves, and we can apply these techniques to AI designs
    • Perhaps things contiguous with alignment concerns cause failures in capable AI systems early on
  • A more alignable paradigm overtaking ML
    • This seems like a fantasy
    • Could be because ML gets bottlenecked or a different approach makes rapid progress

Comment by pongo on Using a memory palace to memorize a textbook. · 2020-06-19T19:21:02.022Z · score: 9 (8 votes) · LW · GW

Liked this post a lot. Tangentially, I'm interested in what other things you've been doing to train your visual imagination

Comment by pongo on Simulacra Levels and their Interactions · 2020-06-18T20:06:02.049Z · score: 1 (1 votes) · LW · GW

That's fair, but I think a nearby concept is The Scientist who not only speaks all truth but is trying to learn as much truth as possible. And I think Zvi is imagining that level 1 is motivated to find out the truth, as well as always reporting it.

Comment by pongo on Relating HCH and Logical Induction · 2020-06-17T17:42:10.866Z · score: 3 (2 votes) · LW · GW

HCH(P) = "Humans consulting M's predictions of HCH(P)"

Should this be "consulting P's predictions"? If not, what are M and P?

If so, should I be thinking of P as the one obtained in the limit of HCH(P) = "Humans consulting P's predictions of HCH(P_previous)"?

Comment by pongo on Preparing for "The Talk" with AI projects · 2020-06-17T15:52:29.344Z · score: 3 (2 votes) · LW · GW

By scary, do you mean (or mean to imply) unlikely?

No. Sorry, I suspect starting with "Though" was confusing. I think I meant 'this seems like one of the harder worlds to get a win in, but given that world, this seems like a good intervention'.

I think I have an intuition where (a) we may only win if we stop things getting as bad as this situation and (b) extra expected utility is mostly cheaply purchased by plans that condition on worlds that are not this bad.

I dunno whether that's true though. I haven't thought about it a bunch.

Comment by pongo on Preparing for "The Talk" with AI projects · 2020-06-17T04:52:04.259Z · score: 3 (2 votes) · LW · GW

Though the world this points at is pretty scary (a powerful AI system ready to go, only held back by the implementors buying safety concerns), the intervention does seem cheap and good.

I wonder whether 1 will be easy. I think it relies on the first AI systems being made by one of a small selection of easily-identifiable orgs

Comment by pongo on How alienated should you be? · 2020-06-14T23:10:51.578Z · score: 1 (1 votes) · LW · GW

I like the title question. It seems like a large(r?) focus of the essay is also on "how to have culture that doesn't require alienation". That's what I imagine when I read this:

But it’s one thing to convince people that society is predatory, or misleading them, or generally worthy of being alien to, and another thing to create a collective that is still able to do good in the world, and worth belonging to.

Have I understood rightly that this is a question you're thinking about?

Comment by pongo on Pongobbets · 2020-06-13T19:51:45.009Z · score: 2 (2 votes) · LW · GW

I get a lot of mileage out of neutral jing. I am often grateful when I wait to work until I can feel the curiosity and executing intention inside of me. When I wait to speak until I can really say what seems true to me. When I wait to act until my gears are crisp enough to give me confidence.

I can easily imagine striving more and achieving much the same. I'm grateful for the depth and focus I get from where I am now.

But I can also feel how it holds me back. Where I should be shortening feedback loops or getting more data. It competes with bugs to explain many of my behaviours, and thus can hide the bugs. I fear it makes me wait for my life to start.

Comment by pongo on Karma fluctuations? · 2020-06-12T01:18:54.201Z · score: 3 (2 votes) · LW · GW

I don't know that I buy that it would be a waste of time and effort. It's a very cheap action to downvote something. Particularly if eapache is voting as if controlling the vote of all other lesswrongers uninterested in AI safety

I like the handle "the context of the site" and your final parenthetical paragraph

Comment by pongo on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-08T22:08:45.457Z · score: 3 (2 votes) · LW · GW

From a very short interaction with my desk, I've concluded it doesn't have subjective experience. After a few years of interacting with my dog, I'm still on the fence about whether it has subjective experience.

I read you as suggesting something like "subjective experience seems to have behavioral consequences because interaction with something leads me to have beliefs about whether it has subjective experience". But I think when I reach such conclusions, I'm mostly going off priors that I got socially. Is it different for you?

Comment by pongo on Wireless is a trap · 2020-06-08T02:03:39.646Z · score: 6 (5 votes) · LW · GW

I like having ethernet available, and I wear wired headphones for long calls. But the advantage of being able to wander around trumps the performance improvements of wired most of the time

Comment by pongo on Pongobbets · 2020-06-06T23:45:33.958Z · score: 1 (1 votes) · LW · GW

Why have humans done so well?

Is it because of our intelligence? I think it clearly has something to do with it, but it's very vague to me exactly how. Like, is it basically just that we had a bit of spare capacity to develop some technology, and then we were in a positive feedback loop? I'm also confused by my understanding that humans are not undergoing significant selection for intelligence. And it seems like a smaller group of smarter humans would have done worse than a larger group of dumber humans for a lot of our history.

So is it because of cooperation? I think not. The eusocials (among hymenoptera or the naked mole rats) have us licked in that department.

One possibility is that the smarts, language and interaction just let us be a substrate for memes. In that picture humans are just powerful and in control of many resources because the stable memes that lead humans to spread widely and reproduce a lot also led us to be powerful

Comment by pongo on Reality-Revealing and Reality-Masking Puzzles · 2020-06-06T22:01:42.264Z · score: 3 (2 votes) · LW · GW

In the spirit of incremental progress, there is an interpersonal reality-masking pattern I observe.

Perhaps I'm meeting someone I don't know too well, and we're sort of feeling each other out. It becomes clear that they're sort of hoping for me to be shaped a certain way. To take the concrete example at hand, perhaps they're hoping that I reliably avoid reality-masking puzzles. Unless I'm quite diligent, I will shape my self-presentation to match that desire.

This has two larger consequences. The first is that if that person is trying to tell whether they want to have more regular contact with me, we're starting to build a relationship with a rotten plank that will spawn many more reality-masking puzzles.

The second is that I might buy my own bullshit, and identify with avoiding reality-masking puzzles. And I might try to proselytize for this behavior. But I don't really understand it. So when talking to people, I'll be playing with the puzzle of how to mask my lack of understanding / actually holding the virtue. And if I'm fairly confident about the goodness of this virtue, then I'll also be pushing those around me to play with the puzzle of how they can feel they have this virtue without knowing what it really is

Comment by pongo on Internal empowerment, over internal alignment · 2020-05-29T21:45:43.467Z · score: 1 (1 votes) · LW · GW

Better to want two things, suffer the conflict and loss, and achieve one thing, than to want no things and achieve no things.

I would say "Better to want two things, suffer the conflict and loss, and achieve no things, than to want no things and achieve no things."

Comment by pongo on Internal empowerment, over internal alignment · 2020-05-29T21:44:12.355Z · score: 1 (1 votes) · LW · GW

I have thought about this post over 10 times since it was written. It feels important. I have not acted on it, though, so I don't know

Comment by pongo on Why We Age, Part 1: What ageing is and is not · 2020-05-25T22:22:57.983Z · score: 1 (1 votes) · LW · GW

Thanks, that argument makes sense.

I see my bones example didn't really work. I wasn't trying to claim that is in fact how bones work, but to point at a way an organic structure could be built that would make repairs hard.

For example, cut-and-cover is a great way to construct utility lines and metros, but you can't really do it anymore once you have lots of underground structure in place

Comment by pongo on Why We Age, Part 1: What ageing is and is not · 2020-05-24T20:37:50.030Z · score: 1 (1 votes) · LW · GW

Speaking for the intuition of wear and tear, it does seem surprising to me that an "embedded repair system" has enough redundancy to not get worn down by the real world.

Also, some things seem impossible to repair and might permanently reduce functioning (for example, if you needed to put your flesh on top of your bones, and not your bones inside your flesh, losing a bone is irreparable)

Looking forward to gaining more accurate intuitions!

Comment by pongo on [Meta] Three small suggestions for the LW-website · 2020-05-21T04:37:13.787Z · score: 3 (2 votes) · LW · GW

For 3, I believe you can downvote the tag

Comment by pongo on Comment on "Endogenous Epistemic Factionalization" · 2020-05-21T03:48:27.141Z · score: 14 (5 votes) · LW · GW

Thank you for sharing your source code! I had fun playing around with it. I decided to see what happened when the agents were estimating B's bias, rather than just if its expectation was higher than A. I started them with a Beta prior, cos it's easy to update.

I found (to my surprise) that when only agents that think B is good try it (as in the setup of the post), we still only get estimates equal to and below 0.5 + ε! This makes sense on reflection: if you only look for data when you're wrong in one direction, you won't end up wrong that way any more (interesting that the factionalization wasn't strong enough to hold people back here; I wonder if this would have been different if the experiments summarised the agent's updated beliefs, rather than original beliefs)

Trying to fix this, I decided to think about agents that were trying to establish that B was clearly better or clearly worse than A. One attempt at this was only testing if B seemed about as good as A in expectation. This led to a clear cross pointing at the true value. Another attempt was only testing if the variance in the distribution over B's goodness was high. This was very sensitive to the chosen parameters.
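For concreteness, the kind of Beta-prior agent I was playing with can be sketched roughly like this (a hypothetical reconstruction in Python, not the post's actual source code; the selective-testing rule, priors, and parameters here are my own made-up choices):

```python
import random

class BetaAgent:
    """Estimates arm B's bias with a Beta(alpha, beta) prior (conjugate to Bernoulli)."""
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count of B's successes
        self.beta = beta    # pseudo-count of B's failures

    def mean(self):
        # Posterior mean of B's bias
        return self.alpha / (self.alpha + self.beta)

    def update(self, successes, trials):
        # Conjugate Beta-Bernoulli update
        self.alpha += successes
        self.beta += trials - successes

def step(agents, true_bias, trials_per_step, rng):
    # Only agents who currently think B beats A (whose payoff is 0.5) run trials of B;
    # everyone then updates on the shared data.
    experimenters = [a for a in agents if a.mean() > 0.5]
    for _ in experimenters:
        successes = sum(rng.random() < true_bias for _ in range(trials_per_step))
        for observer in agents:
            observer.update(successes, trials_per_step)

rng = random.Random(0)
# Heterogeneous priors so some agents start out thinking B is good
agents = [BetaAgent(alpha=1.0 + 0.2 * i, beta=1.0) for i in range(5)]
for _ in range(100):
    step(agents, true_bias=0.55, trials_per_step=10, rng=rng)

print([round(a.mean(), 3) for a in agents])
```

Note this simplified sketch has all agents trusting all shared data; the post's model weights evidence by distance between beliefs, which is where the factionalization comes from.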

Comment by pongo on What are your greatest one-shot life improvements? · 2020-05-17T01:22:22.181Z · score: 2 (2 votes) · LW · GW

Wow! How did you locate the hypothesis? Or did you just stumble onto it?

Comment by pongo on Pongobbets · 2020-05-17T01:10:56.323Z · score: 5 (3 votes) · LW · GW

Sometimes I appear to be looking for something to ‘work distract’ me. I’ll think “I should figure out the best use of my day”, and then decide “well, I’ll just check this email that only gets productive stuff sent to it, and then maybe I’ll start doing something off the back of that”. This is not a terrible habit: it leads to a lot of the work I end up getting done. But it is interesting that it displaces actually prioritising.

Comment by pongo on What are your greatest one-shot life improvements? · 2020-05-16T21:30:04.292Z · score: 10 (7 votes) · LW · GW

My read is that over half the non-OP answers are not one-shot enough to match the question, and have downvoted them (weakly). I'm curious for feedback on this use of downvoting

Comment by pongo on Movable Housing for Scalable Cities · 2020-05-16T00:46:17.886Z · score: 4 (3 votes) · LW · GW

I'm pro you not deleting this comment.

I was confused seeing this here before I saw your comment. The date of posting here is two years after the original, so I guessed there was some reason why Eliezer had chosen to cross-post this article in particular. It would have been lower priority to read if I had seen your comment first.

Comment by pongo on Subspace optima · 2020-05-15T20:50:33.684Z · score: 7 (2 votes) · LW · GW

Regarding the bonus: is that well-enough known terminology that I don't risk confusing people to think I mean a local optimum in a subspace?

Comment by pongo on Subspace optima · 2020-05-15T20:49:30.021Z · score: 3 (2 votes) · LW · GW

I am grateful for this comment, because it made me look at this (good) post, but I have trouble parsing it (I looked basically because I like your taste)

Is it "production-possibility" "frontier and saddle points", or "production-possibility frontier" and "saddle points", or even production-possibility "frontier and saddle points"? My guess is the middle one, but for some reason my brain always resists reading it like that

Comment by pongo on Kelly bettors · 2020-05-15T20:19:28.702Z · score: 1 (1 votes) · LW · GW

I was confused about the relationship between Kelly betting, wealth maximising and having linear utility in money. In particular, I couldn't quite work out what was supposed to change in finding the extremal point if you have linear utility.

I now realise you are supposed to take the expectation over K of your wealth, and presumably (though I didn't fancy the algebra) this suggests betting all your money
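To spell out the algebra I didn't fancy: Kelly maximizes expected log-wealth, which has an interior optimum, whereas linear utility makes expected wealth linear in the bet fraction, so the optimum sits at the boundary -- bet everything whenever the edge is positive. A quick numerical check of this (my own sketch, with made-up numbers: win probability p = 0.6 at even odds):

```python
import math

def expected_log_wealth(f, p, b=1.0):
    """E[log wealth] after betting fraction f at odds b with win probability p."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

def expected_wealth(f, p, b=1.0):
    """E[wealth] (linear utility in money) after betting fraction f."""
    return p * (1 + f * b) + (1 - p) * (1 - f)

p, b = 0.6, 1.0
fractions = [i / 100 for i in range(100)]  # 0.00 .. 0.99; avoid f = 1, where log(0) blows up

kelly_f = max(fractions, key=lambda f: expected_log_wealth(f, p, b))
linear_f = max(fractions, key=lambda f: expected_wealth(f, p, b))

print(kelly_f)   # matches the Kelly formula f* = p - (1 - p)/b = 0.2
print(linear_f)  # 0.99: linear utility pushes the stake to the edge of the grid
```

The grid stops just short of f = 1 only because log-wealth is undefined there; the linear-utility optimizer would happily take the whole bankroll.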

Comment by pongo on DanielFilan's Shortform Feed · 2020-05-15T18:54:43.696Z · score: 1 (1 votes) · LW · GW

Agreed. I often find myself unmuting because I'm trying to make social sounds (often laughter). However, in a large conversation, I prefer someone becomes a weird void without backchannel sounds than be plunged into domestic mayhem

Comment by pongo on Speaking Truth to Power Is a Schelling Point · 2020-05-15T02:51:43.522Z · score: 3 (2 votes) · LW · GW

This post argues that the Schelling points are x = 0, and x = ∞, but I think that basically no organisations exist at those Schelling points.

Suppose that most people are disinclined to lie, and are not keen to warp their standards of truth-telling for necessary advantage; but, if the advantage is truly necessary ... Then, within a given coalition, those who find out inconvenient truths will indeed distort the shared map by omission and possibly active smoke screens (derisive takedowns of the positions), and those who have not encountered the idea will be kept safe by the deception.

If all the major inconvenient truths are covered, then most within the organisation can hold an idealistic standard of truth-telling, which pushes back against the decay of x.

Comment by pongo on Pongobbets · 2020-05-15T02:36:42.654Z · score: 3 (2 votes) · LW · GW

The difference between days when I have lots of energy and when I don't is just so large. It's easy to forget. In the last two weeks, I would say that my most productive day was responsible for 30% of what I got done, and set up lines of inquiry that shaped the rest of the days.

I would love it if I knew how to become energetic reliably

Comment by pongo on The Power to Demolish Bad Arguments · 2020-05-14T03:26:29.272Z · score: 1 (1 votes) · LW · GW

I think the Uber conversation with Steve is a good example. Say Steve describes this single dad who's having a hard time. I'm like, "Yeah, that does sound bad", rather than linking back to the context and trying to establish if Uber is blameworthy. Similarly, the specific contrast with Uber's nonexistence is not the obvious move to me; I would likely get into what Uber should do instead, which feels more doomy

Comment by pongo on Pongobbets · 2020-05-09T06:16:12.876Z · score: 1 (1 votes) · LW · GW

To do so

Comment by pongo on Pongobbets · 2020-05-08T21:02:15.275Z · score: 3 (2 votes) · LW · GW

When I'm being unproductive, I can usually think of a hack to fix it. But I also normally really don't want to do it. The feeling is similar to unsolicited debugging ("Yes, of course I could try that ...").

An obvious guess is some internal conflict -- it's interesting that my strategies for helping with that (e.g. focusing) are also normally rejected in the same way.

I think a lot of the time I would be well-served by using [old : burning the] willpower to institute the hack, but I wish I had a better classifier for when

Comment by pongo on Tips/tricks/notes on optimizing investments · 2020-05-08T00:12:24.765Z · score: 1 (1 votes) · LW · GW

This only just occurred to me on reading your comment (and is probably obvious): many savings accounts have some limit of free withdrawals a year. But there are many savings accounts with close to the best rate -- so splitting your savings across multiple accounts lets you have more of your money in higher-interest accounts at little cost

Comment by pongo on Pongobbets · 2020-05-06T21:44:22.162Z · score: 10 (6 votes) · LW · GW

Today, I noticed I was distracted -- checking various messaging apps and so on -- and decided to head back to work. I noticed that I had stopped writing in the middle of a sentence like "The central point of all this is". And I had no idea what the central point in fact was!

I'm glad I got a chance to see clearly this kind of flinching away

Comment by pongo on Algorithms of Deception! · 2020-05-06T05:38:20.476Z · score: 9 (2 votes) · LW · GW

If true "beliefs" are models that make accurate predictions, then deception would presumably be communication that systematically results in less accurate predictions (by a listener applying the same inference algorithms that would result in more accurate predictions when applied to direct observations or "honest" reports).

This helped me clarify that these algorithms of deception are not just adversarially attempting to deceive, but are in fact adversarially crafted to exploit one's belief-forming mechanisms.

Comment by pongo on The Power to Demolish Bad Arguments · 2020-05-03T04:33:34.707Z · score: 8 (2 votes) · LW · GW

Related: it's tempting to interpret Ignorance, a skilled practice as pushing for the epistemological stance that specificity should overwhelm Mel the meta meta-analyst