post by [deleted] · GW

This is a link post for


Comments sorted by top scores.

comment by Elo · 2018-04-30T02:26:40.602Z · LW(p) · GW(p)

http://bearlamp.com.au/filter-on-the-way-in-filter-on-the-way-out/

Your model looks a lot like my model but relevant to a different problem.

Replies from: ialdabaoth, mtrazzi
comment by ialdabaoth · 2018-04-30T15:33:16.531Z · LW(p) · GW(p)

It's worth noting that in other cultures, tact explicitly signals a difference in status. It seems obvious, from how nerds are treated in school, that this is true in American English as well, but implicitly rather than explicitly.

Most people, even (especially?) people who grow up in "normal" culture, know that you apply less tact in a conversation when you want third-party listeners to know that you're above the person you're talking to. Nerds never enter the reward-feedback mechanism that trains this; they're at the very bottom, so they never get the differential feedback that would cause them to notice that there's an 'upslope' and a 'downslope', and so they never actually learn to decline. (Declinate? I'm not sure what the correct word is here, but it's the thing that in German differentiates 'Sie haben' from 'Du hast'.)

comment by Michaël Trazzi (mtrazzi) · 2018-04-30T15:22:50.684Z · LW(p) · GW(p)

Yes, basically, communicating is understanding how information is encoded in the receiver's model of reality and encoding your message without hurting their feelings (unless that's necessary).

Here is my feedback on your post:

Tact: Interesting and funny!

Nerd speaking: I think I had already understood your point from the tact-filters quote, so what came after felt redundant.

More Tact: But I nonetheless liked the diagrams.

comment by Gyrodiot · 2018-04-29T19:49:55.137Z · LW(p) · GW(p)

Thanks again for this piece. I'll follow your daily posts and comment on them regularly!

I have a few clarification questions for you:

  • if an AGI could quasi-perfectly simulate a human brain, with human knowledge encoded inside, would your utility function be satisfied?
  • is the goal of understanding all there is to the utility function? What would the AGI do, once able to model precisely the way humans encode knowledge? If the AGI has the keys to the observable universe, what does it do with it?
Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-04-30T15:12:02.969Z · LW(p) · GW(p)

"Thanks again for this piece. I'll follow your daily posts and comment on them regularly!"

Very kind of you!

"if an AGI could quasi-perfectly simulate a human brain, with human knowledge encoded inside, would your utility function be satisfied?"

Interesting thought experiment. I would say no, but it depends on how it simulates the human brain.

If brain scans allowed us to get the position of every cell in a brain quasi-perfectly, and we knew how to model their interactions, we could have an electric circuit without knowing much about the information inside it.

So we would have the code, but nothing about the meaning, and so it would not "understand how the knowledge is encoded".
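(As a rough illustration of "the code without the meaning", here is a toy Python sketch; the network size, random weights, and tanh update rule are all arbitrary assumptions, not a claim about real brains.)

```python
# Toy sketch: run the dynamics of a "scanned" circuit of cells,
# with no semantic labels attached to any cell.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 100                                      # hypothetical number of scanned cells
weights = rng.normal(0, 0.1, (n_cells, n_cells))   # interactions recovered from the scan
state = rng.random(n_cells)                        # initial activity; what it represents is unknown

def step(state, weights):
    """Advance the dynamics one tick; we can do this without knowing
    what any cell's activity means."""
    return np.tanh(weights @ state)

for _ in range(10):
    state = step(state, weights)

# We can reproduce the dynamics (the "code"), but nothing here tells us
# how knowledge is encoded in those activity patterns (the "meaning").
```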

"is the goal of understanding all there is to the utility function? What would the AGI do, once able to model precisely the way humans encode knowledge? If the AGI has the keys to the observable universe, what does it do with it?"

You're absolutely right that it does not constitute a valid utility function for the alignment problem, or for anything really useful, if it is used as the final goal.

My point was that Yudkowsky showed how the encoding utility function is limited, because there is a simple way of maximizing it; but if we changed it to my "understanding the encoding" version, it could be much more interesting, enough to lead to AGI.

Once the AGI knows how humans encode knowledge, it can basically have at least the same model of reality as humans (assuming it has the same input sensors, e.g. eyes, ears, etc., which is not very difficult) by encoding knowledge the same way. And then, because it is in silico (and not in a biological prison), it can do everything humans do, but much faster (basically an ASI).

I guess if it reached this point, it would be equivalent to humans living for thousands of subjective years, and so it would be able to do whatever humans would believe useful if we were given more time to think about it.

Should we implement ought statements inside it, in addition to the "understanding the encoding" utility function? Or is the "faster human" enough? I don't know.

comment by PeterMcCluskey · 2018-05-03T00:26:48.477Z · LW(p) · GW(p)

I'm unsure how much of this post I understand.

For a clear but long explanation of why compression provides a good measure of understanding, see Dan Burfoot's book draft.
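As a toy illustration of that idea (my own, not an example from Burfoot's draft): a general-purpose compressor already "understands" simple repetitive structure, so structured data compresses far better than structureless noise.

```python
# Compare compressed sizes of structured text vs. random bytes of equal length.
import random
import zlib

structured = ("the cat sat on the mat. " * 100).encode()
noise = bytes(random.Random(0).getrandbits(8) for _ in range(len(structured)))

print(len(structured), len(zlib.compress(structured)))  # much smaller than the input: regularities captured
print(len(noise), len(zlib.compress(noise)))            # roughly the input size: nothing to "understand"
```

The general point is that a better model of a data source yields shorter descriptions of its data, which is why compression can serve as a measure of understanding.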