Posts

A Rephrasing Of and Footnote To An Embedded Agency Proposal 2022-03-09T18:13:23.348Z
Exploring Decision Theories With Counterfactuals and Dynamic Agent Self-Pointers 2021-12-18T21:50:13.751Z
A Possible Resolution To Spurious Counterfactuals 2021-12-06T18:26:41.409Z

Comments

Comment by JoshuaOSHickman on Have You Tried Hiring People? · 2022-04-12T01:12:17.158Z · LW · GW

I think you're buying way too much into the hype about how much Alignment Forum posts help you even get MIRI's attention. I have a much easier time asking university departments for feedback, and the process for applying there is much smoother.

Comment by JoshuaOSHickman on Have You Tried Hiring People? · 2022-04-12T01:08:30.897Z · LW · GW

I recently went through the job search process as a software engineer who's had some technical posts approved on the Alignment Forum (really only one core insight, but I thought it was valuable). The process for standard web development jobs is so much better, you genuinely cannot imagine, and in my Alignment Forum posts I was solving a problem MIRI had explicitly said they were interested in. It took (no joke) months to get a response from anyone at MIRI, and that response ended up being a single dismissive sentence. It took less than a month from my first sending a software engineer job application to a normal company to having a job paying [redacted generous offer].

Comment by JoshuaOSHickman on $1000 USD prize - Circular Dependency of Counterfactuals · 2022-01-08T21:54:33.209Z · LW · GW

I was attempting to solve a relatively specific technical problem related to self-proofs using counterfactuals. So I suppose I do think counterfactuals (at least non-circular ones) are useful. But I'm not sure I'd commit to any broader philosophical statement about them beyond "they can be used in a specific formal way to help functions prove statements about their own output in a way that avoids Löb's Theorem issues". That said, that's a pretty good use, if that's the type of thing you want to do? It's also not totally clear whether you're imagining counterfactuals the same way I am. I'm using the English term because it matches the specific thing I'm describing decently well, but the term has a broad meaning, and without an extremely specific picture in mind, it's hard to say much more about what can be done with them.
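This is easier to show than to say. Here's a minimal Python sketch (my own illustration with made-up names, not the formalism from my post): the agent never reasons about what its actual self returns; it scores each action by evaluating a counterfactual variant of itself with that action hard-coded, which is what sidesteps the Löbian issue.

```python
# Illustrative sketch only; names and structure are mine, not from the post.

def make_counterfactual(agent_fn, action):
    """A variant of `agent_fn` whose output is fixed to `action`."""
    def variant(self_ptr):
        return action
    return variant

def universe(agent_fn):
    # The 5-and-10 universe: the payoff is whatever amount the agent takes.
    return agent_fn(agent_fn)

def agent(self_ptr):
    # Score actions by running the universe on counterfactual selves,
    # never on the real `self_ptr`.
    scores = {a: universe(make_counterfactual(self_ptr, a)) for a in (5, 10)}
    return max(scores, key=scores.get)

print(agent(agent))  # 10
```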

Comment by JoshuaOSHickman on A Possible Resolution To Spurious Counterfactuals · 2022-01-08T21:50:47.390Z · LW · GW

The agent needs access to a self-pointer, and it is parameterized, so it doesn't have to be a static pointer as it was in the original paper -- this approach in particular needs the pointer to be dynamic in that way.

There are also use cases where a piece of code receives a pointer not to its exact self -- when it is called as a subagent, it gets the parent's pointer, as in the sketch below.
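A hypothetical sketch of that shape (illustrative code, not from the post): because the self-pointer is an ordinary parameter, delegation is just passing the parent's pointer down.

```python
# Illustrative sketch; these function names are mine.

def subagent(self_ptr, observation):
    # `self_ptr` is whatever the caller hands down; when called as a
    # subagent, it is the parent's pointer, not `subagent` itself.
    return ("acting on behalf of", self_ptr.__name__, observation)

def parent(self_ptr, observation):
    # The parent delegates, passing along its own pointer.
    return subagent(self_ptr, observation)

print(parent(parent, "obs"))  # ('acting on behalf of', 'parent', 'obs')
```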

Comment by JoshuaOSHickman on Classical symbol grounding and causal graphs · 2022-01-04T23:22:30.759Z · LW · GW

It seems like a solid symbol grounding solution would allow us to delegate some amount of "translating vague intuitions about alignment into actual policies". In particular, there seems to be a correspondence between CIRL and symbol grounding: systems that are aware they do not know the goal they should optimize are similar to symbol-grounding machines that are aware there is a difference between the literal content of instructions and the desired behavior the instructions represent (although the instructions might be even more abstract symbols than words).
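To make the correspondence concrete, a toy sketch (the names and numbers are mine, not from any CIRL implementation): both kinds of system keep a distribution over what the instruction actually denotes and act under that uncertainty rather than on the literal reading.

```python
# Toy sketch only; the goals and prior here are invented for illustration.

candidate_goals = {
    # the literal reading of the instruction vs. the behavior it was meant to pick out
    "literal":  lambda outcome: 1.0 if outcome == "followed verbatim" else 0.0,
    "intended": lambda outcome: 1.0 if outcome == "did what was wanted" else 0.0,
}
belief = {"literal": 0.3, "intended": 0.7}  # prior over the grounding

def expected_value(outcome):
    return sum(belief[name] * goal(outcome)
               for name, goal in candidate_goals.items())

outcomes = ["followed verbatim", "did what was wanted"]
print(max(outcomes, key=expected_value))  # 'did what was wanted'
```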

Is there any literature you're aware of that proposes a seemingly robust alignment solution in a world where we have solved symbol grounding? E.g., Yudkowsky suggests Coherent Extrapolated Volition and offers a sentence or so of English describing it, but since machines cannot execute English, it's not clear whether this was meant literally or more as a vague gesture at important properties a solution might have.

Comment by JoshuaOSHickman on $1000 USD prize - Circular Dependency of Counterfactuals · 2022-01-02T00:45:23.368Z · LW · GW

So, this post only deals with agent counterfactuals (not environmental counterfactuals), but I believe I have solved the technical issue you mention about the construction of logical counterfactuals as it concerns TDT. See: https://www.alignmentforum.org/posts/TnkDtTAqCGetvLsgr/a-possible-resolution-to-spurious-counterfactuals

I have fewer thoughts about environmental counterfactuals, but I think a similar approach could be used to make statements along those lines, i.e., construct alternate agents receiving a different observation about the world. I'm not sure any very specific technical problem exists there, though -- the TDT paper already talks about world-model surgery.
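Roughly what I have in mind, as a sketch (my own illustration, not anything from the TDT paper): instead of editing the world model, construct a variant of the agent whose observation is overridden.

```python
# Illustrative sketch; names and the toy policy are mine.

def make_observation_variant(agent_fn, fixed_obs):
    """An agent identical to `agent_fn`, except its observation is replaced."""
    def variant(self_ptr, _actual_obs):
        return agent_fn(self_ptr, fixed_obs)
    return variant

def agent(self_ptr, obs):
    return "umbrella" if obs == "rain" else "sunglasses"

rainy_variant = make_observation_variant(agent, "rain")
print(rainy_variant(agent, "sun"))  # 'umbrella': what I'd do had I seen rain
```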

Comment by JoshuaOSHickman on A Possible Resolution To Spurious Counterfactuals · 2021-12-12T17:53:19.883Z · LW · GW

It seems like you could use these counterfactuals to implement whatever decision theory you'd like. My goal wasn't to solve actually hard decisions -- the 5-and-10 problem is perhaps the easiest decision I can imagine -- but merely to construct a formalism in which even extremely simple decisions involving self-proofs can be solved at all.
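For anyone who hasn't seen it, the trouble in 5-and-10 comes down to vacuous implications. A toy rendering (mine, not the post's formalism):

```python
# Toy illustration; not the formalism from the post.

def implies(p, q):
    # Material implication: vacuously true whenever the antecedent is false.
    return (not p) or q

actual_action = 10
# The spurious counterfactual: "if I take 5, I get nothing."
print(implies(actual_action == 5, False))  # True, vacuously
```

A proof search that finds that spurious implication first can use it to justify never taking 5.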

I think the reason this seems to imply a decision theory is that the model is so simple that some ways of making decisions are impossible within it -- a fair portion of that was inherited from the pseudocode in the Embedded Agency paper. I have an extension of the formalism in mind that I suspect allows an expression of UDT as well, or something very close to it; I haven't paid enough attention to that paper yet to know for sure. I would love to hear your thoughts once I get that post written up? :)