Posts

1st Athena Rationality Workshop - Retrospective 2019-07-17T16:51:36.754Z · score: 25 (15 votes)
Learning-by-doing AI Safety workshop 2019-05-24T09:42:49.996Z · score: 11 (5 votes)
TAISU - Technical AI Safety Unconference 2019-05-21T18:34:34.051Z · score: 18 (9 votes)
The Athena Rationality Workshop - June 7th-10th at EA Hotel 2019-05-11T01:01:01.973Z · score: 28 (9 votes)
The Athena Rationality Workshop - June 7th-10th at EA Hotel 2019-05-10T22:08:03.600Z · score: 5 (3 votes)
The Game Theory of Blackmail 2019-03-22T17:44:36.545Z · score: 23 (12 votes)
Optimization Regularization through Time Penalty 2019-01-01T13:05:33.131Z · score: 12 (6 votes)
Generalized Kelly betting 2018-07-19T01:38:21.311Z · score: 16 (7 votes)
Non-resolve as Resolve 2018-07-10T23:31:15.932Z · score: 14 (5 votes)
Repeated (and improved) Sleeping Beauty problem 2018-07-10T22:32:56.191Z · score: 13 (5 votes)
Probability is fake, frequency is real 2018-07-10T22:32:29.692Z · score: 12 (9 votes)
The Mad Scientist Decision Problem 2017-11-29T11:41:33.640Z · score: 14 (5 votes)
Extensive and Reflexive Personhood Definition 2017-09-29T21:50:35.324Z · score: 3 (2 votes)
Call for cognitive science in AI safety 2017-09-29T20:35:16.738Z · score: 3 (10 votes)
The Virtue of Numbering ALL your Equations 2017-09-28T18:41:35.631Z · score: 2 (13 votes)
Suggested solution to The Naturalized Induction Problem 2016-12-24T16:03:03.000Z · score: 1 (1 votes)
Suggested solution to The Naturalized Induction Problem 2016-12-24T15:55:16.000Z · score: 0 (0 votes)

Comments

Comment by linda-linsefors on “embedded self-justification,” or something like that · 2019-11-13T19:11:32.891Z · score: 3 (2 votes) · LW · GW

The way I understand your division of floors and ceilings, the ceiling is simply the highest level of meta there is, and the agent *typically* has no way of questioning it. The ceiling is just "what the algorithm is programmed to do". AlphaGo was programmed to update the network weights in a certain way in response to the training data.

What you call the floor for AlphaGo, i.e. the move evaluations, are not even boundaries (in the sense nostalgebraist defines them); they are just the object-level (no meta at all) policy.

I think this structure will be the same for any known agent algorithm, where by "known" I mean "we know how it works", rather than "we know that it exists". However, humans seem to be different. When I try to introspect, it all seems to be mixed up, with object-level heuristics influencing meta-level updates. The ceiling and the floor are all mixed together. Or maybe not? Maybe we are just the same, i.e. have a definite top-level, hard-coded, highest-level meta. Some evidence for this is that sometimes I just notice emotional shifts and/or decisions being made in my brain, and I just know that no normal reasoning I can do will have any effect on this shift/decision.

Comment by linda-linsefors on Vanessa Kosoy's Shortform · 2019-11-13T14:09:50.217Z · score: 1 (1 votes) · LW · GW

I agree that you can assign whatever belief you want (e.g. whatever is useful for the agent's decision-making process) for what happens in the counterfactual where Omega is wrong, in decision problems where Omega is assumed to be a perfect predictor. However, if you want to generalise to cases where Omega is an imperfect predictor (as you do mention), then I think you will (in general) have to put in the correct reward for Omega being wrong, because this is something that might actually be observed.

Comment by linda-linsefors on All I know is Goodhart · 2019-10-24T23:03:03.460Z · score: 7 (3 votes) · LW · GW

Whether this works or not is going to depend heavily on what looks like.

Given , i.e. , what does this say about ?

The answer depends on the amount of mutual information between , and . Unfortunately, the more generic is (i.e. any function is possible), the less mutual information there will be. Therefore, unless we know some structure about , the restriction to is not going to do much. The agent will just find a very different policy that also achieves a very high in some very Goodharty way, but does not get penalized, because a low value for on is not correlated with a low value on .

This could possibly be fixed by adding assumptions of the type for any that does too well on . That might yield something interesting, or it might just be a very complicated way of specifying a satisficer, I don't know.
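
As a rough illustration of the mutual information point (a toy sketch with made-up variable names, since the original notation did not carry over): when the proxy carries almost no information about the true utility, selecting the options the proxy rates highest does essentially nothing for the true utility.

```python
# Toy sketch (hypothetical names, not the notation from the original comment):
# U is the true utility, V is a proxy. When V carries little mutual information
# about U, restricting attention to options that V rates highly barely helps U.
import numpy as np

rng = np.random.default_rng(0)
n_options = 100_000

U = rng.normal(size=n_options)                                 # true utility of each option
V_informative = 0.9 * U + 0.44 * rng.normal(size=n_options)    # proxy correlated with U
V_generic = rng.normal(size=n_options)                         # "generic" proxy, ~zero mutual information

for name, V in [("informative proxy", V_informative), ("generic proxy", V_generic)]:
    proxy_optimal = V >= np.quantile(V, 0.99)                  # top 1% according to the proxy
    print(f"{name}: mean true utility of proxy-optimal options = {U[proxy_optimal].mean():.2f}")

# The informative proxy selects options with high true utility; the generic proxy
# selects options no better than average, i.e. the restriction does not do much.
```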

Comment by linda-linsefors on TAISU 2019 Field Report · 2019-10-16T07:18:42.515Z · score: 5 (4 votes) · LW · GW

Mainly that we had two scheduling sessions, one on the morning of the first day and one on the morning of the third day. At each scheduling session, it was only possible to add activities for the upcoming two days.

At the start of the unconference, I encouraged people to think of it as a two-day event and to put in everything they really wanted to do during the first two days. On the morning of day three, the schedule was cleared to let people add sessions about topics that were alive to them at that time.

The main reason for this design choice was to allow continued/deeper conversation. If ideas were created during the first half, I wanted there to be space to keep talking about those ideas.

Also, some people only attended the last two days, and this setup guaranteed they would get a chance to add things to the schedule too. But that could also have been solved in other ways, so it was not a crux for my design choice.

Comment by linda-linsefors on Conceptual Problems with UDT and Policy Selection · 2019-10-15T12:59:34.447Z · score: 5 (3 votes) · LW · GW

I think UDT1.1 has two fundamentally wrong assumptions built in.

1) Complete prior: UDT1.1 follows the policy that is optimal according to its prior (a toy sketch of what that means is below, after point 2). This is uncomputable in general settings and will have to be approximated somehow. But even an approximation of UDT1.1 assumes that UDT1.1 is at least well defined. However, in some multi-agent settings, or when the agent is being fully simulated by the environment, or any other setting where the environment is necessarily bigger than the agent, UDT1.1 is ill-defined.

2) Free will: In the problem Agent Simulates Predictor, the environment is smaller than the agent, so it falls outside the above point. Here I instead think the problem is that the agent assumes it has free will, when in fact it behaves in a deterministic manner.
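
To make point 1 a bit more concrete, here is a minimal sketch (entirely made-up toy numbers) of what "the policy that is optimal according to its prior" means: enumerate every mapping from observations to actions and keep the one with the highest expected utility under the prior. The brute-force search is what becomes infeasible, or outright ill-defined, once the environment is bigger than the agent.

```python
# Toy sketch of prior-optimal policy selection (made-up numbers). A policy is a map
# from observation to action; we score each policy by its expected utility under the
# prior over environments and keep the best one.
from itertools import product

observations = ["obs_a", "obs_b"]
actions = ["act_0", "act_1"]
prior = {"world_1": 0.7, "world_2": 0.3}   # prior over environments

def utility(world, policy):
    # Made-up utility table: only a few (world, policy) combinations pay off.
    table = {
        ("world_1", ("act_0", "act_1")): 10,
        ("world_2", ("act_1", "act_1")): 8,
    }
    return table.get((world, tuple(policy[o] for o in observations)), 0)

best_policy, best_value = None, float("-inf")
for assignment in product(actions, repeat=len(observations)):   # all 4 policies
    policy = dict(zip(observations, assignment))
    expected = sum(p * utility(world, policy) for world, p in prior.items())
    if expected > best_value:
        best_policy, best_value = policy, expected

print(best_policy, best_value)   # the policy UDT1.1 would commit to, given this prior
```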

The problem of free will in Decision Problems is even clearer in the smoking lesion problem:

You want to smoke and you don't want cancer. You know that people who smoke are more likely to get cancer, but you also know that smoking does not cause cancer. Instead, there is a common cause, some gene, that happens to both increase the risk of cancer and make a person with this gene more likely to choose to smoke. You cannot test whether you have the gene.

Say that you decide to smoke, because either you have the gene or you don't, so you might as well enjoy smoking. But what if everyone thought like this? Then there would be no correlation between the cancer gene and smoking. So where did the statistics about smokers getting cancer come from (in this made-up version of reality)?
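
A toy simulation of that last point (all numbers made up): the gene/smoking correlation in the statistics only exists to the extent that the gene actually influences people's decisions; if everyone ran the "smoke regardless" argument, it would vanish.

```python
# Toy simulation (made-up numbers): the correlation between gene and smoking exists
# only if the gene influences decisions. If everyone smokes regardless, it vanishes.
import random

random.seed(0)
n_people = 100_000

def smoking_rates(fraction_smoke_regardless):
    """Return (smoking rate among gene carriers, smoking rate among non-carriers)."""
    carriers = non_carriers = carrier_smokers = non_carrier_smokers = 0
    for _ in range(n_people):
        gene = random.random() < 0.5
        if random.random() < fraction_smoke_regardless:
            smokes = True                                        # decision ignores the gene
        else:
            smokes = random.random() < (0.8 if gene else 0.2)    # gene-influenced decision
        if gene:
            carriers += 1
            carrier_smokers += smokes
        else:
            non_carriers += 1
            non_carrier_smokers += smokes
    return carrier_smokers / carriers, non_carrier_smokers / non_carriers

print(smoking_rates(0.0))   # roughly (0.8, 0.2): strong correlation
print(smoking_rates(1.0))   # roughly (1.0, 1.0): correlation gone
```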

If you are the sort of person who smokes no matter what, then either:

a) You are sufficiently different from most people that the statistics do not apply to you,

or

b) The cancer gene is correlated with being the sort of person whose decision process leads to smoking.

If b is correct, then maybe you should be the sort of algorithm that decides not to smoke, so as to increase the chance of being implemented in a brain that lives in a body with less risk of cancer. But if you start thinking like that, then you are also giving up your hope of affecting the universe, and resigning yourself to just choosing where you might find yourself, and I don't think that is what we want from a decision theory.

But there also seems to be no good way of thinking about how to steer the universe without pretending to have free will. And since that is actually a false assumption, there will be weird edge cases where your reasoning breaks down.


Comment by linda-linsefors on Minimization of prediction error as a foundation for human values in AI alignment · 2019-10-14T14:24:34.516Z · score: 3 (2 votes) · LW · GW

Do you agree with my clarification?

Because what you are trying to say makes a lot of sense to me, if and only if I replace "prediction" with "set point value" in cases where the so-called prediction is fixed.

Set point (control system vocabulary) = Intention/goal (agent vocabulary)

Comment by linda-linsefors on Minimization of prediction error as a foundation for human values in AI alignment · 2019-10-11T17:37:43.537Z · score: 26 (7 votes) · LW · GW

It seems like people are talking in circles around each other in these comments, and I think the reason is that Gordon and other people who like predictive processing theory are misusing the word "prediction".

By misuse I mean clearly deviating from common use. I don't really care about sticking to common use, but if you deviate from the expected meaning of a word it is good to let people know.

Let's say I have a model of the future in my head. If I try to adjust the model to fit reality, this model is a prediction. If I try to fit reality to my model, this model is an intention.

If you have a control system that tries to minimise "prediction error" with respect to a "prediction" that it is not able to change, so that the system resorts to changing reality instead, then that is not really a prediction anymore.
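
A minimal sketch of the distinction, in my own toy framing (not anything from the post): the very same error-minimising loop counts as prediction when the model is the free variable, and as intention when the model is fixed and the world is what gets adjusted.

```python
# Minimal toy sketch: one error-minimising loop, two readings. If the model is the
# thing being adjusted, it is a prediction; if the model is fixed and reality is
# adjusted towards it, it is an intention / set point.
def minimise_error(model, world, model_is_fixed, steps=50, rate=0.2):
    for _ in range(steps):
        error = model - world
        if model_is_fixed:
            world += rate * error   # act on reality: the model works as a set point (intention)
        else:
            model -= rate * error   # update the model: the model works as a prediction
    return model, world

print(minimise_error(model=10.0, world=3.0, model_is_fixed=False))  # model converges to the world
print(minimise_error(model=10.0, world=3.0, model_is_fixed=True))   # world converges to the model
```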

As I understand it, predictive processing theory suggests that both updating predictions and executing intentions are optimising for the same thing, which is aligning reality with my internal model. However, there is an important difference, which is what are the variables and what are the constants in solving that problem. Gordon mentions in some places that sometimes "predictions" can't be updated.

This means that it won't always be the case that a control system is globally trying to minimize prediction error, but instead is locally trying to minimize prediction error, although it may not be able to become less wrong over time because it can't change the prediction to better predict the input.

There are probably some actual disagreements here (in this comment section) too, but we will not figure that out if we don't agree on what words mean first.

Comment by linda-linsefors on Minimization of prediction error as a foundation for human values in AI alignment · 2019-10-11T16:35:09.408Z · score: 1 (1 votes) · LW · GW

I have not read all the comments yet, so maybe this is redundant, but anyway...

I think it is plausible that humans and other life forms are mostly made up of layers of control systems, stacked on each other. However, it does not follow from this that humans are trying to minimise prediction error.

There is probably some part of the brain that is trying to minimise prediction error, possibly organised as a control system that tries to keep expectations in line with reality, because it is useful to be able to accurately predict the world.

But if we are a stack of control systems, then I would expect other parts of the brain to be control systems for other things, e.g. having the correct level of blood sugar, having a good amount of social interaction, having a good amount of variety in our lives.
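
A toy sketch of that picture (made-up quantities): several independent controllers, each keeping its own variable near its own set point. Prediction error would just be one controlled variable among many, not the thing the whole stack is about.

```python
# Toy sketch (made-up quantities): independent controllers, each tracking its own
# set point. None of them is "about prediction"; prediction-error minimisation
# would just be one more controller in the stack.
controllers = {
    "blood_sugar":        {"set_point": 5.0, "value": 7.0},
    "social_interaction": {"set_point": 3.0, "value": 1.0},
    "variety":            {"set_point": 2.0, "value": 2.5},
}

for _ in range(20):
    for state in controllers.values():
        state["value"] += 0.3 * (state["set_point"] - state["value"])  # simple proportional step

for name, state in controllers.items():
    print(f"{name}: {state['value']:.2f} (set point {state['set_point']})")
```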

I can imagine someone figuring out more or less how the prediction control system works and what it is doing, then looking at everything else, noticing the similarity (because it is all types of control systems and evolution tends to reuse structures) and thinking "Hmm, maybe it is all about predictions". But I also think that would be wrong.

Comment by linda-linsefors on 1st Athena Rationality Workshop - Retrospective · 2019-08-02T12:05:35.900Z · score: 3 (2 votes) · LW · GW

We are currently deciding between:

a) Running a second Athena Workshop, similar to the first one, i.e. teaching a broad range of techniques for solving internal conflicts.

b) Running a workshop specifically focused on overcoming procrastination

c) Doing both

If you have any preferences, let me know.

Comment by linda-linsefors on 1st Athena Rationality Workshop - Retrospective · 2019-08-02T11:57:10.800Z · score: 1 (1 votes) · LW · GW

Our current goal is to gather more information. Which methods should we teach, and how should we teach them? Is what we are teaching actually useful? To find this out we intend to:

1) Run various versions of the workshop

2) Experiment with various forms online teaching

3) Follow up with participants about what has been useful to them

We have also made a strategic decision to mostly learn from our own experiences, to hopefully find new local optima for what and how to teach these types of things.

Because of the stage we are at, the quickest way for you to get more information about the techniques would be to attend one of the workshops. We would like to eventually do something more scalable (e.g. releasing video lectures), but first we'll need to do a lot more testing.

Comment by linda-linsefors on Raemon's Scratchpad · 2019-07-28T07:30:35.780Z · score: 7 (4 votes) · LW · GW
>For discussions between individuals about who is "more cognitively sophisticated", my current best guess is that you can actually have this conversation reasonably easily in private (where by "reasonably easily", I mean it maybe takes several hours of building trust and laying groundwork, but there's nothing mysterious about it)

I can confirm this (anecdotally).

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-07-22T12:32:36.054Z · score: 1 (1 votes) · LW · GW

The TAISU is now full. I might still accept exceptional applications, but don't expect to be accepted just because you meet the basic requirements.

Comment by linda-linsefors on Robust Agency for People and Organizations · 2019-07-20T00:03:57.322Z · score: 6 (3 votes) · LW · GW
>I think that agency requires a membrane, something keeps particular people in and out, such that you have any deliberate culture, principles or decision making at all.

In this TED talk, Religion, evolution, and the ecstasy of self-transcendence, Jonathan Haidt talks about how having a membrane around a group is necessary for group selection to happen. This seems very related.

Without a membrane, the free rider problem cannot be solved. And if the free rider problem is not solved, then the group cannot be fully aligned.

Comment by linda-linsefors on 1st Athena Rationality Workshop - Retrospective · 2019-07-19T00:42:41.577Z · score: 1 (1 votes) · LW · GW

The Acceptance stuff was most useful for me. I don't remember any CFAR technique that focuses on this.

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-07-05T15:32:56.220Z · score: 1 (1 votes) · LW · GW

There is still room for more participants at TAISU, but sleeping space is starting to fill up. The EA Hotel dorm rooms are almost fully booked. For those who don't fit in the dorm or want some more private space, there are lots of nearby hotels. However, since TAISU happens to fall on a UK bank holiday, these might fill up too.

Comment by linda-linsefors on Learning-by-doing AI Safety workshop · 2019-06-13T13:23:20.080Z · score: 1 (1 votes) · LW · GW

This workshop is now full, but due to the enthusiasm I have received I am going to organize a second Learning-by-doing AI Safety workshop some time in October/November this year. If you want to influence when it will be, you can fill in our doodle: https://doodle.com/poll/haxdy8iup4hes9xy

I am leaving the application form open. You can fill it in to show interest in the second Learning-by-doing AI Safety workshop and future similar events.

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-06-04T20:25:41.276Z · score: 1 (1 votes) · LW · GW

Fixed! Thank you for pointing this out.

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-05-25T11:08:49.604Z · score: 8 (5 votes) · LW · GW

Accepted applicants so far (July 5)

Gavin Leech, University of Bristol (soon)

Michaël Trazzi, FHI

David Lindner, ETH Zürich

Gordon Worley, PAISRI

anonymous

Josh Jacobson, BERI

anonymous

Andrea Luppi, Harvard University / FHI

Dragan Mlakić

Noah Topper

Andrew Schreiber, Ought

Jan Brauner, University of Edinburgh - weekend only

Søren Elverlin, AISafety.com

Victoria Krakovna, DeepMind - weekend only

Janos Kramar, DeepMind - weekend only

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-05-25T10:55:39.203Z · score: 1 (1 votes) · LW · GW

Are you worried about the unconference not having enough participants (in total), or it not having enough senior participants?

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-05-25T10:53:36.299Z · score: 1 (1 votes) · LW · GW

comment removed by me

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-05-25T10:47:20.921Z · score: 3 (2 votes) · LW · GW

There is no specific deadline for signing up.

However, I might close the application at some point due to the unconference being full. We have more or less unlimited sleeping space, since the EA Hotel is literally surrounded by other hotels. So the limitation is space for talks, discussions, workshops and such.

If all activities are in the EA Hotel, we should not be much more than 20 people. If it looks like I will get more applications than that, I will see if it is possible to rent some more common spaces at other hotels. I have not looked into this yet, but I will soon.

We currently have 4 accepted applicants.

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-05-24T08:39:03.430Z · score: 1 (1 votes) · LW · GW

Good initiative. I will add a question to the application form, asking if the applicant allows me to share that they are coming. I will then share the participant list here (with the names of those who agreed) and update it every few days.

For pledges, just write here as Ryan said.

Comment by linda-linsefors on The Game Theory of Blackmail · 2019-03-26T10:27:31.852Z · score: 1 (1 votes) · LW · GW

I would decompose that into a value trade + a blackmail.

The default for me would be to take the action that gives me 1 utility. But you can offer me a trade where you give me something better in return for me not taking that action. This would be a value trade.

Let's now take me agreeing to your proposition as the default. If I then choose to threaten to call the deal off unless you pay me an even higher amount, then this is blackmail.

I don't think that these parts (the value trade and the blackmail) should be viewed as sequential. I wrote it that way for illustrative purposes. However, I do think that any value trade has a Game of Chicken component, where each player can threaten to call off the trade if they don't get a more favorable deal.
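
A toy payoff table for that Game of Chicken component (numbers made up): each side can hold out for a bigger share of the gains from trade, and if both hold out the trade falls through.

```python
# Toy payoff table (made-up numbers) for the Game of Chicken component of a value
# trade: each player can hold out for a better deal; if both hold out, no trade.
payoffs = {
    # (my move, your move): (my payoff, your payoff)
    ("accept fair split", "accept fair split"): (2, 2),
    ("accept fair split", "hold out"):          (1, 3),
    ("hold out",          "accept fair split"): (3, 1),
    ("hold out",          "hold out"):          (0, 0),   # trade called off
}

for (mine, yours), (p_me, p_you) in payoffs.items():
    print(f"me: {mine:17s} | you: {yours:17s} -> payoffs ({p_me}, {p_you})")
```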

Comment by linda-linsefors on The Game Theory of Blackmail · 2019-03-26T10:14:40.296Z · score: 0 (2 votes) · LW · GW

I did not mean to imply that the choices had to be made simultaneously, or in any other particular order, just that this is the type of payoff matrix. But I also think that "simultaneous choice" vs. "sequential game" is a false dichotomy. If both players are UDT, every game is a simultaneous choice game (where the choices are over complete policies).

I know that according to what I describe, the blackmailer's threat is not credible in the game theory sense of the word. So what? It is still possible to make credible threats in the common-use meaning of the word, which is what matters.

Comment by linda-linsefors on Major Donation: Long Term Future Fund Application Extended 1 Week · 2019-03-04T14:47:56.258Z · score: 3 (2 votes) · LW · GW

Hi, approximately when will it be decided who gets funding this round?

Comment by linda-linsefors on Probability is fake, frequency is real · 2018-07-11T01:29:46.050Z · score: 3 (2 votes) · LW · GW

I agree that "want" is not exactly the correct word. What I mean by prior is an agent's actual a priori beliefs, so by definition there will be no mismatch there. I am not trying to say that you choose your prior exactly.

What I am gesturing at is that no prior is wrong, as long as it does not assign zero probability to the true outcome. And I think that much of the confusion in anthropic situations comes from trying to solve an under-constrained system.
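
A small illustration of the "no prior is wrong unless it puts zero on the truth" point (toy coin example, my own numbers): any prior with non-zero weight on the true hypothesis ends up concentrating on it, while a prior that assigns it zero never can.

```python
# Toy illustration: priors that give the true hypothesis non-zero probability all
# converge on it after enough evidence; a prior with zero on the truth never recovers.
import random

random.seed(0)
true_bias = 0.7                      # the coin actually lands heads 70% of the time
hypotheses = [0.3, 0.5, 0.7]         # candidate biases

def run(prior):
    probs = dict(zip(hypotheses, prior))
    for _ in range(500):
        heads = random.random() < true_bias
        probs = {h: p * (h if heads else 1 - h) for h, p in probs.items()}  # Bayes update
        total = sum(probs.values())
        probs = {h: p / total for h, p in probs.items()}                    # normalise
    return {h: round(p, 3) for h, p in probs.items()}

print(run([0.10, 0.10, 0.80]))   # concentrates on 0.7
print(run([0.80, 0.10, 0.10]))   # different prior, same conclusion
print(run([0.50, 0.50, 0.00]))   # zero on the truth: stuck among the wrong hypotheses
```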

Comment by linda-linsefors on Two agents can have the same source code and optimise different utility functions · 2018-07-11T00:50:02.902Z · score: 4 (4 votes) · LW · GW

I agree.

An even simpler example: If the agents are reward learners, both of them will optimize for their own reward signal, which are two different things in the physical world.

Comment by linda-linsefors on The reverse job · 2018-05-13T23:12:38.330Z · score: 3 (1 votes) · LW · GW

>it seems that in order to be worthwhile the person would most likely have to be co-located with the team

My conclusion was the opposite. For this to work well, the breadwinner should be in a high-earning location (which typically has a high cost of living) and the rest of the team should be in a low-cost location (which typically has low earning potential).

Being the only one in the team who is in a separate location is not optimal for inclusion. But many teams are spread out anyway. I am pretty sure RAISE is not all in one location. As another example, the organizers of the AI Safety Camp are spread out all over Europe.

>Also, if the organisation later receives funding, the amount of prestige/influence of those taking this role will seem to drop or they might even become completely obsolete.

This might actually be a feature, not a bug. When the new organisation has grown up and is receiving all the grants it needs, then it is time for the funder to move on to the next project, bringing with them the knowledge and experience from the first project.

Comment by linda-linsefors on Call for cognitive science in AI safety · 2017-09-30T07:26:06.604Z · score: 5 (2 votes) · LW · GW

Basically, if I change the title, it can go on the front page?

Comment by linda-linsefors on Call for cognitive science in AI safety · 2017-09-29T22:56:01.910Z · score: 2 (1 votes) · LW · GW

Better?

Comment by linda-linsefors on Call for cognitive science in AI safety · 2017-09-29T21:22:08.874Z · score: 2 (1 votes) · LW · GW

Yes, that is correct.
I wrote the text and asked people to cosign if they agreed, for signaling value.

Do you have a good idea on how to make this clearer?

Comment by linda-linsefors on Call for cognitive science in AI safety · 2017-09-29T20:36:10.374Z · score: 6 (3 votes) · LW · GW

Recent talk by Stuart Armstrong related to this topic:

https://www.youtube.com/watch?v=19N4kjYbZD4

Comment by linda-linsefors on The Virtue of Numbering ALL your Equations · 2017-09-28T18:34:20.458Z · score: 0 (0 votes) · LW · GW