Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-07-05T15:32:56.220Z · score: 1 (1 votes) · LW · GW

There is still room for more participants at TAISU, but sleeping space is starting to fill up. The EA Hotel dorm rooms are almost fully booked. For those who don't fit in the dorm or want more private space, there are lots of nearby hotels. However, since TAISU falls on a UK bank holiday, these might fill up too.

Comment by linda-linsefors on Learning-by-doing AI Safety workshop · 2019-06-13T13:23:20.080Z · score: 1 (1 votes) · LW · GW

This workshop is now full, but due to the enthusiasm I have received, I am going to organize a second Learning-by-doing AI Safety workshop some time in October/November this year. If you want to influence when it will be, you can fill in our doodle: https://doodle.com/poll/haxdy8iup4hes9xy

I am leaving the application form open. You can fill it in to show interest in the second Learning-by-doing AI Safety workshop and future similar events.

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-06-04T20:25:41.276Z · score: 1 (1 votes) · LW · GW

Fixed! Thank you for pointing this out.

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-05-25T11:08:49.604Z · score: 8 (5 votes) · LW · GW

Accepted applicants so far (July 5)

Gavin Leech, University of Bristol (soon)

Michaël Trazzi, FHI

David Lindner, ETH Zürich

Gordon Worley, PAISRI

anonymous

Josh Jacobson, BERI

anonymous

Andrea Luppi, Harvard University / FHI

Dragan Mlakić

Noah Topper

Andrew Schreiber, Ought

Jan Brauner, University of Edinburgh - weekend only

Søren Elverlin, AISafety.com

Victoria Krakovna, DeepMind - weekend only

Janos Kramar, DeepMind - weekend only

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-05-25T10:55:39.203Z · score: 1 (1 votes) · LW · GW

Are you worried about the unconference not having enough participants (in total), or it not having enough senior participants?

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-05-25T10:53:36.299Z · score: 1 (1 votes) · LW · GW

Comment removed by me.

Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-05-25T10:47:20.921Z · score: 3 (2 votes) · LW · GW

There is no specific deadline for signing up.

However, I might close the application at some point if the unconference fills up. We have more or less unlimited sleeping space, since the EA Hotel is literally surrounded by other hotels. So the limitation is space for talks, discussions, workshops and such.

If all activities are in the EA Hotel, we should be not much more than 20 people. If it looks like I will get more applications than that, I will see if it is possible to rent some more common spaces at other hotels. I have not looked into this yet, but I will soon.

We currently have 4 accepted applicants.

Learning-by-doing AI Safety workshop

2019-05-24T09:42:49.996Z · score: 11 (5 votes)
Comment by linda-linsefors on TAISU - Technical AI Safety Unconference · 2019-05-24T08:39:03.430Z · score: 1 (1 votes) · LW · GW

Good initiative. I will add a question to the application form, asking if the applicant allows me to share that they are coming. I will then share the participant list here (with the names of those who agreed) and update it every few days.

For pledges, just write here as Ryan said.

TAISU - Technical AI Safety Unconference

2019-05-21T18:34:34.051Z · score: 16 (8 votes)

The Athena Rationality Workshop - June 7th-10th at EA Hotel

2019-05-11T01:01:01.973Z · score: 28 (9 votes)

The Athena Rationality Workshop - June 7th-10th at EA Hotel

2019-05-10T22:08:03.600Z · score: 5 (3 votes)
Comment by linda-linsefors on The Game Theory of Blackmail · 2019-03-26T10:27:31.852Z · score: 1 (1 votes) · LW · GW

I would decompose that into a value trade + a blackmail.

The default for me would be to take the action that gives me 1 utility. But you can offer me a trade where you give me something better in return for me not taking that action. This would be a value trade.

Let's now take me agreeing to your proposition as the default. If I then threaten to call the deal off unless you pay me an even higher amount, then this is blackmail.

I don't think that these parts (the value trade and the blackmail) should be viewed as sequential. I wrote it that way for illustrative purposes. However, I do think that any value trade has a Game of Chicken component, where each player can threaten to call off the trade if they don't get a more favorable deal. A toy version of the decomposition is sketched below.
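
A minimal sketch of the decomposition, with purely made-up payoff numbers (none of them come from the original post), just to make the two components concrete:

```python
# Purely illustrative payoffs for the decomposition above; all numbers are assumptions.
# Payoffs are written as (mine, yours).

# Component 1, the value trade: my default is to take an action worth 1 to me,
# but you can offer me something better (here, 2) in return for not taking it.
value_trade = {
    "no deal (I take the action)": (1, 0),
    "deal (I refrain)":            (2, 3),  # you value my restraint at more than what you pay
}

# Component 2, the blackmail: taking the agreed deal as the new default, I can
# threaten to call it off unless you pay an even higher amount.
blackmail = {
    ("I demand more", "you give in"): (4, 1),
    ("I demand more", "you refuse"):  (1, 0),  # threat carried out; back to the old default
    ("I keep the deal", "-"):         (2, 3),  # the original trade stands
}

for name, payoffs in [("value trade", value_trade), ("blackmail", blackmail)]:
    print(name)
    for outcome, payoff in payoffs.items():
        print("  ", outcome, payoff)
```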

Comment by linda-linsefors on The Game Theory of Blackmail · 2019-03-26T10:14:40.296Z · score: 0 (2 votes) · LW · GW

I did not mean to imply that the choices had to be made simultaneously, or in any other particular order, just that this is the type of payoff matrix. But I also think that "simultaneous choice" vs. "sequential game" is a false dichotomy. If both players are UDT, every game is a simultaneous choice game (where the choices are over complete policies).

I know that, according to what I describe, the blackmailer's threat is not credible in the game theory sense of the word. So what? It is still possible to make credible threats in the common-use meaning of the word, which is what matters.

The Game Theory of Blackmail

2019-03-22T17:44:36.545Z · score: 22 (11 votes)
Comment by linda-linsefors on Major Donation: Long Term Future Fund Application Extended 1 Week · 2019-03-04T14:47:56.258Z · score: 3 (2 votes) · LW · GW

Hi, approximately when will it be decided who gets funding this round?

Optimization Regularization through Time Penalty

2019-01-01T13:05:33.131Z · score: 12 (6 votes)

Generalized Kelly betting

2018-07-19T01:38:21.311Z · score: 16 (7 votes)
Comment by linda-linsefors on Probability is fake, frequency is real · 2018-07-11T01:29:46.050Z · score: 3 (2 votes) · LW · GW

I agree that "want" is not exactly the correct word. What I mean by prior is an agent's actual a priori beliefs, so by definition there will be no mismatch there. I am not trying to say that you choose your prior exactly.

What I am gesturing at is that no prior is wrong, as long as it does not assign zero probability to the true outcome. And I think that much of the confusion in anthropic situations comes from trying to solve an under-constrained system.
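
A minimal toy sketch of what I mean (my own illustration, with made-up numbers): start from any prior that puts nonzero mass on the true hypothesis, and repeated Bayesian updating concentrates on the truth anyway.

```python
import random

random.seed(0)
true_p = 0.7                      # true coin bias generating the observations
hypotheses = [0.3, 0.5, 0.7]      # candidate biases, including the true one
posterior = [0.98, 0.01, 0.01]    # a "wrong-looking" but nonzero prior

for _ in range(500):
    flip = 1 if random.random() < true_p else 0
    # likelihood of this flip under each hypothesis
    likelihoods = [h if flip else 1 - h for h in hypotheses]
    unnorm = [p * l for p, l in zip(posterior, likelihoods)]
    z = sum(unnorm)
    posterior = [u / z for u in unnorm]

print(posterior)  # nearly all mass ends up on the 0.7 hypothesis
```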

Comment by linda-linsefors on Two agents can have the same source code and optimise different utility functions · 2018-07-11T00:50:02.902Z · score: 4 (4 votes) · LW · GW

I agree.

An even simpler example: If the agents are reward learners, both of them will optimize for their own reward signal, which are two different things in the physical world.

Non-resolve as Resolve

2018-07-10T23:31:15.932Z · score: 14 (5 votes)

Repeated (and improved) Sleeping Beauty problem

2018-07-10T22:32:56.191Z · score: 13 (5 votes)

Probability is fake, frequency is real

2018-07-10T22:32:29.692Z · score: 12 (9 votes)
Comment by linda-linsefors on The reverse job · 2018-05-13T23:12:38.330Z · score: 3 (1 votes) · LW · GW

>it seems that in order to be worthwhile the person would most likely have to be co-located with the team

My conclusion was the opposite. For this to work well, the breadwinner should be in a high-earning location (which typically has a high cost of living) and the rest of the team should be in a low-cost location (which typically has low earning potential).

Being the only one in the team who is in a separate location is not optimal for inclusion. But many teams are spread out anyway. I am pretty sure RAISE is not all in one location. As another example, the organizers of AI Safety Camp are spread out all over Europe.

>Also, if the organisation later receives funding, the amount of prestige/influence of those taking this role will seem to drop or they might even become completely obsolete.

This might actually be a feature, not a bug. When the new organisation has grown up and is receiving all the grants it needs, it is time for the funder to move on to the next project, bringing with them knowledge and experience from the first project.

The Mad Scientist Decision Problem

2017-11-29T11:41:33.640Z · score: 14 (5 votes)
Comment by linda-linsefors on Call for cognitive science in AI safety · 2017-09-30T07:26:06.604Z · score: 5 (2 votes) · LW · GW

Basically, if I change the title, it can go on the front page?

Comment by linda-linsefors on Call for cognitive science in AI safety · 2017-09-29T22:56:01.910Z · score: 2 (1 votes) · LW · GW

Better?

Extensive and Reflexive Personhood Definition

2017-09-29T21:50:35.324Z · score: 3 (2 votes)
Comment by linda-linsefors on Call for cognitive science in AI safety · 2017-09-29T21:22:08.874Z · score: 2 (1 votes) · LW · GW

Yes, that is correct.
I wrote the text and asked people to cosign if they agreed, for signaling value.

Do you have a good idea on how to make this clearer?

Comment by linda-linsefors on Call for cognitive science in AI safety · 2017-09-29T20:36:10.374Z · score: 6 (3 votes) · LW · GW

Recent talk by Stuart Armstrong related to this topic:

https://www.youtube.com/watch?v=19N4kjYbZD4

Call for cognitive science in AI safety

2017-09-29T20:35:16.738Z · score: 3 (10 votes)

The Virtue of Numbering ALL your Equations

2017-09-28T18:41:35.631Z · score: 2 (13 votes)
Comment by linda-linsefors on The Virtue of Numbering ALL your Equations · 2017-09-28T18:34:20.458Z · score: 0 (0 votes) · LW · GW

Suggested solution to The Naturalized Induction Problem

2016-12-24T16:03:03.000Z · score: 1 (1 votes)

Suggested solution to The Naturalized Induction Problem

2016-12-24T15:55:16.000Z · score: 0 (0 votes)