Posts

Coherence arguments imply a force for goal-directed behavior 2021-03-26T16:10:04.936Z
Animal faces 2021-03-11T08:50:11.484Z
Quarantine variety 2021-03-10T08:40:11.970Z
Why does Applied Divinity Studies think EA hasn’t grown since 2015? 2021-03-09T10:10:20.446Z
Sleep math: red clay blue clay 2021-03-08T00:30:11.938Z
Arrow grid game 2021-02-28T09:50:09.755Z
Remarks on morality, shuddering, judging, friendship and the law 2021-02-21T08:10:12.899Z
Coffee trucks: a brilliant idea that someone should do? 2021-02-20T07:50:12.401Z
Oliver Sipple 2021-02-19T07:00:18.788Z
Neck abacus 2021-02-18T07:40:13.066Z
Training sweetness 2021-02-17T07:00:14.744Z
Remember that to value something infinitely is usually to give it a finite dollar value 2021-02-16T06:40:15.664Z
What didn’t happen 2021-02-15T04:30:12.132Z
The ecology of conviction 2021-02-14T01:30:13.068Z
Things a Katja-society might try (Part 2) 2021-02-13T09:20:14.706Z
The art of caring what people think 2021-02-12T05:40:12.633Z
In balance and flux 2021-02-11T15:40:12.699Z
A great hard day 2021-02-10T07:00:12.892Z
Evolution from distinction difference 2021-02-09T05:20:14.733Z
The distinction distance 2021-02-08T07:40:14.341Z
Massive consequences 2021-02-07T05:30:14.215Z
Current cryonics impressions 2021-02-06T10:00:18.252Z
Ways of being with you 2021-02-05T07:00:15.383Z
Speaking of the efficiency of utopia 2021-02-03T17:10:15.293Z
Covid cafes 2021-02-03T08:20:13.096Z
Elephant seal 2 2021-02-02T09:40:15.272Z
Feedback for learning 2021-02-01T09:40:12.735Z
Oceans of snails 2021-01-31T11:20:10.545Z
Play with neural net 2021-01-30T10:50:13.811Z
Top li’l pots 2021-01-29T07:30:12.088Z
Unpopularity of efficiency 2021-01-28T08:30:41.455Z
What is up with spirituality? 2021-01-27T06:20:12.670Z
Wordtune review 2021-01-26T09:40:18.305Z
Tentative covid surface risk estimates 2021-01-25T09:40:46.658Z
Li’l pots 2021-01-24T12:40:42.259Z
Who should you expect to spend your life with? 2021-01-23T09:00:41.659Z
What if we all just stayed at home and didn’t get covid for two weeks? 2021-01-22T09:10:22.517Z
A few thoughts on the inner ring 2021-01-21T03:40:15.253Z
On fundamental solitude 2021-01-20T10:10:16.983Z
Public selves 2021-01-18T21:10:20.159Z
Are the consequences of groups usually highly contingent on their details? 2021-01-18T01:30:16.236Z
What is going on in the world? 2021-01-17T11:30:12.275Z
Meditative thinking 2021-01-16T02:30:17.953Z
San Francisco outing 2021-01-15T11:40:13.720Z
Discussion on the choice of concepts 2021-01-14T04:00:13.283Z
What’s good about haikus? 2021-01-13T08:10:15.311Z
A vastly faster vaccine rollout 2021-01-12T07:40:16.165Z
The time I got really into poker 2021-01-11T13:00:17.856Z
A different dictionary 2021-01-10T06:30:53.518Z
Why not? potato chips in a box edition 2021-01-08T20:10:16.882Z

Comments

Comment by KatjaGrace on Coherence arguments imply a force for goal-directed behavior · 2021-04-08T21:45:51.560Z · LW · GW

I wrote an AI Impacts page summary of the situation as I understand it. If anyone feels like looking, I'm interested in corrections/suggestions (either here or in the AI Impacts feedback box).  

Comment by KatjaGrace on Coherence arguments imply a force for goal-directed behavior · 2021-03-29T19:36:05.142Z · LW · GW

A few quick thoughts on reasons for confusion:

I think maybe one thing going on is that I already took the coherence arguments to apply only in getting you from weakly having goals to strongly having goals, so since you were arguing against their applicability, I thought you were talking about the step from weaker to stronger goal direction. (I’m not sure what arguments people use to get from 1 to 2 though, so maybe you are right that it is also something to do with coherence, at least implicitly.)

It also seems natural to think of ‘weakly has goals’ as something other than ‘goal directed’, and ‘goal directed’ as referring only to ‘strongly has goals’, so that ‘coherence arguments do not imply goal directed behavior’ (in combination with expecting coherence arguments to be in the weak->strong part of the argument) sounds like ‘coherence arguments do not get you from "weakly has goals" to "strongly has goals"’.

I also think separating out the step from no goal direction to weak, and from weak to strong, might help with clarity. It sounded to me like you were considering an argument from 'any kind of agent' to 'strong goal directed' and finding it lacking, and I was like 'but any kind of agent includes a mix of those that this force will work on, and those it won't, so shouldn't it be a partial/probabilistic move toward goal direction?' Whereas you were just meaning to talk about what fraction of existing things are weakly goal directed.

Comment by KatjaGrace on Coherence arguments imply a force for goal-directed behavior · 2021-03-29T19:27:28.828Z · LW · GW

Thanks. Let me check if I understand you correctly:

You think I take the original argument to be arguing from ‘has goals’ to ‘has goals’, essentially, and agree that that holds, but don’t find it very interesting/relevant.

What you disagree with is an argument from ‘anything smart’ to ‘has goals’, which seems to be what is needed for the AI risk argument to apply to any superintelligent agent.

Is that right?

If so, I think it’s helpful to distinguish between ‘weakly has goals’ and ‘strongly has goals’:

  1. Weakly has goals: ‘has some sort of drive toward something, at least sometimes’ (e.g. aspects of outcomes are taken into account in decisions in some way)
  2. Strongly has goals: ‘pursues outcomes consistently and effectively’ (i.e. decisions maximize expected utility)
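
(As a rough formal gloss of 2, a sketch rather than anything from the thread: an expected utility maximizer picks actions by

$$a^* = \arg\max_{a \in A} \sum_{s} P(s \mid a)\, U(s)$$

for some fixed utility function $U$ over outcomes $s$ and available actions $A$; 'weakly has goals' then corresponds to $U$ bearing on choices only sometimes or only partially.)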


So that the full argument I currently take you to be responding to is closer to:

  1. By hypothesis, we will have superintelligent machines
  2. They will weakly have goals (for various reasons, e.g. they will do something, and maybe that means ‘weakly having goals’ in the relevant way? Probably other arguments go in here.)
  3. Anything that weakly has goals has reason to reform to become an EU maximizer, i.e. to strongly have goals
  4. Therefore we will have superintelligent machines that strongly have goals


In that case, my current understanding is that you are disagreeing with 2, and that you agree that if 2 holds in some case, then the argument goes through. That is, creatures that are weakly goal directed are liable to become strongly goal directed. (e.g. an agent that twitches because it has various flickering and potentially conflicting urges toward different outcomes is liable to become an agent that more systematically seeks to bring about some such outcomes.) Does that sound right?

If so, I think we agree. (In my intuition I characterize the situation as ‘there is roughly a gradient of goal directedness, and a force pulling less goal directed things into being more goal directed. This force probably doesn’t exist out at the zero goal directedness edges, but it is unclear how strong it is in the rest of the space—i.e. whether it becomes substantial as soon as you move out from zero goal directedness, or is weak until you are in a few specific places right next to "maximally goal directed".’)

Comment by KatjaGrace on Animal faces · 2021-03-11T16:57:41.611Z · LW · GW

Good points. Though I claim that I do hold the same facial expression for long periods sometimes, if that's what you mean by 'not moving'. In particular, sometimes it is very hard for me not to screw up my face in a kind of disgusted frown, especially if it is morning. And sometimes I grin for so long that my face hurts, and I still can't stop.

Comment by KatjaGrace on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-09T18:40:39.141Z · LW · GW

(Lesswrong version here: https://www.lesswrong.com/posts/JJxxoRPMMvWEYBDpc/why-does-applied-divinity-studies-think-ea-hasn-t-grown )

Comment by KatjaGrace on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-09T18:38:49.562Z · LW · GW

I respond here: https://worldspiritsockpuppet.com/2021/03/09/why-does-ads-think-ea-hasnt-grown.html

Comment by KatjaGrace on Tentative covid surface risk estimates · 2021-03-09T04:05:04.905Z · LW · GW

It doesn't seem that hard to wash your hands after putting away groceries, say. If I recall, I was not imagining getting many touches during such a trip. I'm mostly imagining that you put many of the groceries you purchase in your fridge or eat them within a couple of days, such that they are still fairly contaminated if they started out contaminated, and it is harder to not touch your face whenever you are eating recently acquired or cold food.

Comment by KatjaGrace on Wordtune review · 2021-02-25T02:09:10.289Z · LW · GW

Yes - I like 'application' over 'potentially useful product' and 'my more refined writing skills' over 'my more honed writing', in its first one, for instance.

Comment by KatjaGrace on Neck abacus · 2021-02-19T06:08:33.439Z · LW · GW

I grab the string and/or some beads I don't want to move together between my thumb and finger on one hand, and push the bead I do want to move with my thumb and finger of the other hand. (I don't need to see it because I can feel it, and the beads don't move when I touch them.) I can also do it more awkwardly with one hand.

Comment by KatjaGrace on Neck abacus · 2021-02-19T06:03:17.293Z · LW · GW

Thanks for further varieties! I hadn't seen the ring, and have had such a clicker but have not got the hang of using it non-awkwardly (where do you put it? With your keys? Who knows where those are? In your pocket? Who reliably has a pocket that fits things in? In your bag? Then you have to dig it out...)

Good point regarding wanting to know what number you have reached. I only want to know the exact number very occasionally, like with a bank account, but I agree that's not true of many use cases.

Comment by KatjaGrace on Unpopularity of efficiency · 2021-02-03T16:40:31.654Z · LW · GW

I haven't read Zvi's post, but would have thought that the good of slack can be cashed out in efficiency, if you are optimizing for the right goals (e.g. if you have a bunch of tasks in life which contribute to various things, it will turn out that you contribute to those things better overall if you have spare time between the tasks).  

If you aren't in the business of optimizing for the ultimately right goals though, I'd think you could also include slack as one of your instrumental goals, and thus mostly avoid serious conflict e.g. instead of turning out as many cookies per minute as I can, aim to turn out as many cookies as I can while spending half my time not working on it, and setting aside a bag of each ingredient. Perhaps the thought is that this doesn't work because 'slack' is hard to specify, so if you just say that I have to be away from the cookie cutters, I might spend my time strategizing about cookies instead, and that might be somehow not slack in the right way? Plus part of the point is that if things go awry, you want me to be able to put all my time back into cookies briefly?

Comment by KatjaGrace on Li’l pots · 2021-01-26T04:23:49.337Z · LW · GW

Thanks. What kind of gloves do you suggest?

Comment by KatjaGrace on Blog plant · 2020-12-19T08:42:24.466Z · LW · GW

I actually know very little about my plants at present, so cannot help you.

Comment by KatjaGrace on Blog plant · 2020-12-19T08:41:35.690Z · LW · GW

It is irrigation actually, not moisture sensors. Or rather, I think it irrigates based on the level of moisture, using a combination of tiny tubes and clay spikes that I admittedly don't fully understand. (It seems to be much better at watering my plants than I am, even ignoring time costs!) I do have to fill up the water container sometimes.

Comment by KatjaGrace on What technologies could cause world GDP doubling times to be <8 years? · 2020-12-10T18:01:10.471Z · LW · GW

I meant: conditional on it growing faster, why expect this is attributable to a small number of technologies, given that when it accelerated previously it was not like that (if I understand)?

Comment by KatjaGrace on What technologies could cause world GDP doubling times to be <8 years? · 2020-12-10T17:14:59.915Z · LW · GW

If throughout most of history growth rates have been gradually increasing, I don't follow why you would expect one technology to cause it to grow much faster, if it goes back to accelerating.

Comment by KatjaGrace on Why are delicious biscuits obscure? · 2020-12-09T23:57:46.503Z · LW · GW

Thanks for contributing data! :D 

Comment by KatjaGrace on Why are delicious biscuits obscure? · 2020-12-09T23:52:56.262Z · LW · GW

They are meant to be chewy, not crumbly.

Comment by KatjaGrace on Why are delicious biscuits obscure? · 2020-12-09T23:52:06.302Z · LW · GW

Making them tastier, though I'm not confident about this - it was originally motivated by not having normal flour, and then I have done some of each, and thought the gluten free ones were better, but there's much randomness at play.

I did mean 'white' by 'wheat'; sorry (I am a foreigner). I haven't tried anything other than the gluten free one mentioned and white wheat flour.

Comment by KatjaGrace on Automated intelligence is not AI · 2020-11-02T20:10:12.487Z · LW · GW

>Someone's cognitive labor went into making the rabbit mold, and everything from there on out is eliminating the need to repeat that labor, and to reduce the number of people who need to have that knowledge.

Yeah, that's the kind of thing I had in mind in the last paragraph.

Comment by KatjaGrace on My dad got stung by a bee, and is mildly allergic. What are the tradeoffs involved in deciding whether to have him go to the emergency room? · 2020-04-22T21:22:59.106Z · LW · GW

In such a case, you might get many of the benefits without the covid risks from driving to very close to the ER, then hanging out there and not going in and risking infection unless worse symptoms develop, but being able to act very fast if they do.

Comment by KatjaGrace on Soft takeoff can still lead to decisive strategic advantage · 2020-02-19T01:11:50.545Z · LW · GW

1) Even if it counts as a DSA, I claim that it is not very interesting in the context of AI. DSAs of something already almost as large as the world are commonplace. For instance, in the extreme, the world minus any particular person could take over the world if they wanted to. The concern with AI is that an initially tiny entity might take over the world.

2) My important point is rather that your '30 year' number is specific to the starting size of the thing, and not just a general number for getting a DSA. In particular, it does not apply to smaller things.

3) Agreed that income doesn't equal taking over, though in the modern world, where so much is done via purchasing, it is closer. It's not clear to me that AI companies do better as a fraction of the world in terms of military power than they do in terms of spending.

Comment by KatjaGrace on Soft takeoff can still lead to decisive strategic advantage · 2020-02-18T06:56:45.040Z · LW · GW

The time it takes to get a DSA by growing bigger depends on how big you are to begin with. If I understand, you take your 30 years from considering the largest countries, which are not far from being the size of the world, and then use it when talking about AI projects that are much smaller (e.g. a billion dollars a year suggests about 1/100,000 of the world). If you start from a situation of an AI project being three doublings from taking over the world, say, then most of the question of how it came to have a DSA seems to be the question of how it grew the other seventeen doublings. (Perhaps you are thinking of an initially large country growing fast via AI? Do we then have to imagine that all of the country's resources are going into AI?)
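
(Rough arithmetic behind these doubling counts, assuming world output of around $10^{14}$/year as an illustrative figure:

$$10^{14} / 10^{9} = 10^{5}, \qquad \log_2 10^{5} \approx 16.6,$$

so a billion-dollar-a-year project is roughly seventeen doublings from world scale, and being 'three doublings from taking over' corresponds to already controlling about $2^{-3} = 1/8$ of the world.)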

Comment by KatjaGrace on LW For External Comments? · 2019-12-08T08:12:57.215Z · LW · GW

This sounds great to me, and I think I would be likely to sign up for it if I could, but I haven't thought about it for more than a few minutes, am particularly unsure about the implications for culture, and am maybe too enthusiastic in general for things being 'well organized'.

Comment by KatjaGrace on Pieces of time · 2019-11-13T08:11:18.190Z · LW · GW

Oh yeah, I think I get something similar when my sleep schedule gets very out of whack, or for some reason when I moved into my new house in January, though it went back to normal with time. (Potentially relevant features there: bedroom didn't seem very separated from common areas, at first was sleeping on a pile of yoga mats instead of a bed, didn't get out much.)

Comment by KatjaGrace on jacobjacob's Shortform Feed · 2019-10-31T17:08:55.828Z · LW · GW

I think random objects might work in a similar way. e.g. if talking in a restaurant, you grab the ketchup bottle and the salt to represent your point. I've only experimented with this once, with ultimately quite an elaborate set of condiments, tableware and fries involved. It seemed to make things more memorable and followable, but I wasn't much inclined to do it more for some reason. Possibly at that scale it was a lot of effort beyond the conversation.

Things I see around me sometimes get involved in my thoughts in a way that seems related. For instance, if I'm thinking about the interactions of two orgs while I'm near some trees, two of the trees will come to represent the two orgs, and my thoughts about how they should interact will echo ways that the trees are interacting, without me intending this.

Comment by KatjaGrace on Realistic thought experiments · 2018-11-29T21:55:08.420Z · LW · GW

No, never heard of it, that I know of.

Comment by KatjaGrace on Berkeley: being other people · 2018-10-23T21:37:07.527Z · LW · GW

I'm pretty unsure how much variation in experience there is—'not much' seems plausible to me, but why do you find it so probable?

Comment by KatjaGrace on Moloch in whom I sit alone · 2018-10-05T21:04:43.713Z · LW · GW

I also thought that at first, and wanted to focus on why people join groups that are already large. But yeah, lack of very small groups to join would entirely explain that. Leaving a group signaling not liking the conversation seems like a big factor from my perspective, but I'd guess I'm unusually bothered by that.

Another random friction:

  • If you just sit alone, you don't get to choose the second person who joins you. I think a thing people often do rather than sitting alone is wander alone, and grab someone else also wandering, or have plausible deniability that they might be actually walking somewhere, if they want to avoid being grabbed. This means both parties get some choice.

Comment by KatjaGrace on Moloch in whom I sit alone · 2018-10-05T20:53:57.297Z · LW · GW

Aw, thanks. However, I claim that this was a party with very high interesting-people density, and that the most obvious difference between me and others was that I ever sat alone.

Comment by KatjaGrace on Epistemic Spot Check: The Dorito Effect (Mark Schatzker) · 2018-10-04T02:20:58.261Z · LW · GW

I share something like this experience (food desirability varies a lot based on unknown factors and something is desirable for maybe a week and then not desirable for months) but haven't checked carefully that it is about nutrient levels in particular. If you have, I'd be curious to hear more about how.

(My main alternative hypothesis regarding my own experience is that it is basically imaginary, so you might just have a better sense than me of which things are imaginary...)

Comment by KatjaGrace on Epistemic Spot Check: The Dorito Effect (Mark Schatzker) · 2018-10-04T02:09:23.538Z · LW · GW

A page number or something for the 'more seasoned' link might be useful. The document is very long and doesn't appear to contain 'season-'.

The 'blander' link doesn't look like it supports the claim much, though I am only looking at the abstract. It says that 'in many instances' there have been reductions in crop flavor, but even this appears to be background that the author is assuming, rather than a claim that the paper is about. If the rest of the paper does contain more evidence on this, could you quote it or something, since the paper is expensive to see?

Comment by KatjaGrace on Reframing misaligned AGI's: well-intentioned non-neurotypical assistants · 2018-04-18T04:54:36.832Z · LW · GW

>I am somewhat hesitant to share simple intuition pumps about important topics, in case those intuition pumps are misleading.

This sounds wrong to me. Do you expect considering such things freely to be misleading on net? I expect some intuition pumps to be misleading, but for considering all of the intuitions that we can find about a situation to be better than avoiding them.

Comment by KatjaGrace on Will AI See Sudden Progress? · 2018-04-05T05:03:07.771Z · LW · GW

Thanks for your thoughts!

I don't quite follow you on the intelligence explosion issue. For instance, why does a strong argument against the intelligence explosion hypothesis need to show that a feedback loop is unlikely? Couldn't we believe that it is likely, but not likely to be very rapid for a while? For instance, there is probably a feedback loop in intelligence already, where humans with better thoughts and equipment are effectively smarter, and can then devise better thoughts and equipment. But this has been true for a while, and is a fairly slow process (at least for now, relative to our ability to deal with things).

Comment by KatjaGrace on Making yourself small · 2018-03-09T01:55:36.055Z · LW · GW

My example for high status/small was an esteemed teacher unexpectedly dropping in to see their student perform, and entering silently and at the last minute, then standing quietly at the back of the room by the door.

Comment by KatjaGrace on Person-moment affecting views · 2018-03-08T19:20:59.726Z · LW · GW

I also think they are probably wrong, but this kind of argument is a substantial part of why. So I want to see if they can be rescued from it, since that would affect their probability of being right from my perspective.

Do you think there are more compelling arguments that they are wrong, such that we need not consider ones like this? (Also just curious)

Comment by KatjaGrace on Multidimensional signaling · 2017-10-19T06:06:53.440Z · LW · GW

>Katja: do people infer that taste and wealth go together?

My weak guess is yes, but not sure.

Comment by KatjaGrace on Multidimensional signaling · 2017-10-19T06:03:56.315Z · LW · GW

I don't follow why you think this dynamic exists because wealth and taste are correlated. I think the dynamic I am describing is independent of that, and caused by it being very hard to find a signal of taste, say, that you cannot buy with other resources at least somewhat. If in fact taste were anticorrelated with wealth in terms of underlying characteristics, a wealthy person could still buy other people's tasteful guidance, for instance.

Comment by KatjaGrace on There's No Fire Alarm for Artificial General Intelligence · 2017-10-17T22:42:45.893Z · LW · GW

Scott's understanding of the survey is correct. They were asked about four occupations (with three probability-by-year, or year-reaching-probability, numbers for each), then for an occupation that they thought would be fully automated especially late, and the timing of that, then all occupations. (In general, survey details can be found at https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/)

Comment by KatjaGrace on Gnostic Rationality · 2017-10-12T21:47:42.722Z · LW · GW

"It's not enough to know about the Way and how to walk it; you need gnosis of walking."

Could I have a less metaphorical example of what people need gnosis of for rationality? I'm imagining you are thinking of e.g. what it is like to carry out changing your mind in a real situation, or what it looks like to fit knowing why you believe things into your usual sequences of mental motions, but I'm not sure.

Comment by KatjaGrace on Gnostic Rationality · 2017-10-12T19:31:21.323Z · LW · GW

So a gnostically rational person with low epistemic rationality cannot figure things out by reasoning, yet experiences being rational nonetheless? Could you say more about what you mean by 'rational' here? Is it something like frequently having good judgment?

Comment by KatjaGrace on For signaling? (Part I) · 2017-09-28T20:17:20.560Z · LW · GW

I wasn't thinking of one of them as the opponent really, but it is inspired by an amalgam of all the casual conversation about signaling I have ever had. For some reason I feel like there is sort of a canonical platonic conversation about signaling, and all of the real conversations are short extracts from it. So I started out trying to write it down. It doesn't seem very canonical in the end, but I figured it might be interesting anyway.

Comment by KatjaGrace on Impression track records · 2017-09-24T11:53:19.984Z · LW · GW

In my terminology, 'impression' is your own sense of what seems true before taking into account other people's views (unless another person's view actually changes your own sense) and 'belief' is what you would actually bet on, given that you are not vastly more reliable than everyone with different impressions.

For example, perhaps my friend is starting a project, and based on talking to her about it a bit I feel like it is stupid and will never work. But several other friends who work on similar projects are really excited about it. So I might decide that it is probably going to be successful after all, though it doesn't look exciting to me. Then my impression of the project was that it was unpromising, but my belief is that it is promising.

Comment by KatjaGrace on I Want To Live In A Baugruppe · 2017-03-17T09:39:01.889Z · LW · GW

Interested in things like this, presently have a partial version that is good.

Comment by KatjaGrace on I Want To Live In A Baugruppe · 2017-03-17T09:37:11.113Z · LW · GW

In my experience this has been less of a problem than you might expect: our landlord likes us because we are reasonable and friendly and only destroy parts of the house when we want to make renovations with our own money and so on. So they would prefer more of us to many other candidates. And since we would also prefer they have more of us, we can make sure our landlord and more of us are in contact.

Comment by KatjaGrace on I Want To Live In A Baugruppe · 2017-03-17T09:30:57.913Z · LW · GW

I and friends have, but pretty newly; there are currently two houses two doors apart, and more friends in the process of moving into a third three doors down. I have found this good so far, and expect to continue to for now, though I agree it might be unstable long term. As an aside, there is something nice about being able to wander down the street and visit one's neighbors, that all living in one house doesn't capture.

Comment by KatjaGrace on Superintelligence 29: Crunch time · 2015-03-31T04:35:52.581Z · LW · GW

Bostrom quotes a colleague saying that a Fields medal indicates two things: that the recipient was capable of accomplishing something important, and that he didn't. Should potential Fields medalists move into AI safety research?

Comment by KatjaGrace on Superintelligence 29: Crunch time · 2015-03-31T04:32:26.596Z · LW · GW

The claim on p257 that we should try to do things that are robustly positive seems contrary to usual consequentialist views, unless this is just a heuristic for maximizing value.

Comment by KatjaGrace on Superintelligence 29: Crunch time · 2015-03-31T04:31:31.292Z · LW · GW

Does anyone know of a good short summary of the case for caring about AI risk?

Comment by KatjaGrace on Superintelligence 29: Crunch time · 2015-03-31T04:30:46.231Z · LW · GW

Did you disagree with anything in this chapter?