Río Grande: judgment calls

2019-01-27T03:50:01.102Z · score: 28 (10 votes)

Bay Area: reading, writing, moving, celebrating

2018-12-26T03:40:00.722Z · score: 12 (10 votes)

Worth keeping

2018-12-07T04:50:01.210Z · score: 57 (26 votes)
Comment by katjagrace on Realistic thought experiments · 2018-11-29T21:55:08.420Z · score: 3 (2 votes) · LW · GW

No, never heard of it, that I know of.

Bodega Bay: workshop

2018-11-27T03:20:01.290Z · score: 42 (12 votes)
Comment by katjagrace on Berkeley: being other people · 2018-10-23T21:37:07.527Z · score: 2 (1 votes) · LW · GW

I'm pretty unsure how much variation in experience there is—'not much' seems plausible to me, but why do you find it so probable?

Berkeley: being other people

2018-10-21T02:50:01.408Z · score: 19 (10 votes)

Bay Area: vibes

2018-10-11T19:00:18.694Z · score: 43 (17 votes)
Comment by katjagrace on Moloch in whom I sit alone · 2018-10-05T21:04:43.713Z · score: 7 (4 votes) · LW · GW

I also thought that at first, and wanted to focus on why people join groups that are already large. But yeah, a lack of very small groups to join would entirely explain that. The way leaving a group signals not liking the conversation seems like a big factor from my perspective, but I'd guess I'm unusually bothered by that.

Another random friction:

  • If you just sit alone, you don't get to choose the second person who joins you. I think a thing people often do instead of sitting alone is wander alone: they can grab someone else who is also wandering, or, if they want to avoid being grabbed, maintain plausible deniability that they might actually be walking somewhere. This means both parties get some choice.
Comment by katjagrace on Moloch in whom I sit alone · 2018-10-05T20:53:57.297Z · score: 2 (1 votes) · LW · GW

Aw, thanks. However, I claim that this was a party with a very high density of interesting people, and that the most obvious difference between me and others was that I ever sat alone.

Comment by katjagrace on Epistemic Spot Check: The Dorito Effect (Mark Schatzker) · 2018-10-04T02:20:58.261Z · score: 7 (5 votes) · LW · GW

I share something like this experience (food desirability varies a lot based on unknown factors and something is desirable for maybe a week and then not desirable for months) but haven't checked carefully that it is about nutrient levels in particular. If you have, I'd be curious to hear more about how.

(My main alternative hypothesis regarding my own experience is that it is basically imaginary, so you might just have a better sense than me of which things are imaginary.)

Comment by katjagrace on Epistemic Spot Check: The Dorito Effect (Mark Schatzker) · 2018-10-04T02:09:23.538Z · score: 9 (5 votes) · LW · GW

A page number or something for the 'more seasoned' link might be useful. The document is very long and doesn't appear to contain 'season-'.

The 'blander' link doesn't look like it supports the claim much, though I am only looking at the abstract. It says that 'in many instances' there have been reductions in crop flavor, but even this appears to be background that the author is assuming, rather than a claim that the paper is about. If the rest of the paper does contain more evidence on this, could you quote it or something, since the paper is expensive to see?

Moloch in whom I sit alone

2018-10-03T23:40:00.636Z · score: 50 (23 votes)

Are ethical asymmetries from property rights?

2018-07-02T03:00:00.567Z · score: 107 (50 votes)

Personal relationships with goodness

2018-05-14T18:50:01.310Z · score: 69 (18 votes)
Comment by katjagrace on Reframing misaligned AGI's: well-intentioned non-neurotypical assistants · 2018-04-18T04:54:36.832Z · score: 15 (3 votes) · LW · GW

> I am somewhat hesitant to share simple intuition pumps about important topics, in case those intuition pumps are misleading.

This sounds wrong to me. Do you expect considering such things freely to be misleading on net? I expect some intuition pumps to be misleading, but considering all of the intuitions we can find about a situation to be better than avoiding them.

Comment by katjagrace on Will AI See Sudden Progress? · 2018-04-05T05:03:07.771Z · score: 10 (2 votes) · LW · GW

Thanks for your thoughts!

I don't quite follow you on the intelligence explosion issue. For instance, why does a strong argument against the intelligence explosion hypothesis need to show that a feedback loop is unlikely? Couldn't we believe that it is likely, but not likely to be very rapid for a while? For instance, there is probably a feedback loop in intelligence already, where humans with better thoughts and equipment are effectively smarter, and can then devise better thoughts and equipment. But this has been true for a while, and is a fairly slow process (at least for now, relative to our ability to deal with things).
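
To illustrate the distinction (a toy sketch of my own, not something from the original exchange): whether a feedback loop produces an explosion depends on how strongly current capability feeds into the rate of improvement, not merely on whether the loop exists.

```python
# Toy model (illustrative only): capability improves each step at a rate
# proportional to capability ** feedback_strength. The feedback loop exists
# in every case; only its strength varies.

def grow(feedback_strength, steps=30, capability=1.0):
    for _ in range(steps):
        capability += 0.1 * capability ** feedback_strength
    return capability

# strength < 1: the loop exists but growth stays tame (roughly polynomial)
# strength = 1: exponential growth
# strength > 1: superexponential growth, i.e. an "explosion"
for s in (0.5, 1.0, 1.5):
    print(s, round(grow(s), 1))
```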

Realistic thought experiments

2018-04-04T01:50:01.763Z · score: 69 (17 votes)

The fundamental complementarity of consciousness and work

2018-03-28T01:20:00.563Z · score: 26 (7 votes)

Strengthening the foundations under the Overton Window without moving it

2018-03-14T02:20:00.604Z · score: 36 (8 votes)
Comment by katjagrace on Making yourself small · 2018-03-09T01:55:36.055Z · score: 24 (7 votes) · LW · GW

My example for high status/small was an esteemed teacher unexpectedly dropping in to see their student perform: entering silently and at the last minute, then standing quietly at the back of the room by the door.

Comment by katjagrace on Person-moment affecting views · 2018-03-08T19:20:59.726Z · score: 14 (3 votes) · LW · GW

I also think they are probably wrong, but this kind of argument is a substantial part of why. So I want to see if they can be rescued from it, since that would affect their probability of being right from my perspective.

Do you think there are more compelling arguments that they are wrong, such that we need not consider ones like this? (Also just curious)

Person-moment affecting views

2018-03-07T02:30:00.392Z · score: 42 (10 votes)

Will AI See Sudden Progress?

2018-02-26T00:41:14.514Z · score: 55 (15 votes)

Replacing expensive costly signals

2018-02-17T00:50:00.500Z · score: 60 (20 votes)

The Principled Intelligence Hypothesis

2018-02-14T01:00:00.939Z · score: 62 (17 votes)

Why everything might have taken so long

2018-01-01T01:00:00.441Z · score: 110 (45 votes)

Why did everything take so long?

2017-12-29T01:00:00.324Z · score: 60 (19 votes)

Rules of variety

2017-12-08T17:10:00.300Z · score: 64 (27 votes)
Comment by katjagrace on Multidimensional signaling · 2017-10-19T06:06:53.440Z · score: 4 (1 votes) · LW · GW

> Katja: do people infer that taste and wealth go together?

My weak guess is yes, but not sure.

Comment by katjagrace on Multidimensional signaling · 2017-10-19T06:03:56.315Z · score: 8 (2 votes) · LW · GW

I don't follow why you think this dynamic exists because wealth and taste are correlated. I think the dynamic I am describing is independent of that, and caused by it being very hard to find a signal of, say, taste that you cannot buy with other resources at least somewhat. If taste were in fact anticorrelated with wealth in terms of underlying characteristics, a wealthy person could still buy other people's tasteful guidance, for instance.

Comment by katjagrace on There's No Fire Alarm for Artificial General Intelligence · 2017-10-17T22:42:45.893Z · score: 16 (6 votes) · LW · GW

Scott's understanding of the survey is correct. Respondents were asked about four occupations (with three probability-by-year, or year-reaching-probability, numbers for each), then for an occupation that they thought would be fully automated especially late, and the timing of that, then about all occupations. (In general, survey details can be found at https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/)

Multidimensional signaling

2017-10-16T07:00:00.176Z · score: 50 (19 votes)
Comment by katjagrace on Gnostic Rationality · 2017-10-12T21:47:42.722Z · score: 11 (3 votes) · LW · GW

"It's not enough to know about the Way and how to walk it; you need gnosis of walking."

Could I have a less metaphorical example of what people need gnosis of for rationality? I'm imagining you are thinking of e.g. what it is like to carry out changing your mind in a real situation, or what it looks like to fit knowing why you believe things into your usual sequences of mental motions, but I'm not sure.

Comment by katjagrace on Gnostic Rationality · 2017-10-12T19:31:21.323Z · score: 10 (3 votes) · LW · GW

So a gnostically rational person with low epistemic rationality cannot figure things out by reasoning, yet experiences being rational nonetheless? Could you say more about what you mean by 'rational' here? Is it something like frequently having good judgment?

Signal seeding

2017-10-12T07:00:00.245Z · score: 30 (8 votes)

Prosocial manipulation

2017-10-01T05:30:00.210Z · score: 15 (5 votes)
Comment by katjagrace on For signaling? (Part I) · 2017-09-28T20:17:20.560Z · score: 3 (1 votes) · LW · GW

I wasn't thinking of one of them as the opponent really, but it is inspired by an amalgam of all the casual conversations about signaling I have ever had. For some reason I feel like there is sort of a canonical platonic conversation about signaling, and all of the real conversations are short extracts from it. So I started out trying to write it down. It doesn't seem very canonical in the end, but I figured it might be interesting anyway.

For signaling? (Part I)

2017-09-28T07:00:00.605Z · score: 14 (9 votes)
Comment by katjagrace on Impression track records · 2017-09-24T11:53:19.984Z · score: 8 (4 votes) · LW · GW

In my terminology, 'impression' is your own sense of what seems true before taking into account other people's views (unless another person's view actually changes your own sense) and 'belief' is what you would actually bet on, given that you are not vastly more reliable than everyone with different impressions.

For example, perhaps my friend is starting a project, and based on talking to her about it a bit I feel like it is stupid and will never work. But several other friends who work on similar projects are really excited about it. So I might decide that it is probably going to be successful after all, though it doesn't look exciting to me. Then my impression of the project was that it was unpromising, but my belief is that it is promising.

Impression track records

2017-09-24T04:40:00.131Z · score: 14 (7 votes)
Comment by katjagrace on I Want To Live In A Baugruppe · 2017-03-17T09:39:01.889Z · score: 5 (5 votes) · LW · GW

Interested in things like this, presently have a partial version that is good.

Comment by katjagrace on I Want To Live In A Baugruppe · 2017-03-17T09:37:11.113Z · score: 1 (1 votes) · LW · GW

In my experience this has been less of a problem than you might expect: our landlord likes us because we are reasonable and friendly and only destroy parts of the house when we want to make renovations with our own money and so on. So they would prefer more of us to many other candidates. And since we would also prefer they have more of us, we can make sure our landlord and more of us are in contact.

Comment by katjagrace on I Want To Live In A Baugruppe · 2017-03-17T09:30:57.913Z · score: 1 (1 votes) · LW · GW

I and friends have, but pretty newly; there are currently two houses two doors apart, and more friends in the process of moving into a third three doors down. I have found this good so far, and expect to continue to for now, though I agree it might be unstable long term. As an aside, there is something nice about being able to wander down the street and visit one's neighbors, that all living in one house doesn't capture.

Comment by katjagrace on Superintelligence 29: Crunch time · 2015-03-31T04:35:52.581Z · score: 3 (3 votes) · LW · GW

Bostrom quotes a colleague saying that a Fields medal indicates two things: that the recipient was capable of accomplishing something important, and that he didn't. Should potential Fields medalists move into AI safety research?

Comment by katjagrace on Superintelligence 29: Crunch time · 2015-03-31T04:32:26.596Z · score: 3 (3 votes) · LW · GW

The claim on p257 that we should try to do things that are robustly positive seems contrary to usual consequentialist views, unless this is just a heuristic for maximizing value.

Comment by katjagrace on Superintelligence 29: Crunch time · 2015-03-31T04:31:31.292Z · score: 7 (7 votes) · LW · GW

Does anyone know of a good short summary of the case for caring about AI risk?

Comment by katjagrace on Superintelligence 29: Crunch time · 2015-03-31T04:30:46.231Z · score: 4 (4 votes) · LW · GW

Did you disagree with anything in this chapter?

Comment by katjagrace on Superintelligence 29: Crunch time · 2015-03-31T04:29:27.856Z · score: 4 (4 votes) · LW · GW

Are there things that someone should maybe be doing about AI risk that haven't been mentioned yet?

Comment by katjagrace on Superintelligence 29: Crunch time · 2015-03-31T04:28:45.453Z · score: 5 (5 votes) · LW · GW

Are you concerned about AI risk? Do you do anything about it?

Comment by katjagrace on Superintelligence 29: Crunch time · 2015-03-31T04:27:58.991Z · score: 5 (5 votes) · LW · GW

Do you agree with Bostrom that humanity should defer non-urgent scientific questions, and work on time-sensitive issues such as AI safety?

Comment by katjagrace on Superintelligence 29: Crunch time · 2015-03-31T04:26:38.362Z · score: 3 (3 votes) · LW · GW

Did Superintelligence change your mind on anything?

Comment by katjagrace on Superintelligence 29: Crunch time · 2015-03-31T04:25:56.678Z · score: 4 (4 votes) · LW · GW

This is the last Superintelligence Reading Group. What did you think of it?

Superintelligence 29: Crunch time

2015-03-31T04:24:41.788Z · score: 8 (9 votes)
Comment by katjagrace on Superintelligence 28: Collaboration · 2015-03-30T19:13:57.078Z · score: 0 (0 votes) · LW · GW

Does anyone have suggested instances of this? I actually don't know of many.

Comment by katjagrace on Superintelligence 28: Collaboration · 2015-03-24T03:11:57.532Z · score: 2 (2 votes) · LW · GW

What did you find most interesting in this week's reading?

Comment by katjagrace on Superintelligence 28: Collaboration · 2015-03-24T03:11:42.399Z · score: 2 (2 votes) · LW · GW

Is AI more likely than other technologies to produce a race dynamic?

Comment by katjagrace on Superintelligence 28: Collaboration · 2015-03-24T03:10:53.963Z · score: 2 (2 votes) · LW · GW

What do you think of Miles' views?

Comment by katjagrace on Superintelligence 28: Collaboration · 2015-03-24T03:10:23.428Z · score: 2 (2 votes) · LW · GW

What do you think of the 'Common Good Principle'?

Comment by katjagrace on Superintelligence 28: Collaboration · 2015-03-24T03:09:57.409Z · score: 2 (2 votes) · LW · GW

Do you think the model Bostrom presents of the race dynamic captures basically what will happen if there are not big efforts to coordinate?

Comment by katjagrace on Superintelligence 28: Collaboration · 2015-03-24T03:09:09.436Z · score: 2 (2 votes) · LW · GW

If AI is likely to cause a 'race dynamic', do you think this could be averted by a plausible degree of effort?

Comment by katjagrace on Superintelligence 28: Collaboration · 2015-03-24T03:08:16.801Z · score: 2 (2 votes) · LW · GW

Is there anything particular you would like to do by the end of this reading group, other than read and discuss the last chapter?

Comment by katjagrace on Superintelligence 28: Collaboration · 2015-03-24T03:07:48.060Z · score: 2 (2 votes) · LW · GW

What did you find least persuasive in this week's reading?

Superintelligence 28: Collaboration

2015-03-24T01:29:21.415Z · score: 7 (8 votes)
Comment by katjagrace on Superintelligence 27: Pathways and enablers · 2015-03-17T01:36:42.759Z · score: 3 (3 votes) · LW · GW

Do you have further interesting pointers to material relating to this week’s reading?

Comment by katjagrace on Superintelligence 27: Pathways and enablers · 2015-03-17T01:36:23.480Z · score: 2 (2 votes) · LW · GW

What did you find most interesting in this week's reading?

Comment by katjagrace on Superintelligence 27: Pathways and enablers · 2015-03-17T01:36:06.849Z · score: 2 (2 votes) · LW · GW

What did you find least persuasive in this week's reading?

Comment by katjagrace on Superintelligence 27: Pathways and enablers · 2015-03-17T01:35:52.578Z · score: 2 (2 votes) · LW · GW

Did you change your mind about anything as a result of this week's reading?

Comment by katjagrace on Superintelligence 27: Pathways and enablers · 2015-03-17T01:35:07.049Z · score: 3 (3 votes) · LW · GW

What do you think of Kenzi's views?

Comment by katjagrace on Superintelligence 27: Pathways and enablers · 2015-03-17T01:34:55.049Z · score: 2 (2 votes) · LW · GW

Do you think the future will be worse if brain emulations come first? Should we try to influence the ordering one way or the other?

Comment by katjagrace on Superintelligence 27: Pathways and enablers · 2015-03-17T01:32:57.902Z · score: 4 (4 votes) · LW · GW

Do you think hardware progress is bad for the world?

Superintelligence 27: Pathways and enablers

2015-03-17T01:00:51.539Z · score: 10 (11 votes)
Comment by katjagrace on Superintelligence 26: Science and technology strategy · 2015-03-10T02:09:36.765Z · score: 2 (2 votes) · LW · GW

What was your favorite part of this section?

Comment by katjagrace on Superintelligence 26: Science and technology strategy · 2015-03-10T02:09:21.055Z · score: 2 (2 votes) · LW · GW

Do you think increased prosperity now is good for the long term?

Comment by katjagrace on Superintelligence 26: Science and technology strategy · 2015-03-10T02:09:08.749Z · score: 3 (3 votes) · LW · GW

How high do you think state risks are at the moment?

Comment by katjagrace on Superintelligence 26: Science and technology strategy · 2015-03-10T02:08:41.175Z · score: 3 (3 votes) · LW · GW

How plausible do you find the key points in this chapter? (see list above)

Comment by katjagrace on Superintelligence 26: Science and technology strategy · 2015-03-10T02:07:52.336Z · score: 3 (3 votes) · LW · GW

What do you think of Holden's view?

Superintelligence 26: Science and technology strategy

2015-03-10T01:43:48.371Z · score: 8 (9 votes)

Superintelligence 25: Components list for acquiring values

2015-03-03T02:01:11.071Z · score: 6 (7 votes)

Superintelligence 24: Morality models and "do what I mean"

2015-02-24T02:00:50.974Z · score: 7 (8 votes)

Superintelligence 23: Coherent extrapolated volition

2015-02-17T02:00:20.030Z · score: 5 (6 votes)

Superintelligence 22: Emulation modulation and institutional design

2015-02-10T02:06:01.155Z · score: 8 (9 votes)

Superintelligence 21: Value learning

2015-02-03T02:01:09.407Z · score: 7 (8 votes)

AI Impacts project

2015-02-02T19:40:14.612Z · score: 12 (13 votes)

Superintelligence 20: The value-loading problem

2015-01-27T02:00:19.358Z · score: 4 (5 votes)

Superintelligence 19: Post-transition formation of a singleton

2015-01-20T02:00:27.460Z · score: 7 (8 votes)

Superintelligence 18: Life in an algorithmic economy

2015-01-13T02:00:11.506Z · score: 4 (5 votes)

Superintelligence 17: Multipolar scenarios

2015-01-06T06:44:45.533Z · score: 4 (5 votes)

Superintelligence 16: Tool AIs

2014-12-30T02:00:09.775Z · score: 7 (8 votes)

Superintelligence 15: Oracles, genies and sovereigns

2014-12-23T02:01:02.907Z · score: 6 (7 votes)

Superintelligence 14: Motivation selection methods

2014-12-16T02:00:53.128Z · score: 5 (6 votes)

Superintelligence 13: Capability control methods

2014-12-09T02:00:34.433Z · score: 7 (8 votes)

Superintelligence 12: Malignant failure modes

2014-12-02T02:02:24.576Z · score: 9 (11 votes)

Superintelligence 11: The treacherous turn

2014-11-25T02:00:06.414Z · score: 10 (13 votes)

When should an Effective Altruist be vegetarian?

2014-11-22T05:04:07.000Z · score: 27 (30 votes)

Superintelligence 10: Instrumentally convergent goals

2014-11-18T02:00:26.375Z · score: 7 (10 votes)

Superintelligence 9: The orthogonality of intelligence and goals

2014-11-11T02:00:09.458Z · score: 8 (11 votes)

Superintelligence 8: Cognitive superpowers

2014-11-04T02:01:01.526Z · score: 7 (11 votes)

Superintelligence 7: Decisive strategic advantage

2014-10-28T01:01:01.415Z · score: 7 (10 votes)

Superintelligence 6: Intelligence explosion kinetics

2014-10-21T01:00:26.704Z · score: 9 (10 votes)