Posts

Have we really forsaken natural selection? 2024-03-27T06:50:04.694Z
Robin Hanson and I talk about AI risk 2024-03-27T06:50:03.420Z
More podcasts on 2023 AI survey: Cognitive Revolution and FLI 2024-03-27T05:50:04.499Z
New social credit formalizations 2024-03-11T19:00:06.201Z
Movie posters 2024-03-06T06:20:03.034Z
Are we so good to simulate? 2024-03-04T05:20:03.535Z
Deep and obvious points in the gap between your thoughts and your pictures of thought 2024-02-23T07:30:07.461Z
Parasocial relationship logic 2024-02-23T07:30:05.475Z
Shaming with and without naming 2024-02-23T07:30:03.862Z
Survey of 2,778 AI authors: six parts in pictures 2024-01-06T04:43:34.590Z
I put odds on ends with Nathan Young 2023-11-17T05:40:03.006Z
A to Z of things 2023-11-17T05:20:03.134Z
The other side of the tidal wave 2023-11-03T05:40:05.363Z
Robin Hanson and I talk about AI risk 2023-05-04T22:20:08.448Z
How bad a future do ML researchers expect? 2023-03-09T04:50:05.122Z
Have we really forsaken natural selection? 2023-01-12T00:10:05.998Z
We don’t trade with ants 2023-01-10T23:50:11.476Z
How to eat potato chips while typing 2023-01-03T11:50:05.816Z
Pacing: inexplicably good 2023-01-02T08:30:10.158Z
Worldly Positions archive, briefly with private drafts 2022-12-30T12:20:05.430Z
More ways to spot abysses 2022-12-30T06:30:06.301Z
Let’s think about slowing down AI 2022-12-22T17:40:04.787Z
Counterarguments to the basic AI x-risk case 2022-10-14T13:00:05.903Z
A game of mattering 2022-10-12T08:50:18.097Z
Calibration of a thousand predictions 2022-10-12T08:50:16.768Z
A game of mattering 2022-09-23T02:30:15.714Z
Podcasts on surveys, slower AI, AI arguments, etc 2022-09-18T07:30:15.846Z
Survey advice 2022-08-24T03:10:21.424Z
What do ML researchers think about AI in 2022? 2022-08-04T15:40:05.024Z
Book review: The Passenger by Lisa Lutz 2022-06-23T23:10:19.626Z
An inquiry into the thoughts of twenty-five people in India 2022-05-28T08:30:15.355Z
Podcast: Spencer Greenberg talks to me about dealing with our groupstruckness and boundedness 2022-05-20T04:20:12.221Z
Proposal: Twitter dislike button 2022-05-17T19:40:10.485Z
Fighting in various places for a really long time 2022-05-11T01:50:18.681Z
Stuff I might do if I had covid 2022-05-11T00:00:20.312Z
Why do people avoid vaccination? 2022-02-10T21:10:18.311Z
Bernal Heights: acquisition of a bicycle 2022-02-09T21:00:13.559Z
Positly covid survey 2: controlled productivity data 2022-02-04T07:30:16.934Z
Positly covid survey: long covid 2022-01-18T10:40:18.665Z
Long covid: probably worth avoiding—some considerations 2022-01-16T11:46:52.087Z
Survey supports ‘long covid is bad’ hypothesis (very tentative) 2022-01-14T14:50:18.538Z
Beyond fire alarms: freeing the groupstruck 2021-09-26T09:30:17.288Z
Punishing the good 2021-07-20T23:30:15.270Z
Lafayette: empty traffic signals 2021-07-16T04:30:15.759Z
Lafayette: traffic vessels 2021-07-15T01:00:15.658Z
Typology of blog posts that don’t always add anything clear and insightful 2021-07-13T01:40:15.274Z
Do incoherent entities have stronger reason to become more coherent than less? 2021-06-30T05:50:10.842Z
Holidaying and purpose 2021-06-06T19:30:12.433Z
Coherence arguments imply a force for goal-directed behavior 2021-03-26T16:10:04.936Z
Animal faces 2021-03-11T08:50:11.484Z

Comments

Comment by KatjaGrace on A to Z of things · 2023-11-21T10:26:27.400Z · LW · GW

Fair! I interpret them as probably happy free-range sheep being raised for wool, an existence I'm happy about and in particular prefer to vegetablehood, but a) that seems uncertain, and b) ymmv regarding the value of unfree sheep lives being used as a means to an end etc. 

Comment by KatjaGrace on A to Z of things · 2023-11-21T10:20:46.142Z · LW · GW

The seals share the reference class "seals" but are different; notably, one is way bigger than the others. So if you wanted to predict something about the big seal, there is a discussion to be had about what to make of the seal reference class, or other possible reference classes, e.g. "things that weigh half a ton".

Comment by KatjaGrace on The other side of the tidal wave · 2023-11-06T01:20:11.017Z · LW · GW

Assuming your preferences don't involve other people or the world

Comment by KatjaGrace on The other side of the tidal wave · 2023-11-06T01:18:03.362Z · LW · GW

Not sure about this, but to the extent it was so, often they were right that a lot of things they liked would be gone soon, and that that was sad. (Not necessarily on net, though maybe even on net for them and people like them.)

Comment by KatjaGrace on The other side of the tidal wave · 2023-11-06T01:07:37.871Z · LW · GW

Seems like there are a lot of possibilities, some of them good, and I have little time to think about them. It just feels like a red flag for everything in your life to be swapped for other things by very powerful processes beyond your control while you are focused on not dying. Like, if lesser changes were upcoming in people's lives such that they landed in near mode, I think they would be way less sanguine—e.g. being forced to move to New York City.

Comment by KatjaGrace on A game of mattering · 2022-09-24T07:14:39.837Z · LW · GW

Do you mean that the half-day projects have to be in sequence relative to the other half-day projects, or within a particular half-day project, its contents have to be in sequence (so you can't for instance miss the first step then give up and skip to the second step)?

In general if things have to be done in sequence, often I make the tasks non-specific. E.g. let's say I want to read a set of chapters in order; then I might make the tasks 'read a chapter' rather than 'read the first chapter', etc. Then if I were to fail at the first one, I would keep reading the first chapter to grab the second item, then when I eventually rescued what would have been the first chapter, I would collect it by reading whatever chapter I was up to. (This is all hypothetical—I never read chapters that fast.)

Comment by KatjaGrace on Survey advice · 2022-08-27T00:45:09.277Z · LW · GW

Second sentence: 

  • People say very different things depending on framing, so responses to any particularly-framed question are presumably not accurate, though I'd still take them as some evidence.
  • People say very different things from one another, so any particular person is highly unlikely to be accurate.  An aggregate might still be good, but e.g. if people say such different things that three-quarters of them have to be totally wrong, then I don't think it's that much more likely that the last quarter is about right than that the answer is something almost nobody said.

First sentence: 

  • In spite of the above, and the prior low probability of this being a reliable guide to AGI timelines, our paper was the 16th most discussed paper in the world. On the other hand, something like Ajeya's timelines report (or even AI Impacts' cruder timelines botec earlier) seems more informative, and gets way less attention. (I didn't mean 'within the class of surveys, interest doesn't track informativeness much', though that might be true; I meant 'people seem to have substantial interest in surveys beyond what is explained by them being informative about e.g. AI timelines'.)

Comment by KatjaGrace on What do ML researchers think about AI in 2022? · 2022-08-05T05:28:53.170Z · LW · GW

We didn't do rounding though, right? Like, these people actually said 0?

Comment by KatjaGrace on Bernal Heights: acquisition of a bicycle · 2022-05-18T03:04:21.376Z · LW · GW

Gazelle ultimate T10+ 46inch

Comment by KatjaGrace on Positly covid survey: long covid · 2022-02-14T04:23:25.661Z · LW · GW

Not quite sure what you mean, but all data is linked at the end of https://www.lesswrong.com/posts/3Rtvo6qhFde6TnDng/positly-covid-survey-2-controlled-productivity-data

Comment by KatjaGrace on Survey supports ‘long covid is bad’ hypothesis (very tentative) · 2022-02-14T04:22:38.510Z · LW · GW

n probably too small to read much into it, but yes: https://www.lesswrong.com/posts/3Rtvo6qhFde6TnDng/positly-covid-survey-2-controlled-productivity-data

Comment by KatjaGrace on Survey supports ‘long covid is bad’ hypothesis (very tentative) · 2022-02-14T04:21:17.136Z · LW · GW

I did ask about it, data here (note that n is small): https://www.lesswrong.com/posts/iTH6gizyXFxxthkDa/positly-covid-survey-long-covid

Comment by KatjaGrace on Why do people avoid vaccination? · 2022-02-12T06:54:55.015Z · LW · GW

Yeah, I meant that early on in the vaccinations, officialish-seeming articles said or implied that breakthrough cases were very rare (even calling them 'breakthrough cases', to my ear, sounds like they are sort of more unexpected than they should be, but perhaps that's just what such things are always called). That seemed false even at the time, before later iterations of covid made it more blatantly so. I think it was probably motivated partly by a desire to convince people that the vaccine was very good, rather than just error, which I think is questionable behavior.

Comment by KatjaGrace on Long covid: probably worth avoiding—some considerations · 2022-01-16T23:22:39.590Z · LW · GW

I agree that I'm more likely to be concerned about in-fact-psychosomatic things than average, and on the outside view, thus probably biased in that direction in interpreting evidence. Sorry if that colors the set of considerations that seem interesting to me. (I didn't mean to claim that this was an unbiased list; sorry if I implied it.)

Some points regarding the object level:

  1. The scenario I described was to illustrate a logical point (that the initially tempting inference from that study wasn't valid). So I wouldn't want to take the numbers from that hypothetical scenario and apply them across the board to interpreting other data. I haven't thought through what range of possible numbers is really implied, or whether there are other ways to make sense of these prima facie weird findings (especially re lack of connection between having covid and thinking you have covid). If I put a lot of stock in that study,  I agree there is some adjustment to be made to other numbers (and probably anyway - surely some amount of misattribution is going on, and even some amount of psychosomatic illness). 
  2. My description was actually of how you would get those results if approximately none of the illness was psychosomatic but a lot of it was other illnesses (the description would work with psychosomatic illnesses too, but I worry that you misread my point, since you are saying that in that world most things are psychosomatic, and my point was that you can't infer that anything was psychosomatic).
  3. If the scenario I described was correct, the rates of misattribution implied would be specific to that population and their total ignorance about whether they had covid, rather than a fact intrinsic to covid in general, and applicable to all times and places. I do find it very hard to believe that in general there is not some decently strong association between having covid and thinking you have covid, even if also a lot of errors. 
  4. It's a single study, and single studies find all kinds of things. I don't recall seeing other evidence supporting it. In such a case, I'm inclined to treat it as worthy of adding some uncertainty, but not worthy of a huge update about everything. 
  5. If this consideration reduced real long covid cases by a factor of two, it doesn't feel like that changes the story very much (there's a lot of factor-of-two-level uncertainty all over the place, especially in guessing what the rate is for a specific demographic), so I guess it doesn't seem cruxy enough to give a lot of attention to.
  6. I agree that mostly it isn't salient to me that some fraction of cases are misattributions, and that maybe I should keep it in mind more, and say things like 'it looks like many people who think they had covid can no longer do their jobs' instead of taking things at face value. Though in my defense, this was a list of considerations, so I'm also not flagging all of the other corrections one might want to make to numbers throughout, as I might if I were doing a careful calculation. 
  7. It's true that I don't really believe that at least half of the bad cases are misattributions or psychosomatic—the psychosomatic story seems particularly far-fetched (particularly for the bad cases). Perhaps I'm mis-imagining what this would look like. Is there other evidence for this that you are moved by?

Comment by KatjaGrace on COVID and the holidays · 2021-12-20T03:55:06.867Z · LW · GW

I thought rapid tests were generally considered to have a much lower false negative rate for detecting contagiousness, though they often miss people who are infected but not yet contagious. I forget why I think this, and haven't been following possible updates on this story, but is that different from your impression? (Here's one place I think saying this, for instance: https://www.rapidtests.org/blog/antigen-tests-as-contagiousness-tests) On this story, rapid tests immediately before an event would reduce overall risk by a lot.

Comment by KatjaGrace on Beyond fire alarms: freeing the groupstruck · 2021-09-27T19:55:18.196Z · LW · GW

Agree the difference between actors and real companions is very important! I think you misread me (see response to AllAmericanBreakfast's above comment.) 

Your current model appears to be wrong (supposing people should respond to fire alarms quickly).

From the paper:

"Subjects in the three naive bystander condition were markedly inhibited from reporting the smoke. Since 75% of the alone subjects reported the smoke, we would expect over 98% of the three-person groups to contain at least one reporter. In fact, in only 38% of the eight groups in this condition did even 1 subject report (p < .01). Of the 24 people run in these eight groups, only 1 person reported the smoke within the first 4 minutes before the room got noticeably unpleasant. Only 3 people reported the smoke within the entire experimental period." 

Fig 1 in the paper looks at a glance to imply also that the solitary people all reported it before 4 minutes.
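The "over 98%" expectation in the quoted passage is simple independence arithmetic: if each solitary subject reports the smoke with probability 0.75, a group of three people behaving independently would fail to contain any reporter with probability 0.25³. A quick sketch (numbers taken from the quoted passage):

```python
# Probability arithmetic behind the quoted "over 98%" figure
# (Latané & Darley smoke study, three naive bystanders).
p_alone = 0.75  # fraction of solitary subjects who reported the smoke

# If three bystanders behaved independently like solitary subjects,
# the chance that at least one of them reports:
p_at_least_one = 1 - (1 - p_alone) ** 3
print(round(p_at_least_one, 4))  # 0.9844, i.e. "over 98%"
```

The gap between this 98% expectation and the observed 38% of groups is what makes the inhibition effect so striking.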

Comment by KatjaGrace on Beyond fire alarms: freeing the groupstruck · 2021-09-27T19:43:56.093Z · LW · GW

Sorry for being unclear.  The first video shows a rerun of the original experiment, which I think is interesting because it is nice to actually see how people behave, though it is missing footage of the (I agree crucial) three group case. The original experiment itself definitely included groups of entirely innocent participants, and I agree that if it didn't it wouldn't be very interesting. (According to the researcher in the footage, via private conversation, he recalls that the filmed rerun also included at least one trial with all innocent people, but it was a while ago, so he didn't sound confident. See footnote there.)

It still looks to me like this is what I say, but perhaps I could signpost more clearly that the video is different from the proper experiment? 

Comment by KatjaGrace on Ask Not "How Are You Doing?" · 2021-07-26T21:27:55.692Z · LW · GW

I think I would have agreed that answering honestly is a social gaffe a few years ago, and in my even younger years I found it embarrassing to ask such things when we both knew I wasn't trying to learn the answer, but now I feel like it's very natural to elaborate a bit, and it usually doesn't feel like an error. e.g. 'Alright - somewhat regretting signing up for this thing, but it's reminding me that I'm interested in the topic' or 'eh, seen better days, but making crepes - want one?' I wonder if I've become oblivious in my old age, or socially chill, or the context has changed. It could be partly that this depends on how well the conversationalists know each other, and it has been a slow year and a half for seeing people I don't live with.

Comment by KatjaGrace on Feedback is central to agency · 2021-07-23T15:40:20.731Z · LW · GW

To check I have this: in the two-level adaptive system, one level is the program adjusting its plan toward the target configuration of being a good plan, and the other level is the car (for instance) adjusting its behavior (due to following the plan) toward getting to a particular place without crashing?

Comment by KatjaGrace on Taboo "Outside View" · 2021-06-30T07:28:37.597Z · LW · GW

Fwiw I'm not aware of using or understanding 'outside view' to mean something other than basically reference class forecasting (or trend extrapolation, which I'd say is the same). In your initial example, it seems like the other person is using it fine - yes, if you had more examples of an AGI takeoff, you could do better reference class forecasting, but their point is that in the absence of any examples of the specific thing, you also lack other non-reference-class-forecasting methods (e.g. a model), and you lack them even more than you lack relevant reference classes. They might be wrong, but it seems like a valid use. I assume you're right that some people do use the term for other stuff, because they say so in the comments, but is it actually that common?

I don't follow your critique of doing an intuitively-weighted average of outside view and some inside view. In particular, you say 'This is not Tetlock’s advice, nor is it the lesson from the forecasting tournaments...'. But in the blog post section that you point to, you say 'Tetlock’s advice is to start with the outside view, and then adjust using the inside view.', which sounds like he is endorsing something very similar, or a superset of the thing you're citing him as disagreeing with?

Comment by KatjaGrace on Holidaying and purpose · 2021-06-08T19:01:40.243Z · LW · GW

I too thought the one cruise I've been on was a pretty good type of holiday! A giant moving building full of nice things is so much more convenient a vehicle than the usual series of planes and cabs and subways and hauling bags along the road and stationary buildings etc.

Comment by KatjaGrace on Coherence arguments imply a force for goal-directed behavior · 2021-04-08T21:45:51.560Z · LW · GW

I wrote an AI Impacts page summary of the situation as I understand it. If anyone feels like looking, I'm interested in corrections/suggestions (either here or in the AI Impacts feedback box).  

Comment by KatjaGrace on Coherence arguments imply a force for goal-directed behavior · 2021-03-29T19:36:05.142Z · LW · GW

A few quick thoughts on reasons for confusion:

I think maybe one thing going on is that I already took the coherence arguments to apply only in getting you from weakly having goals to strongly having goals, so since you were arguing against their applicability, I thought you were talking about the step from weaker to stronger goal direction. (I’m not sure what arguments people use to get from 1 to 2 though, so maybe you are right that it is also something to do with coherence, at least implicitly.)

It also seems natural to think of ‘weakly has goals’ as something other than ‘goal directed’, and ‘goal directed’ as referring only to ‘strongly has goals’, so that ‘coherence arguments do not imply goal directed behavior’ (in combination with expecting coherence arguments to be in the weak->strong part of the argument) sounds like: coherence arguments do not get you from ‘weakly has goals’ to ‘strongly has goals’.

I also think separating out the step from no goal direction to weak, and weak to strong might be helpful in clarity. It sounded to me like you were considering an argument from 'any kind of agent' to 'strong goal directed' and finding it lacking, and I was like 'but any kind of agent includes a mix of those that this force will work on, and those it won't, so shouldn't it be a partial/probabilistic move toward goal direction?' Whereas you were just meaning to talk about what fraction of existing things are weakly goal directed.

Comment by KatjaGrace on Coherence arguments imply a force for goal-directed behavior · 2021-03-29T19:27:28.828Z · LW · GW

Thanks. Let me check if I understand you correctly:

You think I take the original argument to be arguing from ‘has goals’ to ‘has goals’, essentially, and agree that that holds, but don’t find it very interesting/relevant.

What you disagree with is an argument from ‘anything smart’ to ‘has goals’, which seems to be what is needed for the AI risk argument to apply to any superintelligent agent.

Is that right?

If so, I think it’s helpful to distinguish between ‘weakly has goals’ and ‘strongly has goals’:

  1. Weakly has goals: ‘has some sort of drive toward something, at least sometimes’ (e.g. aspects of outcomes are taken into account in decisions in some way)
  2. Strongly has goals: ‘pursues outcomes consistently and effectively’ (i.e. decisions maximize expected utility)

 

So that the full argument I currently take you to be responding to is closer to:

  1. By hypothesis, we will have superintelligent machines
  2. They will weakly have goals (for various reasons, e.g. they will do something, and maybe that means ‘weakly having goals’ in the relevant way? Probably other arguments go in here.)
  3. Anything that weakly has goals has reason to reform to become an EU maximizer, i.e. to strongly have goals
  4. Therefore we will have superintelligent machines that strongly have goals

 

In that case, my current understanding is that you are disagreeing with 2, and that you agree that if 2 holds in some case, then the argument goes through. That is, creatures that are weakly goal directed are liable to become strongly goal directed. (E.g. an agent that twitches because it has various flickering and potentially conflicting urges toward different outcomes is liable to become an agent that more systematically seeks to bring about some such outcomes.) Does that sound right?

If so, I think we agree. (In my intuition I characterize the situation as: there is roughly a gradient of goal directedness, and a force pulling less goal directed things into being more goal directed. This force probably doesn’t exist out at the zero goal directedness edge, but it is unclear how strong it is in the rest of the space—i.e. whether it becomes substantial as soon as you move out from zero goal directedness, or is weak until you are in a few specific places right next to ‘maximally goal directed’.)

Comment by KatjaGrace on Animal faces · 2021-03-11T16:57:41.611Z · LW · GW

Good points. Though I claim that I do hold the same facial expression for long periods sometimes, if that's what you mean by 'not moving'. In particular, sometimes it is very hard for me not to screw up my face in a kind of disgusted frown, especially if it is morning. And sometimes I grin for so long that my face hurts, and I still can't stop.

Comment by KatjaGrace on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-09T18:40:39.141Z · LW · GW

(Lesswrong version here: https://www.lesswrong.com/posts/JJxxoRPMMvWEYBDpc/why-does-applied-divinity-studies-think-ea-hasn-t-grown )

Comment by KatjaGrace on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-09T18:38:49.562Z · LW · GW

I respond here: https://worldspiritsockpuppet.com/2021/03/09/why-does-ads-think-ea-hasnt-grown.html

Comment by KatjaGrace on Tentative covid surface risk estimates · 2021-03-09T04:05:04.905Z · LW · GW

It doesn't seem that hard to wash your hands after putting away groceries, say. If I recall, I was not imagining getting many touches during such a trip. I'm mostly imagining that you put many of the groceries you purchase in your fridge or eat them within a couple of days, such that they are still fairly contaminated if they started out contaminated, and it is harder to not touch your face whenever you are eating recently acquired or cold food.

Comment by KatjaGrace on Wordtune review · 2021-02-25T02:09:10.289Z · LW · GW

Yes - I like 'application' over 'potentially useful product' and 'my more refined writing skills' over 'my more honed writing', in its first one, for instance.

Comment by KatjaGrace on Neck abacus · 2021-02-19T06:08:33.439Z · LW · GW

I grab the string and/or some beads I don't want to move together between my thumb and finger on one hand, and push the bead I do want to move with my thumb and finger of the other hand. (I don't need to see it because I can feel it and the beads don't move with my touching it.) I can also do it more awkwardly with one hand.

Comment by KatjaGrace on Neck abacus · 2021-02-19T06:03:17.293Z · LW · GW

Thanks for further varieties! I hadn't seen the ring, and have had such a clicker but have not got the hang of using it non-awkwardly (where do you put it? With your keys? Who knows where those are? In your pocket? Who reliably has a pocket that fits things in? In your bag? Then you have to dig it out...)

Good point regarding wanting to know what number you have reached. I only want to know the exact number very occasionally, like with a bank account, but I agree that's not true of many use cases.

Comment by KatjaGrace on Unpopularity of efficiency · 2021-02-03T16:40:31.654Z · LW · GW

I haven't read Zvi's post, but would have thought that the good of slack can be cashed out in efficiency, if you are optimizing for the right goals (e.g. if you have a bunch of tasks in life which contribute to various things, it will turn out that you contribute to those things better overall if you have spare time between the tasks).  

If you aren't in the business of optimizing for the ultimately right goals though, I'd think you could also include slack as one of your instrumental goals, and thus mostly avoid serious conflict e.g. instead of turning out as many cookies per minute as I can, aim to turn out as many cookies as I can while spending half my time not working on it, and setting aside a bag of each ingredient. Perhaps the thought is that this doesn't work because 'slack' is hard to specify, so if you just say that I have to be away from the cookie cutters, I might spend my time strategizing about cookies instead, and that might be somehow not slack in the right way? Plus part of the point is that if things go awry, you want me to be able to put all my time back into cookies briefly?

Comment by KatjaGrace on Li’l pots · 2021-01-26T04:23:49.337Z · LW · GW

Thanks. What kind of gloves do you suggest?

Comment by KatjaGrace on Blog plant · 2020-12-19T08:42:24.466Z · LW · GW

I actually know very little about my plants at present, so cannot help you.

Comment by KatjaGrace on Blog plant · 2020-12-19T08:41:35.690Z · LW · GW

It is irrigation actually, not moisture sensors. Or rather, I think it irrigates based on the level of moisture, using a combination of tiny tubes and clay spikes that I admittedly don't fully understand. (It seems to be much better at watering my plants than I am, even ignoring time costs!) I do have to fill up the water container sometimes.

Comment by KatjaGrace on What technologies could cause world GDP doubling times to be <8 years? · 2020-12-10T18:01:10.471Z · LW · GW

I meant: conditional on it growing faster, why expect this is attributable to a small number of technologies, given that when it accelerated previously it was not like that (if I understand)?

Comment by KatjaGrace on What technologies could cause world GDP doubling times to be <8 years? · 2020-12-10T17:14:59.915Z · LW · GW

If throughout most of history growth rates have been gradually increasing, I don't follow why you would expect one technology to cause it to grow much faster, if it goes back to accelerating.

Comment by KatjaGrace on Why are delicious biscuits obscure? · 2020-12-09T23:57:46.503Z · LW · GW

Thanks for contributing data! :D 

Comment by KatjaGrace on Why are delicious biscuits obscure? · 2020-12-09T23:52:56.262Z · LW · GW

They are meant to be chewy, not crumbly.

Comment by KatjaGrace on Why are delicious biscuits obscure? · 2020-12-09T23:52:06.302Z · LW · GW

Making them tastier, though I'm not confident about this: it was originally motivated by not having normal flour, and then I have done some of each and thought the gluten-free ones were better, but there's much randomness at play.

I did mean 'white' by 'wheat'; sorry (I am a foreigner). I haven't tried anything other than the gluten-free one mentioned and white wheat flour.

Comment by KatjaGrace on Automated intelligence is not AI · 2020-11-02T20:10:12.487Z · LW · GW

>Someone's cognitive labor went into making the rabbit mold, and everything from there on out is eliminating the need to repeat that labor, and to reduce the number of people who need to have that knowledge.

Yeah, that's the kind of thing I had in mind in the last paragraph.

Comment by KatjaGrace on My dad got stung by a bee, and is mildly allergic. What are the tradeoffs involved in deciding whether to have him go the the emergency room? · 2020-04-22T21:22:59.106Z · LW · GW

In such a case, you might get many of the benefits without the covid risks from driving to very close to the ER, then hanging out there and not going in and risking infection unless worse symptoms develop, but being able to act very fast if they do.

Comment by KatjaGrace on Soft takeoff can still lead to decisive strategic advantage · 2020-02-19T01:11:50.545Z · LW · GW

1) Even if it counts as a DSA, I claim that it is not very interesting in the context of AI. DSAs of something already almost as large as the world are commonplace. For instance, in the extreme, the world minus any particular person could take over the world if they wanted to. The concern with AI is that an initially tiny entity might take over the world.

2) My important point is rather that your '30 year' number is specific to the starting size of the thing, and not just a general number for getting a DSA. In particular, it does not apply to smaller things.

3) Agree income doesn't equal taking over, though in the modern world where much purchasing occurs, it is closer. Not clear to me that AI companies do better as a fraction of the world in terms of military power than they do in terms of spending.

Comment by KatjaGrace on Soft takeoff can still lead to decisive strategic advantage · 2020-02-18T06:56:45.040Z · LW · GW

The time it takes to get a DSA by growing bigger depends on how big you are to begin with. If I understand, you take your 30 years from considering the largest countries, which are not far from being the size of the world, and then use it when talking about AI projects that are much smaller (e.g. a billion dollars a year suggests about 1/100,000 of the world). If you start from a situation of an AI project being three doublings from taking over the world say, then most of the question of how it came to have a DSA seems to be the question of how it grew the other seventeen doublings. (Perhaps you are thinking of an initially large country growing fast via AI? Do we then have to imagine that all of the country's resources are going into AI?)
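The rough arithmetic behind this comment can be sketched as follows (the ~$100 trillion world GDP figure is my illustrative assumption, not from the original comment):

```python
import math

# Rough doubling arithmetic for a hypothetical AI project (illustrative numbers).
world_economy = 100e12   # assumed ~$100 trillion/year world GDP
project_size = 1e9       # a billion dollars a year, as in the comment

fraction = project_size / world_economy
print(fraction)  # 1e-05, i.e. about 1/100,000 of the world

# Number of doublings needed to grow from that fraction to world scale:
doublings = math.log2(1 / fraction)
print(round(doublings, 1))  # ~16.6, i.e. roughly seventeen doublings
```

This is why the comment frames "three doublings from taking over the world" as the easy last step: under these assumed numbers, the other seventeen-ish doublings are where nearly all the growth happens.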

Comment by KatjaGrace on LW For External Comments? · 2019-12-08T08:12:57.215Z · LW · GW

This sounds great to me, and I think I would be likely to sign up for it if I could, but I haven't thought about it for more than a few minutes, am particularly unsure about the implications for culture, and am maybe too enthusiastic in general for things being 'well organized'.

Comment by KatjaGrace on Pieces of time · 2019-11-13T08:11:18.190Z · LW · GW

Oh yeah, I think I get something similar when my sleep schedule gets very out of whack, or for some reason when I moved into my new house in January, though it went back to normal with time. (Potentially relevant features there: bedroom didn't seem very separated from common areas, at first was sleeping on a pile of yoga mats instead of a bed, didn't get out much.)

Comment by KatjaGrace on jacobjacob's Shortform Feed · 2019-10-31T17:08:55.828Z · LW · GW

I think random objects might work in a similar way. e.g. if talking in a restaurant, you grab the ketchup bottle and the salt to represent your point. I've only experimented with this once, with ultimately quite an elaborate set of condiments, tableware and fries involved. It seemed to make things more memorable and followable, but I wasn't much inclined to do it more for some reason. Possibly at that scale it was a lot of effort beyond the conversation.

Things I see around me sometimes get involved in my thoughts in a way that seems related. For instance, if I'm thinking about the interactions of two orgs while I'm near some trees, two of the trees will come to represent the two orgs, and my thoughts about how they should interact will echo ways that the trees are interacting, without me intending this.

Comment by KatjaGrace on Realistic thought experiments · 2018-11-29T21:55:08.420Z · LW · GW

No, never heard of it, that I know of.

Comment by KatjaGrace on Berkeley: being other people · 2018-10-23T21:37:07.527Z · LW · GW

I'm pretty unsure how much variation in experience there is—'not much' seems plausible to me, but why do you find it so probable?

Comment by KatjaGrace on Moloch in whom I sit alone · 2018-10-05T21:04:43.713Z · LW · GW

I also thought that at first, and wanted to focus on why people join groups that are already large. But yeah, lack of very small groups to join would entirely explain that. Leaving a group signaling not liking the conversation seems like a big factor from my perspective, but I'd guess I'm unusually bothered by that.

Another random friction:

  • If you just sit alone, you don't get to choose the second person who joins you. I think a thing people often do rather than sitting alone is wander alone, and grab someone else also wandering, or have plausible deniability that they might be actually walking somewhere, if they want to avoid being grabbed. This means both parties get some choice.