Comments

Comment by devi on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T16:59:17.952Z · LW · GW

Please see my comment on the grandparent.

I agree with Jessica's general characterization that this is better understood as multi-causal rather than as the direct result of one person's actions.

Comment by devi on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T16:48:56.294Z · LW · GW

Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC

Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I'll try to explain some context for the record.

In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than concern about malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which in retrospect I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, with us reinforcing each other's views. I believe (though I'm less certain of this) I also broached the topic with Michael and/or Anna at some point, which probably went like a brief mutual acknowledgement of this hidden fact before continuing on to topics that were more important.

I don't think anyone mentioned above was being dishonest about what they thought or was acting from a desire to hurt trans people. Yet, in retrospect, the above exchanges did cause me emotional pain and stress, and contributed to internalizing sexism and transphobia. I definitely wouldn't describe this as a main causal factor in my psychosis (that was very casual drug use that even Michael chided me for). I can't think of a good policy that would have been helpful to me in the above interactions. Maybe emphasizing bucket errors in this context more, or spreading caution about generalizing from abstract models to yourself, but I think I would have been too rash to listen.

I wouldn't say I completely moved past this until years after the events. I think the following things were helpful for that (in no particular order): the intersex-brains model and the associated brain imaging studies; everyday acceptance while living a normal life, not allowing myself concerns larger than renovations or retirement savings; getting to experience some parts of female socialization and mother-daughter bonding; full support from friends and family in the cases where my gender has come into question; and the acknowledgement of a medical system that still has some gate-keeping aspects (note: I don't think this positive effect of a gate-keeping system at all justifies the negative of denying anyone morphological freedom).

Thinking back to these events, engaging with the LessWrong community, and even posting publicly under my real name all bring back fear and feelings of trauma. I'm not saying this to increase a sense of having been wronged, but as an apology for this not being as long or as well-written as it should be, and for the lateness/absence of any replies/follow-ups.

Comment by devi on How does personality vary across US cities? · 2016-12-20T23:16:43.355Z · LW · GW

However, this likely understates the magnitudes of differences in underlying traits across cities, owing to people anchoring on the people who they know when answering the questions rather than anchoring on the national population

I think this is a major problem. This is mainly based on taking a brief look at the study a while back and being very suspicious of it for explicitly contradicting so many of my models (e.g. South America having lower Extraversion than North America, and East Asia being the least Conscientious region).
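
To make the concern concrete, here is a toy simulation (my own sketch, not from the study; the city offsets and anchor_weight are made up): if respondents rate themselves relative to the local people they know rather than the national population, the measured differences between cities shrink well below the true ones.

```python
import random

random.seed(0)
true_city_means = {"A": -0.5, "B": 0.0, "C": 0.5}  # made-up true trait offsets per city

def measured_mean(city_mean: float, n: int = 10_000, anchor_weight: float = 0.8) -> float:
    """Each respondent reports their true score minus a fraction of their
    local reference group's mean (the anchoring effect)."""
    scores = [random.gauss(city_mean, 1.0) for _ in range(n)]
    local_anchor = sum(scores) / n
    return sum(s - anchor_weight * local_anchor for s in scores) / n

for city, mu in true_city_means.items():
    print(city, f"true={mu:+.2f}", f"measured={measured_mean(mu):+.2f}")
# With anchor_weight = 0.8 the measured spread is roughly 5x smaller than the true spread.
```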

Comment by devi on "Flinching away from truth” is often about *protecting* the epistemology · 2016-12-20T22:30:27.117Z · LW · GW

The causal chain feels like a post-justification and not what actually goes on in the child's brain. I expect this to be computed using a vaguer sense of similarity that often ends up agreeing with causal chains (at least good enough in domains with good feedback loops). I agree that causal chains are more useful models of how you should think explicitly about things, but it seems to me that the purpose of these diagrams is to give a memorable symbol for the bug described here (use case: recognize and remember the applicability of the technique).

Comment by devi on Lesswrong 2016 Survey · 2016-05-09T00:39:31.418Z · LW · GW

Great! Thanks!

Comment by devi on Lesswrong 2016 Survey · 2016-05-01T19:21:05.308Z · LW · GW

I just remembered that I still haven't finished this. I saved my survey response partway through, but I don't think I ever submitted it. Will it still be counted, and if not, could you give people with saved survey responses the opportunity to submit them?

I realize this is my fault, and understand if you don't want to do anything extra to fix it.

Comment by devi on Open Thread Feb 16 - Feb 23, 2016 · 2016-02-18T02:09:37.597Z · LW · GW

I wasn't only referring to wanting to live where there are a lot of people. I was also referring to wanting to live near very similar/nice people and far from very dissimilar/annoying people. I think the latter, together with the expected ability to scale things down, would make people want to live in smaller, more selected communities, even if they were in the middle of nowhere.

Comment by devi on Open Thread Feb 16 - Feb 23, 2016 · 2016-02-17T01:48:10.676Z · LW · GW

Where people want to live depends on where other people live. It's possible to move away from bad Nash equilibria by cooperation.
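
As a toy illustration (my own, with made-up payoffs), a stag-hunt-style game shows how "stay where everyone already is" can be a Nash equilibrium everyone would prefer to leave, but only by moving together:

```python
# Payoffs are arbitrary; (my_choice, others_choice) -> my payoff.
payoff = {
    ("city", "city"): 3,
    ("city", "community"): 3,
    ("community", "community"): 5,   # everyone is better off if all coordinate on this
    ("community", "city"): 1,        # moving alone is worse than staying put
}

def best_response(others_choice: str) -> str:
    return max(["city", "community"], key=lambda mine: payoff[(mine, others_choice)])

assert best_response("city") == "city"            # the bad equilibrium is self-reinforcing
assert best_response("community") == "community"  # the good one is stable once reached
```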

Comment by devi on [Link] Introducing OpenAI · 2015-12-13T00:18:48.497Z · LW · GW

Yes, robust cooperation isn't worth much to us if it's cooperation between the paperclip maximizer and the pencilhead minimizer. But if there are a hundred shards that make up human values, and tens of thousands of people running AIs trying to maximize the values they see fit, it's actually not unreasonable to assume that the outcome, while not exactly what we hoped for, is comparable to incomplete solutions that err on the side of (1) instead.

After having written this I notice that I'm confused and conflating: (a) incomplete solutions in the sense of there not being enough time to do what should be done, and (b) incomplete solutions in the sense of it being actually (provably?) impossible to implement what we right now consider essential parts of the solution. Has anyone got thoughts on (a) vs (b)?

Comment by devi on [Link] Introducing OpenAI · 2015-12-12T17:53:58.227Z · LW · GW

It's important to remember the scale we're talking about here. A $1B project (even when considered over its lifetime) in such an explosive field, with such prominent backers, would be interpreted as nothing other than a power grab unless it included a lot of talk about openness (it will still be interpreted as one, but as a less threatening one). Read the interview with Musk and Altman and note how they're talking about sharing data and collaborations. This will include some noticeable short-term benefits for the contributors, and pushing for safety, either by including someone from our circles or by a more safety-focused mission statement, would impede your efforts at gathering such a strong coalition.

It's easy to moan about civilizational inadequacy and moodily conclude that the above shows how (as a species) we're so obsessed with appropriateness and politics that we will avoid our one opportunity to save ourselves. Sure, do some of that, and then think about the actual effects for a few minutes:

If the Value Alignment research program is solvable in the way we all hope it is (complete with a human-universal CEV, and stable reasoning under self-modification and about other instances of our algorithm), then having lots of implementations running around will be basically the same as distributing the code over lots of computers. If the only problem is that human values won't quite converge, this gives us a physical implementation of the merging algorithm of everyone just doing their own thing and (acausally?) trading with each other.

If we can't quite solve everything we're hoping for, this does change the strategic picture somewhat. Mainly it seems to push us away from a lot of quick fixes that will likely seem tempting as we approach the explosion: we can't have a sovereign just run the world like some kind of OS that keeps everyone separate, and we'll also be much less likely to make the mistake of creating CelestAI from Friendship is Optimal, something that optimizes most of our goals but has some undesired lock-ins. There are a bunch of variations here, but we seem locked out of strategies that try to secure some minimum level of the cosmic endowment while possibly failing to get a substantial constant fraction of our potential, by achieving that minimum at the cost of important values or freedoms.

Whether this is a bad thing or not really depends on how one evaluates two types of risk: (1) the risk of undesired lock-ins from an almost-perfect superintelligence getting too much relative power, and (2) the risk of bad multipolar traps. Much of (2) seems solvable by robust cooperation, which we seem to be making good progress on. What keeps spooking me are risks due to consciousness: either mistakenly endowing algorithms with it and creating suffering, or evolving to the point that we lose it. These aren't as easily solved by robust cooperation, especially if we don't notice them until it's too late. The real strategic problem right now is that there isn't really anyone we can trust to be unbiased in analyzing the relative dangers of (1) and (2), especially because they pattern-match so well onto the ideological split between left and right.

Comment by devi on [Link] Introducing OpenAI · 2015-12-12T17:06:11.792Z · LW · GW

They seem deeply invested in avoiding an AI arms race. This is a good thing, perhaps even if it speeds up research somewhat right now (avoiding increased speedups later might be the most important thing: e^x vs. 2+x, etc.).
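
A rough numerical sketch of the e^x vs. 2+x point (my own toy numbers; the horizon and growth rate are arbitrary): under exponential progress, a one-time additive head start today matters far less than a change to the growth rate later.

```python
import math

years, rate = 20, 0.5                            # arbitrary horizon and baseline growth rate

baseline    = math.exp(rate * years)
head_start  = 2 + math.exp(rate * years)         # a one-time additive "+2" boost now
faster_rate = math.exp(rate * 1.1 * years)       # a 10% higher growth rate instead

print(f"baseline:          {baseline:,.0f}")
print(f"+2 head start:     {head_start:,.0f}")   # indistinguishable from baseline
print(f"10% faster growth: {faster_rate:,.0f}")  # ends up about 2.7x the baseline
```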

Note that if the Deep Learning/ML field is talent-limited rather than funding-limited (which seems likely given how much funding it has), the only acceleration effects we should expect are from connectedness and openness (i.e. better institutions). Since some of this connectedness might come through collaboration with MIRI, this could very well advance AI Safety research relative to AI research (via tighter integration of the research programs and of choices about architecture and research direction; this seems especially important for how things play out in the endgame).

In summary, this could actually be really good; it's just too early to tell.

Comment by devi on [deleted post] 2015-01-15T03:00:09.189Z

Does Java (the good parts) refer to the O'Reilly book of the same name? Or is it some proper subset of the language, like what Crockford describes for JavaScript?

Comment by devi on Research Priorities for Artificial Intelligence: An Open Letter · 2015-01-12T16:01:01.240Z · LW · GW

Is the idea to get as many people as possible to sign this? Or do we want to avoid the image of a giant LW puppy jumping up and down while barking loudly, when the matter finally starts getting attention from serious people?

Comment by devi on Harper's Magazine article on LW/MIRI/CFAR and Ethereum · 2014-12-12T23:34:02.145Z · LW · GW

Men made up 88.8% of respondents; 78.7% were straight, 1.5% transgender, ...

The author makes it sound like this makes us a very male-dominated straight cisgender community.

Mostly male, sure. But most people won't compare the percentages of heterosexual and cisgender respondents with those of the general population and note that we are in fact more diverse.

Comment by devi on Harper's Magazine article on LW/MIRI/CFAR and Ethereum · 2014-12-12T23:29:47.449Z · LW · GW

But how can you take issue with our insistence that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?

This is not something that would have crossed my mind if I were organizing such a retreat. Making sure people who handled food washed their hands with soap, yes, but not hand sanitizer. Perhaps this is a cultural difference between (parts of) the US and Europe.

Comment by devi on MIRI Research Guide · 2014-11-09T21:36:31.540Z · LW · GW

It may be more exciting, but the HoTT book has a bad habit of sending people down the homotopy rabbit hole. People with CS backgrounds will probably find it easier to pick up other type theories. (In fact, Church's "simple type theory" paper may be enough instead of an entire textbook... maybe I'll update the suggestions.)

Yeah, it could quite easily sidetrack people. But simple type theory simply wouldn't do for foundations, since you can't do much mathematics without quantifiers (or dependent types, in the case of type theory). Further, IMHO, the univalence axiom is the biggest selling point of type theory as foundations. Perhaps a reading guide to the relevant bits of the HoTT book would be useful for people?
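
For readers unfamiliar with the jargon, a minimal sketch in standard HoTT notation (mine, not part of the guide): dependent function types are what play the role of universal quantifiers, and univalence identifies equality of types with equivalence of types.

```latex
% Pi-types as universal quantification, and the univalence axiom:
\[
  \prod_{x : A} B(x) \;\text{ corresponds to }\; \forall x{:}A.\; B(x),
  \qquad
  \mathsf{ua} : (A \simeq B) \simeq (A =_{\mathcal{U}} B).
\]
```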

Comment by devi on MIRI Research Guide · 2014-11-08T01:15:10.759Z · LW · GW

The recommended order for the papers seems really useful. I was a bit lost about where to start last time I tried reading a chunk of MIRI's research.

The old course list mentioned many more courses, in particular ones leaning more towards Computer Science than Mathematics (notably, there is no AI book mentioned here). Is this change mainly due to the different aims of the guides, or does it reflect an opinion within MIRI that those areas are no more likely to be useful than what a potential researcher would have studied otherwise?

I also notice that, within the subfields of Logic, Model Theory seems to have been replaced by Type Theory. Is this reprioritization due to changed beliefs about which is more useful for FAI, or due to differences in mathematical taste between you and Louie?

Also, if you're interested in Type Theory in the foundational sense, the Homotopy Type Theory book is probably more exciting, since that project explicitly has this ambition.

Comment by devi on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T03:23:38.526Z · LW · GW

I think AI-completeness is quite a seductive notion. Borrowing the concept of reduction from complexity/computability theory makes it sound technical, but unlike in those fields, I haven't seen anyone actually describe, e.g., how to use an AI with perfect language understanding to produce another one that proves theorems or philosophizes.

Offhand, it feels like everyone here should in principle be able to sketch the outlines of such a program (at least in the case of a base AI with perfect language comprehension that we want to reduce to), probably by some version of trying to teach the AI in natural language as we would teach a child. I suspect that the details of some of these reductions might still be useful, especially the parts that don't quite seem to work. For while I don't think that we'll see perfect machine translation before AGI, I'm much less convinced that there is a reduction from AGI to perfect translation AI. This illustrates what I suspect might be an interesting difference between two problem classes that we might both want to call AI-complete: the problems human programmers will likely not be able to solve before we create superintelligence, and the problems whose solutions we could (somewhat) easily repurpose to solve the general problem of human-level AI. These classes look the same in that we shouldn't expect to see problems from either of them solved without an imminent singularity, but they differ in that the problems in the latter class could prove to be motivating examples and test cases for AI work aimed at producing superintelligence.

I guess the core of what I'm trying to say is that arguments about AI-completeness have so far sounded like: "This problem is very, very hard, and we don't really know how to solve it. AI in general is also very, very hard, and we don't know how to solve it. So they should be the same." Heuristically there's nothing wrong with this, except we should keep in mind that we could be very mistaken about what is actually hard. I'm just missing the part that goes: "This is very, very hard. But if we knew it, this other thing would be really easy."
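
For concreteness, here is a toy sketch (my own illustration, with a deliberately mundane example) of the kind of many-one reduction the term borrows from complexity theory; the missing "if we knew it, this other thing would be really easy" part is exactly an exhibited transform like this.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def reduce_via(transform: Callable[[A], B],
               solve_b: Callable[[B], bool]) -> Callable[[A], bool]:
    """Classic many-one reduction: solve A-instances by mapping them to B-instances."""
    return lambda instance: solve_b(transform(instance))

# Mundane example: "is n even?" reduces to "is m divisible by 4?" via n -> 2n,
# since n is even iff 2n is divisible by 4.
is_div_by_4 = lambda m: m % 4 == 0
is_even = reduce_via(lambda n: 2 * n, is_div_by_4)
assert is_even(10) and not is_even(7)
```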

Comment by devi on What are you learning? · 2014-09-16T00:14:28.734Z · LW · GW

This sounds like an interesting project. I've studied a fair amount of category theory myself, though mostly from the "oh, pretty!" point of view, and dipped my toes into algebraic geometry because it sounded cool. I think that reading algebraic geometry with my sights set on cryptography would be more rewarding than the general swimming around in its sea that I've done before. So if you want a reading buddy, do tell. A fair warning though: I'm quite time-limited these coming months, so I won't be able to keep a particularly rapid pace.