Thriving in the Weird Times: Preparing for the 100X Economy 2023-05-08T13:44:40.341Z
Lucie Philippon's Shortform 2022-12-04T21:17:09.449Z
[Rough notes, BAIS] Human values and cyclical preferences 2022-05-13T13:28:04.907Z
How can I determine that Elicit is not some weak AGI's attempt at taking over the world ? 2022-04-12T00:54:10.621Z


Comment by Lucie Philippon (lucie-philippon) on Lucie Philippon's Shortform · 2023-07-09T13:50:32.792Z · LW · GW

Thank you for the pointer! I found the article you mentioned, and then found the tag Postmortem & Retrospective, which led me to three additional posts:

Comment by Lucie Philippon (lucie-philippon) on Lucie Philippon's Shortform · 2023-07-09T06:16:09.299Z · LW · GW

Yesterday, I was searching for posts by alignment researchers describing how they got into the field. I was searching specifically for personal stories rather than guides on how other people can get into the field.

I was trying to perform intuition flooding: reading lots of accounts and building intuitions about which techniques work for entering the field.

I only managed to find three that somewhat fit what I was looking for:

Neel Nanda's post was the central example of what I was looking for, and I was surprised not to find more. Does anyone know where I can find more posts like this?

Comment by lucie-philippon on [deleted post] · 2023-06-14T06:42:50.668Z — the post does not seem to exist anymore.

Comment by Lucie Philippon (lucie-philippon) on Open Thread With Experimental Feature: Reactions · 2023-05-27T14:51:45.480Z · LW · GW

I don't have the intuition that reactions will replace comments which would otherwise have been written without this feature. What makes you think this will happen?

If reactions were tied to posting a comment, so that reactions could not decrease the number of comments, would this make you more likely to support this feature?

Incidentally, thinking about which reaction to put on this comment, instead of just up- or downvoting, made me realize I did not completely understand what you meant, and motivated me to write a comment instead.

Comment by Lucie Philippon (lucie-philippon) on Seize the Maximal Probability Moment · 2023-05-18T16:35:41.479Z · LW · GW

I think in this situation, you could use the momentum to implement one hack which increases the probability of implementing all of them in the future. For example: buying a whiteboard, writing down all the life-hack ideas you got from the minicamp, and putting it in a very visible place.

Comment by Lucie Philippon (lucie-philippon) on Thriving in the Weird Times: Preparing for the 100X Economy · 2023-05-09T13:00:34.926Z · LW · GW

We're in agreement. I'm not sure what my expectation is for the length of this phase or for the final productivity boost, but I was exploring what we would need to do now to prepare for the kind of world where there is a short period in which productivity skyrockets. If we end up in such a world, I would prefer people working on AI alignment to be ready to exploit the productivity gains fully.

Comment by Lucie Philippon (lucie-philippon) on Thriving in the Weird Times: Preparing for the 100X Economy · 2023-05-09T08:38:27.948Z · LW · GW

The question I was exploring was not how to find the tools that make their users more productive, as I expect good curation to appear along with the tools, but whether there are resources that would be necessary to use those tools, yet difficult to acquire on short notice once the tools are released.

The post was not optimized for SEO, but it definitely has a ChatGPT style I dislike. It's one of my first posts, so I'm still figuring out how to write good-quality posts. Thank you for the feedback!

Comment by Lucie Philippon (lucie-philippon) on Thriving in the Weird Times: Preparing for the 100X Economy · 2023-05-09T08:24:54.823Z · LW · GW

At the individual level, I expect agentic AI to enable even more powerful tools, like ACT acting as a semi-autonomous digital assistant, or AutoGPT acting as a lower-level executor, taking in your goals and doing most of the work.

Once we have powerful agentic AGI, of the kind that can run continuously and disempower humanity, I expect we'll be leaving the "world as normal but faster" phase where tools are useful, and then what happens next depends on our alignment plan, I guess.

Comment by Lucie Philippon (lucie-philippon) on Thriving in the Weird Times: Preparing for the 100X Economy · 2023-05-09T08:02:14.959Z · LW · GW

I think I focused too much on the "competitive" part, but my main point was that only certain factors would maintain a difference between individuals' productivity, whether they are zero-sum or not. If future AI assistants require large personal datasets to perform well, only people with preexisting datasets will perform well for a while, even though anyone could start building their own dataset at that point.

Comment by Lucie Philippon (lucie-philippon) on Cyborgism · 2023-02-13T20:09:07.547Z · LW · GW

Conjecture is "a team of researchers dedicated to applied, scalable AI alignment research", according to their website.

They publish regularly on the Alignment Forum and LessWrong.

I also searched their website, and it does not look like Bonsai is publicly accessible. It must be some internal tool they developed?

Comment by Lucie Philippon (lucie-philippon) on The Rationalists of the 1950s (and before) also called themselves “Rationalists” · 2023-01-14T22:12:27.055Z · LW · GW

This post points at an interesting fact: some people, communities or organizations already called themselves "rationalists" before the current rationalist movement. It brings forth the idea that the rationalist movement may be anchored in a longer history than what might first seem from reading LessWrong/Overcoming Bias/Eliezer history.

However, this post reads more like a Wikipedia article or a historical overview. It does not read like it has a goal. Is this post arguing that the current rationalist community is descended from those earlier groups? Is it poking at the consensus history of how the rationalist community ended up choosing "rationalist" as an identifier? I don't know whether either of those things is argued in this post.

This feels like an interesting bag of facts, full of promising threads of inquiry which could develop into new historical insights and make great posts. I am looking forward to reading those follow-ups, but for now this feels incomplete and lacking purpose.

Comment by Lucie Philippon (lucie-philippon) on Lucie Philippon's Shortform · 2022-12-04T21:17:09.740Z · LW · GW

TIL that the path a new user of LW is expected to follow is to become comfortable with commenting regularly in 3-6 months, and comfortable with posting regularly in 6-9 months. I discovered the existence of shortforms. I (re)discovered the expectation that your posts should be treated as a Medium-style personal blog?

As I'm typing this I'm still unsure whether I'm destroying the website with my bad shortform, even though the placeholder explicitly said... (\*right click inspect\*)

Write your thoughts here! What have you been thinking about?
Exploratory, draft-stage, rough, and rambly thoughts are all welcome on Shortform.

I'm definitely rambling! Look! I'm following the instructions!

I feel like a "guided tour of LW" is missing when joining the website? Some sort of premade path to get up to speed on "what am I supposed and allowed to do as a user of LW, besides reading posts?". It could take some inspiration from Duolingo, Brilliant, or any other app trying to get users past the initial step of interacting with the content?

Comment by Lucie Philippon (lucie-philippon) on AGI Ruin: A List of Lethalities · 2022-06-09T18:26:10.573Z · LW · GW

I vehemently disagree here, based on my personal history, generalizable or not. I will illustrate with the three turning points of my recent life.

First step: I stumbled upon HPMOR, and Eliezer's way of looking straight into the irrationality of all our common ways of interacting and thinking was deeply shocking. It made me feel like he was, in a sense, angrily pointing at me, who worked more like one of the NPCs than like Harry. I heard him telling me: you're dumb, and all your ideals of making intelligent decisions, being the gifted kid, and being smarter than everyone are just delusions. You're so out of touch with reality on so many levels, where to even start?

This attitude made me embark on a journey to improve myself: read the Sequences, pledge on Giving What We Can after knowing about EA for many years, and overall reassess whether I was striving towards my goal of helping people (spoiler: I was not).

Second step: The April Fools' post also shocked me on many levels. I was once again deeply struck by the sheer pessimism of this figure I respected so much. After months of reading articles on LessWrong, and so many about AI alignment, this was the one that made me terrified in the face of the horrors to come.

Somehow this article, maybe by not caring about not hurting people, made me join an AI alignment research group in Berlin. I started investing myself in the problem, working on it regularly and diverting my donations towards effective organizations in the field. It even led me to publish my first bit of research on preference learning.

Third step: Today this post, by not hiding any part of the issue and striking down a lot of ideas I was relying on for hope, made me realize I was becoming complacent. Doing a bit of research on weekends is the way to be able to say "Yeah, I participated in solving the issue" once it's solved, not the way to make sure it is in fact solved.

Therefore, based on my experience, not many works made me significantly alter my life decisions. And those that did are all strangely ranting, smack-in-your-face works written by Eliezer.

Maybe I'm not the audience to optimize for to solve the problem, but on my side, I need even more smack-in-the-face, breaking-your-fantasy style posts.

Comment by Lucie Philippon (lucie-philippon) on Less Wrong Community Weekend 2022 · 2022-05-28T11:36:53.252Z · LW · GW

Regarding the schedule, when does the event start on Friday and end on Monday? I would like to book my trip already to take advantage of low prices.

Comment by Lucie Philippon (lucie-philippon) on AI Safety Berlin Kickoff Meetup · 2022-04-19T11:35:32.371Z · LW · GW

I would love to go, and was pondering quite hard whether to try to find other people interested in this endeavour in Berlin. Sadly, I am not available this weekend. Can I join on Saturday the 30th without going to the first one?

Comment by Lucie Philippon (lucie-philippon) on How can I determine that Elicit is not some weak AGI's attempt at taking over the world ? · 2022-04-12T18:12:42.019Z · LW · GW

Thank you for the reply. I know that worry is unnecessary here; I was rather asking what you would do, if you didn't know for a fact that it was indeed based on GPT-3, or that humans were effectively overseeing it, to determine whether it is an unsafe AGI trying to manipulate the humans using it.

I know that no one could detect a superintelligent AGI trying to manipulate them, but I think it can be non-obvious that a sub-human AGI is trying to manipulate you if you don't look for it.

Primarily, I think that currently no one uses AI systems with the expectation that they could try to deceive them, so they don't apply the basic level of doubt you apply to every human whose intentions you don't know.

Comment by Lucie Philippon (lucie-philippon) on Open & Welcome Thread December 2021 · 2021-12-29T20:30:07.432Z · LW · GW

Thank you for the heads-up! I joined the meetup group and I am looking forward to new events :)

Comment by Lucie Philippon (lucie-philippon) on Open & Welcome Thread December 2021 · 2021-12-29T15:27:17.115Z · LW · GW

Hello everyone! My name is Lucie, and I am studying computer science. I'm fascinated by this website and started binge-reading the Sequences after finishing HPMOR. With all the information I was exposed to on this website during the last week, I am hyped and thinking frantically about how all of this can change my life goals.

However, I know that for me, only reading more and more posts and getting more and more information will only sustain me for a while. When my hype dies down, I think I will not be as motivated as I am right now to keep reading posts if I don't find a way to tie it to my life beyond pure curiosity.

I think I need to feel at least a bit part of a community and tie it into my social life to keep my interest for long enough. Therefore, I'm making this comment and asking you how to meet some people from this community, either online or offline.

Right now, I'm a bit lost as to what the next step is for me in this journey. Is the lack of an explicit way of getting into the community an intentional filter for people with enough intrinsic motivation to continue learning on their own for a long time? Is there a desire for new active members, whatever that means?

So anyway, if you want to help me, to chat, or to meet in Berlin, feel free to reply or send me a message!