"Making nodes in one's head" → probably meant knots?
TL;DR: This post gave me two extremely useful handles to talk about a kind of internal struggle I've been grappling with for as long as I've been in the EA community.
This post seemed obviously true when I read it, and I started reusing the concept in conversations, but it did not lead to a lot of internal changes. However, a few months later, having completely forgotten this post, I started practicing self-therapy using Internal Family Systems and uncovered a large conflict which, after multiple sessions, seemed to map really well to the two archetypes described in this post. I'm confident that having read this post made my progress substantially faster in untangling my mental mess.
I read Would You Work Harder In The Least Convenient Possible World? a year later, and it felt like a natural extension of this post, where Humo and Robutil map to Bob and Alice. Replacing guilt and Replacing fear seem related too; some sort of guide on how to get one's Robutil to accept that guilt and fear are not viable tools.
In the same vein, another guide I've read which fits well into the "helping Humo and Robutil work together" theme is Introspect by Visakan.
(I only discovered this post in 2024, so I'm less sure it will stand the test of time for me)
This post is up there with The God of Humanity, and the God of the Robot Utilitarians as one of the posts that contributed the most to making me confront the conflict between wanting to live a good life and wanting to make the future go well.
I read this post while struggling half burnt out on a policy job, having lost touch with the fire that drove me to AI safety in the first place, and this imaginary dialogue brought back the fire I had initially found while reading HPMOR. I knew then that I had no other choice than to move forward and continue fighting as hard as I could. Realizing that probably contributed ~25% of my productivity over the past two months.
I support the content note at the start. My fear-based motivation has interacted badly with this urge to make the future go well, and led me into a cycle of burnout and demotivation. I wish there were a post that would help me make sense of how to stop shooting myself in the foot when I care so much.
I'd love a follow-up dialogue where, instead of replying "I just kind of don’t really want to work harder.", Bob replied:
Bob: Part of my soul does want to follow your call, to work hard. I tried to do so in the past and badly burnt out. I'm afraid that if I take that as a goal again, I'll predictably burn out and end up doing less than I do right now, so I've been protecting myself by not pushing too hard. I now know that I won't ever be satisfied just doing my 10%, but I don't know how to proceed. What would you do in my place?
Update: It seems definitely not possible to get a tax deduction in France for donations to an American organisation.
This post from Don Efficace, the organisation which was trying to set up EA regranting in France, explains the constraints for the French tax deduction: https://forum.effectivealtruism.org/posts/jWhFmavJ9cE2zE585/procedure-to-allow-donations-to-european-organizations-to-be
My bad, I mistook Mieux Donner for an older organisation that was trying to set this up.
I checked online, and it does not seem possible to get the deduction for non-profits outside the EU, even through a proxy, unless their activity is related to France or is humanitarian.
Source: https://www.centre-francais-fondations.org/dons-transnationaux/
Completed! It was really fun. Thanks for the question to give appreciation to another LWer :)
I'd love to donate to Lightcone ~5K€ next year, but as long as it's not tax-deductible in France I'll keep to French AI safety orgs as the French non-profit donation tax break is stupidly good: it can basically triple the donation amount and reduce income tax to 0.
I know that Mieux Donner, a new French effective giving org, is acting as a French tax-deductible front for a number of EA organisations. I'll contact them to check whether they could forward a donation to Lightcone and post an update under this comment.
I find that by focusing on the legs of the dancer, I manage to see it oscillating: half-turn clockwise, then half-turn counterclockwise, with the feet towards the front. However, this always breaks when I start looking at the arms. Interesting.
I'm currently doing the Rethink Wellbeing IFS Course, and it's given me so much understanding of myself so quickly with no diminishing returns in sight yet, that it felt like the perfect time to apply More Dakka.
Therefore, I've used this list to generate ideas of how to apply More Dakka to my internal exploration, and found 30 strategies that sound super helpful :)
Makes sense! Someone yesterday mistakenly read it as the date of the event, so this confusion seems to happen.
When I'm looking at the date, it says 10th November 2023, but underneath it says 27th September. Seems like a bug
I guess the word Mnestic was originally introduced in the popular SCP story There is no Antimemetics Division.
I expect it could be mildly valuable to index the previously existing calendars and give the best current alternative. I don't think it will bring much though.
Where is the event? There is no location information
This list is aimed at people visiting the Bay Area and looking to get in contact with the local community. Currently, the Lighthaven website does not list events happening there, so I don't think it's relevant for someone who is not searching for a venue.
Possibly a larger index of rationalist resources in the Bay would be useful, including potential venues.
I expect that basic econ models and their consequences on the motivations of investors are already mostly known in the AI safety community, even if only through vague statements like "VCs are more risk tolerant than pension funds".
My main point in this post is that it might be the case that AI labs have successfully removed themselves from the influence of investors, so that it actually matters very little what the investors of AI labs want or do. I think determining whether this is the case is important, as, if so, our intuitions about how companies generally work would not apply to AI labs.
The link does not work.
I don't think a written disclaimer would amount to much in a court case without corresponding provisions in the corporate structure.
Following this post, I made 4 forecasts on the output and impact of my MATS project, which led me to realize that some outcomes I expected were less likely than I had felt, absent active effort on my part to make them happen.
I don't have any more information on this. DM me if you want me to check whether I can find more info.
The founders of Hugging Face are French, yes, but I'm not sure how invested they are in French AI policy. I mostly have not heard about them taking any specific actions or having any people with influence there.
I'm glad this post came out and made me try Claude. I now find it mostly better than ChatGPT, and with the introduction of projects, all the features I need are there.
In the new UI, the estimated reading time is not visible anymore. Is this intended?
It was often useful for me. How can I tell my friends "I'll arrive in X minutes, just after reading this post" without knowing the reading time!
I consumed edible cannabis for the first time a few months ago, and it felt very similar to the experience you're describing. I felt regularly surprised at where I was, and had lots of trouble remembering more than the last 30 seconds of the conversation.
The most troubling experience was listening to someone telling me something, me replying, and while saying the reply, forgetting where I was, what I was replying to and what I already said. The weirdest part is that at this point I would finish the reply in a sort of disconnected state, not knowing where the words were coming from, and at the end I would have a feeling of "I said what I wanted to say", even though I could not remember a word of it.
The main part of the issue was actually that I was not aware I had internal conflicts. I just mysteriously felt fewer emotions and less motivation. That's the main thing all the articles I read on sustainable productivity failed to convey: how to recognize it as it happens, without my internal monologue ever saying "I don't want to work on this" or something like it.
What do you think antidepressants would be useful for? I don't expect to match any clinical criteria for depression.
There was this voice inside my head that told me that since I have Something to Protect, relaxing is never OK beyond the strict minimum, the goal is paramount, and I should just work as hard as I can all the time.
This led to me breaking down and being incapable of working on my AI governance job for a week, as I had just piled up too much stress.
And then, I decided to follow what motivated me in the moment, instead of coercing myself into working on what I thought was most important, and lo and behold, my total output increased while my time spent working decreased.
I'm so angry and sad at the inadequacy of my role models, cultural norms, rationality advice, and model of the good EA who does not burn out, which still led me to smash into the wall despite their best intentions. I became so estranged from my own body and perceptions, ignoring my core motivations, finding it harder and harder to work. I dug myself such a deep hole. I'm terrified at the prospect of having to rebuild my motivation myself again.
On the Spotify release, there is a typo in "First they came for the epsistemology".
Over the last two years, I discovered LessWrong, learned about x-risks, joined the rationalist community, joined EA, started a rationalist/EA group house, and finally left my comfy high-earning crypto job last September to start working on AI safety. During this time, I definitely felt multiple switches as I took on different kinds of responsibilities.
The first responsibility I learned, by reading HPMOR and The Sequences, was the sense that more was possible, that I could achieve greatness, become as cool as I ever wanted, but that it needed actual work, that I was not on the right path to achieve it, that I would need to take risks and that I could not count on any of my then friends to help me with it. It was at this time that I took responsibility over what my life would be like.
I joined the rationalist community somewhat quickly, and after a few months ended up creating a rationalist group house. There, I spent lots of time with people very different from those I had spent time with before, in no small part because they questioned my beliefs. I realized that lots of the factual knowledge I got from the Internet or from school was incomplete or flat out wrong, that the procedural knowledge I got from my parents and common culture was deeply suboptimal, that strong and counterproductive emotional patterns were driving me a large chunk of the time, and that generally my epistemics were broken, which prevented me from solving the other problems. I realized I could not trust anyone to give me correct knowledge, to show me the right way, even on the most basic stuff. It was at this time that I took responsibility over my cognitive processes and beliefs, because blind faith was not a reliable way to navigate the world.
Leaving my job for AI Safety definitely felt like jumping into the unknown. For the first time in my life, I cared about achieving a goal that was wildly above my level. I finally had something to protect, and was taking active steps each day towards getting better. It felt like taking the responsibility of shaping the future like I wanted. I realized nobody else would do it for me.
Working on AI safety full-time also led to a large increase in the amount of stress I experienced, as working harder and caring more than I ever did exposed lots of flaws that were never a problem when I was just going with the flow. I can give more details on the issues I experienced, but basically I was terrible at noticing issues and kept ignoring my emotions, nearly leading to burnout twice. I realized nobody could manage my internal states except me. It felt like taking responsibility over my motivation, my happiness, my reactions to any event. This is still a work in progress, though.
When I first read HPMOR, I expected that taking responsibility was just a single jump you had to make once. Now, it seems to be a succession of realizations, where the tools I had been given proved insufficient and I had to take it upon myself to reforge better ones. I'm actually looking forward to the next realization now. I hope you and I have the courage to continue down this road.
I was allergic to dust mites when I was a child, and this caused me a severe asthma attack when I was around 10. I live in France, and I got prescribed SLIT by the first allergy specialist my mother found, so I guess it's quite a common treatment there. I took it for more than 5 years, and now, 8 years later, I no longer have any allergy symptoms.
I filled in the survey! It was a fun way to relax this morning
Thank you for the pointer! I found the article you mentioned, and then found the tag Postmortem & Retrospective, which led me to three additional posts:
Yesterday, I was searching for posts by alignment researchers describing how they got into the field. I was searching specifically for personal stories rather than guides on how other people can get into the field.
I was trying to perform Intuition flooding, by reading lots of accounts, and getting intuitions on which techniques work to enter the field.
I only managed to find a few which somewhat fit my target:
- Neel Nanda: How I Formed My Own Views About AI Safety
- Kevin RoWang: Lessons After a Couple Months of Trying to Do ML Research
- TurnTrout: Lessons I've Learned from Self-Teaching
- Nate Soares: The mechanics of my recent productivity
Neel Nanda's post was the central example of what I was looking for, and I was surprised not to find more. Does anyone know where I can find more posts like this?
blog.jaibot.com does not seem to exist anymore.
I don't have the intuition that reactions will replace some comments which would have been written without this feature. What makes you think this will happen?
If reactions were tied to posting a comment, such that reactions could not decrease the number of comments, would this make you more likely to support this feature?
Incidentally, thinking about which reaction to put to this comment instead of just up or downvoting made me realize I did not understand completely what you meant, and motivated me to write a comment instead.
I think in this situation, you could use the momentum to implement one hack which increases the probability of implementing all of them in the future. For example, buying a whiteboard, writing down all the life-hack ideas you got from the minicamp, and putting it in a very visible place.
We're in agreement. I'm not sure what my expectation is for the length of this phase or the final productivity boost, but I was exploring what we would need to do now to prepare for the kind of world where there is a short period of time during which productivity skyrockets. If we end up in such a world, I would prefer people working on AI alignment to be ready to exploit the productivity gains fully.
The question I was exploring was not how to find the tools that do make their users more productive, as I expect good curation to appear in time with the tools, but whether there were resources which would be necessary to use those tools, but difficult to acquire in a short time when the tools are released.
The post was not optimized for SEO, but it definitely has a ChatGPT style I dislike. It's one of my first posts, so I'm still exploring how to write good quality posts. Thank you for the feedback!
At the individual level, I expect agentic AI to enable even more powerful tools, like ACT acting as a semi-autonomous digital assistant, or AutoGPT acting as a lower-level executor, taking in your goals and doing most of the work.
Once we have powerful agentic AGI, of the kind that can run continuously and disempower humanity, I expect that at this point we'll be leaving the "world as normal but faster" phase where tools are useful, and then what happens next depends on our alignment plan I guess.
I think I focused too much on the "competitive" part, but my main point was that only certain factors would maintain a difference between individuals' productivity, whether they are zero-sum or not. If future AI assistants require large personal datasets to perform well, only the people with preexisting datasets will perform well for a while, even though anyone could start their own dataset at that point.
Conjecture is "a team of researchers dedicated to applied, scalable AI alignment research." according to their website https://www.conjecture.dev/
They are publishing regularly on the alignment forum and LessWrong https://www.lesswrong.com/tag/conjecture-org
I also searched their website, and it does not look like Bonsai is publicly accessible. It must be some internal tool they developed?
This post points at an interesting fact: some people, communities, or organizations already called themselves "rationalists" before the current rationalist movement. It brings forth the idea that the rationalist movement may be anchored in a longer history than might first appear from reading LessWrong/Overcoming Bias/Eliezer history.
However, this post reads more like a Wikipedia article or a historical overview. It does not read like it has a goal. Is this post making some sort of argument that the current rationalist community is descended from those earlier groups? Is it poking at the consensus history of how the rationalist community ended up choosing "rationalist" as an identifier? I don't know whether any of those things is argued in this post.
This feels like an interesting bag of facts, full of promising threads of inquiry which could develop into new historical insights and make great posts. I am looking forward to reading those follow-ups, but for now this feels incomplete and lacking purpose.
TIL that the path a new user of LW is expected to follow, according to https://www.lesswrong.com/posts/rEHLk9nC5TtrNoAKT/lw-2-0-strategic-overview, is to become comfortable with commenting regularly in 3-6 months, and comfortable with posting regularly in 6-9 months. I discovered the existence of shortforms. I (re)discovered the expectation that your posts should be treated like a personal blog, Medium-style?
As I'm typing this I'm still unsure whether I'm destroying the website with my bad shortform, even though the placeholder explicitly said... (\*right click inspect\*)
Write your thoughts here! What have you been thinking about?
Exploratory, draft-stage, rough, and rambly thoughts are all welcome on Shortform.
I'm definitely rambling! Look! I'm following the instructions!
I feel like a "guided tour of LW" is missing when joining the website? Some sort of premade path to get up to speed on "what am I supposed and allowed to do as a user of LW, besides reading posts?". It could take some inspiration from Duolingo, Brilliant, or any other app trying to get a user past the initial step of interacting with the content.
I vehemently disagree here, based on my personal history (generalizable or not). I will illustrate with the three turning points of my recent life.
First step: I stumbled upon HPMOR, and Eliezer's way of looking straight into the irrationality of all our common ways of interacting and thinking was deeply shocking. It made me feel like he was, in a sense, angrily pointing at me, who worked more like one of the NPCs rather than Harry. I heard him telling me: you're dumb, and all your ideals of making intelligent decisions, being the gifted kid, and being smarter than everyone are just delusions. You're so out of touch with reality on so many levels, where to even start.
This attitude made me embark on a journey to improve myself, read the Sequences, take the Giving What We Can pledge after knowing about EA for many years, and overall reassess whether I was striving towards my goal of helping people (spoiler: I was not).
Second step: The April fools post also shocked me on so many levels. I was once again deeply struck by the sheer pessimism of this figure I respected so much. After months of reading articles on LessWrong and so many about AI alignment, this was the one that made me terrified in the face of the horrors to come.
Somehow this article, maybe by not caring about not hurting people, made me join an AI alignment research group in Berlin. I started investing myself into the problem, working on it regularly, diverting my donations towards effective organizations in the field. It even caused me to publish my first bit of research on preference learning.
Third step: Today this post, by not hiding any reality of the issue and striking down a lot of ideas that I was relying on for hope, made me realize I was becoming complacent. Doing a bit of research on the weekend is the way to be able to say "Yeah, I participated in solving the issue" once it's solved, not the way to make sure it is in fact solved.
Therefore, based on my experience, not a lot of works have made me significantly alter my life decisions. And those that did are all strangely ranting, smack-in-your-face works written by Eliezer.
Maybe I'm not the audience to optimize for to solve the problem, but on my side, I need even more smacks in the face, breaking-your-fantasy style posts.
Regarding the schedule, when does the event start on Friday and end on Monday? I would like to book my trip already to take advantage of low prices.
I would love to go, and was pondering quite hard whether to try to find other people interested in this endeavour in Berlin. Sadly, I am not available this weekend. Can I join on Saturday the 30th without going to the first one?
Thank you for the reply. I know that worry is unnecessary; I was rather asking what you would do, if you didn't know for a fact that it was indeed based on GPT-3 or that humans were effectively overseeing it, to determine whether it is an unsafe AGI trying to manipulate the humans using it.
I know that no one could detect a superintelligent AGI trying to manipulate them, but I think it can be non-obvious that a sub-human AGI is trying to manipulate you if you don't look for it.
Primarily, I think that currently, no one uses AI systems with the expectation that they could try to deceive them, so they don't apply the basic level of doubt you give every human when you don't know their intentions.
Thank you for the heads-up! I joined the meetup group and I am looking forward to new events :)
Hello everyone! My name is Lucie, and I am studying computer science. I'm fascinated by this website and started binge-reading the Sequences after finishing HPMOR. With all the information I was exposed to on this website during the last week, I am hyped and thinking frantically about how all of this can change my life goals.
However, I know that for me, just reading more and more posts and getting more and more information will only sustain me for a while. When my hype dies down, I think I will not be as motivated as I am right now to read posts if I don't find a way to tie it to my life through more than pure curiosity.
I think I need to feel at least a bit part of a community and tie it into my social life to keep my interest for long enough. Therefore, I'm making this comment and asking you how to meet some people from this community, either online or offline.
Right now, I'm a bit lost as to what the next step is for me in this journey. I don't know whether the lack of an explicit way of getting into the community is an intentional filter for people with enough intrinsic motivation to continue learning on their own for a long time. Is there a desire for new active members, whatever that means?
So anyway, if you want to help me, to chat, or to meet in Berlin, feel free to reply or to send me a message!