I would be interested in this!
Related: an organization called Sage maintains a variety of calibration training tools.
How long does the Elta MD sunscreen last?
Having kids does mean less time to help AI go well, so maybe it’s not such a good idea if you’re one of the people doing alignment work.
I love how it has proven essentially impossible, even with nearly unlimited power, to rig a vote in a non-obvious way. I am not saying it never happens deniably, and you may not like it, but this is what peak rigged election somehow always seems to actually look like.
(Maybe I misunderstood, but isn’t this weak evidence that non-obviously rigging an election is essentially impossible, since you wouldn’t notice the non-obvious examples?)
Are there any organizations or research groups that are specifically working on improving the effectiveness of the alignment research community? E.g.
- Reviewing the literature on intellectual progress, metascience, and social epistemology and applying the resulting insights to this community
- Funding the development of experimental “epistemology software”, like Arbital or Mathopedia
I'll end with this thought: I think you can probably use these ideas of moral weights and moral mountains to quantify how altruistic someone is.
Maybe “altruistic” isn’t the right word. Someone who spends every weekend volunteering at the local homeless shelter out of a duty to help the needy in their community but doesn’t feel any specific obligation towards the poor in other areas is certainly very altruistic. The amount that one does to help those in their circle of consideration seems to be a better fit for most uses of the word altruism.
How about “morally inclusive”?
I would find this deeply frustrating. Glad they fixed it!
One year later, what do you think about the field now?
I’m a huge fan of agree/disagree voting. I think it’s an excellent example of a social media feature that nudges users towards truth, and I’d be excited to see more features like it.
(low confidence, low context, just an intuition)
I feel as though the LessWrong team should experiment with even more new features, treating the project of maintaining a platform for collective truth-seeking like a tech startup. The design space for such a platform is huge (especially as LLMs get better).
From my understanding, the strategy that startups use to navigate huge design spaces is “iterate on features quickly and observe objective measures of feedback”, which I suspect LessWrong should lean into more. Although, I imagine creating better truth-seeking infrastructure doesn’t have as good of a feedback signal as “acquire more paying users” or “get another round of VC funding”.
This is really exciting. I’m surprised you’re the first person to spearhead a platform like this. Thank you!
I wonder if you could use a dominant assurance contract to raise money for retroactive public goods funding.
Is it any of the results from this Metaphor search?
A research team's ability to design a robust corporate structure doesn't necessarily predict their ability to solve a hard technical problem. Maybe there's some overlap, but machine learning and philosophy are different fields than business. Also, I suspect that the people doing the AI alignment research at OpenAI are not the same people who designed the corporate structure (but this might be wrong).
Welcome to LessWrong! Sorry for the harsh greeting. Standards of discourse here are higher than in other places on the internet, so quips usually aren't well-tolerated (even if they have some element of truth).
I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence?
This was just an arbitrary example to demonstrate the more general idea that it’s possible we could make the wrong assumption about what makes humans valuable. Even if we discover that consciousness comes with intelligence, maybe there’s something else entirely that we’re missing which is necessary for a being to be morally valuable.
I don't want "humanism" to be taken too strictly, but I honestly think that anything that is worth passing the torch to wouldn't require us passing any torch at all and could just coexist with us…
I agree with this sentiment! Even though I’m open to the possibility of non-humans populating the universe instead of humans, I think it’s a better strategy for both practical and moral uncertainty reasons to make the transition peacefully and voluntarily.
I think the risk of human society being superseded by an AI society which is less valuable in some way shouldn't be guarded against by a blind preference for humans. Instead, we should maintain a high level of uncertainty about what it is that we value about humanity and slowly and cautiously transition to a posthuman society.
"Preferring humans just because they're humans" or "letting us be selfish" does prevent the risk of prematurely declaring that we've figured out what makes a being morally valuable and handing over society's steering wheel to AI agents that, upon further reflection, aren't actually morally valuable.
For example, say some AGI researcher believes that intelligence is the property which determines the worth of a being and blindly unleashes a superintelligent AI into the world because they believe that whatever it does with society is definitionally good, simply based on the fact that the AI system is more intelligent than us. But then maybe it turns out that phenomenological consciousness doesn't necessarily come with intelligence, and they accidentally wiped out all value from this world and replaced it with inanimate automatons that, while intelligent, don't actually experience the world they've created.
Having an ideological allegiance to humanism and a strict rejection of non-humans running the world even if we think they might deserve to would prevent this catastrophe. But I think that a posthuman utopia is ultimately something we should strive for. Eventually, we should pass the torch to beings which exemplify the human traits we like (consciousness, love, intelligence, art) and exclude those we don't (selfishness, suffering, irrationality).
So instead of blind humanism, we should be biologically conservative until we know more about ethics, consciousness, intelligence, et cetera and can pass the torch in confidence. We can afford millions of years to get this right. Humanism is arbitrary in principle and isn't the best way to prevent a valueless posthuman society.
Others have provided sound general advice that I agree with, but I’ll also throw in the suggestion of piracetam for a nootropic with non-temporary effects.
7 months later, from Business Insider: Silicon Valley elites are pushing a controversial new philosophy.
I've also been thinking a lot about this recently and haven't seen any explicit discussion of it. It's the reason I recently began going through BlueDot Impact's AI Governance course.
A couple questions, if you happen to know:
- Is there anywhere else I can find discussion about what the transition to a post-superhuman-level-AI society might look like, on an object level? I also saw the FLI Worldbuilding Contest.
- What are the implications of this for career choice for an early-career EA trying to make this transition go well?
Manifold.love is in alpha, and the MVP should be released in the next week or so. On this platform, people can bet on the odds that pairs of users will enter a relationship lasting at least six months.
I suspect this was written by ChatGPT. It doesn’t say anything meaningful about applying Bayes’ theorem to memory techniques.
Microsolidarity is a community-building practice. We're weaving the social fabric that underpins shared infrastructure.
The first objective of microsolidarity is to create structures for belonging. We are stitching new kinship networks to shift us out of isolated individualism into a more connected way of being. Why? Because belonging is a superpower: we’re more courageous & creative when we "find our people".
The second objective is to support people into meaningful work. This is very broadly defined: you decide what is meaningful to you. It could be about your job, your family, or community volunteering. Generally, life is more meaningful when we are being of benefit to others, when we know how to contribute, when we can match our talents to the needs in the world.
You don't even necessarily do it on purpose; sometimes entire groups simply drift into it as a result of trying to one-up each other in sounding legitimate and serious (hello, academic writing).
Yeah, I suspect some intellectual groups write like this for that reason: not actively trying to trick people into thinking it's more profound than it is, but a slow creep into too much jargon. Like a frog in boiling water.
Then, when I look at their writing, it seems needlessly unintelligible to me, even when it's writing designed for a newcomer. How do they not realize this? Maybe the water just feels warm to them.
When the human tendency to detect patterns goes too far
And, apophenia might make you more susceptible to what researchers call ‘pseudo-profound bullshit’: meaningless statements designed to appear profound. Timothy Bainbridge, a postdoc at the University of Melbourne, gives an example: ‘Wholeness quiets infinite phenomena.’ It’s a syntactically correct but vague and ultimately meaningless sentence. Bainbridge considers belief in pseudo-profound bullshit a particular instance of apophenia. To find it significant, one has to perceive a pattern in something that is actually made of fluff, and at the same time lack the ability to notice that it is actually not meaningful.
Np! I actually did read it and thought it was high-quality and useful. Thanks for investigating this question :)
Too long; didn’t read
From Pluriverse:
A viable future requires thinking-feeling beyond a neutral technocratic position, averting the catastrophic metacrisis, avoiding dystopian solutionism, and dreaming acutely into the techno-imaginative dependencies to come.
How do you decide which writings to convert to animations?
Metaculus puts 7% on the WHO declaring it a Public Health Emergency of International Concern, and 2.4% on it killing more than 10,000 people, before 2024.
I was also disappointed to read Zvi's take on fruit fly simulations. "Figuring out how to produce a bunch of hedonium" is not an obviously stupid endeavor to me and seems completely neglected. Does anyone know if there are any organizations with this explicit goal? The closest ones I can think of are the Qualia Research Institute and the Sentience Institute, but I only know about them because they're connected to the EA space, so I'm probably missing some.
You can browse the "Practical" tag to find posts which are directly useful. Here are some of my favorites:
- Lukeprog's The Science of Winning at Life sequence summarizes scientifically-backed advice for "winning" at everyday life: in productivity, relationships, emotions, etc. Not exaggerating, it is close to the most useful piece of media I have ever consumed. I especially recommend the first post Scientific Self Help: The State of our Knowledge, which transformed my perception of where I should look to learn how to improve my life.
- After reading Scientific Self Help, my suspicion that popular self-help books were epistemically garbage was confirmed, and I learned that many of my questions for how to improve my life could be answered by textbooks. This gave me a strong intrinsic motivation for self-learning, which was made more effective by another of Lukeprog's posts, Scholarship: How to Do It Efficiently, combined with his thread The Best Textbooks on Every Subject.
- Romeo Stevens's no-bullshit recommendations on how to increase your longevity and exercise routines, with a ten-year update and reflection.
- Many posts that are Repositories of advice.
I see. Maybe you could address it towards "DAIR, and related, researchers"? I know that's a clunkier name for the group you're trying to describe, but I don't think more succinct wording is worth pushing towards a tribal dynamic between researchers who care about X-risk and S-risk and those who care about less extreme risks.
I don't think it's a good idea to frame this as "AI ethicists vs. AI notkilleveryoneists", as if anyone that cares about issues related to the development of powerful AI has to choose to only care about existential risk or only other issues. I think this framing unnecessarily excludes AI ethicists from the alignment field, which is unfortunate and counterproductive since they're otherwise aligned with the broader idea of "AI is going to be a massive force for societal change and we should make sure it goes well".
Suggestion: instead of addressing "AI ethicists" or "AI ethicists of the DAIR / Stochastic Parrots school of thought", why not address "AI X-risk skeptics"?
Does anyone know whether added sugar is bad for you if you ignore the following points?
- It spikes your blood sugar quickly (it has a high glycemic index)
- It doesn't have any nutrients, but it does have calories
- It does not make you feel full, so it makes it easier to eat more calories, and
- It increases tooth decay.
I'm asking because I'm trying to figure out what carbohydrate-dense foods to eat when I'm bulking. I find it difficult to cram in enough calories per day, so most of my calories come from fat and protein at the moment. I'm not getting enough carbs. But most "carby foods for bulking" (e.g. potatoes, rice) are very filling! E.g., a cup of rice has 200 kcal, but a cup of nuts has 800.
I did some stats to figure out which carby foods have a low glycemic index but also a low satiation index (a measure of how full a food makes you feel). My analysis showed that sponge cake was a great choice, having a glycemic index of only 40 while being the least filling of all the foods I analyzed!
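In case it's useful, here's a minimal sketch (in Python) of what that analysis looked like. The food names and index values below are rough illustrative placeholders rather than my actual dataset; only the sponge cake GI of ~40 matches a figure quoted above.

```python
# Sketch: rank carb-dense foods by a combination of low glycemic index (GI)
# and low satiety index (SI, white bread = 100). Values are illustrative only.

foods = {
    # name: (glycemic_index, satiety_index)
    "sponge cake": (40, 65),
    "white rice":  (73, 138),
    "oatmeal":     (55, 209),
    "potatoes":    (78, 323),
    "white bread": (75, 100),
}

# Lower is "better" for bulking on both axes: gentler on blood sugar, easier to overeat.
ranked = sorted(foods.items(), key=lambda kv: kv[1][0] + kv[1][1])

for name, (gi, si) in ranked:
    print(f"{name:12s}  GI={gi:3d}  SI={si:3d}")
```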
But common sense says that cake would be classified as a "dirty bulk" food, which I'm trying to avoid. If it's not dirty for its glycemic index, what makes it dirty? Is it because cake has a "dirty" kind of fat, or is there something bad about sugar besides its glycemic index?
Just going off of the points I listed, eating cake to bulk up isn't "dirty", except for tooth decay. That's because
- Cake has a low glycemic index, I think because it has a lot of fat?
- I would be getting enough nutrients from the rest of what I eat; cake would make up the surplus.
- The whole point of me eating cake is to get more calories, so this point is nil.
What am I missing?
They meant a physical book (as opposed to an e-book) that is fiction.
I've also reflected on "microhabits" – I agree that the epistemics of maintaining a habit when you can't observe causal evidence of its benefits are tricky. I'll implement a habit if I've read some of the evidence and think it's worth the cost, even if I don't observe any effect in myself. Unfortunately, that's the same reasoning people who believe in homeopathy use.
I'm motivated to follow microhabits mostly out of faith that they have some latent effects, but also out of a subconscious desire to uphold my identity, like what James Clear talks about in Atomic Habits.
Like when I take a vitamin D supplement in the morning, I'm not subconsciously thinking "oh man, the subtle effects this might have on my circadian rhythm and mood are totally worth the minimal cost!". Instead, it's more like "I'm taking this supplement because that's what a thoughtful person who cares about their cognitive health does. This isn't a chore; it's a part of what it means to live Roman's life".
Here's a list of some of my other microhabits (that weren't mentioned in your post) in case anyone's looking for inspiration. Or maybe I'm just trying to affirm my identity? ;P
- Putting a grayscale filter on my phone
- Paying attention to posture – e.g., not slouching as I walk
- Many things to help me sleep better
- Taking 0.3 mg of melatonin
- Avoiding exercise, food, and caffeine too close to bedtime
- Putting aggressive blue light filters on my laptop and phone in the evening and turning the lights down
- Taking a warm shower before bed
- Sleeping on my back
- Turning the temperature down before bed
- Wearing headphones to muffle noise and a blindfold
- Backing up data and using some internet privacy and security tools
- Anything related to being more attractive or likable
- Whitening teeth
- Following a skincare routine
- Smiling more
- Active listening
- Avoiding giving criticism
- Flossing, using toothpaste with Novamin, and tongue scraping
- Shampooing twice a week instead of daily
I haven't noticed any significant difference from any of these habits individually. But, like you suggested, I've found success with throwing many things at the wall: it used to take me a long time to fall asleep, and now it doesn't. Unfortunately, I don't know what microhabits did the trick (stuck to the wall).
It seems like there are three types of habits that require some faith:
- Those that take a while to show effects, like weightlifting and eating a lot to gain muscle.
- Those that only pay off for rare events, like backing up your data or looking both ways before crossing the street.
- Those with subtle and/or uncertain effects, like supplementing vitamin D for your cognitive health or whitening your teeth to make a better first impression on people. This is what you're calling microhabits.
I find it interesting that all but one toy is a transportation device or a model thereof.
Regardless of whether the lack of these kinds of studies is justified, I think you shouldn't automatically assume that "virology is unreasonable" or "there's something wrong with virologists". The fact that you're asking why the lack exists means there's something you don't know about virology, and your prior should be that it's justified, similar to Chesterton's Fence.
I also don't particularly like the hedonic gradient of pushing yourself to run at the volume and frequency that seems necessary to really git gud
What do you mean by "hedonic gradient" in this context?
For those of us who don't know where to start (like me), I also recommend checking out the wiki from r/malefashionadvice or r/femalefashionadvice.
Related: Wisdolia is a Chrome extension which automatically generates Anki flashcards based on the content of a webpage you're on.
That's a good point. I conflated Moravec's Paradox with the observation that so far, it seems as though cognitive tasks will be automated more quickly than physical tasks.
We take tending the garden seriously
Ironic typo: the link includes the preceding space.
Suppose a family values the positive effects that screening would have on their child at $30,000, but in their area, it would cost them $50,000. Them paying for it anyway would be like "donating" $20,000 towards the moral imperative that you propose. But would that really be the best counterfactual use of the money? E.g. donating it instead to the Against Malaria Foundation would save 4-5 lives in expectation.[1] Maybe it would be worth it at $10,000? $5,000?
Although, this doesn't take into account the idea that an additional person doing polygenic screening would increase its acceptance in the public, incentivizing companies to innovate and drive the price down. So maybe the knock-on effects would make it worth it.
[1] Okay, I've heard that this scale of donations to short-termist charities is actually a lot more complicated than that, but this is just an example.
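To make the arithmetic explicit, here's a minimal back-of-envelope sketch of the comparison above. The cost-per-life figure is just backed out from the "4-5 lives" claim, not an authoritative estimate.

```python
# Back-of-envelope comparison, using the figures from the comment above.
value_to_family = 30_000    # what the family values the screening at
cost_of_screening = 50_000  # what it would cost them locally

implicit_donation = cost_of_screening - value_to_family  # the $20,000 "donated" to the cause

# Rough cost per life saved implied by the "4-5 lives" claim; treat as illustrative.
cost_per_life = 4_500
lives_saved_if_donated_instead = implicit_donation / cost_per_life

print(f"Implicit donation: ${implicit_donation:,}")
print(f"Counterfactual lives saved via AMF: ~{lives_saved_if_donated_instead:.1f}")
```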
I agree. Maybe it's time to repost The Best Textbooks on Every Subject again? Many of the topics I want to self-study I haven't found recommendations for in that thread. Or maybe we should create a public database of textbook recommendations instead of maintaining an old forum post.
Just curious: what motivated the transition?
Prioritizing subjects to self-study (advice wanted)
I plan to do some self-studying in my free time over the summer, on topics I would describe as "most useful to know in the pursuit of making the technological singularity go well". Obviously, this includes technical topics within AI alignment, but I've been itching to learn a broad range of subjects to make better decisions about, for example, what position I should work in to have the most counterfactual impact or what research agendas are most promising. I believe this is important because I aim to eventually attempt something really ambitious like founding an organization, which would require especially good judgement and generalist knowledge. What advice do you have on prioritizing topics to self-study and deciding how deep to go? Any other thoughts or resources about my endeavor? I would be super grateful to have a call with you if this is something you've thought a lot about (Calendly link). More context: I'm an undergraduate sophomore studying Computer Science.
So far, my ordered list includes:
- Productivity
- Learning itself
- Rationality and decision making
- Epistemology
- Philosophy of science
- Political theory, game theory, mechanism design, artificial intelligence, philosophy of mind, analytic philosophy, forecasting, economics, neuroscience, history, psychology...
- ...and it's at this point that I realize I've set my sights too high and I need to reach out for advice on how to prioritize subjects to learn!
This is really clever. Good work!
I don't have a ton of programming experience either (still a student; I've done an internship and some hackathons), but I'd be very interested in poking around at what you already have and potentially contributing. I've had this exact idea before.