Open Thread Fall 2024

post by habryka (habryka4) · 2024-10-05T22:28:50.398Z · LW · GW · 171 comments

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [? · GW] section of the LessWrong FAQ [? · GW]. If you want to orient to the content on the site, you can also check out the Concepts section [? · GW].

The Open Thread tag is here [? · GW]. The Open Thread sequence is here [? · GW].

171 comments

Comments sorted by top scores.

comment by Sage (sage-1) · 2024-10-17T10:32:45.462Z · LW(p) · GW(p)

Hello everyone!

I am new here and I thought I should introduce myself. I am currently reading the highlights of the Sequences, and it has been giving me a sharper worldview; I do feel myself becoming more rational. I think a lot of people who call themselves rational are motivated by biases and emotions more than they think, but it is good to be aware of that and try to work to be better, so I am doing that.

I am 17 years old, from Iraq. I found the forum through Daniel Schmachtenberger; I am not sure how well known he is here.

I am from a very Muslim country, and like most people I was brainwashed by it growing up. At 11 I started questioning and reading books as well, which was very hard, since the fear of "hell" is imprinted in anyone growing up in this environment. But by 14 I broke free. As a result, I had a three-month existential crisis where I felt like I didn't exist and was anxious 24/7.

At that point I got interested in the New Age movement, Eastern religions, and spirituality, especially Buddhism and certain strands of Hinduism. I wasn't interested in taking them as dogmas or as absolute views. I also got into Western philosophy later, especially the Idealism vs. Realism debate. I liked Hegel, Spinoza, and Thomas Kuhn the most, but I can't say I read their books directly; I read books about their books, which are called secondary sources.

I tend to view all worldviews as incomplete lenses, whether that's the religious paradigm, the scientific paradigm, the spiritual paradigm, or different schools of philosophy. The truly rational person, in my opinion, has the capability to understand all lenses in depth and switch between them, stay aware of self-deception, and constantly question ideas.

Of course, one must also come to conclusions, so there needs to be a certain balance.

I am going to study computer engineering or cybersecurity engineering, and I plan to study AI in a master's and beyond. I am currently trying to leave this third-world country, as life is difficult here, especially being treated as an outcast because I don't believe in religion dogmatically.

Of course, I am open to any of my beliefs being challenged; that's why I am here :)

I would also appreciate anyone wanting to share good resources, or maybe advice for me at this age.

Replies from: Ruby
comment by Ruby · 2024-10-17T18:03:18.308Z · LW(p) · GW(p)

Welcome! Sounds like you're on the one hand at the start of a significant journey, but also that you've come a long distance already. I hope you find much helpful stuff on LessWrong.

I hadn't heard of Daniel Schmachtenberger, but I'm glad to have learned of him and his work. Thanks.

Replies from: malcolmocean, sage-1
comment by MalcolmOcean (malcolmocean) · 2024-11-01T02:23:58.116Z · LW(p) · GW(p)

Daniel Schmachtenberger has lots of great stuff.  Two pieces I recommend:

  1. this article Higher Dimensional Thinking, the End of Paradox, and a More Adequate Understanding of Reality, which is about how just because two people disagree doesn't mean either is wrong
  2. this Stoa video Converting Moloch from Sith to Jedi w/ Daniel Schmachtenberger, which is about races-to-the-bottom eating themselves

Also hi, welcome Sage!  I dig the energy you're coming from here.

comment by Sage (sage-1) · 2024-10-17T19:45:30.081Z · LW(p) · GW(p)

Thank you! I hope I do, yes. I am still learning how the forum works :)

And you are welcome as well.

comment by bensenberner · 2024-11-07T18:38:48.037Z · LW(p) · GW(p)

Hi! I joined LW in order to post a research paper that I wrote over the summer, but I figured I'd post here first to describe a bit of the journey that led to this paper.

I got into rationality around 14 years ago when I read a blog called "You Are Not So Smart", which pushed me to audit potential biases in myself and others, and to try to understand ideas/systems end-to-end without handwaving.

I studied computer science at university, partially because I liked the idea that with enough time I could understand any code (unlike essays, where investigating bibliographies for the sources of claims might lead to dead ends), and also because software pays well. I specialized in machine learning because I thought that algorithms that could make accurate predictions based on patterns in the world that were too complex for people to hardcode were cool. I had this sense that somewhere, someone must understand the "first principles" behind how to choose a neural network architecture, or that there was some way of reverse-engineering what deep learning models learned. Later I realized that there weren't really first principles regarding optimizing training, and that spending time trying to hardcode priors into models representing high-dimensional data was less effective than just getting more data (and then never understanding what exactly the model had learned).

I did a couple of kaggle competitions and wanted to try industrial machine learning. I took a SWE job on a data-heavy team at a tech company working on the ETLs powering models, and then did some backend work which took me away from large datasets for a couple years. I decided to read through recent deep learning textbooks and re-implement research papers at a self-directed programming retreat. Eventually I was able to work on a large scale recommendation system, but I still felt a long way from the cutting edge, which had evolved to GPT-4. At this point, my initial fascination with the field had become tinged with concern, as I saw people (including myself) beginning to rely on language model outputs as if they were true without consulting primary sources. I wanted to understand what language models "knew" and whether we could catch issues with their "reasoning."

I considered grad school, but I figured I'd have a better application if I understood how ChatGPT was trained, and how far we'd progressed in reverse engineering neural networks' internal representations of their training data.

I participated in the AI Safety fundamentals course which covered both of these topics, focusing particularly on the mechanistic interpretability section. I worked through parts of the ARENA curriculum, found an opportunity to collaborate on a research project, and decided to commit to it over the summer, which led to the paper I mentioned in the beginning! Here it is. [LW · GW]

Replies from: julius-vidal
comment by julius vidal (julius-vidal) · 2024-11-11T02:15:30.338Z · LW(p) · GW(p)

Hi!

I think I'm probably in a pretty similar position to where you were maybe a few months/a year ago, in that I am a CS grad (though sadly no ML specialisation) working in industry who recently started reading a lot of mechanistic interpretability research, and is starting to seriously consider pursuing a PhD in that area (and am also looking at how I could get some initial research done in the meantime).
Could I DM you to maybe get some advice?

Replies from: bensenberner
comment by bensenberner · 2024-11-13T15:55:30.443Z · LW(p) · GW(p)

Sure!

comment by JoseFaustino · 2024-11-11T21:10:11.201Z · LW(p) · GW(p)

Hello everyone! 

My name is José, I'm 23 years old, Brazilian, and finishing (in July) a weird interdisciplinary undergraduate degree at the University of São Paulo (2 years of math, physics, computer science, chem, and bio + 2 years of do-whatever-you-want - I did things like optimization, measure theory, decision theory, advanced probability, Bayesian inference, algorithms, etc.)

I've been reading stuff on LW about AIS for a while now, and have taken some steps to change my career toward AIS. I met EA/AIS via a camp focused on AIS for Brazilian students called Condor Camp in 2022, and since then I've participated in a bunch of those camps, created a uni group, done ML4Good, and attended a bunch of EAGs/EAGxs.

I recently started an Agent Foundations fellowship by Alex Altair and am writing a post about the Internal Model Principle. I expect to release it soon!

Hope you all enjoy it!

comment by Screwtape · 2024-11-01T17:04:08.450Z · LW(p) · GW(p)

I'm planning to run the unofficial LessWrong Community Census again this year. There's a post with a link to the draft and a quick overview of what I'm aiming for here [LW · GW], and I'd appreciate comments and feedback. In particular, if you

  • Have some political questions you want to get into detail with or
  • Have experience or opinions on the foundational skills of rationality and how to test them on a survey

then I want to hear from you. I care a lot about rationality skills but don't know how to evaluate them in this format, though I have some clever ideas if I could find a signal to sift out of the survey. I don't care about politics, but lots of people do and I don't want to spoil their fun.

You can also propose other questions! I like playing with survey data :) 

Replies from: Screwtape
comment by Screwtape · 2024-12-11T02:46:52.450Z · LW(p) · GW(p)

The census is live

The post itself is here [LW · GW] if you want a little more detail, but I thought I'd save you a click.

comment by Lerk · 2024-10-11T15:55:24.038Z · LW(p) · GW(p)

I found the site a few months ago via a link from an AI-themed forum. I read the Sequences and developed the belief that this was a place for people who think in ways similar to me. I work as a nuclear engineer. When I entered the workforce, I was surprised to find that there weren't people as disposed toward logic as I was. I thought perhaps there wasn't really a community of similar people, and I had largely stopped looking.

 

This seems like a good place for me to learn, for the time being.  Whether or not this is a place for me to develop community remains to be seen. The format seems to promote people presenting well-formed ideas.  This seems valuable, but I am also interested in finding a space to explore ideas which are not well-formed.  It isn’t clear to me that this is intended to be such a space.  This may simply be due to my ignorance of the mechanics around here. That said, this thread seems to be inviting poorly formed ideas and I aim to oblige.

 

There seem to be some writings around here which speak of instrumental rationality, or "Rationality Is Systematized Winning". However, this raises the question: "At what scale?" My (perhaps naive) impression is that if you execute instrumental rationality with an objective function at the personal scale it might yield the decision that one should go work in finance and accrue a pile of utility. But if you apply instrumental rationality to an objective function at the societal scale it might yield the decision to give all your spare resources to the most effective organizations you can find. It seems to me that the focus on rationality is important but doesn't resolve the broader question of "In service of what?", which actually seems to be an important selector of who participates in this community. I don't see much value in pursuing Machiavellian rationality, and my impression is that most here don't either. I am interested in finding additional work that explores the implications of global-scale objective functions.

 

On a related topic, I am looking to explore how to determine the right scale of the objective function for revenge (or social correction if you prefer a smaller scope).  My intuition is that revenge was developed as a mechanism to perform tribal level optimizations.  In a situation where there has been a social transgression, and redressing that transgression would be personally costly but societally beneficial, what is the correct balance between personal interest and societal interest?

 

My current estimate of P(doom) in the next 15 years is 5%. That is, high enough to be concerned, but not high enough to cash out my retirement. I am curious about anyone harboring a P(doom) > 50%. This would seem to be high enough to support drastic actions. What work has been done to develop rational approaches to such a high P(doom)?

 

This idea is quite poorly formed, but I am interested in exploring how to promote encapsulation, specialization, and reuse of components via the cost function in an artificial neural network. This comes out of the intuition that actions (things described by verbs, or transforms) may be a primitive in human mental architecture and are one of the mechanisms by which analogical connections are searched.  I am interested in seeing if continuous mechanisms could be defined to promote the development of a collection of transforms which could be applied usefully across multiple different domains.  Relatedly, I am also interested in what an architecture/cost function would need to look like to promote retaining multiple representations of a concept with differing levels of specificity/complexity.
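
To make that last idea slightly more concrete, here is a minimal, hypothetical sketch (in PyTorch) of one way a cost function could nudge a network toward a small bank of reusable transforms: shared modules selected by a learned router, with an entropy penalty on the routing weights so each input relies on only a few of them. The module structure, the entropy term, and the 0.01 penalty weight are all illustrative assumptions, not anything from the comment above.

```python
# A hypothetical sketch: a bank of shared "transform" modules with a learned
# router, plus an entropy penalty on the routing weights. The penalty nudges
# each input to use only a few transforms, encouraging distinct, reusable
# pieces rather than one entangled blob. Names and weights are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedTransformBank(nn.Module):
    def __init__(self, dim: int, n_transforms: int = 8):
        super().__init__()
        # The candidate reusable "verbs": small transforms shared across inputs/tasks.
        self.transforms = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(n_transforms)
        )
        self.router = nn.Linear(dim, n_transforms)  # decides which transforms apply

    def forward(self, x):
        weights = torch.softmax(self.router(x), dim=-1)                 # (batch, n_transforms)
        outputs = torch.stack([t(x) for t in self.transforms], dim=1)   # (batch, n_transforms, dim)
        mixed = (weights.unsqueeze(-1) * outputs).sum(dim=1)            # (batch, dim)
        # Entropy of the routing distribution; minimizing it pushes each input
        # toward a sparse selection of transforms instead of smearing over all of them.
        routing_entropy = -(weights * torch.log(weights + 1e-9)).sum(dim=-1).mean()
        return mixed, routing_entropy

# Training-step sketch: ordinary task loss plus the auxiliary routing term.
model = SharedTransformBank(dim=32)
x, y = torch.randn(16, 32), torch.randn(16, 32)
out, routing_entropy = model(x)
loss = F.mse_loss(out, y) + 0.01 * routing_entropy  # 0.01 is an arbitrary weight
loss.backward()
```

Whether a penalty like this actually yields transforms that transfer across domains is exactly the open question the comment raises; the sketch only shows where such a mechanism would sit in the loss.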

Replies from: gilch, Morpheus, gilch, gilch, Raemon
comment by gilch · 2024-10-12T18:54:11.809Z · LW(p) · GW(p)

My current estimate of P(doom) in the next 15 years is 5%. That is, high enough to be concerned, but not high enough to cash out my retirement. I am curious about anyone harboring a P(doom) > 50%. This would seem to be high enough to support drastic actions. What work has been done to develop rational approaches to such a high P(doom)?

I mean, what do you think we've been doing all along?

I'm at like 90% in 20 years, but I'm not claiming even one significant digit on that figure. My drastic actions have been to get depressed enough to be unwilling to work in a job as stressful as my last one. I don't want to be that miserable if we've only got a few years left. I don't think I'm being sufficiently rational about it, no. It would be more dignified to make lots of money and donate it to the organization with the best chance of stopping or at least delaying our impending doom. I couldn't tell you which one that is at the moment though.

Some are starting to take more drastic actions. Whether those actions will be effective remains to be seen.

In my view, technical alignment is not keeping up with capabilities advancement. We have no alignment tech robust enough to even possibly survive the likely intelligence explosion scenario, and it's not likely to be developed in time. Corporate incentive structure and dysfunction makes them insufficiently cautious. Even without an intelligence explosion, we also have no plans for the likely social upheaval from rapid job loss. The default outcome is that human life becomes worthless, because that's already the case in such economies.

Our best chance at this point is probably government intervention to put the liability back on reckless AI labs for the risks they're imposing on the rest of us, if not an outright moratorium on massive training runs.

Gladstone has an Action Plan. There's also https://www.narrowpath.co/.

Replies from: Lerk
comment by Lerk · 2024-10-16T15:23:43.028Z · LW(p) · GW(p)

I mean, what do you think we've been doing all along?

 

So, the short answer is that I am actually just ignorant about this.  I’m reading here to learn more but I certainly haven’t ingested a sufficient history of relevant works.  I’m happy to prioritize any recommendations that others have found insightful or thought provoking, especially from the point of view of a novice.

 

I can answer the specific question “what do I think” in a bit more detail.  The answer should be understood to represent the viewpoint of someone who is new to the discussion and has only been exposed to an algorithmically influenced, self-selected slice of the information.

 

I watched the Lex Fridman interview of Eliezer Yudkowsky, and around 3:06 Lex asks about what advice Eliezer would give to young people. Eliezer's initial answer is something to the effect of "Don't expect a long future." I interpreted Eliezer's answer largely as trying to evoke a sense of reverence for the seriousness of the problem. When pushed on the question a bit further, Eliezer's given answer is "…I hardly know how to fight myself at this point." I interpreted this to mean that the space of possible actions being searched appears intractable from the perspective of a dedicated researcher. This, I believe, is largely the source of my question. Current approaches appear to be losing the race, so what other avenues are being explored?

 

I read the “Thomas Kwa's MIRI research experience [LW · GW]” discussion and there was a statement to the effect that MIRI does not want Nate’s mindset to be known to frontier AI labs.  I interpreted this to mean that the most likely course being explored at MIRI is to build a good AI to preempt or stop a bad AI.  This strikes me as plausible because my intuition is that the LLM architectures being employed are largely inefficient for developing AGI.  However, the compute scaling seems to work well enough that it may win the race before other competing ideas come to fruition.

 

An example of an alternative approach that I read was “Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible [LW · GW]” which seems like an avenue worth exploring, but well outside of my areas of expertise.  The approach shares a characteristic with my inference of MIRI’s approach in that both appear to be pursuing highly technical avenues which would not scale meaningfully at this stage by adding helpers from the general public.

 

The forms of approaches that I expected to see but haven’t seen too much of thus far are those similar to the one that you linked about STOP AI.  That is, approaches that would scale with the addition of approximately average people.  I expected that this type of approach might take the form of disrupting model training by various means or coopting the organizations involved with an aim toward redirection or delay.  My lack of exposure to such information supports a few competing models: (1) drastic actions aren’t being pursued at large scales, (2) actions are being pursued covertly, or (3) I am focusing my attention in the wrong places.

 

Our best chance at this point is probably government intervention to put the liability back on reckless AI labs for the risks they're imposing on the rest of us, if not an outright moratorium on massive training runs.

 

Government action strikes me as a very reasonable approach for people estimating long time scales or relatively lower probabilities.  However, it seems to be a less reasonable approach if time scales are short or probabilities are high.  I presume that your high P(doom) already accounts for your estimation of the probability of government action being successful.  Does your high P(doom) imply that you expect these to be too slow, or too ineffective?  I interpret a high P(doom) as meaning that the current set of actions that you have thought of are unlikely to be successful and therefore additional action exploration is necessary.  I would expect this would include the admission of ideas which would have previously been pruned because they come with negative consequences.

Replies from: gilch, gilch
comment by gilch · 2024-10-17T20:00:34.544Z · LW(p) · GW(p)

The forms of approaches that I expected to see but haven’t seen too much of thus far are those similar to the one that you linked about STOP AI. That is, approaches that would scale with the addition of approximately average people.

Besides STOP AI, there's also the less extreme PauseAI. They're interested in things like lobbying, protests, lawsuits, etc.

comment by gilch · 2024-10-17T19:54:52.354Z · LW(p) · GW(p)

I presume that your high P(doom) already accounts for your estimation of the probability of government action being successful. Does your high P(doom) imply that you expect these to be too slow, or too ineffective?

Yep, most of my hope is on our civilization's coordination mechanisms kicking in in time. Most of the world's problems seem to be failures to coordinate, but that's not the same as saying we can't coordinate. Failures are more salient, but that's a cognitive bias. We've achieved a remarkable level of stability, in the light of recent history. But rationalists can see more clearly than most just how mad the world still is. Most of the public and most of our leaders fail to grasp some of the very basics of epistemology.

We used to think the public wouldn't get it (because most people are insufficiently sane), but they actually seem appropriately suspicious of AI. We used to think a technical solution was our only realistic option, but progress there has not kept up with more powerful computers brute-forcing AI. In desperation, we asked for more time. We were pleasantly surprised at how well the message was received, but it doesn't look like the slowdown is actually happening yet.

As a software engineer, I've worked in tech companies. Relatively big ones, even. I've seen the pressures and dysfunction. I strongly suspected that they're not taking safety and security seriously enough to actually make a difference, and reports from insiders only confirm that narrative. If those are the institutions calling the shots when we achieve AGI, we're dead. We desperately need more regulation to force them to behave or stop. I fear that what regulations we do get won't be enough, but they might.

Other hopes are around a technical breakthrough that advances alignment more than capabilities, or the AI labs somehow failing in their project to produce AGI (despite the considerable resources they've already amassed), perhaps due to a breakdown in the scaling laws or some unrelated disaster that makes the projects too expensive to continue.

However, it seems to be a less reasonable approach if time scales are short or probabilities are high.

I have a massive level of uncertainty around AGI timelines, but there's an uncomfortably large amount of probability mass on the possibility that through some breakthrough or secret project, AGI was achieved yesterday and not caught up with me. We're out of buffer. But we might still have decades before things get bad. We might be able to coordinate in time, with government intervention.

I would expect this would include the admission of ideas which would have previously been pruned because they come with negative consequences.

What ideas are those?

Replies from: Lerk
comment by Lerk · 2024-10-21T19:51:16.849Z · LW(p) · GW(p)

Yep, most of my hope is on our civilization's coordination mechanisms kicking in in time. Most of the world's problems seem to be failures to coordinate, but that's not the same as saying we can't coordinate.

This is where most of my anticipated success paths lie as well.

Other hopes are around a technical breakthrough that advances alignment more than capabilities…

I do not really understand how technical advance in alignment realistically becomes a success path.  I anticipate that in order for improved alignment to be useful, it would need to be present in essentially all AI agents or it would need to be present in the most powerful AI agent such that the aligned agent could dominate other unaligned AI agents.  I don’t expect uniformity of adoption and I don’t necessarily expect alignment to correlate with agent capability.  By my estimation, this success path rests on the probability that the organization with the most capable AI agent is also specifically interested in ensuring alignment of that agent.  I expect these goals to interfere with each other to some degree such that this confluence is unlikely.  Are your expectations different?

I have a massive level of uncertainty around AGI timelines, but there's an uncomfortably large amount of probability mass on the possibility that through some breakthrough or secret project, AGI was achieved yesterday and not caught up with me.

I have not been thinking deeply in the direction of a superintelligent AGI having been achieved already.  It certainly seems possible.  It would invalidate most of the things I have thus far thought of as plausible mitigation measures.

What ideas are those?

Assuming a superintelligent AGI does not already exist, I would expect someone with a high P(doom) to be considering options of the form:

Use a smart but not self-improving AI agent to antagonize the world with the goal of making advanced societies believe that AGI is a bad idea and precipitating effective government actions.  You could call this the Ozymandias approach.

Identify key resources involved in AI development and work to restrict those resources.  For truly desperate individuals this might look like the Metcalf attack, but a tamer approach might be something more along the lines of investing in a grid operator and pushing to increase delivery fees to data centers.

I haven’t pursued these thoughts in any serious way because my estimation of the threat isn’t as high as yours.  I think it is likely we are unintentionally heading toward the Ozymandias approach anyhow.

Replies from: gilch, gilch, gilch
comment by gilch · 2024-10-22T17:56:16.468Z · LW(p) · GW(p)

Use a smart but not self-improving AI agent to antagonize the world with the goal of making advanced societies believe that AGI is a bad idea and precipitating effective government actions. You could call this the Ozymandias approach.

ChaosGPT already exists. It's incompetent to the point of being comical at the moment, but maybe more powerful analogues will appear and wreak havoc. Considering the current prevalence of malware, it might be more surprising if something like this didn't happen.

We've already seen developments that could have been considered AI "warning shots" in the past. So far, they haven't been enough to stop capabilities advancement. Why would the next one be any different? We're already living in a world with literal wars killing people right now, and crazy terrorists with various ideologies. It's surprising what people get used to. How bad would a warning shot have to be to shock the world into action given that background noise? Or would we be desensitized by then by the smaller warning shots leading up to it? Boiling the frog, so to speak. I honestly don't know. And by the time a warning shot gets that bad, can we act in time to survive the next one?

Intentionally causing earlier warning shots would be evil, illegal, destructive, and undignified. Even "purely" economic damage at sufficient scale is going to literally kill people. Our best chance is civilization stepping up and coordinating. That means regulations and treaties, and only then the threat of violence to enforce the laws and impose the global consensus on any remaining rogue nations. That looks like the police and the army, not terrorists and hackers.

comment by gilch · 2024-10-22T17:13:41.778Z · LW(p) · GW(p)

I do not really understand how technical advance in alignment realistically becomes a success path. I anticipate that in order for improved alignment to be useful, it would need to be present in essentially all AI agents or it would need to be present in the most powerful AI agent such that the aligned agent could dominate other unaligned AI agents.

The instrumental convergence of goals implies that a powerful AI would almost certainly act to prevent any rivals from emerging, whether aligned or not. In the intelligence explosion scenario, progress would be rapid enough that the first mover achieves a decisive strategic advantage over the entire world. If we find an alignment solution robust enough to survive the intelligence explosion, it will set up guardrails to prevent most catastrophes, including the emergence of unaligned AGIs.

I don’t expect uniformity of adoption and I don’t necessarily expect alignment to correlate with agent capability. By my estimation, this success path rests on the probability that the organization with the most capable AI agent is also specifically interested in ensuring alignment of that agent. I expect these goals to interfere with each other to some degree such that this confluence is unlikely. Are your expectations different?

Alignment and capabilities don't necessarily correlate, and that accounts for a lot of why my p(doom) is so high. But more aligned agents are, in principle, more useful, so rational organizations should be motivated to pursue aligned AGI, not just AGI. Unfortunately, alignment research seems barely tractable, capabilities can be brute-forced (and look valuable in the short term), and corporate incentive structures being what they are, what we're seeing in practice is a reckless amount of risk-taking. Regulation could alter the incentives to balance the externality with appropriate costs.

comment by gilch · 2024-10-22T17:37:39.594Z · LW(p) · GW(p)

We have already identified some key resources involved in AI development that could be restricted. The economic bottlenecks are mainly around high energy requirements and chip manufacturing.

Energy is probably too connected to the rest of the economy to be a good regulatory lever, but the U.S. power grid can't currently handle the scale of the data centers the AI labs want for model training. That might buy us a little time. Big tech is already talking about buying small modular nuclear reactors to power the next generation of data centers. Those probably won't be ready until the early 2030s. Unfortunately, that also creates pressures to move training to China or the Middle East where energy is cheaper, but where governments are less concerned about human rights.

A recent hurricane flooding high-purity quartz mines made headlines because chip producers require it for the crucibles used in making silicon wafers. Lower purity means accidental doping of the silicon crystal, which means lower chip yields per wafer, at best. Those mines aren't the only source, but they seem to be the best one. There might also be ways to utilize lower-purity materials, but that might take time to develop and would require a lot more energy, which is already a bottleneck.

The very cutting-edge chips required for AI training runs require some delicate and expensive extreme-ultraviolet lithography machines to manufacture. They literally have to plasmify tin droplets with a pulsed laser to reach those frequencies. ASML Holdings is currently the only company that sells these systems, and machines that advanced have their own supply chains. They have very few customers, and (last I checked) only TSMC was really using them successfully at scale. There are a lot of potential policy levers in this space, at least for now.

comment by Morpheus · 2024-10-17T19:09:51.287Z · LW(p) · GW(p)

I am also interested in finding a space to explore ideas which are not well-formed. It isn’t clear to me that this is intended to be such a space. This may simply be due to my ignorance of the mechanics around here.

For not-well-formed ideas, you can write a Quick Take (found by clicking on your profile name in the top right corner) or start a dialogue if you want to develop the idea together with someone (found in the same corner).

comment by gilch · 2024-10-12T18:07:58.349Z · LW(p) · GW(p)

On a related topic, I am looking to explore how to determine the right scale of the objective function for revenge (or social correction if you prefer a smaller scope). My intuition is that revenge was developed as a mechanism to perform tribal level optimizations. In a situation where there has been a social transgression, and redressing that transgression would be personally costly but societally beneficial, what is the correct balance between personal interest and societal interest?

This is a question for game theory [? · GW]. In the step up from total anarchy to feudalism, a family who will avenge you is a great deterrent to have. It could even save your life. Revenge is thus a good thing. A moral duty, even. Yes, really. At a smaller scope, being quick to anger and vindictive will make others reluctant to mess with you.

Unfortunately, this also tends to result in endless blood feuds as families get revenge for the revenge for the revenge, at least until one side gets powerful enough to massacre the other. In the smaller scope, maybe you exhaust yourself or risk getting killed fighting duels to protect your honor.

We've found that having a central authority to monopolize violence rather than vengeance and courts to settle disputes rather than duels works better. But the instincts for anger and revenge and taking offense are still there. Societies with the better alternatives now consider such instincts bad.

Unfortunately, this kind of improved dispute resolution isn't available at the largest and smallest scales. There is no central authority to resolve disputes between nations, or at least not ones powerful enough to prevent all wars. We still rely on the principle of vengeance (second strike) to deter nuclear wars. This is not an ideal situation to be in. At the smaller scale, poor inner-city street kids join gangs that could avenge them, use social media to show off weapons they're not legally allowed to have, and have a lot of anger and bluster, all to try to protect themselves in a system that can't or won't do that for them.

So, to answer the original question, the optimal balance really depends on your social context.
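
As a toy illustration of that deterrence logic (a sketch in Python, not anything from the comment): in an iterated prisoner's dilemma, an agent that retaliates when wronged ends up far better off against an opportunistic exploiter than an agent that never retaliates at all. The payoff matrix and strategy names are the standard textbook ones, used here purely for illustration.

```python
# A toy iterated prisoner's dilemma: standard payoffs, three simple strategies.
# Illustrates the deterrence point: a credible avenger (tit-for-tat) gets the
# exploiter to back off, while an agent that never retaliates is farmed forever.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return 'C' if not their_hist else their_hist[-1]  # cooperate first, then mirror

def always_cooperate(my_hist, their_hist):
    return 'C'  # never avenges anything

def exploiter(my_hist, their_hist):
    return 'C' if 'D' in their_hist else 'D'  # defects freely until punished once

def play(a, b, rounds=20):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a, hist_b), b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, exploiter))       # brief retaliation, then mutual cooperation
print(play(always_cooperate, exploiter))  # exploited on every single round
```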

comment by gilch · 2024-10-12T17:01:31.057Z · LW(p) · GW(p)

at the personal scale it might yield the decision that one should go work in finance and accrue a pile of utility. But if you apply instrumental rationality to an objective function at the societal scale it might yield the decision to give all your spare resources to the most effective organizations you can find.

Yes. And yes. See You Need More Money [LW · GW] for the former, Effective Altruism [? · GW] for the latter, and Earning to give [? · GW] for a combination of the two.

As for which to focus on, well, Rationality doesn't decide for you what your utility function [? · GW] is. That's on you. (surprise! you want what you want)

My take is that maybe you put on your own oxygen mask first, and then maybe pay a tithe to the most effective orgs you can find. If you get so rich that even that's not enough, why not invest in causes that benefit you personally, but society as well? (Medical research, for example.)

I also don't feel the need to aid potential future enemies just because they happen to be human. (And feel even less obligation for animals [LW · GW].) Folks may legitimately differ on what level counts as having taken care of themselves first. I don't feel like I'm there yet. Some are probably worse off than me and yet giving a lot more. But neglecting one's own need is probably not very "effective" either.

comment by Raemon · 2024-10-11T19:52:04.750Z · LW(p) · GW(p)

I'm interested in knowing which AI forum you came from.

Replies from: Lerk
comment by Lerk · 2024-10-16T12:52:27.965Z · LW(p) · GW(p)

I believe it was the Singularity subreddit in this case.  I was more or less passing through while searching for places to learn more about principles of ANN for AGI.

comment by yanni kyriacos (yanni) · 2024-10-07T01:54:04.232Z · LW(p) · GW(p)

I think there is a 10-20 per cent chance we get digital agents in 2025 that produce a holy shit moment as big as the launch of chatgpt.

If that happens I think that will produce another round of questions that sounds approximately like “how were we so unprepared for this moment”.

Fool me once, shame on you…

Replies from: daniel-kokotajlo, Sherrinford
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-17T23:26:46.042Z · LW(p) · GW(p)

I'd say it's more like 50% chance.

Replies from: yanni
comment by yanni kyriacos (yanni) · 2024-12-21T03:19:42.464Z · LW(p) · GW(p)

That's pretty high. What use cases are you imagining as the most likely?

Replies from: yanni
comment by yanni kyriacos (yanni) · 2024-12-21T03:20:24.567Z · LW(p) · GW(p)

E.g. 50,000 travel agents lose their jobs in 25/26

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-21T04:15:15.582Z · LW(p) · GW(p)

I wasn't imagining people actually losing their jobs. I was imagining people having a holy shit moment though, because e.g. they can watch computer-using agents take over their keyboard and mouse and browse around, play video games, send messages, make purchases, etc. Like with ChatGPT, it'll be unreliable at first, and even for the things it can do reliably it'll take years to actually get whole categories of people laid off.

Replies from: yanni
comment by yanni kyriacos (yanni) · 2024-12-21T05:19:31.327Z · LW(p) · GW(p)

I think it will take less than 3 years for the equivalent of 1,000,000 people to get laid off.

comment by Sherrinford · 2024-10-14T22:08:16.342Z · LW(p) · GW(p)

Expecting that, how do you prepare?

Replies from: ChristianKl
comment by ChristianKl · 2024-10-15T14:38:17.794Z · LW(p) · GW(p)

One way would be to start thinking now about how you could monetize the existence of effective digital agents. There are probably a bunch of business opportunities.

comment by lsusr · 2024-10-15T19:33:25.835Z · LW(p) · GW(p)

I don't know exactly when this was implemented, but I like how footnotes appear to the side of posts.

comment by derzhi (adriaabu) · 2024-10-11T11:24:43.163Z · LW(p) · GW(p)

I am a university dropout who wants to make an impact in the AI safety field. I am a complete amateur in the field, just starting out, but I want to learn as much as possible in order to make an impact. I studied software engineering for a semester and a half before realizing that there was a need for more people in the AI safety field, and that's where I want to give all my attention. If you are interested in connecting, DM me; if you have any advice for a newcomer, post a comment below. I am located in Hønefoss, Norway.

Replies from: Screwtape
comment by Screwtape · 2024-10-15T15:45:42.482Z · LW(p) · GW(p)

I'm not in AI Safety so if someone who is in the field has better suggestions, assume they're right and I'm wrong. Still, I hang out adjacent to AI Safety a lot. The best, easily accessible on-ramp I'm aware of is AiSafety.Quest. The best program I'm aware of is probably AI Safety Fundamentals, though I think they might get more applications than they can take. 

Best of luck and skill, and I'm glad to have people working on the problem.

comment by [deleted] · 2024-11-10T22:51:55.903Z · LW(p) · GW(p)

Hello! I've just found out about LessWrong and I immediately feel at home. I feel this is what I was looking for on medium.com and never found there: a website to learn about things, about improving oneself, and about thinking better. Medium proved to be very useful for reading about how people made 5 figures using AI to write articles for them, but not so useful at providing genuinely valuable information.

One thing I usually say about myself is that I have "learning" as a hobby. I have only very recently given a name to things and now I know that it's ADHD I can thank for my endless consumption of information about seemingly unrelated topics. I try (good thing PKMs exist!) to give shape to my thoughts and form them into something cohesive, but this tends to be a struggle. 

If anyone has ideas on how to "review" what already sits in your mind to create new connections between ideas and strengthen thoughts, they'd be more than welcome. 

comment by Ben Pace (Benito) · 2024-11-07T22:51:37.702Z · LW(p) · GW(p)

Site update: the menu bar is shorter!

Previously I found it overwhelming when I opened it, and many of the buttons were getting extremely little use. It now looks like this.

If you're one of the few people who used the other buttons, here's where you can find them:

  • New Question: If you click on "New Post", it's one of the types of post available at the top of the page.
  • New Dialogue: If you go to any user page, you can see an option to invite them to a dialogue at the top, next to the option to send them a message or subscribe to their posts.
  • New Sequence: You can make your first when you scroll down the Library [? · GW] page, and once you've got one you can also make a new one from your profile page.
  • Your Quick Takes: Your shortform post is pinned to the top of your posts on your profile page.
  • Bookmarks: This menu icon will re-appear as soon as you have any bookmarks, which you set in the same way (from the triple-dot menu on posts).
Replies from: Yoav Ravid, papetoast
comment by Yoav Ravid · 2024-12-04T17:12:00.482Z · LW(p) · GW(p)

Maybe it would be good to have an "add post to sequence" option when you click the context menu on a post. That's more intuitive than going to the Library page.

comment by papetoast · 2024-11-09T04:22:06.044Z · LW(p) · GW(p)

I want to use this chance to say that I really want to be able to bookmark a sequence

comment by João Ribeiro Medeiros (joao-ribeiro-medeiros) · 2024-10-28T17:52:01.937Z · LW(p) · GW(p)

Hello Everyone!

I am a Brazilian AI/ML engineer and data scientist. I have been following the rationalist community for around 10 years now, originally as a fan of Scott Alexander's Slate Star Codex, where I came to know of Eliezer and LessWrong as a community, along with the rationalist enterprise.

I only recently created my account and started posting here. Currently, I'm experiencing a profound sense of urgency regarding the technical potential of AI and its impact on the world. With seven years of experience in machine learning, I've witnessed how the stable and scalable use of data can be crucial in building trustworthy governance systems. I'm passionate about contributing to initiatives that ensure these advancements yield positive social outcomes, particularly for disadvantaged communities. I believe that rationality can open paths to peace, as war often stems from irrationality.

I feel privileged to participate in the deep and consequential discussions on this platform, and I look forward to exchanging ideas and insights with all the brilliant writers and thinkers who regularly contribute here. 

Thank you all!

comment by Embee · 2024-10-11T20:22:56.457Z · LW(p) · GW(p)

Does someone have a guesstimate of the ratio of lurkers to posters on lesswrong? With 'lurker' defined as someone who has a habit of reading content but never posts stuff (or posts only clarification questions)

In other words, what is the size of the LessWrong community relative to the number of active contributors?

Replies from: habryka4, stephen-mcaleese
comment by habryka (habryka4) · 2024-10-12T23:52:41.806Z · LW(p) · GW(p)

You could check out the LessWrong analytics dashboard: https://app.hex.tech/dac32525-33e6-44f9-bbcf-65a0ba40152a/app/9742e086-54ca-4dd9-86c9-25fc53f90f80/latest 

In any given week there are around 40k unique logged-out users, ~4k unique logged-in users, and ~400 unique commenters (with about 1-2k comments). So the ratio of lurkers to commenters is about 100:1, though more like 20:1 if you compare people who visit more regularly with people who comment.

Replies from: selador, Embee, fallcheetah7373
comment by selador · 2024-10-22T22:11:16.900Z · LW(p) · GW(p)

That link appears not to work. I'd be quite interested in what those numbers were 10 years ago, when DeepMind and the like were getting excited about DNNs but it wasn't that interesting to the wider world, which generally didn't believe that something like what is happening now could happen.

(I believed in the theory of superintelligence, like there's an exponential that's going to go past this arbitrary point eventually, and IIRC had wildly too distant expectations of when it might begin to happen in any meaningful way. Just thinking back to that time makes the last couple of years shocking to comprehend.)

comment by Embee · 2024-10-13T05:34:22.499Z · LW(p) · GW(p)

Thank you so much.

comment by lesswronguser123 (fallcheetah7373) · 2024-10-17T13:58:07.245Z · LW(p) · GW(p)

It would be an interesting meta post if someone did an analysis of each of those traction peaks due to various news or other articles.

comment by Stephen McAleese (stephen-mcaleese) · 2024-10-12T18:48:31.598Z · LW(p) · GW(p)

There's a rule of thumb called the "1% rule" on the internet that 1% of users contribute to a forum and 99% only read the forum.

Replies from: gilch
comment by gilch · 2024-10-12T19:00:35.789Z · LW(p) · GW(p)

The mods probably have access to better analytics. I, for one, was a long-time lurker before I said anything.

comment by Bohaska · 2024-10-06T09:48:04.583Z · LW(p) · GW(p)

If spaced repetition is the most efficient way of remembering information, why do people who learn a musical instrument practice every day instead of adhering to a spaced repetition schedule?

Replies from: gwern, dan-molloy, AliceZ
comment by gwern · 2024-10-06T23:48:47.617Z · LW(p) · GW(p)

Spaced repetition is the most efficient way in terms of time spent per item. That doesn't make it the most efficient way to achieve a competitive goal. For this reason, SRS systems often include a 'cramming mode', where review efficiency is ignored in favor of maximizing memorization probability within X hours. And as far as musicians go - orchestras don't select musicians based on who spent the fewest total hours practicing but still manage to sound mostly-kinda-OK, they select based on who sounds the best; and if you sold your soul to the Devil or spent 16 hours a day practicing for the last 30 years to sound the best, then so be it. If you don't want to do it, someone else will.

That said, the spaced repetition research literature on things like sports does suggest you still want to do a limited form of spacing in the form of blocking or rotating regularly between each kind of practice/activity.
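
For concreteness, here is a minimal sketch of the classic SM-2 spacing rule (the scheduler behind many SRS tools). It shows the sense in which spaced review is efficient per item: each successful review pushes the next one roughly exponentially further out, so total reviews stay small; a cramming mode simply ignores these intervals. The quality rating and ease numbers follow the published SM-2 description; the loop at the bottom is just an illustrative usage.

```python
# A minimal SM-2-style scheduler: returns how many days until the next review.
# Quality is a 0-5 self-rating; ease starts at 2.5 per the published algorithm.
def sm2_update(repetitions: int, interval: float, ease: float, quality: int):
    """One review of one item; returns (repetitions, interval_days, ease)."""
    if quality < 3:  # forgot the item: start its schedule over
        return 0, 1.0, ease
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if repetitions == 0:
        interval = 1.0
    elif repetitions == 1:
        interval = 6.0
    else:
        interval = interval * ease
    return repetitions + 1, interval, ease

# Five successful reviews (quality 4) land roughly 1, 6, 15, 38, and 94 days apart,
# so per-item review time stays small; that is not the same as being ready
# for an audition on a fixed date.
reps, interval, ease = 0, 0.0, 2.5
for review in range(5):
    reps, interval, ease = sm2_update(reps, interval, ease, quality=4)
    print(f"review {review + 1}: next review in ~{interval:.0f} days")
```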

Replies from: sheikh-abdur-raheem-ali, hrs, Bohaska
comment by Sheikh Abdur Raheem Ali (sheikh-abdur-raheem-ali) · 2024-10-11T05:52:21.395Z · LW(p) · GW(p)

Thank you, this was informative and helpful for changing how I structure my coding practice.

comment by herschel (hrs) · 2024-11-07T04:01:36.491Z · LW(p) · GW(p)

This feels like a simplistic model of what's going on with learning an instrument. IIRC, in the "principles of SR" post from 20 years ago, Wozniak makes the point that you essentially can't start doing SR until you've already learned an item, this being obviously for purely "fact"-based learning. SR doesn't apply in the way you've described to all of the processes of tuning, efficiency, and accuracy gains that you need for learning an instrument. My sloppy model here is that formal practice, e.g. for music, is something like priming the system to spend optimization cycles on that, etc. I assume cognitive scientists claim to have actual models here, which I suppose are >50% fake lol.

Also, separately, professional musicians do in fact do a cheap form of SR for old repertoire, where they practice only intermittently to keep it in memory once it's been established.

comment by Bohaska · 2024-10-07T07:01:15.500Z · LW(p) · GW(p)

What about a goal that isn't competitive, such as "get grade 8 on the ABRSM music exam for <instrument>"? Plenty of Asian parents have that particular goal and yet they usually ask/force their children to practice daily. Is this irrational, or is it good at achieving this goal? Would we be able to improve efficiency by using spaced repetition in this scenario as opposed to daily practice?

Replies from: gwern
comment by gwern · 2024-10-08T02:16:11.398Z · LW(p) · GW(p)

The ABRSM is in X days. It too does not care how efficient you were time-wise in getting to grade-8 competency. There are no bonus points for sample-efficiency.

(And of course, it's not like Asian parents are doing their kids much good in the first place with that music stuff, so there's even less of an issue there.)

comment by Dan Valentine (dan-molloy) · 2024-10-06T12:09:40.404Z · LW(p) · GW(p)

Declarative and procedural knowledge are two different memory systems. Spaced repetition is good for declarative knowledge, but for procedural (like playing music) you need lots of practice. Other examples include math and programming - you can learn lots of declarative knowledge about the concepts involved, but you still need to practice solving problems or writing code.

Edit: as for why practice every day - the procedural system requires a lot more practice than the declarative system does.

Replies from: cubefox
comment by cubefox · 2024-10-06T19:55:52.107Z · LW(p) · GW(p)

Do we actually know procedural knowledge is linear rather than logarithmic, unlike declarative knowledge?

Replies from: ChristianKl
comment by ChristianKl · 2024-10-07T22:12:38.363Z · LW(p) · GW(p)

I'm not sure that linear vs. logarithmic is the key. 

With many procedural skills learning to apply the skill in the first place is a lot more central than not forgetting the skill.

If you want to learn to ride a bike, a little of the practice is about repeating what you already know to avoid forgetting what you already know. 

"How can we have the best deliberate practice?" is the key question for most procedural skills and you don't need to worry much about forgetting. With declarative knowledge forgetting is a huge deal and you need strategies to counteract it. 

comment by ZY (AliceZ) · 2024-10-07T02:40:39.684Z · LW(p) · GW(p)

(Like the answer on declarative vs procedural). Additionally, reflecting on practicing Hanon for piano (which is almost a pure finger strength/flexibility type of practice) - might be also for physical muscle development and control.

comment by nottilthursday · 2024-12-04T21:49:25.034Z · LW(p) · GW(p)

I've been lurking for years. I'm a lifelong rationalist who was hesitant to join because I didn't like HPMOR. (Didn't have a problem with the methods of rationality; I just didn't like how the characters' personalities changed, and I didn't find them relatable anymore.) I finally signed up due to an irrepressible urge to upvote a particular comment I really liked.

I struggle with LW content, tbh. It takes so long to translate it into something readable, something that isn't too littered with jargon and self-reference to be understandable for a generalist with ADHD. By the time I've spent 3 hours doing that for any given "8 minute" article, I'm typically left thinking, "No shit, Sherlock. Thanks for wasting another 3 hours of my life on your need to prove your academic worth to a community of people who talk like they still think IQ is an accurate way to assess intelligence."

AI helps, but still, it's just... ugh. How do I find the good shit? I want to LEARN MORE FASTER BETTER. 

Halp, pls.

Replies from: niplav, lsusr, aristotelis-kostelenos
comment by niplav · 2024-12-05T17:12:16.313Z · LW(p) · GW(p)

The obvious advice is of course "whatever thing you want to learn, let an LLM help you learn it". Throw that post in the context window, zoom in on terms, ask it to provide examples in the way the author intended it, let it generate exercises, let it rewrite it for your reading level.

If you're already doing that and it's not helping, maybe… more dakka [LW · GW]? And you're going to have to expand on what your goals are and what you want to learn/make.
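
A minimal sketch of what that workflow can look like in practice, assuming the OpenAI Python client; the model name, prompt wording, and post.md path are placeholders rather than recommendations.

```python
# A sketch of "put the post in the context window and ask for a rewrite at your
# reading level", assuming the OpenAI Python client (pip install openai).
# The model name, prompt, and post.md path are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
post_text = open("post.md", encoding="utf-8").read()  # the post you're trying to read

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system",
         "content": "Rewrite the post below for a generalist reader: define jargon "
                    "on first use, keep the core argument, and finish with three "
                    "short exercises that check understanding."},
        {"role": "user", "content": post_text},
    ],
)
print(response.choices[0].message.content)
```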

comment by Aristotelis Kostelenos (aristotelis-kostelenos) · 2024-12-07T05:16:38.852Z · LW(p) · GW(p)

I've been lurking for not years. I also have ADHD and I deeply relate to your sentiment about the jargon here and it doesn't help that when I manage to concentrate enough to get through a post and read the 5 substack articles it links to and skim the 5 substack articles they link to, it's... pretty hit or miss. I remember reading one saying something about moral relativism not being obviously true and it felt like all the jargon and all the philosophical concepts mentioned only served to sufficiently confuse the reader (and I guess the writer too) so that it's not. I will say though that I don't get that feeling reading the sequences. Or stuff written by other rationalist GOATs. The obscure terms there don't serve as signals of the author's sophistication or ways to make their ideas less accessible. They're there because there are actually useful bundles of meaning that are used often enough to warrant a shortcut.

comment by papetoast · 2024-10-29T02:22:30.937Z · LW(p) · GW(p)

Re: the new style (archive for comparison)

Not a fan of

1. the font weight: everything seems semi-bolded now and a little bit more blurred than before. I do not see myself getting used to this.

2. the unboxed karma/agreement vote. It is fine per se, but the old one is also perfectly fine.

 

Edit: I have to say that the font on Windows is actively slightly painful and I need to reduce the time spent reading comments or quick takes.

Replies from: habryka4, kave
comment by habryka (habryka4) · 2024-10-29T04:42:48.789Z · LW(p) · GW(p)

1. the font weight: everything seems semi-bolded now and a little bit more blurred than before. I do not see myself getting used to this.

Are you on Windows? Probably an OS-level font-rendering issue which we can hopefully fix. I did some testing on Windows (using Browserstack) but don't have a Windows machine for detailed work. We'll look into it in the next few days.

Replies from: papetoast, papetoast
comment by papetoast · 2024-10-29T09:43:57.694Z · LW(p) · GW(p)

I overlaid my phone's display (using scrcpy) on top of the website rendered on Windows (Firefox). Image 1 shows that they indeed scaled to align. Image 2 (Windows left, Android right) shows how the font is bolder on Windows and somewhat blurred.

The monitor is 2560x1440 (website at 140%) and the phone is 1440x3200 (100%) mapped onto 585x1300.

comment by papetoast · 2024-10-29T06:30:21.611Z · LW(p) · GW(p)

I am on Windows. This reply is on Android and yeah definitely some issue with Windows / my PC

comment by kave · 2024-10-29T02:49:29.358Z · LW(p) · GW(p)

I don't think we've changed how often we use serifs vs sans serifs. Is there anything particular you're thinking of?

Replies from: papetoast
comment by papetoast · 2024-10-29T03:07:58.401Z · LW(p) · GW(p)

I hallucinated

comment by Screwtape · 2024-10-10T19:28:06.855Z · LW(p) · GW(p)

Once upon a time, there were Rationality Quotes threads, but they haven't been done for years. I'm curious if there's enough new, quotable things that have been written since the last one to bring back the quote posts [LW · GW]. If you've got any good lines, please come share them :) If there's a lot of uptake, maybe they could be a regular thing again.

Replies from: Gunnar_Zarncke, Screwtape
comment by Gunnar_Zarncke · 2024-11-26T22:04:52.368Z · LW(p) · GW(p)

Maybe create a Quotes Thread post with the rule that quotes have to be upvoted and if you like them you can add a react.

comment by Screwtape · 2024-10-11T16:35:51.674Z · LW(p) · GW(p)

Listen people, I don't want your upvotes on that post, I want your quotes. Well, not your quotes, you can't quote yourself, but for you to submit posts other people have made. XD

comment by Yoshinori Okamoto · 2024-12-11T14:52:11.730Z · LW(p) · GW(p)

Let me introduce myself. I come from Japan. As far as I know, there is no community like this in Japan, and it feels important to be grateful for being so blessed as to be a part of one. I will be introducing my research work, which has been published in the Japanese AI community, in an effort to contribute to the rationality of the world. My official research history can be seen in the research map (in English and in Japanese) linked from my profile. Since I am not a native speaker of English, I would be glad if you could make allowance for any points I express in a way that might be misunderstood.

comment by clovis_ruskin · 2024-11-23T20:46:05.183Z · LW(p) · GW(p)

Hi! My name is Clovis. I'm a PhD student studying distributed AI. In my spare time, I work on social science projects.

One of my big interests is mathematically modelling dating and relationship dynamics. I study how well people's stated and revealed preferences align. I'd love to chat about experimental design and behavioral modeling! There are a couple of ideas around empirically differentiating models of people's preferences that I'd love to vet in particular. I've only really read the Sequences though, and I know that there's a lot of prior discussion here on that stuff; is this the right place to ask questions about the right terms to search for? LessWrong terminology can sometimes be a bit different from econ journal terminology.

Replies from: Sodium
comment by Sodium · 2024-11-28T17:52:19.884Z · LW(p) · GW(p)

Hi Clovis! Something that comes to mind is Zvi's dating roundup posts [LW · GW] in case you haven't seen them yet. 

comment by churchturing (john-h-k) · 2024-11-17T02:29:15.910Z · LW(p) · GW(p)

Hi everyone,

I have been a lurker for a considerable amount of time but have finally gotten around to making an account.

By trade I am a software engineer, primarily interested in PL, type systems, and formal verification.

I am currently attempting to strengthen my historical knowledge of pre-fascist regimes with a focus on 1920s/30s Germany & Italy. I would greatly appreciate either specific book recommendations or reading lists for this topic - while I approach this topic from a distinctly "not a fascist" viewpoint, I am interested in books from both sides to attempt to build as authentic an understanding of the period as possible.

I would also be interested in reading on post-Soviet to modern-day Russia. I think many, including myself, would not characterise it as a fascist regime, but I have a suspicion it is the most useful contemporary comparison point. I am very open to and interested in anyone disagreeing with this.

Thanks

Replies from: kieran-knight
comment by Kieran Knight (kieran-knight) · 2024-11-18T17:20:30.539Z · LW(p) · GW(p)

Hi there,

I have some background in history, though mostly this is from my own study.

There are some big ones on Nazi Germany in particular. William L Shirer's "The Rise and Fall of the Third Reich" is an obvious choice. Worth bearing in mind that his background was journalism and his thesis of Sonderweg (the idea that German history very specifically had an authoritarian tendency that inevitably prefigured the Nazis) is not considered convincing by most of the great historians.

Anything by Richard J Evans is highly recommended, particularly his trilogy on the Third Reich. He also regularly appears in documentaries on the Nazis.

As regards Russia, you would have to ask someone else. Serhii Plokhy is well regarded, though he mostly focuses on Ukraine and the Soviet period.

comment by WyldCard4 · 2024-10-13T03:13:33.720Z · LW(p) · GW(p)

Hello.

I have been adjacent to but not participating in rationality related websites and topics since at least Middle School age (homeschooled and with internet) and had a strong interest in science and science fiction long before that. Relevant pre-Less Wrong readings probably include old StarDestroyer.Net essays and rounds of New Atheism that I think were age and time appropriate. I am a very long term reader of Scott Alexander and have read at least extensive chunks of the Sequences in the past.

A number of factors are encouraging me to become more active in rationalist spaces right now.

  1. A bit over six months ago I underwent a medical procedure called TMS, or Transcranial Magnetic Stimulation, as a treatment after about a decade of clinical depression. The results were shockingly potent, making me feel non-disabled for the first time since I was about 20, and I am now 32. The scale of this change opens up a lot of spare personal energy and time.
  2. I have a strong interest in creative and essay writing. I read most of the Methods of Rationality and most or all of Scott Alexander's creative fiction. I am a long time roleplayer and participant in web forums. I think this is a solid place to get some grounding as I try to restart my life.
  3. Um, in the last decade we suddenly got an AI that passed the Turing Test. Dude, that's freaky. The relatively tight overlap between the LLM period of AI public awareness and my own health recovery makes me somewhat more aware of the "what the hell, how is this possible and should I be worried about paperclip maximization" train of thought in a way I think a more constant perception of change would not have caused.

I am currently thinking and feeling out ideas for posts to gather my own thoughts and perspective on the rapid progress of AGI and the potential risks and tradeoffs we might be experiencing in the near future. I would be curious about any resources that might be non-obvious for feeling this out and getting feedback or guidance as I start on an essay to try and form a coherent perspective and plan for moving forward.

Replies from: Screwtape
comment by Screwtape · 2024-10-15T15:38:13.757Z · LW(p) · GW(p)

Hello, and welcome! I'm also a habitual roleplayer (mostly tabletop RPGs for me, with the occasional LARP) and I'm a big fan of Alexander and Yudkowsky's fiction. Does any particular piece of fiction stand out as your favourite? It isn't one of theirs, but I love The Cambist and Lord Iron.

I've been using Zvi's articles [LW · GW] on AI to try and keep track of what's going on, though I tend to skim them unless something catches my eye. I'm not sure if that's what you're looking for in terms of resources.

comment by notfnofn · 2024-11-25T18:09:51.095Z · LW(p) · GW(p)

Possible bug report: today I've been seeing errors of the form

Error: Cannot query field "givingSeason2024VotedFlair" on type "User". Did you mean "givingSeason2024DonatedFlair"?

that tend to go away when the page is refreshed. I don't remember if all errors said this same thing.

comment by Embee · 2024-10-28T11:44:02.546Z · LW(p) · GW(p)

I've noticed that the karma system makes me gravitate towards posts of very high karma. Are there low-karma posts that impacted you? Maybe you think they are underrated or that they fail in interesting ways.

comment by galathmir · 2024-10-11T18:58:27.434Z · LW(p) · GW(p)

Hey, everyone! Pretty new here and first time posting.

I have some questions regarding two odd scenarios. Let's assume there is no AI takeover to the Yudkowsky-nth degree and that AGI and ASI go just fine. (Yes, that's already a very big ask.)

Scenario 1: Hyper-Realistic Humanoid Robots

Let's say AGI helps us get technology that allows for the creation of humanoid robots that are visually indistinguishable from real humans. While the human form is suboptimal for a lot of tasks, I'd imagine that people still want them for a number of reasons. If there's significant market demand for such robots:

  1. Would each robot face need to be unique from existing humans to avoid infringing on the likeness rights of existing humans?
  2. Are celebrity faces or the faces of public figures protected in a way that would prevent their replication in robotic form?
  3. How might current copyright law, which typically applies to creative works, extend to the realm of robotics and AI?

Scenario 2: Full-Dive Virtual Reality Simulations

Now, let's say further in the future, ASI helps us create full-dive virtual reality technology, allowing users to experience Matrix-level realistic simulations:

  1. If someone wants to simulate living in present-day Beverly Hills, complete with celebrity encounters, what are the legal implications of including accurate representations of these public figures?
  2. In a more personal use case, if an individual wishes to re-experience their childhood or high school years in VR, would they legally need permission from every person who was part of their life to include them in the simulation?
  3. How might we balance the right to one's own memories and experiences with the privacy and likeness rights of others?

Curious to learn about everyone's thoughts on the matter.

Replies from: gilch, daijin
comment by gilch · 2024-10-12T16:38:54.553Z · LW(p) · GW(p)

The questions seem underspecified. You haven't nailed down a single world, and different worlds could have different answers. Many of the laws of today no longer make sense in worlds like the ones you're describing. They may be ignored and forgotten or updated after some time.

If we have the technology to enhance human memory for perfect recall, does that violate copyright, since you're recording everything? Arguably, it's fair use to remember your own life. Sharing that with others gets murkier. Also, copyright was originally intended to incentivize creation. Do we still need that incentive when AI becomes more creative than we are? I think not.

You can already find celebrity deepfakes online, depicting them in situations they probably wouldn't approve of. I imagine the robot question has similar answers. We haven't worked that out yet; there seem to be legal trends towards banning it, but without enough teeth to actually stop it. I think culture can adapt to the situation just fine even without a ban, but it could take some time.

comment by daijin · 2024-11-16T21:12:54.357Z · LW(p) · GW(p)

TL;DR I think increasing the fidelity of partial reconstructions of people is orthogonal to legality around the distribution of such reconstructions, so while your scenario describes an enhancement of fidelity, there would be no new legal implications.
---
Scenario 1: Hyper-realistic Humanoid robots
CMIIW, I would resummarise your question as 'how do we prevent people from being cloned?'
Answer: A person is not merely their appearance + personality, but also their place in the world. For example, if you duplicated Chris Hemsworth but changed his name and popped him in the middle of London, what would happen?
- It would likely be distinctly possible to tell the two Chris Hemsworths apart based on their continuous stream of existence and their interaction with the world
- The current Chris Hemsworth would likely order the destruction of the duplicated Chris Hemsworth (maybe upload the duplicate's memories to a databank) and I think most of society would agree with that.
This is an extension of the legal problem of 'how do we stop Bob from putting Alice's pictures on his dorm room wall' and the answer is generally 'we don't put in the effort because the harm to Alice is minimal and we have better things to do.'

Scenario 2: Full-Dive Virtual Reality Simulations
1. Pragmatically: they would be unlikely to replicate the Beverly Hills experience by themselves - even as technology improves, it's difficult for a single person to generate a world. There would likely be some corporation behind creating Beverly-Hills-like experiences, and everyone can go and sue that corporation.
1. Abstractly: Maybe this happens and you can pirate Beverly Hills off Piratebay. That's not significantly different to what you can do today.
2. I can't see how what you're describing is significantly different to keeping a photo album, except technologically more impressive. I don't need legal permission to take a photo of you in a public space.
Perplexity AI gives:
```
In the United States, you generally do not need legal permission to take a photo of someone in a public place. This is protected under the First Amendment right to freedom of expression, which includes photography
```
3. IMO a 'right to one's own memories and experiences' would be the same as a right to one's creative works.

comment by Sherrinford · 2024-12-06T08:46:25.703Z · LW(p) · GW(p)

Is there an explanation somewhere of how the recommendation algorithm on the homepage works, i.e. how recency and karma or whatever are combined?

Replies from: kave
comment by kave · 2024-12-06T16:05:57.210Z · LW(p) · GW(p)

The "latest" tab works via the hacker news algorithm. Ruby has a footnote about it here [LW(p) · GW(p)]. I think we set the "starting age" to 2 hours, and the power for the decay rate to 1.15.

Replies from: Sherrinford
comment by Sherrinford · 2024-12-06T17:43:39.308Z · LW(p) · GW(p)

Very helpful, thanks! So I assume the parameter b is what you call starting age?

I ask because I am a bit confused about the following: 

  • If you apply this formula, it seems to me that all posts with karma = 0 should have the same score, that this score should be higher than the score of any negative-karma post, and that negative-karma posts should get a higher score the older they are.
  • All karma>0 posts should appear before all karma=0 posts and those should appear before all negative-karma posts.

However, when I expand my list a lot until it includes four posts with negative karma (one of them is 1 month old), I still do not see any post with zero karma. (With "enriched" sorting, I found two recent ones with 0 karma.)

Moreover, this kind of sorting seems to give a lot of power to the first one or two people who vote on a post, since their votes can basically make it disappear?

Replies from: kave
comment by kave · 2024-12-06T17:59:20.340Z · LW(p) · GW(p)

A quick question re: your list: do you have any tag filters set?

Replies from: Sherrinford
comment by Sherrinford · 2024-12-06T18:22:20.716Z · LW(p) · GW(p)

I don't think so. But where could I check that?

Replies from: kave
comment by kave · 2024-12-06T18:29:50.944Z · LW(p) · GW(p)

Click on the gear icon next to the feed selector 

Replies from: Sherrinford
comment by Sherrinford · 2024-12-06T19:40:48.358Z · LW(p) · GW(p)

No, all tags are on default weight.

Replies from: kave, habryka4
comment by kave · 2024-12-08T19:07:08.365Z · LW(p) · GW(p)

I had a quick look in the database, and you do have some tag filters set, which could cause the behaviour you describe

Replies from: Sherrinford
comment by Sherrinford · 2024-12-08T20:00:55.707Z · LW(p) · GW(p)

Thanks. I did not see any, but I will check again. Maybe I also accidentally set them when I tried to check whether I had set any...

comment by habryka (habryka4) · 2024-12-06T19:49:41.766Z · LW(p) · GW(p)

Could you send me a screenshot of your post list and tag filter list? What you are describing sounds really very weird to me and something must be going wrong.

Replies from: Sherrinford
comment by Sherrinford · 2024-12-07T09:33:51.036Z · LW(p) · GW(p)

The list is very long, so it is hard to make a screenshot. Now, with some hours of distance, I reloaded the homepage, tried again, and one 0-karma post appeared. (Last time it definitely did not appear; I searched very rigorously.)

However, according to the mathematical formula, it still seems to me that all 0-karma posts should appear at the same position, and negative-karma posts below them?

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-07T17:29:30.078Z · LW(p) · GW(p)

We have a few kinds of potential bonus a post could get, but yeah, something seems very off about your sort order, and I would really like to dig into it. A screenshot would still be quite valuable.

Replies from: Sherrinford
comment by Sherrinford · 2024-12-08T07:32:38.497Z · LW(p) · GW(p)

I will see whether I can make a useful one later on. Still, my main point is about the sorting score as stated in that referenced footnote: if a post's karma is indeed divided by something, then I expect all 0-karma posts to appear at the same position, and I expect the first person who votes to have a strong influence leading to herding, in particular if that person votes the post to zero or lower. Right?

Replies from: kave
comment by kave · 2024-12-08T19:13:55.669Z · LW(p) · GW(p)

Yep, if the first vote takes the score to ≤ 0, then the post will be dropped off the latest list. This is somewhat ameliorated by:

(a) a fair number of people browsing https://lesswrong.com/allPosts

(b) https://greaterwrong.com having chronological sort by default

(c) posts appearing in recent discussion in order that they're posted (though I do wonder if we filter out negative karma posts from recent discussion)

I often play around with different karma / sorting mechanisms, and I do think it would be nice to have a more Bayesian approach that started with a stronger prior. My guess is the effect you're talking about isn't a big issue in practice, though probably worth a bit of my time to sample some negative karma posts.
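To illustrate the kind of "stronger prior" I mean, here is a purely hypothetical sketch (the names and numbers are made up for illustration, not anything we actually run): blend the observed karma with a few pseudo-votes' worth of prior karma before applying the same decay.

```python
def decayed_score(karma: float, age_hours: float,
                  starting_age: float = 2.0, power: float = 1.15) -> float:
    # The current hacker-news-style decay, as described earlier in the thread.
    return karma / (age_hours + starting_age) ** power

def decayed_score_with_prior(karma: float, num_votes: int, age_hours: float,
                             prior_karma: float = 4.0,
                             prior_weight: float = 4.0) -> float:
    """Shrink the observed karma toward a prior before decaying it.

    The prior acts like `prior_weight` pseudo-votes pinning the karma near
    `prior_karma`, so with only one or two real votes the score barely moves
    and a single early downvote no longer pushes a post off the list.
    """
    shrunk_karma = (prior_weight * prior_karma + num_votes * karma) / (prior_weight + num_votes)
    return shrunk_karma / (age_hours + starting_age) ** power

# One early downvote, one hour in: the raw score goes negative,
# while the shrunk score stays comfortably positive.
print(decayed_score(-1, 1))                # ~ -0.28
print(decayed_score_with_prior(-1, 1, 1))  # ~ +0.85
```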

Replies from: Sherrinford
comment by Sherrinford · 2024-12-08T20:04:59.698Z · LW(p) · GW(p)

Maybe the numerator of the score should remain at the initial karma until at least 4 people have voted, for example.

comment by skybluecat · 2024-10-23T13:02:59.123Z · LW(p) · GW(p)

Should AI safety people/funds focus more on boring old human problems like (especially cyber- and bio-)security instead of flashy ideas like alignment and decision theory? The possible impact of vulnerabilities will only increase in the future with all kinds of technological progress, with or without sudden AI takeoff, and they are much of what makes AGI dangerous in the first place. Security has clear benefits regardless, and people already have a good idea of how to do it, unlike with AGI or alignment.

If any actor with or without AGI can quickly gain lots of money and resources without alarming anyone, can take over infrastructure and weaponry, or can occupy land and create independent industrial systems and other countries cannot stop it, our destiny is already not in our hands, and it would be suicidal to think we don't need to fix these first because we expect to create an aligned AGI to save us.

If we grow complacent about the fragility of our biology and ecosystem, continue to allow the possibility of any actor releasing pandemics, arbitrary malware, deadly radiation, etc. (for example by allowing global transport without reliable pathogen removal, or using operating systems and open-source libraries that have not been formally proven to be safe), and keep thinking the universe should keep our environment safe and convenient by default, then it would be naive to complain when these things happen and to hope AGI would somehow preserve human lives and values without our having to change our lifestyle or biology to adapt to new risks.

Yes, fixing vulnerabilities of our biology and society is hard and inconvenient and not as glamorous as creating a friendly god to do whatever you want, but we shouldn't let motivated reasoning and groupthink lead us into thinking the latter is feasible when we don't have a good idea about how to do it, just because the former requires sacrifices and investments and we'd prefer if it's not needed. After all, it's a fact that there exist small configurations of matter and information that can completely devastate our world, and just wishing it wasn't true is not going to make it go away.

Replies from: AliceZ
comment by ZY (AliceZ) · 2024-10-28T06:55:38.764Z · LW(p) · GW(p)

I personally agree with you on the importance of these problems. But I myself might also be a more general responsible/trustworthy AI person, and I care about other issues outside of AI too, so I am not sure about a more specific community, or what the definition is for "AI Safety" people.

For funding, I am not very familiar and want to ask for some clarification: by "(especially cyber-and bio-)security", do you mean generally, or "(especially cyber-and bio-)security" caused by AI specifically?

comment by Bumjin Park (bumjin-park) · 2024-10-20T12:33:12.645Z · LW(p) · GW(p)

I'm really interested in AI and want to build something amazing, so I’m always looking to expand my imagination! Sure, research papers are full of ideas, but I feel like insights into more universal knowledge spark a different kind of creativity. I found LessWrong through things like LLM, but the posts here give me the joy of exploring a much broader world!

I’m deeply interested in the good and bad of AI. While aligning AI with human values is important, alignment can be defined in many ways. I have a bit of a goal to build up my thoughts on what’s right or wrong, what’s possible or impossible, and write about them.

comment by Hastings (hastings-greer) · 2024-10-18T02:32:14.154Z · LW(p) · GW(p)

Are there any mainstream programming languages that make it ergonomic to write high-level numerical code that doesn't allocate once the serious calculation starts? So far, for this task, C is by far the best option but it's very manual, and Julia tries and does pretty well, but you have to constantly make sure that the compiler successfully optimized away the allocations that you think it optimized away. (Obviously Fortran is also very good for this, but ugh.)

comment by Embee · 2024-10-13T05:38:55.475Z · LW(p) · GW(p)

What happens if and when a slightly unaligned AGI crowds the forum with its own posts? I mean, how strong is our "are you human?" protection?

comment by halinaeth · 2024-10-09T09:32:32.488Z · LW(p) · GW(p)

Hi! New to the forums and excited to keep reading. 

Bit of a meta-question: given proliferation of LLM-powered bots in social media like twitter etc, do the LW mods/team have any concerns about AI-generated content becoming an issue here in a more targeted way?

For a more benign example, say one wanted to create multiple "personas" here to test how others react. They could create three accounts, and respond to posts always with all three accounts- one with a "disagreeable" persona, one neutral, and one "agreeable".

A malicious example would be if someone hated an idea or person, X, on the forums. They could use GPT-4o to brainstorm any avenues of attack on X, then create any number of accounts which will always flag posts about X to criticize and challenge. Thus they could bias readers both through creating a false "majority opinion" and through sheer exposure and chance (someone skimming the comments might only see critical and skeptical ones).

Thanks for entertaining my random hypotheticals!

Replies from: Screwtape
comment by Screwtape · 2024-10-11T14:28:00.844Z · LW(p) · GW(p)

Not a member of the LessWrong team, but historically the site had a lot of sockpuppetting problems that they (as far as I know) solidly fixed and keep an eye out for.

Replies from: halinaeth
comment by halinaeth · 2024-10-14T03:14:44.461Z · LW(p) · GW(p)

Makes sense, thanks for the new vocab term!

comment by Richard_Kennaway · 2024-11-12T16:51:05.400Z · LW(p) · GW(p)

Is anyone from LW going to the Worldcon (World Science Fiction Convention) in Seattle next year?

ETA: I will be, I forgot to say. I also notice that Burning Man 2025 begins about a week after the Worldcon ends. I have never been to BM, I don't personally know anyone who has been, and it seems totally impractical for me, but the idea has been in the back of my mind ever since I discovered its existence, which was a very long time ago.

Replies from: lsusr
comment by lsusr · 2024-11-15T05:34:02.141Z · LW(p) · GW(p)

I didn't know about that. That sounds like fun!

comment by ideasthete · 2024-10-17T18:01:39.236Z · LW(p) · GW(p)

Hello,

Longtime lurker, more recent commenter. I see a lot of rationality-type posters on Twitter and in the past couple of years became aware of "post-rationalists." It's somewhat ill-defined but essentially they are former rationalists who are more accepting of "woo", to be vague about it. My questions are: 1) What level of engagement is there (if any) between rationalists and post-rationalists? and 2) Is there anyone who dabbled in or fully claimed post-rationalist positions and then reverted back to rationalist positions? What was that journey like, and what made you switch between these beliefs?

Replies from: ChristianKl, Screwtape, AliceZ
comment by ChristianKl · 2024-10-21T16:10:08.529Z · LW(p) · GW(p)

One aspect of LessWrongers is that they often tend to hold positions that are very complex. If you think that there are a bunch of positions that are rationalist and a bunch of positions that are post-rationalist and there are two camps that each hold the respective positions, you miss a lot of what rationalism is about.

You will find people at LessWrong for whom doing rituals like the Solstice events or doing Circling (which, for example, people at CFAR did a lot) feels too woo. Yet CFAR was the premier organization for the development of rationality, and for the in-person community the Winter Solstice event is a central feature.

In the recent LessWrong Community Weekend in Europe, Anna Riedl gave the keynote speech about 4E-rationality. You could call 4E-rationality post-rational, in the sense that it moves past the view of rationality you find in the sequences on LessWrong.

comment by Screwtape · 2024-10-21T01:58:31.467Z · LW(p) · GW(p)

From my observations it's fairly common for post-rationalists to go to rationalist events and vice-versa, so there's at least engagement on the level of waving hello in the lunchroom. There's enough overlap in identification that some people in both categories read each other's blogs, and the essays that wind up at the intersection of both interests will have some back and forth in the comments. Are you looking for something more substantial than that?

I can't think of any reverting rationalists off the top of my head, though they might well be out there.

comment by ZY (AliceZ) · 2024-10-28T06:59:05.597Z · LW(p) · GW(p)

I am interested in learning more about this, but not sure what "woo" means; after googling, is it right to interpret it as "unconventional beliefs" of some sort?

Replies from: gilch
comment by gilch · 2024-10-28T23:59:20.864Z · LW(p) · GW(p)

It's short for "woo-woo", a derogatory term skeptics use for magical thinking.

I think the word originates as onomatopoeia from the haunting woo-woo Theremin sounds played in black-and-white horror films when the ghost was about to appear. It's what the "supernatural" sounds like, I guess.

It's not about the belief being unconventional as much as it being irrational. Just because we don't understand how something works doesn't mean it doesn't work (it just probably doesn't), but we can still call your reasons for thinking so invalid. A classic skeptic might categorically dismiss anything associated with it, but rationalists judge by the preponderance of the evidence. Some superstitions are valid. Prescientific cultures may still have learned true things, even if they can't express them well to outsiders.

Replies from: AliceZ
comment by ZY (AliceZ) · 2024-10-29T00:27:36.230Z · LW(p) · GW(p)

Ah thanks. Do you know why these former rationalists were "more accepting" of irrational thinking? And to be extremely clear, does "irrational" here mean not following one's preferences with one's actions, and not truth-seeking when forming beliefs?

comment by Mateusz Bagiński (mateusz-baginski) · 2024-12-18T11:02:18.499Z · LW(p) · GW(p)

React suggestion/request: "not joint-carving"/"not the best way to think about this topic".

This is kind of "(local) taboo those words" but it's more specific.

comment by Steven Byrnes (steve2152) · 2024-12-02T22:44:42.085Z · LW(p) · GW(p)

I think there might be a lesswrong editor feature that allows you to edit a post in such a way that the previous version is still accessible. Here’s an example [LW · GW]—there’s a little icon next to the author name that says “This post has major past revisions…”. Does anyone know where that option is? I can’t find it in the editor UI. (Or maybe it was removed? Or it’s only available to mods?) Thanks in advance!

Replies from: Raemon
comment by Raemon · 2024-12-03T01:38:40.354Z · LW(p) · GW(p)

It's available for admins at the moment. What post do you wanna change?

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2024-12-03T16:25:02.124Z · LW(p) · GW(p)

Actually never mind. But for future reference I guess I’ll use the intercom if I want an old version labeled. Thanks for telling me how that works.  :)

(There’s a website / paper going around that cites a post [LW · GW] I wrote way back in 2021, when I was young and stupid, so it had a bunch of mistakes. But after re-reading that post again this morning, I decided that the changes I needed to make weren’t that big, and I just went ahead and edited the post like normal, and added a changelog to the bottom. I’ve done this before [LW(p) · GW(p)]. I’ll see if anyone complains. I don’t expect them to. E.g. that same website / paper cites a bunch of arxiv papers while omitting their version numbers, so they’re probably not too worried about that kind of stuff.)

Replies from: Raemon
comment by Raemon · 2024-12-03T17:00:50.791Z · LW(p) · GW(p)

I think we probably don't have that great a reason not to roll this out to more users; it's mostly a matter of managing UI complexity.

comment by Dom Polsinelli (dom-polsinelli) · 2024-12-01T00:36:05.706Z · LW(p) · GW(p)

I am very interested in mind uploading.

I want to do a PhD in a related field, comprehensively go through "Whole Brain Emulation: A Roadmap", and take notes on what has changed since it was published.

If anyone knows relevant papers/researchers that would be useful to read for that, or that would help me make an informed decision on where to apply to grad school next year, please let me know.

Maybe someone has already done a comprehensive update on brain emulation; I would like to know, and I would still like to read more papers before I apply to grad school.

Replies from: D0TheMath, steve2152
comment by Garrett Baker (D0TheMath) · 2024-12-02T05:28:40.197Z · LW(p) · GW(p)

Those invited to the Foresight workshop (also the 2023 one) are probably a good start, as well as Foresight's 2023 and 2024 lectures on the subject.

comment by Steven Byrnes (steve2152) · 2024-12-02T18:16:49.927Z · LW(p) · GW(p)

Good luck! I was writing about it semi-recently here [LW · GW].

General comment: It’s also possible to contribute to mind uploading without getting a PhD—see last section of that post [LW · GW]. There are job openings that aren’t even biology, e.g. ML engineering. And you could also earn money and donate it, my impression is that there’s desperate need.

comment by Sherrinford · 2024-11-30T23:20:24.682Z · LW(p) · GW(p)

Are there good and comprehensive evaluations of COVID policies? Are there countries that really tried to learn, including for the next pandemic?

comment by ProgramCrafter (programcrafter) · 2024-11-09T22:32:48.751Z · LW(p) · GW(p)

When rereading [0 and 1 Are Not Probabilities], I thought: can we ever specify our amount of information in infinite domains, perhaps with something resembling hyperreals?

  1. A uniformly random rational number from  is taken. There are infinitely many options, meaning that the prior probabilities are all zero (), so we need an infinite amount of evidence to single out any number.
    (It's worth noting that we have codes that can encode any specific rational number with a finite word - for instance, first apply a bijection from the rationals to the natural numbers, then use Fibonacci coding, as in the sketch after this list; but in expectation we need to receive infinitely many bits to know an arbitrary number).

    Since  symbol doesn't have nice properties with regards to addition and subtraction, we might define a symbol  which means "we need some information to single out one natural number out of their full set". Then, the uniform prior over  would have form  (prefix and suffix standing for values outside  segment) while a communication "the number is " would carry  bits of evidence on average, making the posterior .
  2. The previous approach suffers from a problem, though. What if two uniformly random rationals  are taken, forming a square on the coordinate grid?
    If we've been communicated  information about , we clearly have learned nothing about  and thus cannot pinpoint the specific point, requiring  more bits.

    However, there's a bijection between  and , so we can assign a unique natural number to any point in the square, and therefore can communicate it in  bits in expectation, without any coefficient .
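(A minimal sketch of the Fibonacci coding mentioned in point 1, just to illustrate that every natural number gets a finite, self-delimiting codeword; the function name is my own.)

```python
def fibonacci_code(n: int) -> str:
    """Encode a positive integer with Fibonacci (Zeckendorf) coding.

    Every positive integer is a sum of non-consecutive Fibonacci numbers;
    the codeword marks which ones are used, and the trailing extra '1'
    makes the code self-delimiting (prefix-free).
    """
    assert n >= 1
    fibs = [1, 2]                      # Fibonacci numbers 1, 2, 3, 5, 8, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                         # drop the one that exceeds n
    bits = ['0'] * len(fibs)
    remainder = n
    for i in range(len(fibs) - 1, -1, -1):
        if fibs[i] <= remainder:       # greedily take the largest fit
            bits[i] = '1'
            remainder -= fibs[i]
    return ''.join(bits) + '1'

print(fibonacci_code(11))  # 11 = 8 + 3  ->  "001011"
```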

When I tried exploring some more, I've validated that greater uncertainty (, communication of one real number) makes smaller ones () negligible, and that evidence for a natural number can presumably be squeezed into communication for a real value. That also makes the direction look unpromising.

 

However, there can be a continuation still: are there books/articles on how information is quantified given a distribution function?

comment by Sodium · 2024-11-04T18:17:15.346Z · LW(p) · GW(p)

Man, politics really is the mind killer

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-28T20:44:03.179Z · LW(p) · GW(p)

I've noticed that when writing text on LessWrong, there is a tendency for the cursor to glitch out and jump to the beginning of the text. I don't have the same problem on other websites. This most often happens after I've clicked to try to insert the cursor in some specific spot. The cursor briefly shows where I clicked, but then the page lags slightly, as if loading something, and the cursor jumps to the beginning.

The way around this I've found is to click once. Wait to see if the cursor jumps away. If so, click again and hope. Only start typing once you've seen multiple blinks at the desired location. Annoying!

Replies from: habryka4
comment by habryka (habryka4) · 2024-10-28T20:54:31.552Z · LW(p) · GW(p)

We used to have a bug like this a long time ago; it was caused by an interaction between our rich-text editor library and our upgrade from React 17 to React 18 (our front-end framework). I thought that we had fixed it, and it's definitely much less frequent than it used to be, but it's plausible we are having a similar bug.

It's annoyingly hard to reproduce, so if you or anyone else finds a circumstance where you can reliably trigger it, that would be greatly appreciated.

comment by Sherrinford · 2024-10-16T19:05:17.272Z · LW(p) · GW(p)

In Fertility Rate Roundup #1 [LW · GW], Zvi wrote:

"This post assumes the perspective that more people having more children is good, actually. I will not be engaging with any of the arguments against this, of any quality, whether they be ‘AI or climate change is going to kill everyone’ or ‘people are bad actually,’ other than to state here that I strongly disagree." 

Do any of you have an idea where I can find arguments related to, or a more detailed discussion of, this disagreement (with respect to AI or maybe other global catastrophic risks; this is not a question about how bad climate change is)?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2024-10-20T18:43:50.443Z · LW(p) · GW(p)

Look up anti-natalism, and the Voluntary Human Extinction Movement. And random idiots everywhere saying "well maybe we all deserve to die", "the earth would be better off without us", "evolution made a huge mistake in inventing consciousness", etc.

Replies from: Sherrinford
comment by Sherrinford · 2024-10-20T19:02:51.934Z · LW(p) · GW(p)

So you think that looking up "random idiots" helps me find "arguments related to or a more detailed discussion about this disagreement"?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2024-10-20T19:57:39.411Z · LW(p) · GW(p)

No, I just threw that in. But there is the VHEM, and apparently serious people who argue for anti-natalism.

Short of those, there are also advocates for "degrowth".

I suspect the reason that Zvi declined to engage with such arguments is that he thinks they're too batshit insane to be worth giving house room, but these are a few terms to search for.

Replies from: Sherrinford
comment by Sherrinford · 2024-10-20T21:49:33.464Z · LW(p) · GW(p)

I appreciate that you posted a response to my question. However, I assume there is some misunderstanding here.

Zvi notes that he will not "be engaging with any of the arguments against this, of any quality" (which suggests that there are also good or relevant arguments). Zvi includes the statement that "AI is going to kill everyone", and notes that he "strongly disagrees". 

As I asked for "arguments related to or a more detailed discussion" of these issues, you mention some people you call "random idiots" and state that their arguments are "batshit insane". It thus seems like a waste of time trying to find arguments relevant to my question based on these keywords. 

So I wonder: was your answer actually meant to be helpful?

comment by Bohaska · 2024-10-14T03:23:44.397Z · LW(p) · GW(p)

Why are comments on older posts sorted by date, but comments on newer posts are sorted by top scoring?

Replies from: Raemon
comment by Raemon · 2024-10-14T04:02:43.669Z · LW(p) · GW(p)

The oldest posts were from before we had nested comments, so the comments there need to be in chronological order to make sense of the conversation.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-12-19T01:44:46.935Z · LW(p) · GW(p)

Couple of UI notes:

  1. the top of the frontpage is currently kinda broken for me
  2. on mobile, there's a problem with bullet-points squishing text too much. I'd rather go without the indentation at all than allow the indentation to smoosh the text into unreadability.

Replies from: habryka4
comment by habryka (habryka4) · 2024-12-19T04:41:12.634Z · LW(p) · GW(p)

Huh, what browser and OS?

Replies from: nathan-helm-burger, nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-12-19T23:23:36.889Z · LW(p) · GW(p)

Another thing that I've noticed is that after submitting a comment, sometimes the comment appears in the list of comments, but also the text remains in the editing textbox. This leads to sometimes thinking the first submit didn't work, and submitting again, and thus double-posting the same comment.

Other times, the text does correctly get removed from the textbox after hitting submit and the submitted comment appearing, but the page continues to think that you have unsubmitted text and to warn you about this when you try to navigate away. Again, a confusing experience that can lead to double-posting.

comment by jmh · 2024-12-18T15:09:18.486Z · LW(p) · GW(p)

What is the price of the past? Kind of a leading question, but I've found myself wondering at times about the old saying that those who don't know the past are doomed to repeat it.

It's not that I don't think there is a good point to that view. However, when I look at the world around me I often see something vastly different from it. I've come to summarize that as: those who cannot let go of the past will never escape it. The implication is that not only those "clingy" people but also those around them will continue living in whatever past they are attached to. As these are generally bad events of the past, we get stuck living with those bad outcomes in the world.

So while keeping that knowledge of the past in mind may prevent repeating it, that might be accomplished by never actually having learned from or moved past the history. I'm unsure which is worse.

comment by dmac_93 (D M Cat) · 2024-12-12T16:10:15.767Z · LW(p) · GW(p)

Meow Meow,

I'd like to introduce myself. My name is David and I am an AGI enthusiast. My goal is to reverse engineer the brain in order to create AGI and to this end I've spent years studying neuroscience. I look forward to talking with you all about neuroscience and AGI.

Now I must admit: I disagree with this community's prevailing opinions on the topic of AI-Doom. New technology is almost always "a good thing". I think we all daydream about AGI, but whereas your fantasies may be dark and grim, mine are bright and utopian.

I'm also optimistic about my ability to succeed. Nature has provided us with intelligent lifeforms which we can probe and dissect until we understand both life and intelligence. Technology has advanced to the point where this is within our reach. Here is a blog post I wrote in support of this point.

As a final note, I'd like to express my disdain for deep learning. It's not biologically plausible. It does not operate on the same basic principles as intelligent life. Maybe with sufficient effort you could use deep learning to create AGI, but I suspect that in doing so you'd rediscover the same principles that are behind biological intelligence.

comment by Raemon · 2024-12-04T19:01:32.963Z · LW(p) · GW(p)

Quick note: there's a bug I'm sorting out for some new LessWrong Review features for this year, hopefully will be fixed soon and we'll have the proper launch post that explains new changes.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-11-26T21:25:42.119Z · LW(p) · GW(p)

Possible bug: Whenever I click the vertical ellipsis (kebab) menu option in a comment, my page view jumps to the top of the page. 

This is annoying, since if I've chosen to edit a comment I then need to scroll back down to the comment section and search for my now-editable comment.

comment by Garrett Baker (D0TheMath) · 2024-11-21T00:50:23.447Z · LW(p) · GW(p)

[Bug report]: The Popular Comments section's comment preview ignores spoiler tags

As seen on Windows/Chrome

comment by Sherrinford · 2024-11-18T23:03:53.070Z · LW(p) · GW(p)

Having read something about self-driving cars actually being a thing now, I wonder how the trolley-problem thing (and whatever other ethics problems come up) was solved in the relevant regulation?

comment by AnthonyC · 2024-11-01T01:34:44.770Z · LW(p) · GW(p)

Is there a way to access text formatting options when commenting on Android devices?

Replies from: habryka4
comment by habryka (habryka4) · 2024-11-01T02:01:26.764Z · LW(p) · GW(p)

On mobile we by default use a markdown editor, so you can use markdown to format things.

Replies from: AnthonyC
comment by AnthonyC · 2024-11-01T11:32:20.463Z · LW(p) · GW(p)

Thanks. I'd somehow made it to 2024 without realizing Markdown was a standardized syntax.

comment by jmh · 2024-10-28T01:55:18.833Z · LW(p) · GW(p)

I was just scrolling through Metaculus and its predictions for the US elections. I noticed that pretty much every case was conditional: "If Trump wins" / "If he doesn't win". I had two thoughts about the estimates for these. All seem to suggest the outcomes are worse under Trump. But that assessment of the outcomes being worse is certainly subject to my own biases, values and preferences. (For example, for US voters, is it really a bad outcome if the probability of China attacking Taiwan increases under Trump? I think so, but others may well see the costs necessary to reduce that likelihood as high for something that does not actually involve the USA.)

So my first thought was: how much bias should I infer as present in these probability estimates? I'm not sure. But that does relate a bit to my other thought.

In one sense you could naively apply "p, therefore not p" as the outcome for the other candidate, since only two actually exist. But I think it is also clear that the two probability distributions don't come from the same pool, so conceivably you could change the name to Harris and get the exact same estimates.

So I was thinking: what if Metaculus did run the two cases side by side? Would seeing p(Harris) + p(Trump) significantly different from 1 suggest one should have lower confidence in the estimates? I am not sure about that.

What if we see something like p(H) approximately equal to p(T)? Does that suggest the selected outcome is poorly chosen, as it is largely independent of the elected candidate, so the estimates are largely meaningless in terms of election outcomes? I have a stronger sense this is the case.

So my bottom line now is that I should likely not hold high confidence that the estimates on these outcomes are really meaningful with regard to the election impacts.

comment by Embee · 2024-10-18T09:51:01.665Z · LW(p) · GW(p)

I'm still bothering you with inquiries on user information. I would like to check this in order to write a potential LW post. Do we have data on the prevalence of "mental illnesses", and do we have a rough idea of the average IQ among LWers (or SSCers, since the community is adjacent)? I'm particularly interested in the prevalence of people with autism and/or schizoid disorders. Thank you very much. Sorry if I used offensive terms; I'm not a native speaker.

Replies from: Screwtape, ChristianKl
comment by Screwtape · 2024-10-21T01:51:51.663Z · LW(p) · GW(p)

I think the best Less Wrong Census for mental illness would be 2016 [? · GW], though 2012 did ask about autism. You're probably going to have better luck using the 2024 SSC/ACX survey data, as it's more recent and bigger.

Have fun! 

comment by ChristianKl · 2024-10-18T12:22:41.917Z · LW(p) · GW(p)

If you search for "Less Wrong Census" you will find the existing surveys of the LessWrong readership. 

comment by ProgramCrafter (programcrafter) · 2024-12-17T20:25:14.231Z · LW(p) · GW(p)

How can I get an empathic handle on my region/country/world's society (average/median/some sample of its people, to be clearer)?

I seem to have got into a very specific social circle, being a constant LW reader and so on. That often makes me think "well, there is question X, a good answer is A1 and it is also shared within the circle, but many people overall will no doubt think A2 and I don't even know who or why". I can read survey/poll results but not understand why people would even want to ban something like surrogate motherhood or things like that.

I've heard one gets to know people when one works with them. If so, I'd like to hear suggestions for some [temporary] professions which could aid me here.

Replies from: ChristianKl
comment by ChristianKl · 2024-12-18T23:49:37.703Z · LW(p) · GW(p)

Does your family have the same opinions as your social circle? If not, family events can be a good place to learn why people hold different beliefs.

Getting to know your neighbors is another way to expose yourself to people who often think differently.

As far as professions go being an Uber driver might get you into a lot of conversations with diverse passengers.

Replies from: programcrafter
comment by ProgramCrafter (programcrafter) · 2024-12-19T17:17:06.235Z · LW(p) · GW(p)

Does your family have the same opinions as your social circle?

Quite similar, in fact - at least where they care to hold opinions! I do listen for perspective, but I still can't fit society's revealed opinion into the span of those whom I know better!

Getting to know your neighbors is another way to expose yourself to people who often think differently.

A good idea! I'll have to take it a bit more generally, because I'm a university student living in a dormitory and already know many people around; though eating at some local cafe with diverse customers should work.

Being an Uber driver would be too taxing on my time, but I'm sure there is another idea instantiation which would work!

 

(And while we're in an Open Thread, I'd like to thank LessWrong for showing that one can make informed decisions on pretty much any topic! I've chosen a uni aligned with me and I'm not disappointed with it so far.)

comment by notfnofn · 2024-12-05T18:43:12.974Z · LW(p) · GW(p)

Is there a way for me to prove that I'm a human on this website before technology makes this task even more difficult?

Replies from: gilch, Yoav Ravid
comment by gilch · 2024-12-17T02:14:44.592Z · LW(p) · GW(p)

I don't know of any officially sanctioned way. But, hypothetically, meeting a publicly-known real human person in person and giving them your public pgp key might work. Said real human could vouch for you and your public key, and no one else could fake a message signed by you, assuming you protect your private key. It's probably sufficient to sign and post one message proving this is your account (profile bio, probably), and then we just have to trust you to keep your account password secure.

comment by Yoav Ravid · 2024-12-05T19:23:09.219Z · LW(p) · GW(p)

Sounds like a question a non-human would ask :P

comment by owngrove · 2024-12-04T05:31:23.141Z · LW(p) · GW(p)

Happened on this song on Tiny Desk: Paperclip Maximizer (by Rosie Tucker, from an album titled "Utopia Now!").

Paperclip maximizer
Single minded if you mind at all
A paragon of puritanical panoptical persistence
Everybody envies your resolve
Paperclip maximizer
Mining for a better way
No ontological contention
Tends your content generation
Every sorrow makes a link in the chain

[...]

And the shareholders meet gruesome ends
But the cosmos expands
So the market survives
All the better to bear all your office supplies
And the space they require was once occupied
By the sun
On your hair
And the curve
Of your thighs
Horizon of sighs
Destroyer of worlds

comment by yc (AAA) · 2024-11-07T08:42:42.192Z · LW(p) · GW(p)

How do we best model an irrational world rationally? I would assume we would need to understand at least how irrationality works?

Replies from: gilch
comment by gilch · 2024-11-08T19:07:08.300Z · LW(p) · GW(p)

Not sure I understand what you mean by that. The Universe seems to follow relatively simple deterministic laws. That doesn't mean you can use quantum field theory to predict the weather. But chaotic systems can be modeled as statistical ensembles. Temperature is a meaningful measurement even if we can't calculate the motion of all the individual gas molecules.

If you're referring to human irrationality in particular, we can study cognitive bias [? · GW], which is how human reasoning diverges from that of idealized agents in certain systematic ways. This is a topic of interest at both the individual level of psychology, and at the level of statistical ensembles in economics.

Replies from: AAA
comment by yc (AAA) · 2024-11-11T17:01:53.375Z · LW(p) · GW(p)

Thanks, I was thinking more of the latter (human irrationality), but found your first part still interesting. I understand irrationality has been studied in psychology and economics, and I was wondering about the modeling of irrationality in particular, for 1-2 players, but also for a group of agents. For example, there are arguments saying that for a group of irrational agents, the group choice could be rational depending on group structure etc. For individual irrationality and continued group irrationality, I think we would need to estimate the level of (and prevalence of) irrationality in some way that captures unconscious preferences, or incomplete information. How to best combine these? Maybe it would just be more data driven.

Replies from: gilch
comment by gilch · 2024-11-12T01:47:54.226Z · LW(p) · GW(p)

That seems to be getting into Game Theory [? · GW] territory. One can model agents (players) with different strategies, even suboptimal ones. A lot of the insight from Game Theory isn't just about how to play a better strategy, but how changing the rules affects the game.

comment by roland · 2024-11-03T12:24:38.486Z · LW(p) · GW(p)

Bayes for arguments: how do you quantify P(E|H) when E is an argument? E.g., if I present you a strong argument supporting hypothesis H, how can you put a number on that?

Replies from: D0TheMath, notfnofn
comment by Garrett Baker (D0TheMath) · 2024-11-03T16:40:23.804Z · LW(p) · GW(p)

There’s not a principled way for informal arguments, but there are a few for formal arguments—ie proofs. The relevant search term here is logical induction [? · GW].

comment by notfnofn · 2024-11-03T14:13:03.053Z · LW(p) · GW(p)

I think P(E|H) is close enough to 1 to be dropped here; the more interesting thing is P(E|¬H) (how likely would they be to make such a convincing argument if the hypothesis is false?). We have

P(E) = P(E|H)P(H) + P(E|¬H)P(¬H),

so Bayes' rule becomes

P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|¬H)P(¬H)] ≈ P(H) / [P(H) + P(E|¬H)P(¬H)].


Edit: actually use likelihood ratios; it's way simpler.
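For reference, a sketch of the likelihood-ratio (odds) form mentioned in the edit - this is just the standard odds form of Bayes' rule, nothing specific to this argument:

```
\frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \frac{P(H)}{P(\lnot H)} \cdot \frac{P(E \mid H)}{P(E \mid \lnot H)},
\qquad \text{and with } P(E \mid H) \approx 1:\quad
\text{posterior odds} \approx \frac{\text{prior odds}}{P(E \mid \lnot H)}.
```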

comment by low_pryce · 2024-10-26T13:55:55.003Z · LW(p) · GW(p)

Hello all,

I am new here. I am uncertain of what exactly is expected to be posted here, but I was directed here by a friend after sharing something I had written down with them. They encouraged me to post it here, and after reading a few threads I came to find interest in this site. So now I would like to share here what I shared with them, which is a new way to view understanding from an objective standpoint, one that regards emergent phenomena such as intuition, innovation, and inspiration:

The concept of Tsaphos

Definition:

Tsaphos refers to the set of understandings beyond logic that significantly contribute to innovation, intuition, and inspiration.

Application:

To understand Understanding, one must realize that Logic alone is not sufficient. The concept of Tsaphos is not to introduce a new phenomenon, but rather to outline the limits of Logic in conceptualizing Understanding. For example, an intelligent mind might be able to recognize a pattern without being able to comprehend the underlying principles, such as a student learning trigonometry without a full understanding of the unit circle.

In essence, Tsaphos encompasses understandings that lie beyond what Logic can provide. It is through this concept that Logic can be separated from Understanding, reducing Logic to an apparatus instead of a world-view.

This concept also solidifies Understanding as an explorable landscape with seemingly infinite depth and intricacies.

Once someone is able to fully comprehend and describe the system of Logic behind an Understanding initially grasped through Tsaphos, the set of Tsaphos decreases in volume and the set of Logic increases. However, the larger the set of Logic becomes, the larger the landscape of Understanding is, meaning that as long as there is infinite depth, Tsaphos will always grow larger as a set in volume compared to Logic.

Differences from intuition (at the request of a friend):

This concept almost mimics intuition, yet there is a significant difference. Intuition is immediate, almost as if it were an action, where the definition relies entirely on individual expression. While intuition is the immediate recognition or action, Tsaphos represents the underlying unknowns that intuition accesses. Tsaphos and intuition are intrinsically related, much like how Tsaphos is related to both innovation and inspiration. Tsaphos pertains to the content of Understanding, whereas inspiration, then intuition, and then innovation pertain to the process of the phenomenon.

Tsaphos is more of a complementary tool alongside Logic for better understanding Understanding: since it is possible for someone to intuitively infer something, there was something to infer in the first place. That something lies within the set of Tsaphos, and once understood it no longer resides within that set.

comment by DiamondSolstice (TourmalineCupcakes) · 2024-10-08T00:29:41.215Z · LW(p) · GW(p)

I'd like to know: what are the main questions a rational person would ask? (Also what are some better ways to phrase what I have?)

I've been thinking something like

  • What will happen in the future?
  • What is my best course of action regardless of what all other people are doing? (Asked in moderation)
Replies from: gilch, Screwtape, jmh
comment by gilch · 2024-10-10T03:32:18.745Z · LW(p) · GW(p)

What we'd ask depends on the context. In general, not all rationalist teachings are in the form of a question, but many could probably be phrased that way.

"Do I desire to believe X if X is the case and not-X if X is not the case?" [? · GW] (For whatever X in question.) This is the fundamental lesson of epistemic rationality. If you don't want to lie to yourself, the rest will help you get better at that. But if you do, you'll lie to yourself anyway and all your acquired cleverness will be used to defeat itself. [LW · GW]

"Am I winning?" [LW · GW] This is the fundamental lesson of instrumental rationality. It's not enough to act with Propriety or "virtue" or obey the Great Teacher. Sometimes the rules you learned aren't applicable. If you're not winning and it's not due to pure chance, you did it wrong, propriety be damned. You failed to grasp the Art. Reflect, and actually cut the enemy.

Those two are the big ones. But there are more.

Key lessons from Bayes:

Others I thought of:

  • Am I confused? [? · GW]
  • What's a concrete example?
  • What do I expect? (And then, "How surprised am I?")
  • Assuming this failed, how surprised would I be? What's the most obvious reason why it would fail? Can I take a step to mitigate that/improve my plans? (And repeat [? · GW].)
  • Does this happen often enough to be worth the cost of fixing it?
  • Has anyone else solved this problem? Have I checked? (Web, LLMs, textbooks?)
  • Have I thought about this for at least five minutes?
  • Do I care? Should I? Why?
  • Wanna bet? [? · GW]
  • Can I test this?
  • What's my biggest problem? [LW · GW] What's the bottleneck? What am I not allowed to care about? If my life was a novel, what would I be yelling at the protagonist to do? Of my current pursuits, which am I pursuing ineffectively? What sparks my interest?
  • Am I lying?

I'm not claiming this list is exhaustive.

comment by Screwtape · 2024-10-10T18:37:36.274Z · LW(p) · GW(p)

There's a triad of paired questions I sometimes run through.

  • What do you think you know and how do you think you know it?
  • Do you know what you are doing, and why you are doing it?
  • What are you about to do and what do you think will happen next?

They're suited for slightly different circumstances, but I think each is foundational in its own way.

comment by jmh · 2024-10-20T06:59:13.315Z · LW(p) · GW(p)

I think perhaps a first one might be:

On what evidence do I conclude that what I think I know is correct/factual/true, and how strong is that evidence? To what extent have I verified that view, and just how extensively should I verify the evidence?

After that might be a similar approach to the implications or outcomes of applying actions based on what one holds as truth/fact.

I tend to think of rationality as a process rather than endpoint. Which isn't to say that the destination is not important but clearly without the journey the destination is just a thought or dream. That first of a thousand steps thing.

Replies from: AliceZ
comment by ZY (AliceZ) · 2024-10-28T07:01:57.375Z · LW(p) · GW(p)

On what evidence do I conclude that what I think I know is correct/factual/true, and how strong is that evidence? To what extent have I verified that view, and just how extensively should I verify the evidence?


For this, aside from traditional paper reading from credible sources, one good approach in my opinion is to actively seek evidence/arguments from, or initiate conversations with, people who have a different perspective from me (on both sides of the spectrum if the conclusion space is continuous).

comment by lucid_levi_ackerman · 2024-11-29T05:55:58.029Z · LW(p) · GW(p)

Levi da.

I'm here to see if I can help.

I heard a few things about Eliezer Yudkowsky. Saw a few LW articles while looking for previous research on my work with AI psychological influence. There isn't any, so I signed up to contribute.

If you recognize my username, you probably know why that's a good idea. If you don't, I don't know how to explain succinctly yet. You'd have to see for yourself, and a web search can do that better than an intro comment.

It's a whole ass rabbit hole so either follow to see what I end up posting or downvote to repress curiosity. I get it. It's not comfortable for me either.

Update: explanation in bio.