Posts

Questions on Theism 2014-10-08T21:02:43.338Z

Comments

Comment by Aiyen on Your best future self · 2021-05-13T01:01:40.995Z · LW · GW

I have mixed feelings about this post.  On the one hand, it's a new, interesting idea.  You say it's helpful to you, and it wouldn't be entirely surprising if it's helpful to a great many readers.  This could be a very good thing.  

On the other hand, there's a tendency among rationalists these days to turn to religion, or to the closest thing to religion we can make ourselves believe in.  For a while there were a great many posts about meditation and enlightenment, for instance, and if we look at common egregores in the community, we find several.  Azathoth, God of Evolution.  Moloch, God of Prisoners' Dilemmas.  Cthulhu, God of Memetics and Monotonically Increasing Progressivism.  Bruce, God of Self-Sabotage.  This can be entertaining, and perhaps motivating.  Yet I cannot shake the feeling that we're taking a serious risk in trying to create something too closely akin to religion.  As the saying goes, what do you think you know, and how do you think you know it?  We're quite certain that e.g. Islam is founded on lies, with more lies built up to try to protect the initial deceptions.  Do you really want to mimic such a thing?  A tradition created without a connection to actual reality is unlikely to have any value.  

I won't say that you shouldn't pray to your future self, if you find doing so beneficial, and you yourself say this isn't your usual subject matter.  But be careful.  It's far too easy to create religious-style errors even if you do not consciously believe in your rituals.  

Comment by Aiyen on Core Pathways of Aging · 2021-04-10T20:21:53.165Z · LW · GW

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.0030058

This is the source I found. It’s fairly old, so if you’ve found something that supersedes it I’d be interested.

Comment by Aiyen on Core Pathways of Aging · 2021-04-08T20:39:39.902Z · LW · GW

An initial search doesn’t confirm whether or not mycoplasma age. Bacteria do age, though; even seemingly symmetrical divisions yield one “parent” bacterium that ages and dies.

If mycoplasma genuinely don’t, that would be fascinating and potentially yield valuable clues on the aging mechanism.

Comment by Aiyen on Core Pathways of Aging · 2021-04-04T21:08:52.819Z · LW · GW

Minimal cell experiments (making cells with as small a genome as possible) have already been done successfully. This presumably removes transposons, and I have not heard that such cells had abnormally long lifespans.

One possibility is that there are at least two aging pathways-the effect of transposons, which evolution wasn’t able to eliminate, and an evolved aging pathway intended to eliminate older organisms so they don’t compete with their progeny (doing so while suffering ill health from transposon build-up would be less fit than dying and delegating reproduction to one’s less transposon-heavy offspring).

There is significant evidence that most organisms have evolved to eventually deliberately die, independent of problems like transposons that aren’t intentional on the level of the organism. Yamanaka factors can reverse some symptoms of aging, and appear to do so by activating a rejuvenation pathway. This makes perfect sense if the body deliberately reserves that pathway for gamete production while ordinarily letting itself deteriorate. It is extremely confusing if aging is purely damage, however. Yamanaka factors don’t provide new information (other than the order to rejuvenate) or resources; a body that is doing its best to avoid aging wouldn’t seem to benefit from them, and could presumably evolve to produce them if evolution found this desirable. Other examples include the beneficial effects of removing old blood plasma (this appears to trick the body into thinking it is younger, which should work on a deliberately aging organism but not one that aged purely through damage), the fact that rat brain cells deteriorate as they perceive the brain gradually stiffening with age, but rejuvenate if their ability to detect stiffness is removed, and the fact that some species of octopuses commit suicide after reproducing, and refrain from doing so if a particular gland is removed.

If both transposons and a deliberate aging pathway contribute to aging, it would be very interesting to see what happens in an organism with both transposon inactivation and Yamanaka factor treatment. Neither appears to create massive life extension on its own, but together they might do so, or at least point out worthwhile directions for further inquiry.

Comment by Aiyen on Reasons against anti-aging · 2021-01-25T08:30:24.131Z · LW · GW

"Or maybe anti-aging is inherently interesting to some people who want to live to see flying cars..." 

Maybe anti-aging is inherently interesting?  Do you not expect some people to want to survive?  The will to live is inherent in humanity for very obvious evolutionary reasons.  Moreover, anyone whose quality of life is positive has reason to want to live so long as that is the case.  There are religious people who want to die so as to attain an afterlife, but unless you are hoping for Heaven/Nirvana/72 Virgins/whatever, or your current quality of life is negative, anti-aging should be inherently interesting to you.

"and no rational critique would dissuade them."

If something is inherently interesting, people will want it unless there is a cost that exceeds the benefit.  If there is such a cost, such a rational critique will in fact dissuade rational people.  This seems like a cheap attempt to make transhumanists seem unwilling to listen to reason without actually making a case to that effect.

"In short, the best approach would be to rebuild your tree from scratch. This is why having kids is more efficient than just having more time on earth."

More efficient for what purpose?  Even if we assume you are correct that experience is a negative to career success (not what is typically observed, to put it mildly), what are you hoping to attain with your career that is better served by dying and hoping your children will carry on the work?  It can't be making money for you-you do not benefit from money when you're dead!  It can't be making money for your children; you're as dismissive of their survival as of your own.  It sounds like you want money for your genetic lineage, but why?  Normally people value their wellbeing and that of their family; all of you dying does not serve this.  You can't even claim to be following some underlying evolutionary principle, as the survival of you and your children will preserve your genes better than letting them be diluted down over generations.  

"Even if birth rates went down to replacement rate tomorrow, improvements in longevity would result in more people being on the planet at any given time."

Correct.  On the other hand, while overpopulation is a potential concern with longevity, it is worth taking five minutes to consider the problem rather than simply electing to die.  Potential solutions include interplanetary colonization, mind uploading, better birth control or simply handing off the problem to a friendly AI.  All of these are technically challenging, but so is life extension.  It does not make sense to assume that a world capable of it must be forever incapable of ever finding a solution to overpopulation.  To assert that this question must necessarily make life extension harmful is to assert that we know that no such solution can be found, quite the extraordinary claim.  The milder claim that this is a concern worth addressing is by contrast valid, but that's not a reason to abandon life extension, merely one to develop population solutions in tandem, if we can.

"Arguably, one of reasons young people are frustrated with modern politics is that boomers are still very much in the driver's seat. "

Easy enough to mandate political retirement at a particular age.  Disenfranchisement is better than death.  To quote Eliezer's short story Three Worlds Collide, "Only youth can Administrate.  That is the pact of immortality." 

"...we'll need more senior care. This may become a costly burden on future generations. "

Potentially.  Or a population that spends more time healthy and able to work and less time slowly decaying in retirement might impose a lighter burden on future generations.  Or perhaps a growing, potentially-automated economy will obviate the question entirely.  This is much like the overpopulation question in that it conflates desirability with prudence.  Desirability is whether or not we consider a thing beneficial as such; whether or not we'd want it in the absence of countervailing costs.  Prudence is whether or not we consider a thing worthwhile on net even counting the costs. You point out, correctly, that overpopulation and a strained senior care system are potential risks that may need to be addressed if we want to make life extension prudent.  That does not mean that it is not desirable, nor that we should immediately view the costs it could impose as impossible to mitigate. 

"We may also have to consider assisted suicide for people who would be dead if it weren't for technology. Should we keep them alive because we can?"  

Do these people want to die?  Are we out of resources to sustain them with?  If the answers are no and no, why should we kill them?  If one or more of the answers is yes, that's a concern, but one better answered by seeking to improve their quality of life or acquire more resources, at least if we value human wellbeing.  And if we don't, why are we bothering to stay alive ourselves, or avoid killing willy-nilly?  

Ultimately, it is human nature to value survival.  We cannot always survive, we may sacrifice ourselves for others we care for if we cannot both survive, and some people even choose death out of misery or religious faith.  Yet where it is possible, it is better to make life worth living than to give up and die.  Where it is possible, it is better to save everyone rather than sacrificing our lives.  Where it is possible, it is better to oppose aging like we would any other injury, and while I cannot claim that life is better than Heaven, you did not bring up afterlives, so it seems unlikely that they are factoring into your reasoning.  Unless you assert that the natural order of things was divinely, benevolently ordained, there is no reason to think that death by aging is somehow better than any other threat to life, be it disease, injury, war, poverty or the like.  

Would you use those same reasons to argue for Covid?

Comment by Aiyen on The True Face of the Enemy · 2021-01-12T18:30:38.415Z · LW · GW

This is also true for many people not in that age range. “Many people in a group will try to make life harder for those around them” isn’t much of an argument for incarceration. If it were, who would you permit to be free?

Comment by Aiyen on GPT-3 + GAN · 2020-10-19T23:00:49.135Z · LW · GW

That might work.  Maybe have the adversarial network try to distinguish GPT-3 text from human text?  That said, GPT-3 is already trying to predict humanlike text continuations, so there's a decent chance that having a separate GAN layer wouldn't help.  It's probably worth doing the experiment though; traditional GANs work by improving the discriminator as well as the desired categorizer, so there's a chance it could work here too. 
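To make that concrete, here is a minimal sketch of the discriminator half of the idea, using a toy bag-of-words classifier and made-up placeholder sentences rather than real GPT-3 output; the adversarial step of feeding the discriminator's signal back into the generator is the part this doesn't show.

```python
# Minimal sketch: train a classifier to tell model-generated text from human text.
# All texts here are placeholders; a real experiment would pair GPT-3 samples with
# human continuations of the same prompts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["I walked to the store and it started raining halfway there.",
               "Honestly, the meeting could have been an email."]
model_texts = ["The store, which I walked to, was a store that I walked to.",
               "In conclusion, the meeting was a meeting about the meeting."]

texts = human_texts + model_texts
labels = [1] * len(human_texts) + [0] * len(model_texts)  # 1 = human, 0 = model

discriminator = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression())
discriminator.fit(texts, labels)

# The discriminator's probability of "human" could serve as the reward signal
# for fine-tuning the generator, which is the GAN-like part of the proposal.
print(discriminator.predict_proba(["A new sentence to score."])[0][1])
```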

Comment by Aiyen on Covid 10/1: The Long Haul · 2020-10-02T18:35:50.174Z · LW · GW

You say vulnerable, low-income people "must put themselves at risk to stay alive", then propose not letting them do so?  A lockdown, by itself, does not give the poor any money.  If you wish to prevent them from working risky jobs to support themselves, you must either offer them some other form of support or assert that they have other, better options ("homelessness, malnourishment, etc."?), but are making the wrong decision by working and thus ought to be prevented from doing so.  Being denied options is only protection if one is making the wrong decision.

Do you think these people ought to be homeless and malnourished?  If so, that's a hard case to make morally or practically.  If not, you should offer an alternative, rather than simply banning what you yourself state is their only path to avoiding this.

Comment by Aiyen on Why haven't we celebrated any major achievements lately? · 2020-09-11T18:43:45.279Z · LW · GW

"We hold all Earth to plunder, all time and space as well. Too wonder-stale to wonder at each new miracle."-Rudyard Kipling

Comment by Aiyen on Stop saying wrong things · 2020-05-03T21:49:40.817Z · LW · GW

This is a genuine concern, and this may be particularly high-variance advice. However, a focus on avoiding mistakes over trying new "superstrategies" might also help some people with akrasia. It's easier to do what you know than seek some special trick. Personally, at least, I find akrasia is worst when it comes from not knowing what to do next. And while taking fewer actions in general is usually a bad idea, trying to avoid mistakes could also be used for "the next time I'm about to sit around and do nothing, instead I'll clean/program/reach out to a friend." This doesn't sound like it has to be about necessarily doing less.

Comment by Aiyen on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-05T18:28:52.194Z · LW · GW

Consider a charity providing malaria nets. Somebody has to make the nets. Somebody has to distribute them. These people need to eat, and would prefer to have shelter, goods, services and the like. That means that you need to convince people to give food, shelter, etc. to the net makers. If you give them money, they can simply buy their food.

This of course raises the question of why you can't simply ask other people to support the charity directly. But consider someone providing a service to the charity workers: even if they care passionately about fighting malaria, they do not want to run out of resources themselves! If you make food, and give it all to the netweavers, how can you get your own needs met? What happens when you need medical care, and the doctor in turn would love to treat a supporter of the anti-malaria fight, but wants to make sure he can get his car fixed?

In a nutshell, people want to make sure there will be resources available to them when they need them. Money allows us to keep track of those resources: if everyone treats money as valuable, we can be confident of having access to as many resources as our savings will buy at market rates. If we decide instead to have everyone be "generous" and give in the hopes that others will give to them in turn, it becomes impossible to keep track of who needs to do how much work or who can take how many resources without creating a shortage. You can't even solve that problem by having everyone decide to work hard and consume little; doing too much can be as harmful as doing too little, as resources get foregone. And of course, that's with everyone cooperating. If someone decides to defect in such a system, they can take and take while providing nothing in return. Thus, it is much easier to manage resources with money, despite it being "not real", even in the case of charity. Giving money to a charity is a commitment to consume less (or to give up the right to consume as much as you possibly could, whether or not your actual current spending changes), freeing up resources that are then directed to the charity.

Comment by Aiyen on Why Are So Many Rationalists Polyamorous? · 2019-10-23T23:00:05.057Z · LW · GW

By that definition nothing is zero sum. "Zero sum" doesn't mean that literally all possible outcomes have equal total utility; it means that one person's gain is invariably another person's loss.

Comment by Aiyen on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T20:36:17.634Z · LW · GW

"But Petrov was not a launch authority. The decision to launch or not was not up to him, it was up to the Politburo of the Soviet Union."

This is obviously true in terms of Soviet policy, but it sounds like you're making a moral claim. That the Politburo was morally entitled to decide whether or not to launch, and that no one else had that right. This is extremely questionable, to put it mildly.

"We have to remember that when he chose to lie about the detection, by calling it a computer glitch when he didn't know for certain that it was one, Petrov was defecting against the system."

Indeed. But we do not cooperate in prisoners' dilemmas "just because"; we cooperate because doing so leads to higher utility. Petrov's defection led to a better outcome for every single person on the planet; assuming this was wrong because it was defection is an example of the non-central fallacy.

"Is that the sort of behavior we really want to lionize?"

If you will not honor literally saving the world, what will you honor? If we wanted to make a case against Petrov, we could say that by demonstrably not retaliating, he weakened deterrence (but deterrence would have helped no one if he had launched), or that the Soviets might have preferred destroying the world to dying alone, and thus might be upset with a missileer unwilling to strike. But it's hard to condemn him for a decision that predictably saved the West, and had a significant chance (which did in fact occur) of saving the Soviet Union.

Comment by Aiyen on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-22T04:19:06.746Z · LW · GW

This seems wrong.

The second law of thermodynamics isn't magic; it's simply the fact that when some categories contain many possible states and others contain only a few, a system jumping randomly from state to state will tend to end up in the larger categories. Hence melting-arrange atoms randomly and it's more likely that you'll end up in a jumble than in one of the few arrangements that permit solidity. Hence heat equalizing-the kinetic energy of thermal motion can spread out in many ways, but remain concentrated in only a few; thus it tends to spread out. You can call that the universe hating order if you like, but it's a well-understood process that operates purely through small targets being harder to hit, not through a force actively pushing us towards chaos, making particles zig when they otherwise would have zagged so as to create more disorder.
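A toy illustration of the "small targets are harder to hit" point, counting arrangements of coins rather than atoms (my own example, not anything from Land):

```python
# With 20 coins, "all heads" is a single arrangement, while "exactly half heads"
# is a huge category. Random jumps between arrangements almost always land in the
# big categories; no force pushing toward disorder is needed.
from math import comb
from random import choice

n = 20
print("arrangements that are all heads:", 1)
print("arrangements with exactly 10 heads:", comb(n, 10))  # 184756

# Sample random arrangements and see how often we hit the tiny "ordered" category.
trials = 100_000
hits = sum(all(choice("HT") == "H" for _ in range(n)) for _ in range(trials))
print("fraction landing on all-heads:", hits / trials)  # about one in a million; usually 0
```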

This being the case, claiming that life exists for the purpose of wasting energy seems absurd. Evolution appears to explain the existence of life, and it is not an entropic process. Positing anything else being behind it requires evidence, something about life that evolution doesn't explain and entropy-driven life would. Also, remember, entropy doesn't think ahead. It is purely the difficulty of hitting small targets; a bullet isn't going to 'decide' to swerve into a bull's eye as part of a plan to miss more later! It would be very strange if this could somehow mold us into fearing both death and immortality as part of a plan to gather as much energy as we could, then waste it through our deaths.

This seems like academics seeking to be edgy much more than a coherent explanation of biology.

As for transhumanism being overly interested in good or evil, what would you suggest we do instead? It's rather self-defeating to suggest that losing interest in goodness would be a good idea.

Comment by Aiyen on The Hard Work of Translation (Buddhism) · 2019-04-08T21:18:50.858Z · LW · GW

So enlightenment is defragmentation, just like we do with hard drives?

Comment by Aiyen on Rest Days vs Recovery Days · 2019-03-20T23:17:31.242Z · LW · GW

That makes a fair bit of sense. And what are your thoughts on work days? I get my work for my job done, but advice on improving productivity on chores and future planning would be appreciated. Also, good point on pica!

Comment by Aiyen on Rest Days vs Recovery Days · 2019-03-20T18:16:50.414Z · LW · GW

Very interesting dichotomy! Definitely seems worth trying. I'm confused about the reading/screen time/video games distinction though. Why would reading seem appealing but being in front of a screen not? Watching TV is essentially identical to reading, right? You're taking in a preset story either way. Admittedly you can read faster than TV characters can talk, so maybe that makes it more rewarding?

Also, while playing more video games while recovering and fewer while resting makes sense (they're an easy activity while low on energy, and thus will take up much of a recovery day, but less of a rest day), "just following my gut" can still lead to plenty of gaming. Does this mean that I should still play some on a rest day, just less? That I almost never have enough energy to rest instead of recover? That I'm too into gaming and this is skewing my gut such that a good rest day rule would be "follow your gut, except playing fewer/no games today"?

Comment by Aiyen on [NeedAdvice]How to stay Focused on a long-term goal? · 2019-03-08T22:34:59.090Z · LW · GW

First off, you probably want to figure out if your nihilism is due to philosophy or depression. Would you normally enjoy and value things, but the idea of finite life gets in the way? Or would you have difficulty seeing a point to things even if you were suddenly granted immortality and the heat death of the universe was averted?

Either way, it's difficult to give a definitive solution, as different things work for different people. That said, if the problem seems to be philosophy, it might be worth noting that the satisfaction found in a good moment isn't a function of anything that comes after it. If you enjoy something, or you help someone you love, or you do anything else that seems valuable to you, the fact of that moment is unchangeable. If the stars die in heaven, that cannot change the fact that you enjoyed something. Another possible solution would be trying to simply not think about it. I know that sounds horribly dismissive, but it's not meant to be. In my own life there have been philosophical (and in my case religious) issues that I never managed to think my way out of... but when I stopped focusing on the problem it went away. I managed this only after getting a job that let my brain say "okay, worry about God later, we need to get this task done first!" If you think it would be useful, finding an activity that demands attention might help (if you feel that your brain will let you shift your attention; if not, this might just be overly stressful).

If the problem seems to be depression, adrafinil and/or modafinil are extremely helpful for some people. Conventional treatments exist too of course (therapy and/or anti-depressants); I don't know anyone who has benefited from therapy (at least not that they've told me), but one of my friends had night and day improvement with an anti-depressant (sadly I don't remember which one; if you like I can check with her). Another aspect of overcoming depression is having friends in the moment and a plan for the future, not a plan you feel you should follow, but one you actively want to. I don't know your circumstances, but insofar as you can prioritize socialization and work for the future, that might help.

As for the actual question of self-improvement, people vary wildly. An old friend of mine found huge improvements in her life due to scheduling; I do markedly better without it. The best advice I can offer (and this very well might not help; drop it if it seems useless or harmful) is three things:

Don't do what you think you should do, do what you actually want to (if there isn't anything that you want, maybe don't force yourself to find something too quickly either). People find motivation in pursuing goals they actually find worthwhile, but following a goal that sounds good but doesn't actually excite you is a recipe for burnout.

Make actionable plans-if there's something you want to do, try to break it down into steps that are small enough, familiar enough or straightforward enough that you can execute the plan without feeling out of your depth. Personally, at least, I find there's a striking "oh, that's how I do that" feeling when a plan is made sufficiently explicit, a sense that I'm no longer blundering around in a fog.

Finally, and perhaps most importantly, don't eliminate yourself. That is, don't abandon a goal because it looks difficult; make someone else eliminate you. This is essential because many tasks look impossible from the outside, especially if you are depressed. It's almost the mirror image of the planning fallacy-when people commit to doing something, it's all too easy to envision everything going right and not account for setbacks. But before you actually take the plunge, so to speak, it's easy to just assume you can't do anything, which is simply not true.

Comment by Aiyen on To understand, study edge cases · 2019-03-03T17:43:41.685Z · LW · GW

"To understand anatomy, dissect cadavers." That's less a deliberate study of an edge case, and more due to the fact that we can't ethically dissect living people!

Comment by Aiyen on Minimize Use of Standard Internet Food Delivery · 2019-02-11T19:35:14.143Z · LW · GW

At the risk of appearing defective, isn't this the sort of action one would only want to take in a coordinated manner? If it turns out that use of such delivery services tends to force restaurants out of business, then certainly one would prefer a world where we don't use those services and still have the restaurants-you can't order take out from a place that doesn't exist anymore! But deciding unilaterally to boycott delivery imposes a cost without any benefit-whether I choose to use delivery or not will not make the difference. This looks like a classic tragedy of the commons, where it is best to coordinate cooperation, but cooperating without that coordination is a pure loss.

Comment by Aiyen on [Link] Did AlphaStar just click faster? · 2019-01-29T01:15:34.667Z · LW · GW

Interesting article. It argues that the AI learned spam clicking from human replays, then needed its APM cap raised to prevent spam clicking from eating up all of its APM budget and inhibiting learning. Therefore, it was permitted to use inhumanly high burst APM, and with all its clicks potentially effective actions instead of spam, its effective actions per minute (EPM, actions not counting spam clicks) are going to outclass human pros to the point of breaking the game and rendering actual strategy redundant.

Except that if it's spamming, those clicks aren't effective actions, and if those clicks are effective actions, it's not spamming. To the extent AlphaStar spams, its superhuman APM is misleading, and the match is fairer than it might otherwise appear. To the extent that it's using high burst EPM instead, that can potentially turn the game into a micro match rather than the strategy match that people are more interested in. But that isn't a question of spam clicking.

Of course, if it started spam clicking, needed the APM cap raised, converted its spam into actual high EPM and Deepmind didn't lower the cap afterwards, then the article's objection holds true. But that didn't sound like what it was arguing (though perhaps I misunderstood it). Indeed, it seems to argue the reverse, that spam clicking was so ingrained that the AI never broke the habit.

Comment by Aiyen on For what do we need Superintelligent AI? · 2019-01-25T22:17:20.070Z · LW · GW

It depends on the goal. We can probably defeat aging without needing much more sophisticated AI than Alphafold (a recent Google AI that partially cracked the protein folding problem). We might be able to prevent the creation of dangerous superintelligences without AI at all, just with sufficient surveillance and regulation. We very well might not need very high-level AI to avoid the worst immediately unacceptable outcomes, such as death or X-risk.

On the other hand, true superintelligence offers both the ability to be far more secure in our endeavors (even if human-level AI can mostly secure us against X-risk, it cannot do so anywhere nearly as reliably as a stronger mind), and the ability to flourish up to our potential. You list high-speed space travel as "neither urgent nor necessary", and that's true-a world without near lightspeed travel can still be a very good world. But eventually we want to maximize our values, not merely avoid the worst ways they can fall apart.

As for truly urgent tasks, those would presumably revolve around avoiding death by various means. So anti-aging research, anti-disease/trauma research, gaining security against hostile actors, ensuring access to food/water/shelter, detecting and avoiding X-risks. The last three may well benefit greatly from superintelligence, as comprehensively dealing with hostiles is extremely complicated and also likely necessary for food distribution, and there may well be X-risks a human-level mind can't detect.

Comment by Aiyen on For what do we need Superintelligent AI? · 2019-01-25T22:01:41.362Z · LW · GW

Most people seem to need something to do to avoid boredom and potentially outright depression. However, it is far from clear that work as we know it (which is optimized for our current production needs, and in no way for the benefit of the workers as such) is the best way to solve this problem. There is likely a need to develop other things for people to do alongside alleviating the need for work, but simply saying "unemployment is bad" would seem to miss that there may be better options than either conventional work or idleness.

Comment by Aiyen on For what do we need Superintelligent AI? · 2019-01-25T21:57:52.834Z · LW · GW

Where governance is the barrier to human flourishing, doesn't that mean that using AI to improve governance is useful? A transhuman mind might well be able to figure out not only better policies but how to get those policies enacted (persuasion, force, mind control, incentives, something else we haven't thought of yet). After all, if we're worried about a potentially unfriendly mind with the power to defeat the human race, the flip side is that if it's friendly, it can defeat harmful parts of the human race, like poorly-run governments.

Comment by Aiyen on For what do we need Superintelligent AI? · 2019-01-25T21:52:23.737Z · LW · GW

Safer for the universe maybe, perhaps not for the old person themselves. Cryonics is highly speculative-it *should* work, given that if your information is preserved it should be possible to reconstruct you, and cooling a system enough should reduce thermal noise and reactivity enough to preserve information... but we just don't know. From the perspective of someone near death, counting on cryonics might be as risky as, or riskier than, a quick AI.

Comment by Aiyen on Do the best ideas float to the top? · 2019-01-21T15:32:39.425Z · LW · GW

This. Also, political factors-ideas that boost the status of your tribe are likely to be very competitive independently of truth and nearly so of complexity (though if they're too complex one would expect to see simplified versions propagating as well).

Comment by Aiyen on Life can be better than you think · 2019-01-21T15:14:46.487Z · LW · GW

"Emotions have their role in providing meaning."

Even if true, is meaning actually valuable? I would far rather be happy than meaningful, and a universe of truth, beauty, love and joy seems much more worthwhile than a universe of meaning.

Caveat-I feel much the same disconnect in hearing about meaning that Galton's non-imagers appeared to feel about mental imaging, so there's a pretty good chance I simply don't have the mental circuitry needed to appreciate or care about meaning. You might be genuinely pursuing something very important to you in seeking meaning. On the other hand, even if that's true, it's worth noting that there are some people who don't need it.

Comment by Aiyen on What are questions? · 2019-01-10T02:15:00.630Z · LW · GW

It's a noticed gap in your knowledge.

Comment by Aiyen on Consequentialism FAQ · 2019-01-02T23:24:51.531Z · LW · GW

Link doesn't seem to work.

Comment by Aiyen on What makes people intellectually active? · 2018-12-30T19:40:40.026Z · LW · GW

My best guess: There's a difference between reviewing ideas and exploring them.
Reviewing ideas allows you to understand concepts, think about them and talk about them, but you're looking at material you already have. Consider someone preparing a lecture well-they'll make sure that they have no confusion about what they're covering, and write eloquently on the topic at hand.

On the other hand, this is thinking along pre-set pathways. It can be very useful for both learning and teaching, but you aren't likely to discover something new. Exploring ideas, by contrast, is looking at a part of idea space and then seeing what you can find. It's thinking about the implications of things you know, and looking to see if an unexpected result shows up, or simply considering a topic and hoping that something new on the subject occurs to you.

Comment by Aiyen on Fifteen Things I Learned From Watching a Game of Secret Hitler · 2018-12-19T19:19:00.463Z · LW · GW

"The more liberal policies you pass, the more likely it is any future policy will be fascist."

Sadly this one is likely true irl. When you have a government that passes more and more laws, and does not repeal old laws, then the degree of restriction of people's lives increases monotonically. This creates a precedent for ever more control, until the end is either a backlash or tyranny.

Comment by Aiyen on 18-month follow-up on my self-concept work · 2018-12-19T19:12:27.151Z · LW · GW

Not Kaj, but shame and self-concept (damaging or otherwise) are thoughts (or self-concept is a thought and shame is an emotion produced by certain thoughts). It seems obvious that people with a greater tendency to think will be at greater risk of harmful thoughts. Of course, they'll also have a better chance of coming up with something beneficial as well, but that doesn't strike me as likely to cancel out the harm. Humans are fairly well adapted for our intellectual and social niche; there are a lot more ways for introspection to break things than to improve them.

Comment by Aiyen on Open Thread September 2018 · 2018-09-27T00:30:31.035Z · LW · GW

Happy Petrov Day!

Comment by Aiyen on A Rationalist's Guide to... · 2018-08-10T16:20:30.334Z · LW · GW

...? "Winning" isn't just an abstraction, actually winning means getting something you value. Now, maybe many rationalists are in fact winning, but if so, there are specific values we're attaining. It shouldn't be hard to delineate them.

It should look like, "This person got a new job that makes them much happier, that person lost weight on an evidence-based diet after failing to do so on a string of other diets, this other person found a significant other once they started practicing Alicorn's self-awareness techniques and learned to accept their nervousness on a first date..." It might even look like, "This person developed a new technology and is currently working on a startup to build more prototypes."

In none of these cases should it be hard to explain how we're winning, nor should Tim's "not looking carefully enough" be an issue. Even if the wins are limited to subjective well-being, you should at least be able to explain that! Do you believe that we're winning, or do you merely believe you believe it?

Comment by Aiyen on Who Wants The Job? · 2018-07-22T16:29:32.230Z · LW · GW

This is simultaneously horrifying and incredibly comforting. One would hope that people would be orders of magnitude better than this. But it also bodes very well for the future prospects of anyone remotely competent (unless your boss is like this...)

Comment by Aiyen on An optimization process for democratic organizations · 2018-07-14T01:34:28.204Z · LW · GW

"True. Equalizing the influence of all parties (over the long term at least) doesn't just risk giving such people power; it outright does give them power. At the time of the design, I justified it on the grounds that (1) it forces either compromise or power-sharing, (2) I haven't found a good way to technocratically distinguish humane-but-dumb voters from inhumane-but-smart ones, or rightly-reviled inhumane minorities from wrongly-reviled humane minorities, and (3) the worry that if a group's interests are excluded, then they have no stake in the system, and so they have reason to fight against the system in a costly way. Do any alternatives come to your mind?"

1. True, but is the compromise beneficial? Normally one wants to compromise either to gain useful input from good decision makers, or else to avoid conflict. The people one would be compromising with here would (assuming wisdom of crowds) be poor decision makers, and conventional democracy seems quite peaceful.
2. Why are you interested in distinguishing humane-but-dumb voters from inhumane-but-smart ones? Neither one is likely to give you good policy. Wrongly-reviled humane minorities deserve power, certainly, but rebalancing votes to give it to them (when you can't reliably distinguish them) is injecting noise into the system and hoping it helps.
3. True, but this has always been a trade-off in governance-how much do you compromise with someone to keep the peace vs. promote your own values at the risk of conflict? Again, conventional democracy seems quite good at maintaining peace; while one might propose a system that seeks to produce better policy, it seems odd to propose a system that offers worse policy in exchange for averting conflict when we don't have much conflict.

"I may have been unduly influenced by my anarchist youth: I'm more worried about the negative effects of concentrating power than about the negative effects of distributing it. Is there any objective way to compare those effects, however, that isn't quite similar to how Ophelimo tries to maximize public satisfaction with their own goals?"

Asking the public how satisfied they are is hopefully a fairly effective way of measuring policy success. Perhaps not in situations where much of the public has irrational values (what would Christian fundamentalists report about gay marriage?), but asking people how happy they are about their own lives should work as well as anything we can do. This strikes me as one of the strongest points of Ophelimo, but it's worth noting that satisfaction surveys are compatible with any form of government, not just this proposal.

Hopefully this doesn't come across as too negative; it's a fascinating idea!

Comment by Aiyen on Secondary Stressors and Tactile Ambition · 2018-07-14T01:08:29.944Z · LW · GW

Enye-word's comment is witty, certainly, but "this is going to take a while to explain" and "systematically underestimated inferential distances" aren't the same thing. Similar, yes, but there's a difference between an explanation that takes a while because you must address X in order to explain Y, which is a prerequisite for talking about Z, while your interlocutor may not understand why you're not just talking about Z, and an explanation that simply takes a while!

For example, if someone asked me about transhumanism, I might have to explain why immortality looks biologically possible, and how reversal tests work so we're not just stuck with the "death gives meaning to life" intuition, and the possibility of mind uploading to avoid a Malthusian catastrophe, and the evidence for minds being a function of information such that uploading looks even remotely plausible... Misunderstandings are all but guaranteed. But if someone asked me about the plot of Game of Thrones in detail, there would be far less chance of misunderstanding, even if it took longer to explain.

Also, motivation and "tactile ambition" aren't the same thing either. Tactile ambition sounds like ambition to do a specific thing, rather than to just "do well" in an ill-defined way. Someone might be very motivated to save money, for instance, and spend a lot of time and energy looking for ways to do so, yet not hit on a specific strategy and thus never develop a related tactile ambition. Or someone might have a specific ambition to save money by eating cheaply, as in the Mr. Money Mustache example, yet find themselves unmotivated and constantly ordering (relatively expensive) pizza.

That said, why "tactile ambition" rather than something like "specific ambition"?

Comment by Aiyen on An optimization process for democratic organizations · 2018-07-05T20:10:51.558Z · LW · GW

Very interesting idea! The first critique that comes to mind is that the increased voting power given to those whose bills are not passed risks giving undue power to stupid or inhumane voters. Normally, if someone has a bad idea, hopefully it will not pass, and that is that. Under Ophelimo, however, adherents of bad ideas would gather more and more votes to spend over time, until their folly was made law, at least for a time. It's also morally questionable-deweighting someone's judgments because they have been voting for and receiving (hopefully) good things may satisfy certain conceptions of fairness (they've gotten their way; now it's someone else's turn), but it makes less sense in governance, where the goal should be to produce beneficial policies, rather than to be "fair" if fairness yields harmful decisions.

The increased weight given to more successful predictors seems wise. While this might make the policy a harder sell (it may seem less democratic), it also ensures that the system can focus on learning from those best able to make good decisions. It's interesting that you're combining this (a meritocratic element) with the vote re-balancing (an egalitarian element). One could imagine this leading to a system of carefully looking to the best forecasters while valuing the desires of all citizens; this might be an excellent outcome.

An obvious concern is people giving dishonest forecasts in an effort to more effectively sway policy. While this is somewhat disincentivized by the penalties to one's forecaster rating if the bill is passed, and the uncertainty about what bills may pass provides some disincentive to do this even with disfavored bills (as you address in the article), I suspect more incentive is needed for honesty. Dishonest forecasting, especially predicting poor results to try to kill a bill, remains tempting, especially for voters with one or two pet issues. If someone risks losing credibility to affect other issues, but successfully shot down a bill on their favorite hot button issue, they very well may consider the result worth it.

Finally, there is the question of what happens when the entire electorate can affect policy directly. In contemporary representative democracy, the only power of the voters is to select a politician, typically from a group that has been fairly heavily screened by various status requirements. While giving direct power to the people might help avoid much of the associated corruption and wasteful signalling, it risks giving increased weight to people without the requisite knowledge and intelligence to make good policy.

Comment by Aiyen on Dissolving the Fermi Paradox, and what reflection it provides · 2018-06-30T21:06:55.490Z · LW · GW

Possibility-if panspermia is correct (the theory that life is much older than Earth and has been seeded on many planets by meteorite impacts), then we might not expect to see other civilizations advanced enough to be visible yet. If evolving from the first life to roughly human levels takes around the current lifetime of the universe, rather than of the Earth, not observing extraterrestrial life shouldn't be surprising! Perhaps the strongest evidence for this is that the number of codons in observed genomes over time (including as far back as the Paleozoic) increases on a fairly steady exponential trend (linear on a log scale), which extrapolates back to shortly after the birth of the universe.

Comment by Aiyen on Aligned AI May Depend on Moral Facts · 2018-06-15T16:18:24.342Z · LW · GW

What do you mean by moral facts? It sounds in context like "ways to determine which values to give precedence to in the event of a conflict." But such orders of precedence wouldn't be facts, they'd be preferences. And if they're preferences, why are you concerned that they might not exist?

Comment by Aiyen on Oops on Commodity Prices · 2018-06-10T16:06:50.339Z · LW · GW

This is exactly the kind of learning and flexibility that we're trying to get better at here. There's not much to say beyond congratulations, but it's still worth saying.

Comment by Aiyen on Unraveling the Failure's Try · 2018-06-09T15:33:01.852Z · LW · GW

The MtG article is called Stuck in the Middle With Bruce by John Rizzo. Not sure how to link, but it's

http://www.starcitygames.com/magic/misc/2005_Stuck_In_The_Middle_With_Bruce.html

The article is worth your time, but if you want a summary-there appears to be a part of many people's minds that wants to lose. And often winning is as much a matter of overcoming this part of you (which the article terms Bruce) as it is overcoming the challenges in front of you.

Comment by Aiyen on When is unaligned AI morally valuable? · 2018-06-06T22:19:46.808Z · LW · GW

Humans are made of atoms that are not paperclips. That's enough reason for extinction right there.

Comment by Aiyen on When is unaligned AI morally valuable? · 2018-06-05T18:12:37.519Z · LW · GW

It's an evolved predisposition, but does that make it a terminal value? We like sweet foods, but a world that had no sweet foods because we'd figured out something else that tasted better doesn't sound half bad! We have an evolved predisposition to sleep, but if we learned how to eliminate the need for sleep, wouldn't that be even better?

Comment by Aiyen on When is unaligned AI morally valuable? · 2018-05-28T02:22:17.852Z · LW · GW

Yes. I wouldn't be surprised if this happened in fact.

Comment by Aiyen on When is unaligned AI morally valuable? · 2018-05-27T06:58:36.862Z · LW · GW

The strongest argument that an upload would share our values is that our terminal values are hardwired by evolution. Self-preservation is common to all non-eusocial creatures, curiosity to all creatures with enough intelligence to benefit from it. Sexual desire is (more or less) universal in sexually reproducing species, desire for social relationships is universal in social species. I find it hard to believe that a million years of evolution would change our values that much when we share many of our core values with the dinosaurs. If Maiasaura could have recognizable relationships 76 million years ago, are those going out the window in the next million? It's not impossible, of course, but shouldn't it seem pretty unlikely?

I think the difference between us is that you are looking at instrumental values, noting correctly that those are likely to change unrecognizably, and fearing that that means that all values will change and be lost. Are you troubled by instrumental values shifts, even if the terminal values stay the same? Alternatively, is there a reason you think that terminal values will be affected?

I think an example here is important to avoid confusion. Consider Western Secular sexual morals vs Islamic ones. At first glance, they couldn't seem more different. One side is having casual sex without a second thought, the other is suppressing desire with full-body burqas and genital mutilation. Different terminal values, right? And if there can be that much of a difference between two cultures in today's world, with the Islamic model seeming so evil, surely values drift will make the future beyond monstrous!

Except that the underlying thoughts behind the two models aren't as different as you might think. A Westerner having casual sex knows that effective birth control and STD countermeasures means that the act is fairly safe. A sixth century Arab doesn't have birth control and knows little of STDs beyond that they preferentially strike the promiscuous-desire is suddenly very dangerous! A woman sleeping around with modern safeguards is just a normal, healthy person doing what they want without harming anyone; one doing so in the ancient world is a potential enemy willing to expose you to cuckoldry and disease. The same basic desires we have to avoid cuckoldry and sickness motivated them to create the horrors of Shari'a.

None of this is intended to excuse Islamic barbarism. Even in the sixth century, such atrocities were a cure worse than the disease. But it's worth noting that their values are a mistake much more than a terminal disagreement. They're thinking of sex as dangerous because it was dangerous for 99% of human history, and "sex is bad" is an easier meme to remember and pass on than "sex is dangerous because of pregnancy risks and disease risks, but if at some point in the future technology should be created that alleviates the risks, then it won't be so dangerous", especially for a culture to which such technology would seem an impossible dream.

That's what I mean by terminal values-the things we want for their own sake, like both health and pleasure, which are all too easy to confuse with the often misguided ways we seek them. As technology improves, we should be able to get better at clearing away the mistakes, which should lead to a better world by our own values, at least once we realize where we were going wrong.

Comment by Aiyen on When is unaligned AI morally valuable? · 2018-05-26T18:25:58.160Z · LW · GW

The values you're expressing here are hard for me to comprehend. Paperclip maximization isn't that bad, because we leave a permanent mark on the universe? The deaths of you, everyone you love, and everyone in the universe aren't that bad (99% of the way from extinction that doesn't leave a permanent mark to flourishing?) because we'll have altered the shape of the cosmos? It's common for people to care about what things will be like after they die for the sake of someone they love. I've never heard of someone caring about what things will be like after everyone dies-do you value making a mark so much even when no one will ever see it?

"...our descendants 1 million years from now will not be called humans and will not share our values. I don't see much of a reason to believe that the values of my biological descendants will be less ridiculous to me, than paperclip maximization."

That depends on what you value. If we survive and have a positive singularity, it's fairly likely that our descendants will have fairly similar high level values to us: happiness, love, lust, truth, beauty, victory. This sort of thing is exactly what one would want to design a Friendly AI to preserve! Now, you're correct that the ways in which these things are pursued will presumably change drastically. Maybe people stop caring about the Mona Lisa and start getting into the beauty of arranging atoms in 11 dimensions. Maybe people find that merging minds is so much more intimate and pleasurable than any form of physical intimacy that sex goes out the window. If things go right, the future ends up very different, and (until we adjust) likely incomprehensible and utterly weird. But there's a difference between pursuing a human value in a way we don't understand yet and pursuing no human value!

To take an example from our history-how incomprehensible must we be to cavemen? No hunting or gathering-we must be starving to death. No camps or campfires-surely we've lost our social interaction. No caves-poor homeless modern man! Some of us no longer tell stories about creator spirits-we've lost our knowledge of our history and our place in the universe. And some of us no longer practice monogamy-surely all love is lost.

Yet all these things that would horrify a caveman are the result of improvement in pursuing the caveman's own values. We've lost our caves, but houses are better shelter. We've lost Dreamtime legends, Dreamtime lies, in favor of knowledge of the actual universe. We'd seem ridiculous, maybe close to paperclip-level ridiculous, until they learned what was actually going on, and why. But that's not a condemnation of the modern world, that's an illustration of how we've done better!

Do you draw no distinction between a hard-to-understand pursuit of love or joy, and a pursuit of paperclips?

Comment by Aiyen on When is unaligned AI morally valuable? · 2018-05-25T18:55:10.419Z · LW · GW

Well then, isn't the answer that we care about de re alignment, and whether or not an AI is de dicto aligned is relevant only as far as it predicts de re alignment? We might expect that the two would converge in the limit of superintelligence, and perhaps that aiming for de dicto alignment might be the easier immediate target, but the moral worth would be a function of what the AI actually did.

That does clear up the seeming confusion behind the OP, though, so thanks!

Comment by Aiyen on When is unaligned AI morally valuable? · 2018-05-25T18:15:15.690Z · LW · GW

I may be missing the point here, so please don't be offended. Isn't this confusing "does the AI have (roughly) human values?" and "was the AI deliberately, rigorously designed to do so?" Obviously, our perception of the moral worth of an agent doesn't require them to have values identical to ours. We can value another's pleasure, even if we would not derive pleasure from the things they're experiencing. We can value another's love, even if we do not feel as affectionate towards their loved ones. But do we value an agent whose goal is to suffer as much as possible? Do we value an agent motivated purely by hatred?

Our values are our values; they determine our perception of moral worth. And while many people might be happy about a strange and wonderful AI civilization, even if it was very different from what we might choose to build, very few would want a boring one. That's a values question, or a meta values question; there's no way to posit a worthwhile AI civilization without assuming that on some level our values align.

The example given for a "good successor albeit unaligned" AI is a simulated civilization that eventually learns about the real world and figures out how to make AI work here. Certainly this isn't an AI with deliberate, rigorous Friendliness programming, but if you'd prefer handing the universe off to it to taking a 10% extinction risk, isn't that because you're hoping it will be more or less Friendly anyway? And at that point, the answer to when is unaligned AI morally valuable is when it is, in fact, aligned, regardless of whether that alignment was due to a simulated civilization having somewhat similar values to our own, or any other reason?

Comment by Aiyen on On "Overthinking" Concepts · 2017-06-04T04:43:35.097Z · LW · GW

This. It took me years to understand this, but it's true, and vital to proficiency in most areas of endeavor.

The trouble with "overthinking" is that it's all too easy to try to oversimplify, or to frame a problem in terms that make it unnecessarily difficult. Martial arts are a good example. My experience with aikido is minimal, but at least in jiu-jitsu, knowing what a move feels like provides the data you need to actually use it, and in a form that can be applied in real time. Knowing verbal principles behind the move, on the other hand, almost invariably leaves out important pieces, and even when your verbal understanding is more or less complete, it's too slow to actually use against all but the most cooperative opponents.

Of course, that's with a physical discipline. Going back to the OP's question, how can overthinking be harmful when trying to understand a purely abstract concept, or how can a concept be understood with less thought rather than more? Well, as Bound_up says, it's impossible to understand a concept without thinking. But the kind of thinking is essential.

For example, I struggled with learning calculus for a while. The teachers would explain various tools that could be used to take a derivative or integral, but it wasn't clear which tools to use when. I responded to this by trying to create a rigorous framework that would reliably let me know when to use which formulas. However, there simply weren't enough consistent, reliable patterns relating a certain type of function to a given formula for differentiating it. Everyone said to "stop overthinking" calculus, but I figured there had to be rigorous algorithms governing the use of u-substitution vs. integration by parts, and that the people telling me to just relax were sloppy thinkers who didn't generally understand concepts beyond rote learning.

What ended up actually working, however, was accepting a more ad hoc approach. Creating an algorithm that could tell me what tools to use, first time, every time, was beyond my capabilities. But noticing that a function could be manipulated in a certain way, or expressed in a more tractable form, without expecting that the exact same process would work the next time, wasn't actually very difficult at all. It was a bit frustrating to accept that calculus would consistently require creativity, but that's what actually worked, when my overthinking turned out to be oversimplification.
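For instance, two standard textbook integrals (my own illustration, not from the original coursework) show the kind of noticing involved: spotting that x dx is the differential of x² points to substitution in the first, while a polynomial times e^x points to integration by parts in the second.

```latex
% Two integrals that look similar but call for different tools.
% Noticing that x\,dx is (up to a constant) the differential of x^2 suggests substitution:
\int x e^{x^2}\,dx \overset{u = x^2}{=} \tfrac{1}{2}\int e^{u}\,du = \tfrac{1}{2}e^{x^2} + C
% Here no substitution simplifies the integrand, but a polynomial times e^x
% suggests integration by parts with u = x, dv = e^x\,dx:
\int x e^{x}\,dx = x e^{x} - \int e^{x}\,dx = x e^{x} - e^{x} + C
```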