Questions on Theism 2014-10-08T21:02:43.338Z


Comment by Aiyen on What would it look like if it looked like AGI was very near? · 2021-07-19T21:09:02.488Z · LW · GW

Thanks. The assertiveness was deliberate; I wanted to take the perspective of someone in a post-AGI world saying, “Of course it worked out this way!” In our time, we can’t be as certain; the narrator is suffering from a degree of hindsight bias.

There were a couple of fake breakthroughs in there (though maybe I glossed over them more than I ought?). Specifically the bootstrapping from a given model to a more accurate one by looking for implications and checking for alternatives (this actually is very close to the self-play that helped build AlphaGo as stated, but making it work with a full model of the real world would require substantial further work), and the solution of AI alignment via machine learning with multiple agents seeking to more accurately model each other’s values (which I suspect might actually work, but which is purely speculative).

Comment by Aiyen on What would it look like if it looked like AGI was very near? · 2021-07-16T08:30:15.423Z · LW · GW

Why would QC be irrelevant?  Quantum systems don't perform well on all tasks, but they generally work well for parallel tasks, right? And neural nets are largely parallel. QC isn't to the point of being able to help yet, but especially if conventional computing becomes a serious bottleneck, it might become important over the next decade. 

Comment by Aiyen on What would it look like if it looked like AGI was very near? · 2021-07-15T18:23:57.598Z · LW · GW

Wouldn't that imply that the trajectory of AI is heavily dependent on how long Moore's Law lasts, and how well quantum computers do?  

Is your model that the jump to GPT-3 scale consumed the hardware overhang, and that we cannot expect meaningful progress on the same time scale in the near future?  

Comment by Aiyen on What would it look like if it looked like AGI was very near? · 2021-07-15T16:01:14.871Z · LW · GW

Trillions of dollars for +6 OOMs is not something people are likely to be willing to spend by 2023. On the other hand, part of the reason that neural net sizes have consistently increased by one to two OOMs per year lately is due to advances in running them cheaply. Programs like Microsoft’s ZeRO system aim explicitly at creating nets on the hundred-trillion-parameter scale at an acceptable price. Certainly there’s uncertainty around how well it will work, and whether it will be extended to a quadrillion parameters even if it does, but parts of the industry appear to believe it’s practical.

Comment by Aiyen on What would it look like if it looked like AGI was very near? · 2021-07-14T15:46:53.174Z · LW · GW

That’s why I had it that general intelligence is possible at the cat level. That said, it doesn’t seem too implausible that there’s a general intelligence threshold around human-level capability (not merely brain size), which would make general intelligence substantially easier to achieve with human-scale brains (and would explain why evolution achieved it with us, rather than sooner or later).

This scenario is based on the Bitter Lesson model, in which size is far more important than the algorithm once a certain degree of generality in the algorithm is attained. If that is true in general, while evolution would be unlikely to hit on a maximally efficient algorithm, it might get within an order of magnitude of it.

Comment by Aiyen on What would it look like if it looked like AGI was very near? · 2021-07-14T15:36:46.168Z · LW · GW

Thanks for the feedback! The timeframe is based on extrapolating neural net sizes since 2018; given that the past two years have each shown two-order-of-magnitude increases, in some ways it’s actually conservative. Of course, it appears we’re in a hardware overhang for neural nets, and once that overhang is exhausted, “gotta build the supercomputers” could slow things down massively. Do you have any data on how close we are to fully utilizing existing hardware? That could tell us a lot about how long we might expect current trends to continue. Another potential argument for slowdown is that there’s stated interest in the industry in building nets up to 100 trillion parameters, but I don’t know for certain how much interest there is in scaling beyond that (though if a 100 trillion parameter net performs well enough, that would likely generate interest by itself).

DeepMind strikes me as reasonably likely to take alignment concerns into account (whether or not they succeed in addressing them is another question); a much scarier scenario IMO would be for the first AGIs to be developed by governments. Convincing the US Congress or Chinese Communist Party to slow progress due to misalignment would be borderline impossible; keeping the best AI in the hands of people willing to align it may be as important as solving alignment in the first place. That could be a remarkably difficult task, as not only do governments have vast resources and strong incentives to pursue AI research, but trying to avoid an unfriendly AI due to congressional fiat would almost certainly touch on politics, with all the associated risks of mind-killing.

I wrote that scenario to be realistic, but “DARPA wanted to wait for alignment, Congress told them to press on, everyone died six months later” is also disturbingly plausible.

I suspect the weakest part of the scenario is the extrapolation from “predictive nets can generate scenarios that optimize a given parameter” to “such a net can be strategic”. While finding the input conditions required for a desired output is a large part of strategy, so too is intelligently determining where to look in the search space, how to chain actions together, how to operate in a world that responds to attempts to alter it (in a way it doesn’t respond to mere predictions), and so on. If something substantively similar to this scenario does not happen, most of my probability mass for why not concentrates here.

Comment by Aiyen on What would it look like if it looked like AGI was very near? · 2021-07-12T18:53:18.944Z · LW · GW

Note:  everything stated about 2021 and earlier is actually the case in the real world; everything stated about the post-2021 world is what I'd expect to see contingent on this scenario being true, and something I would give decently high probabilities of in general.  I believe there is a fairly high chance of AGI in the next 10 years. 

12 July 2031, Retrospective from a post-AGI world:

By 2021, it was blatantly obvious that AGI was imminent.  The elements of general intelligence were already known:  access to information about the world, the process of predicting part of the data from the rest and then updating one's model to bring it closer to the truth (note that this is precisely the scientific method, though the fact that it operates in AGI by human-illegible backpropagation rather than legible hypothesis generation and discarding seems to have obscured this fact from many researchers at the time), and the fact that predictive models can be converted into generative models by reversing them:  running a prediction model forwards predicts levels of X in a given scenario, but running it backwards predicts which scenarios have a given level of X.  A sufficiently powerful system with relevant data, updating to improve prediction accuracy and the ability to be reversed to generate optimization of any parameter in the system is a system that can learn and operate strategically in any domain.  
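The forward/backward duality described above can be sketched in a few lines: any differentiable predictor can be "reversed" by running gradient ascent on its input, searching for a scenario that produces a desired output. The predictor below is a toy quadratic standing in for a trained net; every name and number in it is illustrative, not taken from any real system.

```python
import numpy as np

# Hypothetical differentiable "predictor": maps a scenario vector x to a
# scalar prediction of some quantity X. A toy quadratic with a known peak
# at x == target stands in for a trained neural net.
target = np.array([1.0, -2.0, 0.5])

def predict(x):
    return -np.sum((x - target) ** 2)  # highest when x == target

def grad(x):
    return -2.0 * (x - target)  # analytic gradient of predict

# Running the model "backwards": instead of asking "what X does scenario x
# produce?", ascend the gradient on x to find a scenario with high X.
x = np.zeros(3)
for _ in range(200):
    x += 0.05 * grad(x)

print(np.round(x, 3))  # converges toward target
```

In a real net the gradient would come from automatic differentiation rather than a hand-written formula, but the search itself is the same idea as the reversed porn filter mentioned later in this comment.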

Data wasn't exactly scarce in 2021.  The internet was packed with it, most of it publicly available, and while the use of internal world-simulations to bootstrap an AI's understanding of reality didn't become common in would-be general programs until 2023, it was already used in more narrow neural nets like AlphaGo by 2016; certainly researchers at the time were already familiar with the concept.  

Prediction improvement by backpropagation was also well known by this point, as was the fact that this is the backbone of human intelligence.  While there was a brief time when it seemed like this might be fundamentally different from the operation of the brain (and thus less likely to scale to general intelligence) given that human neurons only feed forwards, it was already known by 2021 that the predictive processing algorithm used by the human neocortex is mathematically isomorphic to backpropagation, albeit implemented slightly differently due to the inability of neurons to feed backwards.  

The interchangeability of prediction and optimization or generation was known as well; indeed, it wasn't too uncommon to use predictive neural nets to produce images (one not-uncommon joke application was using porn filters to produce the most pornographic images possible according to the net), and the rise of OpenAI's complementary AIs DALL-E (image from text) and CLIP (text from image) showed the interchangeability in a striking way (though careful observers might note that CLIP wasn't reversed DALL-E; the twin nets merely demonstrated that the calculation can go either way; the reversed porn filter was a more rigorous demonstration of optimization from prediction).

Given that all the pieces for AGI thus existed in 2021, why didn't more people realize what was coming?  For that matter, given that all the pieces existed already, why did true AGI take until 2023, and AGI with a real impact on the world until 2025?  The answer to the second question is scale.  All animal brains operate on virtually identical principles (though there are architectural differences, e.g. striatum vs pallium), yet the difference between a human and a chimp, let alone a human and a mouse, is massive.  Until the rise of neural nets, it was commonly assumed that AGI would be a matter primarily of more clever software, rather than simply scaling up relatively simple algorithms.  The fact that greater performance is primarily the result of simple size, rather than brilliance on the part of the programmers even became known as the Bitter Lesson, as it wasn't exactly easy on designers' egos.  With the background assumption of progress as a function of algorithms rather than scale, it was easy to miss that AlphaGo already had nearly everything a modern superintelligence needs; it was just small.  

From 2018 through 2021, neural nets were built at drastically increasing scales.  GPT (2018) had 117 million parameters, GPT-2 (2019) had 1.5 billion, GPT-3 (2020) had 175 billion, ZeRO-Infinity (2021) had 32 trillion.  By comparison to animal brains (a neural net's parameter is closely analogous to a brain's synapse), that is similar to an ant (very wide error bars on this one; on the other comparisons I was able to find synapse numbers, but for an ant I could only find the number of neurons), bee, mouse and cat respectively.  Extrapolating this trend, it should not have been hard to see human-scale nets coming (100 trillion parameters, reached by 2022), nor AIs orders of magnitude more powerful than this.
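The trend in the paragraph above can be checked with a few lines of arithmetic, using only the parameter counts just listed:

```python
import math

# Parameter counts as given in the text above.
params = {2018: 117e6, 2019: 1.5e9, 2020: 175e9, 2021: 32e12}

# Average growth rate in orders of magnitude per year over the window.
ooms_per_year = (math.log10(params[2021]) - math.log10(params[2018])) / (2021 - 2018)

# Naive projection one year forward on the same trend.
projected_2022 = params[2021] * 10 ** ooms_per_year
print(f"{ooms_per_year:.2f} OOMs/year; naive 2022 projection ~{projected_2022:.1e} parameters")
```

On these numbers the fit comes out to roughly 1.8 orders of magnitude per year, which is why the 100-trillion-parameter (human-scale) milestone looks reachable by 2022 on a straight-line extrapolation.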

Moreover, neural nets are much more powerful in many ways than their biological counterparts.  Part of this is speed (computers can operate around a million times faster than biological neurons), but a more counterintuitive part of this is encephalization.  Specifically, the requirements of operating an animal's body are sufficiently intense that available intelligence for other things scales not with brain size, but with the ratio of brain size to body size, called the encephalization quotient (this is why elephants are not smarter than humans, despite having substantially larger brains).  An artificial neural net, of course, is not trying to control a body, and can use all of its power on the question at hand.  This allowed even a relatively small net like GPT-3 to do college-level work in law and history by 2021 (subjects that require actual understanding remained out of reach of neural nets until 2022, though the 2021 net Hua Zhibing, based on the Chinese Wu Dao 2.0 system, came very close).  Given that a mouse-sized neural net can compete with college students, it should have been clear that human-sized nets would possess most elements of general intelligence, and that the nets that soon followed at ten and one hundred times the scale of the human brain would be capable of it.  

Given the (admittedly retrospective) obviousness of all this, why wasn't it widely recognized at the time?  As previously stated, much of the lag in recognition was driven by the belief that progress would be driven more by advances in algorithms than by scale.  Given this belief, AGI would appear extraordinarily difficult, as one would try to imagine algorithms capable of general intelligence at small scales (DeepMind's AlphaOmega proved in 2030 that this is mathematically impossible; you can't have true general intelligence at much below cat-scale, and it's very difficult to have it below human-scale!).  Even among those who understood the power of scaling, the fact that it's almost impossible to have AI do anything in the real world beyond very narrow applications like self-driving cars without reaching the general intelligence threshold made it appear plausible that simply building larger GPT-style systems wouldn't be enough without another breakthrough.  However, in 2021 DeepMind published a landmark paper entitled "Reward is Enough", recognizing that reward-based reinforcement learning was in fact capable of scaling to general intelligence.  This paper was the closest thing humanity ever got to a fire alarm for general AI:  a fairly rigorous warning that existing models could scale up without limit, and that AGI was now only a matter of time, rather than requiring any further real breakthroughs.  

After that paper, 2022 brought human-scale neural nets (not quite fully generally intelligent, due to lacking human instincts and only being trained on internet data, which leaves some gaps that require substantially superhuman capacity to bridge through inference alone), and 2023 brought the first real AGI, with a quadrillion parameters, powerful enough to develop an accurate map of the world purely through a mix of internet data and internal modeling to bootstrap the quality of its predictions.  After that, AI was considered to have stalled, as alignment concerns prohibited the use of such nets to optimize the real world, until 2025 when a program that trained agents on modeling each other's full terminal values from limited amounts of data allowed the safe real-world deployment of large-scale neural nets.  Mankind is eternally grateful to those who raised the alarm about the value alignment problem, without which DeepMind would not have conducted that crucial hiatus, and without which our entire light cone would now be paperclips (instead of just the Horsehead Nebula, which Elon Musk converted to paperclips as a joke).

Comment by Aiyen on Your best future self · 2021-05-13T01:01:40.995Z · LW · GW

I have mixed feelings about this post.  On the one hand, it's a new, interesting idea.  You say it's helpful to you, and it wouldn't be entirely surprising if it's helpful to a great many readers.  This could be a very good thing.  

On the other hand, there's a tendency among rationalists these days to turn to religion, or to the closest thing to religion we can make ourselves believe in.  For a while there were a great many posts about meditation and enlightenment, for instance, and if we look at common egregores in the community, we find several.  Azathoth, God of Evolution.  Moloch, God of Prisoners' Dilemmas.  Cthulhu, God of Memetics and Monotonically Increasing Progressivism.  Bruce, God of Self-Sabotage.  This can be entertaining, and perhaps motivating.  Yet I cannot shake the feeling that we're taking a serious risk in trying to create something too closely akin to religion.  As the saying goes, what do you think you know, and how do you think you know it?  We're quite certain that e.g. Islam is founded on lies, with more lies built up to try to protect the initial deceptions.  Do you really want to mimic such a thing?  A tradition created without a connection to actual reality is unlikely to have any value.  

I won't say that you shouldn't pray to your future self, if you find doing so beneficial, and you yourself say this isn't your usual subject matter.  But be careful.  It's far too easy to create religious-style errors even if you do not consciously believe in your rituals.  

Comment by Aiyen on Core Pathways of Aging · 2021-04-10T20:21:53.165Z · LW · GW

This is the source I found. It’s fairly old, so if you’ve found something that supersedes it I’d be interested.

Comment by Aiyen on Core Pathways of Aging · 2021-04-08T20:39:39.902Z · LW · GW

An initial search doesn’t confirm whether or not mycoplasma age. Bacteria do age though; even seemingly-symmetrical divisions yield one “parent” bacterium that ages and dies.

If mycoplasma genuinely don’t, that would be fascinating and potentially yield valuable clues on the aging mechanism.

Comment by Aiyen on Core Pathways of Aging · 2021-04-04T21:08:52.819Z · LW · GW

Minimal cell experiments (making cells with as small a genome as possible) have already been done successfully. This presumably removes transposons, and I have not heard that such cells had abnormally long lifespans.

One possibility is that there are at least two aging pathways: the effect of transposons, which evolution wasn’t able to eliminate, and an evolved aging pathway intended to eliminate older organisms so they don’t compete with their progeny (doing so while suffering ill health from transposon build-up would be less fit than dying and delegating reproduction to one’s less transposon-heavy offspring).

There is significant evidence that most organisms have evolved to eventually deliberately die, independent of problems like transposons that aren’t intentional on the level of the organism. Yamanaka factors can reverse some symptoms of aging, and appear to do so by activating a rejuvenation pathway. This makes perfect sense if the body deliberately reserves that pathway for gamete production while letting itself deteriorate; it is extremely confusing if aging is purely damage, however. Yamanaka factors don’t provide new information (other than the order to rejuvenate) or resources; a body that is doing its best to avoid aging wouldn’t seem to benefit from them, and could presumably evolve to produce them if evolution found this desirable. Other examples include the beneficial effects of removing old blood plasma (this appears to trick the body into thinking it is younger, which should work on a deliberately aging organism but not on one that aged purely through damage), the fact that rat brain cells deteriorate as they perceive the brain gradually stiffening with age, but rejuvenate if their ability to detect stiffness is removed, and the fact that some species of octopus commit suicide after reproducing, and refrain from doing so if a particular gland is removed.

If both transposons and a deliberate aging pathway contribute to aging, it would be very interesting to see what happens in an organism with both transposon inactivation and Yamanaka factor treatment. Neither appears to create massive life extension on its own, but together they might do so, or at least point out worthwhile directions for further inquiry.

Comment by Aiyen on Reasons against anti-aging · 2021-01-25T08:30:24.131Z · LW · GW

"Or maybe anti-aging is inherently interesting to some people who want to live to see flying cars..." 

Maybe anti-aging is inherently interesting?  Do you not expect some people to want to survive?  The will to live is inherent in humanity for very obvious evolutionary reasons.  Moreover, anyone whose quality of life is positive has reason to want to live so long as that is the case.  There are religious people who want to die so as to attain an afterlife, but unless you are hoping for Heaven/Nirvana/72 Virgins/whatever, or your current quality of life is negative, anti-aging should be inherently interesting to you.

"and no rational critique would dissuade them."

If something is inherently interesting, people will want it unless there is a cost that exceeds the benefit.  If there is such a cost, such a rational critique will in fact dissuade rational people.  This seems like a cheap attempt to make transhumanists seem unwilling to listen to reason without actually making a case to that effect.

"In short, the best approach would be to rebuild your tree from scratch. This is why having kids is more efficient than just having more time on earth."

More efficient for what purpose?  Even if we assume you are correct that experience is a negative to career success (not what is typically observed, to put it mildly), what are you hoping to attain with your career that is better served by dying and hoping your children will carry on the work?  It can't be making money for you: you do not benefit from money when you're dead!  It can't be making money for your children; you're as dismissive of their survival as of your own.  It sounds like you want money for your genetic lineage, but why?  Normally people value their wellbeing and that of their family; all of you dying does not serve this.  You can't even claim to be following some underlying evolutionary principle, as the survival of you and your children will preserve your genes better than letting them be diluted down over generations.  

"Even if birth rates went down to replacement rate tomorrow, improvements in longevity would result in more people being on the planet at any given time."

Correct.  On the other hand, while overpopulation is a potential concern with longevity, it is worth taking five minutes to consider the problem rather than simply electing to die.  Potential solutions include interplanetary colonization, mind uploading, better birth control or simply handing off the problem to a friendly AI.  All of these are technically challenging, but so is life extension.  It does not make sense to assume that a world capable of it must be forever incapable of ever finding a solution to overpopulation.  To assert that this question must necessarily make life extension harmful is to assert that we know that no such solution can be found, quite the extraordinary claim.  The milder claim that this is a concern worth addressing is by contrast valid, but that's not a reason to abandon life extension, merely one to develop population solutions in tandem, if we can.

"Arguably, one of reasons young people are frustrated with modern politics is that boomers are still very much in the driver's seat. "

Easy enough to mandate political retirement at a particular age.  Disenfranchisement is better than death.  To quote Eliezer's short story Three Worlds Collide, "Only youth can Administrate.  That is the pact of immortality." 

"...we'll need more senior care. This may become a costly burden on future generations. "

Potentially.  Or a population that spends more time healthy and able to work and less time slowly decaying in retirement might have a lighter burden on future generations.  Or perhaps a growing, potentially-automated economy will obviate the question entirely.  This is much like the overpopulation question in that it conflates desirability with prudence.  Desirability is whether or not we consider a thing beneficial as such; whether or not we'd want it in the absence of countervailing costs.  Prudence is whether or not we consider a thing worthwhile on net even counting the costs. You point out, correctly, that overpopulation and a strained senior care system are potential risks that may need to be addressed if we want to make life extension prudent.  That does not mean that it is not desirable, nor that we should immediately view the costs it could impose as impossible to mitigate. 

"We may also have to consider assisted suicide for people who would be dead if it weren't for technology. Should we keep them alive because we can?"  

Do these people want to die?  Are we out of resources to sustain them with?  If the answers are no and no, why should we kill them?  If one or more of the answers is yes, that's a concern, but one better answered by seeking to improve their quality of life or acquire more resources, at least if we value human wellbeing.  And if we don't, why are we bothering to stay alive ourselves, or avoid killing willy-nilly?  

Ultimately, it is human nature to value survival.  We cannot always survive, we may sacrifice ourselves for others we care for if we cannot both survive, and some people even choose death out of misery or religious faith.  Yet where it is possible, it is better to make life worth living than to give up and die.  Where it is possible, it is better to save everyone rather than sacrificing our lives.  Where it is possible, it is better to oppose aging like we would any other injury, and while I cannot claim that life is better than Heaven, you did not bring up afterlives, so it seems unlikely that they are factoring into your reasoning.  Unless you assert that the natural order of things was divinely, benevolently ordained, there is no reason to think that death by aging is somehow better than any other threat to life, be it disease, injury, war, poverty or the like.  

Would you use those same reasons to argue for Covid?

Comment by Aiyen on The True Face of the Enemy · 2021-01-12T18:30:38.415Z · LW · GW

This is also true for many people not in that age range. “Many people in a group will try to make life harder for those around them” isn’t much of an argument for incarceration. If it were, who would you permit to be free?

Comment by Aiyen on GPT-3 + GAN · 2020-10-19T23:00:49.135Z · LW · GW

That might work.  Maybe have the adversarial network try to distinguish GPT-3 text from human text?  That said, GPT-3 is already trying to predict humanlike text continuations, so there's a decent chance that having a separate GAN layer wouldn't help.  It's probably worth doing the experiment though; traditional GANs work by improving the discriminator as well as the desired categorizer, so there's a chance it could work here too. 
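The discriminator half of the suggestion can be sketched with a toy stand-in: a logistic classifier trained to separate "human" from "model" samples. Real text features are replaced here by 2-D Gaussian clusters; the data, features, and labels are all illustrative, not a claim about GPT-3 itself.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for feature vectors of human-written and model-written text.
human = rng.normal(loc=[+1.0, +1.0], scale=0.5, size=(200, 2))
model = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(200, 2))
X = np.vstack([human, model])
y = np.array([1] * 200 + [0] * 200)  # 1 = human, 0 = model

# Plain gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

acc = (((X @ w + b) > 0).astype(int) == y).mean()
print(f"discriminator accuracy: {acc:.2f}")
```

In a full GAN setup the generator would then be updated against the discriminator's signal; whether that signal adds anything for a system already trained to imitate human text is exactly the open question in the comment above.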

Comment by Aiyen on Covid 10/1: The Long Haul · 2020-10-02T18:35:50.174Z · LW · GW

You say vulnerable, low-income people "must put themselves at risk to stay alive", then propose not letting them do so?  A lockdown, by itself, does not give the poor any money.  If you wish to prevent them from working risky jobs to support themselves, you must either offer them some other form of support or assert that they have other, better options ("homelessness, malnourishment, etc."?), but are making the wrong decision by working and thus ought to be prevented from doing so.  Being denied options is only protection if one is making the wrong decision.

Do you think these people ought to be homeless and malnourished?  If so, that's a hard case to make morally or practically.  If not, you should offer an alternative, rather than simply banning what you yourself state is their only path to avoiding this.

Comment by Aiyen on Why haven't we celebrated any major achievements lately? · 2020-09-11T18:43:45.279Z · LW · GW

"We hold all Earth to plunder, all time and space as well. Too wonder-stale to wonder at each new miracle." - Rudyard Kipling

Comment by Aiyen on Stop saying wrong things · 2020-05-03T21:49:40.817Z · LW · GW

This is a genuine concern, and this may be particularly high-variance advice. However, a focus on avoiding mistakes over trying new "superstrategies" might also help some people with akrasia. It's easier to do what you know than seek some special trick. Personally, at least, I find akrasia is worst when it comes from not knowing what to do next. And while taking fewer actions in general is usually a bad idea, trying to avoid mistakes could also be used for "the next time I'm about to sit around and do nothing, instead I'll clean/program/reach out to a friend." This doesn't sound like it has to be about necessarily doing less.

Comment by Aiyen on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-05T18:28:52.194Z · LW · GW

Consider a charity providing malaria nets. Somebody has to make the nets. Somebody has to distribute them. These people need to eat, and would prefer to have shelter, goods, services and the like. That means that you need to convince people to give food, shelter, etc. to the net makers. If you give them money, they can simply buy their food.

This of course raises the question of why you can't simply ask other people to support the charity directly. But consider someone providing a service to the charity workers: even if they care passionately about fighting malaria, they do not want to run out of resources themselves! If you make food, and give it all to the netweavers, how can you get your own needs met? What happens when you need medical care, and the doctor in turn would love to treat a supporter of the anti-malaria fight, but wants to make sure he can get his car fixed?

In a nutshell, we all want to make sure there will be resources available to us when we need them. Money allows us to keep track of those resources: if everyone treats money as valuable, we can be confident of having access to as many resources as our savings will buy at market rates. If we decide instead to have everyone be "generous" and give in the hopes that others will give to them in turn, it becomes impossible to keep track of who needs to do how much work or who can take how many resources without creating a shortage. You can't even solve that problem by having everyone decide to work hard and consume little; doing too much can be as harmful as doing too little, as resources get foregone. And of course, that's with everyone cooperating. If someone decides to defect in such a system, they can take and take while providing nothing in return. Thus, it is much easier to manage resources with money, despite it being "not real", even in the case of charity. Giving money to a charity is a commitment to consume less (or to give up the right to consume as much as you possibly could, whether or not your actual current spending changes), freeing up resources that are then directed to the charity.

Comment by Aiyen on Why Are So Many Rationalists Polyamorous? · 2019-10-23T23:00:05.057Z · LW · GW

By that definition nothing is zero sum. "Zero sum" doesn't mean that literally all possible outcomes have equal total utility; it means that one person's gain is invariably another person's loss.

Comment by Aiyen on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T20:36:17.634Z · LW · GW

"But Petrov was not a launch authority. The decision to launch or not was not up to him, it was up to the Politburo of the Soviet Union."

This is obviously true in terms of Soviet policy, but it sounds like you're making a moral claim. That the Politburo was morally entitled to decide whether or not to launch, and that no one else had that right. This is extremely questionable, to put it mildly.

"We have to remember that when he chose to lie about the detection, by calling it a computer glitch when he didn't know for certain that it was one, Petrov was defecting against the system."

Indeed. But we do not cooperate in prisoners' dilemmas "just because"; we cooperate because doing so leads to higher utility. Petrov's defection led to a better outcome for every single person on the planet; assuming this was wrong because it was defection is an example of the non-central fallacy.

"Is that the sort of behavior we really want to lionize?"

If you will not honor literally saving the world, what will you honor? If we wanted to make a case against Petrov, we could say that by demonstrably not retaliating, he weakened deterrence (but deterrence would have helped no one if he had launched), or that the Soviets might have preferred destroying the world to dying alone, and thus might be upset with a missileer unwilling to strike. But it's hard to condemn him for a decision that predictably saved the West, and had a significant chance (which did in fact occur) of saving the Soviet Union.

Comment by Aiyen on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-22T04:19:06.746Z · LW · GW

This seems wrong.

The second law of thermodynamics isn't magic; it's simply the fact that when some categories contain many possible states and others contain only a few, jumping randomly from state to state will tend to land you in the larger categories. Hence melting: arrange atoms randomly and you're far more likely to end up in a jumble than in one of the few arrangements that permit solidity. Hence heat equalizing: the kinetic energy of thermal motion can spread out in many ways but remain concentrated in only a few, so it tends to spread out. You can call that the universe hating order if you like, but it's a well-understood process that operates purely through small targets being harder to hit, not through a force actively pushing us towards chaos, making particles zig when they otherwise would have zagged so as to create more disorder.
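The "small targets are harder to hit" point can be checked with a toy simulation (a sketch with made-up categories, not a physical model): take a state to be N coin flips, call the two all-alike states "ordered", and sample states uniformly at random.

```python
import random

# Toy model: a microstate is N coin flips. "Ordered" means all flips agree
# (2 microstates); "disordered" is everything else (2**N - 2 microstates).
# Uniform random sampling lands in the big category almost every time;
# no force pushes toward disorder, the small target is just hard to hit.
N = 10
TRIALS = 100_000
rng = random.Random(0)

disordered = sum(
    1 for _ in range(TRIALS)
    if len({rng.randint(0, 1) for _ in range(N)}) > 1
)

print(disordered / TRIALS)  # close to (2**N - 2) / 2**N ≈ 0.998
```

Nothing in the loop "prefers" disorder; the fraction falls out of counting alone, which is the whole content of the argument above.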

This being the case, claiming that life exists for the purpose of wasting energy seems absurd. Evolution appears to explain the existence of life, and it is not an entropic process. Positing anything else behind it requires evidence: something about life that evolution doesn't explain and entropy-driven life would. Also, remember that entropy doesn't think ahead. It is purely the difficulty of hitting small targets; a bullet isn't going to 'decide' to swerve into a bull's eye as part of a plan to miss more later! It would be very strange if entropy could somehow mold us into fearing both death and immortality as part of a plan to gather as much energy as we can, then waste it through our deaths.

This seems like academics seeking to be edgy much more than a coherent explanation of biology.

As for transhumanism being overly interested in good or evil, what would you suggest we do instead? It's rather self-defeating to suggest that losing interest in goodness would be a good idea.

Comment by Aiyen on The Hard Work of Translation (Buddhism) · 2019-04-08T21:18:50.858Z · LW · GW

So enlightenment is defragmentation, just like we do with hard drives?

Comment by Aiyen on Rest Days vs Recovery Days · 2019-03-20T23:17:31.242Z · LW · GW

That makes a fair bit of sense. And what are your thoughts on work days? I get my work for my job done, but advice on improving productivity on chores and future planning would be appreciated. Also, good point on pica!

Comment by Aiyen on Rest Days vs Recovery Days · 2019-03-20T18:16:50.414Z · LW · GW

Very interesting dichotomy! Definitely seems worth trying. I'm confused about the reading/screen time/video games distinction though. Why would reading seem appealing but being in front of a screen not? Watching TV is essentially identical to reading, right? You're taking in a preset story either way. Admittedly you can read faster than TV characters can talk, so maybe that makes it more rewarding?

Also, while playing more video games while recovering and fewer while resting makes sense (they're an easy activity while low on energy, and thus will take up much of a recovery day, but less of a rest day), "just following my gut" can still lead to plenty of gaming. Does this mean that I should still play some on a rest day, just less? That I almost never have enough energy to rest instead of recover? That I'm too into gaming and this is skewing my gut such that a good rest day rule would be "follow your gut, except playing fewer/no games today"?

Comment by Aiyen on [NeedAdvice]How to stay Focused on a long-term goal? · 2019-03-08T22:34:59.090Z · LW · GW

First off, you probably want to figure out if your nihilism is due to philosophy or depression. Would you normally enjoy and value things, but the idea of finite life gets in the way? Or would you have difficulty seeing a point to things even if you were suddenly granted immortality and the heat death of the universe were averted?

Either way, it's difficult to give a definitive solution, as different things work for different people. That said, if the problem seems to be philosophy, it might be worth noting that the satisfaction found in a good moment isn't a function of anything that comes after it. If you enjoy something, or you help someone you love, or you do anything else that seems valuable to you, the fact of that moment is unchangeable. If the stars die in heaven, that cannot change the fact that you enjoyed something. Another possible approach is simply not thinking about it. I know that sounds horribly dismissive, but it's not meant to be. In my own life there have been philosophical (and in my case religious) issues that I never managed to think my way out of... but when I stopped focusing on the problem, it went away. I managed this only after getting a job that let my brain say, "okay, worry about God later, we need to get this task done first!" If it seems feasible, finding an activity that demands attention might serve the same purpose (provided your brain will let you shift your attention; if not, this might just add stress).

If the problem seems to be depression, adrafinil and/or modafinil are extremely helpful for some people. Conventional treatments exist too of course (therapy and/or anti-depressants); I don't know anyone who has benefited from therapy (at least not that they've told me), but one of my friends had night and day improvement with an anti-depressant (sadly I don't remember which one; if you like I can check with her). Another aspect of overcoming depression is having friends in the moment and a plan for the future, not a plan you feel you should follow, but one you actively want to. I don't know your circumstances, but insofar as you can prioritize socialization and work for the future, that might help.

As for the actual question of self-improvement, people vary wildly. An old friend of mine found huge improvements in her life due to scheduling; I do markedly better without it. The best advice I can offer (and this very well might not help; drop it if it seems useless or harmful) is three things:

Don't do what you think you should do, do what you actually want to (if there isn't anything that you want, maybe don't force trying to find something too quickly either). People find motivation in pursuing goals they actually find worthwhile, but following a goal that sounds good but doesn't actually excite you is a recipe for burnout.

Make actionable plans: if there's something you want to do, try to break it down into steps that are small enough, familiar enough or straightforward enough that you can execute the plan without feeling out of your depth. Personally, at least, I find there's a striking "oh, that's how I do that" feeling when a plan is made sufficiently explicit, a sense that I'm no longer blundering around in a fog.

Finally, and perhaps most importantly, don't eliminate yourself. That is, don't abandon a goal because it looks difficult; make someone else eliminate you. This is essential because many tasks look impossible from the outside, especially if you are depressed. It's almost the mirror image of the planning fallacy: when people commit to doing something, it's all too easy to envision everything going right and not account for setbacks. But before you actually take the plunge, so to speak, it's easy to just assume you can't do anything, which is simply not true.

Comment by Aiyen on To understand, study edge cases · 2019-03-03T17:43:41.685Z · LW · GW

"To understand anatomy, dissect cadavers." That's less a deliberate study of an edge case, and more due to the fact that we can't ethically dissect living people!

Comment by Aiyen on Minimize Use of Standard Internet Food Delivery · 2019-02-11T19:35:14.143Z · LW · GW

At the risk of appearing defective, isn't this the sort of action one would only want to take in a coordinated manner? If it turns out that use of such delivery services tends to force restaurants out of business, then certainly one would prefer a world where we don't use those services and still have the restaurants; you can't order takeout from a place that doesn't exist anymore! But deciding unilaterally to boycott delivery imposes a cost without any benefit: whether I choose to use delivery or not will not make the difference. This looks like a classic tragedy of the commons, where it is best to coordinate cooperation, but cooperating without that coordination is a pure loss.

Comment by Aiyen on [Link] Did AlphaStar just click faster? · 2019-01-29T01:15:34.667Z · LW · GW

Interesting article. It argues that the AI learned spam clicking from human replays, then needed its APM cap raised to prevent spam clicking from eating up all of its APM budget and inhibiting learning. Therefore, it was permitted to use inhumanly high burst APM, and with all its clicks potentially effective actions instead of spam, its effective actions per minute (EPM, actions not counting spam clicks) are going to outclass human pros to the point of breaking the game and rendering actual strategy redundant.

Except that if it's spamming, those clicks aren't effective actions, and if those clicks are effective actions, it's not spamming. To the extent AlphaStar spams, its superhuman APM is misleading, and the match is fairer than it might otherwise appear. To the extent that it's using high burst EPM instead, that can potentially turn the game into a micro match rather than the strategy match that people are more interested in. But that isn't a question of spam clicking.

Of course, if it started spam clicking, needed the APM cap raised, converted its spam into actual high EPM and Deepmind didn't lower the cap afterwards, then the article's objection holds true. But that didn't sound like what it was arguing (though perhaps I misunderstood it). Indeed, it seems to argue the reverse, that spam clicking was so ingrained that the AI never broke the habit.

Comment by Aiyen on For what do we need Superintelligent AI? · 2019-01-25T22:17:20.070Z · LW · GW

It depends on the goal. We can probably defeat aging without needing much more sophisticated AI than Alphafold (a recent Google AI that partially cracked the protein folding problem). We might be able to prevent the creation of dangerous superintelligences without AI at all, just with sufficient surveillance and regulation. We very well might not need very high-level AI to avoid the worst immediately unacceptable outcomes, such as death or X-risk.

On the other hand, true superintelligence offers both the ability to be far more secure in our endeavors (even if human-level AI can mostly secure us against X-risk, it cannot do so anywhere nearly as reliably as a stronger mind), and the ability to flourish up to our potential. You list high-speed space travel as "neither urgent nor necessary", and that's true-a world without near lightspeed travel can still be a very good world. But eventually we want to maximize our values, not merely avoid the worst ways they can fall apart.

As for truly urgent tasks, those would presumably revolve around avoiding death by various means. So anti-aging research, anti-disease/trauma research, gaining security against hostile actors, ensuring access to food/water/shelter, detecting and avoiding X-risks. The last three may well benefit greatly from superintelligence, as comprehensively dealing with hostiles is extremely complicated and also likely necessary for food distribution, and there may well be X-risks a human-level mind can't detect.

Comment by Aiyen on For what do we need Superintelligent AI? · 2019-01-25T22:01:41.362Z · LW · GW

Most people seem to need something to do to avoid boredom and potentially outright depression. However, it is far from clear that work as we know it (which is optimized for our current production needs, and in no way for the benefit of the workers as such) is the best way to solve this problem. There is likely a need to develop other things for people to do alongside alleviating the need for work, but simply saying "unemployment is bad" would seem to miss that there may be better options than either conventional work or idleness.

Comment by Aiyen on For what do we need Superintelligent AI? · 2019-01-25T21:57:52.834Z · LW · GW

Where governance is the barrier to human flourishing, doesn't that mean that using AI to improve governance is useful? A transhuman mind might well be able to figure out not only better policies but how to get those policies enacted (persuasion, force, mind control, incentives, something else we haven't thought of yet). After all, if we're worried about a potentially unfriendly mind with the power to defeat the human race, the flip side is that if it's friendly, it can defeat harmful parts of the human race, like poorly-run governments.

Comment by Aiyen on For what do we need Superintelligent AI? · 2019-01-25T21:52:23.737Z · LW · GW

Safer for the universe maybe, perhaps not for the old person themselves. Cryonics is highly speculative: it *should* work, given that if your information is preserved it should be possible to reconstruct you, and cooling a system enough should reduce thermal noise and reactivity enough to preserve information... but we just don't know. From the perspective of someone near death, counting on cryonics might be as risky or more so than a quick AI.

Comment by Aiyen on Do the best ideas float to the top? · 2019-01-21T15:32:39.425Z · LW · GW

This. Also, political factors: ideas that boost the status of your tribe are likely to be very competitive independently of truth, and nearly so of complexity (though if they're too complex one would expect to see simplified versions propagating as well).

Comment by Aiyen on Life can be better than you think · 2019-01-21T15:14:46.487Z · LW · GW

"Emotions have their role in providing meaning."

Even if true, is meaning actually valuable? I would far rather be happy than meaningful, and a universe of truth, beauty, love and joy seems much more worthwhile than a universe of meaning.

Caveat: I feel much the same disconnect in hearing about meaning that Galton's non-imagers appeared to feel about mental imaging, so there's a pretty good chance I simply don't have the mental circuitry needed to appreciate or care about meaning. You might be genuinely pursuing something very important to you in seeking meaning. On the other hand, even if that's true, it's worth noting that there are some people who don't need it.

Comment by Aiyen on What are questions? · 2019-01-10T02:15:00.630Z · LW · GW

It's a noticed gap in your knowledge.

Comment by Aiyen on Consequentialism FAQ · 2019-01-02T23:24:51.531Z · LW · GW

Link doesn't seem to work.

Comment by Aiyen on What makes people intellectually active? · 2018-12-30T19:40:40.026Z · LW · GW

My best guess: There's a difference between reviewing ideas and exploring them.
Reviewing ideas allows you to understand concepts, think about them and talk about them, but you're looking at material you already have. Consider someone preparing a lecture well: they'll make sure that they have no confusion about what they're covering, and write eloquently on the topic at hand.

On the other hand, this is thinking along pre-set pathways. It can be very useful for both learning and teaching, but you aren't likely to discover something new. Exploring ideas, by contrast, is looking at a part of idea space and then seeing what you can find. It's thinking about the implications of things you know, and looking to see if an unexpected result shows up, or simply considering a topic and hoping that something new on the subject occurs to you.

Comment by Aiyen on Fifteen Things I Learned From Watching a Game of Secret Hitler · 2018-12-19T19:19:00.463Z · LW · GW

"The more liberal policies you pass, the more likely it is any future policy will be fascist."

Sadly this one is likely true in real life. When you have a government that passes more and more laws and does not repeal old ones, the degree to which people's lives are restricted increases monotonically. This creates a precedent for ever more control, until the end is either a backlash or tyranny.

Comment by Aiyen on 18-month follow-up on my self-concept work · 2018-12-19T19:12:27.151Z · LW · GW

Not Kaj, but shame and self-concept (damaging or otherwise) are thoughts (or self-concept is a thought and shame is an emotion produced by certain thoughts). It seems obvious that people with a greater tendency to think will be at greater risk of harmful thoughts. Of course, they'll also have a better chance of coming up with something beneficial as well, but that doesn't strike me as likely to cancel out the harm. Humans are fairly well adapted for our intellectual and social niche; there are a lot more ways for introspection to break things than to improve them.

Comment by Aiyen on Open Thread September 2018 · 2018-09-27T00:30:31.035Z · LW · GW

Happy Petrov Day!

Comment by Aiyen on A Rationalist's Guide to... · 2018-08-10T16:20:30.334Z · LW · GW

...? "Winning" isn't just an abstraction, actually winning means getting something you value. Now, maybe many rationalists are in fact winning, but if so, there are specific values we're attaining. It shouldn't be hard to delineate them.

It should look like, "This person got a new job that makes them much happier, that person lost weight on an evidence-based diet after failing to do so on a string of other diets, this other person found a significant other once they started practicing Alicorn's self-awareness techniques and learned to accept their nervousness on a first date..." It might even look like, "This person developed a new technology and is currently working on a startup to build more prototypes."

In none of these cases should it be hard to explain how we're winning, nor should Tim's "not looking carefully enough" be an issue. Even if the wins are limited to subjective well-being, you should at least be able to explain that! Do you believe that we're winning, or do you merely believe you believe it?

Comment by Aiyen on Who Wants The Job? · 2018-07-22T16:29:32.230Z · LW · GW

This is simultaneously horrifying and incredibly comforting. One would hope that people would be orders of magnitude better than this. But it also bodes very well for the future prospects of anyone remotely competent (unless your boss is like this...)

Comment by Aiyen on An optimization process for democratic organizations · 2018-07-14T01:34:28.204Z · LW · GW

"True. Equalizing the influence of all parties (over the long term at least) doesn't just risk giving such people power; it outright does give them power. At the time of the design, I justified it on the grounds that (1) it forces either compromise or power-sharing, (2) I haven't found a good way to technocratically distinguish humane-but-dumb voters from inhumane-but-smart ones, or rightly-reviled inhumane minorities from wrongly-reviled humane minorities, and (3) the worry that if a group's interests are excluded, then they have no stake in the system, and so they have reason to fight against the system in a costly way. Do any alternatives come to your mind?"

1. True, but is the compromise beneficial? Normally one wants to compromise either to gain useful input from good decision makers, or else to avoid conflict. The people one would be compromising with here would (assuming wisdom of crowds) be poor decision makers, and conventional democracy seems quite peaceful.
2. Why are you interested in distinguishing humane-but-dumb voters from inhumane-but-smart ones? Neither one is likely to give you good policy. Wrongly-reviled humane minorities deserve power, certainly, but rebalancing votes to give it to them (when you can't reliably distinguish them) is injecting noise into the system and hoping it helps.
3. True, but this has always been a trade-off in governance: how much do you compromise with someone to keep the peace vs. promote your own values at the risk of conflict? Again, conventional democracy seems quite good at maintaining peace; while one might propose a system that seeks to produce better policy, it seems odd to propose a system that offers worse policy in exchange for averting conflict when we don't have much conflict.

"I may have been unduly influenced by my anarchist youth: I'm more worried about the negative effects of concentrating power than about the negative effects of distributing it. Is there any objective way to compare those effects, however, that isn't quite similar to how Ophelimo tries to maximize public satisfaction with their own goals?"

Asking the public how satisfied they are is hopefully a fairly effective way of measuring policy success. Perhaps not in situations where much of the public has irrational values (what would Christian fundamentalists report about gay marriage?), but asking people how happy they are about their own lives should work as well as anything we can do. This strikes me as one of the strongest points of Ophelimo, but it's worth noting that satisfaction surveys are compatible with any form of government, not just this proposal.

Hopefully this doesn't come across as too negative; it's a fascinating idea!

Comment by Aiyen on Secondary Stressors and Tactile Ambition · 2018-07-14T01:08:29.944Z · LW · GW

Enye-word's comment is witty, certainly, but "this is going to take a while to explain" and "systematically underestimated inferential distances" aren't the same thing. Similar yes, but there's a difference between something taking a while to explain, while addressing X so you can explain Y which is a prerequisite for talking about Z, while your interlocutor may not understand why you're not just talking about Z, and something just taking a while to explain!

For example, if someone asked me about transhumanism, I might have to explain why immortality looks biologically possible, and how reversal tests work so we're not just stuck with the "death gives meaning to life" intuition, and the possibility of mind uploading to avoid a Malthusian catastrophe, and the evidence for minds being a function of information such that uploading looks even remotely plausible... Misunderstandings are all but guaranteed. But if someone asked me about the plot of Game of Thrones in detail, there would be far less chance of misunderstanding, even if it took longer to explain.

Also, motivation and "tactile ambition" aren't the same thing either. Tactile ambition sounds like ambition to do a specific thing, rather than to just "do well" in an ill-defined way. Someone might be very motivated to save money, for instance, and spend a lot of time and energy looking for ways to do so, yet not hit on a specific strategy and thus never develop a related tactile ambition. Or someone might have a specific ambition to save money by eating cheaply, as in the Mr. Money Mustache example, yet find themselves unmotivated and constantly ordering (relatively expensive) pizza.

That said, why "tactile ambition" rather than something like "specific ambition"?

Comment by Aiyen on An optimization process for democratic organizations · 2018-07-05T20:10:51.558Z · LW · GW

Very interesting idea! The first critique that comes to mind is that the increased voting power given to those whose bills are not passed risks giving undue power to stupid or inhumane voters. Normally, if someone has a bad idea, hopefully it will not pass, and that is that. Under Ophelimo, however, adherents of bad ideas would gather more and more votes to spend over time, until their folly was made law, at least for a time. It's also morally questionable: deweighting someone's judgments because they have been voting for and receiving (hopefully) good things may satisfy certain conceptions of fairness (they've gotten their way; now it's someone else's turn), but it makes less sense in governance, where the goal should be to produce beneficial policies rather than to be "fair" if fairness yields harmful decisions.

The increased weight given to more successful predictors seems wise. While this might make the policy a harder sell (it may seem less democratic), it also ensures that the system can focus on learning from those best able to make good decisions. It's interesting that you're combining this (a meritocratic element) with the vote re-balancing (an egalitarian element). One could imagine this leading to a system of carefully looking to the best forecasters while valuing the desires of all citizens; this might be an excellent outcome.

An obvious concern is people giving dishonest forecasts in an effort to more effectively sway policy. While this is somewhat disincentivized by the penalties to one's forecaster rating if the bill is passed, and the uncertainty about what bills may pass provides some disincentive to do this even with disfavored bills (as you address in the article), I suspect more incentive is needed for honesty. Dishonest forecasting, especially predicting poor results to try to kill a bill, remains tempting, especially for voters with one or two pet issues. If someone risks losing credibility to affect other issues, but successfully shot down a bill on their favorite hot button issue, they very well may consider the result worth it.

Finally, there is the question of what happens when the entire electorate can affect policy directly. In contemporary representative democracy, the only power of the voters is to select a politician, typically from a group that has been fairly heavily screened by various status requirements. While giving direct power to the people might help avoid much of the associated corruption and wasteful signalling, it risks giving increased weight to people without the requisite knowledge and intelligence to make good policy.

Comment by Aiyen on Dissolving the Fermi Paradox, and what reflection it provides · 2018-06-30T21:06:55.490Z · LW · GW

Possibility: if panspermia is correct (the theory that life is much older than Earth and has been seeded on many planets by meteorite impacts), then we might not expect to see other civilizations advanced enough to be visible yet. If evolving from the first life to roughly human levels takes around the current lifetime of the universe, rather than of the Earth, not observing extraterrestrial life shouldn't be surprising! Perhaps the strongest evidence for this is that the number of codons in observed genomes over time (including as far back as the Paleozoic) increases on a fairly steady exponential trend (linear on a log scale), which extrapolates back to shortly after the birth of the universe.
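The extrapolation being gestured at (a log-linear fit of genome complexity against time of origin, as in Sharov and Gordon's "Life Before Earth") can be sketched as follows. The complexity values below are rough illustrative placeholders chosen to lie on a clean trend, not the study's actual data:

```python
# Fit log10(genome complexity) against time of origin, then extrapolate
# back to zero complexity. Times are in billions of years before present;
# complexity values are illustrative placeholders, not real measurements.
data = [
    (3.5, 5.0),   # prokaryote-like ancestors
    (2.0, 6.5),   # early eukaryotes
    (0.5, 8.0),   # complex animals
    (0.0, 8.5),   # mammals
]

xs = [-t for t, _ in data]   # time as a signed coordinate (past = negative)
ys = [c for _, c in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least-squares line through the points.
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
intercept = my - slope * mx

# Zero complexity occurs at x = -intercept/slope, i.e. intercept/slope Gya.
origin_gya = intercept / slope
print(round(origin_gya, 1))  # → 8.5, well before Earth formed (~4.5 Gya)
```

With the study's actual genome-size estimates the extrapolated origin lands at roughly 9 to 10 billion years ago; the sketch is meant to show the method, not reproduce the numbers.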

Comment by Aiyen on Aligned AI May Depend on Moral Facts · 2018-06-15T16:18:24.342Z · LW · GW

What do you mean by moral facts? It sounds in context like "ways to determine which values to give precedence to in the event of a conflict." But such orders of precedence wouldn't be facts, they'd be preferences. And if they're preferences, why are you concerned that they might not exist?

Comment by Aiyen on Oops on Commodity Prices · 2018-06-10T16:06:50.339Z · LW · GW

This is exactly the kind of learning and flexibility that we're trying to get better at here. There's not much to say beyond congratulations, but it's still worth saying.

Comment by Aiyen on Unraveling the Failure's Try · 2018-06-09T15:33:01.852Z · LW · GW

The MtG article is called Stuck in the Middle With Bruce by John Rizzo. Not sure how to link, but it's

The article is worth your time, but if you want a summary-there appears to be a part of many people's minds that wants to lose. And often winning is as much a matter of overcoming this part of you (which the article terms Bruce) as it is overcoming the challenges in front of you.

Comment by Aiyen on When is unaligned AI morally valuable? · 2018-06-06T22:19:46.808Z · LW · GW

Humans are made of atoms that are not paperclips. That's enough reason for extinction right there.