Questions on Theism 2014-10-08T21:02:43.338Z · score: 25 (33 votes)


Comment by aiyen on Why Are So Many Rationalists Polyamorous? · 2019-10-23T23:00:05.057Z · score: 2 (3 votes) · LW · GW

By that definition nothing is zero sum. "Zero sum" doesn't mean that literally all possible outcomes have equal total utility; it means that one person's gain is invariably another person's loss.

Comment by aiyen on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T20:36:17.634Z · score: 29 (10 votes) · LW · GW
"But Petrov was not a launch authority. The decision to launch or not was not up to him, it was up to the Politburo of the Soviet Union."

This is obviously true in terms of Soviet policy, but it sounds like you're making a moral claim: that the Politburo was morally entitled to decide whether or not to launch, and that no one else had that right. This is extremely questionable, to put it mildly.

"We have to remember that when he chose to lie about the detection, by calling it a computer glitch when he didn't know for certain that it was one, Petrov was defecting against the system."

Indeed. But we do not cooperate in prisoners' dilemmas "just because"; we cooperate because doing so leads to higher utility. Petrov's defection led to a better outcome for every single person on the planet; assuming this was wrong because it was defection is an example of the non-central fallacy.

"Is that the sort of behavior we really want to lionize?"

If you will not honor literally saving the world, what will you honor? If we wanted to make a case against Petrov, we could say that by demonstrably not retaliating, he weakened deterrence (but deterrence would have helped no one if he had launched), or that the Soviets might have preferred destroying the world to dying alone, and thus might be upset with a missileer unwilling to strike. But it's hard to condemn him for a decision that predictably saved the West, and had a significant chance (which did in fact occur) of saving the Soviet Union.

Comment by aiyen on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-22T04:19:06.746Z · score: 4 (7 votes) · LW · GW

This seems wrong.

The second law of thermodynamics isn't magic; it's simply the fact that when some categories contain many possible states and others contain only a few, jumping randomly from state to state tends to land you in the larger categories. Hence melting: arrange atoms randomly, and you're far more likely to end up in a jumble than in one of the few arrangements that permit solidity. Hence heat equalizing: the kinetic energy of thermal motion can spread out in many ways but remain concentrated in only a few, so it tends to spread out. You can call that the universe hating order if you like, but it's a well-understood process that operates purely through small targets being harder to hit, not through a force actively pushing us towards chaos, making particles zig when they otherwise would have zagged so as to create more disorder.
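The counting argument can be sketched numerically. This is my own toy illustration (the 4-atom system, the choice of "sorted" as the ordered macrostate, and the trial count are all arbitrary assumptions, not anything from the post being discussed):

```python
import itertools
import random

# Toy model: with 4 distinguishable atoms, only one arrangement (here:
# sorted order) counts as "ordered"; the other 23 count as "jumbled".
# Jumping to a uniformly random microstate almost always lands in the
# larger category -- no force pushing towards chaos is needed.

atoms = [1, 2, 3, 4]
microstates = list(itertools.permutations(atoms))  # 4! = 24 microstates
ordered = {tuple(sorted(atoms))}                   # 1 "solid" arrangement

random.seed(0)
trials = 10_000
hits = sum(tuple(random.sample(atoms, len(atoms))) in ordered
           for _ in range(trials))

print(f"{len(ordered)} ordered state out of {len(microstates)}")
print(f"fraction of random jumps landing in it: {hits / trials:.3f}")
# expected by pure counting: 1/24 ≈ 0.042
```

The same logic scales up: with a mole of atoms the "ordered" category is so small relative to the whole state space that random motion essentially never finds it.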

This being the case, claiming that life exists for the purpose of wasting energy seems absurd. Evolution appears to explain the existence of life, and it is not an entropic process. Positing anything else being behind it requires evidence, something about life that evolution doesn't explain and entropy-driven life would. Also, remember, entropy doesn't think ahead. It is purely the difficulty of hitting small targets; a bullet isn't going to 'decide' to swerve into a bull's eye as part of a plan to miss more later! It would be very strange if this could somehow mold us into fearing both death and immortality as part of a plan to gather as much energy as we could, then waste it through our deaths.

This seems like academics seeking to be edgy much more than a coherent explanation of biology.

As for transhumanism being overly interested in good or evil, what would you suggest we do instead? It's rather self-defeating to suggest that losing interest in goodness would be a good idea.

Comment by aiyen on The Hard Work of Translation (Buddhism) · 2019-04-08T21:18:50.858Z · score: 3 (3 votes) · LW · GW

So enlightenment is defragmentation, just like we do with hard drives?

Comment by aiyen on Rest Days vs Recovery Days · 2019-03-20T23:17:31.242Z · score: 3 (2 votes) · LW · GW

That makes a fair bit of sense. And what are your thoughts on work days? I get my work for my job done, but advice on improving productivity on chores and future planning would be appreciated. Also, good point on pica!

Comment by aiyen on Rest Days vs Recovery Days · 2019-03-20T18:16:50.414Z · score: 6 (4 votes) · LW · GW

Very interesting dichotomy! Definitely seems worth trying. I'm confused about the reading/screen time/video games distinction, though. Why would reading seem appealing but being in front of a screen not? Watching TV is essentially identical to reading, right? You're taking in a preset story either way. Admittedly you can read faster than TV characters can talk, so maybe that makes reading more rewarding?

Also, while playing more video games while recovering and fewer while resting makes sense (they're an easy activity while low on energy, and thus will take up much of a recovery day, but less of a rest day), "just following my gut" can still lead to plenty of gaming. Does this mean that I should still play some on a rest day, just less? That I almost never have enough energy to rest instead of recover? That I'm too into gaming and this is skewing my gut such that a good rest day rule would be "follow your gut, except playing fewer/no games today"?

Comment by aiyen on [NeedAdvice]How to stay Focused on a long-term goal? · 2019-03-08T22:34:59.090Z · score: 11 (8 votes) · LW · GW

First off, you probably want to figure out whether your nihilism is due to philosophy or depression. Would you normally enjoy and value things, but the idea of finite life gets in the way? Or would you have difficulty seeing a point to things even if you were suddenly granted immortality and the heat death of the universe were averted?

Either way, it's difficult to give a definitive solution, as different things work for different people. That said, if the problem seems to be philosophy, it might be worth noting that the satisfaction found in a good moment isn't a function of anything that comes after it. If you enjoy something, or you help someone you love, or you do anything else that seems valuable to you, the fact of that moment is unchangeable. If the stars die in heaven, that cannot change the fact that you enjoyed something. Another possible approach is simply not thinking about it. I know that sounds horribly dismissive, but it isn't meant to be. In my own life there have been philosophical (and in my case religious) issues that I never managed to think my way out of... but when I stopped focusing on the problem, it went away. I managed this only after getting a job that let my brain say "okay, worry about God later, we need to get this task done first!" Finding an activity that demands attention might serve the same purpose (provided your brain will let you shift attention; if not, this might just add stress).

If the problem seems to be depression, adrafinil and/or modafinil are extremely helpful for some people. Conventional treatments exist too, of course (therapy and/or anti-depressants); I don't know anyone who has benefited from therapy (at least not that they've told me), but one of my friends had a night-and-day improvement with an anti-depressant (sadly I don't remember which one; if you like I can check with her). Another aspect of overcoming depression is having friends in the moment and a plan for the future: not a plan you feel you should follow, but one you actively want to. I don't know your circumstances, but insofar as you can prioritize socialization and work for the future, that might help.

As for the actual question of self-improvement, people vary wildly. An old friend of mine found huge improvements in her life due to scheduling; I do markedly better without it. The best advice I can offer (and this very well might not help; drop it if it seems useless or harmful) comes in three parts:

Don't do what you think you should do, do what you actually want to (and if there isn't anything you want, maybe don't force yourself to find something too quickly either). People find motivation in pursuing goals they actually find worthwhile, but following a goal that sounds good yet doesn't actually excite you is a recipe for burnout.

Make actionable plans: if there's something you want to do, try to break it down into steps that are small enough, familiar enough, or straightforward enough that you can execute the plan without feeling out of your depth. Personally, at least, I find there's a striking "oh, that's how I do that" feeling when a plan is made sufficiently explicit, a sense that I'm no longer blundering around in a fog.

Finally, and perhaps most importantly, don't eliminate yourself. That is, don't abandon a goal because it looks difficult; make someone else eliminate you. This is essential because many tasks look impossible from the outside, especially if you are depressed. It's almost the mirror image of the planning fallacy: when people commit to doing something, it's all too easy to envision everything going right and not account for setbacks. But before you actually take the plunge, so to speak, it's easy to just assume you can't do anything, which is simply not true.

Comment by aiyen on To understand, study edge cases · 2019-03-03T17:43:41.685Z · score: 4 (3 votes) · LW · GW

"To understand anatomy, dissect cadavers." That's less a deliberate study of an edge case, and more due to the fact that we can't ethically dissect living people!

Comment by aiyen on Minimize Use of Standard Internet Food Delivery · 2019-02-11T19:35:14.143Z · score: 8 (4 votes) · LW · GW

At the risk of appearing defective, isn't this the sort of action one would only want to take in a coordinated manner? If it turns out that use of such delivery services tends to force restaurants out of business, then certainly one would prefer a world where we don't use those services and still have the restaurants; you can't order takeout from a place that doesn't exist anymore! But deciding unilaterally to boycott delivery imposes a cost without any benefit: whether I choose to use delivery or not will not make the difference. This looks like a classic tragedy of the commons, where it is best to coordinate cooperation, but cooperating without that coordination is a pure loss.
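The game-theoretic point can be sketched with a toy payoff model. All numbers here are hypothetical, chosen only to illustrate the structure of the commons problem, not drawn from any real data:

```python
# Hypothetical payoffs (all values invented for illustration): each diner
# gains 1 util of convenience per delivery order, but if total delivery
# use reaches a threshold, the restaurant closes and everyone loses 10.

def payoff(i_order: bool, others_ordering: int,
           closure_threshold: int = 50) -> float:
    total = others_ordering + (1 if i_order else 0)
    convenience = 1.0 if i_order else 0.0
    closure = -10.0 if total >= closure_threshold else 0.0
    return convenience + closure

# With 80 others ordering, my choice can't save the restaurant, so
# ordering strictly dominates not ordering (unilateral boycott is a loss):
print(payoff(True, 80), payoff(False, 80))   # -9.0 -10.0
# But universal coordination on not ordering beats universal ordering:
print(payoff(False, 0), payoff(True, 99))    # 0.0 -9.0
```

The unilateral boycotter pays the convenience cost and still loses the restaurant, which is exactly why coordination (rather than lone defection from delivery) is the relevant lever.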

Comment by aiyen on [Link] Did AlphaStar just click faster? · 2019-01-29T01:15:34.667Z · score: 1 (1 votes) · LW · GW

Interesting article. It argues that the AI learned spam clicking from human replays, then needed its APM cap raised to prevent spam clicking from eating up all of its APM budget and inhibiting learning. Therefore, it was permitted to use inhumanly high burst APM, and with all its clicks potentially effective actions instead of spam, its effective actions per minute (EPM, actions not counting spam clicks) are going to outclass human pros to the point of breaking the game and rendering actual strategy redundant.

Except that if it's spamming, those clicks aren't effective actions, and if those clicks are effective actions, it's not spamming. To the extent AlphaStar spams, its superhuman APM is misleading, and the match is fairer than it might otherwise appear. To the extent that it's using high burst EPM instead, that can potentially turn the game into a micro match rather than the strategy match that people are more interested in. But that isn't a question of spam clicking.
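The APM/EPM distinction can be sketched as follows; classifying a repeated identical command as "spam" is my own simplification for illustration, not how StarCraft II or DeepMind actually counts actions:

```python
# Toy APM/EPM tally: treat an action identical to the previous one as a
# spam click, and everything else as an effective action.  (This
# classification rule is a deliberate oversimplification.)

def apm_and_epm(actions: list[str], minutes: float) -> tuple[float, float]:
    effective = sum(1 for prev, cur in zip([None] + actions, actions)
                    if cur != prev)  # count actions that change the command
    return len(actions) / minutes, effective / minutes

clicks = ["move", "move", "move", "attack", "move", "build", "build"]
apm, epm = apm_and_epm(clicks, minutes=0.5)
print(apm, epm)  # 14.0 8.0 -- high APM can coexist with much lower EPM
```

The gap between the two numbers is the crux of the comment: a cap on raw APM constrains a spammy player very differently from one whose every click does work.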

Of course, if it started spam clicking, needed the APM cap raised, converted its spam into actual high EPM, and DeepMind didn't lower the cap afterwards, then the article's objection holds true. But that didn't sound like what it was arguing (though perhaps I misunderstood it). Indeed, it seems to argue the reverse: that spam clicking was so ingrained that the AI never broke the habit.

Comment by aiyen on For what do we need Superintelligent AI? · 2019-01-25T22:17:20.070Z · score: 2 (2 votes) · LW · GW

It depends on the goal. We can probably defeat aging without needing much more sophisticated AI than Alphafold (a recent Google AI that partially cracked the protein folding problem). We might be able to prevent the creation of dangerous superintelligences without AI at all, just with sufficient surveillance and regulation. We very well might not need very high-level AI to avoid the worst immediately unacceptable outcomes, such as death or X-risk.

On the other hand, true superintelligence offers both the ability to be far more secure in our endeavors (even if human-level AI can mostly secure us against X-risk, it cannot do so anywhere near as reliably as a stronger mind), and the ability to flourish up to our potential. You list high-speed space travel as "neither urgent nor necessary", and that's true; a world without near-lightspeed travel can still be a very good world. But eventually we want to maximize our values, not merely avoid the worst ways they can fall apart.

As for truly urgent tasks, those would presumably revolve around avoiding death by various means. So anti-aging research, anti-disease/trauma research, gaining security against hostile actors, ensuring access to food/water/shelter, detecting and avoiding X-risks. The last three may well benefit greatly from superintelligence, as comprehensively dealing with hostiles is extremely complicated and also likely necessary for food distribution, and there may well be X-risks a human-level mind can't detect.

Comment by aiyen on For what do we need Superintelligent AI? · 2019-01-25T22:01:41.362Z · score: 1 (1 votes) · LW · GW

Most people seem to need something to do to avoid boredom and potentially outright depression. However, it is far from clear that work as we know it (which is optimized for our current production needs, and in no way for the benefit of the workers as such) is the best way to solve this problem. There is likely a need to develop other things for people to do alongside alleviating the need for work, but simply saying "unemployment is bad" would seem to miss that there may be better options than either conventional work or idleness.

Comment by aiyen on For what do we need Superintelligent AI? · 2019-01-25T21:57:52.834Z · score: 2 (2 votes) · LW · GW

Where governance is the barrier to human flourishing, doesn't that mean that using AI to improve governance is useful? A transhuman mind might well be able to figure out not only better policies but how to get those policies enacted (persuasion, force, mind control, incentives, something else we haven't thought of yet). After all, if we're worried about a potentially unfriendly mind with the power to defeat the human race, the flip side is that if it's friendly, it can defeat harmful parts of the human race, like poorly-run governments.

Comment by aiyen on For what do we need Superintelligent AI? · 2019-01-25T21:52:23.737Z · score: 1 (1 votes) · LW · GW

Safer for the universe maybe, perhaps not for the old person themselves. Cryonics is highly speculative-it *should* work, given that if your information is preserved it should be possible to reconstruct you, and cooling a system enough should reduce thermal noise and reactivity enough to preserve information... but we just don't know. From the perspective of someone near death, counting on cryonics might be as risky or more so than a quick AI.

Comment by aiyen on Do the best ideas float to the top? · 2019-01-21T15:32:39.425Z · score: 2 (2 votes) · LW · GW

This. Also, political factors: ideas that boost the status of your tribe are likely to be very competitive independently of truth, and nearly so of complexity (though if they're too complex one would expect to see simplified versions propagating as well).

Comment by aiyen on Life can be better than you think · 2019-01-21T15:14:46.487Z · score: 4 (3 votes) · LW · GW

"Emotions have their role in providing meaning."

Even if true, is meaning actually valuable? I would far rather be happy than meaningful, and a universe of truth, beauty, love and joy seems much more worthwhile than a universe of meaning.

Caveat: I feel much the same disconnect in hearing about meaning that Galton's non-imagers appeared to feel about mental imaging, so there's a pretty good chance I simply don't have the mental circuitry needed to appreciate or care about meaning. You might be genuinely pursuing something very important to you in seeking meaning. On the other hand, even if that's true, it's worth noting that there are some people who don't need it.

Comment by aiyen on What are questions? · 2019-01-10T02:15:00.630Z · score: 1 (1 votes) · LW · GW

It's a noticed gap in your knowledge.

Comment by aiyen on Consequentialism FAQ · 2019-01-02T23:24:51.531Z · score: 1 (1 votes) · LW · GW

Link doesn't seem to work.

Comment by aiyen on What makes people intellectually active? · 2018-12-30T19:40:40.026Z · score: 8 (4 votes) · LW · GW

My best guess: There's a difference between reviewing ideas and exploring them.
Reviewing ideas allows you to understand concepts, think about them, and talk about them, but you're looking at material you already have. Consider someone preparing a lecture well: they'll make sure that they have no confusion about what they're covering, and write eloquently on the topic at hand.

On the other hand, this is thinking along pre-set pathways. It can be very useful for both learning and teaching, but you aren't likely to discover something new. Exploring ideas, by contrast, is looking at a part of idea space and then seeing what you can find. It's thinking about the implications of things you know, and looking to see if an unexpected result shows up, or simply considering a topic and hoping that something new on the subject occurs to you.

Comment by aiyen on Fifteen Things I Learned From Watching a Game of Secret Hitler · 2018-12-19T19:19:00.463Z · score: 2 (2 votes) · LW · GW

"The more liberal policies you pass, the more likely it is any future policy will be fascist."

Sadly this one is likely true irl. When you have a government that passes more and more laws, and does not repeal old laws, then the degree of restriction of people's lives increases monotonically. This creates a precedent for ever more control, until the end is either a backlash or tyranny.

Comment by aiyen on 18-month follow-up on my self-concept work · 2018-12-19T19:12:27.151Z · score: 3 (2 votes) · LW · GW

Not Kaj, but shame and self-concept (damaging or otherwise) are thoughts (or self-concept is a thought and shame is an emotion produced by certain thoughts). It seems obvious that people with a greater tendency to think will be at greater risk of harmful thoughts. Of course, they'll also have a better chance of coming up with something beneficial as well, but that doesn't strike me as likely to cancel out the harm. Humans are fairly well adapted for our intellectual and social niche; there are a lot more ways for introspection to break things than to improve them.

Comment by aiyen on Open Thread September 2018 · 2018-09-27T00:30:31.035Z · score: 6 (4 votes) · LW · GW

Happy Petrov Day!

Comment by aiyen on A Rationalist's Guide to... · 2018-08-10T16:20:30.334Z · score: 1 (1 votes) · LW · GW

...? "Winning" isn't just an abstraction, actually winning means getting something you value. Now, maybe many rationalists are in fact winning, but if so, there are specific values we're attaining. It shouldn't be hard to delineate them.

It should look like, "This person got a new job that makes them much happier, that person lost weight on an evidence-based diet after failing to do so on a string of other diets, this other person found a significant other once they started practicing Alicorn's self-awareness techniques and learned to accept their nervousness on a first date..." It might even look like, "This person developed a new technology and is currently working on a startup to build more prototypes."

In none of these cases should it be hard to explain how we're winning, nor should Tim's "not looking carefully enough" be an issue. Even if the wins are limited to subjective well-being, you should at least be able to explain that! Do you believe that we're winning, or do you merely believe you believe it?

Comment by aiyen on Who Wants The Job? · 2018-07-22T16:29:32.230Z · score: 1 (1 votes) · LW · GW

This is simultaneously horrifying and incredibly comforting. One would hope that people would be orders of magnitude better than this. But it also bodes very well for the future prospects of anyone remotely competent (unless your boss is like this...)

Comment by aiyen on An optimization process for democratic organizations · 2018-07-14T01:34:28.204Z · score: 1 (1 votes) · LW · GW

"True. Equalizing the influence of all parties (over the long term at least) doesn't just risk giving such people power; it outright does give them power. At the time of the design, I justified it on the grounds that (1) it forces either compromise or power-sharing, (2) I haven't found a good way to technocratically distinguish humane-but-dumb voters from inhumane-but-smart ones, or rightly-reviled inhumane minorities from wrongly-reviled humane minorities, and (3) the worry that if a group's interests are excluded, then they have no stake in the system, and so they have reason to fight against the system in a costly way. Do any alternatives come to your mind?"

1. True, but is the compromise beneficial? Normally one wants to compromise either to gain useful input from good decision makers, or else to avoid conflict. The people one would be compromising with here would (assuming wisdom of crowds) be poor decision makers, and conventional democracy seems quite peaceful.

2. Why are you interested in distinguishing humane-but-dumb voters from inhumane-but-smart ones? Neither one is likely to give you good policy. Wrongly-reviled humane minorities deserve power, certainly, but rebalancing votes to give it to them (when you can't reliably distinguish them) is injecting noise into the system and hoping it helps.

3. True, but this has always been a trade-off in governance: how much do you compromise with someone to keep the peace versus promote your own values at the risk of conflict? Again, conventional democracy seems quite good at maintaining peace; while one might propose a system that seeks to produce better policy, it seems odd to propose a system that offers worse policy in exchange for averting conflict when we don't have much conflict.

"I may have been unduly influenced by my anarchist youth: I'm more worried about the negative effects of concentrating power than about the negative effects of distributing it. Is there any objective way to compare those effects, however, that isn't quite similar to how Ophelimo tries to maximize public satisfaction with their own goals?"

Asking the public how satisfied they are is hopefully a fairly effective way of measuring policy success. Perhaps not in situations where much of the public has irrational values (what would Christian fundamentalists report about gay marriage?), but asking people how happy they are about their own lives should work as well as anything we can do. This strikes me as one of the strongest points of Ophelimo, but it's worth noting that satisfaction surveys are compatible with any form of government, not just this proposal.

Hopefully this doesn't come across as too negative; it's a fascinating idea!

Comment by aiyen on Secondary Stressors and Tactile Ambition · 2018-07-14T01:08:29.944Z · score: 11 (3 votes) · LW · GW

Enye-word's comment is witty, certainly, but "this is going to take a while to explain" and "systematically underestimated inferential distances" aren't the same thing. They're similar, yes, but there's a difference between an explanation that takes a while because you must address X to explain Y, which is itself a prerequisite for talking about Z, while your interlocutor doesn't understand why you aren't just talking about Z, and an explanation that simply takes a while!

For example, if someone asked me about transhumanism, I might have to explain why immortality looks biologically possible, and how reversal tests work so we're not just stuck with the "death gives meaning to life" intuition, and the possibility of mind uploading to avoid a Malthusian catastrophe, and the evidence for minds being a function of information such that uploading looks even remotely plausible... Misunderstandings are all but guaranteed. But if someone asked me about the plot of Game of Thrones in detail, there would be far less chance of misunderstanding, even if it took longer to explain.

Also, motivation and "tactile ambition" aren't the same thing either. Tactile ambition sounds like ambition to do a specific thing, rather than to just "do well" in an ill-defined way. Someone might be very motivated to save money, for instance, and spend a lot of time and energy looking for ways to do so, yet not hit on a specific strategy and thus never develop a related tactile ambition. Or someone might have a specific ambition to save money by eating cheaply, as in the Mr. Money Mustache example, yet find themselves unmotivated and constantly ordering (relatively expensive) pizza.

That said, why "tactile ambition" rather than something like "specific ambition"?

Comment by aiyen on An optimization process for democratic organizations · 2018-07-05T20:10:51.558Z · score: 2 (2 votes) · LW · GW

Very interesting idea! The first critique that comes to mind is that the increased voting power given to those whose bills are not passed risks giving undue power to stupid or inhumane voters. Normally, if someone has a bad idea, hopefully it will not pass, and that is that. Under Ophelimo, however, adherents of bad ideas would gather more and more votes to spend over time, until their folly was made law, at least for a time. It's also morally questionable: deweighting someone's judgments because they have been voting for and receiving (hopefully) good things may satisfy certain conceptions of fairness (they've gotten their way; now it's someone else's turn), but it makes less sense in governance, where the goal should be to produce beneficial policies rather than to be "fair" if fairness yields harmful decisions.

The increased weight given to more successful predictors seems wise. While this might make the policy a harder sell (it may seem less democratic), it also ensures that the system can focus on learning from those best able to make good decisions. It's interesting that you're combining this (a meritocratic element) with the vote re-balancing (an egalitarian element). One could imagine this leading to a system of carefully looking to the best forecasters while valuing the desires of all citizens; this might be an excellent outcome.

An obvious concern is people giving dishonest forecasts in an effort to more effectively sway policy. While this is somewhat disincentivized by the penalties to one's forecaster rating if the bill is passed, and the uncertainty about what bills may pass provides some disincentive to do this even with disfavored bills (as you address in the article), I suspect more incentive is needed for honesty. Dishonest forecasting, especially predicting poor results to try to kill a bill, remains tempting, especially for voters with one or two pet issues. If someone risks losing credibility to affect other issues, but successfully shot down a bill on their favorite hot button issue, they very well may consider the result worth it.

Finally, there is the question of what happens when the entire electorate can affect policy directly. In contemporary representative democracy, the only power of the voters is to select a politician, typically from a group that has been fairly heavily screened by various status requirements. While giving direct power to the people might help avoid much of the associated corruption and wasteful signalling, it risks giving increased weight to people without the requisite knowledge and intelligence to make good policy.

Comment by aiyen on Dissolving the Fermi Paradox, and what reflection it provides · 2018-06-30T21:06:55.490Z · score: 2 (2 votes) · LW · GW

Possibility: if panspermia is correct (the theory that life is much older than Earth and has been seeded on many planets by meteorite impacts), then we might not expect to see other civilizations advanced enough to be visible yet. If evolving from the first life to roughly human levels takes around the current lifetime of the universe, rather than of the Earth, not observing extraterrestrial life shouldn't be surprising! Perhaps the strongest evidence for this is that the number of codons in observed genomes over time (including as far back as the Paleozoic) grows on a fairly steady exponential trend (a straight line on a log scale), which extrapolates back to shortly after the birth of the universe.
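The extrapolation being described can be sketched as follows. The data points are ballpark placeholders I've invented in the spirit of the cited trend (cf. Sharov and Gordon's log-linear genome-complexity extrapolation), not real measurements:

```python
import math

# Illustrative (origin-time in years before present, functional genome
# size in base pairs) pairs.  These are rough placeholders, not data.
points = [(-3.5e9, 5e5),   # prokaryote-grade life
          (-2.0e9, 3e6),   # early eukaryotes
          (-1.0e9, 7e7),   # early animals
          (-0.5e9, 5e8)]   # vertebrates

# Least-squares fit of log10(genome size) against time.
xs = [t for t, _ in points]
ys = [math.log10(s) for _, s in points]
n = len(points)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Extrapolate back to a single-base "genome" (log10 size = 0):
origin = -intercept / slope
print(f"extrapolated origin: {-origin / 1e9:.1f} billion years ago")
```

Even with these made-up numbers, the straight-line fit on a log scale lands the origin of life several billion years before the Earth formed, which is the shape of the argument in the comment (whether the extrapolation is legitimate is, of course, the contested part).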

Comment by aiyen on Aligned AI May Depend on Moral Facts · 2018-06-15T16:18:24.342Z · score: 7 (2 votes) · LW · GW

What do you mean by moral facts? It sounds in context like "ways to determine which values to give precedence to in the event of a conflict." But such orders of precedence wouldn't be facts, they'd be preferences. And if they're preferences, why are you concerned that they might not exist?

Comment by aiyen on Oops on Commodity Prices · 2018-06-10T16:06:50.339Z · score: 13 (5 votes) · LW · GW

This is exactly the kind of learning and flexibility that we're trying to get better at here. There's not much to say beyond congratulations, but it's still worth saying.

Comment by aiyen on Unraveling the Failure's Try · 2018-06-09T15:33:01.852Z · score: 7 (4 votes) · LW · GW

The MtG article is called Stuck in the Middle With Bruce by John Rizzo. Not sure how to link, but it's

The article is worth your time, but if you want a summary: there appears to be a part of many people's minds that wants to lose. And often winning is as much a matter of overcoming this part of you (which the article terms Bruce) as it is of overcoming the challenges in front of you.

Comment by aiyen on When is unaligned AI morally valuable? · 2018-06-06T22:19:46.808Z · score: 5 (4 votes) · LW · GW

Humans are made of atoms that are not paperclips. That's enough reason for extinction right there.

Comment by aiyen on When is unaligned AI morally valuable? · 2018-06-05T18:12:37.519Z · score: 5 (2 votes) · LW · GW

It's an evolved predisposition, but does that make it a terminal value? We like sweet foods, but a world that had no sweet foods because we'd figured out something else that tasted better doesn't sound half bad! We have an evolved predisposition to sleep, but if we learned how to eliminate the need for sleep, wouldn't that be even better?

Comment by aiyen on When is unaligned AI morally valuable? · 2018-05-28T02:22:17.852Z · score: 4 (2 votes) · LW · GW

Yes. I wouldn't be surprised if this happened in fact.

Comment by aiyen on When is unaligned AI morally valuable? · 2018-05-27T06:58:36.862Z · score: 4 (2 votes) · LW · GW

The strongest argument that an upload would share our values is that our terminal values are hardwired by evolution. Self-preservation is common to all non-eusocial creatures, curiosity to all creatures with enough intelligence to benefit from it. Sexual desire is (more or less) universal in sexually reproducing species, desire for social relationships is universal in social species. I find it hard to believe that a million years of evolution would change our values that much when we share many of our core values with the dinosaurs. If Maiasaura could have recognizable relationships 76 million years ago, are those going out the window in the next million? It's not impossible, of course, but shouldn't it seem pretty unlikely?

I think the difference between us is that you are looking at instrumental values, noting correctly that those are likely to change unrecognizably, and fearing that that means that all values will change and be lost. Are you troubled by instrumental values shifts, even if the terminal values stay the same? Alternatively, is there a reason you think that terminal values will be affected?

I think an example here is important to avoid confusion. Consider Western Secular sexual morals vs Islamic ones. At first glance, they couldn't seem more different. One side is having casual sex without a second thought, the other is suppressing desire with full-body burqas and genital mutilation. Different terminal values, right? And if there can be that much of a difference between two cultures in today's world, with the Islamic model seeming so evil, surely values drift will make the future beyond monstrous!

Except that the underlying thoughts behind the two models aren't as different as you might think. A Westerner having casual sex knows that effective birth control and STD countermeasures means that the act is fairly safe. A sixth century Arab doesn't have birth control and knows little of STDs beyond that they preferentially strike the promiscuous-desire is suddenly very dangerous! A woman sleeping around with modern safeguards is just a normal, healthy person doing what they want without harming anyone; one doing so in the ancient world is a potential enemy willing to expose you to cuckoldry and disease. The same basic desires we have to avoid cuckoldry and sickness motivated them to create the horrors of Shari'a.

None of this is intended to excuse Islamic barbarism. Even in the sixth century, such atrocities were a cure worse than the disease. But it's worth noting that their values are a mistake much more than a terminal disagreement. They're thinking of sex as dangerous because it was dangerous for 99% of human history, and "sex is bad" is an easier meme to remember and pass on than "sex is dangerous because of pregnancy risks and disease risks, but if at some point in the future technology should be created that alleviates the risks, then it won't be so dangerous", especially for a culture to which such technology would seem an impossible dream.

That's what I mean by terminal values-the things we want for their own sake, like both health and pleasure, which are all too easy to confuse with the often misguided ways we seek them. As technology improves, we should be able to get better at clearing away the mistakes, which should lead to a better world by our own values, at least once we realize where we were going wrong.

Comment by aiyen on When is unaligned AI morally valuable? · 2018-05-26T18:25:58.160Z · score: 13 (5 votes) · LW · GW

The values you're expressing here are hard for me to comprehend. Paperclip maximization isn't that bad, because we leave a permanent mark on the universe? The deaths of you, everyone you love, and everyone in the universe aren't that bad (99% of the way from extinction that doesn't leave a permanent mark to flourishing?) because we'll have altered the shape of the cosmos? It's common for people to care about what things will be like after they die for the sake of someone they love. I've never heard of someone caring about what things will be like after everyone dies-do you value making a mark so much even when no one will ever see it?

"...our descendants 1 million years from now will not be called humans and will not share our values. I don't see much of a reason to believe that the values of my biological descendants will be less ridiculous to me, than paperclip maximization."

That depends on what you value. If we survive and have a positive singularity, it's fairly likely that our descendants will have fairly similar high level values to us: happiness, love, lust, truth, beauty, victory. This sort of thing is exactly what one would want to design a Friendly AI to preserve! Now, you're correct that the ways in which these things are pursued will presumably change drastically. Maybe people stop caring about the Mona Lisa and start getting into the beauty of arranging atoms in 11 dimensions. Maybe people find that merging minds is so much more intimate and pleasurable than any form of physical intimacy that sex goes out the window. If things go right, the future ends up very different, and (until we adjust) likely incomprehensible and utterly weird. But there's a difference between pursuing a human value in a way we don't understand yet and pursuing no human value!

To take an example from our history-how incomprehensible must we be to cavemen? No hunting or gathering-we must be starving to death. No camps or campfires-surely we've lost our social interaction. No caves-poor homeless modern man! Some of us no longer tell stories about creator spirits-we've lost our knowledge of our history and our place in the universe. And some of us no longer practice monogamy-surely all love is lost.

Yet all these things that would horrify a caveman are the result of improvement in pursuing the caveman's own values. We've lost our caves, but houses are better shelter. We've lost Dreamtime legends, Dreamtime lies, in favor of knowledge of the actual universe. We'd seem ridiculous, maybe close to paperclip-level ridiculous, until they learned what was actually going on, and why. But that's not a condemnation of the modern world, that's an illustration of how we've done better!

Do you draw no distinction between a hard-to-understand pursuit of love or joy, and a pursuit of paperclips?

Comment by aiyen on When is unaligned AI morally valuable? · 2018-05-25T18:55:10.419Z · score: 7 (2 votes) · LW · GW

Well then, isn't the answer that we care about de re alignment, and whether or not an AI is de dicto aligned is relevant only as far as it predicts de re alignment? We might expect that the two would converge in the limit of superintelligence, and perhaps that aiming for de dicto alignment might be the easier immediate target, but the moral worth would be a factor of what the AI actually did.

That does clear up the seeming confusion behind the OP, though, so thanks!

Comment by aiyen on When is unaligned AI morally valuable? · 2018-05-25T18:15:15.690Z · score: 6 (3 votes) · LW · GW

I may be missing the point here, so please don't be offended. Isn't this confusing "does the AI have (roughly) human values?" and "was the AI deliberately, rigorously designed to do so?" Obviously, our perception of the moral worth of an agent doesn't require them to have values identical to ours. We can value another's pleasure, even if we would not derive pleasure from the things they're experiencing. We can value another's love, even if we do not feel as affectionate towards their loved ones. But do we value an agent whose goal is to suffer as much as possible? Do we value an agent motivated purely by hatred?

Our values are our values; they determine our perception of moral worth. And while many people might be happy about a strange and wonderful AI civilization, even if it was very different from what we might choose to build, very few would want a boring one. That's a values question, or a meta values question; there's no way to posit a worthwhile AI civilization without assuming that on some level our values align.

The example given for a "good successor albeit unaligned" AI is a simulated civilization that eventually learns about the real world and figures out how to make AI work here. Certainly this isn't an AI with deliberate, rigorous Friendliness programming, but if you'd prefer handing the universe off to it to taking a 10% extinction risk, isn't that because you're hoping it will be more or less Friendly anyway? And at that point, the answer to when is unaligned AI morally valuable is when it is, in fact, aligned, regardless of whether that alignment was due to a simulated civilization having somewhat similar values to our own, or any other reason?

Comment by aiyen on On "Overthinking" Concepts · 2017-06-04T04:43:35.097Z · score: 1 (1 votes) · LW · GW

This. It took me years to understand this, but it's true, and vital to proficiency in most areas of endeavor.

The trouble with "overthinking" is that it's all too easy to try to oversimplify, or to frame a problem in terms that make it unnecessarily difficult. Martial arts are a good example. My experience with aikido is minimal, but at least in jiu-jitsu, knowing what a move feels like provides the data you need to actually use it, and in a form that can be applied in real time. Knowing verbal principles behind the move, on the other hand, almost invariably leaves out important pieces, and even when your verbal understanding is more or less complete, it's too slow to actually use against all but the most cooperative opponents.

Of course, that's with a physical discipline. Going back to the OP's question, how can overthinking be harmful when trying to understand a purely abstract concept, or how can a concept be understood with less thought rather than more? Well, as Bound_up says, it's impossible to understand a concept without thinking. But the kind of thinking is essential.

For example, I struggled with learning calculus for a while. The teachers would explain various tools that could be used to take a derivative or integral, but it wasn't clear which tools to use when. I responded to this by trying to create a rigorous framework that would reliably let me know when to use which formulas. However, there simply weren't enough consistent, reliable patterns relating a certain type of function to a given formula for differentiating it. Everyone said to "stop overthinking" calculus, but I figured there had to be rigorous algorithms governing the use of u substitution vs. integration by parts, and that the people telling me to just relax were sloppy thinkers who didn't generally understand concepts beyond rote learning.

What ended up actually working, however, was accepting a more ad hoc approach. Creating an algorithm that could tell me what tools to use, first time, every time, was beyond my capabilities. But noticing that a function could be manipulated in a certain way, or expressed in a more tractable form, without expecting that the exact same process would work the next time, wasn't actually very difficult at all. It was a bit frustrating to accept that calculus would consistently require creativity, but that's what actually worked, when my overthinking turned out to be oversimplification.
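A concrete pair of integrals (my example, not from the original comment) illustrates why no simple lookup rule works: two integrands that differ only in an exponent call for entirely different tools, and the tell is structural, not superficial.

```latex
% u-substitution works when the derivative of an inner function
% appears as a factor of the integrand:
\int x e^{x^2}\,dx,
  \quad u = x^2,\ du = 2x\,dx
  \quad\Rightarrow\quad
  \tfrac{1}{2}\int e^{u}\,du = \tfrac{1}{2}e^{x^2} + C

% Integration by parts works when the integrand splits into a factor
% that simplifies under differentiation and one that is easy to integrate:
\int x e^{x}\,dx,
  \quad u = x,\ dv = e^{x}\,dx
  \quad\Rightarrow\quad
  x e^{x} - \int e^{x}\,dx = (x - 1)e^{x} + C
```

The first integrand contains the derivative of its exponent (up to a constant), so substitution collapses it in one step; the second does not, so parts is needed. Spotting that difference is pattern recognition, not an algorithm.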

Comment by aiyen on Break your habits: be more empirical · 2016-07-02T06:02:10.995Z · score: 0 (0 votes) · LW · GW

This makes sense, and isn't the sort of thing I would necessarily think of on my own. We will see if this leads to higher quality of life; I'm eager to try the experiment!

As for the obviousness, for every person who writes this sort of thing off as trivial, someone else may well benefit. We shall see.

Comment by aiyen on Open thread, Sep. 14 - Sep. 20, 2015 · 2015-09-16T17:44:35.596Z · score: 3 (5 votes) · LW · GW

To be fair, if it helps someone find useful information, so much the better. If not, who does it harm?

Comment by aiyen on Stupid Questions July 2015 · 2015-07-06T05:29:08.074Z · score: 0 (0 votes) · LW · GW

Hmm, could be useful. My biggest concern is that my degree is in geology, so it is obviously not directly applicable. How much opportunity is there to get involved given that my training is fairly irrelevant? I have something of a philosophy background, and math through calculus II, but my formal education isn't going to help.

Comment by aiyen on Stupid Questions July 2015 · 2015-07-05T18:20:43.641Z · score: 2 (2 votes) · LW · GW

Not sure if this is the right place for this; if not I will be happy to move this to a more appropriate location. I just graduated college, and plan on working for a year as a math tutor. After that, I don't really have any fixed plans, and lately I have been wondering about possibly trying to work for MIRI/CFAR/similar organizations. What exactly is needed to get involved? And if this appears feasible, what should I be working on during the gap year to be ready?

Comment by aiyen on Questions on Theism · 2015-04-29T20:40:04.986Z · score: 0 (0 votes) · LW · GW

Actually, I don't have any of the community-based fears. Most of my friends are atheist or irreligious, and while my family would be concerned, I'm not especially worried about their reactions. The guide for the recently deconverted is nice, but I'm still having fears. Sorry-I'm not trying to drag things out! But this is taking a while, and I wish there was a way to just stop worrying.

Comment by aiyen on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-02T19:42:35.452Z · score: 0 (0 votes) · LW · GW

Let's see. First off, let's consider the problem as thoroughly as possible without proposing solutions. Harry is surrounded by Death Eaters with orders to fire should he move, speak in any language other than Parseltongue (and probably if he makes any sound other than a hiss), raise his wand, and presumably if he does anything else suspicious (such as casting a visible spell without raising his wand or speaking an incantation). Lord Voldemort will presumably order his death (and likely shoot at him) if he does not appear to be complying with the instructions to tell the Dark Lord as many secrets as possible.

Therefore, he needs a countermeasure that can be used without giving any sign until it's too late, or a way to convince the Dark Lord of his cooperation, either to the point of making his continued survival valuable to Riddle Sr., or to the point of buying enough time to use a countermeasure.

Cooperating, or at least plausibly faking cooperation should be fairly simple. He can explain his understanding of Dementors, telling Voldemort that his desire to keep it secret was to prevent an infohazard to conventional Patronus casters; as Voldemort is not one such he will not be harmed by the information. He can explain partial transfiguration-it's not likely a difficult concept for the Dark Lord to grasp, but it seems to be one he hasn't thought of. Buying time is not a problem.

More difficult is what to do while/after buying time. He either needs that countermeasure, or else a way to convince Lord Voldemort that he is worth more alive than dead. As Voldemort fears existential risk greatly, the main way to convince him would be to point out that Harry is not the only source of x-risk, and that he very well may now be a means of reducing it. Prophecies are spoken to those with the power to fulfill or avert them, and the "tear apart the very stars in the heavens" prophecy was spoken to Voldemort, suggesting that he might be able to alter the future, and may well have already done so (resurrecting Hermione, binding Harry with the Vow). As such, Harry is no longer necessarily a universal threat. Furthermore, he's not THAT special. He's highly intelligent and a wizard; that's about the sum total of his unusual powers, and it's hardly that rare of a combination (rare enough that we've only heard of one/two Riddle-level intelligent wizards in the story, but if Voldemort plans to live forever, another one will surely arise in the absence of dire action taken to prevent it and/or a catastrophe).

As such, if Harry could have destroyed the stars, another wizard will likely emerge as a threat to do exactly that. For that matter, depending on the method of stellar annihilation, it might be possible for a Muggle to accomplish this as well, or for Muggle actions to end the world as we know it (nuclear weapons, anyone?). Therefore, Lord Voldemort must either drastically repress Mankind to reduce x-risk (and this seems likely to be deadly dull for him, consider his horrified reaction to the possibility of spending his eternity in a dead world-he may not care about humanity the way a normal person does, but he finds us amusing enough to be worth preserving to some extent; also consider that he valued having an equal/near equal enough to make a copy of himself and dragged out his war with Dumbledore far beyond the point he could have easily beaten him, so suppressing intelligence and creativity for fear of their misuse is likely to be repugnant to Voldemort), be destroyed/spend eternity in a dead world, or find an intelligent solution to avert x-risk without taking drastic actions that make the world boring. The last of these options is the only one that Lord Voldemort seems likely to consider acceptable, and Harry might be a useful asset in finding a solution.

Alternatively, he could point out that the prophecy might refer to some form of apotheosis, rather than calamity. Tearing apart the stars for energy/to prevent the loss of negentropy, which seems like a reasonable post-singularity plan. Voldemort is unlikely to want to take the risk, but both of these arguments together might sway him, or at least buy more time. Of course, this may well require hearing the prophecy to learn enough details to craft a convincing argument, but the incident with Firenze might give Harry enough information to start without learning any more from Voldemort.

This might at least avert Harry's immediate death, and thus is one potential solution to Eliezer's challenge. The other option is to find a countermeasure.

The Boy-Who-Lived is naked save for his wand and glasses. Preempting/evading/deterring the Death Eaters' curses seems impossible without magic, which suggests that a countermeasure would involve the wand and/or glasses. By the time he speaks an incantation, he will be cursed, suggesting that we need wordless, invisible magic (at least invisible until it's too late!).

Transfiguration is wordless, and Harry can even reverse transfiguration without a wand. Do we see any other spells he's capable of casting without words? If not, we're probably looking at untransfiguring his glasses-air can't be transfigured, and his wand isn't touching anything else. Unless there's a range on transfiguration? He's only done it before on things his wand has been touching, but that doesn't make any sense-the effect isn't limited to a one molecule layer that's "actually touching" the wand, and when you look at the quantum structure of objects there isn't a hard line between "contact" and "not in contact" anyway! That would allow him to transfigure the ground his wand is pointing at. He'd need a weapon or device that was too small to be noticed-possibly nanites or nano-scale line? That could allow him to strike back at the Death Eaters and/or threaten to do so, and explaining secrets/arguing for his continued existence as an x-risk mitigator should give him enough time to do so.

Nanites might provide x-risk in their own right, which means that the Vow might not allow it, but if he could limit them enough (cannot replicate, or can't replicate beyond a few generations?) he might have a shot. Or transfigure the ground into a gas-Harry'd be affected too, but he only needs to avoid immediate death, and if he can get the Stone, otherwise-fatal transfiguration poisoning could be cured.

On the bright side, touching Voldemort with anything magical (and transfigured material should count!) will trigger the resonance, and the Death Eaters are nowhere nearly as formidable as their master. We don't know the exact rules on the resonance, but if the "stronger magic means stronger backlash" theory is correct Harry might be able to incapacitate Voldemort while remaining upright himself.

The main difficulty with this approach is that even though Harry might be able to trigger the resonance with a fairly innocuous gas (heck, just make a little more air!), the Death Eaters would presumably fire the moment their master was affected. Poisonous/soporific gas would work on the Eaters as well, but it would impact Harry too-is there a substance so fast-acting that he could simply hold his breath/keep exhaling while talking to Voldemort, and then everyone inhaling would be dropped? I don't know of any gas that fast-acting, but if one exists it might provide another solution.

To sum up:

Potential solutions that I can think of-

  1. Convince Voldemort to keep Harry on as an x-risk mitigator.
  2. Convince Voldemort that the prophecy refers to an apotheosis, rather than an apocalypse (seems unlikely to work by itself, but might be useful in conjunction with 1).
  3. Buy time with secrets/attempts to use 1 or 2; transfigure the ground into a weapon (gas, monofilament line, nanites?) If the line is used, some form of guiding nanites may be required; alternatively, transfigure the line extending into Voldemort/the Death Eaters.
  4. Untransfigure glasses. Do we know what was transfigured to make the glasses to begin with? Could it be a useful countermeasure?

Non-HPMOR related note-I found organizing my thoughts far easier while typing this than while trying to figure solutions out before. Has anyone else noticed a "writing makes thinking easier" effect, and could this be a useful technique?

Comment by aiyen on Questions on Theism · 2015-01-28T22:36:10.909Z · score: 0 (0 votes) · LW · GW

Hey, it's been a while since I've looked over this thread. A lot of the answers have been very helpful; thanks! Another question-if I decide that I want to let go of my faith, do you have any advice for overcoming indoctrination and not being constantly afraid that I'm headed straight to hell? A few times I've decided that it made sense to let go, but while my beliefs are starting to tend more towards naturalism, my aliefs are still firmly Christian, and the fear keeps pushing me back. As a number of people have pointed out, religious memes are very resistant to purely reason-based attack. Thoughts?

Comment by aiyen on How subjective is attractiveness? · 2015-01-15T03:13:24.545Z · score: 0 (0 votes) · LW · GW

It looks like a signal to me. Maybe we're misinterpreting, but if so, we have multiple people making the same mistake.

Comment by aiyen on How subjective is attractiveness? · 2015-01-15T03:11:11.259Z · score: 0 (0 votes) · LW · GW

Or you're typical-minding? I'd give her a 4, but that doesn't mean that anyone and everyone is going to feel the same way. In my experience at least, perceptions of attractiveness are higher variance than most other preferences-and "no accounting for taste" is a proverb for a reason.

Comment by aiyen on Questions on Theism · 2014-10-20T23:33:07.552Z · score: 0 (0 votes) · LW · GW

My observation was that people said syllables that I didn't understand. As for telling if it was another language or nonsense, finding that one of the phrases actually made sense in another language would be very strong evidence for the existence of God. Proving that it was nonsense would be harder-how do you know when you've checked all the languages?

Does something like "koriata mashita mashuta amon hala" mean anything in any language anyone here knows? It sounds somewhat Japanese to me.

Comment by aiyen on Questions on Theism · 2014-10-16T21:20:14.619Z · score: 1 (3 votes) · LW · GW

Just finished the Quackwatch article. My prior for belief is dropping substantially.