Posts

Comments

Comment by nazgulnarsil3 on Wrong Tomorrow · 2009-04-02T10:01:10.000Z · score: 0 (0 votes) · LW · GW

the likely result is that pundits would start taking more care to make their predictions untestable.

this is already the norm: 1) make a qualitative prediction, 2) reject criticism with the "no true Scotsman" fallacy (x wasn't really an example of y because z).

Comment by nazgulnarsil3 on Markets are Anti-Inductive · 2009-02-26T02:45:45.000Z · score: 1 (1 votes) · LW · GW

the general public being unaware of the fact that stock prices are an equilibrium of beliefs about whether the stock will rise or fall is not a major cause for concern.

Comment by nazgulnarsil3 on Fairness vs. Goodness · 2009-02-23T03:47:24.000Z · score: 2 (2 votes) · LW · GW

AA is willing to pay in order to achieve a more egalitarian outcome. in other words: AA is willing to pay money in order to force others to be more like him.

a desire to change the payoff matrix itself is my point: one monkey gets the banana and the other monkey cries justice. justice is formalized fairness. I can easily envision that AA would also pay in order to alter the payoff matrix.

So let's set up another trial of this with an added meta-dilemma: in each case the disadvantaged member of the trial can forfeit another 5 points in order to alter the payoff matrix itself in an egalitarian direction. The caveat is that the advantaged person can pay an additional 5 points to stop them. Or make it so the disadvantaged player can contribute any number of points and the advantaged one has to contribute an equal number to stop them. what sort of equilibrium would result here?

Comment by nazgulnarsil3 on Fairness vs. Goodness · 2009-02-22T21:20:00.000Z · score: -3 (3 votes) · LW · GW

god damn communists. always on about income inequality instead of trying to maximize the amount everyone gets. I always refer to Mind the Gap by Paul Graham in these cases.

Comment by nazgulnarsil3 on Good Idealistic Books are Rare · 2009-02-18T04:03:11.000Z · score: 0 (0 votes) · LW · GW

Richard, that can be described as near/far. also I'm not sure cynic/idealist is the correct dichotomy, as cynicism seems a form of idealism. idealist/realist? optimist/cynic?

Comment by nazgulnarsil3 on An African Folktale · 2009-02-16T20:50:49.000Z · score: 2 (2 votes) · LW · GW

I regretted posting the original comment immediately but felt like your comment "maybe this is why africa stays poor" was kind of a pandora's box for this sort of thing.

all discussions lead inexorably toward ever more fundamental issues until eventually you're talking about axiomatic beliefs. This seems to fall in line with the idea that either you have different priors or one of you has made a mistake. Since this is a community of intelligent commenters, it follows that most disagreements are probably due to different core values/assumptions.

But anyway, you're right. this stuff will have plenty of opportunity to be aired out when the blog transition occurs. Until then, arguing about government is counterproductive to the focus of the site, since there is limited comment space.

my bad.

Comment by nazgulnarsil3 on An African Folktale · 2009-02-16T18:42:08.000Z · score: 2 (4 votes) · LW · GW

it's disingenuous to blame NASA, as if we couldn't afford both!

the point here is that the money that the government spends is 100% wasted on these things, not that we should find ways to pay for more stuff. I don't support government spending at all. when I talk about environmentalism I'm talking about the government whipping people into a frenzy in order to justify ridiculous schemes that private enterprise would never support. If there was less taxation and people were rational about picking charities to reduce overall suffering micronutrient and clean water programs would get huge boosts. They get practically nil right now because they aren't glamorous, and the government takes a large percentage of money that people would otherwise be generous with.

Comment by nazgulnarsil3 on An African Folktale · 2009-02-16T06:05:24.000Z · score: 0 (8 votes) · LW · GW

Do you feel the same indignation toward spending on, say, NASA?

of course. environmentalism is just the latest in a long string of justifications for government subsidy. NASA is another great example of breathtaking levels of waste.

Comment by nazgulnarsil3 on An African Folktale · 2009-02-16T01:48:05.000Z · score: 3 (17 votes) · LW · GW

A big part of the reason Africa stays poor is that nutrition and education are so poor that sub-Saharan IQs average about 70. Environmentalism pisses me off because for a fraction of what we are spending on the public hysteria we could be providing micronutrients that would lead to huge decreases in overall suffering. Ditto with providing clean water.

What the hell is green tech? Is it just more efficient tech? Or does it have less to do with the technology and more to do with economic agents acknowledging externalities, consciously choosing to internalize some of that cost?

Comment by nazgulnarsil3 on Rationality Quotes 26 · 2009-02-14T21:31:06.000Z · score: 1 (1 votes) · LW · GW

terrifying freedom

I believe this is one of the prime motivators for religion, conspiracy theories, and all other manner of hidden organization schemes. the thought that this is literally IT and no one will judge the wicked, no one is guiding the leviathan, no one will care if you make a stupid mistake and it costs you your life.

"The cold, suffocating dark goes on forever and we are alone. Live our lives, lacking anything better to do. Devise reason later. Born from oblivion; bear children, hell-bound as ourselves, go into oblivion. There is nothing else. Existence is random. Has no pattern save what we imagine after staring at it for too long. No meaning save what we choose to impose. This rudderless world is not shaped by vague metaphysical forces. It is not God who kills the children. Not fate that butchers them or destiny that feeds them to the dogs. It’s us. Only us." - Rorschach

Comment by nazgulnarsil3 on Cynicism in Ev-Psych (and Econ?) · 2009-02-11T18:46:22.000Z · score: 0 (0 votes) · LW · GW

Huh, I was unaware that the whole concept of spandrels originated with Gould. Point taken: perhaps one can reinterpret seemingly random noise as itself an adaptation that overcomes simple hill climbing. Mutations themselves are a random walk; selection is not random. The environment acts as a hill and organisms as hill-climbing algorithms, with the top of the hill being maximally efficient use of resources for reproduction. Is this correct?
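The random-mutation/non-random-selection picture above can be sketched in a few lines of Python. The fitness function, step size, and peak location here are arbitrary illustrative choices, not anything from evolutionary biology:

```python
import random

def fitness(x):
    """An arbitrary single-peaked fitness landscape with its maximum at x = 3."""
    return -(x - 3.0) ** 2

def hill_climb(x, steps=10_000, step_size=0.1, seed=0):
    """Random mutation (a random walk in trait space) plus non-random
    selection (keep a mutant only if it sits higher on the hill)."""
    rng = random.Random(seed)
    for _ in range(steps):
        mutant = x + rng.uniform(-step_size, step_size)  # random mutation
        if fitness(mutant) > fitness(x):                 # non-random selection
            x = mutant
    return x

result = hill_climb(0.0)  # climbs from 0 toward the peak near 3.0
```

The mutations are undirected; only the acceptance rule is directional, which is the distinction the comment is drawing.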

Comment by nazgulnarsil3 on Cynicism in Ev-Psych (and Econ?) · 2009-02-11T15:26:53.000Z · score: 0 (2 votes) · LW · GW

we have X because it increased inclusive genetic fitness, full stop.

if evolutionary psychologists actually believe this it is a good example of why they aren't taken very seriously. what about spandrels?

Comment by nazgulnarsil3 on Informers and Persuaders · 2009-02-10T20:46:45.000Z · score: 0 (0 votes) · LW · GW

yes, the easiest way to spot scientism is to look for value statements being conflated with factual statements. This is done unintentionally in many cases; the persuaders can't help it because they can't distinguish between the two.

  1. you falsify data that someone thought was factual and used to support their values. They take this as an attack on said values.
  2. you point out errors in the chain of logic between factual statements and values, and/or point out that there is no valid chain of logic between their values and facts.
  3. you make a factual statement and it is confused for a value statement. This happens because we're taught to value truth and this valuation occasionally glitches. People assume that because you say something is true, you are also saying that it is good.
  4. vice versa of the above: you make a value statement and people take it as a factual statement. this is the goal of a persuader.

I'm sure there are other common examples.

Comment by nazgulnarsil3 on (Moral) Truth in Fiction? · 2009-02-09T17:57:37.000Z · score: 0 (2 votes) · LW · GW

Three Worlds Collide would make a decent movie... just have to make the reasoning of the characters more explicit for people unfamiliar with the concepts involved.

Comment by nazgulnarsil3 on ...And Say No More Of It · 2009-02-09T01:58:13.000Z · score: 3 (3 votes) · LW · GW

scientists fight over the division of money that has been block-allocated by governments and foundations. I should write about this later.

yes you should. this is a very serious issue. in art the artist caters to his patron. the more I see of the world of research in the U.S. the more I am disturbed by the common source of the vast majority of funding. science is being tailored and politicized.

Comment by nazgulnarsil3 on True Ending: Sacrificial Fire (7/8) · 2009-02-05T21:41:33.000Z · score: 0 (0 votes) · LW · GW

if the SHs find humans via another colony world, blowing up earth is still an option. I don't believe the SHs could have been bargained with. They showed no inclination toward any compromise other than whichever one they had calculated as optimal based on their understanding of humans and babyeaters. Because the SHs don't seem to value the freedom to make sub-optimal choices (free will), they may also worry much less about making incorrect choices based on imperfect information (this is the only rational reason I can come up with for them wanting to make a snap decision when a flaw in their data could lead to more of what they don't want: suffering). It is probably the norm for SHs to make snap decisions based on all available data rather than take no action while waiting for more data. They must have had a weird scientific revolution.

Comment by nazgulnarsil3 on OB Status Update · 2009-01-28T23:21:58.000Z · score: 2 (1 votes) · LW · GW

keeping the signal-to-noise ratio high in a community is easy. Just make sure to write long, detailed posts about abstruse subjects (we have that covered) and don't respond to trolls. Any commoner who stumbles upon it will get bored and leave. This seems to have worked for Hacker News so far.

Comment by nazgulnarsil3 on Rationality Quotes 25 · 2009-01-28T23:19:38.000Z · score: 2 (2 votes) · LW · GW

with regards to the Steve Jobs Quote: Democracy is the theory that the common people know what they want and deserve to get it good and hard. - H.L. Mencken

Comment by nazgulnarsil3 on The Fun Theory Sequence · 2009-01-26T06:17:17.000Z · score: 6 (6 votes) · LW · GW

but an eden with a reversible escape option is surely better than an eden with a non-reversible escape option yes?

Comment by nazgulnarsil3 on Failed Utopia #4-2 · 2009-01-21T17:58:52.000Z · score: 4 (4 votes) · LW · GW

ZM: I'm not saying that the outcome wouldn't be bad from the perspective of current values; I'm saying that it would serve to lessen the blow of sudden transition. The knowledge that they can get back together again in a couple of decades seems like it would placate most. And I disagree that people would cease wanting to see each other. They might prefer their new environment, but they would still want to visit each other. Even if Food A tastes better than Food B in every dimension, I'll probably still want to eat Food B every once in a while.

James: Considering the fact that the number of possible futures that are horrible beyond imagining is far far greater than the number of even somewhat desirable futures I would be content with a weirdtopia. Weirdtopia is the penumbra of the future light cone of desirable futures.

Comment by nazgulnarsil3 on Failed Utopia #4-2 · 2009-01-21T15:33:21.000Z · score: 8 (10 votes) · LW · GW

am I missing something here? What is bad about this scenario? the genie himself said it will only be a few decades before women and men can be reunited if they choose. what's a few decades?

Comment by nazgulnarsil3 on Interpersonal Entanglement · 2009-01-20T22:59:57.000Z · score: 1 (1 votes) · LW · GW

rw: methods of short-circuiting the sex drive fall into two categories. the first is controlling sensory input (holodecks/virtual reality and/or cyborgs). the second is bypassing the senses and directly messing with the brain itself via implants or genetic manipulation.

the second type is more prone to unintended consequences than the first.

Comment by nazgulnarsil3 on Interpersonal Entanglement · 2009-01-20T09:44:33.000Z · score: 5 (5 votes) · LW · GW

Our drive to do better than our neighbor is a deeply ingrained metric of how we judge ourselves. In essence we recognize that our own assessment is biased and look for cues from others. Eliminating this seems like eliminating part of the foundation of a social species.

I think you're being remarkably binary about this. I think it more realistic that non-sentient sexdroids will enable healthier relationships. When people get the urge to procreate with fitter partners they can just spend an afternoon in the holodeck. I see what you're saying as advocating keeping people a little hungry so that they appreciate food more.

Comment by nazgulnarsil3 on Interpersonal Entanglement · 2009-01-20T09:19:34.000Z · score: 1 (1 votes) · LW · GW

I thought a big part of the appeal of the supervillain fantasy wasn't your absolute standard of living but your comparative standard of living. It's boring if everyone has a volcano lair. People want a doomsday weapon so that they are feared and respected.

Comment by nazgulnarsil3 on Continuous Improvement · 2009-01-11T03:06:13.000Z · score: 1 (1 votes) · LW · GW

an investment earning 2% annual interest for 12,000 years adds up to a googol (10^100) times as much wealth.

no, it adds up to a googol of economic units. in all likelihood the actual wealth that the investment represents will stay roughly the same, or grow and shrink within fairly small margins.
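The arithmetic in the quoted claim does check out for nominal units: at 2% annual compounding the multiplier after 12,000 years is on the order of a googol. A quick check, done in log space since the raw value overflows a float:

```python
import math

# Multiplier after 12,000 years at 2% annual compounding: 1.02 ** 12000.
# Work in base-10 logs to avoid floating-point overflow.
log10_multiplier = 12_000 * math.log10(1.02)
print(f"1.02^12000 is about 10^{log10_multiplier:.0f}")  # on the order of 10^103
```

10^103 is a bit more than a googol (10^100), so the quoted figure is, if anything, conservative about the nominal units; the comment's point is that the real wealth behind those units is another matter.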

it seems you conclude with an either/or between subjective-experience improvement and brain tinkering. I think it more likely that we will improve our subjective experience up to a certain point of feasibility and then start with the brain tinkering. Some will check out by wireheading themselves, but most won't. Some will be more disposed toward brain tinkering; some will plug themselves into experience machines instead. The average person will do a little of both, trying various brain modifications the way we try drugs today. Will this be dangerous? Well, the first people to try a new drug are taking a big risk, but the guinea pigs are a small minority. And people will use experience machines, but most won't surrender to them, just like most don't die playing World of Warcraft today.

Comment by nazgulnarsil3 on Emotional Involvement · 2009-01-07T06:41:24.000Z · score: 0 (0 votes) · LW · GW

so would you be for or against an AI that inserted us into an experience machine programmed to provide a life of maximum self expression without our knowledge?

Comment by nazgulnarsil3 on A New Day · 2008-12-31T21:18:15.000Z · score: -1 (1 votes) · LW · GW

the value of this is most easily demonstrated in daydream scenarios. I'm guessing that other people, like me, find themselves going through some of the same fantasies time and time again, whether they be about wealth, sex, prestige or whatever else. A few days ago I banished all these familiar fantasies and spent some time thinking up new ones. Not only was it a wonderfully fun exercise, it seemed to increase my creativity when doing other activities throughout the day.

Comment by nazgulnarsil3 on Can't Unbirth a Child · 2008-12-28T22:01:10.000Z · score: 2 (2 votes) · LW · GW

the difference between reality and this hypothetical scenario is where control resides. I take no issue with the decentralized future roulette we are playing when we have this or that kid with this or that person. all my study of economics and natural selection indicates that such decentralized methods are self-correcting. in this scenario we approach the point where the future cone could have this or that bit snuffed by the decision of a singleton (or a functional equivalent), advocating that this sort of thing be slowed down so that we can weigh the decisions carefully seems prudent. isn't this sort of the main thrust of the friendly AI debate?

Comment by nazgulnarsil3 on Can't Unbirth a Child · 2008-12-28T21:38:24.000Z · score: 0 (0 votes) · LW · GW

what effect would it have on the point

if rewinding is morally unacceptable (erasing could-have-been sentients) and you have unlimited power to direct the future, does this mean that all the could-have-beens from futures you didn't select are on your shoulders? This is directly related to another recent post. If I choose a future with fewer sentients who have a higher standard of living, am I responsible for the sentients that would have existed in a future where I chose to let a higher number of them be created? If you're a utilitarian, this is the delicate point: at what point are two sentients with a certain happiness level worth one sentient with a higher happiness level? Does a starving man steal bread to feed his family? This turns into: should we legitimize stealing from the baker to feed as many poor as we can?

Comment by nazgulnarsil3 on Can't Unbirth a Child · 2008-12-28T20:18:39.000Z · score: 1 (1 votes) · LW · GW

Actually it sounds pretty unlikely to me, considering the laws of thermodynamics as far as I know them.

you can make entropy decrease in one area as long as a compensating amount of entropy is generated somewhere else within the system. what do you think a refrigerator is? what if the extra entropy that needs to be generated in order to rewind is shunted off to some distant corner of the universe that doesn't affect the area you are worried about? I'm not talking about literally making time go in reverse. You can achieve what is functionally the same thing by reversing all the atomic reactions within a volume and shunting the entropy generated by the energy you used to do this to some other area.

Comment by nazgulnarsil3 on Can't Unbirth a Child · 2008-12-28T18:33:45.000Z · score: 1 (1 votes) · LW · GW

I think it's worth noting that truly unlimited power means being able to undo anything. But is it wrong to rewind when things go south? if you rewind far enough you'll be erasing lives and conjuring up new different ones. Is rewinding back to before an AI explodes into a zillion copies morally equivalent to destroying them in this direction of time? unlimited power is unlimited ability to direct the future. Are the lives on every path you don't choose "on your shoulders" so to speak?

Comment by nazgulnarsil3 on Can't Unbirth a Child · 2008-12-28T17:08:29.000Z · score: 3 (3 votes) · LW · GW

Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?

given that the vast majority of possible futures are significantly worse than this, I would be pretty happy with this outcome. but what happens when we've filled the universe? much like in the board game Risk, your attitude toward your so-called allies will abruptly change once the two of you are the only ones left.

Comment by nazgulnarsil3 on Devil's Offers · 2008-12-26T23:02:32.000Z · score: 0 (0 votes) · LW · GW

Peter: if your change of utility functions is of domain rather than degree you can't calculate the negative utility. the difference in utility between making 25 paperclips a day and 500 a day is a calculable difference for a paperclip maximizing optimization process.

however, if the paperclip optimizer self-modifies and inadvertently changes his utility function to maximizing staples....well you can't calculate paperclips in terms of staples. This outcome is of infinite negative utility from the perspective of the paperclip maximizer. And vice-versa. Once the utility function has changed to maximizing staples, it would be of infinite negative utility to change back to paperclips from the perspective of the staple maximizing utility.

this defeats the built-in timeout clause. with a modification that only affects your ability to reach your current utility, you have a measurable output. with a change that alters your utility function, you are changing the very thing you were using to measure success.

I know that this isn't worded very well. I'm sure one of Eliezer's posts has done this subject better at some point.

Comment by nazgulnarsil3 on Devil's Offers · 2008-12-26T05:55:56.000Z · score: 0 (0 votes) · LW · GW

I should have specified a domain change. a modification that varies your utility function by degree has a calculable negative utility.

Comment by nazgulnarsil3 on Devil's Offers · 2008-12-26T05:03:33.000Z · score: 0 (0 votes) · LW · GW

I think that an empirical approach to self-modification would quickly become prominent: alter one variable and test it, with a self-imposed timeout clause. the problem is that this does not apply to one sort of change: a change in utility function. an inadvertent change of utility function is extremely dangerous, because changing your utility function is of infinite negative utility by the standards of your current utility function, and vice versa.

Comment by nazgulnarsil3 on Sensual Experience · 2008-12-22T05:19:48.000Z · score: 0 (2 votes) · LW · GW

frelkins: in that vein, what if we could flip the switch in the brain that usually only flips when you are sleeping with a new partner? isn't that half of humanity's sex problems gone in one shot? it seems to me that the realm of sex is the one in which it is most obvious that desires shaped by natural selection are not in line with actual happiness and fulfillment.

Comment by nazgulnarsil3 on High Challenge · 2008-12-19T20:50:28.000Z · score: 4 (4 votes) · LW · GW

caledonian: I agree. if we develop some sort of virtual reality that can provide any desire, we'll just be selecting against people who go in and never come out. If so, the future will be populated by people who refuse such self-gratification.

Comment by nazgulnarsil3 on High Challenge · 2008-12-19T15:14:46.000Z · score: 0 (0 votes) · LW · GW

what's more fun? a holodeck that you have complete control over? or a holodeck with built in constraints?

playing god might be fun for a while, but I think everyone would eventually switch over to programs with built-in constraints to challenge themselves. the profession of highest prestige will probably be writing really, really good holodeck programs.

Comment by nazgulnarsil3 on Prolegomena to a Theory of Fun · 2008-12-18T00:44:45.000Z · score: 3 (3 votes) · LW · GW

yeah, I did. Only because I see political machinations as far more dangerous than the problems happiness studies solve.

Comment by nazgulnarsil3 on Prolegomena to a Theory of Fun · 2008-12-17T23:45:37.000Z · score: 2 (6 votes) · LW · GW

as a preference utilitarian I dislike happiness studies. they're much too easy to use as justification for social engineering schemes.

Comment by nazgulnarsil3 on Visualizing Eutopia · 2008-12-16T23:50:23.000Z · score: 0 (0 votes) · LW · GW

Shulman: hmm, true. alright: a fission reactor with enough uranium to power everything for several lifetimes (whatever my lifetime is at that point), and accelerate the asteroid up to relativistic speeds. aim the ship out of the galactic plane. the energy required to catch up with me will make it unprofitable to do so.

Comment by nazgulnarsil3 on Visualizing Eutopia · 2008-12-16T21:03:36.000Z · score: 0 (0 votes) · LW · GW

Carl Shulman: that is why I will create a solar-powered holodeck with a built-in replicator, and launch myself into deep space attached to an asteroid with enough elements for the replicator.

rest of humanity can go to hell.

Comment by nazgulnarsil3 on Visualizing Eutopia · 2008-12-16T20:09:45.000Z · score: -1 (1 votes) · LW · GW

maximized freedom with the constraint of zero violence. violence will always exist as long as there is scarcity, so holodecks + replicators will save humanity.

Comment by nazgulnarsil3 on For The People Who Are Still Alive · 2008-12-14T17:53:30.000Z · score: 0 (0 votes) · LW · GW

the most important adaptation an ideology can make to improve its inclusive fitness for consumption by the human brain is to

  1. refrain from making falsifiable claims
  2. convince its followers to aggressively expand

1 is accomplished by making the ideology rest on a priori claims. everything that rests on top of that claim can be perfectly logical given the premise. since most people don't examine their beliefs axiomatically, few will question the premise as long as they are provided the bare minimum of comfort. 2 is accomplished by activating the "morally righteous" centers of the brain. We're not aggressively expanding, we're bringing democracy/communism/god/whatever to the heathens.

Having a high standard of living seems incompatible with natural selection. Like sadness and pain leading to greater inclusive fitness in an individual, devoting more resources to expansion increases the inclusive fitness of any social system. Those who don't expand are swallowed by those who do. It only takes one aggressively expansionist civilization per Hubble volume to wipe out all other forms of civilization.

Comment by nazgulnarsil3 on Sustained Strong Recursion · 2008-12-05T21:56:42.000Z · score: 0 (0 votes) · LW · GW

I like to think of life as a P·e^(rt) equation: P = you and your skills, r = your investment ability/luck, t = invest early and take advantage of tax laws.

Comment by nazgulnarsil3 on Selling Nonapples · 2008-11-13T21:14:05.000Z · score: 0 (0 votes) · LW · GW

I hope I live to see a world where synchronous computing is considered a quaint artifact of the dawn of computers. cognitive bias has prevented us from seeing the full extent of what can be done with this computing thing. a limit on feasible computability (limited by our own brain capacity) that has existed for millions of years, shaping the way we assume we can solve problems in our world, is suddenly gone. we've made remarkable progress in a short time; I can't wait to see what happens next.

Comment by nazgulnarsil3 on Ask OB: Leaving the Fold · 2008-11-09T23:37:13.000Z · score: 0 (0 votes) · LW · GW

in the course of natural selection, conformity to social values took on a much higher priority than the truth. especially for women who are vulnerable and must adapt to please whichever males are in charge at the time. confronting the average person with the truth is a waste of time. they place a higher priority on social status. If you live in a primarily Christian community don't expect anyone to listen, they would lose status by seriously considering your doubts.

Comment by nazgulnarsil3 on Recognizing Intelligence · 2008-11-07T23:42:25.000Z · score: -2 (2 votes) · LW · GW

I tend to think aliens shaped by natural selection will exhibit many of the same neurological adaptations that we do.

Comment by nazgulnarsil3 on Hanging Out My Speaker's Shingle · 2008-11-05T23:46:10.000Z · score: 0 (0 votes) · LW · GW

how much will you be charging for bar mitzvahs?

Comment by nazgulnarsil3 on Economic Definition of Intelligence? · 2008-11-04T12:03:24.000Z · score: -2 (2 votes) · LW · GW

If humanity was forced to choose a simple optimization process to submit itself to I think capitalism would be our best bet.