Posts

Solomonoff induction on a random string 2014-04-08T05:30:08.541Z · score: 0 (3 votes)
Amanda Knox Guilty Again 2014-01-31T04:12:34.529Z · score: 7 (18 votes)

Comments

Comment by christopherj on Interlude with the Confessor (4/8) · 2015-03-16T05:05:57.988Z · score: 0 (0 votes) · LW · GW

Weirdtopia? No -- history. For example, the Bible rules allowed for capturing the enemy's women as loot, or having sex with one's slaves, and I'm fairly certain that a woman's wishes in terms of consent mattered a lot less than those of the male in charge of her. I seem to recall that at some point in Europe the feudal lord or whatever could have his way with your wife, and you had no recourse. This, of course, probably has more to do with inequality than anything else.

As for consent, it's ... complicated. For one thing, it exists in the mind and thus cannot reliably leave a physical trace (because of how memory works by retroactively fitting facts into a narrative, not even the owner of the brain can be certain). And then there's sleeping, and drugs, and mental illness, and changing one's mind, and how we decided that none of the usual applies when the person is below a certain age. As a hypothetical example, consider a mute quadriplegic who can only communicate by blinking, gave consent, then withdrew consent halfway through the act but while their partner couldn't see their eyes.

Besides, it's not like any modern society would allow assault or harassment, so if they got rid of the laws concerning the special case where sex is involved, it wouldn't really change much.

Comment by christopherj on Pascal's Mugging Solved · 2014-05-30T16:01:35.307Z · score: 1 (1 votes) · LW · GW

Pascal's mugging against an actual opponent is easy. If they are able to carry out their threat, they don't need anything you would be able to give them. If the threat is real, you're at their mercy and you have no way of knowing if acceding to their demand will actually make anyone safer, whereas if he's lying you don't want to be giving resources to that sort of person. This situation is a special case of privileging the hypothesis, since for no reason you're considering a nearly impossible event while ignoring all the others.

If we're talking about a metaphor for general decision-making, eg an AI whose actions could well affect the entirety of the human race, it's much harder. I'd probably have it ignore any probabilities below x%, where x is calculated as a probability so small that the AI would otherwise be paralyzed from worrying about the huge number of improbable things. Not because it's a good idea, but because as probability approaches zero, the number of things to consider approaches infinity, yet processing power is limited.
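In code, the cutoff rule might look like this (the cutoff value and all utilities below are invented for illustration, not a serious proposal):

```python
def expected_utility(outcomes, cutoff=1e-9):
    """Sum p * u over (probability, utility) pairs, ignoring any
    outcome less likely than the cutoff."""
    return sum(p * u for p, u in outcomes if p >= cutoff)

# A mugger's threat: an astronomically bad outcome at astronomically low odds.
mugging = [(1e-30, -1e40), (1.0 - 1e-30, 0.0)]

print(expected_utility(mugging))              # 0.0 -- the threat is ignored
print(expected_utility(mugging, cutoff=0.0))  # about -1e10 -- the threat dominates
```

The tradeoff is exactly the one described: without the cutoff, a sufficiently extreme claimed payoff swamps the calculation no matter how improbable it is.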

Comment by christopherj on A Dialogue On Doublethink · 2014-05-16T05:33:27.732Z · score: 0 (0 votes) · LW · GW

It is pretty much a necessity that humans will believe contradictory things, if only because consistency checking each new belief against each of your current beliefs is impossibly difficult. Cognitive dissonance won't occur if the contradiction is so obscure that you haven't noticed it, or perhaps wouldn't even understand exactly how it contradicts a set of 136 other beliefs even if it was explained to you. Even if you could check for contradictions, your values change drastically from one hour to the next (how much you value food, water, company, solitude, leisure, etc), and that will change all your beliefs that start with "I want ...". Most likely you actually have different bits of brain with different values vying for dominance.

Moreover, many times a belief is part of a group membership (eg "I support [cause]"), or simply feels good (eg "I am a good person"). People will not appreciate it if you point out contradictions in these things, possibly because they are instrumental and not epistemic beliefs. There is no doubt that professing contradictory beliefs can be highly beneficial (eg "Republicans are fiscally conservative, want small government, cut taxes, more money for the military and enforcing morality"; if you reject any of that you're not a viable candidate).

Comment by christopherj on A Dialogue On Doublethink · 2014-05-16T04:32:38.453Z · score: 1 (1 votes) · LW · GW

I myself lie effortlessly, and felt not a shred of guilt when, say, I would hide my atheism to protect myself from the hostility of my very anti-anti-religious father (he's not a believer himself, he's just hostile to atheism for reasons which elude me).

Hm, an atheist who hides his atheism, from his father who also seems to be an atheist (aka non-believer) but acts hostile towards atheists? Just out of curiosity, do you also act hostile towards atheists when you're around him?

Comment by christopherj on Questions to ask theist philosophers? I will soon be speaking with several · 2014-05-10T14:35:30.172Z · score: 0 (0 votes) · LW · GW

FWIW these questions have standard answers in Christian doctrine: he didn't want to be tortured to death, but he wanted to do God's will more than he not-wanted to be crucified.

Sure, but don't forget that in Christian doctrine Jesus=God. This vastly complicates the issue: God-the-Father demands that God-the-Son die on behalf of the sins of humanity, which God-the-Son doesn't want to do but is willing to do because it's what God-the-Father requires to bring Himself to forgive people, and He may have been ordered to as well. I don't know what would happen if God disobeys Himself.

Comment by christopherj on Arguments and relevance claims · 2014-05-06T00:11:21.766Z · score: 0 (0 votes) · LW · GW

And I expect the reason is that people who insufficiently ironman an argument are either more interested in the argument's technical correctness, or more interested in discrediting the claim.

Comment by christopherj on Arguments and relevance claims · 2014-05-06T00:00:06.989Z · score: 1 (1 votes) · LW · GW

Summary: Agreeing with people who insufficiently ironman an argument will be treated as agreeing that the argument is complete rubbish.

Comment by christopherj on The Extended Living-Forever Strategy-Space · 2014-05-04T05:14:02.351Z · score: 1 (1 votes) · LW · GW

Supplemental data preservation seems like a synergistic match with cryonics. You'd want to collect vast amounts of data with little effort, so no diaries or random typing or asking friends to memorize facts. MRIs and other medical records might help; keeping a video or audio recording of everything you do, and recording everything you do with your computer, should take little time and could preserve something that might aid reconstruction after cryonic preservation.

Simulation-based preservation attempts may be more likely than people expect, based on the logic that simulated humans likely outnumber physical humans (we could be in a simulation to determine how many simulations per human we will eventually make ourselves). However, it is clear that the simulator(s) either already are communicating with us or do not care to, and to gain any more direct access to their attention we'd have to hack the simulation, in which case there may be more clever things to do than call attention to our hacking. That said, it is likely that the simulators have highly advanced security technology compared to us. Alternately, given that we are probably being simulated by other humans, and that they might be watching, we may be able to appeal to their empathy.

Evolutionary Preservation and Genetic Preservation depend on a misunderstanding of genetics, Philosophical Preservation on a misunderstanding of the natures of reality vs rationalization, and Time-travel Preservation suggests that making a commitment to something that 10%-50% of humans already made will make you notable to time travelers. This sort of thing detracts from your suggestion since you're grasping at straws to find alternatives.

Granted, it's hard to find alternatives. I suppose EEG data could be collected as well, and would also have research benefits. However, like most of the other data that could be collected, it would probably only suffice as a sanity check on your cryonic reconstruction.

Comment by christopherj on Rebutting radical scientific skepticism · 2014-05-02T13:50:09.035Z · score: 0 (0 votes) · LW · GW

I don't consider this an advantage. My goal is to find vivid and direct demonstrations of scientific truths, and so I am happy to use things that are commonplace today, like telephones, computers, cameras, or what-have-you.

Well, you could use your smartphone's accelerometer to verify the equations for centrifugal force, or its GPS to verify parts of special and general relativity, or the fact that its chip functions to verify parts of quantum mechanics. But I'm not sure how you can legitimately claim to be verifying anything; if you don't trust those laws how can you trust the phone? It would be like using a laser rangefinder to verify the speed of light. For this sort of thing the fact that your equipment functions is better evidence that the people who made it know the laws of physics, than any test you could do with it.

Comment by christopherj on Link: Study finds that using a foreign language changes moral decisions · 2014-05-02T01:25:58.763Z · score: 1 (1 votes) · LW · GW

Some hypotheses: 1) Words in the foreign language are not tainted with morality. Using more neutral words in the problem description would have a similar effect.

2) The extra time taken to parse the foreign language description forces more time to think about the problem. Saying the problem slowly, or writing with a huge font, would have a similar effect.

3) The distraction of translating has an effect. Giving the subjects an additional task to do would have a similar effect.

Other studies showed an effect of language helping to discriminate between things like two different colors (aided if your language uses different words for them). That seemed like a different thing, perhaps an effect of categories and practice.

Comment by christopherj on Discussion: How scientifically sound are MBAs? · 2014-05-02T00:53:58.139Z · score: 3 (3 votes) · LW · GW

A huge chunk of an MBA's job is to play a hostile asymmetric game against their employees (where productivity has somewhere between negative value and positive sentimental value to the employee, while wages have negative value to the employer), an approximately zero-sum game against competitors, and a more neutral zero-sum game against their customers, trading quality and advertising for price. These sorts of games are complicated, and winning strategies change as the playing field evolves and your opponents change tactics. A working strategy could quite legitimately be described as "it simply works, don't ask why" because no one has quite figured out why. And different strategies will work better or fail miserably depending on who the other players are.

Comment by christopherj on Rebutting radical scientific skepticism · 2014-05-02T00:29:58.323Z · score: 1 (1 votes) · LW · GW

As a general rule, the easiest way to verify a scientific discovery is to find out how the original discoverer did it and replicate their experiment. There are sometimes easier ways, and occasionally the discoverers used some expensive equipment... but mostly the requirement is some math and elbow grease/patience. Another advantage of replicating the original discovery is that you don't accidentally use unverified equipment or discoveries (ie equipment dependent on laws that were unknown at the time).

Comment by christopherj on The Universal Medical Journal Article Error · 2014-04-29T22:44:29.976Z · score: 0 (0 votes) · LW · GW

This refereed medical journal article, like many others, made the same mistake as my undergraduate logic students, moving the negation across the quantifier without changing the quantifier. I cannot recall ever seeing a medical journal article prove a negation and not make this mistake when stating its conclusions.

That would be interesting if true. I recommend finding another one, since you say they're so plentiful. And I also recommend reading it carefully, as the study you chose to make an example of is not the study you were looking for. (If you don't want to change the exemplar study, it may also be of interest what your prior is for "PhilGoetz is right where all of LessWrong is wrong".)

The different but related question of the proportion of people (especially scientists, regulators, and legislators) misinterpreting such studies might also be worth looking into. It wouldn't surprise me if people who know better make the same mistake as your logic students, possibly in their subconscious probability sum.

Comment by christopherj on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-29T22:07:34.831Z · score: 0 (0 votes) · LW · GW

Matthew 26:39 Going a little farther, he fell with his face to the ground and prayed, “My Father, if it is possible, may this cup be taken from me. Yet not as I will, but as you will.”

The "cup" is Jesus' crucifixion, and this prayer implies that Jesus would rather not get crucified, but rather it was God's will. I suppose it could be read as Jesus wishing there was a different way to forgive sins.

Philippians 2:8 (ESV) And being found in human form, he humbled himself by becoming obedient to the point of death, even death on a cross.

While this could be a reference to Jesus living a sinless life, read literally it implies that Jesus was told to volunteer for the whole crucifixion thing. Note that disobeying God could result in anything from no effect to being condemned to eternal hellfire and perhaps having the entire universe cursed for good measure. But maybe Jesus cheerfully volunteered.

Comment by christopherj on Skills and Antiskills · 2014-04-29T14:53:25.736Z · score: 0 (0 votes) · LW · GW

Still, one can ask if generally speaking, a person is better off learning Skill X.

Doesn't stop one from answering that, generally speaking, it depends on the person and circumstances. :-p

On a more serious note, I think that it is rather different to ask, for a skill X, whether X is more useful than not to the sort of people who learned X, as compared to asking whether a random person would benefit from X. For example, I'd say that learning neurosurgery procedures is useful to a huge percentage of the people who learned them, but useless to the average person. I'd say rationality skills are probably most useful to precisely the sort of people who would not learn any, while providing diminishing returns to rationalists.

Comment by christopherj on The Cryonics Strategy Space · 2014-04-29T03:59:16.336Z · score: 1 (3 votes) · LW · GW

(Sensible if legal) Compound-interest cryonics: Devote a small chunk of your resources towards a fund which you expect to grow faster than the rate of inflation, with exponential growth (the simplest example would be a bank account with a variable rate that pays epsilon percent higher than the rate of inflation in perpetuity). Sign a contract saying the person(s) who revive you receive the entire pot. Since after a few thousand years the pot will nominally contain almost all the money in the world, this strategy will eventually incentivise almost the entire world to dedicate itself to seeking your revival. Although this strategy will not work if postscarcity happens before unfreezing, in that case it collapses into the conventional cryonics problem and therefore costs you no more than the opportunity cost of spending the capital in the fund before you die. (Although apparently this is illegal.)

(from the link)

The rule is often stated as follows: “No interest is good unless it must vest, if at all, not later than twenty-one years after the death of some life in being at the creation of the interest." For the purposes of the rule, a life is "in being" at conception.

It seems a workaround would be to keep around a frozen embryo. Since frozen embryos are viable with current technology, they probably have to qualify as not dead. You could probably also do it via donating the money to an "independent" organization, but that's not as cool as using cryonics as a workaround to aid your cryonics.

The bigger problem is that you'll have trouble investing the money at a rate higher than inflation (and keep in mind that in the US the rate of inflation is much higher than the official number).
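For what it's worth, the exponential-growth claim is easy to check with plain compounding (principal and rate below are invented for illustration):

```python
def fund_value(principal, epsilon, years):
    """Real (inflation-adjusted) value of the fund after compounding
    at a rate epsilon above inflation."""
    return principal * (1 + epsilon) ** years

# $10,000 earning a mere 0.5% above inflation:
print(round(fund_value(10_000, 0.005, 100)))   # 16467 -- barely moves in a century
print(round(fund_value(10_000, 0.005, 2000)))  # roughly $215 million after two millennia
```

Even epsilon-sized real returns dominate eventually, which is exactly why the rule against perpetuities exists and why sustaining any positive real return for millennia is the hard part.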

Comment by christopherj on Request for concrete AI takeover mechanisms · 2014-04-28T05:14:38.030Z · score: 1 (1 votes) · LW · GW

There's just so many routes for an AI to gain power.

Internet takeover: not a direct route to power, but the AI may wish to acquire more computer power and there happens to be a lot of it available. Security flaws could be exploited to spread maliciously (and an AI should know a lot more about programming and hacking than us). Alternately, the AI could buy computing power, or could attach itself to a game or tool it designed such that people willingly allow it onto their computers.

Human alliance: the AI can offer a group of humans wealth, power, knowledge, hope for the future, friendship, charismatic leadership, advice, far beyond what any human could offer. The offer could be legit or a trick, or both. In this way, the AI could control the entirety of human civilization, or whatever size group it wishes to gain direct access to the physical world.

Robot bodies: the AI could design a robot/factory combination capable of self-replication. It would be easy to find an entrepreneur willing to make this, as it implies nearly infinite profits. This is closest to our current technology and the easiest to understand, having already been popularized by various movies. Furthermore, this seems the only method that could be done without assistance from even one human, since the necessary components may already exist.

3D printers: technically a subset of robot bodies, but with improvements to 3D printers the factory size would be much smaller.

Biotechnology: biotechnology has proven its worth for making self-replicators, and microscopic size is sufficient. The AI would need a computer --> biology link to allow it to create biotechnology directly. We already have a DNA --> protein machine (ribosomes) and a data --> DNA system (however it is that companies producing made-to-order DNA do it). All that is needed is for the AI to be able to figure out what proteins it wants.

Chemo/bio/nanotechnology: Odds are the AI would prefer some alternatives to our DNA/protein system, one more suited to calculation rather than abiogenesis/evolution. However, I have no concrete examples.

Reprogramming a human: Perhaps the AI considers the current robots unacceptable and can't design its own biotechnology, and thinks humans are unreliable. Besides merely convincing humans to help it, perhaps the AI could use a mind-machine interface or other neurotechnology to directly control a human as an extension of itself. This seems like a rather terrible idea, since the AI would probably prefer either a robotic body, biological body, or nanotechnological body of its own design. Would make for a nice horror story though.

Comment by christopherj on Skills and Antiskills · 2014-04-27T21:14:58.284Z · score: 3 (3 votes) · LW · GW

If dancing will largely prevent you from having interesting conversations, it may well be an antiskill-- but if you go to a lot of nightclubs where loud music makes conversation difficult, knowing how to dance seems very useful indeed!

This seems like a poor example -- why go to loud nightclubs if not to dance? Conversely, knowing how to dance increases the chance that you'll choose to go to loud nightclubs. The benefits and drawbacks of dancing are similar whether the music is loud or soft. It only makes sense if you were dragged to the party and had to make the best of it.

I think a better example would be martial arts -- there are situations where knowing martial arts could get you into a ton of trouble (eg some gang wants to beat you up as a show of dominance, but with trained instinct you manage to hurt one of them), and others where it could save your life. As a more mundane example, knowing facts about politics seems to polarize people by allowing them better motivated skepticism of opposing viewpoints.

Comment by christopherj on Skills and Antiskills · 2014-04-27T21:00:03.098Z · score: 0 (0 votes) · LW · GW

Depends on if you use it to activate analysis paralysis, cynicism, and to find excellent excuses, or to make good decisions and act on them. Most any skill can be abused, even the most useful ones.

Comment by christopherj on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-27T17:45:43.592Z · score: 1 (3 votes) · LW · GW

First, I should note that all the most common/obvious questions have been thoroughly answered (where thorough refers to length). For many of these questions, you could get a better answer from reading what has already been written about it. Edit: you probably don't want to ask these questions as bluntly as I've worded them.

Why is choice of god mainly determined by which country a person was raised in, like eg language but unlike eg science? Does belief in God help one make more accurate predictions (not "better explanations") than using a secular model?

Why is wisdom praised throughout the Bible, except for a reversal in the New Testament, where standard wisdom is condemned as an opponent to Godly wisdom? Why is it that higher education leads to lower rates of theism, and what is this evidence of?

Why are various good traits assigned to God? For example, why is God considered forgiving if He demands a blood sacrifice (human sacrifice according to Christians) before He can forgive sins? I know humans who can forgive without requiring any sacrifice, nor even a request for forgiveness, remorse, etc. Why is God considered just, when He is willing not only to excuse evildoers, but actually punish the innocent in their place as a sacrifice (and eternally reward people for a trivial thing with no moral value like believing in Jesus)? Why is God considered merciful what with going out of His way to eternally punish people in Hell? I've heard that the answer to this sort of question is that God is holy -- is holiness some sort of terrible character flaw that we must avoid at all costs, or something to be emulated?

Why do people say that Jesus willingly died for our sins, when He clearly didn't (Matthew 26:39)? And don't forget that disobeying God is frequently regarded as resulting in eternal damnation. Some people say that Jesus suffered in our place -- shouldn't that mean eternal damnation, not being dead for 3 days? Why can't I die for everyone's sins, or at least my own? Why all the confusion with what exactly "death" means in the Bible, especially if it starts in Genesis 3 yet is prevented by believing in Jesus?

Compare Genesis 3:3-5 vs Genesis 3:22. Why is the serpent considered a liar? Why is God upset at humans knowing good and evil in Genesis, yet elsewhere wants that distinction taught?

Consider the sort of worlds created by humans to interact with other humans -- eg the worlds in which MMORPGs are set. These worlds have moral laws, unbreakable ones. For example, in many MMORPGs, theft is simply against the laws of physics in that universe, as is murder but not dueling, much like in our universe traveling faster than the speed of light is forbidden. Why wasn't our universe designed with moral laws of some kind to protect one person from another (eg such a universe could allow dueling to the death, gambling, and adultery, but not murder, theft, and rape, as part of the laws of physics)? Free will and choice are not the answer, as I am not free to travel faster than the speed of light, regardless of my will or choice, nor is the ability to violate another's will necessarily an improvement in free will.

How would the world be different if someone else (eg the listener, or Mother Teresa) were made omnipotent and omniscient in place of God?

Comment by christopherj on AI risk, new executive summary · 2014-04-27T14:59:59.224Z · score: 0 (0 votes) · LW · GW

Do we have some reason to expect [an AGI's] goals to be more complex than ours?

I find myself agreeing with you -- human goals are a complex mess, which we seldom understand ourselves. We don't come with clear inherent goals, and what goals we do have we abuse by using things like sugar and condoms instead of eating healthy and reproducing like we were "supposed" to. People have been asking about the meaning of life for thousands of years, and we still have no answer.

An AI on the other hand, could have very simple goals -- make paperclips, for example. An AI's goals might be completely specified in two words. It's the AI's sub-goals and plans to reach its goals that I doubt I could comprehend. It's the very single-mindedness of an AI's goals and our inability to comprehend our own goals, plus the prospect of an AI being both smarter and better at goal-hacking than us, that has many of us fearing that we will accidentally kill ourselves via non-friendly AI. Not everyone will think to clarify "make paperclips" with, "don't exterminate humanity", "don't enslave humanity", "don't destroy the environment", "don't reprogram humans to desire only to make paperclips", and various other disclaimers that wouldn't be necessary if you were addressing a human (and we don't know the full disclaimer list either).

Comment by christopherj on Google vs Wikipedia, for-profit vs not-for-profit · 2014-04-27T14:22:00.003Z · score: -1 (1 votes) · LW · GW

I never said that the "invisible hand" would fail to function, I said that it would function inefficiently. Since efficiency is the major factor in deciding whether an economic strategy "works", I noted that it would be out-performed by a system that can account for externalities. The free market could be patched to optimize things that contain externalities by applying tariffs and subsidies.

Given that I know of no system to properly account for externalities, I noted that as a failing of the free market but did not suggest any alternative -- especially since my country already has this patch applied to some of the biggest and most obvious externalities, yet also shows signs of promoting the wrong things (eg corn based ethanol).

Comment by christopherj on The social value of high school extracurricular time · 2014-04-10T22:24:58.765Z · score: 1 (1 votes) · LW · GW

My understanding is that various games can provide benefits such as the ability to find relevant things in clutter, faster reaction times, and decreased loss of mental function in the elderly. Other games could provide other benefits. However, if you consider that computer games could easily eat up all your free time plus some of your sleep, socialization, and homework time, and that alternate activities also have non-obvious benefits, this seems merely like a feel-good excuse. It's probably not as bad as watching certain television shows though.

Comment by christopherj on Google vs Wikipedia, for-profit vs not-for-profit · 2014-04-10T20:20:04.254Z · score: 0 (0 votes) · LW · GW

The idea underpinning market economics is the "invisible hand" which is supposed to aggregate everybody's selfish behaviour into collective good (given a certain institutional set-up).

Unfortunately, the set up for it to work involves a massive use of product-specific tariffs and subsidies, to account for negative and positive externalities respectively. Otherwise the "invisible hand" would function inefficiently, over-promoting things with negative externalities like pollution, and under-promoting things with positive externalities like education.
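A toy model of that patch (all numbers invented): a per-unit tax equal to the external cost moves the producer's chosen output to the social optimum.

```python
def chosen_output(price, tax, mc=lambda q: q):
    """Produce each additional unit while its private marginal cost
    plus the tax stays at or below the selling price."""
    q = 0
    while mc(q + 1) + tax <= price:
        q += 1
    return q

EXTERNAL_COST = 3   # invented pollution damage per unit produced

print(chosen_output(10, tax=0))              # 10: untaxed producer overproduces
print(chosen_output(10, tax=EXTERNAL_COST))  # 7: matches the social optimum
```

With a positive externality the sign flips: a subsidy equal to the external benefit raises output the same way.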

Comment by christopherj on Solomonoff induction on a random string · 2014-04-10T06:43:00.343Z · score: 0 (0 votes) · LW · GW

But it seems to me rather different to assume you can do any finite amount of calculation, vs relying on things that can only be done with infinite calculation. Can we ever have a hope of having infinite resources?

Comment by christopherj on Solomonoff induction on a random string · 2014-04-10T06:32:00.412Z · score: 1 (1 votes) · LW · GW

Would it be legitimate to ask the SI to estimate the probability that its guess is correct? I suppose that if it sums up its programs' estimates as to the next bit and finds itself predicting a 50% chance either way, it at least understands that it is dealing with random data, but is merely being very persistent in looking for a pattern just in case the data only seemed random? That's not as bad as I thought at first.
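A toy version of that summing-up (my own sketch, nothing like real Solomonoff induction -- just four hand-picked predictors updated by Bayes on a random bit string):

```python
import random

random.seed(1)

def predictors(history):
    """Each entry is that predictor's probability that the next bit is 1."""
    last = history[-1] if history else 0
    return [0.5,                        # "the data is random"
            0.9 if last == 1 else 0.1,  # "bits tend to repeat"
            0.1 if last == 1 else 0.9,  # "bits tend to alternate"
            0.9]                        # "bits are mostly ones"

weights = [1.0] * 4
history = []
for _ in range(200):
    bit = random.randrange(2)           # a genuinely random source
    ps = predictors(history)
    # Bayes update: multiply each weight by the probability it assigned
    weights = [w * (p if bit == 1 else 1 - p) for w, p in zip(weights, ps)]
    history.append(bit)

total = sum(weights)
forecast = sum(w * p for w, p in zip(weights, predictors(history))) / total
print(round(forecast, 3))   # 0.5 -- the mixture settles on "it's random"
```

The pattern-seeking predictors keep nonzero weight forever, but on random data their weight decays faster than the uniform predictor's, so the mixture's forecast converges to 50% either way.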

Comment by christopherj on Controversy - Healthy or Harmful? · 2014-04-09T16:40:24.757Z · score: 1 (1 votes) · LW · GW

Since you mention Slashdot, here's a little side effect of one of their moderation systems. At one point, they decided that "funny" shouldn't give posters karma. However, given the per-post karma cap of 5, this can prevent karma-giving moderation while encouraging karma-deleting moderation by people who think the comment overrated, potentially costing the poster tons of karma. As such, moderators unwilling to penalize posters for making jokes largely abandoned the "funny" tag in favor of alternatives.
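A toy model of that mechanic (my reconstruction of the Slashdot rules, which may not be exact):

```python
CAP = 5   # Slashdot's per-post score cap

def moderate(post, mod):
    """post is (score, karma); mod is 'insightful' (+score, +karma),
    'funny' (+score, no karma), or 'overrated' (-score, -karma)."""
    score, karma = post
    if mod == 'funny':
        return min(CAP, score + 1), karma
    if mod == 'insightful':
        if score >= CAP:
            return score, karma          # capped: no further karma gain
        return score + 1, karma + 1
    if mod == 'overrated':
        return max(-1, score - 1), karma - 1

post = (1, 0)
for m in ['funny', 'funny', 'funny', 'funny', 'overrated', 'overrated']:
    post = moderate(post, m)
print(post)   # (3, -2): four funny mods earned nothing, two overrated cost 2
```

The asymmetry is visible directly: once "funny" fills the score up to the cap, upward moderations can no longer award karma, while "overrated" always subtracts it.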

I suspect that if an agree/disagree moderation option were added, it would likely suffer from a similar problem. Eg if we treated that tag reasonably and used it to try to separate karma gains/losses from personal agreement/disagreement, people would be tempted to rate a post they especially like as disagree/love/awe.

A more interesting idea, I think, would be to run correlations between your votes and various other bits, such as keywords, author, and other voters, to increase the visibility of posts you like and decrease the visibility of posts you don't like. This would encourage honest and frequent voting, and diversity. Conversely, it would cause people to overestimate the community's agreement with them (more than they would by default).
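A minimal sketch of that correlation idea (a hypothetical scoring rule of my own, not any site's actual algorithm): weight each user's vote by how often they have agreed with you in the past.

```python
def agreement(my_votes, their_votes):
    """Fraction of co-voted posts on which two users voted the same way."""
    shared = [p for p in my_votes if p in their_votes]
    if not shared:
        return 0.5                       # no overlap: neutral weight
    return sum(my_votes[p] == their_votes[p] for p in shared) / len(shared)

def personalized_score(post, my_votes, all_votes):
    """Sum other users' votes on `post`, weighted by their agreement with me."""
    return sum((agreement(my_votes, votes) - 0.5) * votes[post]
               for votes in all_votes.values() if post in votes)

me    = {'p1': +1, 'p2': +1}
alice = {'p1': +1, 'p2': +1, 'p3': +1}   # agrees with me, upvoted p3
bob   = {'p1': -1, 'p2': -1, 'p3': -1}   # disagrees with me, downvoted p3
print(personalized_score('p3', me, {'alice': alice, 'bob': bob}))  # 1.0
```

Note how it produces the echo-chamber effect described: a downvote from someone who historically disagrees with you counts as evidence you'd like the post.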

Comment by christopherj on Solomonoff induction on a random string · 2014-04-09T15:54:19.814Z · score: 1 (1 votes) · LW · GW

OK, so it will predict one of multiple different ~ 1 terabyte programs as having different likelihoods. I'd still rather it predict random{0,1} for less than 10 bytes, as the most probable. Inability to recognize noise as noise seems like a fundamental problem.

Comment by christopherj on Solomonoff induction on a random string · 2014-04-09T15:44:48.107Z · score: -2 (2 votes) · LW · GW

He has repeatedly said that he's talking about an SI that outputs a specific prediction instead of a probability distribution of them, and you even quoted him saying so.

Comment by christopherj on Be comfortable with hypocrisy · 2014-04-09T07:46:18.403Z · score: 3 (5 votes) · LW · GW

This does not seem nearly as bad as the flip side, people preaching weak morals so as to not be seen failing them.

Comment by christopherj on Be comfortable with hypocrisy · 2014-04-09T07:28:26.170Z · score: 10 (10 votes) · LW · GW

I say you're a hypocrite, pretending indifference between good and evil yet for the most part choosing good.

Comment by christopherj on Solomonoff Cartesianism · 2014-04-09T07:18:00.019Z · score: 0 (0 votes) · LW · GW

I know a way to guarantee wireheading is suboptimal: make the reward signal be available processing power. Unfortunately this would guarantee that the AI is unfriendly, but at least it will self-improve!

Comment by christopherj on Solomonoff induction on a random string · 2014-04-09T06:14:04.625Z · score: 0 (0 votes) · LW · GW

I think you can justify stopping the search when you are hitting your resource limits and have long since ceased to find additional signal. You could be wrong, but it seems justified.

Comment by christopherj on Solomonoff induction on a random string · 2014-04-09T06:01:26.676Z · score: 0 (0 votes) · LW · GW

But, given 1 terabyte of data, will it not generate a ~1 terabyte program as its hypothesis? Even if it is as accurate as the best answer, this seems like a flaw.

Comment by christopherj on Worse Than Random · 2014-04-09T05:46:32.117Z · score: 1 (1 votes) · LW · GW

OK, let me give you another example of the lock device. Each time a code is tried, the correct code changes to (previous code + 2571) mod 10000. You don't know this. You won't find out before opening the door, because of limited feedback. Sequentially checking every code will (for most starting codes) fail, though it will tell you that the correct code changes, if there is one. Constantly guessing the same code, hoping it will randomly change to that one, will fail. Random guessing will eventually succeed. Using randomness prevents you from getting stuck due to your own stupidity or an opponent. There is no method that always beats randomness, which makes it a reliable failsafe against always losing.

You won't always have knowledge about the problem, and on occasion what you think is knowledge is wrong. Random may be stupid, but it has a lower bound for stupidity.
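To make the failure mode concrete, here's a minimal Python sketch. The starting code 4321 and the try limits are arbitrary choices of mine, not part of the original example. For this starting code, sequential search provably never opens the door (guess t and the moving code 4321 + 2571t always differ mod 10), while random guessing eventually does.

```python
import random

MOD = 10_000
STEP = 2_571

def try_codes(guesses, start_code):
    """Return the 1-based try on which the door opens, or None.
    After every failed guess the correct code advances by STEP (mod MOD)."""
    code = start_code
    for t, guess in enumerate(guesses, 1):
        if guess == code:
            return t
        code = (code + STEP) % MOD
    return None

# Sequential search: guess 0, 1, 2, ... For a starting code like 4321
# (not a multiple of 10), guess t and code 4321 + 2571*t can never agree
# mod 10, so sequential search provably never opens the door.
assert try_codes(range(MOD), 4321) is None

# Random guessing: each try independently hits with probability 1/MOD,
# so it opens the door eventually (roughly 10,000 tries on average).
random.seed(0)
random_guesses = (random.randrange(MOD) for _ in range(1_000_000))
print(try_codes(random_guesses, 4321))  # some finite try count
```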

Comment by christopherj on Worse Than Random · 2014-04-08T17:19:33.367Z · score: 3 (3 votes) · LW · GW

What you're missing is that, if the signal is below the detection threshold, there is no loss if the noise pushes it farther below the detection threshold, whereas there is a gain when the noise pushes the signal above the detection threshold. Thus the noise increases sensitivity, at the cost of accuracy. (And since a lot of sensory information is redundant, the loss of accuracy is easy to work around.)
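A toy simulation of that threshold effect (the signal level, threshold, and noise width here are made-up numbers for illustration, not from any real sensory system): a sub-threshold signal is never detected on its own, but added noise lets a sizable fraction of samples cross the threshold.

```python
import random

random.seed(42)

THRESHOLD = 1.0
SIGNAL = 0.8        # sub-threshold: never detected without noise
NOISE_SIGMA = 0.3   # standard deviation of added Gaussian noise
N = 100_000

def detection_rate(signal, sigma):
    """Fraction of N samples in which signal + noise exceeds THRESHOLD."""
    hits = sum(1 for _ in range(N) if signal + random.gauss(0, sigma) > THRESHOLD)
    return hits / N

assert detection_rate(SIGNAL, 0.0) == 0.0   # no noise: total silence
noisy = detection_rate(SIGNAL, NOISE_SIGMA) # with noise: ~25% of samples detected
assert 0.2 < noisy < 0.3
print(noisy)
```

The cost in accuracy shows up as jitter: which particular samples cross the threshold is now random, which is why redundancy in the sensory stream matters.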

Comment by christopherj on Lawful Uncertainty · 2014-04-08T16:28:10.600Z · score: 1 (1 votes) · LW · GW

If you wanted to play the lottery, the best strategy is to play the "least lucky" and "least 'random'" numbers, i.e., pick the numbers that won't be picked by a bunch of superstitious people. Decrease your odds of having to split the winnings with another winner.
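To put numbers on the split effect: if you model the count of co-winners K as Poisson(lam), a standard identity gives E[1/(1+K)] = (1 - exp(-lam))/lam. A sketch (the jackpot size and lam values are invented for illustration):

```python
import math
import random

def expected_share(jackpot, lam):
    """Expected jackpot share, given that you win, when the number of
    co-winners K is Poisson(lam): jackpot * E[1/(1+K)]."""
    return jackpot * (1 - math.exp(-lam)) / lam

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for small lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

# Monte Carlo check of the closed form E[1/(1+K)] = (1 - exp(-lam))/lam.
rng = random.Random(0)
lam = 2.0
mc = sum(1 / (1 + poisson(lam, rng)) for _ in range(100_000)) / 100_000
assert abs(mc - (1 - math.exp(-lam)) / lam) < 0.01

# Popular numbers (many expected co-winners) vs unpopular ones:
print(expected_share(1_000_000, 2.0))   # ~432,332
print(expected_share(1_000_000, 0.1))   # ~951,626
```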

Comment by christopherj on Lawful Uncertainty · 2014-04-08T16:01:19.201Z · score: 1 (1 votes) · LW · GW

If you're predictably committed to winning the game of chicken, then you have essentially already won, at least against a rational opponent. Though you'd have to wonder how you wound up with a rational opponent if the game is chicken.

Comment by christopherj on The Problem with AIXI · 2014-04-08T04:53:44.091Z · score: 1 (1 votes) · LW · GW

I'm having trouble understanding how something generally intelligent in every respect, except for a failure to understand death or that it has a physical body, could be incapable of ever learning this, or at least of acting indistinguishably from an agent that does know.

For example, how would AIXI act if given the following as part of its utility function: 1) utility function gets multiplied by zero should a certain computer cease to function 2) utility function gets multiplied by zero should certain bits be overwritten except if a sanity check is passed first

Seems to me that such an AI would act as if it had a genocidally dangerous fear of death, even if it doesn't actually understand the concept.

Comment by christopherj on The Absent-Minded Driver · 2014-04-04T17:43:22.165Z · score: 1 (1 votes) · LW · GW

If you're allowed to use external memory, why not just write down how many you painted of each color? Note that memory is different from a random number generator; for example, a random number generator can be used (imperfectly) to coordinate with a group of people with no communication, whereas memory would require communication but could give perfect results.

Comment by christopherj on Solve Psy-Kosh's non-anthropic problem · 2014-04-04T16:42:01.778Z · score: 0 (0 votes) · LW · GW

Seems to me that you'd want to add up the payoff of each possible outcome, weighted by its probability: 0*p^10*(10!/(10!*0!)) + 9000*p^9*(1-p)*(10!/(9!*1!)) + 8000*p^8*(1-p)^2*(10!/(8!*2!)) + 7000*p^7*(1-p)^3*(10!/(7!*3!)) + ... This also has a maximum at p ~= 0.774, with expected value of $6968. This verifies that your shortcut was correct.

James' equation gives a bigger value because he doesn't account for the fact that the lost payoff is always the maximum $10,000. His equation would be the correct one to use if the problem were instead about 20 people, 10 of whom determine the payoff and the other 10 of whom determine whether the payoff is paid, with all of them using the same probability.
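The sum above can be checked numerically. A quick Python sketch (my "says yes with probability p" framing and the function names are paraphrases, not from the original problem statement): k yeses pay $1000*k, except that all 10 yeses pay nothing.

```python
from math import comb

def expected_payoff(p, n=10, unit=1000):
    """E[payoff] when each of n deciders independently says yes with
    probability p; k yeses pay unit*k, except k == n pays nothing."""
    return sum(
        (0 if k == n else unit * k) * comb(n, k) * p**k * (1 - p)**(n - k)
        for k in range(n + 1)
    )

# The sum simplifies to 10000*(p - p**10), maximized where 1 = 10*p**9,
# i.e. p = 10**(-1/9).
p_star = 10 ** (-1 / 9)
print(round(p_star, 3), round(expected_payoff(p_star)))  # 0.774 6968
```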

Comment by christopherj on Thermodynamics of Intelligence and Cognitive Enhancement · 2014-04-04T15:20:53.791Z · score: 2 (4 votes) · LW · GW

That sounds plausible. Of course it also sounds plausible as an explanation for rapidly increasing the evolution of intelligence.

Comment by christopherj on Don't teach people how to reach the top of a hill · 2014-04-04T15:16:58.673Z · score: 0 (0 votes) · LW · GW

Sure. Our brains contain millions of neurons working in parallel. Our spoken words come one at a time, one after the other; thus the natural way to speak is one word at a time, which in computer lingo is sequential instructions. While it is entirely possible to say things like, "the first thousand things you do are these, the second thousand things are those, ..." I can guarantee you no human will be able to follow that instruction, not in the requisite number of milliseconds anyways. Besides which, instructions of this nature will also be out of reach of the instructor's consciousness, so he too will be unable to understand how he does it.

Like Lumifer said, you can still teach such things, but you do it differently. You don't explain how much to twitch each of the hundreds of muscles you have to maintain balance, you plunk your kid on a bicycle and steady the bike and let him figure it out on his own. Ironically, tasks like these that would be impossible to verbally teach or understand, are simple enough that you can do them without thinking about it.

Comment by christopherj on Thermodynamics of Intelligence and Cognitive Enhancement · 2014-04-04T04:52:48.974Z · score: 1 (1 votes) · LW · GW

And the mechanism by which civilization interrupts the evolution of intelligence is?

Comment by christopherj on Thermodynamics of Intelligence and Cognitive Enhancement · 2014-04-04T04:50:32.975Z · score: 1 (1 votes) · LW · GW

> The massive variation in human intelligence and the positive correlation between IQ and pretty much everything good implies that "Any simple major enhancement to human intelligence is a net evolutionary disadvantage" isn't true

There's also the saying that "correlation does not imply causation". The brain is very complex and energy intensive; basically anything that messes much with you is going to mess with your brain. For example, a near universal symptom of genetic diseases is reduced intelligence -- and I'm going to bet that low intelligence is not the cause of the genetic problem.

Comment by christopherj on Gunshot victims to be suspended between life and death [link] · 2014-04-04T04:00:50.630Z · score: 1 (1 votes) · LW · GW

It seems V_V and others might be having a communications gap. I'll take a guess at the problem, please tell me if I'm wrong.

V_V is saying cryonics isn't proven, and has trouble advancing because we're not planning to revive cryogenically preserved corpses anytime soon and so won't get feedback. In particular, that on top of a fatal injury, you're adding trauma from freezing/chemicals, and that molecular damage will continue to accumulate.

Others are saying that cryonics is not intended as research nor as something in the same category as most medical procedures; rather it gives you better odds of survival than rotting in a cemetery. They expect the effectiveness of cryonics to depend not on current cryonics or medical technology, but on future development of technology. The question is, can we slow the rate of degradation enough that future technology can fix what killed us, any freezing damage, and any ongoing damage before our "soul" is irrevocably lost?

Cryonics techniques are also finding use in organ transplantation: in animal research, mammalian organs such as blood vessels, ovaries, kidneys, and livers have been cooled to sub-zero temperatures and successfully transplanted. The combination of short-term whole-body cooling and longer-term organ preservation should provide some research and evidence for cryonics, in addition to making it seem like the logical course of action as people become used to lower temperatures being used to preserve human tissue. Whether it works now is still in question, but if you know of better alternatives, please let us know.

Comment by christopherj on The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It · 2014-04-01T21:39:31.132Z · score: 0 (0 votes) · LW · GW

This seems like just another example of our tendency to (badly) rationalize whatever decisions we made subconsciously. We like to think we do things for good reasons, and if we don't know the reasons we'll make some up.

Comment by christopherj on Does religion help cure insanity? · 2014-04-01T17:19:51.471Z · score: 0 (0 votes) · LW · GW

Religion can change your outlook on life, and give you a social support group. Falsely joining a religion might increase your sense of guilt and stress from maintaining your cover, so it might have different effects on your health.

Although many religions include meditation, meditation is not an inherently religious activity. IMO, filing meditation's benefits under religious benefits is like claiming that a religion whose rituals involve wild dancing helps you lose weight via religion.

Comment by christopherj on Don't teach people how to reach the top of a hill · 2014-04-01T16:54:11.031Z · score: 1 (1 votes) · LW · GW

On a similar note, I've heard that professional golfers fear teaching another person, because doing so can ruin your game forever. Fine motor control is pretty much impossible to put into words, and whatever instructions they decide to give, they are then tempted to follow themselves.

I think you got wrong what sort of things are easier to learn or do than to teach. Anything done primarily by the subconscious could well be forever out of reach of your conscious mind. If you can't understand how you do something, how can you expect to teach it? For example, we've been trying for decades to teach a computer how to think, something every human can do but we all do subconsciously. Nor can we teach another human who, due to localized brain damage, has lost an ability.

Note that our minds are massively parallel calculators that don't necessarily require language, yet our instructions are sequential and language-based.

Comment by christopherj on Is my view contrarian? · 2014-04-01T06:47:21.936Z · score: 0 (0 votes) · LW · GW

It seems to me that having some contrarian views is a necessity, despite the fact that most contrarian views are wrong. "Not every change is an improvement, but every improvement is a change." As such I'd recommend going meta, teaching other people the skills to recognize correct contrarian arguments. This of course will synergize with recognizing whether your own views are probable or suspect, as well as with convincing others to accept your contrarian views.

  1. Determine levels of expertise in the subject, not a binary distinction between "expert" and "non-expert" that would put nutritionists, theologians, and futurists in the same category as physicists, materials scientists, and engineers.

     1a. The main determinants would be how easy it is to test things, and how much testing has been done.

     1b. What's the level of consensus? I'd say less than 90% consensus is suspicious; probably indicative of a difficult profession (the experts cannot give definitive, well-tested answers).

  2. What's the experts' reaction to the contrarian view? Do the experts have good reason for rejecting the view, or do they become converts upon hearing it?

  3. What's the epistemic basis of the views? Are we talking about empirical tests, logical deduction, educated guesses, or wild speculation?

  4. Look for conflicts of interest. Don't exclude your own. Look for monetary interests, political interests, moral/values interests, emotional interests, aesthetic interests. Subjects like climate change and economic policy are so interest-laden that, besides the difficulties in testing, it becomes hard to find honest, actual experts. Conversely, some ideas are accepted despite interests: dentists advise you against sugar despite their monetary interests, and quantum mechanics is accepted despite being unintuitive.

  5. Consider how you'd convince an honest, intelligent, well-educated expert to accept your contrarian view. If you don't think you can, odds are you don't have cause to believe it yourself.

  6. Test point 5. Remember, you're making a difference in the world, so don't make excuses.