Comments

Comment by Salutator on ARC's first technical report: Eliciting Latent Knowledge · 2022-01-18T21:00:31.345Z · LW · GW

If the reporter estimates every node of the human's Bayes net, then it can assign a node a probability distribution different from the one that would be calculated from the distributions simultaneously assigned to its parent nodes. I don't know if there is a name for that, so for now I will pompously call it inferential inconsistency. Considered as a boolean bright-line concept, the human simulator is clearly the only inferentially consistent reporter. But one could consider some kind of metric on how different probability distributions are and turn it into a more gradual thing.

Being a reporter basically means being inferentially consistent on the training set. On the other hand, being inferentially consistent everywhere means being the human simulator. So a direct translator would differ from a human simulator by being inferentially inconsistent for some inputs outside of the training set. This could in principle be checked by sampling random possible inputs. The human could then try to distinguish a direct translator from a randomly overfitted model by trying to understand a small sample of its inferential inconsistencies.
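Something like this is what I have in mind for the "more gradual thing", as a toy sketch; all names, numbers and the choice of KL divergence as the metric are made up for illustration:

```python
import numpy as np

def implied_distribution(cpt, parent_dists):
    """Distribution the net itself implies for a node: marginalize the node's
    conditional probability table over the distributions the reporter assigned
    to its parents. cpt[i, j, k] = P(node=k | parent1=i, parent2=j)."""
    return np.einsum('i,j,ijk->k', parent_dists[0], parent_dists[1], cpt)

def inferential_inconsistency(reported, cpt, parent_dists):
    """KL divergence between what the reporter says about the node directly and
    what its answers about the parents would imply. Zero everywhere would make
    it a perfectly inferentially consistent reporter, i.e. the human simulator."""
    implied = implied_distribution(cpt, parent_dists)
    return float(np.sum(reported * np.log(reported / implied)))

# Toy example: one binary node with two binary parents.
cpt = np.array([[[0.9, 0.1], [0.4, 0.6]],
                [[0.3, 0.7], [0.2, 0.8]]])
parents = [np.array([0.5, 0.5]), np.array([0.8, 0.2])]
reported = np.array([0.6, 0.4])
print(inferential_inconsistency(reported, cpt, parents))
```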

So much for my thoughts inside the paradigm; now on to snottily rejecting it. The intuition that the direct translator should exist seems implausible. And the idea that it would be so strong an attractor that a training strategy avoiding the human simulator would find it quasi-automatically borders on the absurd. Modeling a constraint on the training set and not outside of it is basically what overfitting is, and overfitted solutions with many specialised degrees of freedom are usually highly degenerate. In other words, penalizing the human simulator would almost certainly lead to something closer to a pseudorandomizer than a direct translator. Looking at it a different way, the direct translator is supposed to be helpful in situations the human would perceive as contradictory. Or to put it differently, not bad model fits but rather models strongly misspecified and then extrapolated far out of the sample space. Those are basically the situations where statistical inference and machine learning have strong track records of not working.

Comment by Salutator on Stupid Questions, December 2015 · 2015-12-02T13:49:15.694Z · LW · GW

It gets very interesting if there actually are no stocks to buy back in the market. For details on how it gets interesting google "short squeeze".

Other than that exceptional situation it's not that asymmetrical:

-Typically you have to post some collateral for shorting and there will be a well-understood maximum loss before your broker buys back the stock and seizes your collateral to cover that loss. So short (haha) of a short squeeze there actually is a maximum loss in short selling.

-You can take similar risks on the long side by buying stocks on credit ("on margin" in financial slang) with collateral, which the bank will use to close your position if the stock drops too far. So long-side risks can basically be made as big as your borrowing ability allows.
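A toy illustration of the rough symmetry, with made-up numbers; the broker's exact liquidation rules are glossed over:

```python
def short_pnl(entry_price, exit_price, shares):
    """Profit of a short: sell borrowed shares now, buy them back later."""
    return (entry_price - exit_price) * shares

def long_pnl(entry_price, exit_price, shares):
    return (exit_price - entry_price) * shares

# Hypothetical: short 100 shares at $50 with $2000 collateral posted.
# Outside a short squeeze the broker buys back once losses approach the
# collateral, so the loss is capped at roughly that amount.
collateral = 2000
liquidation_price = 50 + collateral / 100        # broker closes around $70
print(short_pnl(50, liquidation_price, 100))     # -2000, the known maximum loss
print(long_pnl(50, 30, 100))                     # a margined long loses comparably
```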

Comment by Salutator on Rationality Reading Group: Fake Beliefs (p43-77) · 2015-05-13T21:18:37.207Z · LW · GW

Let me be a bit trollish so as to establish an actual counter-position (though I actually believe everything I say):

This is where the sequences first turn dumb.

For low-hanging fruit, we first see modern mythology misinterpreted as actual history. In reality, phlogiston was a useful theory at the time, which was rationally arrived at and rationally discarded when evidence turned against it (with some attempts at "adding epicycles", but no more than other scientific theories). And the NOMA thing was made up by Gould when he misunderstood actual religious claims, i.e. it is mostly a straw man.

On a higher level of abstraction, the whole approach of this sequence is discussing other people's alleged rationalizations. This is almost always a terrible idea. For comparison, other examples would include Marxist talk about false consciousness, Christian allegations that atheists are angry at God or want a license to sin, or the Randian portrayal of irrational death-loving leeches. [Aware of meta-irony following:] Arguments of this type almost always serve to feed the ingroup's sense of security, safely portraying the scariest kinds of irrationality as a purely outgroup thing. And that is the simplest sufficient causal explanation of this entire sequence.

Comment by Salutator on Change Contexts to Improve Arguments · 2014-07-09T01:36:03.039Z · LW · GW

You're treating looking for weak points in your own beliefs and looking for weak points in your interlocutor's beliefs as basically the same thing. That's almost the opposite of the truth, because there's a trade-off between the two. If you're totally focused on the second, the first becomes psychologically near-impossible.

Comment by Salutator on 2013 Less Wrong Census/Survey · 2013-11-23T10:37:12.205Z · LW · GW

This was based on a math error; it actually is a prisoner's dilemma.

Comment by Salutator on 2013 Less Wrong Census/Survey · 2013-11-23T10:20:22.628Z · LW · GW

I rolled a D30; it came up 20 and I cooperated.

The point being that cooperation in a prisoner's dilemma sense means choosing the strategy that would maximize my expected payout if everyone chose it, and in this game that is not equivalent to cooperating with probability 1. If it was supposed to measure strategies, the question would have been better if it had asked us for a cooperation probability and then had Yvain draw the random numbers for us.
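What I mean by a mixed strategy, sketched as code; the probability here is arbitrary and the survey's actual payoffs aren't reproduced:

```python
import random

def play(cooperation_probability):
    """Mixed strategy: cooperate with the stated probability, defect otherwise.
    Rolling a D30 and cooperating only for rolls in some range, as above,
    is one way to implement such a random draw by hand."""
    return "cooperate" if random.random() < cooperation_probability else "defect"

# The point is just that the expected-payoff-maximizing probability
# need not be 1 (or 0).
print(play(2 / 3))
```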

Comment by Salutator on [deleted post] 2013-11-04T22:30:52.048Z

I'm a bit out of my depth here. I understood an "ordered group" as a group with an order on its elements. That clearly can be finite. If it's more than that the question would be why we should assume whatever further axioms characterize it.

Comment by Salutator on [deleted post] 2013-10-28T14:55:56.250Z

Two points:

  1. I don't know the Hölder theorem (see the sketch after this list), but if it actually depends on the lattice being a group, that includes an extra assumption of the existence of a neutral element and of inverse elements. The neutral element would have to be a life of exactly zero value, so that killing that person off wouldn't matter at all, either positively or negatively. The inverse elements would mean that for every happy life you can imagine an exactly opposite unhappy life, so that killing off both leaves the world exactly as good as before.

  2. Proving this might be hard for infinite cases, but it would be trivial for finitely generated groups. Most Less Wrong utilitarians would believe there are only finitely many brain states (otherwise simulations are impossible!) and that utility is a function of brain states. That would mean only finitely many utility levels, and then the result is obvious. The mathematically interesting part is that it still works if we go infinite on some things but not on others, but that's not relevant to the general Less Wrong belief system.
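For reference, the Archimedean condition at issue and Hölder's theorem as I understand the standard statement; a sketch in LaTeX, going from the textbook formulation rather than from whatever the deleted post cited:

```latex
% Archimedean property of a linearly ordered group (G, +, <):
\forall\, a, b \in G:\quad 0 < a \;\wedge\; 0 < b \;\Longrightarrow\; \exists\, n \in \mathbb{N}:\; b < n\,a
% Hölder's theorem: every Archimedean linearly ordered group is isomorphic,
% as an ordered group, to a subgroup of (\mathbb{R}, +, <), and in particular abelian.
```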

(Also, here I'm discussing the details of utilitarian systems arguendo, but I'm sticking with the general claim that all of them are mathematically inconsistent or horrible under Arrow's theorem.)

Comment by Salutator on Open Thread, October 20 - 26, 2013 · 2013-10-24T21:07:48.736Z · LW · GW

I think it's just elliptical rather than fallacious.

Paul Graham basically argues for artistic quality as something people have a natural instinct to recognize. The sexual attractiveness of bodies might be a more obvious example of this kind of thing. If you ask 100 people to rank pictures of another 100 people of the opposite sex by hotness, the rankings will correlate very highly even if the rankers don't get to communicate. So there is something they are all picking up on, but it isn't a single property. (Symmetry might come closest but not really close, i.e. it explains more than any other single factor but not most of the phenomenon.)
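A sketch of what "correlate very highly" means operationally, with simulated data standing in for real rankers; the latent-score model and all numbers are made up:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical model: each picture has a latent attractiveness score, and each
# of two independent rankers perceives it with some personal noise (confounders).
latent = rng.normal(size=100)
ranker_a = latent + 0.3 * rng.normal(size=100)
ranker_b = latent + 0.3 * rng.normal(size=100)

rho, _ = spearmanr(ranker_a, ranker_b)
print(rho)  # high rank correlation despite the rankers never communicating
```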

Paul Graham basically thinks artistic quality works the same way. Then taste is talent at picking up on it. For in-metaphor comparison, perhaps a professional photographer has an intuitive appreciation of how a tired woman would look awake, can adjust for halo effects, etc., so he has a less confounded appreciation of the actual beauty factor than I do. Likewise someone with good taste would be less confounded about artistic quality than someone with bad taste.

That's his basic argument for taste being a thing, and it doesn't need a precise definition; in fact it suggests that giving a precise definition is probably AI-complete.

Now the contempt thing is not a definition, it is a suggested heuristic for identifying confounders. To look at my metaphor again, if I wanted to learn about beauty-confounders, the tricks people use to make audiences they have no respect for think women are hotter than they are (in other words, porn methods) would be a good place to start.

This really isn't about the thing (beauty/artistic quality) per se, but rather about the delta between the thing and the average person's perception of it. And that actually is quite dependent on how much respect the artist/"artist" has for his audience.

Comment by Salutator on Open Thread, September 30 - October 6, 2013 · 2013-10-10T12:58:49.018Z · LW · GW

I think another thing to remember here is sampling bias. The actual conversion/deconversion probably mostly is the end point of a lengthy intellectual process. People far along that process probably aren't very representative of people not going through it, and it would be much more interesting to know what gets the process started.

To add some more anecdata, my reaction to that style of argumentation was almost diametrically opposed. I suspect this is fairly common on both sides of the divide, but not being convinced by some specific argument just isn't such a catchy story, so you would hear it less.

Comment by Salutator on Open Thread, February 1-14, 2013 · 2013-02-03T21:24:56.593Z · LW · GW

But if you missed Twelfth Night, Candlemas would be a Schelling point for rescheduling, because it's the other "Christmas now definitely over" holiday.

Comment by Salutator on Open Thread, December 16-31, 2012 · 2012-12-19T18:34:32.403Z · LW · GW

Ha! That's the delightful little project, no?

Comment by Salutator on 2012 Less Wrong Census/Survey · 2012-11-04T10:56:28.512Z · LW · GW

I took the survey too. I can haz karma plz? Kthxbye.

Comment by Salutator on 2012 Less Wrong Census Survey: Call For Critiques/Questions · 2012-10-19T06:23:38.749Z · LW · GW

The race question doesn't make much sense for Europeans. I could answer White (non-Hispanic) even though the Hispanic category doesn't exist here. But what should Spaniards answer?

Comment by Salutator on We won't be able to recognise the human Gödel sentence · 2012-10-05T22:51:39.125Z · LW · GW

The thing is that the proof of Gödel's theorem is constructive. We have an algorithm to construct Gödel sentences from axioms. So basically the only way we can be unable to recognize our Gödel sentences is being unable to recognize our axioms.
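For what I mean by "constructive", sketched in LaTeX; this is the standard textbook construction, not anything specific to the post:

```latex
% The diagonal lemma, applied to the provability predicate of a recursively
% axiomatized theory T, yields an explicit sentence asserting its own unprovability:
G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T\!\left(\ulcorner G_T \urcorner\right)
% Prov_T is built directly from T's axioms, so writing G_T down only requires
% being able to recognize those axioms.
```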

Comment by Salutator on We won't be able to recognise the human Gödel sentence · 2012-10-05T22:42:10.861Z · LW · GW

But that sentence isn't self-contradictory like "This is a lie"; it is just self-referential, like "This sentence has five words". It does have a well-defined meaning and is decidable for all hypothetical consistent people other than a hypothetical consistentified Stuart Armstrong.

Comment by Salutator on The Useful Idea of Truth · 2012-10-03T09:42:10.162Z · LW · GW

Yeah, probably all theories of truth are circular and the concept is simply non-tabooable. I agree your explanation doesn't make it worse, but it doesn't make it better either.

Comment by Salutator on The Useful Idea of Truth · 2012-10-02T12:27:41.941Z · LW · GW

But that's only useful if you make it circular.

Taking you more strictly at your word than you mean it, the program could just return true for the majority belief on empirically non-falsifiable questions. Or it could just return false on all beliefs, including your belief that that is illogical. So with the right programs pretty much arbitrary beliefs pass as meaningful.

You actually want it to depend on the state of the universe in the right way, but that's just another way to say it should depend on whether the belief is true.
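To make the "right programs" point concrete, here are the two degenerate evaluators I have in mind, purely illustrative:

```python
def majority_evaluator(belief, poll_of_believers):
    """Returns True for whatever most people happen to believe,
    regardless of the state of the universe."""
    return poll_of_believers.count(belief) > len(poll_of_believers) / 2

def nihilist_evaluator(belief):
    """Returns False for every belief, including the belief that this is illogical."""
    return False

# Either program "decides" arbitrary beliefs, so the mere existence of some
# deciding program can't be what makes a belief meaningful; the program has to
# depend on the state of the universe in the right way, i.e. on whether the
# belief is true.
```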

Comment by Salutator on Rationality Quotes September 2012 · 2012-09-05T08:20:05.499Z · LW · GW

Let's go one step back on this, because I think our point of disagreement is earlier than I thought in that last comment.

The efficient market hypothesis does not claim that the profit on all securities has the same expectation value. EMH believers don't deny, for example, the empirically obvious fact that this expectation value is higher for insurers than for more predictable businesses. Also, you can always increase your risk and expected profit by leverage, i.e. by investing borrowed money.

This is because markets are risk-averse, so that for the same expectation value you get paid extra to accept a higher standard deviation. Out- or underperforming the market is really easy: just accept more or less risk than it does on average. The claim is not that the expectation value will be the same for every security, only that the price of every security will be consistent with the same prices for risk and expected profit.

So if the EMH is true, you cannot get a better deal on expected profit without also accepting higher risk, and you cannot get a higher risk premium than other people. But you can still get lots of different trade-offs between expected profit and risk.

Now can you do worse? Yes, because you can separate two types of risk.

Some risks are highly specific to individual companies. For example, a company may be in trouble if a key employee gets hit by a beer truck. That's uncorrelated risk. Other risks affect the whole economy, like revolutions, asteroids or the boom-bust cycle. That's correlated risk.

Diversification can insure you against uncorrelated risk, because, by definition, it's independent of the risk of the other parts of your portfolio, so it's extremely unlikely for many of your diverse investments to be affected at the same time. So if everyone is properly diversified, no one actually needs to bear uncorrelated risk. In an efficient market that means it doesn't earn any compensation.

Correlated risk is not eliminated by diversification, because it is by definition the risk that affects all your diversified investments simultaneously.

So if you don't diversify, you are taking on uncorrelated risk without getting paid for it. If you do that, you could get a strictly better deal by taking on correlated risk of the same magnitude, which you would get paid for. And since that is what the market is doing on average, you can get a worse deal than it does.
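A toy simulation of the diversification point, with made-up return parameters: the idiosyncratic (uncorrelated) component shrinks as you add stocks, the market-wide (correlated) component doesn't.

```python
import numpy as np

rng = np.random.default_rng(1)
n_periods = 10_000

def portfolio_volatility(n_stocks):
    # One market-wide shock shared by every stock (correlated risk)...
    market = 0.02 * rng.normal(size=n_periods)
    # ...plus an independent company-specific shock per stock (uncorrelated risk).
    idiosyncratic = 0.04 * rng.normal(size=(n_periods, n_stocks))
    returns = market[:, None] + idiosyncratic
    # Equal-weighted portfolio return per period, then its standard deviation.
    return np.std(returns.mean(axis=1))

for n in (1, 10, 100):
    print(n, portfolio_volatility(n))
# Volatility falls toward the 0.02 market floor as n grows: the uncorrelated part
# diversifies away, so an efficient market won't pay you extra for bearing it.
```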

Comment by Salutator on Rationality Quotes September 2012 · 2012-09-04T12:28:10.095Z · LW · GW

No, not really. In an efficient market, risks uncorrelated with those of other securities shouldn't be compensated, so you should easily be able to screw yourself over by not diversifying.

Comment by Salutator on Open Thread, September 1-15, 2012 · 2012-09-03T22:09:29.399Z · LW · GW

Can they use quill and parchment?

If so, the usual public key algorithms could be encoded into something like a tax form, i.e. something like "...51. Subtract the number on line 50 from the number on line 49 and write the result in here:__ ...500. The warden should also have calculated the number on line 499. Burn this parchment."

Of course there would have to be lots of error checks. ("If line 60 doesn't match line 50 you screwed up. If so, redo everything from line 50 on.")

To make it practical, each warden/non-prisoner pair would do a Diffie-Hellman exchange only once. That part would take a day or two. After establishing a shared secret, the daily authentication would be done with a hash, which could probably be done in half an hour or less.
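A sketch of the once-per-pair exchange and the daily check, with deliberately tiny made-up numbers so the "tax form" arithmetic stays primary-school sized; real parameters would of course be vastly larger, and the hash is just a stand-in for whatever hand-computable scheme you'd actually use:

```python
import hashlib

# Publicly agreed numbers printed at the top of every form (toy-sized here).
P, B = 2089, 7   # a small prime modulus and a base

def dh_public(secret):
    """Line by line, this is just repeated multiplication with remainders."""
    return pow(B, secret, P)

def dh_shared(their_public, my_secret):
    return pow(their_public, my_secret, P)

def daily_password(shared_secret, day):
    """After the one-time exchange, each day's authentication is a short
    digest of the shared secret and the date."""
    return hashlib.sha256(f"{shared_secret}:{day}".encode()).hexdigest()[:8]

# Warden and non-prisoner each pick a secret and swap only the public values.
warden_secret, visitor_secret = 123, 456
shared_w = dh_shared(dh_public(visitor_secret), warden_secret)
shared_v = dh_shared(dh_public(warden_secret), visitor_secret)
assert shared_w == shared_v
print(daily_password(shared_w, "day-17"))
```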

Of course most people would have no clue why those forms work, they would just blindly follow the instructions, which for each line would be doable with primary school math.

The wardens would probably spend large parts of their shifts precalculating hashes for prisoners still asleep, so that several prisoners could do their get-out work at the same time. Or maybe they would do the crypto only once a month or so and normally just tell the non-prisoners their passwords for the next day every time they come in.

Comment by Salutator on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T22:54:00.000Z · LW · GW

@Eisegates
Yes, I was operating on the implicit convention that true statements must be meaningful, so I could also say there is no k such that I have exactly k quobbelwocks.

The nonexistence of a *-operator (and of a +-operator) is actually the point. I don't think preferences of different persons can be meaningfully combined, and that includes that {possible world-states} or {possible actions} don't, in your formulation, contain the sort of objects to which our everyday understanding of multiplication normally applies. Now if you insist on an intuitively defined *-operator, every bounded utility function is an example. For example, my utility for the amount c of chocolate available for consumption in some given timeframe could well be approximately 1 - exp(1 - min(c/1kg, 1)), so 100g < 1kg but there is no k to make k*100g > 1kg. That is, of course, nothing new even in this discussion.

Also, more directly to the point, me doing evil is something I should avoid more than other people doing evil. So when I do the choosing, "I kill 1 innocent person" < "someone else kills 1 innocent person", but there is no k so that "I kill 1 innocent person" > "someone else kills k innocent persons". In fact, if a kidnapper plausibly threatened to kill his k hostages unless I killed a random passerby, almost nobody would think me justified in doing so for any imaginable value of k. That people may think differently for unimaginably large values of k is a much more plausible candidate for a failure to be rational with large numbers than not adding specks up to torture.
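A quick numerical check of the chocolate example above (amounts in kilograms; purely illustrative):

```python
import math

def u(kg_of_chocolate):
    """Bounded utility from the example: increasing in chocolate, capped at 0."""
    return 1 - math.exp(1 - min(kg_of_chocolate, 1.0))

print(u(0.1), u(1.0))   # u(100g) < u(1kg)
# No multiple of u(100g) ever exceeds u(1kg): the function is bounded above by 0,
# so k * u(0.1) stays negative for every k, i.e. the Archimedean conclusion
# fails for this representation.
for k in (1, 10, 1000):
    assert k * u(0.1) < u(1.0)
```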

But basically I wasn't making a claim, just trying to give an understandable (or so I thought) formulation for denying Tomhs2's non-technically stated claim that the existence of an order implies the Archimedean axiom.

@Bob
"If it's true, and you seem to agree, that our intuition focuses on actions over outcomes, don't you think that's a problem? Perhaps you're not convinced that our intuition reflects a bias? That we'd make better decisions if we shifted a little bit of our attention to outcomes?"
You nailed it. Not only am I not convinced that our intuition on this point reflects a bias, I'm actually convinced that it doesn't. Utility is irrelevant; rights are relevant. And while I may sacrifice a lesser right for a greater right, I can't sacrifice a person for another person. So in the torture example I may not flip the (50 years, 1 person / 49 years, 2 persons) switch either way.

@Doug S.
I disagree. An objective U doesn't exist, and individual Us can't be meaningfully aggregated. Moreover, if the individual Us are meant to be von Neumann-Morgenstern functions, they don't exist either.

Comment by Salutator on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T16:50:00.000Z · LW · GW

@Unknown
So if everyone is a deontologist by nature, shouldn't a "normalization" of intuitions result in a deontological system of morals? If so, what makes you look for the right utilitarian system?

Comment by Salutator on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T16:41:53.000Z · LW · GW

@Sean

If your utility function u was replaced by 3u, there would be no observable difference in your behavior. So which of these functions is declared real and goes on to the interpersonal summing? "The same factor for everyone" isn't an answer, because if u_you doesn't equal u_me, "the same factor" is simply meaningless.

@Tomhs2

"A < B < C < D doesn't imply that there's some k such that kA > D"

"Yes it does."

I think you're letting the notation confuse you. It would imply that if A, B, C, D were e.g. real numbers, and that is the context the "<" sign is mostly used in. But orders can exist on sets other than sets of numbers. You can, for example, sort (order) the telephone book alphabetically, so that Cooper < Smith and still there is no k such that k*Cooper > Smith.
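A concrete example in the same spirit, where an order and even a scalar multiple exist but the Archimedean conclusion still fails; Python's built-in tuple comparison happens to be lexicographic, like the telephone book, and the "speck"/"torture" labels are just borrowed from the thread:

```python
def scale(k, pair):
    """Componentwise multiple of a lexicographically ordered pair."""
    return (k * pair[0], k * pair[1])

speck, torture = (0, 1), (1, 0)   # torture dominates on the first coordinate
assert speck < torture            # lexicographic comparison, like the phone book

# No k ever makes k "specks" exceed one "torture":
for k in (1, 10, 10**9):
    assert scale(k, speck) < torture
```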

@most people here:

A lot of confusion is caused by the unspoken premise that a moral system should sort outcomes rather than actions, so that it doesn't matter who would do the torturing or speck-placing. Now for Eliezer that assumption is de fide, because otherwise the concept of a friendly AI (sharing our ends and choosing the means declared unimportant with its superior intelligence) is meaningless. But the assumption contradicts basically everyone's intuition. So why should it convince anyone not following Eliezer's religion?

[Edit: fixed some typos and formatting years later]

Comment by Salutator on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T02:08:12.000Z · LW · GW

So what exactly do you multiply when you shut up and multiply? Can it be anything other than a function of the consequences? Because if it is a function of the consequences, you do believe, or at least act as if you believe, your #4.

In which case I still want an answer to my previously raised and unanswered point: as Arrow demonstrated, a contradiction-free aggregate utility function derived from different individual utility functions is not possible. So either you need to impose uniform utility functions, or your "normalization" of intuition leads to a logical contradiction - which is a simple matter, because it is math.
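For reference, the version of Arrow's theorem I'm leaning on, stated for ordinal preferences as usual; a sketch:

```latex
% Arrow (1951): with at least three alternatives there is no social welfare function
%   F : (\text{profiles of individual orderings}) \to (\text{social ordering})
% that simultaneously satisfies
%   (U) unrestricted domain,
%   (P) weak Pareto: if every individual ranks x above y, so does F,
%   (I) independence of irrelevant alternatives,
%   (D) non-dictatorship.
```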

Comment by Salutator on Circular Altruism · 2008-01-26T04:23:00.000Z · LW · GW

1. In this whole series of posts you are silently presupposing that utilitarianism is the only rational system of ethics. Which is strange, because if people have different utility functions, Arrow's impossibility theorem makes it impossible to arrive at a "rational" (in this blog's Bayesian-consistent abuse of the term) aggregate utility function. So irrationality is not only rational but the only rational option. Funny what people will sell as overcoming bias.

2. In this particular case the introductory example fails, because 1 killing != -1 saving. Removing a drowning man from the pool is obviously better than merely abstaining from drowning another man in the pool.

3. The feeling of superiority over all those biased proles is a bias. In fact it is very obviously among your main biases, and consequently one you should spend a disproportionate amount of resources on overcoming.