Posts

Maximize in a limited domain. Hope for the future. 2022-03-17T14:05:47.960Z
Dallas SSC Meetup #11 2021-05-26T22:19:00.959Z
Dallas SSC Meetup #10 2020-01-04T16:50:35.595Z
Dallas SSC Meetup #9 (New Time+Location) 2019-11-27T18:15:16.017Z
Dallas SSC Meetup #8 2019-10-29T16:35:27.727Z
Dallas SSC Meetup #7 2019-10-03T03:53:24.515Z
Conceptual problems with utility functions, second attempt at explaining 2018-07-21T02:08:44.598Z
Conceptual problems with utility functions 2018-07-11T01:29:42.585Z
Are long-term investments a good way to help the future? 2018-04-30T14:41:56.640Z

Comments

Comment by Dacyn on Should we maximize the Geometric Expectation of Utility? · 2024-04-17T21:10:55.689Z · LW · GW

If you find yourself thinking about the differences between geometric expected utility and expected utility in terms of utility functions, remind yourself that, for any utility function, one can choose *either* averaging method.

No, you can only use the geometric expected utility for nonnegative utility functions.

Comment by Dacyn on Claude wants to be conscious · 2024-04-15T22:51:19.602Z · LW · GW

It's obvious to us that the prompts are lying; how do you know it isn't also obvious to the AI? (To the degree it even makes sense to talk about the AI having "revealed preferences")

Comment by Dacyn on Protestants Trading Acausally · 2024-04-01T22:44:29.612Z · LW · GW

Calvinists believe in predestination, not Protestants in general.

Comment by Dacyn on The Worst Form Of Government (Except For Everything Else We've Tried) · 2024-03-18T18:15:03.901Z · LW · GW

Wouldn't that mean every sub-faction recursively gets a veto? Or do the sub-faction vetoes only allow the sub-faction to veto the faction veto, rather than the original legislation? The former seems unwieldy, while the latter seems to contradict the original purpose of DVF...

Comment by Dacyn on Simulation arguments · 2024-02-20T18:23:01.230Z · LW · GW

(But then: aren’t there zillions of Boltzmann brains with these memories of coherence, who are making this sort of move too?)

According to standard cosmology, there are also zillions of actually coherent copies of you, and the ratio is heavily tilted towards the actually coherent copies under any reasonable way of measuring. So I don't think this is a good objection.

Comment by Dacyn on Abs-E (or, speak only in the positive) · 2024-02-20T13:11:28.108Z · LW · GW

“Only food that can be easily digested will provide calories”

That statement would seem to also be obviously wrong. Plenty of things are ‘easily digested’ in any reasonable meaning of that phrase, while providing ~0 calories.

I think you've interpreted this backwards; the claim isn't that "easily digested" implies "provides calories", but rather that "provides calories" implies "easily digested".

Comment by Dacyn on Abs-E (or, speak only in the positive) · 2024-02-20T13:06:12.690Z · LW · GW

In constructivist logic, proof by contradiction must construct an example of the mathematical object which contradicts the negated theorem.

This isn't true. In constructivist logic, if you are trying to disprove a statement of the form "for all x, P(x)", you do not actually have to find an x such that P(x) is false -- it is enough to assume that P(x) holds for various values of x and then derive a contradiction. By contrast, if you are trying to prove a statement of the form "there exists x such that P(x) holds", then you do actually need to construct an example of x such that P(x) holds (in constructivist logic at least).
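
This distinction is visible in a proof assistant. In Lean 4, for instance, ¬P unfolds by definition to P → False, so a constructive refutation of a universal statement is just a function from the assumed universal proof to False (a toy example, not from the original discussion):

```lean
-- Constructively, ¬P is by definition P → False. To refute a
-- universal statement, it suffices to assume it and derive False;
-- the proof is a function consuming the assumed universal proof.
example : ¬ (∀ n : Nat, n ≠ n) :=
  fun h => h 0 rfl   -- h 0 : 0 ≠ 0, contradicted by rfl : 0 = 0
```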

Comment by Dacyn on Anthropics and the Universal Distribution · 2024-02-15T15:38:18.904Z · LW · GW

Just a technical point, but it is not true that most of the probability mass of a hypothesis has to come from "the shortest claw". You can have lots of longer claws which together have more probability mass than a shorter one. This is relevant to situations like quantum mechanics, where the claw first needs to extract you from an individual universe of the multiverse, and that costs a lot of bits (more than just describing your full sensory data would cost), but from an epistemological point of view there are many possible such universes that you might be a part of.

Comment by Dacyn on Ten Modes of Culture War Discourse · 2024-02-05T13:34:07.593Z · LW · GW

As I understood it, the whole point is that the buyer is proposing C as an alternative to A and B. Otherwise, there is no advantage to him downplaying how much he prefers A to B / pretending to prefer B to A.

Comment by Dacyn on Ten Modes of Culture War Discourse · 2024-02-02T21:59:39.046Z · LW · GW

Hmm, the fact that C and D are even on the table makes it seem less collaborative to me, even if you are only explicitly comparing A and B. But I guess it is kind of subjective.

Comment by Dacyn on Ten Modes of Culture War Discourse · 2024-02-02T13:41:07.777Z · LW · GW

It seems weird to me to call a buyer and seller's values aligned just because they both prefer outcome A to outcome B, when the buyer prefers C > A > B > D and the seller prefers D > A > B > C, which are almost exactly misaligned. (Here A = sell at current price, B = don't sell, C = sell at lower price, D = sell at higher price.)

Comment by Dacyn on Ten Modes of Culture War Discourse · 2024-02-01T16:39:35.310Z · LW · GW

Isn't the fact that the buyer wants a lower price proof that the seller and buyer's values aren't aligned?

Comment by Dacyn on [deleted post] 2024-01-28T13:03:01.749Z

You're right that "Experiencing is intrinsically valuable to humans". But why does this mean humans are irrational? It just means that experience is a terminal value. But any set of terminal values is consistent with rationality.

Comment by Dacyn on What makes teaching math special · 2023-12-18T13:34:21.865Z · LW · GW

Of course, from a pedagogical point of view it may be hard to explain why the "empty function" is actually a function.

Comment by Dacyn on ChatGPT 4 solved all the gotcha problems I posed that tripped ChatGPT 3.5 · 2023-11-30T17:04:04.687Z · LW · GW

When you multiply two prime numbers, the product will have at least two distinct prime factors: the two prime numbers being multiplied.

Technically, it is not true that the prime numbers being multiplied need to be distinct. For example, 2*2=4 is the product of two prime numbers, but it is not the product of two distinct prime numbers.

As a result, it is impossible to determine the sum of the largest and second largest prime numbers, since neither of these can be definitively identified.

This seems wrong: "neither can be definitively identified" makes it sound like they exist but just can't be identified...

Safe primes are a subset of Sophie Germain primes

Not true, e.g. 7 is safe but not Sophie Germain.
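
The two definitions are easy to mix up; a quick check with a naive trial-division primality test confirms the counterexample:

```python
# A safe prime p has (p-1)/2 prime; a Sophie Germain prime p has 2p+1 prime.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def is_safe(p):
    return is_prime(p) and (p - 1) % 2 == 0 and is_prime((p - 1) // 2)

def is_sophie_germain(p):
    return is_prime(p) and is_prime(2 * p + 1)

# 7 is safe (since 3 is prime) but not Sophie Germain (since 15 = 3*5):
print(is_safe(7), is_sophie_germain(7))  # True False
```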

Comment by Dacyn on If a little is good, is more better? · 2023-11-09T15:01:25.306Z · LW · GW

OK, that makes sense.

Comment by Dacyn on If a little is good, is more better? · 2023-11-07T18:21:29.984Z · LW · GW

OK, that's fair, I should have written down the precise formula rather than an approximation. My point though is that your statement

the expected value of X happening can be high when it happens a little (because you probably get the good effects and not the bad effects Y)

is wrong because a low probability of large bad effects can swamp a high probability of small good effects in expected value calculations.

Comment by Dacyn on If a little is good, is more better? · 2023-11-05T15:29:20.301Z · LW · GW

Yeah, but the expected value would still be .

Comment by Dacyn on Multi-Winner 3-2-1 Voting · 2023-10-31T16:14:15.167Z · LW · GW

I don't see why you say Sequential Proportional Approval Voting gives little incentive for strategic voting. If I am confident a candidate I support is going to be elected in the first round, it's in my interest not to vote for them so that my votes for other candidates I support will count for more. Of course, if a lot of people think like this then a popular candidate could actually lose, so there is a bit of a brinksmanship dynamic going on here. I don't think that is a good thing.

Comment by Dacyn on Hyperreals in a Nutshell · 2023-10-16T14:51:42.288Z · LW · GW

The definition of a derivative seems wrong. For example, suppose that f(x) = 0 for rational x but f(x) = 1 for irrational x. Then f is not differentiable anywhere, but according to your definition it would have a derivative of 0 everywhere (since the infinitesimal increment could consist of a sequence of only rational numbers).
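
A small numerical sketch of the counterexample, using exact rationals as a stand-in for "rational inputs" (an illustrative assumption, not part of the original comment):

```python
from fractions import Fraction

# A Dirichlet-style function: 0 at (representations of) rationals, 1 elsewhere.
def f(x):
    return 0 if isinstance(x, Fraction) else 1

# Difference quotients at a rational point along a purely rational sequence of
# shrinking increments: identically 0, even though the function is nowhere
# differentiable in the classical sense.
x = Fraction(1, 2)
quotients = [(f(x + Fraction(1, 10**n)) - f(x)) / Fraction(1, 10**n)
             for n in range(1, 6)]
print(all(q == 0 for q in quotients))  # True
```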

Comment by Dacyn on Think Like Reality · 2023-09-25T14:58:42.694Z · LW · GW
Comment by Dacyn on Think Like Reality · 2023-09-25T14:58:31.805Z · LW · GW

But if they are linearly independent, then they evolve independently, which means that any one of them, alone, could have been the whole thing—so why would we need to postulate the other worlds? And anyway, aren’t the worlds supposed to be interacting?

Can't this be answered by an appeal to the fact that the initial state of the universe is supposed to be low-entropy? The wavefunction corresponding to one of the worlds, run back in time to the start of the universe, would have higher entropy than the wavefunction corresponding to all of them together, so it's not as good a candidate for the starting wavefunction of the universe.

Comment by Dacyn on The Dick Kick'em Paradox · 2023-09-25T10:43:48.348Z · LW · GW

No, the whole premise of the face-reading scenario is that the agent can tell that his face is being read, and that's why he pays the money. If the agent can't tell whether his face is being read, then his correct action (under FDT) is to pay the money if and only if (probability of being read) times (utility of returning to civilization) is greater than (utility of the money). Now, if this condition holds but in fact the driver can't read faces, then FDT does pay the $50, but this is just because it got unlucky, and we shouldn't hold that against it.
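
The threshold condition described here can be written out directly (a toy sketch; the probability and utility numbers are made up for illustration, not part of the original scenario):

```python
# Under uncertainty about face-reading, FDT pays iff
# P(face is read) * U(returning to civilization) > U(keeping the money).
def should_pay(p_face_read, u_reach_city, u_keep_money):
    return p_face_read * u_reach_city > u_keep_money

print(should_pay(0.9, 1000, 50))   # True: pay the $50
print(should_pay(0.01, 1000, 50))  # False: keep the money
```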

Comment by Dacyn on The Dick Kick'em Paradox · 2023-09-24T15:32:11.689Z · LW · GW

In your new dilemma, FDT does not say to pay the $50. It only says to pay when the driver's decision of whether or not to take you to the city depends on what you are planning to do when you get to the city. Which isn't true in your setup, since you assume the driver can't read faces.

Comment by Dacyn on One Minute Every Moment · 2023-09-03T15:50:16.029Z · LW · GW

a random letter contains about 7.8 (bits of information)

This is wrong, a random letter contains log(26)/log(2) = 4.7 bits of information.
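
The arithmetic: a uniformly random draw from a 26-letter alphabet carries log2(26) bits of entropy:

```python
import math

# Entropy of a uniformly random letter from a 26-letter alphabet.
bits = math.log2(26)
print(round(bits, 1))  # 4.7
```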

Comment by Dacyn on Newcomb Variant · 2023-08-30T18:15:42.105Z · LW · GW

This only works if Omega is willing to simulate the Yankees game for you.

Comment by Dacyn on Ten variations on red-pill-blue-pill · 2023-08-19T23:21:24.698Z · LW · GW
Comment by Dacyn on Having a headache and not having a headache · 2023-06-21T17:36:12.986Z · LW · GW

I have tinnitus every time I think about the question of whether I have tinnitus. So do I have tinnitus all the time, or only the times when I notice?

Comment by Dacyn on Deontological Norms are Unimportant · 2023-05-19T17:27:23.608Z · LW · GW

I was confused at first what you meant by "1 is true" because when you copied the post from your blog you didn't copy the numbering of the claims. You should probably fix that.

Comment by Dacyn on Explaining “Hell is Game Theory Folk Theorems” · 2023-05-07T16:51:18.680Z · LW · GW

The number 99 isn’t unique—this works with any payoff between 30 and 100.

Actually, it only works with payoffs below 99.3 -- this is the payoff you get by setting the dial to 30 every round while everyone else sets their dials to 100, so any Nash equilibrium must beat that. This was mentioned in jessicata's original post.

Incidentally, this feature prevents the example from being a subgame perfect Nash equilibrium -- once someone defects by setting the dial to 30, there's no incentive to "punish" them for it, and any attempt to create such an incentive via a "punish non-punishers" rule would run into the trouble that punishment is only effective up to the 99.3 limit.
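
The 99.3 figure is easy to verify directly, assuming (as in the original post) 100 players whose shared disutility is the room temperature, equal to the average of all dial settings:

```python
# Room temperature is the average of the 100 dial settings (each in [30, 100]).
def temperature(dials):
    return sum(dials) / len(dials)

# A lone defector sets 30 while the other 99 players set 100:
defector_temp = temperature([30] + [100] * 99)
print(defector_temp)  # 99.3
```

Any punishment scheme can therefore impose at most temperature 100 on the defector, only marginally worse than the 99.3 they already endure.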

Comment by Dacyn on [deleted post] 2023-05-05T17:00:37.326Z

It's part of the "frontpage comment guidelines" that show up every time you make a comment. They don't appear on GreaterWrong though, which is why I guess you can't see them...

Comment by Dacyn on Votes-per-Dollar · 2023-04-12T14:03:24.767Z · LW · GW

I explained the problem with the votes-per-dollar formula in my first post. 45% of the vote / $1 >> 55% of the vote / $2, so it is not worth it for a candidate to spend money even if they can buy 10% of the vote for $1 (which is absurdly unrealistically high).

When I said maybe a formula would help, I meant a formula to explain what you mean by "coefficient" or "effective exchange rate". The formula "votes / dollars spent" doesn't have a coefficient in it.

If one candidate gets 200 votes and spends 200 dollars, and candidate 2 gets 201 votes and spends two MILLION dollars, who has the strongest mandate, in the sense that the representative actually represents the will of the people when wealth differences are ignored?

Sure, and my proposal of Votes / (10X + Y) would imply that the first candidate wins.
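
A quick sanity check of that claim; the average-spending estimate X is an arbitrary assumed value here:

```python
# Hypothetical illustration of the Votes/(10X + Y) scoring rule, where Y is a
# candidate's own spending and X is an estimate of average campaign spending.
def score(votes, own_spending, avg_spending):
    return votes / (10 * avg_spending + own_spending)

X = 1_000_000  # assumed average-spending estimate, in dollars
candidate_1 = score(200, 200, X)          # 200 votes, $200 spent
candidate_2 = score(201, 2_000_000, X)    # 201 votes, $2M spent
print(candidate_1 > candidate_2)  # True: the first candidate wins
```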

Comment by Dacyn on Votes-per-Dollar · 2023-04-11T15:40:38.101Z · LW · GW

I don't think the data dependency is a serious problem, all we need is a very loose estimate. I don't know what you mean by a "spending barrier" or by "effective exchange rate", and I still don't know what coefficient you are talking about. Maybe it would help if you wrote down some formulas to explain what you mean.

Comment by Dacyn on Votes-per-Dollar · 2023-04-10T20:17:05.560Z · LW · GW

I don't understand what you mean; multiplying the numerator by a coefficient wouldn't change the analysis. I think if you wanted to have a formula that was somewhat sensitive to campaign spending but didn't rule out campaign spending completely as a strategy, Votes/(10X+Y) might work, where Y is the amount spent of campaign spending, and X is an estimate of average campaign spending. (The factor of 10 is because campaign spending just isn't that large a factor to how many votes you get in absolute terms; it's easy to get maybe 45% of the vote with no campaign spending at all, just by having (D) or (R) in front of your name.)

Comment by Dacyn on Votes-per-Dollar · 2023-04-10T15:22:14.895Z · LW · GW

The result of this will be that no one will spend more than the $1 minimum. It's just not worth it. So your proposal is basically equivalent to banning campaign spending.

Comment by Dacyn on Don't take bad options away from people · 2023-03-27T19:43:07.481Z · LW · GW

I wonder whether this one is true (and can be easily proved): For a normal form game G and actions a_i for a player i, removing a set of actions a_i^- from the game yields a game G^- in which the Nash equilibria are worse on average for i (or alternatively the Pareto-best/Pareto-worst Nash equilibrium is worse for G^- than for G).

It's false: consider the normal form game

(0,0) (2,1)

(1,1) (3,0)

For the first player the first option is dominated by the second, but once the second player knows the first player is going to choose the second option, he's motivated to take the first option. Removing the first player's second option means the second player is motivated to take the second option, yielding a higher payoff for the first player.
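
This can be verified by brute force over pure strategies (a small sketch; `pure_nash` is a helper written for this example):

```python
from itertools import product

# Pure-strategy Nash equilibria of a bimatrix game given as a dict
# {(row, col): (payoff_to_player1, payoff_to_player2)}.
def pure_nash(game, rows, cols):
    eqs = []
    for r, c in product(rows, cols):
        u1, u2 = game[(r, c)]
        best1 = all(game[(r2, c)][0] <= u1 for r2 in rows)  # no profitable row deviation
        best2 = all(game[(r, c2)][1] <= u2 for c2 in cols)  # no profitable column deviation
        if best1 and best2:
            eqs.append((r, c))
    return eqs

full = {(0, 0): (0, 0), (0, 1): (2, 1),
        (1, 0): (1, 1), (1, 1): (3, 0)}
print(pure_nash(full, [0, 1], [0, 1]))       # [(1, 0)]: player 1 gets payoff 1

reduced = {(0, 0): (0, 0), (0, 1): (2, 1)}   # player 1's second option removed
print(pure_nash(reduced, [0], [0, 1]))       # [(0, 1)]: player 1 gets payoff 2
```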

Comment by Dacyn on Veganism and Acausal Trade · 2023-03-05T23:23:19.240Z · LW · GW

Not eating meat is not a Pascal's mugging because there is a solid theoretical argument for why the expected value is positive even if the payoff distribution is somewhat unbalanced: if a large number of people decide not to eat meat, then this will necessarily have the effect of shifting production, for supply to meet demand. Since you have no way of knowing where you are in that large ensemble, the expected value of you not eating meat is equal to the size of the effect divided by the number of people in the ensemble, which is presumably what we would expect the value of not eating meat to be under a naive calculation. There's really nothing mysterious about this, unlike the importance of the choice of a Solomonoff prior in a Pascal's mugger argument.

Comment by Dacyn on Speedrunning 4 mistakes you make when your alignment strategy is based on formal proof · 2023-02-17T18:23:17.478Z · LW · GW

A proof you don’t understand does not obligate you to believe anything; it is Bayesian evidence like anything else. If an alien sends a 1GB Coq file Riemann.v, running it on your computer does not obligate you to believe that the Riemann hypothesis is true. If you’re ever in that situation, do not let anyone tell you that Coq is so awesome that you don’t roll to disbelieve. 1GB of plaintext is too much, you’ll get exhausted before you understand anything. Do not ask the LLM to summarize the proof.

I'm not sure what you are trying to say here. Even with 1GB I imagine the odds of a transistor failure during the computation would still be astronomically low (though I'm not sure how to search for good data on this). What other kinds of failure modes are you imagining? The alien file actually contains a virus to corrupt your hardware and/or operating system? The file is a proof not of RH but of some other statement? (The latter should be checked, of course.)

Comment by Dacyn on Why should ethical anti-realists do ethics? · 2023-02-17T17:27:53.214Z · LW · GW

Thus, for example, intransitivity requires giving up on an especially plausible Stochastic Dominance principle, namely: if, for every outcome o and probability of that outcome p in Lottery A, Lottery B gives a better outcome with at least p probability, then Lottery B is better (this is very similar to “If Lottery B is better than Lottery A no matter what happens, choose Lottery B” – except it doesn’t care about what outcomes get paired with heads, and which with tails).

This principle is phrased incorrectly. Taken literally, it would imply that the mixed outcome "utility 0 with probability 0.5, utility 1 with probability 0.5" is dominated by "utility 2 with probability 0.5, utility -100 with probability 0.5". What you probably want to do is to add the condition that the function f mapping each outcome to a better outcome is injective (or equivalently, bijective). But in that case, it is impossible for f(o) to occur with probability strictly greater than the probability of o, since the probability of each f(o) is at least that of the corresponding o, and both collections of probabilities must sum to 1.

Comment by Dacyn on Inequality Penalty: Morality in Many Worlds · 2023-02-12T16:18:57.200Z · LW · GW

It seems like "equally probable MWI microstates" is doing a lot of work here. If we have some way of determining how probable a microstate is, then we are already assuming the Born probabilities. So it doesn't work as a method of deriving them.

Comment by Dacyn on Inequality Penalty: Morality in Many Worlds · 2023-02-11T16:18:10.754Z · LW · GW

That quote seems nonsensical. What do the Born probabilities have to do with a counting argument, or with the dimension of Hilbert space? A qubit lives in a two-dimensional space, so a dimension argument would seem to suggest that the probabilities of the qubit being 0 or 1 must both be 50%, and yet in reality the Born probabilities say they can be anything from 0% to 100%. What am I missing?

Comment by Dacyn on The Cabinet of Wikipedian Curiosities · 2023-01-26T16:46:01.747Z · LW · GW

-"“Percentage of marriages that end in divorce” is an underspecified concept. There is only “percentage of marriages that end in divorce after n years”. "

The concept is perfectly well specified, just take n to be e.g. 75. But of course, it can only be measured for cohorts that are at least that old. Still, I would have assumed it possible to do some extrapolation to estimate what the value will be for younger cohorts (e.g. the NYT article you linked to says "About 60 percent of all marriages that eventually end in divorce do so within the first 10 years", so it should be possible to get a reasonable estimate of the percentage of marriages that end in divorce for cohorts at least 10 years old).

Anyway, the article says "The method preferred by social scientists in determining the divorce rate is to calculate how many people who have ever married subsequently divorced. Counted that way, the rate has never exceeded about 41 percent, researchers say." Obviously this method gives an underestimate of the total percentage of marriages to end in divorce. And even with that underestimate, the rate has gotten up to 41%. So I don't think the oft-quoted statistic is wildly off, even if it is based on bad methodology.

Comment by Dacyn on The Cabinet of Wikipedian Curiosities · 2023-01-25T18:30:42.282Z · LW · GW

-"If you have heard that “40% of marriages end in divorce” or some similar figure, you are probably misinterpreting the divorce-to-marriage ratio. "

Really? So what is the right number then? A cursory Google search shows 40-50% is a commonly repeated figure for "percentage of marriages that end in divorce", are you really claiming that all of those webpages are misinterpreting the divorce-to-marriage ratio? What is the basis for such a claim? It does not appear to be in the Wikipedia article, which says nothing about the percentage of marriages that end in divorce.

Comment by Dacyn on [deleted post] 2023-01-16T16:24:35.986Z

To explain my disagree-vote: I think such a system would necessarily create a strong bias against downvotes/disagree-votes, since most people would just not downvote rather than making a justifying comment. "Beware trivial inconveniences"

Comment by Dacyn on Basic building blocks of dependent type theory · 2023-01-06T17:33:59.384Z · LW · GW

How so? It looks like you have confused with . In this example we have , so the type of is .

Comment by Dacyn on Contra Common Knowledge · 2023-01-05T22:09:03.885Z · LW · GW

The infinite autoresponse example seems like it would be solved in practice by rational ignorance: after some sufficiently small number of autoresponses (say 5) people would not want to explicitly reason about the policy implications of the specific number of autoresponses they saw, so "5+ autoresponses" would be a single category for decisionmaking purposes. In that case the induction argument fails and "both people go to the place specified in the message as long as they observe 5+ autoresponses" is a Nash equilibrium.

Of course, this assumes people haven't already accepted and internalized the logic of the induction argument, since then no further explicit reasoning would be necessary based on the observed number of autoresponses. But the induction argument presupposes that rational ignorance does not exist, so it is not valid when we add rational ignorance to our model.

Comment by Dacyn on A kernel of Lie theory · 2023-01-04T16:52:23.559Z · LW · GW

-"this is just the lie algebra, and is why elements of it are always invertible."

First of all, how did we move from talking about numbers to talking about Lie algebras? What is the Lie group here? The only way I can make sense of your statement is if you are considering the case of a Lie subgroup of GL(n,R) for some n, and letting 1 denote the identity matrix (rather than the number 1) [1]. But then...

Shouldn't the Lie algebra be the monad of 0, rather than the monad of 1? Because usually Lie algebras are defined in terms of being equipped with two operations, addition and the Lie bracket. But neither the sum nor the Lie bracket of two elements of the monad of 1 are in the monad of 1.

[1] Well, I suppose you could be considering just the special case n=1, in which case 1 the identity matrix and 1 the number are the same thing. But then why bother talking about Lie algebras? The group is commutative, so the formalism does not appear to be necessary.

Comment by Dacyn on A kernel of Lie theory · 2023-01-04T16:36:03.519Z · LW · GW

-"On any finite dim space we have a canon inner product by taking the positive definite one."

What? A finite dimensional space has more than one positive definite inner product (well, unless it is zero-dimensional), this choice is certainly not canonical. For example in R^2 any ellipse centered at the origin corresponds to a positive definite inner product.

Comment by Dacyn on Friendly and Unfriendly AGI are Indistinguishable · 2022-12-30T21:11:48.375Z · LW · GW

I know this is not your main point, but the Millennium Problems are not an instance of a way for an AGI to quickly get money. From the CMI website:

-"Before CMI will consider a proposed solution, all three of the following conditions must be satisfied: (i) the proposed solution must be published in a Qualifying Outlet (see §6), and (ii) at least two years must have passed since publication, and (iii) the proposed solution must have received general acceptance in the global mathematics community"

Comment by Dacyn on Logical Probability of Goldbach’s Conjecture: Provable Rule or Coincidence? · 2022-12-30T00:16:48.597Z · LW · GW

-"The more time passes, the more evidence we get that (4) is false from computational tests and also the more we should doubt any proof (3) as its complexity grows. Therefore, the logical probability of (1) and (2) is growing over time."

The fact that the methods we use to check proofs are not perfectly reliable (and I think you greatly overstate the importance of this consideration, computer proof checkers are very reliable) does not at all imply that the probability that a proof exists is decreasing over time. You need to distinguish between the fact of a proof existing (which is what Godel's theorem is about) and the fact of us being able to check a proof.

-"In short, coincidence theory has chances of being false 1 to 10^−3700 and any future formal proof has chances to be false around 1 to 100. Thus, I still should bet on the coincidence theory, even if the formal proof is convincing and generally accepted."

Neither of these numbers are right. The ratio 1 to 10^−3700 is not the chance that coincidence theory is false, but rather the chance that GC is false given that either GC is false or coincidence theory is true. And I don't know where you got the ratio 1 to 100 from, but it is not remotely plausible; out of the many thousands of formal proofs which have gained widespread acceptance since the advent of formality in mathematics, I am not sure if there has been even one which was wrong. (I suppose it may come down to exactly how you define your terms.)

Since this seems to be the core of your argument, I guess I have disproved your thesis?

-"(I assume here that there is only one correct proof, but actually could be many different valid proofs, but not for such complex and well-researched topics as GC.)"

Why would you assume that? There are many theorems with multiple non-equivalent valid proofs, even for "such complex and well-researched topics as GC". This is leaving aside that "non-equivalent" for proofs is an ill-defined concept, and any proof can always be transformed into a new proof via trivial changes.

-"Fermat theorem is surprising, but GC is unsurprising from probabilistic reasons alone."

Fermat's theorem is also unsurprising from probabilistic reasons alone.

-"One may argue that GC is so strong for the numbers below 100 that we should assume that there is a rule."

Actually, the probabilistic arguments already imply that GC is likely (although with less dramatic odds) even for numbers less than 100, especially since 3, 5, and 7 are all prime.
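
A brute-force check illustrates how overdetermined GC is even in this small range:

```python
# Verify Goldbach's conjecture for every even number from 4 to 100,
# and count the representations as unordered sums of two primes.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

primes = [p for p in range(2, 100) if is_prime(p)]

def goldbach_pairs(n):
    return [(p, n - p) for p in primes if p <= n - p and is_prime(n - p)]

assert all(goldbach_pairs(n) for n in range(4, 101, 2))
print(goldbach_pairs(100))  # [(3, 97), (11, 89), (17, 83), (29, 71), (41, 59), (47, 53)]
```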

-"E.g. twin primes are an example of non-randomness."

Actually, the standard random model of the primes predicts that there are infinitely many twin primes.