Comments

Comment by Luke A Somers on Contra EY: Can AGI destroy us without trial & error? · 2022-06-14T13:08:22.683Z · LW · GW

It seems like you're relying on the existence of exponentially hard problems to mean that taking over the world is going to BE an exponentially hard problem. But you don't need to solve every problem. You just need to take over the world.

Like, okay, the three-body problem is 'incomputable' only in the sense that it has no general closed-form solution and, in many cases, chaotically sensitive dependence on initial conditions. So… don't rely on specific behavior in those cases over long time horizons without the ability to make small adjustments to keep things on track.
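To make the kind of sensitivity at issue concrete, here's a toy sketch (my own illustration, using the logistic map as a standard stand-in for a chaotic system, not the three-body problem itself): two initial conditions differing by one part in a million stay close for a few steps and then decorrelate completely.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) at r=4, a standard toy chaotic system.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # initial condition perturbed by 1e-6

# The gap is still tiny after 5 steps, but the error roughly doubles
# each step, so by step 30 the two trajectories are unrelated.
early_gap = abs(a[5] - b[5])
late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
```

This is exactly why "don't rely on specific behavior on long time horizons" is the natural workaround: short-horizon prediction stays cheap even when long-horizon prediction is hopeless.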

If the AI can detect most of the hard cases and avoid relying on them, and build in robustness through multiple alternate mechanisms and backup plans, then even 94% success on individual subproblems could translate into a much higher success rate for the overall plan.
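The arithmetic behind that claim is simple redundancy math (a sketch with my own numbers, assuming the backup mechanisms fail independently):

```python
# If each of n independent mechanisms succeeds with probability p,
# the step fails only if every one of them fails.
def overall_success(p, n):
    return 1 - (1 - p) ** n

# One 94%-reliable mechanism:
single = overall_success(0.94, 1)  # 0.94
# Three independent alternatives for the same step:
triple = overall_success(0.94, 3)  # 1 - 0.06**3 = 0.999784
```

Independence is the load-bearing assumption here; correlated failure modes across the backups would erode the gain.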

Comment by Luke A Somers on Contra EY: Can AGI destroy us without trial & error? · 2022-06-14T12:58:56.575Z · LW · GW

The distribution of outcomes is much more achievable and much more useful than determining the one true way some specific thing will evolve. Like, it's actually achievable in principle, unlike making a specific point-like prediction of where a molecular ensemble is going to be given a starting configuration (quantum uncertainty rules that out, not merely chaos). And it's actually useful, in that it shows which configurations have tightly distributed outcomes and which don't, unlike that specific point-like prediction.
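The distinction can be sketched with a toy ensemble (my own illustration, again using the logistic map as a stand-in): the outcome of any single perturbed trajectory is unpredictable, yet statistics of the whole ensemble are stable and informative.

```python
import random

# Predicting the distribution of outcomes rather than one trajectory:
# run an ensemble of slightly perturbed initial conditions through a
# toy chaotic map and look at the spread of the results.
def step(x, r=4.0):
    return r * x * (1 - x)

def ensemble_outcomes(x0, n=1000, noise=1e-6, steps=40, seed=0):
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        x = x0 + rng.uniform(-noise, noise)
        for _ in range(steps):
            x = step(x)
        outcomes.append(x)
    return outcomes

outs = ensemble_outcomes(0.3)
# A pointlike prediction is hopeless: the outcomes span the interval.
spread = max(outs) - min(outs)
# But a distributional statistic (fraction of outcomes above 0.5)
# is stable across ensembles and tells you how dispersed things are.
frac_high = sum(o > 0.5 for o in outs) / len(outs)
```

A configuration with a tightly clustered `outs` would be one where prediction is feasible; a wide spread like this one flags exactly the cases to avoid relying on.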

Comment by Luke A Somers on A voting theory primer for rationalists · 2018-04-12T17:55:45.362Z · LW · GW

I see. I figured U/A meant something like that. I think it's potentially useful to consider that case, but I wouldn't design a system entirely around it.

Comment by Luke A Somers on A voting theory primer for rationalists · 2018-04-12T17:24:01.758Z · LW · GW

In terms of explaining the result, I think Schulze is much better. You can do that very compactly and with only simple, understandable steps. The best I can see doing with Ranked Pairs is more time-consuming, and the steps are potentially more complicated.
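To illustrate how compact the Schulze computation is, here is a sketch from the standard description of the method (not from this thread): given the pairwise preference counts, the strongest-path strengths fall out of a Floyd-Warshall-style pass, and the winners are read off directly.

```python
# Schulze method sketch. d[i][j] = number of voters preferring
# candidate i to candidate j. Compute strongest-path strengths
# p[i][j] and return the candidates beaten by no one.
def schulze_winners(d):
    n = len(d)
    # Only pairwise victories seed a path.
    p = [[d[i][j] if d[i][j] > d[j][i] else 0 for j in range(n)]
         for i in range(n)]
    # A path's strength is its weakest link; keep the strongest path.
    for k in range(n):
        for i in range(n):
            if i == k:
                continue
            for j in range(n):
                if j != i and j != k:
                    p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    return [i for i in range(n)
            if all(p[i][j] >= p[j][i] for j in range(n) if j != i)]
```

With a Condorcet winner (say, three voters: two rank A>B>C, one ranks B>C>A) the method picks A; with a symmetric three-way cycle it returns all three candidates tied, which is the kind of step-by-step result that's easy to walk an audience through.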

As far as promotion is concerned, I haven't run into it; since Schulze is so similar to RP, I think non-algorithmic factors like the ones I mentioned above begin to be more important.

~~~~

The page you linked there has some undefined terms like u/a (it says it's defined in previous articles, but I don't see a link).

>it certainly doesn’t prevent Beatpath (and other TUC methods) from being a strategic mess, without known strategy,

Isn't that a… good thing? Given the fog of reality, if strategy looks like a 60% chance of stabbing yourself, a 30% chance of accomplishing nothing, and a 10% chance of getting what you want… how is that a bad trait for a system to have?

In particular, as far as strategic messes are concerned, I would definitely feel more pressure to use an equivocation strategy in SICT than in beatpath (Schulze), because it would feel a lot less drastic/scary/risky.

Comment by Luke A Somers on A voting theory primer for rationalists · 2018-04-12T16:59:30.901Z · LW · GW

What are the improved Condorcet methods you're thinking of? I do recall seeing that Ranked Pairs and Schulze have very favorable strategy-backfire to strategy-works ratios in simulations, but I don't know for sure what you're thinking of. If those are it, then, approached right, Schulze isn't that hard to work through to demonstrate an election result (Wikipedia now has a worked example).

Comment by Luke A Somers on Global insect declines: Why aren't we all dead yet? · 2018-04-08T03:15:13.768Z · LW · GW

95% of the sperm reaching the endpoint, then, if they're not independent.

Comment by Luke A Somers on Global insect declines: Why aren't we all dead yet? · 2018-04-03T14:20:45.136Z · LW · GW

And, like with sperm, it may be that there were many more insects than needed to fulfill their role? Like, if 20 sperm reach an egg, you can lose 95% of them and end up just as pregnant.

Comment by Luke A Somers on Corrigible but misaligned: a superintelligent messiah · 2018-04-03T14:17:04.468Z · LW · GW

That dialogue reminds me of some scenes from Friendship is Optimal, only even more morally off-kilter than CelestAI, which is saying something.

Comment by Luke A Somers on April Fools: Announcing: Karma 2.0 · 2018-04-02T18:50:20.556Z · LW · GW

I have no RSS monkeying going on, and Wei Dai and Kaj Sotala have the same font size as you or me.

Comment by Luke A Somers on April Fools: Announcing: Karma 2.0 · 2018-04-02T18:46:42.628Z · LW · GW

Instructions unclear, comment stuck in ceiling fan?

Comment by Luke A Somers on The Math Learning Experiment · 2018-03-27T15:48:08.596Z · LW · GW

That does not always produce a reduced fraction, of course. To do that, you need to go find a GCF just like before... but I agree, that should be presented as an *optimization* after teaching the basic idea.
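The procedure under discussion can be sketched like this (my own illustration): add by cross-multiplying to a common denominator first, then reduce with the GCF as a separate final step.

```python
from math import gcd

# Add fractions the "basic idea" way: cross-multiply to a common
# denominator, then reduce with the GCD as a separate final step.
def add_fractions(a, b, c, d):
    """Return a/b + c/d as a reduced (numerator, denominator) pair."""
    num = a * d + c * b  # common-denominator numerator
    den = b * d          # common denominator
    g = gcd(num, den)    # the reduction step, taught afterward
    return num // g, den // g

# 1/4 + 1/4 gives 8/16 before reduction; the GCD step brings it to 1/2.
```

Without the final `gcd` step the answer is still correct, just unreduced, which is why it works as an optimization layered on after the basic idea.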

Comment by Luke A Somers on Arguments about fast takeoff · 2018-03-11T22:41:35.720Z · LW · GW

Where is that second quote from? I can't find it here.

Comment by Luke A Somers on Write a Thousand Roads to Rome · 2018-02-10T01:37:00.544Z · LW · GW

Mostly, yes. Feynman gets a lot of credit for making QED comprehensible, even though he didn't create it in the first place.