Comments

Comment by Dweomite on The Talk: a brief explanation of sexual dimorphism · 2023-09-20T21:58:02.105Z · LW · GW

Typos:

  • You have two parts labeled "part 3" and no "part 2"
  • The final summary says "the two types diverged into big gametes and large gametes"
Comment by Dweomite on Ethics Needs A Marginal Revolution · 2023-09-16T04:42:50.216Z · LW · GW

I'm not sure if I'm representative, but I don't think this does anything to address my personal intuitive dislike of the repugnant conclusion.

I think I intuitively agree that the world with tons of less-happy people is indeed more valuable; this could be affirmed by e.g. supposing that both worlds already exist, but both are in danger of destruction, and I only have the resources to save one of them.  I think I would save the huge number of people rather than the small number of people, even if the huge number of people are less happy.  (I suspect a lot of people would do the same.)

My intuitive problem is with the idea that there's a duty to create people who don't already exist.  I intuitively feel like nonexistent people shouldn't be able to make demands of me, no matter how good an exchange rate they can get from my happiness to theirs.  (And similarly, that nonexistent people shouldn't be able to demand that I make some existing third party less happy, no matter how much it helps the nonexistent person.)

In fact, I feel like I have kind of a nexus of confused intuitions regarding

  • Ethics offsets
  • The difference between ethically-good and ethically-obligatory
  • The difference between it being good for Alice to give X to Bob, and it being good for Bob to take X from Alice without her consent

This seems like it might be some sort of collision between an axiology frame and a bargaining frame?  Like there's a difference between "which of these two states is better?" and "do you have the right to unilaterally move the world from the worse state to the better state?"

Comment by Dweomite on The Flow-Through Fallacy · 2023-09-14T02:14:26.897Z · LW · GW

Sounds similar to fabricated options.

Comment by Dweomite on Sum-threshold attacks · 2023-09-10T00:38:01.607Z · LW · GW

It seems pretty natural to me to think of a DDoS as being a DoS (with only one "D") that has been salami-sliced up into many pieces.

One could argue that a DoS is only an abstraction and not "concrete", but one could make a similar argument about money or alliances, which Wikipedia presents as the canonical examples of salami slicing.

Comment by Dweomite on Sum-threshold attacks · 2023-09-10T00:02:48.775Z · LW · GW

"in the spirit of" a sum-threshold attack (if each perturbation just signals 0 or 1, is not context-aware, and the full ciphertext is cryptographically encrypted on top of the steganography)

By this logic, wouldn't all textual messages qualify?  The letters of this comment are individually insignificant but add up to communicating an idea.

Except they're not actually "adding", they're interacting in a structured way that isn't commutative or associative.  The same letters in a different order wouldn't "add up" to the same idea.  This isn't subdividing an action into smaller actions; it's building a complex machine that only functions as an entire unit.  It is "more than the sum of its parts."

Comment by Dweomite on Sum-threshold attacks · 2023-09-08T21:37:51.491Z · LW · GW

I believe the preexisting name is "salami slicing"

Comment by Dweomite on Defunding My Mistake · 2023-09-05T21:06:48.863Z · LW · GW

Even under the idealistic assumption that the exile-location develops some organized society that is kind and friendly and would never engage in forced labor, at that point it is effectively a separate country and you are exporting your criminals to them.  This is not so much solving the problem of handling criminals as delegating it to someone else.

Exile makes sense as a punishment when you have an unpopulated wilderness area and the person is going to live in isolation.  It stops making sense when all habitable land is already being used.

Comment by Dweomite on Defunding My Mistake · 2023-09-04T20:44:28.545Z · LW · GW

I'd just like to say that I appreciate you writing about this, and congratulations on making your beliefs a little more self-consistent.

Comment by Dweomite on The Economics of the Asteroid Deflection Problem (Dominant Assurance Contracts) · 2023-08-31T07:03:47.695Z · LW · GW

It's my impression that currently many "successful" Kickstarter projects already break their promises (most commonly by delivering very late, sometimes in more dramatic ways) and rarely suffer any consequences for this.

I have some concerns that if the penalty for missing your funding goal (i.e. losing your collateral) is worse than the penalty for funding but then failing to deliver (i.e. probably just a reputation hit), that's a bad incentive for creators.  Some creators might try to meet their funding goal artificially (e.g. by secretly contributing their own money, or by intentionally setting their goal too low) in order to save their collateral, knowing that they won't be able to deliver, but calculating that the expected penalty for failure will be less than the collateral.
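A minimal sketch of that calculation, with entirely hypothetical numbers:

```python
# Hypothetical numbers, purely to illustrate the incentive comparison above.
collateral = 5_000              # forfeited if the funding goal is missed
p_deliver = 0.2                 # creator's honest estimate of actually delivering
penalty_if_no_delivery = 3_000  # expected reputation hit from funding but not delivering

# Expected cost of artificially hitting the goal and probably failing later:
expected_cost_of_gaming = (1 - p_deliver) * penalty_if_no_delivery  # = 2400

# 2400 < 5000, so a creator with these numbers is better off gaming the goal,
# which is exactly the bad incentive I'm worried about.
print(expected_cost_of_gaming < collateral)  # True
```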

One could perhaps solve that by keeping the collateral in escrow until the project actually delivers, but that will raise transaction costs, and you'd effectively be signing up to arbitrate what counts as a "success".

Comment by Dweomite on Assume Bad Faith · 2023-08-27T04:41:20.882Z · LW · GW

My concern isn't "what words do you say when you leave", it's "how do you decide when to leave".

Comment by Dweomite on Assume Bad Faith · 2023-08-27T00:33:47.988Z · LW · GW

I would consider that deliberate deception, yes.  I interpret "deception" to mean something like "actions that are expected or intended to make someone's beliefs less accurate".

Comment by Dweomite on Assume Bad Faith · 2023-08-26T22:36:06.886Z · LW · GW

Zack says in his intro that "[people think] that if you've determined someone is in bad faith, you shouldn't even be talking to them, that you need to exile them" and then makes the counter-claim that "being touchy about bad faith accusations seems counterproductive...it shouldn't be beyond the pale to think that of some particular person, nor should it necessarily entail cutting the 'bad faith actor' out of public life."

That sounds to me like a claim that you shouldn't use bad faith as a reason to disengage.  Admittedly terms like "exile" have implications of punishment, while "walk away" has implications of cutting your losses, but in both cases the actual action being taken is "stop talking to them", right?

Also note that Zack starts with the premise that "bad faith" refers to both deception and bias, and then addresses a "deception only" interpretation later on as a possible counter-claim.  I normally use "bad faith" to mean deception (not mere bias), my impression is that's how most people use it most of the time, and that's the version I'm defending.

(Though strong bias might also be a reason to walk away in some cases.  I am not claiming that deception is the only reason to ever walk away.)

I'll grant that "just walk away from deceivers" is a bit simplistic.  I think a full treatment of this issue would need to consider several different goals you might have in the conversation (e.g. convincing the other side, convincing an audience, gathering evidence for yourself) and how the deception would interact with each of them, which seems like it would require a post-length analysis.  But I don't think "treat it the same as bias" is strategically correct in most cases.

Comment by Dweomite on Assume Bad Faith · 2023-08-26T18:25:59.100Z · LW · GW

Note that bullshitting is only one subtype of bad faith argument.  There are other strategies of bad faith argument that don't require making untrue statements, such as cherry picking, gish galloping, making intentional logical errors, or being intentionally confusing or distracting.

Comment by Dweomite on Assume Bad Faith · 2023-08-26T04:24:08.511Z · LW · GW

I fully agree with the initial premise that bias is common.  I don't see how this supports your conclusions; especially:

(1) You say that the difference between bias and deception is uninteresting, because the main case where you might care is that bias is more likely to fold against a strong counter-argument.  But isn't that case exactly what people use the distinction for?

If I'm having a disagreement with you, and I think I could make a strong argument (at some cost in time/effort), then the question of whether you will fold to a strong argument seems like the central question in the decision to either make that argument or simply walk away.

But I thought the central point of this post was to argue that we should stop using "bad faith" as a reason for walking away?

(2) In your final example (where I point out that you've contradicted yourself and you say "look, a distraction!"), I don't see how either of your proposed responses would prevent you from continuing with "look, another distraction!"

You suggest we could stick to the object level and then the process emitting the outputs would be irrelevant.  But whether we're caught in an infinite loop seems pretty important to me, and that depends crucially on whether the distraction was strategic (in which case you'll repeat it as often as you find it helpful) or inadvertent (in which case I can probably get you back on topic).

If you are committed to giving serious consideration to everything your interlocutor says, then a bad actor can tie you up indefinitely just by continuing to say new things.  If you don't want to be tied up indefinitely, your strategy needs to include some way of ending the conversation even when the other guy doesn't cooperate.

(3) In your example of a pseudo-disagreement (about expanding a factory into wetlands), you say it's inefficient that the conflict is disguised as a disagreement.  But your example seems perfectly tailored to show that the participants should use that disguise anyway, because the parties aren't engaged in a negotiation (where the goal is to reach a compromise); they are engaged in a contest before a judge (the regulatory commission) who has predetermined to decide the issue based on how it affects the avian life.  If either side admits that the other side is correct about the question of fact, then the judge will decide against them.

Complaining that this is inefficient seems a bit like complaining that it is inefficient for the destruction of factories to reduce a country's capacity for war, and war would be more efficient if there were no incentives to destroy factories.  The participants in a war cannot just decide that factories shouldn't affect war capacity; that was decided by the laws of physics.

Comment by Dweomite on If we had known the atmosphere would ignite · 2023-08-19T20:23:53.141Z · LW · GW

"A program that can identify whether a very specific class of programs will halt" does disprove the stronger analog of the Halting Theorem that (I argued above) you'd need in order for it to make alignment impossible.

Comment by Dweomite on If we had known the atmosphere would ignite · 2023-08-19T18:24:34.207Z · LW · GW

I can write a simple program that modifies its own source code and then modifies it back to its original state, in a trivial loop.  That's acting on its own substrate while provably staying within extremely tight constraints.  Does that qualify as a disproof of your hypothesis?
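A minimal sketch of what I mean, assuming the script is run from a writable file:

```python
import pathlib

# A program that edits its own source file and then restores it, staying within
# a provably tight constraint: after every pass the file is byte-for-byte
# identical to the original.
me = pathlib.Path(__file__)
original = me.read_text()

for _ in range(10):
    me.write_text(original + "\n# temporary self-modification\n")  # modify own substrate
    me.write_text(original)                                        # restore it exactly

assert me.read_text() == original
```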

Comment by Dweomite on If we had known the atmosphere would ignite · 2023-08-19T02:00:24.461Z · LW · GW

When you say that "aligned AGI" might need to solve some impossible problem in order to function at all, do you mean

  1. Coherence is impossible; any AGI will inevitably sabotage itself
  2. Coherent AGI can exist, but there's some important sense in which it would not be "aligned" with anything, not even itself
  3. You could have an AGI that is aligned with some things, but not the particular things we want to align it with, because our particular goals are hard in some special way that makes the problem impossible
  4. You can't have a "universally alignable" AGI that accepts an arbitrary goal as a runtime input and self-aligns to that goal
  5. Something else
Comment by Dweomite on If we had known the atmosphere would ignite · 2023-08-17T21:49:06.669Z · LW · GW

Why would 3 be important?  3 is true of the halting problem, yet we still create and use lots of software that needs to halt, and the fact that 3 is true of the halting problem doesn't seem to be an issue in practice.

Comment by Dweomite on If we had known the atmosphere would ignite · 2023-08-17T21:43:40.649Z · LW · GW

A couple observations on that:

1) The halting problem can't be solved in full generality, but there are still many specific programs where it is easy to prove that they will or won't halt.  In fact, approximately all actually-useful software exists within that easier subclass.
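For instance (a trivial sketch), both of these are easy to classify even though no algorithm can classify every program:

```python
def obviously_halts(n: int) -> int:
    # A bounded loop over a finite range: provably halts for every input.
    total = 0
    for i in range(n):
        total += i
    return total

def obviously_never_halts() -> None:
    # No exit condition anywhere: provably never halts.
    while True:
        pass
```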

We don't need a fully-general alignment tester; we just need one aligned AI.  A halting-problem-like result wouldn't be enough to stop that.  Instead of "you can't prove every case" it would need to be "you can't prove any positive case", which would be a much stronger claim.  I'm not aware of any problems with results like that.

(Switching to something like "exponential time" instead of "possible" doesn't particularly change this; we normally prove that some problem is expensive to solve in the fully-general case, but some instances of the problem can still be solved cheaply.)

2) Even if we somehow got an incredible result like that, that doesn't rule out having some AIs that are likely aligned.  I'm skeptical that "you can't be mathematically certain this is aligned" is going to stop anyone if you can't also rule out scenarios like "but I'm 99.9% certain".

If you could convince the world that mathematical proof of alignment is necessary and that no one should ever launch an AGI with less assurance than that, then it seems like you've already mostly won the policy battle even if you can't follow that up by saying "and mathematical proof of alignment is provably impossible".  I think the doom scenarios approximately all involve someone who is willing to launch an AGI without such a proof.

Comment by Dweomite on If we had known the atmosphere would ignite · 2023-08-17T19:40:36.495Z · LW · GW

Sounds like you're imagining that you would not try to prove "there is no AGI that will do what you want", but instead prove "it is impossible to prove that any particular AGI will do what you want".  So aligned AIs are not impossible per se, but they are unidentifiable, and thus you can't tell whether you've got one?

Comment by Dweomite on If we had known the atmosphere would ignite · 2023-08-17T18:12:47.409Z · LW · GW

What would it mean for alignment to be impossible, rather than just difficult?

I can imagine a trivial way in which it could be impossible, if outcomes that you approve of are just inherently impossible for reasons unrelated to AI--for example, if what you want is logically contradictory, or if the universe just doesn't provide the affordances you need.  But if that's the case, you won't get what you want even if you don't build AI, so that's not a reason to stop AI research, it's a reason to pick a different goal.

But if good outcomes are possible but "alignment" is not, what could that mean?

That there is no possible way of configuring matter to implement a smart brain that does what you want?  But we already have a demonstrated configuration that does exactly that, which we call "you".  I don't think I can imagine that it's possible to build a machine that calculates what you should do but impossible to build a machine that actually acts on the result of that calculation.

That "you" is somehow not a replicable process, because of some magical soul-thing?  That just means that "you" need to be a component of the final system.

That it's possible to make an AGI that does what one particular person wants, but not possible to make one that does what "humanity" wants?  Proving that would certainly not result in a stop to AI research.

I can imagine worlds where aligning AI is impractically difficult.  But I'm not sure I understand what it would mean for it to be literally "impossible".

Comment by Dweomite on Book Launch: "The Carving of Reality," Best of LessWrong vol. III · 2023-08-17T04:12:57.734Z · LW · GW

On the off-chance you are unaware:  There are some existing tools for converting web pages into ebooks that are configurable to handle different web sites, such as the browser plug-in WebToEpub.  I haven't tried using one of these on LessWrong, but it seems like that would be worth trying before creating an original tool.  I did manage to use it to make myself an ebook of Unsong at one point.

Comment by Dweomite on video games > IQ tests · 2023-08-17T03:57:10.271Z · LW · GW

That is a rather long article that appears to be written for an audience that is already familiar with their community.  Could you summarize and/or explain why you think I should read it?

Comment by Dweomite on video games > IQ tests · 2023-08-16T19:56:55.397Z · LW · GW

I suppose I was hoping for a programming-based puzzle game, with some new clever insight required to solve each level, rather than pure programming.

Comment by Dweomite on video games > IQ tests · 2023-08-16T12:09:23.302Z · LW · GW

Programming is my career. I didn't find the leaderboards very challenging; I especially noticed this in Opus Magnum, which I partially blame on them picking boring optimization targets.  I typically picked one category to optimize on my first play of the level, and often tied the best score for that category on my first try.

Your realization that the fastest cycle time would be limited by the max input or output speed is something that I figured out immediately; once you're aware of it, reaching that cap is basically just a matter of parallelization.  Hitting the exact best possible "warm-up" time to produce the first output wasn't completely trivial, but getting in the top bucket of the histogram was usually a breeze for me.

Optimizing cost is even simpler.  You can put a lower bound on the cheapest possible cost by listing the obviously-necessary components (e.g. if the output has a bond that the inputs don't then you need at least one bonder), then calculating the shortest possible track that will allow a single arm to use all of those, then checking whether it's cheaper to replace the track with an extending arm instead.  As far as I can recall, I didn't find a single level where it was difficult to hit that lower bound once I'd calculated it; doing the entire level with only 1 arm is sometimes a bit tedious but it's not actually complicated.
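A sketch of that lower-bound bookkeeping; the costs below are placeholders rather than the game's actual prices:

```python
# Placeholder costs, not Opus Magnum's real prices; the point is the method.
COSTS = {"arm": 20, "track_segment": 5, "bonder": 10}
EXTENDED_ARM_SURCHARGE = 10  # placeholder extra cost of a longer arm vs. a basic one

def cost_lower_bound(required_glyphs: list[str], track_segments_needed: int) -> int:
    """Lower-bound the cheapest solution: one arm, the obviously-necessary
    glyphs, plus the cheaper of (a) enough track for that arm to reach
    everything or (b) swapping in a longer arm instead of laying track."""
    glyph_cost = sum(COSTS[g] for g in required_glyphs)
    track_cost = track_segments_needed * COSTS["track_segment"]
    return COSTS["arm"] + glyph_cost + min(track_cost, EXTENDED_ARM_SURCHARGE)

print(cost_lower_bound(["bonder"], track_segments_needed=3))  # 20 + 10 + 10 = 40
```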

Doing the minimum-cost solution will usually get you very close to the minimum-size solution automatically, since you've already crammed everything around one arm.  This is probably the hardest category if you want to be literally optimal, but I was often in the top bucket by accident.

I think they should have had players optimize for something like "rental cost" where you pay for (components + space) multiplied by running time, so that you have to compromise between the different goals instead of just doing one at a time.

Comment by Dweomite on What's A "Market"? · 2023-08-16T01:47:21.462Z · LW · GW

Checking my understanding:

If Alice and Bob have reached ideal levels of specialization, that implies they have equal marginal prices.

Alice and Bob having the same prices does not, by itself, imply they are optimally specialized.  If you add in an additional assumption of non-increasing marginal returns (e.g. if doubling the amount of land devoted to apples will give you at most twice as many apples), then it implies optimality.  Otherwise, Alice and Bob could be in a local maximum that is not a global maximum.

We are assuming that the marginal exchange rate is the same whether going upwards or downwards.  This is a fairly natural assumption for a continuous system where you can make infinitesimal steps in either direction.  In a discontinuous system, prices would need to be represented by something more complicated than a real number, and basically you'd end up saying that Alice's and Bob's spreads of prices need to overlap, rather than that they need to be identical.

All correct?
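For concreteness on the second point, a toy case with made-up quadratic (increasing-returns) production, where marginal trade-offs are all equal yet the allocation is not optimal:

```python
# Made-up production functions with increasing returns (output = land**2).
def apples(land: float) -> float:
    return land ** 2

def bananas(land: float) -> float:
    return land ** 2

# Alice and Bob each split one unit of land 50/50.  Every producer's marginal
# product is 1.0 for both goods, so all the marginal prices match...
even_split_total = 2 * (apples(0.5) + bananas(0.5))    # = 1.0

# ...but full specialization (Alice all apples, Bob all bananas) doubles output.
specialized_total = apples(1.0) + bananas(1.0)          # = 2.0

print(even_split_total, specialized_total)
```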

Comment by Dweomite on video games > IQ tests · 2023-08-16T00:53:47.678Z · LW · GW

I've played 3 Zachtronics games (SpaceChem, Infinifactory, Opus Magnum) and was ultimately disappointed by all of them.  (I didn't 100% any but got pretty far in all 3.)

Am I missing something about these games that makes them great, or is the following just what it looks like if I'm one of the people who doesn't find them fun?

The early levels made me think:  This is too easy, but early levels are effectively a tutorial and most players have less programming skill than me, so that's not very surprising.  Later on there should be harder levels, and I bet hard versions of this would be fun.

But then the levels never got harder, they only got bigger.  Maybe an early level has 6 steps to the solution, and a later level has 30 steps, but no individual step is hard and the overall direction is obvious, so it's not that much different from playing 5 easy levels in a row (and takes as long).

And when the levels get big, the lack of real programming tools really starts to pinch.  You can't annotate your code with comments, you can't write reusable subfunctions, you can't output logs.  Test runs take too long because breakpoints are weak or non-existent (you can't e.g. break on the 12th iteration of a loop or when a condition is met) and in some of the games the max sim speed is also frustratingly low.

If solving these puzzles were my actual job, I'd invest heavily in building a better IDE.

I made some machines involving (IIRC) hundreds of sequential instructions where I had to hold in my mind the state the molecule was going to be in so I could keep track of what to do next.  But tracking that was the only hard part; if the game had given me a continuously-updating preview of what the machine's state would be at the end of what I'd written so far, the process would have been trivial.

Comment by Dweomite on The Parable of the Dagger - The Animation · 2023-07-30T00:29:46.181Z · LW · GW

What meaning do you take from this parable?

When I first read it, some time ago, my initial reaction was that it felt like it should have a moral but I wasn't immediately sure what the moral was.

I spent some time thinking about it, and settled on:  No matter how you describe reality, mere description cannot constrain the ways that reality can be.

In this interpretation, the jester's cry of "it's logically impossible!" means that the jester thought this wasn't merely a case of the king cheating at the game, but that the king had literally done the impossible; the parable teaches us that it was, in fact, possible.

This moral is kind of trivial in the sense that it's hard to imagine someone explicitly disagreeing with it. However, it may still be useful as a warning that you can make this mistake without realizing what you are doing.

Later on, I read Cleo Nardo's post on The Waluigi Effect, where this parable is referenced as an example of Derridean criticism.  (Nardo says) Derrida said there is no outside-text; that all parts of a book are subject to literary interpretation, including text that appears to be meta-text. This didn't strike me as especially consistent with my reading of the parable, and made me wonder if I'd gotten it wrong.  Did other people also interpret it this way?

On further reflection that I'm doing just now as I write this, I'm not sure I even understand what Nardo's interpretation is.  What did the jester interpret as outside-text that should have been taken as inside-text?  The box inscriptions?  The jester explicitly considers that they might be untrue (and such consideration is completely standard in this type of game; the inscriptions are not likely to be mistaken for outside-text).  The king's explanation of the rules?  But we have no evidence that the king spoke anything false.

For completeness, I also note that there are simpler morals one could take from the parable, such as:

It is possible to form words into a self-referential paradox that is neither true nor false.

It is dangerous to annoy the guy in charge.

These seem accurate, but I don't think they are the intended payload, because the parable is substantially more detailed than necessary to convey one of them.  (Also that last one doesn't especially fit the context of the sequence where this parable appears.)

Comment by Dweomite on Yes, It's Subjective, But Why All The Crabs? · 2023-07-28T23:04:19.363Z · LW · GW

The title of this post ("Yes, It's Subjective, But Why All The Crabs?") gave me entirely the wrong idea of what the post was going to be about, and likely would have caused me to skip it if I didn't recognize the author.

(I thought it was going to actually be about crabs, not about crabs-as-a-metaphor.  I was reminded of the article There's No Such Thing As A Tree (Phylogenetically).)

Comment by Dweomite on Predictive history classes · 2023-07-19T06:07:11.849Z · LW · GW

Are there currently people with the skills to pass such tests, or is this proposal intended to give students a novel skill that the instructors lack?

It's been my vague impression that there are no widely-accepted models of how history works that are detailed enough to let you predict the outcomes of unfamiliar historical events, and that in the few cases where students are asked to give causal explanations in current classes, their work is graded as a persuasive essay rather than as a factual claim that can be held to some objective standard of correctness.

Comment by Dweomite on Predictive history classes · 2023-07-19T05:52:09.272Z · LW · GW

If Project Lawful is canon, then dath ilan has intentionally forgotten its own history, so they wouldn't have a very large data set for doing this.

(Keltham doesn't know why the decision was made, but there are some hints that could be taken to mean that it was done to destroy common knowledge about AI in order to delay its development while secret labs work on alignment.)

Comment by Dweomite on Weak Evidence is Common · 2023-07-17T04:55:18.238Z · LW · GW

If you took the original post to mean that weak evidence isn't common, I'd contend you took the wrong lesson. Encountering strong evidence can be a common occurrence while still being much less common than encountering weak evidence.

You are constantly bombarded by evidence all the time, so something can be true of only a tiny fraction of all evidence you encounter and still be something you experience many times per day.

Also, each observation is simultaneously evidence of many different things.  When someone asked if you were evil and you said "no", that was weak evidence against you being evil, but also fairly strong evidence that you speak English.  If I put my hand out the window and it gets wet, that's pretty strong evidence that it's raining outside my window, but also weaker evidence that it's raining on the other side of town.
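In odds form, a rough sketch of how one observation can be weak evidence about one hypothesis and strong evidence about another (all numbers invented):

```python
# Invented likelihoods for the single observation: they answered "no" in English.
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    return prior_odds * likelihood_ratio

# P(says "no" | evil) / P(says "no" | not evil): close to 1, so weak evidence.
lr_evil = 0.95 / 0.99

# P(answers in English | speaks English) / P(same | doesn't): huge, so strong evidence.
lr_speaks_english = 0.99 / 0.001

print(posterior_odds(1.0, lr_evil))            # ~0.96: odds barely move
print(posterior_odds(1.0, lr_speaks_english))  # 990:   odds move enormously
```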

I think you're right to point out that just because strong evidence is common in general doesn't necessarily mean that strong evidence about some specific question you are interested in will be common.  There are definitely questions for which it is hard to find strong evidence.

But I don't especially trust your generalization of when strong evidence should be expected, and I think some of your examples are confused.

In asking how hard it is to get evidence that you can beat the stock market, I think you are misleadingly combining the questions "how likely is it that you can beat the market?" and "in worlds where you can, how hard is it to get evidence of that?" in order to imply that the evidence is hard to get, when I think most of the intuition that you are unlikely to see such evidence is coming from the "it's unlikely to be true" side. (Also, it is actually not true that getting >50% of trades right means you will be profitable, because the size of the gains or losses matters.)

And asking someone "are you overconfident?" may not give you very much evidence about whether they are overconfident or not, but that's probably far from the best strategy for gathering evidence on that question.

Comment by Dweomite on Attempting to Deconstruct "Real" · 2023-07-11T04:43:43.922Z · LW · GW

that correspond to aspects of the world in which we operate. If it were the case that 2+2=3, we'd have developed different formal systems and would build different devices based on them.

Which aspects are those?  What parts of the world could have been different to make 2+2=3 work better than 2+2=4?

You've replied 3 times and it seems to me that you have not yet given a clear answer to the original question of where mathematical truth comes from.

It seems obvious to me that 2+2=4 is special in a way that is not contingent on humans.  There are no aliens that just happen to use 2+2=3 instead of 2+2=4 and end up with equally good math.  So all this talk about correspondence with stuff in human brains seems to me like a distraction.

Comment by Dweomite on Attempting to Deconstruct "Real" · 2023-07-10T19:50:19.472Z · LW · GW

Then is an expectation that 2 +2 = 3 just as valid as an expectation that 2 + 2 = 4?  If so, what is the difference between those statements that makes one of them pragmatically more useful than the other?

Comment by Dweomite on Attempting to Deconstruct "Real" · 2023-07-10T17:01:27.538Z · LW · GW

It's either using a different language that happens to use the same alphabet, like if I feed the same string of letters into brains that speak English vs. German (most such strings will be gibberish/errors modulo any given language, of course, but that is also a kind of output which can help us learn the language's rules and structure), or else it is using the same keys to encode information differently, like typing on a keyboard whose keys are printed as QWERTY layout but which the computer is interpreting as Dvorak layout.

The overwhelming majority of all possible broken calculators are not doing either of those things.

For example, you could have a device where no matter what buttons you push, it always outputs "7".  That is not a substitution cipher on standard arithmetic or a new language; it's not secretly doing correct math if only you understood how to interpret it.  You can't use it to replace your calculator once you're trained on it, the way you could start speaking German instead of English, or start typing on a Dvorak keyboard instead of a Qwerty one.
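A sketch of the contrast with two hypothetical devices:

```python
# Hypothetical device 1: a calculator whose keys are relabeled by a fixed cipher.
# It still does correct arithmetic once you know the relabeling, analogous to
# typing Dvorak on keys printed with QWERTY labels.
DIGIT_CIPHER = str.maketrans("0123456789", "9876543210")

def permuted_keypad_calculator(keypresses: str) -> str:
    decoded = keypresses.translate(DIGIT_CIPHER)  # what was "really" entered
    return str(eval(decoded))                     # then ordinary arithmetic

# permuted_keypad_calculator("7+7") == "4"   (the device "really" computed 2+2)

# Hypothetical device 2: genuinely broken.  No interpretation scheme recovers
# arithmetic from it, and no amount of training lets you use it as a calculator.
def always_seven_calculator(keypresses: str) -> str:
    return "7"
```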

Comment by Dweomite on Attempting to Deconstruct "Real" · 2023-07-10T06:53:32.298Z · LW · GW

Suppose we create a second device that looks like a calculator but displays different answers when you push the same buttons.  Both devices are equally physical, and you can explain using physics how either of them works.  But our common intuitive notion of truth would like to be able to say that one of those devices is giving true answers and the other is giving false answers.

(Or, more rigorously, that one of the devices is completing strings of symbols in a way that conforms to our axioms of arithmetic.)

It's not clear to me how physics gets you closer to that.

 

I've already linked the sequence in another comment, but Eliezer's account of the truth of logical statements is given in Logical Pinpointing, and I think I pretty much agree with him.

Comment by Dweomite on Attempting to Deconstruct "Real" · 2023-07-10T00:08:15.825Z · LW · GW

You may want to read the sequence Highly Advanced Epistemology 101 for Beginners, and in particular the first post in that sequence, The Useful Idea of Truth.

Or, for a less-formal analysis of some of the same ideas, The Simple Truth.

Comment by Dweomite on When do "brains beat brawn" in Chess? An experiment · 2023-06-30T02:04:00.338Z · LW · GW

Probably not relevant to any arguments about AI doom, but some notes about chess material values:

You said a rook is "ostensibly only 1 point of material less than two bishops".  This is true in the simplified system usually taught to new players (where pawn = 1, knight = bishop = 3, rook = 5, queen = 9).  But in models that allow themselves a higher complexity budget, 2 bishops can be closer to a queen than a rook (at the start of the game):

  • Bishops are usually considered slightly better than knights; a value of 3 + 1/3 is typical
  • There is a "pair bonus" of ~1/2 point for having 2 bishops on opposite colors.  (Bishops are a "color-bound" piece: a bishop that starts on a dark square can only reach other dark squares, and vice-versa.  Having 2 on opposite colors mitigates this disadvantage because an opportunity that is on the wrong color for one bishop will be exploitable by the other; the "Jack Sprat" effect.)
  • Rooks are weaker in crowded boards (early game) where their movement options are often blocked, and stronger in open boards (endgames).  5 is an average across the whole game.  I've seen estimates <4.5 for early-game and >6 for endgame.
  • (Queen is also often a bit higher than 9, especially for AI players; e.g. 9.25 or 9.5)
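Putting those numbers together, roughly:

```python
# Rough arithmetic using the start-of-game estimates quoted above.
bishop = 3 + 1/3
bishop_pair_bonus = 0.5
rook_early = 4.5    # early-game rook, per the estimates above
queen = 9.25

two_bishops = 2 * bishop + bishop_pair_bonus   # ~7.17

print(queen - two_bishops)       # ~2.08 points short of a queen
print(two_bishops - rook_early)  # ~2.67 points ahead of an early-game rook
```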

 

If you're interested in a deeper analysis of material values, I recommend these articles by Ralph Betza.  Betza is both an international master chess player and a prolific designer of chess variants, so he's interested in models that work outside the distribution of standard chess.

Comment by Dweomite on When do "brains beat brawn" in Chess? An experiment · 2023-06-29T22:47:57.599Z · LW · GW

I suspect that the domain of martial arts is unusually susceptible to that problem because

  1. Fights happen so quickly (relative to human thought) that lots of decisions need to be made on reflex
    1. (And this is highly relevant to performance because the correct action is heavily dependent on your opponent's very recent actions)
  2. Most well-trained martial artists were trained on data that is heavily skewed towards formally-trained opponents
Comment by Dweomite on What money-pumps exist, if any, for deontologists? · 2023-06-29T21:59:51.489Z · LW · GW

If you model a deontological constraint as making certain actions unavailable to you, then you could be worse off than you would be if you had access to those actions, but you shouldn't be worse off than if those options had never existed (for you) in the first place.  That is, it's equivalent to being a pure utilitarian in a world with fewer affordances. Therefore if you weren't otherwise vulnerable to money-pumps this shouldn't make you vulnerable to them. 

(Obviously someone might be able to get some money from you that they couldn't otherwise get, by offering you a legitimate service that you wouldn't otherwise need--for example, someone with a deontological rule against hauling water is more likely to pay for a water delivery service.  But that's not a "money pump" because it's actually increasing your utility compared to your BATNA.)

If you model a deontological constraint as an obligation to minimize the probability of some outcome at any cost, then it's equivalent to being a utilitarian with an infinite negative weight attached to that outcome.  Unbounded utilities introduce certain problems (e.g. Pascal's Mugging) that you might not have if your utilities were otherwise bounded, but this shouldn't make you vulnerable to anything that an unbounded utilitarian wouldn't be.

Comment by Dweomite on When do "brains beat brawn" in Chess? An experiment · 2023-06-29T00:53:14.223Z · LW · GW

and these variants don't even remove any pieces - they're just small tweaks like permitting self-capture or forbidding castling within the first 10 moves

You're framing these as being closer to "regular" chess, but my intuition is the opposite.  Most of the game positions that occur during a queen-odds game are rare but possible positions in a regular game; they are contained within the game tree of normal chess.  I'm not sure about Stockfish in particular, but I'd expect many chess AIs incorporating machine learning would have non-zero experience with such positions (e.g. from early self-play runs when they were making lots of bad moves).

Positions permitting self-capture do not appear anywhere in that game tree and typical chess AIs are guaranteed to have exactly zero experience of them.

ETA:  It also might affect your intuitions to remember that many positions Stockfish would never actually play will still show up in its tree search, requiring it to evaluate them at least accurately enough to know not to play them.

Comment by Dweomite on Why Not Subagents? · 2023-06-25T16:23:54.428Z · LW · GW

Doesn't irreversibility imply that there is zero probability of a trade opportunity to reverse the thing?  I'm not proposing a new trait that your original scenario didn't have; I'm proposing that I identified which aspect of your scenario was load-bearing.

 

I don't think I understand how your new hypothetical is meant to be related to anything discussed so far.  As described, the group doesn't have strongly incomplete preferences, just 2 mutually-exclusive objectives.

Comment by Dweomite on Why Not Subagents? · 2023-06-25T11:39:26.014Z · LW · GW

Rather than talking about reversibility, can this situation be described just by saying that the probability of certain opportunities is zero?  For example, if John and David somehow know in advance that no one will ever offer them pepperoni in exchange for anchovies, then the maximum amount of probability mass that can be shifted from mushrooms to pepperoni by completing their preferences happens to be zero.  This doesn't need to be a physical law of anchovies; it could just be a characteristic of their trade partners.

But in this hypothetical, their preferences are effectively no longer strongly incomplete--or at least, their trade policy is no longer strongly incomplete.  Since we've assumed away the edge between pepperoni and anchovies, we can (vacuously) claim that John and David will collectively accept 100% of the (non-existent) trades from anchovies to pepperoni, and it becomes possible to describe their trade policy as being a utility maximizer.  (Specifically, we can say anchovies = mushrooms because they won't trade between them, and say pepperoni > mushrooms because they will trade mushrooms for pepperoni.  The original problem was that this implies that pepperoni > anchovies, which is false in their preferences, but it is now (vacuously) true in their trade policy if such opportunities have probability zero.)

Comment by Dweomite on AI #17: The Litany · 2023-06-24T21:17:07.881Z · LW · GW

But saying "next year is going to be the warmest year in history" implies that you are viewing history from some hypothesized future time when at least some parts of what is now the future have been converted into history.  It's ambiguous as to how far in the future that viewpoint is.

Comment by Dweomite on AI #17: The Litany · 2023-06-24T21:13:33.705Z · LW · GW

I would actually argue nuclear is level 3.

You appear to be talking about nuclear power.  The excerpt you quoted just says "nuclear" but I initially assumed it was talking about nuclear weapons, so I was confused for a bit.

Then I imagined someone framing the issue as "nuclear is the technology, nuclear weapons are an example of that technology in malicious hands, therefore this is still level 3".  Which I don't take especially seriously as a frame, but now I'm not sure how to draw a line between "distinct technology" and "distinct use of the same technology", and I'm idly wondering whether the entire classification scheme is merely a framing trick.

Comment by Dweomite on Lessons On How To Get Things Right On The First Try · 2023-06-20T04:10:28.757Z · LW · GW

When I tried to imagine solving this, I was pretty concerned about a variable that was not mentioned in the post:

I worried that a Hot Wheels track might not fit a sphere snugly enough to ensure the sphere would exit the ramp traveling at a consistent yaw, and so you'd need to worry about angle and not just velocity.

I did not come up with any way to deal with this if it turned out to be an issue.

Comment by Dweomite on Are Bayesian methods guaranteed to overfit? · 2023-06-17T21:29:53.717Z · LW · GW

My first-level intuition says that if you had some sort of knob you could turn to adjust the amount of "fitting" while holding everything else constant, then "overfitting" would be when turning the knob higher makes the out-of-sample loss go up.
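A minimal sketch of that knob, using polynomial degree as the amount of fitting (made-up data; assumes numpy is available):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a simple underlying function.
def target(x):
    return np.sin(x)

x_train = rng.uniform(0, 3, 15)
y_train = target(x_train) + rng.normal(0, 0.2, 15)
x_test = rng.uniform(0, 3, 200)
y_test = target(x_test) + rng.normal(0, 0.2, 200)

# The "knob": polynomial degree.  Past some point, training loss keeps falling
# while out-of-sample loss starts rising -- that's where I'd say overfitting begins.
for degree in (1, 3, 9, 14):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_loss = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_loss = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_loss, 4), round(test_loss, 4))
```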

My more-detailed model--which I haven't thought super long about--looks like this:

In an idealized example where you had "perfect" training data that correctly labeled every example your classifier might encounter, you would want the classifier to learn a rule that puts literally 100% of the positive examples on one side of the boundary and literally 100% of the negative examples on the other side, because anything else would be inaccurate.  You'd want the classifier to get as complex as necessary to achieve that.

Some reasons you might want the classifier to stop before that extreme include:

  1. You may have errors in your training data.  You want the classifier to learn the "natural" boundary instead of finding a way to reproduce the errors.
  2. Your data set may be missing dimensions.  Maybe the "true" rule for classifying bleggs and rubes involves their color, but you can only view them through a black-and-white camera.  Even if none of your training points are mislabeled, the best rule you could learn might misclassify some of them because you don't have access to all the data that you'd need for a correct classification.
  3. Rather than all possible data points, you may have a subset that is less-than-perfectly-representative.  Drawing a line half-way between the known positive examples and known negative examples would misclassify some of the points in between that aren't in your training set, because by bad luck there was some region of thingspace where your training set included positive examples that were very close to the true boundary but no equally-close negative examples (or vice versa).

The reason that less-than-maximum fitting might help with these is that we have an Occamian prior saying that the "true" (or best) classifying rule ought to be simple, and so instead of simply taking the best possible fit of the training data, we want to skew our result towards our priors.

Through this lens, "overfitting" could be described as giving too much weight to your training data relative to your priors.

Comment by Dweomite on I still think it's very unlikely we're observing alien aircraft · 2023-06-16T00:07:13.448Z · LW · GW

The general point that you need to update on the evidence that failed to materialize is in the sequences and is exactly where I expected you to go based on your introductory section.

Comment by Dweomite on Causal Reference · 2023-06-10T21:06:16.702Z · LW · GW

When you "simulate random universes," what distribution are you randomizing over?

Seems like the simulations only help if you somehow already know the true probability distribution from which the actual universe was selected.

Comment by Dweomite on Causal Universes · 2023-06-10T20:48:03.668Z · LW · GW

I think there's a subtle but important difference between saying that time travel can be represented by a DAG, and saying that you can compute legal time travel timelines using a DAG.

There's one possible story you can tell about time turners where the future "actually" affects the past, which is conceptually simple but non-causal.

There's also a second possible story you can tell about time turners where some process implementing the universe "imagines" a bunch of possible futures and then prunes the ones that aren't consistent with the time turner rules.  This computation is causal, and from the inside it's indistinguishable from the first story.
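A sketch of that second story: an ordinary causal computation that enumerates candidate histories and prunes the inconsistent ones (toy model, a single bit sent back in time):

```python
# Toy model of the second story: a "timeline" records the bit received from the
# future and the bit the agent later sends back.  The universe "imagines" every
# candidate and prunes those that violate self-consistency.

def agent_policy(received_bit: int) -> int:
    # Hypothetical policy: whatever arrives, the agent later sends back a 1.
    return 1

candidate_timelines = [
    {"received": r, "sent_back": agent_policy(r)} for r in (0, 1)
]

# The pruning step is an ordinary causal computation...
consistent = [t for t in candidate_timelines if t["received"] == t["sent_back"]]

# ...yet the sole survivor, {"received": 1, "sent_back": 1}, looks from the
# inside exactly like a message arriving from the future.
print(consistent)
```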

But if reality is like the second story, it seems very strange to me that the rules used for imagining and pruning just happen to implement the first story.  Why does it keep only the possible futures that look like time travel, if no actual time travel is occurring?

The first story is parsimonious in a way that the second story is not, because it supposes that the rules governing which timelines are allowed to exist are a result of how the timelines are implemented, rather than being an arbitrary restriction applied to a vastly-more-powerful architecture that could in principle have much more permissive rules.

So I think the first story can be criticized for being non-causal, and the second can be criticized for being non-parsimonious, and it's important to keep them in separate mental buckets so that you don't accidentally commit an equivocation fallacy where you use the second story to defend against the first criticism and the first story to defend against the second.