post by [deleted]

comment by localdeity · 2023-03-08T20:28:04.100Z · LW(p) · GW(p)

> So I would rank "doomed to fail" up near the top of the list of "things most likely to actually be false", up higher than other things like "pretending not to like someone" or "pretending to like someone." You get more falsity the closer you get to claims that involve something you intend to happen resulting in something you didn't intend to happen.

> I would invite those who disagree with the above sentence to submit examples of times they personally caused something to happen that they didn't intend to (also with the stipulation(s) that a) no other people were involved, b) you only intended something positive and c) it caused you to believe that what you were trying to do was inherently impossible (e.g. you really wanted to get something done a certain way, and came to the conclusion that either what you wanted to get done, your way of doing it, or both were not going to be possible).

Er... Not a personal example, but surely people have tried to build perpetual motion machines or infinite energy sources by themselves and at least some of them concluded on their own that it was impossible?  And then there are things like the halting problem.  Arrow's impossibility theorem, lots of impossibility theorems.  Computer science has lots of known lower bounds on the number of operations for any algorithm that solves certain problems; e.g. it's impossible for a general comparison-based sort to handle all cases in less than O(n log n) on average, and some problems are known to be NP-complete, which I think the majority of CS people believe can't be solved in polynomial time; so anyone who runs into such a problem (e.g. "I'm going to write a perfect register allocator for my compiler, which will always return an optimal answer in a short time and it'll be great!") will beat their head against it until they either give up or settle for a suboptimal answer.
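
For concreteness, the sorting bound follows from a short counting argument over comparison decision trees. This is the standard textbook argument, sketched here for illustration, not anything from the original post:

```latex
% Any comparison sort must distinguish all n! orderings of its input.
% A binary decision tree of height h (= worst-case number of comparisons)
% has at most 2^h leaves, so
\[
    2^h \ge n! \quad\Longrightarrow\quad h \ge \log_2(n!) = \Theta(n \log n),
\]
% where the last step uses Stirling's approximation; essentially the same
% counting argument also gives the average-case bound.
```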

For more mundane examples, the everyday practice of programming could be described as a series of incidents where the programmer says "I have to do a thing, and I'd like to do it simply and efficiently", and then learns details that prove the initial vision impossible, and has to make the vision slightly less simple or efficient every time.  (E.g. "I'd like to solve this by adding just a touch of code to this one function here... no, impossible, I need to track some state, which means I have to make additional changes there and there.  Then that means I also have to ...")  One could say that the ultimate project often succeeds well enough—I had to move the goalposts a few times, but I was ultimately pleased with the results I got.  But sometimes it doesn't—I realized the project would be much harder than I'd thought, and dropped it in favor of other priorities.

> Is there a known algorithm / function / procedure, that when submitted another algorithm / function / procedure, returns false? (That ever returns false, that is, which is equivalent to saying it provably will for a specific set of inputs). Furthermore, that returns false when it is false, and not when it is not false.

The phrasing is not entirely clear to me, but I think you mean "a function F that, when called on function G and any set of inputs H, returns true or false depending on whether G(H) would return true or false"?  If F simply executes G's code and returns the result, then the answer is trivially yes if you have a working compiler/interpreter.  If this isn't good enough because G could run forever, and we require F to always return relatively quickly, then this is impossible because it would imply a solution to the halting problem.
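
To make the "trivially yes" half concrete, here is a minimal sketch; the names `f`, `g`, and `inputs` are mine, chosen purely for illustration:

```python
# Minimal sketch of the "trivially yes" case: F just runs G on the inputs and
# returns whatever G returns.
def f(g, inputs):
    return g(*inputs)

print(f(lambda x: x > 3, (5,)))   # True
print(f(lambda x: x > 3, (1,)))   # False

# The catch: if g loops forever on some input, f loops forever too.  Requiring
# f to *always* answer quickly is exactly the halting problem, hence impossible.
```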

> If someone is making a genuine, good-faith effort to prove that they are right, and you knowingly choose to make a (bad faith) effort to prove that person wrong, then - apparently, and this amazed me too when I first learned about it - you just won't be able to. But you thought you would be.

Erm... If someone is making a genuine, good-faith effort to prove that their program solves the halting problem, then I don't see why it makes a difference whether I have good faith or not as I give them a rigorous mathematical proof that no program could solve it—or, more simply, as I use their program to write the test case that can't behave in a way consistent with their claim.  Maybe whether I have good faith would affect whether they trust me, and whether they consider what I'm saying or simply reject it, but I consider that a different question from whether my proof is correct.
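
Here is a sketch of that "use their program to write the test case" move: the standard diagonal construction, written against a hypothetical decider whose name `claims_to_halt` and signature I am making up for illustration.

```python
# Suppose someone hands you claims_to_halt(program_source, input_data), which
# they say always correctly returns True ("halts") or False ("runs forever").
# The classic construction builds a program their decider must get wrong.

def make_troublemaker(claims_to_halt):
    def troublemaker(program_source):
        if claims_to_halt(program_source, program_source):
            while True:      # decider says "halts", so loop forever instead
                pass
        return "halted"      # decider says "runs forever", so halt immediately
    return troublemaker

# Run troublemaker on its own source code: whichever answer claims_to_halt
# gives about that case is wrong, so the claim fails, good faith or not.
```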

> You apparently thought you literally could show that someone else is wrong, who you knew was making a good-faith, honest effort. I guess this would require you to believe that hypotheses and propositions are randomly pulled out of a hat, and randomly assigned the values T or F,

Er... or that people sometimes make mistakes, or make guesses based on incomplete information and those guesses are sometimes wrong.

> and stored somewhere around the universe, or in the computer program that simulates the universe, maybe. Therefore, that just because you hoped that someone was wrong wouldn't change the probability that they had simply pulled their hypotheses out of a hat, and thus that there was some positive chance they were ultimately wrong.

> To make an intuitive argument for why that doesn't pan out, first promote intuition to "really important" which is essentially what I've argued for in the most basic terms. Then, note that the person making good-faith effort to find the truth has his / her intuitions already fully aligned with finding the truth, and nothing else. You have your intuitions only partially aligned with finding the truth, at best, and your motives are set against someone else's well-aligned intuitions. So that means you're (actually) misaligned.

Er... suppose someone was telling you about their detailed plan to do a thing, and they've clearly made a math mistake, dropping a 0 somewhere, so their cost estimates were off by a factor of 10, and when this mistake is corrected it's clear that the project will lose money, or that the rocket would crash unless the design is completely reworked.  Is it "misaligned" to tell them this?  (I guess this is a case where one has to do the math to see if it works, which means that "intuitions" aren't king—at least for those who haven't done enough similar calculations for their intuitions to reliably match reality.  But then, if we take your statement to still be technically correct on that basis, the question becomes "How do you tell that someone's intuitions are so good that you can be certain they're right without needing to double-check their math?"—that makes "their intuitions are well-aligned with finding the truth" a much stronger statement, to the point where it seems very unlikely that you are in a position to declare this to be true about anyone.)

Hmm.  The philosophy espoused here, if taken seriously, appears to mean that, if you know you're "fully aligned with finding the truth", then anyone who tells you you're wrong is misaligned, and their motives are suspect.  If you did make a mistake, is there any way for someone to tell you about it that you'll accept?  This seems epistemically dangerous, making the adherent prone to going off the deep end.

> That's right. I consider it immoral to believe p(doom) > 0. It's even worse to say it and that you believe it.

I... can see cases in which someone is being an unhelpful jerk by telling others that p(doom) is really high and they should just give up and should feel bad for even trying.  But (a) believing it's >0 doesn't mean believing it's nearly 1, (b) saying so doesn't mean adding the social slapdown stuff, (c) saying so doesn't mean being unhelpful either.  Imagine someone saying "I think there's a slight chance that someone will create a bioweapon that kills everyone, and I have some ideas for how to improve biotech safety and security to reduce that chance".  You think that's immoral?  I doubt it.

comment by ZT5 · 2023-03-11T17:15:56.425Z · LW(p) · GW(p)

> If the SRP is consistent, then more true beliefs are also easier and better to state on paper than less true beliefs. They should make more sense, comport with reality better, actually provide constructions and justifications for things, have an internal, discernable structure, as well as have a sequence that is possible for more people to follow from start to finish and see what's going on.

Oh, I get it. You are performing Löbian provability computation, isomorphic to this post [LW · GW] (I believe).

comment by ZT5 · 2023-03-08T16:17:46.936Z · LW(p) · GW(p)

> The onlooker would have interpreted it as a faux pas if you had told him that you had designed the set-up that way on purpose, for the castle to keep being smoothed-over by the waves. He didn't mean to help you, so if you responded that everything's just fine, he would have taken that as a slight-that-he-can't-reveal-he-took, thus faux pas.

Ah. "You are wrong" is a social attack. "You are going to fail" is a social attack. Responding to it with "that is perfectly fine" is the faux pas.

Or rather, in this case it is meant as a social attack on you, rather than cooperating with you on adversarially testing the sandcastle (which is not you; "the sandcastle" being wrong does not mean "you" are wrong).

Thanks, I think I got your meaning.

> That's right. I consider it immoral to believe p(doom) > 0. It's even worse to say it and that you believe it.

I would say that the question of being able to put a probability on future events is... not as meaningful as you might think.

But yes, I believe all decisions are Löbian self-fulfilling prophecies [LW · GW] that work by overriding the outputs of your predictive system: by committing to make the outcome you want happen, even if your predictive system completely and unambiguously predicts it won't happen.
(that is the reason that being able to put a probability on future events is not as meaningful as you might think).

You still need to understand very clearly, though, how your plan ("the sandcastle") will fail, again and again, if you actually intend to accomplish the outcome you want. You are committing to the final output/impact of your program, not to any specific plan, perspective, belief, paradigm, etc.

I'm not sure I have the capacity to understand all the technical details of your work (I might), but I am very certain you are looking in the correct direction. Thank you. I have updated on your words.