Comments

Comment by matteyas on Book Review: Spiritual Enlightenment: The Damnedest Thing · 2024-01-21T14:50:55.989Z · LW · GW

Could you point to where he claims there is no truth? What I've seen him say is along the lines of "no belief is true" and "nobody will write down the truth." That should not be surprising to anyone who groks falsification. (For those who do not, the LessWrong article on why 0 and 1 are not probabilities is a place to start.)
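The 0-and-1 point can be made concrete in log-odds, where each piece of evidence moves you a finite distance and certainty sits infinitely far away. A minimal sketch (the function name is my own):

```python
import math

def log_odds(p):
    """Log-odds of a probability p; certainty maps to +/- infinity."""
    return math.log(p / (1 - p))

# Evidence shifts log-odds by finite amounts:
print(log_odds(0.5))   # 0.0 (maximal uncertainty)
print(log_odds(0.99))  # ~4.6
# log_odds(1.0) would divide by zero: no finite amount of
# evidence moves a belief all the way to probability 1.
```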

He is describing what he's up to; you say that's what he's offering. So you're already reaching for alternative readings. Have you heard of taking things out of context? The reason that's frowned upon is that reading a piece of text dogmatically, in isolation, is a reliable way to draw bad conclusions.

Comment by matteyas on Book Review: Spiritual Enlightenment: The Damnedest Thing · 2024-01-18T04:36:53.067Z · LW · GW

His "shtick" (why the dramatic approach?) is that if we try to disprove everything, without giving up, every false belief will eventually be dealt with, and nothing true will be affected. Is there some fault with that or not?

In regards to enlightenment, he uses a specific definition, and it's not something that can be decided by arguing. You either satisfy the definition or you don't. Nobody has asked you to care about it, so you needn't justify your decisions if you don't.

If you think he is offering something like "how to play video games all day," you have misunderstood him quite significantly, and I'd suggest not misrepresenting him, at least not here on LessWrong.

Comment by matteyas on Fake Explanations · 2017-08-03T20:59:45.828Z · LW · GW

Are you saying that in an environment for learning about and discussing rationality, we should strive for a less-than-ideal rationality (that is, some form of irrationality) just because of practical contexts that people often run into and take the easy way out of?

Would you become equally suspicious of the math teacher's point of view if a person in a math problem buys 125 boxes of 6 watermelons each, since he wouldn't be able to handle that amount in most practical contexts?

Comment by matteyas on Efficient Cross-Domain Optimization · 2017-07-28T17:15:58.165Z · LW · GW

First paragraph

There is only action, or interaction to be precise. It doesn't matter whether we experience the intelligence or not, of course, just that it can be experienced.

Second paragraph

Sure, it could still be intelligent. It's just more intelligent if it's less dependent. The definition includes this since more cross-domain ⇒ less dependence.

Comment by matteyas on The Least Convenient Possible World · 2017-07-18T11:02:05.207Z · LW · GW

For the specific Pascal's Wager scenario, I'd probably ask Omega "Really? Either God doesn't exist or everything the Catholics say is correct? Even the self-contradicting stuff?" And of course, he'd decline to answer and fly away.

The point is that in the least convenient world for you, Omega would say whatever you would need to hear to keep you from slipping away. I don't know what that is; nobody but you does. If for you it's about eternal damnation, then you've hopefully found your holy grail, and as another poster pointed out, why that is the holy grail for you can be quite interesting to dig into as well.

The point raised, as I see it, is just to make your stance on Pascal's wager contend against the strongest possible ideas.

Comment by matteyas on The Modesty Argument · 2014-10-20T08:35:34.382Z · LW · GW

If genuine Bayesians will always agree with each other once they've exchanged probability estimates, shouldn't we Bayesian wannabes do the same?

An example I read comes to mind (it's in dialogue form): "This is a very common error that's found throughout the world's teachings and religions," I continue. "They're often one hundred and eighty degrees removed from the truth. It's the belief that if you want to be Christ-like, then you should act more like Christ—as if the way to become something is by imitating it."

It comes with a fun example, portraying the absurdity and the potential dangers of the behavior: "Say I'm well fed and you're starving. You come to me and ask how you can be well fed. Well, I've noticed that every time I eat a good meal, I belch, so I tell you to belch because that means you're well fed. Totally backward, right? You're still starving, and now you're also off-gassing like a pig. And the worst part of the whole deal—pay attention to this trick—the worst part is that you've stopped looking for food. Your starvation is now assured."

Comment by matteyas on Circular Altruism · 2014-10-18T00:22:01.285Z · LW · GW

This threshold thing is interesting. Just to make the idea itself solid, imagine this: you have a type of iron bar that bends completely elastically (no deformation) if a force of less than 100 N is applied to it. Say the bars are more valuable if they have no such deformations. Would you apply 90 N to 5 billion bars or 110 N to one bar?

With this thought experiment, I reckon the idea is solidified and obvious, yes? The question that still remains, then, is whether a dust speck in the eye falls below some such threshold or not.

Though I suppose the issue could actually be dropped completely, if we now agree that the idea of threshold is real. If there is a threshold and something is below that threshold, then the utility of doing it is indeed zero, regardless of how many times you do it. If something is above the threshold, shut up (or don't) and multiply.
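The arithmetic behind the bar example can be made explicit. A toy sketch (the harm function and its numbers are illustrative, not a real materials model):

```python
def harm(force_newtons, threshold=100.0):
    """Toy harm model with a hard threshold: forces below the
    elastic limit cause zero lasting deformation."""
    if force_newtons < threshold:
        return 0.0
    return force_newtons - threshold  # crude stand-in for plastic deformation

# 90 N applied to 5 billion bars: no bar is harmed at all.
total_a = 5_000_000_000 * harm(90)
# 110 N applied to one bar: that bar takes real harm.
total_b = 1 * harm(110)
print(total_a, total_b)  # 0.0 10.0
```

If the threshold is real, multiplying a zero by any number of bars still yields zero; only above the threshold does "shut up and multiply" get any traction.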

Comment by matteyas on How to Convince Me That 2 + 2 = 3 · 2014-10-09T00:42:25.243Z · LW · GW

I hate to break it to you, but if setting two things beside two other things didn't yield four things, then number theory would never have contrived to say so.

At what point are there two plus two things, and at what point are there four things? Would you not agree that a) the distinction itself between things happens in the brain and b) the idea of the four things being two separate groups with two elements each is solely in the mind? If not, I'd very much like to see some empirical evidence for the addition operation being carried out.

Mathematics are so firmly grounded in the physical reality that when observations don't line up with what our math tells us, we must change our understanding of reality, not of math.

English is so firmly grounded in the physical reality that when observations don't line up with what our English tells us, we must change our understanding of reality, not of English.

I hope the absurdity is obvious, and that there is no problem with making models of the world in English alone. So, do you find it more likely that math is connected to the world because we link it up explicitly, or because it is an intrinsic property of the world itself?

Comment by matteyas on 37 Ways That Words Can Be Wrong · 2014-10-04T20:20:58.807Z · LW · GW

It's a bit unfortunate that these articles are so old, or rather that people aren't as active these days; I'd have enjoyed some discussion on a few thoughts. Take, for instance, #5, which I'll paste for convenience:

If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg."

It struck me that this is very deeply embedded in us, or at least in me. I read this and noticed that my thought was along the lines of "yes, how silly, it could be a non-colored egg." What's wrong with this? What's felt is an egg shape, not an egg. It might as well be something else entirely.

So how deep does this one go, and how far should we unravel it? I guess "all the way down" is the only viable answer. I can assign a high probability that it is an egg; I simply shouldn't conclude anything just yet. When is it safe to conclude something? I take it the only accurate answer would be "never." So we end up with something that I believe most of us hold as true already: nothing is certain.

It is of course a rather subtle distinction, going from 'certain' to 'least uncertain under currently assessed information'. Whenever I speak about physics or other theoretical subjects, I'm always in the mindset that what I'm discussing rests on "as is currently understood," so in that area it feels rather natural. I suppose it's just a bit startling to find that the chocolate I just ate is only chocolate as a best candidate rather than as a true description of reality; that biases can be found in such "personal" places.

Comment by matteyas on The Least Convenient Possible World · 2014-09-28T14:56:26.171Z · LW · GW

I have a question related to the initial question about the lone traveler. When is it okay to initiate force against any individual who has not initiated force against anyone?

Bonus: Here's a (very anal) cop-out you could use against the least convenient possible world suggestion: such a world, as seen from the perspective of someone seeking a rational answer, has no rational answer to the question posed.

Or a slightly different flavor for those who are more concerned with being rational than with rationality: In such a world, I—who value rational answers above all other answers—will inevitably answer the question irrationally. :þ