**mrmind**on 1960: The Year The Singularity Was Cancelled · 2019-05-08T10:33:15.858Z · score: 2 (1 votes) · LW · GW

In Foerster's paper, he links the increase in productivity linearly with the increase in population. But Scott has also proposed that the rate of innovation is slowing down, due to a *logarithmic* increase of productivity with population. So maybe Foerster's model is still valid, and 1960 is only the year when we exhausted the almost-linear part of progress (the "low-hanging fruits").

Perhaps nowadays we combine the exponential growth of population with the logarithmic increase of productivity from population, to get the linear economic growth we see.
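As a back-of-the-envelope version of that combination (a sketch; the growth rate r and the proportionality constants are placeholders):

```latex
P(t) = P_0 e^{rt}, \qquad
\text{productivity}(t) \propto \log P(t) = \log P_0 + r\,t
```

The logarithm of an exponentially growing population is linear in time, so output per this toy model grows linearly.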

**mrmind**on Why does category theory exist? · 2019-05-07T13:56:05.403Z · score: 10 (3 votes) · LW · GW

Algebraic topology is the discipline that studies geometries by associating them with algebraic objects (usually groups or vector spaces) and observing how changing the underlying space affects the related algebras. In 1941, two mathematicians working in that field sought to generalize a theorem they had discovered, and needed to show that their solution was still valid for a larger class of spaces, obtained through "natural" transformations. "Natural", at that point, was a term lacking a precise definition, and only meant something like "avoiding arbitrary choices", in the same way a vector space is naturally isomorphic to its double dual, while it's isomorphic to its dual only through the choice of a basis.

The need to make precise the notion of naturality for algebraic topology led them to the definition of natural transformation, which in turn required the notion of functor which in turn required the notion of category.

This answers questions 1 and 2: category theory was born to give a precise definition of naturality, which was needed to generalize the "universal coefficient theorem" to a larger class of spaces.

This story is told in great detail in the first pages of Riehl's wonderful "Category Theory in Context".

To answer n° 3, though: even if category theory was rapidly expanding during the '50s and '60s, it was only with the work of Lawvere (whom I consider a genius on par with Gödel) in the '70s that it became a foundational discipline: guided by his intuitions, category theory became the unifying language for every branch of mathematics, from geometry to computation to logic to algebra. Basically, it showed how the variety of mathematical disciplines are just different ways of saying the same thing.

**mrmind**on Highlights from "Integral Spirituality" · 2019-04-16T08:05:41.239Z · score: 2 (1 votes) · LW · GW

Is it really that different, beyond the halo effect? It strongly depends on the details: if the two say the exact same thing, how are they different?

**mrmind**on Highlights from "Integral Spirituality" · 2019-04-15T13:45:27.897Z · score: 4 (2 votes) · LW · GW

The concept of a "fake framework", as elucidated in the original post, seems to me to be that of a model of reality that hides some complexity, sometimes even to the point of being very wrong, but that is nonetheless useful because it makes some other complex area manageable.

On the other hand, when I read the quotes you presented, I see a rich tapestry of metaphors and jargon, of which the proponent himself says they can be wrong... but I completely fail to see what part of reality they make manageable. These frameworks seem to just add complexity to complexity, without any real leverage over reality. This draws those frameworks nearer to fiction than to useful but simplified models.

For example, if there's no post-rational stage of development, what use is the advice not to confuse it with a pre-rational stage of development? If Enlightenment is not a thing, what use is the exhortation to come up with a chronologically robust definition of it?

This to me is the most striking difference between "Integral Spirituality" and, say, a road map. With the road map, you know exactly what is hidden and why, and it's evident how to use it. With Wilber's framework, it seems to be exactly the opposite.

Maybe this is due to my unfamiliarity with the material... so if someone has effectively gotten something useful out of that model, they can chime in and tell their experience, and I will stand corrected.

**mrmind**on What I've Learned From My Parents' Arranged Marriage · 2019-03-27T17:04:57.984Z · score: 3 (7 votes) · LW · GW

I'm sorry, but you cannot really learn anything from one example. I'm happy that your parents are faring well in their marriage, but if they didn't, would you have learned the same thing?

I've consulted a few statistics on arranged marriage, and they are all:

- underpowered
- showing no significant difference between autonomous and arranged marriages

The latter part is somewhat surprising for a Westerner, but given what you say, the same should be said for an Indian coming from your background.

The only conclusion I can draw with any confidence is that, for a long-term relationship, the way or the why it started doesn't really matter.

**mrmind**on Plans are Recursive & Why This is Important · 2019-03-12T17:12:13.843Z · score: 8 (5 votes) · LW · GW

Are you familiar with the concept of fold/unfold? Folds are functions that consume structures and produce values, while unfolds do the opposite. The composition of an unfold plus a fold is called a hylomorphism, of which the factorial is a perfect example: the unfold creates a list from 1 to *n*, the fold multiplies together the entire list. Your section on the "two-fold recursion" is a perfect description of a hylomorphism: you take a goal, unfold it into a plan composed of a list of micro-steps, then you fold it by executing each one of the micro-steps in order.
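For concreteness, here is a minimal sketch of the factorial-as-hylomorphism idea (in Python rather than a typed functional language; the function names are mine):

```python
from functools import reduce

def unfold_range(n):
    """Anamorphism: unfold the seed n into the list [1, 2, ..., n]."""
    return list(range(1, n + 1))

def fold_product(xs):
    """Catamorphism: consume the list by multiplying everything together."""
    return reduce(lambda acc, x: acc * x, xs, 1)

def factorial(n):
    """Hylomorphism: an unfold followed by a fold."""
    return fold_product(unfold_range(n))
```

In the planning analogy, `unfold_range` plays the role of decomposing a goal into micro-steps, and `fold_product` that of executing them in order.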

**mrmind**on On Doing the Improbable · 2018-10-29T15:40:28.709Z · score: 4 (6 votes) · LW · GW

Luke already wrote that there are at least four factors that feed motivation, and the expectancy of success is only one of them. No amount of expectancy can increase drive if the other factors are lacking, and as Eliezer notices, it's not sane to expect a single factor to be 10x the others so that it alone powers the engine.

What Eliezer is asking is basically whether anyone has solved the basic coordination problem of mankind, and I think he knows very well that the answer to his question is no. Also, because we are operating in a relatively small mind-space (humans' system 1), the fact that no one has solved that problem in hundreds of thousands of years of cooperation points strongly toward such a solution not existing.

**mrmind**on (A -> B) -> A · 2018-10-05T10:41:27.824Z · score: 2 (1 votes) · LW · GW

Re: the third point, I think it's important to differentiate between the prediction f and the true prediction f*, where f* : A → B is what actually happens when the agent performs the action a.

f(k(f)) is simply the outcome the agent is aiming at, while f*(k(f)) is the outcome the agent eventually gets. So maybe what's more interesting is a measure of similarity on B, with which you can compare the two.

**mrmind**on (A -> B) -> A · 2018-10-04T16:18:43.244Z · score: 2 (1 votes) · LW · GW

Let's say that A is the set of available actions and B is the set of consequences. A → B is then the set of predictions, where a single prediction associates to every possible action a consequence. (A → B) → A is then a choice operator, which selects for each prediction an action to take.

What we have seen so far:

- There's no 'general' or 'natural' choice operator; that is, every choice operator must be based on at least partial knowledge of the domain or the codomain;
- Unless the possible consequences are trivial, a choice operator will choose the same action for many different predictions; that is, a choice operator only uses certain features of the predictions' space and is indifferent to everything else [1];
- A choice operator naturally defines a 'preferred outcome' operator, which is simply the predicted outcome of the chosen action, and is defined by 'sandwiching' the choice operator between two predictions. I just thought *interleave* is a better name than *sandwich*. It's of type ((A → B) → A) → (A → B) → B.

[1] To show this, let P be a partition of the space in question and let ~ be the equivalence relation uniquely generated by the partition. Then ...

## You're never wrong injecting complexity, but rarely you're right

2018-10-03T14:20:15.177Z · score: 38 (15 votes)

**mrmind**on (A -> B) -> A · 2018-09-13T08:42:50.932Z · score: 2 (1 votes) · LW · GW

I wonder if there are any plausible examples of this type where the constraints don't look like ordering on B and search on A.

Yes, as I showed in my post, such operators must know at least one element of one of the domains of the function. If it knows at least an element of A, the constant function on that element has the right type. Unfortunately, it's not very interesting.

**mrmind**on (A -> B) -> A · 2018-09-13T08:12:23.446Z · score: 6 (3 votes) · LW · GW

It's interesting to notice that there's nothing with that type on Hoogle (the Haskell API search engine), so it's not the type of any common utility.

On the other hand, you can still say quite a bit on functions of that type, drawing from type and set theory.

First, let's name a generic function with that type k : (A → B) → A. It's possible to show that k cannot be parametric in both types: if it were, (A → B) → A would be inhabited for every A and B, in particular for A = 0 (the empty type) and B = 1, which is absurd (0 → 1 has an element, the empty function, so k would hand us an element of 0!). It's also possible to show that if k is not parametric in one type, it must have access to at least an element of that type (think about what happens when A or B is empty).

A simple cardinality argument also shows that k must be many-to-one (that is, non-injective): unless B is 1 (the one-element type), there are strictly more predictions than actions, since |A → B| = |B|^|A| ≥ 2^|A| > |A|.

There is an interesting operator that uses k, which I call interleave: interleave k f = f (k f).

Trivially, it has type ((A → B) → A) → (A → B) → B.

It's interesting because partially applying interleave to some k yields the type (A → B) → B, which is the type of continuations, and I suspect that this is what underlies the common usage of such operators.
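A minimal executable sketch of these types (in Python, with type aliases standing in for the Haskell-style signatures; the concrete action set and payoff function are invented for illustration):

```python
from typing import Callable, TypeVar

A = TypeVar("A")  # actions
B = TypeVar("B")  # consequences

Prediction = Callable[[A], B]       # A -> B
Choice = Callable[[Prediction], A]  # (A -> B) -> A

def interleave(k: Choice, f: Prediction) -> B:
    """'Sandwich' the choice operator between the prediction: f (k f)."""
    return f(k(f))

# A non-parametric choice operator: it knows elements of A (a finite
# action set, assumed here) and an ordering on B, and searches for the
# action with the best predicted consequence.
def best_action(f: Callable[[int], float]) -> int:
    actions = [0, 1, 2, 3]  # assumed available actions
    return max(actions, key=f)

payoff = lambda a: -(a - 2) ** 2  # a made-up prediction
# best_action(payoff) == 2, and interleave(best_action, payoff) == 0:
# the predicted outcome of the chosen action.
```

Note that `best_action` maps many different predictions to the same chosen action, matching the many-to-one point above.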

**mrmind**on Youtube channel devoted to the art of rationality · 2017-12-18T09:09:04.246Z · score: 4 (2 votes) · LW · GW

The difference would be that I'm doing it more for myself than for those out there, because I don't expect my youtube videos to reach many people.

I also don't know if I'll get any attention; I'm doing this entirely for myself: to leave a legacy, to look back and say that I too did something to raise the sanity waterline.

My biggest hurdle currently is video editing.

My motto: "think big, act small, move quickly". I know that my first videos will suck; I'm prepared to embrace the suckiness and plunge forward anyway.

**mrmind**on Youtube channel devoted to the art of rationality · 2017-12-18T09:03:01.541Z · score: 4 (2 votes) · LW · GW

Honestly, I'm not sure how explaining Bayesian thinking will help people with understanding media claims.

Sometimes important news items are based entirely on the availability bias or the base rate fallacy: knowing these is important for cultivating a critical view of media, and to understand why they are wrong you need probabilistic reasoning. But media awareness is just an excuse, a hook to introduce Bayesian thinking, which will also allow me to talk about how to construct a critical view of science.
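For instance, here is the base rate fallacy in a few lines (a toy sketch; the sensitivity, specificity, and base rate are all invented numbers):

```python
# A test with 99% sensitivity and 95% specificity, for a condition
# with a 1-in-1000 base rate.
def p_condition_given_positive(base_rate, sensitivity, specificity):
    # P(positive) by the law of total probability
    p_positive = (sensitivity * base_rate
                  + (1 - specificity) * (1 - base_rate))
    # Bayes: P(condition | positive)
    return sensitivity * base_rate / p_positive

p = p_condition_given_positive(0.001, 0.99, 0.95)
# p is roughly 0.02: despite the "99% accurate" headline figure,
# a positive result still leaves the condition unlikely.
```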

**mrmind**on Youtube channel devoted to the art of rationality · 2017-12-15T13:53:22.075Z · score: 3 (1 votes) · LW · GW

These are all excellent tips, thank you!

## Youtube channel devoted to the art of rationality

2017-12-15T11:42:14.759Z · score: 19 (11 votes)

**mrmind**on Bayes and Paradigm Shifts - or being wrong af · 2017-12-15T11:20:53.813Z · score: 1 (2 votes) · LW · GW

A much, much easier thing that still works is P(sunrise) = 1, which I expect is what ancient astronomers felt about it.

**mrmind**on Bayes and Paradigm Shifts - or being wrong af · 2017-12-14T10:48:30.744Z · score: 2 (1 votes) · LW · GW

That entirely depends on your cosmological model, and in all the cosmological models I know, the sun is a definite and fixed object, so usually P(sunrise) = 1.

**mrmind**on Will IOTA work as promized? · 2017-12-13T11:07:56.387Z · score: 7 (2 votes) · LW · GW

From what I've understood of the white paper, there's no transaction fee because, instead of rewarding active nodes like in the blockchain, the Tangle punishes inactive nodes. So when a node performs few transactions, other nodes tend to disconnect from it, and in the long run an inactive node will be dropped entirely.

On the other hand, a node has only a partial copy of the entire Tangle at each time, so it is possible to keep it small even when the total volume is large.

Economically, I don't know if switching from incentives to participate to punishments for leaving makes sense.

**mrmind**on The list · 2017-12-13T10:42:12.653Z · score: 2 (1 votes) · LW · GW

With the magic of probability theory, you can convert one into the other. By the way, you yourself should search for evidence that you're wrong, as any honest intellectual would do.

**mrmind**on Bayes and Paradigm Shifts - or being wrong af · 2017-12-13T09:59:44.303Z · score: 9 (3 votes) · LW · GW

This might be a minor or a major nitpick, depending on your point of view: Laplace's rule works only if the repeated trials are thought to be independent of one another. That is why you cannot use it to predict sunrise: even without an accurate cosmological model, it's quite clear that the ball of fire rising in the sky every morning is always the same object. But what prior you use after that information is another story...
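For reference, Laplace's rule of succession in code (a sketch; it assumes exchangeable trials with an unknown fixed rate, which is exactly the assumption that fails for sunrises):

```python
from fractions import Fraction

def laplace_rule(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession:
    P(success on the next trial) = (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# With no data the estimate is 1/2; after 9 successes in 10 trials
# it is 10/12 = 5/6, still far short of certainty.
```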

**mrmind**on The list · 2017-12-11T15:53:24.716Z · score: 2 (1 votes) · LW · GW

This is a standard prediction since the unconscious was theorized more than a century ago, so unfortunately it's not good evidence that the model is correct. Unfortunately, if what you've written is the only thing that the list has to say, then I would say that no, this is not worth pursuing.

**mrmind**on The list · 2017-12-11T13:12:54.568Z · score: 10 (3 votes) · LW · GW

In a vein similar to Erfeyah's comment, I think that your model needs to be developed much more. For example, what predictions does it make that are notably different from other psychological models? It's just an explanation that feels too "overfitted".

**mrmind**on Security Mindset and Ordinary Paranoia · 2017-11-28T13:19:29.189Z · score: 11 (4 votes) · LW · GW

I feel that Eliezer's dialogues are optimized for "one-pass reading", where someone reads an article once and moves along to other content. To convey certain ideas, or better yet, certain modes of thinking, they necessarily need to be very long and very repetitive, grasping at the same concept from different directions.

On the other hand, I much prefer direct and concise articles that one can re-read at will, grasping a smidge of the concept at every pass. That, though, is a very unpopular format for consumption on social media, so I guess that, as long as the format is intentional, this is the reason.

**mrmind**on Arbitrary Math Questions · 2017-11-21T17:00:50.020Z · score: 4 (2 votes) · LW · GW

Those questions probably need to be polished and stated more clearly to receive a precise answer. I'll try to add something regarding the second point (the first I'm not sure I understand): from the point of view of VNM-rationality, which is the only guarantee we have that an agent has a utility function, you can only deduce that the utility's order type is isomorphic to that of R, the set of reals. So in full generality, you cannot deduce anything about the dimensionality of the utility function before stating which function it actually is.

**mrmind**on Simple refutation of the ‘Bayesian’ philosophy of science · 2017-11-10T11:32:40.562Z · score: 0 (0 votes) · LW · GW

To explain, e.g. to describe "why" something happened, is to talk about causes and effects.

I would still say that cause and effect is a subset of the kinds of models that are used in statistics. A case in point is Bayesian networks, which can accommodate both probabilistic and causal relations.

I'm aware that Judea Pearl and probably others reverse the picture, and think that C&E are the real relations, which are only approximated in our mind as probabilistic relations. On that, I would say that quantum mechanics seems to point out that there is something fundamentally undetermined about our relations with cause and effect. Also, causal relations are very useful in physics, but one may want to use other models where physics is not especially relevant.

From what one may call "instrumentalist" point of view, time is a dimension so universal that any model can compress information by incorporating it, but it is not *necessarily* so, as relativity shows us: indeed, general relativity shows us you can compress a lot of information by not explicitly talking about time, and thus by sidestepping clean causal relations (what is cause in a reference frame is effect in another).

Prediction and explanation are very very different.

I'm not aware of a theory or model that uses vastly different entities to explain and to predict. The typical physical law posits an ontology governed by a stable relation, thus using the very same pieces to explain the past and predict the future. Besides, such a model would be very difficult to tune: any set of data can be partitioned however you like between training and test, and it seems odd for a model to be so dependent on the experimenter's intent.

**mrmind**on Simple refutation of the ‘Bayesian’ philosophy of science · 2017-11-09T13:02:29.793Z · score: 0 (0 votes) · LW · GW

By ‘Bayesian’ philosophy of science I mean the position that (1) the objective of science is, or should be, to increase our ‘credence’ for true theories [...]

Phew, I thought for a moment he was about to refute the *actual* Bayesian philosophy of science...

Snark aside, as others have noticed, point 1 is highly problematic. From a broader perspective, if Bayesian probability is to inform the practice of science, then a scientist should be wary of the concept of truth. Once a model has reached probability 1, it becomes an unwieldy object: it cannot be swayed by further contrary evidence, and if we ever encounter a piece of data that is impossible for that model, the whole system breaks down. It is thus considered good practice to always hedge models with a small probability for 'unknown unknowns', even for our most certain beliefs. After all, humans are finite and the universe is much, much bigger.
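A short illustration of why probability 1 is unwieldy (hypothetical numbers):

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem for a binary hypothesis H given evidence E."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# A prior of 1 cannot be swayed, even by evidence 99x likelier under not-H:
posterior(1.0, 0.01, 0.99)   # stays at 1.0
# while hedging the prior lets the same evidence do its work:
posterior(0.99, 0.01, 0.99)  # drops to 0.5
# and with prior 1, data that is impossible under H (p_e_given_h = 0)
# makes P(E) zero: the whole system breaks down with a division by zero.
```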

On the other hand, I don't think it's fair to say that the objective of science is either to "just explain" or to "just predict". Both views are unified and expanded by the Bayesian perspective: "explanation", as far as the concept can be modelled mathematically, is fitness to data plus low complexity, while predictive power is fitness to future data, which can only be checked once the future data has been acquired. One man's prediction can be another man's explanation.

**mrmind**on Less Wrong Lacks Representatives and Paths Forward · 2017-11-09T11:29:34.749Z · score: 1 (1 votes) · LW · GW

In my understanding, there’s no one who speaks for LW, as its representative, and is responsible for addressing questions and criticisms.

Exactly. That is by design. See the title of the site? It doesn't say "MoreRight". Here even Yudkowsky, the Founding Father, was frequently disagreed with.

This is the School-less school.

**mrmind**on Zero-Knowledge Cooperation · 2017-10-25T10:41:43.447Z · score: 7 (4 votes) · LW · GW

There's a detail I haven't understood: what does A pass back to B? The encrypted output or the decrypted output? If it's the decrypted output of the Validator, how does B verify that the signature is correct, since the Validator signed the encrypted output?

**mrmind**on Postmodernism for rationalists · 2017-10-19T09:48:13.290Z · score: 1 (1 votes) · LW · GW

No, that I understood. But on one side, you cannot distinguish people who agree with you because they understand from people who agree with you simply because they want to jump on the bandwagon; and on the other side, what is the purpose of constructing this following?

**mrmind**on Seeding a productive culture: a working hypothesis · 2017-10-19T09:43:30.309Z · score: 5 (2 votes) · LW · GW

Unfortunately, this excludes people who already have their own families and are rooted in different places. I’m open to alternative proposals.

Isn't this already a failure mode? The seed needs to be open to information integration, but it can only gather information from a small geographical area?

**mrmind**on Postmodernism for rationalists · 2017-10-19T09:33:21.002Z · score: 3 (2 votes) · LW · GW

because if you need his points explained to you in plain language then you are not the sort of person he is speaking to.

Because... reasons? Is there any valid point besides snobbery and fear of criticism?

**mrmind**on Postmodernism for rationalists · 2017-10-19T09:26:23.508Z · score: 5 (2 votes) · LW · GW

This is wrong on oh so many levels, but I do not dare to criticize a powerpoint, because there might be many subtle points that were totally disregarded by the abbreviated presentation. Does anyone know if there's a longer form? A video or a series of blog posts?

**mrmind**on Why no total winner? · 2017-10-16T09:35:52.993Z · score: 2 (1 votes) · LW · GW

I think at least three factors hamper the emergence of a single global power:

1. As others have commented, the coordination problem is a big factor.

2. A subset of the coordination problem, I think, is that most humans are not linear in their utility: sure, making more money might be attractive in some ranges, but only peculiar individuals chase money or power for their own sake. Maybe you can have a maniac CEO who is willing to stay awake twenty hours a day working on the development of her business, but a lot of people will be content to just work enough and receive enough.

3. The time-frame for the emergence might be so long as to be unobservable. After all, going from tribes to city-states to regional powers to nations to global corporations has taken millennia. Global corporations are already trying to gain the upper hand against states with special laws, so it might very well be the case that in a few decades the world will be dominated by a few big conglomerates.

You can pit 1 and 3 against each other and see that, by nature, humans don't cooperate spontaneously very well; as technology marches forward, though, and the means to connect more and more people spread, you see the emergence of bigger and bigger powers. Facebook nowadays connects more people than any nation in history.

**mrmind**on Open thread, October 2 - October 8, 2017 · 2017-10-09T12:16:14.467Z · score: 0 (0 votes) · LW · GW

Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis?

No, otherwise we would have the certainty that the brain is Turing-equivalent and I wouldn't have prefaced with "Either the brain is capable of doing things that would require infinite resources for a computer to perform".
We do not have proof that everything not calculable by a Turing machine requires infinite resources, otherwise Church-Turing would be a theorem and not a thesis, but we have strong hints: every hypercomputation model is based on accessing some infinite resource (whether infinite time, infinite energy, or infinite precision). Plus, recently we had this theorem: any function on the naturals is computable by some machine in some non-standard time.

So either the brain can compute things that a computer would take infinite resources to do, or the brain is *at most* as powerful as a Turing machine.

As for the electron thing: there's a level where there is symbolic manipulation and a level where there isn't. I don't understand why it's symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.

**mrmind**on Open thread, October 2 - October 8, 2017 · 2017-10-06T10:21:49.288Z · score: 0 (0 votes) · LW · GW

It is that it is not based on syntactic symbol manipulation in the way a computer is and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates etc.) as the AI community thought (and thinks).

Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same. Indeed, not even computers are based on symbolic manipulation: at the deepest level, it's all electrons flowing back and forth.

**mrmind**on Open thread, October 2 - October 8, 2017 · 2017-10-06T10:11:48.911Z · score: 0 (0 votes) · LW · GW

That's interesting... is the dust size still consistent with artificial objects?

**mrmind**on Open thread, September 25 - October 1, 2017 · 2017-09-29T15:34:51.309Z · score: 0 (0 votes) · LW · GW

Well, to calculate P(T|S) = p you need a model of how a student 'works', in such a way that the test's result T happens for the kind of students S with probability p. Or you can calculate P(S|T), thereby having a model of how a test 'works' by producing the kind of student S with probability p.

If you have only one of those, these are the only things you can calculate.

If, on the other hand, you have one or more complementary models (complementary here meaning that they exclude each other and form a complete set), then you can calculate the probabilities P(T1|S1), P(T1|S2), P(T2|S1) and P(T2|S2). With these numbers, via Bayes, you have both P(T|S) and P(S|T), so it's up to you to decide whether you're analyzing students or tests.

Usually one is more natural than the other, but it's up to you, since they're models anyway.
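A small sketch of the computation described above (the conditional table and the prior over student types are invented for illustration):

```python
def bayes_posterior(likelihoods, prior, observed):
    """P(S | T = observed) from a table of P(T|S) and a prior over S."""
    joint = {s: likelihoods[s][observed] * prior[s] for s in prior}
    total = sum(joint.values())  # P(T = observed)
    return {s: p / total for s, p in joint.items()}

# Two complementary student models S1, S2 and two test outcomes T1, T2.
p_t_given_s = {
    "S1": {"T1": 0.9, "T2": 0.1},
    "S2": {"T1": 0.3, "T2": 0.7},
}
prior_s = {"S1": 0.5, "S2": 0.5}

posterior_s = bayes_posterior(p_t_given_s, prior_s, "T1")
# posterior_s["S1"] == 0.75: seeing T1 shifts belief toward S1.
```

Swapping the roles of S and T in the same function gives P(T|S) back, which is the symmetry the comment points at.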

**mrmind**on Open thread, September 25 - October 1, 2017 · 2017-09-29T09:35:06.678Z · score: 0 (0 votes) · LW · GW

From a Bayesian perspective, you calculate P(S|T) and P(T|S) at the same time, so it doesn't really matter. What *does* matter, and greatly, are your starting assumptions and models: if you have only one for each entity, you won't be able to calculate how much some datum counts as evidence for or against your model.

**mrmind**on LW2.0 now in public beta (you'll need to reset your password to log in) · 2017-09-25T10:30:05.254Z · score: 0 (0 votes) · LW · GW

What is the proper route for reporting a bug?

**mrmind**on Open thread, September 18 - September 24, 2017 · 2017-09-19T07:54:18.598Z · score: 0 (0 votes) · LW · GW

Wonderful as always!

**mrmind**on Open thread, August 14 - August 20, 2017 · 2017-08-17T10:28:51.615Z · score: 1 (1 votes) · LW · GW

"Once" does exactly what you have described.

**mrmind**on Inscrutable Ideas · 2017-08-14T10:30:42.179Z · score: 0 (0 votes) · LW · GW

Now I understand that we are talking with two completely different frames of reference.

When I write about meta-rationalists, I'm specifically referring to Chapman and Gworley and the like. You obviously have a much wider tradition in mind, on which I don't necessarily have an opinion. Everything I said should be restricted to this much smaller context.

On other points of your answer:

- yes, there are important antecedents, but there are important novelties too;
- an identification of what you consider the relevant corpus of 'old' meta-rationality would be appreciated, mainly regarding deity as a simplifying, nontrivial hypothesis;
- as for inherent mysteriousness, it's claimed in the post linked on this page, first paragraph: "I had come to terms with the idea that my thoughts might never be fully explicable".

**mrmind**on Inscrutable Ideas · 2017-08-10T09:51:40.733Z · score: 1 (1 votes) · LW · GW

I wasn't making a point about meta-rationality versus rationality, I was making a point about noticing-and-putting-on-a-shelf versus noticing-and-taking-seriously.

Right. Let's say that there are (at least) three levels of noticing a discrepancy in a model:

1 - noticing, shrugging and moving on

2 - noticing and claiming that it's important

3 - noticing, claiming that it's important and creating something new about it ('something' can be a new institution, a new model, etc.)

We both agree that LW rationalists are mostly at stage 1. We both agree that meta-rationalists are at level 2. I also claim that meta-rationalists claim to be at level 3, while they are not.

You need to distinguish between phenomena (observations, experiences) and explanations.

This is also right. But at the same time, I haven't seen any proof that meta-rationalists have offered god as a simplifying hypothesis of some nontrivial unexplained phenomenon.

Any kind of fundamentally different, new or advanced understanding has to be not completely communicable and comprehensible to the N-1 level, otherwise it would not be fundamentally new.

I think this is our true disagreement. I reject your thesis: there is nothing that is inherently mysterious, not even relatively so. I think that any idea is either incoherent, comprehensible, or infinitely complex.

Math is an illustration of this classification: it exists exactly at the level of being comprehensible. We see levels because we break down a lot of complexity into stages, so that you manipulate the simpler levels, and when you get used to them, you move on to more complex matters. But the entire *raison d'être* of mathematics is that **everything is reducible to the trivial**, it just takes hundreds of pages more.

Maybe meta-rationalists have yet to unpack their intuitions: it happens all the time that someone has a genius idea that only later gets unpacked into simpler components. So kudos to the idea of destroying inscrutability (I firmly believe that destroying inscrutability will destroy meta-rationalism), but claiming that something is inherently mysterious... that runs counter to epistemic hygiene.

**mrmind**on Inscrutable Ideas · 2017-08-08T14:28:22.542Z · score: 1 (1 votes) · LW · GW

Lots of people noticed the Newton/Maxwell disparities in the 1900s, but Einstein noticed them enough.

Your example, in my opinion, disproves your point. Einstein did not simply notice the discrepancies: he constructed a coherent explanation that accounted for both the old model and the discrepancies. It unified both models under one map.
Do you feel that meta-rationalists have a model of intention-implementation and map generation that is coherent with the naive model of a Bayesian agent?

A meta-rationalist is like a physicist from the 19th century who, having noticed the dual nature of light, calls himself a meta-physicist because he uses two maps for the phenomenon of light. Instead, the true revolution, quantum mechanics, happened when the two conflicting models were united under *one* explanation.

Inscrutable and unnoticeable to whom?

It's a matter of degree: the more people have independent access to the phenomenon, the more confidence I would give to its existence. If it's only one person, and said person cannot communicate it nor behaves any differently... well, I would equate its existence to that of the invisible and intangible dragon.

**mrmind**on Open thread, July 31 - August 6, 2017 · 2017-08-07T13:30:19.607Z · score: 0 (0 votes) · LW · GW

I don't know how much sexual satisfaction scales linearly, but from 1 to 2 seems about right.

**mrmind**on Open thread, August 7 - August 13, 2017 · 2017-08-07T13:26:00.578Z · score: 1 (1 votes) · LW · GW

"Inscrutable", related to the meta-rationality sphere, is a word that gets used a lot these days. On the fun side, set theory has a perfectly scrutable definition of indescribability.

Very roughly: the trick is to divide your language in stages, so that stage n+1 is strictly more powerful than stage n. You can then say that a concept (a cardinal) k is n-indescribable if every n-sentence true in a world where k is true, is also true in a world where a lower concept (a lower cardinal) is true. In such a way, no true n-sentence can distinguish a world where k is true from a world where something less than k is true.

Then you can say that k is totally indescribable if the above property is true for every finite n.
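In standard set-theoretic notation, the definition I'm gesturing at is roughly (a sketch; the "stages" are the classes of Π^m_n formulas of higher-order logic):

```latex
\kappa \text{ is } \Pi^m_n\text{-indescribable} \iff
\forall R \subseteq V_\kappa \;\; \forall \Pi^m_n\text{-sentence } \varphi :\;
(V_\kappa, \in, R) \models \varphi \;\Longrightarrow\;
\exists \alpha < \kappa \;\; (V_\alpha, \in, R \cap V_\alpha) \models \varphi
```

Total indescribability then means Π^m_n-indescribable for all finite m and n.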

Total indescribability is not even such a strong property, in the grand scheme of large cardinals.

**mrmind**on Inscrutable Ideas · 2017-08-07T12:45:14.100Z · score: 0 (0 votes) · LW · GW

Points 1 and 2 are critiques of the rationalist community that have been around since the inception of LW (as witnessed by the straw Vulcan / hot iron approaching metaphors), so I question whether they usefully distinguish meta-rationalists from plain rationalists.

Point 3 is more helpful in this regard, but if anyone made that claim, I would ask them to point to what differences such a claim implies in behavior... I find it very hard to believe in something that is both inscrutable and unnoticeable.

**mrmind**on Open thread, August 7 - August 13, 2017 · 2017-08-07T12:25:51.624Z · score: 0 (0 votes) · LW · GW

The intuitive answer seems to me to be: the last one. It's the tallest, so it witnesses exactly one billion towers. Am I misinterpreting something?

**mrmind**on Open thread, July 31 - August 6, 2017 · 2017-08-07T10:31:39.540Z · score: 1 (1 votes) · LW · GW

Which emotions would be easiest?

Sexual attraction...

**mrmind**on Ignorant, irrelevant, and inscrutable (rationalism critiques) · 2017-07-31T15:06:16.995Z · score: 1 (1 votes) · LW · GW

But if meta-rationality is inscrutable to rationality, how do you know it even exists? At least Bayesian rationalists have some solace in Cox's theorem, or the coherence theorems, or the Church-Turing thesis. What stops me from declaring there's a sigma-rationality, which is inscrutable to every n-rationality below it? What does meta-rationality even imply for the real world?

**mrmind**on Ignorant, irrelevant, and inscrutable (rationalism critiques) · 2017-07-31T15:03:08.204Z · score: 0 (0 votes) · LW · GW

I would be content with just someone saying: "this person is a meta-rationalist, and this is what s/he has achieved".