Comment by shminux on Evidence other than evolution for optimization daemons? · 2019-04-22T04:33:25.411Z · score: 3 (2 votes) · LW · GW

You are right, it's not a good example, since the optimization pressure does not result in optimizing for a different goal.

Comment by shminux on Evidence other than evolution for optimization daemons? · 2019-04-22T00:07:58.135Z · score: 1 (2 votes) · LW · GW

Would you consider a MENACE Tic-tac-toe matchbox-based optimizer an OD?

Comment by shminux on The Stack Overflow of Factored Cognition · 2019-04-21T21:48:28.651Z · score: 7 (4 votes) · LW · GW

My immediate reaction is: why do you think the real problems you are trying to solve, as opposed to the toy ones, are factorizable?

To take an example from your link, "What does a field theory look like in which supersymmetry is spontaneously broken?" does not appear to be an easily factorizable question. One needs 6+ years of intensive math and theoretical physics education to even properly understand what the question means and why it is worth answering. (Hint: it may not be worth answering, given that there are no experimentally detected superpartners and there is no indication that any might exist below the Planck scale.)

Provided you have reached the required level of understanding of the problem, why do you think that the task of partitioning the question is any easier than actually solving it? Currently the approach in academia is to hire a small number of relatively well-supervised graduate students, maybe an occasional upper undergrad, to assist in solving a subproblem. I have seen just one case with a large number of grad students, and that was when the problem had already been well partitioned and what was needed was warm bodies to explore the parameter spaces and add small tweaks to a known solution.

I do not know how much research has been done on factorizability, but that seems like a natural place to start, so that you avoid going down the paths where your chosen approach is unlikely to succeed.

Comment by shminux on How do S-Risk scenarios impact the decision to get cryonics? · 2019-04-21T18:32:27.225Z · score: 6 (3 votes) · LW · GW

My assumption is that getting frozen means giving up all control over what, if anything, happens to the dead frozen piece of organic matter that you used to identify with. With high probability it will get discarded within the next century, due to a failure of some sort, technical, economic or political. There is a very unlikely eventuality of it being used for recovery of the informational content, an even less likely eventuality that the recovery process will result in some sort of self-awareness, and the chance is even more remote that it would be anything resembling the kind of "life" that you hope for when you sign up. If this is the baseline (and if you are more optimistic than that, then I want some of what you are on), then the decision to sign up for cryonics is between a near-certain extinguishing of your identity (not absolutely certain, as there is always a vanishingly small chance that we can be simulated from the information available) and a tiny chance of revival in some form, in various numbers of copies/clones of varying faithfulness/awareness/intelligence, maybe to live happily forever, maybe to be tortured forever, maybe the whole spectrum in between.

If your question is whether the odds of happy resurrection are lowered by taking into account S-risks, then my answer is that they are already so low, S-risk doesn't even enter into it.

Still, I'd take my chances and get frozen rather than, say, cremated. Because to me personally, non-existence is worse. Your outlook is likely to be different.

Comment by shminux on Quantitative Philosophy: Why Simulate Ideas Numerically? · 2019-04-19T01:46:41.303Z · score: 4 (2 votes) · LW · GW

Right, this is an honest dynamical model; the curves from the followup post are the opinion bots converging or diverging as they interact. I thought I explained it, and I think it's in one of the blog posts, but looking back at it, apparently not on this site. Thanks!

Comment by shminux on Why is multi worlds not a good explanation for abiogenesis · 2019-04-19T01:43:21.682Z · score: 2 (1 votes) · LW · GW

It seems like we are talking about something similar. If you interpret MWI as "anything can happen with some probability, and, given that we are here observing it, the posterior probability is obviously high enough", then you can use it to explain anything. I agree that my usage was not quite standard, but it fits somewhat, because you can use MWI to justify any conclusion, including an absurd one.

Comment by shminux on Why is multi worlds not a good explanation for abiogenesis · 2019-04-18T15:51:22.185Z · score: 2 (1 votes) · LW · GW
I stipulate that nearly anything can be a consequence of MWI, but not with equal probability.

Note that MWI postulates unitary evolution of the wave function, and in unitary evolution there are no probabilities, everything is completely deterministic, no exceptions. None. Let it sink in:

NO PROBABILITIES. PURE DETERMINISM OF THE WAVE FUNCTION EVOLUTION

There have been numerous attempts to saddle this unitary evolution with something extra that would give us the empirically observed probabilities. Everett suggested some in his PhD thesis; many others did as well, with marginal success. The only statement nearly everyone is on board with is that, if we were to look for a way to assign probabilities, the Born rule is the only sensible one. In that sense, the Born rule is not arbitrary, but a unique way to map the wave function to probabilities. The need to get probabilities from the unitary evolution of the wave function is not built into the MWI, but is grafted onto it by the need to connect this theory with observations, exactly like the Born rule in the Copenhagen interpretation was.
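
For reference, the standard textbook form of the Born rule: with |ψ⟩ the state and {|i⟩} an orthonormal measurement basis, the probability of outcome i is the squared magnitude of the corresponding amplitude,

\[ p_i = |\langle i \mid \psi \rangle|^2 \]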

That said, we might be on the cusp of something super mega extra interesting observed in the next few years, much more so than the recent black hole doughnut seen by the EHT: measuring the gravitational field from Schrödinger-cat-like objects. There are no definite predictions on what we will see in this case, because QM and general relativity currently do not mix, and this is what makes it so exciting. I have mentioned it in a blog post discussing how MWI emerges from unitary evolution:

https://edgeofgravity.wordpress.com/2019/01/19/entanglement-many-worlds-and-general-relativity/

There is some discussion of this issue online, and I have mused about it on my blog some time ago:

https://edgeofgravity.wordpress.com/2019/02/25/schrodingers-cattraction/

Comment by shminux on Quantitative Philosophy: Why Simulate Ideas Numerically? · 2019-04-18T01:41:07.766Z · score: 6 (3 votes) · LW · GW

First, thank you so much for taking the time to reply! It sucks to write into the void, and your thorough comment gives me much needed feedback.

It's unclear to me that the model you construct has much relationship to your tested idea

I must have been super unclear, yes. The model I suggested as a first approximation is very basic (a minimal code sketch follows the list):

  • People with similar views attract (they talk and their views converge), people with divergent views repel (they talk and dislike what the other party says, and their views drift apart even more).
  • The interaction amount between people does not depend on how close their views are. This is not a great approximation in general, but gotta start somewhere. Also, in an online world it is hard to avoid interactions with those you disagree with, so the assumption does not seem to be totally without merit. But definitely can be improved upon.
  • The shape of the attraction/repulsion as a function of the distance between views is definitely largely arbitrary, just something simple that would reflect the first point above.
  • The model is memory-less, i.e. you don't keep tabs on the past interactions; at each step, each interaction between any two people is evaluated on its own merit.
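
Here is a minimal sketch of this kind of dynamic in code. The force law, the parameter values and the clipping to [0, 1] are illustrative choices of mine, not necessarily the exact ones used in the posts:

```python
import random

N_BOTS = 100      # number of opinion bots
THRESHOLD = 0.25  # views closer than this attract; views farther apart repel
STEP = 0.01       # how far a single interaction moves each view
N_ROUNDS = 300

# each bot starts with a random scalar "view" in [0, 1]
views = [random.uniform(0.0, 1.0) for _ in range(N_BOTS)]

for _ in range(N_ROUNDS):
    # everyone interacts with everyone, regardless of how far apart they are
    for i in range(N_BOTS):
        for j in range(i + 1, N_BOTS):
            d = views[j] - views[i]
            if abs(d) < THRESHOLD:
                # similar views converge
                views[i] += STEP * d
                views[j] -= STEP * d
            else:
                # divergent views drift further apart, clipped to [0, 1]
                views[i] = min(1.0, max(0.0, views[i] - STEP * d))
                views[j] = min(1.0, max(0.0, views[j] + STEP * d))
    # memoryless: only the current views matter, no history is kept

# after enough rounds the views collapse into a handful of clusters
print(sorted(round(v, 2) for v in views))
```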

I am not sure if this answers the question about the grounding, I am most likely missing something.

FWIW I went in to this expecting a very different sort of model, one more like a simulation using simple bots that interact in simplified ways you describe and then we could see how they end up clustering, maybe by each bot keeping an affinity score for the others and finding results about the affinity of the bots forming clusters.

I am not sure I follow. The bots indeed do end up clustering into 4 to 5 different clusters, where each cluster represents a certain convergent view. By "keeping the affinity score", do you mean they keep track of the past interactions, not just compare current views at each step? That would be an interesting improvement, adding memory to the model, but that would be, well, an improvement, not necessarily something you put into a toy model from the beginning. Maybe you mean something else? I'm confused.

Comment by shminux on Liar Paradox Revisited · 2019-04-17T15:55:14.861Z · score: 4 (2 votes) · LW · GW

I really like the idea that an evaluation algorithm of a proposition can either terminate or end up in a fixed point, mapping back to the evaluation algorithm itself. It unites mathematical and non-mathematical statements instead of separating them, and it allows for algorithm-dependent outcomes of propositions, which fits well into my anti-realist ontology. In this approach a lack of convergence would be an indication that a new, potentially higher-level evaluation algorithm (I call those "models") is required.

Going by what you have presented, some basic hierarchy could be something like this:

Evaluating algorithm-1: Immediately/obviously/postulated true or false, no extra evaluation needed

Evaluating algorithm-2: Evaluates to true or false, or back to the evaluating algorithm-2 itself (your "infinite loop")

Evaluating algorithm-3: Evaluated to one of the 2 above or to itself, if the "evaluation field" is not closed.

Etc.
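
A toy sketch of the "terminate or hit a fixed point" idea (my own illustration, not anything from the post): iterate the evaluation step and report whether it settles on a value or loops back onto itself, the way the Liar sentence does.

```python
TRUE, FALSE = "true", "false"

def liar(value):
    # "This statement is false": its evaluation flips whatever it is given
    return TRUE if value == FALSE else FALSE

def evaluate(step, start=TRUE, max_iters=100):
    """Iterate an evaluation step until it terminates or revisits a value."""
    seen = set()
    value = start
    for _ in range(max_iters):
        new_value = step(value)
        if new_value == value:
            return f"terminates at {value}"  # ordinary proposition
        if new_value in seen:
            return "never settles: evaluation maps back onto itself"
        seen.add(value)
        value = new_value
    return "did not settle; a higher-level evaluating algorithm may be needed"

print(evaluate(liar))            # never settles (the Liar loop)
print(evaluate(lambda v: TRUE))  # terminates at true
```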

Comment by shminux on Why is multi worlds not a good explanation for abiogenesis · 2019-04-14T20:04:00.468Z · score: 1 (2 votes) · LW · GW
According to this theory, there is a fundamental physical difference between a complex collection of atoms, and an "observer" and somewhere in the development of life, creatures flipped from one to the other.

You seem to refer to some strawman version of the Copenhagen interpretation that no physicist subscribes to. Being brainwashed by Eliezer's writings can do that. He is very eloquent and persuasive. Consider reading other sources. Scott Aaronson's blog is a good start. Wikipedia has a bunch of useful links, too.

Comment by shminux on A Case for Taking Over the World--Or Not. · 2019-04-14T04:24:17.609Z · score: 5 (4 votes) · LW · GW
The current state of things, where people suffer when they don't have to due to circumstances outside of their control.

Ah, I can very much relate to that sentiment! The Effective Altruism movement was spawned largely in response to the concerns like that. Have you looked into their agenda, methods and achievements?

A Numerical Model of View Clusters: Results

2019-04-14T04:21:00.947Z · score: 18 (6 votes)

Quantitative Philosophy: Why Simulate Ideas Numerically?

2019-04-14T03:53:11.926Z · score: 23 (12 votes)

Comment by shminux on A Case for Taking Over the World--Or Not. · 2019-04-14T02:34:23.547Z · score: 13 (4 votes) · LW · GW
What would we have to do to save the world?

Why do you think the world needs saving and from what?

Comment by shminux on Why is multi worlds not a good explanation for abiogenesis · 2019-04-14T02:29:20.022Z · score: 2 (1 votes) · LW · GW
its not controversial to use the multi world model in the less wrong forums and that most people I respect use it fully

The key phrase is "in the lesswrong forums". This is because Eliezer Yudkowsky, the founder and the main contributor for a long time, promoted both MWI and Bayesianism as cornerstones of rationality. Neither is necessary for either epistemic or instrumental rationality, but they are useful reasoning devices. No one really "uses" them directly to make decisions in life, even though most people pretend to. In actuality, they use them to justify the decisions already made, consciously or subconsciously. The reason is that Bayes' theorem relies on the evaluation of probabilities, something humans are not very good at. At least not until you spend as much time as Eliezer, Scott and some others on self-calibration. And MWI is generally used as a fancy name for "imagine possible outcomes and assign probabilities to them", which has nothing to do with physics whatsoever, when it is not misused for discussing quantum suicide/immortality, or, well, to justify anthropics.

Comment by shminux on Why is multi worlds not a good explanation for abiogenesis · 2019-04-13T21:47:37.992Z · score: 5 (2 votes) · LW · GW

That's a good point! I was definitely unclear, and even sloppy in my claims.

My first statement, "makes no testable predictions", refers to "pure" quantum physics, specifically quantum mechanics and quantum field theory on a fixed spacetime background, where what the matter is doing does not affect, in the first approximation, what happens to the spacetime itself (which is the subject of general relativity). We know it is not a good assumption in general, because it leads to contradictions like the various black hole evaporation paradoxes. But it works within certain limits. Within those limits, many worlds add absolutely nothing new or predictable.

Sadly, the extrapolation of QM into the realm where gravity is still weak but already matters remains uncharted territory, over 90 years later. The generally accepted claim (but only a claim) is that the unitary evolution part of QM scales up into the macroscopic world in some way, and the measurement postulate emerges from this upscaling eventually. Many worlds make a more specific claim: that we live in many worlds, decohering (splitting) all the time, and that there is nothing new happening beyond the basic decoherence. However, there is still the gravitational footprint of those worlds, unless you figure out how they split the spacetime itself as well. In that sense, many worlds make a claim in the domain where QM has never been observed that is incompatible with the theory that rules that domain. It is still just an ontological claim, though, without any predictive power in it.

Not sure if this makes sense.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-13T20:51:24.593Z · score: 2 (1 votes) · LW · GW
our ability to make observations implies some level of predictability--which I'm not fully convinced of

Maybe we can focus on this one first, before tackling a harder question of what degree of predictability is observed, what it depends on, and what "the laws of physics changing every Sunday" would actually mean observationally.

Please describe a world in which there is no predictability at all, yet where agents "exist". How would they survive without being able to find food, interact, or even breathe? Breathing alone means you have a body that can anticipate that breathing keeps it alive.

Comment by shminux on Why is multi worlds not a good explanation for abiogenesis · 2019-04-13T18:15:49.652Z · score: 2 (1 votes) · LW · GW

Why do you call it complaining?

Comment by shminux on Why is multi worlds not a good explanation for abiogenesis · 2019-04-13T01:55:58.975Z · score: 4 (6 votes) · LW · GW

Many worlds is a model that currently has no testable predictions, no micro/macro connection, contradicts general relativity (which assumes there is a single spacetime), and in general proves too much: nearly anything can be a consequence of infinitely many worlds. Additionally, the observable universe has only 10^122 qubits, which limits the number of possible states, including possible worlds.
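
Spelling out that bound (a back-of-the-envelope consequence, assuming each qubit contributes one binary degree of freedom):

\[ N_{\text{worlds}} \le N_{\text{states}} \sim 2^{10^{122}} \]

a vast but finite number, so the count of "worlds" cannot literally be infinite.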

So, your best bet is to avoid invoking many worlds for explaining anything. You can certainly use possible worlds, as logical counterfactuals based on the lack of knowledge of the "real" world, to consider which decision would be best, for example. But not many worlds. Those currently have zero predictive power and 100% explanatory power, which is equivalent to "God did it".

Comment by shminux on Agent Foundation Foundations and the Rocket Alignment Problem · 2019-04-09T16:04:50.895Z · score: 6 (3 votes) · LW · GW

I definitely agree that the AFF work is essential and does not seem to get as much attention as warranted, judging by the content of the weekly alignment newsletter. I still think that a bit more quantitative approach to philosophy would be a good thing. For example, I wrote a post "Order from Randomness" giving a toy model of how a predictable universe might spontaneously arise. I would like to see more foundational ideas from the smart folks at MIRI and elsewhere.

Comment by shminux on The Simple Solow Model of Software Engineering · 2019-04-09T03:51:01.302Z · score: 5 (3 votes) · LW · GW

In many ways software repairability is much worse than that of physical objects, because there are no certifications for software quality, at best unit tests and regression tests, if that. The bit rot through changes in the environment, like APIs, interfaces, even CPU/GPU changes, only adds to that. Software cannot be maintained like, say, bridges (or fridges), because there are no spare parts you can find, and building them from scratch is at some point costlier than a rewrite, especially if the original designers and maintainers are all gone.

So, a company needs to design for planned obsolescence if it wants to avoid excessive maintenance costs (what you call "hire more engineers", only the cost of maintenance grows exponentially). Hiring/training better engineers is infeasible, as you can imagine: there are not enough top-1% engineers to staff even the top 10 high-tech companies. Figuring out better ways to make software works for a while, but then the software expands to saturate those "better ways", and you are back where you started. Planned rewrites and replacements would cut the costs, but, like investing money into prevention in healthcare, that is something that is never a priority. And so we are stuck with sick, geriatric, fragile code bases that take a fortune to keep alive until they are eventually let die, long past their expiry date.

Comment by shminux on [Spoilers] How did Voldemort learn the horcrux spell? · 2019-04-08T05:35:51.630Z · score: 3 (2 votes) · LW · GW

One would imagine that someone would be able to learn the spell mechanics from a book, do all the hard work, so to speak, but require a living person knowing the spell to "animate" it :) Though it is definitely not how HPMoR presents it. Also, how, in that world, would wizards be able to invent new spells, if knowledge can only be passed down?

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-07T05:07:08.660Z · score: 2 (1 votes) · LW · GW

My answer is, as before, conditional on our ability to observe anything, the observations are guaranteed to be somewhat predictable. One can imagine completely random sequences of observation, of course, but those models are not self-consistent, as there have to be some regularities for the models to be constructed. In the usual speak those models refer to other potential universes, not to ours.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-07T02:51:09.181Z · score: 2 (1 votes) · LW · GW

Consider reading the link above and the rest of the SSC posts on the topic. In the model discussed there, the brain is nothing but a prediction error minimization machine. Which happens to match my views quite well.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-06T08:15:01.390Z · score: 2 (1 votes) · LW · GW

What is the difference? Achieving goals relies on making accurate predictions. See https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-06T08:12:53.089Z · score: 2 (1 votes) · LW · GW

The latter. Also, postulating an immutable territory outside all maps means asking toxic questions about what exists, what is real and what is a fact.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-06T01:45:33.891Z · score: 2 (1 votes) · LW · GW

I am still not sure what you mean.

why are our observations structured rather than unstructured?

Are you asking why they are not random and unpredictable? That's an observation in itself, as I pointed out... One might use the idea of a predictable objective reality to make oneself feel better. It does not do much in terms of predictive power. Or you can think of yourself as a Boltzmann brain hallucinating a reality. Physicists actually talk about those as if they were more than idle musings.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-05T02:21:43.479Z · score: 8 (2 votes) · LW · GW

I intentionally went a bit further than warranted, yes. Just like atheists claim that there is no god, whereas the best one can claim is the agnostic Laplacian position that there is no use for the god hypothesis in the scientific discourse, I don't really claim that there is no territory, just that we have no hope of proving it is out there, and we don't really need to use this idea to make progress.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-05T02:07:34.305Z · score: 2 (1 votes) · LW · GW

Maybe I misunderstand the question. My answer is that the only answer to any "why" question is constructing yet another model. Which is a very worthwhile undertaking, since the new model will hopefully make new testable predictions, in addition to explaining the known ones.

Comment by shminux on Ideas ahead of their time · 2019-04-05T02:03:31.017Z · score: 5 (3 votes) · LW · GW

I have a physics degree and ran the Freenode #physics channel for a few years, and so had to deal with a lot of crackpots. It's easy to tell the obvious nonsense (it raises a lot of standard red flags, like proclaiming a well-tested model wrong), but within a well-informed professional community, ideas ahead of their time are very hard to tell apart from the chaff. Is Tipler's Omega point nonsense? Is AI fooming nonsense? Is Tegmark's multiverse nonsense? Is string theory? If you read Not Even Wrong, you can get some idea of how hard it is to tell promising ideas apart from the rest.

Comment by shminux on Ideas ahead of their time · 2019-04-04T06:02:54.512Z · score: 10 (5 votes) · LW · GW

The problem is that visionary ideas ahead of their time are indistinguishable from the crank ones: they are way outside the (scientific) Overton window and so are automatically misinterpreted and dismissed. Some of those that panned out centuries later were the germ theory of disease, soft inheritance, the idea that the brain hosts the mind, etc. There are probably a few prophetic ideas published and ridiculed fairly recently, whose power will only become apparent decades or centuries from now.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-04T03:14:40.118Z · score: 2 (1 votes) · LW · GW

To me belief in the territory is the confused one :)

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-04T01:18:07.795Z · score: 4 (2 votes) · LW · GW

Once you postulate the territory behind your observations, you start using misleading and ill-defined terms like "exists", "real" and "true", and argue, say, which interpretation of QM is "true" or whether numbers "exist", or whether unicorns are "real". If you stick to models only, none of these are meaningful statements and so there is no reason to argue about them. Let's go through these examples:

  • The orthodox interpretation of quantum mechanics is useful in calculating the cross sections, because it deals with the results of a measurement. The many-worlds interpretation is useful in pushing the limits of our understanding of the interface between quantum and classical, like in the Wigner's friend setup.
  • Numbers are a useful mental tool in multiple situations, they make many other models more accurate.
  • Unicorns are real in a context of a relevant story, or as a plushie, or in a hallucination. They are a poor model of the kind of observation that lets us see, say, horses, but an excellent one if you are wandering through a toy store.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-03T14:30:55.902Z · score: 2 (1 votes) · LW · GW

The meta-observation (and the first implicit and trivially simple meta-model) is that accurate predictions are possible. Translated to the realist's speak it would say something like "the universe is predictable, to some degree". Which is just as circular, since without predictability there would be no agents to talk about predictability.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-03T07:38:20.007Z · score: 2 (1 votes) · LW · GW

Again, without a certain regularity in our observations we would not be here talking about it. Or hallucinating talking about it. Or whatever. You can ask the "why" question all you want, but the only non-metaphysical answer can be another model, one more level deep. And then you can ask the "why" question again, and look for even deeper model. All. The. Way. Down.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-03T04:40:55.611Z · score: 2 (1 votes) · LW · GW

It's an empirical fact (a meta-observation) that they do. You can postulate that there is a predictable universe that is the source of these observations, but this is a tautology: they are predictable because they originate in a predictable universe.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-03T02:11:35.143Z · score: 2 (1 votes) · LW · GW
I'd question how you would evaluate if something is instrumentally useful when you can only judge something in terms of other maps.

Not in terms of other maps, but in terms of its predictive power: Something is more useful if it allows you to more accurately predict future observations. The observations themselves, of course, go through many layers of processing before we get a chance to compare them with the model in question. I warmly recommend the relevant SSC blog posts:

https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/

https://slatestarcodex.com/2017/09/06/predictive-processing-and-perceptual-control/

https://slatestarcodex.com/2017/09/12/toward-a-predictive-theory-of-depression/

https://slatestarcodex.com/2019/03/20/translating-predictive-coding-into-perceptual-control/
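
To make the predictive-power criterion above concrete, here is a toy scoring of two "maps" purely by how well each predicts the next observation (entirely my own illustration; the data, the two models and the squared-error measure are arbitrary choices):

```python
observations = [1.0, 2.1, 2.9, 4.2, 5.0, 5.9]  # made-up past inputs

def constant_model(history):
    return history[-1]  # "tomorrow looks like today"

def trend_model(history):
    return history[-1] + (history[-1] - history[-2])  # extrapolate the trend

def prediction_error(model, data):
    # sum of squared one-step-ahead errors; lower = more useful map
    return sum((model(data[:i]) - data[i]) ** 2 for i in range(2, len(data)))

for model in (constant_model, trend_model):
    print(model.__name__, prediction_error(model, observations))
```

Nothing in the comparison refers to a "territory": the models are ranked only against further observations.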

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-03T02:04:50.539Z · score: 3 (2 votes) · LW · GW

The term truth has many meanings. If you mean the first one on Wikipedia

Truth is most often used to mean being in accord with fact or reality

then it is very much possible to not use that definition at all. In fact, try to taboo the terms truth, existence and reality, and phrase your statements without them; it might be an illuminating exercise. Certainly it worked for Thomas Kuhn: he wrote one of the most influential books on the philosophy of science without ever using the concept of truth, except in reference to how others use it.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-03T02:00:28.564Z · score: 2 (1 votes) · LW · GW
Why do you assume that future predictions would follow from past predictions?

That's a meta-model that has been confirmed pretty reliably: it is possible to make reasonably accurate predictions in various areas based on past observations. In fact, if this were not possible at any level, we would not be talking about it :)

It seems like there has to be an implicit underlying model there to make that assumption

Yes, that's the (meta-)model, that accurate predictions are possible.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-02T15:14:35.907Z · score: 3 (2 votes) · LW · GW

I don't claim what is true, what exists, or what is real. In fact, I explicitly avoid all three of these terms as devoid of meaning. That is reading too much into it. I'm simply pointing out that one can make accurate predictions of future observations without postulating anything but models of past observations.

How is predictive error, as opposed to our perception of predictive error, defined if not relative to the territory?

There is no such thing as "perception of predictive error" or actual "prediction error". There is only observed prediction error. You are falling back on your default implicit ontology of objective reality when asking those questions.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-02T15:09:13.332Z · score: 2 (1 votes) · LW · GW

That's a common implicit assumption, that observations require a source, hence reality. Note that this assumption is not needed if your goal is to predict future observations, not to "uncover the nature of the source of observations". Of course, a model of observations having a common source can be useful at times, just not always.

Comment by shminux on Announcing the Center for Applied Postrationality · 2019-04-02T04:36:39.732Z · score: 3 (2 votes) · LW · GW

I think you meant "Implied Postrationality"

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-02T04:00:44.392Z · score: 3 (2 votes) · LW · GW

The brain is a multi-level prediction error minimization machine, at least according to a number of SSC posts and reviews, and that matches my intuition as well. So, ultimately predictive power is an instrumental goal toward the terminal goal of minimizing the prediction error.

A territory is a sometimes useful model, and the distinction between an approximate map and an as-good-as-possible map called territory is another useful meta-model. Since there is nothing but models, there is nothing to deny or to be agnostic about.

Is the therapy example a true model of the world or a useful fiction?

You are using terms that do not correspond to anything in my ontology. I'm guessing by "the world" you mean that territory thing, which is a sometimes useful model, but not in that setup. "A useful fiction" is another term for a good model, as far as I am concerned, as long as it gets you where you intend to be.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-02T01:10:59.249Z · score: 4 (3 votes) · LW · GW

First, "usefulness" means only one thing: predictive power, which is accuracy in predicting future inputs (observations). The territory is not a useful model in multiple situations.

In physics, especially quantum mechanics, it leads to an argument about "what is real?" as opposed to "what can we measure and what can we predict?", which soon slides into arguments about unobservables and untestables. Are particles real? Nope, they are asymptotically flat, interaction-free approximations of QFT in curved spacetime. Are fields real? Who knows, we cannot observe them directly, only their effects. They are certainly a useful model, though, without a doubt.

Another example: are numbers real? Who cares, they are certainly useful. Do they exist in the mind or outside of it? Depends on your definitions, so an answer to this question says more about human cognition and human biases than about anything math- or physics-related.

Another example is in psychology: if you ever go to a therapist for, say, couples counseling, the first thing a good one would explain is that there is no single "truth", there is "his truth" and "her truth" (fix the pronouns as desired), and the goal of therapy would be to figure out a mutually agreeable future, not to figure out who was right and who was wrong, what really happened, and who thought what and said what exactly and when.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-01T15:23:49.619Z · score: 3 (2 votes) · LW · GW

While I agree that circle geometry is best left for specialized elective math classes, and that some basic statistical ideas like average, variance and the Bell curve can be useful for an average person, I am curious which alternatives to circle geometry you considered before settling on stats as the best candidate?

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-01T15:18:56.440Z · score: 3 (2 votes) · LW · GW

A map (another term for a model) is an algorithm to predict future inputs. To me it is meaningful enough. I am not sure what you mean by "grounded in something". Models are multi-level, of course, and postulating "territory" as one of the meta-models can be useful (i.e. have predictive value) at times. At other times territory is not a particularly useful model.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-01T15:14:27.003Z · score: 2 (1 votes) · LW · GW

I would consider a different phrasing, sure. I'm not the best persuader out there, so any help is welcome!

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-01T04:06:58.700Z · score: 4 (3 votes) · LW · GW

Are you an AI bot replying to random comments in the GPT2 style?

Comment by shminux on On the Nature of Agency · 2019-04-01T03:58:40.135Z · score: 5 (3 votes) · LW · GW

I'm wondering if you are using these terms as synonyms for conformism/non-conformism, or if there is more to being agentic than refusing to conform and looking for your own way?

Also this SSC post seems relevant. Scott calls them "thinkers".

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-01T03:49:46.450Z · score: 4 (4 votes) · LW · GW

In a five-year-old contrarian thread I had stated that "there is no territory, it's maps all the way down." There was a quality discussion thread with D_Malik about it, too. Someone also mentioned it on reddit, but that didn't go nearly as well. Since then, various ideas of postrationality have become more popular, but this one still remains highly controversial. It is still my claim, though.

Comment by shminux on Experimental Open Thread April 2019: Socratic method · 2019-04-01T03:22:53.555Z · score: 2 (1 votes) · LW · GW

Question: how do you evaluate the plausibility of each scenario, and potentially of other ways the AI development timeline might go?

Comment by shminux on Do you like bullet points? · 2019-03-26T04:50:10.369Z · score: 5 (3 votes) · LW · GW

I naturally write documents in bullet points, especially when multiple distinct points or items are presented, or if it's a list of thoughts to be expanded on later. Didn't realize that many people dislike it.

Boeing 737 MAX MCAS as an agent corrigibility failure

2019-03-16T01:46:44.455Z · score: 42 (20 votes)

To understand, study edge cases

2019-03-02T21:18:41.198Z · score: 27 (11 votes)

How to notice being mind-hacked

2019-02-02T23:13:48.812Z · score: 16 (8 votes)

Electrons don’t think (or suffer)

2019-01-02T16:27:13.159Z · score: 5 (7 votes)

Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets

2018-12-16T06:37:13.623Z · score: 11 (3 votes)

Aligned AI, The Scientist

2018-11-12T06:36:30.972Z · score: 12 (3 votes)

Logical Counterfactuals are low-res

2018-10-15T03:36:32.380Z · score: 22 (8 votes)

Decisions are not about changing the world, they are about learning what world you live in

2018-07-28T08:41:26.465Z · score: 31 (16 votes)

Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem.

2018-07-12T06:52:19.440Z · score: 24 (12 votes)

The Fermi Paradox: What did Sandberg, Drexler and Ord Really Dissolve?

2018-07-08T21:18:20.358Z · score: 47 (20 votes)

Wirehead your Chickens

2018-06-20T05:49:29.344Z · score: 72 (44 votes)

Order from Randomness: Ordering the Universe of Random Numbers

2018-06-19T05:37:42.404Z · score: 16 (5 votes)

Physics has laws, the Universe might not

2018-06-09T05:33:29.122Z · score: 28 (14 votes)

[LINK] The Bayesian Second Law of Thermodynamics

2015-08-12T16:52:48.556Z · score: 8 (9 votes)

Philosophy professors fail on basic philosophy problems

2015-07-15T18:41:06.473Z · score: 16 (21 votes)

Agency is bugs and uncertainty

2015-06-06T04:53:19.307Z · score: 10 (17 votes)

A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats

2015-04-18T23:46:49.750Z · score: 19 (21 votes)

[LINK] Scott Adam's "Rationality Engine". Part III: Assisted Dying

2015-04-02T16:55:29.684Z · score: 7 (8 votes)

In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him?

2015-02-27T20:57:19.777Z · score: 10 (15 votes)

We live in an unbreakable simulation: a mathematical proof.

2015-02-09T04:01:48.531Z · score: -31 (42 votes)

Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later.

2014-08-28T23:37:06.430Z · score: 19 (19 votes)

[LINK] Could a Quantum Computer Have Subjective Experience?

2014-08-26T18:55:43.420Z · score: 16 (17 votes)

[LINK] Physicist Carlo Rovelli on Modern Physics Research

2014-08-22T21:46:01.254Z · score: 6 (11 votes)

[LINK] "Harry Potter And The Cryptocurrency of Stars"

2014-08-05T20:57:27.644Z · score: 2 (4 votes)

[LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient

2014-07-04T20:00:48.176Z · score: 37 (37 votes)

[LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy

2014-06-23T19:09:54.047Z · score: 10 (12 votes)

[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality

2014-06-19T20:17:14.063Z · score: 20 (20 votes)

List a few posts in Main and/or Discussion which actually made you change your mind

2014-06-13T02:42:59.433Z · score: 16 (16 votes)

Mathematics as a lossy compression algorithm gone wild

2014-06-06T23:53:46.887Z · score: 39 (41 votes)

Reflective Mini-Tasking against Procrastination

2014-06-06T00:20:30.692Z · score: 17 (17 votes)

[LINK] No Boltzmann Brains in an Empty Expanding Universe

2014-05-08T00:37:38.525Z · score: 9 (11 votes)

[LINK] Sean Carroll Against Afterlife

2014-05-07T21:47:37.752Z · score: 5 (9 votes)

[LINK] Sean Carrol's reflections on his debate with WL Craig on "God and Cosmology"

2014-02-25T00:56:34.368Z · score: 8 (8 votes)

Are you a virtue ethicist at heart?

2014-01-27T22:20:25.189Z · score: 11 (13 votes)

LINK: AI Researcher Yann LeCun on AI function

2013-12-11T00:29:52.608Z · score: 2 (12 votes)

As an upload, would you join the society of full telepaths/empaths?

2013-10-15T20:59:30.879Z · score: 7 (17 votes)

[LINK] Larry = Harry sans magic? Google vs. Death

2013-09-18T16:49:17.876Z · score: 25 (31 votes)

[Link] AI advances: computers can be almost as funny as people

2013-08-02T18:41:08.410Z · score: 7 (9 votes)

How would not having free will feel to you?

2013-06-20T20:51:33.213Z · score: 6 (14 votes)

Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine"

2013-06-17T05:11:29.160Z · score: 18 (22 votes)

Applied art of rationality: Richard Feynman steelmanning his mother's concerns

2013-06-04T17:31:24.675Z · score: 8 (17 votes)

[LINK] SMBC on human and alien values

2013-05-29T15:14:45.362Z · score: 3 (10 votes)

[LINK]s: Who says Watson is only a narrow AI?

2013-05-21T18:04:12.240Z · score: 4 (11 votes)

LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!'

2013-05-17T19:45:45.739Z · score: 7 (16 votes)

[LINK] The Unbelievers: Lawrence Krauss and Richard Dawkins Team Up Against Religion

2013-04-30T18:11:13.901Z · score: 1 (15 votes)

Litany of a Bright Dilettante

2013-04-18T05:06:05.490Z · score: 57 (67 votes)

Time turners, Energy Conservation and General Relativity

2013-04-16T07:23:13.411Z · score: 7 (30 votes)

Litany of Instrumentarski

2013-04-09T15:07:10.565Z · score: 3 (17 votes)