Posts

Nitric Oxide Spray... a cure for COVID19?? 2021-03-15T19:36:17.054Z
Uninformed Elevation of Trust 2020-12-28T08:18:07.357Z
Learning is (Asymptotically) Computationally Inefficient, Choose Your Exponents Wisely 2020-10-22T05:30:18.648Z
Mask wearing: do the opposite of what the CDC/WHO has been saying? 2020-04-02T22:10:31.126Z
Good News: the Containment Measures are Working 2020-03-17T05:49:12.516Z
(Double-)Inverse Embedded Agency Problem 2020-01-08T04:30:24.842Z
Since figuring out human values is hard, what about, say, monkey values? 2020-01-01T21:56:28.787Z
A basic probability question 2019-08-23T07:13:10.995Z
Inspection Paradox as a Driver of Group Separation 2019-08-17T21:47:35.812Z
Religion as Goodhart 2019-07-08T00:38:36.852Z
Does the Higgs-boson exist? 2019-05-23T01:53:21.580Z
A Numerical Model of View Clusters: Results 2019-04-14T04:21:00.947Z
Quantitative Philosophy: Why Simulate Ideas Numerically? 2019-04-14T03:53:11.926Z
Boeing 737 MAX MCAS as an agent corrigibility failure 2019-03-16T01:46:44.455Z
To understand, study edge cases 2019-03-02T21:18:41.198Z
How to notice being mind-hacked 2019-02-02T23:13:48.812Z
Electrons don’t think (or suffer) 2019-01-02T16:27:13.159Z
Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets 2018-12-16T06:37:13.623Z
Aligned AI, The Scientist 2018-11-12T06:36:30.972Z
Logical Counterfactuals are low-res 2018-10-15T03:36:32.380Z
Decisions are not about changing the world, they are about learning what world you live in 2018-07-28T08:41:26.465Z
Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. 2018-07-12T06:52:19.440Z
The Fermi Paradox: What did Sandberg, Drexler and Ord Really Dissolve? 2018-07-08T21:18:20.358Z
Wirehead your Chickens 2018-06-20T05:49:29.344Z
Order from Randomness: Ordering the Universe of Random Numbers 2018-06-19T05:37:42.404Z
Physics has laws, the Universe might not 2018-06-09T05:33:29.122Z
[LINK] The Bayesian Second Law of Thermodynamics 2015-08-12T16:52:48.556Z
Philosophy professors fail on basic philosophy problems 2015-07-15T18:41:06.473Z
Agency is bugs and uncertainty 2015-06-06T04:53:19.307Z
A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats 2015-04-18T23:46:49.750Z
[LINK] Scott Adam's "Rationality Engine". Part III: Assisted Dying 2015-04-02T16:55:29.684Z
In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him? 2015-02-27T20:57:19.777Z
We live in an unbreakable simulation: a mathematical proof. 2015-02-09T04:01:48.531Z
Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later. 2014-08-28T23:37:06.430Z
[LINK] Could a Quantum Computer Have Subjective Experience? 2014-08-26T18:55:43.420Z
[LINK] Physicist Carlo Rovelli on Modern Physics Research 2014-08-22T21:46:01.254Z
[LINK] "Harry Potter And The Cryptocurrency of Stars" 2014-08-05T20:57:27.644Z
[LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient 2014-07-04T20:00:48.176Z
[LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy 2014-06-23T19:09:54.047Z
[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality 2014-06-19T20:17:14.063Z
List a few posts in Main and/or Discussion which actually made you change your mind 2014-06-13T02:42:59.433Z
Mathematics as a lossy compression algorithm gone wild 2014-06-06T23:53:46.887Z
Reflective Mini-Tasking against Procrastination 2014-06-06T00:20:30.692Z
[LINK] No Boltzmann Brains in an Empty Expanding Universe 2014-05-08T00:37:38.525Z
[LINK] Sean Carroll Against Afterlife 2014-05-07T21:47:37.752Z
[LINK] Sean Carroll's reflections on his debate with WL Craig on "God and Cosmology" 2014-02-25T00:56:34.368Z
Are you a virtue ethicist at heart? 2014-01-27T22:20:25.189Z
LINK: AI Researcher Yann LeCun on AI function 2013-12-11T00:29:52.608Z
As an upload, would you join the society of full telepaths/empaths? 2013-10-15T20:59:30.879Z
[LINK] Larry = Harry sans magic? Google vs. Death 2013-09-18T16:49:17.876Z

Comments

Comment by shminux on “Who’s In Charge? Free Will and the Science of the Brain” · 2021-09-16T05:41:38.494Z · LW · GW

How much free will does a monkey have? A cat? A fish? An amoeba? A virus? A vapor bubble in a boiling pot? A raspberry shoot jockeying for a sunny spot? An octopus arm? A solar flare? A chess bot?

Hint: the same amount as a human. 

Answer: We just happen to have a feeling of free will that is an artifact of some optimization subroutine that runs in our brains and is not fully available to introspection. Do octopuses have that feeling? Chess bots? That question might get answered one day, once we understand how the feeling of free will is formed in humans.

Comment by shminux on What is difference between thoughts and consciousness? · 2021-08-20T01:57:04.056Z · LW · GW

How would you define thoughts? Are they something you can notice happening, as opposed to a feeling or an urge that just bubbles up?

Comment by shminux on When Programmers Don't Understand Code, Don't Blame The User · 2021-08-19T00:03:55.898Z · LW · GW

JavaScript is indeed notoriously opaque in terms of what "self" (i.e., this) refers to in a function call. There are multiple ways to do it more explicitly, including prototype inheritance, Function.prototype.bind(), etc., all of them workarounds for passing and calling "selfless" methods. So yeah, I agree with your main point.
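
For instance, a minimal TypeScript sketch of my own (the Counter class and names are made up for illustration): a method passed around as a bare function loses its "this", and Function.prototype.bind() is the explicit workaround.

```typescript
class Counter {
  count = 0;
  increment() {
    this.count += 1; // "this" is whatever the call site supplies
  }
}

const c = new Counter();

const bare = c.increment;
// bare(); // TypeError at runtime in strict mode: "this" is undefined

const bound = c.increment.bind(c); // pin "this" to c explicitly
bound();
console.log(c.count); // 1
```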

Comment by shminux on Erratum for "From AI to Zombies" · 2021-08-18T00:30:40.796Z · LW · GW

I think that rationality as a competing approach to the scientific method is a particularly bad take that leads a lot of aspiring rationalists astray, into the cultish land of "I know more and better than experts in the field because I am a rationalist". Data analysis uses plenty of Bayesian reasoning. Scientists are humans and so are prone to the biases and bad decisions that instrumental rationality is supposed to help with. CFAR-taught skills are likely to be useful for scientists and non-scientists alike. 

Comment by shminux on Erratum for "From AI to Zombies" · 2021-08-15T23:00:57.676Z · LW · GW

I agree that the point was not to teach you physics. It was a tool to teach you rationality. Personally, I think it failed at that, and instead created a local lore guided by the teacher's password, "MWI is obviously right". And yes, I think he said as much on multiple occasions. This post https://www.lesswrong.com/posts/8njamAu4vgJYxbJzN/bloggingheads-yudkowsky-and-aaronson-talk-about-ai-and-many links a video of him saying so: https://bloggingheads.tv/videos/2220?in=29:28

Note that Aaronson's position is much weaker, more like "if you were to extrapolate micro to macro assuming nothing new happens...". See, for example, https://www.scottaaronson.com/blog/?p=1103:

we do more-or-less know what could be discovered that would make it reasonable to privilege “our” world over the other MWI branches.  Namely, any kind of “dynamical collapse” process, any source of fundamentally-irreversible decoherence between the microscopic realm and that of experience, any physical account of the origin of the Born rule, would do the trick.

and 

Admittedly, like most quantum folks, I used to dismiss the notion of “dynamical collapse” as so contrived and ugly as not to be worth bothering with.  But while I remain unimpressed by the specific models on the table (like the GRW theory), I’m now agnostic about the possibility itself.  Yes, the linearity of quantum mechanics does indeed seem incredibly hard to tinker with.  But as Roger Penrose never tires of pointing out, there’s at least one phenomenon—gravity—that we understand how to combine with quantum-mechanical linearity only in various special cases (like 2+1 dimensions, or supersymmetric anti-deSitter space), and whose reconciliation with quantum mechanics seems to raise fundamental problems (i.e., what does it even mean to have a superposition over different causal structures, with different Hilbert spaces potentially associated to them?).

Comment by shminux on Technical Predictions Related to AI Safety · 2021-08-13T02:38:22.844Z · LW · GW

I guess you don't mean simulating the relevant parts of the rat brain in silico like OpenWorm, but "a rat-equivalent Bayesian reasoning machine out of silicon", which is probably different.

Comment by shminux on Technical Predictions Related to AI Safety · 2021-08-13T01:47:21.916Z · LW · GW

Do you think it's reasonable to push for rat-level AI before we can create a C. elegans-level AI?

Comment by shminux on Erratum for "From AI to Zombies" · 2021-08-12T07:05:49.031Z · LW · GW

The book is great for improving one's thinking. My long-standing advice is to ignore anything in it with the word "quantum"; it detracts from the book's message. If you want to learn physics, read a physics book. For a good review of that link in Nature, see Scott Aaronson's post https://www.scottaaronson.com/blog/?p=3975, and he also has a review of interpretations in https://www.scottaaronson.com/blog/?p=3628 

Comment by shminux on Against "blankfaces" · 2021-08-09T00:46:29.917Z · LW · GW

I think the Umbridge version is uncontroversial: someone who uses existing rules or creates new rules (like the lifeguard in Scott's description, or the agencies making it intentionally hard to get reimbursed) to disguise their real intentions, which have nothing to do with following the rules and everything to do with achieving nefarious goals, be it torturing HP, getting rid of a kid they don't like, or maybe getting a bonus for minimizing expenses. 

Comment by shminux on The Myth of the Myth of the Lone Genius · 2021-08-06T00:01:44.219Z · LW · GW

I don't know if that last paragraph is the author's view, or whether there is any evidence/consensus for it. I go by what I see, and this is a person driven to overcome obstacles over and over again. Musk is an extreme example, but in general all the classic tech moguls are "natural heroes" in that sense. The burning need inside to do "world optimization" cannot be quenched.

Comment by shminux on The Myth of the Myth of the Lone Genius · 2021-08-03T07:50:01.132Z · LW · GW

I like the post, but just to pick on one thing:

(4) There are no such things as geniuses, and even if there were you are not one of them. 

There are two parts to this. The first, "There are no such things as geniuses", is not proclaimed by anyone serious; the second, "you are not one of them", is basically correct if you rephrase it as "if you need to ask whether you are one of them, you are not."

Comment by shminux on Torture vs Specks: Sadist version · 2021-08-01T06:02:03.497Z · LW · GW

There was an interesting discussion in my old post on a related topic.

Comment by shminux on Uncertainty can Defuse Logical Explosions · 2021-07-30T18:50:53.886Z · LW · GW

I think what you are gesturing at is a removable singularity of some map from probability to... something.
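
(For a concrete textbook instance of a removable singularity, my illustration rather than anything from the post:)

$$f(p) = \frac{\sin p}{p}$$

is undefined at \(p = 0\), yet \(\lim_{p \to 0} f(p) = 1\); defining \(f(0) = 1\) removes the singularity, and the map becomes perfectly well-behaved there.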

Comment by shminux on The shoot-the-moon strategy · 2021-07-21T20:28:40.396Z · LW · GW

One of my go-to examples is crash-only code: graceful shutdown is complicated, so instead you always crash and make your code crash-proof.
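
A minimal sketch of what that can look like (my own TypeScript/Node illustration, with hypothetical file names): every state change is committed atomically, there is no shutdown handler at all, and startup doubles as crash recovery.

```typescript
import { writeFileSync, renameSync, readFileSync, existsSync } from "fs";

const STATE = "state.json";   // hypothetical paths for illustration
const TMP = "state.json.tmp";

// Commit every mutation atomically: write a temp file, then rename it over
// the old one. A crash at any moment leaves either the old or the new state.
function saveState(state: object): void {
  writeFileSync(TMP, JSON.stringify(state));
  renameSync(TMP, STATE); // rename is atomic on POSIX filesystems
}

// No graceful-shutdown path exists: starting up *is* recovering from a crash.
function loadState(): object {
  return existsSync(STATE) ? JSON.parse(readFileSync(STATE, "utf8")) : {};
}
```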

Comment by shminux on Any taxonomies of conscious experience? · 2021-07-18T19:59:52.094Z · LW · GW

Happiness is a complicated emotion. It can spring from so many causes. Maybe start with something more primitive. For example, before you can feel happiness or sadness about having to pick an option, you need to feel the ability to make that choice. So maybe relate the subroutine that considers the options and makes the choice to the feeling of free will, or something. Still quite complicated, but it seems simpler than what you are attempting. Maybe even try to dissect it further.

Comment by shminux on What are some examples from history where a scientific theory predicted a significant experimental observation in advance? · 2021-07-18T07:07:18.305Z · LW · GW

Here are four, off the top of my head:

Graphene

Higgs

Young's double-slit experiment

Ion channels

Comment by shminux on A cognitive algorithm for "free will." · 2021-07-15T02:46:29.239Z · LW · GW

A better question is, if you write a complex enough decision making algorithm, would there necessarily be a part of it that would naturally map into the free will quale?

Comment by shminux on Rationality Yellow Belt Test Questions? · 2021-07-08T00:32:36.609Z · LW · GW

How useful was learning chemistry after 10+ years of the chemistry community existing?

Good question. I don't know what reference class is appropriate here. I can't come up with other communities like this off the top of my head.

The assumptions depend a lot on how many of the possible rationality techniques we have already discovered, and whether, for those techniques that we did discover, we actually got people to use them on a regular basis.

It does. One estimate is "what CFAR teaches", and I think it's quite a bit. Whether the CFAR alumni are measurably better than their peers who didn't attend a CFAR workshop, I don't know; do you?

Comment by shminux on Rationality Yellow Belt Test Questions? · 2021-07-07T08:04:41.782Z · LW · GW

I understand it's meant to be fictional, but probably less fictional than Harry Potter-style magic, in that it is assumed to be achievable without supernatural miracles. Still, the conjecture is that most people would measurably benefit from learning rationality, as opposed to, say, math or tarot cards, and one would expect these benefits to start showing up quite visibly after 10+ years of the community existing.

Comment by shminux on Rationality Yellow Belt Test Questions? · 2021-07-07T05:05:05.532Z · LW · GW

Rationalists are capable of impressive feats individually, and accomplish miracles when working in groups.

I'll believe it when I see it. Any real-life examples where previously ordinary people who mastered zen and the art of rationality "accomplished miracles"?

Comment by shminux on Should VS Would and Newcomb's Paradox · 2021-07-06T03:54:56.051Z · LW · GW

Indeed starting with an imperfect predictor helps. Classic CDT implicitly assumes that you are not a typical subject, but one of those who can go toe-to-toe with Omega. In the limit of 100% accuracy the space of such subjects is empty, but CDT insists on acting as if you are one anyway.

Comment by shminux on Should VS Would and Newcomb's Paradox · 2021-07-05T07:50:55.002Z · LW · GW

I definitely agree with the last paragraph: stick with one perspective. To the predictor, you are an algorithm that either one-boxes or doesn't. There is nothing more to design.

I agree with you on self-locating probabilities not being a useful concept for making optimal decisions. However, in the absent-minded driver problem, randomizing (continuing with probability 2/3 under the standard payoffs) to optimize your payout is not talking about a self-locating probability. Not sure if that is what you meant.
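
(For concreteness, a quick numeric check of that 2/3, assuming the standard Piccione-Rubinstein payoffs: 0 for exiting at the first intersection, 4 for exiting at the second, 1 for driving past both. The sketch is mine, not from the original exchange.)

```typescript
// Expected payoff when the driver continues at each intersection with probability p:
// exit first with prob (1-p) -> 0, exit second with p(1-p) -> 4, pass both with p^2 -> 1.
function expectedPayoff(p: number): number {
  return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1;
}

// Coarse grid search over p; the maximum sits at p = 2/3 with payoff 4/3.
let bestP = 0, best = -Infinity;
for (let p = 0; p <= 1; p += 0.001) {
  const v = expectedPayoff(p);
  if (v > best) { best = v; bestP = p; }
}
console.log(bestP.toFixed(3), best.toFixed(3)); // ≈ 0.667 1.333
```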

I don't understand the point about the Copenhagen-type interpretation at all...

As for free will: metaphysics is definitely not against it, physics is. The feeling of free will is a human cognitive artifact, not anything reducible or emergent. But it doesn't seem useful to argue this point.

Comment by shminux on Should VS Would and Newcomb's Paradox · 2021-07-04T19:13:08.654Z · LW · GW

"Matter" was a poor choice of words (hah). But no, there is no difference between determinism and non-determinism in terms of how free the choices are. Unless you are willing to concede that your choice is determine by the projection postulate, or by which Everett branch "you" end up in.

Comment by shminux on Should VS Would and Newcomb's Paradox · 2021-07-04T00:58:05.850Z · LW · GW

I talked about it a few years back. If you think of the world as a deterministic or non-deterministic evolution of some initial conditions, you can potentially separate out small parts of it as "agents" and study what they would internally consider a better or worse outcome, which is the "should", vs. what they actually do, which is the "would". You don't have to internally run a God's-eye-view algorithm; some agents can still notice themselves "make decisions" (the internal feeling is an artifact of the agency algorithm evolving in time), while understanding that this is only a feeling, and that reality is nothing more than learning what kind of an agent you are, for example whether you one-box or two-box. Or maybe that's what you mean by an outside view.

Notice, however, that the view you have re-discovered is anti-memetic: it contradicts the extremely strong "free choice" output of a subroutine that converts multiple potential maps of the observed world into an action with a certain degree of optimality, and so it is almost instantly rejected internally in most cases. In fact, most agents pretend to find a loophole, often under the guise of compatibilism, that lets them claim that choices matter and that they can affect the outcome by making them, rather than just passively watching themselves think and act and discovering what they would actually do.

Comment by shminux on The Unexpected Hanging Paradox · 2021-06-27T08:09:19.967Z · LW · GW

First, you unexpectedly switched from the unexpected hanging to the unexpected test in your third-to-last paragraph :)

Second, surprise is best defined as an inaccurate map, and the judge/teacher, in their pronouncement, assumes that the prisoner/student will not be able to come up with an accurate map. If the prisoner can come up with one, then the judge's assertion that "it will be a surprise" becomes just another inaccurate map, not the territory. The two maps cannot both be accurate, given the stipulation of "surprise". 

The prisoner's reasoning, as described, is a maximally inaccurate map.

What would be a maximally accurate map for the prisoner? That crucially depends on the mechanism the judge uses to decide on the day. If the judge rolls a fair five-sided die, then the odds of the hanging, given that it hasn't happened yet, are 20% on Monday, 25% on Tuesday, 33% on Wednesday, 50% on Thursday, and 100% on Friday. If the judge instead flips a coin before each day, the probability is 50% each day except Friday, when there is no coin flip and it's 100%. If the judge decides that Friday is right out and rolls a four-sided die, then it's 25%/33%/50%/100%/0%. Maybe the judge always schedules executions on Wednesdays, and if the prisoner knows that, then the odds are 0/0/100%.
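
(That arithmetic generalizes; a small TypeScript sketch of mine: given any prior over the five days, the per-day probability of execution, conditional on it not having happened yet, falls out mechanically.)

```typescript
// Conditional (per-day) execution probabilities from a prior over days.
function hazardRates(prior: number[]): number[] {
  let remaining = prior.reduce((a, b) => a + b, 0);
  return prior.map((p) => {
    const h = remaining > 0 ? p / remaining : 0;
    remaining -= p;
    return h;
  });
}

console.log(hazardRates([0.2, 0.2, 0.2, 0.2, 0.2]));   // fair 5-sided die: 20%, 25%, 33%, 50%, 100%
console.log(hazardRates([0.25, 0.25, 0.25, 0.25, 0])); // "Friday is right out": 25%, 33%, 50%, 100%, 0%
```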

Can the prisoner construct an accurate map? Who knows; their capabilities and their knowledge of the judge are not specified in the problem statement. Either way, increased accuracy of one map can only come at the expense of the accuracy of the other. That's all there is to it. 

Comment by shminux on [deleted post] 2021-06-25T16:10:53.802Z

This whole thing is aliens of the gaps. Grainy, fuzzy images and videos whose quality has not improved in decades, despite manifold advances in optics and imaging (scientific, military, and commercial) all across the EM spectrum. The "aliens" are good enough to avoid outright detection, yet sloppy enough to show faint artifacts of their presence. Now, think for a moment what it would mean for "aliens" to be present on Earth to begin with, despite zero signs of them existing anywhere in the Galaxy. The only sensible action is to stop speculating and focus on obtaining better data whenever we see something unusual.

Comment by shminux on Why did no LessWrong discourse on gain of function research develop in 2013/2014? · 2021-06-19T23:49:11.580Z · LW · GW

I don't disagree that it was discussed on LW... I'm just pointing out that there was little interest from the founder himself.

Comment by shminux on Why did no LessWrong discourse on gain of function research develop in 2013/2014? · 2021-06-19T06:04:37.570Z · LW · GW

Eliezer's X-risk emphasis has always been about extinction-level events, and a pandemic ain't one, so it didn't get a lot of attention from... the top.

Comment by shminux on Can someone help me understand the arrow of time? · 2021-06-17T05:19:08.449Z · LW · GW

Observations.

Comment by shminux on Can someone help me understand the arrow of time? · 2021-06-16T07:17:43.776Z · LW · GW

There are no actionable predictions in his models, so they are mostly of aesthetic value.

Comment by shminux on Can someone help me understand the arrow of time? · 2021-06-16T07:15:21.264Z · LW · GW

Time is a convenient abstraction. Like baseball.

Comment by shminux on Psyched out · 2021-06-15T05:25:49.654Z · LW · GW

I'd actually suggest starting at a different blog: https://www.lesswrong.com/posts/vwqLfDfsHmiavFAGP/the-library-of-scott-alexandria

Comment by shminux on Why do patients in mental institutions get so little attention in the public discourse? · 2021-06-12T22:34:43.653Z · LW · GW

It's just a general pattern of overlooking certain kinds of terrible suffering that are not very visible. My go-to example: even by the most conservative estimates, at least 1% of children go through severe physical, emotional, and sexual abuse growing up, which means that if you live in a city, there is a high chance that a girl is being raped by her brother/uncle/father within a mile of you right now, and no one will hear about it or pay attention until it's too late. A decade or two or three down the road she will end up in a psych ward with incurable CPTSD manifesting as a host of personality disorders, only to be marginalized and often abused and neglected there as well. 

Omelas was so much better, even for the one suffering child, compared to our society. At least everyone there knew about the suffering child, and its suffering was not completely in vain. And there is nowhere to walk away to; it's no better anywhere else.

Comment by shminux on Other Constructions of Gravity · 2021-06-10T00:29:35.411Z · LW · GW

Uninformed indeed :) We know that Newtonian gravity is a low-energy, slow-motion approximation of General Relativity, and that a sentence like "total mass of the universe" is meaningless in a spatially flat but expanding universe. While there is tension between GR and QM, and GR has no good explanation for the Tully–Fisher relation, anything that would do a better job would have to be compatible with GR in the regime where it is shown to work well. Consider reading up on the current state of the field before coming up with your own models. Also, this reminds me of my very old post.

Comment by shminux on The dumbest kid in the world (joke) · 2021-06-06T04:45:33.321Z · LW · GW

Not smart enough to pretend to be dumb when asked for his reasons, is he?

Comment by shminux on Paper Review: Mathematical Truth · 2021-06-05T04:08:55.701Z · LW · GW

Interesting... My feeling is that we are not even using the same language. Probably because of something deep. It might be the definition of some words, but I doubt it. 

Knowledge - I think knowledge has to be correct to be knowledge, otherwise you just think you have knowledge.

What does it mean for knowledge to be correct? To me it means that it can be used to make good predictions. 

you think that knowledge just means a belief that is likely to be true (and for the right reason?)

Well, that's the same thing: a model that makes good predictions. "The right reason" is just another way to say "the model's domain of applicability can be expanded without a significant loss of accuracy". 

It's unclear to me how you would cash out "accurate map" for things that you can't physically observe like math

You can "observe math", as much as you can observe anything. How do you observe something else that is not "plainly visible", like, say, UV radiation? 

We both agree it doesn't matter for our day-to-day lives whether math is real or not.

That is not quite what I said, I think. I meant that math is as real as, well, baseball.

You seem to think that mathematical knowledge doesn't exist, because mathematical "knowledge" is just what we have derived within a system.

I... was saying the opposite: that mathematical knowledge exists just as much as any other knowledge; it just comes equipped with its own unique rigging, like proven theorems being "true", or, in GEB's language, a collection of valid strings or something. I don't want to go deeper, since math is not my area.

In general, the concepts of existence and reality, while useful, have limited applicability and even a limited lifetime. One can say that some models exist more than others, or are more real than others. 

I also view epistemic uniformity as pretty important, because we should have the same standards of knowledge across all fields.

I agree with that, but those standards are not linguistic in the way (your review of) Benacerraf's paper describes, i.e., that statements should have the same form (semantic uniformity). The standards are whether the models are accurate (in terms of their observational value) in their domain of applicability, and how well they can be extended to other domains. Semantic uniformity is sometimes useful and sometimes not, and there is no reason I can see that it should be universally valid.

Not sure if this made sense... Most people don't naturally think in the way I described.

Comment by shminux on Paper Review: Mathematical Truth · 2021-06-01T01:26:13.384Z · LW · GW

First, I appreciate your thoughtful reply!

It sounds like your view is that mathematical sentences have different forms (they all have an implicit "within some mathematical system that is relevant, it is provable that..." before them)

Yes. And your paraphrasing matches what I tried to express pretty well, except when you use the term "knowledge".

math is just a map, and maps are neither true nor false. If math is just a map, then there is no such thing as objective mathematical truth.

Depends on your definition of "objective". It's a loaded term, and people vehemently disagree on its meaning.

So it sounds like you agree that knowledge about any mathematical object is impossible.

Not really; I just don't think you and I use the term "knowledge" the same way. I reject the old definition "justified true belief" because it has the weasel word "true" in it. Knowledge is an accurate map, nothing else.

Epistemic uniformity says that evaluating the truth-value of a mathematical statement should be a similar process to evaluating the truth-value of any other statement.

I'd restrict the notion of "truth" to proven theorems. Not just provable, but actually proved. Which also means that different people have different mathematical truths. If I don't know what the eighth decimal digit of pi is, the statement that it equals six is neither true nor false for me, not without additional evidence. In that sense, a set of mathematical axioms carves out a piece of territory in the model space. There is nothing particularly contradictory about that: we are all embedded agents, and any map is also a territory, in the minds of the agents. I agree that math is not very special, except insofar as it has a specific structure, a set of axioms that can be combined to prove theorems, and those theorems can sometimes serve as useful maps of the territory outside the math itself.

I am not sure what your objection is to the statement that mathematical truths can be discovered experimentally. Seems like we are saying the same thing?

Doing math (under the intutionist paradigm) tells us whether something is provable within a mathematical system, but it has no bearing on whether it is true outside of our minds.

It's worse than that: "truth" is not a coherent concept outside of the parts of our minds that do math.

My main objection to intuitionism is that it makes a lot of math time-dependent (e.g., 2+2 didn't equal 4 until someone proved it for the first time).

A better way to state this is that the theorem 2+2=4 was not part of whatever passed for math back then. We are in a process of continuous model building: some models work out and persist for a time, some don't and fade away quickly. Some models propagate through multiple human minds and take over as "truths", while others remain niche, even if they are accurate and useful. That depends on the memetic power of the model, not just on how accurate it is. Religions, for example, have a lot of memetic power, even though their predictions are wildly inaccurate. 

It seems to me that math is a real thing in the universe, it was real because humans comprehended it, and it will remain real after humans are gone. That view is incompatible with intuitionism.

Again, "real" does all the work here. Math is useful to humans. The model that "[math] will remain after humans are gone" is content-free unless you specify how it can be tested. And that requires a lot of assumptions, such as "what if another civilization arose, would it construct mathematics the way humans do?" -- and we have no way to test that, given that we know of no other civilizations.

can you be a bit more specific about the contradiction you think is avoided by giving up Platonism? I think that you still don't have epistemic and semantic uniformity with an intuitionist/combinatorial theory of math

If you give up Platonism as some independent idea-realm, you don't have to worry about meaningless questions like "are numbers real?", only about "are numbers useful?". Semantic uniformity disappears, except as a model that is sometimes useful and sometimes not; in the examples given, it is not useful. Epistemic uniformity is trivially true, since all mathematical "knowledge" is internal to the mathematical system in question.

We might be talking past each other though.

Comment by shminux on Why don't long running conversations happen on LessWrong? · 2021-05-31T16:13:07.747Z · LW · GW

My wild guess is that, yes, "instant gratification" is important for engaging people better. There was a recent discussion on how to do that, but it fizzled. A built-in chat window, a live comment scroll window, a temporary Discord channel for select posts where the author commits to being around at announced times... there are many ways to engage the audience better.

Comment by shminux on Paper Review: Mathematical Truth · 2021-05-31T08:14:42.793Z · LW · GW

There is no contradiction if you treat mathematical knowledge as a map of the world, not anything separate.

 2+2=4 is a useful model for counting sheep, not as useful for counting raindrops. 

Maps are neither true nor false, they have various degrees of accuracy (i.e. explaining existing observations and predicting new observations well) and applicability (a set of observations where they show good accuracy). 

Platonism makes the mistake of promoting an accurate and widely applicable model (a certain type of math) into its own special territory, and that's how it all goes wrong. Epistemic uniformity simply states that math is a useful model. Mathematical statements can be true or false internally, i.e. consistent or inconsistent with the axioms of the model, but they have no truth value as applied to the territory. None. Only usefulness in a certain domain of applicability. 

In this framework, semantic uniformity is a meaningless construct. You can only talk about truth value as internal to its own model, and 1 and 2 are from different models of different parts of the territory. 3 is... nothing; it has no meaning without a context. There is no reason at all that 1 and 2 should have the form 3, unless they happen to be submaps of the same map where 3 is a useful statement. For example, in the intuitionist view, "There are at least three perfect numbers greater than 17" is discoverable experimentally (by proving a theorem, or by finding the 3 numbers after some work), just like "There are at least three large cities older than New York" is discoverable experimentally (e.g., by checking in person or online). Again, I'm discounting Platonism, because it confuses map and territory.
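
(To make "finding the 3 numbers after some work" concrete, a brute-force TypeScript sketch of mine; this is the "experimental" discovery in miniature.)

```typescript
// "There are at least three perfect numbers greater than 17", checked by brute force.
function isPerfect(n: number): boolean {
  let sum = 0;
  for (let d = 1; d <= n / 2; d++) if (n % d === 0) sum += d;
  return sum === n; // a perfect number equals the sum of its proper divisors
}

const found: number[] = [];
for (let n = 18; found.length < 3; n++) {
  if (isPerfect(n)) found.push(n);
}
console.log(found); // [28, 496, 8128]
```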

Comment by shminux on Against Being Against Growth · 2021-05-29T19:43:17.916Z · LW · GW

Well, yes, but that's the difference between instrumental and terminal goals. If your terminal goal is (longest) survival, not profit or growth, your best instrumental goals are not that obvious. This is basically like any strategy game. Should you eliminate any competition the moment you notice it? Should you alternate between growth (through profit) and war? Should you cultivate competition for a time, then cull it just before it becomes a threat? Should you conserve limited resources?

The growth stage is definitely important as part of any "proposed solution", but that doesn't mean it's the main metric to focus on.

Comment by shminux on Against Being Against Growth · 2021-05-29T07:58:16.325Z · LW · GW

imagine two ice cream stores, one which cares about profit-maximization at the expense of all else, and the other which cares about X, for any X other than profit-maximization

if X is "eliminating competition", then the second store might end up more successful (and more sustainable) in the long term, while the first one will sleep with the fishes.

Comment by shminux on What's your probability that the concept of probability makes sense? · 2021-05-23T01:31:02.270Z · LW · GW

Probability is a useful self-consistent model, and so it better be 100% where applicable.

Comment by shminux on Uninformed Elevation of Trust · 2021-05-18T01:32:58.849Z · LW · GW

That's... a surprisingly detailed and interesting analysis, potentially worthy of a separate post. My prototypical example would be something like:

  1. Your friend, who is a VP at public company XCOMP, says: "this quarter has been exceptionally busy; we delivered a record number of widgets and have a backlog of new orders to last a year. So happy about having all these vested stock options."
  2. You decide that XCOMP is a good investment, since your friend is trustworthy, has the accurate info, and would not benefit from you investing in XCOMP.
  3. You plunk a few grand into XCOMP stock.
  4. The stock value drops after the next quarterly report.
  5. You mention it to your friend, who says "yeah, it's risky to invest in a single stock, no matter how good the company looks, I always diversify."

What happened here is that your friend's own odds of the stock going up were maybe 50%, while you, because you find them 99% trustworthy, estimated the odds of XCOMP going up at 90%. That is the uninformed elevation of trust I am talking about. 

Another example: Elon Musk says "We will have full self-driving ready to go later this year." You, as an Elon fanboy, take it as gospel and rush to buy the FSD option for your Model 3. Whereas, if pressed, Elon would say "I am confident that we can stick to this aggressive timeline if everything goes smoothly" (which it never does).

So it's closer to what you call Assumption Amnesia, as I understand it.

Comment by shminux on How concerned are you about LW reputation management? · 2021-05-17T21:18:07.991Z · LW · GW

Sure, if you are interested, some of these are below, in reverse chronological order, but I am quite sure your reaction would match that of the others: either a shrug or a cringe.

And yes, I agree that the reasons are related to both the writing style, and to the audience being "ready and interested to hear it."

Comment by shminux on How concerned are you about LW reputation management? · 2021-05-17T20:31:59.458Z · LW · GW

For comparison, I have over two dozen posts in Drafts, accumulated over several years, that are unlikely to ever get published. One reason is that there are likely plenty of regulars whose reaction to the previous sentence would be "And thank God for that!" Another is the underwhelming response to what I personally considered my best contributions to the site. Admittedly, this is not a typical situation. 

Comment by shminux on Does butterfly affect? · 2021-05-16T01:01:12.456Z · LW · GW

A causes B iff the model [A causes B] performs superior to other models in some (all?) games / environments

There are two parts that go into this: the rules of the game, and its initial state. You can fix one or both, and you can vary one or both. And by "vary" I mean "come up with a distribution, draw an instance at random for a particular run", then see which runs cause what. For example, in physics you could start with general relativity and vary the gravitational constant, the cosmological constant, the initial expansion rate, the homogeneity levels, etc. Your conclusion might be something like "given this range of parameters, the inhomogeneities cause the galaxies to form around them; given another range of parameters, the universe might collapse or blow up without any galaxies forming." So, yes, as you said,

"A causes B" ... has a funny dependence on the game or environment we choose

In the Game of Life, given a certain setup, a glider can hit a stable block, causing its destruction. This setup could be unique, or stable to a range of perturbations or even to large changes, and it would still make sense to use the cause/effect concept. 
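
(A minimal TypeScript sketch of that setup, mine rather than anything from the thread: the standard Life update rule, seeded with a glider aimed at a 2x2 block; step it and watch the collision disturb the block.)

```typescript
type Grid = boolean[][];

// One synchronous Game of Life step on a bounded grid (off-grid cells count as dead).
function step(g: Grid): Grid {
  return g.map((row, r) =>
    row.map((alive, c) => {
      let n = 0;
      for (let dr = -1; dr <= 1; dr++)
        for (let dc = -1; dc <= 1; dc++) {
          if ((dr !== 0 || dc !== 0) && g[r + dr]?.[c + dc]) n++;
        }
      return alive ? n === 2 || n === 3 : n === 3; // survive on 2-3 neighbors, birth on exactly 3
    })
  );
}

let world: Grid = Array.from({ length: 12 }, () => Array(12).fill(false));
[[0, 1], [1, 2], [2, 0], [2, 1], [2, 2]].forEach(([r, c]) => (world[r][c] = true)); // glider
[[7, 7], [7, 8], [8, 7], [8, 8]].forEach(([r, c]) => (world[r][c] = true));         // stable block
for (let t = 0; t < 24; t++) world = step(world); // the glider reaches and disturbs the block
```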

The counterfactuals in all those cases would be in the way we set up a particular instance of the universe: the laws and the initial conditions. They are counterfactual because in our world we only have the one run, and all others are imagined, not "real". However, if one can set up a model of our world where a certain (undetectable) variation leads to a stable outcome, then those variations would be counterfactuals. The condition that the variations are undetectable at the available resolution is essential; otherwise it would not look like the same world to us. I had a post about that, too. 

An example of this "low-res" view producing an apparent counterfactual is the classic 

If Lee Harvey Oswald hadn't shot John F. Kennedy, someone else would have

If you can set up a simulation with varying initial conditions that includes, as Eliezer suggests, a conspiracy to kill JFK, but varies in whether Oswald was a good/available tool for it, then, presumably, in many of those runs JFK would have been shot within a time frame not too different from our particular realization. In some others JFK would have been killed by poison or a knife rather than shot, and so Lee Harvey Oswald would not be the butterfly you are describing. In the models where there is no conspiracy, Oswald would have been the butterfly, again, as Eliezer describes. There are many other possible butterflies and non-butterflies in this setup, of course, from gusts of wind at the wrong time to someone discovering the conspiracy early.

Note that some of those imagined worlds are probably physically impossible, as in, when extrapolated into the past, they would imply macroscopic effects that are incompatible with observations. For example, Oswald missing his shot may have required the rifle to be of poor quality, which would have been incompatible with the known quality-control procedures in place when it was made. 

Hope some of this makes sense.

Comment by shminux on Does butterfly affect? · 2021-05-15T03:07:25.064Z · LW · GW

The counterfactual approach is indeed very popular, despite its obvious limitations. You can see a number of posts from Chris Leong here on the topic, for example. As for comparing the performance of different agents, I wrote a post about it some years ago; not sure if that is what you meant, or if it even makes sense to you. 

Comment by shminux on Does butterfly affect? · 2021-05-14T07:37:55.750Z · LW · GW

Would the hurricane have happened if not for the butterfly?

You are talking about counterfactuals, and those are a difficult problem to solve when there is only one deterministic or probabilistic world and nothing else. A better question is "Does a model where 'a hurricane would not have happened as it did, if not for the butterfly' make useful and accurate predictions about the parts of the world we have not yet observed?" If so, then it's useful to talk about a butterfly causing a hurricane; if not, then it's a bad model. This question is answerable, and as someone with expertise in "complexity science," whatever it might be, you are probably well qualified to answer it. It seems that your answer is "the impact of butterfly’s wings will typically not rise above the persistent stochastic inputs affecting the Earth," meaning that the model where a butterfly caused the hurricane is not a useful one. In that clearly defined sense, you have answered the question you posed. 

Comment by shminux on Where do LessWrong rationalists debate? · 2021-04-29T23:02:12.878Z · LW · GW

Discord works just fine for most cases, but the existing LW Discord is all but dead (or was, last time I looked), since it's a separate entity and requires active, competent admins and moderators. A button like "discuss this post live on Discord," possibly with a few of the latest comments visible, would likely make a difference by removing a (non-trivial) inconvenience.

Comment by shminux on "Who I am" is an axiom. · 2021-04-26T04:31:47.340Z · LW · GW

The idea of an "I" is an output of your brain, which has a model of the outside world, of others like you, and of you. In programming terms, the "programming language" of your mind has the reflection and introspection capabilities that provide some limited access to "this" or "self". There is nothing mysterious about it, and there is no need to axiomatize it. 

import human.lang.reflect.*; // pseudocode, riffing on Java's java.lang.reflect
// me.getClass().getDeclaredMethods()