Posts

Nitric Oxide Spray... a cure for COVID19?? 2021-03-15T19:36:17.054Z
Uninformed Elevation of Trust 2020-12-28T08:18:07.357Z
Learning is (Asymptotically) Computationally Inefficient, Choose Your Exponents Wisely 2020-10-22T05:30:18.648Z
Mask wearing: do the opposite of what the CDC/WHO has been saying? 2020-04-02T22:10:31.126Z
Good News: the Containment Measures are Working 2020-03-17T05:49:12.516Z
(Double-)Inverse Embedded Agency Problem 2020-01-08T04:30:24.842Z
Since figuring out human values is hard, what about, say, monkey values? 2020-01-01T21:56:28.787Z
A basic probability question 2019-08-23T07:13:10.995Z
Inspection Paradox as a Driver of Group Separation 2019-08-17T21:47:35.812Z
Religion as Goodhart 2019-07-08T00:38:36.852Z
Does the Higgs-boson exist? 2019-05-23T01:53:21.580Z
A Numerical Model of View Clusters: Results 2019-04-14T04:21:00.947Z
Quantitative Philosophy: Why Simulate Ideas Numerically? 2019-04-14T03:53:11.926Z
Boeing 737 MAX MCAS as an agent corrigibility failure 2019-03-16T01:46:44.455Z
To understand, study edge cases 2019-03-02T21:18:41.198Z
How to notice being mind-hacked 2019-02-02T23:13:48.812Z
Electrons don’t think (or suffer) 2019-01-02T16:27:13.159Z
Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets 2018-12-16T06:37:13.623Z
Aligned AI, The Scientist 2018-11-12T06:36:30.972Z
Logical Counterfactuals are low-res 2018-10-15T03:36:32.380Z
Decisions are not about changing the world, they are about learning what world you live in 2018-07-28T08:41:26.465Z
Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. 2018-07-12T06:52:19.440Z
The Fermi Paradox: What did Sandberg, Drexler and Ord Really Dissolve? 2018-07-08T21:18:20.358Z
Wirehead your Chickens 2018-06-20T05:49:29.344Z
Order from Randomness: Ordering the Universe of Random Numbers 2018-06-19T05:37:42.404Z
Physics has laws, the Universe might not 2018-06-09T05:33:29.122Z
[LINK] The Bayesian Second Law of Thermodynamics 2015-08-12T16:52:48.556Z
Philosophy professors fail on basic philosophy problems 2015-07-15T18:41:06.473Z
Agency is bugs and uncertainty 2015-06-06T04:53:19.307Z
A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats 2015-04-18T23:46:49.750Z
[LINK] Scott Adam's "Rationality Engine". Part III: Assisted Dying 2015-04-02T16:55:29.684Z
In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him? 2015-02-27T20:57:19.777Z
We live in an unbreakable simulation: a mathematical proof. 2015-02-09T04:01:48.531Z
Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later. 2014-08-28T23:37:06.430Z
[LINK] Could a Quantum Computer Have Subjective Experience? 2014-08-26T18:55:43.420Z
[LINK] Physicist Carlo Rovelli on Modern Physics Research 2014-08-22T21:46:01.254Z
[LINK] "Harry Potter And The Cryptocurrency of Stars" 2014-08-05T20:57:27.644Z
[LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient 2014-07-04T20:00:48.176Z
[LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy 2014-06-23T19:09:54.047Z
[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality 2014-06-19T20:17:14.063Z
List a few posts in Main and/or Discussion which actually made you change your mind 2014-06-13T02:42:59.433Z
Mathematics as a lossy compression algorithm gone wild 2014-06-06T23:53:46.887Z
Reflective Mini-Tasking against Procrastination 2014-06-06T00:20:30.692Z
[LINK] No Boltzmann Brains in an Empty Expanding Universe 2014-05-08T00:37:38.525Z
[LINK] Sean Carroll Against Afterlife 2014-05-07T21:47:37.752Z
[LINK] Sean Carrol's reflections on his debate with WL Craig on "God and Cosmology" 2014-02-25T00:56:34.368Z
Are you a virtue ethicist at heart? 2014-01-27T22:20:25.189Z
LINK: AI Researcher Yann LeCun on AI function 2013-12-11T00:29:52.608Z
As an upload, would you join the society of full telepaths/empaths? 2013-10-15T20:59:30.879Z
[LINK] Larry = Harry sans magic? Google vs. Death 2013-09-18T16:49:17.876Z

Comments

Comment by shminux on Why did no LessWrong discourse on gain of function research develop in 2013/2014? · 2021-06-19T23:49:11.580Z · LW · GW

I don't disagree that it was discussed on LW... I'm just pointing out that there was little interest from the founder himself.

Comment by shminux on Why did no LessWrong discourse on gain of function research develop in 2013/2014? · 2021-06-19T06:04:37.570Z · LW · GW

Eliezer's X-risk emphasis has always been about extinction-level events, and a pandemic ain't one, so it didn't get a lot of attention from... the top.

Comment by shminux on Can someone help me understand the arrow of time? · 2021-06-17T05:19:08.449Z · LW · GW

Observations.

Comment by shminux on Can someone help me understand the arrow of time? · 2021-06-16T07:17:43.776Z · LW · GW

There are no actionable predictions in his models, so they are mostly of aesthetic value.

Comment by shminux on Can someone help me understand the arrow of time? · 2021-06-16T07:15:21.264Z · LW · GW

Time is a convenient abstraction. Like baseball.

Comment by shminux on Psyched out · 2021-06-15T05:25:49.654Z · LW · GW

I'd actually suggest starting at a different blog: https://www.lesswrong.com/posts/vwqLfDfsHmiavFAGP/the-library-of-scott-alexandria

Comment by shminux on Why do patients in mental institutions get so little attention in the public discourse? · 2021-06-12T22:34:43.653Z · LW · GW

It's just a general pattern of overlooking certain kinds of terrible suffering that is not very visible. My go-to example: even by the most conservative estimates, at least 1% of children go through severe physical, emotional and sexual abuse growing up, which means that, if you live in a city, there is a good chance that a girl is being raped by her brother/uncle/father within a mile of you right now, and no one will hear about it or pay attention until it's too late. A decade or two or three down the road she will end up in a psych ward with incurable CPTSD manifesting as a host of personality disorders, only to be marginalized, and often abused and neglected, there as well.

Omelas was so much better, even for the one suffering child, compared to our society. At least everyone there knew about the suffering child, and its suffering was not completely in vain. And there is nowhere to walk away to; it's no better anywhere else.

Comment by shminux on Other Constructions of Gravity · 2021-06-10T00:29:35.411Z · LW · GW

Uninformed indeed :) We know that Newtonian gravity is a low-energy slow-motion approximation of General Relativity, and that a sentence like "total mass of the universe" is meaningless in the spatially flat but expanding universe. While there is a tension between GR and QM, and it has no good explanation for the Tully–Fisher relation, anything that would do a better job would have to be compatible with GR in the regime where it is shown to work well. Consider reading up on the current state of the field before coming up with your own models. Also, reminds me of my very old post.

Comment by shminux on The dumbest kid in the world (joke) · 2021-06-06T04:45:33.321Z · LW · GW

Not smart enough to pretend to be dumb when asked for his reasons, is he.

Comment by shminux on Paper Review: Mathematical Truth · 2021-06-05T04:08:55.701Z · LW · GW

Interesting... My feeling is that we are not even using the same language. Probably because of something deep. It might be the definition of some words, but I doubt it. 

Knowledge - I think knowledge has to be correct to be knowledge, otherwise you just think you have knowledge.

What does it mean for knowledge to be correct? To me it means that it can be used to make good predictions.

you think that knowledge just means a belief that is likely to be true (and for the right reason?)

Well, that's the same thing, a model that makes good predictions. "The right reason" is just another way of saying "the model's domain of applicability can be expanded without a significant loss of accuracy".

It's unclear to me how you would cash out "accurate map" for things that you can't physically observe like math

You can "observe math", as much as you can observe anything. How do you observe something else that is not "plainly visible", like, say, UV radiation? 

We both agree it doesn't matter for our day-to-day lives whether math is real or not.

That is not quite what I said, I think. I meant that math is as real as, well, baseball.

You seem to think that mathematical knowledge doesn't exist, because mathematical "knowledge" is just what we have derived within a system.

I... was saying the opposite. That mathematical knowledge exists just as much as any other knowledge, it just comes equipped with its own unique rigging, like proven theorems being "true", or, in GEB's language, a collection of valid strings or something. I don't want to go deeper, since math is not my area.

In general, the concepts of existence and reality, while useful, have limited applicability and even a limited lifetime. One can say that some models exist more than others, or are more real than others.

I also view epistemic uniformity as pretty important, because we should have the same standards of knowledge across all fields.

I agree with that, but those standards are not linguistic in the way (your review of) Benacerraf's paper describes, i.e. that statements should have the same form (semantic uniformity). The standards are whether the models are accurate (in terms of their observational value) in the domain of their applicability, and how well they can be extended to other domains. Semantic uniformity is sometimes useful and sometimes not, and there is no reason that I can see that it should be universally valid.

Not sure if this made sense... Most people don't naturally think in the way I described.

Comment by shminux on Paper Review: Mathematical Truth · 2021-06-01T01:26:13.384Z · LW · GW

First, I appreciate your thoughtful reply!

It sounds like your view is that mathematical sentences have different forms (they all have an implicit "within some mathematical system that is relevant, it is provable that..." before them)

Yes. And your paraphrasing matches what I tried to express pretty well, except when you use the term "knowledge".

math is just a map, and maps are neither true nor false. If math is just a map, then there is no such thing as objective mathematical truth.

Depends on your definition of "objective". It's a loaded term, and people vehemently disagree on its meaning.

So it sounds like you agree that knowledge about any mathematical object is impossible.

Not really, I just don't think you and I use the term "knowledge" the same way. I reject the old definition "justified true belief", because it has a weasel word "true" in it. Knowledge is an accurate map, nothing else.

Epistemic uniformity says that evaluating the truth-value of a mathematical statement should be a similar process to evaluating the truth-value of any other statement.

I'd restrict the notion of "truth" to proved theorems. Not just provable, but actually proved. Which also means that different people have different mathematical truths. If I don't know what the eighth decimal digit of pi is, the statement that it is equal to six is neither true nor false for me, not without additional evidence. In that sense, a set of mathematical axioms carves out a piece of territory that is in the model space. There is nothing particularly contradictory about that: we are all embedded agents, and any map is also a territory, in the minds of the agents. I agree that math is not very special, except insofar as it has a specific structure, a set of axioms that can be combined to prove theorems, and those theorems can sometimes serve as useful maps of the territory outside of math itself.
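
To make the pi-digit example concrete: the statement stays truth-valueless for an agent only until evidence arrives, and computing the digit is exactly that evidence. A minimal Python sketch:

```python
import math

# "The eighth decimal digit of pi is six" is neither true nor false for me
# until I gather evidence -- e.g. by computing the digit directly.
digits_after_point = str(math.pi).split(".")[1]  # '141592653589793'
eighth_digit = int(digits_after_point[7])
print(eighth_digit, eighth_digit == 6)  # 5 False
```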

I am not sure what your objection is to the statement that mathematical truths can be discovered experimentally. Seems like we are saying the same thing?

Doing math (under the intuitionist paradigm) tells us whether something is provable within a mathematical system, but it has no bearing on whether it is true outside of our minds.

It's worse than that. "Truth" is not a coherent concept outside of the parts of our minds that do math.

My main objection to intuitionism is that it makes a lot of math time-dependent (e.g. 2+2 didn't equal 4 until someone proved it for the first time).

A better way to state this is that the theorem 2+2=4 was not a part of whatever passed for math back then. We are in the process of continuous model building, some models work out and persist for a time, some don't and fade away quickly. Some models propagate through multiple human minds and take over as "truths", and others remain niche, even if they are accurate and useful. That depends on the memetic power of the model, not just on how accurate it is. Religions, for example, have a lot of memetic power, even if their predictions are wildly inaccurate. 

It seems to me that math is a real thing in the universe, it was real because humans comprehended it, and it will remain real after humans are gone. That view is incompatible with intuitionism.

Again, "real" does all the work here. Math is useful to humans. The model that "[math] will remain after humans are gone" is content-free unless you specify how it can be tested. And that requires a lot of assumptions, such as "what if another civilization arose, would it construct mathematics the way humans do?" -- and we have no way to test that, given that we know of no other civilizations.

can you be a bit more specific about the contradiction you think is avoided by giving up Platonism? I think that you still don't have epistemic and semantic uniformity with an intuitionist/combinatorial theory of math

If you give up Platonism as some independent idea-realm, you don't have to worry about meaningless questions like "are numbers real?" but only about "are numbers useful?" Semantic uniformity disappears except as a model that is sometimes useful and sometimes not. In the examples given it is not useful. Epistemic uniformity is trivially true, since all mathematical "knowledge" is internal to the mathematical system in question.

We might be talking past each other though.

Comment by shminux on Why don't long running conversations happen on LessWrong? · 2021-05-31T16:13:07.747Z · LW · GW

My wild guess is that, yes, "instant gratification" is important to engage people better. There was a recent discussion on how to do that, but it fizzled.  A built-in chat window, a live comment scroll window, a temporary discord channel for select posts where the author commits to being around at announced times... there are many ways to engage the audience better.

Comment by shminux on Paper Review: Mathematical Truth · 2021-05-31T08:14:42.793Z · LW · GW

There is no contradiction if you treat mathematical knowledge as a map of the world, not anything separate.

 2+2=4 is a useful model for counting sheep, not as useful for counting raindrops. 

Maps are neither true nor false, they have various degrees of accuracy (i.e. explaining existing observations and predicting new observations well) and applicability (a set of observations where they show good accuracy). 

Platonism makes the mistake of promoting an accurate and widely applicable model (a certain type of math) into its own special territory, and that's how it all goes wrong. Epistemic uniformity simply states that math is a useful model. Mathematical statements can be true or false internally, i.e. consistent or inconsistent with the axioms of the model, but they have no truth value as applied to the territory. None. Only usefulness in a certain domain of applicability. 

In this framework, semantic uniformity is a meaningless construct. You can only talk about a truth value as being internal to its own model, and 1 and 2 are from different models of different parts of the territory. 3 is... nothing, it has no meaning without a context. There is no reason at all that 1 and 2 should have the form 3, unless they happen to be submaps of the same map where 3 is a useful statement. For example, in the intuitionist view, "There are at least three perfect numbers greater than 17" is discoverable experimentally (by proving a theorem or by finding the three numbers after some work), just like "There are at least three large cities older than New York" is discoverable experimentally (e.g. by checking in person or online). Again, I'm discounting Platonism, because it confuses map and territory.
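
The perfect-number statement really can be settled by doing the work, the same way you'd check the cities. A brute-force sketch:

```python
# "There are at least three perfect numbers greater than 17" is discoverable
# experimentally in the above sense: do the work and find them.
def is_perfect(n: int) -> bool:
    """True if n equals the sum of its proper divisors."""
    if n < 2:
        return False
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d * d != n:
                total += n // d  # the paired divisor
        d += 1
    return total == n

found = [n for n in range(18, 8_200) if is_perfect(n)]
print(found[:3])  # [28, 496, 8128]
```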

Comment by shminux on Against Being Against Growth · 2021-05-29T19:43:17.916Z · LW · GW

Well, yes, but that's the difference between instrumental and terminal goals. If your terminal goal is (longest) survival, not profit or growth, your best instrumental goals are not that obvious. This is basically like in any strategy game. Should you eliminate any competition the moment you notice it? Should you alternate between growth (through profit) and war? Should you cultivate competition for a time, then cull them just before they become a threat? Should you conserve limited resources?

The growth stage is definitely important as part of any "proposed solution", but that doesn't mean it's the main metric to focus on.

Comment by shminux on Against Being Against Growth · 2021-05-29T07:58:16.325Z · LW · GW

imagine two ice cream stores, one which cares about profit-maximization at the expense of all else, and the other which cares about X, for any X other than profit-maximization

If X is "eliminating competition", then the second store might end up more successful (and more sustainable) in the long term, while the first one will sleep with the fishes.

Comment by shminux on What's your probability that the concept of probability makes sense? · 2021-05-23T01:31:02.270Z · LW · GW

Probability is a useful self-consistent model, and so it better be 100% where applicable.

Comment by shminux on Uninformed Elevation of Trust · 2021-05-18T01:32:58.849Z · LW · GW

That's... a surprisingly detailed and interesting analysis, potentially worthy of a separate post. My prototypical example would be something like

  1. Your friend who is a VP at public company XCOMP says "this quarter has been exceptionally busy, we delivered a record number of widgets and have a backlog of new orders, enough to last a year. So happy about having all these vested stock options."
  2. You decide that XCOMP is a good investment, since your friend is trustworthy, has the accurate info, and would not benefit from you investing in XCOMP.
  3. You plunk a few grand into XCOMP stock.
  4. The stock value drops after the next quarterly report.
  5. You mention it to your friend, who says "yeah, it's risky to invest in a single stock, no matter how good the company looks, I always diversify."

What happened here is that your friend's odds of the stock going up were maybe 50%, while you, because you find them 99% trustworthy, estimated the odds of XCOMP going up at 90%. That is the uninformed elevation of trust I am talking about.

Another example: Elon Musk says "We will have full self-driving ready to go later this year". You, as an Elon fanboy, take it as gospel and rush to buy the FSD option for your Model 3. Whereas, if pressed, Elon would say "I am confident that we can stick to this aggressive timeline if everything goes smoothly" (which it never does).

So, it's closer to what you call the Assumption Amnesia, as I understand it.

Comment by shminux on How concerned are you about LW reputation management? · 2021-05-17T21:18:07.991Z · LW · GW

Sure, if you are interested, some of these are below, in reverse chronological order, but, I am quite sure, your reaction would match that of the others: either a shrug or a cringe.

And yes, I agree that the reasons are related to both the writing style, and to the audience being "ready and interested to hear it."

Comment by shminux on How concerned are you about LW reputation management? · 2021-05-17T20:31:59.458Z · LW · GW

For comparison, I have over two dozen posts in Drafts, accumulated over several years, that are unlikely to ever get published. One reason for it is that there are likely plenty of regulars whose reaction to the previous sentence would be "And thank God for that!" Another is the underwhelming response to what I personally considered my best contributions to the site. Admittedly this is not a typical situation. 

Comment by shminux on Does butterfly affect? · 2021-05-16T01:01:12.456Z · LW · GW

A causes B iff the model [A causes B] performs superior to other models in some (all?) games / environments

There are two parts that go into this: the rules of the game, and its initial state. You can fix one or both, and you can vary one or both. And by "vary" I mean "come up with a distribution, draw an instance at random for a particular run," then see which runs cause what. For example, in physics you could start with general relativity and vary the gravitational constant, the cosmological constant, the initial expansion rate, the homogeneity levels, etc. Your conclusion might be something like "given this range of parameters, the inhomogeneities cause the galaxies to form around them; given another range of parameters, the universe might collapse or blow up without any galaxies forming." So, yes, as you said,

"A causes B" ... has a funny dependence on the game or environment we choose

In the Game of Life, given a certain setup, a glider can hit a stable block, causing its destruction. This setup could be unique, or stable to a range of perturbations or even large changes, and it would still make sense to use the cause/effect concept.
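
A minimal sketch of that setup (the coordinates and step count are my own illustrative choices): a block on its own is a still life; add an incoming glider and the state changes, so "the glider caused the change" is a useful model relative to this setup.

```python
from collections import Counter

# Conway's Game of Life on an unbounded grid of live-cell coordinates.
def step(cells: set) -> set:
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

block = {(10, 10), (11, 10), (10, 11), (11, 11)}    # a still life
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # drifts toward the block

alone, together = block, block | glider
for _ in range(80):
    alone, together = step(alone), step(together)

print(alone == block)  # True: without the glider, the block just sits there
print(len(together))   # the collision debris, whatever it turned out to be
```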

The counterfactuals in all those cases would be in the way we set up a particular instance of the universe: the laws and the initial conditions. They are counterfactual because in our world we only have the one run, and all others are imagined, not "real". However, if one can set up a model of our world where a certain (undetectable) variation leads to a stable outcome, then those variations would be counterfactuals. The condition that the variations are undetectable given the available resolution is essential, otherwise it would not look like the same world to us. I had a post about that, too.

An example of this "low-res" view producing an apparent counterfactual is the classic

If Lee Harvey Oswald hadn't shot John F. Kennedy, someone else would have

If you can set up a simulation with varying initial conditions that includes, as Eliezer suggests, a conspiracy to kill JFK, but varies in whether Oswald was a good/available tool for it, then, presumably, in many of those runs JFK would have been shot within a time frame not too different from our particular realization. In some others JFK would have been killed, but poisoned or stabbed rather than shot, so Lee Harvey Oswald would not be the butterfly you are describing. In the models where there is no conspiracy, Oswald would have been the butterfly, again, as Eliezer describes. There are many other possible butterflies and non-butterflies in this setup, of course, from gusts of wind at a wrong time to someone discovering the conspiracy early.
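
A toy ensemble of such runs can be sketched in a few lines. Every mechanism and probability below is invented purely for illustration, not a historical claim:

```python
import random

# One simulated "run of the universe": does the assassination happen?
def assassination_happens(conspiracy: bool, oswald_available: bool,
                          rng: random.Random) -> bool:
    if oswald_available:
        return rng.random() < 0.9   # the mechanism of our particular run
    if conspiracy:
        return rng.random() < 0.8   # a backup tool steps in
    return rng.random() < 0.01      # no conspiracy, no Oswald: a fluke only

def outcome_rate(conspiracy: bool, oswald_available: bool,
                 runs: int = 100_000) -> float:
    rng = random.Random(0)
    return sum(assassination_happens(conspiracy, oswald_available, rng)
               for _ in range(runs)) / runs

# With a conspiracy, deleting Oswald barely moves the outcome (he is not
# the butterfly); without one, deleting him changes everything (he is).
print(outcome_rate(True, True), outcome_rate(True, False))    # ~0.9 vs ~0.8
print(outcome_rate(False, True), outcome_rate(False, False))  # ~0.9 vs ~0.01
```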

Note that some of those imagined worlds are probably impossible physically, as in, when extrapolated into the past they would have caused macroscopic effects that are incompatible with observations. For example, Oswald missing the mark with his shot may have resulted from the rifle being of poor quality which would have been incompatible with the known quality control procedures when it was made. 

Hope some of this makes sense.

Comment by shminux on Does butterfly affect? · 2021-05-15T03:07:25.064Z · LW · GW

The counterfactual approach is indeed very popular, despite its obvious limitations. You can see a number of posts from Chris Leung here on the topic, for example. As for comparing performance of different agents, I wrote a post about it some years ago, not sure if that is what you meant, or if it even makes sense to you. 

Comment by shminux on Does butterfly affect? · 2021-05-14T07:37:55.750Z · LW · GW

Would the hurricane have happened if not for the butterfly?

You are talking about counterfactuals, and those are a difficult problem to solve when there is only one deterministic or probabilistic world and nothing else. A better question is "Does a model where 'a hurricane would not have happened as it had, if not for the butterfly' make useful and accurate predictions about the parts of the world we have not yet observed?" If so, then it's useful to talk about a butterfly causing a hurricane; if not, then it's a bad model. This question is answerable, and as someone with expertise in "complexity science," whatever that might be, you are probably well qualified to answer it. It seems that your answer is "the impact of butterfly's wings will typically not rise above the persistent stochastic inputs affecting the Earth," meaning that the model where a butterfly caused the hurricane is not a useful one. In that clearly defined sense, you have answered the question you posed.
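
That "doesn't rise above the persistent stochastic inputs" claim can be sketched with a toy chaotic map: evolve it under small ever-present noise, once with a tiny extra "wing flap" perturbation and once with a fresh draw of the noise, and compare the two divergences. All the numbers are illustrative assumptions:

```python
import random

def trajectory(x0: float, steps: int, noise: float, seed: int) -> float:
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x = 3.99 * x * (1.0 - x)               # chaotic logistic map
        x = (x + rng.gauss(0.0, noise)) % 1.0  # persistent stochastic input
    return x

base    = trajectory(0.2,         100, 1e-6, seed=1)
flap    = trajectory(0.2 + 1e-12, 100, 1e-6, seed=1)  # butterfly, same noise
renoise = trajectory(0.2,         100, 1e-6, seed=2)  # no butterfly, new noise

# After enough steps both differences typically saturate to the same
# order-one size: the flap is indistinguishable from redrawing the noise.
print(abs(base - flap), abs(base - renoise))
```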

Comment by shminux on Where do LessWrong rationalists debate? · 2021-04-29T23:02:12.878Z · LW · GW

Discord works just fine for most cases, but the existing LW discord is all but dead (or was, last time I looked), since it's a separate entity and requires active competent admins and moderators. A button like "discuss this post live on Discord," possibly with a few latest comments visible, would likely make a difference by removing a (non-trivial) inconvenience.

Comment by shminux on "Who I am" is an axiom. · 2021-04-26T04:31:47.340Z · LW · GW

The idea of an "I" is an output of your brain, which has a model of the outside world, of others like you, and of you. In programming terms, the "programming language" of your mind has the reflection and introspection capabilities that provide some limited access to "this" or "self". There is nothing mysterious about it, and there is no need to axiomatize it. 

import human.lang.reflect.*;
// me.getClass().getDeclaredMethods()
Comment by shminux on Let's Rename Ourselves The "Metacognitive Movement" · 2021-04-24T01:53:32.461Z · LW · GW

Metacognition is a fine name for a specific cognitive science, but, unlike "rationality" or "rationalist", it has none of the call for action, just idle musings of how brains might work. "Metacognition is Systematized Winning" doesn't pack any punch. 

Comment by shminux on The secret of Wikipedia's success · 2021-04-15T05:25:04.194Z · LW · GW

Huh, what an interesting observation! I wonder if there are more examples of something like that.

Comment by shminux on What if AGI is near? · 2021-04-14T01:54:52.929Z · LW · GW

Consider that "if AGI is very near" probably means that it's already happened (or, equivalently, that we are past the point of no return) on Copernican grounds, since the odds of living in a very special moment where the timelines are short but it's not too late yet are very low. Not seeing an obvious AGI around likely means that either it's not very near, or that the take-off is slow, not fast. 

Ironically, it's not Roko's basilisk that is an infohazard, it's the "AGI go foom!" idea that is.

Comment by shminux on A New Center? [Politics] [Wishful Thinking] · 2021-04-12T20:37:10.299Z · LW · GW

It's tempting to try to reinvent the wheel, but this dynamic is by no means new. There have been viable political alternatives popping up in the middle in various places around the world, if not as many as those emerging from the right or from the left. One can argue that the US is unique in many ways, and it sure is, but the degree of uniqueness only becomes clear once you identify the common trends.

From what I understand, a centrist party usually emerges when one of the mainstream parties is not radical enough for a large chunk of its base, splitting the party in two: one more extreme, one more centrist. It happened in Canada, Germany, Israel, Italy and many other places. The odds of creating a centrist political force from scratch are not good, and doing so requires much shallower equilibria than those in most de facto two-party systems. For example, the Israel Resilience Party was created in 2018 against a multi-party background and many years of political gridlock.

Comment by shminux on Identifiability Problem for Superrational Decision Theories · 2021-04-09T23:43:54.953Z · LW · GW

Despite this, superrational reasoning gives us different results.

What is the "superrational" reasoning that gives different results?

Comment by shminux on How should I behave ≥14 days after my first mRNA vaccine dose but before my second dose? · 2021-04-08T06:41:11.400Z · LW · GW

One reference point: in Canada, the guidance is that the first dose provides enough protection to delay the second dose by up to 4 months.

Comment by shminux on Learning Russian Roulette · 2021-04-02T20:27:06.408Z · LW · GW

If you appear to be an outlier, it's worth investigating precisely why, instead of stopping at one observation and trying to make sense of it using, essentially, an outside view. There are generally higher-probability models in the inside view, such as "I have hallucinated other people dying/playing" or "I always end up with an empty barrel".

Comment by shminux on Why 1-boxing doesn't imply backwards causation · 2021-03-25T07:40:26.347Z · LW · GW

Hmm, it sort of makes sense, but possible_world_augmented() returns not just a set of worlds, but a set of pairs, (world, probability). For example, for the transparent Newcomb's problem, possible_world_augmented() returns {(<1-box, million>, 1), (<2-box, thousand>, 0)}. And that's enough to calculate the EV and conclude which "decision" (i.e. possible_world_augmented() given decision X) results in the max EV. Come to think of it, if you tabulate this, you end up with what I talked about in that post.
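
Tabulated, that calculation might look like this (a sketch; the function names and payoff numbers are my own, following the transparent Newcomb's example above):

```python
# Each "decision" maps to a set of (world, probability) pairs; the EV of a
# decision is the probability-weighted payoff over its augmented worlds.
def possible_worlds_augmented(decision: str):
    if decision == "1-box":
        return [(("1-box", 1_000_000), 1.0), (("2-box", 1_000), 0.0)]
    return [(("1-box", 1_000_000), 0.0), (("2-box", 1_000), 1.0)]

def expected_value(decision: str) -> float:
    return sum(p * payoff
               for ((_, payoff), p) in possible_worlds_augmented(decision))

best = max(("1-box", "2-box"), key=expected_value)
print(best, expected_value(best))  # 1-box 1000000.0
```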

Comment by shminux on Why 1-boxing doesn't imply backwards causation · 2021-03-25T04:00:54.294Z · LW · GW

I'm confused... What you call the "Pure Reality" view seems to work just fine, no? (I think you had a different name for it, pure counterfactuals or something.) What do you need counterfactuals/Augmented Reality for? Presumably making decisions thanks to "having a choice" in this framework, right? In the pure reality framework, in the "student and the test" example, one would dispassionately calculate what kind of student algorithm passes the test, without talking about making a decision to study or not to study. Same with Newcomb's, of course: one just looks at what kind of agents end up with a given payoff. So... why pick the AR view over the PR view; what's the benefit?

Comment by shminux on Preferences and biases, the information argument · 2021-03-23T19:13:35.940Z · LW · GW

"look through this collection of psychology research and take it as roughly true"

Well, you are an intelligence that "is well-grounded and understands what human concepts mean", do you think that the above approach would lead you to distill the right assumptions?

Comment by shminux on AstraZeneca vaccine shows no protection against Covid-19 variant from Africa · 2021-03-18T23:05:38.495Z · LW · GW

A total of 39 cases of the SA variant seems too underpowered to support any conclusions.

Comment by shminux on The best things are often free or cheap · 2021-03-18T04:42:00.589Z · LW · GW

Note that everything on your list has zero or near zero replication cost. A lot of essentials, tangible and intangible, are not like that. Food, companionship, living accommodations, etc. I don't know how far one can get on easy-clone stuff.

Comment by shminux on Nitric Oxide Spray... a cure for COVID19?? · 2021-03-17T19:35:51.809Z · LW · GW

The pharma research company appears to be very real: https://sanotize.com/covid-19/

Comment by shminux on Anyone been through IFS or coherence therapy? · 2021-03-15T19:49:07.911Z · LW · GW

An important consideration is what you are dealing with. Severe ACE? Something else?

Comment by shminux on Deflationism isn't the solution to philosophy's woes · 2021-03-10T21:12:04.649Z · LW · GW

Well, we may have had this argument before, likely more than once, so probably no point rehashing it. I appreciate you expressing your views succinctly though. 

Comment by shminux on Deflationism isn't the solution to philosophy's woes · 2021-03-10T20:13:42.521Z · LW · GW

Well, it looks like you declare "outperforming" by your own metric, not by anything generally accepted.

 (Also, I take issue with the last two.  The philosophical ideas about time are generally not about time, but about "time", i.e. about how humans perceive and understand the passage of time. So distinguishing between the A-theory and the B-theory is about humans, not about time, unlike, say, Special and General Relativity, which provide a useful model of time and spacetime.

A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.)

Also, most people here, while paying lip service to non-libertarian views of free will, sneak it back in anyway, as evidenced by the reliance on "free choice" in nearly all decision theory discussions.

Comment by shminux on Deflationism isn't the solution to philosophy's woes · 2021-03-10T19:39:59.471Z · LW · GW

This sounds like a very Eliezer-like approach: "I don't have to convince you, a professional who spent decades learning and researching the subject matter; here is the truth, throw away your old culture and learn from me, even though I never bothered to learn what you learned!" While there are certainly plenty of cases where this is valid, in any kind of evidence-based science the odds of it succeeding are slim to none (the infamous QM sequence is one example of a failed foray like that. Well, maybe not failed, just uninteresting). I want to agree with you on the philosophy of religion, of course, because, well, if you start with a failed premise, you can spend your whole life analyzing noise, like the writers of the Talmud did. But an outside view says that the Chesterton fence of an existing academic culture is there for a reason, including the philosophical traditions dating back millennia.

An SSC-like approach seems much more reliable for advancing a particular field. Scott spends an inordinate amount of time understanding the existing fences, how they came to be and why they are still there, before advancing an argument for why it might be a good idea to move them, and how to test whether the move is good. I think that leads to him being taken much more seriously by the professionals in the areas he writes about.

I gather that both approaches have merit, as there is generally no arguing with someone who is in a "diseased discipline", but one has to be very careful affixing that label to a whole field of research, even if it seems obvious to an outsider. Or to an insider, if you follow the debates about whether String Theory is a diseased field in physics.

Still, except for the super-geniuses among us, it is much safer to understand the ins and outs before declaring that the giga-IQ-hours spent by humanity on a given topic are a waste or a dead end. The jury is still out on whether Eliezer and MIRI in general qualify.

Comment by shminux on Deflationism isn't the solution to philosophy's woes · 2021-03-10T04:07:28.594Z · LW · GW

These are some extraordinary claims. I wonder if there is a metric that mainstream analytical philosophers would agree to use to evaluate statements like 

LW outperform analytic philosophy

and 

LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.

Without agreed-upon evaluation criteria, this is just tooting one's own horn, wouldn't you agree?

Comment by shminux on What I'd change about different philosophy fields · 2021-03-08T19:07:15.138Z · LW · GW

You and I rarely agree on much, but this looks like a great post! It highlights what an outsider like me would find befuddling about philosophical discourse, and your prescriptions, usually the weakest part of any argument, actually make sense. Huh.

Comment by shminux on Announcement: Real-time discussions in a new Clubhouse community. · 2021-03-07T20:50:28.753Z · LW · GW

The link in the reply requires an iPhone app, no Android or desktop support. That seems a bit limiting.

Comment by shminux on I'm still mystified by the Born rule · 2021-03-04T04:19:30.869Z · LW · GW

There is one phenomenon conspicuously absent from your analysis: gravity. If you think that it's not important, and the nonlinearity of the projection postulate while preserving classical locality can be understood without it, consider this from Sean Carroll, an expert in both QFT and GR, and a hard-core Everettian:

https://twitter.com/seanmcarroll/status/1363611156493950979

Why is gravity important? It is:

  • nonlinear
  • intimately and mysteriously related to entropy, a property of macroscopic systems
  • magically important at exactly the scale where quantum effects are all but gone (~1 Planck mass)
  • related to holography, connecting local and non-local effects
  • implicated in various open issues in our current understanding of quantum mechanics (e.g. the black hole information paradox)

I am not saying that explaining the Born rule will have to wait until we have a workable theory of Quantum Gravity (and no, String Theory and LQG aren't one), mostly because I expect the whole idea of Quantum Gravity to be the wrong abstraction to use.

In other words, you are right to be mystified by the Born rule, if for the wrong reasons.

Comment by shminux on Are the Born probabilities really that mysterious? · 2021-03-02T03:59:23.911Z · LW · GW

The mysterious part is not the square norm, it's that the universe looks like it conspires to present apparently non-local phenomena like the projection postulate as fully local. You can handwave it with many worlds, but it does not dissolve the mystery.

Comment by shminux on Weighted Voting Delenda Est · 2021-03-01T22:12:54.426Z · LW · GW

I assume you have evidence for your conjecture that voting is a problem? If so, can you list a few high-quality posts by less-known users with strangely low vote totals?

Comment by shminux on Heuristic: Replace "No Evidence" with "No Reason" · 2021-02-15T22:12:54.009Z · LW · GW

I think it's a useful mental check of what you really mean. It can lead you astray (e.g. "there is no reason to suggest that vaccines cause autism" is not obviously false, not without proper research), but it certainly works in the cases you describe.

Comment by shminux on [deleted post] 2021-02-13T20:30:48.370Z

Downvoting on the general principle of not giving publicity to forgettable, shitty, superficial publications.

Comment by shminux on How Should We Respond to Cade Metz? · 2021-02-13T18:43:05.373Z · LW · GW

I don't think an answer is needed; the article is so bad that I'm surprised the NYT even published it. Best not to mention it again, lest it get more publicity than it deserves.