Posts

A basic probability question 2019-08-23T07:13:10.995Z · score: 11 (2 votes)
Inspection Paradox as a Driver of Group Separation 2019-08-17T21:47:35.812Z · score: 31 (13 votes)
Religion as Goodhart 2019-07-08T00:38:36.852Z · score: 21 (8 votes)
Does the Higgs-boson exist? 2019-05-23T01:53:21.580Z · score: 6 (9 votes)
A Numerical Model of View Clusters: Results 2019-04-14T04:21:00.947Z · score: 18 (6 votes)
Quantitative Philosophy: Why Simulate Ideas Numerically? 2019-04-14T03:53:11.926Z · score: 23 (12 votes)
Boeing 737 MAX MCAS as an agent corrigibility failure 2019-03-16T01:46:44.455Z · score: 50 (23 votes)
To understand, study edge cases 2019-03-02T21:18:41.198Z · score: 27 (11 votes)
How to notice being mind-hacked 2019-02-02T23:13:48.812Z · score: 16 (8 votes)
Electrons don’t think (or suffer) 2019-01-02T16:27:13.159Z · score: 5 (7 votes)
Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets 2018-12-16T06:37:13.623Z · score: 11 (3 votes)
Aligned AI, The Scientist 2018-11-12T06:36:30.972Z · score: 12 (3 votes)
Logical Counterfactuals are low-res 2018-10-15T03:36:32.380Z · score: 22 (8 votes)
Decisions are not about changing the world, they are about learning what world you live in 2018-07-28T08:41:26.465Z · score: 30 (17 votes)
Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. 2018-07-12T06:52:19.440Z · score: 24 (12 votes)
The Fermi Paradox: What did Sandberg, Drexler and Ord Really Dissolve? 2018-07-08T21:18:20.358Z · score: 47 (20 votes)
Wirehead your Chickens 2018-06-20T05:49:29.344Z · score: 72 (44 votes)
Order from Randomness: Ordering the Universe of Random Numbers 2018-06-19T05:37:42.404Z · score: 16 (5 votes)
Physics has laws, the Universe might not 2018-06-09T05:33:29.122Z · score: 28 (14 votes)
[LINK] The Bayesian Second Law of Thermodynamics 2015-08-12T16:52:48.556Z · score: 8 (9 votes)
Philosophy professors fail on basic philosophy problems 2015-07-15T18:41:06.473Z · score: 16 (21 votes)
Agency is bugs and uncertainty 2015-06-06T04:53:19.307Z · score: 14 (18 votes)
A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats 2015-04-18T23:46:49.750Z · score: 21 (22 votes)
[LINK] Scott Adam's "Rationality Engine". Part III: Assisted Dying 2015-04-02T16:55:29.684Z · score: 7 (8 votes)
In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him? 2015-02-27T20:57:19.777Z · score: 10 (15 votes)
We live in an unbreakable simulation: a mathematical proof. 2015-02-09T04:01:48.531Z · score: -31 (42 votes)
Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later. 2014-08-28T23:37:06.430Z · score: 19 (19 votes)
[LINK] Could a Quantum Computer Have Subjective Experience? 2014-08-26T18:55:43.420Z · score: 16 (17 votes)
[LINK] Physicist Carlo Rovelli on Modern Physics Research 2014-08-22T21:46:01.254Z · score: 6 (11 votes)
[LINK] "Harry Potter And The Cryptocurrency of Stars" 2014-08-05T20:57:27.644Z · score: 2 (4 votes)
[LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient 2014-07-04T20:00:48.176Z · score: 37 (37 votes)
[LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy 2014-06-23T19:09:54.047Z · score: 10 (12 votes)
[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality 2014-06-19T20:17:14.063Z · score: 20 (20 votes)
List a few posts in Main and/or Discussion which actually made you change your mind 2014-06-13T02:42:59.433Z · score: 16 (16 votes)
Mathematics as a lossy compression algorithm gone wild 2014-06-06T23:53:46.887Z · score: 39 (41 votes)
Reflective Mini-Tasking against Procrastination 2014-06-06T00:20:30.692Z · score: 17 (17 votes)
[LINK] No Boltzmann Brains in an Empty Expanding Universe 2014-05-08T00:37:38.525Z · score: 9 (11 votes)
[LINK] Sean Carroll Against Afterlife 2014-05-07T21:47:37.752Z · score: 5 (9 votes)
[LINK] Sean Carroll's reflections on his debate with WL Craig on "God and Cosmology" 2014-02-25T00:56:34.368Z · score: 8 (8 votes)
Are you a virtue ethicist at heart? 2014-01-27T22:20:25.189Z · score: 11 (13 votes)
LINK: AI Researcher Yann LeCun on AI function 2013-12-11T00:29:52.608Z · score: 2 (12 votes)
As an upload, would you join the society of full telepaths/empaths? 2013-10-15T20:59:30.879Z · score: 7 (17 votes)
[LINK] Larry = Harry sans magic? Google vs. Death 2013-09-18T16:49:17.876Z · score: 25 (31 votes)
[Link] AI advances: computers can be almost as funny as people 2013-08-02T18:41:08.410Z · score: 7 (9 votes)
How would not having free will feel to you? 2013-06-20T20:51:33.213Z · score: 6 (14 votes)
Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" 2013-06-17T05:11:29.160Z · score: 18 (22 votes)
Applied art of rationality: Richard Feynman steelmanning his mother's concerns 2013-06-04T17:31:24.675Z · score: 8 (17 votes)
[LINK] SMBC on human and alien values 2013-05-29T15:14:45.362Z · score: 3 (10 votes)
[LINK]s: Who says Watson is only a narrow AI? 2013-05-21T18:04:12.240Z · score: 4 (11 votes)
LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!' 2013-05-17T19:45:45.739Z · score: 7 (16 votes)

Comments

Comment by shminux on A simple sketch of how realism became unpopular · 2019-10-13T02:09:18.724Z · score: 2 (3 votes) · LW · GW
I still intermittently run into people who claim that there's no such thing as reality or truth;

This sounds... strawmanny. "Reality and truth are not always the most useful concepts and it pays to think in other ways at times" would be a somewhat more charitable representation of non-realist ideas.


Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-12T21:19:08.984Z · score: 11 (2 votes) · LW · GW
If I understood correctly, your objection to Three Worlds Collide is (mostly?) descriptive rather than prescriptive: you think the story is unrealistic, rather than dispute some normative position that you believe it defends.

I am not a moral realist, so I cannot dispute someone else's morals, even if I don't relate to them, as long as they leave me alone. So, yes, descriptive, and yes, I find the story a great read, but that particular element, moral expansionism, does not match the implied cohesiveness of the multi-world human species.

Do you believe real world humans are "slow to act against the morals it finds abhorrent"?

Yes. Definitely.

how do you explain all (often extremely violent) conflicts over religion and political ideology over the course of human history?

Generally, economic or other interests in disguise, like distracting the populace from internal issues. You can read up on the reasons behind the Crusades, the Holocaust, etc. You can also notice that when morals do lead the way, extreme religious zealotry produces internal instability, like the fractures inside Christianity and Islam. So my model, the one you call "factually wrong", seems to fit the observations rather well, though I'm sure not perfectly.

Whatever explanation you provide to this survival, what prevents it from explaining the continued survival of the human species until the imaginary future in the story?

My point is that humans are behaviorally both much more and much less tolerant of the morals they find deviant than they profess to be. In the story I would have expected humans to express extreme indignation over the babyeaters' way of life, but to do nothing about it beyond condemnation.

Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-12T02:02:55.057Z · score: 2 (1 votes) · LW · GW

It's frustrating when an honest exchange fails to achieve any noticeable convergence... I might try once more, and if that fails, well, Aumann does not apply here anyhow.

My main point: "to survive, a species has to be slow to act against the morals it finds abhorrent". I am not sure if this is the disagreement; maybe you think it's not a valid implication (and by implication I mean the converse, "intolerant => stunted").

Comment by shminux on When is pair-programming superior to regular programming? · 2019-10-11T01:19:58.565Z · score: 2 (1 votes) · LW · GW

I had a pair programming experience at my first job back in the late 80s, before it was a thing, and my coworker and I clicked well, so it was fun while it lasted. Never had a chance to do it again, but miss it a lot. Wish I could work at a place where this is practiced.

Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-09T06:56:49.122Z · score: 4 (2 votes) · LW · GW
I still don't understand, is your claim descriptive or prescriptive?

Neither... Or maybe descriptive? I am simply stating the implication, not prescribing what to do.

I don't understand what you're saying here at all.

Yes, we do have plenty of laws, but no one goes out of their way to find and hunt down the violators. If anything, the more horrific something is, the more we try to pretend it does not exist. You can argue and point at law enforcement, whose job it is, but that doesn't change the simple fact that you can sleep soundly at night ignoring what is going on somewhere not far from you, let alone in the babyeaters' world.

"Universal we!right" is a contradiction in terms.

We may not have agreed on the meaning: I meant "human universal", not some species-independent morality.

in a given debate about ethics there might be hope that the participants can come to a consensus

I find it too optimistic a statement for a large "we". The best one can hope for is that logical people can agree with an implication like "given this set of values, this is the course of action someone holding these values ought to take to stay consistent", without necessarily agreeing with the values themselves. In that sense, again, it describes self-consistent behaviors without privileging a specific one.

In general, it feels like this comment thread has failed to get to the crux of the disagreement, and I am not sure if anything can be done about it, at least without using a more interactive medium.

Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-08T15:46:40.153Z · score: 4 (2 votes) · LW · GW

Re "tenability", today's SMBC captures it well: https://www.smbc-comics.com/comic/multiplanetary

If interpreted in the logical sense, I don't think your argument makes sense: it seems like trying to derive an "ought" from an "is".

Hmm, in my reply to OP I expressed what the moral of the story is for me, and in my reply to you I tried to justify it by appealing to the expected stability of the species as a whole. The "ought", if any, is purely utilitarian: to survive, a species has to be slow to act against the morals it finds abhorrent.

Also, the actual distance between those diverging morals matters, and baby eating surely seems like an extreme example.

Uh. If you live in a city, there is a 99% chance that there is a little girl within a mile of you being raped and tortured by her father or older brother daily for their own pleasure, yet no effort is made to find and save her. I don't find the babyeaters' morals all that divergent from human ones; at least the babyeaters had a justification for their actions, based on the need for the whole species to survive.

I don't claim that leaving the Baby-eaters alone is necessarily we!wrong, but it is not obvious to me that it is we!right

My point is that there is no universal we!right and we!wrong in the first place, yet the story was constructed on this premise, which led to the whole species being hoisted on its own petard.

it is supposed to be a "weird" culture by modern standards), much less an alien culture like the Super-Happies

Oh. It never struck me as weird, let alone alien. The babyeaters are basically Spartans and the super-happies are hedonists.

Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-07T15:45:21.045Z · score: 4 (5 votes) · LW · GW

The near-universal reaction of the crew to the baby-eaters customs is not just horror and disgust, but also the moral imperative to act to change them. It's as if there existed a species-wide objective "species!wrong", which is an untenable position, and, even less believably than that, as if there existed a "universal!meta-wrong" where anyone not adhering to your moral norms must be changed in some way to make it palatable (the super-happies are willing to go an extra mile to change themselves in their haste to fix things that are "wrong" with others).

This position is untenable because it would lead to constant internal infighting, as customs and morals naturally drift apart for a diverse enough society. Unless you impose a central moral authority and ruthlessly weed out all deviants.

I am not sure how much of the anti-prime-directive morality is endorsed by Eliezer personally, as opposed to merely being described by Eliezer the fiction writer.

Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-07T01:17:37.672Z · score: 16 (5 votes) · LW · GW

I liked the story, but could never relate to its Eliezer-imposed "universal morality" of forcing others to conform to your own norms. To me the message of the story is "expansive metaethics leads to trouble, stick to your own and let others live the way they are used to, while being open to learning about each other's ways non-judgmentally".

Comment by shminux on Introduction to Introduction to Category Theory · 2019-10-06T18:45:33.198Z · score: 2 (1 votes) · LW · GW

I tried to learn the basics of category theory some years ago, already having some background in algebraic topology, mathematical physics and programming. And, presumably, in rationality. I got glimpses of how interesting it is and how it could be useful, but was never quite able to make use of it. Very curious whether your series of posts can change that for me. Keep going!

Comment by shminux on What empirical work has been done that bears on the 'freebit picture' of free will? · 2019-10-05T04:20:00.299Z · score: 2 (1 votes) · LW · GW

This was a very speculative, if exciting, essay, and I don't believe any serious research has been done in this area, in part because it is unclear where one would start without a better understanding of the measurement problem. Certainly an online search comes up empty. I think the main value of this work is that a computer scientist and part-physicist (though Scott Aaronson would probably deny being the latter) can make a non-trivial contribution to the age-old philosophical questions of free will and consciousness.

Comment by shminux on Follow-Up to Petrov Day, 2019 · 2019-09-28T06:49:23.352Z · score: 4 (2 votes) · LW · GW

On general principles, given the Lizardman's constant of 4-5%, one would expect at least several people to nuke the site. Strange that it didn't happen.
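The arithmetic behind "at least several people" can be sketched in a few lines (the 125 code-holders figure is my assumption about the number of users who received launch codes; the 4% rate is the low end of the Lizardman's constant):

```python
# Back-of-the-envelope: if each of n code-holders independently "nukes"
# with probability p (the Lizardman's constant), how many launches do
# we expect, and how likely is zero? n = 125 is an assumed head count.
from math import comb

def binomial_pmf(k, n, p):
    """P(exactly k of n independent trials succeed)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 125, 0.04
expected = n * p                   # mean number of button-pressers
p_zero = binomial_pmf(0, n, p)     # chance that nobody presses at all

print(f"expected launches: {expected:.1f}")   # 5.0
print(f"P(no launch):      {p_zero:.4f}")     # ~0.0061
```

Under the independence assumption, the quiet outcome is roughly a 1-in-160 event, which is what makes it strange.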

Comment by shminux on Happy Petrov Day! · 2019-09-27T05:10:46.448Z · score: 13 (4 votes) · LW · GW

That was an interesting exposition. Rare people like Petrov save millions of lives from extinction. There are probably dozens more people like him all over the world, most never getting any recognition or even acknowledgment, and likely persecuted for going against authority and regulations.

Comment by shminux on On Becoming Clueless · 2019-09-24T05:13:46.683Z · score: 6 (4 votes) · LW · GW

Increasing complexity also increases fragility. #ProgrammingTruths

Comment by shminux on Therapy vs emotional support · 2019-09-24T02:30:11.895Z · score: 2 (1 votes) · LW · GW
Good therapy and a good emotional support network are not competitors, but rather two great tastes that taste great together.

The hard part is finding friends who will give you emotional support by actively listening to you, without offering unsolicited advice or trying to solve your problems. An even harder part is being a friend like that to others. I used to volunteer on an emotional support website for several hours a day, for a couple of years, doing just that, and it's amazing how much we all crave an empathetic listener while rarely being one.

Comment by shminux on Towards an empirical investigation of inner alignment · 2019-09-24T02:21:06.245Z · score: 3 (1 votes) · LW · GW

This "proxy fireworks" effect, where expanding the system causes various proxies to visibly split and fly in different directions, is definitely a good intuitive way to understand some of the AI alignment issues.

What I am wondering is whether it is possible to specify anything but a proxy, even in principle. After all, humans as general intelligences fall into the same trap of optimizing proxies (instrumental goals) instead of terminal goals all the time. We are also known for our piss-poor corrigibility properties. Certainly the maze example or the clean-room example are simple enough, but once we ramp up the complexity, odds are that actual optimization proxies start showing up. And the goal is for an AI to be better at discerning and avoiding "wrong" proxies than any human, even though humans are apt to deny that they are stuck optimizing proxies despite the evidence to the contrary. The reaction (except in a PM from Michael Vassar) to an old post of mine is a pretty typical example of that. So my expectation is that we would resist even considering that we are optimizing a proxy when trying to align a corrigible AI.
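A toy sketch of the "proxy fireworks" (the utility and proxy functions below are made up for illustration): a proxy that matches the true objective on a narrow search space splits away from it once the space expands.

```python
# Toy Goodhart demo: the true goal has diminishing returns and peaks
# at x = 5; the proxy just says "more x is better". On a narrow search
# space their optima agree; on an expanded space, optimizing the proxy
# flies off and lands where the true utility is strongly negative.

def true_utility(x):
    """The actual goal: diminishing returns, peaks at x = 5."""
    return x - x**2 / 10

def proxy(x):
    """A measurable stand-in that tracks the goal well for small x."""
    return x

def argmax(f, xs):
    return max(xs, key=f)

narrow = [i / 10 for i in range(0, 21)]    # search space: x in [0, 2]
wide = [i / 10 for i in range(0, 201)]     # expanded to: x in [0, 20]

# On the narrow space the proxy and the true goal agree...
print(argmax(true_utility, narrow), argmax(proxy, narrow))   # 2.0 2.0
# ...but on the wide space they split and fly apart:
print(argmax(true_utility, wide), argmax(proxy, wide))       # 5.0 20.0
print(true_utility(argmax(proxy, wide)))                     # -20.0
```

The point of the sketch: the divergence only becomes visible when the system's reach expands, which is exactly when it matters most.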

Comment by shminux on What are the studies and literature on the traditional medicine theory of humorism? · 2019-09-19T01:03:04.447Z · score: 3 (2 votes) · LW · GW

Not sure about studies; it seems more like a useful cultural thing, one of those traditions whose internal justifications have nothing to do with the reason they are useful:

https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/

In this case it is probably about having a balanced diet with enough calories, vitamins, minerals, etc. It is probably far from optimal, but it is good-enough simple advice that an average human can follow.

Comment by shminux on Realism and Rationality · 2019-09-16T06:25:15.335Z · score: -10 (11 votes) · LW · GW

I was tempted to downvote your post, but refrained, seeing how much effort you put into it. Sadly, it seems to miss the point of non-realism entirely, at least as I understand it. I am not a realist, and have been quite vocal about my views here. Admittedly, they are rather more radical than those of many here, mostly out of necessity: once you become skeptical about one realist position, then to be consistent you have to keep decompartmentalizing until the notions of reality, truth and existence become nothing more than useful models. This obviously applies to normative claims as well, and so cognitivism is not wrong, but meaningless.

if anti-realism is true, then it can’t also be true that we should believe that anti-realism is true

Consider thinking in terms of useful instead of true; it will magically remove all these internal contradictions you are struggling with. Sometimes it's useful to follow the realist approach, and sometimes it doesn't work, so you do or say something that an anti-realist would endorse. No need to be dogmatic about it.

Comment by shminux on Non-anthropically, what makes us think human-level intelligence is possible? · 2019-09-16T04:14:28.334Z · score: 1 (4 votes) · LW · GW

Without a better description of the "you" in this setup, I doubt one can fruitfully answer this question. In general, however, my preferred resolution of the Fermi paradox is that there is no intelligent life out there because there isn't any even here, on Earth: the notion of life, abstracted away from "self-reproducing proteins", loses its coherence. But, as a fellow chemical reaction, I am looking forward to other views.

Comment by shminux on How do I reach a conclusion on how many eggs per week are healthy? · 2019-09-16T01:19:41.897Z · score: 3 (2 votes) · LW · GW

Why are you worried about egg consumption specifically?

Comment by shminux on Devil's Dialectic · 2019-09-15T02:00:25.254Z · score: 2 (1 votes) · LW · GW

I would like to see some concrete examples of this DDDiscourse. And with fewer Zs :)

Comment by shminux on The Power to Understand "God" · 2019-09-14T04:13:11.304Z · score: 3 (2 votes) · LW · GW

Great! Well done! Noticing your own emotions is a great step most aspiring rationalists lack.

Comment by shminux on The Power to Understand "God" · 2019-09-14T01:05:29.512Z · score: 4 (5 votes) · LW · GW
I don't even know where to begin imagining what a lack of objective reality looks like

Well, now you have stumbled upon another standard fallacy: the argument from failure of imagination. Looking up various non-realist epistemologies could be a good start.

I think I'm doing better and being more rational than most God-believers. That's why I consider myself a pretty skilled rationalist!

Uh. Depends on how you define being rational. If you follow Eliezer and define it as winning, then there are many believers who are way ahead of you.

Comment by shminux on The Power to Understand "God" · 2019-09-14T01:00:51.347Z · score: 2 (1 votes) · LW · GW

I don't want to get into this discussion now, I've said enough about my views on the topic in other threads. Certainly "the heavily experimentally demonstrated hypothesis that the universe runs on math" is a vague enough statement to not even be worth challenging, too much wiggle room.

Comment by shminux on The Power to Understand "God" · 2019-09-13T15:28:31.654Z · score: 5 (8 votes) · LW · GW

I meant that a believer in God, and in the supernatural in general, is an easy target for a non-believer armed with the standard arguments of atheism.

LessWrong has the effect of gradually making people lose belief in God, and move beyond the whole frame of arguing about God to all kinds of interesting new framings and new arguments (e.g. simulation universes, decision theories, and AI alignment).

Yes and no. That's how I moved from being an atheist to being an agnostic, given the options above. There are just too many "rational" possibilities instrumentally indistinguishable from God.

I like to think that my deep beliefs all have specific referents to not be demolishable, so it's hard for me to know where to start looking for one that doesn't. Feel free to propose ideas. But if I don't personally struggle with the weakness that I'm helping others overcome, that seems ok too.

I call it the folly of a bright dilettante. You are human, with all the human failings, which include deeply held mind projection fallacies. A deeply held belief feels like an unquestionable truth from the inside, so much so that we are unlikely to even notice that it's just a belief, and we defend it against anyone who questions it.

If you want an example: I've pointed out multiple times that privileging the model of objective reality (the map/territory distinction) over other models is one of those ubiquitous beliefs. Now that you have read this sentence, pause for a moment and notice your emotions about it. Really, take a few seconds. List them. Now compare them with the emotions a devout person would feel when told that God is just a belief. If you are honest with yourself, you are likely to admit that there is little difference. Actually, a much likelier outcome is skipping the noticing entirely and either dismissing the uncomfortable question as stupid/naive/unenlightened, or rushing to come up with arguments defending your belief.

So, if you have never demolished a deeply held belief of your own, and gone through the emotional anguish of reframing your views unflinchingly, you are not qualified to advise others on how to do it.

Comment by shminux on The Power to Understand "God" · 2019-09-13T03:41:25.061Z · score: 12 (6 votes) · LW · GW

A believer in God is an easy target. Can you find a deep belief in something that you are holding and go through the same steps you outlined above for Stephanie?

Comment by shminux on Looking for answers about quantum immortality. · 2019-09-09T14:43:24.828Z · score: 2 (1 votes) · LW · GW

I do have a PhD in Physics, classical General Relativity specifically. But you wanted someone who adheres to MWI, and that is not me.

Some thoughts from Sean Carroll on the topic of Quantum Immortality:

https://www.reddit.com/r/seancarroll/comments/9drd25/quantum_immortality/e5l663t/

And this one from Scott Aaronson:

https://www.scottaaronson.com/blog/?p=2643#comment-1001030

Celebrity or not, both are quite likely to reply to a polite yet anxious email, since they can actually relate to your worries, if maybe not on the same topic.


Comment by shminux on Looking for answers about quantum immortality. · 2019-09-09T04:54:22.468Z · score: 2 (3 votes) · LW · GW

You've been basilisked. There is no empirical evidence for MWI, but a number of physicists do believe it may be something related to reality, with some heavy modifications, since, as stated, it contradicts General Relativity. Sean Carroll, an expert in both Quantum Mechanics and General Relativity, is one of them; consider reading his blog. His latest article, about the current (pitiful) state of fundamental research in Quantum Mechanics, can be found in the New York Times. His book on the topic is coming out in a couple of days, and it is guaranteed to be a highly entertaining and insightful read, which might also alleviate some of your worries.


Comment by shminux on Rationality Exercises Prize of September 2019 ($1,000) · 2019-09-08T00:05:18.209Z · score: 4 (2 votes) · LW · GW

Not trying to win anything, but maybe my old post can be of some interest: https://www.lesswrong.com/posts/PDFJPxPope2aDtmpQ/a-simple-exercise-in-rationality-rephrase-an-objective


Comment by shminux on Living the Berkeley idealism · 2019-09-05T02:08:15.812Z · score: 2 (1 votes) · LW · GW
Berkeley thought that anything exists only because God is perceiving it, and if God stops perceiving it, that thing disappears.

That sounds remarkably perceptive, for some reasonable definition of God. In quantum mechanics, or rather in the classical measurements of it, we can only observe what is entangled with us. If somehow a state got disentangled from us, we'd stop perceiving it. If you subscribe to the MWI, you are living alongside multiple copies of yourself which, for you, don't even exist, because you can never perceive them or even infer much about them. (There is a wrinkle there: gravity doesn't fit into the picture, but then it doesn't fit into any quantum picture.)

As for the appropriate media, basic PDF and paper copies are indeed the safest bets. Pure HTML5 might also survive, even if JS goes out of fashion. Hosting is always an issue, though. Then again, most published works, especially theses, do not deserve a ten-year lifespan, though yours might be different.

Comment by shminux on [Hammertime Final Exam] Quantum Walk, Oracles and Sunk Meaning · 2019-09-04T01:34:24.584Z · score: 3 (2 votes) · LW · GW

Trying to understand your points... Possibly incorrectly.

Technique: Quantum Walk: set a stretch goal and work backwards from it.

Ask the Oracle: seems like you suggest working on deconfusion.

Bias: Sunk Meaning: Meaning is in the map.

Comment by shminux on Counterfactuals are an Answer, Not a Question · 2019-09-04T01:11:27.903Z · score: -4 (5 votes) · LW · GW

If you are trying to reach those who think that "If Oswald had not shot Kennedy, then someone else would have" is not a confused statement, good luck!


Comment by shminux on Arguing Absolute Velocities · 2019-09-01T20:31:21.834Z · score: 0 (5 votes) · LW · GW

Abandoning Good vs Bad is a useful step; I remember being there. It's the first step toward abandoning other concepts and paradigms that are not useful as absolutes, such as Better vs Worse, True vs False and Right vs Wrong. And eventually Exists vs Does Not Exist.

Comment by shminux on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-30T06:32:39.896Z · score: 1 (3 votes) · LW · GW

A better question might be: of the people who think (correctly) that they could contribute most helpfully to alignment research, and who also think it is the most important issue they could be working on, yet who are doing something else, why are they doing something else? (And I doubt that, with these caveats, you will find 50 such people.)

Comment by shminux on Don't Pull a Broken Chain · 2019-08-30T04:07:11.771Z · score: 2 (1 votes) · LW · GW

Lost purposes are a common occurrence, definitely. And it looks like you provided that feedback yourself, without waiting in the open loop until your customer gave it to you. Your motivation was, or appeared to be, based on a broken causal chain or something, but the resulting action was something that ought to have been built in from the start: closing the loop early.

Comment by shminux on Don't Pull a Broken Chain · 2019-08-28T03:11:44.146Z · score: 4 (2 votes) · LW · GW

Seems like we speak different languages, or come from different epistemologies, since we keep talking past each other. The meta-model I was talking about is a rather universal one: close the loop whenever possible and don't expect much from an open loop. I don't understand the chain-1 to chain-2 argument. Then again, I find the whole idea of causal chains underwhelming; maybe they have something to show in terms of solving problems that are intractable otherwise, but if so, I am not aware of it.

Comment by shminux on Don't Pull a Broken Chain · 2019-08-28T02:01:33.246Z · score: 12 (6 votes) · LW · GW

Yes, "don't pull a broken chain." But how do you know, or notice, that it is broken? In all of the examples you cited one thing was missing: the feedback loop, i.e. seeing incremental results of your actions and adjusting those actions accordingly. An open loop doesn't work well, or at least is much harder to make work. Noticing that you are in open-loop mode and looking for ways to close the loop can help a lot but is often overlooked; I think in ML parlance this is called iterative learning or something similar. And if you notice that you are open-looping and cannot find a way to close the loop, adjust your expectation of success down accordingly.
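The open-loop vs closed-loop point can be illustrated with a minimal toy simulation (the dynamics and numbers are invented for illustration): a plan that ignores an unmodeled disturbance accumulates error, while a controller that observes and corrects stays near the target.

```python
# Minimal open- vs closed-loop comparison. The "process" drifts by an
# unmodeled bias each step. The open-loop controller replays a fixed
# plan; the closed-loop controller observes the current state and
# corrects toward the target every step.

TARGET = 10.0
BIAS = 0.3          # unmodeled disturbance added each step
STEPS = 50

def open_loop():
    state = 0.0
    for _ in range(STEPS):
        state += TARGET / STEPS          # pre-planned increment
        state += BIAS                    # disturbance the plan ignores
    return state

def closed_loop(gain=0.5):
    state = 0.0
    for _ in range(STEPS):
        state += gain * (TARGET - state)  # correction using feedback
        state += BIAS
    return state

print(f"open-loop error:   {abs(open_loop() - TARGET):.2f}")    # 15.00
print(f"closed-loop error: {abs(closed_loop() - TARGET):.2f}")  # 0.60
```

The open-loop error grows with every step, while the closed-loop error settles at a small bias-dependent offset; that gap is the "adjust your expectation of success down" part.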

Comment by shminux on Reversible changes: consider a bucket of water · 2019-08-27T03:55:25.315Z · score: 4 (2 votes) · LW · GW

This is not just a robot thing; people face the same problem and use a set of heuristics to evaluate the effort/cost of good-enough state restoration. For example, there is a good chance that something precious would not be placed in something cheap-looking, left in someone's way and without supervision. And if it is, too bad for the careless person who did not put in the effort to protect the hard-to-restore state.

Comment by shminux on A basic probability question · 2019-08-24T02:14:18.231Z · score: 2 (1 votes) · LW · GW

OK, so the trouble with logical induction is that it assumes mathematical realism, where "the claim that the 87,653rd digit of π is a 7" is either true or false even when not yet evaluated by anyone, and the paper discusses a way to assign a reasonable probability to it (e.g. 1/10 in this case, if you know nothing about the digits of π a priori) using the trading market model. In that case the implication condition never holds (since the chance of making an error in calculating the 87,653rd digit of π is always larger than in calculating 1+1). So they are treating logical uncertainty as environmental, then. It makes sense if so.
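A quick illustration of why 1/10 is the natural ignorance prior for an unevaluated digit of π (the digit-extraction code below is my own sketch using Machin's formula, not anything from the logical induction paper): empirically, the decimal digits of π are close to equidistributed, so before computing, each value is about equally likely.

```python
# Compute decimal digits of pi with integer arithmetic (Machin's
# formula: pi/4 = 4*atan(1/5) - atan(1/239)), then check that each
# digit 0-9 shows up roughly 10% of the time -- the ignorance prior
# one would start from before actually doing the computation.
from collections import Counter

def arctan_inv(x, one):
    """one/x - one/(3 x^3) + one/(5 x^5) - ... (scaled integer atan(1/x))."""
    power = total = one // x
    x2, k, sign = x * x, 3, -1
    while power:
        power //= x2
        total += sign * (power // k)
        k += 2
        sign = -sign
    return total

def pi_digits(n):
    """First n decimal digits of pi as a string: '314159...'."""
    one = 10 ** (n + 10)                     # 10 guard digits
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi)[:n]

digits = pi_digits(1000)
freq = Counter(digits)
for d in "0123456789":
    print(d, freq[d] / len(digits))          # each close to 0.10
```

Equidistribution of π's digits is unproven but holds to high precision in every range checked, which is what makes 1/10 a reasonable market price before the digit is evaluated.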

Comment by shminux on Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours · 2019-08-18T05:03:16.930Z · score: 4 (2 votes) · LW · GW

I reflexively downvoted this, so I feel obliged to explain why. Mostly because it reads to me like content-free word salad, repeating buzzwords like Solomonoff induction, Kolmogorov complexity and Occam's razor, and it claims to disprove something it doesn't even clearly define. Not trying to impose my opinion on others here; I just figured I'd write it out, since being silently downvoted sucks, at least for me.

Comment by shminux on Distance Functions are Hard · 2019-08-17T02:23:09.058Z · score: 2 (1 votes) · LW · GW

I don't think I am following your argument. I am not sure what Pearl's causal networks are and how they help here, so maybe I need to read up on it.

Comment by shminux on Distance Functions are Hard · 2019-08-16T02:24:43.237Z · score: 2 (1 votes) · LW · GW

I am not sure if labels help here. I'm simply pointing out that logical counterfactuals applied to the "real Lincoln" lead to the sort of issues MIRI is facing right now when trying to make progress on theoretical AI alignment. The reference class approach removes the difficulties, but then it is hard to apply it to "mathematical facts", like what is the probability of the 100...0th digit of pi being 0, or, to quote the OP, "If the Modularity Theorem were false...", and the prevailing MIRI philosophy does not allow treating logical uncertainty as environmental.

Comment by shminux on Distance Functions are Hard · 2019-08-14T03:39:22.348Z · score: 1 (2 votes) · LW · GW

"If Lincoln were not assassinated, he would not have been impeached" is a probabilistic statement that is not at all about THE Lincoln. It's a reference class analysis of leaders who did not succumb to premature death and whose leadership, economy, etc. metrics were similar to Lincoln's. There is no "counterfactual" there in any interesting sense. It is not about the minute details of avoiding the assassination. If you state the apparent counterfactual more precisely, it would be something like

There is a 90% probability of a ruler with [list of characteristics matching Lincoln, according to some criteria] serving out his term.

So, there is no issue with "If 0=1..." here, unlike with the other one, "If the modularity theorem were false", which implies some changes in the very basics of mathematics, though one can also argue for the reference class approach there.

Comment by shminux on Three Stories for How AGI Comes Before FAI · 2019-08-12T04:54:00.373Z · score: -2 (6 votes) · LW · GW

From what I understand about humans, they are so self-contradictory and illogical that any AGI that actually tries to optimize for human values will necessarily end up unaligned, and that the best we can hope for is that whatever we end up creating will ignore us and will not need to disassemble the universe to reach whatever goals it might have.

Comment by shminux on Weak foundation of determinism analysis · 2019-08-07T06:48:31.797Z · score: 4 (5 votes) · LW · GW

"There's no free will," says the philosopher;

"To hang is most unjust."

"There is no free will," assents the officer;  

"We hang because we must."

-- Ambrose Bierce



Comment by shminux on How would a person go about starting a geoengineering startup? · 2019-08-06T08:13:03.234Z · score: 3 (2 votes) · LW · GW

Maybe consider starting something less ambitious first, but where you have a good handle on what you can achieve, a unique idea and an advantage over those who would want to compete against you and over political interests who would want to destroy you?


Comment by shminux on Diagnosis: Russell Aphasia · 2019-08-06T07:54:26.341Z · score: 3 (5 votes) · LW · GW
A culture that cannot tell the difference between "reporting" and "doxing" and merely considers it "doxing" when they do it, is a culture that cannot accurately talk about behavior anymore.

I have yet to see a culture that can do it. Any examples?


Comment by shminux on Zeno walks into a bar · 2019-08-04T08:05:02.632Z · score: 3 (2 votes) · LW · GW

Struggling to understand the point of this post.

Comment by shminux on [Site Update] Weekly/Monthly/Yearly on All Posts · 2019-08-02T06:49:37.939Z · score: 2 (1 votes) · LW · GW

These are not mutually exclusive, so why not make both the calendar and duration options available?

Comment by shminux on Will autonomous cars be more economical/efficient as shared urban transit than busses or trains, and by how much? What's some good research on this? · 2019-07-31T02:25:23.780Z · score: 8 (4 votes) · LW · GW

I remember thinking through the potential evolution of autonomous transportation* some 10 years ago, and, barring the protectionist forces winning out and enshrining the "right to drive" in law, like the "right to bear arms", it's pretty clear where transportation is going.

1. One- or two-person electric commute vehicles dominating city traffic, eventually leading to whole swaths of urban areas being closed to human drivers, who would be deemed unsafe. Those areas will then expand outwards and merge, eventually spreading into the suburbs, and at some point onto major highways, first with HOV lanes, then taking over the rest lane by lane. Owning a car will become very expensive, and a human-driven car prohibitively so.

2. The huge parking lots will disappear, since uber-like electric commuters will be in use much more often and can be stored efficiently in much smaller spaces during off-peak times.

3. Everything will be routinely recorded, whether outside of the vehicle or inside it, limiting the type of activities one can indulge in while getting to the destination. Vandalism will virtually disappear as well. Ride-sharing with complete strangers will be as safe as walking alongside them on a busy street somewhere in central London.

4. Once there are no more human drivers, in the autonomous-only areas the vehicles themselves will be able to communicate and coordinate, and soon will be required to do so, forming a driving grid. Any vehicle not complying with the grid inclusion rules will not be allowed in, or forced to stop and get towed outside. Yes, the grid will eventually take control from the single vehicles.

5. Once that happens, the traffic lights will be largely obsolete. There will be pedestrian crossings, with the Walk/Stop signs, but no usual traffic lights, since there will be no human drivers to look at them.

6. The current alternating pattern of driving through the intersection will change: without pedestrian crossings cars will simply zoom in all directions, their movement perfectly choreographed by the grid. With pedestrian crossings there will be breaks for humans to cross on foot in all directions at once. Odds are, many crossings will be replaced with walkways above or below ground.

7. Congestion will be greatly reduced due to coordination. Worst case, you'd have to wait to get your ride, as the grid will limit the number of vehicles to keep the system at peak efficiency. Traffic jams will be extremely rare, since most of their current causes will be eliminated: broken-down cars, accidents, high-volume traffic, power outages, road work (the grid will shape the traffic around any roadwork in progress).

8. The city architecture will change to accommodate the new transportation realities: there will be much less road space needed, so some of the wide busy streets will be repurposed for parks, living spaces, etc.

This is as much as I recall offhand, but there is definitely more.

Now, to answer your question, the costs will eventually go down by orders of magnitude compared to the existing means of transportation. Which does not mean that prices will go down nearly as much, as everything will be heavily taxed, like train and plane tickets now.

Edit: I expect this will happen first in places with a high penetration of autonomous vehicles. Places like, say, Oslo. Also in countries where the government can exert some pressure and ensure compliance and coordination, like, say, China and maybe Japan. The US will be one of the last, and most expensive, ones, as is customary with most technological innovations lately.

_________________________________________________________

* The language will evolve accordingly:

  • "self-driving car" will be a name for DIY driving, the opposite of what it is now.
  • "self-driving" will be reserved for antique car enthusiasts, who would tow their cars to a "driving range" and show off their skills in this ancient activity, sort of like horseback riding is now.


Comment by shminux on When Having Friends is More Alluring than Being Right (by Ferrett Steinmetz) · 2019-07-31T00:46:16.612Z · score: 16 (4 votes) · LW · GW

Instead of assuming, as the linked post does, that "I am right and those loonies are wrong", consider answering the question in the link, when applied to oneself:

“What if you got irrefutable proof that the Earth was round? You’d lose all your friends. Could you walk away from this culture you helped create?”

Say

“What if I got irrefutable proof that [my belief X] contradicts the evidence? I'd lose all my friends who believe X. Could I walk away from this culture I helped create?”

where X can be "left-wing political values" or "Bayesian rationality" or "freedom of choice" or... name any belief you hold dear and have invested a lot of effort building a group around.