Posts

A basic probability question 2019-08-23T07:13:10.995Z · score: 11 (2 votes)
Inspection Paradox as a Driver of Group Separation 2019-08-17T21:47:35.812Z · score: 31 (13 votes)
Religion as Goodhart 2019-07-08T00:38:36.852Z · score: 21 (8 votes)
Does the Higgs-boson exist? 2019-05-23T01:53:21.580Z · score: 6 (9 votes)
A Numerical Model of View Clusters: Results 2019-04-14T04:21:00.947Z · score: 18 (6 votes)
Quantitative Philosophy: Why Simulate Ideas Numerically? 2019-04-14T03:53:11.926Z · score: 23 (12 votes)
Boeing 737 MAX MCAS as an agent corrigibility failure 2019-03-16T01:46:44.455Z · score: 50 (23 votes)
To understand, study edge cases 2019-03-02T21:18:41.198Z · score: 27 (11 votes)
How to notice being mind-hacked 2019-02-02T23:13:48.812Z · score: 16 (8 votes)
Electrons don’t think (or suffer) 2019-01-02T16:27:13.159Z · score: 5 (7 votes)
Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets 2018-12-16T06:37:13.623Z · score: 11 (3 votes)
Aligned AI, The Scientist 2018-11-12T06:36:30.972Z · score: 12 (3 votes)
Logical Counterfactuals are low-res 2018-10-15T03:36:32.380Z · score: 22 (8 votes)
Decisions are not about changing the world, they are about learning what world you live in 2018-07-28T08:41:26.465Z · score: 30 (17 votes)
Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. 2018-07-12T06:52:19.440Z · score: 24 (12 votes)
The Fermi Paradox: What did Sandberg, Drexler and Ord Really Dissolve? 2018-07-08T21:18:20.358Z · score: 47 (20 votes)
Wirehead your Chickens 2018-06-20T05:49:29.344Z · score: 72 (44 votes)
Order from Randomness: Ordering the Universe of Random Numbers 2018-06-19T05:37:42.404Z · score: 16 (5 votes)
Physics has laws, the Universe might not 2018-06-09T05:33:29.122Z · score: 28 (14 votes)
[LINK] The Bayesian Second Law of Thermodynamics 2015-08-12T16:52:48.556Z · score: 8 (9 votes)
Philosophy professors fail on basic philosophy problems 2015-07-15T18:41:06.473Z · score: 16 (21 votes)
Agency is bugs and uncertainty 2015-06-06T04:53:19.307Z · score: 14 (18 votes)
A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats 2015-04-18T23:46:49.750Z · score: 21 (22 votes)
[LINK] Scott Adam's "Rationality Engine". Part III: Assisted Dying 2015-04-02T16:55:29.684Z · score: 7 (8 votes)
In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him? 2015-02-27T20:57:19.777Z · score: 10 (15 votes)
We live in an unbreakable simulation: a mathematical proof. 2015-02-09T04:01:48.531Z · score: -31 (42 votes)
Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later. 2014-08-28T23:37:06.430Z · score: 19 (19 votes)
[LINK] Could a Quantum Computer Have Subjective Experience? 2014-08-26T18:55:43.420Z · score: 16 (17 votes)
[LINK] Physicist Carlo Rovelli on Modern Physics Research 2014-08-22T21:46:01.254Z · score: 6 (11 votes)
[LINK] "Harry Potter And The Cryptocurrency of Stars" 2014-08-05T20:57:27.644Z · score: 2 (4 votes)
[LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient 2014-07-04T20:00:48.176Z · score: 37 (37 votes)
[LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy 2014-06-23T19:09:54.047Z · score: 10 (12 votes)
[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality 2014-06-19T20:17:14.063Z · score: 20 (20 votes)
List a few posts in Main and/or Discussion which actually made you change your mind 2014-06-13T02:42:59.433Z · score: 16 (16 votes)
Mathematics as a lossy compression algorithm gone wild 2014-06-06T23:53:46.887Z · score: 39 (41 votes)
Reflective Mini-Tasking against Procrastination 2014-06-06T00:20:30.692Z · score: 17 (17 votes)
[LINK] No Boltzmann Brains in an Empty Expanding Universe 2014-05-08T00:37:38.525Z · score: 9 (11 votes)
[LINK] Sean Carroll Against Afterlife 2014-05-07T21:47:37.752Z · score: 5 (9 votes)
[LINK] Sean Carrol's reflections on his debate with WL Craig on "God and Cosmology" 2014-02-25T00:56:34.368Z · score: 8 (8 votes)
Are you a virtue ethicist at heart? 2014-01-27T22:20:25.189Z · score: 11 (13 votes)
LINK: AI Researcher Yann LeCun on AI function 2013-12-11T00:29:52.608Z · score: 2 (12 votes)
As an upload, would you join the society of full telepaths/empaths? 2013-10-15T20:59:30.879Z · score: 7 (17 votes)
[LINK] Larry = Harry sans magic? Google vs. Death 2013-09-18T16:49:17.876Z · score: 25 (31 votes)
[Link] AI advances: computers can be almost as funny as people 2013-08-02T18:41:08.410Z · score: 7 (9 votes)
How would not having free will feel to you? 2013-06-20T20:51:33.213Z · score: 6 (14 votes)
Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" 2013-06-17T05:11:29.160Z · score: 18 (22 votes)
Applied art of rationality: Richard Feynman steelmanning his mother's concerns 2013-06-04T17:31:24.675Z · score: 8 (17 votes)
[LINK] SMBC on human and alien values 2013-05-29T15:14:45.362Z · score: 3 (10 votes)
[LINK]s: Who says Watson is only a narrow AI? 2013-05-21T18:04:12.240Z · score: 4 (11 votes)
LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!' 2013-05-17T19:45:45.739Z · score: 7 (16 votes)

Comments

Comment by shminux on What's an important (new) idea you haven't had time to argue for yet? · 2019-12-11T04:58:19.817Z · score: 5 (3 votes) · LW · GW

Traversable wormholes, were they to exist for any length of time, would act as electric and gravitational Faraday cages, i.e. attenuate non-normal electric and gravitational fields exponentially inside their throats, with a decay length set by the mouth size/throat circumference. Consequently, the electric/gravitational field around them is non-conservative. This follows straightforwardly from solving the Laplace equation, but, as far as I can find, is never discussed in the literature.
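A minimal sketch of that Laplace-equation argument, in my own reconstruction (modeling the throat as a cylinder of radius a with an idealized boundary condition at the wall, as in a waveguide below cutoff; the wormhole metric itself is not treated here):

```latex
% Static potential in a long cylindrical throat of radius a (illustrative model).
% Separating Laplace's equation \nabla^2\phi = 0 in cylindrical coordinates gives
\phi(r,\theta,z) \;=\; \sum_{m,n} c_{mn}\, J_m\!\Big(x_{mn}\,\frac{r}{a}\Big)\,
  e^{im\theta}\, e^{-x_{mn} z/a},
% where x_{mn} is the n-th zero of the Bessel function J_m, fixed by the
% boundary condition at r = a. Each such mode decays exponentially along the
% throat with decay length a/x_{mn}, i.e. set by the throat radius -- the
% "Faraday cage" behavior described above.
```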

Comment by shminux on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-11T01:19:00.979Z · score: 2 (1 votes) · LW · GW
Smoking lesion is an interesting problem in that it's really not that well defined. If an FDT agent is making the decision, then its reference class should be other FDT agents, so all agents in the same class make the same decision, contrary to the lesion.

Wha...? Isn't that like saying that Newcomb's is not well defined? In the smoking lesion problem there is only one decision that gives you the highest expected utility, no?

Comment by shminux on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-10T01:26:17.960Z · score: 2 (1 votes) · LW · GW

Let's try to back up a bit. What, in your mind, does the sentence

With vanishingly small probability, a cosmic ray will cause her to do the opposite of what she would have done otherwise.

mean observationally? What does the agent intend to do, and what actually happens?

Comment by shminux on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-10T01:23:33.193Z · score: 2 (1 votes) · LW · GW

Right, sorry, I let my frustration get the best of me. I may be misinterpreting the FDT paper, though I am not sure where or how.

To answer your question: yes, obviously the desire to smoke is correlated with an increased chance of cancer, through the common cause. If those without the lesion got utility from smoking (contrary to what the FDT paper stipulates), then columns 3, 4 and 7, 8 would definitely become relevant. We can then assign the probabilities and utilities as appropriate. What is the formulation of the problem that you have in mind?

Comment by shminux on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-09T01:03:01.250Z · score: 2 (1 votes) · LW · GW

I... do not follow. Unlike the FDT paper, I try to write out every assumption. I certainly may have missed something, but it is not clear to me what. Can you point out something specific? I have explained the missing $1000 checkup cost: it has no bearing on decision making, because the cosmic-ray strike that makes one somehow do the opposite of what they intended, and hence go and get examined, can happen with equal (if small) probability whether they take the $1 or the $100. If the cosmic ray strikes only those who take the $100, or if those who take the $100 while intending to take the $1 do not bother with the checkup, this can certainly be included in the calculations.

Comment by shminux on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-09T00:48:26.984Z · score: 3 (2 votes) · LW · GW
The problem is usually set up so that they gain utility from smoking, but choose not to smoke.

Well, I went by the setup presented in the FDT paper (which is terrifyingly vague in most of the examples while purporting to be mathematically precise), and it clearly says that only those with the lesion love smoking. Again, if the setup is different, the numbers would be different.

In any case, you seem to have ignored the part of the problem where smoking increases chance of the lesion and hence cancer. So there seems to be some implicit normalisation? What's your exact process there?

Smoking does not increase the chances of the lesion in this setup! From the FDT paper:

an arterial lesion that causes those afflicted with it to love smoking and also (99% of the time) causes them to develop lung cancer. There is no direct causal link between smoking and lung cancer.

Comment by shminux on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-09T00:38:40.666Z · score: 3 (2 votes) · LW · GW

I tried to operationalize what

do the opposite of what she would have done otherwise

might mean, and came up with

Deciding and attempting to do X, but ending up doing the opposite of X and realizing it after the fact.

which does not depend on which decision is made, and so the checkup cost has no bearing on the decision. Again, if you want to specify the problem differently but still precisely (as in, such that one could write an algorithm that unambiguously calculates expected utilities given the inputs), by all means do, and we can apply the same approach to your favorite setup.

Comment by shminux on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-09T00:23:10.613Z · score: 3 (2 votes) · LW · GW

I agree that some of the rows are there just for completeness (they are possible worlds after all), not because they are interesting in the problem setup. How is the problem normally interpreted? The description in your link is underspecified.

But you're also giving everyone without the lesion 0 utility for not having cancer?

Good point. I should have put the missing million there. It wouldn't have made a difference in this setup, since the agent would not consider taking up smoking if they have no lesion; but in a different setup, where smoking brings pleasure to those without the lesion and the probability of the lesion is specified, the probabilities and utilities for each possible world are to be evaluated accordingly.

Comment by shminux on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-08T23:37:20.021Z · score: 3 (2 votes) · LW · GW

Anyway, the interesting worlds are those where smoking adds utility, since there is no reason for the agent to consider smoking in the worlds where she has no lesion.

Comment by shminux on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-08T23:34:29.876Z · score: 3 (2 votes) · LW · GW

Because the problem states that only those afflicted with the lesion would gain utility from smoking:

an arterial lesion that causes those afflicted with it to love smoking

Comment by shminux on Counterfactuals: Smoking Lesion vs. Newcomb's · 2019-12-08T22:52:01.345Z · score: 3 (2 votes) · LW · GW

I'm still quite confused about why you are taking this convoluted and complicated approach, or mixing this problem with Newcomb's, when the useful calculation is very straightforward. To quote from my old post here:

An agent is debating whether or not to smoke. She knows that smoking is correlated with an invariably fatal variety of lung cancer, but the correlation is (in this imaginary world) entirely due to a common cause: an arterial lesion that causes those afflicted with it to love smoking and also (99% of the time) causes them to develop lung cancer. There is no direct causal link between smoking and lung cancer. Agents without this lesion contract lung cancer only 1% of the time, and an agent can neither directly observe, nor control whether she suffers from the lesion. The agent gains utility equivalent to $1,000 by smoking (regardless of whether she dies soon), and gains utility equivalent to $1,000,000 if she doesn’t die of cancer. Should she smoke, or refrain?

There are 8 possible worlds here, with different utilities and probabilities:

An agent who "decides" to smoke has higher expected utility than the one who decides not to, and this "decision" lets us learn which of the 4 possible worlds could be actual, and eventually when she gets the test results we learn which one is the actual world.

Note that the analysis would be exactly the same if there were a “direct causal link between desire for smoking and lung cancer”, without any “arterial lesion”. In the problem as stated there is no way to distinguish between the two, since there are no other observable consequences of the lesion. There is a 99% correlation between the desire to smoke and cancer, and that’s the only thing that matters. Whether there is a “common cause”, or cancer causes the desire to smoke, or the desire to smoke causes cancer is irrelevant in this setup. It might become relevant if there were a way to affect this correlation, say, by curing the lesion, but no such way exists in the problem as stated.
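The straightforward calculation can be sketched numerically. Using the numbers from the quoted setup (cancer with probability 0.99 given the lesion, 0.01 without; $1,000 for smoking, $1,000,000 for not dying of cancer), smoking adds exactly $1,000 in expectation whatever the lesion status; the function and variable names below are my own illustration, not from the original post:

```python
# Expected utilities in the smoking lesion problem, conditional on lesion
# status. Probabilities and utilities come from the quoted setup:
# P(cancer | lesion) = 0.99, P(cancer | no lesion) = 0.01,
# smoking is worth $1,000, and not dying of cancer is worth $1,000,000.

U_SMOKE = 1_000          # utility of smoking, regardless of outcome
U_NO_CANCER = 1_000_000  # utility of not dying of cancer

def expected_utility(smokes: bool, p_cancer: float) -> float:
    """EU conditional on lesion status; the lesion enters only via p_cancer."""
    return (U_SMOKE if smokes else 0) + (1 - p_cancer) * U_NO_CANCER

# With the lesion (p_cancer = 0.99): smoking beats abstaining, 11,000 vs 10,000.
# Without it (p_cancer = 0.01): 991,000 vs 990,000.
# Smoking adds $1,000 either way, since it does not cause cancer in this setup.
```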

Comment by shminux on What is Abstraction? · 2019-12-07T06:18:12.964Z · score: 2 (1 votes) · LW · GW
To repeat the intuitive idea: an abstract model throws away or ignores information from the concrete model, but in such a way that we can still make reliable predictions about some aspects of the underlying system.

That's not quite my understanding of what an abstraction is. What you have described is basic modeling. An abstraction is a model that works well in a large class of disparate domains, like polymorphism in programming. The idea of addition is such an abstraction: it works equally well for numbers, strings, sheep, etc. What you call a natural abstraction is closer to my intuitive understanding of the concept. I do not subscribe to your assertion that this is a "property of the territory". Sheep are not like bit strings. Anyhow, ideas are in the mind, and different sets of ideas can be useful for predicting different sets of observations of seemingly unrelated parts of the territory. Also, good luck with your research, whatever definition of abstraction you use, as long as it is useful to you.
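The polymorphism analogy can be made concrete; in Python, for instance, the single abstract operation `+` already spans several domains, and user-defined types can join it. This is a trivial illustration of my own (the `Flock` class is hypothetical, standing in for the sheep):

```python
# One abstract operation, many concrete domains: '+' dispatches on type.
assert 2 + 3 == 5                    # numbers
assert "ab" + "cd" == "abcd"         # strings
assert [1] + [2, 3] == [1, 2, 3]     # lists

# User-defined types can join the same abstraction via __add__,
# the way flocks of sheep might combine:
class Flock:
    def __init__(self, n: int):
        self.n = n  # number of sheep
    def __add__(self, other: "Flock") -> "Flock":
        return Flock(self.n + other.n)

assert (Flock(3) + Flock(4)).n == 7  # sheep, of a sort
```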


Comment by shminux on The Actionable Version of "Keep Your Identity Small" · 2019-12-06T17:57:40.454Z · score: 6 (3 votes) · LW · GW

Looks like we are mostly on the same page. Your example is worth looking into a bit.

Someone argues with you that graffiti is harmful for the community. You fight vehemently to defend graffiti, because deep down you know that without graffiti, your group of friends wouldn't exist, and then you'd be alone

Right, if you identify as belonging to that group, you would feel that you have no choice but to defend it. If, instead, you were to consider the graffiti group simply as a place to socialize, then someone attacking graffiti artists would still strike you as ignorant of the subculture, but you could engage them without feeling endangered and getting defensive, which tends to be more productive. In this case your personal identity could be as in your description

"Be funny, act unflappable, be competent at the basic stuff"

or something similar that lets you pick and choose what groups you meet your social needs in without getting stuck in their world and having to bend your identity to fit into theirs.

Comment by shminux on The Actionable Version of "Keep Your Identity Small" · 2019-12-06T05:17:18.178Z · score: 2 (1 votes) · LW · GW

Why can't one meet one's social needs by participating in groups without identifying with them?

Comment by shminux on The Actionable Version of "Keep Your Identity Small" · 2019-12-06T03:12:24.576Z · score: 2 (3 votes) · LW · GW

To me KYIS is about group identity, not so much about personal identity. So the actionable advice would be: "Does this particular part of your identity come from belonging to a group, or from wanting to belong to one? Then discard it."

Comment by shminux on Karate Kid and Realistic Expectations for Disagreement Resolution · 2019-12-05T04:07:12.288Z · score: 8 (4 votes) · LW · GW
Maybe the total process only takes 30 hours but you have to actually do the 30 hours, and people rarely dedicate more than 4 at a time, and then don't prioritize finishing it that highly

And even worse, our beliefs are elastic, so between one session and the next the damage to the wounded beliefs heals, and one has to start almost from scratch again.

Comment by shminux on (Reinventing wheels) Maybe our world has become more people-shaped. · 2019-12-04T02:17:26.385Z · score: 2 (1 votes) · LW · GW
Chances are ninety percent of the interesting/useful/beautiful/etc. objects around you were either designed by humans, or placed there strategically by them.

Yes, and that includes math. That's why it can be "reconstructed" to some degree.

Comment by shminux on What I talk about when I talk about AI x-risk: 3 core claims I want machine learning researchers to address. · 2019-12-03T08:24:19.479Z · score: -5 (7 votes) · LW · GW
The development of advanced AI increases the risk of human extinction (by a non-trivial amount, e.g. 1%)

This is where I call BS. Even the best-calibrated people are not accurate at the margins. They probably cannot tell 1% from 0.1%. The rest of us can't reliably tell 1% from 0.00001%, or from 10%. If you are in doubt, ask those who self-calibrate all the time and are good at it (Eliezer? Scott? Anna? gwern?) how accurate their 1% predictions are.
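The difficulty at the margins can be quantified with a standard two-proportion sample-size estimate; this is my own illustrative calculation, not anything from the original comment. Even just telling a true hit rate of 1% apart from 0.1% takes on the order of 600 resolved predictions:

```python
from math import ceil, sqrt

def n_to_distinguish(p0: float, p1: float, z_alpha: float = 1.645,
                     z_beta: float = 1.645) -> int:
    """Approximate number of independent resolved predictions needed to
    distinguish a true frequency p0 from p1 (one-sided normal-approximation
    test at ~5% significance with ~95% power)."""
    num = z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

# Telling "0.1% of my 1%-predictions come true" from "1% do" requires
# roughly 575 resolved predictions in the 1% bucket.
n = n_to_distinguish(0.001, 0.01)
```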

Also notice your motivated cognition. You are not trying to figure out whether your views are justified, but how to convince those ignorant others that your views are correct.

Comment by shminux on Counterfactuals as a matter of Social Convention · 2019-12-01T22:33:53.179Z · score: 2 (1 votes) · LW · GW
think we can construct a meaningful (but non-unique) notion of counterfactuals in the map

Quite likely, and you have been working on it for some time. But is it a useful direction to work in? Every time I read what people write about it, I get the impression that they end up more confused than when they started.

Comment by shminux on Counterfactuals as a matter of Social Convention · 2019-11-30T20:40:37.999Z · score: 2 (1 votes) · LW · GW

Counterfactuals are in the mind, so of course they depend on the mental models, including "social conventions". They are also a bad model of the actual world, because they tempt you to argue with reality. (Here I assume a realist position.) There is only one world. There is no "what could have been", only what was, is and may yet be. And that "may" is in the mind, not in the territory. Being an embedded agent, you cannot change the world, only learn more about which of your maps, if any, are more accurate. That's why the a's and b's in your examples are identical, only some sound more confused than others. There is no difference between an opaque and a transparent Newcomb's.

Comment by shminux on Could someone please start a bright home lighting company? · 2019-11-29T20:28:02.352Z · score: 2 (1 votes) · LW · GW

I don't see that part anywhere, only the one example that everyone refers to.

Comment by shminux on Order and Chaos · 2019-11-29T20:24:29.582Z · score: 14 (3 votes) · LW · GW

This post came across to me as mostly speculative while trying to be academic, though I may well be wrong; Habryka in the other comment suggested that your claims have some grounding that I was not aware of. Additionally, I do not subscribe to the local lore of Eliezer's contrarianism and extreme Bayesianism, and the metaphor of "reality joints", or "reality fluid", falls flat for me as well. If your perspective is different, then feel free to disregard my comment; it's not like you and I can square our epistemic views in a comment thread.

Comment by shminux on Order and Chaos · 2019-11-29T08:22:30.396Z · score: 10 (5 votes) · LW · GW

I'm confused, you seem to describe a very elaborate model of cognition, yet I can find no literature review, no testable predictions and no experimental results to ground this model in something observable. What is this model based on?

Comment by shminux on Could someone please start a bright home lighting company? · 2019-11-27T03:14:24.458Z · score: 7 (7 votes) · LW · GW

I wonder if the claim of extreme bright light alleviating the symptoms of SAD has been tested by more than one person. While efficacy is not correlated with popularity, given that this is a rational-thinking based forum, it would be at least nice to know if the device works.

Comment by shminux on Market Rate Food Is Luxury Food · 2019-11-23T20:42:57.388Z · score: -6 (7 votes) · LW · GW

Your post starts with a questionable statement and is heavy on poorly motivated emotional shoulds and light on anything that can be charitably called "research". How do you define a food crisis? Which food crises in the past are you comparing it to? How were they resolved? Have you looked at the food availability issues around the world and the various ways they have been or are being (mis-)handled?

Comment by shminux on How common is it for one entity to have a 3+ year technological lead on its nearest competitor? · 2019-11-18T01:13:01.537Z · score: 4 (5 votes) · LW · GW

Almost anything Elon Musk touches: online payments, reusable rockets, electric (and some day soon autonomous) vehicles, solar panels, underground tunnels.

Comment by shminux on Steelmanning social justice · 2019-11-17T19:20:22.969Z · score: -2 (7 votes) · LW · GW

Social justice does not need steelmanning. The wikipedia definition makes perfect sense as is: "Social justice is a concept of fair and just relations between the individual and society." I am not sure what you are trying to steelman (or, from the tone of your post, dismiss), but it is something else, unrelated to social justice except maybe in some loose way.

Comment by shminux on Attach Receipts to Credit Card Transactions · 2019-11-13T05:03:35.811Z · score: 6 (3 votes) · LW · GW

I work in the area. In the EMV specification the receipt content is already saved electronically and is somewhat standardized; see, for example, https://www.mastercard.us/content/dam/mccom/global/documents/transaction-processing-rules.pdf. What is missing is an easy way for consumers and point-of-sale owners to access it. The receipt does not identify the product sold, by the way, but it contains enough detail to verify that the transaction occurred.

Of course, if your chipped card is stolen and pinless tap is supported for small purchases, no transaction verification helps you avoid fraudulent charges. And if someone has your card and knows your PIN (which can still be skimmed with card skimmers), then you are likely on the hook for all transactions up until you report your card stolen.

Comment by shminux on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-04T16:42:43.514Z · score: 21 (10 votes) · LW · GW

It's not either/or.

I no longer own a car; I just use a ride share like car2go when I need to get somewhere. Also, I hate owning things. Renting what I need when I need it, and letting someone else deal with the nitty-gritty of maintenance, frees my time and my mind. And I am not alone. Most corporate compute is rented. Most commercial space is rented. Most offices are rented. That most houses (in the US) are owned is an artifact of the way things came to be. If land had no intrinsic value, and renting were a universally more economical option, then real estate ownership would drop dramatically over time.

that a single product, a single vehicle needs to do both the moving and the accommodation of the user

That's a good point. Short trips only need moving, and longer travel needs to include accommodation, though not necessarily of a house-type kind. So, there is some room for what you propose, but probably not for the daily commute, where splitting the tug from the cabin is not necessarily economical. Still, I can see that standardized tugs might be economically viable in some niche cases.

Comment by shminux on Where should I ask this particular kind of question? · 2019-11-03T19:16:50.668Z · score: 2 (1 votes) · LW · GW

How likely is it that your question has not already been asked, answered, indexed, and placed on the first page of a Google search with a reasonable query? My guess is: very unlikely. So a better question might be, and it's something you could ask here, "I'm interested in understanding X, but not sure how to find reliable information about it; what search terms should I try?" The relevant part of Stack Overflow could also be a good candidate for such a question. Even at their unfriendliest, they might close your question as a duplicate or off-topic, but still provide a link to similar questions already asked.

Comment by shminux on What are human values? - Thoughts and challenges · 2019-11-02T18:57:26.116Z · score: 2 (1 votes) · LW · GW

Note that cognition comes with no goals attached. Emotion is where the drive to do anything comes from. You allude to it in "contemplation of a possible choice or outcome". My suspicion is that having a component not available to accurate introspection is essential to functioning. An agent (well, an algorithm) that can completely model itself (or, rather, model its identical copy, to mitigate the usual decision theory paradoxes) may end up in a state where there is no intrinsic meaning to anything, a nirvana of sorts, and no reason to act.

Comment by shminux on The Curse Of The Counterfactual · 2019-11-02T04:33:46.037Z · score: 9 (4 votes) · LW · GW

First, I really like your post! I've done online emotional support for some years, and I could clearly see how people end up in these self-blame and self-punishment loops with little to no improvement. Someone called it "shoulding oneself in the foot." The common denominator is people trying to argue with the past, to change the past, or to punish themselves (or others) for past transgressions, whether "real" or not. What gets lost is trying to affect the present with the goal of improving the potential future. Arguably, self-blame is an easy and tempting way out, as it lets one avoid acting in the present and just keep the self-punishment going. It also saps the energy out of you, creating a vicious circle that is hard to break out of. If you point out the loop and ask whether they are interested in working on improving the future, they (we) tend to find/invent/create a lot of reasons why self-judgment, self-blame and self-punishment is the only way to go.

Comment by shminux on The Simulation Epiphany Problem · 2019-11-01T01:22:57.083Z · score: 3 (2 votes) · LW · GW

Note that for a simulation to be useful, it has to be as faithful as possible, so SimDave would not be given any clues that he is simulated.

Comment by shminux on [deleted post] 2019-10-31T04:04:53.997Z

There are many, many ways something bad can happen. It is natural to focus on something you find especially scary and that your mind insists is a real possibility. Most of the time, this is you privileging a hypothesis through motivated emotional cognition. When this thought pattern goes into overdrive, one can end up with a real mental illness, not an imagined one: an anxiety disorder based on a specific if unfounded fear. Consider examining your motivations, emotions and thought patterns, and figuring out why you single out this specific scenario over all others.

Comment by shminux on Why are people so bad at dating? · 2019-10-29T03:55:01.422Z · score: 2 (1 votes) · LW · GW

If PhotoFeeler or similar apps drastically improve your chances, why wouldn't dating apps offer to improve your picture? Surely they are interested in more interactions between users? Snapchat automatic filters show that it is not hard or resource-intensive to touch up one's picture to make it look much better.

Comment by shminux on I would like to try double crux. · 2019-10-26T19:26:42.110Z · score: 3 (2 votes) · LW · GW

So, has this attempt failed? From the comment thread it looks like the OP made an honest effort, read up, and probably learned a fair bit, but I did not see anything like "aha, this is where we disagree" and, to quote the link in the OP:

If B, then A. Furthermore, if ¬B, then ¬A. You've both agreed that the states of B are crucial for the states of A, and in this way your continuing "agreement to disagree" isn't just "well, you take your truth and I'll take mine," but rather "okay, well, let's see what the evidence shows."

Comment by shminux on What economic gains are there in life extension treatments? · 2019-10-26T19:17:24.288Z · score: 3 (2 votes) · LW · GW

Economically it would be better if humans were sort of like salmon and died after a predetermined event, without suffering through physical and cognitive decline while draining the lion's share of societal resources. Some of those potential events could be: age 60, the birth of a great-grandchild, completing a pilgrimage, writing an autobiography... Sadly, we are stuck with the evolutionary leftovers, like waiting to die of disease and old age.

Comment by shminux on Decisions with Non-Logical Counterfactuals: request for input · 2019-10-26T02:42:21.233Z · score: 1 (4 votes) · LW · GW

I suspect the real underlying issue is that of free will: all decision theories assume we can make different decisions in the EXACT SAME circumstances, whereas from what we understand about the physical world, there is no such thing, and the only non-dualist proposal on the table is Scott Aaronson's freebits. I wrote a related post last year. We certainly do have a very realistic illusion of free will, to the degree where any argument to the contrary tends to be rejected, ignored, strawmanned or misinterpreted. If you read through the philosophical writings on compatibilism, people keep talking past each other all the time, never getting to the crux of their disagreement. Not that it (or anything else) matters in a universe where there is no freedom of choice, anyway.

Comment by shminux on Decisions with Non-Logical Counterfactuals: request for input · 2019-10-25T01:53:06.888Z · score: 2 (3 votes) · LW · GW

The issue of so-called logical counterfactuals has been discussed here and on the alignment forum quite a few times, including a bunch of posts by Chris_Leong, and at least one by yours truly. Consider browsing through them before embarking on original research:

https://www.google.com/search?q=logical+counterfactuals+site:lesswrong.com


Comment by shminux on [deleted post] 2019-10-15T07:59:31.801Z

This situation, if real, seems self-correcting. If you are sure you have a superior technology, and can show you do by monetizing it (otherwise, how is it superior?), then you will gain a lot of attention and change a lot of minds pretty quickly.

Comment by shminux on Maybe Lying Doesn't Exist · 2019-10-14T19:52:49.539Z · score: 2 (1 votes) · LW · GW

Maybe it is worth starting with a core definition of a lie that most people would agree with, something like "an utterance that is consciously considered by the person uttering it to misrepresent reality as they know it at that moment, with the intention to force the recipient to adjust their map of reality to be less accurate". Well, that's unwieldy. Maybe "an attempt to deceive through conscious misrepresentation of one's models"? Still not great.

Comment by shminux on A simple sketch of how realism became unpopular · 2019-10-13T02:09:18.724Z · score: 2 (3 votes) · LW · GW
I still intermittently run into people who claim that there's no such thing as reality or truth;

This sounds... strawmanny. "Reality and truth are not always the most useful concepts and it pays to think in other ways at times" would be a somewhat more charitable representation of non-realist ideas.


Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-12T21:19:08.984Z · score: 11 (2 votes) · LW · GW
If I understood correctly, your objection to Three Worlds Collide is (mostly?) descriptive rather than prescriptive: you think the story is unrealistic, rather than dispute some normative position that you believe it defends.

I am not a moral realist, so I cannot dispute someone else's morals, even if I don't relate to them, as long as they leave me alone. So, yes, descriptive, and yes, I find the story a great read, but that particular element, moral expansionism, does not match the implied cohesiveness of the multi-world human species.

Do you believe real world humans are "slow to act against the morals it finds abhorrent"?

Yes. Definitely.

how do you explain all (often extremely violent) conflicts over religion and political ideology over the course of human history?

Generally, economic or some other interests in disguise, like distracting the populace from internal issues. You can read up on the reasons behind the Crusades, the Holocaust, etc. You can also notice that when morals do lead the way, extreme religious zealotry produces internal instability, like the fractures inside Christianity and Islam. So my model, the one you call "factually wrong", seems to fit the observations rather well, though I'm sure not perfectly.

Whatever explanation you provide to this survival, what prevents it from explaining the continued survival of the human species until the imaginary future in the story?

My point is that humans are behaviorally both much more and much less tolerant of the morals they find deviant than they profess. In the story I would have expected humans to express extreme indignation over babyeaters' way of life, but do nothing about it beyond condemnation.

Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-12T02:02:55.057Z · score: 2 (1 votes) · LW · GW

It's frustrating when an honest exchange fails to achieve any noticeable convergence... I might try once more, and if not, well, Aumann does not apply here anyhow.

My main point: "to survive, a species has to be slow to act against the morals it finds abhorrent". I am not sure if this is where we disagree; maybe you think it's not a valid implication (and by implication I mean the contrapositive, "intolerant => stunted").

Comment by shminux on When is pair-programming superior to regular programming? · 2019-10-11T01:19:58.565Z · score: 0 (2 votes) · LW · GW

I had a pair programming experience at my first job back in the late 80s, before it was a thing, and my coworker and I clicked well, so it was fun while it lasted. I never had a chance to do it again, but I miss it a lot and wish I could work at a place where it is practiced.

Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-09T06:56:49.122Z · score: 4 (2 votes) · LW · GW
I still don't understand, is your claim descriptive or prescriptive?

Neither... Or maybe descriptive? I am simply stating the implication, not prescribing what to do.

I don't understand what you're saying here at all.

Yes, we do have plenty of laws, but no one goes out of their way to find and hunt down the violators. If anything, the more horrific something is, the more we try to pretend it does not exist. You can argue and point at law enforcement, whose job it is, but that doesn't change the simple fact that you can sleep soundly at night ignoring what is going on somewhere not far from you, let alone in the babyeaters' world.

"Universal we!right" is a contradiction in terms.

We may not have agreed on the meaning: I meant "human universal", not some species-independent morality.

in a given debate about ethics there might be hope that the participants can come to a consensus

I find that too optimistic a statement for a large "we". The best one can hope for is that logical people can agree with an implication like "given this set of values, this is the course of action someone holding these values ought to take to stay consistent", without necessarily agreeing with the set of values itself. In that sense, again, it describes self-consistent behaviors without privileging a specific one.

In general, it feels like this comment thread has failed to get to the crux of the disagreement, and I am not sure if anything can be done about it, at least without using a more interactive medium.

Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-08T15:46:40.153Z · score: 4 (2 votes) · LW · GW

Re "tenability", today's SMBC captures it well: https://www.smbc-comics.com/comic/multiplanetary

If interpreted in the logical sense, I don't think your argument makes sense: it seems like trying to derive an "ought" from an "is".

Hmm, in my reply to OP I expressed what the moral of the story is for me, and in my reply to you I tried to justify it by appealing to the expected stability of the species as a whole. The "ought", if any, is purely utilitarian: to survive, a species has to be slow to act against the morals it finds abhorrent.

Also, the actual distance between those diverging morals matters, and baby eating surely seems like an extreme example.

Uh. If you live in a city, there is a 99% chance that there is a little girl within a mile of you being raped and tortured by her father or older brother daily for their own pleasure, yet no effort is made to find and save her. I don't find the babyeaters' morals all that divergent from human ones; at least the babyeaters had a justification for their actions, based on the need for the whole species to survive.

I don't claim that leaving the Baby-eaters alone is necessarily we!wrong, but it is not obvious to me that it is we!right

My point is that there is no universal we!right and we!wrong in the first place, yet the story was constructed on this premise, which led to the whole species being hoist by its own petard.

it is supposed to be a "weird" culture by modern standards), much less an alien culture like the Super-Happies

Oh. It never struck me as weird, let alone alien. The babyeaters are basically Spartans and the super-happies are hedonists.

Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-07T15:45:21.045Z · score: 4 (5 votes) · LW · GW

The near-universal reaction of the crew to the babyeaters' customs is not just horror and disgust, but also a moral imperative to act to change them. It's as if there existed a species-wide objective "species!wrong", which is an untenable position, and, even less believably, a "universal!meta-wrong" under which anyone not adhering to your moral norms must be changed in some way to make their ways palatable (the super-happies are willing to go the extra mile and change themselves in their haste to fix what is "wrong" with others).

This position is untenable because it would lead to constant internal infighting, as customs and morals naturally drift apart for a diverse enough society. Unless you impose a central moral authority and ruthlessly weed out all deviants.

I am not sure how much of the anti-prime-directive morality is endorsed by Eliezer personally, as opposed to merely being described by Eliezer the fiction writer.

Comment by shminux on What do the baby eaters tell us about ethics? · 2019-10-07T01:17:37.672Z · score: 16 (5 votes) · LW · GW

I liked the story, but could never relate to its Eliezer-imposed "universal morality" of forcing others to conform to your own norms. To me the message of the story is "expansive metaethics leads to trouble, stick to your own and let others live the way they are used to, while being open to learning about each other's ways non-judgmentally".

Comment by shminux on Introduction to Introduction to Category Theory · 2019-10-06T18:45:33.198Z · score: 2 (1 votes) · LW · GW

I tried to learn the basics of category theory some years ago, already having some background in algebraic topology, mathematical physics and programming. And, presumably, in rationality. I got glimpses of how interesting it is and how it could be useful, but was never quite able to make use of it. Very curious whether your series of posts can change that for me. Keep going!