This is an extremely clear explanation of something I hadn't even realized I didn't understand. Thank you for writing it.
Note that the original question wasn't "Is it right for a pure altruist to have children?", it was "Would a pure altruist have children?". And the answer to that question most definitely depends on the beliefs of the altruist being modeled. It's also a more useful question, because it leads us to explore which beliefs matter and how they affect the decision (the alternative being that we all start arguing about our personal beliefs on all the relevant topics).
This sounds like a sufficiently obvious failure mode that I'd be extremely surprised to learn that modern index funds operate this way, unless there's some worse downside that they would encounter if their stock allocation procedure was changed to not have that discontinuity.
I think the important insight you may be missing is that the AI, if intelligent enough to recursively self-improve, can predict what the modifications it makes will do (and if it can't, then it doesn't make that modification, because creating an unpredictable child AI would be a bad move according to almost any utility function, even that of a paperclipper). And it evaluates the suitability of these modifications using its utility function. So assuming the seed AI is built with a sufficiently solid understanding of self-modification and what its own code is doing, it will more or less automatically work to create more powerful AIs whose actions will also be expected to fulfill the original utility function, no "fixed points" required.
There is a hypothetical danger region where an AI has sufficient intelligence to create a more powerful child AI, isn't clever enough to predict the actions of AIs with modified utility functions, and isn't self-aware enough to realize this and compensate by, say, not modifying the utility function itself. Obviously the space of possible minds is sufficiently large that there exist minds with this problem, but it probably doesn't even make it into the top 10 most likely AI failure modes at the moment.
Could someone explain the reasoning behind answer A being the correct choice in Question 4? My analysis was to assume that, since 30 migraines a year is still pretty terrible (for the same reason that the difference in utility between 0 and 1 migraines per year is larger than the difference between 10 and 11), I should treat the question as asking "Which option offers more migraines avoided per unit money?"
Option A: $350 / 70 migraines avoided = $5 per migraine avoided
Option B: $100 / 50 migraines avoided = $2 per migraine avoided
And when I did the numbers in my head I thought it was obvious that the answer should be B. What exactly am I missing that led the upper tiers of LWers to select option A?
My understanding is that it was once meant to be almost tvtropes-like, with a sort of back-and-forth linking between pages about concepts on the wiki and posts which refer to those concepts on the main site (in the same way that tvtropes gains a lot of its addictiveness from the back-and-forth between pages for tropes and pages for shows/books/etc).
I think we're in agreement then, although I've managed to confuse myself by trying to actually do the Shannon entropy math.
In the event we don't care about birth orders we have two relevant hypotheses which need to be distinguished between (boy-girl at 66% and boy-boy at 33%), so the message length would only need to be 0.9 bits if I'm applying the math correctly for the entropy of a discrete random variable. So in one somewhat odd sense Sarah would actually know more about the gender than George does.
Which, given that the original post said
Still, it seems like Sarah knows more about the situation, where George, by being given more information, knows less. His estimate is as good as knowing nothing other than the fact that the man has a child which could be equally likely to be a boy or a girl.
may not actually be implausible. Huh.
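For anyone who wants to check that 0.9-bit figure, here's a minimal sketch of the entropy math in Python (just plugging the 2/3 boy-girl vs. 1/3 boy-boy split into the standard formula):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Sarah's two remaining hypotheses: boy-girl (either order) vs. boy-boy
print(entropy([2/3, 1/3]))  # ~0.918 bits

# George's distribution over the same question is 50-50, so exactly 1 bit
print(entropy([0.5, 0.5]))  # 1.0
```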
The standard formulation of the problem is such that you are the one making the bizarre contortions of conditional probabilities, by asking a question. In the standard setup the person you meet has no children with him; he tells you only that he has two children, and you ask him a question rather than him volunteering information. When you ask "Is at least one a boy?", you set up the situation such that the conditional probabilities of various responses are very different.
In this new experimental setup (which is in very real fact a different problem from either of the ones you posed), we end up with the following situation:
h1 = "Boy then Girl"
h2 = "Girl then Boy"
h3 = "Girl then Girl"
h4 = "Boy then Boy"
o = "The man says yes to your question"
With a different set of conditional probabilities:
P(o | h1) = 1.0
P(o | h2) = 1.0
P(o | h3) = 0.0
P(o | h4) = 1.0
And it's relatively clear just from the conditional probabilities why we should expect to get an answer of 1/3 in this case now (because there are three hypotheses consistent with the observation and they all predict it to be equally likely).
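For anyone who wants to verify that, here's a minimal sketch of the update in Python (using exactly the priors and conditional probabilities listed above):

```python
def posterior(priors, likelihoods):
    """Bayes' theorem over a discrete set of hypotheses."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    evidence = sum(joint)  # P(o)
    return [j / evidence for j in joint]

priors = [0.25, 0.25, 0.25, 0.25]   # h1..h4, before any evidence
likelihoods = [1.0, 1.0, 0.0, 1.0]  # P(o | h) for "the man says yes"

print(posterior(priors, likelihoods))  # [1/3, 1/3, 0, 1/3]
```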
I agree that George definitely does know more information overall, since he can concentrate his probability mass more sharply over the 4 hypotheses being considered, but I'm fairly certain you're wrong when you say that Sarah's distribution is 0.33-0.33-0-0.33. I worked out the math (which I hope I did right or I'll be quite embarrassed), and I get 0.25-0.25-0-0.5.
I think your analysis in terms of required message lengths is arguably wrong, because the purpose of the question is to establish the genders of the children and not the order in which they were born. That is, the answer to the question "What gender is the child at home?" can always be communicated in a single bit, and we don't care whether they were born first or second for the purposes of the puzzle. You have to send >1 bit to Sarah only if she actually cares about the order of their births (And specifically, your "1 or 2 bits, depending" result is made by assuming that we don't care about the birth order if they're boys. If we care whether the boy currently out walking is the eldest child regardless of the other child's gender we have to always send Sarah 2 bits).
Another way to look at that result is that when you simply want to ask "What is the probability of a boy or a girl at home?" you are adding up two disjoint ways-the-world-could-be for each case, and this adding operation obscures the difference between Sarah's and George's states of knowledge, leading to them both having the same distribution over that answer.
I'll just note in passing that this puzzle is discussed in this post, so you may find it or the associated comments helpful.
I think the specific issue is that in the first case, you're assuming that each of the three possible orderings yields the same chance of your observation (the son out walking with him is a boy). If you assume that his choice of which child to go walking with is random, then the fact that you see a boy makes the (girl, boy) possibilities each less likely, so together they are equally likely to the (boy, boy) one.
Let's define (imagining, for the sake of simplicity, that Omega descended from the heavens and informed you that the man you are about to meet has two children who can both be classified into ordinary gender categories):
h1 = "Boy then Girl"
h2 = "Girl then Boy"
h3 = "Girl then Girl"
h4 = "Boy then Boy"
o = "The man is out walking with a boy child"
Our initial estimates for each should be 25% before we see any evidence. Then if we make the aforementioned assumption that the man doesn't like one child more than the other:
P(o | h1) = 0.5
P(o | h2) = 0.5
P(o | h3) = 0.0
P(o | h4) = 1.0
And then we can apply Bayes' theorem to figure out the posterior probability of each hypothesis:
P(h1 | o) = P(h1) * P(o | h1) / P(o)
P(h2 | o) = P(h2) * P(o | h2) / P(o)
P(h3 | o) = P(h3) * P(o | h3) / P(o)
P(h4 | o) = P(h4) * P(o | h4) / P(o)
(where P(o) = P(o | h1)*P(h1) + P(o | h2)*P(h2) + P(o | h3)*P(h3) + P(o | h4)*P(h4))
The denominator is a constant factor which works out to 0.5 (meaning "before making that observation I would have assigned it 50% probability"), and overall the math works out to:
P(h1 | o) = P(h1) * P(o | h1) / 0.5 = 0.25
P(h2 | o) = P(h2) * P(o | h2) / 0.5 = 0.25
P(h3 | o) = P(h3) * P(o | h3) / 0.5 = 0.0
P(h4 | o) = P(h4) * P(o | h4) / 0.5 = 0.5
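(If you want to double-check those numbers, here's the same calculation as a quick Python sketch:)

```python
priors = [0.25, 0.25, 0.25, 0.25]   # h1..h4, before observing anything
likelihoods = [0.5, 0.5, 0.0, 1.0]  # P(o | h): "out walking with a boy"

evidence = sum(p * l for p, l in zip(priors, likelihoods))  # P(o) = 0.5
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]
print(posteriors)  # [0.25, 0.25, 0.0, 0.5]
```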
So the result in the former case is the same as in the latter: seeing one child offers you no information about the gender of the other (unless you assume that the man hates his daughter and never goes walking with her, in which case you get the original 1/3 chance of it being a boy).
The lesson to take away here is the same lesson as the usual Bayesian vs. frequentist debate, writ very small: if you're getting different answers from the two approaches, it's because the frequentist solution is slipping in unstated assumptions which the Bayesian approach forces you to state outright.
The Shangri-La diet has been mentioned a few times around here, and each time I came across it I went "Hmm, that's cool, I'll have to do it some time". Last week I realized that this was in large part due to the fact that all discussions of it say something along the lines of "Sugar water is listed as one of the options, but you should really do one of the less pleasant alternatives". And this was sufficient to make me file it away as something I should do "some time".
I'm not in any population which is especially more strongly disposed to getting diabetes than average, I already drink a soda every other day or so, drinking sugar water is something I would consider quite pleasant, and I'm around 40 lbs over my target weight, so I decided that getting whatever benefit there was to be had from the sugar water was a better outcome than deciding on a less pleasant method and failing to actually get started yet again.
Over the first few days I saw a slight drop in my weight (though still more than the "MORE WILLPOWER!" method had accomplished over the same interval the last time I tried that), and some appetite reduction which may have just been imagined. Unfortunately, I ran out of sugar 4 days in, and wasn't able to buy more for a couple days. By the time I did, my weight had risen to above where I started. So I have no idea whether this is a success or not, but I'm still proud that I managed to get past the whole "But it's not optimal!" roadblock and try a cheap (in terms of willpower) test.
Maybe "value loading" is a term most people here can be expected to know, but I feel like this post would really be improved by ~1 paragraph of introduction explaining what's being accomplished and what the motivation is.
As it is, even the text parts make me feel like I'm trying to decipher an extremely information-dense equation.
Actually, I don't think oxygen tanks are that expensive relative to the potential gain. Assuming that the first result I found for a refillable oxygen tank system is a reasonable price, and conservatively assuming that it completely breaks down after 5 years, that's only $550 a year, which puts it within the range of "probably worthwhile for any office worker in the US" (assuming an average salary of $43k) if it confers a performance benefit greater than around 1.2% on average.
These tanks supposedly hold 90% pure oxygen, and are designed to be used with a little breathing lasso thing that ends up with you breathing around 30% oxygen (depending on the flow rate of course).
Since 30-40% oxygen concentrations seem to increase word recall by 30-50%, reduce reaction time by ~30%, improve 2-back performance by ~15%, and improve mental arithmetic accuracy by ~20% for 3-digit numbers, it seems pretty likely that the overall benefit of oxygen supplementation while working could be greater than breakeven.
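For what it's worth, here's the back-of-the-envelope breakeven arithmetic as a quick sketch (the ~$2750 system price is just what $550/year over 5 years implies, not a quoted figure, and the $43k is the average salary assumed above):

```python
system_cost = 2750      # dollars; implied by $550/year amortized over 5 years
lifetime_years = 5      # conservative assumption that it then breaks down completely
salary = 43000          # assumed average US office-worker salary

annual_cost = system_cost / lifetime_years        # $550/year
breakeven_gain = annual_cost / salary             # fraction of salary needed to break even
print(f"{breakeven_gain:.2%}")                    # ~1.28%
```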
The self-modification isn't in itself the issue though is it? It seems to me that just about any sort of agent would be willing to self-modify into a utility monster if it had an expectation of that strategy being more likely to achieve its goals, and the pleasure/pain distinction is simply adding a constant (negative) offset to all utilities (which is meaningless since utility functions are generally assumed to be invariant under affine transformations).
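As a toy illustration of that affine-invariance point (hypothetical action names and made-up utility numbers, obviously): shifting every utility down by a large constant, or rescaling it by a positive factor, doesn't change which action the agent picks.

```python
def best_action(utilities):
    """Pick the action with the highest utility."""
    return max(utilities, key=utilities.get)

# Toy utilities for some agent's available actions
u = {"make paperclips": 10.0, "do nothing": 3.0, "self-modify": 7.5}

# Positive affine transformation u' = a*u + b, with a large negative offset b
# (i.e. every outcome now reads as "pain"); the decision is unchanged.
a, b = 2.0, -100.0
u_shifted = {act: a * val + b for act, val in u.items()}

print(best_action(u))          # 'make paperclips'
print(best_action(u_shifted))  # 'make paperclips' -- same choice after the shift
```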
I don't even think it's a subset of utility monster, it's just a straight up "agent deciding to become a utility monster because that furthers its goals".
When in doubt, try a cheap experiment.
Make a list of various forms of recreation, then do one of them for some amount of time whenever you feel the need to take a break. Afterwards, note how well-rested you feel and how long you performed the activity. It shouldn't take many repetitions before you start to notice trends you can make use of.
Although to be honest, the only conclusive thing I've learned from trying that is that there's a large gap between "feeling rested" and "feeling ready to get back to work on something productive".
I realized upon further consideration that I don't actually have any evidence regarding keyboards and RSI, so here are the most relevant results of my brief research:
- Effects of keyboard keyswitch design: A review of the current literature

The abstract states that "Due to the ballistic nature of typing, new keyswitch designs should be aimed at reducing impact forces." This is a task which mechanical keyboards can potentially achieve more effectively than membrane ones because you can stop pushing on the key before it bottoms out. Later on in the paper they discuss results which seem to show that the loud noise of mechanical keyboards may actually be their best feature, as a silent keyboard with 0.28N of force causes about the same amount of finger effort as a clicky keyboard requiring 0.72N.

- Computer key switch force-displacement characteristics and short-term effects on localized fatigue

I'm unclear how much this paper is worth, as their methodology seems unlikely to produce situations like those encountered in real life. Assuming their conclusions are correct, they appear to indicate that keyswitches requiring lower actuation forces will lead to lower strike force when they hit the keyboard backing, which I believe would tend to mean that membrane keyboards are better for you if you can't train yourself not to shove the key into the keyboard backplane. However, they do indicate that longer overtravel (the length the key can be pressed after it activates) will reduce the striking force, so I'm not sure whether membrane keyboards come out ahead overall as they have quite a bit less overtravel.

- Toward a more humane keyboard

Light on details, but states that from their research one of the design goals of an improved ergonomic keyboard should be to optimize tactile feedback (among other things). This paper was co-written by the president of Kinesis (in 1992), and it's worth noting that at least the modern Kinesis ergonomic keyboards use mechanical keyswitches with 45g of operating force (lower than the 50-65g typical of Cherry keyswitches), and have around 4mm of overtravel.
There are 4 or 5 additional promising results from Google Scholar, but I think I've learned as much about keyboards as I care to at the moment. If you want to read further, I found that the most promising search terms were "buckling spring carpal tunnel" and "effects of keyboard keyswitch design".
Overall the evidence seems to vaguely back up the folk wisdom that mechanical keyboards can help to lessen one's chances of getting a hand injury like carpal tunnel, but there doesn't appear to be anything conclusive enough to warrant using a mechanical keyboard for that alone (and there's probably a lot more benefit to be had from an ergonomic layout than from the keyswitches). I still judge it worth the $60 extra in my own case, but that's probably just the sunk costs and techno-hipsterism talking.
I'm going to disagree with the weakness of your recommendation. I may be falling prey to the typical mind fallacy here, but I feel that everyone who types for a significant fraction of their day (programming, writing, etc) should at least strongly consider getting a mechanical keyboard. In addition to feeling nicer to type on, there's some weak evidence that buckling-spring keyboards can lower your risk of various hand injuries down the line, and even a slightly lessened risk of RSI is probably worth the $60 or so more that a mechanical keyboard costs, even ignoring the greater durability.
I'm not particularly attached to that metric, it was mostly just an example of "here's a probably-cheap hack which could help remedy the problem". On the other hand, I'm not convinced that one post means that an "Automatically promote after a score of 10" policy wouldn't improve the overall state of affairs, even if that particular post is a net negative.
I feel like the mechanism probably goes something like:
- People are generally pretty risk-averse when it comes to putting themselves out in public in that way, even when the only risk is "Oh no my post got downvoted"
- Consequently, I'm only likely to post something to Main if I personally believe that it exceeds the average quality threshold.
An appropriate umeshism might be "If you've never gotten a post moved to Discussion, you're being too much of a perfectionist."
The problem, of course, is that there are very few things we can do to reverse the trend towards higher and higher post sophistication, since it's not an explicit threshold set by anyone but simply a runaway escalation.
One possible "patch" which comes to mind would be to set it up so that sufficiently high-scoring Discussion posts automatically get moved to Main, although I have no idea how technically complicated that is. I don't even think the bar would have to be that high. Picking an arbitrary "nothing up my sleeve" number of 10, at the moment the posts above 10 points on the first page of Discussion are:
- Low Hanging Fruit -- Basic Bedroom Decorating
- Only say 'rational' when you can't eliminate the word
- Short Primers on Crucial Topics
- List of underrated risks?
- [Link] Reason: the God that fails, but we keep socially promoting….
- A Protocol for Optimizing Affection
- Computer Science and Programming: Links and Resources
- The rational rationalist's guide to rationally using "rational" in rational post titles
- Funding Good Research
- Expertise and advice
- Posts I'd Like To Write (Includes Poll)
- Share Your Checklists!
- A Scholarly AI Risk Wiki
Which means that in the past week (May 25 - June 1) there have been 13 discussion posts gaining over 10 points. If all of these were promoted to Main, this would be an average post rate of just under 2 per day, which is potentially just around the level which some might consider "spammy" if they get the Less Wrong RSS.
Personally, though, I would be fully in favor of getting a couple of moderately-popular posts in my feed reader every morning.
I disagree. From checking "recent comments" a couple times a day as is my habit, I feel like the past few days have seen an outpouring of criticism of Eliezer and the sequences by a small handful of people who don't appear to have put in the effort to actually read and understand them, and I am thankful to the OP for providing a counterbalance to that.
Has there been any serious discussion of the implications of portraits? I couldn't find any with some cursory googling, but I'll be really surprised if it hasn't been discussed here yet. I can't entirely remember which of these things are canon and which are various bits of fanfiction, but:
- You can take someone's portrait without them explicitly helping, as evidenced in canon by at least one newspaper photograph of someone who had been arrested, which continually struggles and screams at the viewer. I don't remember which book this was or any of the particulars unfortunately, but I'm pretty certain it's a thing that was in one of them. Or maybe one of the movies. Moving on.
- They can perform simple tasks of short-term memory and carry on a coherent conversation.
- They can walk from picture to picture to communicate with each other.
- They can operate simple mechanisms in some way. In canon, the door to Gryffindor Tower is a portrait, which requires a password before opening.
As far as I can tell, portraits in the Harry Potter universe would be a gigantic game-breaker if it weren't for all the other game-breakers overshadowing them. I suppose it's possible to mitigate this (maybe a picture carries less of the "person" compared to a portrait for which they have to sit for hours) but if that's not the case, portraits appear to be essentially a way of involuntarily uploading a copy of someone and enslaving them for all eternity, and all you need is knowledge of what they look like and a modicum of artistic ability.
edit: Oh crap, in MoR they ask portraits questions about knowledge they would have had before being painted, like "what spells did they teach you as a first year" and "did you know a married squib couple". So you're not just getting a basic "human" imprint, you're getting that specific person.
And on the flip side of that, not all the portraits in Hogwarts are necessarily real people. What moral weight does a newly-created personality in a portrait have?
I think you may be misinterpreting what he means by "takes five whole minutes to think an original thought". You may well have to sit thinking for considerably longer than five minutes before you have an original thought, but are you truly spending that whole interval having the thought, or are you retracing the same patterns of thought over and over again in different permutations?
I think the implication is that, since the new thought itself only takes a few minutes, training for and expecting better performance could cut down the amount of "waiting for a new thought" time.
All that means is that you have a different definition of value for your friendships. It's important to focus on what exactly you want from your friends, but I see no reason that definition of value would be incompatible with trying to consciously cultivate stronger and better relationships.
So let's run with that. What can one do to intentionally try and grow those sorts of strong bonds with people? I'm reminded of a quote from HPMoR:
"One of my tutors once said that people form close friendships by knowing private things about each other, and the reason most people don't make close friends is because they're too embarrassed to share anything really important about themselves."
Since the topic of this post is sub-optimal communication, I thought I'd point out that
I think you'd be more convincing if you learned about brevity.
reads as rather more condescending than I think you intended from the tone of the rest of your comment. Specifically, it implies not just that he needs to practice revising for brevity, but that he doesn't even know what it is.
I'm using the stock browser that comes with Cyanogenmod 9, so in principle I can open links in a new window but in practice the interface is annoying enough that I rarely use it. I've tried Firefox mobile but the white-and-grey "not yet rendered" texture makes the browser feel much slower due to its obviousness. Dolphin looks interesting, I'm surprised I haven't heard of it before.
I guess my complaint isn't that I can't open a link separately; it's that doing so is annoying enough that I find myself asking "do I care enough about learning what this spoilered text is saying to bother following the link?", and repeatedly running into that question during a longish discussion causes enough decision fatigue that I stop bothering.
I often read LW on my phone and for that use case rot13 is the best spoiler method by far. It prevents immediately seeing words that would give away spoilers, but I can generally decode a given phrase in my head given the word lengths, punctuation, topic of conversation, and position of common words like 'gur', 'na', 'bs', 'vf', or 'gb'.
Using reddit-style CSS spoiler tags means that I can't access the spoilered content at all AFAICT, and linking me to a decoder, while nice in theory, isn't very helpful because if I click it I will lose the nice highlighting of new posts. This is a Big Deal on long-running threads like the HPMoR discussions.
As far as I can tell from my limited research, it appears to be a combination of the SCP Foundation's "Object Classes" with a hypothetical new object class, "Roko". I believe the class is named for an LW user who appears to no longer exist, but who made a post at some point (the best I can establish is that it had to be prior to December 2010) presenting some idea which later came to be called a "basilisk", because the very knowledge of it was judged by some to be potentially harmful and unsettling. The post was deleted, although it appears to be possible to find copies of it, or at least the basic idea, if one cares enough.
So presumably the containment protocol for Object Class: Roko is simply to destroy the offending information and maybe take steps to prevent a recurrence? I'm mostly guessing, anyone who actually knows this context firsthand want to comment on whether my guess is close?
Other than the "external database" option, the only other sources of name information I can think of are:
- The mind of the person being mapped
- The mind of the person reading the map
- A sort of consensus of how everyone in Hogwarts knows someone
Picking someone's name from their own mind seems to me the most elegant and consistent option. It doesn't handle babies (before the parents choose a name, can a baby even be said to have one? Babies would have to be special-cased regardless), but it does allow arbitrary people to be mapped (multiple strangers being indistinguishable from each other seems like a serious flaw in a security system) and requires no external registry. Interrogating the mind of every human seems vastly more complicated than just looking up the name in a database, but I can see it being "obvious" to the kind of epistemology that would come naturally to a 9th-century witch or wizard.
(And to respond to your question about Pettigrew in the great-grandparent, I would assume that the map skips over animals entirely, which would probably include animagi. This would tend to lend a slight amount of weight to my "the map displays your name as you know it" theory, since if the names came from how everyone else around you knew you, there would be no reason not to include pets.)
If my theory is true, it raises an additional interesting question: Is it possible to obliviate yourself selectively so that you lose all knowledge of your own name? (Possibly storing the memories in a pensieve first so you can recover them later) And if so, is the map the only piece of the Hogwarts security system which might be impeded by this?
A further idea: Professor Quirrell is shown to take a very loose approach to identity and names ("Identity does not mean, to such as us, what it means to other people."). Possibly Quirrellmort is the constant error not because his name is wrong, but because he doesn't have a name attached to his marker at all.
I don't remember what post it was in response to, but at one point someone suggested "optimal" as a much better substitute for "rational" in this type of post, partly to reduce the use of "rational" as an applause light, and partly because it better describes what these posts are generally asking.