Stupid Questions May 2015
post by Gondolinian · 2015-05-01T17:28:59.679Z · LW · GW · Legacy · 264 comments
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.
264 comments
Comments sorted by top scores.
comment by Gondolinian · 2015-05-01T18:04:30.901Z · LW(p) · GW(p)
There is a not necessarily large, but definitely significant, chance that developing machine intelligence compatible with human values may be the single most important thing humans have ever done or will ever do, and it seems very likely that economic forces will bring about strong machine intelligence soon, whether or not we're ready for it.
So I have two questions about this. Firstly (and this is probably my youthful inexperience talking, a big part of why I'm posting this here): I see so many rationalists do so much awesome work on things like social justice, social work, medicine, and all kinds of poverty-focused effective altruism, but how can it be that the ultimate fate of humanity to either thrive beyond imagination or perish utterly may rest on our actions in this century, and yet people who recognize this possibility don't do everything they can to make it go the way we need it to? This segues into my second question: what is the most any person (more specifically, I) can do for FAI? I'm still in high school, so there really isn't that much keeping me from devoting my life to helping the cause of making sure AI is friendly. What would that look like? I'm a village idiot by LW standards, and especially bad at math, so I don't think I'd be very useful on the "front lines" so to speak, but perhaps I could try to make a lot of money and do FAI-focused EA? I might be more socially oriented/socially capable than many here, perhaps I could try to raise awareness or lobby for legislation?
Replies from: dxu, ChaosMote, Viliam, MarsColony_in10years, Gram_Stone, Capla
↑ comment by dxu · 2015-05-01T23:10:26.460Z · LW(p) · GW(p)
how can it be that the ultimate fate of humanity to either thrive beyond imagination or perish utterly may rest on our actions in this century, and yet people who recognize this possibility don't do everything they can to make it go the way we need it to?
Well, ChaosMote already gave part of the answer, but another reason is the idea of comparative advantage. Normally I'd bring up someone like Scott Alexander/Yvain as an example (since he's repeatedly claimed he's not good at math and blogs more about politics/general rationality than about AI), but this time, you can just look at yourself. If, as you claim,
I'm a village idiot by LW standards, and especially bad at math, so I don't think I'd be very useful on the "front lines" so to speak, but perhaps I could try to make a lot of money and do FAI-focused EA? I might be more socially oriented/socially capable than many here, perhaps I could try to raise awareness or lobby for legislation?
then your comparative advantage lies less in theory and more in popularization. Technically, theory might be more important, but if you can net bigger gains elsewhere, then by all means you should do so. To use a (somewhat strained) analogy, think about expected value. Which would you prefer: a guaranteed US $50, or a 10% chance at getting US $300? The raw value of the $300 prize might be greater, but you have to multiply by the probabilities before you can do a comparison. It's the same here. For some LWers, working on AI is the way to go, but for others who aren't as good at math, maybe raising money is the best way to do things. And then there's the even bigger picture: AI might be the most important risk in the end, but what if (say) nuclear war occurs first? A politically-oriented person might do better to go into government or something of the sort, even if that person thinks AI is more important in the long run.
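To make the probability-weighting explicit, here is a minimal Python sketch of that comparison (the figures are just the ones from the example above):

    # Expected value = probability times payoff; the bigger raw prize
    # is not the bigger probability-weighted prize.
    option_a = 1.00 * 50    # guaranteed $50
    option_b = 0.10 * 300   # 10% chance of $300
    print(option_a, option_b)  # 50.0 vs 30.0, so the sure $50 wins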
So while at first glance it might look somewhat strange that not every LWer is working frantically on AI, if you look a little deeper, there's actually a good reason. (And then there's also scope insensitivity, hyperbolic discounting, and all that good stuff ChaosMote brought up.) In a sense, you answered your own question when you asked your second.
↑ comment by ChaosMote · 2015-05-01T20:53:17.894Z · LW(p) · GW(p)
To address your first question: this has to do with scope insensitivity, hyperbolic discounting, and other related biases. To put it bluntly, most humans are actually pretty bad at maximizing expected utility. For example, when I first heard about x-risk, my thought process was definitely not "humanity might be wiped out - that's IMPORTANT. I need to devote energy to this." It was more along the lines of "Huh; that's interesting. Tragic, even. Oh well; moving on..."
Basically, we don't care much about what happens in the distant future, especially if it isn't guaranteed to happen. We also don't care much more about humanity than we do about ourselves plus our close ones. Plus we don't really care about things that don't feel immediate. And so on. The end result is that most people's immediate problems are more important to them than x-risk, even if the latter might be by far the more essential according to utilitarian ethics.
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2015-05-03T02:24:04.922Z · LW(p) · GW(p)
It's also possible that people might reasonably disagree with one or more of MIRI's theses.
Replies from: None
↑ comment by Viliam · 2015-05-02T17:06:35.811Z · LW(p) · GW(p)
people who recognize this possibility don't do everything they can to make it go the way we need it to
Despite all the talk about rationality, we are still humans with all the typical human flaws. Also, it is not obvious which way it needs to go. Even if we had unlimited and infinitely fast processing power, and could solve mathematically all kinds of problems related to Löb's theorem, I still would have no idea how we could start transferring human values to the AI, considering that even humans don't understand themselves, and ideas like "AI should find a way to make humans smile" can lead to horrible outcomes. So maybe the first step would be to upload some humans and give them more processing power, but humans can also be horrible (and the horrible ones are actually more likely to seize such power), and the changes caused by uploading could make even nice people go insane.
So, what is the obvious next step, other than donating some money to the research, which will most likely conclude that further research is needed? I don't want to discourage anyone who donates or does the research; I'm just saying that the situation with the research is frustrating in its lack of feedback. On a scale where 0 is the first electronic computer and 100 is the Friendly AI, are we at least at point 1? If we happen to be there, how would we know it?
Replies from: Capla
↑ comment by Capla · 2015-05-06T21:07:52.545Z · LW(p) · GW(p)
So maybe the first step would be to upload some humans and give them more processing power,
I would like this plan, but there are reasons to think that the path to WBE passes through neuromorphic AI, which is exceptionally likely to be unfriendly, since the principle is basically to just copy parts of the human brain without understanding how the human brain works.
↑ comment by MarsColony_in10years · 2015-05-03T16:44:22.277Z · LW(p) · GW(p)
I don't agree with this particular argument, but I'll mention it anyway for the sake of having a devil's advocate:
The number of lives lost to an extinction event is arguably capped at ~10 billion, or whatever Earth's carrying capacity is. If you think the AI risk is enough generations out, then it may well be possible to do more good by, say, eliminating poverty faster. A simple mathematical model would suggest that if the singularity is 10 generations away, and Earth will have a constant population of ~10 billion, then 100 billion lives will pass between now and the singularity. A 10% increase in humanity's average quality of life over that period would be morally equivalent to stopping the singularity.
Now, there are a host of problems with the above argument:
First, it is trying to minimize death rather than maximize life. If you set out to maximize the number of Quality Adjusted Life Years that intelligent life accumulates before its extinction, then you should also take into account all of the potential future lives which would be extinguished by an extinction event, rather than just the lives taken by the event itself.
Second, the Future of Humanity Institute has conducted an informal survey of existential risk researchers, asking for estimates of the probability of human extinction in the next 100 years. The median result (not the mean, so as to minimize the impact of outliers) was ~19%. If that's a ~20% chance each century, then we can expect humanity to last perhaps 2 or 3 centuries (i.e., that's the half-life of a technological civilization). Even 300 years is only maybe 4 or 5 generations, so perhaps 50 billion lives could be affected by eliminating poverty now. Using the same simplistic model as before, that would require a 20% increase in humanity's average quality of life to be morally equivalent to ~10 billion deaths. That's a harder target to hit, and it may be even harder still if you consider that poverty is likely to be nearly eliminated in ~100 years. Poverty really has been going down steadily for the last century or so, and in another century we can expect it to be much improved.
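For what it's worth, the half-life figure checks out under the stated assumption of a constant per-century extinction probability; a quick Python sketch:

    import math
    # Half-life of a civilization facing a constant ~19-20% extinction risk per century.
    for p in (0.19, 0.20):
        centuries = math.log(0.5) / math.log(1 - p)
        print(f"{p:.0%} per century -> half-life of {centuries:.1f} centuries")
    # 19% -> ~3.3 centuries; 20% -> ~3.1 centuries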
Note that both of these points are based on somewhat subjective judgements. Personally, I think Friendly AI research is around the point of diminishing returns. More money would be useful, of course, but I think it would be worth putting some focus on preemptively addressing other forms of existential risk which may emerge over the next century.

Additionally, I think it's important to play with the other factors that go into QALYs. Quality of life is already being addressed, and the duration of our civilization is starting to be addressed via x-risk reduction. The other factor is the number of lives, which is currently capped at Earth's carrying capacity of ~10 billion. I'd like to see trillions of lives. Brain uploads are one method as technology improves, but another is space colonization. The cheapest option I see is Directed Panspermia, which is the intentional seeding of new solar systems with dormant single-celled life. All other forms of x-risk reduction address the possibility that the Great Filter is ahead of us, but this would hedge that bet. I haven't done any calculations yet, but donating to organizations like the Mars Society may even turn out to be competitive in terms of QALY/$, if they can tip the political scale between humanity staying in and around Earth, and humanity starting to spread outward, colonizing other planets and eventually other stars over the next couple of millennia. It's hard to put a figure on the expected QALY return, but if quadrillions of lives hang in the balance, that may well tip the scales and make the tens of billions of dollars needed to initiate Mars colonization an extremely good investment.
↑ comment by Gram_Stone · 2015-06-04T05:25:27.254Z · LW(p) · GW(p)
To elaborate on existing comments, a fourth alternative to FAI theory, Earning To Give, and popularization is strategy research. (That could include research on other risks besides AI.) I find that the fruit in this area is not merely low-hanging but rotting on the ground. I've read in old comment threads that Eliezer and Carl Shulman in particular have done a lot of thinking about strategy but very little of it has been written down, and they are very busy people. Circumstances may well dictate retracing a lot of their steps.
You've said elsewhere that you have a low estimate of your innate mathematical ability, which would preclude FAI research, but presumably strategy research would require less mathematical aptitude. Things like statistics would be invaluable, but strategy research would also involve a lot of comparatively less technical work, like historical and philosophical analysis, experiments and surveys, literature reviews, lots and lots of reading, etc. Also, you've already done a bit of strategizing; if you are fulfilled by thinking about those things and you think your abilities are up to the task, then it might be a good alternative.
Some strategy research resources:
Luke Muehlhauser's How to study superintelligence strategy.
Luke's AI Risk and Opportunity: A Strategic Analysis sequence.
The AI Impacts blog, particularly the Possible Empirical Investigations post and links therein.
The Future of Life Institute's A survey of research questions for robust and beneficial AI.
Naturally, Bostrom's Superintelligence.
↑ comment by Gondolinian · 2015-06-04T16:04:04.368Z · LW(p) · GW(p)
Thanks for taking the time to put all that together! I'll keep it in mind.
comment by DanArmak · 2015-05-02T16:02:56.362Z · LW(p) · GW(p)
Following on from this question, since cheap energy storage is a big obstacle to using wind/wave/solar energy, why is gravity-based energy storage not used more?
Many coasts have some cliffs, where we could build a reservoir on top of the cliff and pump up seawater to store energy. What is the fundamental problem with this? Efficiency of energy conversion when pumping? Cost of building? The space the reservoir would take (or the amount of water it could hold)?
Replies from: g_pepper, Douglas_Knight, Lumifer
↑ comment by g_pepper · 2015-05-03T02:31:00.738Z · LW(p) · GW(p)
Actually, this scheme is currently employed by utilities, albeit usually not with seawater. The technique is called pumped storage hydro. Pumped storage hydro accounts for the vast majority of grid energy storage world-wide. It is used by power companies to achieve various goals, e.g.:
flatten out load variations (as you suggested elsewhere in this thread)
provide "instant-on" reserve generation for voltage and frequency support
level out the fluctuating output of intermittent energy sources such as wind and solar (as you suggested above)
Wikipedia states that round-trip efficiency of pumped storage hydro can range between 70% and 87%, making it an economical solution in many cases.
A couple of obstacles to using pumped storage hydro are:
Certain topographical/geographic features are needed to make PSH viable
Social and ecological concerns
↑ comment by DanArmak · 2015-05-03T15:02:24.240Z · LW(p) · GW(p)
Yes! Thank you.
I wonder why there aren't more of them, or bigger ones. The only seaside-cliff one listed on Wikipedia is the Okinawa Yanbaru station, completed in 1999, which only provides 30 MW.
Apparently the cost/demand situation isn't favorable.
Replies from: g_pepper
↑ comment by g_pepper · 2015-05-03T15:35:26.158Z · LW(p) · GW(p)
Yes, it seems like using a seaside cliff would have several advantages over a freshwater solution, not the least of which is an unlimited water supply in the lower reservoir.
Replies from: DanArmak
↑ comment by DanArmak · 2015-05-03T17:49:16.254Z · LW(p) · GW(p)
I guess the problem is scale, after all. I'm quite bad at physical calculations, so the below may be wrong.
Even a small hydroelectric dam generates gigawatts of power. Assuming a 30 meter tall cliff, each cubic meter of water releases about 294 kJ when descending. To produce 1 GW of power, we would need 1,000,000 kJ/s ÷ 294 kJ/m³ ≈ 3,400 cubic meters of water descending every second (a watt is a joule per second).
If we build a lake at the top, 10 meters deep and 1 kilometer on a side, it would contain 10 million cubic meters of water. If we run it at 1 GW, it would be emptied after about 49 minutes. Not very useful, after all.
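The arithmetic above can be sanity-checked with a few lines of Python (a rough sketch assuming fresh-water density, a 30 m head, and lossless conversion):

    RHO, G, HEAD = 1000, 9.81, 30          # kg/m^3, m/s^2, m
    TARGET_POWER = 1e9                     # 1 GW in watts
    energy_per_m3 = RHO * G * HEAD         # ~294 kJ per cubic meter of water
    flow = TARGET_POWER / energy_per_m3    # ~3,400 m^3/s needed for 1 GW
    reservoir = 1000 * 1000 * 10           # 1 km x 1 km x 10 m deep, in m^3
    print(flow, reservoir / flow / 60)     # ~3,398 m^3/s, empty in ~49 minutes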
It makes me really appreciate the scale of natural phenomena like Niagara Falls.
Replies from: g_pepper, D_Alex
↑ comment by g_pepper · 2015-05-03T18:29:57.020Z · LW(p) · GW(p)
Even a small hydroelectric dam generates gigawatts of power
Actually, a multi-gigawatt hydro plant is a large hydro plant; e.g., Hoover Dam has a capacity of about 2 GW. A medium-sized hydro plant might have a capacity of around 200 MW; e.g., Martin Dam in Alabama has a capacity of 182 MW.
Your point is well taken however; scale issues will probably prevent pumped storage hydro from being the one-and-only solution to intermittent energy sources. Just for comparison, the reservoir created by the above-mentioned Martin Dam covers 40,000 acres!
However, pumped storage hydro can still be a useful and economical part of the solution. Other components would be natural gas powered combustion turbines, which can be brought online quickly as needed, and a mix of renewable sources. To this latter point, some areas tend to be windier at night than during the day; this suggests that a mix of solar and wind might be a useful combination.
Still, it is hard to imagine that we'll be getting away from fossil and nuclear any time soon; renewables can help reduce the amount of fossil fuels that we consume, but won't (for now) be able to eliminate the need for them. Pumped storage hydro can be a valuable part of the solution by smoothing over irregularities in supply and demand while reducing the use of natural gas powered generation.
↑ comment by D_Alex · 2015-05-04T07:08:08.084Z · LW(p) · GW(p)
Apart from what g_pepper has correctly pointed out regarding size/power of hydro plants...
If we build a lake at the top, 10 meters deep and 1 kilometer on a side
With the right terrain, this is pretty trivial: all you need is a relatively small dam wall closing off a small ravine between mountains. Here is a nice example:
http://www.iwb.ch/media/de/picdb/2012/366/nant_de_drance_stausee_vieux.jpg
http://www.iwb.ch/media/de/picdb/2012/367/nant_de_drance_stauseen_vieu.jpg
↑ comment by Douglas_Knight · 2015-05-03T01:59:20.300Z · LW(p) · GW(p)
The operating cost for hydro power plants is very low, so the relevant cost is the initial building. If you dam a river, it just takes one wall, while if you want to create a swimming pool, it takes four. Actually, five, and the floor may be the biggest problem. If you dam a river, you already know that it isn't easy for the water to flow through the ground, because it isn't taking that route. Whereas pumping it onto dry ground probably won't work.
Replies from: bingobongo
↑ comment by bingobongo · 2015-05-03T08:07:47.600Z · LW(p) · GW(p)
If you dam a river, you already know that it isn't easy for the water to flow through the ground, because it isn't taking that route.
Actually, this (water that passes under the dam) is the main problem after water passing directly through the dam. If the bed of the river can withstand a head of 10 m of water, it probably cannot withstand 20 or 30 or 70 m of water.
↑ comment by Lumifer · 2015-05-02T16:22:10.920Z · LW(p) · GW(p)
What is the fundamental problem with this?
Cost. Wind/wave/solar energy is more expensive than fossil-fuel or nuclear energy to start with, and adding not-too-efficient storage mechanisms to even out the supply does not help it at all.
Really, the answer to most questions of this kind is "cost". It is the default and usually correct answer.
Replies from: DanielLC, DanArmak
↑ comment by DanArmak · 2015-05-02T17:00:22.930Z · LW(p) · GW(p)
Doesn't the energy grid need good storage anyway, to even out differences between day and night?
Replies from: Lumifer
↑ comment by Lumifer · 2015-05-02T22:32:10.629Z · LW(p) · GW(p)
If it were cheap enough, maybe, but at the moment the demand fluctuations are covered by power generating plants which come online in times of high demand (e.g. day) and shut off during low demand (e.g. night). Typically these plants burn natural gas.
Sufficiently cheap storage would be very useful, yes.
Replies from: g_pepper
↑ comment by g_pepper · 2015-05-03T02:46:02.611Z · LW(p) · GW(p)
Actually, pumped storage hydro is used for the purposes that DanArmak describes; see my post elsewhere in this thread.
comment by Daniel_Burfoot · 2015-05-01T22:26:53.937Z · LW(p) · GW(p)
Why isn't sea-based solar power more of a thing? Say you have a big barge of solar panels, soaking up energy and storing it in batteries. Then once in a while a transport ship takes the full batteries to land to be used, and returns some empty batteries to the barge.
Replies from: Douglas_Knight, drethelin, Nornagest, Luke_A_Somers, ChristianKl, Lumifer
↑ comment by Douglas_Knight · 2015-05-02T00:49:41.787Z · LW(p) · GW(p)
Storing energy in batteries is a net loss. Even valued at retail prices, the total electricity stored in a battery over its entire lifespan will not pay for the upfront cost of the battery, even if the electricity itself were free.
Batteries are a generic technology. If they were useful for grid energy storage, they would be used for it already, not just proposed for exotic future energy generation methods. In particular, wind power is terrible because it is erratic (and badly timed where it has trends), and it would be the existing technology that most benefits from improved storage.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2015-05-02T15:36:27.498Z · LW(p) · GW(p)
That hasn't stopped Musk from planning to couple batteries with solar power.
Replies from: Douglas_Knight, None
↑ comment by Douglas_Knight · 2015-05-03T01:44:32.822Z · LW(p) · GW(p)
The question was about the present, not the future. Maybe Musk will be able to lower the price of batteries in the future, but his current price is pretty much what I said. What he has achieved is to make lithium batteries about as cheap as existing consumer batteries, not even as cheap as the sodium-sulfur batteries that power companies use at the moment, let alone what is necessary for widespread deployment.
↑ comment by [deleted] · 2015-05-02T15:42:58.544Z · LW(p) · GW(p)
Musk is claiming orders of magnitude reduction in cost.
Replies from: Lumifer
↑ comment by Lumifer · 2015-05-02T16:18:15.922Z · LW(p) · GW(p)
Musk is claiming orders of magnitude reduction in cost.
"Orders of magnitude"? That means at least a hundred times? Methinks you're mistaken.
Replies from: None
↑ comment by [deleted] · 2015-05-02T17:00:55.272Z · LW(p) · GW(p)
Meaning more than 1 order of magnitude, which necessitates the plural.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-05-02T22:16:23.907Z · LW(p) · GW(p)
I think Musk speaks of roughly an 8% improvement in battery cost per year. At that pace it takes about three decades to make them one order of magnitude cheaper.
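As a quick check of that pace (a Python sketch assuming a steady 8% annual cost reduction):

    import math
    # Years until batteries are 10x cheaper at a constant 8% annual cost reduction.
    years = math.log(10) / -math.log(1 - 0.08)
    print(round(years))  # ~28 years, i.e. roughly three decades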
Replies from: None
↑ comment by [deleted] · 2015-05-02T22:19:24.556Z · LW(p) · GW(p)
The prices Musk is quoting are a full 10x less than analyst estimates of what those batteries should cost. Tesla would not offer the product unless they felt they could make a sufficient profit, so their costs must be lower still. That is what I was talking about.
Replies from: Lumifer
↑ comment by Lumifer · 2015-05-02T22:46:42.588Z · LW(p) · GW(p)
The prices Musk is quoting are a full 10x less than analyst estimates of what those batteries should cost.
Can we see some links? These claims don't make sense to me. Musk didn't achieve any breakthroughs in battery technology.
Replies from: None
↑ comment by drethelin · 2015-05-02T05:27:39.626Z · LW(p) · GW(p)
Saltwater causes huge amounts of wear and tear, and weather fluctuations can completely destroy your ship. What you're basically doing is paying a ton more money per square foot of solar panel space than you would be on land, because every bit of that space needs to be attached to a ship.
↑ comment by Nornagest · 2015-05-01T22:47:07.874Z · LW(p) · GW(p)
I imagine in most places it'd be cheaper to buy an acre of land on the outskirts of town, or to rent an acre of otherwise unused rooftop from your local big-box store, than to build a barge with equivalent deck area; Wikipedia informs me for example that a Nimitz-class aircraft carrier has a deck area of only about six acres.
Land-based solutions also let you plug directly into the grid rather than futzing with also-expensive battery storage.
Replies from: None
↑ comment by [deleted] · 2015-05-04T09:45:26.742Z · LW(p) · GW(p)
buy an acre of land on the outskirts of town
Opportunity cost. Fifteen years later, that is suburbia. (And the current suburbia is a slum. Never buy a house unless you are 95% sure this won't happen!)
Replies from: Nornagest, tut
↑ comment by Nornagest · 2015-05-04T16:56:55.173Z · LW(p) · GW(p)
That should already be priced into the present value of the property. On the other hand, if you can see the land value going up, that'll have an effect on expected property tax rates; but that should be a relatively minor line item. And not all cities are expanding in that way; if you pulled this right now in the States, in fact, there's a good chance that the cheap land you're picking up would once have been intended for a subdivision before the bottom fell out of the housing market.
Meanwhile, the barge full of solar panels will probably have needed hull repairs at least once in those fifteen years, and its batteries (assuming lead-acid, which give the best energy-to-price ratio with current technology, and assuming a cycle per day) will have been replaced two or three times.
↑ comment by tut · 2015-05-04T19:34:26.896Z · LW(p) · GW(p)
If you think that the land will be more valuable in a few decades, that is an argument for wanting to own it. If your solar adventures can pay for just the interest on what you bought the land for, then you have the upside of land speculation without the downside.
↑ comment by Luke_A_Somers · 2015-05-02T00:20:28.486Z · LW(p) · GW(p)
Batteries? Batteries?
There's the problem.
↑ comment by ChristianKl · 2015-05-01T23:44:59.402Z · LW(p) · GW(p)
Solar panels don't take that much space. Elon Musk has a nice graphic in yesterday's presentation of his new battery: https://youtu.be/yKORsrlN-2k?t=3m32s The amount of space required for enough solar panels to produce enough energy for the whole world is tiny.
Replies from: D_Alex
comment by [deleted] · 2015-05-04T10:39:09.513Z · LW(p) · GW(p)
Why do we discuss the typical mind fallacy more than the atypical mind fallacy (the latter is not even an accepted term; I came up with it)?
I am far more likely to assume "I am such a special snowflake" than to assume everybody is like me. Basically this is what the ego, the pride, the vanity in me wants to do.
Replies from: CBHacking, Ishaan, MockTurtle
↑ comment by CBHacking · 2015-05-05T08:49:17.820Z · LW(p) · GW(p)
Best guess: it's simply because Typical Mind is overwhelmingly more common (though this could be an example of TMF at work right here!). Humans are social animals, who value the agreement of others with their own views. It's easy and comfortable to assume that other people will think similarly to you. There's an even deeper level than that, though: you are the only person whose mind you are truly familiar with, and so there's a huge availability bias in favor of your own thought processes on any subject. It requires more thought to consider what other people - either in particular, or generally - would think of a situation than it does to form your own thoughts; you do the latter automatically just by considering the situation at all. Many people will never put forth the extra effort without being prodded to do so.
Even those who claim to be their own special snowflake commonly do value commonality with other people. Hipsters (not saying you are one; it's just a convenient category of identifiable people who, ironically, share a useful set of criteria) may proudly claim to think differently from the rest of society, but even then they are agreeing with each other about what to think differently about, and are frequently thinking differently in the same way. If you drink PBR but claim to do so "ironically", you're a hipster; you belong to a society that may have some differences from the dominant one, but is internally relatively consistent. If you only drink micro-brewed craft beers of at least 7% ABV made with organically-grown hops, then you're a couple of different kinds of beer snob, but in a way that people can relate to; maybe they also prefer stronger beers, or stick to organic produce, or whatever, and they know other people who have those other preferences too, so they can visualize you as the intersection of those groups, and you can be a member (a "special" member, but one nonetheless) of groups such as "craft beer snobs". If you drink Bud Light and Coors Light but only when mixed with pear juice and Tabasco sauce, you're actually a special snowflake... otherwise known as being just weird. People won't really be able to relate to your tastes, and (except when trying to signal your different-ness) you probably won't talk about your atypical taste when you're at a bar and somebody strikes up a conversation.
I'll admit I've had AMF moments myself, though. Topics I avoid talking about because I don't expect anybody else to be interested, or situations where I think literally everybody must think some way except me because I don't see any other counterexamples. It's rare, though; at 28 I probably experience as much TMF in a week as I can recall AMF experiences in my life.
↑ comment by Ishaan · 2015-05-07T20:59:04.146Z · LW(p) · GW(p)
Typical mind fallacy is "that person behaved that way for the same reasons I would behave that way, and they would like what I would like, and dislike what I dislike".
Even if you think you are special and different, you might still implicitly assume that everyone knows that crinkling that noisy bag of chips is annoying simply because it's annoying to you and therefore flare up in irritation at the one guy in the library doing so. You say "how inconsiderate", but they don't even notice when other people crinkle chips, so they think you just appeared out of nowhere and started being unreasonable and mean on purpose.
"Special snowflake" attitudes don't run counter to this; it's an entirely separate thing, operating on a "higher" level. Your ego might think you're a special snowflake, but your id doesn't take that into account when you instantly react.
↑ comment by MockTurtle · 2015-05-05T10:09:16.340Z · LW(p) · GW(p)
I would say that it has to do with the consequences of each mistake. When you subconsciously assume that others think the way you do, you might see someone's action and immediately assume they have done it for the reason you would have done it (or, if you can't conceive of a reason you would do it, you might assume they are stupid or insane).
On the other hand, assuming people's minds differ from yours may not lead to particular assumptions in the same way. When you see someone do something, it doesn't push you into thinking that there's no way the person did it for any reason you would do it. I don't think it will have that same kind of effect on your subconscious assumptions. I might be missing something, though. How do you see the atypical mind fallacy affecting your behaviour/thoughts in general?
Replies from: None↑ comment by [deleted] · 2015-05-05T10:27:33.680Z · LW(p) · GW(p)
For example, I often think I am unusually cowardly or clumsy. Then I am totally surprised to find that after about 3 months of martial arts practice I am already better on both counts than maybe 20-30% of the new starters; I was sure I would never ever get better at it. That result roughly suggests average ability - but then why does it feel so unusually low?
I tend to think others are far more social than me. Then I start wondering: the fact that I have been living in the same flat for 3 years now and have never had a chat with a neighbor cannot be 100% my fault; it is 50% mine for not initiating such a conversation, but also 50% theirs, as they didn't either. So it may actually be that they are not that much more social than me.
Replies from: Ishaan, MockTurtle
↑ comment by Ishaan · 2015-05-07T21:04:39.808Z · LW(p) · GW(p)
That one is fundamental attribution error, I think. The real reason you didn't chat with neighbors is because you do not repeatedly bump into your neighbor in spontaneous circumstances on a regular basis - even very social people often don't chat with neighbors. It's more about circumstance than personality.
↑ comment by MockTurtle · 2015-05-05T13:49:43.471Z · LW(p) · GW(p)
From these examples, I might guess that these mistakes fall into a variety of already existing categories, unlike something like the typical mind fallacy which tends to come down to just forgetting that other people may have different information, aims and thought patterns.
Assuming you're different from others, and making systematic mistakes caused by this misconception, could be attributed to anything from low self-esteem (which is more to do with judgements of one's own mind, not necessarily a difference between one's mind and other people's), to the Fundamental Attribution Error (which could lead you to think people are different from you by failing to realise that you might behave the same way if you were in the same situation as they are, due to your current ignorance of what that situation is). Also, I don't know if there is a fallacy name for this, but regarding your second example, it sounds like the kind of mistake one makes when one forgets that other people are agents too. When all you can observe is your own mind, and the internal causes from your side which contribute to something in the outside world, it can be easy to forget to consider the other brains contributing to it. So, again, I'm not sure I would really put it down to something as precise as 'assuming one's mind is different from that of other people'.
(Edit: The top comment in this post by Yvain seems to expand a little on what you're talking about.)
comment by Risto_Saarelma · 2015-05-03T06:08:06.830Z · LW(p) · GW(p)
Just how bad of an idea is it for someone who knows programming and wants to learn math to try to work through a mathematics textbook with proof exercises, say Rudin's Principles of Mathematical Analysis, by learning a formal proof system like Coq and using that to try to do the proof exercises?
I'm figuring, hey, no need to guess whether whatever I come up with is valid or not. Once I get it right, the proof assistant will confirm it's good. However, I have no idea how much work it'll be to get even much simpler proofs than what are expected of the textbook reader right, how much work it'll be to formalize the textbook proofs even if you do know what you're doing, and whether there are areas of mathematics where you need an inordinate amount of extra work to get machine-checkable formal proofs going to begin with.
Replies from: Anatoly_Vorobey, Epictetus, redlizard, Drahflow, IlyaShpitser
↑ comment by Anatoly_Vorobey · 2015-05-03T10:30:17.502Z · LW(p) · GW(p)
It's a bad idea. Don't do it. You'll be turned off by all the low-level drudgery and it'll distract you from the real content.
Most of the time, you'll know if you found a solid proof or not. Those times you're not sure, just post a question on math.stackexchange, they're super-helpful.
↑ comment by Epictetus · 2015-05-03T15:50:38.647Z · LW(p) · GW(p)
Mathematicians have come up with formal languages that can, in principle, be used to write proofs in a way that they can be checked by a simple algorithm. However, they're utterly impractical. Most proofs leave some amount of detail to the reader. A proof might skip straightforward algebraic manipulations. It might state an implication and leave the reader to figure out just what happened. Actually writing out all the details (in English) would at least double the length of most proofs. Put in a formal language, and you're looking at an order-of-magnitude increase in length.
That's a lot of painstaking labor for even a simple proof. If the problem is the least bit interesting, you'll spend a lot more time writing out the details for a computer than you did solving it.
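As a tiny illustration of the level of explicitness involved, here is a machine-checked proof in Lean 4 (used here only as an example of the genre; it assumes Mathlib is available for the ring tactic) of a fact a textbook would state without comment. Anything less routine blows up far more:

    import Mathlib.Tactic  -- assumption: Mathlib is installed, for the `ring` tactic

    -- Even "the square of an even number is even" must be spelled out step by step.
    theorem sq_of_even_is_even (n : Nat) (h : ∃ k, n = 2 * k) :
        ∃ m, n * n = 2 * m := by
      cases h with
      | intro k hk =>
        refine ⟨2 * k * k, ?_⟩  -- supply the witness m = 2 * k * k
        rw [hk]                 -- goal becomes 2 * k * (2 * k) = 2 * (2 * k * k)
        ring                    -- a decision procedure closes the arithmetic identity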
Replies from: Risto_Saarelma
↑ comment by Risto_Saarelma · 2015-05-04T16:42:19.321Z · LW(p) · GW(p)
Mathematicians have come up with formal languages that can, in principle, be used to write proofs in a way that they can be checked by a simple algorithm. However, they're utterly impractical.
I understood that part of the univalent foundations project is to develop a base formalism for mathematics that's amenable to the same kind of layered abstraction with formal proofs that you get with programs in modern software engineering. The basic formal language for proofs is like raw lambda calculus: you can see it works in theory, but it'd be crazy to write actual stuff in it.
So is it possible that in the future we might be able to have something that's to the present raw proof languages as Haskell is to basic lambda calculus, and that it might actually be feasible to write proofs on top of established theorem libraries with the highest level of proof code concise enough for comfortable human manipulation?
I also understand that Coq does some limited proof search based on the outline given by the human operator, which is another interesting usability groove on top of the raw language. Of course both using a complex, software-engineering like theorem library and giving proof-hints to a Coq style program are pretty obvious expert skills which you'll expect to have after being quite familiar with knowing how mathematical proofs work in general.
↑ comment by redlizard · 2015-05-05T23:34:04.906Z · LW(p) · GW(p)
I have tried exactly this with basic topology, and it took me bloody ages to get anywhere despite considerable experience with Coq. It was a fun and interesting exercise in both the foundations of the topic I was studying and Coq, but it was by no means the most efficient way to learn the subject matter.
↑ comment by Drahflow · 2015-05-03T20:43:04.381Z · LW(p) · GW(p)
The Metamath project was started by a person who also wanted to understand math by coding it: http://metamath.org/
Generally speaking, machine-checked proofs are ridiculously detailed. But being able to create such detailed proofs did boost my mathematical understanding a lot. I found it worthwhile.
↑ comment by IlyaShpitser · 2015-05-03T10:09:35.157Z · LW(p) · GW(p)
"There is no royal road to geometry."
The way we teach proofs and mathematical sophistication is ad hoc and subject specific. I wish I knew a better general way, but barring that, perhaps start with a mathematical subject close to programming. For instance logic or complexity theory. I wouldn't bother with proof assistants until you are pretty comfortable with proofs.
comment by sixes_and_sevens · 2015-05-02T00:54:49.723Z · LW(p) · GW(p)
If someone reports inconsistent preferences in the Allais paradox, they're violating the axiom of independence and are vulnerable to a Dutch Book. How would you actually do that? What combination of bets should they accept that would yield a guaranteed loss for them?
Replies from: gjm, Xachariah, Manfred
↑ comment by gjm · 2015-05-02T19:59:42.837Z · LW(p) · GW(p)
There is a demonstration of exactly this in Eliezer's post from 2008 about the Allais paradox.
(Eliezer modified the numbers a bit, compared with other statements of the Allais paradox that I've seen. I don't think this makes a substantial difference to what's going on.)
Replies from: cousin_it
↑ comment by Xachariah · 2015-05-02T18:33:49.717Z · LW(p) · GW(p)
The point of the Allais paradox is less about how humans violate the axiom of independence and more about how our utility functions are nonlinear, especially with respect to infinitesimal risk.
There is an existing Dutch Book for eliminating infinitesimal risk, and it's called insurance.
Replies from: Kindly
↑ comment by Kindly · 2015-05-02T19:36:43.206Z · LW(p) · GW(p)
Yyyyes and no. Our utility functions are nonlinear, especially with respect to infinitesimal risk, but this is not inherently bad. There's no reason for our utility to be everywhere linear with wealth: in fact, it would be very strange for someone to equally value "Having $1 million" and "Having $2 million with 50% probability, and having no money at all (and starving on the street) otherwise".
Insurance does take advantage of this, and it's weird in that both the insurance salesman and the buyers of insurance end up better off in expected utility, but it's not a Dutch Book in the usual sense: it doesn't guarantee either side a profit.
The Allais paradox points out that people are not only averse to risk, but also inconsistent about how they are averse to it. With U(nothing) = 0, any expected utility maximizer treats the two experiments the same way (in Wikipedia's notation): it prefers 1A and 2A if U($5 million) < 1.1 × U($1 million), and 1B and 2B otherwise. So a risk-neutral utility function like U(X) = X picks gambles 1B and 2B, while a sufficiently risk-averse (strongly concave) utility function picks 1A and 2A. Picking gambles 1A and 2B, on the other hand, cannot be described by any utility function.
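To make this concrete, here is a small Python sketch (the utility functions are illustrative choices of mine, not from the thread) that evaluates the four Wikipedia gambles:

    import math

    # Gambles as (probability, payoff in dollars) pairs, Wikipedia's notation.
    gambles = {
        "1A": [(1.00, 1_000_000)],
        "1B": [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)],
        "2A": [(0.11, 1_000_000), (0.89, 0)],
        "2B": [(0.10, 5_000_000), (0.90, 0)],
    }

    def eu(gamble, u):
        return sum(p * u(x) for p, x in gamble)

    utilities = {
        "risk-neutral  U(x) = x": lambda x: x,
        "mildly averse U(x) = log(1+x)": lambda x: math.log1p(x),
        "very averse   U(x) = x**0.05": lambda x: x ** 0.05,
    }

    for name, u in utilities.items():
        pick1 = max(("1A", "1B"), key=lambda g: eu(gambles[g], u))
        pick2 = max(("2A", "2B"), key=lambda g: eu(gambles[g], u))
        print(f"{name}: picks {pick1} and {pick2}")
    # Every utility function picks matching letters; the common human pattern {1A, 2B} never appears.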
There's a Dutch book for the Allais paradox in this post; read the part after "money pump".
Replies from: Xachariah
↑ comment by Xachariah · 2015-05-02T19:53:09.679Z · LW(p) · GW(p)
I didn't mean to imply nonlinear functions are bad. It's just how humans are.
Picking gambles 1A and 2B, on the other hand, cannot be described by any utility function.
Prospect theory describes this and even has a post here on LessWrong. My understanding is that humans have both a non-linear utility function and a non-linear risk function. This seems like a useful safeguard against imperfect risk estimation.
[Insurance is] not a Dutch Book in the usual sense: it doesn't guarantee either side a profit.
If you set up your books correctly, then it is guaranteed. A Dutch book doesn't need to work with only one participant, and in fact many Dutch books only work on populations rather than individuals, in the same way insurance only guarantees a profit when properly spread across groups.
Replies from: Kindly
↑ comment by Kindly · 2015-05-02T20:18:55.652Z · LW(p) · GW(p)
Insurance makes a profit in expectation, but an insurance salesman does have some tiny chance of bankruptcy, though I agree that this is not important. What is important, however, is that an insurance buyer is not guaranteed a loss, which is what distinguishes it from other Dutch books for me.
Prospect theory and similar ideas are close to an explanation of why the Allais Paradox occurs. (That is, why humans pick gambles 1A and 2B, even though this is inconsistent.) But, to my knowledge, while utility theory is both a (bad) model of humans and a guide to how decisions should be made, prospect theory is a better model of humans but often describes errors in reasoning.
(That is, I'm sure it prevents people from doing really stupid things in some cases. But for small bets, it's probably a bad idea; Kahneman suggests teaching yourself out of it by making yourself think ahead to how many such bets you'll make over a lifetime. This is a frame of mind in which the risk thing is less of a factor.)
↑ comment by Manfred · 2015-05-02T01:08:27.217Z · LW(p) · GW(p)
You get them to pay you for one, in terms of the other. People will pay you for a small chance of a big payoff in units of a medium chance of medium payoff. People will pay you for the certainty of a moderate reward by giving up a higher reward with a small chance of failure. All of the good examples of this I can think of are already well-populated business models, but I didn't try very hard so you can probably find some unexploited ones.
Replies from: sixes_and_sevens, torekp
↑ comment by sixes_and_sevens · 2015-05-02T11:15:07.755Z · LW(p) · GW(p)
I get that you can do this in principle, but in the specific case of the Allais Paradox (and going off the Wikipedia setup and terminology), if someone prefers options 1B and 2A, what specific sequence of trades do you offer them? It seems like you'd give them 1A, then go 1A -> 1B -> (some transformation of 1B formally equivalent to 2B) -> 2A -> (some transformation of 2A formally equivalent to 1A') -> 1B' ->... in perpetuity, but what are the "(some transformation of [X] formally equivalent to [Y])" in this case?
Replies from: one_forward
↑ comment by one_forward · 2015-05-02T16:21:09.248Z · LW(p) · GW(p)
You can stagger the bets and offer either a 1A -> 1B -> 1A circle or a 2B -> 2A -> 2B circle.
Suppose the bets are implemented in two stages. In stage 1 you have an 89% chance of the independent payoff ($1 million for bets 1A and 1B, nothing for bets 2A and 2B) and an 11% chance of moving to stage 2. In stage 2 you either get $1 million (for bets 1A and 2A) or a 10/11 chance of getting $5 million.
Then suppose someone prefers a 10/11 chance of 5 million (bet 3B) to a sure $1 million (bet 3A), prefers 2A to 2B, and currently has 2B in this staggered form. You do the following:
- Trade them 2A for 2B+$1.
- Play stage 1. If they don't move on to stage 2, they're down $1 from where they started. If they do move on to stage 2, they now have bet 3A.
- Trade them 3B for 3A+$1.
- Play stage 2.
The net effect of those trades is that they still played gamble 2B but gave you a dollar or two. If they prefer 3A to 3B and 1B to 1A, you can do the same thing to get them to circle from 1A back to 1A. It's not the infinite cycle of losses you mention, but it is a guaranteed loss.
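A rough simulation (my own sketch, not from the comment) makes the guaranteed loss visible: the pumped agent ends up with exactly gamble 2B's payoff distribution, minus the trading fees.

    import random

    def pumped_agent(rng):
        fees = 1                                  # pays $1 to swap 2B for 2A
        if rng.random() < 0.11:                   # stage 1: 11% chance of reaching stage 2
            fees += 1                             # pays $1 to swap the sure $1M (3A) for 3B
            win = rng.random() < 10 / 11          # stage 2, played as bet 3B
        else:
            win = False                           # the 89% "nothing" branch
        return (5_000_000 if win else 0), fees

    def bet_2b(rng):
        return 5_000_000 if rng.random() < 0.10 else 0

    rng, n = random.Random(0), 200_000
    pumped = [pumped_agent(rng) for _ in range(n)]
    direct = [bet_2b(rng) for _ in range(n)]
    print(sum(p > 0 for p, _ in pumped) / n)      # ~0.10, same win rate as plain 2B
    print(sum(d > 0 for d in direct) / n)         # ~0.10
    print(sum(f for _, f in pumped) / n)          # ~1.11: the agent always pays $1 or $2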
↑ comment by torekp · 2015-05-02T15:20:44.308Z · LW(p) · GW(p)
"People will pay you," as in different people, true, but I really doubt that you can get the same person to keep paying you over and over through many cycles. They will remember the history, and that will affect their later behavior.
My cards on the table: Allais was right. The collection of VNM axioms, taken as a whole, is rationally non-binding.
comment by [deleted] · 2015-05-04T09:42:28.813Z · LW(p) · GW(p)
How can we have Friendly AI even if we humans cannot agree about our ethical values? This is an SQ because this was probably the first problem solved (it is just so obvious), yet I cannot find it.
I have not finished the sequences yet, but they sound a bit optimistic to me - as if basically everybody is a modern utilitarian and the rest of the people just don't count. To give you the really dumb question: what about religious folks? Is it just supposed to be a secular-values AI and they can go pound sand, or is some sort of agreement or compromise drawn up with them and then implemented? Is some sort of generally agreed Human Values system a prerequisite?
My issue here is that if we want to listen to everybody, then this will be a never-ending debate. And if you draw the line and, e.g., include only people with reasonably utilitarian value systems, where exactly do you draw the line, etc.?
Replies from: hairyfigment, Ishaan, DanArmak
↑ comment by hairyfigment · 2015-05-14T17:47:24.331Z · LW(p) · GW(p)
As I told someone else, this pdf has preliminary discussion about how to resolve differences that persist under extrapolation.
The specific example of religious disagreements seems like a trivial problem to anyone who gets far enough to consider the question. Since there aren't any gods, the AI can ask what religious people would want if they accepted this fact. (This is roughly why I would oppose extrapolating only LW-ers rather than humanity as a whole.) But hey, maybe the question is more difficult than I think - we wouldn't specifically tell the AI to be an atheist if general rules of thinking did not suffice - or maybe this focus on surface claims hides some deeper disagreement that can't be so easily settled by probability.
↑ comment by Ishaan · 2015-05-07T21:08:52.701Z · LW(p) · GW(p)
The "if we knew more, thought faster, were more the people we wished we were, had grown up farther together" CEV idea hopes that the disagreements are really just misunderstandings and mistakes in some sense. Otherwise, take some form of average or median, I guess?
Replies from: None
↑ comment by [deleted] · 2015-05-08T08:33:43.298Z · LW(p) · GW(p)
Sorry, what is CEV?
Replies from: DanArmak
↑ comment by DanArmak · 2015-05-08T10:27:06.345Z · LW(p) · GW(p)
Coherent Extrapolated Volition.
Or was that a snarky question referring to the fact that CEV is underspecified and may not exist?
Replies from: None
↑ comment by DanArmak · 2015-05-04T17:51:37.527Z · LW(p) · GW(p)
This is an SQ because this was probably the first problem solved (it is just so obvious), yet I cannot find it.
AFAIK it has not been solved and if it has, I would love to hear about it too. I also believe that, like you said, while it's possible for humans to negotiate and agree, any agreement would clearly be a compromise, and not the simultaneous fulfillment of everyone's different values.
CEV has, IIRC, a lot of handwaving and unfounded assumptions about the existence and qualities of the One True Utility Function it's trying to build. Is there something better?
comment by Pfft · 2015-05-03T18:55:01.601Z · LW(p) · GW(p)
So my university sends ~weekly email reminders to not walk alone in dark places, because of robberies. And recently Baltimore introduced a night-time curfew to prevent rioting.
But is there any technical reason that you can't rob people or riot in daylight? Or is it all some giant coordination game where the police work hard to enforce the law during office hours, but then they go home for some well-earned rest and relaxation, while the streets devolve into a free-for-all?
Replies from: DanArmak, drethelin, Epictetus
↑ comment by DanArmak · 2015-05-03T19:13:00.787Z · LW(p) · GW(p)
I suppose people aren't robbed "in broad daylight", when there are many people on the streets, because bystanders can help the victim, call the police, or take videos that show the robber's face.
As for rioting, the rioters would rather attack and rob a store when the store-owner isn't there to defend it or, again, call for help or take photos.
But even if that weren't so, there might be game-theoretic reasons to rob and riot at night. Suppose police (or other authorities) need to invest some amount of effort to make each hour of the day or night crime-free. They don't have enough budget to make all hours crime-free; besides, the last few hours require the most effort, because it's easier to make robbers delay their robbery by a few hours than to make them never rob at all.
So which hours should the police invest their effort in? Since robbing affects pedestrians, and rioting affects stores and shoppers, then clearly police should prioritize daylight or working hours, when there are many more people at risk, when people can't just decide to stay home because they're afraid of being robbed, and when the police themselves want to have their shifts. And once police are more active during certain hours, criminals will become less active during those hours.
↑ comment by drethelin · 2015-05-03T22:12:29.950Z · LW(p) · GW(p)
https://keysso.net/community_news/May_2003/improved_lighting_study.pdf crime goes down when areas are better lit. I think there is a psychological reluctance to commit crimes if you think you're going to be clearly visible, regardless of your actual chances of getting away with it. Sort of like the experiment with the eyes posted above the honor system bagels in an office.
↑ comment by Epictetus · 2015-05-04T15:19:40.391Z · LW(p) · GW(p)
But is there any technical reason that you can't rob people or riot in daylight?
The odds of finding a sufficiently isolated spot to safely carry out a robbery are a lot lower in the day. If I leave my office and walk to my car at 2 pm, I pass at least a dozen people on the way. If I do it at 2 am, I'm usually the only one around. Daylight hours are a lot riskier for a robber because there are a lot more people walking around. That means more witnesses and more people who could interfere.
comment by James_Miller · 2015-05-03T00:03:19.868Z · LW(p) · GW(p)
Why is it such a big deal for SpaceX to land its used booster rocket on a floating platform rather than just having the booster parachute down into the ocean and then be retrieved?
Replies from: None, fubarobfusco, Pfft, drethelin
↑ comment by [deleted] · 2015-05-03T06:34:09.703Z · LW(p) · GW(p)
Salt water is VERY UNKIND to precision metal machinery like rocket engines. Also the tank has such thin walls that chaotic wave action will destroy it.
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2015-05-03T13:37:42.439Z · LW(p) · GW(p)
Hm, this isn't intuitive to me. How could a rocket that was designed to withstand the pressures and conditions of space not be able to take some salt water? And what about just adding a layer of coating that would protect it?
Replies from: None, Viliam
↑ comment by [deleted] · 2015-05-03T19:09:49.894Z · LW(p) · GW(p)
Consider a Coke can.
When it's closed and pressurized you have a very hard time crushing it. The internal pressure is converted to a force of tension that resists deformation. Once it's been opened, you can crush it with one hand from the side. But it's much stronger along the axis of the cylinder, since the force is directed through all the material rather than deforming it inwards.
A rocket, if scaled down to the size of a Coke can, has walls much thinner than a Coke can, and is much longer relative to its width. You can create great torques by hitting the sides to bend it, or crush it inwards. Imagine the force of tens of tons of water suddenly slapping onto the side of this tank as waves lap around, unevenly across multiple parts of the tank.
Consider a rocket.
It must, with the least possible amount of mass, generate a high acceleration along its direction of motion while presenting a very small surface area in that direction of motion. This dictates that it is long and thin, and able to withstand high forces along that long axis. But every kilogram you add to its mass is one kilogram you can't get to orbit, or a couple more kilograms of fuel. You make it withstand the forces of its operational environment well; other forces, not so much. As far as I know, most rockets get a good deal of their shear strength perpendicular to the long axis from pressure within their tanks - some cannot even be held sideways without being pressurized. I've seen figures to the effect that more than half of the structural strength of a Falcon 9 tank comes from it being pressurized by inert gases during operation.
Optimizing for its operational environment plus mass puts limits on how much you can optimize for other things. There is no one abstract 'durability' factor.
Salt water does not play nice with metals, especially fine tubes and components that will be channeling ridiculous energies at god knows how many RPM and with very fine precision. It corrodes, it dries up and gums joints, etc. Shuttle SRBs were a different story - they were big, dumb, thick, wide metal tubes filled with fuel that burned like a sparkler, not fine actuators of active control. If someone could find a simple coating that could protect complex powerful machines from the damage of salt water without tradeoffs, I'm sure the navy would LOVE to know about it.
Replies from: James_Miller, adamzerner
↑ comment by James_Miller · 2015-05-03T20:18:44.222Z · LW(p) · GW(p)
Great answer!
↑ comment by Adam Zerner (adamzerner) · 2015-05-03T20:03:12.400Z · LW(p) · GW(p)
I see, that makes much more sense now. Thank you!
So I (now) understand that rockets are designed to be thin and light to reduce drag and gravitational forces. As for the cost-benefit of adding a protective layer, the cost is the added drag/gravitational forces, but the benefit seems to be huge (being able to land it in water). Is it really that hard to generate the necessary propulsion forces?
Obviously the answer is "yes", but that goes against what makes intuitive sense to me. Can you explain?
Replies from: CBHacking, Viliam
↑ comment by CBHacking · 2015-05-05T07:38:41.146Z · LW(p) · GW(p)
I think part of the problem is a fundamental misunderstanding of what parachuting into the ocean does to a rocket motor. The motors are the expensive part of the first stage; I don't know exact numbers, but they are the complicated, intricate, extremely-high-precision parts that must be exactly right or everything goes boom. The tank, by comparison, is an aluminum can.
The last landing attempt failed because a rocket motor's throttle valve had a bit more static friction than it should have, and stuck open a moment too long. SpaceX's third launch attempt - the last failed launch they've had, many years ago with the Falcon 1 - failed because the motor didn't shut off instantly before stage separation, like it should have. As far as I know, people still don't know why Orbital ATK (formerly Orbital Sciences)'s last launch attempt failed, except that it was obviously an engine failure. We talk about rocket science, but honestly the theoretical aspects of rocketry aren't that complicated. Rocket engineering, though, that's a bloody nightmare. You get everything as close to perfect as you can, and sometimes it still fails catastrophically and blows away more value than most of the people reading this thread will earn in their lifetimes, leaving virtually nothing to tell the tale of what happened.
What does all that have to do with parachute recovery of booster stages? Well, once you've dunked those motors in saltwater, they're a write-off. They can't be trusted to ever again operate perfectly without fairly literally rebuilding them, which defeats most of the purpose of recovering the booster.
There's nothing you could coat a rocket motor with that would both survive that motor operating and make it economical to re-use the motor after plunging into the ocean. The closest thing I can think of would be some kind of protective bubble that expands to protect the motors from the ocean once their job is done. It would need to be watertight, impact-resistant (the rocket still hits the water pretty hard, even with parachutes), able to deploy around the motors reliably, avoid causing a bending moment that collapses the tank (which has minimal pressure, because its fuel is depleted and any excess pressurizing agent you carry is wasted weight to orbit), and able to operate after being exposed to the environment in close proximity to a medium-lift rocket's primary launch motors. Maybe it's possible, but I can't think of how to do it reliably enough to be worth the added cost on launch.
↑ comment by Viliam · 2015-05-04T09:19:47.082Z · LW(p) · GW(p)
Each additional kilogram of the rocket is probably extremely expensive. I don't know how much weight that extra protective layer would add, but I can imagine it could more than double the weight of the rocket, and the weight of the fuel it needs, etc.
Replies from: ZankerH↑ comment by ZankerH · 2015-05-07T08:20:57.918Z · LW(p) · GW(p)
Look at the Tsiolkovsky rocket equation - a rocket's delta-v (velocity change potential) is proportional to the log of its mass ratio (its mass with fuel divided by its mass without fuel). For modern rockets, that means about twenty kilos of fuel for every kilo of anything else (the rocket included). You really don't want to add structural mass if there's any way to avoid it.
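To make the mass penalty concrete, here is a minimal sketch of that equation in Python; the Isp and mass numbers are made-up illustrative values, not real Falcon 9 figures:

```python
import math

def delta_v(isp_s, dry_mass_kg, propellant_kg, g0=9.81):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_full / m_empty)."""
    m_full = dry_mass_kg + propellant_kg
    return isp_s * g0 * math.log(m_full / dry_mass_kg)

# Illustrative stage (made-up numbers): 20 t dry, 400 t propellant, Isp ~300 s.
base = delta_v(300, 20_000, 400_000)
# Add one tonne of "protective" structure, same propellant load.
armored = delta_v(300, 21_000, 400_000)
print(f"baseline: {base:.0f} m/s, with +1 t structure: {armored:.0f} m/s")
print(f"penalty: {base - armored:.0f} m/s of delta-v")
```

Every extra tonne of dry mass shows up directly as lost delta-v (or lost payload), which is why an armor layer is such an unattractive trade.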
↑ comment by Viliam · 2015-05-03T15:38:57.693Z · LW(p) · GW(p)
Just guessing: In space the pressure comes from inside, in water from outside; it is easier to protect in one direction than in both. In space the pressure difference is at most one atmosphere, in water it depends on how deep you submerge. The salt water also attacks the metal chemically.
↑ comment by fubarobfusco · 2015-05-03T04:37:02.792Z · LW(p) · GW(p)
It's a step toward landing it back at the launch site for rapid reuse.
The project's long-term objectives include returning a launch vehicle first stage to the launch site in minutes and to return a second stage to the launch pad following orbital realignment with the launch site and atmospheric reentry in up to 24 hours. Both stages will be designed to allow reuse a few hours after return.
— https://en.wikipedia.org/wiki/SpaceX_reusable_launch_system_development_program
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2015-05-03T13:34:36.858Z · LW(p) · GW(p)
1) Why is it important that the rocket that just landed take off again so soon? I've always had the impression that space missions aren't too frequent.
2) Does transporting the rocket from the ocean back to the launch site cost a lot of money? Is avoiding this a big benefit of the reusable launch system?
Replies from: Vaniver↑ comment by Vaniver · 2015-05-03T17:47:34.452Z · LW(p) · GW(p)
Why is it important that the rocket that just landed take off again so soon?
While that's worded in terms of customer benefit, I think the actual reason is supply-side: hovering is costly, and so landing the stages as cheaply as possible implies doing it quickly.
I've always had the impression that space missions aren't too frequent.
This may be because they are so expensive; if reusable rockets decrease the launch costs significantly enough, there may be many more launches.
Replies from: Lalartu↑ comment by Pfft · 2015-05-03T17:09:26.380Z · LW(p) · GW(p)
The Space Shuttle did something like this: the solid rocket boosters were landed in the ocean with parachutes and reused. I found a PDF from NASA which describes the procedure. They disassembled the entire thing into parts, inspected each part for damage, and then restored and reused the parts as appropriate. By contrast, I think what SpaceX is aiming for is more like an airplane: you just fill the tank with new fuel and launch it again.
(The PDF claims that the refurbishment program is cost effective, but word of mouth has it that if you factor in the cost of retrieving the boosters, the whole thing cost more than just manufacturing new ones from scratch. See also this thread in the KSP forum.)
Replies from: CBHacking↑ comment by CBHacking · 2015-05-05T07:52:18.750Z · LW(p) · GW(p)
You're also talking about fundamentally different kinds of rocket boosters. The Space Shuttle used solid fuel boosters, which are basically nothing except a tube packed full of energetically burning material, an igniter to light said material, and a nozzle for the generated gases to come out. They couldn't throttle, couldn't gimbal, couldn't shut off or restart, didn't use cryogenic fuel so didn't need insulation, didn't rely on pressurized fuel so they didn't need turbopumps... In fact, as far as I know they basically didn't have any moving parts at all!
You ever flown a model rocket, like an Estes? That little tube of solid grey gritty stuff that you use to launch the rocket is basically a miniature version of the solid fuel boosters on the Space Shuttle. The shuttle boosters were obviously bigger, and were a lot tougher (which made them unacceptably heavy for something like the Falcon 9's first stage) so they could survive the water landing, but fundamentally they were basically just cylindrical metal tubes with a nozzle at the bottom.
Despite that, reconditioning them for re-use was still so expensive that it's unclear if the cost was worth it. Now, of course, they cost a lot less to build than a Falcon 9 first stage, but every one of the Falcon 9 first stage's nine Merlin 1D engines is many times as complicated as the entire solid booster used on the Space Shuttle. Even the first stage tank is much more complicated, since it needs to take cryogenic fuels and massive internal pressurization.
Replies from: kpreid↑ comment by kpreid · 2015-05-21T01:43:17.351Z · LW(p) · GW(p)
This isn't all that relevant, but the Shuttle SRBs were gimbaled (Wikipedia, NASA 1, NASA 2).
(I was thinking that there is probably at least a mechanical component to arming the ignition and/or range safety systems, but research turned up this big obvious part.)
Replies from: CBHacking↑ comment by CBHacking · 2015-05-22T04:23:18.978Z · LW(p) · GW(p)
Whoops, you're right. I thought the gimbaling was just on the SSMEs (attached to the orbiter) but in retrospect it's obvious that the SRBs had to have some control of their flight path. I'm now actually rather curious about the range safety stuff for the SRBs - one of the dangers of an SRB is that there's basically no way to shut it down, and indeed they kept going for some time after Challenger blew up - but the gimbaling is indeed an obvious sign that I should have checked my memory/assumptions. Thanks.
Replies from: kpreid↑ comment by kpreid · 2015-05-22T14:27:50.792Z · LW(p) · GW(p)
I'm now actually rather curious about the range safety stuff for the SRBs - one of the dangers of an SRB is that there's basically no way to shut it down, and indeed they kept going for some time after Challenger blew up
What I've heard (no research) is that thrust termination for a solid rocket works by charges opening the top end, so that the exhaust exits from both ends and the thrust mostly cancels itself out, or perhaps by splitting along the length of the side (destroying all integrity). In any case, the fuel still burns, but you can stop it from accelerating further.
Replies from: Lumifer↑ comment by Lumifer · 2015-05-22T15:43:57.213Z · LW(p) · GW(p)
Hm. A solid rocket burns from one end; opening up the nose will do nothing to the thrust. Splitting a side, I would guess, will lead to uncontrolled acceleration with a chaotic flight path, but not zero acceleration.
Replies from: kpreid↑ comment by kpreid · 2015-05-23T15:02:35.193Z · LW(p) · GW(p)
Apparently that's true of some model rocket motors, but the SRBs have a hollow core running through the entire length of the propellant, so that it burns from the center out to the casing along the entire length at the same time.
Replies from: CBHacking
comment by ChristianKl · 2015-05-02T22:20:56.524Z · LW(p) · GW(p)
Elon Musk recently proposed to run the whole world on solar panels + lithium ion batteries.
Is there enough lithium in the world that we can mine to build enough batteries?
Replies from: D_Alex, Vaniver, advancedatheist↑ comment by D_Alex · 2015-05-04T07:55:47.016Z · LW(p) · GW(p)
Yes, there is.
http://en.wikipedia.org/wiki/Lithium#Terrestrial gives reserves of 13 million tonnes, ie 13 billion kg. I think these are "proven" reserves, ie economical to mine at current prices.
The amount of lithium in a Li-ion battery is not that much, roughly 500 g/kWh. So a 10 kWh Tesla Power Wall would contain about 5 kg of lithium. We can make 2.6 billion Power Walls... and E. Musk said at the launch that 2 billion would be enough to convert the entire planet's energy usage - including industry and transport - to renewable electricity.
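A quick back-of-the-envelope check of the arithmetic above (using the 500 g/kWh and 13-million-tonne figures quoted in this comment, not independently verified):

```python
reserves_kg = 13e6 * 1_000       # 13 million tonnes of lithium, in kg
lithium_per_kwh_kg = 0.5         # ~500 g of lithium per kWh of Li-ion storage
powerwall_kwh = 10               # the 10 kWh Power Wall variant

lithium_per_pack = lithium_per_kwh_kg * powerwall_kwh   # ~5 kg per pack
packs = reserves_kg / lithium_per_pack
print(f"{packs / 1e9:.1f} billion 10 kWh packs from proven reserves")  # ~2.6
```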
↑ comment by Vaniver · 2015-05-03T18:05:26.251Z · LW(p) · GW(p)
Is there enough lithium in the world that we can mine to build enough batteries?
I have mostly seen claims to the contrary. However, there are other technologies that work very well at storing energy at industrial scales; phase change materials come to mind as promising.
My impression is that this sort of thing is not done already because nonrenewables are so cheap. Musk pays more attention to the energy markets than I do, so he might be correct that solar is going to happen in a big way in the near term, but I think this is more likely to be a conspicuous consumption thing than an economically wise thing for the next decade or so.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-03T20:57:47.648Z · LW(p) · GW(p)
A bit of Googling brings me to http://reneweconomy.com.au/2015/solar-grid-parity-world-2017:
Investment bank Deutsche Bank is predicting that solar systems will be at grid parity in up to 80 per cent of the global market within 2 years, and says the collapse in the oil price will do little to slow down the solar juggernaut.
Over the last few decades, solar has roughly doubled in price efficiency every 7 years (i.e., cost per watt has halved). Batteries are also getting exponentially cheaper.
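To see what that kind of trend implies if it holds (the ~7-year halving time is this comment's own rough estimate, not a verified constant), a toy projection:

```python
def projected_cost(years, halving_time=7.0, cost_today=1.0):
    """Cost per watt relative to today, assuming a fixed halving time."""
    return cost_today * 0.5 ** (years / halving_time)

for years in (7, 14, 21):
    print(f"after {years:2d} years: {projected_cost(years):.3f}x today's cost")
# ~0.5x after 7 years, ~0.25x after 14, ~0.125x after 21 - if the trend holds.
```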
Coal, by contrast, isn't getting cheaper - if anything a little more expensive. It's also easy to make it more expensive by setting higher environmental standards and cutting coal subsidies, if there's political desire to do so.
↑ comment by advancedatheist · 2015-05-02T23:57:05.993Z · LW(p) · GW(p)
Lithium doesn't seem particularly cosmically abundant:
http://upload.wikimedia.org/wikipedia/commons/e/e6/SolarSystemAbundances.png
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-03T13:23:46.555Z · LW(p) · GW(p)
Cosmic abundance isn't the important thing. Abundance on earth is.
Replies from: None
comment by Gunnar_Zarncke · 2015-05-10T20:28:06.589Z · LW(p) · GW(p)
Where can I list the pages that I saved with the "save" button below the post? Or how else does saving work? I seem to remember having read how it works, but I can't find it now.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2015-09-23T00:19:03.042Z · LW(p) · GW(p)
comment by Username · 2015-05-03T15:54:20.142Z · LW(p) · GW(p)
So we want FAI's values to be compatible with human values. But how are people proposing FAI will deal with the fact that humans have conflicting and inconsistent values, both within individual humans and between different humans? Do we just hope that our values, when extrapolated and reasoned far enough, are consistent enough to be feasibly satisfied? I feel like I might have missed the LW post where this kind of thing was discussed, or maybe I read it and was left unsatisfied with the reasoning.
What happens, for example, to the person who takes their religion very seriously and wants everyone to convert to their religion? Or just people whose values are heavily tied to their religious beliefs, of which there probably are a few, considering there are billions of religious people around.
Replies from: Vaniver, hairyfigment, Viliam↑ comment by Vaniver · 2015-05-03T17:51:58.845Z · LW(p) · GW(p)
I feel like I might have missed the LW post where this kind of thing was discussed, or maybe I read it and was left unsatisfied with the reasoning.
I do not think anyone who's thought about it deeply thinks that they have the best possible solution to this problem, and most proposed solutions are preliminary at best.
↑ comment by hairyfigment · 2015-05-08T02:13:03.818Z · LW(p) · GW(p)
There's a pdf discussing the matter linked here - though as you can see, a lot of people failed to read it (or "seer" has a lot of sockpuppets).
↑ comment by Viliam · 2015-05-04T09:11:18.928Z · LW(p) · GW(p)
How would you as an individual deal with having inconsistent values? Let's assume that you become super intelligent, so you can rather reliably see the possible consequences of any changes you would try to do.
The person who takes their religion very seriously probably believes that the religion is factually true. Or possibly they believe that the religion is false, but that it is a "Noble Lie", i.e. that the belief in the religion is a net benefit. Either way, there is an empirical claim they have related to the religion. They would probably be interested in knowing whether this claim is true.
These are not full answers, just hints towards possible answers.
comment by redding · 2015-05-01T22:32:19.696Z · LW(p) · GW(p)
One common way to think about utilitarianism is to say that each person has a utility function and whatever utilitarian theory you subscribe to somehow aggregates these utility functions. My question, more-or-less, is whether an aggregating function exists that says that (assuming no impact on other sentient beings) the birth of a sentient being is neutral. My other question is whether such a function exists where the birth of the being in question is neutral if and only if that sentient being would have positive utility.
EDIT: I do recall a similar-seeming post: http://lesswrong.com/lw/l53/introducing_corrigibility_an_fai_research_subfield/
Replies from: DanielLC↑ comment by DanielLC · 2015-05-02T01:35:13.790Z · LW(p) · GW(p)
If you try to do that, you get a paradox: if A is not creating anyone, B is creating a new person and letting them lead a sad life, and C is creating a new person and letting them lead a happy life, then U(A) = U(B) < U(C) = U(A). You can't simultaneously say that it's better for someone to be happy than sad and that both are equivalent to nonexistence.
Replies from: redding
comment by Dahlen · 2015-05-07T16:01:58.797Z · LW(p) · GW(p)
Has anyone ever studied the educational model of studying just one subject at a time, and does it have a name? During my last semester in college, it occurred to me that, with so many subjects at once competing for my time and attention, I cannot dedicate myself to learning any given one in depth, and just end up with mediocre grades in all of them. The model I had in mind went like this:
1) Embark on one, and only one, subject for a few weeks or couple of months (example: high school trigonometry);
2) Study it full-time and exhaust the textbook;
3) Take an exam in it;
4) Have a short vacation (1-2 weeks);
5) Pass on to the next subject (example: early modern history).
There could be yearly review sessions a couple of weeks long, so that students have their memory refreshed on the subjects they have learned so far.
Leaving aside some issues relating to the practicality of scheduling classes like that, does this work better/worse than the model in which students' schedules are diversified? Would it just get monotonous after a while, and does this outweigh the benefits of being able to dedicate your focus to one single subject?
Replies from: iarwain1, wadavis, Elo↑ comment by iarwain1 · 2015-05-07T23:28:19.515Z · LW(p) · GW(p)
It has been studied and it's actually usually not recommended. This is the principle of interleaving. See Make It Stick, especially chapter 3. See also the recently linked document by richard_reitz.
Replies from: wadavis↑ comment by wadavis · 2015-05-07T18:57:06.311Z · LW(p) · GW(p)
A few comments from my experience, these may not be applicable to all circumstances.
I found material to have a digestion time; too much material too fast and I would stop learning. If understanding A depends on understanding B, which depends on C, it was easier to learn C and sleep on it, then learn and sleep on B, then A, as opposed to taking A, B, and C all in one bite. In addition to the short term, I experienced this in the long term; I would frequently look back at courses from the previous years and wonder how I ever found them challenging. When I had Calculus 1 and 2 back to back I struggled hard; when I had a year break, then Calculus 3, another year break, then Calculus 4, I felt I had a better grasp of the material.
Also, as you approach higher levels of education and specialize, your classes overlap material more and more. In high school I took grade 12 physics, then grade 12 calculus. I was very upset to discover after the fact the derivative relations between position, velocity, and acceleration, and that the equations were simple to derive once the missing calculus piece of the puzzle was provided. Once I got to the end of my undergrad, every class I was taking was looking at the same problem from different perspectives, so any one of them taken by itself would have been without supporting knowledge to lean on.
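For reference, the relations mentioned above: with position $x(t)$, velocity and acceleration are its successive derivatives, and for constant acceleration the standard grade-12 kinematics formulas fall out of two integrations:

$$v = \frac{dx}{dt}, \qquad a = \frac{dv}{dt}, \qquad v(t) = v_0 + a t, \qquad x(t) = x_0 + v_0 t + \tfrac{1}{2} a t^2.$$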
And lastly, topic burnout would kill me. This has to do with mental digestion time, but sometimes it was nice to skip a class or two, not think about the topic for a week, and then jump back in with renewed energy.
But of course I am also full of bias, because learning is complex and I'm just latching on to the few patterns I've recognized.
comment by James_Miller · 2015-05-03T03:44:41.039Z · LW(p) · GW(p)
What is the LessWrong-like answer to whether someone born a male but who identifies as female is indeed female? Relevant to my life because of this. I'm likely to be asked about this if for no other reason than students seeing how I handle such a question.
Replies from: PeerGynt, None, Dias, buybuydandavis, minorin, ChristianKl, Epictetus, Vaniver, Strangeattractor, Elo↑ comment by PeerGynt · 2015-05-03T04:12:24.954Z · LW(p) · GW(p)
What is the LessWrong-like answer to whether someone born a male but who identifies as female is indeed female?
The Lesswrong-like answer to whether a blue egg containing Palladium is indeed a blegg is "It depends on what your disguised query is".
If the disguised query is which pronoun you should use, I don't see any compelling reason not use the word that the person in question prefers. If you insist on using the pronoun associated with whatever disguised query you associate with sex/gender, this is at best an example of "defecting by accident".
Replies from: mkf, Good_Burning_Plastic, Unknowns, NancyLebovitz↑ comment by mkf · 2015-05-03T16:54:30.012Z · LW(p) · GW(p)
By the way, it is one of the best examples I've seen of quick, practical gains from reading LW: the ability to sort out problems like this.
Replies from: Viliam↑ comment by Viliam · 2015-05-04T09:30:12.015Z · LW(p) · GW(p)
This. After reading the Sequences, many things that seemed like "important complicated questions" before are now reclassified as "obvious confusions in thinking".
Even before reading the Sequences I was already kind of suspicious that something is wrong when long debates on such questions do not lead to meaningful answers, even though the questions do not involve any difficult math or any experimentally expensive facts. But I couldn't transform this suspicion into an explanation of what exactly was wrong, so I didn't feel certain about it myself.
After reading Sequences, many "deep problems" became "yet another case of someone confusing a map with the territory". -- But the important thing is not merely learning that the password is "map is not the territory", but the technical details of how specifically the maps are built, and how specifically the artifacts arise on those maps.
Replies from: None↑ comment by [deleted] · 2015-05-04T09:54:18.941Z · LW(p) · GW(p)
Sounds a lot like General Semantics; at least, Eric S. Raymond derived something similar based on GS. Example: http://esr.ibiblio.org/?p=161
Replies from: Viliam↑ comment by Good_Burning_Plastic · 2015-05-09T16:28:36.995Z · LW(p) · GW(p)
In this case the disguised query is "Were I to ask 'What would stop someone assigned male at birth to fraudulently claim to be a trans woman in order to seek admission to Smith College?', what would I mean by 'fraudulently'?"
↑ comment by Unknowns · 2015-05-04T10:12:44.021Z · LW(p) · GW(p)
If you "use the word that the person in question prefers," then the word acquires a new meaning. From that moment on, the word "male" means "a human being who prefers to be called 'male'" and the word "female" means "a human being who prefers to be called 'female'". These are surely not the original meaning of the words.
Replies from: PeerGynt↑ comment by PeerGynt · 2015-05-04T14:52:11.604Z · LW(p) · GW(p)
Why do you care about the 'original' meaning of the word?
Let's imagine we are arguing about trees falling in the forest. You are a lumberjack who relies on a piece of fancy expensive equipment that unfortunately tends to break if subjected to acoustic vibrations. You therefore create a map where the word "sound" means acoustic vibrations. This map works well for you and helps you resolve most of the disguised queries you could be interested in.
Then you meet me. I make a living producing cochlear implants. My livelihood depends on making implants that reliably generate the qualia of sound. I therefore have a different map from you, where the word 'sound' means the subjective experience in a person's brain. This works well for the disguised queries that I care about.
If we meet at a cocktail party and you try to convince me that the 'original' meaning of sound is acoustic vibrations, this is not a dispute about the territory. What is happening is that you are arguing the primacy of your map over mine, which is a pure status challenge.
The purpose of categories in this context is to facilitate communication, i.e., the transfer of information about the territory from one mind to another. Agreeing on a definition is sometimes important to avoid confusion over what is being said. However, if there is no such confusion, insisting on one definition over another is a pure monkey status game.
Replies from: Jiro↑ comment by Jiro · 2015-05-04T16:26:16.639Z · LW(p) · GW(p)
Most common terms will, when used in a context that doesn't imply a specific meaning, be taken by the listener to imply a default meaning. Furthermore, some contexts do imply a meaning, but only weakly; if the context makes slightly more sense with meaning A, but you know that most people default to meaning B, and you are Bayesian, you should infer that the intended meaning was B.
Caring about the "original meaning of the word" is about this default meaning, and is not nonsensical. If I say that this person is female, without qualifiers such as "genetically female", what will others understand me as saying? Will what they understand me as saying be more or less accurate than if I refer to them as male?
↑ comment by NancyLebovitz · 2015-05-03T11:38:34.997Z · LW(p) · GW(p)
Is there additional material about disguised queries?
Replies from: James_Miller↑ comment by [deleted] · 2015-05-04T10:07:18.363Z · LW(p) · GW(p)
Like others said, the answer is more like "it depends on what you want to know". In this case, you need to figure out what the point of an all-female college is at all. Fixing systemic disadvantages? Rape safety? A more cooperative atmosphere? What you need to figure out is not whether trans women are "really, truly, indeed" women, which is a meaningless kind of sentence, but more like whether they can be categorized as women for the purposes of the goals of an all-women college.
But another relevant aspect is, and I guess this is not discussed so much on LW so I am not really sure of the best way of wording it... basically that means to an end tend to ossify into values in themselves, via becoming part of identity, tradition, etc. There is the old story of a Zen monastery where the cat made a lot of noise during meditation, so they ended up tying it up, and 200 years later they were writing treatises about the spiritual significance of having a tied-up cat around, symbolizing the tied-up mind.
Even if the idea of an all-women college was originally utilitarian, a means to an end, now it is easily identity-based. And identity isn't a rational thing. There is no truly objective answer to whether biological women should or should not accept trans women into their idea of women-as-an-identity-group, because identity is something that tends to break free from the original reasons for creating it. People tend to ask precisely these "are X truly, really, indeed Y" questions when discussing identity, and there is no good answer, because on the rational level we can only ask the question "is it useful to categorize X as Y for the purposes of Z", and identity is precisely something that breaks free of such purposes and becomes a tribe of its own. There is no right or wrong answer about who should be allowed to be a member of a tribe. It is largely based on the feelings of the other tribe members. You cannot really give an objective, rational answer to who should be considered a hardcore Manchester United fan or what is true, real punk music as opposed to pop punk. This is just based on how the people in question feel.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-03T09:00:42.144Z · LW(p) · GW(p)
Except all-women's colleges arguably still serve a purpose. There is reasonable evidence that women learn better in an all-women environment.
You may ask how trans-"women" affect this. Well, there isn't much data on the subject (and there is unlikely to be anytime soon, given how reluctant people are to research politically incorrect topics). However, given how many m-to-f transsexuals appear to be masculine men with an autogynephilia fetish, my guess would be they are effectively men for this purpose.
Replies from: Vaniver, None↑ comment by Vaniver · 2015-07-03T14:20:10.669Z · LW(p) · GW(p)
However, given how many m-to-f transsexuals appear to be masculine men with an autogynephilia fetish, my guess would be they are effectively men for this purpose.
That's a misrepresentation of both Bailey and Sailer; Bailey claims that most MtFs cluster into either "started out gay" or "started out straight," with the first group transitioning early in life (high school, college, etc.) and the second group transitioning late in life (typically after being married with children). Thus, the sort of MtF that will go to a women's college is disproportionately likely to be romantically interested in men / have a more feminine personality.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-03T18:05:19.239Z · LW(p) · GW(p)
Bailey claims that most MtFs cluster into either "started out gay" or "started out straight," with the first group transitioning early in life (high school, college, etc.) and the second group transitioning late in life
It is unlikely to stay that way now that incentives have changed; in particular, I expect a lot of "heterosexual male who wants to enter the women's bathroom" type "transsexuals".
Replies from: None↑ comment by [deleted] · 2015-07-03T11:29:00.935Z · LW(p) · GW(p)
I don't know. My wife and former GFs told me they tend to prefer mixed working environments; they said all-female environments get too filled with all kinds of back-stabby intrigue ("she said that she said that you said that I said") and guys dampen that.
↑ comment by Dias · 2015-05-04T02:09:10.623Z · LW(p) · GW(p)
What does it mean to be female? It has to be something such that babies, animals and people in tribal cultures can be classified as female or not. Let's call this property, that baby girls, hens and women in hunter-gatherer tribes share, and baby boys etc. do not, property P. People who identify as female are presumably claiming they have property P, and presumably think this is a substantive claim.
Now, could P be something such that merely believing you had property P, made you have property P? Certainly there are some properties like this:
- X has P if and only if ( X has two X chromosomes OR X believes ( X has property P ) )
but I think this is clearly unsatisfactory. For example, it would mean that an ordinary young boy who, upon being taught about gender for the first time, was momentarily mistaken and thought he was female, would instantly become female. And it would mean that transwomen were asserting a disjunction of a falsehood and a weird recursive clause.
There are social-role based alternatives, along the lines of
- X has P if and only if ( X wishes to be treated in the typical manner of people with property P )
but this doesn't work for Tomboys, who wish to be treated broadly like boys but are nonetheless definitely girls. Nor does it work for extreme feminists, who do not wish women (including themselves) to be treated in the typical way women are treated.
Now, whether believing something is sufficient to make it true is of course a separate issue from what is politically prudent for you to say. My guess is that your students who would ask you this question have a few motivations:
- If you say that the map is not the territory, they can safely reject you as an outdated and uncaring reactionary, and will reject what you say on other subjects.
- If you say that believing things makes them true, they can say "even our ultra-conservative republican lecturer agrees".
My advice to you is to say 'mu'. Ask your students what they mean by female, or why they are asking. Then you can respond in the correct manner according to their definition, pointing out that if they don't like the answer, maybe they didn't really mean that definition.
Replies from: philh↑ comment by philh · 2015-05-04T15:18:17.414Z · LW(p) · GW(p)
It has to be something such that babies, animals and people in tribal cultures can be classified as female or not.
Why? We use the word "female" when referring to babies and animals, but that doesn't mean we're necessarily talking about the same thing as when we refer to adult humans; and if we are, it doesn't mean that we're talking about something they actually have. (I assume that it doesn't make sense to talk about whether a baby is straight or gay, for example.)
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-03T08:51:19.033Z · LW(p) · GW(p)
but that doesn't mean we're necessarily talking about the same thing as when we refer to adult humans
Why not?
I assume that it doesn't make sense to talk about whether a baby is straight or gay, for example.
Well, according to the (admittedly rather dubious) party line being gay is an intrinsic property and not a choice and most definitely not subject to environmental influence by pro-gay memes.
Heck, the official (and even more dubious) party line on trans-people is that they have always been their gender trapped in the wrong bodies.
Replies from: philh↑ comment by philh · 2015-07-03T10:27:24.709Z · LW(p) · GW(p)
Why not?
Because the map is not the territory? You can argue that they are the same thing, but the fact that they use the same word isn't sufficient.
Well, according to the (admittedly rather dubious) party line being gay is an intrinsic property and not a choice and most definitely not subject to environmental influence by pro-gay memes.
That doesn't mean it's a property that babies have. They might have the property "will be gay when they hit puberty", but that's a different property. A six-month old baby might have a gene that will give her a speech defect, but for now she speaks just as well as every other baby her age.
Heck, the official (and even more dubious) party line on trans-people is that they have always been their gender trapped in the wrong bodies.
I don't think this is true, but I'm not an expert.
Replies from: VoiceOfRa, VoiceOfRa↑ comment by VoiceOfRa · 2015-07-03T19:47:56.505Z · LW(p) · GW(p)
Heck, the official (and even more dubious) party line on trans-people is that they have always been their gender trapped in the wrong bodies.
I don't think this is true, but I'm not an expert.
It almost certainly isn't. Of course believing that makes you an "evil transphobe" according to the official party line, especially if you consider the implication for gender reassignment surgery of children.
Replies from: philh↑ comment by philh · 2015-07-03T23:12:08.749Z · LW(p) · GW(p)
I mean that I don't think that's the party line, but I'm not an expert on what the party line is.
Replies from: None↑ comment by [deleted] · 2015-07-05T16:49:06.934Z · LW(p) · GW(p)
The 'line' is that it is very complicated. There are people with strong body dysphorias who have always had them, there are people who care much more about social presentation than anatomy, and everybody in between or in combination. The social presentation can be separate from body image considerations, and for those people in particular 'wrong body' would be inaccurate. There are people whose experienced-from-within gender is different at different times or who do not strongly identify with masculine or feminine, or with bits of both. Knowing multiple people in various positions on these spectra, trying to collapse the experience of everyone with non-default gender situations to one party line is a recipe for confusion, unproductive arguments, and missing the point for some of them.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-06T00:59:16.436Z · LW(p) · GW(p)
So was Jenner always a woman trapped in a man's body? The answer according to his (her) transition miniseries is yes (complete with tales of sneaking into his mother's closet to wear her clothes).
Replies from: None↑ comment by [deleted] · 2015-07-06T04:10:45.510Z · LW(p) · GW(p)
Sounds like that is the case for her, then. Can't say I've been keeping up with that particular story. One of my friends has had a similarly-describable situation from a very early age as well.
Though again, using that phrase is lumping everything into one essentialist label that says everything rather than decomposing it into the potentially more useful descriptive subcategories 'how one wants to be considered socially', 'the body one wishes one had' / 'the body one is willing to have given current medical technology and the costs and tradeoffs thereof', 'how one wishes to behave', and 'how one identifies internally'. Often these go together making that phrase more or less applicable, but sometimes they don't, and to get at the truth can then require finer detail depending on what you want to know and who you are describing.
Two other friends of mine have had more complicated situations in their lives - one who goes to some effort to decrease their external femininity in favor of a more androgynous presentation to match the way they feel internally and their dislike of a feminine appearance [while not going through the efforts of invasive medical transition because the effort and tradeoffs are not worth it to them, but who would still choose a very different physical appearance if the tradeoffs were smaller and easier], and one who is fine with a male body and being called 'he' but who behaves in a very feminine manner because that is just the way he is; he feels being physically male doesn't dictate his behavior, and feels wrong behaving in a masculine manner.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-07T01:14:31.753Z · LW(p) · GW(p)
Sounds like that is the case for her, then. Can't say I've been keeping up with that particular story.
Except that looking at his previous life - which included fathering several children, winning men's Olympic medals, and being a media hound with corporate sponsorship - suggests that this is not in fact the case, and suggests other motivations for him to do this. Namely, the need to pull a stunt to get back in the spotlight.
↑ comment by VoiceOfRa · 2015-07-03T17:58:13.373Z · LW(p) · GW(p)
Because the map is not the territory?
That's an argument for bringing our map closer to the territory, i.e., applying the word "gender" in humans to the same concept we use for animals. Not for completely messing up our map.
Replies from: philh, Good_Burning_Plastic↑ comment by philh · 2015-07-03T23:24:14.549Z · LW(p) · GW(p)
That "i.e." is doing an awful lot of work. I don't agree that the map is messed up, and moving a label doesn't necessarily bring it closer to the territory.
Replies from: VoiceOfRa↑ comment by Good_Burning_Plastic · 2015-07-04T09:00:32.540Z · LW(p) · GW(p)
applying the word "gender" in humans to the same concept we use for animals
I'm not aware of the word "gender" being commonly applied to non-human animals for any concept, other than grammatical gender. You might be thinking of the concept usually referred to as "sex".
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-04T18:20:18.595Z · LW(p) · GW(p)
If you want to follow that distinction, then I agree that "gender" doesn't point to anything real aside from what is commonly pointed to by the word "sex". Heck when "gender" first became used in its non-grammatical meaning, it was a euphemism for "sex" since the latter had acquired a meaning (as [Edit: an act]) that made it not necessarily SFW.
Replies from: gjm↑ comment by gjm · 2015-07-04T22:54:03.913Z · LW(p) · GW(p)
A pedantic correction: "gender" appears to have had that non-grammatical meaning since the 15th century (and has also had an NSFW meaning as a verb since even earlier) but (if the OED is to be trusted, which usually it is) it's true that "gender" became widely used to mean males/females collectively in the 20th century because "sex" was too distracting. (It wasn't "sex" as a verb, though, but "sex" as a noun meaning "copulation".)
↑ comment by buybuydandavis · 2015-05-03T23:50:43.472Z · LW(p) · GW(p)
'Whatever you might say something "is", it is not.' Whatever we might say belongs to the verbal level and not to the un-speakable, objective levels.
'what we see, hear, feel, speak about or infer, is never it, but only our human abstraction about 'it'
'A map is not the territory.'
'Identity is invariably false to facts.'
'Every identification is bound to be in some degree a misevaluation'
-- Korzybski
↑ comment by minorin · 2015-05-03T18:30:16.756Z · LW(p) · GW(p)
I don't know the LessWrong-like answer, so I can only offer you the human, empathic answer.
Based on the phrasing of your question:
whether someone born a male but who identifies as female is indeed female
and the fact that you have posted it to LessWrong, I understand it to be a question about constructing a useful and consistent model of the human condition, rather than about respecting an actual or hypothetical human being. If so, I think you are asking the wrong question.
Your students want to learn from you, but on a more basic level, they want to feel safe with you. If you have a trans student, or a student with a trans friend/relative, she is likely to take your answer to this question very personally. Your choice boils down to whether you offer a personal welcome (by recognizing your student's identity) or a personal affront (by implying that you have more authority than she does to determine who she "really is").
I should add that it is a common failure mode for humans, when confronted with a counterexample to their existing model of the human condition, to insist that their model is correct and that the fellow human they are dealing with is a bad data point. As well as rude and demeaning, this approach is irrational and intellectually dishonest.
Replies from: Dias, Jiro↑ comment by Dias · 2015-05-04T02:14:40.843Z · LW(p) · GW(p)
a counterexample to their existing model of the human condition
I'm not sure how this could be counted as a counterexample to anyone's model. Presumably most people would agree that there are people who are confused about their sexuality. It would only be a counterexample to that model if the student was correct, but whether or not the student is correct is precisely what we are discussing.
If James agreed with the student, this would not be a counterexample to his beliefs, and if he disagrees with the student, he would not agree that they represented a counterexample to the model.
Replies from: minorin↑ comment by minorin · 2015-05-04T03:44:30.091Z · LW(p) · GW(p)
Presumably most people would agree that there are people who are confused about their sexuality.
"Confused about their sexuality" is a particularly uncharitable characterization of a transgender person. Many are not confused, rather absolutely certain. Unless you're using the term "confused" as a polite way of indicating that you believe such a person to be mistaken or delusional, in which case you would be begging the question.
By the way, gender is not the same thing as sexuality.
It would only be a counterexample to that model if the student was correct, but whether or not the student is correct is precisely what we are discussing.
If one models gender as a boolean switch that can be set to either "male" or "female", and encounters an individual who has a combination of "male" and "female" characteristics, their model may not accommodate the new observation. I have watched people (who I previously considered fairly sane) break into a yelling fit when confronted with someone undergoing a gender transition, demanding to know their "real" gender and hurling insults when the response was not what they expected.
Replies from: Dias, VoiceOfRa↑ comment by VoiceOfRa · 2015-07-03T08:44:52.561Z · LW(p) · GW(p)
If one models gender as a boolean switch that can be set to either "male" or "female", and encounters an individual who has a combination of "male" and "female" characteristics, their model may not accommodate the new observation.
So did the person have a Y chromosome or not?
Replies from: None↑ comment by Jiro · 2015-05-04T02:38:16.163Z · LW(p) · GW(p)
Unfortunately for this line of argument, there are a whole lot of things one can say that may cause personal affronts, some of which are essential as part of some debates and some of which may even express factual truths. If they are generalities, they might not even be disprovable by examples of individual humans (such as statements that some class of humans is more likely to have lower scores on IQ tests).
↑ comment by ChristianKl · 2015-05-03T11:42:20.615Z · LW(p) · GW(p)
For the purposes of an all-women college, you have to ask yourself why the college is limited to women in the first place.
Maybe there's a perception out there that math isn't a women's subject. In mixed-gender classes, women are more likely to spend effort signalling their strong femininity. They spend more effort engaging in actions that signal high mating-market value when suitable mates are around.
Do transwomen trigger the same behavior? I don't think that the gender that's assigned at birth matters. On the other hand, "identifies" is a complex word. Just checking a checkbox isn't enough to stop being seen as a potential mating partner. A transwoman who spends enough effort on looking female that strangers easily identify her as female, on the other hand, is unlikely to trigger mating-market behavior.
Another reason to limit the college to women is about women being a minority that's discriminated against. Transwomen do get discriminated against and don't get included in old boys' networks. From that perspective it also seems fine to accept transwomen.
Maybe you can also think of other reasons for the policy of being all-women. Check whether those reasons matter for transwomen.
There seem to be strong laws against gender discrimination in the US, how does your college avoid getting sued for discrimination?
Replies from: James_Miller, Pfft, VoiceOfRa↑ comment by James_Miller · 2015-05-03T14:52:04.212Z · LW(p) · GW(p)
how does your college avoid getting sued for discrimination?
I'm not sure, but no one seems concerned that the courts will force us to admit men.
↑ comment by Pfft · 2015-05-03T16:53:29.476Z · LW(p) · GW(p)
There seem to be strong laws against gender discrimination in the US, how does your college avoid getting sued for discrimination?
I wonder about this also. First, there is a supreme court case from 1982, Mississippi University for Women v. Hogan, which held that admitting only women violated the Equal Protection clause of the constitution. This seems to have not had much impact on college admissions.
For one, they held that when checking whether a law violates Equal Protection by discriminating against women, it is only subject to "intermediate scrutiny", as opposed to discrimination by race, which is subject to "strict scrutiny". So the state interest that has to be served by a law in order to outweigh the discrimination against women does not have to be as compelling as for race. A concurrence also noted that "the Court's holding today is limited to the context of a professional nursing school. Ante at 723, n. 7, 727. Since the Court's opinion relies heavily on its finding that women have traditionally dominated the nursing profession, see ante at 729-731, it suggests that a State might well be justified in maintaining, for example, the option of an all-women's business school or liberal arts program." Maybe that's indeed what happened later.
It seems that the strongest law against sex discrimination in education is not the constitution, but Title IX. However, Title IX explicitly grandfathered in existing single-sex colleges: "in regard to admissions this section shall not apply to any public institution of undergraduate higher education which is an institution that traditionally and continually from its establishment has had a policy of admitting only students of one sex."
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2015-05-05T02:29:11.173Z · LW(p) · GW(p)
MUW is a public school.
↑ comment by VoiceOfRa · 2015-07-03T08:41:49.223Z · LW(p) · GW(p)
Do transwomen trigger the same behavior? I don't think that the gender that's assigned at birth matters.
Why wouldn't it? Or rather, the important feature is possession of a Y chromosome, which in almost all cases is the same thing as the "gender assigned at birth", given how the procedure tends to work. However, the phrase "gender assigned at birth" is highly misleading, since it seems to imply that the feature is based on the choices of the hospital staff and/or parents rather than the chromosomes of the child.
It's notable that the second highest paid "female" CEO would appear to be a trans-"woman", who fathered a number of children before "realizing" this. And if I may dispense with the currently fashionable charade of pretending trans-"women" were really women trapped in men's bodies, he appears to me a masculine man with an autogynephilia fetish.
↑ comment by Epictetus · 2015-05-03T05:37:36.040Z · LW(p) · GW(p)
Historically, the transgendered were swept under the rug in Western society and we're left with hopelessly inadequate naming conventions. Some cultures have come up with extra genders in order to accommodate the transgendered and this sort of question is moot.
Working within the present naming conventions, we can develop a simplified model. Suppose there are two sexes (M and F) and two genders (M and F). To every person we can associate an ordered pair, where e.g. (M, F) would denote someone who is biologically male but identifies as female. Mathematically, we have a map from the set of humans H to the Cartesian product {M, F} × {M, F}.
The question at hand can be stated thus: suppose we wish to define a projection operator p: {M, F} × {M, F} → {M, F}. Is it better to project onto the first coordinate or the second? In other words, should we associate (M, F) with M or with F?
Projecting onto a coordinate loses information. This is unavoidable. The answer will depend on which bit of information we consider more important. That, in turn, depends entirely on our values. My own view is that if we accept the legitimacy of being transgender (as opposed to considering it a mental defect), then one's present self-identified gender is far more pertinent to daily life and so deserves priority. Changing our conventions would be ideal, but in the meantime we have to deal with accidents of history.
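A minimal sketch of the model described above, with hypothetical names of my own choosing just to make the two candidate projections explicit:

```python
from enum import Enum
from typing import NamedTuple

class MF(Enum):
    M = "M"
    F = "F"

class Person(NamedTuple):
    sex: MF      # first coordinate: biological sex
    gender: MF   # second coordinate: self-identified gender

# The two candidate projections from {M,F} x {M,F} onto {M,F}.
def project_sex(p: Person) -> MF:
    return p.sex

def project_gender(p: Person) -> MF:
    return p.gender

someone = Person(sex=MF.M, gender=MF.F)   # the (M, F) case discussed above
print(project_sex(someone).value, project_gender(someone).value)  # M F
```

Each projection simply discards the other coordinate, which is the information-loss point made above; the code doesn't settle which coordinate matters, it only makes the choice explicit.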
↑ comment by Vaniver · 2015-05-03T17:59:07.042Z · LW(p) · GW(p)
What is the LessWrong-like answer to whether someone born a male but who identifies as female is indeed female?
What do you mean by "indeed"? That is, neural categories seems like the right place to start. (And, especially as a professor, that seems like the right thing to talk about when students ask about a political issue--not your position, but the mental algorithms you're executing to determine your position.)
(Very similar to PeerGynt's response, and drawing on the same sequence.)
↑ comment by Strangeattractor · 2015-05-05T14:49:36.056Z · LW(p) · GW(p)
Our society is currently set up to assume that there are only two genders, male and female, and that those categories imply many other things. You can encounter this everywhere, from pronouns in language to the M/F boxes on government forms. If a person cannot be adequately described by those categories, with their implicit assumptions, (and there are people who even on a biological basis cannot be described as purely male or female, based on what genitals or chromosomes they have), then they likely have a difficult life, and it is important to be kind to them, (not that it is not important to be kind to people with easy lives.)
I would keep in mind, when addressing questions regarding this issue, just how hard it would be to live a life in a society completely not set up for something so basic and personal, that by daily interactions does not acknowledge one's true experience and tries to force badly constructed and mismatched categories onto the person to make the cognitive dissonance go away, instead of addressing the problems with the categories.
↑ comment by Elo · 2015-05-04T13:03:43.925Z · LW(p) · GW(p)
My general answer to similar questions where the result is hard to tease out is:
Either way, this does not help me do the things that I do and be good at life (other than to relate specifically to those people, at which point - ask them what they want and create the categories around their preference).
gender is only a map made by humans - one humans have found useful for a long time. A map should be used any way possible to make the territory easier to predict.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-03T09:14:48.404Z · LW(p) · GW(p)
A map should be used any way possible to make the territory easier to predict.
Except it correlates with a bunch of things from chromosomes, to physical strength, to IQ, to ability to impregnate versus to become pregnant. If we look at a few prominent examples of trans-"women" the results don't appear to be hard to tease out.
Replies from: Elo↑ comment by Elo · 2015-07-04T08:39:45.921Z · LW(p) · GW(p)
I wrote a response and then realised I was unclear as to what you were saying and asking. Can you elaborate?
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-04T18:21:49.978Z · LW(p) · GW(p)
Well, I wasn't disagreeing with anything you said in the grandparent although possibly with some things you meant to imply. (I'm not completely clear on what you meant to imply.)
Replies from: Elo↑ comment by Elo · 2015-07-04T22:36:45.111Z · LW(p) · GW(p)
I was unclear as to the quote you selected and the link to your comment.
What do you think I was implying? I don't think I was implying anything particularly hidden. It would help me to understand what I might have been implying, so I can improve my communication in the future.
perhaps you meant to quote:
gender is only a map made by humans
I hope I wasn't implying that the map is wrong; I was trying to say that the map is historically quite good; and only recently is it becoming challenged with the possibilities of more modern genders opening up through technology or medical intervention.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-05T07:53:56.934Z · LW(p) · GW(p)
only recently is it becoming challenged with the possibilities of more modern genders opening up through technology or medical intervention.
Except for the most part the technologies haven't changed it, e.g., men still can't give birth and women can't impregnate women. Also, most of the people who claim to be "transgender" aren't even availing themselves of the ancient methods that can do something, e.g., Bruce Jenner wasn't even willing to cut off his penis.
Replies from: Elo↑ comment by Elo · 2015-07-05T13:33:14.765Z · LW(p) · GW(p)
most people claim...
Be careful with your sweeping statements. I personally know more than one person who has gone through gender reassignment surgery. While the ability to impregnate / be impregnated is still very much permanently attached to chromosomal sex, it is only a matter of time.
You might know of the prevalence of "Ladyboys" in Thai culture. There is a gene that is linked to that condition, and it affects a large number of their population.
I wouldn't be forcing people who don't fit into the existing gender territory to attempt to fit if they don't (especially if all they do is break your personal map about it). I believe in personal liberties as long as you don't harm others.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-06T00:37:52.231Z · LW(p) · GW(p)
While the ability to impregnate / be impregnated is still very much permanently attached to chromosomal sex, it is only a matter of time.
My point is that can't be what the "transsexual" movement [edit: is about], so why did you claim it was?
I believe in personal liberties as long as you don't harm others.
Sorry, but in this case the "liberty" in question is the right to insist others play along with their delusions and/or acting out.
Replies from: Elo↑ comment by Elo · 2015-07-06T06:25:25.907Z · LW(p) · GW(p)
You got downvoted for having a contrarian view that was also badly expressed. I have neutralised. (try to express it clearer or more delicately in the future)
My point is that can't be what the "transsexual" movement, so why did you claim it was?
I am sorry; I don't understand this sentence, was there perhaps a spelling error?
right to insist others play along with their delusions and/or acting out.
I can see what you are saying that it burdens everyone else to go along with the decisions of a person to make choices that are non-standard about gender. I haven't really considered the effect of the total burden placed on others.
When analysing this issue, I tend to weigh up these perspectives:
1. For whatever reason (the reason is not really relevant; it could range from hormonal imbalances to psychological reasons to environmental factors), a person decides their ability to fit into the M/F gender structure of society works better if they fit in a place other than the one phenotypically associated with their brain at birth. Their attitude is about their personal relationship with the world.
2. From the perspective of anyone this person interacts with: "this is slightly confusing". It might be an assault on one's understanding to be thrown into a state of confusion. There are multiple solutions to this problem; they might include:
a. avoiding the assault on your sensibilities by avoiding the people who don't fit the map
b. ensuring that there are no people who do not fit your map by changing the people, or
c. ensuring there are no people who don't fit the map, by changing the map.
We really only have power over ourselves: we sure can wish to be able to control others, but that is wrong and conflicts with personal liberties.
The person in 1; is changing themselves to better fit into the world they find themselves in.
The person in 2b is trying to change the world where it disagrees with them, whereas in 2a or 2c, the person is finding solutions that involve changing themselves, and not the world.
I wouldn't insist on anything from you, but I would like to make it clear that the 2b attitude is literally impossible to maintain. If we were to fulfil the desires of every person who wished to change the external world, the wishes would come into conflict: some person of type 1 would wish for something one way and some person of type 2b would wish it the opposite.
In this sense the right to "insist others play along", is nothing but the right to ask permission to entertain your own delusions (and maps) about yourself and your interaction with the world. It carries no bearing on what external parties do; unlike a party who is claiming that "playing along with someone else's gender delusion" is a burden on them, and insists that external parties play along with their own map (of M/F etc).
I'd like to encourage you to further express your opinion in this discussion but I have to point out; as evidence by a downvote; you are talking about a delicate issue; one likely to be downvoted simply for your view; in light of this - please be delicate in expressing your reply; and to save me some time misunderstanding; feel free to be verbose about it.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-07T01:32:51.827Z · LW(p) · GW(p)
You got downvoted for having a contrarian view that was also badly expressed. I have neutralised it. (Try to express it more clearly or more delicately in the future.)
Downvoted for concern trolling. (And no, I don't care about a single downvote.)
I am sorry; I don't understand this sentence, was there perhaps a spelling error?
Fixed. I changed the grammar of that sentence several times as I was editing.
I can see what you are saying: that it burdens everyone else to go along with a person's non-standard choices about gender. I haven't really considered the total burden placed on others.
Suppose a man walks up to you and claims to be both Jesus and John Lennon, and another man walks up to you claiming to be a woman. Would you accommodate one or both of their delusions and how? What accounts for the difference? Also note how LW takes a very different approach to discussing the two delusions.
In this sense, the right to "insist others play along" is nothing but the right to ask permission to entertain your own delusions (and maps) about yourself and your interaction with the world.
Except not all maps are equal. Some are more accurate than others.
It has no bearing on what external parties do; unlike a party who claims that "playing along with someone else's gender delusion" is a burden on them, and who insists that external parties play along with their own map (of M/F etc.).
Except it does, e.g., the deluded individual insists on men attending women's colleges and using women's bathrooms.
Replies from: Elo↑ comment by Elo · 2015-07-07T04:24:58.642Z · LW(p) · GW(p)
Suppose a man walks up to you and claims to be both Jesus and John Lennon, and another man walks up to you claiming to be a woman. Would you accommodate one or both of their delusions and how?
That depends on what is required to accommodate the two people.
If "jesus" guy insists on being worshipped, then thats infringing on the boundaries of others; if he wishes to be left to his own devices of enlightenment (talking to bushes and drinking wine-water) - then sure, good for him and his utilons of happiness.
Similarly if "woman" wishes to change the world that could be violating the personal rights of others; however keeping to yourself and what you can reasonably have power over - does not burden others.
the deluded individual insists on men attending women's colleges and using women's bathrooms.
This is an example of violating the personal rights of others. Because of this example I believe I agree with you; however, we disagree over what could or could not be permitted by reasonable liberties.
Replies from: VoiceOfRa↑ comment by VoiceOfRa · 2015-07-07T04:46:21.381Z · LW(p) · GW(p)
That depends on what is required to accommodate the two people.
Let's put it this way. Would you recommend he get psychological counseling for his delusion, or tell him that his map is a perfectly valid one and that how he (she?) feels about himself is by definition correct?
Also, suppose the man wants to adopt a kid. Should he be able to? Should adoption agencies have the right to refuse him?
the deluded individual insists on men attending women's colleges and using women's bathrooms.
This is an example of violating the personal rights of others. Because of this example I believe I agree with you; however, we disagree over what could or could not be permitted by reasonable liberties.
Unfortunately, these are examples of demands "transwomen" are making, and frequently getting in the US. Frequently backed up by state laws.
Replies from: Elo↑ comment by Elo · 2015-07-07T05:47:10.291Z · LW(p) · GW(p)
Would you recommend he get psychological counseling for his delusion, or tell him that his map is a perfectly valid one
I would do neither as both are examples of attempting to influence the world beyond the personal scope.
suppose the man wants to adopt a kid.
I have no knowledge of existing processes, but I assume somewhere it involves counselling or an evaluation as to whether the person is fit to parent. (some kind of massive paperwork process) If they pass such an ordinary process, then they should be able to do as they please.
Which is to say: yes, this model breaks down eventually (because someone has to evaluate and establish a baseline of "fit to parent" and declare whether the person fits within the lines or not), but it goes a lot further than "I have the right map because I say so, and everyone who doesn't have my map is wrong and should change their map to be like mine".
these are examples of demands "transwomen" are making.
Not gonna lie - the US is a strange place.
individual insists on men attending women's colleges and using women's bathrooms.
I can't say that I know the answers, but I would start by asking "what is a women's college, and for what purpose was it defined to be so?", followed by "what does 'men' mean in this instance?", and then "who will be directly affected by any decision, and in what way?"
comment by drethelin · 2015-05-01T20:05:43.865Z · LW(p) · GW(p)
Is there anything a non-famous, non-billionaire person can do to meaningfully impact medical research? It seems like the barriers to innovation are insurmountable for everyone with the will to try, and the very few organizations and people who might be able to surmount them aren't dedicated to doing so.
Replies from: IlyaShpitser, James_Miller, adamzerner, ChristianKl↑ comment by IlyaShpitser · 2015-05-01T20:38:41.382Z · LW(p) · GW(p)
Causal inference research.
Replies from: Vaniver, mkf↑ comment by mkf · 2015-05-03T16:46:33.519Z · LW(p) · GW(p)
Could you elaborate?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-05-03T18:05:30.662Z · LW(p) · GW(p)
Sure. I am not an MD, but this is my view as an outsider: the medical profession is quite conservative, and people that publish medical papers have an incentive to publish "fast and loose," and not necessarily be very careful (because hey you can just write another paper later if a better method comes along!)
Because medicine deals with people, you often can't do random treatment assignment in your studies (for ethical reasons), so you often have evidence in the form of observational data, where you have a ton of confounding of various types. Causal inference can help people make use of observational data properly. This is important -- most data we have is observational. And if you are not careful, you are going to get garbage from it. For example, there was a fairly recent paper by Robins et al. that basically showed that their way of adjusting for confounding in observational data was correct because they reproduced a result found in an RCT.
There is room both for people to go to graduate school and work on new methods for properly dealing with observational data for drawing causal conclusions, and for popularizers.
Popularizing this stuff is a lot of what Pearl does, and is also partly the purpose of Hernán and Robins's new book:
https://www.facebook.com/causalinference
Full disclosure: obviously this is my area, so I am going to say that. So don't take my word for it :).
Replies from: Kazuo_Thow↑ comment by Kazuo_Thow · 2015-05-03T21:31:58.871Z · LW(p) · GW(p)
Here on Less Wrong there are a significant number of mathematically inclined software engineers who know some probability theory, meaning they've read/worked through at least one of Jaynes and Pearl but may not have gone to graduate school. How could someone with this background contribute to making causal inference more accessible to researchers? Any tools that are particularly under-developed or missing?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-05-04T12:55:47.254Z · LW(p) · GW(p)
I am not sure I know what the most impactful thing to do is, by edu level. Let me think about it.
My intuition is that the best thing for "raising the sanity waterline" is what the LW community would do with any other bias: just preaching the association/causation distinction to the masses who would otherwise read bad scientific reporting and conclude garbage about e.g. nutrition. Scientists will generally not outright lie, but they are incentivized to overstate a bit, and reporters are incentivized to overstate a bit more. In general, we trust scientific output too much; so much of it is contingent on modeling assumptions, etc.
Explaining good, clear examples of gotchas in observational data is good: e.g. doctors give sicker people a pill, so it might look like the pill is making people sick. It's like the causality version of "rare cancer => likely you have a false positive, by Bayes' theorem". Unlike Bayes' theorem, this is the kind of thing people immediately grasp if you point it out, because our native causal processing is good, unlike our native probability processing, which is terrible. Association/causation is just another type of bias to be aware of; it just happens to come up a lot when we read scientific literature.
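A minimal toy simulation of the "doctors give sicker people a pill" gotcha described above; all numbers are made up purely for illustration:

```python
import random

# Toy simulation of "confounding by indication": doctors mostly give the pill to
# the sicker patients, so the pill looks harmful even though (by construction here)
# it halves everyone's risk of a bad outcome. All numbers are made up.
random.seed(0)

def simulate(n=100_000):
    treated_bad = treated_n = untreated_bad = untreated_n = 0
    for _ in range(n):
        sick = random.random() < 0.3                        # 30% of patients are severely ill
        pill = random.random() < (0.9 if sick else 0.1)     # the sick usually get the pill
        base_risk = 0.6 if sick else 0.1
        risk = base_risk * (0.5 if pill else 1.0)           # the pill genuinely helps
        bad = random.random() < risk
        if pill:
            treated_n += 1
            treated_bad += bad
        else:
            untreated_n += 1
            untreated_bad += bad
    return treated_bad / treated_n, untreated_bad / untreated_n

with_pill, without_pill = simulate()
print(f"bad-outcome rate with pill:    {with_pill:.2f}")     # ~0.25
print(f"bad-outcome rate without pill: {without_pill:.2f}")  # ~0.12
# Naive comparison: the pill "causes" bad outcomes. Stratifying by severity
# (or otherwise adjusting for the confounder) recovers the true benefit.
```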
If you are looking for some specific stuff to do as a programmer, email me :). There is plenty to do.
↑ comment by James_Miller · 2015-05-03T16:05:16.404Z · LW(p) · GW(p)
Yes, depending on how you define "meaningful". If you can speed up a cure for cancer by one day, or increase the chance of us getting a cure next year by one in a million, then the expected value of this increase in terms of human welfare is huge compared to what most people manage to accomplish in their lives.
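A back-of-the-envelope sketch of that comparison; every number below is a made-up round figure purely for illustration:

```python
# Hypothetical round numbers only -- not real statistics.
cancer_deaths_per_year = 8_000_000
deaths_per_day = cancer_deaths_per_year / 365

# Scenario A: the cure arrives one day earlier than it otherwise would have.
expected_lives_a = deaths_per_day

# Scenario B: a 1-in-a-million boost to the chance of a cure next year, assuming
# (generously) that an early cure is worth ten years' worth of averted deaths.
expected_lives_b = 1e-6 * cancer_deaths_per_year * 10

print(f"Scenario A, expected lives saved: {expected_lives_a:,.0f}")  # ~22,000
print(f"Scenario B, expected lives saved: {expected_lives_b:,.0f}")  # ~80
```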
↑ comment by Adam Zerner (adamzerner) · 2015-05-03T03:39:12.718Z · LW(p) · GW(p)
Become a famous billionaire?
Replies from: DanArmak↑ comment by ChristianKl · 2015-05-02T00:05:49.042Z · LW(p) · GW(p)
It seems like the barriers to innovation are insurmountable to everyone with the will to try
Why?
When it comes to running studies, you could run a study on the effect of taking Vitamin D in the morning vs. in the evening. Various QS people have found an effect for themselves. The effect should be there. As far as I know there is no study establishing the effect. It doesn't seem difficult for someone without resources but with a decent amount of time on their hands.
As far as diagnostic tools go, we today have a lot of smart scales and blood pressure measurement devices. As far as I know, nobody has produced a similarly smart peak flow meter (/ FEV1). Such a device would allow the gathering of more data from a lot of people. More and better data means more opportunities for scientific insight.
On a more theoretical level, I think there could be advances in organizing the large pile of biological and medical knowledge we have. I don't think that textbooks and journal articles are good media for transferring knowledge between scientists. Protein databases like UniProt contain a lot of information in a way that's easy to query. I think that finding ways to organize less structured biological insights, and the evidence for them, is an area with a high potential impact.
comment by philh · 2015-05-03T22:50:51.681Z · LW(p) · GW(p)
Is there a reason not to drink oral rehydration solution just as a part of everyday life? Maybe as a replacement for water in general, or maybe as a replacement for sports drinks? (In my case, I'd be inclined to take a bottle of it along when I go dancing.)
If this were a good idea I'd expect it to be sold in shops, and as far as I can tell it's not, but I don't know why.
It looks like a litre has about 100 calories of sugar, and half the RDA of salt, but I'm not sure how worrying that is.
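A rough check of those figures, assuming one commonly cited home recipe of about six level teaspoons of sugar and half a teaspoon of salt per litre (all values approximate):

```python
# Approximate, illustrative values only.
sugar_g = 6 * 4.2            # ~4.2 g per level teaspoon of sugar
salt_g = 0.5 * 6.0           # ~6 g per level teaspoon of table salt

kcal_from_sugar = sugar_g * 4          # ~4 kcal per gram of carbohydrate
sodium_g = salt_g * 0.393              # NaCl is ~39.3% sodium by mass
daily_sodium_guideline_g = 2.3         # common daily sodium guideline, roughly

print(f"sugar:  ~{sugar_g:.0f} g/litre -> ~{kcal_from_sugar:.0f} kcal")
print(f"sodium: ~{sodium_g:.1f} g/litre -> ~{sodium_g / daily_sodium_guideline_g:.0%} of a daily guideline")
# Output lands in the ballpark of "about 100 calories and half the RDA of salt".
```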
Replies from: ChristianKl, Ishaan↑ comment by ChristianKl · 2015-05-04T01:07:16.724Z · LW(p) · GW(p)
I think a sports drink is basically a branded version of this.
Given that sweating while dancing means loss of salt, having a drink with a bit of salt to replace the lost salt seems reasonable. Replacing all your water consumption with an isotonic drink, on the other hand, doesn't seem like a good idea.
As far as sugar goes I would use a more complex one than glucose.
Replies from: None, philh↑ comment by [deleted] · 2015-05-04T10:10:57.031Z · LW(p) · GW(p)
I figure that unless we are extremely anal about our diets and never eat fast/street/microwave/bakery food, not consuming enough salt is about the last thing to worry about. I figure the majority are on the far-too-much-sodium side.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-04T11:38:58.417Z · LW(p) · GW(p)
At the 1.5 liters per day that the average person drinks, having to make sure that you ingest enough salt isn't important.
On the other hand, at 5 liters a day, things look different.
When I go dancing I think it's plausible that I sweat out 2 liters of water. It's worth thinking about whether those two liters should be replaced by salt-free water or by an isotonic solution.
There is also the separate issue of how strong one considers the case that people eat too much salt to be. Michael Vassar, who was MetaMed's CEO, for example argues that the average European diet doesn't contain enough salt.
↑ comment by philh · 2015-05-04T16:24:06.172Z · LW(p) · GW(p)
Thanks. The sugars in my cupboard don't say what molecule they are, but I assume sucrose, so that's what I'd go for by default.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-04T17:54:25.089Z · LW(p) · GW(p)
I personally use maltodextrin when I want to add calories to a drink.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-05-14T18:10:10.718Z · LW(p) · GW(p)
Flavorless (kind of, makes tap water "taste more like tap water" to me), but it has an insane glycemic index (higher than pure sugar, by some molecular miracle I've never investigated), which makes it great for replenishing energy while working out and terrible otherwise.
(Actually, on that note, I improved my jogging performance by 20-40% by adding 75 calories of maltodextrin immediately before starting, and completely eliminated the "I'm dead now" period afterwards with another 25 calories; I had a tub of the stuff left over from weightlifting, which since I've moved and haven't joined a new gym I'm not doing anymore.)
↑ comment by Ishaan · 2015-05-07T21:22:59.939Z · LW(p) · GW(p)
Overhydration, but I'm not sure whether a sedentary person drinking it orally is at risk for that. (We know runners drinking hydration fluid and sedentary people taking hydration intravenously are at risk).
Empty calories, but you can offset that.
Excess salt (but it is now uncertain as to whether excess salt is harmful in the absence of high blood pressure).
It's also weird, as in it's both an evolutionarily novel and a culturally novel thing for you to do, and that means it's risky until proven innocent.
comment by Dorikka · 2015-05-03T00:57:48.488Z · LW(p) · GW(p)
I have only taken one (undergrad microecon) course in econ - I would really appreciate it if someone more familiar with business and economics could go through my ramblings below and point out errors/make corrections in my understanding.
Picture the sell and buy sides of a business. The people on a given side compete against each other to be more favorable to the people on the other side. So more buyers => sellers capture more value. More sellers => buyers capture more value. If you are selling a commodity, you don't capture much value at all. The price that buyers are going to pay for your good is not really based on the value that they'll receive from it, since the choice they face is not (buy and get value) OR (don't buy and don't get value). They also have the choice to buy from any of the other sellers, and will take that option if they can pay less. So sellers cut, cut, cut prices until they can't cut any more (they race to the bottom), where price = cost of production (including wages, etc.) + enough of a return for it to be "worth it" for them. (The rate of return at which sellers stop undercutting each other probably relies on macro... ask someone on LW maybe. This is an important piece but I think I can get by without it for now.)
If you're a seller and there are no other sellers (monopoly), the buyer only has the choices (buy and get value) OR (don't buy and don't get value). So they will (more or less) pay anything up to the value that they will get from the good. This may be much, much more than the cost of production + "acceptable" return, so monopolists can win huge.
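A tiny numeric illustration of the value-capture difference described above, using made-up numbers:

```python
# Made-up numbers: the buyer values the good at 100, and it costs 60 to produce.
cost, value = 60, 100

# Commodity case: rivals undercut each other until price is barely above cost.
commodity_price = cost + 2
print("commodity seller captures:", commodity_price - cost)   # 2
print("buyer captures:", value - commodity_price)             # 38

# Monopoly case: with no alternative seller, price can approach the buyer's value.
monopoly_price = value - 1
print("monopolist captures:", monopoly_price - cost)          # 39
print("buyer captures:", value - monopoly_price)              # 1
```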
Your job is probably a commodity. Engineer, baker, tailor, data scientist, statistician...it's a commodity unless you can make buyers believe that other sellers are not selling the same thing that you are - basically, that they can't approach other sellers and get a price from them for the same good. So even if you are creating lots of value...because other people could do so as well, you can't capture nearly as much of that value. (So basically it sounds like you want to make a very different type of value than other people can provide, and lots of it. It seems like a decent place to look would be what different experiences, skills, etc that you have that others don't - even though much of your training might have been geared towards increasing your ability to produce a commodity, see above.)
The above applies to wages, to businesses, etc. Keep in mind that the skilled labor market on the sellers' side is not efficient, meaning that people aren't necessarily making the most money that they could be. This is at least partially because there are lots of other constraints on sellers' choices and behavior, which may be family, upbringing, social situation, other terminal values that they can't really maximize with more marginal money, etc.
I think that large companies likely have quite a bit of inertia, meaning that they may not be making "efficient" decisions either. The people working at those companies have terminal values that are probably risk-averse with respect to money. Firm capital (if publicly traded) depends on shareholders' transient judgment... which may be to SELL, due to their own risk aversion, if the firm makes a radical move. So I would expect firms to be not efficient in the strict sense, and also risk averse. So it looks like there is a tradeoff as firms get bigger (as revenue increases and costs increase). They likely get some smoother processes, fewer hiccups, economies of scale, etc., but they also possibly give away expected utility through risk aversion. It seems like firms have an incentive to get bigger for these reasons (and if they are getting revenue based on value generated instead of cost of production, this incentive may be even bigger in terms of executive salaries), but future losses through risk aversion may be undervalued, or simply discounted because they are far in the future (and execs may have cashed out by then anyway, so they have reason to strongly discount the future). So there may be room in the market for a new, smaller firm that creates value in a way that established players can't, since the established companies' incentives will punish them for taking advantage of the opportunity.
Replies from: Viliam, None↑ comment by Viliam · 2015-05-03T17:01:01.804Z · LW(p) · GW(p)
there may be room in the market for a new, smaller firm that creates value in a way that established players can't
Maybe this is what startups are about. When a few people start a company and invent a product... well, hypothetically you could have a big company like Microsoft select a few employees, give them salary and complete freedom, and let them invent a new product for the company... but for various reasons that rarely happens. So it is better to wait until someone outside of the company develops the stuff and then try to buy it.
An alternative explanation is that by buying external startups that succeed, you save money by not having to finance all those other startups that failed, so for a big company buying a startup is cheaper than trying to create one at home. But I feel this explanation is probably wrong. -- You could do some preselection about whom you allow to try making an "internal startup". And if one in a hundred succeeds, the price of a successful startup is probably higher than salaries for a few hundred employees; and a sufficiently big company has enough resources to play this game. To sufficiently motivate people, you could promise half or maybe even 10% of the "internal startup" profits to the employees who created it, and the company would still be profitable.
↑ comment by [deleted] · 2015-05-04T10:25:34.926Z · LW(p) · GW(p)
Sounds partially correct, although I am mostly a self-taught "economist".
However, in my experience, this kind of "basic incentive structure analysis" type of economics does not really predict the real world so well. I had to actually un-learn a lot of it in real life. (Or maybe I should have learned more advanced kinds of economics.)
Basic Econ can be understood as more normative than descriptive. That is, if you had these kinds of markets, then things would be really great and efficient. But in real life, everything from old-boys' networks to political corruption to cognitive biases screws this up. The more the place you live in works like Basic Econ, the closer you are to an optimum. If you see lots of poverty around you, someone or something is throwing big monkey wrenches into this mechanism.
So in real life you can see well-paid employees who are just nephews of the CEO or have an excellent sales pitch (interview skills). It does not always work out the textbook way. It should; it just doesn't.
Your job is probably a commodity. Engineer, baker, tailor, data scientist, statistician...it's a commodity unless you can make buyers believe that other sellers are not selling the same thing that you are - basically, that they can't approach other sellers and get a price from them for the same good.
But most people are specialists today. When was the last time you saw a General IT Guy in a big city? 1990? Today it is about being some kind of Enterprise Network Bullshit Bingo Database Architect. Or, more realistically, specialization is simply based on what kinds of products or technologies they know. For example, being an Oracle Financials expert is a world of its own. Or SAP, Navision...
But if you become a specialist, there is one trade-off: you must live in a city. Rural areas need generalists. This is simply based on density. In the country, there may not be more than 5 doctors or IT folks within 100 km, so they had better know it all.
I think that large companies likely have quite a bit of inertia, meaning that they may not be making "efficient" decisions either. The people working at those companies have terminal values that are probably risk-averse with respect to money. Firm capital (if publicly traded) depends on shareholders' transient judgment... which may be to SELL, due to their own risk aversion, if the firm makes a radical move. So I would expect firms to be not efficient in the strict sense, and also risk averse. So it looks like there is a tradeoff as firms get bigger (as revenue increases and costs increase). They likely get some smoother processes, fewer hiccups, economies of scale, etc., but they also possibly give away expected utility through risk aversion.
The big issue is not even risk aversion. After all, a truly large firm can risk a tiny fraction of its revenue on an experiment, and that tiny fraction can still be large. The issue is quite simply the incentive chains. There is a long hierarchy, with everybody higher up than you trying to use rewards and punishments to get you to work the way they want you to. After several steps down this chain, the incentives naturally get corrupted, like a game of telephone. People who are 4-5 levels removed from the shareholders don't want the same things the shareholders want. If they are rewarded for the number of calls taken, they will find any excuse to drop calls and take another one.
comment by ImmortalRationalist · 2015-05-12T07:47:25.319Z · LW(p) · GW(p)
Using Bayesian reasoning, what is the probability that the sun will rise tomorrow? If we assume that induction works, and that something happening previously, i.e. the sun rising before, increases the posterior probability that it will happen again, wouldn't we ultimately need some kind of "first hyperprior" to base our Bayesian updates on, for when we originally lack any data to conclude that the sun will rise tomorrow?
Replies from: Epictetus↑ comment by Epictetus · 2015-05-12T08:13:35.668Z · LW(p) · GW(p)
This is a well-known problem dating back to Laplace (pp 18-19 of the book).
Replies from: ImmortalRationalist↑ comment by ImmortalRationalist · 2015-05-19T08:00:17.203Z · LW(p) · GW(p)
Also, how do we know when the probability surpasses 50%? Couldn't the prior probability of the sun rising tomorrow be astronomically small, so that Bayesian updates on the evidence of past sunrises merely make the probability slightly less astronomically small?
Replies from: Epictetus↑ comment by Epictetus · 2015-05-19T14:05:55.129Z · LW(p) · GW(p)
The prior probability could be anything you want. Laplace advised taking a uniform distribution in the absence of any other data. In other words, unless you have some reason to suppose one outcome is more likely than another, you should weigh them equally. For the sunrise problem, you could invoke the laws of physics and our observations of the solar system to assign a prior probability much greater than 0.5.
Example: If I handed you a biased coin and asked for the prior probability that it comes up heads, it would be reasonable to suppose that there's no reason to suppose it biased to one side over the other and so assign a prior of 0.5. If I asked about the prior probability of three heads in a row, then there's nothing stopping you from saying "Either it happens or it doesn't, so 50/50". However, if your prior is that the coin is fair, then you can compute the prior for three heads in a row as 1/8.
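A minimal sketch of the Laplace approach under discussion, assuming a uniform Beta(1, 1) prior over the unknown "success" probability (which yields the rule of succession):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession under a uniform Beta(1, 1) prior:
    P(next success) = (successes + 1) / (trials + 2)."""
    return Fraction(successes + 1, trials + 2)

# Sunrise problem with an (arbitrary, illustrative) n consecutive observed sunrises.
n = 10_000
print(float(rule_of_succession(n, n)))      # ~0.9999: near 1, but never exactly 1

# The coin example above: a uniform prior gives 1/2 before any tosses are seen,
# while the "coin is fair" prior gives (1/2)**3 = 1/8 for three heads in a row.
print(rule_of_succession(0, 0))             # 1/2
print(Fraction(1, 2) ** 3)                  # 1/8
```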
comment by [deleted] · 2015-05-04T18:16:35.274Z · LW(p) · GW(p)
Should people give money to beggars on the street? I heard conflicting opinions about this. Some say they just spend it on booze and cigarettes, so it would be more effective to donate that money to hostels for the homeless and similar institutions. Others say it's not a big deal and it makes them happy. What do you think?
Replies from: garabik, ChristianKl, Ixiel, ImmortalRationalist, None, None↑ comment by garabik · 2015-05-06T06:54:52.890Z · LW(p) · GW(p)
I notice the effects of the recession – I always offer to buy food, and in the past beggars often concocted elaborate excuses for why I should give them money instead of buying them a hotdog or something. But in the last few years, more and more of them agree to take the food (and actually eat it).
Anyway, depending on where you live, there might be organized teams of professional beggars who are exploited by their "owner" and have to give him his daily share "or else". By not giving money, you really hurt those beggars - but this amounts to emotional blackmail, so perhaps you should not give anyway: for the price of hurting a few beggars now, we could end the exploitation later, since if there is no profit there is no incentive to exploit the beggars.
↑ comment by ChristianKl · 2015-05-04T18:42:03.201Z · LW(p) · GW(p)
What's your goal? Effective altruism literature suggest that if you want spent money to do good, bet nets for Africa are an effective way to do so.
Replies from: Elo↑ comment by Ixiel · 2015-05-13T20:49:24.891Z · LW(p) · GW(p)
Do you care if they buy booze and cigarettes? I for one do not. If it makes you happy to give them a few bucks, do it. I do not charge this against my charity budget, however, but against my discretionary budget. Letting that impulse cause you to take money OUT of your charity budget may be a poor choice, as there may be charities you care about more than beggar booze. Bottom line: giving to beggars is consumption, not charity, and if you get more happiness that way, do it. Or go buy a bunch of mini liquors and hand them out; I did this once and I felt great afterwards.
Replies from: None↑ comment by [deleted] · 2015-05-15T01:26:45.742Z · LW(p) · GW(p)
Or go buy a bunch of mini liquors and hand them out; I did this once and I felt great afterwards.
You can't be serious.
Replies from: Ixiel↑ comment by Ixiel · 2015-05-19T03:26:22.600Z · LW(p) · GW(p)
Yup. I enjoy drinking, so I understand that they might likewise, and I enjoy other humans enjoying themselves. Win/win. It genuinely does feel good to make other people happy. Or I suppose pleased, if you want to be technical. Try it for yourselves. The people you help will thank you.
↑ comment by ImmortalRationalist · 2015-05-12T07:51:58.966Z · LW(p) · GW(p)
The money you would have spent on giving money to a beggar might be better spent on something that will decrease existential risk or contribute to transhumanist goals, such as donating to MIRI or the Methuselah Foundation.
↑ comment by [deleted] · 2015-05-04T23:15:03.959Z · LW(p) · GW(p)
I can see nothing wrong with buying such a person food.
Replies from: CBHacking↑ comment by CBHacking · 2015-05-05T08:57:09.413Z · LW(p) · GW(p)
Anecdotal, but I know of one case where the beggar got angry about being given food (I think it was something like a grocery store deli sandwich, still wrapped and unopened) and ranted at my friend about thinking they know better than the recipient about what they need and how the giver must not trust beggars with money and so on. It's kind of funny in retrospect, but at the time it was disturbing and confrontational and (of course) extremely ungrateful, so there were definitely no warm fuzzies derived therefrom (more like a highly unpleasant fight-or-flight moment plus public embarrassment) and it really did feel like a waste of money. If your goal is warm fuzzies, or even just to convince yourself you're doing some good, triggering an experience like that is utterly counterproductive. I can't imagine it's a common thing, but so far as I know, my friend has never again tried giving food directly (as opposed to donating to a food drive).
↑ comment by [deleted] · 2015-05-04T18:37:35.255Z · LW(p) · GW(p)
And sometimes they bring babies, not always their own, drugged (sometimes fatally) so that they will attract more charity without the beggars being bothered by restless children... I think it is better to donate to hostels, because that doesn't give the 'parenting homeless' an advantage.
comment by advancedatheist · 2015-05-03T00:00:09.405Z · LW(p) · GW(p)
Has any science fiction writer ever announced that he or she has given up writing in that genre because technological progress has pretty much ended?
Because if you think about it, the idea of sending someone to the moon went from science fiction, to a brief technological reality ~45 years ago, and back to science fiction again.
Replies from: ChristianKl, JoshuaZ, listic, None↑ comment by ChristianKl · 2015-05-03T12:33:51.497Z · LW(p) · GW(p)
Science fiction is about imagining how the future might look; it is not essentially about space travel.
Bruce Sterling even managed to write science fiction set in the same year it was written, with Zeitgeist. It's quite a good book. It manages to name Osama bin Laden as a significant player even though it was written in 1999 (and published a year later). It also features the NSA and how they manage to wiretap everything in the world.
Technological progress also hasn't ended. 20 years ago, flying drones bringing your packages to your doorstep were science fiction, and today Amazon is rolling out field trials where they do exactly that.
Given how technology prices change, the cost of bugging equipment sinks fast. Cheap bugging that can be deployed by individuals plays a role in Bruce Sterling's Distraction. The book also looks at a genetically engineered guy who lives in a country where genetic engineering is outlawed, and as a result he can't run for office.
There is plenty of technology and change to be written about. Apart from that, the future is "old people in cities who are afraid of the sky".
↑ comment by JoshuaZ · 2015-05-07T02:29:42.297Z · LW(p) · GW(p)
Has any science fiction writer ever announced that he or she has given up writing in that genre because technological progress has pretty much ended?
Charlie Stross has expressed disappointment that technological change hasn't been as rapid as he hoped for in the mid 1990s, but that's a very different claim. I don't think anyone has claimed that progress has stopped completely, and it would be very strange to do so. Yes, the specific technologies involved in space travel have not progressed much but even in those areas progress is still occurring: the rise of cubesats and private rockets, the spread of highly accurate civilian GPS, the ability to send long-lived rovers to other planets - these are all advances in the last 20 years.
Outside space issues, there have been big jumps in technology - the most obvious changes have been (and continue to be) in the computer-related fields (e.g. cheap smartphones, the rise of big data, the general increase in computing power, drastic reductions in bandwidth cost), but there have also been major improvements in other fields.
Let's look at medicine. HIV used to be a death sentence and now the life expectancy of people in much of the developed world with HIV is close to that of the general population. Death rates from various cancers continue to decline. Many tests are faster and more reliable. You seem implicitly to be using a 45 year time window, but in either a 45 or a 20 year time window, these advances are pretty clearcut.
I could continue with other fields but the general trend is clear: progress may be occurring slowly but technological progress is still definitely going on.
↑ comment by listic · 2015-05-05T18:58:01.041Z · LW(p) · GW(p)
I haven't heard of such an announcement. Why do you ask?
I can't figure out how the second paragraph demonstrates that technological progress has ended (by the way, do you mean it stopped, or that it really reached its logical conclusion?). Rather, it illustrates its ever more rapid pace. And that might be a problem for science fiction: where formerly readers were excited to read fiction about strange new things that science could bring in the future, nowadays they are rather overwhelmed with the strange new things they already have, and afraid and unwilling to look into the future; it's not that science fiction cannot show it. Charles Stross has written about this (I don't have the exact link; sorry).
Replies from: Lumifer↑ comment by Lumifer · 2015-05-05T19:13:09.760Z · LW(p) · GW(p)
nowadays they are rather overwhelmed with the strange new things they already have, and afraid and unwilling to look into the future
It might also have to do with the fact that cheery SF utopias are out of fashion at the moment and the dark and depressive dystopias are rather more prevalent.
↑ comment by [deleted] · 2015-05-03T06:36:12.633Z · LW(p) · GW(p)
Why should they?
You may note a general shift away from space opera towards biology- and computer-technology-inspired storylines. But even that being said, why should the direction reality is going affect the direction of speculative fiction?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-03T12:33:56.555Z · LW(p) · GW(p)
But even that being said, why should the direction reality is going affect the direction of speculative fiction?
That depends on what you consider the goal of science fiction to be. If it's about discussing technological trends, then the direction of reality matters.
comment by Username · 2015-06-01T22:19:23.355Z · LW(p) · GW(p)
Suppose I'm penniless. I borrow $1000 from the bank, go to a roulette table, and bet it all on red. If I win, I pay back the $1000 and end up with a profit. If I lose, I declare bankruptcy and never pay the money back. What's stopping people from doing this? Perhaps credit scores prevent this; if so, couldn't you just get some false documents and do this anyway?
Replies from: Lumifer↑ comment by Lumifer · 2015-06-02T00:23:54.452Z · LW(p) · GW(p)
What's stopping people from doing this?
You can't do this for a large amount without a decent amount of credit history, and seven (I think) years of no credit is a very high price to pay for a roughly 50% chance at some sum.
And if you try to do this for a large amount, the bank will want assurances of you being able to pay them back. If you do this with "false documents", you will go to jail for fraud.
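A quick expected-value sketch of the scheme, assuming an American double-zero wheel and ignoring the credit-history and fraud costs that make it a bad idea in practice:

```python
# "Borrow $1000, bet it all on red, default if it loses" -- illustrative numbers.
stake = 1000
p_red = 18 / 38                       # American double-zero wheel

ev_gambler = p_red * stake            # win: keep $1000 profit; lose: walk away with $0
ev_bank = (1 - p_red) * -stake        # win: loan repaid; lose: bank eats the $1000

print(f"P(win) = {p_red:.3f}")                          # ~0.474
print(f"Gambler's expected profit: ${ev_gambler:.2f}")  # ~$473.68
print(f"Bank's expected loss:      ${ev_bank:.2f}")     # ~-$526.32
# The gambler's expectation is positive only because the downside is dumped on the
# lender, which is why lenders screen credit history and prosecute document fraud.
```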
comment by [deleted] · 2015-05-20T14:10:12.803Z · LW(p) · GW(p)
If people obtain 70% of their information through vision, and might assign more significance to negative feedback than to positive feedback, why do lecturers never blindfold themselves?
Replies from: ChristianKl↑ comment by ChristianKl · 2015-07-05T10:04:52.988Z · LW(p) · GW(p)
If people obtain 70% of their information through vision
What does that even mean?
why do lecturers never blindfold themselves?
Eye contact is part of good public speaking. It's useful to perceive one's audience and see when it gets confused.
If someone is afraid, blindfolding themselves is not likely to reduce their anxiety.
comment by ImmortalRationalist · 2015-05-13T04:24:37.710Z · LW(p) · GW(p)
How do we determine our "hyper-hyper-hyper-hyper-hyperpriors"? Before updating our priors however many times, is there any way to calculate the probability of something before we have any data to support any conclusion?
Replies from: Epictetus, ChristianKl, Ishaan, hairyfigment↑ comment by Epictetus · 2015-05-15T04:58:36.743Z · LW(p) · GW(p)
In some applications, you can get a base rate through random sampling and go from there.
Otherwise, you're stuck making something up. The simplest principle is to assume that if there's no evidence with which to distinguish possibilities, then one should take a uniform distribution (this has obvious drawbacks if the number of possibilities is infinite). Another approach is to impose some kind of complexity penalty, i.e. to have some way of measuring the complexity of a statement and to prefer statements with less complexity.
If you have no data, you can't have a good way to calculate the probability of something. If you defend a method by saying it works in practice, then you're using data.
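A toy sketch of the complexity-penalty idea, where "complexity" is crudely measured as description length in bits (an illustrative choice, not a serious Solomonoff-style prior):

```python
# Weight each hypothesis by 2^(-description length in bits), then normalize.
hypotheses = [
    "x+1",                            # short, simple
    "x*x - 3*x + 2",                  # longer
    "sin(x)*exp(-x) + 0.0001*x**7",   # longer still
]

raw = {h: 2.0 ** (-8 * len(h)) for h in hypotheses}   # 8 bits per character
total = sum(raw.values())
prior = {h: w / total for h, w in raw.items()}

for h, p in prior.items():
    print(f"{h!r:35} prior = {p:.3g}")
# Shorter hypotheses get vastly more prior mass; observed data can then overturn it.
```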
↑ comment by ChristianKl · 2015-07-05T10:03:48.899Z · LW(p) · GW(p)
Intuition. Trusting our brain to come up with something useful.
↑ comment by hairyfigment · 2015-05-14T17:36:54.424Z · LW(p) · GW(p)
No, practical Bayesian probability starts with an attempt to represent your existing beliefs and make them self-consistent. For a brief post on the more abstract problem, see here.
comment by Dahlen · 2015-05-11T00:21:19.253Z · LW(p) · GW(p)
A public transportation dilemma: to get to the nearest subway station, which is on the same boulevard as my apartment building but a good few blocks away, I have to take a bus there. A bus trip to the subway station is short, about 2 or 3 minutes, but buses come at irregular times. I might find one already there, with its doors open, when I arrive at the bus stop, or I might wait 15-20 minutes for one to come. If I were to walk to my destination, the trip would take about 10 to 15 minutes.
When I'm in a hurry, I usually head for the bus stop and hope a bus comes right away -- that would minimize the duration of my trip. The worst decision I could possibly make is wait 15 minutes for a bus to come, get annoyed, decide it's probably not my lucky day, and suck it up and walk to the subway station -- that's a full 25 minutes, up from the 2-3 minutes of a "lucky" trip. Very often I get to see several buses passing me by as I decide to walk. But if I were to take the opposite decision and decide to not even wait for a bus, instead just heading straight to the station, I'm still at a disadvantage of about 8 minutes, if the bus comes shortly after I pass by the bus stop.
After how much time is it rational to give up on the bus, leave the bus stop, and walk? The probability of the bus arriving within a given waiting time (the frequency with which it comes) is unknown, although it probably gets close to 1 by 30 minutes.
Replies from: Jiro, Epictetus↑ comment by Jiro · 2015-05-12T05:49:00.898Z · LW(p) · GW(p)
If you want to minimize the mean travel time, the answer is either to always take the bus or always walk (unless you see the bus at the stop immediately, in which case you should take it--this violates the assumption that the bus's arrival time is unknown.) With the numbers you've given, walking is better.
If you want to minimize the chance that you'll have regret, where regret is defined as "if I had chosen the other method, I would have gotten there first", the solution is the same as in the average case.
(Both of these cases are complicated by the fact that since you see the bus at the stop some percentage of the time, the bus arrival times are not actually evenly distributed from 0 to 30 minutes, but that just makes choosing the bus worse and it's already worse.)
To the question "should I wait X length of time for the bus and start walking if the bus hasn't arrived in that time", generally, you should never do that; if walking after X minutes is better than continuing to wait after X minutes, then walking after X' minutes (where X' < X) is better than continuing to wait after X' minutes (since the bus will arrive sooner if there has been no bus for X minutes, than if there has been no bus for X' minutes), so you should walk immediately.
It is of course possible to come up with more detailed requirements that could demand you wait X length of time and then start walking. For instance, "I never want to take more than 20 minutes, but as long as it is less than 20 minutes, I want it to be as short as possible", it takes 15 minutes to walk, and the bus takes 0-20 minutes to arrive--you should wait for the bus for 5 minutes, and then start walking.
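A Monte Carlo sketch of the "wait up to t minutes, then walk" strategies, under illustrative assumptions that are not taken from the comments above: bus arrivals uniform on 0-30 minutes, a 3-minute bus ride, and a 12-minute walk:

```python
import random

random.seed(1)
BUS_RIDE, WALK = 3.0, 12.0   # assumed minutes; Dahlen's ranges were 2-3 and 10-15

def trip_time(wait_limit, bus_arrival):
    if bus_arrival <= wait_limit:
        return bus_arrival + BUS_RIDE      # caught the bus
    return wait_limit + WALK               # gave up and walked

def mean_trip(wait_limit, trials=200_000):
    return sum(trip_time(wait_limit, random.uniform(0, 30)) for _ in range(trials)) / trials

for t in [0, 5, 10, 15, 30]:
    print(f"wait up to {t:2d} min: mean trip = {mean_trip(t):5.2f} min")
# Under these assumptions, every intermediate cutoff does worse than the better pure
# strategy (here, walking immediately), matching Jiro's point about mean travel time.
```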
↑ comment by Epictetus · 2015-05-12T08:02:01.480Z · LW(p) · GW(p)
Just for the sake of completeness, there are some bus services that offer real-time GPS tracking, so you'd know where the bus is and roughly how long until it arrives. Presumably no such service is available here.
We go on to the age-old question: do you try to get the best average performance or the best worst-case performance? If it's the best average performance, go to the bus stop and wait. If you prefer to get the best worst-case performance, just walk every time.
Replies from: Jiro
comment by Houshalter · 2015-05-04T03:30:11.135Z · LW(p) · GW(p)
I can't figure out why induction is considered an axiom. It seems like it should follow from other axioms.
I did some googling and the answers were all something like "if you don't assume it, then you can't prove that all natural numbers can be reached by repeatedly applying S(x) from 0." But why can't you just assume that as an axiom, instead of induction?
You could argue that it doesn't matter because they are equivalent, and that seems to be what people said. But it doesn't feel like that to me. I think you could express that in first-order logic, but the axiom of induction can't be expressed in first-order logic (according to Wikipedia). And that seems like a huge issue, and it bothers me.
Replies from: hairyfigment, Epictetus, gjm↑ comment by hairyfigment · 2015-05-04T04:04:37.062Z · LW(p) · GW(p)
"if you don't assume it, then you can't prove that all natural numbers can be reached by repeatedly applying S(x) from 0." But why can't you just assume that as an axiom, instead of induction?
How would you say "repeatedly" in formal language? Would it be something like, 'every number can be reached by applying the successor operation some number of times'? To the extent we can say that, it already seems included in the definition of addition. But it doesn't rule out non-standard numbers, because you can reach one of them from zero by applying S(x) a non-standard number of times.
Replies from: Houshalter↑ comment by Houshalter · 2015-05-04T04:45:38.941Z · LW(p) · GW(p)
IIRC some of the axioms use recursion so I don't see why that wouldn't be allowed. I'm not entirely certain how you would set it up but it doesn't seem like it should be impossible. That link looks interesting though, perhaps it addresses it.
↑ comment by Epictetus · 2015-05-04T05:28:23.000Z · LW(p) · GW(p)
I can't figure out why induction is considered an axiom. It seems like it should follow from other axioms.
You can prove that any individual natural number is part of a set by applying S(x) finitely many times, but getting all of them either requires an infinitely long proof (bad) or an axiom.
I think you could express that in first order logic, but the axiom of induction can't be expressed in first order logic (according to wikipedia.) And that seems like a huge issue and bothers me.
You run into Löwenheim–Skolem. If you could express the axiom of induction in first-order logic, then you'd have a model of arithmetic with k numbers for every infinite cardinal k. That's the advantage of second-order logic: the original Peano axioms give only one set of natural numbers (up to isomorphism). I believe there are certain formulations that only use first-order logic, but they give weaker results.
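For concreteness, the contrast being discussed can be written out as follows; the second-order induction axiom quantifies over all subsets of the naturals, while first-order PA can only offer a schema with one instance per formula:

```latex
% Second-order induction: a single axiom quantifying over all properties/subsets X.
\forall X\,\bigl[\, X(0) \wedge \forall n\,\bigl(X(n) \rightarrow X(S(n))\bigr)
      \;\rightarrow\; \forall n\, X(n) \,\bigr]

% First-order induction: an axiom schema, one instance per formula \varphi(x).
\bigl[\, \varphi(0) \wedge \forall n\,\bigl(\varphi(n) \rightarrow \varphi(S(n))\bigr) \,\bigr]
      \;\rightarrow\; \forall n\, \varphi(n)
```

Since there are only countably many first-order formulas, the schema constrains only the definable subsets, which is exactly the gap that lets non-standard models through.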
comment by advancedatheist · 2015-05-04T01:46:55.446Z · LW(p) · GW(p)
Did the Catholic Church invent the celibate priesthood and monastic orders as a humane way to spare the feelings of the male sexual rejects in every generation, by providing a home for them and rationalizing their condition as a higher spiritual calling?
Replies from: None, None, drethelin, Viliam↑ comment by [deleted] · 2015-05-04T13:57:32.472Z · LW(p) · GW(p)
Catholic priestly celibacy was codified at the Second Lateran Council in 1139. Before this time there was great diversity of practice in different locations. There were theological justifications for the practice, which included Christ's and various apostles' appeals for celibacy as a way of focusing on religious rather than worldly matters. There were also scandals in which large amounts of church property were inherited away from the Church by priests who had many children, which led directly to an earlier decree, in the early 11th century by Pope Benedict VIII, that the children of priests could not inherit property.
↑ comment by [deleted] · 2015-05-04T10:31:23.089Z · LW(p) · GW(p)
No, there were various theological and practical (inheritance) grounds; I don't think anyone can find evidence of this being an acknowledged purpose. However, it could still be an unspoken purpose, and it could still work that way. I actually find it quite plausible: given how geeky Catholic priests and monks tend to be, prone to typical nerd traits like hair-splitting arguments, they are pretty much like the nerds who can't get any, so it is better to rebrand the condition as a virtue. It just was not an explicit, outspoken purpose.
I should also say that there were no sexual rejects in the modern sense. If you had a wealthy father, someone would arrange to marry their daughter to you.
↑ comment by drethelin · 2015-05-04T01:57:03.604Z · LW(p) · GW(p)
The Catholic priesthood is more likely an offshoot of other existing priesthoods from Hellenic traditions and Judaism, some of which were celibate and others of which were decidedly not; e.g., rabbis are expected to be fruitful and multiply just as much as all Jews.
Replies from: advancedatheist↑ comment by advancedatheist · 2015-05-04T03:22:05.502Z · LW(p) · GW(p)
The Jews faced different demographic challenges than Catholics, so that might account for the difference.
I wondered about the sexual reject idea after reading that Thomas Aquinas' father decided that his pudgy little boy didn't look fit for breeding, so he sent Thomas to a monastery at the age of five, while the father encouraged Thomas's better-looking brothers to marry and perpetuate the family.
Replies from: drethelin, hairyfigment↑ comment by drethelin · 2015-05-04T04:21:29.865Z · LW(p) · GW(p)
I haven't read the history, but I would bet that story is apocryphal. Tons of people at age 5, or even at age 11, look completely different than they do as adults, and I'm sure people a thousand years ago knew that just as well as we know about Neville Longbottom.
↑ comment by hairyfigment · 2015-05-14T17:20:28.365Z · LW(p) · GW(p)
From a simple Google search:
Thomas had eight siblings, and was the youngest child.
While the rest of the family's sons pursued military careers,[12] the family intended for Thomas to follow his uncle into the abbacy;[13] this would have been a normal career path for a younger son of southern Italian nobility.[14]
And biography.com asserts,
Before St. Thomas Aquinas was born, a holy hermit shared a prediction with his mother, foretelling that her son would enter the Order of Friars Preachers, become a great learner and achieve unequaled sanctity.
You're trying to make this about him as an individual, when in fact it seems like a combination of family needs and inheritance customs/law that even a hermit could predict in advance.
↑ comment by Viliam · 2015-05-04T10:35:23.975Z · LW(p) · GW(p)
I doubt it was "invented to do X" -- that implies too much strategy. Instead, I think it happened because of some historical coincidences, and coincidentally it also happened to do X.
The historical reason for not reproducing was the early Christian belief that the end of the world was near, so it did not make sense to marry and have children. (These days there are some Jehovah's Witnesses who don't have children for these reasons.) Of course most people violated this rule and had sex anyway, but following the rule became a voluntary signal of having strong faith.
A few centuries later Catholic Church realized that if they could make non-reproduction of priests mandatory, all resources that those priests would otherwise gather for their families, would now return to the church ownership. And they already had the excuse that not reproducing was a signal of strong faith, so it made sense to demand the signal from the priests. So the Popes began a campaign against marriage of priests; at the beginning the priests ignored them, but gradually the pressure increased, and at some moment one Pope ordered to take all existing wives and children of priests and sell them to slavery. From that moment, Catholic priests were celibate... at least officially, because unofficially I would guess most of them either have a secret wife and secret children, or secret homosexual relations.
Of course the people who are religious and unattractive can rationalize this and join some institution that requires celibate, thus converting their lower status on sexual marketplace to religious status. On the other hand, there are also many attractive people who choose this role because of their strong religious feelings. -- The fact that many priests actually secretly have sex and children is an evidence of their attractivity. Of course there is this confounding factor whether having high religious status could have made the difference in their attractivity. But I anecdotally know about a few people who were considered attractive before they became celibate for religious reasons.