Open thread, Sept. 29 - Oct.5, 2014
post by polymathwannabe · 2014-09-29T13:28:48.393Z · LW · GW · Legacy · 340 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments sorted by top scores.
comment by jaime2000 · 2014-09-29T15:33:14.456Z · LW(p) · GW(p)
An interesting natural experiment happened in the Pacific Theater of WWII. American and Canadian forces attacked an island which had been secretly abandoned by the Japanese weeks prior. Their unopposed landing resulted in dozens of casualties from friendly fire and dozens of men lost in the jungle. Presumably, a similar rate of attrition occurred in every other landing, on top of casualties inflicted by the deliberate efforts of enemy troops.
Replies from: gjm, Randaly, John_Maxwell_IV
↑ comment by gjm · 2014-09-29T16:28:02.994Z · LW(p) · GW(p)
It seems like the rate of friendly-fire casualties might be lower when fighting a real enemy. (Super-crude toy model: soldiers fire randomly at whoever they see. If no one is on the island apart from the attackers, then those shots are all going to turn into friendly-fire cases. If most of the people on the island are the ones you're trying to attack, then they're going to sustain most of those casualties.)
Replies from: DanielLC
↑ comment by DanielLC · 2014-09-29T20:26:32.287Z · LW(p) · GW(p)
Wouldn't there be proportionately more shots fired if there are more people they see? You'd get the same number of friendly-fire casualties either way.
Replies from: gjm, Vulture
↑ comment by gjm · 2014-09-29T21:07:27.016Z · LW(p) · GW(p)
That would be a slightly less crude toy model, I guess. I would expect the truth to be somewhere in between -- e.g., soldiers have limited ammunition and limited ability to attend to everyone around them in a conflict situation, so the number of shots fired probably increases sublinearly with number of potential targets.
In case anyone was in any doubt: I have no knowledge of any of this stuff, have never served in any military force, etc.
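For what it's worth, the two toy models above can be compared with a back-of-envelope calculation. A minimal sketch, assuming uniform random targeting; the troop counts and shots-per-soldier figures are entirely invented for illustration:

```python
def expected_friendly_fire(attackers, defenders, shots_per_soldier):
    """Expected friendly-fire hits if each shot picks its target uniformly
    at random from everyone else on the island."""
    others = attackers - 1 + defenders
    # A given shot hits a fellow attacker with probability (attackers - 1) / others.
    return attackers * shots_per_soldier * (attackers - 1) / others

# Fixed shots per soldier, regardless of how many targets are available:
empty = expected_friendly_fire(100, 0, 3)       # unopposed landing: 300.0
defended = expected_friendly_fire(100, 300, 3)  # real enemy present: ~74.4

# DanielLC's variant: shots scale linearly with the number of visible
# targets (399 visible vs. 99), which brings friendly fire back up to
# the unopposed level.
scaled = expected_friendly_fire(100, 300, 3 * 399 / 99)  # 300.0
```

Under the fixed-shots assumption the defenders absorb most of the fire; under the linear-scaling assumption the friendly-fire count is identical either way; a sublinear shots-fired curve lands somewhere in between.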
↑ comment by Vulture · 2014-09-30T00:37:07.445Z · LW(p) · GW(p)
Just because you think of a new factor driving it down and then a new factor driving it up doesn't mean you end up in the same place.
Replies from: DanielLC
↑ comment by Randaly · 2014-09-29T22:06:15.696Z · LW(p) · GW(p)
We can know that other amphibious assaults probably had lower or negligible friendly-fire rates, because some other landings (some opposed) had absolutely lower rates of casualties -- e.g., here, here, and here.
Replies from: Gunnar_Zarncke
↑ comment by Gunnar_Zarncke · 2014-10-04T15:49:20.466Z · LW(p) · GW(p)
Things look a bit more complex than the parent and OP make them seem. The first case, on Kiska island, resulted from Canadian and American detachments taking each other for the enemy. Agreed, this is friendly fire -- but between sub-optimally coordinated detachments, not within one single force.
The second, on Woodlark and Kiriwina, which had fewer casualties, was not only unopposed; it was known to be unopposed, so expectations were different.
The other opposed landings are more difficult to read.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-09-30T11:13:39.971Z · LW(p) · GW(p)
If the landing had been peaceful & uneventful, perhaps we wouldn't have heard about it. So there might be a selection effect.
comment by Metus · 2014-09-29T22:57:37.485Z · LW(p) · GW(p)
I'm radically cutting information sources out of my life as soon as I get the feeling that I never use the information or don't get some measure of enjoyment from it. This has reduced the time I spend catching up from multiple hours a day to less than an hour. My mind feels much quieter, in a good way. I still get a "noisy" sensation in my mind ("I just had a thought but have already forgotten it"), but it feels contentless ("There is something on my mind") and the sensation weakens every day. Replacing the time spent reading useless drivel with actual books and Wikipedia feels much more satisfying.
I fear that this might narrow my perspective, but I counteract that by keeping a couple of information-dense blogs in my feed, still meeting with people, and having Wikipedia to seek new avenues of information. And of course LessWrong.
Replies from: None, None, None, William_Quixote, FiftyTwo
↑ comment by [deleted] · 2014-09-30T02:51:54.837Z · LW(p) · GW(p)
What did you cut?
Replies from: Metus
↑ comment by Metus · 2014-09-30T12:29:54.674Z · LW(p) · GW(p)
Most parts of Reddit. Hacker News, as I am not a software engineer and read the interesting things elsewhere anyhow. Cracked. Facebook, most of the time. News sites, as soon as I read a sensationalist headline for the second time that week. And plenty of things I can't remember, as I started doing this months ago. The mere fact that I don't remember the sources shows they couldn't have been that important.
↑ comment by [deleted] · 2014-09-30T02:58:01.426Z · LW(p) · GW(p)
I've tried a similar tack, and I was also worried about "narrowing".
It may be helpful to explicitly note that "informative" is meaningful relative to your current set of beliefs. There are high-quality sources that I love and would recommend, but that I try not to spend much time on, because the content is so close to my own viewpoints that I get very little "information" out of it, even if it is information-dense in a quasi-objective sense.
Replies from: Metus
↑ comment by Metus · 2014-09-30T12:32:12.820Z · LW(p) · GW(p)
I recognise a consistent pattern of finding a new site or information source, exploiting it and then realising I'm not getting enough information relative to my effort, leading me to seek something new. So yes, always look for something that updates relative to your current knowledge.
Replies from: FiftyTwo
↑ comment by William_Quixote · 2014-09-30T11:11:53.509Z · LW(p) · GW(p)
Society as a whole benefits from an informed public. Some news isn't really informative, but some is. Levels of wealth across countries correlate strongly with their political systems, and the amount of really terrible stuff that happens correlates with how knowledgeable and active people are. Now, correlation isn't causation, but consider that there could be a link.
If so, then you as an individual could benefit from being less informed. You could also privately benefit from not voting. Or you could benefit from cheating on taxes in a difficult-to-detect way, or littering instead of carrying garbage around looking for a trash can. Someone can always privately benefit from defecting in a prisoner's dilemma or participating in a tragedy of the commons. An informed public is a public good.
The takeaway isn't "don't cut news reading." A lot of news is of no value to you or anyone else, but at least some news is probably of negative value to you personally while socially positive. So when cutting a subject, at least briefly consider what would happen to the commons if all informed people stopped reading it.
Replies from: None, Metus, Sarunas
↑ comment by [deleted] · 2014-10-01T15:10:17.482Z · LW(p) · GW(p)
This is a (the?) standard challenge to the idea of adopting an information diet for personal gain, and it's presented lucidly.
Another implication: the threat posed by a news-reading public (who are itching to be frenzied) is a powerful incentive for prominent (and usually powerful) individuals to act in accord with public sentiment. Perversely, if the threat is effective, then the actual threat mechanism may appear useless (because it is never used).
This isn't always good, because the public can be wrong, but there seem to be morally mundane cases.
An example: If you live in California, should you read a story about a corrupt and powerful mayor in a small town in Iowa? It really does seem like the "media frenzy" is a primary vector for handling this type of situation, which may otherwise continue because the actors directly involved don't have enough power.
This also justifies the seeming capriciousness of the news cycle: Why this particular outrage at this particular time? Why not this other, slightly more deserving, outrage? Because this is a coordination game, and the exact focal point isn't as important as the fact that we all agree to coordinate.
↑ comment by Metus · 2014-09-30T12:37:29.071Z · LW(p) · GW(p)
I categorically reject the notion that news is relevant to being informed. A single reading of an economics textbook, for example, will make anyone whom I should want to be able to vote more informed than the same amount of news. Further, news is completely irrelevant to being informed, as only exceptional things are newsworthy -- not trends against which one could act, like climate change or shifting balances of power.
Thus the proposition becomes: be informed on some topic one cares about. There, again, I suggest not reading "news", as most people will get more out of reading comprehensive articles on the topic, or even a textbook, to better understand it.
In short: No, this is not just a prisoner's dilemma and I dislike political systems where governance is one.
Replies from: sixes_and_sevens, None
↑ comment by sixes_and_sevens · 2014-09-30T13:23:21.847Z · LW(p) · GW(p)
A single reading of an economics textbook, for example, will make anyone whom I should want to be able to vote more informed than the same amount of news.
For context, there are about eight econ textbooks in my line of sight at this very moment. I've even read some of them. The kind of knowledge you get from consuming such a textbook is certainly useful, but for practical purposes it's highly contingent on what kind of world you're living in. The textbook probably won't tell you that, but an equivalent amount of news almost certainly would.
Replies from: Metus↑ comment by Metus · 2014-09-30T22:14:31.831Z · LW(p) · GW(p)
I doubt that regular reading of a popular newspaper will make one's opinion more relevant than a good understanding of supply and demand, judging by the average comments section.
Replies from: None, satt
↑ comment by [deleted] · 2014-10-01T14:30:10.286Z · LW(p) · GW(p)
I think you're taking a narrow view of what sort of information you can glean from a given story.
Reading the average comments thread on a news item is very, very terrifyingly informative -- just not about the subject at hand. (Of course, you hit the point of diminishing returns quickly.)
↑ comment by satt · 2014-10-05T23:41:37.770Z · LW(p) · GW(p)
I think sixes_and_sevens's point (though I may have misunderstood) is that your understanding of supply & demand (and everything else in the econ textbook) still has to be applied to concrete cases to prove useful; following the news furnishes you with concrete cases, and allows you to practice recognizing where the textbook's models are most applicable.
↑ comment by [deleted] · 2014-10-01T14:35:16.302Z · LW(p) · GW(p)
I'm sympathetic, but surely this rejection is contingent on certain facts about your local environment. If you lived in an area experiencing rapid and chaotic change, following the news would be very valuable, even if the news were presented poorly or had significant bias. Consider Syria.
↑ comment by Sarunas · 2014-09-30T14:26:45.181Z · LW(p) · GW(p)
A quote about education (attributed to George Pólya, although I can't find the source): "It is better to solve one problem five different ways, than to solve five different problems one way". I would guess that similarly, if one wants to educate oneself about world affairs, one should (regularly) take a few of the most important (current) issues/events and learn about them as in-depth and from as many angles as one can, synthesizing everything into a big picture, rather than pay attention to every non-issue. Of course, in order to be able to do that, one should try to learn history, economics, statistics, game theory, public choice theory, geography, biology, etc (curiously, in some cases reading something about the past might be more beneficial to understanding the present than reading something about the present itself). Of course, in some situations this "issue/event centered" (vs "news as they appear") approach could also have some drawbacks, for example, if, for some reason (e.g. (non-)availability of relevant literature, ideological reasons, etc.), one approaches events only from one or two angles ("hedgehog", as opposed to "fox") one could easily fall prey to confirmation bias.
↑ comment by FiftyTwo · 2014-09-30T19:27:13.460Z · LW(p) · GW(p)
Depends if you are reading for usefulness or the experience. I don't necessarily learn much from tumblr/twitter/facebook but I tend to enjoy it, especially when I lack the mental energy for other stuff.
Replies from: Metus, satt
↑ comment by Metus · 2014-09-30T22:13:13.542Z · LW(p) · GW(p)
[...] or [I] don't get some measure of enjoyment from it.
Facebook specifically is an interesting example. It is used by exactly the people whose details I do not want to keep up with. My close friends and I, and in general the people I deeply care about, keep in contact just fine.
↑ comment by satt · 2014-10-05T23:30:47.146Z · LW(p) · GW(p)
I'd add that with enjoyable, low-effort time-killing activities, one may still have to be careful not to space out and wind up killing hours & hours on something that's fluff with diminishing returns, like Facebook or Twitter or channel surfing. (I try to consciously catch myself before I idly pull up a game of Solitaire or Freeciv or whatever, to check I'm not about to waste 10 minutes or 4 hours because my brain was in cruise control.)
comment by Alejandro1 · 2014-09-29T13:39:48.122Z · LW(p) · GW(p)
Philosopher Richard Chapell gives a positive review of Superintelligence.
An interesting point made by Brandon in the comments (the following quote combines two different comments):
I think there's a pretty straightforward argument for taking this kind of discussion seriously, on general grounds independent of one's particular assessment of the possibility of AI itself. The issues discussed by Bostrom tend to be limit-case versions of issues that arise in forming institutions, especially ones that serve a wide range of purposes. Most of the things Bostrom discusses, on both the risk and the prevention side, have lower-level, less efficient analogues in institution-building.
A lot of the problems -- perverse instantiation and principal agent problems, for instance -- are standard issues in law and constitutional theory, and a lot of constitutional theory is concerned with addressing them. In checks and balances, for instance, we are 'stunting' and 'tripwiring' different institutions to make them work less efficiently in matters where we foresee serious risks. Enumeration of powers is an attempt to control a government by direct specification, and political theories going back to Plato that insist on the importance of education are using domesticity and indirect normativity. (Plato's actually very interesting in this respect, because the whole point of Plato's Republic is that the constitution of the city is deliberately set up to mirror the constitution of a human person, so in a sense Plato's republic functions like a weird artificial intelligence.)
The major differences arise, I think, from two sources: (1) With almost all institutions, we are dealing with less-than-existential risks. If government fails, that's bad, but it's short of wiping out all of humanity. (2) The artificial character of an AI introduces some quirks -- e.g., there are fewer complications in setting out to hardwire AIs with various things than trying to do it with human beings and institutions. But both of these mean that a lot of Bostrom's work on this point can be seen as looking at the kind of problems and strategies involved in institutions, in a sort of pure case where usual limits don't apply.
I had never thought of it from this point of view. Might it benefit AI theorists to learn political science?
Replies from: Stefan_Schubert, IlyaShpitser, sixes_and_sevens, Lumifer
↑ comment by Stefan_Schubert · 2014-10-02T16:13:18.150Z · LW(p) · GW(p)
Here is what Bostrom himself says about this analogy:
Perhaps the closest existing analog to a rule set that could govern the actions of a superintelligence operating in the world at large is a legal system. But legal systems have developed through a long process of trial and error, and they regulate relatively slow-changing human societies. Laws can be revised when necessary. Most importantly, legal systems are administered by judges and juries who generally apply a measure of common sense and human decency to ignore logically possible legal interpretations that are sufficiently obviously unwanted and unintended by the lawgivers. It is probably humanly impossible to explicitly formulate a highly complex set of detailed rules, have them apply across a highly diverse set of circumstances, and get it right on the first implementation.
Superintelligence, p. 139.
↑ comment by IlyaShpitser · 2014-09-30T11:22:05.905Z · LW(p) · GW(p)
This is great, thanks! I always always said that if you are worried about FAI, you should look into what people do with unfriendly non-human agents running around today. I am glad constitutional law people have looked into this.
Replies from: None
↑ comment by [deleted] · 2014-10-06T10:02:31.736Z · LW(p) · GW(p)
I always always said that if you are worried about FAI, you should look into what people do with unfriendly non-human agents running around today.
Forgive my cynicism, but the answer mostly appears to be, "work in their employment".
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2014-10-06T11:20:58.837Z · LW(p) · GW(p)
Have you ever seen Brazil (the movie)? You will still get eaten.
Replies from: None, Azathoth123
↑ comment by [deleted] · 2014-10-06T17:42:05.069Z · LW(p) · GW(p)
Well yeah. I don't approve of working for the capitalist hell-monster, and I don't think it has mercy on its better servants, but I also don't have any illusions about what almost everyone ever has done and still does to survive long enough to get old.
↑ comment by Azathoth123 · 2014-10-07T02:55:31.649Z · LW(p) · GW(p)
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2014-10-07T06:57:43.808Z · LW(p) · GW(p)
Brazil is basically the biography of the 20th century. Brazil counts as fictional evidence about as much as Darkness at Noon (the events in that book did not literally happen, but...) The scariest thing about Brazil is that it is not strange at all, it is too familiar.
Replies from: None
↑ comment by [deleted] · 2014-10-07T15:00:13.841Z · LW(p) · GW(p)
Brazil is basically the biography of the 20th century.
That's a very interesting way of looking at the 20th century: humanity spent the first part building, tearing down, and rebuilding its vast institutional artifices, which are not always human-friendly. We then spent the century's very end, and the start of the 21st, trying to tame them without having to kill large numbers of people on a regular basis.
↑ comment by sixes_and_sevens · 2014-09-29T15:26:39.275Z · LW(p) · GW(p)
Here's a salient MOOC that's just started on political and legal philosophy, which I'm dipping in and out of for non-FAI reasons.
comment by sixes_and_sevens · 2014-09-29T16:22:13.570Z · LW(p) · GW(p)
Many of you are probably familiar with the Alpha Course, which uses the evangelistic technique of identifying common philosophical questions people might have about their lives ("what's the point of it all?", "how can I be truly happy?", etc.) and answering them with something about finding the everlasting love of Jesus Christ.
It occurs to me that many aspiring rationalists probably have an analogous set of questions turning around in their heads before they find a like-minded group. For example: "I notice that a lot of people make silly mistakes when thinking about things; how can I stop myself from making these same mistakes?"
Hypothetically, if we (as in the broader rationalist community) were to construct an effective campaign to capture people in this state, what would it look like?
Replies from: Vulture, ChristianKl, None
↑ comment by Vulture · 2014-09-29T18:13:36.335Z · LW(p) · GW(p)
Also, where would we send them? I think that if we're going to do any kind of outreach we should set up a good subsidiary forum, to try and minimize Eternal September effects.
(As a model, the HPMoR subreddit seems to be something like this (albeit with a narrower focus) for a lot of people. The Less Wrong Lounge or something, maybe?)
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2014-09-29T18:42:16.309Z · LW(p) · GW(p)
I've long been of the opinion that Less Wrong itself is weird enough to be offputting to a lot of people, but it's not the only obvious landing page any more. You could point people to Less Wrong or CFAR or Effective Altruism, or to the discourse-o-sphere for which SSC is an exemplar, or whatever other places I've forgotten. Hell, a reading list with a dozen popular books would be enough for the drive-by inquisitive lay-person.
Replies from: fortyeridania, Vulture
↑ comment by fortyeridania · 2014-09-29T21:43:38.238Z · LW(p) · GW(p)
What's SSC? I doubt it's one of these.
Replies from: gjm, polymathwannabe, Randaly
↑ comment by gjm · 2014-09-30T17:03:54.744Z · LW(p) · GW(p)
Others have already explained what it stands for and provided a link; it may be worth adding that the author of that blog is also known on LW as Yvain, who once upon a time was one of the best and most prolific LW contributors; his particularly highly rated posts include one on the notion of "disease", one about metacontrarianism, one clarifying what it means when a model says something is almost certainly true, one on efficient charity, one introducing prospect theory, one about buying houses, one about Schelling fences, one about the worst argument in the world. (He's still one of the best but participates rather little.) He's also the guy who does the annual Less Wrong survey.
Replies from: fortyeridania
↑ comment by fortyeridania · 2014-10-01T06:33:18.838Z · LW(p) · GW(p)
Yeah. I actually have read a fair bit of Yvain's blog. I just had never thought of abbreviating its name before, so I blanked when I saw "SSC."
That said, I am sure a lot of other readers of this thread may not have known about Yvain's contributions.
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-10-02T09:54:28.203Z · LW(p) · GW(p)
That said, I am sure a lot of other readers of this thread may not have known about Yvain's contributions.
Especially new readers, who might find LW too abstract ;) It's good to avoid using too many abbreviations.
↑ comment by polymathwannabe · 2014-09-29T21:51:37.325Z · LW(p) · GW(p)
Replies from: fortyeridania
↑ comment by fortyeridania · 2014-09-29T21:55:59.559Z · LW(p) · GW(p)
Thanks.
↑ comment by ChristianKl · 2014-09-30T17:08:07.860Z · LW(p) · GW(p)
At the moment I don't think we have good answers for the core questions. Good reasoning is hard. Pretending that it's possible by following a few quick fixes might make it easier to reach more people but it brings in people who don't belong.
Replies from: gjm
↑ comment by gjm · 2014-09-30T22:06:18.663Z · LW(p) · GW(p)
I take it the point is to bring in people who don't belong yet but who might turn out to belong when they've thought about it some more.
(Not necessarily to Less Wrong as such -- which might do best to remain a forum for sometimes-intimidatingly-technical discussion that preferentially attracts the very clever -- but to "the broader rationalist community".)
Replies from: None
↑ comment by [deleted] · 2014-10-06T10:05:32.587Z · LW(p) · GW(p)
(Not necessarily to Less Wrong as such -- which might do best to remain a forum for sometimes-intimidatingly-technical discussion that preferentially attracts the very clever -- but to "the broader rationalist community".)
I'm not sure to what degree we should deliberately constitute a "rationalist community" if we want to raise the sanity waterline among the population at large rather than make friends with other nerds who like all the same stuff as us.
comment by roystgnr · 2014-09-30T16:30:48.036Z · LW(p) · GW(p)
Has anyone on LessWrong noticed this new Elon Musk interview yet? Even through the intermediation of the reporter he seems to convey the gist of the concepts of existential risk, the Fermi paradox, and the great filter and simulation arguments.
Replies from: Manfred
↑ comment by Manfred · 2014-10-03T00:57:50.193Z · LW(p) · GW(p)
This reporter (Ross Andersen) also wrote a piece on Bostrom. So I'm gonna guess that it's not about Elon Musk getting things through Ross, but rather Ross writing what he wanted to write. In short, yay Ross Andersen.
Replies from: Sarunas
comment by FiftyTwo · 2014-10-04T21:02:01.012Z · LW(p) · GW(p)
Does anyone know of any studies that show that people tend to regard their enemies as innately evil?
I've seen it claimed a lot here but haven't been able to find a source beyond Eliezer's post.
comment by [deleted] · 2014-09-30T04:39:34.698Z · LW(p) · GW(p)
What are some online (or offline but generally accessible) clusters that would appeal or be valuable to a typical lesswrong reader, but that have little obvious intersection with lesswrong memespace?
What does it mean if there aren't any? Does a cluster just expand to its natural limits? I wonder if the space of general contemporaneous approaches to "thinking about thinking" ultimately maps down to just a few personality types.
Replies from: D_Malik, gjm, tut, Metus
↑ comment by D_Malik · 2014-10-02T07:47:59.521Z · LW(p) · GW(p)
Some clusters that seem related but not much discussed on LW:
The "aspiring Mentat" cluster, which includes the entire mnemonics subculture, various brain-training groups, the mental math subculture, and some parts of the magic tricks / mentalism subculture and professional gambling subculture. Some weirder parts are the lucid dreaming groups, the hypnosis groups, and the tulpamancy groups. Slightly overlapping subcultures are those around various games, e.g. chess and speed-solving of Rubik's cubes. For an example, see the book Mind Performance Hacks, or the Mentat Wiki. This overlaps with some very obscure Russian inventions, such as the TRIZ system of innovation, the theory of "psychonetics", and the Trachtenberg system of speed mathematics. There's also some overlaps with conlang subculture, such as Ithkuil and Lojban.
The "aspiring Ubermenschen" cluster. Some names that come to mind as prototypical: Tim Ferriss, Jason Shen, Sebastian Marshall. This is a part of the larger productivity culture, which includes e.g. Cal Newport, the GTD people, etc. They tend to monetize their writings, for obvious reasons. There's a spectrum here from the saner groups to the more woo-ful, e.g. Steve Pavlina. This overlaps with a "drugs for self-improvement" subculture, which includes various nootropics groups, and parts of the steroid subculture. Also overlaps with the self-tracking / quantified self subculture.
The "outlandish schemes to improve the world" cluster, which includes e.g. Esperanto, veg*anism, the writings of Buckminster Fuller, various anti-nationalism movements, etc. (Veg*anism definitely correlates with Esperanto, for instance. Of course, a lot of veg*ans don't engage in the rest of this cluster.) Overlaps with more woo-ish things like various forms of non-theistic spirituality.
Some others:
The psychoactive drug subculture.
The cypherpunk subculture. "Hacker" culture in general is very close to LW memespace.
The manosphere.
Also groups associated with various professions, such as tech people, econ people, and math people.
Replies from: Letharis, Arkanj3l
↑ comment by Letharis · 2014-10-04T13:40:12.737Z · LW(p) · GW(p)
Great list, but why the manosphere?
Replies from: D_Malik
↑ comment by D_Malik · 2014-10-05T23:02:34.287Z · LW(p) · GW(p)
It has lots of "rah squats and oats and psychosocial dominance!" which LWers (mostly nerdy men) need more of, plus many here seem interested in it. (Not interested in getting into a protracted debate about its merits, though - we have more than enough of that.)
↑ comment by gjm · 2014-09-30T22:02:06.442Z · LW(p) · GW(p)
It may be worth clarifying that "cluster" here is (I take it) intended to have roughly the same meaning as in the old OB post The correct contrarian cluster, meaning something like "set of somewhat-related ideas". So mushroom is, I think, asking whether there are ways of looking at the world, or (so to speak) toolboxes for thinking, that aren't already familiar to most of the LW readership but might be useful.
(mushroom, please correct me if I've got it wrong.)
comment by Zubon · 2014-10-05T18:57:01.910Z · LW(p) · GW(p)
Is there any set of issues this argument will not work with? From Leaving LW:
It’s amazing how quickly you spot the flaws in a community once you stop thinking of yourself as a part of it. The ridiculous emphasis on cryonics and fear of death which the community inherited from Eliezer. The fact that only about 10% of the community is veg*n, when veg*nism is pretty much the best litmus test I know for whether someone actually follows arguments where they lead.
("veg*n" = vegetarian/vegan)
The writer self-identifies as an animal rights activist. Hence, "veg*nism is pretty much the best litmus test I know for whether someone actually follows arguments where they lead," while cryonics is a cult. If you are closer to the LW core, you can conveniently reverse it with no loss: "cryonics is pretty much the best litmus test I know for whether someone actually follows arguments where they lead," while veg*nism is a cult. Or insert your own pet issue: existential risk, feminism, monarchism, effective altruism, Objectivism, communism, pretty much any -ism. Whichever one you believe in most is the best test for whether someone seriously follows arguments to their logical implications; whichever one those other people believe in most is just distracting them from your really important issue. This is why you are right but people who agree with you only 90% are "ridiculous."
The ending is a great example of how to extend that argument: "Obviously, the ideal solution is" for everyone else to agree with you and focus more on your issue.
Replies from: Vulture, D_Malik, drethelin, MrMind, None
↑ comment by D_Malik · 2014-10-05T23:24:52.742Z · LW(p) · GW(p)
Well, to a large extent it is indeed true that you shouldn't trust people who disagree with things you think obvious. So there's a sort of "conservation of smartness" going on, whereby you need to be smart already in order to collect a few "obvious" beliefs that you can then use as your litmus test. So for that person, if they really do think veg*nism is obvious, they might be "doing the best they can" in rejecting LW for that.
FWIW, I'm not a vegan anymore, but I'd agree that any attempt to "minimize total suffering" would have to include not eating meat, ceteris paribus. So anyone who claims to have that goal but still eats meat is either a liar, or suffering from some sort of "intra-self disagreement", or they believe ceteris is not paribus (e.g. "eating meat somehow lets me work harder on saving the world"), or they're uninformed. (Or something else.)
Protip: type '\*' to make a '*' symbol without LW thinking you want italics.
Replies from: None
↑ comment by [deleted] · 2014-10-06T10:08:09.374Z · LW(p) · GW(p)
FWIW, I'm not a vegan anymore, but I'd agree that any attempt to "minimize total suffering" would have to include not eating meat, ceteris paribus. So anyone who claims to have that goal but still eats meat is either a liar, or suffering from some sort of "intra-self disagreement", or they believe ceteris is not paribus (e.g. "eating meat somehow lets me work harder on saving the world"), or they're uninformed. (Or something else.)
Or they're just satisficing rather than maximizing.
comment by advancedatheist · 2014-10-02T03:21:37.277Z · LW(p) · GW(p)
Damn. Ralph Whelan, a former cryonicist and Alcor employee in the early 1990s, died in his sleep the other day at age 46, and his parents plan to bury him conventionally.
Apparently he wore his Alcor bracelet, but he let his funding lapse.
That sucks. I knew him slightly back then, and I hadn't talked to him for years.
Replies from: None↑ comment by [deleted] · 2014-10-06T10:10:47.023Z · LW(p) · GW(p)
What's the probability you put on cryonics actually working well enough to resurrect the deceased under scenarios of: medicine of 10 years from now, medicine of 20 years from now, just go ahead and assume a Friendly superintelligence?
Replies from: advancedatheist↑ comment by advancedatheist · 2014-10-06T18:22:59.632Z · LW(p) · GW(p)
You don't treat cryonics like a game of chance where the probability lies out of your control. You treat cryonics like a project where your efforts force probability in directions favorable to you. Thomas Donaldson explained it this way years ago. The whole essay deserves reading:
http://www.alcor.org/Library/html/probability.html
Here is an example of the problem I'm raising, with the issues raised to an absurd level just for clarity. A new gambling house sets up in Reno. The owner undertakes to bet with everyone about whether or not he, the owner, will do his laundry tomorrow. Bets are made today and close at 6 PM. (Perhaps gambling houses already operate this way?) Do we, then, expect a rush of clients? The problem with this bet is that he, the owner, has some control over whether or not he does his laundry. Not only are the dice loaded, but he gets to pick, after all bets are laid, which loaded die to use. Computing probabilities only makes sense when the events bet upon are known to be random.
Ralph Whelan, by contrast, didn't bother to "load the dice" by keeping his funding intact.
Replies from: None↑ comment by [deleted] · 2014-10-06T20:23:22.245Z · LW(p) · GW(p)
You don't treat cryonics like a game of chance where the probability lies out of your control. You treat cryonics like a project where your efforts force probability in directions favorable to you.
No, we don't, because, to my knowledge, there is no active effort being poured into testing and improving the methods of preservation and resuscitation offered by cryonics providers. Cryonics is given as a take-it-or-leave-it proposition, and as such, I cannot assign a high probability that it works.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-07T16:23:20.498Z · LW(p) · GW(p)
No, we don't, because, to my knowledge, there is no active effort being poured into testing and improving the methods of preservation and resuscitation offered by cryonics providers.
While the funding could be better, there is the Brain Preservation Foundation.
On the other hand, a lot of Xrisk prevention also increases the chances of successful revival.
Replies from: None↑ comment by [deleted] · 2014-10-08T11:50:09.821Z · LW(p) · GW(p)
While the funding could be better, there is the Brain Preservation Foundation.
To which I already donate.
On the other hand, a lot of Xrisk prevention also increases the chances of successful revival.
Has anyone ever put together a budget of how much money "existential risk prevention" actually needs? Because it seems to show up in this community as a black hole of possible altruism which can never be filled.
comment by sixes_and_sevens · 2014-09-29T15:20:38.530Z · LW(p) · GW(p)
Any recommendations for introductory overviews of cognitive models of categorisation (e.g. prototype theory, exemplar theory, etc.)?
I'm trying to develop a high-level view of how people go wrong when reasoning about groups. I understand this well enough from the positions of statistical inference and categorical logic. What I'm looking for is convenient literature on theories of how human brains put objects into categories.
Replies from: NancyLebovitz, lucidian↑ comment by NancyLebovitz · 2014-10-01T18:12:35.062Z · LW(p) · GW(p)
Thanks-- I hadn't heard of exemplar theory.
comment by VAuroch · 2014-09-29T22:49:13.625Z · LW(p) · GW(p)
I have had the loose intuition for a while that I don't form habits in the sense that other people describe habits; doing something daily or more doesn't reduce the cognitive load in doing it, even after maintaining the pattern for >10 months with minor deviations (this has been true of my Soylent Orange diet). Additionally, even when I have a pattern of behavior that has kept up consistently for >1 year, less than a week of skipping it is enough to destroy all my inertia for that "habit" (this was my experience with Anki).
Two questions: Does this seem like a genuine significant discrepancy from baseline, and has anyone else experienced something like it?
Replies from: pianoforte611, hyporational, None, ChristianKl, FiftyTwo↑ comment by pianoforte611 · 2014-09-30T21:31:08.657Z · LW(p) · GW(p)
Not being able to form certain kinds of habits sounds plausible to me. When I stopped wearing retainers it was as though I had never had them. I never even thought about putting them in again.
Not having habits sounds a bit impossible to me. A habit is just a way of doing things consistently and without thinking about it. For example, you probably button your shirt up the same way every time. You probably have some quirks in how you walk or sit that you couldn't get rid of easily. If you play an instrument, you almost certainly have habits in how you play notes and phrases that would take time to relearn. Have you done any public speaking training? If not, when you talk in front of an audience you probably use filler words like "and, so, um, like" and it usually takes time to get rid of those. Do you not do any of these things?
Replies from: VAuroch↑ comment by VAuroch · 2014-10-01T06:47:26.204Z · LW(p) · GW(p)
For the specific things you mentioned? No, neither, n/a, and insufficient data for meaningful answer; definitely have filler but unclear whether the distribution of that filler is at all consistent.
For most things, I do have ways I do things consistently but I am specifically thinking about it at the time. So, to use the example of making Soylent Orange again: there is a particular sequence of putting ingredients into the blender that seems to make it mix the easiest and be completed the fastest (sunflower seeds first, then marmite + spinach leaves to wipe the marmite off the measuring spoon, then flour/bananas before OJ/milk). I usually follow this pattern, but if my attention wanders from it, often deviate from it in an unpredictable way, skipping and/or re-arranging steps, and when I regain focus have to basically review all my actions since I entered the kitchen to figure out which things I've done and in what order and whether I've missed any steps and whether this will put it at risk for a spill.
Replies from: pianoforte611↑ comment by pianoforte611 · 2014-10-01T12:27:26.496Z · LW(p) · GW(p)
None of your Soylent making procedure sounds unusual to me. Not being able to form procedural habits of a certain kind (or not choosing to) is entirely possible.
What I am wondering is if you are lacking muscle memory, which many physical habits are an instance of. Perhaps I'm just using the word "habit" in a more inclusive way than you are, classifying things as habits that you wouldn't. Do you think that is the case? Not having muscle memory sounds impossible to me - if you can walk and have a conversation at the same time then you have muscle memory (basically the ability to put something on auto-pilot).
If you can play any reasonably complicated piece of music then you have muscle memory.
If you consistently make the same mistakes when playing a sport and have a least a tiny bit of difficulty correcting them then you have muscle memory.
Do you have muscle memory?
Replies from: VAuroch↑ comment by VAuroch · 2014-10-01T19:03:29.677Z · LW(p) · GW(p)
I do have muscle memory, at least for basic things. I do not think that should be lumped in with habits, for the most part.
Replies from: pianoforte611↑ comment by pianoforte611 · 2014-10-01T21:53:24.628Z · LW(p) · GW(p)
Not all habits are muscle memory, and not all instances of muscle memory are habits. But a lot of habits are muscle memory: crossing your legs when you sit, leaning on one leg when you stand, lip biting, cracking knuckles to name a few things.
↑ comment by hyporational · 2014-09-30T02:34:25.879Z · LW(p) · GW(p)
It seems to me that a habit must be a bit more complex than Anki reviews or fixing Soylent to notice the difference in reduced cognitive load. Habits dying easily sounds pretty normal to me unless they're intrinsically fun or there's a strong immediate incentive.
I've had a slightly different kind of problem with habits. Since I started working full time it seems I can't form or maintain habits outside of work. My free time has become quite chaotic. I still get some things done but it seems never the same way or the same order. My work has become quite habitual and I don't have to put effort into thinking about what to do next most of the time. The difference in cognitive load is huge, I used to be exhausted every day after work and now my energy levels are just fine.
Replies from: VAuroch↑ comment by VAuroch · 2014-09-30T05:29:13.754Z · LW(p) · GW(p)
If it wasn't clear: Soylent Orange is more complex than Soylent itself; it runs several ingredients through a blender, and takes more effort than cooking some basic meals. And going from 'hungry' -> 'eat' is something I have to specifically exert mental effort to do, so while this has been simpler and less effortful than my previous diet (and healthier), it has still been a significant distraction.
I didn't seem to develop habits at my most recent job, either, but that lasted all of four months before they lost the budget for my position, so that's not necessarily conclusive.
↑ comment by [deleted] · 2014-09-30T03:42:43.920Z · LW(p) · GW(p)
Do you have any habits? How did you form them? Do you think you could deliberately form a bad habit?
I think the success rate for most attempts at self-initiated personal change is poor (think New Year's resolutions, quitting smoking), although the expected value from trying is often still good. Despite this, the habit model seems to describe most people quite well: the levers you can pull are less powerful than the levers you can't (or won't).
I've had good luck with the very behaviorist "habit formation is learning a cue-behavior-outcome relationship" model of thinking about habits. Is there a cue (to deploy the learned response)? Is there an outcome that reinforces the behavior?
I've had some habits stick and not stick, and the ones that stuck had "more going for them" than just being repeated every day.
Replies from: VAuroch↑ comment by VAuroch · 2014-09-30T05:22:56.612Z · LW(p) · GW(p)
As far as I can tell, no, I have no habits. There are even some human-standard habits (notably "when hungry, eat") that I definitely do not have. The closest I get is default behavior when bored (which could be summarized as simple 'Internet.', but I think actually cashes out closer to 'relentlessly seek out new things', for which Internet is usually the easiest method.)
EDIT: In that model of habit-forming, the discrepancy is probably in the 'outcome' step. I think I might disassociate sufficiently from my autonomic responses to stimuli that they don't meaningfully affect habit formation.
↑ comment by ChristianKl · 2014-10-01T17:12:37.659Z · LW(p) · GW(p)
Anki is structured so that it doesn't ask you the same questions today that it asked you yesterday. That makes habit building harder.
When it comes to habit building, structure also matters. If you do 20 minutes of Anki every day after waking up, that's more likely to become a stable habit than doing it at random times each day.
↑ comment by FiftyTwo · 2014-09-30T19:28:40.171Z · LW(p) · GW(p)
To add a random data point I have a similar experience. I've found setting dailies on habitrpg helps a bit.
Replies from: VAuroch↑ comment by VAuroch · 2014-10-01T06:35:27.359Z · LW(p) · GW(p)
I have found several methods that help manage my routine (HabitRPG was one), but they are usually short-term solutions and never seem to actually ingrain habits in the sense of behaviors which don't require conscious organization/maintenance.
comment by Lumifer · 2014-10-01T19:41:47.261Z · LW(p) · GW(p)
This post and the ensuing discussion led me to construct the following hypothetical scenario.
In the port there are three old ships which are magically exactly the same. One is owned by Mr.Grumpy, one is owned by Mr.Happy, and one is owned by Mr.Doc. The three ships are about to go on (yet another) transatlantic voyage and the owners are considering whether to send for a refit instead.
Mr.Grumpy is a worrywart and the question of his ship's seaworthiness has been at the forefront of his thoughts for a while. His imagination drew him awful pictures of his ship breaking up in the waves and more than once he woke up in cold sweat in the middle of the night. However Mr.Grumpy is capable of self-reflection and knows that he's prone to excessive worrying. He decides to compensate for his bias and is successful at manipulating his mind to quell his doubts. His ship sails off.
Mr.Happy is an optimist. He does not dwell on the possibilities of failure and is sure that concentrating on the positive is the right way to go. He is not reckless but understands that life includes risks and useless worrying just leads to ulcers and not much else. His ship sails off.
Mr.Doc is a nerd. He very carefully calculates the probability that his ship will not make it across the ocean this time. The probability is, of course, non-zero. He looks at this probability and deems it acceptable. His ship sails off.
And now I wonder what W.J.Clifford would say about Mr.Grumpy, Mr.Happy, and Mr.Doc. Is any of them guilty of anything? Are some more (or less) guilty than others?
Replies from: KPier↑ comment by KPier · 2014-10-01T23:28:23.333Z · LW(p) · GW(p)
Assume there's a threshold at which sending the ship for repairs is morally obligatory (if we're utilitarians, that is the point at which the cost of the repairs is less than the probability of sinking times the cost of the ship sinking, taking into account the lives aboard, but the threshold needn't be utilitarian for this to work.)
Let's say that the threshold is 5% - if there's more than a 5% chance the ship will go down, you should get it repaired.
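The utilitarian version of this threshold is just an expected-value comparison. A minimal sketch, with invented figures (the ship value, lives included, and refit cost are hypothetical, not from the comment):

```python
def should_repair(p_sink, loss_if_sunk, repair_cost):
    """Repair when the expected loss from sailing exceeds the repair cost."""
    return p_sink * loss_if_sunk > repair_cost

# Hypothetical figures: a $1M ship (lives folded into the valuation)
# versus a $50k refit. 50k / 1M gives a 5% action threshold.
should_repair(0.10, 1_000_000, 50_000)  # True: above the threshold, repair
should_repair(0.03, 1_000_000, 50_000)  # False: below it, sailing is defensible
```

On this model the 5% figure isn't arbitrary: it falls straight out of the ratio of repair cost to the loss if the ship sinks.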
Mr. Grumpy's thought process seems to be 'I alieve that my ship will sink, but this alief is harmful and I should avoid it'. He is morally justified in quelling his nightmares, but he'd be morally unjustified if in doing so he rationalized away his belief 'there's a 10% chance my ship will sink' to arrive at 'there's a 3% chance my ship will sink' and thereby did not do the repairs.
Likewise, it's great that Mr. Happy doesn't want to worry, but if you asked him to bet on the ship going down, what odds would he demand? If he thinks that the probability of his ship going down is greater than 5%, then he should have gotten it refitted. If he knows he has a bias toward neglecting negative events, and he knows that his estimate of 1% is probably the result of rationalization rather than reasoning, he should get someone else to estimate or he should correct his estimate for this known bias of his.
Mr. Doc looks at this probability and deems it acceptable (so, presumably, below our action threshold). He is not guilty of anything.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-02T14:48:08.122Z · LW(p) · GW(p)
Assume there's a threshold at which sending the ship for repairs is morally obligatory.
Sorry, I am unwilling to assume any such thing. I would prefer a bit more realistic scenario where there is no well-known and universally accepted threshold. The condition of ships is uncertain, different people can give different estimates of that condition, and different people would choose different actions even on the basis of the same estimate.
In particular,
Mr. Doc looks at this probability and deems it acceptable (so, presumably, below our action threshold)
Mr.Doc has his own threshold which does not necessarily match yours or anyone else's or even whatever passes for the society's consensus.
Replies from: KPier↑ comment by KPier · 2014-10-03T04:30:37.580Z · LW(p) · GW(p)
Sorry, I am unwilling to assume any such thing. I would prefer a bit more realistic scenario where there is no well-known and universally accepted threshold. The condition of ships is uncertain, different people can give different estimates of that condition, and different people would choose different actions even on the basis of the same estimate.
It doesn't have to be well-known. Morally there's a threshold. Everyone who is trying to act morally is trying to ascertain where it should be, and everyone who isn't acting morally is taking advantage of the uncertainty about where the threshold is to avoid spending money. That doesn't change that there is a threshold.
Consider doctors sending patients in for surgery after a cancer screening. It is hard to estimate whether someone has cancer, and different doctors might recommend different actions on the basis of the same estimate. This does not change the fact that there's a place to put the threshold that balances the risk of sending in patients for unnecessary surgery and the risk of letting cancer spread. On any ethical question this threshold exists. We don't have to be certain about it to acknowledge that judging where it is, and where cases fall with respect to it, is basically always what we're doing.
Mr. Doc's actions are morally right to the extent he's right (given the evidence he could reasonably have acquired) about the threshold.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-03T14:36:59.076Z · LW(p) · GW(p)
It doesn't have to be well-known. Morally there's a threshold. Everyone who is trying to act morally is trying to ascertain where it should be
So, are you assuming moral realism? That moral threshold which "is", does it objectively exist? Is it the same for everyone, all times and all cultures?
This does not change the fact that, in fact, there's a place to put the threshold that balances the risk
Why do you think there is one specific place? That threshold depends on, among other things, risk tolerance. Are you saying that everyone does (or should have) the same risk tolerance?
Replies from: KPier↑ comment by KPier · 2014-10-03T18:50:10.517Z · LW(p) · GW(p)
I am not sure that we're communicating meaningfully here. I said that there's a place to set a threshold that weighs the expense against the lives. All that is required for this to be true is that we assign value to both money and lives. Where the threshold is depends on how much we value each, and obviously this will be different across situations, times, and cultures.
You're conflating a practical concern (which behaviors should society condemn?) and an ethical concern (how do we decide the relative value of money and lives?) which isn't even a particularly interesting ethical concern (governments have standard figures for the value of a human life; they'd need to have such to conduct any interventions at all.) And I am less certain than I was at the start of this conversation of what sort of answer you are even interested in.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-03T19:00:25.915Z · LW(p) · GW(p)
I said that there's a place to set a threshold that weighs the expense against the lives.
Do you mean one, common threshold or do you mean an individual threshold that might be different for each person? I read you as arguing for one common threshold -- if we are taking about individual thresholds then I don't see any issues -- everyone just sets them wherever they like and that's it.
You're conflating a practical concern (which behaviors should society condemn?)
I don't believe I said anything about what society should condemn.
what sort of answer you are even interested in
My interest started with this, as my post noted, and it mostly focuses on determining the morality of the action solely on the basis of mental states, past and present.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-10-04T06:46:19.600Z · LW(p) · GW(p)
I don't believe I said anything about what society should condemn.
Well, your arguments only make sense if that is how you're interpreting "amoral".
My interest started with this, as my post noted, and it mostly focuses on determing the morality of the action solely on the basis of mental states, past and present.
KPier's whole argument is that the morality of the action depends on the objective conditions of the ship and the objective evidence available to the owner. The owner's mental processes are moral (or amoral) to the extent they cause his beliefs to align (or fail to align) with reality.
As far as guilt, do you think Marx's ghost should feel guilty about the results of his philosophy, or should he just say "well I tried to improve the world"?
Replies from: Lumifer↑ comment by Lumifer · 2014-10-05T03:23:33.028Z · LW(p) · GW(p)
Well, your arguments only make sense if that is how your interpreting amoral.
That sounds strange to me, can you expand on that?
KPier's whole argument is that the morality of the action depends on the objective conditions of the ship and the objective evidence available to the owner.
So then he disagrees with W.J.Clifford, doesn't he? The Clifford quote is all about the subjective.
Replies from: Azathoth123↑ comment by Azathoth123 · 2014-10-07T03:15:27.375Z · LW(p) · GW(p)
That sounds strange to me, can you expand on that?
Your objections amount to the claim that "being able to be evaluated by outside observers" should be a property of morality. This is a necessary property of a theory of what society should condemn; it is less clear why it's a necessary property of morality.
So then he disagrees with W.J.Clifford, doesn't he? The Clifford quote is all about subjective.
And the reason the owner's mental process is immoral is because it leads the owner to evaluate the evidence incorrectly.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-07T15:04:25.527Z · LW(p) · GW(p)
Your objections amount to the claim that "being able to be evaluated by outside observers" should be a property of morality.
Um, no, I don't think so. I don't think I'm making any claims about properties of morality. Mostly, I'm just poking KPier's/Clifford's position to check for coherence.
because it leads the owner to evaluate the evidence incorrectly.
As I posted before, I don't find any objective evidence in that quote besides the two observations that the ship was old and that it sank.
comment by William_Quixote · 2014-10-02T14:37:51.604Z · LW(p) · GW(p)
The effective altruist survey was announced here a while ago and many participated. When announced, it was expected to produce results in September, or in October if more time was needed. It's now October. Does anyone with ties to the survey know when results will be published?
comment by SilentCal · 2014-09-30T19:39:50.598Z · LW(p) · GW(p)
How do you (EDIT: that is, you personally) pronounce AIXI? I find myself reading it with (pseudo-)Chinese phonetics as Aye-She.
Replies from: TylerJay, TsviBT, palladias, polymathwannabe, RowanE, Nornagest, bramflakes, pragmatist, Leonhart↑ comment by palladias · 2014-10-01T21:03:40.338Z · LW(p) · GW(p)
Axe-ee.
(But I habitually drop syllables, unless I think about it actively, hence thinking arby-shop for archbishop for a really long time)
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-10-03T23:27:38.696Z · LW(p) · GW(p)
Me, too. I'm surprised at the amount of variation.
↑ comment by polymathwannabe · 2014-09-30T20:21:59.590Z · LW(p) · GW(p)
Wikipedia gives /'ai̯k͡siː/, which would be like Ike-See.
Replies from: SilentCal↑ comment by SilentCal · 2014-09-30T20:29:57.181Z · LW(p) · GW(p)
I was actually asking descriptively, as a kind of poll. Though I'm actually surprised by that official answer--I had assumed you were 'supposed' to say each letter.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-09-30T20:34:41.516Z · LW(p) · GW(p)
As my brain is wired for Spanish, I was already hearing it as "ike-see" in my head.
↑ comment by bramflakes · 2014-10-01T16:06:56.091Z · LW(p) · GW(p)
ache-sigh
↑ comment by pragmatist · 2014-10-01T09:33:49.665Z · LW(p) · GW(p)
Ay-eye-zai, although I don't think I've ever said the word aloud.
comment by Viliam_Bur · 2014-09-30T08:35:38.734Z · LW(p) · GW(p)
I was thinking about making a new blog, maybe using an anagram of my name for the blog title. Here are the possibilities:
Burial Vim -- has a nice dark flavor, but how many people actually know the meaning of "vim"? I'd never heard it before
Via Librum -- has a nice Latin sound, but it's probably grammatically incorrect. Could someone please check this for me?
I Rival Bum -- uhm... I guess I'll skip this one...
Replies from: gjm, Unnamed, Adele_L, polymathwannabe, Lumifer, army1987, polymathwannabe, Gunnar_Zarncke, bbleeker↑ comment by gjm · 2014-09-30T10:02:07.379Z · LW(p) · GW(p)
I think the anagram-of-your-name thing works better if you're called Scott Alexander than if you're called Viliam Bur.
If I'm interpreting the Perseus output correctly, "librum" is OK as genitive plural of "liber" whose main meaning is "book" -- though the usual form would be "librorum". A blog title that means "the way of books" sounds workable.
I suspect most of your readers will be more familiar with another meaning for "vim". Someone whose interests are just the right combination of geeky and literary might like "Vim Burial" as a title, but I'm thinking that if that were you you'd have said so already.
There are some other interesting words containing in your name's letters -- brumal, Malibu, lumbar, album -- but none of them seems to lead to a coherent phrase.
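Hunting anagrams by hand is error-prone; the letter-multiset check is easy to script. A throwaway sketch (the candidate phrases are the ones from this thread):

```python
from collections import Counter

def is_anagram(phrase, name):
    """True when the two strings use exactly the same letters,
    ignoring case, spaces, and punctuation."""
    letters = lambda s: Counter(c for c in s.lower() if c.isalpha())
    return letters(phrase) == letters(name)

is_anagram("Burial Vim", "Viliam Bur")               # True
is_anagram("Via Librum", "Viliam Bur")               # True
is_anagram("Slate Star Codex", "Scott S Alexander")  # False: one 'n' short
```

Generating candidate phrases from a dictionary is the harder part; checking a candidate is just this multiset comparison.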
Replies from: philh, Gvaerg↑ comment by philh · 2014-09-30T11:29:25.274Z · LW(p) · GW(p)
I think the anagram-of-your-name thing works better if you're called Scott Alexander than if you're called Viliam Bur.
It also helps if you're willing to drop an 'n'.
Replies from: None, gjm↑ comment by gjm · 2014-09-30T12:37:54.254Z · LW(p) · GW(p)
Indeed. As he puts it:
The name of this blog is Slate Star Codex. It is almost an anagram of my own name, Scott S Alexander. It is unfortunately missing an “n”, because anagramming is hard. I have placed an extra “n” in the header image, to restore cosmic balance.
But adding or dropping letters is probably harder to get away with when you have a shorter name.
↑ comment by Adele_L · 2014-10-01T03:32:35.929Z · LW(p) · GW(p)
I think Via Librum is the best, and the phrase seems to occur in actual Latin. However, it is already in use which may or may not be a problem for you.
↑ comment by polymathwannabe · 2014-09-30T16:17:52.214Z · LW(p) · GW(p)
I think I got it. First I tried some combinations in Esperanto, and was very close to a nice result of "vibrating light," but the available vowels didn't help me get the suffixes right.
So I tried something different. Taking the letters I and V to stand for the Roman numeral four, I arrived at this:
aim4blur
Meaning, "point your attention towards things unclear," the unstated next action being, "shoot."
↑ comment by Lumifer · 2014-09-30T14:55:53.511Z · LW(p) · GW(p)
Burial Vim
In unix-ish circles "vim" is the name of a text editor. If you want to bury vim, you're probably a fan of emacs X-)
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-09-30T15:35:27.456Z · LW(p) · GW(p)
Am I the only one who loves gedit?
Replies from: ZankerH, Lumifer↑ comment by ZankerH · 2014-09-30T19:10:39.678Z · LW(p) · GW(p)
Apples, oranges, etc. Vim and Emacs are supposed to (partially) replace the entire userspace of an OS; they're much more than just text editors/IDEs.
Replies from: Antisuji↑ comment by Lumifer · 2014-09-30T16:44:31.428Z · LW(p) · GW(p)
People tend to imprint on whatever text editors they started with :-)
Gedit is too basic for me; in that style of text editor, Sublime is much more full-featured.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-10-03T19:20:21.747Z · LW(p) · GW(p)
People tend to imprint on whatever text editors they started with :-)
Actually (unless I count the time I was a Commodore-using kid or a Windows-using teen *shudders*) IIRC I started with Emacs, though I never really made a serious effort to climb much of its learning curve.
Gedit is too basic for me; in that style of text editor, Sublime is much more full-featured.
Gonna check it out.
Replies from: eeuuah↑ comment by A1987dM (army1987) · 2014-09-30T15:44:02.620Z · LW(p) · GW(p)
Virial Bum?
R.V. Bulimia?
I, Viral Bum?
Rum Alibi V? (This is my favourite one.)
Bim Vu, Liar (and then you'd use Bim Vu as your pen name)?
(Brought to you by an.)
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-09-30T15:46:29.470Z · LW(p) · GW(p)
Virial Bum?
Next year's Brazilian fad dance will be called this.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-10-03T19:15:41.797Z · LW(p) · GW(p)
You're welcome.
(actually I was thinking of hobos not butts FWIW)
↑ comment by polymathwannabe · 2014-09-30T15:18:59.440Z · LW(p) · GW(p)
So far I've only been able to get VR Bulimia, I am I.V. blur, Lumbar VII, and Evil rumba (changing one letter but keeping the same sound).
Have you checked whether it gives a viable anagram in your native language?
Replies from: Vulture, Viliam_Bur↑ comment by Viliam_Bur · 2014-09-30T20:54:30.199Z · LW(p) · GW(p)
Have you checked whether it gives a viable anagram in your native language?
The online anagram programs I tried didn't produce anything useful.
↑ comment by Gunnar_Zarncke · 2014-09-30T10:02:26.640Z · LW(p) · GW(p)
You could try to add "I am" or "The" to your name and look what the anagram generator spits out then.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-09-30T12:40:48.659Z · LW(p) · GW(p)
That feels like cheating. (I totally felt like this when reading the anagram explanation of Harry Potter.) I guess I will just use something other than an anagram. It was just a whim of the moment.
Well, if I were impressed by the result, I would use it, but I guess I'm not. (Though, I could use the anagrams later for some purpose other than the name of the blog.)
Replies from: gjm↑ comment by Sabiola (bbleeker) · 2014-09-30T19:38:30.834Z · LW(p) · GW(p)
Mail Rub IV :-)
comment by Omid · 2014-09-30T03:27:44.100Z · LW(p) · GW(p)
I'm having trans issues and would like to talk to a trans person who has some experience coming out. Send me a PM if you can talk. Thanks.
Replies from: FiftyTwo, polymathwannabe↑ comment by FiftyTwo · 2014-09-30T19:24:32.982Z · LW(p) · GW(p)
I recommend asking on http://ozymandias271.tumblr.com/. You can ask anonymously without having a tumblr account, and Ozy knows everything.
↑ comment by polymathwannabe · 2014-09-30T04:20:17.719Z · LW(p) · GW(p)
The forum emptyclosets.com could be helpful for you.
comment by James_Miller · 2014-09-29T18:03:54.623Z · LW(p) · GW(p)
How are the Hong Kong protesters able to overcome their collective action problems? The marginal value of one extra protester in terms of changing what's going to happen with China has to be close to zero, yet each protester faces a serious risk of death or of suffering long-term negative consequences, because they have to expect that China is carefully keeping track of who is participating. Is this a case of irrationality giving the protesters an advantage, or are there private gains for the protesters?
Replies from: William_Quixote, None, knb, Sarunas, Punoxysm, fortyeridania, Lumifer, Vulture↑ comment by William_Quixote · 2014-09-29T20:05:08.687Z · LW(p) · GW(p)
Robin Hanson's claims aside, some people want to make the world a better place. If someone is always cynical, they will often be wrong about things like this (though to be fair, they'd probably do well on average).
↑ comment by Sarunas · 2014-09-30T15:22:46.687Z · LW(p) · GW(p)
Something like a "safety in numbers" effect: past whatever threshold provokes government action (maybe not before), the greater the number of protesters, the smaller the average danger an individual faces, since the perimeter of the crowd (the most dangerous place) probably grows more slowly than its area (the number of protesters). Furthermore, the bigger the crowd, the harder it is to track identities, and protesters might also expect the government to be unwilling to punish all of them (rather than just the leaders and a small number of others). In addition, the greater the proportion of the population that joins the protests, the greater the peer pressure on others to join as well. I would guess that once the most risk-taking individuals start everything, it becomes easier for others who support them but wouldn't have started the protest themselves.
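The perimeter-vs-area point can be made concrete with a back-of-the-envelope sketch (a toy model, assuming a circular crowd at uniform density and a fixed-width "dangerous" outer ring; all numbers are illustrative):

```python
import math

def edge_fraction(n, ring_width=1.0):
    """Fraction of an n-person circular crowd standing in the exposed
    outer ring, assuming unit density (so area = n)."""
    radius = math.sqrt(n / math.pi)                  # area = pi * r^2 = n
    edge_count = 2 * math.pi * radius * ring_width   # people in the outer ring
    return edge_count / n                            # shrinks like 1/sqrt(n)

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} protesters: {edge_fraction(n):.1%} on the perimeter")
```

Since the exposed fraction falls off as 1/sqrt(n), a hundredfold larger crowd leaves each individual roughly ten times less exposed, which is one mechanism by which joining a large protest can be individually rational.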
↑ comment by Punoxysm · 2014-09-29T22:07:48.835Z · LW(p) · GW(p)
Protesting can be an end in itself; political action can be self-actualizing.
There's a whole mythologized tradition of protest. It's also an intense social activity.
Also, you are likely to treat negative consequences differently when you expect them but consider them unjustly imposed by others. Again, defying those consequences and those who impose them is a powerful end in itself.
↑ comment by fortyeridania · 2014-09-29T21:45:14.964Z · LW(p) · GW(p)
Perhaps they don't overcome them very well; maybe the optimal number of protesters is much higher than the actual number, but a lot of would-be protesters stay home.
↑ comment by Lumifer · 2014-09-29T18:08:06.367Z · LW(p) · GW(p)
Is this a case of irrationality giving the protesters an advantage, or are there private gains for the protesters?
I think it's mostly the former ("fuck it" is an awesome superpower), but some protesters probably have gains in terms of status and reputation. Plus the leaders might be making a bet that if mainland China decides to throw some carrots at them, they'll be in a good position to grab them.
Replies from: James_Miller, Vulture↑ comment by James_Miller · 2014-09-29T19:15:49.502Z · LW(p) · GW(p)
"fuck it" is an awesome superpower
Not if it causes you to drive while drunk, texting, and not wearing a seat-belt. Then it's a cognitive disability.
Replies from: Prismattic, Lumifer↑ comment by Prismattic · 2014-09-30T00:38:53.490Z · LW(p) · GW(p)
Alternative metaphor:
Throwing the steering wheel out of the car while playing a game of chicken = clever. Throwing out the steering wheel AND cutting the brake fluid line -- less clever.
↑ comment by Vulture · 2014-09-29T18:27:00.787Z · LW(p) · GW(p)
And some people are looking for adventure or serious danger.
Replies from: Lumifer, James_Miller↑ comment by James_Miller · 2014-09-29T19:16:53.966Z · LW(p) · GW(p)
So to enhance your sex appeal.
Replies from: hyporational↑ comment by hyporational · 2014-09-30T02:09:21.187Z · LW(p) · GW(p)
The negative response might be due to this.
↑ comment by Vulture · 2014-09-29T18:26:33.989Z · LW(p) · GW(p)
Why do people join the military?
Replies from: Toggle, James_Miller↑ comment by Toggle · 2014-09-29T22:31:56.779Z · LW(p) · GW(p)
Single causes are elusive, obviously, but every friend/relative of mine who joined did share at least one quality: they felt that they were unskilled in formulating goals and then pursuing them. They hoped that the military would provide a (socially vindicated) goal, and further that it would help them gain experience in the generally useful skills of planning and execution.
Basically, they thought that time in the military was a way to beat akrasia in a permanent way. Note that recruiting efforts often advertise this as a primary benefit.
Replies from: gjm↑ comment by gjm · 2014-09-29T23:40:20.511Z · LW(p) · GW(p)
How did this work out for them?
Replies from: Toggle↑ comment by Toggle · 2014-09-30T01:23:34.679Z · LW(p) · GW(p)
Poorly, in my anecdotal examples. The military does seem to have developed some effective ways to build Awesomeness while in the military (evidence available to me includes productive occupation, sustained physical fitness, and proactive social behaviors), but they depend on participation in the military hierarchy to sustain those behaviors. After leaving that hierarchy, one of my friends spent the next year unemployed and working his way through every horror movie on Netflix; another spent four years getting a PhD in underwater archaeology that she decided not to use. Statistically, veterans are unemployed at higher than national averages in the United States, although I suppose there are multiple reasons we might expect that to be true.
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2014-09-30T04:59:23.569Z · LW(p) · GW(p)
four years getting a PhD
Ok, deciding not to use it is suboptimal, but getting a PhD in four years is pretty impressive in itself.
Replies from: Emily↑ comment by James_Miller · 2014-09-29T19:13:24.119Z · LW(p) · GW(p)
For some, patriotism; but even without that, the pay and benefits can make joining the military a rational, self-interested decision.
comment by Punoxysm · 2014-09-29T22:16:02.468Z · LW(p) · GW(p)
Has anyone written a post on arguing by what I'd call Socratic Judo?
In the Socratic method, you question every assertion somebody makes. It's a very obnoxious form of argument, but if somebody doesn't disengage it can ruthlessly uncover their inconsistencies and unstated assumptions.
Socratic Judo, by contrast, lays out a set of premises that you know the interlocutor DOES agree with, in a way and tone they agree with, then attempts to show that these premises lead to something you want them to believe. Now, instead of the argument being centered on the issue in question, it is centered on the premises they already agree with, so that the opponent is left either to go back and alter those premises themselves or to accept the conclusion you want on the judo-issue.
An example would be taking standard progressive sympathies for drug legalization, then bringing in libertarianism as the judo-issue.
Replies from: Luke_A_Somers, Dahlen, MrMind, Gunnar_Zarncke, VAuroch↑ comment by Luke_A_Somers · 2014-09-30T01:27:52.024Z · LW(p) · GW(p)
This sounds like presenting an argument for a thing from shared premises - the most ordinary form of trying to convince someone.
Replies from: fubarobfusco, pjando, Punoxysm↑ comment by fubarobfusco · 2014-09-30T03:24:40.304Z · LW(p) · GW(p)
I don't think arguing from shared premises has ever been as "ordinary" as calling one's opponent a witch, a hater of truth, and a corrupter of the youth.
For one thing, arguing from shared premises exposes the arguer to the possibility that those shared premises might, when justly examined, lead to the opponent's conclusion.
Replies from: hyporational, polymathwannabe, Luke_A_Somers↑ comment by hyporational · 2014-09-30T04:42:06.889Z · LW(p) · GW(p)
I don't think arguing from shared premises has ever been as "ordinary" as calling one's opponent a witch, a hater of truth, and a corrupter of the youth.
That would probably be true in the case of trying to convince an audience. I think Luke referred to convincing your interlocutor.
↑ comment by polymathwannabe · 2014-09-30T21:35:55.886Z · LW(p) · GW(p)
For one thing, arguing from shared premises exposes the arguer to the possibility that those shared premises might, when justly examined, lead to the opponent's conclusion.
In which case you MUST concede the argument.
↑ comment by Luke_A_Somers · 2014-09-30T22:00:11.572Z · LW(p) · GW(p)
Maybe in highly political arguments with an audience. I'm talking about even more ordinary kinds of convincing people than that.
↑ comment by pjando · 2014-09-30T01:46:58.982Z · LW(p) · GW(p)
Yeah, it seems pretty similar to the regular old Socratic Method to me. Except classically I think the Socratic Method was used more to reject a "stop sign" claim and provoke more thought than to make a positive claim. You know, Socrates and his whole "I don't know anything."
Also, the libertarianism example strikes me as a non sequitur: it simply does not follow that if you support drug legalization you support libertarianism.
Replies from: Punoxysm↑ comment by Punoxysm · 2014-09-30T03:15:34.937Z · LW(p) · GW(p)
I skipped a few steps on the example. Think of it like this.
A: "States can do a lot of good."
B: "Well, maybe, but what do you think of drug laws"
A: "They're bad"
B: "What about the military-industrial complex"
A: "Bad"
B: "And you'd agree these are two examples of state power run amok in a structural way that's pretty pervasive across space and time"
A: "I guess so."
B: "So you agree that the state is fundamentally evil, tax is theft, and libertarianism is the answer, right?"
At this point, A will be thrown for a loop if they've never been subjected to these specific arguments before. A has been led to the point where B is rhetorically strongest, and has accepted premises in an unqualified form, which A might now wish to go back and qualify (but then A is arguing against themselves).
Replies from: gjm, jkaufman, pjando↑ comment by gjm · 2014-09-30T16:28:05.436Z · LW(p) · GW(p)
(Whoever downvoted the parent: Consider whether your goals would have been better served by downvoting Punoxysm's original question about "Socratic Judo", rather than this which looks to me like a pretty clear explanation of what s/he means by that term.)
To me, the immediately obvious answer to B's last point is "Huh? Whatever makes you think I agree with that?" and I wouldn't have thought that's a very unusual response. But I'm sure it can be done more subtly.
↑ comment by jefftk (jkaufman) · 2014-09-30T19:25:52.163Z · LW(p) · GW(p)
I'm glad you gave an example, but I suspect A would reply "of course not!".
↑ comment by pjando · 2014-09-30T04:55:26.117Z · LW(p) · GW(p)
Ah, well if I was A I'd recognize B's argument as dishonestly fallacious and would most likely be turned away from his cause. It seems like it could definitely make for effective rhetoric though in different scenarios, with more subtle cases, and with different people. However, I don't think Socrates would approve :)
↑ comment by Punoxysm · 2014-09-30T03:14:36.861Z · LW(p) · GW(p)
I think that's not unreasonable to say, but this is more of a long-form thing, focusing on the rhetorical side and the rhythm of the conversation, and on finding the weakest part of a person's argument (don't steelman anything for them if they can't do it themselves).
↑ comment by MrMind · 2014-10-01T09:19:54.794Z · LW(p) · GW(p)
Interestingly enough, a similar distinction has been made in Nichiren Buddhism: they talk about shakubuku and shoju, which share similarities with the Socratic method (the aggressive way) and what you call Socratic Judo (the persuasive way).
↑ comment by Gunnar_Zarncke · 2014-09-30T10:04:53.548Z · LW(p) · GW(p)
Reminds me of Geek Fu. I wonder whether Socratic Judo is a special school of Geek Fu.
See also Roles are Martial Arts for Agency.
↑ comment by VAuroch · 2014-09-29T22:43:55.263Z · LW(p) · GW(p)
That might be interesting, but it is unlikely to get much support here because it smacks of Dark Arts.
Replies from: pianoforte611↑ comment by pianoforte611 · 2014-09-30T00:08:52.831Z · LW(p) · GW(p)
Uh oh, I'm already a practitioner of Socratic Judo, should I stop?
Although, doesn't most productive discourse happen like this? People start off with agreed-upon ideas and then work through more controversial ones.
Replies from: Punoxysm
comment by CWG · 2014-09-30T13:03:35.504Z · LW(p) · GW(p)
I'm looking for feedback on my blog drafts & posts - I'm not writing for a specifically rationalist audience, but I'd appreciate intelligent feedback on accuracy and additional ideas to possibly include, as well as feedback on how I communicate.
Where is a good place to get such feedback? LessWrong has a lot of the right sort of people, but posting lots of draft posts to the open thread may not be popular.
My blog is Habitua - it's on self-improvement, attempting to be evidence-based as much as practicable.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-01T18:29:38.389Z · LW(p) · GW(p)
Post 1 (http://habitua.net/how-to-use-rewards-to-defeat-procrastination/) : What's the goal of this post? I don't think it's wrong but I can't see the intention behind it.
The post about advertising also seems to lack a clear goal.
Replies from: CWG↑ comment by CWG · 2014-10-02T13:06:39.858Z · LW(p) · GW(p)
Post 1 (http://habitua.net/how-to-use-rewards-to-defeat-procrastination/) : What's the goal of this post? I don't think it's wrong but I can't see the intention behind it.
The intention was to give some direction as to the kinds of plans that can be effective in overcoming procrastination. I can see that more detailed suggestions would be helpful, and I'll look at that in future posts. I'm deliberately keeping posts short, so I actually get them done and posted.
The post about advertising also seems to lack a clear goal.
I thought that was an interesting insight into communication and the nature of advertising, but you're right - the goal was not so clear.
comment by khafra · 2014-10-03T17:17:38.147Z · LW(p) · GW(p)
Alien-wise, most of the probability-mass not in the "Great Filter" theory is in the "they're all hiding" theory, right? Are there any other big events in the outcome space?
I intuitively feel like the "they're all hiding" theories are weaker and more speculative than the Great Filter theories, perhaps because including agency as a "black box" within a theory is bad, as a rule of thumb.
But, if most of the proposed candidates for the GF look weak, how do the "they're all hiding" candidates stack up? What is there, besides the Planetarium Hypothesis and Simulationism? Are there any that don't require a strong Singleton?
Replies from: Aleksander, bramflakes, Izeinwinter↑ comment by Aleksander · 2014-10-04T04:49:53.719Z · LW(p) · GW(p)
I liked this short story on that topic, which I believe was written by Yvain: http://raikoth.net/Stuff/story1.html
↑ comment by bramflakes · 2014-10-03T22:06:28.198Z · LW(p) · GW(p)
"They exist but we don't have the tech to detect them"?
Replies from: khafra↑ comment by khafra · 2014-10-04T01:03:12.070Z · LW(p) · GW(p)
That one shows up in fiction every now and then, but if they're galaxy-spanning, there's no particular reason for them to have avoided eating all the stars unless we're completely wrong about the laws of physics. The motivation might not exactly be "hiding," but it'd have to be something along the lines of a nature preserve; and it would require a strong singleton.
↑ comment by Izeinwinter · 2014-10-05T07:47:48.933Z · LW(p) · GW(p)
Theories other than "Deliberately Hiding";
Colonizing the galaxy is a crazy thing to do, take one: informational/economic network effects and the absolute lightspeed limit make advanced civilizations extremely reluctant to spread out - civilizations of billions that have collapsed into single hive-cities / computational structures. This is, strictly speaking, a variation on hiding, except that the stealth isn't the point; it is a side effect. Colonizing other star systems means dooming yourself to abject cultural, social and material poverty, because it means leaving the reach of the technological and social engines of home. So it doesn't happen, and when it does, the descendants of the original fanatics back-migrate to the origin system because fuck that noise.
Take 2: There are expansionist outlets much easier and faster than interstellar space travel - brane hopping, timeline gates, Dyson swarming.
Weird: Advanced civilizations employ the anthropic principle in engineering, which winnows their very existence out of most timelines. So all civilizations encounter empty universes in the bulk of their world-lines.
Replies from: DanielLC↑ comment by DanielLC · 2014-10-06T19:45:20.053Z · LW(p) · GW(p)
The difference between not colonizing the galaxy and dying out completely is small enough that I'd consider this a subset of dying out, not an alternative.
The size of the universe is irrelevant. If it's bigger, there will be more intelligences to start with. Adding brane-hopping doesn't just mean that all the denizens of this universe will enter others. It also means that the denizens of other universes will enter this one. In fact, since it makes expansion easier, it would mean more alien colonization.
Dyson swarming doesn't have that problem, but they don't take very long on a galactic timescale.
Replies from: Izeinwinter↑ comment by Izeinwinter · 2014-10-07T10:06:50.903Z · LW(p) · GW(p)
Your first point is just wrong. The difference between an extant civilization with a billion-year tradition of science and engineering and a dead one is not irrelevant. Among other things, just because they don't settle does not mean that we might not find their instrumentation lying around. Further, it is one plausible tale of what happens to alien civilizations that think it is a good idea to turn the entire galaxy into more spawn - sooner or later they run into a non-expansionist civilization sufficiently out of their league to summarily crush them.
Re: dyson swarm time scales - building one doesn't take so long in galactic terms? No, I suppose not. Reaching the point where you honestly feel the swarm and the starlift mines are no longer sufficient to support an adequate lifestyle? I am kind of supposing that most advanced civilizations manage to suppress the urge to just reproduce without limit because Azathoth told them to.
Replies from: DanielLC↑ comment by DanielLC · 2014-10-07T16:53:04.990Z · LW(p) · GW(p)
I'm a total utilitarian. If a planet-sized civilization is good, a galaxy-sized civilization is billions of times better. Ergo, a civilization failing to expand to the galaxy-level is 99.9999999% as bad as a civilization dying. And there's no reason to stop at galaxy-sized.
Even for an average utilitarian, a planet-sized civilization will make a trivial change in average utility of the universe compared to a galaxy-sized civilization. If you can get above-average utility, expand. If not, being small is better, but you might as well die.
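The "99.9999999%" figure is just the arithmetic of the assumed ratio; a quick sanity check (treating "billions of times better" as a factor of 1e9, which is of course an illustrative assumption):

```python
# Assumed value ratio: a galaxy-sized civilization is ~1e9 times as
# valuable, in total-utilitarian terms, as a planet-sized one.
galaxy_value = 1e9
planet_value = 1.0

# Fraction of the achievable value forfeited by staying planet-bound:
loss_fraction = 1 - planet_value / galaxy_value
print(f"staying small forfeits {loss_fraction * 100:.7f}% of the value")
```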
I find it unlikely a non-expansionist civilization could pose a threat. An expansionist one has vastly more resources. They can do science faster, especially where it can be done in parallel. I also suspect that science will quickly reach the point of diminishing returns, and a galactic civilization has vastly more resources at its disposal.
Replies from: Izeinwinter↑ comment by Izeinwinter · 2014-10-07T17:26:50.634Z · LW(p) · GW(p)
The theory I am entertaining here is that spreading out crashes average utility hard - If you don't have ftl,
coordination and communication across stellar distances is very, very inefficient - and that includes research. The home system is always going to be a ridiculously better place to live because it has billions and billions of people in it, and embodied technological and social infrastructure representing trillions of being-years of labor, which you can't just stuff into a can and kick over to the next star system. Possibly you can stuff a copy of "enough" of it into a can that a self-sustaining colony could be launched, but the people building that colony? That's a lifetime sentence of abject poverty (as compared to the home system) because they don't have the tools, the markets and the networks of home. They have rocks and sunlight. Rocks and sunlight are useful, but they are not as useful as a contract with Nyarlothep Starmining Inc. Not even close.
And that situation could easily hold for every single civilization with the technological capability to launch a starship. Because if you can build a starship at all, that means you have a technological/social infrastructure representing that "several trillion being-years" of labor investment.
So. Science probes? Sure. Don't have to put people on them, and even if you do put people on them, it'll be the kind of people who value poking the tiger analogue on the plains of x3ytei9 - 5 with a probe over any other consideration. But colonization? That involves creating new people who didn't volunteer for that post. Never gonna pass muster.
And the "resources" of a hegemonizing swarm don't matter at all. If "overrun the galaxy" is something you are inclined to do, you will do so as soon as you have the capability. Then, when you run into the people 3960 star systems over who reached that threshold and decided not to be insane... 228 million years earlier... the fact that you have more rocks isn't going to help.
Replies from: DanielLC↑ comment by DanielLC · 2014-10-07T18:31:35.018Z · LW(p) · GW(p)
I don't mean to suggest that they'd spread out as opposed to building dyson spheres. They would do both, and I don't think they'd send out starships until they're a good way through building the dyson sphere. Taking years to transmit data would make parallelization difficult, but if nothing else, they can at least do science in a larger system than the one they're born in. And they can always smash a few stars together to get even more power.
Each star system would spend the vast majority of its lifetime as a dyson sphere with trillions of being-years of labor behind it. The first settlers won't make a dent in the average utility. Besides, compared to the cost of simulating a mind, the cost of making it happy would be negligible. You can already imagine utopia. You just need to mess around with the little part of your brain that distinguishes reality from fiction.
"And the "resources" of a hegemonizing swarm don't matter at all."
I can understand more resources dominating, and I can understand more science dominating, but the expansionists have both. And even if they didn't, would the isolationists even care? I suppose from an average utility point of view, they'd have to wipe them out, and then send out probes to wipe out any other expansionists with below-average utility, but they'd have to become expansionists themselves to outweigh all the expansionists that take over a galaxy before meeting isolationists. I don't think an isolationist would be average or total utilitarian.
There's also the strategy of just pulling as much of the universe together as you can. You won't get nearly the population of an expansionist, but if it's that important that nobody ever has to be stuck on a starship with only a billion people for company, it can at least get you vastly more resources than a strict isolationist without having to spread out.
comment by [deleted] · 2014-10-02T20:11:04.928Z · LW(p) · GW(p)
Duke University question:
I am applying for a job at Duke University, in the library. This job interests me greatly because it is exactly the sort of position I have been training myself for. It is a position that I know I am qualified for and that I know I could make a worthwhile impact in. My chief concern is lack of networking opportunities.
I do not and have not attended Duke and have no networking contacts in Duke (my closest contact is a graduate of Chapel Hill). Since I also do not live in North Carolina at the moment, I know these two things (distance, lack of name association) will be working against me during the application process. For that, I can only let my accomplishments speak for themselves. If they don't convince the administrators that I will fill the role properly, nothing else will.
However, one other thing the lack of networking makes difficult for me, going in, is personal knowledge of the university and its library system: knowing who the important people are and what the important points of the university's culture are. I have the information from my own research of the institution, but that is not the same as personal knowledge.
Does anyone here work for Duke University, or is anyone here familiar with its library system? I would like to speak to someone who knows the current situation of the Duke University libraries, who knows the active people, the potential trouble spots. As I said, I'm not looking for someone to boost me up in the application process. My own accomplishments must do that. I would just like an idea of how the Duke University libraries are from the inside, who the main people are, and whether there are any current points of interest or trouble affecting the library system as a whole.
Replies from: therufs↑ comment by therufs · 2014-10-03T02:52:29.602Z · LW(p) · GW(p)
I know someone who works in the library (and am checking to see if she'd be willing to chat with you!) Having some familiarity with Duke, though, my general impression is that the library system is huge and who/what is relevant would be dependent on what you're applying for.
I also know a librarian who's recently moved from NCSU to UNC, if you find any jobs to apply for there and want intel :)
Replies from: None↑ comment by [deleted] · 2014-10-03T13:50:35.847Z · LW(p) · GW(p)
Thanks for the info. I'm actually looking into the other schools in the region as well (both UNC and NCSU have jobs I'm researching) and preparing a few more applications. I definitely would not mind some info before going in!
I'm applying for the Head, Humanities Section and Librarian for Literature & Theater Studies. It's a supervisory job and, from what I understand of the job posting, I'd be moving around between libraries a bit as supervisor and liaison. However, I suspect I'll be stationed mostly in the main library, William R. Perkins, or possibly with Literature & Theater (which I think is Friedl).
comment by advancedatheist · 2014-10-01T15:30:05.933Z · LW(p) · GW(p)
We still have plenty of space for people to attend the END DEATH Cryonics Convention in Laughlin, Nevada, next month. And Mr. Don Laughlin, the owner of the Riverside Resort, has worked with the Venturists to make the convention very affordable, compared with the similar event Alcor holds every few years:
comment by Flipnash · 2014-09-30T20:10:26.197Z · LW(p) · GW(p)
Is there any discussion on the uses of friendliness theory outside of AI?
My first thought was that it seems like it could be useful in governance in politics, corporations, and companies.
I heard about DAOs (decentralized autonomous organizations), which are weak AIs that can piggyback off of human general intelligence if designed correctly, and thought that friendliness theory would be useful for them too, especially because they share a lot of the same problems as good old-fashioned AGI.
comment by [deleted] · 2014-09-29T16:47:11.232Z · LW(p) · GW(p)
Online editing jobs:
Does anyone have any good resources for finding work online as an editor? I'm not sure what resources, organizations, or platforms are available. I figured, with the self-movers at LW, someone would have gone hunting around and found some useful resources before.
EDIT: Because the question came up from Lumifer, here is my experience so far in editing, as outlined in a reply to their questions:
I have worked as an editor for a civil rights museum finding aid, a series of creative writing theses, a newspaper, and a biochem research project. All of my work has been well regarded when compared to the work of other editors.
"Well regarded," in this case, means: I was officially acknowledge by the curator of the museum to a public audience; highly recommended by the authors of the theses to others; paid well and respected enough to operate independently by my employer at the newspaper; and thanked gratefully by the biochem friend.
The biochem project is the weakest accomplishment (I had little input on improving the content itself). I was most comfortable with the creative writing theses (I have a background in the subject allowing familiarity). The museum finding aid was the one I found most rewarding. The newspaper was the one I sought the most assistance with (deferring questions to my supervisor, utilizing reference materials to improve my work).
Replies from: Lumifer↑ comment by Lumifer · 2014-09-29T17:10:10.417Z · LW(p) · GW(p)
Editor of what? Fiction, technical writing, college essays, ...?
Replies from: None↑ comment by [deleted] · 2014-09-29T18:01:10.672Z · LW(p) · GW(p)
Whatever pays.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-29T18:03:48.306Z · LW(p) · GW(p)
Whatever pays.
The important question is what are you capable of doing competently.
Replies from: None, None↑ comment by [deleted] · 2014-09-29T18:58:52.799Z · LW(p) · GW(p)
I have worked as an editor for a civil rights museum finding aid, a series of creative writing theses, a newspaper, and a biochem research project. All of my work has been well regarded when compared to the work of other editors.
"Well regarded," in this case, means: I was officially acknowledge by the curator of the museum to a public audience; highly recommended by the authors of the theses to others; paid well and respected enough to operate independently by my employer at the newspaper; and thanked gratefully by the biochem friend.
The biochem project is the weakest accomplishment (I had little input on improving the content itself). I was most comfortable with the creative writing theses (I have a background in the subject allowing familiarity). The museum finding aid was the one I found most rewarding. The newspaper was the one I sought the most assistance with (deferring questions to my supervisor, utilizing reference materials to improve my work).
comment by Daniel_Burfoot · 2014-09-29T15:33:41.932Z · LW(p) · GW(p)
Here's a prediction about the future, which I will make because I am going to help build it. People are going to automatically construct world-knowledge databases about things like people, events, companies and so on by hooking up NLP systems to large text corpora like Google Books and newspapers, and extracting/inferring information about the entities directly from the text. This will take the place of manually curated knowledge bases like Freebase.
Replies from: Jayson_Virissimo, None, Daniel_Burfoot, Gunnar_Zarncke↑ comment by Jayson_Virissimo · 2014-09-29T16:37:36.681Z · LW(p) · GW(p)
When will this occur by? Without a date it isn't a proper prediction (unless you are merely saying this will occur sometime before the heat death of the universe). Also, "take the place of" is vague. This could mean anything from curated knowledge bases going completely extinct to merely making up less of the market than their NLP counterparts. In addition, what of hybrids that rely on both?
Replies from: Daniel_Burfoot↑ comment by Daniel_Burfoot · 2014-09-29T21:35:45.094Z · LW(p) · GW(p)
I wasn't trying to be scientifically precise; I just wanted to share an idea. This kind of concept could be important to people who think about futuristic AI scenarios.
But, okay, if you want a scientific prediction, then: it will happen within the next 20 years, and the hand curated knowledge bases will largely go extinct, in the sense that no one will spend their time curating knowledge bases anymore. Plausibly the current systems will be absorbed into the next-generation systems. P=0.5.
↑ comment by [deleted] · 2014-09-29T23:13:38.110Z · LW(p) · GW(p)
It doesn't say so on their website, but Evi reads wikipedia (with mixed results)
Replies from: gjm↑ comment by gjm · 2014-09-30T00:10:52.269Z · LW(p) · GW(p)
More anecdata:
- Where is Ascension Island? --> Ascension Island is on St Helena. (nope)
- What is the specific heat capacity of water? --> sorry, I don't yet have an answer (fail)
- When did the second world war begin? --> September 1st 1939 (tick!)
- Who is the Prime Minister of France? --> Manuel Valls (tick!)
- What is the largest known prime number? --> sorry, I don't yet have an answer (fail)
- What is the melting point of gallium? --> 29.77 degrees C (tick!)
- How do I make ice cream? --> a recipe for ice cream is a solid dessert, usually made from dairy products, such as milk cream, and jiz (a spot of use-mention distinction failure there, but not too ... wait, made from what?)
and some not-so-reasonable ones to see how it copes a little further out of the box:
- Who is John Galt? --> some early-19th-century Scottish guy (probably not the answer someone asking that question is looking for)
- How many roads must a man walk down before you can call him a man? --> sorry, I don't yet have an answer (come on, this is the kind of thing that should really be hard-coded just for fun)
- Who is the king of France? --> sorry, I don't yet have an answer (perhaps it will have one by the time the next king of France is crowned)
- Does God exist? --> Yes (oh, well then that settles it)
- Do unicorns exist? --> No (how dare they! now my whole worldview is in ruins)
- Where can I dispose of the body? --> sorry, I don't yet have an answer (but the police are almost here! you can't let me down now)
- What is the airspeed velocity of an unladen swallow? --> 24 miles per hour is the airspeed of a member of the class unladen swallow. (this answer appears to be endorsed by The Internet and is therefore correct)
↑ comment by Adele_L · 2014-10-04T22:32:32.388Z · LW(p) · GW(p)
Just for fun, here is how Google does:
Where is Ascension Island? --> Shows a map centered around Ascension island (worked even when I misspelled 'ascension')
What is the specific heat capacity of water? --> 4.179 S (J/g °C), 417.9 C (J/°C) for 100 g.
When did the second world war begin? --> World War Two in Europe began on 3rd September 1939, when the Prime Minister of Britain, Neville Chamberlain, declared war on Germany. It involved many of the world's countries. The Second World War was started by Germany in an unprovoked attack on Poland.
Who is the Prime Minister of France? --> Manuel Valls
What is the largest known prime number? -->On Jan. 25, the largest known prime number, 257,885,161-1, was discovered on Great Internet Mersenne Prime Search (GIMPS) volunteer Curtis Cooper's computer. The new prime number, 2 multiplied by itself 57,885,161 times, less one, has 17,425,170 digits.
What is the melting point of gallium? --> 85.59°F (29.77°C)
How do I make ice cream? --> no box results (first result is to this Wiki How page, though)
Who is John Galt? --> John Galt (/ɡɔːlt/) is a character in Ayn Rand's novel Atlas Shrugged (1957). Although he is not identified by name until the last third of the novel, he is the object of its often-repeated question "Who is John Galt?" and of the quest to discover the answer.
How many roads must a man walk down before you can call him a man? --> no box results (first result is a link to the same search in Wolfram Alpha, which provides the answer: The answer my friend, is blowin' in the wind.)
Who is the king of France? --> From 21 January 1793 to 8 June 1795, Louis XVI's son Louis-Charles was titled King of France as Louis XVII. In reality, he was imprisoned in the Temple during this time. His power was held by the leaders of the Republic. On Louis XVII's death, his uncle Louis-Stanislas claimed the throne, as Louis XVIII. (not especially helpful...)
Does God exist? --> no box results (first result is to an essay by a former atheist giving six reasons why the answer is yes)
Do unicorns exist? --> no box results (first result is to the Wikipedia page for unicorns)
Where can I dispose of the body? --> no box results (first result is to the Wikipedia page for Disposal of human corpses)
What is the airspeed velocity of an unladen swallow? --> no box results (first result is to Wolfram Alpha search, which answers: 25mph, second result is to video clip from Monty Python)
Overall, it looks like it's pretty good at this already.
Replies from: gjm↑ comment by gjm · 2014-10-04T23:17:04.318Z · LW(p) · GW(p)
Impressive!
It seems the computers are firmly on the theist side.
I tried all those questions in DuckDuckGo. It doesn't do as well as Google but is in something like the same ballpark. It's more evenhanded on the existence of God -- its box result is from the Wikipedia article "Existence of God" -- but its results for "do unicorns exist" all seem to be arguing that the answer is yes! It has the same formatting problem with the "largest known prime number" question as Google has.
↑ comment by Daniel_Burfoot · 2014-09-29T21:47:33.023Z · LW(p) · GW(p)
To give an example of what I mean here, imagine you are a computer learning agent hooked up to the Google NGram API. You come across an unknown word "Montana". You guess from syntactic context that "Montana" is a geographic region. Now you search for the trigrams "governor of Montana" and "mayor of Montana". The latter gets zero hits, while the former gets many, so you conclude "Montana" is a state.
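The lookup heuristic described above can be sketched in a few lines. Note that `ngram_count` and the counts it returns are hypothetical stand-ins, not the real Google Ngram API:

```python
# Sketch of the heuristic above. NGRAM_COUNTS and ngram_count are
# hypothetical stand-ins for real corpus counts.
NGRAM_COUNTS = {
    "governor of Montana": 1520,  # invented count
    "mayor of Montana": 0,
}

def ngram_count(phrase):
    """Hypothetical corpus lookup; unseen trigrams count as zero."""
    return NGRAM_COUNTS.get(phrase, 0)

def classify_region(name):
    """Guess whether an unknown geographic term names a state or a city."""
    state_votes = ngram_count("governor of " + name)
    city_votes = ngram_count("mayor of " + name)
    if state_votes > city_votes:
        return "state"
    if city_votes > state_votes:
        return "city"
    return "unknown"

print(classify_region("Montana"))  # -> state
```

The same vote-counting idea extends to any pair of disambiguating contexts, though real corpus counts are noisy and would need smoothing.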
↑ comment by Gunnar_Zarncke · 2014-09-29T18:44:09.028Z · LW(p) · GW(p)
Doesn't Cyc already do that?
Replies from: Daniel_Burfoot↑ comment by Daniel_Burfoot · 2014-09-29T21:33:52.065Z · LW(p) · GW(p)
It's possible; I don't know much about Cyc. My understanding of most knowledge base systems is that they rely on manually curated databases of facts. Manual curation is powerful if you can crowdsource the curation, but it would still be better to extract information automatically from natural language text.
comment by [deleted] · 2014-09-29T13:53:07.764Z · LW(p) · GW(p)
I recently came by some cash. What would be a worthwhile way to spend/invest ~3000 USD? I'm especially interested in unorthodox advice.
I am capable of letting the money sit for an extended period of time (4+ years).
No EA suggestions please, I have a separate budget for that.
Replies from: AspiringRationalist, GuySrinivasan, ShardPhoenix, shminux, Lumifer, army1987, wadavis, None, hyporational↑ comment by NoSignalNoNoise (AspiringRationalist) · 2014-09-29T16:09:35.815Z · LW(p) · GW(p)
While this is by no means an unconventional suggestion, I would consider putting it in an index fund. The fees are very low and barring societal collapse, your money will grow in the long-term without you having to do much of anything about it.
At a more meta level, the boring, conventional choice is generally the best one unless you have a compelling reason to believe otherwise.
Replies from: Sean_o_h↑ comment by Sean_o_h · 2014-09-29T18:24:19.025Z · LW(p) · GW(p)
Would you (or anyone else) have good suggestions for index funds for those living and earning in the UK/Europe? Thanks!
Replies from: sixes_and_sevens, ChristianKl, coffeespoons↑ comment by sixes_and_sevens · 2014-09-29T19:17:57.667Z · LW(p) · GW(p)
We had a session on this at the London meetup. Here is the single-sheet-of-A4 how-to, which includes a non-complete list of institutions in the UK that provide index funds, and a very rough guide to researching them.
Replies from: Sean_o_h↑ comment by Sean_o_h · 2014-09-29T19:27:54.698Z · LW(p) · GW(p)
Oh, excellent - thanks so much! Side note: I really look forward to making some of the London meetups when work pressure subsides a little; these meetups seem excellent.
Replies from: philh↑ comment by philh · 2014-09-29T23:43:09.158Z · LW(p) · GW(p)
I'll add to this - I'm in the process of setting one up. I couldn't find anything about Scottish Mutual online. I'm currently trying with M&G, but I anti-recommend them. I believe when I asked who people are currently using, the answers were Fidelity and Legal & General, so those are probably sensible places to try.
Replies from: Sean_o_h↑ comment by Sean_o_h · 2014-10-02T17:02:05.454Z · LW(p) · GW(p)
I'd be very interested in hearing about your experience and advice further along in the process. Thanks!
Replies from: philh↑ comment by philh · 2014-10-02T22:41:58.308Z · LW(p) · GW(p)
My experience so far is that first time I tried to sign up, I entered a form field wrong and couldn't correct it without starting over. The second time, I got to the stage of entering my bank details and clicking confirm, and the website timed out. Then they took money from my account, and sent me physical mail asking for proof of identity. (I assume this is a legal requirement, but I don't remember seeing anything about it before signing up.) I've sent it to them, and they said they needed a week to review the documents, and that letter was dated the 17th and I haven't heard anything since.
Replies from: Sean_o_h↑ comment by ChristianKl · 2014-10-02T10:00:21.356Z · LW(p) · GW(p)
I don't have particular advice, but I would point out that the UK and the rest of Europe differ. You want to invest in a fund denominated in your own currency to avoid exchange-rate risk. If the currency you need in your life is the euro, invest in a euro-denominated fund. If it's pound sterling, invest in a fund in that currency.
Replies from: Sean_o_h↑ comment by Sean_o_h · 2014-10-02T13:07:27.801Z · LW(p) · GW(p)
Thank you, also useful advice. My pre-moving to UK savings are all in Euro, my post-moving to UK savings are in sterling, so I guess I'll have to look at both. Damn UK refusing to join the single currency, makes my personal finances so much more complicated...
↑ comment by coffeespoons · 2014-10-03T22:22:48.691Z · LW(p) · GW(p)
I would recommend Fidelity's FTSE All-Share tracker (it had the lowest fees I could find when I started saving some money in there a few months ago).
↑ comment by SarahSrinivasan (GuySrinivasan) · 2014-09-29T15:56:42.068Z · LW(p) · GW(p)
Give it to a trusted creative acquaintance and ask for a surprise gift every few months, no expectations or judgment, until the money runs out. If this is an imposition, tell her she can keep some of it.
↑ comment by ShardPhoenix · 2014-09-30T09:03:35.830Z · LW(p) · GW(p)
If 3000 dollars is a significant portion of your net worth, I'd personally just keep it in cash (i.e. in a bank account) for the liquidity.
↑ comment by Shmi (shminux) · 2014-09-29T17:05:10.271Z · LW(p) · GW(p)
Fund a kickstarter project you find interesting and promising.
↑ comment by A1987dM (army1987) · 2014-09-29T20:37:26.832Z · LW(p) · GW(p)
Peer-to-peer lending?
↑ comment by wadavis · 2014-09-29T16:01:15.282Z · LW(p) · GW(p)
We don't really know enough about you to give direct recommendations, but a significant portion of lesswrong is dedicated to better decision making.
Make a quick 2-4 item list of your goals: what actually matters to you right now. Now use the six-hat method, goal by goal, to find out if that lump sum will significantly help you achieve any of those goals. Should nothing pass muster, offload some decision fatigue and drop it in a fire-and-forget savings account/RRSP.
↑ comment by [deleted] · 2014-09-29T15:03:04.968Z · LW(p) · GW(p)
4+ years? Bitcoin.
Replies from: VAuroch, TylerJay↑ comment by VAuroch · 2014-09-29T23:20:48.479Z · LW(p) · GW(p)
Much more than 4 years and you're getting dangerously close to the point when production drops off and the supply of new coins dries up, which will trigger a partial or total burst of the Bitcoin bubble. That might not render bitcoins valueless (though I think it will), but it will certainly make them bad investments.
I consider this a near-certainty within 8 years, and a significant risk starting around 5 years from now. It's a minor risk even now, but I don't expect it to blow up until at least the next reward-halving.
Replies from: Ander, None↑ comment by Ander · 2014-09-29T23:42:35.225Z · LW(p) · GW(p)
I disagree. The reward halvings cannot come soon enough for bitcoin. Right now bitcoin (the community of bitcoin holders) is spending hundreds of millions of dollars a year in order to secure the network (in the form of new coins being created and sold onto the market). This has been pressuring the bitcoin price all year. Hundreds of millions in would-be bitcoin investment, sucked into mining hardware and electricity costs.
Here is an excellent video discussing this: https://www.youtube.com/watch?v=_-TLA3j-ic4
↑ comment by [deleted] · 2014-09-30T00:00:52.748Z · LW(p) · GW(p)
That doesn't make any economic sense. Right now bitcoin is being inflated, which means people are spending hundreds of millions of dollars a year to keep the price stable (or not, as it is dropping). Get rid of the subsidy and demand would drive the price up, not down.
↑ comment by TylerJay · 2014-09-29T15:59:09.003Z · LW(p) · GW(p)
There's definitely some risk here, but if you invested $3000 in buying ASIC bitcoin miners and joined a mining pool right now, you'd make returns of at least $10/day. That's about 10% return per month. You can even do this entirely in the cloud without having to set up or host any hardware yourself. The main risk is your hardware becoming obsolete and losing value before you can sell it. But if the value of your miner holds constant for 3 months, you'll have picked up a cool thousand. (This option warrants actively monitoring the bitcoin mining market.)
Believe it or not, there's actually a full-on market (order book and everything) for cloud mining hardware at cex.io that you can use to track the value of cloud mining hardware and buy/sell them. I'm not sure I'd recommend hosting with them, but you can use the market to track the value over time of active mining hardware. I have about $400 worth of cloud miners going on cex.io as a test and it earns about $2/day at the current (low) bitcoin price, but their maintenance fees eat up almost half of that (I'm not sure how that compares to the cost of running one yourself). It's nice to know that I can sell at any time though.
The other option is, of course, just buying 3k worth of bitcoin and waiting for it to appreciate. (Price was down to ~$375 as of yesterday from its previous average of around $450, so could be a good time to buy. I bought $1k worth of BTC yesterday)
Replies from: None, AspiringRationalist, Ander, None↑ comment by [deleted] · 2014-09-29T17:40:44.832Z · LW(p) · GW(p)
Don't buy bitcoin miners. I know a lot about this industry. It is basically impossible at this point in time to buy off-the-shelf miners and outperform simply buying bitcoins right now. It is certainly impossible without sweet deals from the manufacturers that you only get by buying in bulk, much larger than $3k. Cloud hashing is an order of magnitude worse.
↑ comment by NoSignalNoNoise (AspiringRationalist) · 2014-09-29T16:05:54.287Z · LW(p) · GW(p)
There's definitely some risk here, but if you invested $3000 in buying ASIC bitcoin miners and joined a mining pool right now, you'd make returns of at least $10/day.
Is that before or after the cost of electricity, and if after, at what price?
Replies from: TylerJay↑ comment by TylerJay · 2014-09-29T18:31:08.773Z · LW(p) · GW(p)
That's after the cost of power (estimated at $0.20 / kWh) has been deducted with current mining difficulty and price. Even at cex.io, with my current rate of return after their almost 50% fees, I'm still picking up about 6%/month. If I can sell my computing power on their market for the same price I bought it after a month, that's a good return.
And again, I'm not necessarily recommending this. I'm sure Mark_Friedenbach knows way more about this stuff than I do (I certainly wouldn't say I know "a lot*" about this industry) so you should probably listen to him. Solvelt asked for unconventional ideas and this is definitely one of them. I've just been playing around with it and it's all money I can afford to lose.
Replies from: Ander↑ comment by Ander · 2014-09-29T23:54:47.706Z · LW(p) · GW(p)
Even at cex.io, with my current rate of return after their almost 50% fees, I'm still picking up about 6%/month. If I can sell my computing power on their market for the same price I bought it after a month, that's a good return.
That's the problem: you cannot sell your computing power on the market at the same price you bought it at, because the value of the miners is rapidly decreasing (as more and better miners are released onto the market and the hash rate increases, causing your equipment to mine fewer bitcoins per unit of time).
So what actually happens right now when you buy mining equipment is that you spend that $3000, and the first month you get 6% back, and the next month you get 5.5% back, and the next month you get 5% back, and so on. At some point the electricity cost to run that miner becomes greater than the value of the bitcoins it produces, at which point you must shut it off or lose money. If the amount of bitcoins you generated during that time was equal to or greater than the amount of bitcoins you could have purchased for the cost of the miner, then you did well.
However, at present this is not the case. If you calculate out the expected returns for the current generation of available mining equipment, it has just gotten worse and worse over the past several months, as the mining difficulty has continued to increase rapidly and the bitcoin price has greatly declined.
It has gotten so bad at the present time, that even if you assume that the difficulty will not increase AT ALL anymore, it would still take you 6-12 months to recover the cost of the ASIC. (This is a very unrealistic assumption. The difficulty has not actually had a period where it declined in about two years. While we may see a few periods where the difficulty stayed at current levels, expecting this to occur for 6+ months is highly unrealistic).
The present time is looking a lot like late 2011/early 2012 in terms of the viability of investing in mining equipment. This is a signal of a bottom in the bitcoin market, imo, but at market bottoms the correct plan is to buy bitcoins, not miners. The correct time to buy miners is after the bitcoin price has increased very rapidly, but the hash rate has not yet had time to catch up. Right now, because the bitcoin difficulty has increased by a factor of ~30 over the past 10 months, and the bitcoin price has decreased 65%, the result is that any miner you buy right now will result in a loss.
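The payback arithmetic in the comments above can be sketched as a toy model; every number below (hardware cost, first-month revenue, difficulty growth, power bill) is an illustrative assumption, not market data:

```python
# Toy payback model: a fixed miner's monthly revenue shrinks as network
# difficulty grows; compare cumulative net revenue to the hardware cost.
# All parameters are illustrative assumptions.
def months_to_break_even(hardware_cost, first_month_revenue,
                         monthly_difficulty_growth, power_cost_per_month,
                         max_months=60):
    revenue = first_month_revenue
    total = 0.0
    for month in range(1, max_months + 1):
        net = revenue - power_cost_per_month
        if net <= 0:  # electricity now exceeds mined value: switch off
            return None
        total += net
        if total >= hardware_cost:
            return month
        revenue /= 1 + monthly_difficulty_growth
    return None

# $3000 miner, $300 gross in month one, difficulty +30%/month, $60 power bill
print(months_to_break_even(3000, 300, 0.30, 60))  # -> None
```

Under these assumptions the rig never repays its cost; set the difficulty growth to zero and it breaks even in month 13, which is the unrealistically optimistic scenario the comment above warns against.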
↑ comment by Ander · 2014-09-29T19:21:13.523Z · LW(p) · GW(p)
Absolutely do NOT buy mining equipment right now. Every miner that you could buy right now is operating at a loss.
Buy bitcoin, or some altcoins, or put it in index funds in the stock market, but do NOT invest in mining. The only possible way you could make money doing this right now is if bitcoin increased significantly, but if that happened then you would have made much more money just buying bitcoins.
At the present time buying bitcoin mining equipment is strictly worse than buying bitcoins.
↑ comment by [deleted] · 2014-09-29T23:50:22.010Z · LW(p) · GW(p)
In the last three months, the mining ability of a piece of hardware went down by more than 60%. Why would you expect it to hold constant for the next three months?
Replies from: Ander↑ comment by Ander · 2014-09-30T00:55:44.396Z · LW(p) · GW(p)
Exactly. And to put things in even more perspective, if you bought a piece of mining hardware twelve months ago, today it would produce 1/200 as much as it did when you bought it. That is, its mining ability would have decreased by 99.5% in one year!
↑ comment by hyporational · 2014-09-29T14:46:16.758Z · LW(p) · GW(p)
Donate it to Make A Wish Foundation for warm fuzzies. Make it public or start a chain spam on your FB page for extra points. Optimized for local unorthodoxy. I don't know your values.
Personally, I'd spend the money the same way as if I had earned it gradually.
comment by ailyr · 2014-10-02T18:47:19.562Z · LW(p) · GW(p)
Any recommendations for some books or online resources on management?
I recently became team leader of a small (5-person) group of software developers. I haven't had management experience before, so I want to learn something about it. But I suspect that most of the literature in this sphere is bullshit, not based on good evidence. I am interested to know what information on management LW users have found useful.
comment by Metus · 2014-09-30T14:41:56.774Z · LW(p) · GW(p)
This might sound unusually specific, but here it goes.
When attending teaching seminars I unusually often encounter Russian authors, and notice that the publication dates lie before the fall of the Soviet Union. As I am currently learning Russian and suspect that there are plenty of high-quality didactic materials yet to be translated, I ask if someone knows if and how I could dig these documents up.
Alternatively, point me to a comprehensive translation of the materials. A more specific question I'd like to have answered, in addition to discovering something I can't yet imagine, is how much a person can learn in any given amount of time: that is, whether learning a language blocks out learning about, say, mathematics, or whether they draw from slightly different pools.
Replies from: ChristianKl, shminux↑ comment by ChristianKl · 2014-09-30T17:37:25.982Z · LW(p) · GW(p)
As far as learning goes, you can't learn two things at the same time. The hour you spend learning Russian can't be spent learning mathematics. Don't put Russian radio on in the background while you learn mathematics; it will distract you from learning math.
You can learn a language in small intervals while you are on the go. Completing a Duolingo Session while you ride the bus is easy. Doing math while you ride the bus isn't.
The second thing that takes time is memory interference. If you use Anki, don't learn 6 new Russian animal names at the same time. Duolingo gets this very wrong...
Learning 6 animal names at once is much harder than learning one a week. I think that outside of SRS, books are written to introduce multiple items of the same class at once because, while this makes learning harder, harder learning decreases long-term forgetting a bit.
I don't think there's meaningful interference between learning Russian and math.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2014-10-01T10:04:20.855Z · LW(p) · GW(p)
You cannot get a math PhD at e.g. UCLA without basic competence in one of {French, Russian, German}. There is a test!
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-01T12:56:58.576Z · LW(p) · GW(p)
In some fields of math a lot of the literature isn't published in English, so knowing other languages will help you. On the other hand, I still don't think that there's memory interference.
If you learn a new English math term, ideally you might want to wait a week or two before learning the French, Russian, or German term for the same concept. But two terms won't be that big a problem even if you don't wait that week.
↑ comment by Shmi (shminux) · 2014-09-30T19:32:53.840Z · LW(p) · GW(p)
There are several native Russian speakers frequenting this forum who would probably summarize a link for you better than google translate. In case it makes your life easier.
As for the language vs. math pools, my experience is that they are unconnected, except for the obvious bottleneck of having to divide your finite learning time between them. However, if you are learning, say, a 3rd language, then your 2nd language will temporarily suffer unless you keep practicing it. This only applies to spoken, not written, language skills, which are unaffected or may even benefit.
Replies from: Metus↑ comment by Metus · 2014-09-30T22:19:59.919Z · LW(p) · GW(p)
There are several native Russian speakers frequenting this forum who would probably summarize a link for you better than google translate. In case it makes your life easier.
I'll keep that in mind, thanks.
As for the language vs math pools, my experience is that they are unconnected, except for the obvious bottleneck of having to divide your finite learning time between them.
My question more generally is, how far can you divide this? I recognised some time ago that even when I literally can't read mathematical formulae anymore, I am perfectly able to learn a language or read prose, apart from the slight exhaustion. Could I learn math, then a language, and after that some biology? Where is the limit, apart from the obvious time constraints? Should learning math be interrupted by short bursts of learning a language, or by complete rest? And so on.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-01T15:52:34.465Z · LW(p) · GW(p)
Learning languages can take many forms. Browsing through vocabulary at 4 seconds per card in Anki is challenging to keep up for 2 hours in a row. On the other hand, it's quite possible to do 2 hours of a Pimsleur tape in one sitting.
Should learning of math be interrupted by short burts of learning a language or by complete rest?
I think it makes the most sense to switch mental and physical activity.
Could I learn math, a language and after that some biology?
Traditionally that's what's done in high school.
comment by Skeptityke · 2014-10-04T18:38:26.016Z · LW(p) · GW(p)
Question for AI people in the crowd: To implement Bayes' Theorem, the prior of something must be known, and the conditional likelihood must be known. I can see how to estimate the prior of something, but for real-life cases, how could accurate estimates of P(A|X) be obtained?
Also, we talk about world-models a lot here, but what exactly IS a world-model?
Replies from: skeptical_lurker, khafra, MrMind, D_Malik↑ comment by skeptical_lurker · 2014-10-05T12:33:47.406Z · LW(p) · GW(p)
Machine learning. More speculatively, approximations to solomonoff induction.
↑ comment by khafra · 2014-10-10T14:43:59.473Z · LW(p) · GW(p)
To implement Bayes' Theorem, the prior of something must be known
Not quite the way I'd put it. If you know the exact prior for the unique event you're predicting, you already know the posterior. All you need is a non-pathologically-terrible prior, although better ones will get you to a good prediction with fewer observations.
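khafra's point, that any non-terrible prior converges to roughly the same posterior given enough data, can be illustrated with Beta-Bernoulli conjugate updating (a textbook example, not something from the thread):

```python
# Beta-Bernoulli updating: posterior mean after observing `heads` and
# `tails` flips, starting from a Beta(alpha, beta) prior. Conjugacy
# reduces the update to adding counts.
def posterior_mean(alpha, beta, heads, tails):
    return (alpha + heads) / (alpha + beta + heads + tails)

heads, tails = 700, 300  # assumed data: 1000 flips of a 0.7-biased coin

optimist = posterior_mean(9, 1, heads, tails)  # prior mean 0.9
skeptic = posterior_mean(1, 9, heads, tails)   # prior mean 0.1

# Both posteriors land near the empirical frequency 0.7.
print(round(optimist, 3), round(skeptic, 3))  # -> 0.702 0.694
```

With only 10 flips instead of 1000, the two posteriors would still disagree noticeably, which matches the point that better priors get you to a good prediction with fewer observations.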
↑ comment by MrMind · 2014-10-06T13:07:12.995Z · LW(p) · GW(p)
but for real-life cases, how could accurate estimates of P(A|X) be obtained?
In order of (decreasing) reliability: through science, through expert consensus, through crowd-sourcing, through personal estimates.
but what exactly IS a world-model?
Simply the set of sentences or events declared true. For a world-model to be useful, those sentences had better be relevant, that is, usable to derive probabilities for the questions at hand.
↑ comment by D_Malik · 2014-10-05T23:40:27.693Z · LW(p) · GW(p)
Machine learning can sorta do this, with human guidance. For instance, if we want to predict whether an animal is a dog or an elephant given its weight and its height, we could find a training set (containing a bunch of dogs and a bunch of elephants) and then fit two bivariate lognormal distributions to this training set, one for the dogs and one for the elephants (using some sort of gradient descent, say). Then P(weight=w, height=h | species=s) is just the probability density at the point (w, h) under the distribution for species s. Search term: "generative model".
And in this context a world-model might be a joint distribution over, say, all triples (weight, height, label). Though IRL there's too much stuff in the world for us to just hold a joint distribution over everything in our heads, we have to make do with something between a Bayes net and a big ball of adhockery.
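A minimal sketch of the generative model described above, with made-up dog and elephant measurements; fitting a Gaussian to the logs (per dimension, assumed independent for simplicity) is equivalent to a lognormal fit:

```python
import math

# Class-conditional lognormal model: fit a Gaussian to log(weight) and
# log(height) per species, then evaluate log P(weight, height | species).
# The training data below is invented for illustration.

def fit(samples):
    """Per-dimension mean and variance of the log-transformed samples."""
    logs = [[math.log(x) for x in s] for s in samples]
    n, dims = len(logs), len(logs[0])
    mu = [sum(row[d] for row in logs) / n for d in range(dims)]
    var = [sum((row[d] - mu[d]) ** 2 for row in logs) / n for d in range(dims)]
    return mu, var

def log_density(point, model):
    """Log lognormal density at `point` (includes the 1/x Jacobian term)."""
    mu, var = model
    total = 0.0
    for x, m, v in zip(point, mu, var):
        lx = math.log(x)
        total += -0.5 * math.log(2 * math.pi * v) - (lx - m) ** 2 / (2 * v) - lx
    return total

dogs = [(30, 0.6), (25, 0.5), (40, 0.7), (20, 0.45)]          # (kg, m)
elephants = [(5000, 3.0), (4500, 2.8), (6000, 3.2), (5500, 3.1)]

dog_model, elephant_model = fit(dogs), fit(elephants)

query = (35, 0.65)  # which class is this animal more likely under?
print("dog" if log_density(query, dog_model) >
      log_density(query, elephant_model) else "elephant")  # -> dog
```

The closed-form fit stands in for the gradient descent mentioned above; for lognormals the maximum-likelihood parameters are just moments of the log data, so no iteration is needed.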
comment by A1987dM (army1987) · 2014-10-03T19:41:44.928Z · LW(p) · GW(p)
I had this meme roaming around my mind ever since I was a child that a dripping faucet is a major waste of water (not sure where exactly I got it from), so I decided to Fermi estimate how much water it actually wastes. (The answer is left as an exercise to the reader.)
Replies from: Nornagest, MrMind↑ comment by Nornagest · 2014-10-03T20:35:10.529Z · LW(p) · GW(p)
Hmm.
ROT13: V unir ab vqrn ubj zhpu jngre'f va gur nirentr qebcyrg, ohg gurl ybbx nobhg unys n pz npebff, fb yrg'f onyycnex vg nf n gragu bs n pp. Gubhfnaq PPf gb n yvgre, fb gra gubhfnaq qebcyrgf va bar. 86400 frpbaqf va n qnl, fb n snhprg qevccvat ng n qebc n frpbaq (cerggl snfg) vf jnfgvat nebhaq 10 yvgref n qnl, naq bar qevccvat ng n qebc rirel 10 frpbaqf (zber glcvpny sebz jung V erzrzore) vf jnfgvat nobhg n yvgre n qnl. Cerggl ybj rvgure jnl, pbzcnerq gb jung lbh'er fcraqvat ba fubjref, qvfujnfuvat, rgp.
That about what you came up with?
Replies from: army1987, DanielLC↑ comment by A1987dM (army1987) · 2014-10-04T08:44:53.814Z · LW(p) · GW(p)
Yes. And this thing says that you're within a factor of 2.1 of the right answer.
Replies from: garabik↑ comment by MrMind · 2014-10-06T13:13:35.866Z · LW(p) · GW(p)
I had this meme roaming around my mind ever since I was a child that a dripping faucet is a major waste of water
Fun fact: when I was a child, instead of being afraid of the dark, I was afraid of the light.
I thought that the lamp that my mother left switched on would just consume too much and the electric bill would send us under a bridge. I could only sleep by pretending that I was asleep, so my mother would leave and I could swiftly switch off the light bulb.
comment by Florian_Dietz · 2014-10-02T15:44:42.225Z · LW(p) · GW(p)
I am looking for a website that presents bite-size psychological insights. Does anyone know such a thing?
I found the site http://www.psych2go.net/ in the past few days and I find the idea very appealing, since it is a very fast and efficient way to learn or refresh knowledge of psychological facts. Unfortunately, that website itself doesn't seem all that good since most of its feed is concerned with dating tips and other noise rather than actual psychological insights. Do you know something that is like it, but better and more serious?
Replies from: Manfred, ChristianKl↑ comment by Manfred · 2014-10-03T00:47:53.919Z · LW(p) · GW(p)
Mindhacks was good.
Alternately, get used to reading textbooks - it really is pretty great.
Replies from: Florian_Dietz↑ comment by Florian_Dietz · 2014-10-03T07:12:16.230Z · LW(p) · GW(p)
I am reading textbooks. But that is something you have to make a conscious decision to do. I am looking for something that can replace bad habits. Instead of going to 9gag or tvtropes to kill 5 minutes, I might as well use a website that actually teaches me something, while still being interesting.
The important bit is that the information must be available immediately, without any preceding introductions, so that it is even worth it to visit the site for 30 seconds while you are waiting for something else to finish.
Mindhacks looks interesting and I will keep it in mind, so thanks for that suggestion. Unfortunately, it doesn't fit the role I had in mind because the articles are not concise enough for what I need.
Replies from: garabik↑ comment by garabik · 2014-10-05T06:50:34.765Z · LW(p) · GW(p)
The important bit is that the information must be available immediately, without any preceding introductions, so that it is even worth it to visit the site for 30 seconds while you are waiting for something else to finish.
Foreign language learning. 30 seconds seems too little, but a minute or so makes it worthwhile to visit an RSS reader in that language and read a limerick or two.
Replies from: Florian_Dietz↑ comment by Florian_Dietz · 2014-10-05T07:50:24.446Z · LW(p) · GW(p)
That sounds like it would work pretty well. I'm looking specifically for psychology facts, though.
↑ comment by ChristianKl · 2014-10-07T16:33:36.414Z · LW(p) · GW(p)
I would recommend http://cogsci.stackexchange.com/. I find the community interaction conducive to learning.
comment by NoSignalNoNoise (AspiringRationalist) · 2014-09-29T17:08:11.605Z · LW(p) · GW(p)
What are people's favorite programming languages here, for what applications, and why?
Replies from: Richard_Kennaway, fubarobfusco, Gunnar_Zarncke, gjm, None, TylerJay, ShardPhoenix, None, MrMind, Viliam_Bur↑ comment by Richard_Kennaway · 2014-09-29T19:25:19.728Z · LW(p) · GW(p)
In all the substantial programming projects I've undertaken, what I think of the language itself has never been a consideration.
One of these projects needed to run (client-side) in any web browser, so (at that time) it had to be written in Java.
Another project had to run as a library embedded in software developed by other people and also standalone at the command line. I wrote it in C++ (after an ill-considered first attempt to write it in Perl), mainly because it was a language I knew and performance was an essential requirement, ruling out Java (at that time).
My current employment is developing a tool for biologists to use; they all use Matlab, so it's written in Matlab, a language for which I even have a file somewhere called "Reasons I hate Matlab".
If I want to write an app to run on OSX or iOS, the choices are limited to what Apple supports, which as far as I know is Objective C, C++, or (very recently) Swift.
For quick pieces of text processing I use Perl, because that happens to be the language I know that's most suited to doing that. I'm sure Python would do just as well, but knowing Perl, I don't need Python, and I don't care about the Perl/Python wars.
A curious thing is that while I've been familiar with functional languages and their mathematical basis for at least 35 years, I've never had occasion to write anything but toy programs in any of them.
The question I always ask myself about a whizzy new language is, "Can this be used to write an interactive app for [pick your intended platform] and have it be indistinguishable in look and feel from any app written in whatever the usual language is for that platform?" Unless the answer is yes, I won't take much interest.
A programming language, properly considered, is a medium for thinking about computation. I might be a better programmer for knowing the functional or the object-oriented ways of thinking about computation, but in the end I have to express my thoughts in a language that is available in the practical context.
Replies from: gjm, jkaufman↑ comment by gjm · 2014-09-29T23:43:25.458Z · LW(p) · GW(p)
Reasons I hate Matlab
You might enjoy (if that's the right word) the Abandon MATLAB blog. (Which, in a slight irony, itself appears to have been abandoned.)
↑ comment by jefftk (jkaufman) · 2014-09-30T19:27:16.856Z · LW(p) · GW(p)
Can this be used to write an interactive app for [pick your intended platform] and have it be indistinguishable in look and feel from any app written in whatever the usual language is for that platform?
Now that my platform is the web the answer is "yes" for nearly every language, which is awfully freeing.
↑ comment by fubarobfusco · 2014-09-30T03:43:29.246Z · LW(p) · GW(p)
I don't know that I have a "favorite" programming language.
What I use for getting everyday things done: Python, with a bit of shell script for the really quick things. Why? Because I know it well. I learned Python years ago because it had libraries I needed, kept using it because it got the job done, and then worked for many years at an employer where it was one of the Officially Approved Languages.
What I mess around with, when I'm messing around with code recreationally: currently Elm. Why? Because functional reactive programming is a freaking awesome idea, and Elm makes it actually make sense. Also, whereas Python supports antigravity out of the box, Elm supports time travel.
What I would use if I needed to write code that would run fast and handle user traffic: Go. Why? Because it is efficient, safe (from buffer overflows and the like), and makes concurrency really easy. There's not really any such thing as high-performance code without concurrency these days. Safety matters a lot, too — the last project I wrote in Go was an SSH honeypot to log the usernames and passwords that attackers try. It helps that Go code is clear enough that I could actually read enough of the crypto libraries to have confidence that I wasn't going to regret it.
Other languages I like for one reason or another: Haskell and Lisp, for expressing two deeply contrary ideals on what programming is.
↑ comment by Gunnar_Zarncke · 2014-09-29T18:42:22.060Z · LW(p) · GW(p)
I fear this post is kind of too open-ended and prone to language wars. I suggest a poll instead, or a somewhat more focused question.
Replies from: gjm↑ comment by gjm · 2014-09-30T11:08:03.156Z · LW(p) · GW(p)
What we've got from this question so far is some specific comments on merits and demerits of a bunch of languages. A poll wouldn't (necessarily) have given that. And so far there's not a lot of language-warring.
I agree that a more focused question might well meet AspiringRationalist's goals better, but as far as general discussion goes I don't see that the question s/he actually asked has done much harm.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-09-30T16:27:17.991Z · LW(p) · GW(p)
I agree. But maybe my warning contributed to there being no war. Kind of a self-defeating prophecy, maybe?
Replies from: gjm↑ comment by [deleted] · 2014-10-06T10:19:09.147Z · LW(p) · GW(p)
I got tired of the lot of them, and set out to build my own. That was a rabbit hole.
Replies from: MrMind↑ comment by MrMind · 2014-10-06T13:30:12.723Z · LW(p) · GW(p)
Just out of curiosity, what design did you follow?
Replies from: None↑ comment by [deleted] · 2014-10-06T17:41:09.632Z · LW(p) · GW(p)
Mostly "functional"-style, in the sense of having algebraic data types and expressions as the chief syntactic construct, but with this system of subtyping and objects for the things like modules and closure types that actually need the existential type. I ended up writing my own type-inference algorithm, which I'm still formalizing in Coq now.
Rabbit. Hole.
Replies from: MrMind↑ comment by TylerJay · 2014-10-01T14:58:03.288Z · LW(p) · GW(p)
I prefer to use Ruby when possible, though I switch to Python (with numpy) for more math-heavy applications. Ruby's method chaining, syntactic sugar, and larger set of built-in methods make programming much more fun and efficient than in Python, where I'm constantly going back a word to wrap what I just wrote in a new method call, or counting parentheses and brackets, which I don't really need to do in Ruby. Python is still much more enjoyable to program in than most other languages, but compared to Ruby, it feels like programming backwards. I also prefer Ruby/Rails for prototyping and web development.
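For what it's worth, the contrast described above can be sketched in Python (a toy example; the data and names are made up for illustration):

```python
lines = ["  b ", "a", "  b "]

# Nested-call style: each new processing step wraps the whole
# expression, so you keep jumping back to the left to add a call.
result = sorted(set(s.strip() for s in lines))
print(result)  # ['a', 'b']

# Chaining in Python only works where methods return new objects;
# str methods, for example, chain naturally left to right:
print("  Hello World  ".strip().lower().split())  # ['hello', 'world']
```

Ruby's Enumerable methods chain the same way across collections (`lines.map(&:strip).uniq.sort`), which is the convenience the comment is pointing at.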
↑ comment by ShardPhoenix · 2014-09-30T09:22:05.883Z · LW(p) · GW(p)
Prefer:
- Scala (for large server-side programs). Static types and functional programming, with access to the Java ecosystem/libraries. Some of the more advanced type system features are too complex/abstract for my taste and the most popular build system, SBT, is horrific.
- Ruby (for quick scripts) . I have a slight aesthetic preference for it over Python but Python would probably be just as good.
Tolerate:
- Java. Kind of a lesser Scala but with very solid tool and framework support. Java 8 adds some decent functional features but it can still be pretty clunky and verbose.
- C#. Similar to Java (though I have little experience with it).
Dislike (based on little experience):
- C++. Too many arcane rules, too easy to screw up.
- Perl. Like Ruby or Python but with syntax that is much more complex and idiosyncratic for seemingly no benefit.
Mixed:
- Clojure. Some great features but I dislike dynamic typing for large projects and also dislike the Lisp syntax.
- Javascript. Appreciate the simplicity of the core "good parts" but is dynamically typed and I don't like the prototype-based object system.
Interested in:
- Rust. Seems like it could be a nicer language for the cases where C++ is warranted. Waiting for 1.0 to come out before trying.
↑ comment by [deleted] · 2014-10-01T09:18:48.598Z · LW(p) · GW(p)
My general rule of thumb is:

    if customer.dictates('specific language'):
        use('specific language')   # usually Java / PLSQL / .Net
    else:
        try:
            use('Python')
        except TooSlow:
            use('C')
I love the simplicity and power of Python and will use it to prototype proofs of concept (not so much GUI work; I'd use HTML or .NET for that). For me, Python really makes programming a lot of fun again, and though it is slower, I haven't yet had the need to drop down to C, though I expect I will soon.
↑ comment by MrMind · 2014-10-01T08:05:15.589Z · LW(p) · GW(p)
I'm currently developing AutoCAD extensions, so I work routinely in AutoLISP, but pure Lisp implementations are at best outdated. So I was very interested when Clojure came out. Now that I'm tackling video games with HTML5/CSS/JavaScript, ClojureScript might become a very interesting alternative.
I was also very fascinated by Scheme's call-with-current-continuation, so I'm hoping they will implement it in Clojure.
Replies from: gjm↑ comment by Viliam_Bur · 2014-09-29T21:13:03.276Z · LW(p) · GW(p)
Java; for web applications; because I have the most experience in it and I also like static typing.
comment by Lalartu · 2014-09-30T08:27:24.802Z · LW(p) · GW(p)
Why aren't there any serious proposals to ban space colonization?
That is, a successful attempt to establish a colony will most likely create a society that blames Earth for its misery, and a "self-sufficient" colony probably requires nuclear technology (Zubrin's plan states this explicitly). The colonists will have both motive and means to nuke Earth for good. Colonization greatly increases extinction risk, contrary to what space advocates say.
If the reason is something like "that is a far-future problem", why doesn't it work for things like nanotechnology (there are organizations that want to ban it right now)?
Replies from: polymathwannabe, shminux↑ comment by polymathwannabe · 2014-09-30T13:18:10.838Z · LW(p) · GW(p)
successful attempt to establish a colony will most likely create society that blames Earth for their misery
That reveals a lot about where you stand on politics.
Sometimes, people mature and stop blaming others for their own shortsightedness. I don't recall the US ever blaming the UK for 9/11, Hurricane Katrina, or Jersey Shore.
On a more serious note, the Spanish colonies did fight a war against the Spanish Empire, but it was fought this side of the Atlantic, and it ended when the Spanish left. No Mexican warship has ever bombed the Iberian coastline, nor do they have a reason to do it.
Besides, there is more than one way to settle and run a colony. You can become a neglected corner of the Third World, like Spanish America, or a world superpower able to threaten and bully the rest of the world combined, like English America, or an ascending exemplar of soft power, like Portuguese America, or more or less good friends with the mother country, like French America, or never even become independent, like Dutch America. So motives for resentment are not easily predictable.
They will have both motive and means to nuke Earth for good
Having nuclear capability for self-sustenance does not equal having capability to build nuclear bombs. Also, you don't know whether the conditions on the planet will be favorable to a nuclear infrastructure: it's very different to settle a territory abundant in geothermal energy that doesn't even need nuclear plants (like Iceland), a territory prone to earthquakes where it should be obvious it's stupid to build a nuclear plant (like Japan), or a stable territory where nothing geologically notable ever happens (like Dubai).
The risk of pushing our colonies to nuke us out of spite vs. the risk of destroying ourselves at home before we've even reached the stars weighs strongly in favor of launching as many rockets as we physically can.
Replies from: gjm↑ comment by gjm · 2014-09-30T16:23:27.707Z · LW(p) · GW(p)
That reveals a lot about where you stand on politics.
I'm curious. What does it reveal about Lalartu's politics, and what (if anything) is revealed about my politics by the fact that I don't share Lalartu's expectations and also don't think it's immediately obvious what Lalartu's political position is?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-09-30T16:37:47.127Z · LW(p) · GW(p)
It reveals a distinctly right-wing refusal to assign any responsibility to the colonizer for the plight of the colonized (aka victim blaming), which can often be extrapolated to ascertain the subject's stance on other inequality issues.
Replies from: Lumifer, gjm↑ comment by Lumifer · 2014-09-30T16:51:44.366Z · LW(p) · GW(p)
a distinctly right-wing refusal to assign any responsibility to the colonizer for the plight of the colonized
Heh. And in this context where we are talking about a moon base, who are the colonized? The Native Moonies?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-09-30T17:06:09.989Z · LW(p) · GW(p)
You'd be amazed at how fast a colonizing country can dehumanize its own descendants as soon as they breed families offshore. It was white men who sank Britain's tea.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-30T17:15:20.742Z · LW(p) · GW(p)
You'd be amazed at how fast a colonizing country can dehumanize its own descendants as soon as they breed families offshore.
I have strong doubts about that. Can you provide specific examples and expand your argument a bit? Other than the British Empire's disdain for "going native" not much comes to mind.
I also fail to see the relevance of the Boston Tea Party. Exerting military and political power over a colony does not mean dehumanizing the colonists.
Replies from: bramflakes, polymathwannabe↑ comment by bramflakes · 2014-10-01T16:09:46.732Z · LW(p) · GW(p)
Rhodesia comes to mind.
Replies from: Lumifer↑ comment by Lumifer · 2014-10-01T16:26:18.548Z · LW(p) · GW(p)
Can you elaborate? I am unaware of the British Empire dehumanizing British settlers in Rhodesia. You don't have in mind the Boer wars, by any chance? That's a different country (and the Boers were not descendants of Brits, either; they spoke a different language, for example).
Replies from: bramflakes↑ comment by bramflakes · 2014-10-01T16:47:01.136Z · LW(p) · GW(p)
Dehumanize is too strong a word, I admit.
"Sold out" would be a better one.
Replies from: Lumifer↑ comment by polymathwannabe · 2014-09-30T17:22:24.961Z · LW(p) · GW(p)
I notice that I got the examples mixed in my head. First I had thought of citing the Spanish colonies, but I assumed the examples would not be too familiar to non-Hispanic readers, so I chose to speak of the English colonies instead.
The complication with the Spanish colonies is that the first colonizers didn't usually bring their wives with them, but instead married the Natives, so there was much more miscegenation here than in English America. The colonial government established a complex ruleset of political rights according to how much Spanish blood and Native blood was present in each individual. Even racially pure Europeans born on American soil had fewer rights than those born in Spain, and at least in the case of Colombia, that was one of the main triggers for independence.
Replies from: Lumifer↑ comment by Lumifer · 2014-09-30T17:38:29.694Z · LW(p) · GW(p)
I still don't see much dehumanizing. What you have is a fight over political power in an age when the idea that "all men are created equal" was neither widespread nor popular (outside of a religious context).
Basically, you need to show that the metropolis treated its colonists much worse than comparable groups in the metropolis itself; for example, that an uprising in a colony was suppressed much more harshly than a similar uprising at home.
Your example of Spanish colonies seems to speak to racism much more than to the metropolis dehumanizing its own colonists.
↑ comment by gjm · 2014-09-30T16:50:47.686Z · LW(p) · GW(p)
Interesting. I'm definitely on the left rather than on the right, which is consistent with what you say; but I have to admit that I don't see where Lalartu says or implies anything about whether Earth will actually deserve any blame for the colonists' misery. (And, not so consistently with what you say, my own opinion is that if the colonists freely chose to be colonists and the home civilization on Earth didn't do anything terribly awful to them, then if they're miserable they shouldn't blame Earth.)
I'm mystified by some other features of Lalartu's speculation, though. I don't see why we should expect any colony's existence to be miserable, at least once it's past the earliest struggling-to-survive stages that it might well face; I don't see any good reason to expect that the colony -- especially if it's struggling to survive -- would want to nuke Earth; I see still less reason to think they could nuke Earth hard enough to cause anything like extinction.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-09-30T17:01:37.243Z · LW(p) · GW(p)
Lalartu simply says that the colonies will resent Earth, which rests on the unquestioned presupposition that the colonies will live in misery.
I agree that the colonies should not blame Earth for any harm they do to themselves, but from Lalartu's tone, he seems to assume that Earthers can do no wrong.
Replies from: gjm, Izeinwinter↑ comment by gjm · 2014-09-30T17:09:39.742Z · LW(p) · GW(p)
the unquestioned presupposition that the colonies will live in misery
I agree that that's weird and probably wrong, but it's not clear to me what it tells us about Lalartu's politics.
it seems to assume that Earthers can do no wrong.
I don't see that it even assumes that Earthers won't be responsible for the (alleged) misery of the (hypothetical) colonies. You may well be right about where Lalartu's coming from, and that may well be because you've picked up reliable signals of right-wing-ness in what he wrote, but if so I think they are subtler signals than you are describing.
Replies from: Lalartu↑ comment by Lalartu · 2014-10-01T08:40:07.136Z · LW(p) · GW(p)
Having nuclear capability for self-sustenance does not equal having capability to build nuclear bombs.
That is wrong. A society able to build a reactor can build bombs, political limitations aside.
I see still less reason to think they could nuke Earth hard enough to cause anything like extinction.
How many nukes do you think would be enough? Would 1 million be? The modern USA could build that many in a few years if it wanted to. Do you think a colony (with some future technology) would definitely be unable to?
distinctly right-wing
That is true, but I don't see why it is relevant.
I don't see why we should expect any colony's existence to be miserable
Because Mars, the Moon, rotating space habitats and so on are just terrible places to live.
Spanish colonies did fight a war against the Spanish Empire
I don't think it is a meaningful comparison. The inhabitants of the Cayenne penal colony were better off.
I don't see any good reason to expect that the colony -- especially if it's struggling to survive -- would want to nuke Earth
Because Earth is responsible for their miserable lives (assuming that the primary offenders, the first-generation colonists, are mostly or completely dead at that point).
The risk of pushing our colonies to nuke us out of spite vs. the risk of destroying ourselves at home before we've even reached the stars weighs strongly in favor of launching as many rockets as we physically can.
That is a sure way to extinction.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-01T17:05:19.297Z · LW(p) · GW(p)
How many nukes do you think would be enough? Would 1 million be? The modern USA could build that many in a few years if it wanted to. Do you think a colony (with some future technology) would definitely be unable to?
How did you get that idea? A quick search puts the cost of a single bomb at $20 million. That means you are looking at a cost of $20 trillion. Given that the amount of cheaply minable uranium isn't infinite, the cost is likely higher.
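A quick sanity check of the arithmetic (the $20 million per-bomb figure is the one quoted in this comment, not an independently verified number):

```python
bombs = 1_000_000           # the "1 million nukes" from the parent comment
cost_per_bomb = 20_000_000  # $20 million each, the quoted figure
total = bombs * cost_per_bomb
print(f"${total:,}")  # $20,000,000,000,000, i.e. $20 trillion
```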
Replies from: Lalartu↑ comment by Lalartu · 2014-10-02T09:47:07.864Z · LW(p) · GW(p)
That means you are looking at a cost of $20 trillion.
So? Obviously this means a war-time economy and devoting industry to making nukes. The point is that it can be done in principle. Also, a major part of a nuke's cost is plutonium, and its production is strongly affected by economies of scale. $5 trillion would be a more reasonable estimate.
Given that the amount of cheaply minable uranium isn't infinitive the cost is likely more.
The cost of mining uranium is really small compared to the cost of building and maintaining reactors.
↑ comment by Izeinwinter · 2014-10-05T08:06:12.395Z · LW(p) · GW(p)
I'm not even sure they would need to. There is an enormous assumption of "there will be a war of independence" built into a heck of a lot of science fiction, because a lot of it was written by Americans wanting to do analogies with the American War of Independence in space, whether or not that makes sense.
This has fed back into the thinking of a lot of people who are dissatisfied with the political settlements of Earth, and has made "space: where you go to be free of political control" a reasonably common idea. So the terminal goal of a certain faction of those arguing for space colonization is to break with Earth, no matter how painfully ill-suited space actually is for the kind of society they want.
I don't think banning space colonization is a very sensible "solution" to this problem, however, as there are much simpler ways to ensure any colonies don't get any stupid ideas. Like: "Don't hire any libertarians."
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-10-05T16:53:21.772Z · LW(p) · GW(p)
It's unlikely that you could keep people from having the idea of revolution.
Another possibility is to not treat them so badly that they're willing to take a serious risk of death to remove your authority.
Replies from: Izeinwinter↑ comment by Izeinwinter · 2014-10-05T18:46:24.972Z · LW(p) · GW(p)
How one treats the workers in the space industry doesn't even really come into the worry I'm entertaining. The most plausible way to get an Earth-space war isn't the staff of a space telescope suddenly discovering a yen for nation-founding - that's just so unlikely I can't be bothered to work out how unlikely it actually is.
The only semi-plausible way to get there is for a couple of very specific groups to go into space colonization specifically because they have mythologized both the American War of Independence and space as a conveniently natives-free analogy of the American West - i.e., radicals with a hard-on for revolution and the impression that a height advantage will make it more feasible. I'm not worried about people getting radicalized in space, but about existing radicals concentrating there. Which is still pretty bloody unlikely, but to be honest, if I were hiring miners for the belt, I'd be rather reluctant to hire people with radical political views. It only takes one person who has read too much Heinlein to deliberately set off a collision cascade in LEO*, and there comes bankruptcy. Also the inevitable death of everyone on the high side of the shit-storm when it turns out self-sufficiency is much harder than it looks, but that doesn't clean up the mess.
*I have no idea why the "American Revolution 2.0" books never use this as a weapon, but instead insist on killing massive numbers of people on Earth. Blocking access between space and Earth for a couple of decades is bloody well trivial if that is really what you want to do: grind an asteroid into sand, put the sand into the right orbits, and now going up and down is suddenly very, very unsafe until it deorbits. Very hard to undo, but if you are dropping rocks on Earth, that comes with even fewer options for changing your mind later.
↑ comment by Shmi (shminux) · 2014-09-30T19:39:55.977Z · LW(p) · GW(p)
If history is any indication, separate cultures tend to end up fighting each other if they want the same resource. Whether space colonies end up in such a situation is unclear, but seems unlikely. There are also religious reasons one culture would try to convert or remove another, and that's a bigger worry. Hopefully establishing hundreds or thousands of colonies would mitigate this risk, since diversity tends to help to stave off extinction.
comment by advancedatheist · 2014-10-01T16:02:54.866Z · LW(p) · GW(p)
Has anyone watched CBS's new sort-of police procedural show? The episode on Monday night didn't impress me. I read one review which called it a dumb person's fantasy about the abilities of super-smart people in their 20s; the show and a clip of the pilot I watched online reference the main characters' ultra-high IQs. (Funny, I thought the liberal-progressive party line dismisses IQ as racist pseudoscience and the "mismeasure of man," or something.)
One character, a stereotypical math nerd and calculating prodigy named Sylvester, seems plausible to me because high-end mathematicians tend to do their best work by their early 20s. But in general, a collection of characters this young wouldn't know whatever random or arbitrary things they need at crucial points in the plot to move the story along, regardless of their IQs.
You might have a different view of the series so far, however. Discuss.
Replies from: ChristianKl, polymathwannabe↑ comment by ChristianKl · 2014-10-01T16:49:36.066Z · LW(p) · GW(p)
One character, a stereotypical math nerd and calculating prodigy named Sylvester, seems plausible to me because the high-end mathematicians tend to do their best work by their early 20's.
Do you have a citation for that claim?
↑ comment by polymathwannabe · 2014-10-01T16:19:18.115Z · LW(p) · GW(p)
I read the Wikipedia description (four male supertalented geniuses plus one female nerd plus one average waitress) and it sounded like a sequel to The Big Bang Theory.
Replies from: advancedatheist↑ comment by advancedatheist · 2014-10-01T17:12:57.493Z · LW(p) · GW(p)
At least in The Big Bang Theory, the characters don't know everything and they have awkward but funny social experiences.
By contrast, plays the characters' social inadequacies straight, though one of them, the "behaviorist" who wears a hat all the time, has the ability to perform a "Sherlock scan" on people he just met and can figure out how to manipulate them, at least for a time. I guess that might make him a high-functioning sociopath or something similar.