Open Thread, May 11 - May 17, 2015
post by Gondolinian · 2015-05-11T00:16:56.473Z · LW · GW · Legacy · 247 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
247 comments
Comments sorted by top scores.
comment by ChristianKl · 2015-05-13T13:48:43.593Z · LW(p) · GW(p)
Recent news suggests that having measles weakens the immune system for 2 to 3 years afterwards, and that the measles vaccine therefore prevents a lot of childhood deaths that weren't thought to be measles-related.
comment by Toggle · 2015-05-11T02:35:54.112Z · LW(p) · GW(p)
It looks like AI is overtaking Arimaa, which is notable because Arimaa was created specifically as a challenge to AI. Congratulations to the programmer, David Wu.
Replies from: Kindly, jacob_cannell, ShardPhoenix↑ comment by Kindly · 2015-05-12T15:21:36.879Z · LW(p) · GW(p)
On the subject of Arimaa, I've noted a general feeling of "This game is hard for computers to play -- and that makes it a much better game!"
Progress of AI research aside, why should I care if I choose a game in which the top computer beats the top human, or one in which the top human beats the top computer? (Presumably both the top human and the top computer can beat me, in either case.)
Is it that in go, you can aspire (unrealistically, perhaps) to be the top player in the world, while in chess, the highest you can ever go is a top human that will still be defeated by computers?
Or is it that chess, which computers are good at, feels like a solved problem, while go still feels mysterious and exciting? Not that we've solved either game in the sense of having solved tic-tac-toe or checkers. And I don't think we should care too much about having solved checkers either, for the purposes of actually playing the game.
↑ comment by jacob_cannell · 2015-05-11T05:49:13.368Z · LW(p) · GW(p)
I hadn't heard of Arimaa before, but based on about 5 minutes' worth of reading about the game, I don't understand how it is significantly more suited to natural reasoning than chess. It inherits many of chess's general features that make serial planning more effective than value-function knowledge - thus favoring fast thinkers over slow deep thinkers. Go is much more of a natural reasoning game.
↑ comment by ShardPhoenix · 2015-05-12T09:24:38.241Z · LW(p) · GW(p)
From the sound of it, the AI works more or less like chess AI - search with a hand-tuned evaluation function.
comment by NancyLebovitz · 2015-05-11T16:27:29.413Z · LW(p) · GW(p)
Transhumanism in the real world
Rugby players who get a bottle opener to replace a missing tooth.
Replies from: Lumifer↑ comment by Lumifer · 2015-05-11T16:39:11.436Z · LW(p) · GW(p)
That's no more transhumanism than this.
Replies from: drethelin, RowanE, Ishaan↑ comment by drethelin · 2015-05-11T18:59:09.125Z · LW(p) · GW(p)
False! It's adding functionality rather than just a cosmetic change.
Replies from: Lumifer↑ comment by Lumifer · 2015-05-11T19:03:42.390Z · LW(p) · GW(p)
Cosmetic changes can be highly functional. Ask any girl :-)
On a slightly more serious note, I tend to think of transhumanist modifications as ones which confer abilities that unenhanced humans do not have. Opening beer bottles isn't one of them.
Replies from: Epictetus, TezlaKoil, None↑ comment by Epictetus · 2015-05-12T01:27:20.081Z · LW(p) · GW(p)
Having been in a group of drunk people who found that they had no bottle opener, and having seen what bizarre ideas they concoct to get the bottles open, I'd say a bottle opener in one's tooth merits the status of transhumanist modification.
Replies from: None, Lumifer↑ comment by [deleted] · 2015-05-12T07:42:50.718Z · LW(p) · GW(p)
There was a saying in my youth: "There is no item that is not a beer opener." There was a bit of a competition for creative moves (drinking beer was considered a high-status adult move for teenagers, and opening bottles in creative ways even more so). Keys. Lighters. Doors - the part of the frame where the "tongue" goes in; I'm not sure of the English term. Edges of tables, or edges of anything. Using two bottles, locking the caps together and pulling them apart. I still consider the coolest, manliest way to open a beer, when you're sitting at a fairly invulnerable (e.g. stone) table, to be putting the cap against the edge and hitting it. Another 101 ways.
Replies from: Nornagest↑ comment by Lumifer · 2015-05-12T14:52:08.824Z · LW(p) · GW(p)
You can open a beer bottle with your natural teeth easily enough.
These people lacked in knowledge, not in tools :-P
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2015-05-12T19:38:36.637Z · LW(p) · GW(p)
However, you can damage a tooth by using it to open bottles.
↑ comment by TezlaKoil · 2015-05-12T00:23:59.200Z · LW(p) · GW(p)
Would you consider a Wikipedia brain implant to be a transhumanist modification? After all, ordinary humans can query Wikipedia too!
Replies from: Lumifer↑ comment by Lumifer · 2015-05-12T14:48:26.979Z · LW(p) · GW(p)
Would you consider a Wikipedia brain implant to be a transhumanist modification?
That's a weird way of putting it. Would I consider an implant which consists of a large chunk of memory with some processing and an efficient neural interface to be transhumanist? Yes, of course. It would give a lot of useful abilities, and just filling it with Wikipedia looks like a waste of potential.
I don't think trivializing transhumanism to minor cosmetics is a useful approach. Artificial nails make better screwdrivers than natural nails, so is that also a transhumanist modification?
↑ comment by RowanE · 2015-05-11T18:49:43.288Z · LW(p) · GW(p)
Well, you know what they say about "one man's modus ponens".
Replies from: Lumifer↑ comment by Lumifer · 2015-05-11T18:56:27.326Z · LW(p) · GW(p)
Here is your first transhumanist then, from pre-Columbian Maya...
Replies from: RowanE↑ comment by RowanE · 2015-05-12T16:41:27.119Z · LW(p) · GW(p)
I think the attitude toward the modifications is a relevant factor as well - wanting to be "more than human" in some respect, even if only a trivial respect such as "more awesome-looking than a regular human" or "more able to open beer bottles than a regular human" - but given that, yeah I'd be totally on board with considering some pre-Columbian Maya or other stone-age person "the first transhumanist".
Replies from: Lumifer↑ comment by Ishaan · 2015-05-13T20:48:52.000Z · LW(p) · GW(p)
NancyLebovitz didn't imply the rugby player was showing signs of ideological transhumanism - only that they're doing something transhumanist. Transhumanists don't have the monopoly on self modification. It's the same sense that Christians refer to kind acts as Christian and bad acts as un-Christian.
Transhumanists would claim the first intentional use of fire and writing and all that as transhuman-ish things. (And yes, I would consider self decoration to be a transhumanish thing too. Step into the paleolithic - what's the very first thing you notice which is different about the humans? They have clothes and strings and beads and tattoos, which turn out to have pretty complex social functions. Adam and Eve and all that, it's literally the stuff of myth.)
Replies from: Lumifer↑ comment by Lumifer · 2015-05-13T20:54:54.635Z · LW(p) · GW(p)
So, using tools. Traditionally, tool-using is said to be what distinguishes humans from apes. That makes it just human, not transhuman.
Replies from: Ishaan↑ comment by Ishaan · 2015-05-13T21:00:23.968Z · LW(p) · GW(p)
Yes, I bite that bullet: I think "you ought to use tools to do things better" counts as a foundational principle of transhuman ideology. It's supposed to be fundamentally about being human.
Replies from: Lumifer↑ comment by Lumifer · 2015-05-13T21:26:38.650Z · LW(p) · GW(p)
Well, we might just be having a terminology difference.
My understanding of "transhuman" involves being more than just human. Picking up a tool, even a sophisticated tool, doesn't qualify. And "more" implies that your standard garden-variety human doesn't qualify either.
I'm not claiming there is an easily discernible bright line, but just as contact lenses don't make you a cyborg, a weirdly shaped metal tooth does not make you a transhuman.
Replies from: Ishaan↑ comment by Ishaan · 2015-05-13T23:58:11.446Z · LW(p) · GW(p)
But that's because everyone uses glasses, as a matter of course - it's the status quo now. The person who thought "well, why should we have to walk around squinting all the time when we can just wear these weird contraptions on our heads", at a time when people might have looked at you funny for wearing glass on your face - I think that's pretty transhuman. As is the guy who said "Let's take it further, and put the refractive material directly on our eyeball" back when people would have looked at you real funny if you suggested they put plastic in their eyes (are you crazy, that sounds so uncomfortable).
Now of course, it's easy to look at these things and say "meh".
Edit: If you look at the history of contact lenses, though, what actually happened is less people saying "let's improve" and more people saying "I wonder how the eye works" and doing weird experiments that probably seemed pointless at the time. Something of a case study against the "basic research isn't useful" argument, I think, not that there are many who espouse that here.
comment by Lumifer · 2015-05-14T14:54:37.524Z · LW(p) · GW(p)
Pretty awesome set of trolley problems
Sample:
There’s an out of control trolley speeding towards Immanuel Kant. You have the ability to pull a lever and change the trolley’s path so it hits Jeremy Bentham instead. Jeremy Bentham clutches the only existing copy of Kant’s Groundwork of the Metaphysic of Morals. Kant holds the only existing copy of Bentham’s The Principles of Morals and Legislation. Both of them are shouting at you that they have recently started to reconsider their ethical stances.
comment by Dahlen · 2015-05-12T00:33:59.853Z · LW(p) · GW(p)
Is utilitarianism foundational to LessWrong? Asking because for a while I've been toying with the idea of writing a few posts with morality as a theme, from the standpoint of, broadly, virtue ethics -- with some pragmatic and descriptive ethics thrown in. (The themes are quite generous and interlocking, and to be honest I don't know where to start or whether I'll finish it.) This perspective treats stable character traits, with their associated emotions, drives, and motives as the most reasonably likely determiner of moral behaviour, and means to encourage people to "build character" so as to become more moral beings or improve their behaviour. It doesn't concern itself with quantitative approaches to welfare. Frankly, I find it hard to take seriously the numerical applications of utilitarianism, and my brain just shuts down upon some ethical problems usually enjoyed around here (torture vs. dust specks, repugnant conclusion, contrived deals with strange gods and so on).
I know that Eliezer's virtues-of-rationality post is widely appreciated by many people around here, but it's a declaration of (commitment to) values more than anything. It never seemed to be the dominant paradigm. I guess I just want to know whether a virtue-ethical approach would be well-received here, and the extent to which a utilitarian and a virtue ethicist can usefully discuss morality without jumping a meta level into theories of normative ethics.
Replies from: ChristianKl, ilzolende, pianoforte611, BrassLion, Vaniver, None, Gunnar_Zarncke, Vaniver, OrphanWilde↑ comment by ChristianKl · 2015-05-12T01:37:48.807Z · LW(p) · GW(p)
If it helps you, the 2014 census gave for moral beliefs:
Moral Views:
- Accept/lean towards consequentialism: 901 (60.0%)
- Accept/lean towards deontology: 50 (3.3%)
- Accept/lean towards natural law: 48 (3.2%)
- Accept/lean towards virtue ethics: 150 (10.0%)
- Accept/lean towards contractualism: 79 (5.3%)
- Other/no answer: 239 (15.9%)
Meta-ethics:
- Constructivism: 474 (31.5%)
- Error theory: 60 (4.0%)
- Non-cognitivism: 129 (8.6%)
- Subjectivism: 324 (21.6%)
- Substantive realism: 209 (13.9%)
In general I don't think there are foundational ideas on LW that shouldn't be questioned. Any idea is up for investigation provided the case is well argued.
Replies from: falenas108↑ comment by falenas108 · 2015-05-12T04:13:29.465Z · LW(p) · GW(p)
In general I don't think there are foundational ideas on LW that shouldn't be questioned. Any idea is up for investigation provided the case is well argued.
But there are certain ideas that will be downvoted and dismissed because people feel like they aren't useful to be talking about, like if God exists. I think OP was asking if it was a topic that fell under this category.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-12T11:32:30.934Z · LW(p) · GW(p)
But there are certain ideas that will be downvoted and dismissed because people feel like they aren't useful to be talking about, like if God exists.
The problem with "does God exist" isn't about the fact that LW is atheist. It's that it's hard to say interesting things about the subject and provide a well argued case.
I don't expect to learn something new when I read another post about whether or not God exists. If someone knows the subject well enough to tell me something new, then there's no problem with them writing a post to communicate that insight.
↑ comment by ilzolende · 2015-05-13T03:20:10.382Z · LW(p) · GW(p)
I endorse discussion of virtue ethics on LW mostly because I haven't seen many arguments for why I should use it or discussions of how using it works. I've seen a lot of pro-utilitarianism and "how to do things with utilitarianism" pieces and a lot of discussion of deontology in the form of credible precommitments and also as heuristics and rule utilitarianism, but I haven't really seen a virtue ethics piece that remotely approaches Yvain's Consequentialism FAQ in terms of readability and usability.
↑ comment by pianoforte611 · 2015-05-12T01:43:48.381Z · LW(p) · GW(p)
When you say virtue ethics, it sounds like you are describing consequentialism implemented on human software.
If we're talking about the philosopher's virtue ethics, this question should clarify: Are virtues virtuous because they lead to moral behavior? Or is behavior moral because it cultivates virtue?
The first is just applied consequentialism. The second is the philosopher's virtue ethics.
Replies from: Dahlen↑ comment by Dahlen · 2015-05-12T03:02:11.383Z · LW(p) · GW(p)
The thing is... that's really beyond the scope of what I care to argue about. I understand the difference, but it's so small as to not be worth the typing time. It's precisely the kind of splitting hairs I don't want to go into.
The theme that would get treated is morality, not ethics. It kind of starts off assuming that it is self-evident why good is good, and that human beings do not hold wildly divergent morals or have wildly different internal states in the same situation. Mostly. Sample topics that I'm likely to touch on are: rationality as wisdom; the self-perception of a humble person and how that may be an improvement from the baseline; the intent with which one enters an interaction; a call towards being more understanding to others; respect and disrespect; how to deflect (and why to avoid making) arguments in bad faith; malicious dispositions, and more. Lots of things relevant to community maintenance.
These essays aren't yet written, so perhaps that's why it all sounds (and is) so chaotic. There may be more topics which conflict more obviously with utilitarianism, especially if there's a large number of individuals concerned. As for conflicts with consequentialism, they're less likely, but still probable.
Replies from: pianoforte611↑ comment by pianoforte611 · 2015-05-12T19:26:34.008Z · LW(p) · GW(p)
If you don't want to talk about the difference then I respect that, and I wasn't suggesting that you do. If anything I would suggest avoiding the term "virtue ethics" entirely and instead talking about virtue which is more general and a component of most moral systems.
I disagree that it is splitting hairs, though, or a small difference. It makes a large difference whether you wish to cultivate virtue for its own sake (regardless or independent of consequence), or because it helps you achieve other goals. The latter makes fewer assumptions about the goals of your reader.
↑ comment by BrassLion · 2015-05-13T02:03:08.303Z · LW(p) · GW(p)
Consequentialism, where morality is viewed through a lens of what happens due to human actions, is a major part of LessWrong. Utilitarianism specifically, where you judge an act by the results, is a subset of consequentialism and not nearly as widely accepted. Virtue Ethics are generally well liked and it's often said around here that "Consequentialism is what's right, Virtue Ethics are what works." I think that practical guide to virtue ethics would be well received.
↑ comment by Vaniver · 2015-05-12T13:41:25.338Z · LW(p) · GW(p)
Is utilitarianism foundational to LessWrong?
No. Individual utility calculations are, as a component of decision theory, but decision-theoretic utility and interpersonal-comparison utility are different things with different assumptions.
encourage people to "build character" so as to become more moral beings or improve their behaviour.
This is a solid view, and one of the main ones I take--but I observe that listing out goals and developing training regimens have different purposes and uses.
↑ comment by [deleted] · 2015-05-12T08:30:24.621Z · LW(p) · GW(p)
I think virtue ethics is sufficiently edgy, new, different these days to be interesting. Go on.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-05-12T09:41:06.391Z · LW(p) · GW(p)
new
I agree, scholarship is a problem.
Replies from: None↑ comment by [deleted] · 2015-05-12T11:33:42.765Z · LW(p) · GW(p)
Okay, ancient enough, but it fell into disuse around the Enlightenment and was hardly considered 100-120 years ago. It returned among academic philosophers like Philippa Foot, Catholics like MacIntyre tried to keep it alive, and it is only roughly now that it is slowly being considered again by the hip young atheist literati classes, for whom karma is merely a metaphor and who do not literally believe in bad deeds putting a stain on the soul. So in that sense it is only newly fashionable again.
↑ comment by Gunnar_Zarncke · 2015-05-14T06:24:50.014Z · LW(p) · GW(p)
Again I recommend a poll:
Is utilitarianism foundational to LessWrong? (use the middle option to see results only) [pollid:964]
↑ comment by Vaniver · 2015-05-12T19:02:42.398Z · LW(p) · GW(p)
Also, have you read this post? The virtue tag only points at it and one other, but searching will likely find more.
↑ comment by OrphanWilde · 2015-05-12T18:39:44.979Z · LW(p) · GW(p)
Given that I know somebody is a virtue ethicist, I place a prior probability of 20% that they are bisexual, and a prior probability of 40% that they are some variant of highly functional sociopath.
That's adjusted for overconfidence. I -want- to assign 60% to bisexuality and 80% to sociopath.
Replies from: AlexSchell↑ comment by AlexSchell · 2015-05-12T23:20:59.965Z · LW(p) · GW(p)
Your beliefs imply likelihood ratios of ~10 and ~70 for bisexuality and sociopathy respectively (assuming base rates of 2-3% and 1%, respectively). What do you think you know and how do you think you know it?
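For concreteness, here is the arithmetic behind those figures, assuming the 20%/40% estimates above and base rates of 2.5% and 1% (a rough sketch; the exact ratios shift with the base rates you pick):

\[
\mathrm{LR} = \frac{\text{posterior odds}}{\text{prior odds}}, \qquad
\mathrm{LR}_{\text{bisexual}} = \frac{0.20/0.80}{0.025/0.975} \approx 10, \qquad
\mathrm{LR}_{\text{sociopathy}} = \frac{0.40/0.60}{0.01/0.99} \approx 66.
\]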
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-05-13T13:41:26.652Z · LW(p) · GW(p)
The two variables aren't distinct; bisexuality in this case is a "symptom" of a particular kind of sociopathy. (The reason the odds aren't much closer, however, is that "adaptive sociopathy" has been buried under garbage on the internet since Hannibal made sociopathy "cool", and I'm unable to relocate -any- sources, definitive or otherwise, on the subject since my last research. I may have to resort to textbooks.)
Adaptive sociopaths would find virtue ethics trivial to implement, as it is characterized, effectively, by extremely effective emulation of others. It's a brand of ethics which is particularly well suited to them.
comment by dxu · 2015-05-12T02:58:56.925Z · LW(p) · GW(p)
To any physicists out there:
This idea came to me while I was replaying the game Portal. Basically, suppose humanity one day developed the ability to create wormholes. Would one be able to generate an infinite amount of energy by placing one end of a wormhole directly below the other before dropping an object into the lower portal (thus periodically resetting said object's gravitational potential energy while leaving its kinetic energy unaffected)? This seems like a blatant violation of the first law of thermodynamics, so I'm guessing it would fail due to some reason or other (my guess goes to weird behavior of the gravitational field near the wormhole, which interferes with the larger field of the Earth), but since I'm nowhere close to being a physicist, I thought I'd ask about it on LessWrong.
So? Any ideas as to what goes wrong in the above example?
Replies from: None, Squark, shminux, ZankerH, Slider, falenas108↑ comment by [deleted] · 2015-05-12T03:20:37.348Z · LW(p) · GW(p)
Gravity is a conservative vector field. Any closed path through a gravitational potential leaves you with the same energy you started with. And if it doesn't, you've stolen energy from whatever was creating the gravity in the first place, leaving less for the next circuit to take, so you're just transforming energy from one form to another.
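A minimal sketch of the standard argument, assuming Newtonian gravity with a fixed source and a single-valued potential:

\[
\vec g = -\nabla \Phi \quad\Rightarrow\quad \oint_C \vec g \cdot d\vec\ell = 0,
\]

so an object returning to its starting point has gained no net kinetic energy from gravity alone. (The single-valuedness of \(\Phi\) is exactly what the simply-connectedness question below probes.)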
Replies from: DanielLC↑ comment by DanielLC · 2015-05-12T05:37:41.428Z · LW(p) · GW(p)
Does that apply when the space isn't simply connected? It could be conservative in every neighborhood, but not conservative overall if you allow portals.
Replies from: None↑ comment by [deleted] · 2015-05-12T13:32:54.572Z · LW(p) · GW(p)
Gravity would propagate through the connected space. The potential would probably be very, VERY weirdly shaped, but I see no reason it wouldn't be conservative or otherwise consistent with GR (the math of which is far beyond me). Though keep in mind that in GR space curvature IS gravity and can change over time. I doubt you could maintain a knife edge thin aperture, it would all smooth out.
What's really fun, though, are gravitomagnetic effects.
These are to gravity what magnetism is to the electric field. Both the electric field and the gravitational field are conservative. But changing or accelerating charges generate magnetic fields, which are NOT conservative - hence how an electron spinning around a coil in a generator gains energy even though it returns to its starting point. However, in doing so it accelerates up to velocity, generating a counteracting field that cancels out some of the field accelerating it, the motion moving it, or both. Thus the nonconservative fields have a potential energy associated with them that can be extracted, or used to couple two phenomena that are both coupled to them.
To get gravitomagnetic effects you need huuuuuge mass flows and accelerations. But you can similarly steal the energy that drives them. Think frame dragging and extraction of black hole rotation.
Replies from: DanielLC↑ comment by DanielLC · 2015-05-12T18:39:01.194Z · LW(p) · GW(p)
but I see no reason it wouldn't be conservative or otherwise consistent with GR
I'm assuming GR holds. Does it actually prove the field is conservative, or just irrotational? If it's irrotational and simply connected, then it's conservative, but if you stick a portal in it, it might not be.
I doubt you could maintain a knife edge thin aperture, it would all smooth out.
Portals don't need a knife edge. For example: Flight Through a Wormhole. You do need negative energy density or it will collapse, but that on its own shouldn't break conservation of energy.
↑ comment by Squark · 2015-05-14T18:19:13.503Z · LW(p) · GW(p)
Wormholes don't quite behave like portals in the game.
When something drops into a wormhole with zero velocity, the apparent mass of the entry end increases by the mass of the object and the apparent mass of the exit end decreases by the mass of the object. At some point one of the ends should acquire negative mass. I'm not sure what that means: either it literally behaves as a negative mass object or this is an indication of the wormhole becoming unstable and collapsing.
Similarly, when something with momentum drops into a wormhole, the momentum is added to the apparent momentum of the entry end and subtracted from the apparent momentum of the exit end. The apparent masses change in a way that ensures energy conservation. This means that the gain in energy of the "cycling" object comes from wormhole mass loss and transfer of mass from the high end to the low end. Again, if it's true that the wormhole becomes unstable when its mass is supposed to go negative, that would be the end of the process.
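A back-of-the-envelope version of this bookkeeping, for one pass of an object of mass \(m\) falling a height \(h\) between the mouths (a sketch of the accounting described above, not a GR calculation):

\[
M_{\text{low}} \to M_{\text{low}} + m + \frac{E_{\text{kin}}}{c^2}, \qquad
M_{\text{high}} \to M_{\text{high}} - m - \frac{E_{\text{kin}}}{c^2}, \qquad
E_{\text{kin}} \approx mgh,
\]

so each cycle the object's energy gain is balanced by mass transferred from the upper mouth to the lower one, and after roughly \(M_{\text{high}}/m\) passes the upper mouth's apparent mass would have to go negative, which is where the stability question above kicks in.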
Replies from: shminux↑ comment by Shmi (shminux) · 2015-05-14T19:14:11.841Z · LW(p) · GW(p)
If you already postulate having enough negative energy to create a wormhole, there are no extra issues due to one of the throats having negative mass, except the weird acceleration effect I mentioned in my other reply.
Replies from: Squark↑ comment by Squark · 2015-06-07T19:13:15.173Z · LW(p) · GW(p)
Maybe. However, what will the geometry look like when the sign flip occurs? Will it be non-singular?
Replies from: shminux↑ comment by Shmi (shminux) · 2015-06-07T21:23:57.109Z · LW(p) · GW(p)
There isn't as much difference between negative- and positive-mass wormholes as between negative- and positive mass black holes. Negative-mass black holes have no horizons and a naked repulsive timelike singularity. A negative- (at infinity) mass wormhole would look basically like a regular wormhole. The local spacetime curvature would, of course, be different, but the topology would remain the same, S^2xRxR or similar.
↑ comment by Shmi (shminux) · 2015-05-14T19:08:51.732Z · LW(p) · GW(p)
I have a PhD in Physics and my thesis was, in part, related to wormholes, so here it goes. (Squark covered most of your question already, though.)
If something falls into a black hole, it increases the black hole mass. If something escapes a black hole (such as Hawking radiation), it decreases the black hole mass. Same with white holes. A wormhole is basically two black/white holes connected by a throat. One pass through the portal would increase the mass of the entrance and decrease the mass of the exit by the mass of the passing object.
A portal with two ends having opposite masses would behave rather strangely: they sort of repel (the equivalent of Newton's law of gravity), but the gravitational force acting on the negative-mass end propels it toward the positive-mass end. As a result, the portal as a whole will tend to accelerate toward the positive end (entrance) and fly away, albeit rather slowly.
In addition, due to momentum and angular momentum conservation, the portal will start spinning to counteract the motion of the passing object.
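A quick Newtonian caricature of that runaway behavior, treating the two mouths as point masses \(+M\) and \(-M\) (only a crude approximation to the full GR picture):

\[
\vec a_1 = \frac{G m_2}{r^2}\,\hat r_{12},
\]

i.e. each mouth's acceleration depends only on the other mouth's mass. The positive mouth is therefore pushed away from the negative one, while the negative mouth chases the positive one, so the pair self-accelerates in the direction of the positive-mass end while the total momentum \(Mv + (-M)v = 0\) stays conserved.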
↑ comment by ZankerH · 2015-05-14T11:28:46.248Z · LW(p) · GW(p)
At a glance, it seems like you're asking for extrapolation from a "suppose X - therefore X" - type statement, where X is the invalidation of conservation laws.
Replies from: dxu↑ comment by dxu · 2015-05-15T04:12:38.710Z · LW(p) · GW(p)
I don't quite understand this statement. The only real premise I can see in my original comment is
suppose humanity one day developed the ability to create wormholes.
(Please feel free to correct me if you were in fact referring to some other premise.)
Wormholes are generally agreed to be a possible solution to Einstein's equations--they don't, in and of themselves, violate conservation of energy. The scenario I proposed above is a method for generating infinite energy if physics actually worked that way, but since I'm confident that it doesn't, the proposed scenario is almost certainly flawed in some way. I asked my question because I wasn't sure how it was flawed. Whatever the flaw is, however, I doubt it lies in the wormhole premise.
↑ comment by Slider · 2015-05-18T15:22:20.943Z · LW(p) · GW(p)
I am just taking wormholes to mean "altered connectivity of space" and leave out the "massive concentrations of mass" aspect.
The curious thing about portals is that they somehow magically know to flip gravity when an object travels through. If the portal is just ordinary space, there shouldn't be a sudden gradient in the gravity field; it should go smoothly from one direction to the other. In addition, gravity ought to work through portals. That would mean that if you have a portal in a ceiling, it ought to pull stuff through it towards the ceiling (towards the center of mass beyond the portal). That is, in a standard "infinite fall" portal setup you should feel equal gravity up and down midway between the portals. That kind of setup could be used to store kinetic energy, but it doesn't generate it per se.
However, if portals affected the gravity fields, it could be that the non-standard gravity environment would be a major problem and would work even when you didn't want it to. That is, since the net-zero-gravity point of an infinite fall setup needs to transition smoothly to the "standard gravity environment", that likely means that quite a ways "outside" the portal pair there would be a reduced-gravity environment.
↑ comment by falenas108 · 2015-05-12T04:11:26.670Z · LW(p) · GW(p)
You can probably think about it as the lines of a gravity field also going through the wormhole, and I believe the gravitational force would be 0 around the wormhole.
The actual answer involves thinking about gravity and spacetime as a geometry, which I don't think you want just to answer your question.
comment by John_Maxwell (John_Maxwell_IV) · 2015-05-11T14:08:07.558Z · LW(p) · GW(p)
So there was recently an advance related to chips for running neural networks. I'm having a hard time figuring out if we should be happy or sad. I'm not sure if this qualifies as a "computing power" advance or a "cell modeling" one.
Replies from: Houshalter, ChristianKl↑ comment by Houshalter · 2015-05-14T05:56:16.400Z · LW(p) · GW(p)
I doubt it will help scientists reverse engineer the function of the brain any faster. However it could potentially be very useful for helping AI researchers develop artificial neural networks.
ANNs aren't really tied to neuroscience research, and they probably won't help with emulations. But they are the current leading approach to AI, and increased computing power would significantly help AI research, as it has in the past.
↑ comment by ChristianKl · 2015-05-11T14:33:51.660Z · LW(p) · GW(p)
Neural networks chips aren't neurons. Neurons are much more complex than nodes in artificial neural networks.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2015-05-12T04:37:16.718Z · LW(p) · GW(p)
Technically true but also irrelevant. At the physical level, a modern digital transistor based computer running an ANN simulation is also vastly more complex than the node-level ANN model.
In terms of simulation complexity, a modern GPU is actually more complex than the brain. It would take at most on the order of 10^17 ops/second to simulate a brain (10^14 synapses @ 10^3 Hz), but it takes more than 10^18 ops/second to simulate a GPU (10^9 transistors @ 10^9 Hz).
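Spelling out that arithmetic (the synapse count, firing rate, transistor count, and clock rate are the round-number assumptions used above):

```python
# Rough orders of magnitude, as assumed in the comparison above.
brain_synapses  = 1e14   # ~10^14 synapses
synapse_rate_hz = 1e3    # ~1 kHz per synapse (upper bound)
gpu_transistors = 1e9    # ~10^9 transistors
gpu_clock_hz    = 1e9    # ~1 GHz

brain_ops = brain_synapses * synapse_rate_hz   # ~1e17 op/s to simulate the brain
gpu_ops   = gpu_transistors * gpu_clock_hz     # ~1e18 op/s to simulate the GPU
print(f"brain: {brain_ops:.0e} op/s, GPU: {gpu_ops:.0e} op/s")
```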
Simulating a brain at any detail level beyond its actual computational power is pointless for AI - the ANN level is the exactly correct level of abstraction for actual performance.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-12T10:30:25.176Z · LW(p) · GW(p)
Technically true but also irrelevant. At the physical level, a modern digital transistor based computer running an ANN simulation is also vastly more complex than the node-level ANN model.
ANNs are not neurons. We can't accurately simulate even what a single neuron does. Neurons can express proteins when specific hormones are in their environment. The function of roughly a third of the human genome is unknown. Increasing or decreasing the number of channels for various substances in the cell membrane takes proteins. That's part of long-term plasticity.
It would take at most on the order of 10^17 ops/second to simulate a brain (10^14 synapses @ 10^3 Hz),
That simulation completely ignores neurotransmitters floating around in the brain and many other factors. You can simulate an ANN at one op/synapse, but a simulation of a real brain is very incomplete at that level.
Replies from: None, jacob_cannell↑ comment by [deleted] · 2015-05-12T10:48:58.010Z · LW(p) · GW(p)
We can't accurately simulate even what a single neuron does.
This blew my mind a bit. So why the heck are researchers trying to train neural nets when the nodes of those nets are clearly subpar?
Replies from: jacob_cannell, ChristianKl↑ comment by jacob_cannell · 2015-05-13T00:39:43.895Z · LW(p) · GW(p)
Actually the exact opposite is true - ANN neurons and synapses are more powerful per neuron per synapse than their biological equivalents. ANN neurons signal and compute with high precision real numbers with 16 or 32 bits of precision, rather than 1 bit binary pulses with low precision analog summation.
The difference depends entirely on the problem, and the ideal strategy probably involves a complex heterogeneous mix of units of varying precision (which you see in the synaptic distribution in the cortex, btw), but in general with high precision neurons/synapses you need less units to implement the same circuit.
Also, I should mention that some biological neural circuits implement temporal coding (as in the hippocampus), which allows a neuron to send somewhat higher precision signals (on the order of 5 to 8 bits per spike or so). This has other tradeoffs though, so it isn't worth it in all cases.
Brains are more powerful than current ANNs because current ANNs are incredibly small. All of the recent success in deep learning where ANNs are suddenly dominating everywhere was enabled by using GPUs to train ANNs in the range of 1 to 10 million neurons and 1 to 10 billion synapses - which is basically insect to lizard brain size range. (we aren't even up to mouse sized ANNs yet)
That is still 3 to 4 orders of magnitude smaller than the human brain - we have a long ways to go still in terms of performance. Thankfully ANN performance is more than doubling every year (combined hardware and software increase).
↑ comment by ChristianKl · 2015-05-12T11:35:49.895Z · LW(p) · GW(p)
The fact that they are subpar doesn't mean that you can learn nothing from ANNs. It also doesn't mean that you can't do a variety of machine learning tasks with them.
↑ comment by jacob_cannell · 2015-05-13T00:35:51.716Z · LW(p) · GW(p)
ANNs are not neurons. We can't accurately simulate even what a single neuron does.
Everything depends on your assumed simulation scale and accuracy. If you want to be pedantic, you could say we can't even simulate transistors, because clearly our simulations of transistors are not accurate down to the quantum level.
However, the physics of computation allows us to estimate the approximate level of computational scale separation that any conventional (irreversible) physical computer must have to function correctly (signal reliably in a noisy environment).
The Landauer limit on switching energies is one bound, but most of the energy (in brains or modern computers) goes to wire transmission energy, and one can derive bounds on signal propagation energy in the vicinity of ~1 pJ/bit/mm for reliable signaling. From this we can then plug in the average interconnect distance between synapses and neurons (both directions) and get a maximum computation rate on the order of 10^15 ops or so, probably closer to 10^13 low-precision ops. Deriving all that is well beyond the scope of a little comment.
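To illustrate how such a bound falls out of the ~1 pJ/bit/mm figure (the power budget, signaling fraction, and average wire length below are illustrative assumptions, not measured values):

```python
# Back-of-the-envelope version of the wire-energy bound sketched above.
power_budget_w      = 10.0    # total brain power budget, ~10 W
signaling_frac      = 0.5     # assume roughly half goes to signaling
energy_j_per_bit_mm = 1e-12   # ~1 pJ per bit per mm for reliable signaling
mean_wire_mm        = 0.5     # assumed average interconnect distance per synaptic event

joules_per_op = energy_j_per_bit_mm * mean_wire_mm
ops_per_sec   = power_budget_w * signaling_frac / joules_per_op
print(f"~{ops_per_sec:.0e} low-precision op/s")   # ~1e13 with these assumptions
```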
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-13T10:02:52.323Z · LW(p) · GW(p)
The Landauer limit on switching energies is one bound, but most of the energy (in brains or modern computers) goes to wire transmission energy, and one can derive bounds on signal propagation energy in the vicinity of ~1 pJ/bit/mm for reliable signaling.
The energy count for signal transmission doesn't include changing the amount of ion channels a neuron has. You might model short term plasticity but you don't get long term plasticity.
You also don't model how hormones and other neurotransmitters float around in the brain. An ANN that deals only with electric signal transmission misses essential parts of how the brain works. That doesn't make it bad for the purposes of being an ANN, but it's lacking as a model of the brain.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2015-05-13T16:30:55.284Z · LW(p) · GW(p)
Sure, all of that is true, but of the brain's 10 watt budget, more than half is spent on electric signaling and computation, so all the other stuff you mention at most increases the intrinsic simulation complexity by a factor of 2.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-13T17:50:37.428Z · LW(p) · GW(p)
Are you aware of the complexity of folding of a single protein? It might not take much energy but it's very complex.
If you have 1000 different types of proteins swimming around in a neuron that interact with each other, I don't think you get that by adding a factor of two.
comment by John_Maxwell (John_Maxwell_IV) · 2015-05-11T07:42:42.354Z · LW(p) · GW(p)
I recently found this blog post by Ben Kuhn where he briefly summarizes ~5 classic LW posts in the space of one blog post. A couple points:
I don't think that much of the content of the original posts is lost in Ben's summary, and it's a lot faster to read. Do others agree? Do we think producing a condensed summary of the LW archives at some point might be valuable? (It's possible that, for instance, the longer treatment of these concepts in the original posts pushes them deeper into your brain, or that since people are so used to skimming, conceptually dense content can actually be bad.)
Might it be worthwhile to link to this from the LW about page or FAQ for newcomers?
[pollid:956]
[pollid:957]
Replies from: Kindly
comment by Error · 2015-05-11T01:29:47.227Z · LW(p) · GW(p)
Just posted this in the previous open thread; reposting here: Has anyone here used fancyhands.com or a similar personal-assistant service? If so, what was your experience like?
(context: I have anxiety issues making phone calls to strangers and certain other ugh fields, and am thinking I may be better off paying someone else to take care of such things rather than trying to bull through the ugh fields.)
Replies from: None
comment by taygetea · 2015-05-17T22:57:13.911Z · LW(p) · GW(p)
Hi. I don't post much, but if anyone who knows me can vouch for me here, I would appreciate it.
I have a bit of a Situation, and I would like some help. I'm fairly sure it will be positive utility, not just positive fuzzies. Doesn't stop me feeling ridiculous for needing it. But if any of you can, I would appreciate donations, feedback, or anything else over here: http://www.gofundme.com/usc9j4
comment by CronoDAS · 2015-05-16T01:05:48.990Z · LW(p) · GW(p)
I can't understand what my girlfriend is saying when she uses her cell phone to call my cell phone. Often entire words are simply dropped from the audio stream. Is there anything I can do to improve the voice quality?
Replies from: moreati↑ comment by moreati · 2015-05-16T17:55:37.844Z · LW(p) · GW(p)
A few thoughts based on eliminating/ruling out possible causes:
- Can you avoid making cell -> cell calls? If you're both on a smartphone with wifi could you use e.g. Skype or Messenger?
- Can you both use a hands free kit? This should eliminate poor positioning of the microphone/earpiece.
- Are you or your girlfriend in a poor signal area? Does going outside to make the call reduce the problem? There are options such as GSM signal boosters and femtocells that might be worth exploring.
- Some carriers have deployed HD Voice service. See if your phone and carrier(s) support this.
comment by TsviBT · 2015-05-12T02:03:29.552Z · LW(p) · GW(p)
PSA: If you wear glasses, you might want to take a look behind the little nosepads. Some... stuff... can build up there. According to this unverified source it is oxidized copper from glasses frame + your sweat, and can be cleaned with an old toothbrush + toothpaste.
Replies from: Dorikka
comment by Epictetus · 2015-05-11T18:56:34.947Z · LW(p) · GW(p)
For some time I've been thinking about just how much of our understanding of the world is tied up in stories and narratives.
Let's take gravity. Even children playing with balls have a good idea of where a ball is going to land after they throw it. They don't know anything about spacetime curvature or Newton's laws. Instead, they amass a lot of data about the behavior of previously-thrown balls and from this they can predict where a newly-thrown ball will land. With experience, this does not even require conscious thought--a skilled ball-player is already moving into position by the time he's consciously aware of what's happening.
You can do the same thing with computers. Once you have enough raw data, you can tabulate it and use various methods to make predictions. These can range from simple interpolation to more complicated statistical modeling. The point is that you don't need any deeper understanding of the underlying phenomena to make it work. You can get good results as long as the phenomena are nice enough and the initial conditions aren't far removed from the data you used to construct the model. Going back to balls, you'll do fine predicting how a ball thrown by a human will behave, but the methods will probably fail if you shot a ball out of a cannon.
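A toy version of that black-box approach, with made-up numbers (the fit knows no physics, so it is only trustworthy near the data it was built from):

```python
import numpy as np

# Hand-thrown balls: launch speeds (m/s) and measured landing distances (m).
speeds = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
ranges = np.array([0.9, 1.6, 2.5, 3.6, 4.9])

model = np.polyfit(speeds, ranges, deg=2)   # pure curve fit, no underlying theory

print(np.polyval(model, 5.5))    # interpolation near the data: reasonable
print(np.polyval(model, 100.0))  # cannon-speed extrapolation: not to be trusted
```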
So far, so good. You can treat actual phenomena as a black box and make predictions based only on initial conditions, and this works for everyday life. Yet, we feel a drive to explain things. We like to come up with stories. Most of these are silly, but largely harmless. Sometimes, though, we happen upon a useful story or analogy. These stories transcend the role of explanations and enable us to make predictions outside of our accumulated data. Aristotle's gravity didn't have a detrimental effect on the engineering of the day due to the aforementioned use of experience, but Newton's gravity let us push things so much further.
Modern physics is full of these stories which are wrong, but make for good enough analogies to be useful. Take continuum mechanics. We know matter is made of atoms and molecules, but we sometimes assume it's continuous. Then we take this continuous matter and assume it's made up of tiny boxes (finite elements), each subjected to a constant force. We look at how the force acts on these tiny boxes and add up all the contributions to get an idea of what happens to a large object. Take limits, neglect higher-order terms, and you've got yourself a nice set of equations. In this case, a good story can be more useful than the truth.
The hard part is coming up with a good narrative framework. Working out the details is a lot easier once you have a mental picture of where you are and where you're going. It's easy to come up with a story that doesn't add anything--some ad-hoc tale to satisfy your desire for an explanation and let you go on doing what you were doing with your black-box model.
Sorry if this is a bit disjointed. I'm still trying to straighten it out in my own mind.
Replies from: Lumifer, IlyaShpitser↑ comment by Lumifer · 2015-05-11T19:01:38.828Z · LW(p) · GW(p)
I think what would be useful is to distinguish a story (a typically linear narrative) and a model (a known-to-be-simplified map of some piece of reality). They are sufficiently different and often serve different goals. In particular, stories are rarely quantitative and models usually are.
Replies from: Epictetus↑ comment by Epictetus · 2015-05-11T19:27:24.223Z · LW(p) · GW(p)
I like to think about how the two complement each other. You can build a model out of a mass of data, but extrapolation outside the data is tricky business. You can also start with a qualitative description of the phenomena involved and work out the details. A lot of models start off by making some assumptions and figuring out the consequences.
Example: you can figure out gas laws by taking lots of measurements, or you can start with the assumption that gases are made of molecules that bounce around and go from there.
Replies from: Lumifer↑ comment by Lumifer · 2015-05-11T20:33:12.473Z · LW(p) · GW(p)
We might be understanding the word "story" differently.
To me a "story" is a narrative (a linear sequence of words/sentences/paragraphs/etc.) with the general aim of convincing your System 1. It must be simple enough for the System 1 and must be able to be internalized to become effective. There are no calculations in stories and they generally latch onto some basic hardwired human instincts.
For example, a simple and successful story is "There are tiny organisms called germs which cause disease. Wash your hands and generally keep clean to avoid disease". No numbers, plugs into the purity/disgust template, mostly works.
The three laws of Newton are not a story to me, to pick a counter-example. Nor is the premise that gas consists of identical independent molecules in chaotic motion -- that's an assumption which underlies a particular class of models.
Models, as opposed to stories, are usually "boxes" in the sense that you can throw some inputs into the hopper, turn the crank, and get some outputs from the chute. They don't have to be intuitive or even understandable (in which case the box is black), they just have to output correct predictions. Newton's laws, for example, make correct predictions (within their sphere of applicability and to a limited degree of precision), but we still have no idea how gravity really works.
Replies from: Epictetus↑ comment by Epictetus · 2015-05-12T01:16:25.624Z · LW(p) · GW(p)
I was using "story" in a much more general sense. Perhaps I should have chosen a different word. I saw a story as some bit of exposition devised to explain a process. In that sense, I would view the kinetic theory of gases as a story. A gas has pressure because all these tiny particles are bumping into the walls of its container. Temperature is related to the average kinetic energy of the particles. The point here is that we can't see these particles, nor can we directly measure their state.
Consider, in contrast, the presentation in Fermi's introductory Thermodynamics book. He eschewed an explanation of what exactly was happening internally and derived his main results from macroscopic behavior. Temperature was defined initially as that which a gas thermometer measures, and later on he developed a thermodynamic definition based on the behavior of reversible heat engines. This sort of approach treats the inner workings of a gas as unknown and only uses that which we can directly observe through instrumental readings.
I guess what I really want to distinguish are black boxes from our attempts to guess what's in the box. The latter is what I tried to encapsulate by "story".
Replies from: Lumifer↑ comment by IlyaShpitser · 2015-05-11T21:34:08.773Z · LW(p) · GW(p)
You are talking about prediction vs causality. I agree, we understand via causality, and causality lets us take data beyond what is actually observed into the realm of the hypothetical. Good post.
comment by taygetea · 2015-05-15T22:40:41.923Z · LW(p) · GW(p)
I've begun to notice discussion of AI risk in more and more places in the last year. Many of them reference Superintelligence. It doesn't seem like confirmation bias or the Baader-Meinhof effect, not really. It's quite an unexpected change. Have others encountered a similar broadening in the sorts of people you encounter talking about this?
Replies from: Manfred
comment by OrphanWilde · 2015-05-13T19:13:30.924Z · LW(p) · GW(p)
Anybody care to weigh in on adding a flag to newbies, and make it part of the LessWrong culture to explain downvotes to flagged newbies?
Identifying what you've done incorrectly to provoke downvotes is a skill that requires training. (Especially since voting behavior in Discussion is much less consistent than voting behavior in Main.)
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2015-05-14T06:29:57.804Z · LW(p) · GW(p)
You can detect newbies by their low karma and moderate positive ratio. The registration age doesn't mean much really.
Replies from: philh↑ comment by philh · 2015-05-14T13:39:04.793Z · LW(p) · GW(p)
You have to click through to discover that though, and there are exceptions who have a low ratio but don't need downvotes explained to them. (I don't know if there are such users with a low ratio and low total, though.)
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2015-05-14T15:46:37.450Z · LW(p) · GW(p)
You can see the ratio in the tooltip over the karma score.
Interestingly, there are users with positive ratios arbitrarily close to 50%.
Replies from: philh
comment by Silver_Swift · 2015-05-12T08:51:39.362Z · LW(p) · GW(p)
Is this the place to ask technical questions about how the site works? If so, then I'm wondering why I can't find any of the rationality quote threads on the main discussion page anymore (I thought we'd just stopped doing those, until I saw it pop up in the side bar just now). If not, then I think I just asked anyway. :P
Replies from: NancyLebovitz, gjm↑ comment by NancyLebovitz · 2015-05-12T14:04:33.076Z · LW(p) · GW(p)
This is a good place to ask about how the site works.
↑ comment by gjm · 2015-05-12T09:05:36.255Z · LW(p) · GW(p)
Here -- it's in Main rather than Discussion.
Replies from: Silver_Swift↑ comment by Silver_Swift · 2015-05-12T13:16:23.490Z · LW(p) · GW(p)
Thanks!
comment by chaosmage · 2015-05-11T10:28:03.291Z · LW(p) · GW(p)
I have nearly finished the second of the Seven Secular Sermons, which is going to premiere at the European Less Wrong Community Weekend in Berlin in June. For final polishing, I'm looking for constructive feedback, especially from native speakers of English. If you'd like to help out, PM me for the current draft.
Replies from: None↑ comment by [deleted] · 2015-05-11T12:29:01.913Z · LW(p) · GW(p)
Off-topic, but I like your theory of depression. Have you ever written about it at greater length elsewhere? Or any recommended online readings?
Replies from: chaosmage, Elo↑ comment by chaosmage · 2015-05-11T14:01:52.547Z · LW(p) · GW(p)
Glad you like it. No I haven't written about it at more length than in that post, and it is entirely my own speculation, based only on the phenomenology of clinical depression and the rank theory I referenced.
I don't have any reading on depression to recommend that is anywhere near as good as SSC. And that's despite my working as a research associate at a depression-focused nonprofit.
Replies from: NancyLebovitz, Elo↑ comment by NancyLebovitz · 2015-05-11T14:45:35.190Z · LW(p) · GW(p)
The status possibility doesn't explain post-partum depression.
Replies from: chaosmage↑ comment by chaosmage · 2015-05-11T15:27:32.698Z · LW(p) · GW(p)
Why not?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2015-05-11T15:32:49.614Z · LW(p) · GW(p)
The baby is very vulnerable to the mother.
Replies from: ChristianKl, chaosmage↑ comment by ChristianKl · 2015-05-11T20:42:58.261Z · LW(p) · GW(p)
But the baby is also able to often dictate when the mother sleeps and has power over the mother. At least if the mother lets it.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2015-05-11T21:14:11.156Z · LW(p) · GW(p)
Here's a quote from the link above: "What triggers the depression response is a lack of obviously relatively weaker (dependent or safe to bully) group members".
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-11T22:25:36.411Z · LW(p) · GW(p)
"Weakness" isn't a straightforward word. It's not a precise word. In general this theory hasn't had the amount of work needed to be precise.
It can be that the thing that matters for weakness is having power over other people. The power relationship between a mother and her child is complex.
↑ comment by chaosmage · 2015-05-12T07:54:56.312Z · LW(p) · GW(p)
Good point. While this theory does predict that nobody who has recently won a physical fight or successfully bullied someone (in a non-virtual setting) should have acute depression symptoms, I'd rather be cautious about less obviously one-sided imbalances. After all, kids are quite dependent for several more years after the postpartum period, and they evidently don't confer immunity to depression symptoms for that entire period.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2015-05-12T13:58:40.523Z · LW(p) · GW(p)
It wouldn't surprise me if some bosses are seriously depressed even though they have a complex relationship with employees.
comment by [deleted] · 2015-05-12T09:55:05.469Z · LW(p) · GW(p)
The rationalist Tumblr community looks interesting. Any tips on how to start?
Replies from: Gondolinian↑ comment by Gondolinian · 2015-05-12T23:45:38.244Z · LW(p) · GW(p)
Well, for a start there's the Rationalist Masterlist currently hosted by Yxoque (MathiasZaman here on LW). You could announce your presence there and ask to be added to the list, or just lurk around some of the blogs for a while and send anonymous asks to people to get a feel for the community before you set up an account.
Replies from: None
comment by Archelon · 2015-05-17T23:50:44.481Z · LW(p) · GW(p)
According to this article, a traumatic brain injury turned a furniture salesman into a mathematician. (Not without side effects, but still.)
There is a bit of conventional wisdom in evolutionary biology that drastic improvements in efficacy are not available through trivial modifications (and that nontrivial modifications which are random are not improvements). This is an example of the principle that evolution is supposed to have already 'harvested' any 'low-hanging fruit'. Although I don't think much of this type of website (note the lack of external links), the story seems to be based in reality; it is thus one of the most surprising things I have ever heard. And, oddly, heartening as well---insofar as it suggests both a potential shortening of the timescale for human intelligence augmentation and the possibility that such augmentation may be relatively more accessible (than I previously thought) by comparison to computer-based artificial intelligence developments.
comment by the-citizen · 2015-05-17T06:52:33.117Z · LW(p) · GW(p)
Suffering and AIs
Disclaimer - Under utilitarianism, suffering is an intrinsically bad thing. While I am not a utilitarian, many people are, and I will treat it as true for this post because it is the easiest approach for this issue. Also, apologies if others have already discussed this idea, which seems quite possible.
One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let's say an AI is designed with the ability to have pain, fear, hope and pleasure of some kind. It might be reasonable to expect in such cases the unpleasant tasks might result in some form of suffering. Added to this problem is the fact that a finite lifespan and an approaching termination/shutdown might cause fear, another form of suffering. Taking steps to shut down an AI would also then become morally unacceptable, even though they perform an activity that might be useless or harmful. Because of this, we might face a situation where we cannot shutdown AIs even when there is good reason to.
Basically, if suffering AIs were some day extremely common, we would be introducing a massive amount of suffering into the world, which under utilitarianism is unacceptable. Even assuming some pleasure is created, we might search for ways to create that pleasure without creating the pain.
If so, would it make sense to adopt a principle of AI design that says AIs should be designed so that they (1) do not suffer or feel pain and (2) do not fear death/shutdown (e.g. view their own finite lives as acceptable)? This would minimise suffering (potentially you could also attempt to maximise happiness).
Potential issues with this: (1) Suffering might be in some way relative, so that a neutral lack of pleasure/happiness might become "suffering". (2) Pain/suffering might be useful for creating a robot with high utility, and thus some people may reject this principle. (3) I am troubled by the utilitarian approach I have used here, as it seems to justify tiling the universe with machines whose only purpose and activity is to be permanently happy for no reason. (4) Also... killer robots with no pain or fear of death :-P
Replies from: hairyfigment↑ comment by hairyfigment · 2015-05-17T15:46:48.205Z · LW(p) · GW(p)
Replies from: the-citizen↑ comment by the-citizen · 2015-05-19T07:40:49.598Z · LW(p) · GW(p)
That seems like an interesting article, though I think it is focused on the issue of free will and morality, which is not my focus.
comment by Fatcat · 2015-05-14T20:36:31.765Z · LW(p) · GW(p)
Article on transhumanism (intro bit perfunctory) - but has interviews with Anders Sandberg and Steve Fuller on implications of transhumanist thought. Quite interesting in parts - http://www.theworldweekly.com/reader/i/humanity-20/3757
Same journalist did reasonable job of introducing AI dangers last month - http://www.theworldweekly.com/reader/i/irresistible-rise-ai/3379
comment by [deleted] · 2015-05-13T08:02:59.412Z · LW(p) · GW(p)
Asking for article recommendations: the difference between intelligence and intellectualism, and how a superintelligence is not the same as a superintellectual.
comment by sixes_and_sevens · 2015-05-11T10:16:37.161Z · LW(p) · GW(p)
An idea: auto-generated anki-style flashcards for mathematical notation.
Let's say you struggle reading set builder notation. This system would prompt you with progressively more complicated set builder expressions to parse, keeping track of what you find easy or difficult, and providing tooltips/highlighting for each individual term in the expression. If it were an anki card, the B-side would be how you'd read the expression out in natural language. This wouldn't be a substitute for learning how to use set builder notation, but it would give you a lot of practice in reading it.
There's an easy version of this you could cobble together in an afternoon which has a bunch of randomly-populated templates it renders with MathJax or something. There's a more sophisticated extended project which uses generative grammars, gamified progress visibility and spaced-repetition algorithms.
I've been thinking about putting something like this together, but realistically I don't have the time or the complete skill-set to do it justice, and it would never get finished. Having read this thread about having difficulty in reading mathematical notation, I'm convinced a lot of other people might benefit from it.
ETA: it was probably misguided of me to liken this to Anki decks. I'm not talking about generating a bunch of static flashcards to be used with an existing system like Anki, but something separate that generates dynamic examples of what you're trying to learn, against which you'd record your success at parsing each example in a way similar to Anki. There are, of course, all sorts of problems with memorising specific examples of mathematical notation with an Anki deck, which respondents have prudently picked up on.
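For concreteness, here is a minimal sketch of the "easy version" described above: a couple of randomly-populated templates whose A-side is a LaTeX expression (to be rendered with MathJax or similar) and whose B-side is the natural-language reading. All the names, templates, and pools here are hypothetical illustrations, not an existing tool:

```python
import random

# Hypothetical templates for set-builder expressions; each placeholder is
# filled from a small pool, and the B-side is the natural-language reading.
TEMPLATES = [
    (r"\{ x \in %(set)s \mid x %(rel)s %(n)d \}",
     "the set of all %(set_name)s x such that x is %(rel_name)s %(n)d"),
    (r"\{ %(f)s(x) \mid x \in %(set)s \}",
     "the set of values %(f)s(x) for every x in %(set_name)s"),
]

POOLS = {
    "set": [r"\mathbb{N}", r"\mathbb{Z}", r"\mathbb{R}"],
    "set_name": ["naturals", "integers", "reals"],
    "rel": ["<", ">", r"\le"],
    "rel_name": ["less than", "greater than", "at most"],
    "f": ["f", "g"],
}

def make_card():
    """Return (front, back): a random LaTeX expression and its reading."""
    front_tpl, back_tpl = random.choice(TEMPLATES)
    i = random.randrange(len(POOLS["set"]))   # keep set / set_name aligned
    j = random.randrange(len(POOLS["rel"]))   # keep rel / rel_name aligned
    fills = {
        "set": POOLS["set"][i], "set_name": POOLS["set_name"][i],
        "rel": POOLS["rel"][j], "rel_name": POOLS["rel_name"][j],
        "f": random.choice(POOLS["f"]),
        "n": random.randint(1, 20),
    }
    return front_tpl % fills, back_tpl % fills

if __name__ == "__main__":
    front, back = make_card()
    print("A-side (render with MathJax):", front)
    print("B-side:", back)
```

A real version would presumably swap the hard-coded pools for generative grammars and feed the results into a spaced-repetition scheduler, per the extended project described above.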
Replies from: Richard_Kennaway, Strangeattractor, ChristianKl↑ comment by Richard_Kennaway · 2015-05-11T12:40:59.383Z · LW(p) · GW(p)
An idea: auto-generated anki-style flashcards for mathematical notation.
Auto-generated exercises might be better. Compared with e.g. learning a language, there aren't many elementary components to mathematical notation to be memorised.
The exercises might be auto-rated for complexity, and a generalised Anki for this sort of material would generate random examples of various degrees of complexity, and make the distribution of complexity depend in some way on the distribution of your errors with respect to complexity.
Language learning materials might be similarly generalised from the simple vocabulary lists that flashcards are usually used for.
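To make the "distribution of complexity depends on your errors" idea concrete, here is a rough sketch. The complexity levels, the smoothing constants, and the error-rate weighting rule are all my own assumptions, purely to show the shape of such a sampler:

```python
import random
from collections import defaultdict

class AdaptiveSampler:
    """Pick an exercise complexity level, biased toward levels where the
    user has recently made more errors (assumed weighting rule)."""

    def __init__(self, levels=(1, 2, 3, 4, 5)):
        self.levels = levels
        self.errors = defaultdict(lambda: 1.0)    # smoothing: every level stays possible
        self.attempts = defaultdict(lambda: 2.0)

    def record(self, level, correct):
        self.attempts[level] += 1
        if not correct:
            self.errors[level] += 1

    def next_level(self):
        # weight each complexity level by its observed error rate
        weights = [self.errors[l] / self.attempts[l] for l in self.levels]
        return random.choices(self.levels, weights=weights, k=1)[0]
```

A real system would presumably also decay old results and mix in some exploration so that already-mastered levels still appear occasionally.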
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2015-05-11T14:21:06.516Z · LW(p) · GW(p)
I agree that auto-generated exercises would be a superior utility, but that seems like a much trickier proposition.
Also, for clarification, this wouldn't be used for memorising notation, but for training fluency in it. My use of Anki as a comparison might have been misguided.
↑ comment by Strangeattractor · 2015-05-11T21:38:52.213Z · LW(p) · GW(p)
I like the idea of making it easier to understand mathematical notation, and get more practice at it. However, using flash cards to implement it could be problematic.
As I learned more and more mathematical notation while studying engineering, it became clear that a lot of the interpretation of the notation depends upon context. For example, if you see vertical lines to either side of an expression, does that mean absolute value or the determinant of a matrix? Is i representing the imaginary number, or current, or the unit vector in the direction of the x-axis? (As an example, electrical engineers use j for the imaginary number, since i represents current.)
For a sufficiently narrow topic, the flashcards might be useful, but it might set up false expectations that the meaning of the symbols will apply outside that narrow topic. There is not a one-to-one correspondence between symbols and meaning.
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2015-05-11T23:42:45.680Z · LW(p) · GW(p)
I was envisioning some sort of context-system, in part for the reason you describe and in part because people probably have specific learning needs, and at any given time they'd probably be focusing on a specific context.
Also I reiterate what I've said to other commenters: likening it to Anki flashcards was probably misguided on my part. I'm not talking about generating a bunch of static flashcards, but about presenting a user with a dynamically-generated statement for them to parse. The interface would be reminiscent of something like Anki, but it would probably never show you the same statement twice.
↑ comment by ChristianKl · 2015-05-11T12:21:38.071Z · LW(p) · GW(p)
It's important to understand the notation before you put it into Anki. Automatically generated cards with mathematical notation that the person doesn't yet understand are asking for trouble.
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2015-05-11T14:13:08.310Z · LW(p) · GW(p)
I may not have presented this well in the original comment. This wouldn't be generating random static cards to put into an Anki deck, but a separate system which dynamically presents expressions made up of known components, and tracks those components instead of specific cards. It seems plausible to restrict these expressions to those composed of notation you've already encountered.
In fact, this could work to its advantage. It also seems plausible to determine which components are bottlenecks, and therefore which concepts are the most effective point of intervention for the person studying. If the user hasn't learned, say, hat-and-tilde notation for estimators, and introducing that notation would result in a greater order of available expressions than the next most bottleneck-y piece of notation, it could prompt the user with "hey, this is hat-and-tilde notation for estimators, and it's stopping you from reading a bunch of stuff". It could then direct them to some appropriate material on the subject.
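A toy illustration of the bottleneck idea (the template/requirement data and the scoring rule are invented for the example, not a real curriculum): score each unknown notation component by how many currently unreadable expression templates it alone is blocking, and suggest the highest-scoring one.

```python
# Hypothetical data: each expression template lists the notation components
# it needs; a component's "bottleneck score" is the number of currently
# unreadable templates that it alone is blocking.
TEMPLATE_REQUIREMENTS = {
    "sample_mean": {"subscript", "summation"},
    "estimator_hat": {"hat_tilde", "subscript"},
    "confidence_interval": {"hat_tilde", "plus_minus"},
}

def bottleneck_scores(known):
    scores = {}
    for needed in TEMPLATE_REQUIREMENTS.values():
        missing = needed - known
        if len(missing) == 1:            # exactly one component blocks this template
            comp = next(iter(missing))
            scores[comp] = scores.get(comp, 0) + 1
    return scores

known = {"subscript", "summation", "plus_minus"}
print(bottleneck_scores(known))   # {'hat_tilde': 2} -> suggest hat/tilde notation next
```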
comment by [deleted] · 2015-05-11T08:02:51.286Z · LW(p) · GW(p)
In what conceivable (which does not imply logically coherent) universes would Rationalism not work, in the sense of unearthing only some truths, not all truths? That is, some realms of truth would be hidden to Rationalists. To simplify, I largely mean the empiricist aspect: tying ideas to observations via prediction. What conceivable universes have non-observational truths, for example Platonic/Kantian "pure a priori deduction" types of mental-only truths? Imagine for convenience's sake a Matrix-type simulated universe, not necessarily a natural one, so it does not really need to be lawful nor unfold from basic laws.
Reason for asking: if you head over to a site like The Orthosphere, they will tell you Rationalism can only find some but not all truths. And one good answer would be: "This could happen in universes of the type X, Y, Z. What are your reasons for thinking ours could be one of them?"
Replies from: IlyaShpitser, OrphanWilde, ChristianKl, None, drethelin↑ comment by IlyaShpitser · 2015-05-11T08:18:59.388Z · LW(p) · GW(p)
Don't need to posit crazy things, just think about selection bias -- are the sorts of people that tend to become rationalist randomly sampled from the population? If not, why wouldn't there be blind spots in such people just based on that?
Replies from: None↑ comment by [deleted] · 2015-05-11T08:40:52.042Z · LW(p) · GW(p)
Yes, but if I get the idea right, it is to learn to think in a self-correcting, self-improving way. For example, maybe Kanazawa is right that intelligence suppresses instincts / common sense, but a consistent application of rationality would sooner or later lead to discovering this and forming strategies to correct it.
For this reason, it is more about the rules (of self-correction, self-improvement, self-updating sets of beliefs) than the people. What kinds of truths would be potentially invisible to a self-correcting observationalist ruleset, even if it was practiced by all kinds of people?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-05-11T08:49:46.649Z · LW(p) · GW(p)
Just pick any of a large set of things the LW-sphere gets consistently wrong. You can't separate the "ism" from the people (the "ists"), in my opinion. The proof of the effectiveness of the "ism" lies in the "ists".
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2015-05-11T11:58:23.628Z · LW(p) · GW(p)
Which things are you thinking of?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-05-11T14:38:57.729Z · LW(p) · GW(p)
A lot of opinions much of LW inherited uncritically from EY, for example. That isn't to say that EY doesn't have many correct opinions, he certainly does, but a lot of his opinions are also idiosyncratic, weird, and technically incorrect.
As is true for most of us. The recipe here is to be widely read (LW has a poor scholarship problem too). Not moving away from EY's more idiosyncratic opinions is sort of a bad sign for the "ism."
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2015-05-11T14:44:49.587Z · LW(p) · GW(p)
Could you mention some of the specific beliefs you think are wrong?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-05-11T14:58:42.080Z · LW(p) · GW(p)
Having strong opinions on QM interpretations is "not even wrong."
LW's attitude on B is, at best, "arguable."
Donating to MIRI as an effective use of money is, at best, "arguable."
LW consequentialism is, at best, "arguable."
Shitting on philosophy.
Rationalism as part of identity (aspiring rationalist) is kind of dangerous.
etc.
What I personally find valuable is "adopting the rationalist kung fu stance" for certain purposes.
Replies from: NancyLebovitz, OrphanWilde, Luke_A_Somers, ChristianKl↑ comment by NancyLebovitz · 2015-05-11T15:30:51.222Z · LW(p) · GW(p)
Thank you.
LW's attitude on B is, at best, "arguable."
B?
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-05-11T15:32:27.764Z · LW(p) · GW(p)
Bayesian.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2015-05-11T17:30:24.712Z · LW(p) · GW(p)
I read that "B" and assumed that you had a reason for not spelling it out, so I concluded that you meant Basilisk.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-05-11T21:08:48.720Z · LW(p) · GW(p)
Sorry, bad habit, I guess.
↑ comment by OrphanWilde · 2015-05-11T15:17:40.038Z · LW(p) · GW(p)
Rationalism as part of identity (aspiring rationalist) is kind of dangerous.
[Edited formatting] Strongly agree. http://lesswrong.com/lw/huk/emotional_basilisks/ is an experiment I ran which demonstrates the issue. Eliezer was unable to -consider- the hypothetical; it "had" to be fought.
The reason being, the hypothetical implies a contradiction in rationality as Eliezer defines it; if rationalism requires atheism, and atheism doesn't "win" as well as religion, then the "rationality is winning" definition Eliezer uses breaks; suddenly rationality, via winning, can require irrational behavior. Less Wrong has a -massive- blind spot where rationality is concerned; for a web site which spends a significant amount of time discussing how to update "correctness" algorithms, actually posing challenges to "correctness" algorithms is one of the quickest ways to shut somebody's brain down and put them in a reactionary mode.
Replies from: Richard_Kennaway, TheAncientGeek, ChristianKl↑ comment by Richard_Kennaway · 2015-05-12T12:05:43.181Z · LW(p) · GW(p)
Eliezer was unable to -consider- the hypothetical; it "had" to be fought.
It seems to me that he did consider your hypothetical, and argued that it should be fought. I agree: your hypothetical is just another in the tedious series of hypotheticals on LessWrong of the form, "Suppose P were true? Then P would be true!"
BTW, you never answered his answer. Should I conclude that you are unable to consider his answer?
Eliezer also has Harry Potter in MoR withholding knowledge of the True Patronus from Dumbledore, because he realises that Dumbledore would not be able to cast it, and would no longer be able to cast the ordinary Patronus.
Now, he has a war against the Dark Lord to fight, and cannot take the time and risk of trying to persuade Dumbledore to an inner conviction that death is a great evil in order to enable him to cast the True Patronus. It might be worth pursuing after winning that war, if they both survive.
All this has a parallel with your hypothetical.
Replies from: Jiro, Miguelatron, OrphanWilde↑ comment by Jiro · 2015-05-12T15:45:20.184Z · LW(p) · GW(p)
your hypothetical is just another in the tedious series of hypotheticals on LessWrong of the form, "Suppose P were true? Then P would be true!"
The hypothetical (P) is used to get people to draw some conclusions from it. These conclusions must, by definition, be logically implied by the original hypothetical, or nobody would be able to make them, so you can describe them as being equivalent to P. Thus, all hypotheticals can be described, using your reasoning, as "Suppose P were true? Then P would be true!"
Furthermore, that also means "given Euclid's premises, the sum of the angles of a triangle is 180 degrees" is a type of "Suppose P were true? Then P would be true!"--it begins with a P (Euclid's premises) and concludes something that is logically equivalent to P.
I suggest that an argument which begins with P and ends with something logically equivalent to P cannot be usefully described as "Suppose P would be true? Then P would be true!" This makes OW's hypothetical legitimate.
Replies from: Richard_Kennaway, TheAncientGeek↑ comment by Richard_Kennaway · 2015-05-12T19:00:49.508Z · LW(p) · GW(p)
I suggest that an argument which begins with P and ends with something logically equivalent to P cannot be usefully described as "Suppose P would be true? Then P would be true!" This makes OW's hypothetical legitimate.
The argument has to go some distance. OrphanWilde is simply writing his hypothesis into his conclusion.
Replies from: Jiro↑ comment by Jiro · 2015-05-12T19:10:45.682Z · LW(p) · GW(p)
His hypothetical is "suppose atheism doesn't win". His conclusion is not "then atheism doesn't win", so he's not writing his hypothesis into his conclusion. Rather, his conclusion is "then rationality doesn't mean what one of your other premises says it means". That is not saying P and concluding P; it is saying P and concluding something logically equivalent to P.
↑ comment by TheAncientGeek · 2015-05-12T17:22:48.395Z · LW(p) · GW(p)
These conclusions must, by definition, be logically implied by the original hypothetical or nobody would be able to make them, so you can describe them as being equivalent to P.
But that would be a misleading description.
Replies from: Jiro↑ comment by Jiro · 2015-05-12T17:28:46.352Z · LW(p) · GW(p)
Of course it's a misleading description, that's my point. RK said that OW's post was "Suppose P would be true? Then P would be true!" His reason for saying that, as far as I could tell, is that the conclusions of the hypothetical were logically implied by the hypothetical. I don't buy that.
↑ comment by Miguelatron · 2015-05-15T13:59:56.985Z · LW(p) · GW(p)
While the MoR example is a good one, don't bother defending Eliezer's response to the linked post. "Something bad is now arbitrarily good, what do you do?" is a poor strawman to counter "Two good things are opposed to each other in a trade space, how do you optimize?"
Don't get me wrong, I like most of what Eliezer has put out here on this site, but it seems that he gets wound up pretty easily and off the cuff comments from him aren't always as well reasoned as his main posts. To allow someone to slide based on the halo effect on a blog about rationality is just wrong. Calling people out when they do something wrong - and being civil about it - is constructive, and let's not forget it's in the name of the site.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-05-15T15:12:09.097Z · LW(p) · GW(p)
"Something bad is now arbitrarily good, what do you do?" is a poor strawman to counter "Two good things are opposed to each other in a trade space, how do you optimize?"
OW's linked post still looks to me more like "Two good things are hypothetically opposed to each other because I arbitrarily say so."
↑ comment by OrphanWilde · 2015-05-12T15:37:44.370Z · LW(p) · GW(p)
If it isn't worth trying to persuade (whoever), he shouldn't have commented in the first place. There are -lots- of posts that go through Less Wrong. -That- one bothered him. Bothered him on a fundamental level.
As it was intended to.
I'll note that it bothered you too. It was intended to.
And the parallel is... apt, although probably not in the way that you think. I'm not Dumbledore, in this parallel.
As for his question? It's not meant for me. I wouldn't agonize over the choice, and no matter what decision I made, I wouldn't feel bad about it afterwards. I have zero issue considering the hypothetical, and find it an inelegant and blunt way of pitting two moral absolutes against one another in an attempt to force somebody else to admit to an ethical hierarchy. The fact that Eliezer himself described the baby eater hypothetical as one which must be fought is the intellectual equivalent of mining the road and running away; he, as far as I know, -invented- that hypothetical, and he's the one who set it up as the ultimate butcher block for non-utilitarian ethical systems.
"Some hypotheticals must be fought", in this context, just means "That hypothetical is dangerous". It isn't, really. It just requires giving up a single falsehood:
That knowing the truth always makes you better off. That that which can be destroyed by the truth, should be.
He already implicitly accepts that lesson; his endless fiction of secret societies keeping dangerous knowledge from the rest of society demonstrates this. The truth doesn't always make things better. The truth is a very amoral creature; it doesn't care whether things are made better or worse, it just is. To call -that- a dangerous idea is just stubbornness.
Not to say there -isn't- danger in that post, but it is not, in fact, from the hypothetical.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-05-12T18:58:06.562Z · LW(p) · GW(p)
-That- one bothered him. Bothered him on a fundamental level.
Ah. People disagreeing prove you right.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-05-12T19:11:08.068Z · LW(p) · GW(p)
We may disagree about what it means to "disagree".
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-05-12T19:41:39.294Z · LW(p) · GW(p)
We may disagree about what it means to "disagree".
Eliezer's complete response to your original posting was:
Would you kill babies if it was intrinsically the right thing to do? If not, under what other circumstances would you not do the right thing to do? If yes, how right would it have to be, for how many babies?
EDIT IN RESPONSE: My intended point had been that sometimes you do have to fight the hypothetical.
This, you take as evidence that he is "bothered on a fundamental level", and you imply that this being "bothered on a fundamental level", whatever that is, is evidence that he is wrong and should just give up the "single falsehood" that truth is desirable.
This is argument by trying to bother people and claiming victory when you judge them to be bothered.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-05-12T20:21:44.839Z · LW(p) · GW(p)
Since my argument in this case is that people can be "bothered", then yes, it would be a victory.
However, since as far as I know Eliezer didn't claim to be "unbotherable", that doesn't make Eliezer wrong, at least within the context of that discussion. Eliezer didn't disagree with me, he simply refused the legitimacy of the hypothetical.
↑ comment by TheAncientGeek · 2015-05-12T17:00:22.899Z · LW(p) · GW(p)
I've noticed that problem, but I think it is a bit dramatic to call it rationality-breaking. I think it's more a problem of calling two things, the winning thing and the truth-seeking thing, by one name.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-05-12T17:27:39.317Z · LW(p) · GW(p)
Do you really think there's a strong firewall in the minds of most of this community between the two concepts?
More, do you think the word "rationality", in view of the fact that the word that happens to refer to two concepts which are in occasional opposition, makes for a mentally healthy part of one's identity?
Eliezer's sequences certainly don't treat the two ideas as distinct. Indeed, if they did, we'd be calling "the winning thing" by its proper name, pragmatism.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2015-05-12T17:46:40.525Z · LW(p) · GW(p)
More, do you think the word "rationality", in view of the fact that the word that happens to refer to two concepts which are in occasional opposition, makes for a mentally healthy part of one's identity?
Which values am I supposed to answer that by? Obviously it would be bad by epistemic rationality, but it keeps going because instrumental rationality brings benefits to people who can create a united front against the Enemy.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-05-12T17:56:01.284Z · LW(p) · GW(p)
That presumes an enemy. If deliberate, the most likely candidate for the enemy in this case, to my eyes, would be the epistemological rationalists themselves.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2015-05-12T18:13:58.345Z · LW(p) · GW(p)
I was thinking of the fundies
↑ comment by ChristianKl · 2015-05-11T16:25:40.613Z · LW(p) · GW(p)
if rationalism requires atheism
I don't think that's argued. It's also worth noting that the majority of MIRI's funding over its history comes from a theist.
↑ comment by Luke_A_Somers · 2015-05-12T00:19:38.546Z · LW(p) · GW(p)
Well...
QM: Having strong positive beliefs on the subject would be not-even-wrong. Ruling out some interpretations is much less so, and that's what he did. Note, I came to the same conclusion long before.
MIRI: It's not uncritically accepted on LW more than you'd expect given who runs the joint.
Identity: If you're not letting it trap you by thinking it makes you right, if you're not letting it trap you by thinking it makes others wrong, then what dangers are you thinking of? People will get identities. This particular one seems well-suited to mitigating the dangers of identities.
Others: more clarification required
↑ comment by ChristianKl · 2015-05-11T16:21:16.788Z · LW(p) · GW(p)
Rationalism as part of identity (aspiring rationalist) is kind of dangerous.
I think there's plenty of criticism voiced about that concept on LW, and there are articles advocating keeping one's identity small.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-05-11T16:37:07.427Z · LW(p) · GW(p)
And yet...
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-11T16:56:28.516Z · LW(p) · GW(p)
From time to time people use the label aspiring rationalist but I don't think a majority of people on LW do.
↑ comment by OrphanWilde · 2015-05-11T19:26:15.211Z · LW(p) · GW(p)
Depends on how you decide what truth is, and what qualifies it to be "unearthed."
But for one universe in which some truth, for some value of truth, can be unearthed, for some value of unearthed, while other truth can't be:
Imagine a universe in which exactly 12.879% of all matter is a unique kind of matter that shares no qualities in common with any other matter, is almost entirely nonreactive with all other kinds of matter, and was created by a process not shared in common with any other matter, a process which had no effect whatsoever on any other matter. Any truths about this matter, including its existence and the percentage of the universe composed of it, would be completely non-observational. The only reaction this matter has with any other matter is when it is in a specific configuration which requires extremely high levels of the local equivalent of negative entropy, at which point it emits a single electromagnetic pulse. This was used once by an intelligent species composed of this unique matter, who then went on to die in massive wars, to encode in a series of flashes of light every detail they knew about physics. The flashes were observed by one human-equivalent monk ascetic, who used a language similar to Morse code to write down the sequence of pulses, which he described as a holy vision. Centuries later, these pulses were translated into mathematical equations which described the unique physics of this concurrent universe of exotic matter, but with no mechanism for proving the existence or nonexistence of this exotic matter, save that the equations are far beyond the mathematics of anyone alive at the time the signal was encoded; it has become a controversial matter whether or not it was an elaborate hoax by a genius.
↑ comment by ChristianKl · 2015-05-11T12:33:49.237Z · LW(p) · GW(p)
What do you mean with "Rationalism"?
The LW standard definition is that it's about systematized winning. If the Matrix overlords punish everybody who tries to do systematized winning, then it's bad to engage in it. Especially when the Matrix overlords do it via mind reading. The Christian God might see it as a sin.
If you don't use the LW definition of rationalism, then rationalism and empiricism are not the same thing. Rationalism generally refers to gathering knowledge by reasoning as opposed to gathering it by other ways such as experiments or divine revelation.
they will tell you Rationalism can only find some but not all truths
Gödel did prove that it's impossible to find all truths. This website is called Lesswrong because it's not about learning all truths but just about becoming less wrong.
Replies from: ike, Douglas_Knight↑ comment by ike · 2015-05-11T14:54:31.866Z · LW(p) · GW(p)
Gödel did prove that it's impossible to find all truths.
That's misleading. With a finite amount of processing power/storage/etc, you can't find all proofs in any infinite system. We need to show that short truths can't be found, which is a bit harder.
Replies from: Houshalter, ChristianKl↑ comment by Houshalter · 2015-05-14T06:15:41.551Z · LW(p) · GW(p)
I don't think that's correct. My best understanding of Godel's theorem is that if your system of logic is powerful enough to express itself, then you can create a statement like "this sentence is unprovable". That's pretty short and doesn't rely on infiniteness.
Replies from: ike↑ comment by ike · 2015-05-14T13:32:23.944Z · LW(p) · GW(p)
The statement "this sentence is unprovable" necessarily includes all information on how to prove things, so it's always larger than your logical system. It's usually much larger, because "this sentence" requires some tricks to encode.
To see this another way, the halting problem can be seen as equivalent to Godel's theorem. But it's trivially possible to have a program of length X+C that solves the halting problem for all programs of length X, where C is a rather low constant; see https://en.wikipedia.org/wiki/Chaitin's_constant#Relationship_to_the_halting_problem for how.
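For readers curious how the X+C bound works, here is a sketch of the standard counting argument behind it (not necessarily the exact construction on the linked page): hard-code the number of programs of length at most X that halt, then dovetail all of them until that many have halted; anything still running at that point can never halt. In Python-flavoured form, with `programs` and `step` as hypothetical stand-ins for an enumeration of programs and a step-bounded interpreter:

```python
def decide_halting(programs, n_halting, step):
    """Given all programs of length <= X, the hard-coded count n_halting of
    those that halt (about X bits of advice, hence the +C overhead), and a
    hypothetical step(p, n) that runs p for n steps and reports whether it
    has halted, return a dict mapping each program to whether it halts."""
    halted = set()
    n = 0
    while len(halted) < n_halting:
        n += 1                                  # dovetail: one more step for everyone
        for p in programs:
            if p not in halted and step(p, n):
                halted.add(p)
    # any program not halted by now can never halt, or n_halting would be exceeded
    return {p: (p in halted) for p in programs}
```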
Replies from: Houshalter↑ comment by Houshalter · 2015-05-15T09:25:38.857Z · LW(p) · GW(p)
I'm not sure how much space it would take to write down formally, and I'm not sure it matters. At worst it's a few pages, but not entire books, let alone some exponentially huge thing you'd never encounter in reality.
It's also not totally arbitrary axioms that would never be encountered in reality. There are reasons why someone might want to define the rules of logic within logic, and then 99% of the hard work is done.
But regardless, the interesting thing is that such an unprovable sentence exists at all. That it's not possible to prove all true statements with any system of logic. It's possible that the problem is limited to this single edge case, but for all I know these unprovable sentences could be everywhere. Or worse, it could be possible to prove them, and therefore possible to prove false statements.
I think the halting problem is related, but I don't see how it's exactly equivalent. In any case the halting problem workaround is totally impractical, since it would take multiple ages of the universe to prove the haltingness of a simple loop. That is, if you are referring to the limited-memory version; otherwise I'm extremely skeptical.
Replies from: ike↑ comment by ike · 2015-05-15T16:53:50.104Z · LW(p) · GW(p)
At worst it's a few pages, but not entire books, let alone some exponentially huge thing you'd never encounter in reality.
That's only if your logical system is simple. If you're a human, then the system you're using is probably not a real logical system, and is anyway going to be rather large.
I think the halting problem is related, but I don't see how it's exactly equivalent.
See http://www.solipsistslog.com/halting-consequences-godel/
↑ comment by ChristianKl · 2015-05-11T16:11:36.401Z · LW(p) · GW(p)
We need to show that short truths can't be found, which is a bit harder.
DeVliegendeHollander's post didn't speak about short truths but about all truths.
Replies from: ike↑ comment by ike · 2015-05-11T18:13:24.178Z · LW(p) · GW(p)
If we're talking about all truths, then a finiteness argument shows we can never get all truths, no need for Godel. Godel shows that given infinite computing power, we still can't generate all truths, which seems irrelevant to the question.
If we can prove all truths smaller than the size of the universe, that would be pretty good, and it isn't ruled out by Godel.
↑ comment by Douglas_Knight · 2015-05-11T17:32:54.956Z · LW(p) · GW(p)
Gödel did prove that it's impossible to find all truths.
While Gödel killed Hilbert's program as a matter of historical fact, it was Tarski who later proved the theorem that arithmetical truth is undefinable.
↑ comment by [deleted] · 2015-05-11T08:50:25.458Z · LW(p) · GW(p)
There's no guarantee we should be able to find any truths using any method. It's a miracle that the universe is at all comprehensible. The question isn't "when can't we learn everything?", it's "why can we learn anything at all?".
Replies from: Lumifer↑ comment by Lumifer · 2015-05-11T16:08:21.186Z · LW(p) · GW(p)
"why can we learn anything at all?"
Because entities which can't do not survive.
Replies from: CronoDAS, None, fubarobfusco↑ comment by CronoDAS · 2015-05-16T01:16:05.413Z · LW(p) · GW(p)
Counterexample: Plants. Do they learn?
Replies from: Lumifer↑ comment by Lumifer · 2015-05-16T01:30:41.703Z · LW(p) · GW(p)
Of course. Leaves turn to follow the sun, roots grow in the direction of more moist soil...
Replies from: CronoDAS, None↑ comment by CronoDAS · 2015-05-16T20:17:07.045Z · LW(p) · GW(p)
Is that really learning, or just reacting to stimuli in a fixed, predetermined pattern?
Replies from: None↑ comment by [deleted] · 2015-05-17T09:53:17.656Z · LW(p) · GW(p)
Does vaccination imply memory? Does being warned by another plant's volatile metabolites that a herbivore is attacking the population?
(Higher) plants are organized by principles very different from those of animals; there is a never-ending debate about what constitutes 'identity' in them. Without first deciding upon that, can one speak about learning? I don't think they have it, but their patterns of predetermined answers can be very specific.
↑ comment by [deleted] · 2015-05-11T21:27:34.775Z · LW(p) · GW(p)
That just pushes the question back a step. Why can any entity learn?
Replies from: tim, Richard_Kennaway, Lumifer↑ comment by Richard_Kennaway · 2015-05-16T06:02:43.410Z · LW(p) · GW(p)
Why can any entity learn?
A more useful question to ask would be "how do entities, in fact, learn?" This avoids the trite answer, "because if they didn't, we wouldn't be asking the question".
↑ comment by Lumifer · 2015-05-12T14:32:01.144Z · LW(p) · GW(p)
I think if we follow this chain of questions, what we'll find at the end (except for turtles, of course) is the question "Why is the universe stable/regular instead of utterly chaotic?" A similar question is "Why does the universe even have negentropy?"
I don't know any answer to these questions except for "That's what our universe is".
Replies from: None↑ comment by [deleted] · 2015-05-12T14:38:40.242Z · LW(p) · GW(p)
I suppose what I want to know is the answer to "What features of our universe make it possible for entities to learn?".
Which sounds remarkably similar to DeVliegendeHollander's question, perhaps with an implicit assumption that learning won't be present in many (most?) universes.
Replies from: Lumifer↑ comment by fubarobfusco · 2015-05-11T20:55:00.124Z · LW(p) · GW(p)
For that matter, a world in which it is impossible for an organism to become better at surviving by modeling its environment (i.e. learning) is one in which intelligence can't evolve.
(And a world in which it is impossible for one organism to be better at surviving than another organism, is one in which evolution doesn't happen at all; indeed, life wouldn't happen.)
comment by DataPacRat · 2015-05-14T04:37:57.333Z · LW(p) · GW(p)
Acting on A Gut Feeling
I've been planning an overnight camping trip for sometime this week; but something about the idea is making me feel... disquiet. Uneasy. I can't figure out why; I've got a nice set of equipment, I have people who know where I'm going, and so on. But I can't shake something resembling an "ugh field" that eases when I think of /not/ taking the trip.
And so, I'm concluding that the rational thing to do is to pay attention to my gut, on the chance that one part of my mind is aware of some detail that the rest of my mind hasn't figured out, and postpone my camping trip until I'm feeling more self-assured about the whole thing.
Replies from: Dorikka, wadavis, DataPacRat↑ comment by wadavis · 2015-05-14T22:02:38.778Z · LW(p) · GW(p)
It is because you forgot to pack TP. Bring TP and things will be ok.
Replies from: DataPacRat↑ comment by DataPacRat · 2015-05-14T22:11:08.111Z · LW(p) · GW(p)
:)
Don't worry, I've got the essentials. And enough luxuries, like a folding solar panel, that I could head out for a week or more, if I were so inclined, and bought an upgrade to my cellphone dataplan.
Considering it from various perspectives: a trip to some nearby city, staying at an Airbnb or hotel, raises more interest than disquiet; so it seems to be something about going camping, rather than taking a trip, that is bothering me. An imagined day-hike only raises questions about transportation, not unease, so it seems to be something about overnighting. Cooking? Water source? Sleeping? First-aid kit? Emergency plans in case of zombie outbreak (or more probable disasters)? I can't quite put my finger on it.
And since almost the whole point of such a trip is to /improve/ my psychological condition by the end of it, I'm starting to feel a tad annoyed at myself for being less than clear about my own motivations. :P
↑ comment by DataPacRat · 2015-05-14T06:34:11.375Z · LW(p) · GW(p)
After some further mental gymnastics, the plan I've come up with which seems to most greatly reduce the disquiet is to buy a backup cellphone, small enough to turn off, stick in a pocket and forget about until I drop my smartphone in a stream. Something along the lines of taking one of the watchphones from http://www.dx.com/s/850%2b1900?category=529&PriceSort=up and snipping off the wristband, or one of the smaller entries in http://www.dx.com/s/850%2b1900?PriceSort=up&category=531 ; along with the $25/year plan from http://www.speakout7eleven.ca/ . Something on the order of $65 to $85 seems a moderate price for peace of mind.
I am, however, going to take at least a day before placing any such order, to find out if such a plan still seems like it /will/ offer increased peace of mind. Not to mention, whether I can come up with (or get suggestions for) any plans which reveal that my actual disquiet arises from some other cause.
Replies from: Miguelatron↑ comment by Miguelatron · 2015-05-15T14:16:32.152Z · LW(p) · GW(p)
Have you gone camping like this before? If you have, were you by yourself when you did? I'm just trying to eliminate the source of your unease being something simple like stepping out of your comfort zone.
Replies from: DataPacRat↑ comment by DataPacRat · 2015-05-15T17:52:26.867Z · LW(p) · GW(p)
I have, indeed, gone camping like this before, though it's been a few years since I've done anything solo. The last few times I've gone camping has been with a relative to campgrounds with showers and such amenities, as opposed to solo in a conservation area or along a trail, which is/was my goal for my next hike. My original motivation for the overnighter was to make sure I hadn't forgotten anything important about soloing, and that all my gear's ready for longer trips.
I'm in the general Niagara area, and the city papers laud the local rescue teams whenever a tourist needs to get pulled out of the Niagara gorge, so as long as I can dial 911, I should be able to get rescued from any situation I get myself in that's actually worth all this worrying about. The particular spot I'm thinking of going to (43.0911426, -79.284342) is roughly an hour's walk from a city bus stop - half an hour's walk from where I could wave to frequently passing cars, if my phone's dead.
My plans for this whole trip have been to make it as simple and easy as possible. Amble down some trails for an hour or two, hang my hammock, cook my dinner, read my ebook, and amble on out the next day, enjoying the peace and quiet and so on. It's the smallest step I can think of beyond camping in a backyard - and since I don't have a backyard, it's pretty much as far within my comfort zone as any camping could be. If /that's/ now outside my comfort zone... then I've got a trunk full of camping gear that's suddenly a lot less useful to me.
Replies from: Miguelatron↑ comment by Miguelatron · 2015-05-17T13:52:36.081Z · LW(p) · GW(p)
Sounds like it will be a blast. The nerves may just be from going solo, then. It sounds like you know what you're about, though, so I'd just override any trepidation and go for it.
I did something similar a few weeks ago (admittedly with some friends). We were probably 40 miles from anywhere we could flag down a car, and hiked into the woods several miles along the trail. My backpack broke inside the first mile, one of my friends slipped and fell into a stream, there were coyotes in the camp at night, and of course it rained. We all made it out sleepy, sore and soggy the next day, but definitely felt better for having gone. Would do again.
You'll have a good time, no worries.
comment by advancedatheist · 2015-05-11T00:48:52.459Z · LW(p) · GW(p)
Transhumanism-related blog posts:
In Praise of Life (Let’s Ditch the Cult of Longevity)
Overcoming Bias: Why Not?
http://futurisms.thenewatlantis.com/2015/05/overcoming-bias-why-not.html
Also noteworthy:
Prepping for cataclysms, neglecting ordinary emergencies
http://akinokure.blogspot.com/2015/05/prepping-for-cataclysms-neglecting.html
Interesting books:
A cryonics novel:
The New World: A Novel Hardcover – May 5, 2015 by Chris Adrian (Author), Eli Horowitz (Author)
http://www.amazon.com/New-World-Novel-Chris-Adrian/dp/0374221812
Futurology, from the looks of it:
Tomorrowland: Our Journey from Science Fiction to Science Fact Paperback – May 12, 2015 by Steven Kotler (Author)
http://www.amazon.com/Tomorrowland-Journey-Science-Fiction-Fact/dp/0544456211/
Cryonics news:
Another of cryonics' founding generation goes into cryo, though under really bad circumstances.
Dr. Laurence Pilgeram becomes Alcor’s 135th patient on April 15, 2015
http://www.amazon.com/Tomorrowland-Journey-Science-Fiction-Fact/dp/0544456211/
Replies from: ZankerH, Richard_Kennaway, Error↑ comment by ZankerH · 2015-05-11T06:31:50.795Z · LW(p) · GW(p)
Despite medical and police personnel aware of his Alcor bracelet, he was taken to the medical examiner’s office in Santa Barbara, as they did not understand Alcor’s process and assumed that the circumstances surrounding his death would pre-empt any possible donation directives. Since this all transpired late on a Friday evening, Alcor was not notified of the incident until the following Monday morning.
How the hell are they treating this as a successful preservation? The body spent two days "warm and dead".
Looking at their past case reports, this seems to be fairly normal. Unless you're dying of a known terminal condition and go die in their hospice in Arizona, odds are the only thing getting frozen is a mindless, decaying corpse.
Replies from: advancedatheist↑ comment by advancedatheist · 2015-05-11T14:32:07.173Z · LW(p) · GW(p)
Cryonicist Ben Best has put a lot of effort into studying and testing personal alarm gadgets you can wear which signal cardiac arrest to try to reduce the incidence of these unattended deanimations and long delays before cryopreservation. I plan to look into those myself.
Ironically, I've noticed that cryonicists talk a lot about how much they believe in scientific, medical and technological progress, but then they don't seem to want to act on it when you present them with evidence of the correctable deficiencies of real, existing cryonics.
Reference:
Personal Alarm Systems for Cryonicists
↑ comment by Richard_Kennaway · 2015-05-11T07:43:13.095Z · LW(p) · GW(p)
In Praise of Life (Let’s Ditch the Cult of Longevity)
That article would be better titled "In Praise of Death", and is a string of the usual platitudes and circularities.
Overcoming Bias: Why Not?
Why not? Because (the article says) rationalists are cold, emotionless Vulcans, and valuing reason is a mere prejudice.
Prepping for cataclysms, neglecting ordinary emergencies
Maybe there are people who do that, but the article is pure story-telling, without a single claim of fact. File this one under "fiction".
A cryonics novel:
The New World: A Novel Hardcover – May 5, 2015 by Chris Adrian (Author), Eli Horowitz (Author)
The previous links scored 0 out of 3 for rational content, so coming to this one, I thought, what am I likely to find? Clearly, the way to bet is that it's against cryonics. There's only about a blogpost's worth of story in the idea of corpsicles just being unrevivable, so the novel will have to have revival working, but either it works horribly badly, or the revived people find themselves in a bad situation.
Click through...and I am, I think, pleasantly surprised to find that it might, in the end, be favourable to the idea. Or maybe not, there are no reviews and it's difficult to tell from the blurb:
Furious and grieving, Jane fights to reclaim Jim from Polaris [the "shadowy" cryonics company]. Revived in the future, Jim learns that he must sacrifice every memory of Jane if he wants to stay alive in the new world.
Spoiler request! How does it play out in the end?
Tomorrowland: Our Journey from Science Fiction to Science Fact Paperback – May 12, 2015 by Steven Kotler (Author)
Yep, futurological journalism. Pass.
Another of cryonics' founding generation goes into cryo, though under really bad circumstances.
Dr. Laurence Pilgeram becomes Alcor’s 135th patient on April 15, 2015
Shit happens.
Replies from: advancedatheist, CAE_Jones, passive_fist↑ comment by advancedatheist · 2015-05-11T14:42:14.912Z · LW(p) · GW(p)
I know "preppers" in Arizona who don't have any savings because they have spent all their money on this survivalist nonsense. They would do better to have put that money in the bank and applied for subsidized health insurance.
The blogger agnostic does have a point about how the prepper mentality shows an abandonment of wanting to produce for and sustain the existing society, so that instead you can position yourself to become a scavenger and a parasite on the wealth produced by others if some apocalyptic collapse happens. That ridiculous Walking Dead series, which amounts to nonstop prepper porn, feeds some very damaging fantasies that I don't think we should encourage.
↑ comment by CAE_Jones · 2015-05-11T10:29:11.524Z · LW(p) · GW(p)
In Praise of Life (Let’s Ditch the Cult of Longevity)
That article would be better titled "In Praise of Death", and is a string of the usual platitudes and circularities.
I'm now curious: where are the essays that make actual arguments in favor of death? The linked article doesn't make any; it just asserts that death is OK and we're being silly for fighting it, without actually providing a reason (they cite Borges's dystopias at the end, but this paragraph has practically nothing in common with the rest of the article, which seems to assume immortality is impossible anyway).
Preference goes to arguments against Elven-style immortality (resistant but not completely immune to murder or disaster, suicide is an option, age-related disabilities are not a thing).
Replies from: jkaufman, None↑ comment by jefftk (jkaufman) · 2015-05-11T15:42:52.640Z · LW(p) · GW(p)
Here's my argument for why death isn't the supreme enemy: http://www.jefftk.com/p/not-very-anti-death
Replies from: Lumifer↑ comment by Lumifer · 2015-05-11T16:23:52.678Z · LW(p) · GW(p)
I have a feeling a lot of discussions of life extension suffer from being conditioned on the implicit set point of what's normal now.
Let's imagine that humans are actually replicants and their lifespan runs out in their 40s. That lifespan has a "control dial" and you can turn it to extend the human average life expectancy into the 80s. Would all your arguments apply and construct a case against meddling with that control dial?
Replies from: Kawoomba, jkaufman↑ comment by Kawoomba · 2015-05-11T16:39:15.390Z · LW(p) · GW(p)
That's a good argument if you were to construct the world from first principles. You wouldn't get the current world order, certainly. But just as arguments against, say, nation-states, or multi-national corporations, or what have you, do little to dissuade believers, the same applies to let-the-natural-order-of-things-proceed advocates. Inertia is what it's all about. The normative power of the present state, if you will. Never mind that "natural" includes antibiotics, but not gene modification.
This may seem self-evident, but what I'm pointing out is that by saying "consider this world: would you still think the same way in that world?" you'd be skipping the actual step of difficulty: overcoming said inertia, leaving the cozy home of our local minimum.
Replies from: Lumifer↑ comment by Lumifer · 2015-05-11T16:56:00.965Z · LW(p) · GW(p)
Inertia is what it's all about. The normative power of the present state, if you will.
That's fine as long as you understand it and are not deluding yourself with a collection of reasons why this cozy local minimum is actually the best ever.
The considerable power wielded by inertia should be explicit.
↑ comment by jefftk (jkaufman) · 2015-05-16T02:23:27.222Z · LW(p) · GW(p)
Huh? It feels like you're responding to a common thing people say, but not to anything I've said (or believe).
Replies from: Lumifer↑ comment by Lumifer · 2015-05-16T02:40:59.123Z · LW(p) · GW(p)
I meant this as a response specifically to
But dramatically fewer children? Much less of the total human experience spent in early learning stages? Would we become less able to make progress in the world because people have trouble moving on from what they first learned?
Replies from: jkaufman
↑ comment by jefftk (jkaufman) · 2015-05-18T12:00:59.131Z · LW(p) · GW(p)
More context:
A world in which we have ended death ... may be better than the world now, but I could also see it being worse. On one hand, not having to see your friends and family die, increased institutional memory, more time to get deeply into subjects and achieve mastery, and time to really build up old strong friendships sound good. But dramatically fewer children? Much less of the total human experience spent in early learning stages? Would we become less able to make progress in the world because people have trouble moving on from what they first learned?
I don't think our current lifespan is the perfect length, but there's a lot of room between "longer is probably better" and "effectively unlimited is ideal".
Replies from: Lumifer↑ comment by Lumifer · 2015-05-18T15:46:27.245Z · LW(p) · GW(p)
there's a lot of room between "longer is probably better" and "effectively unlimited is ideal".
Yes, but are you saying there's going to be a maximum somewhere in that space -- some metric will flip over and start going down? What might that metric be?
Replies from: jkaufman↑ comment by jefftk (jkaufman) · 2015-05-20T14:40:18.858Z · LW(p) · GW(p)
As I wrote in that post, there are some factors that lead to us thinking longer lives would be better, and others that shorter would be better.
Maybe this is easier to think about with a related question: what is the ideal length of tenure at a company? Do companies do best when they have only employees-for-life, or is it helpful to have some churn? (Ignoring that people can come in with useful relevant knowledge they got working elsewhere.) Clearly too much churn is very bad for the company, but introducing new people to your practices and teaching them helps you adapt and modernize, while if everyone has been there forever it can be hard to make adjustments to changing situations.
The main issue is that people tend to fixate some on what they learn when they're younger, so if people get much older on average then it would be harder to make progress.
Replies from: Lumifer↑ comment by Lumifer · 2015-05-20T15:21:21.401Z · LW(p) · GW(p)
what is the ideal length of tenure at a company?
A rather important question here is what's "ideal" and from whose point of view? From the point of the view of the company, sure, you want some churn, but I don't know what the company would correspond to in the discussion of the aging of humanity. You're likely thinking about "society", but as opposed to companies societies do not and should not optimize for profit (or even GDP) at any cost. It's not that hard to get to the "put your old geezers on ice floes and push them off into the ocean" practices.
The main issue is that people tend to fixate some on what they learn when they're younger, so if people get much older on average then it would be harder to make progress.
That's true; as a paraphrase of Max Planck puts it, "Science advances one funeral at a time".
However it also depends on what does "live forever" mean. Being stabilized at the biological age of 70 would probably lead to very different consequences from being stabilized at the biological age of 25.
Replies from: jkaufman↑ comment by jefftk (jkaufman) · 2015-05-20T18:27:25.402Z · LW(p) · GW(p)
Being stabilized at the biological age of 70 would probably lead to very different consequences from being stabilized at the biological age of 25.
This probably also depends a lot on the particulars of what "stabilized at the biological age of 25" means. Most 25 year-olds are relatively open to experience, but does that come from being biologically younger or just having had less time to become set in their ways?
This also seems like something that may be fixable with better pharma technology if we can figure out how to temporarily put people into a more childlike exploratory open-to-experience state.
Replies from: Lumifer, OrphanWilde↑ comment by Lumifer · 2015-05-20T19:13:01.556Z · LW(p) · GW(p)
does that come from being biologically younger or just having had less time to become set in their ways?
I think humans are sacks of chemicals to a much greater degree than most of LW believes. As a simple example, note that injections of testosterone into older men tend to change their personality quite a bit.
I don't know if being less open to new experiences is purely a function of the underlying hardware, but it certainly is to a large extent a function of physiology, hormonal balance, etc.
fixable with better pharma technology
I hope you realize you're firmly in the "better living through chemistry" territory now.
if we can figure out how to temporarily put people into a more childlike exploratory open-to-experience state.
The idea of putting LSD into the public water supply is not a new one :-)
↑ comment by OrphanWilde · 2015-05-20T18:58:55.836Z · LW(p) · GW(p)
This also seems like something that may be fixable with better pharma technology if we can figure out how to temporarily put people into a more childlike exploratory open-to-experience state.
Anecdotally, LSD.
↑ comment by passive_fist · 2015-05-11T22:20:54.032Z · LW(p) · GW(p)
Just a PSA: advancedatheist has a fixation on dehumanizing rationalists with an especial focus on rationalists 'not being able to get laid'. Here's some of his posts on this matter:
http://lesswrong.com/lw/lzb/open_thread_apr_01_apr_05_2015/c7gr
http://lesswrong.com/lw/m4h/when_does_technological_enhancement_feel_natural/cc09
http://lesswrong.com/lw/m1p/open_thread_apr_13_apr_19_2015/cams
http://lesswrong.com/lw/dqz/a_marriage_ceremony_for_aspiring_rationalists/72wr
It's best not to 'feed the trolls', so to speak.
Replies from: knb, Luke_A_Somers, Richard_Kennaway↑ comment by knb · 2015-05-12T03:13:30.769Z · LW(p) · GW(p)
So why lash out at him for this now, when he isn't currently doing that? In any case I don't think he was trolling (deliberately trying to cause anger) so much as he was just morbidly fixated on a topic and couldn't stop bringing it up.
Replies from: passive_fist↑ comment by passive_fist · 2015-05-12T03:20:39.702Z · LW(p) · GW(p)
I'm pointing it out for the benefit of others who may not understand where AA is coming from.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2015-05-12T04:14:55.406Z · LW(p) · GW(p)
I recommend responding to whatever specific problematic things he might say rather than issuing a general warning.
Replies from: passive_fist↑ comment by passive_fist · 2015-05-12T05:01:23.982Z · LW(p) · GW(p)
I am responding to quite specific problematic things he's saying. My comment is in response to AAs and is in reply to a reply to his comment. If I were to directly reply to him saying the same thing, my intentions would probably be misunderstood.
Replies from: philh↑ comment by philh · 2015-05-12T09:03:32.150Z · LW(p) · GW(p)
Another thing AA seems to do quite a lot is link to pro-death blog posts and articles that he doesn't endorse. I get the impression that's what he was doing with some of the above links. IIRC he's signed up for cryonics, so it seems unlikely that he's trying to push a pro-death agenda.
Replies from: Richard_Kennaway, OrphanWilde↑ comment by Richard_Kennaway · 2015-05-12T12:20:12.491Z · LW(p) · GW(p)
IIRC he's signed up for cryonics, so it seems unlikely that he's trying to push a pro-death agenda.
So, AA, if you're reading down here, why are you signed up for cryonics while posting pro-death links and complaining at length about never getting laid? Optimism for a hereafter, despair for the present, and bitterness for the past. This is not a good conjunction.
↑ comment by OrphanWilde · 2015-05-14T14:31:32.932Z · LW(p) · GW(p)
Maybe he just sees value in challenging the status quo?
Replies from: philh↑ comment by Luke_A_Somers · 2015-05-12T00:07:14.112Z · LW(p) · GW(p)
That's a weirdly weak collection of posts to complain about. It seems more like AA is noting his OWN lack of ability to get laid and has a degree of curiosity on the subject that would naturally result from such a situation. He also (correctly, I expect) anticipates that a noticeable number of people who are or have been in the same boat as him are on LW.
I have seen some really obnoxious posts by AA, but these don't strike me as great examples. I am not about to go digging for them.
Replies from: passive_fist↑ comment by passive_fist · 2015-05-12T02:20:56.896Z · LW(p) · GW(p)
Oh, I agree, he has made much stronger posts; like you I just didn't have the time to search for all of them.
↑ comment by Richard_Kennaway · 2015-05-12T12:10:50.238Z · LW(p) · GW(p)
Just a PSA: advancedatheist has a fixation on dehumanizing rationalists with an especial focus on rationalists 'not being able to get laid'.
I've noticed. While it certainly informs my attitude to everything he posts, he is mostly still at the level of worth responding to.
↑ comment by Error · 2015-05-11T01:31:17.041Z · LW(p) · GW(p)
Just FYI, it looks like you goofed that last link.
Replies from: None, Dorikka↑ comment by [deleted] · 2015-05-11T07:23:43.382Z · LW(p) · GW(p)
Should be http://www.alcor.org/blog/dr-laurence-pilgeram-becomes-alcors-135th-patient-on-april-15-2015/
comment by [deleted] · 2015-05-15T11:45:02.368Z · LW(p) · GW(p)
Would this work at least as an early crude hypothesis of how neurotypicals function?
Neurotypicals like social mingling primarily because they play a constant game of social status points, both in the eyes of others (that is real status) and just feeling like getting status (this is more like self-esteem). This should not be understood as a harsh, machiavellian, cruel game. Usually it is not. Often it is very warm and friendly. For example, we on the spectrum often find things like greeting each other superfluous, a needless custom. You notice when people arrive or not; they will talk to you when they want something. But for neurotypicals, exchanging warm greetings makes sense: it is a mutual reassurance or reinforcement of each other's status. Sometimes people will ignore someone's greeting, or even refuse to shake an extended hand; that is seen as a rude move to reduce the other's status, as it is embarrassing. It is a rudely dominant move they only pull if they are angry at each other. More often, though, there is some community or group, like a Toastmasters meeting or the local Linux user club, and someone arrives and hands out a warmish greeting to the whole group but gets only nods in return, or a very short hi. This usually means "you are an outsider, a newcomer, not fully accepted in this community yet, so while we don't reduce your status we are cautious about affirming it either; we stay noncommittal until you prove yourself". But then again, someone will often be very warm to the newcomer, because acting as a mentor of noobs raises one's status inside the group. It is a friendly, helpful but clearly dominant move over the newcomer ("let me show you the ropes here" is the subconscious message); it also reaffirms one as a central member of the group, since people who are themselves newbies would not do it, and as new people keep joining the group, the people who mentored them can slowly drift into leadership.
But greetings are just a simple example of the many ways neurotypicals enjoy playing status games. Again, it is not a harsh thing; it is often very warm. But their constant social mingling, their constant "purring" with each other, is nothing but a set of microtransactions in status: all the small talk, plus the body language, etc.
To enjoy gaining a score is a general human trait; even we on the spectrum enjoy getting on the high score table in arcade games, levelling up in an RPG video game, or becoming a thane in Skyrim. We just don't notice how neurotypicals keep doing this all the time. All the social niceties boil down to handing out a micropayment of positive status, and any tiny gesture of kindness acts as one. The basic small talk of two random people meeting at a garden party will be feeling out each other's status as a first step, getting liked by micropaying status ("Wow, that sounds like you have an interesting job!") in exchange for being liked, or raising your own status with a bit of boasting, etc. And that is the nicer part; the uglier part is when people try to reduce each other's scores, and that is where bad blood comes from.
- Am I on a remotely right track here?
- Does the statement "nerds / neckbeards often have poor social skills" unpack into "people on the spectrum not even noticing that neurotypicals don't just mindlessly follow social customs, but they are involved in a status micropayment exchange" ?
- Any article or book that would help me take this line of thought further?
↑ comment by Zubon · 2015-05-15T15:45:45.156Z · LW(p) · GW(p)
- Am I on a remotely right track here?
That sounds like a subset of neurotypical behavior. I'm neurotypical and from the very first sentence ("Neurotypicals like social mingling primarily because they play a constant game of social status points, both in the eyes of others (that is real status) and just feeling like getting status (this is more like self-esteem).") I found it contrary to my experience. Which is not to say it is wrong, and it certainly looks like behavior I have seen, but it kind of suggests that there is One Neurotypical Experience as opposed to a spectrum.
That is reading the initial "neurotypicals" as "all/most neurotypicals" as opposed to "some neurotypicals" or "some subset of neurotypicals." I think you are trying to describe typical neurotypical behavior, so I would read that "neurotypicals" as trying to describe how most neurotypicals behave.
But I am not the most central example of a neurotypical, so others may find it a more accurate description of their social experiences. I don't like social mingling, and I avoid most games of social status points. My extroversion score is 7 out of 100, which is likely a factor in not seeing myself in your description of neurotypicals.
2. Does the statement "nerds / neckbeards often have poor social skills" unpack into "people on the spectrum not even noticing that neurotypicals don't just mindlessly follow social customs, but they are involved in a status micropayment exchange" ?
There seem to be several assumptions built into that unpacking. For example, it suggests that all/most nerds are on the spectrum. My characterization of neurotypical socialization would include mindlessly following social customs as well as enjoying the social game. I don't think highly social neurotypicals would describe their behavior as a "status micropayment exchange"; that seems like the wrong metaphor and suggests that the dominant model is a fixed-sum status game, whereas many (most?) social interactions have no need for an exchange of status points.
Even when a social status point game is in play, I would expect more interactions to involve the recognition of point totals rather than an exchange. "Mutual reassurance or reinforcement of each others status" seems on point.
If the above is the start of a hypothesis, it seems to me that it links greetings and status point exchange too strongly. Greetings are rarely an occasion to gain or lose points, although they may be occasions to discover the current score.
"Pinging" is a metaphor I have seen used productively in these attempts to explain neurotypical social behavior. The greeting is a ping, a mutual recognition that someone is there and potentially responsive to interaction, potentially also exchanging some basic status information.
↑ comment by ChristianKl · 2015-05-16T16:01:09.176Z · LW(p) · GW(p)
If you say Bob likes X because of Y, what do you mean by that? Do you mean that if Y weren't there, Bob wouldn't like X?
I don't think that there a good reason to believe that if you take status away no neurotically would engage in social mingling or like engaging in it.
Apart from that "status" is a word that's quite abstract. It's much more something "map" than "territory". That creates the danger of getting into too-vague-to-be-wrong territory.
Replies from: None↑ comment by [deleted] · 2015-05-18T08:29:22.108Z · LW(p) · GW(p)
Apart from that "status" is a word that's quite abstract. It's much more something "map" than "territory".
Let's get more meta here. Usually the map-territory distinction is used to describe how human minds interpret the chunks of reality that are not man-made. When we are talking about something that arises from the behavior of humans, how can we draw that distinction? Is Plato's classic "What is justice?" map or territory? Here the territory is in human minds too, as justice exists only inside minds and nowhere else, so the distinction seems to be more like: is it the grand shared map, or a more private map of maps? And the same goes for status. It does not exist outside the human perception of it. Similar to money, especially paper/computer-number money.
I don't think that there a good reason to believe that if you take status away no neurotically would engage in social mingling or like engaging in it.
I will consider it a typo, assuming you meant neurotypicals like I did i.e. people outside the autism spectrum, or in other words non-geeks. I got the idea from here. If status microtransactions are so important...
(to be continued gotta go now)
Replies from: ChristianKl↑ comment by ChristianKl · 2015-05-18T12:46:45.940Z · LW(p) · GW(p)
If you look at the link you posted, it argues:
Status is a confusing term, unless it’s understood as something one does. You may be low in status, but play high, and vice versa. … We always like it when a tramp is mistaken for the boss, or the boss for a tramp. … I should really talk about dominance and submission, but I’d create a resistance.
That means that dominance and submission map more directly to the territory than status does.
The author doesn't argue that people care about mutually reinforcing each other's status as high, but that people also consciously make moves to submit and place themselves in a low-status position.
The text invalidates your idea that people engage in social interaction primarily to maximize status.
You don't pick that up if you make the error of treating status as reality rather than as a model. Reality is complex. Models simplify reality. Sometimes the simplification keeps the essential elements of what you want to describe. Other times it doesn't.
I will consider it a typo, assuming you meant neurotypicals like I did i.e. people outside the autism spectrum, or in other words non-geeks.
Yes, it's a typo, likely because my spellchecker didn't know "neurotypicals".
comment by advancedatheist · 2015-05-12T18:43:19.784Z · LW(p) · GW(p)
SOUTH FLORIDA CHURCH PURSUES ETERNAL LIFE THROUGH CRYONICS, INFLAMING CRITICS AND THE IRS
I freeze people's brains for a living
http://www.hopesandfears.com/hopes/city/what_do_you_do/213599-cryonics-interview