Open thread, 3-8 June 2014

post by David_Gerard · 2014-06-03T08:57:43.756Z · LW · GW · Legacy · 153 comments

Previous Open Thread:  http://lesswrong.com/r/discussion/lw/k9x/open_thread_may_26_june_1_2014/

(oops, we missed a day!)

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

 

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should start on Monday, and end on Sunday.

4. Open Threads should be posted in Discussion, and not Main.

153 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2014-06-03T17:12:44.874Z · LW(p) · GW(p)

I'm starting to maybe figure out why I've had such difficulties with both relaxing and working in recent years.

It feels like, for large parts of the time, my mind is constantly looking for an escape, though I'm not entirely sure what exactly it is trying to escape from. But it wants to get away from the current situation, whatever that happens to be: to become so engrossed in something that it forgets about everything else.

Unfortunately, this often leads to the opposite result. My mind wants that engrossment right now, and if it can't get it, it will flinch away from whatever I'm doing and into whatever provides an immediate reward. Facebook, forums, IRC, whatever gives that quick dopamine burst. That means that I have difficulty getting into books, TV shows, computer games: if they don't grab me right away, I'll start growing restless and be unable to focus on them. Even more so with studies or work, which usually require an even longer "warm-up" period before one gets into flow.

Worse, I'm often sufficiently aware of that discomfort that my awareness of it prevents the engrossment. I go loopy: I get uncomfortable about the fact that I'm uncomfortable, and then if I have to work or study, my focus is on "how do I get rid of this feeling" rather than on "what should I do next in this project". And then my mind keeps flinching away from the project, to anything that would provide a distraction, on to Facebook, to IRC, to whatever. And I start feeling worse and worse.

Some time back, I started experimenting with teaching myself not to have any goals. That is, instead of having a bunch of stuff I try to accomplish in some given time period, I simply try to be okay with doing absolutely nothing all day (or all week, or all year...), until a natural motivation to do something develops. This seems to help. So does mindfulness, as well as ensuring that my basic needs have been met: enough sleep and food, and some nice real-life social interaction every few days.

Anybody else recognize this?

Replies from: Kazuo_Thow, gothgirl420666, Vulture, None, NancyLebovitz
comment by Kazuo_Thow · 2014-06-04T20:05:05.388Z · LW(p) · GW(p)

I recognize this in myself and it's been difficult to understand, much less get under control. The single biggest insight I've had about this flinching-away behavior (at least the way it arises in my own mind) is that it's most often a dissociative coping mechanism. Something intuitively clicked into place when I read Pete Walker's description of the "freeze type". From The 4Fs: A Trauma Typology in Complex PTSD:

Many freeze types unconsciously believe that people and danger are synonymous, and that safety lies in solitude. Outside of fantasy, many give up entirely on the possibility of love. The freeze response, also known as the camouflage response, often triggers the individual into hiding, isolating and eschewing human contact as much as possible. This type can be so frozen in retreat mode that it seems as if their starter button is stuck in the "off" position. It is usually the most profoundly abandoned child - "the lost child" - who is forced to "choose" and habituate to the freeze response (the most primitive of the 4Fs). Unable to successfully employ fight, flight or fawn responses, the freeze type's defenses develop around classical dissociation, which allows him to disconnect from experiencing his abandonment pain, and protects him from risky social interactions - any of which might trigger feelings of being reabandoned. Freeze types often present as ADD; they seek refuge and comfort in prolonged bouts of sleep, daydreaming, wishing and right brain-dominant activities like TV, computer and video games. They master the art of changing the internal channel whenever inner experience becomes uncomfortable. When they are especially traumatized or triggered, they may exhibit a schizoid-like detachment from ordinary reality.

Of course, as with any other psychological condition, there's a wide spectrum: some people had wonderful childhoods full of safe attachment and always had somebody to model healthy processing of emotions for them, some people were utterly abandoned as children, and many more had something between those extremes. The key understanding I've gained from Pete Walker's writing is that simply being left alone with upsetting inner experience too often as a child can lead to the development of "freeze type" defenses, even in the absence of any overtly abusive treatment.

I suspect that using a combination of TV shows, games and web browsing as emotional analgesics (at various levels of awareness) is very common now in wealthy countries. This is one of the reasons I would like to see more discussion of emotional issues on Less Wrong.

Replies from: Richard_Kennaway, Metus, Kaj_Sotala
comment by Richard_Kennaway · 2014-06-05T22:13:23.496Z · LW(p) · GW(p)

I suspect that using a combination of TV shows, games and web browsing as emotional analgesics (at various levels of awareness) is very common now in wealthy countries. This is one of the reasons I would like to see more discussion of emotional issues on Less Wrong.

I would also like to see more such discussion, but, as with rationality, more from the viewpoint of rising above base level average than of recovering only to that level.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-06-06T06:58:41.576Z · LW(p) · GW(p)

Although on further thought, maybe that sort of discussion would have to happen somewhere other than LessWrong. Who can do for it the equivalent of Eliezer's writings here? Is there anywhere it is currently being done?

ETA: Brienne, and here might be answers to those questions.

comment by Metus · 2014-06-05T13:49:06.232Z · LW(p) · GW(p)

If people on LW put half the effort into emotional issues that they put into rationality topics, we'd be a whole lot further along. Thank you very much for this quote.

Any insight-explosion books I should read?

Replies from: Kazuo_Thow
comment by Kazuo_Thow · 2014-06-05T19:34:00.255Z · LW(p) · GW(p)

Complex PTSD: From Surviving To Thriving by Pete Walker focuses on the understanding that wounds from active abuse make up the outer layers of a psychological structure, the core of which is an experience of abandonment caused by passive neglect. He writes about self-image, food issues, codependency, fear of intimacy and generally about the long but freeing process of recovering.

As with physical abuse, effective work on the wounds of verbal and emotional abuse can sometimes open the door to de-minimizing the awful impact of emotional neglect. I sometimes feel the most for my clients who were “only” neglected, because without the hard core evidence – the remembering and de-minimizing of the impact of abuse – they find it extremely difficult to connect their non-existent self-esteem, their frequent flashbacks, and their recurring reenactments of impoverished relationships, to their childhood emotional abandonment. I repeatedly regret that I did not know what I know now about this kind of neglect when I wrote my book and over-focused on the role of abuse in childhood trauma.

The Drama of the Gifted Child by Alice Miller focuses more on the excuses and cultural ideology behind poor parenting. She grew up in an abusive household in 1920s-'30s Germany.

Contempt is the weapon of the weak and a defense against one's own despised and unwanted feelings. And the fountainhead of all contempt, all discrimination, is the more or less conscious, uncontrolled, and secret exercise of power over the child by the adult, which is tolerated by society (except in the case of murder or serious bodily harm). What adults do to their child's spirit is entirely their own affair. For a child is regarded as the parents' property, in the same way that the citizens of a totalitarian state are the property of its government. Until we become sensitized to the child's suffering, this wielding of power by adults will continue to be a normal aspect of the human condition, for no one takes seriously what is regarded as trivial, since the victims are "only children." But in twenty years' time these children will be adults who will pay it all back to their own children. They may then fight vigorously against cruelty "in the world" -- and yet they will carry within themselves an experience of cruelty to which they have no access and which remains hidden behind their idealized picture of a happy childhood.

Healing The Shame That Binds You by John Bradshaw is about toxic shame and the variety of ways it takes root in our minds. Feedback loops between addictive behavior and self-hatred, subtle indoctrination about sexuality being "dirty", religious messages about sin, and even being compelled to eat when you're not hungry:

Generally speaking, most of our vital spontaneous instinctual life gets shamed. Children are shamed for being too rambunctious, for wanting things and for laughing too loud. Much dysfunctional shame occurs at the dinner table. Children are forced to eat when they are not hungry. Sometimes children are forced to eat what they do not find appetizing. Being exiled at the dinner table until the plate is cleaned is not unusual in modern family life. The public humiliation of sitting at the dinner table all alone, often with siblings jeering, is a painful kind of exposure.

Replies from: Metus
comment by Metus · 2014-06-05T20:00:11.590Z · LW(p) · GW(p)

Now that is quite some text to read. Thank you very much. My request was aimed at more general books, though this is still useful.

You seem very knowledgeable on this specific topic. Am I right in assuming you are knowledgeable about emotional issues more generally? Would you be willing to write a post about these topics?

Replies from: Kazuo_Thow
comment by Kazuo_Thow · 2014-06-08T20:18:59.963Z · LW(p) · GW(p)

It's only been about 6 months since I started consciously focusing my attention on the subtle effects of abandonment trauma. Although I've done a fair amount of reading and reflecting on the topic I'm not at the point yet where I can confidently give guidance to others. Maybe in the next 3-4 months I'll write up a post for the discussion section here on LW.

What's frustrating is that signs of compulsive, codependent and narcissistic behavior are everywhere, with clear connections to methods of coping developed in childhood, but the number of people who pay attention to these connections is still small enough that discussion is sparse and the sort of research findings you'd like to look up remain unavailable. The most convincing research result I've been able to find is this paper on parental verbal abuse and white matter, where it was found that parental verbal abuse significantly reduces fractional anisotropy in the brain's white matter.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-06-09T16:37:50.833Z · LW(p) · GW(p)

Maybe in the next 3-4 months I'll write up a post for the discussion section here on LW.

Please do. This seems like an important part of "winning" to some people, and it is related to thinking, therefore it absolutely belongs here.

comment by Kaj_Sotala · 2014-06-06T04:48:57.335Z · LW(p) · GW(p)

Interesting, thanks. I had a pretty happy childhood in general, but I was a pretty lonely kid for large parts of it, and I've certainly experienced a feeling of being abandoned or left alone several times since then. And although my memories are fuzzy, it's possible that the current symptoms originally started developing after one particularly traumatic relationship/breakup when I was around 19. Also, meaningful social interaction with people seems to be the most reliable way of making these feelings go away for a while. And I tend to react really strongly and positively to any fiction that portrays strong, warm relationships between people.

Most intriguing.

comment by gothgirl420666 · 2014-06-04T01:53:39.179Z · LW(p) · GW(p)

This is kind of funny because I came to this open thread to ask something very similar.

I have noticed that my mind has a "default mode", which is to aimlessly browse the internet. If I am engaged in some other activity, no matter how much I am enjoying it, a part of my brain will have a strong desire to go into default mode. Once I am in default mode, it takes active exertion to break away and do anything else, no matter how bored or miserable I become. As you can imagine, this is a massive source of wasted time, and I have always wished that I could stop this tendency. This has been the case more or less ever since I got my first laptop when I was thirteen.

I have recently been experimenting with taking "days off" from the internet. These days are awesome. The day just fills up with free time, and I feel much calmer and more content. I wish I could be free of the internet and do this indefinitely.

But there are obvious problems, a few of which are:

  • Most of the stuff that I wish I were doing instead of aimlessly surfing the internet involves the computer, and oftentimes the internet. A few of the things that would be "good uses of my time" are reading, making digital art, producing electronic music, and coding. Three out of four of those things rely on the computer, and those three oftentimes rely in some capacity on the internet.

  • I am inevitably going to be required to use the internet for school and work. Most likely in my graphic design and computer science classes next year I will have to be able to use the internet on my laptop during class.

  • If I have an important question that I could find the answer to on Google, I'm going to want to find that answer.

It's hard to find an elegant solution to this problem. If I come up with a plan for avoiding internet use that is too loose, it will get more and more flexible until it falls apart completely. If the plan is too strict, then I inevitably will not be able to follow it and will abandon it. If the plan is too intricate and complicated, then I will not be able to make myself follow it either.

The best idea I have come up with so far is to delete all the browsers from my laptop and put a copy of Chrome on a flash drive. I would never copy this instance of Chrome onto a hard drive; instead, I would just run it from the flash drive every time I wanted to use it. That way, every time I wanted to use the internet, I would have to go find the flash drive. I could also give the flash drive to someone else for a while if I felt a moment of weakness coming on. I've been doing this for exactly one day and it seems to be working pretty well so far.

The other thing I've been doing for a few days is writing a "plan" of the next day before I go to bed, then sticking to the plan. If something happens to interrupt my plan, then I will draft a new plan as soon as possible. For example, my friend called me up today inviting me over. I wasn't about to say "No, I can't hang out, I have planned out my day and it didn't include you". So when I got back, I wrote a new one. Most of these plans involve limiting internet use to some degree, so this also seems promising. I might also do something where I keep track of how many days in a row I followed the plan and try not to break the chain.

Replies from: Cube, Metus, Risto_Saarelma, drethelin
comment by Cube · 2014-06-04T21:56:45.104Z · LW(p) · GW(p)

I've found that having two computers, one for work and one for play, has helped immensely.

Replies from: CAE_Jones
comment by CAE_Jones · 2014-06-04T22:34:41.214Z · LW(p) · GW(p)

I like this idea. It's difficult to implement; I have enough computers, but my attempt at enforcing their roles hasn't worked so well.

I've had better success with weaker, outdated hardware: anything without wireless internet access, for starters. Unfortunately, the fact that it's weaker and outdated means it tends to break, and repairs become more difficult due to lack of official support. Then they sort of disappear whenever things get moved due to being least used, and I'm back to having to put willpower against the most modern bells and whistles in my possession.

Generally speaking, the less powerful the internet capabilities, the better. Perhaps estimating the optimal amount of data to use would help in picking a service plan that disincentivizes wasteful internet use? Or maybe even dial-up, if one can get by without streaming video and high-speed downloads.

Another possibility is office space without internet access. Bonus points if there's a way to make getting there easier than leaving (without going overboard, of course).

Or, a strictly monitored or even public internet connection for work, where anything that is not clearly work-related is visible (hopefully, to someone whose opinion/reaction would incentivize staying on task).

If possible, not even having a personal internet connection, and using public locations (Starbucks? Libraries?) when internet is necessary might be another strategy. If work requires internet access, but not necessarily active, one could make lists of the things that need downloading, and the things that do not, and plan around internet availability (this worked pretty well for me in parts of high school, but your mileage may vary).

These solutions all have something in common: I can't really implement any of them right now, without doing some scary things on the other end of a maze constructed from Ugh Fields, anxiety, and less psychological obstacles. So my suggesting them is based on a tenuous analysis of past experience.

Replies from: Cube
comment by Cube · 2014-06-05T15:23:40.271Z · LW(p) · GW(p)

I have not been able to get rid of internet addiction by blocking or slowing it. Conversely, I've had (less than ideal) success with oversaturation. I don't think it's a thing I'll get rid of soon; aimless browsing is too much of a quick fix. Lately I've been working on making productivity a quicker fix: getting a little excited every time I complete something small, doing a dance when it's something bigger, etc.

comment by Metus · 2014-06-04T07:51:05.255Z · LW(p) · GW(p)

I think there is some underlying reason for browsing as a default state, maybe conditioning. Should it then be possible to train oneself to have a different default state?

comment by Risto_Saarelma · 2014-06-04T03:26:41.187Z · LW(p) · GW(p)

Just turning off your network interface for the duration of a work session (maybe do timed Pomodoro bursts) will guard against the mindless reflex of tabbing over to the browser. Then you get the opportunity to actually make a mindful decision about whether to go out of work phase and off browsing or not. If you get legit stuff to search that isn't completely blocking the offline work, write it down on a piece of scratch paper to look up later.

Tricks like this tend to stop working though. You'll probably just go into mindlessly bringing up the network interface instead in the long term, but even months or weeks of having a working technique are better than not having one.
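As an illustration, here's a minimal sketch of such a session-scoped kill switch, assuming a Linux machine running NetworkManager (so the hypothetical `nmcli networking off/on` commands apply; other platforms have their own equivalents). The `run` and `sleep` hooks are injectable so the scheduling logic can be exercised without touching a real interface:

```python
import subprocess
import time

def offline_pomodoro(work_minutes=25, run=subprocess.run, sleep=time.sleep):
    """Cut networking for one work burst, then restore it.

    Assumes NetworkManager's `nmcli` is available; swap in your
    platform's equivalent command if not.
    """
    run(["nmcli", "networking", "off"], check=True)
    try:
        sleep(work_minutes * 60)  # the distraction-free burst
    finally:
        # Restore connectivity even if the session is interrupted.
        run(["nmcli", "networking", "on"], check=True)
```

Looking up the scratch-paper list of queries then happens only after the `finally` block has turned the network back on.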

Maybe you could team up with an Ita who works with the Reticulum and become an Avout who is forbidden from it.

Replies from: gothgirl420666
comment by gothgirl420666 · 2014-06-04T06:06:21.396Z · LW(p) · GW(p)

Yeah, I tried this for a while, along with putting Chrome in increasingly obscure places on my hard drive. After these failed, I came upon the flash drive idea, which has the feature that it involves physical activity and therefore can't be done mindlessly. If you need to, you can throw it across the room.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-06-04T17:45:48.541Z · LW(p) · GW(p)

You could just physically unplug your broadband modem while working then, as long as you're the only person using it.

comment by drethelin · 2014-06-04T02:08:21.278Z · LW(p) · GW(p)

I don't mind having web browsing as a default state, but what I've done successfully in the past is have alarms throughout the day to remind me to exercise, leave the house, do chores, etc.

Replies from: None
comment by [deleted] · 2014-06-07T09:59:46.144Z · LW(p) · GW(p)

I used to live someplace close to a church with a bell tower that rang the time every fifteen minutes. I no longer live there, but I have considered writing a program to do the same thing -- I recall it being useful for productivity, and for escaping default-mode (which has a terrible sense of time).
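For what it's worth, such a program is only a few lines; here's a minimal sketch in Python, with printing the time standing in for ringing a bell (the chime action and the alignment helper are my own illustrative choices, not anything from the comment):

```python
import datetime
import time

def seconds_until_next_chime(now, interval_minutes=15):
    """Seconds from `now` until the next clock-aligned boundary.

    Alignment assumes the interval divides the hour evenly
    (15 minutes, like the bell tower, does).
    """
    period = interval_minutes * 60
    elapsed = (now.minute * 60 + now.second) % period
    return period - elapsed

def bell_tower(interval_minutes=15, announce=print, sleep=time.sleep):
    """Announce (here: print) the time at every boundary, like the church bell."""
    while True:
        sleep(seconds_until_next_chime(datetime.datetime.now(), interval_minutes))
        announce(datetime.datetime.now().strftime("%H:%M"))
```

Swapping `announce` for a call that plays a bell sound would complete the effect.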

comment by Vulture · 2014-06-07T23:18:45.773Z · LW(p) · GW(p)

In my experience Adderall can ameliorate this problem somewhat.

comment by [deleted] · 2014-06-03T19:34:09.040Z · LW(p) · GW(p)

I've experienced similar feelings, and find some exercises from "The Now Habit" to help.

There's an exercise he calls the focusing exercise, which involves taking five minutes to get into a flow state before starting; that helps. I've also found the advice to reframe "How soon can I finish?" as "When can I start?" to be helpful.

comment by NancyLebovitz · 2014-06-04T01:16:53.084Z · LW(p) · GW(p)

It sounds pretty familiar to me. My version seems to be background anxiety, and it can help to check my breathing and let my abdomen expand when I'm inhaling.

comment by cousin_it · 2014-06-03T11:01:46.537Z · LW(p) · GW(p)

Stratton's perceptual adaptation experiments a century ago have shown that the brain can adapt to different kinds of visual information, e.g. if you wear glasses that turn the picture upside down, you eventually adjust and start seeing it right side up again. And recently some people have been experimenting with augmented senses, like wearing an anklet with cell phone vibrators that lets you always know which way is north.

I wonder if we can combine these ideas? For example, if you always carry a wearable camera on your wrist and feed the information to a Google Glass-like display, will your brain eventually adapt to having effectively three eyes, one of which is movable? Will you gain better depth perception, a better sense of your surroundings, a better sense of what you look like, etc.?

Replies from: Douglas_Knight, Punoxysm, Slider, Alexandros, Lumifer, Username, None, Jayson_Virissimo, Daniel_Burfoot
comment by Douglas_Knight · 2014-06-10T01:30:46.621Z · LW(p) · GW(p)

One aspect of perceptual adaptation I do not often hear emphasized is the role of agency. I first encountered it in this passage:

The first hours were very difficult; nobody could move freely or do anything without going very slowly and trying to figure out and make sense of what he or she saw. Then something unexpected happened: Everything about their bodies and the immediate vicinity that they were touching began to look as before, but everything which could not be touched continued to be inverted. Gradually, by groping and touching while moving around to attain the satisfaction of normal needs, participants in the experiment found that objects further a field began to appear normal to the participants in the experiment. In a few weeks, everything looked the right way up, and they could all do everything without any special attention or care. At one point in the experiment snow began to fall. Kohler looked through the window and saw the flakes rising from the earth and moving upwards. He went out, stretched out his hands, palms upwards, and felt the snow falling on them. After only a few moments of feeling the snow touch his palms, he began to see the snow falling instead of rising.

There have been other experiments with inverted spectacles. One carried out in the United States involved two people, one sitting in a wheelchair and the other pushing it, both fitted with such special glasses. The one who moved around by pushing the chair began to see normally, and after a few hours, was able to find his way without groping, while the one sitting continued to see everything the wrong way.

--- Moshe Feldenkrais, "Man and World," in "Explorers of Humankind," ed Thomas Hanna

comment by Punoxysm · 2014-06-03T20:21:23.837Z · LW(p) · GW(p)

I read about an experiment (no link, sorry) where people wore helmets that gave them a 360-degree view of their surroundings. They were apparently able to adapt quite well, and could eventually do things like grab a ball tossed to them from behind without turning around.

comment by Slider · 2014-06-03T13:11:35.935Z · LW(p) · GW(p)

From my experience with focusing on the senses I already have, the mere availability of the data is not sufficient; you really need to process it. The upside-down-glasses intervention works well because it also takes away the primary way of interacting with the world. If you only add a sense, most of it can be ignored, as it doesn't bring any compelling extra value beyond being cool for a while. Color TV was a nice improvement, but not many are jumping on the 3D bandwagon.

So if you really want to go three-eyed, it could be a good bet, from a sense-development perspective, to go new-eye mono for a while. Another approach would be an environment where the new capabilities are handy enough to make a real difference. I could imagine that fixing and taking apart computers could benefit from that kind of sensing. You could also purposefully build a multilayered desk, so that simply seeing what is on it would require hand movement, but many more documents could be open at any time.

Your brain already filters out most of the massive amount of input it takes in, which makes it quite expensive to get it to pay attention to yet another sense-datum. The new sense would also require its own "drivers". I could imagine that managing a movable eye would be more laborious than the micromanagement of eye focus: having a fixed separation between your viewpoints makes the calculations easy and routine, and that would have to be expanded into a more general approach for variable separation. There is a camera trick where you change the zoom while simultaneously moving the camera forward or backward, keeping the size of the primary target fixed but stretching the perspective; big variance in viewpoint separation would induce similar effects. I could imagine it being nausea-inducing instead of merely cool. Increased mental labour and confusion, at least in the short term, would press against adopting a more expanded sensory experience. Therefore, if such a transition is wanted, it is important to make the tempting upsides concrete in practical experience.

comment by Alexandros · 2014-06-03T18:38:50.181Z · LW(p) · GW(p)

I've thought about taking this idea further.

Think of applying the anklet idea to groups of people. What if soccer players could know where their teammates are at any time, even when they can't see them? Now apply this to firemen, or infantry. This is the startup I'd be doing if I weren't doing what I'm doing. Plugging data feeds right into the brain, and in particular doing this for groups of people, sounds like the next big frontier.

Replies from: cousin_it
comment by cousin_it · 2014-06-03T19:25:15.295Z · LW(p) · GW(p)

What other applications for groups of people can you imagine, apart from having a sense of each other's position?

Replies from: Alexandros
comment by Alexandros · 2014-06-03T19:55:15.051Z · LW(p) · GW(p)

Whatever team state matters. Maybe online/offline, maybe emotional states, maybe biofeedback (hormones? alpha waves?) done cross-team. Maybe just "how many production bugs we've had this week".

Replies from: Alexandros
comment by Alexandros · 2014-06-03T19:57:09.463Z · LW(p) · GW(p)

But if we're talking startups, I'd probably look at where the money is and go there. Can this be applied to groups of traders? C-level executives? Medical teams? Maybe some other target group that is both flush with cash and an early adopter of new tech?

comment by Lumifer · 2014-06-03T15:03:54.581Z · LW(p) · GW(p)

if you always carry a wearable camera on your wrist

Better: put it on your personal drone which normally orbits you but can be sent out to look at things...

comment by Username · 2014-06-09T03:50:55.419Z · LW(p) · GW(p)

I have magnets implanted into two of my fingertips, which extend my sense of touch to be able to feel electromagnetic fields. I did an AMA on reddit a while ago that answers most of the common questions, but I'd be happy to answer any others.

To touch on it briefly, alternating fields feel like buzzing, and static fields feel like bumps or divots in space. It's definitely become a seamless part of my sensory experience, although most of the time I can't tell it's there because ambient fields are pretty weak.

comment by [deleted] · 2014-06-03T19:26:17.075Z · LW(p) · GW(p)

There's already some brain plasticity research which does this for people who have lost senses. Can't remember a specific example, but I know there are quite a few in the book "The Brain That Changes Itself"

comment by Jayson_Virissimo · 2014-06-04T07:21:46.551Z · LW(p) · GW(p)

Well, technologies like BrainPort allow one to "see" with the tongue.

comment by Daniel_Burfoot · 2014-06-03T17:06:07.363Z · LW(p) · GW(p)

I would guess strongly (75%) that the answer is yes. There are incredible stories about people's brains adapting to new inputs. One paper in the neuroscience literature showed that if you connect a video input to a blind cat's auditory cortex, that brain region will develop neural structures usually associated with vision (like edge detectors).

Replies from: Error
comment by Error · 2014-06-03T17:16:29.309Z · LW(p) · GW(p)

This makes me wonder what could be done with, say, a bluetooth earbud and a smartphone, both of which are rather less conspicuous than Google Glass. Not quite as good as connecting straight to the auditory cortex, but still. The first thing that comes to mind is trying to get GPS navigation to work on a System 1 rather than System 2 level, through subtle cues rather than interpreted speech.

[Edit: or positional cues rather than navigational. Not just knowing which way north is, but knowing which way home is.]

comment by maxikov · 2014-06-04T21:02:09.359Z · LW(p) · GW(p)

A quick search on Google Scholar for such queries as cryonic, cryoprotectant, cryostasis, and neuropreservation confirms my suspicion that there is very little, if any, academic research on cryonics. I realize that, being generally supportive of MIRI's mission, the Less Wrong community is probably not very judgmental of non-academic science, and I may be biased, being from academia myself, but I believe that despite all its problems, any field of study largely benefits from being a field of academic study. That makes it easier to get funding; that makes the results more likely to be noticed, verified and elaborated on by other experts, as well as taught to students; that makes it more likely to be seriously considered by the general public and government officials. The last point is particularly important, since on one hand, with the current quasi-Ponzi mechanism of funding, the position of preserved patients is secured by the arrival of new members, and on the other hand, a large legislative effort is required to make cryonics reliable: train the doctors, give the patients more legal protection than that afforded to graves, and eventually get it covered by health insurance policies or single-payer systems.

As for the method itself, it frankly looks inadequate as well. I do believe that it's a good bet worth taking, but so did Egyptian pharaohs. And they lost, because their method of preservation turned out to be useless. I'm well aware of all the considerations about information theory, nanorobotics and brain scanning, but improving our freezing technologies to the extent that otherwise viable organisms could be brought back to life without further neural repairs seems to be the thing we should totally be doing.

Thus, I want to see this field develop. I want to see at least one study a year on the cryonic preservation of neural tissue in a peer-reviewed journal with a high impact factor. And before I die, I want to at least see a healthy chimpanzee cooled to the temperature of liquid nitrogen and then brought back to life without losing any of its cognitive abilities.

What can we do about it? Is there an organization that is willing to collect donations and fund at least one academic study in this field? Can we use some crowdfunding platform and start such a campaign? Can we pitch it to Calico?

Replies from: ChristianKl, Mestroyer
comment by ChristianKl · 2014-06-04T22:06:57.494Z · LW(p) · GW(p)

I think the nearest thing is the Brain Preservation Foundation. If you want to donate money towards that purpose, they are a good place to start.

comment by Mestroyer · 2014-07-01T07:22:49.630Z · LW(p) · GW(p)

The last point is particularly important, since on one hand, with the current quasi-Ponzi mechanism of funding, the position of preserved patients is secured by the arrival of new members.

Downvoted because, if I remember correctly, this is wrong; the cost of preserving a particular person includes a lump sum big enough for the interest to pay for their maintenance. If I remember incorrectly and someone points it out, I will rescind my downvote.

comment by iarwain1 · 2014-06-03T15:34:22.883Z · LW(p) · GW(p)

For those of us who for whatever reason can't make it to a CFAR workshop, what are the best ways to get more or less the equivalent? A lot of the information they teach is in the Sequences (but not all of it, from what it looks like), but my impression is that much of the value from a workshop is in (a) hands-on activities, (b) interactions with others, and (c) personalized applications of rationality principles developed in numerous one-on-one and small-group sessions.

So I'm looking for:

  • Resources for getting the information that's not covered (or at least not comprehensively covered) in the Sequences.
  • Ideas for how to simulate the activities.
  • Ideas for how to simulate the small group interactions. This is mainly what LW meetups are for, but for various reasons I can't usually make it to a meetup.
  • How to simulate the one-on-one personalized training.

That last one is probably the hardest, and I suspect it's impossible without either (a) spending an awful lot of time developing the techniques yourself, or (b) getting a tutor. So, anybody interested in being a rationality tutor?

Replies from: John_Maxwell_IV, pcm, MathiasZaman
comment by John_Maxwell (John_Maxwell_IV) · 2014-06-04T05:28:25.299Z · LW(p) · GW(p)

Find & read good self-help-type stuff (relevant books by psychologists, Less Wrong posts, Sebastian Marshall, Getting Things Done, etc.) and invent/experiment with your own techniques in a systematic way. Do Tiny Habits. Start meditating. Watch & apply this video. Keep a log of all the failure modes you run into and catalogue strategies for overcoming each of them. Read about habit formation. Brainstorm & collect habits that would be useful to have and pair them with habit-formation techniques. Try lots of techniques and reflect on why things are or are not working.

comment by pcm · 2014-06-06T18:41:27.503Z · LW(p) · GW(p)

Have you asked CFAR whether you could hire one of their instructors to give you one-on-one training over Skype? I expect it would be expensive, but they are flexible with people who are willing to pay thousands of dollars.

comment by MathiasZaman · 2014-06-03T22:19:28.244Z · LW(p) · GW(p)

Ideas for how to simulate the small group interactions. This is mainly what LW meetups are for, but for various reasons I can't usually make it to a meetup.

One of the best things that happened to me was getting into a tumblr rationalist group on skype. The feeling is a bit like a meetup, except people are available all the time.

So, anybody interested in being a rationality tutor?

Yes, but I'm not yet versed enough in the Art to help anyone except a novice. If you have specific things you want to discuss I can probably point you in the right direction.

This is also what the skype group (or meetups) are good for. There will always be someone who can help you with a particular issue.

comment by niceguyanon · 2014-06-03T16:50:16.432Z · LW(p) · GW(p)

I just discovered a very useful way to improve my comfort and posture while sitting in chairs not my own. If you travel a lot or are constantly changing workstations or just want to improve your current set up – buy contact shelf lining, the one with no-slip grip.

The liner adds grip to chairs that either (1) do not adequately recline, or (2) recline but tend to let you slide off (slippery leather chairs). Recently I was provided with a stiff non-reclining wood chair and it was killing my back. Every time I relaxed into the backrest I started to slide down; my posture was terrible and my back hurt. I picked mine up at Target.

I can't believe it took me this long to discover this; it has greatly improved my comfort.

Edit: In case directions are necessary, place the liner (cut to appropriate length) on the seat not the back rest.

comment by sixes_and_sevens · 2014-06-03T22:57:39.986Z · LW(p) · GW(p)

In a rare case of actually doing something I said I would, I've started to write a utility for elevating certain words and phrases in the web browser to your attention, by highlighting them and providing a tool-tip explanation for why they were highlighted. It's still in the occasionally-blow-up-your-webpage-and-crash-the-browser phase of development, but is showing promise nonetheless.

I have my own reasons for wanting this utility ("LOOK AT THIS WORD! SUBJECT IT TO SCRUTINY OR IT WILL BE YOUR UNDOING!") but thought I would throw it out to LW to see if there are any use-cases I might not have considered.

Replies from: sixes_and_sevens, satt, lmm
comment by sixes_and_sevens · 2014-06-03T23:49:37.899Z · LW(p) · GW(p)

On a related note, is there a reason why Less Wrong, and seemingly no other website, would suffer a catastrophic memory leak when I try and append a JSON-P script element to the DOM? It doesn't report any security policy conflicts; it just dies.

comment by satt · 2014-06-06T02:10:48.658Z · LW(p) · GW(p)

Ooh, ooh, thought of another cluster of danger phrases (inspired by a recent Yvain blog post, I forget which): "studies have shown", "studies show", "studies find", and any other vague claim that something's corroborated by multiple scientific studies which the writer mysteriously can't be bothered to reference properly, or even give a clear description of.

comment by lmm · 2014-06-04T12:55:23.455Z · LW(p) · GW(p)

So is this script somewhere we can try it?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-06-04T17:04:10.840Z · LW(p) · GW(p)

Not without breaking everything horribly (including my debugging tools) in a non-negligible number of cases (including Less Wrong).

I did put together a little bookmarklet example, but since it doubles as an "occasionally mangle your page and possibly make this tab explode in a shower of RAM" button, I decided not to share it until I've isolated and fixed this particular problem.

comment by [deleted] · 2014-06-04T19:31:59.229Z · LW(p) · GW(p)

No one's posted about the new Oregon Cryonics yet?

Replies from: shminux, David_Gerard
comment by shminux · 2014-06-04T23:25:27.659Z · LW(p) · GW(p)

This seems very interesting; maybe someone can look into it in depth. The costs are much more manageable, and there are hopefully fewer legal issues with preserving the brain only. Not sure why they only talk about "next of kin". Anyway, "chemical preservation" of the brain only for $2500 seems like an interesting alternative to Alcor or CI. It is also likely to go over better in an argument with people reluctant to opt for "traditional" cryonics, such as the parents of some of the regulars who complain about it.

I am not qualified to judge the quality of their "Instructions for Funeral Director":

Embalm the entire body, paying special attention to the brain. Use arterial fluid on the brain rather than cavity fluid. If 10% Neutral Buffered Formalin is readily available, that is preferred for the brain. Wait one hour and then pump more fluid through the brain again. Since the body will never be displayed at a funeral service, there is no need to perform any cosmetic procedures at all. Those would simply delay shipment.

comment by David_Gerard · 2014-06-07T09:48:14.615Z · LW(p) · GW(p)

Someone from OC came by the RationalWiki Facebook group asking if it was of interest to us ... um. I suggested they should definitely say hi on LW.

comment by deskglass · 2014-06-05T23:38:26.696Z · LW(p) · GW(p)

If you had four months to dedicate to working on a project, what would you work on?

Replies from: Benito, wadavis, polymathwannabe
comment by Ben Pace (Benito) · 2014-06-06T20:41:10.869Z · LW(p) · GW(p)

Learn all the maths to be able to get a job at MIRI :)

comment by wadavis · 2014-06-06T15:33:50.621Z · LW(p) · GW(p)

Intern under a specialist in heat-straightening of damaged steel members.

comment by polymathwannabe · 2014-06-06T15:14:06.248Z · LW(p) · GW(p)

Writing a novel.

comment by Scott Alexander (Yvain) · 2014-06-03T14:28:09.190Z · LW(p) · GW(p)

Yesterday I posted a Michigan meetup.

My location is set to Michigan.

The "Nearest Meetups" column on the right-hand side suggests Atlanta and Houston, but not Michigan.

Is this a known bug?

Replies from: JRMayne, philh
comment by JRMayne · 2014-06-03T14:57:09.977Z · LW(p) · GW(p)

It's a feature, not a bug. The friendly algorithm that creates that column assumes you would rationally prefer Atlanta or Houston to anywhere within 40 miles of Detroit.

comment by philh · 2014-06-03T15:48:21.898Z · LW(p) · GW(p)

'Nearest meetups' ignores where your location is set to, and tries to work it out from your IP address. (Source: my location is set to London.) Perhaps your IP address is misleading about that?

Two sites that try to work out your location from your IP:

http://www.geobytes.com/IpLocator.htm?GetLocation (says I'm in Budapest)

http://www.ip-adress.com/ (says I'm in London)

I'm not sure what system LW uses for this, right now it gives me "upcoming" instead of "nearest".

(Edit: now that I'm at home instead of work, these respectively say London and Glasgow, and I have "nearest meetups" back.)

comment by NancyLebovitz · 2014-06-05T13:16:36.474Z · LW(p) · GW(p)

Is there any convenient way to promote interesting sub-threads to Discussion-level posts?

Replies from: David_Gerard
comment by David_Gerard · 2014-06-05T14:47:13.272Z · LW(p) · GW(p)

c'n'p with link to original, really. (So no.)

comment by Punoxysm · 2014-06-08T06:45:20.753Z · LW(p) · GW(p)

I do not understand - and I mean this respectfully - why anyone would care about Newcomblike problems or UDT or TDT, beyond mathematical interest. An Omega is physically impossible - and if I were ever to find myself in an apparently Newcomblike problem in real life, I'd obviously choose to take both boxes.

Replies from: Kaj_Sotala, ChristianKl, David_Gerard, shminux, drethelin, Risto_Saarelma
comment by Kaj_Sotala · 2014-06-08T07:23:39.309Z · LW(p) · GW(p)

An Omega is physically impossible

I don't think it's physically impossible for someone to predict my behavior in some situation with a high degree of accuracy.

Replies from: Punoxysm
comment by Punoxysm · 2014-06-08T17:51:55.849Z · LW(p) · GW(p)

If I wanted to thwart or discredit pseudo-Omega, I could base my decision on a source of randomness. This puts me out of reach of any real-world attempt at setting up the Newcomblike problem. It's not the same as guaranteeing a win, but it undermines the premise.

Certainly, anybody trying to play pseudo-Omega against a random decider would start losing lots of money until they settled on always keeping box B empty.

And if it's a repeated game where Omega explicitly guarantees it will attempt to keep its accuracy high, choosing only box B emerges as the right choice even under non-TDT theories.

Replies from: DanielLC
comment by DanielLC · 2014-06-09T20:53:41.707Z · LW(p) · GW(p)

If I wanted to thwart or discredit pseudo-Omega, I could base my decision on a source of randomness. This brings me out of reach of any real-world attempt at setting up the Newcomblike problem.

It's not a zero-sum game. Using randomness means pseudo-Omega will sometimes guess wrong, so he loses, but his guessing wrong doesn't mean he predicted one-boxing, so you don't thereby win. There is no mixed Nash equilibrium; the only Nash equilibrium is to always one-box.

comment by ChristianKl · 2014-06-08T07:49:42.768Z · LW(p) · GW(p)

An Omega is physically impossible

The idea that we live in a simulation is not a physical impossibility.

At the moment choices can often be predicted 7 seconds in advance by reading brain signals.

Replies from: DanielLC, Punoxysm
comment by DanielLC · 2014-06-09T20:54:38.415Z · LW(p) · GW(p)

Source?

How accurate is this prediction?

Replies from: ChristianKl
comment by Punoxysm · 2014-06-08T17:53:33.153Z · LW(p) · GW(p)

Even if we live in a simulation, I've never heard of anybody being presented a newcomblike problem.

Make a coin flip < 7 seconds before deciding.

Replies from: ChristianKl
comment by ChristianKl · 2014-06-08T20:27:43.593Z · LW(p) · GW(p)

Make a coin flip < 7 seconds before deciding.

Most people don't flip coins. You can set the rule that flipping a coin counts as picking both boxes.

Replies from: Punoxysm
comment by Punoxysm · 2014-06-08T21:35:48.387Z · LW(p) · GW(p)

Fine, but most people can notice a brain scanner attached to their heads, and would then realize that the game starts at "convince the brain scanner that you will pick one box". Newcomblike problems reduce to this multi-stage game too.

Replies from: ChristianKl
comment by ChristianKl · 2014-06-08T22:23:31.828Z · LW(p) · GW(p)

Brain scanners are technology that's very straightforward to think about. Humans reading other humans is a lot more complicated. People have a hard time accepting that Eliezer won the AI box challenge. "Mind reading" and predicting the choices of other people is a task of similar difficulty to the AI box challenge.

Let's take contact improvisation as an illustrating example. It's a dance form without hard rules. If I'm dancing contact improvisation with a woman, then she expects me to be in a state where I follow the situation and express my intuition. If I'm in that state and it happens that I touch her breast with my arm, that's no real problem. If, on the other hand, I make a conscious decision that I want to touch her breast and act accordingly, I'm likely to creep her out.

There are plenty of people in the contact improvisation field whose awareness of other people is good enough to tell the difference.

Another case where decision frameworks matter is diplomacy. A diplomat gets told beforehand how he's supposed to negotiate, and there might be instances where that information leaks.

Replies from: Punoxysm
comment by Punoxysm · 2014-06-08T23:23:42.410Z · LW(p) · GW(p)

I don't think this contradicts any of my points. Causal decision theory would never tell the State Department to behave as if leaks are impossible. Yet because the leak probability is low, I think any diplomatic group that openly published all its internal orders would find itself greatly hampered against others that didn't.

Playing a game against an opponent with an imperfect model of yourself, especially one whose model-building process you understand, does not require a new decision theory.

Replies from: ChristianKl
comment by ChristianKl · 2014-06-09T07:44:00.014Z · LW(p) · GW(p)

I think any diplomatic group that openly published all its internal orders would find itself greatly hampered against others that didn't.

It's possible that the channel through which the diplomatic group internally communicates is completely compromised.

comment by David_Gerard · 2014-06-08T09:25:28.559Z · LW(p) · GW(p)

I believe the application was how a duplicable intelligence like an AI could reason effectively. (Hence TDT thinking in terms of all instances of you.)

Replies from: Punoxysm
comment by Punoxysm · 2014-06-08T17:53:14.598Z · LW(p) · GW(p)

Communication and pre-planning would be a superior coordination method.

Replies from: David_Gerard
comment by David_Gerard · 2014-06-08T20:35:26.240Z · LW(p) · GW(p)

This is assuming you know that you might be just one copy of many, at varying points in a timeline.

comment by shminux · 2014-06-08T07:23:31.990Z · LW(p) · GW(p)

Do you think that someone can predict your behavior with maybe 80% accuracy? Like, for example, whether you would one-box or two-box, based on what you wrote? And then confidently leave the $1M box empty because they know you'd two-box? And use that fact to win a bet, for example? Seems very practical.
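
For concreteness, here is the expected-value arithmetic behind that bet, as a minimal sketch using the standard $1,000,000 / $1,000 Newcomb payoffs; the 80% figure is just a parameter, and `newcomb_evs` is a made-up name for illustration:

```python
def newcomb_evs(p, big=1_000_000, small=1_000):
    """Expected payoffs against a predictor that is correct with probability p.

    One-boxing: box B holds $1M iff the predictor foresaw one-boxing (prob. p).
    Two-boxing: you always get the $1k, plus $1M iff the predictor was wrong (prob. 1 - p).
    """
    ev_one_box = p * big
    ev_two_box = (1 - p) * big + small
    return ev_one_box, ev_two_box

one, two = newcomb_evs(0.8)  # ≈ 800,000 vs ≈ 201,000
```

The break-even accuracy is p = (big + small) / (2 * big) ≈ 0.5005, so on this naive accounting any predictor meaningfully better than a coin flip already favors one-boxing; whether this accounting is the right one is, of course, exactly what the decision-theory dispute is about.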

Replies from: Punoxysm
comment by Punoxysm · 2014-06-08T18:02:34.923Z · LW(p) · GW(p)

If I bet $1001 that I'd one-box, I'd have a natural incentive to do so.

However, if the boxes were already stocked and I gained nothing from proving pseudo-Omega wrong, then two-boxing is clearly superior. If instead I one-boxed, I'd open one empty box, have nothing, yell at pseudo-Omega for being wrong, get a shrug in response, and go to bed regretting that I'd ever heard of TDT.

comment by drethelin · 2014-06-08T17:44:47.940Z · LW(p) · GW(p)

So as several people said, Omega is probably more within the realm of possibility than you give it credit for, but MORE IMPORTANTLY, Omega is definitely possible for non-humans. As David_Gerard said, the point of this thought exercise is for AI, not for humans. For an AI written by humans, we can know all of its code and predict the answers it will give to certain questions. This means that the AI needs to deal with us as if we are an Omega that can predict the future. For the purposes of AI, you need decision theories that can deal with entities having arbitrarily strong models of each other, recursively. And TDT is one way of trying to do that.

Replies from: Punoxysm
comment by Punoxysm · 2014-06-08T21:45:03.810Z · LW(p) · GW(p)

In general, predicting what code does can be as hard as executing the code. But I know that's been considered and I guess that gets into other areas.

Replies from: drethelin
comment by drethelin · 2014-06-09T16:55:39.720Z · LW(p) · GW(p)

Even if that's the case, when dealing with AI we more easily have the option of simulation. You can run a program over and over again, and see how it reacts to different inputs.

comment by Risto_Saarelma · 2014-06-08T07:55:08.492Z · LW(p) · GW(p)

I understood that people here mostly do care about them because of mathematical interest. It's a part of the "how can we design an AGI" math problem.

comment by Punoxysm · 2014-06-03T20:42:47.192Z · LW(p) · GW(p)

LessWrong's focus on the Bay Area/software-programmer/secular/transhumanist crowd seems to me unnecessary. I understand that that's how the community got its start, and that's fine. But when people here tie rationality to being part of that subset, or to high IQ in general, it seems a bit silly (I also find the near-obsession with IQ a bit unsettling).

If the sequences were being repackaged as a self-help book targeted towards the widest possible audience, what would they look like?

Some of the material is essentially millennia old; self-knowledge, self-awareness, and introspection aren't new inventions. Any decent therapist will also try to get people to see the "outside view" of their actions. Transhumanism and x-risk probably wouldn't belong in this book. Bayesian reasoning and cognitive fallacies already have plenty of popular treatments.

Effective altruism doesn't need to be tied to utilitarianism or terms like QALYs. Look at the way the Gates Foundation describes its work, for instance.

The hardline secularism is probably alienating to many people who could still learn a lot (and frankly, are there not many people for whom at least the outward appearance of belief is rational, when it is what ties them to their communities?). Science can be promoted as an alternative to mysticism in a way that isn't hostile and doesn't provoke instant dismissal by those who most need that alternative.

Am I missing anything here? Is there some large component of rationalism that can't be severed from the way it's packaged on this site and sites like it?

Replies from: Nornagest, Desrtopa, Viliam_Bur, pianoforte611, ChristianKl, Richard_Kennaway
comment by Nornagest · 2014-06-03T21:25:12.474Z · LW(p) · GW(p)

For all the emphasis on Slytherin-style interpersonal competence (not so much on the main site anymore, but it's easy to find in the archive and in Methods), LW's historically had a pretty serious blind spot when it comes to PR and other large-scale social phenomena. There's probably some basic typical-minding in this, but I'm inclined to treat it mostly as a subculture issue; American geek culture has a pretty solid exceptionalist streak to it, and treats outsiders with pity when it isn't treating them with contempt and suspicion. And we're very much tied to geek culture. I've talked to LWers who don't feel comfortable exercising because they feel like it's enemy clothing; if we can't handle something that superficial, how are we supposed to get into Joe Sixpack's head?

Ultimately I think we focus on contrarian technocrat types, consciously or not, because they're the people we know how to reach. I include myself in this, unfortunately.

Replies from: Punoxysm, Lumifer
comment by Punoxysm · 2014-06-04T00:05:13.505Z · LW(p) · GW(p)

A very fair assessment.

I would also note that often, when people DO think about marketing LW, they speak about the act of marketing with outright contempt. Marketing is just a set of methods for drawing attention to something. As a rationalist, one should embrace that tool for anything one cares about, rather than treating it as vulgar.

comment by Lumifer · 2014-06-04T01:17:23.284Z · LW(p) · GW(p)

how are we supposed to get into Joe Sixpack's head?

A better question is: what exactly are we supposed to do inside Joe Sixpack's head?

Make him less stupid? No one knows how. Give him practical advice so that he fails less epically? There are multiple shelves of self-help books at B&N, programs run by nonprofits and the government, classes at the local community college, etc. Joe Sixpack shows very little interest in any of those; I don't see why the Sequences, or some distillation of them, would do better.

Replies from: Nornagest
comment by Nornagest · 2014-06-04T02:27:02.951Z · LW(p) · GW(p)

Nice example of geek exceptionalism there, dude.

To be fair, it might have some merit if we were literally talking about the average person, though I'm far from certain; someone buys an awful lot of mass-market self-help books and I don't think it's exclusively Bay Aryans. But I was using "Joe Sixpack" there in the sense of "someone who is not a geek", or even "someone who isn't part of the specific cluster of techies that LW draws from", and there should be plenty of smart, motivated, growth-oriented people within that set. If we can't speak to them, that's entirely on us.

Replies from: Lumifer
comment by Lumifer · 2014-06-04T04:57:50.017Z · LW(p) · GW(p)

Nice example of geek exceptionalism there, dude.

Nah, just plain-vanilla arrogance :-D I am not quite sure I belong to the American geek culture, anyway.

But I was using "Joe Sixpack" there in the sense of "someone who is not a geek", or even "someone who isn't part of the specific cluster of techies that LW draws from"

Ah. I read "Joe Sixpack" as being slightly above "redneck" and slightly below "your average American with 2.2 children".

So do you mean people like engineers, financial quants, the Make community, bright-eyed humanities graduates? These people are generally not dumb. But I am still having trouble imagining what you would want to do inside their heads.

Replies from: Nornagest
comment by Nornagest · 2014-06-04T05:02:37.939Z · LW(p) · GW(p)

So do you mean people like engineers, financial quants, the Make community, bright-eyed humanities graduates? These people are generally not dumb. But I am still having trouble imagining what you would want to do inside their heads.

The first group of people I thought of was lawyers, who have both a higher-than-average baseline understanding of applied cognitive science and a strong built-in incentive to get better at it. I wouldn't stop there, of course; all sorts of people have reasons to improve their thinking and understanding, and even more have incentives to become more instrumentally effective.

As to what we'd do in their heads... same thing as we're trying to do in ours, of course.

Replies from: Lumifer
comment by Lumifer · 2014-06-04T05:11:10.555Z · LW(p) · GW(p)

same thing as we're trying to do in ours, of course.

Um. Speaking for myself, what I'm trying to do in my own head doesn't really transfer to other heads, and I'm not trying to do anything (serious) inside other people's heads in general.

comment by Desrtopa · 2014-06-03T21:21:46.084Z · LW(p) · GW(p)

The hardline secularism is probably alienating (and frankly, are there not many people for whom at least the outward appearance of belief is rational, when it is what ties them to their communities?) to many people who could still learn a lot. Science can be promoted as an alternative to mysticism in a way that isn't hostile and doesn't provoke instant dismissal by those who most need that alternative.

The hardline secularism (which might be better described as a community norm of atheism, given that some of the community favors creating community structures which take on the role of religious participation) isn't a prerequisite so much as a conclusion, but it's one that's generally held within the community to be pretty basic.

However, so many of the lessons of epistemic rationality bear on religious belief that not addressing the matter at all would probably smack of willful avoidance.

In a sense, rationality might function as an alternative to mysticism. Eliezer has spoken for instance about how he tries to present certain lessons of rationality as deeply wise so that people will not come to it looking for wisdom, find simple "answers," and be tempted to look for deep wisdom elsewhere. But there's another very important sense where, if you treat rationality like mysticism, the result is that you'll completely fuck up at rationality, and get a group that worships some "rational" sounding buzzwords without gaining any useful insight into reasoning.

Keep in mind that insofar as Less Wrong has educational goals, it's not trying to reach as wide an audience as possible, it's trying to teach as many people as possible to get it right. If "reaching" an audience means instilling them with some memes which don't have much use in isolation, while leaving out important components of rationality, that measure has basically failed.

Replies from: ChristianKl
comment by ChristianKl · 2014-06-05T12:38:34.110Z · LW(p) · GW(p)

In a sense, rationality might function as an alternative to mysticism.

Given that Eliezer wrote HPMOR, he's not really turning away from mysticism and teaching through stories.

Replies from: None
comment by [deleted] · 2014-06-07T10:05:46.144Z · LW(p) · GW(p)

One would expect an alternative to a thing to share enough characteristics with the thing to make it an alternative.

Turkey is an alternative to chicken. Ice cream is not. Teaching rationality through stories and deep-wisdom tropes is an alternative to teaching mysticism through stories and deep-wisdom tropes. Teaching rationality through academic papers is not.

comment by Viliam_Bur · 2014-06-03T23:11:40.678Z · LW(p) · GW(p)

If the sequences were being repackaged as a self-help book targeted towards the widest possible audience, what would they look like?

More simple language, many examples, many exercises.

And then the biggest problem would be that most people would just skip the exercises, remember some keywords, and think that it made them more rational.

By which I mean that making the book more accessible is a good thing, and we definitely should do it. But rationality also requires some effort from the reader, that cannot be completely substituted by the book. We could reach a wider audience, but it would still be just a tiny minority of the population. Most people just wouldn't care enough to really do the rationality stuff.

Which means that the book should start with some motivating examples. But even that has limited effect.

I believe there is a huge space for improvement, but we shouldn't expect magic even with the best materials. There is only so much even the best book can do.

Some of the material is essentially millenia old, self-knowledge and self-awareness and introspection aren't new inventions.

The problem is, using these millennia-old methods people can generate a lot of nonsense. And they predictably do, most of the time. Otherwise, Freud would have already invented rationality, founded CFAR, become a beisutsukai master, built a Friendly AI, and started the Singularity. (Unless Aristotle or Socrates had done it first.) Instead, he just discovered that everything you dream about is secretly a penis.

The difficult part is avoiding self-deception. These millennia-old materials seem quite bad at it. Maybe they were the best of what was available at their time. But that's not enough. Archimedes could have been the smartest physicist of his time, but he still didn't invent relativity. Being "best" is not enough; you have to do things correctly.

Replies from: Punoxysm
comment by Punoxysm · 2014-06-04T00:01:41.927Z · LW(p) · GW(p)

By which I mean that making the book more accessible is a good thing, and we definitely should do it. But rationality also requires some effort from the reader, that cannot be completely substituted by the book. We could reach a wider audience, but it would still be just a tiny minority of the population. Most people just wouldn't care enough to really do the rationality stuff.

Okay, this is true. But LessWrong is currently a set of articles. So the medium is essentially unchanged, and any of these criticisms apply to the current form. And how many people do you think the article on akrasia has actually cured of akrasia?

The problem is, using these millenia old methods people can generate a lot of nonsense. And they predictably do, most of the time.

First of all, I'm mainly dealing with the subset of material here that deals with self-knowledge. Even if you disagree with "millennia old", do you disagree with "any decent therapist would try to provide many/most of these tools to his/her patients"?

On the more scientific side, the idea of optimal scientific inquiry has been refined over the years, but the core of observation, experimentation and modeling is hardly new either.

Otherwise, Freud would have already invented rationality, founded CFAR, became a beisutsukai master, built a Friendly AI, and started the Singularity. (Unless Aristotle or Socrates would already do it first.) Instead, he just discovered that everything you dream about is secretly a penis.

I do not see what you mean here. Nobody at LW has invented rationality, become a beisutsukai master, built a Friendly AI, or started the Singularity. Freud correctly realized the importance the subconscious has in shaping our behavior, and the fact that it is shaped by past experiences in ways not always clear to us. He then failed to separate this knowledge from some personal obsessions. We wouldn't expect any methods of rationality to turn Freud into a superhero; we'd expect them to help people reading him separate the wheat from the chaff.

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2014-06-04T09:01:15.082Z · LW(p) · GW(p)

But LessWrong is currently a set of articles.

And also an e-book (which is probably not finished yet, last mention here), that is still just a set of articles, but they are selected, reordered, and the comments are removed -- which is helpful, at least for readers like me, because when I read the web, I cannot resist reading the comments (which together can be 10 times as long as the article) and clicking hyperlinks, but when I read the book, I obediently follow the page flow.

A good writer could then take this book as a starting point, and rewrite it, with exercises. But for this we need a volunteer, because Eliezer is not going to do it. And the volunteer needs to have some skills.

And how many people do you think the article on akrasia has actually cured of akrasia?

Akrasia survey data analysis. Some methods seem to work for some people, but no method is universally useful. The highest success was "exercise to increase energy" and even that helped only 25% of people; and the critical weakness seems to be that most people think it is a good idea, but don't do it. To overcome this, we would need some off-line solutions, like exercising together. (Or maybe a "LessWrong Virtual Exercise Hall".)

do you disagree with "any decent therapist would try to provide many/most of these tools to his/her patients"?

Yes, I do. Therapists don't see teaching rationality as their job (although it correlates), wouldn't agree with some parts of our definitions of rationality (many of them are religious, or enjoy some kind of mysticism), and would consider some parts too technical and irrelevant for mental health (Bayes Rule, Solomonoff Prior, neural networks...).

But when you remove the technical details, what is left is pretty much "do things that seem reasonable". Which still would be a huge improvement for many people.

On the more scientific side, the idea of optimal scientific inquiry has been refined over the years, but the core of observation, experimentation and modeling is hardly new either.

That's the theory. Now look at the practice of... say, medicine. How much of it really is evidence-based, and how much of that is double-blind with control group and large enough sample and meta-analysis et cetera? When you start looking at it closely, actually very little. (If you want a horror story, read about Ignaz Semmelweis, who discovered how to save the lives of thousands of people and provided hard evidence... and how the medical community rewarded him.)

comment by ChristianKl · 2014-06-05T13:46:17.736Z · LW(p) · GW(p)

Okay, this is true. But LessWrong is currently a set of articles. So the medium is essentially unchanged, and any of these criticisms apply to the current form.

LessWrong activity seems to shift more into meatspace as time goes on.

We have the study hall for people with akrasia that provides different help than just reading an article about akrasia.

CFAR partly did grow out of LW and they hold workshops.

comment by pianoforte611 · 2014-06-04T02:20:33.383Z · LW(p) · GW(p)

LessWrong's focus on the bay-area/software-programmer/secular/transhumanist crowd seems to me unnecessary.

I don't understand what this means. LW is composed mostly from people of these backgrounds. Are you saying that this a problem?

But when people here tie rationality to being part of that subset, or to high-IQ in general, it seems a bit silly.

If by rationality you mean systematic winning (where winning can be either truth seeking (epistemic rationality) or goal achieving (instrumental rationality)) then no one is claiming that we have a monopoly on it. But if by rationality, you are referring to the group of people who have decided to study it and form a community around it, then yes most of us are high IQ and in technical fields. And if you think this is a problem, I'd be interested in why.

I also find the near-obsession with IQ a bit unsettling

In other words, my opponent believes something which is kind of like being obsessed with it, and obsession is bad. If you have a beef with a particular view or argument then say so.

Some of the material is essentially millennia old, self-knowledge and self-awareness and introspection aren't new inventions

Eliezer has responded to this (very common) criticism here

I don't know why you want LW to be packaged to a wide audience. I suspect this would do more harm than good to us, and to the wider audience. It would harm the wider audience because of the sophistication bias, which would cause them to mostly look for errors in thinking in others and not their own thinking. It takes a certain amount of introspectiveness (which LW seems to self-select for) not to become half-a-rationalist.

Replies from: Punoxysm
comment by Punoxysm · 2014-06-04T03:26:23.657Z · LW(p) · GW(p)

I don't understand what this means. LW is composed mostly from people of these backgrounds. Are you saying that this a problem?

If it creates an exclusionary atmosphere, or prevents people outside that group from reading and absorbing the ideas, or closes this community to outside ideas, then yes. But mostly I think that focusing on presenting these ideas only to that group is unnecessary.

If by rationality you mean systematic winning (where winning can be either truth seeking (epistemic rationality) or goal achieving (instrumental rationality)) then no one is claiming that we have a monopoly on it. But if by rationality, you are referring to the group of people who have decided to study it and form a community around it, then yes most of us are high IQ and in technical fields. And if you think this is a problem, I'd be interested in why.

I am really thinking of posts like this where many commenters agonize over how hard it would be to bring rationality to the masses.

In other words, my opponent believes something which is kind like being obsessed with it, and obsession is bad. If you have a beef with a particular view or argument then say so.

I did say what I have a beef with. The attitude that deliberate application of rationality is only for high-IQ people, or that only high-IQ people are likely to make real contributions.

Eliezer has responded to this (very common) criticism here

It's not a criticism - it's an explanation for why I don't believe it would be that difficult to package the content of the sequences for a general audience. None of it needs to be packaged as revelatory. Instead of calling rationality systematic winning, just call it a laundry list of methods for being clear-eyed and avoiding self-deception.

I don't know why you want LW to be packaged to a wide audience. I suspect this would do more harm than good to us, and to the wider audience. It would harm the wider audience because of the sophistication bias, which would cause them to mostly look for errors in thinking in others and not their own thinking. It takes a certain amount of introspectiveness (which LW seems to self-select for) not to become half-a-rationalist.

Several responses bring up the "half-a-rationalist" criticism, but I think that's something that can be avoided by presentation. Instead of "here's a bunch of tools to be cleverer than other people", present it as "here's a bunch of tools to occasionally catch yourself before you make a dumb mistake". It's certainly no excuse not to try to think of how a more broadly-targeted presentation of the sequences could be put together.

And really, what's the worst case-scenario? That articles here sometimes get cited vacuously kind of like those fallacy lists? Not that bad.

Replies from: pianoforte611
comment by pianoforte611 · 2014-06-04T19:33:35.897Z · LW(p) · GW(p)

Inclusiveness is not a terminal value for me. Certain types of people are attracted to a community such as LW, as with every other type of community. I do not see this as a problem.

Which of the following statements would you endorse if any?

1) LW should change in such a way as to be more inclusive to a wider variety of people.

1 a) LW members should change how they comment (perhaps avoiding jargon?) so as to be more inclusive.

1 b) LW members should talk change the topics that they discuss in order to be more inclusive.

2) LW should compose a rewritten set of sequences to replace the current sequence as a way of making the community more inclusive.

3) LW should compose a rewritten set of sequences and publish it somewhere (perhaps a book or a different website) to spread the tools of rationality.

4) LW should try to actively recruit different types of people than the ones that are naturally inclined to read it already.

Replies from: Punoxysm
comment by Punoxysm · 2014-06-04T20:21:32.353Z · LW(p) · GW(p)

I don't think LW needs to change dramatically (though more activity would be nice), I just think it should be acknowledged that the demographic focus is narrow; a wider focus could mean a new community or a growth of LW, or something else.

Mainly #3 and to an extent #4.

I'd modify and combine #4 and #1a/1b into:

5) We should have inclusionary, non-jargony explanations and examples at the ready to express almost any idea on rationality that we understand within LW's context. Especially ideas that have mainstream analogues, which is most of them. This has many potential uses including #1 and #4.

comment by ChristianKl · 2014-06-05T12:05:05.720Z · LW(p) · GW(p)

LessWrong's focus on the bay-area/software-programmer/secular/transhumanist crowd seems to me unnecessary.

What practical steps do you see to make LW less focused on that crowd? What are you advocating?

comment by Richard_Kennaway · 2014-06-04T06:24:05.299Z · LW(p) · GW(p)

If the sequences were being repackaged as a self-help book targeted towards the widest possible audience, what would they look like?

The book that Eliezer is writing? (What's the state of play on that, btw?)

Replies from: Punoxysm
comment by Punoxysm · 2014-06-04T06:59:53.210Z · LW(p) · GW(p)

Link for info?

But is he actually planning to change his style? He's more or less explicitly targeted the bay-area/software-programmer/secular/transhumanist, and he's openly stated that he's content with that focus.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-06-04T07:59:25.077Z · LW(p) · GW(p)

I don't have any more information. The book has been mentioned a few times on LW, but I don't know what stage it's at, and I haven't seen any of the text.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-06-04T10:13:14.191Z · LW(p) · GW(p)

It is a selection of 345 articles, together over 2000 pages, mostly from the old Sequences from Overcoming Bias era, and a few long articles from Eliezer's homepage. The less important articles are removed, and the quantum physics part is heavily reduced.

(I have a draft because I am translating it to Slovak, but I am not allowed to share it. Maybe you could still volunteer as a proofreader, to get a copy.)

comment by [deleted] · 2014-06-03T14:44:06.227Z · LW(p) · GW(p)

Effective animal altruism question: I may be getting a dog. Dogs are omnivores who seem to need meat to stay healthy. What's the most ethical way to keep my hypothetical dog happy and healthy?

Edit: Answers Pet Foods appears to satisfice. I'll be going with this pending evidence that there's a better solution.

Replies from: banx
comment by banx · 2014-06-03T17:32:14.369Z · LW(p) · GW(p)

I don't have a full answer to the question, but if you do feed the dog meat, one starting point would be to prefer meat that has less suffering associated with it. It is typically claimed that beef has less suffering per unit mass associated with it than pork and much less than chicken, simply because you get a lot more from one individual. The counterargument would be to claim that cows > pigs > chickens in intelligence/complexity to a great enough extent to outweigh this consideration.

I'm curious: are there specific reasons to believe that dogs need meat while humans (also omnivores) do not? A quick Google search finds lots of vegetarians happy to proclaim that dogs can be vegetarian too, but I haven't looked into details.

Replies from: Ben_LandauTaylor, None
comment by Ben_LandauTaylor · 2014-06-03T18:43:30.112Z · LW(p) · GW(p)

The counterargument would be to claim that cows > pigs > chickens in intelligence/complexity

My understanding is that pigs > cows >> chickens. Poultry vs mammal is a difficult question that depends on nebulous value judgments, but I thought it was fairly settled that beef causes less suffering/mass than other mammals.

Replies from: David_Gerard, None
comment by David_Gerard · 2014-06-04T14:21:19.777Z · LW(p) · GW(p)

Huskies love fish (for obvious practical reasons), and fish are just dumb. (Though the way we achieve that is to mix fishy cat food into our husky's dog food, which is random tinned dog food.)

comment by [deleted] · 2014-06-03T18:47:12.969Z · LW(p) · GW(p)

Pigs on top surprises me, given that I thought pigs had more intelligence/awareness than other meat sources (as measured by nebulous educated guessing on our part).

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-06-04T16:26:19.262Z · LW(p) · GW(p)

From his last sentence, Ben agrees with you. He has just reversed the meaning of the inequality sign.

Replies from: None
comment by [deleted] · 2014-06-04T18:01:48.689Z · LW(p) · GW(p)

You're right, I failed a parse check. Thanks!

comment by [deleted] · 2014-06-03T18:45:46.182Z · LW(p) · GW(p)

Here's a quick citation: http://pets.webmd.com/features/vegetarian-diet-dogs-cats

tldr: Dogs are opportunistic carnivores more than omnivores. They eat whatever they can get, and they'll probably survive without meat, but they'll be missing a bunch of things their bodies expect to have.

comment by [deleted] · 2014-06-06T00:42:35.241Z · LW(p) · GW(p)

My internship has a donation match. I want to donate to something life-extension related; tentatively looking at SENS, but I have some questions:

  • How can I quantify the expected value of money donated? The relevant metric would be "increase in the probability of me personally experiencing dramatic life extension." I have no idea how to go about estimating this, but this determines whether I want to save the money to spend on myself vs. donate it.
  • What other major reputable organizations are there in the biological life extension sphere? Are there any that could use additional money better than SENS?
comment by [deleted] · 2014-06-04T01:43:23.405Z · LW(p) · GW(p)

Today is election day here in Korea. Although I have voted, I have yet to see a satisfactory argument for taking the time to vote. Does anyone know of good arguments for voting? I am thinking of an answer that

  1. Does not rely on the signalling benefits of voting
  2. Does not rely on hyperrational-like arguments.
Replies from: mwengler, wadavis, ShardPhoenix, Oscar_Cunningham, asr
comment by mwengler · 2014-06-04T21:37:16.801Z · LW(p) · GW(p)

Well you see the government comes to you with a closed box that they say they have already filled with either a totalitarian government if they predicted you would not vote, but it is filled with a relatively free republic if they predicted you would vote. They filled the box long ago, however...

comment by wadavis · 2014-06-06T15:40:30.557Z · LW(p) · GW(p)

Up until 2011 in Canada, parties would receive by-the-vote subsidies to their budgets. This was strongly defended by the center and left parties as a way to keep big money out of politics and a measure of true democracy in our first-past-the-post system.

comment by ShardPhoenix · 2014-06-04T04:04:42.118Z · LW(p) · GW(p)

I once saw an argument that if you compare the chance of an election being decided by 1 vote to the benefits of getting your preferred party/candidate in power, which may be billions/trillions, then voting can be worth thousands of dollars - at least if you value those benefits in full rather than only for their effect on you (which is dubious).
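The back-of-the-envelope version of that argument is easy to make explicit. Both numbers below are hypothetical placeholders chosen for illustration, not estimates from any particular source:

```python
# Rough expected-value-of-voting sketch. Both inputs are illustrative
# assumptions, not empirical estimates.
p_decisive = 1e-7      # assumed chance that a single vote decides the election
value_gap = 5e10       # assumed total societal value difference between outcomes ($)

# Expected value of casting the vote, counting the full societal benefit
expected_value = p_decisive * value_gap
print(expected_value)  # 5000.0
```

The dubious step flagged above lives entirely in `value_gap`: the calculation only reaches "thousands of dollars" if you count the benefit to everyone, not just your personal share of it.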

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-06-04T05:32:05.421Z · LW(p) · GW(p)

http://lesswrong.com/lw/fao/voting_is_like_donating_thousands_of_dollars_to/

Replies from: ShardPhoenix
comment by ShardPhoenix · 2014-06-04T06:50:37.322Z · LW(p) · GW(p)

That's it, although checking that post and comments again I feel like it may be making an accounting error of some sort.

edit: Actually it's probably just positing excessive confidence (inspired by hindsight) in the value of getting your guy compared to the other guy.

comment by Oscar_Cunningham · 2014-06-04T09:49:31.342Z · LW(p) · GW(p)

Political parties will change their policies to capture more voters. So even though your vote won't change who wins the election, you will still shift the policies of the parties towards your own views.

Replies from: Lumifer
comment by Lumifer · 2014-06-04T14:47:41.798Z · LW(p) · GW(p)

So even though your vote won't change who wins the election, you will still shift the policies of the parties towards your own views.

You don't achieve this by voting -- you achieve this by loudly proclaiming that you will vote on the basis of issues A, B, and C.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2014-06-04T17:31:55.004Z · LW(p) · GW(p)

I think half an hour to go and vote is probably more effective than half an hour of loudly proclaiming, but I can't think of a test for this. Perhaps look at elections where the vote showed that what people wanted was different from what the media said people wanted, and then see which way the parties moved.

Replies from: Lumifer
comment by Lumifer · 2014-06-04T17:45:47.425Z · LW(p) · GW(p)

I think half an hour to go and vote is probably more effective than half an hour of loudly proclaiming

The problem is that the party, when considering whether to change policies, has no idea who voted for/against it for which reason. All it knows is that it gained or lost a certain number of voters (of certain demographics) in between two elections.

If issue Z is highly important to you and you vote on the basis of the party's attitude to it, how does the party know this if the only thing you do is silently drop your ballot?

Replies from: Eugine_Nier, Oscar_Cunningham
comment by Eugine_Nier · 2014-06-06T01:53:16.765Z · LW(p) · GW(p)

If issue Z is highly important to you and you vote on the basis of the party's attitude to it, how does the party know this if the only thing you do is silently drop your ballot?

Vote for a third party that cares about Z.

Replies from: Lumifer
comment by Lumifer · 2014-06-06T15:06:49.104Z · LW(p) · GW(p)

Vote for a third party that cares about Z.

Provided that one exists. And provided that it isn't completely screwed up about issues A to Y. And provided you are willing to sacrifice the rest of your political signaling power to a signal about Z.

Replies from: Gav
comment by Gav · 2014-06-08T01:38:02.019Z · LW(p) · GW(p)

If you're lucky enough to be in a country with preferential voting, there's usually a handful of 3rd parties with various policies (with published preferences so you know where the vote will 'actually' end up). So you'll at least have the opportunity to cast a few bits of information, rather than a single bit.

Obligatory Ken the Voting Dingo comic about how it's not possible to waste your vote: http://chickennation.com/website_stuff/cant-waste-vote/web-700-cant-waste-vote-SINGLE-IMAGE.png "I'll look into this 'hugs'"

Replies from: Lumifer
comment by Lumifer · 2014-06-08T01:44:14.197Z · LW(p) · GW(p)

Obligatory Ken the Voting Dingo comic

I must say I appreciate the comic which starts with "It's me, your good friend Dennis the Erection Koala" :-D

On the other hand if you actually do care about conveying bits of information, there are much more effective ways than voting.

comment by Oscar_Cunningham · 2014-06-04T18:52:34.409Z · LW(p) · GW(p)

Ah yes, you're right. That clearly weakens the effect of voting substantially.

comment by asr · 2014-06-04T02:44:42.163Z · LW(p) · GW(p)

The only reasons I can think of are your #1 and #2. But I think both are perfectly good reasons to vote...

comment by lukeprog · 2014-06-03T22:44:02.583Z · LW(p) · GW(p)

A handy quote by Alvin Toffler, from his introduction for The Futurists:

If we do not learn from history, we shall be compelled to relive it. True. But if we do not change the future, we shall be compelled to endure it. And that could be worse.

comment by NancyLebovitz · 2014-06-03T16:06:32.642Z · LW(p) · GW(p)

A friend posted this:

anyone know of an online timer that will open a pop-up window at a specified time, with a message I can enter myself?

She's found timers which pop up windows, but none where she can add a message.

Replies from: fezziwig, shminux, Richard_Kennaway, Unnamed
comment by fezziwig · 2014-06-03T20:45:33.916Z · LW(p) · GW(p)

What is she ultimately trying to achieve? More aggressive reminders than a normal calendar app can give you?

Also: computer or smartphone?

Replies from: jobear
comment by jobear · 2014-06-03T21:36:35.383Z · LW(p) · GW(p)

I'm having memory problems which make it hard to function, and I'm trying to work around those. For example, if I have started a process and want to do something else while it runs, I want a reminder to check on it/go back to it in x minutes.

I want it to be on my computer instead of my phone. In theory I could use the reminder feature on my iPhone, but this is a different kind of reminder and I don't want to dilute the meaning of the reminder sound. Also, I want something that will pop up and interrupt what I'm doing (and not rely on noise that will bother other people), and I want it to be easy to set. Most of the things I can find have an alarm but not a pop-up.

This: http://time-in.info/timer.asp has a popup and lets me enter a custom message... but the custom message does not show on the popup window. Doh. It may be the best I can do, but I imagine I could have several timers running at once, so it would be confusing without a note. This http://www.copleys.com/timer.htm has the popup behavior I want, but doesn't let me add a message.

comment by shminux · 2014-06-03T16:49:44.441Z · LW(p) · GW(p)

Google Calendar does this, I think.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-06-03T20:10:31.368Z · LW(p) · GW(p)

Google Calendar puts the message in the Google Calendar window rather than a pop-up in front of whatever you're doing.

Replies from: Lumifer
comment by Lumifer · 2014-06-03T21:23:36.431Z · LW(p) · GW(p)

Google has what it calls desktop notifications. See e.g. here

comment by Richard_Kennaway · 2014-06-04T06:38:54.209Z · LW(p) · GW(p)

I use Apple's iCal, which of course is only for Apple devices. It pops up a notification showing the "description", "location", and time of the event. I put the quotes in because you can use those strings for whatever you like. If the calendar is shared in iCloud then an event entered on any of your devices pops up on all of them.

comment by Unnamed · 2014-06-03T21:38:38.282Z · LW(p) · GW(p)

You can do this in Windows using Task Scheduler.

Replies from: jobear
comment by jobear · 2014-06-03T22:43:51.400Z · LW(p) · GW(p)

task scheduler does it, but it's a ton of steps - not something I would want to set up dozens of times a day.

Replies from: kevin_p
comment by kevin_p · 2014-06-04T04:23:45.888Z · LW(p) · GW(p)

Here's a quick-and-dirty batch file I made to add a reminder to the task scheduler. Copy it into Notepad and save it as something.bat , then make a link to it on your desktop or wherever.

@echo off
rem Prompt for the reminder text and the time to show it
set /p MESSAGE=What do you want to be reminded of? 
set /p ALERTTIME=When do you want to be reminded (hh:mm:ss)? 
rem Build a unique task name from the current date and time
set TASKNAME=%DATE:/=_%_%TIME::=_%
set TASKNAME=%TASKNAME:.=_%
rem Schedule a one-shot task that pops up the message at the given time
schtasks /create /sc once /tn %TASKNAME% /tr "msg * %MESSAGE%" /st %ALERTTIME%
pause

It prompts you for the text and the time to pop up the alert. It does have some limitations (you need to specify the exact time rather than e.g., "alert me in 30 minutes", and will only work for the same day), but if people think it's useful I can improve it.

It also needs you to enter your password to schedule the task. It's possible to avoid this by putting your username/password into the batch file, but that's obviously a security risk so I wouldn't recommend it. If you want to do so anyway you can modify the second-to-last line of the file to add the following text (replacing 'username' and 'password' with your actual username and password):

/ru username /rp password
comment by Cube · 2014-06-05T15:45:42.995Z · LW(p) · GW(p)

I think I've figured out a basic neural gate. I will do my best to describe it.

4 nerves: A, B, X, Y. A has its tail connected to X. B has its tail connected to X and Y. If A fires, X fires. If B fires, X and Y fire. If A then B fire, X will fire then Y will fire (X needs a small amount of time to reset, so B will only be able to activate Y). If B then A fire, X and Y will fire at the same time.

This feels like it could be similar to the AND circuit. Just like modern electronics need AND, OR, and NOT, if I could find all the nerve gates I'd have all the parts needed to build a brain. (or at least a network based computer)
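The gate as described can be sketched as a discrete-time simulation. The one-step refractory period for X is an assumed reading of "a small amount of time to reset":

```python
# Minimal discrete-time sketch of the described gate. A or B firing drives X;
# B also drives Y; X has a one-step refractory period (an assumption).

def simulate(inputs):
    """inputs: list of sets of neurons firing at each step, e.g. [{'A'}, {'B'}]."""
    outputs = []
    x_refractory = False
    for firing in inputs:
        x_driven = ('A' in firing) or ('B' in firing)
        x = x_driven and not x_refractory   # X can't fire while resetting
        y = 'B' in firing                   # only B connects to Y
        outputs.append({name for name, fired in (('X', x), ('Y', y)) if fired})
        x_refractory = x                    # X needs one step to reset
    return outputs

print(simulate([{'A'}, {'B'}]))  # A then B: X fires, then only Y (X is resetting)
print(simulate([{'B'}, {'A'}]))  # B then A: X and Y together, then A hits refractory X
```

Under this timing assumption the A-then-B case behaves like the sequence-sensitive gate described above: the same two inputs produce different output patterns depending on their order.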

Replies from: philh
comment by philh · 2014-06-06T13:33:09.823Z · LW(p) · GW(p)

How familiar are you with this area? I think that this sort of thing is already well-studied, but I have only vague memories to go by.

As an aside, you only need (AND and NOT) or (OR and NOT), not all three; and if you have NAND or NOR, either of those is sufficient by itself.
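The sufficiency of NAND alone is easy to verify directly; a quick sketch:

```python
# NAND is functionally complete: NOT, AND, and OR can all be built from it.

def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)           # NAND with both inputs tied together

def and_(a, b):
    return not_(nand(a, b))     # invert NAND

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: a OR b == NOT(NOT a AND NOT b)

# Exhaustive truth-table check
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
    assert not_(a) == (not a)
print("NAND builds NOT, AND, and OR")
```

The same construction works with NOR by symmetry, which is why either gate alone is enough.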

Replies from: Cube
comment by Cube · 2014-06-06T15:56:19.532Z · LW(p) · GW(p)

I'm a computer expert but a brain newbie.

The typical CPU is built from n-NOR, n-NAND, and NOT gates. The NOT gate works like a 1-NAND or a 1-NOR (they're the same thing, electronically). Everything else, including AND and OR, is made from those three. The actual logic only requires NOT and {1 of AND, OR, NAND, NOR}. Notice there are several sets of minimum gates and a larger set of used gates.

The brain (I'm theorizing now, I have no background in neural chemistry) has a similar set of basic gates that can be organized into a Turing machine, and the gate I described previously is one of them.

Replies from: None, V_V, Lumifer
comment by [deleted] · 2014-06-06T18:40:20.449Z · LW(p) · GW(p)

We don't run on logic gates. We run on noisy differential equations.

comment by V_V · 2014-06-08T18:49:51.565Z · LW(p) · GW(p)

The brain (I'm theorizing now, I have no background in neural chemistry) has a similar set of basic gates that can be organized into a Turing machine, and the gate I described previously is one of them.

No.
You can represent logic gates using neural circuits, and use them to describe arbitrary finite-state automata that generalize into Turing-complete automata in the limit of infinite size (or by adding an infinite external memory), but that's not how the brain is organized, and it would be difficult to have any learning in a system constructed in this way.

comment by Lumifer · 2014-06-06T16:23:12.551Z · LW(p) · GW(p)

You might want to look into what's called ANN -- artificial neural networks.

Replies from: Punoxysm
comment by Punoxysm · 2014-06-08T18:07:23.779Z · LW(p) · GW(p)

ANNs don't begin to scratch the surface of the scale or complexity of the human brain.

Not that they're not fun as toy models, or useful in their own right, just remember that they are oblivious to all human brain chemistry, and to chemistry in general.

Replies from: Lumifer
comment by Lumifer · 2014-06-09T01:48:24.110Z · LW(p) · GW(p)

ANNs don't begin to scratch the surface of the scale or complexity of the human brain.

Of course, but Cube is talking about "a similar set of basic gates that can be organized into a Turing machine" which looks like an ANN more than it looks like wetware.

comment by Tenoke · 2014-06-09T13:04:11.485Z · LW(p) · GW(p)