Projects-in-Progress Thread

post by lifelonglearner · 2017-01-21T05:11:32.723Z · score: 9 (10 votes) · LW · GW · Legacy · 47 comments

From Raemon's Project Hufflepuff thread, I thought it might be helpful for there to be a periodic check-in thread where people can post about their projects in the spirit of cooperation. This is the first one. If it goes well, maybe we can make it a monthly thing.

If you're looking for a quick proofread, trial users, a reference to a person in a specific field, or something else related to a project-in-progress, this is the place to put it. Otherwise, if you think you're working on something cool the community might like to hear about, I guess it goes here too.

47 comments

Comments sorted by top scores.

comment by Elo · 2017-01-22T01:14:29.908Z · score: 7 (7 votes) · LW · GW

https://docs.google.com/spreadsheets/d/1Xh5DuV3XNqLQ4Vv8ceIc7IDmK9Hvb46-ZMoifaFwgoY/edit#gid=0

https://wiki.lesswrong.com/wiki/Mi_Casa_Lesswrong

Generating a list of houses that are willing to take visiting rationalists around the world. Feel free to add yourself.

comment by Gram_Stone · 2017-01-22T04:17:32.687Z · score: 4 (4 votes) · LW · GW

I think it's possible to exercise Hufflepuff virtue in the act of encouraging more Ravenclaw virtue, right? That is, getting an arbitrary ball rolling is a Hufflepuff thing to do, even if you roll the ball in a Ravenclaw direction? That's an important distinction to me.

A mid-term goal of mine is to replicate Dougherty et al.'s MINERVA-DM in MIT/GNU Scheme (it was originally written in Pascal; no, I haven't requested the authors' source code, and I don't intend to). I also intend to test at least one of its untested predictions using Amazon Mechanical Turk, barring any future knowledge that makes me think that I won't be able to obtain reliable results (which has only become less plausible as I've learned more; e.g., Turkers are more representative of the U.S. population than the undergraduate population that researchers routinely sample from in behavioral experiments, and there are also a few enthusiasts who have done some work on AMT-specific methodological considerations).

MINERVA-DM is a formal model of human likelihood judgments that successfully predicts the experimental findings on conservatism, the availability heuristic, the representativeness heuristic, the base rate fallacy, the conjunction fallacy, the illusory truth effect, the simulation heuristic, and the hindsight bias. MINERVA-DM can also be described as a modified version of Bayes' Theorem. I'm not too far yet, having just started learning Scheme/programming-in-general, but I have managed to cobble together a one-line program that outputs an n-vector with elements drawn randomly with replacement from the set {-1, 0, 1}, so I guess I've technically started writing the program.
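For readers curious what that first step looks like, here is a rough Python analogue (the commenter is working in Scheme, so this is only an illustration). The trace generator matches the description above; the similarity and echo-intensity formulas follow Hintzman's MINERVA 2, the memory model MINERVA-DM builds on. Function names are my own, and this is a sketch of the core arithmetic, not the full decision-making model.

```python
import random

def random_trace(n, seed=None):
    """Return an n-vector with elements drawn with replacement from {-1, 0, 1}."""
    rng = random.Random(seed)
    return [rng.choice((-1, 0, 1)) for _ in range(n)]

def similarity(probe, trace):
    """MINERVA-style similarity: the dot product of probe and trace,
    divided by the number of features that are nonzero in either vector."""
    relevant = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    if relevant == 0:
        return 0.0
    return sum(p * t for p, t in zip(probe, trace)) / relevant

def echo_intensity(probe, memory):
    """Echo intensity: the mean cubed similarity of the probe to every
    stored trace (cubing sharpens the contribution of close matches)."""
    return sum(similarity(probe, t) ** 3 for t in memory) / len(memory)
```

A probe identical to a stored trace yields similarity 1.0, so a memory containing only that trace returns echo intensity 1.0; degraded or partial traces pull the intensity down, which is the lever the model uses to reproduce biased likelihood judgments.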

It's worth saying that I'm not very confident that MINERVA-DM won't be overturned by a better model, and that's not the point.

I need some sort of example, and MINERVA-DM has good properties as an example, because its math is exceedingly simple (capital-sigma notation, the arithmetic mean, basic probability theory (see Bolstad's Introduction to Bayesian Statistics, Ch. 3), etc.). There are probably plenty of improvements that we need to and could make as a community, but my own concern is that it's never been winter-night-clear to me why at least some of us aren't trying to perform (Keyword Alert!) heuristics and biases/judgment and decision making (JDM)/behavioral decision theory research on LW or on whatever conversational focus we may be using in the near- to mid-term future. There is no organization in the community for this; CFAR is the closest thing to this, and AFAICT, they are not doing basic research into H&B/JDM/BDT. People around here seem to me more likely than most to agree that you're more likely to make progress on applications if you have a deep understanding of the problem that you're trying to solve.

I think it is intuitive that you simply cannot productively do academic work solely in the blogosphere, and when you're explaining a counterintuitive point, a point that is not universal background knowledge, you should recurse far enough to prevent misunderstandings in advance. I no longer find it intuitive that you can't do a substantial amount of work on the blogosphere. For one, a good deal of academic work, especially the kind we're collectively interested in, doesn't require any special resources. Reviews, syntheses, analyses, critiques, and computational studies can all be done from a keyboard. As for experiments, we don't need to buy a particle accelerator for psych research, you guys; this is where Mechanical Turk comes in. E.g., see these two blog posts wherein a scientist replicates one of Tversky and Kahneman's base rate fallacy results with N = 66 for US$3.30, and one of their conjunction fallacy results with N = 50 for US$2.50. (Here's a list with more examples.)

Arguing that there's important academic work that doesn't require anything but a computer (reviews, syntheses, analyses, computational studies), and demonstrating that you can test experimental predictions with your lunch money, seems like a good start on preempting the 'you can't do real science outside of academia' criticism. (It's not like there isn't a precedent for this sort of thing around here anyway.) It also prevents people from calling you a hypocrite for proposing that the community steer in a certain direction without your doing any of the pedaling. I probably would've kept quiet for a lot longer if I didn't think it was important to the community to respond to calls like this article, especially considering that we may be moving to a new platform soon.

comment by Elo · 2017-01-22T01:24:52.824Z · score: 4 (4 votes) · LW · GW

The homepage was recently edited. I don't like the edit and would like to rewrite it. I also don't want to get into an editing war with other people. So if you would like to collaborate on the new front page and what you might want to see on it, the document is here:

https://docs.google.com/document/d/1DRhgrnWT31AfF5JyoHTAo7Q3HJqPZ72rmO8l018qTa4/edit

The previous front page can be found here:

https://wiki.lesswrong.com/index.php?title=Lesswrong%3AHomepage&diff=15726&oldid=15014

I am thinking we might want to emphasise (in no particular order):

  • local meetups
  • discussion board + ongoing activity
  • global map of users (zeemaps)
  • lesswrong slack
  • lesswrong IRC
  • a brief history of lesswrong
  • the sequences and HPMOR
  • our friends and their blogs (mindingourway, agentyduck)
  • offshoot businesses, organisations and groups (EA, CFAR, MIRI, Beeminder, Complice, Mealsquares)
  • some other memetic ideas we follow (cryonics, transhumanism, AI, Programming, rational fiction)
  • some kind of list of rat-houses around the world.
  • the latest welcome thread
comment by lifelonglearner · 2017-01-22T16:57:23.425Z · score: 0 (0 votes) · LW · GW

Thanks for putting this together!

I'm unsure how much info we want to put on the LW home page (I'm leaning towards less stuff is better). Are there good repositories / intro pages where we could put the rest of the info?

Also, made a few edits / comments for readability and flow on the doc.

comment by Elo · 2017-01-22T21:04:03.575Z · score: 0 (0 votes) · LW · GW

Agree with most of your edits.

comment by turchin · 2017-01-21T11:56:04.788Z · score: 4 (4 votes) · LW · GW

I am writing an article about fighting aging as a cause for effective altruism - early draft, suggestions welcome.

And also an article and a map about "global vs local solutions of the AI safety problem" - early draft, suggestions welcome.

comment by pico · 2017-01-21T19:01:35.296Z · score: 2 (2 votes) · LW · GW

Please PM me a draft of your fighting aging article if you want to - I can read it and offer feedback

comment by turchin · 2017-01-21T20:37:37.961Z · score: 1 (1 votes) · LW · GW

Thanks, I will do it after I finish incorporating a substantial contribution I got from another source.

comment by pwno · 2017-01-21T08:36:34.230Z · score: 3 (3 votes) · LW · GW

I recently launched a new service called Hermes. It connects users with dating experts for live texting advice. It runs on a unique platform designed to greatly simplify sharing and discussing text conversations. Since modern dating is changing so rapidly, especially with the rise of online dating apps and a growing population of young people glued to their phones, helping people improve their texting can greatly improve their dating life. I've been a software developer and dating coach for over 10 years so this is sort of my passion project.

I'd be happy to get some trial users. General feedback is greatly appreciated too.

comment by lifelonglearner · 2017-01-21T23:49:14.217Z · score: 2 (2 votes) · LW · GW

Just tried out the Hermes trial! I found the coaches aren't too responsive (~1 hr delay between my first message and their response). I'll see what thoughts they have, and I'll give feedback on the actual advice later.

The layout is pretty cool, though!

comment by pwno · 2017-01-22T04:36:52.678Z · score: 0 (0 votes) · LW · GW

Thanks for trying it out. Hermes is still a work in progress and one of our top priorities now is improving responsiveness.

Looking forward to helping you out!

comment by JacobLW (JacobLiechty) · 2017-01-22T07:01:52.824Z · score: 2 (2 votes) · LW · GW

I'm considering buckling down and coordinating a host of existing abstract criticisms/expansions of Rationalism into a series of posts explaining Kegan and Metarationality, roughly as described in Meaningness but in a more Less Wrong style that argues more directly from Sequence-level "first principles."

I'm a little wary of the worthwhileness of this project, and I suspect many on Less Wrong are ambivalent about Kegan and Chapman, displaying a kind of vague annoyance that Rationalist principles are being challenged rather than any excitement that those principles could be built upon in a manner distinguishable from traditional methods. The main sentiment seems to be that "anything that successfully builds on rationality is automatically now a part of rationality."

I understand the ambivalence, I think. There should be a very high bar for "things that are successful critiques" of rationality. I'd love to inquire how much interest there would be in a high quality version of this project that is extremely self-aware of any criticisms, in order to assess whether to put in the due diligence to ensure it is of that quality. Chapman's entire blog is one thing, but a treatment that brings systems-level insights to his abstract statements could be quite revealing of what is actually meant by them.

In any case, I've started by compiling many of the source materials that would be useful to such a project in a Discussion Post.

comment by Viliam · 2017-01-23T15:51:24.068Z · score: 2 (2 votes) · LW · GW

Thank you for trying to improve the quality of the debate! If you could rewrite the most important insights as a new "sequence" that would be awesome.

If I may express my opinion, I would prefer reading a text that would not include criticism of what to me seems like a strawman of "rationalists", and simply focus on the specific ideas. (Something like writing "2+2=4" instead of "rationalists believe that 2+2=3, but post-rationalists believe that 2+2=4 and here is why".) I am curious how much of post-rationality will remain after the tribal aspects are removed.

comment by JacobLW (JacobLiechty) · 2017-01-24T01:14:32.379Z · score: 0 (0 votes) · LW · GW

Thanks! I agree with the sentiment that a critique of rationalism as a whole would be misguided, not least because few are qualified to give one in a way that wouldn't be divisive. And that's assuming that post-rationalism is meant to be such a critique, which is incidentally why I prefer the term "metarationalism."

I think identity politics has discouraged many individual post-rationalist thoughts from finding their way onto Less Wrong, and this seems unfortunate.

comment by Viliam · 2017-01-24T11:41:53.966Z · score: 1 (1 votes) · LW · GW

Tone arguments are often frowned upon, but there is a difference between saying "you guys are making a specific mistake here, let me explain, because this is very important" and "you guys are hopelessly wrong, I am going away and starting my own dojo" -- even if technically both of them mean "you are wrong, and I am right".

It would be especially bad if the guy starting his own new dojo happens to be right about a specific thing X and also to be wrong about a specific thing Y. Now believing in "neither X nor Y" becomes the mark of the old tribe, and believing in "both X and Y" becomes the mark of the new tribe. Which seems to me what typically happens in politics.

I'd like to be able to consider the "postrationalist" or "metarationalist" claims individually, perhaps to agree with some, disagree with some, and express uncertainty about some. Instead of having two separate packages, and being told to choose the better one.

(Then of course remains the problem with the identity of a "rationalist", where I don't expect people to agree, because that's a thing of aesthetic preferences and social pressures. I'm not pretending any middle ground here; I enjoy the label of "rationalist" or "x-rationalist", and I try to be the one who can cooperate and is willing to pay the cost, hoping to become stronger, as a team. I don't think my contribution matters a lot, but I don't see that as a reason for defecting.)

comment by gjm · 2017-01-23T14:55:48.162Z · score: 2 (2 votes) · LW · GW

a kind of vague annoyance that Rationalist principles are being challenged

I certainly see some negative attitudes towards this sort of thing on LW, but it doesn't look to me at all like "vague annoyance that Rationalist principles are being challenged". Could you explain why you think that's what it is?

(Full disclosure: your description above seems to me like an example of my snarky thesis that postrationality = knowing about rationality + feeling superior to rationalists. But I think that in feeling that way I'm being uncharitable in almost exactly the way I'm suspecting you of being uncharitable. :-) )

For what it's worth, I'm not a fan of the notion that anything that successfully builds on rationality is a part of rationality. Not because it's exactly wrong, but because surely it could happen that the self-identified rationalist community has a wrong or incomplete idea of what actually constitutes effective thinking. In that case, a New Improved Version should indeed be "part of rationality", but until the actual so-called rationalists catch up it might not look that way. And if the rationalist community were sufficiently dysfunctional, calling the New Improved Version "rationality" might be counterproductive. I am not claiming that any of this is actually the case, and in particular I am not claiming that the "postrationalists" or "metarationalists" are in fact in possession of genuine improvements on LW-style rationality. But it's not a possibility that can be ruled out a priori, and this "automatically part of rationality" thing seems to me like it fails to acknowledge the possibility.

comment by JacobLW (JacobLiechty) · 2017-01-24T01:48:19.048Z · score: 1 (1 votes) · LW · GW

You make a good point about the charitability of that statement! I am probably conflating unrelated anecdotes of challenges to rationalism regardless of whether they came from "postrationality." Indeed, a lot of my attraction to these ideas came from the fact that I had originally experienced problems with rationality directly, and metarationality later offered some descriptive insights into those issues.

Wholesale critiques of rationalism are unsurprising, but difficult to construct for those committed to maintaining continuity with the movement; it is a difficult coordination problem. If my life had gone slightly differently, I might have stopped calling myself a rationalist entirely at one point. It was a circumstantial mix of experiencing non-dysfunctional rationality directly along with discovering metarationality that made it feel proper to stick around.

The coordination problem arises in how these wholesale criticisms get applied. It's definitely easy for uncharitability to start abounding, especially where harms have been had. The anecdotes I've described are often contexts where I personally challenged a core rationalist tenet, and perhaps the thing for me to learn there is that any reactions I may have received are also the product of how I did the challenging.

I do think there are some very fascinating insights, coming largely from the metarationalist space, that offer a picture of what rationalism was in the first place but ultimately paint it as incomplete, if foundational. These are insights which will often feel like critiques, and the extent to which the insights are correct may determine how much internal conflict is felt if they cause people to have to reorient. There's the idea from Kegan that developmental psychology comes in stable Stages, one of which has been interpreted as being rationalistic. Metarationality has a set of claims of how to shake people out of their current mode/stage, and I can speak for myself when I say that it does not often feel pleasant. To your point above, if there ever were a set of insights springing from rationality but different enough to be distinguished, one could expect that some would choose to stay and that others would leave, and that there would be at least some unavoidable conflict. And if so, it's largely a matter of individual competence on how much coordination can be achieved. Ideally there would be a large amount of continuity between the mindspaces, and a good way to encourage that is starting with high degrees of charitability. I'm happy to do my part!

comment by gjm · 2017-01-24T02:34:30.927Z · score: 1 (1 votes) · LW · GW

I don't have much to say to most of that besides nodding my head sagely. I will remark, though, that "developmental stage" theories like Kegan's almost always rub me the wrong way, because they tend to encourage the sort of smugly superior attitude I fear I detect in much "postrationalist" talk of rationalism. I don't think I have ever heard any enthusiast for such a theory place themselves anywhere other than in the latest "stage".

(I don't mean to claim that no such theory can be correct. But I mistrust the motives of those who espouse them, and I fear that the pleasure of looking down on others is a good enough explanation for much of the approval such theories enjoy that I'd need to see some actual good evidence before embracing such a theory. I haven't particularly looked for such evidence, in Kegan's case or any other; but nor have I seen anyone offering any.)

comment by JacobLW (JacobLiechty) · 2017-01-24T07:10:38.996Z · score: 1 (1 votes) · LW · GW

It's worth noting that a psych theory having untruthy reasons to be believed is actually evidence the theory is incorrect, especially if one of the pieces of evidence for it originally was "other people seem to believe this theory." The superiority angle is definitely one that is immediately recognizable about postrationalist ideas, even on my own personal introspection. In the circles of people I talk about this with, the question of "what stage you're on" is almost completely tabooed for this reason, with there being something of a running joke that "Stage 5s never claim to be Stage 5s."

The pitfalls of belief are hardly a reason to change the underlying theory, but if something like a non-upward system (without subsequent stages, but more like a back-and-forth between alternate modes) were equally descriptive of reality, it would be preferable. I've experienced some anecdotal and community evidence for this view, from non-rationality communities who seem to have independently discovered a similar system of "relation" and "distinction" modes which correspond seemingly to odd and even Kegan stages. Kegan doesn't seem to have much to say about what, if anything, comes after Stage 5 besides that it perhaps encompasses any further thinking one might do. But from my anecdote (with zero rigor) it seems that past some stage adults just do the adult thing of punctuating stable modes with periods of nihilism followed by meaning-making of the relation and distinction types, all the way until we die. Under this (far underdeveloped) view, people in the rationality community are a collection of different maturities all undergoing a very strong memetically unified Distinction from uncalculation, bias, defaults, and politics. It seems this is something one can do productively no matter how mature one is. If we take Rationalism as a movement seriously, our self-reflective psychology is probably anti-inductive enough to not be particularly easy to describe with any fixed psychological theory anyway, so we take what we can get.

That all being said, it's important to remember the kind of criticisms of effective altruism and rationality that have been leveled by non-rationalists this whole time. There's certainly a case to be made that we all sound quite superior to "ineffective" "wrong" people. The charge we should always take up is that of confident humility, being unafraid to do the work to become less wrong while never taking the chance to claim anything but luck of the draw at the most fundamental level, recognizing the mental pathways we want to strengthen without ever using it as a status chip. It seems something could be done similarly for people wanting to move past their rationalist selves, acknowledging the strong aspects of what they feel improves their lives and thinking without ever cashing it into their arrogance banks.

comment by Viliam · 2017-01-24T14:54:55.962Z · score: 2 (2 votes) · LW · GW

When I learned about Kohlberg's stages of moral development at school, there was a nice example of a moral problem (something like the Trolley problem, but I think it was about stealing an expensive medicine to heal someone) where either side could be argued from each stage of moral development. For example, you could make a completely selfish argument for either side "I don't care about anyone's property" or "I don't care about anyone's health", but you could also make an abstract principled argument for either side "we should optimize for an orderly society" or "we should optimize for helping humans" (simplified versions). The lesson was that the degree of moral development is not the same as the position on an issue.

If I look at the "rationality / postrationality" vs "Kegan's stages" using similar optics, I can see how people on different stages could still argue for either side. Therefore, one could "explain" either side as a manifestation of any of stages 3, 4, and 5.

If the Stage 3 is "socially determined, based on the real or imagined expectations of others", we could argue that people who use the label "rationalists" do it because they are in the Stage 3, and they believe that other "rationalists" expect them to use this label, so they follow the social pressure. But just as well we could argue that people who avoid the label "rationalists" (and use "post-rationalists" instead) do it because their social environment disapproves of the "rationalist" label. Both sides could be following social pressure, only different social pressures, from different groups of people. Maybe "rationalists" are scared that they could lose their group identity. And maybe "post-rationalists" are scared that someone from their social group could pattern-match them to "rationalists" and consequently exclude them from their group, whatever it happens to be (academia, buddhists, cool postmodern people, etc.).

If the Stage 4 is "determined by a set of values that they have authored for themselves", we could similarly argue that "rationalists" have chosen the rational way for themselves, in defiance of the whole society (rejected religion and mysterious answers, criticized education that teaches the teachers' passwords), and using reason and science as their guides they found people with similar values at LessWrong, thanks to Aumann's "great minds think alike" theorem. But just as well we could argue that "post-rationalists" have chosen the post-rational way for themselves, in defiance of the Less-Wrong "rationalists". People from both groups can feel like heroic lonely warriors in an ignorant world dismissive of their ideas.

If people in Stage 5 are "no longer bound to any particular aspect of themselves or their history, and they are free to allow themselves to focus on the flow of their lives", we could find supportive arguments for that, too. The zero-th virtue ("do not ask whether it is 'the Way' to do this or that; ask whether the sky is blue or green; if you speak overmuch of the Way you will not attain it"), internal criticism of LW as "shiny distraction" on one side; abandoning the "rationalist" label on the other side.

What most likely happens in reality is that both sides certainly attract various kinds of people. (And even according to Kegan, one person is often in multiple stages at the same time.) However, here I am going to break the symmetry and say that to me it seems the "post-rationalist" side is almost defining themselves as "we are in the Stage 5, and those who identify as 'rationalists' are in the Stage 4". At least this is how it seems to me from outside. (But complaining too much about this would be the pot calling the kettle black, because "rationalists" define themselves as "we are the rational ones, in the insane world". So in a karmic sense they deserved such a comeback.)

Also, accusing other people of not being in Stage 5 feels to me like a kafkatrap. There is no way to defend against such an accusation, because whatever evidence of being in Stage 5 you bring can be dismissed by "Stage 5's never claim to be Stage 5's, so everything you said is evidence of you not being in Stage 5". (But it probably doesn't work the other way. If you admit that you are not in Stage 5, that statement will be taken at face value. At least I think so; I didn't actually try this.) So how does one convince others that they are in Stage 5? From observation, the solution seems to be having a blog about Kegan's stages, and judging others as not being at Stage 5 yet... if you do this, you establish yourself as an expert on Stage 5, and by definition only people on Stage 5 can be experts on Stage 5. If these are the rules of the game, I don't want to play it. (Note how I used the same cheap status trick here: defining other people as pawns in a system, and myself as the smart one who is above and beyond the system. Meh. Oops, I did it again. I am so meta I must be at Stage 8 at least. Oops, I am doing it again. I admit the game is a bit addictive.)

For me, the "rationalist" movement is a place where people similar to me can come and find each other. (Roughly defined as: high IQ, non-religious, trying to "win", willing to help each other, trustworthy, not interested in status games. There is probably more that I can't easily describe here; probably some clicking on personality level.) Even most people who come to LW meetups don't satisfy my criteria, but there at least I can find the few ones much easier than in the general population. I would be sad to lose this one coordination point. Meeting such people brings value to my life; I find it emotionally satisfying to talk with people openly about topics that interest me without having to censor my thoughts or explain long inferential distances; sometimes I also get some useful advice. At this moment I don't see any value I could get from "post-rationality", but I am willing to learn, as long as it doesn't feel to me as someone just playing status games, because I have low tolerance for that.

comment by waveman · 2017-01-24T04:28:40.258Z · score: 0 (0 votes) · LW · GW

nor have I seen anyone offering any [evidence]

Kegan has published a lot of evidence about the consistency of measurements in his scheme. See "A guide to the subject-object interview: its administration and interpretation" by Lisa Lahey [and four others]. As for validity, not so much, but it does build on the widely accepted work of others (Piaget etc.), and "The Evolving Self" has about 8 pages of citations and references, including

Kegan, R. 1976. Ego and truth: personality and the Piagetian paradigm. Ph.D. dissertation, Harvard University.

_ 1977. The sweeter welcome: Martin Buber, Bernard Malamud and Saul Bellow. Needham Heights, Mass.: Wexford.

_ 1978. Child development and health education. Principal 57 (3): 91-95.

_ 1979. The evolving self: a process conception for ego psychology. Counseling Psychologist 8 (2): 5-34.

_ 1980. There the dance is: religious dimensions of developmental theory. In Toward moral and religious maturity, ed. J. W. Fowler and A. Vergote. Morristown, N.J.: Silver Burdette.

_ 1981. A neo-Piagetian approach to object relations. In The self: psychology, psychoanalysis and anthropology, ed. B. Lee and G. Noam. New York: Plenum Press.

I mistrust the motives

rub me the wrong way

I haven't particularly looked for such evidence

Not very convincing.

My summary of Kegan's model is here. My suggestion is to try it and see if it works.

https://drive.google.com/file/d/0B_hpownP1A4PdERFVXJDVE5SRnc/view?usp=sharing

comment by gjm · 2017-01-24T11:43:47.409Z · score: 0 (0 votes) · LW · GW

Kegan has published a lot of evidence

Thanks for the pointers. I'm more interested in validity than consistency here, I think.

Not very convincing

I was intending to inform, not to convince. (I agree that no one should be convinced of anything much by my saying that I mistrust some people's motives.)

comment by RainbowSpacedancer · 2017-01-21T14:31:18.875Z · score: 2 (2 votes) · LW · GW

I'm working on an overview of the science on spiritual enlightenment. I'm also looking into who has credible claims to it, whether it is something worth pursuing and a survey of the methods used to get there.

If anyone knows someone (or is someone) that thinks they might be there or part-way there and who would be willing to chat a bit, that'd be lovely. If you've just dabbled in some mystical practices and had a few strange experiences and want to bounce some ideas around, that could be fun too.

comment by moridinamael · 2017-01-22T03:44:39.365Z · score: 2 (2 votes) · LW · GW

This blog doesn't appear to be active anymore, but it contains a lot of helpful ideas from an LWer who was an experienced meditator.

The blog led me to buy the book The Mind Illuminated which is a very clear, thorough, secular and neurologically sound (where possible) manual on attaining classical enlightenment through vipassana+mindfulness. I'm currently trying to follow its program as well as I can.

comment by ig0r · 2017-01-29T05:18:21.759Z · score: 1 (1 votes) · LW · GW

+1 for the suggestions made by others. I will ping the blog writer about this post to see if he's interested in reaching out.

You may also want to look at Daniel Ingram and his MCTB community

comment by RainbowSpacedancer · 2017-01-30T15:04:23.815Z · score: 0 (0 votes) · LW · GW

I've read all of Daniel Ingram's stuff. He's a fantastic resource. If you like his stuff, MCTB v2 is scheduled to come out later this year. The draft is much improved over the original IMO.

comment by lifelonglearner · 2017-01-21T15:32:15.669Z · score: 0 (0 votes) · LW · GW

Specifically for meditation: I think Romeo Stevens has worked with mindfulness recently, if that's close to what you're looking for? (You can probably ping him here).

comment by RainbowSpacedancer · 2017-01-21T23:52:22.747Z · score: 0 (0 votes) · LW · GW

Mindfulness is a part of it, I'm interested in the end goal. The lasting changes in perception that are meant to come about through mindfulness or other practices.

comment by lifelonglearner · 2017-01-22T01:14:35.719Z · score: 0 (0 votes) · LW · GW

I know of famous people in the mindfulness world (Shinzen Young, John Yates, and Bhante Gunaratana), but I don't know them personally. Still, emailing them may be worth a shot?

comment by RainbowSpacedancer · 2017-01-22T01:18:16.354Z · score: 0 (0 votes) · LW · GW

I've chatted a little with Shinzen on one of his retreats but I haven't yet looked into the other two. Thanks lifelonglearner.

comment by lifelonglearner · 2017-01-22T02:14:10.787Z · score: 1 (1 votes) · LW · GW

No problem! John Yates is better known as "Culadasa", by the way. He's the author of The Mind Illuminated.

comment by RainbowSpacedancer · 2017-01-30T15:01:02.239Z · score: 1 (1 votes) · LW · GW

Oh, I feel silly; I should have just googled the names, since I'm familiar with them. I know Gunaratana by his book and John Yates by his alternate name Culadasa. Thanks anyway, lifelonglearner; they've proven to be an excellent help.

comment by lifelonglearner · 2017-01-21T05:13:15.044Z · score: 2 (2 votes) · LW · GW

I'm working on a primer on the planning fallacy that will cover statistics, debiasing, and general research of the topic. In the coming weeks, I'd love for some people to give quick feedback on the flow / readability of the primer, if they're interested.
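(Not from the primer itself, but to illustrate the statistical side: reference-class forecasting is a standard debiasing technique for the planning fallacy. A minimal sketch, with hypothetical names and numbers, might look like this:)

```python
# Reference-class forecasting: instead of estimating a new project's
# duration from its internal details (the "inside view"), look at how
# long similar past projects actually took and take a percentile of
# that distribution (the "outside view").

def reference_class_estimate(past_durations, percentile=0.8):
    """Return the duration that `percentile` of past projects finished within."""
    ordered = sorted(past_durations)
    index = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[index]

# Hypothetical data: how long similar writing projects actually took (days).
past_projects = [10, 14, 9, 21, 30, 12, 18]

# An 80th-percentile estimate is deliberately pessimistic relative to the
# median, which counteracts the usual optimistic inside-view estimate.
print(reference_class_estimate(past_projects))  # 21
```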

comment by Qiaochu_Yuan · 2017-01-22T03:45:06.204Z · score: 5 (5 votes) · LW · GW

I'm working on a primer on the planning fallacy

Expected draft finish date is in 2 weeks

Exciting stuff.

comment by lifelonglearner · 2017-01-26T03:06:05.236Z · score: 4 (4 votes) · LW · GW

I was going to comment about how I had taken care to make a conservative estimate. But then I decided it'd probably be better to actually finish the draft first. Now I'm here proudly announcing that I have a first draft done before my set deadline! Hooray!

comment by lifelonglearner · 2017-01-26T03:08:28.856Z · score: 1 (1 votes) · LW · GW

Link if anyone wants to leave some helpful feedback:

https://docs.google.com/document/d/1i1cWXjmrr76hHtok5nuOz2Yml5xLk88-MQkIYW8RO8I/edit

comment by chaosmage · 2017-01-21T23:27:46.634Z · score: 1 (1 votes) · LW · GW

Happy to help, send me a draft when you have it.

comment by lifelonglearner · 2017-01-21T23:45:04.475Z · score: 1 (1 votes) · LW · GW

Sure, thanks!

comment by Elo · 2017-01-21T10:00:36.128Z · score: 1 (1 votes) · LW · GW

Post a link or pm me.

comment by lifelonglearner · 2017-01-21T15:27:40.529Z · score: 1 (1 votes) · LW · GW

Will do! (Expected draft finish date is in 2 weeks, so I'll ping you then)

comment by CellBioGuy · 2017-01-22T20:31:42.907Z · score: 1 (1 votes) · LW · GW

My internet presence and my IRL presence among my friends has fallen to about zero as I am doing a final push to graduate with my PhD in cell biology and genomics. On a job interview right now for a position studying something I am passionate about for real. Thesis being written (and Latex being learned) for, hopefully, a defense at the end of March.

It's remarkable how much data I have when I actually dig everything up from the last 3 years and lay it out side by side.

comment by Elo · 2017-01-22T04:05:38.881Z · score: 1 (1 votes) · LW · GW

I'd like there to be some kind of list of rat-houses around the world. But I can't champion this project. I also live on my own.

comment by dropspindle · 2017-01-21T23:22:54.824Z · score: 1 (1 votes) · LW · GW

I've been:

1) Self-hacking into liking programming

2) Learning programming (primarily using the Odin Project)

comment by lifelonglearner · 2017-01-22T02:16:59.915Z · score: 2 (2 votes) · LW · GW

I've been trying to learn programming (but not in a very disciplined / systematic fashion). Would you recommend the Odin Project? (Everyday Utilitarian recommended it, IIRC, but I was turned off by the cross-linking to different places.)

How goes your self-hacking? I've played around w/ it for math, and the results were pretty good (If we're talking about the generally same thing, that is.)

comment by dropspindle · 2017-01-22T03:14:01.792Z · score: 0 (0 votes) · LW · GW

The self-hacking is going pretty well, considering that I started out absolutely hating programming. A problem that arises is that I don't currently like it enough for it to be self-motivating just through personal enjoyment. I actually got a lot more accomplished when the motivation was "Do the thing that I hate (and learn to like it/ change my self-identity of hating it) so that I can get a better job (...Eventually. I like my current job, so no rush)." Now I like it well enough that the motivation is "Do that thing you like because you like it", but there's usually something else to do that I like better.

I've also done self-hacking for math and mathy subjects, but it was before I would have known of the term. It worked rather well!

Odin Project is more of a slog, but it seems like it will get you where you need to go. I had a lot more FUN on sites like Codewars, which was more useful for the self-hacking part.

comment by lifelonglearner · 2017-01-22T05:02:48.220Z · score: 1 (1 votes) · LW · GW

Hm, thanks for your thoughts on the matter. I've noticed, too, that once I get a thing to be "not too terrible," it feels less like I have to work on it. But then I'll just prioritize other things over it.

comment by whpearson · 2017-01-21T12:00:51.754Z · score: 1 (1 votes) · LW · GW

I'm working next week on what I call user aligned computing.

Video: https://www.youtube.com/watch?v=XQgtVdyNzaQ — code here: https://github.com/eb4890/agorint.

It might be a bit like the control problem, but for normal computers (and with a separate evolutionary pathway): it doesn't assume the thing it's trying to control is a superintelligent genie with its own goals. It's a user-controlled market that determines programs' access to system resources such as memory, processing power, and I/O.

I'm hoping it might be part of a program of Intelligence Amplification. It would also make computers more secure in general (less of a monoculture with easily acquirable ambient authority), so it might have an impact on some fast-takeoff scenarios.

I'm about to start making the market work, having got some basic infrastructure working.
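(The repo linked above has the real design; purely as a hypothetical sketch of the core idea — programs bidding in a user-controlled market for a resource, with the user's policy rather than the programs deciding the allocation — something like this, where all names and mechanics are illustrative and not taken from the agorint codebase:)

```python
# Hypothetical sketch of a user-controlled resource market: each program
# submits a bid for units of a resource (memory, CPU time, I/O), and the
# user's allocation policy decides who gets how much.

def allocate(bids, budget):
    """Grant resource units to the highest bidders until the budget runs out.

    bids:   dict mapping program name -> (bid price, units requested)
    budget: total units of the resource (e.g. MB of memory) available
    """
    grants = {}
    remaining = budget
    # Highest bid first; the user's policy could substitute any ordering here.
    for program, (price, units) in sorted(bids.items(), key=lambda kv: -kv[1][0]):
        granted = min(units, remaining)
        if granted > 0:
            grants[program] = granted
            remaining -= granted
    return grants

bids = {"editor": (5, 100), "background_sync": (1, 200), "indexer": (3, 150)}
print(allocate(bids, 300))  # {'editor': 100, 'indexer': 150, 'background_sync': 50}
```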