Posts

Running a Futurist Institute. 2017-10-06T17:05:47.589Z · score: 5 (5 votes)
Come check out the Boulder Future Salon this Saturday! 2017-09-06T15:49:16.899Z · score: 1 (1 votes)
Anyone else reading "Artificial Intelligence: A Modern Approach"? 2016-11-05T15:22:33.792Z · score: 1 (2 votes)
LINK: Performing a Failure Autopsy 2016-05-27T14:21:27.343Z · score: 1 (4 votes)
Talk today at CU Boulder 2016-04-05T16:26:06.614Z · score: 1 (2 votes)
Two meetups in Denver/Boulder Colorado 2015-05-05T00:58:02.964Z · score: 0 (1 votes)
LW-ish meetup in Boulder, CO 2015-03-10T14:32:53.115Z · score: 1 (2 votes)
FOOM Articles 2015-03-05T21:32:00.924Z · score: 3 (4 votes)
Intrapersonal comparisons: you might be doing it wrong. 2015-02-03T21:34:03.344Z · score: 16 (17 votes)
Is there a rationalist skill tree yet? 2015-01-30T16:02:37.185Z · score: 15 (18 votes)
LW-ish meetup in Boulder, CO 2015-01-13T05:23:39.024Z · score: 5 (6 votes)
Steelmanning MIRI critics 2014-08-19T03:14:15.072Z · score: 6 (7 votes)
Recommendations for donating to an anti-death cause 2014-04-09T02:56:56.298Z · score: 20 (23 votes)
LWers living in Boulder/Denver area: any interest in an AI-philosophy reading group? 2013-12-31T17:06:56.060Z · score: 1 (2 votes)
Luck II: Expecting White Swans 2013-12-15T17:40:08.775Z · score: 1 (10 votes)
Luck I: Finding White Swans 2013-12-12T17:56:30.191Z · score: 25 (28 votes)
Existential Risk II 2013-10-20T00:25:25.437Z · score: 10 (13 votes)
[LINKS] Killer Robots and Theories of Truth 2013-06-30T22:57:02.361Z · score: -4 (7 votes)
How to Have Space Correctly 2013-06-25T03:47:10.994Z · score: 22 (35 votes)
X-Risk Roll Call 2013-06-19T04:07:36.931Z · score: 6 (13 votes)
A Viable Alternative to Typing 2013-06-06T05:38:17.885Z · score: 2 (9 votes)
Two Weeks of Meditation can Reduce Mind Wandering and Improve Mental Performance. 2013-06-01T09:58:26.343Z · score: 16 (17 votes)
Being Foreign and Being Sane 2013-05-25T00:58:11.748Z · score: 24 (29 votes)

Comments

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T20:59:05.656Z · score: 0 (0 votes) · LW · GW

I like that idea too. How hard is it to publish in academic journals? I don't have more than a BS, but I have done original research and I can write in an academic style.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T20:48:19.510Z · score: 1 (1 votes) · LW · GW

A post-mortem isn't quite the same thing. Mine has a much more granular focus on the actual cognitive errors occurring, with neat little names for each of them, and has the additional step of repeatedly visualizing yourself making the correct move.

https://rulerstothesky.com/2016/03/17/the-stempunk-project-performing-a-failure-autopsy/

This is a rough idea of what I did; the more awesome version, with graphs, will require an email address to which I can send a .jpg.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T19:11:02.890Z · score: 0 (0 votes) · LW · GW

Different reasons, none of them nefarious or sinister.

I emailed a technique I call 'the failure autopsy' to Julia Galef, which as far as I know is completely unique to me. She gave me a cheerful 'I'll read this when I get a chance' and never got back to me.

I'm not sure why I was turned down for a MIRIx workshop; I'm sure I could've managed to get some friends together to read papers and write ideas on a whiteboard.

I've written a few essays for LW, the reception of which was lukewarm. I don't know if I'm just bad at picking topics of interest or if it's a reflection of the declining status of this forum.

To be clear: I didn't come here to stamp my feet and act like a prissy diva. I don't think the rationalists are big meanies who are deliberately singling me out for exclusion. I'm sure everyone has 30,000 emails to read and a million other commitments and they're just busy.

But from my perspective it hardly matters: the point is that I have had no luck building contacts through the existing institutions and channeling my desire to help in any useful way.

You might be wondering whether I'm just not as smart or as insightful as I think I am. That's a real possibility, but it's worth pointing out that I also emailed the failure autopsy technique to Eric S. Raymond -- famed advocate of open source, bestselling author, hacker, philosopher, righteous badass -- and he not only gave me a lot of encouraging feedback but took time out of his schedule to help me refine some of my terminology to be more descriptive. We're actually in talks to write a book together next year.

So it might be me, but there's evidence to indicate that it probably isn't.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T03:36:25.049Z · score: 1 (1 votes) · LW · GW

I hadn't known about that, but I came to the same conclusion!

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T03:36:07.072Z · score: 0 (0 votes) · LW · GW

I gave that some thought! LW seems much less active than it once was, though, so that strategy isn't as appealing. I've also written a little for this site and the reception has been lukewarm, so I figured a book would be best.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T03:34:48.181Z · score: 1 (1 votes) · LW · GW

That's not a bad idea. As it stands I'm pursuing the goal of building a dedicated group of people around these ideas, which is proving difficult enough as it is. Eventually I'll want to move forward with the institute, though, and it seems wise to begin thinking about that now.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-08T16:35:41.179Z · score: 5 (5 votes) · LW · GW

I have done that, on a number of different occasions. I have also tried for literally years to contribute to futurism in other ways; I attempted to organize a MIRIx workshop and was told no because I wasn't rigorous enough or something, despite the fact that on the MIRIx webpage it says:

"A MIRIx workshop can be as simple as gathering some of your friends to read MIRI papers together, talk about them, eat some snacks, scribble some ideas on whiteboards, and go out to dinner together."

Which is exactly what I was proposing.

I have tried for years to network with people in the futurist/rationalist movement, by offering to write for various websites and blogs (and being told no each and every single time), or by trying to discuss novel rationality techniques with people positioned to provide useful feedback (and being ignored each and every single time).

While I may not be Eliezer Yudkowsky, the evidence indicates that I'm at least worth casually listening to, but I have had no luck getting even that far.

I left a cushy job in Asia because I wanted to work toward making the world a better place, and I'm not content simply giving money to other people to do so on my behalf. I have a lot of talent and energy which could be going towards that end; for whatever reason, the existing channels have proven to be dead ends for me.

But even if the above were not the case, there is an extraordinary amount of technical talent in the Front Range which could be going toward more future-conscious work. Most of these people probably haven't heard of LW or don't care much about it (as evinced by the moribund LW meetup in Boulder and the very, very small one in Denver), but they might take notice if there were a futurist institution within driving distance.

Approaching from the other side, I've advertised futurist-themed talks on LW numerous times and gotten, like, three people to attend.

I'll continue donating to CFAR/MIRI because they're doing valuable work, but I also want to work on this stuff directly, and I haven't been able to do that with existing structures.

So I'm going to build my own. If you have any useful advice for that endeavor, I'd be happy to hear it.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-08T16:23:25.465Z · score: 1 (1 votes) · LW · GW

You're right. Here is a reply I left on a Reddit thread answering this question:

This institution will essentially be a formalization and scaling-up of a small group of futurists who already meet to discuss emerging technologies and similar subjects. Despite the fact that they've been doing this for years, attendance is almost never more than ten people (25 attendees would be fucking Woodstock).

I think the best way to begin would be to try to use this seed to create a TED-style hub of recurring discussions on exactly these topics. There's a lot of low-hanging fruit to be picked in the service of this goal. For example, I recently convinced the organizer of the futurist group to switch to a regular spot at the local library instead of the nigh-impossible-to-find hackerspace where they were meeting before. I've also done things like buy pizza for everyone.

Once we get to where we have a nice, clean, well-lit venue and have at least 20 people regularly attending, I'd like to start reaching out to local businesses, writers, artists, and academics to have them give talks to the group. As it stands it probably wouldn't be worth their time just to speak to 8 people.

TEDxMileHigh does something vaguely like this, but it isn't as focused and only occurs once per year. Once I get that lined out, I'd like the group's first 'product' to be a near-comprehensive 'talent audit' for the Denver/Boulder region. If I had a billion dollars and wanted to invest it in the highest-impact companies and research groups I'd have no idea of where to get started. Here are some questions I'd like to answer:

What are the biggest research and investment initiatives currently happening? Is there more brainpower in nanotech or AI? In neurotech or SENS-type fields? AFAICT nobody knows. Who is doing the most investing? What kind of capital is available from hedge funds or angel investors? What sorts of bridges exist between academia, the private sector, think tanks, and investment firms? How can I strengthen them?

So we'll start by aping TED and then try to figure out what kind of talent pool we have to work with. These two goals alone will surely require several years, and there's more than one avenue to monetization (ticket sales; subscriptions to the talent audit).

Beyond this horizon things get fuzzier; it's hard to say what direction the institute will take until I've answered other questions. For example, I'm very interested in superintelligent AI and related ethical issues. I have even thought of a name for a group devoted to research in the field: 'the Superintelligence Research Group', S.I.R.G (pronounced 'surge').

But is there enough AI/mathematics/computation brainpower around to make such a venture worthwhile? I mean, there's more than one computing research group just in Boulder, but are they doing the kind of work that could be geared toward SAI?

If so maybe I'll maneuver in that direction; if not, it would probably make more sense to focus on other things.

So that's one possibility. Another is either providing consulting to investors wanting to work with companies in the front range, or angel investing in those companies myself.

But if I'm publishing a newsletter about investment opportunities in the Front Range, would I even be allowed to personally invest in companies (i.e. is there any legal conflict of interest or whatever involved)? Would the decision to make the institute an LLC or a 501(c)(3) impact future financial maneuvering?

So you have a short-term, concrete answer to your question and a long-term, speculative one.

Is there anything else you'd like to know?

Comment by fowlertm on Running a Futurist Institute. · 2017-10-07T01:52:15.086Z · score: 4 (4 votes) · LW · GW

(1) The world does not have a surfeit of intelligent technical folks thinking about how to make the future a better place. Even if I founded a futurist institute in the exact same building as MIRI/CFAR, I don't think it'd be overkill.

(2) There is a profound degree of technical talent here in central Colorado which doesn't currently have a nexus around which to have these kinds of discussions about handling emerging technologies responsibly. There is a real gap here that I intend to fill.

Comment by fowlertm on Come check out the Boulder Future Salon this Saturday! · 2017-09-07T15:13:28.646Z · score: 0 (0 votes) · LW · GW

That hadn't even occurred to me, thank you! Do you think it'd be inappropriate? This isn't a LW specific meetup, just a bunch of tech nerds getting together to discuss this huge tech project I just finished.

Comment by fowlertm on Anyone else reading "Artificial Intelligence: A Modern Approach"? · 2016-11-05T23:17:22.315Z · score: 1 (1 votes) · LW · GW

Thanks! I suppose I wasn't as clear as I could have been: I was actually wondering if there are any people who are reading it currently, who might be grappling with the same issues as me and/or might be willing to split responsibility for creating Anki cards. This textbook is outstanding, and I think there would be significant value in anki-izing as much of it as possible.

Comment by fowlertm on LINK: Performing a Failure Autopsy · 2016-05-29T16:52:03.391Z · score: 0 (0 votes) · LW · GW

Because I missed numerous implications, needlessly increased causal opacity, and failed to establish a baseline before I started fiddling with variables. Those are poor troubleshooting practices.

Comment by fowlertm on Linguistic mechanisms for less wrong cognition · 2015-12-06T16:22:28.964Z · score: 0 (0 votes) · LW · GW

So a semi-related thing I've been casually thinking about recently is how to develop what basically amounts to a hand-written programming language.

Like a lot of other people I make to-do lists and take detailed notes, and I'd like to develop a written notation that not only captures basic tasks but also simple representations of the knowledge/emotional states of other people (e.g. employees).

More advanced than that, I've also been trying to think of ways I can take notes in a physical book that will allow a third party to make Anki flashcards or evernote entries based on my script. It has to be extremely dense to fit in the margins of a book, and must capture distinct commands like "make a single cloze deletion card for this sentence" and "make four separate cards for this sentence, cloze deleting a different piece of information for each card but otherwise leaving everything intact" and so on.
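A minimal sketch of what such a margin-notation scheme might look like when parsed by that third party (all command names here are invented for illustration; the source only specifies the two behaviors quoted above):

```python
import re

# Hypothetical margin-notation parser: a terse command written in a book's
# margin is expanded into a flashcard-generation instruction. The short codes
# below are made up; the behaviors they expand to come from the comment above.
COMMANDS = {
    "c1": "make a single cloze deletion card for this sentence",
    "c4": ("make four separate cards for this sentence, cloze deleting a "
           "different piece of information for each card but otherwise "
           "leaving everything intact"),
}

def parse_margin_note(note: str) -> dict:
    """Parse a note like 'c1: Entropy measures surprise.' into an instruction."""
    match = re.match(r"(\w+):\s*(.+)", note)
    if not match:
        raise ValueError(f"unrecognized note format: {note!r}")
    cmd, sentence = match.groups()
    if cmd not in COMMANDS:
        raise ValueError(f"unknown command: {cmd!r}")
    return {"instruction": COMMANDS[cmd], "sentence": sentence}
```

The density requirement is what makes the short codes attractive: two or three characters in a margin can stand in for a full sentence of instructions to the person building the cards.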

Any thoughts?

Comment by fowlertm on Deliberate Grad School · 2015-10-12T15:56:48.026Z · score: 1 (1 votes) · LW · GW

I mentioned CMU for the reasons you've stated and because Lukeprog endorsed their program once (no idea what evidence he had that I don't).

I have also spoken to Katja Grace about it, and there is evidently a bit of interest in LW themes among the students there.

I'm unaware of other programs of a similar caliber, though there are bound to be some. If anyone knows of any, by all means list them, that was the point of my original comment.

Comment by fowlertm on Deliberate Grad School · 2015-10-04T16:07:43.369Z · score: 4 (4 votes) · LW · GW

I think there'd be value in just listing graduate programs in philosophy, economics, etc., by how relevant the research already being done there is to x-risk, AI safety, or rationality. Or by whether or not they contain faculty interested in those topics.

For example, if I were looking to enter a philosophy graduate program, it might take me quite some time to realize that Carnegie Mellon probably has the best program for people interested in LW-style reasoning about something like epistemology.

Comment by fowlertm on Learning takes a long time · 2015-06-01T03:15:20.821Z · score: 1 (1 votes) · LW · GW

Data point/encouragement: I'm getting a lot out of these, and I hope you keep writing them.

I'm one of those could-have-beens who dropped mathematics early on despite a strong interest and spent the next decade thinking he sucked at math, before he rediscovered his numerical proclivities in his early 20s because FAI theory caused him to peek at Discrete Mathematics.

Comment by fowlertm on FOOM Articles · 2015-03-06T02:41:37.325Z · score: 3 (3 votes) · LW · GW

Both unknown to me, thanks :)

Comment by fowlertm on The outline of Maletopia · 2015-02-21T15:48:19.906Z · score: 0 (0 votes) · LW · GW

Why? What's wrong with wanting to be masculine?

Comment by fowlertm on Intrapersonal comparisons: you might be doing it wrong. · 2015-02-21T15:30:55.395Z · score: 1 (1 votes) · LW · GW

Interesting tie-in, thanks.

Incidentally, how cool would it be to be able to say "my epistemology is the most advanced"? If nothing else it'd probably be a great pickup line at LW meetups.

Comment by fowlertm on Intrapersonal comparisons: you might be doing it wrong. · 2015-02-04T04:39:41.535Z · score: 0 (0 votes) · LW · GW

It's worth a lot, I'll look into it.

Comment by fowlertm on Is there a rationalist skill tree yet? · 2015-02-03T17:28:36.198Z · score: 1 (1 votes) · LW · GW

Agreed. I think, in light of the fact that a lot of this stuff is learned iteratively, you'd want to unpack 'basic mathematics'. I'm not sure of the best way to graphically represent iterative learning, but maybe you could have arrows going back to certain subjects, or you could have 'statistics round II' as one of the nodes in the network.

It seems like insights are what you're really aiming at, so maybe instead of 'probability theory' you have a node for 'distributions' and 'variance' at some early point in the tree then later you have 'Bayesian v. Frequentist reasoning'.

This would also help you unpack basic mathematics, though I don't know much about the dependencies either. I hope to, soon :)
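One way to sketch the idea: model the tree as a dependency graph where a revisited subject gets its own node (e.g. 'statistics round II') instead of a back-arrow, which keeps the graph acyclic and sortable into a valid study order. The node names below are just the examples from this thread:

```python
from graphlib import TopologicalSorter

# Skill tree as a map from each node to its prerequisites. Iterative learning
# is represented by a second-pass node rather than a cycle-creating back-edge.
skill_tree = {
    "distributions": {"basic mathematics"},
    "variance": {"distributions"},
    "Bayesian v. Frequentist reasoning": {"variance"},
    "statistics round II": {"Bayesian v. Frequentist reasoning"},
}

# Any topological order is a study sequence that respects every prerequisite.
study_order = list(TopologicalSorter(skill_tree).static_order())
```

With back-arrows modeled this way, standard graph tooling can both detect accidental circular prerequisites and generate a concrete curriculum ordering for free.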

Comment by fowlertm on Is there a rationalist skill tree yet? · 2015-01-31T14:27:21.872Z · score: 0 (0 votes) · LW · GW

I thought of that as well, it does need some work done in terms of presentation. It'd be a good place to start, yes.

Comment by fowlertm on Programming-like activities? · 2015-01-12T16:35:52.909Z · score: 3 (3 votes) · LW · GW

My two cents: I studied math pretty intensively on my own and later started programming. To my pleasant surprise, the thinking style involved in math carried over almost directly into programming. I'd imagine the inverse is also true.

Comment by fowlertm on Meetup : Denver Area Meetup 2 · 2014-11-14T15:18:59.410Z · score: 0 (0 votes) · LW · GW

I'm sorry I missed this and hope it went well. Work has been chaotic lately, but I absolutely support a LW presence in Denver. I've tried once before to get a similar group off the ground, and would be happy to help this one along with presentations, planning, rationalist game nights, whatever.

Comment by fowlertm on Meetup : Denver Area Meetup 2 · 2014-11-08T14:56:55.206Z · score: 1 (1 votes) · LW · GW

I'll try to be there.

Comment by fowlertm on LWers living in Boulder/Denver area: any interest in an AI-philosophy reading group? · 2014-10-21T14:49:35.762Z · score: 0 (0 votes) · LW · GW

Actually, I folded it into another group called the Boulder Future Salon, which doesn't deal exclusively with x-risk but which has other advantages going for it, like a pre-existing membership.

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-30T02:19:17.881Z · score: 1 (1 votes) · LW · GW

How would you recommend responding?

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-29T16:42:33.622Z · score: 1 (1 votes) · LW · GW

I think I'm basically prepared for that line of attack. MIRI is not a cult, period. When you want to run a successful cult you do it Jim-Jones-style, carting everyone to a secret compound and carefully filtering the information that makes it in or out. You don't work as hard as you can to publish your ideas in a format where they can be read by anyone, you don't offer to publicly debate William Lane Craig, and you don't seek out the strongest versions of criticisms of your position (i.e. those coming from Robin Hanson).

Eliezer hasn't made it any easier on himself by being obnoxious about how smart he is, but then again neither did I; most smart people eventually have to learn that there are costs associated with being too proud of some ability or other. But whatever his flaws, the man is not at the center of a cult.

Comment by fowlertm on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-08-29T16:28:16.833Z · score: 3 (3 votes) · LW · GW

"Note that AI is certainly not a great filter: an AI would likely expand through the universe itself"

I was confused by this; what is it supposed to mean? Off the top of my head, it certainly seems like there is sufficient space between 'make an AI that causes the extinction of the human race or otherwise makes expanding into space difficult' and 'make an AI that causes the extinction of the human race but which goes on to colonize the universe' for AI to be a great filter.

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-25T16:52:51.257Z · score: 2 (2 votes) · LW · GW

This comment is a poorly-organized brain dump which serves as a convenient gathering place for what I've learned after several days of arguing with every MIRI critic I could find. It will probably get its own expanded post in the future, and if I have the time I may try to build a near-comprehensive list.

I've come to understand that criticisms of MIRI's version of the intelligence explosion hypothesis and the penumbra of ideas around it fall into two permeable categories:

Those that criticize MIRI as an organization or the whole FAI enterprise (people making these arguments may or may not be concerned about the actual IE), and those that attack object-level claims made by MIRI.

Broad Criticisms

1a) Why worry about this now, instead of in the distant future, given the abysmal performance of attempts to predict AI?

1b) Why take MIRI seriously when there are so many expert opinions that diverge?

1c) Aren't MIRI and LW just an Eliezer-worshipping cult?

1d) Is it even possible to do this kind of theoretical work so far in advance of actual testing and experimentation?

1e) The whole argument can be dismissed as it pattern matches other doomsday scenarios, almost all of which have been bullshit.


Specific Criticisms

2a) General intelligence is what we're worried about here, and it may prove much harder to build than we're anticipating.

2b) Tool AIs won't be as dangerous as agent AIs.

2c) Why not just build an Oracle?

2d) The FOOM will be distributed and slow, not fast and localized.

2e) Dumb Superintelligence, i.e. nothing worthy of the name could possibly misinterpret a goal like 'make humans happy'.

2f) Even FAI isn't a guarantee.

2g) A self-improvement cascade will likely hit a wall at sub-superintelligent levels.

2h) Divergence Issue: all functioning AI systems have built-in sanity checks which take short-form goal statements and unpack them in ways that take account of constraints and context (???). It is actually impossible to build an AI which does not do this (???), and thus there can be no runaway SAI which is given a simple short-form goal and then carries it to ridiculous logical extremes (I WOULD BE PARTICULARLY INTERESTED IN SOMEONE ADDRESSING THIS).

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-25T16:07:56.088Z · score: 2 (2 votes) · LW · GW

A good point, I must spend some time looking into the FOOM debate.

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-20T16:54:18.315Z · score: 1 (1 votes) · LW · GW

I've heard the singularity-pattern-matches-religious-tropes argument before and hadn't given it much thought, but I find your analysis that the argument is wrong to be convincing, at least for the futurism I'm acquainted with. I'm less sure that it's true of Kurzweil's brand of futurism.

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-19T11:50:59.627Z · score: 2 (2 votes) · LW · GW

Correct, I've been pursuing that as well.

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-19T11:38:28.795Z · score: 1 (1 votes) · LW · GW

Correct :)

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-19T11:37:28.779Z · score: 3 (3 votes) · LW · GW

Only the IE as defended by MIRI; it'd be a much longer talk if I wanted to defend everything they've put forward!

Comment by fowlertm on A Visualization of Nick Bostrom’s Superintelligence · 2014-08-15T16:57:59.463Z · score: 2 (2 votes) · LW · GW

With what software was this done?

Comment by fowlertm on Recommendations for donating to an anti-death cause · 2014-05-08T01:50:17.597Z · score: 1 (1 votes) · LW · GW

For those interested, I ended up donating to the Brain Preservation Foundation, MIRI, SENS, and the Alzheimer's Disease Research Fund.

More detail here:

http://rulerstothesky.wordpress.com/2014/04/25/in-memorium/

Comment by fowlertm on Truth: It's Not That Great · 2014-05-04T23:05:41.134Z · score: 2 (2 votes) · LW · GW

Good stuff. It took me quite a long time to work these ideas out for myself. There are also situations in which it can be beneficial to let somewhat obvious non-truths continue existing.

Example: your boss is good at doing something, but their theoretical explanation for why it works is nonsense. Most of the time questioning the theory is only likely to piss them off, and unless you can replace it with something better, keeping your mouth shut is probably the safest option.

Relevant post:

http://cognitiveengineer.blogspot.com/2013/06/when-truth-isnt-enough.html

Comment by fowlertm on Recommendations for donating to an anti-death cause · 2014-04-10T15:34:35.335Z · score: 3 (3 votes) · LW · GW

I'd like to aim squarely at Death.

Comment by fowlertm on Recommendations for donating to an anti-death cause · 2014-04-10T14:54:40.772Z · score: 3 (3 votes) · LW · GW

The Brain Preservation Foundation was one of the first charities I thought of, I'll definitely be considering them.

Comment by fowlertm on Recommendations for donating to an anti-death cause · 2014-04-09T21:43:36.466Z · score: 1 (1 votes) · LW · GW

I would be interested, yes.

Comment by fowlertm on LWers living in Boulder/Denver area: any interest in an AI-philosophy reading group? · 2014-03-23T02:26:25.647Z · score: 0 (0 votes) · LW · GW

Head over to meetup.com and search for AI and Existential Risk, then join the group. We just had our inaugural meeting.

Comment by fowlertm on Futurism's Track Record · 2014-01-30T16:25:42.941Z · score: 0 (0 votes) · LW · GW

I too think it would be economics, though probably of a more philosophical type, like what they do at the London School of Economics.

And yes, I'd be very interested in doing something like that :)

Comment by fowlertm on Dark Arts of Rationality · 2014-01-16T19:08:35.348Z · score: 0 (2 votes) · LW · GW

I propose that we reappropriate the white/black/grey hat terminology from the Linux community, and refer to black/white/grey cloak rationality. Someday perhaps we'll have red cloak rationalists.

Comment by fowlertm on Dark Arts of Rationality · 2014-01-16T19:08:13.028Z · score: 1 (1 votes) · LW · GW

Another nail hit squarely on the head. Your concept of a strange playing field has helped crystallize an insight I've been grappling with for a while: a strategy can be locally rational even if it is in some important sense globally irrational.

I've had several other insights which are specific instances of this, and which I only just realized are part of a more general phenomenon. I believe it can be rational to temporarily suspend judgment in the pursuit of certain kinds of mystical experiences (and have done this with some small success), and I believe it can be rational to think of yourself as a causally efficacious agent even when you know that humans are embedded in a stream of causality which makes the concept of free will nonsensical.

Comment by fowlertm on The mechanics of my recent productivity · 2014-01-08T21:05:33.916Z · score: 6 (6 votes) · LW · GW

I also wanted to say that your recommendations on which chapters of which books to read in which order (personal communication) are something that many other people would be interested in hearing about.

Comment by fowlertm on The mechanics of my recent productivity · 2014-01-08T03:14:48.394Z · score: 7 (7 votes) · LW · GW

Thanks so much for typing all this. It encourages me that I can manage it as well :)

Comment by fowlertm on Doubt, Science, and Magical Creatures - a Child's Perspective · 2013-12-28T15:50:12.411Z · score: 11 (11 votes) · LW · GW

It's also possible that, in concealing the information from your parents, you also managed to conceal it from the TF as well. It would be much, much harder to figure that out experimentally, given how little we know about the mechanisms by which purportedly magical beings interact with information.

Comment by fowlertm on Luck II: Expecting White Swans · 2013-12-17T06:16:11.154Z · score: 1 (1 votes) · LW · GW

"Isn't it taken directly from Nassim Taleb's The Black Swan"

Right.

"Making white swans indicate rare good events makes no sense."

Actually, you could be right, but that's how I'm using it. I don't have my copies of Taleb's books in front of me, but I'm pretty sure he uses the terms the way I'm using them.

Comment by fowlertm on Luck II: Expecting White Swans · 2013-12-15T21:20:14.962Z · score: 0 (0 votes) · LW · GW

Sure. Now, is there ever a time we should try to make ourselves believe things that we don't necessarily have a good reason to think are true?