Posts

[Interview w/ Quintin Pope] Evolution, values, and AI Safety 2023-10-24T13:53:06.146Z
[Interview w/ Rob Miles] The case for taking AI Safety seriously 2023-07-17T17:08:10.961Z
[Interview w/ Zvi Mowshowitz] Should we halt progress in AI? 2023-05-16T18:12:32.376Z
[Interview w/ Jeffrey Ladish] Applying the 'security mindset' to AI and x-risk 2023-04-11T18:14:34.059Z
Did you enjoy Ramez Naam's "Nexus" trilogy? Check out this interview on neurotech and the law. 2022-10-11T11:10:57.964Z
Transhumanism, genetic engineering, and the biological basis of intelligence. 2022-09-14T15:55:52.431Z
An interview with Danica Remy on protecting the Earth from asteroids. 2021-12-26T21:40:12.226Z
Economist Irene Ng on Market design, entrepreneurship, and innovation. 2021-11-26T19:00:38.659Z
Cognitive scientist Joel Chan on metascience, scaling and automating innovation, collective intelligence, and tools for thought. 2021-08-27T16:33:23.532Z
Where did the idea of x-risk come from? 2021-03-26T15:39:40.867Z
GPT-3 and the future of knowledge work 2021-03-05T17:40:12.039Z
Some recent interviews with AI/math luminaries. 2021-03-04T01:26:25.046Z
fowlertm's Shortform 2020-10-27T00:43:30.769Z
I'm interested in a sub-field of AI but don't know what to call it. 2019-08-25T14:55:13.028Z
Running a Futurist Institute. 2017-10-06T17:05:47.589Z
Come check out the Boulder Future Salon this Saturday! 2017-09-06T15:49:16.899Z
Anyone else reading "Artificial Intelligence: A Modern Approach"? 2016-11-05T15:22:33.792Z
LINK: Performing a Failure Autopsy 2016-05-27T14:21:27.343Z
Talk today at CU Boulder 2016-04-05T16:26:06.614Z
Two meetups in Denver/Boulder Colorado 2015-05-05T00:58:02.964Z
LW-ish meetup in Boulder, CO 2015-03-10T14:32:53.115Z
FOOM Articles 2015-03-05T21:32:00.924Z
Intrapersonal comparisons: you might be doing it wrong. 2015-02-03T21:34:03.344Z
Is there a rationalist skill tree yet? 2015-01-30T16:02:37.185Z
LW-ish meetup in Boulder, CO 2015-01-13T05:23:39.024Z
Steelmanning MIRI critics 2014-08-19T03:14:15.072Z
Recommendations for donating to an anti-death cause 2014-04-09T02:56:56.298Z
LWers living in Boulder/Denver area: any interest in an AI-philosophy reading group? 2013-12-31T17:06:56.060Z
Luck II: Expecting White Swans 2013-12-15T17:40:08.775Z
Luck I: Finding White Swans 2013-12-12T17:56:30.191Z
Existential Risk II 2013-10-20T00:25:25.437Z
[LINKS] Killer Robots and Theories of Truth 2013-06-30T22:57:02.361Z
How to Have Space Correctly 2013-06-25T03:47:10.994Z
X-Risk Roll Call 2013-06-19T04:07:36.931Z
A Viable Alternative to Typing 2013-06-06T05:38:17.885Z
Two Weeks of Meditation can Reduce Mind Wandering and Improve Mental Performance. 2013-06-01T09:58:26.343Z
Being Foreign and Being Sane 2013-05-25T00:58:11.748Z

Comments

Comment by fowlertm on fowlertm's Shortform · 2024-05-10T11:35:05.735Z · LW · GW

YouTube can generate those automatically, or you can rip the .mp4 with an online service (just Google around; there are tons) and then pass it to something like Otter.ai.
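
If you'd rather script it than use an online service, here's a minimal sketch of one possible local pipeline. It assumes the yt-dlp and openai-whisper Python packages (plus ffmpeg on the PATH), and the URL is just a placeholder:

```python
# Rough sketch: pull a video's audio with yt-dlp, then transcribe it locally
# with OpenAI's open-source Whisper model.
# Assumes: pip install yt-dlp openai-whisper  (and ffmpeg on the PATH)
import whisper
from yt_dlp import YoutubeDL

URL = "https://www.youtube.com/watch?v=PLACEHOLDER"  # hypothetical video

# Extract the audio track to audio.mp3.
ydl_opts = {
    "format": "bestaudio/best",
    "outtmpl": "audio.%(ext)s",
    "postprocessors": [{"key": "FFmpegExtractAudio", "preferredcodec": "mp3"}],
}
with YoutubeDL(ydl_opts) as ydl:
    ydl.download([URL])

# Transcribe with a small local model; bigger models are slower but more accurate.
model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])
```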

Comment by fowlertm on fowlertm's Shortform · 2024-05-09T18:39:27.687Z · LW · GW

We recently released an interview with independent scholar John Wentworth:

It mostly centers on two themes: "abstraction" (forming concepts) and "agency" (dealing with goal-directed systems).

Check it out!

Comment by fowlertm on Which LessWrongers are (aspiring) YouTubers? · 2023-10-24T13:49:21.015Z · LW · GW

I'm not much of a LWer these days, but I do co-host a podcast on philosophy and emerging technologies which has a growing library of interviews with LWers:

https://www.youtube.com/@futuratipodcast5130/videos

Comment by fowlertm on [Interview w/ Zvi Mowshowitz] Should we halt progress in AI? · 2023-05-16T19:50:23.344Z · LW · GW

I suppose I'm interested in both, but that reference is very helpful. I'm also vaguely aware of some literature on what is called "private governance" that would be germane to this discussion. 

Comment by fowlertm on Transhumanism, genetic engineering, and the biological basis of intelligence. · 2022-09-15T20:18:11.828Z · LW · GW

Interesting claim. We specifically asked him that and he didn't think that was the case, but you could be right!

Comment by fowlertm on Cognitive scientist Joel Chan on metascience, scaling and automating innovation, collective intelligence, and tools for thought. · 2021-10-12T15:20:05.190Z · LW · GW

My admin pointed out the RSS feed (which I assume is what you found) and he's going to see if there's a way to make subscribing easier. 

Thanks for bringing this to my attention!

Comment by fowlertm on Cognitive scientist Joel Chan on metascience, scaling and automating innovation, collective intelligence, and tools for thought. · 2021-10-10T22:47:33.104Z · LW · GW

Huh, let me ask about that! 

Thanks for your interest.

Comment by fowlertm on fowlertm's Shortform · 2021-03-04T00:58:58.505Z · LW · GW

Comment by fowlertm on fowlertm's Shortform · 2020-10-27T00:43:31.222Z · LW · GW

I'm looking for a really short introduction to light therapy and a rig I can put in my basement office. Over the years I've noticed my productivity just falls off a goddamn cliff after sundown during the winter months, and I'd like to try to do something about it.

After the requisite searching I see a dozen or so references across LessWrong, and was wondering if someone could just tell me how the story ends and where I can shop for bulbs.

For the most part I was thinking about just making things brighter, but I'm open to trying red-light therapy too if people have had success with that.  

Comment by fowlertm on I'm interested in a sub-field of AI but don't know what to call it. · 2019-08-26T11:50:37.904Z · LW · GW

Thanks for the recommendations. One thing that would help is just knowing what this is called. Do your books give it a name?

Comment by fowlertm on I'm interested in a sub-field of AI but don't know what to call it. · 2019-08-25T20:17:01.603Z · LW · GW

Not yet. That's part of what we're hoping to learn about here.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T20:59:05.656Z · LW · GW

I like that idea too. How hard is it to publish in academic journals? I don't have more than a BS, but I have done original research and I can write in an academic style.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T20:48:19.510Z · LW · GW

A post-mortem isn't quite the same thing. Mine has a much more granular focus on the actual cognitive errors occurring, with neat little names for each of them, and has the additional step of repeatedly visualizing yourself making the correct move.

https://rulerstothesky.com/2016/03/17/the-stempunk-project-performing-a-failure-autopsy/

This is a rough idea of what I did; the more awesome version with graphs will require an email address to which I can send a .jpg.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T19:11:02.890Z · LW · GW

Different reasons, none of them nefarious or sinister.

I emailed a technique I call 'the failure autopsy' to Julia Galef, which as far as I know is completely unique to me. She gave me a cheerful 'I'll read this when I get a chance' and never got back to me.

I'm not sure why I was turned down for a MIRIx workshop; I'm sure I could've managed to get some friends together to read papers and write ideas on a whiteboard.

I've written a few essays for LW, the reception of which was lukewarm. I don't know if I'm just bad at picking topics of interest or if it's a reflection of the declining status of this forum.

To be clear: I didn't come here to stamp my feet and act like a prissy diva. I don't think the rationalists are big meanies who are deliberately singling me out for exclusion. I'm sure everyone has 30,000 emails to read and a million other commitments and they're just busy.

But from my perspective it hardly matters: the point is that I have had no luck building contacts through the existing institutions or channeling my desire to help in any useful way.

You might be wondering whether or not I'm just not as smart or as insightful as I think I am. That's a real possibility, but it's worth pointing out that I also emailed the failure autopsy technique to Eric S. Raymond -- famed advocate of open source, bestselling author, hacker, philosopher, righteous badass -- and he not only gave me a lot of encouraging feedback, he took time out of his schedule to help me refine some of my terminology to be more descriptive. We're actually in talks to write a book together next year.

So it might be me, but there's evidence to indicate that it probably isn't.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T03:36:25.049Z · LW · GW

I hadn't known about that, but I came to the same conclusion!

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T03:36:07.072Z · LW · GW

I gave that some thought! LW seems much less active than it once was, though, so that strategy isn't as appealing. I've also written a little for this site and the reception has been lukewarm, so I figured a book would be best.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-09T03:34:48.181Z · LW · GW

That's not a bad idea. As it stands I'm pursuing the goal of building a dedicated group of people around these ideas, which is proving difficult enough as it is. Eventually I'll want to move forward with the institute, though, and it seems wise to begin thinking about that now.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-08T16:35:41.179Z · LW · GW

I have done that, on a number of different occasions. I have also tried for literally years to contribute to futurism in other ways; I attempted to organize a MIRIx workshop and was told no because I wasn't rigorous enough or something, despite the fact that on the MIRIx webpage it says:

"A MIRIx workshop can be as simple as gathering some of your friends to read MIRI papers together, talk about them, eat some snacks, scribble some ideas on whiteboards, and go out to dinner together."

Which is exactly what I was proposing.

I have tried for years to network with people in the futurist/rationalist movement, by offering to write for various websites and blogs (and being told no each and every single time), or by trying to discuss novel rationality techniques with people positioned to provide useful feedback (and being ignored each and every single time).

While I may not be Eliezer Yudkowsky, the evidence indicates that I'm at least worth casually listening to, but I have had no luck getting even that far.

I left a cushy job in Asia because I wanted to work toward making the world a better place, and I'm not content simply giving money to other people to do so on my behalf. I have a lot of talent and energy which could be going towards that end; for whatever reason, the existing channels have proven to be dead ends for me.

But even if the above were not the case, there is an extraordinary amount of technical talent in the Front Range which could be going towards more future-conscious work. Most of these people probably haven't heard of LW or don't care much about it (as evinced by the moribund LW meetup in Boulder and the very, very small one in Denver), but they might take notice if there were a futurist institution within driving distance.

Approaching from the other side, I've advertised futurist-themed talks on LW numerous times and gotten, like, three people to attend.

I'll continue donating to CFAR/MIRI because they're doing valuable work, but I also want to work on this stuff directly, and I haven't been able to do that with existing structures.

So I'm going to build my own. If you have any useful advice for that endeavor, I'd be happy to hear it.

Comment by fowlertm on Running a Futurist Institute. · 2017-10-08T16:23:25.465Z · LW · GW

You're right. Here is a reply I left on a Reddit thread answering this question:

This institution will essentially be a formalization and scaling-up of a small group of futurists that already meet to discuss emerging technologies and similar subjects. Despite the fact that they've been doing this for years, attendance is almost never more than ten people (25 attendees would be fucking Woodstock).

I think the best way to begin would be to try and use this seed to create a TED-style hub of recurring discussions on exactly these topics. There's a lot of low-hanging fruit to be picked in the service of this goal. For example, I recently convinced the organizer of the futurist group to switch to a regular spot at the local library instead of the nigh-impossible-to-find hackerspace at which they were meeting before. I've also done things like buy pizza for everyone.

Once we get to where we have a nice, clean, well-lit venue and have at least 20 people regularly attending, I'd like to start reaching out to local businesses, writers, artists, and academics to have them give talks to the group. As it stands it probably wouldn't be worth their time just to speak to 8 people.

TEDxMileHigh does something vaguely like this, but it isn't as focused and only occurs once per year. Once I get that lined out, I'd like the group's first 'product' to be a near-comprehensive 'talent audit' for the Denver/Boulder region. If I had a billion dollars and wanted to invest it in the highest-impact companies and research groups, I'd have no idea where to get started. Here are some questions I'd like to answer:

What are the biggest research and investment initiatives currently happening? Is there more brainpower in nanotech or AI? In neurotech or SENS-type fields? AFAICT nobody knows. Who is doing the most investing? What kind of capital is available from hedge funds or angel investors? What sorts of bridges exist between academia, the private sector, think tanks, and investment firms? How can I strengthen them?

So we'll start by aping TED and then try to figure out what kind of talent pool we have to work with. These two goals alone will surely require several years, and there's more than one avenue to monetization (ticket sales; subscriptions to the talent audit).

Beyond this horizon things get fuzzier; it's hard for me to say what direction the institute will take because I need to answer other questions first. For example, I'm very interested in superintelligent AI and related ethical issues. I have even thought of a name for a group devoted to research in the field: 'the Superintelligence Research Group', S.I.R.G. (pronounced 'surge').

But is there enough AI/mathematics/computation brainpower around to make such a venture worthwhile? I mean, there's more than one computing research group just in Boulder, but are they doing the kind of work that could be geared toward SAI research?

If so maybe I'll maneuver in that direction; if not, it would probably make more sense to focus on other things.

So that's one possibility. Another is either providing consulting to investors wanting to work with companies in the front range, or angel investing in those companies myself.

But if I'm publishing a newsletter about investment opportunities in the Front Range, would I even be allowed to personally invest in companies (i.e. is there any legal conflict of interest or whatever involved)? Would the decision to make the institute an LLC or a 501(c)(3) impact future financial maneuvering?

So you have a short-term, concrete answer to your question and a long-term, speculative answer to your question.

Is there anything else you'd like to know?

Comment by fowlertm on Running a Futurist Institute. · 2017-10-07T01:52:15.086Z · LW · GW

(1) The world does not have a surfeit of intelligent technical folks thinking about how to make the future a better place. Even if I founded a futurist institute in the exact same building as MIRI/CFAR, I don't think it'd be overkill.

(2) There is a profound degree of technical talent here in central Colorado which doesn't currently have a nexus around which to have these kinds of discussions about handling emerging technologies responsibly. There is a real gap here that I intend to fill.

Comment by fowlertm on Come check out the Boulder Future Salon this Saturday! · 2017-09-07T15:13:28.646Z · LW · GW

That hadn't even occurred to me, thank you! Do you think it'd be inappropriate? This isn't a LW specific meetup, just a bunch of tech nerds getting together to discuss this huge tech project I just finished.

Comment by fowlertm on Anyone else reading "Artificial Intelligence: A Modern Approach"? · 2016-11-05T23:17:22.315Z · LW · GW

Thanks! I suppose I wasn't as clear as I could have been: I was actually wondering if there are any people who are reading it currently, who might be grappling with the same issues as me and/or might be willing to split responsibility for creating Anki cards. This textbook is outstanding, and I think there would be significant value in anki-izing as much of it as possible.

Comment by fowlertm on LINK: Performing a Failure Autopsy · 2016-05-29T16:52:03.391Z · LW · GW

Because I missed numerous implications, needlessly increased causal opacity, and failed to establish a baseline before I started fiddling with variables. Those are poor troubleshooting practices.

Comment by fowlertm on Linguistic mechanisms for less wrong cognition · 2015-12-06T16:22:28.964Z · LW · GW

So a semi-related thing I've been casually thinking about recently is how to develop what basically amounts to a hand-written programming language.

Like a lot of other people I make to-do lists and take detailed notes, and I'd like to develop a written notation that not only captures basic tasks, but maybe also simple representations of the knowledge/emotional states of other people (i.e. employees).

More advanced than that, I've also been trying to think of ways I can take notes in a physical book that will allow a third party to make Anki flashcards or evernote entries based on my script. It has to be extremely dense to fit in the margins of a book, and must capture distinct commands like "make a single cloze deletion card for this sentence" and "make four separate cards for this sentence, cloze deleting a different piece of information for each card but otherwise leaving everything intact" and so on.
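
To make that concrete, here's a minimal sketch of what parsing such a notation might look like. The symbols and the card-spec format are purely illustrative placeholders, not a worked-out system:

```python
# Hypothetical margin marks (illustrative only):
#   "z"   -> one cloze-deletion card for the marked sentence
#   "z4"  -> four cards, each cloze-deleting a different piece of information
#   "q"   -> a plain front/back card, with the back filled in later
import re

def parse_mark(mark: str, sentence: str) -> list[dict]:
    """Expand one margin mark plus its sentence into Anki card specs."""
    m = re.fullmatch(r"z(\d*)", mark)
    if m:
        n = int(m.group(1) or 1)
        # One spec per card; the transcriber later picks which piece of
        # information each card cloze-deletes.
        return [{"type": "cloze", "text": sentence, "deletion": i + 1}
                for i in range(n)]
    if mark == "q":
        return [{"type": "basic", "front": sentence, "back": ""}]
    raise ValueError(f"unknown mark: {mark}")

print(parse_mark("z4", "Mitochondria produce ATP via oxidative phosphorylation."))
```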

Any thoughts?

Comment by fowlertm on Deliberate Grad School · 2015-10-12T15:56:48.026Z · LW · GW

I mentioned CMU for the reasons you've stated and because Lukeprog endorsed their program once (no idea what evidence he had that I don't).

I have also spoken to Katja Grace about it, and there is evidently a bit of interest in LW themes among the students there.

I'm unaware of other programs of a similar caliber, though there are bound to be some. If anyone knows of any, by all means list them, that was the point of my original comment.

Comment by fowlertm on Deliberate Grad School · 2015-10-04T16:07:43.369Z · LW · GW

I think there'd be value in just listing graduate programs in philosophy, economics, etc., by how relevant the research already being done there is to x-risk, AI safety, or rationality. Or by whether or not they contain faculty interested in those topics.

For example, if I were looking to enter a philosophy graduate program it might take me quite some time to realize that Carnegie Mellon probably has the best program for people interested in LW-style reasoning about something like epistemology.

Comment by fowlertm on Learning takes a long time · 2015-06-01T03:15:20.821Z · LW · GW

Data point/encouragement: I'm getting a lot out of these, and I hope you keep writing them.

I'm one of those could-have-beens who dropped mathematics early on despite a strong interest and spent the next decade thinking he sucked at math, before he rediscovered his numerical proclivities in his early 20s because FAI theory caused him to peek at Discrete Mathematics.

Comment by fowlertm on FOOM Articles · 2015-03-06T02:41:37.325Z · LW · GW

Both unknown to me, thanks :)

Comment by fowlertm on The outline of Maletopia · 2015-02-21T15:48:19.906Z · LW · GW

Why? What's wrong with wanting to be masculine?

Comment by fowlertm on Intrapersonal comparisons: you might be doing it wrong. · 2015-02-21T15:30:55.395Z · LW · GW

Interesting tie-in, thanks.

Incidentally, how cool would it be to be able to say "my epistemology is the most advanced"? If nothing else it'd probably be a great pickup line at LW meetups.

Comment by fowlertm on Intrapersonal comparisons: you might be doing it wrong. · 2015-02-04T04:39:41.535Z · LW · GW

It's worth a lot, I'll look into it.

Comment by fowlertm on Is there a rationalist skill tree yet? · 2015-02-03T17:28:36.198Z · LW · GW

Agreed. I think in light of the fact that a lot of this stuff is learned iteratively, you'd want to unpack 'basic mathematics'. I'm not sure of the best way to graphically represent iterative learning, but maybe you could have arrows going back to certain subjects, or you could have 'statistics round II' as one of the nodes in the network.

It seems like insights are what you're really aiming at, so maybe instead of 'probability theory' you have a node for 'distributions' and 'variance' at some early point in the tree, and later you have 'Bayesian v. Frequentist reasoning'.

This would also help you unpack basic mathematics, though I don't know much about the dependencies either. I hope to soon :)
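
As a rough sketch of the 'round II' idea, assuming you represented the tree in code: each pass over a subject gets its own node in a dependency graph, and a topological sort yields a study order. The node names here are purely illustrative:

```python
# Minimal sketch: a skill tree as a DAG mapping each node to its prerequisites.
# Iterative learning is modeled by making each pass ("statistics I",
# "statistics II") a separate node rather than a back-edge.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

skill_tree = {
    "distributions":        {"basic algebra"},
    "variance":             {"distributions"},
    "statistics I":         {"distributions", "variance"},
    "Bayes v. frequentism": {"statistics I"},
    "statistics II":        {"Bayes v. frequentism"},  # the second pass
}

# A valid linear study order respecting every prerequisite.
print(list(TopologicalSorter(skill_tree).static_order()))
```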

Comment by fowlertm on Is there a rationalist skill tree yet? · 2015-01-31T14:27:21.872Z · LW · GW

I thought of that as well, it does need some work done in terms of presentation. It'd be a good place to start, yes.

Comment by fowlertm on Programming-like activities? · 2015-01-12T16:35:52.909Z · LW · GW

My two cents: I studied math pretty intensively on my own and later started programming. To my pleasant surprise, the thinking style involved in math carried over almost directly into programming. I'd imagine that the inverse is also true.

Comment by fowlertm on Meetup : Denver Area Meetup 2 · 2014-11-14T15:18:59.410Z · LW · GW

I'm sorry I missed this and hope it went well. Work has been chaotic lately, but I absolutely support a LW presence in Denver. I've tried once before to get a similar group off the ground, and would be happy to help this one along with presentations, planning, rationalist game nights, whatever.

Comment by fowlertm on Meetup : Denver Area Meetup 2 · 2014-11-08T14:56:55.206Z · LW · GW

I'll try to be there.

Comment by fowlertm on LWers living in Boulder/Denver area: any interest in an AI-philosophy reading group? · 2014-10-21T14:49:35.762Z · LW · GW

Actually, I folded it into another group called the Boulder Future Salon, which doesn't deal exclusively with x-risk but which has other advantages going for it, like a pre-existing membership.

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-30T02:19:17.881Z · LW · GW

How would you recommend responding?

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-29T16:42:33.622Z · LW · GW

I think I'm basically prepared for that line of attack. MIRI is not a cult, period. When you want to run a successful cult you do it Jim-Jones-style, carting everyone to a secret compound and carefully filtering the information that makes it in or out. You don't work as hard as you can to publish your ideas in a format where they can be read by anyone, you don't offer to publicly debate William Lane Craig, and you don't seek out the strongest versions of criticisms of your position (i.e. those coming from Robin Hanson).

Eliezer hasn't made it any easier on himself by being obnoxious about how smart he is, but then again neither did I; most smart people eventually have to learn that there are costs associated with being too proud of some ability or other. But whatever his flaws, the man is not at the center of a cult.

Comment by fowlertm on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-08-29T16:28:16.833Z · LW · GW

"Note that AI is certainly not a great filter: an AI would likely expand through the universe itself"

I was confused by this; what is it supposed to mean? Off the top of my head it certainly seems like there is sufficient space between 'make an AI that causes the extinction of the human race or otherwise makes expanding into space difficult' and 'make an AI that causes the extinction of the human race but which goes on to colonize the universe' for AI to be a great filter.

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-25T16:52:51.257Z · LW · GW

This comment is a poorly-organized brain dump which serves as a convenient gathering place for what I've learned after several days of arguing with every MIRI critic I could find. It will probably get its own expanded post in the future, and if I have the time I may try to build a near-comprehensive list.

I've come to understand that criticisms of MIRI's version of the intelligence explosion hypothesis and the penumbra of ideas around it fall into two permeable categories:

Those that criticize MIRI as an organization or the whole FAI enterprise (people making these arguments may or may not be concerned about the actual IE) and those that attack object-level claims made by MIRI.

Broad Criticisms

1a) Why worry about this now, instead of in the distant future, given the abysmal performance of attempts to predict AI?

1b) Why take MIRI seriously when there are so many expert opinions that diverge?

1c) Aren't MIRI and LW just an Eliezer-worshipping cult?

1d) Is it even possible to do this kind of theoretical work so far in advance of actual testing and experimentation?

1e) The whole argument can be dismissed because it pattern-matches other doomsday scenarios, almost all of which have been bullshit.

Specific Criticisms

2a) General intelligence is what we're worried about here, and it may prove much harder to build than we're anticipating.

2b) Tool AIs won't be as dangerous as agent AIs.

2c) Why not just build an Oracle?

2d) The FOOM will be distributed and slow, not fast and localized.

2e) Dumb Superintelligence, i.e. nothing worthy of the name could possibly misinterpret a goal like 'make humans happy'

2f) Even FAI isn't a guarantee

2g) A self-improvement cascade will likely hit a wall at sub-superintelligent levels.

2h) Divergence Issue: all functioning AI systems have built-in sanity checks which take short-form goal statements and unpack them in ways that take account of constraints and context (???). It is actually impossible to build an AI which does not do this (???), and thus there can be no runaway SAI which is given a simple short-form goal and then carries it to ridiculous logical extremes (I WOULD BE PARTICULARLY INTERESTED IN SOMEONE ADDRESSING THIS).

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-25T16:07:56.088Z · LW · GW

A good point; I must spend some time looking into the FOOM debate.

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-20T16:54:18.315Z · LW · GW

I've heard the singularity-pattern-matches-religious-tropes argument before and hadn't given it much thought, but I find your analysis that the argument is wrong to be convincing, at least for the futurism I'm acquainted with. I'm less sure that it's true of Kurzweil's brand of futurism.

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-19T11:50:59.627Z · LW · GW

Correct, I've been pursuing that as well.

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-19T11:38:28.795Z · LW · GW

Correct :)

Comment by fowlertm on Steelmanning MIRI critics · 2014-08-19T11:37:28.779Z · LW · GW

Only the IE as defended by MIRI; it'd be a much longer talk if I wanted to defend everything they've put forward!

Comment by fowlertm on A Visualization of Nick Bostrom’s Superintelligence · 2014-08-15T16:57:59.463Z · LW · GW

With what software was this done?

Comment by fowlertm on Recommendations for donating to an anti-death cause · 2014-05-08T01:50:17.597Z · LW · GW

For those interested, I ended up donating to the Brain Preservation Foundation, MIRI, SENS, and the Alzheimer's Disease Research Fund.

More detail here:

http://rulerstothesky.wordpress.com/2014/04/25/in-memorium/

Comment by fowlertm on Truth: It's Not That Great · 2014-05-04T23:05:41.134Z · LW · GW

Good stuff. It took me quite a long time to work these ideas out for myself. There are also situations in which it can be beneficial to let somewhat obvious non-truths continue existing.

Example: your boss is good at doing something but his theoretical explanation for why it works is nonsense. Most of the time, questioning the theory is only likely to piss him off, and unless you can replace it with something better, keeping your mouth shut is probably the safest option.

Relevant post:

http://cognitiveengineer.blogspot.com/2013/06/when-truth-isnt-enough.html

Comment by fowlertm on Recommendations for donating to an anti-death cause · 2014-04-10T15:34:35.335Z · LW · GW

I'd like to aim squarely at Death.