Running a Futurist Institute.

post by fowlertm · 2017-10-06T17:05:47.589Z · LW · GW · Legacy · 23 comments


Hello,

My name is Trent Fowler, and I'm an aspiring futurist. To date I have given talks on two continents on machine ethics, AI takeoff dynamics, secular spirituality, existential risk, the future of governance, and technical rationality. I have written on introspection, the interface between language and cognition, the evolution of intellectual frameworks, and myriad other topics. In 2016 I began 'The STEMpunk Project', an endeavor to learn as much about computing, electronics, mechanics, and AI as possible, which culminated in a book published earlier this year. 

Elon Musk is my spirit animal. 

I am planning to found a futurist institute in Boulder, CO. I actually left my cushy job in East Asia to help make the future a habitable place. 

Is there someone I could talk to about how to do this? Should I incorporate as a 501(c)(3) or an LLC? What are the best ways of monetizing such an endeavor? How can I build an audience? (Meetup attendance has been anemic at best; what can I do about that?) And so on.

Best,

-Trent

23 comments


comment by gwern · 2017-10-06T20:35:19.302Z · LW(p) · GW(p)

Why create a new one at all?

Replies from: fowlertm, IlyaShpitser
comment by fowlertm · 2017-10-07T01:52:15.086Z · LW(p) · GW(p)

(1) The world does not have a surfeit of intelligent technical folks thinking about how to make the future a better place. Even if I founded a futurist institute in the exact same building as MIRI/CFAR, I don't think it'd be overkill.

(2) There is a profound degree of technical talent here in central Colorado which doesn't currently have a nexus around which to have these kinds of discussions about handling emerging technologies responsibly. There is a real gap here that I intend to fill.

Replies from: gwern, turchin
comment by gwern · 2017-10-07T02:18:32.010Z · LW(p) · GW(p)

Even if I founded a futurist institute in the exact same building as MIRI/CFAR, I don't think it'd be overkill.

You know, you could do that. By giving them the money.

Replies from: fowlertm, John_Maxwell_IV
comment by fowlertm · 2017-10-08T16:35:41.179Z · LW(p) · GW(p)

I have done that, on a number of different occasions. I have also tried for literally years to contribute to futurism in other ways; I attempted to organize a MIRIx workshop and was told no because I wasn't rigorous enough or something, despite the fact that the MIRIx webpage says:

"A MIRIx workshop can be as simple as gathering some of your friends to read MIRI papers together, talk about them, eat some snacks, scribble some ideas on whiteboards, and go out to dinner together."

Which is exactly what I was proposing.

I have tried for years to network with people in the futurist/rationalist movement, by offering to write for various websites and blogs (and being told no each and every single time), or by trying to discuss novel rationality techniques with people positioned to provide useful feedback (and being ignored each and every single time).

While I may not be Eliezer Yudkowsky, the evidence indicates that I'm at least worth casually listening to, but I have had no luck getting even that far.

I left a cushy job in Asia because I wanted to work toward making the world a better place, and I'm not content simply giving money to other people to do so on my behalf. I have a lot of talent and energy which could be going towards that end; for whatever reason, the existing channels have proven to be dead ends for me.

But even if the above were not the case, there is an extraordinary amount of technical talent in the Front Range which could be going toward more future-conscious work. Most of these people probably haven't heard of LW or don't care much about it (as evinced by the moribund LW meetup in Boulder and the very, very small one in Denver), but they might take notice if there were a futurist institution within driving distance.

Approaching from the other side, I've advertised futurist-themed talks on LW numerous times and gotten, like, three people to attend.

I'll continue donating to CFAR/MIRI because they're doing valuable work, but I also want to work on this stuff directly, and I haven't been able to do that with existing structures.

So I'm going to build my own. If you have any useful advice for that endeavor, I'd be happy to hear it.

Replies from: John_Maxwell_IV, Lumifer
comment by John_Maxwell (John_Maxwell_IV) · 2017-10-09T00:43:07.412Z · LW(p) · GW(p)

Maybe your mistake was to write a book about your experience of self-study instead of making a series of LW posts. Nate Soares took this approach and he is now the executive director of MIRI :P

Replies from: fowlertm
comment by fowlertm · 2017-10-09T03:36:07.072Z · LW(p) · GW(p)

I gave that some thought! LW seems much less active than it once was, though, so that strategy isn't as appealing. I've also written a little for this site and the reception has been lukewarm, so I figured a book would be best.

Replies from: None
comment by [deleted] · 2017-10-11T04:20:15.535Z · LW(p) · GW(p)

We're now a lot more active at LW2.0! Some of my stuff which wasn't that popular here is getting more attention there.

Maybe you could try it too?

comment by Lumifer · 2017-10-09T17:06:55.636Z · LW(p) · GW(p)

being told no each and every single time ... being ignored each and every single time

Do you know why?

Replies from: fowlertm
comment by fowlertm · 2017-10-09T19:11:02.890Z · LW(p) · GW(p)

Different reasons, none of them nefarious or sinister.

I emailed a technique I call 'the failure autopsy' to Julia Galef, which as far as I know is completely unique to me. She gave me a cheerful 'I'll read this when I get a chance' and never got back to me.

I'm not sure why I was turned down for a MIRIx workshop; I'm sure I could've managed to get some friends together to read papers and write ideas on a whiteboard.

I've written a few essays for LW, the reception of which was lukewarm. I don't know if I'm just bad at picking topics of interest or if it's a reflection of the declining status of this forum.

To be clear: I didn't come here to stamp my feet and act like a prissy diva. I don't think the rationalists are big meanies who are deliberately singling me out for exclusion. I'm sure everyone has 30,000 emails to read and a million other commitments and they're just busy.

But from my perspective it hardly matters: the point is that I have had no luck building contacts through the existing institutions and channeling my desire to help in any useful way.

You might be wondering whether or not I'm just not as smart or as insightful as I think I am. That's a real possibility, but it's worth pointing out that I also emailed the failure autopsy technique to Eric S. Raymond -- famed advocate of open source, bestselling author, hacker, philosopher, righteous badass -- and he not only gave me a lot of encouraging feedback, he took time out of his schedule to help me refine some of my terminology to be more descriptive. We're actually in talks to write a book together next year.

So it might be me, but there's evidence to indicate that it probably isn't.

Replies from: IlyaShpitser, ChristianKl, Lumifer
comment by IlyaShpitser · 2017-10-09T20:28:21.722Z · LW(p) · GW(p)

Try publishing in mainstream AI venues? (AAAI has some sort of safety workshop this year.) I am assuming that if you want to start an institute, you have publishable stuff you want to say.

Replies from: fowlertm
comment by fowlertm · 2017-10-09T20:59:05.656Z · LW(p) · GW(p)

I like that idea too. How hard is it to publish in academic journals? I don't have more than a BS, but I have done original research and I can write in an academic style.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-10-10T20:49:52.094Z · LW(p) · GW(p)

Pretty hard, I suppose.


It's weird, though: if you are asking these types of questions, why are you trying to run an institute? Typically very senior academics do that. (I am not singling you out either; I have the same question for the folks running MIRI.)

comment by ChristianKl · 2017-10-15T17:27:58.772Z · LW(p) · GW(p)

But from my perspective it hardly matters: the point is that I have had no luck building contacts through the existing institutions and channeling my desire to help in any useful way.

From the outside view, a person who has no luck building contacts with existing institutions is unlikely to be a good person to start a new institute.

Of course, getting someone like Eric S. Raymond to be open to writing a book with you is a good sign.

comment by Lumifer · 2017-10-09T20:13:40.962Z · LW(p) · GW(p)

a technique I call 'the failure autopsy' ... which as far as I know is completely unique to me

Ahem. The rest of the world calls it a post-mortem. See e.g. this.

never got back to me ... I'm not sure why I was turned down... Don't know if I'm just bad at picking topics of interest...

So you do not know why. Did you try to figure it out? Do a post-mortem, maybe?

Replies from: fowlertm
comment by fowlertm · 2017-10-09T20:48:19.510Z · LW(p) · GW(p)

A post-mortem isn't quite the same thing. Mine has a much more granular focus on the actual cognitive errors occurring, with neat little names for each of them, and has the additional step of repeatedly visualizing yourself making the correct move.

https://rulerstothesky.com/2016/03/17/the-stempunk-project-performing-a-failure-autopsy/

This is a rough idea of what I did; the more awesome version with graphs will require an email address to which I can send a .jpg.

Replies from: Lumifer
comment by Lumifer · 2017-10-10T14:38:41.600Z · LW(p) · GW(p)

Neat little names, I see. Thank you, I'll pass on the jpg awesomeness.

comment by John_Maxwell (John_Maxwell_IV) · 2017-10-09T00:50:53.286Z · LW(p) · GW(p)

The Future of Life Institute thinks that a portfolio approach to AI safety, where different groups pursue different research agendas, is best. It's plausible to me that we've hit the point of diminishing returns in terms of allocating resources to MIRI's approach, and marginal resources are best directed towards starting new research groups.

Replies from: fowlertm
comment by fowlertm · 2017-10-09T03:36:25.049Z · LW(p) · GW(p)

I hadn't known about that, but I came to the same conclusion!

comment by turchin · 2017-10-08T22:25:34.806Z · LW(p) · GW(p)

You could start a local chapter of the Transhumanist Party, or of anything you want, and just hold gatherings of people to discuss any futuristic topics: life extension, AI safety, whatever. Officially registering such an activity is probably a waste of time and money, unless you know what you are going to do with it, like collecting donations or renting an office.

There is no need to start an institute if you don't have a dedicated group of people around you. An institute consisting of one person is something strange.

Replies from: fowlertm
comment by fowlertm · 2017-10-09T03:34:48.181Z · LW(p) · GW(p)

That's not a bad idea. As it stands I'm pursuing the goal of building a dedicated group of people around these ideas, which is proving difficult enough as it is. Eventually I'll want to move forward with the institute, though, and it seems wise to begin thinking about that now.

comment by IlyaShpitser · 2017-10-06T21:42:52.478Z · LW(p) · GW(p)

Why create any of them?

comment by Yosarian2 · 2017-10-07T22:05:38.328Z · LW(p) · GW(p)

I think you need to narrow your focus on exactly what you mean by a "futurist institute" and figure out what specifically you plan to do before you can think about any of these issues.

Are you thinking about the kind of consulting agency that companies get advice from on what the market might look like in 5 years and what technologies their competitors are using? Or about something like a think-tank that does research and writes papers with the intent of influencing political policy, and is usually supported by donations? Or an academic group, probably tied to a university, which publishes academic papers, similar to what Nick Bostrom does at Oxford? Or something that raises money primarily for scientific and technological research? Or maybe an organization similar to H+ that tries to spread awareness of transhumanist/singularity issues, publishes newsletters, has meetings, and generally tries to change people's minds about futurist, technological, AI, and/or transhumanist issues? Or something else entirely?

Basically, without more details about exactly what you are trying to do, I don't think anyone here is going to be able to offer very good advice. I suspect you may not be sure yourself yet, so maybe the first step is to think through the different options and narrow your initial focus a bit.

Replies from: fowlertm
comment by fowlertm · 2017-10-08T16:23:25.465Z · LW(p) · GW(p)

You're right. Here is a reply I left on a Reddit thread answering this question:

This institution will essentially be a formalization and scaling-up of a small group of futurists that already meet to discuss emerging technologies and similar subjects. Despite the fact that they've been doing this for years, attendance is almost never more than ten people (25 attendees would be fucking Woodstock).

I think the best way to begin would be to use this seed to create a TED-style hub of recurring discussions on exactly these topics. There's a lot of low-hanging fruit to be picked in the service of this goal. For example, I recently convinced the organizer of the futurist group to switch to a regular spot at the local library instead of the nigh-impossible-to-find hackerspace where they were meeting before. I've also done things like buy pizza for everyone.

Once we get to where we have a nice, clean, well-lit venue and have at least 20 people regularly attending, I'd like to start reaching out to local businesses, writers, artists, and academics to have them give talks to the group. As it stands it probably wouldn't be worth their time just to speak to 8 people.

TEDxMileHigh does something vaguely like this, but it isn't as focused and only occurs once per year. Once I get that lined out, I'd like the group's first 'product' to be a near-comprehensive 'talent audit' for the Denver/Boulder region. If I had a billion dollars and wanted to invest it in the highest-impact companies and research groups, I'd have no idea where to get started. Here are some questions I'd like to answer:

What are the biggest research and investment initiatives currently happening? Is there more brainpower in nanotech or AI? In neurotech or SENS-type fields? AFAICT nobody knows. Who is doing the most investing? What kind of capital is available from hedge funds or angel investors? What sorts of bridges exist between academia, the private sector, think tanks, and investment firms? How can I strengthen them?

So we'll start by aping TED and then try to figure out what kind of talent pool we have to work with. These two goals alone will surely require several years, and there's more than one avenue to monetization (ticket sales; subscriptions to the talent audit).

Beyond this horizon things get fuzzier, since I can't say what direction the institute will take until I've answered other questions first. For example, I'm very interested in superintelligent AI and related ethical issues. I have even thought of a name for a group devoted to research in the field: 'the Superintelligence Research Group', S.I.R.G. (pronounced 'surge').

But is there enough AI/mathematics/computation brainpower around to make such a venture worthwhile? I mean, there's more than one computing research group just in Boulder, but are they doing the kind of work that could be geared toward SAI?

If so, maybe I'll maneuver in that direction; if not, it would probably make more sense to focus on other things.

So that's one possibility. Another is either providing consulting to investors wanting to work with companies in the front range, or angel investing in those companies myself.

But if I'm publishing a newsletter about investment opportunities in the Front Range, would I even be allowed to personally invest in those companies (i.e., is there any legal conflict of interest involved)? Would the decision to make the institute an LLC or a 501(c)(3) impact future financial maneuvering?

So you have a short-term, concrete answer and a long-term, speculative answer to your question.

Is there anything else you'd like to know?