Chalmers' short comment in your link amounts to him expressing enthusiasm for ontologically basic mental properties, not any kind of endorsement of your specific research program.
Of course you're right that some forms of work pay well. Part of what keeps me down is impatience and the attempt to do the most important thing right now.
To be frank, the Outside View says that most people who have achieved little over many years of work will achieve little in the next few months. Many of them have trouble with time horizons, lack of willpower, or other problems that systematically sabotage their efforts, or prefer to indulge other desires rather than work hard. These things would hinder both scientific research and paid work. Refusing to self-finance with a lucrative job, combined with the absence of any impressive work history (which you have made clear in the post I have seen), is a bad signal about your productivity, your reasons for asking us for money, and your ability to eventually pay it back.
the attempt to do the most important thing right now
No one else seems to buy your picture of what is most important (qualia+safe AI). Have you actually thought through and articulated a model, with a chain of cause and effect, linking your course of research to your stated aim of affecting AI? Which came first, your desire to think about quantum consciousness theories or an interest in safe AI? It seems like a huge stretch.
I'm sorry to be so blunt, but if you're going to be asking for money on Less Wrong you should be able to answer such questions.
Some questions:
- How will you make money in the future to pay back the loan?
- Why aren't you doing that now, even on a part-time basis?
- Is there one academic physicist who will endorse your specific research agenda as worthwhile?
- Likewise for an academic philosopher?
- Likewise for anyone other than yourself?
- Why won't physicists doing ordinary physics (who are more numerous, have higher ability, and have a better track record of productivity) solve your problems in the course of making better predictive models?
- How would this particular piece of work help with your larger interests? Would it cause physicists to work on this topic? Provide a basis for assessing your productivity or lack thereof?
- Why not spend some time programming or tutoring math? If you work at Google for a year, you can then live off the proceeds for several years in Bali or the like. A moderate amount of tutoring work could pay the rent.
An example of the other way to approach this question is the idea of simulating a group of consciousness theorists for 500 subjective years, until they arrive at a consensus on the nature of consciousness. I think it's rather unlikely that anyone will ever get to solve FAI-relevant problems in that way.
The CEV idea there would be to create an AI that optimizes for the expected satisfaction of the utility function that such a process would output. If the AI's other functionality is good, it will start with reasonable guesses about what the process would output and rapidly improve those guesses. As it improved further, gathered more data, and so on, it would approximate that output better and better.
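The mechanism described here can be sketched as decision-making under moral uncertainty: hold a probability distribution over candidate utility functions (guesses about what the idealized deliberation would output), act to maximize expected utility under that distribution, and re-weight the candidates as evidence comes in. This is a minimal toy sketch, not anyone's actual proposal; all function names and the example hypotheses are my own illustrative inventions.

```python
def expected_utility(action, hypotheses):
    """hypotheses: list of (probability, utility_function) pairs."""
    return sum(p * u(action) for p, u in hypotheses)

def best_action(actions, hypotheses):
    """Pick the action with highest expected utility over the guesses."""
    return max(actions, key=lambda a: expected_utility(a, hypotheses))

def reweight(hypotheses, likelihood):
    """Bayesian-style update: likelihood(u) scores how well each candidate
    utility function fits new evidence about the deliberation's output."""
    weighted = [(p * likelihood(u), u) for p, u in hypotheses]
    total = sum(p for p, _ in weighted)
    return [(p / total, u) for p, u in weighted]

# Toy example: two guesses about what the simulated theorists would endorse.
hyps = [(0.6, lambda a: a), (0.4, lambda a: -a)]
print(best_action([-1, 0, 1], hyps))  # initial guesses favor action 1

# Evidence arrives suggesting the second candidate fits better.
hyps = reweight(hyps, lambda u: 0.9 if u(1) < 0 else 0.1)
print(best_action([-1, 0, 1], hyps))  # updated guesses favor action -1
```

The point of the sketch is only that "reasonable guesses, rapidly improved" has a natural formal reading: the AI never needs the finished 500-year output, just a distribution over it that sharpens with evidence.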