Posts

How does long-term use of caffeine affect productivity? 2012-04-11T23:09:56.035Z · score: 12 (13 votes)

Comments

Comment by quartz on Less Wrong used to like Bitcoin before it was cool. Time for a revisit? · 2012-06-20T22:05:17.689Z · score: 2 (4 votes) · LW · GW

One actionable topic that could be discussed: does the current price reflect what we expect Bitcoin's value to be?

Comment by quartz on Reframing the Problem of AI Progress · 2012-04-13T22:13:43.077Z · score: 1 (1 votes) · LW · GW

These are interesting suggestions, but they don't exactly address the problem I was getting at: leaving a line of retreat for the typical AI researcher who comes to believe that his work likely contributes to harm.

My anecdotal impression is that the number of younger researchers who take arguments for AI risk seriously has grown substantially in recent years, but - apart from spreading the arguments or changing careers - it is not clear how this knowledge should affect their actions.

If the risk of indifferent AI is to be averted, I expect that a gradual shift in what the AI community considers important work is necessary. The most viable path I see toward such a shift involves giving individual researchers a way to express their change in beliefs in their work - one that makes use of their existing skillset and doesn't kill their careers.

Comment by quartz on Needed: A large database of statements for true/false exercises · 2012-04-13T02:49:13.088Z · score: 3 (3 votes) · LW · GW

You could take a look at the 15 million statements in the database of the Never-Ending Language Learning (NELL) project. The subset of beliefs with human-supplied feedback, along with those that have high-confidence truth values, may be appropriate for your purpose.
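If it helps, here is a minimal sketch of the filtering step involved, assuming a tab-separated export with one belief per row and a per-belief confidence score in the last column. The column layout and the sample rows are illustrative assumptions, not NELL's actual schema.

```python
import csv
import io

# Hypothetical sample in a NELL-style export format: tab-separated rows of
# (entity, relation, value, confidence). The layout is an assumption for
# illustration only.
sample_tsv = """\
concept:city:paris\tcitylocatedincountry\tconcept:country:france\t0.99
concept:city:paris\tcitylocatedincountry\tconcept:country:spain\t0.42
concept:bird:penguin\tanimalistypeofanimal\tconcept:animal:bird\t0.97
"""

def high_confidence_beliefs(tsv_text, threshold=0.95):
    """Keep only beliefs whose confidence meets the threshold."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return [(ent, rel, val) for ent, rel, val, conf in reader
            if float(conf) >= threshold]

statements = high_confidence_beliefs(sample_tsv)
print(len(statements))  # 2 of the 3 sample beliefs pass the 0.95 cutoff
```

True statements would come from the high-confidence subset; plausible-but-false distractors could be drawn from the low-confidence remainder.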

Comment by quartz on How does long-term use of caffeine affect productivity? · 2012-04-12T20:55:30.376Z · score: 4 (4 votes) · LW · GW

Nice! This is exactly the kind of evidence I'm looking for. The papers cited in the intro also look highly relevant (James and Rogers, 2005; Sigmon et al., 2009).

Comment by quartz on Reframing the Problem of AI Progress · 2012-04-12T20:39:15.815Z · score: 8 (8 votes) · LW · GW

But the real problem we face is how to build or become a superintelligence that shares our values, and given that this seems very difficult, any progress that doesn't contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal even if it doesn't kill us), is bad.

Assume you manage to communicate this idea to the typical AI researcher. What do you expect him to do next? It's absurd to think that the typical researcher will quit his field and work on strategies for mitigating intelligence explosion or on the foundations of value. You might be able to convince him to work on one topic within AI instead of another. However, while some topics seem more likely than others to advance AI capabilities, this is difficult to tell in advance. More perniciously, what the field rewards are demonstrations of impressive capabilities. Researchers who avoid directions that lead to such demos will end up with less prestigious jobs, i.e., jobs where they are less able to influence the top students of the next generation of researchers. This isn't what the typical AI researcher wants either. So, what's he to do?

Comment by quartz on How does long-term use of caffeine affect productivity? · 2012-04-12T20:17:37.111Z · score: 2 (2 votes) · LW · GW

Am I the only one who finds it astonishing that there is no widely known evidence about whether a psychoactive substance used daily by about 90% of North American adults (and probably by a majority of LWers) is beneficial when used this way? What explains this apparent lack of interest? Discounting (caffeine clearly has short-term benefits), and the belief that even in the unlikely case that caffeine harms productivity in the long run, the harm is likely to be small?

Comment by quartz on Q&A with experts on risks from AI #1 · 2012-01-08T16:10:16.110Z · score: 6 (6 votes) · LW · GW

my firmest belief about the timeline for human-level AI is that we can't estimate it usefully. partly this is because i don't think "human level AI" will prove to be a single thing (or event) that we can point to and say "aha there it is!". instead i think there will be a series of human level abilities that are achieved.

This sounds right. SIAI communications could probably be improved by acknowledging the incremental nature of AI development more explicitly. Have they addressed how this affects safety concerns?

Comment by quartz on Metaethics: Where I'm Headed · 2011-11-22T09:19:31.064Z · score: 2 (2 votes) · LW · GW

Thanks! More like this, please.

Who are you writing for? If you skipped ahead to the metaethics main sequence and simply pointed to the literature for the cogsci background, do you expect that your readers would not understand you?

Comment by quartz on Q&A with new Executive Director of Singularity Institute · 2011-11-16T20:26:45.019Z · score: 0 (0 votes) · LW · GW

Addendum: Since the people who upvoted the question were in the same position as you with respect to its interpretation, it would be good to not only address my intended meaning, but all major modes of interpretation.

Comment by quartz on Q&A with new Executive Director of Singularity Institute · 2011-11-14T09:23:34.282Z · score: 7 (7 votes) · LW · GW

A clarifying question. By 'rigor', do you mean the kind of rigor that is required to publish in journals like Risk Analysis or Minds and Machines, or do you mean something else by 'rigor'?

I mean the kind of precise, mathematical analysis that would be required to publish at conferences like NIPS or in the Journal of Philosophical Logic. This entails development of technical results that are sufficiently clear and modular that other researchers can use them in their own work. In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl's Causality, Sipser's Introduction to the Theory of Computation, and MacKay's Information Theory, Inference, and Learning Algorithms. This is not going to happen unless research of sufficient quality starts soon.

Comment by quartz on Q&A with new Executive Director of Singularity Institute · 2011-11-07T07:19:21.436Z · score: 65 (75 votes) · LW · GW

How are you going to address the perceived and actual lack of rigor associated with SIAI?

There are essentially no academics who believe that high-quality research is happening at the Singularity Institute. This is likely to pose problems for your plan to work with professors to find research candidates, and it is also likely an indicator that little high-quality work is happening at the Institute.

In his recent Summit presentation, Eliezer states that "most things you need to know to build Friendly AI are rigorous understanding of AGI rather than Friendly parts per se". This suggests that researchers in AI and machine learning should be able to appreciate high-quality work done by SIAI. However, this is not happening: the publications listed on the SIAI page, including TDT, are mostly high-level arguments that don't meet this standard. How do you plan to change this?