Video Q&A with Singularity Institute Executive Director

post by lukeprog · 2011-12-10T11:27:06.809Z · LW · GW · Legacy · 124 comments

Contents

  Intro
  Staff Changes
  Rigorous Research
  Friendly AI Sub-Problems
  Improved Funding
  Rationality
  Changing Course
  Experimental Research
  Winning Without Friendly AI
  Conclusion

 

HD Video link.

MP3 version.

Transcript below.

 

Intro

Hi everyone. I’m Luke Muehlhauser, the new Executive Director of Singularity Institute.

Literally hours after being appointed Executive Director, I posted a call for questions about the organization on the LessWrong.com community website, saying I would answer many of them on video — and this is that video.

I’m doing this because I think transparency and communication are important.

In fact, when I began as an intern with Singularity Institute, one of my first projects was to spend over a hundred hours working with everyone in the organization to write its first strategic plan, which the board ratified and you can now read on our website.

When I was hired as a researcher, I gave a long text-only interview with Michael Anissimov, where I answered 30 questions about my personal background, the mission of Singularity Institute, our technical research program, the unsolved problems we work on, and the value of rationality training.

After becoming Executive Director, I immediately posted that call for questions — a few of which I will now answer.

 

Staff Changes

First question. Less Wrong user ‘wedrifid’ asks:

The staff and leadership at [Singularity Institute] seem to be undergoing a lot of changes recently. Is instability in the organisation something to be concerned about?

First, let me address the specific staff changes that wedrifid is talking about. At the end of summer 2011, Jasen Murray — who was running the visiting fellows program — resigned in order to pursue a business opportunity related to his passion for improving people’s effectiveness. At that same time, I was hired as a researcher after working as an intern for a few months, and Louie Helm was hired as Director of Development after having done significant volunteer work for Singularity Institute for even longer than that. Carl Shulman was also hired as a researcher at this time, and had also done lots of volunteer work before that, including publishing papers like “Arms Control and Intelligence Explosions,” “Implications of a Software-Limited Singularity,” and “Basic AI Drives and Catastrophic Risks,” among others.

Another change is that our President, Michael Vassar, is launching a personalized medicine company. It has a lot of promise, and we’re all excited to see him do that. He’ll still retain the title of President because he will continue to do quite a lot of good work for us — networking and spreading our mission wherever he goes. But he will no longer take a salary from Singularity Institute, and that was his own idea, several months ago.

But we needed somebody to run the organization, and I was the favorite choice for the job. 

So, should you be worried about instability? Well... I'm excited about the way the organization is taking shape, but I will say that we need more people. In particular, our research team took a hit when I moved from Researcher to Executive Director. So if you care about our mission and you can work with us to write working papers and other documents, you should contact me! My email is luke@intelligence.org.

And I’ll say one other thing. Do not fall prey to the sin of underconfidence. When I was living in Los Angeles I assumed I wasn’t special enough to apply even as an unpaid visiting fellow, and Louie Helm had to call me on Skype and talk me into it. So I thought “What the hell, it can’t hurt to contact Singularity Institute,” and within 9 months of that first contact I went from intern to researcher to Executive Director. So don't underestimate your potential — contact us, and let us be the ones who say "No."

And I suppose now would be a good time to answer another question, this one asked by ‘JoshuaZ’, who asks:

Are you concerned about potential negative signaling/status issues that will occur if [Singularity Institute] has as an executive director someone who was previously just an intern?

Not really. And the problem isn’t that I used to be an unpaid Visiting Fellow, it’s just that I went from Visiting Fellow to Executive Director so quickly. But that's... one of the beauties of Singularity Institute. Singularity Institute is not a place where you need to “pay your dues,” or something. If you’re hard-working and competent and you get along with people and you’re clearly committed to rationality and to reducing existential risk, then the leadership of the organization will put you where you can do the most good and be the most effective, regardless of irrelevant factors like duration of employment.

 

Rigorous Research

Next question. Less Wrong user ‘quartz’ asks:

How are you going to address the perceived and actual lack of rigor associated with [Singularity Institute]?

Now, what I initially thought quartz was talking about was Singularity Institute’s relative lack of publications in academic journals like Risk Analysis or Minds and Machines, so let me respond to that interpretation of the question first.

Luckily, I am probably the perfect person to answer this question, because when I first became involved with Singularity Institute this was precisely my own largest concern, but I changed my mind when I learned the reasons why Singularity Institute does not push harder than it does to publish in academic journals.

So. Here’s the story. In March 2011, before I was even an intern, I wrote a discussion post on Less Wrong called ‘How [Singularity Institute] could publish in mainstream cognitive science journals.’ I explained in detail not only what the right style is for mainstream journals, but also why Singularity Institute should publish in them. My four reasons were:

 

  1. Some donors will take Singularity Institute more seriously if it publishes in mainstream journals.
  2. Singularity Institute would look a lot more credible in general.
  3. Singularity Institute would spend less time answering the same questions again and again if it publishes short, well-referenced responses to such questions.
  4. Writing about these problems in the common style... will help other smart researchers to understand the relevant problems and perhaps contribute to solving them.

 

Then, in April 2011, I moved to the Bay Area and began to realize why exerting a lot of effort to publish in mainstream journals probably isn’t the right way to go for Singularity Institute, and I wrote a discussion post called ‘Reasons for [Singularity Institute] to not publish in mainstream journals.’

What are those reasons?

The first one is that more people read, for example, Yudkowsky’s thoughtful blog posts or Nick Bostrom’s pre-prints from his website... than the actual journals.

The second reason is that in many cases, most of a writer’s time is invested after the article is accepted by a journal, which means that most of the work comes after you’ve done the most important part and written up all the core ideas. Most of that work is tweaking. Those are dozens and dozens of hours not spent on finding new safety strategies, writing new working papers, etc.

A third reason is that publishing in mainstream journals requires you to jump through lots of hoops and contend with reviewer bias and the usual aversion to ideas that sound weird.

A fourth reason is that publishing in mainstream journals involves a pretty large delay in publication, somewhere between 4 months and 2 years.

So: If you’re a mainstream academic seeking tenure, publishing in mainstream journals is what you need to do, because that’s how the system is set up. If you’re trying to solve hard problems very quickly, publishing in mainstream journals can sometimes be something of a lost purpose.

If you’re trying to solve hard problems in mathematics and philosophy, why would you spend most of your limited resources tweaking sentences rather than getting the important ideas out there for yourself or others to improve and build on? Why would you accept delays of 4 months to 2 years?

At Singularity Institute, we’re not trying to get tenure. We don’t need you to have a Ph.D. We don’t care if you work at Princeton or at Brown Community College. We need you to help us solve the most important problems in mathematics, computer science, and philosophy, and we need to do that quickly.

That said, it will sometimes be worth it to develop a working paper into something that can be published in a mainstream journal, if the effort required and the time delay are not too great.

But just to drive my point home, let me read from the opening chapter of the new book Reinventing Discovery, by Michael Nielsen, the co-author of the leading textbook on quantum computation. It's a really great passage:

Tim Gowers is not your typical blogger. A mathematician at Cambridge University, Gowers is a recipient of the highest honor in mathematics, the Fields Medal, often called the Nobel Prize of mathematics. His blog radiates mathematical ideas and insight.

In January 2009, Gowers decided to use his blog to run a very unusual social experiment. He picked out an important and difficult unsolved mathematical problem, a problem he said he’d “love to solve.” But instead of attacking the problem on his own, or with a few close colleagues, he decided to attack the problem completely in the open, using his blog to post ideas and partial progress. What’s more, he issued an open invitation asking other people to help out. Anyone could follow along and, if they had an idea, explain it in the comments section of the blog. Gowers hoped that many minds would be more powerful than one, that they would stimulate each other with different expertise and perspectives, and collectively make easy work of his hard mathematical problem. He dubbed the experiment the Polymath Project.

The Polymath Project got off to a slow start. Seven hours after Gowers opened up his blog for mathematical discussion, not a single person had commented. Then a mathematician named Jozsef Solymosi from the University of British Columbia posted a comment suggesting a variation on Gowers’s problem, a variation which was easier, but which Solymosi thought might throw light on the original problem. Fifteen minutes later, an Arizona high-school teacher named Jason Dyer chimed in with a thought of his own. And just three minutes after that, UCLA mathematician Terence Tao—like Gowers, a Fields medalist—added a comment. The comments erupted: over the next 37 days, 27 people wrote 800 mathematical comments, containing more than 170,000 words. Reading through the comments you see ideas proposed, refined, and discarded, all with incredible speed. You see top mathematicians making mistakes, going down wrong paths, getting their hands dirty following up the most mundane of details, relentlessly pursuing a solution. And through all the false starts and wrong turns, you see a gradual dawning of insight. Gowers described the Polymath process as being “to normal research as driving is to pushing a car.” Just 37 days after the project began Gowers announced that he was confident the polymaths had solved not just his original problem, but a harder problem that included the original as a special case. He described it as “one of the most exciting six weeks of my mathematical life.” Months’ more cleanup work remained to be done, but the core mathematical problem had been solved.

That is what working for rapid progress on problems rather than for tenure looks like.

And here’s the kicker. We’ve already done this at Singularity Institute! This is what happened, though not quite as fast, when Eliezer Yudkowsky made a few blog posts about open problems in decision theory, and the community rose to the challenge, proposed solutions, and iterated and iterated. That work continued with a decision theory workshop and a mailing list that is still active, where original progress in decision theory is being made quite rapidly, and with none of it going through the hoops and delays of publishing in mainstream journals.

Now, I do think that Singularity Institute needs to publish more research, both in and out of mainstream journals. But most of what we publish should be blog posts and working papers, because our goal is to solve problems quickly, not to wait 4 months to 2 years to go through a mainstream publisher and garner tenure and prestige and so on.

That said, I’m quite happy when people do publish on these subjects in mainstream journals, because prestige is useful for bringing attention to overlooked topics, and because hopefully these instances of publishing in mainstream journals are occurring when it isn’t a huge waste of time and effort to do so. For example, I love the work being done by our frequent collaborators at the Future of Humanity Institute at Oxford, and I always look forward to what they're doing next.

Now, back to quartz's original question about rigorous research. I asked for clarification on what quartz meant, and here's what he said:

In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl's Causality, Sipser's Introduction to the Theory of Computation and MacKay's Information Theory, Inference, and Learning Algorithms. This is not going to happen if research of sufficient quality doesn't start soon.

Now, that sounds wonderful, and I agree that the community of researchers working to reduce existential risks, including Singularity Institute, will need to ramp up their research efforts to achieve that kind of goal.

I will offer just one qualification that I don't think will be very controversial. I think most people would agree that if a scientist happened to create a synthetic virus that was airborne and could kill hundreds of millions of people if released into the wild, we wouldn't want the instructions for creating that synthetic virus to be published in the open for terrorist groups or hawkish governments to use. And for the same reasons, we wouldn't want a Friendly AI textbook to explain how to build highly dangerous AI systems. But excepting that, I would love to see a rigorously technical textbook on friendliness theory, and I agree that friendliness research will need to increase for us to see that textbook be written in 15 years. Luckily, the Future of Humanity Institute is putting a special emphasis on AI risks for the next little while, and Singularity Institute is ramping up its own research efforts.

But the most important thing I want to say is this. If you can take ideas and arguments that already exist in blog posts, emails, and human brains (for example at Singularity Institute) and turn them into working papers or maybe even journal articles, and you care about navigating the Singularity successfully, please contact me. My email address is luke@intelligence.org. If you're that kind of person who can do that kind of work, I really want to talk to you.

I’d estimate we have something like 30-40 papers just waiting to be written. The conceptual work has been done; we just need more researchers who can write this stuff up. So if you can do that, you should contact me: luke@intelligence.org.

 

Friendly AI Sub-Problems

Next question. Less Wrong user ‘XiXiDu’ asks:

If someone as capable as Terence Tao approached [Singularity Institute], asking if they could work full-time and for free on friendly AI, what would you tell them to do? In other words, are there any known FAI sub-problems that demand some sort of expertise that [Singularity Institute] is currently lacking?

Terence Tao is a mathematician at UCLA who was a child prodigy and is considered by some people to be one of the smartest people on the planet. He is exactly the kind of person we need to successfully navigate the Singularity, and in particular to solve open problems in Friendly AI theory.

I explained in my text-only interview with Michael Anissimov in September 2011 that the problem of Friendly AI breaks down into a large number of smaller and better-defined technical sub-problems. Some of the open problems I listed in that interview are the ones I’d love somebody like Terence Tao to work on. For example:

How can an agent make optimal decisions when it is capable of directly editing its own source code, including the source code of the decision mechanism? How can we get an AI to maintain a consistent utility function throughout updates to its ontology? How do we make an AI with preferences about the external world instead of about a reward signal? How can we generalize the theory of machine induction — called Solomonoff induction — so that it can use higher-order logics and reason correctly about observation selection effects? How can we approximate such ideal processes such that they are computable?

(That was a quote from the text-only interview.)

But even before that, we’d really like to write up explanations of these problems in all their technical detail; again, though, that takes researchers and funding, and we’re short on both. For now, I’ll point you to Eliezer’s talk at Singularity Summit 2011, which you can Google for.

But yeah, we have a lot of technical problems whose nature we’d like to clarify so that we can have researchers working on them. So we do need potential researchers to contact us.

I loved watching Batman and Superman cartoons when I was a kid, but as it turns out, the heroes who can save the world are not those who have incredible strength or the power of flight. They are mathematicians and computer scientists. 

Singularity Institute needs heroes. If you are a brilliant mathematician or computer scientist and you want a shot at saving the world, contact me: luke@intelligence.org.

I know it sounds corny, but I mean it. The world needs heroes.

 

Improved Funding

Next, Less Wrong user ‘XiXiDu’ asks:

What would [Singularity Institute] do given various amounts of money? Would it make a difference if you had 10 or 100 million dollars at your disposal...?

Yes it would. Absolutely. If Bill Gates decided tomorrow that he wanted to save not just a billion people but the entire human race, and he gave us 100 million dollars, we would hire more researchers and figure out the best way to spend that money. That's a pretty big project in itself.

But right now, my bet on how we’d end up spending that money is that we would personally argue for our mission to each of the world’s top mathematicians, AI researchers, physicists, and formal philosophers. The Terence Taos and Judea Pearls of the world. And for any of them who could be convinced, we’d be able to offer them enough money to work for us. We’d also hire several successful Oppenheimer-type research administrators who could help us bring these brilliant minds together to work on these problems.

As nice as it is to have people from all over the world solving problems in mathematics, decision theory, agent architectures, and other fields collaboratively over the internet, there are a lot of things you can make move faster when you bring the smartest people in the world into one building and allow them to do nothing else but solve the world's most important problems.

 

Rationality

Next. Less Wrong user ‘JoshuaZ’ asks:

A lot of Eliezer's work has been not at all related strongly to FAI but has been to popularizing rational thinking. In your view, should [Singularity Institute] focus exclusively on AI issues or should it also care about rational issues? In that context, how does Eliezer's ongoing work relate to [Singularity Institute]?

Yes, it’s a great question. Let me begin with the rationality work.

I was already very interested in rationality before I found Less Wrong and Singularity Institute, but when I first encountered the arguments about intelligence explosion, one of my first thoughts was, “Uh-oh. Rationality is much more important than I had originally thought.”

Why? Intelligence explosion is a mind-warping, emotionally dangerous, intellectually difficult, and very uncertain field in which we don’t get to do a dozen experiments so that reality can beat us over the head with the correct answer. Instead, when it comes to intelligence explosion scenarios, in order to get this right we have to transcend the normal biases, emotions, and confusions of the human mind, and make the right predictions before we can run any experiments. We can’t try an intelligence explosion and see how it turns out.

Moreover, to even understand what the problem is, you’ve got to get past a lot of the usual biases and false but common beliefs. So we need a saner world to solve these problems, and we need a saner world to have a larger community of support for addressing these issues.

And, Eliezer’s choice to work on rationality has paid off. The Sequences, and the Less Wrong community that grew out of them, have been successful. We now have a large and active community of people growing in rationality and spreading it to others, and a subset of that community contributes to progress on problems related to AI. Even Eliezer’s choice to write a rationality fanfiction, Harry Potter and the Methods of Rationality, has — contrary to my expectations — had quite an impact. It is now the most popular Harry Potter fan fiction, I think, and it was responsible for perhaps ¼ or ⅕ of the money raised during the 2011 summer matching challenge, and has brought several valuable new people into our community. Eliezer’s forthcoming rationality books might have a similar type of effect.

But we understand that many people don’t see the connection between rationality and navigating the Singularity successfully the way that we do, so in our strategic plan we explained that we’re working to spin off most of the rationality work to a separate organization. It doesn’t have a name yet, but internally we just call it ‘Rationality Org.’ That way, Singularity Institute can focus on Singularity issues, and the Rationality Org (whatever it comes to be called) can focus on rationality, and people can support them independently. That’s something else Eliezer has been working on, along with a couple of others.

Of course, Eliezer does spend some of his time on AI issues, and he plans to return full-time to AI once Rationality Org is launched. But we need more talented researchers, and other contributions, in order to succeed on AI. Rationality has been helpful in attracting and enhancing a community that helps with those things.

 

Changing Course

Next. Less Wrong user ‘JoshuaZ’ asks:

...are there specific sets of events (other than the advent of a Singularity) which you think will make [Singularity Institute] need to essentially reevaluate its goals and purpose at a fundamental level?

Yes, and I can give a few examples that I wrote down.

Right now we’re focused on what happens when smarter-than-human intelligence arrives, because the evidence available suggests to us that AI will be more important than other crucial considerations. But suppose we made a series of discoveries that made it unlikely that AI would arrive anytime soon, but very likely that catastrophic biological terrorism was only a decade or two away, for example. In that situation, Singularity Institute would shift its efforts quite considerably.

Another example: If other organizations were doing our work, including Friendly AI, and with better efficiency and scale, then it would make sense to fold Singularity Institute and transfer resources, donors, and staff to these other, more efficient and effective organizations.

If it could be shown that some other process was much better at mobilizing efforts to address core issues, then focusing there for a while could make sense. For example, if Giving What We Can (an organization focused on optimal philanthropy) continues doubling each year and spinning off large numbers of skilled people to work on existential risk reduction (as one of the targets of optimal philanthropy), it might make sense to strip away outreach functions from [Singularity Institute], perhaps leaving a core FAI team, and leave outreach to the optimal philanthropy community, or something like that.

So, those are just three examples of ways that things could change, or discoveries we could make, that would radically shift Singularity Institute’s strategy.

 

Experimental Research

Next. User ‘XiXiDu’ asks:

Is [Singularity Institute] willing to pursue experimental AI research or does it solely focus on hypothetical aspects?

Experimental research would, at this point, be a diversion from work on the most important problems related to our mission, which are technical problems in mathematics, computer science, and philosophy. If experimental research becomes more important than those problems, and if we have the funding available to do experiments, we will do experimental research at that time, or fund somebody else to do it. But experiments aren’t the most important or most urgent work we need to do right now.

 

Winning Without Friendly AI

Next. Less Wrong user ‘Wei_Dai’ asks:

Much of [Singularity Institute’s] research [is] focused not directly on [Friendly AI] but more generally on better understanding the dynamics of various scenarios that could lead to a Singularity. Such research could help us realize a positive Singularity through means other than directly building a [Friendly AI].

Does [Singularity Institute] have any plans to expand such research activities, either in house, or by academia or independent researchers?

The answer to that question is 'Yes'.

Singularity Institute does not put all its eggs in the ‘Friendly AI’ basket. Intelligence explosion scenarios are complicated, the future is uncertain, and the feasibility of many possible strategies is unknown and uncertain. Both Singularity Institute and our friends at Future of Humanity Institute at Oxford have done quite a lot of work on these kinds of strategic considerations, things like differential technological development. It’s important work, so we plan to do more of it.

Most of this work, however, hasn’t been published. So if you want to see it published, put us in contact with people who are good at rapidly taking ideas and arguments out of different people's heads and putting them on paper. Or maybe you are that person! Right now we just don’t have enough researchers to write these things up as much as we'd like. So contact me: luke@intelligence.org.

 

Conclusion

Well, that’s it! I’m sorry I can’t answer all the questions. Doing this takes a lot more work than you might think, but if it is appreciated, and especially if it helps grow and encourage the community of people who are trying to make the world a better place and reduce existential risk, then I may try to do something like this — maybe without the video, maybe with the video — with some regularity.

Keep in mind that I do have a personal feedback form at tinyurl.com/luke-feedback, where you can send me feedback on myself and Singularity Institute. You can also check the Less Wrong page that will be dedicated to this Q&A and leave some comments there.

Thanks for listening and watching. This is Luke Muehlhauser, signing off.

124 comments

Comments sorted by top scores.

comment by bryjnar · 2011-12-10T14:05:38.590Z · LW(p) · GW(p)

How are you going to address the perceived and actual lack of rigor associated with [Singularity Institute]?

I upvoted this question originally, and while I appreciate your response, I don't feel you addressed what, for me, is the crux of the matter. If the SIAI is so focussed on "solving the most important problems in mathematics, computer science, and philosophy", then where is the progress?

The worry is that the SIAI is seen as somewhere where people pontificate endlessly about the problem, without actually doing useful work towards the solution. It is important to raise awareness of the dangers of an UFAI situation, but you're claiming that you also want the SIAI to be more than that.

But it's hard to take that seriously when there is so little evidence of problems actually getting solved, particularly the hard ones in mathematics and computer science. Eliezer's TDT draft is a step in the right direction, as it's at least evidence that some work is getting done, but it's the sort of thing I'd like to see much, much more of. In addition, it could do with tightening up, and I think the rigour of submitting it to an actual academic journal would be extremely helpful. Even if you don't want to do that, a public draft at least allows some kind of assessment of the work you're doing.

As for the philosophy, I think that's in better shape, but not an awful lot better. There's good material in the sequences, but at the end of the day they're a series of thoughtful blog posts, not a polished, well-structured series of arguments. The quality is better than some published philosophy, but that's not saying much. Again, I think the discipline required to shape some of the material up to get it published would be a good thing.

As long as the SIAI continues to not publish, or otherwise make available, credible documents indicating rigorous progress it is going to be perceived as lacking in rigour. And those of us who aren't privy to what is actually going on in there may worry that this indicates an actual lack of rigour.

Replies from: lukeprog, XiXiDu
comment by lukeprog · 2011-12-10T19:12:22.199Z · LW(p) · GW(p)

As long as the SIAI continues to not publish, or otherwise make available, credible documents indicating rigorous progress it is going to be perceived as lacking in rigour. And those of us who aren't privy to what is actually going on in there may worry that this indicates an actual lack of rigour.

I couldn't agree more.

This is why I talk almost non-stop within Singularity Institute about how we need to be publishing the research that we're doing. It's why I've been trying to squeeze in hours (around helping with the Summit and now being Executive Director) that allow me to author and co-author papers that summarize the current state of research, like 'The Singularity and Machine Ethics' and many others that are in progress: 'Intelligence Explosion: Evidence and Import', 'How to Do Research That Contributes Toward a Positive Singularity', and 'Open Problems in Friendly Artificial Intelligence'. Granted, only the last one could constitute significant research progress, but one reason it's hard to make research progress is that not even the basics have been summarized with good form and clarity anywhere, so I'm first working on these kinds of "platform" documents as enablers of future research progress.

My concern with showing the research that's going on is also why, in the video above, I repeatedly asked for people with experience writing up research papers to contact me.

Eliezer once wrote about how our lack of a PhD on staff and other common complaints didn't seem to be people's "true rejection" of Singularity Institute, but I think the "you don't publish enough research" is a pretty decent candidate for being many people's true rejection.

Believe me, few things would make me happier than having the resources to publish those 30-40 papers I talked about that are sitting in people's heads but not on paper.

Replies from: bryjnar
comment by bryjnar · 2011-12-10T21:47:47.986Z · LW(p) · GW(p)

So it sounds like your answer is: "Publishing research would help, and we're working on it."

That's great! It's just good that you've got a plan. After all, the question was "How are you going to address the perceived lack of rigour".

Replies from: lukeprog
comment by lukeprog · 2011-12-10T21:54:36.648Z · LW(p) · GW(p)

Correct!

comment by XiXiDu · 2011-12-10T14:51:44.618Z · LW(p) · GW(p)

Eliezer's TDT draft is a step in the right direction, as it's at least evidence that some work is getting done, but it's the sort of thing I'd like to see much, much more of.

Even if they were to make some actual progress, most of it would probably be regarded too dangerous to be released. Therefore I predict that you won't see much more of it ever.

There's good material in the sequences, but at the end of the day they're a series of thoughtful blog posts, not a polished, well-structured series of arguments. The quality is better than some published philosophy, but that's not saying much.

Indeed! Think about it this way: if Less Wrong had been around for 3000 years and the field of academic philosophy had been founded a few years ago, then most of it would probably be better than Less Wrong.

Replies from: bryjnar, lukeprog, Manfred, wedrifid
comment by bryjnar · 2011-12-10T21:56:48.879Z · LW(p) · GW(p)

Even if they were to make some actual progress, most of it would probably be regarded too dangerous to be released. Therefore I predict that you won't see much more of it ever.

I'm not sure how true this is, but suppose it is. Then it seems to me that the SIAI has got a problem. They need people to take them seriously, in order to attract funding and researchers, but they can't release any evidence that might make people take them seriously, as it's regarded as "too dangerous". Dilemma.

Secrecy and a perceived lack of rigour seem likely to go hand in hand. And for those of us outside the SIAI, who are trying to decide whether to take it seriously, said secrecy also makes it seem likely that there is an actual lack of rigour.

Perhaps this just demonstrates that any organization seriously aiming to make FAI has to be secretive, and hence have a bad public image. Which would be interesting. But in that case, the answer to the original question may just be: "We can't really, because it would be too dangerous", which would at least be something.

Indeed! Think about it this way: if Less Wrong had been around for 3000 years and the field of academic philosophy had been founded a few years ago, then most of it would probably be better than Less Wrong.

And perhaps, just perhaps, LW might have something to learn from that older sibling... I appreciate the desire to declare all past philosophy diseased and start again from nothing, but I think it's misguided. Even if you don't like much of contemporary philosophy, modern-day philosophers are often well-trained critical thinkers, and so a bit of attention from them might help shape things up a bit.

comment by lukeprog · 2011-12-10T19:00:56.735Z · LW(p) · GW(p)

Even if they were to make some actual progress, most of it would probably be regarded too dangerous to be released.

I'm not sure that "most of it" is too dangerous to be released. There is quite a lot of research that can be done in the open. If there wasn't, we wouldn't be trying to write a document like Open Problems in Friendly AI for the public.

Replies from: SilasBarta
comment by SilasBarta · 2011-12-13T16:19:38.513Z · LW(p) · GW(p)

You've managed to come up with excuses for not posting something as rudimentary as statistics that would substantiate your claims of success for rationality bootcamps.

"That would take too much time!" -> So a volunteer can do it for you. -> "But it's private so we can't release it." -> So anonymize it. -> "That takes too much work too." -> Um? -> "Hey, our alums dress nicely now, that should be enough proof."

Frankly, that doesn't bode well.

Replies from: dlthomas
comment by dlthomas · 2011-12-13T16:51:57.927Z · LW(p) · GW(p)

It seems that signaling rigor in hidden domains through a policy of rigor in open domains would be appropriate, and possibly sufficient. It may be expensive, but hopefully the domains addressed would still be of some benefit.

comment by Manfred · 2011-12-10T15:21:53.151Z · LW(p) · GW(p)

Even if they were to make some actual progress, most of it would probably be regarded too dangerous to be released. Therefore I predict that you won't see much more of it ever.

That seems unlikely - well, the being too dangerous, not sure about the regarding. The philosophy of digitizing human preferences seems particularly releasable to me, but depending on how you break the causes of unFAI into malice/stupidity, it can be a good idea to release pretty much anything that's easier to apply to FAI than to unFAI.

comment by wedrifid · 2011-12-10T15:12:10.610Z · LW(p) · GW(p)

Even if they were to make some actual progress, most of it would probably be regarded too dangerous to be released. Therefore I predict that you won't see much more of it ever.

I'd be surprised. There is plenty left that I would expect Eliezer to consider releaseable.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-10T16:05:45.035Z · LW(p) · GW(p)

There is plenty left that I would expect Eliezer to consider releaseable.

Carl Shulman wrote that Eliezer is reluctant to release work that he thinks is relevant to building AGI.

Think about his risk estimations of certain game and decision theoretic thought experiments. What could possibly be less risky than those thought experiments while still retaining enough rigor that one would be able to judge if actual progress has been made?

Replies from: wedrifid
comment by wedrifid · 2011-12-10T16:41:46.205Z · LW(p) · GW(p)

Carl Shulman wrote that he is reluctant to release work that he thinks is relevant to building AGI.

(Suggest substituting "Eliezer" for "he" in the above sentence.)

There is plenty of work that could be done and released that is not directly about AGI construction or the other few secrecy requiring areas.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-10T17:17:26.908Z · LW(p) · GW(p)

There is plenty of work that could be done and released that is not directly about AGI construction or the other few secrecy requiring areas.

Right, the friendly AI problem is incredibly broad. I wish there was a list of known problems that need to be solved. But I am pretty sure there is a large category of problems that Eliezer would be reluctant to even talk about.

comment by lukeprog · 2011-12-11T01:07:50.410Z · LW(p) · GW(p)

Ask and you shall receive.

Here is one of several emails I've now received in response to my repeated request that potential research collaborators contact me (quoted with permission):

My name is [name]. I am a first year student at [a university] majoring in pure math... I am rather intelligent; I estimate my score on the recent Putnam contest to be thirty, and the consensus is that the questions were of above average difficulty this year. I really care about the Singularity Institute's mission; I have been a utilitarian since age 11, before I knew that the idea had a name and I have cared about existential risk since at least age twelve, when I wrote a short piece on why prevention of the heat death was the greatest moral imperative for humankind (I had come up with the idea of what was essentially a Brownian ratchet years before I read the proof of the H-theorem showing the irreversible increase in entropy).

I want to help with the theory of friendly AI. I currently think that I could work directly on the problem but if my comparative advantage is elsewhere I would like to know that... I would be interested in participating in a rationality camp, the Visiting fellows program or anything else that could help the Singularity Institute.

Keep 'em coming, people!

comment by komponisto · 2011-12-10T22:18:38.844Z · LW(p) · GW(p)

For the love of the flying spaghetti monster, can you please, please stop saying "at Singularity Institute", "within Singularity Institute", et cetera?

As has been explained before, this is annoying, grating, and just plain goofy. It makes you sound like a fly-by-night commercial outfit run by people who don't quite speak English. In my estimation it's about 2:1 evidence that SI* is a scam.

Now, as you know, my prior on the latter hypothesis is pretty low. But this is nevertheless a serious issue. We're talking about how serious your organization sounds, at the 5-second level. And at this point it's also a meta-issue, having to do with whether you (all) listen to criticism. Because, in light of the discussion linked above, you would at the very least need a damn good reason to continue this practice in the face of some rather compelling criticism. As in, "we did a focus group study last year which showed that omitting the definite article would likely result in a 5% increase in donations". As far as I know, you have no such good reason. Indeed, the only reasoning anyone at SI* has offered for this at all is contained in a comment by Louie whose score is currently -9 (not by any accident).

[passage removed]

I could be convinced in the face of a sufficient display of (e.g.) marketing expertise (for example, a focus group study as mentioned above). But in this case, my position is well supported not only by data I provided but also by the agreement of other members of the LW community, as reflected in the voting patterns and other comments. And if Louie's comment is representative of SI's* actual reasoning on this matter, it frankly doesn't look like you people have a clue what you're doing.

* And I just want to emphasize, yet again, that I didn't write "the SI", despite the fact I would write "the Singularity Institute". This contrast is standard usage, and not in any sense contradictory!

Replies from: wedrifid, gwern, Jonathan_Graehl, jimrandomh, shokwave, lukeprog
comment by wedrifid · 2011-12-11T16:16:07.175Z · LW(p) · GW(p)

And at this point it's also a meta-issue, having to do with whether you (all) listen to criticism.

I have to confirm that this in particular is a significant issue. Until he redeemed himself Luke's reply had me updating towards writing him off as another person with too much status/ego to hear correctly.

comment by gwern · 2011-12-11T14:37:32.951Z · LW(p) · GW(p)

I don't think I have ever been so dismayed to see a comment at +15 and no less than 11 children comments. WTF, people.

A strong reaction from me on a language issue is significant Bayesian information.

BS. (Here, let me indulge in some anecdotage - 800 Verbal on the SAT etc, also what I would consider my greatest skill - and it doesn't bother me in the least. That cancel out your 'Bayesian information'? Good grief.)

Your entire comment is sheer pedantry of the worst kind, that I'd expect on Reddit and not LessWrong.

Replies from: JoshuaZ, wedrifid, komponisto
comment by JoshuaZ · 2011-12-12T04:35:48.565Z · LW(p) · GW(p)

For what it is worth, komponisto's basic point without the egotism is essentially correct. The dropping of the definite article sounds incredibly awkward and does signal either a scam or general incompetence. I don't understand what they are thinking. The self-congratulatory puffery that is the second half of the comment doesn't reduce the validity of the central point.

Replies from: komponisto
comment by komponisto · 2011-12-12T08:44:58.760Z · LW(p) · GW(p)

The self-congratulatory puffery that is the second half of the comment doesn't reduce the validity of the central point.

Said "puffery" has now been removed. My own mental context for those remarks was evidently quite different from that in which they were seen by others. (Though no one actually complained until gwern, quite a while after the comment was posted.)

Replies from: wedrifid
comment by wedrifid · 2011-12-12T17:29:06.670Z · LW(p) · GW(p)

Said "puffery" has now been removed. My own mental context for those remarks was evidently quite different from that in which they were seen by others. (Though no one actually complained until gwern, quite a while after the comment was posted.)

It is amazing how much difference one antagonistic reader can make to how a statement is interpreted by others. Apart from the priming it makes you a legitimate target.

Replies from: komponisto, gwern
comment by komponisto · 2011-12-12T21:18:01.611Z · LW(p) · GW(p)

It is amazing how much difference one antagonistic reader can make to how a statement is interpreted by others. Apart from the priming it makes you a legitimate target.

Quite so. This "bandwagon" behavior is disturbing, and has the unfortunate consequence of incentivizing one to reply to hostile comments immediately (instead of taking time to reflect), to fend off the otherwise inevitable karma onslaught.

comment by gwern · 2011-12-12T18:00:43.180Z · LW(p) · GW(p)

Yes, I found Asch's Conformity Experiment pretty amazing too.

comment by wedrifid · 2011-12-11T16:01:23.188Z · LW(p) · GW(p)

Your entire comment is sheer pedantry of the worst kind, that I'd expect on Reddit and not LessWrong.

I support the grandparent. Your condemnation here barely makes any sense and is unjustifiably rude.

I am rather shocked that kompo needed to make the comment. The subject had come up recently and more than enough explanation had been given to SIAI public figures of how to not sound ridiculous and ignorant while using the acronym.

Replies from: gwern, XiXiDu
comment by gwern · 2011-12-11T16:08:58.517Z · LW(p) · GW(p)

Logically ruder than claiming one's dislike is 'Bayesian evidence'? Since when do we dress up our linguistic idiosyncrasies in capitalized statistical drag? Is there any evidence at all that this is a meaningful change, that it really makes one sound 'ridiculous and ignorant'?

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2011-12-12T11:31:35.572Z · LW(p) · GW(p)

than claiming one's dislike is 'Bayesian evidence'?

Own dislike is clearly some evidence of others' dislike, the relevant question is how much evidence. Votes add more evidence.

comment by wedrifid · 2011-12-11T16:30:55.943Z · LW(p) · GW(p)

Logically ruder than claiming one's dislike is 'Bayesian evidence'?

  1. I said unjustifiably rude, not logically rude (although now you are being the latter as well).

  2. There was nothing logically rude about kompo claiming his own expertise as evidence. It does come across as somewhat arrogant and leaves kompo vulnerable to status attack by anyone who considers him presumptuous, but even if his testimony is rejected, "logical rudeness" still wouldn't come into it at all.

Since when do we dress up our linguistic idiosyncrasies in capitalized statistical drag?

Don't try to "dress up" corrections about basic misuse of English as personal idiosyncrasies of komponisto. He may care about using language correctly more than most but the usage he is advocating is the standard usage.

Replies from: gwern
comment by gwern · 2011-12-11T16:43:19.101Z · LW(p) · GW(p)

Don't try to "dress up" corrections about basic misuse of English as personal idiosyncrasies of komponisto. He may care about using language correctly more than most but the usage he is advocating is the standard usage.

Then you will easily be able to come up with citations from well-respected authoritative sources (eg. a nice long column from William Safire giving examples and explaining why it is bad) that it is correct.

comment by XiXiDu · 2011-12-11T16:28:00.143Z · LW(p) · GW(p)
  • The SIAI is located in the U.S. under the jurisdiction of the FBI.
  • SIAI is located in U.S. under the jurisdiction of FBI.
Replies from: komponisto, wedrifid
comment by komponisto · 2011-12-11T18:59:03.658Z · LW(p) · GW(p)

Neither. What you want is:

  • SIAI is located in the U.S., under the jurisdiction of the FBI.
comment by wedrifid · 2011-12-11T16:37:12.539Z · LW(p) · GW(p)

...the subject had come up recently and more than enough explanation had been given to SIAI...

When the entire point of quoting a statement is to question whether or not "the" should be used you can't go around truncating like that! (Are you being disingenuous or is that just a mistake?)

The subject had come up recently and more than enough explanation had been given to SIAI public figures of

Notice the difference in how an added 'the' would sound now?

Incidentally: Think "MIT" or "NASA" instead of "FBI".

Replies from: XiXiDu
comment by XiXiDu · 2011-12-11T17:24:24.200Z · LW(p) · GW(p)

(Are you being disingenuous or is that just a mistake?)

I have now removed the quote completely. I was planning on writing something else first that was more relevant to the quote. Sorry.

Incidentally: Think "MIT" or "NASA" instead of "FBI".

There might be some sort of rules that govern when it is correct to use "the" and when it is wrong. But ain't those rules fundamentally malleable by the perception of people and their adoption of those rules?

An interesting example is the German word 'Pizza' (which happens to mean the same as the English word, i.e. the Neapolitan cuisine). People were endlessly arguing about how the correct plural form of 'Pizza' is 'Pizzen'. Yet many people continued to write 'Pizzas' instead. What happened a few years ago is that the Duden (the prescriptive source for the spelling of German) included 'Pizzas' as a secondary but correct plural form of the word 'Pizza'.

So why did people ever bother to argue in the first place? German, or English for that matter, would never have evolved if, thousands of years ago, people had demanded that all language be frozen at that point in time and only the most popular spelling be regarded as correct.

Not that I have a problem with designing an artificial language or improving an existing language. Just some thoughts.

Replies from: komponisto
comment by komponisto · 2011-12-11T22:14:51.122Z · LW(p) · GW(p)

There might be some sort of rules that govern when it is correct to use "the" and when it is wrong.

The rules may not necessarily be simple, however. In the worst-case scenario, they may simply consist of lists of cases where it is one way and cases where it is the other.

(As you no doubt realize, the same issue also comes up in German: why is it "Deutschland, Österreich, und die Schweiz" instead of "Deutschland, Österreich, und Schweiz" or "das Deutschland, das Österreich, und die Schweiz"?)

But ain't those rules fundamentally malleable by the perception of people and their adoption of those rules?

Yes, and the exact same thing could be said about any human signaling pattern, not just those that concern language. But don't make the mistake of thinking that this is a Fully General Counterargument against any claim about the meaning of a particular signaling pattern in a particular context at a particular time.

It isn't as if everything eventually becomes accepted. Language changes, but it doesn't descend into entropy: in the future, there will still be patterns that are "right" and others that are "wrong", even if these lists are different from what they are now. Not only will some things that are "wrong" now become "right" in the future, but the reverse will also happen: expressions that are "right" now will become "wrong" later.

An interesting example is the German word 'Pizza' (which happens to mean the same as the English word, i.e. the Neapolitan cuisine). People were endlessly arguing about how the correct plural form of 'Pizza' is 'Pizzen'. Yet many people continued to write 'Pizzas' instead. What happened a few years ago is that the Duden (the prescriptive source for the spelling of German) included 'Pizzas' as a secondary but correct plural form of the word 'Pizza'.

From what I understand, linguists actually consider "-s" the regular manner of plural formation in modern German, despite the fact that only a minority of words use it, because it is the default used for new words. (So the dispute you mention is perhaps really about how "new" the word "Pizza" is felt to be.)

comment by komponisto · 2011-12-11T18:31:35.134Z · LW(p) · GW(p)

I'll return the favor and express my own dismay that the parent has been voted up to +3, while wedrifid's comments haven't been voted up to +10 where they deserve to be.

Your comment is sanctimony of the worst kind. Attempting to seize the "moral high ground" at the expense of someone who makes an honest expression of feeling is an all-too-familiar status strategy, and not one that earns any respect from me.

Ironically, the point about the typical mind fallacy, as expressed in Yvain's original post on it, applies with full force to the parent, insofar as you have apparently failed to grasp that others could be seriously bothered by something that doesn't bother you.

(I find it regrettable that I am in a hostile exchange with you, since I have found many of your writings here and on your own site interesting and valuable.)

Replies from: gwern
comment by gwern · 2011-12-11T23:33:48.141Z · LW(p) · GW(p)

I am being sanctimonious about your 'honest expression of feeling'? Let me quote from you again:

I'll let you in on a secret: this kind of stuff (intuiting whether an expression sounds right or wrong, linguistically) is a good candidate for my single greatest skill. (It's a bit embarrassing to admit this, because it's not the kind of skill that's useful for very much at the margin.) My ability in this area isn't perfect, but there is a reason I learned to speak five languages before I was fifteen (despite being an American raised in an exclusively English-speaking environment). A strong reaction from me on a language issue is significant Bayesian information...As has been explained before, this is annoying, grating, and just plain goofy. It makes you sound like a fly-by-night commercial outfit run by people who don't quite speak English. In my estimation it's about 2:1 evidence that SI is a scam...And if Louie's comment is representative of SI's actual reasoning on this matter, it frankly doesn't look like you people have a clue what you're doing.

You have gone way beyond an 'honest expression of feeling'. You have successively claimed arrogantly high linguistic abilities, abused badly important terminology worse than any post like 'Rational toy buying', you have directly condescended to Luke (who is a better writer than you, IMO, even if not fluent in X languages), you have claimed this tiny verbal distinction brings disrepute upon the SIAI and anything connected, called it evidence for a scam, and finish by insulting everyone involved who does not think as you do.

And I will note that despite a direct request to wedrifid for any random grammarian or language maven reference, none has been provided, despite the fact that you can find recommendations for and against any damn grammatical point (because there is no fact of the matter).

So not only are you engaged in ridiculous accusations on something that is manifestly not worth arguing about, you may not even be right.

Replies from: komponisto
comment by komponisto · 2011-12-12T03:05:46.743Z · LW(p) · GW(p)

I don't understand why you are seeking to escalate a conflict that I specifically tried to de-escalate above (see last sentence of grandparent).

I disagree with the above in the strongest possible terms, resent the insults and hostile tone, and take severe exception to the fallacious appeals to emotion, strawman arguments, and question-begging.

Point by point:

You have gone way beyond an 'honest expression of feeling'

No I have not. My original comment reflects my feelings entirely accurately. There is no posturing or exaggeration involved (for what purpose I can't even imagine). I said exactly what I thought, no more, no less. This statement of yours about my "going way beyond" is completely false on its face and must be interpreted as some kind of rhetorical way of saying that you are offended by how strongly I feel. If that was what you meant, that is what you should have said.

You have successively claimed arrogantly high linguistic abilities

I do not consider the level of linguistic ability I claimed to be "arrogantly high". Just high enough for me to be worth listening to, rather than ignored like I was the last time this issue came up. That was the context of this remark about linguistic ability (of which I had omitted all mention on the previous occasion). Note that "worth listening to" is not the same as "worthy of being unconditionally obeyed". Perhaps if I had claimed the latter, that would have been "arrogant". Note also that several specifically non-arrogant disclaimers were inserted: "It's a bit embarrassing to admit this..."; "My ability in this area isn't perfect..."; "it's overrideable". Apparently you didn't notice these, despite having quoted one of them yourself.

abused badly important terminology worse than any post like 'Rational toy buying'

Nonsense. You are free to disagree with my claims about whether X is Bayesian evidence of Y (I assume that is what you are referring to here), but the mere fact that you disagree with such a claim does not make the claim an abuse of terminology. An abuse of terminology would be if I used the term despite not actually meaning "X is more likely if Y is true than if Y is false"; but that is exactly what I meant above.

you have directly condescended to Luke (who is a better writer than you, IMO, even if not fluent in X languages)

Since I have NEVER said anything here about being a "better writer" than anyone else, this uninvited comparison is simply a gratuitous insult. That is NOT what we are talking about here. There are plenty of subskills that make up writing ability, and sensitivity to the kind of grammatical details that I am sensitive to is only one of them (and arguably quite far from the most important).

It's as if you inserted this in a deliberate attempt to signal hostility and offend me. You succeeded.

Also, I did not "condescend" to Luke. I regard Luke as a peer -- not someone of significantly higher or lower status -- and addressed him as such. (If anything, I sometimes feel that Luke is inappropriately condescending, although I think he has been making more effort to avoid this, for which he is to be commended.)

tiny verbal distinction

Here you beg the question. For me, it isn't "tiny". That is the whole point!

brings disrepute upon the SIAI and anything connected

False. I claim it brings "disrepute" (not my word) on SIAI itself. I didn't say anything about "anything connected". (It doesn't particularly bring disrepute upon Less Wrong, for example -- even though LW is clearly "connected" to SIAI.)

called it evidence for a scam

Yes. I expect scam organizations to be twice as likely to use "at X Institute" (instead of "at the X Institute") as non-scam organizations.

I take exception to the tactic of appealing to indignation at the word "scam" as if that were an argument against the factual anticipation I stated above.

and finished by insulting everyone involved who does not think as you do

Wrong again. That is simply not a correct characterization of what I wrote. I suggest you read it again, in context, and more charitably. The comment I referred to had in fact completely misunderstood what I was talking about and showed a decidedly superficial analysis of the issue on top of it. In such a context, it is completely appropriate to say "if [that] comment is representative of SI's actual reasoning on this matter, it frankly doesn't look like you people have a clue what you're doing". I further expect this is exactly what the SI staff would expect me to say if in fact it actually looked like that to me (which it did). It does not even address, let alone insult, "everyone who does not think as [I] do". For one thing, no one actually offered a contrary point of view; the only thing evinced was a lack of comprehension of the opinion and argument that I had expressed.

And I will note that despite a direct request to wedrifid for any random grammarian or language maven reference, none has been provided, despite the fact that you can find recommendations for and against any damn grammatical point (because there is no fact of the matter).

In my original comment on this issue I cited numerous Wikipedia articles illustrating the usage in question. You are free to use Google to find further confirmation. And (not without irony, since someone originally attempted to cite it against me), I can refer to this for explicit confirmation of the existence of the distinction we're talking about (between "strong" and "weak" proper nouns, in the terminology used there).

So not only are you engaged in ridiculous accusations on something that is manifestly not worth arguing about

Once again: completely begging the question. I will again refer you to Generalizing from One Example for a discussion of how something that seems insignificant to one person can be highly bothersome to another, a lesson which you have evidently failed to internalize.

you may not even be right

If you were interested in persuading me of this, you have chosen a completely wrong approach. In fact, you have potentially damaged my ability to form correct beliefs in the future, since there is now a feeling of negative affect -- perhaps even an ugh field -- attached to you in my mind, making me less likely to give proper consideration to any information or argument you may have to offer.

We were on good terms before, despite the occasional disagreement. If you thought I was actually wrong about something here (as opposed to being more inclined to notice and bothered by non-standard language patterns than you), would it have been that hard to simply present an argument?

(If anyone is tempted to suggest that the ancestor comment of this thread was hostile in a manner similar to the parent, not only do I disagree, but that isn't even the relevant comparison. My original comments on this topic were free of any sign of exasperation, yet ignored by Luke and other SI personnel, despite upvotes and verbalized agreement from others; hence the impatient tone of the ancestor.)

Replies from: drethelin, gwern, None
comment by drethelin · 2011-12-12T03:51:05.990Z · LW(p) · GW(p)

If you don't see what's wrong with claiming that your opinion on a linguistic matter is a basis for a significant Bayesian update, especially in the style that you did, then that significantly lowers any update I would make based on your communication skills. I strongly think that "the Singularity Institute" sounds better, but you're making me sad to agree with you.

Replies from: komponisto, komponisto
comment by komponisto · 2011-12-12T04:03:58.709Z · LW(p) · GW(p)

This is a cheap shot.

(1) I have not made any claim to superior "communication skills". Those are highly complex and involve many smaller abilities. The most I did was make a claim to (a certain kind of) superior language skills in order to draw attention to an explicit argument I had given that had been ignored.

(2) Compare the following:

If you don't see what's wrong with claiming that your opinion on a [insert adjective] matter is a basis for a significant Bayesian update...

For what class of adjective do you regard this as a general template for a sound argument?

Replies from: drethelin
comment by drethelin · 2011-12-12T04:22:58.912Z · LW(p) · GW(p)

I'll let you in on a secret. IN THE STYLE YOU DID was a part of what I said, and it was an important part. Claiming to be wise enough that what you think should make other people significantly change their point of view is OBVIOUSLY arrogant. What is so hard to understand about that? Adding lines like "I'll let you in on a secret" makes you come off significantly worse. Your style of communication is dismissive of any contrary opinion, insulting, and ridiculously pompous. If you can't see this, my opinion of your language skills HAS to go down, since they are a subset of the ability to understand communication. Your dislike of "Singularity Institute" is clearly based on what you think that phrasing communicates, and yet you can't seem to understand why people might dislike your own communications.

The class of adjective is irrelevant. What's wrong with that claim is not whether or not it is true or useful, but how well it persuades. And a flat statement saying you should update on my beliefs, when we are specifically talking about whether to update beliefs based on how something is said, is unconvincing and annoying.

Replies from: komponisto, komponisto
comment by komponisto · 2011-12-12T05:40:06.306Z · LW(p) · GW(p)

I have now edited the comment, removing what I understand to have been the most offensive passage.

comment by komponisto · 2011-12-12T04:53:09.607Z · LW(p) · GW(p)

Thank you for the feedback. Let me now try to reply to some of your points, in order to help you and anyone else reading better understand where I am coming from. (I don't intend these replies as rejections of the information you've offered about your own perspective.)

Claiming to be wise enough that what you think should make other people significantly change their point of view is OBVIOUSLY arrogant. What is so hard to understand about that?

I was only claiming to be "wise enough" to have my point of view taken into account. Not all Bayesian updates are large updates! Now, of course, in this particular case, I did think a large update was warranted; but I didn't expect that large update to be made on the basis of my authority, I expected it to be made on the basis of my arguments.

"I'll let you in on a secret" makes you come off significantly worse

That seems bizarre, unless you interpreted it as sarcasm. But it wasn't sarcasm: I spelled out in the next sentence that I was actually embarrassed to be making the admission!

Another strange thing about the reaction to this is that I didn't actually claim my "single greatest skill" was all that great. I just said it was the greatest skill I had. It could perhaps be quite bad, with all the other skills simply being even worse. The only comparison was with my own other skills, not the skills of other people.

What I was saying was "if you ever listen to me on anything, listen to me on this!".

And a flat statement saying you should update on my beliefs

This feels to me like I'm being interpreted uncharitably. My statement was highly specific and limited in scope. It was not in any sense a "flat" statement; it was fairly narrowly circumscribed.

Replies from: prase
comment by prase · 2011-12-12T14:23:59.330Z · LW(p) · GW(p)

"I'll let you in on a secret" makes you come off significantly worse

That seems bizarre, unless you interpreted it as sarcasm.

A data point: it doesn't seem bizarre to me. Whether I interpret it as (a specific type of) sarcasm I'm not sure. Sarcasm needn't hinge only on the contradiction between the literal and factual meaning of "secret"; it can also hinge on the contradiction between a relatively familiar / seemingly friendly phrase and the general expression of disagreement.

Replies from: komponisto
comment by komponisto · 2011-12-12T21:10:41.479Z · LW(p) · GW(p)

the contradiction between a relatively familiar / seemingly friendly phrase and the general expression of disagreement.

The phrase was intended to be friendly, precisely in order to mitigate the general expression of disagreement!

Replies from: Emile
comment by Emile · 2011-12-15T11:41:16.122Z · LW(p) · GW(p)

Data point: it didn't come off that way to me either; I found that it sounded condescending.

I agree that "at Singularity Institute" sounds weird, but I also know that judgement on what sounds weird or what connotations come up - including things like "I'll let you in on a secret" - vary a lot from person to person, even among people from the same language and country and background.

comment by gwern · 2011-12-15T02:24:16.094Z · LW(p) · GW(p)

(I find it regrettable that I am in a hostile exchange with you, since I have found many of your writings here and on your own site interesting and valuable.)...(see last sentence of grandparent)....If you were interested in persuading me of this, you have chosen a completely wrong approach. In fact, you have potentially damaged my ability to form correct beliefs in the future, since there is now a feeling of negative affect -- perhaps even an ugh field -- attached to you in my mind, making me less likely to give proper consideration to any information or argument you may have to offer. We were on good terms before, despite the occasional disagreement. If you thought I was actually wrong about something here (as opposed to being more inclined to notice and bothered by non-standard language patterns than you), would it have been that hard to simply present an argument?

Irrelevant to me. A bad comment is a bad comment. Our past and future interactions do not matter to me. To the extent I comprehend our interaction, it is me commenting on and discussing your Knox materials and you silently reading whatever you read of me; even if I were selfishly concerned about future interactions, I doubt I would value them very much - you will continue to discuss Knox or not regardless of whether you are angry with me.

If you really do form an ugh-field just over this discussion, you should work on that. Bad habit to have.

This statement of yours about my "going way beyond" is completely false on its face and must be interpreted as some kind of rhetorical way of saying that you are offended by how strongly I feel. If that was what you meant, that is what you should have said.

You stand by everything you said, the personal attacks and absurd inferences, and feel this is perfectly honest? That this violates no LW norms of communication? That all this is perfectly acceptable? You feel that there is no problem with saying all that, because hey, you actually thought it?

WE ARE NOT OPERATING ON CROCKER'S RULES.

I will repeat this; we operate on a number of norms where we do not accuse, in an inflammatory way, someone of making the SIAI look like incompetent crooks simply because we 'feel honestly' this way.

WE ARE NOT OPERATING ON CROCKER'S RULES.

Some of us do, but not lukeprog or anyone I've noticed in these threads.

I do not consider the level of linguistic ability I claimed to be "arrogantly high". Just high enough for me to be worth listening to, rather than ignored like I was the last time this issue came up...My original comments on this topic were free of any sign of exasperation, yet ignored by Luke and other SI personnel, despite upvotes and verbalized agreement from others; hence the impatient tone of the ancestor.

Interesting that you were ignored, you say. I wonder why you weren't ignored this time? Gee, maybe it has something to do with how you expressed it this time?

But no, you were merely honestly expressing your feelings! (I guess you were being dishonest last time, since I don't see any other way to differentiate the two posts.)

Note also that several specifically non-arrogant disclaimers were inserted: "It's a bit embarrassing to admit this..."; "My ability in this area isn't perfect..."; "it's overrideable". Apparently you didn't notice these, despite having quoted one of them yourself.

'I could be mistaken, and my ability in this area isn't perfect (embarrassingly), but I think your mother is a whore.' You're offended? But I just included 3 disclaimers that you blessed as effective!

Lamentably, disclaimers no longer work in English due to abuse. I believe Robin Hanson has written some interesting things on disclaimers. If you are not speaking in a logical or mathematical mode, don't expect disclaimers to be magic pixy dust which will instantly dispel all problems with your statements. If you didn't believe them as stated, why did you write them? Honest expression of feelings, right?

You are free to disagree with my claims about whether X is Bayesian evidence of Y (I assume that is what you are referring to here), but the mere fact that you disagree with such a claim does not make the claim an abuse of terminology.

This post is Bayesian evidence of you being a murderer, because murderers are low on Agreeableness, which correlates with arguing online.

Without any evidence, any framework, or any of what passes for genuine investigation as opposed to 'I don't like it!', to pull out 'Bayesian evidence' is to dress up what is epsilon evidence (charitably) in euphuistic garb to impress readers. It is technical jargon out of place, ripped from its setting for your rhetorical purpose, and worse than Behe defending creationism with information theory twisted into meaninglessness, because you know better.

Since I have NEVER said anything here about being a "better writer" than anyone else, this uninvited comparison is simply a gratuitous insult. That is NOT what we are talking about here. There are plenty of subskills that make up writing ability, and sensitivity to the kind of grammatical details that I am sensitive to is only one of them (and arguably quite far from the most important).

The point is someone who is an inferior writer has little prior credibility when they claim superiority in any subskill.

Yes. I expect scam organizations to be twice as likely to use "at X Institute" (instead of "at the X Institute") as non-scam organizations. I take exception to the tactic of appealing to indignation at the word "scam" as if that were an argument against the factual anticipation I stated above.

This faux precision is hilarious. This is like reporting figures to 10 significant digits. How could you possibly give the likelihood ratio even to within an order of magnitude?

Seriously, how could you? Do you keep lists of legitimate organizations which make you linguistically flip out vs scam organizations which make you linguistically flip out?

Is there a study somewhere I am unaware of which classifies a large sample of organizations by their fraudulence and discusses signs by likelihood ratio which you have been consulting?

I'm dying to know where this 'twice' comes from. After all, you've been insisting on oh-so-precise interpretations of everything you said, from 'honest feeling' to 'Bayesian evidence' to your disclaimers; surely you didn't slip on this fascinating claim. What evidence is your 'factual anticipation' based on?

In my original comment on this issue I cited numerous Wikipedia articles illustrating the usage in question. You are free to use Google to find further confirmation. And (not without irony, since someone originally attempted to cite it against me), I can refer to this for explicit confirmation of the existence of the distinction we're talking about (between "strong" and "weak" proper nouns, in the terminology used there).

You're citing non-SIAI examples as proof of your thesis, while acknowledging that there is an entire common class of names where your thesis is outright false? With no evidence as to which class is applicable - which is the entire question? Now who is begging the question?

It's as if people are being deliberately mischievous by writing both "the SIAI" (which should be "SIAI"), and on the other hand, "Singularity Institute" (which should be "the Singularity Institute").

Gee, it's almost as if there is no fact of the matter and neither is right or wrong. How strange.

Replies from: wedrifid
comment by wedrifid · 2011-12-15T06:33:23.568Z · LW(p) · GW(p)

Irrelevant to me. A bad comment is a bad comment. Our past and future interactions do not matter to me. To the extent I comprehend our interaction, it is me commenting on and discussing your Knox materials and you silently reading whatever you read of me; even if I were selfishly concerned about future interactions, I doubt I would value them very much - you will continue to discuss Knox or not regardless of whether you are angry with me.

You are aggressively and publicly trolling a prominent member when he is not being hostile. You should not anticipate the negative consequences of that to be limited to his own perception. You seem to be willfully sabotaging your own reputation. I don't understand why.

You stand by everything you said, the personal attacks and absurd inferences, and feel this is perfectly honest? That this violates no LW norms of communication? That all this is perfectly acceptable? You feel that there is no problem with saying all that, because hey, you actually thought it?

He didn't do anything of the sort.

WE ARE NOT OPERATING ON CROCKER'S RULES.

Which seems to be applicable to you, and not kompo at all.

I will repeat this; we operate on a number of norms where we do not accuse, in an inflammatory way, someone of making the SIAI look like incompetent crooks simply because we 'feel honestly' this way.

Saying that a particular behavior gives a terrible signal is not a personal attack. The following, what kompo actually said, is not a norm violation:

As has been explained before, this is annoying, grating, and just plain goofy. It makes you sound like a fly-by-night commercial outfit run by people who don't quite speak English. In my estimation it's about 2:1 evidence that SI* is a scam.

Replies from: None, gwern
comment by [deleted] · 2011-12-15T07:35:57.013Z · LW(p) · GW(p)

You seem to be willfully sabotaging your own reputation.

I don't know. I gained more respect for gwern after reading his comment.

Replies from: wedrifid
comment by wedrifid · 2011-12-15T07:50:48.649Z · LW(p) · GW(p)

I don't know. I gained more respect for gwern after reading his comment.

Pardon me: "... with the obvious exception of the other person who has also been heavily downvoted for abusing komponisto in the same context"

Replies from: None
comment by [deleted] · 2011-12-15T08:04:55.659Z · LW(p) · GW(p)

What's abusive about it? It seemed to me like a straightforward error, but the presentation was admittedly bad. I was tired and possibly inebriated; so it goes. Nobody lost many hedons over it.

On the gripping hand, gwern doesn't even talk about komponisto's tacit conflation of karma with correctness, or that of total karma with total number of people approving. I don't even agree with gwern on the issue at hand, as I said before.

I gained respect for him because it takes a great deal of nerve to write such a thing, and I think that's admirable. Or maybe my model of gwern is more accurate than yours? I don't know.

EDIT: Rereading that thread, I notice drethelin did succeed in convincing komponisto of a related point. As I expected, it took more writing than I was interested in doing at the time. Props to it, as well.

comment by gwern · 2011-12-15T17:23:10.670Z · LW(p) · GW(p)

You are aggressively and publicly trolling a prominent member when he is not being hostile. You should not anticipate the negative consequences of that to be limited to his own perception. You seem to be willfully sabotaging your own reputation. I don't understand why.

For the same reason people in other articles rail against the 'Rational Xing' meme - because komponisto's sort of comment is the sort of thing I do not want to see spread at all. I do not want to see people browbeating lukeprog or anyone with wild claims about their unproven opinion being 'Bayesian evidence', or all the other pathologies and dark arts in that comment which I have pointed out.

If I fail to convince people as measured by karma points, well, whatever. You win some and you lose some - for example, I was expecting my last comment attacking the Many Worlds cultism here to be downvoted, but no, it was highly upvoted. As they say about real karma, it balances out.

If my reputation is damaged by this, well, whatever. Whatever can be destroyed by the truth should be, no? I think I am right here and if I do not give an 'honest expression of my feelings', I am manipulating my reputation. And if it is so flimsy a thing that a small flamewar over one of the obscurest grammatical points I have seen can damage it, then it wasn't much of a reputation at all and I shouldn't engage in sunk cost fallacy about it.

He didn't do anything of the sort.

Ah, an excellent reply. To many, many questions - 'no'. I see.

Which seems to be applicable to you, and not kompo at all.

Tu quoque!

Saying that a particular behavior gives a terrible signal is not a personal attack. The following, what kompo actually said, is not a norm violation:

Yeah, whatever. I already dealt with this BS with the disclaimers and other stuff.

By the way, komponisto has not produced the slightest shred of evidence for that ratio. Is 'making stuff up' not a norm violation on LW these days?

And by the way, you haven't provided any citations for the linguistic point in contention, despite my direct unambiguous challenge several days ago.

How many times will I have to ask you and komponisto about this before you finally dig up something - an Internet grammarian or anything saying you are right about how to refer to the SIAI and its myriad connexions? I think this makes 4, which alone earns the two of you my downvotes.

Replies from: wedrifid
comment by wedrifid · 2011-12-15T17:59:36.986Z · LW(p) · GW(p)

And by the way, you haven't provided any citations for the linguistic point in contention, despite my direct unambiguous challenge several days ago.

I most certainly haven't. The "challenge" in question was a logically rude - and blatantly disingenuous - attempt to spin the context such that I am somehow obliged to provide citations or else your accusation that komponisto is "dressing up [his] linguistic idiosyncrasies in capitalized statistical drag" is somehow valid - rather than totally out of line. I am actually somewhat proud that, after I wrote a response to that comment at the time you made it, I discarded it rather than replying - there wasn't anything to be gained and so ignoring it was the wiser course of action.

I was also pleasantly surprised that the community saw through your gambit and downvoted you to -4. In most environments that would have worked for you - people usually reward clever use of spin and power moves like that, yet here it backfired.

Replies from: gwern
comment by gwern · 2011-12-15T20:46:27.796Z · LW(p) · GW(p)

The "challenge" in question was a logically rude - and blatantly disingenuous - attempt to spin the context such that I am somehow obliged to provide citations or else your accusation that komponisto is "dressing up [his] linguistic idiosyncrasies in capitalized statistical drag" is somehow valid

If his preference is only his preference, why do we care? We should do nothing to cater to one person's linguistic whims.

If we care because his preference may be shared by the LW community, 10 or 15 upvotes are not enough to indicate a community-wide preference, and likewise nothing should be done.

If we care because his preference is descriptively correct and common across many English-speaking communities beyond LW, then a failure to provide citations is a failure to provide proof, and likewise nothing should be done.

I was also pleasantly surprised that the community saw through your gambit and downvoted you to -4. In most environments that would have worked for you - people usually reward clever use of spin and power moves like that, yet here it backfired.

This is another kind of comment I dislike.

Karma should be discussed as little as possible. Goodhart's law, people! The more you discuss karma and even give it weight, the more you destroy any information it was conveying previously. Please don't do that; I like being able to sort by karma and get a quick ranking of what comments are good.

Replies from: None
comment by [deleted] · 2011-12-15T21:28:24.921Z · LW(p) · GW(p)

[...] 10 or 15 upvotes are not enough to indicate a community-wide preference [...]

They aren't? I perceive that as a fairly large score and practically the second-highest range a comment ever gets, short of the >40 karma of a particularly clever pun or Yvain comment. (That doesn't justify catering to the whim, but I'd take it seriously at least.)

Replies from: gwern
comment by gwern · 2011-12-15T21:35:32.586Z · LW(p) · GW(p)

This is a buried* thread on a Discussion page; the top comment is now down from the cited 10 or 15 upvotes to just +7 (and my first critical comment is currently at +6); and no one comes to a page on a lukeprog video because they want to weigh in on the burning issue of using 'the'. The people discussing are not a random subset of the community, even if one wanted to argue that the votes were in favor, so there's that too.

If this were written up as, say, a front-page Article, I have no idea what the overall reaction would be, because there are all those other factors destroying our ability to extrapolate from this little flamewar to LW in general.

* I take that back: it was buried, but apparently my comments have gotten enough upvotes to be unhidden again.

comment by [deleted] · 2011-12-12T04:26:12.971Z · LW(p) · GW(p)

As someone else who regrettably shares your position, allow me to point out:

And...my comments were highly upvoted. That should take considerable force out of any accusation of "arrogance". I am representing a point of view that quite a number of people share. What do you make of this?

"Highly upvoted"? Ten or twenty anonymous people agreeing with you is hardly exceptional. I could find ten or twenty people that agree the Earth is flat and the clouds are made of fire. Or, for more LW flavor, I can find ten or twenty heavily committed deists.

Unpopular ideas can be right; popular ideas can be wrong. I had thought the water really was up to my ankles, but I guess it isn't, yet.

Replies from: steven0461, komponisto
comment by steven0461 · 2011-12-12T06:38:36.517Z · LW(p) · GW(p)

At the risk of stating the obvious, the opinion of random LessWrongers on some position carries information that the opinion of people hand-picked for agreement with the position does not.

Replies from: None
comment by [deleted] · 2011-12-12T06:46:08.366Z · LW(p) · GW(p)

Karma is far too noisy a channel to communicate anything as subtle as the aggregate opinion on a comment.

comment by komponisto · 2011-12-12T04:28:44.986Z · LW(p) · GW(p)

"Highly upvoted"? Ten or twenty anonymous people agreeing with you is hardly exceptional.

Actually it is. I don't usually get upvoted to those levels. Something was different about those comments.

(EDIT: Nonetheless, I have edited that sentence out.)

Unpopular ideas can be right; popular ideas can be wrong. I had thought the water really was up to my ankles, but I guess it isn't, yet.

I agree with the first sentence; I don't understand the second.

Replies from: steven0461, None
comment by steven0461 · 2011-12-12T06:41:22.417Z · LW(p) · GW(p)

I think it's a reference to the sanity waterline.

comment by [deleted] · 2011-12-12T04:40:28.336Z · LW(p) · GW(p)

You don't act like you agree with the first sentence, so I'm at a loss. I didn't really expect to change your mind, in any case.

Replies from: wedrifid
comment by wedrifid · 2011-12-12T07:14:25.887Z · LW(p) · GW(p)

You don't act like you agree with the first sentence, so I'm at a loss. I didn't really expect to change your mind, in any case.

You didn't expect basically trolling him to change his mind? Good prediction.

Replies from: None
comment by [deleted] · 2011-12-12T07:38:09.494Z · LW(p) · GW(p)

There are far more eloquent people in this thread arguing other things at far more length, and it seems they have far more stamina for that kind of thing than I do.

Am I wrong, though?

comment by Jonathan_Graehl · 2012-10-04T22:07:15.084Z · LW(p) · GW(p)

I also mildly agree with using a determiner for names of organizations that end in "Company", "Institute", "Organization", etc., and also don't mind treating the acronym version as you would without knowing the expansion.

I don't think it's a full bit of scam-signal, though.

Some weak (top prescriptivist result in Google) evidence: http://writing.umn.edu/sws/assets/pdf/quicktips/articles_proper.pdf (although it contradicts komponisto and me in advising that determiner choice should be the same as the expanded version, while simultaneously advising "an SDMI" and not "a SDMI", presumably because, read aloud, "an S" is "an ess").

comment by jimrandomh · 2011-12-15T20:19:32.026Z · LW(p) · GW(p)

Can you please stop saying "at Singularity Institute", "within Singularity Institute", et cetera? As has been explained before, this is annoying, grating, and just plain goofy. It makes you sound like a fly-by-night commercial outfit run by people who don't quite speak English.

Actually, I think this is a linguistic corner case in whether you ought to use the word "the", and some speakers/dialects will fall on either side. Consider:

She works at the institute.
* She works at institute.
She works at SingInst.
* She works at the SingInst.
? She works at the Singularity Institute.
? She works at Singularity Institute.

(* denotes a sentence that is incorrect to all speakers, and ? denotes a sentence that is incorrect to some speakers but not all.)

If Singularity Institute parses as a modified noun, then it should have an article. If it parses as a name, then it shouldn't. You can force it to be a name by either compressing it into something that isn't a regular word (SingInst), or by adding something that's incompatible with regular words. Compare:

He will attend the Singularity Summit.
? He will attend Singularity Summit.
He will attend Singularity Summit 2012.
* He will attend the Singularity Summit 2012.

And that's the entire fact of the matter. From a linguistics perspective, whether a sentence is grammatically correct or incorrect depends solely on the intuition of native speakers; and if native speakers disagree, then it must be a dialect difference. Arguing what is "correct" in a speaker-independent sense is meaningless and unproductive.

comment by shokwave · 2011-12-15T09:21:38.795Z · LW(p) · GW(p)

The recent attention on this discussion compels me to point out that

It makes you sound like a fly-by-night commercial outfit run by people who don't quite speak English.

absolutely does not follow from

Now, as you know, my prior on the latter hypothesis is pretty low

at all.

Like, "time to question whether you are intimately familiar with Bayes Thereom".

I assumed you were, because you spoke of evidence likelihoods and Bayesian evidence in favour of propositions: but now I fear those are just locally high-status words you were using, because when you take a low prior and update on 2:1 evidence you are left with a low prior.

And if you have a low prior for it being a scam, you don't embellish on it being a scam!

I am reminded of the double illusion of transparency. I assumed when people talked about Bayesian evidence they had done calculations.
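(To spell out the arithmetic being gestured at here: the 2:1 likelihood ratio is komponisto's figure, while the 1-in-1000 prior below is purely illustrative and not anyone's stated number.

\[
\frac{P(\text{scam}\mid\text{usage})}{P(\neg\text{scam}\mid\text{usage})}
= \frac{P(\text{scam})}{P(\neg\text{scam})} \cdot \frac{P(\text{usage}\mid\text{scam})}{P(\text{usage}\mid\neg\text{scam})}
= \frac{1}{999} \cdot 2 \approx \frac{1}{500},
\]

i.e. a posterior of roughly 0.2%. A 2:1 likelihood ratio is \(\log_2 2 = 1\) bit of evidence, so a low prior stays low after the update.)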

Replies from: wedrifid
comment by wedrifid · 2011-12-15T11:19:42.137Z · LW(p) · GW(p)

absolutely does not follow from

You seem to be confused. It isn't supposed to follow - it is meant as a contrast! Kompo estimated that 1 bit of evidence of crackpotness is embedded in prominent misuse of language. He then reaffirms that despite this he isn't saying that singinst is a crackpot institute... that is what declaring a one-bit update on a very low prior means, and there is no evidence suggesting that kompo intended anything else. He is making a general gesture of respect to the institute so it is clear that he isn't using this issue as an excuse to insult the institute itself.

I assumed you were, because you spoke of evidence likelihoods and Bayesian evidence in favour of propositions: but now I fear those are just locally high-status words you were using, because when you take a low prior and update on 2:1 evidence you are left with a low prior.

He knows this, has used Bayesian reasoning correctly in his past posts, and has not made a mistake here.

comment by lukeprog · 2011-12-10T22:29:32.270Z · LW(p) · GW(p)

I think you're making too much of this. Don't assume that just because it's grating on you, it must be grating on most people. There is a reason "The Facebook" became "Facebook."

Replies from: komponisto, None
comment by komponisto · 2011-12-10T23:21:01.986Z · LW(p) · GW(p)

Did you read the linked discussion?

I made no assumption of the sort you describe. I said that my reaction was significant Bayesian information (while admitting that it was potentially "overrideable"). I also pointed out that it was shared by other people. So there is no appeal to the typical mind fallacy. Furthermore, it doesn't need to be grating to "most people" to be inadvisable; it just needs to sound wrong, or pattern-match to the wrong reference class of organization.

There is a reason "The Facebook" became "Facebook."

Whenever two rationalists disagree, they should ask themselves which of them has the important information that the other doesn't. Let's try to apply that here. Which do you think is more likely: your noticing a usage pattern that I haven't noticed, or my noticing a pattern that you haven't noticed?

First, I'll look at it from your standpoint. Facebook is a pretty well-known company, especially to people who are active on internet forums. So I think you should have assigned a high probability to my being aware of the way Facebook is referred to, and to having taken it into account in formulating my opinion. In other words, I don't see why you should have any reason to expect that pointing out that the company isn't named "The Facebook" (and nor does one speak of logging onto "the Facebook") would have conveyed new information to me, such that my opinion should be updated.

Now, on the other hand, let's look at it from my standpoint. In the linked thread, I had specifically discussed the fact that different proper nouns are treated differently with respect to article usage, and I had made a specific comparison to similar organizations with similar names (nonprofits with "Institute" or a similar word in their name). In particular, I had even dwelt on the fact that an initialism for an organization may be treated differently from the full name of the same organization; thus "MSRI", but "the Mathematical Sciences Research Institute", and numerous others. Yet this point was completely missed by Louie, an important SI staff member, who apparently thought I was advocating for "the SI", XiXiDu-style. Thus I have precedent for the notion that intelligent people at SI are capable of not only entirely failing to comprehend my point, but of failing to notice standard English usage patterns that they themselves would (I suppose unconsciously) apply in numerous situations. This raises my probability that your comment likewise falls into this category of your not getting something that I get (rather than the reverse). When you combine this with the fact that you yourself have an independent history of failing to read comments, you can kind of see where I come out here: it looks to me like you simply weren't aware of the level of discrimination in my discussion, which would put SI in a different reference class from Facebook.

Is this analysis wrong? I currently believe that if you saw language patterns at the same "level of resolution" that I do, you would update in my direction. Is there something I'm not taking into account such that, if I knew it, I would update in your direction?

ETA: Whoa! I just noticed the voting on the parent and grandparent: -6 and +8 respectively, within a matter of minutes. Luke, this is information. Update on it!

Replies from: lukeprog, steven0461, XiXiDu
comment by lukeprog · 2011-12-10T23:35:42.093Z · LW(p) · GW(p)

I made no assumption of the sort you describe.

Okay. Sorry to have misinterpreted you.

Your analysis in this comment looks correct to me. I hadn't read this comment or this comment until now. My policy of not reading most LW comments is one thing that allows me to get so much other stuff done (there are enough people reading LW comments, not enough people writing research papers), but it does mean that I sometimes miss more informative comments like the ones I just linked to.

I will also say that your analysis in those two comments looks correct to me. Unless given reason to do otherwise, I'll try to start saying "at SI" but also "at the Singularity Institute."

Replies from: komponisto
comment by komponisto · 2011-12-10T23:52:00.765Z · LW(p) · GW(p)

I've just made a $50 donation to the Institute in question. That kind of updating speed deserves an equally quick reward.

Replies from: lukeprog, lukeprog
comment by lukeprog · 2011-12-10T23:58:15.583Z · LW(p) · GW(p)

Thanks!

Now keep an eye on me so you can make sure I'm not just signaling rationality but actually changing my behavior. :)

Replies from: komponisto
comment by komponisto · 2011-12-11T00:01:51.273Z · LW(p) · GW(p)

Oh don't worry, I will. :-)

comment by lukeprog · 2011-12-16T14:53:55.679Z · LW(p) · GW(p)

I do wish I was able to find an official style guide that made this point clear, though. Do you know of one? I couldn't find it in the Chicago guide. It's sort of explained here, though it doesn't make the detailed claims that you do. I've switched my practice mostly because my intuitions agreed with yours when I read your comments, but that could be just status quo bias and me being used to saying "The Singularity Institute" until recently.

comment by steven0461 · 2011-12-11T01:20:39.282Z · LW(p) · GW(p)

Speaking of which, komponisto, I wonder if you could be so good as to back me up on this point.

Replies from: komponisto
comment by komponisto · 2011-12-11T02:02:18.871Z · LW(p) · GW(p)

Done (in part).

comment by XiXiDu · 2011-12-11T11:39:11.855Z · LW(p) · GW(p)

Yet this point was completely missed by Louie, an important SI staff member, who apparently thought I was advocating for "the SI", XiXiDu-style.

I never learnt this stuff in school and my current focus is on improving the math education that I missed rather than rules of grammar. I promise that I will teach myself how to write correct English in future. But right now I don't give it top priority so I hope you will tolerate some of my mistakes for the time being.

comment by [deleted] · 2011-12-10T23:26:34.256Z · LW(p) · GW(p)

It is grating.

I graduated from the California Institute of Technology. Grammatically, that's the (State) Institute of (Stuff), and "institute" gets "the" in American English. (But, I graduated from Caltech - when contracted, the need for "the" disappears.)

Regional dialects differ - I don't know about "institute" specifically, but for a similar example, British/Canadian/etc. English says "hospital" where American English says "the hospital" (I noticed this one in Deus Ex Human Revolution, set in Detroit but developed by Eidos Montreal.) It's also not the case that contractions/acronyms always eliminate "the": consider working for the Federal Bureau of Investigation and working for the FBI. (Insiders will sometimes say "FBI", "CIA", "NSA" without "the", but the general public always adds "the".)

Facebook is a synthesized word, giving them more freedom to develop conventions around it. Similarly, if you went by Singinst, then avoiding "the Singinst" would be perfectly reasonable.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-11T11:44:56.797Z · LW(p) · GW(p)

It's also not the case that contractions/acronyms always eliminate "the": consider working for the Federal Bureau of Investigation and working for the FBI.

Yes, it would never have occurred to me that "the FBI" could be wrong.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-11T16:54:56.039Z · LW(p) · GW(p)

There are a lot of these. On ten seconds' thought, I would complete "working for..." with:
the FBI
the CIA
the NFL
the AMA
the ADA (which isn't an organization, but can still be an employer)

Using a definite article syntactically implies that the referent is unique; it wouldn't surprise me if there was an implicit status claim there, and if the resulting status negotiation was contributing to the (IMHO otherwise entirely unjustified) heat with which this nomenclature issue is being discussed/voted on here.

comment by multifoliaterose · 2011-12-10T14:41:15.792Z · LW(p) · GW(p)

Luke: I appreciate your transparency and clear communication regarding SingInst.

The main reason that I remain reluctant to donate to SingInst is that I find your answer (and the answers of other SingInst affiliates who I've talked with) to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are way too vague for it to be possible for even the best researchers to make progress on them.

My general impression is that the SingInst staff have insufficient exposure to technical research to understand how hard it is to answer questions posed at such a level of generality. I'm largely in agreement with Vladimir M's comments on this thread.

Now, it may well be possible to further subdivide and sharpen the subproblems at hand to the point where they're well defined enough to answer, but the fact that you seem unaware of how crucial this is is enough to make me seriously doubt SingInst's ability to make progress on these problems.

I'm glad to see that you place high priority on talking to good researchers, but I think that the main benefit that will derive from doing so (aside from increasing awareness of AI risk) will be to shift SingInst staff members' beliefs in the direction of the Friendly AI problem being intractable.

Replies from: lukeprog, Vladimir_Nesov
comment by lukeprog · 2011-12-10T18:58:17.780Z · LW(p) · GW(p)

I find your answer... to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are way too vague for it to be possible for even the best researchers to make progress on them.

No doubt, a one-paragraph list of sub-problems written in English is "unsatisfactory." That's why we would "really like to write up explanations of these problems in all their technical detail."

But it's not true that the problems are too vague to make progress on them. For example, with regard to the sub-problem of designing an agent architecture capable of having preferences over the external world, recent papers by (SI research associate) Daniel Dewey, Orseau & Ring, and Hibbard each constitute progress.

My general impression is that the SingInst staff have insufficient exposure to technical research to understand how hard it is to answer questions posed at such a level of generality.

I doubt this is a problem. We are quite familiar with technical research, and we know how hard it is (to take my usual example of what needs to be done to solve many of the FAI sub-problems) for "Claude Shannon to just invent information theory almost out of nothing."

In fact, here is a paragraph I wrote months ago for a (not yet released) document called Open Problems in Friendly Artificial Intelligence:

Richard Bellman may have been right that “the very construction of a precise mathematical statement of a verbal problem is itself a problem of major difficulty” (Bellman 1961). Some of the problems in this document have not yet been stated with mathematical precision, and the need for a precise statement of the problem is part of each open problem. But there is reason for optimism. Many times, particular heroes have managed to formalize a previously fuzzy and mysterious concept: see Kolmogorov on complexity and simplicity (Kolmogorov 1965; Li & Vitányi 2008), Solomonoff on induction (Solomonoff 1964a, 1964b; Rathmanner & Hutter 2011), Von Neumann and Morgenstern on rationality (Von Neumann & Morgenstern 1944; Anand 1995), and Shannon on information (Shannon 1948; Arndt 2004).

Also, I regularly say that "Friendly AI might be an incoherent idea, and impossible." But as Nesov said, "Believing a problem intractable isn't a step towards solving the problem." Many now-solved problems once looked impossible. But anyway, this is one reason to pursue research both on Friendly AI and on "maxipok" solutions that maximize the chance of an "ok" outcome, like Oracle AI.

comment by Vladimir_Nesov · 2011-12-10T15:27:32.045Z · LW(p) · GW(p)

I'm glad to see that you place high priority on talking to good researchers, but I think that the main benefit that will derive from doing so (aside from increasing awareness of AI risk) will be to shift SingInst staff members' beliefs in the direction of the Friendly AI problem being intractable.

Believing a problem intractable isn't a step towards solving the problem. It might be correct to downgrade your confidence in a problem being solvable, but isn't in itself a useful thing if the goal remains motivated. It mostly serves as an indication of epistemic rationality, if indeed the problem is less tractable than believed, or perhaps it could be a useful strategic consideration. Noticing that the current approach is worse than an alternative (i.e. open problems are harder to communicate than expected, but what's the better alternative that makes it possible to use this piece of better understanding?), or noticing a particular error in present beliefs, is much more useful.

Replies from: multifoliaterose
comment by multifoliaterose · 2011-12-10T15:50:09.530Z · LW(p) · GW(p)

Believing a problem intractable isn't a step towards solving the problem. It might be correct to downgrade your confidence in a problem being solvable, but isn't in itself a useful thing if the goal remains motivated.

I agree, but it may be appropriate to be more modest in aim (e.g. by pushing for neuromorphic AI with some built-in safety precautions even if achieving this outcome is much less valuable than creating a Friendly AI would be).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-10T16:03:29.727Z · LW(p) · GW(p)

e.g. by pushing for neuromorphic AI with some built-in safety precautions even if achieving this outcome is much less valuable than creating a Friendly AI would be

I believe it won't be "less valuable", but instead would directly cause existential catastrophe, if successful. Feasibility of solving FAI doesn't enter into this judgment.

Replies from: multifoliaterose
comment by multifoliaterose · 2011-12-10T16:30:40.182Z · LW(p) · GW(p)

I believe it won't be "less valuable", but instead would directly cause existential catastrophe, if successful.

I meant in expected value.

As Anna mentioned in one of her Google AGI talks, there's the possibility of an AGI being willing to trade with humans to avoid a small probability of being destroyed by humans (though I concede that it's not at all clear how one would create an enforceable agreement). Also, a neuromorphic AI could be not so far from a WBE. Do you think that whole brain emulation would directly cause existential catastrophe?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-10T16:47:01.043Z · LW(p) · GW(p)

I believe it won't be "less valuable", but instead would directly cause existential catastrophe, if successful.

I meant in expected value.

Huh? I didn't mean opportunity cost, but simply that successful neuromorphic AI destroys the world. Staging a global catastrophe does have lower expected value than protecting from global catastrophe (with whatever probabilities), but also lower expected value than watching TV.

Do you think that whole brain emulation would directly cause existential catastrophe?

Indirectly, but with influence that compresses expected time-to-catastrophe after the tech starts working from decades-centuries to years (decades if WBE tech comes early and only slow or few uploads can be supported initially). It's not all lost at that point, since WBEs could do some FAI research, and would be in a better position to actually implement a FAI and think longer about it, but ease of producing an UFAI would go way up (directly, by physically faster research of AGI, or by experimenting with variations on human brains or optimization processes built out of WBEs).

The main thing that distinguishes WBEs is that they are still initially human, still have the same values. All other tech breaks values, and giving it power makes humane values lose the world.

Replies from: multifoliaterose
comment by multifoliaterose · 2011-12-10T17:36:32.153Z · LW(p) · GW(p)

Huh? I didn't mean opportunity cost, but simply that successful neuromorphic AI destroys the world. Staging a global catastrophe does have lower expected value than protecting from global catastrophe (with whatever probabilities), but also lower expected value than watching TV.

I was saying that it could be that with more information we would find that

0 < EU(Friendly AI research) < EU(Pushing for relatively safe neuromorphic AI) < EU(Successful construction of a Friendly AI).

even if there's a high chance that relatively safe neuromorphic AI would cause global catastrophe and carry no positive benefits. This could be the case if Friendly AI research is sufficiently hard. I think that, given the current uncertainty about the difficulty of Friendly AI research, one would have to be extremely confident that relatively safe neuromorphic AI would cause global catastrophe in order to rule this possibility out.
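(A minimal numerical sketch of how that ordering could hold, with purely made-up numbers rather than anything argued in this thread: suppose a successfully built Friendly AI is worth \(V\), a relatively safe neuromorphic AI that happens to turn out ok is worth \(v\) with \(0 < v \ll V\), and every other outcome is worth 0. If Friendly AI research succeeds with probability \(p_F\) and the neuromorphic route turns out ok with probability \(p_N\), then

\[
EU(\text{Friendly AI research}) = p_F V, \qquad EU(\text{neuromorphic push}) = p_N v.
\]

With, say, \(p_F = 10^{-4}\), \(p_N = 0.05\), and \(v = V/100\), we get \(p_N v = 5 \times 10^{-4}\, V > p_F V\), reproducing the ordering above even though the neuromorphic route most likely ends in catastrophe.)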

Indirectly, but with influence that compresses expected time-to-catastrophe after the tech starts working from decades-centuries to years (decades if WBE tech comes early and only slow or few uploads can be supported initially). It's not all lost at that point, since WBEs could do some FAI research, and would be in a better position to actually implement a FAI and think longer about it, but ease of producing an UFAI would go way up (directly, by physically faster research of AGI, or by experimenting with variations on human brains or optimization processes built out of WBEs).

Agree with this

The main thing that distinguishes WBEs is that they are still initially human, still have the same values. All other tech breaks values, and giving it power makes humane values lose the world.

I think that I'd rather have an uploaded crow brain have its computational power and memory substantially increased and then go FOOM than have an arbitrary powerful optimization process; just because a neuromorphic AI wouldn't have values that are precisely human doesn't mean it would be totally devoid of value from our point of view.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-10T18:01:46.703Z · LW(p) · GW(p)

I think that I'd rather have an uploaded crow brain have its computational power and memory substantially increased and then go FOOM than have an arbitrary powerful optimization process; just because a neuromorphic AI wouldn't have values that are precisely human doesn't mean it would be totally devoid of value from our point of view.

I expect it would; even a human whose brain was meddled with to make it more intelligent is probably a very bad idea, unless this modified human builds a modified-human-Friendly-AI (in which case some value drift would probably be worth it for the protection from existential risk) or, even better, a useful FAI theory elicited Oracle AI-style. The crucial question here is the character of FOOMing, and how much of the initial value is retained.

comment by XiXiDu · 2011-12-10T12:48:11.343Z · LW(p) · GW(p)

Another change is that our President, Michael Vassar, is launching a personalized medicine company that we’re all pretty excited about.

I am only now reading about this. The president of the Singularity Institute believes his time is better spent on personalized medicine?

Replies from: shokwave, multifoliaterose, Vladimir_Nesov, timtyler
comment by shokwave · 2011-12-11T08:48:20.041Z · LW(p) · GW(p)

I don't think it likely that Vassar strictly prefers medicine to the singularity. Much more likely: he can do almost all of the work he does for SingInst while he's with the other company; the work he can't do can be done by someone else just as well (or better, or that work isn't so important); and the extra benefits he can bring outweigh the negatives of reduced committed time.

If he does genuinely think medicine is more important, that's a failing of Michael Vassar, not of SingInst.

(And a success on the part of SingInst in letting him do that, instead of demanding commitment).

So, I disagree with your connotations.

comment by multifoliaterose · 2011-12-10T13:42:05.096Z · LW(p) · GW(p)

The company could generate profit to help fund SingInst and provide evidence that the rationality techniques that Vassar et al. use work in a context with real-world feedback. This in turn could be evidence of their usefulness in the context of x-risk reduction, where empirical feedback is not available.

Replies from: curiousepic
comment by curiousepic · 2011-12-14T16:04:21.360Z · LW(p) · GW(p)

Does anyone know if this is the intention?

comment by Vladimir_Nesov · 2011-12-10T13:12:09.997Z · LW(p) · GW(p)

(I believe it's the org that announced the prize recently discussed on LW.)

comment by timtyler · 2011-12-14T18:24:59.039Z · LW(p) · GW(p)

It actually looks like 4 SingInst folk are involved. Networking.

comment by Dr_Manhattan · 2011-12-10T18:32:35.809Z · LW(p) · GW(p)

A notable (omitted?) reason to publish is peer review. External peer review might be too costly for most items like Luke mentioned, but perhaps creating an internal peer-review network between SIAI and FHI and some other people might be a useful compromise.

Replies from: lukeprog
comment by lukeprog · 2011-12-10T19:16:38.904Z · LW(p) · GW(p)

perhaps creating an internal peer review network between SIAI and FHI and some other people might be a useful compromise.

Yes, we do this. This is one benefit of the research associates program, for example.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2011-12-10T19:23:49.514Z · LW(p) · GW(p)

Make this explicit - the aim is not only to produce high-quality output but also to signal that it is high-quality output. Mark papers as "reviewed by X" or something.

Curious if you guys found anonymous reviews useful.

comment by David_Gerard · 2011-12-11T12:14:58.280Z · LW(p) · GW(p)

I think most people would agree that if a scientist happened to create a synthetic virus that was airborne and could kill hundreds of millions of people if released into the wild, we wouldn't want the instructions for creating that synthetic virus to be published in the open for terrorist groups or hawkish governments to use.

Some say this has already happened. (I am somewhat cheered that the general reaction was "WHAT THE HELL, HERO?")

comment by TrueBayesian · 2011-12-10T19:50:00.703Z · LW(p) · GW(p)

I dig the 3-day mustache. +1

comment by David_Gerard · 2011-12-11T09:29:27.248Z · LW(p) · GW(p)

Here's a discussion of journal publishing versus preprints on John Baez's Google+. (Started with dodgy publishers, but read the comments.)

He is (and I am) surprised that more scientists don't use arXiv or something arXiv-like, whereas it's pretty much the standard way to quickly stake out credit in physics.

I wonder if there's a place for particularly rigorous SI papers on arXiv or somewhere similar.

comment by Bruno_Coelho · 2011-12-11T02:25:14.337Z · LW(p) · GW(p)

I see some skeptics of the singularity and analyse their arguments, but there is something I cannot deny: lukeprog (and others) are really trying to solve FAI. Even if in the near future we begin to encounter evidence in favor of another risk, the comprehension of our fragility should lead us to modify our priorities.

comment by Dr_Manhattan · 2011-12-10T18:49:15.529Z · LW(p) · GW(p)

For reference, Eliezer's FAI talk slides are posted here: http://lesswrong.com/lw/874/official_videos_from_the_singularity_summit/553j

comment by XiXiDu · 2011-12-10T13:08:11.459Z · LW(p) · GW(p)

And, Eliezer’s choice to work on rationality has paid off. The Sequences, and the Less Wrong community that grew out of them, have been successful.

While 38.5% of all people who know about Less Wrong have read at least 75% of the Sequences, only 16.5% think that unfriendly AI is the most worrisome existential risk. How do you know that those 16.5% wouldn't believe you anyway, even without the work on rationality, e.g. by writing science fiction?

Replies from: multifoliaterose, JStewart
comment by multifoliaterose · 2011-12-10T13:49:06.537Z · LW(p) · GW(p)

One doesn't need to know that hundreds of people have been influenced to know that Eliezer's writings have had x-risk-reduction value; if he's succeeded in getting a handful of people seriously interested in x-risk reduction relative to the counterfactual, his work is of high value. Based on my conversations with those who have been so influenced, this last point seems plausible to me. But I agree that the importance of the Sequences for x-risk reduction has been overplayed.

comment by JStewart · 2011-12-11T04:52:16.964Z · LW(p) · GW(p)

As one of the 83.5%, I wish to point out that you're misinterpreting the results of the poll. The question was: "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?" This is not the same as "unfriendly AI is the most worrisome existential risk".

I think that unfriendly AI is the most likely existential risk to wipe out humanity. But I think that an AI singularity is likely farther off than 2100. I voted for an engineered pandemic, because that and nuclear war were the only two risks I thought decently likely to occur before 2100, though a >90% wipeout of humanity is still quite unlikely.

edit: I should note that I have read the sequences and it is because of Eliezer's writing that I think unfriendly AI is the most likely way for humanity to end.

comment by XiXiDu · 2011-12-10T12:35:41.166Z · LW(p) · GW(p)

HD Video link. (I can't get embedding on Less Wrong to work.)

Use the old embed code instead.

Replies from: Vladimir_Nesov, None
comment by Vladimir_Nesov · 2011-12-10T13:09:46.007Z · LW(p) · GW(p)

Fixed.

Replies from: lukeprog
comment by lukeprog · 2011-12-10T18:35:35.789Z · LW(p) · GW(p)

Thanks!

comment by [deleted] · 2011-12-11T19:58:37.154Z · LW(p) · GW(p)

Which code?

comment by antigonus · 2011-12-13T09:20:21.782Z · LW(p) · GW(p)

One of the reasons given against peer review is that it takes a long time for articles to be published after acceptance. Is it not possible to make them available on your own website before they appear in the journal? (I really have barely any idea how these things work, but I know that in some fields you can do this.)

comment by AlexMennen · 2011-12-10T19:02:24.375Z · LW(p) · GW(p)

I think most people would agree that if a scientist happened to create a synthetic virus that was airborne and could kill hundreds of millions of people if released into the wild, we wouldn't want the instructions for creating that synthetic virus to be published in the open for terrorist groups or hawkish governments to use. And for the same reasons, we wouldn't want a Friendly AI textbook to explain how to build highly dangerous AI systems. But excepting that, I would love to see a rigorously technical textbook on friendliness theory, and I agree that friendliness research will need to increase for us to see that textbook be written in 15 years.

Why do you think that a rigorous description of friendliness would also shed light on how to build AGI?

Replies from: lukeprog
comment by lukeprog · 2011-12-10T19:25:19.252Z · LW(p) · GW(p)

Friendly AI theory isn't just about the problem of friendliness content, but also about the kind of AI architecture that is capable of using friendliness content. But many kinds of progress on that kind of AI architecture will be progress toward AGI that can take arbitrary goals, almost all of which would be bad for humanity.

comment by Adam Zerner (adamzerner) · 2015-05-04T23:32:35.167Z · LW(p) · GW(p)

But right now, my bet on how we’d end up spending that money is that we would personally argue for our mission to each of the world’s top mathematicians, AI researchers, physicists, and formal philosophers.

Is it known why they currently aren't working on FAI?

First thoughts:

1) Do they judge that they are having a bigger impact on the world doing what they are currently doing?

1a) Because they think it's more important?

1b) Because they think they have a comparative advantage in their current field, and that this outweighs the fact that FAI is more important.

2) If {!1},

2a) Is it because they're pursuing a terminal value other than altruism that is outweighing the altruistic benefits of FAI? Truth? Personal happiness?

2b) Are they not the kind of people who pursue their goals strategically? If this is the case, then I can't help but think that something in their brain is confused.

I'm not too confident, but I feel skeptical that they'd be much motivated by money. I mean, I think that everyone "has their price", and that A LOT of money would get them to work on it... but I'm skeptical that a "normal a lot" would do the trick. Couldn't they be making millions already if they wanted to? And aren't a lot of them already making millions?

comment by semianonymous · 2012-04-23T07:28:52.091Z · LW(p) · GW(p)

Well, my prior for someone on the internet who's asking for money being a scam is no less than 99% (and I avoid Pascal's mugging by not taking strings from such sources as proper hypotheses), and I think that is a very common prior, so there had better be good evidence that it isn't a scam - a panel of accomplished scientists and engineers working to save the world, something on the scale of the IPCC - rather than some weak evidence that it is a scam, and something even less convincing than e.g. Steorn's perpetual motion device.

Scamming works best by self-deceit, though, so even though you are almost certainly just a bunch of fraudsters, you still feel genuinely wronged and insulted by the suggestion that you are, because the first people you would have defrauded would have been yourselves. You'd also feel wronged that there is nothing you could have done to look better. There isn't; if your cause were genuine, it would have been started decades ago by more qualified people.

comment by timtyler · 2011-12-11T02:48:09.531Z · LW(p) · GW(p)

How can we generalize the theory of machine induction - called Solomonoff induction - so that it can use higher-order logics and reason correctly about observation selection effects?

I don't really understand. What's with the higher-order logic? Solomonoff induction already uses a Turing-complete reference machine. There's nothing "higher" than that.

I don't think observation-selection effects need particularly special treatment with a dedicated reference machine. The conventional approach would be to simply let the agent see the world. That way it finds out about the laws of physics and observation selection effects. After it has some data, you can see what kind of interpreter it has built for itself - and go from there.

Yes, you could try to manually wire all this kind of thing into the reference machine - but with a sufficiently smart agent, that process can be automated by letting the agent see the world, and seeing what kind of "compiler" it creates for itself. Essentially, this isn't really an important problem that needs solving by humans.

IMHO, the important problem in this area involves finding a reference machine that best facilitates self-improvement. We need to find which reference machine languages are most easily understood by mechanical programmers - NOT which ones most accurately represent the real world.
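
(For concreteness, here is a minimal, illustrative sketch of what "weighting programs on a reference machine" means. The toy hypothesis space, the character-count length measure, and all names below are my own stand-ins for a real universal Turing machine; actual Solomonoff induction sums over all programs and is uncomputable.)

```python
# A toy, bounded stand-in for Solomonoff induction (hypothetical example).
# The real thing runs every program on a universal reference machine; here
# the "reference machine" is just a hand-picked list of tiny Python
# programs, weighted by a 2^-length prior.

from fractions import Fraction

# Each hypothesis is (description, program); the program maps a position
# index to the bit it predicts there. Description length in characters is
# a crude proxy for program length on a reference machine.
HYPOTHESES = [
    ("0",                  lambda i: 0),             # constant zeros
    ("1",                  lambda i: 1),             # constant ones
    ("i%2",                lambda i: i % 2),         # 0,1,0,1,...
    ("(i+1)%2",            lambda i: (i + 1) % 2),   # 1,0,1,0,...
    ("1 if i%3==0 else 0", lambda i: 1 if i % 3 == 0 else 0),
]

def prior(description):
    # Shorter descriptions get exponentially more weight.
    return Fraction(1, 2 ** len(description))

def predict_next(observed):
    """Probability that the next bit is 1, mixing over every hypothesis
    that reproduces the observed prefix exactly."""
    total = Fraction(0)
    mass_on_one = Fraction(0)
    for desc, prog in HYPOTHESES:
        if all(prog(i) == bit for i, bit in enumerate(observed)):
            weight = prior(desc)
            total += weight
            if prog(len(observed)) == 1:
                mass_on_one += weight
    # With no surviving hypothesis, fall back to maximal uncertainty.
    return mass_on_one / total if total else Fraction(1, 2)

print(predict_next([0, 1, 0, 1]))  # 0: only "i%2" survives, and it predicts 0 next
```

Changing the hypothesis language changes the prior, which is exactly the sense in which the choice of reference machine matters.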

Replies from: endoself
comment by endoself · 2011-12-11T04:06:00.442Z · LW(p) · GW(p)

I don't really understand. What's with the higher-order logic? Solomonoff induction already uses a Turing-complete reference machine. There's nothing "higher" than that.

Have you read this thread?

Replies from: timtyler
comment by timtyler · 2011-12-11T12:38:46.384Z · LW(p) · GW(p)

So: I am not too worried about the universe being uncomputable.

In the race to superintelligence, there are more pressing things to worry about than such possibilities. Those interested in winning that race should prioritise their efforts, with things like this at the bottom of the heap; otherwise they are more likely to fail.

I don't think that Solomonoff induction has a problem in this area - but it is a plausible explanation of what the reference to "higher-order logic" referred to.

comment by Raemon · 2011-12-10T22:12:25.644Z · LW(p) · GW(p)

I feel like this should have been a top-level post. Unless you specifically avoid using that for SingInst business.

Replies from: None
comment by [deleted] · 2011-12-11T11:23:36.141Z · LW(p) · GW(p)

I think that's the reason. Remember there was a period when SIAI and the whole topic of (u)FAI were temporarily tabooed for the sake of the health of the rationalist community.

comment by hankx7787 · 2011-12-10T14:27:17.627Z · LW(p) · GW(p)

You totally remind me of the "aliens guy": http://files.sharenator.com/ancient_aliens_guy_RE_Cool_Story_Bro-s553x484-241806.jpg

Replies from: hankx7787, CallMeSIR
comment by hankx7787 · 2011-12-11T23:11:40.674Z · LW(p) · GW(p)

evidently less wrong lacks a sense of humor :P

comment by CallMeSIR · 2011-12-14T18:03:13.679Z · LW(p) · GW(p)

LOL. Nice joke. Thankfully Luke looks more composed than that guy. :)