Posts

Comments

Comment by mindviews on Recent site changes · 2011-06-24T04:48:59.890Z · LW · GW

First off, let me say thank you for all the work that's gone into the site update by everyone involved! The three changes I like most are the new header design (especially the clear separation between Main and Discussion - the old menu was too cluttered), the nearby meetup section, and the expanding karma bubbles.

I had one question about how the nearby meetup list interacts with the Location field. Is the meetup list supposed to sort by location somehow? If so, what do I need to put in my location? Thanks!

Comment by mindviews on Official Less Wrong Redesign: Call for Suggestions · 2011-04-21T01:02:21.200Z · LW · GW

I agree that the single comment view has more boilerplate up top, but otherwise I'd say it usually fits on screens without any trouble.

I was curious about your comment so I took a look at the screenshot. You say in the bug report that you're using a "fairly small font" setting, but the font is being rendered much larger for you than I see using default IE9 and FF4 settings. Also, your picture shows the page in a serif font while the CSS specifies sans-serif. I'm not sure if it's a browser issue or if you're using custom settings, but in a 1600x900 view (the size of your screenshot) I can see the full comment without scrolling.

Mostly I'd like to know whether other people "take 2 screens to see one permalinked comment", because I agree that reasonably short comments should be visible without scrolling.

Comment by mindviews on April 10 2011 Southern California Meetup · 2011-04-07T09:40:50.740Z · LW · GW

Sorry I can't make it this time - I've got travel plans this weekend. Hope to see everyone next time.

Comment by mindviews on 26 March 2011 Southern California Meetup · 2011-03-25T23:07:40.840Z · LW · GW

Count me in.

Comment by mindviews on February 27 2011 Southern California Meetup · 2011-02-24T23:30:13.433Z · LW · GW

I plan on coming.

Comment by mindviews on December 2010 Southern California Meetup · 2010-12-17T04:52:51.333Z · LW · GW

I'll make a weak vote for the IHOP near UCI. It's easy to get to, has free parking, and seemed to work reasonably well for the last meetup.

Comment by mindviews on December 2010 Southern California Meetup · 2010-12-17T04:36:01.988Z · LW · GW

I'll be there. I'll be driving from Torrance and can give a ride to anyone who happens to be in that area or along the way.

Comment by mindviews on What I would like the SIAI to publish · 2010-11-02T02:46:41.155Z · LW · GW

For those of you who are interested, some of us folks from the SoCal LW meetups have started working on a project that seems related to this topic.

We're working on building a fault tree analysis of existential risks with a particular focus on producing a detailed analysis of uFAI. I have no idea if our work will at all resemble the decision procedure SIAI used to prioritize their uFAI research, but it should at least form a framework for the broader community to discuss the issue. Qualitatively, you could use the work to discuss the possible failure modes that would lead to a uFAI scenario; quantitatively, you could use the framework and your own supplied probabilities (or aggregated probabilities from the community, domain experts, etc.) to crunch the numbers and/or compare uFAI to other posited existential risks.
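To make the quantitative side concrete, here's a toy sketch in Python - not our actual tool, and every event name and probability below is made up purely for illustration - of how supplied probabilities would propagate through AND/OR gates in a fault tree, assuming independent events:

```python
# Toy fault tree evaluation under an independence assumption.
# All event names and probabilities are hypothetical placeholders.
from math import prod

def p_and(probs):
    """AND gate: all child events must occur (independence assumed)."""
    return prod(probs)

def p_or(probs):
    """OR gate: at least one child event occurs (independence assumed)."""
    return 1 - prod(1 - p for p in probs)

# Hypothetical leaf probabilities (over some fixed time horizon).
p_agi_built        = 0.10   # someone builds an AGI
p_goals_misaligned = 0.50   # its goals are misaligned, given AGI
p_containment_fail = 0.30   # boxing/oversight fails, given misalignment

# uFAI scenario: AGI is built AND misaligned AND containment fails.
p_ufai = p_and([p_agi_built, p_goals_misaligned, p_containment_fail])

# Top level: combine/compare with other posited existential risks.
p_other_risk = 0.005
p_any_catastrophe = p_or([p_ufai, p_other_risk])

print(f"P(uFAI scenario)   ~ {p_ufai:.3f}")
print(f"P(any catastrophe) ~ {p_any_catastrophe:.3f}")
```

Swapping in community- or expert-aggregated numbers at the leaves would then be just a data change, not a structural one.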

At the moment, I'd mostly like to find out what others think of this project. If you have suggestions, resources, or pointers to similar/overlapping work you want to share, that would be great, too.

Comment by mindviews on October 2010 Southern California Meetup · 2010-10-22T06:38:44.779Z · LW · GW

I'll be there, but I may not arrive until ~2 PM. Not sure what the setup at the IHOP is, but I can bring an LCD projector to hook up to any laptops that join us.

Comment by mindviews on October 2010 Southern California Meetup · 2010-10-22T06:28:17.415Z · LW · GW

It's a social gathering for anyone interested in discussing anything relevant to the LW community. I've personally been part of discussions of rationality in general, cryonics, existential risk, personal health, and cognitive bias (among other topics) at the two meetups I've attended. It's a good excuse to meet some other folks, trade ideas, start projects, etc.

I don't think we have an agenda organized for this one. But if you're curious, take a look at the comments from the September SoCal meetup for an idea about what was discussed and what people thought was good/bad/interesting about it.

Comment by mindviews on Open Thread, September, 2010-- part 2 · 2010-09-28T09:58:26.958Z · LW · GW

I tried something different and added a link to this section. Any comments on how that works?

Comment by mindviews on Open Thread, September, 2010-- part 2 · 2010-09-27T03:03:58.967Z · LW · GW

I'll join in the fun - any suggestions appreciated.

My profile is currently limited to OKC users, though. I wish there were more LW ladies in SoCal who were easier to find...

Comment by mindviews on September 2010 Southern California Meetup · 2010-09-21T03:00:46.052Z · LW · GW

Hi Darius - If no one else is driving through Burbank, I can backtrack and pick you up.

Comment by mindviews on September 2010 Southern California Meetup · 2010-09-13T22:40:06.811Z · LW · GW

I'll be there. I've got space for 3 more in my car. If anyone in the Pasadena/Glendale area would like a ride, let me know.

Comment by mindviews on Open Thread: July 2010, Part 2 · 2010-07-11T09:52:23.922Z · LW · GW

Is there any philosophy worth reading?

Yes. I agree with your criticisms - "philosophy" in academia seems to be essentially professional arguing - but there are plenty of well-reasoned and useful ideas that come of it, too. There is a lot of non-rational work out there (i.e., lots of logically valid arguments built on irrational premises), but since you're asking the question in this forum I assume you're looking for something of use/interest to a rationalist.

So my question is: What philosophical works and authors have you found especially valuable, for whatever reason?

I've developed quite a respect for Hilary Putnam and have read many of his books. Much of his work covers philosophy of mind with a strong eye towards computational theories of the mind. Beyond just his insights, my respect also stems from his intellectual honesty. In the Introduction to "Representation and Reality" he takes a moment to note, "I am, thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced." In short, as a rationalist I find reading his work very worthwhile.

I also liked "Objectivity: The Obligations of Impersonal Reason" by Nicholas Rescher quite a lot, but that's probably partly colored by having already come to similar conclusions going in.

PS - There was this thread over at Hacker News that just came up yesterday if you're looking to cast a wider net.

Comment by mindviews on Open Thread: July 2010, Part 2 · 2010-07-11T09:07:23.643Z · LW · GW

An akrasia-fighting tool, via Hacker News via Scientific American, based on this paper. Read the Scientific American article for the short version. My super-short summary: in self-talk, asking yourself "Will I?" rather than telling yourself "I will" can be more effective at achieving success in goal-directed behavior. Looks like a useful tool to me.

Comment by mindviews on July 2010 Southern California Meetup · 2010-07-11T00:56:41.725Z · LW · GW

I found a public parking structure just around the corner with the first 2 hours free and, I believe, a $3 flat rate after 5 PM - a good deal.

Comment by mindviews on July 2010 Southern California Meetup · 2010-07-09T08:37:04.411Z · LW · GW

A trip through Burbank should be fine - I just PMed you contact details.

Comment by mindviews on July 2010 Southern California Meetup · 2010-07-09T02:15:16.928Z · LW · GW

I'm game. I'll be driving from Pasadena and can give a ride if you need one.

Comment by mindviews on A Rational Education · 2010-06-25T10:06:45.935Z · LW · GW

Were you thinking of "Affirmative Action Isn’t About Uplift"?

http://www.overcomingbias.com/2009/07/affirmative-action-wasnt-about-uplift.html

Comment by mindviews on A Rational Education · 2010-06-24T01:45:41.134Z · LW · GW

I got an amazing amount of use out of Order of Magnitude Physics. It gets you in the habit of estimating everything in terms of numbers, and I've found that relentlessly calculating estimates greatly reduces the number of biased intuitive judgments I make. A good class will include a lot of interaction and out-loud thinking about the assumptions your estimates are based on. Alternatively (or in addition), a high-level engineering design course can provide many of the same experiences within the context of a particular domain. (Aerospace/architecture/transportation/economic systems can all provide good design problems for this type of thinking - oddly, I haven't yet seen a computer science design problem that works as well.)
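For anyone who hasn't seen the style, here's a toy version in Python of the classic piano-tuner Fermi estimate - every number below is a deliberately round guess I made up, and writing the assumptions down explicitly is the whole point:

```python
# Classic Fermi estimate: piano tuners in a large city.
# Every input is a round, made-up guess; the habit is what matters.

population           = 3e6    # people in the city
people_per_household = 2.5
pianos_per_household = 0.05   # ~1 in 20 households owns a piano
tunings_per_year     = 1      # each piano tuned about once a year
tunings_per_day      = 4      # one tuner services ~4 pianos per workday
workdays_per_year    = 250

pianos = population / people_per_household * pianos_per_household
demand = pianos * tunings_per_year            # tunings needed per year
supply = tunings_per_day * workdays_per_year  # tunings per tuner per year

print(f"~{demand / supply:.0f} piano tuners")  # order of magnitude: tens
```

Whether the answer comes out as 40 or 80 matters far less than being forced to name and defend each input.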

Also, I'll second recommendations for just about any psychology course. And anywhere you see a course cross-listed between psychology and economics you'll have a good chance of learning about human bias.

Comment by mindviews on What if AI doesn't quite go FOOM? · 2010-06-22T03:46:33.875Z · LW · GW

So you're positing a technique that takes advantage of inflationary theory to permanently get rid of an AI. Thermite - very practical. Launching the little AI box across the universe at near light-speed for a few billion years until inflation takes it beyond our horizon - not practical.

To bring this thread back onto the LW Highway...

It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement - probably to preserve the appearance of consistency, out of the desire not to be wrong in public, etc. (Do we have a list of these somewhere? I couldn't find examples in the LW wiki.) A better response to "I hope that was a joke..." than "You are mistaken" would have been "Yeah, it was hyperbole for effect." or something along those lines.

A better initial comment from me would have been to make it an actual question, because I thought you might have had a genuine misunderstanding about light cones and world lines. Instead, it came off as hostile, which wasn't what I intended.

Comment by mindviews on What if AI doesn't quite go FOOM? · 2010-06-21T08:56:55.718Z · LW · GW

I'm pretty sure I'm not mistaken. At the risk of driving this sidetrack off a cliff...

Once an object (in this case, a potentially dangerous AI) is in our past light cone, the only way for its world line to stay outside of our future light cone forever (besides terminating it through thermite destruction as mentioned above) is for it to travel at the speed of light or faster. That was the physics nitpick I was making. In short, destroy it because you cannot send it far enough away fast enough to keep it from coming back and eating us.
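For concreteness, here's the flat-spacetime version of the nitpick (setting aside cosmic expansion), taking the launch event as the origin:

```latex
% Worldline of a box launched at t = 0 with sublight speed v,
% versus the boundary of our future light cone:
\[
  x_{\text{box}}(t) = v\,t \quad (v < c),
  \qquad
  x_{\text{cone}}(t) = c\,t .
\]
% Since vt < ct for every t > 0, the box stays inside our future
% light cone at all times; only v >= c would keep it out forever.
```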

Comment by mindviews on Open Thread June 2010, Part 4 · 2010-06-21T08:16:06.814Z · LW · GW

it's basically saying that gravity and EM are both obeying some more general law

No, what's happening is that under certain approximations the two are described by similar math. The trick is to know when the approximations break down and what the math actually translates to physically.

Does it suggest a way to unify gravity and EM?

No.

Keep in mind that EM has two kinds of charge while gravity has only one. Also, like electric charges repel while like gravitational charges (masses) attract. This flips the sign of the interaction when you go from one to the other, which means your intuitive understanding of EM doesn't map well onto gravity.
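The "similar math" is easiest to see in the static case - compare Coulomb's law with Newton's law of gravitation:

```latex
\[
  F_{\text{EM}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2},
  \qquad
  F_{\text{grav}} = -\,G\,\frac{m_1 m_2}{r^2}.
\]
% Same inverse-square form, but charge comes in two signs while mass
% comes in one, and like charges repel while masses always attract
% (hence the overall minus sign). The analogy breaks down once you go
% beyond such static approximations, e.g., in full general relativity.
```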

Comment by mindviews on What if AI doesn't quite go FOOM? · 2010-06-21T07:13:58.456Z · LW · GW

Well, I suppose you could launch them out of our future light cone.

I hope that was a joke because that doesn't square with our current understanding of how physics works...

Comment by mindviews on What if AI doesn't quite go FOOM? · 2010-06-21T06:58:03.193Z · LW · GW

The morals of FAI theory don't mesh well at all with the morals of transhumanism.

It's not clear to me that a "transhuman" AI would have the same properties as a "synthetic" AI. I'm assuming that a transhuman AI would be based on scanning in a human brain and then running a simulation of it, while a synthetic AI would be more declaratively algorithmic. In that scenario, proving that a self-modification would be an improvement is much more difficult for a transhuman AI. Because of that, I'd expect a transhuman AI to be orders of magnitude slower to adapt, and thus less dangerous, than a synthetic AI. For that reason, I think it is reasonable to treat the two classes differently.

Comment by mindviews on Open Thread June 2010, Part 3 · 2010-06-14T11:04:31.767Z · LW · GW

Of course, it could just add complexity and hope that it works, but that’s just evolution, not intelligence explosion.

The critical aspect of a "major-impact intelligence-explosion singularity" isn't the method for improvement but the rate of improvement. If computer processing power continues to grow at an exponential rate, even an inefficiently improving AI will have the growth in raw computing power behind it.
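Back-of-the-envelope, with a doubling time picked purely for illustration:

```latex
\[
  \text{compute}(t) = \text{compute}(0)\cdot 2^{\,t/T},
  \qquad
  T \approx 2\ \text{years}
  \;\Rightarrow\;
  \text{compute}(20\,\text{yr}) \approx 2^{10} \approx 1000\times .
\]
% So even an AI with a mediocre self-improvement algorithm inherits
% roughly three orders of magnitude of raw speedup over two decades,
% for free, if the hardware trend holds.
```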

So: do you know any counterarguments or articles that address either of these points?

I don't have any articles but I'll take a stab at counterarguments.

A Majoritarian counterargument: AI turned out to be harder and further away than originally thought. The general view is still tempered by the failure of AI to live up to those expectations. In short, the AI researchers cried "wolf!" too much 30 years ago and now their predictions aren't given much weight because of that bad track record.

A mind can't understand itself counterargument: Even accepting as a premise that a mind can't completely understand itself, that's not an argument that it can't understand itself better than it currently does. The question then becomes which parts of the AI mind are important for reasoning/intelligence, and whether an AI can understand and improve those capabilities at a faster rate than humans can.

Comment by mindviews on Diseased thinking: dissolving questions about disease · 2010-05-31T02:40:02.436Z · LW · GW

I don't think that's a good example. For the status-quo bias to be at work, we would need to think it's worse for people to have either less or more personal responsibility than they currently do (i.e., the status quo is a local optimum). I'm not sure anyone would argue that having more personal responsibility is bad, so the status-quo bias wouldn't be in play and the preference reversal test wouldn't apply. (A similar argument works for the current rate of heroin addiction not being a local optimum.)

I think the problem in the example is that it mixes the axes for our preferences for people to have personal responsibility and our preferences for people not to be addicted to heroin. So we have a space with at least these two dimensions. But I'll claim that personal responsibility and heroin use are not orthogonal.

I think the real argument is about the coupling between personal responsibility and heroin addiction. Should we have more coupling or less? The drug in this example would produce less coupling. So let's run a preference reversal test: if we had a drug that made your chances of heroin addiction more coupled to your personal responsibility, would you take it? I think that would be a valid preference reversal test here, if you think the current coupling is a local optimum.

Comment by mindviews on Preface to a Proposal for a New Mode of Inquiry · 2010-05-17T08:18:27.342Z · LW · GW

Thoughts I found interesting:

The failures indicate that, instead of being threads in a majestic general theory, the successes were just narrow, isolated solutions to problems that turned out to be easier than they originally appeared.

Interesting because I don't think it's true. I think the problem is more about the need of AI builders to show results. Providing a solution (or a partial solution, or a path to a solution) in a narrow context is a way to do that when your tools aren't yet powerful enough for more general or mixed approaches. Given the variety of identifiable structures in the human brain that give us intelligence, I strongly expect that an AI will be built by combining many specialized parts, probably based on multiple research areas we'd recognize today.

One obvious influence comes from computer science, since presumably AI will eventually be built using software. But this fact appears irrelevant to me, and so the influence of computer science on AI seems like a disastrous historical accident.

Interesting because it forced me to consider what I think AI is outside the context of computer science - something I don't normally do.

In my view, AI can and must become a hard, empirical science, in which researchers propose, test, refine, and often discard theories of empirical reality.

Interesting because I'm very curious to see what this means in the context of your coming proposal.

Comment by mindviews on Attention Lurkers: Please say hi · 2010-05-16T08:15:12.135Z · LW · GW

Hi all - been lurking since LW started and followed Overcoming Bias before that, too.