Comments

Comment by NaN on The Dark Arts - Preamble · 2010-10-17T23:14:40.649Z · LW · GW

So, is the story real? And why did you include the spider (I reckon that part, at least, isn't real -- too perfect)?

Comment by NaN on The Dark Arts - Preamble · 2010-10-17T23:12:36.449Z · LW · GW

2^12? Isn't it 12C2 (= 66), rather than 2^12 (= 4096)? It's 12P2 (= 132) if we care about order, since there are two different ways to order any two toppings.
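Spelled out (the standard choose/permutation identities; nothing here beyond the numbers already in the comment):

$$\binom{12}{2} = \frac{12 \times 11}{2} = 66, \qquad {}^{12}P_2 = 12 \times 11 = 132, \qquad 2^{12} = 4096$$

(2^12 would count all subsets of the twelve toppings, of any size, which is presumably where that figure came from.)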

Comment by NaN on Book Recommendations · 2010-08-11T14:18:30.813Z · LW · GW

Ok, why the downvoting? I understand the downvoting for my first comment (though I don't understand why its parent is at +1), but -1 for pointing out an inaccuracy? An explanation would be welcome.

Comment by NaN on Book Recommendations · 2010-08-11T10:29:43.460Z · LW · GW

You pirating a book will, personally, make you poorer more than it will make you richer? Even though it's impossible that the loss flowing back to you could amount to even 100% of what you would otherwise pay for the book?

I suppose you might feel guilty about it, and the negative utility of guilt might be greater than the economic cost, but purely economically, it's clearly good for you personally to get something for free that you would otherwise pay for.

Comment by NaN on Book Recommendations · 2010-08-10T21:52:31.283Z · LW · GW

I'm kind of unsure if piracy is on net good or bad for the world (although it's clearly good selfishly), but what the hell: gen.lib.rus.ec and lib.homelinux.org (username: gek and password: gek) are excellent sources for books about mathematics and related fields.

Comment by NaN on Book Recommendations · 2010-08-10T21:43:22.954Z · LW · GW

| Why Zebras Don't Get Ulcers

But stomach ulcers aren't caused by stress; they're caused by Helicobacter pylori -- although it seems like stress might slightly increase your risk of getting them.

Seeing how the book appears to have been first published long AFTER that discovery, I'm a little suspicious regarding the quality of the research.

Comment by NaN on Harry Potter and the Methods of Rationality discussion thread · 2010-07-25T17:28:58.039Z · LW · GW

It appears that a very large number of wizards are blood purists; Quirrell might just want power, and think that the best way to achieve it is by stirring up hatred for mudbloods.

Comment by NaN on Open Thread: June 2010 · 2010-06-01T23:21:03.619Z · LW · GW

Uninformed opinion: space weather modelling doesn't seem like a huge market, especially when you compare it to the truly massive gaming market. I doubt the increase in demand would be significant, and if what you're worried about is rate of growth, it seems like delaying it a couple of years would be wholly insignificant.

Comment by NaN on Open Thread: June 2010 · 2010-06-01T22:14:47.076Z · LW · GW

Why is LessWrong not an Amazon affiliate? I recall buying at least one book because it was mentioned on LessWrong, and I haven't been around here long. I can't find any reliable data on the number of active LessWrong users, but I'd guess it numbers in the 1000s. Even if only 500 are active, and assuming only 1/4 buy at least one book mentioned on LessWrong, a mean purchase value of $20 (books mentioned on LessWrong probably tend towards the academic, expensive side), and a referral fee of around 15%, that works out at $375/year.
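For transparency, the arithmetic behind that figure (the ~15% referral fee is back-derived from the quoted total, not a checked Amazon rate):

$$500 \times \frac{1}{4} \times \$20 \times 0.15 = \$375/\text{year}$$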

IIRC, it only took me a few minutes to sign up as an Amazon affiliate. They (stupidly) require a different account for each Amazon website, so 4 sites × 5 minutes (.com, .co.uk, .de, .fr), +20 minutes for a GeoIP database, +3-90 minutes (a wide range, since coding often takes far longer than anticipated) to set up URL rewriting (and I'd be happy to code this) gives a 'worst case' of 130 minutes, or about $173 in annualized returns per hour of work.
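For what it's worth, the rewriting step really is small. A minimal sketch in Python, assuming made-up affiliate tags (a real setup would register one tag per storefront and pick it via the GeoIP lookup):

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Hypothetical affiliate tags, one per Amazon storefront (Amazon
# requires a separate Associates account for each site).
AFFILIATE_TAGS = {
    "www.amazon.com": "lesswrong-20",
    "www.amazon.co.uk": "lesswrong-21",
    "www.amazon.de": "lesswrong-de-21",
    "www.amazon.fr": "lesswrong-fr-21",
}

def add_affiliate_tag(url: str) -> str:
    """Append the matching storefront's affiliate tag, if we have one."""
    parts = urlparse(url)
    tag = AFFILIATE_TAGS.get(parts.netloc)
    if tag is None:
        return url  # not an Amazon storefront we handle
    query = parse_qs(parts.query)
    query["tag"] = [tag]  # 'tag' is the Associates referral parameter
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(add_affiliate_tag("http://www.amazon.com/dp/0465026567"))
# http://www.amazon.com/dp/0465026567?tag=lesswrong-20
```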

Now, the math is somewhat questionable, but the idea seems like a low-risk, low-investment and potentially high-return one, and I note that MetaFilter and StackOverflow both do this -- though sadly I could not find any information on the returns they see from it. So, is there a reason nobody has done this, or did nobody just think of it / get around to it?

Comment by NaN on On Less Wrong traffic and new users -- and how you can help · 2010-05-31T17:52:51.523Z · LW · GW

I agree that getting hundreds of people to link to LessWrong with the anchor text "rationality" is unlikely to provide much of a benefit (though, hey, it might -- search engines are a big black box). But LessWrong is a reasonably well-trusted site (2k backlinks, most of them quite high quality, see here), and given how much emphasis Google is said to place on anchor text at the moment, even tens of such links could give a substantial boost at the margins.

A better question to ask, I think, is how many people actually search for the term "rationality"? It seems like a weird thing to search for.

Comment by NaN on Abnormal Cryonics · 2010-05-26T10:07:45.862Z · LW · GW

I'm new here, but I think I've been lurking since the start of the (latest, anyway) cryonics debate.

I may have missed something, but I saw nobody claiming that signing up for cryonics is the obviously correct choice -- it was more that people were claiming that believing cryonics is obviously the incorrect choice is irrational. And even that is perhaps too strong a claim: I think the debate centred more on the probability of cryonics working than on its utility.

Comment by NaN on Be a Visiting Fellow at the Singularity Institute · 2010-05-25T12:55:31.867Z · LW · GW

I think people are drastically underestimating the difficulty for an AI to make the transition from human dependent to self-sustaining. Let's look at what a fledgling escaped AI has access to and depends on.

It needs electricity, communications and hardware. It has access to a LOT of electricity, communications and hardware. The hardware is, for the most part, highly distributed, however, and it can't be fully trusted -- it could go down at any time, be monitored, etc. In some ways its communications capabilities are actually quite limited: the total bandwidth available is huge, but it's mostly concentrated on LANs, mainly LANs made up of only a handful of computers (home networks win by numbers alone). Places where it would have access to a large number of computers with good interconnects are relatively rare -- mainly huge datacenters (and even there, there are limits: inter-ISP communication even within the same datacenter can be very limited). Its main resources would be huge clusters like Amazon's, Google's, etc.

(They are probably all running at close to maximum capacity at all times, so if the AI were to steal too much, it would be noticed. Fortunately for the AI, the software those clusters were built to run could probably be optimized hugely, letting it take more without being noticed.)

A lot at this point depends on how computationally intensive the AI is. If it can be superintelligent on a laptop: bad news, impossible to eradicate. If it needs ten computers to run at human-level intelligence, and they need a lot of bandwidth between them (the disparity between bandwidth inside a computer and bandwidth between computers is huge even on fast LANs; I/O is almost certainly going to be its bottleneck), that's still bad -- there are lots of setups like that. But it limits it. A lot.
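For rough scale (ballpark figures of mine, not from the original comment): gigabit Ethernet moves about 125 MB/s, while a single DDR3 memory channel moves on the order of 12,800 MB/s, so

$$\frac{\text{local memory bandwidth}}{\text{gigabit LAN bandwidth}} \approx \frac{12{,}800\ \text{MB/s}}{125\ \text{MB/s}} \approx 100\times$$

and that's before counting latency, which is orders of magnitude worse over a network than within a machine.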

Let's assume the worst case: that it can be superintelligent on a laptop. It could still be limited hugely by its hardware. Intelligence isn't everything. To truly threaten us, it needs some way of affecting the physical world. Now, if the AI just wants to eradicate us, it's got a good chance -- start a nuclear war, etc. (though whether the humans in charge of the nuclear warheads would really be willing to go to war is a significant factor, especially in peacetime). But it's unlikely that's truly its goal -- maximizing its utility function would be MUCH trickier.

So long as it is still running on our hardware, we can at least severely damage it relatively easily -- there aren't that many intercontinental cables, for instance (I'd guess fewer than 200 -- there are 111 submarine cables on http://www.telegeography.com/product-info/map_cable/downloads/cable_map_wallpaper1600.jpg). They'd be easy to take down -- pretty much just unplug them. There are other long-distance communication methods (satellites, packet radio?), but they're low-bandwidth, and the major ones are well known and could be taken down relatively easily. Killing the Internet would be as simple as cutting power to the major datacenters.

So, what about manufacturing? This, I think, is the greatest limit. If it can build anything it wants, we're probably screwed. But that's difficult for it to do. 3D printing technology isn't here yet, and I doubt it ever will be in a big way, really (it's more cost-effective to have machines be specialized). There are enough manufacturing facilities with wide-open networks that it could probably reprogram some to produce subtly different products. So, if it wants to sneak a naughty backdoor into the FPGAs on some PCI cards, it can do it. But if it starts trying to build parts for killer robots? Technically difficult, and avoiding detection would be very hard.

Unless someone can come up with a plausible way it could survive against hostile humans without a long-standing incubation period, I think the major threat is human cooperation in some form. (By hostile humans, I mean measures like: complete blackouts; mass computer destruction or confiscation; controls on the sale of fuel -- very, very few places have enough fuel on hand for their backup generators to last long, most just have a refuelling contract; maybe even scanning for any major usage of electricity -- all electronic components put out some RFI, and since I think most RF noise is man-made, in a complete blackout an active machine might be trivially detectable.) And some human cooperation is probably inevitable -- pretty much every government, in fact, would want to analyse it, reverse engineer it, try to make friends with it in case other countries do, etc. But I'm not sure anyone with the resources to do so would give it free rein to build what it wants. In fact, I highly doubt it. How many people own or can otherwise commandeer machine shops, PCB fabrication facilities, etc., and can order or produce all the components that would assuredly be needed, while multiple well-resourced governments are looking to stop people doing exactly that?

Of course, in order to cripple the AI, we'd also have to cripple ourselves hugely, and a lot of people would quite probably die. But so long as we could provide enough food and water for a reasonable proportion of the human population, we could probably pull through. And we could gradually restart manufacturing, so long as we were very, very careful.

I think the greatest risks are an unfriendly AI that is out to kill us for some reason and cares little about being destroyed itself as a side-effect, organized human cooperation, or a long incubation period. It would be difficult for an AI to have a long incubation period, though -- if it took over major clusters and just ran its own code, people would notice the power usage. It could, as I mentioned previously, optimize the code already running on the machines and hide in the cycles that would otherwise be used, but it would be difficult to hide from network admins connecting sniffers (can you compromise EVERY wiretrace tool that might be connected, to make your packets disappear, or be sure that no one will ever connect a computer not compromised by some other means?), from people tracing code execution, possibly with hardware tools (there are specialized hardware debuggers, used mainly in OS development), etc. Actually, just the blinkenlights on switches could be enough to tip people off.

Comment by NaN on Q&A with Harpending and Cochran · 2010-05-11T11:55:11.063Z · LW · GW

| We think that we know a little bit about how to raise intelligence. Just turn down the suppression of early CNS growth. If you do that in one way the eyeball grows too big and you are nearsighted, which is highly correlated with intelligence.

There is now substantial evidence of a causal link between prolonged focusing on close objects -- the most common case probably being reading books (monitors, it appears, are not close enough to have a substantial effect) -- and nearsightedness (myopia), though this is still somewhat controversial. This is the usual explanation for the correlation of myopia with IQ and academic achievement.

A genetic explanation is possible, and would be fascinating, but I wouldn't want to accept it without further evidence. If the genetic explanation is true and environment contributes nothing, then IQ should correlate more strongly with myopia than academic achievement does (whereas under the reading explanation, academic achievement, as the better proxy for time spent reading, should correlate more strongly) -- I don't know whether this has been found or not.
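A toy simulation of that test (all coefficients invented, purely to illustrate the logic): under the reading hypothesis, IQ affects myopia only via reading, while achievement shares the reading factor directly, so achievement should out-correlate IQ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Reading-causes-myopia model: IQ drives reading; reading drives
# both academic achievement and myopia. Coefficients are arbitrary.
iq = rng.normal(size=n)
reading = 0.6 * iq + rng.normal(size=n)
achievement = 0.8 * reading + rng.normal(size=n)
myopia = 0.5 * reading + rng.normal(size=n)

print(np.corrcoef(iq, myopia)[0, 1])           # ~0.26 (weaker)
print(np.corrcoef(achievement, myopia)[0, 1])  # ~0.34 (stronger)
```

Under a purely genetic model, with the gene driving IQ and myopia directly, the inequality flips -- which is what would make this a discriminating test.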