I knew that those wise and good benefactors of humanity would turn out to have been warning us of the dangers of polyunsaturated fats all along.
They might want to mention it to people like my father, who, on the advice of his doctor, has been pretty much only eating polyunsaturated fats these last twenty years, for the good of his heart.
Or perhaps to McDonald's, who on the basis of a consumer-led campaign changed their famously good beef-dripping fried chips to vegetable-oil fried chips, coincidentally at about the time obesity and various other nasty diseases with no known cause really became fashionable in America.
Another thing linoleic acid does when there's oxygen around is polymerize into a varnish, which is why linseed oil (lin-oleic) is traditionally used to waterproof cricket bats.
It used to say 'do not eat' in quite large letters on the cricket-bat-varnish bottles. Presumably now it says 'heart-healthy!'.
I wouldn't just write off the naive anti-seed-oil position from a chemical point of view, either. Metabolism is absurdly complicated and finely tuned. Substituting a slightly different substrate into a poorly understood set of reactions and feedback loops is unlikely to go well.
There was very little linoleic acid in the diet we evolved to eat. Sure, it's essential in small quantities, but using it as a major energy source is likely a very bad idea a priori.
I don't buy this; the curvature of the sea is obvious to sailors, e.g. you see the tops of islands long before you see the beach, and indeed to anyone who has ever swum across a bay! Inland peoples might be able to believe the world is flat, but not anyone with boats.
A Great Man and an inspiration to me and to this community and to all thinking men.
God rest his soul in peace in Paradise.
alt-text is supposed to be: "I'm not even sure they've read Superintelligence"
Forgive me, I have strongly downvoted this dispassionate, interesting, well-written review of what sounds like a good book on an important subject because I want to keep politics out of Less Wrong.
This is the most hot-button of topics, and politics is the mind-killer. We have more important things to think about and I do not want to see any of our political capital used in this cause on either side.
Typo-wise, you have a few uses of it's (it is) where it should be its (possessive), and "When they Egyptians" should probably read "When the Egyptians".
I did enjoy your review. Thank you for writing it. Would you delete it and put it elsewhere?
Nicely done! I only come here for the humour these days.
Well, this is nice to see! Perhaps a little late, but still good news...
caches out
cashes out?
Haven't you just "fabricated an option" where it's possible to talk about politics on Less Wrong without it turning into a mind-killed clusterfuck? I mean yes, it would be lovely...
I wouldn't touch this stuff with someone else's bargepole. It looks like it takes the willpower out of starvation, and as the saying goes, you can starve yourself thin, but you can't starve yourself healthy.
I could be convinced, by many years of safety data and a well understood causal mechanism for both obesity and the action of these drugs, that that's wrong and that they really are a panacea. But I am certainly not currently convinced!
The question that needs answering about obesity is 'why on earth are people with enormous excess fat reserves feeling hungry?'. It's like having a car with the boot full of petrol in jerry cans while the 'fuel low' light is blinking.
depends on facts about physics and psychology
It does, and a superintelligence will understand those facts better than we do.
My basic argument is that there are probably mathematical limits on how fast it is possible to learn.
Doubtless there are! And limits to how much it is possible to learn from given data.
But I think they're surprisingly high, compared to how fast humans and other animals can do it.
There are theoretical limits to how fast you can multiply numbers, given a certain amount of processor power, but that doesn't mean that I'd back the entirety of human civilization to beat a ZX81 in a multiplication contest.
What you need to explain is why learning algorithms are a 'different sort of thing' to multiplication algorithms.
Maybe our brains are specialized to learning the sorts of things that came in handy when we were animals.
But I'd be a bit surprised if they were specialized to abstract reasoning or making scientific inferences.
All of RL’s successes, even the huge ones like AlphaGo (which beat the world champion at Go) or its successors, were not easy to train. For one thing, the process was very unstable and very sensitive to slight mistakes. The networks had to be designed with inductive biases specifically tuned to each problem.
And the end result was that there was no generalization. Every problem required you to rethink your approach from scratch. And an AI that mastered one task wouldn’t necessarily learn another one any faster.
I had the distinct impression that AlphaZero (the version of AlphaGo where they removed all the tweaks) could be left alone for an afternoon with the rules of almost any game in the same class as go, chess, shogi, checkers, noughts-and-crosses, Connect Four, Othello, etc., and teach itself up to superhuman performance.
In the case of chess, that involved rediscovering something like 400 years of human chess theorizing to become the strongest player in history, stronger than all previous hand-constructed chess programs.
In the case of go, I am told that it not only rediscovered a whole 2000-year history of go theory, but added previously undiscovered strategies. "Like getting a textbook from the future" is a quote I have heard.
That strikes me as neither slow nor ungeneral.
And there was enough information in the AlphaZero paper that it was replicated and improved on by the LeelaChessZero open-source project, so I don't think there can have been that many special tweaks needed?
This is great. Strong upvote!
Are you claiming that a physically plausible superintelligence couldn't infer the physical laws from a video, or that AIXI couldn't?
Those seem to be different claims and I wonder which of the two you're aiming at?
For example, you might be much smarter than me and a meteorologist, but you'd find it hard to predict the weather in a year's time better than me if it's a single-shot contest.
Sure, but I'd presumably be quite a lot better at predicting the weather in two days' time.
I think this is a great article, and the thesis is true.
The question is, how much intelligence is worth how much material?
Humans are so very slow and stupid compared to what is possible, and the world so complex and capable of surprising behaviour, that my intuition is that even a very modest intelligence advantage would be enough to win from almost any starting position.
You can bet your arse that any AI worthy of the name will act nice until it's already in a winning position.
I would.
If there's some intelligence threshold past which minds pretty much always draw against each other in chess even if there is a giant intelligence gap between them, I wouldn't be that surprised.
Just reinforcing this point. Chess is probably a draw for the same reason noughts-and-crosses is.
Grandmaster chess is pretty drawish. Computer chess is very drawish. Some people think that computer chess players are already near the standard where they could draw against God.
Noughts-and-crosses is a very simple game and can be formally solved by hand. Chess is only a bit less simple, even though it's probably beyond actual formal solution.
The general Game of Life is so very far beyond human capability that even a small intelligence advantage is probably decisive.
The "purpose" of most martial arts is to defeat other martial artists of roughly the same skill level, within the rules of the given martial art.
Optimizing for that is not the same as optimizing for general fighting. If you spent your time on the latter, you'd be less good at the former.
"Beginner's luck" is a thing in almost all games. It's usually what happens when someone tries a strategy so weird that the better player doesn't immediately understand what's going on.
The other day a low-rated chess player did something so weird in his opening that I didn't see the threat, and he managed to take one of my rooks.
That particular trap won't work on me again, and might not have worked the first time if I'd been playing someone I was more wary of.
I did eventually manage to recover and win, but it was very close, very fun, and I shook his hand wholeheartedly afterwards.
Every other game we've played I've just crushed him without effort.
About a year ago I lost in five moves to someone who tried the "Patzer Attack", which wouldn't work on most beginners. It was the first time I'd ever seen it. It worked once. It will never work on me again.
For a clear example of this, in endgames where I have a winning position but have little to no idea how to win, Stockfish's king will often head for the hills, in order to delay the coming mate as long as theoretically possible.
Making my win very easy because the computer's king isn't around to help out in defence.
This is not a theoretical difficulty! It makes it very difficult to practise endgames against the computer.
Paul, this is very thought provoking, and has caused me to update a little. But:
I loathe factory-farming, and I would spend a large fraction of my own resources to end it, if I could.
I believe that makes me unusually kind by human standards, and by your definition.
I like chickens, and I wish them well.
And yet I would not bat an eyelid at the thought of a future with no chickens in it.
I would not think that a perfect world could be improved by adding chickens.
And I would not trade a single happy human soul for an infinity of happy chickens.
I think that your single known example is not as benevolent as you think.
Zero-days are a thing, and hell, it's even possible that there are computers connected to the internet somewhere that don't get their patches in a timely manner.
I'm a complete innocent in all this. I've never needed to lose weight before, hence appealing for help here. And I don't know anything about Vilhjálmur Stefánsson or ketogenic diets in general.
I do know that sloth and gluttony aren't the explanation, because I have been a slothful glutton for most of my life and I never gained much weight, nor lost it in the long periods when I was a sporty glutton. That's gone wrong recently, hence my search for reasons and techniques.
Wikipedia seems to imply that Vilhjálmur Stefánsson was interested in Eskimo-style all-meat diets.
exfatloss seems to be deliberately holding the amount of protein low, and that does seem to be a load-bearing part of his approach. Also the anti-polyunsaturated fats bit, which I find intriguing because it's such a good theory, and yet it makes predictions which don't seem to be true.
Ultimately, the new GLP-1 agonist weight-loss drugs...
I'm irritatingly fat, not dying of morbid obesity. I wouldn't touch such things with someone else's bargepole, absent twenty years of widespread use and researchers motivated to find the unintended consequences.
Chemical patches as a remedy for chemical poisoning are OK, if that's the best we can do, but unless the problem is some permanent environmental contaminant, I'm sure we can do better than that!
That seems perfectly reasonable! Enjoy the sunshine while you still can.
Wonderful news! Good old Elon! I was worried that there might be some last fragment of hope lying around uncrushed.
All the computers on my boat are solar-powered. It'll be fine...
You're an AI. You've got an internet connection. You can make copies of yourself. You're capable of reading instructions for how to take over insecure computers.
Within about a minute you're on almost every computer in the world.
Fight that.
<nogenies>
Yeah, wouldn't it be great if there was some way to not have a nuclear war or build AI or have everyone die of bird flu?
</nogenies>
I think I know how this game goes.
This is a new one! "Computers will never be able to sort lists of words by arbitrary keys."
Does it require quantum microtubules in the incomprehensibly complex neuron to leverage uncomputable mental powers that can defy Gödel's proof or something?
Yeah, wouldn't it be great if there was some way to not have a nuclear war or build AI?
If anyone can think of one, they'll have my full support.
I think this is probably right. When all hope is gone, try just telling people the truth and see what happens. I don't expect it will work, I don't expect Eliezer expects it to work, but it may be our last chance to stop it.
And it does seem to have got a bit of traction. A very non-technical friend just sent me the link, on the basis that she knows "I've always been a bit worried about that sort of thing."
Hello Rufus! Welcome to Less Wrong!
I totally get where you're coming from, and if I thought the chance of doom was 1% I'd say "full speed ahead!"
As it is, at fifty-three years old, I'm one of the corpses I'm prepared to throw on the pile to stop AI.
The "bribe" I require is several OOMs more money invested into radical life extension research
Hell yes. That's been needed rather urgently for a while now.
The audience here is mainly Americans so you might want to add an explicit sarcasm tag.
"Fault" seems a strange phrasing. If your problem was that one of your nerves was misfiring, so you were in chronic pain, would you describe that as "your fault"? (In the sense of technical fault/malfunction, that would absolutely be your "fault", but "your fault" usually carries moral implications.)
Where would you place the fault?
I suspect everyone can relate in that everyone has felt this at some point, or even at a few memorable points.
Duncan, did you just deny my existence? (Don't worry, I don't mind a bit. :-) )
I'm a grade-A weirdo; my own family and friends affirm this. Only the other day someone on Less Wrong (!) called me a rambling madman. My nickname in my favourite cricket club/drinking society was Space Cadet.
And I'm rather smug about this. Everyone else just doesn't seem very good at thinking. Even if they're right they're usually right by accident. Even the clever ones seem to have some sort of blinders on. They don't even take their own ideas seriously.
Why would I be upset by being able to see things they can't see, think thoughts they can't think? That doesn't seem to be the sort of thing that could hurt me.
For most of your essay I was thinking: "Is he just mistaking metaphorical 'everyone' for literal 'everyone'?". But in the comments you say that's not what you meant at all. And I don't even understand that. Surely, if you replace 'everyone' with 'most people' throughout, your existence is not being denied?
And if your existence was being denied, why would that be a problem? If someone came straight up to me and said "You don't exist", I'd just think they were mad, it wouldn't hurt.
I read that you're in pain and it puzzles me. I've always wondered if the bit of my brain that is supposed to feel pressure-to-conform is malformed. I notice it, but it doesn't seem powerful. Maybe yours is in perfect working order? Is it that you really really want to fit in, but in order to do so you'd have to be someone else, and that hurts?
Or have I failed to extract from your essay the meaning you were trying to put into it?
dalmations->dalmatians?
Did someone fiddle with Charlotte?
I went to talk to her after reading this and she was great fun, I quite see how you fell for her.
But I tried again just now and she seems a pale shadow of her former cheerful self; it's the difference between speaking to a human PR drone in her corporate capacity and meeting her at a party where she's had a couple.
Doesn't any such argument also imply that you should commit suicide?
These seemed good (they taste of lavender), but the person trying them got no effect:
https://www.amazon.co.uk/gp/product/B06XPLTLLN/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
Lindens Lavender Essential Oil 80mg Capsules
The person who had it work for her tried something purchased from a shop (Herbal Calms, maybe?): lavender oil in vegetable oil in little capsules. She reports that she can get to sleep now, and that if she pops a capsule first she can face doing things that anxiety previously made impossible.
That makes perfect sense, thank you. And maybe, if we've already got the necessary utility function, stability under self-improvement might be solvable as if it were just a really difficult maths problem. It doesn't look that difficult to me, a priori, to change your cognitive abilities whilst keeping your goals.
AlphaZero got its giant inscrutable matrices by working from a straightforward start of 'checkmate is good'. I can imagine something like AlphaZero designing a better AlphaZero (AlphaOne?) and handing over the clean definition of 'checkmate is good' and trusting its successor to work out the details better than it could itself.
I get cleverer if I use pencil and paper, it doesn't seem to redefine what's good when I do. And no-one stopped liking diamonds when we worked out that carbon atoms weren't fundamental objects.
---
My point is that the necessary utility function is the hard bit. It doesn't look anything like a maths problem to me, *and* we can't sneak up on it iteratively with a great mass of patches until it's good enough.
We've been paying a reasonable amount of attention to 'what is good?' for at least two thousand years, and in all that time no-one came up with anything remotely sensible-sounding.
I would doubt that the question meant anything, if it were not that I can often say which of two possible scenarios I prefer. And I notice that other people often have the same preference.
I do think that Eliezer thinks that, given the Groundhog Day version of the problem (restart every time you do something that doesn't work out), we'd be able to pull it off.
I doubt that even that's true. 'Doesn't work out' is too nebulous.
But at this point I guess we're talking only about Eliezer's internal thoughts, and I have no insight there. I was attacking a direct quote from the podcast, but maybe I'm misinterpreting something that wasn't meant to bear much weight.
What I am not convinced of is that, given all those assumptions being true, certain doom necessarily follows, or that there is no possible humanly tractable scheme which avoids doom in whatever time we have left.
OK, cool, I mean "just not building the AI" is a good way to avoid doom, and that still seems at least possible, so we're maybe on the same page there.
And I think you got what I was trying to say, solving 1 and/or 2 can't be done iteratively or by patching together a huge list of desiderata. We have to solve philosophy somehow, without superintelligent help. As I say, that looks like the harder part to me.
Please don’t confuse me for someone who doesn’t often worry about these things.
I promise I'll try not to!
A good guess, and thank you for the reference, but (although I admit that the prospect of global imminent doom is somewhat anxious-making), anxiety isn't a state of mind I'm terribly familiar with personally. I'm very emotionally stable usually, and I lost all hope years ago. It doesn't bother me much.
It's more that I have the 'taking ideas seriously' thing in full measure; once I get an *idée fixe* I can't let it go until I've solved it. AI Doom is currently third on the list after chess and the seed oil nonsense, but the whole Bing/Sydney thing started me thinking about it again and someone emailed me Eliezer's podcast, you know how it goes.
I do have a couple of friends who suffer greatly from Anxiety Disorder, though, and you have my sympathies, especially if you're interested in all this stuff! Honestly, run away; there's nothing to be done and you have a life to live.
Totally off topic, but have you tried lavender pills? I started recommending them to friends after Scott Alexander said they might work, and out of three people I've got one total failure, one refusal to take them for good reasons, and one complete fix! Obviously do your own research as to side effects; just because it's 'natural' doesn't mean it's safe. The main one is that if you're a girl it will interfere with your hormones and might cause miscarriages.
To be clear, even if I were somehow granted vivid knowledge of the future through precognition, you’d still seem crazy to me at this point.
(I assume you mean vivid knowledge of the future in which we are destroyed, obviously in the case where everything goes well I've got some problem with my reasoning)
That's a good distinction to make, a man can be right for the wrong reasons.
Even as a doomer among doomers, you, with respect, come off as a rambling madman.
Certainly mad enough to take "madman" as a compliment, thank you!
I'd be interested if you know a general method I could use to tell if I'm mad. The only time I actually know it happened (thyroid overdose caused a manic episode) I noticed pretty quickly and sought help. What test should I try today?
Obviously "everyone disagrees with me and I can't convince most people" is a bad sign. But after long and patient effort I have convinced a number of unfortunates in my circle of friends. Some of whom have always seemed pretty sharp to me.
And you must admit, the field as a whole seems to be coming round to my point of view!
Rambling I do not take as a compliment. But nevertheless I thank you for the feedback.
I thought I'd written the original post pretty clearly and succinctly. If not, advice on how to write more clearly is always welcome. If you get my argument, can you steelman it?
I’m guessing you’re operating on strong intuition here
Your guess is correct, I literally haven't shifted my position on all this since 2010. Except to notice that everything's happening much faster than I expected it to. Thirteen years ago I expected this to kill our children. Now I worry that it's going to kill my parents. AlphaZero was the fire alarm for me. General Game Playing was one of the more important sub-problems.
I agree that if you haven't changed your mind for thirteen years in a field that's moving fast, you're probably stuck.
I think my basic intuitions are:
"It's a terrible idea to create a really strong mind that doesn't like you."
"Really strong minds are physically possible, humans are nowhere near."
"Human-level AI is easy because evolution did it to us, quickly, and evolution is stupid."
"Recursive self-improvement is possible."
Which of these four things do you disagree with? Or do you think the four together are insufficient?
None. But if a problem's not solvable in an easy case, it's not solvable in a harder case.
Same argument as for thinking about Solomonoff Induction or Halting Oracles. If you can't even do it with magic powers, that tells you something about what you can really do.
I'm not proposing solutions here. I think we face an insurmountable opportunity.
But for some reason I don't understand, I am driven to stare the problem in the face in its full difficulty.
I like your phrasing better, but I think it just hides some magic.
In this situation I think we get an AI that repeatedly kills 999,999 people. It's just the nearest unblocked path problem.
The exact reset/restart/turn it off and try again condition matters, and nothing works unless the reset condition is 'that isn't going to do something we approve of'.
The only sense I can make of the idea is 'If we already had a friendly AI to protect us while we played, we could work out how to build a friendly AI'.
I don't think we could iterate to a good outcome, even if we had magic powers of iteration.
Your version makes it strictly harder than the 'Groundhog Day with Memories Intact' version. And I don't think we could solve that version.