The error bars on this result make it meaningless. You should, at minimum, also consider how hot/successful she is relative to you (can she do better?) and whether the two of you have shared goals (what are her dreams and expectations? are they compatible with yours, and are you both making progress on them?). It would be extraordinarily improbable for her to forget you overnight, but if she broke up with you, she has likely wanted to do better for some time. The usual post-breakup advice is to focus on your own goals for a while.
I have greater than 5% confidence that Voldemort is three characters: Quirrell (via possession), Harry (via soul-copying ritual) and Dumbledore (via improved Imperius).
How magic really works in HPMOR, my guess: Spells are like functions in a computer program -- ways to manipulate data (the world) without understanding the underlying implementation (how the spell actually makes changes happen).

The next level up from the magical world is an enormous computer, such as the one described in Permutation City, except with quantum hardware that continuously and seamlessly recomputes the present and the preceding six hours. The machine's creator and friends copied themselves into this universe and gave themselves magic, implemented through spells/functions that change the physical world when triggered. One program provides a terminal, or some other way to create new spells, and appears to wizards who have become experienced enough with magic to meet the terminal spell's requirements. The Interdict of Merlin was created by one of these learned wizards, who decided the terminal was too easy to access after a newly ascended wizard made a programming error and erased Atlantis.

In this universe, the solution to the hard problem of consciousness is that NPCs are philosophical zombies and PCs are game-players from a universe one or more levels up from the magical universe. That is, Harry is fully conscious, but is actually an alien sitting in a virtual-reality console and suppressing part of their mind so as to experience only Harry's in-universe perspective. In the alien's universe, there is a satisfying and provable answer to the hard problem of consciousness.
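To make the analogy concrete, here's a purely hypothetical sketch in Python (the spell name, the world representation, and the effect are all my own inventions, not anything stated in the story):

    # The wizard sees only the interface: a name, a trigger, an effect.
    # The function body stands in for the hidden machinery of the
    # universe-computer, which no in-universe caster ever inspects.
    def lumos(world):
        world["light_level"] += 1  # hidden implementation detail
        return world

    world = {"light_level": 0}
    world = lumos(world)  # casting the spell: calling the function
    print(world)          # {'light_level': 1}

The point is just that a caller can reliably produce effects through an interface while remaining ignorant of everything one level down.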
EDIT: I'm seeing a lot of negative votes. I will argue my case if you tell me what's wrong.
I know cookies make me unhappy in the long run, but I enjoy eating them in the short run. I could name a bunch of parts of the cookie-eating experience that I like, such as the feeling of sleepiness and contentment that comes from eating a lot.
You could argue that any feeling is "brainwashing", meaning that my feelings are controlled by my physical brain, which is something separate from me. I am deeply uncomfortable with all of the current solutions to the hard problem of consciousness. If I am self-aware (that is, if I am not a philosophical zombie), then it seems like all matter must be aware in the same sense.
That sounds exciting too. I don't know enough about this field to get into a debate about whether to save the metaphorical whales or the metaphorical pandas first; both approaches are complicated. I am glad that MIRI exists, and I wish the researchers good luck.
My main point re: "steel-manning" the MIRI mission is that you need to make testable predictions and then test them, or else you're just doing philosophy and/or politics.
Stay in London, and study in the evenings if you want. Benjamin Franklin said "three removes is as bad as a fire": there's a high cost to rebuilding your social network. I'd guess it would take you about 18 months to fully build new friendships. I moved to a non-ideal city for work (twice!) and it set my career back by a couple of years. The cost of living in Glasgow is lower precisely because people would rather live in London.
If you want to truly maximize utility, you're making a false choice by looking only at these two jobs. Get back into grad school and work as hard as possible until you're in the top half of the class at a top school (or otherwise meet the "great hacker" criteria that Paul Graham describes on his website). Then start a fast-growing startup with one or two other outstanding hackers. When it stops growing, after two to ten years, sell out. You should then be in a really high-utility position where you can do massive good and/or enjoy novel luxuries.
I agree. Whatever process copies rational conclusions back into the subconscious emotional drivers of behavior doesn't seem to work too well. In my case, I enjoy cookies just about every day despite having no rational reason to eat them that often. Eating cookies does not fit into my long-term utility-maximizing plans, but I am reluctant to brainwash myself.
Thanks for the thoughtful reply!
What code (short of a fully functioning AGI) would be at all useful here?
Possible experiments could include:
Simulate Prisoner's Dilemma agents that can run each other's code (a minimal sketch appears after this list). Add features to the competition (e.g. group identification, resource gathering, paying a cost to improve intelligence) to better model a mixed society of humans and AIs. Try to simulate what happens when some agents gain much more processing power than others, and what conditions make this a winning strategy. If possible, match results to real-world examples (e.g. competition between people with different educational backgrounds). Based on these results, predict the returns to increasing intelligence for AIs.
Create an algorithm for a person to follow recommendations from information systems -- in other words, write a flowchart that would guide a person's daily life, including steps for looking up new information on the Internet and adding to the flowchart. Try using it. Compare the effectiveness of this approach with a similar approach using information systems from 10 years ago, and from 100 years ago (e.g. books). Based on these results, make a prediction for how quickly machine intelligence will become more powerful over time.
Identify currently used measures of machine intelligence, including tests normally used to measure humans. Then use Moore's Law and other data to predict how quickly machine intelligence will increase on those measures over time.
Write an expert system for making philosophical statements about itself.
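For the first experiment, here is a minimal, hypothetical sketch in Python of how such a simulation could start (the agent names, the payoff values, and the reduction of "running each other's code" to reading source text are all my own choices):

    import inspect
    import itertools

    # Standard Prisoner's Dilemma payoffs: (my_move, their_move) -> my score.
    PAYOFFS = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def always_defect(opponent_source):
        return "D"

    def always_cooperate(opponent_source):
        return "C"

    def clique_bot(opponent_source):
        # Cooperate only with agents whose source matches my own -- a crude
        # form of group identification. Note the fragility: a renamed copy
        # of this agent would be treated as an outsider.
        return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

    def play(agent_a, agent_b):
        # Each agent receives the other's source code before moving.
        src_a, src_b = inspect.getsource(agent_a), inspect.getsource(agent_b)
        move_a, move_b = agent_a(src_b), agent_b(src_a)
        return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]

    def tournament(agents):
        scores = {agent.__name__: 0 for agent in agents}
        # Include self-play so source-readers can recognize kin.
        for a, b in itertools.combinations_with_replacement(agents, 2):
            score_a, score_b = play(a, b)
            scores[a.__name__] += score_a
            if a is not b:
                scores[b.__name__] += score_b
        return scores

    if __name__ == "__main__":
        print(tournament([always_defect, always_cooperate, clique_bot]))

In this toy setup the source-reading agent outscores both unconditional strategies; the features listed above (resources, costly intelligence upgrades, unequal processing power) would then be added as perturbations to see which strategies remain winners.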
In general, when presenting a new method or applied theory, it is good practice to provide the most convincing data possible -- ideally experimental data or at least simulation data of a simple application.
having trouble predicting should, if anything, be a reason to be more worried rather than less.
You're right -- I am worried about the future, and I want to make accurate predictions, but it's a hard problem (which is no excuse). I hope you succeed in predicting the future. I assume your goal is a general theory that accurately assigns probabilities to future events, e.g. a totalitarian AI appearing. My point is that your theory will need to accurately model past false predictions as well as past true predictions.
The concern is that the first true AGI will self-modify itself to become far smarter and more capable of controlling the environment around it than anything else.
I agree that is a possible outcome. I expect multiple AIs of comparable strength to appear at around the same time, because I imagine the power of an AI depends primarily on its technology level and its access to resources. I expect multiple AIs (or a mix of AIs and humans) to cooperate to prevent any one agent from obtaining a monopoly and destroying all others, as human societies have often done (especially recently, though not always). I also expect AIs to stay at comparable technology levels, because it's much easier to steal a technology than to discover it in the first place.
If Ringmione is true, then I would assign over 50% probability to Dumbledore having noticed it and not called Harry out on it, in the same way that Dumbledore appeared to have noticed Harry in Azkaban and chose not to reveal it. I suspect Dumbledore is still just fighting the War, and believes that Harry is the key to defeating Voldemort and/or actually is Voldemort; Dumbledore did not reveal Ringmione because he believes Harry is trying to do the right thing, and revealing it would cause a disastrous confrontation.
Your arguments would be much more convincing if you showed results from actual code. In engineering fields, including control theory and computer science, papers that contain mathematical arguments but no test data are much more likely to have errors than papers that include test data, and most highly-cited papers include test data. In less polite language, you appear to be doing philosophy instead of science (science requires experimental data, while philosophy does not).
I imagine you have not actually written code because it seems too hard to do anything useful with today's hardware: after 50 years of Moore's law (assuming a doubling every two years), computers will execute roughly 30 million times as many operations per unit time as present-day computers. That is, a 2063 computer will do in 1 second what my 2013 computer can do in 1 year. You can close some of this gap by using time on a high-powered computing cluster and running for longer periods. At minimum, I would like to see you test your theories by examining the actual performance of real-world computer systems, such as search engines, as they perform tasks analogous to making high-level ethical decisions.
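Spelling out that arithmetic (the two-year doubling period is my assumption; the comment above fixes only the 50-year horizon and the resulting figures):

    doublings = 50 / 2                      # 25 doublings in 50 years
    speedup = 2 ** doublings                # ~3.4e7, i.e. roughly 30 million
    seconds_per_year = 365.25 * 24 * 3600   # ~3.2e7 seconds
    print(f"{speedup:.2e}, {seconds_per_year:.2e}")  # 3.36e+07, 3.16e+07

The two numbers nearly coincide, which is why a 30-million-fold speedup translates to one year of 2013 computation per 2063 second.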
Your examples about predicting the future are only useful if you identify trends by considering past predictions that turned out to be inaccurate alongside those that came true. The most exciting predictions about the future tend to be wrong, and the biggest advances tend to be unexpected.
I agree that this seems like an important area of research, though I can't confidently speculate about when human-level general AI will appear. As far as background reading, I enjoyed Marshall Brain's "Robotic Nation", an easy-to-read story intended to popularize the societal changes that expert systems will cause. I share his vision of a world where the increased productivity is used to deliver a very high minimum standard of living to everyone.
It appears that as technology improves, human lives become better and safer, and I expect this trend to continue. I am not convinced that AI is fundamentally different: in current societies, individuals with greatly differing intellectual capabilities and conflicting goals already coexist, and liberal democracy seems to work well for maintaining order and allowing incremental progress. If current trends continue, I would expect competing AIs to become unimaginably wealthy, while non-enhanced humans enjoy increasing welfare benefits. The failure mode I am most concerned about is a unified government turning evil (in other words, evolution stopping because the entire population becomes one unchanging organism). That risk appears to be reduced by existing antitrust laws (a political barrier to a unified government) and by the high likelihood of space colonization occurring before superhuman AI appears (a spatial barrier).
Hello! I'm here because...well, I've read all of HPMOR, and I'm looking for people who can help me find the truth and become more powerful. I work as an engineer and read textbooks for fun, so hopefully I can offer some small insights in return.
I'm not comfortable with death. I've signed up for cryonics, but I still perceive that option as risky. As a rough estimate, current medical research spending is about 3% of GDP and extends lifespans by about 2 years per decade. If that spending were increased to 30% of current GDP, and the returns scaled linearly, lifespans would grow by about 20 years per decade -- faster than calendar time -- and most of us would live forever while feeling increasingly healthy. Unfortunately, raising taxes to achieve this is not realistic: doubling taxes for an uncertain return is a hard sell, and I have been unable to find research quantifying the link between public research spending and healthcare technology improvements.

Another approach is inventing a technology that grows the overall economy by 10x, by creating a practical self-replicating robot. This is possible in principle (as demonstrated by Hod Lipson in 2006, and by FANUC's robot-arm factories daily), but I am currently not a good enough programmer to design and build a fully automated RepRap assembly system in a reasonable amount of time. Also, there are many smart and innovative people at Willow Garage, FANUC and similar organizations, and it seems unlikely I could outpace the slow and incremental progress of those groups.

A third option, trying to create superhuman AI to make self-replicating robots for me, is even more difficult and unlikely. A fourth option, not taking heroic responsibility, would make me uncomfortable because I'm not that optimistic about the future. As it is, having dropped out of a PhD program, I'm not confident in my ability to complete such a large project. Any practical help would be appreciated, as I would prefer not to rely on the untestable promises of quantum immortality, or on the faith that life is a computer game.
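For what it's worth, the extrapolation in the first paragraph reduces to this (the linear scaling of lifespan gains with spending is an assumption, not a measured fact):

    current_share = 0.03    # medical research as a share of GDP
    gain_per_decade = 2.0   # years of lifespan gained per decade today
    proposed_share = 0.30
    projected_gain = gain_per_decade * (proposed_share / current_share)
    print(projected_gain)       # 20.0 years of lifespan per decade
    print(projected_gain > 10)  # True: gains outpace calendar time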