Comments
The easiest way is probably to build a modestly-sized company doing software and then find a way to destabilize the government and cause hyperinflation.
I think the rule of thumb should be: if your AI could be intentionally deployed to take over the world, it's highly likely to do so unintentionally.
I was able to get this one on my first try:
---
Prompt: I am teaching my son a simple logic problem. "All blargs are schminky," I explain. "But not all schminkies are blarg."
GPT-3: "What's the difference?" he asks, confused. "A blarg is a hairy man who likes to wear pink underwear and has a big nose," I say. "And a schminky is an ugly person with no sense of fashion or hygiene."
---
Really impressive! Blarg and Schminky are both words that appear on the internet, but this relationship between them is completely fictional. GPT-3 understands the logic perfectly and generates novel examples that demonstrate it!
I then re-rolled several times, and got a bunch of nonsense. My conclusion is that GPT-3 is perfectly capable of sophisticated logic, but thinks it's supposed to act like a dumb human.
A few plausible limited abilities that could provide decisive first-mover advantages:
- The ability to remotely take control of any networked computer
- The ability to defeat all conventional cryptography, which would provide a decisive advantage in the type of conflict we're currently seeing
- The ability to reliably predict market price movements
One way to employ Space Mom might be in deciding how confidently you believe expert consensus, particularly given that experts rarely state their confidence levels. For instance:
A. Expert consensus says that horoscopes are bunk. I believe it! I have a tight confidence interval on that.
B. Expert consensus says that hospitals provide significant value. I believe that too! But thanks to Robin Hanson, I'm less confident in it. Maybe we're mostly wasting our healthcare dollars? Probably not, but I'll keep that door open in my mind.
----
Separately, I think the frustrating thing about Hanson's piece was that he seemed to be making an isolated demand for rigor: that Eliezer prove, in an absolute sense, that he knows he is more rational than average before he gets his "disagreement license."
"You could be deceiving yourself about having valid evidence or the ability to rationally consider it" is a fully general argument against anything, and that's what it felt like Hanson was using, particularly since Eliezer specifically mentioned checking his calibration against the real world on a regular basis precisely to test those assumptions.
Isn't this true in a somewhat weaker form? It takes individuals and groups putting in effort at personal risk to move society forward. The fact that we are stuck in inadequate equilibria is evidence that we have not progressed as far as we could.
Scientists moving from Elsevier to open access happened because enough of them cared enough to put in the effort and take the risk to their personal success. If they had cared a little bit more on average, it would have happened earlier. If they had cared a little less, maybe it would have taken a few more years.
If humans had 10% more instinct for altruism, how many more of these coordination problems would already be solved? There is a deficit of caring about solving civilizational problems. That doesn't change the observation that most people are reacting to their own incentives, and we can't really blame them.
Similar to some of the other ideas, but here are my framings:
Virtually all of the space in the universe has been taken over by superintelligences. We find ourselves observing the universe from one of the rare uncolonized areas because it would be impossible for us to exist in one of the colonized areas. Thus, it shouldn't be too surprising that our little area of non-colonization is just now popping out a new superintelligence. The most likely outcome for an intelligent species is to watch the area around it become colonized while it cannot develop fast enough to catch up.
A Dyson-sphere-level intelligence knows basically everything. There is a limit to knowledge and power, and it can be approached. Once a species has achieved a certain level of power, it simply doesn't need to continue expanding in order to guarantee its safety and the fulfillment of its values. Continued expansion has diminishing returns, and it has other values or goals that counterbalance any tiny desire to keep expanding.
My real solution was not to own a car at all. Feel free to discount my advice appropriately!
I don't have the knowledge to give a full post, but I absolutely hate car repair. And if you buy a used car, there's a good chance that someone is selling it because it has maintenance issues. This happened to me, and no matter how many times I took the car to the mechanic it just kept having problems.
On the other hand, new cars carry a huge extra price tag just because they're new. So the classic advice is to never buy a new car, because it loses a ton of value the moment you drive it off the lot.
Here are a couple ideas for how to handle this:
Buy a car that's just off a 2- or 3-year lease. It's probably in great shape and is less likely to be a lemon. There are companies that only sell off-lease cars.
Assume a lease that's in its final year (at http://www.swapalease.com/lease/search.aspx?maxmo=12, for example). Then you get a trial period of 4-12 months and the option to buy the car at the end. This way you'll know whether you like the car and whether it has any issues. The important thing to check is that the "residual price" they charge for buying the car is reasonable. See this article for more info on that: http://www.edmunds.com/car-leasing/buying-your-leased-car.html
There are a ton of articles out there on how to negotiate a car deal, but one suggestion that might be worth trying is to negotiate and then leave and come back the next day to make the purchase. In the process of walking out you'll probably get the best deal they're going to offer. You can always just come back ten minutes later and make the purchase--they're not going to mind and the deal isn't going to expire (even if they say it is).
This seems like a lot of focus on MIRI sending good signals to outsiders. The "publish or perish" treadmill of academia is exactly why privately funded organizations like MIRI are needed.
The things that su3su2u1 wants MIRI to be already exist in academia. The whole point of MIRI is to create an organization of a type that doesn't currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you're going to get a lot of low-quality publications. Citations are only slightly better, especially if you're focused on ignored areas of research.
If you have outside-view criticisms of an organization and you're suddenly put in charge of it, the first thing you have to do is check the new inside-view information available and see what's really going on.
You might want to examine what sort of in-group out-group dynamics are at play here, as well as some related issues. I know I run into these things frequently--I find the best defense mechanism for me is to try to examine the root of where feelings come from originally, and why certain ideas are so threatening.
Some questions that you can ask yourself:
- Are these claims (or their claimants) subtly implying that I am in a group of "the bad guys"?
- Is part of my identity wrapped up in the things that these claims are against?
- Do I have a gut instinct that the claims are being made in bad faith or through motivated reasoning?
- If I accept these claims as true, would I need to dramatically reevaluate my worldview?
- If everyone accepted these claims as true, would the world change in a way that I find threatening or troubling?
None of these will refute the claims, but they may help you understand your defensiveness.
I find it helpful to remind myself that I don't need to have a strongly held opinion on everything. In fact, it's good to be able to say "I don't really know" about all the things you're not an expert in.
Geothermal or similar cooling requires a pretty significant capital investment in order to work. My guess is that a basic air conditioning unit is a cheaper and simpler fix in most cases.
The problem is that even that fix may be out of the reach of many residents of Karachi.
Maybe the elder civs aren't either. It might take billions of years to convert an entire light cone into dark computronium. And they're 84.5% of the way done.
I'm guessing the issue with this is that the proportion of dark matter doesn't change if you look at older or younger astronomical features.
It would be very unusual indeed if the element distribution of optimal computronium exactly matched that of a typical solar system.
But if it were not optimal computronium, just the easiest-to-build computronium, it would be made up of whatever was available in the local area.
META: I'd like to suggest having a separate thread for each publication. These attract far more interest than any other threads, and after the first 24 hours the top comments are set and there's little new discussion.
There aren't very many threads posted in discussion these days, so it's not like there is other good content that will be crowded out by one new thread every 1-3 days.
Quirrell seems to be on the road to getting the Philosopher's Stone. It's certainly possible that he will fail, or that Harry (or a time-turned Cedric Diggory) will manage to swipe it at the last minute. But with around 80k words left to go, there doesn't seem to be enough story left for Harry to simply get the Stone in the next couple of chapters.
I draw your attention to a few quotes concerning the Philosopher's Stone:
His strongest road to life is the Philosopher’s Stone, which Flamel assures me that not even Voldemort could create on his own; by that road he would rise greater and more terrible than ever before. (Chapter 61)
“It’s not a secret.” Hermione flipped the page, showing Harry the diagrams. “The instructions are right on the next page. It’s just so difficult that only Nicholas Flamel’s done it.” (Chapter 87)
“I was looking to see if there was anything here I could figure out how to do. I thought, maybe the difficult part about making a Philosopher’s Stone was that the alchemical circle had to be super precise, and I could get it right by using a Muggle microscope—” “That’s brilliant, Hermione!” The boy rapidly drew his wand, said “Quietus,” and then continued after the small noises of the rowdier books had died down. “Even if the Philosopher’s Stone is just a myth, the same trick might work for other difficult alchemies—” “Well, it can’t work,” Hermione said. She’d flown across the library to look up the only book on alchemy that wasn’t in the Restricted Section. And then—she remembered the crushing letdown, all the sudden hope dissipating like mist. “Because all alchemical circles have to be drawn ‘to the fineness of a child’s hair’, it isn’t any finer for some alchemies than others. And wizards have Omnioculars, and I haven’t heard of any spells where you use Omnioculars to magnify things and do them exactly. I should’ve realized that!” (Chapter 87)
So we have multiple mentions of the possibility of creating a Philosopher's Stone. We also have Quirrell's promise not to kill anyone within Hogwarts for a week. And Flamel may still be out there, with the knowledge of how he created the Stone in the first place.
All this leads to the possibility that Quirrell gets hold of the current Philosopher's Stone, and Harry learns enough from seeing the Stone in person to be able to recreate it using a combination of magic and technology.
You can't transfigure anything that doesn't exist yet, so just having a Stone doesn't mean an instant singularity. You can't just will a superwizard or an AI into existence. This leaves plenty of space for a war between two sides, both of which have permanent transfiguration at their disposal.
Apparently Professors can cast memory charms without setting off the wards.
The great vacation sounds to me like it ends with me being killed and another version of me being created. I realize that these issues of consciousness and continuity are far from settled, but at this point that's my best guess. Incidentally, if anyone thinks there's a solid argument explaining what does and doesn't count as "me" and why, I'd be interested to hear it. Maybe there's a way to dissolve the question?
In any event, I wasn't able to easily choose between one or the other. Wireheading sounds pretty good to me.
RottenTomatoes has much broader ratings. The current box office hits range from 7% to 94%. This is because they aggregate binary "positive" and "negative" reviews. As jaime2000 notes, YouTube has switched to a similar rating system, and it seems to keep things very sensitive.
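For what it's worth, here's a toy simulation (my own made-up numbers, not RottenTomatoes' actual data) of why a fraction-of-positive-reviews metric spreads movies out more than an average rating does:

```python
import random

random.seed(0)

# Each critic gives a graded 0-10 score; a movie's average rating compresses
# toward the middle, while the fraction of "positive" reviews (score >= 6) --
# the Rotten Tomatoes-style metric -- swings much more widely.
def review_scores(movie_quality, n_critics=100, noise=1.5):
    return [min(10, max(0, random.gauss(movie_quality, noise))) for _ in range(n_critics)]

for quality in (4.5, 5.5, 6.5, 7.5):
    scores = review_scores(quality)
    mean_rating = sum(scores) / len(scores)
    percent_positive = 100 * sum(s >= 6 for s in scores) / len(scores)
    print(f"quality {quality}: mean {mean_rating:.1f}/10, {percent_positive:.0f}% positive")
```

The mean ratings stay bunched within a few points, while the percent-positive numbers run from near the teens to the eighties.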
This doesn't really tell us a lot about how people predict others' success. The information has been intentionally limited to a very high degree. It's basically asking the test participants "This individual usually scores an 87. What do you expect her to score next time?" All of the interactions that could potentially create bias have been artificially stripped away by the experiment.
This means that participants are forced by the experimental setup to use Outside View, when they could easily be fooled into taking the Inside View and being swayed by perceptions of the student's diligence, charisma, etc. The subject would probably be more optimistic than average about themselves, but the others' predictions might not be nearly as accurate if you gave them more interaction with the subject.
In baseball prediction, it has been demonstrated that a simple weighted average with an age factor is nearly the best predictor of future performance. Watching the games and getting to know the players in most cases makes prediction worse. [I can't easily find a citation for this, but I think it came originally from articles at baseballprospectus.com]
This really just leaves us with "use outside view to predict performance," which is useful but not necessarily novel.
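For concreteness, here's roughly the shape of such a projection. This is only a sketch: the weights and the age adjustment are made up for illustration, loosely in the spirit of the "Marcel" projection system rather than taken from any published source.

```python
def project_next_season(recent_stats, age, weights=(5, 4, 3), peak_age=27):
    """Weighted average of the last three seasons, nudged by an age factor."""
    weighted = sum(w * s for w, s in zip(weights, recent_stats))
    baseline = weighted / sum(weights)
    # Hypothetical age adjustment: decline ~0.5% per year past the peak,
    # improve slightly before it.
    age_factor = 1 - 0.005 * (age - peak_age)
    return baseline * age_factor

# e.g., batting average over the last three seasons, most recent first
print(project_next_season([0.290, 0.275, 0.300], age=31))
```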
Trilemma maybe?
I was recently linked to this Wired article from a few months back on new results in the Bohmian interpretation of Quantum Mechanics: http://www.wired.com/2014/06/the-new-quantum-reality/
Should we be taking this seriously? The ability to duplicate the double slit experiment at classical scale is pretty impressive.
Or maybe this is still just wishful thinking trying to escape the weirdnesses of the Copenhagen and Many Worlds interpretations.
The most standard business tradeoff is Cheap vs Fast vs Good, which typically you're only supposed to be able to get two of.
Does anyone have experience with Inositol? It was mentioned recently on one of the better parts of the website no one should ever go to, and I just picked up a bottle of it. It seems like it might help with pretty much anything and doesn't have any downsides . . . which makes me a bit suspicious.
In some sense I think General Intelligence may contain Rationality. We're just playing definition games here, but I think my definitions match the general LW/Rationality Community usage.
An agent which perfectly plays a solved game (http://en.wikipedia.org/wiki/Solved_game) is perfectly rational. But its intelligence is limited, because it can only accept a limited type of input: the states of a tic-tac-toe board, for instance.
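To make the tic-tac-toe example concrete, here's a minimal sketch of such a perfect player (my own toy code): plain minimax over the full game tree. Within its tiny domain it never makes a suboptimal move, but a nine-square board is the only input it can handle.

```python
def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, move) from `player`'s perspective: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0, None  # draw
    opponent = 'O' if player == 'X' else 'X'
    best_value, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m+1:]
        value, _ = minimax(child, opponent)
        value = -value  # the opponent's best outcome is our worst
        if value > best_value:
            best_value, best_move = value, m
    return best_value, best_move

# X to move on an empty board; perfect play guarantees at least a draw.
value, move = minimax(' ' * 9, 'X')
print(value, move)  # 0 (draw with best play) and an optimal opening square
```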
We can certainly point to people who are extremely intelligent but quite irrational in some respects--but if you increased their rationality without making any other changes I think we would also say that they became more intelligent. If you examine their actions, you should expect to see that they are acting rationally in most areas, but have some spheres where rationality fails them.
This is because, in my definition at least:
Intelligence = Rationality + Other Stuff
So rationality is one component of a larger concept of Intelligence.
General Intelligence is the ability of an agent to take inputs from the world, compare them to a preferred state of the world (goals), and take actions that make that state of the world more likely to occur.
Rationality is how accurate and precise that agent is, relative to its goals and resources.
General Intelligence includes this, but also has concerns such as
- being able to accept a wide variety of inputs
- having lots of processing power
- using that processing power efficiently
I don't know if this covers it 100%, but this seems like it matches general usage to me.
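If it helps, here's the kind of toy sketch I have in mind for that definition (everything here is hypothetical illustration, not a claim about real agents): the agent senses the state, scores candidate actions against its goal, and picks the one predicted to move the world closest to the preferred state.

```python
def simulate(state, action):
    """Toy world model: the state is just a number, and actions nudge it."""
    return state + action

def goal_score(state, goal=10):
    """Higher is better: how close the state is to the preferred state (the goal)."""
    return -abs(goal - state)

def choose_action(state, actions=(-1, 0, 1)):
    # "Rationality" here is just: pick the action whose predicted outcome
    # scores best against the goal, given the agent's (very limited) model.
    return max(actions, key=lambda a: goal_score(simulate(state, a)))

state = 3
for _ in range(10):
    state = simulate(state, choose_action(state))
print(state)  # drifts toward the preferred state of 10
```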
I suppose if you really can't stand the main character, there's not much point in reading the thing.
I was somewhat aggravated by the first few chapters, in particular the conversation between Harry and McGonagall about the medical kit. Was that one where you had your aggravated reaction?
I found myself sympathizing with both sides, and wishing Harry would just shut up--and then catching myself and thinking "but he's completely right. And how can he back down on this when lives are potentially at stake, just to make her feel better?"
I would go even further and point out how Harry's arrogance is good for the story. Here's my approach to this critique:
"You're absolutely right that Harry!HPMOR is arrogant and condescending. It is a clear character flaw, and repeatedly gets in the way of his success. As part of a work of fiction, this is exactly how things should be. All people have flaws, and a story with a character with not flaws wouldn't be interesting to read!
Harry suffers significantly due to this trait, which is precisely what a good author does with their characters.
Later on there is an entire section dedicated to Harry learning "how to lose," and growing to not be quite as blind in this way. If his character didn't have anywhere to develop, it wouldn't be a very good story!"
Agreed on all points.
It sounds like we're largely on the same page, noting that what counts as "disastrous" can be somewhat subjective.
Anytime you're thinking about buying insurance, double check whether it actually makes more sense to self-insure. It may be better to put all the money you would otherwise spend on insurance into a "rainy day fund" rather than buying ten different types of insurance.
In general, if you can financially survive the bad thing, then buying insurance isn't a good idea. This is why it almost never makes sense to insure a $1000 computer or get the "extended warranty." Just save all the money you would spend on extended warranties on your devices, and if it breaks pay out of pocket to repair or get a new one.
This is a harshly rational view, so I certainly appreciate that some people get "peace of mind" from having insurance, which can have a real value.
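To make the expected-value comparison concrete, here's a back-of-the-envelope sketch; every number below is hypothetical:

```python
warranty_cost = 150          # extended warranty on a $1000 computer
failure_probability = 0.08   # chance it breaks during the warranty period
average_repair_cost = 600    # typical out-of-pocket repair/replacement cost

expected_loss_uninsured = failure_probability * average_repair_cost
print(expected_loss_uninsured)  # 48.0 -- far less than the $150 warranty

# Insurance only clearly wins when you couldn't absorb the loss yourself,
# which is rarely true for a $1000 computer but often true for a house.
```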
In the publishing industry, it is emphatically not the case that a major marketing campaign can sell millions of books by a random unknown author. It's nearly impossible to replicate that success even with an amazing book!
For all its flaws (and it has many), Fifty Shades had something that the market was ready for. Literary financial successes like this happen only a couple times a decade.
Isn't that a necessary part of steelmanning an argument you disagree with? My understanding is that you strengthen all the parts that you can think of to strengthen, but ultimately have to leave in the bit that you think is in error and can't be salvaged.
Once you've steelmanned, there should still be something that you disagree with. Otherwise you're not steelmanning, you're just making an argument you believe in.
If the five year old can't understand, then I think "Yes" is a completely decent answer to this question.
If I were in this situation, I would write letters to the child to be delivered/opened as they grew older. This way I would still continue to have an active effect on their life. We "exist" to other people when we have measurable effects on them, so this would be a way to continue to love them in a unidirectional way.
That depends on whether you think that: a) the past ceases to exist as time passes, or b) the universe is all of the past and all of the future, and we just happen to experience it in a certain chronological order
The past may still be "there," but inaccessible to us. So the answer to this question is probably to dissolve it. In one sense, I won't still love you. In another, my love will always exist and always continue to have an effect on you.
I'm not disagreeing with the general thrust of your comment, which I think makes a lot of sense.
But it is not at all required that an AGI start out with the ability to parse human languages effectively. An AGI is an alien. It might grow up with a completely different sort of intelligence, and only at the late stages of growth gain the ability to interpret and model human thoughts and languages.
We consider "write fizzbuzz from a description" to be a basic task of intelligence because it is for humans. But humans are the most complicated machines in the solar system, and we are naturally good at dealing with other humans because we instinctively understand them to some extent. An AGI may be able to accomplish quite a lot before it can comprehend human-style intelligence through raw general intelligence and massive amounts of data and study.
It's hard to judge just how important it is, because I have fairly regular access to it. However, food options definitely figure into long-term plans. For instance, the good food options around my office are a small but very real benefit that helps keep me in my current job. Similarly, while plenty of things can trump food, I would see the lack of quality food as a major downside to volunteering to live in the first colony on Mars. Which doesn't mean it would be decisive, of course.
I will suppress urges to eat in order to have the optimal experience at a good meal. I like to build up a real amount of hunger before I eat, as I find that a more pleasant experience than grazing frequently.
I try to respect the hedonist inside me, without allowing him to be in control. But I think I'm starting to lean pro-wireheading, so feel free to discount me on that account.
I'm pretty confident that I have a strong terminal goal of "have the physiological experience of eating delicious barbecue." I have it in both near and far mode, and it remains even when it is disadvantageous in many other ways. Furthermore, I have it much more strongly than anyone I know personally, so it's unlikely to be a function of peer pressure.
That said, my longer-term goals seem to be a web of both terminal and instrumental values. Many things are terminal goals as well as having instrumental value. Sex is a good in itself but also feeds other big-picture psychological and social needs.
Less Wrongers voting here are primed to include in their calculations how others outside of LW react to different terms. I interpreted "best sounding" as "which will be the most effective term," and imagine others did as well. Strategic thinking is kind of our thing.
Is the Turing Test really all that useful or important? I can easily imagine an AI powerful beyond any human intelligence that would still completely fail a few minutes of conversation with an expert.
There is so much about the human experience which is very particular to humans. Does an AI really need a deep understanding of what certain subjective feelings are like, or of the niceties of social interaction? Yes, an FAI eventually needs complete knowledge of those, but the intermediate steps may be quite alien and mechanical, even if intelligent.
Spending a lot of time trying to fool humans into thinking that a machine can empathize with them seems almost counterproductive. I'd rather the AIs honestly relate what they are experiencing, rather than try to pretend to be human.
It would absolutely be an improvement on the current system, no argument there.
Definitely something I'll need to be practicing! Here's my one-line summary: A middle schooler takes inspiration from his favorite video games as he adjusts to the challenges of life in a new school.
Interesting. Wouldn't Score Voting strongly incentivize voters to put 0s for major candidates other than their chosen one? It seems like there would always be a tension between voting strategically and voting honestly.
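Here's a toy example of that tension (all numbers invented): three voters score three candidates 0-10, and the outcome flips when the third voter min-maxes instead of scoring honestly.

```python
# Score voting: each voter scores every candidate; the highest total wins.
def score_winner(ballots):
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get), totals

honest = [
    {"A": 9, "B": 6, "C": 1},
    {"A": 2, "B": 7, "C": 9},
    {"A": 6, "B": 5, "C": 4},   # voter 3's honest scores
]
strategic = honest[:2] + [{"A": 10, "B": 0, "C": 0}]  # voter 3 min-maxes for A

print(score_winner(honest))     # B wins on honest ballots (A=17, B=18, C=14)
print(score_winner(strategic))  # A wins once voter 3 exaggerates (A=21, B=13, C=10)
```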
Delegable proxy is definitely a cool one. It probably does presuppose either a small population or advanced technology to run at scale. For my purposes (fiction) I could probably work around that somehow. It would definitely lead to a lot of drama with constantly shifting loyalties.
Are there any methods for selecting important public officials from large populations that are arguably much better than the current standards as practiced in various modern democracies?
For instance, in actual vote tallying, methods like Condorcet seem to have huge advantages over simple plurality or runoff systems, and yet they are rarely used (see the sketch at the end of this comment). Are there similar big gains to be made in the systems that lead up to a vote, or avoid one entirely?
For instance, a couple ideas:
- Candidates must collect a certain number of signatures to be eligible. A random sample of a few hundred people is chosen, flown to a central location, and spends two weeks really getting to know the candidates on a personal and political level. Then the representative sample votes.
- Randomly selected small groups are convened from the entire population. Each group elects two representatives, who then go on to a random group selected from that pool of representatives, which selects two more. Repeat until you have the final one or two candidates. This probably works better for executives than legislators, since it will have a strong bias towards majority preferences.
What other fun or crazy systems (that are at least somewhat defensible) are out there?
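And here's the minimal Condorcet sketch promised above (my own toy code with made-up ballots): compare every pair of candidates head-to-head on ranked ballots and look for one that beats all the others. Real methods like Schulze or Ranked Pairs exist to resolve the cycles this simple check can't.

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other head-to-head, or None (cycle/tie)."""
    def beats(x, y):
        # x beats y if a strict majority of ballots rank x above y
        x_over_y = sum(ballot.index(x) < ballot.index(y) for ballot in ballots)
        return x_over_y > len(ballots) / 2
    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None

# Each ballot is a full ranking, best first (a made-up five-voter electorate).
ballots = [
    ["A", "B", "C"],
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "B", "A"],
    ["B", "A", "C"],
]
print(condorcet_winner(ballots, ["A", "B", "C"]))  # B: beats A 3-2 and C 4-1
```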
I turned in the first draft of my debut novel to my publisher. Now I get to relax for a few weeks before the real work starts.
I would think those would all be representable by a Turing Machine, but I could be wrong about that. Certainly, my understanding of the Ultimate Ensemble is that it would include universes that are continuous or include irrational numbers, etc.
Can I nominate for promotion to Main/Front Page?
I can certainly imagine a universe where none of these concepts would be useful in predicting anything, and so they would never evolve in the "mind" of whatever entity inhabits it.
Can you actually imagine or describe one? I intellectually can accept that they might exist, but I don't know that my mind is capable of imagining a universe which could not be simulated on a Turing Machine.
The way that I define Tegmark's Ultimate Ensemble is as the set of all worlds that can be simulated by a Turing Machine. Is it possible to imagine in any concrete way a universe which doesn't fall under that definition? Is there an even more Ultimate Ensemble that we can't conceive of because we're creatures of a Turing universe?
There certainly needs to be some way to moderate out things that are unhelpful to the discussion. The question is who decides and how do they enforce that decision.
Other rationalist communities are able to discuss those issues without exploding. I assume that Alexander/Yvain is running Slate Star Codex as a benevolent dictatorship, which is why he can discuss hot button topics without everything exploding. Also, he doesn't have an organizational reputation to protect--LessWrong reflects directly on MIRI.
I agree in principle that the suggestion to simply disallow upvotes would probably be counterproductive. But how are we supposed to learn to be more rational if we can't practice by dealing with difficult issues? What's the point of having discussions if we're not allowed to discuss anything that we disagree on?
I guess I think we need to revisit the question of what the purpose of LessWrong is. What goal are we trying to accomplish? Maybe it's to refine our rationality skills and then go try them out somewhere else, so that the mess of debate happens on someone else's turf?
As I write this comment I'm starting to suspect that the ban on politics is in place to protect the reputation of MIRI. As a donor, I'm not entirely unsympathetic to that view.
If this comment comes off as rambling, it's because I'm trying not to jump to a conclusion. I haven't yet decided what my recommendation to improve the quantity and quality of discussion would be.
I am afraid it would incentivize people to post controversial comments.
I'm not convinced that's a bad thing. It certainly would help avoid groupthink or forced conformity. And if someone gets upvoted for posting controversial argument A, then someone can respond and get even more votes for explaining the logic behind not-A.
Yes, that seems to be true. I didn't mean to cast it as a negative thing.
Looks to me like you were a victim of a culture of hyperdeveloped cynicism and skepticism. It's much easier to tear things down and complain than to create value, so we end up discouraging anyone trying to make anything useful.