Posts

An AI Takeover Thought Experiment 2014-06-19T16:59:44.145Z
[Meta] The Decline of Discussion: Now With Charts! 2014-06-04T22:02:49.581Z
[Link] YCombinator-Backed Non-Profit Startup Attempting HIV/AIDS Vaccine 2014-01-23T18:58:09.717Z
[Link] Anti-ageing compound set for human trials 2013-12-22T20:51:25.123Z

Comments

Comment by Gavin on What's Up With Confusingly Pervasive Goal Directedness? · 2022-01-26T23:37:58.186Z · LW · GW

The easiest way is probably to build a modestly-sized company doing software and then find a way to destabilize the government and cause hyperinflation.

I think the rule of thumb should be: if your AI could be intentionally deployed to take over the world, it's highly likely to do so unintentionally.

Comment by Gavin on To what extent is GPT-3 capable of reasoning? · 2020-07-21T20:58:59.778Z · LW · GW

I was able to get this one on my first try:

---

Prompt: I am teaching my son a simple logic problem. "All blargs are schminky," I explain. "But not all schminkies are blarg."

GPT-3: "What's the difference?" he asks, confused. "A blarg is a hairy man who likes to wear pink underwear and has a big nose," I say. "And a schminky is an ugly person with no sense of fashion or hygiene."

---

Really impressive! "Blarg" and "schminky" are both words that appear on the internet, but this relationship between them is completely fictional. GPT-3 grasps the logic perfectly and generates novel examples that demonstrate it!

I then re-rolled several times, and got a bunch of nonsense. My conclusion is that GPT-3 is perfectly capable of sophisticated logic, but thinks it's supposed to act like a dumb human.

Comment by Gavin on Soft takeoff can still lead to decisive strategic advantage · 2019-08-26T21:47:13.386Z · LW · GW

A few plausible limited abilities that could provide decisive first move advantages:

  • The ability to remotely take control of any networked computer
  • The ability to defeat all conventional cryptography would provide a decisive advantage in the type of conflict we're currently seeing.
  • The ability to reliably predict market price movements

Comment by Gavin on The Right to be Wrong · 2017-11-29T23:16:51.770Z · LW · GW

One way to employ Space Mom might be in how confidently you believe expert consensus, particularly given that experts rarely state their confidence levels. For instance:

A. Expert consensus says that horoscopes are bunk. I believe it! I have a tight confidence interval on that.

B. Expert consensus says that hospitals provide significant value. I believe that too! But thanks to Robin Hanson, I'm less confident in it. Maybe we're mostly wasting our healthcare dollars? Probably not, but I'll keep that door open in my mind.

----

Separately, I think the frustrating thing about Hanson's piece was that he seemed to be making an isolated demand for rigor: that Eliezer prove, in an absolute sense, that he can know he is more rational than average before he gets his "disagreement license."

"You could be deceiving yourself about having valid evidence or the ability to rationally consider it" is a fully general argument against anything, and that's what it felt like Hanson was using. Especially since Eliezer specifically mentioned testing his calibration against the real world on a regular basis to check exactly those assumptions.

Comment by Gavin on Living in an Inadequate World · 2017-11-10T22:08:11.533Z · LW · GW

Isn't this true in a somewhat weaker form? It takes individuals and groups putting in effort at personal risk to move society forward. The fact that we are stuck in inadequate equilibria is evidence that we have not progressed as far as we could.

Scientists moving from Elsevier to open access happened because enough of them cared enough to put in the effort and take the risk to their personal success. If they had cared a little bit more on average, it would have happened earlier. If they had cared a little less, maybe it would have taken a few more years.

If humans had 10% more instinct for altruism, how many more of these coordination problems would already be solved? There is a deficit of caring about solving civilizational problems. That doesn't change the observation that most people are reacting to their own incentives, and we can't really blame them.

Comment by Gavin on If there IS alien super-inteligence in our own galaxy, then what it could be like? · 2016-02-27T19:34:28.218Z · LW · GW

Similar to some of the other ideas, but here are my framings:

  1. Virtually all of the space in the universe has been taken over by superintelligences. We find ourselves observing the universe from one of the rare uncolonized areas because it would be impossible for us to exist in one of the colonized ones. Thus, it shouldn't be too surprising that our little area of non-colonization is just now popping out a new superintelligence. The most likely outcome for an intelligent species is to watch the area around them become colonized while they cannot develop fast enough to catch up.

  2. A Dyson-sphere-level intelligence knows basically everything. There is a limit to knowledge and power that can be approached. Once a species has achieved a certain level of power, it simply doesn't need to continue expanding in order to guarantee its safety and the fulfillment of its values. Continued expansion has diminishing returns, and it has other values or goals that counterbalance any tiny desire to continue expanding.

Comment by Gavin on Open thread, Nov. 30 - Dec. 06, 2015 · 2015-12-01T15:34:37.916Z · LW · GW

My real solution was not to own a car at all. Feel free to discount my advice appropriately!

Comment by Gavin on Open thread, Nov. 30 - Dec. 06, 2015 · 2015-11-30T17:55:24.617Z · LW · GW

I don't have the knowledge to give a full post, but I absolutely hate car repair. And if you buy a used car, there's a good chance that someone is selling it because it has maintenance issues. This happened to me, and no matter how many times I took the car to the mechanic it just kept having problems.

On the other hand, new cars have a huge extra price tag just because they're new. So the classic advice is to never buy a new car, because the moment you drive it off the lot it loses a ton of value instantly.

Here are a couple ideas for how to handle this:

  1. Buy a car that's just off a 2 or 3 year lease. It's probably in great shape and is less likely to be a lemon. There are companies that only sell off-lease cars.

  2. Assume a lease that's in its final year (at http://www.swapalease.com/lease/search.aspx?maxmo=12, for example). Then you get a trial period of 4-12 months, and will have the option to buy the car. This way you'll know whether you like the car and whether it has any issues. The important thing to check is that the "residual price" they charge for buying the car is reasonable. See this article for more info on that: http://www.edmunds.com/car-leasing/buying-your-leased-car.html

There are a ton of articles out there on how to negotiate a car deal, but one suggestion that might be worth trying is to negotiate and then leave and come back the next day to make the purchase. In the process of walking out you'll probably get the best deal they're going to offer. You can always just come back ten minutes later and make the purchase--they're not going to mind and the deal isn't going to expire (even if they say it is).

Comment by Gavin on Open Thread August 31 - September 6 · 2015-08-31T16:18:36.915Z · LW · GW

This seems like a lot of focus on MIRI sending good signals to outsiders. The "publish or perish" treadmill of academia is exactly why privately funded organizations like MIRI are needed.

The things that su3su2u1 wants MIRI to be already exist in academia. The whole point of MIRI is to create an organization of a type that doesn't currently exist, focused on much longer term goals. If you measure organizations on the basis of how many publications they make, you're going to get a lot of low-quality publications. Citations are only slightly better, especially if you're focused on ignored areas of research.

If you have outside-view criticisms of an organization and you're suddenly put in charge of it, the first thing you have to do is check the new inside-view information available and see what's really going on.

Comment by Gavin on Open Thread, Jul. 6 - Jul. 12, 2015 · 2015-07-06T18:25:58.762Z · LW · GW

You might want to examine what sort of in-group out-group dynamics are at play here, as well as some related issues. I know I run into these things frequently--I find the best defense mechanism for me is to try to examine the root of where feelings come from originally, and why certain ideas are so threatening.

Some questions that you can ask yourself:

  1. Are these claims (or their claimants) subtly implying that I am in a group of "the bad guys"?
  2. Is part of my identity wrapped up in the things that these claims are against?
  3. Do I have a gut instinct that the claims are being made in bad faith or through motivated reasoning?
  4. If I accept these claims as true, would I need to dramatically reevaluate my worldview?
  5. If everyone accepted these claims as true, would the world change in a way that I find threatening or troubling?

None of these will refute the claims, but they may help you understand your defensiveness.

I find it helpful to remind myself that I don't need to have a strongly held opinion on everything. In fact, it's good to be able to say "I don't really know" about all the things you're not an expert in.

Comment by Gavin on Stupid Questions July 2015 · 2015-07-03T19:31:20.504Z · LW · GW

Geothermal or similar cooling requires a pretty significant capital investment in order to work. My guess is that a basic air conditioning unit is a cheaper and simpler fix in most cases.

The problem is that even that fix may be out of the reach of many residents of Karachi.

Comment by Gavin on Resolving the Fermi Paradox: New Directions · 2015-04-20T04:52:43.404Z · LW · GW

Maybe the elder civs aren't either. It might take billions of years to convert an entire light cone into dark computronium. And they're 84.5% of the way done.

I'm guessing the issue with this is that the proportion of dark matter doesn't change if you look at older or younger astronomical features.

Comment by Gavin on Resolving the Fermi Paradox: New Directions · 2015-04-20T04:46:02.708Z · LW · GW

It would be very unusual indeed if the element distribution of optimal computronium exactly matched that of a typical solar system.

But if it were not the optimal computronium, but the easiest-to-build computronium, it would be made up of whatever was available in the local area.

Comment by Gavin on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 108 · 2015-02-20T23:08:17.010Z · LW · GW

META: I'd like to suggest having a separate thread for each publication. These attract far more interest than any other threads, and after the first 24 hours the top comments are set and there's little new discussion.

There aren't very many threads posted in discussion these days, so it's not like there is other good content that will be crowded out by one new thread every 1-3 days.

Comment by Gavin on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-17T18:51:01.463Z · LW · GW

Quirrell seems to be on the road to getting the Philosopher's Stone. It's certainly possible that he will fail or that Harry ( / time-turned Cedric Diggory) will manage to swipe it at the last minute. But with around 80k words left to go, there wouldn't be much story left if Harry simply gets the stone in the next couple of chapters.

I draw your attention to a few quotes concerning the Philosopher's Stone:

His strongest road to life is the Philosopher’s Stone, which Flamel assures me that not even Voldemort could create on his own; by that road he would rise greater and more terrible than ever before. (Chapter 61)

“It’s not a secret.” Hermione flipped the page, showing Harry the diagrams. “The instructions are right on the next page. It’s just so difficult that only Nicholas Flamel’s done it.” (Chapter 87)

“I was looking to see if there was anything here I could figure out how to do. I thought, maybe the difficult part about making a Philosopher’s Stone was that the alchemical circle had to be super precise, and I could get it right by using a Muggle microscope—” “That’s brilliant, Hermione!” The boy rapidly drew his wand, said “Quietus,” and then continued after the small noises of the rowdier books had died down. “Even if the Philosopher’s Stone is just a myth, the same trick might work for other difficult alchemies—” “Well, it can’t work,” Hermione said. She’d flown across the library to look up the only book on alchemy that wasn’t in the Restricted Section. And then—she remembered the crushing letdown, all the sudden hope dissipating like mist. “Because all alchemical circles have to be drawn ‘to the fineness of a child’s hair’, it isn’t any finer for some alchemies than others. And wizards have Omnioculars, and I haven’t heard of any spells where you use Omnioculars to magnify things and do them exactly. I should’ve realized that!” (Chapter 87)

So we have multiple mentions of the possibility of creating a Philosopher's Stone. We also have Quirrell's promise not to kill anyone within Hogwarts for a week. And Flamel may still be out there, with the knowledge of how he created the Stone in the first place.

All this leads to the possibility that Quirrell gets hold of the current Philosopher's Stone, and Harry learns enough from seeing the stone in person to be able to recreate it using a combination of magic and technology.

You can't transfigure anything that doesn't exist yet, so just having a Stone doesn't mean an instant singularity. You can't just will a superwizard or an AI into existence. This leaves plenty of space for a war between two sides, both of which have permanent transfiguration at their disposal.

Comment by Gavin on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-17T17:35:17.745Z · LW · GW

Apparently Professors can cast memory charms without setting off the wards.

Comment by Gavin on Open thread, Feb. 9 - Feb. 15, 2015 · 2015-02-10T05:46:47.111Z · LW · GW

The great vacation sounds to me like it ends with me being killed and another version of me being created. I realize that these issues of consciousness and continuity are far from settled, but at this point that's my best guess. Incidentally, if anyone thinks there's a solid argument explaining what does and doesn't count as "me" and why, I'd be interested to hear it. Maybe there's a way to dissolve the question?

In any event, I wasn't able to easily choose between the two. Wireheading sounds pretty good to me.

Comment by Gavin on Stupid Questions December 2014 · 2014-12-10T21:41:14.062Z · LW · GW

RottenTomatoes has much broader ratings. The current box office hits range from 7% to 94%. This is because they aggregate binary "positive" and "negative" reviews. As jaime2000 notes, YouTube has switched to a similar rating system, and it seems to keep the scores well spread out.
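A quick toy simulation shows the mechanism (the 3.5-star "like" threshold and the noise level are assumptions I made up for illustration):

```python
import random

random.seed(0)

def compare_scales(true_quality, n_reviewers=200):
    """For one film, compare the averaged star rating to percent-positive."""
    # Each reviewer's star rating is the film's quality plus personal noise,
    # clipped to the 1-5 scale.
    stars = [min(5.0, max(1.0, random.gauss(true_quality, 1.0)))
             for _ in range(n_reviewers)]
    avg_stars = sum(stars) / len(stars)  # compresses toward the middle
    pct_positive = 100 * sum(s >= 3.5 for s in stars) / len(stars)  # binary verdicts
    return round(avg_stars, 2), round(pct_positive)

for quality in (2.5, 3.0, 3.5, 4.0):
    print(quality, compare_scales(quality))
# Star averages land in a narrow band; percent-positive swings much wider.
```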

Comment by Gavin on Others' predictions of your performance are usually more accurate · 2014-11-13T20:48:35.027Z · LW · GW

This doesn't really tell us a lot about how people predict others' success. The information has been intentionally limited to a very high degree. It's basically asking the test participants, "This individual usually scores an 87. What do you expect her to score next time?" All of the interactions that could potentially create bias have been artificially stripped away by the experiment.

This means that participants are forced by the experimental setup to use Outside View, when they could easily be fooled into taking the Inside View and being swayed by perceptions of the student's diligence, charisma, etc. The subject would probably be more optimistic than average about themselves, but the others' predictions might not be nearly as accurate if you gave them more interaction with the subject.

In baseball prediction, it has been demonstrated that a simple weighted average with an age factor is nearly the best predictor of future performance. Watching the games and getting to know the players in most cases makes prediction worse. [I can't easily find a citation for this, but I think it came originally from articles at baseballprospectus.com]
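As a sketch of the kind of formula I mean (the weights, the regression fraction, and the age adjustment below are my own illustrative guesses, loosely in the spirit of systems like Marcel, not anyone's published values):

```python
# Illustrative projection: weighted average of recent seasons plus an age factor.

def project_average(last3, age, league_avg=0.260, weights=(5, 4, 3)):
    """Project next season's batting average from the last three seasons,
    ordered most recent first."""
    # Weighted average, with the most recent season weighted heaviest.
    raw = sum(w * avg for w, avg in zip(weights, last3)) / sum(weights)
    # Regress partway toward the league mean (shrinkage).
    regressed = 0.8 * raw + 0.2 * league_avg
    # Crude age curve: slight boost before a peak around 28, decline after.
    return regressed * (1.0 + 0.003 * (28 - age))

print(round(project_average([0.310, 0.290, 0.275], age=31), 3))  # ~0.285
```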

This really just leaves us with "use outside view to predict performance," which is useful but not necessarily novel.

Comment by Gavin on What are the most common and important trade-offs that decision makers face? · 2014-11-04T15:38:49.657Z · LW · GW

Trilemma maybe?

Comment by Gavin on Open thread, Nov. 3 - Nov. 9, 2014 · 2014-11-04T15:33:07.546Z · LW · GW

I was recently linked to this Wired article from a few months back on new results in the Bohmian interpretation of Quantum Mechanics: http://www.wired.com/2014/06/the-new-quantum-reality/

Should we be taking this seriously? The ability to duplicate the double slit experiment at classical scale is pretty impressive.

Or maybe this is still just wishful thinking trying to escape the weirdnesses of the Copenhagen and Many Worlds interpretations.

Comment by Gavin on What are the most common and important trade-offs that decision makers face? · 2014-11-03T21:06:16.569Z · LW · GW

The most standard business tradeoff is Cheap vs Fast vs Good, of which you can typically only get two.

Comment by Gavin on [deleted post] 2014-10-29T19:21:35.709Z

Does anyone have experience with Inositol? It was mentioned recently on one of the better parts of the website no one should ever go to, and I just picked up a bottle of it. It seems like it might help with pretty much anything and doesn't have any downsides . . . which makes me a bit suspicious.

Comment by Gavin on What is the difference between rationality and intelligence? · 2014-08-14T21:15:32.352Z · LW · GW

In some sense I think General Intelligence may contain Rationality. We're just playing definition games here, but I think my definitions match the general LW/Rationality Community usage.

An agent that perfectly plays a solved game ( http://en.wikipedia.org/wiki/Solved_game ) is perfectly rational. But its intelligence is limited, because it can only accept a limited type of input: the states of a tic-tac-toe board, for instance.
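To make that concrete, here's a minimal Python sketch of such an agent: a minimax tic-tac-toe player. Within its nine-cell world it is perfectly rational (it never loses), but that world is the only input it can accept.

```python
# A perfect tic-tac-toe agent: fully rational over a tiny input space.
# A board is a tuple of 9 cells, each 'X', 'O', or None.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score for X, best move) assuming optimal play from here."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        child = board[:m] + (player,) + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        results.append((score, m))
    # X maximizes the score; O minimizes it.
    return max(results) if player == 'X' else min(results)

empty = (None,) * 9
print(minimax(empty, 'X'))  # (0, ...): perfect play from both sides is a draw
```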

We can certainly point to people who are extremely intelligent but quite irrational in some respects--but if you increased their rationality without making any other changes I think we would also say that they became more intelligent. If you examine their actions, you should expect to see that they are acting rationally in most areas, but have some spheres where rationality fails them.

This is because, in my definition at least:

Intelligence = Rationality + Other Stuff

So rationality is one component of a larger concept of Intelligence.

General Intelligence is the ability of an agent to take inputs from the world, compare them to a preferred state of the world (goals), and take actions that make that state of the world more likely to occur.

Rationality is how accurate and precise that agent is, relative to its goals and resources.

General Intelligence includes this, but also has concerns such as:

  • being able to accept a wide variety of inputs
  • having lots of processing power
  • using that processing power efficiently

I don't know if this covers it 100%, but this seems like it matches general usage to me.

Comment by Gavin on Why are people "put off by rationality"? · 2014-08-07T03:47:51.040Z · LW · GW

I suppose if you really can't stand the main character, there's not much point in reading the thing.

I was somewhat aggravated by the first few chapters, in particular the conversation between Harry and McGonagall about the medical kit. Was that the one where you had your aggravated reaction?

I found myself sympathizing with both sides, and wishing Harry would just shut up--and then catching myself and thinking "but he's completely right. And how can he back down on this when lives are potentially at stake, just to make her feel better?"

Comment by Gavin on Why are people "put off by rationality"? · 2014-08-06T19:40:12.807Z · LW · GW

I would go even further and point out how Harry's arrogance is good for the story. Here's my approach to this critique:

"You're absolutely right that Harry!HPMOR is arrogant and condescending. It is a clear character flaw, and repeatedly gets in the way of his success. As part of a work of fiction, this is exactly how things should be. All people have flaws, and a story with a character with no flaws wouldn't be interesting to read!

Harry suffers significantly due to this trait, which is precisely what a good author does with their characters.

Later on there is an entire section dedicated to Harry learning "how to lose," and growing to not be quite as blind in this way. If his character didn't have anywhere to develop, it wouldn't be a very good story!"

Comment by Gavin on Open thread, July 28 - August 3, 2014 · 2014-07-29T12:59:03.405Z · LW · GW

Agreed on all points.

Comment by Gavin on Open thread, July 28 - August 3, 2014 · 2014-07-29T04:55:17.536Z · LW · GW

It sounds like we're largely on the same page, noting that what counts as "disastrous" can be somewhat subjective.

Comment by Gavin on Open thread, July 28 - August 3, 2014 · 2014-07-28T23:24:25.256Z · LW · GW

Anytime you're thinking about buying insurance, double check whether it actually makes more sense to self-insure. It may be better to put all the money you would otherwise spend on insurance into a "rainy day fund" rather than buying ten different types of insurance.

In general, if you can financially survive the bad thing, then buying insurance isn't a good idea. This is why it almost never makes sense to insure a $1000 computer or get the "extended warranty." Just save all the money you would spend on extended warranties for your devices, and if something breaks, pay out of pocket to repair or replace it.
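A toy expected-value comparison makes the point (all numbers invented for illustration):

```python
# Hypothetical $200 extended warranty on a $1000 laptop.
warranty_cost = 200.0
replacement_cost = 1000.0   # worst case: buy a new one
p_failure = 0.10            # assumed chance of a covered failure

expected_loss_self_insured = p_failure * replacement_cost
print(expected_loss_self_insured)  # 100.0: half the warranty's price, on average

# Self-insuring wins whenever you can absorb the worst case. Insurance earns
# its keep on losses you *can't* absorb (house fires, liability), where the
# expected-value math stops being the only thing that matters.
```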

This is a harshly rational view, and I certainly appreciate that some people get "peace of mind" from having insurance, which can have a real value.

Comment by Gavin on Fifty Shades of Self-Fulfilling Prophecy · 2014-07-25T19:01:15.838Z · LW · GW

In the publishing industry, it is emphatically not the case that you can sell millions of books from a random unknown author with a major marketing campaign. It's nearly impossible to replicate that success even with an amazing book!

For all its flaws (and it has many), Fifty Shades had something that the market was ready for. Literary financial successes like this happen only a couple times a decade.

Comment by Gavin on Steelmanning Inefficiency · 2014-07-03T18:19:02.015Z · LW · GW

Isn't that a necessary part of steelmanning an argument you disagree with? My understanding is that you strengthen all the parts that you can think of to strengthen, but ultimately have to leave in the bit that you think is in error and can't be salvaged.

Once you've steelmanned, there should still be something that you disagree with. Otherwise you're not steelmanning, you're just making an argument you believe in.

Comment by Gavin on Open thread, 30 June 2014- 6 July 2014 · 2014-07-02T16:59:47.930Z · LW · GW

If the five year old can't understand, then I think "Yes" is a completely decent answer to this question.

If I were in this situation, I would write letters to the child to be delivered/opened as they grew older. This way I would still continue to have an active effect on their life. We "exist" to other people when we have measurable effects on them, so this would be a way to continue to love them in a unidirectional way.

Comment by Gavin on Open thread, 30 June 2014- 6 July 2014 · 2014-07-01T19:39:44.537Z · LW · GW

That depends on whether you think that: a) the past ceases to exist as time passes, or b) the universe is all of the past and all of the future, and we just happen to experience it in a certain chronological order.

The past may still be "there," but inaccessible to us. So the answer to this question is probably to dissolve it. In one sense, I won't still love you. In another, my love will always exist and always continue to have an effect on you.

Comment by Gavin on Will AGI surprise the world? · 2014-06-23T16:14:23.000Z · LW · GW

I'm not disagreeing with the general thrust of your comment, which I think makes a lot of sense.

But it's not at all required that an AGI start out with the ability to parse human languages effectively. An AGI is an alien. It might grow up with a completely different sort of intelligence, and only at the late stages of growth have the ability to interpret and model human thoughts and languages.

We consider "write fizzbuzz from a description" to be a basic task of intelligence because it is for humans. But humans are the most complicated machines in the solar system, and we are naturally good at dealing with other humans because we instinctively understand them to some extent. An AGI may be able to accomplish quite a lot before it can comprehend human-style intelligence through raw general intelligence and massive amounts of data and study.

Comment by Gavin on On Terminal Goals and Virtue Ethics · 2014-06-19T16:54:47.027Z · LW · GW

It's hard to judge just how important it is, because I have fairly regular access to it. However, food options definitely figure into long-term plans. For instance, the good food options around my office are a small but very real benefit that helps keep me in my current job. Similarly, while plenty of things can trump food, I would see the lack of quality food as a major downside to volunteering to live in the first colony on Mars. Which doesn't mean it would be decisive, of course.

I will suppress urges to eat in order to have the optimal experience at a good meal. I like to build up a real amount of hunger before I eat, as I find that a more pleasant experience than grazing frequently.

I try to respect the hedonist inside me, without allowing him to be in control. But I think I'm starting to lean pro-wireheading, so feel free to discount me on that account.

Comment by Gavin on On Terminal Goals and Virtue Ethics · 2014-06-19T00:47:46.171Z · LW · GW

I'm pretty confident that I have a strong terminal goal of "have the physiological experience of eating delicious barbecue." I have it in both near and far mode, and it remains even when it is disadvantageous in many other ways. Furthermore, I have it much more strongly than anyone I know personally, so it's unlikely to be a function of peer pressure.

That said, my longer-term goals seem to be a web of both terminal and instrumental values. Many things are terminal goals while also having instrumental value. Sex is a good in itself but also feeds other big-picture psychological and social needs.

Comment by Gavin on Some alternatives to “Friendly AI” · 2014-06-16T00:21:41.200Z · LW · GW

Less Wrongers voting here are primed to factor how others outside of LW react to different terms into their calculations. I interpreted "best sounding" as "which will be the most effective term," and imagine others did as well. Strategic thinking is kind of our thing.

Comment by Gavin on Come up with better Turing Tests · 2014-06-10T15:30:45.580Z · LW · GW

Is the Turing Test really all that useful or important? I can easily imagine an AI powerful beyond any human intelligence that would still completely fail a few minutes of conversation with an expert.

There is so much about the human experience which is very particular to humans. Is creating an AI with a deep understanding of what certain subjective feelings are like, or of the niceties of social interaction, really a necessary intermediate step? Yes, an FAI eventually needs to have complete knowledge of those, but the intermediate steps may be quite alien and mechanical, even if intelligent.

Spending a lot of time trying to fool humans into thinking that a machine can empathize with them seems almost counterproductive. I'd rather the AIs honestly relate what they are experiencing, rather than try to pretend to be human.

Comment by Gavin on Open thread, 9-15 June 2014 · 2014-06-10T11:49:44.894Z · LW · GW

It would absolutely be an improvement on the current system, no argument there.

Comment by Gavin on Bragging Thread, June 2014 · 2014-06-10T11:47:53.020Z · LW · GW

Definitely something I'll need to be practicing! Here's my one-line summary: A middle schooler takes inspiration from his favorite video games as he adjusts to the challenges of life in a new school.

Comment by Gavin on Open thread, 9-15 June 2014 · 2014-06-09T19:23:22.417Z · LW · GW

Interesting. Wouldn't Score Voting strongly incentivize voters to put 0s for major candidates other than their chosen one? It seems like there would always be a tension between voting strategically and voting honestly.
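A tiny worked example of that tension (ballot numbers are hypothetical):

```python
# Two blocs scoring candidates A and B on a 0-10 scale.
honest = {
    "bloc1": {"A": 10, "B": 8},  # prefers A, but rates B honestly high
    "bloc2": {"A": 3,  "B": 9},
}

def totals(ballots):
    out = {}
    for scores in ballots.values():
        for cand, s in scores.items():
            out[cand] = out.get(cand, 0) + s
    return out

print(totals(honest))  # {'A': 13, 'B': 17}: B wins when everyone is honest

# bloc1 strategically zeroes out B, its favorite's main rival:
strategic = dict(honest, bloc1={"A": 10, "B": 0})
print(totals(strategic))  # {'A': 13, 'B': 9}: now A wins
```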

Delegable proxy is definitely a cool one. It probably does presuppose either a small population or advanced technology to run at scale. For my purposes (fiction) I could probably work around that somehow. It would definitely lead to a lot of drama with constantly shifting loyalties.

Comment by Gavin on Open thread, 9-15 June 2014 · 2014-06-09T14:15:01.887Z · LW · GW

Are there any methods for selecting important public officials from large populations that are arguably much better than the current standards as practiced in various modern democracies?

For instance, in actual vote tallying, methods like Condorcet seem to have huge advantages over simple plurality or runoff systems, and yet they are rarely used. Are there similar big gains to be made in the systems that lead up to a vote, or avoid one entirely?

For instance, a couple ideas:

  1. Candidates must collect a certain number of signatures to be eligible. A random sample of a few hundred people is chosen, flown to a central location, and spends two weeks really getting to know the candidates on a personal and political level. Then the representative sample votes.
  2. Randomly selected small groups are convened from the entire population. Each group elects two representatives, who then go on to a random group selected from that pool of representatives, who select two more. Repeat until you have the final one or two candidates (a toy simulation is sketched below). This probably works better for executives than legislators, since it will have a strong bias towards majority preferences.

What other fun or crazy systems (that are at least somewhat defensible) are out there?
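For the second scheme above, here's a minimal simulation. It's a toy model: each citizen is reduced to a single "electability" number and every small group reliably picks its top two, both unrealistic assumptions, but it shows how quickly the pool narrows.

```python
import random

random.seed(1)

def recursive_election(population, group_size=10, winners_per_group=2):
    """Repeatedly split the pool into random small groups; each group's
    winners form the next round's pool, until only finalists remain."""
    pool = population[:]
    while len(pool) > winners_per_group:
        random.shuffle(pool)
        next_round = []
        for i in range(0, len(pool), group_size):
            group = pool[i:i + group_size]
            group.sort(reverse=True)  # assume groups spot their best members
            next_round.extend(group[:winners_per_group])
        pool = next_round
    return pool

# One 'electability' number per citizen: a gross simplification.
citizens = [random.gauss(0, 1) for _ in range(100_000)]
print(recursive_election(citizens))  # two finalists after about seven rounds
```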

Comment by Gavin on Bragging Thread, June 2014 · 2014-06-09T13:59:11.338Z · LW · GW

I turned in the first draft of my debut novel to my publisher. Now I get to relax for a few weeks before the real work starts.

Comment by Gavin on Mathematics as a lossy compression algorithm gone wild · 2014-06-07T16:03:30.808Z · LW · GW

I would think those would all be representable by a Turing Machine, but I could be wrong about that. Certainly, my understanding of the Ultimate Ensemble is that it would include universes that are continuous or include irrational numbers, etc.

Comment by Gavin on Mathematics as a lossy compression algorithm gone wild · 2014-06-07T12:11:31.184Z · LW · GW

Can I nominate for promotion to Main/Front Page?

Comment by Gavin on Mathematics as a lossy compression algorithm gone wild · 2014-06-07T12:08:41.348Z · LW · GW

I can certainly imagine a universe where none of these concepts would be useful in predicting anything, and so they would never evolve in the "mind" of whatever entity inhabits it.

Can you actually imagine or describe one? I can intellectually accept that they might exist, but I don't know that my mind is capable of imagining a universe which could not be simulated on a Turing Machine.

The way that I define Tegmark's Ultimate Ensemble is as the set of all worlds that can be simulated by a Turing Machine. Is it possible to imagine in any concrete way a universe which doesn't fall under that definition? Is there an even more Ultimate Ensemble that we can't conceive of because we're creatures of a Turing universe?

Comment by Gavin on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-06T08:48:05.402Z · LW · GW

There certainly needs to be some way to moderate out things that are unhelpful to the discussion. The question is who decides, and how they enforce that decision.

Other rationalist communities are able to discuss those issues without exploding. I assume that Alexander/Yvain is running Slate Star Codex as a benevolent dictatorship, which is why he can discuss hot button topics without everything exploding. Also, he doesn't have an organizational reputation to protect--LessWrong reflects directly on MIRI.

I agree in principle that the suggestion to simply disallow upvotes would probably be counterproductive. But how are we supposed to learn to be more rational if we can't practice by dealing with difficult issues? What's the point of having discussions if we're not allowed to discuss anything that we disagree on?

I guess I think we need to revisit the question of what the purpose of LessWrong is. What goal are we trying to accomplish? Maybe it's to refine our rationality skills and then go try them out somewhere else, so that the mess of debate happens on someone else's turf?

As I write this comment I'm starting to suspect that the ban on politics is in place to protect the reputation of MIRI. As a donor, I'm not entirely unsympathetic to that view.

If this comment comes off as rambling, it's because I'm trying not to jump to a conclusion. I haven't yet decided what my recommendation to improve the quantity and quality of discussion would be.

Comment by Gavin on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-06T07:21:10.132Z · LW · GW

I am afraid it would incentivize people to post controversial comments.

I'm not convinced that's a bad thing. It certainly would help avoid groupthink or forced conformity. And if someone gets upvoted for posting controversial argument A, then someone can respond and get even more votes for explaining the logic behind not-A.

Comment by Gavin on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-05T20:53:35.360Z · LW · GW

Yes, that seems to be true. I didn't mean to cast it as a negative thing.

Comment by Gavin on [Meta] The Decline of Discussion: Now With Charts! · 2014-06-05T18:23:11.918Z · LW · GW

Looks to me like you were a victim of a culture of hyperdeveloped cynicism and skepticism. It's much easier to tear things down and complain than to create value, so we end up discouraging anyone trying to make anything useful.