Comments

Comment by yli on When should an Effective Altruist be vegetarian? · 2014-11-26T12:51:57.795Z · LW · GW

Agreed. Of course the thing about means and ends is that you can always frame the situation in two opposing ways:

Way 1: Eating factory-farmed meat and not worrying about it in order to better focus on third-world donations is the same as making the following means-end tradeoff:

  • Means: Torturing animals
  • End: Saving lives in the third world

Way 2: Avoiding meat in order not to support factory farming, despite the fact that doing so carries costs* that lessen the effectiveness of your EA activities, is the same as making the following means-end tradeoff:

  • Means: Letting people in the third world die
  • End: Saving animals from being tortured

So which ends don't justify which means?

... Of course for the majority of people it's more like:

  • Means: Torturing animals
  • End: Access to certain tasty foods

And

  • Means: Depriving yourself of certain tasty foods
  • End: Saving animals from being tortured

* It's not clear that it does, but that's what the original post assumes, so for the sake of example I'm going with it.

Comment by yli on Botworld: a cellular automaton for studying self-modifying agents embedded in their environment · 2014-04-13T07:59:33.206Z · LW · GW

Would be cool if one of the items was a nugget of "computation fuel" that could be used to allow a robot's register machine to run for extra steps. Or maybe just items whose proximity gives a robot extra computation steps. That way you could illustrate situations involving robots with quantitatively different levels of "intelligence". Could lead to some interesting strategies if you run programming competitions on this too, like worker robots carrying fuel to a mother brain.
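
Concretely, something like this (a Python sketch only; Botworld itself is written in Haskell, and all the constants here are made up, not anything from the real spec):

    # Made-up constants: how much computation each fuel source grants.
    BASE_STEPS = 100            # baseline register-machine steps per tick
    STEPS_PER_CARRIED_FUEL = 50
    STEPS_PER_ADJACENT_FUEL = 20

    def step_budget(carried_fuel, adjacent_fuel):
        """Steps this robot's register machine may run this tick."""
        return (BASE_STEPS
                + STEPS_PER_CARRIED_FUEL * carried_fuel
                + STEPS_PER_ADJACENT_FUEL * adjacent_fuel)

    # A worker hauling 3 nuggets next to 1 more gets 270 steps this tick,
    # while an empty-handed robot in the open gets only 100.
    print(step_budget(3, 1))  # 270
    print(step_budget(0, 0))  # 100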

Comment by yli on Optimal Exercise · 2014-03-12T12:02:55.861Z · LW · GW

Do you have thoughts on whether it's safe for a beginner to lift weights without in-person instruction? From what I hear, even small mistakes in form can cause injury, especially when adding weight quickly like a beginner will do. Is it worth the risk to try to learn good form from only books and videos? My friend attempted Starting Strength for a month, got a pain in their knee and had to quit, and hasn't been able to get back into it because finding personal instruction is a huge hassle, especially if one isn't willing to pay a lot. Should they try again by themselves and just study those books and videos extra closely?

Comment by yli on On saving the world · 2014-02-08T12:46:11.500Z · LW · GW

I can never understand why the idea that replicating systems might just never expand past small islands of clement circumstances (like, say, the surface of the Earth) gets so readily dismissed in these parts.

People in these parts don't necessarily have in mind the spread of biological replicators. Spreading almost any kind of computing machinery would be good enough to count, because it could host simulations of humans or other worthwhile intelligent life.

(Note that the question of whether simulated people are actually conscious is not that relevant to the question of whether this kind of expansion will happen. What's relevant is whether the relevant decision makers would come to think they are conscious. For example, even if simulated people aren't actually conscious, after interacting all their lives with simulated people integrated into society, most non-simulated people would probably think they are conscious, and thus worth sending out to colonize space. And the simulated people themselves will definitely think they are conscious.)

Comment by yli on Beware Trivial Fears · 2014-02-06T14:00:04.012Z · LW · GW

Anything that's just a trivial inconvenience definitely won't protect you from the NSA and probably won't even protect you from random internet people looking to ruin your life/reputation for fun.

Comment by yli on Tulpa References/Discussion · 2014-01-08T11:16:03.783Z · LW · GW

The general impression I got from reading a lot of the stuff that gets posted in the various tulpa communities leads me to believe it is, at its core, yet another group of people who gain status within that group by trying to impress each other with how different or special their situation is.

Used to be, when I read stories about "astral projection" I thought people were just imagining stuff really hard and then making up exaggerated stories to impress each other. Then I found out it's basically the same thing as wake-initiated lucid dreaming, which is a very specific kind of weird and powerful experience that's definitely not just "imagining things really hard". I still think people make up stories about astral projection to impress each other, but the basic experience is nevertheless something real and unique. The same thing is probably happening with tulpas.

Comment by yli on Singularity Institute now accepts donations via Bitcoin · 2013-11-21T03:08:59.462Z · LW · GW

Please consider sending some Bitcoins to SI at address 1HUrNJfVFwQkbuMXwiPxSQcpyr3ktn1wc9

https://blockchain.info/address/1HUrNJfVFwQkbuMXwiPxSQcpyr3ktn1wc9:

Total Received 343.91998333 BTC
Final Balance 5.55939055 BTC

Comment by yli on Mainstream Epistemology for LessWrong, Part 1: Feldman on Evidentialism · 2013-11-19T06:02:41.664Z · LW · GW

Thanks, this looks to be a good summary of what I'm not missing :)

Comment by yli on Open Thread, November 15-22, 2013 · 2013-11-18T23:56:57.452Z · LW · GW

In a way every game is a rationality game, because in almost every game you have to discover things, predict things, etc. In another way almost no game is one, because domain-specific strategies and skills win out over general ones.

One idea is based on the claim that general rationality skills matter more when it's a fresh new game that nobody has played yet, since then you have to use your general thinking skills to reason about things in the game and to invent game-specific strategies. So what if there were "mystery game" competitions where the organizers invented a new set of games for every event and only revealed them some set time before the games started? I don't know of any that exist, but it would be interesting to see what kinds of skills would lead to consistent winning in these competitions.

There are various other ways you could think of to make it so that the game varies constantly and there's no way to accumulate game-specific skills, only general ones like quick thinking, teamwork etc. Playing in a different physical place every match like in HPMoR's battles is one.

Comment by yli on Rationality Quotes September 2013 · 2013-11-18T21:36:04.365Z · LW · GW

You can say that whether it's signaling is determined by the motivations of the person taking the course, or the motivations of the people offering the course, or the motivations of employers hiring graduates of the course. And you can define motivation as the conscious reasons people have in their minds, or as the answer to the question of whether the person would still have taken the course if it was otherwise identical but provided no signaling benefit. And there can be multiple motivations, so you can say that something is signaling if signaling is one of the motivations, or that it's signaling only if signaling is the only motivation.

If you make the right selections from the previous, you can argue for almost anything that it's not signaling, or that it is for that matter.

if someone wants to demonstrate some innate or pre-existing quality (such as mathematical ability), they participate in a relevant contest and this is signalling.

If I wanted to defend competitions from accusations of signaling like you defended education, I could easily come up with lots of arguments. Like people doing them to challenge themselves, experience teamwork, test their limits and meet like-minded people. Or the fact that lots of people participate in competitions even though they know they don't have a serious chance of coming out on top, etc.

OSHA rules would still require that the crane operator passes the crane related training.

(Sure, but I meant that only truck drivers would be accepted into the crane operator training in the first place, because they would be more likely to pass it and perform well afterward.)

Comment by yli on Rationality Quotes September 2013 · 2013-11-16T19:50:35.522Z · LW · GW

Clearly, a training course for, say, a truck driver, is not signalling, but exactly what it says on the can

If there was a glut of trained truck drivers on the market and someone needed to recruit new crane operators, they could choose to recruit only truck drivers because having passed the truck driving course would signal that you can learn to operate heavy machinery reliably, even if nothing you learned in the truck driving course was of any value in operating cranes.

Comment by yli on Is the orthogonality thesis at odds with moral realism? · 2013-11-07T03:53:22.253Z · LW · GW

I'm pretty sure he is calling into question the claim that "it was dangerous to question that the existence of God could be proven through reason", which was a very common belief throughout most of the middle ages and was held with very little danger as far as I can tell

...

This doctrine was supposed (though we don't know if correctly) to be the doctrine that although reason dictated truths contrary to faith, people are obliged to believe on faith anyway. It was suppressed.

Comment by yli on Personal Evidence - Superstitions as Rational Beliefs · 2013-11-06T06:56:33.950Z · LW · GW

I agreed with this at first, but actually, no. Belief in the supernatural doesn't require belief in gods, spirits or any non-human agents. You could just believe that humans have some supernatural abilities like reading each other's minds. When trying to explain these abilities, only reductionists will conclude that there's some third-party agent like a simulator setting things up. Non-reductionists will just accept that being able to read minds is part of how this ontologically fundamental mind stuff works.

Comment by yli on Looking for opinions of people like Nick Bostrom or Anders Sandberg on current cryo techniques · 2013-10-21T12:05:26.295Z · LW · GW

Actually, because the zombie uploads are capable of all the same reasoning as M_P, they will figure out that they're not conscious, and replace themselves with biological humans.

On the other hand, maybe they'll discover that biological humans aren't conscious either, they just say they are for reasons that are causally isomorphic to the reasons for which the uploads initially thought they were conscious, and then they'll set out to find a substrate that really allows for consciousness.

Comment by yli on Polyphasic Sleep Seed Study: Reprise · 2013-09-27T17:42:57.339Z · LW · GW

Not polyphasic but

Comment by yli on Notes on Brainwashing & 'Cults' · 2013-09-15T04:01:21.368Z · LW · GW

Thanks for the link. I don't really see creepy cult isolation in that discussion, and I think most people wouldn't, but that's just my intuitive judgment.

Comment by yli on Notes on Brainwashing & 'Cults' · 2013-09-15T01:42:40.729Z · LW · GW

Really? Links? A lot of stuff here is a bit too culty for my tastes, or just embarrassing, but "cutting family ties with nonrational family members"?? I haven't been following LW closely for a while now so I may have missed it, but that doesn't sound accurate.

Comment by yli on Please share your reading habits/techniques/strategies · 2013-09-15T01:16:13.774Z · LW · GW

Reading something for 6 hours spread across 6 days will result in more insight than reading it for 12 hours straight. The better sleep you get, the stronger this effect is.* So: do things in parallel instead of serially if possible, and take care of your sleep.

* These are just guesses based on my personal experience.

Comment by yli on Genies and Wishes in the context of computer science · 2013-08-31T19:31:38.428Z · LW · GW

When people talk about the command "maximize paperclip production" leading to the AI tiling the universe with paperclips, I interpret it to mean a scenario where first a programmer comes up with a shoddy formalization of paperclip maximization that he thinks is safe but actually isn't, and then writes that formalization into the AI. So at no point does the AI actually have to try to interpret a natural language command. Genie analogies are definitely confusing and bad to use here because genies do take commands in English.

Comment by yli on Open Thread, June 16-30, 2013 · 2013-06-23T19:32:09.992Z · LW · GW

Omega appears and tells you you've been randomly selected to have the opportunity to take or leave a randomly chosen bet.

Comment by yli on Open Thread, June 2-15, 2013 · 2013-06-15T15:52:30.189Z · LW · GW

RSS feeds for users' comments seem to be broken by the update to how they display on the page. To see how, just look at e.g. http://lesswrong.com/user/Yvain/overview/.rss . It contains a bunch of comments from people other than Yvain. This is pretty annoying; hope it's fixed soon. I'm subscribed to tens of users' comment feeds and it's the main way I read LW. Today all of those feeds got a bunch of spurious updates from other people's comments that now show up on everyone's comments page.

Also, some months back there was another change to userpages that broke all my RSS feeds too; I had to resubscribe to everyone's /user/theirname/comments page where I had previously subscribed to the /user/theirname page. I wish updates would never break RSS feeds; I'm sure I'm not the only one who makes significant use of them.

Comment by yli on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-10T18:12:09.074Z · LW · GW

But surely, good sir, common sense says that you should defect against CooperateBot in order to punish it for cooperating with DefectBot.

I thought you said people should tolerate tolerance :)

Comment by yli on Karma as Money · 2013-06-07T20:16:40.390Z · LW · GW

What if you had a script that hid downvotes? Or one that used an unknown algorithm to hide all downvotes and also some upvotes so a lack of upvotes could still mean that you really got upvoted? Or maybe one that added fake downvotes so that you would never have certain knowledge of having been really downvoted? Etc. Personally I like your commenting and would enjoy more of it.
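
(To make the second idea concrete, here's a sketch in Python; the 30% hiding probability is a made-up parameter:)

    # Sketch: never show downvotes, and hide some upvotes as cover, so a
    # blank score stays ambiguous between "downvoted" and "hidden upvote".
    import random

    def displayed_score(true_score):
        if true_score < 0:
            return ""                # downvotes are never shown
        if random.random() < 0.3:
            return ""                # some upvotes hidden as cover
        return "+%d" % true_score

A real version would want the decision to be stable per comment, say seeded by the comment id, so that reloading the page couldn't leak the true score.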

Comment by yli on Rationality Quotes May 2013 · 2013-05-09T23:49:36.379Z · LW · GW

So what's the program? Is it the one that runs every Turing machine of length up to 100 for BusyBeaver(100) steps, and gets the number BusyBeaver(100) by running the BusyBeaver_100 program whose source code is hardcoded into it? That would be of length 100+c for some constant c, but maybe you didn't think the constant was worth mentioning.

Comment by yli on Justifiable Erroneous Scientific Pessimism · 2013-05-09T19:13:02.999Z · LW · GW

Would a chess program qualify if it kept a table of all the lines on the board, tracking whether each is empty, and used that table as part of its move-choosing algorithm? If not, I think we might be into qualia territory when it comes to making sense of how exactly a human is recognizing the emptiness of a line and that program isn't.

Comment by yli on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-09T19:05:27.802Z · LW · GW

Obviously, to really put the idea of people having bounded utility functions to the test, you have to forget about it solving problems of small probabilities and incredibly good outcomes and focus on its most unintuitive consequences. For one, having a bounded utility function means caring arbitrarily little about differences between the goodness of different sufficiently good outcomes. And all the outcomes could be certain too. You could come up with all kinds of thought experiments involving purchasing huge numbers of years of happy life or some other good for a few cents. You know all of this, so I wonder why you don't talk about it.

Also I believe that Eliezer thinks that an unbounded utility function describes at least his preferences. I remember he made a comment about caring about new happy years of life no matter how many he'd already been granted.

(I haven't read most of the discussion in this thread or might just be missing something so this might be irrelevant.)

Comment by yli on [Link] More Right launched · 2013-05-09T02:33:14.369Z · LW · GW

programming resources

Since you've mentioned this before, here's an offhand idea for how to maybe get some: put an announcement on the sidebar or banner asking for developers (and maybe noting that LW is open source, so it's OK to ask people to work for free), visible on every page and linking to a page with your list of wanted features and instructions for how to get involved. There could be a bunch of potential developers who don't even know LW needs them, since the subject has only come up in some comment threads. Maybe you guys have already thought of this or know of a reason it wouldn't work, just wanted to put it out there.

Comment by yli on New applied rationality workshops (April, May, and July) · 2013-04-17T01:15:41.056Z · LW · GW

It tells us something bad about CFAR, right? But if they didn't offer refunds, wouldn't you be saying that that tells us something bad about them too?

Comment by yli on Rationality Quotes April 2013 · 2013-04-11T20:48:55.603Z · LW · GW

That's awesome, thanks.

Comment by yli on Rationality Quotes April 2013 · 2013-04-11T20:25:43.142Z · LW · GW

I would have easily won that game (and maybe made a quip about free will when asked how...). All you need is some memorized secret randomness. For example, a randomly generated password that you've memorized, but you'd have to figure out how to convert it to bits on the fly.

Personally I'd recommend going to random.org, generating a few hexadecimal bytes (which are pretty easy to convert to both bits and numbers in any desired range), memorizing them, and keeping them secret. Then you'll always be able to act unpredictably.
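
To illustrate the conversion (a sketch; the hex string below is a made-up example, not a real secret):

    # Turn a memorized hex string into bits, and into numbers in a range.
    secret_hex = "a3f29c"  # made-up example

    # As bits (each hex digit is 4 bits):
    bits = bin(int(secret_hex, 16))[2:].zfill(4 * len(secret_hex))
    print(bits)  # 101000111111001010011100

    # As numbers in a range, e.g. 0..2 for rock-paper-scissors: take
    # successive bytes mod 3 (slightly biased, but fine for this use).
    moves = [int(secret_hex[i:i+2], 16) % 3
             for i in range(0, len(secret_hex), 2)]
    print(moves)  # [1, 2, 0]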

Well, unpredictably to a computer program. If you want to be unpredictable to someone who's good at reading your next move from your face, you would need some way to not know your next move before making it. One way would be to run something like an algorithm that generates the binary expansion of pi in your head, delaying the calculation of the next bit until the best moment. Of course, you wouldn't actually choose pi, but something less well-known and preferably easier to calculate. I don't know any such algorithms, and I guess if anyone knows a good one, they're not likely to share. But if it was something like a pseudorandom bitstream generator that takes a seed, it could be shared, as long as you didn't share your seed. If anyone's thought about this in more depth and is willing to share, I'm interested.

Comment by yli on Open Thread, February 1-14, 2013 · 2013-02-06T18:46:26.146Z · LW · GW

A couple years ago I deliberately used that strategy of reading an article again and again on successive days to grasp some hard sigfpe posts and decision theory posts here on LW. For some of them, it took several days, and some of them I never understood, but on the whole it worked very well. I always thought the reason it works is because of sleeping between sessions. (I still think this is a very useful technique but haven't used it much due to general akrasia.)

Comment by yli on The Level Above Mine · 2013-01-24T20:52:04.709Z · LW · GW

He could have just been talking about trolling in the abstract. And even if not, after reading a bit of his history, his "trolling", if any, is at most at the level of rhetorical questions. I'm not really a fan of his commenting, but if he's banned, I'd say "banned for disagreement" will be closer to the mark as a description of what happened than "banned for trolling", though not the whole story.

Comment by yli on In which I fantasize about drugs · 2013-01-02T13:19:11.021Z · LW · GW

I'd love to see some blind testing of this brainwave stuff to see whether it's more than placebo.

Doesn't seem too hard to do. Just do a blind comparison of genuine binaural beats carefully crafted to induce a state of concentration or whatever, against random noise or misadjusted binaural beats. It probably requires two people though: the tester, and someone other than the tester to create the audio files and give them to the tester without saying which is which. The tester should preferably be a binaural beats virgin: they should never have heard binaural beats before.

Something along the lines of the above would probably work, but I haven't thought about the experimental protocol in detail. If someone actually goes ahead with this, obviously they're gonna have to flesh it out and agree on a more precise protocol.
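
As a sketch of just the blinding step (the file names are stand-ins, and a real protocol would need more than this):

    # The preparer copies the real and fake files under neutral names and
    # keeps the key; the tester only sees session_1.wav and session_2.wav.
    import json
    import random
    import shutil

    files = {"real": "binaural.wav", "fake": "control.wav"}  # stand-ins

    labels = list(files)
    random.shuffle(labels)
    key = {}
    for i, label in enumerate(labels, start=1):
        blinded = "session_%d.wav" % i
        shutil.copy(files[label], blinded)
        key[blinded] = label

    # Revealed only after the tester has recorded their ratings.
    with open("key.json", "w") as f:
        json.dump(key, f)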

Personally, I couldn't be the tester because I've listened to binaural beats before and might recognize them. I might be able to be the fake audio file creator, but I'd have to look into it more to make sure I can create something that doesn't accidentally have binaural beats in it, etc.

Comment by yli on Open Thread, December 16-31, 2012 · 2012-12-30T22:03:18.386Z · LW · GW

Well? Go ahead.

Comment by yli on More Cryonics Probability Estimates · 2012-12-25T11:42:26.588Z · LW · GW

The key steps are whether on a molecular level, no more than one original person has been mapped to one frozen brain

Maybe I'm missing something, but even with cremation, on a molecular level probably no more than one person gets mapped to one specific pile of ash, because it would be a huge coincidence if cremating two different bodies ended up creating two identical piles of ash.

Comment by yli on New censorship: against hypothetical violence against identifiable people · 2012-12-24T11:08:28.697Z · LW · GW

Maybe it's just that volunteers that will actually do any work are hard to find. Related.

Personally, I was excited about doing some LW development a couple of years ago and emailed one of the people coordinating volunteers about it. I got some instructions back but procrastinated forever on it and never ended up doing any programming at all.

Comment by yli on Standard and Nonstandard Numbers · 2012-12-21T00:44:33.816Z · LW · GW

After I had read your post, but before I had read IlyaShpitser's comment, I thought that the particular model with a single integer chain was in fact a model of first-order arithmetic, so the post was definitely misleading to me in that respect.

Comment by yli on My workflow · 2012-12-11T09:15:38.003Z · LW · GW

Thanks for the link to Workflowy. Here are some thoughts on it.

Problems:

  • If I use it to keep a todo/projects list, I can't assign priorities or dates to sort and filter by. (Hashtags aren't enough for this.)

  • Can't have the same item linked from two different places. It's a tree, not a graph. I read some people say they've exported their brains to Workflowy, but your brain is more of a graph than a tree. (Duplication isn't enough for this, because changes to the duplicate aren't applied to the original.)

Regardless, it's definitely better than the mess of text files I'm currently using.

The tagging and searching functionality seems to make it possible to implement the "I'm bored and have 20 minutes, tell me what to do" function that another commenter talked about. Just tag the relevant subtasks with #nextaction or something, and then search for that tag later.

Comment by yli on Rationality Quotes November 2012 · 2012-11-18T17:28:51.506Z · LW · GW

For decision-theoretic reasons, the dark lords of the matrix give superhumanly good advice about social organization to religious people in ways that look from inside the simulation like supernatural revelations; non-religious people don't have access to these revelations, so when they try to do social organization it ends in disaster.

Obviously.

Comment by yli on Rationality: Appreciating Cognitive Algorithms · 2012-10-07T08:27:55.611Z · LW · GW

If you're willing to say "X" whenever you believe X, then if you say "I believe X" but aren't willing to say "X", your statement that you believe X is actually false. But in conversations, the rule that you're willing to say everything you believe doesn't hold.

Comment by yli on We won't be able to recognise the human Gödel sentence · 2012-10-05T15:15:10.847Z · LW · GW

"Stuart Armstrong does not believe this sentence."

Comment by yli on "Hide comments in downvoted threads" is now active · 2012-10-05T15:13:25.609Z · LW · GW

This could be fixed by making the hiding apply only to comments at most, say, three levels down from a downvoted comment.
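
A sketch of the rule (the comment structure here is invented, not LW's actual data model):

    # Hide a comment only if some ancestor within the last three levels
    # is downvoted; deeper replies escape the hiding.
    class Comment(object):
        def __init__(self, score, parent=None):
            self.score = score
            self.parent = parent

    def should_hide(comment, max_depth=3):
        ancestor = comment.parent
        for _ in range(max_depth):
            if ancestor is None:
                return False
            if ancestor.score < 0:
                return True
            ancestor = ancestor.parent
        return False

    # A reply four levels below a downvoted comment is shown again:
    c = Comment(score=-4)
    for _ in range(4):
        c = Comment(score=1, parent=c)
    print(should_hide(c))  # False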

Comment by yli on The Useful Idea of Truth · 2012-10-03T02:09:57.025Z · LW · GW

Is this actually a standard term?

I have no idea, I just interpreted it in an obvious way.

Comment by yli on The Useful Idea of Truth · 2012-10-02T14:22:42.175Z · LW · GW

I don't like the "post-utopian" example. I can totally expect differing sensory experiences depending on whether a writer is post-utopian or not. For example, if they're post-utopian, when reading their biography I would more strongly expect reading about them having been into utopian ideas when they were young, but having then changed their mind. And when reading their works, I would more strongly expect seeing themes of the imperfectability of the world and weltschmerz.

Comment by yli on From First Principles · 2012-09-28T16:00:47.705Z · LW · GW

I've seen that essay linked a few times and finally took the time to read it carefully. Some thoughts, for what they're worth:

What exactly is a code? (Apparently they can be genetic or memetic, information theory and Hayek both have something to say about them, and social traditions are instances of them.) How do you derive, refute or justify a code?

There are apparently evolved memetic codes that solve interpersonal problems. How do we know that memetic evolution selects for good solutions to interpersonal problems, and that it doesn't select even more strongly for something useless or harmful, like memorability, or easy transmission to children, or appeal to the kinds of people in the best position to spread their ideas to others, or making one feel good?

Why isn't memetic evolution as much of an amoral Azathoth as biological evolution? The results of memetic evolution are just the memes that were best at surviving and reproducing themselves. These generally have no reason to be objectively true. I'm not convinced that there's any reason they should be intersubjectively true (socially beneficial) either. Also, selection among entire social systems seems to require group selection.

And granted that the traditions that are the results of the process of memetic/cultural evolution contain valuable truths, are those truths in the actual content of the traditions, or are they just in what we can infer from the fact that these were the particular traditions that resulted from the process?

Comment by yli on Open Thread, September 15-30, 2012 · 2012-09-28T14:07:54.122Z · LW · GW

This does not seem coherent. The responsible choices people make are always the result of unchosen circumstances. The genes you are born with, the circumstances of the development of your brain, your upbringing, the decisions of your past and perhaps very different self, which information you don't know you don't have, all of these are unchosen and there is no decision making beyond the laws of physics.

Well, we already know that (even in a deterministic world) there's a meaningful way to say that someone could have done something, despite being made of mindless parts obeying only the laws of physics. I think the notion of responsible choice is probably similar.

Comment by yli on Open Thread, September 15-30, 2012 · 2012-09-19T07:54:29.933Z · LW · GW

Well, I had in mind how at one time or another I tried to listen to Inside Jokes, Folly of Fools, In Gods We Trust, Gödel's Theorem: An Incomplete Guide to Its Use and Abuse, The Origin of Consciousness in the Breakdown of the Bicameral Mind and The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It, and quit each of them because it wasn't working: I was missing too much and wasn't enjoying it. Maybe "scholarly" isn't the best word I could have chosen to describe them, and maybe I was just doing it wrong and should have just gone slower and concentrated better.

The result of a converted blog is this. I just have to write a new parser for every new blog, which usually takes maybe 15 minutes, and the rest is automated.

Comment by yli on Open Thread, September 15-30, 2012 · 2012-09-18T23:10:10.773Z · LW · GW

I've been doing this since November last year and recommend it.

My list of fully listened books has 109 entries now. I've found that an important thing in determining whether a book works well in text-to-speech form is how much of it you can miss and still understand what's going on; that is, how dense it is. Genre-wise, narrative or journalistic nonfiction and memoirs make especially good listening; most popular nonfiction works decently; history and fiction are pretty hard; and scholarly and technical writing is pretty much impossible.

A lot of writing on the internet, like blog posts, works well too. I have some scripts for scraping websites, converting them into an intermediate ebook form and then into text-to-speech audiobooks. If I encounter an interesting blog with large archives this is usually how I consume it nowadays.
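
For flavor, a stripped-down sketch of that kind of pipeline (the URL, the CSS selector and the choice of espeak for the text-to-speech step are all stand-ins; the real scripts differ for every blog):

    # Fetch a post, extract its text, and synthesize it to a wav file.
    import subprocess

    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/blog/some-post"  # stand-in URL
    html = requests.get(url).text
    # The selector is blog-specific; this is the per-blog "parser" part.
    post = BeautifulSoup(html, "html.parser").select_one("div.post-content")
    text = post.get_text(separator="\n")

    with open("post.txt", "w") as f:
        f.write(text)

    # espeak: -s is speed in words per minute, -f reads from a text file,
    # -w writes a wav instead of speaking aloud.
    subprocess.run(["espeak", "-s", "344", "-f", "post.txt", "-w", "post.wav"],
                   check=True)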

There's also obviously the issue of comprehension, which I'd say is definitely lower when listening than when reading. But 1) literally at least 95% of the stuff that I've listened to I never would have read; it would either have sat on my to-read list forever* or I wouldn't have thought it was worth the time and effort in the first place; 2) you can view this as a way to discover stuff that's worth deeper study, like skimming; 3) it takes less mental effort than reading; and 4) there are a lot of times when you couldn't read but can still listen, so it's not a tradeoff. There are also some interesting ways in which texts are more memorable when listening, because parts of the text get associated with the place you were in and/or the activity you were doing when you were listening to that part.

Compared to traditional audiobooks, there's the disadvantage that fiction seems to be harder to make sense of in text-to-speech form, but other than that, you get all the benefits of traditional audiobooks plus it's faster** and you can listen to anything.

* Whereas during the last year I've been getting to experience the new and pleasant phenomenon of actually getting to strike off entries from my to-read list pretty often.

** You can speed up the text-to-speech, and while you can also speed up traditional audiobooks, you can speed up the text-to-speech more: because it's always the same voice instead of a different one for every book, you can get used to listening to it at higher and higher speeds. I currently do 344, 472 and 612 WPM for normal, fast, and extra-fast-but-still-comprehensible respectively (these numbers have been stable for about the past six months).

Comment by yli on Call for Anonymous Narratives by LW Women and Question Proposals (AMA) · 2012-09-14T17:13:42.607Z · LW · GW

For a quick fix to the own-karma problem, get the Firefox Stylish extension and add this stylesheet:

@-moz-document domain("lesswrong.com") {
  span.label, span.score, span.monthly-score {
    display: none !important;
  }
}

Comment by yli on LessWrong could grow a lot, but we're doing it wrong. · 2012-08-25T00:06:50.637Z · LW · GW

I'm happy you see where I'm coming from. Another opinion I have about issues like this is that to fix them, it wouldn't be enough to read complaints in comments in threads like this one and try to fix things you find people complaining about. You would actually need to find a person (or persons) with a good eye for and intuitions about these things, who has good taste and who knows what they're doing, and just let them take control of the whole design and let them change it at their discretion. I think there must be a few people on LW who would be both capable and willing to do it, though of course the site as it currently is would repel them.