Posts

Why no total winner? 2017-10-15T22:01:37.920Z · score: 38 (15 votes)
Circles of discussion 2016-12-16T04:35:28.086Z · score: 18 (19 votes)
Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time" 2015-01-22T20:21:48.539Z · score: 50 (51 votes)
Slides online from "The Future of AI: Opportunities and Challenges" 2015-01-16T11:17:23.647Z · score: 13 (14 votes)
Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial 2015-01-15T16:33:48.640Z · score: 54 (55 votes)
Robin Hanson's "Overcoming Bias" posts as an e-book. 2014-08-31T13:26:24.555Z · score: 21 (22 votes)
Open thread for December 17-23, 2013 2013-12-17T20:45:00.004Z · score: 5 (6 votes)
A diagram for a simple two-player game 2013-11-10T08:59:35.069Z · score: 22 (25 votes)
Meetup : London social 2013-10-07T11:45:57.286Z · score: 3 (4 votes)
Meetup : London meetup: thought experiments 2013-09-19T20:29:33.168Z · score: 4 (5 votes)
Meetup : London social meetup 2013-09-07T15:22:04.693Z · score: 2 (3 votes)
Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future 2013-06-26T13:17:54.357Z · score: 6 (9 votes)
Welcome to Less Wrong! (July 2012) 2012-07-18T17:24:51.381Z · score: 20 (21 votes)
Useful maxims 2012-07-11T11:56:57.489Z · score: 26 (27 votes)
Quantified Self recommendations 2012-05-18T10:16:07.740Z · score: 9 (10 votes)
Holden Karnofsky's Singularity Institute critique: Is SI the kind of organization we want to bet on? 2012-05-11T07:25:56.637Z · score: 13 (16 votes)
Holden Karnofsky's Singularity Institute critique: other objections 2012-05-11T07:22:13.699Z · score: 3 (6 votes)
Holden Karnofsky's Singularity Institute Objection 3 2012-05-11T07:19:18.688Z · score: 5 (8 votes)
Holden Karnofsky's Singularity Institute Objection 2 2012-05-11T07:18:05.379Z · score: 11 (14 votes)
Holden Karnofsky's Singularity Institute Objection 1 2012-05-11T07:16:29.696Z · score: 8 (11 votes)
Meetup : London 2012-04-26T20:03:09.209Z · score: 3 (4 votes)
How accurate is the quantum physics sequence? 2012-04-17T06:54:18.488Z · score: 49 (53 votes)
How was your meetup? 2012-04-16T06:11:24.129Z · score: 9 (10 votes)
Meetup : London 2012-04-06T16:42:02.277Z · score: 2 (3 votes)
Statistical error in half of neuroscience papers 2011-09-09T23:07:33.743Z · score: 19 (19 votes)
An EPub of Eliezer's blog posts 2011-08-11T14:20:31.512Z · score: 40 (41 votes)
Unknown unknowns 2011-08-05T12:55:37.560Z · score: 11 (14 votes)
Martinenaite and Tavenier on cryonics 2011-08-04T07:39:02.702Z · score: 17 (18 votes)
Meetup : London mini-meetup 2011-08-03T18:17:15.313Z · score: 1 (2 votes)
Robert Ettinger, founder of cryonics, now CI's 106th patient 2011-07-25T12:11:52.631Z · score: 7 (10 votes)
Free holiday reading? 2011-06-28T08:59:01.845Z · score: 4 (5 votes)
The Ideological Turing Test 2011-06-25T22:17:25.746Z · score: 35 (37 votes)
Charles Stross: Three arguments against the singularity 2011-06-22T09:52:08.250Z · score: 10 (13 votes)
London meetup, Sunday 2011-05-15 14:00, near London Bridge 2011-05-13T20:54:32.138Z · score: 2 (3 votes)
GiveWell.org interviews SIAI 2011-05-05T16:29:09.944Z · score: 28 (29 votes)
Reminder: London meetup, Sunday 2pm, near Holborn 2011-04-28T09:26:04.851Z · score: 4 (5 votes)
London meetup, Sunday 1 May, 2pm, near Holborn 2011-04-03T09:47:23.852Z · score: 2 (3 votes)
London meetup, Sunday 2011-03-06 14:00, near Holborn (reminder) 2011-02-26T08:10:02.466Z · score: 5 (6 votes)
Open Thread, January 2011 2011-01-10T11:14:49.179Z · score: 4 (5 votes)
London meetup, Shakespeare's Head, Sunday 2011-03-06 14:00 2011-01-09T15:43:35.015Z · score: 5 (6 votes)
Weird characters in the Sequences 2010-11-18T08:27:20.737Z · score: 5 (6 votes)
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) 2010-10-30T09:31:29.456Z · score: 34 (40 votes)
London UK, Saturday 2010-07-03: "How to think rationally about the future" 2010-05-31T15:23:20.972Z · score: 10 (11 votes)
LessWrong meetup, London UK, 2010-06-06 16:00 2010-05-23T13:46:44.536Z · score: 6 (7 votes)
A LessWrong poster for the Humanity+ conference next Saturday 2010-04-14T21:38:46.831Z · score: 8 (9 votes)
Meetup after Humanity+ , London, Saturday 2010-04-24? 2010-04-10T12:54:01.601Z · score: 4 (5 votes)
Less Wrong London meetup, tomorrow (Sunday 2010-04-04) 16:00 2010-04-03T09:36:05.289Z · score: 3 (4 votes)
A survey of anti-cryonics writing 2010-02-07T23:26:52.715Z · score: 83 (85 votes)
Beware of WEIRD psychological samples 2009-09-13T11:28:05.581Z · score: 39 (39 votes)
The mind-killer 2009-05-02T16:49:19.539Z · score: 23 (29 votes)

Comments

Comment by ciphergoth on Why so much variance in human intelligence? · 2019-10-02T00:04:09.804Z · score: 20 (4 votes) · LW · GW

Half-formed thoughts towards how I think about this:

Something like Turing completeness is at work, where our intelligence gains the ability to loop in on itself, and build on its former products (eg definitions) to reach new insights. We are at the threshold of the transition to this capability, half god and half beast, so even a small change in the distance we are across that threshold makes a big difference.

Comment by ciphergoth on Why so much variance in human intelligence? · 2019-10-01T23:46:31.188Z · score: 4 (2 votes) · LW · GW
As such, if you observe yourself to be in a culture that is able to reach technological maturity, you're probably "the stupidest such culture that could get there, because if it could be done at a stupider level then it would've happened there first."

Who first observed this? I say this a lot, but I'm now not sure if I first thought of it or if I'm just quoting well-understood folklore.

Comment by ciphergoth on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-30T08:32:10.789Z · score: 12 (6 votes) · LW · GW

May I recommend spoiler markup? Just start the line with >!

Another (minor) "Top Donor" opinion. On the MIRI issue: I agree with your concerns, but will continue donating, for now. I assume they're fully aware of the problem they're presenting to their donors and will address it in some fashion. If they do not, I might adjust next year. The hard thing is that MIRI still seems like the org most differentiated in approach and talent that can use funds (vs OpenAI and DeepMind and well-funded academic institutions).

Comment by ciphergoth on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2018-12-19T19:32:11.895Z · score: 12 (3 votes) · LW · GW

I note that this is now done, as I have for so many things here. Great work, team!

Spoiler space test

Comment by ciphergoth on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-19T19:21:40.503Z · score: 30 (9 votes) · LW · GW

The rot13'd content, hidden using spoiler markup:

Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. Additionally, they already have a larger budget than any other organisation (except perhaps FHI) and a large amount of reserves.

Despite FHI producing very high quality research, GPI having a lot of promising papers in the pipeline, and both having highly qualified and value-aligned researchers, the requirement to pre-fund researchers’ entire contract significantly increases the effective cost of funding research there. On the other hand, hiring people in the bay area isn’t cheap either.

This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year.

I think of CSER and GCRI as being relatively comparable organisations, as 1) they both work on a variety of existential risks and 2) both primarily produce strategy pieces. In this comparison I think GCRI looks significantly better; it is not clear their total output, all things considered, is less than CSER’s, but they have done so on a dramatically smaller budget. As such I will be donating some money to GCRI again this year.

ANU, Deepmind and OpenAI have all done good work but I don’t think it is viable for (relatively) small individual donors to meaningfully support their work.

Ought seems like a very valuable project, and I am torn on donating, but I think their need for additional funding is slightly less than some other groups.

AI Impacts is in many ways in a similar position to GCRI, with the exception that GCRI is attempting to scale by hiring its part-time workers to full-time, while AI Impacts is scaling by hiring new people. The former is significantly lower risk, and AI Impacts seems to have enough money to try out the upsizing for 2019 anyway. As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019.

The Foundational Research Institute have done some very interesting work, but seem to be adequately funded, and I am somewhat more concerned about the danger of risky unilateral action here than with other organisations.

I haven’t had time to evaluate the Foresight Institute, which is a shame because at their small size marginal funding could be very valuable if they are in fact doing useful work. Similarly, Median and Convergence seem too new to really evaluate, though I wish them well.

The Future of Life institute grants for this year seem more valuable to me than the previous batch, on average. However, I prefer to directly evaluate where to donate, rather than outsourcing this decision.

I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. The current situation, with a binary employed/not-employed distinction, and upfront payment for uncertain output, seems suboptimal. I also hope to significantly reduce overhead (for everyone but me) by not having an application process or any requirements for grantees beyond having produced good work. This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues.

Comment by ciphergoth on Nyoom · 2018-12-15T19:35:02.798Z · score: 7 (4 votes) · LW · GW

I think the Big Rationalist Lesson is "what adjustment to my circumstances am I not making because I Should Be Able To Do Without?"

Comment by ciphergoth on Topological Fixed Point Exercises · 2018-11-17T16:57:43.882Z · score: 21 (5 votes) · LW · GW

Just to get things started, here's a proof for #1:

Proof by induction that the number of bicolor edges is odd iff the ends don't match. Base case: a single node has matching ends and an even number (zero) of bicolor edges. Extending with a non-bicolor edge changes neither condition, and extending with a bicolor edge changes both; in both cases the induction hypothesis is preserved.
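
Not part of the original proof, but here's a quick brute-force check of the same parity claim, as a sketch; colourings are written as tuples of 0s and 1s:

```python
from itertools import product

def bicolor_edges(colors):
    """Count edges of the path whose two endpoints differ in colour."""
    return sum(a != b for a, b in zip(colors, colors[1:]))

# For every 2-colouring of a path of up to 8 nodes, the number of
# bicolor edges is odd exactly when the two end nodes differ.
for n in range(1, 9):
    for colors in product([0, 1], repeat=n):
        ends_differ = colors[0] != colors[-1]
        assert (bicolor_edges(colors) % 2 == 1) == ends_differ
print("Parity claim holds for all paths of up to 8 nodes.")
```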

Comment by ciphergoth on Last Chance to Fund the Berkeley REACH · 2018-07-01T00:08:27.703Z · score: 24 (7 votes) · LW · GW

From what I hear, any plan for improving MIRI/CFAR space that involves the collaboration of the landlord is dead in the water; they just always say no to things, even when it's "we will cover all costs to make this lasting improvement to your building".

Comment by ciphergoth on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2018-06-17T22:53:24.990Z · score: 2 (1 votes) · LW · GW

Of course I should have tested it before commenting! Thanks for doing so.

Comment by ciphergoth on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2018-06-17T17:36:22.576Z · score: 12 (3 votes) · LW · GW

Spoiler markup. This post has lots of comments which use ROT13 to disguise their content. There's a Markdown syntax for this.

Comment by ciphergoth on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2018-06-17T17:31:01.462Z · score: 10 (2 votes) · LW · GW

I note that this is now done.

Comment by ciphergoth on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2018-06-17T17:30:54.379Z · score: 10 (2 votes) · LW · GW

I note that this is now done.

Comment by ciphergoth on On the Chatham House Rule · 2018-06-14T14:27:17.868Z · score: 40 (14 votes) · LW · GW

"If you're running an event that has rules, be explicit about what those rules are, don't just refer to an often-misunderstood idea" seems unarguably a big improvement, no matter what you think of the other changes proposed here.

Comment by ciphergoth on April Fools: Announcing: Karma 2.0 · 2018-04-01T15:11:53.341Z · score: 22 (6 votes) · LW · GW

I notice your words are now larger thanks to the excellence of this comment!

Comment by ciphergoth on April Fools: Announcing: Karma 2.0 · 2018-04-01T14:31:54.211Z · score: 10 (3 votes) · LW · GW

Excellent, my words will finally get the prominence they deserve!

Comment by ciphergoth on Leaving beta: Voting on moving to LessWrong.com · 2018-03-13T04:50:34.284Z · score: 14 (3 votes) · LW · GW

When does voting close? EDIT: "This vote will close on Sunday March 18th at midnight PST."

Comment by ciphergoth on Making yourself small · 2018-03-08T19:13:50.944Z · score: 8 (2 votes) · LW · GW

I thought of a similar example to yours for big-low-status, but I couldn't think of an example I was happy with for small-high-status. Every example I could think of was one where someone is visually small, but you already know they're high status. So I was struck that your example also used someone we all know is high status! Is there a pose or way of looking which both looks small and communicates high status, without relying on some obvious marker like a badge or a crown?

Comment by ciphergoth on Two Coordination Styles · 2018-02-17T15:17:38.980Z · score: 8 (2 votes) · LW · GW

Ainslie, not Ainslee. I found this super distracting for some reason, partly because his name is repeated so often.

Comment by ciphergoth on A LessWrong Crypto Autopsy · 2018-02-04T01:04:43.353Z · score: 8 (2 votes) · LW · GW

A plausible strategy would be to buy, say, 100 bitcoins for $1 each, then sell 10 at $10, 10 at $100, and so on. With this strategy you would have made $111,000 and would still hold 60 bitcoins.
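
A sketch of the arithmetic, reading "and so on" as continuing up to a $10,000 tranche (which is what the stated totals imply); fees and taxes are ignored:

```python
initial_coins = 100
cost = initial_coins * 1                 # buy 100 BTC at $1 each
sell_prices = [10, 100, 1_000, 10_000]   # sell 10 coins at each price
proceeds = sum(10 * price for price in sell_prices)
remaining = initial_coins - 10 * len(sell_prices)

print(f"net gain: ${proceeds - cost:,}")   # $111,000
print(f"coins still held: {remaining}")    # 60
```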

Comment by ciphergoth on List of civilisational inadequacy · 2017-12-05T03:55:17.450Z · score: 3 (1 votes) · LW · GW

"Even though gaining too much in pregnancy" is missing the word "weight" I think.

Comment by ciphergoth on Security Mindset and the Logistic Success Curve · 2017-11-28T05:40:30.188Z · score: 19 (5 votes) · LW · GW

I can't work out where you're going with the Qubes thing. Obviously a secure hypervisor wouldn't imply a secure system, any more than a secure kernel implies a secure system in a non-hypervisor based system.

More deeply, you seem to imply that someone who has made a security error obviously lacks the security mindset. If only the mindset protected us from all errors; sadly it's not so. But I've often been in the situation of trying to explain something security-related to a smart person, and sensing a gap that seemed wider than a mere lack of knowledge.

Comment by ciphergoth on Against Modest Epistemology · 2017-11-16T19:17:40.688Z · score: 2 (3 votes) · LW · GW

Please don't bold your whole comment.

Comment by ciphergoth on Living in an Inadequate World · 2017-11-10T06:04:09.026Z · score: 13 (9 votes) · LW · GW

Looks like this hasn't been marked as part of the "INADEQUATE EQUILIBRIA" sequence: unlike the others, it doesn't carry this banner, and it isn't listed in the TOC.

Comment by ciphergoth on Why no total winner? · 2017-10-22T01:04:56.956Z · score: 3 (1 votes) · LW · GW

I agree, if the USA had decided to take over the world at the end of WWII, it would have taken absolutely cataclysmic losses. I think it would still have ended up on top of what was left, and the world would have rebuilt, with the USA on top. But not being prepared to make such an awful sacrifice to grasp power probably comes under a different heading than "moral norms".

Comment by ciphergoth on Seek Fair Expectations of Others’ Models · 2017-10-20T03:51:36.429Z · score: 8 (3 votes) · LW · GW

There are many ways to then conclude that AGI is far away where far away means decades out. Not that decades out is all that far away. Eliezer conflating the two should freak you out. AGI reliably forty years away would be quite the fire alarm.

I don't think I understand this point. Is the conflation "having a model of the long-term that builds on a short-term model" and "having any model of the long term", in which case the conflation is akin to expecting climate scientists to predict the weather? If so I agree that that's a slip up, but my alarm level isn't raised to "freaked out" yet, what am I missing?

Comment by ciphergoth on The Typical Sex Life Fallacy · 2017-10-15T23:28:51.273Z · score: 7 (3 votes) · LW · GW

I move in circles where asking "why is X bad?" is treated as being as bad as X itself. So for the avoidance of doubt, I do not think that your comment here makes you a bad person.

I'm trying to imagine a conversation where one person expresses a preference about the other's pubic hair that wouldn't be inappropriate, and I'm struggling a little. Here's what I've come up with:

  • A BDSM context in which that sort of thing is a negotiated part.

  • The two have been playing for a while and are intimate enough for that to be appropriate.

  • The other person asks, and gets an honest answer.

It sounds like none of these are what you have in mind; can you paint me a more detailed example?

Comment by ciphergoth on There's No Fire Alarm for Artificial General Intelligence · 2017-10-15T22:39:50.152Z · score: 16 (5 votes) · LW · GW

Which parts do you think are not needed?

Comment by ciphergoth on There's No Fire Alarm for Artificial General Intelligence · 2017-10-15T22:17:28.070Z · score: 7 (3 votes) · LW · GW

Dawkins's "Middle World" idea seems relevant here. We live in Middle World, but we investigate phenomena across a wide range of scales in space and time. It would at least be a little surprising to discover that the pace at which we do it is special and hard to improve on.

Comment by ciphergoth on Deontologist Envy · 2017-10-02T19:51:54.565Z · score: 8 (3 votes) · LW · GW

Thank you! Hooray for this sort of thing :)

Comment by ciphergoth on LW 2.0 Strategic Overview · 2017-09-15T21:24:35.930Z · score: 8 (8 votes) · LW · GW

Also I have already read them all more than once and don't plan to do so again just to get the badge :)

Comment by ciphergoth on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-15T14:15:44.764Z · score: 17 (7 votes) · LW · GW

Facebook-like reactions.

I would like to be able to publicly say eg "hear hear" on a comment or post, without cluttering up the replies. Where the "like" button is absent, eg on Livejournal, I sorely miss it. This has nothing to do with voting and should be wholly orthogonal; voting is anonymous and feeds into the ranking algorithm, whereas this is more like a comment that says very little and takes up minimal screen real estate, but allows people to get a quick feel for who thinks what about a comment.

Starting with "thumbs up" would be a big step forward, but I'd hope that other reactions would become available later, eg "disagree connotationally" or "haha" or "don't like the tone" or "I want to help with this". Each should be associated with a small graphic, with a hover-over to show the meaning as well as who applied the reaction. Like emoji in eg Discord and unlike Facebook, a single user can apply multiple reactions to the same comment, so I can say both "agree" and "don't like the tone".
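
A minimal sketch of the data model this implies: reactions are public, per-user, and non-exclusive, and entirely separate from anonymous voting. The class and method names here are illustrative, not a proposed API.

```python
from collections import defaultdict

REACTIONS = {"thumbs up", "agree", "disagree connotationally", "haha",
             "don't like the tone", "I want to help with this"}

class CommentReactions:
    def __init__(self):
        # reaction -> set of usernames who applied it (shown on hover)
        self._by_reaction = defaultdict(set)

    def add(self, user, reaction):
        if reaction not in REACTIONS:
            raise ValueError(f"unknown reaction: {reaction}")
        self._by_reaction[reaction].add(user)

    def summary(self):
        # what the small graphics would display: each reaction plus who applied it
        return {r: sorted(users) for r, users in self._by_reaction.items()}

c = CommentReactions()
c.add("ciphergoth", "agree")
c.add("ciphergoth", "don't like the tone")  # multiple reactions per user are allowed
print(c.summary())
```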

I apologise for having buried this feature request in the depths of not one but two comment threads before putting it here :)

Comment by ciphergoth on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2017-09-15T14:03:15.932Z · score: 4 (3 votes) · LW · GW

I think these are two wholly orthogonal functions: anonymous voting, and public comment badges. For badges, I'd like to see something much more like eg Discord, where you can apply as many as you think apply (eg both "agree" and "don't like tone"), rather than Facebook, where you can apply at most one of the six options.

EDIT: now a feature request.

Comment by ciphergoth on Welcome to Lesswrong 2.0 · 2017-09-15T14:01:12.041Z · score: 5 (2 votes) · LW · GW

I think publicly applying badges to a comment should be completely orthogonal to anonymously voting on it. EDIT: now a feature request.

Comment by ciphergoth on LW 2.0 Strategic Overview · 2017-09-15T03:53:27.322Z · score: 18 (18 votes) · LW · GW

Thank you all so much for doing this!

Eigenkarma should be rooted in the trust of a few accounts that are named in the LW configuration. If this seems unfair, then I strongly encourage you not to pursue fairness as a goal at all - I'm all in favour of a useful diversity of opinion, but I think Sybil attacks make fairness inherently synonymous with trivial vulnerability.

I am not sure whether votes on comments should be treated as votes on people. I think that some people might make good comments who would be bad moderators, while I'd vote up the weight of Carl Shulman's votes even if he never commented.
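
As a sketch of what "rooted in the trust of a few accounts" could look like: a toy personalized-PageRank-style propagation, where trust flows outward from a configured root set, so a cluster of Sybil accounts that no rooted account endorses picks up essentially no weight. The function name, graph, and parameters are illustrative, not the actual LW karma system.

```python
def eigenkarma(endorsements, roots, damping=0.85, iterations=50):
    """endorsements maps each user to the set of users they endorse (eg upvote)."""
    users = set(endorsements) | set(roots)
    for endorsed in endorsements.values():
        users |= endorsed
    trust = {u: (1.0 / len(roots) if u in roots else 0.0) for u in users}
    for _ in range(iterations):
        # trust re-enters only through the root set, then flows along endorsements
        new = {u: ((1 - damping) / len(roots) if u in roots else 0.0) for u in users}
        for u in users:
            endorsed = endorsements.get(u, set())
            if endorsed:
                share = damping * trust[u] / len(endorsed)
                for v in endorsed:
                    new[v] += share
        trust = new
    return trust

graph = {"root": {"alice"}, "alice": {"bob"}, "sybil1": {"sybil2"}, "sybil2": {"sybil1"}}
print(eigenkarma(graph, roots={"root"}))  # alice and bob gain trust; the sybils stay at zero
```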

The feature map link seems to be absent.

Comment by ciphergoth on Circles of discussion · 2016-12-19T22:36:39.244Z · score: 0 (0 votes) · LW · GW

Thinking about it, I'd rather not make the self-rating visible. I'd rather encourage everyone to assume that the self-rating was always 2, and encourage that by non-technical means.

Comment by ciphergoth on Circles of discussion · 2016-12-19T20:28:13.746Z · score: 0 (0 votes) · LW · GW

That makes sense. I'd like people to know when what they're seeing is out of probation, so I'd rather say that even if you have set the slider to 4, you might still see some 3-rated comments that are expected to go to 4 later, and they'll be marked as such, but that's just a different way of saying the same thing.

Comment by ciphergoth on Circles of discussion · 2016-12-19T20:26:38.544Z · score: 0 (0 votes) · LW · GW

It's hard to be attack resistant and make good use of ratings from lurkers.

The issues you mention with ML are also issues with deciding who to trust based on how they vote, aren't they?

It's hard to make a strong argument for "shouldn't be allowed as a user setting". There's an argument for documenting the API so people can write their own clients and do whatever they like. But you have to design the site around the defaults. Because of attention conservation, I think this should be the default, and that people should know that it's the default when they comment.

Comment by ciphergoth on Circles of discussion · 2016-12-18T22:32:41.855Z · score: 0 (0 votes) · LW · GW

This is a really great summary. Maybe we should Skype or something to drill down further on our disagreement? Maybe when I'm in London, and so closer to you in timezone?

Comment by ciphergoth on Circles of discussion · 2016-12-18T22:29:38.694Z · score: 0 (0 votes) · LW · GW

You should rate highly people whose judgment you would trust when it differed from yours. We can use machine learning to find people who generate similar ratings to you, if the need arises.
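
For the machine-learning part, even something as simple as cosine similarity over the ratings two users have both given would be a starting point. A sketch, with made-up item ids and ratings:

```python
from math import sqrt

def rating_similarity(mine, theirs):
    """mine, theirs: dicts mapping item id -> numeric rating."""
    common = mine.keys() & theirs.keys()
    if not common:
        return 0.0
    dot = sum(mine[i] * theirs[i] for i in common)
    norm_a = sqrt(sum(mine[i] ** 2 for i in common))
    norm_b = sqrt(sum(theirs[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

print(rating_similarity({"post1": 4, "post2": 1, "post3": 3},
                        {"post1": 5, "post2": 1, "post4": 2}))  # ~0.999: very similar raters
```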

I thought about the Slashdot thing, but I don't think it makes the best use of people's time. I'd like people reading only the innermost circle to be able to basically ignore the existence of the other circles. I don't even want a prompt that says "7 hidden comments".

Comment by ciphergoth on Circles of discussion · 2016-12-16T18:56:02.179Z · score: 0 (0 votes) · LW · GW

I also like the idea of lots of tags on content, both from submitters and from others. Who tagged what with what is public, not part of the ratings system, just a way to comment on things without commenting. Like Facebook's reaction emoji, except not mutually exclusive.

Comment by ciphergoth on Circles of discussion · 2016-12-16T18:54:42.607Z · score: 0 (0 votes) · LW · GW

Making the self-rating visible for the purpose you state has real value. Will think about that.

BTW it's "canon" not "cannon" - cheers!

Comment by ciphergoth on Circles of discussion · 2016-12-16T18:52:46.137Z · score: 0 (0 votes) · LW · GW

If you don't want a cabal of super users running the show, you won't like anything I propose I think :) But lots of people comment on SSC, or in other forums where one person is basically in charge and will delete what they don't like. If adding content to this site turns out to be a good way to get smart people to comment interestingly on your content, that will be a strong incentive.

Comment by ciphergoth on Circles of discussion · 2016-12-16T18:47:15.128Z · score: 1 (1 votes) · LW · GW

I think your ideas are very compatible with my existing proposal!

I agree about the "too soon" aspect, but this basically came to me fully formed, and it wasn't clear to me that teasing out a part of it to present instead of presenting it all was the right thing. Maybe I should have held off on proposing solutions.

Comment by ciphergoth on Circles of discussion · 2016-12-16T17:43:47.221Z · score: 1 (1 votes) · LW · GW

The site has to have a clear owner, and they decide on the root set. Technically, it's part of the site configuration, and you need admin access to the site to configure it.

Comment by ciphergoth on Circles of discussion · 2016-12-16T17:05:59.198Z · score: 1 (1 votes) · LW · GW

Yes, exactly. I don't think I've done as good a job of being clear as I'd like, so I'm glad you were able to parse this out!

Comment by ciphergoth on Circles of discussion · 2016-12-16T16:01:33.556Z · score: 1 (1 votes) · LW · GW

Can you say something about who would be able to see the individual ratings of comments and users?

Only people who police spam/abuse; I imagine they'd have full DB access anyway.

What do you see are the pros and cons of this proposal vs other recent ones.

An excellent question that deserves a longer answer, but in brief: I think it's more directly targeted towards the goal of creating a quality commons.

What's the reason for this?

Because I don't know how else to use the attention of readers who've pushed the slider high. Show them both the comment and the reply? That may not make good use of their attention. Show them the reply without the comment? That doesn't really make sense.

Note that your karma is not simply the sum or average of the scores on your posts; it depends more on how people rate you than on how they rate your posts.

This seems to create an opening for attack.

Again, the abuse team really need full DB access or something very like it to do their jobs.

Can you point to an intro to attack resistant trust metrics

The only adequate introduction I know of is Raph Levien's PhD draft, which I encourage everyone thinking about this problem to read.

Why would it be annoying?

When an untrusted user downvotes, a trusted user or two will end up being shown that content and asked to vote on it; it thus could waste the time of trusted users.

Comment by ciphergoth on Circles of discussion · 2016-12-16T14:50:35.921Z · score: 1 (1 votes) · LW · GW

This is kind of why I want to achieve a "best of both worlds" effect - this creates something like a closed discussion group inside a convenient/casual Reddit, and good discussion can be pulled from the latter into the former.

Comment by ciphergoth on Circles of discussion · 2016-12-16T14:32:45.171Z · score: 8 (8 votes) · LW · GW

This is seeking a technological solution to a social problem.

It is still strange to me that people say this as if it were a criticism.

Comment by ciphergoth on CFAR’s new focus, and AI Safety · 2016-12-08T20:35:58.363Z · score: 2 (2 votes) · LW · GW

I don't think the first problem is a big deal. No-one worries about "I boosted that from a Priority 3 to a Priority 1 bug".

Comment by ciphergoth on On the importance of Less Wrong, or another single conversational locus · 2016-11-29T01:49:53.129Z · score: 12 (12 votes) · LW · GW

I predict that whatever is in this drop will not suffice. It will require at minimum someone who has both significant time to devote to the project, and the necessary privileges to push changes to production.