Posts

How do you visualize the Poisson PDF? 2020-07-05T15:54:49.343Z · score: 6 (4 votes)
Old-world Politics Fallacy 2020-06-23T12:32:55.946Z · score: 7 (3 votes)
SlateStarCodex deleted because NYT wants to dox Scott 2020-06-23T07:51:30.859Z · score: 92 (47 votes)
Rudi C's Shortform 2020-06-22T11:03:02.043Z · score: 4 (1 votes)
Creating better infrastructure for controversial discourse 2020-06-16T15:17:13.204Z · score: 68 (31 votes)
How do you find good content on Youtube? 2020-06-13T12:29:00.859Z · score: 6 (2 votes)
Why isn’t assassination/sabotage more common? 2020-06-04T18:04:40.509Z · score: 9 (9 votes)
What newsletters are you subscribed to, and why? 2020-05-14T14:47:17.584Z · score: 8 (4 votes)
What should I study to understand real-world economics? (I.e., To gain financial literacy) 2020-04-08T19:22:02.807Z · score: 4 (2 votes)
Idea: Create a podcast of admin tagged posts using AI TTS like Amazon Polly 2020-04-08T10:07:52.694Z · score: 4 (2 votes)
What is the literature on effects of vaccines on the central nervous system? 2020-03-31T08:43:40.562Z · score: 4 (2 votes)
How do you study a math textbook? 2020-03-24T18:43:56.815Z · score: 20 (5 votes)
Is cardio enough for longevity benefits of exercise? 2020-01-03T19:57:18.167Z · score: 4 (2 votes)
What is your recommended statistics textbook for a beginner? 2019-12-28T21:19:38.200Z · score: 5 (3 votes)
What subfields of mathematics are most useful for what subfields of AI? 2019-12-06T20:45:31.606Z · score: 5 (3 votes)
What sources (i.e., blogs) of nonfiction book reviews do you find most useful? 2019-11-28T19:43:21.408Z · score: 3 (2 votes)
What video games are more famous in our community than in the general public? 2019-10-23T19:07:29.417Z · score: 11 (7 votes)
What are your recommendations on books to listen to when doing, e.g., chores? 2019-09-26T14:50:38.019Z · score: 6 (5 votes)
Encourage creating repositories on Github instead of Lesswrong 2019-09-26T09:34:12.718Z · score: 2 (1 votes)
How to see the last update date of a post? 2019-09-26T08:37:12.940Z · score: 2 (1 votes)
What things should everyone websearch? 2019-09-25T22:24:42.413Z · score: 2 (3 votes)
What are the studies and literature on the traditional medicine theory of humorism? 2019-09-18T16:06:36.437Z · score: 4 (3 votes)
How do I reach a conclusion on how many eggs per week are healthy? 2019-09-15T15:44:16.753Z · score: 4 (3 votes)
What are some podcasts that just read aloud worthwhile content? 2019-09-09T08:41:29.970Z · score: 6 (5 votes)
Do you have algorithms for passing time productively with only your own mind? 2019-09-07T20:48:42.084Z · score: 8 (6 votes)

Comments

Comment by rudi-c on Featured Tags: July 2020 · 2020-07-09T20:39:57.903Z · score: 3 (2 votes) · LW · GW

Having a way to separate new tag notifications from other notifications would be a plus. I also think the notification system shouldn't trigger for old (>6 months) posts, at least not in this initial phase; my inbox is full of tag notifications right now.

Comment by rudi-c on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-07-08T09:30:14.244Z · score: 1 (1 votes) · LW · GW

Thanks for the long reply. 

An aside: I think "moderate relativism" is somewhat tautologically true, but it's also a much-abused and easy-to-abuse idea that shouldn't be endorsed under that name. Saying morality is "value-centric" or "protocol-based" (each referring to a different part of "morality"; by the second I mean a social protocol for building consensus and coordination) is perhaps a better choice of words. After all, "relativism" implies that, e.g., we can't punish people who commit honor killings. This is mostly false, and does not follow from the inherent arbitrariness of morality.

On our inability to fight bad epistemics: I think this is somewhat of an advantage. It seems to me that "traditional rationality" was, and is, mostly focused on this problem of consensus truth, but LW abandoned that fort, having seen that smarter, more rational people could do better for themselves if they stopped fighting the byzantine epistemics of more typical people. So on LW we speak of the importance of priors and Bayes, which is pretty much a mindkiller for "religious" (broadly conceived) people. A theist will just say that his prior on God is astronomical (which might actually be true), so the current Bayes factor won't make him stop believing. All in all, building an accurate map is a different skillset from getting other people to accept your map, and it might be a good idea to treat the two somewhat separately. My own suspicion is that there is something akin to the g factor for being rational, and of course the g factor itself is highly relevant; so I think making normal people "rational" might not even be possible. Sure, (immense) improvement is possible, but I doubt most people will come around to "our" ways. For one thing, epistemic rationality often makes one worse off by default, especially in more "normal" social settings. I have often contrasted my father's intelligent irrationality with my own rationality, and he usually comes out much ahead.

Comment by rudi-c on What should we do about network-effect monopolies? · 2020-07-06T18:07:04.823Z · score: 1 (1 votes) · LW · GW

Sorry, I meant to ask whether it's a median or a mean; the words escaped me at the time.

Comment by rudi-c on Conceptual engineering: the revolution in philosophy you've never heard of · 2020-07-06T15:52:57.252Z · score: 1 (1 votes) · LW · GW

(Comment on the meta-seq)

I think most LWers will agree that philosophy has a lot of useful concepts to contribute (probably including near-duplicates of almost all the ideas on LW), but the problem is finding the good stuff. It's pretty obvious that the signal-to-noise ratio is much, much better on LW than in "philosophy." And philosophy does not even mean analytic philosophy to most people: most people I know who wanted to get their feet wet in philosophy started with Sophie's World. The very fact that you're deploying so much expert knowledge and a bounty system to find LW's ideas in philosophy is strong evidence against philosophy.

I still think the meta-seq project is awesome. I hope you can publish a bibliography of high-quality philosophical material.

Comment by rudi-c on Schelling Categories, and Simple Membership Tests · 2020-07-06T15:18:52.985Z · score: 3 (2 votes) · LW · GW

The writing style would benefit from not assuming that readers will open and read the links. A short summary (one sentence in a parenthetical) would be helpful for important links. Perhaps you're assuming LessWrong's hover preview in choosing this style, but that preview doesn't work in some contexts (including mobile and e-readers).

Comment by rudi-c on What should we do about network-effect monopolies? · 2020-07-06T15:04:12.857Z · score: 1 (1 votes) · LW · GW

Is that average ad revenue a mean or a real average? 

Comment by rudi-c on Rob B's Shortform Feed · 2020-07-02T10:05:12.205Z · score: 1 (1 votes) · LW · GW

How is the signal being kept “costly/honest” though? Is the pain itself the cost? That seems somewhat weird ...

Comment by rudi-c on Rudi C's Shortform · 2020-07-02T09:10:04.651Z · score: 1 (1 votes) · LW · GW

I just read Bostrom's Pascal's Mugging; can't the problem be solved as follows?

I have a probability estimate E0 in my head for the mugger giving me X (X being a lot of) utility if I give them my money. E0 is not a number, as my brain does not seem to work with our traditional floating-point numbers. What data structure actually represents E0 is not clear to me, but I can say E0 is a feeling of "empirically next to impossible, game-theoretically inadvisable to act on it being true." Now, what's the probability of my getting X utility tomorrow without giving the mugger my money? Let's call that E1. E1 is "empirically next to impossible." So giving my money to the mugger does NOT increase my expected utility gain at all! In fact, it decreases it, as I process E0 as a lower probability than E1 (because E0 is game-theoretically negative while E1 is neutral).

Now, you might say this is not solving the problem but bypassing it. I don't feel that's true: anyone who has studied numerical computation knows that representation errors matter and that we can never have perfectly precise numbers.
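As a concrete illustration of the ordering argument, here is a minimal Python sketch under my own assumptions: probabilities are stored as coarse qualitative levels rather than precise floats, and a game-theoretic penalty demotes hypotheses that would reward extortion. The level names and the penalty rule are hypothetical placeholders, not anything from Bostrom's paper.

```python
from enum import IntEnum

class Plausibility(IntEnum):
    # Coarse, ordered levels standing in for "my brain's data structure".
    NEXT_TO_IMPOSSIBLE = 0
    VERY_UNLIKELY = 1
    PLAUSIBLE = 2

def effective_level(level: Plausibility, rewards_extortion: bool) -> int:
    """Demote hypotheses that it would be game-theoretically bad to act on."""
    return int(level) - (1 if rewards_extortion else 0)

# E0: the mugger pays out X utility if I hand over my money.
E0 = effective_level(Plausibility.NEXT_TO_IMPOSSIBLE, rewards_extortion=True)
# E1: I get X utility tomorrow anyway, without paying.
E1 = effective_level(Plausibility.NEXT_TO_IMPOSSIBLE, rewards_extortion=False)

print(E0 < E1)  # True: paying the mugger never looks like an expected-utility gain
```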

Comment by rudi-c on The noncentral fallacy - the worst argument in the world? · 2020-07-01T07:30:41.692Z · score: 1 (1 votes) · LW · GW

A related, already-named bias is the halo effect.

Comment by rudi-c on Rudi C's Shortform · 2020-06-30T13:59:11.950Z · score: 1 (1 votes) · LW · GW

Are there any good introductory textbooks on decision theory? I searched some months ago, but only found a nontechnical philosophical book ...

Comment by rudi-c on ozziegooen's Shortform · 2020-06-30T13:58:03.348Z · score: 1 (1 votes) · LW · GW

Nothing is a low bar though. :)

Comment by rudi-c on The Illusion of Ethical Progress · 2020-06-29T10:34:01.336Z · score: 1 (1 votes) · LW · GW

I enjoyed the first part of the post on how our sense of ethical progress might be an illusion. It’d do nicely as an isolated post.

The second part, which goes on to advocate mysticism, reads like a non sequitur. It's an extraordinary claim that most secular STEM people have low priors for, no evidence is presented for it, and it's implied that mysticism somehow escapes the criticism of the first part. That implication does not follow for anyone not already in the choir.

So the whole post feels like propaganda: a somewhat interesting point is made, and then an unrelated position is presented. It relies on the emotional goodwill of the first part to carry the second.

Comment by rudi-c on The Illusion of Ethical Progress · 2020-06-29T10:25:37.949Z · score: 5 (3 votes) · LW · GW

I am not even an amateur on mysticism, but I doubt the "shared maps" hypothesis. The religions I know of certainly have contradictory maps. Another hypothesis I'd advance is that mystics generally speak in vague terms; i.e., they share very rough templates of their maps that can then be retrofitted to many different maps. Mystics who seek out other disciplines also have an incentive to make themselves seem united and similar. (My propaganda-laden Islamic textbooks often labor over how all religions are essentially the same and Islam is just their latest version in a linear progression.) It might even be that the process of extracting those vague templates from their maps naturally produces similar templates. Their maps almost certainly share several constraints of the medium that make them more similar, like the constraint of human appeal and of allowing hierarchical growth (so that the novice can always "aspire" to the master's level).

Comment by rudi-c on The Illusion of Ethical Progress · 2020-06-29T10:11:03.213Z · score: 10 (4 votes) · LW · GW

This is a complex claim not backed by a lot of evidence. My heuristics scream pseudoscience.

Comment by rudi-c on Map Errors: The Good, The Bad, and The Territory · 2020-06-27T18:02:08.750Z · score: 1 (1 votes) · LW · GW

I can summarize this post as follows:

There is always a danger of being overconfident in our beliefs. So it is a very good idea to take conventional wisdom seriously when dealing with high-stakes situations, and to plan in a way that won't lead to disaster according to the outside view.

Comment by rudi-c on Why are all these domains called from Less Wrong? · 2020-06-27T14:58:06.293Z · score: 14 (7 votes) · LW · GW

What about greaterwrong.com?

Comment by rudi-c on TurnTrout's shortform feed · 2020-06-27T14:12:40.714Z · score: 1 (1 votes) · LW · GW

How do you estimate how hard your invented problems are?

Comment by rudi-c on Rudi C's Shortform · 2020-06-27T14:10:02.692Z · score: 1 (1 votes) · LW · GW

Do you know of a free solution (not necessarily Free Software, though that's preferred) that I can use to turn speech into text? (Also, no cloud solutions; I don't have the credit card or Western phone number they require.)

Comment by rudi-c on There's an Awesome AI Ethics List and it's a little thin · 2020-06-26T12:35:08.625Z · score: 9 (4 votes) · LW · GW

We need a general mechanism for linking GitHub repositories to the community here. I have previously said that the best-textbooks post should be migrated, and I have seen quite a number of "aggregation" posts on the forum. See, e.g., this podcast list.

One solution is to add the ability for a post to render Markdown fetched from an external URL. This, plus tagging the "aggregation" posts, plus showing some such tags as a section on the frontpage, would pretty much solve the problem. Can someone ping an admin to pitch in on this?
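As a rough sketch of what that feature could look like (assuming the Python `requests` and `markdown` packages; the function name and URL are hypothetical placeholders):

```python
import requests
import markdown

def render_external_post(source_url: str) -> str:
    """Fetch Markdown from an external URL and render it to HTML for display as a post."""
    resp = requests.get(source_url, timeout=10)
    resp.raise_for_status()
    return markdown.markdown(resp.text)

# The post stores only the URL, so edits to the repository automatically
# show up on the forum (hypothetical example URL).
html = render_external_post(
    "https://raw.githubusercontent.com/example-user/awesome-textbooks/master/README.md"
)
print(html[:200])
```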

Comment by rudi-c on Pongobbets · 2020-06-25T08:24:28.504Z · score: 5 (2 votes) · LW · GW

I think you’re modeling it wrong. Overtheorizing, if I may say.

You need to first judge an offering on its value, and then consider its costs. Among those costs might be that it tries to trick you; there is nothing special about this particular cost. It might be big for, e.g., Instagram, and then you will naturally reach the conclusion that Instagram is not worth it.

I think the crucial insight is that most of us do not intuitively perceive Instagram as an actively malicious manipulator, because its attack vector is novel and was never encountered in our evolutionary history. Generally, institutions and systems are much stronger now than they were in the past, but our intuitions disregard them. Another example is how much people care about the object-level fact that Facebook built a spying VPN, but hardly at all about how Facebook is hoarding power through network effects.

Comment by rudi-c on Requesting feedback/advice: what Type Theory to study for AI safety? · 2020-06-24T08:12:56.363Z · score: 2 (2 votes) · LW · GW

Disclaimer: I am no expert or even amateur on this topic.

The Little Prover and The Little Typer might also be of interest. I played around some with https://softwarefoundations.cis.upenn.edu/ and liked it; it's an interactive textbook. I recommend using it with Emacs (Spacemacs has a Coq layer).

Comment by rudi-c on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T20:05:53.185Z · score: 11 (10 votes) · LW · GW

There is a power imbalance in place. It's not as if the NYT is engaging this side in its decision. It's also true that the NYT's norms are self-serving while hurting others, and this community does not have anywhere near the power to "cancel" the NYT. Even if we assume "mistake theory," making the NYT hurt a bit (which is the strongest response this community can hope for) is necessary for creating a feedback loop. Mistakes are seldom corrected when their price is paid by others.

Comment by rudi-c on [META] Building a rationalist communication system to avoid censorship · 2020-06-23T18:40:18.355Z · score: 3 (2 votes) · LW · GW

I have seen captchas like "What's the capital of [Some Country]?" on some forums. We could add basic captchas that require some high-school math and basic familiarity with the Sequences to verify new user registrations, and then wall off certain posts from unverified users (a rough sketch of such a gate is below).

I'm not sure this will be that effective, though. All it takes to defeat it is some screenshots.
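For concreteness, a minimal sketch of such a gate; the questions, answers, and karma threshold are hypothetical placeholders:

```python
# Gate questions a new registrant must answer before certain posts are unlocked.
QUESTIONS = {
    "What is the derivative of x^2 with respect to x?": "2x",
    "Complete the phrase from the Sequences: 'the map is not the ___'": "territory",
}

def passes_gate(answers: dict) -> bool:
    """True only if every gate question is answered correctly (case-insensitive)."""
    return all(
        answers.get(question, "").strip().lower() == answer.lower()
        for question, answer in QUESTIONS.items()
    )

def can_view_walled_post(user: dict) -> bool:
    """Unverified users are walled off; enough karma could also substitute for the gate."""
    return user.get("verified", False) or user.get("karma", 0) >= 100

# Example: one correct answer out of two is not enough.
print(passes_gate({"What is the derivative of x^2 with respect to x?": "2x"}))  # False
```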

Comment by rudi-c on [META] Building a rationalist communication system to avoid censorship · 2020-06-23T18:34:52.437Z · score: 2 (2 votes) · LW · GW

But as the OP notes, we only need to defend against mobs and hard evidence. If, e.g., Scott writes pieces under throwaway accounts that are obviously in his style, he can always claim it's an imitator. The mob lives for easy outrage, so as long as we decrease that factor, we have a partial solution.

Comment by rudi-c on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T16:54:03.485Z · score: 8 (5 votes) · LW · GW

Considering Scott's name is known by many, spreading false names might break the taboo against doxxing and let the genie out of the bottle. If his name does get out anyway, then the solution might work, but I'm not hopeful. What methods are you thinking of for spreading the false info? Twitter? Wikipedia?

Comment by rudi-c on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T14:55:16.010Z · score: 14 (11 votes) · LW · GW

Scott can be wrong, though. In fact, if his blog does get shut down, that is a major update against his conciliatory worldview. That post is also years old; it might not even be his current position.

Another complication is that, in the current climate of victim-worshiping, he has incentives not to act belligerently himself (or to ask others to do so). Other people retaliating for Scott without his asking will be much better for his reputation. (What I'm trying to say is, he has incentives to underplay his desire for revenge. Obviously I am not in his head, so all this is mere speculation.)

Comment by rudi-c on Forbidden Technology · 2020-06-23T11:20:26.441Z · score: 3 (2 votes) · LW · GW

I suggest you try Doom Emacs. I switched recently and refactored my entire config to it. It's much better than Spacemacs. Faster, less buggy, less cluttered.

I also urge other people to take this post with some healthy scepticism. There are a lot of costs to replacing a mainstream tool with a niche product, and it is almost always a tradeoff that you need to think about consciously and keep evaluating empirically over time. Take Emacs: I personally went from someone who didn't know much about CLIs and the shell to someone who is better than most at zsh scripting and knows some sysadmining. (Of course, Emacs only pushed me in the right direction; it began the process, and I rolled with it because CLIs rock.) I lost a lot of time to Emacs, though. I spent days fixing broken configs built on fundamentally broken software that could never work stably. I also lost a lot of time finding Emacs in the first place: I tried a lot of alternatives, with each "try" wasting days by itself. All in all, know that there are usually good reasons why something stays niche. The trick is to find the niche products that are suitable for you.

Comment by rudi-c on Rudi C's Shortform · 2020-06-23T11:06:42.302Z · score: 1 (1 votes) · LW · GW

But I don't see much serious technical research on societal alignment at all. (Most political science is just high-status people voicing charismatic opinions; nothing technical.) That cultural evolution has (somewhat) failed at that endeavor (it still mostly works, to be fair) does not mean we should write the project off as doomed.

Comment by rudi-c on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T11:02:34.359Z · score: 1 (1 votes) · LW · GW

Can you give me a heads-up on RationalWiki? It did not shout "unfair bullshit" to me. Are their facts wrong, or are they just mean?

Comment by rudi-c on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T08:26:16.025Z · score: 6 (4 votes) · LW · GW

Is there any legal recourse for Scott?

Comment by rudi-c on News ⊂ Advertising · 2020-06-22T22:36:45.918Z · score: 6 (3 votes) · LW · GW

I think your problem (regardless of its causes) is how to aggregate your news better. There is some worthwhile news every now and then, and completely cutting out news consumption makes one rather stupid and blind (to trends, to global affairs, ...). Considering we currently lack effective tools for aggregation, going cold turkey can be worth the costs (assuming the most important news reaches you via your social circle). I personally prefer to pay the costs of aggregating my news, which I currently do by subscribing to a few RSS blogs, lobste.rs top posts, Hacker News +500 posts, a few Telegram channels (themselves news aggregators), the TLDR newsletter, and O'Reilly's monthly trends newsletter.

Comment by rudi-c on The Best Textbooks on Every Subject · 2020-06-22T12:03:07.309Z · score: 4 (4 votes) · LW · GW

We should migrate this post to a GitHub Awesome list. That medium works best for this kind of semi-distributed curation.

Comment by rudi-c on Rudi C's Shortform · 2020-06-22T11:03:07.709Z · score: 8 (3 votes) · LW · GW

Reading AI alignment posts here has made me realize how many of these ideas could potentially also apply to societal structures. Our social institutions are kind of like an AI system that uses humans as its computing units. Unfortunately, our institutions are not that "friendly." In fact, badly aligned institutions are probably a major cause of the lack of progress in the developing world. Has there been much thought/discussion on these topics? Is there potential for adapting AI safety research to social mechanism design?

Comment by rudi-c on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-06-22T10:58:44.709Z · score: 1 (1 votes) · LW · GW

Have you considered using Telegram groups plus channels? A Telegram channel can have an associated group: the important stuff goes into the channel, which is auto-forwarded to the group, and people can discuss it in the group. The network effects are against Telegram, true, but it is quite a superior platform. It also doesn't perpetuate the bad design of most social media, which wastes a lot of your time on ultimately unimportant stuff.

Comment by rudi-c on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-06-22T10:51:11.682Z · score: 1 (1 votes) · LW · GW

I suggest decoupling "modeling" and "optimization" from "world." In fact, what does "world" even mean there? Use case in mind: an "equilibria" tag for posts about modeling and understanding equilibria and for posts about changing (optimizing) them. Currently this needs two new tags, but with the decoupling it would be just one.

Other tags:

  • recommendations: posts where the author recommends something to others. (Like the “Connected papers” post)
  • recommendation-aggregation: posts where the author asks for recommendations. (Like the “best textbooks” post.)
  • career-advice
  • undergrad-advice
  • graduate-advice
  • transient: For posts whose relevance is highly time sensitive. A lot of covid posts fall under this, for example. This can help people avoid reading stale posts in the future. Ideally, we want a system where we can filter out transient posts from X time ago.
  • controversial: These posts should be shown only to users with some karma threshold.
  • platitude: posts whose content is not novel but the message merits repetition.
  • empirical: posts that analyze empirical data. A lot of SSC’s posts fall under this category.
  • technical-math: posts that require mathematical maturity to understand (i.e., beyond a good high school math foundation)
  • technical-cs: posts that require familiarity with CS and programming to understand.
  • help-needed: posts that need some help to accomplish their objective. E.g., someone has a good software idea but needs help in implementing it. The tag is used a lot on Github, but I think it can be used for things other than software, too.

Comment by rudi-c on Insights Over Frameworks · 2020-06-21T11:06:36.912Z · score: 3 (2 votes) · LW · GW

I personally believe insights more if I can tie them to some theory. Even if I don’t quite believe in that theory, the mere fact that there is a possible theory under which the insight is true, increases that insight’s credibility in my mind. I think this is somewhat of a cognitive error (it might also be necessary for building models, i.e., it might be a necessary evil of a learning brain), but it’s what it is.

Comment by rudi-c on The point of a memory palace · 2020-06-20T08:22:46.263Z · score: 2 (2 votes) · LW · GW

Truth be told, learning is never taught, period. I think most students are so stupid that they just read their textbooks like a newspaper and expect to remember any of it, without doing any active recall. But it's also true that studying better requires more time and mental strain. In the case of visualization, it also probably requires (lots of) innate ability.

Comment by rudi-c on Using a memory palace to memorize a textbook. · 2020-06-20T07:54:07.587Z · score: 4 (3 votes) · LW · GW

I have tried visualization mnemonic techniques (not the palace one; I don't like palaces), and they work decently for biology-like material. They don't work as well for mathematics/CS, because there it is either easier to just "understand" the equations/rules or very hard to visualize them. Of course, I do use visualization when trying to understand some math, but there it's a thinking tool, not a mnemonic. All in all, the big drawback is that studying with visualization takes time, and it can also lower comprehension, as the visualization is overhead in some contexts.

Beware that mnemonic-enhanced memories get forgotten, too. It's slower, but it still happens. If something is really needed in your long-term memory, spaced repetition is essential. Note that most things are not needed in your long-term memory, and you can sustain only a limited amount of spaced repetition. My heuristic is that anything that needs Anki most probably doesn't belong in long-term memory in the first place: things we really need, we use, and so we remember them.

I suggest memorization via "upgrading." That is, you learn A (for example, the concept of a 2D line). Then you learn B, which uses A internally (e.g., the concept of an ellipse). This pushes A into your subconscious expertise and grows your knowledge. Vanilla Anki-style repetition will not add anything, and you'll start to hate that you need to grind just to stay the same.

I found the peg technique useful for chemistry. You decide on images for the numbers and elements, memorize them (e.g., via Anki), and then you can visualize a lot of material easily.

Comment by rudi-c on The one where Quirrell is an egg · 2020-06-17T23:49:04.105Z · score: 3 (2 votes) · LW · GW

On adding a fiction section: I think the sphere of very short rational fiction has not ripened at all.

Comment by rudi-c on Moloch's Toolbox (1/2) · 2020-06-17T22:21:29.175Z · score: 1 (1 votes) · LW · GW

The economic arguments are implicitly building on a weaker version of Simplicio's beliefs in the discussion. Fame and money are already mostly inexploitable anywhere, but human health is quite exploitable even with all these bad equilibria. The reason the Overton window is so far behind the optimal ideas is that people are stupid; the reason the Overton window is so narrow is that people are sheep. I wonder if Eliezer is biased toward dismissing people's faults as sins of the System, so that people end up looking good.

Comment by rudi-c on Moloch's Toolbox (1/2) · 2020-06-17T21:47:32.722Z · score: -1 (2 votes) · LW · GW

I think you should make this a top-level post. Scientology abused religious laws for evil ends and won handsomely, so abusing them for good causes might be possible, too.

Comment by rudi-c on Creating better infrastructure for controversial discourse · 2020-06-17T21:28:49.548Z · score: 5 (3 votes) · LW · GW

I have always assumed long-term pseudonyms will be traceable, but I have not seen much analysis or many data points on it. Do you have some links on that?

On coordinated attacks: can't a "recursive" karma system that assigns more weight to higher-karma users' votes, combined with a good moderation team and possibly an invite-based registration system, work? I think you're too pessimistic. Have many competent people researched this problem at all?
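A minimal sketch of the weighting idea (the logarithmic weight and the example numbers are hypothetical placeholders, not a worked-out proposal):

```python
import math

def vote_weight(voter_karma):
    """Higher-karma users' votes count more, with diminishing returns."""
    return 1.0 + math.log1p(max(voter_karma, 0.0))

def apply_votes(target_karma, votes):
    """votes is a list of (voter_karma, direction) pairs, direction being +1 or -1."""
    for voter_karma, direction in votes:
        target_karma += direction * vote_weight(voter_karma)
    return target_karma

# Example: 50 fresh sockpuppet accounts downvoting are outweighed by
# 20 established users upvoting.
brigade = [(0.0, -1)] * 50
regulars = [(2000.0, +1)] * 20
print(apply_votes(100.0, brigade + regulars))  # ends up well above the starting 100
```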

Dr. Hsu is now being "cancelled." He is using a Google Doc to gather signatures in his defense. That Google Doc was very hard to sign, possibly because of high genuine traffic or DDoS attacks. It's clear that we have no machinery for coordinating against cancellation. I am no expert, but I can already imagine a website that gathers academics and uses anonymized ring signatures for them to support their peers against attacks.

Honestly, the single accomplishment I have seen in this area is Sam Harris. He understood the danger early and communicated it to his followers, subsequently building his platform on direct subscriptions, which makes it somewhat "cancel-proof."

Comment by rudi-c on Mod Notice about Election Discussion · 2020-06-16T15:26:28.101Z · score: 1 (1 votes) · LW · GW

I posted my reply to this as a post. Can you ping users on posts?

Comment by rudi-c on Open & Welcome Thread - June 2020 · 2020-06-14T07:17:53.101Z · score: 1 (1 votes) · LW · GW

How are you handling the problem that rationality often pays off negatively below a critical mass (e.g., it often leads to poor signaling, or to anti-signaling if one is lucky)?

Comment by rudi-c on Hazard's Shortform Feed · 2020-06-13T12:11:31.035Z · score: 1 (1 votes) · LW · GW

I think you're falling for the curse of knowledge. Most people are so naive that they really do think that, e.g., their vision is a "direct experience" of reality. The more simplistic books are needed to bridge the inferential gap.

Comment by rudi-c on Turns Out Interruptions Are Bad, Who Knew? · 2020-06-13T09:58:41.165Z · score: 3 (2 votes) · LW · GW

On a laptop, you can easily write a script to do this. On iOS, using Siri Shortcuts, you should be able to cobble something together, too. BTW, I, too, really like my Kindle Oasis, but it sucks for academic texts. How good is this Onyx thing? What are its disadvantages compared to an iPad? How good is it for surfing the net? (I personally value the "doesn't bother my eyes" part much more than "distraction-free.")

Comment by rudi-c on Why do all out attacks actually work? · 2020-06-13T09:50:42.849Z · score: 1 (1 votes) · LW · GW

What are some examples of this algorithm being inaccurate? It seems awfully similar to the efficient market hypothesis to me. (I don't particularly believe in the EMH, but it's an accurate enough heuristic.)

Comment by rudi-c on Jimrandomh's Shortform · 2020-06-12T12:06:30.568Z · score: 1 (1 votes) · LW · GW

We need to update downward on any complex, technical data point that we don't fully understand, as China has surely paid researchers to manufacture hard-to-evaluate evidence for its own benefit (regardless of whether the accusation is true). This is a classic technique that I have seen a lot in propaganda aimed at laypeople, and there is every reason to think it has been employed against the "smart" people in the current coronavirus situation.

Comment by rudi-c on Overcorrecting As Deliberate Practice · 2020-06-10T18:31:48.996Z · score: 3 (2 votes) · LW · GW

My gut feeling has always been to overcorrect. :D