Comment by gilch on What are good resources for learning functional programming? · 2019-07-06T22:06:15.756Z · score: 2 (2 votes) · LW · GW

[Disclaimer: I haven't finished it yet]

Haskell Programming From First Principles: Pure functional programming without fear or frustration by Christopher Allen and Julie Moronuki.

The preface explains how Chris wrote it in the process of teaching Julie, an absolute beginner to computer programming, and they refined the material together using that feedback process.

I found the result more approachable than Learn You a Haskell for Great Good, which I had read earlier.

This probably falls in the "How" category. It has exercises. But there is some exposition on the "What" and "Why" as well. If you work through it, you should have achieved basic proficiency with the language, which should help you understand any ML-style language, or exposition using similar notation.

Comment by gilch on Counterspells · 2019-04-30T02:35:30.441Z · score: 1 (1 votes) · LW · GW

I actually thought the term was apt. If someone is "under the spell" of bad ideas, you need to "break the spell" somehow before they can think clearly again. This usage is not without precedent. A particular kind of spell needs a particular kind of Counterspell to break it. It's an antidote, not a panacea, so the D&D conception fits the concept better than the MtG version.

Comment by gilch on Counterspells · 2019-04-30T02:28:47.313Z · score: 4 (4 votes) · LW · GW

Dominate by what measure? In terms of scoring debate points with the audience, no. "Fallacy X" takes less time to say. People who don't know what "X" means may still assign you higher status because you named something in Latin, and assume people who can reply quickly are smarter, and therefore right.

Counterspells seem more effective when arguing in writing than in person, where the slower response time isn't as costly. You also wouldn't have to memorize them.

If the goal is to get a single interlocutor to actually change their mind, something like Street Epistemology might be better. Politics is the mind-killer. When a position is tied to identity, direct confrontations are simply attacks to be resisted. You have to cut sideways and undermine their foundations. Don't focus on the reasons why they believe (where Counterspells seem to be focused), but on how they arrive at their beliefs. If their epistemology is broken, don't expect more evidence to sway them--because they're just not listening.

But all of the above are still Dark Arts, because they're rhetorical tricks that can be selectively applied to anything you don't like. Yes, there has to be an opening: the interlocutor has to have at least appeared to make a "mistake" in reasoning, or in its presentation. That may make Counterspells less Dark than more underhanded rhetorical tricks, but which openings you choose to attack shows your own bias.

If you care about the truth, don't reach for any formulaic gotcha ammunition. Steelman. Take the most charitable interpretation of the opposing argument you can muster, and then cut it down. If you can.

Comment by gilch on Counterspells · 2019-04-28T01:36:24.981Z · score: 23 (10 votes) · LW · GW

While I do think that rhetoric is a skill worth developing, don't forget that rhetorical tricks are Dark Arts.

Many so-called "Logical Fallacies" are unfortunately applied to arguments that are valid inferences. On priors, you are better off trusting experts in their field than laymen. But this is called the "argument from authority fallacy". The correct counter is Argument Screens Off Authority. And so on. Learning Counterspells is no substitute for grokking Bayes, and may even be harmful if they just give you excuses not to listen or more ammunition to shoot your own foot with.

Also, someone should totally make a card game out of this.

Comment by gilch on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-13T04:11:38.898Z · score: 1 (1 votes) · LW · GW

IQ is measuring something real and important to life outcomes (the so-called g factor) but it is not everything that matters to life outcomes or cognition. As Keith Stanovich pointed out in What Intelligence Tests Miss, IQ is not the same thing as rationality. Your intelligence can defeat itself if misapplied. And due to human nature, it will. The more clever you are, the more ways you can deceive yourself. Reading the Sequences (or RAZ) can help you learn to stop doing that.

Why wouldn't you want to know your own IQ? Are you afraid a "bad" result would become a self-fulfilling prophecy? If you want to be more rational, then the truth is not something to be afraid of! You have been living your whole life with whatever IQ you have. Knowing your own strengths and weaknesses doesn't change what they are. It just gets rid of the weakness of not knowing that.

There are multiple components to IQ tests. You may be stronger in some areas than others. In my case, the IQ test revealed that I have a learning disability, despite having the overall genius IQ typical of LessWrong readers (or at least LessWrong survey takers).

Learning about this was not a disappointment. It was a relief. I had always felt like I somehow wasn't measuring up to my own apparent potential, but now I realize it wasn't my fault. Now that I know what my weakness is, I can better compensate for it, and better leverage what strengths I have.

While we don't know good ways to improve fluid intelligence by much (besides avoiding those things that make it worse, like sleep deprivation, etc.) there are well-known ways of increasing your crystallized intelligence: Read more and better books. Listen to audiobooks during your commute. Use SRS and mnemonics. Specialize. You can generally out-learn someone a bit smarter than you if you develop better study habits. And you can expect to far out-learn someone who isn't even studying your field.

You can also increase your effective fluid intelligence in many useful situations by using external tools. Working memory is consistently one of the worst bottlenecks in human cognition. Write things down when thinking. Draw diagrams. Take pictures.

Learn to use a computer more effectively. Try org-mode or FreeMind or TiddlyWiki. Learn to use a spreadsheet. Try AutoHotKey to improve your efficiency. Learn Python, if you can. Try working through a math book with Mathematica instead of pencil-and-paper. There's a night-and-day difference in effectiveness between an illiterate genius and a merely bright person who has access to a PC and Google and knows how to use them.

Comment by gilch on Open Thread April 2019 · 2019-04-02T02:46:00.546Z · score: 1 (1 votes) · LW · GW

Markdown auto-increments numbered points while ignoring the actual number. I often number my numbered markdown lists with all 1.'s for this reason.

Comment by gilch on Do you like bullet points? · 2019-04-02T02:42:07.095Z · score: 1 (1 votes) · LW · GW

Rather than bullet points per se, I find it natural to think hierarchically. Bullet outlines are one way of writing this hierarchy out. I think this is OK for a comment.

For a top-level post though, it feels unfinished. It's fine to start with an outline, but note that markdown headers also have hierarchical layers just like bullets, with paragraphs below that. Prefer the headers for posts and reserve bullets for very short "leaf nodes" below the level of paragraphs, and only when they add clarity.

If you find you like to think in outlines, I recommend trying FreeMind, which lets you uproot and graft entire subtrees with much less effort than typing bullets. Once you have your thoughts outlined, you can export to normal bullet points.

Comment by gilch on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-02T02:16:57.613Z · score: 4 (3 votes) · LW · GW

  • Get more sleep at night.
    • Take melatonin at the appropriate time and dose. It's cheap and legal in the U.S., but most products have way too much. See https://slatestarcodex.com/2018/07/10/melatonin-much-more-than-you-wanted-to-know/; most insomnia drugs are not much more effective than this.
    • Avoid light at night, especially blue light. Light inhibits natural melatonin production, which interferes with your circadian rhythms.
      • If you can't darken your room completely, you can use a sleep mask instead. Get the kind with cups (like opaque swim goggles) instead of the kind that puts pressure on your eyes.
      • Use f.lux on your personal devices to reduce blue light after sunset or use one of the similar built-in features of your OS. Windows 10 has the new "Night Light" setting, macOS and iOS have "night shift" mode. Newer Samsung phones have a "blue light filter" setting. These options vary in quality and may have configurable intensity. More intense is more effective and it's surprising how much you get used to it.
    • Falling asleep is a common failure mode of certain types of meditation practice. You can use this to your advantage when suffering from insomnia in bed. Even beginners fall asleep this way by accident, so it's not particularly difficult to do on purpose. Focus your attention on the sensation of breathing or on the ringing in your ears. When you notice you are lost in thought, refocus your attention. But when you notice dreams arising without directed effort, dive in and let them take you. It works for me, anyway. If not, at least you got your meditation in today.
  • Take naps. Even 20 minutes dramatically improves performance when sleep deprived.
    • Try the sleep mask when napping.
    • Try the meditation techniques for naps too.
  • Track your sleep quality.
    • You can get smartphone apps that purport to do this using the phone's sensors. Some fitness trackers or smartwatches also have this function built in or available as an app. Accuracy varies.
    • You may have sleep apnea. Talk to your doctor about doing a sleep study to diagnose possible issues and treatments. Some people do much better on a CPAP, but there are many other treatment options.
  • Avoid eating late at night. This can cause indigestion, which can keep you awake.
    • If you suffer from heartburn, sleep on your left side to contain it better, because your esophagus attaches to your stomach on the right side (unless you're one of those rare people with backwards internal organs).
  • Exercise regularly. I'm not sure why this helps, but it seems to. Perhaps mental fatigue doesn't always line up with physical fatigue unless you actually make some effort physically during the day.

Comment by gilch on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-02T02:03:24.972Z · score: 0 (0 votes) · LW · GW

accidental duplicate

Comment by gilch on Ideas for a fact checking widget · 2019-03-19T02:14:17.936Z · score: 2 (2 votes) · LW · GW

I have heard of similar attempts, for example, Media Bias/Fact Check (dot com) has some browser extensions, which purport to show the political bias of known sources. The website also claims to rate news organizations on factual accuracy.

That said, consuming even "honest" news sources will give you a very distorted picture of reality, due to the way the news tends to prey on human bias. Because of this, mere fact checking doesn't go nearly far enough: outliers need to be put in perspective.

Comment by gilch on Tiles: Report on Programmatic Code Generation · 2019-02-22T01:41:23.995Z · score: 4 (3 votes) · LW · GW

That real-world snippet example doesn't look very readable. Did the formatting fail? I don't see any line breaks, and the # signs seem misplaced for comments.

Comment by gilch on Tiles: Report on Programmatic Code Generation · 2019-02-22T01:39:17.988Z · score: 3 (3 votes) · LW · GW

If I'm coding in Go, I want to use Go and I want to use its power fully. I don't want to use some crippled version of it that's used only inside of templates.

That's what makes Lisp macros awesome. You write them in Lisp.

Also, since you like Python, have you seen the Mako language? It's less restricted than Jinja2 and can take pretty much arbitrary Python.

But when rendering web pages (the primary use for Jinja2), you want to keep as much complexity out of your templates as possible, so you can test your logic more easily. A restricted DSL enforces that.
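For a concrete comparison, here's a minimal sketch, assuming both packages are installed (`pip install mako jinja2`); the template strings are my own toy examples, not anything from the post:

```python
# Minimal sketch contrasting Mako's embedded Python with Jinja2's restricted
# expression language. The templates below are made-up illustrations.
from mako.template import Template as MakoTemplate
from jinja2 import Template as JinjaTemplate

items = ["spam", "eggs", "ham"]

# Mako: anything inside ${...} is an arbitrary Python expression.
mako_tmpl = MakoTemplate("${', '.join(s.upper() for s in items)}")
print(mako_tmpl.render(items=items))  # SPAM, EGGS, HAM

# Jinja2: the same result, but written in Jinja's own filter DSL
# rather than raw Python.
jinja_tmpl = JinjaTemplate("{{ items | map('upper') | join(', ') }}")
print(jinja_tmpl.render(items=items))  # SPAM, EGGS, HAM
```

The Jinja2 version only works here because `upper` and `join` happen to exist as filters; arbitrary Python expressions simply aren't available inside the template, which is exactly the restriction being discussed.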

Comment by gilch on The Best Textbooks on Every Subject · 2018-10-09T05:44:37.036Z · score: 1 (1 votes) · LW · GW

Free to "borrow".

Comment by gilch on A Rationalist's Guide to... · 2018-08-19T04:03:32.242Z · score: 1 (1 votes) · LW · GW

OK, so go meta. How does one go about discovering these secrets? We don't all have to find the same one. How can a rationalist find these secrets better than the average Joe?

Comment by gilch on Open Thread August 2018 · 2018-08-10T02:56:20.382Z · score: 3 (2 votes) · LW · GW

I notice that I'm getting spam posts on my LessWrong RSS feeds and still see them in my notifications (that bell icon on the top right), even after they get deleted.

Comment by gilch on A Rationalist's Guide to... · 2018-08-10T02:51:18.941Z · score: 5 (4 votes) · LW · GW

How about A Rationalist's Guide to Early Retirement?

Or, If you're so rational, why ain'cha rich?

OK, so some of you got lucky with the Bitcoins. Can we do any better than buy-and-hold the index? (Like Wealthfront etc.) Option spreads? Tax liens? Arbitraging junk on eBay? Do you have to start a company, or can you do just as well as a consultant? Are there easy ways to start up passive income or do you need a rich uncle?

Comment by gilch on A Rationalist's Guide to... · 2018-08-10T02:29:58.184Z · score: 6 (4 votes) · LW · GW

I'd like A Rationalist's Guide to Signing Up for Cryonics.

Suppose you've made the decision and have the finances to do it. How do you go about it? Which institution would have better expected outcomes? Neuro or full-body? Which life insurance company? What do you tell your family? How can you best ensure that you actually get frozen before your brain rots in case of your unforeseen accidental death, as opposed to a more convenient death due to age or disease in a controlled setting like a hospital? (Accidental death is what we might expect in a younger-aged group.)

Comment by gilch on A Rationalist's Guide to... · 2018-08-10T02:21:50.995Z · score: 2 (2 votes) · LW · GW

I would like A Rationalist's Guide to Personal Catastrophic Risks.

We like to think a lot about Global Catastrophic Risks (especially the EA folks), but there are smaller problems that are just as devastating to the individual.

Should we wear helmets in cars? Should we wear covert body armor? Own a gun? Get a bug-out bag? An emergency cube? Learn wilderness survival?

And how much should we be concerned about those "survivalist" topics vs. less obvious longevity steps like flossing your teeth? Not everyone's risk profile is the same. How do we assess that?

How should we measure that? Dollars? QALYs? Micromorts? Should we use hyperbolic discounting? Do we expect to reach actuarial escape velocity (or be granted near-immortality after the Singularity) and how would that change the calculus?

Do anthropic effects matter to subjective survival? In the multiverse?

Consider also other catastrophes that don't kill you, like losing a limb, or going blind, or more social risks like identity theft or getting scammed or robbed or sued, etc.

Comment by gilch on A Rationalist's Guide to... · 2018-08-10T01:44:38.466Z · score: 5 (3 votes) · LW · GW

I would like to have A Rationalist's Guide to Reading Science.

Particularly, how to benefit from existing scientific publications and understand them well enough to write a Rationalist's Guide to X or Much More Than You Wanted to Know About X, where X is some field without common knowledge consensus, like medicine or diet or exercise or psychology.

Reading science news headlines seems suboptimal. How confident can we be in any particular study? We know there are some perverse incentives in science. Publish or perish, new discoveries more valued than replications, p-hacking, etc. What should we be wary of? How much training do we need in the field? Is this impossible without a degree in statistics?

Comment by gilch on Why no total winner? · 2017-10-18T23:06:54.944Z · score: 2 (0 votes) · LW · GW

Moral norms: Nuclear weapons gave the USA a decisive advantage at the end of WWII. If the USA had been entirely ruthless and bent on power at any cost, it would immediately have used that advantage to cripple all rivals for world superpower and declared its rulership of the world.

Dubious. We never had a decisive advantage. At the end of WWII we had a few atomic bombs, yes, but not ICBMs. The missile tech came later. Our delivery method was a bomber, and the Soviets were quite capable of shooting them down, unlike the Japanese at the time. Attacking them would have been a terrible risk.

The Soviets had spies and developed their own nuclear weapons much sooner than we anticipated.

In hindsight, there may have been a point early in the Cold War when a nuclear first strike could have worked, but we didn't know that at the time, and we'd have certainly taken losses.

Comment by gilch on Voting Weight Discussion · 2017-10-03T06:29:17.156Z · score: 13 (5 votes) · LW · GW

Per-Paragraph Voting

I sometimes find myself agreeing (or disagreeing) with only part of a long comment.

Current options include replying with a quotation of the part in question, or voting on the whole thing based on the part. Sometimes this isn't worth the effort or seems unfair and I wish I could just vote on that one part.

Conciseness in comments is a virtue given our limited free time and attention, but I don't want to turn Less Wrong into Twitter. Due to inferential gaps, some concepts really do take long comments to get across, but aren't worth a top-level post. Our karma system should accommodate that.

I don't think it should be so fine-grained that we could insert a vote between every character. That's noise. It could really make a popular post hard to read. On the fine scale, per-sentence, or perhaps even per-punctuation mark is more reasonable.

But I suspect that the appropriate granularity is per paragraph. There's already a visual gap there, so you can insert karma votes (or tags) without hurting readability.

I'm not sure how to handle the case of edits. Minor typo fixes aren't a problem, but changing the number of paragraphs or moving sentences around could be. There's already nothing stopping us from completely rewriting a comment to say something entirely different. That's bad form, of course, and deceptively changing a comment after it's been voted on would be too, but honest mistakes are possible.

I think edited posts (even comments) should have a publicly-visible history, like a wiki.

Comment by gilch on Voting Weight Discussion · 2017-10-03T05:56:10.719Z · score: 6 (3 votes) · LW · GW

Reactions--Vote with Emojis or Tags

Karma currently conflates multiple possible reactions into a single datum. An upvote could mean "me too" or "I updated" or "I agree" or "others should read this", or "LOL", etc. A downvote could mean "fallacy" or "poor quality" or "disagree", etc. It's hard for posters and readers to discern intent.

Facebook and Github (and probably others) now have a small number of Emoji reactions, instead of just +/- or "Like". This is a more fine-grained feedback mechanism than a simple vote, but still easy (and familiar) enough for everyone to understand and use.

These give us additional low-effort feedback mechanisms that normally wouldn't be worth a written reply. Facebook's selection isn't appropriate for Less Wrong, but we could choose some better ones. They also need not be icons. Words or short phrases (i.e. "tags") will do. We'd probably want to pull memes about good discourse and their failure modes from the Sequences, e.g. #applause-light or #updated, etc., as well as all the well-known logical fallacies and biases. (These could also be links to the wiki for the uninitiated.)

For maximum flexibility, we could also give users the option to reply with a custom tag. Popular options could be added to the default suggestions, either automatically, or curated by the Sunshine Regiment. My concern with this approach is that custom tags may not have clear meanings, but a curated set could be linked to agreed-upon definitions in the wiki.

Tags also give us a good mechanism for straw polls. I've seen karma used this way on the old Less Wrong, with a yes/no question asking for karma votes (and a reply by the same author to counterbalance it with opposite votes). The new weighted karma kind of destroys this feature. But with tags, we could vote with #yes or #no, or even #A, #B, #C, #D, etc. for multiple-choice questions.

Comment by gilch on Voting Weight Discussion · 2017-10-03T05:20:14.620Z · score: 7 (3 votes) · LW · GW

I'm not exactly sure what you mean by "iterations" here. Is it about getting enough votes? Or about what conversion function to use when grandfathering established users?

I think it would be possible to experiment with the current data. You have a record of the dates of all posts and votes so far. Rather than grandfathering in established users with some human-estimated prior, give everyone the same starting score, and try computing their current karma% from scratch. See if it gives you reasonable answers. See if it finds hidden gems. Try a different prior (with enough votes, any reasonable choice should get similar results). This won't answer questions about incentives, but it will give you a good comparison to the current unbounded karma system.
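A rough sketch of what that replay could look like, assuming a chronologically ordered vote log of (voter, author, post, is_upvote) tuples; the data model and function names here are hypothetical, and the update rule is the toy version sketched under the "Bayesian Karma" comment below, not anything LessWrong actually implements:

```python
# Hypothetical replay of the historical vote log, recomputing karma% from
# scratch under the proposed system. Assumes all scores stay strictly
# between 0 and 1; nothing here reflects LessWrong's actual data model.
from collections import defaultdict

PRIOR = 0.5  # the same starting score for everyone

def update(prob, voter_score, is_upvote):
    """Shift a post's score by one vote, weighted by the voter's own score."""
    odds = prob / (1.0 - prob)
    ratio = voter_score / (1.0 - voter_score)
    odds *= ratio if is_upvote else 1.0 / ratio
    return odds / (1.0 + odds)

def replay(votes):
    user_score = defaultdict(lambda: PRIOR)
    post_score = {}
    for voter, author, post, is_upvote in votes:
        if post not in post_score:
            # A post's prior is its author's score when it first gets voted on.
            post_score[post] = user_score[author]
        post_score[post] = update(post_score[post], user_score[voter], is_upvote)
        # Crude placeholder for feeding post scores back into the author's
        # personal score; the real rule would need more thought.
        user_score[author] = (user_score[author] + post_score[post]) / 2.0
    return user_score, post_score
```

The sanity check would be exactly what's described above: run it over the real history and see whether posts people already consider hidden gems end up with high scores.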

Comment by gilch on Voting Weight Discussion · 2017-09-30T21:05:25.938Z · score: 9 (4 votes) · LW · GW

Bayesian Karma

Attention is a limited resource. I don't have the time or interest to read every comment on LessWrong. So what is karma even for? I use the karma score for one simple yes-no question: "Is reading this worth my time?".

Is displaying the number of upvotes minus downvotes really the best way to answer the question? Karma should not be about mere popularity, or path dependence based on whoever voted first. The weighting system is an improvement, but I think we can do better.

Display the estimated probability (as a percentage) that the post is worth my time.

New users will start with a reasonable personal karma score (say, less than 50%), while established users with high karma on the old system will start with somewhat higher personal scores. Then the initial (prior) score of a post is the score of its author.

Users can then upvote or downvote posts, and this will be taken as Bayesian evidence about the quality of that post, shifting its score from its prior in the direction of the vote.

The probability that any given vote is accurate will be based on the voter's karma percentage. Those with high personal karma will be assumed to have better judgement (because they write high-karma posts), so their votes will have more weight.

And finally, the user's personal karma percentage will be adjusted from their prior by using the karma percentage of their posts as Bayesian evidence. This means that the personal karma percentage of a user is the estimated probability that the user's next post is worth reading.
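To make the arithmetic concrete, here is my own toy model of that rule (an assumption about how the proposal might be formalized, not part of the proposal itself): treat the author's karma% as the prior probability that the post is worth reading, and model a voter with karma% v as upvoting worthwhile posts with probability v and worthless ones with probability 1 - v, so each vote multiplies the post's odds by v/(1 - v), or its inverse for a downvote.

```python
# Toy model of the proposed scoring rule. Assumes all scores are probabilities
# strictly between 0 and 1; the function name and interface are made up for
# illustration.

def post_score(author_score, votes):
    """votes: list of (voter_score, is_upvote) pairs for one post."""
    odds = author_score / (1.0 - author_score)   # prior: the author's karma%
    for voter_score, is_upvote in votes:
        ratio = voter_score / (1.0 - voter_score)
        odds *= ratio if is_upvote else 1.0 / ratio
    return odds / (1.0 + odds)

# A new author (prior 0.5) gets two upvotes from 0.7-karma users and a
# downvote from a 0.5-karma user, whose vote carries no information here:
print(round(post_score(0.5, [(0.7, True), (0.7, True), (0.5, False)]), 3))  # 0.845
```

One side effect of this particular formalization is that a vote from a user sitting at exactly 50% karma changes nothing, which may or may not be the incentive you want.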

Comment by gilch on Ten small life improvements · 2017-08-30T01:18:58.774Z · score: 0 (0 votes) · LW · GW

AutoHotKey can remap keys in Windows, among other things. I used it to get basic Vim commands in any text field. It also has a Python module. I haven't found a good alternative for Linux or Mac.

Comment by gilch on Idea for LessWrong: Video Tutoring · 2017-07-09T23:45:56.482Z · score: 1 (1 votes) · LW · GW

I propose that Learners individually reach out to Teachers, and set up meetings.

Consider that it may make sense for you to act as a Teacher, even if you don't have a super strong grasp of the topic.

Why that way instead of the reverse? True, the learners probably have the greater motivation. But, the learners have a better idea of what they want to learn than the teachers have of what they can teach, especially if we're accepting teachers without a super strong grasp of their topics. Thus, I think it would make more sense for the learners to post in detail what they want, and the teachers to look over all of that and make the offer on whatever topics they can help with, even if only a little.

We could certainly do both, but then I worry that each will hope the other initiates. The cure for this is if one individual plays matchmaker to get things started. Due to the bystander effect, I'll name adamzerner as the obvious choice for the role, but you could delegate then abdicate if someone else is willing.

I pinned http://lesswrong.com/r/discussion/lw/p69/idea_for_lesswrong_video_tutoring/ to #productivity on the LessWrongers Slack group.

A chat room could also work better than individual emails. But everybody has to be on the same channel and check it regularly. I don't even have an invite yet (I just asked Elo for one). Is everyone else on?

Comment by gilch on Idea for LessWrong: Video Tutoring · 2017-07-09T23:27:39.192Z · score: 0 (0 votes) · LW · GW

We might be able to apply these "differences" to our attempt. A lot of the value we're talking about here is just some basic direction to get started and help when you get stuck. That's a pretty "small barrier to entry", and then "small incremental improvements".

Could we dedicate a Slack channel to video tutoring? My experience with small IRC groups is that there is a small number of experts who check in frequently, or at least daily. Then the beginners will occasionally pop in and ask questions. If they're patient enough to stay on, an expert usually answers within the day, and often it starts a real-time chat when the expert mentions the beginner's handle. We could use the Slack channel to ask questions to get started or when we get stuck. If an appropriate teacher is on, then they can start a video chat/screen share on another site. There would be no obligation for a certain time limit.

Comment by gilch on Idea for LessWrong: Video Tutoring · 2017-07-09T23:07:37.473Z · score: 0 (0 votes) · LW · GW

Screen sharing is one-way though. If you both need to draw on the same space, it would be pretty awkward. I've heard of Twiddla, Deekit, and GroupBoard, but haven't used them.

That https://talky.io site looks pretty useful. I've used a similar one called https://appear.in which also uses WebRTC. I don't know if one is better. [Edit: looking this over, talky seems better for our purposes.]

Comment by gilch on Against lone wolf self-improvement · 2017-07-09T02:24:52.351Z · score: 1 (1 votes) · LW · GW

I'll second that it's relevant. Links should say what they point to though. In this case, it was: Idea for LessWrong: Video Tutoring

Comment by gilch on Idea for LessWrong: Video Tutoring · 2017-07-09T02:23:00.543Z · score: 2 (2 votes) · LW · GW

Video chat probably isn't good enough by itself for many topics. For programming, screen-sharing software would be helpful. For mathematics, some kind of online whiteboard would help. Is there anything else we need? Do any of you know of good resources? Free options that don't require registration are preferable.

Comment by gilch on Idea for LessWrong: Video Tutoring · 2017-07-09T02:16:41.400Z · score: 1 (1 votes) · LW · GW

Failure seems like the default outcome. How do we avoid that? Have there been other similar LessWrong projects like this that worked or didn't? Maybe we can learn from them.

Group projects can work without financial incentives. Most contributors to wikis and open-source software, and web forums like this one, aren't paid for that.

Assume we've made it work well, hypothetically. How did we do it?

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-04T18:16:36.043Z · score: 0 (0 votes) · LW · GW

But particular acts are always one or the other. This is obvious, since if an act contributes to a good purpose, and there is nothing bad about it, it will be good. On the other hand, if it contributes to no good purpose at all, it will be bad, because it will be a waste of time and energy.

You can't have this both ways. You define the morality of an act not by its consequence, but by whether the agent should be blamed for the consequence. But then you also deny the existence of morally neutral acts based on consequence alone. Contradiction.

Moral agents in the real world are not omniscient, not even logically omniscient. Particular acts may always have perfect or suboptimal consequences, but real agents can't always predict this, and thus cannot be blamed for acting in a way that turns out to be suboptimal in hindsight (in the case the prediction was mistaken).

It sounds like you're defining anything suboptimal as "bad", rather than a lesser good. If you do accept the existence of lesser goods and lesser evils, then replace "suboptimal" with "bad" and "perfect" with "good" in the above paragraph, and the argument still works.

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-02T23:08:16.039Z · score: 1 (1 votes) · LW · GW

Possibly relevant (crazy idea about extracting angular momentum from the Earth)

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-02T23:02:10.711Z · score: 0 (0 votes) · LW · GW

Rationalists should win. We do care about instrumental rationality. Epistemic rationality is a means to this end. Doesn't that mean "change"?

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-02T22:55:31.581Z · score: 0 (0 votes) · LW · GW

The opposite of doing wrong is NOT doing wrong, and is also doing right

You deny the existence of morally neutral acts? There's a difference between "not blameworthy" and "praiseworthy".

If you say "such and such is morally wrong, but not blameworthy

That's not exactly what I said. But I'm not so confident that normal persons entirely agree with each other on such definitions. If an insane person kills another person, we may not call that blameworthy (because the insane person is not a competent moral agent), but we still call the act itself "wrong", because it is unlawful, has predictably bad consequences, and would be blameworthy had a (counterfactually) competent person done it. I hear "normal persons" use this kind of definition all the time.

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-02T00:14:41.189Z · score: 0 (0 votes) · LW · GW

That's not how morality works. moral wrongness is something blameworthy.

We might be arguing word definitions at this point, but if your definition is "blameworthiness", then I think I see what you mean.

If you say that something is wrong, it necessarily follows that the opposite is right.

What? No it doesn't! Reversed stupidity is not intelligence. Neither is reversed immorality morality. The foolhardy action in battle is wrong, therefore, the cowardly action is right? The right answer is not the opposite. The courageous action is somewhere in between, but probably closer to foolhardy than cowardly.

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-01T21:04:50.256Z · score: 1 (1 votes) · LW · GW

Stop assuming

That's unreasonable. Humans have to assume a great deal to communicate at all. It takes a great deal of assumed background knowledge to even parse a typical English sentence. I said "vegans" are opposed to eating brainless bivalves, not that "Zarm" is. Again I'm talking to the audience and not only to you. You claim to be a vegan, so it is perfectly reasonable to assume on priors you take the majority vegan position of strict vegetarianism until you tell me otherwise (which you just did, noted). You sound more like a normal vegetarian than the stricter vegan. Some weaker vegetarian variants will still eat dairy, eggs, or even fish.

My understanding is the majority of vegans generally don't eat any animal-derived foods whatsoever, including honey, dairy, eggs, bivalves, insects, gelatin; and also don't wear animal products, like leather, furs, or silk. Or they at least profess to this position for signaling purposes, but have trouble maintaining it, because it's too unhealthy to be sustainable long term.

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-01T19:06:10.307Z · score: 0 (0 votes) · LW · GW

you're also comparing religion and spiritual cultures to scientific arguments.

Because veganism seems more like religion than science. You give the benefit of the doubt to even bugs based on weak evidence.

Based off what evidence? I'm not saying something either way for animals like jellyfish, but you can't just say "near-certain" with no backing.

No backing? How about based on the scientific fact that jellyfish have no brain? They do have eyes and neurons, but even plants detect light and share information between organs. It's just slower. I find it bizarre that vegans are okay with eating vegetables, but are morally opposed to eating other brainless things like bivalves. It is possible to farm these commercially. https://sentientist.org/2013/05/20/the-ethical-case-for-eating-oysters-and-mussels/

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-01T18:10:56.929Z · score: 0 (0 votes) · LW · GW

Vegetables are also alive.

And?

That was only if you answered "yes" to the previous question. You didn't, so never mind.

Stop being so condescending please. I'm doing both. I'm an effective altruist

Public posts are talking to the general audience, not just to you. Veganism seems more religious than rational (like politics), but I'll try to tone it down since you seem more reasonable and asked nicely. Assume good faith. Tone doesn't come through well in writing, and it's more on the reader than the writer.

If they aren't conscious, I'm not against it.

Then why not eat eggs? I don't mean the factory-farmed kind. If the hens were happy would it be okay? If yes, you should be funding the farms that treat their hens better with your food purchases, even if it's not perfect, to push the system in a good direction.

Vegans are constantly correcting people saying how its about minimizing suffering, not eliminating. Because there's no possible way I could be right, right? It'd have to be rationalizing, lol. that's silly. I'd push for plant foods.

Even if that were the more effective intervention? Forget about the diet thing. It's not that effective. Do what actually makes a difference. Use your buying power to push things in a good direction, even if that means eating meat in the short term. See http://slatestarcodex.com/2015/09/23/vegetarianism-for-meat-eaters/

Trophic levels.

It's relevant in some cases, but I don't entirely buy that argument. Efficiency, yes, but morality? On marginal land not fertile enough for farming, you can still raise livestock. No pesticides. What about wild-caught fish? Those are predators higher up the food chain, but they have a more natural life before they're caught.

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-01T17:33:12.298Z · score: 0 (0 votes) · LW · GW

You say you're against killing self aware beings. If pigs were proven to be self aware, would you quit eating them?

That's not exactly what I said, but it's pretty close. I established the mirror test as a bound above which I'd oppose eating animals. That is only a bound--it seems entirely plausible to me that other animals might deserve moral consideration, but the test is not simply self awareness.

Absolute proof doesn't even exist in mathematics--you take the axioms on faith, but then you can deduce other things. At the level of pigs, logical deduction breaks down. We can only have a preponderance of the evidence. If that evidence were overwhelming (and my threshold seems different than yours), then yeah, I'd be morally opposed to eating pigs, other things being equal. In that case I'd take the consequentialist action that does the most good by the numbers. Like funding a charity to swap meats in school lunch (or better yet, donating to MIRI), rather than foregoing pork in all circumstances. That pigs in particular might be self aware already seems plausible on the evidence, and I've already reduced my pork intake, but at present, if I was offered a ham sandwich at a free lunch, I'd still eat it.

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-01T06:06:26.846Z · score: 0 (0 votes) · LW · GW

How about the severely mentally disabled

If it's severe enough, I think this is a cultural question that could go either way, not a categorical evil. There are probably relatively good places in the Moral Landscape where this kind of thing is allowed. In the current culture, it would violate important Schelling points about not killing humans and such. Other things would have to change to protect against potential abuse, before this can be allowed.

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-01T05:58:06.644Z · score: 1 (1 votes) · LW · GW

It is okay to take them off of life support, if the "severely" part is sufficient. Eating humans is bad for other reasons. I also do not approve of feeding cows to other cows, for example. It causes prion disease.

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-01T05:53:45.787Z · score: 0 (0 votes) · LW · GW

Nope, I still think that's wrong. It can't be helped until they develop better technology maybe, but it's wrong. The species in Greg Egan's Orthogonal series was like that. They eventually figured out how to reproduce without dying.

There are things about ourselves that evolution did to us that we ought to change. Like dying of old age, for example. Evolution is not moral. It is indifferent. The Sequences illustrate this very clearly.

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-01T04:17:00.466Z · score: 0 (0 votes) · LW · GW

Is it some deontological objection to killing living things? Vegetables are also alive. To killing animals in particular? I thought we were over this "soul" thing. Is it about cutting short future potential? These aren't humans we're talking about. They don't invent devices or write symphonies. Is it about cutting short future positive experiences? Then consciousness is still important.

You are not innocent.

Commercial vegetable farming kills animals! Pesticides kill insects with nerve gas. If they're conscious, that's a horrible way to die. But that wasn't your true objection. It cuts short future experiences. Or are bugs also below even your threshold for moral relevance? In that case, why not eat them? Even so, heavy farm equipment like combines kill small mammals, like mice and voles. That's why people occasionally find severed rodent heads in their canned green beans. The government has limits for this sort of impurity, but it's not zero. It simply wouldn't be practical economically.

So if farming mere vegetables also kills animals, why not become an ascetic? Just stop eating. You can reduce your future harm to zero, at the cost of one human. Your instincts say no? Ascetic cavemen did not reproduce. Game theory is relevant to morality.

Now you see it's a numbers game. You can't eliminate your harm to animals. You cannot live without killing. You still believe even bugs are morally relevant. You've even rejected suicide. So now what do you do? What can you do? It's a numbers game. You have to try to minimize the harm rather than eliminate it. (At least before the Singularity). Is veganism really the best way to do that?

No, it really is not. Forget about your own diet. It's not an effective use of your limited resources. Try to improve the system. Fund science to determine where the threshold of consciousness is, so you can target your interventions appropriately. Fund more humane pesticides, that work faster. Fund charities that change meat in school lunch from chicken to beef. Blasphemy, you say? You are not innocent! How many calories in one cow? How many chickens do you have to slaughter to feed as many children as one cow? Numbers game. Take this seriously or you're just signaling.

I think I've laid out a pretty good case for why Veganism makes no sense, but since virtue signaling is important to your social status, I'm sure you'll come up with some rationalization I haven't thought of in order to avoid changing your mind.

Comment by gilch on Open thread, June 26 - July 2, 2017 · 2017-07-01T04:16:47.098Z · score: 1 (1 votes) · LW · GW

Veganism seems well-intentioned, but misguided. So then, your main reason for veganism is some sense of empathy for animal suffering? My best guess for vegans' motives is to merely signal that empathy, for social status without any real concern for their real-world impact on animal welfare.

Empathy is a natural human tendency, at least for other members of the tribe. Extending that past the tribe, to humans in general, seems to be a relatively recent invention, historically. But it does at least seem like a useful trait in larger cities. Extending that to other animals seems unnatural. That doesn't mean you're wrong, per se, but it's not a great start. A lot of humans believe weird things. Animistic cultures may feel empathy for sacred objects, like boulders or trees, or dead ancestors, or even imaginary deities with no physical form. They may feel this so strongly that it outweighs concern for their fellow humans at times. Are you making the same mistake? Do mere rocks deserve moral consideration?

So there are things that are morally important and things that are not. Where do we draw that line? Is it only a matter of degree, not kind? How much uncertainty do we tolerate before changing the category? If you take the precautionary principle, so that something is morally important if there's even a small chance it could be, aren't you the same as the rock worshipers neglecting their fellow humans?

Why do you believe animals can suffer? No, we can't take this as a settled axiom. Many people do not believe this. But I'll try to steelman. My thoughts are that generally humans can suffer. Humans are a type of animal, thus there exists a type of animal that can suffer. We are related to other species in almost exactly the same sense that we are related to our grandparents (and thereby our cousins), just more generations back. Perhaps whatever makes us morally relevant evolved before we were human, or even appeared more than once through convergent evolution. Not every organism need have this. You are related to vegetables in the evolutionary sense. That's why they're biochemically similar enough to ourselves that we can eat them. You're willing to eat vegetables, so mere relation isn't enough for moral weight.

By what test can we distinguish these categories? Is it perhaps the mere fact of an aversive behavior to stimulus? Consider the Mimosa pudica, a plant that recoils to touch. Is it morally acceptable to farm and kill such a plant? That's just an obvious case. Many plants show aversive behaviors that are less obvious, like producing poisons after injury, even releasing pheromones that stimulate others nearby to do the same. But again, you're fine with eating plants. Consider that when you burn your finger, your own spinal cord produces a reflexive aversive behavior before the nerve impulse has time to reach your brain. Is your spine conscious? Does it have moral weight by itself? Without going into bizarre thought experiments about the moral treatment of disembodied spinal cords, I think we can agree a conscious mind is required to put something in the "morally relevant" category. I hope you're enough of a rationalist to be over the "soul" thing. (Why can't vegetables have souls? Why not rocks?) I think it is a near certainty that the simplest of animals (jellyfish, say) are no more conscious than vegetables. So merely being a member of the animal kingdom isn't enough either.

So which animals then? I think there's a small chance that animals as simple as honeybees might have some level of conscious awareness. I also think there's a significant chance that animals as advanced as gorillas are not conscious in any morally relevant way. Gorillas, notably, cannot pass the mirror test. Heck, I'm not even sure if Dan Dennett is conscious! So why are we so worried about cows and chickens? I am morally opposed to farming and eating animals that can pass the mirror test.

Gorillas to honeybees are pretty wide error bars. Can we push the line any farther down? Trying to steelman again. What about humans too young to pass the mirror test? Is it morally acceptable to kill them? Are vegans as a subculture generally pro-life, or pro-choice? On priors, I'd guess vegans tend toward the Democratic Party, so pro-choice, but correct me if I'm wrong. It seems so silly to me that I can predict answers to moral questions with such confidence based on cultural groups. But it goes back to my accusation of vegans merely signaling virtue without thinking. You're willing to kill humans that are not conscious enough. So that fails too.

Even if there's some degree of consciousness in lesser beings, is it morally relevant? Do they suffer? Humans have enlarged frontal lobes. This evolved very recently. It's what gives us our willpower. This brain system fights against the more primitive instincts for control of human behavior. For example, human sex drive is often strong enough to overcome that willpower. (STIs case in point.) But why did evolution choose that particular strength? Do you think humans would still be willing to reproduce if our sex drive was much weaker than it is? This goes for all of the other human instincts. It has to be strong enough to compete against human will. Most notably for our argument, this includes the strength of our pain response, but it also applies to other adverse experiences, like fear, hunger, loneliness, etc. What might take an overwhelming urgency in human consciousness to get a human to act, might only require a mild preference in lower animals, which have nothing better to do anyway. Lower animals may have some analogue to our pain response, but that doesn't mean it hurts.

I think giving any moral weight to the inner experience of cows and chickens is already on very shaky ground. But I'm not 100% certain of this. I'm not even 90% certain of this. It's within my error bars. So in the interest of steelmanning, let's grant you that for the sake of argument.

Is it wrong to hurt something that can suffer? Or is it just sometimes the lesser of evils? What if that thing is evil? What if it's merely indifferent? If agents of an alien mind indifferent to human values (like a paperclip maximizer) could suffer as much as humans, but have no more morality than a spider, would it be wrong to kill them? Would it be wrong to torture them for information? They would cause harm to humans by their very nature. I'd kill them with extreme prejudice. Most humans would even be willing to kill other humans in self-defense. Pacifistic cavemen didn't reproduce. Pacifistic cultures tend to get wiped out. Game theory is relevant to morality. If a wild animal attacks your human friend, you shoot it dead. If a dog attacks you while you're unarmed you pin it to the ground and gouge out its brains through its eye socket before it rips your guts out. It's the right thing to do.

If you had a pet cat, would you feed it a vegan diet? Even though it's an obligate carnivore, and would probably suffer terribly from malnutrition? Do carnivores get a pass? Do carnivores have a right to exist? Is it okay to eat them instead? Is it wrong to keep pets? Only if they're carnivores? Why such prejudice against omnivores, like humans? Meat is also a natural part of our diet. Despite your biased vegan friends telling you that meat is unhealthy, it's not. Most humans struggle getting adequate nutrition as it is. A strict prohibition on animal products makes that worse.

But maybe you think farm animals are more innocent than indifferent. They're more domesticated. Not to mention herbivores. Cows have certainly been known to kill humans though. Pigs even kill human children. Maybe cows are not very nice people. But if I'm steelmanning, I must admit that self-defense and factory farming are very different things. But why aren't you okay with hunting for food? What about happy livestock slaughtered humanely? If you are okay with that, then support that kind of farm with your meat purchases, have better health for the more natural human diet, and make a difference instead of this pointless virtue signaling. If you're not okay with that, then it's not just about suffering, is it? That was not your true objection.

Then what is?

Comment by gilch on One-Magisterium Bayes · 2017-06-30T03:46:51.788Z · score: 3 (3 votes) · LW · GW

-1

A request for the "argument from authority" fallacy. Freethinkers discuss ideas directly on their merits, not on their author's job description. A rationalist doesn't ignore any evidence, of course (even authors' job descriptions), but try to weight them accurately, okay?

Comment by gilch on Stupid Questions June 2017 · 2017-06-17T02:48:07.831Z · score: 0 (0 votes) · LW · GW

I'm not sure what idea you're talking about. Are you talking about intranational unity or international unity? Can you give examples?

Comment by gilch on Stupid Questions June 2017 · 2017-06-17T02:41:12.248Z · score: 1 (1 votes) · LW · GW

No, I don't, or only a little bit. See Moravec's Paradox: the easiest tasks to program a computer to do are those we are most conscious of. There are parts of the brain that are remarkably plastic, but there is a lot of background processing that we are not aware of.

Even if you don't believe that exercising willpower drains your willpower, it still does; you just don't notice it as soon. This has been tested. It's also true that certain mental abilities can be improved with practice; this is just plasticity. Think of it like exercising a muscle. If you overexert yourself, a strong instinct will try to stop you from hurting yourself. Trained athletes can overcome this instinct to some extent, but they still have real physical limits. And of course, appropriate exercise can improve performance over the long term, but again there are real physical limits to how much.

Comment by gilch on Why I think worse than death outcomes are not a good reason for most people to avoid cryonics · 2017-06-13T05:07:36.648Z · score: 1 (1 votes) · LW · GW

I don't buy it. Why don't you wake up as Britney Spears instead? Clearly there's some information in common between your mind patterns. She is human after all (at least I'm pretty sure).

Clearly there is a sufficient amount of difference that would make your copy no longer you.

I think it is probable that cryonics will preserve enough information, but I think it is nigh impossible that my mere written records could be reconstructed into me, even by a superintelligence. There is simply not enough data.

But given Many Worlds, a superintelligence certainly could attempt to create every possible human mind by using quantum randomization. Only a fraction of these could be realized in any given Everett branch, of course. Most possible human minds are insane, of course, since their memories would make no sense.

Given the constraint of "human mind" this could be made more probable than Boltzmann Brains. But if the Evil AI "upgrades" these minds, then they'd no longer fit that constraint.

Comment by gilch on Why I think worse than death outcomes are not a good reason for most people to avoid cryonics · 2017-06-11T19:03:47.805Z · score: 1 (1 votes) · LW · GW

One is that the hell exist in our simulation, and suicide is a sin :)

Pascal's mugging. One could just as easily imagine a simulation such that suicide is necessary to be saved from hell. Which is more probable? We cannot say.

Another is that quantum immortality is true AND that you will survive any attempt of the suicide but seriously injured. Personally, I don't think it is the tail outcome, but give it high probability, but most people give it the very low probability.

I also think this is more likely than not. Subjective Immortality doesn't even require Many Worlds. A Tegmark I multiverse is sufficient. Assuming we have no immortal souls and our minds are only patterns in matter, then "you" are simultaneously every instantiation of your pattern throughout the multiverse. Attempting suicide will only force you into living in the bad outcomes where you don't have control over your life anymore, and thus cannot die. But this is exactly what the suicidal are trying to avoid.

Stupid Questions June 2017

2017-06-10T18:32:57.352Z · score: 3 (3 votes)

Stupid Questions May 2017

2017-04-25T20:28:53.797Z · score: 7 (8 votes)

Open thread, Apr. 24 - Apr. 30, 2017

2017-04-24T19:43:36.697Z · score: 3 (4 votes)

Open thread, Apr. 17 - Apr. 23, 2017

2017-04-18T02:47:46.389Z · score: 1 (2 votes)

Cheating Omega

2017-04-13T03:39:10.943Z · score: 7 (8 votes)