Posts

DIY Transcranial Direct Current Stimulation. Who wants to go first? 2012-03-14T16:58:30.024Z
RAND Health Insurance Experiment critiques 2012-02-18T17:52:59.330Z

Comments

Comment by dustin on Technological stagnation: Why I came around · 2021-01-23T23:56:36.837Z · LW · GW

I can't point to the episode(s) or post(s), but I believe both on his blog and on his podcast Conversations with Tyler, Tyler has expressed the idea that we may be currently coming out of the stagnation in the Real World, driven by things like SpaceX, CRISPR, mRNA, etc.

Comment by dustin on The map and territory of NFT art · 2020-12-30T02:04:05.961Z · LW · GW

See also: art forgeries that pass for the original for years before they're discovered. Their value, despite nothing changing except their origin story, usually plummets.

Comment by dustin on 100 Tips for a Better Life · 2020-12-25T18:54:27.124Z · LW · GW

69. When you ask people, “What’s your favorite book / movie / band?” and they stumble, ask them instead what book / movie / band they’re currently enjoying most. They’ll almost always have one and be able to talk about it.


I can't imagine narrowing the dimensions of my preferences in such a way that one single piece of media can become my "favorite" so I'm never sure what to think when someone else seems to have done so.

Comment by dustin on 100 Tips for a Better Life · 2020-12-25T18:39:49.058Z · LW · GW

The downside of getting used to multiple monitors is that I now find it impossible to get anything done on a laptop. There's a constant low-level background irritation when I find myself confined to one tiny screen.

There are diminishing returns, of course, but I've found 3 monitors to be the best for me: one portrait and two landscape.

Comment by dustin on 100 Tips for a Better Life · 2020-12-25T18:31:12.830Z · LW · GW

Possibly; it depends on the individual cop. However, I think the idea is that if you haven't done anything wrong and you don't answer any questions, you're in a better position than if you answer and risk saying something that sounds incriminating and/or running into a cop who is not questioning in good faith.

In other words, the consequences of seeming suspicious with no evidence against you are much better for you than the consequences of saying the wrong thing.

Comment by dustin on 100 Tips for a Better Life · 2020-12-25T18:28:18.483Z · LW · GW

By far the most common context in which anyone I know has interacted with the cops is when filing police reports for damaged or stolen property


USA resident here who lives in a more rural-esque area:

I can't say I know anyone who has talked to the cops to file a report. Every interaction I can think of between people I know and the cops has been in a situation wherein they could incriminate themselves. Traffic stops and the like.

Comment by dustin on 100 Tips for a Better Life · 2020-12-22T22:34:23.159Z · LW · GW

I'd recommend AutoIt instead of AHK. Not that AutoIt is a great language, but it's a better language than AHK, using more standard language constructs.

Comment by dustin on Notes on Good Temper · 2020-11-29T20:34:20.850Z · LW · GW

I agree with you.

However, in case my last comment wasn't clear on the subject: I do not think anger is required to punch the bully. I'm not sure anger is required in any circumstance and I'm sure anger has negative consequences no matter the reason for it.

Comment by dustin on Notes on Good Temper · 2020-11-28T17:20:48.416Z · LW · GW

Yes, I agree that anger serves that purpose and I think a person should be aware of that. However,

  1. You have to balance that against the times wherein anger causes negative outcomes. Is it really that often that most people in modern societies have to scare off others from doing further injustices to them or their group, enough to offset the negative outcomes sourced in anger? I can't think of one time I've been angry and felt like it was a useful way to use my emotional resources.
  2. Is anger the only way to signal your reliability to your group and to scare off those who would do further injustices to you? Probably not. For one, I don't think feeling angry is the only way to achieve the desired signaling. You can just...choose to respond in a way that signals you're not to be messed with or whatever is appropriate. When signaling is required, there are multiple non-angry options available to the good-tempered. Biting sarcasm. The air of the unflappable cool person who handles their shit. Just flat out pretending to be angry!

Despite being a friendly person that people generally like (I think!), I'm a fairly solitary individual (by choice!) (I hope!). In my experience, 95% of situations are ones wherein I do not need to signal to any group that I'm a reliable member, and those who would be on the receiving end of my anger, if I had any, are people I'll never see again.

Usually it's something like the most recent situation I was in wherein I think people would have expected me to react with anger...

There was a young man and woman having a huge screaming fight outside a 4-plex apartment building my parents own. It'd been going on for like 15 minutes, so I went over there and told them to keep it quiet and please leave the property. They both got very belligerent with me, and I felt nothing approaching anger. Just amusement, evidenced by a smirk. The guy in particular didn't like the smirk.

I'll never see those people again. But, if I was going to, or if there were people around to make a mental note about whether I'm a reliable group member, they'd have just seen the guy whom they couldn't get a rise out of.

There have been maybe 5 instances in the past 15 years similar to that, wherein a person or small group of strangers whom I'll never see again directed their anger at me specifically while I was by myself or with my wife. There's been one time in the same period wherein it was prudent to think about signaling to others that I was a reliable group member.

I'm just not so sure that anger is actually more useful than harmful.

Comment by dustin on Notes on Good Temper · 2020-11-27T18:47:40.792Z · LW · GW

As one often accused of good temper, I'm always amused by the fact that it often makes people angrier when you don't get (as) angry as they think you should. (And, of course, this amusement makes the situation worse)

What I sometimes find overlooked in discussions about whether you should or should not get angry is whether your anger is constructive. Some people seem to thrash and wail and accomplish nothing to address the source of their anger, while others calmly address the problem.

I do not find credible the claim that anger is a necessary prerequisite to address (some) wrongs.  It may be for some, but I think motivation-to-address-injustice is not inextricably linked to anger. Of course, as someone who seems to be naturally good tempered, this belief is self-serving...

Comment by dustin on Is Stupidity Expanding? Some Hypotheses. · 2020-10-16T16:21:17.693Z · LW · GW

I think it's plausible that many or most people today barely skate by on literacy and algebra when they're in school and it all almost immediately fades away to the bare minimum they require to survive once they're out of school.  Note that Mauro was talking about what civilization required out of people, not what they were capable of doing.

I also think it's plausible that while you didn't need to read, write, and algebraize at some point in the past, you regularly needed other mental skills like...how to track animals or when to plant corn or whatever the heck you need to survive when there isn't our modern civilization supporting you (obviously I'm suckling on the teat of modern civilization because I don't know wtf).

Note that I'm not actually claiming that either of these are true, only that I can see "how the mental part can be true".

Comment by dustin on Why isn't JS a popular language for deep learning? · 2020-10-08T22:43:36.107Z · LW · GW

I'm very open to hearing about setups that work

I could probably help you with specific problems, but my advice is mostly going to just be "use PyCharm".

Like I said, it's not perfect, but I don't find it horrible. But then again, many people find using Python or JS horrible no matter what, so "it's horrible/not-horrible" is kind of hard to generalize.

One thing to note is that there is active work in the Python community about improving the typing situation for tensors. You can search for "tensor typing" on the python typing-sig list for more insight.
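
To give a rough flavor of that direction, here's a minimal sketch of a shape-annotated signature in the style being discussed there, using variadic type variables. Everything named here (Tensor, Batch, Height, Width) is a hypothetical stand-in rather than a real library API:

    # Sketch only: Tensor and the axis classes are hypothetical stand-ins.
    # TypeVarTuple/Unpack need Python 3.11+ (or typing_extensions).
    from typing import Generic, TypeVarTuple, Unpack

    Shape = TypeVarTuple("Shape")

    class Tensor(Generic[Unpack[Shape]]):
        """A tensor whose axes are carried in its static type."""

    class Batch: ...
    class Height: ...
    class Width: ...

    def flatten(t: Tensor[Batch, Height, Width]) -> Tensor[Batch]:
        ...  # a checker can reject calls whose axis types don't match

    x: Tensor[Batch, Height, Width] = Tensor()
    y = flatten(x)  # accepted: axes match the signature
    # flatten(y)    # rejected statically: Tensor[Batch] lacks Height/Width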

Yeah, this is basically what I'm confused about. In other areas I see a million JS fans piling in proclaiming the benefits even when it makes no sense, but that just doesn't seem to happen with ML.

JS does offer real obvious advantages over some languages and JS probably made inroads in fields where those languages are used a lot.  The problem with Python vs JS is as I described in my root comment.  Also Python and JS are actually very similar in day to day usage, so there's no slam dunk case for a switch to JS.

Comment by dustin on Why isn't JS a popular language for deep learning? · 2020-10-08T20:16:42.050Z · LW · GW

I've used both JS and Python extensively for like a decade (and TS for a couple of years). I think they're all very effective languages.

For deep learning there are all the usual benefits of using JS, e.g.:

  • easy to learn
  • huge community
  • flexible about paradigms
  • write code once, run anywhere (especially useful for training/deploying models as well as cool applications like federated learning on client devices).

I'm not really convinced JS has any useful benefit over Python in these areas except for running in the browser. I think Python runs everywhere else JS would run. I don't think running in the browser has enough benefit to enough projects to overcome the already-built institutional knowledge around Python deep learning.  Institutional knowledge is very important.

I know Python3 has type hints, but it's a really horrible experience compared to any proper typed language.

I do not find this to be the case. Note that I'm not saying that Python typing is as effective as, say, TS or C# or many other languages with typing "built-in"; I'm just saying I don't find it to be a horrible experience.
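
For what it's worth, here's a trivial sketch of the ordinary experience I mean. A checker such as mypy, or PyCharm's built-in inspection, flags the bad call below with no extra setup (the mean function is just an illustrative example, not anything from a real codebase):

    # Plain Python 3 type hints; no decorators or extra tooling required.
    def mean(values: list[float]) -> float:
        return sum(values) / len(values)

    mean([1.0, 2.0, 3.0])  # fine
    # mean("oops")         # a checker flags this: str is not list[float]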

With both languages it's hard to get a consistent experience with libraries that don't properly implement types. On one hand, DefinitelyTyped provides a larger variety of types for third-party libraries than Typeshed does. On the other hand, IME, a good IDE is much more able to infer type information with your typical Python library than it is with your typical JS library.

That being said, I just don't think many people doing deep learning stuff are doing any sort of type checking anyway.  

I think if types are very important to you, depending on what about types you're looking for, you're much more likely to move to Java or C++ or Julia or something.

But with VSCode plugins, I just hover over a variable and it tells me what I'm doing wrong immediately.

I use PyCharm, not VSCode, but it gives you a lot of that sort of thing with Python code because of its native support for typing and type inference. However, this isn't a very useful comparison point without a much more detailed comparison of what each offers.


In general, I think the real answer to your question is that JS isn't obviously better or obviously better enough and thus there's just no push to move to JS. 

Comment by dustin on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-10-01T00:04:39.078Z · LW · GW

I'll make this last comment to clarify my position and if you want to reply, I'll let you have the last word (I say this with sincerity and not in a passive-aggressive manner!)

First of all, I feel like you're continuing to defend the idea of nonmanipulative communication. To make it clear, I'm not questioning whether it exists or is useful or anything at all.  I'm questioning the idea that the chaperone-protein analogy is actually analogous to any sort of communication.

You spoke about how the equivalent would be the therapist talking to people in the environment of the patient that are external to the therapist. A chaperone doesn't change things in the environment of the protein that are external to it to make the environment interact with the protein in a good way.

Hmm. 

I don't feel as if that's exactly material to the point at hand. The main point is that the chaperone doesn't interact with the protein in any way. It's impossible for a human to be like the chaperone and for the human to communicate with the "protein".

However, I will point out that I don't mean to claim exactly what you seem to think I mean to claim. My only claim is that the therapist interacting with people other than the patient, without interacting with the patient, would be somewhat analogous to the chaperone. That is as far as it goes. That doesn't go far enough to become a useful analogy because the chaperone-protein relationship is not equivalent to any sort of communication.

There are reasons why the phrase holding space is frequently used to describe this kind of communication as something that the therapist does. 

There are things in the field of alternative communication that are hard to communicate. I'm not sure whether there's much more that I can say at this point if what I have already written doesn't bring the idea across. 

I think you're still sidestepping the point here.  "Things in the field of alternative communication" have almost no bearing on the point of my comments.

My whole point is that the chaperone-protein "relationship" is not communication at all. There is no special type of communication that is not communication.

(You can probably make the argument that the protein communicates one-way with the chaperone. How does the chaperone "know" where to be? I do not know. However, this is impossible to analogize with the type of communication you're analogizing with.)

In this case the therapist doesn't have a particular purpose towards which they want the patient to change.

Sure, I agree.

My comments do not attempt to dispute that. My point is that I do not think you made the case for this definition of (or any of the definitions of) "manipulative" because 1) the chaperone is not analogous to communication of the type you describe and 2) your post largely hangs on this analogy.

If you take away the analogy, your post amounts to the assertion that non-manipulative communication exists.

Comment by dustin on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-09-30T20:54:41.518Z · LW · GW

No, the chaperone is basically the full environment surrounding the protein while it folds.


Perhaps you can expand on this because I do not see how it's functionally different from what I said.  It becomes the full environment by intervening with the protein's environment.  It cannot become the full environment without intervening with the protein's environment.

In the moment in which the protein folds, the chaperone is its environment, just like the therapist sets the environment during a session with the patient.

...and thus I do not see how it's "just like" what a therapist does...at least if we're talking about the ways in which the therapist communicates with the patient.  

I understand the intention of the therapist is to be like the chaperone. But your analogy seems to be between the chaperone and what the therapist actually does.

This is not to say that the therapist can or cannot communicate with the patient without manipulation, only that the therapist actually does communicate with the patient and the chaperone does not.

It's a concept from which useful distinctions are drawn in some areas of therapy. 

This might be true.  However, your post seems to be making the argument that the type of communication a therapist participates in is literally nonmanipulative and I do not think that is the same argument you make with this sentence.

Comment by dustin on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-09-29T00:46:35.759Z · LW · GW

Actually, by your description I don't think the chaperone intervenes with the protein at all.  There does not seem to be any communication from the chaperone to the protein.  The chaperone intervenes with the environment surrounding the protein.

The closest analogy I can think of that seems to match is a therapist communicating with everyone around their patient without actually communicating with the patient, and keeping it a secret from the patient that they did so.

I'm not sure that is a useful definition of non-manipulative communication.

Comment by dustin on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-09-29T00:36:51.441Z · LW · GW

Right. I guess my point is that that seems to make comparing the chaperone to the ML algorithm a non-starter.

While I wasn't making this point in my comment, I also think it doesn't seem like a good analogy to nonmanipulative conversation, since the participants in a nonmanipulative conversation are never in a similar state of ignorance. Even if you're talking to a complete stranger and trying to be nonmanipulative.

You might be able to emulate such a state, but your post makes no argument to that effect.

Comment by dustin on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-09-28T23:20:24.904Z · LW · GW

Your title and opening sentences make me think you want to convey the idea that the phrase "non-manipulative communication" means exactly what the literal words the phrase is made up of mean. I do not think you made the case that that is so.

  1. For me, your intuition pump does not seem sufficiently analogous or "pumpy" to the communication you're describing.
  2. You state Carl Rogers says that psychologists act in the same way. As I do not think chaperones and proteins are sufficiently analogous to human communication, I do not think that he actually says that.
  3. You do not actually attempt to make any argument that the communication that is called "nonmanipulative communication" is actually, literally, nonmanipulative.

This allows a chaperone that works in an uncomplicated way to achieve a result that very complex machine learning algorithms currently don't achieve. The machine learning algorithm tries to figure out the best way for the protein to fold while the chaperone just lets the protein find this way by itself.

These sentences seem to be trying to put tension between the machine learning algorithm and the chaperone.  However, it is not clear to me that the result achieved by the chaperone is the same as the result machine learning algorithms attempt to achieve.

Does the chaperone "know" in what way the protein folded itself?  Can we interrogate the chaperone to learn about the protein? I think not. Neither the chaperone nor the protein has an inkling about the other...nor could they even if we grant them magical sentience or agency.

A physical process that emulates the result a ML algorithm is going for would seemingly encompass much more than just the chaperone. To me, if you really wanted to analogize chaperones to something somewhat apropos, it seems to be more analogous to some small component of some ML algorithm than it is to the ML algorithm itself.

Unlike humans, when it comes to agency and intent, the protein and chaperone do not have any.

For these reasons, this does not seem like an intuition pump that gets me to an understanding of the type of communication you're talking about and I do not think you've made an argument that "non-manipulative communication" is non-manipulative.  I think you completely sidestepped what your opening seems to promise an elucidation of.

I want to note that I haven't made any claims about whether or not "non-manipulative communication" actually is or is not a literally correct phrase.  I've given almost no thought to it, which is why I was interested to read this post when I saw the headline on my RSS feeds.


The following is more of an aside or addendum that is unrelated to the previous part of my comment:

Even if all communication actually is manipulative, we may want to, almost tautologically, define the phrase to mean the type of communication you're describing.  This is sometimes a useful thing to do. I agree that the type of communication you describe is good and useful and something we should have in our toolbox.

I actually think I've got a pretty good grasp on what is meant by "non-manipulative communication", and I think it's an important and useful mode of communication for humans. As already mentioned, I've not really given the subject any thought, but as of right now, I don't think that phrase is a literally correct usage of the words "non-manipulative" and "communication".  

I also think that's OK.

Comment by dustin on The ethics of breeding to kill · 2020-09-11T16:57:34.137Z · LW · GW

But, if we applied this model, what would make it unique to suicide and not to any other preference?
And if you apply this model to any other preference and extend it to humans, things get really dystopian really fast.

I'm not sure it is unique to suicide, and regardless I'd imagine we'd have to take it on a case by case basis because evolution is messy. I think whether it leads to dystopia or not is not a useful way to determine if it actually describes reality.

Regardless, the argument I'm trying to make is not that this model I described is the correct model, but that it's at least a plausible model and that there are probably other plausible models and if there are such alternative plausible models then you have to seriously engage them before you can make a considered decision that the suicide rate is a good proxy for value of animal life.

This is not really analogous, in that my example is "potential to reduce suffering" vs "obviously reducing suffering". A telescope is neither of those; it's working towards what I'd argue is more of a transcendent goal.

Yes, I agree that along that dimension it is not analogous. I was using it as an example of the fact that addressing more than one different issue is possible when the resources available are equal to or greater than the sum of resources required to address each issue.

I am also willing to acknowledge that it is at least *possible* some humans might benefit from actions that they don't consent to, but still I don't engage in those actions because I think it's preferable to treat them as agentic beings that can make their own choices about what makes them happy.

I think my point was that until you're willing to put a semblance of confidence levels on your beliefs, then you're making it easy to succumb to inconsistent actions.

How possible is it that we don't understand the mental lives of animals well enough to use the suicide argument? What are the costs if we're wrong? What are the costs if we forgo eating them?

Most of society has agreed that actually yes we should coerce some humans into actions that they don't consent to. See laws, prisons, etc. This is because we can look at individual cases, weigh the costs and benefits, and act accordingly. A generalized principle of "prefer to treat them as agentic beings with exceptions" is how most modern societies currently work. (How effective we are at that seems to vary widely...but I think most would agree that it's better than the alternative.)

Regardless, I'm not sure that arranging our food chain to lessen or eliminate the number of animals born to be eaten actually intersects with interfering with independent agents' abilities to self-determine. If it did, it seems like we are failing in a major way by not encouraging everyone to bring as many possible humans into existence as possible until we're all living at the subsistence level.

People mostly don't commit suicide just because they're living at such a level. Thus, I think by your argument, we are doing the wrong thing by not increasing the production of humans greatly. However, I think most people's moral intuitions cut against that course of action.

Comment by dustin on The ethics of breeding to kill · 2020-09-08T18:56:03.663Z · LW · GW

I think it's fair to use suicide as a benchmark for when someone's life becomes miserable enough for them to end it.

Yes, but that's because it's a tautology!

I don't think I agree that suicide is a sufficient proxy for whether an entity enjoys life more than it dislikes life because I can imagine too many plausible, yet currently unknown mechanisms wherein there are mitigating factors. For example:

I imagine that there are mental processes and instincts in most evolved entities that add a significant extra prohibition against making the active choice to end their own life, and thus that mental ability has a much smaller role in suicide "decisions".

In the world where there is no built-in prohibition against ending your own life, if the "enjoys life" indicator is at level 10 and the "hates life" indicator is at level 11, then suicide is on the table.

In, what I think is probably our world, when the "enjoys life" indicator is at level 10 the "hates life" indicator has to be at level 50.

What's more, it seems plausible to me that the value of this own-life-valuing indicator addon varies from species to species and individual to individual.

If this holds true, then the own-life-valuing indicator addon would only be there for a being that already exists.


This is not to say that we can certainly conclude that animals being farmed don't actually dislike life more than they enjoy it. This could certainly be the case, and they might just lack the reasoning to commit suicide.
...
Thus I fail to see a strong ethical argument against the eating of animals from this perspective.

Here you're seemingly willing to acknowledge that it's at least *possible* that animals dislike life more than they enjoy it. If I read you correctly and that is what you're acknowledging, then you would really need to compare the cost of that possibility being correct vs the cost of not eating meat before making any conclusion about the ethical state of eating animals.

Until then, the sanest choice would seem to be that of focusing our suffering-diminishing potential onto the beings that can most certainly suffer so much as to make their condition seem worse than death.

This seems to me similar to the arguments made akin to "why waste money on space telescopes (or whatever) when people are going hungry right here on earth?".

Neither reducing the suffering of beings that can most certainly suffer nor reducing the suffering of those that might be suffering seems likely to consume all of our suffering-diminishing potential. Maybe we can conclude that the likelihood of farm animals suffering in a way that we should care about is so low as to be worth absolutely no suffering-diminishing potential, but I don't think you've made that case.


In summary, I think the main critique I have of the line of argument presented in this post is that it hangs on suicide being a proxy for life-worth-living and that it's equivalent to not having existed in the first place.

I don't think you've made a strong enough case that suicide is a sufficient measure of suffering-has-exceeded-the-cost-of-continuing-to-live. There are too many potential and plausible confounding factors. I think that the case needs to be really strong to outweigh the costs of being wrong.


(Hilariously, I'm not a vegan or a vegetarian.)

Comment by dustin on Ice · 2020-09-06T19:23:27.911Z · LW · GW

It is my opinion that the possibility of catastrophic ice sheet collapse should be carefully considered and studied as a real possibility.

Is it not already? I kind of assumed it was already seriously considered and studied. I do not follow climate science very closely and mostly just read what comes across my RSS feeds on the subject. I've heard of the possibility of catastrophic ice sheet collapse a large number of times in the last...say...5 years.

  • What's the right amount of resources to expend on thinking about this?
  • Is my previous exposure to articles and people talking about the subject indicative of sufficient or insufficient interest and study of this possibility?
  • How do we assess the current amount of resources expended on the subject?

Comment by dustin on Thiel on Progress and Stagnation · 2020-08-13T22:52:40.145Z · LW · GW

Maybe!

But, to be clear, I was responding to the claim that it was original thinking.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-06T19:51:10.632Z · LW · GW

I know the vagueness of this is going to be irritating, and I sincerely apologize up front. I'm not a very "hygienic" reader...aka, I don't do a good job of physically or mentally organizing the information I've consumed to easily reference it in the future.

I can't actually think of any exact posts or comments, but when I ask myself "what do I like about LW?", one of the answers I give myself is something along the lines of "not willing to just accept science or scientific conventional wisdom at face value". (It's also possible that the impression I've built over the past 10+ years is just confused...probably stemming from the aforementioned bad information hygiene.)

Eliezer posted at least once on something at least tangentially related...about how science can't save you or something like that. There's been posts or comment threads about vitamins and I think other health-related "stuff". Over the years, Scott Alexander has written bucking-the-science-establishment-on-X posts as well.

As I give it more thought, I also think of posts that were written from the standpoint where the poster was seemingly prepared to accept that science was wrong or even thought ahead of time that science was wrong, but after investigation found out that, yep, science was probably right. IIRC, the vitamins post I mentioned above was in that vein.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-06T17:08:17.900Z · LW · GW

gjm gave specific definitions of what he meant by "weirdness". I've yet to see you seriously engage on what he meant using the principle of charity and trying to figure out why you two were so far apart on this issue. That would be great to read and an effective way of convincing other people of your righteousness!

This willingness to engage is the core of good content on this site. Newcomers often have a hard time adjusting to this not-normal way of discussing issues.

As has been your wont in these threads, you almost immediately fall back to accusing whomever you're arguing with of being biased in some way and saying "nuh-uh".

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-06T16:56:52.188Z · LW · GW

All in all, I find myself really disheartened by this whole saga since, 1) I find it, in the abstract, plausible that there are areas of modern science that have gone down the wrong road because the practitioners have misled themselves, 2) some of the best content for me on LW over the many years has been of the type that highlights such deficiencies, and 3) I can see no progress being made on resolving our disagreements here.

As such, I'm not sure how much more value we can get out of continuing these discussions. That really makes me sad, since being willing to continually engage until disagreements are resolved is something I often enjoy.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-06T16:53:02.356Z · LW · GW

When someone makes several comments that are longer than the post itself, and when the reasoning is demonstrably fallacious

By this criterion, your original post is a gish gallop since it also included demonstrably fallacious statements.

On the other hand, we could take the charitable reading and say "maybe I don't understand the point they're trying to make and we should discuss it".

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-06T16:47:10.825Z · LW · GW

Just to make it clear and explicit. I am not a scientist nor am I a member of the scientific establishment.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-05T23:15:57.072Z · LW · GW

When someone makes several comments that are longer than the post itself, and when the reasoning is demonstrably fallacious (weirdness criterion!?), I think it is fair to call the comment a gish gallop when that is the most economical way to express what happened.

You could have engaged on whether this was "demonstrably fallacious". That would have been interesting to read and I would've upvoted a good comment of this sort.

Again, you are the one who seems to be arguing in bad faith. It is very frustrating because LW has a long history of criticizing the practice of science, and it'd be interesting to see another good discussion in that vein.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-05T22:56:30.643Z · LW · GW

So I did that in this post, but then I was told by dustin that I've written something too glaringly obvious yet clearly incorrect and controversial.

No, I'm not qualified to gauge whether you are clearly incorrect. I am qualified to comment on whether you're making a convincing argument. Your arguments are not convincing largely because you do not really engage with people who question you.

In The Ghost of Joseph Weber, the response was a series of gish gallops by gjm in which he argued that organizing random data according to a criterion called 'weirdness' was scientific. (It is not.)

And this is the problem. You could, for example, have a good and through discussion with gjm about this specific point. But you won't, and I find it disappointing.

Look, here's the deal for me:

  1. Bringing up that human bias could be the cause of a scientific result is neither sufficient nor necessary to negate that result...the bias is beside the point of whether they are right or not. You have to engage the results.
  2. Most people, no matter how smart, do not have the background, time, or energy to engage on specific points of the technical subjects you have raised in your series of posts. (Of note, this is why you would do better to focus on single, specific technical points rather than shotgunning a non-physics-expert audience with every single technical thing you think is wrong with advanced physics experiments.) (This is also why, to most observers you are the one who started out with a gish gallop.)
  3. These technical points are the only thing you have to hang your hat on.
  4. gjm, to all appearances, seems to actually have the background to engage you on these points.
  5. Instead of engaging on any point gjm raised, you basically just dismissed all of them out of hand.
  6. Because of this, to an outsider of the field, you are now the one who looks like the one who has succumbed to unknown-to-us biases.
  7. As far as any outsider can tell there are a lot of plausible explanations for your position, and only one of them has to do with you being right...and you lowered my priors in the "this person is right about all of this physics stuff" explanation for your posts by rejecting engagement with the main person trying to engage you on a technical level.
  8. gjm could be full of shit. I don't know, but I do know that it doesn't seem like he's full of shit. I do know that a few of the factual things he brought up that I do have the background to check on, like him saying you were misquoting others, seemed spot on. Add on to that your refusal to engage, and you're obviously going to be in the position you're in now.
  9. You may very well be correct but you're doing us all a disservice by arguing your points poorly.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-04T23:50:28.715Z · LW · GW

I don't think you're saying anything here that longtime community members do not understand. Most here have discussed the basic human biases you're describing ad nauseam. The pushback you've received is not because we do not understand the biases you're describing. The pushback is sourced in disagreement with the idea that scientists are actually doing the things that your analogies imply they are doing.

In this post you're just reasserting the things that people have disagreed with you about. I recommend directly addressing the points that people have brought up rather than ignoring them and restating your analogies. A brief perusal of what people have commented on your posts seems to show remarkably little effort by you to address any particular feedback other than to hand wave it away.

This is particularly the case when most people's priors are that the person disagreeing with the scientific establishment is the one who has a very strong burden of proof.

Comment by dustin on Free Educational and Research Resources · 2020-07-31T03:16:29.790Z · LW · GW

I've been taking community college classes since I was like 15 years old (now in mid 40s) to learn skills for hobbies or just satisfy curiosity. I really recommend it.

Comment by dustin on What a 20-year-lead in military tech might look like · 2020-07-29T22:39:03.843Z · LW · GW

With aimbots you could shoot them down, but even an autoturret would probably only be able to take out 10 or so before they closed in on it and blew it up.

It doesn't seem unlikely to me, dependent upon terrain, that an aimbotted CIWS-esque system would easily take out a 1000 unit swarm of drones. I'm curious about your reasoning that leads you to conclude otherwise.
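
For concreteness, here's the back-of-envelope arithmetic behind my intuition. Every number below is a made-up assumption rather than a sourced figure, so treat it as a sketch of how the conclusion swings with the parameters:

    # All parameters are hypothetical assumptions, not sourced figures.
    detection_range_m = 5_000  # assumed distance at which the turret can engage
    drone_speed_mps = 20       # assumed closing speed of a small drone
    seconds_per_kill = 0.25    # assumed aimbot slew + fire time per kill
    swarm_size = 1_000

    window_s = detection_range_m / drone_speed_mps  # ~250 s before contact
    max_kills = window_s / seconds_per_kill         # ~1000 at these numbers

    print(f"engagement window: {window_s:.0f} s")
    print(f"drones stopped before contact: {min(max_kills, swarm_size):.0f}")

Halve the engagement range (terrain!) or double the time per kill and the swarm gets through, which is why the terrain dependence does so much work in my claim.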

Comment by dustin on The Basic Double Crux pattern · 2020-07-22T17:29:07.887Z · LW · GW

In my experience, where Double Crux is easiest is also where it's the least interesting to resolve a disagreement because usually such disagreements are already fairly easily resolved or the disagreement is just uninteresting.

An inconveniently large portion of the time disagreements are so complex that the effort required to drill down to the real crux is just...exhausting. By "complex" I don't necessarily mean the disagreements are based upon some super advanced model of the world, but just that the real cruxes are hidden under so much human baggage.

This is related to a point I've made here before about Aumann's agreement theorem being used as a cudgel in an argument...in many of the most interesting and important cases it usually requires a lot of effort to get people on the same page and the number of times where all participants in a conversation are willing to put in that effort seems vanishingly small.

In other words, double crux is most useful when all participants are equally interested in seeking truth. It's least useful in most of the real disagreements people have.

I don't think this is an indictment of double cruxin', but just a warning for someone who reads this and thinks "hot damn, this is going to help me so much".

Comment by dustin on Thiel on Progress and Stagnation · 2020-07-21T02:06:45.071Z · LW · GW

I think Thiel is correct about much (most? all?) of these things, but I'm also very suspicious of the idea that most of it is original thinking.

Then again, it's not important enough to me to do any of the work of tracing the history of these ideas. Hopefully someone else cares enough to educate me.

Comment by dustin on The Ghost of Joseph Weber · 2020-07-21T01:56:15.365Z · LW · GW

That is a way to make a rough estimate in the same way that providing the construction costs for a whole shopping mall is a way of providing a rough estimate of how much it costs for me to walk in the door of said mall.

In other words, there are too many unknowns and counterfactuals for that to even begin to be a useful way of calculating how much EHT cost.

In a way it's almost beside the point. You made the positive claim, seemingly without any solid facts, that it cost billions of dollars. When you were called on it, a way to increase the confidence of others in your arguments and presented facts would be to say something like "you know, I shouldn't have left that in there, I withdraw that statement".

By not doing so and sticking to your guns you increase the weight others give to the idea that you're not being intellectually honest.

Your current tack might be useful in political rhetoric in some quarters, but it doesn't seem like it will be effective with your current audience.

Comment by dustin on Criticism of some popular LW articles · 2020-07-19T04:24:50.198Z · LW · GW

A couple of initial thoughts I had whilst reading this. Take these more as ponderings on my state of mind than as critiques or corrections.

Without some more formal structure in place, the nature of which I'm unaware, I am not able to "assess" content for correctness or usefulness.

I find this curiously foreign to my default mode of thinking when reading on LW and elsewhere. It is not uncommon for me to find myself thinking "that seems wrong" and "that seems right" within a single paragraph of content from writers I think are the "rightest". On the other hand, I usually do not feel as confident about my assessment in either direction as you seem to be in your post.

That being said...

My reaction to rationalist content is governed by my frame of mind.

I assume this to be the case with all content and I've always assumed it holds for everyone and it hasn't occurred to me to think of rationalist content as different in this way, but seeing you state it "out loud" makes me think maybe I should have.

Comment by dustin on The Ghost of Joseph Weber · 2020-07-19T03:08:32.194Z · LW · GW

So, you seem to continue to use a rhetorical device wherein you do not directly address the points that your interlocutors are bringing up and just answer the question you wish was asked.

For example, this comment I'm replying to here has almost zero bearing on what I said. Saying EHT is bad is not a way to address the argument that EHT did not cost billions of dollars. EHT may very well be bad, but that has no bearing on the subject at hand.

In your previous comment to me in this thread you did the same thing.

Comment by dustin on The Ghost of Joseph Weber · 2020-07-15T20:12:35.659Z · LW · GW

Since you seemingly can't defend nor withdraw your claim that EHT cost billions of dollars, a reasonable person can only assume that the rest of the factual content of your post is suspect.

Comment by dustin on The Ghost of Joseph Weber · 2020-07-14T20:55:02.693Z · LW · GW

I'm not arguing that the telescopes are useless

It did not seem like you were making such an argument, nor was I asserting that you were making such an argument.

The telescope could have cost umpteen trillions of dollars and that fact alone would not support your claim that EHT cost billions of dollars.

I'm not sure how to understand the fact that the previous statement is obvious and yet you still made your comments. I feel like the most charitable interpretation that I can come up with still does not leave a good impression of your overall argument.

I'm not harping on this apparent mistake for no reason. It's just that of all the things described by gjm this seems like it might be the easiest to explicate.

Comment by dustin on The Ghost of Joseph Weber · 2020-07-14T19:11:51.615Z · LW · GW

It's unclear if you're claiming that you have actual figures that show the EHT actually cost billions of dollars, or if you're claiming that you think it's likely, but just a guess, that it kept all those radio telescopes "in business", or if you're taking back your claim that it cost billions of dollars.

Comment by dustin on Types of Knowledge · 2020-06-20T19:56:43.930Z · LW · GW

First, an apology as this comment is going to be frustratingly lacking much in the way of concrete examples. I have the kernel of an idea, but it would require more thought than I'm willing to put into it to expand it. I post it to get it out of my head and in case maybe someone else will want to think about it more...

I kind of understand the categories you're trying to carve out, but I'm also leery of them. It feels like your descriptions of the categories make assumptions about meanings and these assumptions are hidden and a person could trick themselves.

I'd have to think about it a lot more to really pin down the ephemeral idea I'm trying to get at, but it's similar to the observation I've made here before that Sherlock Holmes's maxim that "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." is dangerous because it's too easy to convince yourself that you've explored all the possible explanations.

In a similar manner, your description of why something should be considered engineering or scientific knowledge feels as if a person could convince themselves, without realizing it, that a thing belongs in one category or another, where from an objective standpoint you'd be able to make a rational argument for it appearing in either category.

It also feels as if many things will switch between categories depending upon your priors. Your system as stated seems like it would put reciting the steps to make bread and making bread with those steps into separate categories. As an avid bread maker, I'm currently unconvinced of the utility of a category system that would put reciting the steps to make bread and actually making bread with those steps into different categories. I guess I would ask what is the goal of putting those two things into separate categories? What do you hope to get out of doing so?

On the other hand, I'm also unconvinced that a category system has to be so rigorous to be useful. It might be that a category system can be just rigorous enough to help a person...but, like I said, I'm leery that it will lead a person astray from their goals of using such a system without them realizing it.

Comment by dustin on Why isn’t assassination/sabotage more common? · 2020-06-07T22:00:31.879Z · LW · GW

It seems more likely to me that modern technology has made it harder for someone to become a leader even if there are people who have decided to act as such. It does not seem likely to me that there are no outspoken people who want to be leaders or that they are, in general, afraid of assassination.

Take the realm of elected political leaders. By the very nature of this realm, there is just one person of focus for each campaign, and I'm not under the impression that there is a dwindling number of campaigns for political office...a position that has been under threat of assassination throughout history.

Comment by dustin on Why isn’t assassination/sabotage more common? · 2020-06-05T02:15:05.933Z · LW · GW

Was that actually the plan or just a post facto explanation? My prior would be that this happened because of the organizing mechanisms of the day (internet vs in-person meeting of the past).

Comment by dustin on Running Wired Ethernet · 2020-05-14T06:51:20.806Z · LW · GW

Just so you know, the crawlspace is where every dropped nail ends up during construction. Some contractors do a better job than others at cleaning that up.

Comment by dustin on Studies On Slack · 2020-05-14T00:47:59.165Z · LW · GW

I can only assume you aren't aware that there are many readily available discussions about why Behe's irreducible complexity doesn't hold water.

To have any chance of making any headway with the argument you seem to be attempting here, you're going to have to seriously engage with the large quantity of work that is a retort to the irreducible complexity thesis.

Imagine you're in a world where it's not immediately obvious that a structure built of brick is more resistant to fire than a structure built of straw. There's been lots of discussion back and forth for generations about the relative merits of brick vs straw.

There's a famous expert in brick structures named Fred, and everyone on both sides of the debate is aware of Fred. Fred has written a book that brick people think makes it obvious that brick buildings are the best. The straw people have many and varied reasons that they think prove Fred is wrong.

Now, you're interested in helping the straw people see the light. You have an opportunity to talk to a room full of straw people. You want to convince them that brick structures are the best. You're not interested in a tribal fight about brick vs straw, you want to actually persuade and convince.

Would your opening gambit be to say, "Brick structures are the best because Fred says so. It's so obvious!"? No, of course not! The most reasonable approach would be to engage with the already extensive discussion the straw people have around Fred's ideas.

Comment by dustin on Running Wired Ethernet · 2020-05-13T23:09:07.268Z · LW · GW

Those bare feet in a crawlspace make me nervous!

Comment by dustin on Why I'm Not Vegan · 2020-04-12T00:46:13.673Z · LW · GW

I could imagine so would a lot of non-rationalist meat eaters

Maybe your imagination accurately reflects reality or maybe not, but it's certainly not incongruent with enough people having the viewpoint(s) that make jkaufman's stance not-unusual.

The average person's revealed preferences seem to assign close to zero weight to animal suffering.

On the other hand, we could make the argument that we should compare jkaufman's position to what I would assume to be the tiny minority of people who have given any substantial amount of thought to veganism and animal suffering.

In that case, I would agree that it is likely that he is unusual.

Comment by dustin on [U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government · 2020-04-12T00:39:21.214Z · LW · GW

I decided against applying for either of these. I'm self-employed with no other employees and I haven't currently lost any income. I may or may not in the coming months. I'm worried about the repercussions if I apply for this, accept the money, and then end up not actually needing it.

Comment by dustin on Why I'm Not Vegan · 2020-04-09T21:36:58.279Z · LW · GW

Given the rate of veganism, I'm not sure "unusual" would apply to jkaufman in either case.

Comment by dustin on Research on repurposing filter products for masks? · 2020-04-06T18:57:20.454Z · LW · GW

Agreed.

I'm just hoping that they can give the OP some information about using HEPA filters.

I've noticed that many N95 masks also have an exhaust valve.