Posts

DIY Transcranial Direct Current Stimulation. Who wants to go first? 2012-03-14T16:58:30.024Z
RAND Health Insurance Experiment critiques 2012-02-18T17:52:59.330Z

Comments

Comment by Dustin on Problems of evil · 2021-04-19T21:36:40.929Z · LW · GW

I did not hate this post, but I also spent much of my time thinking it felt like one of the many threads all over the internet discussing whether Boromir from LOTR would do this particular thing or why Saruman didn't do this other particular thing.

It just seems, like most theology, to be a lot of discussion based upon the rules and structures of something that might as well be LOTR. That is not to say I'm outright asserting some strict atheist position and "lol religion and spirituality". However, this all mostly seems to hang on a set of conceptions about the world that doesn't obviously seem to exist...at least to most of the audience this essay is going to reach.

Of course, this stems from my point of view that questions the very existence of spirituality as something to be pondered over as anything more than memetic hazards traveling through time and quirks of brain construction.

Comment by Dustin on [deleted post] 2021-04-18T03:10:39.972Z

that the government isn't profit-maximizing. 

 

I'm saying that that is the case currently, and agreeing with ChristianKl that incentives would pressure against it under your regime.

  1. If, as you are proposing, being not-profit-maximizing is the reason USPS hasn't driven FedEx out of business
  2. and being not-profit-maximizing is the result of current incentives
  3. and someone claims, as ChristianKl does, that the incentives for being not-profit-maximizing change under your proposed take-the-wealth regime
  4. then the evidential weight of USPS not driving FedEx out of business under the current regime is weakened quite a bit, since the very thing in question is whether that will remain the case (see the toy sketch below).
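
To make point 4 concrete, here's a toy Bayesian sketch of the evidential-weight argument (the probabilities are numbers I invented purely for illustration, not anyone's actual estimates):

```python
# Toy Bayes: how much evidence is "USPS hasn't driven FedEx out of business"
# for the claim "USPS also wouldn't outcompete FedEx under the proposed regime"?
# Both probabilities are invented for illustration.
p_obs_if_claim_true = 0.95   # FedEx almost surely survives if the claim is true
p_obs_if_claim_false = 0.90  # but under *current* incentives FedEx survives anyway

likelihood_ratio = p_obs_if_claim_true / p_obs_if_claim_false
print(f"Likelihood ratio: {likelihood_ratio:.2f}")  # ~1.06: very weak evidence
```

When the observation is nearly as likely under both hypotheses, it barely moves the posterior, which is all I mean by "weakened quite a bit".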
Comment by Dustin on Could degoogling be a practice run for something more important? · 2021-04-18T00:01:15.619Z · LW · GW

I don't necessarily disagree, but I will note that there are a lot of alternatives to many of Google's tools.  Some are better, some are nearly as good, some are much worse, but I feel like you could get a long way toward systems that help humans work together with all of the free and open source replacements that are out there.

In other words, I'm not so sure that Google and other Google-esque companies are a necessary component of tools to help us work together.

Comment by Dustin on [deleted post] 2021-04-17T23:45:31.638Z

I do not think the government currently has anything to gain by out-competing FedEx.  It seems like you're just kinda re-asserting the very thing that ChristianKl is questioning.

Comment by Dustin on How & when to write a business plan · 2021-04-15T22:32:47.632Z · LW · GW

Yes, definitely. The sparse nature of expertise networks in most areas still seems to really rear its head. 

This isn't to say it's impossible.  Like you say, approaching people out of the blue is a good idea, and I can say from experience that it works!  It's just that (maybe?) there are greater barriers for these people.

It's more a problem of finding entrepreneurial experts than it is of finding field experts.  Imagine you have an idea for building a business that disrupts the Jiffy Lubes of the world (I got my oil changed today) but you're located in Montgomery County, KY, with a population of 27k.  You can likely easily find someone willing to give you ideas about how an auto repair shop works, but it's very unlikely that you'll know of someone who, for example, knows how to start a franchising company, or do good market research, or find investors, or whatever.

Maybe this budding entrepreneur just shouldn't be attempting this. Maybe the budding entrepreneur should just move to whatever more urban area is appropriate for their field.  It's just that I find all this a little unfortunate for all the areas that get drained of the best entrepreneurs.

Comment by Dustin on How & when to write a business plan · 2021-04-15T21:05:08.123Z · LW · GW

This comment is a reflection on the state of social networks and not a critique of the OP's post.

One thing I always see glossed over in articles like this one is:  where to find these experts giving advice?

On one hand, of course it's this way. The people writing these articles are already experts (or pretending to be!) and are immersed in dense networks of entrepreneurial people and are targeting people with access to these networks. Almost by definition, the people starting the biggest/most-profitable/MOST-EST businesses are largely going to be people with access to these same networks and capital. Much (most? all?) of this seems predicated on physical location.

On the other hand, I've started and exited a successful (relative to its peers in its physical area) business and know a person or two like me that are located outside of urban areas where these types of entrepreneurial networks are focused.  I know all of us would've greatly benefited from a vastly expanded network of experts. And, unfortunately, our networks in more rural areas of the US are much better than what is available in large swathes of the world.

Comment by Dustin on Forcing Yourself is Self Harm, or Don't Goodhart Yourself · 2021-04-12T15:24:53.302Z · LW · GW

If this is like, established fact or something...I did not know this, and I understand why the hypothetical person was also unaware of this.

 

As Viliam says, it's something I've heard constantly throughout my life.  However, the hypothetical person not having heard of it relates to the points I'm trying to make.  I'm saying that rather than telling them to focus on something other than performance, telling them how to better measure themselves might be the better course.

But since I don't expect to see an RCT anytime soon

To be clear, this is exactly why I tried to couch all my language in this thread in "might", "I think", and other terms to indicate that not only am I not sure, but I'm not sure how anyone can be sure about this subject.

When I say to the OP, "I'm also not sure how much weight to give your personal experiences in this area.", I think I'm saying the same thing you're saying.  I'm not trying to say in a roundabout way that I don't believe the experiences of the OP. I'm saying my literal state of mind. I also want the OP to post posts like this one for the same reasons you describe.

Comment by Dustin on Forcing Yourself is Self Harm, or Don't Goodhart Yourself · 2021-04-10T20:30:50.703Z · LW · GW

What you say is true, but it reduces the problem by applying weaker optimization pressure rather than actually eliminating it. Weak Goodharting is still Goodharting and it will still, eventually, subtly screw you up.

 

  1. I think all self improvement is subject to Goodharting, even the type you recommend.
  2. The best things available to us to do about that:
    1. Be nimble and self-aware.  Adjust your processes to notice when you're harming yourself.
    2. Be thoughtful in how you measure success.

 

I do not think this actually contradicts your post, but, at least for me, it seems like a more actionable framing of the issue.

Comment by Dustin on Forcing Yourself is Self Harm, or Don't Goodhart Yourself · 2021-04-10T20:29:07.318Z · LW · GW

Bah, the site ate my comment.  I'm not going to try to recreate it, so here's a rough summary of what I said:

  1. I think all self improvement is subject to Goodharting, even the type you recommend.
  2. The best things available to us to do about that:
    1. Be nimble and self-aware.  Adjust your processes to notice when you're harming yourself.
    2. Be thoughtful in how you measure success.
Comment by Dustin on Forcing Yourself is Self Harm, or Don't Goodhart Yourself · 2021-04-10T17:16:59.857Z · LW · GW

The following comment isn't exactly a criticism. It's more just exploring the idea.

I still struggle to really get on board with the advice you offer here while at the same time thinking that the general idea has a lot of merit.  I think that combining making yourself broadly better with focusing on narrow areas is maybe the best approach.

Take your illustrative story.  I'd say the problem here is not that the person is trying to focus on the narrow area of increasing productivity.  It's that they picked a bad metric and a bad way of continually measuring themselves against the metric.  The story just kind of glosses over what I would say is the most important part!

I'd say that 65%-75% of the problem this person has is that they apparently didn't seriously think about this stuff beforehand and pre-commit to a good strategy for measurement.

The person who looks and says "I only wrote 100 words last hour?!??!" kind of reminds me of the investor checking their stock prices every day.

For this person, three months or six months or a year might be a better time frame for checking how they're doing.  Regardless, the main point I want to make is that how well this person is able to improve themselves in this area while maintaining their well-being is largely dependent upon making good decisions on this very important question.
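
To illustrate the stock-ticker point, here's a minimal simulation sketch (all of the numbers are invented; I'm assuming a "true" average of 500 words per day with noisy individual days):

```python
import random

random.seed(0)
# Simulated daily word counts: true mean 500, with large day-to-day noise.
days = [max(0, random.gauss(500, 200)) for _ in range(90)]

worst_day = min(days)                    # what the day-by-day checker fixates on
quarterly_mean = sum(days) / len(days)   # what a three-month check-in sees

print(f"Worst single day: {worst_day:.0f} words")     # can look catastrophic
print(f"90-day average: {quarterly_mean:.0f} words")  # close to the true 500
```

The day-by-day checker reacts to noise; the quarterly checker sees the signal.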

On the other hand, making good decisions about this is also part of your advice...aka, keeping broad self-improvement in mind.

FWIW, I've lived most of my adult life (I'm in my mid 40s) basically with this sort of mindset...focusing on specific areas of self-improvement but also being well aware of how it might affect my broad well-being and taking that into account. I think everyone who knows me would tell you I'm a well-adjusted, friendly, and happy person.

That being said, I feel like a lot of that is inherent in my personality, so I'm not sure how much weight to give my personal experience.

Also, I wanted to say that I know many people who really came into their own in their mid-to-late thirties. I think a lot of people just start getting their life into order by that time, so I'm also not sure how much weight to give your personal experiences in this area.

Comment by Dustin on Convict Conditioning Book Review · 2021-04-10T16:14:06.952Z · LW · GW

I dabble in getting more fit from time to time, and bodyweight work always seems to call to me, but...

*Why* does Wade think lifting weights is bad and calisthenics are good?  It seems like he just asserts that as the case and then goes on to demonstrate the benefits of calisthenics.

(Forgive me if this was covered more than I thought...I read this over two separate days, but I don't recall much discussion of this.)

Comment by Dustin on If my previous research is wrong, what are my options ? · 2021-04-07T22:30:45.372Z · LW · GW

If we publish a new article, as my boss wanted, I fear some people will still find the first paper and not the second one, will keep quoting it, and, god forbid, use that published method.

 

Can you clarify this for me?  Why would a new article make it more likely that people would find the first paper compared to the current situation wherein the only paper they could find is the first paper?

Comment by Dustin on Forcing yourself to keep your identity small is self-harm · 2021-04-03T17:18:45.419Z · LW · GW

I believe I keep my identity small "naturally".  The idea of belonging to an identity kind of gives me an icky feeling. I'm not attracted to the idea of being part of an identity or describing myself as being part of an identity. I do not express myself as being a rationalist or any other sort of group you could plausibly describe as being part of my identity.

This is not to say that I don't do rationalist things. I do not find the concept of an identity useful in describing or motivating the actions I take.

Keeping my identity small is more just a side effect of other processes.  I do not have the value "keep your identity small"; keeping my identity small just falls out of other processes. However, in the past, I'm sure I've mistakenly described the reason I don't have identity X as "because I want to keep my identity small".  I'm not sure why that is. It seems easier to describe the process that way?

This is all to say that some percentage of people saying they're doing X to keep their identity small are misconstruing what is happening. I do not know how common this is, but surely it's not zero percent.

Comment by Dustin on Defending the non-central fallacy · 2021-03-11T03:00:30.248Z · LW · GW

This comment is mostly an aside, so feel free to skip reading if you're not interested in a digression.

To quote the quote of Huemer:

Similarly, if taxation is theft, then it would probably be wrong to tax people, say, to pay for an art museum.

I enjoyed much of the argument that you quoted, but this struck me.

I think maybe the "would probably be" should have been written "it might be".  I can think of arguments around the benefits to society of everyone getting to enjoy art in an art museum that do not seem to apply to the individual who steals to buy a piece of art for their home.

I haven't thought them all the way through, but, again, I found the quoted sentence quite incongruous.

Comment by Dustin on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-29T00:47:15.866Z · LW · GW

I think the idea is that the AI doesn't say "help me establish a dictatorship".  The AI says "I did this one weird trick and made a million dollars, you should try it too!" but surprise, the weird trick is step 1 of 100 to establish The AI World Order.

Comment by Dustin on Technological stagnation: Why I came around · 2021-01-23T23:56:36.837Z · LW · GW

I can't point to the episode(s) or post(s), but I believe both on his blog and on his podcast Conversations with Tyler, Tyler has expressed the idea that we may currently be coming out of the stagnation of stuff in the Real World, driven by things like SpaceX, CRISPR, mRNA, etc.

Comment by Dustin on The map and territory of NFT art · 2020-12-30T02:04:05.961Z · LW · GW

See also: art forgeries that pass for the original for years before they're discovered.  Their value, despite nothing changing except their origin story, usually plummets.

Comment by Dustin on 100 Tips for a Better Life · 2020-12-25T18:54:27.124Z · LW · GW

69. When you ask people, “What’s your favorite book / movie / band?” and they stumble, ask them instead what book / movie / band they’re currently enjoying most. They’ll almost always have one and be able to talk about it.

 

I can't imagine narrowing the dimensions of my preferences in such a way that one single piece of media can become my "favorite", so I'm never sure what to think when someone else seems to have done so.

Comment by Dustin on 100 Tips for a Better Life · 2020-12-25T18:39:49.058Z · LW · GW

The downside of getting used to multiple monitors is that I now find it impossible to get anything done on a laptop.  There's a constant low-level background irritation when I find myself confined to one tiny screen.

There are diminishing returns of course, but I've found 3 monitors to be the best for me: one portrait and two landscape.

Comment by Dustin on 100 Tips for a Better Life · 2020-12-25T18:31:12.830Z · LW · GW

Possibly; it depends on the individual cop.  However, I think the idea is that if you haven't done anything wrong and you don't answer any questions, you're in a better position than if you answer and risk saying something that sounds incriminating and/or the cop is not questioning in good faith.

In other words, the consequences of seeming suspicious with no evidence against you are much better for you than the consequences of saying the wrong thing.

Comment by Dustin on 100 Tips for a Better Life · 2020-12-25T18:28:18.483Z · LW · GW

By far the most common context in which anyone I know has interacted with the cops is when filing police reports for damaged or stolen property

 

USA resident here that lives in a more rural-esque area:

I can't say I know anyone who has talked to the cops to file a report. Every interaction that I can think of between people I know and the cops has been in a situation wherein they could incriminate themselves. Traffic stops and the like.

Comment by Dustin on 100 Tips for a Better Life · 2020-12-22T22:34:23.159Z · LW · GW

I'd recommend AutoIt instead of AHK.  Not that AutoIt is a great language, but it's a better language than AHK, using more standard language constructs.

Comment by Dustin on Notes on Good Temper · 2020-11-29T20:34:20.850Z · LW · GW

I agree with you.

However, in case my last comment wasn't clear on the subject: I do not think anger is required to punch the bully. I'm not sure anger is required in any circumstance and I'm sure anger has negative consequences no matter the reason for it.

Comment by Dustin on Notes on Good Temper · 2020-11-28T17:20:48.416Z · LW · GW

Yes, I agree that anger serves that purpose and I think a person should be aware of that. However,

  1. You have to balance that against the times wherein anger causes negative outcomes.  Is it really that often that most people in modern societies have to scare others off from doing further injustices to them or their group, enough to offset the negative outcomes sourced in anger? I can't think of one time I've been angry and felt like it was a useful way to use my emotional resources.
  2. Is anger the only way to signal your reliability to your group and to scare off those who would do further injustices to you? Probably not. For one, I don't think feeling angry is the only way to achieve the desired signaling. You can just...choose to respond in a way that signals you're not to be messed with or whatever is appropriate. When signaling is required, there are multiple non-angry options available to the good-tempered. Biting sarcasm. The air of the unflappable cool person who handles their shit. Just flat out pretending to be angry!

Despite being a friendly person that people generally like (I think!), I'm a fairly solitary individual (by choice!) (I hope!). In my experience, it's been 95% situations wherein I do not need to signal to any group that I'm a reliable member, and those who would be on the receiving end of my anger, if I had any, are people I'll never see again.

Usually it's something like the most recent situation I was in wherein I think people would have expected me to react with anger...

There was a young man and woman having a huge screaming fight outside a 4-plex apartment building my parents own.  It'd been going on for like 15 minutes so I went over there and told them to keep it quiet and please leave the property.  They both got very belligerent with me, and I felt nothing approaching anger. Just amusement evidenced by a smirk.  That guy in particular didn't like the smirk.

I'll never see those people again. But, if I was going to, or if there were people around to make a mental note about whether I'm a reliable group member, they'd have just seen the guy whom they couldn't get a rise out of.

There have been maybe 5 instances in the past 15 years similar to that, wherein a person or small group of strangers that I'll never see again directed their anger at me specifically while I was by myself or with my wife.  There's been one time in the same time period wherein it was prudent to think about signaling to others that I was a reliable group member.

I'm just not so sure that anger is actually more useful than harmful.

Comment by Dustin on Notes on Good Temper · 2020-11-27T18:47:40.792Z · LW · GW

As one often accused of good temper, I'm always amused by the fact that it often makes people angrier when you don't get (as) angry as they think you should. (And, of course, this amusement makes the situation worse.)

What I sometimes find overlooked in discussions about whether you should or should not get angry is whether your anger is constructive.  Some people seem to thrash and wail and accomplish nothing to address the source of their anger, while others calmly address the problem.

I do not find credible the claim that anger is a necessary prerequisite to address (some) wrongs.  It may be for some, but I think motivation-to-address-injustice is not inextricably linked to anger. Of course, as someone who seems to be naturally good tempered, this belief is self-serving...

Comment by Dustin on Is Stupidity Expanding? Some Hypotheses. · 2020-10-16T16:21:17.693Z · LW · GW

I think it's plausible that many or most people today barely skate by on literacy and algebra when they're in school and it all almost immediately fades away to the bare minimum they require to survive once they're out of school.  Note that Mauro was talking about what civilization required out of people, not what they were capable of doing.

I also think it's plausible that while you didn't need to read, write, and algebraize at some point in the past, you regularly needed other mental skills like...how to track animals or when to plant corn or whatever the heck you need to survive when there isn't our modern civilization supporting you (obviously I'm suckling on the teat of modern civilization because I don't know wtf).

Note that I'm not actually claiming that either of these are true, only that I can see "how the mental part can be true".

Comment by Dustin on Why isn't JS a popular language for deep learning? · 2020-10-08T22:43:36.107Z · LW · GW

I'm very open to hearing about setups that work

I could probably help you with specific problems, but my advice is mostly going to just be "use PyCharm".

Like I said, it's not perfect, but I don't find it horrible.  But then again, many people find using Python or JS horrible no matter what, so "it's horrible/not-horrible" is kind of hard to generalize.

One thing to note is that there is active work in the Python community about improving the typing situation for tensors. You can search for "tensor typing" on the python typing-sig list for more insight.

Yeah, this is basically what I'm confused about. In other areas I see a million JS fans piling in proclaiming the benefits even when it makes no sense, but that just doesn't seem to happen with ML.

JS does offer real, obvious advantages over some languages, and JS probably made inroads in fields where those languages are used a lot.  The problem with Python vs JS is as I described in my root comment.  Also, Python and JS are actually very similar in day-to-day usage, so there's no slam-dunk case for a switch to JS.

Comment by Dustin on Why isn't JS a popular language for deep learning? · 2020-10-08T20:16:42.050Z · LW · GW

I've used both JS and Python extensively for like a decade (and TS for a couple of years).  I think they are all very effective languages.

For deep learning there are all the usual benefits of using JS, e.g.:

  • easy to learn
  • huge community
  • flexible about paradigms
  • write code once, run anywhere (especially useful for training/deploying models as well as cool applications like federated learning on client devices).

I'm not really convinced JS has any useful benefit over Python in these areas except for running in the browser. I think Python runs everywhere else JS would run. I don't think running in the browser has enough benefit to enough projects to overcome the already-built institutional knowledge around Python deep learning.  Institutional knowledge is very important.

I know Python3 has type hints, but it's a really horrible experience compared to any proper typed language.

I do not find this to be the case.  Note that I'm not saying that Python typing is as effective as, say, TS or C#, or many other languages with typing "built-in"; I'm just saying I don't find it to be a horrible experience.

With both languages it's hard to get a consistent experience with libraries that don't properly implement types. On one hand, DefinitelyTyped provides a larger variety of types for third-party libraries than does TypeShed. On the other hand, IME, a good IDE is much more able to infer type information with your typical Python library than it is with your typical JS library.

That being said, I just don't think many people doing deep learning stuff are doing any sort of type checking anyway.  

I think if types are very important to you, depending on what about types you're looking for, you're much more likely to move to Java or C++ or Julia or something.

But with VSCode plugins, I just hover over a variable and it tells me what I'm doing wrong immediately.

I use PyCharm, not VSCode, but it gives you a lot of that sort of thing with Python code because of its native support for typing and type inference. However, this isn't a very useful comparison point without a much more detailed comparison of what each offers.
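
For anyone who hasn't used Python's type hints, here's a minimal sketch of the kind of static checking we're talking about (the function is a toy example of my own, not from any library):

```python
from typing import List, Optional

def mean(values: List[float]) -> Optional[float]:
    """Return the arithmetic mean, or None for an empty list."""
    if not values:
        return None
    return sum(values) / len(values)

m = mean([1.0, 2.0, 3.0])
# A checker (mypy, or PyCharm's built-in inspections) flags the next line
# statically, because `m` is Optional[float] and may be None at runtime:
print(m + 1.0)
# Narrowing the type first satisfies the checker:
if m is not None:
    print(m + 1.0)
```

The hover support mentioned below is essentially this same inference surfaced interactively.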

 

In general, I think the real answer to your question is that JS isn't obviously better, or at least not obviously better enough, and thus there's just no push to move to JS.

Comment by Dustin on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-10-01T00:04:39.078Z · LW · GW

I'll make this last comment to clarify my position and if you want to reply, I'll let you have the last word (I say this with sincerity and not in a passive-aggressive manner!)

First of all, I feel like you're continuing to defend the idea of nonmanipulative communication. To make it clear, I'm not questioning whether it exists or is useful or anything at all.  I'm questioning the idea that the chaperone-protein analogy is actually analogous to any sort of communication.

You spoke about the equivalent would be the therapist talking to people in the enviroment of the patient that are external to the therapist. A chaperone doesn't change things in the enviroment of the protein that are external to it to make the enviroment interact with the protein in a good way. 

Hmm. 

I don't feel as if that's exactly material to the point at hand. The main point is that the chaperone doesn't interact with the protein in any way. It's impossible for a human to be like the chaperone and for the human to communicate with the "protein".

However, I will point out that I don't mean to claim exactly what you seem to think I mean to claim. My only claim is that the therapist interacting with people other than the patient, without interacting with the patient, would be somewhat analogous to the chaperone. That is as far as it goes.  That doesn't go far enough to become a useful analogy, because the chaperone-protein relationship is not equivalent to any sort of communication.

There are reasons why the phrase holding space is frequently used to describe this kind of communication as something that the therapist does. 

There are things in the field of alternative communication that are hard to communicate. I'm not sure whether there's much more that I can say at this point if what I have already written doesn't bring the idea across. 

I think you're still sidestepping the point here.  "Things in the field of alternative communication" have almost no bearing on the point of my comments.

My whole point is that the chaperone-protein "relationship" is not communication at all. There is no special type of communication that is not communication.

(You can probably make the argument that the protein communicates one-way with the chaperone. How does the chaperone "know" where to be? I do not know. However, this is impossible to analogize with the type of communication you're analogizing with.)

In this case the therapist doesn't have a particular purpose towards which they want the patient to change.

Sure, I agree.

My comments do not attempt to dispute that. My point is that I do not think you made the case for this definition of (or any of the definitions of) "manipulative", because 1) the chaperone is not analogous to communication of the type you describe and 2) your post largely hangs on this analogy.

If you take away the analogy, your post amounts to the assertion that non-manipulative communication exists.

Comment by Dustin on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-09-30T20:54:41.518Z · LW · GW

No, the chaperone is basically the full enviroment surrounding the protein while it folds.

 

Perhaps you can expand on this because I do not see how it's functionally different from what I said.  It becomes the full environment by intervening with the protein's environment.  It cannot become the full environment without intervening with the protein's environment.

In the moment in which the protein folds the chaperone is it's enviroment just like the therapist sets the enviroment during a session with the patient. 

...and thus I do not see how it's "just like" what a therapist does...at least if we're talking about the ways in which the therapist communicates with the patient.  

I understand the intention of the therapist is to be like the chaperone. But your analogy seems to be between the chaperone and what the therapist actually does.

This is not to say that the therapist can or cannot communicate with the patient without manipulation, only that the therapist actually does communicate with the patient and the chaperone does not.

It's a concept from which useful distinctions are drawn in some areas of therapy. 

This might be true.  However, your post seems to be making the argument that the type of communication a therapist participates in is literally nonmanipulative and I do not think that is the same argument you make with this sentence.

Comment by Dustin on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-09-29T00:46:35.759Z · LW · GW

Actually, by your description I don't think the chaperone intervenes with the protein at all.  There does not seem to be any communication from the chaperone to the protein.  The chaperone intervenes with the environment surrounding the protein.

The closest analogy I can think of that seems to match is a therapist communicating with everyone around their patient without actually communicating with the patient, and keeping it a secret from the patient that they did so.

I'm not sure that is a useful definition of non-manipulative communication.

Comment by Dustin on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-09-29T00:36:51.441Z · LW · GW

Right. I guess my point is that that seems to make comparing the chaperone to the ML algorithm a non-starter.

While I wasn't making this point in my comment, I also think it doesn't seem like a good analogy to nonmanipulative conversation, since the participants in the nonmanipulative conversation are never in a similar state of ignorance.  Even if you're talking to a complete stranger and trying to be nonmanipulative.

You might be able to emulate such a state, but your post makes no argument to that effect.

Comment by Dustin on Not all communication is manipulation: Chaperones don't manipulate proteins · 2020-09-28T23:20:24.904Z · LW · GW

Your title and opening sentences make me think you want to convey the idea that the phrase "non-manipulative communication" means exactly what the literal words the phrase is made up of mean. I do not think you made the case that that is so.

  1. For me, your intuition pump does not seem sufficiently analogous or "pumpy" to the communication you're describing.
  2. You state Carl Rogers says that psychologists act in the same way. As I do not think chaperones and proteins are sufficiently analogous to human communication, I do not think that he actually says that.
  3. You do not actually attempt to make any argument that the communication that is called "nonmanipulative communication" is actually, literally, nonmanipulative.

This allows a chaperone that works in an uncomplicated way to achieve a result that very complex machine learning algorithms currently don't achieve. The machine learning algorithm tries to figure out the best way for the protein to fold while the chaperone just lets the protein find this way by itself.

These sentences seem to be trying to put tension between the machine learning algorithm and the chaperone.  However, it is not clear to me that the result achieved by the chaperone is the same as the result machine learning algorithms attempt to achieve.

Does the chaperone "know" in what way the protein folded itself?  Can we interrogate the chaperone to learn about the protein? I think not. Neither the chaperone nor the protein has an inkling about the other...nor could they even if we grant them magical sentience or agency.

A physical process that emulates the result an ML algorithm is going for would seemingly encompass much more than just the chaperone. To me, if you really wanted to analogize chaperones to something somewhat apropos, they seem more analogous to some small component of some ML algorithm than to the ML algorithm itself.

Unlike humans, the protein and chaperone do not have any agency or intent.

For these reasons, this does not seem like an intuition pump that gets me to an understanding of the type of communication you're talking about and I do not think you've made an argument that "non-manipulative communication" is non-manipulative.  I think you completely sidestepped what your opening seems to promise an elucidation of.

I want to note that I haven't made any claims about whether or not "non-manipulative communication" actually is or is not a literally correct phrase.  I've given almost no thought to it, which is why I was interested to read this post when I saw the headline on my RSS feeds.

 

The following is more of an aside or addendum that is unrelated to the previous part of my comment:

Even if all communication actually is manipulative, we may want to, almost tautologically, define the phrase to mean the type of communication you're describing.  This is sometimes a useful thing to do. I agree that the type of communication you describe is good and useful and something we should have in our toolbox.

I actually think I've got a pretty good grasp on what is meant by "non-manipulative communication", and I think it's an important and useful mode of communication for humans. As already mentioned, I've not really given the subject any thought, but as of right now, I don't think that phrase is a literally correct usage of the words "non-manipulative" and "communication".  

I also think that's OK.

Comment by Dustin on The ethics of breeding to kill · 2020-09-11T16:57:34.137Z · LW · GW
But, if we applied this model, what would make it unique to suicide and not to any other preference ?
And if you apply this model to any other preference and extent it to humans, things get really dystopian really fast.

I'm not sure it is unique to suicide, and regardless, I'd imagine we'd have to take it on a case-by-case basis because evolution is messy. I think whether it leads to dystopia or not is not a useful way to determine if it actually describes reality.

Regardless, the argument I'm trying to make is not that this model I described is the correct model, but that it's at least a plausible model, and that there are probably other plausible models. If there are such alternative plausible models, then you have to seriously engage with them before you can make a considered decision that the suicide rate is a good proxy for the value of animal life.

This is not really analogous, in that my example is "potential to reduce suffering" vs "obviously reducing suffering". A telescope is neither of those, it's working towards what I'd argue is more of a transcedent goal.

Yes, I agree that along that dimension it is not analogous. I was using it as an example of the fact that addressing more than one different issue is possible when the resources available are equal to or greater than the sum of resources required to address each issue.

I am also willing to acknowledge that it is at least *possible* some humans might benefit from actions that they don't consent to, but still I don't engage in those actions because I think it's preferable to treat them as agentic beings that can make their own choices about what makes them happy.

I think my point was that until you're willing to put a semblance of confidence levels on your beliefs, then you're making it easy to succumb to inconsistent actions.

How possible is it that we don't understand the mental lives of animals well enough to use the suicide argument? What are the costs if we're wrong? What are the costs if we forgo eating them?

Most of society has agreed that actually yes we should coerce some humans into actions that they don't consent to. See laws, prisons, etc. This is because we can look at individual cases, weigh the costs and benefits, and act accordingly. A generalized principle of "prefer to treat them as agentic beings with exceptions" is how most modern societies currently work. (How effective we are at that seems to vary widely...but I think most would agree that it's better than the alternative.)

Regardless, I'm not sure that arranging our food chain to lessen or eliminate the number of animals born to be eaten actually intersects with interfering with independent agents' abilities to self-determine. If it did, it seems like we are failing in a major way by not encouraging everyone to bring as many humans into existence as possible until we're all living at the subsistence level.

People mostly don't commit suicide just because they're living at such a level. Thus, I think by your argument, we are doing the wrong thing by not increasing the production of humans greatly. However, I think most people's moral intuitions cut against that course of action.

Comment by Dustin on The ethics of breeding to kill · 2020-09-08T18:56:03.663Z · LW · GW
I think it's fair to use suicide as a benchmark for when someone's life becomes miserable enough for them to end it.

Yes, but that's because it's a tautology!

I don't think I agree that suicide is a sufficient proxy for whether an entity enjoys life more than it dislikes life because I can imagine too many plausible, yet currently unknown mechanisms wherein there are mitigating factors. For example:

I imagine that there are mental processes and instincts in most evolved entities that add a significant extra prohibition against making the active choice to end their own life, and thus that mental ability has a much smaller role in suicide "decisions".

In the world where there is no built-in prohibition against ending your own life, if the "enjoys life" indicator is at level 10 and the "hates life" indicator is at level 11, then suicide is on the table.

In what I think is probably our world, when the "enjoys life" indicator is at level 10, the "hates life" indicator has to be at level 50.

What's more, it seems plausible to me that the value of this own-life-valuing indicator add-on varies from species to species and individual to individual.

If this holds true, then the own-life-valuing indicator add-on would only be there for a being that already exists.
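
Here's a minimal sketch of the toy model I'm describing, using the indicator levels from above (all of the numbers are illustrative, not empirical claims):

```python
# Toy threshold model: an evolved prohibition raises the bar for suicide.
PROHIBITION = 40  # assumed built-in penalty against choosing to end one's life

def suicide_on_the_table(enjoys: float, hates: float,
                         prohibition: float = PROHIBITION) -> bool:
    # Without the evolved prohibition, hates > enjoys would be enough.
    return hates > enjoys + prohibition

print(suicide_on_the_table(10, 11, prohibition=0))  # True: no built-in prohibition
print(suicide_on_the_table(10, 11))                 # False: our world, per the model
print(suicide_on_the_table(10, 51))                 # True: misery clears the offset
```

If something like this is right, a low suicide rate only tells you that "hates life" hasn't cleared a high bar, not that life is net-positive.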


This is not to say that we can certainly conclude that animals being farmed don't actually dislike life more than they enjoy it. This could certainly be the case, and they might just lack the reasoning to commit suicide.
...
Thus I fail to see a strong ethical argument against the eating of animals from this perspective.

Here you're seemingly willing to acknowledge that it's at least *possible* that animals dislike life more than they enjoy it. If I read you correctly and that is what you're acknowledging, then you would really need to compare the cost of that possibility being correct vs the cost of not eating meat before making any conclusion about the ethical state of eating animals.

Until then, the sanest choice would seem to be that of focusing our suffering-diminishing potential onto the beings that can most certainly suffer so much as to make their condition seem worst than death.

This seems to me similar to the arguments made akin to "why waste money on space telescopes (or whatever) when people are going hungry right here on earth?".

Neither reducing the suffering of beings that can most certainly suffer nor reducing the suffering of those that might be suffering seems likely to consume all of our suffering-diminishing potential. Maybe we can conclude that the likelihood of farm animals suffering in a way that we should care about is so low as to be worth absolutely no suffering-diminishing potential, but I don't think you've made that case.


In summary, I think the main critique I have of the line of argument presented in this post is that it hangs on suicide being a proxy for life-worth-living, and on suicide being equivalent to not having existed in the first place.

I don't think you've made a strong enough case that suicide is a sufficient measure of suffering-has-exceeded-the-cost-of-continuing-to-live. There are too many potential and plausible confounding factors. I think that the case needs to be really strong to outweigh the costs of being wrong.


(Hilariously, I'm not a vegan or a vegetarian.)

Comment by Dustin on Ice · 2020-09-06T19:23:27.911Z · LW · GW
It is my opinion that the possibility of catastrophic ice sheet collapse should be carefully considered and studied as a real possibility.

Is it not already? I kind of assumed it was already seriously considered and studied. I do not follow climate science very closely and mostly just read what comes across my RSS feeds on the subject. I've heard of the possibility of catastrophic ice sheet collapse a large number of times in the last...say...5 years.

  • What's the right amount of resources to expend on thinking about this?
  • Is my previous exposure to articles and people talking about the subject indicative of sufficient or insufficient interest and study of this possibility?
  • How do we assess the current amount of resources expended on the subject?
Comment by Dustin on Thiel on Progress and Stagnation · 2020-08-13T22:52:40.145Z · LW · GW

Maybe!

But, to be clear, I was responding to the claim that it was original thinking.

Comment by Dustin on [deleted post] 2020-08-06T19:51:10.632Z

I know the vagueness of this is going to be irritating, and I sincerely apologize up front. I'm not a very "hygienic" reader...aka, I don't do a good job of physically or mentally organizing the information I've consumed to easily reference it in the future.

I can't actually think of any exact posts or comments, but when I ask myself "what do I like about LW?", one of the answers I give myself is something along the lines of "not willing to just accept science or scientific conventional wisdom at face value". (It's also possible that the impression I've built over the past 10+ years is just confused...probably stemming from the aforementioned bad information hygiene.)

Eliezer posted at least once on something at least tangentially related...about how science can't save you or something like that. There's been posts or comment threads about vitamins and I think other health-related "stuff". Over the years, Scott Alexander has written bucking-the-science-establishment-on-X posts as well.

As I give it more thought, I also think of posts that were written from the standpoint where the poster was seemingly prepared to accept that science was wrong or even thought ahead of time that science was wrong, but after investigation found out that, yep, science was probably right. IIRC, the vitamins post I mentioned above was in that vein.

Comment by Dustin on [deleted post] 2020-08-06T17:08:17.900Z

gjm gave specific definitions of what he meant by "weirdness". I've yet to see you seriously engage on what he meant using the principle of charity and trying to figure out why you two were so far apart on this issue. That would be great to read and an effective way of convincing other people of your righteousness!

This willingness to engage is the core of good content on this site. Newcomers often have a hard time adjusting to this not-normal way of discussing issues.

As has been your wont in these threads, you almost immediately fall back on accusing whomever you're arguing with of being biased in some way and saying "nuh-uh".

Comment by Dustin on [deleted post] 2020-08-06T16:56:52.188Z

All in all, I find myself really disheartened by this whole saga since, 1) I find it, in the abstract, plausible that there are areas of modern science that have gone down the wrong road because the practitioners have misled themselves, 2) some of the best content for me on LW over the many years has been of the type that highlights such deficiencies, and 3) I can see no progress being made on resolving our disagreements here.

As such, I'm not sure how much more value we can get out of continuing these discussions. That really makes me sad since being willing to continually engage until disagreements are resolved is something I often enjoy.

Comment by Dustin on [deleted post] 2020-08-06T16:53:02.356Z

When someone makes several comments that are longer than the post itself, and when the reasoning is demonstrably fallacious

By this criterion, your original post is a gish gallop since it also included demonstrably fallacious statements.

On the other hand, we could take the charitable reading and say "maybe I don't understand the point they're trying to make and we should discuss it".

Comment by Dustin on [deleted post] 2020-08-06T16:47:10.825Z

Just to make it clear and explicit. I am not a scientist nor am I a member of the scientific establishment.

Comment by Dustin on [deleted post] 2020-08-05T23:15:57.072Z

When someone makes several comments that are longer than the post itself, and when the reasoning is demonstrably fallacious (weirdness criterion!?), I think it is fair to call the comment a gish gallop when that is the most economical way to express what happened.

You could have engaged on whether this was "demonstrably fallacious". That would have been interesting to read and I would've upvoted a good comment of this sort.

Again, you are the one who seems to be arguing in bad faith. It is very frustrating because LW has a long history of criticizing the practice of science, and it'd be interesting to see another good discussion in that vein.

Comment by Dustin on [deleted post] 2020-08-05T22:56:30.643Z

So I did that in this post, but then I was told by dustin that I've written something too glaringly obvious yet clearly incorrect and controversial.

No, I'm not qualified to gauge whether you are clearly incorrect. I am qualified to comment on whether you're making a convincing argument. Your arguments are not convincing largely because you do not really engage with people who question you.

The Ghost of Joseph Weber, the response was a series of gish gallops by gjm in which he argued that organizing random data according to a criteria called 'weirdness' was scientific. (It is not.)

And this is the problem. You could, for example, have a good and thorough discussion with gjm about this specific point. But you won't, and I find it disappointing.

Look, here's the deal for me:

  1. Bringing up that human bias could be the cause of a scientific result is neither sufficient nor necessary to negate that result...the bias is beside the point of whether they are right or not. You have to engage the results.
  2. Most people, no matter how smart, do not have the background, time, or energy to engage on specific points of the technical subjects you have raised in your series of posts. (Of note, this is why you would do better to focus on single, specific technical points rather than shotgunning a non-physics-expert audience with every single technical thing you think is wrong with advanced physics experiments.) (This is also why, to most observers, you are the one who started out with a gish gallop.)
  3. These technical points are the only thing you have to hang your hat on.
  4. gjm, to all appearances, seems to actually have the background to engage you on these points.
  5. Instead of engaging on any point gjm raised, you basically just dismissed all of them out of hand.
  6. Because of this, to an outsider of the field, you are now the one who looks like they have succumbed to unknown-to-us biases.
  7. As far as any outsider can tell there are a lot of plausible explanations for your position, and only one of them has to do with you being right...and you lowered my priors in the "this person is right about all of this physics stuff" explanation for your posts by rejecting engagement with the main person trying to engage you on a technical level.
  8. gjm could be full of shit. I don't know, but I do know that it doesn't seem like he's full of shit. I do know that a few of the factual things he brought up that I do have the background to check on...like him saying you were misquoting others...seemed spot on. Add on to that your refusal to engage, and you're obviously going to be in the position you're in now.
  9. You may very well be correct but you're doing us all a disservice by arguing your points poorly.
Comment by Dustin on [deleted post] 2020-08-04T23:50:28.715Z

I don't think you're saying anything here that longtime community members do not understand. Most here have discussed the basic human biases you're describing ad nauseam. The pushback you've received is not because we do not understand the biases you're describing. The pushback is sourced in disagreement with the idea that scientists are actually doing the things your analogies imply they are doing.

In this post you're just reasserting the things that people have disagreed with you about. I recommend directly addressing the points that people have brought up rather than ignoring them and restating your analogies. A brief perusal of what people have commented on your posts seems to show remarkably little effort by you to address any particular feedback other than to hand wave it away.

This is particularly the case when most people's priors are that the person disagreeing with the scientific establishment is the one who has a very strong burden of proof.

Comment by Dustin on Free Educational and Research Resources · 2020-07-31T03:16:29.790Z · LW · GW

I've been taking community college classes since I was like 15 years old (I'm now in my mid 40s) to learn skills for hobbies or just to satisfy curiosity. I really recommend it.

Comment by Dustin on What a 20-year-lead in military tech might look like · 2020-07-29T22:39:03.843Z · LW · GW

With aimbots you could shoot them down, but even an autoturret would probably only be able to take out 10 or so before they closed in on it and blew it up.

It doesn't seem unlikely to me, depending upon terrain, that an aimbotted CIWS-esque system would easily take out a 1000-unit swarm of drones. I'm curious about the reasoning that leads you to conclude otherwise.
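
For concreteness, here's the sort of back-of-envelope sketch behind my intuition (every number is an assumption I made up for illustration; none of it reflects real hardware specs):

```python
# Back-of-envelope: how many drones can a turret engage before a swarm
# closes the distance? All parameters are illustrative assumptions.
engagement_range_m = 2000  # assumed range at which the turret can start firing
drone_speed_mps = 30       # assumed closing speed (~108 km/h)
seconds_per_kill = 0.5     # assumed aim-and-destroy time per drone

time_to_close = engagement_range_m / drone_speed_mps  # ~66.7 seconds
kills_before_contact = time_to_close / seconds_per_kill

print(f"Time until the swarm arrives: {time_to_close:.1f} s")
print(f"Drones destroyed before contact: {kills_before_contact:.0f}")  # ~133
```

Even these made-up numbers give far more than 10 kills (though far fewer than 1000), and the answer swings wildly with the assumed range, speed, and kill rate, which is why I'm asking about your reasoning.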

Comment by Dustin on The Basic Double Crux pattern · 2020-07-22T17:29:07.887Z · LW · GW

In my experience, where Double Crux is easiest is also where it's the least interesting to resolve a disagreement, because usually such disagreements are either already fairly easily resolved or just uninteresting.

An inconveniently large portion of the time disagreements are so complex that the effort required to drill down to the real crux is just...exhausting. By "complex" I don't necessarily mean the disagreements are based upon some super advanced model of the world, but just that the real cruxes are hidden under so much human baggage.

This is related to a point I've made here before about Aumann's agreement theorem being used as a cudgel in an argument...in many of the most interesting and important cases it usually requires a lot of effort to get people on the same page, and the number of times when all participants in a conversation are willing to put in that effort seems vanishingly small.

In other words, double crux is most useful when all participants are equally interested in seeking truth. It's least useful in most of the real disagreements people have.

I don't think this is an indictment of double cruxin', but just a warning for someone who reads this and thinks "hot damn, this is going to help me so much".

Comment by Dustin on Thiel on Progress and Stagnation · 2020-07-21T02:06:45.071Z · LW · GW

I think Thiel is correct about much (most? all?) of these things, but I'm also very suspicious of the idea that most of it is original thinking.

Then again, it's not important enough to me to do any of the work of tracing the history of these ideas. Hopefully someone else cares enough to educate me.

Comment by Dustin on [deleted post] 2020-07-21T01:56:15.365Z

That is a way to make a rough estimate in the same way that providing the construction costs for a whole shopping mall is a way of providing a rough estimate of how much it costs for me to walk in the door of said mall.

In other words, there are too many unknowns and counterfactuals for that to even begin to be a useful way of calculating how much EHT cost.

In a way it's almost beside the point. You made the positive claim, seemingly without any solid facts, that it cost billions of dollars. When you were called on it, a way to increase the confidence of others in your arguments and presented facts would have been to say something like "you know, I shouldn't have left that in there, I withdraw that statement".

By not doing so and sticking to your guns you increase the weight others give to the idea that you're not being intellectually honest.

Your current tack might be useful in political rhetoric in some quarters, but it doesn't seem like it will be effective with your current audience.