Posts

DIY Transcranial Direct Current Stimulation. Who wants to go first? 2012-03-14T16:58:30.024Z · score: 14 (19 votes)
RAND Health Insurance Experiment critiques 2012-02-18T17:52:59.330Z · score: 5 (6 votes)

Comments

Comment by dustin on The ethics of breeding to kill · 2020-09-11T16:57:34.137Z · score: 9 (4 votes) · LW · GW
But, if we applied this model, what would make it unique to suicide and not to any other preference?
And if you apply this model to any other preference and extend it to humans, things get really dystopian really fast.

I'm not sure it is unique to suicide, and regardless I'd imagine we'd have to take it on a case by case basis because evolution is messy. I think whether it leads to dystopia or not is not a useful way to determine if it actually describes reality.

Regardless, the argument I'm trying to make is not that the model I described is the correct model, but that it's at least a plausible model, that there are probably other plausible models, and that if such alternative plausible models exist you have to seriously engage them before you can make a considered decision that the suicide rate is a good proxy for the value of animal life.

This is not really analogous, in that my example is "potential to reduce suffering" vs "obviously reducing suffering". A telescope is neither of those, it's working towards what I'd argue is more of a transcendent goal.

Yes, I agree that along that dimension it is not analogous. I was using it as an example of the fact that addressing more than one different issue is possible when the resources available are equal to or greater than the sum of resources required to address each issue.

I am also willing to acknowledge that it is at least *possible* some humans might benefit from actions that they don't consent to, but still I don't engage in those actions because I think it's preferable to treat them as agentic beings that can make their own choices about what makes them happy.

I think my point was that until you're willing to put a semblance of confidence levels on your beliefs, then you're making it easy to succumb to inconsistent actions.

How possible is it that we don't understand the mental lives of animals well enough to use the suicide argument? What are the costs if we're wrong? What are the costs if we forgo eating them?

Most of society has agreed that actually yes we should coerce some humans into actions that they don't consent to. See laws, prisons, etc. This is because we can look at individual cases, weigh the costs and benefits, and act accordingly. A generalized principle of "prefer to treat them as agentic beings with exceptions" is how most modern societies currently work. (How effective we are at that seems to vary widely...but I think most would agree that it's better than the alternative.)

Regardless, I'm not sure that arranging our food chain to reduce or eliminate the number of animals born to be eaten actually intersects with interfering with independent agents' abilities to self-determine. If it did, it seems like we are failing in a major way by not encouraging everyone to bring as many humans into existence as possible until we're all living at the subsistence level.

People mostly don't commit suicide just because they're living at such a level. Thus, I think by your argument, we are doing the wrong thing by not increasing the production of humans greatly. However, I think most people's moral intuitions cut against that course of action.

Comment by dustin on The ethics of breeding to kill · 2020-09-08T18:56:03.663Z · score: 7 (4 votes) · LW · GW
I think it's fair to use suicide as a benchmark for when someone's life becomes miserable enough for them to end it.

Yes, but that's because it's a tautology!

I don't think I agree that suicide is a sufficient proxy for whether an entity enjoys life more than it dislikes life because I can imagine too many plausible, yet currently unknown mechanisms wherein there are mitigating factors. For example:

I imagine that there are mental processes and instincts in most evolved entities that add a significant extra prohibition against making the active choice to end their own life, and thus that mental ability has a much smaller role in suicide "decisions".

In the world where there is no built-in prohibition against ending your own life, if the "enjoys life" indicator is at level 10 and the "hates life" indicator is at level 11, then suicide is on the table.

In what I think is probably our world, when the "enjoys life" indicator is at level 10, the "hates life" indicator has to be at level 50.

What's more, it seems plausible to me that the value of this own-life-valuing add-on varies from species to species and individual to individual.

If this holds true, then the own-life-valuing add-on would only be there for a being that already exists.
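
To make the toy model concrete, here is a minimal sketch in Python. Every number in it is made up purely for illustration; the only point is that a built-in prohibition term changes where the suicide threshold sits, and that term could plausibly differ by species and individual.

```python
# Toy model only -- every number here is an illustrative assumption, not data.

def suicide_on_the_table(enjoys_life, hates_life, built_in_prohibition):
    """Suicide only becomes 'on the table' once dislike of life exceeds
    enjoyment of life by more than the built-in prohibition."""
    return hates_life > enjoys_life + built_in_prohibition

# World with no built-in prohibition: 11 vs. 10 already puts suicide on the table.
print(suicide_on_the_table(enjoys_life=10, hates_life=11, built_in_prohibition=0))   # True

# What I think is probably our world: "hates life" has to climb to roughly 50
# before it overcomes the prohibition.
print(suicide_on_the_table(enjoys_life=10, hates_life=11, built_in_prohibition=40))  # False
print(suicide_on_the_table(enjoys_life=10, hates_life=51, built_in_prohibition=40))  # True
```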


This is not to say that we can certainly conclude that animals being farmed don't actually dislike life more than they enjoy it. This could certainly be the case, and they might just lack the reasoning to commit suicide.
...
Thus I fail to see a strong ethical argument against the eating of animals from this perspective.

Here you're seemingly willing to acknowledge that it's at least *possible* that animals dislike life more than they enjoy it. If I read you correctly and that is what you're acknowledging, then you would really need to compare the cost of that possibility being correct vs the cost of not eating meat before making any conclusion about the ethical state of eating animals.

Until then, the sanest choice would seem to be that of focusing our suffering-diminishing potential onto the beings that can most certainly suffer so much as to make their condition seem worse than death.

This seems to me similar to the arguments made akin to "why waste money on space telescopes (or whatever) when people are going hungry right here on earth?".

Reducing the suffering of beings that can most certainly suffer and reducing the suffering of those that might be suffering do not, together, seem likely to consume all of our suffering-diminishing potential. Maybe we can conclude that the likelihood of farm animals suffering in a way that we should care about is so low as to be worth absolutely no suffering-diminishing potential, but I don't think you've made that case.


In summary, I think the main critique I have of the line of argument presented in this post is that it hangs on suicide being a sufficient proxy for whether a life is worth living, and on that being equivalent to not having existed in the first place.

I don't think you've made a strong enough case that suicide is a sufficient measure of suffering-has-exceeded-the-cost-of-continuing-to-live. There are too many potential and plausible confounding factors. I think that the case needs to be really strong to outweigh the costs of being wrong.


(Hilariously, I'm not a vegan or a vegetarian.)

Comment by dustin on Ice · 2020-09-06T19:23:27.911Z · score: 11 (6 votes) · LW · GW
It is my opinion that the possibility of catastrophic ice sheet collapse should be carefully considered and studied as a real possibility.

Is it not already? I kind of assumed it was already seriously considered and studied. I do not follow climate science very closely and mostly just read what comes across my RSS feeds on the subject. I've heard of the possibility of catastrophic ice sheet collapse a large number of times in the last...say...5 years.

  • What's the right amount of resources to expend on thinking about this?
  • Is my previous exposure to articles and people talking about the subject indicative of sufficient or insufficient interest and study of this possibility?
  • How do we assess the current amount of resources expended on the subject?
Comment by dustin on Thiel on Progress and Stagnation · 2020-08-13T22:52:40.145Z · score: 1 (1 votes) · LW · GW

Maybe!

But, to be clear, I was responding to the claim that it was original thinking.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-06T19:51:10.632Z · score: 7 (3 votes) · LW · GW

I know the vagueness of this is going to be irritating, and I sincerely apologize up front. I'm not a very "hygienic" reader...aka, I don't do a good job of physically or mentally organizing the information I've consumed to easily reference it in the future.

I can't actually think of any exact posts or comments, but when I ask myself "what do I like about LW?", one of the answers I give myself is something along the lines of "not willing to just accept science or scientific conventional wisdom at face value". (It's also possible that the impression I've built over the past 10+ years is just confused...probably stemming from the aforementioned bad information hygiene.)

Eliezer posted at least once on something at least tangentially related...about how science can't save you or something like that. There have been posts or comment threads about vitamins and I think other health-related "stuff". Over the years, Scott Alexander has written bucking-the-science-establishment-on-X posts as well.

As I give it more thought, I also think of posts that were written from the standpoint where the poster was seemingly prepared to accept that science was wrong or even thought ahead of time that science was wrong, but after investigation found out that, yep, science was probably right. IIRC, the vitamins post I mentioned above was in that vein.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-06T17:08:17.900Z · score: 9 (4 votes) · LW · GW

gjm gave specific definitions of what he meant by "weirdness". I've yet to see you seriously engage on what he meant using the principle of charity and trying to figure out why you two were so far apart on this issue. That would be great to read and an effective way of convincing other people of your righteousness!

This willingness to engage is the core of good content on this site. Newcomers often have a hard time adjusting to this not-normal way of discussing issues.

As has been your wont in these threads, you almost immediately fall back to accusing whomever you're arguing with of being biased in some way and saying "nuh-uh".

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-06T16:56:52.188Z · score: 11 (5 votes) · LW · GW

All in all, I find myself really disheartened by this whole saga since, 1) I find it, in the abstract, plausible that there are areas of modern science that have gone down the wrong road because the practitioners have misled themselves, 2) some of the best content for me on LW over the many years has been of the type that highlights such deficiencies, and 3) I can see no progress being made on resolving our disagreements here.

As such, I'm not sure how much more value we can get out of continuing these discussions. That really makes me sad since being willing to continually engage until disagreements are resolved is something I often enjoy.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-06T16:53:02.356Z · score: 2 (2 votes) · LW · GW

When someone makes several comments that are longer than the post itself, and when the reasoning is demonstrably fallacious

By this criterion, your original post is a gish gallop since it also included demonstrably fallacious statements.

On the other hand, we could take the charitable reading and say "maybe I don't understand the point they're trying to make and we should discuss it".

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-06T16:47:10.825Z · score: 5 (3 votes) · LW · GW

Just to make it clear and explicit: I am not a scientist, nor am I a member of the scientific establishment.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-05T23:15:57.072Z · score: 12 (6 votes) · LW · GW

When someone makes several comments that are longer than the post itself, and when the reasoning is demonstrably fallacious (weirdness criterion!?), I think it is fair to call the comment a gish gallop when that is the most economical way to express what happened.

You could have engaged on whether this was "demonstrably fallacious". That would have been interesting to read and I would've upvoted a good comment of this sort.

Again, you are the one who seems to be arguing in bad faith. It is very frustrating because LW has a long history of criticizing the practice of science, and it'd be interesting to see another good discussion in that vein.

Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-05T22:56:30.643Z · score: 17 (9 votes) · LW · GW

So I did that in this post, but then I was told by dustin that I've written something too glaringly obvious yet clearly incorrect and controversial.

No, I'm not qualified to gauge whether you are clearly incorrect. I am qualified to comment on whether you're making a convincing argument. Your arguments are not convincing largely because you do not really engage with people who question you.

The Ghost of Joseph Weber, the response was a series of gish gallops by gjm in which he argued that organizing random data according to a criteria called 'weirdness' was scientific. (It is not.)

And this is the problem. You could, for example, have a good and thorough discussion with gjm about this specific point. But you won't, and I find it disappointing.

Look, here's the deal for me:

  1. Bringing up that human bias could be the cause of a scientific result is neither sufficient nor necessary to negate that result...the bias is beside the point of whether they are right or not. You have to engage the results.
  2. Most people, no matter how smart, do not have the background, time, or energy to engage on specific points of the technical subjects you have raised in your series of posts. (Of note, this is why you would do better to focus on single, specific technical points rather than shotgunning a non-physics-expert audience with every single technical thing you think is wrong with advanced physics experiments.) (This is also why, to most observers you are the one who started out with a gish gallop.)
  3. These technical points are the only thing you have to hang your hat on.
  4. gjm, to all appearances, seems to actually have the background to engage you on these points.
  5. Instead of engaging with the points gjm raised, you basically just dismissed all of them out of hand.
  6. Because of this, to an outsider of the field, you are now the one who looks like they have succumbed to unknown-to-us biases.
  7. As far as any outsider can tell there are a lot of plausible explanations for your position, and only one of them has to do with you being right...and you lowered my priors in the "this person is right about all of this physics stuff" explanation for your posts by rejecting engagement with the main person trying to engage you on a technical level.
  8. gjm could be full of shit. I don't know, but I do know that it doesn't seem like he's full of shit. I do know that a few of the factual things he brought up that I do have the background to check on, like him saying you were misquoting others, seemed spot on. Add on to that your refusal to engage, and you're obviously going to be in the position you're in now.
  9. You may very well be correct but you're doing us all a disservice by arguing your points poorly.
Comment by dustin on How Beliefs Change What We See in Starlight · 2020-08-04T23:50:28.715Z · score: 32 (15 votes) · LW · GW

I don't think you're saying anything here that longtime community members do not understand. Most here have discussed the basic human biases you're describing ad nauseam. The pushback you've received is not because we do not understand the biases you're describing. The pushback you've received stems from disagreement over whether scientists are actually doing the things that your analogies imply they are doing.

In this post you're just reasserting the things that people have disagreed with you about. I recommend directly addressing the points that people have brought up rather than ignoring them and restating your analogies. A brief perusal of what people have commented on your posts seems to show remarkably little effort by you to address any particular feedback other than to hand wave it away.

This is particularly the case when most people's priors are that the person disagreeing with the scientific establishment is the one who has a very strong burden of proof.

Comment by dustin on Free Educational and Research Resources · 2020-07-31T03:16:29.790Z · score: 5 (4 votes) · LW · GW

I've been taking community college classes since I was like 15 years old (now in mid 40s) to learn skills for hobbies or just satisfy curiosity. I really recommend it.

Comment by dustin on What a 20-year-lead in military tech might look like · 2020-07-29T22:39:03.843Z · score: 3 (2 votes) · LW · GW

With aimbots you could shoot them down, but even an autoturret would probably only be able to take out 10 or so before they closed in on it and blew it up.

It doesn't seem unlikely to me, depending upon terrain, that an aimbotted CIWS-esque system would easily take out a 1000-unit swarm of drones. I'm curious about the reasoning that leads you to conclude otherwise.
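
For what it's worth, here's the sort of back-of-the-envelope arithmetic behind my intuition. Every parameter (detection range, drone speed, sustained kill rate) is an assumption I'm inventing purely for illustration; the point is just how sensitive the "10 or so" figure is to them, and that detection range is the terrain-dependent part.

```python
# Back-of-the-envelope sketch only; every parameter below is a made-up assumption.

def drones_downed(detection_range_m, drone_speed_mps, kills_per_second, swarm_size):
    """Rough count of drones an automated turret could down before the swarm
    closes the distance, under the assumed parameters."""
    time_to_close_s = detection_range_m / drone_speed_mps
    return min(swarm_size, int(time_to_close_s * kills_per_second))

# Assumed scenario: swarm detected at 2 km, closing at 20 m/s, turret sustaining 2 kills/second.
print(drones_downed(detection_range_m=2000, drone_speed_mps=20,
                    kills_per_second=2, swarm_size=1000))  # 200

# Same turret, but terrain limits detection to 500 m.
print(drones_downed(detection_range_m=500, drone_speed_mps=20,
                    kills_per_second=2, swarm_size=1000))  # 50
```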

Comment by dustin on The Basic Double Crux pattern · 2020-07-22T17:29:07.887Z · score: 4 (3 votes) · LW · GW

In my experience, the disagreements where Double Crux is easiest are also the least interesting ones to resolve, because they are usually either already fairly easily resolved or just uninteresting.

An inconveniently large portion of the time disagreements are so complex that the effort required to drill down to the real crux is just...exhausting. By "complex" I don't necessarily mean the disagreements are based upon some super advanced model of the world, but just that the real cruxes are hidden under so much human baggage.

This is related to a point I've made here before about Aumann's agreement theorem being used as a cudgel in an argument...in many of the most interesting and important cases it usually requires a lot of effort to get people on the same page and the number of times where all participants in a conversation are willing to put in that effort seems vanishingly small.

In other words, double crux is most useful when all participants are equally interested in seeking truth. It's least useful in most of the real disagreements people have.

I don't think this is an indictment of double cruxin', but just a warning for someone who reads this and thinks "hot damn, this is going to help me so much".

Comment by dustin on Thiel on Progress and Stagnation · 2020-07-21T02:06:45.071Z · score: 2 (2 votes) · LW · GW

I think Thiel is correct about much (most? all?) of these things, but I'm also very suspicious of the idea that most of it is original thinking.

Then again, it's not important enough to me to do any of the work of tracing the history of these ideas. Hopefully someone else cares enough to educate me.

Comment by dustin on The Ghost of Joseph Weber · 2020-07-21T01:56:15.365Z · score: 2 (2 votes) · LW · GW

That is a way to make a rough estimate in the same way that providing the construction costs for a whole shopping mall is a way of providing a rough estimate of how much it costs for me to walk in the door of said mall.

In other words, there are too many unknowns and counterfactuals for that to even begin to be a useful way of calculating how much EHT cost.

In a way it's almost beside the point. You made the positive claim, seemingly without any solid facts, that it cost billions of dollars. When you were called on it, a way to increase the confidence of others in your arguments and the facts you present would be to say something like "you know, I shouldn't have left that in there, I withdraw that statement".

By not doing so and sticking to your guns you increase the weight others give to the idea that you're not being intellectually honest.

Your current tack might be useful in political rhetoric in some quarters, but it doesn't seem like it will be effective with your current audience.

Comment by dustin on Criticism of some popular LW articles · 2020-07-19T04:24:50.198Z · score: 6 (4 votes) · LW · GW

A couple of initial thoughts I had whilst reading this. Take these more as musings on my state of mind than as critiques or corrections.

Without some more formal structure in place, the nature of which I'm unaware, I am not able to "assess" content for correctness or usefulness.

I find this curiously foreign to my default mode of thinking when reading on LW and elsewhere. It is not uncommon for me to find myself thinking "that seems wrong" and "that seems right" within a single paragraph of content from writers I think are the "rightest". On the other hand, I usually do not feel as confident about my assessment in either direction as you seem to be in your post.

That being said...

My reaction to rationalist content is governed by my frame of mind.

I assume this to be the case with all content, and I've always assumed it holds for everyone; it hadn't occurred to me to think of rationalist content as different in this way, but seeing you state it "out loud" makes me think maybe I should have.

Comment by dustin on The Ghost of Joseph Weber · 2020-07-19T03:08:32.194Z · score: 3 (2 votes) · LW · GW

So, you seem to continue to use a rhetorical device wherein you do not directly address the points that your interlocutors are bringing up and just answer the question you wish had been asked.

For example, this comment I'm replying to here has almost zero bearing on what I said. Saying EHT is bad is not a way to address the argument that EHT did not cost billions of dollars. EHT may very well be bad, but that has no bearing on the subject at hand.

In your previous comment to me in this thread you did the same thing.

Comment by dustin on The Ghost of Joseph Weber · 2020-07-15T20:12:35.659Z · score: 8 (7 votes) · LW · GW

Since you seemingly can neither defend nor withdraw your claim that EHT cost billions of dollars, a reasonable person can only assume that the rest of the factual content of your post is suspect.

Comment by dustin on The Ghost of Joseph Weber · 2020-07-14T20:55:02.693Z · score: 4 (4 votes) · LW · GW

I'm not arguing that the telescopes are useless

It did not seem like you were making such an argument, nor was I asserting that you were making such an argument.

The telescope could have cost umpteen trillions of dollars and that fact alone would not support your claim that EHT cost billions of dollars.

I'm not sure how to understand the fact that the previous statement is obvious and yet you still made your comments. I feel like the most charitable interpretation that I can come up with still does not leave a good impression of your overall argument.

I'm not harping on this apparent mistake for no reason. It's just that of all the things described by gjm this seems like it might be the easiest to explicate.

Comment by dustin on The Ghost of Joseph Weber · 2020-07-14T19:11:51.615Z · score: 1 (1 votes) · LW · GW

It's unclear whether you're claiming (a) that you have actual figures showing the EHT actually cost billions of dollars, (b) that you think it's likely, but just a guess, that it kept all those radio telescopes "in business", or (c) that you're taking back your claim that it cost billions of dollars.

Comment by dustin on Types of Knowledge · 2020-06-20T19:56:43.930Z · score: 4 (3 votes) · LW · GW

First, an apology, as this comment is going to be frustratingly lacking in concrete examples. I have the kernel of an idea, but it would require more thought than I'm willing to put into it to expand it. I post it to get it out of my head and in case maybe someone else will want to think about it more...

I kind of understand the categories you're trying to carve out, but I'm also leery of them. It feels like your descriptions of the categories make hidden assumptions about meanings, and a person could trick themselves.

I'd have to think about it a lot more to really pin down the ephemeral idea I'm trying to get at, but it's similar to the observation I've made here before that Sherlock Holmes's maxim "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." is dangerous because it's too easy to convince yourself that you've explored all the possible explanations.

In a similar manner, your description of why something should be considered engineering or scientific knowledge feels as if a person could convince themselves, without realizing it, that a thing belongs in one category or another even though, from an objective standpoint, a rational argument could be made for it appearing in either category.

It also feels as if many things will switch between categories depending upon your priors. Your system as stated seems like it would put reciting the steps to make bread and actually making bread with those steps into separate categories. As an avid bread maker, I'm currently unconvinced of the utility of that split. I guess I would ask: what is the goal of putting those two things into separate categories? What do you hope to get out of doing so?

On the other hand, I'm also unconvinced that a category system has to be so rigorous to be useful. It might be that a category system can be just rigorous enough to help a person...but, like I said, I'm leery that it will lead a person astray from their goals of using such a system without them realizing it.

Comment by dustin on Why isn’t assassination/sabotage more common? · 2020-06-07T22:00:31.879Z · score: 1 (1 votes) · LW · GW

It seems more likely to me that modern technology has made it harder for someone to become a leader even if there are people who have decided to act as such. It does not seem likely to me that there are no outspoken people who want to be leaders or that they are, in general, afraid of assassination.

Take the realm of elected political leaders. By the very nature of this realm there is just one person of focus for each campaign, and I'm not under the impression that there is a dwindling number of campaigns for political office...a position that has been under threat of assassination throughout history.

Comment by dustin on Why isn’t assassination/sabotage more common? · 2020-06-05T02:15:05.933Z · score: 3 (2 votes) · LW · GW

Was that actually the plan or just a post facto explanation? My prior would be that this happened because of the organizing mechanisms of the day (internet vs in-person meeting of the past).

Comment by dustin on Running Wired Ethernet · 2020-05-14T06:51:20.806Z · score: 0 (2 votes) · LW · GW

Just so you know, the crawlspace is where every dropped nail ends up during construction. Some contractors do a better job than others at cleaning that up.

Comment by dustin on Studies On Slack · 2020-05-14T00:47:59.165Z · score: 0 (3 votes) · LW · GW

I can only assume you aren't aware that there are many readily available discussions about why Behe's irreducible complexity doesn't hold water.

To have any chance of making any headway with the argument you seem to be attempting here, you're going to have to seriously engage with the large quantity of work that is a retort to the irreducible complexity thesis.

Imagine you're in a world where it's not immediately obvious that a structure built of brick is more resistant to fire than a structure built of straw. There's been lots of discussion back and forth for generations about the relative merits of brick vs straw.

There's a famous expert in brick structures named Fred, and everyone on both sides of the debate is aware of Fred. Fred has written a book that brick people think makes it obvious that brick buildings are the best. The straw people have many and varied reasons that they think prove Fred is wrong.

Now, you're interested in helping the straw people see the light. You have an opportunity to talk to a room full of straw people. You want to convince them that brick structures are the best. You're not interested in a tribal fight about brick vs straw, you want to actually persuade and convince.

Would your opening gambit be to say, "Brick structures are the best because Fred says so! It's so obvious!"? No, of course not! The most reasonable approach would be to engage with the already extensive discussion the straw people have around Fred's ideas.

Comment by dustin on Running Wired Ethernet · 2020-05-13T23:09:07.268Z · score: 1 (1 votes) · LW · GW

Those bare feet in a crawlspace make me nervous!

Comment by dustin on Why I'm Not Vegan · 2020-04-12T00:46:13.673Z · score: 2 (2 votes) · LW · GW

I could imagine so would a lot of non-rationalist meat eaters

Maybe your imagination accurately reflects reality, maybe not, but it's certainly not incongruent with enough people holding the viewpoint(s) that make jkaufman's stance not-unusual.

The average person's revealed preferences seem to assign close to zero weight to animal suffering.

On the other hand, we could make the argument that we should compare jkaufman's position to what I would assume to be the tiny minority of people who have given any substantial amount of thought to veganism and animal suffering.

In that case, I would agree that it is likely that he is unusual.

Comment by dustin on [U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government · 2020-04-12T00:39:21.214Z · score: 1 (1 votes) · LW · GW

I decided against applying for either of these. I'm self-employed with no other employees and I haven't currently lost any income. I may or may not in the coming months. I'm worried about the repercussions if I apply for this, accept the money, and then end up not actually needing it.

Comment by dustin on Why I'm Not Vegan · 2020-04-09T21:36:58.279Z · score: 3 (2 votes) · LW · GW

Given the rate of veganism, I'm not sure "unusual" would apply to jkaufman in either case.

Comment by dustin on Research on repurposing filter products for masks? · 2020-04-06T18:57:20.454Z · score: 1 (1 votes) · LW · GW

Agreed.

I'm just hoping that they can give the OP some information about using HEPA filters.

I've noticed that many N95 masks also have an exhaust valve.

Comment by dustin on Research on repurposing filter products for masks? · 2020-04-03T23:14:36.589Z · score: 4 (3 votes) · LW · GW

I don't have an answer, but maybe you can reach out to the people at MUSC who designed this: https://web.musc.edu/innovation/covid-19-innovation

Comment by dustin on The questions one needs not address · 2020-03-23T19:31:03.169Z · score: 1 (1 votes) · LW · GW

I'm having a hard time understanding how your comment is applicable to mine. AFAICT, definitions have little to do with my comment.

Comment by dustin on The questions one needs not address · 2020-03-21T23:55:11.376Z · score: 2 (2 votes) · LW · GW

It’s the sort of act that says “Yeah, we made this awe-inspiring thing, but we really owe it to thousands of past generations. None of us can fully comprehend how we managed to do this, so let’s dedicate its highest floor to something transcendent, something that symbolizes the beautiful, impossible and absurd experiment that made it possible, our society”.

It feels like this is something you're reading into it, more than what the people creating it necessarily thought. You can easily tell a bunch of other just-as-plausible stories about it that are not nearly as positive-sounding.

Comment by dustin on Importing masks from China · 2020-03-21T01:27:18.259Z · score: 3 (3 votes) · LW · GW

For plenty of stuff, the fact that it's fake, a cheap copy, or just crappy in general isn't that much of an issue.

For example, aliexpress is widely used by hobbyists for cheap electronics components like sensors and microcontrollers.

Comment by dustin on How are people tracking confirmed Coronavirus cases / Coronavirus deaths? · 2020-03-08T01:20:01.691Z · score: 1 (1 votes) · LW · GW

I use Home Assistant for my home automation needs. It has a coronavirus sensor which pulls from the Johns Hopkins data. I then do two things with that data:

1. I have HA configured to send notifications to my phone when deaths and confirmed cases change by X% (a rough sketch of this logic is below).

2. I use the influxdb integration with HA to ship the sensor data to...influxdb. I graph that data with Grafana.

covid-19 graph
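
For anyone curious about the percent-change logic in step 1, here is a rough, stand-alone Python sketch of the same idea. The threshold and the notify() stand-in are illustrative assumptions; my actual setup just uses Home Assistant's coronavirus sensor and its notification mechanism rather than a script like this.

```python
# Rough sketch of "notify me when the count changes by X%"; the threshold and
# notify() stand-in are illustrative assumptions, not my actual HA configuration.

THRESHOLD_PCT = 5.0  # assumed X%

def notify(message):
    # Stand-in for a real notification channel (phone push, email, etc.)
    print(f"NOTIFY: {message}")

def check_for_change(previous, current, label):
    """Send a notification when the value has changed by more than THRESHOLD_PCT."""
    if previous == 0:
        return
    pct_change = (current - previous) / previous * 100
    if abs(pct_change) >= THRESHOLD_PCT:
        notify(f"{label} changed {pct_change:+.1f}% ({previous} -> {current})")

# Example with made-up numbers:
check_for_change(previous=1000, current=1080, label="confirmed cases")  # triggers (+8.0%)
check_for_change(previous=500, current=510, label="deaths")             # below threshold, no notification
```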

Comment by dustin on Quarantine Preparations · 2020-03-03T00:29:17.922Z · score: 2 (2 votes) · LW · GW

I'm interested in the mask recommendation.

Not that I've done a ton of research but most of the articles I've come across have been neutral-to-negative about masks.

Here's one of the articles I just came across today:

https://www.consumerreports.org/coronavirus/do-you-need-a-mask-to-prevent-coronavirus/


edit: Though, I guess maybe the idea of getting some is to protect others if you have symptoms rather than protecting yourself from getting infected.

Comment by dustin on How much delay do you generally have between having a good new idea and sharing that idea publicly online? · 2020-02-22T23:38:45.671Z · score: 3 (3 votes) · LW · GW

I'll ship it when it's ready.

If I have a good idea about how to tie my shoelaces I'll share it immediately!

If I have a good idea about a foundational change in western philosophy it will take me years.

Comment by dustin on Wanting More Intellectual Stamina · 2020-02-18T18:51:41.627Z · score: 2 (2 votes) · LW · GW

I used to be terribly distracted by video games.

I can't pinpoint the exact thing that happened that let me really cut down on that, but here are some things I did that all seemed to lead to my current state of playing video games 0-3 hours per week:

1. Uninstalled all games and game distribution services.

2. Downgraded my internet connection to something that makes downloading games take a long time. (I've since upgraded my internet connection but haven't had a "relapse")

3. Unsubscribed from video gaming RSS feeds.

4. Gotten older!

Comment by dustin on Don't Double-Crux With Suicide Rock · 2020-01-01T21:09:01.287Z · score: 7 (2 votes) · LW · GW

I just want to spring off of this to point out something about Aumann's agreement theorem. I often see it used as a kind of cudgel because people miss an important aspect.

It can take us human beings time and effort to converge on a view.

Oftentimes it's just not worth it to one or more of the participants to invest that time and effort.

Comment by dustin on Circling as Cousin to Rationality · 2020-01-01T02:00:15.698Z · score: 10 (6 votes) · LW · GW

A slight bit of style critique: I spent the first half of the post thinking "why does he keep capitalizing the word circle and using it this way?". I've literally never heard of this.

It's possible that I'm just way out of the norm here. I don't live in a rationalist hub of activity, but I do read a good portion of LW and a few related blogs.

Comment by dustin on Programmers Should Plan For Lower Pay · 2019-12-29T19:29:12.544Z · score: 1 (1 votes) · LW · GW

I handle this uncertainty via diversification.

I've dumped portions of my income into purchasing and building rental properties.

Comment by dustin on Defining "Antimeme" · 2019-12-27T00:08:51.217Z · score: 4 (3 votes) · LW · GW
The typical response to encountering a regular meme is to assign a truth value to it via rationality.

This seems...iffy.

Comment by dustin on Tapping Out In Two · 2019-12-06T00:32:22.044Z · score: 3 (3 votes) · LW · GW

As I've gotten older, I've become more and more fine with just leaving an internet argument. The *other guy is winning* feeling is 95% gone.

Now, that doesn't mean I won't make a comment or two, but if the other person wants to get into a back and forth that seems like it's going to be draining I'm completely fine with just not participating anymore.

All this to say: maybe it'll get easier for some?

(FWIW, I'm ~40 and I've been arguing on the internet since I was 12-ish)

Comment by dustin on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-04T20:06:46.381Z · score: 1 (1 votes) · LW · GW

Isn't that just the price of an electric car right now? Won't they be vastly cheaper in the future?

Comment by dustin on Aella on Rationality and the Void · 2019-11-01T19:46:35.718Z · score: 1 (1 votes) · LW · GW

I've never gone on a trip, but I always find descriptions of the experience puzzling. The various things that people describe seem like things I "do" myself when I put my mind to it.

This confuses me as either people are bad at describing what the experience is like or I'm different from people who write about their experiences on LSD.

edit: To be clear, people generally note that it's difficult to put into language what the experience is like, so when I say people are bad at describing the experience, I don't believe this to be an accountable failure on the explainer's part.

Comment by dustin on A simple sketch of how realism became unpopular · 2019-10-12T15:50:07.176Z · score: 3 (2 votes) · LW · GW

I can see this being true, but I'm not entirely convinced.

I have no background in philosophy. I don't read philosophy other than occasionally dipping into LW.

Of course, there exists the possibility that occasional dipping into LW has been enough, or that the necessary mental rigor has just seeped into the general populace over the intervening few hundred years.

Also, I'm not sure "anyone in 1710" is the right comparison. More like "people thinking about philosophy in 1710".

Of course, that is likely what you meant, but I think the less precise wording you used makes your argument seem a lot more convincing, so I think it's important to point out the distinction.

To be clear, I'm not arguing that actually I am a ninja of philosophy. I'm just saying that your point doesn't necessarily make me less confused.

Comment by dustin on A simple sketch of how realism became unpopular · 2019-10-12T00:27:26.808Z · score: 8 (6 votes) · LW · GW
The error here is mixing up what falls inside vs. outside of quotation marks. "I'm conceiving of a not-conceivable object" is a formal contradiction, but "I'm conceiving of the concept 'a not-conceivable object'" isn't, and human brains and natural language make it easy to mix up levels like those.

I immediately saw this mistake *while reading the text of the mistake.*

So, now I'm confused. Am I a master ninja of philosophy? Are you misrepresenting the level of people's confusion about this? Have I been arguing on the internet for 3 decades and thus I'm just hypersensitive to language/text-based mistakes? Are we both wrong that this is a mistake?

Comment by dustin on What is category theory? · 2019-10-06T17:46:32.341Z · score: 4 (3 votes) · LW · GW

I'm curious if being a programmer is helpful to understanding category theory in the same way as having "a few mathematical structures under your belt" is.