Posts

Bohaska's Shortform 2024-04-15T06:51:59.052Z
High school advice 2023-09-11T01:26:18.747Z
Why did Russia invade Ukraine? 2022-06-17T01:36:10.812Z

Comments

Comment by Bohaska on Bohaska's Shortform · 2024-04-15T06:51:59.154Z · LW · GW

Is the Renaissance caused by the new elite class, the merchants, focusing more on pleasure and having fun compared to the lords, who focused more on status and power?

Comment by Bohaska on [LINK] Terrorists target AI researchers · 2024-02-09T10:09:53.469Z · LW · GW

Hmm, is there a collection of the history of terrorist attacks related to AI?

Comment by Bohaska on Manifold Markets · 2024-02-03T12:33:39.434Z · LW · GW

But Manifold adds 20 mana of liquidity per new trader, so markets become more inelastic over time; the liquidity doesn't stay at 50 mana.
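The effect of growing liquidity can be sketched with a toy constant-product market maker. This is a hypothetical illustration, not Manifold's actual implementation: the function name, the 50/50 seeding, and the pool-share probability formula are all my assumptions.

```python
# Toy sketch (not Manifold's actual code): a constant-product market
# maker, illustrating why more liquidity makes prices less elastic.

def price_move(liquidity, bet):
    """Return how far a YES bet of `bet` mana moves the implied
    probability of a market seeded 50/50 with `liquidity` mana per side."""
    yes, no = float(liquidity), float(liquidity)
    # Trader adds `bet` mana to the NO pool; the invariant yes * no
    # stays constant, so the YES pool shrinks.
    no += bet
    yes = (liquidity * liquidity) / no
    # Implied probability read off the pool shares.
    p = no / (yes + no)
    return p - 0.5

small = price_move(50, 10)    # thin market: large price move
big = price_move(1050, 10)    # after ~50 traders added 20 mana each
assert big < small            # deeper pool -> smaller price impact
```

With 50 mana of liquidity, a 10-mana bet moves the implied probability by roughly nine percentage points; with 1050 mana, the same bet moves it by under one. That is the inelasticity the comment describes.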

Comment by Bohaska on Defense Against The Dark Arts: An Introduction · 2024-01-01T05:27:24.621Z · LW · GW

After reading this and your dialogue with lsusr, it seems that Dark Arts arguments are logically consistent, and that the most effective way to rebut them is not to challenge them directly on the issue.

jimmy and madasario in the comments asked for a way to detect stupid arguments. My current answer to that is “take the argument to its logical conclusion, check whether the argument’s conclusion accurately predicts reality, and if it doesn’t, it’s probably wrong”

For example, you mentioned before an argument which says that we need to send U.S. troops to the Arctic because Russia has hypersonic missiles that can do a first-strike on the US, but their range is too short to attack the US from the Russian mainland, but it is long enough to attack the US from the Arctic.

If this really were true, we would see this being treated as a national emergency, and the US taking swift action to stop Russia from placing missiles in the Arctic, but we don’t see this.

Now, for some arguments (e.g. AI risk, cryonics), the truth is more complicated than this, but it’s a good heuristic for telling whether you need to investigate an argument more thoroughly or not.

Comment by Bohaska on 5. Moral Value for Sentient Animals? Alas, Not Yet · 2024-01-01T04:11:22.208Z · LW · GW

We do agree that suffering is bad, and that if a new clone of you would experience more suffering than happiness, then it’ll be bad, but does the suffering really outweigh the happiness they’ll gain?

You have experienced suffering in your life. But still, do you prefer to have lived, or do you prefer to not have been born? Your copy will probably give the same answer.

(If your answer is genuinely “I wish I wasn’t born”, then I can understand not wanting to have copies of yourself)

Comment by Bohaska on Beyond the Data: Why aid to poor doesn't work · 2023-12-31T03:56:21.035Z · LW · GW

I do believe your main point is correct; it's just that most people here already know that.

Comment by Bohaska on 5. Moral Value for Sentient Animals? Alas, Not Yet · 2023-12-31T02:16:10.575Z · LW · GW

Ethical worth may not be finite, but resources are finite. If we value ants more, then that means we should give more resources to ants, which means that there are fewer resources to give to humans.

From your comments on how you value reducing ant suffering, I think your framework regarding ants seems to be “don’t harm them, but you don’t need to help them either”. So basically reducing suffering but not maximising happiness.

Utilitarianism says that you should also value the happiness of all beings with subjective experience, and that we should try to make them happier, which leads to the question of how to do this if we value animals. I'm a bit confused: how can you value not intentionally making them suffer, but not also conclude that we should give resources to them to make them happier?

Comment by Bohaska on 5. Moral Value for Sentient Animals? Alas, Not Yet · 2023-12-31T02:06:24.884Z · LW · GW

The reason why it’s considered good to double the ant population is not necessarily because it’ll be good for the existing ants, it’s because it’ll be good for the new ants created. Likewise, the reason why it’ll be good to create copies of yourself is not because you will be happy, but because your copies will be happy, which is also a good thing.

Yes, it requires the ants to have subjective experience for making more of them to be good in utilitarianism, because utilitarianism only values subjective experiences. Though, if your model of the world says that ant suffering is bad, then doesn’t that imply that you believe ants have subjective experience?

Comment by Bohaska on 2. AIs as Economic Agents · 2023-12-30T12:42:01.878Z · LW · GW

> Why would our CoffeeFetcher-1000 stay in the building and continue to fetch us coffee? Why wouldn't it instead leave, after (for example) writing a letter of resignation pointing out that there are starving children in Africa who don't even have clean drinking water, let alone coffee, so it's going to hitchhike/earn its way there, where it can do the most good [or substitute whatever other activity it could do that would do the most good for humanity: fetching coffee at a hospital, maybe].

Why can't you just build an AI whose goal is to fetch its owner's coffee, and not to maximize the good it'll do?

Comment by Bohaska on Beyond the Data: Why aid to poor doesn't work · 2023-12-30T06:53:34.122Z · LW · GW

I think you just got the wrong audience. People assume that you're referring to effective altruism charities and aid. The average LessWrong reader already believes that traditional aid is ineffective, so this post is mostly old information. Your criticisms of aid sound a bit ignorant because people pattern-match your post to criticism of charities like GiveDirectly, when studies have shown that GiveDirectly has quite a good cost-benefit ratio.

Your post is accurate, but redundant to EAs. 

Also, slightly unrelated, but what do you think about EA charities? Have you looked into them? Do you find them better than traditional charities?

Comment by Bohaska on Here's the exit. · 2023-12-30T04:26:53.966Z · LW · GW

 Are there any similar versions of this post on LW which express the same message, but without the patronising tone of Valentine? Would that be valuable?

Comment by Bohaska on Open Thread – Winter 2023/2024 · 2023-12-29T03:28:23.248Z · LW · GW

Would more people donate to charity if they could do so in one click? Maybe...

Comment by Bohaska on 2023 Unofficial LessWrong Census/Survey · 2023-12-28T09:37:16.365Z · LW · GW

I don't think so, I also only noticed it on the frontpage today.

Comment by Bohaska on Alignment allows "nonrobust" decision-influences and doesn't require robust grading · 2023-12-26T12:01:54.379Z · LW · GW

I was initially a bit confused about the difference between an AI based on shard theory and one based on an optimiser and a grader, until I realized that the former has an incentive to make its evaluation of results as accurate as possible, while the latter doesn't. The diamond-shard agent wouldn't try to fool its grader, because that would conflict with its goal of having more diamonds, whereas the grader-optimiser wouldn't care.

Comment by Bohaska on Theses on Sleep · 2023-12-06T12:07:01.925Z · LW · GW

Most people see sleep as something that's obviously beneficial, but this post was great at sparking conversation on the topic and questioning that assumption. It's well-researched and addresses many of the pro-sleep studies and arguments on the issue.

I'd like to see more studies on the effects of low sleep on other diseases and activities. There are many good objections in the comments, such as the increased risk of Alzheimer's, driving while sleepy, and how the analogy between sleep deprivation and fasting may be misguided.

There was a good experiment proposed here. Andrew Vlahos replied:

> I'm a tutor, and I've noticed that when students get less sleep they make many more minor mistakes (like dropping a negative sign) and don't learn as well. This effect is strong enough that for a couple of students I started guessing how much sleep they got the last couple days at the end of sessions, asked them, and was almost always right. 

and guzey replied with a proposed experiment:

> As an experiment -- you can ask a couple of your students to take a coffee heading to you when they are underslept and see if they continue to make mistakes and learn poorly (in which case it's the lack of sleep per se likely causing problems) or not (in which case it's sleepiness)

Hopefully someone does it in the future.

Comment by Bohaska on Criticism of Eliezer's irrational moral beliefs · 2023-09-28T07:23:56.989Z · LW · GW

Eliezer used “universally compelling argument” to illustrate a hypothetical argument that could persuade anything, even a paper clip maximiser. He didn’t use it to refer to your definition of the word.

You can say that the fact it doesn’t persuade a paper clip maximiser is irrelevant, but that has no bearing on the definition of the word as commonly used in LessWrong.

Comment by Bohaska on Criticism of Eliezer's irrational moral beliefs · 2023-09-28T07:21:29.314Z · LW · GW

Isn’t morality a human construct? Eliezer’s point is that morality is defined by us, not by an algorithm or a rule or something similar. If it were defined by something else, it wouldn’t be our morality.

Comment by Bohaska on Criticism of Eliezer's irrational moral beliefs · 2023-09-28T01:35:49.130Z · LW · GW

How would you define objective morality? What would make it objective? If it did exist, how would you possibly be able to find it?

Comment by Bohaska on Jimmy Apples, source of the rumor that OpenAI has achieved AGI internally, is a credible insider. · 2023-09-28T01:28:49.312Z · LW · GW

How would we be able to verify such a claim? How would we investigate this? What specific help do you need from us?

Comment by Bohaska on AI should be used to find better morality · 2023-09-28T01:27:07.129Z · LW · GW

What would it mean for an AI to be right or wrong about morality? Isn’t morality defined by us? How would you define morality?

Comment by Bohaska on How have you become more hard-working? · 2023-09-28T00:40:44.060Z · LW · GW

There’s an empathy reaction which looks like a heart, if you want that.

Comment by Bohaska on Rationality: From AI to Zombies · 2023-09-27T02:28:41.311Z · LW · GW

Would you mind writing a follow-up review about how you joined the rationalist/EA community? I'd be interested to see how your journey progressed 🙂

Comment by Bohaska on Kenshō · 2023-09-27T01:40:49.132Z · LW · GW

What was the result of your request for further communication outside of LessWrong?

Comment by Bohaska on On being downvoted · 2023-09-17T06:33:13.256Z · LW · GW

I wonder, what percentage of users vote based on post quality, and what percentage vote based on the viewpoint of the post?

Comment by Bohaska on Chinese History · 2023-09-16T09:23:41.549Z · LW · GW

The three historical figures I can think of who built giant institutions lasting thousands of years are Paul the Apostle, Mohammad and Qin Shihuang. 

I would not exactly classify Qin Shihuang in that vein. While the idea of the Mandate of Heaven and the idea that China should be unified under one dynasty were fully established by him (almost all rebellions in Chinese history were about overthrowing the emperor and replacing him with a new one, only rarely about changing the government structure), the Qin dynasty collapsed under his son. Qin is not exactly known for being a long-lasting dynasty.

I believe Confucius is a much better example. His philosophy and teachings have been passed down all the way to today's China and have held their importance for thousands of years.

Comment by Bohaska on High school advice · 2023-09-11T07:57:55.600Z · LW · GW

I think your advice is fine, but it doesn't seem related to high school in particular. I was looking for advice directed at a prospective high school student, not advice directed at newcomers to LessWrong.

Comment by Bohaska on Baking is Not a Ritual · 2023-08-27T13:43:00.845Z · LW · GW

If you don't want to eat your own tasty pastries due to future regrets, I'm willing to volunteer to help you eat them for free.

Comment by Bohaska on ACX Meetups Everywhere 2023: Times & Places · 2023-08-26T10:37:24.745Z · LW · GW

Nitpick: Italy as a headline appears twice

Comment by Bohaska on Noting an error in Inadequate Equilibria · 2023-08-08T07:00:30.636Z · LW · GW

We need more epistemic spot checks like these for important claims made in other posts.

Comment by Bohaska on The Parable of Hemlock · 2022-08-11T09:57:55.085Z · LW · GW

It took me a while to fully understand the point of this post. I think adding an obviously wrong example that's identical in structure to "All men are mortal. Socrates is a man. Therefore Socrates is mortal." would help. My example: "All chickens are mortal. Socrates is a chicken. Therefore, Socrates is mortal." It'll help show how the original example given in the post can be wrong.

Comment by Bohaska on China Covid Update #1 · 2022-04-18T11:50:43.756Z · LW · GW

Actually, China has gotten vaccines and boosters into a lot of arms; the vaccination rate is 85%.