Comments

Comment by blankcanvas on Open thread, June 26 - July 2, 2017 · 2017-07-01T08:26:13.161Z · LW · GW

"A question. Would you rather be born and live for thirty years and then be executed, or never be born at all? To me, the answer depends on how I was going to live those thirty years. Well fed and sheltered while hanging out in a great open space with a small community of peers? That sounds pretty good to me. Locked in a tiny box with no stimulation? Quite possibly not. (I have a really strong desire for my own existence, even in the face of that existence being terrible, but I generally treat that as a weird quirk of my own mind.)"

Regardless of one's answer, that question isn't the relevant one unless you're talking about humans rather than cows. What is relevant is a question like this:

"Do you support bringing cows into existence, regardless for the fact that the majority will be inflicted with unnecessary suffering?" "Do you support bringing cows into existence, though they will be executed in 30 yrs?"

A cow's well-being is an assumption: will the cow be more or less likely to be miserable for those 30 years?

Reasoning from the position of a human is the wrong frame when talking about cows; a cow in the position of a cow is the right one. Anthropomorphism I would consider a human bias. Imagining a human in the position of a cow is better, since it might lead to the conclusion not to inflict unnecessary suffering, but it is still a bias. So the question is whether the means justify the end, where the means in this case is the argument itself and any hesitation is rationality.

"By being vegan, you are not supporting lives. If everyone in the world went vegan tomorrow (or, more realistically, if a perfect vat-grown beef substitute was released), what would happen to the animals on a nice free-range cruelty-free farm?"

How many cows end up on cruelty-free farms, and how many on non-cruelty-free farms?

Many questions are raised, which means more data is needed before concluding a meaningful answer or ethics.

Comment by blankcanvas on Open thread, June 26 - July 2, 2017 · 2017-06-30T20:38:51.250Z · LW · GW

I want to go into this full-time, but unfortunately I'm looking at part-time work plus full-time studies (60 h/week), which annoys me deeply, and I've never managed even 10 hours a week (conscientiousness in the 2nd percentile, yes, only 2% of people score lower, and neuroticism in the 80th percentile). I'm thinking about skipping my studies at the government-funded school, which bribes me very well, and instead working 20 h a week and doing Maps of Meaning etc. for 40 h a week. I wrote about it more here: http://lesswrong.com/lw/p6l/open_thread_june_26_july_2_2017/dusd

I'm not your ordinary LWer; this is not my only account. If you are looking to get buy-in from people who are hyperrational with IQs in the 140s, I wasn't the target audience. :)

Thanks for the advice by the way.

Comment by blankcanvas on Open thread, June 26 - July 2, 2017 · 2017-06-30T20:10:24.864Z · LW · GW

21-22.

The government is willing to give me (stolen from taxpayers) around $1000/month in student welfare to finish my high school degree, and for university a student loan at ~1.5% interest of around $1000/month plus ~$500/month in welfare. However, if I work part-time alongside finishing high school, what I earn will pretty much equal what I pay in taxes: ~50% tax on freelance income, and 25% on goods I purchase in the store. But that means 60 h/week.
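To make that arithmetic concrete, here is a minimal sketch of the effective purchasing power of freelance income under the rates quoted above. The gross monthly figure is a made-up placeholder, and treating the 25% as tax added on top of store prices is an assumption:

```python
# Rough purchasing-power arithmetic; all figures are illustrative, not real finances.
gross = 2000.0      # hypothetical gross freelance income per month (placeholder)
income_tax = 0.50   # ~50% tax on freelance income, as quoted above
vat = 0.25          # 25% tax on goods purchased in the store, as quoted above

net = gross * (1 - income_tax)       # cash left after income tax
purchasing_power = net / (1 + vat)   # goods that cash buys once VAT is added on top

print(f"net income:       ${net:.0f}/month")
print(f"purchasing power: ${purchasing_power:.0f}/month "
      f"({purchasing_power / gross:.0%} of gross)")
```

On these assumptions, only about 40% of gross freelance income survives as purchasing power, which supports the point that part-time earnings largely disappear into taxes.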

I don't think I want to work an unskilled labor job. If I were certain that my IQ was around 100, then I would... If I don't go to school now, I will have to teach myself web development for 1-2 years to get a job that way and sustain building AGI.

I know Yudkowsky doesn't, but how would you balance work, AGI, and life?

Comment by blankcanvas on Open thread, June 26 - July 2, 2017 · 2017-06-30T19:14:12.001Z · LW · GW

I'm a high school dropout with an IQ in the low 120s to 130. I want to do my part and build a safe AGI, but it will take 7 years to finish high school, a bachelor's, and a master's. I have no math or programming skills. What would you do in my situation? Should I forget about AGI, and do what exactly?

Working on a high school curriculum doesn't feel like getting closer to building an AGI, and I don't think working on a bachelor's would either. I'm questioning whether I really want to do AGI work, or am capable of it, compared with, say, having an IQ in the 140s-160s.

Comment by blankcanvas on Open thread, June 26 - July 2, 2017 · 2017-06-30T18:51:14.220Z · LW · GW

Unfortunately, I'm not a man with this undetected prejudice; I've personally delved into mysticism in the past through meditation. I am familiar with Jordan Peterson and have watched many of his lectures thanks to the YouTube algorithm, but I'm unsure what I have learned. Do you have any suggestions for what order to watch and learn from his lectures? I'm thinking Maps of Meaning 2017, then Personality 2017, then the Biblical Series.

I've also tried reading his book recommendations, like Brave New World, rated at #1, but it doesn't really captivate my attention; it feels more like a chore than anything. I suppose that's how I viewed the books after all: "I need to download this information into my brain so the AGI system I might help create won't wipe us out."

Comment by blankcanvas on Effective Altruism : An idea repository · 2017-06-27T19:40:11.035Z · LW · GW

"Posting on behalf of my coworker Sam Deere (who didn't have enough karma to post):"

I registered this account today and couldn't post, so I figured I had to verify the email associated with this account, and now it works. :)

Comment by blankcanvas on Open thread, June 26 - July 2, 2017 · 2017-06-27T18:43:53.023Z · LW · GW

It doesn't make sense to have internally generated goals, as any goal I make up seems wrong and does not motivate me in the present moment to take action. If a goal made sense, then I could pursue it with instrumental rationality in the present moment, without procrastination as a form of resistance. Because that's what procrastination seems to be: resistance to enslavement by forces beyond my control. Not literally, but you know, conditioning by the schooling system, etc.

So what I would like is a goal that is universally shared among you, me, and every other Homo sapiens, and that lasts through time. Preferences which are shared.

Comment by blankcanvas on Open thread, June 26 - July 2, 2017 · 2017-06-27T18:20:41.176Z · LW · GW

So all of your actions in the present moment are guided toward your brother's happiness? I didn't mean switching between goals as situations change; I meant only one goal.

Comment by blankcanvas on Open thread, June 26 - July 2, 2017 · 2017-06-27T18:19:29.576Z · LW · GW

That's why I am asking here. What goal should I have? I use goal and preference interchangeably. I'm also not expecting the goal/preference to change in my lifetime, or over multiple lifetimes either.

Comment by blankcanvas on Open thread, June 26 - July 2, 2017 · 2017-06-27T17:41:10.284Z · LW · GW

I don't know what goal should guide my instrumental rationality in the present moment. I want to take this fully seriously, but for the sake of instrumental rationality in and of itself, with presence.

"More specifically, instrumental rationality is the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one's preferences.

Why my preferences? Have we not evolved rational thought further than simply whatever oneself cares about? If there even is such a thing as a self? I understand, it's how our language has evolved, but still.

"Said preferences are not limited to 'selfish' preferences or unshared values; they include anything one cares about."

Not limited to selfish preferences or unshared values; so what audience is rationality for?

https://wiki.lesswrong.com/wiki/Rationality