Posts

Are we certain that gpt-2 and similar algorithms are not self-aware? 2019-07-11T08:37:59.606Z · score: 1 (7 votes)
Modeling AI milestones to adjust AGI arrival estimates? 2019-07-11T08:17:55.914Z · score: 11 (5 votes)
What would be the signs of AI manhattan projects starting? Should a website be made watching for these signs? 2019-07-03T12:22:40.666Z · score: 13 (6 votes)

Comments

Comment by ozyrus on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-26T23:25:21.591Z · score: 1 (1 votes) · LW · GW

I've been meditating lately on the possibility of an advanced artificial intelligence modifying its own value function, and have even been writing some excerpts on the topic.

Is it theoretically possible? Has anyone of note written anything about this -- or anyone at all? This question is so, so interesting to me.

My thoughts led me to believe that it is certainly possible in theory to modify it, but I could not come to any conclusion about whether the agent would want to do so. I seriously lack a good definition of a value function and an understanding of how it is enforced on the agent. I really want to tackle this problem from a human-centric point of view, but I don't know whether anthropomorphization will work here.

Comment by ozyrus on Stupid Questions, 2nd half of December · 2015-12-23T16:04:20.294Z · score: 5 (5 votes) · LW · GW

Well, this is a stupid questions thread after all, so I might as well ask one that seems really stupid.

How can a person who promotes rationality be overweight? It's been bugging me for a while. Isn't it kinda the first thing you would want to apply your rationality to? If you have things to do that get you more utility, you can always pay a diet specialist and just stick to the diet, because it seems to me that additional years of life will bring you more utility than any other activity you could spend that money on.

Comment by ozyrus on Sensation & Perception · 2015-08-26T15:05:44.716Z · score: 0 (0 votes) · LW · GW

A good read, though I found the writing style rather bland. I did not read the original article, but the compression seems fine. More would be appreciated.

Comment by ozyrus on Open Thread - Aug 24 - Aug 30 · 2015-08-25T07:58:10.889Z · score: 4 (4 votes) · LW · GW

Are there any LessWrong-like sequences focused on economics, finance, business, or management? Or maybe just internet communities like LessWrong focused on these subjects?

I mean, the Sequences introduced me to some really complex knowledge that improved me a lot, while simultaneously being engaging and quite easy to read. It is only logical to assume that somewhere on the web there must be articles in the same style covering different topics. And if there are not, someone surely should write them; I think there is demand for this kind of content.

So, feel free to link LessWrong-like series of blog posts on any topic, actually: that would be really helpful for me. P.S. In hindsight, I guess there may already be a post here on LessWrong containing all the links I am looking for. If so, could anyone point me to it?

Comment by ozyrus on Welcome to Less Wrong! (7th thread, December 2014) · 2015-05-20T22:03:42.703Z · score: 1 (1 votes) · LW · GW

> It seems that your implicit question is, "If rationality makes people more effective at doing things that I don't value, then should the ideas of rationality be spread?" That depends on how many people there are with values that are inconsistent with yours, and it also depends on how much it makes people do things that you do value. And I would contend that a world full of more rational people would still be a better world than this one even if it means that there are a few sadists who are more effective for it. There are murderers who kill people with guns, and this is bad; but there are many, many more soldiers who protect their nations with guns, and the existence of those nations allows much higher standards of living than would be otherwise possible, and this is good. There are more good people than evil people in the world. But it's also true that sometimes people can for the first time follow their beliefs to their logical conclusions and, as a result, do things that very few people value.

Excellent answer! Yes, you deduced the implicit question correctly. I also agree that this is a rather abstract area of moral philosophy, though I did not see that at first. However, I don't think your argument that the world would be a better place with everyone being rational holds up, especially this point:

> There are more good people than evil people in the world.

Even if there are, there is no proof that after becoming "rational" they will not become "bad" (quotation marks because "bad" is not defined sufficiently, but that'll do). I can imagine some interesting prospects for experiments in this field, by the way. I also think the result would vary depending on whether the subject is placed in a society of only rationalists or in ordinary society, with "bad" actions carried out more often in the second case, as there is much less room for cooperation.

But of course this is a pointless discussion, as the situation is not really grounded in reality in any way and we can't really tell what would happen. :)

Comment by ozyrus on Welcome to Less Wrong! (7th thread, December 2014) · 2015-05-20T16:43:36.054Z · score: 2 (2 votes) · LW · GW

Hello, everyone!

LW came to my attention not so long ago, and I've been committed to reading it since that moment about a month ago. I am a 20-year-old linguist from Moscow, finishing my bachelor's. Due to my age, I've been pondering the usual questions of life for the past few years, searching for my path, my philosophy, essentially the best way for me to live.

I studied a lot of religions and philosophies, and they all seemed really flat, essentially for the reasons stated in some articles here. I came close to something resembling a nice way to live after I read "Atlas Shrugged", but something about it bothered me, and after a thorough analysis of that philosophy I decided to take some good things from it and move on, as I have done many times before.

I found this gem of a site through Reddit and Roko's basilisk (is it okay if I mention it here? I heard discussion was banned). I am deeply into the whole idea of rationality and nearly all the ideas presented on this site, but something really bothers me here, too.

The thing is, it is implied that altruism and rationality go hand in hand; maybe I missed some important articles that could explain to me why?

Let's imagine a hypothetical scenario: there is a guy, Steve, who really does not feel anything when he helps other people or does other "good" things generally; he does these things only because his philosophy or religion tells him to. Say this guy is introduced to the ideas of rationality and is thus no longer bound by his philosophy/religion. And what if Steve also does not feel bad about other people suffering (or even takes pleasure in it)?

What I wanted to say is that rationality is a gun that can point both ways, and it is a good thing that LessWrong "sells" this gun with a safety mechanism (if it really is such a "safety mechanism"; once again, maybe I missed something really critical that explains why altruism and "being good" is the most rational strategy).

In other words, Steve does not really care about humanity; he cares about his own well-being and will use all the knowledge he has gained just to meet his ends (people are different, aren't they? And their ends are different, too).

Or take another case: an average rationalist, Jack, estimates that his own net gain will be significantly bigger if he hurts or kills someone (taking into account his emotions and feelings about humanity's overall net gain, and all other possible factors). Does that mean he should carry on? Or is that taboo here? Or maybe it is a problem of this site's demographics and nobody has even considered this scenario (which I really doubt).

I feel that I dove too deep into metaphors, but I am not yet a good writer. I hope you understood my point and can make me less wrong. :)

edit: fixed formatting