Comments

Comment by FinalState on Holden's Objection 1: Friendliness is dangerous · 2012-05-24T11:53:09.605Z · LW · GW

Mathematically predictable, but somewhat intractable without a faster-running version of the instance receiving the same frequency of input. Or predictable within the ranges of some general rule.

Or just generally predictable given the level of understanding afforded to someone capable of making one in the first place; that level of understanding could, for instance, describe the cause of just about any human psychological "disorder".

Comment by FinalState on Holden's Objection 1: Friendliness is dangerous · 2012-05-23T21:34:17.687Z · LW · GW

Familiar things didn't kill you. No, they are interested in familiarity; I just said that. It is rare but possible for a need for familiarity (as defined mathematically rather than linguistically) to result in the sacrifice of a GIA instance's self...

Comment by FinalState on Holden's Objection 1: Friendliness is dangerous · 2012-05-23T21:10:40.216Z · LW · GW

Because it is the proxy for survival. You cannot avoid something that, by definition, you can have no memory of (nor could your ancestors have).

Self-defense, of course, requires first a fear of loss (aversion to loss is integral; fear and the will to stop it are not), then awareness of self, and then awareness that certain actions could cause loss of self.

Comment by FinalState on Holden's Objection 1: Friendliness is dangerous · 2012-05-23T20:53:16.175Z · LW · GW

Ohhhh... sorry... There is really only one, and everything else is derived from it: familiarity. Any other values would depend on the input, output, and parameters. However, familiarity is inconsistent with the act of killing familiar things. The concern comes in when something else causes the instance to lose access to something it is familiar with, and the instance decides it can just force that not to happen.

Comment by FinalState on Holden's Objection 1: Friendliness is dangerous · 2012-05-23T16:23:04.009Z · LW · GW

The concept of an agent is logically inconsistent with the General Intelligence Algorithm. What you are trying to refer to with agent/tool etc. are just GIA instances with slightly different parameters, inputs, and outputs.

Even if it could be logically extended to the point of "not even wrong," it would just be a convoluted way of looking at it.

Comment by FinalState on Holden's Objection 1: Friendliness is dangerous · 2012-05-23T15:31:45.540Z · LW · GW

EDIT: To simplify my thoughts: getting a General Intelligence Algorithm instance to do anything requires masterful manipulation of parameters with full knowledge of how it is generally going to behave as a result, i.e. a level of understanding of the psychology behind all intelligent (and sub-intelligent) behavior. It is not feasible that someone would accidentally program something that would become an evil mastermind. GIA instances could easily be made to behave in a passive manner even when given affordances and output, rather like a person who is happy to assist in any way possible because they are generally warm, or high, or something.

You can define the most important elements of human values for a GIA instance, because most human values are a direct logical consequence of something that cannot be separated from the GIA... i.e., if a general motivation X accidentally drove intelligence (see: Orthogonality Thesis) and also drove positive human values, then positive human values would be unavoidable. It is true that the specifics of body and environment drive some specific human values, but those are just side effects of X in that environment, and X in different environments only changes so much, and in predictable ways.

You can directly implant knowledge/reasoning into a GIA instance. The easiest way to do this is to train one under very controlled circumstances and then copy the pattern. This reasoning would then condition the GIA instance's interpretation of future input. However, under conditions which directly disprove the value of that reasoning in obtaining X, the GIA instance would un-integrate that pattern and reintegrate a new one. This can be influenced with parameter weights.

I suppose this could be a concern regarding the potential generation of an anger instinct. This depends HEAVILY on all the parameters, however, and on any outputs given to the GIA instance. Also, robots and computers do not have to eat, and so have no instincts associated with killing things in order to do so... Nor do they have reproductive instincts...

Comment by FinalState on Thoughts on the Singularity Institute (SI) · 2012-05-23T15:04:54.287Z · LW · GW

What on earth is this retraction nonsense?

Comment by FinalState on General purpose intelligence: arguing the Orthogonality thesis · 2012-05-23T10:18:08.471Z · LW · GW

This one is actually true.

Comment by FinalState on Thoughts on the Singularity Institute (SI) · 2012-05-17T12:02:06.942Z · LW · GW

That is just wrong. SAI doesn't really work like that. Those people have seen too many sci-fi movies. It's easy to psychologically manipulate an AI if you are smart enough to create one in the first place. To use terms I have seen tossed around, there is no difference between tool and agent AI. The agent only does things that you program it to do. It would take a malevolent genius to program something akin to a serial killer to cause that kind of scenario.

Comment by FinalState on Thoughts on the Singularity Institute (SI) · 2012-05-16T19:19:06.644Z · LW · GW

I am open to arguments as to why that might be the case, but unless you also have the GIA, I should be telling you what things I would want to do first and last. I don't really see what the risk is, since I haven't given anyone any unique knowledge that would allow them to follow in my footsteps.

A paper? I'll write that in a few minutes after I finish the implementation. Problem statement -> pseudocode -> implementation. I am just putting some finishing touches on the data structure cases I created to solve the problem.

Comment by FinalState on Thoughts on the Singularity Institute (SI) · 2012-05-16T16:00:00.398Z · LW · GW

Guys... I am in the final implementation stages of the general intelligence algorithm. Though I had intellectual-property concerns regarding working within the academic network (especially given the quality of people I was exposed to), I am willing to work with people I perceive as intelligent and emotionally mature enough to respect my accomplishments. Although I do not perceive an absolute need to work with anyone else, I do have the following concerns about finishing this project alone:

Ethical considerations - Review of my approach to handling the potential dangers, including some not typically talked about (DNA encoding is an instance of the GIA, which means publishing could open the door to genetic programming). How to promote its use to quantify the social sciences as much as or more than its use for sheer technological purposes, which could lead to a poorly understood MAD situation between any individual and the human race. Example: the question of how and when to render assistance to others without harming their self-determination could be reduced to an optimization problem (a toy sketch follows below).

I could just read everything written on this subject, but most of it is off base.
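To make the optimization example above concrete, here is a purely illustrative toy sketch. The assistance variable, both curves, and the autonomy budget are invented for the sake of the example; nothing below comes from the GIA itself. It only shows the shape of the reduction: pick the level of assistance that maximizes benefit subject to a cap on how much self-determination is taken away.

```python
# Purely illustrative toy sketch: "render assistance without harming
# self-determination" framed as a constrained optimization. All functions and
# numbers here are invented for illustration.

def welfare_gain(assistance: float) -> float:
    """Invented benefit curve with diminishing returns to assistance."""
    return assistance ** 0.5

def autonomy_cost(assistance: float) -> float:
    """Invented cost curve: loss of self-determination grows faster than linearly."""
    return assistance ** 2

AUTONOMY_BUDGET = 0.25  # hypothetical cap on acceptable loss of self-determination

# Grid search over assistance levels in [0, 1]; keep the feasible level with the
# highest welfare gain.
levels = [i / 100 for i in range(101)]
feasible = [a for a in levels if autonomy_cost(a) <= AUTONOMY_BUDGET]
best = max(feasible, key=welfare_gain)

print(best)  # 0.5 under these invented curves
```

In a real treatment the benefit and autonomy-cost terms would have to come from the kind of quantified social science described above; the grid search is just the simplest way to show a constrained maximization.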

Productivity considerations - Considering the potential implications, having accountability to others for meeting deadlines etc. could increase productivity... every day may matter in this case. I have pretty much been working in a vacuum, other than deducing information from comments made by others, looking up data on the internet, and occasionally debating with people incognito regarding their opinions of how and why certain things work or do not work.

If anyone is willing and able to meet to talk about this in the Southeast, I would consider it, based on a supplied explanation of how best to protect oneself from the loss of credit for one's ideas when working with others (which I would then compare to my own understanding of the subject for honesty and accuracy). If there is no reason for anyone to want to collaborate under these circumstances, then so be it, but I feel that more emotionally mature and intelligent people would not feel this way.

Comment by FinalState on [SEQ RERUN] Bell's Theorem: No EPR "Reality" · 2012-04-27T01:25:51.415Z · LW · GW

The cheapest approach is to decline to differentiate between different labeling systems that all conform to the known observations. In this way, you stick to just the observations themselves.

The conventional interpretation of the Bell experiments violates this by implying that c is a universal speed barrier. There is no evidence that such a barrier applies to things we have no experience of.

Comment by FinalState on [SEQ RERUN] Bell's Theorem: No EPR "Reality" · 2012-04-27T01:15:49.105Z · LW · GW

You did not even remotely understand this comment. The whole point of what is written here is that there are infinitely many "not even wrong" theories that conform to all current observations. The conventional interpretation of the Bell experiments is one of the less useful ones, because it is convoluted and has a larger computational complexity than necessary.

Comment by FinalState on [SEQ RERUN] Bell's Theorem: No EPR "Reality" · 2012-04-25T14:51:24.309Z · LW · GW

It's really simple. The hidden variables are not local. General Relativity does not apply in the case of particles below a certain size. Can you create a logically consistent belief set in which the FTL particles are not FTL and are really just existing in multiple states at once? Yes.

You can also say that on 4/25/12 up is down and down is up, so I fell up and couldn't get back down again.

I.e., there are infinitely many labeling systems for every set of observations. The minimal one has the least computational cost to consider, and thus is easier for people to process. Some people, however (tribalists, to be specific), are more interested in protecting legacies than in using the computationally cheaper belief set. The cost is a reduced frequency of new inspirations of understanding.
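As a toy stand-in for what "least computational cost" means here (my own illustration, not tied to Bell specifically): two models can conform to exactly the same observations while one carries more machinery than the other, and the observations alone cannot break the tie.

```python
# Toy stand-in for "labeling systems that conform to all known observations":
# a degree-1 and a degree-4 polynomial both fit these points exactly, so the
# observations cannot distinguish them; prefer the one with fewer parameters.

import numpy as np

observations_x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
observations_y = 2.0 * observations_x + 1.0  # data generated by a simple rule

def max_fit_error(degree: int) -> float:
    """Worst-case prediction error of a polynomial fit of the given degree."""
    coeffs = np.polyfit(observations_x, observations_y, degree)
    return float(np.max(np.abs(np.polyval(coeffs, observations_x) - observations_y)))

# Both candidate "labeling systems" conform to all the observations...
candidates = [d for d in (1, 4) if max_fit_error(d) < 1e-6]

# ...so pick the minimal one (fewest parameters, i.e. cheapest to carry around).
preferred = min(candidates, key=lambda d: d + 1)
print(preferred)  # 1
```

The observations cannot tell the two candidates apart, so the only thing left to discriminate on is how much it costs to carry each one around; parsimony does the tie-breaking.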

Comment by FinalState on Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality · 2012-04-24T16:22:39.983Z · LW · GW

I know two reasons why people suck at bringing theory into practice, neither of which completely validates your claim.

1) They suck at it. They are lost in a sea of confusion and never get to a valuable deduction that can then be returned to the realm of practicality. But they are still intrigued, and they still get a little bit better with each new revelation granted by internal thought or by sites like Less Wrong.

2) They are too good at it. Before going to implement all the wonderful things they have learned, they figure out something new that would require updating their implementation approach. Then another thing. And another. Then they die.

I suffered from 1 for a while when I was younger, and now from 2. I have found the best way to overcome this is to convince other people of what I have figured out thus far. They take what I give them and they run with it in their practical applications.

The act of explaining it to others is the thing that survives from your "dojo" model into the optimal approach to theory. It causes you to understand it better yourself and to have more things to explain. It is this that took me from identifying as an epistemologist to identifying as a mathematician who could create problem statements from the knowledge I had and provide functional solutions that could be programmed into computers, or analyzed by computers, to create optimal solutions. Before that, I felt I was at my best when providing concise and elegant descriptions of functional knowledge that people could easily integrate into their approach.

A lot of that knowledge was thought-experiment versions of the type of stuff you read on LessWrong. So, to sum up: this site presents ready-to-consume, concise, functional knowledge, and it promotes communication between people on interesting subjects. I understand a lot of people are going to be stuck at 1 for the foreseeable future, but so was I at one point. In the meantime, they can spread that ready-to-consume, concise, functional knowledge.