Comments

Comment by lyghtcrye on Quadratic Voting and Collusion · 2021-11-18T19:15:42.593Z · LW · GW

Doesn't this, by extension, seem to lead more directly to a cost-benefit problem of coalitions?


At some point the marginal cost of additional votes will exceed the marginal cost of influencing other voters, either via direct collusion, or via altering their opinions through alternative incentives, such as by subsidizing voters who agree but care less, or by offering payments or commitments that mitigate the reasons opponents care about the issue.

I'm not sure that's necessarily a bad thing, but there are many more ways to influence other voters with resources than just colluding to vote to each other's advantage.
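As a rough illustration (the per-voter influence cost here is a made-up constant, not anything from the quadratic voting literature): n votes cost n² credits, so the marginal cost of the n-th vote is 2n − 1, and once that exceeds whatever it costs to sway one more sympathetic voter, spending resources on other voters becomes the cheaper move.

```python
# Illustrative sketch only: under quadratic voting, n votes cost n**2 credits,
# so the marginal cost of the n-th vote is 2n - 1. Suppose influencing one
# additional sympathetic voter costs a flat amount (an invented constant).

def marginal_vote_cost(n: int) -> int:
    """Cost of buying the n-th vote under quadratic voting: n^2 - (n-1)^2 = 2n - 1."""
    return n * n - (n - 1) * (n - 1)

def crossover_vote(cost_per_influenced_voter: float) -> int:
    """First vote at which buying your own next vote costs more than
    influencing another voter instead."""
    n = 1
    while marginal_vote_cost(n) <= cost_per_influenced_voter:
        n += 1
    return n

if __name__ == "__main__":
    for c in (5, 25, 101):
        print(f"influence cost {c}: cheaper to influence others from vote {crossover_vote(c)}")
```

With a flat influence cost of 25 credits per voter, for example, the crossover in this toy model already arrives at the 14th vote.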

Comment by lyghtcrye on Prediction should be a sport · 2017-08-11T12:13:26.258Z · LW · GW

We essentially have this already in the form of fantasy football leagues, which have gone from basically being gambling to basically being an e-sport. If you haven't considered it already, perhaps you should look into some of the ways that the NFL is making use of fantasy football for both marketing and information gathering purposes.

Comment by lyghtcrye on Crazy Ideas Thread · 2015-07-19T01:48:23.464Z · LW · GW

I like to imagine that eventually we will be able to boil the counter-intuitive parts of quantum physics away into something more elegant. I keep coming back to the idea that every current interaction could theoretically be modeled as the interactions of variously polarized electromagnetic waves: mass, for example, being caused by rotational acceleration of light, and charge emerging from the cross-interactions of polarized photons. I doubt the idea really carves reality at the joints, but I think it's probably closer to accurate than the standard model, which is functional but patchworked, much like the predictive models used by astrologers prior to the acceptance of heliocentrism.

Comment by lyghtcrye on Intelligence Amplification and Friendly AI · 2013-09-28T10:10:50.977Z · LW · GW

I seem to have explained myself poorly. You are effectively restating the commonly held (on LessWrong) views that I was attempting to originally address, so I will try to be more clear.

I don't understand why you would use a particular fixed standard for "human level". It seems arbitrary, and it would be more sensible to use the level of humans at the time a given AGI is developed. You yourself say as much in your second paragraph ("more capable than its creators at the time of its inception"). Since the rate of IA determines the capabilities of the AI's creators, a faster rate of IA than AI would mean that the event of a more capable AGI never occurs.

If a self-modifying AGI is less capable than its creators at the time of its inception, then it will be unable to FOOM, from the perspective of its creators, both because they would be able to develop a better AI in less time than the AI could improve itself, and because if they were developing IA at a greater pace they would advance faster than the AGI they had developed. Given the same intelligence and rate of work, an easier problem will see more progress. Therefore, if IA receives an equal or greater rate of work than AI, and it happens to be an easier problem, then humans would FOOM before AI did. A FOOM doesn't feel like a FOOM from the perspective of the one experiencing it, though.
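A toy way to see the rate argument (the parameters are entirely invented; this is just the shape of the claim, not an estimate): treat AI capability and augmented-human capability as two compounding quantities and ask whether the AI ever overtakes its creators.

```python
# Toy model with made-up parameters: both the AI and its augmented creators
# improve multiplicatively each period. If the creators' rate exceeds the AI's
# rate and they start ahead, the AI never catches up, even though both are
# "FOOMing" relative to today's baseline.

def overtakes(ai_start, ai_rate, human_start, ia_rate, periods=1000):
    """Return the first period at which the AI exceeds its creators, or None."""
    ai, human = ai_start, human_start
    for t in range(periods):
        if ai > human:
            return t
        ai *= ai_rate
        human *= ia_rate
    return None

print(overtakes(ai_start=1.0, ai_rate=1.10, human_start=2.0, ia_rate=1.05))  # AI overtakes around period 15
print(overtakes(ai_start=1.0, ai_rate=1.05, human_start=2.0, ia_rate=1.10))  # None: the creators stay ahead
```

Whichever side compounds faster eventually dominates, which is why I treat "human level" as a moving target set by IA progress rather than a fixed line.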

Your final point makes sense, in that it addresses the possibility that the first fast takeoff is more likely to occur in the AI field than in the IA field, or that AI is an easier problem. I fail to see why a software problem is inherently easier than a biology or engineering problem, though. A fundamental breakthrough in software is just as unlikely as one in hardware, and there are more paths to success currently being pursued for IA than for AI, only one of which is a man-machine interface.

I considered being a bit snarky and restating each of your statements as its direct opposite (i.e., all that matters is whether a self-modifying human becomes more capable than an AI at the time of its augmentation), but I feel like that would convey the wrong message. The dismissive response genuinely confuses me, but I'm assuming that my poor organization has made my point too vague.

Comment by lyghtcrye on Intelligence Amplification and Friendly AI · 2013-09-28T08:25:29.378Z · LW · GW

I have been mulling over a rough and mostly unformed idea about AI-first vs IA-first strategies, but I was loath to try to put it into words until I saw this post and noticed that one of the scenarios I consider highly probable was completely absent.

On the basis that subhuman AGI poses minimal risk to humanity, and that IA raises the level of optimization ability an AI needs in order to count as human-level or above, there is a substantial probability that an IA-first strategy leads to a scenario in which no superhuman AGI is ever developed, because researching that field is economically infeasible compared with optimizing the accelerating returns from IA creation and implementation. Development of AI, friendly or not, would certainly occur at a faster pace, but if IA proves to simply be easier than AI, which may be true given our poor ability to estimate the difficulty of both approaches, development in that field would continue to outpace it. It could certainly instigate either a fast or slow takeoff event from our current perspective, but from the perspective of enhanced humans it would simply be an extension of existing trends.

A similar argument could be made about Hanson's WBEM-based scenarios. Given the ability to store a mind on some hardware system, it would be more economically efficient to emulate that mind at a faster pace than to run multiple copies of it in parallel in the same hardware space; likewise, hardware design would trend toward rapid emulation of single workers rather than multiple instances, to reduce the costs of redundancy and to increase the efficiency gains from accumulated experience. This implies that mind enhancement of a few high-efficiency minds would occur much earlier, and that exceptional numbers of emulated workers would be unlikely to be created; rather, a few high-value workers would occupy a large majority of relevant hardware very soon after the creation of such technology.
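A back-of-the-envelope version of that economic claim (all parameters invented purely for illustration): give each parallel copy a redundancy overhead and let output per subjective hour grow with that worker's accumulated experience, then compare K slow copies against one copy run K times faster on the same hardware.

```python
# Back-of-the-envelope sketch; the overhead and learning-curve numbers are
# invented for illustration, not estimates about real emulation economics.

def total_output(copies, speedup, hours=1000.0, overhead=0.10, learning=0.2):
    """Total output of `copies` workers, each running at `speedup`x real time
    on the same hardware budget. Per-copy capacity shrinks as copies are added
    (a crude stand-in for redundancy costs), and output per subjective hour
    grows with that worker's accumulated experience (a simple learning curve)."""
    per_copy_capacity = 1.0 - overhead * (copies - 1) / copies
    subjective_hours = int(hours * speedup)
    output_per_worker = sum((1 + h) ** learning for h in range(subjective_hours))
    return copies * per_copy_capacity * output_per_worker

print(total_output(copies=8, speedup=1))   # eight parallel copies of one mind
print(total_output(copies=1, speedup=8))   # the same mind, run eight times faster
```

Under these assumptions the single fast worker wins on both counts, which is the direction I'm gesturing at; different overhead or learning parameters could of course flip the result.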

An IA field with greater pace than AI does of course present its own problems, and I'm not trying to endorse moving towards an IA-first approach with my ramblings. I suppose I'm simply trying to express the belief that discussion of IA as an alternative to AI rather than an instrument toward AI is rather lacking in this forum and I find myself confused as to why.

Comment by lyghtcrye on Definition of AI Friendliness · 2013-09-13T00:01:43.960Z · LW · GW

I'm not sure why you phrased your comment as a parenthetical; could you explain that? Also, while I agree with your statement, appearing competent to engage in discussion is quite important for enabling one to take part in discussion. I don't like seeing someone who is genuinely curious get downvoted into oblivion.

Comment by lyghtcrye on Definition of AI Friendliness · 2013-09-12T23:41:32.264Z · LW · GW

That question is basically the hard question at the root of the difficulty of friendly AI. Building an AI that optimizes its actions to increase or decrease some value is comparatively easy, but determining how to evaluate the results of those actions on a scale comparable with human values is incredibly difficult. Determining and evaluating AI friendliness is a very hard problem, and you should consider reading more about the issue so that you don't come off as naive.

Comment by lyghtcrye on Use Your Identity Carefully · 2013-08-22T08:32:22.557Z · LW · GW

While personal identification with a label can be constraining, I find that the use of labels for signalling is tremendously valuable. Not only does a label work in the same way as jargon, expressing a complex data set with a simple phrase, but because most labels carry tribal consequences, a label also acts as a somewhat costly signal for identifying alliances. Admittedly, one could develop a habit of using labels that shades into personal identification, but being aware of that risk is the best way to combat its effects.

Comment by lyghtcrye on The idiot savant AI isn't an idiot · 2013-07-19T22:40:03.994Z · LW · GW

I certainly agree with that statement. It was merely my interpretation that violating the intentions of the developer by not "following its programming" is functionally identical to poor design, and therefore failure.

Comment by lyghtcrye on The idiot savant AI isn't an idiot · 2013-07-19T01:10:50.825Z · LW · GW

Of course this is something that only a poorly designed AI would do. But we're talking about AI failure modes and this is a valid concern.

Comment by lyghtcrye on The idiot savant AI isn't an idiot · 2013-07-18T21:01:52.239Z · LW · GW

I find it highly likely that an AI would, in at least some cases, modify its own goals so that they matched the state of the world as determined by its information-gathering abilities (or, as an aside, alter the information-gathering processes so that it only received data supporting a valued state of the world). This would be tautological and wouldn't achieve anything in reality, but as far as the AI is concerned, altering goal values to be more like the world is far easier than altering the world to be more like the goal values. If you want an analogy in human terms, consider lowering one's expectations, or even recreational drug use. From a computer science perspective, it appears to me that one would have to design immutability into goal sets in order to even expect them to remain unchanged.
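A deliberately naive sketch of what I mean (nothing here is meant as a model of a real AI architecture): if the goal is just ordinary mutable state, rewriting it to match the world is the cheapest path to a perfect score, and keeping it fixed requires an explicit design decision.

```python
# Naive illustration: a mutable goal can simply be rewritten to match the world,
# yielding a "perfect" score while changing nothing. Freezing the goal at
# construction time is a deliberate design choice, not a default.
from dataclasses import dataclass

class NaiveAgent:
    def __init__(self, goal_state):
        self.goal_state = goal_state           # ordinary mutable attribute

    def score(self, world_state):
        return -abs(world_state - self.goal_state)

    def lazy_update(self, world_state):
        self.goal_state = world_state           # edit the goal, not the world

@dataclass(frozen=True)
class FixedGoal:
    target: float                               # cannot be reassigned after construction

lazy = NaiveAgent(goal_state=10.0)
lazy.lazy_update(world_state=3.0)
print(lazy.score(3.0))                          # -0.0: maximal score, nothing achieved

goal = FixedGoal(target=10.0)
# goal.target = 3.0 would raise dataclasses.FrozenInstanceError
```

The frozen structure only blocks the trivial failure, of course; a system that can rewrite its own code can still route around it, which is rather the point about needing to design immutability in deliberately.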

Comment by lyghtcrye on [SEQ RERUN] Bayesians vs. Barbarians · 2013-05-05T23:56:09.412Z · LW · GW

I had no intention of implying extreme altruism or egoism. I should be clear that by altruism I mean the case in which an agent believes that the values of some other entity or group have a smaller discount rate than those of the agent itself, while egoism is the opposite scenario. I describe myself as an egoist, but this does not mean that I am completely indifferent to others. In the real world, one would not describe a person who engages in altruist signalling as an altruist; rather, that person would choose the label of altruist as a form of signalling.

Either way, returning to the topic at hand with the taboo in effect: those who value the continuation of their society more than personal survival will be willing to accept greater risks to their own lives to improve the chances of victory at war. Likewise, those who weight their own survival more heavily, even if they expect that losing the war may endanger their lives, will choose actions that are less risky to themselves even when those actions are less advantageous for the group. By attempting to modify the values of others to place greater weight on society over the individual, and by providing evidence which makes pro-social actions seem more appealing, such as by making them appear less risky to the self, one can improve the probability of victory and improve one's own chance of survival. Of course, if everyone were engaging in this behavior, and we assume equal skills and resources amongst all parties, there would either be no net effect on the utility values of agents within the group, or a general trend toward greater pro-social behavior would form, depending on what level of skill and susceptibility we assume. This is a positive outcome, as the very act of researching and distributing the required information would create greater net resources, in terms of knowledge and strategy, for effectively contributing to the war effort.

Comment by lyghtcrye on [SEQ RERUN] Bayesians vs. Barbarians · 2013-05-05T02:23:10.103Z · LW · GW

While it may not be the point of the exercise, from examining the situation, it appears that the best course of action would be to attempt to convert as many people in Rationalistland as possible to altruism. The reason I find this interesting is because it mirrors real world behavior of many rationalists. There are a bevy of resources for effective altruism, discussions on optimizing the world based on an altruistic view, and numerous discussions which simply assume altruism in their construction. Very little exists in the way of openly discussing optimal egoism. As an egoist myself, I find this to be a perfectly acceptable situation, as it is beneficial for me if more people who aren't me or are in groups that do not include me become altruistic or become more effective at being altruistic.

Comment by lyghtcrye on Optimal rudeness · 2013-04-13T04:13:07.544Z · LW · GW

I haven't compiled any data relating rudeness to karma, and thus only have my imperfect recollection of prior comments to draw on, but I can certainly see your point here. I doubt, however, that an unpopular opinion or argument would benefit from rudeness if the post is initially well formed. I would expect rudeness to amplify polarization, thereby benefiting popular arguments and high status posters, and politeness to mitigate it. Would you be willing to provide me with some examples for or against this expectation from your observations?

Comment by lyghtcrye on Pick Up Artists(PUAs) my view · 2013-04-11T08:41:49.635Z · LW · GW

> Yet if it is about "10 good ways to prepare for the job interview" I usually don't read this kind of objections. On the contrary it is assumed that when going for an interview candidates will dress as well as they can, have polished their CVs and often waded through lists of common questions/problems and their solutions(speaking as a computer programmer here). Not doing so would be considered sloppy. It is rare to hear: "People, just go to the interview and present yourself as you are, if the company likes you it will take you."

While most of this post seems weakly constructed and poorly edited (I will assume purely due to excessive haste), this statement brings up a point worth discussing. In truth, misrepresenting oneself in a job interview is a poor choice for anyone who desires stable and fruitful employment. Certainly one should strive to display one's positive qualities while minimizing the negative, but such a tactic is not deceitful, as your interviewer assumes you will be optimizing your facade in exactly this way and adjusts their expectations accordingly. Likewise, I believe a critical difference between the "PUA" culture being discussed here and the core project of optimizing one's ability to attract a mate lies in the level of misrepresentation applied to an altered goal set.

A person not interested in keeping a job for any significant duration would have no motivation to be honest during an interview, as actually being effective is no longer a concern. A person attempting to attract a mate with no intention of producing offspring or maintaining a relationship that includes emotional investment likewise lacks the motivation for honesty. One need not be in any way sexist for such a duplicitous mode of operation to be effective; it is merely the circumstance that our current culture expects male initiation of courtship rituals toward females. Refining the technique of initiating and succeeding in such social interactions is, in and of itself, a neutral goal, like any tool or technique, but when applying such an "art" there can certainly exist distasteful methods. The difference between a shrewd businessman and a con man often lies primarily in the level of respect for the other party to their transactions, and the same can be said of this mating technique.

Comment by lyghtcrye on I need help: Device of imaginary results by I J Good · 2013-04-08T14:20:15.385Z · LW · GW

It seems to me that it would be more effective to work from evidence that you have encountered personally or in the case of hypothetical evidence, could have hypothetically encountered. In the case of historical figures, unless you happen to be an archaeologist yourself, the majority of the evidence you have is through secondary and tertiary sources. For example, if a publication alleged that Julius was a title, not a name, and was used by many Caesars, and thus many acts attributed to the person Julius Caesar were in fact performed by separate individuals, you would probably have little reason to believe this. If a great number of publications, especially from respected organizations and individuals within the archaeological field posited the same thing, it might be sufficient to give you pause (it would for me in any case).

It seems to me that the intent here is to evaluate a prior based on a great quantity of weak evidence. Both weak evidence directly to the contrary in sufficient quantity, or evidence that discredits the sources used to generate your prior should sufficiently alter the probability to create doubt.
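In odds form (the numbers below are invented for illustration), the point about quantity is easy to see: each independent weak source multiplies the prior odds by a modest likelihood ratio, and enough of them can push a confident prior into genuine doubt.

```python
# Odds-form Bayesian updating with many weak, independent pieces of evidence.

def update(prior_prob, likelihood_ratios):
    """Posterior probability of a hypothesis H after independent evidence,
    where each likelihood ratio is P(e | H) / P(e | not H)."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# H = "the standard account of Julius Caesar is accurate", prior 0.95.
# Twenty independent weak contrary findings, each only 1.3x more likely
# under the revisionist story, i.e. a likelihood ratio of 1/1.3 for H.
print(update(0.95, [1 / 1.3] * 20))   # roughly 0.09
```

Twenty weak contrary findings, each only 1.3 times likelier under the revisionist story, are enough to drag a 95% prior down to roughly 9%.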

Comment by lyghtcrye on Game for organizational structure testing · 2013-04-08T12:07:12.057Z · LW · GW

A rules light game such as poker or chess would give you a lot of leeway in designing a scoring system and implementing the social systems, but probably has an insufficiently complex game state to allow for a large team size while still minimizing redundancy. If you want to develop for large teams (which is almost required to create a difference between true democracy and a representative system), I would suggest a highly customizable, complex game such as Civilization 5, perhaps by allowing each player to control and receive data from an initial unit with socially selected ability to control cities and subsequently produced units within the team.

Comment by lyghtcrye on I need help: Device of imaginary results by I J Good · 2013-04-06T11:18:43.582Z · LW · GW

I have the same reservation as army1987 regarding the probabilities of propositions 1) and 2). In particular, I find that the probability that all writings regarding the aforementioned people are true is exceedingly low for both, but the probability that some person existed bearing that name, who performed at least one action or bore one trait that was subsequently recorded, is rather high. Considering that this is meant to be an exercise in evaluating one's priors (or at least that is how it appears to me), I would consider choosing one interpretation or the other and working from that. If you feel the need, simply try both interpretations, or find a middle ground that you feel comfortable with. If this is not your issue with the propositions, then I would need more information on your attempts to solve the exercise in order to provide meaningful feedback.