Comment by slider on Yes Requires the Possibility of No · 2019-05-19T22:05:32.274Z · score: 1 (1 votes) · LW · GW

As might be typical of my neurotype, when I see text such as "an honest evaluation" in the top-level comment, I resolve it to mean the uncommon case where a person actually, effectively seeks an honest opinion, as the plain English would suggest. The reading that interprets it as the common case could easily suggest that honest asking is impossible or an irrelevant alternative. And indeed people are trained well enough that even when asked for an "honest" opinion, they will give the expected opinion. I didn't really get the simulacra levels and such, but in dynamics like these people have lost the meaning of honesty.

Comment by slider on Yes Requires the Possibility of No · 2019-05-19T21:56:02.760Z · score: 1 (1 votes) · LW · GW

I think it needs clarification. It's clearly vague enough that it's not a valid reason by itself. However, it is reasonable to think that part of the "bad vibe" is of the kind that makes political meshing bad, while part of it could be relevant.

For example, the worry could be that constantly mentioning a specific point works by "mere exposure", where just being exposed to a viewpoint increases one's belief in it without any actual argumentation for it. Zack_M_Davis could then argue that the post doesn't get more exposure than it would have gotten by legitimate means.

But we can't go that far, because there is no clear picture of what the worry is, and unpacking the whole context would probably derail into the political point or otherwise be out of scope for epistemology.

For example, if some crazy scientist, say a Nazi scientist, were burning people (I am assuming that burning people is ethically very bad) to see what happens, I would probably want to make sure that the results he produces contain actual reusable information. Yet I would probably vote against burning people. If I confined myself to the epistemological sphere, I might know to advise that larger sample sizes lead to more reliable results. However, being acutely aware that the trivial way to increase the sample size would lead to significant activity I oppose (i.e., my advice burns more people), I would probably think a little harder about whether there is a lives-spent-efficient way to get reliability. Sure, refusing any cooperation ensures that I don't cause any burned people. But it is likely that, left to their own devices, they would end up burning more people than if they were supplied with basic statistics and with how to get maximum data from each trial. On one hand, value is fragile, and small epistemology improvements might correspond to big dips in average well-being. On the other hand, taking the ethical dimension effectively into account will seemingly "corrupt" the cold-hearted data processing. From a lives-ambivalent viewpoint those nudges are needless inefficiencies, "errors". Now, I don't know whether the worry about this case is that big, but I would in general be interested in when small linkages are likely to have big impacts. I guess from a pure epistemological viewpoint it would be "value chaoticness", where small differences of formulation have big or unpredictable implications for values.
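
A minimal sketch of the statistical point, with made-up numbers: the standard error of an estimate falls like $1/\sqrt{n}$, so reliability has sharply diminishing returns per extra trial, which is exactly what makes "maximum data per trial" advice matter.

```python
# Sketch: standard error shrinks like 1/sqrt(n), so each halving of the
# error costs four times as many trials. All numbers are hypothetical.
import math

def standard_error(sigma: float, n: int) -> float:
    """Standard error of a mean estimated from n independent trials."""
    return sigma / math.sqrt(n)

sigma = 1.0  # assumed per-trial noise
for n in [10, 40, 160, 640]:
    print(n, round(standard_error(sigma, n), 3))
# 10 -> 0.316, 40 -> 0.158, 160 -> 0.079, 640 -> ~0.04: reliability
# improves ever more slowly, which is why squeezing more data out of each
# trial can beat simply adding trials.
```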

Comment by slider on Yes Requires the Possibility of No · 2019-05-19T18:14:16.280Z · score: 9 (2 votes) · LW · GW

I know it's the typical outcome, but I don't know why it would be inevitable or obvious. A person who verbally asks for an "honest" answer but punishes it is not in fact asking for an honest answer. Part of the reason why people add the qualifier is the belief that those kinds of answers "give you more positive affect".

If you try to shoot for an actually honest opinion, you have to care to differentiate it from asking for "dishonestly honest" opinions. For the kind of mindset that holds "whatever can be destroyed by the truth should be", actually honest opinions are what to shoot for. But I have bad models of what attracts people to "dishonestly honest" opinions. I suspect that mindset could benefit from a different framing ("I have your back" vs. "yes", i.e., forgo claims about the state of the world in favour of explicit social moves).

This LessWrong post might make someone seek out more "dishonest positivity" by applying "rejection danger" in pursuit of "belief strengthening". I feel there is an argument to be made that when the rejection danger realises, you should just eat it in the face without resisting, and that the failure mode prominently features resisting the rejection. And on balance, if you can't withstand a no, then you will not have earned the yes and should not be asking the question in the first place.

That is, on the epistemic side there is "conservation of expected evidence", and on the social side there is "adherence to the received choice": you can't give control of an area of life conditional on how that control would be used; if you censor someone, you are not in fact giving them a choice.

Comment by slider on Yes Requires the Possibility of No · 2019-05-19T16:31:32.974Z · score: 8 (2 votes) · LW · GW

Note that in 1, if you want to avoid the "lackluster doing" outcome, you have to genuinely be willing to not do it / take pessimism effectively into account when you hold the group discussion. That seems to be a very distinct skill, and not a very obvious one.

In 9, it's kind of weird that a Bayesian wants to increase the probability of a proposition. Someone who takes conservation of expected evidence to heart would know that too high a number would be counterproductive hubris. I guess it could mean "I want to make X happen" vs. "I want to believe X will happen". I get how the reasoning works on the belief side, but on the affecting-the-world side I am unsure the logic even applies.
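
For reference, the worked form of conservation of expected evidence: the prior must equal the probability-weighted average of the possible posteriors,

$$P(H) = P(E)\,P(H \mid E) + P(\lnot E)\,P(H \mid \lnot E).$$

For example, with $P(E) = 0.5$ and posteriors $0.9$ and $0.1$, the prior is forced to be $0.5 \cdot 0.9 + 0.5 \cdot 0.1 = 0.5$: any expected upward movement on one branch has to be paid for by the other.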

Comment by slider on Yes Requires the Possibility of No · 2019-05-19T16:13:57.513Z · score: 15 (3 votes) · LW · GW

I had a hard time tracking down the referent of the abuse mentioned in the parent post.

It does seem that the concept was employed in a political context. To my brain, politicizing is a particular kind of use. I get that if you effectively employ any kind of argument towards a political end, it becomes politically relevant. However, it would be weird if any tool employed automatically became part of politics.

If beliefs are to pay rent, and this particular point is established / marketed in order to establish another specific point, I could get on board with an expectation to disclose such "financial ties". Up to this point I know that this belief is sponsored by another belief, but I do not know which belief, and I don't fully get why it would be troublesome to reveal it.

Comment by slider on Yes Requires the Possibility of No · 2019-05-19T15:57:04.184Z · score: 3 (2 votes) · LW · GW

I thought the point of clarifying that you want an "honest" answer is that you are actually willing to take a "no" for an answer. An unqualified opinion, even if it is an evaluation at the surface level, probably truly isn't an evaluation. It is interesting that even if you ask for an "honest" answer, people might refuse to give one.

Comment by slider on Coherent decisions imply consistent utilities · 2019-05-15T13:15:59.417Z · score: 1 (1 votes) · LW · GW

In a case where you are going to pick lower variance and lower expected value over higher variance and higher expected value, that option needs to get the bigger "utility number". In order to get that, you need to mess with how utility is calculated. Then it becomes ambiguous whether the "utility-fruits" get redefined in the same go as we redefine how we compare options. If we name them "paperclips", it's clear they are not touched by such redefinition.

It tripped a "type-unsafety" trigger for me, but the operation overall might be safe, as it doesn't actualise the danger. For example, an option of "plum + 2 utility" could give one agent "plum + apple" if it valued apples, and another "plum + pear" if it valued pears. I guess if you consistently replace all physical items with their utility values, this doesn't happen.

In the case of "gain 1 utility with probability 1", if your agent is risk-seeking, it might give this option an "actual" utility of less than 1. In general, if we lose distribution independence, we might need to retain the information about our sub-outcomes rather than collapsing it into a single number. For if an agent is risk-seeking, it's clear that it would prefer A = (5%: 0, 90%: 1, 5%: 2) to B = (100%: 1). But the same risk-seeking applied to combined lotteries would make it prefer C = (5%: 0, 90%: A, 5%: A+A) over A. When comparing C and A, it's not sufficient to know that both of their expected utilities are 1.
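
A small sketch of that computation (taking the blank first outcome of C to be 0, which makes C's expected value come out to 1 as stated, and using the convex $u(x) = x^2$ as a stand-in risk-seeking utility):

```python
# Sketch: compare lotteries A, B, C under a risk-seeking (convex) utility.
# A lottery is a dict mapping outcome -> probability.
from itertools import product

A = {0: 0.05, 1: 0.90, 2: 0.05}
B = {1: 1.0}

def convolve(l1, l2):
    """Distribution of the sum of two independent lotteries (for A + A)."""
    out = {}
    for (x, p), (y, q) in product(l1.items(), l2.items()):
        out[x + y] = out.get(x + y, 0.0) + p * q
    return out

def mix(branches):
    """Compound lottery from (probability, lottery) branches."""
    out = {}
    for p, lot in branches:
        for x, q in lot.items():
            out[x] = out.get(x, 0.0) + p * q
    return out

C = mix([(0.05, {0: 1.0}), (0.90, A), (0.05, convolve(A, A))])

ev = lambda lot: sum(x * p for x, p in lot.items())
eu = lambda lot: sum((x ** 2) * p for x, p in lot.items())  # convex u

print(ev(A), ev(B), ev(C))  # all 1.0 (up to float noise): EV can't separate them
print(eu(A), eu(B), eu(C))  # 1.1, 1.0, 1.2: the risk-seeker ranks C > A > B
```

So two lotteries with the same collapsed number of 1 come apart once the full outcome distributions are retained.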

Emotional valence as cognition mutator (not a bug, but a feature)

2019-05-15T12:49:40.661Z · score: 9 (4 votes)
Comment by slider on Coherent decisions imply consistent utilities · 2019-05-13T20:27:44.694Z · score: 1 (1 votes) · LW · GW

I was thinking of another agent judging my strategies and making a backed-up argument for why I am wrong. If someone said "you were suboptimal on the fruit front, I fixed that mistake for you" and I arrived at a table with 2 wormy apples, I would be annoyed/pissed. I am assuming the other agent can't evaluate their cleanness - it's all fruit to them. Moreover, it might be that wormy apples are rare, and from observing my trade activity it might be inductively well supported that I seem to value "fruit maximization" a great deal (nutrition maximization with clean fruit is just fruit maximization). And it might be important to understand that he didn't mean to cause wormy apples (he isn't even capable of meaning that), but his actions might in fact have caused them.

In the case where wormy apples are frequent, the hypothesis that I am a fruit-maximiser is violated clearly enough that he knows he is on shaky ground modelling me as one. Some very unskilled traders might confuse one type of fruit with another and be inconsistent because they can't get their fruit categories straight. At some middling skill level "fruit-maximisation" peaks, and those who don't understand things beyond that point will confuse those who have yet to get to fruit maximization with those who are past it. Expecting superintelligent things to be consistent kind of assumes that if a metric ever becomes a good goal, higher levels of ability will never be weaker on that metric - that maximization strictly grows and never decreases with ability, for all submetrics.

Comment by slider on Coherent decisions imply consistent utilities · 2019-05-13T17:25:18.884Z · score: 2 (2 votes) · LW · GW

I think you are not allowed to refer explicitly to utility in the options. That is, an option of "I do not choose this option" is self-defeating and ill-formed. In another post I posited a risk-averse utility function that references the amount of paperclips. Maximising that utility function doesn't maximise the expected amount of paperclips. Even if the physical objects of interest are paperclips and we value them linearly, a paperclip is not synonymous with a utilon. It's not a thing you can give out in an option.
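
A minimal illustration of that gap, with a made-up concave utility $u(n) = \sqrt{n}$ over the paperclip count $n$: a sure paperclip beats a gamble with a higher expected paperclip count,

$$E[n]: \; 1 < 0.5 \cdot 0 + 0.5 \cdot 3 = 1.5, \qquad E[u(n)]: \; \sqrt{1} = 1 > 0.5\,\sqrt{0} + 0.5\,\sqrt{3} \approx 0.87,$$

so the utility maximiser picks the option with fewer expected paperclips.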

Comment by slider on Coherent decisions imply consistent utilities · 2019-05-13T15:59:44.960Z · score: 1 (1 votes) · LW · GW

The "damage" from shooting your own foot is defined in the terms of the utility-number.

Say I pick a dominated strategy that nets me 2 apples, while the dominating strategy nets me 3 apples. If on another level of modelling I can know that the first 2 apples are clean and 2 of the apples in the dominating arrangement have worms, I might be happy to be dominated. Apple-level damage is okay (while nutrition-level damage might not be). All deductive results are tautologies, but "if you can't model the agent as trying to achieve goal X, then it's inefficient at achieving X" seems very far from "incoherent agents are stupid".

Comment by slider on Speaking for myself (re: how the LW2.0 team communicates) · 2019-04-28T15:19:37.615Z · score: 1 (1 votes) · LW · GW

I would phrase it as: the number 3 in my head and the number 3 in your head both correspond to the number 3 "out there", or to the "common social" number 3.

For example, my number 3 might participate as part of an input to cached results of multiplication tables, while I am not expecting everyone else's to do so.

The old philosophical problem of whether the red I see is the same red that you see kind of highlights how the reds could plausibly be incomparable, while the practical reality that color talk is possible is not in question.

Comment by slider on Speaking for myself (re: how the LW2.0 team communicates) · 2019-04-28T15:13:10.051Z · score: 1 (1 votes) · LW · GW

I think models can be run on computers, and I think people passing papers around can work as a computer. I do think it's possible to have an organization that does informational work that none of its human participants do. I do appreciate that such work is often very secondary to the work the actual individuals do. But I think that if someone aggressively tried to make a system that would survive a "bad faith" human actor, it might be possible and even feasible.

Comment by slider on Speaking for myself (re: how the LW2.0 team communicates) · 2019-04-28T12:33:13.858Z · score: 6 (3 votes) · LW · GW

I took the line as written to mean that there are no "opinion leaders". In a system where people can vote but actually trust someone else's judgement, the number of votes doesn't reflect the number of judgement processes employed.

I also think that in a system that requires consensus, it becomes tempting to produce a false consensus. This effect is strong enough that in every context where people bother with the concept of consensus, there is enough basis to suspect it doesn't form - that there is a significant chance that any particular consensus is false. By allowing a way of functioning that tolerates non-consensus, it becomes practical to be the first one to break a consensus, and the value of this is enough to see requiring consensus as harmful.

All the while it remains true that where opinions diverge, there is real debate to be had.

Comment by slider on Counterspells · 2019-04-28T12:10:01.585Z · score: 9 (5 votes) · LW · GW

"Counterspells" are supposed to be useful.

In MtG, Counterspell is a card, but it's also a spell category. Spells in that category usually cost less the more specific their target restrictions are. They also all accomplish the same thing, in that ultimately nothing happens (i.e., a cancellation).

Using magic as a metaphor here might be fitting, as the point of such a move is to reveal that the machinery supposedly being employed actually doesn't do anything - that the magic doesn't work and is just wishful thinking. The worry would be that by acknowledging the attempted methods you "stoop down to their level", i.e., employ magic yourself despite not believing in it.

Comment by slider on Pascal's Mugging and One-shot Problems · 2019-04-25T13:47:52.455Z · score: 2 (2 votes) · LW · GW

Many times, opinions on how to handle uncertainty get baked into the utility function. That is, a standard naive construction is to say "be risk neutral" and value paperclips linearly in their amount. But I could imagine a policy for which more paperclips is always better, but which, from a default position of 100%: 2 paperclips, wouldn't choose an option of 0.1%: 1 paperclip, 49.9%: 2 paperclips, 50%: 3 paperclips. One can construct a "risk-averse" function which can then simply be optimised. But does that really mean the new function is not a paperclip-maximization function?
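
A minimal sketch of such a policy (my own construction, purely illustrative): $u(n) = -n^{-10}$ is strictly increasing in the paperclip count $n$, so more paperclips is always better, yet it is so risk-averse that it refuses the gamble above.

```python
# Sketch: a strictly increasing but extremely risk-averse paperclip utility.
def u(n: int) -> float:
    return -n ** -10.0  # strictly increasing: more paperclips is always better

sure_thing = u(2)                                  # 100%: 2 paperclips
gamble = 0.001 * u(1) + 0.499 * u(2) + 0.5 * u(3)  # 0.1%: 1, 49.9%: 2, 50%: 3

print(sure_thing)  # about -0.00098
print(gamble)      # about -0.00150: worse, despite ~2.499 expected paperclips
# The tiny chance of ending up with a single paperclip dominates, so the
# agent keeps its sure 2. Is it still a paperclip maximiser?
```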

Comment by slider on Pascal's Mugging and One-shot Problems · 2019-04-23T20:33:03.989Z · score: 2 (2 votes) · LW · GW

I think your analysis of "maximise" just compares x > y without regard to how much bigger x is, which is kind of a natural consequence of subtracting expected utility out. However, it does highlight that if our goal is "maximise paperclips", it doesn't really say whether "winning harder" is relevant or not. That is, 2 > 1, but so is 1000 > 1. So for cases where an outcome is not a constant amount of paperclips, we need more rules than just what the object of attention is. A "paperclip maximiser" is actually underspecified.

Comment by slider on The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence · 2019-04-16T22:11:34.305Z · score: 4 (3 votes) · LW · GW

If people are biological computers and simulation can't give rise to new consciousness, doesn't that mean a baby can't have consciousness? In a way a baby isn't meant to simulate, but I do think its internal world shouldn't be designated as illusory. That is, we don't need or ask for detailed brain-state histories of babies to designate them as conscious.

Comment by slider on "Intelligence is impossible without emotion" — Yann LeCun · 2019-04-11T01:11:52.567Z · score: 1 (1 votes) · LW · GW

I very much suspect that what is understood by the words alters what is being said a lot. For example, I expected that the argument WOULD apply to autonomous cars.

I have thought about a similar idea before. In my formulation, what an actor does depends on their brain state, and different actions require different states. Thinking is an action where you externally do nothing (or something very trivial) but go from one brain state to another. Brain states are likely to be so complex that they never really reoccur; at the very least, in an additional time step you remember the last time step. In order to make sense of what is going on in the brain, you are going to draw equivalence classes over the myriad possible configurations. If you are merely trying to have an effective theory about how the mind actually works, and are not prejudiced or normative about how, for example, verbal behaviour should be taken into account, a "cut reality at the joints" grouping of the states would be one where there are the most robust transition probabilities from one group to another. This way you get "natural laws" like A->B->C->D, where you do not have to understand what state class C "represents" or "is about". If there are "splits" in the natural laws, that is, if A->B->C->D and A->B->E->F are both likely, it becomes useful to ask which "river" gets activated in a particular train of thought.
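
A minimal sketch of that grouping criterion, treating the mind as a toy Markov chain (all states and numbers hypothetical): a partition "cuts at the joints" when every micro-state in a group has roughly the same total transition probability into each other group, so the groups themselves obey law-like transitions.

```python
# Sketch: check whether a grouping of micro-states yields robust
# group-level transitions (approximate lumpability). Toy numbers.
P = {  # transition probabilities between hypothetical micro-states
    "a1": {"b1": 0.70, "b2": 0.30},
    "a2": {"b1": 0.65, "b2": 0.35},
    "b1": {"a1": 0.10, "a2": 0.10, "b1": 0.80},
    "b2": {"a1": 0.15, "a2": 0.05, "b1": 0.80},
}
groups = {"A": ["a1", "a2"], "B": ["b1", "b2"]}

def group_transition(state: str, target: str) -> float:
    """Total probability of jumping from a micro-state into a group."""
    return sum(P[state].get(s, 0.0) for s in groups[target])

for g, members in groups.items():
    for target in groups:
        probs = [group_transition(s, target) for s in members]
        spread = max(probs) - min(probs)
        # A small spread means the law "g -> target" holds no matter which
        # micro-state inside g you occupy: a robust "natural law".
        print(f"{g}->{target}: {probs} (spread {spread:.2f})")
```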

The brain state needs to include all sensory information. It's likely that the sensory information doesn't dominate which brain state is entered next. If the non-sensory information DOES dominate which state is entered, this seems like an important point about the functioning of the mind; let's call such states "control cognitions". One of the theoretical notions for the state classes might be: given a fixed set of sensory information, what is the set of all possible control cognitions?

One interpretation of "the car won't have emotions" could be that the car doesn't have any (significant) control cognitions because of its nearly stateless nature. The bit about "autonomous thought" is that if the impulse to do stuff comes from within, it can't be sensory-dominated, so it must come from a control cognition.

However, "rational reasoning" is not often called "emotional". Verbal thoughts like "I will walk there" would seem very capable of being control cognitions, yet we would not describe such a mental stance as very emotional. We could also think of someone walking somewhere for reasons of sexual arousal, which would be very emotional behaviour/thinking. However, given an arbitrary or novel mental state, I do not know how I could tell whether to call it an emotion or not. Thus, when we examine a system like a car and somebody says "there is no emotion there", my doubt is: "you would not recognise an emotion if you saw one, so how do you know one isn't here?" or "hey, here is a vivid and complex control-cognition network, how come these are not emotions?"

Comment by slider on [Feature Idea] Epistemic Status · 2019-04-10T16:03:14.743Z · score: 1 (1 votes) · LW · GW

Reading random posts, I noticed the pattern of these epistemic statuses, but they seemed very confusing and illegible to me. It was not obvious how they should be taken or how one would go about learning what they mean.

It came across as "I don't know what these people are doing, so I can't really participate".

Reading some of the older comments here, it seems there was an idea of providing some sort of clarity, and it seems what is going on now really doesn't provide what was sought (maybe a somewhat separate question is whether providing that is a good idea in the first place).

Comment by slider on The Hard Work of Translation (Buddhism) · 2019-04-09T22:11:02.636Z · score: 1 (1 votes) · LW · GW

This reply asks for an epistemic status note. I've started noticing something by a similar name on LessWrong posts. I was quite confused about what they were supposed to accomplish. This particular suggestion leaves it quite murky what kinds of considerations those notes would be used for.

In general, I didn't know whether those epistemic status notes were a community thing, how recent a development they were, or where to look up the relevant info. Also, for some strange reason it felt weird to ask directly about just them on a random post (and I did not ask).

Comment by slider on The AI alignment problem as a consequence of the recursive nature of plans · 2019-04-08T16:02:32.787Z · score: 2 (2 votes) · LW · GW

The line between executable actions and plans is presented as quite clear-cut, and the statement "nobody *does* getting rich" as a claim of fact. Similar logic could be employed to argue that "nobody ever grabs a glass, they only apply pressure with their fingers". Or, in reverse, "there is nobody on the planet whose mind perceives 'getting rich' as a single executable action, nor could there ever (reasonably) be". I could imagine there are people for whom "hedge-funding" is a basic action that doesn't require esoteric aiming, even though it isn't typically so. And "getting rich" is not that far from that.

Kind of like how magic as "a mechanism you know how to use but don't know how it works", i.e., "sufficiently unexplained technology", is a concept in the eye of the beholder, so too is the division between plans and actions. That is, "sufficiently non-practical actions" are plans. Was part of what Yoda was getting at with "don't try, do" about this?

I think sex is still a functional part of human thriving. That it doesn't do insemination doesn't stop it from doing all of its social/community bonding. If you use a hammer as a stepping stone in order to reach high places, you are not failing to use a hammer to impact nails, you are successfully using a hammer to build a house. "Having sex doesn't lead to spreading genes" also seems like a claim of fact. Well, what about keeping your marriage intact with your test-tube children's biological mother? I could see how celibacy could throw a serious wrench into that. If we also grant that worker ants further their genes' agenda without having direct offspring, can we truly rule out similar effects for, for example, homosexual couples?

In a way, evolution only cares about the goal of "thrive", and pushing for it really can't go wrong. But in pushing it, it is often important to be extremely flexible, possibly to the point of anti-alignment with the sub-goals. Repurpose limbs? Repurpose molecules? Alignment would be dangerous. Also, in the conflict between asexual and sexual reproduction, having strong alignment is the *downside* of asexual reproduction.

I also read this as a Magic color analogue that argues for white over at least black and green: "in order to get anywhere, we need to erect a system to get there". Green can answer "there is power in diversity; having all your eggs in the same basket is a recipe for stagnation". Red answers "only by moving your brush against the canvas of life will you ever see a picture, and even if you could see it beforehand, why would you then bother painting it?" Black would complain about unnecessary commitments and "leaving money on the table". Blue can answer "if you commit now to optimising candy, you will never come to appreciate the possibility of lobster dinners".

Comment by slider on The trolley problem, and what you should do if I'm on the tracks · 2019-04-01T22:42:08.105Z · score: 3 (2 votes) · LW · GW

I don't think the standard formulation makes the assumption that the people are chosen at random. It is just "5 humans", and you can think for yourself "I am human" and go "I have a 20% chance of being one of those people". But that's just ignorance of the distribution.

I don't think you can consistently believe that people deserve trials and also believe that a person could have knowledge justifying not pulling the lever on them. We are assuming a split-second decision, so there is no time for a trial, and we pretty much need to treat allegations of someone being Hitler as hearsay. It's kind of trivial if a person had trial-proof reason to think the Hitler accusations hold. But saying a person falls short of that means it's okay to treat persons differently based on an uncertainty - and that is in serious conflict with the "right to trial". I guess the position is easier to defend when "wait and hold a trial" is an option, but here it highlights how one is to conduct oneself in contexts where trials are impossible or infeasible. A stance of "innocent until proven guilty" would say that imposing consequences without the possibility of a defence is wrong, meaning that "split second" doesn't give you liberty to treat people on lighter standards of proof. I think one could hold the position that if you are a "notorious Hitler", but there is no trial-level proof against you, other persons' duty to save you is lowered. That is, "notoriety" is an "adequate" trial for "split-second" enforcement.

Comment by slider on Will superintelligent AI be immortal? · 2019-04-01T22:05:08.838Z · score: 1 (1 votes) · LW · GW

The question presupposes that by continuing to live you fulfill your values better. It might be that after a couple of millennia, additional millennia don't really add that much.

I am presuming that if immortality is possible, then its value is transfinite, and thus any finite chance of it (infinitesimal chances might still lose) means it overrides all other considerations.

In a way, a translation to a more human-scale problem is: "are there acts you should take even if taking those actions would cost your life, regardless of how well you think you could use your future life?" The way it would not be analogous is that human lives are assumed to be finite (note that if you genuinely think there is a chance a particular human is immortal, it is just the original question). This can lead to a stance where you estimate what a human life in good conditions could achieve, without regard to your particular condition, and if your particular conditions allow you to take an even better option, you take it. This could lead to things like risking your life for relatively minor advantages in the Middle Ages, when death was very relevantly looming anyway. In those times it might have been relevant to ask "what can I achieve before I cause my own death?"; since then, trying to die of old age (i.e., not actively causing your own death) has become a relevant enough option that it breaks the old way of framing the question. But if you take it seriously that shooting for old age is imperative, it means that if there is a street where you estimate a 1% risk of ending up in a mugging situation, with a 1% chance of that ending with you getting shot, it rules out using that street as a way to move around.
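
Making the arithmetic explicit (a sketch under the transfinite-value presumption above): the street carries a death risk of $0.01 \times 0.01 = 10^{-4}$ per use. If surviving to immortality is worth $\omega$, then any finite gain $g$ from using the street is swamped:

$$E[\text{use street}] \approx g - 10^{-4}\,\omega < 0 \quad \text{for every finite } g,$$

since $10^{-4}\,\omega$ is still transfinite. That is why the imperative generalises so aggressively.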

Analogously, as long as there is heat there will be computational uncertainty, which means there will always be an ambient risk of things going wrong. That is, you might have high certainty of functioning in some way indefinitely, but functioning in a sane way is far less certain. And all options for action and thought involve energy use, and thus involve increasing the risk of insanity.

Comment by slider on On the Nature of Agency · 2019-04-01T21:03:11.694Z · score: 3 (2 votes) · LW · GW

I read this as an argument for why black should win the conflict between black and green in the Magic color relations. I am more familiar with black being painted as the villain's side, and this contrast seemed very fruitful in my mind, as this seemed like the rare point of view that is pro-black.

-Black does have to worry about the punishment of deviants, but black can also be quite okay with being in actual conflict. The "error corrections" of deviance punishments can sometimes be justified: at the local level you don't always have the means to appreciate your actions' consequences for the wider picture. Green likes empirically found global solutions, and it really, really dislikes when black individuals have a strong influence in their locality, preventing certain types of solutions. Ignoring the atmospheric effects of CO2 allows for easier design of powerful industrial processes, and picking up that fruit might be very relevant to personal interests, but it's not like the restriction (or, I guess, concern at this stage) is there without cause.

-Black takes actions with known downsides that it thinks it can get away with. The trouble is that sometimes there are downsides they could not have known from their starting position, and they can get bitten by things they could have known but did not in fact know. Green doesn't have a model of why what it does works, so it handles unknown dangers just as well as familiar threats. Curiosity kills cats and the like (although curiosity isn't selected against, at least not strongly enough).

-In the Magic metaphor, the willingness to take losses is much more severe. It's about the willingness to cut your hand off to get at your goal. Framing it as "having high constitution" easily paints a picture where losses can be recovered from. But if you die or lose your arm, you don't get resurrected or regrow the limb. Black is about achieving things that are otherwise impossible, but also about summoning stuff that would never happen to you otherwise.

-The flip side of not taking others' opinions too readily is imposing your will too strongly on others. If you take on a vocabulary that suggests and helps you make a plan of action but also demonises other people, it can be easier to become the villain (it's also a pretty common trope that villains act and heroes react). If it is better to rule in hell than serve in heaven, is it worth the trouble to turn heaven into hell based solely on your personal situation improving? The whole "alignment problem" is kind of the realisation that an independent mind will have an independent direction, which could theoretically be in conflict with other directions. The black stance is that "individual will" is a resource to be celebrated, not a problem to be solved away.

Comment by slider on Want to Know What Time Is? · 2019-03-08T15:00:55.309Z · score: 1 (1 votes) · LW · GW

If I studied history with this view, I would be moving events in time. There seem to be a lot of implicit assumptions, which could go multiple ways, needed to make it compatible with ordinary time phenomena.

There seems to be a similarity with Zeno's arrow, in that if each state is considered separately, no movement occurs. Is this just a restatement of that view?

Comment by slider on Why is this utilitarian calculus wrong? Or is it? · 2019-01-28T16:08:39.326Z · score: 1 (3 votes) · LW · GW

I don't know whether it's equivalent, but it seems the transaction is equivalent to a fair deal at a $20-$30 price point plus a gift of $80-$70. In the limit, if a dollar is worth the same to everyone, then a gift nets $X - $X = 0: no net change from gifting. The trade part comes from the valuation being higher than the cost. This would be true even if the beneficiary and the cost bearer were the same party.
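
Spelling the decomposition out (assuming, from the numbers, that the underlying scenario is paying $100 for something that costs the seller $20-$30 to provide - my reconstruction, not stated in the thread):

$$\$100 \;=\; \underbrace{\$20\text{-}\$30}_{\text{fair price near cost}} \;+\; \underbrace{\$80\text{-}\$70}_{\text{gift}},$$

where only the first part is trade in the surplus-creating sense (valuation exceeding cost), and the second part just moves money, netting to zero if a dollar is worth the same to everyone.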

Comment by slider on Summary: Surreal Decisions · 2018-11-30T04:39:46.863Z · score: 1 (1 votes) · LW · GW

Infinite sums of finite terms and finite sums of infinite terms might be different things, and the latter are quite easy. With A = ω·1000 + ω·(−1000), B = ω·1000 + (ω−1000)·(−1000) + 1000000·1000, and C = (ω−1000)·1000 + ω·(−1000) + 1000000·(−1000), it's clear that B > A > C.
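
Expanding the arithmetic to check (the $\omega$ terms cancel exactly, leaving only finite parts):

$$A = 1000\,\omega - 1000\,\omega = 0,$$
$$B = 1000\,\omega - 1000\,\omega + 10^{6} + 10^{9} = 1{,}001{,}000{,}000,$$
$$C = 1000\,\omega - 10^{6} - 1000\,\omega - 10^{9} = -1{,}001{,}000{,}000,$$

so indeed $B > A > C$.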

To my understanding, normal utility functions can be scaled and remain essentially the same. That is, if one explicit version gives the numbers 1, 10, 100 to the options, then a tenfold function that gives 10, 100 and 1000 to the same options is equally valid. I would expect this to hold for the transfinites, in that a function giving 1, ω and ω·ω would be as good as one giving ω, ω·ω and ω·ω·ω.

I am not sure that surreals necessarily invoke infinite sums and their orderings. ω can be defined without sums, and it becomes a separate thing to prove, for example, that 1+1=2 (that is, this is a genuine claim about how addition works in relation to already existing numbers; it's not a restatement of the definition of 2). There is the issue that just because a value is transfinite, you don't know how big it is, and some problems might be sensitive to getting the magnitudes right. Say you have Pascal's-wager options of having no life or afterlife, living for another day, living one day in heaven, and living indefinitely in heaven. The correct-ish values would be 0, 1·1, ω·1 and ω·ω, the fourth option being clearly better than the third rather than equally good. Also, there is no natural number N such that 1·N ≥ ω, but 1·ω = ω; "repeatedly +1" might only refer to the former. Surreals deal with actual infinities, not infinities as limits of finite processes. In a way, both ω and ω·ω would appear as a series of "++++++...", so decomposition into an ordering of pluses can't be their distinguishing mark.

Comment by slider on Summary: Surreal Decisions · 2018-11-30T03:41:46.045Z · score: 1 (1 votes) · LW · GW

Converting between option preferences and a utility number might be wanted even in scenarios where we have different kinds of preferences that we care about but that are distinct. Say that an option can create or kill a human being, and gain or lose money. A morality that prefers 0 humans killed or created over a human killed, regardless of the money effect, but still uses money as a tie-breaker, seems a relevant option.

If you formulate the number for such an option within a number system that is a single archimedean class (i.e., is finite), as A + B, then there will be some natural number N such that N·B is greater than A - i.e., there will be some amount of money that is preferable to a human life, if lives and money are to be weighed at all. We could avoid this by treating "life preferences" and "money preferences" as separate utilities, but as surreals they can both be incorporated correctly into a single number (with finite and infinite parts).
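
A small sketch of the surreal encoding (my notation, not from the post): let an option's utility be

$$U = \omega \cdot (\text{lives saved}) + (\text{dollars}),$$

so that $\omega \cdot 1 + 0 > \omega \cdot 0 + m$ for every finite $m$ (since $m < \omega$), while between two options with equal life terms the dollar term still breaks the tie. One number encodes the lexicographic preference exactly.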

In this sense "bros before hoes" implies a sense of infinity in the world.

Comment by slider on Too Much Effort | Too Little Evidence · 2017-09-02T17:16:23.065Z · score: 0 (0 votes) · LW · GW

I guess with swans you can just say "go look at swans in Africa", which gives a recipe for a public experience that would reproduce the category boundary.

It is the case with seagulls that they are mostly white, but males and females have different ultraviolet patterns. Here someone who sees ultraviolet can easily tell the classes apart, while for someone who sees only 3 colors it will be nearly impossible. Then again, through some kind of (convoluted) training you might make your eye see ultraviolet (people with and without all the natural optics respond slightly differently to ultraviolet light sources, so there is a theoretical chance that some kind of extreme alteration of the eyes could raise it to clearly recognisable levels).

Now, ultraviolet cameras can be produced, and those pretty much produce public experiences (i.e., the output can be read by a 3-color person too). I am wondering whether the difference between "private sensors" and constructed instruments is merely that with constructed instruments we have a theory of how they work, while with "black box sensors" we might only know how to (re)produce them without knowing how they actually work. However, it would seem that sentences like "this-and-this kind of machine will classify X into two distinct groups Y and Z" would be interesting challenges to your theory of experiment-setting and would warrant research. That is, any theory that doesn't believe "in the training" would have to claim something else about what the classification would be (that all X would be marked Y, that the groups would not be distinct, that the classifier would inconsistently label the same X as Y one time and Z the next). But I guess those are only of indirect interest if the direct interest is whether groups Y and Z can be established at all.

Comment by slider on Is there a flaw in the simulation argument? · 2017-09-02T16:43:06.721Z · score: 0 (0 votes) · LW · GW

The lines were mistakenly quoted while they are actually my argumentation (needed a paragraph break rather than a line break).

I think I took the question to be that there is evidence about the distribution of how people get sorted into rooms. But I guess its being history makes it not apply to the present, while knowledge of a timeless mechanism would apply to both the present and history. But in these circumstances it might be the case that previous people were sorted with a different logic, and you are the first person under a brand-new logic. And thus we have 0 knowledge of the logic and can only count the possible cases.

Comment by slider on Intrinsic properties and Eliezer's metaethics · 2017-08-31T10:37:19.851Z · score: 1 (1 votes) · LW · GW

If I am given 3 location vectors and asked whether they fall on a given plane, I can't do it at a glance; I need somewhat involved calculations. Make the space high-dimensional enough and I will need to build much more assisting structure to make it apparent whether a set of 3 points makes a regular triangle or not.
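
For instance, a sketch of the kind of check that never happens "at a glance" (hypothetical points; the code is the same in 3 or 30 dimensions):

```python
# Sketch: checking whether 3 points form a regular (equilateral) triangle
# means computing all pairwise distances, whatever the dimension.
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def is_regular_triangle(p, q, r, tol=1e-9):
    sides = [dist(p, q), dist(q, r), dist(r, p)]
    return max(sides) - min(sides) < tol and min(sides) > tol

print(is_regular_triangle((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # True
```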

Comment by slider on Is there a flaw in the simulation argument? · 2017-08-31T10:25:50.919Z · score: 1 (1 votes) · LW · GW

It shouldn’t. After all, if everyone currently in rooms X and Y were to bet that they’re in room X, just about everyone would win.

Edit: separated the wrongly quoted part.

Yet if everyone bet that they are in room Y, the vast majority would win (1,000/1 vs. 1,000,000,000/10,000). In the scenario you can deduce that far fewer questions will be posed in room X.

You are trying to invoke that "right now" is always a relevant indifference-breaker. It might be that you are imagining that the people in room X will be posed the question NOW. But what if every X-er was asked the question only once, when they entered? Then what the contents of the room are NOW becomes irrelevant to the distribution of questions. We can keep the number of questions the same and keep more people in. In the limit, we can have the whole 10,000 stay for the whole duration while the single persons are driven through the other room. Still, more questions will be asked in total in the single-person room. But, maybe crucially, a new person entering the single room doesn't mean that everyone in the big room will be re-asked. What is proper to focus on is the first time each person is asked, and this happens only once for everyone in the big room (I guess we need to assume you would remember if asked a second time).
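
A minimal counting sketch of that point (the numbers are my reconstruction of the setup, so treat them as hypothetical): the relevant reference class is "people asked for the first time", not "people in the room at this moment".

```python
# Sketch: count first-time askings per room, not current occupants.
big_room = 10_000              # all stay the whole duration, asked once each
single_room = 1_000_000_000    # cycled through one at a time, asked once each

p_single = single_room / (single_room + big_room)
print(p_single)  # ~0.99999: almost all askings happen in the room that
# holds just one person at any given moment.
```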

Comment by slider on Too Much Effort | Too Little Evidence · 2017-08-31T09:56:41.978Z · score: 0 (0 votes) · LW · GW

So seeing many white swans makes you less prepared for black swans than someone who has seen 0 swans?

I do think that someone who seriously understands the difference between inductive and deductive reasoning won't be completely caught out, but I get that in practice this will be so.

It has less to do with rationality and more to do with "stuff that I believe now". If you believe in something, it means you will disbelieve something else.

Comment by slider on What is Rational? · 2017-08-28T18:31:44.017Z · score: 0 (0 votes) · LW · GW

Being lucky is not being rational. However, it is undoubtable that winning the lottery is mostly a positive outcome, and that it requires you to have purchased a ticket, which is a decision. Something that looks only at outcomes would applaud the decision to buy the ticket (perhaps unconditionally).

The definition of instrumental rationality is most commonly invoked when criticising those who employ a complex methodology for choosing correctly, where the methodology can be criticised, or where the agent had evidence that could have been construed as a reason to abandon the methodology. The criticism "before" instrumental rationality would focus on making an error in the application of a methodology, or on not having any methodology at all with which to make the decision. The common sentiment from these can seem like "have a methodology and apply it correctly". And it seems clear that there are better and worse methodologies, and one should try to apply the best available. And it seems that "I had a methodology and applied it" doesn't make one "rational" (more like "dogmatic").

It seems one could have a reasonable chance of being "rational" even with bad methodologies, if one actively switches up and upgrades one's "carry-on" methodology whenever one encounters new ones. It also seems that as the argument goes on, the focus on metacognition increases. This can also be seen to frame the previous criticisms in a new light. It's not that unmethodological decisions are "irrational" per se, but making them likely means that you earlier missed picking up a good methodology that you could have applied here to great success. So rather than "having" a methodology, it's more important to "pick up" methodologies, with it being less essential whether you currently have a good one. With consistent pickups you should in the future have a great-quality methodology - but that is the effect rather than the means.

Comment by slider on Could the Maxipok rule have catastrophic consequences? (I argue yes.) · 2017-08-28T18:13:40.364Z · score: 1 (1 votes) · LW · GW

It would seem that increasing certainty about "does the rocket launch successfully?" would be more important than "how early does the rocket launch?". Most acts that shoot for an early launch would seem to increase the risk that something goes wrong in the launch, or that the launched colonization would be insufficient or suicidal. Otherwise it just seems like the logic of "better to die soon to get to heaven faster, to have 3 days + infinity instead of just infinity". I think I ought to turn down any offers of shady moral actions for however many virgins in heaven (and this should not be sensitive (at least greatly) to the number of virgins). So if it is used for "let's get seriously rockety", I don't think the analysis adds anything beyond "rockets are cool".

Comment by slider on P: 0 <= P <= 1 · 2017-08-28T18:01:25.139Z · score: 0 (0 votes) · LW · GW

Threatening to inflict "infinite negative utility" is qualitative rather than quantitative. You have not yet said how much you would inflict. Contrast this with saying "I am going to inflict finite negative utility on you".

If you know transfinite amounts, it is possible to make a threat of infinite magnitude that it is rational, on expectation-maximisation grounds, to reject as implausible. If you threaten me with ω negative utility but want only a finite reward, and I think your plausibility is 1 per ω per ω, I would still be losing infinitely (relative to the expected threat) by handing over the finite amount. While this makes the technical claim false, it is in essence true: if the "ransom" is finite and the threat transfinite, then the plausibility needs to be (sufficiently) infinitesimal for the threat to be rejectable.
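
Making that expectation explicit in surreal terms: with a threatened harm of $\omega$ and a credibility of $1/\omega^{2}$,

$$E[\text{loss if you refuse}] = \omega \cdot \frac{1}{\omega^{2}} = \frac{1}{\omega},$$

an infinitesimal, while any finite ransom $r$ exceeds it by the infinite factor $r\,\omega$. The threat is rejectable exactly because the doubt is sufficiently infinitesimal.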

However, there might be the door that infinitesimal doubt is a different thing from a probability of 0. "Almost never" allows occurrences while having 0 as its closest real approximation (any positive real would be grossly inappropriate).

Comment by slider on Too Much Effort | Too Little Evidence · 2017-08-28T17:44:51.786Z · score: 0 (0 votes) · LW · GW

"there is an abundance of extreme strong evidence (experimental and mathematical)" means that we find the story that somebody actually performed some kind of interaction with the universe to hold the belief (very) plausible. Contrast this with faked results where somebody types up a scientific paper but actually forges or doesn't do the experiments claimed. One of the main methods of "busting" it is doing it ourselfs ie replication.

There are crises where research communities spend time and effort on the assumption that a scientific paper holds true. We could say that if this "fandom" does not involve replication, then its methodology is something other than science and thus they are not scientists. However, the enthusiasm for, or the time spent on, the paper doesn't make the belief any more scientific or reliable. If the "founding paper" and its derivative concepts and systems are forgeries, it taints the whole body of activity as epistemologically low-quality, even if the "derivation steps" (work done after the forged paper) were logically sound.

However "what knowledge you should be a fan of?" is not a scientific question. Given all the hypotheses out these there is no right or wrong choice what to research. The focus is more that if you decide to target some field it would be proper to produce knowledge rather than nonsense. "If I knew exactly what I was doing it would not be called research, now would it?". There can be no before the fact guarantees what the outcome will be.

Asking whether you should bother to figure something out totally misses the epistemology of it. Someone who sees inherent value in knowledge will endure moderate pain to attain it. But typically this is not the only relevant motivator. In the limit where it is the only motivation, there is never a question of "hey, we could do this to try to figure out X" that would be answered in the "don't do it" variety. Such a system of morals could, for example, ponder whether it is moral to ever have leisure, as that time could be used for more research, or how much more than full-time one should be a researcher, and how far one can push overtime so as not to risk burning out and being unable to function as a researcher in the future.

A hybrid model that has interests other than knowledge can and will often say "it would be nice to know, but it's too expensive - that knowledge is not worth the 3 lives lost that those resources could alternatively be used to save", or "no, we can't do human experiments, as the information is not worth the suffering of the test subjects" (Nazis get a science boost here, as they are not slowed down by ethics boards). However, sometimes it is legitimate to enter patients into a double-blind experiment where 50% of the patients will go untreated (or get only a placebo), as the knowledge attained can then be used for the benefit of other patients (the "suffering" metric comes out net positive despite having positive and negative components). So large and important knowledge gains can offset other considerations. But the reverse question, "how small a knowledge gain can be assumed to be overridden by other considerations?", can't really be answered without knowledge of what would override it. Or we can safely say that there is no knowledge small enough that it wouldn't be valuable if obtainable for free.

Comment by slider on 0.999...=1: Another Rationality Litmus Test · 2017-01-26T12:45:40.128Z · score: 1 (1 votes) · LW · GW

0.999... doesn't map onto 1 minus 1 divided by {0,1,2,3,4,5,...|}, i.e., onto 1 − ε.

However, those that disagree are obviously thinking of something akin to 1 − ε, which is not equal to 1. However, you can't refer to it with a decimal system (at least not the standard one). Arguments that refer to (specific) decimal places are therefore inapplicable. The reals are archimedean but the surreals are not: in the surreals there are elements a, b with a > b such that there is no N with a < b·N (arbitrarily large finite multiples of b are not guaranteed to get past a).
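
For concreteness, the surreal objects in question:

$$\omega = \{\,0, 1, 2, 3, \ldots \mid \;\}, \qquad \varepsilon = \tfrac{1}{\omega} = \{\,0 \mid 1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \ldots\,\},$$

and $1 - \varepsilon$ is smaller than 1 yet bigger than every real $0.9, 0.99, 0.999, \ldots$, so no decimal expansion can point at it. Taking $a = 1$, $b = \varepsilon$ witnesses the archimedean failure: no finite $N$ gives $\varepsilon \cdot N > 1$.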

Comment by slider on 0.999...=1: Another Rationality Litmus Test · 2017-01-26T12:31:59.965Z · score: 0 (0 votes) · LW · GW

It's a simple argument that tries to be rigorous. If I don't agree with it, I must disagree with some part of it. When I go over it step by step, there is a suspicious step.

The proof assumes/states that 9.99999... − 0.99999... = 9. I am unconfident enough with operations on decimals with infinitely many decimal places that I am not sure I agree. 9.99999... − 0.99999... could also be 8.00...009. In particular, I don't know whether you get the same object by multiplying 0.9999... by ten as by replacing its leading 0 with a 9.
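
For reference, the standard proof under discussion, with the suspicious step visible:

$$x = 0.999\ldots, \qquad 10x = 9.999\ldots, \qquad 10x - x = 9.999\ldots - 0.999\ldots = 9, \qquad 9x = 9, \qquad x = 1.$$

The step being questioned is the subtraction $9.999\ldots - 0.999\ldots = 9$: that the infinitely many decimal places cancel exactly, with no $\ldots 009$-style remainder. In the reals they do cancel, but that is exactly the part that needs justification rather than assumption.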

Understanding the proof well enough to agree with how it operates requires being proficient in what the reals are and their technicalities, and understanding that reals are what is meant.

Having a standard where things are not real if they can't be realised in the reals would make i and the complex numbers "unintelligible".

Comment by slider on Too Much Effort | Too Little Evidence · 2017-01-26T12:05:34.276Z · score: 0 (0 votes) · LW · GW

Powerful particle accelerators are not trivial things to produce at will. They cost a lot of money. They are produced not because the hypothesis seems strong but because the topic is important.

Someone who isn't willing to replicate the particle collisions has only the word of the one who has personally done so. Aren't you asking how you can do science without bothering to do experiments?

Which research fields are funded more is more of a political and values question than a question of rationality. In your situation the expected effort should be pretty stable and capped, but the value of the knowledge is vague. Even if you were 100% sure that a conclusive negative or positive would be produced if the effort were expended, it would remain an open question whether it was cost-effective.

For example, suppose that I want you to fund my research on the mating habits of spiders with $100 (the results of which you would then get). Is this a utilon-negative or -positive deal? Contrast this with research that would have a 50% chance of lowering the cost of producing a product from $90 to $80, where the problem is then how much one could afford to pay for it (and haggling over it).
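
Making the second case concrete: the expected saving is $0.5 \times (\$90 - \$80) = \$5$ per unit produced, so for an expected run of $n$ units the buyer can afford to pay up to about $\$5n$ for the research. The haggling range has a computable ceiling, unlike the spider case.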

Comment by slider on No negative press agreement · 2016-09-01T13:22:58.617Z · score: 0 (0 votes) · LW · GW

I think that when the target of the agreement is a spin approach, or a topic that should just be avoided, the thing agreed on already has an established name - a taboo.

It would also seem to be against a journalist's integrity to participate in such a thing to any great extent. If a lawyer were only allowed to sue when the verdict would be "not guilty", many would not take such an arrangement to fulfill proper legal investigation. But then again, if you phrase it as a plea bargain, it sounds a lot better. The extent to which I would view this positively is that it would make a person willing to share details on a hot-topic issue that they would not otherwise share. It is better for the issue to come out even if it comes with a spin. It would also ensure that at least some steelmen in the discussion get space. But by and large, I see precommitment to an opinion as a thing wrong with the press, not something to be strived for.

Comment by slider on New Pascal's Mugging idea for potential solution · 2016-08-18T17:14:11.850Z · score: 0 (0 votes) · LW · GW

If someone makes a claim about modest numbers, there is little need to differentiate between the scenario described and the utterance being evidence for that kind of scenario holding.

To me it's not that there is only a certain amount of threat per letter you can use (where 3^^^3 tries to be efficient), but that the communicative details of the threat lose significance in the limit.

It's about how much credible threat can be conveyed in a speech bubble. And I don't think that has the form of "well, that depends on how many characters the bubble can fit". One does not up one's threat level by being able to say large numbers and then saying "I am going to hurt you X much". In the limit, an act that would register in my mind as you credibly speaking a big threat would hardly be recognised as speaking any more. It's the point where, instead of making air vibrate, you initiate supernovas to make a point.
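
One toy way to formalise this (the decay shape is my own assumption, not from the post): if the credibility of a spoken threat of size $u$ falls faster than $u$ grows, say $P(\text{can and will inflict } u) \propto u^{-2}$, then the expected threat

$$u \cdot P(u) \propto u \cdot u^{-2} = \frac{1}{u} \;\to\; 0$$

shrinks as the stated number grows: saying ever-larger numbers conveys less threat, not more.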

Comment by slider on Fairness in machine learning decisions · 2016-08-18T15:11:23.375Z · score: 0 (0 votes) · LW · GW

Repeating a scenario from long ago.

You have a village that is pestered by bees but also farms crops. Let's hypothetically blow it out of proportion and say that a certain number of people die from bee stings and a certain number die from starvation.

And let's say that bees pollinate plants, and there are also non-poisonous pollinators around (such as, maybe, butterflies).

Somebody sees a small flying insect that has yellow and black stripes. He argues that because it looks like a bee, and bees frigging kill people, we should swat it immediately. Now consider the counterargument of someone who knows that a non-poisonous bee mimic also lives nearby. And let it be clear that if they swatted everything that looked like a bee, there would be significantly fewer pollinators left to make the harvest yield good, with the related starvation deaths.

When someone is swatting a bee lookalike, they are probably not thinking about the starvation deaths they are causing.

I think I left the matter in the state that just because a grouping gives information in the absence of new information, it doesn't lessen the amount of information that you do need. Even after not getting poisoned, you need to find food. Thus everybody agrees that people should bother checking what they are about to swat, and should be diligent both about swatting bees and about not swatting butterflies.

But what does not really stand for long is that somebody who summarily swats all bee-likes is being diligent - ALL WHILE everybody agrees that swatting a bee is more right than not swatting it. Coloration is not the only info you can deduce from bugs. But mimicry works because it takes significantly more cognitive effort to make those distinctions. Thus, how correctly you use the easily available information doesn't save you from failing to gather the hard-to-get information, or from how poorly you performed on it.

Thus the village is better off educating people about the telltale signs of the mimics, and that does not detract from the village's need to keep remembering that bees are poisonous.
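
A minimal expected-deaths sketch of that trade-off (every number invented purely for illustration):

```python
# Sketch: "swat everything striped" vs "check for mimic tell-signs first".
striped_insects = 1000
bee_fraction = 0.5                  # the rest are harmless pollinator mimics
deaths_per_bee_left = 0.002         # sting deaths per bee not swatted
deaths_per_pollinator_lost = 0.004  # starvation deaths per mimic swatted

def expected_deaths(swat_bees: float, swat_mimics: float) -> float:
    bees = striped_insects * bee_fraction
    mimics = striped_insects * (1 - bee_fraction)
    return (bees * (1 - swat_bees) * deaths_per_bee_left
            + mimics * swat_mimics * deaths_per_pollinator_lost)

print(expected_deaths(1.0, 1.0))  # swat all striped insects: 2.0 deaths
print(expected_deaths(1.0, 0.1))  # learn the tell-signs first: 0.2 deaths
# The "diligent" blanket policy loses to the costlier discrimination policy,
# even though swatting any individual bee remains the right call.
```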

Comment by slider on Fairness in machine learning decisions · 2016-08-18T14:30:42.584Z · score: 0 (0 votes) · LW · GW

"If you want accurate risk assessment, you must take into account data (income, savings, industry/job stability, other kinds of debt, etc.) that correlates with ethnic averages."

While not strictly true, this is true in essence. The failure point is telling, though. What you need is to make generalizations that are more general than single individuals. That the categorization dimension needs to be ethnicity is not forced at all. Why would it not be gender? Why is it not whether you have a certain gene?

When you take such a grouping of individuals and say "this average is meaningful to the decision that I am going to make", that is no longer strictly needed.

In dissociated theoretical talk you could argue for, and back up, some groupings as being more meaningful than others. But the whole set of discrimination problems comes from people applying a set of groupings that are just common or known, without regard to their fit or justifiability for the task at hand. That is, we first fix the categories and then argue about their ranks, rather than letting the rankings define the categories.

Comment by slider on Market Failure: Sugar-free Tums · 2016-06-30T10:31:49.647Z · score: 0 (0 votes) · LW · GW

Often in this kind of talk, success is attributed to individual rationality while failure is attributed to lack of technology and communal stupidity.

Comment by slider on Meme: Valuable Vulnerability · 2016-06-29T17:26:01.161Z · score: -1 (1 votes) · LW · GW

Indeed, I mean just the antonym, whatever that ends up meaning.

You have to consider that people who do not enter vulnerable states will try to spin that as being the right thing to do. You can get things like the PATRIOT Act through by referring to "domestic security" when people are terrorised by attacks. You could reasonably question whether it makes sense to forgo freedom to gain security. There is this quote:

"People who would trade a little bit of freedom in exchange for security, will end up losing both and deserve neither".

But this kind of logic did not end up prevailing. It's not like people admitted they are too pussy to be the land of the free; they interpreted it as the circumstances forcing their hand.

Comment by slider on Meme: Valuable Vulnerability · 2016-06-29T17:18:05.803Z · score: 1 (1 votes) · LW · GW

Well, I mean some mental strategies dare not risk the possibility of depression, sadness, etc. To that kind of mindset the "emotional cost" is very relevant. It is associated with a fragile ego. But it does mean that in certain situations there is a clear option that is emotionally the least taxing, even if it is not the most productive or most truthful option. Scapegoating your way out of harm usually leads to things like irresponsibility. But it is tempting, and for many the default action if no corrective action is taken.

Comment by slider on Are smart contracts AI-complete? · 2016-06-28T09:05:36.555Z · score: 0 (0 votes) · LW · GW

It's usually seen as very rude to go exercise violence on another state's soil.

Comment by slider on Meme: Valuable Vulnerability · 2016-06-28T09:03:55.469Z · score: 1 (1 votes) · LW · GW

Emotions usually have a use. Being emotionally "secure" means choosing your mental actions so as to avoid the possibility of negative emotions. If you allow yourself to be emotionally vulnerable, you do not censor your emotions, and their usefulness, away from yourself. That is, you allow the state of the world to influence you and do not let your self-identity hijack your mental state.

Comment by slider on Are smart contracts AI-complete? · 2016-06-22T23:33:08.926Z · score: 0 (2 votes) · LW · GW

From what I gather, it's a contract that is so specific there isn't any room for interpretation.

Normal contracts lean more strongly on a "reasonable human being fluent in the language it's written in". This leaves some of the basic concepts somewhat open, and some fitting is needed to make specific sense of the circumstances. For example, we can refer to "chairs" without there being any rigorous definition of one. But a smart contract can rigorously define complex hypotheticals about what might happen, in a way that doesn't leave any ambiguity.

I guess the theory of it could be much more sound if humans were capable of logical omniscience. But given a piece of code, you probably only understand its main modes of functioning. If you brought up some weird edge case of how the specification binds you and asked "did you really will this?", a typical human can't answer yes to all such questions. It's like committing to following the Bible and then afterwards finding out it involves stoning people (and then not being so willing after learning this fact). Someone who doesn't know that marriage influences inheritance, etc., is not capable of agreeing to a marriage, even if he says yes during some ritual. But marriage is simple enough that it can be externally verified that a person does understand it, or at least ought to.

In a way, we do something similar when we include terms of service that are boring enough and skippable enough that lots of people do not read them. But here the whole text is readable. It is just so dense in mathematical/technical coding that reading it comprehensively would take significant effort, which is not usually expended. It's like saying "here, read this Schrödinger equation. Congratulations, now you understand quantum mechanics completely". But the comprehension can be carried out, and the ability to pre-scan the code for known tricks/bugs makes it not automatically socially useless. But if the code later turns out to have a property that neither of the parties was aware of, and could not have been expected to become aware of upon pre-scanning the code, are they legally stuck with following it? You either have to acknowledge that the parties have incomplete information about the functioning of the code, or you have bindings that nobody knew about or intended. Which is pretty bad for informed consent.
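
A toy example of the kind of edge case meant here (an invented contract clause, not anything from the post): a payout rule both parties "read" in full, yet whose behaviour on small inputs neither of them willed.

```python
# Sketch: a toy "smart contract" clause with an unintended binding.
# Clause: the worker receives a 3% fee per task, rounded down to whole cents.
def fee_cents(task_value_cents: int) -> int:
    return (task_value_cents * 3) // 100  # integer division rounds down

print(fee_cents(10_000))  # 300: behaves exactly as both parties imagined
print(fee_cents(30))      # 0: any task worth under 34 cents pays nothing
# Split a $100 job into 10,000 one-cent micro-tasks and the total fee is $0.
# The text was fully readable all along; the binding was still unintended.
```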

Friendliness in Natural Intelligences

2014-09-18T22:33:31.750Z · score: -4 (7 votes)