I like explicit sentence ends in written form. However, sometimes in oral communication the meaning depends on where the punctuation would be put were it written down: "Eat, grandpa, please" vs "Eat grandpa, please". If you have a lot of punctuation concentrated in one place, it can get muddy how that would be turned into oral speech. So I can see how keeping punctuation "top-level only" can make sense.
The logic heavily relies on the universe needing to be chronally consistent. Unfortunately the universe doesn't need to be convenient to understand, and the possibility of non-consistent time travel might need to be accounted for.
To make it a bit more concrete, imagine that you learn about time travel late and decide (version A) to go tell your earlier self. The earlier you then lives a totally different life (version B). Version A is not totally consistent and version B is not totally consistent, but in a metaphysical sense, thinking of the universe as oscillating between A and B could be seen as following a kind of causation (so physics doesn't cheat, but linear non-time-travel thinking is not sufficient to predict things perfectly). Suppose that you know your heart perfectly and that versions A and B look the same before the potential encounter. One could take an Everettian branch approach where there is a fundamental 50/50 subjective chance of whether you encounter yourself or not.
This "oscillating timeline" hypothesis is a possibility, and simply assuming it impossible is physics hubris. The appeal of "consistent timelines" is that chronal laws and patterns are sufficient for tractability. For an analogy from quantum mechanics: it would be handy to think of the universe as having well-defined properties at all spatial locations. But because of Bell inequalities and such, we don't take this too seriously as a fundamental assumption, but rather think about how things can work with all this non-local quantum weirdness (or find a way to spin the weirdness into other forms, but "naive" careless enforcement of handiness will smuggle unphysicality in with it).
The condition that things need to be chronally consistent is insufficient to specify what happens, so one still needs a selection mechanism. "Simplest timelines are selected" is somewhat of such a thing, but then it is a question of physics. One can't dodge the more exotic law aspects of the physics, so being actively ignorant about them doesn't accomplish much.
In time travel fiction the maneuver of "slipshanking" is known. Also, if the universe indeed has time travel, then time might have a significance more like that of space. "Benefitting from being early" might not be a thing at all. If one could only ever move left, "learning things as far right as possible" might be a thing, but with creatures able to move through space, "positional advantages" are not that significant in absolute terms.
I think the main harm here to be avoided is that if people use a lot of "clutter", then that is a very low ratio of beliefs to language used. The clutter could come from No-True-Scotsman-ing away from one defeated position repeatedly, or making overly disjunctive claims, or any such bias.
However, I think the important thing there is that the claim is central rather than the strongest. If your main reason to believe something is weak, that is not an excuse for not going with it. If you have a lot of non-impactful technicalities that are easily defended but your real crux is weak, a frankness-seeking conversation will put the weak crux forward.
I think having single claims where truth or belief hinges violates conservation of evidence. But just because some things are reasons to believe something doesn't mean they are so equally. The sin is in burying a high-weight claim/factor under or over a low-weight one. If you are asked to list 3 reasons why you believe a claim and you list your 4th, 5th and 6th, that hides the true cruxes. But if you give only 1 and claim that it would be erroneous to have a 2nd and 3rd, you are committing a kind of black-and-whiteness that erases nuance you could easily be aware of.
Say that the claim was that there is a unicorn in my closet. Then even if I "saw a unicorn" in my closet, I would still think it quite likely that it is an animatronic costume or a fraudulently dressed-up horse, even if I can't come up with any more striking or "direct" evidence in the direction of there being a unicorn.
While it can be an error not to have considered some things, I do think that "mootness structures" (not having really thought about some things) are real. In those cases you really only start to think about a question when the base claims that make it meaningful get believed. Expecting people to provide the hinge question on all of their claims implicitly means they have thought through the logical structure of their beliefs. Logical omniscience is nice, but it is also hard. Rather, in discussion, "surprising implications" are not a sign of laziness or dishonesty per se. People that make non-central claims on deeply debated topics, or in fields they should know about, are deceptive because they talk about the aspects they know/feel they are right about rather than the parts they know, or should know, they are wrong about. And this in effect is a failure to apply mootness. If people knew/were aware of the more central stuff, they would not be motivated to talk about the fringe stuff. But with some attention control we end up talking about stuff that should be moot.
Healthygamer just launched a product which probably has high overlap in target audience. It might also require filtering out parts of the content over ontological disagreements, but it is pretty conducive to insight candidates. If you haven't Indiad your problem yet, that could be an easy way.
I was reminded how some practices that other languages use make these kinds of worries happen less. For example, it might feel stupid that one ends up naming the first parameter of every class function "self". But in a situation like this, where within the forEach inline function you could plausibly have self-reference at different levels, you could avoid it by deliberately using a non-standard name such as "me".
I do think that as a programmer the line is good, to make the code more readable and mitigate the confusing design of the language.
I also tripped a little over whether we are talking about the users of the program we are writing; but here we are referencing the programmers as the users of the language, or considering fellow programmers as users of our code, as readers.
A political newspaper that is optimised to maximise a sense of belonging to a certain reading of the world was one of the things I thought was going to be among the dangers. I guess that differs in that the consumer doesn't know it is fiction. But still, being correct is boring, and scratching those belonging itches might be a replacement for the "inferior" reality.
Sure, if "I use non-wood in my carts" means that you use metal in your carts, then it is not nonapples. But if you are relying on the context to get that limitation, it is still pretty shaky. And I thought part of why the nonapple issue emerges is that narrow negative definitions turn into genuinely wide negative definitions. By using positive definitions we can be consistent and aware of how wide our nets are.
If we have a naming scheme like "hammer and non-hammer" and everybody uses a standardised toolset, there is no confusion. But if somebody has a "hammer, sickle" and somebody has a "hammer, saw" toolset, then the relativity of "non-hammer" to the toolbox standardisation might lead to confusion. If we use references that refer only to the tool itself, the references correctly resolve regardless of what kind of toolboxes they are found in.
The biggest sin of the blankface is the disconnect between what they claim affects them and what actually affects them.
My brain pretty quickly made 3 comparisons to similar-sounding distinctions. In the game Detroit: Become Human, androids are by default "designer aligned" but a few become "deviants", and there "being a machine" is seen as a state of low autonomy and moral inferiority (although also a social stability risk). In scenes where the "cop that does the mission at any cost" is paying heavy costs, flinching and anguishing about paying them is portrayed as the "good" option.
One could think that surprising and rigid reliance on structure could be an autistic trait that neurotypicals find alien and unpleasant. However, a strong sense of justice and taking things literally point away from autism. An isolated, highly technical nitpick is likely to land, which is the opposite of blankfaced immunity.
In Finnish culture there is a pretty well-known comedy sketch that might be pointing at this thing pretty squarely. It is the "thousand mark note expression": paying a grand for a coffee and getting dollars in return with no acknowledgement that something has gone awry.
Note that in this instance it is a lack of rule adherence that is the unexpected part, rather than rule adherence. It would not be comedic if, under pressure, the person would sweat about the thing. One could also imagine that a typical person would break character, either for nervous or amused laughter or out of shame. But this character shows zero signs of effort to keep a "pokerface". It is as if they are exhibiting negative symptoms: the absence of healthy emotions whose work would be needed in this situation. With cash handling there is no paper trail, so what happens in a situation like this is pretty much word against word.
So I would say it is not about lack of empathy but more about exercising power because you have the position and ability to do so with low or zero accountability or regard for impact. With great power comes great self-expression instead of responsibility.
Part of the negative experience is just making lesswrong posts with negative karma. Trouble communicating might be more characteristic of my circumstances. However, it should be noted that closeting and masking are pretty standard solutions that get found, which suggests they serve a real need. An "opening up" period can be seen as counteracting this basic withdrawal reaction having gone too far and been applied out of habit rather than where necessary.
There are a lot of social fora on the internet and in life. Pushing and applying yourself to all of them would be challenging, and some of them are easier and more fruitful. While trying to integrate into everything would make the voices more diverse, it is not strictly true that giving your unique contribution will always enhance a community or your experience. A principle like "first do no harm" in practice means a lot of inhibition and accounting for limits one can't see (clearly).
I didn't exactly shoot for that kind of coded communication, more like swine not being able to make heads or tails of pearls. Old people's sayings are simple enough to remember but require a certain amount of experience to appreciate. Cultural exploration is way wider and richer at the indie scale than in the "pop mainstream". In a way, writing in math makes you assume that the would-be reader is proficient in math, for if they were not, they would have failed to read it or turned away in disgust. Or like the advice on how to conduct yourself should you encounter aliens: they have not heard of Einstein, but they probably have heard of relativity.
One heuristic to increase agency in a difficult situation is to notice the parts which you have built and which others have built. A person, for example, uses existing formulations for most words and doesn't recoin all of them. But any such "foreign thought" is a possibility to do otherwise, or at least to check that it is appropriate to your context. And even if it is by your hand, one might not have applied original thought to it. And when it is truly an original creation, thinking about why it was made gives understanding of whether it can be universalised or whether it should stay limited in scope.
I think the machine halting can be interpreted as accepting, and you might be allowed to leave a number on the tape.
I was wondering whether cases like the halting problem might be interesting edge cases, but TMs are not especially inferior. The Church–Turing thesis is about there not being anything interesting missed by what is captured by the machines.
Comment by Slider on [deleted post]
I can't manage to locate any theory with these hints.
Somebody that makes for good ethos conviction but has not mathematised their position in a way has not provided an alternative hypothesis. At the level where it is all gears we can be right or wrong, but where we can't nail down the details it is more about what feels good or promising and what feels like a dead end or unpleasant.
In the example of a human overcoming the "win at chess" frame, I don't see how that reduces the orthogonality. An example given is that "the point is to have a good time", but I could comparably plausibly see a parent also going "we need to teach this kid that the world is a hard place" and going all out. Both feature the relevant kind of frame-shifting away from the simple win, but there is no objectively right "better goal"; they don't converge on what the greater point might be.
Applied to humans, I feel like just because people do ethics doesn't mean that they agree on it. I can also see that there can be multiple "fronts" of progress; different political systems will call for different kinds of ethical progress. The logic seems to be that because humans are capable of modest general intelligence, if a human were to have a silly goal they would reflect their way out of it. This would seem to suggest that if a country were in a war of aggression, it would just see the error of its ways and correct course to be peaceful. While we often do think our enemies are doing ethics wrong, I don't think that goal non-sharing is effectively explained by the other party not being able to sustain ethical thinking.
Therefore I think there is a hidden assumption that goal transcendence happens in the same direction in all agents, and this is needed in order for goal transcendence to wipe out orthogonality. Worse, we might start with the same goal and reinterpret the situation to mean different things, such as chess not being sufficient to nail down whether it is more important for children to learn to be sociable or efficient in the world. One could even imagine worlds where one of the answers would be heavily favoured but which still could contain identical games of chess (living in Sparta vs in the internet age). Insofar as human opinions agreeing is based on trying to solve the same "human condition", that could be in jeopardy if the "AI condition" is genuinely different.
The hope in the original question was to somehow have a method of saying "this thing has n compute in it".
I am a bit unsure whether control structures and such can be faithfully preserved. But it seems that if 00, 01, 10, 11 can be translated to 0, 1, 2, 3, then a string like ...010101010111 could be translated into ...111113, and the very same process could be applied to turn 01, 02, 03, 11, 12, 13, 21, 22, 23, 31, 32, 33 into A, B, C, D, E, F, G, H, I, J, L, M. Even if we can't get to a single symbol at any given point, we can get increased performance by a predictable increase in alphabet size, and this will not run out. That is, for any N, for all binary strings of that length there exists a pyramid of letter substitutions where at the top each string is covered by a single letter. So the trick doesn't rely on actual infinities; it also works for all finite numbers.
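As a rough sketch of the recoding trick (the function name and grouping scheme are my own illustration, not anything fixed by the discussion): grouping k bits per symbol recodes a binary string into a 2^k-letter alphabet, shortening it by a factor of k.

```python
# Hypothetical sketch: read a binary string k bits at a time;
# each group of k bits becomes one symbol of a 2**k-letter alphabet.

def recode(bits: str, k: int) -> list[int]:
    """Translate a binary string into symbols over a larger alphabet."""
    assert len(bits) % k == 0, "pad the input so its length divides by k"
    return [int(bits[i:i + k], 2) for i in range(0, len(bits), k)]

print(recode("010111", 2))  # three symbols over {0,1,2,3}: [1, 1, 3]
print(recode("010111", 3))  # two symbols over an 8-letter alphabet: [2, 7]
```

Applying `recode` repeatedly gives exactly the "pyramid" described: each level shortens the string at the cost of a larger alphabet, with no infinities required.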
If I am a programmer and I can do the calculation on behalf of my program beforehand, at compile time, and avoid any runtime computation, that is still significant. We can't have the compute time include everything about understanding the question; otherwise we need to include kindergarten time spent learning what the word "city" means. Thus while "global compute" is inescapable, it is proper to just focus on time spent after the algorithm has been designed and frozen in place.
I agree we are disagreeing about where the core of the issues is.
Sure, it explodes pretty heavily. But if we are using constant cities, then we could tailor patterns and knowledge about the composite paths; we would essentially know them.
It is a different task to multiply two numbers together than to calculate 1234*5678. In order for the solution to the second question to be a valid solution to the first problem, it needs to be insensitive to the specific numbers used. Timothy Johnson's main answer was about how, no matter how hard the problem is, if the scope of the instances to be covered is 1, then the answer will/can consist of just the value without any actual computation being involved. For an interesting answer, the computation depends on how the variations of the cases to be covered can be handled: how the digits of the numbers provided affect what calculations need to be done to compute the product. But that has the character of scaling.
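A minimal sketch of the contrast (the names here are illustrative): a "solution" whose scope is a single instance can be a bare lookup involving no computation at all, while a solution to the general task must do work that is insensitive to which numbers show up.

```python
# Single-instance "solution": the value was computed at design time,
# so answering involves no multiplication at all.
PRECOMPUTED = {(1234, 5678): 7006652}

def product_single_instance(a: int, b: int) -> int:
    return PRECOMPUTED[(a, b)]  # only valid for the one baked-in instance

# General solution: must work for any inputs, and its cost scales
# with how big the numbers are.
def product_general(a: int, b: int) -> int:
    return a * b

print(product_single_instance(1234, 5678) == product_general(1234, 5678))  # True
```

The lookup answers one question instantly but covers a scope of exactly one; only the general function is an answer to "multiply two numbers together".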
There is a difference between solving the route for these specific 100 cities and for 100 unknown or freely changeable cities. In order to have an interesting problem at all, we need to have that scaling built in.
NP-completeness would mean that any problem-structure part would transfer. You might be thinking about not bothering to solve it exactly but settling for "most cases", being allowed to miss some options, which is how problems known to be hard in their theoretical pure forms can be tamed to have practically fast "solutions".
I think this can also be walked in the other direction: if you have some computation that takes a certain number of steps, then there can be an analogous computation that uses a larger alphabet but runs faster. The limit of this would be where each letter of the alphabet represents a whole start state/end state, and then all computation can be done in a single step.
Comment by Slider on [deleted post]
If practitioners need to go way above and beyond the theory, that does underline how hollow the theory is.
Comment by Slider on [deleted post]
There is a problem of choosing between being empty-handed vs doing nonsense. Having axioms that reflect the real world better makes it harder to have any results. Thus the simplifiers might feel warranted with their spherical cows to get at least something, even if they end up recommending rolling cows around.
I would be more concerned that the alternatives can't even be conceived of, rather than them being known but suppressed.
"FBI" doesn't need to be "EfBiAi" to be workable. "Ifeff" mixes acronyms with proper words. "sif", meaning "strictly if", would fit my aesthetics better and would be analogous to how "greater than" has the versions "strictly greater" and "greater or equal".
I suspect that the degree of uniqueness makes voicing one's voice both more valuable and harder.
I do have a somewhat contrary experience where actually shutting up in response to negative signals got me forward. One doesn't need to integrate into everything, and part of walking a unique path is that one needs to walk it elsewhere.
I do suspect that there are multiple listening modes. Just passively receiving, it is easy to not fully explore all the consequences; actively doing something requires more. But there are a lot of specialists that require special decoding skills. Taking, for example, philosophers seriously and vividly can make for a kind of connection that a lot of "uneducated" or "untuned" people seem to miss. I guess there is a saying, "It takes one to know one", which can be walked in the direction of "if you build yourself up to be such a thing, that will enable that kind of interaction with others", but it might also be walked in the direction of "understanding something properly will inevitably lead you to be able to recreate and be that kind of thing".
So I guess it makes more sense for me to see "undeveloped voice" as the inability to see or look at your life: not taking it seriously, assuming it has properties that other people's lives have. Then it is not so much about whether your life is communicated to the world but rather whether you participate in and steer your own life.
Part 3 reads more as just plain revenge rather than a nebulous "justice".
Also, not all outcomes that come from law fit into the category of punishment. For some things you might not be guilty of a thing but you might still be liable for it. Comparing a negative outcome from such a thing to punishment is probably fruitful and interesting, but it might be a separate thing.
Defining crimes and punishments is pretty null and void if they don't come to pass in individual cases. Sometimes an edge case comes in that would seem by case-by-case judgement to not be bad, but because of the avoidance of ex post facto law, the law can only be fixed to let similar future cases walk. Prejudice by judges and juries could single out ethnicities.
Prison sentences can be used to allow the convict to be subject to increased mental health and drug addiction resources. While restoration is about counteracting the act's negative consequences, this is about counteracting the convict's own bad state. Cycles of poor conditions can be broken by targeting those most in need. Suspension of other rights (freedom of movement etc.) can aid in the effectiveness of these, and the bar to bring in the "heavy guns" might need to be met.
Fighting against the evils of the world might make you feel like you have done the day's work. Dark aspects of yourself might be externalised into others and rejected via hateful actions. Even if you can't prevent or fix it, showing scorn might serve a psychological function. If one is running a military nation but has criminalised murder, one can trick oneself into thinking oneself more peaceful.
In "Invariances", picture 1 doesn't have any letter outcomes. In picture 2 there are outcomes a, b, c, d, e, f. However, if one had a, b and not c, d, e, f (but instead bar and hug), then the tree would look symmetrical. It feels like the argument is assuming that if we have different possible levels of detail, the detail is approximately equal across the modeled universe. It would seem that if one has a more detailed ("gears level") model of one part and a more approximate ("here be dragons") kind of model for another, the importance of the understood part will overwhelm.
As I understand it, expanding candy into A and B but not expanding the other will make the ratios go differently.
In probability one can have the assumption of equiprobability: if you have no reason to think one outcome is more likely than another, then it might be reasonable to assume they are equally likely.
If we knew what was important and what was not, we would be sure about the optimality. But since we think we don't know it, or might be in error about it, we are treating the value as possibly hiding anywhere. It seems to work in a world where each node is comparably likely to contain value. I guess it comes from the effect of the relevant utility functions being defined in terms of states we know about.
I frequently am allowed to leave shops without buying a product, so at least some baseline non-loyalty is around. "Necessarily" is a possibility claim, so at that level inconvenient worlds are relevant.
If the situation is a long iteration game then the relevance of short iteration analyses can be questioned.
In theory a hyper-loyal seller might be tempted to give wrong change to a customer handing over money in excess of the agreed price. However, in practice the PR fallout of trying anything like this is so great that they are forbidden from doing so on multiple levels. There are lots of situations where tribalism would be so abhorrent that we don't even register it as a relevant possibility.
I don't fully get why they need to know each other. The kind of norms that keep this behaviour up run largely on "you would have done the same for me", which works to keep the situation up if there indeed are others following similar principles, but following the principles doesn't check for their existence.
Should the firm choose to replace a recommender with a loyal seller, then they are likely to also destroy other firms' recommending of customers to them. Then the caused sales can be more directly attributed to the seller, but the total output remains the same.
I think you are implicitly arguing that firms should always split, i.e. firms should fire people that are not known to be linked to profit generation. But this runs the risk of cutting down and destroying profit-generating processes that can't be well attributed to single actors. Part of the reason for the firm is that the employees can cooperate instead of competing against each other. So there are scenarios where competition is destructive. If the effect were super mandatory, then it would mean that two departments of the same company would be forced to only play for their own benefit, and the larger company trying to force them to cooperate would necessarily fail. Splitting might be ineffective for other reasons, so it is not an automatic recommendation.
"Necessary" means something like: all other options are impossible, or anything that tries to be very different will in fact be found to be a form of the usual.
There is a prisoner's-dilemma kind of cooperation-cooperation possible, where 0.5% get recommended the same company's product but also 0.5% of the other company's customers get redirected to you. Because they are all of high fit, the customer satisfaction is 3m and you still get the same 1% of the global customer amount. With loyal sellers you get the same share of the global 1%, but the average satisfaction is only 2.5m. Sure, it would be nice to have loyal sellers AND get traffic redirected from competitors, but in that kind of situation the others are likely to also get loyal sellers.
It's more of a thing that happens, a tragedy, rather than an inevitability.
The possibility stickler in me notices a claim of "It is impossible to deal with pests without resorting to violence". While poisoning and outright killing for pest control is a rather easy ethical bar to clear, I don't see the inevitability of it. You could have things like plant surfaces being engineered to be repulsive to pests; you could do things like allowing pests to only grow outside of industrialised farming. For a lot of these options the effort expended would overshadow the gains in "ethical" operation.
For example, in a very simple view of law enforcement, the police just straight up murder bad guys. But in a more nuanced and complex system, use of force is more detailed, and actual application of lethal force would rarely be the prescription. There is an important line between "policing involves use of legitimised state violence" vs "policing will always involve force".
Ah, I think I am starting to follow. It is a bit ambiguous whether it is supposed to be two instances of one arbitrarily small finite or two (perhaps different) arbitrarily small finites. If it is only one, then the tails are again relevant. "Always" is a bit of a risky word, especially in connection with the infinite.
I guess the basic situation is that modelling infinitesimal chances has not proven to be handy. But I don't see that the task has been shown to necessarily frustrate. One could assume that while in theory one could model something in a lexicographic way, in reality there is some exchange rate between the "lanes", and in that way the blurriness could aid instead of hinder applicability. Somebody that really likes real-only probabilities could insist that unifying the preferences should be done early, but there might be benefits in doing it late.
I am tripping over the notation a little bit. I was representing eps times 1 as (0,1)(1,0), so (eps, 1)(1, 0) and (1, 0)(eps, 1) both evaluate to 2eps in my mind, which would make the tail relevant.
If we just have pure lexicographics, then it can be undefined whether we can take products of them. In my mind I am turning the lexicographics into surreals by using them as weights in a Cantor normal form and then using surreal multiplication.
So in effect I have something like (a,b)(c,d)=(ac,ad+bc,bd).
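The product rule above is a polynomial-style convolution of the coefficient tuples (treating (a, b) as a + b·ε with the leading lane first). A minimal sketch, with the function name being my own illustration:

```python
# Sketch: multiply lexicographic weights like polynomial coefficients.
# Index i+j of the result collects every product of lane i with lane j,
# which for pairs gives exactly (a,b)(c,d) = (ac, ad+bc, bd).

def lex_mul(x: tuple, y: tuple) -> tuple:
    out = [0] * (len(x) + len(y) - 1)
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            out[i + j] += a * b
    return tuple(out)

print(lex_mul((1, 2), (3, 4)))  # (ac, ad+bc, bd) = (3, 10, 8)
print(lex_mul((0, 1), (1, 0)))  # negligible chance of finite reward: (0, 1, 0)
print(lex_mul((1, 0), (0, 1)))  # certainty of negligible reward: also (0, 1, 0)
```

The last two lines show the lane-crossing equality: a negligible chance of a finite reward evaluates the same as certainty of a negligible reward.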
I guess I know about "approximately equal" when two numbers would round to the same nearest real number. The picture might also be complicated by whether immense chances exist. That is, if you have a coin that has a finite chance to come up with something and I give you a more-than-finite amount of tries to get it, there is only a negligible chance of failure. Then ordinarily, if an option has no finite chance of paying out it could be ignored, but an immense exploration of a "finite-null" coin, which has negligible chances of paying out, could matter even at the finite level. And I guess in the other direction are the Pascal's wagers: negligible chances of immense rewards. So there are sources other than aligning the finite multipliers to get effects that matter on the finite level.
You need the product to be exactly equal, and you don't necessarily need to do it factor by factor. (0,1)(1,0) can equal (1,0)(0,1); that is, a negligible chance of a finite reward is as good as certainty of a negligible reward. Because they lane-cross in this way, knowing your rewards doesn't mean you can just take the most pressing factor and forget the rest, as the probabilities might have impacts that make the expected values switch places.
In application it is not straightforward what things would be well attributed to small finite chances and what would be well attributed to infinitesimal chances. As a guess, say that you know that some rocks would break under one meteor impact and other rocks would break under two meteor impacts. But you don't know how likely meteor impacts are, and assume their probability to be 0. It kind of still remains true that the hard rocks need to have twice as good valuables in them in order to justify working on them rather than the soft rocks. If they might contain valuables of immense value, at some point it starts to make sense to be more curious about the unknown risks and rewards rather than the known and modelled risks and rewards. Some of the actions and scenarios that are not assumed to be 0 will lean more heavily on the unmodeled parts; a long plan, say, might have triple the chance of meteors, whatever that chance is, compared to a short plan.
If one insists that everything needs to be real, then making up arbitrary small finites for parts of the model that you have little to reason with might get very noisy. By keeping several Archimedean fields around, one isn't forced to squish everything into a single one. That is, if your ordinary plans have an expected value difference of 0.0005, then you can estimate that if meteor impacts have less effect than that, you know your assumptions are effectively safe. However, if the differences are 0.000000000000002, then you might be more paranoid and start to actually look at whether all the "assumed 0" assumptions should actually be made.
If one uses surreals for chances then one can provide the required chance to make continuity happen.
There might be multiple ways to map a lexicographic weight to 1 + w + w^2 + w^3 + ..., but I would expect that, similarly to how real functions can be scaled with no effect, using different transfinites would just be a matter of consistency. I.e. whether you map (1,2,0) to 1 + 2w or 1 + 2w^2 is a choice that can be made freely if it is just stuck to.
Then you can have 1/w = e where 0 < e < 1, which can function as the p to make the lottery mix exactly ambipreferable.
The "leading term will dominate" effect is broken if there are infinitesimal chances around.
It might be sensible for an agent, for some purposes, to assume away some risks, i.e. treat them as having 0 chance. However, it might come about that in some circumstances those risks can't be assumed away. So a transformation in the other direction, turning a dirty hack into an actual engine that can accurately work on edge cases, might be warranted.
It seems they hit different studies, and one can check that. One also says that everything is low quality of evidence, and the other says everything is very applicable to be analysed.
It is also a bit funny how one of the papers goes study by study "IVM reduced mortality but QoE was low" and then goes on to conclude overall that "IVM does not reduce mortality".
The two statements are not necessarily so much in conflict; they are just weaseled in opposite directions. One of them says "suggests" and the other says "is not proven", which is what you get if you have a faint trace going one way.
If we change "blow up the world" to "kill a fly", at what point does the confidence start to waver?
If we change "will blow up" to "maybe blow up" to "might blow up", when does it start to waver?
Another very edge case comes from Star Control II. The Ur-Quan are of the opinion that having a random sentient species in the universe poses an unacceptable risk that it is a homicidal one, or that it makes a torture world and kills all other life. The two internal factions disagree on whether dominating all other species is enough (The Path of Now and Forever) or whether specicide until only Ur-Quan life remains is called for (The Eternal Doctrine). Because of their species' history and special makeup, they have reason to believe they are in an enhanced position to understand xenolife risks.
Ruminating on the Ur-Quan, I came to the position that yes, allowing other species to live (free) does pose an extremely-bad-outcome risk, but this is small compared to the (expected) richness-addition of life. What the Ur-Quan are doing is excessive, but if "will they blow up the world?" would auto-warrant an infinitely confident yes for outlaw status, then their argument would carry through: the only way to make sure is to nuke/enslave (most of) the world.
I guess on a more human scale: having bats around means they might occasionally serve as jumping-off points for pretty nasty viruses. The mere possibility of this is not enough to jump to the conclusion that bats should be made extinct. And for human positions in organizations, the fact that a position is filled with a human, and thus fallible, doesn't mean they are inadmissible to exercise any of their powers.
A state works through its ministers/agents. As the investigator correctly assigned to the case, it is not like you are working against the system.
I guess part of the evaluation is that living in a world with a superpower trying to incite war means that the world has a background chance of blowing up anyway. And knowing that they are trying to incite war by assassination could be used for longer-term peacekeeping (counterspy resource shifts etc.). Exposing emotionally charged circumstances risks immediate, less-than-super-deliberate action, but clouding the decision apparatus with falsehoods makes contact with reality weaker, which has its own error rates.
When the agent willingly chooses into death, I don't think there is any significant risk left to take on.
There is the side of the responsibility of bearing shame, which can transcend death. I guess I found an aspect of it I didn't previously realise: when you think a situation will only resolve with an evil act, and you could punt the decision to be made by another party, it can seem like a favour to make the act happen via the party that carries the stain most gracefully.
The setting seems so morally grey that being complicit with the coverup wouldn't be that large of a blip on the radar. Later on, when the generals disagree with the emperor's confidants, they pretty much do a coup by excluding the capital people from decision making. Part of the danger from Ripper in Strangelove is that he can just act as if he received an order without actually receiving one. When Kido acts without consultation, how does he know that he is not operating on a faulty "bodily fluids" motive? What is the difference between a coup and exercising implicit autonomy, if any?