Another example of a real-world Moral quandary that the real world would love H+ discussion lists to take on is how much medical care to invest in end-of-life patients. Medical advances will continue to make more expensive treatment options available. In Winnipeg, there was a recent case where the family of a patient in a terminal coma insisted he not be taken off life support. In Canada over the last decade or so, the decision rested on a doctor's prescription; now it also encompasses the family and the patient's previously expressed wishes. Three doctors quit over the case. My first instinct was to suggest training some doctors exclusively as coma experts, but it seems medical boards might already have accomplished this. I admire a fighting spirit, and one isolated case doesn't tax the healthcare system much. But if this becomes a regular occurrence... this is another of many real-world examples that require intelligent thought.

Subhan's position has already been proven wrong many, many times. There are cognitive biases, but they aren't nearly as strong or all-encompassing as is being suggested here. For example, I'd guess every reader on this list is aware that other people are capable of suffering and feeling happiness corresponding to their own experiences. This isn't mirror neurons or some other "bias"; it is simple grade-school deduction that refutes Subhan's position. You don't have to be highly Moral to admit Morality is out there in some people. For instance, most children get what Subhan doesn't.
(ZMDavis wrote:) "But AGI is [...] not being a judge or whoever writes laws."
"If Eliezer turns out to be right about the power of recursive self-improvement, then I wouldn't be so sure."
Argh. I didn't mean that as a critique of EY's prowess as an AGI theorist or programmer. I doubt Jesus would've wanted people to deify him, just to be nice to each other. I doubt EY meant for his learning of philosophy to be interpreted as some sort of Moral code; he was just arrogant enough not to state that he was sometimes using his list as a tool to develop his own philosophy. I'm assuming any AGI project would be a team, and I doubt he'd dispute that his best comparative advantage is not ethics. Maybe he plans on writing the part of the code that tells an AGI how to stop using resources for a given job.
Yes, EY's past positions about Morality are closer to Subhan's than to Obert's. But AGI is software programming and hardware engineering, not being a judge or whoever writes laws. I wouldn't suggest deifying EY if your goal is to learn ethics.
"Why the obsession with making other people happy?"
Not obsessed. Just pointing out the definition of morality. High morality is making yourself and other people happy.
(Phillip Huggan wrote:) "Or are you claiming such an act is always out of self-interest?" (denis bider wrote:) "Such acts are. Stuff just is. Real reasons are often unknowable; and if known, would be trivial, technical, mundane."
That's deep.
"Stuff is. Fitting stuff that happens into a moral framework? A hopeless endeavor for misguided individuals seeking to fulfil the romantic notion that things should make sense."
To me, there is nothing unintelligible about the notion that my acts can have consequences. Generally I'm not preachy about it, as democracy and ethical investing are appropriate forums to channel my resources towards in Canada. But the flawed line of reasoning that knowledge can never correlate with reality only finds salvation in solipsism, not a very likely scenario IMO. These kinds of reasoning are used by tyrants, for the record (it is god's will, it is for the national good, etc.).
"If we're going to intervene because a child in Africa is dying of malaria or hunger - both thoroughly natural causes of death - then should we not also intervene when a lion kills an antelope, or a tribe of chimpanzees is slaughtered by their neighbors?"
Natural doesn't make it good. I'd value the child more highly because his physiology is better known (language and written records help) in terms of how to keep him happy, and more importantly because he could grow up to invent a cure for malaria. Yes, eventually we should intervene by providing the chimps with mechanical dummies to murder, if murder makes them happy. We are probably centuries away from that. It's nice that you draw the line around at least a group of others, but you seem to be using your own inability to understand Morality as evidence that others who have passed you on the Moral ladder should come back down. You shouldn't be so self-conscious about this, and you certainly shouldn't be spreading the meme. I don't understand chemistry well, or computer programming at all, but I don't go loudly proclaiming fake computer programming syntax or claiming that atoms don't exist, like EY is inciting here and like you are following. I'm not calling you evil. I'm saying you probably have the capacity to do more good, assuming you are middle class and blowing money on superfluous status-symbol consumer goods. Lobbying for a luxury tax is how I would voice my opinion, a pretty standard avenue I learned from a Maclean's magazine back issue. Here, my purpose is to deprogram as many people as possible stuck in a community devoted to increasing longevity but using means (such as lobbying for the regression of law) whose meme-spread promotes the opposite.
(Subhan wrote:) "And if you claim that there is any emotion, any instinctive preference, any complex brain circuitry in humanity which was created by some external morality thingy and not natural selection, then you are infringing upon science and you will surely be torn to shreds - science has never needed to postulate anything but evolution to explain any feature of human psychology -"

(Subhan also wrote:) "Suppose there's an alien species somewhere in the vastness of the multiverse, who evolved from carnivores. In fact, through most of their evolutionary history, they were cannibals. They've evolved different emotions from us, and they have no concept that murder is wrong -"
The external morality thingy is other people's brain states. Prove the science comment, Subhan. It is obviously a false statement (once again, the argument reduces to solipsism, which can be a topic but needs to be clearly stated as such). Evolution doesn't explain how I learned long division in grade 1. Our human brains are evolutionarily horrible calculators, not usually able to chunk more than 8 memorized numbers or do division without learning math. Learning and self-reflection dominate reptilian brains in healthy individuals.

As to the latter quote: from a utilitarian perspective, murder would generally be wrong, even if fun. There is the odd circumstance where it might be right, but it is so difficult to game the future that it is probably better just to outlaw it altogether than raise the odds of anarchy. For instance, in Canada a head-of-state and abortionists have been targeted (though our head of state was ready to cave in the potential assassin's skull before the police finally apprehended him). In many developing countries it is much worse. Presumably the carnivore civilization would need a lot of luck just to industrialize; it would be more prosperous by fighting its murder urges. Don't call them carnivores, call them Mugabe's Zimbabwe. We have an applied example of a militarily-weak government in the process of becoming a tyranny, raping women and initiating anarchy. There are lessons that could be learned here. Britain has just proposed a 2000-strong rapid-response military force; under what circumstances should it be used? (I like regression from democracy, plus a plausible model of something better, plus lower quality-of-living, plus military weakness, plus acceptance of invasion by a military alliance; if the African Union says no regime change, does that constitute a military alliance?) Does military weakness as a precursor condition do more harm than good by goading nations to up-arm?
In Canada, there is a problem of how to deal with youths: at what age should they be treated as mentally competent adults? Brain science seems to show humans don't fully mature until about 25, so to me that is an argument to treat the span from the onset of puberty to 25 or so as an in-between category when judging. Is alcohol and/or alcoholism analogous to mental health problems? I'd guess no, but maybe childhood trauma is a mitigating factor to consider. How strong does mental illness have to be before it is a consideration? In Canada, an Afghanistan veteran used post-traumatic stress disorder as a mitigating factor in a violent crime. Is not following treatment, or the absence of treatment, something to consider? Can a mentally ill individual sue a government, or claim innocence, because it initiated $10 billion in tax cuts rather than a mental health programme? I'd guess only if it became clear how important such a programme was, say, if it worked very successfully in another nation and the government had the fiscal means to adopt it. Should driving drunk itself be a crime? If so, why not driving with a radio, infant, cellphone... As intersection video camera surveillance catches more traffic offenders, should the offence fine be dropped proportionately to the increased level of surveillance?

See, courts know there are other individuals, and the problems of mental health, and of children not understanding there are other people, don't prevent healthy adults from knowing other people are real. This reminds me of discussions about geopolitics on the WTA list, with seemingly progressive individuals not being able to condemn torture and the indefinite detention of innocent people, simply because the forum was overrepresented with Americans (who still don't score that badly, just not as well as Europe and Canada when it comes to Human Rights).
(robin brandt wrote:) "But we already know why murder seems wrong to us. It's completely explained by a combination of game theory, evolutionary psychology, and memetics."
Sure, but the real question is why murder is wrong, not why it seems wrong. Murder is wrong because it destroys human brains. Generally, Transhumanists have a big problem (Minsky or Moravec or Vinge religion despite evidence to the contrary) figuring out that human brains are conscious and calculators are not. I have a hard time thinking of any situation where murder could be justified by, among other things, proving the world is likely to be better off for it. I guess killing Hitler during the latter years of the Holocaust might have stopped it, if it was happening because of his active intervention. But kill him off too early and Stalin and Hitler don't beat the shit out of each other. This conversation is stuck at some 6th grade level. We could be talking about the death penalty, or income correlating with sentencing, or terrorism and Human Rights. Or the Human Rights of employees handling dangerous technologies (will future gene sequencers require a Top Secret level of security clearance?). Right now the baseline is to treat all very potentially dangerous future technologies with a high level of security clearance, I'm guessing. Does H+ have anything of value to add to existing security protocols? Have they even been analyzed? Nope.
If this is all just to brainstorm about how to teach an AGI ethics, no one here is taking it from that angle. I had a conversation with a Subhan-like friend as a teenager. If I were blogging about it, I'd do it under a forum titled Ethics for Dummies.
Sorry TGGP I had to do it. Now replace the word "charity" with "taxes".
(Constant quoted from someone:) "What we know about the causal origins of our moral intuitions doesn't obviously give us reason to believe they are correlated with moral truth."
Yes, but for a healthy, intelligent individual not under duress, these causal origins (I'm assuming the reptilian or even mammalian brain centres are being referenced here) are much less a factor than abstract knowledge garnered through education. I may feel on some basic level like killing someone who gives me the evil eye, but these impulses are easily subsumed by social conditioning and my own ideals of myself. Claiming there is a very small chance I'll commit evil is far different from claiming I'm a slave to my reptilian desires. Some people are slaves to those impulses; courts generally adjust for mental illness.
(denis bider wrote:) "Obert seems to be trying to find some external justification for his wants, as if it's not sufficient that they are his wants; or as if his wants depend on there being an external justification, and his mental world would collapse if he were to acknowledge that there isn't an external justification."
To me, this reads as saying that if solipsism were true, Obert would have to become a hedonist. Correct. Or are you claiming Obert needs some sort of status? I didn't read that at all. Patriotism doesn't always seek utilitarianism, as one's nation is only a small portion of the world's population. Morality does. Denis, are you claiming there is no way to commit acts that make others happy? Or are you claiming such an act is always out of self-interest? The former position is absurd; the latter runs into the problem that people who jump on grenades die.

I'm guessing there is a cognitive bias found in some/many of this blog's readers and thread starters: because they know they are in a position of power vis-a-vis the average citizen, they are looking for any excuse not to accept moral responsibility. This is wrong. A middle-class western individual, all else equal, is morally better by donating conspicuous-consumption income to charity than by exercising the Libertarian market behaviour of buying luxury goods. I'm not condemning the purchasing behaviour; I'm condemning the Orwellian justification of trying to take (ego) pleasure in not owning up to your own consumption. If you are smart enough to construct such double-think, you can be smart enough to live with your conscience. Obert does not take the Morally correct position just to win the argument with the idiot Subhan. There are far deeper issues that could be debated on this blog, further up the Moral ladder. For instance, there are legal precedents being formed in real-world law right now that could be influenced were this content to avoid retracing what is already known.
"I think the meaning of "it is (morally) right" may be easiest to explain through game theory."
Game theory may be useful here, but it is only a low-level efficient means to an end. It might explain social hierarchies in our past or in other species, it might explain the evolution of law, and it might be the highest rung on the Moral ladder some stupid or mentally impaired individuals can reach. For instance, a higher Morality system than waiting for individuals to turn selfish before punishing them is to ensure parents aren't abusive and childhood cognitive development opportunities exist. A basic pre-puberty (or pre-25) social safety net is an improvement on game theory in reaching that tiled max-morality place.

This no-morality line of reasoning might have some relevance if that happy place is a whole volume of different states. There are likely trade-offs between novel experiences and known preferences, quite apart from harvesting unknown/dangerous energy resources. I know someone who likes cop shows and takes sleeping pills. This individual can sometimes watch all his favourite Law + Order reruns as if they were original. Maybe I'm a little jealous here, in that I know every episode of Family Guy off by heart.

Just because you don't know if there are Moral consequences doesn't mean there aren't. The key question is whether you have the opportunity to easily learn about your moral sphere of influence. An interesting complication mentioned is how to know whether what you think is a good act isn't really bad. In my forest example above, cutting a forest into islands makes those islands more susceptible to invasive species, and suppressing a natural insect species might make forests less sustainable over the long term. But that is a question of scientific method and epistemology, not ontology. Asking whether setting fire to an orphanage is Morally equivalent to making a difficult JFK-esque judgement is silly. Assuming they are equivalent assumes that because you don't know the answer to any given question, everyone else doesn't know either. I'm sure they cover this at some point in the Oxford undergraduate curriculum.
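Since the quoted claim leans on game theory, here is a minimal Python sketch of the mechanism it points at: in a repeated game, even purely selfish agents do better by not defecting. The payoff numbers and strategies are my own illustrative assumptions, not anything from the original discussion.

# Toy iterated prisoner's dilemma: why a "don't defect" norm can be
# stable even among selfish agents. Payoffs are illustrative assumptions.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I am exploited
    ("D", "C"): 5,  # I exploit
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    seen_by_a, seen_by_b = [], []  # moves each side has seen from the other
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(seen_by_a)
        b = strategy_b(seen_by_b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): cooperation pays
print(play(always_defect, tit_for_tat))  # (104, 99): defection gains little

Note this only shows why cooperation is stable, not why it is right, which is exactly the means-to-an-end distinction above.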
The difference between duty and desire is that some desires might harm other people, while duty (you can weaselly change the definition to mean Nazi duty, but then you are asking an entirely different question) always helps other people. "Terminal values" as defined are pretty weak. There are physical (e=mc^2) co-ordinates that have maximized happiness values. Og may only be able to eat tubers, but most literate people are much higher on the ladder and thus have a greater duty. In the future, presumably, the standards will be even higher. At some point, assuming we don't screw it up, the universe will be tiled with happy people, depending on the energy resources of the universe and how accurately they can be safely charted. Subhan is at a lower level on the ladder of Morality. All else equal (it never is, as uploading is a delusion), Obert has a greater duty.
Wow, what a long post. Subhan doesn't have a clue. Tasting a cheeseburger like a salad isn't Morality. Morality refers to actions in the present that can initiate a future with preferred brain-states (the weaselly response would be to ask what these are, as if torture and pleasure weren't known, and initiate a conversation long enough to forget the initial question). So if you hypnotize yourself to make salad taste like cheeseburgers for health reasons, you are exercising Morality.

I've got a forestry paper open in the other window. It is very dry, but I'm hoping I can calculate a rate of spread for an invasive species, to plan a logging timeline that might stop it. There is also a football game on. Not a great game, but don't pull a Subhan and try to tell me I'm reading the forestry paper because I like it more than the football game. I'm reading it because I realize there are brain-states of tourists and loggers and AGW-affected people who would rather see the forests intact than temporarily dead. That's really all it boils down to. After gaining enough expertise over your own psyche sometime in childhood (i.e. most 10-year-olds would not waste time with this conversation; a developmental psychologist would know just when), you (a mentally healthy individual) realize there are other people who experience similar brain states. Yes, mirror neurons and the like are probably all evolutionary in origin; that doesn't change anything. There really are local universe configurations that are "happier" in the net than other configurations. There is a ladder of morality, certainly not set in stone (torture me and all of a sudden I probably start valuing myself a lot more).

I'd guess the whole point of this is to teach an AGI where to draw the line in upgrading human brain architectures (either that, or I really do enjoy reading forestry over watching a game, and really like salad over pizza and Chinese food). I don't see any reason why human development couldn't continue as it does now, voluntarily, the way human psyches are now developed (i.e. trying pizza and dirt, and noting the preference for pizza in the future). Everyone arguing against morality-as-given is saying salad tastes better than pizza, as if there weren't some other reason for eating salad. The other reasons (health, dating a vegetarian, personal finances) maybe deserve a conversation, but not one muddled with this. Honestly, follow Subhan's flawed reasoning methodology (as it seems Transhumanists and Libertarians are more likely to do than average, for whatever reason) and you get to the conclusion that consciousness doesn't exist. I think the AGI portion of this question depends a lot more on the energy resources of the universe than upon how to train an AGI to be a psychologist; unless there is some hurry to give the teaching/counselling reins to an AGI, what's the rush?
...The think-tank money would include futurism like SIAI and this blog's topics. For longevity research, I think the best way to promote it might be to screen which health/pharma/biotech companies spend the most on R+D in relevant sub-fields. Money would only come in handy to market such a portfolio as "boomer-ethical". I'd want to give R.Freitas money to do diamond surface chemistry computer sims, but given that sims come down in price every year I wouldn't be sure of the optimal amount. Think-tanks is pretty vague. You'd want to look into the specifics of FDA approval processes in pursuit of reform ideas; you could fund such a think-tank, but the real bottleneck would be educating policy researchers. I'd think any university would respond favourably to instituting new research schools of any type, and probably get matching government funding too. There are limits to how much money can usefully be deployed: W.Buffett had trouble finding cheap investments as soon as he had tens of billions to play with, and the Provincial Government of Alberta couldn't figure out what to do with its oil revenues when approaching eleven digits of play money.
I agree with A.Madden. If the question were phrased as $10 trillion in physical wealth that didn't exist before, it would be different. I wouldn't trust myself to manage more than a few hundred billion, and I'd destroy the other $9.6 trillion. Maybe a $75000 investment trust for myself and about twice that for family and friends. Most of my investment strategies (Grahamian Value modified to account for future demographic, geopolitical, cultural and technological trends) break down at such high valuations.

I like the CDI index, and I like P.Martin's initiative to tie Canadian African foreign aid to those nations that stamp out corruption (Kenya gets bonus points for denouncing Mugabe; South Africa and Zimbabwe would get nothing at present). So maybe $40 billion to buy down the debt of said nations. I'd saturate research grants for things like apiculture, grain, and desalination research; maybe one billion would cover 10000 grants. But the real need is probably university research infrastructure, and I'd be worried about being stuck with the operating costs, and about the marginal research gains not being anything close to present research output. I'd want to harmonize distance-learning education globally, but that is dependent on accreditation and immigration reforms, and I don't think the real gains will accrue until holograms fill bandwidth. So for now I'd make strategic scientific journal sectors free. I would buy up mosquito nets and saturate microfinance penetration (dependent on training door-to-door bankers). Maybe $5 billion? Lots of think-tanks on various subjects. I wasted much time doing casual labour to make rent and may again in the future; no doubt there are millions of others. $2 billion would fund 1000 think-tanks that mimic university institutions without forcing superfluous course material. I'd establish a $10 billion trust for grain storage GMO research, and physical pilot projects that mimic the UK's previous Intervention Storage project, but at a much higher tech level. I'd establish a $10 billion trust to accelerate research into urban robotic greenhouses (inert gas greenhouse sheaths, robotic pruners, OLEDs, time-release fertilizers, GMOs). I'd short any oil and coal companies that have funded Neoconservative think-tanks or used lawyers that have previously defended the asbestos or tobacco industries: $25 billion. I'd pay developing nations 1/6 of the value of building wind turbines instead of coal, as wind company stock options towards their sovereign wealth funds; a $4 billion trust? Saturate IDE's drip irrigation and hand pump market: a $2 billion trust. $5 billion(?) to work with the big US banks to implement grain elevators and exchanges in the developing world, and novel metal commodity contracts in the developed world (semiconductors and polymer solar cells need metals like gallium). I'd match any CSA funding if they wanted to reinstate NASA's cancelled NIAC ($500 million). A trust to fight computer hackers (the best way to fight AI threats, from what I can tell), and to sketch out the possibility of a low-footprint gene sequencing and electronics market decades in the future, to fight designer pandemics and AI/AGI: $10 billion. Fighting pandemics appears to turn into fighting Staph infections, so $50 billion to give hospitals in the western world gelFAST alcohol rub dispensers for their nurses, and U of T nurse hand-wash sensors ($300 a bed) in all hospital beds. $100 billion for basic sewage, health, nutrition and education infrastructure in the developing world.
Saturate Cuba's 3rd-world doctor exporting programme: $500 million. A trust to GMO a crop containing all 8 essential amino acids. At $263 billion here. The problem is my investment strategy wouldn't work at such a high valuation, and I'd need to devote a lot of time to learning new strategies just to park and manage the trusts. Also, a lot of things I think have high ROEs are properly handled by governments. There is a $10 billion programme proposed to fight mental health disorders in Canada, certainly useful for managing any futuristic technologies. I like building nursing homes and affordable pedestrian-friendly housing, and I like rejigging agriculture tariff/subsidy rates, but that is the job of government. Same for things like offering free bank accounts: the job of private industry. Most of the above can be distilled to university grants, and making university accessible to the third world and to first-world adults. Many things I don't know enough about to fund. A $1 billion trust to buy light-weight solar cells and mini wind turbines for the developing world. $2 billion to cut Canada's boreal forest into pieces to fight Mountain Pine Beetle, if it eats Jack Pines. $100 million to fund solid-state hydrogen research. $900 million to buy up rainforests/wetlands in danger. A $1 billion prize trust to give annual awards to leaders that safeguard their own environmental capital. I'd want to buy a Zenn electric car, but they are illegal in Canada. $269 billion.
I'm glad to see this was going somewhere. I'd say yes: if humans have free will, then an AGI could have it too. If not on present semiconductor designs, then with some 1cc electrolyte solution or something. But free will without the human endocrine system isn't the type of definition most people mean when they envision free will. I suppose a smart enough AGI could deduce and brute-force it. Splitting off world-lines loses much of the fun without a mind, even if it can technically be called free will. I'd want to read some physics abstracts before commenting further about free will.
"Lets say we, as humans, placed some code on every server on the net that mimics a neuron. Is that going to become sentient? I have no idea. Probably not."
Ooo, even better: have the code recreate a really good hockey game. Have the code play the game in the demolished Winnipeg Arena, but make the sightlines better. And have the game between Russia and the Detroit Red Wings. Have Datsyuk cloned and play for both teams. Of course, programs only affect the positions of silicon switches in a computer. To actually undemolish a construction site you need an actuator (magic) that affects the world outside the way lines of computer code flip silicon switches. The cloning-the-player part might be impossible, but at least it seems more reasonable than silicon switches that are conscious.
"No, you have to be the ultimate source of your decisions. If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not."
Once again, a straw man. Free will might not exist, but it won't be disproved by this reasoning. People who claim free will don't claim 100% free will, with actions like willing your own birth. Free will proponents generally believe the basis for free will is choosing from among two or more symbolic brain representations. Only if the person never read, say, a book about the pain of being burned to death, and did no symbolic contemplation in the few seconds between past contemplating self and present decisive self, does the straw man hold.
In the above example, if the fear of fire is instinctive, no free will. If it is attained through symbolic contemplation in the past of what one would do in such a circumstance or how one values neighbourhood civilian lives, or one's desire to be a hero or celebrity, then at least the potential for free will exists.
Once again, free will does not mean willing your own existence; it means choosing from brain symbols in a way that affects your future (if free will exists). I expect to post the exact same argument here on different threads repeatedly, ad nauseam: free will does not mean willing your own birth (or willing your own present or future, or willing the universe).
I'll ask again, don't tachyons induce feedbacks that destroy the EY concept of a "block MWI universe"?
"...Also, your last two comments are almost completely off-topic."
I was just playing the Devil's Advocate, screwing around to "help" others build debating skills while not telling them I was wasting their time :)
About Devil's Advocacy, it is fine as long as it is stated. Don't go claiming the Holocaust was a good thing and should be completed this time around, without mentioning the part about just wanting to heighten the quality of debating skills.
TGGP, if present rates of US prison incarceration had existed historically, the USA would never have been a superpower. $100000 a person annually, at 3 million people: that works out to roughly $300 billion a year. The worst part is they are disproportionately black and poor. They are being imprisoned because: 1) they can't afford lawyers, 2) they are black, 3) only thirdly, they are guilty. Let's cut the crap: most people reading this blog have committed offenses that could see them imprisoned for years. Drugs and pills without a prescription are obvious examples, but there are many more. If you are black or poor, you are far more likely to be caught and imprisoned. There is a prison quota to pay administrative prison salaries and to ensure work for construction contractors. Prisoners are forced to labour at rates that are used to undercut SE Asian labour force bids. Try to say no to the work and not get raped by other prisoners. Prisoners in maximum security US institutions learn criminal skills (good in learning how to grow weed, bad in most other skills), are emotionally hardened, and are subject (along with staff) to an environment that encourages mental health problems. I wasn't kidding when I said the USA now spends more on prisons than on higher education. Only 3rd world countries do this. The contractors could literally be building public schools, and the staff could be trained in tangential security and health occupations (in Canada, 20% of nursing home residents are subject to assault by mentally ill residents, something that could be avoided with a former prison guard on duty). More importantly, in my eyes, imprisoning so many innocent people somewhat condones all other crimes. The above excludes the healthcare cost of increased Hepatitis, drug usage, mental health problems, HIV, staff stress... I don't know who James Q Wilson is, but I am 100% certain he is not African American or a poor person in the Western world. If he were, he would reflexively adopt a correct policy position. He likely earns 6 figures and is white. The police have never pulled him over and searched his pockets for pills or given him a breathalyzer after a car accident.
"Tangential argument: existential risk maximizing actors, thank goodness, don't exist, nor do more than a tiny number of people seeking to destroy humanity. Beware the Angry Death Spiral."
I think I'll stand by my words, and qualify the statement: maybe GWB could start WWIII single-handedly and isn't, so this pertains only to the threat of global warming. S.Harper couldn't be misplaying the threat worse. Canada's governing structure has a provision where the Queen of England is the real head of state, and the Governor General would almost certainly remove our PM from power if he did things like igniting Canada's coal reserves (a nice trick to have in the arsenal though, if the world is heading towards an Ice Age, as Ice Ages typically onset in decades or less). We are very early on in Global Warming. If GWB and S.Harper were acting as they are now a decade or two from now, my correct position would be the mainstream. I didn't mean actively as in willfully, like Nazi evil. I meant it more like allowing a population to starve (literally in this context), like the Soviet evil inflicted on the Ukraine. GWB and S.Harper know full well what they are doing is greedy, and they both know enough, or are purposely (as opposed to unintentionally) avoiding the knowledge. Yep, I stand by my statement. When history looks back, if we make it there, GWB and S.Harper will be seen as among the worst leaders their respective nations have ever had, solely on the demerits of their handling of Global Warming. B.Obama and S.Dion, solely by coming after them with an environmental platform that doesn't threaten to destroy humanity for short-term profit, will go down in history as at least above-average leaders.
Am I part of an angry death spiral? I think my comments are measured. Probably even kind. S.Harper's first act of government was to cancel 17 Canadian Global Warming research programmes, including a critical ocean one. Do I really need to post what dubya has done on this file? The angry death spiral only happens if Republicans and Conservatives maintain power over the years ahead: America finally cashes in its WWII credit, and Canada temporarily loses post-modern status. Not a spiral. Yet.
"But with anyone in this state of mind, I would sooner begin by teaching them that policy debates should not appear one-sided." I think you have to qualify this statement with "unresolved" policy debates.
I'll take the positions: 1) another Holocaust would be a bad thing; 2) global warming is real, and S.Harper and GWB are real existential-risk-maximizing actors; 3) the US prison economy (construction, staffing and forced prison labour), now consuming more resources than universities in your retarded country, is a conflict of interest. It won't help students at all to adopt the opposite positions.
The problem with taking evil positions "just for kicks" is that many of these positions are adopted in real life. There are powerful (low-teens percentage) political minorities in Europe and Russia that wouldn't mind another Holocaust and would welcome more skeptical minds like EY briefly adopting their positions. Same for oil supporters in Canada and the USA that presently run the world and actively seek humanity's destruction. The USA incarcerates a greater % of its population than anyone; it is practically a 3rd world country. Slavery is still alive in the USA.
"unresolved" turns the above brain sharpening positions into acceptable (but still false policy positions): 1) Immigration should be reduced or union jobs should be subsidized with public funds or cultural minorities should melting pot. 2) I'm greedy and would rather consume than stabilize Earth for future generations. 3) We need retarded Republican policies to try to maintain global military hedgemony, and the Republican alliance shouldn't be fractured; also, incarcerating Democrats prevents them from voting.
Don't encourage malleable students to adopt evil positions, they may like it.
The nature of time has been covered by many great minds from a religious viewpoint, as mentioned by nick. It is also an active research topic among mainstream universities. I'm not particularly interested in the question, but the best analysis I've read comes from a few N.Bostrom papers, and a book I once read called "Time Machines". The book supposes a block universe, but states very clearly that this may not be the way the universe operates. From what I understand, this means the opposite of what EY wrote: it is the Copenhagen interpretation (that magic causes wavefunction collapses) that implies a block universe. From my understanding, under MWI the universe would only be deterministic if there were no tachyons (I'm not sure, but I think these are predicted in most GUTs); otherwise there would be feedbacks. Even with no tachyons, the universe would only be deterministic in a past direction. The real question is what causes universes to split off. This is deep physics. There are papers on this topic. If someone were to suggest one, I would read it. The whole point of Tipler's "The Physics of Immortality" was to use shearing forces in a collapsing universe (the universe strongly appears to be open, unfortunately) as an energy source. Where would a never-ending universe fit when viewed through block universe goggles? Once again I ask: don't tachyons eliminate the block universe concept for all energy except photons travelling at c?
I'm not discouraging discussion. There are some topics where this may be a cutting-edge dialectic, such as the nature of minds, the computational power limits (if any) of recursive AI software programs, and AGI/AI controls. But this debate is inferior to mainstream university research. Keep it up, but the real question is how much money to spend on particle accelerators and observatories that might resolve these basic physics questions. The money people use mainstream physicists as their info sources. These mainstream physicists have written papers. If EY's "block universe" hypothesis were correct, we wouldn't experience time. Simple anthropic reasoning disproves it. Time exists. The future is more important than the past. If anyone takes the time to find papers that deal with splitting off universes, I'd attempt to read them and discuss. I hope that if mildly recursive software AI systems are built in the decades ahead and the human brain/mind is modelled by IBM or whoever, those interested here in AI/AGI will keep up with those findings and not continue to discuss "inferior" content. Maybe I'm just pissed because I realize blogs where GUT amateurs talk about time have limits.
Off-topic, but I suggest EY's idea of an AGI using mixed chemicals to form a mobile robot (and presumably hack the internet) is now dated. With RepRap, ink-jet polymers, and rapid plastics prototyping, a far more likely scenario is that an AI would hack a printer and output some sort of shape-memory device, or a conducting plastic as an origami crane. Normally this is a moot point, but there may be real defenses that could be dreamed up in these sorts of discussions. If it is not known whether AGI is possible with a 2000 BC Egyptian wooden abacus, or needs a computer from 10000000 AD, but we know people may try to use the same sorts of technologies and/or hacking procedures as weapons, why not diversify one's fields of expertise? If I were to suggest AI/AGI prescriptions to cyber police, I'd suggest cracking down on Eastern European, Russian and Chinese virus writers and better funding the good guys.
(H.Finney wrote:) "But then, some philosophers have claimed that brains could perhaps influence quantum events, pointing to the supposed collapse of the wave function being caused by consciousness as precedent. And we all know how deep that rabbit hole goes."
How deep does it go? Penrose's quantum brain components (he's a physicist; this is an aspect of neurobiology and philosophy of mind) don't seem to exist, but I had to dig up ideas like the "cemi field theory" on my own in past discussions on this topic (which always degenerated to uploading-for-immortality and cryonics); they certainly weren't forwarded by the free-will naysayer robots.
"(EY wrote:) If you're thinking about a world that could arise in a lawful way, but whose probability is a quadrillion to one, and something very pleasant or very awful is happening in this world... well, it does probably exist, if it is lawful. But you should try to release one quadrillionth as many neurotransmitters, in your reward centers or your aversive centers, so that you can weigh that world appropriately in your decisions. If you don't think you can do that... don't bother thinking about it."
What if it is a fifty-fifty decision? If I see a pretty girl who is a known head-case, I can try to make the neural connection of her image with my boobies-Marilyn-Manson neuron. Once I start to use abstract concepts (encoded in a real brain) to control chemical squirts, I'm claiming the potential for some limited free will. I doubt there are any world-lines where a computer speaker materializes into my lungs, even though it is physically possible. But if I think I'd like to crush the speaker into my chest, it might happen. In fact, I'd bet world-lines split off so rarely that there isn't a single world-line where I attack myself with a computer speaker right now. Has anyone read recent papers describing what variables limit decoherence, assuming MWI? To my knowledge, photon experiments only demonstrate a "few" nearby photons in parallel worlds.
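A toy numerical reading of the quoted advice above, under my own expected-value interpretation (the probabilities and utilities are invented for illustration):

# Weight an outcome's pull on your decisions by its probability.
def decision_weight(outcomes):
    # outcomes: list of (probability, utility) pairs
    return sum(p * u for p, u in outcomes)

# A quadrillion-to-one awful world barely moves the needle...
rare_awful = [(1e-15, -1_000_000.0), (1 - 1e-15, 0.0)]
print(decision_weight(rare_awful))  # about -1e-09: negligible

# ...while a genuine fifty-fifty decision deserves full attention.
coin_flip = [(0.5, 10.0), (0.5, -10.0)]
print(decision_weight(coin_flip))   # 0.0 on average, but both branches carry real weight

On this reading, the quadrillion-to-one world is safely ignored, while the fifty-fifty case is exactly where deliberation matters.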
Don't faster-than-c solutions to general relativity destroy the concept of MWI as a block universe?
Er, to try to simplify my above point: in my model, energy (say, an atom) at time-sequence t1 sums up all its interactions with the rest of its local universe (such as a CNS, if it is a brain atom), and this "calculation" affects the weighting of the sick-of-ice-cream world-lines t2, t2a, t2b. In claiming MWI is a block universe, you are accepting that t1 ping-pongs to the subsequent split world-lines t2, t2a, t2b without any "calculation" as described.
Ultimately it is a question of what limits are imposed on the splitting off of new world-lines in the multiverse. The speed-of-light, yes. I don't see why the physics of mind couldn't also qualify.
"In Thou Art Physics, I pointed out that since you are within physics, anything you control is necessarily controlled by physics."
I could just as easily argue since I'm within my past self's future light cone, anything I control is/was necessarily controlled by (a younger) me. In both cases we are playing with words and muddying the waters rather than learning or teaching.
I don't see why you can't just reverse the logic and claim that since everything in my mind is controlled by physics, thought is an act of my free will. I don't believe in strong free will. But I do believe that by the time a toddler can form ideals (desires ice cream) that aren't real, some free will is already at work. The theory of MWI may be deterministic (math is not subject to General Relativity, and thus "deterministic" in this description has nothing to do with the "deterministic" used to describe human actions), but playing with English words suggests actors can't choose their world-lines by using the physics of their minds to cascade synchronized neural firing patterns that activate the parts of our brains producing minds. Maybe there is no free will, but I'd need to see a convincing theory of consciousness absent circular reasoning.

The Plinko chip may fall deterministically, but if the Plinko chip had a human CNS and accurate memories of past drops, I bet it might try to rotate into a preferred fall path, and if the Plinko chip based its decision on reflected ideals, I'd say there is some free will there (neuron firing seems to be at a small enough scale to harness some of the quantum-spooky-stuff that causes universes to split off, for instance). I think our brains can control the % of world-lines that decide whether we binge-eat ice cream. Equating a block universe to MWI assumes there is an end state where the total ratio of all time-space co-ordinates is known. In reality, this end state does not exist (time breaks down outside reality, as when forming the mathematical concept of a block universe). There are many random events that control which world-line an individual experiences, but I don't see why volitions can't be among the causes. Few people defending free will really mean to defend their right to bring about their own birth.
Patrick, my quantum-key-encrypted supercomputer (assuming this is what is needed to build an AGI) is an intranet, not accessible by anyone outside the system. You could try to corrupt the employees, but that would be akin to trying to purchase a suitcase nuke: 9 out of 10 sellers are really CIA or whoever. Has a nuclear submarine ever been hacked? How will an AGI, even with the resources of the entire Multiverse, hack into a quantumly encrypted communications line (a laser and fibre optics)? It can't.
I'm trying to brainstorm exactly what physical infrastructures would suffice to make an AGI impotent, assuming the long term. For instance, put all protein products in a long queue with neutron bombs nearby and inspect every product protein-by-protein... just neutron-bomb all the protein products if an anomaly is detected. Same for the 2050 world's computer infrastructures. Have computers all wired to self-destruct, with backups in a bomb shelter. If the antivirus program (which might not even be necessary if quantum computers are ubiquitous) detects an anomaly, there go all the computers. I'm smarter than a grizzly or Ebola, but I'm still probably dead against either. That disproves your argument. More importantly, drafting such defenses probably has a higher EV of societal good than the AGI case alone, because humans will almost certainly try these sorts of attacks.
I'm not saying every defense will work, but please specifically disprove the defenses I've written. It might help e-security some day. There is an opportunity here to do this, as I don't know of these conversations happening in many other forums, but singularitarians are dropping the ball because of a political cognitive bias: they want to build their software, like it or not.
Another defense: once/if a science of AGI is established, determine the minimum run-time needed on the most powerful computers not under surveillance to make an AGI. Have all computers built to radioactively decay before that run-time is achieved. Another run-time defense: don't allow distributed computing applications to use beyond a certain # of nodes (a toy sketch of this rule is below). I can understand dismissing the after-AGI defenses, but to categorically dismiss the pre-AGI defenses...
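Here is the toy sketch of that node-cap/run-time rule, as a hypothetical scheduler check; MAX_NODES and MAX_RUNTIME_HOURS are invented thresholds, not numbers from any real proposal:

# Reject distributed jobs that exceed a node cap or a continuous
# run-time cap (both thresholds are hypothetical illustrations).
MAX_NODES = 10_000
MAX_RUNTIME_HOURS = 720

def admit_job(requested_nodes, requested_hours):
    # Admit only jobs that stay under both caps.
    if requested_nodes > MAX_NODES:
        return False  # too many nodes for one application
    if requested_hours > MAX_RUNTIME_HOURS:
        return False  # exceeds the allowed continuous run-time
    return True

print(admit_job(5_000, 100))      # True: within both caps
print(admit_job(50_000, 100))     # False: node cap exceeded
print(admit_job(5_000, 10_000))   # False: run-time cap exceeded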
My thesis is that the computer hardware required for AGI is so advanced that the technology of the day can ensure surveillance wins, if it is desired not to construct an AGI. Once you get beyond the cognitive bias that thought is computation, you start to appreciate how far into the future AGI is, and that the prime threat of this nature is from conventional AI programmes.
bambi, IDK anything about hacking culture, but I doubt kids need to read a decision theory blog to learn what a logic bomb is (whatever that is). Posting specific software code, on the other hand...
About flaws in the post: the idea that environmentalists shouldn't oppose a scaling up of nuclear power is a flaw. This paper: http://www.stormsmith.nl/ was lucky enough to be written (and I was lucky enough to find it, since every nuclear utility on the planet ignored the basic economic analysis contained within it). Basically, a scaling up of nuclear fails to cost the complete life-cycle of decommissioning a power plant. Additionally, going to all nukes ensures the nuclear lobby (the process of lobbying is something Libertarians don't understand) becomes a nearly permanent chunk of the world's economy. My point is that Bayesian reasoning here only unearths the nuclear facts and lies the nuclear industry forwards. You need reliable information sources in addition to Bayes to make correct judgement calls on economics/energy here.
Cryonics is another flaw. IDK if it works or not. But the expensive process certainly shouldn't be a part of Universal Healthcare coverage at present. The best research (not 1970s deductive reasoning) about brains I've read to date suggests thought is a substrate-specific process (IDK how specific; semiconductors no way, but maybe more inclusive than CNS proteins) that functions as temperature-dependent solitons. Whatever temperature the brain goes down to in the cold water of ice-slip hypothermia survivors, it does not follow that brain processes will survive liquid nitrogen or helium temperatures. Under the mathematical model of reality most transhumanists have, temperature (which requires physics) doesn't even exist!! I hope cryonics works, and if I were rich enough I might sign up or fund suspension research, but to suggest those who don't believe in cryonics are fools is to suggest brains work identically at 25C and -273C. Then the "rationalist" rebuttal is always to invoke "uploading for immortality" (where Transhumanism dies to me, despite all its progressive memes). If rationalists can't understand that mathematics isn't physics, I don't want to be labelled a rationalist, and I will pepper posts such as this to prevent unsuspecting readers from mindlessly believing a mindless belief system; I am trying to prevent H+ from functioning as a cult.
Intrade free money?! Surely you must know it takes $5000-$15000/yr in basic costs alone to live in most of the Western world. Surely you must know the average savings rate is very low. Please qualify statements like this with "rich/middle classes can make investments on liquid markets" (and can Intrade really handle trillions of $$ as suggested, and wouldn't it then be subject to manipulation? I'll bet the 65 cents in my pocket that Phillip will splash coffee on himself). Bayes may be important, but if it misses very basic facts (like 9/10ths of the world can't presently afford to live off investment income), why would the world want to incorporate more Bayes, a small subset of probability theory already in math, logic and computer science curriculums (I think)? Sweet. I did splash coffee on myself and doubled up to $1.30. Now I can donate to the political party most likely to teach probability theory (for the purpose here of reasoning skills, not ethics) instead of religious dogma in public schools. That is the conclusion of the post: better public education at the expense of pop-culture. Or more funding for public education commercials?
...As for the 3rd-last paragraph: yes, once a 2008 AGI has the ability to contact 2008 humans, humanity is doomed if the AGI deems fit. But I don't see why a 2050 world couldn't merely use quantum encryption communications, monitored for AGI. And monitor supercomputing applications. Even the specific method describing how an AGI gets protein nanorobots might be flawed in a world certainly ravaged by designer pandemic terrorist attacks. All chemists (and other 2050 WMD professions) are likely to be monitored with RF tags. All labs, even the kind of at-home PCR biochemistry possible today, are likely to be monitored. Maybe there are other methods by which the Bayesian AGI could escape (such as?). Wouldn't X-raying mail for beakers, and treating the protein medium agar like plutonium is now treated, suffice? Communications-jamming equipment uniformly distributed throughout Earth might permanently box an AGI that somehow (magic?!) escapes a supercomputer application screen. If AGI needs computer hardware/software made in the next two or three decades, it might be unstoppable. Beyond that, humans will already be using such AGI hardware requirements to commission WMDs, and the muscular NSA of 2050 will already be attentive to such phenomena.
Two conclusions from the specific example: 1) The aliens are toying with us. This is unsettling, in that it is hard to do anything good to prove our worth to aliens that can't meet even a human level of ethics. 2) The aliens/future-humans/creator(s)-of-the-universe are limited in their technological capabilities. Consider Martians who witness the occasional rover land. They might be wondering what it all means, when we really have no grand scheme and are merely trying not to mix up Imperial and Metric units in landing. Such precise stellar phenomena are maybe evidence of a conscious creator, in that they suggest an artificial limit being run up against by the signals (which may themselves be the conscious creator). A GUT would determine whether the signal is "significant" in terms of physics. Inducing ET via Anthropic Principle reasoning gives me a headache. I much prefer to stick to trying to fill in the blanks of the Rare Earth hypothesis.
Typo. Sorry. Should say GUT where I wrote lasers. I'll proofredafjkdsf all my posts in future.
"I have to find an actual physicist to discuss this with, but there appears to be nothing wrong with Einstein's quest for a unified theory; he simply didn't have the prerequisite information of QM at the time (Feynman, Dyson, etc. didn't develop renormalization until the 1940s). MWI wasn't proposed until several years after Einstein's death."
I can't recall what renormalization is. I think there is something wrong with Einstein's quest; he was akin to Aristotle theorizing about atoms. The Sung Dynasty was about the earliest atoms could have been empirically uncovered, and a GUT is about as far away from Einstein's knowledge base. I actually think Einstein's biggest accomplishment was political: writing to FDR about the possibility of a nuke. Einstein is responsible in this regard for a year of robotics, car, and computer progress, along with tens of millions of present Japanese and American lives. I think the two characteristics that allowed Einstein to make 3 huge discoveries (Brownian motion, SR, GR) were his rich family, which got him his patent clerk job, and his willingness to be aloof and not follow the Popper-ian knowledge base of the time. I doubt he was the first to notice something wrong with phlogiston, but no one else had the spare time and the determination to retool the knowledge base from ground zero (has anyone else ever taken an eight-year diversion into mathematics to solve a single physics problem?).

I don't think he had the same respect for quantum theory, despite founding it, that he did for GR. It seemed like he was trying to graft "quantum effects that functioned as non-local wormholes" onto GR, rather than genuinely finding a GUT by respecting quantum theory. No doubt he would have immediately championed MWI, but it seems like he was genuinely trying to undercut the Copenhagen Interpretation rather than building upon it (this is in response to EY's MWI comment in the thread starter). All I'm saying is that if he had realized the limits of his deductive method, he might have made even more contributions in his latter years and been the greatest thinker ever, instead of sharing the mantle with a handful of others. Maybe the most cutting-edge scientific field is genetics. Someone might be able to deduce a science of the behaviour of animal-human hybrids by studying the input animal temperaments and physiologies, but a better avenue would be to become a protein folding scientist and learn how to cure cancer or diabetes or something. I don't want to speak for Einstein's study strengths and weaknesses, but maybe we'd have optical computers now if Einstein had transitioned to optics instead of lasers. I can't think of any physical knowledge areas now that are in as bad shape as cosmology was pre-Einstein. The next Einstein will come from the social science fields, probably (which is why I mentioned M.Yunus). With computers, everything in physics is research teams nowadays. Maybe M.Lazaridis funding a quantum computer research park is the closest anyone now can come to advancing a theoretical physics field as much as Einstein advanced cosmology.
"As of now, at least, reasoning based on scanty evidence is something that modern-day science cannot reliably train modern-day scientists to do at all."
By definition, scientists must use induction; I meant to say thinkers. IDK why thinkers mostly use induction now: maybe because the scientific funding model seems to work okay, or because once you reason too far ahead, the content becomes useless if new research deviates the course a bit. For instance, all GUT/TOE physicists use Einstein-ian deduction in their elegant models. Einstein was lucky to be redeemed so quickly, in that novel observatories were just being constructed. It is more expensive (maybe risky too) to turn the galaxy into a giant particle accelerator. In the social science fields, there is deduction. M.Yunus stimulated microfinance with a $26? loan by deducing that collateral isn't a primary motivator in debt repayment (primary are entrepreneurial drive and quality-of-living gains). Drexler's nanotechnology vision was deduction. Many political programmes are deductions.
I agree with the general body content that deduction is underappreciated. On reflection, the reason may be that acts of deduction almost always occur in fields where there is no competing induction (i.e. R.Freitas's simulations probably render much of E.Drexler's deductions obsolete). Thus deduction is a proxy to unearth low-hanging fruit? Deductive GUTs are fine, but they will certainly be eclipsed by induced particle accelerator engineering blueprints one day. Deduction is free, and it addresses the issue of hypothesis generation somewhat.
I disagree strongly with the suggestion that Einstein was a proponent of MWI. In fact, the overemphasis on deduction (defined here as induction from few a priori premises) caused him to waste the remaining 2/3 of his life attempting to disprove quantum phenomena, no?
Hopefully, ignoring ethics, cloning people for whatever reason will only lock in one of the three determinants of character (and even less, considering genetic mutations) for whatever eugenics you are practising. There is also nurture, and there is personal inspiration (which could probably be defined here as intensity of rationality). If there is no Earth Summit in 1992, I probably don't pick up a bunch of environmental pamphlets one weekend. My decade-later clone, exposed to Fox News, maybe even exacerbates the leading extinction threat. Maybe if I don't grow up with cats, I don't make the inspired choice to value living beings; maybe my Fox News clone values killing Muslims and other "infidels" instead? If Eliezer doesn't read whichever sci-fi story inspired him, does he make the choice to focus upon AGI?
My thoughts on the future of mankind:
1) Near-term primary goal to maximize productive person/yrs.
2) Rearrange capital flows to prevent productive person/yrs from being lost to obvious causes (i.e. UN Millennium Development Goals and invoking sin-taxes), with an effort to offer pride-saving win-win situations. Re-educate said workforce. Determine optimum resource allocation towards civilization redundancy efforts based upon the (higher) economic growth projections that negative-externality accounting yields. Isolate states exporting anarchy or not attempting to participate in the globalized workforce. Begin measuring the purchasing-power-parity (PPP) adjusted annual cost of providing a Guaranteed Annual Income (GAI) in various nations (see the sketch after this list).
3) Brainstorming of the industries required to maximize longevity, and to handle technologies and wield social systems essential for safely transitioning first to a medical/health society, then to a leisure society.
4) Begin reworking bilateral and global trade agreements to reward actors who subsequently trend towards #3. Begin building a multilateral GAI fund to reward actors who initiate #5.
5) Mass education of society towards health/medical and other #3 sectors. Begin dispensing GAI to the poor who are trending towards education/employment relevant to #3 sectors.
6) Conversion of non-essential workforces to health/medical R+D and other #3 sectors. Hopefully the education GAI load will fall and the fund can focus upon growing to encompass a larger GAI population base in anticipation of the ensuing leisure society.
7) Climax of the medical/health R+D workforce.
8) Mature medical ethics needed. Mature medical AI safeguards needed. Education in all medical-AI-relevant sectors. Begin measuring AI medical R+D advances vs. human-researcher medical R+D advances.
9) Point of inflection where it becomes vastly more efficient to develop AI medical R+D systems than to educate researchers (or not, if something like real-time human trials bottlenecks software R+D). The subsequent surplus medical/health labour force necessitates a global GAI by now at the latest. AI medical R+D systems become a critical societal infrastructure, and human progress in the near term will be limited by the efficacy and safety (i.e. from computer viruses) of these programs.
10) Leisure society begins. Diminishing returns from additional resource allocations towards AI medical R+D. Maximum rate of annual longevity gains.
11) Intensive study of mental health problems in preparation for #13. Brainstorming of the surveillance infrastructures needed to wield engineering technologies as powerful as Drexler-ian nanotechnology. Living spaces will resemble the nested security protocols of a modern microbiology lab. Potentially powerful occupations and consumer goods will require increased surveillance. Brainstorming of metrics to determine the most responsible handlers of a #13 technology (I suggest something like the CDI Index as a ranking).
12) Design blueprints for surveillance tools like quantum-key encryption and various sensors must be ready either before powerful engineering technologies are developed, or be among the first products created using the powerful technology. To maintain security for some applications it may be necessary to engineer entire cities from scratch. Sensors should be designed to maximize human privacy rights. There is a heightened risk of WWIII from this period on until just after the technology is developed.
13) A powerful engineering technology is developed (or not). The risk of global tyranny is highest since 1940. Civilization-wide surveillance is achieved to ensure no WMDs are unleashed and no dangerous technological experiments are run. A technology like the ability to cheaply manufacture precision diamond products could unleash many sci-fi-ish applications, including interstellar space travel and the hardware required for recursively improving AI software (AGI). This technology would signal the end of capitalism and patent regimes. A protocol for encountering technologically inferior ETs might be required. Safe AGI/AI software programs would be needed before desired humane applications should be used. Need mature sciences of psychology and psychiatry to assist the benevolent administration of this technology. Basic human rights, goods and services should be administered to all wherever tyrannical regimes don't possess military parity.
14) Weaponry, surveillance, communications and spacecraft developed to expand the outer perimeter of surveillance beyond the Solar System. Twin objectives: to ensure no WMDs such as rogue AGI/AI programs, super-high-energy physics experiments, kinetic-impactor meteors, etc., are created; and to keep open the possibility of harvesting the resources required to harness the most powerful energy resources in the universe. The latter objective may require the development of physics experiments and/or AGI that conflicts with the former objective. The latter objective will require a GUT/TOE. Developing a GUT may require the construction of a physics experimental apparatus that should be safe to use. Need a protocol for dealing with malevolent ETs at approximate technological parity with humanity. Need a protocol to accelerate the development of dangerous technologies like AGI and Time Machines if the risks from these are deemed less than the threat from aliens; there are many game-theoretic encounter scenarios to consider. This protocol may be analogous to one for dealing with malevolent/inept conscious or software actors that escape the WMD surveillance perimeter.
16) If mapping the energy stores of the universe is itself safe/sustainable, or if using the technologies needed to do so is safe, begin expanding a universe energy-survey perimeter, treating those who attempt to poison future energy resources as pirates.
17) If actually harnessing massive energy resources, or using the technologies required to do so, is dangerous, a morality will need to be defined that determines the tradeoff of person/yrs lost vs. potential energy resources lost. The potential to unleash Hell Worlds, Heavens and permanent "in-betweens" is of prime consideration. Assuming harnessing massive energy resources is safe (doesn't end the local universe) and holds a negligible risk of increasing the odds of a Hell World or "in-betweens", I suggest at this point invoking a Utilitarian system like Mark Walker's "Angelic Hierarchy", whereby from this point on conscious actors begin amassing "survival credits". As safe energy resources dry up towards the latter part of a closed universe (or when atoms decay), trillions of years from now, actors who don't act to maximize this dwindling resource base will be killed to free up the resources required to later mine potentially uncertain/dangerous massive energy resources. Same thing if the risk of unleashing Hell Worlds or destroying reality is deemed too high to pursue mining the energy resource: a finite resource base suggests that those hundred-trillion-year-old actors with high survival-credit totals live closer to the end of the universe, as long as enforcing such a morality is itself not energy intensive. A Tipler-ian Time Machine may be the lever here; using it or not might determine the net remaining harvestable energy resources and the quality-of-living hazard level of taking different courses of action.
18a) An indefinite Hell World.
18b) An indefinite Heaven World.
18c) End of the universe for conscious actors, possibly earlier than necessary because of a decision that fails to harness a dangerous energy source. If enforcing a "survival credit" administrative regime is energy intensive, the Moral system will be abandoned at some point and society might degenerate into cannibalism.
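Item 2 above calls for measuring the PPP-adjusted annual cost of providing a GAI in various nations. Here is a minimal sketch of that bookkeeping in Python, assuming hypothetical nations, GAI targets, PPP conversion factors and recipient counts (every number below is made up for illustration only):

# Hypothetical inputs: nominal annual GAI target in local currency,
# a PPP conversion factor (local currency units per international dollar),
# and the number of eligible recipients. All values are illustrative.
nations = {
    # name: (annual_gai_local, ppp_factor, recipients)
    "NationA": (12_000.0, 1.0, 5_000_000),
    "NationB": (90_000.0, 18.0, 30_000_000),
}

for name, (gai_local, ppp, recipients) in nations.items():
    gai_intl = gai_local / ppp          # PPP-adjusted GAI per person per year
    total_intl = gai_intl * recipients  # total annual programme cost
    print(f"{name}: {gai_intl:,.0f} intl$/person/yr, "
          f"{total_intl:,.0f} intl$/yr total")

The point of the PPP adjustment is that comparing the fund's burden across nations at nominal exchange rates would overstate the cost of a GAI in poor countries; dividing by a PPP factor puts every nation's figure in comparable international dollars.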
For what it's worth, I'm posting my thoughts about the future of mankind on B.Goertzel's AGIRI forum tomorrow. The content may be of interest to the FHI.
Personally, I think the focus here on cognitive biases in decision-making is itself biased, in that it distracts from many other factors (education, info sources, personality, mild mental psychosis, the level of caffeine and sugar in one's blood, etc.). If it helps shed any light on the Popper-ian process of scientific consensus, I'll offer my own anecdote, with the suggestion that the process he hypothesizes affects much more than science:
I could not believe in 2006 that the Chicago Bears would lose to the Colts. Even though the Colts had previously beaten a scarier aerial attack and had a revamped defence, I thought the Bears would take it.
Whatever K.Popper was describing (I don't know how true it is), it is some sort of vindictive ego judgement call that extends far beyond science. Scientists are only highlighted here because they are falsely expected to be rational. In reality, their research is rational, but not the process whereby they weigh their research against the research of other scientists. The latter is contaminated by sociology of some sort.