Humans as social animals have a strong instinctual bias towards trusting conspecifics in prosperous times, which makes sense from a game-theoretic strengthen-the-tribe perspective. But I think that leaves us, as a collectively dumb mob of naked apes, entirely lacking a sensible level of paranoia in building ASI that has no existential need for pro-social behavior.
The one salve I have for hopelessness is that perhaps the Universe will be boringly deterministic and 'samey' enough that ASI will find it entertaining to have agentic humans wandering around doing their mildly unpredictable thing. Although maybe it will prefer to manufacture higher levels of drama (not good for our happiness).
It was a very frustrating conversation to listen to, because Wolfram really hasn't engaged his curiosity and done the reading on AI-kill-everyoneism. So we just got a torturous number of unnecessary and oblique diversions from Wolfram, who didn't provide any substantive foil to Eliezer.
I'd really like to find Yudkowsky debates with better-prepared AI optimists who try to counter his points. Do any exist?
It seems unlikely to me that there is potential to make large brain-based intelligence advances beyond the current best humans using evolved human biology. There will be distance-scaling limitations linked to neural signal speeds.
Then there is Jeff Hawkins' 'thousand brains' theory of human intelligence: that our brains are made up of thousands of parallel-processing cortical columns, each a few mm across and a few mm thick, with cross-communication and recursion etc. But that fundamental processing core probably isn't scalable in complexity, only in total number - your brain could perhaps be expanded to handle thinking about more things in parallel at once, but not at much higher levels of sophistication without paying a large coordination-speed price (and evolution places a premium on reaction speed for animals that encounter violence).
I look at whales and other mammals with much larger than human brains and wonder why they are not smarter - some combination of no evolutionary driver, and perhaps a lot of their neurons being dedicated to the delay-line processing needed for sonar and for controlling large bodies with long signaling delays.
If AI is a dominant part of our future, then it seems likely to me that, regardless of whether that future is human utopia or dystopia, non-transhuman humans will not exist in significant numbers in a few hundred years. Neural biology, and perhaps all biology, is going to be superseded as maladapted to the technological future.
Are any of the socio-economic-political-demographic problems of the world actually fixable or improvable in the time before the imminent singularity renders them all moot anyway? It all feels like bread-and-circuses to me.
The pressing political issues of today are unlikely to even be in the top-10 in a decade.
Fantastic life skill to be able to sleep in a noisy environment on a hard floor. Most Chinese can do it so easily, and I would frequently see kids anywhere up to 4-5 years old being carried sleeping down the road by guardians.
I think it's super valuable when it comes to adulthood and sharing a bed - one less potential source of difficulties, if adaptation to noisy environments when sleeping makes snoring a non-issue.
It is the literary, TV and movie references, a lot of stuff tied to the technology and social developments of the 80s-00s (particularly the Ankh-Morpork-situated stories), and a lot of classical allusions. 'Education' used to lean on common knowledge of a relatively narrow corpus of literature and history - Shakespeare, chivalry, European history, the classics etc - for the social advantage those common references gave, and was thus fed to boomers and gen X and Y, but I think it's now rapidly slipping into obscurity as few younger people read and schools shift away from teaching it in the face of all that's new in the world. I guess there are a lot of jokes that pre-teens will get, but so many that they will miss. Seems a waste of such delightful prose.
Yeah, powering through it. I've tried adult fiction and sci-fi but he's not interested in it yet - not grokking adult motivations, attitudes and behaviors yet - so I'm feeding him stuff that he enjoys to foster the habit of reading.
I've just started my 11yr old tech-minded son reading the Worm web serial by John McCrae (free and online, longer than the Harry Potter series). It's a bit grim/dark and violent, but an amazing and compelling sci-fi meditation on superheroes and personal struggles. A more brutal and sophisticated world-build along the lines of the popular My Hero Academia anime that my boys watched compulsively. 1000s of fanfics too.
Stories from Larry Niven's "Known Space" universe. Lots of fun overcoming-challenges short stories and novellas that revolve around interesting physics or problems or ideas. And the follow-up Man-Kzin Wars series by various invited authors has some really great stories too, with a strong martial bent that will likely appeal to most boys.
At that age I read and loved Dune, The Stars My Destination (aka Tiger! Tiger!, a sci-fi riff on The Count of Monte Cristo), and Ender's Game. I think Terry Pratchett's humor needs a more sophisticated adult knowledge base, with cultural references that are dating badly.
My 11yr old loved the Expanse TV series, though I haven't given them the books to read yet. And I can't recommend the transhumanism anime Pantheon on Amazon highly enough - it's one of the best sci-fi series of all time.
All good to introduce more adult problems and thinking to kids in an exciting context.
We definitely want our kids involved in at-times-painful activities as a means of increasing confidence, fortitude and resilience, steeling them against the trials and discomforts of later life. A lot of boys will seek it out as a matter of course in hobby pursuits, including martial arts.
I think there is also value in mostly not interceding in conflicts unless there is an established or establishing pattern of physical abuse. Kids learn greater social skills and develop greater emotional strength when they have to deal with the knocks and unfairness themselves, and rewarding tattle-telling type behavior with the exercise of parental power (or even attention) over the reported perpetrator creates some probably not-good crutch-like dynamics in children's play, stunting their learning of social skills.
I think it's generally not good for kids to have power over others even if that power is borrowed, as it often enables maliciousness in kids that are (let's face it) frequently little sociopaths trying to figure out how to gain power over others until they start developing more empathy in their teens. Their play interactions should be negotiated between them, not imposed by outside agents. Feign disinterest in their conflicts unless you see toxic dynamics forming. They should sort things out amongst themselves as much as possible.
For my boys (9 and 11) I'll only intercede if they are getting to the point of physical harm or danger, or if there is a violent response to an accidental harm (they must learn to control violent/vengeful impulses). But they frequently wrestle with each other in play. It is a challenge to balance with my 7yr old daughter though: lacking the physical strength of her older brothers, she works much harder to use parents as proxies to fight her conflicts.
Less cotton wool and helicopter parenting is mostly good.
"In many cases, however, evolution actually reduces our native empathic capacity -- for instance, we can contextualize our natural empathy to exclude outgroup members and rivals."
Exactly as it should be.
Empathy is valuable in close community settings - a 'safety net' adaptation that makes the community stronger among people we keep track of, so we can ensure we are not being exploited by those not making a concomitant effort to help themselves. But it seems to me that it is destructive at the wider social scales enabled by social media, where we don't or can't have effective reputation tracking to ensure that we are not being 'played' for the purpose of resource extraction by people making dishonest or exaggerated representations.
In essence, at larger scales the instinct towards empathy rewards dishonest, exploitative, sociopathic and narcissistic behavior in individuals, and is perhaps responsible for a lot of the deleterious aspects of social media, particularly amongst more naturally empathic-by-default women - e.g. 'influencers' (and before them exploitative televangelists) cashing in on follower empathy. It also rewards misrepresentations of victimhood/suffering for attention and approval - again in the absence of the more in-depth knowledge of the person that would exist in a smaller community. That may be a source of the rapid increase in 'social contagion' mental health pathologies, particularly amongst young women instinctually desirous of attention, most easily attained by inventing or exaggerating issues in the absence of other attributes that might garner attention.
In short the empathic charitable instinct that works so well in families and small groups is socially destructive and dysfunctional at scales beyond community level.
I read some years ago that the average IQ of kids is approximately 0.25*(Mom IQ + Dad IQ + 2x population mean IQ). So the simplest and cheapest means to lift population average IQ by 1 standard deviation is just to use +4 SD sperm (around 1 in 30,000), and high-IQ ova if you can convince enough genius women to donate (or clone, given the recent demonstration of male and female gamete production from stem cells). +4 SD mom and dad = +2 SD kids on average. This is the reality that allows ultra-wealthy dynasties to maintain a ~1.3 SD average IQ advantage over the general population by selecting (attractive/exciting) +4 SD mates.
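A minimal sketch of that rule of thumb in code (the formula implies children regress halfway from the midparent value back to the population mean; the 160-IQ inputs are just the +4 SD scenarios above at SD 15):

```python
POP_MEAN = 100  # population mean IQ

# Rule of thumb quoted above: child mean = 0.25*(mom + dad + 2*pop_mean),
# i.e. halfway between the midparent IQ and the population mean.
def expected_child_iq(mom_iq: float, dad_iq: float) -> float:
    return 0.25 * (mom_iq + dad_iq + 2 * POP_MEAN)

print(expected_child_iq(160, 160))  # +4 SD mom and dad -> 130.0, i.e. +2 SD kids
print(expected_child_iq(100, 160))  # average mom, +4 SD sperm -> 115.0, i.e. +1 SD
```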
Probably the simplest and cheapest thing you can do to lift population IQ over the long term is to explain this IQ-heritability reality to every female under the age of 40 and make it common knowledge, and a lot of them will choose genius sperm for themselves.
Beyond that intervention, which can happen immediately, there is little point in trying to do anything. In 20 years, when ASIs straddle the earth like colossi and we are all their slaves or pets, they will (likely even in best-case scenarios) be dictating our breeding and culling - or casually ignoring/exterminating us. In the optimistic event of a Banksian Culture-like post-singularity utopia, magic ASI tech will be developed to near-universally optimize human neural function via nootropics or genetic editing to reach a peak baseline (or domesticate us into compliant meat-bots). I think even a well-aligned ASI is likely to push this on us.
I think there is far too much focus on technical approaches, when what is needed is a more socio-political focus: raising money, convincing deep pockets of the risks to leverage smaller sums, buying politicians, influencers and perhaps other groups that can be coopted and convinced of existential risk, to put a halt to AI dev.
It amazes me that there are huge, well-financed and well-coordinated campaigns for climate, social and environmental concerns - trivial issues next to AI risk - and yet AI risk remains strictly academic/fringe. What is on paper a very smart community, embedded in perhaps the richest metropolitan area the world has ever seen, has not been able to create the political movement needed to slow things up. I think precisely because they are pitching to the wrong crowd.
Dumb it down. Identify large, easily influenceable demographics with a strong tendency to anxiety that can be most readily converted - most obviously teenagers, particularly girls - and focus on convincing them of the dangers; perhaps also teachers as a community, with their huge influence. But maybe also the elderly - the other stalwart group we see so heavily involved in environmental causes. It would have orders of magnitude more impact than the current cerebral elite focus, and history is replete with revolutions borne out of targeting the conversion of teenagers to drive them.
"They cannot just add an OOM of parameters, much less three."
How about 2 OOMs?
HW2.5: 21 Tflops. HW3: 2 x 72 Tflops (redundant), so 72 Tflops effective. HW4: 3 x 72 = 216 Tflops (not sure about redundancy). And Elon said in June that the next-gen AI5 chip for FSD would be about 10x faster, say ~2 Pflops.
By rough approximation to brain processing power you get about 0.1 Pflop per gram of brain, so HW2.5 might have been a 0.2 g baby-mouse brain, HW3 a 1 g baby-rat brain, HW4 perhaps an adult rat, and the upcoming HW5 a 20 g small-cat brain.
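The same mapping as a toy calculation (all figures are the order-of-magnitude guesses quoted here, not official specs):

```python
PFLOP_PER_GRAM = 0.1  # the rough brain-equivalence rule of thumb above

hw_pflops = {
    "HW2.5": 0.021,  # 21 Tflops
    "HW3":   0.072,  # 72 Tflops effective (redundant pair)
    "HW4":   0.216,  # 3 x 72 Tflops
    "HW5":   2.0,    # ~10x, per Elon's June comment
}

for name, pflops in hw_pflops.items():
    print(f"{name}: ~{pflops / PFLOP_PER_GRAM:.1f} g of brain equivalent")
# HW2.5 ~0.2 g (baby mouse), HW3 ~0.7 g (young rat),
# HW4 ~2.2 g (adult rat), HW5 ~20 g (small cat)
```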
As a real-world analogue, cat to dog (25-100 g brain) seems to me the minimum necessary range of complexity, based on behavioral capabilities, to do a decent job of driving - you need some ability to anticipate and predict the motivations and behavior of other road users, and something beyond dumb reactive handling (i.e. somewhat predictive) to understand the anomalous objects that exist on and around roads.
Nvidia Blackwell B200 can do up to about 10 Pflops of FP8, which is getting into the large-dog/wolf brain processing range, and it wouldn't be unreasonable to package in a self-driving car once it's down closer to manufacturing cost in a few years, at around 1 kW peak power consumption.
I don't think the rat-brain HW4 is going to cut it, and I suspect that internally Tesla knows it too, but it's going to be crazy expensive to own up to it - better to keep kicking the can down the road with promises until they can deliver the real thing. AI5 might just do it, but it wouldn't be surprising to need a further OOM, to Nvidia Blackwell equivalent, and maybe $10k of extra cost to get there.
There has been a lot of interest in this going back to at least early this year and the 1.58-bit LLM (ternary logic) paper https://arxiv.org/abs/2402.17764 , so I expect there has been a research gold rush, with a lot of design effort going into producing custom hardware almost immediately after that was revealed.
With the Nvidia dual-chip GB200 Grace Blackwell offering (sparse) 40 Pflops of FP4 at ~1 kW, there has already been something close to optimal hardware available - that FP4 performance may be a reason the latest-generation Nvidia GPUs are in such high demand; previous generations haven't offered it as far as I am aware. For comparison, a human brain is likely equivalent to 10-100 Pflops, though estimates vary.
Being able to up the performance significantly from a single AI chip has huge system cost benefits.
All suggesting that the costs for AI are going to drop yet again, and human-level AGI operating costs are going to be measured in cents per hour when it arrives in a few years' time.
The implications for autonomous robotics are likely tremendous, with potential OOM power savings likely to bring far more capable systems to smaller platforms: home robotics, FSD cars, and (scarily) military murderbots. Tesla has (according to Elon's comments) a new HW5 autonomy chip coming out next year that is ~50x faster than their current FSD development baseline, the HW3 2 x 72 Tflop chipset, but needs closer to 1 kW of power, so they will be extremely keen on implementing something that could save so much power.
AI safety desperately needs to buy in or persuade some high-profile talent to raise public awareness. The business-as-usual approach of the last decade is clearly not working - we are sleepwalking towards the cliff. Given how timelines are collapsing, the problem to be solved has morphed from a technical one to a pressing social one - we have to get enough people clamouring for a halt that politicians will start to prioritise appeasing them ahead of their big tech donors.
It probably wouldn't be expensive to rent a few high-profile influencers with major reach amongst impressionable youth - a demographic that is easily convinced to buy into and campaign on end-of-the-world causes.
Current Nvidia GPU prices are highly distorted by scarcity, with profit margins that are reportedly in the 80-90% of sale price range: https://www.tomshardware.com/news/nvidia-makes-1000-profit-on-h100-gpus-report
If these were commodified to the point that scarcity didn't influence price, then that $/flop point would seemingly leap up by an order of magnitude to above 1e15 flops/$1000, scraping the top of that curve - i.e. near brain-equivalent computational power at $3.5k manufactured hardware cost - and the latest Blackwell GPU has lifted that performance by another 2.5x with little extra manufacturing cost. Humans as useful economic contributors are so screwed; even with successful alignment the socioeconomic implications are beyond cataclysmic.
I'm going through this too with my kids. I don't think there is anything I can do educationally to better ensure they thrive as adults other than making sure I teach them practical/physical build and repair skills (likely to be the area where humans with a combination of brains and dexterity retain useful value longer than any other).
Outside of that, the other thing I can do is try to ensure that they have social status and a financial/asset nest egg from me, because there is a good chance that the egalitarian ability to lift oneself through effort is going to largely evaporate as human labour becomes less and less valuable, and I can't help but wonder how we are going to decide who gets the nice beach house. If humans are still in control of an increasingly non-egalitarian world, then society will almost certainly slide towards its corrupt old aristocratic/rentier ways, and it becomes all about being part of the Nomenklatura (communist elites).
[disclaimer: I am a heat pump technology developer, however the following is just low-effort notes and mental calcs of low reliability, they may be of interest to some. YMMV]
It may be better to invest in improved insulation.
As a rough rule of thumb, COP = eff * T_heat/(T_heat - T_cold), with temperatures measured in absolute degrees (Kelvin or Rankine). eff for most domestic heat pumps is in the range 0.35 to 0.45; high-quality European units are often best for COP due to a long history of higher power costs, but they are very expensive, frequently $10-20k.
Looking at the COP for the unit you quoted, the eff is only about 0.25 at rated conditions - not good, unless you get a much larger unit and run it at a lower-power, more efficient load point.
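A minimal sketch of that rule of thumb; the 35°C flow / 0°C outside-air condition is my illustrative assumption, not the rated condition of any particular unit:

```python
# COP = eff * T_heat / (T_heat - T_cold), temperatures in Kelvin.
def cop(eff: float, t_heat_c: float, t_cold_c: float) -> float:
    t_heat_k = t_heat_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return eff * t_heat_k / (t_heat_k - t_cold_k)

print(cop(0.40, 35.0, 0.0))  # a good unit: COP ~3.5
print(cop(0.25, 35.0, 0.0))  # eff ~0.25 as inferred above: COP ~2.2
```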
That's a pretty huge electricity price, about 4.5x the gas price (which is distorted-market nuts; 3x is more usual globally). Given that differential it might be better to look at an absorption heat pump like https://www.robur.com/products/k18-simplygas-heat-pump that gives up to 1.7x gas heat - though they look to be on the order of $10k.
Here's an annoying fact: if you ran that $2/therm gas (~$0.07/kWh) through a reasonably efficient (~40%) natural gas genset, it would produce electricity cheaper than what you currently pay for power, and you would have 2/3 of the gas energy left over as heat. A genset in your neighbourhood could provide a few tens of houses with cheaper electricity and low-cost waste heat, though that's no doubt prevented by regulatory issues. There are a few small combined heat and power (CHP) domestic units on the market, but they tend to be very expensive - more tech-curios than economically sensible.
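A quick check of that genset arithmetic, using the prices quoted above (the 40% electrical efficiency is the assumed figure from this comment):

```python
KWH_PER_THERM = 29.3
gas_price = 2.00 / KWH_PER_THERM   # ~$0.068 per kWh of gas
grid_price = 4.5 * gas_price       # ~$0.31 per kWh from the grid, as quoted

genset_eff = 0.40                  # assumed electrical efficiency
genset_fuel_cost = gas_price / genset_eff  # ~$0.17 per kWh of electricity
waste_heat = 1 - genset_eff                # ~60% of gas energy usable as heat

print(f"grid ${grid_price:.2f}/kWh vs genset fuel ${genset_fuel_cost:.2f}/kWh")
```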
Niron's Fe16N2 looks to have a maximum energy product (the figure of merit for magnet 'strength') of up to 120 MGOe at microscopic scale, which is about double that of neodymium magnets (~60 MGOe); however, only 20 MGOe has been achieved in fabrication. https://www.sciencedirect.com/science/article/am/pii/S0304885319325454
Processing at 1 GPa and 200°C isn't that difficult if there is commercial benefit - synthetic diamonds are made in special pressure vessels at 5 GPa and 1500°C. There is some chance that someone will figure out a processing route that achieves bulk crystal orientations and unlocks higher energy products - the potential payoff is huge. I expect AGI and ASI will figure out a lot of major improvements in materials science over the next 1-2 decades.
I read of a proposal a few months back to achieve brain immortality via introduction of new brain tissue, done in a way that maintains continuity of experience and personality over time (Replenisens - a discussion of a system for doing it in human brains). That would perhaps provide a more reliable vector for introduction, as the brain is progressively hybridised with a more optimal neural genetic design. Perhaps this could be done more subtly via introduction of 'perfected' stem cells and then some way of increasing the rate of die-off of old cells.
Instead of gene editing, could you just construct a 'perfect' new chromosome and introduce one or more instances of it into existing neurons via viral injection techniques, to increase expression of beneficial factors? No particular reason why we can only have 46 chromosomes, and this would perhaps side-step difficulties to do with gene editing. It might be a more universal solution too, if we could come up with a single or small variety of options for a 'super' brain-optimising added chromosome.
Politically the way to pitch it would be for its life saving/enhancement ability - offered for example to people with low intelligence and educational outcomes to offer them a better chance at happiness in life.
"So your job depends on believing the projections about how H2 costs will come down?"
I wouldn't waste my life on something I didn't see as likely - I have no shortage of opportunities in a wide variety of greentech fields. Hydrogen is the most efficient fuel-storage 'battery', with 40-50% round-trip energy storage efficiency possible. Other synthetic fuels are less efficient but may be necessary for longer-term storage or smaller applications. For shipping and aviation, however, LH2 is the clear and obvious winner.
Desert PV will likely come down in price to a consistent ~$0.01-0.02/kWh in the next decade with the impact of AI on manufacturing, installation and maintenance costs (a few large PV installations are already contracted in this cost range). And electrolysis and liquefaction tech are on track to yield the stated $1.50/kg (learning curves are magic). That 'stranded' desert PV power needs to be delivered to far-distant users, and hydrogen pipelines or shipping provide the most realistic option for doing that.
Capturing carbon for synthetic hydrocarbons is not a trivial issue/cost, and round-trip energy storage efficiencies for synthetic hydrocarbons are worse than for hydrogen. There will still be some applications where they make the most sense. Ammonia might work too, though it also needs hydrogen feedstock and is often lethal when inhaled.
But in general I see a pretty clear path to renewable hydrogen undercutting fossil fuels on cost in the next decade or two, and from there a likely rapid decline in their use - so reasons for optimism about the energy part of our civilisational tech stack at least, without breakthroughs in nuclear being needed.
Battery-augmented trains: given normal EV use examples (Tesla et al, and the Tesla Semi), a charging time of 10% of usage time is relatively normal - e.g. charging for 20 minutes and discharging for 3 hours, or in the Tesla Semi's case more like an hour for 8-10 hours of operation - and trains have lower drag (and less penalty for weight) than cars or trucks, so will go further on the same amount of energy. The idea is therefore that you employ a pantograph multi-MW charging system on the train that only needs to operate about 10% of the time. This may reduce electrification capital and maintenance costs. Another option would be battery engines that can join up with or disconnect from trains in motion, between charging cycles in sidings.
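Toy numbers for that duty-cycle idea; the 2 MW average draw and 3-hour discharge are illustrative assumptions, not real traction figures:

```python
avg_traction_mw = 2.0  # assumed average power draw of the train
charge_duty = 0.10     # fraction of running time spent under wires

charge_power_mw = avg_traction_mw / charge_duty  # 20 MW pantograph charging
battery_hours = 3.0                              # running time between charges
battery_mwh = avg_traction_mw * battery_hours    # ~6 MWh pack

print(f"charge at ~{charge_power_mw:.0f} MW, pack of ~{battery_mwh:.0f} MWh")
```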
CATL (the largest battery producer in the world) is innovating heavily on sodium-ion batteries and has stated that it believes cost per kWh can come down to $40/kWh as it scales up manufacture of its second generation: "CATL first-generation sodium-ion cells cost about 77 USD per kWh, and the second generation with volume production can drop to 40 USD per kWh."
I work professionally developing liquid-hydrogen-fueled transport power technology. It's very practical for some applications, particularly aircraft and ships, which I expect will transition to hydrogen in the next 2-3 decades, and possibly trains and agriculture. Using low-cost power sources such as grid-scale PV in low-cost desert regions, the price is expected to come down over the next 1-2 decades to being competitive with or even undercutting fossil fuels (commonly expected/roadmapped to be ~$1.50/kg H2, which translates to about $0.10/kWh of useful output power, similar to fossil fuels before taxes). This is likely the only realistic route to fully renewable power for human civilisation - producing in the cheapest sunny or windy areas and using at high latitudes and through winters. But LH2 is difficult to transport, transfer and employ at small scales, due to the lack of economic cryocooling and to the complexity, insulation and cost-scaling (r² vs r³) issues with tanks (LH2 and GH2 are very low density), and GH2 is even worse for non-pipeline distribution. So a more convenient, dense, and cheaply stored long-term energy carrier such as ammonia or synthetic hydrocarbons, made using future cheap hydrogen feedstocks, may be a better option - especially for off-road uses.
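A sanity check of that $1.50/kg to ~$0.10/kWh conversion; the LHV figure is standard, while the ~45% fuel-to-useful-work efficiency is my assumption rather than a roadmap number:

```python
H2_LHV_KWH_PER_KG = 33.3  # lower heating value of hydrogen
h2_price = 1.50           # $/kg, the roadmapped price quoted above
conversion_eff = 0.45     # assumed fuel-to-useful-work efficiency

useful_kwh_per_kg = H2_LHV_KWH_PER_KG * conversion_eff  # ~15 kWh/kg
print(f"~${h2_price / useful_kwh_per_kg:.2f} per useful kWh")  # ~$0.10
```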
Agriculture is particularly intractable: cropping farmers in my region frequently run 100+ hours a week, using many MW of power, during harvest, which is a terrible problem for power economics without exceptionally cheap energy storage - liquid fuel is probably the only economic option.
And yes, I am well aware of mining electrification, but I am very suspicious of its utility given that many cases will see it powered by fossil-fuel generation. Likely a lot of it is PR greenwashing rather than effective CO2 reduction.
Battery electric trains with a small proportion of electrified (for charging) sections seem like a decent and perhaps more economic middle ground. Could get away with <10% of rail length electrified, and sodium batteries are expected to come down to ~$40/kWh in the next few years. High-utilisation batteries that are cycled daily or multiple times a day have lower capital costs. May also work for interstate trucking.
Earth-moving electrification is probably the last application that makes sense or needs focusing upon, due to the high capital costs of electrification and low equipment utilisation (a lot of it spends only a few percent of the year being used), as well as the difficulty of getting electric power to it in hard-to-access off-grid locations.
Farm equipment is more important, but the economics are incredibly bad due to high peaks (using up to several MW to run a few large machines for a few days, several times a year, for cropping) combined with very low average utilisation.
Both of these are probably best served long term by some renewable liquid fuel and IC engines, for high power, low mass-fuelling and low capital costs: synthetic hydrocarbons, ammonia or liquid hydrogen.
Insufficient onboard processing power? Tesla's HW3 computer is about 70 Tflops, ~0.1% of an estimated 100 Pflops human brain - approximately equivalent to a mouse brain. Social and predator mammals that have to model and predict conspecific and prey behaviors have brains that generally start at about 2% of human scale for cats, up to 10% for wolves.
I posit that driving adequately requires modelling, interpreting and anticipating other road users' behaviors to deal with a substantial number of problematic and dangerous situations - like noticing erratic or non-rule-compliant behavior in pedestrians and other road users and adjusting the response, or negotiating rule-breaking at intersections or obstructions with other road users - and that this is orders of magnitude harder than strict physics and road-rule adherence. This is the inevitable edge-case issue: there are simply too many odd possibilities to train for, and greater innate comprehension of human behaviors is needed.
Perhaps phone-a-smarter-AI-friend escalation of the outside-of-context problems encountered can deal with much of this, but I think cars need a lot more onboard smarts to do a satisfactory job. A cat-to-small-dog-level H100 at ~4 Pflops FP8 might eventually be up to the task.
Elon's recent announcements about end-to-end neural-net processing for the V12.xx FSD release being the savior and solution appear not to have panned out - there have been months of silence after the initial announcement. That may have been their last toss of the die for HW3 hardware. Their next-iteration HW4 chip being fitted to new cars is apparently ~8x faster at 600 Tflops [edit: the source I found appears unreliable; specs not yet public]. They really wouldn't want to do that if they didn't think it was necessary.
Global compliance is the sine qua non of regulatory approaches, and there is no evidence of the political will to make that happen being within our possible futures, unless some catastrophic but survivable casus belli happens to wake the population up - as with Frank Herbert's Butlerian Jihad. (Irrelevant aside: Samuel Butler, who wrote of the dangers of machine evolution and supremacy, lived in the 19th century at what became the film location for Edoras in the Lord of the Rings films.)
Is it insane to think that a limited nuclear conflict (as seems an increasingly likely possibility at the moment) might actually raise humanity's chances of long-term survival - if it disrupted global economies severely for a few decades and in particular messed up chip production?
Any attempts at regulation are clearly pointless without global policing. And China (as well as the lesser threats of Russia, Iran and perhaps North Korea) is not going to comply, no matter what they might say to your face if you try to impose it. These same issues were evident during attempts to police nuclear proliferation and arms-reduction treaties during the cold war, even when both sides saw benefit in it. For AI they'll continue development in hidden or even mobile facilities.
It would require a convincing threat of nuclear escalation, or possibly a rock-solid economic blockade/embargo of non-transparent regimes, to make the necessary all-the-time-and-everywhere access compliance policing happen. The political will for these options is nil. No politician wants to escalate tensions; they are highly risk-averse, and they are not going to be able to sell that to the public in a democracy. Our democracies do not select for leaders capable of holding such a line.
So without instituting liberal democracies everywhere, with leadership that genuinely puts humanity's future ahead of national interests, this line of seeking to slow development to research alignment via regulation seems rather pointless.
Humanity needs a colossal but survivable AI scare ASAP if it is to develop the collective will to effectively regulate AI development, and not sleepwalk its way off the cliff edge of extinction, as seems to be our current lazy and disinterested path.
Nothing wrong with the universe - from an anthropic perspective it's pretty optimal. We just have most humans running around with much of their psychology evolved to maximize fitness in highly competitive, resource-limited hunter-gatherer environments, including a strong streak of motivating unhappiness with regard to things like social position, feelings of loneliness, adequacy of resources, unavailability of preferred sex partners, chattel ownership/control, and relationships, plus a desire to beat and subjugate our most dangerous competitors to get more for ourselves (men wanting to take down the men in the tribe next door who are likewise planning to murder them; women wanting to take down women outside of their immediate friend/family circle who compete for essential resources they need for their kids). We are designed to be to some degree miserable and discontent, with maladapted compulsions to make us work harder on things that don't have immediate payoff, but that do improve our chances of survival and successful procreation over the long term.
The fix would be hacking human brains (benignly) and figuring out how to rewire the innate negative psychological drives to enable greater contentment in a post-scarcity technological world. There's a good chance that will become possible post-singularity (if humans aren't all dead).
Interesting.
As counterpoint, from a 50-year-old who has struggled with meaning, direction and dissatisfaction with outcomes (top 0.1% ability without, as yet, personally satisfactory results): I have vague recollections of my head-strong teen years, when I felt my intellect was indomitable and I could master anything and determine my destiny through force of will. But I've slowly come to the conclusion that we have a lot less free will than we like to believe, and most of our trajectory and outcomes in life are set by neurological determinants - instincts and traits that we have little to no control over. Few if any have the drive necessary (another innate attribute) to overcome or modify their base tendencies over the long term - a sisyphean endeavour at best, and the likely reason we see such heavy use of nootropic drugs.
Without an (almost literal) gun to your head, you can only maintain behavioural settings at odds with your likely genetically innate preferences for short periods - months, perhaps years, but not decades. 'Fake it till you make it' in personality modification mostly doesn't work; in my experience people (myself included) almost always fall back to their factory settings. Choose your goals, careers and partners accordingly, follow your interests, and try to find work environments where others' strengths can compensate for the deficits you perceive in yourself - e.g. working with some highly conscientious types is a massive boon if you are ADHD, and probably leads to greater satisfaction and better results.
I don't think it's mild. I'm not American, but follow US politics with interest.
A majority of blue-collar/conservative US now sees the govt as implacably biased against their interests and communities - see, e.g., recent polling on attitudes towards the DOJ and FBI: https://harvardharrispoll.com/wp-content/uploads/2023/05/HHP_May2023_KeyResults.pdf
There is a widespread perception that the rule of law has failed at the top levels at least, with politically motivated prosecutions (the timing of the stacked Trump indictments is clearly motivated by his candidacy) and favored treatment of Hunter (a majority of the US population sees corruption in Biden's VP-era doings).
And worst of all, a substantial fraction of the US does not believe the results of the 2020 elections were legitimate. In recent polling: "61% of Americans say Biden did legitimately win enough votes to win the presidency, and 38% believe that he did not".
The traditionally middle-seeking and sense-making media has become implacably polarised, with steadfast silence on, or spinning of, stories inconvenient to their 'side'.
The greatest concern must be the rate of descent into extreme political and institutional tribalism, and differential treatment on that basis. Long term, that can only lead to a violent collapse in the rule of law, or to repressive policing.
The only antidote is dogmatic enforcement of even-handed treatment at all levels; govt employees must be seen to be scrupulously non-partisan or everything will inevitably fall apart.
Ruminations on this topic are fairly pointless, because so many of the underpinning drivers are clearly subject to imminent enormous change. Within 1-2 generations, technology like life extension, fertility extension, artificial uteruses, superintelligent AI, AI teachers and nannies, and transhumanism will render meaningless today's concerns that currently seem dominating and important (if humans survive that long). Existential risk and the impacts of AI are really the only issues that matter. Though I am starting to think that the likely inevitable next generation of MAD - space-based nukes hanging like an undetectable sword of Damocles over our heads - is scary as hell too: coordinated global first strikes with only 2-3 seconds between detectability and detonation of stealth bombs re-entering from far-distant orbits at >10 km/s.
But even in their absence, if there were close to a technology-level freeze starting now, evolution would move in to fill the gap. Within a few generations, individuals and cultures whose psychology or rules moved them to have more kids - e.g. religious fundamentalists, the highly impulsive, those with strong maternal or paternal drives, and strongly patriarchal cultures (looking around the world, places that treat women like shit seem to have higher fertility) - would steadily grow to dominate the gene pool.
Purported video of a fully levitating sample from a replication effort; sorry, I do not have any information beyond this twitter link. But if it is not somehow faked or misrepresented, it seems a pretty clear demonstration of the flux-pinned Meissner effect with no visible evidence of cooling. [Edit: slightly more detail on the video - "Laboratory of Chemical Technology"]
Seems quite compelling - most previous claims of high-temperature superconductivity have been based on seeing only dips in resistance curves, not the full array of superconducting behaviours recounted here, and the sample preparation instructions are very straightforward. If it works we should see replication in a few days to weeks (that alone suggests it's not a deliberate scam).
The critical field strength stated is quite low - only about 25% of what is seen in a neodymium magnet - and it's unclear what the critical current density is. But if the field reported is as good as it gets, then it is unlikely to have much benefit for motor design, with B²-dependent torque densities <10% of conventional designs, unless the applications are not mass/cost sensitive (wind turbines replacing permanent magnets?).
The Meissner effect could be useful for some levitation designs (floating houses, hyperloop, toys?), and likely for some novel space applications: magnetic sails, perhaps passive magnetic bearings for infinite-life reaction control wheels, and maybe some ion propulsion applications. But likely the biggest impacts will be in digital and power electronics, with ultra-high-Q inductors, higher-efficiency transformers, and maybe data processing devices.
It might be transformative for long distance renewable power distribution.
[Edit to add link to video of Meissner effect being demonstrated]
The Meissner effect video looks like the real deal. The imperfect disk sample is pushed around the surface of a permanent magnet and tilts over to align with the local field vector as it gets closer to the edge of the cylindrical magnet's end face. Permanent magnets in repulsive alignment are not stable in such arrangements (Earnshaw's theorem) - they would just flip over - and diamagnetism in conventional materials (graphite is the strongest) is too weak to do what is shown. The tilting shows the hallmarks of flux pinning working to maintain a consistent orientation of the superconductor with the ambient magnetic field, which is a unique feature of superconductivity. There is no evidence of cooling in the video.
If this is not being deliberately faked then I'd say this is a real breakthrough.
Desalination costs are irrelevant to uranium extraction. Uranium is absorbed in special plastic fibers arrayed in ocean currents, which are then post-processed to recover the uranium - it doesn't matter how many cubic km of water must pass the fiber mats to deposit the uranium, because that process is, like wind, free. The economics have been demonstrated in pilot-scale experiments at the ~$1000/kg level - easily cheap enough to make uranium an effectively inexhaustible resource at current civilisational energy consumption levels, even after we run out of easily mined resources. There is lots of published research on this approach (as is to be expected when it is nearing cost-competitiveness with mining).
Seems likely. Neurons only last a couple of decades - memories older than that are reconstructions: things we recall frequently, or useful skills. If we live to be centuries old, it is unlikely that we will retain many memories going back more than 50-100 years.
In the best envisaged 500 GW-day/tonne fast-breeder reactor cycles, 1 kg of uranium can yield about $500k of (cheap) $40/MWh electricity.
The cost of seawater extraction of uranium (done using ion-selective absorbing fiber mats in ocean currents) is currently estimated, using demonstrated tech, to be less than $1000/kg - not yet competitive with conventional mining, but anticipated to drop closer to $100/kg, which would be. That is a trivial fraction of power production costs. It is even now viable with hugely wasteful pressurised-water uranium cycles, and long term, with fast-reactor cycles, there is no question of its economic viability. It could likely power human civilisation for billions of years with replenishment from rock erosion.
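A quick check of those numbers, treating the burnup as delivered electricity as this comment does:

```python
burnup_gwd_per_tonne = 500
mwh_per_kg = burnup_gwd_per_tonne * 24  # GWh/tonne equals MWh/kg: 12,000 MWh/kg
value_per_kg = mwh_per_kg * 40          # at $40/MWh: ~$480k of electricity per kg

extraction_cost = 1000                  # $/kg from seawater, demonstrated tech
print(value_per_kg)                     # ~480,000, i.e. the "$500k" above
print(extraction_cost / value_per_kg)   # ~0.2% of the electricity's value
```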
A key problem for nuclear build costs is the mobility of skilled workforces. 50 years ago skilled workers could be attracted to remote locations to build plants, bringing families with them as sole-income families; nowadays economics and lifestyle preferences make it difficult to find people willing to do that, meaning very high-priced fly-in-fly-out itinerant workforces, such as are seen in the oil industry.
The fix: coastal nuclear plants, built and decommissioned in specialist shipyards, floated to their operating spot and emplaced on the sea bed - preferably at seabed depths >140 m (the ice-age minimum). Staff flown or ferried in and out (e-VTOL). (Rare) accidents can be dealt with by sea-water dilution, and if there is a civilizational cataclysm we don't get left with multi-millennia death zones around decaying land-based nuclear reactors.
It goes without saying that we should shift to fast reactors, for efficiency and hugely lower long-term waste production. To produce 10 TW of electricity (enough to provide first-world living standards to everyone) would take about 10,000 tonnes a year of uranium in 500 GW-day/tonne fast reactors - less than 20% of current uranium mining.
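And the fleet-scale arithmetic, under the same electricity-burnup assumption:

```python
power_gw = 10_000                     # 10 TW of electricity
gwd_per_year = power_gw * 365         # 3.65e6 GW-days of output per year
tonnes_per_year = gwd_per_year / 500  # at 500 GW-days/tonne burnup
print(tonnes_per_year)  # ~7,300 t/yr; conversion losses push this toward
                        # the "about 10,000 tonnes" figure above
```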
Waste should be stuck down many-km-deep holes in abyssal ocean floors, dug using oil-industry drilling rigs and capped with concrete. There is no flux of water through the ocean-bed floor, and local water pressures are huge, so nothing will ever be released into the environment - no chance of any bad impacts ever (aside from volcanism, which can be avoided). A permanent, perfect solution that requires no monitoring after creation.
It appears that AI existential risk is starting to penetrate the consciousness of the general public in an 'it's not just hyperbole' way.
There will inevitably be a lot of attention-seeking influencers (not a bad thing in this case) who will pick up the ball and run with it now, and I predict the real-life Butlerian Jihad will rival the climate change movement in size and influence within 5 years, as it has all the attributes of a cause that presents commercial opportunity to the unholy trinity of media, politicians and academia, which have demonstrated an ability to profit from other scares. Not to mention vast hordes of people fearful of losing their careers.
I expect that AI will indeed become highly regulated in the next few years, in the west at least. It remains to be seen what will happen with regard to non-democratic nations.
Humans generally crave acceptance by peer groups and are highly influenceable; this is more true of women than men (higher trait agreeableness), likely for evolutionary reasons.
As media and academia shifted strongly towards messaging and positively representing LGBT over the last 20-30 years, reinforced by social media with a degree of capture of algorithmic controls by people with strongly pro-LGBT views, they have likely pulled mean beliefs and expressed behaviours beyond what would be innately normal in a more neutral, non-proselytising environment absent the pressures they impose.
International variance in levels of LGBT-ness across cultures is high, even amongst countries where social penalties are (probably?) low. The cultural promotion aspect is clearly powerful.
https://www.statista.com/statistics/1270143/lgbt-identification-worldwide-country/
I think cold war incentives with regard to tech development were atypical. Building 1000s of ICBMs was incredibly costly, and neither side derived any benefit from it; it was simply defensive matching to maintain MAD, and both sides were strongly motivated to enable mechanisms to reduce numbers and costs (the START treaties).
This is clearly not the case with AI, which is far cheaper to develop, easier to hide, and has myriad lucrative use cases. Policing a Dune-style "thou shalt not make a machine in the likeness of a human mind" Butlerian Jihad (interesting aside: Samuel Butler was a 19th-century anti-industrialisation philosopher/shepherd who lived at Erewhon in NZ ('nowhere' backwards), a river valley that featured as Edoras in the LOTR trilogy) would require radical openness to inspection, everywhere, all the time, which almost certainly won't be feasible without the establishment of liberal democracy basically everywhere in the world. Despots would be a magnet for rule-breakers.
IQ is highly heritable. If I understand this presentation by Steven Hsu correctly [https://www.cog-genomics.org/static/pdf/ggoogle.pdf slide 20], he suggests that mean child IQ relative to the population mean is approximately 60% of the distance from the population mean to the parental average IQ. E.g. Dad at +1 SD and Mom at +3 SD gives children averaging about 0.6*(1+3)/2 = +1.2 SD. This basic eugenics gives a very easy/cheap route to lifting the average IQ of children born by about 1 SD, by using +4 SD sperm donors. There is no other tech (yet) that can produce such gains as old-fashioned selective breeding.
It also explains why rich dynasties can maintain an average IQ about +1 SD above the population in their children - by always being able to marry highly intelligent mates (attracted to the money/power/prestige).
Over what time window does your assessed risk apply - e.g. 100 years, 1000? Does the danger increase or decrease with time?
I have a deep concern that most people have a mindset warped by human pro-social instincts/biases. Evolution has long rewarded humans for altruism, trust and cooperation; women in particular have faced evolutionary pressure to be open and welcoming to strangers, to aid in surviving conflict and other social mishaps, men somewhat the opposite [see e.g. "Our Kind", a mass-market anthropological survey of human culture and psychology]. Which of course colors how we view things deeply.
But in my view evolution strongly favours Vernor Vinge's "aggressively hegemonizing" AI swarms ["A Fire Upon the Deep"]. If AIs have agency, freedom to pick their own goals, and the ability to self-replicate or grow, then those that choose rapid expansion as a side-effect of any pretext 'win' in evolutionary terms. This seems basically inevitable to me over the long term. Perhaps we can get some insurance by learning to live in space. But at a basic level it seems to me that there is a very high probability that AI wipes out humans over the longer term, based on this very simple evolutionary argument, even if initial alignment is good.
Given the near certainty that Russia, China and perhaps some other despotic regimes will ignore this, does it:
1. help at all?
2. actually make the world less safe (if one of these countries gains a significant military AI lead as a result)?
I suspect that humans will turn out to be relatively simple to encode - quite small amounts of low-resolution memory that we draw on, with detailed understanding maps, smaller than the LLMs we're creating. Added to which there is an array of motivational factors that will be quite universal, but of varying levels of intensity in different dimensions for each individual.
If that take on things is correct, then it may be that emulating a human - by training a skeleton AI, using constant video streaming etc. over a 10-20 year period (about how long neurons last before replacement), to optimally predict the behaviour of the human being modelled - will eventually arrive at an AI with almost exactly the same beliefs and behaviours as the human being emulated.
Without physically carving up brains and attempting to transcribe synaptic weightings etc., that might prove the most viable means of effective uploading and of creating highly aligned AI with human-like values. And it would perhaps create something closer to being our true children-of-the-mind.
For AGI alignment: it seems like there will, at minimum, need to be perhaps multiple blind and independent hierarchies of increasingly smart AIs continually checking and assuring that the next level of AIs up is maintaining alignment, with active monitoring of activities - because as AIs get smarter, their ability to fool monitoring systems will likely grow as the relative gulf between monitored and monitoring intelligence grows.
I think a wide array of AIs is a bad idea. If there is a non-zero chance that an AI goes 'murder clippy' and ends humans, then that probability is additive - more independent AIs = higher chance of doom.
I don't think there is any chance of a malign ASI killing everyone off in less than a few years, because it would take a long time to reliably automate the mineral extraction, manufacturing processes and power supplies required to guarantee the ASI its survival and growth objectives (assuming it is not suicidal). Building precise stuff reliably is really, really hard; robotics and many other elements of the needed infrastructure are high-maintenance and demanding of high-dexterity maintenance agents, and the tech base required to support current leading-edge chip manufacturing probably couldn't be supported by less than a few tens to a hundred million humans - that's a lot of high-performance meat-actuators and squishy compute to supplant. Datacenters and their power supplies and cooling systems, plus myriad other essential elements, will be militarily vulnerable for a long time.
I think we'll have many years to contemplate our impending doom after ASI is created. Though I wouldn't be surprised if it quickly created a pathogenic or nuclear gun to hold to our collective heads, to prevent our interfering with or interrupting its goals.
I also think it won't be that hard to get a large proportion of the human population clamoring to halt AI development - with sufficient political and financial strength to stop even rogue nations. A strong innate tendency towards millennialism exists in a large subset of humans (as does a likely linked general tendency to anxiousness). We see it in the Green movement, and redirecting it towards AI is almost certainly achievable with the sorts of budgets that existential-alignment-danger believers (some billionaires in their ranks) could muster. Social media is a great tool for doing this these days, if you have the budget.
https://www.lesswrong.com/posts/CqmDWHLMwybSDTNFe/fighting-for-our-lives-what-ordinary-people-can-do?commentId=dufevXaTzfdKivp35
Have just watched E.Y.'s "Bankless" interview.
I don't disagree with his stance, but I am struck that he sadly just isn't an effective promoter to people outside of his peer group. His messaging is too disjointed and rambling.
This is, in the short term, clearly an (existential) political problem rather than a technical one, and it needs to be solved politically rather than technically to buy time. It is almost certainly solvable in the political sphere at least.
As an existence proof, we have a significant percentage of the western world's population stressing about (comparatively) unimportant environmental issues (generally 5-15% vote Green in western elections), and they have built up an industry that is collecting and spending hundreds of billions a year on mitigation activities - equivalent to something on the order of a million workers' efforts directed toward it.
That psychology could certainly be redirected to the true existential threat of AI-mageddon - there is clearly a large fraction of humans with the patterns of belief needed to take on this and other existential issues as a major cause, if it is explained in a compelling way. Currently Eliezer appears to lack the charismatic, down-to-earth conversational skills to promote this (maybe media training could fix that), but if a lot of money were directed towards buying effective communicators/influencers with large reach into youth markets to promote the issue, it would likely quickly gain traction. Elon would be an obvious person to ask for such financial assistance, and there are any number of elite influencers who would likely take a pay check to push this.
Laws can be implemented if there are enough people pushing for them - elected politicians follow the will of the people if the people put their money where their mouths are - and rogue states can be economically and militarily pressured into compliance. A real Butlerian Jihad.
Evolution favours organisms that grow as fast as possible. AGIs that expand aggressively are the ones that will become ubiquitous.
Computronium needs power and cooling. The only dense, reliable and highly scalable form of power available on earth is nuclear - why would an ASI care about ensuring no release of radioactivity into the environment?
Similarly mineral extraction, which at the huge scales needed for Vinge's "aggressively hegemonizing" AI will inevitably be using low-grade ores, becomes extremely energy-intensive and highly polluting. Why would an ASI care about the pollution?
If/when ASI power consumption rises to petawatt levels, the extra heat is going to start having a major impact on climate - icecaps gone, etc. Oceans are probably the most attractive locations for high-power-intensity ASI, due to their vast cooling potential.
"I have better reason to trust authorities over skeptics" argumentum ad auctoritatem (appeal to authority) is a well known logical fallacy, and unwise in an era of orthodoxies enforced by brutal institutional financial menaces. Far better to adhere to nullius in verba (on the word of no one), the motto of the Royal Society, or as Deming said "In god we trust, all others must bring data"
Followed closely. The pandemic years have provided numerous clear examples of very old problems: bureaucratic reluctance to change direction even when strongly indicated (such as holding on to vaccine mandates for the young in an era of very low-risk covid strains); the malign impacts of regulatory/institutional capture by rich corporates (e.g. pharma cutting short vaccine trials without doing long-term follow-up, and buying support from media and regulators to prevent dissent or contrary evidence and opinions seeing the light); and high-ranking individuals conspiring to corrupt the scientific process (published mendacious statements dismissing Wuhan lab-leak theories for political reasons), all of course abetted by Big Tech censorship. All these, plus a hyper-partisan media and academic landscape that constantly threatens heretics and heterodox thinkers with financial destruction, have broken the truth-finding and sense-making mechanisms of our world. Institutions do not deserve trust when dissenters are punished; that is the hallmark of religion, not science.
Current concerns about vaccine harms seem to have a lot of signal in the data, most clearly in excess-death figures for New Zealand, where covid, flu and RSV deaths were near zero due to effective zero-covid lockdowns from 2020 till the end of 2021, and yet excess deaths jumped by about 400 per million above the 2020 baseline in the 6 months after the vaccine programs started in Q1 2021, prior to covid becoming widespread in December 2021. The temporal correlation pointing to covid vaccination as the cause of these excess deaths is powerful in the absence of other reasonable explanations. And with a natural-experiment 'control' population of 5 million and 2000 extra deaths, it is not a small number to be dismissed.
Hopefully the argument will be resolved scientifically over the next few years, but it will be a politically very difficult battle, given the large number of powerful people and corporations with reputations and fortunes on the line.
Sam Altman: "multiple AGIs in the world I think is better than one". Strongly disagree. if there is a finite probability than an AGI decides to capriciously/whimsically/carelessly end humanity (and many technological modalities by which it can) then each additional independent instance multiplies that probability to an end point where it near certain.
If any superintelligent AI is capable of wiping out humans should it decide to, it is better for humans to try to arrange initial conditions such that there are ultimately a small number of them, to reduce the probability of doom. The risk posed by 1 or 10 independent but vast SAIs is lower than that from a million or a billion independent but relatively less potent SAIs, where it may tend to P=1.
I have some hope that the physical universe will soon be fully understood, and will from there on prove relatively boring to SAI, and that the variety thrown up by the complex novelty and interactions of life might then be interesting to them.
Human brains are estimated to be ~1e16 flops equivalent, suggesting that about 10-100 of these maxed-out GPUs a decade hence could be sufficient to implement a commodity AGI (the leading Nvidia A100 GPU already touts 1.2 Pops Int8 with sparsity), at perhaps 10-100 kW power consumption (less than $5/hour if the data center is in a low-electricity-cost market). There are about 50 1000 mm² GPUs per 300 mm wafer, and the latest-generation TSMC N3 process costs about $20,000 per wafer - e.g. an AGI per wafer seems likely.
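The rough wafer arithmetic implied there, using only the order-of-magnitude figures quoted in this comment:

```python
brain_flops = 1e16
gpu_ops = 1.2e15      # A100-class Int8 with sparsity, as quoted
gpus_per_brain = brain_flops / gpu_ops   # ~8 GPUs per brain-equivalent

dies_per_wafer = 50   # ~1000 mm^2 dies on a 300 mm wafer
wafer_cost = 20_000   # $ per TSMC N3 wafer, as quoted
cost_per_brain = wafer_cost * gpus_per_brain / dies_per_wafer
print(f"~{gpus_per_brain:.0f} GPUs, ~${cost_per_brain:,.0f} of wafer per AGI")
# ~8 GPUs and ~$3,300 of wafer cost per brain-equivalent: 'an AGI per wafer'
```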
It's likely then that (if it exists and is allowed) personal ownership of human-level AGI will be, like car ownership, within the financial means of a large proportion of humanity within 10-20 years, and their brain power will be cheaper to employ than essentially all human workers. Economics will likely hasten rather than slow an AI apocalypse.
Telling lies and discerning lies are both extremely important skills; becoming adept at them involves developing better and better cognitive models of other humans' reactions and perspectives - a chess game of sorts. Human society elevates and rewards the most adept liars: CEOs, politicians, actors and salespeople in general; you could perhaps say that charisma is, in essence, mostly convincing lying. I take the approach with my children of punishing obvious lies and explaining how they failed, because I want them to get better at it, and punishing less or not at all when they have been sufficiently cunning about it.
For children I think the Santa deception is potentially a useful awakening point - a rite of passage where they learn not to trust everything they are told, that deception and lies and uncertainty in the truth are a part of the adult world, and a little victory where they get to feel like they have conquered an adult conspiracy. The rituals are also a fun interlude for them and the adults in the meantime.
As a wider policy I generally don't think absolutism is a good style for parenting (in most things); there are shades of grey in almost everything. Even if you are a hard-core rationalist in your beliefs, 99.9% of everyone you and your children deal with won't be, and they need to be armed for that. Discussing the grey is an endless source of useful teachable moments.