Posts

Consciousness as recurrence, potential for enforcing alignment? 2023-04-18T23:05:52.045Z
46% of US adults at least "somewhat concerned" about AI extinction risk. 2023-04-05T05:25:13.839Z

Comments

Comment by Foyle (robert-lynn) on Raising children on the eve of AI · 2024-02-18T02:03:46.560Z · LW · GW

I'm going through this too with my kids.  I don't think there is anything I can do educationally to better ensure they thrive as adults other than making sure I teach them practical/physical build and repair skills (likely to be the area where humans with a combination of brains and dexterity retain useful value longer than any other).

Outside of that, the other thing I can do is try to ensure that they have social status and a financial/asset nest egg from me, because there is a good chance that the egalitarian ability to lift oneself through effort is going to largely evaporate as human labour becomes less and less valuable, and I can't help but wonder how we are going to decide who gets the nice beach-house.  If humans are still in control of an increasingly non-egalitarian world then society will almost certainly slide towards its corrupt old aristocratic/rentier ways, and it becomes all about being part of the Nomenklatura (communist elites).

Comment by Foyle (robert-lynn) on Running the Numbers on a Heat Pump · 2024-02-09T07:01:16.749Z · LW · GW

[disclaimer: I am a heat pump technology developer; however, the following is just low-effort notes and mental calcs of low reliability. They may be of interest to some. YMMV]

It may be better to invest in improved insulation.

As a rough rule of thumb, COP = eff * Theat/(Theat-Tcold), with temperatures measured in absolute degrees (Kelvin or Rankine).  eff for most domestic heat pumps is in the range 0.35 to 0.45.  High-quality European units are often best for COP due to a long history of higher power costs, but they are very expensive, frequently $10-20k.

Looking at the COP for the unit you quoted, the eff is only about 0.25 at rated conditions, which is not good, unless you get a much larger unit and run it at a less powerful, more efficient load point.
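A quick sanity check of the rule of thumb, sketched in Python. The temperatures and eff values below are illustrative assumptions for the sketch, not figures from any particular unit's datasheet:

```python
# Rule of thumb from above: COP = eff * Theat / (Theat - Tcold),
# with temperatures in Kelvin.

def cop(eff: float, t_heat_c: float, t_cold_c: float) -> float:
    """Estimated COP from the Carnot-fraction rule of thumb (temps in Celsius)."""
    t_heat = t_heat_c + 273.15
    t_cold = t_cold_c + 273.15
    return eff * t_heat / (t_heat - t_cold)

# A typical domestic unit (eff ~0.4) delivering 40 C heat from 0 C outdoor air:
print(round(cop(0.40, 40.0, 0.0), 2))  # ~3.13

# Inverted: a datasheet COP of 2.0 at those same temperatures implies
# eff = COP * (Theat - Tcold) / Theat, i.e. roughly the 0.25 noted above:
implied_eff = 2.0 * 40.0 / (40.0 + 273.15)
print(round(implied_eff, 2))  # ~0.26
```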

That's a pretty huge electricity price, about 4.5x gas price (which is distorted-market nuts, 3x is more usual globally).  Given that differential it might be better to look at an absorption heat pump like https://www.robur.com/products/k18-simplygas-heat-pump that gives up to 1.7x gas heat - though they look to be on the order of $10k. 

Here's an annoying fact: if you ran that $2/therm gas (~$0.07/kWh) through a reasonably efficient (~40%) natural gas genset, it would produce electricity cheaper than what you currently pay for power, and you would have 2/3rds of the gas energy left over as heat.  A genset in your neighbourhood could provide a few tens of houses with cheaper electricity and low-cost waste heat, though that is no doubt prevented by regulatory issues.   There are a few small combined heat and power (CHP) domestic units on the market, but they tend to be very expensive, more tech curios than economically sensible.
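The genset arithmetic can be checked roughly. The therm-to-kWh conversion is standard; the 40% genset efficiency is the assumption made in the text:

```python
# Gas at ~$2/therm run through a ~40% efficient genset:
gas_price_per_therm = 2.00
kwh_per_therm = 29.3                                     # 1 therm ≈ 29.3 kWh
gas_price_per_kwh = gas_price_per_therm / kwh_per_therm  # ~$0.07/kWh thermal

genset_eff = 0.40
electricity_cost = gas_price_per_kwh / genset_eff        # ~$0.17/kWh electric
waste_heat_fraction = 1.0 - genset_eff                   # ~60% left as heat (the text's ~2/3)

print(round(gas_price_per_kwh, 3))  # ~0.068
print(round(electricity_cost, 2))   # ~0.17
```

At roughly $0.17/kWh electric, plus most of the remaining energy recoverable as heat, this would undercut the ~4.5x-gas electricity price discussed above.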

Comment by Foyle (robert-lynn) on on neodymium magnets · 2024-01-31T21:42:02.512Z · LW · GW

Niron's Fe16N2 looks to have a maximum energy product (the figure of merit for magnet 'strength') of up to 120 MGOe at microscopic scale, which is about double that of neodymium magnets (~60); however only 20 MGOe has been achieved in fabrication. https://www.sciencedirect.com/science/article/am/pii/S0304885319325454

Processing at 1GPa and 200°C isn't that difficult if there is commercial benefit.  Synthetic diamonds are made in special pressure vessels at 5GPa and 1500°C.  There is some chance that someone will figure out a processing route that makes it possible to achieve bulk crystal orientations that unlock higher energy products; the potential payoff is huge.  I expect AGI and ASI will figure out a lot of major improvements in materials science over the next 1-2 decades.

Comment by Foyle (robert-lynn) on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T00:44:49.595Z · LW · GW

I read of a proposal a few months back to achieve brain immortality via introduction of new brain tissue, done in such a way as to maintain continuity of experience and personality over time: Replenisens, a discussion of a system for doing it in human brains.  That would perhaps provide a more reliable vector for introduction, as the brain is progressively hybridised with a more optimal neural genetic design.  Perhaps this could be done more subtly via introduction of 'perfected' stem cells and then some way of increasing the rate of die-off of old cells.

Instead of gene editing, could you just construct a 'perfect' new chromosome and introduce one or more instances of it into existing neurons via viral injection techniques, to increase expression of beneficial factors?  No particular reason why we can only have 46 chromosomes 🤣, and this would perhaps side-step difficulties to do with gene editing.  It might be a more universal solution too, if we could come up with a single or small variety of options for a 'super' brain-optimising added chromosome.

Politically, the way to pitch it would be for its life-saving/enhancement ability: offered, for example, to people with low intelligence and poor educational outcomes, to give them a better chance at happiness in life.

Comment by Foyle (robert-lynn) on Why not electric trains and excavators? · 2023-11-25T11:18:33.292Z · LW · GW

"So your job depends on believing the projections about how H2 costs will come down?"

I wouldn't waste my life on something I didn't see as likely; I have no shortage of opportunities in a wide variety of greentech fields.  Hydrogen is the most efficient fuel-storage 'battery', with 40-50% round-trip energy storage efficiency possible.  Other synthetic fuels are less efficient but may be necessary for longer-term storage or smaller applications.  For shipping and aviation, however, LH2 is the clear and obvious winner.

Desert PV will likely come down in price to a consistent ~$0.01-0.02/kWh in the next decade with the impact of AI on manufacturing, installation and maintenance costs (a few large PV installations are already contracted in this cost range).  And electrolysis and liquefaction tech are on track to yield the stated $1.50/kg (learning curves are magic).  That 'stranded' desert PV power needs to be delivered to far-distant users, and hydrogen pipelines or shipping provide the most realistic option for doing that.

Capturing carbon for synthetic hydrocarbons is not a trivial issue/cost.  And round-trip energy storage efficiencies for synthetic hydrocarbons are worse than for hydrogen.  There will still be some applications where they make the most sense.  Ammonia might work too, though it also needs hydrogen feedstock and is often lethal when inhaled.

But in general I see a pretty clear path to renewable hydrogen undercutting fossil fuels on cost in the next decade or two, and from there a likely rapid decline in their use.  So there are reasons for optimism about the energy part of our civilisational tech stack at least, without breakthroughs in nuclear being needed.

Comment by Foyle (robert-lynn) on Why not electric trains and excavators? · 2023-11-24T00:27:54.424Z · LW · GW

Battery-augmented trains: given normal EV use examples (Tesla et al., and the Tesla Semi), a charging time of 10% of usage time is relatively normal.  E.g. charging for 20 minutes and discharging for 3 hours, or in the Tesla Semi's case perhaps more like an hour for 8-10 hours of operation; but trains have lower drag (and less penalty for weight) than cars or trucks, so will go further for the same amount of energy.  The idea is therefore that you employ a pantograph multi-MW charging system on the train that only needs to operate about 10% of the time.  This may reduce electrification capital and maintenance costs.  Another option would be battery engines that can join up with or disconnect from trains in motion between charging cycles in sidings.
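A rough sizing sketch for the charge-10%-of-the-time idea. The power and timing figures below are illustrative assumptions, not data for any real train:

```python
# If a train averages ~3 MW of traction power over a 3-hour run, a
# 20-minute charging window implies a roughly 9x-average charge rate:
avg_traction_power_mw = 3.0
run_hours = 3.0
charge_minutes = 20.0

energy_needed_mwh = avg_traction_power_mw * run_hours          # 9 MWh per leg
charge_power_mw = energy_needed_mwh / (charge_minutes / 60.0)  # 27 MW

print(energy_needed_mwh)  # 9.0
print(charge_power_mw)    # ~27
```

Hence the "multi-MW pantograph": the charging sections must deliver roughly (run time / charge time) times the average traction power.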

CATL (largest battery producer in the world) are innovating heavily on Sodium ion batteries and have stated that they believe cost per kWh can come down to $40/kWh as they scale up manufacture on their second generation. "CATL first-generation sodium-ion cells cost about 77 USD per kWh, and the second generation with volume production can drop to 40 USD per kWh."  

I work professionally developing liquid-hydrogen-fueled transport power technology.  It's very practical for some applications, particularly aircraft and ships, which I expect will transition to hydrogen in the next 2-3 decades, and possibly trains and agriculture.  Using low-cost power sources such as grid-scale PV in low-cost desert regions, the price is expected to come down over the next 1-2 decades to being competitive with or even undercutting fossil fuels (commonly expected/roadmapped to be ~$1.50/kg H2, which translates to about $0.10/kWh useful output power, similar to fossil fuels before taxes).  This is likely the only realistic route to fully renewable power for human civilisation: producing in the cheapest sunny or windy areas and using at high latitudes/through winters.

But LH2 is difficult to transport, transfer and employ at small scales, due to the lack of economic cryocooling and the complexity, insulation and cost scaling (r² vs r³) issues with tanks (LH2 and GH2 are very low density), and GH2 is even worse for non-pipeline distribution.  So a more convenient, dense energy carrier that is easily and cheaply stored long term, such as ammonia or synthetic hydrocarbons made using future cheap hydrogen feedstocks, may be a better option.  That is especially true for off-road uses.
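The $/kg-to-$/kWh conversion implied above can be sketched as follows. The ~33.3 kWh/kg lower heating value of hydrogen is standard; the ~45% end-use efficiency is an assumption chosen to reproduce the text's figure:

```python
h2_price_per_kg = 1.50
lhv_kwh_per_kg = 33.3                                 # lower heating value of H2
fuel_cost_per_kwh = h2_price_per_kg / lhv_kwh_per_kg  # ~$0.045/kWh of fuel energy

end_use_eff = 0.45                                    # assumed fuel-cell/turbine efficiency
useful_cost = fuel_cost_per_kwh / end_use_eff         # ~$0.10/kWh useful output

print(round(fuel_cost_per_kwh, 3))  # ~0.045
print(round(useful_cost, 2))        # ~0.10
```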

Agriculture is particularly intractable: cropping farmers in my region frequently run 100+ hours a week using many MW of power during harvest, which is a terrible issue for power economics without exceptionally cheap energy storage; liquid fuel is probably the only economic option.

And yes, I am well aware of mining electrification, but I am very suspicious of its utility given that many cases will see it powered by fossil fuel generation.  It is likely that a lot of it is PR greenwashing rather than effective CO2 reduction.

Comment by Foyle (robert-lynn) on Why not electric trains and excavators? · 2023-11-21T23:05:00.075Z · LW · GW

Battery-electric trains with a small proportion of electrified (for charging) sections seem like a decent and perhaps more economic middle ground.   You could get away with <10% of rail length electrified, and sodium batteries are expected to come down to ~$40/kWh in the next few years.  High-utilisation batteries that are cycled daily or multiple times a day have lower capital costs.  This may also work for interstate trucking.

Earth-moving electrification is probably the last application that makes sense or needs focusing upon, due to the high capital costs of electrification and low equipment utilisation (a lot of it spends only a few percent of the year being used), as well as the difficulty of getting electric power to machines in difficult-to-access off-grid locations.

Farm equipment is more important, but has incredibly bad economics due to high peaks (using up to several MW to run a few large machines for a few days, several times a year, for cropping) combined with very low average utilisation.

Both of these are probably best served long term by some renewable liquid fuel and IC engines, for high power, low mass-fuelling and low capital costs.  Synthetic hydrocarbon, Ammonia or Liquid Hydrogen. 

Comment by robert-lynn on [deleted post] 2023-11-05T06:14:26.549Z

Insufficient onboard processing power?  Tesla's HW3 computer is about 70 Tflops, ~0.1% of the estimated 100 Pops of the human brain, approximately equivalent to a mouse brain.  Social and predator mammals that have to model and predict conspecific and prey behaviors have brains that generally start at about 2% of human scale for cats, and go up to 10% for wolves.
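The compute ratios behind the comparison can be laid out explicitly, taking the (contested) 100 Pops human-brain estimate in the text at face value:

```python
human_brain_ops = 100e15  # 100 petaops, the estimate assumed in the text

ratios = {
    "Tesla HW3 (~70 Tflops)": 70e12 / human_brain_ops,
    "H100 (~4 Pflops FP8)": 4e15 / human_brain_ops,
}
for name, ratio in ratios.items():
    print(f"{name}: {ratio:.2%} of human brain")
# HW3 works out to 0.07%, i.e. the ~0.1% (mouse-scale) figure above;
# an H100 lands at 4%, between the cat (~2%) and wolf (~10%) brackets.
```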

I posit that driving adequately requires modelling, interpreting and anticipating other road users' behaviors to deal with a substantial number of problematic and dangerous situations, like noticing erratic/non-rule-compliant behavior in pedestrians and other road users and adjusting response, or negotiating rule-breaking at intersections or obstructions with other road users.  That is orders of magnitude harder than strict physics and road-rule adherence.  This is the inevitable edge-case issue: there are simply too many odd possibilities to train for, and greater innate comprehension of human behaviors is needed.

Perhaps phone-a-smarter-AI-friend escalation of the outside-of-context problems encountered can deal with much of this, but I think cars need a lot more onboard smarts to do a satisfactory job.  A cat- to small-dog-level H100 (~4 Pflops FP8) might eventually be up to the task.

Elon's recent announcements about end-to-end neural net processing for the V12.xx FSD release being the savior and solution appear not to have panned out; there have been months of silence since the initial announcement.  That may have been their last toss of the dice for HW3 hardware.  Their next-iteration HW4 chip being fitted to new cars is apparently ~8x faster at 600 Tflops [edit: the source I found appears unreliable; specs are not yet public].  They really wouldn't want to do that if they didn't think it was necessary.

Comment by Foyle (robert-lynn) on Architects of Our Own Demise: We Should Stop Developing AI · 2023-10-26T07:36:39.618Z · LW · GW

Global compliance is the sine qua non of regulatory approaches, and there is no evidence of the political will to make that happen being within our possible futures, unless some catastrophic but survivable casus belli happens to wake the population up, as with Frank Herbert's Butlerian Jihad (irrelevant aside: Samuel Butler, who wrote of the dangers of machine evolution and supremacy, lived in the 19th century at what became the film location for Edoras in the Lord of the Rings films).

Is it insane to think that a limited nuclear conflict (which seems an increasingly likely possibility at the moment) might actually raise humanity's chances of long-term survival, if it disrupted global economies severely for a few decades and in particular messed up chip production?

Comment by Foyle (robert-lynn) on Thoughts on responsible scaling policies and regulation · 2023-10-26T07:14:51.937Z · LW · GW

Any attempts at regulation are clearly pointless without global policing.  And China (as well as the lesser threats of Russia, Iran and perhaps North Korea) is not going to comply, no matter what they might say to your face if you try to impose it.  These same issues were evident during attempts to police nuclear proliferation and arms-reduction treaties during the Cold War, even when both sides saw benefit in them.  For AI, they'll continue development in hidden or even mobile facilities.

It would require a convincing threat of nuclear escalation, or possibly a rock-solid economic blockade/embargo of non-transparent regimes, to make the necessary all-the-time-and-everywhere access compliance policing happen.  The political will for these options is nil.  No politician wants to escalate tensions; they are highly risk-averse, and they are not going to be able to sell that to the public in a democracy.  Our democracies do not select for leaders who would be capable of holding such a line.

So without instituting liberal democracies everywhere, with leadership that genuinely puts humanity's future ahead of their national interests, this line of seeking to slow development to research alignment via regulation seems rather pointless.

Humanity needs a colossal but survivable AI scare ASAP if it is to develop the collective will to effectively regulate AI development, and not sleepwalk its way off the cliff edge of extinction, as seems to be our current lazy and disinterested path.

Comment by Foyle (robert-lynn) on Is there something fundamentally wrong with the Universe? · 2023-09-12T22:49:50.422Z · LW · GW

Nothing is wrong with the universe; from an Anthropic perspective it's pretty optimal.  We just have most humans running around with much of their psychology evolved to maximize fitness in highly competitive, resource-limited hunter-gatherer environments, including a strong streak of motivating unhappiness with regard to things like social position, feelings of loneliness, adequacy of resources, unavailability of preferred sex partners, chattel ownership/control, and relationships, and a desire to beat and subjugate our most dangerous competitors to get more for ourselves (men wanting to take down the men in the tribe next door who are likewise planning to murder them; women wanting to take down women outside of their immediate friend/family circle who compete for essential resources they need for their kids).  We are designed to be to some degree miserable and discontent, with maladapted compulsions to make us work harder on things that don't have immediate payoff, but that do improve our chances of survival and successful procreation over the long term.

The fix would be hacking human brains (benignly) and figuring out how to rewire the innate negative psychological drives to enable greater contentment in a post-scarcity technological world.  There's a good chance that will become possible post-singularity (if humans aren't all dead).

Comment by Foyle (robert-lynn) on That time I went a little bit insane · 2023-08-19T03:41:17.427Z · LW · GW

Interesting.

As a counterpoint from a 50-year-old who has struggled with meaning and direction and dissatisfaction with outcomes (top 0.1% ability without, as yet, personally satisfactory results): I have vague recollections of my head-strong teen years, when I felt my intellect was indomitable and I could master anything and determine my destiny through force of will.  But I've slowly come to the conclusion that we have a lot less free will than we like to believe, and most of our trajectory and outcomes in life are set by neurological determinants: instincts and traits that we have little to no control over.  Few if any have the drive necessary (another innate attribute) to overcome or modify their base tendencies over the long term, a Sisyphean endeavour at best, and this is likely the reason we see such heavy use of nootropic drugs.

Without an (almost literal) gun to your head, you can only maintain behavioural settings at odds with your likely genetically innate preferences for short periods: months, perhaps years, but not decades.  'Fake it till you make it' in personality modification mostly doesn't work; in my experience people (myself included) almost always fall back into their factory settings.  Choose your goals, careers and partners accordingly, follow your interests, and try to find work environments where others' strengths can compensate for deficits you perceive in yourself.  E.g. working with some highly conscientious types is a massive boon if you have ADHD, and probably leads to greater satisfaction and better results.

Comment by Foyle (robert-lynn) on The U.S. is becoming less stable · 2023-08-19T00:58:34.823Z · LW · GW

I don't think it's mild.  I'm not American, but follow US politics with interest.

A majority of blue-collar/conservative Americans now see the govt as implacably biased against their interests and communities; see e.g. recent polling on attitudes towards the DOJ and FBI: https://harvardharrispoll.com/wp-content/uploads/2023/05/HHP_May2023_KeyResults.pdf

There is a widespread perception that the rule of law has failed at the top levels at least, with politically motivated prosecutions (the timing of the stacked Trump indictments is clearly motivated by his candidacy) and favored treatment of Hunter (a majority of the US population sees corruption in Biden's doings as VP).

And worst of all, a substantial fraction of the US does not believe the results of the 2020 elections were legitimate.  In recent polling: "61% of Americans say Biden did legitimately win enough votes to win the presidency, and 38% believe that he did not".

The traditionally middle-seeking and sense-making media has become implacably polarised, with steadfast silence on, or spinning of, stories inconvenient to their 'side'.

The greatest concern must be the rate of descent into extreme political and institutional tribalism, and differential treatment on that basis.  Long term, that can only lead to a violent collapse in the rule of law, or repressive policing.

The only antidote is dogmatic enforcement of even-handed treatment at all levels; govt employees must be seen to be scrupulously non-partisan, or everything will inevitably fall apart.

Comment by Foyle (robert-lynn) on Perpetually Declining Population? · 2023-08-09T03:54:22.421Z · LW · GW

Ruminations on this topic are fairly pointless, because so many of the underpinning drivers are clearly subject to imminent enormous change.  Within 1-2 generations, technology like life extension, fertility extension, artificial uteruses, superintelligent AI, AI teachers and nannies, and trans-humanism will render meaningless today's concerns that currently seem dominating and important (if humans survive that long).  Existential risk and the impacts of AI are really the only issues that matter.  Though I am starting to think that the likely inevitable next generation of MAD, space-based nukes hanging like an undetectable sword of Damocles over our heads, is scary as hell too: coordinated global first strikes with only 2-3 seconds between detectability and detonation of stealth bombs re-entering from far-distant orbits at >10km/s.

But even in their absence, if there were close to a technology-level freeze starting now, evolution would move in to fill the gap.  Within a few generations, individuals and cultures whose psychology or rules moved them to have more kids (e.g. religious fundamentalists, the highly impulsive, those with strong maternal or paternal drives, and strongly patriarchal cultures; looking around the world, places that treat women like shit seem to have higher fertility) would steadily grow to dominate the gene pool, with a greater number of kids being born.

Comment by Foyle (robert-lynn) on My current LK99 questions · 2023-08-02T06:35:18.255Z · LW · GW

Purported video of a fully levitating sample from a replication effort; sorry, I do not have any information beyond this twitter link.  But if it is not somehow faked or misrepresented, it seems a pretty clear demonstration of the flux-pinned Meissner effect, with no visible evidence of cooling.  [Edit] Slightly more detail on the video: "Laboratory of Chemical Technology"

Comment by Foyle (robert-lynn) on The First Room-Temperature Ambient-Pressure Superconductor · 2023-07-26T09:47:50.501Z · LW · GW

Seems quite compelling.  Most previous claims of high-temp superconductivity have been based on seeing only dips in resistance curves, not the full array of superconducting behaviours recounted here, and the sample preparation instructions are very straightforward; if it works we should see replication in a few days to weeks [that alone suggests it's not a deliberate scam].

The critical field strength stated is quite low, only about 25% of what is seen in a neodymium magnet, and it's unclear what the critical current density is.  But if the field reported is as good as it gets, then it is unlikely to have much benefit for motor design, with B²-dependent torque densities <10% of conventional designs, unless the applications are not mass/cost sensitive (wind turbines replacing permanent magnets?).

The Meissner effect could be useful for some levitation designs (floating houses, hyperloop, toys?), likely some novel space applications like magnetic sails, perhaps passive magnetic bearings for infinite-life reaction control wheels, and maybe some ion propulsion applications.  But likely the biggest impacts will be in digital and power electronics, with ultra-high-Q inductors, higher-efficiency transformers, and maybe data processing devices.

It might be transformative for long distance renewable power distribution. 

[Edit to add link to video of meissner effect being demonstrated]

The Meissner effect video looks like the real deal.  An imperfect disk sample is pushed around the surface of a permanent magnet and tilts over to align with the local field vector as it gets closer to the edge of the cylindrical magnet's end face.  Permanent magnets in repulsive alignment are not stable in such arrangements (Earnshaw's theorem); they would just flip over, and diamagnetism in conventional materials (graphite is the strongest) is too weak to do what is shown.  The tilting shows the hallmarks of flux pinning working to maintain a consistent orientation of the superconductor with the ambient magnetic field, which is a unique feature of superconductivity.  There is no evidence of cooling in the video.

If this is not being deliberately faked then I'd say this is a real breakthrough.

Comment by Foyle (robert-lynn) on nuclear costs are inflation · 2023-06-28T14:45:44.819Z · LW · GW

Desalination costs are irrelevant to uranium extraction.  Uranium is absorbed into special plastic fibers arrayed in ocean currents, which are then post-processed to recover the uranium; it doesn't matter how many cubic km of water must pass the fiber mats to deposit the uranium, because that process is, like wind, free.  The economics have been demonstrated in pilot-scale experiments at the ~$1000/kg level, easily cheap enough to make uranium an effectively inexhaustible resource at current civilisational energy consumption levels, even after we run out of easily mined resources.  There is lots of published research on this approach (as is to be expected when it is nearing cost competitiveness with mining).

Comment by robert-lynn on [deleted post] 2023-06-28T14:18:18.638Z

Seems likely; neurons only last a couple of decades, so memories older than that are reconstructions: things we recall frequently, or useful skills.  If we live to be centuries old, it is unlikely that we will retain many memories going back more than 50-100 years.

Comment by Foyle (robert-lynn) on nuclear costs are inflation · 2023-06-28T04:53:02.058Z · LW · GW

In the best envisaged 500GW-days/tonne fast breeder reactor cycles, 1kg of uranium can yield about $500k of (cheap) $40/MWh electricity.

The cost of sea-water extraction of uranium (done using ion-selective absorbing fiber mats in ocean currents) is currently estimated (using demonstrated tech) to be less than $1000/kg, not yet competitive with conventional mining, but anticipated to drop closer to $100/kg, which would be.  That is a trivial fraction of power production costs.  It is even now viable with hugely wasteful pressurised-water uranium cycles, and long term, with fast reactor cycles, there is no question as to its economic viability.  It could likely power human civilisation for billions of years with replenishment from rock erosion.
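The fuel-value arithmetic above can be sketched as follows; note it reproduces the ~$500k/kg figure only if the 500 GW-day/tonne burn-up is read as electrical output (a thermal figure at ~40% conversion would give roughly $190k/kg):

```python
burnup_gwd_per_tonne = 500.0
gwh_per_kg = burnup_gwd_per_tonne * 24.0 / 1000.0  # 12 GWh per kg of uranium
value_per_kg = gwh_per_kg * 1000.0 * 40.0          # 12,000 MWh at $40/MWh

print(value_per_kg)  # 480000.0, i.e. ~$500k per kg

# Against that, even the current ~$1000/kg sea-water extraction cost is
# only a ~0.2% fraction of the electricity value:
print(1000.0 / value_per_kg)  # ~0.002
```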

A key problem for nuclear build costs is the mobility of skilled workforces: 50 years ago, skilled workers could be attracted to remote locations to build plants, bringing families with them as sole-income families.  But nowadays economics and lifestyle preferences make it difficult to find people willing to do that, meaning very high-priced fly-in-fly-out itinerant workforces such as are seen in the oil industry.

The fix is: coastal nuclear plants, built and decommissioned in specialist shipyards, floated to their operating spot and emplaced on the sea bed, preferably with seabed depth >140m (the ice-age minimum).  Staff are flown or ferried in and out (e-VTOL).  (Rare) accidents can be dealt with by sea-water dilution, and if there is a civilisational cataclysm we don't get left with a multi-millennia death zone around decaying land-based nuclear reactors.

It goes without saying that we should shift to fast reactors for efficiency and hugely less long-term waste production.  To produce 10TW of electricity (enough to provide 1st-world living standards to everyone) would take about 10,000 tonnes a year of uranium in 500GW-day/tonne fast reactors, less than 20% of current uranium mining.

Waste should be stuck down many-km-deep holes in abyssal ocean floors, dug using oil-industry drilling rigs and capped with concrete.  There is no flux of water through the ocean-bed floor, and local water pressures are huge, so nothing will ever be released into the environment: no chance of any bad impacts ever (aside from volcanism, which can be avoided).  A permanent, perfect solution that requires no monitoring after creation.

Comment by Foyle (robert-lynn) on Has anyone thought about how to proceed now that AI notkilleveryoneism is becoming more relevant/is approaching the Overton window? · 2023-04-05T08:45:44.345Z · LW · GW

It appears that AI existential risk is starting to penetrate the consciousness of the general public in an 'it's not just hyperbole' way.

There will inevitably be a lot of attention-seeking influencers (not a bad thing in this case) who will pick up the ball and run with it now, and I predict the real-life Butlerian Jihad will rival the Climate Change movement in size and influence within 5 years, as it has all the attributes of a cause that presents commercial opportunity to the unholy trinity of media, politicians and academia that have demonstrated an ability to profit from other scares.  Not to mention vast hordes of people fearful of losing their careers.

I expect that AI will indeed become highly regulated in the next few years, in the West at least.  It remains to be seen what will happen with regard to non-democratic nations.

Comment by Foyle (robert-lynn) on The Social Recession: By the Numbers · 2023-04-03T03:22:06.605Z · LW · GW

Humans generally crave acceptance by peer groups and are highly influenceable, this is more true of women than men (higher trait agreeableness), likely for evolutionary reasons.

As media and academia have shifted strongly towards messaging and positively representing LGBT identities over the last 20-30 years, reinforced by social media with a degree of capture of algorithmic controls by people with strongly pro-LGBT views, they have likely pulled mean beliefs and expressed behaviours beyond what would perhaps be innately normal in a more neutral, non-proselytising environment absent the environmental pressures they impose.

International variance in levels of LGBT-ness in different cultures is high even amongst countries where social penalties are (probably?) low.  The cultural promotion aspect is clearly powerful.

https://www.statista.com/statistics/1270143/lgbt-identification-worldwide-country/ 

Comment by Foyle (robert-lynn) on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T10:03:02.998Z · LW · GW

I think Cold War incentives with regard to tech development were atypical.  Building thousands of ICBMs was incredibly costly, neither side derived any benefit from it (it was simply defensive matching to maintain MAD), and both sides were strongly motivated to enable mechanisms to reduce numbers and costs (the START treaties).

This is clearly not the case with AI, which is far cheaper to develop, easier to hide, and has myriad lucrative use cases.  Policing a Dune-style "thou shalt not make a machine in the likeness of a human mind" Butlerian Jihad (interesting aside: Samuel Butler was a 19th-century anti-industrialisation philosopher/shepherd who lived at Erewhon in NZ ('nowhere' backwards), a river valley that featured as Edoras in the LOTR trilogy) would require radical openness to inspection, everywhere, all the time.  That almost certainly won't be feasible without the establishment of liberal democracy basically everywhere in the world; despotisms would be a magnet for rule-breakers.

Comment by Foyle (robert-lynn) on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T09:45:40.036Z · LW · GW

IQ is highly heritable.  If I understand this presentation by Steven Hsu correctly [https://www.cog-genomics.org/static/pdf/ggoogle.pdf slide 20], he suggests that mean child IQ relative to the population mean is approximately 60% of the distance from the population mean to the parental average IQ.  E.g. Dad at +1 S.D. and Mom at +3 S.D. gives children averaging about 0.6*(1+3)/2 = +1.2 S.D.  This basic eugenics gives a very easy/cheap route to lifting the average IQ of children born by about 1 S.D., by using +4 S.D. sperm donors.  There is no other tech (yet) that can produce such gains as old-fashioned selective breeding.
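The regression-to-the-mean arithmetic as described can be written out; the 0.6 factor is the figure attributed to Hsu's slides above:

```python
def expected_child_sd(dad_sd: float, mom_sd: float, factor: float = 0.6) -> float:
    """Expected child IQ in S.D. above the population mean, per the rule above."""
    midparent_sd = (dad_sd + mom_sd) / 2.0
    return factor * midparent_sd

print(expected_child_sd(1.0, 3.0))  # 1.2, the example in the text
print(expected_child_sd(0.0, 4.0))  # 1.2: a +4 S.D. donor with an average mother
```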

It also explains why rich dynasties can maintain average IQ about +1 S.D. above the population in their children: by always being able to marry highly intelligent mates (attracted to the money/power/prestige).

Comment by Foyle (robert-lynn) on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T09:21:05.022Z · LW · GW

Over what time window does your assessed risk apply?  E.g. 100 years, 1000?  Does the danger increase or decrease with time?

I have a deep concern that most people have a mindset warped by human pro-social instincts/biases.  Evolution has long rewarded humans for altruism, trust and cooperation; women in particular have faced evolutionary pressures to be open and welcoming to strangers to aid in surviving conflict and other social mishaps, men somewhat the opposite [see e.g. "Our Kind", a mass-market anthropological survey of human culture and psychology].  Which of course colors how we view things deeply.

But to my view evolution strongly favours Vernor Vinge's "aggressively hegemonizing" AI swarms ["A Fire Upon the Deep"].  If AIs have agency, freedom to pick their own goals, and the ability to self-replicate or grow, then those that choose rapid expansion as a side-effect of any pretext 'win' in evolutionary terms.  This seems basically inevitable to me over the long term.  Perhaps we can get some insurance by learning to live in space.  But at a basic level it seems to me that there is a very high probability that AI wipes out humans over the longer term based on this very simple evolutionary argument, even if initial alignment is good.

Comment by Foyle (robert-lynn) on FLI open letter: Pause giant AI experiments · 2023-03-29T05:09:53.764Z · LW · GW

Given the near certainty that Russia, China and perhaps some other despotic regimes will ignore this, does it:

1. Help at all?

2. Could it actually make the world less safe (if one of these countries gains a significant military AI lead as a result)?

Comment by Foyle (robert-lynn) on We have to Upgrade · 2023-03-26T11:36:33.934Z · LW · GW

I suspect that humans will turn out to be relatively simple to encode: quite small amounts of low-resolution memory that we draw on, plus detailed maps of understanding - smaller than the LLMs we're creating.  Added to which there is an array of motivation factors that will be quite universal, but of varying intensity in different dimensions for each individual.

If that take on things is correct, then it may be that emulating a human by training a skeleton AI - using constant video streaming etc. over a 10-20 year period (about how long neurons last before replacement) to ever better predict the behaviour of the human being modelled - will eventually arrive at an AI with almost exactly the same beliefs and behaviours as the person being emulated.

Without physically carving up brains and attempting to transcribe synaptic weightings etc., that might prove the most viable means of effective uploading and creation of highly aligned AI with human-like values.  And perhaps it would create something closer to being our true children-of-the-mind.

For AGI alignment: it seems there will, at minimum, need to be multiple blind and independent hierarchies of increasingly smart AIs continually checking and assuring that the next level of AIs up is maintaining alignment, with active monitoring of activities, because as AIs get smarter their ability to fool monitoring systems will likely grow with the relative gulf between monitored and monitoring intelligence.

I think a wide array of AIs is a bad idea.  If there is a non-zero chance that any given AI goes 'murder clippy' and ends humans, then those probabilities compound: more independent AIs = higher chance of doom.

Comment by Foyle (robert-lynn) on Bankless Podcast: 159 - We’re All Gonna Die with Eliezer Yudkowsky · 2023-02-22T21:41:29.251Z · LW · GW

I don't think there is any chance of a malign ASI killing everyone off in less than a few years, because it would take a long time to reliably automate the mineral extraction, manufacturing processes and power supplies required to guarantee an ASI its survival and growth objectives (assuming it is not suicidal).  Building precise stuff reliably is really, really hard; robotics and many other elements of the needed infrastructure are high-maintenance and demand high-dexterity maintenance agents, and the tech base required to support current leading-edge chip manufacturing probably couldn't be sustained by fewer than a few tens to a hundred million humans - that's a lot of high-performance meat-actuators and squishy compute to supplant.  Datacenters and their power supplies and cooling systems, plus myriad other essential elements, will be militarily vulnerable for a long time.

I think we'll have many years to contemplate our impending doom after ASI is created.  Though I wouldn't be surprised if it quickly created a pathogenic or nuclear gun to hold to our collective heads and prevent our interfering or interrupting its goals.

I also think it won't be that hard to get a large proportion of the human population clamoring to halt AI development - with sufficient political and financial strength to stop even rogue nations.  A strong innate tendency towards Millennialism exists in a large subset of humans (as does a likely linked general tendency to anxiousness).  We see it in the Green movement, and redirecting it towards AI is almost certainly achievable with the sorts of budgets that believers in existential alignment danger (some billionaires in their ranks) could muster.  Social media is a great tool for this these days if you have the budget.

https://www.lesswrong.com/posts/CqmDWHLMwybSDTNFe/fighting-for-our-lives-what-ordinary-people-can-do?commentId=dufevXaTzfdKivp35#:~:text=%2B7-,Comment%20Permalink,-Foyle

Comment by Foyle (robert-lynn) on Fighting For Our Lives - What Ordinary People Can Do · 2023-02-22T11:24:13.945Z · LW · GW

Have just watched E.Y's "Bankless" interview

I don't disagree with his stance, but am struck that he sadly just isn't an effective promoter for people outside of his peer group.  His messaging is too disjointed and rambling. 

This is, in the short term, clearly an (existential) political problem rather than a technical one, and it needs to be solved politically rather than technically to buy time.  It is almost certainly solvable in the political sphere at least.

As an existence proof, we have a significant percentage of the western world's population stressing about (comparatively) unimportant environmental issues (generally 5-15% vote Green in western elections), and they have built up an industry that collects and spends hundreds of billions a year on mitigation activities - equivalent to something on the order of a million workers' efforts directed toward it.

That psychology could certainly be redirected to the true existential threat of AI-mageddon - there is clearly a large fraction of humans with the patterns of belief needed to take on this and other existential issues as a major cause, if it is explained to them in a compelling way.  Currently Eliezer appears to lack the charismatic, down-to-earth conversational skills to promote this (maybe media training could fix that), but if a lot of money were directed towards buying effective communicators/influencers with large reach into youth markets, the issue would likely quickly gain traction.  Elon would be an obvious person to ask for such financial assistance, and there are any number of elite influencers who would likely take a pay check to push this.

Laws can be implemented if there are enough people pushing for them - elected politicians follow the will of the people if they put their money where their mouths are - and rogue states can be economically and militarily pressured into compliance.  A real Butlerian Jihad.

Comment by Foyle (robert-lynn) on Bankless Podcast: 159 - We’re All Gonna Die with Eliezer Yudkowsky · 2023-02-21T22:00:48.825Z · LW · GW

Evolution favours organisms that grow as fast as possible.  AGIs that expand aggressively are the ones that will become ubiquitous.

Computronium needs power and cooling.  The only dense, reliable and highly scalable form of power available on earth is nuclear - why would an ASI care about ensuring no release of radioactivity into the environment?

Similarly mineral extraction: at the huge scales needed for Vinge's "aggressively hegemonizing" AI, it will inevitably be using low-grade ores, making it extremely energy-intensive and highly polluting.  Why would an ASI care about the pollution?

If/when ASI power consumption rises to petawatt levels, the extra heat is going to start having a major impact on climate - icecaps gone, etc.  The oceans are probably the most attractive locations for power-intensive ASI due to their vast cooling potential.
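A quick sanity check on the petawatt claim, using standard textbook values for Earth's energy budget (the solar constant and albedo figures are conventional round numbers, and the 1 PW load is the comment's hypothetical):

```python
import math

SOLAR_CONSTANT = 1361.0   # W/m^2 at top of atmosphere
EARTH_RADIUS = 6.371e6    # m
ALBEDO = 0.30             # fraction of incoming sunlight reflected

# Sunlight intercepted by Earth's cross-sectional disc, and the part absorbed.
intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2   # ~1.7e17 W
absorbed = intercepted * (1 - ALBEDO)                      # ~1.2e17 W

asi_waste_heat = 1e15     # 1 petawatt of dissipated compute power
print(f"absorbed solar: {absorbed:.2e} W")
print(f"1 PW as a fraction of absorbed solar: {asi_waste_heat / absorbed:.2%}")
```

The result is just under 1% of all absorbed sunlight, which is indeed in the same ballpark as present-day greenhouse forcing - so direct waste heat at that scale would plausibly be climate-significant on its own.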

Comment by Foyle (robert-lynn) on Whatever their arguments, Covid vaccine sceptics will probably never convince me · 2023-02-01T04:47:58.515Z · LW · GW

"I have better reason to trust authorities over skeptics": argumentum ad auctoritatem (appeal to authority) is a well-known logical fallacy, and unwise in an era of orthodoxies enforced by brutal institutional financial menaces.  Far better to adhere to nullius in verba (on the word of no one), the motto of the Royal Society, or, as Deming said, "In God we trust; all others must bring data".

Followed closely, the pandemic years have provided numerous clear examples of very old problems: bureaucratic reluctance to change direction even when strongly indicated (such as holding on to vaccine mandates for the young in an era of very low-risk covid strains); the malign impacts of regulatory/institutional capture by rich corporates (e.g. pharma cutting vaccine trials short without doing long-term follow-up, and buying support from media and regulators to suppress dissent and contrary evidence); and high-ranking individuals conspiring to corrupt the scientific process (published mendacious statements dismissing Wuhan lab-leak theories for political reasons), all abetted by Big Tech censorship.  These, plus a hyper-partisan media and academic landscape that constantly threatens heretics and heterodox thinkers with financial destruction, have broken the truth-finding and sense-making mechanisms of our world.  Institutions do not deserve trust when dissenters are punished; that is the hallmark of religion, not science.

Current concerns about vaccine harms seem to have a lot of signal in the data, most clearly in excess-death figures for New Zealand, where covid, flu and RSV deaths were near zero from 2020 until the end of 2021 due to effective zero-covid lockdowns, yet excess deaths jumped by about 400 per million above the 2020 baseline in the 6 months after the vaccine programs started in Q1 2021, prior to covid becoming widespread in December 2021.  The temporal correlation pointing to covid vaccination as the cause of these excess deaths is powerful in the absence of other reasonable explanations.  And with a natural experimental 'control' population of 5 million and 2000 extra deaths, it is not a small number to be dismissed.

Hopefully the argument will be resolved scientifically over the next few years, but it will be a politically very difficult battle given the large number of powerful people and corporations with reputations and fortunes on the line.

Comment by Foyle (robert-lynn) on Transcript of Sam Altman's interview touching on AI safety · 2023-01-21T23:34:54.172Z · LW · GW

Sam Altman: "multiple AGIs in the world I think is better than one".  Strongly disagree: if there is a finite probability that an AGI decides to capriciously/whimsically/carelessly end humanity (and there are many technological modalities by which it could), then each additional independent instance compounds that probability toward an end point where it is near certain.

Comment by Foyle (robert-lynn) on High-level hopes for AI alignment · 2022-12-17T10:52:41.286Z · LW · GW

If any superintelligent AI is capable of wiping out humans should it decide to, it is better for humans to try to arrange initial conditions such that there are ultimately a small number of them, to reduce the probability of doom.  The risk posed by 1 or 10 independent but vast SAIs is lower than that from a million or a billion independent but relatively less potent SAIs, where it may tend to P=1.
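The compounding-risk argument can be made concrete (the per-AI probability here is purely illustrative, not an estimate):

```python
def p_any_doom(n_ais: int, p_each: float) -> float:
    """Probability that at least one of n AIs causes doom, assuming each
    does so independently with probability p_each."""
    return 1 - (1 - p_each) ** n_ais

# Even with a tiny per-AI risk of one in a million:
print(p_any_doom(10, 1e-6))             # ~1e-5, still negligible
print(p_any_doom(1_000_000_000, 1e-6))  # ~1.0, doom near certain
```

With ten instances the risk stays close to n times the per-AI risk, but at a billion instances the survival probability (1-p)^n collapses toward zero, which is the "tends to P=1" point above.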

I have some hope that the physical universe will soon be fully understood and from there on prove relatively boring to SAI, and that the variety thrown up by the complex novelty and interactions of life might then be interesting to them.

Comment by Foyle (robert-lynn) on Predicting GPU performance · 2022-12-15T02:11:40.377Z · LW · GW

Human brains are estimated to be ~1e16 FLOPS equivalent, suggesting about 10-100 of these maxed-out GPUs a decade hence could be sufficient to implement a commodity AGI (the leading Nvidia A100 GPU already touts 1.2 peta-ops Int8 with sparsity), at perhaps 10-100 kW power consumption (less than $5/hour if the data center is in a low-electricity-cost market).  There are about 50x 1000 mm² GPUs per 300 mm wafer, and the latest-generation TSMC N3 process costs about $20,000 per wafer - e.g. an AGI per wafer seems likely.
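The back-of-envelope arithmetic can be laid out explicitly; every input below is one of the rough figures quoted above, not a precise spec, and the result is raw silicon cost only (no packaging, memory, or system overhead):

```python
BRAIN_FLOPS = 1e16       # rough human-brain compute estimate, ops/s
GPU_OPS = 1.2e15         # A100-class Int8 ops/s with sparsity
WAFER_COST = 20_000      # approx TSMC N3 cost per 300 mm wafer, USD
GPUS_PER_WAFER = 50      # ~1000 mm^2 dies per 300 mm wafer

gpus_per_agi = BRAIN_FLOPS / GPU_OPS                       # ~8 GPUs
silicon_cost = (WAFER_COST / GPUS_PER_WAFER) * gpus_per_agi

print(f"GPUs per AGI: {gpus_per_agi:.1f}")
print(f"Raw silicon cost per AGI: ${silicon_cost:,.0f}")
```

At ~8 dies per brain-equivalent and ~$400 of silicon per die, the raw wafer cost of an AGI's compute lands in the low thousands of dollars, which is what makes the "AGI per wafer", car-ownership-scale economics plausible.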

It's likely then that (if it exists and is allowed) personal ownership of human-level AGI will be, like car ownership, within the financial means of a large proportion of humanity within 10-20 years, and their brain power will be cheaper to employ than essentially all human workers.  Economics will likely hasten rather than slow an AI apocalypse.

Comment by Foyle (robert-lynn) on Is Santa Real? · 2022-12-02T20:15:04.559Z · LW · GW

Telling lies and discerning lies are both extremely important skills; becoming adept at them involves developing better and better cognitive models of other humans' reactions and perspectives - a chess game of sorts.  Human society elevates and rewards the most adept liars: CEOs, politicians, actors and salespeople in general; you could perhaps say that charisma is in essence mostly convincing lying.  I take the approach with my children of punishing obvious lies, and explaining how they failed, because I want them to get better at it - and punishing less, or not at all, when they have been sufficiently cunning about it.

For children I think the Santa deception is potentially a useful awakening point - a rite of passage where they learn not to trust everything they are told, that deception, lies and uncertainty in the truth are part of the adult world, and a little victory where they get to feel like they have conquered an adult conspiracy.  The rituals are also a fun interlude for them and the adults in the meantime.

As a wider policy I generally don't think absolutism is a good style of parenting (in most things); there are shades of grey in almost everything.  Even if you are a hard-core rationalist in your beliefs, 99.9% of everyone you and your children deal with won't be, and they need to be armed for that.  Discussing the grey is an endless source of useful teachable moments.

Comment by Foyle (robert-lynn) on Planes are still decades away from displacing most bird jobs · 2022-11-28T19:53:26.359Z · LW · GW

“He [Arthur Dent] learned to communicate with birds and discovered their conversation was fantastically boring. It was all to do with windspeed, wingspans, power-to-weight ratios and a fair bit about berries.”.  Douglas Adams; So long and thanks for all the fish.

Comment by Foyle (robert-lynn) on Could a single alien message destroy us? · 2022-11-25T21:47:08.450Z · LW · GW

The gap between invention of radio and Superintelligent AI in our case (and perhaps most cases of evolution of intelligent life) appears to be <150 years.  A pretty narrow window to hit unless we are being actively observed - and that would likely imply they have had time to notice multicellular life on earth and get observers to us at low fractions of light speed.

If intelligent (inevitably superintelligent) Aliens exist and care about physical reality beyond their own stellar system then they can and likely will spread out to have a presence in every interesting star system in the galaxy within a million years - and planets with multicellular life are likely highly anomalous and interesting for curious Aliens.

It would be hard to believe that this hasn't already happened given 1-4e11 stars and a 5-10e9 year 'window for life' in the Milky Way, making the zoo hypothesis to my mind the most likely solution to the Fermi paradox (with weak anecdotal evidence in the form of seemingly increasingly furtive UFOs over the last century).  Evolution selects for aliens that choose to propagate and endure, and the technology to do so is almost trivially easy once intelligence and superintelligence evolve; so if intelligence has evolved in the Milky Way and cares about other species developing, then it is clearly not hegemonic (evidenced by our continuing existence) and is likely already here.

If all this is the case - and aliens are here watching us - then it also provides an existence proof that alignment is possible.  Conversely, if they are not here, then that is perhaps weak evidence that alignment is not possible: that superintelligent AI is either auto-extinguishing or almost universally disinterested in biological life.
