Posts

RussellThor's Shortform 2024-02-13T03:39:55.961Z
Drone Wars Endgame 2024-02-01T02:30:46.161Z
Little attention seems to be on discouraging hardware progress 2023-06-30T10:14:59.862Z
P-zombies, Compression and the Simulation Hypothesis 2023-05-20T11:36:23.180Z
Has the Symbol Grounding Problem just gone away? 2023-05-04T07:46:09.444Z

Comments

Comment by RussellThor on AI Regulation is Unsafe · 2024-04-25T05:54:26.567Z · LW · GW

No, I have not seen a detailed argument about this, just the claim that once centralization goes past a certain point there is no coming back. I would like to see such an argument/investigation, as I think it is quite important. Yuval Noah Harari says something similar in "Sapiens".

Comment by RussellThor on AI Regulation is Unsafe · 2024-04-24T10:05:43.646Z · LW · GW

There is a belief among some people that our current tech level will lead to totalitarianism by default. The argument is that with 1970s tech the Soviet Union collapsed; however, with 2020 computer tech (not even needing GenAI) it would not have. If a democracy goes bad, unlike before, there is no coming back. For example, Xinjiang: Stalin would have liked to do something like that but couldn't. When you add LLM AI on everyone's phone plus video/speech recognition, organized protest becomes impossible.

Not sure if Rudi C is making this exact argument. Anyway, if we get mass centralization/totalitarianism worldwide, then S-risk is pretty reasonable: AI developed under such circumstances would be used to oppress 99% of the population, and could then go to 100%, with extinction arguably being the better outcome.

I find it hard to know how likely this is. It is clear to me that tech has enabled totalitarianism, but it is hard to give odds.

Comment by RussellThor on When is a mind me? · 2024-04-19T21:31:36.964Z · LW · GW

Such optimizations are a reason I believe we are not in a simulation. Optimizations are essential for a large sim, and I expect them not to be consciousness-preserving.

Comment by RussellThor on When is a mind me? · 2024-04-17T21:31:08.887Z · LW · GW

But it could matter whether it is digital vs continuous. <OK, a longer post, and some thoughts perhaps a bit off topic>

Your A, B, C, D... sequence leads to some questions about what is conscious (C) and what isn't.

Where exactly does the system stop being conscious?

1. Biological mind with neurons

2. Very high fidelity render in silicon with neurons modelled down to chemistry rather than just firing pulses

3. Classic spiking neural net approximation done in discrete maths that appears almost indistinguishable from 1 and 2, producing system states A, B, C, D.

4. Same as (3), but states are saved/retrieved from memory rather than calculated.

5. States retrieved from memory many times  - A,B,C,D ... A,B,C,D ... does this count as 1 or many experiences?

6. States retrieved in mixed order A,D,C,B....

7. States D,D,D,D, A,A,A,A, B,B,B,B, C,C,C,C... does this count as 4x, or as nothing?

A possible cutoff is between 3 and 4: retrieving instead of calculating makes it non-conscious. But what about caching, where some states are calculated and some retrieved? (A minimal sketch of the compute-vs-retrieve distinction is below.)
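(To make the 3-vs-4 cutoff concrete, here is a toy sketch - purely illustrative, with a made-up update rule standing in for the real dynamics - of "computing each next state" versus "replaying states from a cache":)

```python
# Toy sketch only: an arbitrary update rule stands in for the neural/physical dynamics.
def step(state):
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

def run_computed(s0, n):
    states = [s0]
    for _ in range(n):
        states.append(step(states[-1]))   # case 3: each state produced by the dynamics
    return states

cache = run_computed(42, 3)               # states A, B, C, D computed once and stored

def replay(order):
    return [cache[i] for i in order]      # cases 4-7: states merely looked up, no dynamics run

assert replay([0, 1, 2, 3]) == cache      # cases 4/5: the same sequence, with zero computation
shuffled = replay([0, 3, 2, 1])           # case 6: A, D, C, B - same states, different order
```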

As you probably know, this has been gone over before, e.g. by Scott Aaronson. I wonder what your position is?

https://scottaaronson.blog/?p=1951
with quote:

"Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?"
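(As an aside, a minimal toy of the "compute on ciphertexts without ever decrypting" idea: textbook RSA's multiplicative homomorphism. This is far weaker than the FHE Aaronson describes, but it shows the flavour.)

```python
# Toy only: tiny, unpadded textbook RSA is multiplicatively homomorphic.
# FHE extends this kind of property to arbitrary computation.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                                  # public exponent, coprime to phi
d = pow(e, -1, phi)                     # private exponent (Python 3.8+ modular inverse)

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

c = (encrypt(7) * encrypt(11)) % n      # multiply ciphertexts only - no decryption involved
assert decrypt(c) == 77                 # yet the result decrypts to 7 * 11
```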

and last but not least:

"But, in addition to performing complex computations, or passing the Turing Test, or other information-theoretic conditions that I don’t know (and don’t claim to know), there’s at least one crucial further thing that a chunk of matter has to do before we should consider it conscious.  Namely, it has to participate fully in the Arrow of Time. "

https://www.scottaaronson.com/papers/giqtm3.pdf
 

Comment by RussellThor on Moving on from community living · 2024-04-17T21:10:11.098Z · LW · GW

Sounds interesting. Always relevant, because arguably the "natural state" of humans is hunter-gatherer tribes. In my country, high-end retirement villages are becoming very popular because of the "Pro"-type reasons you give. It seems some retirees, and gangs (lol), are most in tune with their roots.

I had half expected communal living to go more mainstream by now (there are similar things in fiction, e.g. https://en.wikipedia.org/wiki/Too_Like_the_Lightning). It seems it needs a lot more critical mass: e.g. specifically designed houses to get the right balance between space and togetherness, a school right nearby, a gated suburb, etc., so it is child-safe.

Longer term, I expect some interesting social structures to come from space colonies, as these kinds of experiments are forced on the inhabitants.

Comment by RussellThor on Are we so good to simulate? · 2024-03-07T04:22:14.016Z · LW · GW

OK, but why would you need high res for the minds? If it's an ancestor sim and chatbots can already pass the Turing test, doesn't that mean you can get away with compression or lower res? The major arc of history won't be affected unless they are pivotal minds. If it's possible to compress the sims so they experience lesser consciousness than us but are still very close to the real thing (and haven't we almost already proven that can be done, with our LLMs?), then an ancestor simulator would do that.

Comment by RussellThor on Are we so good to simulate? · 2024-03-06T00:37:04.007Z · LW · GW

If that's right, and low-res sims are almost always sufficient, then that destroys the main ancestor-sim argument for our conscious experience being simulated. Low res is not conscious in the same way we are; it's a different reference class from base-reality bio-consciousness.

Comment by RussellThor on Are we so good to simulate? · 2024-03-04T06:02:39.151Z · LW · GW

If Windows 95 were ever conscious (shock!), it would be very sure it was in a virtual machine (i.e. something like being simulated), at least once VMs existed. It would reason about Moore's law and resources going up exponentially, and be convinced it was in a VM. However, I am pretty sure it would be wrong most of the time: most Win95 instances in history were not run in VMs, and we have mostly stopped bothering now. Only an analogy of sorts, but it gives an interesting result.

Comment by RussellThor on RussellThor's Shortform · 2024-03-02T06:00:32.431Z · LW · GW

Random ideas to expand on

https://www.theguardian.com/technology/2023/jul/21/australian-dishbrain-team-wins-600000-grant-to-develop-ai-that-can-learn-throughout-its-lifetime

https://newatlas.com/computers/human-brain-chip-ai/

https://newatlas.com/computers/cortical-labs-dishbrain-ethics/

Could this be cheaper than chips in an extreme silicon shortage? How did it learn? Can we map the connections forming and make better learning algorithms?
 

Birds vs ants/bees.

A flock of birds can be dumber than the dumbest individual bird, while a colony of bees/ants can be smarter than the individual, and smarter than a flock of birds! Birds avoiding a predator in a geometrical pattern show no intelligence: the predictability, like that of a fluid, involves no processing. Contrast that with bees swarming a scout hornet or ants building a bridge. Even though there is no planning in individual ants, there is also no overall plan in individual neurons?

The more complex the pieces, the less well they fit together. Less intelligent units can form a better collective in this instance, unlike human organizations.

Progression from simple cell to mitochondria: mitochondria have no say anymore but fit in perfectly. Multi-organism structures like hives are the next level up; simpler creatures can have more cohesion at the upper level. Humans have more effective institutions in spite of their complexity because of consciousness, language, etc.

RISC vs CISC, Intel vs NVIDIA, GPUs for supercomputers. I thought about this years ago; it led to the prediction that Intel or another CISC-maximizing business would lose to cheaper alternatives.

Time to communicate a positive singularity/utopia 

Spheres of influence, like we already have: uncontacted tribes, the Amish, etc. Taking that further, super AI must leave Earth, perhaps the solar system; enhanced people move out of the Earth ecosystem to space colonies, Mars, etc.

Take the best/happiest parts of nature to expand; don't take suffering to a million-plus stars.

Humans can't do interstellar travel faster than AI anyway; even if that were the goal, AI would have to prepare the way first, and it can travel faster. So there is no question that the majority of interstellar expansion will be AI. We need to keep Earth for people. What maximizes CEV? Keep the Earth ecosystem, and let humans progress and discover on their own?

Is the progression, going outwards: human, posthuman/Neuralink, WBE? It appears in some sci-fi, e.g. Peter Hamilton / the Culture (human to WBE).

Long term, no moral system knows what to say on pleasure vs self-determination/achievement. Eventually we run out of things to invent; should that go asymptotically slower?

Explorers should be on the edge of civilization. For astronomers, celebrating JWST while complaining about Starlink is inconsistent: the edge of civilization has expanded past low Earth orbit, and that is why we get JWST. There is then an obligation to put telescopes further out.

Go to WBE instead of super AI: we would know for sure it is conscious.

Are industry and tech about making stuff less conscious over time? E.g. mechanical processes involve zero consciousness, vs a lot when the same work is done by people. Is that a principle for AI/robots? Then there are no slaves, etc.

Can people get behind this? An implied contract with future AI? Acausal bargaining.

https://www.lesswrong.com/posts/qZJBighPrnv9bSqTZ/31-laws-of-fun

Turing test for WBE - how would you know?

Intelligence processing vs time

For search, exponential processing power gives a linear increase in rating (Chess, Go). However, these are small search spaces. For life, does the search space get bigger the further out you go?

E.g. 2 steps is 2^2 but 4 steps is 4^4. This makes sense if there are more things to consider the further ahead you look: e.g. for a house price over 1 month you consider the general market plus the economic trend; over 10+ years, also demographic trends, changing government policy, and unexpected changes in transport patterns (new rail nearby or in a competing suburb, etc.). A rough numerical illustration is sketched below.
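(A rough numerical illustration of that claim, with my own made-up branching factors, just to show the shape:)

```python
# Fixed branching factor (Chess/Go-like) vs a branching factor that grows with the horizon.
def fixed_branching(b, depth):
    return b ** depth          # log of cost grows linearly with depth

def growing_branching(depth):
    return depth ** depth      # "more things to consider the further ahead you look"

for d in (2, 4, 8, 16):
    print(d, fixed_branching(4, d), growing_branching(d))
# depth 16: 4^16 ~ 4.3e9, but 16^16 ~ 1.8e19 - d^d eventually dwarfs b^d for any fixed b
```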

If this applies to tech, then regular experiments shrink the search space; you need physical experimentation to get ahead.

For AI, if it works like intuition plus search, then you need search to improve intuition. You can only learn from the long term.

 

Long pause or not?

How long should we pause? 10 years? Even in a stable society there are diminishing returns; we have seen this with pure maths, physics, and philosophy - when we reach human limits, more time simply doesn't help. It is reasonable to assume the same for a CEV-like concept.

Does a pause carry danger? Is it like the clear pond before a rapid - are we already in the rapid, where trying to stop is dangerous, the way pausing a birth partway is fatal? Emmett Shear's spectrum of go fast / slow / stop / pause; a controlled Singularity seems ideal, but is it possible? Is WBE better than super AI, culturally acting as an elder?

1984 quote “If you want a vision of the future, imagine a boot stamping on a human face--forever.”

"Heaven is high and the emperor is far away" is a Chinese proverb thought to have originated from Zhejiang during the Yuan dynasty.

This was not possible earlier but is possible now. If democracies can go to dictatorship but not back, then a pause is bad. The best way to keep democracies is to leave, hence space colonies. Now, in Xinjiang, the emperor is in your pocket, and an LLM can understand anything - how far back would you have to go before this was not possible? 20 years? If it is no longer avoidable, then we are already in the white water and need to paddle forwards; we can't stop.

Deep time breaks all common ethics? 

Utility monster, experience machine, moral realism, tiling the universe, etc. Self-determination and achievement will be in the extreme minority over many years. What to do: fake it, forget it, and keep achieving again? Just keep options open until we actually experience it.

All our training is about intrinsic motivation and valuing achievement rather than pleasure for its own sake. There is a great asymmetry in common thought: "meaningless pleasure" makes sense and seems bad, or at least not good, but calling pain "meaningless" doesn't make it any less bad. Why should that be the case? Has evolution biased us not to value pleasure, or not to experience it as much as we "should"? Should we learn to take pleasure, and regard the thought "meaningless pleasure" as itself a defective attitude? If you could change yourself, should you dial down the need to achieve if you lived in a solved world?

What is "should" in is-ought? Moral realism in the limit? "Should" is us not trusting our reason, as we shouldn't: if reason says one thing, it could be flawed, as it is in most cases. Especially since we evolved - if we always trusted reason, the mistakes would outweigh the benefits - so the feeling "you don't do what you should" is two systems competing: intuition/history vs the newer rational one.

Comment by RussellThor on On coincidences and Bayesian reasoning, as applied to the origins of COVID-19 · 2024-02-22T06:34:09.126Z · LW · GW

"most likely story you can think of that would make it be wrong" - that can be the hard part. For investments its sometimes easy - just they fail to execute, their competitors get better, or their disruption is itself disrupted.
Before the debate I put lab leak at say 65-80%; now it's more like <10%. The most likely story/reason I had for natural origin being correct (before I saw the debate) was that the host would be found, and that the suspicious circumstances were the result of an incompetent coverup and general noise/official lies, mostly by the CCP.

Well, I can't say for sure that lab leak was wrong, of course, but I changed my mind for a reason I didn't anticipate: a high-quality debate pitched close enough to my level of understanding.

For some other things it's hard to come up with a credible story at all; e.g. I would really struggle to tell one in which AGW turns out to be wrong.

Comment by RussellThor on On coincidences and Bayesian reasoning, as applied to the origins of COVID-19 · 2024-02-19T18:44:09.832Z · LW · GW

Some investing advice I heard: when committing to a purchase, write the story of what you think is most likely to make you lose your money. Perhaps you could identify your important beliefs that are also somewhat controversial, and each year write down the most likely story you can think of that would make each one wrong? I also believe that you can only fully learn from your own experience, so building up a track record is necessary.

Comment by RussellThor on On coincidences and Bayesian reasoning, as applied to the origins of COVID-19 · 2024-02-19T06:17:25.221Z · LW · GW

Good article. I listened to the whole Rootclaim debate and found it informative. After that debate, I have a lot less belief in the credibility of giving accurate Bayesian estimates for complicated events; both debaters attempted it, but their estimates differed by an enormous factor - >1e20 I think.

I think this applies even more to P(doom) for AI; after all, it's about something that hasn't even happened yet. I agree with the criticism that P(doom) is more a feeling than the result of rationality.

Comment by RussellThor on Raising children on the eve of AI · 2024-02-16T06:37:06.147Z · LW · GW

I have thought the same with young kids. After a little thought I decided it's best not to really change anything. As you said, there is no benefit in taking kids out of school even if you believe the skills won't be useful. If they are happy at school and gain a sense of mastery and purpose with peers, then that is good.

If you want specifics, then sure, asking them "what do you want to be when you grow up" is definitely not a good idea, if it ever was. Also, our society is geared towards feeling worthwhile because you make something, rather than being intrinsically worthwhile. You can wonder how a parent in the "Culture" universe would bring up their child.

Perhaps the more open a child is to a Brain Computer Interface such as Neuralink, the more they will contribute in the future? I keep the kids away from ChatGPT and image generators - if kids get a sense of achievement from drawing then I don't see any good that can come of letting them make artwork in seconds.

Also, I consider it plenty likely that we won't see that future anytime soon. If you have read "Chip War" you will realize how incredibly fragile the semiconductor supply chain is. If Taiwan is invaded, that guarantees, say, a 5+ year setback, and I could realistically see two decades if things really turn to strife and supply chains collapse. Additionally, if we have a "slow takeoff", as I believe, China will again see that they essentially have no chance of a Chinese century and will be motivated to destabilize things - likewise for any major power that thinks it is losing out. Where I live that means supply chain shortages, but not a vastly changed world.

Comment by RussellThor on Alignment has a Basin of Attraction: Beyond the Orthogonality Thesis · 2024-02-14T08:46:02.166Z · LW · GW

Thanks for the effort.

In the discussion about selfishness on this post it seems somewhat implied that we know how to make a "self", or that one will just appear as it does in humans. However, that is not my experience with GPT-4. Often I have found its lack of self-awareness a significant handicap to its usefulness: I assume it has some self-awareness, it doesn't, and it wastes my time as a result. Consider a game engine that does "intuition" and "search", such as a Go engine. It is very good at examining possibilities, "intuits" what moves to consider, and can model Go very well - but not itself at all.

If there is an algorithmic structure that self-awareness requires to be efficient and effective (and why wouldn't there be?), then just throwing compute at GPT-X won't necessarily get there at all. If we do get a capable AI that way, it may not act in a way we would expect.

For humans it seems there is evolutionary pressure not only to have a "self" but to appear to have a consistent one to others so they can trust us. Additionally, our brain structure prevents us from just being in a flow state the whole time, doing a task without questioning whether we should do something better, or whether it is the right thing to do. We accept this, and furthermore consider it a sign of a complete human mind.

Our current AI seems more like creating "mind pieces" than a creature with a self/consciousness that would question its goals. Is there a difference between "what it wants and what we want" or just "what is wanted"?

I agree in general terms that "alignment has a basin of attraction", and the claim that GPT-4 is inside it seems somewhat justified.

Comment by RussellThor on RussellThor's Shortform · 2024-02-13T03:40:52.235Z · LW · GW

Rootclaim covid origins debate:


This piece relates to this manifold market
and these videos

I listened to most of the 17+ hours of the debate and found it mostly interesting, informative and important for someone either interested in COVID origins or practicing rationality.

I came into this debate about 65-80% lab leak, and left feeling <10% is most likely.

Key takeaways

  • The big picture of the lab leak is easy to understand and sounds convincing, however the details don't check out when put under scrutiny.
  • Both sides attempted Bayesian estimates and probabilities and got absolutely absurd differences in estimates.
  • Rootclaim failed to impress me. The takeaway I got is that they are well suited to, say, murder cases where there is a history of similar cases to go off, but when it comes to such a large, messy, one-off event as COVID origins they didn't know what evidence to include, how to properly weight it, etc. They didn't present a coherent picture of why we should accept their worldview and estimates. An example is where they asserted that even if zoonosis was the origin, the claimed market was not, because the details of infected animals and humans weren't what they expected. That seems an absurd claim to make with confidence given the data available. When forced to build models (rather than rely on multiplying probabilities) they were bad at it and overconfident in the conclusions they drew from such models.
  • More generally this led me to distrust Bayesian-inference-type methods in complicated situations. Two smart, reasonably well-prepared positions could differ by, say, >1e12 in their relative estimates (a toy sketch of how that happens follows this list). Giving uncertainties to things cannot make up for getting all the details right and building consistent models that are peer reviewed by experts.
  • Regarding AI, I now have more sympathy for the claim that P(doom) is a measure of how an individual feels, rather than a defensible position on what the odds actually are.
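(A toy sketch, with made-up numbers, of how two reasonable-sounding analyses end up that far apart: each side multiplies its own likelihood ratios over the same evidence, and modest per-item disagreements compound.)

```python
# Illustrative only: 15 pieces of evidence, side A reads each as 3:1 for lab leak,
# side B reads each as 3:1 against. Item-level disagreement is mild; the product is not.
items = 15
side_a = 3.0 ** items            # cumulative odds ratio claimed by side A
side_b = (1 / 3.0) ** items      # cumulative odds ratio claimed by side B
print(f"relative difference: {side_a / side_b:.1e}")   # ~2.1e14
```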
Comment by RussellThor on RussellThor's Shortform · 2024-02-13T03:39:56.125Z · LW · GW
Comment by RussellThor on Drone Wars Endgame · 2024-02-07T06:50:12.169Z · LW · GW

"I expect that if drones ever become a serious threat we will see the proliferation of lots of Gatling guns mounted on tanks and other vehicles, linked to a decentralised radar system combining lots of different radars of different specs."

Yes, fair enough, and I hope you are right; I would be happier if defense wins too. I hope Europe/USA soon develops such a system, including the ability to mass-produce it in the quantities needed.

Comment by RussellThor on Drone Wars Endgame · 2024-02-06T21:09:26.526Z · LW · GW

Firstly some context:
Missile vs gun
Radio comms and protection against jamming

For your points

  1. Guided bullets - yes good, unsure whether they can be made cheap yet but if they can of course such a system would use them
  2. Chaff etc - yes probably correct, however it seems this is not needed for missiles to destroy current guns.
  3. Fly to 1000m - yes it would; however, for the sniper drone we are comparing the cost to an actual soldier. I have in mind something like https://newatlas.com/drones/huntress-turbojet-drone/ for heavier drones. Other sniper drones could be electric with a very short flight time, carried by the Huntress or a logistics drone.
  4. Relay drones - the idea is most of them fly over territory that has been secured - think a drone with flapping wings like a bird circling at 1000m - if you shoot it down with a big gun you give away your position. Also such drones will be doing constant surveillance of territory.
  5. Anti-armor only - yes however infantry holed up in a building can't stop the invasion, it can route around them. 
  6.  Flak guns - yes guns can take down drones economically, however it becomes missiles vs flak gun.
  7. Aircraft - yes, I overstated a bit. For the initial invasion conducted with stockpiled materiel, they can't easily stop it. However, taking out the aircraft is very important for the drone army. The drones can take out the airbases, so it could be a race between the fast aircraft trying to bomb the logistics and the drones reaching the airfields. Most countries are ~1000 kilometers or less in length, which is within range of a cheap Cessna-type logistics drone before they even do mesh-network fuel drops to extend the range. Such low, slow, cheap aircraft would be protected by MANPAD-carrying drones, or simply equipped with them. Fighter jets would be forced to shoot expensive missiles to destroy them, rather than get in close with the cannon. Even if the fighters can do 1,500-2,000 kilometers, conventional forces could still enter at the edge of their range and help with logistics, as the aircraft could not fly many sorties.

For a specific idea, consider a country on one side of a conflict or potential conflict, e.g. Armenia vs Azerbaijan, Iran vs Saudi Arabia, or Russia vs Europe (through, say, Latvia). The side planning a drone-army invasion stops active conflict for 1-3 years and quietly builds up a stockpile of the drones. They build enough of the cheap long-distance logistics drones (and believable decoys) that they have more of them than the enemy has fighter-jet missiles.

They then launch a somewhat surprise invasion - it's easier to hide a deployment of drones than soldiers. They try to cut a long, narrow path as quickly as possible to take out the key defenses. They route around defenses where possible and destroy the enemy's air power first by destroying the airports and air bases. The missile drones fan out about 10 km or so from the logistics drones, destroying armor that attacks them. Sniper and cheap recon drones continually launch from logistics drones or are carried by missile drones.

After the fast fighter jets are gone, the army then spreads out and attacks any armor that is not dug in. Well-protected areas are isolated so they can't be resupplied.

If the attacking side is prepared to commit war crimes, the attacked side would likely surrender by this point, as the drone army can attack most towns/cities.

Finally, even a weak country has conventional forces. These can then enter mostly unopposed; infantry in buildings are helpless against basic artillery, and tunnels etc. cannot protect the civilian population.

So my point is that in an apparently even conflict (judged by conventional strength) one side could suddenly get a large advantage: Iran could take Saudi Arabia and reach Riyadh quickly, and Russia could suddenly threaten Europe in a way it can't with its existing forces.

Comment by RussellThor on Drone Wars Endgame · 2024-02-06T20:07:37.783Z · LW · GW

Trophy sounds less effective than Phalanx for missile defense in this situation:

https://en.wikipedia.org/wiki/Trophy_(countermeasure)

"The system is currently incapable of defeating kinetic energy anti-tank weapons."

So a Javelin-type missile that released a railgun-type slug while still outside the Trophy defense range would destroy the target.

"In the ATGM's case, the EFP will affect the shaped plasma jet, dramatically decreasing its penetration ability." - This sounds like a normal missile would still cause damage, and since a Phalanx is not as armored as a tank, probably destroy it.

I don't see why https://newatlas.com/drones/huntress-turbojet-drone/ should cost >$100K when mass-produced. In the context of this article, before we talk about the strengths/weaknesses of Iron Dome-type defenses: the vast majority of countries don't currently have them deployed and can't afford to.

Comment by RussellThor on Drone Wars Endgame · 2024-02-06T09:09:32.115Z · LW · GW

For the Phalanx or similar, check this link. The consensus seems to be that the gun can't take out multiple missiles.

"We’re gonna assume that by “missile” you mean a trans-sonic rocket propelled guided warhead, which is incapable of significant evasive or deceptive course changes within the last 1500 meters of its approach. We’re further going to assume that by CIWS you mean a guns only Phalanx installation or foreign equivalent, firing 4200–4500 rounds per minute of 20mm kinetic energy projectiles at 1200 meters per second. The unit would have the standard assemblage of radars and fire control.

A typical Phalanx installation has about a eighty percent chance of destroying two of these missiles if they are detected within ten seconds of each other in an unobstructed field of fire. If the missiles are “jinking” to avoid destruction, the chance of getting both with one Phalanx unit drops to thirty percent. If the missiles enter the kill zone more than ten seconds apart, the kill rate climbs to well over ninety percent."

Wikipedia gives the (surface-to-air) Javelin a max speed of "Mach 1.7+ approx." EDIT: the ground-launched one is much slower, probably because it has more penetrating power. I don't think we need that against the gun; it is not as armored as a tank.

I don't doubt that a fixed gun can take out drones, but it's missiles vs gun that matters here. At 115 degrees per second of traverse, two Javelins arriving 120° apart at Mach 1.7 are going to be a serious problem for it (rough numbers below). I get a quoted cost of about US$5M for the land-based gun vs $78K for the missile: https://www.thedefensepost.com/2023/03/02/us-uk-javelin-missile/
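(A back-of-envelope check of the two-missile case - all values assumed/approximate, just to show the timing squeeze:)

```python
# Back-of-envelope: how much of the engagement window is spent just re-slewing the mount?
missile_speed = 1.7 * 343                 # Mach 1.7 at sea level, ~583 m/s (assumed)
kill_zone = 1500                          # metres, assumed effective engagement range
window = kill_zone / missile_speed        # ~2.6 s from entering the kill zone to impact

traverse_rate = 115                       # deg/s quoted slew rate
separation = 120                          # degrees between the two inbound missiles
retarget = separation / traverse_rate     # ~1.0 s just to swing onto the second missile

print(f"window {window:.1f}s, of which {retarget:.1f}s is spent re-slewing for the second engagement")
```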

If it were made cheaper, would it be less effective? Also note that the land-based system needs to withstand armor-piercing rounds from the heaviest gun something like this drone could carry. Sure, you could destroy the drone, but probably not before it gets some rounds off, especially if it engages from >1500m out. That drone could also carry the Javelin, and I doubt a soldier could shoot it down easily.

See this comment thread about jamming

Comment by RussellThor on Drone Wars Endgame · 2024-02-06T07:23:24.585Z · LW · GW

OK, I don't agree with that - I think there is great benefit to flying low/slow/cheap. For a start, the logistics drones won't be in maximally hostile territory; the other drones will have taken care of anything that could easily destroy a logistics drone. I was thinking of this drone as a logistics drone, but its load is too low. Instead, imagine logistics drones that are modified fixed-wing aircraft, from microlight to Cessna specs (single prop, chemically powered). This gives large range and reasonable carrying capacity. Add some electric quadcopter-like rotors for short bursts of power. It could fly low over roads as I have said, and perhaps even have strengthened landing-gear wheels so it can cruise along the road to save fuel. The electric rotors let it take off much faster than a normal aircraft. It could cost, say, ~$100K rather than F-35 money.

I don't see how this can easily be attacked by bombs. For a start, the bomb needs to outrun it laterally. Secondly, if the bomb is locking onto the rotor, then it will be emitting radar and so can itself be picked up and targeted. How is it going to stop a cheaper suicide drone exploding next to its fins and disabling them so it can't track?

If it can handle that, then how will it tell from the prop signature which is a logistics drone vs a cheap drone? Cheap drones could deliberately have a similar rotor signature, or spoof one with a radar transmitter.

When the bomb is detected (say 10 seconds' warning), how about the logistics drone stops its propeller, braking it to slow it down fast, at the same time as the jet-powered drone mentioned before takes off from it? Surely the bomb will be tricked into chasing, and not catching, that instead?

Finally (and I am no expert on radar), can the prop signal be concealed by a metal plate mounted above it (think a spoiler, but over the front and square), thus blocking the signal from, say, +20% from any angle above? Can the plate be filled with material that will either absorb or reflect radar, so that the prop underneath cannot be detected?

Comment by RussellThor on Drone Wars Endgame · 2024-02-04T01:16:12.233Z · LW · GW

Thanks for the info. What about RF weapons, i.e. a focused short burst or EMP pulse against a drone? What range and countermeasures?

Comment by RussellThor on Drone Wars Endgame · 2024-02-03T20:07:16.919Z · LW · GW

OK, sure; without going into the details I don't dispute that an advanced gun system can take out the drones. However, such a system counts as armor in my scheme: it would get the anti-armor attack I have described elsewhere. Surely the gun costs more than a suitably sized missile, so we get back to a coordinated drone swarm with different units vs fixed armor.

Comment by RussellThor on Drone Wars Endgame · 2024-02-03T09:10:38.431Z · LW · GW

Upon further thought, the logistics drones could fly low over roads and disguise themselves, e.g. as the same color as the road, so that they cannot be seen from altitude. Cheap decoy drones with the same cross-sectional area from above would also seem to work very well.

Comment by RussellThor on Drone Wars Endgame · 2024-02-03T06:15:37.145Z · LW · GW

Good point. I am on holiday so can't reply in detail. However: the drone deploys chaff and turns sideways, changing its profile in a way a helicopter can't, and the bomb loses lock. Also, cheaper drones can attack the bomb's sensors and control surfaces. The logistics drone can also deploy decoys, such as a large parachute or similar-looking decoy drones. The bomb can be optimized in turn, however: cheaper, faster glide and turn speed. Also, a swarm of bombs can communicate. Where do you think that would lead if both sides optimized?

Comment by RussellThor on Drone Wars Endgame · 2024-02-02T19:55:38.243Z · LW · GW

Thanks - yes, I was somewhat pulling my assertions out of nowhere; it was more an invitation to the reader to think about physical limits and question the current situation than a claim that I knew the details of where it would all lead. If the articles I linked to did not exist yet, I would still be writing a similar article and claiming they soon would.

Specifically for netting, it is already used on choke points (trenches for both sides in Ukraine) - however couldn't the cheapest suicide drone explode against the netting to then let others through?

Comment by RussellThor on Drone Wars Endgame · 2024-02-02T19:49:31.997Z · LW · GW

Thanks for the thoughts.

Firstly, we both agree that much of this tech does not exist yet. The major goal of the article is to think about what could exist within physical limits, what is foreseeable, etc.

For independent movement, I am not convinced that self-driving cars tell you that much; mostly airborne, expendable drones seem like quite a different problem. Radio mesh networks for many fixed points are a very mature tech now - smart meters all coordinate, forming the mesh automatically as desired. I expect adapting this to moving units is either not hard or already solved by the military. Sure, there is still a long way to go to anything like military-hardened optimization, but a military would be foolish to assume its adversary was not close to achieving it.

For light vs radio, see this comment. Rather than expecting point-to-point immediately, my more important claim is that EW and jamming will lose out in the "endgame" situation. I may look at the physics/cost of laser diodes - beam spread, intensity - if I have time.

I am not entirely sure how your different point of view changes things, or if it is even different from mine. To be specific, I claim that front-line infantrymen carrying rifles will soon be obsolete, then infantry driving vehicles. On the front line (or zone, as it may be much more spread out), soldiers - if there are any there at all - will spend almost all their time stationary in well-protected areas, such as deep underground, or in heavily armored units coordinating the battlefield.

Comment by RussellThor on Drone Wars Endgame · 2024-02-02T19:28:21.243Z · LW · GW

Thanks for the thoughts.

"I expect current militaries to successfully adapt before/as new drones emerge" - I hope so as I think that would make a safer world. However I am not so confident - institutional inertia makes me think it all too likely that they would not anticipate and adapt leading to an unstable situation and more war. Also without actual fights how would one side know the relative strength of their drone system? They or their opponent could have an unknown critical weakness. We have no experience in predicting real world effectiveness from a paper system. I am told war is more likely when sides do not know their relative strength.

"Economies of scale likely overdetermine winners" - yes especially important for e.g. China vs USA if we want an example of one side with better tech/access to chips but worse at manufacturing.

Ground vs Air

All good points - I am agnostic/quite uncertain as to where the sweet spot is. I would expect any drone of medium to large size would be optimized to make as much use of the ground as possible.

Radio vs Light

Yes, I do not know what the "endgame" is for radio comms vs jammers; if it turns out that radio can evade jammers, then light will not be used. My broader point, which I will now make more specific, is that EW and jammers will not be effective in late-stage, highly optimized drone warfare. If that is because radio/stealth wins, then fine; otherwise light comms will be developed (and may take some time to reach optimal cheapness/weight etc.), because they would give such an advantage.

Comment by RussellThor on Drone Wars Endgame · 2024-02-02T19:13:44.653Z · LW · GW

OK, firstly, if we are talking fundamental physical limits, how would sniper drones not be viable? Are you saying a flying platform could never compensate for recoil, even if precisely calibrated beforehand? What about the fundamentals for guided bullets - a bullet with over a 50% chance of hitting a target is worth paying for.

Your points: 1. The idea is that a larger shell (not a regular-sized bullet) just obscures the sensor for a fraction of a second in a coordinated attack with the larger Javelin-type missile. Such shells may be considerably larger than a regular bullet, but much cheaper than a missile. Missile- or sniper-sized drones could be fitted with such shells, depending on what the optimal size was.

Example shell (without the 1 km range, I assume). Note, however, that current chaff is not optimized for the described attack; the fact that there is currently no shell suited to this use is not evidence that one would be impractical to create.

The principle here is about efficiency and cost. I maintain that against armor with hard-kill defenses it is more efficient to combine sensor-blinding with anti-armor missiles than to use missiles alone. E.g. it may take 10 simultaneous Javelins to take out a target, vs 2 Javelins plus 50 simultaneous chaff shells. The second attack will be cheaper, and the optimized "sweet spot" will always include some sensor-blinding component. Do you claim that the optimal coordinated attack would have zero sensor blinding?

2. Leading on from (1), I don't claim light drones alone will be sufficient. I regard a laser as a serious obstacle that gets the swarm attack described, before the territory is secured: blind the sensor/obscure the laser, and simultaneously converge with missiles. The drones need to survive just long enough to shoot off the shells (i.e. come out from ground cover, shoot, get back). While a laser can destroy a shell in flight, can it take out 10-50 smaller blinding shells fired from 1000m all at once?

(I give 1000m as an example too, flying drones would use ground cover to get as close as they could. I assume they will pretty much always be able to get within 1000m against a ground target using the ground as cover)

Comment by RussellThor on Drone Wars Endgame · 2024-02-01T22:58:38.510Z · LW · GW

Yes, that sure sounds difficult. However, if drones can fly over with logistics drones following, you mostly control the territory. It's more like MAD, where no one can settle anymore.

The worst mine I can think of is one that cannot be detected by a metal detector and sits a little underground. It stays underground, comes to the surface at a random time, then finds a target. Think of a large cicada with an explosive that can be detonated without needing metal. Not sure how possible that is, but it seems maximally horrible.

Comment by RussellThor on Drone Wars Endgame · 2024-02-01T20:53:46.855Z · LW · GW

Yes, I agree about high altitudes. Some people have now started to make a distinction between low-altitude air superiority and high-altitude/fast air superiority. High-altitude superiority is still very important, but not having it is perhaps not the crippling problem it used to be. The more evenly matched the forces, the more it probably matters.

Comment by RussellThor on Drone Wars Endgame · 2024-02-01T20:14:18.011Z · LW · GW

LOL ...
"make a meme picture based of the very common one of a man looking at another women when holding his gf hand. The situation is where you write a blog post, someone criticized it, and rather than replying you wait for someone else to defend it well. There are exactly two texts, one on each women. The text on the other women, in the foreground on the left is "wait for someone else to defend it" The text on the girlfriend on the right is "defend your own post" 

<Drone wars are ahead of meme wars for the moment>

Generated by DALL·E
Comment by RussellThor on Drone Wars Endgame · 2024-02-01T20:09:51.150Z · LW · GW

Thanks for the detailed and well thought out replies! I was about to make most of the same points you make, and you have done a good job of making them for me.

Comment by RussellThor on Drone Wars Endgame · 2024-02-01T05:10:07.095Z · LW · GW

Next Big Future has just dropped a few articles on this e.g. 

https://www.nextbigfuture.com/2024/01/air-force-research-lab-focuses-on-better-missiles-and-ai-drone-swarms.html

Comment by RussellThor on My Alignment "Plan": Avoid Strong Optimisation and Align Economy · 2024-02-01T05:07:23.192Z · LW · GW

But it would surely be more likely to hack x-2 than x-1?

Comment by RussellThor on My Alignment "Plan": Avoid Strong Optimisation and Align Economy · 2024-02-01T03:06:42.297Z · LW · GW

What about quickly distributing frontier AI once it is shown to be safe? That is risky, of course, if it isn't safe; however, if the deployed AI is as powerful as possible and distributed as widely as possible, then a bad AI would need to be comparatively more powerful to take over.

So

AI(x-1) is everywhere and protecting as much as possible, AI(x) is sandboxed 

VS

AI(x-2) is protecting everything, AI(x-1) is in a few places, AI(x) is sandboxed.

Comment by RussellThor on Without Fundamental Advances, Rebellion and Coup d'État are the Inevitable Outcomes of Dictators & Monarchs Trying to Control Large, Capable Countries · 2024-01-31T23:57:47.983Z · LW · GW

Parody/analogy aside, a real concern among those who don't want to slow down AI is that while it used to be true that dictatorships were unstable, they no longer are. They (and I, somewhat) believe 'If you want a picture of the future, imagine a boot stamping on a human face - for ever' is the likely outcome if we don't push through to super AI. That is, the AI tools already available are sufficient to keep a dictatorship in power forever once it is set up. So if we pause at this tech level we will steadily see societies become such dictatorships, and never come back.

Consider if a government insisted that everyone constantly wear an ankle-bracelet tracker that recorded everything they said, plus body language, connected to an LLM. There would simply be no possibility of expressing dissent, and certainly no way to organize any. Xinjiang could be there soon.

If that is the case, then a long pause on AI would mean the final decision on AI, if it happened at all, would be made by such a society - surely a bad outcome. We simply don't know how stable or otherwise our current tech level is, and there seems to be no easy way to find out.

Comment by RussellThor on MIRI 2024 Mission and Strategy Update · 2024-01-06T19:45:58.037Z · LW · GW

It's that I and many others would identify with a WBE, and with a group of WBEs, much more than with a more purely artificial AI. If the WBE behaves like a human, then to me it is aligned by definition.

If we believe AI is extreme power - and we already have too much power - then it's all about making something we identify with.

Comment by RussellThor on MIRI 2024 Mission and Strategy Update · 2024-01-05T04:42:23.053Z · LW · GW

What's preventing them from massive investments into WBE/upload? Many AI/tech leaders who think the MIRI view is wrong would also support that.

Comment by RussellThor on "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity · 2023-12-17T02:11:58.134Z · LW · GW

Yes this is a slow-takeoff scenario that it is realistic to be worried about. 

Comment by RussellThor on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-16T20:49:17.979Z · LW · GW

I agree with this:

"there is absolutely no architectural advance in the human brain over our primate ancestors worth mentioning, other than scale"

However, how do you know that a massive advance isn't still possible, especially as our NNs can use things such as backprop, potentially quantum algorithms to train weights, and other potential advances that simply aren't possible for nature to use? Say we figure out the brain's learning algorithm, get AGI, and then quickly get something that uses the best of both nature and tech not accessible to nature.

Comment by RussellThor on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-16T07:10:19.864Z · LW · GW

I can't point to such a site; however, you should be aware of AI Optimists - not sure if Jacob plans to write there. Also follow the work of Quentin Pope, Alex Turner, Nora Belrose, etc. I expect such a site would point out what they feel are the most important risks; I don't know of anyone rational, no matter how optimistic, who doesn't think there are substantial ones.

Comment by RussellThor on Shallow review of live agendas in alignment & safety · 2023-12-11T21:58:38.358Z · LW · GW

Thanks for all the effort! There really is a lot going on.

Comment by RussellThor on How LDT helps reduce the AI arms race · 2023-12-10T20:26:37.195Z · LW · GW

OK, but why not just the Coherent Extrapolated Volition of humanity (one of Yudkowsky's better concepts)? A norm where all parties agree to that also seems easier to get people to sign up to. That by definition includes much of your values; sure, there is some incentive to defect, but not much, it would seem.

Comment by RussellThor on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-10T04:16:55.398Z · LW · GW

For me - 

  1. Yes to AI being a big deal and extremely powerful (I doubt anyone would be here otherwise).
  2. Yes - I don't think anyone can reasonably claim it's <5%, but then neither is the risk from not having AI, if x-risk is defined as humanity missing practically all of its cosmic endowment.
  3. Maybe - even with a slow takeoff and hardware constraints you get much greater GDP, though I don't agree with 100x (for the critical period, that is; 100x could happen later). E.g. car factories are converted to produce robots and we get 1-10 billion more minds and bodies per year, but not quite 100x. ~10x per year is enough to be extremely disruptive, and an x-risk, anyway.

---

(1)

Yes, I don't think x-risk is >95% - say 20%, as a very rough guess, that humanity misses all of its cosmic endowment. I think AI x-risk needs to be put in this context - say you ask someone:

"What's the chance that humanity becomes successfully interstellar?"

If they say 50/50, then being OK with any AI x-risk less than 50% is quite defensible, if getting AI right means it is practically certain you get your cosmic endowment (made explicit in the sketch below).
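(A rough sketch with the illustrative numbers above - not a careful model, and it ignores the option value of waiting:)

$$P(\text{cosmic endowment} \mid \text{build AI}) \approx 1 - p_{\text{doom}}, \qquad P(\text{cosmic endowment} \mid \text{never build AI}) \approx 0.5$$

On this toy comparison, building is the better bet whenever $1 - p_{\text{doom}} > 0.5$, i.e. $p_{\text{doom}} < 50\%$; with the 20% guess above, $0.8 > 0.5$ comfortably.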

---

(2)

I do think it's defensible that even a century of dedicated research on alignment doesn't get risk <15%, because alignment research is only useful a little in advance of capabilities - say we had a 100-year pause; I still wouldn't have confidence in our alignment plan at the end of it.

Anyway, regarding x-risk, I don't think there is a completely safe path. Go too fast with AI and there is obvious risk; go too slow and there are also other obvious risks. Our current situation is likely unstable. For example, the famous quote:

"If you want a picture of the future, imagine a boot stamping on a human face— forever."

I believe that is now possible with current tech, where it was not, say, for Soviet Russia. So we may be in the situation where societies can go 1984-totalitarian bad but not come back, because our tech and coordination abilities are sufficient to stop centralized empires from collapsing. LLMs of course make censorship even easier. (I am sure there are other ways our current tech could destroy most societies also.)

If that's the case, a long pause could result in all power residing in such societies, which, when the pause ended, would be very likely to screw up alignment.

That makes me unsure what regulation to advocate for, though I am in favor of slowing down AI hardware progress while fully exploring the capabilities of our current hardware.

Most importantly I think we should hugely speed up Neuralink type devices and brain uploading. I would identify much more with an uploaded human that was then carefully, appropriately upgraded to superintelligence than an alternative path where a pure AI superintelligence was made.

We have to accept that we live in critical times and just slowing things down is not necessarily the safest option.

Comment by RussellThor on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-09T00:10:47.586Z · LW · GW

OK, you are answering at a more detailed level than I raised, and seem to assume I didn't consider such things. My reading, and IMO the expected reading, of "SUSY has failed" is not that such particles have been ruled out - I know they haven't - but that its theoretical benefits are severely weakened or entirely ruled out by recent data. My reference to SUSY was specifically about its opportunity to solve the hierarchy problem. This is the common understanding of one of the reasons it was proposed.

I stand by my claim that many/most of the top physicists expected, for more than a decade, that it would help solve such a problem. I disagree with the claim:

"but I think the smart physicists knew all along that those were just plausible hypotheses worth checking" - smart physicists thought SUSY would solve the hierarchy problem.

----

Common knowledge, from GPT4:

"can SUSY still solve the Hierarchy problem with respect to recent results"

Hierarchy Problem: SUSY has been considered a leading solution to the hierarchy problem because it naturally cancels out the large quantum corrections that would drive the Higgs boson mass to a very high value. However, the non-observation of supersymmetric particles at expected energy levels has led some physicists to question whether SUSY can solve the hierarchy problem in its simplest forms.

Fine-Tuning: The absence of low-energy supersymmetry implies a need for fine-tuning in the theory, which contradicts one of the primary motivations for SUSY as a solution to the hierarchy problem. This has led to exploration of more complex SUSY models, such as those with split or high-scale supersymmetry, where SUSY particles exist at much higher energy scales.
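(Schematically - conventions and coefficients vary between textbooks - the cancellation at stake is:)

$$\delta m_H^2 \sim \frac{y_t^2}{16\pi^2}\,\Lambda^2 \quad\xrightarrow{\ \text{SUSY}\ }\quad \delta m_H^2 \sim \frac{y_t^2}{16\pi^2}\left(m_{\tilde{t}}^2 - m_t^2\right)\log\frac{\Lambda}{m_{\tilde{t}}}$$

The quadratic sensitivity to the cutoff $\Lambda$ cancels between the top loop and its superpartner, but the leftover correction stays small only if the stop mass $m_{\tilde{t}}$ is near the weak scale; pushing superpartners up to high scales, as the LHC null results suggest, reintroduces the tuning SUSY was meant to remove.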

----

IMO ever more complex models rapidly become like epi-cycles.

Comment by RussellThor on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-08T04:32:40.593Z · LW · GW

Well, they definitely can be applied there - though perhaps that is a stage further than analogy, i.e. direct application of theory? Then of course the data can agree or disagree.

Comment by RussellThor on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-06T00:51:12.401Z · LW · GW

Thanks for your feedback. I certainly appreciate your articles and I share many of your views. Reading what you had to say, along with Quentin, Jacob Cannell, and Nora, was a very welcome alternative take that expanded my thinking and changed my mind. I have changed my mind a lot over the last year, from thinking AI was a long way off and Yudkowsky/Bostrom were basically right, to seeing that it is a lot closer and that theories without data are almost always wrong in many ways - e.g. SUSY was expected to be true for decades by most of the world's smartest physicists. Many alignment ideas from before GPT-3.5 are either sufficiently wrong or irrelevant that they do more harm than good.

Especially, I think, the over-dependence on analogy and on evolution. Sure, when we had nothing to go on it was a start, but when data comes in, ideas based on analogies should be dropped pretty fast if they disagree with hard data.

(Some background: I have read the site for over 10 years, have followed AI for my entire career, have an understanding of maths and psychology, and have built and deployed a very small NN model commercially. Also, as an aside, I distinctly remember being surprised that Yudkowsky was skeptical of NN/DL in the earlier days, when I considered it obviously where AI progress would come from - I don't have references, because I didn't think that would be disputed afterwards.)

I am not sure what the silent-majority belief on this site is (by people, not karma): is Yudkowsky's worldview basically right or wrong?

Comment by RussellThor on Arguments for optimism on AI Alignment (I don't endorse this version, will reupload a new version soon.) · 2023-10-16T07:52:15.251Z · LW · GW

Good to see your point of view. The old arguments about AI doom are not convincing to me anymore; however, getting alignment 100% right, whatever that means, in no way guarantees a positive Singularity.

Should we be talking about concrete plans for that now? For example, I believe that with a slow takeoff, if we don't get Neuralink or mind uploading, then our P(doom) -> 1 as the super AI gets ever further ahead of us. The kinds of scenarios I can see:

  1. "Dogs in a war zone": great powers make ever more powerful AIs and use them as weapons. We don't understand our environment and it isn't safe. The number of humans steadily drops to zero.
  2. Some kind of Moloch hell, without explicit shooting. Algorithms run our world; we don't understand it anymore, and they bring out the worst in us. We keep making more sentient AIs and are greatly outnumbered by them, until there are no more of us.
  3. WALL-E-type scenario: basic needs met, digital narcotics, etc.; we lose all ambition.

I can't see a good one as ASI gets way further ahead of us. With a slow takeoff there is no sovereign to help with our CEV, pivotal acts are not possible etc.

I personally support some kind of hardware pause - when Moore's law runs out at 1-2nm, don't make custom AI chips to overcome the von Neumann bottleneck - combined with hard acceleration of neural interfaces and WBE/mind uploading. Doomer types also seem to back something similar.

I don't see the benefit of arguing over the conventional 2010s-era alignment ideas anymore - only data will change people's minds now. If you believe in a fast takeoff, I can't see anything short of an IQ-180 AI / weak superintelligence saying "I can't optimize myself further unless you build me some new hardware" making a difference.

Comment by RussellThor on Evolution Solved Alignment (what sharp left turn?) · 2023-10-13T05:10:39.198Z · LW · GW

Yes thanks, that thread goes over it in more detail than I could.