Non-Technical Preparation for Hacker-AI and Cyberwar 2.0+

post by Erland Wittkotter (Erland) · 2022-12-19T11:42:52.289Z · LW · GW · 0 comments

Contents

      How could governments and their people prepare with non-technical means for Hacker-AI and Cyberwar 2.0+ before sufficient technical capabilities provide protection?
  (1) Situations
  (2) Cyberwar 2.0+
  (3) What is detectable in a Cyberwar 2.0+?
  (4) What are the Preparation Goals/Measures for a Cyberwar 2.0+ Target?
    1. Information/intelligence gathering
    2. Preservation/continuity of governance
    3. Protection against economic damages/disruption
    4. Protection of communication with citizens
    5. Maintaining capability to do reliable actions
    6. Protection of people
    7. Generally suggested methods/rules or behavior
  (5) Preparations for not-directly targeted countries
  (6) Discussion
  (7) Conclusion

I have been hacked, and some (selected) files were deliberately deleted - this file/post was among them. I found suspicious software on my computer and made screenshots; these screenshots were deleted together with several files from my book “2028 - Hacker-AI and Cyberwar 2.0”. I could recover most of it from my backup.

So far, I have hesitated to post sensational warnings about how smartphones could be used in malware-based cyberwars designed to decapitate governments. Still, I wrote this paper out of concern that this could happen – and cheap marketing talk does not and will not change that.

I don’t know who hacked me, but I assume they got this post. It is spicy (and scary). I am publishing it here to protect my family and myself. I have beefed up my security; certainly, these measures are no match for what I believe attackers could do. Therefore, I have established measures to guarantee that my research will survive and be published even before it officially appears as a book.

I changed my password credentials, but I am under no illusion that this is enough to protect this publication. I have asked friends to save and print this post on their local systems (and not to sign into this site while doing so). I will certainly repost it if an impersonator deletes (or modifies) it with my credentials. (LessWrong doesn’t have 2FA – and I am not suggesting it, because I assume it would be insufficient anyway.)

 

Hacker-AI is AI used in hacking to accelerate the creation of new malware that is easier to use and more targeted. It could become an advanced super-hacker tool (gaining sysadmin access on all IT devices), able to steal crypto-keys or any secret it needs, and to use every available feature on devices. As discussed in “Hacker-AI and Digital Ghosts – Pre-ASI” [LW · GW] and “Hacker-AI – Does it already exist?” [LW · GW], Hacker-AI helps malware avoid detection (as a digital ghost) and make itself irremovable on visited devices.

In the paper/post “Hacker-AI and Cyberwar 2.0+” [LW · GW], I hypothesized that Hacker-AI is an attractive cyber weapon for states waging cyberwar because it facilitates the following cyberwar-related actions or capabilities:

  1. Surveillance (audio, video, and usage) of smartphones or other IT devices - at scale (every system)
  2. Selective denial of access for targeted people via malware on their devices
  3. Direct threatening of targeted people via AI bots on their devices
  4. Real-time deep fakes and their use in redefining facts/truth
  5. Reduction of costly war consequences via pre-war intelligence and preparation
  6. Misdirection about who is the culprit behind cyber activities

An effective defense against the above capabilities is only achievable by technically restricting the capabilities of Hacker-AI, as suggested in the tech proposal posted in “Improved Security to Prevent Hacker-AI and Digital Ghosts [LW · GW]”. However, it will take time to deploy sufficient defense capabilities, during which an assailant could start waging Cyberwar 2.0.

How could governments and their people prepare with non-technical means for Hacker-AI and Cyberwar 2.0+ before sufficient technical capabilities provide protection?

(1) Situations

In the post “Safe Development of Hacker-AI Countermeasures – What if we are too late?”, I introduced six Threat-Levels (TL), depending on the defender’s knowledge of the assumed or proven existence, capabilities, or actions of Hacker-AI as it relates to the development/deployment of technical countermeasures:

Hacker-AI is feasible, but it is unknown whether it already exists. Therefore, the current threat level is below TL-2, but this could change quickly if at least one nation uses the time to work on Hacker-AI features and turns it into a deployed cyber weapon for a Cyberwar 2.0+; then we would be at TL-3.

We are not predicting future events, political developments, or intentions. We describe a scenario that requires urgent attention due to its attractiveness to adversaries and its catastrophic outcome for the targets.

We assume that a nation like PROC is preparing and then waging a Cyberwar 2.0 on Taiwan. Alternatively, Russia could attack Ukraine (or another neighbor) in 3 or 4 years with the cyberwar capabilities mentioned here. So far, Russia and PROC seem to have invested more in propaganda-based, espionage, or damage-creating cyberwar weapons than in offensive, malware-generating Hacker-AI and Cyberwar 2.0+ capabilities.

Additionally, the US and its allies could have already developed parts of the Hacker-AI technology, but because of legal considerations, they are using it sparingly and in a targeted way, and not deploying it offensively.

Because Taiwan (ROC) has a high density of smartphones/IT devices, it is almost an ideal candidate for this type of warfare. Therefore, the most likely first example of Cyberwar 2.0+ is the annexation of Taiwan by PROC. In this baseline scenario, we assume that PROC has the technical skills to use cyber-attack tools to replace (decapitate) ROC’s leadership, incl. the governmental bureaucracy. In the aftermath, PROC could create and operate an AI-based surveillance apparatus over 23 million inhabitants to fortify its gains.

Based on our baseline scenario, the targeted country (ROC) and seemingly unaffected countries (USA, NATO countries) are not technically prepared. We have two distinct situations. (A) The target country is directly surveilled and attacked in Cyberwar 2.0+, and (B) directly unaffected countries (bystanders) respond/adapt to the fact that Cyberwar 2.0+ is now an existential threat for every country. 

Only the USA, NATO countries, and a few other countries have the engineering capacity to contribute to the development/production of countermeasures. It is unlikely that Taiwan, as the cyberwar target, could participate.

(2) Cyberwar 2.0+

At a high level, Cyberwar 2.0+ is deniable, and its operations are very hard to detect. Its most likely outcome is the replacement of the government and the establishment of an AI-based surveillance system. The war operations consist of three distinct Cyberwar Phases (CWP): (I) pre-war, (II) actual war, and (III) post-war, defined by different activities:

On the assailant’s side, people involved in this war as operators won’t dare to talk, as they would know about the surveillance capabilities. Any sign of dissent could probably be identified ahead of the operation.

The biggest impact of Cyberwar 2.0+ is on the people of the targeted country; their freedom is taken away permanently. Even if citizens are later allowed to travel outside their country, the government would know whether these people have left enough “collateral” back home (i.e., their family, etc.) to guarantee their return.

If a country were taken over almost overnight, everyone involved in national security would ask: how can any country, including the USA or alliances of nations, protect its sovereignty?

Additionally, deniability and misdirection are essential to avoid direct retaliation. However, the outcome speaks for itself: a government was likely replaced in a cyberwar. An uncontrolled, fear-triggered escalation to military or nuclear retaliation (without solid proof) seems unlikely; what if the regime change was just a coup, an internal power struggle? Therefore, the biggest impact this event could have is that nations massively mobilize their technical talent to develop technical protection against Hacker-AI as soon as possible. By then, however, we are (for sure) too late. The problem of being too late and its late mitigation was discussed in the post: “Safe Development of Hacker-AI Countermeasures – What if we are too late?”.

(3) What is detectable in a Cyberwar 2.0+?

Technically unprepared defenders will not detect Cyberwar 2.0 activities. Detection happens only if the assailant intentionally steps out of the shadows, e.g., when it interacts with people, i.e., when it directly threatens many people via AI bots. However, evidence that this has happened could probably be suppressed by the malware. Threats serving an assailant’s agenda could also be delivered via deep fakes and blamed on an internal political struggle for power. We must assume that Hacker-AI’s malware/digital ghosts remain undetected during all cyberwar phases or that it always uses misdirection to blame others. The only exception would be if many people within the attacker’s camp dared to speak out as whistleblowers despite the personal risks and dangers to them.

Government’s or society’s decapitation could start as isolated technical problems – due to the suppression of certain information, it could take several days until the full scope of these disruptions becomes apparent to larger audiences. Even then, the assailant could find credible ways to blame others. During a time of confusion, the assailant could use the uncertainty to arrest or intimidate people in key positions within the bureaucracy, security apparatus, or political class and destabilize the existing order further. The carefully planned approach by attackers is likely more successful than any uncoordinated attempt to resist determined actions.

Technically unsophisticated victims are unlikely to recognize Hacker-AI activity; other explanations for cyber events will seem more reasonable. Cyberwar 2.0 (CWP-II) could already have started without anyone in the targeted country detecting it. Systematic surveillance of most people via their electronic devices (smartphones and PCs) could give attackers an uncatchable advantage.

Malware could surveil phone calls and other smartphone activities, locations, or proximity, or grab résumés from users’ email. People’s phone data reveal a person’s role, status, motivation, and potential pressure points. Audio could be transcribed on-device, and surveillance data could be aggregated into small, inconspicuous data packages covertly uploaded to thousands of servers outside the target’s and the assailant’s countries. From these data, the assailant could automatically derive detailed plans of action to manipulate institutions covertly. To win the cyberwar, the attacker must identify key people, including possible replacements, and create surveillance bubbles around them.

Cyberwar 2.0+ assumes that most people (even in relevant positions) can be compelled to comply with assailants’ demands via direct intimidation by AI bots or real-time deep fakes in communication or publications. If done inconspicuously, intimidation would not leave any traces, except some reporting on it.

The targeted country does not need to be invaded by soldiers. Many people in a society could be turned into collaborators, even traitors – with little persuasion. A machine-generated voice could deliver massive/ruthless threats (e.g., against the well-being of a family), where humans would potentially fail to be credible or consistent enough. Threats are delivered to individuals via malware/AI bots. Targeted persons are prevented from extracting digital or physical evidence or having other witnesses to these occurrences. Everyone targeted to become a collaborator could subsequently be observed. Follow-up actions could be initiated automatically via AI, drones, or deep-fake calls with the voices of relatives reporting strange or even scary events. It must be expected that most targeted (unprepared) people would stay silent and comply.

The problem for governments facing that kind of adversary is that finding nothing could mean there is nothing, or something they cannot detect. In Cyberwar 2.0, the new defense line runs through the populace. Defenders will receive plenty of data traces with useless noise or data meant to misdirect the defender’s attention or conclusions. Without addressing this critical deficit/blindness conceptually, there is little hope that meaningful actions or resistance could alter the outcome of Cyberwar 2.0.

(4) What are the Preparation Goals/Measures for a Cyberwar 2.0+ Target?

If the assailants’ main goal is the government’s and society’s decapitation, then defenders must establish measures to counter cyberwar 2.0+ activities by keeping the government in place and operable as long as possible. If possible, the conflict’s cost must be significantly increased for the assailant. Unfortunately, both goals are very difficult to achieve, likely impossible without dedicated or extensive preparation.

Preparing for Cyberwar 2.0 with digital or internet-based means to report suspicious deep fakes or intimidation by AI bots is a serious mistake. Digital channels will be suppressed or manipulated by flooding them with false or manipulated reports. Assuming that defenders could somehow work around the attacker’s capabilities is most likely wrong. We must accept (a) covert surveillance via mobile phones, (b) denial of service for critical people/organizations, (c) intimidation via AI bots, (d) deep fakes, (e) comprehensive attacker preparation/planning via simulation, and (f) misdirection about who the attacker is.

Governments must proactively prepare rules to remain in charge during CWP-II and to make their illegal replacement by a puppet government as difficult and time-consuming as possible. Unfortunately, the probability that a legitimate government could survive Cyberwar 2.0 is slim. It could already be considered a victory if it could publicly announce CUCA (“Country is under Cyber-Attack”) and warn the world community about what happened.

Still, we propose that governments and citizens must prepare measures to achieve many goals within the following categories:

  1. Information/Intelligence 
    • Governments need reliable info on threats/demanded tasks from intimidated people asap
  2. Preservation of Structures and organizational missions
    • Preventing government’s decapitation by maintaining (reduced) command and control
    • Preservation of existing bureaucratic/security structures and hierarchies
    • Increased organizational resilience against external influence or personal intimidation
  3. Protection against painful economic disruptions/damages
    • Reduction of economic disruption for defenders
    • Prepared methods to slow down detrimental decisions, accelerating beneficial decisions
  4. Protected (unaltered) access to or communication with citizens
    • Dependable announcement that a comprehensive cyberwar has started (CUCA)
    • Establishing (reliable) methods of authorized information flow to all citizens
  5. Maintaining capability for reliable actions during and after CWP-II
    • Preparing a command/control backup (i.e., an underground) for retaking governance later
  6. Protection of people
    • Preventing arrests of innocent members of the bureaucracy, security apparatus, or leadership – except when done by people who have first-hand knowledge or irrefutable evidence of treason
    • Protection of people who have given information despite threats

Keeping secrets around most of these measures is a waste because the adversary will gain this information anyway. However, when people are involved, electronic traces must be proactively avoided. The strength of the preparation should come from making it public (open source) so that people in different positions can contribute their detailed know-how to make it better.

1. Information/intelligence gathering

The government needs reliable information on what the assailant demanded in intimidating calls to people, as soon as possible (within 24 hours or faster). We must give informants a safe method to report these demands despite vicious threats against them and their families.

2. Preservation/continuity of governance

The assailant’s main mission is to decapitate the existing government – violently or via cyber-means by isolating the leadership or certain bureaucratic layers from each other. During this time, the risk is that the assailant will use fake orders or deep fake audio/video calls to create new structures with new or compromised/intimidated leadership. The existing governmental/bureaucratic/security structures and hierarchies and their organizational missions must be preserved or operated in pre-determined modes. 

Preventing government’s decapitation means that the command and control over countries’ institutions must be maintained, but potentially with reduced intensity. 

Because it is unlikely that technically unprepared countries can detect CWP-I activities, only demands by AI bots within CWP-II could be reported. A “Country is under Cyber-Attack” (CUCA) announcement should only be issued if there is sufficient evidence that decapitation is the adversary’s goal.

Governance must change significantly after CUCA is confirmed or declared. Meetings should be held and orders given in person only. Notes and orders should be handwritten (or typed on an old typewriter).

The government’s overall goals are to maintain command and control and to have resilient/stable organizations. The focus of a besieged government is to keep the lights on and not make (unnecessary) changes. Unlike in other wars, in Cyberwar 2.0 every decision, order, or change of laws/rules could be fake, and its authenticity must be questioned because of Hacker-AI, i.e., malware from Hacker-AI that could manipulate data representations of events to its advantage. A freeze on modifications to rules/laws is a significant limitation on a government’s sovereignty, but it is necessary due to the nature of the Cyberwar 2.0 waged against the targeted country.

Here are a few suggestions for concrete measures:

How well governmental organizations perform their operations after the CUCA declaration will likely have no impact on the government’s decapitation. Regime change is more likely accelerated by events that have nothing to do with operational activities. The theater leading to a new, assailant-controlled government will be judged against established (political) conventions, not by how poorly the legitimate executive continued its governance.

Realistically, the old government will be replaced quickly by a new puppet regime; cyberwar activities will then stop. Preparation for the continuation of governance is probably a waste of effort; in Cyberwar 2.0, the attacker will win predictably. The most important goal should be the announcement of CUCA to warn the world community about the cyberwar. If the government can provide evidence for its claim, it has probably scored an important victory before being defeated.

3. Protection against economic damages/disruption

Economic consequences for the assailant will come from sanctions, a less productive workforce, or sabotage. Increasing assailant’s costs without increasing the personal risks of people involved in these acts is extremely difficult and potentially unachievable.

Limiting painful economic consequences from disruptions or damages caused by assailants is largely outside the control of governments in CWP-II or CWP-III. The government should still try to reduce economic disruption for the defender’s population. The government’s analysts should identify methods to slow down decisions detrimental to its citizens while accelerating beneficial ones.

Suggesting concrete proposals is outside the author’s competence.

4. Protection of communication with citizens

The government must stay in touch with its citizens. Also, as consumers of received information, people must be sure that they receive unaltered messages that the government has authorized. People must trust the message, the medium (i.e., printed or delivered verbally at an event), and the messenger, whom the receiver should (at least) know. Digital communication channels must be distrusted. The following goals are for phases CWP-II and CWP-III:

  1. There must be a dependable announcement to citizens that the country is in a comprehensive cyberwar (CUCA). After this declaration, citizens are informed and prepared for the possibility that services or messages are compromised (i.e., helping the assailant) or deep-faked. It should be publicly announced that all publications, incl. videos/TV and all not-in-person audio/video communication, could be faked and used for propaganda or disinformation purposes.
  2. Tasks within civil defense measures are assigned (preferably) to teams of trained citizens or to newly trained (trusted) volunteers. Potentially spontaneously formed (autonomous) teams could help maintain and improve society’s living conditions when the government’s more centralized command and control is failing.
  3. The legitimate government has reliable methods of authorized information flow down to all citizens, based on word of mouth and redundant, trusted messengers.

We acknowledge that we do not have a comprehensive plan on how the above objectives can be accomplished reliably and sustainably (under surveillance). However, a few suggestions should be made here:

5. Maintaining capability to do reliable actions

In CWP-II, governments are under assault; command and control and the continuity of governance were already discussed under item 2 (preservation/continuity of governance).

The cyberwar will likely be lost, and a puppet regime will take control. In CWP-III, surveillance would continue and be enhanced by public surveillance measures if some people try to avoid being surveilled by their smartphones or personal devices.

Still, the old government should not give up on helping their fellow citizens to regain their freedom and stop the intrusive AI-based surveillance.

Dedicated experts should prepare conceptual plans on which key positions should be retaken/occupied by sympathizers and which key technical components should be deactivated or sabotaged, as this could favorably change the result of civil uprisings or help the resistance network.

Even if it is unlikely that there is a way back from total AI-controlled surveillance, no stone should be left unturned in studying how it could be reversed.

Security-hardware retrofits, as proposed in the tech post referenced above, can be miniaturized into very small components, e.g., as small as network plug-connectors (about 1 cm³). They would need to be smuggled into the country and distributed to people who want to regain control over their devices.

Existing smartphones are likely not retrofittable. We would need to destroy them as potential spying devices and use either simple burner phones or new smartphones with security components.

The idea of liberating a cyber-occupied country requires: 

(a)  fitting all IT equipment with security-hardware retrofits at around the same time, so that the people doing it do not become targets for the puppet regime’s security,

(b)  removing power from all (non-retrofittable) mobile or IoT devices and 

(c)  having a plan for switching off or removing adversarial control over public surveillance/infrastructure IT systems via (dormant underground/resistance) teams focusing on assigned tasks.

However, the occupier or new regime could have changed many systems within CWP-III; returning to the old order/pre-occupation might be impossible.

6. Protection of people

Citizens of the target country are in danger for three reasons: 

  1. The new regime could arrest someone for being a member of the political leadership, for being politically opposed to PROC, for being a key part of the government’s bureaucracy, or for being a member of the government’s security apparatus (military, intelligence services, or police).
  2. Anyone who spoke about being contacted by the assailant’s AI bots, or who refused to comply with their demands despite their threats, is at risk. These threats might be real, and machines do not forget; they could be carried out via drones almost automatically.
  3. Once the assailant controls security and the justice system, many more could be arrested because they fit a profile (e.g., being a potential saboteur) and could therefore be jailed in a re-education camp.

Events related to (1) and (2) happen during the actual war (i.e., CWP-II). Events related to (3) are part of the aftermath (CWP-III) and are outside the control of the legitimate (but replaced) government.

Helping many people escape a country affected by Cyberwar 2.0, or after CUCA is announced, is an important long-term contribution and potentially a good investment in a better future for that country and the people left behind.

7. Generally suggested methods/rules or behavior

Being under constant (covert) surveillance via smartphones or other IT devices (audio, video, or usage) is a new level of surveillance. Switching them off is often not enough, as electronic devices can merely appear to be switched off. People will deal with this situation differently. Some will surrender and accept the new reality, while others will give up on smartphones or IoT devices, keep electronic devices disconnected from the power supply, or remove their batteries. The novel 1984 by George Orwell illustrates the risk of (actively) avoiding surveillance: it could make the people doing so look more suspicious. Still, personal methods and organizational policies for dealing with mobile devices could already be implemented pre-cyberwar.

In any case, organizations should have trained cyber-security professionals educated on this topic.

(5) Preparations for not-directly targeted countries

The most significant difference between targeted and not-directly targeted countries is that non-targeted countries are probably not subject to total smartphone surveillance; the required computational backend would be too large to generate comprehensive data models of all people. Still, Hacker-AI operators might have conducted other reconnaissance missions to determine who is important enough to surveil more intensively. They may even surveil these individuals regularly or continuously.

How intrusive Hacker-AI operatives are within not-directly targeted countries is difficult to predict. They might try to penetrate defense systems and make militaries’ logistics partly or fully inoperable if they know they can’t be detected. Also, they might try to understand and then deactivate the (nuclear) retaliation system in several critical key components without making these changes detectable. I cannot claim to know whether Hacker-AI’s malware can penetrate hardened military systems. However, if these systems use the same architectural principles as regular systems, i.e., von Neumann architecture, virtual address space, direct memory access (DMA), etc., with no physical separation of security tasks from regular tasks and no whitelisting of all apps before they enter RAM, then it remains to be seen whether malware can penetrate them or whether these systems are hardened enough to resist.
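To make the “whitelisting of all apps before entering RAM” idea concrete, here is a minimal, hypothetical sketch in Python. It assumes a pre-computed allowlist of SHA-256 hashes of approved executables stored in a file called approved_hashes.txt (the file name and the wrapper approach are made up for illustration); a real implementation would have to live in the OS loader, firmware, or dedicated security hardware, not in user space.

    import hashlib
    import subprocess
    import sys

    # Hypothetical allowlist: SHA-256 digests of approved executables,
    # one hex digest per line. A real system would keep this list signed
    # and in tamper-resistant storage.
    ALLOWLIST_PATH = "approved_hashes.txt"

    def sha256_of_file(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def load_allowlist(path: str) -> set:
        """Load the set of approved hashes from a plain text file."""
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    def run_if_whitelisted(executable: str) -> None:
        """Launch the executable only if its hash is on the allowlist."""
        allowlist = load_allowlist(ALLOWLIST_PATH)
        if sha256_of_file(executable) not in allowlist:
            sys.exit(f"BLOCKED: {executable} is not on the allowlist")
        subprocess.run([executable], check=False)

    if __name__ == "__main__":
        run_if_whitelisted(sys.argv[1])

The sketch also shows the limitation that motivates the hardware proposals referenced above: if this check itself runs on a compromised von Neumann machine, Hacker-AI-grade malware could simply patch or bypass it, which is why physical separation of the security function is argued for.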

The goals of not-directly targeted countries after the confirmation that Cyberwar 2.0+ with Hacker-AI exists and is a viable form of warfare are probably twofold: 

(1) What can a country do immediately to prevent being another victim of Hacker-AI and Cyberwar 2.0+? 

(2) How can it create and protect a safe environment in which Hacker-AI countermeasures can be developed, manufactured, distributed, and deployed?

Regarding (1): Nothing could prevent or protect a country from being the next target. However, we would still suggest that the already proposed methods of guaranteeing continuity of government for targeted countries should be used. Additionally, smartphone use should be reduced until new smartphones with security hardware are manufactured and widely deployed. 

Although it might be too late, some computer systems should be taken offline immediately and kept offline until security-hardware solutions are used to protect them. Also, although it is not sufficient, old software should be reinstalled from persistent storage media (like CDs) if there is a chance that the system has been compromised.

Depending on cultural or societal consensus, countries will respond to the threat of Cyberwar 2.0 and potential total AI-based surveillance in different ways. They all must hope that technical security tools will soon lead out of this problem. It is conceivable that countries will use their war mobilization acts to create tools that help them adapt their bureaucracy, economy, and citizens to the new normal. However, these new tools must first be developed and deployed, which is step (2).

Regarding (2): This post refers to the post “Safe Development of Hacker-AI Countermeasures – What if we are too late?” for details. Every single piece of preparation done in the absence of Hacker-AI would be of tremendous help. The most important goal is to safeguard engineering skills, manufacturing capacities, and services, including all steps toward deployment, against nefarious malware interference in every way imaginable.

(6) Discussion

War preparation aims to deny the adversary its goals. For Cyberwar 2.0+, the assailant would try to achieve a rapid regime change and low follow-up costs from the war and its aftermath. In a conventional war, these goals are denied through destruction and sanctions from the world community. Cyberwar, however, is a remote data operation with surveillance, intimidation, and misdirection, designed to decapitate a government and its society and replace it with coerced puppets recruited from within the targeted population.

It is very difficult to fight a war in which the covert frontlines run tracelessly through society’s own population. Cyberwar activities are deniable and easily blamed on others. Coercion of people, used to accomplish operational goals that cannot be achieved otherwise, could be the only detectable event in which an assailant steps out of the shadows. Physical evidence of cyber attacks will likely not exist. Governments could create a paper trail from anonymous handwritten reports, aggregated at more centralized information/intelligence offices from which trends or patterns could be derived.

These papers should be brought as diplomatic mail to other countries, where they should be stored and further studied. Only via diplomatic couriers, potentially involving other countries, would the world receive some physical evidence that led to the announcement of a cyberwar (CUCA). This outcome could already be considered an important victory under otherwise futile conditions.

(7) Conclusion

Too late means what it says: too late. Preparing for situations in which we are presumably too late does not necessarily lead to different outcomes. For targeted countries that are not technically prepared for a Cyberwar 2.0+, preparation would, at most, provide the world community a signal that changes (like a regime change) within the targeted country were triggered by a confirmed Cyberwar 2.0+.

It would be a significant success if many people targeted for arrest could be rescued before being placed in re-education camps.

A confirmed Cyberwar 2.0 would show that waging war can be a (seemingly) risk-free decision. Such a war could send shockwaves through the world. No country will be safe until technical means make surveillance of smartphones and IT devices much more difficult or even infeasible.
