Host Keys and SSHing to EC2

post by jefftk (jkaufman) · 2025-04-17T15:10:29.139Z · LW · GW · 6 comments


I do a lot of work on EC2, where I ssh into a few instances I use for specific purposes. Each time I did this I'd get a prompt like:

$ ssh_ec2nf
The authenticity of host 'ec2-54-224-39-217.compute-1.amazonaws.com
(54.224.39.217)' can't be established.
ED25519 key fingerprint is SHA256:...
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:591: ec2-18-208-226-191.compute-1.amazonaws.com
    ~/.ssh/known_hosts:594: ec2-54-162-24-54.compute-1.amazonaws.com
    ~/.ssh/known_hosts:595: ec2-54-92-171-153.compute-1.amazonaws.com
    ~/.ssh/known_hosts:596: ec2-3-88-72-156.compute-1.amazonaws.com
    ~/.ssh/known_hosts:598: ec2-3-82-12-101.compute-1.amazonaws.com
    ~/.ssh/known_hosts:600: ec2-3-94-81-150.compute-1.amazonaws.com
    ~/.ssh/known_hosts:601: ec2-18-234-179-96.compute-1.amazonaws.com
    ~/.ssh/known_hosts:602: ec2-18-232-154-156.compute-1.amazonaws.com
    (185 additional names omitted)
Are you sure you want to continue connecting (yes/no/[fingerprint])?

The issue is that each time I start my instance it gets a new hostname (which is just derived from the IP) and so SSH's trust on first use doesn't work properly.

Checking that the "185 additional names omitted" count is about what I'd expect to see is ok, but not great. And it delays login.

I figured out how to fix this today:

  1. Edit ~/.ssh/known_hosts to add an entry for each EC2 host I use under my alias for it. So I have ec2-44-222-215-215.compute-1.amazonaws.com ssh-ed25519 AAAA... and I duplicate that to add ec2nf ssh-ed25519 AAAA... etc.

  2. Modify my ec2 ssh script to set HostKeyAlias: ssh -o "StrictHostKeyChecking=yes" -o "HostKeyAlias=ec2nf" ...
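Concretely, the pair of steps looks something like this (the hostname, alias, and key material here are illustrative, and `$ADDR` stands for whatever hostname the instance currently has):

```shell
# Step 1 -- ~/.ssh/known_hosts: the entry SSH recorded under the
# ephemeral hostname, duplicated under a stable alias of my choosing:
#
#   ec2-44-222-215-215.compute-1.amazonaws.com ssh-ed25519 AAAA...
#   ec2nf ssh-ed25519 AAAA...

# Step 2 -- the ssh wrapper pins host-key verification to the alias,
# so it matches no matter which hostname/IP the instance came up with:
ssh -o "StrictHostKeyChecking=yes" -o "HostKeyAlias=ec2nf" "$ADDR"
```

With HostKeyAlias set, ssh looks up `ec2nf` in known_hosts instead of the ephemeral hostname, and StrictHostKeyChecking=yes turns a mismatch into a hard failure rather than a prompt.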

More secure and more convenient!

(What got me to fix this was an interaction with my auto-shutdown script, where if I did start_ec2nf && sleep 20 && ssh_ec2nf but then went and did something else for a minute or two the machine would often turn itself off before I came back and got around to saying yes.)


6 comments


comment by Dagon · 2025-04-17T21:06:40.317Z · LW(p) · GW(p)

You can put those options into .ssh/config, which makes it work for things which use SSH directly (scp, git, other tools) when they don't know to go through your script.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2025-04-17T23:58:56.828Z · LW(p) · GW(p)

I don't see how I could put them in .ssh/config? Let's say I have three hosts, with instance IDs i-0abcdabcd, i-1abcdabcd, and i-2abcdabcd. I start them with commands like start_ec2 0, start_ec2 1, etc., where start_ec2 knows my alias-to-instance-ID mapping and does aws --profile sb ec2 start-instances --instance-ids <instance-id>. Then to ssh in I have commands like ssh_ec2 0, which looks up the hostname for the instance and then sshes to it.
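A sketch of that setup (the function names and instance IDs are from the comment above; the `describe-instances --query` lookup is one standard AWS CLI way to fetch the current public hostname, though the exact query path is an assumption):

```shell
#!/bin/bash
# Map a short alias number to an instance ID (IDs are made up).
instance_id() {
  case "$1" in
    0) echo "i-0abcdabcd" ;;
    1) echo "i-1abcdabcd" ;;
    2) echo "i-2abcdabcd" ;;
    *) echo "unknown alias: $1" >&2; return 1 ;;
  esac
}

start_ec2() {
  aws --profile sb ec2 start-instances \
    --instance-ids "$(instance_id "$1")"
}

ssh_ec2() {
  # Look up the instance's current (ephemeral) public hostname, then
  # pin host-key verification to the stable alias instead of it.
  local addr
  addr=$(aws --profile sb ec2 describe-instances \
    --instance-ids "$(instance_id "$1")" \
    --query 'Reservations[0].Instances[0].PublicDnsName' --output text)
  ssh -o "StrictHostKeyChecking=yes" -o "HostKeyAlias=ec2_$1" "$addr"
}
```

Because the hostname is computed at call time, there is no fixed Host pattern that .ssh/config could match per instance, which is the difficulty being described.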

Replies from: faul_sname
comment by faul_sname · 2025-04-18T16:42:48.180Z · LW(p) · GW(p)

I think Dagon is saying that any time you're doing ssh -o "OptionKey=OptionValue" you can instead add OptionKey OptionValue under that host in your .ssh/config, which in this case might look like

Host ec2-*.compute-1.amazonaws.com
    HostKeyAlias aws-ec2-compute
    StrictHostKeyChecking yes

i.e. you would still need step 1 but not step 2 in the above post.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2025-04-18T17:36:50.241Z · LW(p) · GW(p)

If I only ever ssh'd into a single EC2 instance (aws-ec2-compute) then that would work, but I have several. Since Host ec2-*.compute-1.amazonaws.com matches every EC2 instance, and there's no way to tell from the hostname alone whether it's the one I'm calling ec2_0, ec2_1, ec2_2, etc., I can't do this through .ssh/config.

Replies from: faul_sname
comment by faul_sname · 2025-04-18T17:54:37.078Z · LW(p) · GW(p)

If you were to edit ~/.ssh/known_hosts to add an entry for each EC2 host you use, but put them all under the alias ec2, that would work.

So your ~/.ssh/known_hosts would look like

ec2 ssh-ed25519 AAAA...w7lG
ec2 ssh-ed25519 AAAA...CxL+
ec2 ssh-ed25519 AAAA...M5fX

That would mean that host key checking only works to say "is this any one of my ec2 instances" though.

Edit: You could also combine the two approaches, e.g. have

ec2 ssh-ed25519 AAAA...w7lG
ec2_01 ssh-ed25519 AAAA...w7lG
ec2 ssh-ed25519 AAAA...CxL+
ec2_02 ssh-ed25519 AAAA...CxL+
ec2 ssh-ed25519 AAAA...M5fX
ec2_nf ssh-ed25519 AAAA...M5fX

and leave ssh_ec2nf as doing ssh -o "StrictHostKeyChecking=yes" -o "HostKeyAlias=ec2nf" "$ADDR" while still having git, scp, etc. work with $ADDR, if "I want to connect to these instances in an ad-hoc manner not already covered by my shell scripts" is a problem you ever run into. I kind of doubt it is; I was mainly responding to the "I don't see how" part of your comment rather than claiming that doing so would be useful.

comment by Brendan Long (korin43) · 2025-04-17T18:00:32.888Z · LW(p) · GW(p)

This post prompted me to look into more general-purpose solutions to this, since it seems like "SSH into an IP that's known to be owned by a public cloud" should be fully automated at this point. We know which IPs are part of AWS and we can fetch the host keys securely using the AWS CLI (or helper tools like this). We should be able to do the same over HTTPS for GitHub, Azure, Google Cloud, etc.
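A sketch of the AWS half of that: `aws ec2 get-console-output` is a real CLI subcommand, and on most AMIs cloud-init prints the host public keys to the console on first boot between BEGIN/END marker lines. The marker strings, instance ID, and known_hosts plumbing below are assumptions to verify for your AMI; later boots may not reprint the keys.

```shell
# Pull the ed25519 host-key line out of an EC2 console-output dump.
extract_hostkey() {
  sed -n '/-----BEGIN SSH HOST KEY KEYS-----/,/-----END SSH HOST KEY KEYS-----/p' \
    | grep ssh-ed25519
}

# In practice this would be fed from a live instance, e.g.:
#   aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text \
#     | extract_hostkey
# (prepend your alias before appending the line to ~/.ssh/known_hosts).
# Demonstrated here on a canned excerpt instead of a live instance:
sample='-----BEGIN SSH HOST KEY KEYS-----
ssh-ed25519 AAAAC3Example host
-----END SSH HOST KEY KEYS-----'
printf '%s\n' "$sample" | extract_hostkey
```

Since the console output is fetched over the (TLS-authenticated) AWS API, this gives a trust path for the host key that doesn't depend on trust-on-first-use.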

It's surprising to me that no one seems to have made a general-purpose CLI or SSH plugin (if that's a thing) for this. Google Cloud has a custom CLI that does this but it obviously only works for their servers.