Executing Multiple SSH Commands Using Python

There are many ways to do this. If you only want to run a few commands, the Python subprocess module might be best. If you are working only in Python 2.X, Fabric might be best. Since I wanted something that runs on both 2.X and 3.X, and I wanted to run lots of commands, I went with Paramiko. Here is the solution:

import paramiko

IDENTITY = 'path to private key'
REMOTE = 'hostname or IP of remote'
commands = ['uptime', 'df -h']  # whatever you need to run

# Load the key once, open a single connection, and run every command over it.
k = paramiko.RSAKey.from_private_key_file(IDENTITY)
with paramiko.SSHClient() as ssh:
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(REMOTE, username='vagrant', pkey=k)
    for command in commands:
        ssh.exec_command(command)
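exec_command returns the command’s stdin, stdout and stderr, so if you also want to see the output (or catch failures) you can read them. Here is a minimal sketch of a replacement for the loop above, assuming the same connection:

    for command in commands:
        stdin, stdout, stderr = ssh.exec_command(command)
        output = stdout.read().decode()              # blocks until the command finishes
        status = stdout.channel.recv_exit_status()   # 0 means success
        print(command, 'returned', status)
        print(output)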

 


SSH Notes

SSH Config

This file just defines some useful aliases. Put it in ~/.ssh/config. Here is a sample entry:

Host mysite
 HostName xxx.xxx.xxx.xxx
 IdentityFile ~/.ssh/id_rsa
 User root
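As an aside, Paramiko can parse this same file, so a Python script can reuse the alias instead of hard-coding the host, user and key path. A quick sketch, assuming the mysite entry above:

import os
import paramiko

config = paramiko.SSHConfig()
with open(os.path.expanduser('~/.ssh/config')) as f:
    config.parse(f)

entry = config.lookup('mysite')
print(entry['hostname'], entry['user'])   # the HostName and User from the entry
print(entry['identityfile'])              # IdentityFile comes back as a list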

Public Key Does not Work

Run ssh with -vvv (maximum verbosity) to debug. If you see this:

debug1: Offering DSA public key: /home/chuck/.ssh/id_dsa
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey,password

It means the server rejected the key: the public key in the remote host’s authorized_keys does not match the key your local machine is offering.

Copying the Key to the Remote Host

ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote-host

Reconfiguring AutoSSH

I start autossh using a shell script. In this script there are variables for the host IP address and the port. The first time I ran the script everything worked as expected.

Later the host IP address changed. So I killed the autossh processes, changed the IP address and re-ran the script. The problem was autossh started with the old IP address!

Somehow my original script had been copied into the same directory as the autossh command (/usr/bin), and when I ran the updated script, that stale copy was executed instead. Deleting the copy in /usr/bin solved the problem.

Making an SSH known_hosts File for Ansible

When I use Ansible, I often use SSH to download files from multiple locations. A recurring problem is that Ansible hangs because it is waiting for someone to accept the host’s SSH fingerprint. The solution is to use ssh-keyscan to build the known_hosts file ahead of time. Here is the gist:

- name: Create a known hosts file for root
  shell: ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
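The same trick is useful outside Ansible. Once known_hosts is populated, a Paramiko client (like the one in the first section) can verify the server against it instead of auto-adding keys. A sketch, assuming the /root/.ssh/known_hosts path above and a placeholder host and user:

import paramiko

ssh = paramiko.SSHClient()
ssh.load_host_keys('/root/.ssh/known_hosts')              # the file ssh-keyscan created
ssh.set_missing_host_key_policy(paramiko.RejectPolicy())  # refuse unknown hosts
ssh.connect('some.known.host', username='root')           # placeholder host/user
ssh.close()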

 

Vagrant, PostgreSQL and pgAdmin

Here’s how to use pgAdmin to inspect a database on a Vagrant virtual machine. Vagrant already has SSH set up, so the easiest and most secure way to connect to that database is pgAdmin’s SSH Tunnel feature. To do this, click the connection icon (a plug) and you will see something like this:

[Screenshot: pgAdmin connection dialog]

For Name you can enter anything. Note: Host is localhost, not the Vagrant IP address. Username is the database username. Password is the database password. DO NOT CLICK OK yet. Instead, click the SSH Tunnel tab. You will see something like this:

[Screenshot: pgAdmin SSH Tunnel tab]

Tunnel host is the IP address of the Vagrant VM. Username is the Linux username used when you SSH into the virtual machine (vagrant, by default). The identity file is the SSH private key for this virtual machine. When you run “vagrant up”, it creates a .vagrant folder. The private key is in there, at a path like:

~/my_project/.vagrant/machines/default/virtualbox/private_key

Now click OK and you should be connected.
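If you would rather script this than click through pgAdmin, the same kind of tunnel can be built from Python with the third-party sshtunnel and psycopg2 packages. A rough sketch, where the VM address (192.168.33.10), database name and credentials are all placeholders:

import os
import psycopg2
from sshtunnel import SSHTunnelForwarder

KEY = os.path.expanduser('~/my_project/.vagrant/machines/default/virtualbox/private_key')

with SSHTunnelForwarder(
        ('192.168.33.10', 22),                  # the Vagrant VM (placeholder IP)
        ssh_username='vagrant',
        ssh_pkey=KEY,
        remote_bind_address=('localhost', 5432)) as tunnel:
    # as in pgAdmin, the database host is localhost, not the VM address
    conn = psycopg2.connect(host='localhost',
                            port=tunnel.local_bind_port,
                            dbname='mydb', user='dbuser', password='dbpass')
    print(conn.server_version)
    conn.close()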

Addendum

I just started getting this error:

Error connecting to the server: server closed the connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.

Turns out the error was in the first dialog (above) for setting up the connection. For Host, I had put the Vagrant IP address. However, since I was using SSH tunneling, the correct Host is localhost. Ugh…