Raspberry Pi Cluster SSH & SCP Setup
Automated/Passwordless SSH Setup
The hostnames used for our cluster are:
You can verify that your current Pi has an appropriate hostname by running
cat /etc/hostname and checking that the output matches one of the node hostnames listed above.
Generate Master Node SSH Key
To make things easier for connecting to our slave nodes from the master, we will generate a SSH key for the master node and then register it as an authorized SSH key on each of the slave nodes. You generate an SSH key on the master node by running:
ssh-keygen -t rsa -C "pi@master-node0"
The -C flag in the above command simply adds our own comment to the SSH key so it is easy to identify later.
You will be prompted for a file in which to save the key; leave it blank by pressing Enter to accept the default. You will then be prompted for a passphrase; leave it blank as well by pressing Enter twice. (We leave the passphrase empty to avoid having to type passwords when running parallel programs on our cluster.)
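The same key can also be generated without any prompts by supplying the answers on the command line. A minimal sketch (the temporary directory is purely for illustration; on the master node you would let ssh-keygen default to ~/.ssh/id_rsa):

```shell
# Non-interactive key generation: -N "" sets an empty passphrase and -f
# names the output file, so ssh-keygen asks no questions.
# The temporary directory is illustrative only; on the master node the
# default location ~/.ssh/id_rsa is what you want.
keydir="$(mktemp -d)"
ssh-keygen -q -t rsa -b 4096 -N "" -C "pi@master-node0" -f "$keydir/id_rsa"
ls "$keydir"   # id_rsa (private key) and id_rsa.pub (public key)
```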
Once the master node's SSH key is generated, we need to add it to each slave node as an authorized key.
cat ~/.ssh/id_rsa.pub | ssh pi@ipaddressofslave "mkdir -p .ssh && cat >> .ssh/authorized_keys"
The above step can be repeated for all the slave nodes by replacing the IP address and, if it differs, the username (in our case every node uses the default "pi" user).
Alternatively, if you have many nodes to copy this to, you can automate this process with a bash script and a txt file listing all the IP addresses of the nodes you wish to copy this to.
while read ip; do ssh-copy-id -i ~/.ssh/id_rsa.pub pi@$ip; done < IPlistfile.txt
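If you want to sanity-check the loop before it touches the cluster, you can expand it and echo each command first. The IP addresses below are placeholders for your slave nodes, not values from this guide:

```shell
# Placeholder slave IPs; replace these with your cluster's actual addresses.
printf '%s\n' 192.168.1.11 192.168.1.12 192.168.1.13 > IPlistfile.txt

# Echo each command to verify it looks right; drop 'echo' to run for real.
while read -r ip; do
    echo ssh-copy-id -i ~/.ssh/id_rsa.pub "pi@$ip"
done < IPlistfile.txt
```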
Once done, SSH into each slave node and generate an SSH key using the same command as on the master node. Once the key is generated, each slave's public key needs to be added to the master node's authorized keys, again with a similar command:
cat ~/.ssh/id_rsa.pub | ssh pi@ipaddressofmaster "cat >> .ssh/authorized_keys"
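Alternatively, this step can be driven entirely from the master rather than logging into each slave. The sketch below (the collect_slave_keys name is illustrative, and it assumes each slave has already generated its key) pulls every slave's public key over SSH and appends it to the master's authorized_keys; the commands are echoed for review rather than executed:

```shell
# Sketch: from the master, pull each slave's public key and append it to
# the master's authorized_keys. Takes a file of slave IPs, one per line.
# Commands are echoed for review; the real command for each slave is:
#   ssh pi@$ip 'cat ~/.ssh/id_rsa.pub' >> ~/.ssh/authorized_keys
# (note the redirection is local, so the append happens on the master).
collect_slave_keys() {
    while read -r ip; do
        echo "ssh pi@$ip 'cat ~/.ssh/id_rsa.pub' >> ~/.ssh/authorized_keys"
    done < "$1"
}

# Usage, with the IP list from the ssh-copy-id step:
#   collect_slave_keys IPlistfile.txt
```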
Automated/Passwordless SCP Setup
Once passwordless SSH is set up, we can easily copy files to all nodes with scp. SCP is used to send files back and forth between networked nodes from the terminal.
A common use would be something like:
scp /path/to/fileToBeSent.cpp pi@slave-ip-address:~/slave/code/path
We can automate this a bit with some bash scripting:
while read ip; do scp /home/pi/fileToBeSent.cpp pi@$ip:~/slave/code/path; done < IPlistfile.txt
There are several ways to achieve this; another good option, if it is available on your setup, is the parallel-scp tool from the pssh package.
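With parallel-scp, the per-node loop collapses into a single command reading a hosts file. A sketch, reusing the file and destination path from the scp example above (the IP addresses are placeholders, and the command is echoed for review since pssh may not be installed yet):

```shell
# Hosts file for parallel-scp: one user@host per line.
# Placeholder IPs; replace with your slave nodes' addresses.
printf 'pi@%s\n' 192.168.1.11 192.168.1.12 192.168.1.13 > hosts.txt

# Copy the file to the same path on every host in one command.
# Echoed for review; remove 'echo' once the pssh package is installed.
echo parallel-scp -h hosts.txt /home/pi/fileToBeSent.cpp /home/pi/slave/code/path
```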