1) Context
This is not meant to be an all-inclusive guide so much as a quick checklist of what needs to happen. Ideally there will be links for each step to explain what is happening and give details about the services being installed.
This guide was written in May of 2024. Ideally, we keep this up to date, but everything here is actively being worked on and modified, so please make sure to bother your local RT or change the documentation here yourself if you find something here is out of date because of a development in our automation somewhere.
Link back to the deployment landing page: https://generaltso.cc.gatech.edu/guide/linux-server-deployment-landing-page
Last updated: Oct 2024
2) Assumptions
- We're assuming that you are trying to boot into Ubuntu 22. If you are booting into RHEL 9, the instructions are mostly the same, but there are some key differences, which are called out in the drop-down menus
- If you are deploying a laptop, make sure to refer to the supplemental guide at https://generaltso.cc.gatech.edu/guide/linux-laptop-deployment-supplemental
- All of the commands mentioned here are generally easier to run when logged in as root, or when elevating. I will usually log in as myself, then elevate with
sudo -i
- In an attempt to accommodate as many of our server instances as possible, there are conditionals here based on the type of system you are building. In general, they are next to each other in the bullets and marked in bold.
3) Early Steps
- Set up the DHCP in GTIPAM
- Gather the physical information for inventory
- hostname
- MAC address
- IP address
- CAE number
- PO number, cost, and warranty dates
- physical location
- If a Physical Server
- set up the BMC in GTIPAM
- If a VM
- set up the VM. It is worth noting that the drives should be marked as scsi in order for the OS to recognize them as "sda"; otherwise, the OS will see them as "vda" (see the fact-gathering sketch after this list for a quick check)
- If a desktop
- Every workstation is a bit different depending on the motherboard manufacturer, but you will need to go into the BIOS and manually enable IPv4 netbooting, make sure PXE is enabled, and enable any CSM compatibility options so that the workstation can netboot and have Linux installed on it
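For reference, once an OS is running on the box, most of the inventory fields above (and the sda/vda disk check mentioned for VMs) can be pulled with standard tools. A minimal sketch; dmidecode needs root:
hostname -f                 # hostname (FQDN)
ip -br link                 # MAC address per interface
ip -br addr                 # IP address per interface
dmidecode -s system-uuid    # system UUID, used later for the inventory entry
lsblk -o NAME,TYPE,SIZE     # disks should show as sda (scsi), not vda (virtio)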
4) Booting the OS
- Set up the system in Cobbler using the instructions found at https://tso.cc.gatech.edu/howtos/deploying-ubuntu-22-cobbler for Ubuntu 22
- Once the OS is installed, you should be able to log in with the user cocinstance and the default admin password.
- Access the system through the console or through
ssh cocinstance@HOSTNAME.cc.gatech.edu
- If Cobbler is not working, you can also install the OS via ISO instead.
5) ADD TO INVENTORY
- Inventory currently uses the UUID of the system as its identifier. If there is an entry with that UUID, WOSA will attempt to overwrite it; if one does not exist, a new entry will be created. The UUID can be pulled with
dmidecode -s system-uuid
- As part of the Cobbler installation process for Ubuntu, the sensu debug log will get an alert with the UUID of the system. While waiting for the system to install in Cobbler, go ahead and create the inventory entry with this UUID and the other manual information.
6) Salt
Reference documentation: https://tso.cc.gatech.edu/howtos/deploy-ubuntu-2204-salt
Troubleshooting documentation: https://tso.cc.gatech.edu/howtos/salt-tips-tricks-and-troubleshooting
- Accept the keys in Alcali before doing much else
- If the keys are not present in Alcali, run the first state.apply command below to populate the key, then accept it.
- Start running the base salt commands
sudo salt-call state.apply
sudo salt-call state.apply
(yes, for now we need to run this twice because of foreman registration issues)
sudo salt-call state.apply util.motd_template
sudo salt-call state.apply util.sshd_template
sudo salt-call state.apply util.reminder
sudo salt-call state.apply root_authorized_keys
- Check that you can log in as root from carora
sudo ssh HOSTNAME.cc.gatech.edu
- Run some more commands, either from the console or while logged in through an ssh terminal. ssh is usually easier so you can copy and paste commands
sudo salt-call state.apply rsyslog
sudo salt-call state.apply sssd
sudo salt-call state.apply coc-base.pam
(only run coc-base.pam if you want home directories created locally, i.e. not netmounted)
sudo salt-call state.apply coc-base.sudo
- Check that you can log in as yourself through ssh
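A quick way to confirm your own login works end to end (GTUSERNAME is a placeholder for your username):
ssh GTUSERNAME@HOSTNAME.cc.gatech.edu 'id'   # should print your uid and group memberships via sssd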
6.1) If deploying RHEL
6/6/24 -- [Bug]: https://github.gatech.edu/delegated-admin/gt-salt-coc/issues/9
If you are planning to use nethomes rather than local homes on a system, and you are not applying the coc-base.pam state to allow sssd logins, you will need to run
authselect select sssd --force
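To confirm which authselect profile ended up active, a quick check (authselect's standard subcommand):
authselect current   # should report "sssd" as the current profile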
7) Configure the System
7.1) If deploying Ubuntu
- Manage the root and local accounts.
sudo passwd
change the password to the default password
deluser --remove-home cocinstance
(may need to reboot the machine and make sure this user isn't logged in anywhere)
- Install some base apps
apt autoremove -y && apt update && apt full-upgrade -y && apt autoremove -y
apt update && apt -y install ncdu iptraf-ng inxi cdpr htop screen vim build-essential x11-apps linux-tools-common ipmitool ethtool iperf3 tmux pigz uptimed aptitude ldap-utils
- Set up unattended upgrades
- This may change depending on the type of server you are building, but most of our systems will be using an aggressive auto update policy
apt install unattended-upgrades update-notifier-common && dpkg-reconfigure --priority=low unattended-upgrades
- edit the lines in the conf file
nano /etc/apt/apt.conf.d/20auto-upgrades
- add
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
- edit the lines in the conf file
nano /etc/apt/apt.conf.d/50unattended-upgrades
- uncomment the line and change to true
Unattended-Upgrade::Remove-Unused-Dependencies "true";
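To confirm unattended-upgrades picked up the new settings, you can do a dry run; nothing is actually installed (standard apt tooling; note the binary name is singular):
unattended-upgrade --dry-run --debug
systemctl status apt-daily-upgrade.timer   # the systemd timer that triggers the nightly run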
7.2) If deploying RHEL
- Install some base apps
dnf groupinstall 'Development Tools' 'System Tools'
dnf autoremove -y && dnf upgrade -y && dnf autoremove -y
dnf update && dnf -y install ncdu iptraf-ng inxi htop screen vim xorg-x11-xauth perf gcc gcc-c++ ipmitool ethtool iperf3 tmux pigz uptimed openldap-clients redhat-rpm-config
systemctl enable --now uptimed
- Set up unattended upgrades
- This may change depending on the type of server you are building, but most of our systems will be using an aggressive auto update policy
dnf install dnf-automatic
- edit the lines in the conf file:
vi /etc/dnf/automatic.conf
- Change the following fields:
upgrade_type = default
download_updates = yes
apply_updates = yes
reboot = never
- NOTE: Depending on the system/endpoint, these may need to be changed, especially with regards to automatically applying updates. When in doubt, ask a fellow RT.
- Enable the dnf automatic update timer
systemctl enable --now dnf-automatic.timer
- Verify the dnf-automatic timer is active:
systemctl list-timers
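If you would rather not wait for the timer to fire, you can trigger a run by hand and read the log (standard systemd usage; if the unit name differs on your release, check systemctl list-unit-files 'dnf-automatic*'):
systemctl start dnf-automatic.service
journalctl -u dnf-automatic.service -n 20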
7.3) OS Agnostic for everything below
- Force disable sleep (a quick verification appears after this list)
systemctl mask --now {sleep,suspend,hibernate,hybrid-sleep}.target
- Change the .bashrc file
nano /root/.bashrc
- edit
HISTSIZE=10000
HISTFILESIZE=10000
HISTCONTROL="ignorespace"
- add
HISTIGNORE="history"
alias df="df -x devtmpfs -x tmpfs -x squashfs -x overlay"
alias lsblk="lsblk -e 7"
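To confirm the sleep targets from earlier in this list are actually masked (is-enabled prints "masked" for masked units):
systemctl is-enabled sleep.target suspend.target hibernate.target hybrid-sleep.target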
7.4) Set up Borg
salt-call state.apply borg_backup
- Follow the commands given at the end of the state output
7.4.1) Alternative - Manual Borg Setup
Some additional knowledge: Borg is an encrypted backup service that we use for our non-ZFS backups. If you want to back up a filesystem on ZFS, you should be using syncoid/sanoid instead. Borgmatic is a wrapper that adds some user-facing features to borg, and we will install it alongside borg. In general, assume that all of these commands should be run as root because of where the path variables and permissions end up.
Key Resources
https://borgbackup.readthedocs.io/en/stable/installation.html
https://torsion.org/borgmatic/docs/how-to/set-up-backups/
https://docs.borgbase.com/restore/borg/borgmatic
https://torsion.org/borgmatic/docs/how-to/extract-a-backup/
RHEL 9 Deployments
In the case of RHEL 9 deployments, changes to the PATH are required to run borg commands as root. Add the following lines to the /root/.bashrc file:
- vi /root/.bashrc
# Add usr and borg directory to PATH
PATH="$PATH:/usr/local/bin"
export PATH
How to check the status of the backups
If the host is able to connect with cc-hereford backups, then the status of the last backups can be checked as root with
borgmatic list
You can see the latest backups with
borgmatic list --last 3
We use three because by default there are three archives being backed up, and this will pull the latest three.
Installing Borg and Borgmatic
We install borg using pip as that is generally the most up to date version.
- Make sure that the prerequisite software is installed beforehand.
- Ubuntu:
apt update && apt install python3-pylibacl libacl1-dev python3-pip libssl-dev python3-venv python3-virtualenv pkg-config liblz4-dev libxxhash-dev libzstd-dev -y
- Rhel 9:
dnf update && dnf install openssl-devel gcc gcc-c++ python3-pylibacl libacl libacl-devel python3-pip python3-virtualenv pkgconf lz4-devel libzstd-devel xxhash-devel fuse fuse3-devel fuse3 redhat-rpm-config python3-pkgconfig -y
- Create the directory for the environment that we will be running borg out of, create a virtual environment, and install borg. We target a specific version of borgbackup right now because of a Python compatibility error, and hope that this is not a permanent procedure (Aug 2024)
mkdir -p /opt/virtualenv/borg
python3 -m venv /opt/virtualenv/borg
source /opt/virtualenv/borg/bin/activate
pip3 install --upgrade borgbackup
pip3 install --upgrade borgmatic
deactivate
- Run this one-liner to add borg to the path instead of just in the virtual environment's path (an expanded, commented version appears after this list)
for EACHFILE in borg borgmatic; do if [[ -L /usr/local/bin/${EACHFILE} ]]; then rm -rf /usr/local/bin/${EACHFILE}; fi; ln -s /opt/virtualenv/borg/bin/${EACHFILE} /usr/local/bin/; done
- Load the conf files from shoebox
mkdir -p /etc/borgmatic.d
mkdir -p /etc/borgmatic.d/common/
wget shoebox.cc.gatech.edu/deployment-files/borg_conf/ssh_cmd.yaml -P /etc/borgmatic.d/common/
wget shoebox.cc.gatech.edu/deployment-files/borg_conf/excludes -P /etc/borgmatic.d/common/
wget shoebox.cc.gatech.edu/deployment-files/borg_conf/standard.yaml -P /etc/borgmatic.d/
wget shoebox.cc.gatech.edu/deployment-files/borg_conf/containers.yaml -P /etc/borgmatic.d/
wget shoebox.cc.gatech.edu/deployment-files/borg_conf/web.yaml -P /etc/borgmatic.d/
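The symlink one-liner from earlier in this list, expanded with comments for readability:
for EACHFILE in borg borgmatic; do
    # remove any stale symlink left over from a previous install
    if [[ -L /usr/local/bin/${EACHFILE} ]]; then
        rm -f /usr/local/bin/${EACHFILE}
    fi
    # link the virtualenv binary into the system-wide PATH
    ln -s /opt/virtualenv/borg/bin/${EACHFILE} /usr/local/bin/
done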
Create the initial variables and add cc-hereford to known hosts
- get the fqdn of the host. This should include the .cc extension as well
- If in doubt, you should be able to get what the server sees with
hostname -f
- you can edit /etc/hosts if you need to add the extension. The format should be
127.0.1.1 HOSTNAME.cc.gatech.edu HOSTNAME
- Set the UUID and hostname as a variable
export UUID=$(dmidecode -s system-uuid | sed 's/[^[:alnum:]]//g')
export HOSTNAME=$(hostname -f)
- export a keyfile name that hereford will use to identify all backups from this host. If we rebuild a server, we will need to make sure this matches, or we will create new backups instead of adding to older ones.
- the final exported value will look like HOSTNAME-UUID-TIMESTAMP (with your actual hostname)
export KEYFILENAME=${HOSTNAME}-${UUID}-$(date "+%s")
- Do some ssh keygen work
ssh-keygen -t ecdsa -N "" -C "coc-backups@${KEYFILENAME}" -f "/root/.ssh/${KEYFILENAME}"
[[ -f "/root/.ssh/coc-backups" ]] && rm -f "/root/.ssh/coc-backups"
ln -s "/root/.ssh/${KEYFILENAME}" "/root/.ssh/coc-backups"
ssh-keyscan -p "8765" "collective-001.cc.gatech.edu" >> "/root/.ssh/known_hosts"
ssh-keyscan -p "5678" "assimilation.cc.gatech.edu" >> "/root/.ssh/known_hosts"
- Add this block to the bottom of
/root/.bashrc
export BORG_RSH="ssh -i /root/.ssh/coc-backups"
export BORG_REPO="ssh://coc-backups@collective-001.cc.gatech.edu:8765/./"
Registering the Host with cc-hereford
- Hereford has a registration script that does most of the lifting here; it all runs inside of a container named assimilation. Most of this process is setting up the variables and getting the key in order to run the script remotely on hereford. We don't need to do anything on the backup server to install borg, but it's useful to know that we are triggering things there from the host.
- Eventually this process will become scripted and use fewer one-line commands, but in the meantime, these are the individual commands to run.
ip addr | perl -ne 'if (/127\.\d+\.\d+\.\d+|::1/) { $a=1 } elsif (/((inet|inet6|ether)\s+(\S+))/) {print "$1\n"}' >> /tmp/borgid.txt
echo uuid ${UUID} >> /tmp/borgid.txt
echo "hostname $(uname -n)" >> /tmp/borgid.txt
export KEYFILE=$(readlink -f "/root/.ssh/coc-backups")
export SSHCMD="register ${UUID} coc-backups $(cat ${KEYFILE}.pub)"
- create a file that will hold the assimilation private key at /tmp/borg_key. You should delete this file when you are done setting up borg, but putting it in /tmp will help guarantee that.
- Get the contents of that file from gtswd at
/s1/gtswd/x86_64-ubuntu20.04/sw-borgbackup/release/6.dir/assimilate
chmod 600 /tmp/borg_key
- Run the actual script from the host; it will trigger on hereford. If the script runs successfully, you should see something along the lines of "OK" returned. Otherwise, most likely the variables were incorrectly set or the container may not be running.
cat /tmp/borgid.txt | ssh -p 5678 -i /tmp/borg_key coc-backups@assimilation.cc.gatech.edu ${SSHCMD}
- Wait around 30 seconds to a minute, and congratulations: you should now be good to start running borg backups
Initializing and Running Borgmatic
Now that everything has been set up and hereford is configured to receive backups, we are ready to actually run borgmatic commands.
- Initialize the repository and archives. It will ask you to set a passphrase. We do not want a passphrase, so just hit enter twice; nothing will be displayed for verification.
borgmatic init --encryption repokey
- Run the initial backup. If this is a new server, the process should take up to a minute, but if you are installing borg on something with a fair amount of data, you may want to run this in a tmux session.
borgmatic --verbosity 1 --files
- Check the status of the backups you just made. We use 3 here because that is how many archives we are using by default. This number could change if you add additional .yaml configuration files
borgmatic list --last 3
- Finally, add this to the daily cron jobs so backups regularly occur. The random values make sure that the backup server doesn't get slammed by all of the servers trying to back up at once.
echo "$(( $RANDOM % 60 )) $(( $RANDOM % 24 )) * * * root " 'PATH=$PATH:/usr/local/bin /usr/local/bin/borgmatic --verbosity 1 --files' > /etc/cron.d/borgmatic_daily
Post Install - Running backups and restoring backups from the host
If you want to manually run a borg backup, you can do that with
borgmatic --verbosity 1 --files
If you receive an error about there being a lock file, that is because the server is currently in the process of running a backup. Do not break the lock unless you know what you are doing.
If you want to restore a backup, I'd recommend looking up the documentation first to get familiar with the process: https://docs.borgbase.com/restore/borg/borgmatic . It is worth noting that restoring a backup does not reset the system back to a previous state. Specifically, restoring a backup will pull files from an archive into a directory of your choosing (by default, the directory you are currently in)
The general steps are as follows (a worked example appears after the list):
- you will want to create a directory to place the files in
- What I will usually do is create a temporary directory at
/tmp/borg_restore
- Get the archive that you will want to pull backup data from.
- This can be seen with
borgmatic rlist
- Note down the name of the archive; it should be in a format like standard-2024-04-29T09:49:54
- Pull the files from the archive
borgmatic extract --archive standard-2024-04-29T09:49:54 --destination /tmp/borg_restore/
- If you are looking to pull a specific file, append
--path /file_location
to the command, e.g.
borgmatic extract --archive standard-2024-04-29T09:49:54 --destination /tmp/borg_restore/ --path /home/important_file
- If you just want the latest backup either way, you can use
--archive latest
instead of the full archive name
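Put together, restoring a single file might look like this (the archive name and file path are examples):
mkdir -p /tmp/borg_restore
borgmatic rlist
borgmatic extract --archive standard-2024-04-29T09:49:54 --destination /tmp/borg_restore/ --path /home/important_file
ls /tmp/borg_restore/home/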
If you are unsure of what archive contains the backup you want to restore or in general want to browse through older data, then you can mount that backup to the host with borg. The general steps should include
- Identify the name of the archive as we did previously with restoring a backup
- e.g.
borgmatic rlist
- Using flags similar to restoring a backup (--archive, with --mount-point in place of --destination), run
borgmatic mount --archive latest --mount-point /mnt
And with this you should have the filesystem mounted and can browse through what is backed up in that archive.
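When you are done browsing, unmount the archive with borgmatic's umount subcommand:
borgmatic umount --mount-point /mnt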
7.5) If a Desktop
- install the desktop environment
apt install ubuntu-desktop
- disable the part where it forces a local user to be created
nano /etc/dconf/profile/gdm
- create and add
user-db:user
system-db:gdm
file-db:/usr/share/gdm/greeter-dconf-defaults
nano /etc/gdm3/custom.conf
- edit
[daemon]
InitialSetupEnable=false
- reboot the desktop with a screen attached to make sure the local user isn't forced to be created
- Note: If the network does not connect after installing the Ubuntu desktop, try the following:
- Update the /etc/NetworkManager/NetworkManager.conf file so that under [ifupdown], managed=true instead of managed=false, then restart the service:
systemctl restart NetworkManager
7.6) If NOT a Desktop
7.6.1) set up Sensu
- Right now this is a manual process with the instructions on https://tso.cc.gatech.edu/howtos/manually-installing-sensu-research-linux-systems
7.6.2) Manual Sensu Installation Instructions
Context
This documentation is meant to recreate, 1:1, the steps that Simon does, but with some updated conf files. It's reasonable that we will want to change this in the future, but this is mostly to document how the process is done now. Ideally this process will be handled by salt, but this will still be good reference material for what is happening as of May 2024.
Install the libraries and packages
Ubuntu 24 and below:
curl -s https://packagecloud.io/install/repositories/sensu/stable/script.deb.sh | sudo bash
apt install sensu-go-agent
RHEL:
curl -s https://packagecloud.io/install/repositories/sensu/stable/script.rpm.sh | sudo bash
dnf install sensu-go-agent
Ubuntu 24:
Sadly, the developers at Sensu have neglected updates, so an official version for Ubuntu 24 does not exist. However, you can configure the Sensu repository for Ubuntu 22 and install it without issue. Commands taken from: https://discourse.sensu.io/t/official-install-not-working-for-ubuntu-24-04-lts/3246/5
Update - This has since been fixed by the sensu developers, so the manual step is no longer needed. However, we will keep it here as an archive in case we ever need it for the next OS upgrade.
echo "deb https://packagecloud.io/sensu/stable/ubuntu/ jammy main" >> /etc/apt/sources.list.d/sensu.list
sudo mkdir -p /etc/apt/keyrings && curl -fsSL https://packagecloud.io/sensu/stable/gpgkey | sudo gpg --dearmor -o /etc/apt/keyrings/sensu.gpg
echo "deb [signed-by=/etc/apt/keyrings/sensu.gpg] https://packagecloud.io/sensu/stable/ubuntu/ jammy main" >> /etc/apt/sources.list.d/sensu.list
sudo apt-get update -y
sudo apt-get install -y sensu-go-agent
Continue the rest of the configuration as usual.
Enable Sensu and copy over the config files
Right now, these are default files that we are using for research systems; to change this, you should just have to go into the agent.yml file and change the subscription to match your specific team. Currently the files are located on shoebox, but hopefully this will move to salt soon.
systemctl enable --now sensu-agent.service
mkdir /etc/sensu/plugins
wget shoebox.cc.gatech.edu/deployment-files/sensu_conf/agent.yml -P /etc/sensu/
wget shoebox.cc.gatech.edu/deployment-files/sensu_conf/check-localfs-usage -P /etc/sensu/plugins/
wget shoebox.cc.gatech.edu/deployment-files/sensu_conf/check-ssh -P /etc/sensu/plugins/
wget shoebox.cc.gatech.edu/deployment-files/sensu_conf/check-zfs-health -P /etc/sensu/plugins/
wget shoebox.cc.gatech.edu/deployment-files/sensu_conf/check-zfs-usage -P /etc/sensu/plugins/
chmod 755 /etc/sensu/plugins/check-ssh
chmod 755 /etc/sensu/plugins/check-localfs-usage
chmod 755 /etc/sensu/plugins/check-zfs-health
chmod 755 /etc/sensu/plugins/check-zfs-usage
systemctl is-enabled sensu-agent.service
systemctl restart sensu-agent.service
systemctl status sensu-agent.service
By now, hopefully you see a Webex notification that your system is online.
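If the notification never shows up, the agent's journal is the first place to look (standard systemd journal usage):
journalctl -u sensu-agent.service -n 20 --no-pager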
7.7) If a GPU Server
- install nvidia
- run
ubuntu-drivers devices
to get a list of nvidia drivers available. More documentation can be found at:
https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-ubuntu-20-04-focal-fossa-linux
https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-ubuntu-22-…
- If in doubt on what driver to use, nvidia keeps a search tool at https://www.nvidia.com/Download/driverResults.aspx/214066/en-us/
- check that the drivers are working by running
nvidia-smi
after a reboot
- install cuda
- follow the instructions on https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=deb_network
- make sure to install the network version
- enable persistence mode
systemctl edit --full nvidia-persistenced
- change the flag --no-persistence-mode to --persistence-mode (remove the "no")
systemctl daemon-reload
systemctl restart nvidia-persistenced
- confirm by running
nvidia-smi
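To double-check both the service and the driver state after the reboot (the nvidia-smi -q output includes a "Persistence Mode" field):
systemctl status nvidia-persistenced --no-pager
nvidia-smi -q | grep -i "persistence mode"   # should report Enabled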
7.8) If a server using ZFS
- Add ZFS and ZFS backups
- manually install syncoid and sanoid
- configure and set up backups using the guide at https://generaltso.cc.gatech.edu/guide/backing-zfs
7.8.1) Manual Syncoid/Sanoid Installation
Context
Please note that this process only covers the installation and loading of syncoid/sanoid. This does not cover configuring and setting up backups. That is all covered in https://tso.cc.gatech.edu/howtos/backing-up-zfs
Install the libraries and packages
If Ubuntu 22
apt install libconfig-inifiles-perl libcapture-tiny-perl gzip lzop mbuffer pv
If RHEL 9
dnf install perl-Config-IniFiles perl-Data-Dumper perl-Capture-Tiny lzop mbuffer mhash pv
Load the applications and conf files directly
Right now, these are default files that we use for research systems. Currently the files are located on shoebox, but hopefully this will be changed to salt soon.
If Ubuntu 22
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/findoid-ubuntu/findoid -P /usr/local/sbin/ && chmod 755 /usr/local/sbin/findoid
If RHEL 9
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/findoid-ubuntu/findoid -P /usr/local/sbin/ && chmod 755 /usr/local/sbin/findoid
The rest should all be the same
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/sanoid -P /usr/local/sbin/ && chmod 755 /usr/local/sbin/sanoid
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/syncoid -P /usr/local/sbin/ && chmod 755 /usr/local/sbin/syncoid
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/sleepymutex -P /usr/local/sbin/ && chmod 755 /usr/local/sbin/sleepymutex
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/sanoid_conf_files/sanoid.conf -P /etc/sanoid/ && chmod 660 /etc/sanoid/sanoid.conf
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/sanoid_conf_files/sanoid.defaults.conf -P /etc/sanoid/ && chmod 660 /etc/sanoid/sanoid.defaults.conf
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/sanoid_conf_files/sanoid.example.conf -P /etc/sanoid/ && chmod 660 /etc/sanoid/sanoid.example.conf
Load and enable the services
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/systemd_services/sanoid-prune.service -P /etc/systemd/system/ && chmod 644 /etc/systemd/system/sanoid-prune.service
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/systemd_services/sanoid.service -P /etc/systemd/system/ && chmod 644 /etc/systemd/system/sanoid.service
wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/systemd_services/sanoid.timer -P /etc/systemd/system/ && chmod 644 /etc/systemd/system/sanoid.timer
systemctl daemon-reload
systemctl enable sanoid-prune.service
systemctl enable sanoid.timer
systemctl start sanoid-prune.service
systemctl start sanoid.timer
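To verify the units are wired up, and later that snapshots are current, you can check the timers and use sanoid's built-in monitor flag (it prints a nagios-style OK/warning summary):
systemctl list-timers 'sanoid*' --no-pager
/usr/local/sbin/sanoid --monitor-snapshots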
Load the public keys from cc-hereford and cc-braford
echo 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNwFWai1rKDyh7LcutkHjTKQmgCSJWbzr7DpITOkcSziEhpQQ1jekeg5anwrl+bRnB8vRRhIoxtVY/8/eZS06V8= root@cc-hereford.cc.gatech.edu' >> /root/.ssh/authorized_keys
echo 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP+bAe4+qkAcUyKhHPyzl8OLWRFBJAO8cw3f7N6ZoiIPdUY382quEJYF0CDhyZG4APmtwNOETdCHmkAkG4TAjuI= root@cc-braford.cc.gatech.edu' >> /root/.ssh/authorized_keys
7.9) If using Nethomes
Follow the guide at https://generaltso.cc.gatech.edu/guide/configure-system-nethomes
7.10) Configure User Access
Add final user groups to server
- add user logins to
/etc/sssd/conf.d/01-HOSTNAME
- add users to root access to
/etc/sudoers.d/50-HOSTNAME
- make sure to run
visudo -f /etc/sudoers.d/50-HOSTNAME
when editing the sudoers file!!!!
- restart sssd and sshd with
systemctl restart sssd
systemctl restart sshd
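A quick check that the new access works (GTUSERNAME is a placeholder for a user you just added):
getent passwd GTUSERNAME    # sssd should resolve the user
sudo -l -U GTUSERNAME       # should list the sudo rights granted by the 50-HOSTNAME file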
8) LAST STEP
Always make sure that all of your packages are up to date by running apt update and apt upgrade (or dnf upgrade on RHEL). If apt is struggling, try running aptitude upgrade to help resolve any conflicting dependencies.
And finally, make sure that you can reboot the server and that it comes up cleanly before calling it done.
9) Final Handoff Steps
- IF A DESKTOP
- register the machine in BOK to allow for dhcp
- IF IN A DATACENTER
- Make sure to configure the switchports with the correct vlan and description that you are plugging network in to
- https://tso.cc.gatech.edu/howtos/cisco-ios-and-nxos-basics
10) Troubleshooting Steps
Add troubleshooting steps and solutions here as you come across them
ToDo:
Add link to Garrett's video