
1) Context

This is not meant to be an all-inclusive guide so much as a quick checklist of what needs to happen. Ideally, each step will link out to documentation that explains what is happening and the details of the services being installed.

This guide was written in May of 2024. Ideally, we keep this up to date, but everything here is actively being worked on and modified, so please make sure to bother your local RT or change the documentation here yourself if you find something here is out of date because of a development in our automation somewhere.

Link back to the deployment landing page: https://generaltso.cc.gatech.edu/guide/linux-server-deployment-landing-page

Last updated: Oct 2024

Back to top

2) Assumptions

  • We're assuming that you are trying to boot into Ubuntu 22. If you are booting into RHEL 9, the instructions are mostly the same, but there are some key differences, which can be found in the drop-down menus.
  • If you are deploying a laptop, make sure to refer to the supplemental guide at https://generaltso.cc.gatech.edu/guide/linux-laptop-deployment-supplemental
  • All of the commands mentioned here are generally easier to run when logged in as root, or when elevating. I will usually log in as myself, then elevate with sudo -i
  • In an attempt to accommodate as many of our server instances as possible, there are conditionals here based on the type of system you are building. In general, they are next to each other in the bullets and marked in bold.
Back to top

3) Early Steps

  • Set up the DHCP in GTIPAM
  • Gather the physical information for inventory
    • hostname
    • MAC address
    • IP address
    • CAE number
    • PO number, cost, and warranty dates
    • physical location
  • If a Physical Server
    • set up the BMC in GTIPAM
  • If a VM
    • set up the VM. It is worth noting that the drives should be marked as scsi in order for the OS to recognize them as "sda"; otherwise, the OS will see them as "vda"
  • If a desktop
    • Every workstation is a bit different depending on the motherboard manufacturer, but you will need to go into the BIOS and manually enable IPv4 network booting, make sure that PXE is enabled, and enable any CSM compatibility options so that the workstation can netboot and also have Linux installed on it
Back to top

4) Booting the OS

Text goes here

Back to top

5) ADD TO INVENTORY

  • Inventory currently uses the UUID of the system as its identifier for systems. If there is an entry with that UUID in it, WOSA will attempt to overwrite it; if it does not exist, a new entry will be created. The UUID can be retrieved with dmidecode -s system-uuid .
  • As part of the Cobbler installation process for Ubuntu, the Sensu debug log will get an alert with the UUID of the system. While waiting for the system to install in Cobbler, go ahead and create the inventory entry with this UUID and the other manual information.
Back to top

6) Salt

Reference documentation: https://tso.cc.gatech.edu/howtos/deploy-ubuntu-2204-salt
Troubleshooting documentation: https://tso.cc.gatech.edu/howtos/salt-tips-tricks-and-troubleshooting

  • Accept the keys in Alcali before doing much else
    • If the key is not present in Alcali, run the first base salt command below (sudo salt-call state.apply) to populate it, then accept it (a salt-key sketch for doing this from the master CLI follows this list).
  • Start running the base salt commands
    • sudo salt-call state.apply
    • sudo salt-call state.apply (yes, for now we need to run this twice because of foreman registration issues)
    • sudo salt-call state.apply util.motd_template
    • sudo salt-call state.apply util.sshd_template
    • sudo salt-call state.apply util.reminder
    • sudo salt-call state.apply root_authorized_keys
  • Check that you can log in as root from carora: sudo ssh HOSTNAME.cc.gatech.edu
  • Run some more commands, either from the console or while logged in through an SSH session. SSH is usually easier so you can copy and paste commands
    • sudo salt-call state.apply rsyslog
    • sudo salt-call state.apply sssd
    • sudo salt-call state.apply coc-base.pam (only run this if you want to create local home directories, i.e. not netmounted)
    • sudo salt-call state.apply coc-base.sudo
  • Check that you can log in as yourself through ssh
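
If you need to manage the minion key from the salt master's CLI instead of Alcali, the standard salt-key workflow covers it (a sketch, assuming you have root shell access on the salt master):

  • salt-key -L (the new minion should appear under "Unaccepted Keys")
  • salt-key -a HOSTNAME.cc.gatech.edu (accept the key for this minion)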

6.1) If deploying RHEL

6/6/24 -- [Bug]: https://github.gatech.edu/delegated-admin/gt-salt-coc/issues/9
If you are planning to use nethomes rather than local homes on a system and you are not applying the coc-base.pam state, you will need to run authselect select sssd --force to allow sssd logins.

Back to top

7) Configure the System

RHEL deployment changes are called out separately in the subsections below.

7.1) If deploying Ubuntu

  • Manage the root and local account.
    • sudo passwd (change the root password to the default password)
    • deluser --remove-home cocinstance (may need to reboot the machine and make sure this user isn't logged in anywhere)
  • Install some base apps
    • apt autoremove -y && apt update && apt full-upgrade -y && apt autoremove -y
    • apt update && apt -y install ncdu iptraf-ng inxi cdpr htop screen vim build-essential x11-apps linux-tools-common ipmitool ethtool iperf3 tmux pigz uptimed aptitude ldap-utils
  • Set up unattended upgrades
    • This may change depending on the type of server you are building, but most of our systems will be using an aggressive auto-update policy (the resulting 20auto-upgrades file is sketched after this list)
    • apt install unattended-upgrades update-notifier-common && dpkg-reconfigure --priority=low unattended-upgrades
    • edit the lines in the conf file
      • nano /etc/apt/apt.conf.d/20auto-upgrades
        • add
          • APT::Periodic::Download-Upgradeable-Packages "1";
            APT::Periodic::AutocleanInterval "7";
      • nano /etc/apt/apt.conf.d/50unattended-upgrades
        • uncomment the line and change to true
          • Unattended-Upgrade::Remove-Unused-Dependencies "true";
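
For reference, after dpkg-reconfigure and the edits above, /etc/apt/apt.conf.d/20auto-upgrades should end up looking roughly like this (the first two lines are written by dpkg-reconfigure itself):

  • APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";
    APT::Periodic::Download-Upgradeable-Packages "1";
    APT::Periodic::AutocleanInterval "7";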

7.2) If deploying RHEL

  • Install some base apps
    • dnf groupinstall 'Development Tools' 'System Tools'
    • dnf -y autoremove && dnf -y upgrade && dnf -y autoremove
    • dnf update && dnf -y install ncdu iptraf-ng inxi htop screen vim xorg-x11-xauth perf gcc gcc-c++ ipmitool ethtool iperf3 tmux pigz uptimed openldap-clients redhat-rpm-config
    • systemctl enable --now uptimed
  • Set up unattended upgrades
    • This may change depending on the type of server you are building, but most of our systems will be using an aggressive auto update policy
    • dnf install dnf-automatic
    • edit the lines in the conf file:
      • vi /etc/dnf/automatic.conf
        • Change the following fields (the resulting block is sketched after this list):
          • upgrade_type = default
          • download_updates = yes
          • apply_updates = yes
          • reboot = never
        • NOTE: Depending on the system/endpoint, these may need to be changed, especially with regards to automatically applying updates. When in doubt, ask a fellow RT. 
      • Enable the dnf automatic update timer
        • systemctl enable --now dnf-automatic.timer
      • Verify the dnf-automatic timer is active:
        • systemctl list-timers
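
For reference, the edited block of /etc/dnf/automatic.conf should end up looking roughly like this (other settings in the file can stay at their defaults):

  • [commands]
    upgrade_type = default
    download_updates = yes
    apply_updates = yes
    reboot = never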

7.3)  OS Agnostic for everything below

  • Force disable sleep
    • systemctl mask --now {sleep,suspend,hibernate,hybrid-sleep}.target
  • Change the .bashrc file (a one-liner for the added lines is shown below)
    • nano /root/.bashrc
      • edit
        • HISTSIZE=10000
          HISTFILESIZE=10000
          HISTCONTROL="ignorespace"
      • add
        • HISTIGNORE="history"
          alias df="df -x devtmpfs -x tmpfs -x squashfs -x overlay"
          alias lsblk="lsblk -e 7"
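  • If you'd rather append the added lines non-interactively, a one-liner like this works (the HISTSIZE/HISTFILESIZE/HISTCONTROL edits above still need to be made by hand):
    • printf '%s\n' 'HISTIGNORE="history"' 'alias df="df -x devtmpfs -x tmpfs -x squashfs -x overlay"' 'alias lsblk="lsblk -e 7"' >> /root/.bashrc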

7.4) Set up Borg

  • salt-call state.apply borg_backup
    • Follow the commands given at the end of the state output
  • 7.4.1) Alternative - Manual Borg Setup

    Some background: Borg is an encrypted backup service that we use for our non-ZFS backups. If you want to back up a filesystem on ZFS, you should be using syncoid/sanoid instead. Borgmatic is a wrapper that adds some user-facing features on top of borg, and we install it alongside borg. In general, assume that all of these commands should be run as root because of where the path variables and permissions end up.

    Key Resources

    https://borgbackup.readthedocs.io/en/stable/installation.html
    https://torsion.org/borgmatic/docs/how-to/set-up-backups/
    https://docs.borgbase.com/restore/borg/borgmatic
    https://torsion.org/borgmatic/docs/how-to/extract-a-backup/

    RHEL 9 Deployments

    In the case of RHEL 9 deployments, changes to the PATH are required to run Borg commands as root. Add the following lines to /root/.bashrc:

    •  vi /root/.bashrc
      • # Add usr and borg directory to PATH
        PATH="$PATH:/usr/local/bin"
        export PATH
    How to check the status of the backups

    If the host is able to connect to the cc-hereford backups, then the status of the last backups can be checked as root with borgmatic list . You can see the latest backups with borgmatic list --last 3 . We use three because by default there are three archives being backed up, and this will pull the latest three archives.

    Installing Borg and Borgmatic

    We install borg using pip as that is generally the most up to date version.

    1. Make sure that the prerequisite software is installed beforehand.
      • Ubuntu: 
        • apt update && apt install python3-pylibacl libacl1-dev python3-pip libssl-dev python3-venv python3-virtualenv pkg-config liblz4-dev libxxhash-dev libzstd-dev -y
      • Rhel 9:
        • dnf update && dnf install openssl-devel gcc gcc-c++ python3-pylibacl libacl libacl-devel python3-pip python3-virtualenv pkgconf lz4-devel libzstd-devel xxhash-devel fuse fuse3-devel fuse3 redhat-rpm-config python3-pkgconfig -y 
    2. Create the directory for the environment that we will be running borg out of, start a virtual environment and install borg. We target a specific version of borgbackup right now because of a python compatibility error, and hope that this is not a permanent procedure (Aug 2024)
      • mkdir -p /opt/virtualenv/borg
      • python3 -m venv /opt/virtualenv/borg
      • source /opt/virtualenv/borg/bin/activate
      • pip3 install --upgrade borgbackup
      • pip3 install --upgrade borgmatic
      • deactivate
    3. Run this one-liner to add borg to the path instead of just the virtual environment's path (a quick version check follows the next step)
      • for EACHFILE in borg borgmatic; do if [[ -L /usr/local/bin/${EACHFILE} ]]; then rm -rf /usr/local/bin/${EACHFILE}; fi; ln -s /opt/virtualenv/borg/bin/${EACHFILE} /usr/local/bin/; done
    4. Load the conf files from shoemaker
      • mkdir -p /etc/borgmatic.d
      • mkdir -p /etc/borgmatic.d/common/
      • wget shoebox.cc.gatech.edu/deployment-files/borg_conf/ssh_cmd.yaml -P /etc/borgmatic.d/common/
      • wget shoebox.cc.gatech.edu/deployment-files/borg_conf/excludes -P /etc/borgmatic.d/common/
      • wget shoebox.cc.gatech.edu/deployment-files/borg_conf/standard.yaml -P /etc/borgmatic.d/
      • wget shoebox.cc.gatech.edu/deployment-files/borg_conf/containers.yaml -P /etc/borgmatic.d/
      • wget shoebox.cc.gatech.edu/deployment-files/borg_conf/web.yaml -P /etc/borgmatic.d/
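    Before moving on, it is worth confirming that the symlinked binaries resolve outside of the virtual environment:
      • borg --version
      • borgmatic --version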
    Create the initial variables and add cc-hereford to known hosts
    1. Get the FQDN of the host. This should include the .cc.gatech.edu suffix.
      • If in doubt, you should be able to get what the server sees with hostname -f
      • You can edit /etc/hosts if you need to add the suffix. The format should be: IP_ADDRESS  HOSTNAME.cc.gatech.edu  HOSTNAME
    2. Set the UUID and hostname as a variable
      • export UUID=$(dmidecode -s system-uuid | sed 's/[^[:alnum:]]//g')
      • export HOSTNAME=$(hostname -f)
    3. export a keyfile name that hereford will use to identify all backups from this host. If we rebuild a server, we will need to make sure this matches or we will be creating new backups instead of adding to older ones.
      • The final export should look like the following (the hostname and UUID come from the variables set above)
      • export KEYFILENAME=${HOSTNAME}-${UUID}-$(date "+%s")
    4. Do some ssh keygen work
      • ssh-keygen -t ecdsa -N "" -C "coc-backups@${KEYFILENAME}" -f "/root/.ssh/${KEYFILENAME}"
      • [[ -f "/root/.ssh/coc-backups" ]] && rm -f "/root/.ssh/coc-backups"
      • ln -s "/root/.ssh/${KEYFILENAME}" "/root/.ssh/coc-backups"
      • ssh-keyscan -p "8765" "collective-001.cc.gatech.edu" >> "/root/.ssh/known_hosts"
      • ssh-keyscan -p "5678" "assimilation.cc.gatech.edu" >> "/root/.ssh/known_hosts"
    5. Add this block to the bottom of /root/.bashrc
      • export BORG_RSH="ssh -i /root/.ssh/coc-backups"
      • export BORG_REPO="ssh://coc-backups@collective-001.cc.gatech.edu:8765/./"
    Registering the Host with cc-hereford
    • Hereford has a registration script that does most of the lifting here that all runs inside of a container named assimilation. Most of this process here is setting up the variables and getting the key in order to run the script remotely on hereford. We don't need to do anything on the backup server to install borg, but it's useful to know that we are triggering things there from the host.
    • Eventually this process will become scripted and use less one-line commands, but in the meantime, these are the individual commands to run.
    1. ip addr | perl -ne 'if (/127\.\d+\.\d+\.\d+|::1/) { $a=1 } elsif (/((inet|inet6|ether)\s+(\S+))/) {print "$1\n"}' >> /tmp/borgid.txt
    2. echo uuid ${UUID} >> /tmp/borgid.txt
    3. echo "hostname $(uname -n)" >> /tmp/borgid.txt
    4. export KEYFILE=$(readlink -f "/root/.ssh/coc-backups")
    5. export SSHCMD="register ${UUID} coc-backups $(cat ${KEYFILE}.pub)"
    6. Create a file that will hold the assimilation private key at /tmp/borg_key . You should delete this file when you are done setting up borg, and putting it in /tmp will help guarantee that it gets cleaned up.
      • Get the contents of that file from gtswd at /s1/gtswd/x86_64-ubuntu20.04/sw-borgbackup/release/6.dir/assimilate
    7. chmod 600 /tmp/borg_key
    8. Run the actual script from the host that will trigger on hereford. If the script runs successfully, you should see something along the lines of "OK" returned back. Otherwise, most likely the variables were incorrectly set or the container may not be running.
      • cat /tmp/borgid.txt | ssh -p 5678 -i /tmp/borg_key coc-backups@assimilation.cc.gatech.edu ${SSHCMD}
    9. Wait around 30 seconds to a minute, and congratulations: you should now be good to start running borg backups
    Initializing and Running Borgmatic

    Now that everything has been set up and hereford is configured to receive backups, we are ready to actually run borgmatic commands.

    1. Initialize the repository and archives. It will ask you to set a passphrase. We do not want a passphrase, so just hit enter twice (nothing will be displayed for verification).
      • borgmatic init --encryption repokey
    2. Run the initial backup. If this is a new server, the process should take up to a minute, but if you are installing borg on something with a fair amount of data, you may want to run this in a tmux session.
      • borgmatic --verbosity 1 --files
    3. Check the status of the backups you just made. We use 3 here because that is how many archives we are using by default. This number could change if you add additional .yaml configuration files
      • borgmatic list --last 3
    4. Finally, add this to the daily cron job for backups to regularly occur. The random is used to make sure that the backup server doesn't get slammed by all of the servers trying to backup at once.
      • echo "$(( $RANDOM % 60 )) $(( $RANDOM % 24 )) * * * root " 'PATH=$PATH:/usr/local/bin /usr/local/bin/borgmatic --verbosity 1 --files' > /etc/cron.d/borgmatic_daily
    Post Install - Running backups and restoring backups from the host

    If you want to manually run a borg backup, you can do that with borgmatic --verbosity 1 --files . If you receive an error about there being a lock file, that is because the server is currently in the process of running a backup. Do not break the lock unless you know what you are doing here.

    If you want to restore a backup, I'd recommend looking at the documentation first to get familiar with the process: https://docs.borgbase.com/restore/borg/borgmatic . It is worth noting that restoring a backup does not reset the system back to a previous state. Specifically, restoring a backup will pull files from an archive into a directory of your choosing (by default, the directory you are currently in).

    The general steps are

    1. Create a directory to place the files in
      • What I will usually do is create a temporary directory at /tmp/borg_restore
    2. Get the archive that you will want to pull backup data from.
      • This can be seen with borgmatic rlist
      • Note down the name of the archive; it should be in a format like standard-2024-04-29T09:49:54
    3. Pull the files from the archive
      • borgmatic extract --archive standard-2024-04-29T09:49:54 --destination /tmp/borg_restore/
      • If you are looking at pulling a specific file append a --path /file_location to the command
        • e.g. borgmatic extract --archive standard-2024-04-29T09:49:54 --destination /tmp/borg_restore/ --path /home/important_file
      • If you just want the latest backup either way, you can use --archive latest instead of the full archive name

    If you are unsure of what archive contains the backup you want to restore or in general want to browse through older data, then you can mount that backup to the host with borg. The general steps should include

    1. Identify the name of the archive as we did previously with restoring a backup
      • e.g. borgmatic rlist
    2. Using flags similar to those for restoring a backup (here --mount-point instead of --destination), run
      • borgmatic mount --archive latest --mount-point /mnt

    And with this you should have the filesystem mounted and can browse through what is backed up in that archive.
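
    When you are done browsing, the archive can be unmounted with borgmatic's umount action (assuming the mount point used above):

      • borgmatic umount --mount-point /mnt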

     

7.5) If a Desktop

  • install the desktop environment
    • apt install ubuntu-desktop
  • disable the part where it forces a local user to be created
    • nano /etc/dconf/profile/gdm
      • create and add
        • user-db:user
          system-db:gdm
          file-db:/usr/share/gdm/greeter-dconf-defaults

    • nano /etc/gdm3/custom.conf
      • edit
        • [daemon]
          InitialSetupEnable=false
  • reboot the desktop with a screen attached to make sure the local user isn't forced to be created
  • Note: If the network does not connect after installing the Ubuntu desktop, try the following (see the snippet below):
    • Update /etc/NetworkManager/NetworkManager.conf so that managed= under [ifupdown] is true instead of false
    • systemctl restart NetworkManager
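  • For reference, the relevant section of /etc/NetworkManager/NetworkManager.conf should read as follows after the change:
    • [ifupdown]
      managed=true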

 

7.6) If NOT a Desktop

  • 7.6.1) set up Sensu

    • Right now this is a manual process with the instructions on https://tso.cc.gatech.edu/howtos/manually-installing-sensu-research-linux-systems
    • 7.6.2) Manual Sensu Installation Instructions

      Context

      This documentation is meant to 1-1 recreate the steps that simon does, but with some updated conf files. It’s reasonable that we will want to change this in the future, but this is mostly to document how the process is done now. Ideally this process will be handled by salt, but this will still be good reference material of what is happening as of May 2024.

      Install the libraries and packages
      Ubuntu 24 and below:
      • curl -s https://packagecloud.io/install/repositories/sensu/stable/script.deb.sh | sudo bash
      • apt install sensu-go-agent
      RHEL:
      • curl -s https://packagecloud.io/install/repositories/sensu/stable/script.rpm.sh | sudo bash
      • dnf install sensu-go-agent
      Ubuntu 24:

      Sadly, the developers at Sensu have neglected updates, so an official version for Ubuntu 24 does not exist. However, you can configure the Sensu repository for Ubuntu 22 and install it without issue. Commands taken from: https://discourse.sensu.io/t/official-install-not-working-for-ubuntu-24-04-lts/3246/5

      Update - This has now been fixed by the Sensu developers and no longer needs this manual step. However, we will keep it here as an archive in case we ever need it for the next OS upgrade.

      • echo "deb https://packagecloud.io/sensu/stable/ubuntu/ jammy main" >> /etc/apt/sources.list.d/sensu.list
      • sudo mkdir -p /etc/apt/keyrings && curl -fsSL https://packagecloud.io/sensu/stable/gpgkey | sudo gpg --dearmor -o /etc/apt/keyrings/sensu.gpg
      • echo "deb [signed-by=/etc/apt/keyrings/sensu.gpg] https://packagecloud.io/sensu/stable/ubuntu/ jammy main" >> /etc/apt/sources.list.d/sensu.list
      • sudo apt-get update -y
      • sudo apt-get install -y sensu-go-agent

      Continue the rest of the configuration as usual.

      Enable Sensu and copy over the config files

      Right now, these are default files that we are using for research systems. To change this, you should just have to go into the agent.yml file and change the subscription to match your specific team. Currently the files are located on shoebox, but hopefully this will be moved to salt soon.
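
      For reference, the subscription setting lives in /etc/sensu/agent.yml and looks roughly like this (the backend URL and subscription name here are placeholders, not our real values; use whatever is in the shoebox copy):

      • backend-url:
          - "wss://sensu-backend.example.gatech.edu:8081"
        subscriptions:
          - "research-linux"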

      • systemctl enable --now sensu-agent.service
      • mkdir /etc/sensu/plugins
      • wget shoebox.cc.gatech.edu/deployment-files/sensu_conf/agent.yml -P /etc/sensu/
      • wget shoebox.cc.gatech.edu/deployment-files/sensu_conf/check-localfs-usage -P /etc/sensu/plugins/
      • wget shoebox.cc.gatech.edu/deployment-files/sensu_conf/check-ssh -P /etc/sensu/plugins/
      • wget shoebox.cc.gatech.edu/deployment-files/sensu_conf/check-zfs-health -P /etc/sensu/plugins/
      • wget shoebox.cc.gatech.edu/deployment-files/sensu_conf/check-zfs-usage -P /etc/sensu/plugins/
      • chmod 755 /etc/sensu/plugins/check-ssh
      • chmod 755 /etc/sensu/plugins/check-localfs-usage
      • chmod 755 /etc/sensu/plugins/check-zfs-health
      • chmod 755 /etc/sensu/plugins/check-zfs-usage
      • systemctl is-enabled sensu-agent.service
      • systemctl restart sensu-agent.service
      • systemctl status sensu-agent.service

      By now, hopefully you see a webex notification that your system is online.

7.7) If a GPU Server

7.8) If a server using ZFS

  • Add ZFS and ZFS backups
    • manually install syncoid and sanoid
    • configure and set up backups using the guide at https://generaltso.cc.gatech.edu/guide/backing-zfs
    • 7.8.1) Manual Syncoid/Sanoid Installation

      Context

      Please note that this process only covers the installation and loading of syncoid/sanoid. This does not cover configuring and setting up backups. That is all covered in https://tso.cc.gatech.edu/howtos/backing-up-zfs

      Install the libraries and packages

      If Ubuntu 22

      • apt install libconfig-inifiles-perl libcapture-tiny-perl gzip lzop mbuffer pv

      If RHEL 9

      • dnf install perl-Config-IniFiles perl-Data-Dumper perl-Capture-Tiny lzop mbuffer mhash pv
      Load the applications and conf files directly

      Right now, these are the default files that we use for research systems; adjusting the snapshot and backup policy itself is covered in the ZFS backup guide linked above. Currently the files are located on shoebox, but hopefully this will be changed to salt soon. A quick verification sketch follows at the end of this section.

      If Ubuntu 22

      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/findoid-ubuntu/findoid -P /usr/local/sbin/ && chmod 755 /usr/local/sbin/findoid

      If RHEL 9

      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/findoid-ubuntu/findoid -P /usr/local/sbin/ && chmod 755 /usr/local/sbin/findoid

      The rest should all be the same

      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/sanoid -P /usr/local/sbin/ && chmod 755 /usr/local/sbin/sanoid
      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/syncoid -P /usr/local/sbin/ && chmod 755 /usr/local/sbin/syncoid
      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/sleepymutex -P /usr/local/sbin/ && chmod 755 /usr/local/sbin/sleepymutex
      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/sanoid_conf_files/sanoid.conf -P /etc/sanoid/ && chmod 660 /etc/sanoid/sanoid.conf
      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/sanoid_conf_files/sanoid.defaults.conf -P /etc/sanoid/ && chmod 660 /etc/sanoid/sanoid.defaults.conf
      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/sanoid_conf_files/sanoid.example.conf -P /etc/sanoid/ && chmod 660 /etc/sanoid/sanoid.example.conf
      Load and enable the services
      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/systemd_services/sanoid-prune.service -P /etc/systemd/system/ && chmod 644 /etc/systemd/system/sanoid-prune.service
      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/systemd_services/sanoid.service -P /etc/systemd/system/ && chmod 644 /etc/systemd/system/sanoid.service
      • wget shoebox.cc.gatech.edu/deployment-files/sanoid_files/systemd_services/sanoid.timer -P /etc/systemd/system/ && chmod 644 /etc/systemd/system/sanoid.timer
      • systemctl daemon-reload
      • systemctl enable sanoid-prune.service
      • systemctl enable sanoid.timer
      • systemctl start sanoid-prune.service
      • systemctl start sanoid.timer
      Load the public keys from cc-hereford and cc-braford
      • echo 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNwFWai1rKDyh7LcutkHjTKQmgCSJWbzr7DpITOkcSziEhpQQ1jekeg5anwrl+bRnB8vRRhIoxtVY/8/eZS06V8= root@cc-hereford.cc.gatech.edu' >> /root/.ssh/authorized_keys
      • echo 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP+bAe4+qkAcUyKhHPyzl8OLWRFBJAO8cw3f7N6ZoiIPdUY382quEJYF0CDhyZG4APmtwNOETdCHmkAkG4TAjuI= root@cc-braford.cc.gatech.edu' >> /root/.ssh/authorized_keys
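
      Once the services and keys above are in place, a quick sanity check (assuming the sanoid timer has had a chance to fire at least once) is:
      • systemctl list-timers | grep sanoid
      • zfs list -t snapshot | head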

7.9) If using Nethomes

Follow the guide at https://generaltso.cc.gatech.edu/guide/configure-system-nethomes

7.10) Configure User Access

  • Add final user groups to server

    • add user logins to /etc/sssd/conf.d/01-HOSTNAME
    • add users with root access to /etc/sudoers.d/50-HOSTNAME (an example entry is shown below)
      • make sure to run visudo -f /etc/sudoers.d/50-HOSTNAME when editing the sudoers file!!!!
    • restart sssd and sshd with systemctl restart sssd and systemctl restart sshd
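    • For reference, a typical entry in /etc/sudoers.d/50-HOSTNAME granting full root access looks like the following (the user and group names are placeholders; use the real ones for this server):
      • someuser ALL=(ALL) ALL
        %some-unix-group ALL=(ALL) ALL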

     

Back to top

8) LAST STEP

Always make sure that all of your packages are up to date by running apt update and apt upgrade (or dnf upgrade on RHEL). If apt is struggling, try running aptitude upgrade to help resolve any conflicting dependencies.

And finally, make sure that you can reboot the server and that it comes up cleanly before calling it done.

Back to top

9) Final Handoff Steps

Back to top

10) Troubleshooting Steps

Add troubleshooting steps and solutions here as you come across them

 

ToDo:

Add link to Garrett's video

Back to top