This is the New Endpoint Ubuntu Deployment How-To. If you are trying to manually install Salt on an existing machine, go here:
Salt Tips, Tricks, and Troubleshooting
PXE Boot
Use Cobbler to bootstrap the machine. You can select metadata options to set the disk and storage configurations. Please refer to the Cobbler Deployment Guide for more detailed instructions.
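If you end up driving Cobbler from the command line instead of the web UI, the general shape is below. All of the names, interfaces, and metadata keys here are placeholders; the Cobbler Deployment Guide is the authoritative reference.
cobbler system add --name=newhost.cc.gatech.edu --profile=ubuntu-22.04-x86_64 --interface=default --mac=aa:bb:cc:dd:ee:ff --ip-address=x.x.x.x --hostname=newhost.cc.gatech.edu
cobbler system edit --name=newhost.cc.gatech.edu --autoinstall-meta="disk_layout=lvm"   # --ksmeta on older Cobbler versions; metadata keys are site-specific
cobbler sync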
First Time Login
If the Salt master does not auto-accept the minion key and run the sssd state, you will need to log in to the machine manually. The default admin user created during the install process is un: cocinstance, pw: the normal admin password. Make sure to elevate privileges using sudo -i so that you are logged in as root, and run the Salt states from this user.
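A minimal sketch of that first login (the hostname is a placeholder):
ssh cocinstance@newhost.cc.gatech.edu
sudo -i                  # elevate to root
salt-call state.apply    # run the Salt states as root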
Salt Manual Installation
If the salt-call command does not return anything, then Salt probably wasn't installed. There are more details at https://tso.cc.gatech.edu/howtos/salt-tips-tricks-and-troubleshooting, but you can run the appropriate command below to get started.
For Ubuntu 18.04, 20.04, and 22.04:
curl "https://gominionconfig.salt.gatech.edu/api/install?unit=coc&os=ubuntu" | bash -e
For RHEL 8 or RHEL 9:
curl "https://gominionconfig.salt.gatech.edu/api/install?unit=coc&os=redhat" | bash -e
Salt Setup
Provided the minion key was properly pre-seeded into Alcali, Cobbler should have installed Salt and run the initial state.apply execution. You can harmlessly verify this by running:
sudo salt-call state.apply
If the key was not accepted, go to Alcali and accept the key manually, then rerun the above command.
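If you happen to have shell access on the Salt master, the key can also be accepted from the command line; the minion ID below is a placeholder and the Alcali UI remains the normal route:
sudo salt-key -l unaccepted               # list pending minion keys
sudo salt-key -a newhost.cc.gatech.edu    # accept this minion's key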
Apply the Template States
NOTE: The saltenv and pillarenv arguments can be used to change the git branch so that you can access Salt formulas that are not yet located in the coc-prd environment. We don't need to do this for most boxes anymore, but if you need to, the syntax looks like this:
sudo salt-call state.apply saltenv=coc-dev pillarenv=coc-dev
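For example, to test just a single formula against the dev branches (the formula name here is only illustrative):
sudo salt-call state.apply sssd saltenv=coc-dev pillarenv=coc-dev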
The first thing you should do is sync the custom grains to the minion; this ensures that we have detailed grain information for our custom formulas on the first run. You will see a list of grain files that are synced to the minion for evaluation. The grains.items command will evaluate all of the grains on the system.
sudo salt-call saltutil.sync_grains
sudo salt-call grains.items
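If you only need one grain rather than the full dump, grains.get keeps the output short (os_family is just an example grain):
sudo salt-call grains.get os_family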
OPTIONAL: Check that the pillar data looks OK:
sudo salt-call pillar.items
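Likewise, pillar.get can pull a single pillar key if the full dump is too noisy (the sssd key is used as an example here):
sudo salt-call pillar.get sssd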
Run through this list of states to get everything set up on the box. The first line is the base load; the following lines are templates.
sudo salt-call state.apply
Cortex may not succeed when running this for the first time. This is because the Cortex binary is not in the repository by default, and we'll need to install it manually.
For Ubuntu 20.04:
wget https://nexus.az.gatech.edu/repository/ai-apt-focal/pool/c/cortex-agent/cortex-agent_7.7.2.66464_amd64.deb
dpkg -i cortex-agent_7.7.2.66464_amd64.deb
rm cortex-agent_7.7.2.66464_amd64.deb
For Ubuntu 22.04:
wget https://nexus.az.gatech.edu/repository/ai-apt-jammy/pool/c/cortex-agent/cortex-agent_7.9.0.82606_amd64.deb
dpkg -i cortex-agent_7.9.0.82606_amd64.deb
rm cortex-agent_7.9.0.82606_amd64.deb
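Once the package is installed you can confirm it and re-run the state that failed the first time:
dpkg -s cortex-agent        # verify the agent package is now installed
sudo salt-call state.apply  # re-run the highstate so the Cortex state can finish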
sudo salt-call state.apply util.motd_template
sudo salt-call state.apply util.sshd_template
sudo salt-call state.apply root_authorized_keys
Now check that you can log in to the box as root from carora. NOTE: nevah will not have the correct ciphers available to log into an Ubuntu 22.04 box.
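From carora, that check looks something like this (hostname is a placeholder):
ssh root@newhost.cc.gatech.edu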
sudo salt-call state.apply fail2ban
The next one (coc-base.pam) is needed for sssd/ldap users to get home directories properly created upon login. DO NOT RUN THE PAM FORMULA IF YOU DO NOT WANT AUTOMATICALLY CREATED HOME DIRECTORIES!!!
sudo salt-call state.apply coc-base.pam
sudo salt-call state.apply coc-base.sudo
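If you want to spot-check what the pam formula did, pam_mkhomedir is the usual mechanism for auto-created home directories on Ubuntu; the exact file and options depend on the formula, so treat this as an assumption:
grep pam_mkhomedir /etc/pam.d/common-session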
Next you should log in as yourself from kuri or carora. Ensure that you can use sudo to become root on the box.
If you are looking at this for an older box, you may need to apply the sssd state. This should already be applied to an Ubuntu 22 box.
sudo salt-call state.apply sssd
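To confirm sssd is healthy and directory lookups work, a couple of standard checks (the username is a placeholder):
systemctl status sssd            # the sssd service should be active
getent passwd some-gt-username   # an LDAP user should resolve once sssd is working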
Advanced Pillar Manipulation
This is handy for creating a custom host-specific override that includes extra allow groups and home directory overrides.
(Caveat Emptor: I have not tested comment rendering in the sssd.conf file via this mechanism)
You can test the pillar rendering as you build your custom pillar injections by running the pillar.items command in the following format:
sudo salt-call pillar.items pillar='{"sssd": "test"}'
You can then run the state.apply sssd command to apply just the sssd formula, or the plain state.apply command to do all states at once, sssd included. This wild example populates the desired fields for kia.cc.gatech.edu in one shot, as long as the 01_kia_sssd.conf file does not already exist.
sudo salt-call state.apply sssd pillar='{"sssd": {"install": true, "lookup": {"conf_local": {"sssd": {"services": ["nss", "pam", "autofs", "sudo"]},"domain/gatech.edu": {"override_homedir":["/nethome/%u"],"simple_allow_groups": ["coc-tso", "coc-tso-admin", "coc-hparch-access", "coc-hparch-vortex"]}}}}}'
The following text can be used as a starting point to add a few more GRS roles without needing to remove irrelevant roles:
pillar='{"sssd": {"install": true, "lookup": {"conf_local": {"sssd": {"services": ["nss", "pam", "autofs", "sudo"]},"domain/gatech.edu": {"override_homedir":["/nethome/%u"],"simple_allow_groups": ["coc-tso", "coc-tso-admin", "coc-edit-this-role"]}}}}}'
Definitely double check that the resulting file has the expected contents before moving on.
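On a typical sssd setup the rendered drop-in lands under /etc/sssd/conf.d/; the exact path depends on the formula, so treat it as an assumption:
sudo cat /etc/sssd/conf.d/01_kia_sssd.conf
sudo sssctl config-check   # optional: validates the merged sssd configuration if sssd-tools is installed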