including the lab from 2018

langdon 2020-02-06 14:02:06 -05:00
commit e47c8ce553
59 changed files with 2847 additions and 0 deletions

README.md Normal file

@ -0,0 +1,9 @@
# Summit Lab 2018: Containerizing Applications
1. **[LAB 0](labs/lab0/chapter0.md)** Introduction / Setup
1. **[LAB 1](labs/lab1/chapter1.md)** Introducing podman
1. **[LAB 2](labs/lab2/chapter2.md)** Analyzing a monolithic application
1. **[LAB 3](labs/lab3/chapter3.md)** Deconstructing an application into microservices
1. **[LAB 4](labs/lab4/chapter4.md)** Orchestrated deployment of a decomposed application
1. **[LAB 5](labs/lab5/chapter5.md)** OpenShift templates and web console
1. **[BONUS - LAB 6](labs/lab6/chapter6.md)** OpenShift Ansible Broker

labs/lab0/chapter0.md Normal file

@ -0,0 +1,103 @@
## Introduction
In this lab, we are going to use [`oc cluster up`](https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md). `oc cluster up` leverages the local docker daemon to quickly stand up a local OpenShift Container Platform for our evaluation. The key result is a reliable, reproducible OpenShift environment to iterate on.
Expected completion: 5-10 minutes
## Find your AWS Instance
This lab is designed to accommodate many students. As a result, each student will be given a VM running on AWS. The naming convention for the lab VMs is:
**student-\<number\>**.labs.sysdeseng.com
You will be assigned a number by the instructor.
Retrieve the key from the [instructor host](https://instructor.labs.sysdeseng.com/summit/L1108.pem) so that you can _SSH_ into the instances; the directory is password protected, and the instructor will provide the password. Download the _L1108.pem_ file to your local machine and change the permissions of the file to 600.
```bash
$ PASSWD=<password from instructor>
$ wget --no-check-certificate --user student --password ${PASSWD} https://instructor.labs.sysdeseng.com/summit/L1108.pem
$ chmod 600 L1108.pem
```
## Connecting to your AWS Instance
This lab should be performed on **YOUR ASSIGNED AWS INSTANCE** as `ec2-user` unless otherwise instructed.
**_NOTE_**: Please be respectful and only connect to your assigned instance. Every instance for this lab uses the same public key so you could accidentally (or with malicious intent) connect to the wrong system. If you have any issues please inform an instructor.
```bash
$ ssh -i L1108.pem ec2-user@student-<number>.labs.sysdeseng.com
```
**NOTE**: Windows users will need an SSH client such as [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) to connect with the private key.
Once installed, use the following instructions to SSH to your VM instance: [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html).
TIP: PuTTY uses a different format for its keys, so use the .ppk key available from the [instructor host](https://instructor.labs.sysdeseng.com/L1108.ppk).
## Getting Set Up
For the sake of time, some of the required setup has already been taken care of on your AWS VM. For future reference though, the easiest way to get started is to head over to the OpenShift Origin repo on github and follow the "[Getting Started](https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md)" instructions. The instructions cover getting started on Windows, MacOS, and Linux.
Since some of these labs have long-running processes, it is recommended to use something like `tmux` or `screen` so that you can reconnect if you lose your connection at some point:
```bash
$ sudo yum -y install screen
$ screen
```
In case you get disconnected use `screen -x` or `tmux attach` to reattach once you reestablish ssh connectivity. If you are unfamiliar with screen, check out this [quick tutorial](https://www.mattcutts.com/blog/a-quick-tutorial-on-screen/). For tmux here is a [quick tutorial](https://fedoramagazine.org/use-tmux-more-powerful-terminal/).
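If you prefer named sessions, a quick optional sketch (tmux may not be installed on the VM by default; the `lab` session name is just an example):
```bash
$ screen -S lab        # start a named screen session
$ screen -r lab        # reattach to it after a disconnect
$ tmux new -s lab      # or, with tmux: start a named session
$ tmux attach -t lab   # and reattach to it later
```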
All that's left to do is run OpenShift by executing the `start-oc.sh` script in your home directory. First, let's take a look at what this script is doing: it grabs AWS instance metadata so that it can configure OpenShift to start up properly on AWS:
```bash
$ cat ~/start-oc.sh
```
Now, let's start our local, containerized OpenShift environment:
```bash
$ ~/start-oc.sh
```
The resulting output should look something like this:
```bash
Using nsenter mounter for OpenShift volumes
Using 127.0.0.1 as the server IP
Starting OpenShift using registry.access.redhat.com/openshift3/ose:v3.9.14 ...
OpenShift server started.
The server is accessible via web console at:
https://<public hostname>:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
```
You should get a lot of feedback about the launch of OpenShift. As long as you don't get any errors you are in good shape.
OK, so now that OpenShift is available, let's ask for a cluster status & take a look at our running containers:
```bash
$ oc version
$ oc cluster status
```
As noted before, `oc cluster up` leverages docker for running
OpenShift. You can see that by checking out the containers and
images that are managed by docker:
```bash
$ sudo docker ps
$ sudo docker images
```
We can also check out the OpenShift console. Open a browser and navigate to `https://<public-hostname>:8443`. Be sure to use http*s*, otherwise you will get a weird web page. Once it loads (and you bypass the certificate errors), you can log in to the console using the default developer username (use any password).
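If you just want a quick command-line sanity check that the console/API endpoint is answering before you open a browser, something like this should work on OpenShift 3.x (`-k` skips the self-signed certificate check; this is an optional extra, not a lab step):
```bash
$ curl -k https://localhost:8443/healthz
ok
```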
## Lab Materials
Clone the lab repository from github:
```bash
$ cd ~/
$ git clone https://github.com/dustymabe/summit-2018-container-lab
```
## OpenShift Container Platform
What is OpenShift? OpenShift, which you may remember as a "[PaaS](https://en.wikipedia.org/wiki/Platform_as_a_service)" to build applications on, has evolved into a complete container platform based on Kubernetes. If you remember the "[DIY Cartridges](https://github.com/openshift/origin-server/blob/master/documentation/oo_cartridge_guide.adoc#diy)" from older versions of OpenShift, OpenShift v3 essentially expands that functionality to full containers. With OpenShift, you can build from a platform, build from scratch, or do anything else you can do in a container, and still get the complete lifecycle automation you loved in the older versions.
You are now ready to move on to the [next lab](../lab1/chapter1.md).

labs/lab0/pageant.png Normal file (binary image, 31 KiB, not shown)

labs/lab0/windows.md Normal file

@ -0,0 +1,40 @@
## Using PuTTY
### Installation
It is easiest to use the MSI installer for PuTTY. It includes pageant, puttygen, and putty, all of which are required.
- [x86 MSI](https://the.earth.li/~sgtatham/putty/latest/w32/putty-0.70-installer.msi)
- [x64 MSI](https://the.earth.li/~sgtatham/putty/latest/w64/putty-64bit-0.70-installer.msi)
### Import PEM
Open `PuTTYgen` - Start -> PuTTY -> PuTTYgen
Using the `Conversions` menu select `Import key`. Select the PEM file and click `Open`.
Now click `Save private key` button and save as `awskey.ppk`.
### Add Key to Agent
Open `pageant` - Start -> PuTTY -> pageant
Right click the pageant icon and click `Add key`.
![pageant](pageant.png)
Find and select `awskey.ppk` and click the `Open` button.
### Configure PuTTY
Now the simple part.
Open `PuTTY` - Start -> PuTTY -> PuTTY
In the `Host Name` input field use the provided hostname for an instance in AWS.
In the `Saved Session` input field use `aws` as the name of the session. Click `Save`. Then click `Open`.
If you need additional information please take a look at the following link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html

labs/lab1/Dockerfile Normal file

@ -0,0 +1,14 @@
FROM registry.access.redhat.com/rhel7
MAINTAINER Student <student@example.com>
RUN yum -y install httpd --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN echo "Apache" >> /var/www/html/index.html
RUN echo 'PS1="[apache]# "' > /etc/profile.d/ps1.sh
EXPOSE 80
# Simple startup script to avoid some issues observed with container restart
ADD run-apache.sh /run-apache.sh
RUN chmod -v +x /run-apache.sh
CMD [ "/run-apache.sh" ]

labs/lab1/chapter1.md Normal file

@ -0,0 +1,192 @@
# LAB 1: podman and buildah
In this lab we will explore the podman environment. If you are familiar with podman this may function as a brief refresher. If you are new to podman this will serve as an introduction to podman basics. Don't worry, we will progress rapidly. To get through this lab, we are going to focus on the environment itself as well as walk through some exercises with a couple of podman images/containers to tell a complete story and point out some things that you might have to consider when containerizing your application.
What is [podman](https://github.com/projectatomic/libpod), you may ask? Well, the README explains it in detail but, in short, it is a tool for manipulating OCI compliant containers created by docker or other tools (such as buildah). The docker utility provides build, run, and push functions on docker containers via a docker daemon. We are leveraging three daemonless tools, which support OCI compliant containers, that handle each of those functions separately: [buildah](https://github.com/projectatomic/buildah) for building, [skopeo](https://github.com/projectatomic/skopeo) for pushing to and pulling from registries, and podman for running and inspecting containers. podman will transparently use the buildah and skopeo technologies for the user to build and push/pull from registries, all without the overhead of a separate daemon running all the time.
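To make the division of labor a bit more concrete, here are a few purely illustrative commands (you do not need to run them for this lab, and output will vary):
```bash
$ skopeo inspect docker://registry.access.redhat.com/rhel7  # query image metadata in a remote registry without pulling it
$ sudo buildah from registry.access.redhat.com/rhel7        # create a working container to build an image from
$ sudo podman ps -a                                         # list containers - note that no daemon is involved
```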
This lab should be performed on **YOUR ASSIGNED AWS VM** as `ec2-user` unless otherwise instructed.
Expected completion: 15-20 minutes
Agenda:
* Review podman, buildah and docker
* Review podman and buildah help
* Explore a Dockerfile
* Build an image
* Launch a container
* Inspect a container
* Build image registry
Perform the following commands as `ec2-user` unless instructed otherwise.
## podman and docker
Both podman and docker share configuration files, so if you are using docker in your environment these will be useful as well. These files tell podman how storage and networking should be set up and configured. In the `/etc/containers/registries.conf` file, check out the registry settings. You may find it interesting that you can *add a registry* or *block a registry* by modifying `/etc/containers/registries.conf`. Think about the different use cases for that.
```bash
$ cat /etc/containers/registries.conf #but don't add things here
$ cat /etc/containers/registries.d/default.yaml #instead, duplicate this
$ cat /etc/containers/storage.conf
$ cat /etc/containers/policy.json
```
Unlike docker, podman doesn't need an always running daemon. There are no podman processes running on the system:
```bash
$ pgrep podman | wc -l
```
However, the docker daemon is running. You can see that and also check
the status of the docker daemon:
```bash
$ pgrep docker | wc -l
$ systemctl status docker
```
## podman and buildah Help
Now that we have seen that podman doesn't rely on a running daemon, we should make sure we know how to get help when we need it. Run the following commands to get familiar with what is included in the podman package as well as what is provided in the man pages. Spend some time exploring here.
Check out the executables provided:
```bash
$ rpm -ql podman | grep bin
$ rpm -ql buildah | grep bin
```
Check out the configuration file(s) that are provided:
```bash
$ rpm -qc podman
$ rpm -qc buildah
```
Check out the documentation that is provided:
```bash
$ rpm -qd podman
$ rpm -qd buildah
```
Run `podman {help,info}` to check out the storage configuration and how to find more information.
```bash
$ podman --help
$ podman run --help
$ sudo podman info
```
Run `buildah help` to check out general options and get detailed information about specific options.
```bash
$ buildah --help
$ buildah copy --help
```
## Let's explore a Dockerfile
Here we are just going to explore a simple Dockerfile. The purpose for this is to have a look at some of the basic commands that are used to construct a podman or docker image. For this lab, we will explore a basic Apache httpd Dockerfile and then confirm functionality.
Change to `~/summit-2018-container-lab/labs/lab1` and `cat` out the Dockerfile
```bash
$ cd ~/summit-2018-container-lab/labs/lab1
$ cat Dockerfile
```
```dockerfile
FROM registry.access.redhat.com/rhel7
MAINTAINER Student <student@example.com>
RUN yum -y install httpd --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN echo "Apache" >> /var/www/html/index.html
RUN echo 'PS1="[apache]# "' > /etc/profile.d/ps1.sh
EXPOSE 80
# Simple startup script to avoid some issues observed with container restart
ADD run-apache.sh /run-apache.sh
RUN chmod -v +x /run-apache.sh
CMD [ "/run-apache.sh" ]
```
Here you can see in the `FROM` command that we are pulling a RHEL 7 base image that we are going to build on. Containers that are being built inherit the subscriptions of the host they are running on, so you only need to register the host system.
After gaining access to a repository, we install `httpd`. Finally, we populate the index.html file, `EXPOSE` port 80, which allows traffic into the container, and then set the container to start with a `CMD` of `run-apache.sh`.
## Build an Image
Now that we have taken a look at the Dockerfile, let's build this image. We could use the exact same command, swapping podman for docker, to build with docker.
```bash
$ sudo podman build -t redhat/apache .
$ sudo podman images
```
Podman is not actually building this image itself; technically, it wraps buildah to do so. If you wanted to use buildah directly, you could achieve the same thing as `sudo podman build -t redhat/apache .` by running `sudo buildah build-using-dockerfile -t redhat/apache .`. You can even see that `buildah images` reports the same thing as `podman images`.
```bash
$ sudo buildah images
```
## Run the Container
Next, let's run the image and make sure it started.
```bash
$ sudo podman run -dt -p 8080:80 --name apache redhat/apache
$ sudo podman ps
```
Here we are using a few switches to configure the running container the way we want it. We pass `-dt` to run in detached mode with a pseudo-TTY. Next, we map a port from the host to the container. We are being explicit here: we tell podman to map port 8080 on the host to port 80 in the container. We could instead have let podman handle the host-side port mapping dynamically by passing `-p 80`, in which case podman would have assigned a random host port to the container's port 80. Finally, we pass in the name of the image that we built earlier. If you wish, you can swap podman for docker and the exact same commands will work.
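If you are curious about the dynamic form, a quick optional sketch (using a hypothetical `apache2` container name so it doesn't collide with the container you just started); `podman port` then reveals which host port was picked:
```bash
$ sudo podman run -dt -p 80 --name apache2 redhat/apache   # let podman choose the host port
$ sudo podman port apache2 80                              # show which host port was assigned
$ sudo podman rm -f apache2                                # clean up the extra container
```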
Okay, let's make sure we can access the web server.
```bash
$ curl http://localhost:8080
Apache
```
Now that we have built an image, launched a container and confirmed that it is running, let's do some further inspection of the container. We should take a look at the container IP address. Let's use `podman inspect` to do that.
## Time to Inspect
```bash
$ sudo podman inspect apache
```
We can see that this gives us quite a bit of information in JSON format. We can scroll around and find the IP address; it will be towards the bottom.
Let's be more explicit with our `podman inspect`
```bash
$ sudo podman inspect --format '{{ .NetworkSettings.IPAddress }}' apache
```
You can see the IP address that was assigned to the container.
We can apply the same kind of filter to any value in the JSON output. Try a few different ones.
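For instance, a couple of other filters to try (the exact field names can vary a little between podman versions, so adjust if one comes back empty):
```bash
$ sudo podman inspect --format '{{ .State.Status }}' apache   # running, exited, ...
$ sudo podman inspect --format '{{ .Config.Image }}' apache   # image the container was created from
```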
Now let's look inside the container and see what that environment looks like. Execute commands in the namespace with `podman exec <container-name OR container-id> <cmd>`
```bash
$ sudo podman exec -t apache bash
```
Now run some commands and explore the environment. Remember, we are in a slimmed down container at this point - this is by design. You may find surprising restrictions and that not every application you expect is available.
```bash
[apache]# ps aux
[apache]# ls /bin
[apache]# cat /etc/hosts
```
Remember, you can always install what you need while you are debugging something. However, it won't be there on the next start of the container unless you add it to your Dockerfile. For example:
```bash
[apache]# less /run-apache.sh
bash: less: command not found
[apache]# yum install -y less --disablerepo "*" --enablerepo rhel-7-server-rpms
[apache]# less /run-apache.sh
...
```
Exit the container namespace with `CTRL+d` or `exit`.
Whew, so we do have some options. Now, remember that this lab is all about containerizing your existing apps. You will need some of the tools listed above to go through the process of containerizing your apps. Troubleshooting problems when you are in a container is going to be something that you get very familiar with.
Before we move on to the next section let's clean up the apache container so we don't have it hanging around.
```bash
$ sudo podman rm -f apache
```
In the [next lab](../lab2/chapter2.md) we will be analyzing a monolithic application.


@ -0,0 +1,8 @@
---
- hosts: localhost
  become_method: sudo
  tasks:
  - name: launch openshift
    command: /bin/bash /home/ec2-user/start-oc.sh
    become: false
    ignore_errors: yes

labs/lab1/jump-to-lab1.sh Executable file

@ -0,0 +1,10 @@
#!/bin/bash
read -p "This script is to jump over the previous labs, is that really what you want to do? [y/N] " -n 1 -r
echo # (optional) move to a new line
if [[ $REPLY =~ ^[Yy]$ ]]
then
sudo yum install -y ansible
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ansible-playbook -i "localhost," -c local $DIR/jump-to-here-playbook.yml
fi

labs/lab1/run-apache.sh Normal file

@ -0,0 +1,8 @@
#!/bin/bash
# Make sure we're not confused by old, incompletely-shutdown httpd
# context after restarting the container. httpd won't start correctly
# if it thinks it is already running.
rm -rf /run/httpd/* /tmp/httpd*
exec /usr/sbin/apachectl -D FOREGROUND


@ -0,0 +1,31 @@
FROM registry.access.redhat.com/rhel7
MAINTAINER Student <student@example.com>
# ADD set up scripts
ADD scripts /scripts
RUN chmod 755 /scripts/*
# Common Deps
RUN yum -y install openssl --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install psmisc --disablerepo "*" --enablerepo rhel-7-server-rpms
# Deps for wordpress
RUN yum -y install httpd --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install php --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install php-mysql --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install php-gd --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install tar --disablerepo "*" --enablerepo rhel-7-server-rpms
# Deps for mariadb
RUN yum -y install mariadb-server --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install net-tools --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install hostname --disablerepo "*" --enablerepo rhel-7-server-rpms
# Add in wordpress sources
COPY latest.tar.gz /latest.tar.gz
RUN tar xvzf /latest.tar.gz -C /var/www/html --strip-components=1
RUN rm /latest.tar.gz
RUN chown -R apache:apache /var/www/
EXPOSE 80
CMD ["/bin/bash", "/scripts/start.sh"]


@ -0,0 +1,61 @@
FROM registry.access.redhat.com/rhel7
>>> No tags on image specification - updates could break things
MAINTAINER Student <student@example.com>
# ADD set up scripts
ADD scripts /scripts
>>> If a local script changes then we have to rebuild from scratch
RUN chmod 755 /scripts/*
# Disable all but the necessary repo(s)
RUN yum-config-manager --disable \* &> /dev/null
RUN yum-config-manager --enable rhel-7-server-rpms
>>> The yum-config-manager method to managing repos can be time consuming during a "sudo podman build"...
>>> whereas, enabling the necessary repo(s) during a "yum install" is much faster.
# Common Deps
RUN yum -y install openssl
RUN yum -y install psmisc
>>> Running a yum clean all in the same statement would clear the yum
>>> cache in our intermediate cached image layer
# Deps for wordpress
RUN yum -y install httpd
RUN yum -y install php
RUN yum -y install php-mysql
RUN yum -y install php-gd
RUN yum -y install tar
# Deps for mariadb
RUN yum -y install mariadb-server
RUN yum -y install net-tools
RUN yum -y install hostname
>>> Can group all of the above into one yum statement to minimize
>>> intermediate layers. However, during development, it can be nice
>>> to keep them separated so that your "build/run/debug" cycle can
>>> take advantage of layers and caching. Just be sure to clean it up
>>> before you publish. You can check out the history of the image you
>>> have created by running *sudo podman history bigimg*.
# Add in wordpress sources
COPY latest.tar.gz /latest.tar.gz
>>> Consider using a specific version of Wordpress to control the installed version
RUN tar xvzf /latest.tar.gz -C /var/www/html --strip-components=1
RUN rm /latest.tar.gz
RUN chown -R apache:apache /var/www/
>>> Can group above statements into one multiline statement to minimize
>>> space used by intermediate layers. (i.e. latest.tar.gz would not be
>>> stored in any image).
EXPOSE 80
CMD ["/bin/bash", "/scripts/start.sh"]

Binary file not shown.


@ -0,0 +1,46 @@
#!/bin/bash
set -e
__mysql_config() {
echo "Running the mysql_config function."
mysql_install_db
chown -R mysql:mysql /var/lib/mysql
/usr/bin/mysqld_safe &
sleep 10
}
__setup_mysql() {
printf "Running the start_mysql function.\n"
ROOT_PASS="$(openssl rand -base64 12)"
USER="${DBUSER-dbuser}"
PASS="${DBPASS-$(openssl rand -base64 12)}"
NAME="${DBNAME-db}"
printf "root password=%s\n" "$ROOT_PASS"
printf "NAME=%s\n" "$DBNAME"
printf "USER=%s\n" "$DBUSER"
printf "PASS=%s\n" "$DBPASS"
mysqladmin -u root password "$ROOT_PASS"
mysql -uroot -p"$ROOT_PASS" <<-EOF
DELETE FROM mysql.user WHERE user = '$DBUSER';
FLUSH PRIVILEGES;
CREATE USER '$DBUSER'@'localhost' IDENTIFIED BY '$DBPASS';
GRANT ALL PRIVILEGES ON *.* TO '$DBUSER'@'localhost' WITH GRANT OPTION;
CREATE USER '$DBUSER'@'%' IDENTIFIED BY '$DBPASS';
GRANT ALL PRIVILEGES ON *.* TO '$DBUSER'@'%' WITH GRANT OPTION;
CREATE DATABASE $DBNAME;
EOF
killall mysqld
sleep 10
}
# Call all functions - only call if not already configured
DB_FILES=$(echo /var/lib/mysql/*)
DB_FILES="${DB_FILES#/var/lib/mysql/\*}"
DB_FILES="${DB_FILES#/var/lib/mysql/lost+found}"
if [ -z "$DB_FILES" ]; then
printf "Initializing empty /var/lib/mysql...\n"
__mysql_config
__setup_mysql
fi


@ -0,0 +1,51 @@
#!/bin/bash
set -e
__handle_passwords() {
if [ -z "$DBNAME" ]; then
printf "No DBNAME variable.\n"
exit 1
fi
if [ -z "$DBUSER" ]; then
printf "No DBUSER variable.\n"
exit 1
fi
# Here we generate random passwords (thank you pwgen!) for random keys in wp-config.php
printf "Creating wp-config.php...\n"
# There used to be a huge ugly line of sed and cat and pipe and stuff below,
# but thanks to @djfiander's thing at https://gist.github.com/djfiander/6141138
# there isn't now.
sed -e "s/database_name_here/$DBNAME/
s/username_here/$DBUSER/
s/password_here/$DBPASS/" /var/www/html/wp-config-sample.php > /var/www/html/wp-config.php
#
# Update keys/salts in wp-config for security
RE='put your unique phrase here'
for i in {1..8}; do
KEY=$(openssl rand -base64 40)
sed -i "0,/$RE/s|$RE|$KEY|" /var/www/html/wp-config.php
done
}
__handle_db_host() {
# Update wp-config.php to point to our linked container's address.
DB_PORT='tcp://127.0.0.1:3306' # Using localhost for this one
sed -i -e "s/^\(define('DB_HOST', '\).*\(');.*\)/\1${DB_PORT#tcp://}\2/" \
/var/www/html/wp-config.php
}
__httpd_perms() {
chown apache:apache /var/www/html/wp-config.php
}
__check() {
if [ ! -f /var/www/html/wp-config.php ]; then
__handle_passwords
__httpd_perms
fi
__handle_db_host
}
# Call all functions
__check


@ -0,0 +1,14 @@
#!/bin/bash
set -eux
MYDIR=$(dirname $0)
# Perform configurations
$MYDIR/config_mariadb.sh
$MYDIR/config_wordpress.sh
# Start mariadb in the background
/usr/bin/mysqld_safe &
# Start apache in the foreground
/usr/sbin/httpd -D FOREGROUND

labs/lab2/chapter2.md Normal file

@ -0,0 +1,163 @@
# LAB 2: Analyzing a Monolithic Application
Typically, it is best to break down services into the simplest components and then containerize each of them independently. However, when initially migrating an application it is not always easy to break it up into little pieces, so you can start with a big container and work towards breaking it into smaller pieces.
In this lab we will create an all-in-one container image comprised of multiple services. We will also observe several bad practices when composing Dockerfiles and explore how to avoid those mistakes. In lab 3 we will decompose the application into more manageable pieces.
This lab should be performed on **YOUR ASSIGNED AWS VM** as `ec2-user` unless otherwise instructed.
Expected completion: 20-25 minutes
Agenda:
* Overview of monolithic application
* Build podman image
* Run container based on podman image
* Exploring the running container
* Connecting to the application
* Review Dockerfile practices
## Monolithic Application Overview
The monolithic application we are going to use in this lab is a simple WordPress application. Rather than decompose the application into multiple parts, we have elected to put the database and the WordPress application into the same container. Our container image will have:
* mariadb and all dependencies
* wordpress and all dependencies
To perform some generic configuration of mariadb and wordpress there are startup configuration scripts that are executed each time a container is started from the image. These scripts configure the services and then start them in the running container.
## Building the podman Image
View the `Dockerfile` provided for `bigapp` which is not written with best practices in mind:
```bash
$ cd ~/summit-2018-container-lab/labs/lab2/bigapp/
$ cat Dockerfile
```
Build the podman image for this by executing the following command. This can take a while to build. While you wait you may want to peek at the [Review Dockerfile Practices](#review-dockerfile-practices) section at the end of this lab chapter.
```bash
$ sudo podman build -t bigimg .
```
## Run Container Based on podman Image
To run the podman container based on the image we just built use the following command:
```bash
$ sudo podman run -P --name=bigapp -e DBUSER=user -e DBPASS=mypassword -e DBNAME=mydb -d bigimg
$ sudo podman ps
```
Take a look at some of the arguments we are passing to podman. With `-P` we are telling podman to publish all ports the container exposes (i.e. from the Dockerfile) to randomly assigned ports on the host. In this case port 80 will get assigned to a random host port. Next we are providing a ```name``` of ```bigapp```. After that we are setting some environment variables that will be passed into the container and consumed by the configuration scripts to set up the container. Finally, we pass it the name of the image that we built in the prior step.
## Exploring the Running Container
Now that the container is running, let's explore it to see what's going on inside. First off, the processes have been started and any output that goes to stdout will show up on the console of the container. You can run `podman logs` to see that output. To follow
or "tail" the logs, use the `-f` option.
**__NOTE:__** You are able to use the **name** of the container rather than the container id for most `podman` (or `docker`) commands.
```bash
$ sudo podman logs -f bigapp
```
**__NOTE:__** When you are finished inspecting the log, just CTRL-C out.
If you need to inspect more than just the stdout/stderr of the container, then you can enter the container's namespace to inspect things more closely. The easiest way to do this is to use `podman exec`. Try it out:
```bash
$ sudo podman exec -t bigapp /bin/bash
[CONTAINER_NAMESPACE]# pstree
[CONTAINER_NAMESPACE]# cat /var/www/html/wp-config.php | grep '=='
[CONTAINER_NAMESPACE]# tail /var/log/httpd/access_log /var/log/httpd/error_log /var/log/mariadb/mariadb.log
```
Explore the running processes. Here you will see `httpd` and `mysqld` (MariaDB) running.
```bash
[CONTAINER_NAMESPACE]# ps aux
```
Press `CTRL+d` or type `exit` to leave the container shell.
## Connecting to the Application
First, detect the host port number that is mapped to the container's port 80:
```bash
$ sudo podman port bigapp 80
```
Now connect to the port via curl:
```bash
$ curl -L http://localhost:<port>/
```
## Review Dockerfile practices
So we have built a monolithic application using a somewhat complicated Dockerfile. There are a few principles that are good to follow when creating a Dockerfile that we did not follow for this monolithic app.
To illustrate some problem points in our Dockerfile it has been replicated below with some commentary added:
```dockerfile
FROM registry.access.redhat.com/rhel7
>>> No tags on image specification - updates could break things
MAINTAINER Student <student@example.com>
# ADD set up scripts
ADD scripts /scripts
>>> If a local script changes then we have to rebuild from scratch
RUN chmod 755 /scripts/*
# Common Deps
RUN yum -y install openssl --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install psmisc --disablerepo "*" --enablerepo rhel-7-server-rpms
>>> Running a yum clean all in the same statement would clear the yum
>>> cache in our intermediate cached image layer
# Deps for wordpress
RUN yum -y install httpd --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install php --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install php-mysql --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install php-gd --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install tar --disablerepo "*" --enablerepo rhel-7-server-rpms
# Deps for mariadb
RUN yum -y install mariadb-server --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install net-tools --disablerepo "*" --enablerepo rhel-7-server-rpms
RUN yum -y install hostname --disablerepo "*" --enablerepo rhel-7-server-rpms
>>> Can group all of the above into one yum statement to minimize
>>> intermediate layers. However, during development, it can be nice
>>> to keep them separated so that your "build/run/debug" cycle can
>>> take advantage of layers and caching. Just be sure to clean it up
>>> before you publish. You can check out the history of the image you
>>> have created by running *podman history bigimg*.
# Add in wordpress sources
COPY latest.tar.gz /latest.tar.gz
>>> Consider using a specific version of Wordpress to control the installed version
RUN tar xvzf /latest.tar.gz -C /var/www/html --strip-components=1
RUN rm /latest.tar.gz
RUN chown -R apache:apache /var/www/
>>> Can group above statements into one multiline statement to minimize
>>> space used by intermediate layers. (i.e. latest.tar.gz would not be
>>> stored in any image).
EXPOSE 80
CMD ["/bin/bash", "/scripts/start.sh"]
```
More generally (a condensed sketch applying these ideas follows this list):
* Use a specific tag for the source image. Image updates may break things.
* Place rarely changing statements towards the top of the file. This allows the re-use of cached image layers when rebuilding.
* Group statements into multi-line statements. This avoids layers that have files needed only for build.
* Use the `LABEL run` instruction to prescribe how the image is to be run.
* Avoid running applications in the container as root user where possible. The final `USER` declaration in the Dockerfile should specify the [user ID (numeric value)](https://docs.openshift.com/container-platform/latest/creating_images/guidelines.html#openshift-specific-guidelines) and not the user name. If the image does not specify a USER, it inherits the USER from the parent image.
* Use the `VOLUME` instruction to create a host mount point for persistent storage.
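As a condensed sketch only (assuming the same set-up scripts and `latest.tar.gz`, and keeping this an all-in-one image; the real, decomposed Dockerfiles are written in the next lab), the same image could be expressed more along these lines:
```dockerfile
# Pin the base image to a specific tag so updates don't surprise you
FROM registry.access.redhat.com/rhel7:7.5-231
MAINTAINER Student <student@example.com>
# All packages in one layer, with the yum cache cleaned in the same statement
RUN yum -y install --disablerepo "*" --enablerepo rhel-7-server-rpms \
      httpd php php-mysql php-gd mariadb-server openssl psmisc net-tools hostname tar && \
    yum clean all
# Set-up scripts (placed after the rarely-changing package layer)
ADD scripts /scripts
RUN chmod 755 /scripts/*
# Extract and clean up the sources in one statement so the tarball
# is not stored in any intermediate layer
COPY latest.tar.gz /latest.tar.gz
RUN tar xvzf /latest.tar.gz -C /var/www/html --strip-components=1 && \
    rm /latest.tar.gz && \
    chown -R apache:apache /var/www/
EXPOSE 80
VOLUME /var/lib/mysql /var/www/html/wp-content/uploads
CMD ["/bin/bash", "/scripts/start.sh"]
```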
In the [next lab](../lab3/chapter3.md) we will fix these issues and break the application up into separate services.


@ -0,0 +1,8 @@
---
- hosts: localhost
  become_method: sudo
  tasks:
  - name: launch openshift
    command: /bin/bash /home/ec2-user/start-oc.sh
    become: false
    ignore_errors: yes

labs/lab2/jump-to-lab2.sh Executable file

@ -0,0 +1,10 @@
#!/bin/bash
read -p "This script is to jump over the previous labs, is that really what you want to do? [y/N] " -n 1 -r
echo # (optional) move to a new line
if [[ $REPLY =~ ^[Yy]$ ]]
then
sudo yum install -y ansible
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ansible-playbook -i "localhost," -c local $DIR/jump-to-here-playbook.yml
fi

labs/lab3/chapter3.md Normal file

@ -0,0 +1,296 @@
# LAB 3: Deconstructing an application into microservices
In this lab you will deconstruct an application into microservices, creating a multi-container application. In this process we explore the challenges of networking, storage and configuration.
This lab should be performed on **YOUR ASSIGNED AWS VM** as `ec2-user` unless otherwise instructed.
NOTE: In the steps below we use `vi` to edit files. If you are unfamiliar, this is a [good beginner's guide](https://www.howtogeek.com/102468/a-beginners-guide-to-editing-text-files-with-vi/). In short, "ESC" switches to command mode, "i" lets you edit, ":wq" lets you save and exit, and ":q!" lets you exit without saving (the last two are entered in command mode).
Expected completion: 20-30 minutes
## Decompose the application
In the previous lab we created an "all-in-one" application. Let's enter the container and explore.
```bash
$ sudo podman exec -t bigapp /bin/bash
```
### Services
From the container namespace list the log directories.
```bash
[CONTAINER_NAMESPACE]# ls -l /var/log/
```
We see `httpd` and `mariadb`. These are the services that make up the Wordpress application.
### Ports
We saw in the Dockerfile that port 80 was exposed. This is for the web server. Let's look at the mariadb logs for the port the database uses:
```bash
[CONTAINER_NAMESPACE]# grep port /var/log/mariadb/mariadb.log
```
This shows port 3306 is used.
### Storage
#### Web server
The Wordpress tar file was extracted into `/var/www/html`. List the files.
```bash
[CONTAINER_NAMESPACE]# ls -l /var/www/html
```
These are sensitive files for our application and it would be unfortunate if changes to these files were lost. Currently the running container does not have any associated "volumes", which means that if this container dies all changes will be lost. This mount point in the container should be backed by a "volume". Later in this lab, we'll use a directory from our host machine to back the "volume" to make sure these files persist.
#### Database
Inspect the `mariadb.log` file to discover the database directory.
```bash
[CONTAINER_NAMESPACE]# grep databases /var/log/mariadb/mariadb.log
```
Again, we have found some files that are in need of some non-volatile storage. The `/var/lib/mysql` directory should also be mounted to persistent storage on the host.
Now that we've inspected the container, stop and remove it. `podman ps -ql` (don't forget `sudo`) prints the ID of the latest created container. First, you will need to exit the container.
```bash
[CONTAINER_NAMESPACE]# exit
$ sudo podman stop $(sudo podman ps -ql)
$ sudo podman rm $(sudo podman ps -ql)
```
If we are confident in what we are doing we can also "single-line" the above with `sudo podman rm -f $(sudo podman ps -ql)` by itself.
## Create the Dockerfiles
Now we will develop the two images. Using the information above and the Dockerfile from Lab 2 as a guide, we will create Dockerfiles for each service. For this lab we have created a directory for each service with the required files for the service. Please explore these directories and check out the contents and the startup scripts.
```bash
$ mkdir ~/workspace
$ cd ~/workspace
$ cp -R ~/summit-2018-container-lab/labs/lab3/mariadb .
$ cp -R ~/summit-2018-container-lab/labs/lab3/wordpress .
$ ls -lR mariadb
$ ls -lR wordpress
```
### MariaDB Dockerfile
1. In a text editor create a file named `Dockerfile` in the `mariadb` directory. (There is a reference file in the `mariadb` directory if needed)

        $ vi mariadb/Dockerfile

1. Add a `FROM` line that uses a specific image tag. Also add `MAINTAINER` information.

        FROM registry.access.redhat.com/rhel7:7.5-231
        MAINTAINER Student <student@example.com>

1. Add the required packages. We'll include `yum clean all` at the end to clear the yum cache.

        RUN yum -y install --disablerepo "*" --enablerepo rhel-7-server-rpms \
              mariadb-server openssl psmisc net-tools hostname && \
            yum clean all

1. Add the dependent scripts and modify permissions to support non-root container runtime.

        ADD scripts /scripts
        RUN chmod 755 /scripts/* && \
            MARIADB_DIRS="/var/lib/mysql /var/log/mariadb /run/mariadb" && \
            chown -R mysql:0 ${MARIADB_DIRS} && \
            chmod -R g=u ${MARIADB_DIRS}

1. Add an instruction to expose the database port.

        EXPOSE 3306

1. Add a `VOLUME` instruction. This ensures data will be persisted even if the container is lost. However, it won't do anything unless, when running the container, host directories are mapped to the volumes.

        VOLUME /var/lib/mysql

1. Switch to a non-root `USER` uid. The default uid of the mysql user is 27.

        USER 27

1. Finish by adding the `CMD` instruction.

        CMD ["/bin/bash", "/scripts/start.sh"]

Save the file and exit the editor.
### Wordpress Dockerfile
Now we'll create the Wordpress Dockerfile. (As before, there is a reference file in the `wordpress` directory if needed)
1. Using a text editor create a file named `Dockerfile` in the `wordpress` directory.

        $ vi wordpress/Dockerfile

1. Add a `FROM` line that uses a specific image tag. Also add `MAINTAINER` information.

        FROM registry.access.redhat.com/rhel7:7.5-231
        MAINTAINER Student <student@example.com>

1. Add the required packages. We'll include `yum clean all` at the end to clear the yum cache.

        RUN yum -y install --disablerepo "*" --enablerepo rhel-7-server-rpms \
              httpd php php-mysql php-gd openssl psmisc && \
            yum clean all

1. Add the dependent scripts and make them executable.

        ADD scripts /scripts
        RUN chmod 755 /scripts/*

1. Add the Wordpress source from gzip tar file. podman will extract the files. Also, modify permissions to support non-root container runtime. Switch to port 8080 for non-root apache runtime.

        COPY latest.tar.gz /latest.tar.gz
        RUN tar xvzf /latest.tar.gz -C /var/www/html --strip-components=1 && \
            rm /latest.tar.gz && \
            sed -i 's/^Listen 80/Listen 8080/g' /etc/httpd/conf/httpd.conf && \
            APACHE_DIRS="/var/www/html /usr/share/httpd /var/log/httpd /run/httpd" && \
            chown -R apache:0 ${APACHE_DIRS} && \
            chmod -R g=u ${APACHE_DIRS}

1. Add an instruction to expose the web server port.

        EXPOSE 8080

1. Add a `VOLUME` instruction. This ensures data will be persisted even if the container is lost.

        VOLUME /var/www/html/wp-content/uploads

1. Switch to a non-root `USER` uid. The default uid of the apache user is 48.

        USER 48

1. Finish by adding the `CMD` instruction.

        CMD ["/bin/bash", "/scripts/start.sh"]

Save the Dockerfile and exit the editor.
## Build Images, Test and Push
Now we are ready to build the images to test our Dockerfiles.
1. Build each image. When building an image podman requires the path to the directory of the Dockerfile.

        $ sudo podman build -t mariadb mariadb/
        $ sudo podman build -t wordpress wordpress/

1. If the build does not succeed then resolve the issue and build again. Once successful, list the images.

        $ sudo podman images

1. Create the local directories for persistent storage. Match the directory permissions we set in our Dockerfiles.

        $ mkdir -p ~/workspace/pv/mysql ~/workspace/pv/uploads
        $ sudo chown -R 27 ~/workspace/pv/mysql
        $ sudo chown -R 48 ~/workspace/pv/uploads
1. Run the wordpress image first. See an explanation of all the `podman run` options we will be using below:
* `-d` to run in daemonized mode
* `-v <host/path>:<container/path>:z` to mount (technically, "bindmount") the directory for persistent storage. The :z option will label the content inside the container with the SELinux MCS label that the container uses so that the container can write to the directory. Below we'll inspect the labels on the directories before and after we run the container to see the changes on the labels in the directories
* `-p <host_port>:<container_port>` to map the container port to the host port
```bash
$ ls -lZd ~/workspace/pv/uploads
$ sudo podman run -d -p 8080:8080 -v ~/workspace/pv/uploads:/var/www/html/wp-content/uploads:z -e DB_ENV_DBUSER=user -e DB_ENV_DBPASS=mypassword -e DB_ENV_DBNAME=mydb -e DB_HOST=0.0.0.0 -e DB_PORT=3306 --name wordpress wordpress
```
Note: See the difference in SELinux context after running with a volume & :z.
```bash
$ ls -lZd ~/workspace/pv/uploads
$ sudo podman exec wordpress ps aux #we can also directly exec commands in the container
```
Check volume directory ownership inside the container
```bash
$ sudo podman exec wordpress stat --format="%U" /var/www/html/wp-content/uploads
```
Now we can check out how wordpress is doing
```bash
$ sudo podman logs wordpress
$ sudo podman ps
$ curl -L http://localhost:8080 #note we indicated the port to use in the run command above
```
**Note**: the `curl` command returns an error but demonstrates
a response on the port.
5. Bring up the database (mariadb) for the wordpress instance. For the mariadb container we need to specify an additional option to make sure it is in the same "network" as the apache/wordpress container and not visible outside that container:
* `--network=container:<alias>` to link to the wordpress container
```bash
$ ls -lZd ~/workspace/pv/mysql
$ sudo podman run -d --network=container:wordpress -v ~/workspace/pv/mysql:/var/lib/mysql:z -e DBUSER=user -e DBPASS=mypassword -e DBNAME=mydb --name mariadb mariadb
```
Note: See the difference in SELinux context after running w/ a volume & :z.
```bash
$ ls -lZd ~/workspace/pv/mysql
$ ls -lZ ~/workspace/pv/mysql
$ sudo podman exec mariadb ps aux
```
Check volume directory ownership inside the container
```bash
$ sudo podman exec mariadb stat --format="%U" /var/lib/mysql
```
Now we can check out how the database is doing
```bash
$ sudo podman logs mariadb
$ sudo podman ps
$ sudo podman exec mariadb curl localhost:3306
$ sudo podman exec mariadb mysql -u user --password=mypassword -e 'show databases'
$ curl localhost:3306 #as you can see the db is not generally visible
$ curl -L http://localhost:8080 #and now wp is happier!
```
You may also load the Wordpress application in a browser to test its full functionality @ `http://<YOUR AWS VM PUBLIC DNS NAME HERE>:8080`.
## Deploy a Container Registry
Let's deploy a simple registry to store our images.
Inspect the Dockerfile that has been prepared.
```bash
$ cd ~/summit-2018-container-lab/labs/lab3/
$ cat registry/Dockerfile
```
Build & run the registry
```bash
$ sudo podman build -t registry registry/
$ sudo podman run --name registry -p 5000:5000 -d registry
```
Confirm the registry is running.
```bash
$ sudo podman ps
```
### Push images to local registry
Push the images
```bash
$ sudo podman images
$ sudo podman push --tls-verify=false mariadb localhost:5000/mariadb
$ sudo podman push --tls-verify=false wordpress localhost:5000/wordpress
```
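To double-check that the pushes landed, you can also query the registry's HTTP API directly; the `_catalog` endpoint is part of the standard registry v2 API, and the output should look roughly like this:
```bash
$ curl http://localhost:5000/v2/_catalog
{"repositories":["mariadb","wordpress"]}
```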
## Clean Up
Remove the mariadb and wordpress containers.
```bash
$ sudo podman rm -f mariadb wordpress
$ sudo podman ps -a
```
In the [next lab](../lab4/chapter4.md) we introduce container orchestration via OpenShift.


@ -0,0 +1,29 @@
---
- hosts: localhost
  become_method: sudo
  tasks:
  - name: launch openshift
    command: /bin/bash /home/ec2-user/start-oc.sh
    async: 600
    poll: 0
    register: start_oc
  - name: build bigimg
    command: podman build -t bigimg /home/ec2-user/summit-2018-container-lab/labs/lab2/bigapp
    become: true
  - name: cleanup bigapp container
    command: podman rm -f bigapp
    become: true
    ignore_errors: yes
  - name: launch bigapp
    command: podman run -P --name=bigapp -e DBUSER=user -e DBPASS=mypassword -e DBNAME=mydb -d bigimg
    become: true
  - name: Check on openshift launch
    async_status: jid={{ start_oc.ansible_job_id }}
    register: cache_result
    until: cache_result.finished
    retries: 300
    ignore_errors: yes

labs/lab3/jump-to-lab3.sh Executable file

@ -0,0 +1,10 @@
#!/bin/bash
read -p "This script is to jump over the previous labs, is that really what you want to do? [y/N] " -n 1 -r
echo # (optional) move to a new line
if [[ $REPLY =~ ^[Yy]$ ]]
then
sudo yum install -y ansible
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ansible-playbook -i "localhost," -c local $DIR/jump-to-here-playbook.yml
fi


@ -0,0 +1,19 @@
FROM registry.access.redhat.com/rhel7:7.5-231
MAINTAINER Student <student@example.com>
# Deps for mariadb
RUN yum -y install --disablerepo "*" --enablerepo rhel-7-server-rpms \
mariadb-server openssl psmisc net-tools hostname && \
yum clean all
# Add set up scripts
ADD scripts /scripts
RUN chmod 755 /scripts/* && \
MARIADB_DIRS="/var/lib/mysql /var/log/mariadb /run/mariadb" && \
chown -R mysql:0 ${MARIADB_DIRS} && \
chmod -R g=u ${MARIADB_DIRS}
EXPOSE 3306
VOLUME /var/lib/mysql
USER 27
CMD ["/bin/bash", "/scripts/start.sh"]


@ -0,0 +1,46 @@
#!/bin/bash
set -e
__mysql_config() {
echo "Running the mysql_config function."
mysql_install_db
# chown -R mysql:mysql /var/lib/mysql
/usr/bin/mysqld_safe &
sleep 10
}
__setup_mysql() {
printf "Running the start_mysql function.\n"
ROOT_PASS="$(openssl rand -base64 12)"
USER="${DBUSER-dbuser}"
PASS="${DBPASS-$(openssl rand -base64 12)}"
NAME="${DBNAME-db}"
printf "root password=%s\n" "$ROOT_PASS"
printf "NAME=%s\n" "$DBNAME"
printf "USER=%s\n" "$DBUSER"
printf "PASS=%s\n" "$DBPASS"
mysqladmin -u root password "$ROOT_PASS"
mysql -uroot -p"$ROOT_PASS" <<-EOF
DELETE FROM mysql.user WHERE user = '$DBUSER';
FLUSH PRIVILEGES;
CREATE USER '$DBUSER'@'localhost' IDENTIFIED BY '$DBPASS';
GRANT ALL PRIVILEGES ON *.* TO '$DBUSER'@'localhost' WITH GRANT OPTION;
CREATE USER '$DBUSER'@'%' IDENTIFIED BY '$DBPASS';
GRANT ALL PRIVILEGES ON *.* TO '$DBUSER'@'%' WITH GRANT OPTION;
CREATE DATABASE $DBNAME;
EOF
killall mysqld
sleep 10
}
# Call all functions - only call if not already configured
DB_FILES=$(echo /var/lib/mysql/*)
DB_FILES="${DB_FILES#/var/lib/mysql/\*}"
DB_FILES="${DB_FILES#/var/lib/mysql/lost+found}"
if [ -z "$DB_FILES" ]; then
printf "Initializing empty /var/lib/mysql...\n"
__mysql_config
__setup_mysql
fi


@ -0,0 +1,10 @@
#!/bin/bash
set -eux
MYDIR=$(dirname $0)
# Perform configurations
$MYDIR/config_mariadb.sh
# Start mariadb in the foreground
/usr/bin/mysqld_safe


@ -0,0 +1,12 @@
FROM registry.access.redhat.com/rhel7
MAINTAINER Student <student@example.com>
RUN yum -y install docker-registry --disablerepo "*" \
--enablerepo rhel-7-server-extras-rpms && \
yum clean all
EXPOSE 5000
ENTRYPOINT ["/usr/bin/registry"]
CMD ["serve", "/etc/docker-distribution/registry/config.yml"]


@ -0,0 +1,22 @@
FROM registry.access.redhat.com/rhel7:7.5-231
MAINTAINER Student <student@example.com>
RUN yum -y install --disablerepo "*" --enablerepo rhel-7-server-rpms \
httpd php php-mysql php-gd openssl psmisc && \
yum clean all
ADD scripts /scripts
RUN chmod 755 /scripts/*
COPY latest.tar.gz /latest.tar.gz
RUN tar xvzf /latest.tar.gz -C /var/www/html --strip-components=1 && \
rm /latest.tar.gz && \
sed -i 's/^Listen 80/Listen 8080/g' /etc/httpd/conf/httpd.conf && \
APACHE_DIRS="/var/www/html /usr/share/httpd /var/log/httpd /run/httpd" && \
chown -R apache:0 ${APACHE_DIRS} && \
chmod -R g=u ${APACHE_DIRS}
EXPOSE 8080
VOLUME /var/www/html/wp-content/uploads
USER 48
CMD ["/bin/bash", "/scripts/start.sh"]

Binary file not shown.


@ -0,0 +1,59 @@
#!/bin/bash
set -e
__handle_passwords() {
if [ -z "$DB_ENV_DBNAME" ]; then
cat <<EOF
No DB_ENV_DBNAME variable. Please link to database using alias 'db'
or provide DB_ENV_DBNAME variable.
EOF
exit 1
fi
if [ -z "$DB_ENV_DBUSER" ]; then
printf "No DB_ENV_DBUSER variable. Please link to database using alias 'db'.\n"
exit 1
fi
# Here we generate random passwords (thank you pwgen!) for random keys in wp-config.php
printf "Creating wp-config.php...\n"
# There used to be a huge ugly line of sed and cat and pipe and stuff below,
# but thanks to @djfiander's thing at https://gist.github.com/djfiander/6141138
# there isn't now.
sed -e "s/database_name_here/$DB_ENV_DBNAME/
s/username_here/$DB_ENV_DBUSER/
s/password_here/$DB_ENV_DBPASS/" /var/www/html/wp-config-sample.php > /var/www/html/wp-config.php
#
# Update keys/salts in wp-config for security
RE='put your unique phrase here'
for i in {1..8}; do
KEY=$(RANDFILE="/usr/share/httpd/.rnd" openssl rand -base64 40)
sed -i "0,/$RE/s|$RE|$KEY|" /var/www/html/wp-config.php
done
}
__handle_db_host() {
if [ "$MARIADB_SERVICE_HOST" ]; then
# Update wp-config.php to point to our kubernetes service address.
sed -i -e "s/^\(define('DB_HOST', '\).*\(');.*\)/\1$MARIADB_SERVICE_HOST:$MARIADB_SERVICE_PORT\2/" \
/var/www/html/wp-config.php
else
# Update wp-config.php to point to our linked container's address.
sed -i -e "s/^\(define('DB_HOST', '\).*\(');.*\)/\1$DB_HOST:${DB_PORT#tcp://}\2/" \
/var/www/html/wp-config.php
fi
}
__httpd_perms() {
chown apache:apache /var/www/html/wp-config.php
}
__check() {
if [ ! -f /var/www/html/wp-config.php ]; then
__handle_passwords
# __httpd_perms
fi
__handle_db_host
}
# Call all functions
__check


@ -0,0 +1,10 @@
#!/bin/bash
set -eux
MYDIR=$(dirname $0)
# Perform configurations
$MYDIR/config_wordpress.sh
# Start apache in the foreground
/usr/sbin/httpd -D FOREGROUND

labs/lab4/chapter4.md Normal file

@ -0,0 +1,327 @@
# LAB 4: Orchestrated deployment of a decomposed application
In this lab we introduce how to orchestrate a multi-container application in OpenShift.
This lab should be performed on **YOUR ASSIGNED AWS VM** as `ec2-user` unless otherwise instructed.
Expected completion: 40-60 minutes
Let's start with a little experimentation. I am sure you are all excited about your new blog site! And, now that it is getting super popular with 1,000s of views per day, you are starting to worry about uptime.
So, let's see what will happen. Launch the site:
```bash
$ sudo podman run -d -p 8080:8080 -v ~/workspace/pv/uploads:/var/www/html/wp-content/uploads:z -e DB_ENV_DBUSER=user -e DB_ENV_DBPASS=mypassword -e DB_ENV_DBNAME=mydb -e DB_HOST=0.0.0.0 -e DB_PORT=3306 --name wordpress wordpress
$ sudo podman run -d --network=container:wordpress -v ~/workspace/pv/mysql:/var/lib/mysql:z -e DBUSER=user -e DBPASS=mypassword -e DBNAME=mydb --name mariadb mariadb
```
Take a look at the site in your web browser on your machine using
`http://<YOUR AWS VM PUBLIC DNS NAME HERE>:8080`. As you learned before, you can confirm the port that your server is running on by executing:
```bash
$ sudo podman ps
$ sudo podman port wordpress
8080/udp -> 0.0.0.0:8080
8080/tcp -> 0.0.0.0:8080
```
Now, let's see what happens when we kick over the database. However, for a later experiment, let's grab the container-id right before you do it.
```bash
$ OLD_CONTAINER_ID=$(sudo podman inspect --format '{{ .ID }}' mariadb)
$ sudo podman stop mariadb
```
Take a look at the site in your web browser or using curl now. And, imagine explosions! (*making sound effects will be much appreciated by your lab mates.*)
* web browser -> `http://<YOUR AWS VM PUBLIC DNS NAME HERE>:8080`
OR
```bash
$ curl -L http://localhost:8080
```
Now, what is neat about a container system, assuming your web application can handle it, is we can bring it right back up, with no loss of data.
```bash
$ sudo podman start mariadb
```
OK, now, let's compare the old container id and the new one.
```bash
$ NEW_CONTAINER_ID=$(sudo podman inspect --format '{{ .ID }}' mariadb)
$ echo -e "$OLD_CONTAINER_ID\n$NEW_CONTAINER_ID"
```
Hmmm. Well, that is cool; they are exactly the same. OK, so all in all, about what you would expect for a web server and a database running on VMs, but a whole lot faster (well, the starting is, anyway). Let's take a look at the site now.
* web browser -> `http://<YOUR AWS VM PUBLIC DNS NAME HERE>:8080`
OR
```bash
$ curl -L http://localhost:8080
```
And.. Your site is back! Fortunately wordpress seems to be designed such that it does not need a restart if its database goes away temporarily.
Finally, let's kill off these containers to prepare for the next section.
```bash
$ sudo podman rm -f mariadb wordpress
```
Starting and stopping is definitely easy, and fast. However, it is still pretty manual. What if we could automate the recovery? Or, in buzzword terms, "ensure the service remains available"? Enter Kubernetes/OpenShift.
## Using OpenShift
Now login to our local OpenShift & create a new project:
```bash
$ oc login -u developer
You have one project on this server: "myproject"
$ oc new-project devel
Now using project "devel" on server "https://127.0.0.1:8443".
```
You are now logged in to OpenShift and are using the ```devel``` project. You can also view the OpenShift web console by using the same credentials to log in to ```https://<YOUR AWS VM PUBLIC DNS NAME HERE>:8443``` in a browser.
## Pod Creation
Let's get started by talking about a pod. A pod is a set of containers that provide one "service." How do you know what to put in a particular pod? Well, a pod's containers need to be co-located on a host and need to be spawned and re-spawned together. So, if the containers always need to be running on the same container host, well, then they should be a pod.
**Note:** We will be putting this file together in steps to make it easier to explain what the different parts do. We will be identifying the part of the file to modify by looking for an "empty element" that we inserted earlier and then replacing that with a populated element.
Let's make a pod for mariadb. Open a file called mariadb-pod.yaml.
```bash
$ mkdir -p ~/workspace/mariadb/openshift
$ vi ~/workspace/mariadb/openshift/mariadb-pod.yaml
```
In that file, let's put in the pod identification information:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    name: mariadb
spec:
  containers:
```
We specified the version of the Kubernetes API, the name of this pod (aka ```name```), the ```kind``` of Kubernetes thing this is, and a ```label``` which lets other Kubernetes things find this one.
Generally speaking, this is the content you can copy and paste between pods, aside from the names and labels.
Now, let's add the custom information regarding this particular container. To start, we will add the most basic information. Please replace the ```containers:``` line with:
```yaml
  containers:
  - name: mariadb
    image: localhost:5000/mariadb
    ports:
    - containerPort: 3306
    env:
```
Here we set the ```name``` of the container; remember we can have more than
one in a pod. We also set the ```image``` to pull, in other words, the container
image that should be used and the registry to get it from.
Lastly, we need to configure the environment variables that need to be fed from
the host environment to the container. Replace ```env:``` with:
```yaml
    env:
    - name: DBUSER
      value: user
    - name: DBPASS
      value: mypassword
    - name: DBNAME
      value: mydb
```
OK, now we are all done, and should have a file that looks like:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
  labels:
    name: mariadb
spec:
  containers:
  - name: mariadb
    image: localhost:5000/mariadb
    ports:
    - containerPort: 3306
    env:
    - name: DBUSER
      value: user
    - name: DBPASS
      value: mypassword
    - name: DBNAME
      value: mydb
```
Our wordpress container is much less complex, so let's do that pod next.
```bash
$ mkdir -p ~/workspace/wordpress/openshift
$ vi ~/workspace/wordpress/openshift/wordpress-pod.yaml
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
  labels:
    name: wordpress
spec:
  containers:
  - name: wordpress
    image: localhost:5000/wordpress
    ports:
    - containerPort: 8080
    env:
    - name: DB_ENV_DBUSER
      value: user
    - name: DB_ENV_DBPASS
      value: mypassword
    - name: DB_ENV_DBNAME
      value: mydb
```
A couple things to notice about this file. Obviously, we change all the appropriate names to reflect "wordpress" but, largely, it is the same as the mariadb pod file. We also use the environment variables that are specified by the wordpress container, although they need to get the same values as the ones in the mariadb pod.
Ok, so, let's launch our pods and make sure they come up correctly. In order to do this, we need to introduce the ```oc``` command, which is what drives OpenShift. Generally speaking, the format of ```oc``` commands is ```oc <operation> <kind>```, where ```<operation>``` is something like ```create```, ```get```, ```delete```, etc. and ```kind``` is the ```kind``` from the pod files.
```bash
$ oc create -f ~/workspace/mariadb/openshift/mariadb-pod.yaml
$ oc create -f ~/workspace/wordpress/openshift/wordpress-pod.yaml
```
Now, I know I just said ```kind``` is a parameter but, since this is a ```create``` statement, it looks in the ```-f``` file for the ```kind```.
Ok, let's see if they came up:
```bash
$ oc get pods
```
Which should output two pods, one called ```mariadb``` and one called ```wordpress``` . You can also check the OpenShift web console if you already have it pulled up and verify the pods show up there as well.
If you have any issues with the pods transitioning from a "Pending" state, you can inspect the pods' logs and details in multiple ways. Here are a couple of options:
```bash
$ oc logs mariadb
$ oc describe pod mariadb
$ oc logs wordpress
$ oc describe pod wordpress
```
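Two more commands that often help when a pod is stuck (using the same pod names as above) are streaming the logs and listing recent project events; this is an optional sketch, not part of the lab flow:
```bash
$ oc logs -f mariadb   # stream the mariadb pod's logs; press Ctrl+C to stop
$ oc get events        # recent scheduling, image-pull, and start events in the project
```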
OK, now let's delete them so we can introduce the services that will let them find each other dynamically.
```bash
$ oc delete pod/mariadb pod/wordpress
```
Verify they are terminating or are gone:
```bash
$ oc get pods
```
**Note:** we used the "singular" ```kind/name``` form here (```pod/mariadb```); ```delete``` needs to know specifically which objects to remove, so a name is required. For ```get``` and similar read operations you can usually use the singular and plural forms interchangeably, depending on the kind of information you want.
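For example, once the pods exist again (we recreate them below), the following forms should all refer to the same objects; this is just an illustration of the singular/plural and ```kind/name``` variations:
```bash
$ oc get pods            # plural: list every pod in the project
$ oc get pod mariadb     # singular plus a name: just the mariadb pod
$ oc get pod/mariadb     # kind/name form, like the delete command above
```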
## Service Creation
Now we want to create Kubernetes Services for our pods so that OpenShift can introduce a layer of indirection between the pods.
Let's start with mariadb. Open up a service file:
```bash
$ vi ~/workspace/mariadb/openshift/mariadb-service.yaml
```
and insert the following content:
```yaml
apiVersion: v1
kind: Service
metadata:
name: mariadb
labels:
name: mariadb
spec:
ports:
- port: 3306
selector:
name: mariadb
```
As you can probably tell, there isn't much new here. However, you need to make sure the ```kind``` is ```Service``` and that the ```selector``` matches at least one of the ```labels``` from the pod file. The ```selector``` is how the service finds the pods that provide its functionality.
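Once the pod and service are both running (a few steps below), you can verify that the selector actually matched something by looking at the service's endpoints; a service whose selector matches no pods lists `<none>`. A quick optional check:
```bash
$ oc get endpoints mariadb                      # should show the mariadb pod IP on port 3306
$ oc describe svc/mariadb | grep -i endpoints   # same information, different view
```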
OK, now let's move on to the wordpress service. Open up a new service file:
```bash
$ vi ~/workspace/wordpress/openshift/wordpress-service.yaml
```
and insert:
```yaml
apiVersion: v1
kind: Service
metadata:
name: wordpress
labels:
name: wordpress
spec:
ports:
- port: 8080
selector:
name: wordpress
```
Here you may notice there is no reference to the wordpress pod at all. Any pod that provides "wordpress capabilities" can be targeted by this service. Pods can claim to provide "wordpress capabilities" through their labels. This service is programmed to target pods with a label of ```name: wordpress```.
Another example: we could have made the mariadb service a generic "db" service; the backing pod could then be mariadb, mysql, or anything else that supports SQL the way wordpress expects it to. To do that, we would just add a ```label``` called ```db``` to ```mariadb-pod.yaml``` and a matching ```selector``` called ```db``` to ```mariadb-service.yaml``` (although an even better name might then be ```db-service.yaml```). Feel free to experiment with that at the end of this lab if you have time; a command-line sketch of the idea follows.
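If you do try that experiment, here is a minimal sketch done from the command line instead of by editing the YAML files. It assumes the mariadb pod is running and uses a hypothetical `db` label and service name:
```bash
# Add a generic "db" label to the running mariadb pod (hypothetical label for the experiment)
$ oc label pod mariadb db=mariadb

# Create a generic "db" service whose selector targets that label
$ oc create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  ports:
  - port: 3306
  selector:
    db: mariadb
EOF
```
Any pod carrying the `db: mariadb` label would then be reachable through the `db` service, regardless of which database engine it actually runs.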
Now let's get things going. Start mariadb:
```bash
$ oc create -f ~/workspace/mariadb/openshift/mariadb-pod.yaml -f ~/workspace/mariadb/openshift/mariadb-service.yaml
```
Now let's start wordpress.
```bash
$ oc create -f ~/workspace/wordpress/openshift/wordpress-pod.yaml -f ~/workspace/wordpress/openshift/wordpress-service.yaml
```
OK, now let's make sure everything came up correctly:
```bash
$ oc get pods
$ oc get services
```
**Note:** these may take a while to reach the ```Running``` state as OpenShift pulls the images from the registry, spins up the containers, etc.
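If you prefer to watch the status change instead of re-running the command, `oc get` supports a watch flag:
```bash
$ oc get pods -w    # watch status updates live; press Ctrl+C once both pods show Running
```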
Eventually, you should see:
```bash
$ oc get pods
NAME READY STATUS RESTARTS AGE
mariadb 1/1 Running 0 45s
wordpress 1/1 Running 0 42s
```
```bash
$ oc get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mariadb ClusterIP 172.30.xx.xx <none> 3306/TCP 1m
wordpress ClusterIP 172.30.xx.xx <none> 8080/TCP 1m
```
Now let's expose the wordpress service by creating a route:
```bash
$ oc expose svc/wordpress
```
And you should be able to see the service's accessible URL by viewing the routes:
```bash
$ oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
wordpress wordpress-devel.<YOUR AWS VM PUBLIC IP>.nip.io wordpress 8080 None
```
Check and make sure you can access the wordpress service through the route:
```bash
$ curl -L wordpress-devel.<YOUR AWS VM PUBLIC IP>.nip.io
```
* Or open the URL in a browser to view the UI
That probably felt awfully manual and order-dependent, didn't it? In our [next lab](../lab5/chapter5.md) we'll demonstrate how simple deployments can be with OpenShift templates.

View File

@ -0,0 +1,117 @@
---
- hosts: localhost
become_method: sudo
tasks:
- name: launch openshift
command: /bin/bash /home/ec2-user/start-oc.sh
async: 600
poll: 0
register: start_oc
- name: build registry
command: podman build -t registry /home/ec2-user/summit-2018-container-lab/labs/lab3/registry
become: true
async: 300
poll: 0
register: registry
- name: build mariadb
command: >
podman build
-t mariadb
-f /home/ec2-user/summit-2018-container-lab/labs/lab3/mariadb/Dockerfile.reference
/home/ec2-user/summit-2018-container-lab/labs/lab3/mariadb/
become: true
async: 300
poll: 0
register: mariadb
- name: build wordpress
command: >
podman build
-t wordpress
-f /home/ec2-user/summit-2018-container-lab/labs/lab3/wordpress/Dockerfile.reference
/home/ec2-user/summit-2018-container-lab/labs/lab3/wordpress/
become: true
async: 300
poll: 0
register: wordpress
- name: cleanup containers
command: podman rm -f {{ item }}
become: true
with_items:
- bigapp
- registry
- mariadb
- wordpress
ignore_errors: yes
- name: create pv dir
file:
state: directory
path: /home/ec2-user/workspace/pv
owner: ec2-user
group: ec2-user
become: true
- name: create mysql pv dir
file:
state: directory
path: /home/ec2-user/workspace/pv/mysql
owner: 27
group: ec2-user
become: true
- name: create wp pv dir
file:
state: directory
path: /home/ec2-user/workspace/pv/uploads
owner: 48
group: ec2-user
become: true
- name: Check on registry build
async_status: jid={{ registry.ansible_job_id }}
register: registry_result
until: registry_result.finished
retries: 300
become: true
- name: Check on mariadb build
async_status: jid={{ mariadb.ansible_job_id }}
register: mariadb_result
until: mariadb_result.finished
retries: 300
become: true
- name: Check on wordpress build
async_status: jid={{ wordpress.ansible_job_id }}
register: wordpress_result
until: wordpress_result.finished
retries: 300
become: true
- name: cleanup images
shell: podman images | grep -E '<none>' | awk '{print $3}' | xargs podman rmi -f -
become: true
ignore_errors: yes
- name: launch registry
command: podman run --name registry -p 5000 -d registry
become: true
- name: load mariadb into registry
command: podman push --tls-verify=false mariadb localhost:5000/mariadb
become: true
- name: load wordpress into registry
command: podman push --tls-verify=false wordpress localhost:5000/wordpress
become: true
- name: Check on openshift launch
async_status: jid={{ start_oc.ansible_job_id }}
register: cache_result
until: cache_result.finished
retries: 300
ignore_errors: yes

10
labs/lab4/jump-to-lab4.sh Executable file
View File

@ -0,0 +1,10 @@
#!/bin/bash
read -p "This script is to jump over the previous labs, is that really what you want to do? [y/N] " -n 1 -r
echo # (optional) move to a new line
if [[ $REPLY =~ ^[Yy]$ ]]
then
sudo yum install -y ansible
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ansible-playbook -i "localhost," -c local $DIR/jump-to-here-playbook.yml
fi

View File

@ -0,0 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
name: mariadb
labels:
name: mariadb
spec:
containers:
- name: mariadb
image: localhost:5000/mariadb
ports:
- containerPort: 3306
env:
- name: DBUSER
value: user
- name: DBPASS
value: mypassword
- name: DBNAME
value: mydb

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
name: mariadb
labels:
name: mariadb
spec:
ports:
- port: 3306
selector:
name: mariadb

View File

@ -0,0 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
name: wordpress
labels:
name: wordpress
spec:
containers:
- name: wordpress
image: localhost:5000/wordpress
ports:
- containerPort: 8080
env:
- name: DB_ENV_DBUSER
value: user
- name: DB_ENV_DBPASS
value: mypassword
- name: DB_ENV_DBNAME
value: mydb

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
name: wordpress
labels:
name: wordpress
spec:
ports:
- port: 8080
selector:
name: wordpress

126
labs/lab5/chapter5.md Normal file
View File

@ -0,0 +1,126 @@
# LAB 5: OpenShift templates and web console
In this lab we introduce how to simplify your container deployments with OpenShift templates. We will also explore the web console.
This lab should be performed on **YOUR ASSIGNED AWS VM** as `ec2-user` unless otherwise instructed.
Expected completion: 20 minutes
## Project preparation
Ensure you're still logged in as the developer user.
```shell
$ oc whoami
developer
```
Let's create a new project.
```bash
$ oc new-project production
Now using project "production" on server "https://10.xx.xx.xxx:8443".
```
## Wordpress templated deployment
This time, let's simplify things by deploying an application template. We've already included a template with lab5 which leverages our wordpress & mariadb images.
```bash
$ cd ~/summit-2018-container-lab/labs/lab5/
$ grep localhost:5000 wordpress-template.yaml
```
Feel free to view the full template.
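If you want a quick summary of what the template expects before deploying it, `oc process` can list its parameters. This is a read-only check; the parameter names it prints are simply whatever the template defines:
```bash
$ oc process -f wordpress-template.yaml --parameters
```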
Let's deploy this wordpress template by adding it to the production project:
```bash
$ oc create -f wordpress-template.yaml
template "wordpress" created
```
Deploy your new template with "oc new-app" and note its output
```bash
$ oc new-app --template wordpress
--> Deploying template "production/wordpress" to project production
```
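As an aside, `oc new-app` can override template parameters with `-p`. For example, assuming the `DBNAME` parameter defined at the bottom of the template, you could have deployed with a different database name:
```bash
# Syntax illustration only: the lab's deployment above already exists, so don't run this now
$ oc new-app --template wordpress -p DBNAME=summitdb
```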
View all of the newly created resources
```bash
$ oc get all
```
Wait for rollout to finish
```bash
$ oc rollout status -w dc/mariadb
replication controller "mariadb-1" successfully rolled out
$ oc rollout status -w dc/wordpress
replication controller "wordpress-1" successfully rolled out
```
Verify the database started
```bash
$ oc logs dc/mariadb
mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
```
Verify wordpress started
```bash
$ oc logs dc/wordpress
/usr/sbin/httpd -D FOREGROUND
```
`oc status` gives a nice view of how these resources connect
```bash
$ oc status
```
Check and make sure you can access the wordpress service through its route:
```bash
$ oc get routes
$ curl -L wordpress-production.<YOUR AWS VM PUBLIC IP>.nip.io
```
* Or open the URL in a browser to view the UI
OpenShift includes several ready-made templates. Let's take a look at some of them:
```shell
$ oc get templates -n openshift
```
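To see what any one of those templates provides, you can describe it by name; substitute a name from the list above:
```bash
$ oc describe template <template-name> -n openshift
```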
For more information on templates, reference the official OpenShift documentation:
[https://docs.openshift.com/container-platform/latest/dev_guide/templates.html](https://docs.openshift.com/container-platform/latest/dev_guide/templates.html)
[https://docs.openshift.com/container-platform/latest/install_config/imagestreams_templates.html#is-templates-subscriptions](https://docs.openshift.com/container-platform/latest/install_config/imagestreams_templates.html#is-templates-subscriptions)
## Web console
Now that we have deployed our template, let's log in as the developer user to the OpenShift web console - `https://<YOUR AWS VM PUBLIC DNS NAME HERE>:8443`
The console URL can be retrieved with:
```bash
$ oc cluster status
```
Login to the web console with the `developer` user.
![image not loading](images/1.png "Login")
And after we've logged in, we see a list of projects that the developer user has access to. Let's select the `production` project:
![image not loading](images/2.png "Projects")
Our project landing page provides us with a high-level overview of our wordpress application's pods, services, and route:
![image not loading](images/3.png "Overview")
Let's dive a little deeper. We want to view a list of our pods by clicking on `Pods` in the left Applications menu:
![image not loading](images/4.png "Pods")
Next, let's click on one of our running pods for greater detail:
![image not loading](images/5.png "Wordpress")
With this view, we have access to pod information like status, logs, image, volumes, and more:
![image not loading](images/6.png "PodDetails")
Feel free to continue exploring the console.
In the final [bonus lab](../lab6/chapter6.md) you'll get to play with some new features, the service catalog and broker.

BIN labs/lab5/images/1.png Normal file (binary image, 29 KiB, not shown)
BIN labs/lab5/images/2.png Normal file (binary image, 266 KiB, not shown)
BIN labs/lab5/images/3.png Normal file (binary image, 79 KiB, not shown)
BIN labs/lab5/images/4.png Normal file (binary image, 66 KiB, not shown)
BIN labs/lab5/images/5.png Normal file (binary image, 48 KiB, not shown)
BIN labs/lab5/images/6.png Normal file (binary image, 99 KiB, not shown)
View File

@ -0,0 +1,97 @@
---
- hosts: localhost
become_method: sudo
tasks:
- name: launch openshift
command: /bin/bash /home/ec2-user/start-oc.sh
async: 600
poll: 0
register: start_oc
- name: build registry
command: podman build -t registry /home/ec2-user/summit-2018-container-lab/labs/lab3/registry
become: true
async: 300
poll: 0
register: registry
- name: build mariadb
command: >
podman build
-t mariadb
-f /home/ec2-user/summit-2018-container-lab/labs/lab3/mariadb/Dockerfile.reference
/home/ec2-user/summit-2018-container-lab/labs/lab3/mariadb/
become: true
async: 300
poll: 0
register: mariadb
- name: build wordpress
command: >
podman build
-t wordpress
-f /home/ec2-user/summit-2018-container-lab/labs/lab3/wordpress/Dockerfile.reference
/home/ec2-user/summit-2018-container-lab/labs/lab3/wordpress/
become: true
async: 300
poll: 0
register: wordpress
- name: cleanup registry container
command: podman rm -f registry
become: true
ignore_errors: yes
- name: Check on registry build
async_status: jid={{ registry.ansible_job_id }}
register: registry_result
until: registry_result.finished
retries: 300
become: true
- name: Check on mariadb build
async_status: jid={{ mariadb.ansible_job_id }}
register: mariadb_result
until: mariadb_result.finished
retries: 300
become: true
- name: Check on wordpress build
async_status: jid={{ wordpress.ansible_job_id }}
register: wordpress_result
until: wordpress_result.finished
retries: 300
become: true
- name: cleanup images
shell: podman images | grep -E '<none>' | awk '{print $3}' | xargs podman rmi -f -
become: true
ignore_errors: yes
- name: launch registry
command: podman run --name registry -p 5000 -d registry
become: true
- name: load mariadb into registry
command: podman push --tls-verify=false mariadb localhost:5000/mariadb
become: true
- name: load wordpress into registry
command: podman push --tls-verify=false wordpress localhost:5000/wordpress
become: true
- name: Check on openshift launch
async_status: jid={{ start_oc.ansible_job_id }}
register: cache_result
until: cache_result.finished
retries: 300
ignore_errors: yes
- name: openshift login
command: oc login -u developer
become: false
- name: create empty devel project
command: oc new-project devel
become: false
ignore_errors: yes

10
labs/lab5/jump-to-lab5.sh Executable file
View File

@ -0,0 +1,10 @@
#!/bin/bash
read -p "This script is to jump over the previous labs, is that really what you want to do? [y/N] " -n 1 -r
echo # (optional) move to a new line
if [[ $REPLY =~ ^[Yy]$ ]]
then
sudo yum install -y ansible
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ansible-playbook -i "localhost," -c local $DIR/jump-to-here-playbook.yml
fi

View File

@ -0,0 +1,172 @@
kind: Template
apiVersion: v1
labels:
template: wordpress-template
metadata:
name: wordpress
annotations:
openshift.io/display-name: "WordPress MariaDB Example"
description: "An example WordPress application with a MariaDB database."
tags: "wordpress,php,mariadb,summit"
iconClass: "icon-php"
objects:
- apiVersion: v1
kind: DeploymentConfig
metadata:
generation: 1
labels:
app: wordpress
name: mariadb
spec:
replicas: 1
selector:
app: wordpress
deploymentconfig: mariadb
strategy:
type: Rolling
template:
metadata:
labels:
app: wordpress
deploymentconfig: mariadb
spec:
containers:
- env:
- name: DBNAME
value: ${DBNAME}
- name: DBPASS
value: ${DBPASS}
- name: DBUSER
value: ${DBUSER}
image: 'localhost:5000/mariadb:latest'
imagePullPolicy: Always
livenessProbe:
initialDelaySeconds: 45
tcpSocket:
port: 3306
timeoutSeconds: 1
name: mariadb
ports:
- containerPort: 3306
readinessProbe:
exec:
command:
- /bin/sh
- -i
- -c
- MYSQL_PWD="$DBPASS" mysql -h 127.0.0.1 -u $DBUSER -D $DBNAME
-e 'SELECT 1'
timeoutSeconds: 2
dnsPolicy: ClusterFirst
restartPolicy: Always
triggers:
- type: ConfigChange
- apiVersion: v1
kind: Service
metadata:
labels:
app: wordpress
name: mariadb
spec:
ports:
- name: 3306-tcp
port: 3306
protocol: TCP
targetPort: 3306
selector:
app: wordpress
deploymentconfig: mariadb
- apiVersion: v1
kind: DeploymentConfig
metadata:
generation: 1
labels:
app: wordpress
name: wordpress
spec:
replicas: 1
selector:
app: wordpress
deploymentconfig: wordpress
strategy:
type: Rolling
template:
metadata:
labels:
app: wordpress
deploymentconfig: wordpress
spec:
containers:
- env:
- name: DB_ENV_DBNAME
value: ${DBNAME}
- name: DB_ENV_DBPASS
value: ${DBPASS}
- name: DB_ENV_DBUSER
value: ${DBUSER}
image: 'localhost:5000/wordpress:latest'
imagePullPolicy: Always
livenessProbe:
initialDelaySeconds: 45
tcpSocket:
port: 8080
timeoutSeconds: 1
name: wordpress
ports:
- containerPort: 8080
protocol: TCP
readinessProbe:
tcpSocket:
port: 8080
timeoutSeconds: 1
dnsPolicy: ClusterFirst
restartPolicy: Always
triggers:
- type: ConfigChange
- apiVersion: v1
kind: Service
metadata:
labels:
app: wordpress
name: wordpress
spec:
ports:
- name: 8080-tcp
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: wordpress
deploymentconfig: wordpress
type: ClusterIP
- apiVersion: v1
kind: Route
metadata:
labels:
app: wordpress
name: wordpress
spec:
host: ""
port:
targetPort: 8080-tcp
to:
kind: Service
name: wordpress
parameters:
- description: Username for MariaDB user that will be used for accessing the database.
displayName: MariaDB User
from: user[A-Z0-9]{3}
name: DBUSER
required: true
value: user
- description: Password for the MariaDB user.
displayName: MariaDB Password
from: '[a-zA-Z0-9]{12}'
name: DBPASS
required: true
generate: expression
- description: Name of the MariaDB database accessed.
displayName: MariaDB Database Name
name: DBNAME
required: true
value: mydb

126
labs/lab6/chapter6.md Normal file
View File

@ -0,0 +1,126 @@
## Introduction
In this lab, we are going to build upon the previous labs and leverage what we have learned to use the [Automation Broker](http://automationbroker.io/) (formerly the OpenShift Ansible Service Broker). As part of this process, we will be using the latest upstream release available for this project. By the time you are finished with the lab, you will have deployed an application and a database and bound the two together. It should become evident how this self-service process can improve the productivity of developers on your team.
If you are unfamiliar with the Automation Broker, in short, it provides pre-packaged, multi-service applications using a container for distribution. The Automation Broker uses Ansible as its definition language but does not require significant Ansible knowledge or experience.
Expected completion: 10-20 minutes
## Setup Environment
First, free up some resources:
```bash
$ oc delete project devel production
```
The `./run_latest_build.sh` script deploys the Automation Broker into your existing OpenShift environment.
```bash
$ cd ~/summit-2018-container-lab/labs/lab6/scripts/
$ ./run_latest_build.sh
```
A successful deployment will end with output similar to:
```bash
Signature ok
subject=/CN=client
Getting CA Private Key
service "asb" created
service "asb-etcd" created
serviceaccount "asb" created
clusterrolebinding "asb" created
clusterrole "asb-auth" created
clusterrolebinding "asb-auth-bind" created
clusterrole "access-asb-role" created
persistentvolumeclaim "etcd" created
deploymentconfig "asb" created
deploymentconfig "asb-etcd" created
secret "asb-auth-secret" created
secret "registry-auth-secret" created
secret "etcd-auth-secret" created
secret "broker-etcd-auth-secret" created
configmap "broker-config" created
serviceaccount "ansibleservicebroker-client" created
clusterrolebinding "ansibleservicebroker-client" created
```
Verify the rollout is successful before proceeding.
```bash
$ oc rollout status -w dc/asb
$ oc get all
$ oc logs dc/asb
```
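If you also want to confirm that the broker registered itself with the service catalog, you can list the catalog resources. This assumes the service catalog API is available in your cluster, as set up by `start-oc.sh`:
```bash
$ oc get clusterservicebrokers
$ oc get clusterserviceclasses | head
```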
The script logged you in as the `system:admin` user. You can switch projects and browse around.
```bash
$ oc get all -n kube-service-catalog
$ oc get projects
```
Now log back in with the developer user.
```bash
$ oc login -u developer
$ oc get all
$ oc get projects
```
Now get the URL for the web console for your AWS VM by checking the cluster status. The web console URL is listed as part of the output. Be sure to refresh your browser.
```bash
$ oc cluster status
Web console URL: https://<YOUR AWS PUBLIC HOSTNAME>:8443
```
## Deploy an Ansible Playbook Bundle Application
Now we are going to deploy our first application using the Automation Broker.
- In the middle navigation panel, click on `All` and then click on the `Hello World (APB)` application.
- Click `Next`.
- Click the dropdown under `Add to Project` and select `Create Project`.
- Give the project a name `apb`. Leave the rest of the options as default and click `Create`.
- Now you will notice that the service is being provisioned. Click on the `Continue to the project overview` link (in the middle of the page). This will take you to the new project namespace that was created when we made the application.
- Give the deployment a minute or so to finish, and in the upper right hand side, you will see a new URL that points to your application. Click on that and it will open a new tab.
- Go back to the project, explore the environment, view logs, look at events, scale the application up, deploy it again, etc...
- Now go back to your CLI and explore what was just created.
```bash
$ oc get projects
NAME DISPLAY NAME STATUS
apb Active
```
Switch to that project and look at what was created.
```bash
$ oc project apb
$ oc get all
$ oc status
```
## Create Database
Now that we have deployed an application, you'll notice that its database information says `No database connected`. Let's create a database and then bind the hello-world app to it.
- Return to the OpenShift web console.
- In the upper right part of the page, click `Add to Project` and then `Browse Catalog`.
- Select the `PostgreSQL (APB)` database from the catalog.
- Click `Next`.
- Select the `Development` Plan and click `Next`.
- Enter a password.
- Select a PostgreSQL version.
- Click `Next`
- Click `Create`. Do not bind at this time.
- Click on the `Continue to the project overview`.
- Once PostgreSQL is provisioned, you'll see both the `hello-world` and the `postgresql` applications. This may take a minute or so.
## Bind Application to Database
- At the bottom of the project overview page, you should see a set of our newly provisioned services.
- On the `PostgreSQL (APB)` service, click `Create Binding`.
- Click `Bind`.
- Click `Close`.
- Let's look at the newly created secret by clicking `Resources` on the left menu and then `Secrets`. The newest secret should be at the top of the list. Click on the newest secret _(e.g. dh-postgresql-apb-qgt7d-credentials-hb0v7)_ and reveal its contents.
- Now let's bind the application to our database by clicking `Add to Application` in the upper right corner.
- Select the `hello-world` (it may be more cryptic than that) app from the drop-down and click `Save`.
- Return to the Project Overview page by clicking `Overview` on the left menu.
- Once the new deployment is finished, go back to the hello-world application url and refresh. Our application is now connected to the DB as evidenced by the populated PostgreSQL information.
This concludes the lab. To summarize, we started out with container basics as a review, built a large monolithic application, and then decomposed it into microservices. Next, we automated the deployment of that application using OpenShift templates. Finally, we experimented with the new service broker technology.
Please feel free to share this lab and contribute to it. We love contributions.

View File

@ -0,0 +1,22 @@
---
- hosts: localhost
become_method: sudo
tasks:
- name: launch openshift
command: /bin/bash /home/ec2-user/start-oc.sh
become: false
ignore_errors: yes
- name: openshift login
command: oc login -u developer
become: false
- name: create empty devel project
command: oc new-project devel
become: false
ignore_errors: yes
- name: create empty production project
command: oc new-project production
become: false
ignore_errors: yes

10
labs/lab6/jump-to-lab6.sh Executable file
View File

@ -0,0 +1,10 @@
#!/bin/bash
read -p "This script is to jump over the previous labs, is that really what you want to do? [y/N] " -n 1 -r
echo # (optional) move to a new line
if [[ $REPLY =~ ^[Yy]$ ]]
then
sudo yum install -y ansible
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
ansible-playbook -i "localhost," -c local $DIR/jump-to-here-playbook.yml
fi

View File

@ -0,0 +1,128 @@
#!/bin/bash
#
# Minimal example for deploying latest built 'Ansible Service Broker'
# on oc cluster up
#
#
# We deploy oc cluster up with an explicit hostname and routing suffix
# so that pods can access routes internally.
#
# For example, we need to register the ansible service broker route to
# the service catalog when we create the broker resource. The service
# catalog needs to be able to communicate to the ansible service broker.
#
# When we use the default "127.0.0.1.nip.io" route suffix, requests
# from inside the cluster fail with an error like:
#
# From Service Catalog: controller manager
# controller.go:196] Error syncing Broker ansible-service-broker:
# Get https://asb-1338-ansible-service-broker.127.0.0.1.nip.io/v2/catalog:
# dial tcp 127.0.0.1:443: getsockopt: connection refused
#
# To resolve this, we explicitly set the
# --public-hostname and --routing-suffix
#
# We use the IP of the docker interface on our host for testing in a
# local environment, or the external listening IP if we want to expose
# the cluster to the outside.
#
# Below will default to grabbing the IP of docker0, typically this is
# 172.17.0.1 if not customized
#
#source ~/cleanup-oc.sh
docker pull docker.io/ansibleplaybookbundle/origin-ansible-service-broker:v3.9
docker tag docker.io/ansibleplaybookbundle/origin-ansible-service-broker:v3.9 docker.io/ansibleplaybookbundle/origin-ansible-service-broker:latest
ASB_VERSION=ansible-service-broker-1.1.17-1
NAMESPACE=ansible-service-broker
#BROKER_IMAGE="registry.access.redhat.com/openshift3/ose-ansible-service-broker:v3.7"
#ETCD_IMAGE="registry.access.redhat.com/rhel7/etcd:latest"
#ETCD_PATH="/usr/bin/etcd"
# REGISTRY_USER <- RHCC user, REGISTRY_PASS <- RHCC password, REGISTRY_TYPE="rhcc", REGISTRY_NAME="rhcc", REGISTRY_URL="https://registry.access.redhat.com"
#metadata_endpoint="http://169.254.169.254/latest/meta-data"
#PUBLIC_HOSTNAME="$( curl -s "${metadata_endpoint}/public-hostname" )"
#PUBLIC_IP="$( curl -s "${metadata_endpoint}/public-ipv4" )"
#DOCKER_IP="$(ip addr show docker0 | grep -Po 'inet \K[\d.]+')"
#DOCKER_IP=${DOCKER_IP:-"127.0.0.1"}
#PUBLIC_IP=${PUBLIC_IP:-$DOCKER_IP}
#HOSTNAME=${PUBLIC_IP}.nip.io
#ROUTING_SUFFIX="${HOSTNAME}"
#oc cluster up --service-catalog=true --routing-suffix=${ROUTING_SUFFIX} --public-hostname=${PUBLIC_HOSTNAME}
#
# Logging in as system:admin so we can create a clusterrolebinding and
# creating ansible-service-broker project
#
oc login -u system:admin
oc new-project $NAMESPACE
#
# A valid dockerhub username/password is required so the broker may
# authenticate with dockerhub to:
#
# 1) inspect the available repositories in an organization
# 2) read the manifest of each repository to determine metadata about
# the images
#
# This is how the Ansible Service Broker determines what content to
# expose to the Service Catalog
#
# Note: dockerhub API requirements require an authenticated user only,
# the user does not need any special access beyond read access to the
# organization.
#
# By default, the Ansible Service Broker will look at the
# 'ansibleplaybookbundle' organization, this can be overridden with the
# parameter DOCKERHUB_ORG being passed into the template.
#
TEMPLATE_URL=${TEMPLATE_URL:-"https://raw.githubusercontent.com/openshift/ansible-service-broker/${ASB_VERSION}/templates/deploy-ansible-service-broker.template.yaml"}
DOCKERHUB_ORG=${DOCKERHUB_ORG:-"ansibleplaybookbundle"} # DockerHub org where APBs can be found, default 'ansibleplaybookbundle'
ENABLE_BASIC_AUTH="false"
VARS="-p BROKER_CA_CERT=$(oc get secret -n kube-service-catalog -o go-template='{{ range .items }}{{ if eq .type "kubernetes.io/service-account-token" }}{{ index .data "service-ca.crt" }}{{end}}{{"\n"}}{{end}}' | tail -n 1)"
# Creating openssl certs to use.
mkdir -p /tmp/etcd-cert
openssl req -nodes -x509 -newkey rsa:4096 -keyout /tmp/etcd-cert/key.pem -out /tmp/etcd-cert/cert.pem -days 365 -subj "/CN=asb-etcd.$NAMESPACE.svc"
openssl genrsa -out /tmp/etcd-cert/MyClient1.key 2048 \
&& openssl req -new -key /tmp/etcd-cert/MyClient1.key -out /tmp/etcd-cert/MyClient1.csr -subj "/CN=client" \
&& openssl x509 -req -in /tmp/etcd-cert/MyClient1.csr -CA /tmp/etcd-cert/cert.pem -CAkey /tmp/etcd-cert/key.pem -CAcreateserial -out /tmp/etcd-cert/MyClient1.pem -days 1024
ETCD_CA_CERT=$(cat /tmp/etcd-cert/cert.pem | base64)
BROKER_CLIENT_CERT=$(cat /tmp/etcd-cert/MyClient1.pem | base64)
BROKER_CLIENT_KEY=$(cat /tmp/etcd-cert/MyClient1.key | base64)
# -p BROKER_IMAGE="$BROKER_IMAGE" -p ETCD_IMAGE="$ETCD_IMAGE" -p ETCD_PATH="$ETCD_PATH" \
curl -s $TEMPLATE_URL \
| oc process \
-n $NAMESPACE \
-p DOCKERHUB_ORG="$DOCKERHUB_ORG" \
-p ENABLE_BASIC_AUTH="$ENABLE_BASIC_AUTH" \
-p ETCD_TRUSTED_CA_FILE=/var/run/etcd-auth-secret/ca.crt \
-p BROKER_CLIENT_CERT_PATH=/var/run/asb-etcd-auth/client.crt \
-p BROKER_CLIENT_KEY_PATH=/var/run/asb-etcd-auth/client.key \
-p ETCD_TRUSTED_CA="$ETCD_CA_CERT" \
-p BROKER_CLIENT_CERT="$BROKER_CLIENT_CERT" \
-p BROKER_CLIENT_KEY="$BROKER_CLIENT_KEY" \
-p NAMESPACE="$NAMESPACE" \
$VARS -f - | oc create -f -
if [ "$?" -ne 0 ]; then
echo "Error processing template and creating deployment"
exit
fi
#
# Then login as 'developer'/'developer' to WebUI
# Create a project
# Deploy mediawiki to new project (use a password other than
# admin since mediawiki forbids admin as password)
# Deploy PostgreSQL(ABP) to new project
# After they are up
# Click 'Create Binding' on the kebab menu for Mediawiki,
# select postgres
# Click deploy on mediawiki, after it's redeployed access webui
#

7
scripts/ansible/wall.yml Normal file
View File

@ -0,0 +1,7 @@
- name: Send Notification
hosts: aws
gather_facts: false
tasks:
- name: Wall instance poweroff
command: 'wall "instance will poweroff at 1:30 est"'
become: true

39
scripts/aws-cli/loft-launch Executable file
View File

@ -0,0 +1,39 @@
#!/bin/bash
# Source Variables
source ./vars &> /dev/null
# Set up path for var file so we can include it.
INCLUDE="$(dirname "$0")"
# Check for existence of the jq package
if [[ ! -f /usr/bin/jq ]] && [[ ! -f /usr/local/bin/jq ]]
then
echo
echo "no jq package installed"
echo "jq is needed to assign names to infra nodes during launch"
echo
exit
else
echo "jq is installed"
fi
# Get security group id for nodes
SEC_GROUP_ID=$(aws ec2 describe-security-groups \
--query 'SecurityGroups[].GroupId[]' \
--filters Name=group-name,Values=$LOFT_SEC_GROUP \
--output text)
echo "Using security group: ${SEC_GROUP_ID}"
# Create the instances and assign tags
for NODE_COUNT in $NODE_COUNT; do
echo "Creating instance ${NODE_COUNT}"
aws ec2 create-tags \
--resources $(aws ec2 run-instances \
--image-id $AMI_ID \
--instance-type $LOFT_INST_TYPE \
--subnet-id $SUBNET_ID_1 \
--security-group-ids $SEC_GROUP_ID \
--key-name $KEY_NAME --output json | jq -r ".Instances[0].InstanceId") \
--tags "Key=Name,Value=$LOFT_NODE-$NODE_COUNT" "Key=${TAG_KEY1},Value=${TAG_VALUE1}" "Key=${TAG_KEY2},Value=${TAG_VALUE2}"
done

13
scripts/aws-cli/loft-list Executable file
View File

@ -0,0 +1,13 @@
#!/bin/bash -x
LIST_FILE=aws-loft-list.json
TAG_KEY=lab_type
TAG_VALUE=loft-lab
LOFT_SERVER=ec2-54-153-82-60.us-west-1.compute.amazonaws.com
aws ec2 describe-instances --query 'Reservations[].Instances[].{PublicHostname:PublicDnsName,PublicIP:PublicIpAddress}' --filters "Name=instance-state-name,Values=running" "Name=tag:${TAG_KEY},Values=${TAG_VALUE}" --output json | jq --arg START 1 '($START | tonumber) as $s
| to_entries
| map({StudentID: ($s + .key), PublicHostname:.value.PublicHostname, PublicIP:.value.PublicIP })' > ${LIST_FILE}
cat ${LIST_FILE}
echo "The next command will attempt to copy the file './${LIST_FILE}' to the web server as '/var/www/html/${LIST_FILE}', if it fails ensure your AWS key is loaded or modify the scp line as needed."
scp ${LIST_FILE} ec2-user@${LOFT_SERVER}:/var/www/html/${LIST_FILE}

42
scripts/aws-cli/vars Normal file
View File

@ -0,0 +1,42 @@
# Create Custom Variables. You can change this to anything you want, makes a unique environment.
# Example: LAB_USER=tohuges or LAB_USER=scollier or LAB_USER=student
LAB_USER="student"
ENV_NAME="conf-lab"
# Number of loft nodes that need to be created
# Just provide a start and finish number
NODE_COUNT=$(seq 1 2)
# AMI Id
# This needs to be updated every time you update the AMI on AWS.
AMI_ID=ami-ec43df83
# Instance types
# I only requested an extension to 105 instances for this type. Can't launch more than 15 of any other type.
LOFT_INST_TYPE=t2.xlarge
# Network info
# This is all static. Already created in our VPC
SUBNET_CIDR_1='10.50.0.0/24'
SUBNET_ID_1=subnet-a7e470cc
# Availability Zones
# Will need to change this once we swap to the west coast.
REGION=eu-central-1
AZ_1=eu-central-1a
# Security Groups
# This is static
LOFT_SEC_GROUP=DEVCONF-security-group
# Tags
TAG_KEY1="lab_type"
TAG_VALUE1=${ENV_NAME}
TAG_KEY2="lab_user"
TAG_VALUE2=${LAB_USER}
# Loft node names
LOFT_NODE=${ENV_NAME}-${LAB_USER}
# AWS Key
KEY_NAME=rhte

View File

@ -0,0 +1,7 @@
#!/bin/bash
# CLEANUP
oc cluster down
docker rm -vf $(docker ps -aq)
docker volume rm $(docker volume ls -q)
findmnt -lo target | grep "/var/lib/origin/openshift.local." | xargs sudo umount
sudo rm -rf /var/lib/origin/openshift.local.*

5
scripts/host/start-oc.sh Normal file
View File

@ -0,0 +1,5 @@
# STARTUP
metadata_endpoint="http://169.254.169.254/latest/meta-data"
public_hostname="$( curl -s "${metadata_endpoint}/public-hostname" )"
public_ip="$( curl -s "${metadata_endpoint}/public-ipv4" )"
oc cluster up --service-catalog=true --public-hostname="${public_hostname}" --routing-suffix="${public_ip}.nip.io"

View File

@ -0,0 +1,60 @@
#!/usr/bin/python
# Used these resources to build this simple script
# https://boto3.readthedocs.io/en/latest/guide/ec2-example-managing-instances.html
# https://pygsheets.readthedocs.io/en/latest/
# The library below was extremely slow, so it was replaced with pygsheets
# http://gspread.readthedocs.io/en/latest/
# https://www.twilio.com/blog/2017/02/an-easy-way-to-read-and-write-to-a-google-spreadsheet-in-python.html
from __future__ import print_function
import pygsheets
import boto3
import os
import time
def main():
ec2 = boto3.client('ec2')
    filters = [{'Name': 'tag:lab_type', 'Values': ['loft-lab']}, {'Name': 'instance-state-name', 'Values': ['running']}]
instances = ec2.describe_instances(Filters=filters)
gc = pygsheets.authorize(service_file='%s/nycawsloft-af8212519288.json' % os.environ['HOME'])
row = ["Student ID", "Public URL", "Public IP Address", "Claimed By"]
sht = gc.open("NYC AWS Loft Instances")
wks = sht.worksheet('index', 0)
wks.update_row(1, values=row)
row_count = 2
for r in instances['Reservations']:
for i in r['Instances']:
for t in i['Tags']:
if t['Key'] == 'Name':
if 'spare' in t['Value']:
student_id = t['Value']
else:
student_id = t['Value'].split('-')[-1]
print(i['PublicDnsName'])
print(i['PublicIpAddress'])
row = [student_id, i['PublicDnsName'], i['PublicIpAddress']]
# Sleep is required otherwise the script will hit the API limit
time.sleep(0.5)
wks.update_row(row_count, values=row)
row_count = row_count + 1
if __name__ == '__main__':
main()

3
setup/TODO Normal file
View File

@ -0,0 +1,3 @@
Change to CentOS image?
Leave as Red Hat and put in instructions and pre-reqs for obtaining developer access.
To avoid having to share our AMI, document any changes we made to our AMI and ansiblize them so anyone can launch a CentOS or RH instance and then run through the lab.

View File

@ -0,0 +1,185 @@
# Introduction
This guide is here to show instructors how to set up and run the lab. It covers a few tasks:
* Local environment pre-reqs
* AWS pre-reqs
* AMI preparation
* Docker configuration
* Lab launch instructions
* Web server configuration
## Pre-Reqs
* VPC is created
* Key pair has been created
  - Provide the AWS access and secret key
  - Provide the correct region
* Running web server to host the ssh key
## Prepare the AMI
### Launch AMI
* Log into AWS
* Choose EC2
* Click "Launch Instance"
* Select "Red Hat"
* Select "t2.large", click "Next"
- Select "Network"
- Select "Subnet"
- Ensure "Auto-assign Public IP" is "enabled"
- Click "Next"
* Change disk size to "40"
* Click "Next"
* Select "Existing security group"
- Pick the security group you already created
* Click "Review and Launch"
* Click "Launch"
* Select your existing key pair that has already been created
- Click "Acknowledge"
* Click "Launch Instances"
* Click "View Instances"
### Configure the AMI
Do this as root:
* Example: "ssh -i <KEY PAIR NAME HERE>.pem ec2-user@ec2-54-204-171-139.compute-1.amazonaws.com"
* You will need to subscribe to Red Hat CDN, and then disable RHUI. You'll need to use an account that has the appropriate permissions.
```
subscription-manager register
subscription-manager list --available --matches 'Red Hat OpenShift Enterprise Infrastructure'
subscription-manager attach --pool 8a85f9XXXXXXXXXXXX
subscription-manager refresh
subscription-manager repos --disable=*
subscription-manager repos --enable="rhel-7-server-rpms" --enable="rhel-7-server-extras-rpms" --enable="rhel-7-server-ose-3.7-rpms"
yum repolist
```
* Update: "yum -y update"
* Install a couple of packages: "yum -y install ansible python-devel git wget firewalld docker bash-completion"
* Install the development tools: "yum -y groupinstall "Development Tools""
Configure Docker. Edit `/etc/containers/registries.conf` so it allows the cluster registry network:
```
[registries.insecure]
registries = ['172.30.0.0/16']
```
Then set up the docker group and enable the service:
```
groupadd docker
usermod -aG docker ec2-user
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
```
Get the latest "oc" client.
```
yum -y install atomic-openshift-clients
```
Configure Firewalld
```
systemctl enable firewalld
systemctl restart firewalld
firewall-cmd --permanent --new-zone dockerc
firewall-cmd --permanent --zone dockerc --add-source 172.17.0.0/16
firewall-cmd --permanent --zone dockerc --add-port 8443/tcp
firewall-cmd --permanent --zone dockerc --add-port 53/udp
firewall-cmd --permanent --zone dockerc --add-port 8053/udp
firewall-cmd --permanent --zone public --add-port=8443/tcp
firewall-cmd --permanent --zone public --add-port=80/tcp
firewall-cmd --permanent --zone public --add-port=53/tcp
firewall-cmd --permanent --zone public --add-port=53/udp
firewall-cmd --permanent --zone public --add-port=443/tcp
firewall-cmd --permanent --zone public --add-port=2379/tcp
firewall-cmd --permanent --zone public --add-port=2380/tcp
firewall-cmd --permanent --zone public --add-port=4789/udp
firewall-cmd --permanent --zone public --add-port=8053/tcp
firewall-cmd --permanent --zone public --add-port=8053/udp
firewall-cmd --permanent --zone public --add-port=8443/tcp
firewall-cmd --permanent --zone public --add-port=8444/tcp
firewall-cmd --permanent --zone public --add-port=10250/tcp
firewall-cmd --reload
reboot
```
Do the following as the ec2-user:
Meet the requirements of "oc cluster up"
```
sudo sysctl -w net.ipv4.ip_forward=1
```
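If you want that setting to persist across reboots of the AMI (optional; `oc cluster up` only needs it at runtime), one way is a sysctl drop-in file:
```bash
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
```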
Clone the lab repo:
```
git clone https://github.com/dustymabe/summit-2018-container-lab
chmod +x /home/ec2-user/summit-2018-container-lab/scripts/host/start-oc.sh
chmod +x /home/ec2-user/summit-2018-container-lab/scripts/host/cleanup-oc.sh
mv /home/ec2-user/summit-2018-container-lab/scripts/host/start-oc.sh ~
mv /home/ec2-user/summit-2018-container-lab/scripts/host/cleanup-oc.sh ~
```
Start the cluster to cache the images.
```
~/start-oc.sh
```
Now log into the console with the URL given as "oc cluster up" output. Once you can do that, you are ready to create an AWS AMI.
```
~/cleanup-oc.sh
rm -rf /home/ec2-user/summit-2018-container-lab
```
### Create AMI
* In AWS console right click on the instance you just configured.
- Choose "Image", and then "Create Image"
- Provide an "Image Name", "Image Description", Click "Create Image"
## Set up a web server for the students
* Use the same AMI launch sequence for a lightweight apache web server
* Install httpd, start and enable the service
* Copy the lab private key to the web server and make available via http
* May want to add AWS termination protection on this to make sure no one blows it away
## Launch the VMs for the students
Clone the repository; this is done from your local workstation:
```
git clone -b RHTE-EMEA-PROD https://github.com/scollier/managing-ocp-install-beyond.git
cd managing-ocp-install-beyond/
cp my_secrets.yml <my-username>.yml
```
* Fill out the variables in the file
* launch the playbook
```
ansible-playbook -v -e @<my-username>.yml aws_lab_launch.yml
```
* log into the AWS vm and start the lab
```
ssh -i /path/to/rhte.pem ec2-user@tower-<my-username>-devops-test-1.rhte.sysdeseng.com
```
Each VM is assigned a public DNS name. Log in with your student ID substituted in the DNS name above.
## References
* https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md
* https://access.redhat.com/documentation/en-us/openshift_container_platform/3.7/html/installation_and_configuration/installing-a-cluster#install-config-install-host-preparation