So You Want to Run Serverless Jamf Pro

Edit 2019-11-29: I had left a us-east-2 specific AMI ID for the NAT instance in the template. The template and this post have been updated with instructions on finding the correct NAT AMI for the region you wish to deploy to and passing that as a parameter.

First we went through containerizing Jamf Pro, and now we’re talking about running it serverless?

What does that even mean?

Serverless Is Serverless

Saying “it’s not serverless, there’s still a server” is about as on point as saying “it’s not the cloud, it’s just someone else’s computer”. There are servers, of course, but functionality is exposed through services. Take the AWS definition of serverless:

  • No server management.
    This is a major aspect of modern cloud providers that isn’t specific to serverless computing. If you’re using a service you shouldn’t ever be worrying about the configuration, management, and maintenance of the underlying resources.
  • Automatic scaling.
    Serverless services should scale easily and according to load. Not all offerings in the space are fully automatic without some configuration on our part. We’ll get to that later.
  • Pay for usage; don’t pay for idle.
    When you create a virtual machine it’s a static piece of infrastructure that you pay for hourly even if you aren’t utilizing it. Serverless on AWS states that resources should spin down automatically, or be able to spin down, and sit without incurring cost until they’re resumed or invoked.
  • High availability.
    This one has the same caveat as the scaling item. Across the spectrum, most serverless services will handle this for you automatically. Others will expose the ability, but you’ll need to take extra steps.

Jamf Pro is not a serverless application. In reality it’s a monolithic application – one big, self-contained web app – and it does require an always running instance. In the past this would be a virtual machine. We would probably have MySQL installed and running on another VM too. As of our last post we know we can do this as a container, but what will that container run on?

AWS Managed Services

Deploying to AWS we have offerings that we can take advantage of to translate the Docker environment of the previous post to a fully hosted one that has no infrastructure for us to manage. We can also define everything about our infrastructure and service as code using CloudFormation.

CloudFormation is a lot like the Docker Compose file I showed in the previous post. It’s a JSON or YAML file that defines everything about what will be created as a part of a stack. This can include networking, services, databases, you name it. If it’s in AWS, it’s in CloudFormation*.

* Except when it’s not.

I’ll also break down what the estimated minimum costs for these are for a 30 day period. If you deploy the provided CloudFormation template in your account understand that you will incur costs.

Virtual Private Cloud (VPC)

We’re not deploying on a laptop and hitting localhost any more. Running in the cloud means we need to establish a network architecture that will keep our key resources protected.

We’re going to create a VPC with six subnets and the security groups allowing traffic between them. Now, the one that I’m going to walk through is not designed around high availability – one of the serverless tenets – in that two of our resources will not exist in multiple availability zones.

Availability zone? Think of it as a data center (but that’s not actually correct) within a given AWS region. For high availability you would architect a service to exist in two or more availability zones (and in some cases in multiple AZs in multiple regions!). Our load balancer and database require multiple subnets in different AZs. Our NAT and the web app will only exist in one purely to make this simpler and to save on short term costs while we create these resources.

Here’s the breakdown of our planned VPC:

  • 2x public load balancer subnets accepting HTTP traffic over port 80 or HTTPS over port 443 (I’ll explain later).
  • 1x public subnet for the NAT instance that accepts all traffic from our web app subnet out to the internet.
  • 1x private subnet for the web app containers that accepts traffic over port 8080 from our load balancer subnets.
  • 2x private database subnets for our database that accept traffic over port 3306 from the web app subnet.

Here’s a quick diagram that shows how this all looks.

There are no costs for any of the network configuration of our VPC.
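To make the pattern concrete, here’s a minimal CloudFormation sketch of one private subnet and the ingress rule allowing the load balancer to reach the web app. The logical IDs and CIDR block here are illustrative, not the actual template’s values.

WebAppSubnet:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    CidrBlock: 10.0.3.0/24                     # illustrative CIDR
    AvailabilityZone: !Select [0, !GetAZs '']  # first AZ in the region

# Only the load balancer's security group may reach the web app on 8080
WebAppIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref WebAppSecurityGroup
    IpProtocol: tcp
    FromPort: 8080
    ToPort: 8080
    SourceSecurityGroupId: !Ref LoadBalancerSecurityGroup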

Application Load Balancer (ALB)

Exposing any web server directly on the public internet, containerized or not, is not a good idea. Our load balancer will be deployed in our public subnets allowing traffic over port 80 or 443 depending on whether we supply a certificate ARN to the CloudFormation stack.

What’s that about? The template I’ll be providing later has a couple of conditionals in it. By default, application load balancers only accept HTTP traffic over port 80. If you want to accept traffic over port 443 that means you need to provide your own certificate. I’ll walk through that later on when we’re getting ready for deployment, but know that if you can’t generate a valid certificate in your account you can still deploy this stack to accept traffic over port 80.
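The shape of that conditional looks something like this sketch (the logical IDs are illustrative and the real template may differ):

Conditions:
  UseHttps: !Not [!Equals [!Ref CertificateArn, '']]

Resources:
  Listener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      # HTTPS on 443 when a certificate ARN was supplied, otherwise HTTP on 80
      Port: !If [UseHttps, 443, 80]
      Protocol: !If [UseHttps, HTTPS, HTTP]
      Certificates: !If
        - UseHttps
        - - CertificateArn: !Ref CertificateArn
        - !Ref AWS::NoValue
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup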

The HTTP option is only for demonstrative purposes. You shouldn’t communicate with a Jamf Pro instance over an unencrypted connection and send anything sensitive.

Sadly, an ALB is not a truly serverless service. It’s always running, and AWS charges per hour even if there’s no traffic. The estimated minimum cost in a month is $21.96.

See Elastic Load Balancing Pricing for more details.

NAT Instances and Gateways

Our NAT (network address translation) is the first place we’re really breaking the serverless concept. The template defines a NAT instance, which is the smallest (and cheapest) EC2 instance size available, using an AWS provided image.

There is an option for a NAT Gateway – which is in the template but commented out – that is fully managed, but you still pay for idle, and it requires at least two subnets in two AZs like our load balancer. However, it is also much more performant, and scales itself under load. It’s the clear choice for a production environment, but not for testing and experimenting.

We will need an AMI for the AWS region we plan to deploy the NAT instance to. AMI stands for Amazon Machine Image. Think of it like a virtual machine snapshot. Amazon makes many of these AMIs available, for free, to launch EC2 instances from by referencing the ID. The ID is going to be unique for each region, and there will be many different versions of certain AMIs as they’re updated over time, so we need a way to look up the latest.

I have a quick one-line command using the AWS CLI based off the instructions available in the KB Finding a Linux AMI. This documentation also describes how to locate an AMI using the AWS console if you prefer to do so using the GUI.

aws ec2 describe-images --region ${REGION} --filters 'Name=name,Values=amzn-ami-vpc-nat-*' 'Name=state,Values=available' --query 'reverse(sort_by(Images, &CreationDate))[:1].ImageId' --output text

This will provide the AMI ID of the latest image matching our name filter for the given ${REGION}. Use this to look up the appropriate AMI so you can pass the ID into the CloudFormation template later.
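For example, to capture the ID in a shell variable you can reuse when we deploy later (using us-east-2 here):

REGION=us-east-2

AMI_ID=$(aws ec2 describe-images --region ${REGION} --filters 'Name=name,Values=amzn-ami-vpc-nat-*' 'Name=state,Values=available' --query 'reverse(sort_by(Images, &CreationDate))[:1].ImageId' --output text)

echo ${AMI_ID}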

Estimated minimum on-demand cost for our t3a.nano EC2 NAT: $3.38. The estimated minimum cost for a NAT Gateway: $32.40.

See Amazon VPC Pricing and Amazon EC2 Pricing for more details.

Aurora Serverless

Did you know serverless SQL databases existed? Aurora Serverless is very interesting in that it truly does fit all four descriptions of a serverless service. When you create one of these instances you can configure it to automatically pause after a period of inactivity. No connections for 30 minutes? Paused. And you’re not billed for any hours where it isn’t running!
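That pause behavior is just a little configuration on the cluster resource. Here’s a sketch of what it can look like in CloudFormation (the logical ID and capacity values are illustrative):

Database:
  Type: AWS::RDS::DBCluster
  Properties:
    Engine: aurora
    EngineMode: serverless
    MasterUsername: !Ref DatabaseMasterUsername
    MasterUserPassword: !Ref DatabaseMasterPassword
    ScalingConfiguration:
      AutoPause: true
      SecondsUntilAutoPause: 1800  # pause after 30 minutes of no connections
      MinCapacity: 1
      MaxCapacity: 2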

Seasoned Jamf Pro admins may have already spotted the problems here:

  1. Jamf Pro is chatty. Odds are good once you’ve enrolled something or configured connections to any external services, the application will be querying the database often enough that pausing will never occur.
  2. If the database were ever to become paused, Jamf Pro would not be able to recover and would require a restart of the application. Jamf Pro depends on a highly available database on the backend.

So why use this offering? Cost-wise, it isn’t actually that much higher than the cheapest Aurora MySQL instance ($0.06/hour vs a db.t3.small at $0.041/hour), and we get auto-scaling along for the ride. Plus it’s even more managed for us than Aurora MySQL. And really… we’re using this kinda for funsies. It works, it works just fine, but it’s not supported by Jamf (Aurora is, Aurora Serverless is not), and Jamf Pro doesn’t even play to the strengths of this offering where we could pause it to save money.

That being said, it’s much more expensive than spinning up the cheapest RDS MySQL option (a db.t3.micro at $0.017/hour).

Estimated minimum cost for Aurora Serverless under full utilization: $43.20.

See Amazon Aurora Pricing and Amazon RDS for MySQL Pricing for more details.

Fargate

Here we finally get to the issue of running our Java web app in a serverless fashion. Jamf Pro can’t sit idle and not cost us money (unfortunately), but we can ensure that the application isn’t running on top of infrastructure that we are provisioning, configuring, and managing!

Enter Fargate. AWS’s container service, Elastic Container Service (ECS), allows us to roll our own clusters with autoscaling groups of EC2 instances for the underlying resources that we can then run containers on top of. Lots of overhead there, but that’s the path to go when you’re at large scale and need to drive down costs by leveraging Spot Fleets and EC2 reserved instances.

We’re not touching any of that.

Fargate is an ECS cluster where we don’t think about any of that. We just launch containers in it and AWS takes care of the task scheduling and assignment of resources out of some nebulous pool that we have zero visibility into. It just works. Neat, huh?

Fargate is something I generally leverage for running long tasks that don’t fit within the constraints of a Lambda function. But for a static web app, especially one I want to run as a container, it makes a lot of sense to go this route rather than add on all the overhead of managing a full blown ECS cluster.

Fargate charges us for vCPU hours and GB of memory allocated during those hours. Jamf Pro is more memory bound than CPU bound, so we can save on the compute (the expensive part) and focus more on RAM. In the CloudFormation template I’ve allocated 0.5 vCPU and 2 GB of memory.
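In the template that allocation reads roughly like this (a sketch – the logical IDs are illustrative, and a real Fargate task definition also needs an execution role and logging configuration that I’ve omitted):

TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    RequiresCompatibilities:
      - FARGATE
    NetworkMode: awsvpc
    Cpu: '512'      # 0.5 vCPU
    Memory: '2048'  # 2 GB
    ContainerDefinitions:
      - Name: jamfpro
        Image: !Ref JamfProImageURI
        PortMappings:
          - ContainerPort: 8080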

The gotcha is again comparing the cost to the alternative that requires more management and overhead. Fargate’s pricing puts it above the EC2 t3.small instance that has the same amount of memory but 2 vCPU allocated.

Estimated cost for running one container in Fargate at these allocations: $20.97.

See Amazon Fargate Pricing and Amazon EC2 Pricing for more details.

The Grand Total

If we tally up the estimated minimum costs of all the components above…

$89.51 per month, or $2.98 per day. We detailed many areas where you can reduce this cost by trading away some of the managed aspects of the services we’ve chosen, but this pricing also isn’t indicative of a real production environment. In production you should use a NAT Gateway (more expensive), you probably should use an actual Aurora cluster, and you’re going to be running at least two, if not three, web app containers across multiple AZs.

The CloudFormation template that defines the environment described above is not intended for long term operation. You can deploy it to play around with a cloud hosted Jamf Pro server with production-like infrastructure and then you can tear it all down once you’re done.

And then deploy it all again.

That’s the beauty of infrastructure as code.

Elastic Container Registry

Before we deploy, we need something to deploy. In the previous post I walked through creating a Jamf Pro image using the WAR file you can obtain from Jamf Nation. In order for Fargate to use that image we have to publish it to a location it can pull from.

The public DockerHub probably isn’t the appropriate place.

But we have an internal option to AWS: Elastic Container Registry. It lives alongside ECS and EKS (#kubelife) and provides a private registry to host our images.

Create a Jamf Pro Repository

This will be quick, but you’re going to need an AWS profile and credentials set up locally to use with the aws CLI. If you don’t have the CLI installed you can follow the instructions here (version 1). Be sure to do this first.

  1. Log into the AWS console and head to ECR in the region you will be wanting to deploy to (I use us-east-2).
  2. Click Create repository in the upper-right corner.
  3. Enter jamfpro for the repository name.
  4. Click Create repository.

Now we can push our image built in the last post up here. If you click on your new repository you will see a button labeled View push commands in the upper-right. This gives you a cheat sheet pop-up for everything you need to do in the Terminal.

Log into ECR for Docker by running the following (copy the whole thing, including the wrapping $() – it executes the login command that the CLI returns). Replace ${AWS_REGION} with the region of your repository.

$(aws ecr get-login --no-include-email --region ${AWS_REGION})

Tag your local image to match the new remote registry. Replace ${VERSION} with the one you built and tagged for your jamfpro image (in the previous post it was 10.17.0). Replace ${AWS_ACCOUNT} with your account number (it will be shown in that cheat sheet) and ${AWS_REGION} again.

docker tag jamfpro:${VERSION} ${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION}

Now push the image up to your AWS account! This may take a few minutes depending on your internet connection.

docker push ${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION}

This tagged image URI is what we need to deploy our CloudFormation stack. You can verify this in the AWS console when viewing the tagged images in the repository.
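If you’d rather verify from the command line, describe the images in the repository:

aws ecr describe-images --repository-name jamfpro --region ${AWS_REGION}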

The Certificate

At this point we need to generate a certificate if we want to launch our Jamf Pro stack using HTTPS. This certificate must exist in AWS Certificate Manager for the region you are deploying to. No way around that. To request a certificate, you need to own a domain (it doesn’t have to be managed by Route 53, but you must have control of it).

If you can’t check both of those boxes you will need to proceed with HTTP.

If you do, your experience will vary depending on where your domain lives and how you will handle validation. AWS offers DNS validation, which is handled automatically if the domain is managed by Route 53 (super slick) and still works if you can create DNS records for your domain yourself, and email validation, where a message will be sent to the address on record.

For this step, I’m going to refer you to Amazon’s documentation: Request a Public Certificate – AWS Certificate Manager.

Deploying the Stack

And so, we finally come to it. Here is the CloudFormation template. Take a look through the resources and see how they map to what has been described above. Save this file to your computer. You have two ways of deploying this.

AWS CLI

Using the AWS CLI and your credentials you can run either of the following commands to launch the CloudFormation stack. The template contains a number of Parameters to set certain values unique to the deployment. Using the CLI, these need to be passed as a part of the --parameter-overrides argument. As with the ECR commands, be sure to set or replace the variables with the correct values.

This command omits the CertificateArn and will launch the stack using HTTP.

aws cloudformation deploy \
    --region ${AWS_REGION} \
    --stack-name jamfpro-serverless-http \
    --template-file jamfpro-template.yaml \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides \
        NatImageId=${AMI_ID} \
        JamfProImageURI=${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION} \
        DatabaseMasterUsername=jamfadmin \
        DatabaseMasterPassword=${SECRET}

This command includes the CertificateArn to enable HTTPS.

aws cloudformation deploy \
    --region ${AWS_REGION} \
    --stack-name jamfpro-serverless-https \
    --template-file jamfpro-template.yaml \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides \
        NatImageId=${AMI_ID} \
        JamfProImageURI=${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION} \
        CertificateArn=arn:aws:acm:${AWS_REGION}:${AWS_ACCOUNT}:certificate/${CERTIFICATE_ID} \
        DatabaseMasterUsername=jamfadmin \
        DatabaseMasterPassword=${SECRET}

AWS Console

Alternately, there’s nothing wrong with using the GUI. Go to CloudFormation in the region you wish to deploy to (and the same region that has your ECR image and ACM certificate) and click on Create stack: With new resources.

Upload the template file you saved and walk through the interface to populate the values you want for the parameters (some have defaults you can stick with). There is a checkbox at the end – “I acknowledge that AWS CloudFormation might create IAM resources” – that you will need to click. This prompt appears because there are IAM roles in the template that will be created (in the CLI this was handled with the --capabilities CAPABILITY_IAM argument).

Click Create stack and watch the stream of events as the entire stack builds.

Access in Your Browser

This part depends on if you deployed the stack to use HTTP or HTTPS.

Check the Outputs tab for the CloudFormation stack in the console and copy the domain name for LoadBalancerDNS.

If your deployment used HTTP you can access Jamf Pro at that address.

If your deployment used HTTPS, you can now create a DNS record to point here. If you are using Route 53 in AWS this will be an A alias record pointing to the domain. If not, you can create a CNAME record that points here.
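If your domain is in Route 53 and you want to script that A alias record, it would look something like this (the record name is a placeholder, ${HOSTED_ZONE_ID} is your domain’s hosted zone, and ${ALB_HOSTED_ZONE_ID} is the load balancer’s canonical hosted zone ID):

aws route53 change-resource-record-sets \
    --hosted-zone-id ${HOSTED_ZONE_ID} \
    --change-batch '{
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "jamf.example.org.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "'"${ALB_HOSTED_ZONE_ID}"'",
                    "DNSName": "'"${LOAD_BALANCER_DNS}"'",
                    "EvaluateTargetHealth": false
                }
            }
        }]
    }'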

Teardown

Once you’re done playing around with your cloud environment you can wipe everything by selecting the CloudFormation stack and clicking the Delete button.
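The CLI equivalent, if you deployed from the terminal:

aws cloudformation delete-stack --region ${AWS_REGION} --stack-name jamfpro-serverless-http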

That’s it.

The Next Phase

This mini series came about because I had a pretty bad idea. In order to actually work on this bad idea I needed to have a running instance of Jamf Pro out in AWS with the database in Aurora. In order to do that, I wanted to run the web app in Fargate. In order to do that I needed to make an image I could deploy. It made sense to share my work along the way.

The next phase of this journey is extending this Jamf Pro environment by taking advantage of the features that AWS provides. If you’re treating your cloud like a server rack you’re not building to its strengths. All the major players aren’t simply sitting there providing compute. There are services, APIs, frameworks, and ways of tying it all together.

We’re going to build something cool. So stay tuned for the next phase in our containerized journey:

“{ Spoilers }”

So You Want to Containerize Jamf Pro

Containers. They’re what all the cool kids talk about these days.

But really, if you haven’t been moving everything you’re doing to either containerized or serverless deployments you may want to make a commitment to putting that at the forefront of your 2020 objectives.

And yeah, we’re already well into that internally at Jamf (more on that in the future).

For the Jamf admin who has yet to migrate to Jamf Cloud and still operates their own environment, you may have already considered moving off your virtual machine (or, shudder, bare metal) installs.

But where to start?

Let’s start with the basics. We’re going to use Docker on your laptop to get Jamf Pro up and running without having to install Java, Tomcat, or MySQL.

You’re Going to Need Docker

Before continuing, you’re going to need Docker Desktop (17.09 is the latest at time of posting). Finish this first – you won’t need anything else!

Mac: https://docs.docker.com/docker-for-mac/install/

Windows 10: https://docs.docker.com/docker-for-windows/install/

Building a Jamf Pro Docker Image

First thing we need is a Docker image we can deploy.

What is an image? A Docker image is an artifact that contains everything needed to run an application. Think of it like a virtual machine snapshot, but immutable and highly portable. Containers are the running processes launched using an image.

To build an image we need a Dockerfile that contains all the instructions on what to install and configure. Engineers at Jamf maintain a “starter” image that we will be using for our environment.

The source code is located here: https://github.com/jamf/jamfpro

But, we don’t need to build this image ourselves. The latest version of it is already available on DockerHub: https://hub.docker.com/r/jamfdevops/jamfpro

DockerHub is a public repository of already built images you can pull down and use! You’ll find all sorts of images readily available from members of the open source community as well as software vendors.

The jamfdevops/jamfpro image is going to be the foundation of our jamfpro image we launch.

Some of you might be thinking at this point, “Why are we building another image from this one?” True, we can use the image on its own and provide a ROOT.war file at run time, but this is only helpful for one-off testing.

In a production, or production-like, setting we won’t be running docker commands to launch our instances. We’ll be following best practices with pipelines, and infrastructure and configuration as code. To accomplish this we need an immutable deployment artifact.

Pull a copy of this image down to your computer before continuing (it doesn’t have a latest tag so you have to specify – 0.0.10 is the latest at time of posting).

docker pull jamfdevops/jamfpro:0.0.10

Download the ROOT.war

We need the WAR file for a manual installation of Jamf Pro first. Customers can obtain one from their Assets page on Jamf Nation: https://www.jamf.com/jamf-nation/my/products

Yes, you have to be a customer for this step. ¯\_(ツ)_/¯

Click Show alternative downloads and then Jamf Pro Manual Installation. Extract the zip file into an empty directory.

Build the Deployable Image

Here’s a quick script that will use the jamfdevops/jamfpro image as a base for our output deployment image. The VERSION variable can be changed for what you are using (10.17.0 is the current version at time of posting).

# Run from a directory that _only_ contains your ROOT.war file.
# Change VERSION to that of the ROOT.war being deployed

VERSION=10.17.0

docker build . -t jamfpro:${VERSION} -f - <<EOF
FROM jamfdevops/jamfpro:0.0.10
ADD ROOT.war  /data/
EOF

Grab this script on my GitHub as a Gist: build_jamfpro_docker_version.py

I’m using an inline Dockerfile for this script. FROM tells it what the base image is. ADD is the command that will copy the ROOT.war file into the image’s /data directory. This is where the jamfdevops/jamfpro startup script checks for a WAR file to extract into the Tomcat webapps directory.

You should see the following after a successful build of the image.

Sending build context to Docker daemon  227.4MB
Step 1/2 : FROM jamfdevops/jamfpro:0.0.10
 ---> f87f303293bc
Step 2/2 : ADD ROOT.war  /data/
 ---> 83cd51f8b622
Successfully built 83cd51f8b622
Successfully tagged jamfpro:10.17.0

To learn more, see https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

Create a Docker Network

Our environment is going to need two running containers: the Jamf Pro web app, and a MySQL database. We’re going to create a virtual network for them to launch in that will allow communication between the web app and the database.

It’s a quick one-liner.

docker network create jamfnet

Our new jamfnet network is a bridge network (the default). We don’t need to do any additional configuration for our purposes today.
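If you’re curious, you can confirm the driver (and later, see which containers are attached) by inspecting the network:

docker network inspect jamfnet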

To learn more, see https://docs.docker.com/network/bridge/

Start a MySQL Container

Now to start a database. We’re going to use the official mysql:5.7 image from DockerHub, and this image gives us some handy shortcuts to make our local environment easier to set up, but let’s talk about what we’re about to do and why you shouldn’t do this in production, or production-like, environments.

  • We’re running a database in a container.
    This isn’t necessarily something that you shouldn’t do, but the way we’re doing this is not production grade. Running databases in Docker is fantastic for having the service available without needing to install and configure it on your own. You should really, really know what you’re doing.
    Protip: use managed database services from your provider of choice (which we’re going to explore in the near future).
  • We’re not using a volume to persist the database.
    Containers are ephemeral. They don’t preserve state or data without the use of some external volumes (either on the host or remote). You can preserve the data on disk by mounting a local directory into the running container using an additional argument: -v /my/local/dir:/var/lib/mysql.
  • We’re allowing the image startup scripting to create the Jamf Pro database.
    The MySQL image has a handy feature exposed through the MYSQL_DATABASE environment variable to create a default database. This is a shortcut for setting up Jamf Pro as we don’t have to connect afterwards to create it, but this means the only user available is root which leads us to the last point…
  • We’ll be using the root MySQL user for Jamf Pro.
    Never do this in production. For our local test environment it’s fine – we’re not hosting customer or company data and it’s temporary.

Refer to this Jamf Nation KB on how to properly setup the Jamf Pro database on MySQL: https://www.jamf.com/jamf-nation/articles/542/manually-creating-the-jamf-pro-database

With all that said, let’s take a look at the docker run command to start up our test MySQL database.

docker run --rm -d \
    --name jamf_mysql \
    --net jamfnet \
    -e MYSQL_ROOT_PASSWORD=jamfsw03 \
    -e MYSQL_DATABASE=jamfsoftware \
    -p 3306:3306 \
    mysql:5.7

The --rm argument is there to delete the container once it stops. If you omit this then you can stop and restart containers while preserving their last state (going back to the issue of not preserving our MySQL data in a mounted volume: it would still persist while the container was stopped).

-d is short for --detach and will start the container in a new process without tying up the current shell. If you omit this you will remain attached to the running process and view the log stream.

These two options are specific to us treating this as a short lived test environment. In a production container deployment you wouldn’t even think about this because you won’t be deploying the containers with the docker command.

The --name argument provides a friendly name we can use to interact with our container instead of the randomized ones that will be generated otherwise. This is important when it comes to the internal networking to Docker.

With --net (short for --network) the container will launch attached to our jamfnet network, and it will be discoverable via DNS by any other container in the same network. Our web app will be able to reach its database by connecting to mysql://jamf_mysql:3306.

The two -e arguments set the values for environment variables in the running container. Most configurable options for a containerized service are handled through environment variables. You can use the same image across multiple environments by changing the values that are passed.

In addition to the web app container, we’re going to want to be able to interact with the data being written to MySQL from our own command line or utilities. In order to do so, we must expose the service’s ports on the host. The -p or --publish argument is a mapping of host ports to container ports. MySQL communication is over port 3306 so we are mapping that port on our computer to the same port on our container.

Run the command and you will see a long randomized ID printed out. You can verify what containers are running by typing docker ps -a.

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                               NAMES
bd2e35ebcdd1        mysql:5.7           "docker-entrypoint.s…"   40 seconds ago      Up 39 seconds       0.0.0.0:3306->3306/tcp, 33060/tcp   jamf_mysql

To learn more about the MySQL Docker image, view the documentation on DockerHub: https://hub.docker.com/_/mysql

Connect Using MySQL CLI or MySQL Workbench

Now that MySQL is up and running we can connect to it either on the command line using mysql or with a GUI application such as MySQL Workbench.

If you want to use the CLI, you don’t need to install anything additional. You already have everything you need with the mysql:5.7 Docker image!

docker run --rm -it \
    --net jamfnet \
    mysql:5.7 \
    mysql -u root -h jamf_mysql -p

There are a few new things here. Instead of running a service and detaching it, we are telling it to launch the container with an interactive terminal with the -it arguments (--interactive and --tty respectively).

We’re also passing a command after the name of the image we want to use. This overrides the entrypoint for the image (I’ve been referring to this as a “startup” script up until now). The entrypoint is the default command that runs for an image if another is not provided.

You can also see that we’ve attached to the jamfnet network again and are passing the name of our MySQL container as the host argument for mysql. If we didn’t have port 3306 exposed this method allows us to launch a shell to access private resources.

But, because we do have port 3306 exposed, we can run this container in a slightly different way.

docker run --rm -it \
    --net host \
    mysql:5.7 \
    mysql -u root -h 127.0.0.1 -p

The host network pretty much does as it sounds. This container will come up without the network isolation that bridge networks provide and access resources much like other clients would. Here you can see we pass the localhost IP instead of the container name and the MySQL connection will be established.

Launch a Jamf Pro Container

We’re finally ready to launch the Jamf Pro web app container itself. The command for this is going to look very similar to what we did with MySQL.

docker run --rm -d \
    --name jamf_app \
    --net jamfnet \
    -e DATABASE_USERNAME=root \
    -e DATABASE_PASSWORD=jamfsw03 \
    -e DATABASE_HOST=jamf_mysql \
    -p 80:8080 \
    jamfpro:10.17.0

Nothing connects to the Jamf Pro container by name (MySQL doesn’t talk to Jamf Pro), so naming it doesn’t really do anything for us here, but it does make managing the container using docker a little easier.

We have a different set of environment variables specific to our Jamf Pro image. Again, an image is a static artifact we use for a deployment. The deployment is customized through the use of environment variable values that are applied on startup (the entrypoint scripting). You could easily spin up numerous local Jamf Pro instances all pointing to their own unique databases just by switching out those values all from the one image.

We’re also passing the MySQL container name for the DATABASE_HOST like in our previous example with the mysql CLI.

There’s one difference with our -p argument. We’re mapping port 80 on our host to port 8080 of the container (which is what Jamf Pro uses when it comes up). This is a handy feature of publishing ports: you can effectively perform port forwarding from the host to the container. Instead of interacting with the web app at http://localhost:8080 in our browser we can just use http://localhost.

Run the command and you will again see another long randomized ID printed. Verify the running containers using docker ps -a as before.

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                               NAMES
382d3fd60afe        jamfpro:10.17.0     "/startup.sh"            24 seconds ago      Up 22 seconds       0.0.0.0:80->8080/tcp                jamf_app
bd2e35ebcdd1        mysql:5.7           "docker-entrypoint.s…"   43 minutes ago      Up 42 minutes       0.0.0.0:3306->3306/tcp, 33060/tcp   jamf_mysql

Access in Your Browser

Open up Safari, enter localhost into the address bar, and you’ll be greeted by the warm glow of a Jamf EULA.

Consistency, Repeatability

You’ve achieved your first major milestone: you’re running Jamf Pro in a containerized environment. Now, how do we take what we have done above and ensure we can repeat it several hundred (or thousand) times exactly, without spinning everything up manually as we just did?

Infrastructure/configuration as code is the means to achieve this. The way you implement this is going to be different depending on where it is you’re deploying to. For Docker, the tool we want to use to define how to bring up Jamf Pro is Docker Compose (which I’ve presented about).

If you installed Docker Desktop you will already have this CLI tool available to you! Docker Compose uses YAML files to define the elements of your application (services, networks, volumes) and allows you to spin up and manage environments from them.

Here’s a Docker Compose file that automates everything we went through in this post.

version: "3"
services:
  mysql:
    image: "mysql:5.7"
    networks:
      - jamfnet
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: "jamfsw03"
      MYSQL_DATABASE: "jamfsoftware"
  app:
    image: "jamfpro:10.17.0"
    networks:
      - jamfnet
    ports:
      - "80:8080"
    environment:
      DATABASE_USERNAME: "root"
      DATABASE_PASSWORD: "jamfsw03"
      DATABASE_HOST: "mysql"
    depends_on:
      - mysql
networks:
  jamfnet:

Grab this file on my GitHub as a Gist: jamfpro-docker-compose.yml

Notice anything about the attributes on our services? Our definitions map almost 1:1 with the CLI arguments to docker run. The service’s key becomes the name which we are able to use as a reference just as before. In the app service (previously jamf_app) we are passing mysql as the DATABASE_HOST – Docker’s DNS magic continuing to do the work for us. There’s also a Docker Compose specific option in here, depends_on, that references it. This tells Docker Compose that the mysql service must be started before it brings up the app.

The beauty of Docker Compose is that it takes very little explaining to understand once you’ve already done some work with the docker CLI. From the manual commands we ran as a part of this post you can understand what the YAML file is defining and what will happen when we run it.

To try it, save the above into a docker-compose.yml file and run the following command from the same directory (I’m in a directory called build):

/build % docker-compose up --detach
Creating build_mysql_1 ... done
Creating build_app_1   ... done
/build %

Fast, consistent, and repeatable deployments from a single definition. To tear this stack down and clean up, run:

/build % docker-compose down
Stopping build_app_1   ... done
Stopping build_mysql_1 ... done
Removing build_app_1   ... done
Removing build_mysql_1 ... done
Removing network build_jamfnet
/build %

The Next Phase

From here your Jamf Pro server is up, running, and ready for whatever testing you have in store. Ideally, as you progress on this journey, the Docker image you use at this step would be the one that you are ultimately deploying to production. The running application itself is identical at each stage it moves through your deployment processes.

You’ve taken your first steps with running Jamf Pro containerized on your computer, but as alluded to in the beginning this is only an exercise in the basics. A production environment is not going to be a laptop with Docker installed (…at least, I certainly hope not). What you’re most likely looking at in that case is a managed container service in the cloud from one of the big three: Amazon Web Services, Google Cloud Platform, or Microsoft Azure.

If you’ve followed me long enough you’ll know that I’m an AWS developer. That’s where my production environment would be, and I can define in CloudFormation templates (AWS’s infrastructure as code implementation) the things I’m going to need:

  • Virtual Private Cloud
  • Application Load Balancer
  • Fargate Cluster
  • Aurora MySQL Database (with a twist)

So stay tuned for the next phase in our containerized journey:

“So You Want to Run Serverless Jamf Pro”

Trick Sam into building your Lambda Layers

Right now, the SAM CLI doesn’t support building Lambda Layers: those magical additions to Lambda that allow you to define shared dependencies and modules. If you’re unfamiliar, you can read more about them here:

New for AWS Lambda – Use Any Programming Language and Share Common Components

If you read my last article on using sam build, you might think to yourself, “Hey, I can add Layers into my template to share common code across all my Lambdas!”, but hold on! At the moment sam build does not support building Layers the way it builds Lambda packages.

But, there is a hacky way around that. Here’s our repository:

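The layout is along these lines (the directory names here are illustrative; the module and dependencies match the test output further down):

MyLayer/
├── src/
│   └── function/
│       ├── requirements.txt       # pymysql, sqlalchemy
│       └── stored_procedures.py
└── template.yaml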

Now here’s the contents of template.yaml:

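A sketch of what that template contains (the logical IDs are illustrative):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  # A throwaway function: it exists only so 'sam build' installs the
  # layer's dependencies into .aws-sam/build/LayerBuilder/
  LayerBuilder:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src/function
      Handler: unused.handler
      Runtime: python3.6

  DependencyLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      # Point the Layer at the *built* function's output directory
      ContentUri: ./.aws-sam/build/LayerBuilder
      CompatibleRuntimes:
        - python3.6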

We’ve defined an AWS::Serverless::Function resource, but with no events, or any other attributes for that matter. We have also defined an AWS::Serverless::LayerVersion resource for our Lambda Layer, but the ContentUri path points to the build directory for the Lambda function.

See where this is going?

sam build will install all the dependencies for our Layer and copy its code into the build directory, and then when we call sam package the Layer will use that output! Spiffy. This does result in an orphan Lambda function that will never be used, but it won’t hurt anything just sitting out there.

Now, we aren’t done quite yet. According to AWS’s documentation, you need to place Python resources within a python directory inside your Layer. The zip file that sam build creates will be extracted into /opt, but the runtimes will only look, by default, in a matching directory within /opt (so in the case of Python, that would be /opt/python).

See AWS Lambda Layers documentation for more details.

We can’t tell sam build to do that, but we can still get around this inside our Lambda functions that use the new Layer by adding /opt into sys.path (import searches all of the locations listed here when you call it). Here’s an example Python Lambda function that does this:

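Something along these lines (a sketch reconstructed to match the imports in the test output below):

import sys

# The layer's contents extract to the root of /opt rather than /opt/python,
# so add /opt to the import search path before importing from the layer
sys.path.append('/opt')

import pymysql
import sqlalchemy
import stored_procedures


def lambda_handler(event, context):
    print(sys.path)
    print(pymysql)
    print(sqlalchemy)
    print(stored_procedures)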

Performing a test execution gives us the following output:

START RequestId: fd5a0bf2-f9af-11e8-bff4-ab8ada75cf17 Version: $LATEST
['/var/task', '/opt/python/lib/python3.6/site-packages', '/opt/python', '/var/runtime', '/var/runtime/awslambda', '/var/lang/lib/python36.zip', '/var/lang/lib/python3.6', '/var/lang/lib/python3.6/lib-dynload', '/var/lang/lib/python3.6/site-packages', '/opt/python/lib/python3.6/site-packages', '/opt/python', '/opt']
<module 'pymysql' from '/opt/pymysql/__init__.py'>
<module 'sqlalchemy' from '/opt/sqlalchemy/__init__.py'>
<module 'stored_procedures' from '/opt/stored_procedures.py'>
END RequestId: fd5a0bf2-f9af-11e8-bff4-ab8ada75cf17
REPORT RequestId: fd5a0bf2-f9af-11e8-bff4-ab8ada75cf17	Duration: 0.82 ms	Billed Duration: 100 ms 	Memory Size: 128 MB	Max Memory Used: 34 MB

Voila! We can see the inclusion of /opt into our path (and the expected path of /opt/python before it) and that our dependencies and custom module were all successfully imported.

It breaks PEP8 a little, but it gets the job done and we have now successfully automated the building and deployment of our Lambda Layer using AWS’s provided tooling.
Possum is dead; long live the squirrel.

Sam Build

A part of me is sorry to say that the title is not clickbait. Just before re:Invent, the AWS SAM developers made a pretty big announcement:

SAM CLI Introduces sam build Command

You can now use the sam build command to compile deployment packages for AWS Lambda functions written in Python using the AWS Serverless Application Model (AWS SAM) Command Line Interface (CLI).

All you need to do is*:

sam build

This command will iterate over your SAM template and output ready-to-package versions of your template and Python Lambdas to a .aws-sam/build directory. This lines up exactly with work I was preparing to do for possum, but AWS has gone ahead and done all the work.

* With other required arguments depending on your environment.

In fact, you’ll find that sam build nearly has feature parity with possum with a few exceptions which I’ll go into. Let’s take a look at what one of my serverless projects looks like as an example:

MyApp/
├── src/
|   └── functions/
│       └── MyLambda/
│           ├── my_lambda.py
│           └── requirements.txt
├── Pipfile
├── Pipfile.lock
└── template.yaml

I use pipenv for managing my development environments. The project’s overall dependencies are defined in my Pipfile while the pinned versions of those dependencies are in the Pipfile.lock. Individual dependencies for my Lambdas are defined in their own requirements.txt files within their directories.

I use PyCharm for all of my Python development. Using pipenv to manage the individual virtual environment for a given project allows me to take advantage of the autocompletion features of PyCharm across all the Lambda functions I’m working on. I maintain the individual requirements.txt files for each of my Lambdas and have their listed packages match the version in my Pipfile.lock (I have scripting in possum 1.5.0 that manages syncing the package versions in the requirements.txt files for me).

Now, when I run sam build it will perform all the same actions as possum, but instead of creating the zipped archive and uploading straight to S3 the built Lambdas will be available within the project’s directory.

Possum was originally written as a replacement for sam package that would include dependencies. It would upload the Lambda packages directly to an S3 bucket.

MyApp/
├── .aws-sam/
|   └── build/
|       ├── MyLambda/
│       |   ├── installed_dependency/
│       |   |   └── {dependency files}
│       |   └── my_lambda.py
|       └── template.yaml
├── src/
|   └── functions/
│       └── MyLambda/
│           ├── my_lambda.py
│           └── requirements.txt
├── Pipfile
├── Pipfile.lock
└── template.yaml

The new template located at .aws-sam/build/template.yaml has had the CodeUri keys updated to reference the relative paths within the .aws-sam/build directory. You will see that these copies of the Lambda code now contain all the dependencies that were defined within the requirements.txt file.

The example above is generic. For a real case, the ApiContributorRegistration Lambda for CommunityPatch installs the cryptography and jsonschema packages. This is what the output looks like for a built Lambda:

CommunityPatch/
├── .aws-sam/
    └── build/
        └── ApiContributorRegistration/
            ├── asn1crypto/
            ├── asn1crypto-0.24.0.dist-info/
            ├── cffi/
            ├── cffi-1.11.5.dist-info/
            ├── cryptography/
            ├── cryptography-2.4.1.dist-info/
            ├── idna/
            ├── idna-2.7.dist-info/
            ├── jsonschema/
            ├── jsonschema-2.6.0.dist-info/
            ├── pycparser/
            ├── pycparser-2.19.dist-info/
            ├── schemas/
            ├── six-1.11.0.dist-info/
            ├── _cffi_backend.cpython-36m-x86_64-linux-gnu.so
            ├── api_contributor_registration.py
            ├── requirements.txt
            └── six.py

Dependencies usually have dependencies of their own (those two packages became seven!). And that’s just one Lambda.

Sam Invoke

Now, at this point you could take the output from sam build and perform sam package to get everything loaded into S3 and have a deployment template to run in CloudFormation. However, now that we have a build template we can take advantage of the SAM CLI’s powerful test features which possum was working towards adopting:

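The flow looks something like this (the function’s logical ID and the event type here are illustrative):

sam local generate-event apigateway aws-proxy > event.json

sam local invoke MyLambda --event event.json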

We can unit test our Lambdas using generated AWS events from the SAM CLI! I’ll cover my workflow for this in more detail at a later time, but before deploying the entire app out to AWS we can now perform some sanity checks that the Lambdas should execute successfully when given a proper payload. Ideally, you would want to generate multiple event payloads to cover a variety of potential invocations.

Sam Package/Deploy

From here the standard package and deploy steps follow (using either the sam or aws CLI tools) which I won’t cover here as I’ve done so in other posts. The full process referencing the new .aws-sam/build directory looks like this:

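Roughly the following, with the bucket and stack names as placeholders:

sam package \
    --s3-bucket ${BUCKET} \
    --output-template-file packaged.yaml

sam deploy \
    --template-file packaged.yaml \
    --stack-name MyApp \
    --capabilities CAPABILITY_IAM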

sam package knows to use the template output from sam build without having to specify the path to it!

Gotchas

While all of this is great, let’s cover the exceptions I alluded to earlier.

sam build will perform the build every single time. Even if you don’t make changes between builds it will still rebuild all your functions. This is agonizingly slow. Preventing unneeded builds was one of the first features that went into possum to speed up my personal development. The AWS devs have been listening to some of my feedback on how I implemented this and are looking into adopting a similar solution for sam build.

Every Lambda must have a requirements.txt file even if they don’t have any external dependencies. I ran into this one right away. At the moment, sam build expects there to always be a requirements.txt file within a Lambda function’s code directory. Use a blank file for simple Lambdas as a workaround. The AWS devs are aware of this and will be fixing it.

python must resolve to a Python environment of the same version as your serverless app. If python resolves to a different version (like on a Mac where it resolves to the 2.7 system executable) activate a virtual environment of the correct version as a workaround. You should be able to easily do this if you’re using pipenv by running pipenv shell. The reason this isn’t an issue for possum is because possum relies on pipenv for generating the correct Python build environment based upon the runtime version defined in the template. The AWS devs have been taking my feedback and are looking into this.

Edit: The below wheel issue is fixed in sam 0.8.1!

You may run into the error message “Error: PythonPipBuilder:ResolveDependencies – {pycparser==2.19(sdist)}”. This happens if you’re using a version of Python that didn’t include the wheel package. This will be fixed in a future release, but you can pip install wheel in the Python environment that sam was installed to as a workaround.

You’re also going to run into that error when you try to use the --use-container option because the Docker image pulled for the build environment is also missing that package.

The workaround is to build an intermediary image based on lambci/lambda:build-python3.6, install the wheel package, and then tag it using the same tag (yes, you’re tagging an image to override the existing tag with your own custom one). This will also be fixed in a future release.
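A sketch of that workaround using an inline Dockerfile:

docker build -t lambci/lambda:build-python3.6 - <<EOF
FROM lambci/lambda:build-python3.6
RUN pip install wheel
EOF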

Dev Update (2019-11-09)

I’m finally doing my Friday dev updates. These posts are short and sweet – they talk about work that went into any of my open source projects for the past ~week and/or what work is being done with them.

Let’s dive in.

Jamf The Gathering: A Slack Bot’s Story

JNUC happened recently. You might have heard about it. Some stuff came up. There was a really awesome burn during the opening keynote.

It was a really nice burn.


Blog switch up

I haven’t been posting to the blog with as much frequency as I used to. Partially, I think this is due to my ideas around larger, more in depth posts that require a lot more time sitting down and crafting.

I’m going to try something different post-Penn State MacAdmins. I want to start forcing myself to write more technical content, but in smaller bite-sized pieces that focus on fundamentals rather than all-encompassing solutions. Topics that are generic, but provide examples which can be used as building blocks. As these are smaller, and more example driven, I’m going to set a goal to post every week on Monday morning, and maybe Wednesday too if I have enough posts in the pipeline.

On Fridays, I want to start posting “Dev Updates”. I have a number of projects that I have open sourced and am committed to updating for the Mac admin community. These projects do have channels in the MacAdmins Slack, and their GitHub repos are linked there, but keeping up as a follower would involve a lot of back scrolling to find out what has been discussed. I plan to include in these weekly updates: new issues raised on GitHub, recaps of discussions from the Slack channels, and descriptions of any work that has been done during that week on features/bugs. This should not only provide digestible status updates for those who want them, but help keep me focused.

The ultimate goal here is that I’m writing more again. When it comes to learning and bettering myself professionally, there are two ways I go about it: post about it publicly, or present on it publicly. Both of which force you to cover all your bases in the face of public scrutiny.

We’ll see how this goes.