So You Want to Run Serverless Jamf Pro

Edit 2019-11-29: I had left a us-east-2 specific AMI ID for the NAT instance in the template. The template and this post have been updated with instructions on finding the correct NAT AMI for the region you wish to deploy to and passing that as a parameter.

First we went through containerizing Jamf Pro, and now we’re talking about running it serverless?

What does that even mean?

Serverless Is Serverless

Saying “it’s not serverless, there’s still a server” is about as on point as saying “it’s not the cloud, it’s just someone else’s computer”. There are servers, of course, but functionality is exposed through services. Take the AWS definition of serverless:

  • No server management.
    This is a major aspect of modern cloud providers that isn’t specific to serverless computing. If you’re using a service you shouldn’t ever be worrying about the configuration, management, and maintenance of the underlying resources.
  • Automatic scaling.
    Serverless services should scale easily and according to load. Not all offerings in the space are fully automatic without some configuration on our part. We’ll get to that later.
  • Pay for usage; don’t pay for idle.
    When you create a virtual machine it’s a static piece of infrastructure that you pay for hourly even if you aren’t utilizing it. Serverless on AWS states that resources should spin down automatically, or be able to spin down, and sit without incurring cost until they’re resumed or invoked.
  • High availability.
    This one has the same caveat as the scaling item. Across the spectrum, most serverless services will handle this for you automatically. Others will expose the ability, but you’ll need to take extra steps.

Jamf Pro is not a serverless application. In reality it’s a monolithic application – one big, self-contained web app – and it does require an always-running instance. In the past this would be a virtual machine. We would probably have MySQL installed and running on another VM too. As of our last post we know we can do this as a container, but what will that container run on?

AWS Managed Services

Deploying to AWS we have offerings that we can take advantage of to translate the Docker environment of the previous post to a fully hosted one that has no infrastructure for us to manage. We can also define everything about our infrastructure and service as code using CloudFormation.

CloudFormation is a lot like the Docker Compose file I showed in the previous post. It’s a JSON or YAML file that defines everything about what will be created as a part of a stack. This can include networking, services, databases, you name it. If it’s in AWS, it’s in CloudFormation*.

* Except when it’s not.

I’ll also break down what the estimated minimum costs for these are for a 30 day period. If you deploy the provided CloudFormation template in your account understand that you will incur costs.

Virtual Private Cloud (VPC)

We’re not deploying on a laptop and hitting localhost any more. Running in the cloud means we need to establish a network architecture that will keep our key resources protected.

We’re going to create a VPC with six subnets and the security groups allowing traffic between them. Now, the one that I’m going to walk through is not designed around high availability – one of the serverless tenets – in that two of our resources will not exist in multiple availability zones.

Availability zone? Think of it as a data center (but that’s not actually correct) within a given AWS region. For high availability you would architect a service to exist in two or more availability zones (and in some cases in multiple AZs in multiple regions!). Our load balancer and database require multiple subnets in different AZs. Our NAT and the web app will only exist in one purely to make this simpler and to save on short term costs while we create these resources.

Here’s the breakdown of our planned VPC:

  • 2x public load balancer subnets accepting HTTP traffic over port 80 or HTTPS over port 443 (I’ll explain later).
  • 1x public subnet for the NAT instance that accepts all traffic from our web app subnet out to the internet.
  • 1x private subnet for the web app containers that accepts traffic over port 8080 from our load balancer subnets.
  • 2x private database subnets for our database that accept traffic over port 3306 from the web app subnet.

Here’s a quick diagram that shows how this all looks.

There are no costs for any of the network configuration of our VPC.

Application Load Balancer (ALB)

Exposing any web server directly on the public internet, containerized or not, is not a good idea. Our load balancer will be deployed in our public subnet allowing traffic over port 80 or 443 depending on if we supply a certificate ARN to the CloudFormation stack.

What’s that about? The template I’ll be providing later has a couple of conditionals in it. By default, application load balancers only accept HTTP traffic over port 80. If you want to accept traffic over port 443 that means you need to provide your own certificate. I’ll walk through that later on when we’re getting ready for deployment, but know that if you can’t generate a valid certificate in your account you can still deploy this stack to accept traffic over port 80.

The HTTP option is only for demonstrative purposes. You shouldn’t communicate with a Jamf Pro instance over an unencrypted connection and send anything sensitive.

Sadly, an ALB is not a truly serverless service. It’s always running, and AWS charges per hour even if there’s no traffic. The estimated minimum cost in a month is $21.96.
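Where does $21.96 come from? It appears to be the hourly base rate plus a single LCU over a 30-day month. The rates below ($0.0225 per ALB-hour and $0.008 per LCU-hour) are my own lookup for us-east at the time of writing, not something from the stack itself, so verify them against the pricing page for your region:

```shell
# ALB base rate plus one LCU over a 30-day (720-hour) month
awk 'BEGIN {
  base = 0.0225 * 720
  lcu  = 0.008  * 720
  printf "$%.2f + $%.2f = $%.2f\n", base, lcu, base + lcu
}'
```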

> See Elastic Load Balancing Pricing for more details.

NAT Instances and Gateways

Our NAT (network address translation) is the first place we’re really breaking the serverless concept. The template defines a NAT instance using the smallest (and cheapest) EC2 instance size available, launched from an AWS-provided image.

There is an option for a NAT Gateway – which is in the template but commented out – that is fully managed, but you still pay for idle, and it requires at least two subnets in two AZs like our load balancer. However, it is also much more performant, and scales itself under load. It’s the clear choice for a production environment, but not for testing and experimenting.

We will need an AMI for the AWS region we plan to deploy to for the NAT instance. AMI stands for Amazon Machine Image. Think of it like a virtual machine snapshot. Amazon has many of these AMIs available, for free, to launch EC2 instances from by referencing the ID. The ID is going to be unique for each region, and there will be many different versions of certain AMIs as they’re updated over time, so we need a way to look up the latest.

I have a quick one-line command using the AWS CLI based off the instructions available in the KB Finding a Linux AMI. This documentation also describes how to locate an AMI using the AWS console if you prefer to do so using the GUI.

aws ec2 describe-images --region ${REGION} --filters 'Name=name,Values=amzn-ami-vpc-nat-*' 'Name=state,Values=available' --query 'reverse(sort_by(Images, &CreationDate))[:1].ImageId' --output text

This will provide the AMI ID of the latest image matching our name filter for the given ${REGION}. Use this to look up the appropriate AMI so you can pass the ID into the CloudFormation template later.

Estimated minimum on-demand cost for our t3a.nano EC2 NAT: $3.38. The estimated minimum cost for a NAT Gateway: $32.40.

See Amazon VPC Pricing and Amazon EC2 Pricing for more details.

Aurora Serverless

Did you know serverless SQL databases existed? Aurora Serverless is very interesting in that it truly does fit all four descriptions of a serverless service. When you create one of these instances you can configure it to automatically pause after a period of inactivity. No connections for 30 minutes? Paused. And you’re not billed for any hours where it isn’t running!

Seasoned Jamf Pro admins may have already spotted the problems here:

  1. Jamf Pro is chatty. Odds are good once you’ve enrolled something or configured connections to any external services the application will be querying the database often enough that pausing will never occur.
  2. If the database were ever to become paused, Jamf Pro would not be able to recover and would require a restart of the application. Jamf Pro depends on a highly available database on the backend.

So why use this offering? Cost-wise, it isn’t actually that much higher than the cheapest Aurora MySQL instance ($0.06/hour vs a db.t3.small at $0.041/hour), and we get auto-scaling along for the ride. Plus it’s even more managed for us than Aurora MySQL. And really… we’re using this kinda for funsies. It works, it works just fine, but it’s not supported by Jamf (Aurora is, Aurora Serverless is not), and Jamf Pro doesn’t even play to the strengths of this offering where we can pause it to save on money.

That being said, it’s much more expensive than spinning up the cheapest RDS MySQL option (a db.t3.micro at $0.017/hour).

Estimated minimum cost for Aurora Serverless under full utilization: $43.20.
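As a quick sanity check, the hourly rates quoted above translate to a 30-day (720-hour) month like so:

```shell
# Monthly cost at full utilization for each option (30 days = 720 hours)
awk 'BEGIN {
  printf "Aurora Serverless:        $%.2f\n", 0.06  * 720
  printf "Aurora MySQL db.t3.small: $%.2f\n", 0.041 * 720
  printf "RDS MySQL db.t3.micro:    $%.2f\n", 0.017 * 720
}'
```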

See Amazon Aurora Pricing and Amazon RDS for MySQL Pricing for more details.


Fargate

Here we finally get to the issue of running our Java web app in a serverless fashion. Jamf Pro can’t sit idle and not cost us money (unfortunately), but we can ensure that the application isn’t running on top of infrastructure that we are provisioning, configuring, and managing!

Enter Fargate. AWS’s container service, Elastic Container Service (ECS), allows us to roll our own clusters with auto-scaling groups of EC2 instances for the underlying resources that we can then run containers on top of. Lots of overhead there, but that’s the path to take when you’re at large scale and need to drive down costs by leveraging Spot Fleets and EC2 Reserved Instances.

We’re not touching any of that.

Fargate is an ECS cluster where we don’t think about any of that. We just launch containers in it and AWS takes care of the task scheduling and assignment of resources out of some nebulous pool that we have zero visibility into. It just works. Neat, huh?

Fargate is something I generally leverage for running long tasks that don’t fit within the constraints of a Lambda function. But for a static web app, especially one I want to run as a container, it makes a lot of sense to go this route rather than add all the overhead of managing a full-blown ECS cluster.

Fargate charges us for vCPU hours and GB of memory allocated during those hours. Jamf Pro is more memory bound than CPU bound, so we can save on the compute (the expensive part) and focus more on RAM. In the CloudFormation template I’ve allocated .5 vCPU and 2 GB of memory.

The gotcha is again comparing the cost to the alternative that requires more management and overhead. Fargate’s pricing puts it above the EC2 t3.small instance that has the same amount of memory but 2 vCPU allocated.

Estimated cost for running one container in Fargate at these allocations: $20.97.
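For the curious, that figure lines up with Fargate’s per-resource rates. The rates used below (roughly $0.04048 per vCPU-hour and $0.004445 per GB-hour in us-east at the time of writing) are my own lookup, so check AWS Fargate pricing for your region:

```shell
# 0.5 vCPU and 2 GB of memory over a 30-day (720-hour) month
awk 'BEGIN {
  hourly = 0.5 * 0.04048 + 2 * 0.004445   # vCPU rate + memory rate
  printf "$%.2f per month\n", hourly * 720
}'
```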

See Amazon Fargate Pricing and Amazon EC2 Pricing for more details.

The Grand Total

If we tally up the estimated minimum costs of all the components above…

$89.51 per month, or $2.98 per day. We detailed many areas where you can reduce this cost by trading away some of the managed aspects of the services we’ve chosen, but this pricing also isn’t indicative of a real production environment. In production you should use a NAT Gateway (more expensive), you probably should use an actual Aurora cluster, and you’re going to be running at least two, if not three, web app containers across multiple AZs.
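For reference, the tally (all figures pulled from the component sections above):

```shell
# Sum of the estimated minimum monthly costs from each section
awk 'BEGIN {
  alb = 21.96; nat = 3.38; aurora = 43.20; fargate = 20.97
  total = alb + nat + aurora + fargate
  printf "$%.2f per month, $%.2f per day\n", total, total / 30
}'
```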

The CloudFormation template that defines the environment described above is not intended for long term operation. You can deploy it to play around with a cloud hosted Jamf Pro server with production-like infrastructure and then you can tear it all down once you’re done.

And then deploy it all again.

That’s the beauty of infrastructure as code.

Elastic Container Registry

Before we deploy, we need something to deploy. In the previous post I walked through creating a Jamf Pro image using the WAR file you can obtain from Jamf Nation. In order for Fargate to use that image we have to publish it to a location it can pull from.

The public DockerHub probably isn’t the appropriate place.

But we have an internal option to AWS: Elastic Container Registry. It lives alongside ECS and EKS (#kubelife) and provides a private registry to host our images.

Create a Jamf Pro Repository

This will be quick, but you’re going to need an AWS profile and credentials set up locally to use with the aws CLI. If you don’t have the CLI installed you can follow the instructions here (version 1). Be sure to do this first.

  1. Log into the AWS console and head to ECR in the region you want to deploy to (I use us-east-2).
  2. Click Create repository in the upper-right corner.
  3. Enter jamfpro for the repository name.
  4. Click Create repository.

Now we can push our image built in the last post up here. If you click on your new repository you will see a button labeled View push commands in the upper-right. This gives you a cheat sheet pop-up for everything you need to do in the Terminal.

Log into ECR for Docker (copy the entire command, including the leading $( characters). Replace ${AWS_REGION} with the region of your repository.

$(aws ecr get-login --no-include-email --region ${AWS_REGION})

Tag your local image to match the new remote registry. Replace ${VERSION} with the one you built and tagged for your jamfpro image (in the previous post it was 10.17.0). Replace ${AWS_ACCOUNT} with your account number (it will be shown in that cheat sheet) and ${AWS_REGION} again.

docker tag jamfpro:${VERSION} ${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION}

Now push the image up to your AWS account! This may take a few minutes depending on your internet connection.

docker push ${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION}

This tagged image’s URI is what we need to deploy our CloudFormation stack. You can verify it in the AWS console when viewing the tagged images in the repository.
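The full URI follows ECR’s fixed pattern: account ID, region, registry domain, repository name, and tag. A quick sketch with an illustrative (fake) account number:

```shell
AWS_ACCOUNT=123456789012   # illustrative placeholder, not a real account
AWS_REGION=us-east-2
VERSION=10.17.0

# <account>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>
IMAGE_URI="${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION}"
echo "${IMAGE_URI}"
# → 123456789012.dkr.ecr.us-east-2.amazonaws.com/jamfpro:10.17.0
```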

The Certificate

At this point we need to generate a certificate if we want to launch our Jamf Pro stack using HTTPS. This certificate must exist in AWS Certificate Manager for the region you are deploying to. No way around that. To request a certificate, you need to own a domain (it doesn’t have to be managed by Route 53, but you must have control of it).

If you can’t check both of those boxes you will need to proceed with HTTP.

If you do, your experience will vary depending on where your domain lives and how you will handle validation. AWS offers DNS validation, which is handled automatically if the domain is managed by Route 53 (super slick), and still works if it isn’t as long as you can create DNS records for your domain, or email validation, where a message will be sent to the address on record.

For this step, I’m going to refer you to Amazon’s documentation: Request a Public Certificate – AWS Certificate Manager.

Deploying the Stack

And so, we finally come to it. Here is the CloudFormation template. Take a look through the resources and see how they map to what has been described above. Save this file to your computer. You have two ways of deploying this.


AWS CLI

Using the AWS CLI and your credentials you can run either of the following commands to launch the CloudFormation stack. The template contains a number of Parameters to set certain values unique to the deployment. Using the CLI, these need to be passed as part of the --parameter-overrides argument. As with the ECR commands, be sure to set or replace the variables with the correct values.

This command omits the CertificateArn and will launch the stack using HTTP.

aws cloudformation deploy \
    --region ${AWS_REGION} \
    --stack-name jamfpro-serverless-http \
    --template-file jamfpro-template.yaml \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides \
        NatImageId=${AMI_ID} \
        JamfProImageURI=${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION} \
        DatabaseMasterUsername=jamfadmin

This command includes the CertificateArn to enable HTTPS.

aws cloudformation deploy \
    --region ${AWS_REGION} \
    --stack-name jamfpro-serverless-https \
    --template-file jamfpro-template.yaml \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides \
        NatImageId=${AMI_ID} \
        JamfProImageURI=${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION} \
        CertificateArn=arn:aws:acm:${AWS_REGION}:${AWS_ACCOUNT}:certificate/${CERTIFICATE_ID} \
        DatabaseMasterUsername=jamfadmin

AWS Console

Alternatively, there’s nothing wrong with using the GUI. Go to CloudFormation in the region you wish to deploy to (the same region that has your ECR image and ACM certificate) and click on Create stack: With new resources.

Upload the template file you saved and walk through the interface to populate the values you want for the parameters (some have defaults you can stick with). There is a checkbox at the end, “I acknowledge that AWS CloudFormation might create IAM resources,” that you will need to click. This prompt appears because there are IAM roles in the template that will be created (in the CLI this was handled with the --capabilities CAPABILITY_IAM argument).

Click Create stack and watch the stream of events as the entire stack builds.

Access in Your Browser

This part depends on if you deployed the stack to use HTTP or HTTPS.

Check the Outputs tab for the CloudFormation stack in the console and copy the domain name for LoadBalancerDNS.

If your deployment used HTTP you can access Jamf Pro at that address.

If your deployment used HTTPS, you can now create a DNS record pointing to that address. If you are using Route 53 in AWS this will be an A alias record pointing to the load balancer’s domain. If not, you can create a CNAME record that points to it.


Once you’re done playing around with your cloud environment you can wipe everything by selecting the CloudFormation stack and clicking the Delete button.

That’s it.

The Next Phase

This mini series came about because I had a pretty bad idea. In order to actually work on this bad idea I needed to have a running instance of Jamf Pro out in AWS with the database in Aurora. In order to do that, I wanted to run the web app in Fargate. In order to do that I needed to make an image I could deploy. It made sense to share my work along the way.

The next phase of this journey is extending this Jamf Pro environment by taking advantage of the features that AWS provides. If you’re treating your cloud like a server rack you’re not building to its strengths. All the major players aren’t simply sitting there providing compute. There are services, APIs, frameworks, and ways of tying it all together.

We’re going to build something cool. So stay tuned for the next phase in our containerized journey:

“{ Spoilers }”