So You Want to Run Serverless Jamf Pro

Edit 2019-11-29: I had left a us-east-2 specific AMI ID for the NAT instance in the template. The template and this post have been updated with instructions on finding the correct NAT AMI for the region you wish to deploy to and passing that as a parameter.

First we went through containerizing Jamf Pro, and now we’re talking about running it serverless?

What does that even mean?

Serverless Is Serverless

Saying “it’s not serverless, there’s still a server” is about as on point as saying “it’s not the cloud, it’s just someone else’s computer”. There are servers, of course, but functionality is exposed through services. Take the AWS definition of serverless:

  • No server management.
    This is a major aspect of modern cloud providers that isn’t specific to serverless computing. If you’re using a service you shouldn’t ever be worrying about the configuration, management, and maintenance of the underlying resources.
  • Automatic scaling.
    Serverless services should scale easily and according to load. Not all offerings in the space are fully automatic without some configuration on our part. We’ll get to that later.
  • Pay for usage; don’t pay for idle.
    When you create a virtual machine it’s a static piece of infrastructure that you pay for hourly even if you aren’t utilizing it. Serverless on AWS states that resources should spin down automatically, or be able to spin down, and sit without incurring cost until they’re resumed or invoked.
  • High availability.
    This one has the same caveat as the scaling item. Across the spectrum, most serverless services will handle this for you automatically. Others will expose the ability, but you’ll need to take extra steps.

Jamf Pro is not a serverless application. In reality it’s a monolithic application – one big, self-contained web app – and it does require an always-running instance. In the past this would be a virtual machine. We would probably have MySQL installed and running on another VM too. As of our last post we know we can do this as a container, but what will that container run on?

AWS Managed Services

Deploying to AWS, we have managed offerings we can take advantage of to translate the Docker environment of the previous post into a fully hosted one with no infrastructure for us to manage. We can also define everything about our infrastructure and services as code using CloudFormation.

CloudFormation is a lot like the Docker Compose file I showed in the previous post. It’s a JSON or YAML file that defines everything about what will be created as a part of a stack. This can include networking, services, databases, you name it. If it’s in AWS, it’s in CloudFormation*.

* Except when it’s not.

I’ll also break down the estimated minimum costs for each of these over a 30 day period. If you deploy the provided CloudFormation template in your account understand that you will incur costs.

Virtual Private Cloud (VPC)

We’re not deploying on a laptop and hitting localhost anymore. Running in the cloud means we need to establish a network architecture that will keep our key resources protected.

We’re going to create a VPC with six subnets and the security groups allowing traffic between them. Now, the one that I’m going to walk through is not designed around high availability – one of the serverless tenets – in that two of our resources will not exist in multiple availability zones.

Availability zone? Think of it as a data center (but that’s not actually correct) within a given AWS region. For high availability you would architect a service to exist in two or more availability zones (and in some cases in multiple AZs in multiple regions!). Our load balancer and database require multiple subnets in different AZs. Our NAT and the web app will only exist in one purely to make this simpler and to save on short term costs while we create these resources.
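If you want to see which availability zones your account has in a given region, a quick AWS CLI call (assuming your credentials are configured) will list them:

aws ec2 describe-availability-zones --region ${REGION} --query 'AvailabilityZones[].ZoneName' --output text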

Here’s the breakdown of our planned VPC:

  • 2x public load balancer subnets accepting HTTP traffic over port 80 or HTTPS over port 443 (I’ll explain later).
  • 1x public subnet for the NAT instance that accepts all traffic from our web app subnet out to the internet.
  • 1x private subnet for the web app containers that accepts traffic over port 8080 from our load balancer subnets.
  • 2x private database subnets for our database that accept traffic over port 3306 from the web app subnet.

Here’s a quick diagram that shows how this all looks.

There are no costs for any of the network configuration of our VPC.

Application Load Balancer (ALB)

Exposing any web server directly on the public internet, containerized or not, is not a good idea. Our load balancer will be deployed in our public subnets allowing traffic over port 80 or 443 depending on whether we supply a certificate ARN to the CloudFormation stack.

What’s that about? The template I’ll be providing later has a couple of conditionals in it. By default, application load balancers only accept HTTP traffic over port 80. If you want to accept traffic over port 443 that means you need to provide your own certificate. I’ll walk through that later on when we’re getting ready for deployment, but know that if you can’t generate a valid certificate in your account you can still deploy this stack to accept traffic over port 80.

The HTTP option is only for demonstrative purposes. You shouldn’t communicate with a Jamf Pro instance over an unencrypted connection and send anything sensitive.

Sadly, an ALB is not a truly serverless service. It’s always running, and AWS charges per hour even if there’s no traffic. The estimated minimum cost in a month is $21.96.

See Elastic Load Balancing Pricing for more details.

NAT Instances and Gateways

Our NAT (network address translation) is the first place we’re really breaking the serverless concept. The template defines a NAT instance using the smallest (and cheapest) EC2 instance size available and an AWS-provided image.

There is an option for a NAT Gateway – which is in the template but commented out – that is fully managed, but you still pay for idle, and it requires at least two subnets in two AZs like our load balancer. However, it is also much more performant, and scales itself under load. It’s the clear choice for a production environment, but not for testing and experimenting.

We will need an AMI for the AWS region we plan to deploy the NAT instance to. AMI stands for Amazon Machine Image. Think of it like a virtual machine snapshot. Amazon has many of these AMIs available, for free, to launch EC2 instances from by referencing the ID. The ID is going to be unique for each region, and there will be many different versions of certain AMIs as they’re updated over time, so we need a way to look up the latest.

I have a quick one-line command using the AWS CLI based on the instructions available in the KB Finding a Linux AMI. This documentation also describes how to locate an AMI using the AWS console if you prefer to do so using the GUI.

aws ec2 describe-images --region ${REGION} --filters 'Name=name,Values=amzn-ami-vpc-nat-*' 'Name=state,Values=available' --query 'reverse(sort_by(Images, &CreationDate))[:1].ImageId' --output text

This will provide the AMI ID of the latest image matching our name filter for the given ${REGION}. Use this to look up the appropriate AMI so you can pass the ID into the CloudFormation template later.

Estimated minimum on-demand cost for our t3a.nano EC2 NAT: $3.38. The estimated minimum cost for a NAT Gateway: $32.40.

See Amazon VPC Pricing and Amazon EC2 Pricing for more details.

Aurora Serverless

Did you know serverless SQL databases existed? Aurora Serverless is very interesting in that it truly does fit all four descriptions of a serverless service. When you create one of these instances you can configure it to automatically pause after a period of inactivity. No connections for 30 minutes? Paused. And you’re not billed for any hours where it isn’t running!
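The CloudFormation template handles this for you, but as an illustrative sketch, this is roughly how the pause behavior is configured when creating a cluster yourself with the CLI (the identifier, credentials, and capacity values here are placeholders, not what the template uses):

aws rds create-db-cluster \
    --db-cluster-identifier jamfpro-db \
    --engine aurora \
    --engine-mode serverless \
    --master-username jamfadmin \
    --master-user-password ${SECRET} \
    --scaling-configuration MinCapacity=1,MaxCapacity=2,AutoPause=true,SecondsUntilAutoPause=1800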

Seasoned Jamf Pro admins may have already spotted the problems here:

  1. Jamf Pro is chatty. Odds are good that once you’ve enrolled something, or configured connections to any external services, the application will be querying the database often enough that pausing will never occur.
  2. If the database were ever to become paused, Jamf Pro would not be able to recover and would require a restart of the application. Jamf Pro does depend on a highly available database on the backend.

So why use this offering? Cost-wise, it isn’t actually that much higher than the cheapest Aurora MySQL instance ($0.06/hour vs a db.t3.small at $0.041/hour), and we get auto-scaling along for the ride. Plus it’s even more managed for us than Aurora MySQL. And really… we’re using this kinda for funsies. It works, it works just fine, but it’s not supported by Jamf (Aurora is, Aurora Serverless is not), and Jamf Pro doesn’t even play to the strengths of this offering where we could pause it to save on money.

That being said, it’s much more expensive than spinning up the cheapest RDS MySQL option (a db.t3.micro at $0.017/hour).

Estimated minimum cost for Aurora Serverless under full utilization: $43.20.

See Amazon Aurora Pricing and Amazon RDS for MySQL Pricing for more details.

Fargate

Here we finally get to the issue of running our Java web app in a serverless fashion. Jamf Pro can’t sit idle and not cost us money (unfortunately), but we can ensure that the application isn’t running on top of infrastructure that we are provisioning, configuring, and managing!

Enter Fargate. AWS’s container service, Elastic Container Service (ECS), allows us to roll our own clusters with autoscaling groups of EC2 instances for the underlying resources that we can then run containers on top of. Lots of overhead there, but that’s the path to go when you’re at large scale and need to drive down costs by leveraging Spot Fleets and EC2 reserved instances.

We’re not touching any of that.

Fargate is an ECS cluster where we don’t think about any of that. We just launch containers in it and AWS takes care of the task scheduling and assignment of resources out of some nebulous pool that we have zero visibility into. It just works. Neat, huh?

Fargate is something I generally leverage for running long tasks that don’t fit within the constraints of a Lambda function. But for a static web app, especially one I want to run as a container, it makes a lot of sense to go this route rather than add on all the overhead of managing a full-blown ECS cluster.

Fargate charges us for vCPU hours and GB of memory allocated during those hours. Jamf Pro is more memory bound than CPU bound, so we can save on the compute (the expensive part) and focus more on RAM. In the CloudFormation template I’ve allocated .5 vCPU and 2 GB of memory.
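For a back-of-the-napkin check on where the number below comes from, Fargate’s published us-east rates at the time of writing were roughly $0.04048 per vCPU-hour and $0.004445 per GB-hour (these rates change, so treat this purely as an estimate):

echo '0.5 * 0.04048 * 720' | bc -l    # ~$14.57 of compute over 30 days
echo '2 * 0.004445 * 720' | bc -l     # ~$6.40 of memory over 30 days

Add those together and you land on the estimate below.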

The gotcha is again comparing the cost to the alternative that requires more management and overhead. Fargate’s pricing puts it above the EC2 t3.small instance that has the same amount of memory but 2 vCPU allocated.

Estimated cost for running one container in Fargate at these allocations: $20.97.

See Amazon Fargate Pricing and Amazon EC2 Pricing for more details.

The Grand Total

If we tally up the estimated minimum costs of all the components above…

$89.51 per month, or $2.98 per day. We detailed many areas where you can reduce this cost by trading away some of the managed aspects of the services we’ve chosen, but this pricing also isn’t indicative of a real production environment. In production you should use a NAT Gateway (more expensive), you probably should use an actual Aurora cluster, and you’re going to be running at least two, if not three, web app containers across multiple AZs.

The CloudFormation template that defines the environment described above is not intended for long term operation. You can deploy it to play around with a cloud hosted Jamf Pro server with production-like infrastructure and then tear it all down once you’re done.

And then deploy it all again.

That’s the beauty of infrastructure as code.

Elastic Container Registry

Before we deploy, we need something to deploy. In the previous post I walked through creating a Jamf Pro image using the WAR file you can obtain from Jamf Nation. In order for Fargate to use that image we have to publish it to a location it can pull from.

The public DockerHub probably isn’t the appropriate place.

But we have an option internal to AWS: Elastic Container Registry. It lives alongside ECS and EKS (#kubelife) and provides a private registry to host our images.

Create a Jamf Pro Repository

This will be quick, but you’re going to need an AWS profile and credentials set up locally to use with the aws CLI. If you don’t have the CLI installed you can follow the instructions here (version 1). Be sure to do this first.

  1. Log into the AWS console and head to ECR in the region you want to deploy to (I use us-east-2).
  2. Click Create repository in the upper-right corner.
  3. Enter jamfpro for the repository name.
  4. Click Create repository.
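If you would rather stay in the Terminal, the console steps above map to a single CLI call (shown here for us-east-2; swap in your own region):

aws ecr create-repository --repository-name jamfpro --region us-east-2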

Now we can push our image built in the last post up here. If you click on your new repository you will see a button labeled View push commands in the upper-right. This gives you a cheat sheet pop-up for everything you need to do in the Terminal.

Log into ECR for Docker (copy the entire command, $ characters included). Replace ${AWS_REGION} with the region of your repository.

$(aws ecr get-login --no-include-email --region ${AWS_REGION})

Tag your local image to match the new remote registry. Replace ${VERSION} with the one you built and tagged for your jamfpro image (in the previous post it was 10.17.0). Replace ${AWS_ACCOUNT} with your account number (it will be shown in that cheat sheet) and ${AWS_REGION} again.

docker tag jamfpro:${VERSION} ${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION}

Now push the image up to your AWS account! This may take a few minutes depending on your internet connection.

docker push ${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION}

This tagged image URI is what we will need when deploying our CloudFormation stack. You can verify it in the AWS console when viewing the tagged images in the repository.
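If you prefer the CLI, you can confirm the pushed tag there as well:

aws ecr describe-images --repository-name jamfpro --region ${AWS_REGION}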

The Certificate

At this point we need to generate a certificate if we want to launch our Jamf Pro stack using HTTPS. This certificate must exist in AWS Certificate Manager for the region you are deploying to. No way around that. To request a certificate, you need to own a domain (it doesn’t have to be managed by Route 53, but you must have control of it).

If you can’t check both of those boxes you will need to proceed with HTTP.

If you do, your experience will vary depending on where your domain lives and how you will handle validation. AWS offers DNS validation, which can be handled automatically if the domain is managed by Route 53 (super slick), or manually if you have access to create DNS records for your domain, as well as email validation, where a message will be sent to the address on record.

For this step, I’m going to refer you to Amazon’s documentation: Request a Public Certificate – AWS Certificate Manager.
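If you go the DNS validation route, the request itself is a single CLI call (the domain below is a placeholder for one you control; you still need to create the validation records as described in the documentation):

aws acm request-certificate \
    --domain-name jamfpro.example.com \
    --validation-method DNS \
    --region ${AWS_REGION}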

Deploying the Stack

And so, we finally come to it. Here is the CloudFormation template. Take a look through the resources and see how they map to what has been described above. Save this file to your computer. You have two ways of deploying this.

AWS CLI

Using the AWS CLI and your credentials you can run either of the following commands to launch the CloudFormation stack. The template contains a number of Parameters to set certain values unique to the deployment. Using the CLI, these need to be passed as a part of the --parameter-overrides argument. As with the ECR commands, be sure to set or replace the variables with the correct values.

This command omits the CertificateArn and will launch the stack using HTTP.

aws cloudformation deploy \
    --region ${AWS_REGION} \
    --stack-name jamfpro-serverless-http \
    --template-file jamfpro-template.yaml \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides \
        NatImageId=${AMI_ID} \
        JamfProImageURI=${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION} \
        DatabaseMasterUsername=jamfadmin \
        DatabaseMasterPassword=${SECRET}

This command includes the CertificateArn to enable HTTPS.

aws cloudformation deploy \
    --region ${AWS_REGION} \
    --stack-name jamfpro-serverless-https \
    --template-file jamfpro-template.yaml \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides \
        NatImageId=${AMI_ID} \
        JamfProImageURI=${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/jamfpro:${VERSION} \
        CertificateArn=arn:aws:acm:${AWS_REGION}:${AWS_ACCOUNT}:certificate/${CERTIFICATE_ID} \
        DatabaseMasterUsername=jamfadmin \
        DatabaseMasterPassword=${SECRET}

AWS Console

Alternately, there’s nothing wrong with using the GUI. Go to CloudFormation in the region you wish to deploy to (and the same region that has your ECR image and ACM certificate) and click on Create stack: With new resources.

Upload the template file you saved and walk through the interface to populate the values you want for the parameters (some have defaults you can stick with). There is a checkbox at the end, “I acknowledge that AWS CloudFormation might create IAM resources”, that you will need to check. This prompt appears because there are IAM roles in the template that will be created (in the CLI this was handled with the --capabilities CAPABILITY_IAM argument).

Click Create stack and watch the stream of events as the entire stack builds.

Access in Your Browser

This part depends on if you deployed the stack to use HTTP or HTTPS.

Check the Outputs tab for the CloudFormation stack in the console and copy the domain name for LoadBalancerDNS.

If your deployment used HTTP you can access Jamf Pro at that address.

If your deployment used HTTPS, you can now create a DNS record for the domain on your certificate. If you are using Route 53 in AWS this will be an alias A record pointing to the load balancer’s domain. If not, you can create a CNAME record that points to it.
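As a sketch, here is what creating that CNAME could look like from the CLI if your zone happens to live in Route 53 (the hosted zone ID, record name, and load balancer domain are placeholders; an alias A record is the better choice in Route 53, but it needs the load balancer’s hosted zone ID, so a CNAME keeps the example simple):

aws route53 change-resource-record-sets \
    --hosted-zone-id ${HOSTED_ZONE_ID} \
    --change-batch '{
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "jamfpro.example.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": "my-load-balancer-123456789.us-east-2.elb.amazonaws.com"}]
            }
        }]
    }'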

Teardown

Once you’re done playing around with your cloud environment you can wipe everything by selecting the CloudFormation stack and clicking the Delete button.
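The CLI equivalent, if you deployed with one of the commands above (use whichever stack name you chose):

aws cloudformation delete-stack --stack-name jamfpro-serverless-https --region ${AWS_REGION}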

That’s it.

The Next Phase

This mini series came about because I had a pretty bad idea. In order to actually work on this bad idea I needed to have a running instance of Jamf Pro out in AWS with the database in Aurora. In order to do that, I wanted to run the web app in Fargate. In order to do that I needed to make an image I could deploy. It made sense to share my work along the way.

The next phase of this journey is extending this Jamf Pro environment by taking advantage of the features that AWS provides. If you’re treating your cloud like a server rack you’re not building to its strengths. All the major players aren’t simply sitting there providing compute. There are services, APIs, frameworks, and ways of tying it all together.

We’re going to build something cool. So stay tuned for the next phase in our containerized journey:

“{ Spoilers }”

Possum is dead; long live the squirrel.

Sam Build

A part of me is sorry to say that the title is not clickbait. Just before re:Invent, the AWS SAM developers made a pretty big announcement:

SAM CLI Introduces sam build Command

You can now use the sam build command to compile deployment packages for AWS Lambda functions written in Python using the AWS Serverless Application Model (AWS SAM) Command Line Interface (CLI).

All you need to do is*:

sam build

This command will iterate over your SAM template and output ready-to-package versions of your template and Python Lambdas to a .aws-sam/build directory. This lines up exactly with work I was preparing to do for possum, but AWS has gone ahead and done all the work.

* With other required arguments depending on your environment.

In fact, you’ll find that sam build nearly has feature parity with possum with a few exceptions which I’ll go into. Let’s take a look at what one of my serverless projects looks like as an example:

MyApp/
├── src/
│   └── functions/
│       └── MyLambda/
│           ├── my_lambda.py
│           └── requirements.txt
├── Pipfile
├── Pipfile.lock
└── template.yaml

I use pipenv for managing my development environments. The project’s overall dependencies are defined in my Pipfile while the pinned versions of those dependencies are in the Pipfile.lock. Individual dependencies for my Lambdas are defined in their own requirements.txt files within their directories.

I use PyCharm for all of my Python development. Using pipenv to manage the individual virtual environment for a given project allows me to take advantage of the autocompletion features of PyCharm across all the Lambda functions I’m working on. I maintain the individual requirements.txt files for each of my Lambdas and have their listed packages match the version in my Pipfile.lock (I have scripting in possum 1.5.0 that manages syncing the package versions in the requirements.txt files for me).
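As a rough sketch of that syncing (possum 1.5.0 scripts it for you), you can dump the pinned versions out of the lock file and filter down to the packages a given Lambda actually needs (the package names in the grep pattern are just examples):

pipenv lock -r | grep -E '^(cryptography|jsonschema)==' > src/functions/MyLambda/requirements.txt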

Now, when I run sam build it will perform all the same actions as possum, but instead of creating the zipped archive and uploading straight to S3 the built Lambdas will be available within the project’s directory.

Possum was originally written as a replacement for sam package that would include dependencies. It would upload the Lambda packages directly to an S3 bucket.

MyApp/
├── .aws-sam/
│   └── build/
│       ├── MyLambda/
│       │   ├── installed_dependency/
│       │   │   └── {dependency files}
│       │   └── my_lambda.py
│       └── template.yaml
├── src/
│   └── functions/
│       └── MyLambda/
│           ├── my_lambda.py
│           └── requirements.txt
├── Pipfile
├── Pipfile.lock
└── template.yaml

The new template located at .aws-sam/build/template.yaml has had the CodeUri keys updated to reference the relative paths within the .aws-sam/build directory. You will see that these copies of the Lambda code now contain all the dependencies that were defined within the requirements.txt file.

The example above generalizes this, so here’s a concrete case: the ApiContributorRegistration Lambda for CommunityPatch installs the cryptography and jsonschema packages. This is what the output looks like for a built Lambda:

CommunityPatch/
├── .aws-sam/
    └── build/
        └── ApiContributorRegistration/
            ├── asn1crypto/
            ├── asn1crypto-0.24.0.dist-info/
            ├── cffi/
            ├── cffi-1.11.5.dist-info/
            ├── cryptography/
            ├── cryptography-2.4.1.dist-info/
            ├── idna/
            ├── idna-2.7.dist-info/
            ├── jsonschema/
            ├── jsonschema-2.6.0.dist-info/
            ├── pycparser/
            ├── pycparser-2.19.dist-info/
            ├── schemas/
            ├── six-1.11.0.dist-info/
            ├── _cffi_backend.cpython-36m-x86_64-linux-gnu.so
            ├── api_contributor_registration.py
            ├── requirements.txt
            └── six.py

Dependencies usually have dependencies of their own (those two packages became seven!). And that’s just one Lambda.

Sam Invoke

Now, at this point you could take the output from sam build and perform sam package to get everything loaded into S3 and have a deployment template to run in CloudFormation. However, now that we have a build template we can take advantage of the SAM CLI’s powerful test features which possum was working towards adopting:

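Here’s a sketch of what that looks like (the function name and event type are examples, and the generate-event subcommands vary a bit between SAM CLI versions):

sam local generate-event apigateway aws-proxy > event.json
sam local invoke MyLambda --event event.json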

We can unit test our Lambdas using generated AWS events from the SAM CLI! I’ll cover my workflow for this in more detail at a later time, but before deploying the entire app out to AWS we can now perform some sanity checks that the Lambdas should execute successfully when given a proper payload. Ideally, you would want to generate multiple event payloads to cover a variety of potential invocations.

Sam Package/Deploy

From here the standard package and deploy steps follow (using either the sam or aws CLI tools) which I won’t cover here as I’ve done so in other posts. The full process referencing the new .aws-sam/build directory looks like this:

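Here’s a sketch of that sequence (the S3 bucket and stack name are placeholders):

sam build

sam package \
    --s3-bucket my-deployment-bucket \
    --output-template-file deployment.yaml

sam deploy \
    --template-file deployment.yaml \
    --stack-name MyApp \
    --capabilities CAPABILITY_IAM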

sam package knows to use the template output from sam build without having to specify the path to it!

Gotchas

While all of this is great, let’s cover the exceptions I alluded to earlier.

sam build will perform the build every single time. Even if you don’t make changes between builds it will still rebuild all your functions. This is agonizingly slow. Preventing unneeded builds was one of the first features that went into possum to speed up my personal development. The AWS devs have been listening to some of my feedback on how I implemented this and are looking into adopting a similar solution for sam build.

Every Lambda must have a requirements.txt file even if they don’t have any external dependencies. I ran into this one right away. At the moment, sam build expects there to always be a requirements.txt file within a Lambda function’s code directory. Use a blank file for simple Lambdas as a workaround. The AWS devs are aware of this and will be fixing it.

python must resolve to a Python environment of the same version as your serverless app. If python resolves to a different version (like on a Mac where it resolves to the 2.7 system executable) activate a virtual environment of the correct version as a workaround. You should be able to easily do this if you’re using pipenv by running pipenv shell. The reason this isn’t an issue for possum is because possum relies on pipenv for generating the correct Python build environment based upon the runtime version defined in the template. The AWS devs have been taking my feedback and are looking into this.

Edit: The below wheel issue is fixed in sam 0.8.1!

You may run into the error message “Error: PythonPipBuilder:ResolveDependencies – {pycparser==2.19(sdist)}”. This happens if you’re using a version of Python that didn’t include the wheel package. This will be fixed in a future release, but you can pip install wheel in the Python environment that sam was installed to as a workaround.

You’re also going to run into that error when you try to use the --use-container option because the Docker image pulled for the build environment is also missing that package.

The workaround is to build an intermediary image based on lambci/lambda:build-python3.6, install the wheel package, and then tag it using the same tag (yes, you’re tagging an image to override the existing tag with your own custom one). This will also be fixed in a future release.
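A sketch of that workaround using a stdin Dockerfile (you’re intentionally reusing the lambci tag so the build picks up your patched image):

docker build -t lambci/lambda:build-python3.6 - <<'EOF'
FROM lambci/lambda:build-python3.6
RUN pip install wheel
EOF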

Jamf The Gathering: A Slack Bot’s Story

JNUC happened recently. You might have heard about it. Some stuff came up. There was a really awesome burn during the opening keynote.

It was a really nice burn.

Continue reading “Jamf The Gathering: A Slack Bot’s Story”

Possum – A packaging tool for Python AWS Serverless Applications

The applications I build on AWS are all written in Python using the Serverless Application Model (SAM). Building my applications using a template and Lambda functions, I quickly ran into a limitation of the aws command line tools: external dependencies.

If your Lambda functions have no dependencies (not including the AWS SDKs), or you pre-download and embed them alongside your code, the standard package command works:

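For reference, that standard command looks something like this (the bucket is a placeholder for one in your account):

aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket my-deployment-bucket \
    --output-template-file deployment.yaml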

However, if you want to install dependencies at the time of packaging the application, you are left in a position where you need to roll your own build system. Amazon provides instructions on creating a Python deployment package, but it would be nice if running the aws cloudformation command did this for us.

Possum

I wrote a packaging tool to fill the gap left by Amazon’s. Possum (an amalgamation of “Python AWS SAM”) processes a SAM template file just as aws cloudformation package does, but creates per-function Lambda deployment packages if it detects a requirements file within the function’s directory (Pipfile or requirements.txt).

Possum can be installed from the Python Package Index:

pip install possum

Once installed, Possum becomes available as a command line tool (it is loaded into your Python installation’s /bin directory):

[Screenshot: running possum from the command line after installation]

What Possum does is iterate over the Resources section of your SAM template and find all the objects of the AWS::Serverless::Function type, determine the location of their code using the CodeUri property, and through the magic of Pipenv create individual virtual environments to download the external dependencies, if any, and zip the files together into a Lambda package. Once the package and upload process is complete, Possum will either print your updated deployment template on the screen or write it out to a filename that you specified.

[Screenshot: possum’s packaging output for the HelloWorld and Authorizer functions]

In the above example, my HelloWorld function didn’t have any defined dependencies within its directory so the contents were zipped up as they were. For the Authorizer, there was a Pipfile present which triggered the build process. The approach to Lambda function dependencies with Possum is to handle them on a per-function basis. This creates artifacts that only include the required packages for that function (or none).

Pipenv is not installed with Possum. Instead, Possum will shell-out to run the Pipenv commands (so you will need to have Pipenv installed separately).

After Possum has finished, I can take the deployment.yaml file and deploy the application using aws cloudformation deploy or the AWS console.

Try It Out

If you’re working with Python Lambda functions, please give Possum a try! If you encounter an issue, or have a feature request, you can open an issue on the GitHub page.

Possum’s GitHub Page
https://github.com/brysontyrrell/Possum

Possum on the Python Package Index
https://pypi.org/project/possum/