So You Want to Containerize Jamf Pro

Containers. They’re what all the cool kids talk about these days.

But really, if you haven’t been moving everything you’re doing to either containerized or serverless deployments you may want to make a commitment to putting that at the forefront of your 2020 objectives.

And yeah, we’re already well into that internally at Jamf (more on that in the future).

If you’re a Jamf admin who has yet to migrate to Jamf Cloud and still operates your own environment, you may have already considered moving off your virtual machine (or, shudder, bare metal) installs.

But where to start?

Let’s start with the basics. We’re going to use Docker on your laptop to get Jamf Pro up and running without having to install Java, Tomcat, or MySQL.

You’re Going to Need Docker

Before continuing, you’re going to need Docker Desktop (17.09 is the latest at time of posting). Install it before moving on – you won’t need anything else!

Mac: https://docs.docker.com/docker-for-mac/install/

Windows 10: https://docs.docker.com/docker-for-windows/install/

Building a Jamf Pro Docker Image

First thing we need is a Docker image we can deploy.

What is an image? A Docker image is an artifact that contains everything needed to run an application. Think of it like a virtual machine snapshot, but immutable and highly portable. Containers are the running processes launched using an image.

To build an image we need a Dockerfile that contains all the instructions on what to install and configure. Engineers at Jamf maintain a “starter” image that we will be using for our environment.

The source code is located here: https://github.com/jamf/jamfpro

But, we don’t need to build this image ourselves. The latest version of it is already available on DockerHub: https://hub.docker.com/r/jamfdevops/jamfpro

DockerHub is a public repository of already built images you can pull down and use! You’ll find all sorts of images readily available from members of the open source community as well as software vendors.

The jamfdevops/jamfpro image is going to be the foundation of the jamfpro image we launch.

Some of you might be thinking at this point, “Why are we building another image from this one?” True, we can use the image on its own and provide a ROOT.war file at run time, but this is only helpful for one-off testing.

In a production, or production-like, setting we won’t be running docker commands to launch our instances. We’ll be following best practices with pipelines, and infrastructure and configuration as code. To accomplish this we need an immutable deployment artifact.

Pull a copy of this image down to your computer before continuing (it doesn’t have a latest tag so you have to specify – 0.0.10 is the latest at time of posting).

docker pull jamfdevops/jamfpro:0.0.10

Download the ROOT.war

We need the WAR file for a manual installation of Jamf Pro first. Customers can obtain one from their Assets page on Jamf Nation: https://www.jamf.com/jamf-nation/my/products

Yes, you have to be a customer for this step. ¯\_(ツ)_/¯

Click Show alternative downloads and then Jamf Pro Manual Installation. Extract the zip file into an empty directory.

Build the Deployable Image

Here’s a quick script that will use the jamfdevops/jamfpro image as a base for our output deployment image. The VERSION variable can be changed for what you are using (10.17.0 is the current at time of posting).

# Run from a directory that _only_ contains your ROOT.war file.
# Change VERSION to that of the ROOT.war being deployed

VERSION=10.17.0

docker build . -t jamfpro:${VERSION} -f - <<EOF
FROM jamfdevops/jamfpro:0.0.10
ADD ROOT.war  /data/
EOF

Grab this script on my GitHub as a Gist: build_jamfpro_docker_version.py

I’m using an inline Dockerfile for this script. FROM tells it what the base image is. ADD is the command that will copy the ROOT.war file into the image’s /data directory. This is where the jamfdevops/jamfpro startup script checks for a WAR file to extract into the Tomcat webapps directory.

You should see the following after a successful build of the image.

Sending build context to Docker daemon  227.4MB
Step 1/2 : FROM jamfdevops/jamfpro:0.0.10
 ---> f87f303293bc
Step 2/2 : ADD ROOT.war  /data/
 ---> 83cd51f8b622
Successfully built 83cd51f8b622
Successfully tagged jamfpro:10.17.0

To learn more, see https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

Create a Docker Network

Our environment is going to need two running containers: the Jamf Pro web app, and a MySQL database. We’re going to create a virtual network for them to launch in that will allow communication between the web app and the database.

It’s a quick one-liner.

docker network create jamfnet

Our new jamfnet network is a bridge network (the default). We don’t need to do any additional configuration for our purposes today.

To learn more, see https://docs.docker.com/network/bridge/

Start a MySQL Container

Now to start a database. We’re going to use the official mysql:5.7 image from DockerHub. This image gives us some handy shortcuts that make our local environment easier to set up, but let’s talk about what we’re about to do and why you shouldn’t do this in production, or production-like, environments.

  • We’re running a database in a container.
    This isn’t necessarily something you shouldn’t do, but the way we’re doing it is not production grade. Running databases in Docker is fantastic for having the service available without needing to install and configure it yourself, but you should really, really know what you’re doing.
    Protip: use managed database services from your provider of choice (which we’re going to explore in the near future).
  • We’re not using a volume to persist the database.
    Containers are ephemeral. They don’t preserve state or data without the use of external volumes (either on the host or remote). You can preserve the data on disk by mounting a local directory into the running container using an additional argument: -v /my/local/dir:/var/lib/mysql.
  • We’re allowing the image startup scripting to create the Jamf Pro database.
    The MySQL image has a handy feature exposed through the MYSQL_DATABASE environment variable to create a default database. This is a shortcut for setting up Jamf Pro as we don’t have to connect afterwards to create it, but this means the only user available is root which leads us to the last point…
  • We’ll be using the root MySQL user for Jamf Pro.
    Never do this in production. For our local test environment it’s fine – we’re not hosting customer or company data and it’s temporary.

Refer to this Jamf Nation KB on how to properly setup the Jamf Pro database on MySQL: https://www.jamf.com/jamf-nation/articles/542/manually-creating-the-jamf-pro-database

With all that said, let’s take a look at the docker run command to start up our test MySQL database.

docker run --rm -d \
    --name jamf_mysql \
    --net jamfnet \
    -e MYSQL_ROOT_PASSWORD=jamfsw03 \
    -e MYSQL_DATABASE=jamfsoftware \
    -p 3306:3306 \
    mysql:5.7

The --rm argument tells Docker to delete the container once it stops. If you omit this then you can stop and restart containers while preserving their last state (going back to the issue of not preserving our MySQL data in a mounted volume: it would still persist while the container was stopped).

-d is short for --detach and will start the container in a new process without tying up the current shell. If you omit this you will remain attached to the running process and view the log stream.

These two options are specific to us treating this as a short lived test environment. In a production container deployment you wouldn’t even think about this because you won’t be deploying the containers with the docker command.

The --name argument provides a friendly name we can use to interact with our container instead of the randomized ones that will be generated otherwise. This is important when it comes to the internal networking to Docker.

With --net (short for --network), the container will launch attached to our jamfnet network, and it will be discoverable via DNS by any other container in the same network. Our web app will be able to reach its database by connecting to mysql://jamf_mysql:3306.

The two -e arguments set the values for environment variables in the running container. Most configurable options for a containerized service are handled through environment variables. You can use the same image across multiple environments by changing the values that are passed.

In addition to the web app container, we’re going to want to be able to interact with the data being written to MySQL from our own command line or utilities. In order to do so, we must expose the service’s ports on the host. The -p or --publish argument is a mapping of host ports to container ports. MySQL communication is over port 3306 so we are mapping that port on our computer to the same port on our container.

Run the command and you will see a long randomized ID printed out. You can verify what containers are running by typing docker ps -a.

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                               NAMES
bd2e35ebcdd1        mysql:5.7           "docker-entrypoint.s…"   40 seconds ago      Up 39 seconds       0.0.0.0:3306->3306/tcp, 33060/tcp   jamf_mysql

To learn more about the MySQL Docker image, view the documentation on DockerHub: https://hub.docker.com/_/mysql

Connect Using MySQL CLI or MySQL Workbench

Now that MySQL is up and running we can connect to it either on the command line using mysql or with a GUI application such as MySQL Workbench.

If you want to use the CLI, you don’t need to install anything additional. You already have everything you need with the mysql:5.7 Docker image!

docker run --rm -it \
    --net jamfnet \
    mysql:5.7 \
    mysql -u root -h jamf_mysql -p

There are a few new things here. Instead of running a service and detaching it, we are telling it to launch the container with an interactive terminal with the -it arguments (--interactive and --tty respectively).

We’re also passing a command after the name of the image we want to use. This overrides the entrypoint for the image (I’ve been referring to this as a “startup” script up until now). The entrypoint is the default command that runs for an image if another is not provided.

You can also see that we’ve attached to the jamfnet network again and are passing the name of our MySQL container as the host argument for mysql. Even if we didn’t have port 3306 exposed, this method would allow us to launch a shell with access to private resources on the network.

But, because we do have port 3306 exposed, we can run this container in a slightly different way.

docker run --rm -it \
    --net host \
    mysql:5.7 \
    mysql -u root -h 127.0.0.1 -p

The host network pretty much does as it sounds. This container will come up without the network isolation that bridge networks provide and will access resources much like other clients would. Here you can see we pass the localhost IP instead of the container name and the MySQL connection will be established.

Launch a Jamf Pro Container

We’re finally ready to launch the Jamf Pro web app container itself. The command for this is going to look very similar to what we did with MySQL.

docker run --rm -d \
    --name jamf_app \
    --net jamfnet \
    -e DATABASE_USERNAME=root \
    -e DATABASE_PASSWORD=jamfsw03 \
    -e DATABASE_HOST=jamf_mysql \
    -p 80:8080 \
    jamfpro:10.17.0

Nothing connects to the Jamf Pro container by name (MySQL doesn’t initiate connections to Jamf Pro), so naming it doesn’t really do anything for us here, but it does make managing the container using docker a little easier.

We have a different set of environment variables specific to our Jamf Pro image. Again, an image is a static artifact we use for a deployment. The deployment is customized through the use of environment variable values that are applied on startup (the entrypoint scripting). You could easily spin up numerous local Jamf Pro instances all pointing to their own unique databases just by switching out those values all from the one image.
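To illustrate that point, here’s a quick sketch that renders docker run commands for multiple instances from the single image, varying only the environment values (the helper and instance names are mine, purely for illustration):

```python
def run_command(name, db_host, db_password, port, version="10.17.0"):
    """Render a docker run command for one Jamf Pro instance; only the
    environment values and host port change between instances, never the image."""
    return (
        "docker run --rm -d --name {name} --net jamfnet "
        "-e DATABASE_USERNAME=root "
        "-e DATABASE_PASSWORD={password} "
        "-e DATABASE_HOST={host} "
        "-p {port}:8080 jamfpro:{version}"
    ).format(name=name, password=db_password, host=db_host, port=port, version=version)

# Two hypothetical instances, each pointing at its own database container
dev_cmd = run_command("jamf_dev", "mysql_dev", "jamfsw03", 8080)
qa_cmd = run_command("jamf_qa", "mysql_qa", "jamfsw03", 8081)
```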

We’re also passing the MySQL container name for the DATABASE_HOST like in our previous example with the mysql CLI.

There’s one difference with our -p argument. We’re mapping port 80 on our host to port 8080 of the container (which is what Jamf Pro uses when it comes up). This is a handy feature of publishing ports: you can effectively perform port forwarding from the host to the container. Instead of interacting with the web app at http://localhost:8080 in our browser we can just use http://localhost.

Run the command and you will again see another long randomized ID printed. Verify the running containers using docker ps -a as before.

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                               NAMES
382d3fd60afe        jamfpro:10.17.0     "/startup.sh"            24 seconds ago      Up 22 seconds       0.0.0.0:80->8080/tcp                jamf_app
bd2e35ebcdd1        mysql:5.7           "docker-entrypoint.s…"   43 minutes ago      Up 42 minutes       0.0.0.0:3306->3306/tcp, 33060/tcp   jamf_mysql

Access in Your Browser

Open up Safari, enter localhost into the address bar, and you’ll be greeted by the warm glow of a Jamf EULA.

Consistency, Repeatability

You’ve achieved your first major milestone: you’re running Jamf Pro in a containerized environment. Now, how do we take what we have done above and ensure we can repeat it several hundred (or thousand) times exactly as we did when we spun everything up manually?

Infrastructure/configuration as code is the means to achieve this. The way you implement this is going to be different depending on where it is you’re deploying to. For Docker, the tool we want to use to define how to bring up Jamf Pro is Docker Compose (which I’ve presented about).

If you installed Docker Desktop you will already have this CLI tool available to you! Docker Compose uses YAML files to define the elements of your application (services, networks, volumes) and allows you to spin up and manage environments from them.

Here’s a Docker Compose file that automates everything we went through in this post.

version: "3"
services:
  mysql:
    image: "mysql:5.7"
    networks:
      - jamfnet
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: "jamfsw03"
      MYSQL_DATABASE: "jamfsoftware"
  app:
    image: "jamfpro:10.17.0"
    networks:
      - jamfnet
    ports:
      - "80:8080"
    environment:
      DATABASE_USERNAME: "root"
      DATABASE_PASSWORD: "jamfsw03"
      DATABASE_HOST: "mysql"
    depends_on:
      - mysql
networks:
  jamfnet:

Grab this file on my GitHub as a Gist: jamfpro-docker-compose.yml

Notice anything about the attributes on our services? Our definitions map almost 1:1 with the CLI arguments to docker run. The service’s key becomes the name we are able to use as a reference just as before. In the app service (previously jamf_app) we are passing mysql as the DATABASE_HOST – Docker’s DNS magic continuing to do the work for us. There’s also a Docker Compose specific option here, depends_on, which tells Docker Compose that the mysql service must finish starting before it brings up the app.

The beauty of Docker Compose is that it takes very little explaining to understand once you’ve already done some work with the docker CLI. From the manual commands we ran as a part of this post you can understand what the YAML file is defining and what will happen when we run it.

To try it out, save the above into a docker-compose.yml file and run the following command from the same directory (I’m in a directory called build):

/build % docker-compose up --detach
Creating build_mysql_1 ... done
Creating build_app_1   ... done
/build %

Fast, consistent, and repeatable deployments from a single definition. To tear this stack down and clean up, run:

/build % docker-compose down
Stopping build_app_1   ... done
Stopping build_mysql_1 ... done
Removing build_app_1   ... done
Removing build_mysql_1 ... done
Removing network build_jamfnet
/build %

The Next Phase

From here your Jamf Pro server is up, running, and ready for whatever testing you have in store. Ideally, as you progress on this journey, the Docker image you use at this step would be the one that you are ultimately deploying to production. The running application itself is identical at each stage it moves through your deployment processes.

You’ve taken your first steps with running Jamf Pro containerized on your computer, but as alluded to in the beginning this is only an exercise in the basics. A production environment is not going to be a laptop with Docker installed (…at least, I certainly hope not). What you’re most likely looking at in that case is a managed container service in the cloud from one of the big three: Amazon Web Services, Google Cloud Platform, or Microsoft Azure.

If you’ve followed me long enough you’ll know that I’m an AWS developer. That’s where my production environment would be, and I can define in CloudFormation templates (AWS’s infrastructure as code implementation) the things I’m going to need:

  • Virtual Private Cloud
  • Application Load Balancer
  • Fargate Cluster
  • Aurora MySQL Database (with a twist)

So stay tuned for the next phase in our containerized journey:

“So You Want to Run Serverless Jamf Pro”

MacAdmins 2018

Four Years of MacAdmins

Back in February of this year I was able to present at MacAD.UK in London (I attended in 2017; had a blast both times). This marked my eighth appearance at a conference as a speaker since joining Jamf in 2012 as the second member of their fledgling IT department. To be fair, four of those appearances were at JNUC. ¯\_(ツ)_/¯

In about a month, I’ll be making my fourth appearance, third speaking, at the MacAdmins Conference at Penn State. I have loved this conference every year I’ve attended, and credit is due to the organizers who accumulate a great roster of speakers with a range of content subjects. You’re never without something to listen to.

My first time speaking here, in 2016, I gave what would end up being my most widely viewed presentation to date: Craft Your Own GUIs with Python and Tkinter. The video on YouTube has garnered an insane 82K+ views. I’ll attribute much of that to the subject’s appeal outside of Mac admin circles.

On the second round in 2017 I went a bit further. I attempted, to mixed results, a half day workshop on building Jamf Pro Integrations along with another presentation: How Docker Compose Changed My Life. The workshop had a number of challenges that were all lessons I took to heart for the future: I had drastically underestimated the time needed for my content (we didn’t finish), the notice about prerequisite experience was lost from the sched.com listing, and having no helpers to assist with questions caused us to pause frequently as I went around the room.


This year I’ll be doing another double feature, but no workshop. Two presentations at the 2018 conference!

Bryson’s doing a Jamf preso?

It’s true. Not counting JNUC, I will be delivering my first official Jamf presentation at a conference. Our gracious Marketing department offered our sponsor slot to me and even allowed me to pick whatever I wanted for the subject!

My choice is something near and dear to me: the recently announced Jamf Marketplace. Why is this near and dear? Creating integrations with Jamf Pro has been a passion of mine, and the Marketplace is a step towards a beautiful future where admins and developers can publish their work for all to share in. I’m very excited for this one.

Session Link: Get Your Tools in Front of Thousands with the Jamf Marketplace

Talking Serverless and AWS

My personal session (not affiliated with Jamf) is all about the new focus in my professional life: serverless application architectures in AWS. That alone can be a pretty broad subject. My presentation will focus on Lambda: the AWS service for running code without servers.

There is a lot of potential for Lambda within your org if you have an AWS account, or would be allowed to create one (you’d be shocked at what you can achieve within the free tier – which I’ll touch on). Beyond the tried and true cron job, you can implement all sorts of crazy event-driven workflows with custom processing handled by Lambda functions you’ve written in your preferred language (which is Python, right?).

I’ll be doing a deep dive into the subject. We’ll cover the basics of Lambda, how IAM permissions work and how to apply them, the best practices of defining and deploying using CloudFormation (what I call template-first development), and hopefully more if time allows. It’s an area I’ve become very passionate about and I’m so looking forward to being able to present on this to the Mac admin community.

Session Link: Diving into AWS Lambda: An Intro to Serverless for Admins


I hope to see you next month! If you don’t find me wandering the halls between sessions, please reach out on Slack, or peek into Legends. It’s a favorite.

If you’re interested in the presentations I’ve done over the years at various conferences, you can find that list with YouTube links here.

CommunityPatch.com (beta)

In previous posts, I talked about two projects I had been working on for the Jamf community to better help admins get started using the new “External Patch Sources” feature in Jamf Pro 10.2+. While working on Patch Server and the companion Patch-Starter-Script, I also wrote a quick proof of concept for a serverless version that would run in an AWS account.

The Stupid Simple Patch Server uses API Gateway and Lambda functions to serve patch definitions stored in an S3 bucket. I included the same API endpoints from the Patch Server so workflows between the two could be shared, and I took it a step further by adding a subscription API so it would sync with a remote patch definition via URL.

That side project (of a side project) made me think about how I could take the basic design and build upon it into something that could be used by multiple admins. At first, I wrote a lot of code to transform the Stupid Simple Patch Server into a multi-tenant application. At a certain point I considered the limitations of what could be done securely and scrapped much of it.

But not everything. The work I had done was retooled into a new concept: a single, public, community managed patch source for Jamf Pro. A service where anyone could contribute patch definitions, and be able to manage and update them. Five minutes after having this idea I bought the communitypatch.com domain and setup a beta instance of my work-in-progress:

https://beta.communitypatch.com


New API

The big green “Read the docs” button on the main page will take you to… the documentation! There you will find those APIs in much greater detail.

The community managed patch source mirrors a number of features from my Patch Server project. The /jamf endpoints are here to integrate with Jamf Pro and the service can be used as an external patch source.

The /api endpoints are slightly different from the Patch Server, but allow for creating definitions by providing the full JSON or a URL to an external source (creating a synced definition) and updating the versions afterwards.

From the docs, here’s the example for creating a new patch definition using the Patch-Starter-Script:

curl https://beta.communitypatch.com/api/v1/title \
   -X POST \
   -d "{\"author_name\": \"<NAME>\", \"author_email\": \"<EMAIL>\", \"definition\": $(python patchstarter.py /Applications/<APP> -p "<PUBLISHER>")}" \
   -H 'Content-Type: application/json'

Here, there are required author_name and author_email keys you need to provide when creating a definition. The author_name you choose will be injected into the ID and name keys of the definition you’re providing.

For example, if I provide “Bryson” for my name, and I’m creating the “Xcode.app” definition, its ID will become “Xcode_Bryson” and the display name “Xcode (Bryson)”. These changes make it possible to differentiate titles when browsing in Jamf Pro, and for members of the community to better identify who is managing what (as well as sharing with each other).
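Based on the Xcode example, the injection presumably works something like this sketch (the function name is mine, not from the CommunityPatch source):

```python
def apply_author(definition, author_name):
    """Inject the author's name into a patch definition's id and name,
    mirroring the behavior described above (e.g. 'Xcode_Bryson')."""
    definition = dict(definition)  # shallow copy; don't mutate the caller's dict
    definition["id"] = "{}_{}".format(definition["id"], author_name)
    definition["name"] = "{} ({})".format(definition["name"], author_name)
    return definition

# Example: the Xcode definition submitted by 'Bryson'
title = apply_author({"id": "Xcode", "name": "Xcode"}, "Bryson")
```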

After you create a patch definition, you will be emailed an API token to the address you provided in author_email. This token is specifically for managing that title, and is the only way to update the title afterwards. Your email address is not saved with CommunityPatch. A hash of it is stored with the title so you can reset the token should you lose it or need the previous one invalidated (this feature is not implemented yet).
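Storing only a fingerprint of the email might look something like the sketch below (the exact algorithm and normalization are my assumptions, not CommunityPatch’s actual code):

```python
import hashlib

def email_fingerprint(author_email):
    """Store a one-way hash of the author's email instead of the address
    itself (sha256 and lowercasing are assumptions for illustration)."""
    normalized = author_email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same address always yields the same fingerprint, so a token reset
# request can be matched against the stored hash without keeping the email.
fp = email_fingerprint("Admin@Example.com")
```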

Updating works similarly to Patch Server (but without the items key):

curl http://beta.communitypatch.com/api/v1/title/<ID>/version \
   -X POST \
   -d "$(python patchstarter.py /Applications/<APP> --patch-only)" \
   -H 'Content-Type: application/json' \
   -H 'Authorization: Bearer <TOKEN>'


Try It Out

I had a number of admins on Slack giving me feedback and testing the API for a few weeks. While I have work left to do to ensure the production version of CommunityPatch is performant, and still some more features to finish writing, I am at a stage where I would like those interested in contributing to and using CommunityPatch to join in and try the documented features (in your test environments).

You can jump right in by joining the #communitypatch channel on the MacAdmins Slack, reading the CommunityPatch documentation, playing around with the API, testing definitions you create in your Jamf Pro test environments, and discussing what you find.

CommunityPatch is being written out in the open. You can go to GitHub and see the code for yourself. You can even contribute at a code/docs level if you like! For the immediate, having admins test it out and report back will provide me a lot of value as I work towards completing the application and deploying it to production.


Patch Starter Script

Jamf Pro 10.2 is not far. I recently released my Patch Server project for admins in the beta (and after the release) to host their own custom patch definitions. The server hosts the definitions and provides an API for maintaining them afterwards with automation. A missing piece is a tool to create your initial definitions to upload.

This morning I posted a script that aims to address that gap:

https://github.com/brysontyrrell/Patch-Starter-Script

This command line utility will take an existing application on your Mac and generate a basic (we’ll call it default) patch definition. This is primarily done using the Info.plist file within the app bundle.
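The core of that approach can be sketched with the standard library’s plistlib (a simplified illustration of reading the bundle’s keys, not the script’s actual code):

```python
import plistlib

def read_app_info(app_path):
    """Read the Info.plist keys a starter definition is built from
    (a sketch of the approach patchstarter.py takes)."""
    with open(app_path + "/Contents/Info.plist", "rb") as f:
        info = plistlib.load(f)
    return {
        "id": info["CFBundleName"].replace(" ", ""),       # CFBundleName without spaces
        "name": info["CFBundleName"],
        "bundleId": info["CFBundleIdentifier"],
        "currentVersion": info["CFBundleShortVersionString"],
    }
```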

Creating a Patch Definition

Using GitHub Desktop.app as an example, the script can output the JSON to stdout:

$ python patchstarter.py /Applications/GitHub\ Desktop.app -p "Github"

Or it can write a JSON file to a directory of your choice (in this example the current working directory):

$ python patchstarter.py /Applications/GitHub\ Desktop.app -p "Github" -o .

The -p or --publisher argument allows you to give the name of the application’s publisher. This was included as this information is not (normally) found in Info.plist.

Below, I’ve shown the GitHub Desktop.app example’s output (the keys print out of order in Python 2, but keep in mind that key order doesn’t matter). Next to each line I’ve included where the value maps from:

{
    "id": "GitHubDesktop",                          <-- CFBundleName without spaces
    "name": "GitHub Desktop",                       <-- CFBundleName
    "appName": "GitHub Desktop.app",                <-- Application filename
    "bundleId": "com.github.GitHub",                <-- CFBundleIdentifier
    "publisher": "GitHub",                          <-- Optional command line argument (see above)
    "currentVersion": "Hasty Things Done Hastily",  <-- CFBundleShortVersionString
    "lastModified": "2018-02-12T18:33:02Z",         <-- UTC Timestamp of when the script ran
    "requirements": [
        {
            "name": "Application Bundle ID", 
            "operator": "is", 
            "value": "com.github.GitHub",  <-- CFBundleIdentifier
            "type": "recon", 
            "and": true
        }
    ], 
    "patches": [
        {
            "version": "Hasty Things Done Hastily",  <-- CFBundleShortVersionString
            "releaseDate": "2017-05-22T20:24:33Z",   <-- Application last modified timestamp
            "standalone": true, 
            "minimumOperatingSystem": "10.9",        <-- LSMinimumSystemVersion
            "reboot": false, 
            "killApps": [
                {
                    "appName": "GitHub Desktop.app",  <-- Application filename
                    "bundleId": "com.github.GitHub"  <-- CFBundleIdentifier
                }
            ], 
            "components": [
                {
                    "version": "Hasty Things Done Hastily",  <-- CFBundleShortVersionString
                    "name": "GitHub Desktop",                <-- CFBundleName
                    "criteria": [
                        {
                            "name": "Application Bundle ID", 
                            "operator": "is", 
                            "value": "com.github.GitHub",  <-- CFBundleIdentifier
                            "type": "recon", 
                            "and": true
                        }, 
                        {
                            "name": "Application Version", 
                            "operator": "is", 
                            "value": "Hasty Things Done Hastily",  <-- CFBundleShortVersionString
                            "type": "recon" 
                        }
                    ]
                }
            ], 
            "capabilities": [
                {
                    "name": "Operating System Version", 
                    "operator": "greater than or equal", 
                    "value": "10.9",  <-- LSMinimumSystemVersion
                    "type": "recon"
                }
            ],
            "dependencies": []
        }
    ],
    "extensionAttributes": []
}

It is important to understand how the above values map in the event that a definition you create ends up with incorrect values because the developer assigned these keys differently than what is considered standard (especially true for version strings).

In the event the CFBundleShortVersionString or LSMinimumSystemVersion keys are missing from Info.plist, the script will prompt you for an alternative.

Add Extension Attributes

You can also include extension attributes that need to be a part of your definition. For example, if you have the following bash script as an extension attribute for GitHub Desktop.app saved as github-ea.sh:

#!/bin/bash

outputVersion="Not Installed"

if [ -d /Applications/GitHub\ Desktop.app ]; then
    outputVersion=$(defaults read /Applications/GitHub\ Desktop.app/Contents/Info.plist CFBundleShortVersionString)
fi

echo "<result>$outputVersion</result>"

You can pass it to the -e or --extension-attribute argument:

$ python patchstarter.py /Applications/GitHub\ Desktop.app -p "Github" -e github-ea.sh

The extension attribute will be appended to the extensionAttributes key in the definition (the key here is the appName, minus the extension, in lowercase with dashes replacing spaces):

{
    "...",
    "extensionAttributes": [
        {
            "key": "github-desktop",
            "value": "IyEvYmluL2Jhc2gKCm91dHB1dFZlcnNpb249Ik5vdCBJbnN0YWxsZWQiCgppZiBbIC1kIC9BcHBsaWNhdGlvbnMvR2l0SHViXCBEZXNrdG9wLmFwcCBdOyB0aGVuCiAgICBvdXRwdXRWZXJzaW9uPSQoZGVmYXVsdHMgcmVhZCAvQXBwbGljYXRpb25zL0dpdEh1YlwgRGVza3RvcC5hcHAvQ29udGVudHMvSW5mby5wbGlzdCBDRkJ1bmRsZVNob3J0VmVyc2lvblN0cmluZykKZmkKCmVjaG8gIjxyZXN1bHQ+JG91dHB1dFZlcnNpb248L3Jlc3VsdD4iCg==",
            "displayName": "GitHub Desktop"
        }
    ]
}

This will not update any of the capabilities or components/criteria in the generated definition! It will be up to you to make the edits to reference the extension attribute.
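The value field is simply the extension attribute script's contents Base64-encoded, so you can round-trip it yourself to inspect a definition or pre-compute the value. A small sketch (the helper names here are mine, not from patchstarter.py):

```python
import base64


def encode_extension_attribute(script_path):
    """Read an EA script file and return the Base64 string that goes in
    the definition's extensionAttributes 'value' field."""
    with open(script_path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")


def decode_extension_attribute(value):
    """Reverse the encoding to read the script stored in a definition."""
    return base64.b64decode(value).decode("utf-8")
```

Decoding the long value string from the example above gives you back the github-ea.sh script verbatim.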

Create Patch Data Only

This script is only meant to create a starter definition for an application. It will only have the one version so there will be no historical data for reporting. However, if you upload it to your own running Patch Server you can maintain it by updating the version through the API.

The --patch-only argument will take an application and only generate that portion of the definition:

$ python patchstarter.py /Applications/GitHub\ Desktop.app -p "GitHub" --patch-only

The JSON output can be sent to the Patch Server’s API to update the stored definition. An example curl command is in the GitHub readme:

$ curl -X POST http://localhost:5000/api/v1/title/GitHubDesktop/version -d "{\"items\": [$(python patchstarter.py /Applications/GitHub\ Desktop.app -p "GitHub" --patch-only)]}" -H 'Content-Type: application/json'
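The same call can be made from Python with only the standard library. The helper names below are illustrative; the endpoint path and {"items": [...]} envelope mirror the curl example from the readme:

```python
import json
import urllib.request


def build_version_post(patch_only_json):
    """Wrap a --patch-only payload in the {"items": [...]} envelope the
    Patch Server API expects, returned as encoded bytes for the request."""
    return json.dumps({"items": [json.loads(patch_only_json)]}).encode()


def post_version(base_url, title_id, patch_only_json):
    """POST a new version to /api/v1/title/<TitleId>/version."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/title/{title_id}/version",
        data=build_version_post(patch_only_json),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```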

Try It Out

This script should ease the process of getting started with your own custom patch definitions. If you have access to multiple versions of an application, you can use the --patch-only argument to generate the data for each and place them into your starter definition.

Note that versions must be listed in descending order! The latest version must always be at the top of your patches array and the oldest version at the bottom.
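If you're assembling a multi-version patches array by hand, a small helper can enforce that ordering. This sketch assumes simple dotted-numeric version strings, which, as noted earlier, developers don't always follow:

```python
def sort_patches_descending(patches):
    """Return patch objects sorted newest-first by their 'version' key.

    Assumes dotted-numeric versions like '10.1.1'; messier version
    strings would need a more robust comparison.
    """
    def version_key(patch):
        return tuple(int(part) for part in patch["version"].split("."))

    return sorted(patches, key=version_key, reverse=True)
```

A plain string sort would get this wrong (e.g. "1.0.10" sorts before "1.0.2" lexicographically), which is why the key splits the version into integer components.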

If you run into problems or have feature requests, open an issue on the GitHub repository!

Patch Server for Jamf Pro

(TL;DR, gimme the link: https://github.com/brysontyrrell/PatchServer)

After several months of not getting around to it, my PatchServer project on GitHub is finally nearing a true 1.0 state.

I am openly asking for those who have been following this project, and those who are interested in this project, to download, use, and provide feedback on what should be finished before the release of Jamf Pro 10.2.

Please create issues on GitHub for bugs and feature requests that you would want to make the cut for 1.0.

Some time late last year (and I say some time because it’s all becoming a blur), I was brought into a meeting where I was shown our (Jamf’s) progress on providing a framework for customers to create their own patch definitions. This framework would allow customers to set up their own patch servers and add them to their JSS.

A day or so later, I wrote the first rough version of my own implementation.

Backing up a sec:

What’s a patch definition?

In Jamf Pro v10 we introduced a feature called Patch Management. With this, you could subscribe to a number of software titles that Jamf curates and maintains. Once subscribed, your JSS will, on a schedule, read in the patch definitions of those software titles to stay updated.

For more about Patch Management, see the Jamf Pro Admin Guide (10.1).

These patch definitions (which are JSON data) contain historical information about a software title’s version history and requirements for determining if the software is installed on a managed computer. This allows admins to use the Patch Management feature to create reports and update policies to automatically patch those software titles on computers.

Of course, when these features came out there was one resounding question from nearly everyone:

“Why can’t we make our own patch definitions?”

External Patch Sources

The framework I mentioned above is the answer to this. In Jamf Pro 10.2+ you will have the option of adding External Patch Sources to your JSS. Then, in addition to the official Jamf software titles, you will be able to subscribe to your own and use the same reporting and policy features.


The external patch source must be a server your JSS is able to reach via HTTP/HTTPS. This patch server must expose the following endpoints:

  • /software
    This returns a JSON array of all the software titles that are available on this server. For example:

    [
      {
        "currentVersion": "10.1.1", 
        "id": "JamfAdmin", 
        "lastModified": "2018-02-03T03:34:34Z", 
        "name": "Jamf Admin", 
        "publisher": "Jamf"
      }, 
      {
        "currentVersion": "10.1.1", 
        "id": "JamfImaging", 
        "lastModified": "2018-02-03T03:34:36Z", 
        "name": "Jamf Imaging", 
        "publisher": "Jamf"
      }, 
      {
        "currentVersion": "10.1.1", 
        "id": "JamfRemote", 
        "lastModified": "2018-02-03T03:34:40Z", 
        "name": "Jamf Remote", 
        "publisher": "Jamf"
      }
    ]
  • /software/TitleId,TitleId
    This returns the same JSON as above, but limited to the comma separated list of software titles. For example (passing JamfAdmin,JamfRemote):

    [
      {
        "currentVersion": "10.1.1", 
        "id": "JamfAdmin", 
        "lastModified": "2018-02-03T03:34:34Z", 
        "name": "Jamf Admin", 
        "publisher": "Jamf"
      }, 
      {
        "currentVersion": "10.1.1", 
        "id": "JamfRemote", 
        "lastModified": "2018-02-03T03:34:40Z", 
        "name": "Jamf Remote", 
        "publisher": "Jamf"
      }
    ]
  • /patch/TitleId
    This returns the full patch definition JSON of the software title. Here is an abbreviated example:

    {
      "id": "JamfAdmin",
      "name": "Jamf Admin",
      "publisher": "Jamf", 
      "appName": "Jamf Admin.app", 
      "bundleId": "com.jamfsoftware.JamfAdmin", 
      "currentVersion": "10.1.1", 
      "lastModified": "2018-02-03T03:34:34Z", 
      "extensionAttributes": [
        {"ExtensionAttributeObjects"}
      ],
      "patches": [
        {"PatchObjects"}
      ], 
      "requirements": [
        {"RequirementsObjects"}
      ]
    }

If you had a patch server located at http://patch.my.org, the full URLs would be http://patch.my.org/software, http://patch.my.org/software/TitleId,TitleId, and http://patch.my.org/patch/TitleId.
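Functionally, the three endpoints reduce to two lookups over a collection of stored definitions. A rough, framework-free sketch of that routing logic (the in-memory title data is illustrative, not how the real project stores definitions):

```python
# In-memory definition store; a real patch server would back this with
# a database or definition files on disk.
DEFINITIONS = {
    "JamfAdmin": {
        "id": "JamfAdmin",
        "name": "Jamf Admin",
        "publisher": "Jamf",
        "currentVersion": "10.1.1",
        "lastModified": "2018-02-03T03:34:34Z",
        "extensionAttributes": [],
        "patches": [],
        "requirements": [],
    },
}

# The subset of keys returned by the /software listing endpoints.
SUMMARY_KEYS = ("currentVersion", "id", "lastModified", "name", "publisher")


def software(title_ids=None):
    """/software and /software/TitleId,TitleId: summary listings,
    optionally filtered by a comma-separated list of title IDs."""
    wanted = title_ids.split(",") if title_ids else DEFINITIONS.keys()
    return [
        {key: DEFINITIONS[t][key] for key in SUMMARY_KEYS}
        for t in wanted
        if t in DEFINITIONS
    ]


def patch(title_id):
    """/patch/TitleId: the full patch definition JSON."""
    return DEFINITIONS[title_id]
```

In a real server these functions would simply be bound to the three routes and their return values serialized to JSON.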

At this time, there is no product that Jamf is providing for customers to install and have a ready to use patch server. The focus has been on opening up the framework that the official patch source uses and allow customers to extend their environments through a little engineering work.

Not all of us are engineers, of course. Thus…

Enter: Patch Server

gui_01_index.png

I wanted to have a working patch server ready for the Jamf community in time for 10.2’s release. My initial patch server implementation (I call it an implementation because it’s one way of providing a patch source) achieved serving the proper JSON data for each of the endpoints described above using a database (SQLite) for the backend.

While my original goals were much grander, including the ability to fully manage a patch definition in a GUI instead of writing out JSON, I had to pare it back in order to get the project into a deliverable state.

In the past week I went through the code and ripped out everything that I felt was not needed or not doable. Then I went through and added new features (ported from another project) and streamlined the UI elements that were left.

This patch server features:

  • All required Jamf Pro endpoints to serve as an External Patch Source
  • An API for programmatic management of patch definitions and versions.
    • Create/delete patch definitions.
    • Add versions to existing patch definitions.
    • Create backup archives of all patch definitions.
  • UI for management of patch definitions.
  • Validation of uploaded patch definitions.
    gui_05_validation.png
  • Full user documentation at http://patchserver.readthedocs.io/
    patchserver_docs.png

    • UI Overview
    • Setup Instructions
    • API Documentation

Bring the Requests

Until Jamf Pro 10.2 is released, I’m not going to tag the project at a 1.0 version. If you are in Jamf’s beta program and testing 10.2, I invite you to give this a try and let me know what you think. Specifically, I’m asking you to open issues on GitHub for:

  • Bugs you find
  • Features you want, such as:
    • Connect to an actual database like MySQL (?)
  • Documentation you want, such as:
    • Instructions for installing on X

Not everything that is reported might get worked on, but the good news is I released the patch server under the MIT license. If you have some Python chops you can fork it and do whatever you want with the codebase to suit your needs!

But, I don’t wanna set up a server…

If you had that reaction to the idea of setting up your own external patch source, ask yourself if you match any of these descriptions:

  1. My JSS can talk to pretty much anything if I want it to,
  2. I want a patch server; I don’t want to host a patch server,
  3. It doesn’t matter where my patches live as long as I can get and manage them,
  4. Can’t this be a cloud thing?

If so… stay tuned for a future blog post.