Farewell to the Unofficial JSS API Docs

Hey everyone.

With the launch of the Jamf Developer Portal I think it’s time I took down my Unofficial JSS API Docs site on Confluence.

I launched it as a community resource to fill the gap in official API documentation, but now that Jamf has something out there I feel it’s time to save my $10 a month. If you found these resources helpful in the past, great! That was the whole point.

The site will come down after November 17th. For those Google searching and coming across this post, click on the dev portal link I provided above to reach the official documentation provided by Jamf.


Scripting the stuff that you think is only in the JSS GUI

(Or Jamf Pro – I may owe Dean a dollar now…)

The JSS APIs are the first and best solution for writing automations or integrations that work with the data in your JSS and take action on it.

Still, those APIs sometimes have gaps in them. Things that you have access to in the GUI but not otherwise. Sometimes you will be staring at a button and asking yourself, “Why can’t I do this with the API?”

Well, perhaps you can.

In this post I am going to detail how you can replicate actions you see in the JSS GUI via scripting and open up new options to automating some normally manual processes.

It’s not really reverse engineering

It’s easier to figure out what’s happening in a web interface than you might think. I’ll be using Chrome here to dig in and find out what is happening in the background. In Chrome, you will want to use a handy feature called “Inspect” which opens a console for you to view all sorts of data about the page you are on and what it is doing.

You can open that by right/control-clicking on the browser and selecting the option from the context menu.

To observe the various requests that are happening as you click around you will want to use the “Network” tab. This section details every action the current page is making as it loads. That includes page resources, images, HTML content and various other requests.


As you can see there is a lot of stuff that gets loaded. Most of it you can ignore because it isn’t relevant to what we’re trying to accomplish. Keep this open and watch carefully though as you begin clicking on actions in the pages you are on.

Let’s use OS X Configuration Profiles as an example. Wouldn’t it be nice if you could trigger a download of a signed configuration profile from the JSS without having to go to the GUI? Let’s see what happens when the ‘Download’ button is clicked.


An HTTP POST request was made to the current page! POST requests usually contain data, so if we scroll down to the bottom of the Headers tab we see that the browser sent the following form-encoded data.


There’s a lot of stuff being submitted here, but we can rationally ignore most of it and focus on just two key values: action and session-token.

Performing a POST to the configuration profile’s page in the JSS with those two values as form data will result in us being able to get a signed configuration profile returned!

Now, about that session-token…

You will find as you inspect actions in the JSS GUI that the value called the session-token is used almost everywhere, but what is it? The value isn’t in the cookies for our browser session, but we know it is being submitted as part of the form data. If the data isn’t in the session then it must be stored somewhere else, and because we know it is being sent with the form…


The token is in the HTML as a hidden form field! The session-token expires 30 minutes (1800 seconds) after it is created and is contained within the page itself. We only need to get a page that contains this token, parse it out, and then use it until the expiration point is reached, at which point we obtain another one. (This is a process the JSS session handles for you in the GUI so you never have to think about it, but when you’re manually obtaining these tokens you need to keep track of time.)

You knew Python was going to be in here somewhere

Let’s look at some Python code using the requests library to obtain one of these session-tokens. This is a little different than how you would interact with the REST API because we need to be logged into a session and obtain a cookie for our requests.

That’s a simple task with requests:

import requests

session = requests.Session()

data = {'username': 'your.name', 'password': 'your.pass'}
session.post('https://your.jss.org', data=data)

With the code above you have now obtained a cookie that will be used for all further interactions between you and the JSS. To parse out the session-token from a page we can use this tiny function to quickly handle the task:

def get_session_token(html_text):
    # Scan for the hidden form field and strip the markup characters
    # around its value (works in both Python 2 and 3)
    for line in html_text.splitlines():
        if 'session-token' in line:
            return line.replace('<', '').replace('>', '').replace('"', '').split('=')[-1]

You would pass the returned HTML from a GET request into the function like so:

session_token = get_session_token(session.get('https://your.jss.org/OSXConfigurationProfiles.html?id=XXX').text)
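If the line-scanning approach above feels brittle, a regular expression can do the same extraction. The attribute order in the pattern below is an assumption about the markup, so adjust it to match what you see in your own JSS page source:

```python
import re

def get_session_token_re(html_text):
    # Assumes the hidden field renders as name="session-token" value="..."
    # in that attribute order; tweak the pattern if your JSS differs
    match = re.search(r'name="session-token"\s+value="([^"]+)"', html_text)
    return match.group(1) if match else None
```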

That tackles the most complicated piece about replicating GUI functions. Now that we can easily obtain session-tokens we can pass them with form data for anything we capture using the Chrome console.
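Because the token expires after 30 minutes, a long-running script needs to track its age. Here’s a minimal caching sketch; fetch_token stands in for whatever page-scraping call you use (a hypothetical name, not anything the JSS provides):

```python
import time

TOKEN_TTL = 1800  # the session-token expires 30 minutes after creation

_token = None
_fetched_at = 0.0

def fresh_token(fetch_token):
    """Return a cached session-token, calling fetch_token() for a new one
    once the cached token is within a minute of its 30-minute expiration."""
    global _token, _fetched_at
    if _token is None or time.time() - _fetched_at > TOKEN_TTL - 60:
        _token = fetch_token()
        _fetched_at = time.time()
    return _token
```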

Here’s the code to download a signed configuration profile and save it onto the Mac:

data = {'session-token': session_token, 'action': 'Download'}
r = session.post('https://your.jss.org/OSXConfigurationProfiles.html?id=XXX&o=r', data=data)

with open('/Users/me/Desktop/MyConfig.mobileconfig', 'wb') as f:
    f.write(r.content)

The r.content attribute returns the data from the response as binary instead of text, like you saw above with r.text being passed to our get_session_token() function.

Double-click that .mobileconfig file and you’ll see a nice green Verified message along with the signing source being the JSS Built-In Signing Certificate.


Now apply that EVERYWHERE

As you can see we were able to take a download action in the JSS and script it to pull down the desired file and save it locally without using a browser. Our process was:

  1. Perform the desired action once and observe the HTTP request and required data
  2. Start a session using an HTTP library or binary (in this example we used requests)
  3. Get a session-token from a JSS page
  4. Recreate the HTTP request using the library/binary passing the required form data with the session-token as expected

That sums it up. The key is that you will need to perform the action you want to automate at least once so you can capture the request’s headers and determine what data you need to submit, and how that data is going to be returned or what the response is expected to be.
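The whole pattern can be wrapped into a small helper. This is a sketch built on the earlier examples; parse_token is whatever token-scraping function you use (such as the get_session_token function defined above):

```python
def gui_action(session, page_url, form_data, parse_token):
    """Replicate a JSS GUI action: GET the page to scrape a fresh
    session-token, merge it into the captured form data, and POST it
    back to the same page."""
    token = parse_token(session.get(page_url).text)
    data = dict(form_data)
    data['session-token'] = token
    return session.post(page_url, data=data)
```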

Not everything in the JSS GUI will POST back to the exact same URI of the object you’re looking at, and the form data between these actions is likely to differ all over the place, save for the presence of the session-token (from what I have observed so far).

And of course…

TEST TEST TEST TEST!!! That can never be stressed enough for anything you are doing. Be sure you’re not going to accidentally cause data loss or pull sensitive information and store it insecurely outside of the JSS. There are already plenty of ways to shoot yourself in the foot with the JSS, don’t add to it with a poorly written script.

Webhooks come to the JSS

There are some who said this day would never come…

This has been on my wish list for a very long time, and on the wishlists of several other people in the community that I’ve talked to about it. With the v9.93 update released yesterday we finally have webhooks for the JSS: real time outbound event notifications.

This is a big deal for those of us who work on building integrations into the Casper Suite. If you’ve wanted a service to run and take action on changes happening in the JSS, you were normally forced to have an API script run on a schedule, pulling in mass amounts of data to parse through. That’s not real time, and it’s computationally expensive if you’re an admin with a large environment.

How the Events API relates to Webhooks

There has been an alternative route to using the REST API: the Events API. If you haven’t heard of it, that may be because it isn’t advertised too loudly. It was shown off at the 2012 JNUC by some of the JAMF development team:

The Events API is Java based. You write a plugin that registers for certain events and then receives data to process. This all happens on the JSS itself, as the plugin must be installed into the application. It is also Java, which not many of us are fluent in. Plus, if you use JAMF Cloud you don’t have access to the server, so plugins aren’t likely to be in the cards anyway.

Enter webhooks.

This new outbound event notification feature is actually built ON TOP of the existing Events API. Webhooks translate the Events API event into an HTTP POST request in JSON or XML format. HTTP, JSON and XML. Those are all things that the majority of us not only understand but work with on an almost daily basis. They’re languages we know, and they’re agnostic to what you use to process them. You can use shell scripting, Python, Ruby, Swift; it doesn’t matter now!

How a webhook integration works

If you want to start taking advantage of webhooks for an integration or automation you’re working on, the first thing to understand is that webhooks need to be received by a web server external to the JSS. This diagram shows the basic idea behind what this infrastructure looks like:


Webhooks trigger as events occur within the JSS. The primary driver behind the majority of these events will be the check-in or inventory submission of your computers and mobile devices. When a change occurs the JSS will fire off the event to the web server hosting the integration you’ve created.

At that point your integration is going to do something with the data that it receives. Likely, you’ll want to parse the data for key values and match them to criteria before executing an action. Those actions could be anything. A few starting examples are:

  • Send emails
  • Send chat app notifications
  • Write changes to the JSS via the REST API
  • Write changes to a third-party service

Create a webhook in the JSS

There are a number of events from the Events API you can enable as an outbound webhook. They are:

  • ComputerAdded
  • ComputerCheckIn
  • ComputerInventoryCompleted
  • ComputerPolicyFinished
  • ComputerPushCapabilityChanged
  • JSSShutdown
  • JSSStartup
  • MobileDeviceCheckIn
  • MobileDeviceCommandCompleted
  • MobileDeviceEnrolled
  • MobileDevicePushSent
  • MobileDeviceUnEnrolled
  • PatchSoftwareTitleUpdated
  • PushSent
  • RestAPIOperation
  • SCEPChallenge
  • SmartGroupComputerMembershipChange
  • SmartGroupMobileDeviceMembershipChange

For a full reference on these webhook events you can visit the unofficial docs located at https://unofficial-jss-api-docs.atlassian.net/wiki/display/JRA/Webhooks+API

When you create the outbound webhook in the JSS you give it a descriptive name, the URL of your server that is going to receive the webhook, the format you want it sent in (XML or JSON) and then the webhook event that should be sent.


Once saved you can see all of your webhooks in a simple at-a-glance summary on the main page:


That’s it. Now every time this event occurs it will be sent as an HTTP POST request to the URL you provided. Note that you can have multiple webhooks for the same event going to different URLs, but you can’t create webhooks that send multiple events to a single URL. At this time you need to create each one individually.

Create your integration

On the web server you specified in the webhook settings on the JSS you will need something to receive that HTTP POST and process the incoming data.

There are a number of examples for you to check out on my GitHub located here: https://github.com/brysontyrrell/Example-JSS-Webhooks

As a Python user I’m very comfortable using a microframework called Flask (http://flask.pocoo.org/). It’s simple to start with, powerful to use and allows you to scale your application easily.

Here’s the basic one-file example to getting started:

import flask

app = flask.Flask('my-app')

@app.route('/')
def index():
    return "<h1>Hello World!</h1>"

if __name__ == '__main__':
    app.run('0.0.0.0')

On line 1 we import the flask module. On line 3 we’re instantiating our flask app object.

Line 5 is a decorator that says “when a user goes to this location on the web server, run this function”. The ‘/’ location would be the equivalent of http://localhost/ – the root or index of our server. Our decorated function only returns a simple HTML string for a browser to render in this example.

Line 9 will execute the application if we’re running it from the command line like so:

~$ python my_app.py

I will say at this point that this works just fine for testing out your Flask app locally, but don’t do this in production.

There are many, many guides all over the internet for setting up a server to run a Flask application properly (I use Nginx as my web server and uWSGI to execute the Python code). There are some articles on the Unofficial JSS API Docs site that will cover some of the basics.

With that being said, to make our Flask app receive data, we will modify our root endpoint to accept POST requests instead of GET requests (when using the @app.route() decorator it defaults to accepting only GET). We also want to process the incoming data.

This code block has the app printing the incoming data to the console as it is received:

import flask

app = flask.Flask('my-app')

@app.route('/', methods=['POST'])
def index():
    data = flask.request.get_json()  # returned JSON data as a Python dict()
    # Do something with the data here
    return '', 204

if __name__ == '__main__':
    app.run('0.0.0.0')

For processing XML you’ll need to import a module to handle that:

import flask
import xml.etree.ElementTree as Et

app = flask.Flask('my-app')

@app.route('/', methods=['POST'])
def index():
    data = Et.fromstring(flask.request.data)
    # Do something with the data here
    return '', 204

if __name__ == '__main__':
    app.run('0.0.0.0')

You can see that we changed the return at the end of the function to return two values: an empty string and an integer. This is an empty response with a status code of 204 (No Content). A 2XX status code signals to the origin of the request that it was successful.

This is just a technical point. The JSS will not do anything or act upon different success status codes or error codes. 200 would be used if some data were being returned to the requestor. 201 if an object were being created. Because neither of those are occurring, and we won’t be sending back any data, we’re using 204.

With this basic starting point you can begin writing additional code and functions to handle processing the inbound requests and taking action upon them.
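As a sketch of that next step, a simple dispatch table can route each event name to its own handler function. The payload keys used here (‘webhook’, ‘webhookEvent’, ‘event’) reflect the structure I have observed in the JSON output; verify them against your own JSS before relying on them:

```python
def dispatch(payload, handlers):
    """Route a parsed webhook payload to a handler keyed by event name.
    handlers is a dict mapping event names (e.g. 'ComputerAdded') to
    functions that take the event body."""
    name = payload.get('webhook', {}).get('webhookEvent')
    handler = handlers.get(name)
    # Ignore events we have no handler for
    return handler(payload.get('event', {})) if handler else None
```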




I have my opinions on API design

So I’m going to write about them.

In this context I’m really talking about REST APIs: those wonderful HTTP requests between you and some application that allow you to do all sorts of great things with your data.  Most projects that I have the most fun with involve working with an API; reading about it, testing against it, building the solution with it and coming up with other crazy sh*t to do with it.

Quickly, about REST

REST APIs provide a simple interface to an application over – normally – HTTP.  It follows the familiar CRUD pattern: create (POST), read (GET), update (PUT/PATCH) and delete (DELETE).  You use these methods to interact with ‘resources’ at different ‘endpoints’ of the service.  These endpoints will return and/or accept data to achieve a desired result.

That’s the high level overview.

From there you will start encountering a wide range of variations and differences.  Some APIs will allow XML with your requests, but not JSON.  Some work with JSON, but not XML.  APIs may require you to explicitly declare the media type you’re going to interact with.  You might have one API that accepts HTTP basic authentication while others have a token based authentication workflow (like OAuth/OAuth2).  There is a lot of variance in the designs from service to service.

Which brings us to the opinions on design

The APIs I do the most work with currently are for the JSS (of course), JIRA and HipChat, but I’ve also poked around in CrashPlan and Box on the side.  There are a lot of things that I like about all of these APIs and, frankly, some things that really irk me.  And, I mean, really irk me.  Those experiences started me in the direction of learning what it was like to create my own.

If you know me at all you know that I have a real passion for Python.  My current obsession has been Flask, a microframework for Python that allows you to write web applications.  I’ve been using it for HipChat add-ons that I’m developing, but I was really excited to get into Flask because I could start building my own REST APIs and dig into how they are designed.

Between working with established APIs and the reading and experimenting as I work on my own, I’ve determined there are a number of design choices I would want implemented in any API I worked with.

But it’s in the interface…

Two years ago I had the opportunity to attend Dreamforce.  That year was a biggie as Salesforce was transitioning their development platform and announced their intention to be mobile first and API first.  It was a pretty “phenomenal” keynote.  There were tons of sessions during the mega-conference devoted to the plethora of new and revamped APIs that now made up the Salesforce platform.  My big take away was a slide that provided an extremely high overview of the new stack.  All of Salesforce’s apps and services sat above a unified API layer.

I can’t say why that stuck with me so much at the time since I didn’t even know how to write a simple Python script, but it did.  This was the first big idea that I held onto about API design: implement your features at the API level first, document the implementation and then use that to build onto the end-user solution.

There are plenty of examples out there of services that segregate their user interface from their API and I’ve seen forums with a lot of developers or IT professionals asking why something was implemented in the GUI but inaccessible through their API which prevented an app/integration/automation from advancing.  So, as Salesforce put it, API first.

Documented without the docs

I’ve seen a lot of great examples of API documentation out there.  CrashPlan, JIRA and HipChat are at the top of my “how to do it right” examples in that they provide representations of data for each supported request method for an endpoint, returned HTTP status codes and potential error messages with their causes.  This is invaluable information for anyone writing a script or application against an API, but they all share the same weakness: they’re docs that exist outside the API.

A robust API can provide all the information a developer requires through the same HTTP methods they already use, allowing for automated discovery of the API’s capabilities without scrolling around web pages and then flipping back to your console.

There’s an HTTP method I’ve read about, but not one I’ve seen listed as supported in the docs for any of these APIs.  That would be the OPTIONS method.  It’s a great idea!  Want to know what you can do to a resource?  Pass OPTIONS as the method, and the response will include an “Allow” header that lists them.

This could be extended to be a contextual method based upon the level of access the provided credentials have.  Say a resource supports GET, POST, PUT, PATCH and DELETE, but our user account is only allowed to read and update resources.  An admin would get all five back in the response header, but our user would only have GET, PUT and PATCH as valid options.
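Server-side, a contextual OPTIONS response could be as simple as a role-to-methods lookup. The roles and permissions below are a hypothetical model, not any particular API’s:

```python
ALL_METHODS = ['GET', 'POST', 'PUT', 'PATCH', 'DELETE']

# Hypothetical permission model mapping roles to allowed methods
ROLE_METHODS = {
    'admin': ALL_METHODS,
    'editor': ['GET', 'PUT', 'PATCH'],
}

def allow_header(role):
    """Build the Allow header value an OPTIONS response might return,
    filtered by the caller's access level (unknown roles get read-only)."""
    return ', '.join(ROLE_METHODS.get(role, ['GET']))
```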

So ok, there’s an HTTP method in the REST standard that allows us to discover how we can interact with our resources.  Now how do we determine the valid format for the data in our requests?  JIRA actually implements a solution to this for ‘Issues.’  Check out the following endpoints:


The ‘createmeta’ endpoint will return a wealth of data including available projects, issue types, fields and what is required when creating a new issue.  That’s a goldmine of data that’s specific to my JIRA instance!  Then it gets even better when parameters are passed to filter it down even further to better identify what you need to do.  Like this:


That will return all of the fields required to create a new ‘Task’ within the ‘Information Technology’ project in my JIRA board.  If I create a task and then want to update it, I can call the second endpoint to reveal all of the fields relevant to this issue: which are required, and the acceptable values for input.

Despite how great the above is, that’s about all we get for the discovery through JIRA’s API.  We still need to go back to the online docs to reference the other endpoints.

Something I read on RESTful API Design struck a chord on this topic.  The idea pitched there is to use forms to provide the client with a representation of a valid request for the endpoint by passing the appropriate MIME type (for example: ‘application/x-form+json’).  This isn’t something you could expect to have a uniform definition of, but that wouldn’t matter!  You could still programmatically obtain information about any API endpoint by passing the MIME type for the desired format.

Here’s an example of what a response might look like to such a request:

curl http://my.api.com/users -H "content-type: application/x-form+json" -X POST

{
    "method": "POST",
    "type": "user",
    "fields": {
        "name": {
            "type": "string",
            "required": true
        },
        "email": {
            "type": "string",
            "required": false
        },
        "is_admin": {
            "type": "bool",
            "required": false
        }
    }
}

They can do a lot more work for you

Usually if you’re making a request to an object there will be references, links, within the data to other objects that you can make calls to.  Sometimes this is as simple as an ID or other unique value that can be used to build another request to retrieve that resource.  That seems like an unnecessary amount of code for the client to handle.

There are two ways of improving this.  The first is to include the full URL to the linked resource as a part of the parent.

curl http://my.api.com/users -H "content-type: application/json"

{
    "name": "Bryson",
    "email": "bryson.tyrrell@gmail.com",
    "computers": [
        {
            "id": 1,
            "name": "USS-Enterprise",
            "url": "https://my.api.com/computers/1"
        }
    ]
}

The second can build upon this by allowing parameters to be passed that tell the API to return linked objects that are expanded to include all of the data in one request.  JIRA’s API does this for nearly every endpoint.

curl http://my.api.com/users?expand=computers -H "content-type: application/json"

{
    "name": "Bryson",
    "email": "bryson.tyrrell@gmail.com",
    "is_admin": true,
    "computers": [
        {
            "id": 1,
            "name": "USS-Enterprise",
            "url": "https://my.api.com/computers/1",
            "uuid": "FBFF2117-B5A2-41D7-9BDF-D46866FB9A54",
            "serial": "AA1701B11A2B",
            "mac_address": "12:A3:45:B6:7C:DE",
            "model": "13-inch Retina MacBook Pro",
            "os_version": "10.10.2"
        }
    ]
}

Versions are a good thing

All APIs change over time.  Under the hood bug fixes that don’t affect how the client interacts with the service aren’t much to advertise, but additions or changes to endpoints need to be handled in a way that can (potentially) preserve compatibility.

The most common kind of versioning I interact with has it directly in the URL.  I’m going to reference HipChat on this one:


The v1 API was deprecated some time ago as HipChat migrated to their newer and more robust v2 API.  While the v1 API is still accessible it has limitations compared to v2, is lacking many of the endpoints and is no longer supported which means that a lot of integrations that were written using v1 are steadily being phased out.

The differences between the two versions of the API are huge, especially when it comes to authentication, but even after its release the v2 API has had a number of changes and additions made to it.  Unless you’re watching for them they would be easy to miss.

Going the route of maintaining the version of the API in the URL, I found this example:

my.api.com/ < Points to the latest version of the API
my.api.com/2/ < Points to latest version of the v2 API
my.api.com/2.0/ < Points to a specific version of the v2 API

On the backend the objects would need to track which version a field or endpoint was added (or even removed) and handle the response to a request based upon the version passed in the URL.  Anything requested that falls outside of the version would prompt the appropriate 4XX response.
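Sketching that backend idea: tag each field with the version it was introduced in, and filter responses by the version passed in the URL. The field registry here is hypothetical:

```python
# Hypothetical registry of the API version each field was introduced in
FIELD_VERSIONS = {
    'name': 1.0,
    'email': 1.0,
    'is_admin': 2.0,
}

def serialize(obj, version):
    """Return only the fields that existed in the requested API version;
    unknown fields are treated as newer than any version and excluded."""
    return {k: v for k, v in obj.items()
            if FIELD_VERSIONS.get(k, float('inf')) <= version}
```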

Another method of versioning is used with GitHub’s API.  By default your API requests are made against the latest version of the API, but you can specify a previous version by passing it as part of the ‘Accept’ header:

curl https://api.github.com/users/brysontyrrell -H "Accept: application/vnd.github.v3.full+json"

I’ve read about pros and cons for both approaches, but they serve the purpose of identifying changes in an API as it evolves while providing a means for compatibility with existing clients.

Multiple formats isn’t a sin

My personal preference for any REST API I work with is JSON.  JSON is easy to me, it makes sense, it just works.  I can think of one glaring example off the top of my head of an API I frequently work with that lets me read back objects in JSON but only accepts XML for POST/PUT requests.  Frustrating.

Still, JSON is my preference.  Plenty of people prefer XML.  In some cases XML may be easier to work with than JSON (such as parsing in shell scripts) or be the better data set for an application.  Structurally XML and JSON can be very interchangeable depending upon the data that is being accessed.

If the object can be converted to multiple formats then it may be a good idea to support it.  By passing the appropriate MIME type the API can return data in the requested format.  If no MIME type is passed there should be a default type that is always returned or accepted.
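A minimal version of that MIME-type dispatch might look like this, defaulting to JSON when no recognized type is passed. This is a sketch for a flat object; real frameworks parse the Accept header for you:

```python
import json
import xml.etree.ElementTree as Et

def render(data, mime='application/json'):
    """Serialize a flat dict as JSON or XML based on the requested MIME
    type, falling back to the JSON default."""
    if mime == 'application/xml':
        root = Et.Element('user')
        for key, value in data.items():
            Et.SubElement(root, key).text = str(value)
        return Et.tostring(root, encoding='unicode')
    return json.dumps(data)
```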


It’s late now and I’ve dumped a lot of words onto the page.  There’s a PyCharm window open with the shell of my sample API project that attempts to implement all of the design ideas I describe above.  Once I finish it I’ll throw it up on GitHub and see about incorporating some of the requests/responses to it into the article.

Managed VPP via Self Service

A goal I have had for some time was to get away from users simply “requesting” their VPP apps through Self Service and being able to grant them the ability to self-assign those apps (as long as they were eligible).  After doing some work on a little HTML scraping of the JSS I finally have working code to achieve this goal.

HTML Scraping?

If we’re going to allow a user to self-assign some App Store apps we need the ability to ensure there are enough available seats for them to do so.  As of version 9.62, Content as it relates to managed VPP seats is not accessible using the REST API.  There is no quick call we can make to view unassigned VPP content.

What I spent a little time working on was the method by which I could load an Advanced Content Search and parse the results.  This is different than just making a REST API request to the JSS using HTTP basic authentication (which is what you see in pretty much all examples of interacting with the JSS REST API, among others).  The web app uses session cookies for this.

Enter Python (as always, with me).

Using some code I already wrote for getting a cookie for the JIRA REST API on another project (Nessus2JIRA – you’ll hear more about that later), I wrote a new mini JSS class that incorporates both HTTP basic authentication for some of the API calls that would need to be made for User Extension Attributes, as well as obtaining a session cookie for when we need to pull up an advanced content search (the web interface and the REST API do not share authentication methods!).

If you’re looking for that already, don’t worry.  There’s a GitHub link at the bottom (where I am now keeping all my code for you guys to grab) that contains the entire sample script for Self Service.

I’ve seen some impressive examples of HTML scraping in shell scripts using SED and AWK.  For what I’m extracting from the HTML of the JSS page I found the solution to be fairly simple.  Let me break it down:

The Advanced Content Search

We need the ability to pull in the information on all of the OS X apps that we are distributing via managed VPP so we can parse out the app in question and if it has unassigned seats for the user.  In my environment I had two content searches created for us to reference this at a glance for both iOS and OS X.  They report the Content Name, Total Content, the Assigned Content and Unassigned Content for a single criteria: Content Type: is: Mac app (or iOS app for the former).

For the script we only care about Unassigned Content so we really only need that and the Content Name, but the other fields are nice if you are pulling up the search in the GUI to view and don’t conflict with how we’re going to perform the HTML scraping.

Of course, there’s still the problem with generating the search.  Going to the URL to view the search requires us to click the View button to get our results.  As it so happens, Zach Halmstad recently dropped some knowledge on a thread for a feature request related to sharing search result links:


In Zach’s words: “…If you look at the URL of the group or search, it will look like this:  smartMobileDeviceGroups.html?id=2&o=r  If you change the value of the “o” parameter to “v” so it looks like this:  smartMobileDeviceGroups.html?id=2&o=v  It should forward the URL onto the search results.”

Boom.  We can perform an HTTP request using that parameter value and get back a search result!

Now, how do we extract that data? I took a look through the HTML and found the data which is populated into a table by some JavaScript.


	var data = [

	['OS X Server',"20","17","3",],

	var sortable = new Array;

It’s just an array which means I could convert it into a native Python list type and iterate over the values with ease.  As I’m being very specific about what data I need I came up with a solution for finding and extracting these lines:

  1. I took the response from my HTTP request, the HTML of the page, and then converted it into a Python list at every newline character.
  2. I began a loop through this HTML list looking for the index value matching “\tvar data = [“ which denotes the beginning of the array.
  3. I restarted my loop at the above index +1 and started concatenating the lines of the array together into a single string (skipping the blank lines). Once I reached the line matching “\tvar sortable = new Array;” I killed the loop.
  4. I evaluate my string and out comes the list containing each entry of my VPP content with the values.

Here’s what that code looks like in action:

# Breaking up the returned HTML into a list
html = response.read().splitlines()

# The applist string starts with the open bracket for our list
applist = "["

# Here is the loop through the html list pulling out the data array
for item in html:
    if item == "\tvar data = [":
        for line in html[html.index(item) + 1:]:
            if line == "\tvar sortable = new Array;":
                break
            elif line.rstrip():
                applist += line.strip(';')[1:]

# We need the 'ast' module to perform the eval into a list
import ast
applist = ast.literal_eval(applist)

Parsing through this new list is now super easy:

for x in applist:
    if int(x[-1]) > 0:
        print(x[0] + " has " + x[-1] + " seats available.")

Keynote has 2 seats available.
OS X Server has 3 seats available.
Pages has 1 seats available.

The Golden Triangle: Extension Attribute to Smart Group to VPP Assignment

All of the VPP assignments in my environment are handled via User Extension Attribute.  This was done for a number of reasons, including the ease of dropping a user into scope for one of these apps, but also to future-proof us for when we would start leveraging the API to handle those assignments.

The setup is very simple.  Each App Store app that we distribute through managed VPP has its own extension attribute.  Let’s take Baldur’s Gate as an example (if you don’t have this available to your org, look deep inside and ask yourself, “why not?”).  For every VPP extension attribute there are two available values from a pop-up menu: Assigned and Unassigned.

(Note: since you can set a pop-up menu back to a blank value, ‘Unassigned’ is actually unnecessary from a technical standpoint, but if you have other IT staff working in a JSS it makes more visual sense to set the value to ‘Unassigned’ instead of leaving it blank, in my opinion)

Once the user extension attribute is in place, create a matching Smart User Group with the sole criterion being that the value is set to ‘Assigned.’  Now make this smart group the scope for a VPP Assignment that assigns only that app.  That’s it!  You now have an App Store app that you can dynamically assign via the JSS REST API (or with ease by flipping values directly on a user’s record).

Updating a User Extension Attribute

The last piece of this is using the REST API to flip the User Extension Attribute for the logged-in user to ‘Assigned’.  If you want to get in deeper with the API, you can check out my two earlier blog posts, “The JSS REST API for Everyone,” which give a general introduction and overview.

The two pieces of information you need to update a User Extension Attribute are the user’s username or ID and the ID of the extension attribute that will be updated.  Perform a PUT on either of these resources with the following XML to change the value (be sure to update the ID values!):
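Here is a minimal sketch of that PUT using Python’s standard library.  The XML layout follows the Classic API’s user record format; the extension attribute ID, URL, and function names are placeholders of my own invention, so adjust them for your environment:

```python
import urllib.request

def build_ea_xml(ea_id, value):
    """Build the minimal XML body for updating one user extension attribute."""
    return ("<user><extension_attributes><extension_attribute>"
            "<id>{0}</id><value>{1}</value>"
            "</extension_attribute></extension_attributes></user>").format(ea_id, value)

def assign_app(jss_url, username, ea_id, auth_header):
    """PUT the updated extension attribute value to the user's record by username."""
    request = urllib.request.Request(
        "{0}/JSSResource/users/name/{1}".format(jss_url, username),
        data=build_ea_xml(ea_id, "Assigned").encode("utf-8"),
        headers={"Content-Type": "text/xml", "Authorization": auth_header},
        method="PUT")
    return urllib.request.urlopen(request)
```

The same XML shape works against the user’s ID resource (`/JSSResource/users/id/...`) if you have the ID instead of the username.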



(Note: this is pretty much what you would do for almost ANY extension attribute in the JSS)

Check Out the Full Script on GitHub

As promised, here is a full working example of the script for you to grab:


View the README for a breakdown of how to set up the policy (and be sure to do this in a test environment).  The one omission in this code is inside the function that is triggered when there are no available seats of the app to assign:

def create_ticket():
    """This function would generate a ticket with information on the app and user.
    IT staff would purchase additional seats of the app and then assign it.

    This function is called when the number of available seats is not greater than 0.
    Customize to suit your environment."""
    print("Creating ticket.")

In my case I would have code in here to take the different values that were passed to the script and generate a Zendesk ticket on behalf of the user, informing IT that more seats need to be purchased and that the user should be assigned the app once the purchase process is complete.  That takes the onus off of the user to perform yet another step when they are informed the app isn’t immediately available.

If you’re also a Zendesk user you can review a Python script I have here that creates a request ticket for a user from Self Service:


Otherwise, you should feel free to add your own code for whatever remediation action you would want to take should there not be any available seats for the user.  If you have questions about that please feel free to reach out to me and we can discuss it.

Take it Further

The entire setup described above allows for apps to be assigned and unassigned easily.  You can take the existing code and modify it to allow users to voluntarily return managed VPP seats if they are no longer using them.  The script produces dialog prompts in the event of an error, unavailable seats, or success in assigning the app.  You’ll notice these are embedded AppleScripts (a little PyObjC fun that gets around needing to use Tkinter), so you can work with those as well to further customize the feedback to your users.

And as I already said, feel free to hit me up if you have questions.

Happy New Year!

Using Box as a Distribution Point – The 9.x Version

[UPDATE 1-8-2019] Box has deprecated WebDAV support.  WebDAV connections will stop functioning at the end of January 2019, and this workflow will no longer work.  See their post at the link below.

Deprecation: WebDAV Support

I never updated my original post on using Box.com as a Casper Share after Casper Suite 9.0 was released.  The process is still essentially the same as in 8.x, but instead of only posting updated screenshots I thought I would add a couple of alternatives to relying upon Box Sync and Casper Admin as I had described before.

But first, something pretty important I’ve learned for anyone considering this workflow…

Single Sign On Considerations

WebDAV doesn’t work with SSO credentials.  If you check out Box’s article here you can follow the instructions for setting up an ‘external password’ that will use the account’s email address and that password for authentication.  If you choose to go this route consider setting password requirements that force the password to match or exceed what is used for SSO.

Now for putting it all together.

Setup the Casper Share Folder in Box

Before putting all of the settings into your JSS, make sure the Box folder you want to use is in place with the correct permissions.

Create a user account in Box that is just for read-only (viewer) access and invite it to that folder.  By inviting the user you can have the ownership of the Casper Share rest with any other user in your Box account, located anywhere in their folder hierarchy, and it will still be a root-level folder for the read-only user.

Remember, you can invite as many other users as you need for managing your packages, or you can get real crafty and do some fun stuff using the automation tools Box has introduced.  Example:

  1. You have a folder in Box called “Package Staging”.
  2. Your IT staff upload a package here and it triggers a task to the party responsible for reviewing packages prior to distribution.
  3. If the task is marked as completed the package is then automatically moved to the Casper Share.

Nifty stuff.  I won’t really dive too much into it beyond that, but you get the gist.

Now, inside the Casper Share create a new “Packages” folder.  Because this is being treated as a traditional File Share Distribution Point the JSS will take the address and path we input below and then look for a “Packages” folder for the files to download.

The Box based Casper Share is now ready for use.

File Share Distribution Point Settings

In 9.x there was a change: the File Sharing tab for your File Share Distribution Point can no longer have empty values.  In 8.x, as I had previously described, you could get around the File Sharing settings by never clicking on that tab.  Even so, these settings are meaningless since we will never interact with this distribution point over AFP/SMB, so inputting junk values is acceptable.

The following screenshots detail the settings to use.  Here is also a quick summary:

  • General > Server Address: dav.box.com
  • HTTP/HTTPS > “Use HTTP downloads” is checked
  • HTTP/HTTPS > “Use SSL” is checked
  • HTTP/HTTPS > Context: /dav/CasperShare — This can also be any folder you want instead of labeling it “CasperShare”
  • HTTP/HTTPS > Authentication Type: Username and Password
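Putting the server address, context, and the “Packages” folder together, a client downloading a package would effectively be requesting a URL like this (the package filename here is just an example):

```
https://dav.box.com/dav/CasperShare/Packages/Firefox.pkg
```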



Configure Your Network Segment

You may be in a position where you want specific network segments directed to this distribution point, but generally you would use Box as the external catch-all for anyone who is not inside one of your offices.  Your Network Segment settings will look something like this to direct all external clients to download from Box:

  • Starting IP Address:
  • Ending IP Address:
  • Default Distribution Point: Specific file share distribution point
  • Default File Share Distribution Point: Your Box.com distribution point from above

Skip Casper Admin, Use a Browser and the API

In the previous article I detailed how to set up Box Sync on a Mac and make the directory mountable, allowing you to continue to leverage Casper Admin for package management.

You don’t really need to do that.  Using the Box web interface works great for uploading even large files that are multiple gigabytes in size.

The one thing that was nice about Casper Admin was that it created the package object for you in the JSS after you dragged the file in, and the app copied it out to your master distribution point.  You can easily do this yourself through the API and build out a workflow that works best for your staff.  If you’re not familiar with the JSS REST API, you can read my introduction post here.  There are code examples for how to interact with it.

The minimal XML you would use to post your package’s information into the JSS would look like this:
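As a minimal sketch (the package name here is purely illustrative; use your own file’s name):

```xml
<package>
    <name>Firefox.pkg</name>
    <filename>Firefox.pkg</filename>
</package>
```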


That’s it.  POST that to ../JSSResource/packages/id/0 and you’ll have a package you can assign to your policies.  Of course, there are a lot of options you can set in the XML.  You only need to include the elements that you want to specify in the JSS.  Otherwise, everything not included in the XML will be set at their defaults (disabled checkboxes, no category, priority 10).

    <info>Repackaged on 2014-09-12</info>
    <notes>Packaged by Bryson</notes>
    <os_requirements>10.7, 10.8, 10.9, 10.10</os_requirements>

There are also elements that are very specific to the type of package you’re working with.

If the installer requires a reboot, set that flag with this element:


If you’re using a DMG instead of a PKG you can set the “Fill User Template” options:


If you are still working in an environment with PowerPC Macs in the mix you can set the restrictions for the architecture and an alternative package to run:


Lastly, if you’re dealing with distributing software updates and want them installable only if they are listed in the Mac’s available software updates, you can enable that:
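Pulling those options together, a hedged sketch of the relevant elements (names per my reading of the Classic API’s package record; the values are illustrative, so verify against your own JSS before using them):

```xml
<package>
    <reboot_required>true</reboot_required>
    <fill_user_template>true</fill_user_template>
    <fill_existing_users>true</fill_existing_users>
    <required_processor>ppc</required_processor>
    <switch_with_package>AlternatePackage.pkg</switch_with_package>
    <install_if_reported_available>true</install_if_reported_available>
</package>
```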



A few quick updates: scripts, Promoter, jsslib

I’ve been absent from the blog for a while with my time being pretty split between work and family.  I thought I’d drop a quick update on some things I’m meaning to get posted and also get some small ones out of the way.  I also wanted to switch out of my minimalist theme and into something a bit easier on the eyes.

Updated script for listing Mac App Store apps in Self Service

The original version of this script I posted here attempted to capture inventory data from a user installing an App Store app just as though they were installing software published by the IT department.  Back in 10.8 the script worked, but as of 10.9, timeouts and errors with various AppleScript actions happened randomly and it was no longer reliable.

Then VPP v2 came along, and I had a philosophical shift when it came to having accurate inventory.  As I could no longer be sure when users were claiming the apps we were provisioning from the VPP portal, it made no sense to rewrite my original App Store workflow.  I’m treating my Macs more and more like I would iPads when it comes to provisioning.

I trimmed out most of the code and was left with this:


#!/bin/sh

# The 'loggertag' is used as a tag for all entries in the system.log
# (set these values to suit your environment)
loggertag="SelfService"
policy="Mac App Store App"

log() {
# This log() function writes messages to the system.log and STDOUT
echo "${1}"
/usr/bin/logger -t "${loggertag}: ${policy}" "${1}"
}

# The iTunes address for the App (can be grabbed from its App Store page) is passed
# from the JSS into the 'App Store URL (itunes.apple.com/...)' variable from parameter 4.
# Example: itunes.apple.com/app/ibooks-author/id490152466
appAddress="${4}"
log "Mac App Store URL: ${appAddress}"

# The App Store is opened to the specified app.
log "Opening the Mac App Store"
/usr/bin/osascript -e "tell application \"System Events\" to open location \"macappstore://${appAddress}\""
if [ $? -ne 0 ]; then
    log "Script error: there was an error opening the Mac App Store"
    exit 1
fi

exit 0

The script takes only the iTunes/App Store URL and opens the Mac App Store to that item, nothing more.  It’s simpler, an overall better experience, and I am still able to present a quick shortcut to common Mac App Store apps in Self Service just as I would on any of my users’ iOS devices.


New script for downloading Box Sync 4 via Self Service

Prior to the release of Box Sync 4 I had written a new stand-alone Self Service install script for the app.  The script at this earlier post still works for loading the Box Edit plugin, and I still use it, but the script below downloads the Box Sync 4 DMG, mounts it, copies the app and then removes all traces of the process.


#!/bin/sh

# The 'policytag' is used for log entries
# The 'loggertag' is used as a tag for all entries in the system.log
# (set these values to suit your environment)
policytag="Box Sync 4"
loggertag="SelfService"

log() {
# This log() function writes messages to the system.log and STDOUT
/usr/bin/logger -t "${loggertag}: ${policytag}" "${1}"
echo "${1}"
}

mountCheck() {
# Unmounts and deletes any existing /Volumes/boxsync4 directory
if [ -d /Volumes/boxsync4 ]; then
    log "/Volumes/boxsync4/ directory exists"
    if [[ $(/sbin/mount | /usr/bin/awk '/boxsync4/ {print $3}') == "/Volumes/boxsync4" ]]; then
        log "/Volumes/boxsync4/ is a mounted volume: unmounting"
        /usr/bin/hdiutil detach /Volumes/boxsync4
        if [ $? -eq 0 ]; then
            log "/Volumes/boxsync4/ successfully unmounted"
        else
            log "hdiutil error: /Volumes/boxsync4/ failed to unmount"
            exit 1
        fi
    fi
    log "Deleting /Volumes/boxsync4/ directory"
    /bin/rm -rf /Volumes/boxsync4
fi
}

cleanup() {
# The cleanup() function handles clean up tasks when 'exit' is called
log "Cleanup: Starting cleanup items"
if [ -f /tmp/boxsync4.dmg ]; then
    log "Deleting /tmp/boxsync4.dmg"
    /bin/rm -rf /tmp/boxsync4.dmg
fi
log "Cleanup complete."
}

# This 'trap' statement will execute cleanup() once 'exit' is called
trap cleanup exit

log "Beginning installation of ${policytag}"

# Check for the expected size of the downloaded DMG
webfilesize=$(/usr/bin/curl box.com/sync4mac -ILs | /usr/bin/tr -d '\r' | /usr/bin/awk '/Content-Length:/ {print $2}')
log "The expected size of the downloaded file is ${webfilesize}"

# Download the Box Sync Installer DMG
log "Downloading the Box Sync Installer DMG"
if [ -f /tmp/boxsync4.dmg ]; then
    # If there's another copy of the DMG inside /tmp/ it is deleted prior to download
    /bin/rm /tmp/boxsync4.dmg
    log "Deleted an existing copy of /tmp/boxsync4.dmg"
fi

/usr/bin/curl -Ls box.com/sync4mac -o /tmp/boxsync4.dmg
if [ $? -eq 0 ]; then
    log "The Box Sync Installer DMG successfully downloaded"
else
    log "curl error: The Box Sync Installer DMG did not successfully download"
    exit 1
fi

# Check the size of the downloaded DMG
dlfilesize=$(/usr/bin/cksum /tmp/boxsync4.dmg | /usr/bin/awk '{print $2}')
log "The size of the downloaded file is ${dlfilesize}"

# Compare the expected size against the downloaded size
if [[ $webfilesize -ne $dlfilesize ]]; then
    log "The file did not download properly"
    exit 1
fi

# Check if the /Volumes/boxsync4 directory exists and is a mounted volume
mountCheck

# Mount the /tmp/boxsync4.dmg file
/usr/bin/hdiutil attach /tmp/boxsync4.dmg -mountpoint /Volumes/boxsync4 -nobrowse -noverify
if [ $? -eq 0 ]; then
    log "/tmp/boxsync4.dmg successfully mounted"
else
    log "hdiutil error: /tmp/boxsync4.dmg failed to mount"
    exit 1
fi

if [ -e "/Applications/Box Sync.app" ]; then
    /bin/rm -rf "/Applications/Box Sync.app"
    log "Deleted an existing copy of /Applications/Box Sync.app"
fi

log "Copying /Volumes/boxsync4/Box Sync.app to /Applications"
/bin/cp -R "/Volumes/boxsync4/Box Sync.app" "/Applications/Box Sync.app"
if [ $? -eq 0 ]; then
    log "The file copied successfully"
else
    log "cp error: The file did not copy successfully"
    exit 1
fi

# Open /Applications/Box Sync.app
# This will also begin the migration from Box Sync 3 to Box Sync 4
#/usr/bin/open "/Applications/Box Sync.app"

# Run a recon to update the JSS inventory
log "Postinstall for ${policytag} complete. Running Recon."
/usr/sbin/jamf recon
if [ $? -eq 0 ]; then
    log "Recon successful."
else
    log "jamf error: There was an error running Recon."
fi

exit 0


I haven’t abandoned Promoter.  Work is still progressing, but I stopped during my reorganization of the code into classes/modules when I broke off on a tangent that ended up becoming a full-blown JSS REST API Python library.  I’ll dive into that next, but I have made a couple of changes to my goals with Promoter:

  • I’m dropping file migration support – for now
    There are a number of reasons behind this.  The bigger piece is that there’s no API in the JSS for uploading packages/scripts.  The other part is that, because I have been writing Promoter as a tool to be integrated into scripted workflows, it occurred to me that most IT admins will already have their own methods of transferring files between different shares, and such functionality would be redundant and possibly less robust.  In the end, upon a successful migration, you would want to write in the code that would migrate your files into your production environment (if that is even required).
  • Including more controls
    Based upon feedback from Tobias and others, I’m going to be expanding the command line options to allow some limited transformation of the XML before it is posted to the destination JSS, mainly in the form of swapping out values (e.g. setting a Site, adding in computer groups for scope that are present on the destination, forcing the policy to be enabled by default).
  • Support for importing as a Python module
    Once I have finished cleaning up the code I’ll be posting the raw source online so everyone can grab it and modify it freely, but also so the code can be easily imported into other Python scripts instead of being used as a command line utility.  That said, I’ll still post compiled versions of the binaries as I did before so they can be run on systems that either do not have the third-party libraries or don’t have Python installed.

jsslib – A Python library for the JSS REST API

As I mentioned above, I’ve been working on a Python library for making scripts interacting with the JSS REST API easy and quick to write.  The project took off when I decided to rewrite the existing JSS class I had written for Promoter and then began expanding it to cover the full API.

The syntax for the new library looks like this:

>>> import jsslib
>>> myjss = jsslib.JSS('https://myjss.com', 'myuser', 'mypass')
>>> results = myjss.computers()
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/computers

The above line returns the list of all computers in the JSS (you can see the URL in the output).  While that’s easy to write, it’s not very special.  Any kind of library should be doing work for you and making things easier.  That’s why I decided to write the new API to handle a lot of tasks for the end user.  The returned objects from these calls all come with “auto-parsed” attributes:

>>> results.id_list
[1, 2]
>>> results.size
2
>>> results.data
'<?xml version="1.0" encoding="UTF-8"?><computers><size>2</size><computer><id>1</id><name>USS-Voyager</name></computer><computer><id>2</id><name>Starship Enterprise</name></computer></computers>'

In addition to auto-parsing, the library also allows for any and all ‘simple’ objects in the JSS to be created or updated using parameters instead of passing XML:

>>> results = myjss.buildings_create("Minneapolis")
<Response [201]> https://jss.jamfcloud.com/bryson/JSSResource/buildings/id/0
>>> results.id
5
>>> myjss.buildings_update(5, "St Paul")
<Response [201]> https://jss.jamfcloud.com/bryson/JSSResource/buildings/id/5
<jsslib.JSSObject instance at 0x103629dd0>
>>> myjss.buildings(5)
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/buildings/id/5
>>> myjss.buildings(5).name
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/buildings/id/5
'St Paul'

The API will also be ‘smart’ in that it uses an order of priority to test your input for certain API calls.  The best example of this is with computers and mobile devices.  Here is a series of API calls to look up a computer; take note of the URLs in each of the outputs.

>>> results = myjss.computers(1)
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/computers/id/1
>>> results.udid
'ZZZZ0000-ZZZZ-1000-8000-001B639ABA8A'
>>> myjss.computers(results.udid).name
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/computers/udid/ZZZZ0000-ZZZZ-1000-8000-001B639ABA8A
'USS-Voyager'
>>> myjss.computers("USS-Voyager").serial_number
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/computers/name/USS-Voyager

You’ll notice again I’m making use of the auto-parsing features that are built into the library.  Computer and mobile device records all return the following attributes already parsed and readable: UDID, serial number, MAC address, ID, model and computer name.

You might be wondering how you’re going to figure out how all of this works, as the JSS API returns a wide variety of objects with different sets of data.  A huge point for me was to make sure anyone could start working with the library without hand-holding, so I’m making sure all of the documentation is baked into the code just like any other Python library:

>>> help(jsslib.JSS.advancedcomputersearches)

Help on method advancedcomputersearches in module jsslib:

advancedcomputersearches(self, identifier=None) unbound jsslib.JSS method
    Returns a JSSObject for the /advancedcomputersearches resource
    Example Usage:
    myjss.advancedcomputersearches() -- Returns all advanced computer searches ('None' request)
    myjss.advancedcomputersearches(1) -- Returns an advanced computer search by ID
    myjss.advancedcomputersearches('name') -- Returns an advanced computer search by name
    Keyword Arguments:
    identifier -- The ID or name of the advanced computer search
    Returned Values for a 'None' request:
    JSSObject.data -- XML from response
    JSSObject.size -- Total number of all advanced computer searches
    JSSObject.id_list -- List of all advanced computer search IDs (int) from the response
    Returned Values for ID or name requests:
    JSSObject.data -- XML from response
    JSSObject.id -- The ID of the advanced computer search

I presented jsslib at an internal company event not long ago, and I was about 50% done at that point.  This library will end up being used by Promoter for all of its interactions with the JSS, and I will be posting the library somewhere once it is ready.  I’m also still continuing to flesh out some features (I’m considering adding a .search() method to returned objects for searching the XML data without having to pipe it into an XML parser).

You can expect to see jsslib popping up on here again in the near future.

As always, if you have any comments or questions just reach out to me here or anywhere else I linger.