Stop everything! Start using Pipenv!

Seven months ago I tweeted about Pipenv:

That was about it.

The project – which was at v3.6.1 at the time – still felt new. At Jamf, we were working on our strategy for handling project requirements and dependencies for Jamf Cloud’s automation apps, and Pipenv, while a cool idea, didn’t yet have the maturity and traction we were looking for.

As @elios pointed out on the MacAdmins Slack: Pipenv was announced in January of this year as an experimental project.

Today it is the officially recommended Python packaging tool from Python.org.

In his words, “That’s awfully fast…”

The GitHub repository for Pipenv has 5400+ stars and more than 100 contributors. It is proceeding at breakneck development speed, adding all kinds of juicy features every time you turn around. It’s on v8.3.2.

Within ten minutes of finally, FINALLY, giving Pipenv a try today I made the conscious decision to chuck the tooling we had made and convert all of our projects over.

(Oh, and the person who wrote that tooling wholly agreed)

Let’s talk about that.

Part 1: Why requirements.txt sucks

There was a standard workflow for Python projects. You needed two key tools: pip and virtualenv. When developing, you would create a Python virtual environment using virtualenv in order to isolate your project from the system Python or other virtual environments.

You would then use pip to install all the packages your project needs. Once you had all of those packages installed you would then run a command to generate a requirements.txt file that listed every package at the exact installed version.

The requirements.txt file stays with your project. When someone else gets a copy of the repository to work on it or deploy it, they follow the same process: create a virtual environment and then use pip to install all of the required packages by feeding that requirements.txt file into it.
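
For reference, that classic workflow boils down to a handful of commands (the package name here is just an example):

$ virtualenv venv
$ source venv/bin/activate
(venv) $ pip install flask
(venv) $ pip freeze > requirements.txt

Then later, on another machine:

$ virtualenv venv
$ source venv/bin/activate
(venv) $ pip install -r requirements.txt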

Now, that doesn’t seem so bad, right? The drawbacks become apparent as you begin to craft your build pipelines around your projects.

Maintenance is a Manual Chore

The first thing to call out is how manual this process is. Once you’ve cloned a project you need to create a virtual environment at the same Python version and then install the packages. If you want to begin upgrading packages you need to remember to manually export a new requirements.txt file each time.

Removing packages is far more problematic. Uninstalling with pip will not remove dependencies for a package! Take Flask, for example. It’s not just one package. It has four sub-packages that get installed with it, and one of those (Jinja2) has its own sub-package.

So, maybe the answer is to manually maintain only the top-level packages in your requirements.txt file? That’s a hard no. You don’t want to do that because it makes your environment no longer deterministic. What do I mean? Those sub-packages we just talked about are NOT pinned to fixed versions. When packages are published to the Python Package Index (PyPI) they aren’t using requirements.txt files with fixed versions for their dependencies. Instead, they specify ranges that allow installs at a minimum and/or maximum version for each sub-package.
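
To illustrate, here is a made-up setup.py fragment (not taken from any real package) showing how a library typically declares its dependencies as ranges:

# setup.py (illustrative only)
from setuptools import setup

setup(
    name="example-library",
    version="1.0.0",
    install_requires=[
        "Jinja2>=2.4",            # minimum version only
        "chardet>=3.0.2,<3.1.0",  # bounded range
    ],
)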

This makes a lot of sense when you think about it. When installing a package for the first time as a part of a new project, or updating a package in an existing project, you are going to want the latest allowed versions for all the sub-packages for any patches that have been made.

Not so much when it comes to a project or application you deploy. You developed and tested your code at specific package versions that are a known working state, and deployment happens past the development phase, where bugs from updates to those packages can be identified and addressed. When it comes to deployment, your project’s requirements.txt file must be deterministic and list every package at an exact version.

Hence, maintenance of this file becomes a manual chore.

But Wait, There’s Testing!

There are a lot of packages that you might need to install that have nothing to do with running your Python application, but have everything to do with running tests and building documentation. Common package requirements for this would be Pytest, Tox, and Sphinx (with maybe a theme). Important, but not needed in a deployment.

We want to be able to specify a set of packages to install that will be used during the testing and build process. The answer, unfortunately, is a second requirements.txt file. This one would have a special name like requirements-dev.txt and only be used during a build. It could contain only the build-specific packages and be installed with pip after the standard requirements.txt, or it could contain everything: the project packages plus the build packages. In either case, the maintenance problem continues to grow.
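
Installing both files is simple enough, since pip accepts multiple -r flags (the file names follow the convention described above):

$ pip install -r requirements.txt -r requirements-dev.txt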

So We Came Up With Something…

Our process at Jamf ended up settling on three types of requirements.txt files in an effort to address all the shortcomings described.

  • requirements.txt
    This file contained the full package list for the project at fixed versions. It would be used for spinning up new development environments or during deployment.
  • core-requirements.txt
    This file contained the top-level packages without fixed versions. Development environments would not be built from this. Instead, this file would be used for recreating the standard requirements.txt file at updated package versions and eliminating orphaned packages that were no longer used or removed from the project at some point.
  • build-requirements.txt
    This is a manually maintained file per-project that only contained the additional packages needed during the build process.

This approach isn’t too dissimilar to what others have implemented for many of the same reasons we came up with.

Part 2: Enter Pipfile

Not Pipenv? We’re getting there.

Pipfile was introduced almost exactly a year ago at the time of this post (first commit on November 18th, 2016). The goal of Pipfile is to replace requirements.txt and address the pain points that we covered above.

Here is what a Pipfile looks like:

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[dev-packages]



[packages]



[requires]

python_version = "2.7"

PyPI is the default source in all new Pipfiles. Using the scheme shown, you can add additional [source] sections with unique names to specify your internal Python package indexes (we have our own at Jamf for shared packages across projects).

There are two groups for listing packages: dev-packages and packages. These lists mirror how we were already handling things: only the build packages go into the dev-packages list, and all project packages go under the standard packages list.

Packages in the Pipfile can have fixed versions or be set to install whatever the latest version is.

[packages]

flask = "*"
requests = "==2.18.4"

The [requires] section dictates the version of Python that the project is meant to run under.

From the Pipfile you would create what is called a Pipfile.lock, which contains all of the environment information: the installed packages, their installed versions, and their SHA256 hashes. The hashes are a recent security feature of pip used to validate packages when they are installed during a deployment. If there is a hash mismatch the install can be aborted: a powerful security tool for preventing malicious packages from entering your environments.

Note that you can specify the SHA256 hashes of packages in a normal requirements.txt file. This is a feature of pip and not of Pipfile.
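
For example, a hash-pinned line in a requirements.txt file looks like this (the digest is a placeholder, not a real hash):

requests==2.18.4 --hash=sha256:<64-character-hex-digest>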

It is this Pipfile.lock that will be used to generate environments on other systems whether for development or deployment. The Pipfile will be used for maintaining packages and dependencies and regenerating your Pipfile.lock.
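
Abridged, a Pipfile.lock is a JSON document shaped roughly like this (hashes shortened to placeholders):

{
    "_meta": {
        "requires": {"python_version": "2.7"},
        "sources": [{"name": "pypi", "url": "https://pypi.python.org/simple", "verify_ssl": true}]
    },
    "default": {
        "requests": {
            "hashes": ["sha256:<hash>", "sha256:<hash>"],
            "version": "==2.18.4"
        }
    },
    "develop": {}
}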

All of this is just a specification. Pip itself does not yet support Pipfiles, but there is another…

Part 3: Pipenv Cometh

Pipenv is the implementation of the Pipfile standard. It is built on top of pip and virtualenv and manages both your environment and package dependencies, and it does so like a boss.

Get Started

Install Pipenv with pip (or homebrew if that’s your jam).

$ pip install pipenv

Then in a project directory create a virtual environment and install some packages!

$ pipenv --two
Creating a virtualenv for this project…
<...install output...>
Virtualenv location: /Users/me/.local/share/virtualenvs/test-zslr3BOw
Creating a Pipfile for this project…

$ pipenv install flask
Installing flask…
<...install output...>
Adding flask to Pipfile's [packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (36eec0)!

$ pipenv install requests==2.18.4
Installing requests==2.18.4…
<...install output...>
Adding requests==2.18.4 to Pipfile's [packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (572f23)!

$

Notice how we see the Locking messages after every install? Pipenv automatically regenerates the Pipfile.lock each time the Pipfile is modified. Your fixed environment is being automatically maintained!

Graph All The Things

Let’s look inside the Pipfile itself.

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[dev-packages]



[packages]

flask = "*"
requests = "==2.18.4"


[requires]

python_version = "2.7"

No sub-packages? Nope. It doesn’t need to track those (they end up in the Pipfile.lock, remember?). But, if you’re curious, you can use the handy graph feature to view a full dependency tree of your project!

$ pipenv graph
Flask==0.12.2
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
   - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]
 
$

Check that out! Notice how you can see the requirement that was specified for the sub-package in addition to the actual installed version?

Environment Management Magic

Now let’s uninstall Flask.

$ pipenv uninstall flask
Un-installing flask…
<...uninstall output...>
Removing flask from Pipfile…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (4ddcaf)!

$

And re-run the graph command.

$ pipenv graph
click==6.7
itsdangerous==0.24
Jinja2==2.9.6
 - MarkupSafe [required: >=0.23, installed: 1.0]
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]
Werkzeug==0.12.2

 

Yes, the sub-packages have now been orphaned within the existing virtual environment, but that’s not the real story. If we look inside Pipfile we’ll see that requests is the only package listed, and if we look inside our Pipfile.lock we will see that only requests and its sub-packages are present.

We can regenerate our virtual environment cleanly with only a few commands!

$ pipenv uninstall --all
Un-installing all packages from virtualenv…
Found 20 installed package(s), purging…
<...uninstall output...>
Environment now purged and fresh!

$ pipenv install
Installing dependencies from Pipfile.lock (f58d9f)…
 🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 5/5 — 00:00:01
To activate this project's virtualenv, run the following:
 $ pipenv shell

$ pipenv graph
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]

$

Mindblowing!!!

Installing the dev-packages for our builds uses an additional flag with the install command.

$ pipenv install sphinx --dev
Installing sphinx…
<...install output...>
Adding sphinx to Pipfile's [dev-packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (d7ccf2)!

The appropriate locations in our Pipfile and Pipfile.lock have been updated! To install the dev environment, perform the same regeneration steps as above but add the --dev flag.

$ pipenv install --dev
Installing dependencies from Pipfile.lock (f58d9f)…
 🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 18/18 — 00:00:05
To activate this project's virtualenv, run the following:
 $ pipenv shell

Part 4: Deploy Stuff!

The first project I decided to apply Pipenv to in order to learn the tool is ODST. While there is a nice feature in Pipenv where it will automatically import a requirements.txt file if detected, I opted to start clean and install all my top-level packages directly. This gave me a proper Pipfile and Pipfile.lock.

Here’s the resulting Pipfile.

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"


[dev-packages]

pytest = "*"
sphinx = "*"
sphinx-rtd-theme = "*"


[packages]

flask = "*"
cryptography = "*"
celery = "*"
psutil = "*"
flask-sqlalchemy = "*"
pymysql = "*"
requests = "*"
dicttoxml = "*"
pyjwt = "*"
flask-login = "*"
redis = "*"


[requires]

python_version = "2.7"

Here’s the graph of the installed packages (without the dev-packages).

$ pipenv graph
celery==4.1.0
 - billiard [required: >=3.5.0.2,<3.6.0, installed: 3.5.0.3]
 - kombu [required: <5.0,>=4.0.2, installed: 4.1.0]
 - amqp [required: >=2.1.4,<3.0, installed: 2.2.2]
 - vine [required: >=1.1.3, installed: 1.1.4]
 - pytz [required: >dev, installed: 2017.3]
cryptography==2.1.3
 - asn1crypto [required: >=0.21.0, installed: 0.23.0]
 - cffi [required: >=1.7, installed: 1.11.2]
 - pycparser [required: Any, installed: 2.18]
 - enum34 [required: Any, installed: 1.1.6]
 - idna [required: >=2.1, installed: 2.6]
 - ipaddress [required: Any, installed: 1.0.18]
 - six [required: >=1.4.1, installed: 1.11.0]
dicttoxml==1.7.4
Flask-Login==0.4.0
 - Flask [required: Any, installed: 0.12.2]
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
 - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
Flask-SQLAlchemy==2.3.2
 - Flask [required: >=0.10, installed: 0.12.2]
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
 - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
 - SQLAlchemy [required: >=0.8.0, installed: 1.1.15]
psutil==5.4.0
PyJWT==1.5.3
PyMySQL==0.7.11
redis==2.10.6
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]

In my Dockerfiles for the project I swapped out requirements.txt and switched to installing my packages from the Pipfile.lock into the system Python (in a containerized app there’s no real need to create a virtual environment).

RUN /usr/bin/apt-get update -q && \
    /usr/bin/apt-get install -qqy build-essential git && \
    /usr/bin/apt-get install -qqy python-pip python-dev && \
    /usr/bin/pip install pipenv && \
    /usr/bin/apt-get install -qqy libssl-dev libffi-dev && \
    /usr/bin/apt-get install -qqy uwsgi uwsgi-plugin-python && \
    /usr/bin/apt-get clean && \
    /bin/rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY /docker/webapp/web-app.ini /etc/uwsgi/apps-enabled/

COPY /Pipfile* /opt/odst/
WORKDIR /opt/odst/
RUN /usr/local/bin/pipenv install --system --deploy

COPY /ods/ /opt/odst/ods/
COPY /application.py /opt/odst/
COPY /docker/ods_conf.cfg /opt/odst/

RUN /bin/chown -R www-data:www-data /opt/odst/ods/static

CMD ["/usr/bin/uwsgi", "--ini", "/etc/uwsgi/apps-enabled/web-app.ini"]

The --system flag tells Pipenv to install to the system Python. The --deploy flag will abort the installation if the Pipfile.lock is out of date or the Python version is wrong. Out of date? Pipenv knows if your Pipfile.lock matches up with the Pipfile by comparing a SHA256 hash of the Pipfile that it has saved. If there’s a mismatch then there is an issue and it aborts.

If I didn’t want to rely on this hash match feature, I could instead pass an --ignore-pipfile flag that will tell it to proceed using only the Pipfile.lock.

I should mention at this point that Pipenv dramatically speeds up build times. When using pip your packages will install sequentially. Pipenv will install packages in parallel. The difference is immediately noticeable, especially when regenerating your development environments.

Still, The World is Catching Up

Python development workflows are clearly moving to this tool, but Pipenv is less than a year old at this point and not everything supports generating environments from the provided Pipfiles.

The two I am currently using that fall in that category are AWS Elastic Beanstalk and ReadTheDocs.org. Both need a requirements.txt file in order to build the Python environments.

That is thankfully a simple task. You can generate a requirements.txt file from Pipenv by using the lock option.

$ pipenv lock --requirements > requirements.txt

 

For Elastic Beanstalk applications, I can have this command run as a part of my build pipeline so the file is included with the ZIP archive before it goes to S3.

In the case of Read the Docs there is an open issue for adding Pipfile support, but until then I will need to generate the requirements.txt file as I make changes to my environment and save it with the repository.

For Read the Docs I’ll want the extra dev-packages. This is done by using the pipenv run command. This will execute the succeeding string as if it were run in the environment.

$ pipenv run pip freeze > requirements.txt

The bright side is that I am no longer actually managing this file. It is straight output from my Pipenv managed environment!

Part 5: Profit

I hope you enjoyed my overview and impressions of Pipenv. It’s a tool that is going to have a huge impact on development and deployment workflows – for the better!

A huge shout out to Kenneth Reitz for creating yet another invaluable package for the Python community!

* Edit Nov 9: Changed the command for exporting a requirements.txt file. Using pipenv lock -r -d only outputs the packages under the dev-packages section.


Farewell to the Unofficial JSS API Docs

Hey everyone.

With the launch of the Jamf Developer Portal I think it’s time I took down my Unofficial JSS API Docs site on Confluence.

I launched it as a community resource to fill a gap in API documentation, but now that Jamf has something out there I feel it’s time I save my $10 a month. If you found these resources helpful in the past, great! That was the whole point.

The site will come down after November 17th. For those Google searching and coming across this post, click on the dev portal link I provided above to reach the official documentation provided by Jamf.

Open Distribution Server Technology (w/JNUC Recap)

ODST @JNUC

At JNUC 2017, I was given the opportunity to do a session detailing the progress I’ve made and the vision I have for a new file distribution server that can serve to replace the now discontinued JDS (Jamf Distribution Server).

This was a last minute addition to the conference schedule and we were unable to record it, but the Mac admin community took notes which can be found here. I’ve also uploaded the presentation’s slide deck on SlideShare.

The source code for ODST is available on GitHub. It is currently in an early Alpha state with some of the core functionality complete.

Project Goals

ODST came about with the sunsetting of the JDS. I set out to design my own implementation of an automated file distribution server but with additional features to make it a more powerful component of an administrator’s environment.

The goal of ODST is to provide an on-premise file syncing and distribution server solution that puts automation and integration features first.

The ODS (Open Distribution Server) application itself is modular and being designed to fit into as many deployment models as possible. This ranges from a simple single-server installation on Linux, Windows, or macOS to containerized deployments in Docker or Kubernetes.

While there will be initial support for the ODS to integrate with Jamf Pro it is not a requirement for using the application. This will allow administrators using other management tools to take advantage of the solution and submit feature requests for integrations with them as well.

Planned Features

  • A full web interface (built on top of the Admin API)
  • The Admin API for integrating your ODS instances with existing automations and workflows.
  • Many-to-many registration and syncing which will allow package uploads to any ODS and still replicate throughout your network.
  • Package and ODS staging tags to restrict how certain levels of packages replicate through the network.
  • Webhooks and email to send notifications to other services alerting them to events that are occurring on your ODS instances.
  • LDAP integration for better control and accountability when granting other administrators and techs access to your ODS instances.
  • And more to come…

Package Syncing

Where the JDS synced by running a looping task every five minutes to poll another server, the ODS application uses a private ODS API for communicating between instances.

When two ODS instances are registered to each other they will have each others’ keys saved to their databases and use those keys to sign API requests.
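
As a generic illustration of that pattern (this is not necessarily ODST’s actual signing scheme), signing a request body with a shared key could look something like this:

import hashlib
import hmac

# Hypothetical key exchanged when the two instances were registered
shared_key = b"registered-instance-key"
body = b'{"package": "example.pkg"}'

# The sender computes a signature over the body and includes it with the request;
# the receiver recomputes it with its stored copy of the key and compares the two.
signature = hmac.new(shared_key, body, hashlib.sha256).hexdigest()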

The standard order of operations during a package upload would be:

  1. The admin uploads a package to ODS1.
  2. ODS1 generates the SHA1 hash of the package and also generates SHA1 hashes for every 1 megabyte chunk of that package. This information is saved to the database.
  3. ODS1 sends a notification to every registered ODS instance that a new package is available.
  4. ODS2 receives this notification and makes a return API request for the full details of the package.
  5. ODS2 saves the pending package to the database and a download task is sent to the queue.
  6. The ODS2 worker takes the download task off the queue and begins downloading the package in 1 megabyte chunks, comparing hashes for every chunk, and saving them to a temporary location.
  7. Once the ODS2 worker has downloaded all chunks it recombines them to the single file, performs a final SHA1 check, and moves the package to the public download directory.
  8. ODS2 then performs step #3 to propagate the package to other ODS instances it is registered with.

If the download process seems familiar, it is borrowed from how Apple performs MDM initiated application installs.
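
Here is a minimal sketch of the whole-file and per-chunk hashing described in step 2 (illustrative only, not the actual ODST code):

import hashlib

CHUNK_SIZE = 1024 * 1024  # 1 megabyte

def hash_package(path):
    """Return the SHA1 of the whole file plus a SHA1 for each 1 MB chunk."""
    whole_file = hashlib.sha1()
    chunk_hashes = []
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            whole_file.update(chunk)
            chunk_hashes.append(hashlib.sha1(chunk).hexdigest())
    return whole_file.hexdigest(), chunk_hashes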

Application Architecture

The ODS application is more complex than the JDS in order to facilitate the additional features that are being built on top of the file syncing. In addition to the application server, a production deployment would also include a front-end web server (Nginx or Apache), a Redis server for the queuing system, a database server (ODST falls back to a local SQLite database file if there is not a database service to connect to), and workers that process queued actions.

Single Server

ODS_Single_Server.png

Multi-Server or Containerized

ODS_Multi_or_Containerized.png

The queuing system is an important element as it backgrounds many of the processes that the server will need to perform in reaction to notifications or requests (such as queuing notifications, API requests to other ODS instances, file downloads, and file hashing operations). This frees up the application to continue accepting requests by removing long-running blocking processes.

How the Community Can Help

When I gave the JNUC presentation I only took up half of the allotted time to discuss what was completed with the project and what was planned. The second half was spent in open discussion to take in feedback and guidance from the target audience on what was needed on the road to a 1.0 release.

Adding LDAP support was the first item to come out of this and is my next planned feature to write in after the file syncing framework is finished. I encouraged participants to open GitHub issues on the repo as we discussed their questions and asks. I want to continue to encourage this. The ODST project is meant for the community and should continue to be community driven in its roadmap.

When it comes to contributing to the project I am not asking for code help at this time. Don’t feel that you need to know Python or web development with Flask in order to contribute. There are many other areas where I am in need of help:

  • Testing! As I make new commits to the repository and add in more features you can help ensure everything is working by running the latest version and trying them out. Submit issues, provide logs, provide details on how you’re deploying the application (the provided Docker Compose file is the quickest and easiest way), and by doing so you will help verify features work as expected and solidify the quality of the application.
  • Determine optimal configurations. There are quite a few components to the ODS application and I am learning as I go for how to configure the web server. More experienced administrators who are familiar with these technologies, especially in production environments, can help work towards a baseline for…
  • Installers! The ODS application can be custom set up for almost any kind of deployment, but we still want an easy option where an admin can grab an installer and load it onto a single Linux or Windows server. If you have experience building installers on those platforms please reach out! I’ve also mentioned containerization a few times, and having official Docker images for the ODS application and worker components should be a part of this initiative.
  • Documentation. Much documentation. There will be official docs available at odst.readthedocs.io which will be generated from the main repository on GitHub. You can help maintain and improve that documentation with pull requests as you find errors or inaccurate instructions/details as the project iterates. The documentation will be especially invaluable when it comes to the aforementioned installers, custom installations, and the administrator user guide portion that will walk users through how to perform actions.

If you haven’t yet, please join the #odst channel in the Mac Admins Slack where you can discuss the project with me directly as well as other admins who are using, testing, and contributing as they can.

I hope to build something that will provide great value to our community and fill the gap the JDS left in a lot of environments. I hope to see you on GitHub and Slack soon!

Build Your Own Jamf Pro Integrations: Part I An Intro to Webhooks and Flask

Welcome back to the BYO Jamf Pro Integrations tutorial! In Part I we will be giving an introduction to both the webhooks feature of Jamf Pro and the Flask microframework that you installed into your virtual environment in the introduction post.

Webhooks in Jamf Pro

Webhooks are a framework introduced in Jamf Pro v9.93. A webhook itself is an HTTP callback: an HTTP POST that occurs when something happens.

Put another way, a webhook is an HTTP request made by a server to a destination, with a payload, in response to an event. Jamf Pro’s webhooks are built directly on top of a pre-existing Java API called the Events API.

The Events API allowed Java plugins, packaged as .jar files, to be installed on the Jamf Pro server. While not an option for Jamf Cloud customers, self-hosted users can take advantage of this. Learn more here: https://github.com/jamf/JSSEventsAPI

While not a 100% match, the available events for Jamf Pro webhooks closely mirror the list for the Events API.

  • ComputerAdded
  • ComputerCheckIn
  • ComputerInventoryCompleted
  • ComputerPolicyFinished
  • ComputerPushCapabilityChanged
  • JSSShutdown
  • JSSStartup
  • MobileDeviceCheckIn
  • MobileDeviceCommandCompleted
  • MobileDeviceEnrolled
  • MobileDevicePushSent
  • MobileDeviceUnEnrolled
  • PatchSoftwareTitleUpdated
  • PushSent
  • RestAPIOperation
  • SCEPChallenge
  • SmartGroupComputerMembershipChange
  • SmartGroupMobileDeviceMembershipChange

To set up a webhook for one of these events, log into your Jamf Pro Server and navigate to Settings -> Global Management -> Webhooks. Click the + New button and you will be taken to a screen to set and select the following:

  • Name
    A description.
  • URL
    The address that you want Jamf Pro to send the event data to.
  • Content Type
    Choose whether the data is in XML or JSON format.
  • Event
    The event from the list above that you want to send on.

When you set up your webhook, Jamf Pro will send an HTTP POST with a payload of the content type you selected containing contextual data on the event. These payloads are broken into two parts: the webhook and the eventObject/event keys.

Here is an example in XML:

<JSSEvent>
    <webhook>
        <id>1</id>
        <name></name>
        <webhookEvent>JSSShutdown</webhookEvent>
    </webhook>
    <eventObject>
        <institution></institution>
        <hostAddress></hostAddress>
        <webApplicationPath></webApplicationPath>
        <isClusterMaster>false</isClusterMaster>
        <jssUrl></jssUrl>
    </eventObject>
</JSSEvent>

Here is that same example as JSON:

{
    "webhook": {
        "id": 1,
        "name": "",
        "webhookEvent": "JSSShutdown"
    },
    "event": {
        "institution": "",
        "hostAddress": "",
        "webApplicationPath": "",
        "isClusterMaster": false,
        "jssUrl": ""
    }
}

The webhook key is about the Jamf Pro webhook itself. The database ID, the name you set, and the type of event are contained here. This is data that, later in the tutorials, can be used to identify which events you are receiving from Jamf Pro.

The eventObject (XML) or event (JSON) key contains contextual data about what triggered the event, or about the event itself, depending upon which event was triggered. Many of the events send the exact same data under this key; the difference is what is contained under the webhook key. This is true, for example, of the Computer* and MobileDevice* events.

You can dive into full examples of every webhook event in XML and JSON formats at the Unofficial JSS API Docs site: https://unofficial-jss-api-docs.atlassian.net/wiki/spaces/JRA/pages/14450694/Webhooks+API

Flask

There needs to be something on the receiving end – the destination URL of the Jamf Pro webhook – to receive the payload and process it or take action on it. This is where web development enters the picture. The type of integration that works with webhooks is a web app with endpoints that can accept POST requests.

There are many web technologies out there for all kinds of programming languages. The tutorial series will focus on Python (very popular in the Mac admin community for scripting alongside Ruby) and a microframework called Flask.

The “micro” portion means that Flask does not contain many elements of larger frameworks, like a pre-defined database interface. Instead, Flask relies on extensions that plug into the framework and extend the functionality of your code. Flask also does not dictate design choices. Flask apps can be hundreds of files in size, structured in nearly any way, or just one single file. The size and complexity of the project is determined by the scope of your work.

Here is the absolute smallest Flask app that you could write (and you can use this as the boilerplate code to start any of your projects from):

# my-jamf-app.py
import flask

app = flask.Flask(__name__)


@app.route('/')
def root():
    return "Hello Penn State MacAdmins!"

if __name__ == '__main__':
    app.run()

Seven lines.

You should have Flask installed and available for your project in the virtual environment created in the first post. At the top of our file we are importing the package.

Then we create an app object that is an instance of the flask.Flask() class:

app = flask.Flask(__name__)

This object will represent the web app throughout our code. To add endpoints, or routes, to the web app we will use the route() decorator.

Decorators are a special kind of Python syntax that “wrap” one function with another function.

In this case, the route() decorator will register an endpoint based upon the path we give as its first argument and then execute the wrapped function below it whenever that endpoint is requested! You’ll be able to see this clearly in a moment.

@app.route('/')
def root():
    return "Hello Penn State MacAdmins!"

The “/” path means the root of the web server. Once this app is running you will be able to reach it in your web browser by navigating to http://localhost:5000; the “/” path resolves to that address.
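
For example (not part of the tutorial’s app, just an illustration), adding a second decorated function registers a second endpoint at a different path:

@app.route('/hello')
def hello():
    # Reachable at http://localhost:5000/hello while the app is running
    return "Hello from another route!"

Each decorated function becomes its own URL on the web app.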

Flask has a built-in development server you can start by calling the run() method on the app object. When you call a Python file as a script from the command line, the __name__ dunder is set to a value of __main__.

If you are a little confused by the word “dunder” at this point don’t worry, you can continue on without understanding some of these concepts, but you may want to brush up on your Python with some online resources.

By checking if the __name__ dunder is __main__ you can control what your Python scripts do based on whether they have been called from the command line or, later on, imported into other Python code. When imported, the __name__ dunder takes on the name of the module (the file name without the .py extension)!
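
A quick illustration of that behavior (the file name is hypothetical):

# example_module.py
print(__name__)

# $ python example_module.py   --> prints "__main__"
# >>> import example_module    --> prints "example_module"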

So, the last two lines of this single file Flask app mean it will only run the app using the development server if it has been called as a script from the command line:

if __name__ == '__main__':
    app.run()
(byojamf) ~$ python /path/to/my-jamf-app.py
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

Your web app is now available when you navigate to http://localhost:5000 in your web browser. Give it a try and you should see the text message from the code! See how, when you request the “/” root of the web app, the root() function executes?

Decorators in action.

Next up…

Accepting Webhooks and Testing with Jook

I’ll see you in Part II tomorrow!

Open Distribution Server and JNUC 2017

Two posts in one day! I wanted to do a quick JNUC update and promote a session that I’m really excited for.

This year, as with years past, I will be pretty involved with the conference. Aside from finding me roaming the halls of the Hyatt, I am on the committee for the first ever JNUC Hackathon, participating in the API Scripting and Webhooks labs, and delivering the Webhooks Part Deux! presentation with Oliver Lindsey from our Pro Services team.

But the session I am most excited about is a very late addition that was put onto the JNUC App’s schedule this morning.

The Open Distribution Server

Around July (around the time of Penn State MacAdmins), I began work on an alternative distribution server to the JDS. As the community recently learned, the JDS has been discontinued and will no longer be supported by Jamf as cloud-centric options are being focused on. Prior to that announcement, I was involved in some talks with Product Management at Jamf about the JDS, and I took the opportunity to show them what I was working on.

Joe Bloom, our Jamf Pro Product Manager who you will hear talk at several product sessions this year, was very excited about this and urged me to continue working on my distribution server and release it as a free, open source solution.

Joe has secured an additional session slot on Tuesday at 4:00 PM dedicated to the Open Distribution Server. You can find it at the link or in the JNUC App (it is not listed on the website).

During this session I’m going to talk about the goals of this project, what it aims to solve, what features I have implemented and plan to implement, but then turn the rest of the time over to you so we can talk about the key things that will make this a successful solution:

  • What features don’t work as described or need to be changed to fit your workflows?
  • What features are missing that you need?
  • How can the community contribute to this project?

The current code base for this project was posted to GitHub a couple weeks ago:

https://github.com/brysontyrrell/ODST/tree/develop

The Open Distribution Server (ODS) is an open-source package distribution and syncing solution for IT administrators to serve as a potential alternative for the Jamf Distribution Server.

For those looking for an on-premise, automated distribution point solution, and those who are in need of a replacement for their JDS infrastructure, please attend and be a part of the discussion.

I hope to see you there!

Build Your Own Jamf Pro Integrations: A Tutorial Series

byo-jpi-logo

During Penn State MacAdmins 2017, I delivered my first ever workshop style session entitled “Build Your Own Jamf Pro Integrations.” Going in I felt well prepared and convinced that people would walk away feeling empowered to build new kinds of functionality atop their Jamf Pro environments.

The result was not what I envisioned.

Information about the level of Python experience required coming into the workshop was lost in the posting. My slide pacing was too fast. People weren’t given enough time to write down the examples on screen before I moved on to lab sections, which did not contain any code examples. Because a lot of the group wasn’t at the needed experience level with Python, I burned through extra time moving around to help them troubleshoot issues and reach the next step. I ended up not finishing all of the content I had prepared, and the workshop was left with an unfinished air about it.

Overall, I wasn’t too happy with how the workshop turned out. Some of the attendees afterwards gave me feedback on what could be done to improve a future version (which I was also encouraged to submit for 2018): make sure the prerequisite experience for the workshop is clearly communicated; make the full code available prior to the workshop to ease transitioning between sections; line up volunteer assistants to help others with issues and errors as they arise so I can focus on delivery; and do a full day, not a half, since I had more than enough content to fill it.

Later, when the form-submitted feedback was compiled and provided, I found that on the whole the sentiments above were shared by many. They enjoyed the content, they liked the hands-on approach, but structure and timing prevented them from getting the most out of it. The reception was better than I had expected (coming from an IT background, I know it’s more likely that people will submit complaints than compliments).

While I still intend to submit this workshop again for MacAdmins 2018, I have decided to adapt the slide deck into a multi-part tutorial series that will cover building your first Flask (Python) based integration with Jamf Pro and Slack.

Once this post has gone live, each part of the tutorial will go up once a day until the series has been completed. The full code for the complete sample project will be available on my GitHub on day of the last posting.

Requirements

You can use either Python 2 or Python 3, but you will need to create a virtual environment and install the required modules from the Python Package Index (PyPI).

If you are going to use the Python installation that came with your Mac (Python 2) you can install pip and virtualenv using the following commands:

~$ sudo easy_install pip
~$ sudo pip install virtualenv

If you have installed another copy of Python (either 2 or 3) on your Mac, or you are installing Python on Windows or Linux, these commands should be available as a part of that.

Create the virtual environment for the project somewhere in your user directory (the root or your Documents folder would both work):

~$ virtualenv byojamf

To use your new virtual environment, call its activate script and you will see the environment’s name appear in parentheses in the terminal session:

~$ source /path/to/byojamf/bin/activate
(byojamf) ~$

Now install the three PyPI modules that we will be using for this sample project:

(byojamf) ~$ pip install flask jook requests

Flask is a microframework for writing web applications.

GitHub: https://github.com/pallets/flask
Documentation: http://flask.pocoo.org/docs/latest/

Requests is an HTTP client library that we will be using to make API requests.

GitHub: https://github.com/requests/requests
Documentation: http://docs.python-requests.org/en/master/

Jook is a Jamf Pro Webhooks simulator to allow you to test your integration.

GitHub: https://github.com/brysontyrrell/Jook
Documentation: http://jook.readthedocs.io/en/latest/

Next up…

An Intro to Webhooks and Flask

I’ll see you in Part I tomorrow!

Casper-HC: the friendly HipChat plugin

Hello all.

The last time I posted on this blog I was still working in the IT department at Jamf. I happen to still be working at Jamf, but after five years of watching the IT team grow up I was approached with an opportunity to make a difference within our Cloud & Delivery team (Jamf Cloud) as a System Administrator.  You’ll likely hear more about that as time goes on.

Now, I wouldn’t have taken any job that didn’t involve me writing lots, and lots, of Python code, and as it so happens I’m currently in the middle of building an API using my all-time favorite microframework: Flask.

In the month leading up to my transition I tried to burn through code-completion (in my book that means the application was in a fully usable state if not feature complete) on several web apps that I had rolling. I’ve been given permission to push all of these to the Jamf IT GitHub page so you’ll see a series of blog posts detailing each of them.

Those projects include:

  • Casper-HC
    (This post)
  • QuickDNS 2
    A front-end to a DNS (bind9) that allows the creation of randomized and chosen names and managing those names using a RESTful API.
  • Apple School Manager Photo Server
    A side-project for Jamf that allowed numerous Jamf Pro servers to demo Apple School Manager’s photo integration (a bit of a niche).
  • Avatar Server (Employee Photos)
    An internal employee photo service that mimics the way photos are delivered from Gravatar with options for scaling and auto-cropping. This was originally written for us to be able to use Facewall in our Minneapolis office lobby.

So, about Casper-HC…

When webhooks finally, finally, came into Jamf Pro with v9.93 I was very excited about what that meant for the next generations of plugins/integrations that could start to be developed. I had openly talked about a bare-bones HipChat plugin that I had been working on and my desire to build real time notifications into its features. I had also talked about doing the same with Slack.

We’re an Atlassian shop so HipChat got my attention first (sorry…).

casper-hc-installed.png

Casper-HC is the friendly HipChat plugin to Jamf Pro:

https://github.com/jamfit/Casper-HC

The original (internal) plugin I wrote for HipChat and Jamf Pro was purely a search interface in chat format. Type some commands into a room with some search terms and you’ll get nicely rendered results back.

The all-new plugin is a Flask application that is installed per-room (not globally) and makes much better use of HipChat’s API framework. All of the original search functionality has been preserved, improved, and I’ve integrated the new webhooks to provide notifications.

Get started.

When you install the plugin into a room you’ll get the nice notification you see above providing a randomized endpoint for you to send webhook events to. This is partly security through obscurity as there are no authentication options from the Jamf Pro server to the destination. It also links the inbound webhook to the appropriate chatroom.

If authentication makes its way into the product I can add in support.

An extra step of configuring a service account for accessing the REST API is required before using the search features, but notifications are immediately available. This allows you to install the plugin into rooms that will purely display notifications without any other features.

Follow the suggestion to type casper help to learn more.

casper-hc-help.png

Not all the help text has been implemented at this time, but we can see that enabling a service account is done on the configuration page for the plugin for this room. Heading there gives us a very bare bones screen for entering a URL, username, and a password for the JSS.

 

casper-hc-configure.png

Clicking Save will perform a quick test to the Jamf Pro server to verify the account can actually authenticate against it. Upon success the username and password are encrypted and saved in the database and you will receive two notifications.

 

casper-hc-configured.png

casper-hc-configured-notify.png

A future feature would be to verify the service account has all the required permissions for the search functions. That currently isn’t being handled.

What all does it do?

While not a huge deal, you can always grab your current version of the Jamf Pro server.

casper-hc-version.png

Right away notifications were available to setup in the new room. You might have noticed that all of the system notifications from the plugin are shown in purple. Different types of notifications will have different colors to help them stand out in high traffic rooms and in some cases provide a little context to what occurred.

casper-hc-notifications.png

The following webhooks are supported:

  • ComputerAdded
  • ComputerCheckIn
  • ComputerInventoryCompleted
  • JSSShutdown
  • JSSStartup
  • MobileDeviceCheckIn
  • MobileDeviceEnrolled
  • MobileDeviceUnEnrolled
  • PatchSoftwareTitleUpdated
  • RestAPIOperation

Some of those could easily flood a chat room which is where installing across multiple rooms comes in handy. One trick with RestAPIOperation notifications is that the plugin, for the installed room, will ignore API notifications triggered by the service account for that room, but those API calls would appear in another room also receiving API notifications.

The search functionality covers computers, mobile devices, and users. As a design choice I skipped the “slash” command convention and made the plugin listen for room messages beginning with “casper” and then the appropriate keyword. The regular expressions support shortnames for each of them so you can type quicker:

  • c|omputer|s
  • m|obile|s
  • u|ser|s

casper-hc-computers1.png

All of the search commands follow the same syntax of:

casper [command] (search string)

Computer and mobile device searches take advantage of the /match/ endpoints in the API. This effectively replicates the functionality of being logged into Jamf Pro and using the search boxes. When searching these devices you can:

  • Use wildcards (*)
  • Matches on most device identifiers and location data
  • Return single and list results

Users are a little different in that they have no similar feature in the API. Instead, you can match a user by passing either the matching username or email address for their record (no wildcards here).
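
To give a sense of what those searches translate to behind the scenes, here is a rough sketch of the kind of Classic API call involved, using the requests library (the server address and credentials are hypothetical):

import requests

JSS_URL = "https://jss.example.com"  # hypothetical Jamf Pro address

response = requests.get(
    JSS_URL + "/JSSResource/computers/match/jappleseed*",
    auth=("service_account", "service_password"),  # hypothetical service account
    headers={"Accept": "application/json"},
)
matches = response.json().get("computers", [])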

Nearly every notification contains a weblink back to the original object in the Jamf Pro web interface. This makes the plugin extremely handy for techs by eliminating a large number of clicks in order to get to the device records of interest. If you have a chat room setup to receive notifications for support tickets you can search that user immediately without leaving the window (this is what the buzzword “chat-ops” refers to).

How do you start using it?

As I mentioned earlier, Casper-HC is just a Flask app. It requires a MySQL database for the backend and a web server for the front end, but it isn’t dependent upon any specific platform. You can find some more instructions on getting up and running in both a test and production environment in the README file for the project’s repository.

The running plugin that is being used for chat rooms at Jamf is deployed as a Docker container. You can find the setup for this here:

https://github.com/jamfit/Casper-HC-Docker

The docker-compose setup launches three containers: Nginx, Casper-HC + uWSGI, and MySQL. A data volume is created to persist the database between container tear-downs. You can find out more in the README at the link.

If you decide to follow suit with deploying Casper-HC as a containerized app you will want to create a service to periodically run a mysqldump and backup your database.

Are there future plans?

With my move out of IT I won’t be directly working on improving the plugin for my own use any longer, but if people were to begin using it and open issues on the GitHub page for bug fixes and feature requests I can pick up the work or others can contribute to improving the plugin.

A few of the features I had planned on getting to:

  • Get computer/mobile/user group memberships in chat
  • View advanced search results in chat
  • Non-destructive MDM commands from HipChat cards (card actions)
  • File uploads to Jamf Pro (via chat attachments)

If your org uses HipChat and Jamf Pro, I’d like to encourage you to try it out and send some feedback my way.