Stop everything! Start using Pipenv!

Seven months ago I tweeted about Pipenv:

That was about it.

The project – which was at v3.6.1 at the time – still felt new. At Jamf, we were working on our strategy for handling project requirements and dependencies going forward for Jamf Cloud’s automation apps, and Pipenv, while a cool idea, didn’t yet have the maturity and traction we needed.

As @elios pointed out on the MacAdmins Slack: Pipenv was announced in January of this year as an experimental project.

Today it is the officially recommended Python packaging tool from Python.org.

In his words, “That’s awfully fast…”

The GitHub repository for Pipenv has 5,400+ stars and more than 100 contributors. The project is proceeding at breakneck development speed, adding all kinds of juicy features every time you turn around. It’s on v8.3.2.

Within ten minutes of finally, FINALLY, giving Pipenv a try today I made the conscious decision to chuck the tooling we had made and convert all of our projects over.

(Oh, and the person who wrote that tooling wholly agreed)

Let’s talk about that.

Part 1: Why requirements.txt sucks

There was a standard workflow for Python projects. You needed two key tools: pip and virtualenv. When developing, you would create a Python virtual environment using virtualenv in order to isolate your project from the system Python and other virtual environments.

You would then use pip to install all the packages your project needs. Once you had all of those packages installed you would then run a command to generate a requirements.txt file that listed every package at the exact installed version.

The requirements.txt file stays with your project. When someone else gets a copy of the repository to work on it or deploy it they would follow the same process of creating a virtual environment and then using pip to install all of the required packages by feeding that requirements.txt file into it.

Now, that doesn’t seem so bad, right? The drawbacks become apparent as you begin to craft your build pipelines around your projects.

Maintenance is a Manual Chore

The first thing to call out is how manual this process is. Once you’ve cloned a project you need to create a virtual environment at the same Python version and then install the packages. If you want to begin upgrading packages you need to remember to manually export a new requirements.txt file each time.

Removing packages is far more problematic. Uninstalling with pip will not remove dependencies for a package! Take Flask, for example. It’s not just one package. It has four sub-packages that get installed with it, and one of those (Jinja2) has its own sub-package.

So, maybe the answer is to manually maintain only the top-level packages in your requirements.txt file? That’s a hard no. You don’t want to do that because it makes your environment no longer deterministic. What do I mean? Those sub-packages we just talked about are NOT at fixed versions. Packages on the Python Package Index (PyPI) don’t declare their dependencies with fixed versions. Instead, they specify ranges that allow installs at a minimum and/or maximum version for each sub-package.
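As a purely hypothetical illustration, a library published to PyPI might declare its dependencies in its setup.py with specifiers like these:

from setuptools import setup

# Hypothetical library metadata: dependencies are declared as ranges,
# not exact pins, so what gets installed depends on when you install it.
setup(
    name='example-lib',
    version='1.0.0',
    install_requires=[
        'Jinja2>=2.4',          # minimum version only
        'requests>=2.0,<3.0',   # minimum and maximum
    ],
)

Install this library today and again six months from now and you can end up with different sub-package versions, and both installs still satisfy those specifiers.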

This makes a lot of sense when you think about it. When installing a package for the first time as a part of a new project, or updating a package in an existing project, you are going to want the latest allowed versions for all the sub-packages for any patches that have been made.

Not so much when it comes to a project/application. You developed and tested your code at specific package versions that are a known working state. This is past the development phase when bugs from updates to those packages can be identified and addressed. When it comes to deployment, your project’s requirements.txt file must be deterministic and list every package at the exact version.

Hence, maintenance of this file becomes a manual chore.

But Wait, There’s Testing!

There are a lot of packages that you might need to install that have nothing to do with running your Python application, but everything to do with running tests and building documentation. Common requirements here would be Pytest, Tox, and Sphinx (with maybe a theme). Important, but not needed in a deployment.

We want to be able to specify a set of packages to install that will be used during the testing and build process. The answer, unfortunately, is a second requirements.txt file. This one gets a special name like requirements-dev.txt and is only used during a build. It could contain just the build packages and be installed with pip after the standard requirements.txt, or it could contain everything: the project packages plus the build packages. In either case, the maintenance problem continues to grow.

So We Came Up With Something…

Our process at Jamf ended up settling on three types of requirements.txt files in an effort to address all the shortcomings described.

  • requirements.txt
    This file contained the full package list for the project at fixed versions. It would be used for spinning up new development environments or during deployment.
  • core-requirements.txt
    This file contained the top-level packages without fixed versions. Development environments would not be built from this. Instead, this file would be used for recreating the standard requirements.txt file at updated package versions and for eliminating orphaned packages that were no longer used or had been removed from the project.
  • build-requirements.txt
    This is a manually maintained file per-project that only contained the additional packages needed during the build process.

This approach isn’t too dissimilar to what others have implemented, and for many of the same reasons.

Part 2: Enter Pipfile

Not Pipenv? We’re getting there.

Pipfile was introduced almost exactly a year ago at the time of this post (first commit on November 18th, 2016). The goal of Pipfile is to replace requirements.txt and address the pain points that we covered above.

Here is what a Pipfile looks like:

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[dev-packages]



[packages]



[requires]

python_version = "2.7"

PyPI is the default source in all new Pipfiles. Using the scheme shown, you can add additional [source] sections with unique names to specify your internal Python package indexes (we have our own at Jamf for shared packages across projects).

There are two groups for listing packages: dev-packages and packages. These map onto how we were already handling things: only the build and test packages go into the dev-packages list, and all project packages go under the standard packages list.

Packages in the Pipfile can have fixed versions or be set to install whatever the latest version is.

[packages]

flask = "*"
requests = "==2.18.4"

The [requires] section dictates the version of Python that the project is meant to run under.

From the Pipfile you would create what is called a Pipfile.lock which contains all of the environment information, the installed packages, their installed versions, and their SHA256 hashes. The hashes are a recent security feature of pip to validate packages that are installed when deployed. If there is a hash mismatch the install can be aborted. A powerful security tool for preventing malicious packages from entering your environments.

Note that you can specify the SHA256 hashes of packages in a normal requirements.txt file. This is a feature of pip and not of Pipfile.

It is this Pipfile.lock that will be used to generate environments on other systems whether for development or deployment. The Pipfile will be used for maintaining packages and dependencies and regenerating your Pipfile.lock.
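The lock file itself is just JSON, so it is easy to inspect. Here’s a minimal sketch, assuming the standard layout with "default" and "develop" sections, that prints each locked package with its pinned version and the number of recorded hashes:

import json

with open('Pipfile.lock') as f:
    lock = json.load(f)

# "default" holds [packages], "develop" holds [dev-packages]
for section in ('default', 'develop'):
    for name, info in sorted(lock.get(section, {}).items()):
        version = info.get('version', '(no pinned version)')
        hashes = info.get('hashes', [])
        print('{}: {} {} ({} hashes)'.format(section, name, version, len(hashes)))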

All of this is just a specification. Pip itself does not yet support Pipfiles, but there is another…

Part 3: Pipenv Cometh

Pipenv is the implementation of the Pipfile standard. It is built on top of pip and virtualenv and manages both your environment and package dependencies, and it does so like a boss.

Get Started

Install Pipenv with pip (or homebrew if that’s your jam).

$ pip install pipenv

Then in a project directory create a virtual environment and install some packages!

$ pipenv --two
Creating a virtualenv for this project…
<...install output...>
Virtualenv location: /Users/me/.local/share/virtualenvs/test-zslr3BOw
Creating a Pipfile for this project…

$ pipenv install flask
Installing flask…
<...install output...>
Adding flask to Pipfile's [packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (36eec0)!

$ pipenv install requests==2.18.4
Installing requests==2.18.4…
<...install output...>
Adding requests==2.18.4 to Pipfile's [packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (572f23)!

$

Notice how we see the Locking messages after every install? Pipenv automatically regenerates the Pipfile.lock each time the Pipfile is modified. Your fixed environment is being automatically maintained!

Graph All The Things

Let’s look inside the Pipfile itself.

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[dev-packages]



[packages]

flask = "*"
requests = "==2.18.4"


[requires]

python_version = "2.7"

No sub-packages? Nope. It doesn’t need to track those (they end up in the Pipfile.lock, remember?). But, if you’re curious, you can use the handy graph feature to view a full dependency tree of your project!

$ pipenv graph
Flask==0.12.2
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
   - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]
 
$

Check that out! Notice how you can see the requirement that was specified for the sub-package in addition to the actual installed version?

Environment Management Magic

Now let’s uninstall Flask.

$ pipenv uninstall flask
Un-installing flask…
<...uninstall output...>
Removing flask from Pipfile…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (4ddcaf)!

$

And re-run the graph command.

$ pipenv graph
click==6.7
itsdangerous==0.24
Jinja2==2.9.6
 - MarkupSafe [required: >=0.23, installed: 1.0]
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]
Werkzeug==0.12.2

 

Yes, the sub-packages have now been orphaned within the existing virtual environment, but that’s not the real story. If we look inside the Pipfile we’ll see that requests is the only package listed, and if we look inside our Pipfile.lock we will see that only requests and its sub-packages are present.

We can regenerate our virtual environment cleanly with only a few commands!

$ pipenv uninstall --all
Un-installing all packages from virtualenv…
Found 20 installed package(s), purging…
<...uninstall output...>
Environment now purged and fresh!

$ pipenv install
Installing dependencies from Pipfile.lock (f58d9f)…
 🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 5/5 — 00:00:01
To activate this project's virtualenv, run the following:
 $ pipenv shell

$ pipenv graph
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]

$

Mindblowing!!!

Installing the dev-packages for our builds uses an additional flag with the install command.

$ pipenv install sphinx --dev
Installing sphinx…
<...install output...>
Adding sphinx to Pipfile's [dev-packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (d7ccf2)!

The appropriate locations in our Pipfile and Pipfile.lock have been updated! To install the dev environment, perform the same regeneration steps as above but add the --dev flag.

$ pipenv install --dev
Installing dependencies from Pipfile.lock (f58d9f)…
 🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 18/18 — 00:00:05
To activate this project's virtualenv, run the following:
 $ pipenv shell

Part 4: Deploy Stuff!

The first project I decided to apply Pipenv to in order to learn the tool is ODST. While there is a nice feature in Pipenv where it will automatically import a requirements.txt file if detected, I opted to start clean and install all my top-level packages directly. This gave me a proper Pipfile and Pipfile.lock.

Here’s the resulting Pipfile.

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"


[dev-packages]

pytest = "*"
sphinx = "*"
sphinx-rtd-theme = "*"


[packages]

flask = "*"
cryptography = "*"
celery = "*"
psutil = "*"
flask-sqlalchemy = "*"
pymysql = "*"
requests = "*"
dicttoxml = "*"
pyjwt = "*"
flask-login = "*"
redis = "*"


[requires]

python_version = "2.7"

Here’s the graph of the installed packages (without the dev-packages).

$ pipenv graph
celery==4.1.0
 - billiard [required: >=3.5.0.2,<3.6.0, installed: 3.5.0.3]
 - kombu [required: <5.0,>=4.0.2, installed: 4.1.0]
 - amqp [required: >=2.1.4,<3.0, installed: 2.2.2]
 - vine [required: >=1.1.3, installed: 1.1.4]
 - pytz [required: >dev, installed: 2017.3]
cryptography==2.1.3
 - asn1crypto [required: >=0.21.0, installed: 0.23.0]
 - cffi [required: >=1.7, installed: 1.11.2]
 - pycparser [required: Any, installed: 2.18]
 - enum34 [required: Any, installed: 1.1.6]
 - idna [required: >=2.1, installed: 2.6]
 - ipaddress [required: Any, installed: 1.0.18]
 - six [required: >=1.4.1, installed: 1.11.0]
dicttoxml==1.7.4
Flask-Login==0.4.0
 - Flask [required: Any, installed: 0.12.2]
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
 - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
Flask-SQLAlchemy==2.3.2
 - Flask [required: >=0.10, installed: 0.12.2]
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
 - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
 - SQLAlchemy [required: >=0.8.0, installed: 1.1.15]
psutil==5.4.0
PyJWT==1.5.3
PyMySQL==0.7.11
redis==2.10.6
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]

In my Dockerfiles for the project I swapped out requirements.txt and switched to installing my packages from the Pipfile.lock into the system Python (in a containerized app there’s no real need to create a virtual environment).

RUN /usr/bin/apt-get update -q && \
    /usr/bin/apt-get install -qqy build-essential git && \
    /usr/bin/apt-get install -qqy python-pip python-dev && \
    /usr/bin/pip install pipenv && \
    /usr/bin/apt-get install -qqy libssl-dev libffi-dev && \
    /usr/bin/apt-get install -qqy uwsgi uwsgi-plugin-python && \
    /usr/bin/apt-get clean && \
    /bin/rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY /docker/webapp/web-app.ini /etc/uwsgi/apps-enabled/

COPY /Pipfile* /opt/odst/
WORKDIR /opt/odst/
RUN /usr/local/bin/pipenv install --system --deploy

COPY /ods/ /opt/odst/ods/
COPY /application.py /opt/odst/
COPY /docker/ods_conf.cfg /opt/odst/

RUN /bin/chown -R www-data:www-data /opt/odst/ods/static

CMD ["/usr/bin/uwsgi", "--ini", "/etc/uwsgi/apps-enabled/web-app.ini"]

The --system flag tells Pipenv to install to the system Python. The --deploy flag will abort the installation if the Pipfile.lock is out of date or the Python version is wrong. Out of date? Pipenv knows if your Pipfile.lock matches up with the Pipfile by comparing a SHA256 hash of the Pipfile that it has saved. If there’s a mismatch then there is an issue and it aborts.

If I didn’t want to rely on this hash match feature, I could instead pass the --ignore-pipfile flag that will tell it to proceed using only the Pipfile.lock.

I should mention at this point that Pipenv dramatically speeds up build times. When using pip your packages will install sequentially. Pipenv will install packages in parallel. The difference is immediately noticeable, especially when regenerating your development environments.

Still, The World is Catching Up

Python development workflows are clearly moving to this tool, but Pipenv is less than a year old at this point and not everything will support generating environments from the provided Pipfiles.

The two I am currently using that fall in that category are AWS Elastic Beanstalk and ReadTheDocs.org. Both need a requirements.txt file in order to build the Python environments.

That is thankfully a simple task. You can generate a requirements.txt file from Pipenv by using the lock option.

$ pipenv lock --requirements > requirements.txt

 

For Elastic Beanstalk applications, I can have this command run as a part of my build pipeline so the file is included with the ZIP archive before it goes to S3.

In the case of Read the Docs there is an open issue for adding Pipfile support, but until then I will need to generate the requirements.txt file as I make changes to my environment and save it with the repository.

For Read the Docs I’ll want the extra dev-packages. This is done by using the pipenv run command, which executes whatever follows it as if it were run inside the project’s virtual environment.

$ pipenv run pip freeze > requirements.txt

The bright side is that I am no longer actually managing this file. It is straight output from my Pipenv managed environment!

Part 5: Profit

I hope you enjoyed my overview and impressions of Pipenv. It’s a tool that is going to have a huge impact on development and deployment workflows – for the better!

A huge shout out to Kenneth Reitz for creating yet another invaluable package for the Python community!

* Edit Nov 9: Changed the command for exporting a requirements.txt file. Using pipenv lock -r -d only outputs the packages under the dev-packages section.


Build Your Own Jamf Pro Integrations: A Tutorial Series


During Penn State MacAdmins 2017, I delivered my first ever workshop style session entitled “Build Your Own Jamf Pro Integrations.” Going in I felt well prepared and convinced that people would walk away feeling empowered to build new kinds of functionality atop their Jamf Pro environments.

The result was not what I envisioned.

Information about the level of experience with Python that was required coming into the workshop was lost in the posting. My slide pacing was too fast. People weren’t given enough time to write down the examples on screen before I would move on to lab sections which did not contain any code examples. Due to a lot of the group not being at the needed experience level with Python I burned through extra time moving around to help them troubleshoot issues and reach the next step. I ended up not finishing all of the content I had prepared and the workshop was left with an unfinished air about it.

Overall, I wasn’t too happy with how the workshop turned out. Some of the attendees afterwards gave me feedback about what could be done to improve a future version (which I was also encouraged to submit for 2018):

  • Make sure the prerequisite experience for the workshop is clearly communicated.
  • Have the full code available prior to the workshop to make transitioning between sections easier.
  • Use volunteer assistants to help attendees with issues and errors as they arise, allowing me to focus on delivery.
  • Do a full day and not a half: I had more than enough content to fill it.

Later, when the submitted feedback forms were compiled and provided, I found that on the whole the sentiments above were shared by many. They enjoyed the content, they liked the hands-on approach, but structure and timing prevented them from getting the most out of it. The reception was better than I had expected (coming from an IT background, I know it’s more likely that people will submit complaints than compliments).

While I still intend to submit this workshop again for MacAdmins 2018, I have decided to adapt the slide deck into a multi-part tutorial series that will cover building your first Flask (Python) based integration with Jamf Pro and Slack.

Once this post has gone live, each part of the tutorial will go up once a day until the series has been completed. The full code for the complete sample project will be available on my GitHub on the day of the last posting.

Requirements

You can use either Python 2 or Python 3, but you will need to create a virtual environment and install the required modules from the Python Package Index (PyPI).

If you are going to use the Python installation that came with your Mac (Python 2) you can install pip and virtualenv using the following commands:

~$ sudo easy_install pip
~$ sudo pip install virtualenv

If you have installed another copy of Python (either 2 or 3) on your Mac, or you are installing Python on Windows or Linux, these commands should be available as a part of that.

Create the virtual environment for the project somewhere in your user directory (the root or your Documents folder would both work):

~$ virtualenv byojamf

To use your new virtual environment, call its activate script and you will see the environment’s name appear in parentheses in the terminal session:

~$ source /path/to/byojamf/bin/activate
(byojamf) ~$

Now install the three PyPI modules that we will be using for this sample project:

(byojamf) ~$ pip install flask jook requests

Flask is a microframework for writing web applications.

GitHub: https://github.com/pallets/flask
Documentation: http://flask.pocoo.org/docs/latest/

Requests is an HTTP client library that we will be using to make API requests.

GitHub: https://github.com/requests/requests
Documentation: http://docs.python-requests.org/en/master/

Jook is a Jamf Pro Webhooks simulator to allow you to test your integration.

GitHub: https://github.com/brysontyrrell/Jook
Documentation: http://jook.readthedocs.io/en/latest/

Next up…

An Intro to Webhooks and Flask

I’ll see you in Part I tomorrow!

Casper-HC: the friendly HipChat plugin

Hello all.

The last time I posted on this blog I was still working in the IT department at Jamf. I happen to still be working at Jamf, but after five years of watching the IT team grow up I was approached with an opportunity to make a difference within our Cloud & Delivery team (Jamf Cloud) as a System Administrator.  You’ll likely hear more about that as time goes on.

Now, I wouldn’t have taken any job that didn’t involve me writing lots, and lots, of Python code, and as it so happens I’m currently in the middle of building an API using my all-time favorite microframework: Flask.

In the month leading up to my transition I tried to burn through code-completion (in my book that means the application was in a fully usable state if not feature complete) on several web apps that I had rolling. I’ve been given permission to push all of these to the Jamf IT GitHub page so you’ll see a series of blog posts detailing each of them.

Those projects include:

  • Casper-HC
    (This post)
  • QuickDNS 2
    A front-end to a DNS (bind9) that allows the creation of randomized and chosen names and managing those names using a RESTful API.
  • Apple School Manager Photo Server
    A side-project for Jamf that allowed numerous Jamf Pro servers to demo Apple School Manager’s photo integration (a bit of a niche).
  • Avatar Server (Employee Photos)
    An internal employee photo service that mimics the way photos are delivered from Gravatar with options for scaling and auto-cropping. This was originally written for us to be able to use Facewall in our Minneapolis office lobby.

So, about Casper-HC…

When webhooks finally, finally, came into Jamf Pro with v9.93 I was very excited about what that meant for the next generations of plugins/integrations that could start to be developed. I had openly talked about a bare-bones HipChat plugin that I had been working on and my desire to build real time notifications into its features. I had also talked about doing the same with Slack.

We’re an Atlassian shop so HipChat got my attention first (sorry…).

casper-hc-installed.png

Casper-HC is the friendly HipChat plugin to Jamf Pro:

https://github.com/jamfit/Casper-HC

The original (internal) plugin I wrote for HipChat and Jamf Pro was purely a search interface in chat format. Type some commands into a room with some search terms and you’ll get nicely rendered results back.

The all-new plugin is a Flask application that is installed per-room (not globally) and makes much better use of HipChat’s API framework. All of the original search functionality has been preserved, improved, and I’ve integrated the new webhooks to provide notifications.

Get started.

When you install the plugin into a room you’ll get the nice notification you see above providing a randomized endpoint for you to send webhook events to. This is partly security through obscurity as there are no authentication options from the Jamf Pro server to the destination. It also links the inbound webhook to the appropriate chatroom.

If authentication makes its way into the product I can add in support.

An extra step of configuring a service account for accessing the REST API is required before using the search features, but notifications are immediately available. This allows you to install the plugin into rooms that will purely display notifications without any other features.

Follow the suggestion to type casper help to learn more.

casper-hc-help.png

Not all the help text has been implemented at this time, but we can see that enabling a service account is done on the configuration page for the plugin for this room. Heading there gives us a very bare bones screen for entering a URL, username, and a password for the JSS.

 

casper-hc-configure.png

Clicking Save will perform a quick test to the Jamf Pro server to verify the account can actually authenticate against it. Upon success the username and password are encrypted and saved in the database and you will receive two notifications.

 

casper-hc-configured.png

casper-hc-configured-notify.png

A future feature would be to verify the service account has all the required permissions for the search functions. That currently isn’t being handled.

What all does it do?

While not a huge deal, you can always grab your current version of the Jamf Pro server.

casper-hc-version.png

Right away notifications were available to set up in the new room. You might have noticed that all of the system notifications from the plugin are shown in purple. Different types of notifications will have different colors to help them stand out in high traffic rooms and in some cases provide a little context to what occurred.

casper-hc-notifications.png

The following webhooks are supported:

  • ComputerAdded
  • ComputerCheckIn
  • ComputerInventoryCompleted
  • JSSShutdown
  • JSSStartup
  • MobileDeviceCheckIn
  • MobileDeviceEnrolled
  • MobileDeviceUnEnrolled
  • PatchSoftwareTitleUpdated
  • RestAPIOperation

Some of those could easily flood a chat room which is where installing across multiple rooms comes in handy. One trick with RestAPIOperation notifications is that the plugin, for the installed room, will ignore API notifications triggered by the service account for that room, but those API calls would appear in another room also receiving API notifications.

The search functionality covers computers, mobile devices, and users. As a design choice I skipped the “slash” command convention and made the plugin listen for room messages beginning with “casper” and then the appropriate keyword. The regular expressions support shortnames for each of them so you can type quicker:

  • c|omputer|s
  • m|obile|s
  • u|ser|s

casper-hc-computers1.png

All of the search commands follow the same syntax of:

casper [command] (search string)
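As a rough idea of how that kind of matching could work (a hypothetical sketch, not the plugin’s actual code), a single regular expression with optional characters covers the shortnames and the full words alike:

import re

# Hypothetical pattern: matches "casper c foo", "casper computers foo",
# "casper mobile foo", "casper u foo", etc., capturing the search string.
MESSAGE_PATTERN = re.compile(
    r'^casper\s+'
    r'(?P<command>c(?:omputer)?s?|m(?:obile)?s?|u(?:ser)?s?)\s+'
    r'(?P<search>.+)$',
    re.IGNORECASE,
)

match = MESSAGE_PATTERN.match('casper computers MacBook*')
if match:
    print(match.group('command'), match.group('search'))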

Computer and mobile device searches take advantage of the */match/* endpoints in the API. This effectively replicates the functionality of being logged into Jamf Pro and using the search boxes. When searching these devices you can:

  • Use wildcards (*)
  • Match on most device identifiers and location data
  • Return single and list results

Users are a little different in that they have no similar feature in the API. Instead, you can look up a user by passing either the exact username or email address for their record (no wildcards here).

Nearly every notification contains a weblink back to the original object in the Jamf Pro web interface. This makes the plugin extremely handy for techs by eliminating a large number of clicks in order to get to the device records of interest. If you have a chat room setup to receive notifications for support tickets you can search that user immediately without leaving the window (this is what the buzzword “chat-ops” refers to).

How do you start using it?

As I mentioned earlier, Casper-HC is just a Flask app. It requires a MySQL database for the backend and a web server for the front end, but it isn’t dependent upon any specific platform. You can find some more instructions on getting up and running in both a test and production environment in the README file for the project’s repository.

The running plugin that is being used for chat rooms at Jamf is deployed as a Docker container. You can find the setup for this here:

https://github.com/jamfit/Casper-HC-Docker

The docker-compose setup launches three containers: Nginx, Casper-HC + uWSGI, and MySQL. A data volume is created to persist the database between container tear-downs. You can find out more in the README at the link.

If you decide to follow suit with deploying Casper-HC as a containerized app you will want to create a service to periodically run a mysqldump and backup your database.

Are there future plans?

With my move out of IT I won’t be directly working on improving the plugin for my own use any longer, but if people were to begin using it and open issues on the GitHub page for bug fixes and feature requests I can pick up the work or others can contribute to improving the plugin.

A few of the features I had planned on getting to:

  • Get computer/mobile/user group memberships in chat
  • View advanced search results in chat
  • Non-destructive MDM commands from HipChat cards (card actions)
  • File uploads to Jamf Pro (via chat attachments)

If your org uses HipChat and Jamf Pro, I’d like to encourage you to try it out and send some feedback my way.

Scripting the stuff that you think is only in the JSS GUI

(Or Jamf Pro – I may owe Dean a dollar now…)

The JSS APIs are the first and best solution for writing automations or integrations against the data that’s in your JSS and taking action on it.

Still, those APIs sometimes have gaps in them. Things that you have access to in the GUI but not otherwise. Sometimes you will be staring at a button and asking yourself, “Why can’t I do this with the API?”

Well, perhaps you can.

In this post I am going to detail how you can replicate actions you see in the JSS GUI via scripting and open up new options to automating some normally manual processes.

It’s not really reverse engineering

Screen Shot 2016-11-15 at 10.57.44 AM.png

It’s easier to figure out what’s happening in a web interface than you might think. I’ll be using Chrome here to dig in and find out what is happening in the background. In Chrome, you will want to use a handy feature called “Inspect” which opens a console for you to view all sorts of data about the page you are on and what it is doing.

You can open that by right/control-clicking on the browser and selecting the option from the context menu.

To observe the various requests that are happening as you click around you will want to use the “Network” tab. This section details every action the current page is making as it loads. That includes page resources, images, HTML content and various other requests.

Screen Shot 2016-11-15 at 10.58.09 AM.png

As you can see there is a lot of stuff that gets loaded. Most of it you can ignore because it isn’t relevant to what we’re trying to accomplish. Keep this open, though, and watch carefully as you begin clicking on actions in the pages you are on.

Let’s use OS X Configuration Profiles as an example. Wouldn’t it be nice if you could trigger a download of a signed configuration profile from the JSS without having to go to the GUI? Let’s see what happens when the ‘Download’ button is clicked.

Screen Shot 2016-11-15 at 2.47.58 PM.png

An HTTP POST request was made to the current page! POST requests usually contain data, so if we scroll down to the bottom of the Headers tab we see that the browser sent the following form-encoded data.

Screen Shot 2016-11-15 at 2.53.11 PM.png

There’s a lot of stuff being submitted here, but we can rationally ignore most of it and focus on just two key values: action and session-token.

Performing a POST to the configuration profile’s page in the JSS with those two values as form data will result in us being able to get a signed configuration profile returned!

Now, about that session-token…

You will find as you inspect actions in the JSS GUI the value called the session-token is used almost everywhere, but what is it?  The value isn’t in the cookies for our browser session, but we know it is being submitted as a part of the form data. If the data isn’t in the session then it must be stored somewhere else, and because we know it is being sent with the form…

Screen Shot 2016-11-15 at 11.22.27 AM.png

The token is in the HTML as a hidden form field! The session-token has an expiration of 30 minutes (1800 seconds) at the time it is created and is contained within the page itself. We need only to get a page that contains this token, parse it and then use it until the expiration point has been reached and then obtain another one (this is a process the JSS session handles for you and you never have to think about when in the GUI, but it’s a bit different when you’re manually obtaining these tokens and need to keep track of time).

You knew Python was going to be in here somewhere

Let’s look at some Python code using the requests library to obtain one of these session-tokens. This is a little different than how you would interact with the REST API because we need to be logged into a session and obtain a cookie for our requests.

That’s a simple task with requests:

import requests

session = requests.Session()

data = {'username': 'your.name', 'password': 'your.pass'}
session.post('https://your.jss.org', data=data)

With the code above you have now obtained a cookie that will be used for all further interactions between you and the JSS. To parse out the session-token from a page we can use this tiny function to quickly handle the task:

def get_session_token(html_text):
    for line in html_text.splitlines():
        if 'session-token' in line:
            return line.encode('utf-8').translate(None, '<>"').split('=')[-1]

You would pass the returned HTML from a GET request into the function like so:

session_token = get_session_token(session.get('https://your.jss.org/OSXConfigurationProfiles.html?id=XXX').text)
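Because of that 30 minute expiration window, a small helper for tracking the token’s age can save you from re-requesting pages on every call. This is purely a hypothetical sketch, not anything the JSS or requests provides:

import time

# Caches a session-token and only fetches a new one after it has expired.
class SessionTokenCache(object):
    def __init__(self, max_age=1800):
        self.max_age = max_age
        self._token = None
        self._fetched_at = 0

    def get(self, fetch_token):
        # fetch_token is any callable that returns a fresh token string
        if not self._token or (time.time() - self._fetched_at) > self.max_age:
            self._token = fetch_token()
            self._fetched_at = time.time()
        return self._token

Wrap the GET request and get_session_token() call in a function, hand it to this cache, and you only re-parse a page when the old token has aged out.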

That tackles the most complicated piece about replicating GUI functions. Now that we can easily obtain session-tokens we can pass them with form data for anything we capture using the Chrome console.

Here’s the code to download a signed configuration profile and save it onto the Mac:

data = {'session-token': session_token, 'action': 'Download'}
r = session.post('https://your.jss.org/OSXConfigurationProfiles.html?id=XXX&o=r', data=data)

with open('/Users/me/Desktop/MyConfig.mobileconfig', 'wb') as f:
    f.write(r.content)

The r.content attribute returns the response data as bytes, instead of text like you saw above when r.text was passed to our get_session_token() function.

Double-click that .mobileconfig file and you’ll see a nice green Verified message along with the signing source being the JSS Built-In Signing Certificate.

Screen Shot 2016-11-15 at 3.23.23 PM.png

Now apply that EVERYWHERE

As you can see we were able to take a download action in the JSS and script it to pull down the desired file and save it locally without using a browser. Our process was:

  1. Perform the desired action once and observe the HTTP request and required data
  2. Start a session using an HTTP library or binary (in this example we used requests)
  3. Get a session-token from a JSS page
  4. Recreate the HTTP request using the library/binary passing the required form data with the session-token as expected

That sums it up. The key is you will need to perform the action you want to automate at least once so you can capture the request’s headers and determine what data you need to submit and how that data is going to be returned or what the response is expected to be.

Not everything in the JSS GUI will perform posts back to the exact same URI of the object you’re looking at, and the form data between these actions is likely to be different all over the place save for the presence of the session-token (from what I have observed so far).

And of course…

TEST TEST TEST TEST!!! That can never be stressed enough for anything you are doing. Be sure you’re not going to accidentally cause data loss or pull sensitive information and store it insecurely outside of the JSS. There are already plenty of ways to shoot yourself in the foot with the JSS, don’t add to it with a poorly written script.

Webhooks come to the JSS

There are some who said this day would never come…

This has been on my wish list for a very long time, and on the wishlists of several other people in the community that I’ve talked to about it. With the v9.93 update released yesterday we finally have webhooks for the JSS: real time outbound event notifications.

This is a big deal for those of us who work on building integrations into the Casper Suite. If you’ve wanted a service to run and take action on changes happening in the JSS you were normally forced to have an API script run on a schedule to pull in mass amounts of data to parse through. That’s not real time and computationally expensive if you’re an admin with a large environment.

How the Events API relates to Webhooks

There has been an alternative route to using the REST API, and that is the Events API. If you haven’t heard of it, that may be because it isn’t advertised too loudly. It was shown off at the 2012 JNUC by some of the JAMF development team.

The Events API is Java based. You write a plugin that registers for certain events and then it receives data to process. This all happens on the JSS itself as the plugin must be installed into the application. It is also Java which not many of us are all too fluent in. Plus if you use JAMF Cloud you don’t have access to the server so plugins aren’t likely to be in the cards anyway.

Enter webhooks.

This new outbound event notification feature is actually built ON TOP of the existing Events API. Webhooks translate the Events API event into an HTTP POST request in JSON or XML format. HTTP, JSON and XML. Those are all things that the majority of us not only understand but work with on an almost daily basis. They’re languages we know, and they’re agnostic to what you use to process them. You can use shell scripting, Python, Ruby, Swift; it doesn’t matter now!

How a webhook integration works

If you want to start taking advantage of webhooks for an integration or automation you’re working on, the first thing to understand is that webhooks need to be received by a web server external to the JSS. This diagram shows the basic idea behind what this infrastructure looks like:

basic_webhook_integration_diagram.png

Webhooks trigger as events occur within the JSS. The primary driver behind the majority of these events will be the check-in or inventory submission of your computers and mobile devices. When a change occurs the JSS will fire off the event to the web server hosting the integration you’ve created.

At that point your integration is going to do something with the data that it receives. Likely, you’ll want to parse the data for key values and match them to criteria before executing an action. Those actions could be anything. A few starting examples are:

  • Send emails
  • Send chat app notifications
  • Write changes to the JSS via the REST API
  • Write changes to a third-party service

Create a webhook in the JSS

There are a number of events from the Events API you can enable as an outbound webhook. They are:

  • ComputerAdded
  • ComputerCheckIn
  • ComputerInventoryCompleted
  • ComputerPolicyFinished
  • ComputerPushCapabilityChanged
  • JSSShutdown
  • JSSStartup
  • MobileDeviceCheckIn
  • MobileDeviceCommandCompleted
  • MobileDeviceEnrolled
  • MobileDevicePushSent
  • MobileDeviceUnEnrolled
  • PatchSoftwareTitleUpdated
  • PushSent
  • RestAPIOperation
  • SCEPChallenge
  • SmartGroupComputerMembershipChange
  • SmartGroupMobileDeviceMembershipChange

For a full reference on these webhook events you can visit the unofficial docs located at https://unofficial-jss-api-docs.atlassian.net/wiki/display/JRA/Webhooks+API

When you create the outbound webhook in the JSS you give it a descriptive name, the URL of your server that is going to receive the webhook, the format you want it sent in (XML or JSON) and then the webhook event that should be sent.

Webhooks_AddNew.png

Once saved you can see all of your webhooks in a simple at-a-glance summary on the main page:

Webhooks_List.png

That’s it. Now every time this event occurs it will be sent as an HTTP POST request to the URL you provided. Note that you can have multiple webhooks for the same event going to different URLs, but you can’t create webhooks that send multiple events to a single URL. At this time you need to create each one individually.

Create your integration

On the web server you specified in the webhook settings on the JSS you will need something to receive that HTTP POST and process the incoming data.

There are a number of examples for you to check out on my GitHub located here: https://github.com/brysontyrrell/Example-JSS-Webhooks

As a Python user I’m very comfortable using a microframework called Flask (http://flask.pocoo.org/). It’s simple to start with, powerful to use and allows you to scale your application easily.

Here’s the basic one-file example to getting started:

import flask

app = flask.Flask('my-app')

@app.route('/')
def index():
    return "<h1>Hello World!</h1>"

if __name__ == '__main__':
    app.run()

On line 1 we’re importing the flask module. On line 3 we’re instantiating our flask app object.

On line 5 we have a decorator that says “when a user goes to this location on the web server, run this function”. The ‘/’ location is the equivalent of http://localhost/, the root or index of our server. Our decorated function is only returning a simple HTML string for a browser to render in this example.

Line 9 will execute the application if we’re running it from the command line as so:

~$ python my_app.py

I will say at this point that this works just fine for testing out your Flask app locally, but don’t do this in production.

There are many, many guides all over the internet for setting up a server to run a Flask application properly (I use Nginx as my web server and uWSGI to execute the Python code). There are some articles on the Unofficial JSS API Docs site that will cover some of the basics.

With that being said, to make our Flask app receive data, we will modify our root endpoint to accept POST requests instead of GET requests (when using the @app.route() decorator it defaults to accepting only GET). We also want to process the incoming data.

This code block has the app printing the incoming data to the console as it is received:

import flask

app = flask.Flask('my-app')

@app.route('/', methods=['POST'])
def index():
    data = flask.request.get_json()  # returned JSON data as a Python dict()
    # Do something with the data here
    return '', 204

if __name__ == '__main__':
    app.run()

For processing XML you’ll need to import a module to handle that:

import flask
import xml.etree.ElementTree as Et

app = flask.Flask('my-app')

@app.route('/', methods=['POST'])
def index():
    data = Et.fromstring(flask.request.data)
    # Do something with the data here
    return '', 204

if __name__ == '__main__':
    app.run()

You can see that we changed the return at the end of the function to return two values: an empty string and an integer. This is an empty response with a status code of 204 (No Content). A 2XX status code signals to the origin of the request that it was successful.

This is just a technical point. The JSS will not do anything or act upon different success status codes or error codes. 200 would be used if some data were being returned to the requestor. 201 if an object were being created. Because neither of those are occurring, and we won’t be sending back any data, we’re using 204.

With this basic starting point you can begin writing additional code and functions to handle processing the inbound requests and taking action upon them.
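For example, here is a hedged sketch of what handling a single event might look like. It assumes the JSON payload carries a top-level "webhook" object with the event name and an "event" object with the details; check the payloads your own JSS sends, since field names like deviceName here are purely illustrative.

import flask

app = flask.Flask('my-app')

@app.route('/', methods=['POST'])
def index():
    data = flask.request.get_json()

    # Assumed layout: {"webhook": {"webhookEvent": "..."}, "event": {...}}
    event_name = data.get('webhook', {}).get('webhookEvent')
    event = data.get('event', {})

    if event_name == 'ComputerAdded':
        # "deviceName" is illustrative; confirm the field names in your payloads
        print('New computer enrolled: {}'.format(event.get('deviceName')))
        # ...send an email, post to a chat room, call the REST API, etc.

    return '', 204

if __name__ == '__main__':
    app.run()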

Resources

Here is a list of the links that I had provided in this post:

  • Unofficial JSS API Docs (Webhooks): https://unofficial-jss-api-docs.atlassian.net/wiki/display/JRA/Webhooks+API
  • Example JSS webhook integrations: https://github.com/brysontyrrell/Example-JSS-Webhooks
  • Flask: http://flask.pocoo.org/

HipSlack – because we can all be friends (and promoter…)

Channeling my inner David Hayter, “Kept you waiting, huh?”

But really, it has been a while. I’ve had a pretty tumultuous 2015 that pulled me away from projects both at JAMF and personally, and also took time away from the Mac admin communities, but now I’m starting to get back into writing code and just doing stuff like the good old days.

And the best way to do that is throw something out there!

Meet HipSlack


Someone, who shall remain nameless, joined JAMF and quipped about using Slack and HipChat side by side. It begged the question, “What, you mean like a bridge?”

45 minutes of my life later that night yielded the initial Flask app that accomplishes the bare minimum functionality that I can build off of: all messages from one HipChat room get piped into a Slack channel, and all the messages from that Slack channel get sent back to the HipChat room.

I’ve posted the initial source code to my GitHub. Right now it only supports one room to one channel and only sends text messages between them. My plans will include supporting multiple room/channel installs, transferring uploaded files between services, mapping emoji/emoticons to display correctly (where possible) and… maybe… see if @mentions could be made to work.

https://github.com/brysontyrrell/HipSlack

Check out the README for instructions on firing this up and playing around with it on your own. Also feel free to improve upon and submit pull requests if you want to take a crack at it before I get around to implementing more features/improvements.

And about promoter…

I threw that up too. Didn’t want people to believe it was vaporware.

https://github.com/brysontyrrell/promoter

Please, please don’t run that as production yet (there’s even a warning!).

I have my opinions on API design

So I’m going to write about them.

In this context I’m really talking about REST APIs: those wonderful HTTP requests between you and some application that allow you to do all sorts of great things with your data.  Most projects that I have the most fun with involve working with an API; reading about it, testing against it, building the solution with it and coming up with other crazy sh*t to do with it.

Quickly, about REST

REST APIs provide a simple interface to an application, normally over HTTP.  They follow the familiar CRUD pattern: create (POST), read (GET), update (PUT/PATCH) and delete (DELETE).  You use these methods to interact with ‘resources’ at different ‘endpoints’ of the service.  These endpoints will return and/or accept data to achieve a desired result.

That’s the high level overview.

From there you will start encountering a wide range of variations and differences.  Some APIs will allow XML with your requests, but not JSON.  Some work with JSON, but not XML.  APIs may require you to explicitly declare the media type you’re going to interact with.  You might have one API that accepts HTTP basic authentication while others have a token based authentication workflow (like OAuth/OAuth2).  There is a lot of variance in the designs from service to service.

Which brings us to the opinions on design

The APIs I do the most work with currently are for the JSS (of course), JIRA and HipChat, but I’ve also poked around in CrashPlan and Box on the side.  There are a lot of things that I like about all of these APIs and, frankly, some things that really irk me.  And, I mean, really irk me.  Those experiences started me in the direction of learning what it was like to create my own.

If you know me at all you know that I have a real passion for Python.  My current obsession has been Flask, a microframework for Python that allows you to write web applications.  I’ve been using it for HipChat add-ons that I’m developing, but I was really excited to get into Flask because I could start building my own REST APIs and dig into how they are designed.

Between working with established APIs and the reading and experimenting as I work on my own, I’ve determined there are a number of design choices I would want implemented in any API I worked with.

But it’s in the interface…

Two years ago I had the opportunity to attend Dreamforce.  That year was a biggie as Salesforce was transitioning their development platform and announced their intention to be mobile first and API first.  It was a pretty “phenomenal” keynote.  There were tons of sessions during the mega-conference devoted to the plethora of new and revamped APIs that now made up the Salesforce platform.  My big take away was a slide that provided an extremely high overview of the new stack.  All of Salesforce’s apps and services sat above a unified API layer.

I can’t say why that stuck with me so much at the time since I didn’t even know how to write a simple Python script, but it did.  This was the first big idea that I held onto about API design: implement your features at the API level first, document the implementation and then use that to build onto the end-user solution.

There are plenty of examples out there of services that segregate their user interface from their API and I’ve seen forums with a lot of developers or IT professionals asking why something was implemented in the GUI but inaccessible through their API which prevented an app/integration/automation from advancing.  So, as Salesforce put it, API first.

Documented without the docs

I’ve seen a lot of great examples of API documentation out there.  CrashPlan, JIRA and HipChat are at the top of my “how to do it right” examples in that they provide representations of data for each supported request method for an endpoint, returned HTTP status codes and potential error messages with their causes.  This is invaluable information for anyone who is writing a script or application against an API, but they all share the same weakness: they’re docs that exist outside the API.

A robust API can provide all the information a developer requires through the same HTTP methods they are already using, allowing for automated discovery of the API’s capabilities without scrolling around web pages and then flipping back to your console.

There’s an HTTP method I’ve read about but haven’t seen listed as supported in the docs for any of these APIs: OPTIONS.  It’s a great idea!  Want to know what you can do to a resource?  Pass OPTIONS as the method and the response will include an “Allow” header listing the supported methods.

This could be extended into a contextual method based upon the level of access the provided credentials have.  Say a resource supports GET, POST, PUT, PATCH and DELETE, but our user account is only allowed to read and update resources.  An admin would see all five methods in the Allow header, but our user would only see GET, PUT and PATCH as valid options.
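Here’s a hypothetical Flask sketch of that idea. The role lookup via an X-Role header is invented purely for illustration; a real API would derive it from whatever authentication it already performs.

import flask

app = flask.Flask('my-api')

# Invented permission map for the example above
ALLOWED_METHODS = {
    'admin': ['GET', 'POST', 'PUT', 'PATCH', 'DELETE'],
    'user': ['GET', 'PUT', 'PATCH'],
}

@app.route('/things/<int:thing_id>',
           methods=['OPTIONS', 'GET', 'POST', 'PUT', 'PATCH', 'DELETE'])
def thing(thing_id):
    if flask.request.method == 'OPTIONS':
        role = flask.request.headers.get('X-Role', 'user')  # stand-in for real auth
        response = flask.make_response('', 204)
        response.headers['Allow'] = ', '.join(ALLOWED_METHODS.get(role, ['GET']))
        return response

    return flask.jsonify({'id': thing_id})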

So ok, there’s an HTTP method in the REST standard that allows us to discover how we can interact with our resources.  Now how do we determine what the valid format of our data in our requests is supposed to be?  JIRA actually implements a solution to this for ‘Issues.’  Check out the following endpoints:

/rest/api/2/issue/createmeta
/rest/api/2/issue/{issueIdOrKey}/editmeta

The ‘createmeta’ endpoint will return a wealth of data including available projects, issue types, fields and what is required when creating a new issue.  That’s a goldmine of data that’s specific to my JIRA instance!  Then it gets even better when parameters are passed to filter it down even further to better identify what you need to do.  Like this:

/rest/api/2/issue/createmeta?projectIds=10201&issuetypeIds=3

That will return all of the fields required to create a new ‘Task’ within the ‘Information Technology’ project in my JIRA instance.  If I create a task and then want to update it, I can call the second endpoint to reveal all of the fields relevant to this issue, which ones are required, and the acceptable values for input.

Despite how great the above is, that’s about all we get for the discovery through JIRA’s API.  We still need to go back to the online docs to reference the other endpoints.

Something I read on RESTful API Design struck a chord on this topic.  The idea pitched there is to use forms to provide the client with a representation of a valid request for the endpoint when the appropriate MIME type is passed (for example: ‘application/x-form+json’).  This isn’t something you could expect to have a uniform definition of, but that wouldn’t matter!  You could still programmatically obtain information about any API endpoint by passing the MIME type for the desired format.

Here’s an example of what a response might look like to such a request:

curl http://my.api.com/users -H "content-type: application/x-form+json" -X POST

{
    "method": "POST",
    "type": "user",
    "fields": {
        "name": {
            "type": "string",
            "required": true
        },
        "email": {
            "type": "string",
            "required": false
        },
        "is_admin": {
            "type": "bool",
            "required": false
        }
    }
}

They can do a lot more work for you

Usually if you’re making a request to an object there will be references, links, within the data to other objects that you can make calls to.  Sometimes this is as simple as an ID or other unique value that can be used to build another request to retrieve that resource.  That seems like an unnecessary amount of work to push onto the client.

There are two ways of improving this.  The first is to include the full URL to the linked resource as a part of the parent.

curl http://my.api.com/users -H "content-type: application/json"

{
    "name": "Bryson",
    "email": "bryson.tyrrell@gmail.com",
    "computers": [
        {
            "id": 1,
            "name": "USS-Enterprise",
            "url": "https://my.api.com/computers/1"
        }
    ]
}

The second can build upon this by allowing parameters to be passed that tell the API to return linked objects that are expanded to include all of the data in one request.  JIRA’s API does this for nearly every endpoint.

curl http://my.api.com/users?expand=computers -H "content-type: application/json"

{
    "name": "Bryson",
    "email": "bryson.tyrrell@gmail.com",
    "is_admin": true,
    "computers": [
        {
            "id": 1,
            "name": "USS-Enterprise",
            "url": "https://my.api.com/computers/1",
            "uuid": "FBFF2117-B5A2-41D7-9BDF-D46866FB9A54",
            "serial": "AA1701B11A2B",
            "mac_address": "12:A3:45:B6:7C:DE",
            "model": "13-inch Retina MacBook Pro",
            "os_version": "10.10.2"
        }
    ]
}
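Server side, that expansion behavior isn’t much work to offer.  Here is a hypothetical Flask sketch, with the data store faked as dictionaries purely for illustration:

import flask

app = flask.Flask('my-api')

# Fake data store for the sketch
COMPUTERS = {1: {'id': 1, 'name': 'USS-Enterprise', 'serial': 'AA1701B11A2B'}}
USERS = {1: {'name': 'Bryson', 'computer_ids': [1]}}

@app.route('/computers/<int:computer_id>')
def get_computer(computer_id):
    return flask.jsonify(COMPUTERS[computer_id])

@app.route('/users/<int:user_id>')
def get_user(user_id):
    user = USERS[user_id]
    expand = flask.request.args.get('expand', '').split(',')

    if 'computers' in expand:
        # Inline the full linked objects
        computers = [COMPUTERS[i] for i in user['computer_ids']]
    else:
        # Return just links to the related resources
        computers = [
            {'id': i, 'url': flask.url_for('get_computer', computer_id=i, _external=True)}
            for i in user['computer_ids']
        ]

    return flask.jsonify({'name': user['name'], 'computers': computers})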

Versions are a good thing

All APIs change over time.  Under the hood bug fixes that don’t affect how the client interacts with the service aren’t much to advertise, but additions or changes to endpoints need to be handled in a way that can (potentially) preserve compatibility.

The most common kind of versioning I interact with has it directly in the URL.  I’m going to reference HipChat on this one:

api.hipchat.com/v1
api.hipchat.com/v2

The v1 API was deprecated some time ago as HipChat migrated to their newer and more robust v2 API.  While the v1 API is still accessible it has limitations compared to v2, is lacking many of the endpoints and is no longer supported which means that a lot of integrations that were written using v1 are steadily being phased out.

The differences between the two versions of the API are huge, especially when it comes to authentication, but even after its release the v2 API has had a number of changes and additions made to it.  Unless you’re watching for them they would be easy to miss.

Going the route of maintaining the version of the API in the URL, I found this example:

my.api.com/ < Points to the latest version of the API
my.api.com/2/ < Points to latest version of the v2 API
my.api.com/2.0/ < Points to a specific version of the v2 API

On the backend the objects would need to track which version a field or endpoint was added (or even removed) and handle the response to a request based upon the version passed in the URL.  Anything requested that falls outside of the version would prompt the appropriate 4XX response.
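One hypothetical way to structure that in Flask is a blueprint per version, so each version’s endpoints can evolve independently while sharing the same application:

import flask

v1 = flask.Blueprint('v1', __name__)
v2 = flask.Blueprint('v2', __name__)

@v1.route('/users/<int:user_id>')
def v1_get_user(user_id):
    return flask.jsonify({'id': user_id, 'name': 'Bryson'})

@v2.route('/users/<int:user_id>')
def v2_get_user(user_id):
    # v2 adds a field without breaking clients pinned to /1/
    return flask.jsonify({'id': user_id, 'name': 'Bryson', 'is_admin': True})

app = flask.Flask('my-api')
app.register_blueprint(v1, url_prefix='/1')
app.register_blueprint(v2, url_prefix='/2')

Pointing the bare my.api.com/ prefix at whichever blueprint is newest would cover the “latest version” case in the scheme above.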

Another method of versioning is used with GitHub’s API.  By default your API requests are made against the latest version of the API, but you can specify a previous version by passing it as a part of the ‘Accept’ header:

curl https://api.github.com/users/brysontyrrell -H "Accept: application/vnd.github.v3.full+json"

I’ve read about pros and cons for both approaches, but they serve the purpose of identifying changes in an API as it evolves while providing a means for compatibility with existing clients.

Multiple formats isn’t a sin

My personal preference for any REST API I work with is JSON.  JSON is easy to me, it makes sense, it just works.  I can think of one glaring example off the top of my head of an API I frequently work with that lets me read back objects in JSON but only accepts XML for POST/PUT requests.  Frustrating.

Still, JSON is my preference.  Plenty of people prefer XML.  In some cases XML may be easier to work with than JSON (such as parsing in shell scripts) or be the better data set for an application.  Structurally XML and JSON can be very interchangeable depending upon the data that is being accessed.

If the object can be converted to multiple formats then it may be a good idea to support it.  By passing the appropriate MIME type the API can return data in the requested format.  If no MIME type is passed there should be a default type that is always returned or accepted.
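Here is a hypothetical Flask sketch of that negotiation, defaulting to JSON when the client doesn’t explicitly ask for XML:

import flask
import xml.etree.ElementTree as Et

app = flask.Flask('my-api')

@app.route('/users/<int:user_id>')
def get_user(user_id):
    user = {'id': user_id, 'name': 'Bryson'}

    if 'application/xml' in flask.request.headers.get('Accept', ''):
        # Build a simple XML representation of the same object
        root = Et.Element('user')
        for key, value in user.items():
            Et.SubElement(root, key).text = str(value)
        return flask.Response(Et.tostring(root), mimetype='application/xml')

    # Default representation
    return flask.jsonify(user)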

Wrap-up

It’s late now and I’ve dumped a lot of words onto the page.  There’s a PyCharm window open with the shell of my sample API project that attempts to implement all of the design ideas I describe above.  Once I finish it I’ll throw it up on GitHub and see about incorporating some of the requests/responses to it into the article.