Stop everything! Start using Pipenv!

Seven months ago I tweeted about Pipenv:

That was about it.

The project, which was at v3.6.1 at the time, still felt new. At Jamf, we were working on our strategy for handling project requirements and dependencies going forward for Jamf Cloud’s automation apps, and Pipenv, while a cool idea, didn’t yet have the maturity and traction to justify adopting it.

As @elios pointed out on the MacAdmins Slack: Pipenv was announced in January of this year as an experimental project.

Today it is the officially recommended Python packaging tool from Python.org.

In his words, “That’s awfully fast…”

The GitHub repository for Pipenv has 5400+ stars and more than 100 contributors. The project is proceeding at breakneck speed and adding all kinds of juicy features every time you turn around. It’s on v8.3.2.

Within ten minutes of finally, FINALLY, giving Pipenv a try today I made the conscious decision to chuck the tooling we had made and convert all of our projects over.

(Oh, and the person who wrote that tooling wholly agreed)

Let’s talk about that.

Part 1: Why requirements.txt sucks

There was this standard workflow for your Python projects. You needed two key tools: pip and virtualenv. When developing, you would create a Python virtual environment using virtualenv in order to isolate your project from the system Python or other virtual environments.

You would then use pip to install all the packages your project needs. Once you had all of those packages installed you would then run a command to generate a requirements.txt file that listed every package at the exact installed version.

The requirements.txt file stays with your project. When someone else gets a copy of the repository to work on it or deploy it, they follow the same process: create a virtual environment and then use pip to install all of the required packages by feeding that requirements.txt file into it.
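
As a rough sketch of that old workflow (the package names here are only placeholders), it went something like this:

$ virtualenv venv
$ source venv/bin/activate
(venv) $ pip install flask requests
(venv) $ pip freeze > requirements.txt

And anyone recreating the environment would run:

(venv) $ pip install -r requirements.txt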

Now, that doesn’t seem so bad, right? The drawbacks become apparent as you begin to craft your build pipelines around your projects.

Maintenance is a Manual Chore

The first thing to call out is how manual this process is. You need to create a virtual environment at the same Python version for the project once you’ve cloned it and then install the packages. If you want to begin upgrading packages, you need to be sure to manually export a new requirements.txt file each time.

Removing packages is far more problematic. Uninstalling with pip will not remove dependencies for a package! Take Flask, for example. It’s not just one package. It has four sub-packages that get installed with it, and one of those (Jinja2) has its own sub-package.
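
To make this concrete, here is roughly what that looks like (output abbreviated; exact versions will vary):

(venv) $ pip install flask
(venv) $ pip uninstall flask
Uninstalling Flask-0.12.2:
  <...uninstall output...>
(venv) $ pip freeze
click==6.7
itsdangerous==0.24
Jinja2==2.9.6
MarkupSafe==1.0
Werkzeug==0.12.2

Flask is gone, but every one of its sub-packages is still sitting in the environment.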

So, maybe the answer is to manually maintain only the top-level packages in your requirements.txt file? That’s a hard no. You don’t want to do that because it makes your environment no longer deterministic. What do I mean? I mean, those sub-packages we just talked about are NOT fixed versions. Packages published to the Python Package Index (PyPI) aren’t using requirements.txt files with fixed versions for their dependencies. Instead, they specify ranges that allow installs at a minimum and/or maximum version for each sub-package.

This makes a lot of sense when you think about it. When installing a package for the first time as a part of a new project, or updating a package in an existing project, you are going to want the latest allowed versions for all the sub-packages for any patches that have been made.

Not so much when it comes to a project/application. You developed and tested your code at specific package versions that are a known working state, and development is the phase when bugs from updates to those packages can be identified and addressed. When it comes to deployment, your project’s requirements.txt file must be deterministic and list every package at the exact version.

Hence, maintenance of this file becomes a manual chore.

But Wait, There’s Testing!

There are a lot of packages that you might need to install that have nothing to do with running your Python application, but they have everything to do with running tests and building documentation. Common package requirements for this would be Pytest, Tox, and Sphinx with maybe a theme. Important, but not needed in a deployment.

We want to be able to specify a set of packages to install that will be used during the testing and build process. The answer, unfortunately, is a second requirements.txt file. This one would have a special name like requirements-dev.txt and would only be used during a build. It could contain only the build-specific packages and be installed with pip after the standard requirements.txt, or it could contain all of those plus the build packages. In either case, the maintenance problem continues to grow.
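
A rough sketch of what that looks like in a build job (the file name follows the convention just described, and the package list is only an example):

# requirements-dev.txt
pytest
tox
sphinx

$ pip install -r requirements.txt
$ pip install -r requirements-dev.txt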

So We Came Up With Something…

Our process at Jamf ended up settling on three types of requirements.txt files in an effort to address all the shortcomings described.

  • requirements.txt
    This file contained the full package list for the project at fixed versions. It would be used for spinning up new development environments or during deployment.
  • core-requirements.txt
    This file contained the top-level packages without fixed versions. Development environments would not be built from this. Instead, this file would be used for recreating the standard requirements.txt file at updated package versions and eliminating orphaned packages that were no longer used or had been removed from the project.
  • build-requirements.txt
    This is a manually maintained file per-project that only contained the additional packages needed during the build process.

This approach isn’t too dissimilar to what others have implemented for many of the same reasons.
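
As a sketch, refreshing the pinned requirements.txt from the core file could look something like this (names taken from the list above):

$ virtualenv fresh && source fresh/bin/activate
(fresh) $ pip install -r core-requirements.txt
(fresh) $ pip freeze > requirements.txt

Everything is re-resolved from the top-level packages, so orphaned sub-packages fall out and the pins get refreshed.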

Part 2: Enter Pipfile

Not Pipenv? We’re getting there.

Pipfile was introduced almost exactly a year ago at the time of this post (first commit on November 18th, 2016). The goal of Pipfile is to replace requirements.txt and address the pain points that we covered above.

Here is what a Pipfile looks like:

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[dev-packages]



[packages]



[requires]

python_version = "2.7"

PyPI is the default source in all new Pipfiles. Using the scheme shown, you can add additional [source] sections with unique names to specify your internal Python package indexes (we have our own at Jamf for shared packages across projects).
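
For example, an extra index could be added alongside PyPI following the same scheme (the URL and name below are placeholders; note that newer revisions of the Pipfile spec write these as [[source]] tables):

[source]

url = "https://pypi.internal.example.com/simple"
verify_ssl = true
name = "internal"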

There are two groups for listing packages: dev-packages and packages. These lists follow how we were already handling things: only the build packages go into the dev-packages list, and all project packages go under the standard packages list.

Packages in the Pipfile can have fixed versions or be set to install whatever the latest version is.

[packages]

flask = "*"
requests = "==2.18.4"

The [requires] section dictates the version of Python that the project is meant to run under.

From the Pipfile you would create what is called a Pipfile.lock, which contains all of the environment information: the installed packages, their installed versions, and their SHA256 hashes. The hashes are a recent security feature of pip used to validate packages when they are installed during a deployment. If there is a hash mismatch you can abort. A powerful security tool for preventing malicious code from entering your environments.

Note that you can specify the SHA256 hashes of packages in a normal requirements.txt file. This is a feature of pip and not of Pipfile.
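
For reference, a hash-pinned entry in a requirements.txt file looks something like this (the digest below is a placeholder, not a real hash):

flask==0.12.2 --hash=sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

Including any --hash option in the file (or passing --require-hashes to pip install) turns on pip’s hash-checking mode.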

It is this Pipfile.lock that will be used to generate environments on other systems whether for development or deployment. The Pipfile will be used for maintaining packages and dependencies and regenerating your Pipfile.lock.

All of this is just a specification. Pip itself does not yet support Pipfiles, but there is another…

Part 3: Pipenv Cometh

Pipenv is the implementation of the Pipfile standard. It is built on top of pip and virtualenv and manages both your environment and package dependencies, and it does so like a boss.

Get Started

Install Pipenv with pip (or homebrew if that’s your jam).

$ pip install pipenv

Then in a project directory create a virtual environment and install some packages!

$ pipenv --two
Creating a virtualenv for this project…
<...install output...>
Virtualenv location: /Users/me/.local/share/virtualenvs/test-zslr3BOw
Creating a Pipfile for this project…

$ pipenv install flask
Installing flask…
<...install output...>
Adding flask to Pipfile's [packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (36eec0)!

$ pipenv install requests==2.18.4
Installing requests==2.18.4…
<...install output...>
Adding requests==2.18.4 to Pipfile's [packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (572f23)!

$

Notice how we see the Locking messages after every install? Pipenv automatically regenerates the Pipfile.lock each time the Pipfile is modified. Your fixed environment is being automatically maintained!

Graph All The Things

Let’s look inside the Pipfile itself.

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[dev-packages]



[packages]

flask = "*"
requests = "==2.18.4"


[requires]

python_version = "2.7"

No sub-packages? Nope. It doesn’t need to track those (they end up in the Pipfile.lock, remember?). But, if you’re curious, you can use the handy graph feature to view a full dependency tree of your project!

$ pipenv graph
Flask==0.12.2
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
   - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]
 
$

Check that out! Notice how you can see the requirement that was specified for the sub-package in addition to the actual installed version?

Environment Management Magic

Now let’s uninstall Flask.

$ pipenv uninstall flask
Un-installing flask…
<...uninstall output...>
Removing flask from Pipfile…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (4ddcaf)!

$

And re-run the graph command.

$ pipenv graph
click==6.7
itsdangerous==0.24
Jinja2==2.9.6
 - MarkupSafe [required: >=0.23, installed: 1.0]
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]
Werkzeug==0.12.2

 

Yes, the sub-packages have now been orphaned within the existing virtual environment, but that’s not the real story. If we look inside Pipfile we’ll see that requests is the only package listed, and if we look inside our Pipfile.lock we will see that only requests and its sub-packages are present.

We can regenerate our virtual environment cleanly with only a few commands!

$ pipenv uninstall --all
Un-installing all packages from virtualenv…
Found 20 installed package(s), purging…
<...uninstall output...>
Environment now purged and fresh!

$ pipenv install
Installing dependencies from Pipfile.lock (f58d9f)…
 🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 5/5 — 00:00:01
To activate this project's virtualenv, run the following:
 $ pipenv shell

$ pipenv graph
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]

$

Mindblowing!!!

Installing the dev-packages for our builds uses an additional flag with the install command.

$ pipenv install sphinx --dev
Installing sphinx…
<...install output...>
Adding sphinx to Pipfile's [dev-packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (d7ccf2)!

The appropriate locations in our Pipfile and Pipfile.lock have been updated! To install the dev environment, perform the same steps for regenerating above but add the --dev flag.

$ pipenv install --dev
Installing dependencies from Pipfile.lock (f58d9f)…
 🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 18/18 — 00:00:05
To activate this project's virtualenv, run the following:
 $ pipenv shell

Part 4: Deploy Stuff!

The first project I decided to apply Pipenv to in order to learn the tool is ODST. While there is a nice feature in Pipenv where it will automatically import a requirements.txt file if detected, I opted to start clean and install all my top-level packages directly. This gave me a proper Pipfile and Pipfile.lock.

Here’s the resulting Pipfile.

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"


[dev-packages]

pytest = "*"
sphinx = "*"
sphinx-rtd-theme = "*"


[packages]

flask = "*"
cryptography = "*"
celery = "*"
psutil = "*"
flask-sqlalchemy = "*"
pymysql = "*"
requests = "*"
dicttoxml = "*"
pyjwt = "*"
flask-login = "*"
redis = "*"


[requires]

python_version = "2.7"

Here’s the graph of the installed packages (without the dev-packages).

$ pipenv graph
celery==4.1.0
 - billiard [required: >=3.5.0.2,<3.6.0, installed: 3.5.0.3]
 - kombu [required: <5.0,>=4.0.2, installed: 4.1.0]
 - amqp [required: >=2.1.4,<3.0, installed: 2.2.2]
 - vine [required: >=1.1.3, installed: 1.1.4]
 - pytz [required: >dev, installed: 2017.3]
cryptography==2.1.3
 - asn1crypto [required: >=0.21.0, installed: 0.23.0]
 - cffi [required: >=1.7, installed: 1.11.2]
 - pycparser [required: Any, installed: 2.18]
 - enum34 [required: Any, installed: 1.1.6]
 - idna [required: >=2.1, installed: 2.6]
 - ipaddress [required: Any, installed: 1.0.18]
 - six [required: >=1.4.1, installed: 1.11.0]
dicttoxml==1.7.4
Flask-Login==0.4.0
 - Flask [required: Any, installed: 0.12.2]
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
 - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
Flask-SQLAlchemy==2.3.2
 - Flask [required: >=0.10, installed: 0.12.2]
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
 - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
 - SQLAlchemy [required: >=0.8.0, installed: 1.1.15]
psutil==5.4.0
PyJWT==1.5.3
PyMySQL==0.7.11
redis==2.10.6
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]

In my Dockerfiles for the project I swapped out requirements.txt and switched to installing my packages from the Pipfile.lock into the system Python (in a containerized app there’s no real need to create a virtual environment).

RUN /usr/bin/apt-get update -q && \
    /usr/bin/apt-get install -qqy build-essential git && \
    /usr/bin/apt-get install -qqy python-pip python-dev && \
    /usr/bin/pip install pipenv && \
    /usr/bin/apt-get install -qqy libssl-dev libffi-dev && \
    /usr/bin/apt-get install -qqy uwsgi uwsgi-plugin-python && \
    /usr/bin/apt-get clean && \
    /bin/rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY /docker/webapp/web-app.ini /etc/uwsgi/apps-enabled/

COPY /Pipfile* /opt/odst/
WORKDIR /opt/odst/
RUN /usr/local/bin/pipenv install --system --deploy

COPY /ods/ /opt/odst/ods/
COPY /application.py /opt/odst/
COPY /docker/ods_conf.cfg /opt/odst/

RUN /bin/chown -R www-data:www-data /opt/odst/ods/static

CMD ["/usr/bin/uwsgi", "--ini", "/etc/uwsgi/apps-enabled/web-app.ini"]

The --system flag tells Pipenv to install to the system Python. The --deploy flag will abort the installation if the Pipfile.lock is out of date or the Python version is wrong. Out of date? Pipenv knows if your Pipfile.lock matches up with the Pipfile by comparing a SHA256 hash of the Pipfile that it has saved. If there’s a mismatch then there is an issue and it aborts.

If I didn’t want to rely on this hash match feature, I could instead pass an --ignore-pipfile flag that tells it to proceed using only the Pipfile.lock.

I should mention at this point that Pipenv dramatically speeds up build times. When using pip your packages will install sequentially. Pipenv will install packages in parallel. The difference is immediately noticeable, especially when regenerating your development environments.

Still, The World is Catching Up

Python development workflows are clearly moving to this tool, but Pipenv is less than a year old at this point and not everything will support generating environments from the provided Pipfiles.

The two I am currently using that fall in that category are AWS Elastic Beanstalk and ReadTheDocs.org. Both need a requirements.txt file in order to build the Python environments.

That is thankfully a simple task. You can generate a requirements.txt file from Pipenv by using the lock option.

$ pipenv lock --requirements > requirements.txt

 

For Elastic Beanstalk applications, I can have this command run as a part of my build pipeline so the file is included with the ZIP archive before it goes to S3.

In the case of Read the Docs there is an open issue for adding Pipfile support, but until then I will need to generate the requirements.txt file as I make changes to my environment and save it with the repository.

For Read the Docs I’ll want the extra dev-packages. This is done by using the pipenv run command, which executes the command that follows as if it were run inside the project’s environment.

$ pipenv run pip freeze > requirements.txt

The bright side is that I am no longer actually managing this file. It is straight output from my Pipenv managed environment!

Part 5: Profit

I hope you enjoyed my overview and impressions of Pipenv. It’s a tool that is going to have a huge impact on development and deployment workflows – for the better!

A huge shout out to Kenneth Reitz for creating yet another invaluable package for the Python community!

* Edit Nov 9: Changed the command for exporting a requirements.txt file. Using pipenv lock -r -d only outputs the packages under the dev-packages section.


Build Your Own Jamf Pro Integrations: A Tutorial Series

byo-jpi-logo

During Penn State MacAdmins 2017, I delivered my first ever workshop style session entitled “Build Your Own Jamf Pro Integrations.” Going in I felt well prepared and convinced that people would walk away feeling empowered to build new kinds of functionality atop their Jamf Pro environments.

The result was not what I envisioned.

Information about the level of experience with Python that was required coming into the workshop was lost in the posting. My slide pacing was too fast. People weren’t given enough time to write down the examples on screen before I would move on to lab sections which did not contain any code examples. Due to a lot of the group not being at the needed experience level with Python I burned through extra time moving around to help them troubleshoot issues and reach the next step. I ended up not finishing all of the content I had prepared and the workshop was left with an unfinished air about it.

Overall, I wasn’t too happy with how the workshop turned out. Some of the attendees afterwards gave me some feedback about what could have been done to improve for a future version (which I was also encouraged to submit for 2018). Make sure the prerequisite experience for the workshop is clearly communicated. The full code being available prior to the workshop would have made transitioning between sections easier. Volunteer assistants to help others with issues and errors as they arise to allow me to focus on delivery. Do a full day and not a half: I had more than enough content to fill it.

Later, when the form submitted feedback was compiled and provided, I found that on the whole the sentiments above were shared by many. They enjoyed the content, they liked the hands-on approach, but structure and timing prevented them from getting the most out of it. The reception was better than I had expected (coming from an IT background, I know it’s more likely that people will submit complaints than compliments).

While I still intend to submit this workshop again for MacAdmins 2018, I have decided to adapt the slide deck into a multi-part tutorial series that will cover building your first Flask (Python) based integration with Jamf Pro and Slack.

Once this post has gone live, each part of the tutorial will go up once a day until the series has been completed. The full code for the complete sample project will be available on my GitHub on the day of the last posting.

Requirements

You can use either Python 2 or Python 3, but you will need to create a virtual environment and install the required modules from the Python Package Index (PyPI).

If you are going to use the Python installation that came with your Mac (Python 2) you can install pip and virtualenv using the following commands:

~$ sudo easy_install pip
~$ sudo pip install virtualenv

If you have installed another copy of Python (either 2 or 3) on your Mac, or you are installing Python on Windows or Linux, these commands should be available as a part of that.

Create the virtual environment for the project somewhere in your user directory (the root or your Documents folder would both work):

~$ virtualenv byojamf

To use your new virtual environment, call its activate script and you will see the environment’s name appear in parentheses in the terminal session:

~$ source /path/to/byojamf/bin/activate
(byojamf) ~$

Now install the three PyPI modules that we will be using for this sample project:

(byojamf) ~$ pip install flask jook requests

Flask is a microframework for writing web applications.

GitHub: https://github.com/pallets/flask
Documentation: http://flask.pocoo.org/docs/latest/

Requests is an HTTP client library that we will be using to make API requests.

GitHub: https://github.com/requests/requests
Documentation: http://docs.python-requests.org/en/master/

Jook is a Jamf Pro Webhooks simulator to allow you to test your integration.

GitHub: https://github.com/brysontyrrell/Jook
Documentation: http://jook.readthedocs.io/en/latest/

Next up…

An Intro to Webhooks and Flask

I’ll see you in Part I tomorrow!

Webhooks come to the JSS

There are some who said this day would never come…

This has been on my wish list for a very long time, and on the wishlists of several other people in the community that I’ve talked to about it. With the v9.93 update released yesterday we finally have webhooks for the JSS: real time outbound event notifications.

This is a big deal for those of us who work on building integrations into the Casper Suite. If you’ve wanted a service to run and take action on changes happening in the JSS you were normally forced to have an API script run on a schedule to pull in mass amounts of data to parse through. That’s not real time, and it’s computationally expensive if you’re an admin with a large environment.

How the Events API relates to Webhooks

There has been an alternative route to using the REST API and that was the Events API. If you haven’t heard of it that may be because it isn’t advertised too loudly. It was shown off at the 2012 JNUC by some of the JAMF development team:

The Events API is Java based. You write a plugin that registers for certain events and then it receives data to process. This all happens on the JSS itself as the plugin must be installed into the application. It is also Java, which not many of us are all that fluent in. Plus if you use JAMF Cloud you don’t have access to the server so plugins aren’t likely to be in the cards anyway.

Enter webhooks.

This new outbound event notification feature is actually built ON TOP of the existing Events API. Webhooks translate the Events API event into an HTTP POST request in JSON or XML format. HTTP, JSON and XML. Those are all things that the majority of us not only understand but work with on an almost daily basis. They’re languages we know, and they’re agnostic to what you use to process them. You can use shell scripting, Python, Ruby, Swift; it doesn’t matter now!
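
To give you a feel for the shape of the data, a JSON payload looks roughly like this (the fields vary by event, and the values here are made up):

{
    "webhook": {
        "id": 1,
        "name": "My Webhook",
        "webhookEvent": "ComputerCheckIn"
    },
    "event": {
        "trigger": "CLIENT_CHECKIN",
        "username": "jdoe",
        "computer": {
            "deviceName": "USS-Voyager",
            "udid": "ZZZZ0000-ZZZZ-1000-8000-001B639ABA8A"
        }
    }
}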

How a webhook integration works

If you want to start taking advantage of webhooks for an integration or automation you’re working on, the first thing to understand is that webhooks need to be received on a web server external to the JSS. This diagram shows the basic idea behind what this infrastructure looks like:

basic_webhook_integration_diagram.png

Webhooks trigger as events occur within the JSS. The primary driver behind the majority of these events will be the check-in or inventory submission of your computers and mobile devices. When a change occurs the JSS will fire off the event to the web server hosting the integration you’ve created.

At that point your integration is going to do something with the data that it receives. Likely, you’ll want to parse the data for key values and match them to criteria before executing an action. Those actions could be anything. A few starting examples are:

  • Send emails
  • Send chat app notifications
  • Write changes to the JSS via the REST API
  • Write changes to a third-party service

Create a webhook in the JSS

There are a number of events from the Events API you can enable as an outbound webhook. They are:

  • ComputerAdded
  • ComputerCheckIn
  • ComputerInventoryCompleted
  • ComputerPolicyFinished
  • ComputerPushCapabilityChanged
  • JSSShutdown
  • JSSStartup
  • MobileDeviceCheckIn
  • MobileDeviceCommandCompleted
  • MobileDeviceEnrolled
  • MobileDevicePushSent
  • MobileDeviceUnEnrolled
  • PatchSoftwareTitleUpdated
  • PushSent
  • RestAPIOperation
  • SCEPChallenge
  • SmartGroupComputerMembershipChange
  • SmartGroupMobileDeviceMembershipChange

For a full reference on these webhook events you can visit the unofficial docs located at https://unofficial-jss-api-docs.atlassian.net/wiki/display/JRA/Webhooks+API

When you create the outbound webhook in the JSS you give it a descriptive name, the URL of your server that is going to receive the webhook, the format you want it sent in (XML or JSON), and then the webhook event that should be sent.

Webhooks_AddNew.png

Once saved you can see all of your webhooks in a simple at-a-glance summary on the main page:

Webhooks_List.png

That’s it. Now every time this event occurs it will be sent as an HTTP POST request to the URL you provided. Note that you can have multiple webhooks for the same event going to different URLs, but you can’t create webhooks that send multiple events to a single URL. At this time you need to create each one individually.

Create your integration

On the web server you specified in the webhook settings on the JSS you will need something to receive that HTTP POST and process the incoming data.

There are a number of examples for you to check out on my GitHub located here: https://github.com/brysontyrrell/Example-JSS-Webhooks

As a Python user I’m very comfortable using a microframework called Flask (http://flask.pocoo.org/). It’s simple to start with, powerful to use and allows you to scale your application easily.

Here’s the basic one-file example to getting started:

import flask

app = flask.Flask('my-app')

@app.route('/')
def index():
    return "<h1>Hello World!</h1>"

if __name__ == '__main__':
    app.run()

On line 1 we’re importing the flask module. On line 3 we’re instantiating our flask app object.

Line 5 is a decorator that says “when a user goes to this location on the web server, run this function”. The ‘/’ location would be the equivalent of http://localhost/, the root or index of our server. Our decorated function only returns a simple HTML string for a browser to render in this example.

Line 9 will execute the application if we’re running it from the command line, like so:

~$ python my_app.py

I will say at this point that this works just fine for testing out your Flask app locally, but don’t do this in production.

There are many, many guides all over the internet for setting up a server to run a Flask application properly (I use Nginx as my web server and uWSGI to execute the Python code). There are some articles on the Unofficial JSS API Docs site that will cover some of the basics.

With that being said, to make our Flask app receive data, we will modify our root endpoint to accept POST requests instead of GET requests (when using the @app.route() decorator it defaults to accepting only GET). We also want to process the incoming data.

This code block has the app printing the incoming data to the console as it is received:

import flask

app = flask.Flask('my-app')

@app.route('/', methods=['POST'])
def index():
    data = flask.request.get_json()  # returned JSON data as a Python dict()
    # Do something with the data here
    return '', 204

if __name__ == '__main__':
    app.run()

For processing XML you’ll need to import a module to handle that:

import flask
import xml.etree.ElementTree as Et

app = flask.Flask('my-app')

@app.route('/', methods=['POST'])
def index():
    data = Et.fromstring(flask.request.data)
    # Do something with the data here
    return '', 204

if __name__ == '__main__':
    app.run()

You can see that we changed the return at the end of the function to return two values: an empty string and an integer. This is an empty response with a status code of 204 (No Content). 2XX status codes signal to the origin of the request that it was successful.

This is just a technical point. The JSS will not do anything or act upon different success status codes or error codes. 200 would be used if some data were being returned to the requestor. 201 if an object were being created. Because neither of those are occurring, and we won’t be sending back any data, we’re using 204.

With this basic starting point you can begin writing additional code and functions to handle processing the inbound requests and taking action upon them.

Resources

Here is a list of the links that I had provided in this post:

  • https://unofficial-jss-api-docs.atlassian.net/wiki/display/JRA/Webhooks+API
  • https://github.com/brysontyrrell/Example-JSS-Webhooks
  • http://flask.pocoo.org/

HipSlack – because we can all be friends (and promoter…)

Channeling my inner David Hayter, “Kept you waiting, huh?”

But really, it has been a while. I’ve had a pretty tumultuous 2015 that pulled me away from projects both at JAMF and personally, and also took time away from the Mac admin communities, but now I’m starting to get back into writing code and just doing stuff like the good old days.

And the best way to do that is throw something out there!

Meet HipSlack


Someone, who shall remain nameless, joined JAMF and quipped about using Slack and HipChat side by side. It begged the question, “What, you mean like a bridge?”

45 minutes of my life later that night yielded the initial Flask app that accomplishes the bare minimum functionality that I can build off of: all messages from one HipChat room get piped into a Slack channel, and all the messages from that Slack channel get sent back to the HipChat room.

I’ve posted the initial source code to my GitHub. Right now it only supports one room to one channel and only sends text messages between them. My plans will include supporting multiple room/channel installs, transferring uploaded files between services, mapping emoji/emoticons to display correctly (where possible) and… maybe… see if @mentions could be made to work.

https://github.com/brysontyrrell/HipSlack

Check out the README for instructions on firing this up and playing around with it on your own. Also feel free to improve upon and submit pull requests if you want to take a crack at it before I get around to implementing more features/improvements.

And about promoter…

I threw that up too. Didn’t want people to believe it was vaporware.

https://github.com/brysontyrrell/promoter

Please, please don’t run that as production yet (there’s even a warning!).

Update App Info – New Script on GitHub

As usually happens with us, I went digging around in some old folders where I had been stashing a bunch of old (frankly, quite ugly) code and came across something I had done as a proof of concept.

The issue this tries to address is that the JSS never updates a posted app after you have created it.  So “Microsoft Word for iPad” became “Microsoft Word” several versions later, and your users will see this in the App Store, but in Self Service it still has the old name, version number, description and icon.

The original script only dealt with version numbers to address the problem with the Self Service web clip sending false positives for app updates (for admins who chose to show them).  What happened is the version installed on a device wouldn’t match the one in the JSS  (they would, in fact, be at a higher version usually) and the end-user would see an update for that app that didn’t exist.

I don’t know if many admins do that any more, or use the Self Service web clip for that matter, but the problem of inaccurate app names and description text still remained for Self Service.

That is what this takes care of.

https://github.com/brysontyrrell/Update-App-Info

Currently the script doesn’t handle the icon portion due to an issue with the API I encountered.  I’ll be adding that functionality in once I’m sure it will work without error.  It will, however, run on Mac/Linux/Windows so you have flexibility in how you go about using it.  Check out the README for more details.

Quick note: If you are using anything I’ve posted to GitHub and run into problems please use the ‘Issues’ feature to let me know and I’ll see about making updates to fix it.

Managed VPP via Self Service

A goal I have had for some time was to get away from users simply “requesting” their VPP apps through Self Service and being able to grant them the ability to self-assign those apps (as long as they were eligible).  After doing some work on a little HTML scraping of the JSS I finally have working code to achieve this goal.

HTML Scraping?

If we’re going to allow a user to self-assign some App Store apps we need the ability to ensure there are enough available seats for them to do so.  As of version 9.62, Content as it relates to managed VPP seats is not accessible using the REST API.  There is no quick call we can make to view unassigned VPP content.

What I spent a little time working on was the method by which I could load an Advanced Content Search and parse the results. This is different than just making a REST API request to the JSS using HTTP basic authentication (this is what you see in pretty much all examples of interacting with the JSS REST API, among others). The web app uses session cookies for this.

Enter Python (as always, with me).

Using some code I already wrote for getting a cookie for the JIRA REST API on another project (Nessus2JIRA – you’ll hear more about that later), I wrote a new mini JSS class that incorporated both HTTP basic authentication for some of the API calls that would need to be made for User Extension Attributes as well as obtaining a session cookie for when we needed to pull up an advanced content search (the web interface and the REST API do not share authentication methods!).

If you’re looking for that already, don’t worry.  There’s a GitHub link at the bottom (where I am now keeping all my code for you guys to grab) that contains the entire sample script for Self Service.

I’ve seen some impressive examples of HTML scraping in shell scripts using SED and AWK.  For what I’m extracting from the HTML of the JSS page I found the solution to be fairly simple.  Let me break it down:

The Advanced Content Search

We need the ability to pull in the information on all of the OS X apps that we are distributing via managed VPP so we can parse out the app in question and whether it has unassigned seats for the user. In my environment I had two content searches created for us to reference this at a glance for both iOS and OS X. They report the Content Name, Total Content, Assigned Content and Unassigned Content for a single criterion: Content Type: is: Mac app (or iOS app for the former).

For the script we only care about Unassigned Content so we really only need that and the Content Name, but the other fields are nice if you are pulling up the search in the GUI to view and don’t conflict with how we’re going to perform the HTML scraping.

Of course, there’s still the problem with generating the search.  Going to the URL to view the search requires us to click the View button to get our results.  As it so happens, Zach Halmstad recently dropped some knowledge on a thread for a feature request related to sharing search result links:

https://jamfnation.jamfsoftware.com/featureRequest.html?id=3011

In Zach’s words: “…If you look at the URL of the group or search, it will look like this:  smartMobileDeviceGroups.html?id=2&o=r  If you change the value of the “o” parameter to “v” so it looks like this:  smartMobileDeviceGroups.html?id=2&o=v  It should forward the URL onto the search results.”

Boom.  We can perform an HTTP request using that parameter value and get back a search result!

Now, how do we extract that data? I took a look through the HTML and found the data which is populated into a table by some JavaScript.

...
 <script>
 $(document).ready(function(){

	var data = [

	['Keynote',"15","13","2",],
	['Numbers',"12","12","0",],
	['OS X Server',"20","17","3",],
	['Pages',"9","8","1",],];

	var sortable = new Array;
 ...

It’s just an array which means I could convert it into a native Python list type and iterate over the values with ease.  As I’m being very specific about what data I need I came up with a solution for finding and extracting these lines:

  1. I took the response from my HTTP request, the HTML of the page, and then converted it into a Python list at every newline character.
  2. I began a loop through this HTML list looking for the index value matching “\tvar data = [“ which denotes the beginning of the array.
  3. I restarted my loop at the above index +1 and started concatenating the lines of the array together into a single string (skipping the blank lines). Once I reached the line matching “\tvar sortable = new Array;” I killed the loop.
  4. I evaluate my string and out comes the list containing each entry of my VPP content with the values.

Here’s what that code looks like in action:

# Breaking up the returned HTML into a list
html = response.read().splitlines()

# The applist string starts with the open bracket for our list
applist = "["

# Here is the loop through the html list pulling out the data array lines
for item in html:
    if item == "\tvar data = [":
        for line in html[html.index(item) + 1:]:
            if line == "\tvar sortable = new Array;":
                break
            elif line.rstrip():
                applist += line.strip(';')[1:]
        break

# We need the 'ast' module to perform the eval into a list
import ast
applist = ast.literal_eval(applist)

Parsing through this new list is now super easy:

for x in applist:
    if int(x[-1]) > 0:
        print(x[0] + " has " + x[-1] + " seats available.")

Keynote has 2 seats available.
OS X Server has 3 seats available.
Pages has 1 seats available.

The Golden Triangle: Extension Attribute to Smart Group to VPP Assignment

All of the VPP assignments in my environment are handled via User Extension Attribute.  This was done for a number of reasons, including the ease of dropping a user into scope for one of these apps, but also to future-proof us for when we would start leveraging the API to handle those assignments.

The setup is very simple.  Each App Store app that we distribute through managed VPP has its own extension attribute.  Let’s take Baldur’s Gate as an example (if you don’t have this available to your org, look deep inside and ask yourself, “why not?”).  For every VPP extension attribute there are two available values from a pop-up menu: Assigned and Unassigned.

(Note: since you can set a pop-up menu back to a blank value, ‘Unassigned’ is actually unnecessary from a technical standpoint, but if you have other IT staff working in a JSS it makes more visual sense to set the value to ‘Unassigned’ instead of nothing, in my opinion)

Once the user extension attribute is in place, create a matching Smart User Group with the sole criterion being that the value is set to ‘Assigned.’  Now you make this smart group the scope for a VPP Assignment that only assigns that app.  That’s it!  You now have an App Store app that you can dynamically assign via the JSS REST API (or with ease by flipping values directly on a user’s record).

Updating a User Extension Attribute

The last piece of this is using the REST API to flip the User Extension Attribute for the logged in user to ‘Assigned’.  If you want to get in deeper with the API you can check out my two earlier blog posts “The JSS REST API for Everyone” which give a general introduction and overview.

The two pieces of information you need to update a User Extension Attribute are the user’s username or ID and the ID of the extension attribute that will be updated.  Perform a PUT on either of these resources with the following XML to change the value (be sure to update the ID values!):

../JSSResource/users/id/1
../JSSResource/users/name/bryson.tyrrell

<user>
    <extension_attributes>
        <extension_attribute>
            <id>1</id>
            <value>Assigned</value>
        </extension_attribute>
    </extension_attributes>
</user>

(Note: this is pretty much what you would do for almost ANY extension attribute in the JSS)
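
As an illustration, if that XML were saved to a file named assign.xml (a name I’m making up here), the PUT could be made with curl using basic authentication (URL and credentials are placeholders):

$ curl -su 'apiuser:apipass' \
    -H "Content-Type: text/xml" \
    -X PUT -d @assign.xml \
    https://myjss.example.com:8443/JSSResource/users/id/1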

Check Out the Full Script on GitHub

As promised, here is a full working example of the script for you to grab:

https://github.com/brysontyrrell/Self-Service-VPP-Assignment

View the README for a breakdown of how to setup the policy (and be sure to do this in a test environment).  The one omission in this code is inside the function that is triggered when there are no available seats of the app to assign:

def create_ticket():
    """This function would generate a ticket with information on the app and user
    IT staff would purchase additional seats of the app and then to assign it

    This function would be called where the number of available seats was not greater than 0
    Customize to suit your environment"""
    print("Creating ticket.")

In my case I would have code in here to take the different values that were passed to the script and generate a Zendesk ticket on behalf of the user informing IT that more seats need to be purchased and that the user should be assigned this app once the purchase process is complete.  That takes the onus off of the user to perform yet another step when they are informed the app isn’t immediately available.

If you’re also a Zendesk user you can review a Python script I have here that creates a request ticket for a user from Self Service:

https://github.com/brysontyrrell/Zendesk-Self-Service-Request

Otherwise, you should feel free to add your own code for whatever remediation action you would want to take should there not be any available seats for the user.  If you have questions about that please feel free to reach out to me and we can discuss it.

Take it Further

The entire setup described above allows for apps to be assigned and unassigned easily.  You can take the existing code and modify it to allow users to voluntarily return managed VPP seats if they are no longer using them.  The script produces dialog prompts in the event of an error, unavailable seats or success in assigning the app.  You’ll notice these are embedded AppleScripts (a little PyObjC fun that gets around needing to use Tkinter) so you can work with those as well to further customize the feedback to your users.

And as I already said, feel free to hit me up if you have questions.

Happy New Year!

A few quick updates: scripts, Promoter, jsslib

I’ve been absent from the blog for a while with my time being pretty split between work and family.  I thought I’d drop a quick update on some things I’m meaning to get posted and also get some small ones out of the way.  I also wanted to switch out of my minimalist theme and into something a bit easier on the eyes.

Updated script for listing Mac App Store apps  in Self Service

The original version of this script I posted here attempted to capture inventory data from a user installing an App Store app just as though they were installing software published by the IT department.  Back in 10.8 the script worked, but as of 10.9 timeouts and errors with various AppleScript actions happened randomly and it was no longer reliable.

Also, come VPP v2, I had a philosophical shift when it came to having accurate inventory.  As I could no longer be sure when users were claiming the apps we were provisioning from the VPP portal, it made no sense to rewrite my original App Store workflow.  I’m treating my Macs more and more like I would iPads when it comes to provisioning.

I trimmed out most of the code and was left with this:

#!/bin/bash

# The 'loggertag' is used as a tag for all entries in the system.log
loggertag="selfservice-macappstore"

log() {
# This log() function writes messages to the system.log and STDOUT
echo "${1}"
/usr/bin/logger -t "${loggertag}: ${policy}" "${1}"
}

# The iTunes address for the App (can be grabbed from its App Store page) is passed
# from the JSS into the 'App Store URL (itunes.apple.com/...)' variable from parameter 4.
# Example: itunes.apple.com/app/ibooks-author/id490152466
appAddress="${4}"
log "Mac App Store URL: ${appAddress}"

# The App Store is opened to the specified app.
log "Opening the Mac App Store"
/usr/bin/osascript -e "tell application \"System Events\" to open location \"macappstore://${appAddress}\""
if [ $? != 0 ]; then
    log "Script error $?: There was en error opening the Mac App Store"
    exit 1
fi

exit 0

The script takes only the iTunes/App Store URL and opens the Mac App Store to that item and nothing more.  It’s simpler, an overall better experience, and I am still able to present a quick shortcut to common Mac App Store apps in Self Service just as I would on any of my users’ iOS devices.

SelfService_MacAppStore

New script for downloading Box Sync 4 via Self Service

Prior to the release of Box Sync 4 I had written a new stand-alone Self Service install script for the app.  The script at this earlier post still works for loading the Box Edit plugin, and I still use it, but the script below downloads the Box Sync 4 DMG, mounts it, copies the app and then removes all traces of the process.

#!/bin/bash

# The 'policytag' is used for log entries
policytag="BoxSync4"
# The 'loggertag' is used as a tag for all entries in the system.log
loggertag="boxsync4install"

log() {
# This log() function writes messages to the system.log and STDOUT
/usr/bin/logger -t "${loggertag}: ${policytag}" "${1}"
echo "${1}"
}

mountCheck() {
if [ -d /Volumes/boxsync4 ]; then
    log "/Volumes/boxsync4/ directory exists"
    if [[ $(/sbin/mount | /usr/bin/awk '/boxsync4/ {print $3}') == "/Volumes/boxsync4" ]]; then
        log "/Volumes/boxsync4/ is a mounted volume: unmounting"
        /usr/bin/hdiutil detach /Volumes/boxsync4
        if [ $? == 0 ]; then
            log "/Volumes/boxsync4/ successfully unmounted"
        else
            log "hdiutil error $?: /Volumes/boxsync4/ failed to unmount"
            exit 1
        fi
    fi
    log "Deleting /Volumes/boxsync4/ directory"
    /bin/rm -rf /Volumes/boxsync4
fi
}

cleanup() {
# The cleanup() function handles clean up tasks when 'exit' is called
log "Cleanup: Starting cleanup items"
mountCheck
if [ -f /tmp/boxsync4.dmg ]; then
    log "Deleting /tmp/boxsync4.dmg"
    /bin/rm -rf /tmp/boxsync4.dmg
fi
log "Cleanup complete."
}

# This 'trap' statement will execute cleanup() once 'exit' is called
trap cleanup exit

log "Beginning installation of ${policytag}"

# Check for the expected size of the downloaded DMG
webfilesize=$(/usr/bin/curl box.com/sync4mac -ILs | /usr/bin/tr -d '\r' | /usr/bin/awk '/Content-Length:/ {print $2}')
log "The expected size of the downloaded file is ${webfilesize}"

# Download the Box Sync Installer DMG
log "Downloading the Box Sync Installer DMG"
if [ -f /tmp/boxsync4.dmg ]; then
    # If there's another copy of the DMG inside /tmp/ it is deleted prior to download
    /bin/rm /tmp/boxsync4.dmg
    log "Deleted an existing copy of /tmp/boxsync4.dmg"
fi

/usr/bin/curl -Ls box.com/sync4mac -o /tmp/boxsync4.dmg
if [ $? == 0 ]; then
    log "The Box Sync Installer DMG successfully downloaded"
else
    log "curl error $?: The Box Sync Installer DMG did not successfully download"
    exit 1
fi

# Check the size of the downloaded DMG
dlfilesize=$(/usr/bin/cksum /tmp/boxsync4.dmg | /usr/bin/awk '{print $2}')
log "The size of the downloaded file is ${dlfilesize}"

# Compare the expected size against the downloaded size
if [[ $webfilesize -ne $dlfilesize ]]; then
    echo "The file did not download properly"
    exit 1
fi

# Check if the /Volumes/boxsync4 directory exists and is a mounted volume
mountCheck

# Mount the /tmp/boxsync4.dmg file
/usr/bin/hdiutil attach /tmp/boxsync4.dmg -mountpoint /Volumes/boxsync4 -nobrowse -noverify
if [ $? == 0 ]; then
    log "/tmp/boxsync4.dmg successfully mounted"
else
    log "hdiutil error $?: /tmp/boxsync4.dmg failed to mount"
    exit 1
fi

if [ -e /Applications/Box\ Sync.app ]; then
    /bin/rm -rf /Applications/Box\ Sync.app
    log "Deleted an existing copy of /Applications/Box\ Sync.app"
fi

log "Copying /Volumes/boxsync4/Box\ Sync.app to /Applications"
/bin/cp -R /Volumes/boxsync4/Box\ Sync.app /Applications/Box\ Sync.app
if [ $? == 0 ]; then
    log "The file copied successfully"
else
    log "cp error $?: The file did not copy successfully"
    exit 1
fi

# Open /Applications/Box\ Sync.app
# This will also begin the migration from Box Sync 3 to Box Sync 4
#/usr/bin/open /Applications/Box\ Sync.app

# Run a recon to update the JSS inventory
log "Postinstall for ${policytag} complete. Running Recon."
/usr/sbin/jamf recon
if [ $? == 0 ]; then
    log "Recon successful."
else
    log "jamf error $?: There was an error running Recon."
fi

exit 0

Promoter

I haven’t abandoned Promoter.  Work is still progressing, but I stopped during my reorganization of the code into classes/modules when I broke off into a tangent that ended up becoming a full blown JSS REST API Python library.  I’ll dive into that next, but I have made a couple of changes to my goals with Promoter:

  • I’m dropping file migration support – for now
    There are a number of reasons behind this.  The bigger piece of this is there’s no API to the JSS for uploading packages/scripts.  The other part is that because I have been writing Promoter as a tool to be integrated into scripted workflows it occurs to me that most IT admins will already have their own methods of transferring files between different shares and such functionality would be redundant and possibly less robust.  In the end, upon a successful migration, you would want to write in the code that would migrate your files into your production environment (if that would even be required).
  • Including more controls
    Based upon some feedback from Tobias and others, I’m going to be expanding the command line options to allow some limited transformation of the XML before it is posted to the destination JSS mainly in the form of swapping out values (e.g. setting a Site, adding in computer groups for scope that are present on the destination, forcing the policy to be enabled by default).
  • Support for importing as a Python module
    Once I have finished cleaning up the code I’ll be posting the raw source online so everyone can grab it and modify it freely, but also so the code can be easily imported into other Python scripts instead of being a command line utility.  That said, I’ll still post compiled versions of the binaries as I did before so they can be run on systems that either do not have the third-party libraries or don’t have Python installed.

jsslib – A Python library for the JSS REST API

As I mentioned above, I’ve been working on a Python library for making scripts interacting with the JSS REST API easy and quick to write.  The project took off when I decided to rewrite the existing JSS class I had written for Promoter and then began expanding it to cover the full API.

The syntax for the new library looks like this:

>>> import jsslib
>>> myjss = jsslib.JSS('https://myjss.com', 'myuser', 'mypass')
>>> results = myjss.computers()
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/computers

The above line would return the list of all computers in the JSS (you can see the URL in the output).  While that’s easy to write, it’s not very special.  Any kind of library should be doing work for you and making things easier for you.  That’s why I decided to write the new API to handle a lot of tasks for the end user.  The returned objects from these calls all result in “auto-parsed” attributes:

>>> results.id_list
[1, 2]
>>> results.size
'2'
>>> results.data
'<?xml version="1.0" encoding="UTF-8"?><computers><size>2</size><computer><id>1</id><name>USS-Voyager</name></computer><computer><id>2</id><name>Starship Enterprise</name></computer></computers>'

In addition to auto-parsing, the library also allows for any and all ‘simple’ objects in the JSS to be created or updated using parameters instead of passing XML:

>>> results = myjss.buildings_create("Minneapolis")
<Response [201]> https://jss.jamfcloud.com/bryson/JSSResource/buildings/id/0
>>> results.id
'5'
>>> myjss.buildings_update(5, "St Paul")
<Response [201]> https://jss.jamfcloud.com/bryson/JSSResource/buildings/id/5
<jsslib.JSSObject instance at 0x103629dd0>
>>> myjss.buildings(5)
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/buildings/id/5
>>> myjss.buildings(5).name
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/buildings/id/5
'St Paul'

The API will also be ‘smart’ in that it will use order of priority to test your input for certain API calls.  The best example of this is with computers and mobile devices.  Here is a series of API calls to look up a computer.  Take a note of the URLs in each of the outputs.

>>> results = myjss.computers(1)
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/computers/id/1
>>> results.udid
'ZZZZ0000-ZZZZ-1000-8000-001B639ABA8A'
>>> myjss.computers(results.udid).name
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/computers/udid/ZZZZ0000-ZZZZ-1000-8000-001B639ABA8A
'USS-Voyager'
>>> myjss.computers("USS-Voyager").serial_number
<Response [200]> https://jss.jamfcloud.com/bryson/JSSResource/computers/name/USS-Voyager
'QP8ZZZZHX85'

You’ll notice again I’m making use of the auto-parsing features that are built into the library.  Computer and mobile device records all return the following attributes already parsed and readable: UDID, serial number, MAC address, ID, model and computer name.

You might be wondering how you’re going to figure out how all of this works as the JSS API returns a wide variety of objects with different sets of data.  A huge point of this to me was to make sure anyone could start working with the library without hand holding, and so I’m making sure all of the documentation is baked into the code just like any other Python library:

>>> help(jsslib.JSS.advancedcomputersearches)

Help on method advancedcomputersearches in module jsslib:

advancedcomputersearches(self, identifier=None) unbound jsslib.JSS method
    Returns a JSSObject for the /advancedcomputersearches resource
 
    Example Usage:
    myjss.advancedcomputersearches() -- Returns all advanced computer searches ('None' request)
    myjss.advancedcomputersearches(1) -- Returns an advanced computer search by ID
    myjss.advancedcomputersearches('name') -- Returns an advanced computer search by name
 
    Keyword Arguments:
    identifier -- The ID or name of the advanced computer search
 
    Returned Values for a 'None' request:
    JSSObject.data -- XML from response
    JSSObject.size -- Total number of all advanced computer searches
    JSSObject.id_list -- List of all advanced computer search IDs (int) from the response
 
    Returned Values for ID or name requests:
    JSSObject.data -- XML from response
    JSSObject.id -- The ID of the advanced computer search
(END)

I presented jsslib at an internal company event not long ago and I was about 50% done at that point.  This library will end up being used by Promoter for all of its interactions with the JSS, and I will be posting the library somewhere once it is ready.  I’m also still continuing to flesh out some of the features (I’m considering adding a .search() method to returned objects for searching the XML data without having to pipe it into an XML parser).

You can expect to see jsslib popping up on here again in the near future.

As always, if you have any comments or questions just reach out to me here or anywhere else I linger.