Dev Update (2019-11-09)

I’m finally doing my Friday dev updates. These posts are short and sweet – they talk about work that went into any of my open source projects for the past ~week and/or what work is being done with them.

Let’s dive in.

Jamf The Gathering: A Slack Bot’s Story

JNUC happened recently. You might have heard about it. Some stuff came up. There was a really awesome burn during the opening keynote.

It was a really nice burn.


Blog switch up

I haven’t been posting to the blog as frequently as I used to. Partly, I think this is because my ideas have trended toward larger, more in-depth posts that require a lot more time to sit down and craft.

I’m going to try something different post-Penn State MacAdmins. I want to start forcing myself to write more technical content, but in smaller, bite-sized pieces that focus on fundamentals rather than all-encompassing solutions: topics that are generic, but provide examples which can be used as building blocks. As these are smaller and more example driven, I’m going to set a goal to post every week on Monday morning, and maybe Wednesday too if I have enough posts in the pipeline.

On Fridays, I want to start posting “Dev Updates”. I have a number of projects that I have open sourced and am committed to updating for the Mac admin community. These projects do have channels in the MacAdmins Slack, and their GitHub repos are linked there, but keeping up would involve a lot of back-scrolling for followers to find out what has been discussed. I plan to include in these weekly updates: new issues raised on GitHub, recaps of discussions from the Slack channels, and descriptions of any work that has been done during that week on features/bugs. This should not only provide digestible status updates for those who want them, but also help keep me focused.

The ultimate goal here is that I’m writing more again. When it comes to learning and bettering myself professionally, there are two ways I go about it: post about it publicly, or present on it publicly. Both force you to cover all your bases in the face of public scrutiny.

We’ll see how this goes.

MacAdmins 2018

Four Years of MacAdmins

Back in February of this year I was able to present at MacAD.UK in London (I attended in 2017; had a blast both times). This marked my eighth appearance at a conference as a speaker since joining Jamf in 2012 as the second member of their fledgling IT department. To be fair, four of those appearances were at JNUC. ¯\_(ツ)_/¯

In about a month, I’ll be making my fourth appearance, third speaking, at the MacAdmins Conference at Penn State. I have loved this conference every year I’ve attended, and credit is due to the organizers who assemble a great roster of speakers with a range of content subjects. You’re never without something to listen to.

My first time speaking here, in 2016, I gave what would end up being my most widely viewed presentation to date: Craft Your Own GUIs with Python and Tkinter. The video on YouTube has garnered an insane 82K+ views. I’ll attribute much of that to the subject’s appeal outside of Mac admin circles.

On the second round in 2017 I went a bit further. I attempted, with mixed results, a half-day workshop on building Jamf Pro Integrations along with another presentation: How Docker Compose Changed My Life. The workshop had a number of challenges that were all lessons I took to heart for the future: I had drastically underestimated the time needed for my content (we didn’t finish), the notice about prerequisite experience was lost from the sched.com listing, and having no helpers to assist with questions caused us to pause frequently as I went around the room.


This year I’ll be doing another double feature, but no workshop. Two presentations at the 2018 conference!

Bryson’s doing a Jamf preso?

It’s true. Not counting JNUC, I will be delivering my first official Jamf presentation at a conference. Our gracious Marketing department offered our sponsor slot to me and even allowed me to pick whatever I wanted for the subject!

My choice is something near and dear to me: the recently announced Jamf Marketplace. Why is this near and dear? Creating integrations with Jamf Pro has been a passion of mine, and the Marketplace is a step towards a beautiful future where admins and developers can publish their work for all to share in. I’m very excited for this one.

Session Link: Get Your Tools in Front of Thousands with the Jamf Marketplace

Talking Serverless and AWS

My personal session (not affiliated with Jamf) is all about the new focus in my professional life: serverless application architectures in AWS. That alone can be a pretty broad subject. My presentation will focus on Lambda: the AWS service for running code without servers.

There is a lot of potential for Lambda within your org if you have an AWS account, or would be allowed to create one (you’d be shocked at what you can achieve within the free tier – which I’ll touch on). Beyond the tried and true cron job, you can implement all sorts of crazy event-driven workflows with custom processing handled by Lambda functions you’ve written in your preferred language (which is Python, right?).

I’ll be doing a deep dive into the subject. We’ll cover the basics of Lambda, how IAM permissions work and how to apply them, the best practices of defining and deploying using CloudFormation (what I call template-first development), and hopefully more if time allows. It’s an area I’ve become very passionate about and I’m looking forward to presenting this to the Mac admin community.

Session Link: Diving into AWS Lambda: An Intro to Serverless for Admins


I hope to see you next month! If you don’t find me wandering the halls between sessions, please reach out on Slack, or peek into Legends. It’s a favorite.

If you’re interested in the presentations I’ve done over the years at various conferences, you can find that list with YouTube links here.

Stop everything! Start using Pipenv!

Seven months ago I tweeted about Pipenv:

That was about it.

The project, which was at v3.6.1 at the time, still felt new. At Jamf, we were working on our strategy for handling project requirements and dependencies going forward for Jamf Cloud’s automation apps, and Pipenv, while a cool idea, didn’t yet have the maturity and traction we were looking for.

As @elios pointed out on the MacAdmins Slack: Pipenv was announced in January of this year as an experimental project.

Today it is the officially recommended Python packaging tool from Python.org.

In his words, “That’s awfully fast…”

The GitHub repository for Pipenv has 5400+ stars and more than 100 contributors to the project. It is proceeding at breakneck development speed and adding all kinds of juicy features every time you turn around. It’s on v8.3.2.

Within ten minutes of finally, FINALLY, giving Pipenv a try today I made the conscious decision to chuck the tooling we had made and convert all of our projects over.

(Oh, and the person who wrote that tooling wholly agreed)

Let’s talk about that.

Part 1: Why requirements.txt sucks

There was a standard workflow for your Python projects. You needed two key tools: pip and virtualenv. When developing, you would create a Python virtual environment using virtualenv in order to isolate your project from the system Python or other virtual environments.

You would then use pip to install all the packages your project needs. Once you had all of those packages installed you would then run a command to generate a requirements.txt file that listed every package at the exact installed version.

The requirements.txt file stays with your project. When someone else gets a copy of the repository to work on it or deploy it they would follow the same process of creating a virtual environment and then using pip to install all of the required packages by feeding that requirements.txt file into it.
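
Roughly, the whole dance looked like this (the package names here are just examples):

$ virtualenv venv && source venv/bin/activate      # isolate the project
(venv) $ pip install flask requests                # install what the project needs
(venv) $ pip freeze > requirements.txt             # pin every installed package at its exact version

# ...and for anyone else picking up the repository:
$ virtualenv venv && source venv/bin/activate
(venv) $ pip install -r requirements.txt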

Now, that doesn’t seem so bad, right? The drawbacks become apparent as you begin to craft your build pipelines around your projects.

Maintenance is a Manual Chore

The first thing to call out is how manual this process is. Once you’ve cloned a project you need to create a virtual environment at the same Python version and then install the packages. If you want to begin upgrading packages you need to be sure to manually export a new requirements.txt file each time.

Removing packages is far more problematic. Uninstalling with pip will not remove dependencies for a package! Take Flask, for example. It’s not just one package. It has four sub-packages that get installed with it, and one of those (Jinja2) has its own sub-package.
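
A quick illustration (the version numbers match the dependency graph shown later in this post):

$ pip uninstall -y flask
<...uninstall output...>
$ pip freeze
click==6.7
itsdangerous==0.24
Jinja2==2.9.6
MarkupSafe==1.0
Werkzeug==0.12.2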

So, maybe the answer is to manually maintain only the top-level packages in your requirements.txt file? That’s a hard no. You don’t want to do that because it makes your environment no longer deterministic. What do I mean? I mean, those sub-packages we just talked about are NOT fixed versions. When you’re installing Python packages from the Python Package Index (PyPI) they aren’t using requirements.txt files with fixed versions for their dependencies. They are instead specifying ranges, allowing for installs at a minimum and/or maximum version for that sub-package.
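
Flask is a good example: its own dependency declarations are ranges, not pins. Here is a reconstructed excerpt of what that looks like in a setup.py, based on the requirement ranges that appear in the graph output later in this post:

# setup.py (excerpt, reconstructed for illustration)
install_requires=[
    'Werkzeug>=0.7',
    'Jinja2>=2.4',
    'itsdangerous>=0.21',
    'click>=2.0',
],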

This makes a lot of sense when you think about it. When installing a package for the first time as a part of a new project, or updating a package in an existing project, you are going to want the latest allowed versions for all the sub-packages for any patches that have been made.

Not so much when it comes to a project/application. You developed and tested your code at specific package versions that are in a known working state. Deployment happens past the development phase, where bugs from updates to those packages could have been identified and addressed. When it comes to deployment, your project’s requirements.txt file must be deterministic and list every package at an exact version.

Hence, maintenance of this file becomes a manual chore.

But Wait, There’s Testing!

There are a lot of packages that you might need to install that have nothing to do with running your Python application, but they have everything to do with running tests and building documentation. Common package requirements for this would be Pytest, Tox, and Sphinx with maybe a theme. Important, but not needed in a deployment.

We want to be able to specify a set of packages to install that will only be used during the testing and build process. The answer, unfortunately, is a second requirements.txt file. This one would have a special name like requirements-dev.txt and would only be used during a build. It could contain only the build-specific packages and be installed with pip after the standard requirements.txt, or it could contain all of the standard packages plus the build packages. In either case, the maintenance problem continues to grow.
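
For the first approach, the build step simply chains both files:

$ pip install -r requirements.txt -r requirements-dev.txt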

So We Came Up With Something…

Our process at Jamf ended up settling on three types of requirements.txt files in an effort to address all the shortcomings described.

  • requirements.txt
    This file contained the full package list for the project at fixed versions. It would be used for spinning up new development environments or during deployment.
  • core-requirements.txt
    This file contained the top-level packages without fixed versions. Development environments would not be built from this. Instead, this file would be used for recreating the standard requirements.txt file at updated package versions, eliminating orphaned packages that were no longer used or had been removed from the project at some point (see the sketch after this list).
  • build-requirements.txt
    This is a manually maintained file per-project that only contained the additional packages needed during the build process.
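
Regenerating the pinned file from the core list looked roughly like this (a sketch of the process, not our exact tooling):

$ virtualenv fresh && source fresh/bin/activate
(fresh) $ pip install -r core-requirements.txt    # latest allowed versions of the top-level packages
(fresh) $ pip freeze > requirements.txt           # re-pin everything; orphaned packages simply drop out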

This approach isn’t too dissimilar to what others have implemented for many of the same reasons we came up with.

Part 2: Enter Pipfile

Not Pipenv? We’re getting there.

Pipfile was introduced almost exactly a year ago at the time of this post (first commit on November 18th, 2016). The goal of Pipfile is to replace requirements.txt and address the pain points that we covered above.

Here is what a Pipfile looks like:

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[dev-packages]



[packages]



[requires]

python_version = "2.7"

PyPI is the default source in all new Pipfiles. Using the scheme shown, you can add additional [source] sections with unique names to specify your internal Python package indexes (we have our own at Jamf for shared packages across projects).

There are two groups for listing packages: dev-packages and packages. In a Pipfile these groups map to how we were already handling things: only the build/test packages go into the dev-packages list and all project packages go under the standard packages list.

Packages in the Pipfile can have fixed versions or be set to install whatever the latest version is.

[packages]

flask = "*"
requests = "==2.18.4"

The [requires] section dictates the version of Python that the project is meant to run under.

From the Pipfile you would create what is called a Pipfile.lock which contains all of the environment information, the installed packages, their installed versions, and their SHA256 hashes. The hashes are a recent security feature of pip to validate packages that are installed when deployed. If there is a hash mismatch the install can be aborted, a powerful security tool for preventing malicious packages from entering your environments.
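
Here is an abridged look at what a Pipfile.lock holds for a single package (hashes truncated; the exact layout can vary between Pipenv versions):

{
    "_meta": {
        "hash": {"sha256": "..."},
        "requires": {"python_version": "2.7"},
        "sources": [{"name": "pypi", "url": "https://pypi.python.org/simple", "verify_ssl": true}]
    },
    "default": {
        "requests": {
            "hashes": ["sha256:..."],
            "version": "==2.18.4"
        }
    },
    "develop": {}
}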

Note that you can specify the SHA256 hashes of packages in a normal requirements.txt file. This is a feature of pip and not of Pipfile.
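
That looks like the following in a requirements.txt file, and pip’s hash-checking mode enforces it:

requests==2.18.4 --hash=sha256:<hash of the wheel> --hash=sha256:<hash of the sdist>

$ pip install --require-hashes -r requirements.txt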

It is this Pipfile.lock that will be used to generate environments on other systems whether for development or deployment. The Pipfile will be used for maintaining packages and dependencies and regenerating your Pipfile.lock.

All of this is just a specification. Pip, as of yet, does not support Pipfiles directly, but there is another…

Part 3: Pipenv Cometh

Pipenv is the implementation of the Pipfile standard. It is built on top of pip and virtualenv and manages both your environment and package dependencies, and it does so like a boss.

Get Started

Install Pipenv with pip (or homebrew if that’s your jam).

$ pip install pipenv

Then in a project directory create a virtual environment and install some packages!

$ pipenv --two
Creating a virtualenv for this project…
<...install output...>
Virtualenv location: /Users/me/.local/share/virtualenvs/test-zslr3BOw
Creating a Pipfile for this project…

$ pipenv install flask
Installing flask…
<...install output...>
Adding flask to Pipfile's [packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (36eec0)!

$ pipenv install requests==2.18.4
Installing requests==2.18.4…
<...install output...>
Adding requests==2.18.4 to Pipfile's [packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (572f23)!

$

Notice how we see the Locking messages after every install? Pipenv automatically regenerates the Pipfile.lock each time the Pipfile is modified. Your fixed environment is being automatically maintained!

Graph All The Things

Let’s look inside the Pipfile itself.

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[dev-packages]



[packages]

flask = "*"
requests = "==2.18.4"


[requires]

python_version = "2.7"

No sub-packages? Nope. It doesn’t need to track those (they end up in the Pipfile.lock, remember?). But, if you’re curious, you can use the handy graph feature to view a full dependency tree of your project!

$ pipenv graph
Flask==0.12.2
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
   - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]
 
$

Check that out! Notice how you can see the requirement that was specified for the sub-package in addition to the actual installed version?

Environment Management Magic

Now let’s uninstall Flask.

$ pipenv uninstall flask
Un-installing flask…
<...uninstall output...>
Removing flask from Pipfile…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (4ddcaf)!

$

And re-run the graph command.

$ pipenv graph
click==6.7
itsdangerous==0.24
Jinja2==2.9.6
 - MarkupSafe [required: >=0.23, installed: 1.0]
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]
Werkzeug==0.12.2

 

Yes, the sub-packages have now been orphaned within the existing virtual environment, but that’s not the real story. If we look inside Pipfile we’ll see that requests is the only package listed, and if we look inside our Pipfile.lock we will see that only requests and its sub-packages are present.

We can regenerate our virtual environment cleanly with only a few commands!

$ pipenv uninstall --all
Un-installing all packages from virtualenv…
Found 20 installed package(s), purging…
<...uninstall output...>
Environment now purged and fresh!

$ pipenv install
Installing dependencies from Pipfile.lock (f58d9f)…
 🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 5/5 — 00:00:01
To activate this project's virtualenv, run the following:
 $ pipenv shell

$ pipenv graph
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]

$

Mindblowing!!!

Installing the dev-packages for our builds uses an additional flag with the install command.

$ pipenv install sphinx --dev
Installing sphinx…
<...install output...>
Adding sphinx to Pipfile's [dev-packages]…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (d7ccf2)!

The appropriate locations in our Pipfile and Pipfile.lock have been updated! To install the dev environment, perform the same steps for regenerating above but add the --dev flag.

$ pipenv install --dev
Installing dependencies from Pipfile.lock (f58d9f)…
 🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 18/18 — 00:00:05
To activate this project's virtualenv, run the following:
 $ pipenv shell

Part 4: Deploy Stuff!

The first project I decided to apply Pipenv to in order to learn the tool is ODST. While there is a nice feature in Pipenv where it will automatically import a requirements.txt file if detected, I opted to start clean and install all my top-level packages directly. This gave me a proper Pipfile and Pipfile.lock.

Here’s the resulting Pipfile.

[source]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"


[dev-packages]

pytest = "*"
sphinx = "*"
sphinx-rtd-theme = "*"


[packages]

flask = "*"
cryptography = "*"
celery = "*"
psutil = "*"
flask-sqlalchemy = "*"
pymysql = "*"
requests = "*"
dicttoxml = "*"
pyjwt = "*"
flask-login = "*"
redis = "*"


[requires]

python_version = "2.7"

Here’s the graph of the installed packages (without the dev-packages).

$ pipenv graph
celery==4.1.0
 - billiard [required: >=3.5.0.2,<3.6.0, installed: 3.5.0.3]
 - kombu [required: <5.0,>=4.0.2, installed: 4.1.0]
 - amqp [required: >=2.1.4,<3.0, installed: 2.2.2]
 - vine [required: >=1.1.3, installed: 1.1.4]
 - pytz [required: >dev, installed: 2017.3]
cryptography==2.1.3
 - asn1crypto [required: >=0.21.0, installed: 0.23.0]
 - cffi [required: >=1.7, installed: 1.11.2]
 - pycparser [required: Any, installed: 2.18]
 - enum34 [required: Any, installed: 1.1.6]
 - idna [required: >=2.1, installed: 2.6]
 - ipaddress [required: Any, installed: 1.0.18]
 - six [required: >=1.4.1, installed: 1.11.0]
dicttoxml==1.7.4
Flask-Login==0.4.0
 - Flask [required: Any, installed: 0.12.2]
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
 - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
Flask-SQLAlchemy==2.3.2
 - Flask [required: >=0.10, installed: 0.12.2]
 - click [required: >=2.0, installed: 6.7]
 - itsdangerous [required: >=0.21, installed: 0.24]
 - Jinja2 [required: >=2.4, installed: 2.9.6]
 - MarkupSafe [required: >=0.23, installed: 1.0]
 - Werkzeug [required: >=0.7, installed: 0.12.2]
 - SQLAlchemy [required: >=0.8.0, installed: 1.1.15]
psutil==5.4.0
PyJWT==1.5.3
PyMySQL==0.7.11
redis==2.10.6
requests==2.18.4
 - certifi [required: >=2017.4.17, installed: 2017.11.5]
 - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
 - idna [required: >=2.5,<2.7, installed: 2.6]
 - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]

In my Dockerfiles for the project I swapped out requirements.txt and switched to installing my packages from the Pipfile.lock into the system Python (in a containerized app there’s no real need to create a virtual environment).

RUN /usr/bin/apt-get update -q && \
    /usr/bin/apt-get install -qqy build-essential git && \
    /usr/bin/apt-get install -qqy python-pip python-dev && \
    /usr/bin/pip install pipenv && \
    /usr/bin/apt-get install -qqy libssl-dev libffi-dev && \
    /usr/bin/apt-get install -qqy uwsgi uwsgi-plugin-python && \
    /usr/bin/apt-get clean && \
    /bin/rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY /docker/webapp/web-app.ini /etc/uwsgi/apps-enabled/

COPY /Pipfile* /opt/odst/
WORKDIR /opt/odst/
RUN /usr/local/bin/pipenv install --system --deploy

COPY /ods/ /opt/odst/ods/
COPY /application.py /opt/odst/
COPY /docker/ods_conf.cfg /opt/odst/

RUN /bin/chown -R www-data:www-data /opt/odst/ods/static

CMD ["/usr/bin/uwsgi", "--ini", "/etc/uwsgi/apps-enabled/web-app.ini"]

The --system flag tells Pipenv to install to the system Python. The --deploy flag will abort the installation if the Pipfile.lock is out of date or the Python version is wrong. Out of date? Pipenv knows if your Pipfile.lock matches up with the Pipfile by comparing a SHA256 hash of the Pipfile that it has saved. If there’s a mismatch then there is an issue and it aborts.

If I didn’t want to rely on this hash match feature, I could instead pass an --ignore-pipfile flag that will tell it to proceed using only the Pipfile.lock.

I should mention at this point that Pipenv dramatically speeds up build times. When using pip your packages will install sequentially. Pipenv will install packages in parallel. The difference is immediately noticeable, especially when regenerating your development environments.

Still, The World is Catching Up

Python development workflows are clearly moving to this tool, but Pipenv is less than a year old at this point and not everything will support generating environments from the provided Pipfiles.

The two I am currently using that fall in that category are AWS Elastic Beanstalk and ReadTheDocs.org. Both need a requirements.txt file in order to build the Python environments.

That is thankfully a simple task. You can generate a requirements.txt file from Pipenv by using the lock option.

$ pipenv lock --requirements > requirements.txt

 

For Elastic Beanstalk applications, I can have this command run as a part of my build pipeline so the file is included with the ZIP archive before it goes to S3.

In the case of Read the Docs there is an open issue for adding Pipfile support, but until then I will need to generate the requirements.txt file as I make changes to my environment and save it with the repository.

For Read the Docs I’ll want the extra dev-packages. This is done by using the pipenv run command, which executes the command that follows as if it were run inside the project’s virtual environment.

$ pipenv run pip freeze > requirements.txt

The bright side is that I am no longer actually managing this file. It is straight output from my Pipenv managed environment!

Part 5: Profit

I hope you enjoyed my overview and impressions of Pipenv. It’s a tool that is going to have a huge impact on development and deployment workflows – for the better!

A huge shout out to Kenneth Reitz for creating yet another invaluable package for the Python community!

* Edit Nov 9: Changed the command for exporting a requirements.txt file. Using pipenv lock -r -d only outputs the packages under the dev-packages section.

Farewell to the Unofficial JSS API Docs

Hey everyone.

With the launch of the Jamf Developer Portal I think it’s time I took down my Unofficial JSS API Docs site on Confluence.

I launched it as a community resource to make up for a lack of API documentation, but now that Jamf has something out there I feel it’s time I save my $10 a month. If you found these resources helpful in the past, great! That was the whole point.

The site will come down after November 17th. For those Google searching and coming across this post, click on the dev portal link I provided above to reach the official documentation provided by Jamf.

Open Distribution Server Technology (w/JNUC Recap)

ODST @JNUC

At JNUC 2017, I was given the opportunity to do a session detailing the progress I’ve made and the vision I have for a new file distribution server that can serve to replace the now discontinued JDS (Jamf Distribution Server).

This was a last minute addition to the conference schedule and we were unable to record it, but the Mac admin community took notes which can be found here. I’ve also uploaded the presentation’s slide deck on SlideShare.

The source code for ODST is available on GitHub. It is currently in an early Alpha state with some of the core functionality complete.

Project Goals

ODST came about with the sunsetting of the JDS. I set out to design my own implementation of an automated file distribution server but with additional features to make it a more powerful component of an administrator’s environment.

The goal of ODST is to provide an on-premise file syncing and distribution server solution that puts automation and integration features first.

The ODS (Open Distribution Server) application itself is modular and being designed to fit into as many deployment models as possible. This ranges from a simple single-server installation on Linux, Windows, or macOS to containerized deployments in Docker or Kubernetes.

While there will be initial support for the ODS to integrate with Jamf Pro it is not a requirement for using the application. This will allow administrators using other management tools to take advantage of the solution and submit feature requests for integrations with them as well.

Planned Features

  • A full web interface (built on top of the Admin API)
  • The Admin API for integrating your ODS instances with existing automations and workflows.
  • Many-to-many registration and syncing which will allow package uploads to any ODS and still replicate throughout your network.
  • Package and ODS staging tags to restrict how certain levels of packages replicate through the network.
  • Webhooks and email to send notifications to other services alerting them to events that are occurring on your ODS instances.
  • LDAP integration for better control and accountability when granting other administrators and techs access to your ODS instances.
  • And more to come…

Package Syncing

Where the JDS synced by running a looping task every five minutes to poll another server, the ODS application uses a private ODS API for communicating between instances.

When two ODS instances are registered to each other they will have each other’s keys saved to their databases and use those keys to sign API requests.
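
I won’t detail the exact signing scheme here, but as a rough illustration, an HMAC over the request using the shared key is a common way to do this kind of signing (this is not the actual ODS code; the function is hypothetical):

import hashlib
import hmac

def sign_request(shared_key, method, path, body=b''):
    """Illustrative only: sign an inter-ODS API request with a shared key (bytes)."""
    message = method.encode() + b'\n' + path.encode() + b'\n' + body
    return hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# The receiving ODS recomputes the signature with its stored copy of the key
# and rejects the request if the values do not match.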

The standard order of operations during a package upload would be:

  1. The admin uploads a package to ODS1.
  2. ODS1 generates the SHA1 hash of the package and also generates SHA1 hashes for every 1 megabyte chunk of that package. This information is saved to the database (the hashing is sketched in the code after this list).
  3. ODS1 sends a notification to every registered ODS instance that a new package is available.
  4. ODS2 receives this notification and makes a return API request for the full details of the package.
  5. ODS2 saves the pending package to the database and a download task is sent to the queue.
  6. The ODS2 worker takes the download task off the queue and begins downloading the package in 1 megabyte chunks, comparing hashes for every chunk, and saving them to a temporary location.
  7. Once the ODS2 worker has downloaded all chunks it recombines them into a single file, performs a final SHA1 check, and moves the package to the public download directory.
  8. ODS2 then performs step #3 to propagate the package to other ODS instances it is registered with.

If the download process seems familiar, it is borrowed from how Apple performs MDM initiated application installs.
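
As a minimal sketch of the hashing in step 2 (illustrative only, not the actual ODS code):

import hashlib

CHUNK_SIZE = 1024 * 1024  # 1 megabyte chunks, as described above

def package_hashes(path):
    """Return the SHA1 of the whole package plus a SHA1 per 1 MB chunk."""
    whole = hashlib.sha1()
    chunk_digests = []
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            whole.update(chunk)
            chunk_digests.append(hashlib.sha1(chunk).hexdigest())
    return whole.hexdigest(), chunk_digests

A downloading ODS can verify each chunk against its advertised digest as it arrives, then verify the whole-file digest before moving the package into the public download directory.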

Application Architecture

The ODS application is more complex than the JDS in order to facilitate the additional features that are being built on top of the file syncing. In addition to the application server, a production deployment would also include a front-end web server (Nginx or Apache), a Redis server for the queuing system, a database server (ODST falls back to a local SQLite database file if there is not a database service to connect to), and workers that process queued actions.

Single Server

(Diagram: ODS single-server deployment)

Multi-Server or Containerized

(Diagram: ODS multi-server or containerized deployment)

The queuing system is an important element as it backgrounds many of the processes that the server will need to perform in reaction to notifications or requests (such as queuing notifications, API requests to other ODS instances, file downloads, and file hashing operations). This frees up the application to continue accepting requests by removing long blocking processes.
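
As a sketch of what that backgrounding looks like (illustrative only; the actual ODS task code differs, but Celery and Redis are the pieces already named in the project’s dependencies):

from celery import Celery

# The broker URL is an example; point it at the Redis server in your deployment.
queue = Celery('ods', broker='redis://localhost:6379/0')

@queue.task
def download_package(package_id):
    # The long-running work (chunked download, per-chunk hash checks, final
    # verification) happens here in a worker process, not in the web request.
    ...

# A request handler only enqueues the task and returns immediately:
# download_package.delay(package_id)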

How the Community Can Help

When I gave the JNUC presentation I only took up half of the allotted time to discuss what was completed with the project and what was planned. The second half was spent in open discussion to take in feedback and guidance from the target audience on what was needed on the road to a 1.0 release.

Adding LDAP support was the first item to come out of this and is my next planned feature to write in after the file syncing framework is finished. I encouraged participants to open GitHub issues on the repo as we discussed their questions and asks. I want to continue to encourage this. The ODST project is meant for the community and should continue to be community driven in its roadmap.

When it comes to contributing to the project I am not asking for code help at this time. Don’t feel that you need to know Python or web development with Flask in order to contribute. There are many other areas where I am in need of help:

  • Testing! As I make new commits to the repository and add in more features you can help ensure everything is working by running the latest version and trying them out. Submit issues, provide logs, provide details on how you’re deploying the application (the provided Docker Compose file is the quickest and easiest way), and by doing so you will help verify features work as expected and solidify the quality of the application.
  • Determine optimal configurations. There are quite a few components to the ODS application and I am learning as I go for how to configure the web server. More experienced administrators who are familiar with these technologies, especially in production environments, can help work towards a baseline for…
  • Installers! The ODS application can be custom set up for almost any kind of deployment, but we still want an easy option where an admin can grab an installer and load it onto a single Linux or Windows server. If you have experience building installers on those platforms please reach out! I’ve also mentioned containerization a few times, and having official Docker images for the ODS application and worker components should be a part of this initiative.
  • Documentation. Much Documentation. There will be official docs available at odst.readthedocs.io which will be generated from the main repository on GitHub. You can help maintain and improve that documentation with pull requests as you find errors or inaccurate instructions/details as the project iterates. The documentation will be especially invaluable when it comes to the aforementioned installers, custom installations, and the administrator user guide portion that will walk users through how to perform actions.

If you haven’t yet, please join the #odst channel in the Mac Admins Slack where you can discuss the project with me directly as well as other admins who are using, testing, and contributing as they can.

I hope to build something that will provide great value to our community and fill the gap the JDS left in a lot of environments. I hope to see you on GitHub and Slack soon!