Dev Update (2019-11-09)

I’m finally doing my Friday dev updates. These posts are short and sweet – they talk about work that went into any of my open source projects for the past ~week and/or what work is being done with them.

Let’s dive in.

MacAdmins 2018

Four Years of MacAdmins

Back in February of this year I was able to present at MacAD.UK in London (I attended in 2017; had a blast both times). This marked my eighth appearance at a conference as a speaker since joining Jamf in 2012 as the second member of their fledgling IT department. To be fair, four of those appearances were at JNUC. ¯\_(ツ)_/¯

In about a month, I’ll be making my fourth appearance, and third as a speaker, at the MacAdmins Conference at Penn State. I have loved this conference every year I’ve attended, and credit is due to the organizers, who assemble a great roster of speakers covering a range of subjects. You’re never without something to listen to.

My first time speaking here, in 2016, I gave what would end up being my most widely viewed presentation to date: Craft Your Own GUIs with Python and Tkinter. The video on YouTube has garnered an insane 82K+ views. I’ll attribute much of that to the subject’s appeal outside of Mac admin circles.

On the second round in 2017 I went a bit further. I attempted, with mixed results, a half-day workshop on building Jamf Pro Integrations along with another presentation: How Docker Compose Changed My Life. The workshop had a number of challenges that were all lessons I took to heart for the future: I had drastically underestimated the time needed for my content (we didn’t finish), the notice about prerequisite experience was lost from the sched.com listing, and I had no helpers to assist with questions, which caused us to pause frequently as I went around the room.


This year I’ll be doing another double feature, but no workshop. Two presentations at the 2018 conference!

Bryson’s doing a Jamf preso?

It’s true. Not counting JNUC, I will be delivering my first official Jamf presentation at a conference. Our gracious Marketing department offered our sponsor slot to me and even allowed me to pick whatever I wanted for the subject!

My choice is something near and dear to me: the recently announced Jamf Marketplace. Why is this near and dear? Creating integrations with Jamf Pro has been a passion of mine, and the Marketplace is a step towards a beautiful future where admins and developers can publish their work for all to share in. I’m very excited for this one.

Session Link: Get Your Tools in Front of Thousands with the Jamf Marketplace

Talking Serverless and AWS

My personal session (not affiliated with Jamf) is all about the new focus in my professional life: serverless application architectures in AWS. That alone can be a pretty broad subject. My presentation will focus on Lambda: the AWS service for running code without servers.

There is a lot of potential for Lambda within your org if you have an AWS account, or would be allowed to create one (you’d be shocked at what you can achieve within the free tier – which I’ll touch on). Beyond the tried and true cron job, you can implement all sorts of crazy event-driven workflows with custom processing handled by Lambda functions you’ve written in your preferred language (which is Python, right?).

I’ll be doing a deep dive into the subject. We’ll cover the basics of Lambda, how IAM permissions work and how to apply them, the best practices of defining and deploying using CloudFormation (what I call template-first development), and hopefully more if time allows. It’s an area I’ve become very passionate about, and I’m really looking forward to presenting on this to the Mac admin community.

Session Link: Diving into AWS Lambda: An Intro to Serverless for Admins


I hope to see you next month! If you don’t find me wandering the halls between sessions, please reach out on Slack, or peek into Legends. It’s a favorite.

If you’re interested in the presentations I’ve done over the years at various conferences, you can find that list with YouTube links here.

CommunityPatch.com (beta)

In previous posts, I talked about two projects I had been working on for the Jamf community to better help admins get started using the new “External Patch Sources” feature in Jamf Pro 10.2+. While working on Patch Server and the companion Patch-Starter-Script, I also wrote a quick proof of concept for a serverless version that would run in an AWS account.

The Stupid Simple Patch Server uses API Gateway and Lambda functions to serve patch definitions you store in an S3 bucket. It includes the same API endpoints as the Patch Server so workflows between the two could be shared. I even took it a step further and added a subscription API so it would sync with a remote patch definition via URL.

That side project (of a side project) made me think about how I could take the basic design and build it up into something that could be used by multiple admins. At first, I wrote a lot of code to transform the Stupid Simple Patch Server into a multi-tenant application. At a certain point, I weighed the limitations of what could be done in a way I would consider secure and scrapped much of it.

But not everything. The work I had done was retooled into a new concept: a single, public, community-managed patch source for Jamf Pro. A service where anyone could contribute patch definitions and be able to manage and update them. Five minutes after having this idea I bought the communitypatch.com domain and set up a beta instance of my work-in-progress:

https://beta.communitypatch.com

(Screenshot: the CommunityPatch beta home page)

New API

The big green “Read the docs” button on the main page will take you to… the documentation! There you will find the APIs described below in much greater detail.

The community-managed patch source mirrors a number of features from my Patch Server project. The /jamf endpoints are here to integrate with Jamf Pro so the service can be used as an external patch source.

The /api endpoints are slightly different from the Patch Server, but allow for creating definitions by providing the full JSON or a URL to an external source (creating a synced definition) and updating the versions afterwards.

From the docs, here’s the example for creating a new patch definition using the Patch-Starter-Script:

curl https://beta.communitypatch.com/api/v1/title \
   -X POST \
   -d "{\"author_name\": \"<NAME>\", \"author_email\": \"<EMAIL>\", \"definition\": $(python patchstarter.py /Applications/<APP> -p "<PUBLISHER>")}" \
   -H 'Content-Type: application/json'

Here, there are required author_name and author_email keys you need to provide when creating a definition. The author_name you choose will be injected into the ID and name keys of the definition you’re providing.

For example, if I provide “Bryson” for my name, and I’m creating the “Xcode.app” definition, its ID will become “Xcode_Bryson” and the display name “Xcode (Bryson)”. These changes make it possible to differentiate titles when browsing in Jamf Pro, and for members of the community to better identify who is managing what (as well as sharing with each other).

After you create a patch definition, you will be emailed an API token to the address you provided in author_email. This token is specifically for managing that title, and is the only way to update the title afterwards. Your email address is not saved with CommunityPatch. A hash of it is stored with the title so you can reset the token should you lose it or need the previous one invalidated (this feature is not implemented yet).

Updating works similarly to Patch Server (but without the items key):

curl https://beta.communitypatch.com/api/v1/title/<ID>/version \
   -X POST \
   -d "$(python patchstarter.py /Applications/<APP> --patch-only)" \
   -H 'Content-Type: application/json' \
   -H 'Authorization: Bearer <TOKEN>'


Try It Out

I had a number of admins on Slack giving me feedback and testing the API for a few weeks. While I still have work to do to ensure the production version of CommunityPatch is performant, and more features to finish writing, I am at a stage where I would like those interested in contributing to and using CommunityPatch to join in and try out the documented features (in your test environments).

You can jump right in by joining the #communitypatch channel on the MacAdmins Slack, reading the CommunityPatch documentation, playing around with the API, testing definitions you create in your Jamf Pro test environments, and discussing what you find.

CommunityPatch is being written out in the open. You can go to GitHub and see the code for yourself. You can even contribute at a code/docs level if you like! For the immediate future, having admins test it out and report back will provide me a lot of value as I work towards completing the application and deploying it to production.


Open Distribution Server Technology (w/JNUC Recap)

ODST @JNUC

At JNUC 2017, I was given the opportunity to do a session detailing the progress I’ve made and the vision I have for a new file distribution server that can serve to replace the now discontinued JDS (Jamf Distribution Server).

This was a last minute addition to the conference schedule and we were unable to record it, but the Mac admin community took notes which can be found here. I’ve also uploaded the presentation’s slide deck on SlideShare.

The source code for ODST is available on GitHub. It is currently in an early Alpha state with some of the core functionality complete.

Project Goals

ODST came about with the sunsetting of the JDS. I set out to design my own implementation of an automated file distribution server but with additional features to make it a more powerful component of an administrator’s environment.

The goal of ODST is to provide an on-premise file syncing and distribution server solution that puts automation and integration features first.

The ODS (Open Distribution Server) application itself is modular and being designed to fit into as many deployment models as possible. This ranges from a simple single-server installation on Linux, Windows, or macOS to containerized deployments in Docker or Kubernetes.

While there will be initial support for the ODS to integrate with Jamf Pro it is not a requirement for using the application. This will allow administrators using other management tools to take advantage of the solution and submit feature requests for integrations with them as well.

Planned Features

  • A full web interface (built on top of the Admin API)
  • The Admin API for integrating your ODS instances with existing automations and workflows.
  • Many-to-many registration and syncing which will allow package uploads to any ODS and still have them replicate throughout your network.
  • Package and ODS staging tags to restrict how certain levels of packages replicate through the network.
  • Webhooks and email to send notifications to other services alerting them to events that are occurring on your ODS instances.
  • LDAP integration for better control and accountability when granting other administrators and techs access to your ODS instances.
  • And more to come…

Package Syncing

Where the JDS synced by polling another server on a five minute loop, the ODS application uses a private ODS API for communicating between instances.

When two ODS instances are registered to each other they will have each other’s keys saved to their databases and use those keys to sign API requests.

The standard order of operations during a package upload would be:

  1. The admin uploads a package to ODS1.
  2. ODS1 generates the SHA1 hash of the package and also generates SHA1 hashes for every 1 megabyte chunk of that package. This information is saved to the database.
  3. ODS1 sends a notification to every registered ODS instance that a new package is available.
  4. ODS2 receives this notification and makes a return API request for the full details of the package.
  5. ODS2 saves the pending package to the database and a download task is sent to the queue.
  6. The ODS2 worker takes the download task off the queue and begins downloading the package in 1 megabyte chunks, comparing hashes for every chunk, and saving them to a temporary location.
  7. Once the ODS2 worker has downloaded all chunks it recombines them into a single file, performs a final SHA1 check, and moves the package to the public download directory.
  8. ODS2 then performs step #3 to propagate the package to other ODS instances it is registered with.

If the download process seems familiar, it is borrowed from how Apple performs MDM initiated application installs.
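To make steps 2 and 6 a little more concrete, here is a rough sketch of the chunk-and-hash idea in shell. This is purely illustrative (the ODS application handles this internally), and the package path and chunk directory below are made up for the example:

# Whole-file hash that gets recorded with the package record
PKG="/path/to/package.pkg"
CHUNK_DIR="/private/tmp/ods_chunks"
mkdir -p "$CHUNK_DIR"
shasum -a 1 "$PKG"

# Split into 1 MB chunks and hash each one so the receiving instance can
# verify every chunk as it downloads, then re-check the recombined file
split -b 1m "$PKG" "$CHUNK_DIR/chunk_"
for chunk in "$CHUNK_DIR"/chunk_*; do
    shasum -a 1 "$chunk"
done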

Application Architecture

The ODS application is more complex than the JDS in order to facilitate the additional features that are being built on top of the file syncing. In addition to the application server, a production deployment would also include a front-end web server (Nginx or Apache), a Redis server for the queuing system, a database server (ODST falls back to a local SQLite database file if there is not a database service to connect to), and workers that process queued actions.

Single Server

(Diagram: single-server deployment)

Multi-Server or Containerized

(Diagram: multi-server or containerized deployment)

The queuing system is an important element as it backgrounds many of the processes that the server will need to perform in reaction to notifications or requests (such as queuing notifications, API requests to other ODS instances, file downloads, and file hashing operations). This frees up the application to continue accepting requests by removing long blocking processes.

How the Community Can Help

When I gave the JNUC presentation I only took up half of the allotted time to discuss what was completed with the project and what was planned. The second half was spent in open discussion to take in feedback and guidance from the target audience on what was needed on the road to a 1.0 release.

Adding LDAP support was the first item to come out of this and is the next feature I plan to write once the file syncing framework is finished. I encouraged participants to open GitHub issues on the repo as we discussed their questions and asks. I want to continue to encourage this. The ODST project is meant for the community and should continue to be community driven in its roadmap.

When it comes to contributing to the project I am not asking for code help at this time. Don’t feel that you need to know Python or web development with Flask in order to contribute. There are many other areas where I need help:

  • Testing! As I make new commits to the repository and add in more features you can help ensure everything is working by running the latest version and trying them out. Submit issues, provide logs, provide details on how you’re deploying the application (the provided Docker Compose file is the quickest and easiest way), and by doing so you will help verify features work as expected and solidify the quality of the application.
  • Determine optimal configurations. There are quite a few components to the ODS application and I am learning as I go for how to configure the web server. More experienced administrators who are familiar with these technologies, especially in production environments, can help work towards a baseline for…
  • Installers! The ODS application can be custom set up for almost any kind of deployment, but we still want an easy option where an admin can grab an installer and load it onto a single Linux or Windows server. If you have experience building installers on those platforms please reach out! I’ve also mentioned containerization a few times, and having official Docker images for the ODS application and worker components should be a part of this initiative.
  • Documentation. Much Documentation. There will be official docs available at odst.readthedocs.io which will be generated from the main repository on GitHub. You can help maintain and improve that documentation with pull requests as you find errors or inaccurate instructions/details while the project iterates. The documentation will be especially invaluable when it comes to the aforementioned installers, custom installations, and the administrator user guide portion that will walk users through how to perform actions.

If you haven’t yet, please join the #odst channel in the Mac Admins Slack where you can discuss the project with me directly as well as other admins who are using, testing, and contributing as they can.

I hope to build something that will provide great value to our community and fill the gap the JDS left in a lot of environments. I hope to see you on GitHub and Slack soon!

Open Distribution Server and JNUC 2017

Two posts in one day! I wanted to do a quick JNUC update and promote a session that I’m really excited for.

This year, as with years past, I will be pretty involved with the conference. Aside from finding me roaming the halls of the Hyatt, I am on the committee for the first ever JNUC Hackathon, participating in the API Scripting and Webhooks labs, and delivering the Webhooks Part Deux! presentation with Oliver Lindsey from our Pro Services team.

But the session I am most excited about is a very late addition that was put onto the JNUC App’s schedule this morning.

The Open Distribution Server

Around July (Penn State), I began work on an alternative distribution server to the JDS. As the community recently learned, the JDS has been discontinued and will no longer be supported by Jamf as cloud-centric options are being focused on. Prior to that announcement, I was involved in some talks with Product Management at Jamf about the JDS, and I took the opportunity to show them what I was working on.

Joe Bloom, our Jamf Pro Product Manager who you will hear talk at several product sessions this year, was very excited about this and urged me to continue working on my distribution server and release it as a free, open source solution.

Joe has secured an additional session slot on Tuesday at 4:00 PM dedicated to the Open Distribution Server. You can find it at the link or in the JNUC App (it is not listed on the website).

During this session I’m going to talk about the goals of this project, what it aims to solve, what features I have implemented and plan to implement, but then turn the rest of the time over to you so we can talk about the key things that will make this a successful solution:

  • What features don’t work as described or need to be changed to fit your workflows?
  • What features are missing that you need?
  • How can the community contribute to this project?

The current code base for this project was posted to GitHub a couple weeks ago:

https://github.com/brysontyrrell/ODST/tree/develop

The Open Distribution Server (ODS) is an open-source package distribution and syncing solution for IT administrators to serve as a potential alternative for the Jamf Distribution Server.

For those looking for an on-premise, automated distribution point solution, and those who are in need of a replacement for their JDS infrastructure, please attend and be a part of the discussion.

I hope to see you there!

Using OS X VMs for FileVault 2 Testing

This post details how to create a FileVault 2-ready base OS X virtual machine using Parallels Desktop 10 and then create either a virtual machine template or make use of the new ‘linked clones’ feature to quickly create new virtual machines to test FileVault 2 workflows.

Using a virtual machine for FileVault 2 testing has a number of benefits.

  • A virtual machine’s disk is much smaller than most Macs’ drives (even without repartitioning), so it takes less time to fully encrypt.
  • Instead of having to decrypt a Mac to run a test again, or wipe it to reinstall OS X, you can clone virtual machines endlessly and delete the encrypted ones when you are finished with them.
  • You can also snapshot virtual machines and then revert them to previous states.

Create a New Base OS X 10.9 VM

You can easily create a new virtual machine by dragging the “Install OS X Mavericks.app” onto the Parallels icon in your dock.

Choose the default options for CPU (2), RAM (2 GB) and Disk space (64 GB).

Click Continue to create the virtual machine.

Repartition the VM to Conserve Disk Space

When a virtual machine is FileVault 2 encrypted it will encrypt all of the disk space that it is assigned.  If left at the default of 64 GB the virtual disk file will expand to the full size and take up unneeded space on your Mac.

To prevent this, repartition the disk drive so the boot volume only takes up what is required to install OS X.

Once you are in the OS X Installer, go to the Utilities menu and open Disk Utility.

Split the drive into two partitions with the first/boot partition being set to 16 GB.

Click Apply to create the new partitions and then quit Disk Utility.
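If you would rather do this step from the command line, Terminal is also available under the same Utilities menu. Something along these lines should produce an equivalent split; the disk identifier and volume names here are assumptions, so check ‘diskutil list’ first:

diskutil list
diskutil partitionDisk disk0 2 GPT JHFS+ "Macintosh HD" 16G JHFS+ "Storage" R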

Install OS X to the 16 GB boot partition.

Reinstall OS X to Create a Recovery Partition

Once the install of OS X has finished and you have created your account open Terminal and run ‘diskutil list’.  You will note there is no Recovery HD partition present.

Per Parallels’ KB article, copying the “Install OS X Mavericks.app” to the virtual machine and running it will correctly create the Recovery HD partition and allow FileVault 2 encryption.

http://kb.parallels.com/en/122661

Mount your host Mac in the virtual machine and copy the “Install OS X Mavericks.app” to your second partition.  Then run it and re-install OS X Mavericks.

Upon completion of the second install, run ‘diskutil list’ again and you will find the Recovery HD is present.

The final virtual machine takes up less than ~15 GB of disk space on your Mac.

DO NOT FILEVAULT 2 ENCRYPT THIS!

Close all open applications and shut down the virtual machine for the next steps.

Create a New VM for FileVault 2 Testing

Option 1) Template the Base OS X 10.9 VM

You can convert your base OS X virtual machine into a VM Template.  Select the virtual machine and then from the File menu select “Clone to Template…” to create a copy that will be the template, or select “Convert to Template” to convert the virtual machine and not create a copy.

Once you have your template, double-click on it and you will receive a prompt to create a new virtual machine from it.  This will create a new virtual machine based on the template.

You can share a virtual machine template with other Parallels users or move it to other Macs.  Each virtual machine created from a template will have unique identifiers to avoid conflicts.

Option 2) Create a Linked Clone of the Base OS X 10.9 VM

“Linked Clones” are a new feature of Parallels Desktop 10.  A linked clone is a virtual machine created from a snapshot of a “parent” virtual machine.  This method is faster than the above virtual machine templates for spinning up instances.

http://kb.parallels.com/en/122669

By selecting “New Linked Clone…” from the virtual machine’s context menu a snapshot will be  automatically created from the parent virtual machine and the cloned virtual machine window will immediately appear and be ready to launch.
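Parallels also ships a command line tool, prlctl, that can create linked clones if you want to script spinning up test machines. A minimal sketch, with placeholder virtual machine names:

prlctl clone "OS X 10.9 Base" --name "FV2 Test 01" --linked
prlctl start "FV2 Test 01"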

Enable FileVault 2 on the Cloned VM

With your clone of the base virtual machine you can now enroll it to a JSS (or whatever) to test whatever FileVault 2 workflows are required.

In the screenshots below you can see this virtual machine was enrolled, and a configuration profile requiring FileVault 2 with key redirection to the JSS was deployed.  Once the user logged out, FileVault 2 was triggered.

Bryson’s Bash Favorites

We recently had our JAMF Nation User Conference here in Minneapolis.  I spent a lot of time with a lot of brilliant sysadmins from around the world.  After three nights of mingling it became pretty clear that despite how much I enjoy talking about technology solutions (and I will continue to talk as people hand me Martinis), I don’t feed a lot of that back into the Mac admin community, and it was made clear that I should give it a try.

So, this will be my first installment of three posts sharing some of what I’ve learned.  This first will focus on Bash scripting.  The second will serve as an introduction to Python for the Bash scripter (let me be up front about this: Python is so hot…) and some of the benefits that it offers to us as administrators.  The last post will be entirely about the JSS API, interacting with the data in both Bash and Python as well as some examples of what you can accomplish with it.

So, on to Bash…

For the last two years as a Mac administrator I’ve learned quite a bit on the subject of shell scripting and have been shown a wealth of handy tricks that have helped refine my skill (there are a lot of wizards at JAMF).  In this post I want to show some of my favorite scripting solutions and techniques that have become staples of my writing style.  Now, if you’re reading this I’m going to assume you’re already pretty familiar with using a Mac via the command line and basic scripting (if the word shebang doesn’t conjure up the image of a number sign with an exclamation mark, you might not be ready).

Error Checking and Handling

As admins we write a lot of code that interacts with the Mac in ways that can certainly ruin a person’s day (if not their life, or so they would claim).  When executing commands there is a variable always readily available to view:

USS-Enterprise:~ brysontyrrell$ echo $?
0

The ‘$?’ represents the result of the last command that was run.  If there were no errors you  receive a zero (0) in return.  For anything else there will be a value greater than zero which represents your error code.  Visit the manpage of any number of commands on your Mac and you will usually find a section devoted to what an error code represents (if not, we have Google).  Error codes are critical for figuring out what went wrong.
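For example, a command that fails leaves a value greater than zero behind:

USS-Enterprise:~ brysontyrrell$ cp /does/not/exist /private/tmp/
cp: /does/not/exist: No such file or directory
USS-Enterprise:~ brysontyrrell$ echo $?
1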

In many cases we might need to kill a script if a command failed.  After all, if data is being manipulated or moved around there’s not a whole lot of sense in executing the remainder of the script when some key actions did not perform correctly.  We can do this with a simple IF statement that triggers an ‘exit’:

cp /path/to/source /path/to/destination
if [ $? -ne 0 ]; then
    exit
fi

Now, this will exit the script if our copy operation failed, but it isn’t that great.  The script will exit, but we won’t have any tangible information about how or why.  Let’s add a few things into this to make it more helpful to us:

cp /path/to/source /path/to/destination
copyStatus=$?  # capture the exit code right away; the test below would overwrite $?
if [ $copyStatus -ne 0 ]; then
    echo "There was a problem with the copy operation. Error code $copyStatus"
    exit 1
fi

Now we’re getting somewhere.  With this the script will not only output onto the Terminal that there was an error, but it will return the error code and also exit our script with a value greater than zero which will be reported as a failure!  We can have error codes that mean different things. ‘1’ is just the default.  Any numerical value can be used to represent an error (or types of error if we’re lumping them together) and its meaning can be recorded either within the script or in other documentation:

# Exit 1 for general error
# Exit 10 for copy operation error
cp /path/to/source /path/to/destination
copyStatus=$?
if [ $copyStatus -ne 0 ]; then
    echo "There was a problem with the copy operation. Error code $copyStatus"
    exit 10
fi

Using ‘echo’ for the output is great if we’re running the script manually, but what we’re writing will be run remotely and we will not be watching it execute live.  We’re going to need something that will allow us to go back at a later time to review what transpired:

cp /path/to/source /path/to/destination
copyStatus=$?
if [ $copyStatus -eq 0 ]; then
    log "The copy operation was successful."
else
    log "There was a problem with the copy operation. Error code $copyStatus" 10
fi

The log() call you see here is actually a function that I use for almost everything I write.  We’re going to cover what exactly it does a little later, but the gist of the above script is that it will output a message upon both the success and the failure of the command and exit the script after an error.  Never underestimate how important it is that your scripts are telling you what they are doing.  Your life will be better for it.

Operators for Shortcuts

Our above examples all allow you to execute multiple actions in response to the success or failure of a command based upon the result.  Sometimes we might only need to trigger one command in response to an action.  We can use operators  to achieve this effect without writing out an entire IF statement:

# A copy operation using an IF statement to execute a file removal
cp /path/to/source /path/to/destination
if [ $? -eq 0 ]; then
    rm /path/to/source
fi

# The above operation using the '&&' operator
cp /path/to/source /path/to/destination && rm /path/to/source

In the second example the source file we are copying is deleted so long as the ‘cp’ command left of the ‘AND’ operator returned zero.  If there had been an error then the code on the right side would not execute.  Both examples achieve the same result, but using the operator acts as a shortcut and allows you to cut down on the amount of code you need to write.  If you need to achieve the same effect but when the result is not zero we can turn to the ‘OR’ operator:

# A copy operation using an IF statement to exit upon failure
cp /path/to/source /path/to/destination
if [ $? -ne 0 ]; then
    exit 10
fi

# The same copy operation but using '||' to trigger the 'exit'
cp /path/to/source /path/to/destination || exit 10

It’s a TRAP

This one is a personal favorite.  Usage of the Bash builtin ‘trap’ is actually pretty new to me, and it is one of the hands down coolest (if you’re like me and think scripting is, you know, cool) things I’ve seen.  A ‘trap’ gives you the ability to determine actions that are performed when your script terminates or when commands throw errors!  Let me demonstrate with a very basic example:

# Here we define the function that will contain our commands for an 'exit'
onExit() {
rm -rf /private/tmp/tempfiles
}

# Here we set the 'trap' to execute upon an EXIT signal
trap onExit EXIT

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
done

# This 'exit' command will send an EXIT signal which will trigger the 'trap'
exit 0

As you can see in the above example, ‘trap’ is very easy to use.  The syntax for ‘trap’ is:

trap 'command(s)' signal(s)
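You can also attach a single handler to more than one signal in the same line. A quick illustration with a throwaway temp file path (an example, not part of the script above):

# One handler covering a normal exit, Ctrl-C, and a termination signal
tmpFile="/private/tmp/myscript.$$"
trap 'rm -f "${tmpFile}"' EXIT INT TERM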

We created an onExit() function containing the actions we wanted to perform.  This became the command in the ‘trap’ line. Once triggered, the temporary directory that we were copying files from is automatically purged once the script is complete. This makes cleanup much simpler and easier on us.  It also allows far more control over the state of the system when an error requires us to kill the script in process.  I had mentioned in my introduction that we could have traps for both terminations and errors, did I not?  Let’s expand upon that first example and make it a bit more robust:

onExit() {
rm -rf /private/tmp/tempfiles
}

# This function contains commands we want to execute every time there is an ERROR
onError() {
errorCode=$?
cmdLine=$1
cmdName=$2
echo "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}"
exit 1
}

trap onExit EXIT

# Here we introduce a second 'trap' for ERRORs
trap 'onError $LINENO $BASH_COMMAND' ERR

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
done

exit 0

This one certainly has a lot more going on.  The way I approach these ‘traps’ is pretty simple: my EXIT performs a cleanup of any working files while my ERROR handles outputting the information I will need to determine what went wrong.  In this example I have an ‘exit’ command included inside the onError() function so the cleanup onExit() function is still called in the event of an error.  That’s not a practice I’m recommending, but I am showing that it is an option.  There are plenty of cases out there where you would want the script to continue on even if an error occurs in the middle of a copy operation (user account migration, anyone?).  Those are the times when you will want to be explicit about where in your script certain errors trigger an ‘exit.’

Let’s break down that onError() function:

onError() {
# Our first action is to capture the error code of the command (remember, this changes after EVERY command executed)
errorCode=$?
# This variable is from $LINENO which tells us the line number the command resides on in the script
cmdLine=$1
# The last variable is from $BASH_COMMAND which is the name of the command itself that gave an error
cmdName=$2
# Our tidy statement here puts it all together in a human-readable form we can use to troubleshoot
echo "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}"
exit 1
}

# In this 'trap' we call not just our function, but we also pass two parameters along to it
# The $LINENO and $BASH_COMMAND variables are called 'Internal Variables' to the Bash shell
trap 'onError $LINENO $BASH_COMMAND' ERR

We’re going to make this onError() function even more powerful a little later by visiting the log() function I had mentioned.  Before that, let’s go back to the onExit() function.  This ‘trap’ ideally is where we want to perform all of our cleanup actions, and the basic example I gave is wiping out a temporary directory of working files.  While our scratch space is removed in this process it does not address any actions we may have made in other areas of the system.  So, do we want to write all of that into the onExit() function even if they may not be relevant to when the script terminated?

I’m a big fan of the idea: “If I don’t HAVE to do this, then I don’t want to.”  The meaning of this is I don’t want to execute commands on a system (especially when I’m masquerading around as root) if they’re unnecessary.  We can write our onExit() function to behave following that ideology.  I don’t quite remember where I first saw this on the internet, but it was damned impressive:

# This is an array data type
cleanup=()

onExit() {
# Once again we're capturing our error code right away in a unique variable
exitCode=$?
rm -rf /private/tmp/tempfiles
# If we exit with a value greater than zero and the 'cleanup' array has values we will now execute them
if [ $exitCode -ne 0 ] && [ "${#cleanup[@]}" -gt 0 ]; then
    for i in "${cleanup[@]}"; do
        # The 'eval' builtin takes a string as an argument (executing it)
        eval "$i"
    done
fi
echo "EXIT: Script error code: $exitCode"
}

onError() {
errorCode=$?
cmdLine=$1
cmdName=$2
echo "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}"
exit 1
}

trap onExit EXIT
trap 'onError $LINENO $BASH_COMMAND' ERR

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
    # After each successful copy operation we add a 'rm' command for that file into our 'cleanup' array
    cleanup+=('rm /path/to/destination/"${file##*/}"')
done

exit 0

We have now transformed our onExit() function into one giant UNDO command.  The function will always remove the temporary working directory (which we always want, no matter what the exit status is), and the IF statement within it will run the additional commands out of our handy ‘cleanup’ array when the script exits with an error.  Effectively, unless the script successfully completes the entire copy operation it will, on the first error, remove every file that did make it into the destination.  This leaves the system pretty much in the same state as it was before our script ran.  We can take this concept further in much larger scripts by adding new commands into a ‘cleanup’ array as we complete sections of our code.

Logging is Awesome

I’m finally getting around to explaining that log() function from earlier.  Logs are fantastic for troubleshooting as they generally contain a lot of data that helps point us towards the source of the issue.  You can approach logging of your own scripts in two ways: append an existing log or use your own customized one.  In my case, I append all of my script log output into the Mac’s system.log using the ‘logger’ command.  This command allows you to do some pretty cool things (like message priority), but my use is fairly simple.

log () {
if [ -z "$2" ]; then
    logger -t "it-logs: My Script" "${1}"
else
    logger -t "it-logs: My Script" "${1}"
    logger -t "it-logs: My Script" "Exiting with error code $2"
    exit $2
fi
}

log "This is a log entry"

***OUTPUT IN SYSTEM.LOG AS SHOWN IN CONSOLE.APP***
Nov 10 12:00:00 USS-Enterprise.local it-logs: My Script[1701]: This is a log entry

You’re probably piecing together how this function works.  The ‘-t’ flag in the command creates a tag for the log entry.  In my case I have a universal prefix for the tag I use in all of my scripts (here I’m using ‘it-logs:’ but it’s similar) and then I follow it with the name of the script/package for easy reference (you read right: everything I write about in this post I use for preinstall and postinstall scripts in my packages as well).  The tagging allows me to grab the system.log from a machine and filter all entries containing ‘it-logs’ to see everything of mine that has executed, or I can narrow it down to a specific script and/or package by writing the full tag.  It’s really nice.

Right after the tag inside the brackets is the process ID, and then we have our message.  If you scroll back up to the example where I used the log() function you’ll see that in code that triggered on a failure I included a ’10’ as a second parameter.  That is the error code to use with an ‘exit.’  If present, log() will write the message first and then write a second entry stating that the script is exiting with an error code and ‘exit’ with that code (and trigger our onExit() trap function).
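That same filtering works from the command line if you pull the log off a machine. A minimal sketch, assuming the default /var/log/system.log location:

# Everything tagged by any of my scripts
grep 'it-logs' /var/log/system.log

# Narrowed down to a single script/package by using the full tag
grep 'it-logs: My Script' /var/log/system.log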

If you want to maintain your own log, instead of writing into the system.log, you can easily do so with a similar function:

# You must ensure that the file you wish to write to exists
touch /path/to/log.log

log () {
if [ -z "$2" ]; then
    # Here 'echo' commands will output the text message that is '>>' appended to the log
    echo $(date +"%Y %m %d %H:%M")" it-logs: My Script: ${1}" >> /path/to/log.log
else
    echo $(date +"%Y %m %d %H:%M")" it-logs: My Script: ${1}" >> /path/to/log.log
    echo $(date +"%Y %m %d %H:%M")" it-logs: My Script: Exiting with error code $2" >> /path/to/log.log
    exit $2
fi
}

***OUTPUT IN LOG.LOG AS SHOWN IN CONSOLE.APP***
2013 11 10 20:38 it-logs: My Script: This is a log entry

The end result is just about the same, and with a .log extension it will automatically open in the Console.app and still use the filtering.

Now I’m going to take the copy operation from above and write in the log() function so you can see how the whole package fits together:

log () {
if [ -z "$2" ]; then
    logger -t "it-logs: My Script" "${1}"
else
    logger -t "it-logs: My Script" "${1}"
    logger -t "it-logs: My Script" "Exiting with error code $2"
    exit $2
fi
}

cleanup=()

onExit() {
exitCode=$?
log "CLEANUP: rm -rf /private/tmp/tempfiles"
rm -rf /private/tmp/tempfiles
if [ $exitCode -ne 0 ] && [ "${#cleanup[@]}" -gt 0 ]; then
    for i in "${cleanup[@]}"; do
        log "ADD-CLEANUP: $i"
        eval "$i"
    done
fi
}

onError() {
errorCode=$?
cmdLine=$1
cmdName=$2
log "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}" 1
}

trap onExit EXIT
trap 'onError $LINENO $BASH_COMMAND' ERR

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
    log "COPY: ${file} complete"
    cleanup+=('rm /path/to/destination/"${file##*/}"')
done

exit 0

Interacting With the User Stuff

There are a ton of examples out there for grabbing the name of the currently logged in user and finding their existing home directory.  Both items are very important when we’re executing our scripts as the root user.  To get the name of the logged in user I’ve found these three methods:

# Though the simplest I have found that this method does not work in a package preinstall/postinstall script
USS-Enterprise:~ brysontyrrell$ echo $USER
brysontyrrell

# This one is also very simple but it relies upon a command that may not be present in future OS X builds
USS-Enterprise:~ brysontyrrell$ logname
brysontyrrell

# The following is used pretty widely and very solid (the use of 'ls' and 'awk' nearly future-proofs this method)
USS-Enterprise:~ brysontyrrell$ ls -l /dev/console | awk '{print $3}'
brysontyrrell

One step beyond this is to then find the user’s home directory so we can move and/or manipulate data in there.  One piece of advice I’ve been given is to never assume I know what the environment is.  Users are supposed to be in the ‘/Users/’ directory, but that doesn’t mean they are.  If you’ve never played around much with the directory services command line utility (‘dscl’), I’m happy to introduce you:

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/NFSHomeDirectory:/ {print $2}'
/Users/brysontyrrell

‘dscl’ is incredibly powerful and gives us easy access to a lot of data concerning our end-users’ accounts.  In fact, you can take that above command and change out the regular expression ‘awk’ is using to pull out all sorts of data individually:

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/GeneratedUID:/ {print $2}'
123A456B-7DE8-9101-1FA1-2131415B16C1

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/UniqueID:/ {print $2}'
501

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/PrimaryGroupID:/ {print $2}'
20

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/UserShell:/ {print $2}'
/bin/bash

Alternative Substitution

Kind of an odd header, but you’ll get it in a moment.  One of the bread ‘n butter techniques of scripting is to take the output of one command and capture it into a variable.  This process is known as “command substitution” where instead of displaying the entered command we are shown the result.  The traditional way of doing this is to enclose the command you are capturing in `backticks`.  Instead of using backticks, use a $(dollar and parenthesis) so your editor of choice still highlights the command syntax correctly.

Check out this example which will pull out the type of GPU of the Mac:

gpuType=`system_profiler SPDisplaysDataType | awk -F': ' '/Chipset Model/ {print $2}' | tail -1`
gpuType=$(system_profiler SPDisplaysDataType | awk -F': ' '/Chipset Model/ {print $2}' | tail -1)

Functionally, both of these statements are identical, but now as we write our scripts we have an easier time going back and identifying what is going on at a glance.  Let’s take two of the examples from above for obtaining information about the logged in user and write them the same way for a script:

userName=$(ls -l /dev/console | awk '{print $3}')
userHome=$(dscl . read /Users/"${userName}" | awk '/NFSHomeDirectory:/ {print $2}')

cp /private/tmp/tempfiles/somefile "${userHome}"/Library/Preferences/