The JSS REST API for Everyone

In this post I’m going to explore the JAMF Software Server (JSS) REST API. In the opening we’ll cover some of the basics behind HTTP requests and responses. Then there will be examples of making these calls in both Bash and Python that you can use as starting points. Lastly I’ll outline an example of using the API to perform a search and pull out the data we want.

To better follow along with this post check out your “JSS REST API Resource Documentation”:

https://myjss.com/api (adapt to your JSS address)

The documentation page for your JSS contains a list of all of the available resources, their objects, the parameters those objects can use and the methods you can use to interact with them. Especially useful on the documentation page is the “Try it out!” button for every GET example. This will display the XML of any object in your JSS for you to view and copy as you begin to explore the API (Note: any use of the documentation page to view resources will require authentication).

 

HTTP methods:

All of the interactions with the JSS REST API are done using HTTP methods. There are four methods you may use:

GET retrieves data from a JSS resource for us to parse. A successful GET returns a status code of 200 and the XML of the requested resource/object.

POST takes input XML from a variable or a file and creates a new JSS object for a resource. A successful POST returns a status code of 201 and XML containing the ID of the object we just created, allowing us to further interact with it.

PUT takes input XML from a variable or a file and updates a JSS object. A successful PUT returns a status code of 201 and XML containing the ID of the object we updated.

DELETE will delete a JSS object. A successful DELETE will return a status code of 200 and XML containing the ID of the object we deleted including the tag: “<successful>True</successful>”.

 

Understanding status codes for HTTP responses:

Successful and failed API calls will return a variety of status codes. 2XX codes represent a successful response. 4XX and 5XX codes are errors. The status code you receive can help you troubleshoot your API call and determine what is incorrect or needs to be changed.

200: Your request was successful.

201: Your request to create or update an object was successful.

400: There was something wrong with your request. You should recheck the request and/or XML and reattempt the request.

401: Your authentication for the request failed. Check your credentials or check how they are being processed/handled.

403: You have made a valid request, but you lack permissions to the object you are trying to interact with. You will need to check the permissions of the account being used in the JSS interface under “System Settings > JSS User Accounts & Groups”.

404: The JSS could not find the resource you were requesting. Check the URL to the resource you are using.

409: There was a conflict when your request was processed. Normally this is due to your XML not including all of the required data, having invalid data, or a conflict between your resource and another one (e.g. some resources require a unique <name>). Check your XML and reattempt the request.

500: This is a generic internal server error. A 500 status code usually indicates something has gone wrong on the server end and is unrelated to your request.

 

Using cURL in Shell Scripts:

When working with shell scripts you will be using the cURL command. The XML variables below can be either XML string variables or files (to send a file, use cURL’s @ syntax, e.g. -d @/path/to/file.xml). If you are using a variable instead of a file you will need to be careful that any double quotes within the string are properly escaped.
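As a quick illustration (a sketch; the XML and the compName variable are placeholders, not from a real JSS record), single quotes sidestep the escaping problem entirely, while double quotes allow variables to expand but require escaping any literal double quotes:

# Single quotes: nothing inside needs escaping, but shell variables will not expand
POSTxml='<computer><general><name>Example-Mac</name></general></computer>'

# Double quotes: "$compName" expands, and any literal double quotes must be escaped as \"
POSTxml="<computer><general><name>$compName</name></general></computer>"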

GET:
curl https://myjss.com/JSSResource/.. --user "$user:$pass"

POST:
curl https://myjss.com/JSSResource/.. --user "$user:$pass" -H "Content-Type: text/xml" -X POST -d "$POSTxml"

PUT:
curl https://myjss.com/JSSResource/.. --user "$user:$pass" -H "Content-Type: text/xml" -X PUT -d "$PUTxml"

DELETE:
curl https://myjss.com/JSSResource/.. --user "$user:$pass" -X DELETE

If you want to get both the XML response as well as the status code in the same call, you will need to amend the above commands to include --write-out "\n%{http_code}" --output - so the status code is written at the bottom of the output.

curl https://myjss.com/JSSResource/.. --user "$user:$pass" --write-out "\n%{http_code}" --output -

If you go to parse the XML you may need to remove the status code line.
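One way to split the two apart in a script (the resource URL is elided here, just as in the examples above):

response=$(curl https://myjss.com/JSSResource/.. --user "$user:$pass" --write-out "\n%{http_code}" --output -)
# The status code is the final line of the output; everything above it is the XML
statusCode=$(echo "$response" | tail -1)
xml=$(echo "$response" | sed '$d')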

 

Using urllib2 in Python:

In Python 2.X the standard library for HTTP requests is urllib2. While there are more lines of code involved than in the cURL examples above, Python has a few advantages. The response object we create below contains separate attributes for the returned XML and the status code.

# Import these resources into your script
import urllib2
import base64

# Initiate each request with these two lines
request = urllib2.Request('https://myjss.com/JSSResource/..')
request.add_header('Authorization', 'Basic ' + base64.b64encode(UsernameVar + ':' + PasswordVar))

# Use the following lines for each method when performing that request
GET:
response = urllib2.urlopen(request)

POST:
request.add_header('Content-Type', 'text/xml')
request.get_method = lambda: 'POST'
response = urllib2.urlopen(request, POSTxml)

PUT:
request.add_header('Content-Type', 'text/xml')
request.get_method = lambda: 'PUT'
response = urllib2.urlopen(request, PUTxml)

DELETE:
request.get_method = lambda: 'DELETE'
response = urllib2.urlopen(request)

We can take everything shown above and write a multi-purpose function to handle each of the methods. Depending upon what you are writing you can use this as a starting point and adapt it by removing some methods, changing the value returned (in this case the entire response) or introducing error handling so failures trigger actions or display specific output instead of Python’s traceback.

Example of a Python API function:

def call(resource, username, password, method='', data=None):
    request = urllib2.Request(resource)
    request.add_header('Authorization', 'Basic ' + base64.b64encode(username + ':' + password))
    # Normalize the method so 'post' and 'POST' both work
    if method.upper() in ('POST', 'PUT', 'DELETE'):
        request.get_method = lambda: method.upper()

    if method.upper() in ('POST', 'PUT') and data:
        request.add_header('Content-Type', 'text/xml')
        return urllib2.urlopen(request, data)
    else:
        return urllib2.urlopen(request)

Usage:

response = call('https://myjss.com/JSSResource/..', 'myname', 'mypass')
response = call('https://myjss.com/JSSResource/..', 'myname', 'mypass', 'post', myXML)
response = call('https://myjss.com/JSSResource/..', 'myname', 'mypass', 'put', myXML)
response = call('https://myjss.com/JSSResource/..', 'myname', 'mypass', 'delete')

In any case, we can get our status code and the XML by typing:

response.code
response.read()

 

Parsing results:

In Bash you have a few ways of retrieving data from your XML. Many Unix wizards out there can dazzle you with complex-looking Awk and Sed one-liners. Here is a simple one that I have used before:

response=$(curl https://myjss.com/JSSResource/computers/id/1/subset/location --user "$user:$pass")
email=$(echo "$response" | /usr/bin/awk -F'<email_address>|</email_address>' '{print $2}')

Those two lines (which can also be combined into one) return the email address of the user assigned to the Mac, which can be handy when configuring software. The above method works because I know there is only one “<email_address>” tag in the XML.
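Combined into a single line, it looks like this:

email=$(curl https://myjss.com/JSSResource/computers/id/1/subset/location --user "$user:$pass" | awk -F'<email_address>|</email_address>' '{print $2}')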

For a more ubiquitous tag such as “<name>” we need to pipe our XML to a powerful command-line tool called Xpath. Xpath is a Perl binary that parses XML (but cannot modify it!). To retrieve the value of a tag, pipe the XML to Xpath with the path to the tag as an argument, and then use Awk or Sed to strip the surrounding tags.

response=$(curl https://myjss.com/JSSResource/computers/id/1 --user "$user:$pass")
username=$(echo "$response" | xpath '//general/name' 2>&1 | awk -F'<name>|</name>' '{print $2}')

The “2>&1” for the Xpath command is redirecting STDERR to STDOUT so we can suppress some of the verbose output from Xpath.

In Python we will be using the “xml.etree.ElementTree” library to work with the XML returned from the JSS. While there are a number of options available in Python, xml.etree.ElementTree is part of the standard library and, from my experience, handles the job very well.

response = call('https://myjss.com/JSSResource/computers/id/1', 'myname', 'mypass')

# You'll see this in almost all online xml.etree.ElementTree examples where the import has
# been shortened to 'etree' or 'eTree' for easier writing.
import xml.etree.ElementTree as etree

xml = etree.fromstring(response.read())

We now have an object we can interact with. You can use xml.etree.ElementTree not only to read data out of the XML, but to modify it, output the changes as an XML string that can be used for a POST or PUT request, and even create new XML data from scratch or insert it into another XML object. The library is very powerful, but we’ll cover just reading out data. You might notice that the “.find()” method here is similar to Xpath in that we type out the path to our tags.

# ID of the Mac's assigned Site
xml.find('general/site/id').text

# The Mac's assigned user's email address
xml.find('location/email_address').text

# The Mac's model
xml.find('hardware/model').text

# Here's a more advanced example where we're extracting part of the XML and parsing it.
# In this example we're getting the name, version and path for every application that
# is in the Mac's inventory record.
applications = xml.find('software/applications')
for app in applications.iter('application'):
    print app.find('name').text
    print app.find('version').text
    print app.find('path').text

 

Working with the JSS API:

We’re now going to take all of the above knowledge and apply it to an example flow using the JSS REST API. In this scenario, we have a script or an app that is going to:

  1. Create an advanced computer search looking for Macs assigned to a specific user.
  2. Update that search with new criteria and display fields.
  3. Find and parse data on those results.
  4. Delete the advanced search once we are done.

Something that you won’t find in the JSS API documentation is what data is required when you’re creating an object. This can end up being a trial and error process, but the best practice I can recommend is going into a JSS, creating an empty object and then reading that object’s XML either through the API or the API documentation page. That should give you a good idea of the base requirements for the XML you’ll be writing.
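For example, after creating an empty advanced search named “Empty Example” in the web app, you could read its XML back with a call like this (a sketch; most JSS resources support lookups by name as well as by ID):

curl https://myjss.com/JSSResource/advancedcomputersearches/name/Empty%20Example --user "$user:$pass"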

In our POST XML below we have the basic outline of an advanced computer search. Here we only have the name and a single criterion which we’ll pass a variable into (the triple quotes at the beginning and the end are a Python feature for multi-line strings, but you can write source XML files to use in Bash too). Something I would like to make a note of is that the object’s “<name>” tag is only required when you are POSTing a new object. You will not need it again unless you intend to rename the object.

POSTxml = '''<advanced_computer_search>
    <name>Example</name>
    <criteria>
        <criterion>
            <name>Username</name>
            <priority>0</priority>
            <and_or>and</and_or>
            <search_type>is</search_type>
            <value>{VARIABLE}</value>
        </criterion>
    </criteria>
</advanced_computer_search>'''
POST https://myjss.com/JSSResource/advancedcomputersearches/id/0 POSTxml
    Returns 201 status code and XML with the "<id>" of created resource.

The URL in my POST example ends with “/id/0”. Every object for each resource in the JSS is assigned a unique ID in sequential order starting at 1. Sending our POST to 0 tells the JSS we are creating a new object and it should be assigned the next available ID number. That ID is then returned with our response. Once we capture that ID and store it in our script we will be able to pass it to the rest of our functions.
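In a shell script, capturing that returned ID might look something like this (a sketch that reuses the Awk parsing trick from earlier):

newID=$(curl https://myjss.com/JSSResource/advancedcomputersearches/id/0 --user "$user:$pass" -H "Content-Type: text/xml" -X POST -d "$POSTxml" | awk -F'<id>|</id>' '{print $2}')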

Let’s say the ID of our newly created object was 100.

GET https://myjss.com/JSSResource/advancedcomputersearches/id/100
    Returns 200 status code and XML of the object.

Even though we only provided the name and a single criterion for our advanced search, the response XML now includes many more elements, including “<computers>” and “<display_fields>”. As soon as we POST the advanced computer search the JSS populates it with matching results.

We’re now going to update the criteria for the advanced search. Our XML is going to look nearly identical except we have removed the “<name>” tag as we don’t want to make any changes there. Any PUT made to the JSS will replace the existing fields with those contained in the XML. A PUT action is not an additive function. This is something to be careful about. If you were to perform a PUT of a single computer into a static computer group you would end up wiping the membership!

We’re going to switch this search from searching by assigned user to computer name.

PUTxml1 = '''<advanced_computer_search>
    <criteria>
        <criterion>
            <name>Computer Name</name>
            <priority>0</priority>
            <and_or>and</and_or>
            <search_type>is</search_type>
            <value>{VARIABLE}</value>
        </criterion>
    </criteria>
</advanced_computer_search>'''

PUT https://myjss.com/JSSResource/advancedcomputersearches/id/100 PUTxml1
    Returns 201 status code and XML with the "<id>" of the updated resource.

GET https://myjss.com/JSSResource/advancedcomputersearches/id/100
    Returns 200 status code and XML of the resource.

Performing another GET on the advanced search will return new results. If you scanned through either of the lists inside “<computers>” you would find that the XML contains the JSS IDs, names, and UUIDs (or UDIDs…) of all the Macs that meet the criteria. While we could begin making API calls on each of these computer objects to retrieve and parse data, there’s a more efficient method.

We’re going to update the advanced search with criteria for both the username and the computer name to narrow the results even further, but we’re also going to add in the display fields we want values returned for. The criteria for an advanced search have priorities associated with them, so we need to make sure the second criterion’s priority is incremented.

PUTxml2 = '''<advanced_computer_search>
    <criteria>
        <criterion>
            <name>Username</name>
            <priority>0</priority>
            <and_or>and</and_or>
            <search_type>is</search_type>
            <value>{VARIABLE}</value>
        </criterion>
        <criterion>
            <name>Computer Name</name>
            <priority>1</priority>
            <and_or>and</and_or>
            <search_type>like</search_type>
            <value>{VARIABLE}</value>
        </criterion>
    </criteria>
    <display_fields>
        <display_field>
            <name>JSS Computer ID</name>
        </display_field>
        <display_field>
            <name>Asset Tag</name>
        </display_field>
        <display_field>
            <name>Computer Name</name>
        </display_field>
        <display_field>
            <name>Username</name>
        </display_field>
        <display_field>
            <name>Operating System</name>
        </display_field>
        <display_field>
            <name>Model</name>
        </display_field>
        <display_field>
           <name>Serial Number</name>
        </display_field>
    </display_fields>
</advanced_computer_search>'''

PUT https://myjss.com/JSSResource/advancedcomputersearches/id/100 PUTxml2
    Returns 201 status code and XML with the "<id>" of the updated resource.

If you ran the advanced search in the JSS web app you would see each of those “<display_fields>” as their own column with the matching computer records as rows. In the API we can access the data in the same way by making a GET request to “../JSSResource/computerreports/id/100”. The “<computer_reports>” resource provides the data for the selected “<display_fields>” of an advanced computer search. We can use the same ID for both resources.

GET https://myjss.com/JSSResource/computerreports/id/100
    Returns 200 status code and XML of the resource containing the values for each "<display_field>" of the matching advanced computer search.
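Parsing those values out in Bash might look something like this sketch. I’m assuming the report XML wraps each display field value in a tag named after the field (e.g. “<Serial_Number>”); check the actual response from your own JSS first:

report=$(curl https://myjss.com/JSSResource/computerreports/id/100 --user "$user:$pass")
# Print every serial number in the report, one per line (tag name assumed above)
echo "$report" | grep -o '<Serial_Number>[^<]*</Serial_Number>' | awk -F'<Serial_Number>|</Serial_Number>' '{print $2}'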

You can rinse and repeat the above steps any number of times, parse the data and pipe it out to another system or into a more human-readable format. Once we are done with the advanced search and computer report we can clean up by deleting the resource from the JSS.

DELETE https://myjss.com/JSSResource/advancedcomputersearches/id/100
    Returns 200 status code and XML with the "<id>" of the deleted resource and a "<successful>" tag.

 

Wrap Up

Hopefully the example we walked through and the code snippets will help you start experimenting more with the JSS REST API and leveraging it to automate more of what you do. While I chose an example based upon advanced computer searches, I’ve heard of amazing workflows involving:

  • Automated license management (../JSSResource/licensedsoftware)
  • Package deployment (../JSSResource/packages, ../JSSResource/policies)
  • Security (../JSSResource/computercommands, ../JSSResource/mobiledevicecommands)

If you have any feedback on this post, or feel that something hasn’t been explained clearly, please reach out to me.


Bryson’s Bash Favorites

We recently had our JAMF Nation User Conference here in Minneapolis. I spent a lot of time with a lot of brilliant sysadmins from around the world. After three nights of mingling it became pretty clear that despite how much I enjoy talking about technology solutions (and I will continue to talk as people hand me Martinis), I don’t feed a lot of that back into the Mac admin community, and it was made clear that I should give it a try.

So, this will be my first installment of three posts sharing some of what I’ve learned. This first will focus on Bash scripting. The second will serve as an introduction to Python for the Bash scripter (let me be up front about this: Python is so hot…) and some of the benefits that it offers to us as administrators. The last post will be entirely about the JSS API, interacting with the data in both Bash and Python as well as some examples of what you can accomplish with it.

So, on to Bash…

For the last two years as a Mac administrator I’ve learned quite a bit on the subject of shell scripting and have been shown a wealth of handy tricks that have helped refine my skill (there are a lot of wizards at JAMF). In this post I want to show some of my favorite scripting solutions and techniques that have become staples of my writing style. Now, if you’re reading this I’m going to assume you’re already pretty familiar with using a Mac via the command line and basic scripting (if the word shebang doesn’t conjure up the image of a number sign with an exclamation mark, you might not be ready).

Error Checking and Handling

As admins we write a lot of code that interacts with the Mac in ways that can certainly ruin a person’s day (if not their life, or so they would claim).  When executing commands there is a variable always readily available to view:

USS-Enterprise:~ brysontyrrell$ echo $?
0

The ‘$?’ variable represents the exit status of the last command that was run. If there were no errors you receive a zero (0) in return. For anything else there will be a value greater than zero which represents your error code. Visit the manpage of any number of commands on your Mac and you will usually find a section devoted to what each error code represents (if not, we have Google). Error codes are critical for figuring out what went wrong.

In many cases we might need to kill a script if a command failed.  After all, if data is being manipulated or moved around there’s not a whole lot of sense in executing the remainder of the script when some key actions did not perform correctly.  We can do this with a simple IF statement that triggers an ‘exit’:

cp /path/to/source /path/to/destination
if [ $? -ne 0 ]; then
    exit
fi

Now, this will exit the script if our copy operation failed, but it isn’t that great.  The script will exit, but we won’t have any tangible information about how or why.  Let’s add a few things into this to make it more helpful to us:

cp /path/to/source /path/to/destination
# Capture the code immediately; '$?' is reset by every command (even the '[' test)
resultCode=$?
if [ $resultCode -ne 0 ]; then
    echo "There was a problem with the copy operation. Error code $resultCode"
    exit 1
fi

Now we’re getting somewhere.  With this the script will not only output onto the Terminal that there was an error, but it will return the error code and also exit our script with a value greater than zero which will be reported as a failure!  We can have error codes that mean different things. ‘1’ is just the default.  Any numerical value can be used to represent an error (or types of error if we’re lumping them together) and its meaning can be recorded either within the script or in other documentation:

# Exit 1 for general error
# Exit 10 for copy operation error
cp /path/to/source /path/to/destination
resultCode=$?
if [ $resultCode -ne 0 ]; then
    echo "There was a problem with the copy operation. Error code $resultCode"
    exit 10
fi

Using ‘echo’ for the output is great if we’re running the script manually, but what we’re writing will be run remotely and we will not be watching it execute live.  We’re going to need something that will allow us to go back at a later time to review what transpired:

cp /path/to/source /path/to/destination
resultCode=$?
if [ $resultCode -eq 0 ]; then
    log "The copy operation was successful."
else
    log "There was a problem with the copy operation. Error code $resultCode" 10
fi

The log() call you see here is actually a function that I use for almost everything I write. We’re going to cover what exactly it does a little later, but the gist of the above script is that it outputs a message upon both the success and the failure of the command, and exits the script after an error. Never underestimate how important it is that your scripts are telling you what they are doing. Your life will be better for it.

Operators for Shortcuts

The above examples all allow you to execute multiple actions in response to the success or failure of a command. Sometimes we might only need to trigger one command in response to a result. We can use operators to achieve this effect without writing out an entire IF statement:

# A copy operation using an IF statement to execute a file removal
cp /path/to/source /path/to/destination
if [ $? -eq 0 ]; then
    rm /path/to/source
fi

# The above operation using the '&&' operator
cp /path/to/source /path/to/destination && rm /path/to/source

In the second example the source file we are copying is deleted so long as the ‘cp’ command left of the ‘AND’ operator returned zero. If there had been an error then the code on the right side won’t execute. Both examples achieve the same result, but using the operator acts as a shortcut and cuts down on the amount of code you need to write. If you need to trigger a command when the result is not zero, turn to the ‘OR’ operator:

# A copy operation using an IF statement to exit upon failure
cp /path/to/source /path/to/destination
if [ $? -ne 0 ]; then
    exit 10
fi

# The same copy operation but using '||' to trigger the 'exit'
cp /path/to/source /path/to/destination || exit 10
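The two operators can even be chained, though be aware that in a chain like this the ‘||’ branch also fires if the command after ‘&&’ fails:

# Copy, then remove the source on success; exit 10 if either the copy or the removal fails
cp /path/to/source /path/to/destination && rm /path/to/source || exit 10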

It’s a TRAP

This one is a personal favorite. Usage of the Bash builtin ‘trap’ is actually pretty new to me, and it is hands down one of the coolest (if you’re like me and think scripting is, you know, cool) things I’ve seen. A ‘trap’ gives you the ability to define actions that are performed when your script terminates or when commands throw errors! Let me demonstrate with a very basic example:

# Here we define the function that will contain our commands for an 'exit'
onExit() {
rm -rf /private/tmp/tempfiles
}

# Here we set the 'trap' to execute upon an EXIT signal
trap onExit EXIT

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
done

# This 'exit' command will send an EXIT signal which will trigger the 'trap'
exit 0

As you can see in the above example, ‘trap’ is very easy to use.  The syntax for ‘trap’ is:

trap 'command(s)' signal(s)

We created an onExit() function containing the actions we wanted to perform. This became the command in the ‘trap’ line. Once triggered, the temporary directory that we were copying files from is automatically purged when the script completes. This makes cleanup much simpler and easier on us. It also allows far more control over the state of the system when an error requires us to kill the script mid-process. I had mentioned in my introduction that we could have traps for both terminations and errors, did I not? Let’s expand upon that first example and make it a bit more robust:

onExit() {
rm -rf /private/tmp/tempfiles
}

# This function contains commands we want to execute every time there is an ERROR
onError() {
errorCode=$?
cmdLine=$1
cmdName=$2
echo "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}"
exit 1
}

trap onExit EXIT

# Here we introduce a second 'trap' for ERRORs
trap 'onError $LINENO "$BASH_COMMAND"' ERR

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
done

exit 0

This one certainly has a lot more going on. The way I approach these ‘traps’ is pretty simple: my EXIT performs a cleanup of any working files while my ERROR handles outputting the information I will need to determine what went wrong. In this example I have an ‘exit’ command included inside the onError() function so the cleanup onExit() function is still called on the first error. That’s not a practice I’m recommending, but I am showing that it is an option. There are plenty of cases out there where you would want the script to continue on even if an error occurs in the middle of a copy operation (user account migration, anyone?). Those are the times when you will want to be explicit about where in your script certain errors trigger an ‘exit.’

Let’s break down that onError() function:

onError() {
# Our first action is to capture the error code of the command (remember, this changes after EVERY command executed)
errorCode=$?
# This variable is from $LINENO which tells us the line number the command resides on in the script
cmdLine=$1
# The last variable is from $BASH_COMMAND which is the name of the command itself that gave an error
cmdName=$2
# Our tidy statement here puts it all together in a human-readable form we can use to troubleshoot
echo "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}"
exit 1
}

# In this 'trap' we call not just our function, but we also pass two parameters along to it
# The $LINENO and $BASH_COMMAND variables are called 'Internal Variables' to the Bash shell
trap 'onError $LINENO "$BASH_COMMAND"' ERR

We’re going to make this onError() function even more powerful a little later by revisiting the log() function I had mentioned. Before that, let’s go back to the onExit() function. This ‘trap’ is ideally where we want to perform all of our cleanup actions, and the basic example I gave is wiping out a temporary directory of working files. While our scratch space is removed in this process it does not address any actions we may have made in other areas of the system. So, do we want to write all of that into the onExit() function even if some of it may not be relevant to when the script terminated?

I’m a big fan of the idea: “If I don’t HAVE to do this, then I don’t want to.”  The meaning of this is I don’t want to execute commands on a system (especially when I’m masquerading around as root) if they’re unnecessary.  We can write our onExit() function to behave following that ideology.  I don’t quite remember where I first saw this on the internet, but it was damned impressive:

# This is an array data type
cleanup=()

onExit() {
# Once again we're capturing our error code right away in a unique variable
exitCode=$?
rm -rf /private/tmp/tempfiles
# If we exit with a value greater than zero and the 'cleanup' array has values we will now execute them
if [ $exitCode -ne 0 ] && [ "${#cleanup[@]}" -gt 0 ]; then
    for i in "${cleanup[@]}"; do
        # The 'eval' builtin takes a string as an argument (executing it)
        eval "$i"
    done
fi
echo "EXIT: Script error code: $exitCode"
}

onError() {
errorCode=$?
cmdLine=$1
cmdName=$2
echo "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}"
exit 1
}

trap onExit EXIT
trap 'onError $LINENO "$BASH_COMMAND"' ERR

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
    # After each successful copy operation we add a 'rm' command for that file into our 'cleanup' array
    # (double quotes so "${file##*/}" expands now; the stored path stays quoted for 'eval')
    cleanup+=("rm '/path/to/destination/${file##*/}'")
done

exit 0

We have now transformed our onExit() function into one giant UNDO command. The function will always remove the temporary working directory (which we always want, no matter what the exit status is) but the IF statement within it will only run the additional commands out of our handy ‘cleanup’ array when the script exits with an error. Effectively, unless the script successfully completes the entire copy operation it will, on the first error, remove every file that did make it into the destination. This leaves the system pretty much in the same state as it was before our script ran. We can take this concept further in much larger scripts by adding new commands into the ‘cleanup’ array as we complete sections of our code.

Logging is Awesome

I’m finally getting around to explaining that log() function from earlier. Logs are fantastic for troubleshooting as they generally contain a lot of data that helps point us towards the source of an issue. You can approach logging of your own scripts in two ways: append an existing log or use your own customized one. In my case, I append all of my script log output into the Mac’s system.log using the ‘logger’ command. This command allows you to do some pretty cool things (like message priority), but my use is fairly simple.

log () {
if [ -z "$2" ]; then
    logger -t "it-logs: My Script" "${1}"
else
    logger -t "it-logs: My Script" "${1}"
    logger -t "it-logs: My Script" "Exiting with error code $2"
    exit $2
fi
}

log "This is a log entry"

***OUTPUT IN SYSTEM.LOG AS SHOWN IN CONSOLE.APP***
Nov 10 12:00:00 USS-Enterprise.local it-logs: My Script[1701]: This is a log entry

You’re probably piecing together how this function works. The ‘-t’ flag in the command creates a tag for the log entry. In my case I have a universal prefix for the tag I use in all of my scripts (here I’m using ‘it-logs:’ but it’s similar) and then I follow it with the name of the script/package for easy reference (you read right: everything I write about in this post I use for preinstall and postinstall scripts in my packages as well). The tagging allows me to grab the system.log from a machine and filter all entries containing ‘it-logs’ to see everything of mine that has executed, or I can narrow it down to a specific script and/or package by writing the full tag. It’s really nice.
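Pulling those entries back out later is a quick filter (a sketch, assuming the log lives at the default /var/log/system.log location):

# Everything of mine that has executed
grep 'it-logs' /var/log/system.log

# Narrowed down to one specific script
grep 'it-logs: My Script' /var/log/system.log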

Right after the tag inside the brackets is the process ID, and then we have our message. If you scroll back up to the example where I used the log() function you’ll see that in code that triggered on a failure I included a ’10’ as a second parameter. That is the error code to use with an ‘exit.’ If present, log() will write the message first, then write a second entry stating that the script is exiting with an error code, and then ‘exit’ with that code (triggering our onExit() trap function).

If you want to maintain your own log, instead of writing into the system.log, you can easily do so with a similar function:

# You must ensure that the file you wish to write to exists
touch /path/to/log.log

log () {
if [ -z "$2" ]; then
    # Here 'echo' commands will output the text message that is '>>' appended to the log
    echo $(date +"%Y %m %d %H:%M")" it-logs: My Script: ${1}" >> /path/to/log.log
else
    echo $(date +"%Y %m %d %H:%M")" it-logs: My Script: ${1}" >> /path/to/log.log
    echo $(date +"%Y %m %d %H:%M")" it-logs: My Script: Exiting with error code $2" >> /path/to/log.log
    exit $2
fi
}

log "This is a log entry"

***OUTPUT IN LOG.LOG AS SHOWN IN CONSOLE.APP***
2013 11 10 20:38 it-logs: My Script: This is a log entry

The end result is just about the same, and with a .log extension it will automatically open in the Console.app and still use the filtering.

Now I’m going to take the copy operation from above and write in the log() function so you can see how the whole package fits together:

log () {
if [ -z "$2" ]; then
    logger -t "it-logs: My Script" "${1}"
else
    logger -t "it-logs: My Script" "${1}"
    logger -t "it-logs: My Script" "Exiting with error code $2"
    exit $2
fi
}

cleanup=()

onExit() {
exitCode=$?
log "CLEANUP: rm -rf /private/tmp/tempfiles"
rm -rf /private/tmp/tempfiles
if [ $exitCode -ne 0 ] && [ "${#cleanup[@]}" -gt 0 ]; then
    for i in "${cleanup[@]}"; do
        log "ADD-CLEANUP: $i"
        eval "$i"
    done
fi
}

onError() {
errorCode=$?
cmdLine=$1
cmdName=$2
log "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}" 1
}

trap onExit EXIT
trap 'onError $LINENO "$BASH_COMMAND"' ERR

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
    log "COPY: ${file} complete"
    # Double quotes so "${file##*/}" expands now; the stored path stays quoted for 'eval'
    cleanup+=("rm '/path/to/destination/${file##*/}'")
done

exit 0

Interacting With the User Stuff

There are a ton of examples out there for grabbing the name of the currently logged in user and finding their existing home directory.  Both items are very important when we’re executing our scripts as the root user.  To get the name of the logged in user I’ve found these three methods:

# Though the simplest I have found that this method does not work in a package preinstall/postinstall script
USS-Enterprise:~ brysontyrrell$ echo $USER
brysontyrrell

# This one is also very simple but it relies upon a command that may not be present in future OS X builds
USS-Enterprise:~ brysontyrrell$ logname
brysontyrrell

# The following is used pretty widely and very solid (the use of 'ls' and 'awk' nearly future-proofs this method)
USS-Enterprise:~ brysontyrrell$ ls -l /dev/console | awk '{print $3}'
brysontyrrell

One step beyond this is to then find the user’s home directory so we can move and/or manipulate data in there.  One piece of advice I’ve been given is to never assume I know what the environment is.  Users are supposed to be in the ‘/Users/’ directory, but that doesn’t mean they are.  If you’ve never played around much with the directory services command line utility (‘dscl’), I’m happy to introduce you:

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/NFSHomeDirectory:/ {print $2}'
/Users/brysontyrrell

‘dscl’ is incredibly powerful and gives us easy access to a lot of data concerning our end-users’ accounts. In fact, you can take that above command and change out the regular expression ‘awk’ is using to pull out all sorts of data individually:

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/GeneratedUID:/ {print $2}'
123A456B-7DE8-9101-1FA1-2131415B16C1

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/UniqueID:/ {print $2}'
501

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/PrimaryGroupID:/ {print $2}'
20

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/UserShell:/ {print $2}'
/bin/bash

Alternative Substitution

Kind of an odd header, but you’ll get it in a moment. One of the bread ‘n butter techniques of scripting is to take the output of one command and capture it into a variable. This process is known as “command substitution”: the command is replaced by its own output, which we can then store. The traditional way of doing this is to enclose the command you are capturing in `backticks`. Instead of using backticks, use a $(dollar and parenthesis) so your editor of choice still highlights the command syntax correctly.

Check out this example which will pull out the type of GPU of the Mac:

gpuType=`system_profiler SPDisplaysDataType | awk -F': ' '/Chipset Model/ {print $2}' | tail -1`
gpuType=$(system_profiler SPDisplaysDataType | awk -F': ' '/Chipset Model/ {print $2}' | tail -1)

Functionally, both of these statements are identical, but now as we write our scripts we have an easier time going back and identifying what is going on at a glance.  Let’s take two of the examples from above for obtaining information about the logged in user and write them the same way for a script:

userName=$(ls -l /dev/console | awk '{print $3}')
userHome=$(dscl . read /Users/"${userName}" | awk '/NFSHomeDirectory:/ {print $2}')

cp /private/tmp/tempfiles/somefile "${userHome}"/Library/Preferences/