Update App Info – New Script on GitHub

As usually happens with me, I went digging around in some old folders where I had stashed a bunch of old (frankly, quite ugly) code and came across something I had done as a proof of concept.

The issue this tries to address is that the JSS never updates a posted app after you have created it.  So “Microsoft Word for iPad” became “Microsoft Word” several versions later, and your users will see this in the App Store, but in Self Service it still has the old name, version number, description and icon.

The original script only dealt with version numbers, to address the problem of the Self Service web clip sending false positives for app updates (for admins who chose to show them).  What happened was that the version installed on a device wouldn’t match the one in the JSS (it would, in fact, usually be at a higher version) and the end user would see an update for that app that didn’t exist.

I don’t know if many admins do that any more, or use the Self Service web clip for that matter, but the problem of inaccurate app names and description text still remained for Self Service.

That is what this takes care of.

https://github.com/brysontyrrell/Update-App-Info

Currently the script doesn’t handle the icon portion due to an issue with the API I encountered.  I’ll be adding that functionality in once I’m sure it will work without error.  It will, however, run on Mac/Linux/Windows so you have flexibility in how you go about using it.  Check out the README for more details.

Quick note: If you are using anything I’ve posted to GitHub and run into problems please use the ‘Issues’ feature to let me know and I’ll see about making updates to fix it.

The JSS REST API for Everyone, part 2

I had a really positive response to the original article I posted on the JSS API, and the good people over at Macbrained.org even asked for a copy to be posted up on their site.  It was a lot of fun to write up and I decided to expand upon the first post with items I skipped or did not go into detail about.  I’m also compiling both articles into a single document with a more cohesive order and fully scripted versions of the examples that are covered.

 

JSS User Accounts

If you plan to leverage the JSS API for automation or other tasks, you will want to set up unique user accounts for each script/service/app that needs access.  In version 8.x of the JSS this was a little more straightforward than in version 9.x.  In both cases, the JSS objects that you can interact with have CRUD permissions to enable or disable for the user account you are working with.

CRUD means: Create, Read, Update and Delete.  They are analogous to the API methods we have already covered: POST, GET, PUT and DELETE.

In 8.x, API permissions were separate from the web interface permissions.  You could have API accounts with limited access to certain JSS objects and no access to the web interface.  The 8.x account information API page also only displays the CRUD permissions that apply to that object.

JSS 8x API Settings

In 9.x, the permissions for all JSS objects are unified.  Setting the CRUD permissions for an object grants both API and web interface access for the account.  Any account you create for API access will also be able to log into the JSS web interface.  Another effect is that the full CRUD list is shown for every object.  You will need to rely on the API Documentation to determine what permissions are applicable to the objects you are interacting with.

JSS 9x API Settings

 

Modifying and Building XML using ElementTree (Python)

If your workflow requires modifying JSS objects you will need to drop Bash for, or at least augment it with, another scripting language that has a solid XML parser available.  In my examples here we will be using Python and the ‘xml.etree.ElementTree’ library.

This example will be modifying a static computer group’s memberships.  As I mentioned in our earlier example with advanced computer searches, a PUT is not an additive action (or subtractive for that matter).  If you make a PUT request of only one item to a JSS object that contained a list of ten you will end up replacing them all.  To add to the existing list you must include your new item in a new list containing the existing items and then submit it.

The basic workflow for updating a JSS object is to GET the original XML, parse out the section(s) we will be updating, insert or remove the XML elements we want, and then PUT the result back into the JSS at the same JSS ID.

GET https://myjss.com/JSSResource/computergroups/id/123
    Returns 200 status code and XML of the resource.

<?xml version="1.0" encoding="UTF-8"?>
<computer_group>
    <id>123</id>
    <name>The Fleet</name>
    <is_smart>false</is_smart>
    <site>
        <id>-1</id>
        <name>None</name>
    </site>
    <criteria>
        <size>0</size>
    </criteria>
    <computers>
        <size>3</size>
        <computer>
            <id>1</id>
            <name>USS-Enterprise</name>
            <mac_address>NC:C1:70:1A:C1:1A</mac_address>
            <alt_mac_address/>
            <serial_number>Z00AB1XYZ2QR</serial_number>
        </computer>
        <computer>
            <id>2</id>
            <name>USS-Excelsior</name>
            <mac_address>NC:C2:00:01:1A:2B</mac_address>
            <alt_mac_address/>
            <serial_number>Z00CD2XYZ3QR</serial_number>
        </computer>
        <computer>
            <id>3</id>
            <name>USS-Defiant</name>
            <mac_address>NC:C1:76:41:B2:B3</mac_address>
            <alt_mac_address/>
            <serial_number>Z00EF3XYZ4QR</serial_number>
        </computer>
    </computers>
</computer_group>

The computer group object in the JSS contains a lot of information we won’t need for the update we will be performing. We need to convert the XML string we received in the response into an ElementTree object that we can work with.

computergroup = etree.fromstring(response.read())

To make some of our interactions easier we’re going to make another ElementTree object that is just the ‘computers’ node of the ‘computergroup’ object we just created.

computers = computergroup.find('computers')

To better visualize what that did, here is how the ‘computers’ object would print out:

<computers>
    <computer>
        <id>1</id>
        <name>USS-Enterprise</name>
        <mac_address>NC:C1:70:1A:C1:1A</mac_address>
        <alt_mac_address/>
        <serial_number>Z00AB1XYZ2QR</serial_number>
    </computer>
    <computer>
        <id>2</id>
        <name>USS-Excelsior</name>
        <mac_address>NC:C2:00:01:1A:2B</mac_address>
        <alt_mac_address/>
        <serial_number>Z00CD2XYZ3QR</serial_number>
    </computer>
    <computer>
        <id>3</id>
        <name>USS-Defiant</name>
        <mac_address>NC:C1:76:41:B2:B3</mac_address>
        <alt_mac_address/>
        <serial_number>Z00EF3XYZ4QR</serial_number>
    </computer>
</computers>

As you can see, we are now only interacting with the ‘computers’ node and its children.  One of the great things about using the ElementTree library is that we can divide up a large XML source into multiple parts, like breaking out the computers above, but all of the changes we make will be reflected in the root object.

For example, if we wanted to find and delete the computer named ‘USS-Enterprise’ we could use the following code on the ‘computers’ object:

for computer in computers.findall('computer'):
    if computer.find('name').text == 'USS-Enterprise':
        computers.remove(computer)
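To see the whole sequence in one place, here is a self-contained sketch (using an abbreviated copy of the sample XML, trimmed down to just IDs and names):

```python
import xml.etree.ElementTree as etree

group_xml = '''<computer_group>
    <computers>
        <computer><id>1</id><name>USS-Enterprise</name></computer>
        <computer><id>2</id><name>USS-Excelsior</name></computer>
    </computers>
</computer_group>'''

computergroup = etree.fromstring(group_xml)
computers = computergroup.find('computers')

# Removing a child from 'computers' also removes it from 'computergroup',
# because 'computers' is a reference into the same tree
for computer in computers.findall('computer'):
    if computer.find('name').text == 'USS-Enterprise':
        computers.remove(computer)

names = [c.find('name').text for c in computergroup.iter('computer')]
```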

Now if we print the output of the ‘computergroup’ object it will not include any computers named ‘USS-Enterprise’:

<computer_group>
    <id>123</id>
    <name>The Fleet</name>
    <is_smart>false</is_smart>
    <site>
        <id>-1</id>
        <name>None</name>
    </site>
    <criteria>
        <size>0</size>
    </criteria>
    <computers>
        <size>3</size>
        <computer>
            <id>2</id>
            <name>USS-Excelsior</name>
            <mac_address>NC:C2:00:01:1A:2B</mac_address>
            <alt_mac_address/>
            <serial_number>Z00CD2XYZ3QR</serial_number>
        </computer>
        <computer>
            <id>3</id>
            <name>USS-Defiant</name>
            <mac_address>NC:C1:76:41:B2:B3</mac_address>
            <alt_mac_address/>
            <serial_number>Z00EF3XYZ4QR</serial_number>
        </computer>
    </computers>
</computer_group>

We can use the same logic with the other key identifiers in our static group: JSS ID, serial number and MAC address.

Let’s take the truncated membership and PUT it back into the JSS, removing the computer ‘USS-Enterprise’ from the static group.  We won’t be using the XML retrieved by our GET for this.  Instead, we will use ElementTree to build a new XML object and then copy the computers over.

NewXML = etree.Element('computer_group')
NewXML_computers = etree.SubElement(NewXML, 'computers')

We created the root element, ‘computer_group’, in the first line and then created a node, ‘computers’, in the second line. If we print out the ‘NewXML’ object it will look like this:

<computer_group>
    <computers/>
</computer_group>

Now that we have our XML structure we can use a simple for loop to iterate over each computer in the source XML and copy them over. The result will be an XML object containing only the computers we want to update the JSS computer group with.

for computer in computers.iter('computer'):
    NewXML_computers.append(computer)

<computer_group>
    <computers>
        <computer>
            <id>2</id>
            <name>USS-Excelsior</name>
            <mac_address>NC:C2:00:01:1A:2B</mac_address>
            <alt_mac_address/>
            <serial_number>Z00CD2XYZ3QR</serial_number>
        </computer>
        <computer>
            <id>3</id>
            <name>USS-Defiant</name>
            <mac_address>NC:C1:76:41:B2:B3</mac_address>
            <alt_mac_address/>
            <serial_number>Z00EF3XYZ4QR</serial_number>
        </computer>
    </computers>
</computer_group>

Now we can output this into a string and make a PUT request to the JSS.  Unless writing to a file on disk, ElementTree will not include our XML declaration line.  As a workaround, we can keep the XML declaration in another string variable and then concatenate it with the output XML string.

xmlDeclaration = '<?xml version="1.0" encoding="UTF-8"?>'
PUTxml = xmlDeclaration + etree.tostring(NewXML)
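One wrinkle worth noting: if you run these examples under Python 3, tostring() returns bytes by default, so pass encoding='unicode' (or decode the result) before concatenating. A minimal sketch:

```python
import xml.etree.ElementTree as etree

NewXML = etree.Element('computer_group')
etree.SubElement(NewXML, 'computers')

xmlDeclaration = '<?xml version="1.0" encoding="UTF-8"?>'
# encoding='unicode' makes tostring() return a str instead of bytes
PUTxml = xmlDeclaration + etree.tostring(NewXML, encoding='unicode')
```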

And here is our final XML:

<?xml version="1.0" encoding="UTF-8"?>
<computer_group>
    <computers>
        <computer>
            <id>2</id>
            <name>USS-Excelsior</name>
            <mac_address>NC:C2:00:01:1A:2B</mac_address>
            <alt_mac_address/>
            <serial_number>Z00CD2XYZ3QR</serial_number>
        </computer>
        <computer>
            <id>3</id>
            <name>USS-Defiant</name>
            <mac_address>NC:C1:76:41:B2:B3</mac_address>
            <alt_mac_address/>
            <serial_number>Z00EF3XYZ4QR</serial_number>
        </computer>
    </computers>
</computer_group>

Now we’re going to make the PUT request and update the static computer group’s membership.

PUT https://myjss.com/JSSResource/computergroups/id/123 PUTxml
    Returns 201 status code and XML with the "<id>" of the updated resource.
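If you want to see what the request itself might look like in script form, here is a rough sketch using Python 3’s urllib.request (the URL, account name and password are placeholders, and the request is built but deliberately not sent):

```python
import base64
import urllib.request

url = 'https://myjss.com/JSSResource/computergroups/id/123'  # placeholder JSS URL
PUTxml = ('<?xml version="1.0" encoding="UTF-8"?>'
          '<computer_group><computers/></computer_group>')

# Build a PUT request carrying the XML payload
request = urllib.request.Request(url, data=PUTxml.encode('utf-8'), method='PUT')
request.add_header('Content-Type', 'text/xml')

# Basic auth header built from a placeholder API account
credentials = base64.b64encode(b'apiuser:apipass').decode('ascii')
request.add_header('Authorization', 'Basic ' + credentials)

# urllib.request.urlopen(request) would submit the PUT; it is not executed here
```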

This code allowed us to remove a computer from a static group.  What if you wanted to add a computer?  The concept is the same, but now we’ll build a second XML object to insert into our XML for the PUT.  To make this a little more interesting we’ll get the information about the computer via the API as well.

GET https://myjss.com/JSSResource/computergroups/id/123
    Returns 200 status code and XML of the resource.

computergroup = etree.fromstring(response.read())

computers = computergroup.find('computers')

NewXML = etree.Element('computer_group')
NewXML_computers = etree.SubElement(NewXML, 'computers')

for computer in computers.iter('computer'):
    NewXML_computers.append(computer)

Now we will retrieve the information of the computer to add to the group and parse that into an XML object to add into our ‘NewXML’. The computer object we’re creating is just like the ‘NewXML’ object but with more sub-elements. We are also assigning values to the elements that we’re creating, parsed from the returned JSS computer XML.

GET https://myjss.com/JSSResource/computers/id/5
    Returns 200 status code and XML of the resource.
Mac = etree.fromstring(response.read())
NewMember = etree.Element('computer')
NewMember_id = etree.SubElement(NewMember, 'id')
NewMember_id.text = Mac.find('general/id').text
NewMember_name = etree.SubElement(NewMember, 'name')
NewMember_name.text = Mac.find('general/name').text
NewMember_macadd = etree.SubElement(NewMember, 'mac_address')
NewMember_macadd.text = Mac.find('general/mac_address').text
NewMember_serial = etree.SubElement(NewMember, 'serial_number')
NewMember_serial.text = Mac.find('general/serial_number').text

NewXML_computers.append(NewMember)

PUTxml = xmlDeclaration + etree.tostring(NewXML)

<?xml version="1.0" encoding="UTF-8"?>
<computer_group>
    <computers>
        <computer>
            <id>1</id>
            <name>USS-Enterprise</name>
            <mac_address>NC:C1:70:1A:C1:1A</mac_address>
            <alt_mac_address/>
            <serial_number>Z00AB1XYZ2QR</serial_number>
        </computer>
        <computer>
            <id>2</id>
            <name>USS-Excelsior</name>
            <mac_address>NC:C2:00:01:1A:2B</mac_address>
            <alt_mac_address/>
            <serial_number>Z00CD2XYZ3QR</serial_number>
        </computer>
        <computer>
            <id>3</id>
            <name>USS-Defiant</name>
            <mac_address>NC:C1:76:41:B2:B3</mac_address>
            <alt_mac_address/>
            <serial_number>Z00EF3XYZ4QR</serial_number>
        </computer>
        <computer>
            <id>5</id>
            <name>USS-Constitution</name>
            <mac_address>NC:C1:70:0C:00:00</mac_address>
            <serial_number>Z00FE4XYZ5QR</serial_number>
        </computer>
    </computers>
</computer_group>

PUT https://myjss.com/JSSResource/computergroups/id/123 PUTxml
    Returns 201 status code and XML with the "<id>" of the updated resource.
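The element-building half of that flow can be exercised on its own; here is a sketch that substitutes a literal computer record for the API response:

```python
import xml.etree.ElementTree as etree

# Stand-in for the XML returned by GET .../computers/id/5
computer_xml = '''<computer>
    <general>
        <id>5</id>
        <name>USS-Constitution</name>
        <mac_address>NC:C1:70:0C:00:00</mac_address>
        <serial_number>Z00FE4XYZ5QR</serial_number>
    </general>
</computer>'''

Mac = etree.fromstring(computer_xml)

# Build the new member element and copy each value from the record
NewMember = etree.Element('computer')
for field in ('id', 'name', 'mac_address', 'serial_number'):
    sub = etree.SubElement(NewMember, field)
    sub.text = Mac.find('general/' + field).text
```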

 

Extension Attributes for Power Admins

While we spent a good amount of time describing how to update a static computer group using the API, I will say this is probably a workflow you should avoid (though you may find situations where maintaining a static group this way is the correct solution).  It makes for a great example, and good practice, but there are better ways to achieve the same goal.

Consider the extension attribute.  An extension attribute is capable of returning or storing additional data beyond the standard inventory report.  You can create extension attributes to be populated by LDAP attribute, pop-up menu, script or simple text input.  These values become a part of a device’s inventory record and can be used for criteria in smart groups and advanced searches.

Let’s look at a possible scenario to apply this to.  We have an extension attribute labeled “App Store” that is displayed with the location information for a mobile device.  There are smart mobile device groups for each country your organization has a VPP account in, and the extension attribute is a pop-up menu with the criteria for populating those groups.  Tied to those smart groups are country-specific apps and ebooks for redemption.

As a part of the on-boarding process your users may select the App Store they will be receiving these VPP codes from.  However you wish to present this, you can leverage the JSS API to dynamically populate the available options based upon what is defined by the extension attribute you created:

GET https://myjss.com/JSSResource/mobiledeviceextensionattributes/id/1
    Returns 200 status code and XML of the resource.
<mobile_device_extension_attribute>
    <id>1</id>
    <name>App Store</name>
    <description>Used for scoping content from a specific country's App Store.</description>
    <data_type>String</data_type>
    <input_type>
        <type>Pop-up Menu</type>
        <popup_choices>
            <choice>United States</choice>
            <choice>Canada</choice>
            <choice>Great Britain</choice>
            <choice>Germany</choice>
            <choice>Hong Kong</choice>
            <choice>Australia</choice>
        </popup_choices>
    </input_type>
    <inventory_display>User and Location</inventory_display>
</mobile_device_extension_attribute>

Parse out the extension attribute’s ID, name, data type and the section for the ‘popup_choices’.  Make the choices a drop list for your user to select from.  When they go to submit you can take their selection and the other details and pipe them into XML that will update their device’s record.
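As a sketch of that parsing step, the choices can be pulled into a Python list with ElementTree (shown here against an abbreviated copy of the XML above):

```python
import xml.etree.ElementTree as etree

ea_xml = '''<mobile_device_extension_attribute>
    <id>1</id>
    <name>App Store</name>
    <data_type>String</data_type>
    <input_type>
        <type>Pop-up Menu</type>
        <popup_choices>
            <choice>United States</choice>
            <choice>Canada</choice>
        </popup_choices>
    </input_type>
</mobile_device_extension_attribute>'''

ea = etree.fromstring(ea_xml)
ea_id = ea.find('id').text
ea_name = ea.find('name').text
# The path syntax walks input_type -> popup_choices -> each choice
choices = [c.text for c in ea.findall('input_type/popup_choices/choice')]
```

The resulting list is what you would feed into the drop-down presented to the user.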

PUTxml = '''<mobile_device>
    <extension_attributes>
        <extension_attribute>
            <id>1</id>
            <name>App Store</name>
            <type>String</type>
            <value>United States</value>
        </extension_attribute>
    </extension_attributes>
</mobile_device>'''
PUT https://myjss.com/JSSResource/mobiledevices/id/1 PUTxml
    Returns 201 status code and XML with the "<id>" of the updated resource.

This PUT will update the mobile device’s “Last Inventory Update” timestamp, causing the smart groups to recalculate their memberships.  The end result is that, through an API action, you have made tailored content immediately available to a user’s device without waiting for the 24-hour mobile device update cycle.  The same is true for computers.

 

Wrap Up, part 2

I hope to have a neatly formatted copy of the two posts together in PDF format soon.  It was after publishing the first post and chatting with people who had questions that I decided to write a follow-up that went into more advanced territory.  Before I wrap up the PDF I may yet have more to throw in.  If you have any feedback, as before, please reach out to me!  You’ll find I’m pretty active on Twitter.

Bryson’s Bash Favorites

We recently had our JAMF Nation User Conference here in Minneapolis.  I spent a lot of time with a lot of brilliant sysadmins from around the world.  After three nights of mingling it became pretty clear that despite how much I enjoy talking about technology solutions (and I will continue to talk as people hand me Martinis), I don’t feed a lot of that back into the Mac admin community, and it was made clear that I should give it a try.

So, this will be my first installment of three posts sharing some of what I’ve learned.  This first will focus on Bash scripting.  The second will serve as an introduction to Python for the Bash scripter (let me be up front about this: Python is so hot…) and some of the benefits that it offers to us as administrators.  The last post will be entirely about the JSS API, interacting with the data in both Bash and Python as well as some examples of what you can accomplish with it.

So, on to Bash…

For the last two years as a Mac administrator I’ve learned quite a bit on the subject of shell scripting and have been shown a wealth of handy tricks that have helped refine my skill (there are a lot of wizards at JAMF).  In this post I want to show some of my favorite scripting solutions and techniques that have become staples of my writing style.  Now, if you’re reading this I’m going to assume you’re already pretty familiar with using a Mac via the command line and basic scripting (if the word shebang doesn’t conjure up the image of a number sign with an exclamation mark, you might not be ready).

Error Checking and Handling

As admins we write a lot of code that interacts with the Mac in ways that can certainly ruin a person’s day (if not their life, or so they would claim).  When executing commands there is a variable always readily available to view:

USS-Enterprise:~ brysontyrrell$ echo $?
0

The ‘$?’ represents the result of the last command that was run.  If there were no errors you receive a zero (0) in return.  Anything else will be a value greater than zero, which represents your error code.  Visit the manpage of any number of commands on your Mac and you will usually find a section devoted to what an error code represents (if not, we have Google).  Error codes are critical for figuring out what went wrong.
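Because ‘$?’ is replaced by every command that runs, including the test inside an IF statement, it is a good habit to capture it into a variable immediately. A quick illustration:

```shell
false            # a command that fails
rc=$?            # capture immediately; the very next command replaces $?
echo "captured: ${rc}"
[ $rc -ne 0 ] && echo "the command failed"
```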

In many cases we might need to kill a script if a command failed.  After all, if data is being manipulated or moved around there’s not a whole lot of sense in executing the remainder of the script when some key actions did not perform correctly.  We can do this with a simple IF statement that triggers an ‘exit’:

cp /path/to/source /path/to/destination
if [ $? -ne 0 ]; then
    exit
fi

Now, this will exit the script if our copy operation failed, but it isn’t that great.  The script will exit, but we won’t have any tangible information about how or why.  Let’s add a few things into this to make it more helpful to us:

cp /path/to/source /path/to/destination
resultCode=$?
if [ $resultCode -ne 0 ]; then
    echo "There was a problem with the copy operation. Error code ${resultCode}"
    exit 1
fi

Now we’re getting somewhere.  With this the script will not only output onto the Terminal that there was an error, but it will return the error code and also exit our script with a value greater than zero which will be reported as a failure!  We can have error codes that mean different things. ‘1’ is just the default.  Any numerical value can be used to represent an error (or types of error if we’re lumping them together) and its meaning can be recorded either within the script or in other documentation:

# Exit 1 for general error
# Exit 10 for copy operation error
cp /path/to/source /path/to/destination
resultCode=$?
if [ $resultCode -ne 0 ]; then
    echo "There was a problem with the copy operation. Error code ${resultCode}"
    exit 10
fi

Using ‘echo’ for the output is great if we’re running the script manually, but what we’re writing will be run remotely and we will not be watching it execute live.  We’re going to need something that will allow us to go back at a later time to review what transpired:

cp /path/to/source /path/to/destination
resultCode=$?
if [ $resultCode -eq 0 ]; then
    log "The copy operation was successful."
else
    log "There was a problem with the copy operation. Error code ${resultCode}" 10
fi

The log() call you see here is actually a function that I use for almost everything I write.  We’re going to cover what exactly it does a little later, but the basics of the above script are that it will output a message upon both the success and the failure of the command, and exit the script after the error.  Never underestimate how important it is that your scripts are telling you what they are doing.  Your life will be better for it.

Operators for Shortcuts

Our above examples all allow you to execute multiple actions in response to the success or failure of a command.  Sometimes we might only need to trigger one command in response to a result.  We can use operators to achieve this effect without writing out an entire IF statement:

# A copy operation using an IF statement to execute a file removal
cp /path/to/source /path/to/destination
if [ $? -eq 0 ]; then
    rm /path/to/source
fi

# The above operation using the '&&' operator
cp /path/to/source /path/to/destination && rm /path/to/source

In the second example the source file we are copying is deleted so long as the ‘cp’ command left of the ‘AND’ operator returned zero.  If there had been an error then the code on the right side won’t execute.  Both examples achieve the same result, but using the operator acts as a shortcut and cuts down on the amount of code you need to write.  If you need the same effect when the result is not zero, we can turn to the ‘OR’ operator:

# A copy operation using an IF statement to exit upon failure
cp /path/to/source /path/to/destination
if [ $? -ne 0 ]; then
    exit 10
fi

# The same copy operation but using '||' to trigger the 'exit'
cp /path/to/source /path/to/destination || exit 10
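One caveat of my own to add: chaining both operators as ‘command && success || failure’ is not a true if/else, because the failure branch also runs if the success branch itself fails. A quick demonstration:

```shell
# The 'false' after && fails, so the || branch runs even though 'true' succeeded
true && false || echo "this still prints"

# A real IF statement does not have that pitfall
if true; then
    result="success"
else
    result="failure"
fi
echo "${result}"
```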

It’s a TRAP

This one is a personal favorite.  Usage of the Bash builtin ‘trap’ is actually pretty new to me, and it is hands down one of the coolest (if you’re like me and think scripting is, you know, cool) things I’ve seen.  A ‘trap’ gives you the ability to determine actions that are performed when your script terminates or when commands throw errors!  Let me demonstrate with a very basic example:

# Here we define the function that will contain our commands for an 'exit'
onExit() {
rm -rf /private/tmp/tempfiles
}

# Here we set the 'trap' to execute upon an EXIT signal
trap onExit EXIT

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
done

# This 'exit' command will send an EXIT signal which will trigger the 'trap'
exit 0

As you can see in the above example, ‘trap’ is very easy to use.  The syntax for ‘trap’ is:

trap 'command(s)' signal(s)
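Here is a tiny demonstration of the syntax in action, run in a child shell so we can watch the trap fire on its EXIT:

```shell
# The child shell's EXIT trap runs even though nothing calls the echo directly
output=$(bash -c 'trap "echo cleanup ran" EXIT; echo doing work; exit 0')
echo "${output}"
```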

We created an onExit() function containing the actions we wanted to perform.  This became the command in the ‘trap’ line.  Once triggered, the temporary directory that we were copying files from is automatically purged when the script completes.  This makes cleanup much simpler and easier on us.  It also allows far more control over the state of the system upon an error that requires us to kill the script mid-process.  I had mentioned in my introduction that we could have traps for both terminations and errors, did I not?  Let’s expand upon that first example and make it a bit more robust:

onExit() {
rm -rf /private/tmp/tempfiles
}

# This function contains commands we want to execute every time there is an ERROR
onError() {
errorCode=$?
cmdLine=$1
cmdName=$2
echo "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}"
exit 1
}

trap onExit EXIT

# Here we introduce a second 'trap' for ERRORs
trap 'onError $LINENO $BASH_COMMAND' ERR

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
done

exit 0

This one certainly has a lot more going on.  The way I approach these ‘traps’ is pretty simple: my EXIT performs a cleanup of any working files while my ERROR handles outputting the information I will need to determine what went wrong.  In this example I have an ‘exit’ command included inside the onError() function so the cleanup onExit() function is still called in the event of an error.  That’s not a practice I’m recommending, but I am showing that it is an option.  There are plenty of cases out there where you would want the script to continue on even if an error occurs in the middle of a copy operation (user account migration, anyone?).  Those are the times when you will want to be explicit about where in your script certain errors trigger an ‘exit.’

Let’s break down that onError() function:

onError() {
# Our first action is to capture the error code of the command (remember, this changes after EVERY command executed)
errorCode=$?
# This variable is from $LINENO which tells us the line number the command resides on in the script
cmdLine=$1
# The last variable is from $BASH_COMMAND which is the name of the command itself that gave an error
cmdName=$2
# Our tidy statement here puts it all together in a human-readable form we can use to troubleshoot
echo "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}"
exit 1
}

# In this 'trap' we call not just our function, but we also pass two parameters along to it
# The $LINENO and $BASH_COMMAND variables are called 'Internal Variables' to the Bash shell
trap 'onError $LINENO $BASH_COMMAND' ERR

We’re going to make this onError() function even more powerful a little later by revisiting the log() function I had mentioned.  Before that, let’s go back to the onExit() function.  This ‘trap’ ideally is where we want to perform all of our cleanup actions, and the basic example I gave is wiping out a temporary directory of working files.  While our scratch space is removed in this process it does not address any actions we may have made in other areas of the system.  So, do we want to write all of that into the onExit() function even if it may not be relevant to how the script terminated?

I’m a big fan of the idea: “If I don’t HAVE to do this, then I don’t want to.”  What I mean is that I don’t want to execute commands on a system (especially when I’m masquerading around as root) if they’re unnecessary.  We can write our onExit() function to follow that ideology.  I don’t quite remember where I first saw this on the internet, but it was damned impressive:

# This is an array data type
cleanup=()

onExit() {
# Once again we're capturing our error code right away in a unique variable
exitCode=$?
rm -rf /private/tmp/tempfiles
# If we exit with a value greater than zero and the 'cleanup' array has values we will now execute them
if [ $exitCode -ne 0 ] && [ "${#cleanup[@]}" -gt 0 ]; then
    for i in "${cleanup[@]}"; do
        # The 'eval' builtin takes a string as an argument (executing it)
        eval "$i"
    done
fi
echo "EXIT: Script error code: $exitCode"
}

onError() {
errorCode=$?
cmdLine=$1
cmdName=$2
echo "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}"
exit 1
}

trap onExit EXIT
trap 'onError $LINENO $BASH_COMMAND' ERR

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
    # After each successful copy operation we add a 'rm' command for that file into our 'cleanup' array
    cleanup+=('rm /path/to/destination/"${file##*/}"')
done

exit 0

We have now transformed our onExit() function into one giant UNDO command.  The function will always remove the temporary working directory (which we always want, no matter what the exit status is), but the IF statement within it will now run additional commands out of our handy ‘cleanup’ array.  Effectively, unless the script successfully completes the entire copy operation it will, on the first error, remove every file that did make it into the destination.  This leaves the system pretty much in the same state as it was before our script ran.  We can take this concept further in much larger scripts by adding new commands into the ‘cleanup’ array as we complete sections of our code.
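The queueing pattern itself boils down to just a few lines; in this sketch ‘echo’ commands stand in for the real ‘rm’ commands:

```shell
cleanup=()

# Pretend each of these was appended after a successful copy operation
cleanup+=('echo "undo: file-a"')
cleanup+=('echo "undo: file-b"')

# On a failed exit we walk the array and eval each stored command
for i in "${cleanup[@]}"; do
    eval "$i"
done
```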

Logging is Awesome

I’m finally getting around to explaining that log() function from earlier.  Logs are fantastic for troubleshooting as they generally contain a lot of data that helps point us towards the source of the issue.  You can approach logging of your own scripts in two ways: append an existing log or use your own customized one.  In my case, I append all of my script log output into the Mac’s system.log using the ‘logger’ command.  This command allows you to do some pretty cool things (like message priority), but my use is fairly simple.

log () {
if [ -z "$2" ]; then
    logger -t "it-logs: My Script" "${1}"
else
    logger -t "it-logs: My Script" "${1}"
    logger -t "it-logs: My Script" "Exiting with error code $2"
    exit $2
fi
}

log "This is a log entry"

***OUTPUT IN SYSTEM.LOG AS SHOWN IN CONSOLE.APP***
Nov 10 12:00:00 USS-Enterprise.local it-logs: My Script[1701]: This is a log entry

You’re probably piecing together how this function works.  The ‘-t’ flag in the command creates a tag for the log entry.  In my case I have a universal prefix for the tag I use in all of my scripts (here I’m using ‘it-logs:’ but it’s similar) and then I follow it with the name of the script/package for easy reference (you read right: everything I write about in this post I use for preinstall and postinstall scripts in my packages as well).  The tagging allows me to grab the system.log from a machine and filter all entries containing ‘it-logs’ to see everything of mine that has executed, or I can narrow it down to a specific script and/or package by writing the full tag.  It’s really nice.

Right after the tag inside the brackets is the process ID, and then we have our message.  If you scroll back up to the example where I used the log() function you’ll see that in the code that triggered on a failure I included a ’10’ as a second parameter.  That is the error code to use with an ‘exit.’  If present, log() will write the message first, then write a second entry stating that the script is exiting with an error code, and finally ‘exit’ with that code (triggering our onExit() trap function).

If you want to maintain your own log, instead of writing into the system.log, you can easily do so with a similar function:

# You must ensure that the file you wish to write to exists
touch /path/to/log.log

log () {
    if [ -z "$2" ]; then
        # Here 'echo' commands will output the text message that is '>>' appended to the log
        echo $(date +"%Y %m %d %H:%M")" it-logs: My Script: ${1}" >> /path/to/log.log
    else
        echo $(date +"%Y %m %d %H:%M")" it-logs: My Script: ${1}" >> /path/to/log.log
        echo $(date +"%Y %m %d %H:%M")" it-logs: My Script: Exiting with error code $2" >> /path/to/log.log
        exit $2
    fi
}

***OUTPUT IN LOG.LOG AS SHOWN IN CONSOLE.APP***
2013 11 10 20:38 it-logs: My Script: This is a log entry

The end result is just about the same, and with a .log extension the file will automatically open in Console.app and still support the same filtering.

Now I’m going to take the copy operation from above and write in the log() function so you can see how the whole package fits together:

log () {
    if [ -z "$2" ]; then
        logger -t "it-logs: My Script" "${1}"
    else
        logger -t "it-logs: My Script" "${1}"
        logger -t "it-logs: My Script" "Exiting with error code $2"
        exit $2
    fi
}

cleanup=()

onExit() {
    exitCode=$?
    log "CLEANUP: rm -rf /private/tmp/tempfiles"
    rm -rf /private/tmp/tempfiles
    if [ $exitCode -ne 0 ] && [ "${#cleanup[@]}" -gt 0 ]; then
        for i in "${cleanup[@]}"; do
            log "CLEANUP: $i"
            eval "$i"
        done
    fi
}

onError() {
    errorCode=$?
    cmdLine=$1
    cmdName=$2
    log "ERROR: ${cmdName} on line ${cmdLine} gave error code: ${errorCode}" 1
}

trap onExit EXIT
trap 'onError $LINENO $BASH_COMMAND' ERR

for file in /private/tmp/tempfiles/*; do
    cp "${file}" /path/to/destination/
    log "COPY: ${file} complete"
    cleanup+=('rm /path/to/destination/"${file##*/}"')
done

exit 0
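If the trap lines feel opaque, this stripped-down sketch shows just the ERR trap firing on a failed command (note that without ‘set -e’ the script keeps running afterward):

```shell
#!/bin/bash
# Minimal demonstration of the ERR trap pattern used above.
trap 'echo "ERROR: ${BASH_COMMAND} returned $?"' ERR

false                  # any failing command fires the trap
echo "still running"   # an ERR trap alone does not stop the script
```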

Interacting With the User Stuff

There are a ton of examples out there for grabbing the name of the currently logged in user and finding their existing home directory.  Both items are very important when we’re executing our scripts as the root user.  To get the name of the logged in user I’ve found these three methods:

# Though the simplest, I have found that this method does not work in a package preinstall/postinstall script
USS-Enterprise:~ brysontyrrell$ echo $USER
brysontyrrell

# This one is also very simple but it relies upon a command that may not be present in future OS X builds
USS-Enterprise:~ brysontyrrell$ logname
brysontyrrell

# The following is used pretty widely and very solid (the use of 'ls' and 'awk' nearly future-proofs this method)
USS-Enterprise:~ brysontyrrell$ ls -l /dev/console | awk '{print $3}'
brysontyrrell
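That last method is really just pulling the owner out of the third column of ‘ls -l’ output, which you can see with a canned line (the user and timestamp here are made up for illustration):

```shell
# What the /dev/console method boils down to: grab column 3 (the owner)
# from 'ls -l' output. A canned line is used so this runs anywhere.
consoleLine='crw------- 1 brysontyrrell wheel 0, 0 Nov 10 12:00 /dev/console'
userName=$(echo "$consoleLine" | awk '{print $3}')
echo "$userName"
```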

One step beyond this is to then find the user’s home directory so we can move and/or manipulate data in there.  One piece of advice I’ve been given is to never assume I know what the environment is.  Users are supposed to be in the ‘/Users/’ directory, but that doesn’t mean they are.  If you’ve never played around much with the directory services command line utility (‘dscl’), I’m happy to introduce you:

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/NFSHomeDirectory:/ {print $2}'
/Users/brysontyrrell

‘dscl’ is incredibly powerful and gives us easy access to a lot of data concerning our end-users’ accounts.  In fact, you can take that above command and change out the regular expression ‘awk’ is using to pull out all sorts of data individually:

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/GeneratedUID:/ {print $2}'
123A456B-7DE8-9101-1FA1-2131415B16C1

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/UniqueID:/ {print $2}'
501

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/PrimaryGroupID:/ {print $2}'
20

USS-Enterprise:~ brysontyrrell$ dscl . read /Users/brysontyrrell | awk '/UserShell:/ {print $2}'
/bin/bash
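Since ‘dscl’ only exists on OS X, here is a sketch of wrapping those lookups in a small reusable helper, run against canned record text (mirroring what ‘dscl . read /Users/name’ prints) so the awk matching is visible:

```shell
# 'userRecord' stands in for the output of: dscl . read /Users/"$userName"
userRecord='UserShell: /bin/bash
NFSHomeDirectory: /Users/brysontyrrell
UniqueID: 501
PrimaryGroupID: 20'

# Print the value for a given attribute key
getAttr () {
    echo "$userRecord" | awk -v key="${1}:" '$1 == key {print $2}'
}

getAttr NFSHomeDirectory
getAttr UniqueID
```

On a real Mac you would replace the canned text with the actual ‘dscl’ call and pass the attribute you need.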

Alternative Substitution

Kind of an odd header, but you’ll get it in a moment.  One of the bread ‘n butter techniques of scripting is to take the output of one command and capture it in a variable.  This process is known as “command substitution”: the command is replaced in the statement by its own output.  The traditional way of doing this is to enclose the command you are capturing in `backticks`.  Instead of using backticks, use $(a dollar sign and parentheses) so your editor of choice still highlights the command syntax correctly.

Check out this example which will pull out the type of GPU of the Mac:

gpuType=`system_profiler SPDisplaysDataType | awk -F': ' '/Chipset Model/ {print $2}' | tail -1`
gpuType=$(system_profiler SPDisplaysDataType | awk -F': ' '/Chipset Model/ {print $2}' | tail -1)

Functionally, both of these statements are identical, but now as we write our scripts we have an easier time going back and identifying what is going on at a glance.  Let’s take two of the examples from above for obtaining information about the logged in user and write them the same way for a script:

userName=$(ls -l /dev/console | awk '{print $3}')
userHome=$(dscl . read /Users/"${userName}" | awk '/NFSHomeDirectory:/ {print $2}')

cp /private/tmp/tempfiles/somefile "${userHome}"/Library/Preferences/
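One more point in $( )’s favor that these examples don’t show: it nests cleanly, where backticks would require escaping the inner pair.  The path here is hypothetical:

```shell
# $() substitutions nest without any escaping; the equivalent with
# backticks would need \` around the inner command.
userHome=$(dirname "$(echo "/Users/brysontyrrell/Library/Preferences")")
echo "$userHome"
```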

Publish Mac App Store apps in Self Service – just like on iOS

(As of Casper Suite 9.4 the JSS natively supports Mac App Store apps in Self Service)

A topic that went around the office recently was listing Mac App Store apps in Self Service on the Mac, much in the way you can for iOS Self Service.  In our Self Service this is something I had already done via a script to give users one-click access to Apple Configurator, iBooks Author and Xcode (for starters) while updating their inventory records upon an actual install of the app.

Here is the script that achieves this result:

#!/bin/bash

# The iTunes address for the App (can be grabbed from its App Store page) is passed
# from the JSS into the 'appAddress' variable from parameter 4.
# Example (you would not include quotes): "itunes.apple.com/us/app/ibooks-author/id490152466"
appAddress="$4"


# The name of the app is passed from the JSS into the 'appName' variable from parameter 5.
# Example (you would not include quotes): "iBooks Author.app"
appName="$5"

# A file is created for timestamp comparison
touch /tmp/timestamp


# The user is prompted with a dialog and then Self Service is hidden.
osascript -e 'tell application "System Events" to display dialog "Self Service will be suspended in the background until you have closed the Mac App Store." with title "Now opening Mac App Store" buttons {"OK"} default button 1 giving up after 5'
osascript -e 'tell application "System Events" to tell process "Self Service" to set visible to false'


# The App Store is opened to the specified app.
open macappstore://"${appAddress}"


sleep 3
# Every three seconds the running process list is polled to see if the App Store is open.
while true; do
    openCheck=$(ps -u `logname` -x | awk '/Applications\/App\ Store.app/ {print $2}')
    if [ -z "${openCheck}" ]; then
        break
    fi
    sleep 3
done

# Once the App Store is closed Self Service is shown again.
osascript -e 'tell application "System Events" to tell process "Self Service" to set visible to true'

# The inventory record for that Mac is updated if the user installed the app after the
# timestamp file had been created.  If not, recon is not run.
if [ "/Applications/${appName}" -nt "/tmp/timestamp" ]; then

    osascript -e 'tell application "System Events" to display dialog "Now updating your inventory..." with title "Returning to Self Service" buttons {"OK"} default button 1 giving up after 5'
    jamf recon
fi

# Removes the timestamp file.
rm /tmp/timestamp


exit 0

Comments are included throughout the script, but I can give you a breakdown of what is happening:

  • An empty file is created to log a timestamp of when this process has begun.
  • The user is informed that Self Service is going to be hidden until they close the Mac App Store.  They can either click “OK” or the prompt will time out after 5 seconds.
  • The Mac App Store is opened to the chosen app.
  • Every three seconds in the background the script is polling the running process list to see if the Mac App Store is still open.
  • Once the Mac App Store has been closed Self Service will be brought back into view.
  • The date of the app that should be present in the /Applications directory is compared to our timestamp file.  If it is NEWER than the timestamp then we run a Recon to grab the inventory for the JSS.  If it is OLDER then the Recon is not run (thus saving some cycles).
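That polling step generalizes to any “wait until something goes away” task.  Here is a portable sketch of the same loop with a temp file standing in for the App Store process check:

```shell
# A flag file stands in for the App Store process; the background job
# simulates the user quitting the app after one second.
flag=$(mktemp)
( sleep 1; rm -f "$flag" ) &

while true; do
    if [ ! -e "$flag" ]; then
        break
    fi
    sleep 1
done
echo "app closed"
```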

The script works great for populating a catalog of Mac App Store apps within Self Service on the Mac just as we can for our iOS devices.  Its one limitation is that it is not a solution for redemption codes for paid apps, but it can at least point our people to the right apps and grab inventory if they do install them.

Feel free to share this around and improve upon!

Box Edit and Box Sync via Self Service

We’re doing a lot of work with our Box.com account over at JAMF Software, and going forward I’m going to be educating my users about using Box Sync and Box Edit to get the most out of our new setup.  However, I don’t want to hand them a sheet of instructions detailing logging into Box.com to get the installers for these services.  Surely, there is a better way.

A lot of companies are now using apps for software installers on Macs, and to be truthful that irks me a little.  I can’t plop an app into my Casper Share and easily make a policy out of it, and my personal approach to software deployment is that I don’t want (and don’t like) to do snapshot package creation.  I prefer going 100% native with the vendor’s provided installers.

So, I whipped up a Self Service policy script (I do love Self Service):

#!/bin/bash
# Pass the following from the JSS Parameter 4 to select the script's action:
# edit: Install / Uninstall Box Edit
# sync: Install / Uninstall Box Sync
# If either service is already installed the app will open with an "Uninstall" option

# Error Codes:
# 0: Script executed successfully
# 101: The downloaded file did not pass the checksum test
# 102: The downloaded Box Edit DMG failed to mount
# 103: The downloaded Box Sync app file could not be unzipped

installerOption="$4"

if [[ ${installerOption} = edit ]]; then
     echo ""
     echo "Downloading Box Edit"
     webCheckSum=$(curl -sI https://sync.box.com/static/BoxEdit/BoxEditInstaller.dmg | tr -d '\r' | awk '/Content-Length/ {print $2}')
     curl -fkS --progress-bar https://sync.box.com/static/BoxEdit/BoxEditInstaller.dmg -o /tmp/BoxEditInstaller.dmg
     fileCheckSum=$(cksum /tmp/BoxEditInstaller.dmg | awk '{print $2}')
     if [ $webCheckSum -ne $fileCheckSum ]; then
          rm /tmp/BoxEditInstaller.dmg
          echo "The file did not download properly, Exiting..."
          exit 101
     fi
     echo "Mounting Box Edit DMG..."
     hdiutil attach -quiet /tmp/BoxEditInstaller.dmg
     if [ $? -ne 0 ]; then
          rm /tmp/BoxEditInstaller.dmg
          echo "The Box Edit DMG failed to mount properly, exiting..."
          exit 102
     fi
     cp -fR /Volumes/Box\ Edit\ Installer/Install\ Box\ Edit.app /tmp/
     hdiutil eject -quiet /Volumes/Box\ Edit\ Installer/
     rm /tmp/BoxEditInstaller.dmg
     echo "Opening the Box Edit Installer app"
     open -a /tmp/Install\ Box\ Edit.app
     osascript -e "delay .5" -e 'tell application "Box Edit Installer" to activate'

elif [[ ${installerOption} = sync ]]; then
     echo ""
     echo "Downloading Box Sync"
     webCheckSum=$(curl -sI https://sync.box.com/static/sync/release/BoxSyncMac.zip | tr -d '\r' | awk '/Content-Length/ {print $2}')
     curl -fkS --progress-bar https://sync.box.com/static/sync/release/BoxSyncMac.zip -o /tmp/BoxSyncMac.zip
     fileCheckSum=$(cksum /tmp/BoxSyncMac.zip | awk '{print $2}')
     if [ $webCheckSum -ne $fileCheckSum ]; then
          rm /tmp/BoxSyncMac.zip
          echo "The file did not download properly, Exiting..."
          exit 101
     fi
     echo "Unzipping the Box Sync app..."
     unzip -oq /tmp/BoxSyncMac.zip -d /tmp/
     if [ $? -ne 0 ]; then
          echo "The Box Sync app failed to unzip properly, exiting..."
          exit 103
     fi
     rm -f /tmp/BoxSyncMac.zip
     echo "Opening the Box Sync Installer app"
     open -a /tmp/Box\ Sync\ Installer.app
     osascript -e "delay .5" -e 'tell application "Box Sync Installer" to activate'
fi

echo "Finished"
exit 0

There’s very little here that’s special.  The script does perform some error checking to make sure the download and/or extraction process completes appropriately and throws messages into the logs so you have a better idea of what happened if it failed.  Because the Box installer apps present an “Uninstall” option if you run them and the service is already present these policies do double-duty for removal.
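One detail worth calling out: the download “checksum test” is really a size comparison.  It checks the HTTP Content-Length header against the byte count in the second field of cksum’s output.  A quick local sketch of that comparison (the file and size here are made up):

```shell
# cksum prints: <CRC> <byte count> <filename>; field 2 is the size.
tmpFile=$(mktemp)
printf 'hello' > "$tmpFile"

expectedSize=5    # in the real script this comes from the Content-Length header
actualSize=$(cksum "$tmpFile" | awk '{print $2}')

if [ "$expectedSize" -eq "$actualSize" ]; then
    echo "download size matches"
fi
```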

For me, using this script means that I don’t have to worry about repackaging the installer apps and putting them into the Casper Shares to achieve the same end result (the user having to click through the process).  It also means that every user will always receive the most recent version of both of these apps when they go to install them.  By using the toggle parameter I only need the one script to handle both options.  Here’s what the complete policies look like in my Self Service:

Box Edit Self Service

Box Sync Self Service

The downside to this script is that I will need to update/rewrite it if Box.com changes how they package or deploy the installers.  For now, its elegance is in its simplicity to the user.  Feel free to take this and apply it to your own organization if you’re looking to distribute these Box.com tools!

P.S. In case you were curious, the “JAMF Software Box.com User Guide” is being created in iBooks Author.

Fusion Drive Script

Update 5/1/13: I cleaned up the way I posted the code so there won’t be any more mishaps when copying and pasting into a script file.

This is the entire script that I demonstrate in my “Fusion Drive and CoreStorage” video.  Use at your own risk, improve upon it if you feel you can and then share your results back!  Copy and paste this into your preferred text editor (I use TextWrangler) and save it in the .sh format.

(Note: the double ## is just my personal way of marking comments, single # does of course work just fine, don’t complain)

#!/bin/bash
clear
## These first blocks in the script are all functions that are constantly called on below.
## Using functions instead of in-sequence scripting allows us to perform error-check loops
## without lots of extra coding.  All of the functions must be passed through first so that
## the shell knows what they are and can call them.
## Before the script allows the user to begin inputting variables, it first checks for any
## available disk nodes to add to the new CoreStorage Logical Group and lists them.  If
## there are no available disks, or only one is available, then an error message with
## instructions is displayed and the script exits.
function devNodeList {
dID=0
echo "The following device nodes are available:"
diskCount=0
while true
do
     diskNode="disk${dID}"
     dnCK=$(diskutil list | grep -m 1 "${diskNode}" | awk -F/ '{print $3}')
     if [ -z "$dnCK" ]; then
          break
     fi

     csCheck=$(diskutil list | awk -F/ "/Apple_CoreStorage/ && /$diskNode/" | awk '{print $2}')
     if [ -z "$csCheck" ]; then
          csCheck=$(diskutil cs info "$diskNode" 2> /dev/null | awk '/Role:/ {print $2}')
          if [ "$csCheck" != "Logical" ]; then
               diskutil cs info "${diskNode}" 1> /dev/null
               diskCount=$((diskCount+1))
          fi
     fi
     dID=$((dID+1))
done
echo ""
if [ $diskCount -eq 0 ]; then
     echo "There are no available disks to create the CoreStorage Logical Group."
     echo "Run 'diskutil list' to see all disks with an 'Apple_CoreStorage'"
     echo "partition and 'diskutil cs list' to see the CoreStorage Logical Groups"
     echo "and use their UUIDs to delete them using 'diskutil cs delete lvgUUID'."
     echo ""
     exit 10
elif [ $diskCount -gt 0 -a $diskCount -lt 2 ]; then
     echo "There are not enough available disks to create the CoreStorage Logical"
     echo "Group. Run 'diskutil list' to see all disks with an 'Apple_CoreStorage'"
     echo "partition and 'diskutil cs list' to see the CoreStorage Logical Groups"
     echo "and use their UUIDs to delete them using 'diskutil cs delete lvgUUID'."
     echo ""
     exit 10
fi
}

## This function is the prompt for the first device node.  Note that the two variables that
## are set at the end are used in the "devNodeCheck" function to verify that the device node
## does not already belong to a CoreStorage Logical Group and to recall the "devNodeOne"
## function if it is invalid.

function devNodeOne {
echo "Please input the device node for the first disk (e.g. disk0):"
echo "(It is recommended that you set the SSD as the first disk)"
read "dnOne"
fncDN="$dnOne"
fncCall="devNodeOne"
devNodeCheck
}

## This function is identical to "devNodeOne" except that it calls another function before the
## "devNodeCheck" to make sure the user did not enter the same device node twice.

function devNodeTwo {
echo "Please input the device node for the second disk (e.g. disk1):"
echo "(Your second disk should be your HDD)"
read "dnTwo"
fncDN="$dnTwo"
fncCall="devNodeTwo"
devNodeRepeat
devNodeCheck
}

## Here the device node entered by the user is checked to see if it is valid.  If not, then the
## user is prompted to enter another by calling the function that was passed in the variable.

function devNodeCheck {
dnCK=`diskutil list | grep -m 1 "$fncDN" | awk -F/ '{print $3}'`
if [ -z "$dnCK" ]; then
     echo ""
     echo "You have entered an invalid device node."
     echo ""
     $fncCall
fi
}

## This function is only called by devNodeTwo to ensure that the user hasn't entered the same
## one twice.  If they did, the user is returned to the devNodeTwo function to try again.

function devNodeRepeat {
if [ "$dnTwo" == "$dnOne" ]; then
     echo ""
     echo "You have already selected $dnOne for the CoreStorage Logical Volume Group."
     echo ""
     devNodeTwo
fi
}

## This function checks to see if the device node entered by the user belongs to a CoreStorage
## Logical Volume Group.  If it does, it is unusable for the purposes of this script and exits.

function coreStorageCheck {
csCK=`diskutil list | awk '/Apple_CoreStorage/ && /'$fncDN'/ {print $2}'`
if [ "$csCK" == "Apple_CoreStorage" ]; then
     echo ""
     echo "The entered device node belongs to a CoreStorage Logical Volume Group."
     echo "The script will now exit. Please use 'diskutil cs delete lvgUUID' to"
     echo "delete the CoreStorage Logical Volume Group from the disk."
     echo ""
     exit 20 ## Exits with error code 20
fi

csCK=`diskutil cs info "$fncDN" 2> /dev/null | awk '/Role:/ {print $2}'`
if [ "$csCK" == "Logical" ]; then
     echo ""
     echo "The entered device node is a CoreStorage Logical Volume. The script "
     echo "will now exit. Please use 'diskutil cs deleteVolume lvUUID' to delete"
     echo "the CoreStorage Logical Volume from the disk, and then delete the"
     echo "CoreStorage Logical Volume Group using 'diskutil cs delete lvgUUID'."
     echo ""
     exit 25 ## Exits with error code 25
fi
}

## If the device node did not pass the coreStorageCheck and the user agrees to delete the
## CoreStorage Logical Volume Group, this script will perform the task, list the available
## device nodes after the operation, and prompt the user to enter the first device node again.

## Not written yet.
## function deleteCoreStorageLVG {
## }

echo "#################################################################"
echo "#                                                               #"
echo "#  WARNING!!! This script is inherently dangerous. It will      #"
echo "#  destroy all existing data on the disks you specify when      #"
echo "#  creating the new CoreStorage Logical Volume Group.           #"
echo "#                                                               #"
echo "#  Before continuing, verify that no other CoreStorage Logical  #"
echo "#  Volume Groups are on the target drives. If there are, delete #"
echo "#  them using the command 'diskutil cs delete lvgUUID'          #"
echo "#                                                               #"
echo "#  How to best use this script: 1) target boot the Mac you      #"
echo "#  want to create the Fusion Drive on and connect to it, or     #"
echo "#  2) copy this script to another drive you can access while    #"
echo "#  running the OS X Mountain Lion Installer.  Mountain Lion is  #"
echo "#  required to create the CoreStorage Logical Group.            #"
echo "#                                                               #"
echo "#################################################################"
echo ""
echo "At any time press the [Cmd] + [.] keys to terminate the script."
echo "Press the Enter key to continue..."
read

devNodeList
devNodeOne
coreStorageCheck

echo ""

devNodeTwo
coreStorageCheck

echo ""
echo "Please input the label for the CoreStorage Logical Volume Group:"
read "csLVGname"

echo ""
echo "Please input the label for the CoreStorage Logical Volume (e.g. \"Macintosh HD\"):"
read "csLVname"

echo ""
echo "Creating the CoreStorage Logical Volume Group \"${csLVGname}\"."
diskutil cs create "${csLVGname}" $dnOne $dnTwo

## Grab the UUID of the just-created CoreStorage Logical Volume Group.  Since these
## groups are always listed in order, the last result ('tail -1') will always be correct.

lvgUUID=`diskutil cs list | awk '/Logical Volume Group/ {print $5}' | tail -1`

echo ""
echo "Creating the CoreStorage Logical Volume \"${csLVname}\" with Mac OS Extended (Journaled)."
diskutil cs createVolume "${lvgUUID}" jhfs+ "${csLVname}" 100%

echo ""
echo "The Fusion Drive has been successfully created.  You may now install"
echo "or image OS X Mountain Lion onto the CoreStorage Logical Volume."
echo ""

exit 0
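A closing note on that lvgUUID grab: ‘diskutil’ is OS X-only, so here is the same awk/tail logic run against canned ‘diskutil cs list’ output (the UUIDs are made up) to show why ‘tail -1’ picks the newest group:

```shell
# Canned stand-in for: diskutil cs list
csList='CoreStorage logical volume groups (2 found)
+-- Logical Volume Group 00000000-AAAA-BBBB-CCCC-000000000001
+-- Logical Volume Group 00000000-AAAA-BBBB-CCCC-000000000002'

# Groups are listed in creation order, so tail -1 returns the newest.
lvgUUID=$(echo "$csList" | awk '/Logical Volume Group/ {print $5}' | tail -1)
echo "$lvgUUID"
```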