Category Archives: Mac OS X

Load balancing in Munki – thinking out loud.

This post is equal parts thinking out loud, and committing an idea to writing so I don’t forget. Please chime in if you have input. BTW: None of this code should be trusted as working – this is sketching only.

The Munki system I manage as my day job recently hit a milestone: when I released an Office 2016 update, it saturated the server.

Munki Web Admin ground to a halt (my first indication), and when I finally got logged in, the server load was hovering between 110 and 120! Not good.

Short term, I threw a couple more cores at the VM, doubled up the RAM, and called it a day. But that will only take me so far, and I am approaching the saturation point for the 10Gb link from this VM to campus. I need a solution that is a bit more sustainable.

Munki makes it easy on the surface to spread the updates out by way of catalogs. Divide your fleet into X groups, and then create Production catalogs 1 through X, assigning them in turn to machines in your fleet. Then decide how long you want to take to roll out the software, and add the software to each of the X catalogs at intervals over the duration of the rollout.
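In code, the manual split might look something like this (a sketch only – hashing the serial number is just one stable way to assign a machine to a group; the function name is mine):

```python
import hashlib

N_GROUPS = 10  # the "X" above

def production_catalog(serial_number, n_groups=N_GROUPS):
    """Deterministically map a machine to one of production0..production(X-1)."""
    digest = hashlib.md5(serial_number.encode('utf-8')).hexdigest()
    return 'production%d' % (int(digest, 16) % n_groups)
```

Because the hash is stable, a given machine always lands in the same catalog, which is exactly what the manual catalog-assignment approach gives you.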

There are a few problems with this approach:

  1. The Munki Admin (Me) is lazy, and shouldn’t be relied on to “release” software to each of these catalogs in turn over the duration of the rollout.
  2. If you have a bunch of machines (that is, manifests) in your fleet, you don’t want to have to go back through all of them to re-assign the catalogs.
  3. Will it work? Sure, but it’s a royal hack for a solution.

But what if we can functionally take this approach, and automate it in some fashion?

For starters, let’s consider mod_rewrite in Apache. As it turns out, you can point a RewriteMap at a script that dynamically generates a value, silently rewriting a request for /catalogs/production to a changing target – say, /catalogs/production[0-9].

So, in the virtual host configuration (RewriteMap isn’t allowed in a .htaccess file – it has to be set at server or virtual host scope), we’ll place something like the following. The script name lb.py is just a placeholder:

RewriteEngine   on
RewriteMap      lb    prg:/usr/local/cgi-bin/lb.py
RewriteRule     ^/catalogs/(production)$   /catalogs/${lb:$1}   [L]

And the script would look like the following. One wrinkle: Apache launches a prg: RewriteMap once at startup, so the script has to loop – reading one lookup key per line on stdin and writing one answer per line on stdout:

#!/usr/bin/env python
## Randomly append a numeric value to the production catalog for each lookup.
## A prg: RewriteMap is long-running: read keys from stdin, answer on stdout.

import random
import sys

for line in sys.stdin:
    sys.stdout.write("production" + str(random.randint(0, 9)) + "\n")
    sys.stdout.flush()

By my reckoning, if the rewrite works, the percentage of check-ins that receive a given piece of software is a function of the percentage of catalogs that include it. The math isn’t exact, because we are pseudo-randomly picking the catalog that is delivered, and when you consider the number of times a client checks in over the course of a day (between 12 and 24 times), an active client has a high probability of being offered production0 during that first day.
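To put a rough number on that intuition, here’s a quick back-of-the-envelope calculation (a sketch – the ten-catalog split and check-in counts are the assumptions from above):

```python
def p_offered(n_checkins, k_catalogs, total=10):
    """Probability a client checking in n times is offered the new software
    at least once, when k of `total` catalogs contain it."""
    p_miss_once = 1.0 - float(k_catalogs) / total
    return 1.0 - p_miss_once ** n_checkins

# With only 1 of 10 catalogs seeded and 12 check-ins a day:
print(round(p_offered(12, 1), 3))   # ~0.718
# With 24 check-ins a day:
print(round(p_offered(24, 1), 3))   # ~0.92
```

So even with just one seeded catalog, an active client is more likely than not to be offered the update on day one – which is why a purely random pick may spread the load less than hoped.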

Maybe we roll the production catalog offered – stepping from 0 through 9 sequentially… I’m still thinking about that.

Now that the load balance piece is solved, we have to generate the 10 production catalogs and inject the software to each of them in turn over the duration of the rollout.

So let’s say we want to roll something out over the course of 48 hours, and we have 10 pseudo-catalogs. Every 4.8 hours, then, we’ll run a function that reads in the pkginfo plist for the software being load-balanced, makes use of a counter that we’ll set in the XML, and appends the item to the end of the next makecatalogs-generated catalog.

Here’s a problem, though – if a prior version of softwareX is listed earlier in the catalog because it’s in the “master”, makecatalogs-generated one, it will override what we add at the end. Hmmmm…

To be continued…

Of ESXi, Mac Pro firmware, and Spotlight

The task: upgrade a Mac Pro 6,1 running VMware ESXi 6.0 to 6.5. Here’s the rub: the machine has sat on the shelf for the past 2 years, quietly running the hypervisor, not getting firmware updates, “collecting zero-days along the way” (as one Mr. Pepijn Bruienne succinctly put it).

ESXi 6.5 doesn’t install nicely on the 6,1 at the current time. William Lam of Virtually Ghetto has provided us with great information as usual, and easy-to-find links to VMware documentation on how to work around the installation issues.

But – uh oh, VMware’s documentation lists a minimum firmware version of B17, and I’m running B05. So, after wrinkling up my nose, trying anyway, and waiting… hoping… for a boot to complete that never would, I resigned myself to the fact that VMware must indeed have listed this requirement for a reason.

So, this should be easy, right? Apple provides firmware updates bundled in the OS upgrades these days, so the worst-case scenario should look something like “install 10.12.3 on an external drive, and apply the combo updater to bring it up to 10.12.4”. That should kick off the firmware updater, and voila, we’ll be patched.
Without boring everyone with the details of how hard I tried to get this process to work, I finally – through trial and error – realized that the MacPro 6,1 appears to REQUIRE booting off the internal SSD in order to process firmware updates. Maybe this is a nuance of B05, maybe it’s a design *cough* feature of the trash-can, but it consistently ignored every staged EFI upgrade when it was booted from an external drive.
Since I run two of these Mac Pro Grouch-houses for our campus deployment system, I was able to develop the process on the first, and reproduce on the second.
The dance looked like this:

  1. Boot your ESXi configured MacPro “Can’t innovate any more my ass” model in target disk mode. Hook it up to a Thunderbolt equipped machine with more free space than the drive size on the MPro.
  2. Open the terminal, and type “diskutil list”. Look for the Mac Pro’s volumes – there will be a device with several volumes labeled “Untitled, Untitled2” etc. Make a note of the device number (it’ll be something like /dev/disk2).
  3. Unmount the disk by using the device number discovered above –
    $ sudo diskutil unmountDisk /dev/disk2
  4. Back up your ESXi machine’s entire disk. I used “dd” for this –
    $ sudo dd if=/dev/rdisk2 of=ESXiBackup.img bs=32768
    (yes, I used ‘/dev/rdisk2’. It’s for performance reasons, but you’ll still have time to fetch a beverage while this runs)
  5. Wipe the internal drive on the MacPro and install an older version of 10.XX – this will compel an OS (and firmware) update. The older release may not be needed – Software Update may prompt to upgrade the firmware anyway, but I did not test that. Check your work by verifying the firmware version in System Profiler.
  6. Now we can reverse step 4 and put the backup ESXi image back on the machine
    $ sudo dd if=ESXiBackup.img of=/dev/rdisk2 bs=32768

Ok – the firmware is up to date at this point. Now we can follow the instructions in the VMware documentation and upgrade the hypervisor OS, right?
Not so fast there, cowboy.

The very bad joke
You see, when we mounted Trashy McTrashFace in target disk mode, Spotlight instantly started indexing the drives and created a directory at the top of each of the 4 (in my case) volumes on the virtual machine host. This folder then interferes with the ESXi installer.
My guess is when you run the ESXi 6.5 installer and ask it to update the system (keeping the current config), the installer reaps the existing OS, checks to ensure the volume is in an expected state (empty), before it lays down the new OS. Since Spotlight created a folder called “.Spotlight-V100”, the ESXi installer finds a non-empty target and grinds to a halt as if it just found a nematode in its fish dinner.
When you reboot and attempt to re-run the installer, you no longer have the option to upgrade – but instead you can only install a fresh instance of the hypervisor OS. This destroys your existing configuration – accounts, network config, storage config, vm registrations (but not the actual vms). While not catastrophic, it is still a huge pain in the butt because you have to reconfigure everything, locate and re-register your vms, and potentially modify the storage devices on each vm since the UUID changed on any external storage mounts.

So, how do we solve this?
These Spotlight folders are benign to the existing ESXi installation – it isn’t bothered by them at all. This gives us the opportunity to boot the not-yet-upgraded hypervisor, delete these folders prior to upgrading, and save our configuration.
Booting ESXi, enabling the local shell and then dropping in to the console (Google it for instructions – it’s easy to find) gives us the ability to run a simple ‘find’ command to locate any of these pesky search index folders.

$ find / -name '.Spotlight*'

We can club them in turn with our good friend ‘rm -r’ (the ESXi version of find doesn’t appear to have a -delete option).
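If you’d rather not club each one by hand, ESXi ships with a Python interpreter, so a quick sweep could look like this (a sketch – it assumes the datastores are mounted under /vmfs/volumes, as they normally are):

```python
import os
import shutil

def sweep_spotlight(root_dir):
    """Remove any Spotlight index folders below root_dir; return what we removed."""
    removed = []
    for root, dirs, files in os.walk(root_dir):
        for d in list(dirs):
            if d.startswith('.Spotlight'):
                target = os.path.join(root, d)
                shutil.rmtree(target)
                dirs.remove(d)  # don't descend into the deleted folder
                removed.append(target)
    return removed

# On the host itself, datastores mount under /vmfs/volumes:
# sweep_spotlight('/vmfs/volumes')
```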
NOW you can follow the remainder of the instructions at VMware, and the system will upgrade in place and your configuration will stick around.
And, you’ll find your way home much earlier. On a Saturday. With less cursing. Trust me.

iOS, Apple Configurator, WiFi association and failing MDM enrollment

While attempting to manage my first iOS cart, I ran into a catch-22. We were using VPP app distribution, so an MDM server was needed, but when I went through the “Prepare” phase of Apple Configurator, or reset the iPads when they returned to the cart, they would fail to enroll to the MDM.

No MDM, no software install.

Our campus WiFi network utilizes 802.1x authentication, so the Prepare / Reset workflow in Apple Configurator should look like this:

iOS Install -> 802.1x WiFi mobileconfig -> MDM Enrollment profile -> App Install

This was supposed to happen each time an iPad returned to the cart (to return it to a fresh state), but about 70% of the time the iPads would fail to re-enroll to the MDM.

With the help of our crackerjack wireless folk, I was able to track this down to an over-aggressive timeout within Apple Configurator when applying the MDM enrollment profile. It simply would die with the error that the server was unreachable before the WiFi connection had been established. No check-and-retry – just march straight to failure.

After consulting with our Apple SE, who in turn consulted with an internal iOS deployment specialist, a hidden preference came to the surface:

defaults write EnrollmentProfileInstallationDelay 20

(or some other number of seconds – don’t exceed 120).

This is applied in the user scope, so no “sudo” is needed. I upped mine to 40 seconds, restarted Apple Configurator, and tried again with a stopwatch in hand. This appears to be a timeout rather than a fixed delay – there was no 40-second pause, but it DID wait longer for the WiFi association to occur, and now my devices appear to enroll without issue.

Other admins appear to have worked their way around this by staging the mobileconfig profiles. First pass, apply the WiFi config. Second pass, apply the MDM enrollment. While this works for 1-to-1 deployments, if you have a classroom cart with iPads that need to be wiped upon return it takes a ton of involvement by the admin – you must Unsupervise the devices, then Prepare, and finally apply the MDM enrollment in the Supervise stage each time they have to be wiped.

Increasing the enrollment delay appears to solve this so the staged enrollment technique isn’t necessary.

Ubuntu hosted munki2 repo missing icons in Managed Software Center

Default values in Apache2 on Ubuntu have “/icons” aliased to a system-wide directory.

If Managed Software Center is not picking up the icons you have specified for your install packages, add the following into your virtual host configuration:

Alias /icons/ [your repo path root]/icons/

A simple service apache2 reload and you should be all set.

Puppet, PuppetDB and Puppet-Dashboard setup on Ubuntu

In May, 2012, I learned about Munki at the PSU MacAdmins conference*, and subsequently spent the next year learning, configuring, and implementing it on our campus. It has been a very productive addition to our client management toolset – a god-send, really. However I have been cogitating on how to improve… get to “SU Mac Deployment v2.0”.

So this is the year of “Puppet”.

First off, let me say this: I had my first Munki Server set up in a couple hours – tops. I had the MunkiWebAdmin interface set up in a couple hours the first time. To get a functional Puppet server set up, configured, talking to clients and correctly running the Web interface took me the better part of a week. There are many components to this system and they have to talk to each other. While the documentation on the PuppetLabs website is pretty complete, there are so many components that the simple task of filtering out what you need for a basic setup is non-trivial. At least, it was for me.

Here is the process that I ended up with (after rewinding my Ubuntu VM about 10 times and starting over). It’s lengthy, so pour yourself some coffee now…

1) As of June, 2013, you will want to use Ubuntu 12.04 LTS (Precise Pangolin). When you do the initial setup, install the SSH and LAMP server personalities.

2) Verify your short hostname in both /etc/hostname and /etc/hosts. It’s also a good idea to set the server name in /etc/apache2/httpd.conf now also (ServerName puppet.blah.tld), but that one is optional.

3) Add the puppetlabs package repository to your apt sources.

$ wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb; sudo dpkg -i puppetlabs-release-precise.deb; sudo apt-get update

4) Install the puppet packages – including the pieces to run PuppetMaster via Passenger on Apache. By default, PuppetMaster runs on a less capable web server called WEBrick. If you are doing this for a production environment, use Passenger. I also install phpMyAdmin at this point for future MySQL admin tasks.

$ sudo apt-get install puppet-common puppetdb puppetdb-terminus puppetmaster-common \
puppetmaster puppetmaster-passenger puppet-dashboard phpmyadmin

5) This installs the Puppet-Dashboard webapp in /usr/share/ – we want to change the ownership on this directory so Apache has access to everything.

$ sudo chown -R www-data:www-data /usr/share/puppet-dashboard

6) Browse to your server/phpmyadmin and log in. Create a new database called “dashboard_production” and a new user called “dashboard” that can log in from localhost. Use the “Generate” function for the password, and COPY THAT PASSWORD. You’ll need it in a bit. Make sure that your “dashboard” user does not have any global permissions, but set it up with total permissions specifically on the “dashboard_production” database.

7) Open /usr/share/puppet-dashboard/config/database.yml in your Favorite Text Editor. Scroll down – you will find a block of code specific to the “dashboard_production” database. Paste the password in the appropriate spot.

8) Open /etc/mysql/my.cnf in your FTE, and look for “max_allowed_packet”. Change that value to 32M. Restart the MySQL server

$ service mysql restart

9) Update the following settings in /usr/share/puppet-dashboard/config/settings.yml

 ca_server: [URL of your puppetmaster server]
 enable_inventory_service: true
 inventory_server: [URL of your server]

10) Disable the non-Apache-Passenger puppetmaster job. Open /etc/default/puppetmaster and set

 START=no

11) Enable the background workers for the puppet-dashboard. Open /etc/default/puppet-dashboard-workers and set

 START=yes
 NUM_DELAYED_JOB_WORKERS=[the number of CPU cores on your server]

12) Create a “puppetdb.conf” file in /etc/puppet (NOT in /etc/puppetdb). Set the contents to:

[main]
server = [your server URL]
port = 8081

13) Edit the file /etc/puppet/puppet.conf. There are several sections delineated by [section name] that may exist in this file – on both the puppetmaster server and the puppet agents. We’re going to set up the [master] section first – add the following, and modify [serverurl] to fit your environment.

 node_terminus = exec
 external_nodes = /usr/bin/env PUPPET_DASHBOARD_URL=http://[serverurl]:3000 /usr/share/puppet-dashboard/bin/external_node
 reports = store, http
 reportsurl = http://[serverurl]:3000/reports/upload
 storeconfigs = true
 storeconfigs_backend = puppetdb

14) And then ADD the following [agent] section to the same file. This is how you need to set the puppet.conf file on ALL of your agents that check in to the server – customize [serverurl] and [localmachinehostname].

 [agent]
 server = [serverurl]
 report = true
 pluginsync = true
 certname = [localmachinehostname]

15) You need to edit the /etc/puppet/auth.conf file next. This file contains sections related to what paths are accessible by certain means – SSL key pair, no authentication, locally or from remote systems… etc. They all start with a “path /blah” declaration.

NOTE: The final “path /” section in this file says “block everything else”, which causes any rules placed below it to be ignored. BE SURE to place the following rule ABOVE this last declaration. Yes, I made this mistake and it took me quite a while to figure out what was wrong.

Add this to the bottom of the file, ABOVE the “path /” section.

 path /facts
 auth yes
 method find, search
 allow dashboard

16) Create a “routes” file (/etc/puppet/routes.yaml) with the following contents. The nesting matters – terminus and cache sit under master/facts:

master:
  facts:
    terminus: puppetdb
    cache: yaml

17) Place the following file in your /etc/apache2/sites-enabled directory. Modify the lines with [YOUR SERVER URL] as needed. I have modified this file to disable the host-specific Passenger module activation (it was causing an issue since the Passenger module is loaded globally with the initial install), as well as enabling the http-auth pieces.


18) Set up your http-auth users/password. The first command creates the file (the -c flag) and adds a user, the second command without the -c flag adds a user to the existing file.

$ htpasswd -c /etc/apache2/passwords [username]
$ htpasswd /etc/apache2/passwords [username]

19) cd to /usr/share/puppet-dashboard, and run the following commands:

$ sudo -u www-data rake gems:refresh_specs
$ sudo -u www-data rake RAILS_ENV=production db:migrate  # sets up the initial database schema in MySQL
$ sudo -u www-data rake cert:create_key_pair             # creates a key-pair to link to puppetmaster
$ sudo -u www-data rake cert:request                     # requests a signed key from puppetmaster
$ puppet cert sign dashboard                             # signs the key that dashboard just requested
$ sudo -u www-data rake cert:retrieve                    # retrieves the signed key

20) Finally, reboot the machine. Upon boot, I still have to start the puppetdb service by hand for some reason (I added “START = yes” to the file in /etc/default, but no-go). I’m looking into this, but you can run “sudo service puppetdb start” upon machine boot and be good to go.

Now I just have to create the classes and manifests for our environment. I’ll document as I go.

* Late in May, 2013, I attended the MacAdmins conference for the second time at Penn State. It was once again a VERY worthwhile trip. Kudos to the team that puts this together, and if you – dear reader – have never been to it, I would strongly suggest putting it on your short list of things to do to improve your skills in Mac administration.

Setting your Munki ClientID as part of a Deploy Studio workflow.

1) Use the hostname form. Important fields are “Computer Name” and “Computer Information #4”
2) Ensure that you have the “Configure” step set up in the workflow and are applying those fields
3) Install Munki Tools and your generic ManagedInstalls.pref
4) Include this script – execute after restart. The script will first check the “Computer Information #4” field; if that is empty, it will fall back to the “Computer Name”, setting ClientIdentifier in the ManagedInstalls.plist file to that value. If neither is filled in, it silently exits and touches nothing.

Script is below:

#!/bin/bash

### This script checks for a value in the fourth custom field set by ARD/DeployStudio
### and then the computer name (in the Sharing prefpane).
### If a value exists in the custom field, it sets the Client Identifier in
### ManagedInstalls.plist to that value, otherwise it sets the ClientIdentifier to
### the Computer Name. If neither exists, it exits without touching anything.
### -- Tim Schutt, December 13, 2012

SYSCLIENTID=$(scutil --get ComputerName)
# ARD/DeployStudio stores the custom info fields in com.apple.RemoteDesktop
CUSTCLIENTID=$(defaults read /Library/Preferences/com.apple.RemoteDesktop Text4 2>/dev/null)

if [ -n "$CUSTCLIENTID" ]; then
	defaults write /Library/Preferences/ManagedInstalls ClientIdentifier "$CUSTCLIENTID"
elif [ -n "$SYSCLIENTID" ]; then
	defaults write /Library/Preferences/ManagedInstalls ClientIdentifier "$SYSCLIENTID"
fi

exit 0

OS X – Allow non-admin users (including Network accounts) to manage printers

There seem to be many ways to skin this cat, many of which involve butchering the cupsd.conf file. Doing it that way has always felt a bit like rusty-spoon surgery to me.

Today, I found what I feel is a much more elegant way to do it. Simply add “everyone” to the lpadmin group.

Here’s how:
From a sudo-able account, execute the following:

sudo dseditgroup -o edit -a everyone -t group _lpadmin

Now, any user on your system can manage their own printers.

My only queasiness is the concept of “everyone” rather than “Authenticated users”.

Feedback is welcome.

Reposado, Ubuntu and Deploy Studio – URL Rewriting

After getting Reposado running on my Ubuntu server, the long, custom URLs – unique to each version of OS X – annoyed me. They also prevented me from using this repository from Deploy Studio, because you can’t specify anything other than the base SUS URL in its Software Update tool.

There are good URL Rewriting rules available on the github site for Reposado to correct this, but I had trouble configuring Apache2 under Ubuntu so the rewrites would behave.

These are the steps and final configuration files I ended with.

1) Enable the mod_rewrite engine

$ sudo a2enmod rewrite

2) Place the following file in /etc/apache2/sites-enabled. I named mine 0000-reposado.

<VirtualHost *:8088>
  ServerName default
  DocumentRoot "/Lake/asus/html"
  DirectoryIndex index.html index.php

  ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
  LogLevel warn

  <Directory />
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
  </Directory>

  <IfModule mod_mem_cache.c>
    CacheEnable mem /
    MCacheSize 4096
  </IfModule>
</VirtualHost>

3) Inform Apache that you have a virtual host listening on port 8088 by editing the /etc/apache2/ports.conf file.

Find this:

NameVirtualHost *:80
Listen 80

And add this below it:

NameVirtualHost *:8088
Listen 8088

4) In the html directory in the Reposado space, create a .htaccess file with the following contents.

RewriteEngine On
Options FollowSymLinks
RewriteBase /
RewriteCond %{HTTP_USER_AGENT} Darwin/8
RewriteRule ^index(.*)\.sucatalog$ content/catalogs/index$1.sucatalog [L]
RewriteCond %{HTTP_USER_AGENT} Darwin/9
RewriteRule ^index(.*)\.sucatalog$ content/catalogs/others/index-leopard.merged-1$1.sucatalog [L]
RewriteCond %{HTTP_USER_AGENT} Darwin/10
RewriteRule ^index(.*)\.sucatalog$ content/catalogs/others/index-leopard-snowleopard.merged-1$1.sucatalog [L]
RewriteCond %{HTTP_USER_AGENT} Darwin/11
RewriteRule ^index(.*)\.sucatalog$ content/catalogs/others/index-lion-snowleopard-leopard.merged-1$1.sucatalog [L]
RewriteCond %{HTTP_USER_AGENT} Darwin/12
RewriteRule ^index(.*)\.sucatalog$ content/catalogs/others/index-mountainlion-lion-snowleopard-leopard.merged-1$1.sucatalog [L]

Now, when you deploy a machine with Deploy Studio and use the Software Update feature, it will pull the correct sucatalog based on the OS you’re updating. Just like Apple’s servers.
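As a sanity check on the rules, the mapping they implement can be sketched as a small lookup table (a hypothetical helper – the catalog paths come straight from the rules above):

```python
import re

# Darwin kernel major version -> Reposado catalog path, mirroring the
# RewriteCond/RewriteRule pairs above. {0} stands for whatever suffix the
# client requested between "index" and ".sucatalog".
CATALOGS = {
    8:  'content/catalogs/index{0}.sucatalog',
    9:  'content/catalogs/others/index-leopard.merged-1{0}.sucatalog',
    10: 'content/catalogs/others/index-leopard-snowleopard.merged-1{0}.sucatalog',
    11: 'content/catalogs/others/index-lion-snowleopard-leopard.merged-1{0}.sucatalog',
    12: 'content/catalogs/others/index-mountainlion-lion-snowleopard-leopard.merged-1{0}.sucatalog',
}

def catalog_for(user_agent, suffix=''):
    """Return the catalog path a client with this User-Agent is steered to."""
    m = re.search(r'Darwin/(\d+)', user_agent)
    if m and int(m.group(1)) in CATALOGS:
        return CATALOGS[int(m.group(1))].format(suffix)
    return None
```

A Snow Leopard client, for example, identifies itself as Darwin/10 and gets the leopard-snowleopard merged catalog, exactly as the third rule pair dictates.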