Ubuntu-hosted munki2 repo missing icons in Managed Software Center

The default Apache2 configuration on Ubuntu aliases “/icons” to a system-wide directory.

If Managed Software Center is not picking up the icons you have specified for your install packages, add the following into your virtual host configuration:

Alias /icons/ [your repo path root]/icons/
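For example, if your repo root is /usr/local/munki_repo (an assumed path – substitute your own), the vhost ends up with something like the following. Depending on your global Apache configuration, you may also need a Directory block granting access:

Alias /icons/ /usr/local/munki_repo/icons/

<Directory /usr/local/munki_repo/icons/>
  Order allow,deny
  Allow from all
</Directory>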

A simple service apache2 reload and you should be all set.

Puppet, PuppetDB and Puppet-Dashboard setup on Ubuntu

In May 2012, I learned about Munki at the PSU MacAdmins conference*, and subsequently spent the next year learning, configuring, and implementing it on our campus. It has been a very productive addition to our client management toolset – a godsend, really. However, I have been cogitating on how to improve – how to get to “SU Mac Deployment v2.0”.

So this is the year of “Puppet”.

First off, let me say this: I had my first Munki Server set up in a couple hours – tops. I had the MunkiWebAdmin interface set up in a couple hours the first time. To get a functional Puppet server set up, configured, talking to clients and correctly running the Web interface took me the better part of a week. There are many components to this system and they have to talk to each other. While the documentation on the PuppetLabs website is pretty complete, there are so many components that the simple task of filtering out what you need for a basic setup is non-trivial. At least, it was for me.

Here is the process that I ended up with (after rewinding my Ubuntu VM about 10 times and starting over). It’s lengthy, so pour yourself some coffee now…

1) As of June, 2013, you will want to use Ubuntu 12.04 LTS (Precise Pangolin). When you do the initial setup, install the SSH and LAMP server personalities.

2) Verify your short hostname in both /etc/hostname and /etc/hosts. It’s also a good idea to set the server name in /etc/apache2/httpd.conf now as well (ServerName puppet.blah.tld), but that step is optional.
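For example, with a server named puppet in the blah.tld domain (the IP here is a placeholder), the two files would contain something like:

 # /etc/hostname
 puppet

 # /etc/hosts
 127.0.0.1     localhost
 192.168.1.50  puppet.blah.tld  puppet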

3) Add the puppetlabs package repository to your apt sources.

$ wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb; sudo dpkg -i puppetlabs-release-precise.deb; sudo apt-get update

4) Install the puppet packages – including the pieces to run PuppetMaster via Passenger on Apache. By default, PuppetMaster runs on a less capable web server called WEBrick. If you are doing this for a production environment, use Passenger. I also install phpMyAdmin at this point for future MySQL admin tasks.

$ sudo apt-get install puppet-common puppetmaster puppetmaster-common \
puppetmaster-passenger puppetdb puppetdb-terminus puppet-dashboard phpmyadmin

5) This installs the Puppet-Dashboard webapp in /usr/share/ – we want to change the ownership of this directory so Apache has access to everything.

$ sudo chown -R www-data:www-data /usr/share/puppet-dashboard

6) Browse to your server/phpmyadmin and log in. Create a new database called “dashboard_production” and a new user called “dashboard” that can log in from localhost. Use the “Generate” function for the password, and COPY THAT PASSWORD. You’ll need it in a bit. Make sure that your “dashboard” user does not have any global permissions, but grant it full permissions specifically on the “dashboard_production” database.
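If you’d rather skip phpMyAdmin for this step, the equivalent from the mysql command line looks roughly like this (the password is a placeholder – paste in your generated one):

$ mysql -u root -p <<'SQL'
CREATE DATABASE dashboard_production CHARACTER SET utf8;
CREATE USER 'dashboard'@'localhost' IDENTIFIED BY 'your-generated-password';
GRANT ALL PRIVILEGES ON dashboard_production.* TO 'dashboard'@'localhost';
FLUSH PRIVILEGES;
SQL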

7) Open /usr/share/puppet-dashboard/config/database.yml in your Favorite Text Editor. Scroll down – you will find a block of code specific to the “dashboard_production” database. Paste the password in the appropriate spot.
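When you’re done, the production block should look roughly like this (structure from memory – your file’s layout may differ slightly):

 production:
   database: dashboard_production
   username: dashboard
   password: your-generated-password
   encoding: utf8
   adapter: mysql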

8) Open /etc/mysql/my.cnf in your FTE, and look for “max_allowed_packet”. Change that value to 32M, then restart the MySQL server:

$ sudo service mysql restart
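For reference, the setting lives under the [mysqld] section of my.cnf:

 [mysqld]
 max_allowed_packet = 32M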

9) Update the following settings in /usr/share/puppet-dashboard/config/settings.yml

 ca_server: [hostname of your puppetmaster server]
 enable_inventory_service: true
 inventory_server: [hostname of your server]

10) Disable the non-Apache-Passenger puppetmaster job. Open /etc/default/puppetmaster and set

 START=no

11) Enable the background workers for the puppet-dashboard. Open /etc/default/puppet-dashboard-workers and set

 START=yes
 NUM_DELAYED_JOB_WORKERS=[the number of CPU cores on your server]

12) Create a “puppetdb.conf” file in /etc/puppet (NOT in /etc/puppetdb). Set the contents to:

[main]
server = [your server hostname]
port = 8081

13) Edit the file /etc/puppet/puppet.conf. There are several sections delineated by [section name] that may exist in this file – on both the puppetmaster server and the puppet agents. We’re going to set up the [master] section first – add the following, and modify [serverurl] to fit your environment.

 [master]
 node_terminus = exec
 external_nodes = /usr/bin/env PUPPET_DASHBOARD_URL=http://[serverurl]:3000 /usr/share/puppet-dashboard/bin/external_node
 reports = store, http
 reportsurl = http://[serverurl]:3000/reports/upload
 storeconfigs = true
 storeconfigs_backend = puppetdb

14) And then ADD the following section to the same file. This is how you need to set up the puppet.conf file on ALL of your agents that check in to the server – customize [serverurl] and [localmachinehostname].

 [agent]
 server = [serverurl]
 report = true
 pluginsync = true
 certname = [localmachinehostname]

15) You need to edit the /etc/puppet/auth.conf file next. This file contains sections related to what paths are accessible by certain means – SSL key pair, no authentication, locally or from remote systems… etc. They all start with a “path /blah” declaration.

NOTE: The final “path /” section in this file says “block everything else”, which causes any rules placed below it to be ignored. BE SURE to place the following rule ABOVE this last declaration. Yes, I made this mistake and it took me quite a while to figure out what was wrong.

Add this to the bottom of the file, ABOVE the “path /” section.

 path /facts
 auth yes
 method find, search
 allow dashboard

16) Create a routes file at /etc/puppet/routes.yaml with the following contents:

---
master:
  facts:
    terminus: puppetdb
    cache: yaml

17) Place the following file in your /etc/apache2/sites-enabled directory. Modify the lines with [YOUR SERVER URL] as needed. I have modified this file to disable the host-specific Passenger module activation (it was causing an issue, since the Passenger module is loaded globally with the initial install), as well as to enable the http-auth pieces.

puppet_dashboard_vhost
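For reference, a minimal Passenger vhost for Dashboard on its default port 3000 looks something like the sketch below – this is not the exact file, so adapt it to your environment. It assumes the Passenger module is loaded globally (as noted above) and wires the http-auth pieces to the password file created in the next step:

Listen 3000
<VirtualHost *:3000>
  ServerName puppet.blah.tld
  DocumentRoot /usr/share/puppet-dashboard/public/
  <Directory /usr/share/puppet-dashboard/public/>
    Options None
    AllowOverride None
    Order allow,deny
    Allow from all
    AuthType Basic
    AuthName "Puppet Dashboard"
    AuthUserFile /etc/apache2/passwords
    Require valid-user
  </Directory>
</VirtualHost>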

18) Set up your http-auth users/password. The first command creates the file (the -c flag) and adds a user, the second command without the -c flag adds a user to the existing file.

$ sudo htpasswd -c /etc/apache2/passwords [username]
$ sudo htpasswd /etc/apache2/passwords [username]

19) cd to /usr/share/puppet-dashboard, and run the following commands:

$ sudo -u www-data rake gems:refresh_specs
$ sudo -u www-data rake RAILS_ENV=production db:migrate  # sets up the initial database schema in MySQL
$ sudo -u www-data rake cert:create_key_pair             # creates a key-pair to link to puppetmaster
$ sudo -u www-data rake cert:request                     # requests a signed key from puppetmaster
$ sudo puppet cert sign dashboard                        # signs the key that dashboard just requested
$ sudo -u www-data rake cert:retrieve                    # retrieves the signed key

20) Finally, reboot the machine. Upon boot, I still have to start the puppetdb service by hand for some reason (I set START=yes in the file in /etc/default, but no-go). I’m looking into this, but you can run “sudo service puppetdb start” after each boot and be good to go.
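As a stopgap, appending the start command to /etc/rc.local (above the final exit 0 line) should handle it at boot – an assumption I haven’t verified yet:

# in /etc/rc.local, above the final "exit 0"
service puppetdb start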

Now I just have to create the classes and manifests for our environment. I’ll document as I go.

* Late in May, 2013, I attended the MacAdmins conference for the second time at Penn State. It was once again a VERY worthwhile trip. Kudos to the team that puts this together, and if you – dear reader – have never been to it, I would strongly suggest putting it on your short list of things to do to improve your skills in Mac administration.

Setting your Munki ClientID as part of a Deploy Studio workflow.

1) Use the hostname form. Important fields are “Computer Name” and “Computer Information #4”
2) Ensure that you have the “Configure” step set up in the workflow and are applying those fields
3) Install Munki Tools and your generic ManagedInstalls.plist
4) Include this script – execute after restart. The script will first check the “Computer Information #4” field; if that is empty, it will fall back to the “Computer Name”, and it sets ClientIdentifier in the ManagedInstalls.plist file to that value. If neither is filled in, it silently exits and touches nothing.

Script is below:

#!/bin/bash

### This script checks for a value in the fourth custom field set by ARD/DeployStudio
### and then the computer name (in the Sharing prefpane).
### If a value exists in the custom field, it sets the Client Identifier in 
### ManagedInstalls.plist to that value, otherwise it sets the ClientIdentifier to
### the Computer Name.
### -- Tim Schutt, December 13, 2012  taschutt@syr.edu

SYSCLIENTID=$(scutil --get ComputerName)
CUSTCLIENTID=$(defaults read /Library/Preferences/com.apple.RemoteDesktop Text4 2>/dev/null)  # suppress the error if the key is unset

if [ -n "$CUSTCLIENTID" ]
then
	defaults write /Library/Preferences/ManagedInstalls ClientIdentifier "$CUSTCLIENTID"
elif [ -n "$SYSCLIENTID" ]
then
	defaults write /Library/Preferences/ManagedInstalls ClientIdentifier "$SYSCLIENTID"
fi

exit 0
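You can verify the result on a deployed machine with:

$ defaults read /Library/Preferences/ManagedInstalls ClientIdentifier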

Deploy Studio Script – hide Bootcamp partition from OS X.

Place this in your deploy studio workflow, and defer execution until first boot.

#!/bin/bash

## Script by Tim Schutt, 2012 ##

##########################################################
## find the bootcamp partition MUST BE NAMED "BOOTCAMP" ##
##########################################################

BC=$(diskutil list | grep BOOTCAMP | grep -o 'disk[0-9][0-9]*s[0-9][0-9]*')

## exit quietly if no BOOTCAMP partition exists on this machine
[ -z "$BC" ] && exit 0

##########################################################
## find the UUID of the previous bootcamp partition.    ##
##########################################################

UUID=$(diskutil info "$BC" | grep -o '[0-9a-zA-Z]\{8\}-[0-9a-zA-Z]\{4\}-[0-9a-zA-Z]\{4\}-[0-9a-zA-Z]\{4\}-[0-9a-zA-Z]\{12\}')

##########################################################
## Disable auto-mounting of Bootcamp partition. You can ##
## still mount the partition manually with Disk Utility ##
##########################################################

echo "UUID=$UUID none ntfs ro,noauto 0 0" > /etc/fstab

exit 0
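To undo this on a machine later, remove that line from /etc/fstab – or, since OS X doesn’t ship an fstab and this is likely its only entry, just delete the file:

$ sudo rm /etc/fstab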

OS X – Allow non-admin users (including Network accounts) to manage printers

There seem to be many ways to skin this cat, most of which involve butchering the cupsd.conf file. It’s always felt a bit like rusty-spoon surgery to me doing it that way.

Today, I found what I feel is a much more elegant way to do it. Simply add “everyone” to the _lpadmin group.

Here’s how:
From a sudo-able account, execute the following:

sudo dseditgroup -o edit -a everyone -t group _lpadmin

Now, any user on your system can manage their own printers.

My only queasiness is the concept of “everyone” rather than “Authenticated users”.
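If you share that queasiness, one narrower alternative is to add the local staff group plus the built-in netaccounts group for network users instead – assuming netaccounts actually covers your directory accounts, which is worth verifying in your environment:

sudo dseditgroup -o edit -a staff -t group _lpadmin
sudo dseditgroup -o edit -a netaccounts -t group _lpadmin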

Feedback is welcome.

Reposado, Ubuntu and Deploy Studio – URL Rewriting

After getting Reposado running on my Ubuntu server, the long custom URLs – unique to each version of OS X – annoyed me. They also prevented me from using this repository from Deploy Studio, because you can’t specify anything other than the base SUS URL in its Software Update tool.

There are good URL-rewriting rules available on the Reposado GitHub site to correct this, but I had trouble configuring Apache2 under Ubuntu to get the rewrites to behave.

These are the steps and final configuration files I ended up with.

1) Enable the mod_rewrite engine

$ sudo a2enmod rewrite

2) Place the following file in /etc/apache2/sites-enabled. I named mine 0000-reposado.

<VirtualHost *:8088>
  ServerName default
  ServerAdmin taschutt@syr.edu
  DocumentRoot "/Lake/asus/html"
  DirectoryIndex index.html index.php

  ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
  LogLevel warn

  <Directory />
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
  </Directory>

  <IfModule mod_mem_cache.c>
    CacheEnable mem /
    MCacheSize 4096
  </IfModule>
</VirtualHost>

3) Inform Apache that you have a virtual host listening on port 8088 by editing the /etc/apache2/ports.conf file.

Find this:

NameVirtualHost *:80
Listen 80

And add this below it:

NameVirtualHost *:8088
Listen 8088

4) In the html directory in the Reposado space, create a .htaccess file with the following contents.

RewriteEngine On
Options FollowSymLinks
RewriteBase /
RewriteCond %{HTTP_USER_AGENT} Darwin/8
RewriteRule ^index(.*)\.sucatalog$ content/catalogs/index$1.sucatalog [L]
RewriteCond %{HTTP_USER_AGENT} Darwin/9
RewriteRule ^index(.*)\.sucatalog$ content/catalogs/others/index-leopard.merged-1$1.sucatalog [L]
RewriteCond %{HTTP_USER_AGENT} Darwin/10
RewriteRule ^index(.*)\.sucatalog$ content/catalogs/others/index-leopard-snowleopard.merged-1$1.sucatalog [L]
RewriteCond %{HTTP_USER_AGENT} Darwin/11
RewriteRule ^index(.*)\.sucatalog$ content/catalogs/others/index-lion-snowleopard-leopard.merged-1$1.sucatalog [L]
RewriteCond %{HTTP_USER_AGENT} Darwin/12
RewriteRule ^index(.*)\.sucatalog$ content/catalogs/others/index-mountainlion-lion-snowleopard-leopard.merged-1$1.sucatalog [L]
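You can sanity-check the rewrites from any machine with curl by faking the user agent – for example, a Darwin/12 (Mountain Lion) client should get handed the mountainlion merged catalog (assuming the catalog files exist in your repo):

$ curl -sI -A "Darwin/12" http://[yourserver]:8088/index.sucatalog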

Now, when you deploy a machine with Deploy Studio and use the Software Update feature, it will pull the correct sucatalog based on the OS you’re updating. Just like Apple’s servers.

Yay.

Windows 7 imaging with FOG – how I got it to work (85% of the time)

http://fogproject.org

Tasked with deploying about 90-100 computers this summer (~60 PCs and ~35 Macs), it became clear to me that building each by hand was NOT a valid option. The building where I reside has a 20+ year-old network. The simple act of applying Windows updates after the first install was a multi-hour proposition on each system. There was simply no time.

So armed with a basic gigabit switch, a reasonably good PC loaded with Ubuntu, FOG, two network cards, and enough drive space to store the system images, I got started.

I will detail my FOG server configuration in a future post.

Almost immediately, I discovered that FOG works like a champ with Windows XP, but not with Windows 7. It was a bumpy ride working the kinks out of this system, but in the end it was a considerable success and worth the effort.

NOTE: I had VERY similar hardware for this rollout, so I didn’t have to generalize the installation / get into sysprep. I ran out of time this summer, but plan on tackling that in the coming months.

Here is the distilled process I settled on:

  • Install Win 7  – update update update.
  • Disable hibernation (open a CMD window -> “powercfg -h off”)
  • Install Office. Third party software beyond this was causing issues for me, so I limited the image to Office only. Update update update.
  • Browse to [yourFOGserver]/client – download FOG Client Service and Fogprep. Install FOG service and point it at your server.
  • Turn off Virtual Memory (Computer -> Properties -> Advanced System Settings -> Performance “Settings…” -> Advanced -> Virtual Memory “Change…” -> uncheck “Automatically manage…”)
  • Defrag the hard drive – I used Defraggler http://www.piriform.com/defraggler
  • Shrink the NTFS partition (Computer -> Manage -> Disk Management -> Rt. click on partition -> Shrink Volume). When prompted for the new volume size, add 2GB to it. You will appreciate this breathing room.
  • Run fogprep as admin and immediately shut the system down. If you boot from this volume before capturing the image, you will need to run fogprep again.
  • On your FOG server: create a new image – Multi partition single disk, non-resizable.
  • PXE boot your client – select the “full system inventory…” option, mark it as a Windows 7 system, and assign it to the image you just created. DO NOT IMAGE IT YET. Shut it down.
  • On your FOG server – select the machine from inventory, click on “Basic Tasks”, and schedule an image upload.
  • Restart your client machine and PXE boot – it will automatically capture the image from the hard drive and upload it to the server.

Now, you should have a working template image that can be deployed to similar hardware.

When you deploy this to a new system, PXE boot the system to the FOG menu, inventory the machine and assign it to that image. Then choose to image the system immediately. It will restart (you may need to tell it once again to boot from the network) and pull the image down to the system.

Once the new system has the image, simply expand the NTFS partition and re-enable virtual memory – in that order. I typically leave hibernation disabled.
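If you’d rather script that last mile than click through Disk Management, something like this from an elevated CMD prompt should do it – a sketch, so test it on one machine before trusting it:

REM extend C: into the free space left by the earlier shrink
(echo select volume C & echo extend) > extend.txt
diskpart /s extend.txt

REM hand page-file management back to Windows
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=True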