Friday, October 19, 2012

Testing shell methods with PHPUnit

Testing is fun. Testing global functions isn't, especially when those global functions use pass-by-reference parameters.

I write a lot of command line applications with PHP and I often must deal with exec, shell_exec, and passthru. This causes major headaches when I need to simulate a behavior for the purposes of testing.

In Bart, we have a class named Shell, which wraps many of the global shell and system functions of the PHP language. It's also a great place to collect any methods that don't come out of the box with PHP. Combining the Shell class with Diesel lets me mock or stub out pretty much all of my interactions with the shell. There's one catch though: PHPUnit doesn't support mocking methods that have pass-by-reference parameters, and both exec and passthru return information back to the caller via this approach.

Enter the MockShell class. MockShell is a small stub class exposing an exec and passthru method. It also exposes two other methods to configure the commands expected by either of these methods for the duration of a unit test. Finally, it provides a verify() method to be called upon completion of a test run to verify that its expectations were met.

MockShell takes a mocked Shell class instance as a parameter. Any method calls sent to the MockShell that it doesn't understand are passed along to the mocked Shell. The utility of this is that you need to provide only one stubbed object when you configure Diesel for the test. See below,

function testSymlinksCreated()
{
 // Standard PHPUnit mock handles everything except exec and passthru
 $phpuShell = $this->getMock('Bart\Shell');
 $phpuShell->expects($this->once())
   ->method('mkdir')
   ->with('~/code/nagios/logs', 0777, false);

 // Create the MockShell and supply it the mocked Shell instance
 // so that any calls to methods _other_ than exec and passthru
 // ...may be passed on to that mock.
 $shell = new \Bart\Stub\MockShell($this, $phpuShell);
 $shell->expectExec("cd ~/code/nagios && ln -s /etc/nagios config", array(), 0, null)
   ->expectExec("cd /www && ln -s ~/code/nagios nagios", array(), 0, null);

 $this->registerDiesel('Bart\Shell', $shell);

 // Configure a generator for the nagios app
 $g = new Generator('nagios');
 // Expect that this creates the "logs" directory
 // ...and creates the two symlinks above

 // Verify that the two execs were called
 $shell->verify();
 // PHPUnit will verify the call to mkdir()
}

Check out all the Bart code at,

Saturday, September 29, 2012

Dan Pink on Creativity and Motivation

A friend recently shared Dan Pink's talk on Creativity and Motivation with me. Pink discusses the progression of the 20th century first-world workforce from a majority of simple, task-oriented workers to a majority of cognitive, right-brained knowledge workers in our modern day. He then proposes that this new workforce is motivated not by the same monetary goals as yesterday, but rather by more fulfilling incentives, which he lists as autonomy, mastery, and purpose.

I wholeheartedly agree that the leading thought workers of today are ultimately attracted to these higher causes of self-fulfillment. Yet I differ with his seemingly absolute stance that motivating by these three factors, and removing monetary reward from the picture, is a recipe for success.

Pink provides two extended examples,

  1. Atlassian FedEx Day. FedEx Day at Atlassian is essentially a 24-hour period during which engineers are encouraged to develop whatever code they want, provided it's not something they're currently working on. Eventually, this graduated into a 20% autonomy policy.
  2. Microsoft Encarta versus Wikipedia. Encarta was a multi-year project out of Microsoft, costing them large amounts of money and diligent coordination. Wikipedia promised no money to any contributor nor did it set any schedule for content.

In the case of Atlassian, his claim is that the engineers produce higher quality work faster during autonomous time because they are working for themselves. I agree that this has a lot of merit, but I disagree that this measurement alone can prove his point. In a typical development environment, engineers report to product managers. Product managers set the specifications and timelines. Typically, the communication between the engineer and the product manager is poor, and this poor communication leads to bugs, missed deadlines, and subpar products. Much has been written on this subject (see Agile, XP, Scrum, Kanban). While this isn't the only difference between autonomous and required development, it does provide reasonable justification to question how large a part autonomy plays in the success of FedEx Days.

Next, Encarta versus Wikipedia. Again, I agree that there is much more incentive to contribute to Wikipedia as an individual than there is to work on Encarta as an employee. However, I attribute very little of Encarta's eventual failure to that discrepancy. Three other factors played a very large role in the outcome of the Encarta / Wikipedia showdown,
  1. It happened right around when the internet was becoming ubiquitous.
  2. Wikipedia is a SaaS model: available anywhere; always up to date; social. Contrast that against Encarta which was available only on your machine, valid only at the time of install, and did not provide a straightforward way to share information with everyone.
  3. Wikipedia is free (See Chris Anderson, Freemium).

Ultimately I'm not opposed to his proposal; I'm just not convinced by his two examples.

My feeling is that material gain must still be taken into account. For example, consider a job that pays $200,000 versus a job paying only $40,000 that promises full autonomy in choosing projects and deadlines: very few engineers will choose the latter. At $200,000 against $160,000, you'd potentially have a strong case.

My conclusion is that autonomy, mastery, and purpose drive higher quality innovation, but that the level at which those drivers operate is variable. Furthermore, this debate needs more unassailable data points.

Tuesday, September 4, 2012

Bash Functions in Scope

Bash 4 is a lovely collection of globally shared state and scope. Variables and functions are declared, defined, and often redefined in many places and in a hopefully predictable order.

Here is my go-to list of commands to help narrow down the field when I'm trying to debug things,

  • which is a quick way to find out where a command resides -- if it resides anywhere at all
    • Using the -a option, you can see all of the places where the command could be defined
    • By default, functions and aliases are not searched by which. However, the man page for which suggests you alias which to the following, 
    • (alias; declare -f) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@
  • type is a big step above which and will tell you more in depth information about the command
    • Using the -a option, you can see all of the places where the command could be defined
  • env with no arguments lists the environment variables in scope
  • declare on its own will list every variable and function defined. It can give you a little more information about functions than the methods above.
    • Use the -f option to see the definition of the function
    • Use the -F option to show only the name and attributes.
    • To see where a function is defined (file and line number), you have to enable the extdebug shell option. This is probably my favorite!
      • shopt -s extdebug && declare -F $yourFunctionName
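
Here's a quick sketch of that last trick in action (the function name below is made up for illustration):

```shell
#!/bin/bash
# A throwaway function we'll locate later (name is illustrative)
rotate_logs() { echo "rotating"; }

# Plain declare -F prints only the function's name
declare -F rotate_logs

# With extdebug enabled, declare -F also reports the line number
# ...and source file where the function was defined
shopt -s extdebug
declare -F rotate_logs   # e.g. "rotate_logs 3 ./myscript.sh"
```

The same trick works for functions picked up from sourced dotfiles, which is usually where the mystery definitions are hiding.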

Sunday, July 29, 2012

PHP Dependency Injection, Part 2

A few months ago I blogged about Diesel, a dependency injection framework for PHP. Since then, it's gotten a lot of use in our internal PHP code as well as our open source project, Bart.

It's also gotten a good deal of feedback from the rest of the team and been the subject of many a debate. The product of all that feedback and debate is a new version of Diesel that is easier to use and integrates more seamlessly into your existing class structure.

One of my initial goals with Diesel was that it would provide a non-singleton nature, both from a per-class perspective and from the dependency registry point of view, such that users would be able to define custom dependencies, and chains of these, when creating dependent classes. However, in practice, we observed that this was pretty much never necessary outside of unit tests.

Since we're able to reset pretty much any state we want in between tests, we concluded that the added complexity, not to mention burden on class signatures, was unnecessary and the Diesel interface could be simplified to static methods. The result is below,

 /**
  * Create an instance of class
  * @param string $className Name of the class
  * @param array $arguments Any arguments needed by the class
  * @return $className New instance of $className($arguments)
  */
 public static function create($className, array $arguments = array())

 /**
  * Get singleton instance of this class
  * @param string $className
  * @return $className Singleton instance of class
  */
 public static function singleton($className)

So users of Diesel now only need to call either Diesel::create($className) or Diesel::singleton($className). If nothing is registered (which should be the case for production code), then Diesel will use reflection to create a new instance of the class with the supplied arguments. If a singleton is desired, a new instance is created and cached for any future requests.

This is much simpler than before, when all dependent classes had to define the dieselify() method and accept Diesel instances in their constructors.

For tests, stubs or mocks can be registered via anonymous functions that can verify the arguments and then return the stub. Another improvement is that Diesel now supports enforcing, during test time, that a method be registered for any requested class. That is, if a new instance or singleton is requested during tests for which no instantiation method exists, an exception is raised. This will prevent accidental creation of real classes during tests.

Monday, July 23, 2012

PHP Quality Metrics with Jenkins

I recently took over management of a few PHP projects and my first order of business was enhancing the code quality. Code quality means different things to different people. In my case it means relevant, thoughtful test coverage and well balanced code (a la Robert C Martin). I scoured the internet and came up with some good tools for measuring both.

See, and there's even a book!

We're already using PHPUnit for our tests, but I wasn't doing any coverage analysis. That was easy to add. Next, I started using all the mess detection and Robert-C-Martin-style analysis tools. All in all, I was able to follow the directions and put a build shell script together without too much trouble. The pieces to which I had to pay some attention were the machine setup, the Jenkins project configurations, and the PHPUnit exclude options. See my gist for a sample PHPUnit configuration to exclude files over which you have no control, or which don't matter, in your project.
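
For reference, the shape of the exclusion I mean looks roughly like the following phpunit.xml fragment (the paths here are illustrative, not from my actual project):

```xml
<phpunit>
  <filter>
    <whitelist>
      <!-- Only our own source counts toward coverage -->
      <directory suffix=".php">src</directory>
      <exclude>
        <!-- Third-party code we don't control -->
        <directory>src/vendor</directory>
        <!-- Generated code that doesn't matter for the metrics -->
        <directory>src/generated</directory>
      </exclude>
    </whitelist>
  </filter>
</phpunit>
```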

To save myself some trouble I put the installation steps and quality build steps into Bart, our open source project, on which all of our other projects depend (via composer). The script can be found here,

Finally, my Jenkins config file template can be found on GitHub. You can replace the template variables with their respective values for your project.

Comments welcome!

Saturday, July 7, 2012

Keep It Simple, Stupid

I had a great time last week at DevOps Days in Sunnyvale. The presentations were great, I attended some thoughtful breakout sessions, and I learned about some pretty cool new tools. Much of this is being covered by the rest of the folks there, so I won’t reiterate it here. What I did want to discuss is the underlying theme of many of the successful tools, methods, and ideas that were presented: keeping it simple.

We’ve all heard it before: keep it simple, stupid. However, almost every time I hear it, we’re talking only about software or process. Yes, that makes sense. We can observe that many of the most successful tools and processes in our industry succeeded because they were simple. What struck me, though, extends beyond just the software and the tools. Keeping it simple succeeded on a grander scale because of human nature, and that human nature applies to so much more than just software and process.

Many, if not all, of us involved in DevOps or pragmatism have a certain mindset. We are passionate, outgoing (albeit perhaps only in technical circles), and curious. We are a minority. The success of a new process or tool within our group does not promise success anywhere else. The majority may not have the drive or the passion, or they may simply lack the time. To achieve success with that majority, the new system must cater to those characteristics.

Consider the technical reader’s preferred source of knowledge. Only a small percentage of people I see nowadays still read entire technical books. However, innumerable people seem to be reading blogs, feeds, hacker news, etc. Why is that? It’s probably because a blog isn’t complicated. A blog is short, to the point, and often summarizes a longer (harder to read) technical piece.

In the non-technical world, consider cooking. In much the same way, few people I know nowadays cook elaborate (read: prep time > 25 minutes) meals for themselves. The norm seems to be defrostable meals, quick prep times, or eating out. And how many young couples buy cookbooks and use them versus just searching the web for simple recipes?

My proposition is that the adoption of new process, tools, or approaches to thinking is driven by a minority and succeeds when those new processes, tools, or ways of thinking are presented to the majority in a simple, digestible fashion.

If we want to spread the adoption of DevOps (cultural changes, new tools, and states of mind) throughout our industry, then we must commit ourselves to keeping things simple. I fear that feature-heavy tools with long technical manuals, lengthy technical books, or new groupthink will not be readily adopted or consumed. Rather, their wondrous benefits and knowledge will stay locked in their manuals, between their covers, or in our minds alone, their collective surfaces barely scraped by some curious, but time-strapped or unmotivated, souls.

Thursday, February 9, 2012

Introducing Diesel - PHP Dependency Injection

PHPUnit made unit testing PHP an actual pleasant experience, but there's still something missing when it comes to generically injecting stubs and mock behavior into your classes when your classes extend beyond simple relationships. For some background on the topic, Martin Fowler's essay on injection is an excellent starting point.

Either you end up with constructors that take endless lists of parameters or with shifty setter methods that can leave your classes-under-test in undesirable states of non-initialization, e.g. $class = new Klass(); $class->setSomeReference($mockedObject);

Diesel was born out of the need to avoid both of these situations in a reusable and easily understandable manner. Diesel is pure PHP and does not rely on attributes or XML files. It's not quite a cake pattern -- PHP does not provide mix-in capabilities -- but does provide a similar concept of defining a default implementation for your production environment use cases and respectively (optionally) granularly stubbing out all of your test use cases.

The Diesel system itself is one small PHP class. It works by relying on some static cooperation from each dependent class to implement a method which registers all of its production dependencies. For non-production use cases (i.e. testing), it relies on each test to configure a non-static Diesel instance with each dependency for the given class under test -- most commonly each dependency will be stubbed using PHPUnit::getMock().

Dependencies for a class may be registered statically or locally, where a local registration is local to the instance upon which it was registered (i.e. no other Diesels will be affected by it). Both registration methods have the same signature, register($owner, $class, $instantiate). Consumers of Diesel can produce their dependent objects by using its non-static factory method, which roughly resembles create($owner, $class) -- further specified later.

Ok, so what does all of this mean? Let's move into a real world example. One of the utilities built into our build and release tools (Bart) project is a "stop the line" git pre-receive hook. The hook simply queries our development Jenkins server for the status of the latest build. If that build passed, then the commit is permitted, otherwise only commits whose message contains "{buildfix}" may proceed. This is explained in more detail on the project's github home page.  The core class behind this feature relies on two other classes: a Jenkins class and a Git class.

In order to test our Stop-the-Line class, we need to stub out method calls to the Jenkins and Git classes. This is a perfect situation for Diesel to inject stub classes.  So let's see how it works below. For the eager, you may find the entire test class at Stop The Line Test.php.

First, we must configure Stop_The_Line to work with Diesel. That means defining the registration method and accepting a Diesel instance to its constructor. Wait! Didn't I say earlier that taking injection classes as constructor parameters was bad? Well, I've concluded that it's only bad to the extent that they produce unmanageable lists of params. In Diesel's case, all of your injection is controlled by only one parameter. Not a bad compromise. So, Ok, back to the code.

class Stop_The_Line {
  // The injection param comes at the end of the constructor
  public function __construct($git_dir, $conf, Diesel $di = null) {
    // Use the default static dependencies?
    $this->di = $di ?: new Diesel();

    // Use Diesel to produce an instance of a Jenkins Job
    // Notice how Jenkins params are passed in optional 3rd param here
    $this->job = $this->di->create($this, 'Jenkins_Job', array(
      'host' => $conf['host'],
      'job_name' => $conf['job_name'],
      'w' => $w, // ($w is elided in this excerpt)
    ));
  }

  // This method will be automatically called by Diesel IF and ONLY IF
  // ...there is no local or static registration for Stop_The_Line
  public static function dieselify($me) {
    Diesel::register_global($me, 'Git', function($params) {
      return new Git($params['git_dir']);
    });

    Diesel::register_global($me, 'Jenkins_Job', function($params) {
      return new Jenkins_Job($params['host'], $params['job_name'], $params['w']);
    });
  }
}
Now, our production code can use Stop_The_Line with its default dependencies simply by omitting the last parameter to the constructor. Test code can inject instances of Jenkins_Job and Git by passing a suitably contrived Diesel instance as the last parameter to the constructor.

class Stop_The_Line_Test extends Bart_Base_Test_Case {
  public function testStopTheLine() {
    $job_name = 'the build';
    $conf = array('host' => '...', 'job_name' => $job_name);

    // This is the Diesel ONLY for this test i.e. NO other tests
    // ...will be affected by the dependencies it defines
    $di = new Diesel();

    $this->configureJenkinsJob($di, $job_name, $conf);
    $this->configureGit($di);

    // Our contrived Diesel will be used when STL produces the Jenkins Job
    // ...and its Git instance
    $stl = new Stop_The_Line('.git', $conf, $di);

    // We expect the line to stop because BOTH checks fail:
    // 1. Jenkins build failed,
    // 2. Commit message did not contain {buildfix}
    $this->assert_throws('Exception', 'Jenkins not healthy', function() use($stl) {
      // ...invoke the hook on $stl (the call itself is elided in this excerpt)
    });
  }

  private function configureJenkinsJob(Diesel $di, $job_name, $conf) {
    // Use PHPUnit to create a stub job
    $mock_job = $this->getMock('Jenkins_Job', array(), array(), '', false);
    // And set it up to say the last build failed
    // ...(the stubbing is elided in this excerpt)

    // Now register this stub job for ONLY this Diesel instance
    $phpu = $this;
    $di->register_local('Git_Hook_Stop_The_Line', 'Jenkins_Job',
      function($params) use($phpu, $conf, $job_name, $mock_job) {
        $phpu->assertEquals($job_name, $params['job_name'],
            'Jenkins job name did not match');

        $phpu->assertEquals($conf['host'], $params['host'],
            'Expected host to match conf');

        return $mock_job;
      });
  }

  private function configureGit(Diesel $di) {
    $mock_git = $this->getMock('Git', array(), array(), '', false);

    // Stop the Line checks if commit message contains {buildfix}
    // Let's see what happens when it doesn't
    $mock_git->expects($this->any())
      ->method('get_commit_msg') // (method name illustrative; elided in the original)
      ->will($this->returnValue('The commit message'));

    $di->register_local('Git_Hook_Stop_The_Line', 'Git',
      function($params) use($mock_git) {
        return $mock_git;
      });
  }
}
So as you can see, Diesel lets you granularly control the injection of multiple dependencies per system under test, per test. Moreover, it does this in a pure-PHP fashion, giving you programmatic power that just isn't offered (easily or transparently) by XML. In contrast to an annotation-based system that injects dependencies via object reflection, Diesel allows you to call object constructors naturally and allows those constructors to completely configure themselves in a straightforward manner. A reflection-based system requires you to have empty constructors and hides actual implementations from the developer, which can lead to misunderstandings and bugs.

More details can be found at the link to the test I provided above. My code above was extracted from there and then tweaked to make it a little simpler (i.e. maybe I made some mistakes). Also, you can see examples of how Diesel can be used within a chain of inheritance and for multiple injected classes. In fact, I encourage you to do so, as that is where the full utility of Diesel really shines.

Thursday, February 2, 2012

Encode AVI for iPad (for Free)

I wanted to convert an old movie of mine to m4v so that I could watch it on my iPad. I was unable to import it into my iTunes since the format didn't match, so I had to convert it to a format that makes iTunes happy -- namely mpeg4.

Since I didn't want to pay for Handbrake (and they don't have a default iPad format), I decided to just do it myself using ffmpeg.

I am on a Mac, so I needed to install ffmpeg. The easiest way to do that was to use Homebrew. My first attempt to install failed because Homebrew had some issues. As it turned out, I had to uninstall Homebrew and uninstall MacPorts (don't forget sudo rm /usr/local/bin/{brew,port})! I went through some drama with my Ruby installation (1.8.7), since Homebrew wasn't loading, which involved using rvm to install 1.9.3, then 1.9.2, and then going back to my system install, which suddenly started working again with the brew install script.

Next: brew install ffmpeg. This got me *all* the required dependencies for ffmpeg in one go! Awesome.

Next, the encoding line:

ffmpeg -i myMovieFile.avi -acodec libfaac -ac 2 -ab 160k -s 1024x768 -vcodec libx264 -vpre iPod640 -b 1200k -f mp4 -threads 0 myMovieFile.ipad.aac

which I adapted from,

Wednesday, February 1, 2012

Testing Scalatra with Immutable Specs2

import org.specs2.Specification
import org.specs2.mock.Mockito
import org.scalatra.test.ScalatraTests
import org.eclipse.jetty.testing.ServletTester
import org.specs2.specification.After

class MySpec extends Specification { def is =
  "My spec must" ^
    "verify that scalatra can be tested" ! Specs().assert()

  /**
   * Create a new jetty context each time
   * This lets us mock expectations *per* specification
   */
  case class Specs() extends After with ScalatraTests with Mockito {
    // The servlet tester gives an http context to your tests
    lazy val tester = new ServletTester

    val myMock = mock[MyObject]

    // Register your servlet with the context and inject the mock
    addServlet(new MyServlet(myMock), "/*")

    // don't leave the jetty context hanging around
    def after { tester.stop() }

    // Pay special attention to the "this" keyword, which will provide the
    // ...method in a scope such that "after" may be called for teardown
    def assert() = this {
      // Just some expectation on the mock object
      myMock.get(0) returns true

      get("/") {
        there was one(myMock).get(0)
      }
    }
  }
}

Sunday, January 29, 2012

Puppet: Multiple Classes & Inheritance

I learned about a very cool thing yesterday: Defined Types in Puppet. Previously, I was trying to use parameterized classes for code reusability, but the problem there is that you can only declare one class resource per node. So Puppet would yell if you tried to use the same parameterized class more than once on the same node. That's where defined types come in: they let you declare what is essentially a constructor, like you would for a class, but as a method signature. Then, you can declare as many as you want and give them names.

The tie-in to classes and inheritance is to place the defined type declaration within each of your individual classes.

Defined types are not auto-loaded in the same fashion as the rest of the modules, so it's necessary to define them in a file named "init.pp". Otherwise, they won't load and you'll see lots of errors like, "err: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource my_defined_resource at .../modules/module_name/manifests/my_manifest.pp:8 on node node.f.q.d.n"
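
To make that concrete, here's a rough sketch of a defined type (the type, parameter, and file names are invented for the example):

```puppet
# modules/nagios/manifests/init.pp
# A defined type reads like a constructor signature...
define nagios::monitored_host($hostname, $port = 80) {
  file { "/etc/nagios/conf.d/${name}.cfg":
    content => template('nagios/host.cfg.erb'),
  }
}

# ...and, unlike a parameterized class, it can be declared multiple
# ...times on the same node, each declaration distinguished by its title
nagios::monitored_host { 'web01': hostname => 'web01.example.com' }
nagios::monitored_host { 'db01': hostname => 'db01.example.com', port => 5432 }
```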

Friday, January 13, 2012

Specs2 Mockito - Invoke Injected Callback

Mockito and Specs2 work great together and make testing my code a lot of fun. I ran into a problem yesterday in which I needed to get access to a callback function that was passed into an object that I was mocking. For some concreteness, I was mocking a zookeeper client and needed to invoke the callback function I passed to "zookeeper.watchNode(path, callbackFunction)".

Up until this point, I've only needed to verify arguments from my test. I haven't needed to actually do something with the argument itself. My original approach was to use a custom matcher to capture the parameter using function matchers. But that involved a var and was rather messy.

It turns out there is a MUCH better way to do this, called "argument capture." In my case, my code looked like this:

// (Assumes zkMock was created earlier, e.g. val zkMock = mock[ZookeeperClient];
// ...the type name is illustrative)

// The signature of the zookeeper watch function
val listener = capture[(Option[Array[Byte]]) => Unit]

// create system under test, which will call watchNode internally
val sut = new SUT(zkMock)

// Use the arg capturer to capture the params SUT passed in
there was one(zkMock).watchNode(path, listener)

// Now, call the captured callback (the signature takes bytes)
listener.value(Some("some updated value from zookeeper".getBytes))

sut.someValue must_== "some updated value from zookeeper"

Thursday, January 12, 2012

Specs2: IndexOutOfBoundsException: 30

I was consistently seeing a java.lang.IndexOutOfBoundsException: 30 exception today while working on some code. I had written a spec with a single use case, which was marked as pending because I couldn't test my trait until I fleshed out another piece of code upon which it depended.

I'd seen the error in the past, but couldn't remember how (or if) it went away. Since I was working on a pretty much skeleton spec, I figured now would be a good time to get to the bottom of it.

Well, it was pretty simple! In connecting my use case's text description to the actual test, I was using a "^" instead of a "!". This also explains the other oddity I was seeing (which I had thought was related to pending tests), "No source file found at src/test/scala/...", since the "^" was expecting a String as the result of my test method being run, not a Specs2 result. Go DSLs!

def is =
  "My trait should" ^
  "do what I want" ^ SpecsStub().assert() // WRONG, the "^" should be "!"

case class SpecsStub() {
  def assert() = { /* assertions elided; this spec was pending at the time */ }
}