Friday, December 2, 2011

Does File Exist in PHP Include Path?


We built a wrapper script around PHPUnit that loads and configures our application for testing, so that test authors don't have to pepper their test files with redundant lines like "require_once dirname(...) . '/setup/file/included/in/all/tests.php'". This in itself is actually pretty cool, but that's fodder for another post =) The wrapper is a big time saver, but it comes at the price of a little coupling to PHPUnit's internals, most importantly how it starts itself up.

The startup invocation changed between 3.5 and 3.6, and one of those changes involved how PHPUnit autoloads classes. This introduced a problem because we needed to support both versions of PHPUnit while the upgrade was underway. Fortunately, it turned out that a determining factor between 3.5 and 3.6 was the presence of the "Autoload.php" file in the root of the PHPUnit extension module. Unfortunately, checking for the existence of that file with the standard file_exists() function turned out to be nontrivial: file_exists() doesn't search PHP's include path for relative file paths (unlike require_once()).

After a little research, we solved the issue with a little bit of fun with file_get_contents(), which takes an additional parameter telling it to check the include path for the presence of the file. Additionally, in the interest of performance, it lets you load only the first byte of the file rather than the entire thing.

// Support for newer version of PHPUnit during rollout to all VMs                                                                                         
// This will load the first byte of the file from the include path                                                                                         
// ...if it exists                                                                                                                                        
if (@file_get_contents('PHPUnit/Autoload.php', true, null, 0, 1))                                                                                          
{                                                                                                                                                          
    // File exists, we're on 3.6...
    require 'PHPUnit/Autoload.php';                                                                                                                        
    // ... other stuff
} 
else
{
    // Do stuff for 3.5
}

Check out the docs on file_get_contents for a description of the params.

UPDATE: The function stream_resolve_include_path may do the trick with much less hassle.

Thursday, November 17, 2011

Notes From Implementing Lean Software Development - From Concept to Cash

I read the Poppendiecks' book on implementing lean software development almost a year ago, and I continue to go back to it for advice and insight. I finally brought myself to put together a brief summary of the first half of the book and decided to share it with my blog readers. One of the great things about the book is the set of industry examples the authors provide to illustrate the points they are making. This is especially useful to me, since I often have no debating ammunition aside from my personal beliefs. To make it easy to look up those examples, I'm including the page numbers along with the ideas below.


The book can be purchased from Amazon



  • pg 24 Early specification does NOT reduce waste, it encourages it
  • pg 28 "Do It Right the First Time"
    • This means TEST first to keep bugs OUT. It does NOT mean think of all possible future needs of the feature.
  • pg 32 Forecasts are predictions, not facts. Avoid "analysis paralysis."
    • Build processes that allow quick feedback and responses, rather than building for an uncertain future
  • pg 33 "Plans are useless, but planning is indispensable" - Eisenhower
  • pg 34 Achieving high value, low stress feature delivery is impossible WITHOUT superb quality (in the form of tests)
  • pg 38 Sub-optimizing is BAD
    • E.g. Optimizing the writing of ONLY the feature code, but NOT the test code
    • --result--> More complex code, higher potential for introducing new bugs, longer to write new code
    • E.g. Optimizing ONLY your part of the process. The overall process still drags along and you get frustrated.
    • --solution--> Team ownership!
  • pg 74 7 Wastes
    • Relearning by failing to engage current knowledge
    • --solution?--> Information needs to be accessible. People should not be siloed and should talk often.
  • pg 101 Increase estimation reliability by decreasing variability (i.e. estimate and commit to smaller projects)
  • pg 105 FASTER delivery --> Reduce the number of things in WIP (queue theory)
  • pg 124 Exec thinking is so ingrained that Lean concepts are invisible
  • pg 126 Group is NOT a team until everyone is COMMITTED
    • E.g. Sports - track versus rowing
  • pg 150 Story by Rally does a great job highlighting the DRAWBACKS of Technical DEBT incurred by NOT slowing down to address untested code
  • pg 151 You must *commit* to action items coming out of Retrospectives
    • Nothing is accomplished if you only discuss problems.
  • pg 153 More important than processes is LEARNING (understanding), SHARING, and SOLVING PROBLEMS
    • Experience over documentation. 
    • Refined documentation far outweighs garrulous documentation

Sunday, November 13, 2011

HTML5 Drag and Drop - Chrome Not Working?

The W3C standard defines seven event types for drag and drop, http://dev.w3.org/html5/spec/dnd.html#dndevents. Pretty cool to have so many options, but the naive reader may be surprised to learn how they work together. This especially matters for anyone binding to the "drop" event.

The default behavior of the dragover event is described as, "Reset the current drag operation to 'none'". So what does this mean for developers binding to the drop event? It means that unless you prevent the default behavior of dragover (e.g. by calling preventDefault() in its handler), your drop event will never fire. This is not a fun one to learn on your own.

Friday, November 11, 2011

Could not evaluate: Could not retrieve information from source(s)

Oops! I was evaluating a template, NOT a file.

file {
    "/destination/file/path":
          source => template("path/to/template");
}

The key is that "source" should be "content"!


file {
    "/destination/file/path":
          content => template("path/to/template");
}

So, what was happening behind the scenes is that puppet evaluated the template and then tried to use the resulting string (the rendered file contents) as a source path. Could be useful?

Wednesday, November 9, 2011

Reading /etc/rc.status

What Cool Bash Stuff Did I Learn?
  • What does RC Stand For? ==> Resource Control
  • How do I tell vim my file type is Bash? 
    • I can set one of the following in the first or last few lines of the file (vim only scans for modelines near the top and bottom),
      • # vim: set filetype=sh:
      • # vim: syntax=sh
  • How can I make my if()s look prettier? ==> Use {}
    [ -e "$pathToFile" ] || {
       echo >&2 "File $pathToFile does not exist"
       exit 1
    }
    
  • How do I prompt a command to ask me for input? ==> Use "-" to read from stdin
    • cat file-to-prepend - file-to-append > output-file
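That lone dash tells cat to read from stdin at that position; here's a non-interactive sketch of the same idea (file names are just examples), piping in the "typed" part instead of entering it at a prompt:

```shell
# Create a file to append to (demo path)
printf 'appended line\n' > /tmp/to-append.txt

# The lone "-" tells cat to read that slot from stdin; interactively you
# would type the text and press Ctrl+D, here we pipe it in instead
printf 'typed header\n' | cat - /tmp/to-append.txt > /tmp/output.txt

cat /tmp/output.txt
```

Interactively, running `cat - /tmp/to-append.txt` just sits there waiting for your input, which is the "prompting" behavior described above.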

Monday, October 31, 2011

Hello Specs2 JUnit, Farewell Henkelmann TestListener

Out of the box, SBT 0.10.1 doesn't give you test results in JUnit output. This is a problem for anyone using Jenkins to process test results. Fortunately, last November, Christoph Henkelmann released an SBT plugin that hooks into SBT's test execution life cycle, http://henkelmann.eu/junit_xml_listener. This was great, but it had a few bugs: results were occasionally inconsistent, JUnit data ended up in the wrong file, or the XML was badly formatted. These bugs occurred infrequently enough to be only a minor annoyance.

Over this past summer, specs2 added a JUnit output option for its test results. At the time, the documentation didn't fully explain the necessary integration points, so I'll explain what I did to get it working. Most of this information is now available at http://etorreborre.github.com/specs2/guide/org.specs2.guide.Runners.html if you search for "junit".

// Put this in your SBT build file to output test results to both the
// ...console and junit XML files; console output is helpful so that
// ...you can see it in the Jenkins build history
testOptions in Test += Tests.Argument("junitxml", "console")

If you're running from the command line, you can pass the arguments straight to SBT, like

bash $> sbt junitxml

Finally, on the SBT interactive command line, this won't work with the SBT "test" command, but it will work with the SBT "test-only" command.

sbt-command-line>test-only com.company.class.specs -- junitxml

The other cool thing I learned at this time was working with SBT collections. We have several Scala projects that need to share common SBT properties. To accomplish this, we made our own SBT plugin and import all of its settings at the top of each project. One of those settings is testListeners, which attaches the JUnitXmlTestsListener to all our unit test output. Since I wanted to try out the junitxml flag in only one project before fully committing to it, I couldn't outright remove the test listener for all the projects. So, to remove it in just one project, I used SBT's transformation syntax, ~= :


testListeners ~= { (listeners: Seq[TestReportListener]) =>
    listeners filterNot ( _.isInstanceOf[JUnitXmlTestsListener] )
  }




Wednesday, October 26, 2011

Gerrit Code Review - Unpack error Missing unknown

We use the Gerrit code review tool at my company, currently version 2.1.8. Gerrit gives us a lot and is extremely useful for code quality and knowledge sharing (i.e. code reviewing). However, it can often require some TLC. Today was one of those days.

While I was doing some maintenance on our git host today, I noticed that git suggested I prune one of our repos. So, I went ahead and pruned. About half an hour later, one of our developers reported that he couldn't create a new patch set in Gerrit: neither updating an existing change nor creating a new one worked. Pretty soon, I was getting several reports of this from the team. They were all getting the same error:


error: unpack failed: error Missing unknown 3061766be9c324fa47fb4832399b34db5a186276
fatal: Unpack error, check server log
To ssh://git.dev:29418/webapp
 ! [remote rejected] HEAD -> refs/for/master/master (n/a (unpacker error))

The key part of this message is the missing git object, 3061766. The git prune must have identified the object as dangling and no longer necessary, and removed it. However, for some reason, JGit inside of Gerrit still felt the object served some purpose important enough to throw an exception. I tried to find the object in the remote repo and locally, but to no avail; it was history.

After trying pretty much everything I could find on the internet (http://code.google.com/p/gerrit/issues/detail?id=585, http://groups.google.com/group/repo-discuss/browse_thread/thread/cf7095d3dc364c7e/f2c11756a5a0396f) and replacing the jgit jar in my tmp .gerritcodereview directory, I finally found a solution.

The solution was to restore a backup of our git repository to a temp directory and use git cat-file to verify that the object 3061766 was valid. Then I used git verify-pack to find the pack file that contained the commit, and copied that pack and its index file (http://progit.org/book/ch9-4.html) into our real repo. Shazzam! Gerrit started accepting change sets again!

In more detail:

  1. tar -xf backup.tar ./path-to-git-dir
  2. cd path-to-git-dir
  3. git cat-file -t 3061766be9c324fa47fb4832399b34db5a186276
  4. cd objects/pack
  5. ls | xargs git verify-pack -v | grep 3061766be9c324fa47fb4832399b34db5a186276
  6. # back track until you find the pack that contains the commit
  7. cd /git/path-to-git-dir/objects/pack
  8. cp /untarred-repo/objects/pack/...{idx,pack} .



Vim Registers Current File Name

So you guys may remember how I was all obsessed with vim registers a few months ago. Well, today I was typing away at some things and, while in insert mode, I wanted to splat in the name of the file I was currently editing. So I took a risk:

Ctrl+R %     # i.e. Access register %, which holds the name of the current file.

result: the name of the file pasted in!

Pretty cool. 

Tuesday, September 27, 2011

Bash Function Return Values and Exit

For those lucky souls out there programming in bash, you may be making use of the bash subshell trick that allows you to "return" a value from a function.

function echoTwice() {
  echo "$1"
  echo "$1"
}

declare twice=$(echoTwice "woot")
echo "Twice is: $twice"

Will produce the output,

Twice is: woot
woot

(the newline between the two echos survives the command substitution)

The tricky gotcha in this situation involves exiting on error conditions. For example, consider the following enhancement to echoTwice,

function echoTwice() {
  if [ -z "$1" ]; then
    echo "Must provide an argument" >&2
    exit 1
  fi

  echo "$1"
  echo "$1"
}

declare twice=$(echoTwice)
echo "Twice is: $twice"

You would expect to see the error message and nothing else.  However, instead you see

Must provide an argument
Twice is:

Why does this happen? It happens because the assignment to $twice happens via a subshell: a subshell is spawned to evaluate the result of echoTwice, and when that subshell exits early, bash treats it no differently than if the whole function had proceeded. So, rather than quitting your script, processing continues when the subshell completes. How do you accommodate this in your code without resorting to ugly globals?

The solution that I have found is to test if $twice was assigned.

if [ -z "$twice" ]; then
  # Error message emitted by echoTwice prints to STD_ERR and
  # ...will not be consumed by the sub-shell
  exit 1
fi

The reason I can't just check the exit status is that the "declare twice=..." line is itself a command execution and will typically exit with a success status of its own.

EDIT (Nov 2nd, '11): It turns out that you can check the return status! The issue is accurately described above: "declare twice=..." has its own exit status, but that status arises from the bash built-in "declare" itself, NOT from the assignment to the variable (see the "declare" section of http://www.gnu.org/software/bash/manual/bashref.html#Bash-Builtins). So the solution becomes splitting the declaration from the assignment:

declare twice=
twice=$(echoTwice)
if [ $? -ne 0 ]; then
  # Error message emitted by echoTwice prints to STD_ERR and
  # ...will not be consumed by the sub-shell
  exit 1
fi
echo "Twice is: $twice"

Friday, September 16, 2011

Don't Lose Your Phone

I get lots of credit card offers in the mail and in my email. In my life, I'm probably indirectly responsible for the death of an entire tree just to print all the offers I've received (not to mention the energy expended delivering them, the ink for printing, etc.). Today I finally resolved to bring the offers to an end. There is a phone number listed at the bottom of most offers, 1-888-5OPTOUT, which you can call to remove yourself from the lists agencies use to contact you with the great deals. According to the FTC, this number is provided by the National Credit Bureau (http://www.ftc.gov/privacy/protect.shtm).

I called the number and was shocked at the information it exposed about me based on nothing more than the phone number I was calling from. Essentially, by identifying my phone number, the automated service provided me, with no security challenge at all, my home address and full name. Transcript below:

  • NCB: Are you calling from your home phone?
  • Me: Yes
  • NCB: Please verify your address. Is it 123 Street Name, Town, State?
  • Me: Yes
  • NCB: Is your last name "Plumber"?
  • Me: Yes
  • NCB: Is your first name "Joe"?
  • Me: Yes
  • NCB: Please enter your SSN
After this point, I actually had to provide the information to verify it was me. 

So, I suppose that knowing someone's phone number may be enough to find their information online anyway (white pages?), but I was utterly shocked that a service sponsored by the federal government would give your private information away so easily. My conclusion: be extra careful not to lose your phone, and don't give your phone number out too willingly.

Wednesday, June 29, 2011

Git Unpack Error Over HTTP Fetch

Had a problem today with a server fetching the latest changes from our upstream repository over HTTP.


git fetch origin
error: packfile .git/objects/pack/pack-385ce85680e3c3ff129907559101b9a4544a9da0.pack does not match index
error: packfile .git/objects/pack/pack-385ce85680e3c3ff129907559101b9a4544a9da0.pack cannot be accessed


I didn't have much luck finding any advice on google, so I thought I'd post my own solution, which was extremely simple!


rm  .git/objects/pack/pack-385ce85680e3c3ff129907559101b9a4544a9da0.pack
git gc
git fetch origin

Getting pack 385ce85680e3c3ff129907559101b9a4544a9da0
which contains dcbe1aa3d3e564aea30acc55a7df105bfdc586a2
# success

Essentially, in the version of git we run (1.6.3.1), git fetch over HTTP blindly pulls down every pack regardless of whether it is applicable to the downstream checkout. The pack had somehow become corrupted, and an unfetched pack depended on it, so git couldn't resolve the mismatch. Removing the corrupted pack let git re-fetch it along with the dependent pack that followed.

Sunday, May 22, 2011

Don't Miss the CalTrain

I have a bad track record of getting to the CalTrain stop on time. It's a very depressing experience to see the train receding into the distance as I race around the curve to the train stop on my bike. It's even more depressing to instead see the train loading, sprint down the stairs, under the track, and back up the other side (all the while carrying my bike rather than riding down the ramp), get close enough to touch the train as the conductor closes the doors before I can get in, and then watch the train recede into the distance as I stand there panting.

It's clear that the CalTrain expects us to always be on time and gives no quarter. Why? Because the CalTrain is always on time. Isn't it? Actually, no, it's not. For one reason or another, the CalTrain can be as much as an hour late, full, or even canceled.

As passengers, we need to be prepared and in the know. So far, the best resources I've found for being on time and keeping tabs on CalTrain delays are a clock and twitter.com.

The programmer in me hates manual labor, so I decided to automate a little system to help me out. It comes in two parts,
  1. A script to (a) check twitter for tweets @caltrain and (b) make my computer speak and pop up a dialog box with the tweets reminding me to pack up and go catch the train
  2. A schedule to run the script at the same time every day, just in time to catch the train
The script itself comes in two parts: AppleScript and Ruby. The AppleScript hooks into Finder for the dialog box; the Ruby parses Twitter's API response so that I don't have to do it in AppleScript (shudder).

I've posted the scripts up on github.

The following pages helped me out:
Ruby JSON - http://flori.github.com/json/
Twitter Search - http://search.twitter.com/api/
AppleScript - http://www.tee-boy.com/forums/viewtopic.php?f=8&t=76

Wednesday, April 6, 2011

MySQL.rc Configuring Your MySQL Client

I recently wanted to customize the SQL command line prompt so that I'd know the database server to which I was connected. Previously, I'd been using the --prompt option when connecting (e.g. alias dev-db='mysql -h dev-db --prompt "dev-db-mysql>"';), but that gets maintenance heavy as I add more and more databases to my shortcuts.

I resolved to find a more generic way. I tried all sorts of google searches, all of which came up unhelpful. With no choice left, I headed to the MySQL docs. After some clicking around, I came upon this page, http://dev.mysql.com/doc/refman/5.0/en/mysql-commands.html.

This is great news! Apparently, I have several options.
  • /etc/my.cnf - Globally change the mysql client for all users of a given machine
  • ~/.my.cnf - Change for just me on a given machine
  • export MYSQL_PS1 - Change just my prompt without requiring any new files
Since all of the machines I use are already controlled with puppet, it was a simple change for me to add a definition for $MYSQL_PS1 in my profile.
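Here's a quick shell sketch of the last two options (the prompt string and demo path are illustrative; \h and \d are the mysql client's prompt escapes for the server host and current database):

```shell
# Option 1: environment variable; put this in your shell profile
export MYSQL_PS1="\h [\d]> "

# Option 2: per-user config; normally this lives at ~/.my.cnf
# ...(written to /tmp here just for the demo)
cat > /tmp/demo.my.cnf <<'EOF'
[mysql]
prompt=\h [\d]>\_
EOF
```

With either in place, a plain `mysql -h dev-db` gets the host-aware prompt without any per-host aliases.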

Friday, March 25, 2011

Element in Array with Bash?

I recently found myself editing a rather long conditional evaluation in Bash that was essentially comparing a variable, $color, against several valid matches. If $color didn't match any of them, the condition evaluated to true. I was a little disappointed with the prospect of maintaining it and the stretch of ORs and ==s across my screen. In a "real" scripting language, I would just use an array and check if $color was an element. Then I realized that Bash lets me use an array, just a little differently. With a combination of echo and grep, I got the job done.

declare -a valid_colors=( 'green' 'red' 'blue' )

# -v inverts the match, so grep succeeds only when $color is NOT in the
# ...list; -w matches whole words so "re" won't accidentally match "red"
echo "${valid_colors[@]}" | grep -qvw -- "$color"
if [ $? -eq 0 ]; then
  echo "$color is not a valid color" # do your stuff
fi
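If you'd rather avoid grep entirely (and its substring-matching pitfalls when -w is left off), a plain loop does the same membership check; a minimal sketch with an example input:

```shell
declare -a valid_colors=( 'green' 'red' 'blue' )
color='re'   # example input that would fool a substring match

# Compare against each element exactly rather than pattern matching
is_valid=false
for c in "${valid_colors[@]}"; do
  if [ "$c" = "$color" ]; then
    is_valid=true
  fi
done

if [ "$is_valid" = false ]; then
  echo "$color is not a valid color"
fi
```

This trades a process spawn for a few more lines, and exact comparison means no surprises from regex metacharacters in $color either.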

Tuesday, March 8, 2011

Oh Shit! Git Amend Ate My Changes!

Today a developer did a git commit --amend and accidentally overwrote his entire HEAD. (I'm not sure how, but it happened.) I knew that since git saves (what seems like) everything, if we could find the commit hash of his previous HEAD, we'd be able to cherry-pick it. We scrolled up through his terminal but couldn't find any reference to it =/ So that meant we needed to find it the hard way: in .git/

So then I started looking through the git dir, and after a little digging we found a log file for each branch (.git/logs/refs/heads/*), the contents of which list the last several commit hashes made to that branch. Alright! Each file contains a roughly chronological list of the hashes generated by each change to the tree (rebase, pull, commit, cherry-pick, etc.). We were able to use git show on the hashes near the end of our branch's log file and recover the change!

Conclusion, steps to recover:
  1. git branch # identify the branch, e.g. my-feature
  2. tail .git/logs/refs/heads/my-feature
  3. git show $hash # using the hashes from the log file until you find the one you want
UPDATE:
It turns out the easiest way to do this is git reflog, which lists the last several positions of your HEAD. Way easier!
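Here's a self-contained demo of the reflog recovery in a throwaway repo (commit messages and identity are just examples):

```shell
# Set up a scratch repo and clobber a commit with --amend
cd "$(mktemp -d)" && git init -q .
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m 'precious work'
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty --amend -m 'oops, amended'

# The reflog still knows where HEAD used to point
git reflog
git log -1 --format=%s 'HEAD@{1}'   # the pre-amend commit: "precious work"
# Recover it with: git cherry-pick 'HEAD@{1}'   (or git reset --hard 'HEAD@{1}')
```

The HEAD@{n} syntax counts backwards through reflog entries, so HEAD@{1} is wherever HEAD pointed one move ago, i.e. before the amend.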

Monday, March 7, 2011

Bash PS1 Colors and More Space

Below is a script I use for adding some jazz and meta information to my bash prompt. Although I use oh-my-zsh nowadays, a nice Bash prompt is still useful when I'm on a server without zshell.

I'm not sure if this is the best way to achieve this, but I use `echo` to insert the color variables into the PS1 string, which should be evaluated once per prompt, not at definition time. I originally surrounded the entire thing with double quotes, but then realized that `git_ps1` was getting executed only when PS1 was being defined, instead of each time PS1 was evaluated.

On most systems, you can save this script at /etc/profile.d/prompt.sh and it will be automatically evaluated. If you do set your PS1 using oh-my-zsh, then whatever value was set here will be overridden.
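The script itself appears to have been lost from this post, so here is a minimal reconstruction of the approach described above. The colors, layout, and the git_branch helper are my own illustrative choices, not the original script; the key idea is that the color variables are expanded up front while the single-quoted command substitution is re-evaluated every time the prompt is drawn:

```shell
# Colors wrapped in \[ \] so bash doesn't miscount the prompt width
RED='\[\e[0;31m\]'
GREEN='\[\e[0;32m\]'
RESET='\[\e[0m\]'

# Hypothetical helper: show the git branch if git's prompt support is loaded
git_branch() {
  command -v __git_ps1 >/dev/null 2>&1 && __git_ps1 ' (%s)'
}

# Single quotes around $(git_branch) defer its evaluation to prompt time;
# the \n puts the command line on its own row for some extra space
PS1="${GREEN}\u@\h${RESET}:\w${RED}"'$(git_branch)'"${RESET}"'\n\$ '
```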


Friday, February 4, 2011

Add Color to PHP Echo in CLI

I adapted this PHP script from If Not True Then False, adjusting for CentOS escape codes, which use a semicolon (not a comma) between the bold and color codes when escaping foreground colors.

[Update Sep 3, 2012] This code is now available on github as part of the Box Bart project.

The newer code has a __callStatic magic method letting you type EscapeColors::red('Some warning message'); in place of the wordier (and less obvious) EscapeColors::fg_color('red', 'Some warning message');.


/**
 * Color escapes for bash output
 */
class Escape_Colors
{
 private static $foreground = array(
  'black' => '0;30',
  'dark_gray' => '1;30',
  'red' => '0;31',
  'bold_red' => '1;31',
  'green' => '0;32',
  'bold_green' => '1;32',
  'brown' => '0;33',
  'yellow' => '1;33',
  'blue' => '0;34',
  'bold_blue' => '1;34',
  'purple' => '0;35',
  'bold_purple' => '1;35',
  'cyan' => '0;36',
  'bold_cyan' => '1;36',
  'white' => '1;37',
  'bold_gray' => '0;37',
 );

 private static $background = array(
  'black' => '40',
  'red' => '41',
  'magenta' => '45',
  'yellow' => '43',
  'green' => '42',
  'blue' => '44',
  'cyan' => '46',
  'light_gray' => '47',
 );

 /**
  * Make string appear in color
  */
 public static function fg_color($color, $string)
 {
  if (!isset(self::$foreground[$color]))
  {
   throw new Exception('Foreground color is not defined');
  }

  return "\033[" . self::$foreground[$color] . "m" . $string . "\033[0m";
 }

 /**
  * Make string appear with background color
  */
 public static function bg_color($color, $string)
 {
  if (!isset(self::$background[$color]))
  {
   throw new Exception('Background color is not defined');
  }

  return "\033[" . self::$background[$color] . 'm' . $string . "\033[0m";
 }

 /**
  * See what they all look like
  */
 public static function all_fg()
 {
  foreach (self::$foreground as $color => $code)
  {
   echo "$color - " . self::fg_color($color, 'Hello, world!') . PHP_EOL;
  }
 }

 /**
  * See what they all look like
  */
 public static function all_bg()
 {
  foreach (self::$background as $color => $code)
  {
   echo "$color - " . self::bg_color($color, 'Hello, world!') . PHP_EOL;
  }
 }
}


Monday, January 31, 2011

Passenger Root Directive Not Found

Error:
nginx: [emerg]: unknown directive "passenger_root" in /box/etc/nginx/nginx.conf:18


Solution:
http://markmail.org/message/ektzyf3hm3oyz5un#query:related%3Aektzyf3hm3oyz5un+page:1+mid:i5hukr2cemguxvtj+state:results

Two versions of nginx were installed on my openSUSE box. I believe this was because I had already installed it before running the passenger installation script. I may try running it again and telling it to install the new version into the proper path.