30 Books In 2016!

2016 was the year that I grew the most as a human. The growth was primarily due to the simple act of setting and following through on a goal to read 30 books within the year. I remember reading somewhere that there are two paths for growth as a person: 1) find a mentor, someone who can show you how to be a decent human being, or 2) read good non-fiction books, which tell you how to be a decent human. Finding the right mentor can be really difficult, but books are always available, just an Amazon click or trip to the library away.

Goals should be difficult but attainable. Reading 30 books in a year was not easy for me. Firstly, my reading speed is on the lower end of the spectrum; I like to ponder what I read and don’t go zooming through a book. To make matters worse, I tend to enjoy longer non-fiction books.

I accomplished this goal, and here is the breakdown of the books I read this year:

1 – Not Fade Away by Laurence Shames and Peter Barton

Rating: 4/5

Sometimes we forget that life is short and we should cherish every moment. This book is a wonderful reminder. It is the memoir of Peter Barton, a man dying of stomach cancer. Guaranteed to make the reader cry.

2 – Personal MBA by Josh Kaufman

Rating: 4/5

The cost of an MBA is around $140,000. The cost of this book was around $10. What a deal! Each short chapter is an MBA lesson, and I got a lot of valuable information from this book. For example, Kaufman goes through the five core processes of every business:

  1. Value Creation – Discovering what people want, then creating it.
  2. Marketing – Attracting attention and building demand for what you’ve created.
  3. Sales – Turning prospects into paying customers.
  4. Value Delivery – Giving customers what they are promised and ensuring they are satisfied.
  5. Finance – Bringing in enough money to make the effort worthwhile.

3 – Rich Dad Poor Dad by Robert T. Kiyosaki

Rating: 2/5

I felt this book was full of fluff and it would have been better as a blog post. The biggest takeaways:

  1. Tax advantages of corporations: an employee is taxed on income; a corporation is taxed on income after expenses.
  2. Aim for increasing assets instead of liabilities. Keep expenses low to reduce liabilities.
  3. Work to learn, not earn.
  4. Be in control of your emotions. Don’t let fear or opinions of others dictate your actions.

4 – How to Fail at Almost Everything and Still Win Big by Scott Adams

Rating: 4/5

A hilarious book filled with great advice. Some takeaways:

  1. The power of simplification, e.g., capitalism is only about making profits.
  2. A failure is a tool, not an outcome.
  3. Passion is bullshit; you are more likely to take unreasonable risks in pursuit of passion.
  4. Systems vs. goals: with goals, you are in a constant state of failure. Having a healthy diet would be a system; losing ten pounds would be a goal.
  5. Combine skills to make yourself very marketable. “Every skill you acquire doubles your odds of success.”

5 – Wooden: A Lifetime of Observations and Reflections On and Off the Court by John Wooden and Steve Jamison

Rating: 3/5

John Wooden, the famous UCLA basketball coach, is one of the people I most admire. In an age of indecency, he had such a strong moral character. This short book gives a glimpse into the life he led. He preached to be more interested in the process than the results. For example, he never worried before a game whether they would win or lose because it had already been determined by the previous month/year’s practices.

6 – The Obstacle Is the Way: The Timeless Art of Turning Trials into Triumph by Ryan Holiday

Rating: 2.5/5

Recounts an idea similar to John Wooden’s: focus on the process and the things that can be controlled, and let go of everything else.

7 – AWOL on the Appalachian Trail by David Miller

Rating: 3.5/5

A story of a man who quit his office job to hike the 2,000+ miles of the Appalachian Trail. A great reminder that it is possible to step outside the norm and that hardship builds character. Excerpt:

Having a rough time on the trail is not the same as the irredeemable frustrations of urban life, such as being stuck in traffic or wading through a crowded store. Difficulty on the trail, like this long and rainy day, is usually reflected upon fondly. There is the soothing, rhythmic beat of rainfall, the feeling that the woods are being washed and rejuvenated, the odors of the woods awakened by moisture. There is appreciation for the most simple of things, such as a flat and dry piece of ground and something warm to eat. There is satisfaction in having endured hardship, pride in being able to do for myself in the outdoors. There is strength in knowing I can do it again tomorrow.

8 – An Astronaut’s Guide to Life on Earth by Chris Hadfield

Rating: 3/5

Memoir of Chris Hadfield, a prolific astronaut. The book is filled with interesting stories and practical life advice.

9 – Autobiography of a Face by Lucy Grealy

Rating: 3.5/5

A memoir from Lucy Grealy who had Ewing’s Sarcoma, a type of bone cancer, in her jaw. Surgery was required to remove the tumor, leaving her face disfigured. The book is about her coming to acceptance of her new appearance.

This singularity of meaning—I was my face, I was ugliness—though sometimes unbearable, also offered a possible point of escape. It became the launching pad from which to lift off, the one immediately recognizable place to point to when asked what was wrong with my life.

10 – The Promise of a Pencil: How an Ordinary Person Can Create Extraordinary Change by Adam Braun

Rating: 3/5

The story of Adam Braun who gave up a lucrative career in the financial sector to start a non-profit, building schools around the world. A great reminder to not get trapped in the rat race and make sure to add some good to the world.

11 – Do You Talk Funny? by David Nihill

Rating: 4/5

Public speaking tips from an Irish stand-up comedian. Specifically, he talks about how adding humor can improve public speeches. Some takeaways:

  1. The speakers who have delivered their talk the most times tend to be the best and most polished. They know where the laugh lines are, they know what phrasing works best, and they know their timing.
  2. People don’t invest in your business or product. They invest in you and your story. If you want people to remember what you say, tell a compelling story.
  3. The best way to be more engaging, memorable, and funny quickly is to tell a story that contains a few essential elements. “Who wants what and what stops them from getting it?”
  4. The most powerful stories are not about the storyteller; they are about the person who is hearing the story.
  5. Joke structure: P = Preparation (the situation setup), A = Anticipation (this can be often achieved with just a timely pause), P = Punch line (story/joke payoff).
  6. There’s always a funny or a humorous relatable element in real-life stories. The key is to tie them to your overall macro concept and get to laugh lines as quickly and effectively as possible. Keep it relevant to everybody on a macro level before going micro and adding detail.
  7. The reality is, you can’t wing it. If you don’t prepare, you may do okay some of the time, poorly all too often, and good occasionally. You have to practice. Practice breeds consistency, good habits, and success. This is something that every comedian, performer, and athlete knows.

12 – Effective Javascript by David Herman

Rating: 2/5

Of course, I had to read at least one technical book during the year. A book of tips for writing JavaScript. I found most of the tips to have few practical applications. A better JavaScript book is Secrets of the JavaScript Ninja by John Resig.

13 – Buried In The Sky by Peter Zuckerman and Amanda Padoan

Rating: 3/5

The story of the 2008 climbing disaster on K2 where eleven people died. Really interesting in that it talks about the mistakes that led to the disaster.

14 – A Million Steps by Kurt Koontz

Rating: 3/5

Tells of a man’s walk of the Camino de Santiago in Spain.

Rain falls, adversity discourages, and pain hurts. They are all inevitable. The Camino taught me to go with the flow of these uncontrollable situations. We all control the reaction.

15 – Radical Acceptance by Tara Brach

Rating: 3/5

A book on accepting oneself. She tells a story that really hit home for me. The mother of one of her friends was on her deathbed:

“Most of the time Marilyn’s mother remained unconscious, her breath labored and erratic. One morning before dawn, she suddenly opened her eyes and looked clearly and intently at her daughter. “You know,” she whispered softly, “all my life I thought something was wrong with me.” Shaking her head slightly, as if to say, “What a waste,” she closed her eyes and drifted back into a coma.”

16 – Five Lessons by Ben Hogan

Rating: 3.5/5

I am trying to improve my golf game. This is the de facto golf instruction book. It is short but filled with all the information needed to create the perfect golf swing. I will be re-reading this one.

17 – Checklist Manifesto: How to Get Things Right by Atul Gawande

Rating: 3/5

An argument for the power of checklists as a problem-solving tool to simplify the complex. Because of this book, I started using checklists a lot more.

18 – You Are a Badass: How to Stop Doubting Your Greatness and Start Living an Awesome Life by Jen Sincero

Rating: 3.5/5

I am usually not one for self-help books, but I thought this one contained a lot of great motivational quotes and stories, e.g.:

  • It’s not that the things and opportunities that we want in life don’t exist yet. It’s that we’re not yet aware of their existence (or the fact that we can really have them).
  • It’s like we’re born with a big bag of money, more than enough to fund any dream of ours, and instead of following our instincts and our hearts, we invest in what other people believe we should invest in.
  • Our thoughts become our words, our words become our beliefs, our beliefs become our actions, our actions become our habits, and our habits become our realities.
  • It’s so simple; fear will always be there, poised and ready to wreak havoc, but we can choose whether we’re going to engage with it.
  • Because so often when we say we’re unqualified for something, what we’re really saying is that we’re too scared.
  • There’s something called the Crab Effect. If you put a bunch of crabs in a bowl and if, while they’re in there crawling all over each other, one of them tries to climb out, the rest of them will try to pull him back down instead of helping to push him out.
  • The only failure is quitting. Everything else is just gathering information.

19 – Edison – Inventing the Modern World by Alexander Kennedy

Rating: 3/5

A few things struck me about the life of Edison:
1. Despite his many successes, he was not very wealthy.
2. When his laboratory burned down and he lost everything, he saw it as an opportunity to start over.
3. He was a proponent of his DC electric current and even when it was clear AC was more practical, he stubbornly tried to push his system. He even helped finance the execution of animals by AC current to show its danger.
4. His friendship and partnership with Ford allowed him to keep inventing, even when his financial outlook was grim.

20 – On the Move by Oliver Sacks

Rating: 4/5

A beautifully-written memoir of Oliver Sacks, a famous neurologist. Talks about his dealing with the social stigma of his homosexuality and stories from his rounds of seeing patients.

21 – Open by Andre Agassi

Rating: 5/5

I enjoyed this book so much that I read all 400 pages in 3 days. The most surprising thing for me about Andre Agassi’s life is that he didn’t enjoy playing tennis, yet still played for so many years.

22 – Tripping Over the Truth by Travis Christofferson

Rating: 3/5

Covers the Metabolic Theory of Cancer, which says that cancer is caused by damage to a cell’s mitochondria (the energy-producing part of the cell). The currently accepted theory is SMT (Somatic Mutation Theory), which says that cancer is caused by DNA mutations.

23 – Darwin by Alexander Kennedy

Rating: 3/5

  • Darwin was considered by his teachers and his father to be somewhat below average in both intelligence and achievement.
  • Darwin was included on the voyage of the HMS Beagle because the captain wanted “to bring along a gentleman companion for company.”
  • It wasn’t any formal training that led Darwin to his theory; it was Darwin’s firm belief that the training acquired on the voyage allowed him to achieve all of his scientific accomplishments.
  • Darwin was in bad health for most of his life: “My chief enjoyment and sole employment throughout life has been scientific work; and the excitement from such work makes me for the time forget, or drives quite away, my daily discomfort.”

24 – The Gifts of Imperfection by Brené Brown

Rating: 3/5

Similar to Tara Brach’s Radical Acceptance, this book is a good read about “owning our story.”

25 – Sapiens by Yuval Noah Harari

Rating: 4/5

A long book, but well worth the time. It chronologically tells the story of how humans became the most dominant species on Earth, and provides insight into human social norms, traditions, etc. One of the “must read” books on the list.

26 – The Par Plan by Golf Tech

Rating: 2.5/5

Another book to try and help improve my golf game. This one contains a lot of worthwhile drills.

27 – Out With It: How Stuttering Helped Me Find My Voice by Katherine Preston

Rating: 4/5

A memoir from Katherine Preston, a woman who stutters. I am a stutterer and this felt like my life story. If you stutter or know someone who stutters, I highly recommend this book.

28 – Steve Jobs by Walter Isaacson

Rating: 4/5

Man, Steve Jobs was a dick. I kept asking myself if I would want to work with this man; does his genius outweigh his treatment of people? I think the answer is no.

29 – Born to Run by Christopher McDougall

Rating: 3.5/5

The story of the Tarahumara tribe in Mexico, who run ultra distances without modern footwear. It started off kind of slow, and I even thought of putting it down, but I am glad that I didn’t.

Takeaways/quotes:
– Make friends with pain, and you will never be alone.
– American distance running went into a death spiral precisely when cash entered the equation.
– Perhaps all our troubles—all the violence, obesity, illness, depression, and greed we can’t overcome—began when we stopped living as Running People.
– Deny your nature, and it will erupt in some other, uglier way.
– Every great cause begins as a movement, becomes a business, and turns into a racket.
– “You don’t stop running because you get old. You get old because you stop running.”
– Chimps don’t have a nuchal ligament. Neither do pigs. Know who does? Dogs. Horses. And humans.
– Using running as a weapon in persistence hunting. “If you can run six miles on a summer day then you, my friend, are a lethal weapon in the animal kingdom.”

30 – Benjamin Franklin: An American Life by Walter Isaacson

Rating: 3/5

Along with John Wooden, Benjamin Franklin is the person I admire most. It’s crazy how prolific he was as a writer, inventor, and statesman. This book goes into great detail (sometimes, I felt, too much) about Ben’s life. Some things I was not aware of:
– He spent most of his adult life in England and France.
– Just how revolutionary his experiments with lightning and electricity were.
– He was a celebrity in France.
– He was a bit of a flirt. He kept lifelong correspondence with several women.
– He almost completely ignored his real family. His wife died after not seeing him for several years while he was in France.

31 – Tools of Titans by Tim Ferriss

Rating: 4.5/5

I love the Tim Ferriss Podcast. This book is kind of like CliffsNotes for all of his podcasts. A must-buy for any fan.

A Case Study on the Advantage of Simplicity – Creating Transcripts.io

“Life is really simple, but we insist on making it complicated.” ~ Confucius

Oftentimes I think we get trapped in a certain way of thinking. Case in point: I have always used the same base components when building a web app:

  1. A web framework to handle request routing, page generation, etc., e.g. Rails, Java EE
  2. A database as a persistent data store
  3. Usually a front-end library like Backbone or React
  4. Ancillary tools like Checkstyle and ESLint

This is a lot of stuff and rarely in the past did I question if it was all necessary. I took the status quo for granted and chugged along, dragging this fat stack with me.

Then one day, I was listening to the Tim Ferriss Podcast, one of my favorites, and he had a quote that got me thinking. He said that when working on a project and getting bogged down by details, he would ask himself, “What would this look like if it were simple?” I think this is a great question to ask yourself when starting a project because it forces you to challenge your defaults and opens your eyes to possible new, easier options. I decided to put this thought into action and ended up creating Transcripts.io, a site to host searchable transcripts.

Cost of Complexity

Sometimes complexity is necessary, but it should always be minimized. I think that as software engineers/developers, we often overcomplicate solutions.

We build the Taj Mahal when a fishing shack would do.

We forget that there are costs to added complexity. Complex code is harder to understand than simple code. The extra complexity of having a database opens us up to a whole host of problems, from SQL injection to security bugs in the database itself. WordPress, one of the most popular blogging/CMS platforms, uses MySQL, a popular open-source database, as its data store. There was a somewhat famous multi-byte encoding bug in MySQL which could lead to security exploits: Andrew Nacin: Anatomy of a Critical Security Bug. Since WordPress carried the “complexity baggage” of having a database in the first place, the security vulnerability became their problem. If WordPress didn’t have the added complexity of a database, there would be no security bug affecting the framework.

Where the Rubber Meets the Road

I am a big fan of the Tim Ferriss Podcast (I think I mentioned that before). One of the things I often find myself wanting to do is search back to re-listen to parts I found interesting; unfortunately, Tim doesn’t provide transcripts, and searching podcast audio is not really possible. So with this in mind, I decided to create a site where the transcripts could live, along with a utility to search transcript text. But before I started, I asked myself in typical Tim fashion, “What would this look like if it were simple?”

In order to keep things simple, I would have to deviate from my normal stack. There would be no database; there would be no web framework. Instead, I would use a static site generator called Jekyll. Unlike a CMS, Jekyll has no database to worry about; instead, as “static site generator” implies, static pages are generated ahead of time, kind of like the compilation step in Java or other statically-typed languages. A cool feature of Jekyll is its plugin framework, which allows customizing the generated pages in basically any way. So I wrote a Jekyll plugin to generate transcript pages (the source is here: https://github.com/jonmbake/jekyll-transcribed).

Cost of Simplicity

One of the requirements for Transcripts.io was to support full-text search on the transcript text. This would have been really easy if I had used a database. Unfortunately, since I decided to keep things simple, I had no database to leverage for search. So I ended up having to introduce extra complexity by including the search platform Solr. By starting off simple, I ended up introducing extra complexity in the end, but this was a conscious choice, made late in the game when I had all the transcript generation working.

It is much better to make a conscious choice to add complexity later than assume complexity from the start.

Takeaways

If you are building an e-commerce site with millions of visitors per month, you probably need a complex solution. If you are building a personal blog, you probably don’t. Either way, the level of complexity should match the task at hand, and it should be decided with much thought.

If you are interested in transcription or want to help out transcribing, please check out Transcripts.io.

Three JavaScript Constructs to Respond to an Asynchronous Event

JavaScript is asynchronous in nature. It won’t wait on an AJAX call to complete or a setTimeout function to fire; instead, it keeps trucking along and comes back to these “background tasks” after they complete. Given its asynchronous disposition, JavaScript provides a few constructs to make responding to asynchronous event completion possible. This post covers three of them: Callbacks, Events, and Promises.

Callbacks

A Callback is simply a function that gets invoked when an async operation completes. A good example is jQuery.ajax‘s success callback:

var onSuccessCallback = function (data) {
  // kick off another AJAX request using data returned from the first request
  if (data.yes) {
    $.ajax({
      url: "/second-request",
      success: function () {
        console.log('Second request succeeded!');
      }
    });
  }
};
$.ajax({
  url: "/should-second-request-proceed",
  success: onSuccessCallback
});

The success callback gets invoked when the asynchronous AJAX call gets a successful response. In addition, $.ajax also accepts error and complete callbacks.

Downside of Callbacks

Callbacks can get pretty ugly when nested. Oftentimes you need to kick off another async operation after the first one’s callback completes, and then another after that. This leads to deeply nested functions, which are not very aesthetically pleasing from a coding perspective.
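The nesting problem is easier to see in a sketch. Here fakeRequest is a hypothetical stand-in for $.ajax (not a real API); for simplicity it calls its callback synchronously, while a real request would call back later, once the response arrives:

```javascript
// fakeRequest is a hypothetical stand-in for $.ajax; it calls its
// callback synchronously so the example is easy to run.
function fakeRequest(url, onSuccess) {
  onSuccess({ url: url, ok: true });
}

var visited = [];
fakeRequest("/first", function (first) {
  visited.push(first.url);
  fakeRequest("/second", function (second) {
    visited.push(second.url);
    fakeRequest("/third", function (third) {
      // already three levels deep -- real error handling at each level
      // would make this pyramid even wider
      visited.push(third.url);
      console.log(visited.join(" -> ")); // prints "/first -> /second -> /third"
    });
  });
});
```

Each dependent step adds another level of indentation, which is exactly the “pyramid” shape that makes nested callbacks hard to read.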

Events

Another alternative is events. If you have ever used a framework like Backbone or jQuery, you are well aware of events. Events are nice because they decouple where and when an event was fired from the things that care about its outcome. For example, Backbone will fire a sync event any time a model is synced with the server (an async operation), and anything that cares can listen:

var MyModel = Backbone.Model.extend({
    urlRoot : '/foo'
});
var MyView =  Backbone.View.extend({
    initialize: function() {
        this.listenTo(this.model, "sync", this.render);
    },
    render: function () {
        this.$el.text('Synced value from the server ' + this.model.get('name'));
    }
});
var model = new MyModel({id: 1});
var view = new MyView({model: model});
model.fetch();  //will trigger 'sync' after AJAX request finishes and the view will render with fetched value

Pros and Cons of Events

Allowing elements to listen to and trigger events allows for decoupling; the model doesn’t need to know about the view. The main drawback to events is that when systems become complex and events are flying all over the place, it can be hard to debug when problems arise. Event ordering can also matter: a bug may only be visible when events A, B, and C happen in order, but absent otherwise. This can be tricky to debug.

Promises

The final construct for dealing with async events is the newest and most promising (pun intended): Promises. Prior to ES6, Promises were only available through third-party libraries like Bluebird, Q, or jQuery. ES6 has Promises baked in: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise.

Promises are JavaScript objects that represent an asynchronous operation. The operation has a state of pending, fulfilled, or rejected. For example, when wrapping an AJAX request within a Promise, the promise starts in the pending state when the request kicks off. If the request fails with something like a 500, it transitions to rejected. If, on the other hand, it succeeds, the Promise transitions to fulfilled.
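A minimal sketch of those state transitions, assuming an ES6 environment (Node or a modern browser). fakeFetch is a hypothetical stand-in for an AJAX call, not a real API:

```javascript
// fakeFetch wraps a fake "request" in an ES6 Promise. The promise is
// pending from creation until resolve or reject is called.
function fakeFetch(url) {
  return new Promise(function (resolve, reject) {
    if (url === "/boom") {
      reject(new Error("500 Internal Server Error")); // -> rejected
    } else {
      resolve({ url: url, status: 200 });             // -> fulfilled
    }
  });
}

fakeFetch("/ok").then(function (res) {
  console.log("fulfilled with status " + res.status);
});
fakeFetch("/boom").catch(function (err) {
  console.log("rejected: " + err.message);
});
```

Once a promise settles (fulfills or rejects), its state never changes again, which is what makes chaining handlers onto it safe.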

Here is the same Callback example using Promises instead. Conveniently, $.ajax returns a promise:

$.ajax({url: "/should-second-request-proceed"})
  .then(function (data) {
    if (data.yes) {
      return $.ajax({
        url: "/second-request"
      });
    }
    return $.Deferred().reject('Second request should not proceed :(');
  })
  .then(function () { console.log('Second request succeeded!'); });
The main benefit of Promises is that they can be chained, avoiding the nesting-hell of callbacks.

Conclusion

Each construct has its use. If you are not nesting async operations, then a Callback is the way to go because it is simple and doesn’t require a third-party library. Events are great for decoupled components. Promises can be used for “asynchronous chaining”. All three constructs have their place in the JavaScript landscape.

Note: All the code written is pseudo code. It was not tested in the browser.

Bypassing MailChimp Double Opt-In

Sometimes service providers make certain things more difficult than they need to be. Often they have good reasons for the difficulty, which usually boils down to preventing people from doing stupid things whose repercussions they don’t fully understand.

MailChimp is a tool for managing email campaigns. A campaign is created by creating a mailing list which users can subscribe to. By default, subscribing to a campaign is double opt-in, meaning that not only does a user have to sign up [by entering their email address to a form], but must also click a link within an email sent to that address to be added to the list.

Thinking like a software developer, this is probably a good default because it leads to better list quality; it is a great way to prevent unauthorized sign-ups. However, it complicates things because sign-up is now a two-step process. If a user forgets to click the link, they never get subscribed to the list.

I recently helped a friend with a project where he wanted to bypass the double opt-in. Basically, he had a website where users could sign up for a free ebook, and he was using a mailing list to manage the people who downloaded the book. He set up a hook where the book was automatically sent when a user got subscribed to the list. He did not want users to have to confirm being added to the list before receiving the free ebook.

I looked far and wide for a configuration setting at the list level to turn off the double opt-in but, interestingly enough, came to the conclusion that there is no such configuration option available. We had to do it the hard way: by using their API. MailChimp has API wrappers for many languages. For reference, check out the MailChimp API Docs.

The code

Let’s start with the form to sign up. A user enters their email address and the free ebook is sent to that address:

<p>To get a free ebook delivered straight to your email, enter your address below</p>
<form action="thank_you.php" method="post" novalidate>
    <div class="field-group">
      <input type="email" value="" name="email" class="email" placeholder="Enter your email address" required>
      <button type="submit">Send me the book</button>
    </div>
    <small>No credit card required. Your information is 100% secure and will never be shared with anyone.</small>
</form>

Notice the form action is a PHP page, where we thank our users for trying the free book and, most importantly, hit the MailChimp API to actually subscribe the user to the book mailing list, which has the hook to send the free book when a person subscribes:

<?php
  require('./vendor/mailchimp/mailchimp/src/MailChimp.php');
  try {
    $api = new MailChimp(getenv('MAILCHIMP_API_KEY'));
    $result = $api->lists->subscribe(getenv('MAILCHIMP_LIST_ID'), array("email" => $_POST['email']), null, 'html', false, false, false, false);
  } catch (Exception $e) {
    error_log('Exception while attempting to send email to ' . $_POST['email']);
  }
?>
<!doctype html>
<html lang="en">
    <head>
        <!-- ... -->
    </head>
    <body>
      <section class="legal">
        <article>
          <h1>Thank You!</h1>
          <p>Your free book is on its way!</p>
        </article>
      </section>
    </body>
</html>

thank_you.php

The API method we use is /lists/subscribe, which has the following method signature:

subscribe(string apikey, string id, struct email, struct merge_vars, string email_type, bool double_optin, bool update_existing, bool replace_interests, bool send_welcome)

Notice the double_optin parameter. Passing in false, as we did above, prevents double opt-in. A few other notable tidbits:

  1. This code is dependent on the MailChimp PHP API Wrapper being present (notice the MailChimp.php include).
  2. You must specify the MailChimp API key and List ID in the PHP page using the API. In the example, these are specified as environment configs.

That’s it, folks. In this post, I scratched my head a bit at why people make things so hard and then showed that even difficult obstacles can be overcome.

The 3-step Refactor

Code gets messy over time. It’s a fact of life. Features get added on top of each other until you have a stinking pile of code. Refactoring is re-organizing the pile so the code is better able to deal with future extensions; it is changing the code without changing how the application behaves. This is key: the application should behave in the same manner before and after the refactoring.
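As a tiny sketch of what “changing the code without changing how the application behaves” means, here is a hypothetical before/after in plain JavaScript; the function names are made up for illustration, and the refactor only extracts duplicated logic into a helper:

```javascript
// Before: duplicated trimming logic inline.
function fullNameBefore(user) {
  return user.first.trim() + " " + user.last.trim();
}

// After: same behavior, with the duplication extracted into a named helper.
function clean(s) { return s.trim(); }
function fullNameAfter(user) {
  return clean(user.first) + " " + clean(user.last);
}

var u = { first: " Ada ", last: " Lovelace " };
console.log(fullNameBefore(u) === fullNameAfter(u)); // true -- behavior unchanged
```

The structure changed; the observable output did not. That equality is the whole contract of a refactoring.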

Step 1 – Understand The Current Code

Since the behavior is not going to change, it is very important to understand the current behavior, both from an application and a code perspective, before jumping in and making changes. Say you don’t heed this advice, jump in like a cowboy, and start changing code. Later, you test your change and the application behaves in a way that might be undesirable (aka a bug) or might be how the system was designed; without knowing the prior behavior, you can’t tell which. Understanding the application area the refactoring touches also makes going back and testing a whole lot easier.

In my humble opinion, it is also a good idea to ensure the automated tests, like unit and integration tests, for the code under refactoring are up to snuff. One of the most wonderful benefits of unit tests is ensuring that code still works as intended after a refactoring. That benefit is only realized if the tests are actually in place.

Step 2 – Make the Changes (Slowly)

Making many changes at once is fine as long as the code actually functions when you get around to testing it. The problem is that it often does not, and locating which part (or parts) of the change broke things can be difficult. A much better approach is to make a small change, test, and repeat. This way, when a change breaks existing functionality, it is easy to locate the cause (hint: the last small change).

Step 3 – Test Your Change

Testing a refactoring isn’t just running the suite of unit tests; any part of the application touched by the refactoring should be tested by hand, i.e., logging into the app. Having unit tests is great, but they are not a guarantee that the application works as intended. Always, always log in to the application and thoroughly test the refactoring touch points.

That’s it! Refactoring is a necessary process for maintaining the health of our apps. Taking a step-wise approach will ensure the refactoring is done in a safe and timely manner.

A Better Contact Form

What the hell was I thinking? It is the question I ask myself just about every time I look at code I have written in the past. The longer it has been since I touched the code, the greater the feeling of bewilderment at how stupid I was. This is probably a good thing because it is an indicator of growth as a software developer.

Recently, I re-explored code I had written for a Bootstrap contact form and, without fail, that same feeling of “what the hell was I thinking” crept into my mind. This post is about the mistakes I made and what I did to correct them. (PS: I pushed the updates to GitHub here: Bootstrap 3 Contact Form V1.1 changes)

Email Configuration – Use Environment Variables!

The first version of the contact form required developers who used it to go to the main PHP file, find the place where the recipient email address was defined, and change it to the email address they wanted to use. This was clunky and not very practical. If you write a library or utility, it is not a good idea to make users edit your code to add their configuration.

There is a popular group of principles for building web-based applications called the 12-Factor App. One of the principles of the 12-Factor App is to use Environment Variables for configuration values. In the case of the contact form, the email address should be an environment config (and is in the new version).
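The contact form itself is PHP, but the principle is language-agnostic. Here is a minimal sketch in JavaScript (Node), with CONTACT_EMAIL as a hypothetical variable name for illustration:

```javascript
// Hypothetical sketch: read the recipient address from the environment
// instead of hard-coding it in source. CONTACT_EMAIL is an assumed name.
function getRecipientEmail(env) {
  var email = env.CONTACT_EMAIL;
  if (!email) {
    // fail fast rather than silently falling back to a hard-coded address
    throw new Error('CONTACT_EMAIL is not set');
  }
  return email;
}

// In a real app you would pass the process environment:
// var recipient = getRecipientEmail(process.env);
```

The key point is that the configuration value lives outside the code, so each deployment can set its own address without touching the source.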

Problems with PHP’s mail() Function

A big part of developing software is weighing the pros and cons of a particular solution. Solutions often involve pulling in a dependency. The major pro of a dependency is that it usually adds greater capability to the application. On the other hand, it adds more complexity and bloat to the software system.

PHP’s built-in mail() function is very basic, not supporting things like authentication. Unfortunately, most email providers today require authentication, so mail() is not a very practical solution. The fix was to use a third-party library, PHPMailer, which supports authentication and SSL/TLS encryption.

Bad JavaScript Practices

Dynamic languages like Ruby and JavaScript are great because they give developers the freedom to do pretty much whatever they want (including crazy things like monkey patching). Freedom is good, but when there are many ways to do something, programmers will often not make the best choice.

One of the greatest no-nos in JavaScript is to pollute the global namespace with global variables. The previous version of the contact form had some utilities defined in the global namespace (e.g. var contactForm).

A better solution is to use a self-executing function which creates a new namespace (outside of global) like so:

(function (window) {
    //new scope - not global
}(window));
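Going a step further, the function can return just the pieces that need to be public (the revealing module pattern). The contactForm below is illustrative only, not the actual form’s code:

```javascript
// Revealing module pattern: only the returned object is visible outside.
var contactForm = (function () {
    // private state, invisible to the global namespace
    var submitCount = 0;

    function validateEmail(email) {
        // deliberately simple check, for illustration only
        return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email);
    }

    function submit(email) {
        if (!validateEmail(email)) {
            return false;
        }
        submitCount += 1;
        return true;
    }

    // public API: one global name instead of many
    return {
        submit: submit,
        getSubmitCount: function () { return submitCount; }
    };
}());
```

Only one name (contactForm) ends up in the global namespace, and the internals stay private.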

Final Thoughts

Code is never perfect. It can always be improved upon. I think it is always a good idea to go back and refactor code that you have written in the past. Always strive to make your code the best it can be; I know I will.

Check out the new version demo: Bootstrap 3 Contact Form Demo Source: Bootstrap 3 Contact Form Source

Tips and Tricks for Creating a Project with Angular and Rails

Developing front-end Angular applications in combination with Rails as the back-end API framework can be a lot of fun. However, there can be a steep learning curve since you basically have to learn two frameworks. This post describes a few tips and tricks of using Angular with Rails that I hope you find useful.

Using a Versioned API

Creating a versioned API is a good idea especially if you are going to have multiple clients (not just your front-end application) accessing the API. In Rails, you can create a versioned API by adding something like this to your routes.rb:

namespace :api do
    namespace :v1 do
        resources :users
    end
end

Usually, when accessing the API from within Angular, you will use either the $http or $resource service. To construct the User $resource, the code would be:

var UserResource = $resource('/api/v1/users/:userId', { userId: '@userId' }, { 'update': { method: 'PUT' } } );
var users = UserResource.query();

The problem with this approach is that the version number is hard-coded within the URL of the resource. You will have many resources, mock resources, and $http calls within your front-end app; when you decide to move to version 2 of the API, you will have to go back, find all of these places, and change the ‘v1’ to a ‘v2’.

A better approach is to create an Angular Constant and then use a Request Interceptor to add the version number to the URL:

angular
  .module('jbApp')
  //--> create angular constant for api version
  .constant('appConfig', {
    apiVersion: 'v1'
  })
  .config(function($httpProvider, appConfig){
    $httpProvider.interceptors.push(function() {
      return {
        'request': function(config) {
          //--> if request url starts with /api then append version number
          if (config.url.indexOf('/api') === 0) {
            config.url = config.url.replace('/api', '/api/' + appConfig.apiVersion);
          }
          return config;
        }
      }
    });
  });

Then our resources no longer have the API version number hard-coded:

var UserResource = $resource('/api/users/:userId', { userId: '@userId' }, { 'update': { method: 'PUT' } } );

Of course, you still have to go and fix any places where the API itself changed. :)
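The rewriting rule itself is plain string manipulation, so it can be pulled out as a pure function and unit tested without Angular. This is just a sketch; the interceptor inlines this logic:

```javascript
// The URL-rewriting rule from the interceptor, extracted as a pure function.
// Only URLs that start with /api get the version segment inserted.
function versionUrl(url, apiVersion) {
    if (url.indexOf('/api') === 0) {
        return url.replace('/api', '/api/' + apiVersion);
    }
    // leave non-API requests (templates, assets, etc.) untouched
    return url;
}
```

Keeping the rule in one place means the ‘v1’ to ‘v2’ switch is a one-line change to the constant.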

Testing Strategies

Both Angular and Rails are frameworks that take testing seriously. Both were built with testing baked-in. This is great, but it can be easy to become overwhelmed with the number of testing tools available and what exactly to test.

Obviously, we should test both the back and front ends. I think it is also good to have both unit tests and integration tests on each. Unit tests ensure a single unit of code is doing what it is supposed to do; integration tests, on the other hand, verify that a feature or section of the application behaves as intended. A good workflow for me is to use TDD with unit tests, and then, when a feature is complete, go back and write integration tests to ensure the feature works as intended.

For the back-end, I would recommend using RSpec for both integration and unit tests. Integration testing an API usually boils down to testing the controllers, which RSpec makes easy. Models and any library code can also be unit tested with RSpec.

On the front-end, I use Karma with Jasmine for unit tests and Protractor/Selenium for integration tests. The front-end integration tests will be end-to-end tests, since you will also need a functional API.

Whatever testing strategy you choose, just choose something. Automated testing is too valuable for flushing out bugs, and it gives you a certain confidence that your code is correct (that fuzzy feeling we all know and love).

Version Control/Structuring your Project

I think it is a good idea to have two distinct version-controlled repositories: one for the front-end Angular code and another for the back-end Rails code. To “integrate” the two, any time you push a front-end change, pull that change into the public directory of your Rails app. In addition, run the entire suite of tests, both front- and back-end, any time you pull in front-end changes or make back-end changes. This is where a pre-commit hook is extremely valuable.

Pre-commit Hook

A pre-commit hook is used to take some action before committing your code to version control. It is even possible to fail a commit based on some check, such as the results of automated tests. The pre-commit hook should live in your Rails repository and do something like the following:

  1. Run back-end (rspec) tests
  2. Run front-end tests/build assets (minify, etc.)
  3. Copy front-end assets into /public
  4. Run front-end integration tests
  5. If all tests pass without error then commit

Here is an example of a pre-commit hook: EXAMPLE. Note: this pre-commit hook was used with an Angular/Grunt project and Git for version control.

Replacing Flash

When using Rails as a simple API, you no longer have access to a lot of its nice features. The whole view portion of the Rails stack is completely off limits; the only data flowing to the front-end is JSON. One of the nice features you will have to do without is Flash, which allows passing messages (alerts) to the very next action. In a Rails/Angular app, that Flash message will never be accessible to the front-end.

I found that a good replacement for Flash is toastr, “a Javascript library for non-blocking notifications”. You simply have to create a service like the following:

'use strict';

angular.module('myApp')
  .factory('toaster', function () {
    return {
        success: function (text) {
            toastr.success(text,'Success');
        },
        error: function (text) {
            toastr.error(text, 'Error');
        },
        info: function (text) {
            toastr.info(text, 'Info');
        },
        warning: function (text) {
            toastr.warning(text, 'Warning');
        }
    };
});

Then anytime you want to “flash” a message you just have to do the following:

angular.module('myApp')
  .controller('MyCtrl', function ($scope, toaster) {
    toaster.success('A successful toast!');
  });

camelCase vs snake_case for JSON Property Names

One of the things I dislike about Rails is that it favors “snake_case” over “camelCase”. The decision to go with snake case goes back to its origins, but nowadays just about every framework uses camel case, and it is especially common for JSON property names.

The JSON serializer I use for my Rails projects (and probably the most popular) is Active Model Serializers. This works great, except that it expects property names to be snake case (it’s the Rails way, after all). Fortunately, there is a configuration property, config.key_format = :lower_camel, that will serialize property names in camel case; however, deserialization is still an issue.

I am not sure if it is the best way, but to get around this issue I again used an Angular Request Interceptor:

angular
  .module('myApp', [])
  .config(function($httpProvider, appConfig){
    $httpProvider.interceptors.push(function() {
      return {
        'request': function(config) {
          if (config.url.indexOf('/api') === 0) {
            //rails expect params to be in snakecase and not camelcase
            if (typeof config.data === 'object') {
              config.data = decamelizeKeys(config.data);
            }
            config.url = config.url.replace('/api', '/api/' + appConfig.apiVersion);
          }
          return config;
        }
      };
    });
  });

The interceptor code runs right before a request is made to the server. The decamelizeKeys function converts the JSON property names of config.data (the data being submitted) to snake case, which Rails expects. Fortunately, as of version 0.10 of ActiveModel::Serializers, it looks like having camel-case property names will no longer be a problem.
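The decamelizeKeys helper comes from a utility library (the humps library provides one, for example). A minimal sketch of what such a function does:

```javascript
// Minimal sketch of a decamelize helper: converts camelCase keys to
// snake_case, recursing into nested objects and arrays. Real projects
// often use a library such as humps for this.
function decamelize(name) {
    // insert an underscore at each lower-to-upper boundary, then lowercase
    return name.replace(/([a-z\d])([A-Z])/g, '$1_$2').toLowerCase();
}

function decamelizeKeys(value) {
    if (Array.isArray(value)) {
        return value.map(decamelizeKeys);
    }
    if (value !== null && typeof value === 'object') {
        var result = {};
        Object.keys(value).forEach(function (key) {
            result[decamelize(key)] = decamelizeKeys(value[key]);
        });
        return result;
    }
    // primitives pass through unchanged
    return value;
}
```

For example, { firstName: 'Jill' } becomes { first_name: 'Jill' }, which is the shape Rails strong parameters expect.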

Conclusion

I hope this helps you in your journey to creating a Rails/Angular project. I think both technologies are great. I like Rails for its awesome testing frameworks and Angular makes it easy to get a project up and running quickly.

Four Common SQL Query Structures

When starting out writing SQL queries, it can be difficult to understand exactly how to structure a query. SQL is declarative in nature, which allows one to quickly obtain a subset of relevant data. However, often you will know the subset of data you want, just not the SQL needed to obtain it. This post is intended to arm you with some common query structures to make the process of getting to the data you want easier. The four query structures we will go over are:

  1. Correlated Sub-query
  2. Inline View
  3. Sub-query Factoring Clause
  4. Nested Selects

Our Test Data

The example queries use two tables, Employee and Address. Employees have a name and a manager, with manager being a foreign key back to the Employee table. Address has a street and a city, and a foreign key back to Employee.

The following SQL was used to create the tables and populate some data:

create table employee (
    id int not null primary key,
    name varchar(80),
    manager_id int
);

create table address (
    employee_id int,
    street varchar(80),
    city varchar(80)
);


insert into employee values(1, 'Jill', null);
insert into employee values(2, 'Alice', null);
insert into employee values(3, 'Jon', 1);
insert into employee values(4, 'Jim', 1);
insert into employee values(5, 'Beth', 2);

insert into address values(1, '324 Sycomore St.', 'Madison');
insert into address values(2, '123 Main St.', 'LaCrosse');
insert into address values(3, '478 Superior Ave', 'Chicago');
insert into address values(4, '7888 Park St', 'Fleming');
insert into address values(5, '99238 Hammersly Dr', 'Ersling');

Correlated Sub-query

Correlated sub-queries are often used with an Aggregate Function or the Exists or Not Exists sub-clauses to filter based on an attribute. For example, say we want to obtain all managers who manage more than one person. This is easy with a correlated sub-query and using the aggregate function count:

select
  m.name
from
  employee m
where
  1 < (
    select
      count(*)
    from
      employee e
    where
      --this is where we correlate back to the main query, hence the name
      m.id = e.manager_id
  )
;

Inline View

Inline views are useful when you need to join to a view, but the view would not be useful outside the context of the query. If the view would be useful in other queries, it makes sense to extract the inline view into a named view.

Inline views are defined within the from clause. Let’s say we want a list of employees, their managers, and the city each manager is from. We can use an inline view to combine each manager’s employee record with their address.

select
  e.name,
  ma.name manager_name,
  ma.city manager_city
from
  --a view defined within the from clause
  (
    select
      e2.id,
      e2.name,
      a.city
    from
      employee e2
    join
      address a on (e2.id = a.employee_id)
    ) ma
join
  employee e on (ma.id = e.manager_id)
;

Sub-query Factoring Clause

Like inline views, a sub-query factoring clause (the WITH clause) lets you create “private views” visible only to the query. Choose sub-query factoring when you need to join to the view multiple times. For example, say we also want to get each employee’s city. Notice that we join ea twice.

with ea as
(
  select
    e2.id,
    e2.manager_id,
    e2.name,
    a.city
  from
    employee e2
  join
    address a on (e2.id = a.employee_id)
)
select
  e.name employee_name,
  e.city employee_city,
  m.name manager_name,
  m.city manager_city
from
  ea e
join
  ea m on (e.manager_id = m.id)
;

Nested Selects

One nice feature of SQL is that it allows for nested structures: views inside of queries, queries inside of other queries, selects inside of selects, etc. Often you just need the values from a single column of a table. Instead of joining to the table, you can use a nested select.

In this example, we are getting the city value for an employee using a Nested Select:

select
  e.name employee_name,
  (
    select
      a.city
    from
      address a
    where
      a.employee_id = e.id
   ) employee_city
from
  employee e
;

Conclusion

Hopefully you learned something new. Learning common query structures such as the ones presented here will help to make you a SQL ninja (whatever that means).

Everything is an Abstraction

The amount of data available to us at any given moment is staggering. Right now, as I stand in front of my Mac, the air conditioner is blowing at a certain decibel level, pumping out air at an exact temperature and velocity. The paint on the wall has a level of shine at a distinct hue. The light from the lamp behind me has an exact brightness and color. Literally, there are terabytes of data available to analyze at any given second. If my brain were to process even a fraction of that data, it would be overwhelmed (and might even explode).

The brain has a great facility for coping with the copious amounts of information flooding us at any given moment, and that mechanism is filtering. Over millions of years of evolution, our brains have become very good at filtering out the unimportant details (things that will not kill us) and only letting through the tidbits essential to our survival. Our brain is an abstraction machine. It simply doesn’t have the power to create a 100% accurate portrayal of the world we live in. It must create a mental model of the world, with only the important details, so it doesn’t choke from information overload. Our mental model might not be 100% correct, but, for the most part, it is good enough to get us through the day without any major incident.

Good Object-Oriented Software Is All About Abstractions

Object-oriented programming works because it allows us to think in the way our brains are designed to: with objects that model the real world. Just as the amount of data in the wild is staggering, the complexity of a computer program, while not at the same level, is still overwhelming to even the smartest among us. By thinking in terms of objects and the actions those objects can perform, we can reason in smaller segments that we can understand without being overwhelmed.

People have a good understanding of the world because their minds have created good abstractions. Good software also uses the right level of abstraction. The main features of OOP, polymorphism and inheritance, allow us to control the level of abstraction of our programs. With too much abstraction, the program may be highly flexible to future change, but very difficult for others to understand. There is always a balance between flexibility for future change and not making the code too difficult to understand. Just as our understanding of the world requires the right level of mental abstraction, so does our OOP software.