Let’s Talk About CSS Naming (pt 1)


CSS can bring you to an air-guitar-celebrating high, or an over-the-top face-palming low. When I started my web development journey, CSS was just as confusing as the programming language I was trying to learn. It was a pain to style those messy elements into the beautiful layout you had in your head. Some of the confusion did go away eventually, but another issue with CSS stayed. It wasn’t CSS properties, or even how to properly use them. After all, those questions could simply be answered with a quick Google search. The problem was naming.

There are only two hard things in Computer Science: cache invalidation and naming things. — Phil Karlton

For a non-developer, the problem seems so silly. Naming things? How difficult can that be? It does seem quite trivial at first from an outsider’s perspective, but any developer who has even begun to build projects, regardless of size, will tell you that naming deserves more than passing attention. It can be the deciding factor in your code’s readability and, consequently, its maintainability. Bad naming habits will also make others less receptive to reading your code.

This practice carries over to CSS as well, specifically to CSS class names. When we begin to style our webpages, we usually create a separate CSS file, declare the class names there, and then link that stylesheet to our HTML file. We then use those class names and IDs on our HTML elements. Seems pretty simple, right? For small, personal projects with very little HTML, you probably don’t have to worry too much about how the classes are named. A couple of chunks of CSS with vague naming won’t cause a lot of readability issues.

As web applications grow, however, and the number of HTML components increases, developers need to focus on the way they name their classes. They need to begin to follow good CSS naming practices.

There are many conventions out there trying to solve the problem of organizing CSS markup. Some of these are explicit tools that need to be used, while others are simply naming conventions to be followed. Let’s take a look at a few.

CSS Pre-processors (SASS / LESS)

A CSS pre-processor can be thought of as a tool that “extends” vanilla CSS. It gives a developer the option to create and store CSS variables, write mixins that can be re-used throughout the stylesheets, and a host of other features that help streamline the styling process. The resulting modularization also leads to better-organized code and greater project maintainability. When using a pre-processor like SASS, you’ll also need a tool to compile the file into plain CSS. This is usually done with a build tool like Webpack or Gulp.
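As a quick sketch of what this looks like in practice (the variable and mixin names here are invented for illustration), a SASS (.scss) file might contain:

```scss
// A variable and a mixin, two of the features a pre-processor adds on top of CSS.
$brand-color: #3498db;

@mixin rounded($radius: 4px) {
  border-radius: $radius;
}

.button {
  background: $brand-color;
  @include rounded(8px); // re-use the mixin wherever it's needed
}
```

Running this through the SASS compiler produces plain CSS that any browser understands.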

OOCSS (Object Oriented CSS)

OOCSS is a naming guideline with two core principles. The first is to “separate the structure from the skin”. Structure refers to the repeating layout styles (widths, heights, margins), while skin refers to the visual traits (colors, borders, shadows). Instead of baking both into one class like “tv-box-container”, you’d define the structural styles in one class and the skin in another, then combine them on the element. By doing so, you avoid the need to reference the actual element itself in CSS, which causes problems when you need to re-use certain styles.

The second principle is to “separate the container and the content”. Instead of referencing an element using “.tv-container h2”, you’d ideally give that particular h2 a class of its own. By doing so, you can ensure that all h2’s without that class look the same, and any element that does need the styling can simply add the specific class name.
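A small sketch of the difference (the selectors are invented for illustration):

```css
/* Coupled: only h2 elements inside .tv-container receive the style,
   and nothing else can re-use it. */
.tv-container h2 {
  color: gray;
}

/* Separated: any element that needs the style simply adds the class. */
.section-title {
  color: gray;
}
```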

BEM (Block, Element, Modifier)

BEM brought a lot of sanity to larger projects that have a lot of different HTML components. Under BEM conventions, a standalone component (a block) might be named “tv”. An element that lives inside it takes the block’s name plus a double underscore, such as “tv__screen”, and a variant of a block or element is marked with a modifier, such as “tv--retro”. By following this convention, you are essentially scoping your styles to very particular blocks of HTML, minimizing the chance of conflicts between styles spread across a large number of CSS files.
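A minimal sketch of the naming scheme (the selectors are invented for illustration):

```css
.tv { display: inline-block; }           /* block: a standalone component */
.tv__screen { background: black; }       /* element: a part of the tv block */
.tv--retro { border: 8px solid tan; }    /* modifier: a variant of the block */
.tv__screen--off { background: gray; }   /* modifier applied to an element */
```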

By following the BEM convention, your HTML components become more re-usable, since the styles that they depend on contain unique class names and references.

More in part 2




Imagine for a moment that your mom wanted you and your friends to make a cake. The most obvious way to do this would be to carefully listen to the instructions, and then go off by yourselves to make the cake. After you’ve finished, you return to your mom and show the results.

Sure, that is a method that could work. But due to its non-communicative nature, things might go wrong. What if you present the final cake, but your mom realizes that she wanted brown sugar instead of chocolate flakes as the third layer of the cake? It would be very costly to make those changes.

A better way – an agile way to carry out this mission would be to break the making of the cake into smaller steps. Bite-sized pieces that you can frequently show to your mom. For example, in hour one you build up the first layer of the cake. Then you show this to your mom and see what she thinks. If she likes it, then she will tell you to proceed. If not, she can suggest changes. Due to your mom seeing all the progress as the cake is being made, changes are much easier to implement. If you do need to make changes, you can simply update one component instead of the entire cake.

This is the general idea behind agile development, but it’s a very, very shallow introduction.

The main idea here is that those developing the project adhere to short-term goals, usually called stories. Engineers are assigned specific tasks to complete. After assignment, they enter what is known as a “sprint”, meaning they will try to implement the specific story and report back once it’s complete. These sprints usually last anywhere from two weeks to a month. On top of all of this, a quick daily meeting, called a scrum, is held to assess progress. These scrums can help identify problems that may be holding the implementation back. At the very end of a sprint, the engineers gather and create new stories that need to be completed in a following sprint. This process continues until the final product is created.

During the initial phase of fleshing out stories, engineers have to fully understand the purpose of their application. This involves creating various user profiles that represent potential categories of real users. A power user is going to have much more intricate knowledge of your product than a casual one, so the engineers have to create two user profiles, one for each. There are many potential user profiles, and the engineering team should try to cover as many as possible.

Once user profiles are set up, the team can begin throwing out stories for each type of user. For example: as a power user, I want an advanced configuration settings page so that I can customize the application further. These types of stories have to be written out for each feature of the app. Once the stories behind each feature are written, the engineers will understand their application a whole lot better.

Once stories are fleshed out, the sprinting begins. Teams meet each morning for a quick meeting, or a “scrum”. These scrum sessions begin with each engineer stating the previous day’s accomplishments, and then reporting their tasks for that day. Engineers will also report any blockers they have encountered while implementing a story: confusions, bugs, or other problems that arise during development. The meetings move as quickly as possible, and usually never go over fifteen minutes.

Once the scrum session is over, the engineers return to their tasks of implementing the story for that sprint.

These sprint sessions usually last a short period of time. At the very end of a sprint, the team will assess the development progress of the application, and decide on new stories. They will then conduct more sprints to implement those stories.

The cycle goes on until the final product is produced. It is easy to see that such a process has many advantages over the traditional method of “assign and forget”. Agile is not a methodology, but simply a different way of approaching a problem: break down large problems, provide constant feedback for each chunk of a feature, and organize in a way that optimizes production time and efficiency. Of course, it does have its flaws. There are countless companies that incorrectly implement scrum, or abuse it with their employees. On the other hand, agile becomes a valuable asset to any team willing to stay true to the process, and provides a tactical way of tackling a seemingly impossible task.

The Big O

There was a time when developers needed to know the number of operations executed when their code ran. Computers were much slower back then, and it was important to optimize software so that no precious memory or processing power was wasted. Nowadays we possess extremely powerful computers capable of processing hundreds of millions of operations per second. As a result, many developers don’t care too much about the number of operations, up to a certain point. When dealing with massive amounts of data, however, optimization becomes critical, especially if you are trying to squeeze out a couple more milliseconds of performance.

Thus, Big O notation was born! The mathematical notation O( function ) was created to help computer scientists understand the number of operations executed when a particular function runs on an input of variable size. Simply put: how long does a function take to process some amount of information? What is important to understand here is that Big O does not focus on the actual amount of data being operated upon. When analyzing time complexity, what matters is how the number of operations grows as the input size grows. Big O does not give us the exact number of operations, but it provides developers with a general idea of performance.

It may not sound so important for smaller amounts of data, and that’s completely true! If you only had a couple thousand pieces of data lying around, optimization would bring very little difference to the table. Our modern processors are more than capable of handling such tasks, even when badly optimized. However, what if you were to run a badly optimized function on a dataset in the tens or hundreds of millions? At a certain point, the different growth rates will begin to show their true colors. A linear time operation, O(n), grows in direct proportion to the input, while a time complexity of O(n^2) will quickly explode. Conversely, a function with a complexity of O(log(n)) scales better and better as the amount of data grows.

Let’s imagine that we had a dataset containing 2000 elements.

An O(1) time complexity will cost a total of 1 operation.

An O(n) time complexity will cost a total of 2,000 operations.

An O(log(n)) time complexity will cost roughly 11 operations (using log base 2).

An O(n * log(n)) time complexity will cost roughly 22,000 operations.

An O(n^2) time complexity will cost a total of 4,000,000 operations.

See the differences in cost? 2,000 elements is a negligible amount of data to be concerned about, but the gap between time complexities is already apparent. What if we were to use a significantly larger dataset? Let’s say, 100 million? Now we’re starting to get somewhere. Now optimization becomes extremely important.
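To watch those growth rates fall out of real code, here is a small sketch (the search functions are invented for illustration, and count their own operations) comparing a linear scan, O(n), against a binary search, O(log(n)), over the same 2000 sorted elements:

```javascript
// Linear scan: in the worst case, touches every element once.
function linearSearch(arr, target) {
  let ops = 0;
  for (let i = 0; i < arr.length; i++) {
    ops++;
    if (arr[i] === target) return { index: i, ops };
  }
  return { index: -1, ops };
}

// Binary search: halves the remaining range on every comparison.
function binarySearch(arr, target) {
  let ops = 0, lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    ops++;
    const mid = (lo + hi) >> 1;
    if (arr[mid] === target) return { index: mid, ops };
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return { index: -1, ops };
}

const data = Array.from({ length: 2000 }, (_, i) => i);
console.log(linearSearch(data, 1999).ops); // 2000 operations
console.log(binarySearch(data, 1999).ops); // 11 operations
```

Same dataset, same answer, and yet the operation counts diverge exactly the way the table above predicts.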

So what does this mean when we deal with hundreds of millions of data points? In certain industries, such as finance, where high-frequency trades are executed every millisecond, optimizing time complexity can mean the difference between winning and losing millions. In other industries, such as e-commerce, providing a good user experience means keeping load times down between pages as tons and tons of data is being processed. A long load time will result in a dramatic decrease in users.

This is why it’s still important to understand this concept. When we analyze the Big O of a given function, we are looking for its worst possible scenario. There are important questions to ask: where are operations being wasted by the millions and billions, and what can we do to lower that amount and make our code faster and more competitive?


Shell Scripts: Save precious development time!

What is a script?

You can think of it in the context of a play, television show, or movie. The actor takes the script, and executes each and every line. The script tells them what they can and cannot say. In the context of shell scripts, our computer is the one reading and following the directions.

A shell script, in essence, is a plain text file containing a series of instructions that are carried out by the operating system. In other words, a program. These instructions are usually commands we type ourselves, like find, ls, or touch. There are times, however, when we need to automate a long, tedious series of commands. Never fear, for there is a large device sitting in front of us that is incredibly good at such mundane tasks. This is where the power of shell scripting comes into play.

Shell scripts are generally created for useful tasks that need to be repeated, using the same sets of commands each time. Not only that, but we also create shell scripts for constructs that would normally not be typed manually on the command line. These include conditionals (if/else/case), loops, variables, and functions.
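As a small sketch (the function and values are invented for illustration), here are a few of those constructs that feel at home in a script but would be awkward to type interactively:

```shell
#!/usr/bin/env bash
# A variable, a function, and an if/else: natural in a script,
# clumsy to type one line at a time on the command line.
favorite_food="pizza"

is_pizza_good() {
  if [ "$1" = "pizza" ]; then
    echo "YES"
  else
    echo "Maybe."
  fi
}

is_pizza_good "$favorite_food"
```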

When you create a plain text file and enter a list of commands, you have created a shell script. The computer will proceed to read the script and execute it sequentially. This can be done via the command line. Let’s create a simple shell script together.


Create a Basic Shell Script

In the command line, create a new file in the present working directory and name it anything you’d like.

touch is-pizza-good

Now that we have a new file, let’s populate it with instructions! The first line tells the current active shell to execute the script inside the Bash environment. Without it, the system would have no clue which interpreter should run our script. Here is the line below.

#!/usr/bin/env bash

Now that we’ve established a working relationship with the Bash environment, we can begin doing whatever we want with the script.

#!/usr/bin/env bash
echo "YES" > the-answer.txt
echo "The answer awaits you inside this directory!"

Remember, anything that you would normally put on the command line can be put into a shell script.

Executing our Shell Script

Before we are able to run our script, we need to give it adequate permission; otherwise you will get an error saying that permission has been denied. To fix this, we simply use a single command that tells the operating system our script is safe to run. The chmod command allows you to change the permissions of files and folders, and the +x mode grants our script the execute permission it needs.

chmod +x is-pizza-good

Now that we’ve cleared up the permission conflicts, we can proceed to run our script! To run it, simply type:

./is-pizza-good

If the script has run successfully, you will see the output of the commands being executed. This may be harder to detect visually depending on the nature of the script.

"The answer awaits you inside this directory!"


As you execute a shell script, you can add additional arguments to the end of the line. These arguments can be used within your script to deliver dynamic output. The example below shows how parameters might be used in a basic script.

Here, we have a script called webify-name that takes in an argument and outputs it to an HTML document inside an h1 header.

#!/usr/bin/env bash
echo "Preparing to webify the name $1"
echo "<h1>$1</h1>" > name.html
echo "The name $1 has been webified!"

The program will look for the $1 parameter, and use it to execute the script. Let’s try running it!

chmod +x webify-name
./webify-name benjamin

The program sees that the string benjamin is passed in as the $1 parameter position. Now, every single $1 in our script is essentially replaced by the string benjamin. We have made a dynamic program!

As we’ve learned, $1 represents the first parameter position. Below is the format for adding additional parameters.

./default-shell-script $1 $2 $3...
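Inside the script, each $N expands to the Nth argument, and $# holds the argument count. A tiny sketch (the function and names are invented for illustration) shows the same mechanism inside a function, which receives positional parameters exactly the way a script does:

```shell
# greet receives its own $1, $2, and $# when called,
# just as a script receives them from the command line.
greet() {
  echo "Hello, $1 and $2! You passed $# arguments."
}

greet benjamin ada
```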

Using these parameters, we can create cool little commands that carry out useful actions and save lots of time!

#!/usr/bin/env bash
echo "Extracting $1 into $2/backup2..."
mkdir "$2/backup2"
tar -xzf "$1" -C "$2/backup2"
echo "Done!"

Alright! Let’s take what we’ve learned so far to create a bash script that sets up a basic website folder template.

#!/usr/bin/env bash
echo "Creating a new file structure of a website at $1/src"

mkdir "$1/src"
mkdir "$1/src/js"
mkdir "$1/src/css"
mkdir "$1/src/images"

touch "$1/src/js/main.js"
touch "$1/src/css/style.css"
touch "$1/src/index.html"

echo "<h2>$2</h2>" >> "$1/src/index.html"
echo "h2 { color: gray; }" >> "$1/src/css/style.css"

echo "Finished!"

Though there is a maximum number of parameters a shell script will accept, it is a very large number, so you will rarely have to worry about reaching the limit. One detail worth knowing: parameters beyond the ninth must be referenced with braces, such as ${10}.

The River of Data: A Brief look at Reactive Programming

Re-reactive What!?

You probably just read the words “reactive programming” and immediately checked out. Even though it is a rather complicated topic, you need not worry. By the end of this tutorial, you will have gained some solid knowledge of it.

Remember, a healthy dose of struggling is essential for the learning process. So sit back, relax, and enjoy the ride. Let’s begin.

Back to the Beginning

Take a look at the expression below.

a = b + c

If the variables “b” and “c” were both 5, then what would “a” equate to? Hopefully, you were able to conclude that “a” would equal 10. Now, what would happen if we changed the value of “c” to 6? Mathematically speaking, the correct answer would be 11. We changed one of the variables, and therefore the outcome changes immediately. Makes sense, right?

Funnily enough, there are a few quirks that occur when we bring the example above into the world of programming. Let’s take a look at our little algebraic expression using imperative programming.

var b = 5, c = 5;

var a = b + c;

c = 6;

What do you think “a” will be this time? If you answered “10”, you are correct! In imperative programming, the evaluation of “a” would never change after it is run, even if we attempted to change the variables to affect “a”. This is imperative programming at its very essence. In order to change “a”, we would need to run another statement explicitly telling the program to change the variable “a” to another value.

Conversely, in reactive programming, variable evaluations update when their dependencies change. In the case of “a = b + c”, “a” reacts to the changes happening with “b”, “c”, and the particular operation being performed.
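Here is a tiny JavaScript sketch of that idea (the shape of the code is invented for illustration, not a real reactive library): by defining “a” as a computation instead of a snapshot, reading it always reflects the latest values of its dependencies.

```javascript
let b = 5;
let c = 5;

// "a" is a derived value, not a one-time assignment,
// so it "reacts" to any change in b or c.
const a = () => b + c;

console.log(a()); // 10
c = 6;
console.log(a()); // 11, with no explicit reassignment of "a"
```

Real reactive libraries automate this re-evaluation and push the new value to anyone depending on it, but the core idea is the same.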

Give Me a Real Life Example!

Fair enough. Open an Excel spreadsheet on your computer. Now, enter a simple formula into one of the cells.

=B1 * C1

If you changed the values of B1 and/or C1, all the cells affected by this formula would update instantly. They react to a shift in the values fed into the original formula. Pretty cool, right? As you can probably imagine, reactive programming is extremely powerful. For example, it can be very useful when creating responsive user interfaces and animations.

Are you still confused? Don’t worry. Here’s another example showing off the power of reactive programming.

What if the Excel formula were replaced with a function that maps an air-pollution graph depending on a user’s selected location? Thanks to reactive programming, we are able to see the graphical interface change in response to user interaction even as the app is running! Something changes in our graph function, which immediately invokes a response.

You’ve made it this far! Give yourself a pat on the back! You should now understand the basic differences between imperative and reactive programming. Go make some tea, and do some pushups, because we’re about to dive into some of the core concepts surrounding reactive programming.

Degrees of Explicitness

Up to this point, we have been using very basic examples of reactive programming in action. Our “a = b + c” expression is a very explicit form of reactive programming, because we are literally pushing the flow of changes in one direction.

(b, c) -> + -> a

Our Excel example had a bit more complexity, since our cell formula dictates the output of every single cell tied to that formula. Here’s an easier way to imagine the evaluation of the Excel cells. Imagine each of the cells as a little delivery box, and our entire spreadsheet as a factory. For each of the delivery boxes in our factory, we are going to call on our formula to do something with that box. We can change the contents of the box, label the box, or even put the box into a bigger box. The possibilities are endless. Here is some pseudo code for what I just described.

factoryStream = [box, box, box, box, box, box]

factoryStream.map( box => formula( box ) ).subscribe( result => doSomething( result ) )

Our formula is acting upon all the boxes inside the factory, and doing something with each of them. We end up with a single formula pointing to a number of boxes, all with varied outcomes. Pretty cool, right?

But wait! There’s more! You can even become more implicit with the direction of transformations using reactive programming. What if we wanted to find a certain group of boxes, or even a specific box? In reality, we would need to go over each of the boxes, find the desired ones, and put them in a separate container. Then, we would need to take that container and find the specific box. We can see that this particular example is much more implicit than the previous examples.

Static and Dynamic

This concept is quite simple. Static reactive programming conveys data flows that are set up statically without any form of dynamic input during runtime. Our first example “a = b + c” is static, because we are literally assigning values to “b” and “c”.

A dynamic reactive program can change its data flows during run time. If you’ve ever worked in Photoshop, and created custom colors, you will notice that the color preview changes as you drag the cursor around the color wheel. This is a great example of dynamic reactive programming, because the data flows are being updated as the application is running.

Higher-order Reactive Programming

Higher-order reactive programming sounds like some kind of crazy, advanced terminology. Fear not! This concept is simply stating that data flows from one reactive evaluation can be used to determine the data flow for another evaluation. We actually went over a basic example of higher-order reactive programming in our example with the factory and the boxes.

Let’s get a little more specific. For this example, we will work with a big box full of movies. What if we wanted to find the most expensive movie that was made before the year 2000? Think about the actual steps you would take to physically accomplish that task.

We would need to filter out all the movies that were made before 2000 and put them aside. Now that we have a pile of movies made before 2000, we can then compare prices until we find the highest priced movie. Below is the scenario in pseudo code.

movieStream = [movie, movie, movie, movie, movie, movie]

movieStream.filter( movie => madeBefore2000( movie ) ).reduce( findHighestPrice ).subscribe()

This is seriously cool stuff. You can see that the data stream we get from filtering the movies is then used to find the highest priced movie. We can chain various actions together, using a multitude of data streams to achieve the desired results. Does your head hurt yet?
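To make that chain concrete, here is the same idea sketched with plain JavaScript arrays rather than a reactive stream (the movie data is invented for illustration):

```javascript
const movies = [
  { title: "A", year: 1995, price: 10 },
  { title: "B", year: 2005, price: 25 },
  { title: "C", year: 1998, price: 18 },
];

// Step one produces a filtered stream of pre-2000 movies;
// step two consumes that stream to find the highest price.
const mostExpensiveBefore2000 = movies
  .filter(m => m.year < 2000)
  .reduce((best, m) => (m.price > best.price ? m : best));

console.log(mostExpensiveBefore2000.title); // "C"
```

The output of one transformation feeds the next, which is exactly the higher-order flavor described above.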

Differentiated Reactive Programming

Computers are fast and furious. They process large amounts of data with ease, almost in an instant. However, there are times when we need to specify the order of evaluation for data streams. Remember higher-order reactive programming? What if a certain data stream was not processed on time, but was needed for another reaction to happen? Bug alert.

Here is where data flow differentiation comes in. We can allow the evaluation of certain data streams to occur first, before setting off the evaluation of other data streams.

If this is still confusing, think about that time when you had to follow a recipe. Did you take all the ingredients you got from the supermarket and toss them all into the pan at once? Obviously not. There is a very specific set of instructions that you must follow, one after the other. You might have to chop up the onions before tossing them in the pan, and you might have to do that only after the oil is sizzling in the pan. It’s a data stream full of onions! Perhaps at some point, you are allowed to toss in multiple items, but these actions are still carried out under guidance from a very explicit set of instructions.

Evaluation Models

When data is changed in reactive programming, that change pulses outwards to all data derived partially or completely from the original. Unfortunately, a naive implementation of reactive programming may be problematic for certain data structures. There are data structures where the time it takes to process each data flow compounds exponentially. This is not something we want.

A solution to this problem is the proper utilization of differentiated reactive programming. By controlling the timing of data flows, we can tell the program to evaluate a certain variable only when it is needed.


We use this programming paradigm alongside a host of other paradigms, including imperative, object-oriented, and functional programming. We’ll save functional reactive programming for another time, but we can talk about imperative and object-oriented reactivity.

Imperative programming can be used to act upon reactive data structures, creating one-way data flow constraints. Think of a data graphic that changes depending on other streams of data. Once the data changes, we can then use imperative code to recreate the graphic.

Creating reactivity within object-oriented programming is quite different. Instead of methods, objects possess reactions that re-evaluate when other reactions have changed. It’s quite powerful. Imagine each and every status update from your Facebook feed used as a data stream to invoke a series of reactions. Perhaps the status that has the most “likes” from your friends will have higher priority on your newsfeed.

Confused? Let’s explore the Facebook example in depth using a popular JavaScript reactive programming library called RxJs.

The Facebook Feed

Above is a non-reactive model of the Facebook feed. How do we get data from this feed? We need to loop through the array of comments until we reach the end, and then utilize the information that we get from the traversal. Is there a better way to keep track of new comments in the feed?

Now check out our model as a stream using an Observable. As it receives streams of information, it can perform individual transformations on the streams.

In a reactive model, the data source contains all the concepts and behaviors that it needs to determine when it has new data, when an error happens, or when it completes.

An Observable is simply RxJs’s way of representing a reactive data source. A reactive data source produces data over a period of time, and at some point will either error out, complete, or never complete until the process is terminated. Whenever an Observable gets data, it produces a result.

You can also attach many pipelines of transformation, called subscriptions, to a particular Observable. For example, you could pipe the new data into a particular format and then display it on a web page.

Once subscribed to an Observable, all subscribers will receive an update when the Observable receives new data.
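To ground the idea, here is a toy sketch of that subscription model in plain JavaScript. This is an illustration of the concept only, not RxJs’s actual implementation, and the names are invented:

```javascript
// A minimal "feed": it remembers its subscribers and pushes
// each new post to every one of them.
function createFeed() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    push(post) { subscribers.forEach(fn => fn(post)); },
  };
}

const feed = createFeed();
const seen = [];

// Two independent pipelines reacting to the same data source.
feed.subscribe(post => seen.push(post.toUpperCase()));
feed.subscribe(post => seen.push(post.length));

feed.push("hello");
console.log(seen); // [ 'HELLO', 5 ]
```

Both subscribers receive the update the moment the feed gets new data, which is the essence of the push model described above.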

Let’s tie in this new knowledge to better understand the two models of the Facebook feed above.

In the first example, we are pulling data from the model by iterating through it, getting the information, and then doing something with that information. In the second example, the data is being pushed to us without any further instructions. We then subscribe to that data, receive new streams of updates as they come, and then do something with that update.

The second, reactive model represents a uni-directional data flow, because our data source becomes a kind of entry point for our application, and our business logic depends on that data source as well. Notice how we don’t need to do anything until the data changes. Hence: reactive programming!

RxJs in Action

We’ve been talking about this library in very theoretical terms, so let’s see how it actually works when writing the code.

tweetStream                                      // an Observable of incoming tweets
  .filter(s => s.message.includes("Greetings"))  // keep only the greetings
  .flatMap(s => getUserAvatar(s.user))           // fetch each sender's avatar
  .map(a => $('<img>').attr('src', a.url))       // build an image element
  .subscribe($avatar => { $avatars.append($avatar) })

In the example above, we start with an Observable that produces a new value every time a tweet comes in. We filter out the tweets that do not contain the word “Greetings”. The next step is particularly interesting: the flatMap method takes a function that returns a promise or an Observable, and merges its result into the stream. After we get the information from our getUserAvatar function, we map the return value to a new image element. Finally, we subscribe to the entire pipeline, and each result of that pipeline is an $avatar element.

When using reactive programming, you need to think in three steps: what data we want, what transformations to apply, and what to do with the final result at the end of all the transformations.


You’ve made it to the end! Hopefully, you now have a grasp on some of the basic concepts of reactive programming! They say that the journey of a thousand miles begins with a single step. Congratulations on taking the plunge into this new programming paradigm. There are many more articles, tutorials, and videos for you to scour and devour. Maybe you’ll even go on to build a reactive app!

Data Bending


We respond to data differently depending on its presentation.

A tightly filled table with sprawling labels and endless depth does nothing to engage the viewer, and if the contents are foreign enough, you’ll find a user lost in a sea of confusion. There are various ways to transform data into living, breathing entities. We can colorize, animate, and build the interface in ways that help convey the insight within. Flow is also important: we need to think about data in relation to the user story and how it contributes to the application as a whole. If all of this information is at the forefront, it needs to be engaging. First impressions matter.

Thankfully, good design decisions make all the difference in the world. You’ll convert new users who share the same data-driven appreciation. In a world brimming with I/O, it has never been more important to present data in meaningful ways.

Data is becoming more intimidating, and its never-ending volumes present tricky problems. To obtain appropriate information from massive amounts of data, we utilize machine learning techniques. Researchers in this field investigate the ways in which programs can “intuitively” consume data on scales so massive it would be impossible otherwise. The amount of data humans and programs output each year has become too large to consume manually, and it is becoming vital to deploy programs that teach themselves the filtering process. Unfortunately, machine learning is beyond the scope of this post, and perhaps this entire blog. We will focus instead on the steps that follow a successful data extraction.


Developing mean visualization skills requires a bit of patience, and small, data-driven projects are the way to go. Once you’ve extracted the necessary information, it’s onward to finding an optimal way to present it. There is a myriad of approaches a developer can take, but the main concern is app functionality. Who’s the audience? What’s the user story? Hopefully, you’ve had enough time to think through these questions. There are times when the presentation of the data matters less, especially when the end user has no need for such levels of analytics. A recipe application does not need ten pie charts examining the various hours in which the user cooked with sugar instead of salt.

But then there are occasions where beautiful data presentations are vital. Is your extracted data telling a story? Do you depend on the data to construct a bridge between the user and some complex analytics? If you need some hits of inspiration, head on over to the New York Times Interactive and see how the visualized data brings another dimension of depth and context to its articles. Examine some effective designs and ask yourself about the effectiveness of the data being presented. Could it be better, or worse?


Now that you’ve had some inspiration for visualizing data, you can dive into a wide array of tools made just for the task. Do you speak Javascript? Try heading over to D3.js and play around with the sample code until you get a feel for the various configurations. There are plenty of tools out there, each facilitating the creation of visual data. Find a few that you enjoy working with, and build a few toy projects with them.
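To get a feel for what a library like D3 automates, here is a minimal sketch of the underlying mapping: turning numbers into scaled SVG rectangles with nothing but string building. The barChartSVG function and its shape are illustrative, not from any real library.

```javascript
// Map an array of numbers to an SVG bar chart string (no library involved).
function barChartSVG(data, width, height) {
  const max = Math.max(...data);
  const barWidth = width / data.length;
  const bars = data.map((value, i) => {
    const barHeight = (value / max) * height;  // scale value to pixels
    const x = i * barWidth;
    const y = height - barHeight;              // SVG y-axis grows downward
    return `<rect x="${x}" y="${y}" width="${barWidth - 2}" height="${barHeight}" />`;
  });
  return `<svg width="${width}" height="${height}">${bars.join('')}</svg>`;
}

const chart = barChartSVG([4, 8, 15, 16], 200, 100);
```

Every visualization tool is, at heart, doing this kind of data-to-geometry mapping for you, plus scales, axes, transitions, and interactivity on top.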

Embrace the overload of the information age and learn to swim through its currents, and you’ll be well on the way to transforming ordinary stats into captivating statements.

Your promise is awaiting…

Promises solve a lot of issues that callbacks trip over, helping to clean up readability by avoiding callback hell and endless callback jumps. They also empower us with more control over asynchronous code, providing .then and .catch blocks. Even so, there are still times when these wonderful little tools falter.

Have you ever tried implementing a series of complex conditionals inside a Promise chain? It becomes quite ugly. You might end up with multiple nested promises, perhaps even six or seven layers deep. Sprinkle in a few if and else statements, and you’ve got a recipe for disaster.

You could flatten those nested Promises so that each .then block returns a new Promise. Even then, you’d be dealing with a large number of chained Promises, and it would still be messy to keep track of the return value in the tenth Promise block. What if you had to keep track of every value returned in every single one of the blocks?

function showUsers() {
  return findUsersFromDB()
    .then(users => {
      if (users) {
        return processResults(users)
          .then(attr => {
            // keep going!?
            return attr;
          });
      } else {
        // we love nesting!
      }
    });
}

One option would be an object that is passed through each .then block, gathering information along the way. Another option would be a global object that every block has access to. Both are acceptable, but come with their own downfalls that may eventually trigger bugs that are extremely hard to squash. Globals can easily be accessed and changed, while passed objects can become tricky to track in deeply nested blocks.

Thankfully, ES2017 comes with a few tricks up its sleeve. Async functions are a feature implemented in Node 8.x, and they alleviate a lot of the problems encountered when one relies solely on Promises and/or callbacks to handle all the heavy lifting. What’s even cooler is the fact that an async function returns a promise itself!

So what exactly is an async function? It is syntactic sugar built on the backs of generators and promises (it is quite interesting to see what an async function does behind the scenes). An in-depth explanation is beyond the scope of this post, but simply put, it is another way of writing asynchronous code. It makes asynchronous code look and behave much like synchronous code, and provides a cleaner interface for doing so.

Here is how the general syntax looks.

async function showUsers() {
  const users = await findUsersFromDB();
  if (users) {
    const attr = await processResults(users);
  } else {
    // whoo no nesting!
  }
}

It is important to note that you can only use await directly inside the body of an async function. If you declare another, non-async function inside an async function and try to use await there, you will get a syntax error. It’s fairly easy to stumble upon this error, so be aware.

This implementation is not a cure-all for every problem, and in some cases (such as concurrency) you are better off utilizing Promises directly. However, an async function always returns a Promise – meaning anything it returns can be chained with a .then as well!

The difference between the two examples above is night and day. The code is easier to read, understand, and with a small modification, debug as well. To handle error cases, simply wrap the contents of the async function in a try/catch block. The catch block will receive any errors thrown inside the async function and can route them to your error handlers. By doing so, you are taking care of synchronous and asynchronous errors simultaneously.
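Here is a sketch of that try/catch pattern. findUsersFromDB is a hypothetical helper, simulated below with a rejecting promise so the catch path is visible.

```javascript
// Hypothetical database helper, simulated with a rejection for illustration.
const findUsersFromDB = () => Promise.reject(new Error('database offline'));

async function showUsers() {
  try {
    const users = await findUsersFromDB();
    return users;
  } catch (err) {
    // Rejected awaits and synchronous throws both land here.
    return { error: err.message, users: [] };
  }
}
```

Since showUsers itself returns a promise, a caller can still chain .then or .catch onto it just like any other asynchronous function.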

It is important to understand that while async functions are a great tool to have, they can turn your code into a series of single await expressions that must wait on each other in order to execute. Serial code. This means that the most efficient way to run async operations that could execute in parallel is to utilize Promise methods like Promise.all, resolving all of your awaited promises at once.
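The serial-versus-parallel difference can be sketched like this; delay is a stand-in for any real asynchronous work.

```javascript
// delay resolves with `value` after `ms` milliseconds.
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

async function sequential() {
  const a = await delay(50, 'a'); // waits ~50ms
  const b = await delay(50, 'b'); // then waits another ~50ms
  return [a, b];                  // ~100ms total
}

async function concurrent() {
  // Both timers start immediately; Promise.all resolves once both finish.
  const [a, b] = await Promise.all([delay(50, 'a'), delay(50, 'b')]);
  return [a, b];                  // ~50ms total
}
```

Both functions return the same values; the difference is purely in how long the awaits take when the underlying work is independent.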

Remember, it is important to notice when code that could be running concurrently is currently running sequentially. Async functions are yet another set of tools that can be mixed and matched with other asynchronous coding methods, and a good understanding of generators and promises will yield greater results in one’s ability to utilize async functions.

Please add sugar first (and some testable code)

Why do I have to write all these tests? It’s more code! What’s the point?

The amount of code in a comprehensive unit test might equate to the amount in the production module that’s being tested. So why double the code?

Think about that application you just built. It contains a myriad of features, and a few are particularly complex. How long would it take you to parse through the code base, find that one spot you need to test, and then proceed to implement it? Think about it. Your day would go far smoother simply writing out some extra lines of code that target your test specifications.

But wait. There’s more.

Sure, you can start a project by writing the functional code first, and slap on those tests at the very end. However, doing so risks creating code that is harder to test, with structural problems such as dependencies and modules organized in ways that make them difficult to exercise once those tests start to run. Think of testing as the sugar you need to add when making some delicious brownies. If you never add the sweet stuff, no amount of toppings is going to make for a tasty core. It works the same way with those assertions: the entire blueprint of the code changes when you start the development process with a test-driven mindset.

At this point you might ask: What exactly is code that is “hard to test”?

It’s actually quite easy to spot. Let’s say you are writing a unit test for a particular module. The difficulty of writing that unit test will depend on the structure of the code. Does the module contain a giant function with multiple degrees of functionality? Are there heavily nested logical operations that utilize different private functions and variables? At some point, you begin to lose control of the code.

Yes, you can still manage to write that unit test. But how clean will it be? How clear and concise will your assertions look if the subject of its operations is a kerfuffle’d code base? A test that is hard to read is essentially useless.

This is why it is important to have tests baked into your code, and not added in as an act of desperation. It is important to understand that the very act of writing tests while developing changes the mindset of how you present the code.

You may be a brand-new developer or a veteran with decades of experience; writing tests can seem daunting at first. But never fear. In the end, testing is a skill like any other. It’s going to take a lot of practice, but it will lead to better code, cleaner tests, and a bunch of happier developers!


A Promise to Escape the Realm of Callback Hell

Let’s begin with a story.

You are creating an image gallery dedicated to steak lovers around the world. As a way of adding a little spice to the presentation, you decide to allow five steaks to fade onto the screen, one after another. In order to achieve this, you’ll want to make sure that the images are successfully loaded before you attempt to animate each one.

No problem right? You stretch out your hands, do a couple fist pumps, and proceed to unleash the fury and power of the Javascript callback unto the unsuspecting editor.

function imageCallBack(url, callback) {
  let img = new Image();

  img.onload = function() {
    callback(null, img);
  };

  img.onerror = function() {
    callback(new Error('Image was not loaded'));
  };

  img.src = url;
}

Let’s see how this will look when you try to chain a series of nested callback functions. We assume that a function named “animateImage()” will be called after the image has been successfully loaded.

imageCallBack('img/steak1.png', (err, steak1) => {
  if (err) throw err;
  animateImage(steak1, 'fadein');
  imageCallBack('img/steak2.png', (err, steak2) => {
    if (err) throw err;
    animateImage(steak2, 'fadein');
    imageCallBack('img/steak3.png', (err, steak3) => {
      if (err) throw err;
      animateImage(steak3, 'fadein');
      imageCallBack('img/steak4.png', (err, steak4) => {
        if (err) throw err;
        animateImage(steak4, 'fadein');
      });
    });
  });
});

Something seems wrong here. The code is starting to look real ugly.

Imagine if you had to debug five, six, seven, or even ten nested callbacks, each with its own layer of complexity. That would be absolutely terrifying.

Welcome to “callback hell”.

Looking at the code above, there is a ton of repetition in the error handling and far too many brackets. Is there a better way to handle such a request?

Thankfully, ES6 has given us the power of Promises, which allow us to rid the callback nesting that can occur when multiple asynchronous requests need to be made that also happen to depend on the outcome of each other.

Here’s the original callback function refactored and utilizing Promises.

function imagePromise(url) {
  return new Promise((resolve, reject) => {
    let img = new Image();

    img.onload = function() {
      resolve(img);
    };

    img.onerror = function() {
      reject(new Error('Image was not loaded'));
    };

    img.src = url;
  });
}

We see that the function now returns a new Promise object. The Promise constructor takes an executor function with two arguments: resolve and reject. As you might have guessed, a resolved Promise hands over its unwrapped value, while a rejected Promise returns an error.

Once you have a reference to a Promise object, you can call the “.then()” method to carry out an action if the Promise has been resolved.

The resulting code is much easier to read, understand, and debug.

const steakUrls = ['img/steak1.png', 'img/steak2.png', 'img/steak3.png', 'img/steak4.png', 'img/steak5.png'];

Promise.all(steakUrls.map(imagePromise))
  .then(steaks => {
    steaks.forEach(steak => animateImage(steak, 'fadein'));
  })
  .catch(err => console.error(err));

This is the power of Promises in Javascript. They accomplish the same things as normal callbacks, but possess a nicer syntax and the ability to be chained in various ways.

Does this mean that all callbacks should be replaced with Promises? Probably not.

There are times when utilizing a callback is necessary because the callback needs to be run synchronously and more than once. Think of Javascript’s “forEach()” Array method and how it might be built by utilizing a callback for every element in the array. In that particular scenario, a Promise would not be able to achieve what the callback is capable of doing.
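To illustrate the point, here is a sketch of how something like forEach might be built on a callback. myForEach is an invented name; the real Array.prototype.forEach behaves similarly.

```javascript
// The callback runs synchronously, once per element -- a Promise,
// which settles exactly once, could not model this.
function myForEach(arr, callback) {
  for (let i = 0; i < arr.length; i++) {
    callback(arr[i], i, arr);
  }
}

const doubled = [];
myForEach([1, 2, 3], n => doubled.push(n * 2));
// doubled is now [2, 4, 6]
```

A Promise-based version could only ever deliver a single result, which is exactly why callbacks remain the right tool for repeated, synchronous invocation.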

As a general rule of thumb, it is best if you use the right tool for the job. Promises shine when you need to make asynchronous requests that depend on multiple other asynchronous requests. If you ever find yourself chaining endless callbacks back to back, you may need to give your eyes a little rest and break out the Promises!


UX Teardown: Twitter

Thanks to the great curriculum at Viking School, I’ve been dipping my feet in the world of user experience. A good user experience makes a happy user and increases the likelihood of retaining your users. In this series of UX Teardowns, I will be taking a closer look at how the visual components of Twitter affect their users.

If an individual has spent any amount of time on the internet or watching the news, chances are good that they have encountered mention of a “tweet” or the Twitter app. Let’s take a look at the interface provided by Twitter and see how well it delivers a positive user experience.

Once the user has logged in, the landing page is quite intuitive.

[Screenshot: the Twitter landing page after login]

Twitter has definitely set some trends of its own in terms of user experience. For instance, the use of hash tags is now ubiquitous around the entire web, and most users recognize them on sight. As such, Twitter can litter the landing page with hash tags: the convention has been a big enough force on the internet that users immediately associate hash tags with trending topics.


[Screenshot: trending hash tags on the landing page]

Moving on to the navigation: the bar is spacious and present no matter which page you visit on the site. It contains a mix of icons and text to help with accessibility, and the primary functionality of Twitter is emphasized with a fancy “Tweet” button.

The navigation flows very well, and is quite linear. You get from point A to point B without any sort of confusion. The navigation bar is always there to bring a user back to A, if they wish to return to their profile page.

It’s hard to miss. A+ on that one. Good job Twitter.

Twitter is a profile-based social media website. As such, it attracts users from all walks of life, from an unknown programmer to the president of the United States. The goal of the user is to post short message snippets, known as “tweets”, to all of their followers, and Twitter has provided a very visual cue for where these tweets can be created.

[Screenshot: the profile page, with the prominent “Tweet” button]

On the left side of the profile landing page, we see the important information clearly laid out. Important statistics and top trends are all displayed in an easy and readable fashion.

Every single statistic takes the user to a relevant page, retaining a similar interface and keeping the website coherent. For example, clicking on “Following” brings the user to a list of followed accounts displayed like postcards. The flow is natural, and the design does not confuse the user.

Twitter’s information architecture follows a rather database-like pattern. You have users and followers who are all capable of posting tweets. There are relationships here, but not all of them depend on each other. Take public tweets as an example; they can be viewed by anybody and even embedded in other websites.

Twitter is all about getting those followers and tweeting to them. The website emphasizes this by good placement of the “Follow” buttons, along with the appropriate designs to make them stand out. The same goes for the “Tweet” button, as mentioned above. This is the case for every single user profile, and makes it easy for users to interact with the main functionalities of the app.

[Screenshot: “Follow” buttons on a user profile]

Twitter does an incredible job crafting and telling stories in user experience, and it shows. It is currently the hottest platform for every subject imaginable, including domestic and international politics, social uprisings, and various other world-changing events. This success is owed in part to an intuitive interface that makes it almost impossible to get lost. Colors are paired with contrasting elements that bring the UI forward, into the user’s view.

In terms of poor experience, there is very little to point out. One little quirk I noticed was the “Twitter” logo at the very middle of the navigation bar. It seems to be clickable like a link, but does not actually take the user anywhere. Perhaps it should be changed to either a functional link or an element that cannot be interacted with, so that users do not click it expecting to be taken to another page.