4elements, Amsterdam, Holland

  1. JavaScript Tools of the Trade: CodePen.io

    When I wrote about JSBin a while back, it definitely stirred up a lot of conversation and debate about which online code snippet editor was the best. A couple of alternatives were thrown out, especially CodePen.

    With so many readers feeling so passionate and committed to their specific choices, I wanted to do a writeup that highlighted some of the coolness of CodePen. I will say that this isn't a comparison article, mainly because I really hate doing those types of articles. Each one of these tools is unique and offers its own value, which in many cases boils down to user preference.

    Common Ground

    There are definite similarities between the tools, both in user interface and functionality. If you look at both tools, you'll see that the multi-pane UI is fairly common, and for good reason: it's very intuitive.

    JSBin:

    CodePen:

    The multi-pane approach makes it incredibly easy to visualize all of the facets of your sample code, allowing you to quickly update markup or JavaScript and get immediate results. And that's what these tools are generally for: quick prototyping and testing of code snippets.

    In addition to the UI similarities, both editors allow users to:

    • Reference third party libraries
    • Save code snippets for future use
    • Share snippets via custom links
    • Collaborate with other developers on the same code
    • Embed the code snippets into other pages
    • Lint JavaScript code

    From my perspective, these are all critical features for any code bin tool, allowing users not only to prototype front-end code using the assets they commonly use, but also to share it across most mediums used by developers. As a technical writer, the ability to embed a live snippet is incredibly important: it reinforces the concepts I'm writing about while giving readers working, interactive code to explore.

    In some cases, whether a feature is available boils down to cost. For example, JSBin offers collaboration for free, while CodePen reserves it for its Pro tier. And to be clear (and I've said this before), I think it's perfectly fine to charge for great features. So whether a feature is free or subscription-based is irrelevant in my opinion, as long as it brings value to the user.

    CodePen

    Of all the code bin tools I've used, I can certainly say that CodePen is by far the most aesthetically pleasing, and quite honestly, that makes sense. CodePen's front-end design was built by the extremely talented developer Chris Coyier, who has an amazing eye for user interface design. And it certainly shows in the polished look and feel of the tool.

    A lot of thought has been put into providing easy access to the multitude of important features while ensuring that the editor isn't cluttered and features aren't intrusive. This is important since viewport real estate, in this type of UI, is tight. Balancing everything, providing a decent coding experience across multiple languages, and delivering immediate results: that's tricky indeed, and it forces some careful thinking about layout.

    This is clearly evident in the use of well-placed icons within the headers of each script pane.

    Leveraging a commonly used UI icon for settings (a gear), you can see how the CodePen team has nicely consolidated quite a bit of important, complementary functionality that enhances the prototyping experience. This is what I mean by the UI not being intrusive: offering intuitive shortcuts to extra features. But it's more than just throwing in extra features. These are real-world tools that web developers are using every day, and they're important to getting a legitimate sense of whether a prototype will work or not.

    Features such as the "Details" view demonstrate CodePen's focus on providing not only a solid editing experience but a strong social angle that allows users to get better visibility into the usefulness of the code snippets they're sharing:

    Now, while both tools offer extensive complementary features, in my opinion CodePen demonstrates a clear slant towards providing better tools for designers via its rich support for Sass, LESS, and Stylus, including add-ons such as Compass, Bourbon, and Nib.

    But while it shines in CSS and markup, CodePen doesn't offer as much as JSBin when it comes to JavaScript library support. That's not to say that it doesn't include a lot of the major players, but with the plethora of new libraries and frameworks in use today, there's a clear difference in terms of built-in support:

    CodePen:

    JSBin:

    It's clear from these screenshots (which are only partial captures) that JSBin's JavaScript framework support is far more extensive than CodePen's, both in numbers and in supported versions. CodePen does offer the ability to include an external JavaScript resource in your code snippet, but the convenience of being able to click on a dropdown and choose from an array of JS frameworks is pretty useful.

    There is one feature, though, that is a total standout for me, and it's this:

    That's right. CodePen includes integration with one of my favorite browser testing tools, BrowserStack.com. I've written about the service before and absolutely love it for its testing simplicity and breadth of browser coverage, so seeing CodePen offer integration with it is a big plus. But it's more than that one button that makes it awesome. It's the dropdown next to it that allows you to decide which browser you'd like to target:

    ... which then directs you to BrowserStack with that same information:

    In terms of convenience, this is a definite win for developers. It's important to note that you will be redirected away from CodePen when you click on the BrowserStack button and you will need to have a BrowserStack account in order to use it.

    Going for CodePen Pro

    CodePen comes in a free edition and a more feature-rich Pro version. The Pro version includes features for live previewing of code across multiple devices, collaboration with other developers, theming of embedded code pens, and an intuitive "Professor Mode" which is very useful for online training and classes.

    Of the Pro features available, the two I find most useful are the Live View and Asset Hosting with the latter allowing developers to upload assets like images and script files which can be used directly in the code pen.

    You might be wondering why this is useful. Well, the alternative is to find an external hosting option like a CDN or your own server. Being able to directly upload your own custom JavaScript file, images, or stylesheet solves that problem and makes those assets readily available to use in your code pens. Once uploaded, it's a simple matter of clicking on the asset, grabbing its URL, and dropping it into your code:

    This feature alone merits the $9 per month for the service, and it seems unique to CodePen.

    The reason I really like the Live View feature is that testing across different form factors is incredibly important nowadays. Testing is done by sending the pen's link to the device you want to test with. That can be done by typing it directly into the mobile browser's URL bar or by using CodePen's share dialog to send a text message to the device:

    When the pen is updated on your computer, it almost instantaneously reflects the changes across any number of devices.

    CodePen is Well Built

    CodePen is a great tool. It's very polished and feature-rich, with fantastic support for markup and CSS tooling. The fact that some features are subscription-only doesn't faze me a bit, since I don't mind supporting good software. I will say that some features seem like they should be a standard part of the service, specifically private pens and live preview, especially when other services already offer them for free.

    In my opinion, having used both JSBin and CodePen, it's clear that they share very similar capabilities, and the choice of which service to use will ultimately come down to personal preference. Some may like the professional UI of CodePen, while others may appreciate the extensive breadth of JavaScript framework support in JSBin. It may just boil down to using both to accomplish different tasks, but I can say CodePen is certainly a worthwhile service to leverage, and I'll be adding it to my tools of the trade.

     


    1. Using New Relic to Monitor Your Servers

      A running application is not just a bunch of code; the code also has to run somewhere. I am talking about your production servers. It is just as important to ensure that your production boxes are behaving themselves as it is to make sure that your application code is performant. You can set up systems like Nagios to help you with this, but these can be extremely complex to work with, require significant infrastructure of their own, and can be total overkill unless your infrastructure needs genuinely demand them. New Relic provides a less full-featured but very simple alternative when it comes to infrastructure monitoring.

      If you've read some of our previous articles on New Relic, you should be right at home with how the New Relic dashboards work. The server monitoring dashboards use the same concepts. If you're already using New Relic, you can begin receiving data about your server performance very quickly. Even if you haven't previously set up New Relic, it may be worth using it just for server monitoring. The six or so dashboards that New Relic provides can significantly delay (or even entirely remove) the need for a more full-featured infrastructure monitoring solution.

      Why Do I Need a Service to Monitor Boxes at All?

      Depending on the needs of your application, you may have a web component, database, cache, search, load balancer, etc. Some of these may share the same box, but once your application gets beyond a certain size, you will start putting some of them on their own boxes. When you only have one production server, things are easy: you SSH into that box, run a few shell commands, and get a pretty good idea of the health of that one server. As the number of boxes grows, this becomes a bit of a chore. It would be handy to have a way to find out about the health of all your boxes at once. This is exactly the problem that New Relic server dashboards solve: you get a snapshot of the health of all your production servers at once.

      Of course, manually checking the health of all your servers is not the most efficient thing to do. When things go wrong, you want to find out as soon as it happens, not the next time you decide to check. Most infrastructure monitoring systems have a way to send alerts when particular parts of the monitored servers fail (e.g., disk full, excessive RAM usage, etc.). New Relic is no different. You can use its very flexible alert policy infrastructure to send failure notifications in any way you like, such as email or webhooks.
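      To make the webhook option concrete, here is a minimal sketch of the formatting logic such a receiver might apply before forwarding an alert to email or chat. The payload shape (severity, servers, message) is an assumption for illustration only, not New Relic's actual webhook format:

      ```javascript
      // Turn an alert payload (hypothetical shape) into a one-line summary
      // suitable for email, chat, or a pager.
      function summarizeAlert(payload) {
        var level = payload.severity === "critical" ? "CRITICAL" : "WARNING";
        return "[" + level + "] " + payload.servers.join(", ") +
               ": " + payload.message;
      }

      // Example: a disk-space alert covering two boxes.
      var summary = summarizeAlert({
        severity: "critical",
        servers: ["web-1", "web-2"],
        message: "Fullest disk > 90%"
      });
      // summary === "[CRITICAL] web-1, web-2: Fullest disk > 90%"
      ```

      The point is that a webhook channel hands you structured data, so you can route and format failures however your team prefers.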

      Lastly, infrastructure issues often don't appear suddenly, so historical context is important. RAM will slowly get eaten up for hours before the box begins to fail; the disk will fill up for days before things come to a head. Spot-checking your servers does not give you the historical context you need to prevent these issues. If you just happen to check the disk usage when it's getting a bit full, you can do something about it. If not, you only learn about the problem when your boxes die. New Relic collects data and sends it back to its servers all the time, so the dashboards are all about historical context. This makes it very easy to preempt certain classes of problems.

      It Works in Real Life

      Let me tell you a couple of stories. We use New Relic on Tuts+ for both application performance monitoring and server monitoring. A few months ago I was on call when our boxes started misbehaving every few minutes. They weren't quite falling over, but the application would perform very poorly for short periods of time. I logged on to the boxes and found that the memory usage was very high, so I rebooted the servers one by one and things seemed to be OK for a while. But a few hours later it all started happening again. This smelled like a memory leak.

      So I logged in to New Relic to have a look at the graphs. Sure enough, one of the deploys we did previously had introduced a memory leak into the application. It would take a few hours for all the memory to be consumed by the application, at which point it would go into a desperate garbage collection frenzy, causing all sorts of funny issues. Looking at the memory graphs on all the boxes, it was immediately obvious what was happening. At the time we didn't have any alerts set up (we do now), so we didn't become aware of the problem until it caused other issues to manifest. But being able to compare all the boxes to each other, as well as having the historical context, let me easily diagnose the problem, roll out a fix, and get to sleep on time that night.

      Here is another one. Recently there was an outage in the AWS datacenter where Tuts+ is hosted. When things finally settled down, we rebooted all the boxes to make sure there were no niggling issues. But when the boxes came back, the application would intermittently return 500 responses or perform very poorly some of the time. This was likely an issue with one or more of the servers, which is very annoying to diagnose when you have many boxes. Once again, looking at New Relic allowed us to surface the issue very quickly. One of our boxes came back with a rogue process which was consuming a lot of CPU, causing the app on that box to perform poorly. Another box was affected by some sort of AWS glitch which caused the disk IO utilization of that box to be 100%. We took that box out of our load balancer, got rid of the rogue process on the other one and the application started to perform fine again.

      The graphs New Relic provides are truly useful and I wouldn't want to do without them, so let me show how to get server monitoring up and running.

      Installing the New Relic Server Monitoring Agent

      Basically it all comes down to logging on to your server and installing the New Relic server monitoring daemon (nrsysmond). If you've read the New Relic for PHP article, the procedure is almost identical. As usual, let's assume we're on Ubuntu. 

      The first thing to do is to import the New Relic repository key:

      wget -O - https://download.newrelic.com/548C16BF.gpg | sudo apt-key add -

      Now we add the New Relic repository itself to the system: 

      sudo sh -c 'echo "deb http://apt.newrelic.com/debian/ newrelic non-free" > /etc/apt/sources.list.d/newrelic.list'

      Now we just use apt:

      sudo apt-get update
      sudo apt-get install newrelic-sysmond

      After it's finished installing, you will get a nice message like this:

      *********************************************************************
      *********************************************************************
      ***
      ***  Can not start the New Relic Server Monitor until you insert a
      ***  valid license key in the following file:
      ***
      ***     /etc/newrelic/nrsysmond.cfg
      ***
      ***  You can do this by running the following command as root:
      ***
      ***     nrsysmond-config --set license_key=<your_license_key_here>
      ***
      ***  No data will be reported until the server monitor can start.
      ***  You can get your New Relic key from the 'Configuration' section
      ***  of the 'Support' menu of your New Relic account (accessible at
      ***  https://rpm.newrelic.com).
      ***
      *********************************************************************
      *********************************************************************

      Let's do what it says. Firstly, let's jump into our New Relic account settings to look up our license key (it will be on the right):

      Now let's run the command:

      sudo nrsysmond-config --set license_key=<your_license_key_here>

      If you check the config file now (/etc/newrelic/nrsysmond.cfg), you'll see your license key in there. We're ready to start the agent:

      sudo /etc/init.d/newrelic-sysmond start

      You can now check your process list to make sure it is running:

      ps -ef | grep nrsys
      newrelic 10087     1  0 09:25 ?        00:00:00 /usr/sbin/nrsysmond -c /etc/newrelic/nrsysmond.cfg -p /var/run/newrelic/nrsysmond.pid
      newrelic 10089 10087  0 09:25 ?        00:00:00 /usr/sbin/nrsysmond -c /etc/newrelic/nrsysmond.cfg -p /var/run/newrelic/nrsysmond.pid
      ubuntu   10100  9734  0 09:25 pts/1    00:00:00 grep --color=auto nrsys

      As with the PHP agent, there are two processes: a monitor and a worker. The worker does the actual job of communicating with the New Relic servers; the monitor simply watches the worker and, if the worker dies for whatever reason, spawns a new one.
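      The supervision pattern the daemon uses can be sketched as follows. This is a toy, synchronous version that uses plain functions in place of OS processes, purely to show the control flow; the real nrsysmond monitor watches a separate worker process:

      ```javascript
      // Run the worker; if it dies (throws), respawn it, up to maxRestarts.
      function supervise(spawnWorker, maxRestarts) {
        var restarts = 0;
        while (true) {
          try {
            spawnWorker();   // worker runs until it returns (clean exit)...
            return restarts; // ...at which point supervision ends
          } catch (e) {
            restarts += 1;   // ...or dies, and the monitor respawns it
            if (restarts > maxRestarts) throw e;
          }
        }
      }

      // A worker that crashes twice before settling down.
      var attempts = 0;
      var restartCount = supervise(function () {
        attempts += 1;
        if (attempts < 3) throw new Error("worker died");
      }, 5);
      // restartCount === 2
      ```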

      We can also check the logs to make sure there were no errors on startup:

      cat /var/log/newrelic/nrsysmond.log
      2014-05-25 09:25:02 [10089/main] always: New Relic Server Monitor version 1.4.0.471/C+IA started - pid=10089 background=true SSL=true ca_bundle=<none> ca_path=<none> host=ip-10-196-10-195
      2014-05-25 09:25:03 [10089/main] info: RPM redirect: collector-102.newrelic.com(50.31.164.202) port 0 (0 means default port)

      Everything looks fine, and you should now start seeing data appear in the New Relic UI.

      Configuring the Server Monitoring Agent

      Most of the time you won't need to configure anything else beyond the license key, but if you do need to up the log level or configure a proxy, it is definitely possible. It all lives in /etc/newrelic/nrsysmond.cfg. The file is very well commented and pretty self-explanatory. If you do change anything, remember to restart the daemon:

      /etc/init.d/newrelic-sysmond restart

      There is only one subtle thing when it comes to configuring server monitoring, and that's the name of the server as it will be seen in the New Relic dashboards. By default, New Relic will take the hostname of the box (i.e., the output of the hostname command) and make that the name of the server in the dashboards. I recommend you keep it this way. If you're also using New Relic for application monitoring, keeping the hostname as the name of the server will ensure that New Relic can correctly work out which applications are running on which boxes and link everything up properly in the UI.

      If you really have to, you can change the name of the server as it will appear in the UI by setting the hostname= parameter in the configuration file: /etc/newrelic/nrsysmond.cfg. You will need to restart the daemon for this to take effect. You can also modify the name of the server directly in the UI which won't affect the daemon.

      Using the Server Monitoring Dashboards

      The first thing you see when you click the Servers link on the left is a snapshot of all your servers and the key metrics for all of them (CPU, Disk, Memory, IO). 

      This page can let you see if one or more of your boxes are obviously misbehaving. Here you can also rename a server or add tags to it, if necessary.

      If we click on one of the servers, we come to the main server dashboard:

      There are six main metrics here:

      • CPU usage
      • Memory usage
      • Disk IO utilization
      • Network IO
      • Load average
      • Process List

      This will give you a quick overview of a particular server. You can drill down into each of the graphs to get more information. For example, you can drill down into the CPU graph to see which processes are using the CPU:

      Or you can drill down into the disk graph to see your IO rate, a breakdown of reads and writes, as well as get an estimate of how long it will be before your disk is full.
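      New Relic's actual disk-full estimator isn't documented here, but the idea behind such a projection can be sketched with a simple linear extrapolation; the function name and the two-sample approach are illustrative assumptions:

      ```javascript
      // Fit the disk growth rate from two usage samples and extrapolate
      // linearly to estimate hours until the disk is full.
      function hoursUntilFull(usedGbThen, usedGbNow, hoursBetween, diskGb) {
        var growthPerHour = (usedGbNow - usedGbThen) / hoursBetween;
        if (growthPerHour <= 0) return Infinity; // usage flat or shrinking
        return (diskGb - usedGbNow) / growthPerHour;
      }

      // Disk grew from 70 GB to 80 GB over 5 hours, on a 100 GB volume.
      // Growth is 2 GB/hour, so the remaining 20 GB lasts ~10 hours.
      var eta = hoursUntilFull(70, 80, 5, 100);
      // eta === 10
      ```

      This is exactly the kind of projection that turns a spot check into an early warning: the trend, not the current number, tells you when to act.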

      The best part is you can use the same operations on all these graphs as you can on application-level graphs. So, you can zoom in on a five-minute window to look at a CPU usage spike more closely, or you can look at a seven-day trend in memory usage.

      Better still, the graphs are simple to understand, you're not overwhelmed with metrics, and you can compare similar boxes to each other. This can help you diagnose the vast majority of common problems you're likely to encounter with your infrastructure.

      Setting up Server Monitoring Alerts

      New Relic has recently done a lot of work to improve its alerting capabilities. Alert policies are what it has come up with across the whole system (e.g., there are application alert policies for applications and server alert policies for boxes). It may be a little confusing at first, but it is pretty simple once you get the hang of it. There are two main concepts: policies and channels. In terms of server alerts, it works like this:

      We set up a policy and assign some servers to it:

      You also create a channel (e.g. email, webhook) to which alerts can be sent:

      You then assign a channel to a policy. From that point on, depending on the settings for the channel (e.g., first critical event, all critical events, downtime only), you will get notifications on that channel.

      The only confusing bit about alert policies is where to find them. They live under Tools->Alert Policies:

      You then need to click on Servers in the menu at the top, to find server alert policies.

      Conclusion

      If you're already using an infrastructure monitoring solution like Nagios and it's working well for you, then you may not get too much extra from New Relic server monitoring (although the graphs and historical trends are pretty excellent). However, if you're not monitoring your infrastructure at all or your current solution isn't working for you, definitely give New Relic a try. For me, it has become the first tool I go to when I suspect that something is wrong with my servers. And often enough, it will let me know that trouble is brewing before the situation becomes critical. As developers, those are the kinds of tools we all want in our arsenal.

       


    1. Setting Up Firebase for Your Next Project

      In today's tutorial, we will get you up and running with Firebase by building a simple chat room application leveraging Firebase's JavaScript API. This application will provide you with the building blocks to develop more advanced real-time applications on your own projects.

      Getting Started

      In order to get Firebase up and running, you will have to create a free developer account by visiting their website and registering. Once you have successfully registered, Firebase will redirect you to your account dashboard, where you will have access to all of your Firebase data locations and other neat features. For now, select the Firebase data location entitled MY FIRST APP. Feel free to rename this application or create a new one.

      When the Firebase data location was created, it was assigned its very own unique host-name. This unique host-name is very important, because it is the location where your data will be read from and written to. We will discuss the host-name in more depth later in the tutorial, but for now:

      Let's Start Building

      The first item on the agenda: create a new HTML file that references the Firebase client, jQuery, and Bootstrap CDNs. It is quite obvious why we need to reference the Firebase CDN. It may not be as clear why we are referencing both jQuery and Bootstrap: I am using them for rapid application development, since both libraries let me do things very quickly without much hand coding. However, I will not be covering either jQuery or Bootstrap in any great detail, so feel free to learn more about these libraries on your own.

      The HTML

      The markup that implements what I described is as follows:

      <!DOCTYPE html>
      <html>
      <head>
      	<meta charset="utf-8">
      	<meta http-equiv="X-UA-Compatible" content="IE=edge">
      	<title>Firebase Chat Application</title>
      	<link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
      </head>
      <body>
      	
      	<script src="https://cdn.firebase.com/js/client/1.0.6/firebase.js"></script>
      	<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
      	<script src="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
      </body>
      </html>

      Now that we have our HTML file created and it is referencing the correct CDNs, let's begin working out the rest of our application.

      First, we need to determine what essential functionality this application will need. Most chat room style applications have two things in common: a message box for sending messages to a server, and a section that gets populated with messages from the server. In our case, the server is going to be our Firebase data location.

      Let's implement the message box for sending messages to the server first. This will require us to create a simple interface that includes an input field and a submit button wrapped within form tags. Since we are referencing the Bootstrap stylesheet, we have the convenience of using some predefined Bootstrap styles to create the interface. As I stated earlier, this is very convenient and allows us to write less code by hand.

      So let's first place a div with the class attribute container directly after the opening body tag within the HTML file. This is a Bootstrap feature that provides width constraints and padding for the page content. Within the container tags, let's add a title wrapped within h1 tags so that we can give the application a descriptive name. My title will be "Firebase Chat Application". Feel free to use your own creativity when writing your title.

      The markup that implements what I described above, looks like this:

      <div class="container">
          <h1>Firebase Chat Application</h1>
      </div>

      In addition, we need to add a div with the class attributes panel and panel-default. A panel is a Bootstrap component that provides a simple box containing four interior divs by default: panel-heading, panel-title, panel-body, and panel-footer. We will not be using panel-heading and panel-title, but we will use both panel-body and panel-footer. The panel div will be used as the main container for the chat room client.

      The markup that implements what I described above, is as follows:

      <div class="container">
          <h1>Firebase Chat Application</h1>
      
      	<div class="panel panel-default">
      		<div class="panel-body"></div>
      		<div class="panel-footer"></div>
      	</div>
      </div>

      At the moment, we will not be working with the panel-body. However, we will need to use this section later in the tutorial for populating messages from our data location.

      Right now we will be focusing on the panel-footer div. The panel footer will contain an input field, a submit button, and a reset button. We will give the input field an id attribute of comments and the submit button an id attribute of submit-btn. Both of these id attributes are very important and will be needed later in the tutorial. Feel free to alter the id attributes for the form elements.

      The markup that implements what I described above, is as follows:

      <div class="container">
          <h1>Firebase Chat Application</h1>
      
      	<div class="panel panel-default">
      		<div class="panel-body"></div>
      		<div class="panel-footer">
      
      			<form role="form">
      				<div class="form-group">
      					<label for="comments">Please enter your comments here</label>
      					<input class="form-control" id="comments" name="comments">
      				</div>
      
      				<button type="submit" id="submit-btn" name="submit-btn"
      					class="btn btn-primary">Send Comments</button>
      
      				<button type="reset" class="btn btn-default">Clear Comments</button>
      			</form>
      
      		</div>
      	</div>
      </div>

      Now let's implement the JavaScript that will allow us to send the message to our Firebase's data location.

      The JavaScript

      First we need to add a set of script tags directly above the closing body tag within the HTML file. Within the script tags, we need to create a reference to our Firebase data location; without this reference, we cannot write data to it. This can be accomplished by invoking the Firebase constructor and passing our data location as the parameter. Remember, the Firebase data location was created when you set up Firebase (the unique host-name).

      The code that implements what I described above, is as follows:

      var fireBaseRef = new Firebase("YOUR FIREBASE DATA URL");

      After initializing the Firebase reference object, we need to bind a click event handler to the submit button selector, which lives within the panel footer. We also need to ensure that the event handler callback contains a return false statement as its last line of code. This ensures that the default action of submitting the form does not occur, and prevents the event from bubbling up the DOM tree. (In some cases, you may want event bubbling to occur.)

      Either of the JavaScript snippets below implements what is described above:

      // Option 1: returning false stops both the default action and propagation.
      $("#submit-btn").bind("click", function() {
      	
      	return false;
      });

      // Option 2: call preventDefault() and stopPropagation() explicitly.
      $("#submit-btn").bind("click", function(event) {
      	
      	event.preventDefault();
      	event.stopPropagation();
      });

      Next, we will define a variable that references the comments selector and another that holds the comment's value with leading and trailing whitespace trimmed.

      The code that implements what I described above, is as follows:

      $("#submit-btn").bind("click", function() {
      	var comment = $("#comments");
      	var commentValue = $.trim(comment.val());
      	
      	return false;
      });

      Now we need to determine the right method for actually writing these comments to our data location.

      Writing Data to Firebase

      Firebase's API offers several methods to write data to a data location. However, in today's tutorial we are going to focus on using the set() and push() methods. Let's briefly review what each of these methods allow us to do.

      • The set() method will write data to the data location, as well as overwrite any data that is currently stored at the data location.
      • The push() method will write data to the data location by automatically generating a new child location with a unique name. This unique name is prefixed with a timestamp, which allows all the child locations to be sorted chronologically.

      After reviewing both the set() and push() methods, I think it is quite evident that we need to leverage the push() method in our application. Otherwise, we would continuously overwrite the latest comment at our data location, and that would be no fun.
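      To see why push()-generated names sort chronologically, here is a toy illustration. Real Firebase push IDs use their own compact encoding; this sketch just pairs a zero-padded millisecond timestamp with a counter, which is enough to show the property:

      ```javascript
      // Generate a toy push ID: fixed-width timestamp prefix plus a counter.
      // Because the prefix is fixed-width, lexicographic order of the keys
      // matches the chronological order in which they were generated.
      var pushCounter = 0;
      function toyPushId(timestampMs) {
        pushCounter += 1;
        var ts = String(timestampMs);
        while (ts.length < 13) ts = "0" + ts; // pad to fixed width
        return ts + "-" + pushCounter;
      }

      var first = toyPushId(1400000000000);
      var second = toyPushId(1400000000042);
      // Keys generated later always sort after earlier ones:
      // first < second  -->  true
      ```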

      To do this, let's jump back to the JavaScript that we previously added to our page. We now need to push the comment value to our data location. Keep in mind that push() accepts data in various formats, such as an object, array, string, number, boolean, or null. We will use an object with a single key-value pair: a comment key and the comment's value. In addition, we will attach an optional callback that fires after the push has finished. The callback receives an error object on failure, and null on success.

The code that implements what I described above is as follows:

      $("#submit-btn").bind("click", function() {
      	var comment = $("#comments");
      	var commentValue = $.trim(comment.val());
      
      	fireBaseRef.push({comment: commentValue}, function(error) {
      		if (error !== null) {
      			alert('Unable to push comments to Firebase!');
      		}
      	});
      
      	return false;
      });

Now let's add something to ensure that the chat room users aren't able to write blank messages to our data location. This can easily be accomplished by adding a simple if/else statement that checks the length of the trimmed comment value.

The code that implements what I described above is as follows:

      $("#submit-btn").bind("click", function() {
      	var comment = $("#comments");
      	var commentValue = $.trim(comment.val());
      
      	if (commentValue.length === 0) {
      		alert('Comments are required to continue!');
      	} else {
		fireBaseRef.push({comment: commentValue}, function(error) {
      			if (error !== null) {
      				alert('Unable to push comments to Firebase!');
      			}
      		});
      
      		comment.val("");
      	}
      
      	return false;
      });
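If you want the blank-check logic to be testable outside of the click handler, it can be factored into a small plain-JavaScript helper. The helper name below is my own invention, not part of the tutorial's code; the regular expression mirrors what $.trim() does:

```javascript
// Hypothetical helper: returns the trimmed comment text, or null when
// the input is empty or contains only whitespace.
function normalizeComment(rawValue) {
	// Strip leading and trailing whitespace, like $.trim()
	var trimmed = String(rawValue).replace(/^\s+|\s+$/g, "");
	return trimmed.length === 0 ? null : trimmed;
}
```

The click handler could then push a comment only when normalizeComment(comment.val()) returns a non-null value.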
      

Great, we've successfully completed the section of our application that allows users to write data to our data location. But we are still lacking the functionality that provides users with a real-time chat experience, which requires the ability to read data from the child locations within the data location.

      Reading Data from Firebase

As we mentioned earlier, most chat room style applications read data from a server and then populate a section of the interface. We will need to do the same thing in our application by leveraging the Firebase API.

Firebase's API offers several methods to read data from a data location. In today's tutorial, we are going to focus on using the on() method.

      The on() method has several arguments that are worth looking into, but we are only going to cover the first two arguments: eventType and callback. Let's review both of these.

      Selecting an eventType

      The "eventType" argument has several options. Let's look at each so that we are able to determine which will meet our needs.

• "value" - will be triggered once with all of the comments, and will be triggered again with the full set of comments every time any comment changes.
      • "child_added" - will be triggered once for each comment, as well as each time a new comment is added.
      • "child_removed" - will be triggered any time a comment is removed.
      • "child_changed" - will be triggered any time a comment is changed.
      • "child_moved" - will be triggered any time a comment's order is changed.

After looking over the above options, it seems quite clear that we should be using "child_added" as our "eventType". This event type will be triggered once for each comment at our data location, as well as each time a new comment is added. In addition, when a new comment is added, it will not return the entire set of comments at that location, just the last child added. This is exactly what we want! There is no need to return the entire set of comments when a new comment is added.
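To make the difference concrete, here is a toy, in-memory simulation of the two event types. This is not Firebase's implementation — just a sketch of the delivery semantics described above, with names of my own choosing:

```javascript
// Toy simulation (not Firebase): "value" listeners receive the whole
// list on every change, while "child_added" listeners receive only
// the newly added item.
function createToyStore() {
	var items = [];
	var listeners = { value: [], child_added: [] };
	return {
		on: function (eventType, callback) {
			listeners[eventType].push(callback);
		},
		push: function (item) {
			items.push(item);
			listeners.child_added.forEach(function (cb) { cb(item); });
			listeners.value.forEach(function (cb) { cb(items.slice()); });
		}
	};
}

var store = createToyStore();
var added = [];
var valueSizes = [];

store.on("child_added", function (item) { added.push(item); });
store.on("value", function (all) { valueSizes.push(all.length); });

store.push("first comment");
store.push("second comment");
```

After two pushes, the "child_added" listener has seen each comment exactly once, while the "value" listener received the full (growing) list each time.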

      Analyzing the callback

The "callback" for the on() method provides an item that Firebase refers to as a "snapshot of data", which has several member functions; today we will focus on name() and val().

The name() member function provides us with the unique name of the "snapshot of data". If you remember, earlier we used the push() function to write a new comment to our data location. When push() was called, it generated a new child location with a unique name, and that is the name returned by the callback's member function, name().

The val() member function provides us with the JavaScript object representation of the "snapshot of data". With this object, we will be able to retrieve a comment from our data location. However, we need to backtrack for a moment.

Earlier in this tutorial, we implemented the JavaScript needed to push comments to our Firebase location, and we did so by pushing an object with a key-value pair. In this case, the key was "comment" and the value was the input that the user entered. Therefore, if we want to extract a comment from our "snapshot of data", we need to recognize the correct data type. Since we are dealing with an object, we can use either dot notation or bracket notation to access the property.

Both of the JavaScript snippets below implement what is described above:

      fireBaseRef.on('child_added', function(snapshot) {
      	var uniqName = snapshot.name();
      	var comment = snapshot.val().comment;
      });
      
      fireBaseRef.on('child_added', function(snapshot) {
      	var uniqName = snapshot.name();
      	var comment = snapshot.val()["comment"];
      });
      

      Rendering the Comments

Next, let's create a simple, yet clean way to display each comment. This can easily be achieved by wrapping each comment within a div and labeling it with its unique name. Usually comments are labeled with the name of the user who wrote them; in our case, that is an anonymous chat room client.

The code that implements what I described above is as follows:

var commentsContainer = $('#comments-container');

// uniqName and comment come from the on('child_added') callback
$('<div/>', {class: 'comment-container'})
	.html('<span class="label label-info">Comment ' + uniqName + '</span>' + comment);
      

Next, we must append each comment to the comments container and then scroll the container to the bottom. This ensures that each time a comment is pushed to Firebase, every user of the chat application sees the latest comment. All of this must be done within the callback.

      It should look something like this:

var commentsContainer = $('#comments-container');

$('<div/>', {class: 'comment-container'})
	.html('<span class="label label-info">Comment ' + uniqName + '</span>' + comment)
	.appendTo(commentsContainer);

commentsContainer.scrollTop(commentsContainer.prop('scrollHeight'));

Now let's apply some simple CSS styles to the divs wrapped around each comment block. This will make the appearance slightly more attractive and user friendly. These styles should be added within the style tags located in the head section of the HTML.

The code that implements what I described above is as follows:

      .container {
      	max-width: 700px;
      }
      
      #comments-container {
      	border: 1px solid #d0d0d0;
      	height: 400px;
      	overflow-y: scroll;
      }
      
      .comment-container {
      	padding: 10px;
	margin: 6px;
      	background: #f5f5f5;
      	font-size: 13px;
      	-moz-border-radius: 5px;
      	-webkit-border-radius: 5px;
      	border-radius: 5px;
      }
      
      .comment-container .label {
      	margin-right: 20px;
      }
      
      .comment-container:last-of-type {
      	border-bottom: none;
      }

      Running the Application

It's now time to run our application. Let's begin by opening two instances of our favorite modern browser and placing them side by side on our desktop. Next, we will open the file we created in both browsers. Test it out by writing a few comments and enjoy watching the magic of Firebase.

It is unbelievable that only a couple of lines of code can produce such a powerful application. Feel free to edit this snippet in any way to produce your desired results.

      Check out the online demo to see it in action. Below is the complete source code for the entire chat room application:

      <!DOCTYPE html>
      <html>
      <head>
          <meta charset="utf-8">
      	<meta http-equiv="X-UA-Compatible" content="IE=edge">
      	<title>Firebase Chat Application</title>
      	<link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
      
      	<style>
		.container {
			max-width: 700px;
		}
      
      		#comments-container {
      			border: 1px solid #d0d0d0;
      			height: 400px;
      			overflow-y: scroll;
      		}
      
      		.comment-container {
      			padding: 10px;
			margin: 6px;
      			background: #f5f5f5;
      			font-size: 13px;
      			-moz-border-radius: 5px;
      			-webkit-border-radius: 5px;
      			border-radius: 5px;
      		}
      
      		.comment-container .label {
      			margin-right: 20px;
      		}
      
      		.comment-container:last-of-type {
      			border-bottom: none;
      		}
      	</style>
      </head>
      <body>
      
      	<div class="container">
      
      		<h1>Firebase Chat Application</h1>
      
      		<div class="panel panel-default">
      
      			<div class="panel-body">
      				<div id="comments-container"></div>
      			</div>
      
      			<div class="panel-footer">
      
      				<form role="form">
      					<div class="form-group">
      						<label for="comments">Please enter your comments here</label>
      						<input class="form-control" id="comments" name="comments">
      					</div>
      
      					<button type="submit" id="submit-btn" name="submit-btn"
      						class="btn btn-success">Send Comments</button>
      
      					<button type="reset" class="btn btn-danger">Clear Comments</button>
      				</form>
      
      			</div>
      		</div>
      	</div>
      
      	<script src="http://cdn.firebase.com/js/client/1.0.6/firebase.js"></script>
      	<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
      	<script src="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
      	<script>
      
      		var fireBaseRef = new Firebase("YOUR FIREBASE DATA URL");
      
      		$("#submit-btn").bind("click", function() {
      			var comment = $("#comments");
      			var commentValue = $.trim(comment.val());
      
      			if (commentValue.length === 0) {
      				alert('Comments are required to continue!');
      			} else {
      				fireBaseRef.push({comment: commentValue}, function(error) {
      					if (error !== null) {
      						alert('Unable to push comments to Firebase!');
      					}
      				});
      
      				comment.val("");
      			}
      
      			return false;
      		});
      
      		fireBaseRef.on('child_added', function(snapshot) {
      			var uniqName = snapshot.name();
      			var comment = snapshot.val().comment;
      			var commentsContainer = $('#comments-container');
      
      			$('<div/>', {class: 'comment-container'})
      				.html('<span class="label label-default">Comment ' 
      					+ uniqName + '</span>' + comment).appendTo(commentsContainer);
      
      			commentsContainer.scrollTop(commentsContainer.prop('scrollHeight'));
      		});
      
      	</script>
      </body>
      </html>

      In Summary

      In today's tutorial, we worked all the way through the process of implementing a simple chat room application by leveraging Firebase's JavaScript API. In doing so, we were able to experience the power of Firebase and gain an appreciation for its convenience. Below are some of the key items that we hit on today:

• Referencing a Firebase data location by calling the Firebase constructor.
      • Writing data to Firebase by using the push method.
      • Reading data from Firebase by using the on method with the event type "child_added".

      I hope this tutorial has given you the starting point you need to take things further with Firebase. If you have any questions or comments, feel free to leave them below. Thanks again for your time and keep exploring the endless possibilities of the Firebase API.


    1. Creating Reusable Forms in Symfony 2

      In this video, we'll build upon our existing knowledge of Symfony 2 to learn how to create reusable forms. We'll learn how to create a separate form class to house our form logic, build the form in a controller and then render it to the browser, from a template.

      In Conclusion

Now there are lots of other options that you can pass in while creating your forms to customize them to your liking. We'll see more of these in action as we proceed through the series.

      In the next video, we'll learn how to validate our data and process the form submission in order to prepare the data to be persisted to a database. Stay tuned for the next one! 

      Thanks for watching.

       


    1. Sharing Polymer Components: Part 1

In my last tutorial about the Polymer library, I explained how to take advantage of this great new tool to create reusable web components. The key point of the tutorial, and of using components, is to help our development by:

      • Encapsulating much of the complex code and structure
      • Allowing developers to use a simple-to-use tag style naming convention
      • Providing a suite of predefined UI elements to leverage and extend

      I'm still smitten with it and wanted to explore this a little more by checking out a new template the Polymer team released to make deployment and reuse substantially easier.

      The Canonical Path

One of the quirks of the Polymer development process that I didn't touch on was the disconnect between developing a component and actually making it available for reuse by others. Let's take a look at a snippet from my previous tutorial:

      <link rel="import" href="../bower_components/polymer/polymer.html">

The purpose of this code is to include Polymer core, the main API that allows you to define custom elements. For local development and testing, this works perfectly, but unfortunately the path specified will prevent you from sharing this component. The reason is that if another user installs your element with Bower (the preferred installation method), it will end up as a sibling of Polymer in their bower_components folder.

The problem is that the component will be looking for ../bower_components/polymer/polymer.html, which will be an invalid path. Components must always assume that their dependencies are siblings, so it should actually be looking for ../polymer/polymer.html. This is the "canonical path."
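Put side by side, the development-time import and the canonical, shareable import differ only in the path:

```html
<!-- Development-only path: works locally, breaks after a Bower install -->
<link rel="import" href="../bower_components/polymer/polymer.html">

<!-- Canonical path: assumes dependencies are installed as siblings -->
<link rel="import" href="../polymer/polymer.html">
```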

In chatting with the awesome Rob Dodson of the Polymer team, he said that the only way around this was to develop using the method I originally outlined and, once I was ready to share my component, convert all of the paths that reference bower_components over to ../ before publishing my element.

      This is obviously not ideal and I probably could've created some type of Grunt task to parse through my component files to make these updates. Thankfully, the Polymer team has been noodling on this and has come up with a creative solution that they call the untitled-element.

      The untitled-element Template

      When I mention untitled-element, I'm referring to a new boilerplate that's available to make creating reusable and deployable components substantially easier, by giving you a base foundation to work with. It helps to eliminate the issues I mentioned above by:

      • Providing guidance on proper directory structure
      • Incorporating an additional component that serves to document your API
      • Allowing you to easily demo your component during development and when sharing

The big win here is being able to develop your component without having to make substantial path changes that are not only cumbersome, but could break your element if you miss something.

      Currently, the project is a part of PolymerLabs as it gets put through its paces, but it's certainly looking pretty solid:

Now, the first thing you're going to want to do is create a development directory that will house your new component, as well as all of the Bower components you'll install. I called mine polymerdev. From there, you'll need to go to the untitled-element GitHub repo and download it into your new development folder. This is an important step, because you need to extract the contents into that folder to avoid the sibling-directory issues I mentioned previously.

      Extracting the .zip file will create a new folder called untitled-element-master which contains the boilerplate files you'll need to create your new component. You'll need to rename a couple of things based on the name of your component. This includes:

      • The untitled-element-master folder
      • untitled-element.html
      • untitled-element.css

      I'm going to recreate the Reddit element that I created in my last tutorial, so this is what the changes would look like:

      • untitled-element-master -> reddit-element
      • untitled-element.html -> reddit-element.html
      • untitled-element.css -> reddit-element.css

      Here's what the directory looked like before:

      And here's what it looks like after the updates:

The key thing to understand is that you'll be working inside of the reddit-element folder when creating your component, and in later steps, when we use Bower to install the Polymer components, that directory will be a direct sibling of the newly installed directories. I know I'm really harping on this point, but it's important to understand since it affects your ability to share your component.

To finish this off, you're going to want to update the references to your component name inside of the following files:

      • bower.json
      • demo.html
      • reddit-element.html

Each of these files contains references to untitled-element within the code that need to be updated. Remember that any reference to untitled-element should be changed to reddit-element. If you want to be absolutely sure, do a global search-and-replace using your editor.
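If you'd rather script the rename than trust a manual search-and-replace, the substitution itself is a one-liner. The function below is a hypothetical helper, not part of the boilerplate; you would feed it each file's contents (for example, read via Node's fs module):

```javascript
// Hypothetical helper: replaces every occurrence of the boilerplate
// name with your component's name in a file's contents.
function renameComponent(contents, oldName, newName) {
	// split/join replaces all occurrences without regex escaping
	return contents.split(oldName).join(newName);
}
```

You would run it once per file (bower.json, demo.html, and the element's own HTML file), writing each result back to disk.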

      Getting Setup for Bower

      Since Bower is the preferred method for installing Polymer and sharing components, let's go through a few steps to ensure that we setup the Reddit component's Bower configuration properly.

      By default, the boilerplate includes a bower.json file. This file is used to list several things, including the name of the component and any dependencies that need to be installed to use it. Here's what it looks like:

      {
        "name": "reddit-element",
        "private": true,
        "dependencies": {
          "polymer": "Polymer/polymer#master"
        }
      }
      

      First, I'll remove the private property since it'll prevent the component from being listed in the Bower registry. Since it's supposed to be shareable, I want it to be listed. Also, the Reddit component needs to make an Ajax call, so I'm including a dependency on the Polymer core-elements set of components which includes the Ajax functionality that I need. Lastly, I'll add a version number to track this going forward. Here's the end result:

      {
        "name": "reddit-element",
        "version": "0.0.1",
        "dependencies": {
          "polymer": "Polymer/polymer#~0.2.3",
    "core-elements": "Polymer/core-elements#~0.2.3"
        }
      }

The last bit of Bower configuration is to create a file called .bowerrc in the reddit-element folder, which defines a custom install location for our component's dependencies. It contains simple JSON data like this:

      {
         "directory": "../"
      }

This essentially tells Bower to install any dependencies one level up, so that they're siblings of the reddit-element folder. This is important because when someone installs our component with Bower, it'll be placed into the bower_components folder as a sibling to everything else in there (including Polymer). Structuring things this way ensures that we're developing in the same way that we'll eventually be sharing. It also allows us to use the canonical path I mentioned above, ensuring a consistent way of referencing other components in our code.

      Let's review the differences. If I didn't create the .bowerrc file and ran the bower install command, my directory structure would look like this:

      This installs the bower_components folder directly into the component's directory, which is not what I want. I want the reddit-element folder to be a sibling to all of the dependencies that it needs:

      This method ensures that when a user installs this component using Bower, the component and the dependencies will be installed properly into the bower_components folder.

      With the .bowerrc file added to your component's folder and the bower.json setup with the proper dependencies, the next step is to run bower install, to have Bower pull down the appropriate components and add them to the project.

      Coming Up Next

      In this tutorial, I wanted to make sure I laid a solid foundation for how to use the new Polymer boilerplate and some of the rationale behind the design choices that the team has made.

      In the next part of this series, I'm going to go over the new documentation component that's included in the boilerplate and how it will make sharing and demoing your component substantially easier. I'll also show you how to setup your component to be shared via the Bower registry.

       

