4elements, Amsterdam, Holland

  1. Website Launch Announcement

    DesignScan, "Get insight in your website," launches its site: www.designscan.me

    After three years of development in their spare time, DesignScan is excited to announce the launch of its new service and website. The website goes live today, Monday, June 2, 2014, at www.designscan.me.

    It is time to retake control of your own website. At the newly launched DesignScan.me, instead of endlessly browsing or talking with web design agencies, owners gain insight into their own website (design age, flaws, usability, and more). DesignScan.me features cutting-edge, in-house developed software that analyses your website design and reports back to the owner in plain, understandable language.

    DesignScan.me is on a mission to deliver clear, readable insights for website owners, and to bring two worlds together: website owners on one side, and web designers and developers on the other. In doing so, we pave the way by providing website owners with the tools and know-how, and providing designers and developers with potential new business and clients.


    Our website is divided into three main sections: MADE SCANS, DASHBOARD (after login) and SUPPORT. By moving to a more client-centric layout, we allow visitors to access information based on their own choice rather than sift through everything to decide what is of interest to them.

    In the MADE SCANS section, our current and new members will find detailed information about the companies we have made DesignScans for; this is also the place where you can start earning credits by participating in our “Vote & Earn” program. The DASHBOARD area is specially designed for our registered members: a place where your DesignScan results are shown and explained in plain, understandable language.

    The website will feature new types of rich content, inspired by our experience, gathered materials, and a great team of people working for DesignScan.me. You will find this content in the SUPPORT section in the form of articles, case studies, videos, presentations, an interactive forum, and live email (free one-on-one email help for members).

    We will update our website on a regular basis with news, events, product launches, and new content. We encourage our visitors to sign up for our newsletter, the first issue of which will be released at the end of July. This functionality will be available within the next two weeks, so please keep visiting our website: www.designscan.me.

    If you experience any problems using our website, or if you have any suggestions, please contact us at marketing@designscan.me.



    Posted in: Daily Sneak Peak

    1. New Course: Getting Started With Symfony 2

      Symfony 2 is one of the most popular modern PHP frameworks. It has the advantages of being modular, extensible, and feature complete. Our new course, Getting Started With Symfony 2, is designed for first-time users of the framework, and will walk you through all its basic coding features. 

      What You'll Learn

      We'll cover everything you need to get started with Symfony 2, such as bundles, templating with layouts, routing, building forms with validation, and how to build a full CRUD (Create, Read, Update and Delete) application with database interaction. 

      Tuts+ instructor Andrew Perkins will walk you through step by step how to begin developing web applications using the awesome Symfony 2 framework. You'll get some hands-on practice as well, as we work through the example of building a CRUD application for storing books. 

      By the end of the course, you'll have the fundamental skills necessary to build basic, database-driven web applications using the Symfony 2 framework.

      Watch the Introduction

      Start Learning With a 14-Day Free Trial

      You can take our new Symfony 2 course straight away with a completely free 14-day trial of a Tuts+ subscription. Start your free 14-day trial today to access this course and hundreds of others.



      Posted in: Daily

    1. Basic Functional Testing With Symfony 2’s Crawler

      Testing your web applications is one of the best things you can do to ensure their health, safety, and security, both for the app and your app's visitors. Symfony 2 offers a complete integration testing suite that you can use to make sure your applications run just as you expect. Today we'll look at how we can use Symfony 2 and PHPUnit, the testing framework that it employs, to write basic functional tests using the Crawler.


      Before we can begin any sort of testing, let's set up our project by downloading the Symfony 2 framework, configuring it, and then downloading PHPUnit.

      Installing Symfony 2

      The best way to download Symfony 2 is to use Composer. If you don't yet know what Composer is, be sure to check out a few of the awesome Tuts+ articles and courses on it; they'll bring you up to speed quickly.

      We'll first want to open our Terminal or command-line interface so we can issue a few Composer commands. Once in your Terminal, change directories into your local development webroot. For me, on OS X, this is my ~/Sites directory:

      cd ~/Sites

      Once in the proper directory, we can now use Composer to create a new Symfony 2 project which will download and install the framework plus any of its dependencies. 

      composer create-project symfony/framework-standard-edition crawling/ '~2.5'

      This command tells Composer to create a new project using the Symfony 2 framework in a new directory named crawling/, and we also specify a version constraint of ~2.5 (that is, the latest 2.x release from 2.5 upwards). If this is the first time you're downloading the framework, this may take a while, as there are a lot of libraries to download for all of the vendors. So you might want to take a quick break and come back in a few minutes.

      After the download completes, your Terminal should display an interactive wizard that will help you set up the configuration. It's very self-explanatory: just enter your own credentials, or accept the defaults as I have done:

      Configuring Crawler

      Once you enter in your config information, Symfony 2 is downloaded, installed and ready to be used. Now we just need to get PHPUnit so we can test our code.

      Installing PHPUnit

      To download PHPUnit, we can use a wget command in our Terminal to retrieve the .phar file, or just download it from their website; it's up to you:

      wget https://phar.phpunit.de/phpunit.phar

      With the .phar downloaded, we need to adjust its permissions and move it to a location where our Terminal and PHP have access to it. On my machine, running OS X, I moved it into my /usr/local/bin directory. I also renamed the file to just phpunit so I don't have to type the extension when running my tests, saving me a bit of time:

      chmod +x phpunit.phar
      sudo mv phpunit.phar /usr/local/bin/phpunit

      We should now be able to verify that PHPUnit was installed and is accessible via the Terminal by running the phpunit command. You should see something like this:

      Running PHPUnit

      Creating the Crawling Bundle

      Now we need a bundle to hold our application and test code. Let's create one using Symfony 2's console, from within our Terminal:

      cd ~/Sites/crawling
      php app/console generate:bundle --namespace=Crawling/FtestingBundle --format=yml

      Here, we first change directories into our crawling project and then use the console to generate a new bundle. We also specify the bundle's vendor and bundle name, separated by a forward slash (/). Lastly, we tell it to use YAML as the format for our configuration. You can use whatever format you'd like, and you can name your bundle however you prefer, as long as you give it a vendor name first and end your bundle name with the suffix Bundle.

      After running the above command, we again get a nice wizard to help complete the bundle installation. I just hit Enter at each prompt to take the defaults, as this keeps the entire process nice and simple and avoids any path issues that come from putting your files in custom locations. Here's a screenshot of my bundle wizard:

      Creating the Crawling Bundle

      How To Run Your Tests

      OK, we've got Symfony 2, PHPUnit, and our bundle; I think we're ready to learn how to run our PHPUnit tests alongside Symfony. It's actually really easy: just change directories into your crawling project and issue the phpunit -c app/ command to run all of your application's tests. You should get the following result in your Terminal:

      Running PHPUnit Tests

      When we generated our bundle, it also generated a little sample code for us. The test that ran above is part of that sample code. You can see that we have a green bar, letting us know that our tests passed. Right above the Time: 1.97 seconds, we also have a single dot, showing us that just one test was run. In the green bar we have our status of OK, as well as how many tests and assertions were run.

      So by running just this one command we know that our Symfony 2 app is installed, running properly, and tested! 

      Creating a Controller, Template & Route

      We now need some actual application code that we can test. 

      The Controller

      Let's start by creating a new controller class file and controller action. Inside of your crawling project, under src/Crawling/FtestingBundle/Controller, create a new file named CrawlingController.php and insert the following into it:

      <?php

      namespace Crawling\FtestingBundle\Controller;

      use Symfony\Bundle\FrameworkBundle\Controller\Controller;

      class CrawlingController extends Controller {
          // Our controller actions will go here.
      }

      In this file we just define our basic controller class structure, giving it the proper namespace and including the necessary Controller parent class. 

      The Controller Actions

      Inside of our class, let's now define our two simple controller actions. They are going to just render two different pages: a home page and an other page:

      public function homeAction() {
          return $this->render('CrawlingFtestingBundle:Crawling:home.html.twig');
      }

      public function otherAction() {
          return $this->render('CrawlingFtestingBundle:Crawling:other.html.twig');
      }

      The Templates

      Now we need to create the template files for these controller actions. Under src/Crawling/FtestingBundle/Resources/views, create a new directory named Crawling to hold our CrawlingController's template files. Inside it, first create the home.html.twig file with the following HTML inside:

      <h1>Crawling Home Page</h1>
      <p>Here's our crawling home page.</p>
      <p>Please visit <a href="{{ path('crawling_other') }}">this other page</a> too!</p>

      This just contains some basic HTML and a link to the other page.

      Now also go ahead and create the other.html.twig file, with this HTML inside:

      <h1>Other Page</h1>
      <p>Here's another page, which was linked to from our home page, just for testing purposes.</p>

      The Routes

      Finally, for our application code, let's define the routes for these two pages. Open up src/Crawling/FtestingBundle/Resources/config/routing.yml and enter in the following two routes, underneath the default-generated route that came with our route file:

      crawling_home:
          path: /crawling/home
          defaults: { _controller: CrawlingFtestingBundle:Crawling:home }

      crawling_other:
          path: /crawling/other
          defaults: { _controller: CrawlingFtestingBundle:Crawling:other }

      Here I define two routes, one for each of our controller actions. Each starts with the route's name, which we can use in links and elsewhere; then we specify the route's path, the URI used to access the page in the browser; and then we tell it which controller action it should map to.

      Now remember with YAML you don't want to use any tabs, always use spaces or your routes will not work!

      So, with just these two pages, basic and static as they are, we can still learn a lot about using Symfony 2's Crawler: we can test that the whole chain of controller, template, route, and links works as an integrated whole (a functional test), as well as ensure our pages display the correct HTML structure.

      Writing a Functional Test

      We're now ready to begin learning how to write functional tests using the Crawler. First, we'll create a test file.

      Creating Our Test File

      In Symfony 2, your PHPUnit tests are stored in your bundle's Tests/Controller directory. Each controller should have its own test file, named after the controller class that it's testing. Since we have a CrawlingController, we'll need to create a CrawlingControllerTest.php file inside src/Crawling/FtestingBundle/Tests/Controller, with the following class definition:

      <?php

      namespace Crawling\FtestingBundle\Tests\Controller;

      use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

      class CrawlingControllerTest extends WebTestCase {
          // Our test methods will go here.
      }

      Here we namespace our test and then include the WebTestCase parent class, which gives us our PHPUnit testing functionality. Our test class is named exactly the same as its file, and we extend the WebTestCase parent class so that we inherit its features.

      Now let's create a test method to hold our assertions that we'll make to test out our home page. Inside of our test class, let's create the following method:

      public function testHome() {
          // ...
      }

      Any time you create a test method using PHPUnit in Symfony 2, you must prefix the method name with the word test. Beyond that prefix, you can give the method any name you'd like, although the convention is to name it after the controller action that you're testing. So here, I've named mine testHome to follow that convention.
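      As a simplified illustration of why the prefix matters (this is not PHPUnit's actual discovery code, just a sketch of the idea), PHPUnit finds test methods by looking for public methods whose names begin with test:

```php
<?php

// A hypothetical, self-contained sketch: discover methods whose names
// start with "test", the way PHPUnit's convention works.
class SampleControllerTest
{
    public function testHome() {}
    public function testOther() {}
    public function helperMethod() {}
}

$testMethods = array();
foreach ( get_class_methods( 'SampleControllerTest' ) as $method ) {
    if ( strpos( $method, 'test' ) === 0 ) {
        $testMethods[] = $method;
    }
}

print_r( $testMethods ); // contains testHome and testOther, but not helperMethod
```

      Here, helperMethod would be ignored by the test runner, which is why non-test helpers inside a test class are safe to add.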

      The Client

      Now inside of our test method we need a way to simulate a browser so that we can issue an HTTP request to one of our routes and test that everything is working as we expect it to. To do this we'll create a client object by calling a static createClient() method:

      $client = static::createClient();

      We can now use this $client object to make that HTTP request and begin using the Crawler.

      The Crawler

      The Crawler is at the core of functional testing in Symfony 2 and allows us to traverse and collect information about our web application's page, as well as perform actions like clicking links or submitting forms. Let's define our Crawler object by making an HTTP request using the client. Add the following right under your $client object, in your testHome method:

      $crawler = $client->request('GET', '/crawling/home');

      This will return a Crawler object that we can use to test our home page. It lets us know that our page exists, that it has the correct HTML and formatting, and that the controller, template, and route all work as a unit.

      Testing the Heading & Paragraph

      To begin our functional tests, we want to assert that our home page contains the proper heading with the proper content inside of it. We use our $crawler object and its various methods to do this. These methods each return another Crawler object containing the filtered portion of the tested page's response. We'll then test this response to ensure everything is as expected.

      Add the following code to your testHome method:

      $heading = $crawler->filter('h1')->eq(0)->text();
      $this->assertEquals('Crawling Home Page', $heading);

      We start by calling our $crawler object's filter() method to filter the page's response and select all h1 elements. We can then chain on other method calls to filter our selection down even further. Here I use the eq() method, which accepts an index position of the h1 element we want to select; I chose index 0, the first heading. Finally, I chain on the text() method call, which returns this HTML element's text content, and store the result in a $heading variable.

      After filtering the h1 element that we want to test for, we now need to assert that we have the correct element. We do this using the assertEquals() method, which accepts as its first argument the value we expect the heading to have, and as its second argument the actual value from the returned response, which is our $heading itself. If the content matches what we expect, we know that we're on the correct page.

      Running the Heading Test

      So with just four simple lines of PHP code, we're able to test our home controller, template, and route. Let's run our test to make sure it passes. In your Terminal, from within your crawling Symfony project, run phpunit -c app/. You should see the following:

      Running the Heading Test

      Here we now have two tests and two assertions, all of which are passing! You can test the single paragraph below the heading in a similar way, but this time we'll use the first() method, like this:

      $para1 = $crawler->filter('p')->first()->text();
      $this->assertEquals("Here's our crawling home page.", $para1);

      If you rerun your tests, we now have three passing assertions. Good job!

      Testing Clicking a Link

      Now let's try testing the process of clicking our "this other page" link. It should take us to the other page and display the proper content there as well. Insert the following code into your testHome method:

      $link = $crawler->filter('a:contains("this other page")')->first()->link();
      $otherPage = $client->click($link);
      $this->assertEquals('Other Page', $otherPage->filter('h1')->first()->text());

      We start off by filtering our home page for a tags. We use the :contains() filter to match a tags by their content, making sure we select the correct link. We then chain on the first() method to grab the first one and call the link() method on it to create a link object, so that we can simulate clicking on it using our $client.

      Now that we have a $link object, we click it by calling the $client object's click() method, passing in the $link object and storing the response in the $otherPage variable. Just like request(), the click() method returns a Crawler object wrapping the response. Very easy!

      And then lastly, we just assert that our $otherPage's heading text is equal to what we expect it to be using the assertEquals() method. If it is, we know that our link works!

      Run Your Tests One Last Time!

      Let's now run our tests one final time to make sure that our link works correctly and that we're on the proper page after clicking it. Here's my Terminal results:

      Run Your Tests One Last Time

      We have two tests and four assertions, all of which are passing. App complete!


      And that's it. We've tested that our controller, controller actions, templates, and routes all work together; we know that the HTML and content for each element displays properly on the page, and that our link points to the proper location. Job well done.

      I now encourage you to try out what you've learned by testing out the other page, adding more HTML or links to it, and generally just getting a feel for using the Crawler to ensure your page is working as expected. 
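      As a hypothetical starting point for that exercise, a testOther() method could mirror the testHome() pattern above. This sketch assumes the other page template and the crawling_other route built earlier in this tutorial:

```php
public function testOther() {
    $client  = static::createClient();
    $crawler = $client->request('GET', '/crawling/other');

    // The heading should match the other.html.twig template we created earlier.
    $heading = $crawler->filter('h1')->eq(0)->text();
    $this->assertEquals('Other Page', $heading);
}
```

      From there, you could add assertions for the paragraph text, or extra links to the page and test clicking them, exactly as we did for the home page.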

      This tutorial just lays down the foundations for understanding the functional testing process in Symfony 2 using the Crawler. Stay tuned for an upcoming article on testing more dynamic, database-driven Symfony 2 applications.



      Posted in: Daily

    1. Creating Maintainable WordPress Meta Boxes: Verify and Sanitize

      Throughout this series, we've been creating a plugin that's meant to provide authors with a way to collect, manage, and save ideas and references to content that they're creating within WordPress.

      While doing so, we're also looking at ways that we can organize our plugin so that the code and the file organization is clear and maintainable so that as the plugin continues development, we're able to easily add, remove, and maintain its features.

      Up to this point, we've put together the basic file organization of the plugin as well as the front-end, but we haven't actually implemented functionality for saving information to the database. And if you can't save information, then the plugin is of little benefit to anyone.

      In this post, we're going to hop back into the server-side code and begin implementing the functionality that will:

      1. Verify the user has the ability to save post meta data
      2. Sanitize the post meta data
      3. Save the post meta data
      4. Validate and retrieve the post meta data

      We've got our work cut out for us. In this article, we're going to be looking at the first two steps and then in the next post, we'll be looking at the final two steps.

      Verifying Permissions

      In order to verify that the user has the ability to save post meta data, we need to implement a security check during the serialization process. To do this, we'll take advantage of a nonce value.

      A nonce is a "number used once" to protect URLs and forms from being misused.
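      To make the idea concrete, here is a minimal, self-contained sketch of the concept. This is not WordPress's actual implementation; the function names and the HMAC construction are purely illustrative:

```php
<?php

// Illustrative only: a nonce is derived from the action name and a secret,
// so a forged request that doesn't know the secret can't produce a token
// that verifies for that action.
function make_demo_nonce( $action, $secret ) {
    return substr( hash_hmac( 'md5', $action, $secret ), -12, 10 );
}

function verify_demo_nonce( $nonce, $action, $secret ) {
    return hash_equals( make_demo_nonce( $action, $secret ), $nonce );
}

$secret = 'per-user-session-secret';
$nonce  = make_demo_nonce( 'authors_commentary_save', $secret );

var_dump( verify_demo_nonce( $nonce, 'authors_commentary_save', $secret ) ); // bool(true)
var_dump( verify_demo_nonce( $nonce, 'some_other_action', $secret ) );       // bool(false)
```

      WordPress handles the secret, user, and expiry details for us through wp_nonce_field() and wp_verify_nonce(), which is what we'll use below.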

      1. Add a Nonce

      In order to introduce one into our meta box, we can implement the functionality in the markup that's responsible for rendering the post template. To do this, load admin/views/authors-commentary-navigation.php and update the template so that it includes a call to wp_nonce_field:

      <div id="authors-commentary-navigation">

          <h2 class="nav-tab-wrapper current">
              <a class="nav-tab nav-tab-active" href="javascript:;">Draft</a>
              <a class="nav-tab" href="javascript:;">Resources</a>
              <a class="nav-tab" href="javascript:;">Published</a>
          </h2>

          <?php

          // Include the partials for rendering the tabbed content
          include_once( 'partials/drafts.php' );
          include_once( 'partials/resources.php' );
          include_once( 'partials/published.php' );

          // Add a nonce field for security
          wp_nonce_field( 'authors_commentary_save', 'authors_commentary_nonce' );

          ?>

      </div><!-- #authors-commentary-navigation -->

      In the code above, we've introduced a nonce field (named authors_commentary_nonce) whose value corresponds to the action of saving the author's commentary, identified by authors_commentary_save.

      We'll see where this comes into play momentarily. For now, if you load your browser, you won't see anything new display. That's because the nonce values are displayed in a hidden field. 

      For those who are curious, you can launch your favorite browser's development tools, inspect the meta box, and you should find something like the following in the markup:

      <input type="hidden" id="authors_commentary_nonce" name="authors_commentary_nonce" value="f3cd131d28">

      Of course, the value of your nonce will be different.

      2. Check the Nonce

      In order to make sure the user has permission to save the post, we want to check three things:

      1. that the user is saving information for a post post type
      2. that the post is not being automatically saved by WordPress
      3. that the user actually has permission to save

      We'll write two helper functions to achieve the first and third, and we'll use some built-in functions to check number two (which will actually be used in the second helper function).

      First, let's go ahead and setup the hook and the function that will be used to leverage the helper functions and save the meta data. In the constructor for Authors_Commentary_Meta_Box, add the following line of code:

      add_action( 'save_post', array( $this, 'save_post' ) );

      Next, let's define the function. Note that I'm making calls to two functions in the following block of code. We'll be defining them momentarily:

      /**
       * Sanitizes and serializes the information associated with this post.
       *
       * @since    0.5.0
       * @param    int    $post_id    The ID of the post that's currently being edited.
       */
      public function save_post( $post_id ) {

          /* If we're not working with a 'post' post type or the user doesn't have permission to save,
           * then we exit the function. */
          if ( ! $this->is_valid_post_type() || ! $this->user_can_save( $post_id, 'authors_commentary_nonce', 'authors_commentary_save' ) ) {
              return;
          }

      }

      Given the code above, we're telling WordPress to fire our save_post function whenever its save_post action is called. Inside of the function, we're saying "If the post that's being saved is not a 'post' post type, or if the user does not have permission to save, then exit the function."

      Of course, we need to define the functions so that the logic works. First, we'll write the is_valid_post_type function as a private function of the current class. It will check the $_POST array to ensure that the type of post that's being saved is, in fact, a post.

      /**
       * Verifies that the post type that's being saved is actually a post (versus a page or
       * another custom post type).
       *
       * @since       0.5.0
       * @access      private
       * @return      bool      True if the current post type is a post; false, otherwise.
       */
      private function is_valid_post_type() {
          return ! empty( $_POST['post_type'] ) && 'post' == $_POST['post_type'];
      }

      Next, we'll add the user_can_save function. This function ensures that the post isn't being autosaved by WordPress and that, if a user is saving the post, the nonce value associated with the save action is properly set.

      /**
       * Determines whether or not the current user has the ability to save meta data
       * associated with this post.
       *
       * @since       0.5.0
       * @access      private
       * @param       int     $post_id      The ID of the post being saved.
       * @param       string  $nonce_action The name of the action associated with the nonce.
       * @param       string  $nonce_id     The ID of the nonce field.
       * @return      bool                  Whether or not the user has the ability to save this post.
       */
      private function user_can_save( $post_id, $nonce_action, $nonce_id ) {

          $is_autosave    = wp_is_post_autosave( $post_id );
          $is_revision    = wp_is_post_revision( $post_id );
          $is_valid_nonce = ( isset( $_POST[ $nonce_action ] ) && wp_verify_nonce( $_POST[ $nonce_action ], $nonce_id ) );

          // Return true if the user is able to save; otherwise, false.
          return ! ( $is_autosave || $is_revision ) && $is_valid_nonce;

      }

      Notice here that we're passing in the nonce_action and the nonce_id that we defined in the template in the first step. We're also using wp_verify_nonce in conjunction with said information.

      This is how we can verify that the post that's being saved is done so by a user that has the proper access and permissions.

      Sanitize the Data

      Assuming that the user is working with a standard post type and that s/he has permission to save information, we need to sanitize the data.

      To do this, we need to do the following:

      1. Check to make sure that none of the information in the post meta data is empty
      2. Strip out anything that could be dangerous to write to the database

      After we do this, we'll look at saving the information for each of the meta boxes. But first, let's work on sanitization. There are a couple of ways we can go about implementing this. For the purposes of this post, we'll do it in the most straightforward way possible: we'll check for the existence of the information based on its key and then, if it exists, sanitize it.
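      As a rough, self-contained sketch of that "check, then sanitize" pattern (the function name is hypothetical, a $post array stands in for the real $_POST, and core PHP's htmlspecialchars() stands in for WordPress's esc_textarea()):

```php
<?php

// Hypothetical sketch: check the key exists, then strip and encode.
// Not the plugin's actual code; htmlspecialchars() is a stand-in here.
function sanitize_text_field_demo( array $post, $key ) {
    if ( empty( $post[ $key ] ) ) {
        return '';
    }

    $value = trim( $post[ $key ] );
    return htmlspecialchars( strip_tags( $value ), ENT_QUOTES );
}

$post = array( 'authors-commentary-drafts' => '  <script>alert(1)</script>My draft notes ' );
echo sanitize_text_field_demo( $post, 'authors-commentary-drafts' );
// alert(1)My draft notes
```

      The real code below applies the same shape, key by key, with WordPress's own escaping functions.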

      Experienced programmers will likely notice some code smells in the code we're about to write. Later in this series, we'll do some refactoring to see how we can make the plugin more maintainable; that's all part of the intention of this particular post.

      With that said, hop back into the save_post function.

      1. Drafts

      Since the first tab that exists within the meta box is the Drafts tab, we'll start with it. Notice that it's a textarea, so the logic that exists for sanitizing that information should be as follows:

      • remove any HTML tags
      • escape the contents of the text area

      Recall that the textarea is named authors-commentary-drafts so that we can access it within the $_POST array. To achieve this, we'll use the following code:

      // If the 'Drafts' textarea has been populated, then we sanitize the information.
      if ( ! empty( $_POST['authors-commentary-drafts'] ) ) {

          // We'll remove all white space and HTML tags, and encode the information to be saved
          $drafts = trim( $_POST['authors-commentary-drafts'] );
          $drafts = esc_textarea( strip_tags( $drafts ) );

          // More to come...

      }

      Simply put, we're checking to see if the information in the $_POST array is empty. If not, then we'll sanitize the data.

      2. Resources

      This particular field is a little more involved because it's dynamic. That is, the user can have anywhere from zero to many input fields, all of which we'll need to manage. Remember that this particular tab is designed primarily for URLs, so we need to make sure that we're sanitizing the information accordingly.

      First, we need to make one small change to the createInputElement function that exists within the admin/assets/js/resources.js file. Specifically, we need to make sure that the name attribute is using an array so that we can properly access it and iterate through it when looking at $_POST data.

      Make sure that the lines of code responsible for creating the actual element look like this:

      // Next, create the actual input element and then return it to the caller
      $inputElement =
          $( '<input />' )
      		.attr( 'type', 'text' )
      		.attr( 'name', 'authors-commentary-resources[' + iInputCount + ']' )
      		.attr( 'id', 'authors-commentary-resource-' + iInputCount )
      		.attr( 'value', '' );

      Notice that the key to what we've done lies in the line that updates the name attribute. Specifically, we're using the current number of inputs as the index of the array.
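      PHP automatically turns those bracketed field names into an array in $_POST, which is what makes the iteration below possible. A quick self-contained demonstration, using parse_str() to simulate the submitted form data:

```php
<?php

// Simulate what the browser would submit for two dynamically-added inputs
// named authors-commentary-resources[0] and authors-commentary-resources[1].
parse_str(
    'authors-commentary-resources[0]=http://example.com&authors-commentary-resources[1]=http://example.org',
    $post
);

foreach ( $post['authors-commentary-resources'] as $index => $resource ) {
    echo $index . ' => ' . $resource . "\n";
}
// 0 => http://example.com
// 1 => http://example.org
```

      In the plugin itself, the same foreach shape runs over $_POST['authors-commentary-resources'] during save_post.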

      Next, hop back into the save_post function and add the following code (which we'll discuss after the block):

      // If the 'Resources' inputs exist, iterate through them and sanitize them
      if ( ! empty( $_POST['authors-commentary-resources'] ) ) {

          $resources = $_POST['authors-commentary-resources'];
          foreach ( $resources as $resource ) {

              $resource = esc_url( strip_tags( $resource ) );

              // More to come...

          }

      }

      Because we're working with an array of inputs, we need to first check that the array isn't empty. If it's not, then we need to iterate through it because we aren't sure how many inputs we're going to have to manage.

      Similar to the previous block, we're doing a basic level of sanitization and escaping. This is something that you can make as aggressive or as relaxed as you'd like. We'll be coming back to this conditional in the next post when it's time to save the data.

      3. Published

      This tab is similar to the previous tabs in that we're dealing with an indeterminate number of elements that we need to sanitize. This means that we're going to need to make a small update to the partial responsible for rendering this input.

      On the upside, we're only dealing with checkboxes, each of which has a boolean value of being checked or not (specifically, 'on' or absent), so sanitizing the information is really easy.

      First, let's update the partial. Locate admin/views/partials/published.php. Next, find the line that defines the input checkbox and change it so that it looks like this:

      <label for="authors-commentary-comment-<?php echo $comment->comment_ID; ?>">
          <input type="checkbox" name="authors-commentary-comments[<?php echo $comment->comment_ID; ?>]" id="authors-commentary-comment-<?php echo $comment->comment_ID; ?>" />
          This comment has received a reply.
      </label>

      Notice that we've changed the name attribute so that it uses an array, with the comment ID as its index. Next, we'll hop back into the save_post function one more time in order to introduce sanitization for this particular element:

      // If there are any values saved in the 'Comments' inputs, save them
      if ( ! empty( $_POST['authors-commentary-comments'] ) ) {

          $comments = $_POST['authors-commentary-comments'];
          foreach ( $comments as $comment ) {
              $comment = strip_tags( stripslashes( $comment ) );
              // More to come...
          }

      }

      Just as we've done with the previous pieces of data, we first check to see if the content exists. If so, then we sanitize it to prepare it for saving. If it doesn't then we don't do anything.

      On To Saving

      At this point, we're positioned to take on the last two points of the series:

      1. Saving and Retrieving
      2. Refactoring

      Starting in the next post, we'll revisit the code that we've written in this post to see how we can save it to the database and retrieve it from the database in order to display it on the front-end.

      Next, we'll move on to refactoring. After all, part of writing maintainable code is making sure that it's well-organized and easily changeable. Since the code that we work with on a day-to-day basis has already been written and could stand to be refactored, we're going to see how to do that by the end of the series.

      In the meantime, review the code above, check out the source from GitHub, and leave any questions and comments in the field below. 




    1. Securing Your Server Login


      Thanks to the growing abundance of useful self-hosted apps such as WordPress, and to increasingly affordable cloud hosting, running your own server is becoming compelling to a broader audience. But securing these servers properly requires fairly broad knowledge of Linux system administration, a task that isn't always suitable for newcomers.

      When you sign up for a typical cloud hosting account, you'll receive an email with a root account password, an IP address, and instructions to sign in via SSH on port 22. But it's important to take several additional steps beyond that basic access configuration. The root password shown below is just the starting point for security; there's much more to do.

      Typical new Linux credentials with root login and password

      This tutorial will provide an overview of common incremental approaches to secure your typical Linux server.

      Approaches to Server Security

      For purposes of this tutorial, I'm using a new Ubuntu 14.04 droplet from DigitalOcean with the LAMP configuration; if you wish to follow along with the same configuration, example setup steps are explained here.

      Once you map your chosen domain name, you're ready to begin. I'm using http://secure.lookahead.io for my example.

      You can log in to your server with SSH: ssh root@secure.lookahead.io. The server should require you to change your password during the first login attempt:

      Change your password at first unix login

      Now, the rest is up to you. Here are a handful of common approaches to improving your server's login security:

      1. Update Your System Components

      Firstly, it's important to regularly update your Linux system components. Typically, when you log in, Ubuntu will tell you how many packages you have that need to be updated. The commands to update your packages are:

      sudo apt-get update
      sudo apt-get dist-upgrade

      The recent Shellshock security vulnerability revealed in Bash is a perfect example of the need to regularly update your system files.

      Each time you log in, Ubuntu will report how many packages and security updates are available.

      If you wish, you can activate unattended upgrades:

      sudo apt-get install unattended-upgrades
      sudo dpkg-reconfigure unattended-upgrades
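
      After reconfiguring, the chosen settings are written to an APT configuration file; on a typical Ubuntu system it looks like the following (the path and values shown are the usual defaults, not something you need to type):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```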

      2. Change Your SSH Port From the Default

      Leaving SSH access on port 22 makes it faster and easier for hackers to target your servers. Let's change the default SSH port to something more obscure.

      Edit the SSH configuration file:

      sudo nano /etc/ssh/sshd_config

      Change to a different port number, e.g.:

      # What ports, IPs and protocols we listen for
      Port 33322

      Restart SSH:

      sudo service ssh restart

      Then, log out and try to log in again. You should see this error message:

      ssh root@secure.lookahead.io
      ssh: connect to host secure.lookahead.io port 22: Connection refused

      This time, use the following SSH command, changing the port to 33322: ssh -p 33322 root@secure.lookahead.io. You should be able to log in successfully.
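
      Rather than typing the port flag on every connection, you can record it in your local OpenSSH client configuration. A minimal sketch, assuming OpenSSH on your local machine (the alias name here is arbitrary):

```
# ~/.ssh/config on your local computer
Host lookahead
    HostName secure.lookahead.io
    Port 33322
    User root
```

      With this in place, ssh lookahead connects on the new port automatically.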

      3. Activate a Firewall

      Using a firewall can help block access to ports left unnecessarily open and close attack vectors; it can also help with logging attempted intrusions.

      If you happen to be using Amazon AWS cloud services, there's a nice web user interface for its firewall called security groups. The console for AWS security groups makes it easy to turn off access to all ports except your new chosen SSH port and port 80 for web browsing; you can see a visual example of this here:

      Amazon AWS Security Group Inbound Rules

      If you wish to implement Linux-based firewalls, you can study ufw and iptables. While it's beyond the scope of this tutorial, I will give a brief example of using ufw, the "uncomplicated firewall". 

      First, we'll allow access for our SSH port 33322 as well as all HTTP traffic on port 80, and deny access on the standard SSH port 22. Only then do we enable ufw, so that the default deny rule can't cut off our active SSH session.

      sudo ufw allow 33322
      sudo ufw allow http
      sudo ufw deny 22
      sudo ufw enable
      sudo ufw status

      Be careful configuring ufw, as you can lock yourself out of an existing console session and your entire server.

      If you wish to go deeper, port knocking provides a way to more fully obscure your SSH access port. There is a detailed tutorial for advanced users by Justin Ellingwood: How to Use Port Knocking to Hide Your SSH Daemon from Attackers.

      4. Change Your Root Login Name

      Now, let's eliminate the root login user (or ubuntu on some systems) and customize the administrator's login.

      We'll add a user named "hal". Replace "hal" with your preferred username in the examples below:

      sudo adduser hal

      Add your new user to the sudo group for administrators:

      sudo adduser hal sudo

      Add your new user to the sudoers file as well. Edit it with visudo, which checks your syntax before saving (a mistake in this file can lock you out of sudo):

      sudo visudo

      Add this line to the sudoers file, in the user privileges section:

      hal    ALL=(ALL:ALL) ALL
      Edit the SSH configuration file again:

      sudo nano /etc/ssh/sshd_config

      Remove the root or ubuntu account from the AllowUsers field. You may also need to add this line if it's not in your configuration file:

      AllowUsers hal

      Make sure PermitRootLogin is off:

      PermitRootLogin no

      Restart the service:

      sudo service ssh restart

      Log out and try to sign in again as root. You shouldn't be able to. Then, try to log in as hal: ssh -p 33322 hal@secure.lookahead.io. That should work just fine.

      Note that you may wish to restart SSH, log out, and verify that you can log in as hal before turning off root login.

      5. Activate Google Two-Factor Authentication

      Now, we're going to add two-factor authentication to your server login; in other words, when we try to log in to the server, we will be required to provide a time-sensitive code from an app on our phone. 

      For this example, we'll use Google Authenticator. Be sure to download Google Authenticator from the iTunes App Store or Google Play.

      Then, from your server console, install the Google Authenticator package:

      sudo apt-get install libpam-google-authenticator

      Then we'll edit the Pluggable Authentication Module (PAM) configuration for SSH to require a Google Authenticator code at login:

      sudo nano /etc/pam.d/sshd

      Add the following line at the top:

      auth required pam_google_authenticator.so

      Then, return to editing the SSH configuration file again:

      sudo nano /etc/ssh/sshd_config

      Change the ChallengeResponseAuthentication to yes:

      # Change to yes to enable challenge-response passwords (beware issues with
      # some PAM modules and threads)
      ChallengeResponseAuthentication yes

      Save the change, then run the activation program as your new user:

      google-authenticator
      In addition to seeing a large QR code (as shown at the top of this tutorial), you'll also see a set of secret login keys and be asked some questions to customize your configuration:

      Google Authenticator Emergency Scratch Codes

      Using the Google Authenticator app on your phone, choose the edit pen icon in the upper right and add a new entry using the button at the bottom. You can either scan the QR code with your camera or enter the secret key. Once completed, Google Authenticator will be ready to provide codes to you for your next login.

      Print a copy of these emergency scratch codes and save them in a secure location, in case you ever need to recover your login without two-factor authentication.

      Restart the SSH service again and log out:

      sudo service ssh restart

      Log in again, and this time you'll be asked for a verification code before your password. Type in the six-digit verification code from Google Authenticator on your phone:

      Google Authenticator Verification Code Request

      The addition of two-factor authentication adds a strong layer of secondary security to your server. Still, there is more we can do.

      6. Switch to Using SSH Keys for Login

      It's wise to turn off your server's password-based login in favor of security keys; keys are far more resistant to attack. Passwords are short and subject to dictionary attacks; keys are much longer and, in practice, infeasible to brute-force.

      To create your SSH key, follow these instructions. Change to the home directory for your new user:

      cd /home/hal

      Make an SSH directory and set permissions:

      mkdir .ssh
      chmod 700 .ssh

      Generate a new key pair. When prompted, it's up to you whether to protect the key with a passphrase:

      cd .ssh
      ssh-keygen -b 4096 -f id_hal -t rsa

      Add the public key to the authorized_keys file:

      cat ~/.ssh/id_hal.pub > ~/.ssh/authorized_keys

      Set the permissions for the key file:

      chmod 600 ~/.ssh/*

      Move the private key to a temp folder for download to your local computer:

      cp ~/.ssh/* /tmp
      chmod 644 /tmp/*

      Download the new private key to your local computer; at this stage you'll still authenticate with your password, since the key isn't on your machine yet. On your computer, use a terminal:

      scp -P 33322 hal@secure.lookahead.io:/tmp/* ~/.ssh

      Set permissions and test:

      cd ~/.ssh
      chmod 400 id_hal
      ssh -p 33322 -i ~/.ssh/id_hal hal@secure.lookahead.io

      If you run into any errors, you can try watching the authentication log on the server while you attempt to log in:

      tail -f /var/log/auth.log

      Remove the temporary key files from the server's tmp directory:

      rm -rf /tmp/*

      Edit the SSH configuration file again:

      sudo nano /etc/ssh/sshd_config

      Turn off Password Authentication:

      PasswordAuthentication no

      Restart the SSH service again:

      sudo service ssh restart

      Now, nobody will be able to log in to your server without your private key. To log in to your server, use the following command:

      ssh -p 33322 -i ~/.ssh/id_hal hal@secure.lookahead.io

      Make sure you secure the computer you're using with the private key on it; it's also wise to store a copy of your private key on a flash drive somewhere physically secure.

      Note that Google Authenticator two-factor authentication is bypassed when using SSH key security.
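
      If you'd prefer key-based logins to also require the verification code, OpenSSH 6.2 and later can chain authentication methods in sshd_config; a minimal sketch (this option is not part of the steps above):

```
# Require a public key AND the PAM challenge (the Google Authenticator code)
AuthenticationMethods publickey,keyboard-interactive
```

      Restart SSH afterwards, and keep an existing session open while you confirm you can still log in.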

      Also, if you ever do get locked out of your server during these configurations, DigitalOcean provides a web console that acts as if a keyboard was plugged into your server. For other hosts, you may need to get help from their support team.

      7. Manage Your Application Security

      While the login portal of your server is a serious vulnerability, the applications you choose to install are likely to pose even bigger risks. For example, I recently read that improperly secured regular expressions in a PHP app can open your server to ReDoS attacks.

      But a more commonplace example is the recent WordPress plugin vulnerability with Slider Revolution. A theme I had installed actually bundled this plugin, so I had to update the theme to fix the bug.

      Application vulnerabilities can be difficult to keep up with, and that can make returning to managed hosting seem attractive again; don't give up! Be careful about the apps you install, stay on the mailing lists for your code providers, and keep everything regularly up to date.

      Be proactive, and do what you can to protect your apps. For example, look at how I describe adding Apache user security to the installation of phpMyAdmin, a popular web app used to simplify MySQL database access and administration. Because only administrators need access to phpMyAdmin, and the consequences of it being hacked are high, adding an additional layer of password security is quite appropriate for this particular app.
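
      As a sketch of that idea, Apache can require an extra HTTP Basic login in front of phpMyAdmin. The file paths below are common Debian/Ubuntu defaults and an assumption here; adjust them for your system:

```
# Create a password file once (htpasswd ships with apache2-utils):
#   sudo htpasswd -c /etc/apache2/.phpmyadmin.htpasswd admin
#
# Then add to the phpMyAdmin Apache configuration:
<Directory /usr/share/phpmyadmin>
    AuthType Basic
    AuthName "Restricted Admin Area"
    AuthUserFile /etc/apache2/.phpmyadmin.htpasswd
    Require valid-user
</Directory>
```

      Reload Apache (sudo service apache2 reload) for the change to take effect.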

      Security is a huge concern and a big area with which to grapple. I hope you've found this tutorial useful. Please feel free to post your own ideas, corrections, questions and comments below. I'd be especially interested in alternate and extended approaches. You can also reach me on Twitter @reifman or email me directly.



