4elements, web design and consultancy

  1. 7 Handy SQL Scripts for SQL Developers

    A lot of what we do depends upon the knowledge we possess. Only if we are aware of what can be done can we make smarter and more effective decisions. That is why it is always good to have quick tips and tricks handy in your pocket. This principle applies everywhere, including for MS-SQL developers.

    Through this article I would like to share a few SQL scripts which have proven to be very useful for my daily job as a SQL developer. I'll present a brief scenario about where each of these scripts can be used along with the scripts below.

    Note: Before reaping the benefits of these scripts, it is highly recommended that you run all of them in a test environment before running them against a production database.

    1. Search for Text Inside All the SQL Procedures

    Can we imagine life without Control-F in today's world? Or a life without search engines! Dreadful, isn't it? Now imagine you have 20 or 30 SQL procedures in your database and you need to find the procedure that contains a certain word.

    Definitely one way to do it is by opening each procedure one at a time and doing a Control-F inside the procedure. But this is manual, repetitive, and boring. So, here is a quick script that allows you to achieve this.
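A sketch of such a script, using SQL Server's sys.sql_modules catalog view; the search term 'Customer' is a placeholder for whatever word you are looking for:

```sql
-- Hypothetical example: find every stored procedure whose body
-- contains the word 'Customer' (replace with your own search term).
SELECT OBJECT_NAME(m.object_id) AS ProcedureName
FROM sys.sql_modules AS m
INNER JOIN sys.procedures AS p ON m.object_id = p.object_id
WHERE m.definition LIKE '%Customer%';
```

The same pattern works for views and functions if you join against sys.objects instead of sys.procedures.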

    2. Compare Row Counts in Tables From Two Different Databases With the Same Schema

    If you have a large database and the source of data for your database is some ETL (extract, transform, load) process that runs on a daily basis, this next script is for you.

    Say you have scripts that run on a daily basis to extract data into your database and this process takes about five hours each day. As you begin to look more deeply into this process, you find some areas where you can optimize the script to finish the task in under four hours.

    You would like to try out this optimization, but since you already have the current implementation on a production server, the logical thing to do is try out the optimized process in a separate database, which you would replicate using the existing database.

    Now, once ready, you would run both ETL processes and compare the extracted data. If you have a database with many tables, this comparison can take quite a while. So, here's a quick script that facilitates this process.
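A hedged sketch of such a comparison, summing sys.partitions row counts per table in each database; DatabaseA and DatabaseB are placeholder names for your two databases:

```sql
-- Hypothetical sketch: compare per-table row counts between two
-- databases with the same schema.
SELECT COALESCE(a.TableName, b.TableName) AS TableName,
       a.[Rows] AS RowsInA,
       b.[Rows] AS RowsInB
FROM (SELECT t.name AS TableName, SUM(p.rows) AS [Rows]
      FROM DatabaseA.sys.tables t
      JOIN DatabaseA.sys.partitions p ON p.object_id = t.object_id
      WHERE p.index_id IN (0, 1)
      GROUP BY t.name) a
FULL OUTER JOIN
     (SELECT t.name AS TableName, SUM(p.rows) AS [Rows]
      FROM DatabaseB.sys.tables t
      JOIN DatabaseB.sys.partitions p ON p.object_id = t.object_id
      WHERE p.index_id IN (0, 1)
      GROUP BY t.name) b
  ON a.TableName = b.TableName
WHERE ISNULL(a.[Rows], -1) <> ISNULL(b.[Rows], -1);
```

The final WHERE clause keeps only the tables whose counts differ, which is usually what you want to see after an ETL run.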

    3. Back Up Multiple Databases at Once

    In any IT company, the first thing a newly hired programmer (or SQL developer) has to do before writing his or her first SQL query is take out insurance on the working version of the production database, i.e. make a backup.

    This single act of creating a backup and working with the backup version gives you the freedom to perform and practice any kind of data transformation, as it ensures that even if you blow away the company's clients' data, it can be recovered. In fact, not just new hires but even the veterans at the same IT company never perform any data transformation without creating backups.

    Although backing up databases in SQL Server is not a difficult task, it definitely is time-consuming, especially when you need to back up many databases at once. So the next script is quite handy for this purpose.
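One way to sketch this is a cursor over sys.databases that issues a BACKUP DATABASE statement per database; the backup folder D:\Backups\ is a placeholder:

```sql
-- Hypothetical sketch: back up every user database to D:\Backups\.
DECLARE @name sysname, @sql nvarchar(max);
DECLARE db_cursor CURSOR FOR
    SELECT name FROM sys.databases
    WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb');
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BACKUP DATABASE [' + @name +
               N'] TO DISK = N''D:\Backups\' + @name + N'.bak''';
    EXEC (@sql);
    FETCH NEXT FROM db_cursor INTO @name;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;
```

Adding a date stamp to the file name is a common refinement so that successive runs don't overwrite each other.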

    4. Shrink Multiple Database Logs at Once

    Every SQL Server database has a transaction log that records all transactions and the database modifications made by each transaction. The transaction log is a critical component of the database and, if there is a system failure, the transaction log might be required to bring your database back to a consistent state.

    As the number of transactions starts increasing, however, space availability starts becoming a major concern. Fortunately, SQL Server allows you to reclaim the excess space by reducing the size of the transaction log.

    While you can shrink log files manually, one at a time using the UI provided, who has the time to do this manually? The following script can be used to shrink multiple database log files rapidly.
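A sketch of such a script: walk the log files listed in sys.master_files and issue a DBCC SHRINKFILE for each. The target size of 1 MB is an illustrative choice:

```sql
-- Hypothetical sketch: shrink the log file of every user database.
DECLARE @db sysname, @log sysname, @sql nvarchar(max);
DECLARE log_cursor CURSOR FOR
    SELECT d.name, mf.name
    FROM sys.master_files mf
    JOIN sys.databases d ON mf.database_id = d.database_id
    WHERE mf.type_desc = 'LOG'
      AND d.name NOT IN ('master', 'model', 'msdb', 'tempdb');
OPEN log_cursor;
FETCH NEXT FROM log_cursor INTO @db, @log;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE [' + @db + N']; DBCC SHRINKFILE ([' + @log + N'], 1);';
    EXEC (@sql);
    FETCH NEXT FROM log_cursor INTO @db, @log;
END
CLOSE log_cursor;
DEALLOCATE log_cursor;
```

Note that regularly shrinking logs is a workaround, not a fix; if a log keeps growing, its backup schedule or recovery model usually deserves a look.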

    5. Restrict Connection to the Database by Setting Single-User Mode

    Single-user mode specifies that only one user at a time can access the database and is generally used for maintenance actions. Basically, if other users are connected to the database at the time that you set the database to single-user mode, their connections to the database will be closed without warning.

    This is quite useful in the scenarios where you need to restore your database to the version from a certain point in time or you need to prevent possible changes by any other processes accessing the database.
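The statements themselves are short; 'MyDatabase' is a placeholder name:

```sql
-- Close existing connections immediately and enter single-user mode.
ALTER DATABASE [MyDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- ... perform your restore or other maintenance work here ...

-- Return the database to normal multi-user access.
ALTER DATABASE [MyDatabase] SET MULTI_USER;
```

WITH ROLLBACK IMMEDIATE rolls back any open transactions from other connections, which is what makes this safe to run while users are connected.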

    6. String Function in SQL to Generate Dynamic Texts

    Many programming languages allow you to insert values inside string texts, which is very useful when generating dynamic string texts. Since SQL doesn't provide any such function by default, here is a quick remedy for that. Using the function below, any number of texts can be dynamically inserted inside string texts.
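A hedged sketch of such a function, using numbered placeholders like {0} and {1} in the template; the name dbo.FormatString and the three-parameter limit are illustrative choices:

```sql
-- Hypothetical sketch: substitute numbered placeholders in a template
-- string, in the spirit of string interpolation in other languages.
CREATE FUNCTION dbo.FormatString
    (@template nvarchar(max),
     @p0 nvarchar(max), @p1 nvarchar(max), @p2 nvarchar(max))
RETURNS nvarchar(max)
AS
BEGIN
    SET @template = REPLACE(@template, '{0}', ISNULL(@p0, ''));
    SET @template = REPLACE(@template, '{1}', ISNULL(@p1, ''));
    SET @template = REPLACE(@template, '{2}', ISNULL(@p2, ''));
    RETURN @template;
END;
```

Usage would look like: SELECT dbo.FormatString('Hello {0}, you have {1} messages.', 'Ana', '5', NULL);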

    7. Printing Table Column Definitions

    When comparing multiple databases that have similar schemas, one has to look at the details of table columns. The definitions of the columns (data types, nullability, and so on) are as vital as the names of the columns themselves.

    Now for databases having many tables, and tables having many columns, it can take quite a while to compare each column manually with a column from another table of another database. The next script automates precisely this process, as it prints the definitions of all tables for a given database.
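The standard INFORMATION_SCHEMA views make this a one-statement script:

```sql
-- Print the column definitions of every table in the current database.
SELECT TABLE_NAME,
       COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH,
       IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION;
```

Running this against each database and diffing the two result sets makes schema discrepancies easy to spot.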


    In this article, we looked at seven useful scripts that can cut down tons of manual, laborious work and increase overall efficiency for SQL developers. We also looked at different scenarios where these scripts can be implemented.

    If you're looking for even more SQL scripts to study (or to use), don't hesitate to see what we've got available on CodeCanyon.

    Once you begin to get the hang of these scripts, certainly you will begin to identify many other scenarios where these scripts can be used effectively.

    Good luck!



    1. Introduction to Webpack: Part 2

      In the previous tutorial we learned how to set up a Webpack project and how to use loaders to process our JavaScript. Where Webpack really shines, though, is in its ability to bundle other types of static assets such as CSS and images, and include them in our project only when they're required. Let's start by adding some styles to our page.

      Style Loaders

      First, create a normal CSS file in a styles directory. Call it main.css and add a style rule for the heading element.
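A minimal example rule (the colour is, of course, up to you):

```css
/* styles/main.css */
h2 {
  color: tomato;
}
```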

      So how do we get this stylesheet into our page? Well, like most things with Webpack, we'll need another loader. Two in fact: css-loader and style-loader. The first reads all the styles from our CSS files, whilst the other injects said styles into our HTML page. Install them like so:
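```shell
npm install css-loader style-loader --save-dev
```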

      Next, we tell Webpack how to use them. In webpack.config.js, we need to add another object to the loaders array. In it we want to add a test to match only CSS files as well as specify which loaders to use.
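A sketch of the config, assuming the JavaScript loader entry from Part 1 of this tutorial (Webpack 1 syntax, matching the era of this article):

```javascript
// webpack.config.js
module.exports = {
  // ...entry, output, etc. as before...
  module: {
    loaders: [
      // assumed from Part 1: transpile our JavaScript
      { test: /\.js$/, exclude: /node_modules/, loader: 'babel' },
      // new: read CSS files and inject the styles into the page
      { test: /\.css$/, loader: 'style!css' }
    ]
  }
};
```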

      The interesting part of this code snippet is the 'style!css' line. Loaders are read from right to left, so this tells Webpack to first read the styles of any file ending in .css, and then inject those styles into our page.

      Because we've updated our configuration file, we'll need to restart the development server for our changes to be picked up. Use ctrl+c to stop the server and webpack-dev-server to start it again.

      All we need to do now is require our stylesheet from within our main.js file. We do this in the same way as we would any other JavaScript module:
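```javascript
// main.js — pull the stylesheet into the bundle
require('./styles/main.css');
```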

      Note how we haven't even touched index.html. Open up your browser to see the page with the styled h2. Change the colour of the heading in your stylesheet to see it instantly update without a refresh. Lovely.

      You've Got to Sass It

      "But nobody uses CSS these days, Grandad! It's all about Sass". Of course it is. Luckily, Webpack has a loader for just that. Install it along with the node version of Sass using:
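```shell
npm install sass-loader node-sass --save-dev
```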

      Then update webpack.config.js:
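The CSS rule from before becomes a Sass rule, chaining one more loader on the right:

```javascript
// webpack.config.js — replace the CSS entry in the loaders array
{ test: /\.scss$/, loader: 'style!css!sass' }
```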

      This is now saying that for any file ending with .scss, convert the Sass to plain CSS, read the styles from the CSS, and then insert the styles into the page. Remember to rename main.css to main.scss, and require the newly named file in instead. First some Sass:
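A small example stylesheet with a Sass variable, just to prove the compilation step is happening:

```scss
// styles/main.scss
$heading-color: tomato;

h2 {
  color: $heading-color;
}
```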

      Then main.js:
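```javascript
// main.js — require the renamed stylesheet instead
require('./styles/main.scss');
```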

      Super. It's as easy as that. No converting and saving files, or even watching folders. We just require in our Sass styles directly.


      "So images, loaders too I bet?" Of course! With images, we want to use the url-loader. This loader takes the relative URL of your image and updates the path so that it's correctly included in your file bundle. As per usual:
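```shell
npm install url-loader --save-dev
```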

      Now, let's try something different in our webpack.config.js. Add another entry to the loaders array in the usual manner, but this time we want the regular expression to match images with different file extensions:
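A sketch of the entry, matching a few common image extensions:

```javascript
// webpack.config.js — another entry in the loaders array
{
  test: /\.(png|jpe?g|gif|svg)$/,
  include: /images/,
  loader: 'url'
}
```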

      Note the other difference here. We're not using the exclude key; instead we're using include. This is more efficient, as it tells Webpack to look only inside a folder called "images" and ignore everything else.

      Usually you'll be using some sort of templating system to create your HTML views, but we're going to keep it basic and create an image tag in JavaScript the old-fashioned way. First create an image element, set the required image to the src attribute, and then add the element to the page.
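A sketch of that, where images/logo.png is a placeholder for whatever image you saved:

```javascript
// main.js — create an <img> the old-fashioned way
var img = document.createElement('img');
img.src = require('./images/logo.png'); // url-loader resolves the bundled path
document.body.appendChild(img);
```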

      Head back to your browser to see your image appear before your very eyes!


      Another task commonly carried out during development is linting. Linting is a way of looking out for potential errors in your code along with checking that you've followed certain coding conventions. Things you may want to look for are "Have I used a variable without declaring it first?" or "Have I forgotten a semicolon at the end of a line?" By enforcing these rules, we can weed out silly bugs early on.

      A popular tool for linting is JSHint. This looks at our code and highlights potential errors we've made. JSHint can be run manually at the command line, but that quickly becomes a chore during development. Ideally we'd like it to run automatically every time we save a file. Our Webpack server is already watching out for changes, so yes, you guessed it—another loader.

      Install the jshint-loader in the usual way:
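```shell
npm install jshint-loader --save-dev
```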

      Again we have to tell Webpack to use it by adding it to our webpack.config.js. However, this loader is slightly different. It's not actually transforming any code; it's just having a look. We also don't want all our heavier code-modifying loaders to run and fail just because we've forgotten a semicolon. This is where preloaders come in. A preloader is any loader we specify to run before our main tasks. They're added to our webpack.config.js in a similar way to loaders.
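A sketch of the preLoaders entry alongside the existing loaders:

```javascript
// webpack.config.js — run JSHint over our own JavaScript first
module: {
  preLoaders: [
    { test: /\.js$/, exclude: /node_modules/, loader: 'jshint' }
  ],
  loaders: [
    // ...as before...
  ]
}
```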

      Now our linting process runs and fails immediately if there's a problem detected. Before we restart our web server, we need to tell JSHint that we're using ES6, otherwise it will fail when it sees the const keyword we're using.

      After the module key in our config, add another entry called "jshint" and a line to specify the version of JavaScript.
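Something along these lines, assuming a recent-enough JSHint that understands the esversion option:

```javascript
// webpack.config.js — after the module key
jshint: {
  esversion: 6  // allow ES6 syntax such as const and let
}
```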

      Save the file and restart webpack-dev-server. Running ok? Great. This means your code contains no errors. Let's introduce one by removing a semicolon from this line:

      Again, save the file and look at the terminal. Now we get this:

      Thanks, JSHint!

      Getting Ready for Production

      Now that we're happy our code is in shape and it does everything we want it to, we need to get it ready for the real world. One of the most common things to do when putting your code live is to minify it, concatenating all your files into one and then compressing that into the smallest file possible. Before we continue, take a look at your current bundle.js. It's readable, has lots of whitespace, and is 32kb in size.

      "Wait! Don't tell me. Another loader, right?" Nope! On this rare occasion, we don't need a loader. Webpack has minification built right in. Once you're happy with your code, simply run this command:
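```shell
webpack -p
```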

      The -p flag tells Webpack to get our code ready for production. As it generates the bundle, it optimises as much as it can. After running this command, open bundle.js and you'll see it's all been squashed together, and that even with such a small amount of code we've saved 10kb.


      I hope that this two-part tutorial has given you enough confidence to use Webpack in your own projects. Remember, if there's something you want to do in your build process then it's very likely Webpack has a loader for it. All loaders are installed via npm, so have a look there to see if someone's already made what you need.

      Have fun!



    1. Building a CMS: rubyPress

      After creating the basic structure of a content management system (CMS), and building the actual server in both Go and Node.js, you're ready to try your hand at another language.

      This time, I'm using the Ruby language to create the server. I have found that by creating the same program in multiple languages, you begin to get new insights into better ways to implement the program. You also see more ways to add functionality to the program. Let’s get started.

      Setup and Loading the Libraries

      To program in Ruby, you will need to have the latest version installed on your system. Many operating systems come with Ruby pre-installed these days (Linux and OS X), but it is usually an older version. This tutorial assumes that you have Ruby version 2.4.

      The easiest way to upgrade to the latest version of Ruby is to use RVM. To install RVM on Linux or Mac OS X, type the following in a terminal:
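```shell
\curl -sSL https://get.rvm.io | bash -s stable --ruby
```

The --ruby flag is what makes the installer also fetch the latest stable Ruby.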

      This will create a secure connection to download and install RVM. This installs the latest stable release of Ruby as well. You will have to reload your shell to finish the installation.

      For Windows, you can download the Windows Ruby Installer. Currently, this package is up to Ruby 2.2.2, which is fine to run the libraries and scripts in this tutorial.

      Once the Ruby language is properly installed, you can now install the libraries. Ruby, just like Go and Node, has a package manager for installing third-party libraries. In the terminal, type the following:
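```shell
gem install sinatra ruby-handlebars kramdown slim
```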

      This installs the Sinatra, Ruby Handlebars, Kramdown, and Slim libraries. Sinatra is a web application framework. Ruby Handlebars implements the Handlebars templating engine in Ruby. Kramdown is a Markdown to HTML converter. Slim is a Jade work-alike library, but it doesn’t include Jade’s macro definitions. Therefore, the macros used in the News and Blog post indexes are now normal Jade.

      Creating the rubyPress.rb File

      In the top directory, create the file rubyPress.rb and add the following code. I will comment about each section as it's added to the file.

      The first thing to do is to load the libraries. Unlike with Node.js, these are not loaded into a variable. Ruby libraries add their functions to the program scope.

      The Handlebars library gets initialized with the different helper functions defined. The helper functions defined are date, cdate, and save.

      The date helper function takes the current date and time, and formats it according to the format string passed to the helper. cdate is similar, except that the date to format is passed in first. The save helper allows you to specify a name and value. It creates a new helper with the given name that passes back the value. This allows you to create variables that are specified once and affect many locations. This mirrors the Go version, which expects a single string containing the name, a ‘|’ separator, and the value.
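A sketch of the helper setup, assuming the ruby-handlebars gem's register_helper API:

```ruby
require 'ruby-handlebars'
require 'time'

$handlebars = Handlebars::Handlebars.new

# date: format the current time with the given format string
$handlebars.register_helper('date') do |context, format|
  Time.now.strftime(format)
end

# cdate: like date, but the date to format is passed in first
$handlebars.register_helper('cdate') do |context, date, format|
  Time.parse(date).strftime(format)
end

# save: define a named value once and reuse it anywhere in the templates
$handlebars.register_helper('save') do |context, name, value|
  $handlebars.register_helper(name) { |ctx| value }
  value
end
```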

      The next part of the code is for loading the cacheable items of the web site. This is everything in the styles and layout for your theme, and the items in the parts sub-directory. A global variable, $parts, is first loaded from the server.json file. That information is then used to load the proper items for the layout and theme specified. The Handlebars template engine uses this information to fill out the templates.
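A hedged sketch of that loading step; the server.json keys and directory names here are illustrative, not taken from the original source:

```ruby
require 'json'

# Global configuration and cacheable parts, loaded once at startup.
$parts = JSON.parse(File.read('server.json'))

# Load the layout for the configured theme (key names are assumptions).
theme = $parts['CurrentLayout']
$parts['layout'] = File.read("themes/#{theme}/template.html")

# Everything in the parts sub-directory becomes available to templates.
Dir.glob('parts/*').each do |file|
  $parts[File.basename(file, '.*')] = File.read(file)
end
```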

      The next section contains the definitions for all the routes. Sinatra is a complete REST compliant server. But for this CMS, I will only use the get verb. Each route takes the items from the route to pass to the functions for producing the correct page. In Sinatra, a name preceded by a colon specifies a section of the route to pass to the route handler. These items are in a params hash table.

      The page function gets the name of a page from the route and passes the layout in the $parts variable along with the full path to the page file needed for the function processPage. The processPage function takes this information and creates the proper page, which it then returns. In Ruby, the output of the last function is the return value for the function.

      The post function is just like the page function, except that it works for all post-type pages. This function expects the post type, the post category, and the post itself. These are used to build the path of the correct page to display.
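A sketch of the routes and the two helper functions; the directory layout (pages/, posts/) follows the structure used earlier in this series:

```ruby
# Sinatra routes: names preceded by a colon arrive in the params hash.
get '/' do
  page('index')
end

get '/pages/:page' do
  page(params[:page])
end

get '/posts/:type/:cat/:post' do
  post(params[:type], params[:cat], params[:post])
end

def page(name)
  processPage($parts['layout'], "pages/#{name}")
end

def post(type, cat, name)
  processPage($parts['layout'], "posts/#{type}/#{cat}/#{name}")
end
```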

      The figurePage function reads the page content from the file system. It receives the complete path to the file without the extension. figurePage first tests for a file with the given name and the html extension, for reading an HTML file. The second choice is the md extension for a Markdown file.

      Lastly, it checks for an amber extension for a Jade file. Remember: Amber is the name of the library for processing Jade syntax files in Go. I kept it the same for inter-functionality. An HTML file is simply passed back, while all Markdown and Jade files get converted to HTML before passing back.
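A sketch of figurePage along those lines, converting Markdown with Kramdown and Jade-syntax files with Slim:

```ruby
# Given a path without extension, try .html, then .md, then .amber.
def figurePage(path)
  if File.exist?("#{path}.html")
    File.read("#{path}.html")
  elsif File.exist?("#{path}.md")
    Kramdown::Document.new(File.read("#{path}.md")).to_html
  elsif File.exist?("#{path}.amber")
    Slim::Template.new("#{path}.amber").render
  else
    File.read('pages/404.html')  # "page not found", styled like any page
  end
end
```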

      If a file isn’t found, the user will receive the 404 page. This way, your “page not found” page looks just like any other page except for the contents.

      The processPage function performs all the template expansions on the page data. It starts by calling the figurePage function to get the page’s contents. It then processes the layout passed to it with Handlebars to expand the template.

      Then the processShortCode function will find and process all the shortcodes in the page. The results are then passed to Handlebars for a second time to process any macros left by the shortcodes. The user receives the final results.
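Put together, processPage might look roughly like this (the contents key passed to the layout template is an assumption):

```ruby
# Expand the layout, process shortcodes, then run Handlebars a second
# time over whatever macros the shortcodes left behind.
def processPage(layout, path)
  contents = figurePage(path)
  expanded = $handlebars.compile(layout).call(contents: contents)
  expanded = processShortCodes(expanded)
  $handlebars.compile(expanded).call
end
```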

      The processShortCodes function takes the text given, finds each shortcode, and runs the specified shortcode with the arguments and contents of the shortcode. The routine also processes the contents themselves for shortcodes, so shortcodes can be nested.

      A shortcode is an HTML-like tag that uses -[ and ]- to delimit the opening tag and -[/ and ]- the closing tag. The opening tag contains the parameters for the shortcode as well. Therefore, an example shortcode would be:

      This shortcode defines the box shortcode without any parameters with the contents of <p>This is inside a box.</p>. The box shortcode wraps the contents in the appropriate HTML to produce a box around the text with the text centered in the box. If you later want to change how the box is rendered, you only have to change the definition of the shortcode. This saves a lot of work.

      The last thing in the file is the $shortcodes hash table containing the shortcode routines. These are simple shortcodes, but you can create other shortcodes to be as complex as you want.

      All shortcodes have to accept two parameters: args and contents. These strings contain the parameters of the shortcode and the contents the shortcode surrounds. Since the shortcodes are inside a hash table, I used a lambda function to define them. A lambda function is a function without a name. The only way to run these functions is through the hash table.
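A self-contained sketch of the whole mechanism: a hash of lambdas keyed by shortcode name, plus the expansion routine. The box markup here is illustrative:

```ruby
# Shortcode handlers: each lambda takes the args string and the contents.
$shortcodes = {
  'box' => lambda do |args, contents|
    "<div style=\"border: 1px solid; text-align: center;\">#{contents}</div>"
  end
}

# Expand every -[name args]- ... -[/name]- pair, recursing into the
# contents so that shortcodes can be nested.
def processShortCodes(text)
  text.gsub(/-\[(\w+)([^\]]*)\]-(.*?)-\[\/\1\]-/m) do
    name, args, contents = $1, $2.strip, $3
    $shortcodes[name].call(args, processShortCodes(contents))
  end
end
```

With this in place, the example from above, -[box]-<p>This is inside a box.</p>-[/box]-, expands to a centered bordered div around the paragraph.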

      Running the Server

      Once you have created the rubyPress.rb file with the above contents, you can run the server with:
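```shell
ruby rubyPress.rb
```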

      Since the Sinatra framework speaks Rack, the same server interface Ruby on Rails uses, you can use Pow to run the server. Pow will set up your system’s host files for running your server locally the same as it would from a hosted site. You can install Pow with Powder using the following commands in the command line:

      Powder is a command-line routine for managing Pow sites on your computer. To get Pow to see your site, you have to create a soft link to your project directory in the ~/.pow directory. If the server is in the /Users/test/Documents/rubyPress directory, you would execute the following commands:
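For the example directory given, the commands would be:

```shell
cd ~/.pow
ln -s /Users/test/Documents/rubyPress rubyPress
```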

      The ln -s command creates a soft link to the directory specified first, with the name specified second. Pow will then set up a domain on your system with the name of the soft link. In the above example, going to http://rubyPress.dev in the browser will load the page from the server.

      To start the server, type the following after creating the soft link:

      To reload the server after making some code changes, type the following:

      rubyPress Main Page

      Going to the website in the browser will result in the above picture. Pow will set up the site at http://rubyPress.dev. No matter which method you use to launch the site, you will see the same resulting page.


      Well, you have done it. Another CMS, but this time in Ruby. This version is the shortest version of all the CMSs created in this series. Experiment with the code and see how you can extend this basic framework.



    1. What Is jQuery?

      Avid readers of Envato Tuts+ come from a wide variety of backgrounds in terms of experience, culture, and what they are looking to learn. When it comes to technology, it's easy to take for granted those who are just starting out, especially if you've done any type of development for an extended amount of time.

      That said, one of the nice things about becoming a developer is that once you've gotten a handle on a particular language and set of skills, it's easy to translate them into other areas of development.

      In an attempt to make sure we're reaching the widest audience possible, we're aiming to publish content aimed directly at beginners who are curious about a given language, platform, or application.

      And in this article, we're going to be focused exclusively on jQuery. Specifically, we're going to take a high-level look at everything that jQuery offers and how it can help us, and we're going to review some of the projects that have also come to fruition from the initial library.

      All About jQuery

      First released in 2006 by John Resig, jQuery set out to be a cross-platform JavaScript library that made it easier to write client-side solutions.

      The jQuery Homepage

      At the time this was released, it was especially useful because of the inconsistencies that existed among JavaScript implementations in Internet Explorer, Firefox, and eventually Google Chrome (which wasn't released until 2008).

      As defined by Wikipedia:

      jQuery is a cross-platform JavaScript library designed to simplify the client-side scripting of HTML. jQuery is the most popular JavaScript library in use today, with installation on 65% of the top 10 million highest-trafficked sites on the Web. jQuery is free, open-source software licensed under the MIT License.

      Furthermore, the jQuery website itself says:

      jQuery is a fast, small, and feature-rich JavaScript library. It makes things like HTML document traversal and manipulation, event handling, animation, and Ajax much simpler with an easy-to-use API that works across a multitude of browsers. With a combination of versatility and extensibility, jQuery has changed the way that millions of people write JavaScript.

      But what does this mean for us as developers? Perhaps the best way for us to understand what all the library offers is to examine what it claims to offer.

      1. HTML Document Traversal

      When a browser renders a web page, it's a visual representation of what's known as the DOM (or the document object model). This model can be conceptually represented as a tree data structure, with a single root node branching out into child nodes and, ultimately, leaves.

      For example, see this image as provided by the Web Step Book:

      An example of the DOM for a basic web page

      When you're working with jQuery, you can easily traverse the contents of the DOM in order to reach or to find the nodes, elements, or values you're aiming to retrieve.

      This means if you're looking for the text of a div element that has a unique ID, it's easy to do.
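For example, with a hypothetical div whose ID is "main":

```javascript
// Grab the text of the div with the unique ID "main"
var text = $('#main').text();
```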

      If you're trying to loop through all of the span elements, you can do that as well:
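```javascript
// Visit every span element on the page
$('span').each(function (index) {
  console.log(index + ': ' + $(this).text());
});
```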

      We'll review this particular functionality a little bit more in the next section as it goes to show some of the additional work you can do to manipulate the page.

      Granted, these examples are simple, and things can get more complicated, especially as we introduce method chaining. For example:
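A hypothetical chain, just to show the flavour: select the list items, filter down to the odd ones, style them, then jump back to the full set with .end() and tag everything:

```javascript
$('ul li')
  .filter(':odd')
  .css('background-color', '#eee')
  .end()
  .addClass('listed');
```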

      At this point, you're not supposed to know what's going on with the code, but it's meant to show you how useful jQuery can be in certain situations through the use of helper functions and method chaining.

      Suffice it to say, the power of jQuery lies in its ability to query the DOM (hence the name jQuery) and then make adjustments to it through the use of a well-documented API (that's replete with examples of how to use each function).

      One could argue that everything else stems from that feature alone. So with that said, let's continue to look at some of what that looks like.

      2. HTML Document Manipulation

      When it comes to actually manipulating the DOM, jQuery has a lot of functions that allow us to change what our visitors see.

      Some of these functions are simple, such as allowing us to show or hide elements that aren't visible whenever a page loads. Other functions allow us to create new elements and append them to an existing element, or prepend them in front of an entire list.

      If you're trying to loop through all of the span elements in order to add a class attribute to them, you can do that, as well:
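```javascript
// Add a class attribute to every span on the page
$('span').each(function () {
  $(this).addClass('highlighted');
});
```

(In fact, $('span').addClass('highlighted') does the same thing in one call, since most jQuery methods operate on the whole matched set.)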

      This is barely scratching the surface of what DOM manipulation functionality is available within jQuery. By looking at the API, under the Manipulation section, you can see just how many options are available to us (along with good examples).

      To give further examples, we can also:

      • determine height or width of the document, the window, or any given element
      • grab the values from any given element (assuming it offers this ability)
      • toggle class names
      • and much more

      Remember that one of the things that make jQuery an attractive solution for so many developers is that all of the functions and examples we're looking at in this article are cross-browser compatible.

      3. Event-Handling

      If you're brand new to JavaScript, then one thing that's key to understanding how it works with the page that's being displayed in the web browser is that it responds to various events.

      That is, when a user clicks on an element, presses a key, or moves the mouse, the browser raises a signal corresponding to the event that occurred. This is what allows us to take advantage of the user's interaction with the browser.

      Specifically, every time a user does something to the page, we can respond using a custom event. The problem is that not every browser implements events the same way (this is why there's a need for a specification, but that's content for another post).

      Luckily, jQuery makes this much easier by defining a consistent name for all of the events such that we can use the same name for an event to which we're trying to respond and it will work across all of the major browsers.
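For instance, the same click handler works unchanged across all major browsers (the button ID here is a placeholder):

```javascript
$('#save-button').on('click', function (event) {
  event.preventDefault();       // stop the browser's default action
  console.log('Button clicked');
});
```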

      4. Animation

      When jQuery first came out, Flash was still relatively popular, and general animations across the web weren't completely dead.

      When we talk about animation in the context of jQuery, though, we're not talking about the same type of effects or behaviors that we're used to seeing with older technology. Instead, we're talking about effects that give users feedback that something has happened on the screen. Furthermore, it's less invasive and can add a nice sense of style to a page or application when used correctly (anything can be abused, though).

      You can view the entire effects API on this page, but it's worth noting that jQuery's effects can range anywhere from handling simple fading in and fading out of elements or sliding elements into view to something more complex such as manipulating the queue of effects that are registered to fire against an element.
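A small illustrative chain from that simpler end of the spectrum:

```javascript
// Fade a message in, leave it on screen for two seconds, then slide it away
$('#message').fadeIn('slow').delay(2000).slideUp();
```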

      Granted, the latter case assumes you have a bit of experience with the effects API, but it's something that comes naturally given enough time with the library and the documentation.

      5. Ajax

      If you're not familiar with Ajax, it's essentially a way that a web page can make a call to the server, handle the response, and update a portion of a page without having to refresh the entire page. Though the technology has been around for quite some time now, it's still something that's really cool and can provide some really neat functionality within the context of a page or web application when used appropriately and effectively.

      Though support for Ajax isn't as bad as it was five or ten years ago, the implementation of the API across browsers can vary ever so slightly. This means that we're tasked with writing Ajax code specifically for the browser Microsoft provides, the one Google provides, the one Apple provides, the one Mozilla provides, and so on.

      At least, that's the case without jQuery. Thanks to its support for Ajax, we can leverage Ajax in a number of different ways without having to get into the cross-browser inconsistencies. In fact, it's trivially easy to handle GET and POST requests while also having the ability to make far more advanced calls using the $.ajax method.
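A sketch of a $.ajax call; the URL is a placeholder:

```javascript
$.ajax({
  url: '/api/posts',
  type: 'GET',
  dataType: 'json'
}).done(function (data) {
  console.log('Received', data);  // update part of the page here
}).fail(function () {
  console.log('Request failed');
});
```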

      Once you get used to having the API available in the core of your application or at your disposal, it's difficult to imagine not working with it (or with something like it).

      A Word About Extensibility

      One feature that a lot of server-side frameworks and libraries offer is the ability to create extensions to the core codebase. Modern client-side libraries and frameworks allow this, and jQuery is no different.

      Say, for example, you work in a particular niche in which you find yourself creating the same functionality for each project. Or what if you have a product that you're selling and you have a bit of custom code that needs to integrate with jQuery, but it might require different parameters depending on the project.

      What do you do then?

      Fortunately, jQuery supports plugins. This means that we, as developers, not only have the ability to tap into plugins that others have written (some of which are available on the jQuery website, others being available on GitHub), but we're also able to develop our own plugins.
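A minimal plugin sketch: jQuery exposes its prototype as $.fn, and a plugin is just a function attached to it that returns this so it stays chainable. The name highlight is hypothetical:

```javascript
$.fn.highlight = function (color) {
  return this.css('background-color', color || 'yellow');
};

// Usage: $('span').highlight('#ffd');
```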

      We can then reuse this code in our own projects, or make them available on sites such as GitHub for others to offer contributions, fixes, features, and so on.

      Additional jQuery Projects

      Since its inception, jQuery has grown into more than just a JavaScript library that offers us the ability to perform both simple and powerful operations in a cross-platform compatible way.

      In addition to the core library, jQuery has also resulted in two other notable projects that are worth mentioning before we wrap this article. Although we're not going to look at the details of what each project affords, we will take a high-level view of each project, if for no other reason than being aware of what's available to us should we need this for future work.

      jQuery UI

      The jQuery UI Homepage

      From the jQuery UI homepage:

      jQuery UI is a curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library. Whether you're building highly interactive web applications or you just need to add a date picker to a form control, jQuery UI is the perfect choice.

      This library was first published in 2007, about a year after jQuery itself. It works as a complementary library to jQuery in that it leverages the cross-platform compatibility of the library to help create widgets that can be used throughout a website.

      Many of the widgets include commonly used pieces of functionality. For example, jQuery UI ships with the accordion, autocomplete, datepicker, dialog, and tabs widgets, among others.

      There are also advanced features such as effects, utilities, and interactions. Everything that we've covered so far (as well as the things we haven't) includes a wide variety of callbacks, attributes, and functions that allow us to interact with these widgets to the fullest extent.

      All of the aforementioned features also offer various themes to make sure they fit the look and feel of your website. Finally, all of the features outlined here and included on the site are well documented.
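      For instance, the date picker mentioned in the quote above takes a single call to set up. The selector and options here are assumptions:

      ```javascript
      // Initializing a jQuery UI datepicker on a hypothetical #start-date input.
      // Wrapped in a function so it can be called once the DOM is ready.
      function initDatePicker() {
        return $('#start-date').datepicker({
          dateFormat: 'yy-mm-dd', // ISO-style dates
          changeYear: true        // show a year drop-down
        });
      }
      ```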

      jQuery Mobile

      The jQuery Mobile Homepage

      From the jQuery Mobile homepage:

      jQuery Mobile is a HTML5-based user interface system designed to make responsive web sites and apps that are accessible on all smartphone, tablet and desktop devices.

      This library is the most recent addition to the family, first released in 2010 (with the last stable release in 2014).

      Much like its UI counterpart, it offers a well-documented API and custom themes that are ideal for the various devices your project may target.

      Whereas the previous two libraries offer a set of cross-platform features that allow us to write jQuery and accompanying widgets in a relatively easy manner, jQuery Mobile includes a CSS framework that allows us to also design user interfaces that are ideal for the nature of our respective project.

      The framework includes:

      From there, the library offers what you'd expect from a project geared towards making web development much easier for various mobile devices. These include things such as:

      Finally, the number of browsers that are still available and used in the wild is high. Though we've seen a decrease in the usage of older versions of Internet Explorer and wider adoption of Chrome, we still have certain users sticking with older browsers for a number of reasons.

      Sometimes, these users are on older browsers because of the nature of their company intranet. Sometimes it has to do with the mobile devices and/or phones they've been assigned for their job. And sometimes it simply has to do with the inability to upgrade to something better.

      No matter, though. jQuery Mobile offers support for most of the browsers and operating systems that are currently available. If you're not sure if the platform that you're targeting is supported by the library, you can always check the browser support page.

      Additional Resources


      Understanding what jQuery is (and what it isn't) and how it's related to JavaScript is important so that you know what's being done for you and what you can do when needing to work with the library.

      As previously mentioned, some may argue that you need to learn JavaScript first and then learn jQuery; others may argue that learning jQuery is a great way to work your way backwards to JavaScript.

      Whatever the case, jQuery is a longstanding library in the JavaScript ecosystem, and it's used in a number of very popular projects (such as WordPress), so learning it will give you a leg up in a number of different ways.

      JavaScript has become one of the de facto languages of working on the web. It’s not without its learning curves, and there are plenty of frameworks and libraries to keep you busy, as well. If you’re looking for additional resources to study or to use in your work, check out what we have available in the Envato marketplace.

      If that's not enough, there's plenty of documentation and open-source code available for you to review and read, as well. There are also widely available plugins and an active blog to keep you in the loop with all of the news happening with the library's development.

      For those who are interested in JavaScript (particularly in the context of WordPress), feel free to follow me on my blog and/or Twitter at @tommcfarlin. You can catch all of my courses and tutorials on my profile page, as well.

      Don't hesitate to leave any questions or comments in the feed below, and I'll aim to respond to each of them.


      Leave a comment › Posted in: Daily

    1. Build a Custom Report in OpenCart

      No matter what business you're dealing with, it's always important to have tools which help you analyze the overall statistics of day-to-day happenings. Of course, it also helps you build further strategies for your business just in case things are not on the right track.

      Today, we'll discuss reporting tools in the context of OpenCart. You'll find lots of useful reports in the core itself. There are four main categories to be precise—Sales, Products, Customers and Marketing—and each of them further provides more options to view the information in different contexts.

      In our example, we'll build a report that displays all the products that are viewed but not purchased yet. Of course, it's a simple use case, but you could go ahead and create a more complex one as per your requirements.

      I assume that you're using the latest version of OpenCart and are familiar with the basic module development process in OpenCart, as we'll emphasize report generation rather than the basic module development steps. If you would like to explore basic module development in OpenCart, there's a nice article on the subject.

      Back-End File Setup

      Let's list the files that need to be implemented for our custom report:

      • admin/controller/report/product_custom.php: It's the main controller file that is used to load model data and set up the variables.
      • admin/model/report/product_custom.php: It's a model file that is used to set up SQL queries to fetch the data from the database.
      • admin/view/template/report/product_custom.tpl: It's a view file which contains the presentation logic.
      • admin/language/english/report/product_custom.php: It's a language file.

      The Controller

      Go ahead and create a file admin/controller/report/product_custom.php with the following contents.

      The important thing to note here is that we've placed it under the "report" directory, which is the right place for all report-related files.

      Apart from that, it's pretty usual controller stuff—we're loading the appropriate language and model in the index method, and then setting up the variables. At the end, we've assigned product_custom.tpl as our main template file that is responsible for the main report output.

      The Model

      Moving further, let's set up the model file at admin/model/report/product_custom.php.

      There are two methods in our model file: getCustomProducts fetches the appropriate records from the database, while getTotalCustomProducts returns the total record count used by the pagination component in the controller.

      The View

      Next, the view file should be located at admin/view/template/report/product_custom.tpl.

      It'll display the list of products in a nice tabular way, and of course it's responsive, as Bootstrap is in the core now!

      The Language File

      At the end, let's create a language file at admin/language/english/report/product_custom.php.

      So that's it as far as the file setup is concerned.

      Grant Permission for the Custom Report

      Although we’ve finished with our custom report module, you won’t be able to access it yet. That's because it’s considered a new resource and the administrator user group should be permitted to access this resource. Hence, let’s go ahead and grant permission for this resource to the administrator user group.

      Navigate to System > Users > Users Group and edit the Administrator user group. Under the Access Permission drop-down box, check the report/product_custom resource and save the group.

      Custom Report

      Now, you should be able to access this resource.

      How to Access Our Report in the Back-End

      We’ll need to modify admin/view/template/common/menu.tpl to include our custom report link. For the sake of simplicity, we’ll modify it directly, but you may like to achieve the same using the OCMOD extension. It allows you to change the core files using an XML-based search/replace system.

      Now, open the admin/view/template/common/menu.tpl file and look for the following line.

      After that line, add the following line.

      Now you should be able to see our link under Reports > Products. Click on that to see our awesome custom report!

      Custom Report Link

      It lists all the products which have been viewed but not yet purchased. So, that’s it as far as the custom report creation is concerned; I hope that it wasn’t too much at once. Anyway, you’ve got the idea, and you can easily extend it as per your requirements.


      Today, we’ve discussed how to create a custom report in OpenCart. We went through the complete process of setting up the required files, and in the later part of the article we demonstrated how to access the report from the back-end.

      If you're looking for additional OpenCart tools, utilities, extensions, and so on that you can leverage in your own projects or for your own education, don't forget to see what we have available in the marketplace.

      I hope that you’ve enjoyed everything so far and stay tuned for more on OpenCart. In case of any queries and suggestions, you could reach me via Twitter or use the comments feed below.



    1. Introduction to Webpack: Part 1

      It's fairly standard practice these days when building a website to have some sort of build process in place to help carry out development tasks and prepare your files for a live environment.

      You may use Grunt or Gulp for this, constructing a chain of transformations that allow you to throw your code in one end and get some minified CSS and JavaScript out at the other.

      These tools are extremely popular and very useful. There is, however, another way of doing things, and that's to use Webpack.

      What Is Webpack?

      Webpack is what is known as a "module bundler". It takes JavaScript modules, understands their dependencies, and then concatenates them together in the most efficient way possible, spitting out a single JS file at the end. Nothing special, right? Things like RequireJS have been doing this for years.

      Well, here's the twist. With Webpack, modules aren't restricted to JavaScript files. By using Loaders, Webpack understands that a JavaScript module may require a CSS file, and that CSS file may require an image. The outputted assets will only contain exactly what is needed with minimum fuss. Let's get set up so we can see this in action.


      As with most development tools, you'll need Node.js installed before you can continue. Assuming you have this correctly set up, all you need to do to install Webpack is simply type the following at the command line.

      This will install Webpack and allow you to run it from anywhere on your system. Next, make a new directory and inside create a basic HTML file like so:
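      The original listing boils down to a page that loads the bundle and carries the heading we'll write into. A minimal sketch (the title text is an assumption):

      ```html
      <!-- index.html: only the H2 element and the bundle.js reference matter -->
      <!DOCTYPE html>
      <html>
        <head>
          <title>Webpack demo</title>
        </head>
        <body>
          <h2></h2>
          <script src="bundle.js"></script>
        </body>
      </html>
      ```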

      The important part here is the reference to bundle.js, which is what Webpack will be making for us. Also note the H2 element—we'll be using that later.

      Next, create two files in the same directory as your HTML file. Name the first main.js, which as you can imagine is the main entry point for our script. Then name the second say-hello.js. This is going to be a simple module that takes a person's name and DOM element, and inserts a welcome message into said element.

      Now that we have a simple module, we can require this in and call it from main.js. This is as easy as doing:
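      A sketch of what the two files might contain (the greeting text and the name passed in are assumptions consistent with the article):

      ```javascript
      // say-hello.js exports a function that takes a name and a DOM element
      // and inserts a welcome message into that element:
      function sayHello(name, element) {
        element.textContent = 'Hello ' + name + '!';
      }
      // In the real file this would be followed by: module.exports = sayHello;

      // main.js would then require the module and call it:
      //   var sayHello = require('./say-hello');
      //   sayHello('Guybrush', document.querySelector('h2'));

      // Simulated here with a plain object standing in for the H2 element:
      var h2 = { textContent: '' };
      sayHello('Guybrush', h2);
      ```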

      Now, if we were to open our HTML file in the browser, the message would obviously not be shown, as we've neither included main.js nor compiled the dependencies for the browser. What we need to do is get Webpack to look at main.js and see if it has any dependencies. If it does, it should compile them together and create a bundle.js file we can use in the browser.

      Head back to the terminal to run Webpack. Simply type:

      The first file specified is the entry file we want Webpack to start looking for dependencies in. It will work out if any required files require any other files and will keep doing this until it's worked out all the necessary dependencies. Once done, it outputs the dependencies as a single concatenated file to bundle.js. If you press return, you should see something like this:

      Now open index.html in your browser to see your page saying hello.


      It isn't much fun specifying our input and output files each time we run Webpack. Thankfully, Webpack allows us to use a config file to save us the trouble. Create a file called webpack.config.js in the root of your project with the following contents:
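      A minimal configuration matching the files above might look like this; only the entry and output are needed at this point:

      ```javascript
      // webpack.config.js: the entry point and the bundle it should produce.
      module.exports = {
        entry: './main.js',
        output: {
          filename: 'bundle.js'
        }
      };
      ```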

       Now we can just type webpack and nothing else to achieve the same results.

      Development Server

      What's that? You can't even be bothered to type webpack every time you change a file? I don't know, today's generation etc, etc. Ok, let's set up a little development server to make things even more efficient. At the terminal, write:

      Then run the command webpack-dev-server. This will start a simple web server running, using the current directory as the place to serve files from. Open a new browser window and visit http://localhost:8080/webpack-dev-server/. If all is well, you'll see something along the lines of this:

      An example of the server

      Now, not only do we have a nice little web server here, we have one that watches our code for changes. If it sees we've altered a file, it will automatically run the webpack command to bundle our code and refresh the page without us doing a single thing. All with zero configuration.

      Try it out, change the name passed to the sayHello function, and switch back to the browser to see your change applied instantly.


      One of the most important features of Webpack is Loaders. Loaders are analogous to "tasks" in Grunt and Gulp. They essentially take files and transform them in some way before they are included in our bundled code.

      Say we wanted to use some of the niceties of ES2015 in our code. ES2015 is a new version of JavaScript that isn't supported in all browsers, so we need to use a loader to transform our ES2015 code into plain old ES5 code that is supported. To do this, we use the popular Babel Loader which, according to the instructions, we install like this:

      This installs not only the loader itself but also its dependencies and an ES2015 preset, as Babel needs to know what type of JavaScript it is converting.

      Now that the loader is installed, we just need to tell Webpack to use it. Update webpack.config.js so it looks like this:
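      A sketch of the updated file, using the webpack 1.x loaders syntax in use when this article was written:

      ```javascript
      // webpack.config.js: now running every .js file (outside node_modules)
      // through the Babel loader with the ES2015 preset.
      module.exports = {
        entry: './main.js',
        output: {
          filename: 'bundle.js'
        },
        module: {
          loaders: [
            {
              test: /\.js$/,           // apply to all .js files...
              exclude: /node_modules/, // ...except third-party packages
              loader: 'babel-loader',
              query: {
                presets: ['es2015']
              }
            }
          ]
        }
      };
      ```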

      There are a few important things to note here. The first is the line test: /\.js$/, a regular expression telling Webpack to apply this loader to all files with a .js extension. Similarly, exclude: /node_modules/ tells Webpack to ignore the node_modules directory. loader and query are fairly self-explanatory: use the Babel loader with the ES2015 preset.

      Restart your web server by pressing ctrl+c and running webpack-dev-server again. All we need to do now is use some ES6 code in order to test the transform. How about we change our sayHello variable to be a constant?

      After saving, Webpack should have automatically recompiled your code and refreshed your browser. Hopefully you'll see no change whatsoever. Take a peek in bundle.js and see if you can find the const keyword. If Webpack and Babel have done their jobs, you won't see it anywhere—just plain old JavaScript.

      On to Part 2

      In Part 2 of this tutorial, we'll look at using Webpack to add CSS and images to your page, as well as getting your site ready for deployment.



    1. Free Course on Creating a New JavaScript Framework

      Do you want to know how to create your own JavaScript framework?

      In our new Coffee Break Course, Daily Mail developer Jason Green tells you about Milo: the Daily Mail's homegrown JavaScript framework that powers its high-volume news site. He'll tell you all about the reasons the Daily Mail team decided to roll their own framework and introduce some of the features that set Milo apart from other existing frameworks.

      Create Your Own JavaScript Framework screenshot

      This is the first course in a series about Milo, its architecture, and the challenges of building it. Best of all, this course is completely free. 

      Screenshot from Create a JavaScript Framework course

      Watch the introduction below to find out more.

      To take this free course, simply go to the course page and follow the steps to create a free account. If you already have an account, just log in and you’ll be able to get started right away.



    1. Automate All the Things With Ansible: Part Two


      This is part two of a two-part tutorial on Ansible. Part one is here. In this part, you will learn about roles (Ansible's building blocks), variables, loops, how to use roles in playbooks, and how to organize roles into a directory structure.


      When you manage tens, hundreds or more servers, probably many of them need to be configured similarly. Different groups of servers like web servers or database servers will require their own special configuration, but also may share some other common functionality. It is of course possible to just copy tasks around, but this gets old really fast when dealing with a complicated infrastructure.

      Ansible roles are the ticket. Playbooks can include roles. Roles can depend on other roles, and Ansible best practices recommend grouping hosts in your inventory file based on their roles. Roles are the backbone of serious Ansible-managed infrastructure. As usual, I'll start with an example and introduce many of the capabilities of roles through the example.

      I like aliases and shell functions a lot because I can't remember all the arcane switches and options for each command, and also because it saves a lot of typing. I also like to have some tools like htop and tmux on every server I log in to.

      Here is a file that contains some of my favorite aliases and functions. I'll call it '.gigirc'. By the way, if you ever wondered what the 'rc' suffix stands for in all those rc files, then it stands for 'Run Commands'.

      Let's define a role called 'common' that creates a user called 'gigi', adds a public SSH key, copies the '.gigirc' file, adds a line at the end of '~/.bashrc' that runs this file, and finally installs the common packages vim, htop and tmux (defined in the 'vars/main.yml' file).

      I will introduce a lot of new stuff here: four different modules, variables, and loops. Also, roles are typically spread across multiple files in a standard directory structure. I'll show you a couple of files and then explain about the directory structure. Here is the 'tasks/main.yml' file:
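      A sketch of what such a 'tasks/main.yml' might contain. Module arguments, paths, and the key file name are assumptions; the copy task uses the copy module, which the article doesn't describe, while the other tasks use the four modules covered below:

      ```yaml
      - name: Create the gigi user
        user:
          name: gigi
          shell: /bin/bash

      - name: Add a public SSH key for gigi
        authorized_key:
          user: gigi
          key: "{{ lookup('file', 'keys/gigi.pub') }}"

      - name: Copy .gigirc to gigi's home directory
        copy:
          src: .gigirc
          dest: /home/gigi/.gigirc
          owner: gigi

      - name: Run .gigirc from .bashrc
        lineinfile:
          dest: /home/gigi/.bashrc
          line: "source /home/gigi/.gigirc"

      - name: Install the common packages
        apt:
          name: "{{ item }}"
          state: present
        with_items: "{{ COMMON_PACKAGES }}"
      ```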

      And here is the vars/main.yml file that contains the definition of the 'COMMON_PACKAGES' variable used to specify which common packages to install.
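      Given the packages named above, the file would be as simple as:

      ```yaml
      COMMON_PACKAGES:
        - vim
        - htop
        - tmux
      ```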


      The user module can manage user accounts. Here I use it to create the user 'gigi'.

      The authorized_key module is for adding/removing SSH authorized keys. Here I use it to add my public key for the 'gigi' user.

      The lineinfile module can be used to replace or add single lines to a file. In this case, I use it to source the '.gigirc' file from '.bashrc', so all the cool aliases and functions in '.gigirc' are always available in any interactive session.

      Finally, the apt module has tons of options for managing apt packages. Here I just install some common packages.


      The COMMON_PACKAGES you see in the last task for installing common packages is a variable. Ansible lets you use variables defined almost anywhere: playbooks, inventory, roles, dedicated files, and even environment variables. There is a lot more information about variables in the documentation.


      Ansible is declarative, so it doesn't support explicit loops. But there is a plethora of with_* constructs (such as with_items) that allow you to perform repeated operations on some structure like a list of users, packages, or lines in a file. You can also repeat operations until some condition is true, or get the index of the current item. Additional information can be found in the documentation.
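      For example, a with_items loop over an inline list looks like this (a generic sketch, not from the article):

      ```yaml
      - name: Ensure a set of users exists
        user:
          name: "{{ item }}"
          state: present
        with_items:
          - alice
          - bob
          - carol
      ```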

      Role Directory Structure

      Here is what a typical role directory structure may look like:


      ├── handlers
      │   └── main.yml
      ├── meta
      │   └── main.yml
      ├── tasks
      │   └── main.yml
      ├── templates
      └── vars
          ├── Debian.yml
          ├── Ubuntu.yml
          └── main.yml

      The 'tasks/main.yml' file is where all the tasks are defined. Each task corresponds to an Ansible command that typically uses a module.

      The 'meta/main.yml' file will contain a list of other roles that the current role depends on. Those roles' tasks will be executed before the current role, so it can be sure all its prerequisites are met.

      The 'handlers/main.yml' file is where you keep your handlers, like the handler you saw earlier that starts Nginx after installation.

      The templates directory is where you keep Jinja2 templates of configuration and other files that you want to populate and copy to the target system.

      The vars directory contains various variables and can conditionally contain different values for different operating systems (very common use case).

      It's important to note that Ansible is very flexible and you can put anything almost anywhere. This is just one possible structure that makes sense to me. If you look at other people's directory structures, you may see something completely different. That's totally fine. Don't be alarmed. Ansible is not prescriptive, although it does provide guidance for best practices.

      Using Roles

      Roles do the heavy lifting, but playbooks are how you actually do work. Playbooks marry the inventory and the roles, specifying which roles to play on which hosts. Here is what a playbook with roles looks like:
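      A sketch of such a playbook, assuming the 'common' role defined earlier (the host group name is an assumption):

      ```yaml
      - hosts: webservers
        become: yes
        roles:
          - common
      ```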

      Running the playbook produces the following output:


      Ansible is a great tool. It is lightweight. It can be used interactively with ad-hoc commands, and it scales very well to massive systems. It also has a lot of momentum and a great community. If you manage or even just work with remote servers, you want Ansible.


