A lot of what we do depends on the knowledge we possess. Only when we are aware of what can be done can we make smarter and more effective decisions. That is why it is always good to have quick tips and tricks handy in your pocket. This principle applies everywhere, including for MS-SQL developers.
In this article, I would like to share a few SQL scripts that have proven very useful in my daily job as a SQL developer. For each one, I'll present a brief scenario describing where the script can be used, along with the script itself.
Note: Before reaping the benefits of these scripts, it is highly recommended that you run each of them in a test environment before pointing them at a production database.
Can we imagine life without Control-F in today's world? Or a life without search engines? Dreadful, isn't it? Now imagine you have 20-30 SQL procedures in your database and you need to find the ones that contain a certain word.
One way to do it is to open each procedure one at a time and do a Control-F inside it. But that is manual, repetitive, and boring. So here is a quick script that does it for you.
SELECT DISTINCT
    o.name AS Object_Name,
    o.type_desc
FROM sys.sql_modules m
INNER JOIN sys.objects o
    ON m.object_id = o.object_id
WHERE m.definition LIKE '%search_text%'
If you have a large database and the source of data for your database is some ETL (extract, transform, load) process that runs on a daily basis, this next script is for you.
Say you have scripts that run on a daily basis to extract data into your database and this process takes about five hours each day. As you begin to look more deeply into this process, you find some areas where you can optimize the script to finish the task in under four hours.
You would like to try out this optimization, but since you already have the current implementation on a production server, the logical thing to do is try out the optimized process in a separate database, which you would replicate using the existing database.
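The replication step itself can be done with a plain backup and restore. Below is a minimal sketch; the database names, backup path, and logical file names are assumptions for illustration, so check your actual logical file names with RESTORE FILELISTONLY and adjust the paths for your server.

```sql
-- Sketch: clone YourDatabase_1 into YourDatabase_2 via backup/restore.
-- All names and paths below are hypothetical -- verify logical file names with:
--   RESTORE FILELISTONLY FROM DISK = 'E:\Backups\YourDatabase_1.bak'
BACKUP DATABASE YourDatabase_1
TO DISK = 'E:\Backups\YourDatabase_1.bak' WITH INIT;

RESTORE DATABASE YourDatabase_2
FROM DISK = 'E:\Backups\YourDatabase_1.bak'
WITH MOVE 'YourDatabase_1'     TO 'E:\Data\YourDatabase_2.mdf',
     MOVE 'YourDatabase_1_log' TO 'E:\Data\YourDatabase_2_log.ldf';
```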
Now, once ready, you would run both ETL processes and compare the extracted data. If you have a database with many tables, this comparison can take quite a while. So, here's a quick script that facilitates this process.
-- Count rows in every table of the database loaded by the regular run
USE YourDatabase_1
CREATE TABLE #counts
(
    table_name varchar(255),
    row_count  int
)
EXEC sp_MSForEachTable @command1 = 'INSERT #counts (table_name, row_count) SELECT ''?'', COUNT(*) FROM ?'

-- Repeat for the database loaded by the modified (optimized) scripts
USE YourDatabase_2
CREATE TABLE #counts_2
(
    table_name varchar(255),
    row_count  int
)
EXEC sp_MSForEachTable @command1 = 'INSERT #counts_2 (table_name, row_count) SELECT ''?'', COUNT(*) FROM ?'

-- Report only the tables whose row counts differ
SELECT
    a.table_name,
    a.row_count               AS [Counts from regular run],
    b.row_count               AS [Counts from mod scripts],
    a.row_count - b.row_count AS [difference]
FROM #counts a
INNER JOIN #counts_2 b
    ON a.table_name = b.table_name
WHERE a.row_count <> b.row_count
ORDER BY a.table_name, a.row_count DESC
In any IT company, the first thing a newly hired programmer (or SQL developer) has to do before writing his or her first SQL query is take out insurance on the working version of the production database, i.e. make a backup.
This single act of creating a backup and working against the copy gives you the freedom to perform and practice any kind of data transformation, because even if you blow away the client's data, it can be recovered. In fact, it isn't just new hires: even the veterans never perform a data transformation without creating a backup first.
Although backing up databases in SQL Server is not a difficult task, it is definitely time-consuming, especially when you need to back up many databases at once. The next script comes in handy for exactly this purpose.
DECLARE @name     VARCHAR(50)  -- database name
DECLARE @path     VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20)  -- used for file name

-- specify database backup directory (created if it does not exist)
SET @path = 'E:\Sovit_BackupFolder\'
EXEC master.dbo.xp_create_subdir @path

-- specify filename format (YYYYMMDD)
SELECT @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)

DECLARE db_cursor CURSOR FOR
SELECT name
FROM master.dbo.sysdatabases
WHERE name IN ('DB_1','DB_2','DB_3','DB_4','DB_5','DB_6') -- only these databases

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
    BACKUP DATABASE @name TO DISK = @fileName
    FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor
Every SQL Server database has a transaction log that records all transactions and the database modifications made by each transaction. The transaction log is a critical component of the database and, if there is a system failure, the transaction log might be required to bring your database back to a consistent state.
As the number of transactions starts increasing, however, space availability starts becoming a major concern. Fortunately, SQL Server allows you to reclaim the excess space by reducing the size of the transaction log.
While you can shrink log files manually, one at a time through the UI, who has the time for that? The following script shrinks multiple database log files rapidly.
DECLARE @logName    AS nvarchar(50)
DECLARE @databaseID AS int

DECLARE db_cursor CURSOR FOR
SELECT TOP 10 name, database_id -- only 10, but you can choose any number
FROM sys.master_files
WHERE physical_name LIKE '%.ldf'
  AND physical_name NOT LIKE 'C:\%' -- specify your database paths
  AND name NOT IN ('mastlog')       -- any database logs that you would like to exclude
ORDER BY size DESC

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @logName, @databaseID

WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @databaseName AS nvarchar(50)
    SET @databaseName = DB_NAME(@databaseID)

    DECLARE @tsql nvarchar(300)
    -- Note: switching to SIMPLE recovery breaks the log backup chain;
    -- take a full backup afterward if you need point-in-time recovery.
    SET @tsql = 'USE [' + @databaseName + ']
                 ALTER DATABASE [' + @databaseName + '] SET RECOVERY SIMPLE
                 DBCC SHRINKFILE (' + @logName + ', 1)'
    EXEC (@tsql)

    FETCH NEXT FROM db_cursor INTO @logName, @databaseID
END

CLOSE db_cursor
DEALLOCATE db_cursor
Single-user mode specifies that only one user at a time can access the database and is generally used for maintenance actions. Basically, if other users are connected to the database at the time that you set the database to single-user mode, their connections to the database will be closed without warning.
This is quite useful in the scenarios where you need to restore your database to the version from a certain point in time or you need to prevent possible changes by any other processes accessing the database.
USE master;
GO
-- Disconnect all other users immediately, rolling back their open transactions
ALTER DATABASE YourDatabaseName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
-- Perform your maintenance here; this example marks the database read-only
ALTER DATABASE YourDatabaseName SET READ_ONLY;
GO
-- Return the database to normal multi-user access
ALTER DATABASE YourDatabaseName SET MULTI_USER;
GO
Many programming languages let you interpolate values into strings, which is very useful when generating dynamic text. T-SQL provides no direct equivalent of sprintf by default (FORMATMESSAGE comes close on SQL Server 2016 and later), so here is a quick remedy. Using the function below, any number of values can be inserted dynamically into a string.
-- Example usage:
-- DECLARE @test varchar(400)
-- SELECT @test = dbo.FN_SPRINTF('I am %s and you are %s', '1,0', ',') -- param separator ','
-- PRINT @test -- result: I am 1 and you are 0
-- SELECT @test = dbo.FN_SPRINTF('I am %s and you are %s', '1#0', '#') -- param separator '#'
-- PRINT @test -- result: I am 1 and you are 0

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- AUTHOR: <SOVIT POUDEL>
-- =============================================
CREATE FUNCTION DBO.FN_SPRINTF
(
    @STRING          VARCHAR(MAX),
    @PARAMS          VARCHAR(MAX),
    @PARAM_SEPARATOR CHAR(1) = ','
)
RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @P VARCHAR(MAX)
    DECLARE @PARAM_LEN INT

    SET @PARAMS = @PARAMS + @PARAM_SEPARATOR
    SET @PARAM_LEN = LEN(@PARAMS)

    WHILE NOT @PARAMS = ''
    BEGIN
        -- take the next parameter from the list
        SET @P = LEFT(@PARAMS + @PARAM_SEPARATOR, CHARINDEX(@PARAM_SEPARATOR, @PARAMS) - 1)
        -- replace the first remaining %s placeholder with it
        SET @STRING = STUFF(@STRING, CHARINDEX('%S', @STRING), 2, @P)
        -- drop the consumed parameter (and its separator) from the list
        SET @PARAMS = SUBSTRING(@PARAMS, LEN(@P) + 2, @PARAM_LEN)
    END

    RETURN @STRING
END
When comparing multiple databases that have similar schemas, one has to look at the details of the table columns. The definitions of the columns (data type, nullability) are as vital as the column names themselves.
For databases with many tables, and tables with many columns, comparing each column manually against its counterpart in another database can take quite a while. The next script automates this process by printing the definitions of all table columns in a given database.
SELECT
    sh.name + '.' + o.name AS ObjectName,
    s.name AS ColumnName,
    CASE
        WHEN t.name IN ('char','varchar')
            THEN t.name + '(' + CASE WHEN s.max_length < 0 THEN 'MAX' ELSE CONVERT(varchar(10), s.max_length) END + ')'
        WHEN t.name IN ('nvarchar','nchar')
            THEN t.name + '(' + CASE WHEN s.max_length < 0 THEN 'MAX' ELSE CONVERT(varchar(10), s.max_length / 2) END + ')'
        WHEN t.name IN ('numeric')
            THEN t.name + '(' + CONVERT(varchar(10), s.precision) + ',' + CONVERT(varchar(10), s.scale) + ')'
        ELSE t.name
    END AS DataType,
    CASE WHEN s.is_nullable = 1 THEN 'NULL' ELSE 'NOT NULL' END AS Nullable
FROM sys.columns s
INNER JOIN sys.types t
    ON s.system_type_id = t.user_type_id AND t.is_user_defined = 0
INNER JOIN sys.objects o
    ON s.object_id = o.object_id
INNER JOIN sys.schemas sh
    ON o.schema_id = sh.schema_id
WHERE o.name IN (SELECT table_name FROM information_schema.tables)
ORDER BY sh.name + '.' + o.name, s.column_id
In this article, we looked at seven useful scripts that can cut out a great deal of manual, laborious work and increase the overall efficiency of SQL developers, along with the scenarios in which each can be applied.
If you're looking for even more SQL scripts to study (or to use), don't hesitate to see what we've got available on CodeCanyon.
Once you get the hang of these scripts, you will certainly begin to identify many other scenarios where they can be used effectively.