Saturday 6 December 2014

What are synonyms? Are they useful?


SQL Server 2005 introduced synonyms, which let you reference another object (a view, table, stored procedure, or function) that may live on a different server, database, or schema in your environment.  In short, the object name referenced throughout your code can point to a completely different underlying object, with no coding changes necessary.  Think of a synonym as an alias: a means to simplify migrations and application testing without making any dependent code changes.

Synonyms can be very useful and can be created for:
  • Tables
  • Views
  • Assembly (CLR) Stored Procedures, Table-Valued Functions, Aggregate Functions
  • SQL Scalar Functions
  • SQL Stored Procedures
  • SQL Table Valued Functions
  • SQL Inline-Table-Valued Functions
  • Local and Global Temporary Tables
  • Replication-filter-procedures
  • Extended Stored Procedures
Benefits
  • SYNONYMs provide a layer of abstraction over the referenced object.
  • Allow complicated (multi-part) and lengthy names to be replaced with a simplified alias that looks like an object resident on the same server.
  • Provide flexibility for changing the location of objects without changing existing code.
  • SYNONYMs can be created in the same database to provide backward compatibility for older applications when objects are dropped or renamed.
  • SYNONYMs can be useful if you give front-end query tools like spreadsheets and Access linked tables direct links to the tables.
Limitations
  • SYNONYMs are loosely bound to the referenced objects, so you can delete a SYNONYM without any warning that it is being referenced by another database object.
  • Chaining is not allowed: you cannot create a SYNONYM of a SYNONYM.
  • They consume possible object names, since you cannot create a table with the same name as a synonym.
  • The object for which the SYNONYM is created is checked at run time, not at creation time. This means that if you make a related error, e.g. a spelling error, the synonym will be created, but you will get an error when accessing the object.
  • A SYNONYM cannot be referenced in a DDL statement.
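For example, a synonym can give local code a short name for a table in another database (the database and table names below are hypothetical):

CREATE SYNONYM dbo.Customers
FOR SalesDB.dbo.Customers;
GO

-- Existing code can now query dbo.Customers as if it were a local table
SELECT * FROM dbo.Customers;
GO

-- Dropping a synonym gives no warning, even if other code still references it
DROP SYNONYM dbo.Customers;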

Monday 3 November 2014

Importance of SET NOCOUNT in Stored Procedures

                    Today I am going to share an important concept. It sounds very small, but it matters. Whenever we write a procedure and execute it, a message appears in the Messages window showing the number of rows affected by each statement in the procedure.
e.g

CREATE PROCEDURE Test_no_count
AS
    SET NOCOUNT OFF

  BEGIN
      SELECT 1

      SELECT 2

      SELECT 3

      SELECT 4

  END 

GO

Exec Test_no_count
GO
    The default is SET NOCOUNT OFF, so we don't need to specify OFF; I have specified it explicitly for the demo.
When it is OFF, each statement returns a "rows affected" message. Sometimes the affected row counts run into the hundreds, so these messages create extra overhead on the network.

By using SET NOCOUNT ON we can remove this extra overhead from the network, which can actually improve the performance of our database and our application.


When SET NOCOUNT is ON, the count is not returned. When SET NOCOUNT is OFF, the count is returned.

GO
ALTER PROCEDURE Test_no_count
AS
    SET NOCOUNT ON

  BEGIN
      SELECT 1

      SELECT 2

      SELECT 3

      SELECT 4
  END 

GO


Exec Test_no_count 

GO
     An important point: the @@ROWCOUNT function is still updated even when SET NOCOUNT is ON.
           SET NOCOUNT ON can be useful in stored procedures. Adding it can reduce network traffic, because the client will not receive the message indicating the number of rows affected by each T-SQL statement. For procedures that execute many statements, this reduction in network traffic can give a noticeable performance boost.
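A quick sketch of this behavior (the table and column names are hypothetical):

SET NOCOUNT ON;
UPDATE SomeTable SET SomeColumn = 1 WHERE ID = 1;
-- No "rows affected" message is sent to the client,
-- but @@ROWCOUNT still reflects the last statement:
SELECT @@ROWCOUNT AS RowsAffected;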





Saturday 1 November 2014

Disable and Enable the Foreign Key Constraint on the table in SQL Server

     It's an interesting concept, but one you need to know when working on a huge database. Recently a friend of mine attended an interview and was rejected. The interviewer asked him a question he found difficult, so he asked me to help him understand the solution.
   He said he had never heard of disabling and enabling constraints, though he had heard about triggers. Sometimes we work in a test environment during development. Whenever we deliver a feature to the QA team, they test a lot of positive and negative scenarios for regression.
Let us assume there are some negative cases we have to test. Say we have two tables:
1. A table with a primary key.
2. A table with a foreign key.
e.g.

 CREATE TABLE TEST_PRIMARY
  (
     ID INT PRIMARY KEY
  )

GO

CREATE TABLE TEST_FOREIGN
  (
     ID     INT IDENTITY (1, 1),
     PVALUE INT,
     FOREIGN KEY (PVALUE) REFERENCES TEST_PRIMARY(ID)
  )

GO

INSERT TEST_PRIMARY
       (ID)
SELECT 1
UNION
SELECT 2
UNION
SELECT 3

GO

INSERT INTO TEST_FOREIGN
            (PVALUE)
VALUES      (1),
            (2),
            (3)

GO
  Now suppose QA wants to test a negative case; for example, they want to know how the system behaves if constraints are not enforced, i.e. as if there were no constraints on the table.

e.g

Syntax (all tables, using the "?" placeholder that sp_MSforeachtable substitutes with each table name):
EXEC Sp_msforeachtable   "ALTER TABLE ? NOCHECK CONSTRAINT all"
Query (a single table does not need sp_MSforeachtable):
ALTER TABLE TEST_FOREIGN NOCHECK CONSTRAINT all

Now if you try to insert a value into TEST_FOREIGN other than a referenced value, it will be allowed.
e.g.

INSERT INTO TEST_FOREIGN
            (PVALUE)
VALUES      (4)

  Normally this would not be allowed.

Now we can enable the constraints again
e.g.
Syntax (all tables, using the "?" placeholder):
EXEC sp_msforeachtable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"
Query (single table):
ALTER TABLE Test_foreign WITH CHECK CHECK CONSTRAINT all


Note: before re-enabling the constraint, we need to delete the rows in TEST_FOREIGN that violate the foreign key relationship.
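Using the tables from this example, the conflicting rows can be removed like this:

DELETE FROM TEST_FOREIGN
WHERE PVALUE NOT IN (SELECT ID FROM TEST_PRIMARY);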


Wednesday 29 October 2014

Recent execution of Procedures in SQL Server

         Recently I was working on a piece of functionality that, interestingly, I did not fully know.
I needed to complete it as early as possible. It was an enhancement, and as SQL developers know, an enhancement can involve a lot of R&D when the application is complex. That was exactly my scenario: I knew very little about it, but had to finish quickly.
      So I thought it was time for smart work, not hard work!
 I knew one thing: everything in that functionality was handled through procedures. If I could learn the flow of execution, i.e. the order in which the procedures were called, I could easily capture the list of procedures and follow the flow to get their sequence.
    I tried it and it worked for me. I got the list of procedures, modified the code per the requirement, unit tested it, ran all the regressions, and it passed every case. So sometimes smart work works, too.
I will demonstrate as follows
e.g
CREATE PROCEDURE X
AS SELECT 1
GO
CREATE PROCEDURE Y
AS SELECT 1
GO
EXEC X
GO
EXEC Y


  I created two procedures named X and Y and executed them.
I used the following query to get the result:


SELECT DB_NAME(DMV.database_id) AS database_name,
       OBJECT_NAME(DMV.object_id, DMV.database_id) AS proc_name,
       DMV.last_execution_time
FROM   sys.dm_exec_procedure_stats AS DMV
WHERE  DB_NAME(DMV.database_id) IS NOT NULL
ORDER  BY DMV.last_execution_time DESC





Thursday 25 September 2014

What is Phantom read?

           A phantom read occurs when two identical queries are executed and the set of rows returned by the second query differs from the first. Let's take a simple example: suppose your bank's policy changed so that the minimum balance should be $150 instead of $100 for each account type. This is not a big deal for a database administrator: he performs an UPDATE statement for each account type where the minimum balance is less than $150 and sets the value to $150. But when the manager checks the database, he finds one record in the same table with a minimum balance below $150. The DBA is surprised: how is this possible, when he ran the update on the whole table?
This is called a phantom read. Phantom reads are quite rare, since they need the right circumstances and timing. In the example above, someone may have inserted a new record with a minimum balance below $150 at the very moment the DBA executed the UPDATE. Because it was a new record, it did not conflict with the UPDATE transaction, which completed successfully. Phantom reads can be avoided by using a higher isolation level, i.e. SERIALIZABLE.
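As a sketch, the DBA's session could avoid the phantom insert like this (the Accounts table and MinBalance column are hypothetical names for the scenario above):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    -- Range locks now prevent other sessions from inserting
    -- qualifying rows until this transaction completes.
    UPDATE Accounts
    SET MinBalance = 150
    WHERE MinBalance < 150;
COMMIT TRANSACTION;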

What is a Dirty Read?

              A dirty read takes no notice of any lock taken by another process. The read is officially “dirty” when it reads data that is uncommitted. This can become problematic if the uncommitted transaction fails or for some other reason is rolled back.
              Imagine a scenario in which you are shopping on a website, place an item into your basket, and proceed to payment. The site's checkout process decrements the stock by one and starts to charge your card, all in one transaction. At that moment, a second, unrelated process starts: the website's back-office stock interface runs and makes a dirty read of all the product inventory levels, reading the reduced value. Unfortunately, there is a problem with your transaction (insufficient funds), and your purchase is rolled back. The website stock level has now reverted to the original value, but the stock interface has just reported a different one.

Let see following example to understand in detail:
 /* SESSION 1 */
 
BEGIN TRANSACTION;
UPDATE Person.Person
SET FirstName = 'James'
WHERE LastName = 'Jones';
WAITFOR DELAY '00:00:05.000';
ROLLBACK TRANSACTION;
SELECT FirstName
,LastName
FROM Person.Person
WHERE LastName = 'Jones';
 /* SESSION 2 */

 

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; 
SELECT FirstName ,LastName 
FROM Person.Person 
WHERE LastName = 'Jones';

Once Session 1 is running, quickly switch over to a second session and execute the Session 2 statements above. The query in the second session performs a dirty read: if you time it right and execute it while the transaction in Session 1 is still open (not yet rolled back), every person with a surname of "Jones" will appear with a first name of "James".

Wednesday 17 September 2014

How to check how many processes are running on the database server?

                Today I am going to share an interesting experience. A few days ago I was working on a release. We were using one DB server with almost 20 databases on it, 15 of which were in use. We were testing a piece of functionality that normally took 10-20 seconds, but this time it took 10 to 20 minutes. We were surprised. We tried again and it took 9-10 minutes; another run took more than 20 minutes. This was odd behavior. Since the results were correct, we concluded there was no problem in the code. So what was the problem? After further analysis we found that other databases on the server were also in use and were consuming more server resources. The following query helped us identify the root cause.

SELECT [Spid] = session_Id
      , ecid
      , [Database] = DB_NAME(sp.dbid)
      , [User] = nt_username
      , [Status] = er.status
      , [Wait] = wait_type
      , [Individual Query] = SUBSTRING (qt.text,
             er.statement_start_offset/2,
      (CASE WHEN er.statement_end_offset = -1
             THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
            ELSE er.statement_end_offset END -
                                er.statement_start_offset)/2)
      ,[Parent Query] = qt.text
      , Program = program_name
      , Hostname
      , nt_domain
      , start_time
    FROM sys.dm_exec_requests er
    INNER JOIN sys.sysprocesses sp ON er.session_id = sp.spid
    CROSS APPLY sys.dm_exec_sql_text(er.sql_handle)as qt
    WHERE session_Id > 50              -- Ignore system spids.
    AND session_Id NOT IN (@@SPID)     -- Ignore this current statement.
    ORDER BY 1, 2

  The above query tells us which processes are running on the server.

Thursday 11 September 2014

What is a temporary procedure in SQL Server?

    I was working on a critical defect in a production environment, where I had only read permission on the server.
After investigation I realized I needed to change the code of a procedure. Then I remembered it was a production server, and I did not have permission to create or drop objects. I decided to test the new procedure code without any impact on the production server. It sounds magical, but there is no magic: I achieved all of this with temporary procedures, without touching production data. There was one limitation: with read-only permission we could not run DML against the main table, so I created a temp table instead.
I checked the final output against this dummy table. Interestingly, the code worked and gave the desired output. Let us discuss this in detail.
       Temporary Stored Procedures are similar to normal Stored Procedures, but as their name suggests, have a fleeting existence. There are two kinds of temporary Stored Procedures, local and global. Temporary Stored Procedures are created just like any other SP but the name must be prefixed with a hash (#) for a local temporary SP and two hashes (##) for a global temporary Stored Procedure.

A local temporary stored procedure is available only in the current session and is dropped when the session is closed; it is not visible from other sessions.
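A minimal sketch of a local temporary procedure:

CREATE PROCEDURE #Temp_Test
AS
BEGIN
    SET NOCOUNT ON;
    SELECT GETDATE() AS CurrentTime;
END
GO

EXEC #Temp_Test;  -- works in this session; other sessions cannot see it
GO

DROP PROCEDURE #Temp_Test;  -- optional; dropped automatically when the session closes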

 Please comment if this information is useful.
 


What is table variable?

Table Variable

This acts like a variable and exists for a particular batch of query execution. It gets dropped once the batch ends. It is also created in the tempdb database, not in memory. It allows you to create a primary key and an identity column at declaration time, but not a non-clustered index.
  GO
  DECLARE @TProduct TABLE
  (
  SNo INT IDENTITY(1,1),
  ProductID INT,
  Qty INT
  )
  --Insert data to table variable @TProduct
  INSERT INTO @TProduct(ProductID,Qty)
  SELECT DISTINCT ProductID, Qty FROM ProductsSales ORDER BY ProductID ASC
  --Select data
  Select * from @TProduct
  --Next batch
  GO
  Select * from @TProduct --gives an error in the next batch

What is CTE and Recursive CTE?

 CTE   (Common table expression)
            CTE stands for common table expression. It was introduced with SQL Server 2005. It is a temporary result set, typically the result of a complex sub-query. Unlike a temporary table, its life is limited to the current query. It is defined using the WITH keyword. CTEs improve the readability and maintainability of complex queries and sub-queries. It is common practice to begin a CTE with a semicolon, which terminates any preceding statement.

SELECT *
FROM   (SELECT Addr.Address,
               Emp.Name,
               Emp.Age
        FROM   Address Addr
               INNER JOIN Employee Emp
                       ON Emp.EID = Addr.EID) Temp
WHERE  Temp.Age > 50
ORDER  BY Temp.NAME

The above query can be re-written using a CTE as follows:
 ;WITH CTE1(Address, Name, Age)--Column names for CTE, which are optional
     AS (SELECT Addr.Address,
                Emp.Name,
                Emp.Age
         FROM   Address Addr
                INNER JOIN EMP Emp
                        ON Emp.EID = Addr.EID)
SELECT *
FROM   CTE1 --Using CTE
WHERE  CTE1.Age > 50

ORDER  BY CTE1.NAME

Recursive CTE  :
  Recursion is a process in which a query executes itself. It is used to get results based on the output of a base query, and a CTE can be written as a recursive CTE (common table expression).
 
Consider the following query, which walks an employee hierarchy (an Employee table with EmployeeID and ManagerID columns is assumed):
 ;WITH emp_cte (EmployeeID, ManagerID, Level)
     AS (SELECT EmployeeID,
                ManagerID,
                0 AS Level
         FROM   Employee
         WHERE  ManagerID IS NULL -- base (anchor) query
         UNION ALL
         SELECT Emp.EmployeeID,
                Emp.ManagerID,
                cte.Level + 1
         FROM   Employee Emp
                INNER JOIN emp_cte cte
                        ON Emp.ManagerID = cte.EmployeeID)
SELECT *
FROM   emp_cte

 In the above example, emp_cte is a common table expression. The base records for the CTE are produced by the first query, before UNION ALL, which returns the employees that have no ManagerID. The second query, after UNION ALL, is executed repeatedly: it joins the CTE's result with the Employee table on the CTE's EmployeeID and the table's ManagerID, returning the employees who report to the previous level.
This process is recursive and continues until the recursive part returns no rows.




What is temporary table?

       In SQL Server, temporary tables are created at run-time and you can do all the operations which you can do on a normal table. These tables are created inside Tempdb database. Based on the scope and behavior temporary tables are of two types as given below-

Local Temp Table :

  The table name starts with a single hash ("#") sign.
    CREATE TABLE #Local
    (
    UserID int,
    Name varchar(50),
    Address varchar(150)
    )
    GO
    insert into #Local values ( 1, 'Shailendra','Noida');
    GO
    Select * from #Local
    The scope of a local temp table is limited to the current session of the current user, i.e. the current query window. If you close the current query window, or open a new query window and try to find the temp table created above, you will get an error.
Global Temp Table :

    Global temp tables are available to all SQL Server sessions and connections (i.e. all users). They can be created by any SQL Server connection and are automatically deleted when all the SQL Server connections referencing them have been closed. A global temporary table name starts with a double hash ("##") sign.
    CREATE TABLE ##Global
    (
    UserID int,
    Name varchar(50),
    Address varchar(150)
    )
    GO
    insert into ##Global values ( 1, 'Shailendra','Noida');
    GO
    Select * from ##Global
    Global temporary tables are visible to all SQL Server connections, while local temporary tables are visible only to the current SQL Server connection.

Performance of TempDB

                      Recently I was facing a performance issue: the same operation took more time on one server than on another. After deeper analysis I found the root cause, which was really interesting, and I would like to share it here on my blog.
                  The root cause was the file mapping for tempdb. On the new server, tempdb was mapped to only one file; on the old system, tempdb was mapped to more than one file. I will explain how I got this information.
Please refer to the following query

SELECT name AS FileName,
       SIZE*1.0/128 AS FileSizeinMB,
            CASE max_size
                WHEN 0 THEN 'Autogrowth is off.'
                WHEN -1 THEN 'Autogrowth is on.'
                ELSE 'Log file will grow to a maximum size of 2 TB.'
            END Autogrowth,
            growth AS 'GrowthValue',
            'GrowthIncrement' = CASE
                                    WHEN growth = 0 THEN 'Size is fixed and will not grow.'
                                    WHEN growth > 0
                                         AND is_percent_growth = 0 THEN 'Growth value is in 8-KB pages.'
                                    ELSE 'Growth value is a percentage.'
                                END
FROM tempdb.sys.database_files;

This query is important to understand:
tempdb.sys.database_files : gives us the list of files that are mapped to tempdb.

So the solution I used to improve performance was to add more files to tempdb,
e.g.
         An important point: the data files (.mdf/.ndf) and the log file (.ldf) should not be on the same disk; they should be on different disks, and the autogrow option for the file size should be on.

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'W:\tempdb2.ndf', SIZE = 256);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'X:\tempdb3.ndf', SIZE = 256);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'Y:\tempdb4.ndf', SIZE = 256);
For information:
  What exactly tempdb stores?
  • Global (##temp) or local (#temp) temporary tables, temporary table indexes, temporary stored procedures, table variables, tables returned in table-valued functions or cursors.
  • Database Engine objects to complete a query such as work tables to store intermediate results for spools or sorting from particular GROUP BY, ORDER BY, or UNION queries.
  • Row versioning values for online index processes, Multiple Active Result Sets (MARS) sessions, AFTER triggers and index operations (SORT_IN_TEMPDB).
  • DBCC CHECKDB work tables.
  • Large object (varchar(max), nvarchar(max), varbinary(max), text, ntext, image, xml) data type variables and parameters.
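To see which sessions are currently consuming tempdb space, a query along these lines can be used (page counts are in 8-KB pages):

SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
ORDER BY user_objects_alloc_page_count + internal_objects_alloc_page_count DESC;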



Wednesday 10 September 2014

Understanding missing indexes in SQL Server

   While analyzing a deadlock, I was reviewing the query execution plan and querying the index-related dynamic management views (DMVs), and I noticed the problem was related to potentially missing indexes on some columns. I would like to share that information on my blog.
The DMVs I queried are as follows:

sys.dm_db_missing_index_details — Returns detailed information about missing indexes, including the table, columns used in equality operations, columns used in inequality operations, and columns used in include operations.

sys.dm_db_missing_index_group_stats — Returns information about groups of missing indexes, which SQL Server updates with each query execution (not based on query compilation or recompilation).

sys.dm_db_missing_index_groups — Returns information about missing indexes contained in a missing index group.
Using these dynamic management views (DMVs), I wrote the following query, which returns the list of possible missing indexes for all SQL Server user databases. The results are ordered by "index advantage", which helps you see how beneficial each index would be if you created it on the table.

SELECT CAST(SERVERPROPERTY('ServerName') AS [nvarchar](256)) AS [SQLServer]
,db.[database_id] AS [DatabaseID]
,db.[name] AS [DatabaseName]
,id.[object_id] AS [ObjectID]
,id.[statement] AS [FullyQualifiedObjectName]
,id.[equality_columns] AS [EqualityColumns]
,id.[inequality_columns] AS [InEqualityColumns]
,id.[included_columns] AS [IncludedColumns]
,gs.[unique_compiles] AS [UniqueCompiles]
,gs.[user_seeks] AS [UserSeeks]
,gs.[user_scans] AS [UserScans]
,gs.[last_user_seek] AS [LastUserSeekTime]
,gs.[last_user_scan] AS [LastUserScanTime]
,gs.[avg_total_user_cost] AS [AvgTotalUserCost]
,gs.[avg_user_impact] AS [AvgUserImpact]
,gs.[system_seeks] AS [SystemSeeks]
,gs.[system_scans] AS [SystemScans]
,gs.[last_system_seek] AS [LastSystemSeekTime]
,gs.[last_system_scan] AS [LastSystemScanTime]
,gs.[avg_total_system_cost] AS [AvgTotalSystemCost]
,gs.[avg_system_impact] AS [AvgSystemImpact]
,gs.[user_seeks] * gs.[avg_total_user_cost] * (gs.[avg_user_impact] * 0.01) AS [IndexAdvantage]
,'CREATE INDEX [Missing_IXNC_' + OBJECT_NAME(id.[object_id], db.[database_id]) + '_' + REPLACE(REPLACE(REPLACE(ISNULL(id.[equality_columns], ''), ', ', '_'), '[', ''), ']', '') + CASE
WHEN id.[equality_columns] IS NOT NULL
AND id.[inequality_columns] IS NOT NULL
THEN '_'
ELSE ''
END + REPLACE(REPLACE(REPLACE(ISNULL(id.[inequality_columns], ''), ', ', '_'), '[', ''), ']', '') + '_' + LEFT(CAST(NEWID() AS [nvarchar](64)), 5) + ']' + ' ON ' + id.[statement] + ' (' + ISNULL(id.[equality_columns], '') + CASE
WHEN id.[equality_columns] IS NOT NULL
AND id.[inequality_columns] IS NOT NULL
THEN ','
ELSE ''
END + ISNULL(id.[inequality_columns], '') + ')' + ISNULL(' INCLUDE (' + id.[included_columns] + ')', '') AS [ProposedIndex]
,CAST(CURRENT_TIMESTAMP AS [smalldatetime]) AS [CollectionDate]
FROM [sys].[dm_db_missing_index_group_stats] gs WITH (NOLOCK)
INNER JOIN [sys].[dm_db_missing_index_groups] ig WITH (NOLOCK)
ON gs.[group_handle] = ig.[index_group_handle]
INNER JOIN [sys].[dm_db_missing_index_details] id WITH (NOLOCK)
ON ig.[index_handle] = id.[index_handle]
INNER JOIN [sys].[databases] db WITH (NOLOCK)
ON db.[database_id] = id.[database_id]
WHERE id.[database_id] > 4 -- Remove this to see for entire instance
ORDER BY [IndexAdvantage] DESC
OPTION (RECOMPILE);

Obviously these missing indexes are the ones that the SQL Server optimizer identified during query compilation, and each missing index recommendation is a specific recommendation targeting a specific query.  Consider submitting your workload and the proposed indexes to the Database Engine Tuning Advisor for further evaluation that includes partitioning, the choice of clustered versus non-clustered index, and so on.
