
Showing posts from 2015

INSERTING HUGE DATA in a Table to check Performance Issues.

CREATE TABLE TblNumbers (ID INT IDENTITY(1,1) PRIMARY KEY, Num INT)
GO

;WITH N AS
(
    SELECT 0 AS Num UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL
    SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0   -- 10 rows
),
Numbers AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS Rn
    FROM N N1,   -- 10 rows
         N N2,   -- 10*10 = 100
         N N3,   -- 10*10*10 = 1,000
         N N4,   -- 10*10*10*10 = 10,000
         N N5,   -- 10*10*10*10*10 = 100,000
         N N6,   -- 10*10*10*10*10*10 = 1,000,000
         N N7,   -- 10*10*10*10*10*10*10 = 10,000,000
         N N8,
         N N9
)
INSERT INTO TblNumbers (Num)
SELECT Rn FROM Numbers

SELECT * FROM TblNumbers

How to avoid the KEY LOOKUP operator

The Key Lookup operator is a bookmark lookup against a table with a clustered index. Key Lookups can be quite expensive, so we should try to eliminate them when we can. Of course, we also need to consider the overall workload and how often the query with the Key Lookup is executed. A Key Lookup occurs when there is an index seek against a table, but the query requires additional columns that are not in that index. This forces SQL Server to go back to the clustered index and retrieve those extra columns, as you can see in the example. One way to reduce or eliminate a Key Lookup is to remove from the query some or all of the columns that are causing it. This can easily break your application, so don't do this until you are sure those columns are not actually needed. The second method is to create a covering index that includes the columns in the SELECT list. A covering index is simply a non-clustered index that has all the columns needed to either satisfy the entire query or at least its most expensive parts.
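As a minimal sketch (the table and column names below are hypothetical, not from the original post), a covering index created with INCLUDE removes the Key Lookup:

-- Hypothetical example: the query seeks on OrderDate but also needs
-- CustomerID and TotalDue, which are not part of the seek index, so the
-- plan shows an Index Seek followed by a Key Lookup.
SELECT CustomerID, TotalDue
FROM dbo.SalesOrders
WHERE OrderDate = '2015-06-01'

-- A covering non-clustered index keeps OrderDate as the key column and
-- carries the extra columns at the leaf level, so no lookup is needed.
CREATE NONCLUSTERED INDEX IX_SalesOrders_OrderDate_Covering
ON dbo.SalesOrders (OrderDate)
INCLUDE (CustomerID, TotalDue)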

AUTO_CREATE_STATISTICS NOT USEFUL FOR CREATING STATISTICS ON INDEXES

The AUTO_CREATE_STATISTICS option does not determine whether statistics get created for indexes, and it does not generate filtered statistics either. It applies strictly to single-column statistics for the full table. That is why, even though AUTO_CREATE_STATISTICS is enabled at the database level, we still need to update statistics for the indexes separately. In the example below I created a table with 3 columns and ran 3 different queries, each using a different column in the WHERE condition. Statistics were created automatically because of the AUTO_CREATE_STATISTICS option, but this option does not create statistics on indexes, which is why we run sp_updatestats after we rebuild indexes.
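A small sketch (the table name below is a placeholder for the 3-column test table) of how to see which statistics were auto-created and then refresh them:

-- Auto-created column statistics show up with names beginning with _WA_Sys_,
-- while index statistics share the name of the index.
SELECT s.name AS stats_name,
       s.auto_created,
       c.name AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
  ON s.object_id = sc.object_id AND s.stats_id = sc.stats_id
JOIN sys.columns AS c
  ON sc.object_id = c.object_id AND sc.column_id = c.column_id
WHERE s.object_id = OBJECT_ID('dbo.TestStats')   -- hypothetical table name

-- Refresh out-of-date statistics (including index statistics) across the database.
EXEC sp_updatestats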

How to find the users in a group login

Hi, by running the query below we can find out who the users in that group are.

USE master
GO
EXEC xp_logininfo @acctname = 'AOINTL\USCareUsers', @option = 'members'

How to move logins from one instance to another instance in SQL Server

Connect to the instance from which you need to move the logins and run the script below in the master database, because logins are server-level objects.

USE master
GO
IF OBJECT_ID ('sp_hexadecimal') IS NOT NULL
  DROP PROCEDURE sp_hexadecimal
GO
CREATE PROCEDURE sp_hexadecimal
    @binvalue varbinary(256),
    @hexvalue varchar(514) OUTPUT
AS
DECLARE @charvalue varchar(514)
DECLARE @i int
DECLARE @length int
DECLARE @hexstring char(16)
SELECT @charvalue = '0x'
SELECT @i = 1
SELECT @length = DATALENGTH (@binvalue)
SELECT @hexstring = '0123456789ABCDEF'
WHILE (@i <= @length)
BEGIN
  DECLARE @tempint int
  DECLARE @firstint int
  DECLARE @secondint int
  SELECT @tempint = CONVERT(int, SUBSTRING(@binvalue, @i, 1))
  SELECT @firstint = FLOOR(@tempint/16)
  SELECT @secondint = @tempint - (@firstint*16)
  SELECT @charvalue = @charvalue +
    SUBSTRING(@hexstring, @firstint+1, 1) +
    SUBSTRING(@hexstring, @secondint+1, 1)
  SELECT @i = @i + 1
END
SELECT @hexvalue = @charvalue
GO
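In the original Microsoft script, sp_hexadecimal is the helper used to script logins out with their original password hash and SID; a minimal hand-rolled sketch of the same idea (the login name below is a placeholder) looks like this:

-- Placeholder login name. Convert the password hash and SID of a SQL login
-- to hex strings so a CREATE LOGIN statement can be replayed on the target instance.
DECLARE @pwd_varbinary varbinary(256), @sid_varbinary varbinary(85)
DECLARE @pwd_hex varchar(514), @sid_hex varchar(514)

SELECT @pwd_varbinary = password_hash,
       @sid_varbinary = sid
FROM sys.sql_logins
WHERE name = 'MyAppLogin'

EXEC sp_hexadecimal @pwd_varbinary, @pwd_hex OUTPUT
EXEC sp_hexadecimal @sid_varbinary, @sid_hex OUTPUT

-- Run the printed statement on the destination instance; the login keeps the
-- same password and SID, so mapped database users do not become orphaned.
PRINT 'CREATE LOGIN [MyAppLogin] WITH PASSWORD = ' + @pwd_hex
      + ' HASHED, SID = ' + @sid_hex + ', CHECK_POLICY = OFF'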

OLTP vs OLAP

Difference between OLTP and OLAP databases.

Source of data
  OLTP: Operational data; OLTP systems are the original source of the data.
  OLAP: Consolidated data; OLAP data comes from the various OLTP databases.

Purpose of data
  OLTP: To control and run fundamental business tasks.
  OLAP: To help with planning, problem solving, and decision support.

What the data reveals
  OLTP: A snapshot of ongoing business processes.
  OLAP: Multi-dimensional views of various kinds of business activities.

Inserts and updates
  OLTP: Short and fast inserts and updates initiated by end users.
  OLAP: Periodic long-running batch jobs refresh the data.

Queries
  OLTP: Relatively standardized, simple queries returning relatively few records.
  OLAP: Often complex queries involving aggregations.

Processing speed
  OLTP: Typically very fast.
  OLAP: Depends on the amount of data involved; batch data refreshes and complex queries may take hours.

Error: 3041, Severity: 16, State: 1; Backup detected log corruption in database DatabaseName. Context is FirstSector

There is a transaction log backup scheduled on my production server. One bad day the transaction log backup SQL Agent job started throwing the error below, which I also found in the SQL Server error log (Eg: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG).

Error: 3041, Severity: 16, State: 1
Backup detected log corruption in database DatabaseName. Context is FirstSector. LogFile: 2 'E:\MSSQL\DATA\DatabaseName.ldf' VLF SeqNo: x541af1 VLFBase: x42e20000 LogBlockOffset: x43785c00 SectorStatus: 2 LogBlock.StartLsn.SeqNo: x6320656c LogBlock.StartLsn.Blk: x6f6e6e61 Size: x656e

The transaction log backup kept failing, so these are the steps I took to fix the issue:
1) Take a full backup of the database.
2) Change the database recovery model from FULL to SIMPLE (breaking the log backup chain and removing the requirement that the damaged portion of the log must be backed up).
3) Run CHECKPOINT.
4) Again change the recovery model from SIMPLE back to FULL and take a fresh full backup to restart the log backup chain.
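Expressed as T-SQL, a minimal sketch of those steps (the database name and backup paths below are placeholders):

USE master
GO
-- Placeholder database name and backup path.
BACKUP DATABASE [DatabaseName]
TO DISK = N'E:\Backups\DatabaseName_before_fix.bak'
GO
-- Break the log backup chain so the damaged portion of the log no longer has to be backed up.
ALTER DATABASE [DatabaseName] SET RECOVERY SIMPLE
GO
USE [DatabaseName]
GO
CHECKPOINT
GO
USE master
GO
-- Re-establish FULL recovery and restart the log backup chain with a fresh full backup.
ALTER DATABASE [DatabaseName] SET RECOVERY FULL
GO
BACKUP DATABASE [DatabaseName]
TO DISK = N'E:\Backups\DatabaseName_after_fix.bak'
GO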

How to find which session is causing a lock

By running the query below we can find which session is causing the locking.

SELECT lok.resource_type
      ,lok.resource_subtype
      ,DB_NAME(lok.resource_database_id)
      ,lok.resource_description
      ,lok.resource_associated_entity_id
      ,lok.resource_lock_partition
      ,lok.request_mode
      ,lok.request_type
      ,lok.request_status
      ,lok.request_owner_type
      ,lok.request_owner_id
      ,lok.lock_owner_address
      ,wat.waiting_task_address
      ,wat.session_id
      ,wat.exec_context_id
      ,wat.wait_duration_ms
      ,wat.wait_type
      ,wat.resource_address
      ,wat.blocking_task_address
      ,wat.blocking_session_id
      ,wat.blocking_exec_context_id
      ,wat.resource_description
FROM sys.dm_tran_locks lok
JOIN sys.dm_os_waiting_tasks wat
  ON lok.lock_owner_address = wat.resource_address
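The query above returns the blocking_session_id; as a small follow-up sketch (not part of the original post, and the session id below is hypothetical), the statement that session is running can be pulled from the plan cache:

-- Given a blocking_session_id returned above, show what that session is running.
SELECT r.session_id,
       r.status,
       r.wait_type,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id = 57   -- hypothetical blocking session id
-- If the blocker is idle with an open transaction, it will not appear in
-- sys.dm_exec_requests; check sys.dm_exec_connections.most_recent_sql_handle instead.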

How to change the server name of SQL Server:

If you are trying to change the name of the server in a production environment, review the steps below first. Check whether Replication, Log Shipping, or Mirroring is configured; if so, be cautious and disable them before running this script. Also ensure that you have a backup of all the databases available. If you are renaming the "Default Instance", you can run the command below.
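The command itself is cut off in this excerpt; for a default instance the usual pattern with sp_dropserver and sp_addserver (the server names below are placeholders) is:

-- Placeholder names; replace with the current and new server names.
EXEC sp_dropserver 'OldServerName'
GO
EXEC sp_addserver 'NewServerName', 'local'
GO
-- Restart the SQL Server service, then verify the new name:
SELECT @@SERVERNAME, SERVERPROPERTY('ServerName')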

DIFFERENT ISOLATION LEVELS AND THEIR BEHAVIOUR.

READ UNCOMMITTED is the least restrictive isolation level because it ignores locks placed by other transactions. Transactions executing under READ UNCOMMITTED can read modified data values that have not yet been committed by other transactions; these are called "dirty" reads.

READ COMMITTED is the default isolation level for SQL Server. It prevents dirty reads by specifying that statements cannot read data values that have been modified but not yet committed by other transactions. Other transactions can still modify, insert, or delete data between executions of individual statements within the current transaction, resulting in non-repeatable reads, or "phantom" data.

REPEATABLE READ is a more restrictive isolation level than READ COMMITTED. It encompasses READ COMMITTED and additionally specifies that no other transactions can modify or delete data that has been read by the current transaction until the current transaction commits. Concurrency is lower than under READ COMMITTED because the shared locks taken on read data are held for the duration of the transaction.
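As a quick illustration, using the TblNumbers table from the earlier post, the isolation level is set per session before the transaction begins:

-- Under REPEATABLE READ the shared locks taken by the first SELECT are held
-- until COMMIT, so another session cannot UPDATE or DELETE those rows in
-- between, and the second read returns the same values.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
    SELECT Num FROM dbo.TblNumbers WHERE ID = 100
    WAITFOR DELAY '00:00:10'   -- simulate other work in the transaction
    SELECT Num FROM dbo.TblNumbers WHERE ID = 100
COMMIT TRANSACTION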