
Monday 29 February 2016

Logshipping in SQL Server

Log shipping is one of the disaster recovery solutions. It copies transactions from a 'primary database' to one or more 'secondary databases' by taking transaction log backups, copying them, and restoring them to each destination database individually. If the primary database becomes unavailable, any of the secondary databases can be brought online manually.

To work with log shipping we require three servers, i.e. a primary server, a secondary server and an optional monitor server.

1. Primary Server : 

    * The primary server holds the original copy of the database.
    * Log shipping configuration and administration are done from the primary server.
    * On the primary server the database is in the online state, where applications and users are connected.
    * On the primary server the database is called the primary database, and its recovery model must be Full or Bulk-Logged.

2. Secondary Server :

    * The server which maintains the copy of the original database is called the secondary server.
    * We can maintain multiple secondary servers in a log shipping configuration.
    * On the secondary server the database must be in either the restoring or the standby state.
    * Users can have read-only access if the secondary database is configured with the Standby option.
    * The database recovery model must be Full or Bulk-Logged.

3. Monitor Server :

    * The monitor server is an optional server which records the status of the backup, copy and restore operations and raises alerts if any job fails.

Advantages :

* To reduce downtime for applications.
* To implement load balancing, i.e. we can allow reads from the secondary server and data modifications on the primary server.

Disadvantages :

* There is no automatic failover, i.e. if the primary server fails the secondary server will not come online automatically.
* There might be data loss if the transaction log is damaged.
* Changes are not applied to the secondary server immediately; they arrive only with each restored log backup.

Once you configure log shipping, the jobs below are created and handled by SQL Server Agent.

1. Backup Job :

  * The backup job takes transaction log backups of the primary database and places them in the shared folder.
  * It is created and executed on the primary server.

2. Copy Job :

  * The copy job copies the transaction log backups from the shared folder to the secondary server folder.
  * The copy job is created and executed on the secondary server.

3. Restore Job :

    * The restore job restores the transaction log backups onto the secondary database.
    * The restore job is created and executed on the secondary server.

4. Alert Job :

  * The alert job raises alerts if any backup, copy or restore job fails.
  * The alert job is created and executed on the monitor server.

Prerequisites :

* SQL Server 2005 Standard Edition, Workgroup Edition or Enterprise Edition must be installed on all server instances involved in log shipping.
* All servers should have the same case-sensitivity settings.
* The database recovery model must be either Full or Bulk-Logged (a quick T-SQL check is sketched after this list).
* To configure log shipping we must have sysadmin rights on each server instance.
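
As a minimal sketch of that check (the database name 'Sales' below is illustrative):

-- verify the recovery model of the database to be log-shipped
SELECT name , recovery_model_desc
FROM sys.databases
WHERE name = 'Sales' ;

-- switch to full recovery if required
ALTER DATABASE Sales SET RECOVERY FULL ;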

Configuring Log-shipping :

* Choose the primary server, secondary server and optional monitor server.
* Create a folder to hold the backup files of the primary database and grant read-write permission on this folder to the account under which the SQL Server services run.
* Create one more folder on the secondary server to copy the backup files from the primary, and share this folder with read-write permission for the proxy account on the secondary server.
* Go to the primary server --> right-click on the required database --> Tasks --> Ship Transaction Log.
* Select the check box "Enable this as a Primary Database".
* Click on Backup Settings --> enter the backup folder path, e.g. \\SystemName\BackupFolderName
* Click on Schedule and change the time to five minutes --> OK.
* Under secondary databases, click on the Add button.
* Select the secondary server instance and click on the Connect button.
* Select the option "No, the secondary database is initialized".
* Click on the Copy Files tab --> enter the destination folder, e.g. \\SystemName\CopyFolderName
* Click on Schedule and change the time to five minutes.
* Click on the Restore Transaction Log tab.
* Select the "Standby mode" option and the "Disconnect users" check box.
* Click on Schedule, change the time to five minutes, then OK --> OK.
* Under the monitor server instance, select the option to use a monitor server instance.
* Click on Settings --> click on Connect --> select the required monitor server instance name.
* Click on OK --> OK.

Observations :

* Go to the primary server --> SQL Server Agent --> Jobs --> view the backup job, named after the backup folder, which was created for the primary database.

* Three alerts are created in the Alerts folder with the names:
   
                       * Log Shipping Primary Server Alert
                       * Log Shipping Secondary Server Alert
                       * Log Shipping Monitor Server Alert.
* On the secondary server verify that 2 jobs are created.

                       * Copy (LS_Copy_Sales)
                       * Restore (LS_Restore_Sales)
* All the above jobs (backup, copy & restore) use the sqllogship.exe utility.
* The alert job calls sys.sp_check_log_shipping_monitor_alert.
* sqlmaint.exe is responsible for updating the backup, copy & restore information on the monitor server.

Monitoring Log-shipping :

We can monitor log shipping in 3 ways, i.e.

                                          1. Using MSDB Tables.
                                          2. Using Log Shipping Status Report
                                          3. Using Job History

1. Using MSDB Tables and Views :

Go to Primary Server --> MSDB

1. Log_Shipping_Primary_databases

   It contains information about the log backup folder path and the last log backup file name.

2. Log_Shipping_Primary_Secondaries

   It contains information about the secondary server name and database name.

 Go to secondary server --> MSDB

1. Log_Shipping_Secondary

  It contains information about the copy folder and the last copied file.

2. Log_Shipping_Secondary_databases

 It contains information about the last restored file.
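
As a sketch of how these tables can be queried (these are standard msdb log shipping tables in SQL Server 2005 and later; run each query on the server indicated):

-- on the primary : last log backup taken
SELECT primary_database , last_backup_file , last_backup_date
FROM msdb.dbo.log_shipping_primary_databases ;

-- on the secondary : last file restored
SELECT secondary_database , last_restored_file , last_restored_date
FROM msdb.dbo.log_shipping_secondary_databases ;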



Performing Fail Over :

Failover is the process of making a secondary database available to applications or users when the primary server or database fails.

The log shipping failover process is not automatic.

1. Perform one last transaction log backup on the primary server, if possible.
2. Disable the log shipping jobs.
3. Restore any backups that have not yet been applied on the secondary server WITH NORECOVERY.
4. Restore the tail-log backup WITH RECOVERY (steps 1 to 4 are sketched below).
5. Remove the log shipping configuration from the primary server.
6. Make one of the secondary databases available to users by bringing it online with RESTORE DATABASE <database_name> WITH RECOVERY.
7. Right-click on the primary database and generate scripts for users and logins.
8. Then move the script to the secondary server to create the users and logins.
9. Reconfigure log shipping from the new primary server, i.e. the former secondary server.
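
A minimal sketch of steps 1 to 4 (the database name 'Sales' and the file paths are illustrative):

-- on the primary , if it is still accessible : tail-log backup , leaving the database in the restoring state
BACKUP LOG Sales TO DISK = N'\\PrimarySrv\Backup\Sales_tail.trn' WITH NORECOVERY ;

-- on the secondary : apply any pending log backups , then the tail log
RESTORE LOG Sales FROM DISK = N'\\SecondarySrv\Copy\Sales_log1.trn' WITH NORECOVERY ;
RESTORE LOG Sales FROM DISK = N'\\SecondarySrv\Copy\Sales_tail.trn' WITH RECOVERY ;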

Thursday 18 February 2016

SQL Server Installation Quiz Notes

1. SQL Server 7.0 can be installed as a default instance only, i.e. you cannot install named instances, whereas SQL Server 2000 and later versions can be installed as either a default or a named instance.

2. The Local System account is the default SQL Server services login account when you run an unattended installation.

3. If you are running multiple SQL Server 2000 instances in your network, what should you choose when installing another instance of SQL Server? Collation.

4. SQL Server 2000 collations are similar to character sets in previous versions of SQL Server.

5. setup.iss is an installation initialization file.

6. SQL Server 2000 can have up to 16 simultaneous instances.

7. You can run SQL Server Setup from the SQL Server 2000 installation CD and, from the Advanced menu, generate a batch file to run an unattended installation.

8. SQL Server 2000 doesn't install Microsoft Search on a Windows 9x PC.

9. The SQL Server default sort order is dictionary order, case-insensitive.

10. The default installation path of a named instance is Program Files\Microsoft SQL Server\MSSQL$InstanceName

11. When installing SQL Server 2000 you should change the default collation or sort order in the following situations:

 Using a language besides English
 You are matching an older version
 Using an application that requires you to do so

12. How do you test that SQL Server has installed successfully?

You can test either by running a query against a test database, or by opening Enterprise Manager and drilling down into one of the test databases.

13. The default user account will also be the SQL Server Agent and server login.

14. During installation you should select the sort order

15. You installed SQL Server 2000 with the server name test2ksql and now wish to test the installation using OSQL. How will you do it?
 Run this from the command prompt : osql /Usa /Ppassword /Stest2ksql

Wednesday 17 February 2016

SQL Server interview questions

What is Telnet ? Why do we need it ?

Telnet is a network protocol that allows a user on one computer to log on to another computer that is part of the same network.

What is Port Number ? Why do we need to open Port Number ?

Port numbers are associated with TCP/IP network connections and allow different applications on the same computer to share network resources simultaneously.

What is SQL Server browser ?

SQL Server Browser runs as a Windows service. It listens for incoming requests for SQL Server resources and provides information about the SQL Server instances installed on the computer.
If the Browser service is not running, a connection to a named instance that does not specify a port number will fail.


Tuesday 16 February 2016

SQL Query interview questions and answers

1. A column in the Numbers table has some negative and some positive values. The requirement is to find the sum of the negative numbers and the sum of the positive numbers in 2 separate columns. How can you do it?

Ans :

Select
sum( case when num < 0 then num else 0 end ) as Sum_Negative ,
sum( case when num > 0 then num else 0 end ) as Sum_Positive
from Numbers

2. Display count of employees department wise .

Ans : Select dname , count(emp_id) from employee group by dname

3. Display total salary department wise where more than two employees exists ?

Ans : Select dept_no , sum(sal) as Total_Sal from employee group by dept_no having count(emp_no) >2

4. Display the count of managers and the count of salaries in the Employee table ?

Ans : Select count(manager) , count(sal) from employee

5. Delete employee data from employee who got incentive ?

Ans : Delete from Employee where employee_id in (Select employee_ref_id from Incentives)

6.  Display department wise average salary from employee table order by salary ascending ?

Ans : Select department , avg(salary) as avg_salary from Employee group by department order by avg_salary asc

7. Get department wise no of employees in a department , total salary with respect to a department from employee table order by total salary descending ?

Ans : Select department , count(first_name),sum(salary) Total_Salary from Employee group by department order by Total_Salary desc

8. Get employee details from the employee table who joined before March 1st 2013.

Ans : Select * from employee where joining_date < '20130301'

9. Get employee details from the employee table whose first name starts with 'J' and whose name contains 4 letters.

Ans : Select * from employee where first_name like 'J___' (J followed by three underscores)

10. Get employee details from employee table whose salary between 300000 and 500000 .

Ans : Select * from employee where salary between 300000 and 500000

11.  Get Employee ID's of those employees who didn't receive incentives .

Ans :

Select employee_id from Employee
Except
select employee_id from Incentives

12.  Get first name from employee table after removing white spaces from right side.

Ans : Select RTRIM(first_name) from Employee

13. Get length of first_name from employee table ?

Ans : Select len(first_name) from Employee

14.  How to fetch data that are common in 2 query results ?

Ans :

Select * from Employee
intersect
select * from employee1

15. If there are two tables employee1 and employee2 and both have records. How can I fetch all the records but common records only once ?

Ans :

Select * from Employee1
Union
Select * from Employee2

16. Display all records from Employee table where deptno=30 and sal>1500.

Ans :

Select * from employee where deptno=30 and sal>1500

17. Select department , total salary with respect to a department from employee table where total salary greater than 900000 order by total_salary descending .

Ans :

Select department , sum(salary) as total_salary from Employee group by department having sum(salary) > 900000 order by total_salary desc

18. Select employee details from employee table if data exists in tab2 table ?

Ans : Select * from Employee where exists ( Select * from tab2);

19. Select first_name , incentive amount from employee and incentives table for those employees who have incentives and incentive amount greater than 4000.

Ans : Select first_name , incentive_amount from employee a inner join incentives b on a.employee_id = b.employee_ref_id and b.incentive_amount > 4000

20. SQL query to find the max salary from each department ?

Ans : Select DeptID , Max(salary) from Employee group by DeptID

21. Suppose annual salary information is provided in the Employee table. How do you fetch the monthly salary of each and every employee ?

Ans : Select ename , sal/12 as MonthlySal from Employee

22. There is a table which contains 2 columns, Student and Marks. You need to find all the students whose marks are greater than the average marks, i.e. the list of above-average students ?

Ans : Select student , marks from table where marks > ( select avg(marks) from table )

23. Write a query to find the products which don't have any sales ?

Ans : Select P.product_name from Products P left join Sales S on (P.product_id = S.product_id) where S.quantity is null

Sunday 14 February 2016

Configuring antivirus software for protecting Microsoft SQL Server

This applies to MSSQL 7.0, 2000, 2005 and 2008 on Windows NT Server 4.0, Windows 2000 Server and Windows Server 2003.

Antivirus activity can cause performance degradation and data corruption.

The following are some of the possible issues.

1. Scanning SQL Server files can potentially cause high disk utilization.
2. High CPU spikes or high CPU usage.
3. In a cluster environment you may get resource failure problems when you try to move a group to a different node.
4. Access denied to SQL Server files.
5. Stop errors on Windows Server.

To overcome these issues you need to add exclusions to virus scanning.

Exclude the Windows paging file (pagefile.sys) and files such as edb*.log, res1.log, res2.log, edb.chk and tmp.edb.
Exclude the spool directory, i.e. %systemroot%\System32\Spool, and its subfolders.
Exclude *.mdf, *.ldf and *.ndf files from virus scanning activity.
Exclude database backup and transaction log backup files.
Exclude the SQL Server directories, e.g. \Program Files\Microsoft SQL Server.
Exclude the folders related to Windows Update, i.e. %systemroot%\SoftwareDistribution\Datastore and its subfolders.

Additional exclusions on a cluster are :

Exclude the quorum drive.
Exclude the \MSDTC directory on the MSDTC share drive.
Exclude the %systemroot%\Cluster directory.
Exclude the temp folder for the Cluster service account.

Friday 12 February 2016

SQL DBA : How many ways can you make a table read-only

There are many cases where a table needs to be read-only in order to preserve the existing data.

Below are a few possibilities that I know of (options 2 and 3 are sketched after the list):

1. Put the table on Read-Only filegroup.

2. Deny Permission.

3. Make the database Read-Only.

4. With the help of triggers.
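
Sketches of options 2 and 3 (the table, database and user names below are illustrative):

-- option 2 : deny data modification on one table to a user or role
DENY INSERT , UPDATE , DELETE ON dbo.Sales TO ReportUser ;

-- option 3 : make the whole database read-only
ALTER DATABASE SalesDB SET READ_ONLY WITH NO_WAIT ;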

SQL DBA : Mirrored databases disconnect and goes in - recovery state after a server restart

I ran into this situation when my production server was restarted.

To fix this issue, I went through the error log to find the exact reason and identified that the connection between the principal and the mirror was not established properly. A mirrored database with a big log file, combined with a server restart, caused the "In Recovery" state of the mirrored database.

To fix this error I had to restart the database mirroring endpoint.

Step 1 : Stop the endpoint

ALTER ENDPOINT <endpoint_name> STATE = STOPPED

Step 2 : Start the endpoint

ALTER ENDPOINT <endpoint_name> STATE = STARTED

Wednesday 10 February 2016

SQL Server DBA Interview question and answers Part 5

1. How SQL Server finds the modified extents for taking differential backups ?

All modifications after a full backup are recorded on Differential Changed Map (DCM) pages, i.e. SQL Server finds the modified extents by using the DCM.

2.  What is meant by backup retention period ?

The backup retention period means how many days of backups are maintained on disk/tape. After the retention period, the existing old backups are deleted.

3. What are the main differences between disk and tape ?

Disk : Disk is fast but has a shorter life. It costs more and is mainly used for online backups.

Tape : Tape is slow but has a longer life. It costs less and is mainly used for offline backups.

4. What is the use of the LiteSpeed third-party backup tool ?

LiteSpeed 5.0 was introduced by Quest Software. You get the following benefits with LiteSpeed:

1. Faster than native SQL Server backup.
2. Backup size can be reduced by up to 60%.
3. Backups can be secured with encryption.
4. Reduces the load on SQL Server.

5. In which edition is online restore possible ?

Online restore is possible only in Enterprise Edition.

6. What is point-in-time restore and use of that ?

Point-in-time restore is used to restore a database to a particular time just before a failure occurred or before data was corrupted.
The STOPAT option is used to restore a database to a specific time.
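
A minimal sketch (the database name, path and time are illustrative), assuming the full and any differential backups were already restored WITH NORECOVERY:

RESTORE LOG Sales
FROM DISK = N'D:\Backup\Sales_log.trn'
WITH STOPAT = '2016-02-10 14:30:00' , RECOVERY ;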

7. What is Restore Strategy ?

1. First restore the most recent full backup WITH NORECOVERY.
2. Then restore the last differential backup WITH NORECOVERY.
3. Then restore all transaction log backups taken since the most recent full/differential backup, the last one WITH RECOVERY, as sketched below.
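
A minimal sketch of the sequence (file names are illustrative):

RESTORE DATABASE Sales FROM DISK = N'D:\Backup\Sales_full.bak' WITH NORECOVERY ;
RESTORE DATABASE Sales FROM DISK = N'D:\Backup\Sales_diff.bak' WITH NORECOVERY ;
RESTORE LOG Sales FROM DISK = N'D:\Backup\Sales_log1.trn' WITH NORECOVERY ;
RESTORE LOG Sales FROM DISK = N'D:\Backup\Sales_log2.trn' WITH RECOVERY ;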

8. What are permissions required for Log shipping ?

We must have sysadmin permission on each server instance to configure Log shipping .

9. In log shipping which recovery models can we use ?

We can use either the Full or Bulk-Logged recovery model for log shipping.

10. What are the errors that occur in Log shipping ?

Two alert errors are associated with log shipping:

1. 14420 : raised when the backup threshold is exceeded, i.e. log backups on the primary are not completing on schedule.
2. 14421 : raised when the restore threshold is exceeded, i.e. restores on the secondary are falling behind.

11. How to failover secondary server , when the primary server fails in Log shipping ?

If the primary server becomes unavailable, do the following steps.

1. Perform one last transaction log backup on the primary server, if possible.
2. Disable the log shipping jobs.
3. Restore any backups that have not yet been applied on the secondary server WITH NORECOVERY.
4. Restore the tail-log backup WITH RECOVERY.
5. Remove the log shipping configuration from the primary server.
6. Make one of the secondary databases available to users by bringing it online with RESTORE DATABASE <database_name> WITH RECOVERY.
7. Right-click on the primary database and generate scripts for users and logins.
8. Then move the script to the secondary server to create the users and logins.
9. Reconfigure log shipping from the new primary server, i.e. the former secondary server.

12. What is Lock escalation ?

Lock escalation is the process of converting many low-level locks, like row locks and page locks, into higher-level locks, like table locks.

SQL Server DBA Interview question and answers Part 4


1. What is purpose of Registered Servers ?

Registered Servers are used to store server connection information for future connections.

2. How to enable multiple processors to SQL Server ?

Right Click on SQL Server Instance --> Properties --> Processors --> Enable Processors

3. What is Linked Server ? How to connect Linked Server ? How to test Linked Server ?

A linked server lets one server execute queries against OLE DB data sources on a remote server.

OLEDB Provider :

An OLE DB provider is a DLL that manages and interacts with a specific data source such as SQL Server 7.0, Access, Excel, ODBC, Oracle, DB2, the local file system, Exchange Server etc.
SQL Native Client (PROGID : SQLNCLI) is the official OLE DB provider for SQL Server.

To connect Linked Server using OLEDB Provider :

sp_addlinkedserver @server = 'servername' , @srvproduct = 'SQL Server / Oracle'

How to test the connection to a linked server :

sp_testlinkedserver <ServerName>
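
For example (the server and object names below are illustrative), a linked server to another SQL Server instance and a four-part-name query against it:

-- 'REMOTESRV' must be the network name of the remote SQL Server instance
EXEC sp_addlinkedserver @server = N'REMOTESRV' , @srvproduct = N'SQL Server' ;
EXEC sp_testlinkedserver N'REMOTESRV' ;
SELECT * FROM REMOTESRV.SalesDB.dbo.Orders ;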

4. What is Fill Factor ? How to assign Fill Factor ?

A fill factor reserves free space on each leaf-level page, which is used for future growth of data or indexes in a table and reduces page splits.

Assign Fill Factor :

Right-click on the server --> Properties --> Database Settings --> Default index fill factor --> provide a value.

The recommended fill factor value is 70-80%.

5. What are the protocols used in networking ? What is the default port number of TCP/IP ? Can we change the port number ?

The protocols used in networking are TCP/IP, Named Pipes, VIA and Shared Memory.
The default TCP/IP port number is 1433. We can change the port number according to company requirements.

6. What is Max Degree of Parallelism ? When is it set up ?

Max degree of parallelism (MAXDOP) limits the number of processors used to execute a single query with a parallel plan. The default of 0 lets SQL Server use all available processors. It is typically lowered when parallel queries starve other workloads, and it can also be set per query with the OPTION (MAXDOP n) hint.
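
As a sketch, the instance-level setting via sp_configure (the value 4 is illustrative):

EXEC sp_configure 'show advanced options' , 1 ; RECONFIGURE ;
EXEC sp_configure 'max degree of parallelism' , 4 ; RECONFIGURE ;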

7. What is AWE (Address Windowing Extensions) ? How do you configure dynamic memory allocation ?

Generally SQL Server reserves only a small portion of memory. Sometimes the reserved memory is not sufficient and SQL Server requires additional memory. AWE is used to support very large amounts of physical memory. When AWE is enabled, the operating system dynamically allocates memory to SQL Server based on the min server memory and max server memory settings.

Windows Server 2003 Standard Edition supports up to 4 GB, Enterprise Edition up to 32 GB and Datacenter Edition up to 64 GB.

To configure AWE : right-click on the instance --> Properties --> Memory --> Enable AWE --> set min memory and max memory etc.

Note : AWE is not needed and cannot be configured on a 64-bit operating system.

8. In which environments can we use the simple recovery model for databases ?

The simple recovery model is useful for development, test and data warehouse environments, or for databases that do not change often.

9. In which recovery models is point-in-time recovery possible ?

Point-in-time recovery is possible only in the Full and Bulk-Logged recovery models, but under Bulk-Logged it may or may not be possible: bulk operations are only minimally logged, so we cannot stop at a point in time inside a log backup that contains them.

10. Why can't we take transaction log backups in the simple recovery model ?

In the simple recovery model the transaction log is truncated automatically, so there is no unbroken log chain to back up.

11. What is the use of recovery models ?

Recovery models are designed to control transaction log maintenance .

12. What is Tail Log backup ?

A tail-log backup is the last log backup, used to recover the database to the point of failure. It is supported only in the Full and Bulk-Logged recovery models.

Situation to perform a tail-log backup :

Assume the 4 PM log backup has been performed and the 5 PM log backup is yet to run. If the database crashes at 4:30 PM, 30 minutes of data may be lost; we take a tail-log backup to recover those 30 minutes of data.

13. Can we take differential / transaction log backups without a full backup ?

No, it's not possible, because the full backup is the base for differential and transaction log backups.

14. What is the use of a copy-only backup ?

Copy-only backup is a new feature in SQL Server 2005; it does not affect the overall backup and restore procedure. A copy-only backup does not affect the log sequence or the differential base.
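
A minimal sketch (the database name and path are illustrative):

-- an out-of-band full backup that does not disturb the differential base
BACKUP DATABASE Sales TO DISK = N'D:\Backup\Sales_copyonly.bak' WITH COPY_ONLY ;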

15. How to automate backups on schedule basis ?

We can use maintenance plans or any third-party tool to take backups on a schedule.

SQL Server DBA Interview question and answers Part 3

1. What is a transaction ? What are the transaction (ACID) properties ?

      A transaction is a logical unit of work containing one or more database operations. A valid transaction must satisfy the ACID (Atomicity, Consistency, Isolation, Durability) properties.

a. Atomicity :

 A transaction must be an atomic unit of work, i.e. either all of its data modifications are performed or none of them is.

b. Consistency :

Consistency means the database must be in a consistent state before a transaction begins and must be left in a consistent state when it completes.

c. Isolation :

Isolation means that the effects of each transaction are the same as if it were the only transaction in the system, i.e. a transaction either sees data in the state it was in before another concurrent transaction modified it, or sees the data after the second transaction has completed.

d. Durability :

Durability means that once a transaction is committed, its effects remain permanently in the database even in the event of a system failure.

2. What is the purpose of LSN numbers ?

Every record in the transaction log is uniquely identified by a log sequence number (LSN). LSNs are used internally during a restore sequence to track the point in time to which the data has been restored.

3. Explain about RAID and RAID levels ?

RAID stands for Redundant Array of Independent Disks. RAID is a disk system that uses arrays to provide greater performance, reliability and storage capacity at lower cost. Typically RAID levels 0, 1, 5 and 10 are used with SQL Server.

To know more about RAID levels please refer to the link below from my blog.

http://sqlandsai.blogspot.in/2016/02/raid-levels.html

4. How do you set SQL Server in single-user mode and minimal mode ?

a. Single-User Mode :

Only the admin can access the server, i.e. other users cannot connect.

Start --> Run --> Command Prompt --> sqlservr.exe -m

b. Minimal Mode :

The server starts with a minimal configuration (this also puts it in single-user mode).

Start --> Run --> Command Prompt --> sqlservr.exe -f

5. What is meant by Orphan Users ? How to find Orphan Users and Resolve them ?

When a database is moved, attached & detached, or restored to a different instance, or the corresponding login is dropped, the users of the database become orphaned users. They can't access the database because the database users are not mapped to a SID (Security Identifier) in the new instance.

To detect orphaned users :

sp_change_users_login 'Report'

To resolve orphaned users :

sp_change_users_login 'update_one' , 'db_user','login_name'
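
On SQL Server 2005 SP2 and later, the same remapping can also be done with ALTER USER, the recommended replacement for sp_change_users_login (the user and login names here are placeholders from the line above):

ALTER USER db_user WITH LOGIN = login_name ;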

6. What is the maximum row size of a database ?

The maximum row size is 8,060 bytes.

7. What is Heap table ?

A table without a clustered index is called a heap table.

8. In how many ways can you move data or databases between servers and databases ?

SSIS , DTS , Attach & Detach , Bulk copy program , Bulk Insert & Import and Export wizard .

9. How to find database ID ?

Select db_id('db_name')

10. How to find out the database name using the database ID ?

select db_name(<database_id>)

11. How to find out Logins ? Current Users ?

To find out Logins :

Select * from sys.syslogins

To find out Current Users :

Select current_user

12. Is it possible to take backups and restores of system databases ?

All the system databases except tempdb can be backed up.

13. What is service account ?

All SQL Server services require a login account to operate. The login account can be one of:

a. Local service account
b. Domain user account
c. Network service account
d. Local system account

Generally, companies prefer the "Domain user account" because it is required when the service account needs to interact with other services on the network.

14. What do you mean by Collation ? what is default Collation ?

Collation is basically the sort order. Based on the collation, SQL Server sorts and compares the data. There are 3 types of sort order, i.e. dictionary case-sensitive, dictionary case-insensitive and binary.

15. How you will find the problems if Installation fails ?

We can analyze the errors by viewing the details available in the 'Summary.txt' file located in the setup bootstrap folder: C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\Log

SQL Backup failed - Error 15015 - A nonrecoverable I/O error occurred.

You might get this error when you try to take a backup in SQL Server using SSMS.

The error looks like the screenshot below.

[Screenshot of the backup failure dialog]

This error occurs due to insufficient space on the backup drive.

You can avoid this error by following any one of the methods below (options 1 and 3 are sketched after the list).

1. Use backup compression.

2. Move or delete the old backups.

3. Use striped backups to split the backup across multiple drives.
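
A sketch of options 1 and 3 together (the paths are illustrative; WITH COMPRESSION requires SQL Server 2008 or later):

BACKUP DATABASE Sales
TO DISK = N'D:\Backup\Sales_1.bak' ,
   DISK = N'E:\Backup\Sales_2.bak'
WITH COMPRESSION ;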

Monday 8 February 2016

SSIS Performance tuning

Performance problems arise due to bottlenecks.

Situations where you need to find a bottleneck:

1. You want to create an optimized package.

2. There is a package which has been running for a long time.

The bottleneck can be at many levels:

1. Package Level

2. Source Level

3. Destination Level

4. Transformation Level

5. Data Flow Task Level

6. System Level

We can identify bottlenecks by using progress tab information or by using log providers.

Package Level Tuning 

1. Implement checkpoints to get better restartability of the components in the package.

2. Disable event handlers : event handlers decrease package performance, so unnecessary event handlers should be removed or disabled.

3. Maximum concurrent executables : increasing the number of executables increases the parallelism of the package so that it executes in less time.

4. Maximum error count : the default of '1' means the package fails on a single error. If you increase the error count, the package doesn't fail until it reaches that count.

Data Flow Task Level Tuning Tips

1. Delay validation (T/F) : True means the validation of a component is delayed until the execution of the other components has finished.

2. Default buffer max rows and size : increase or decrease according to the volume of data being loaded, i.e. for more volume increase the rows and buffer size; for less volume decrease them.

3. Engine threads : the default is 10; with more threads the data flow runs more operations in parallel and uses more processes to finish.

Note : a thread is a part of a process that performs some task. Too many threads for too little data, or too few threads for a large volume of data, both decrease performance.

4. Run in optimized mode : if true, the data flow avoids unnecessary transformations, conversions and similar operations during package execution.

Source Level Tuning

In case of Flat File

1. Try to keep the flat file local to the system.

2. Use the property "FastParse=True" so that the column uses faster, locale-neutral processing routines and avoids unnecessary conversions.

We can find the fast parse option by right-clicking for the Advanced Editor --> Input and Output Properties --> output columns --> FastParse.

If the source is table or view

1. Create an index on the source table so that it retrieves the data faster.

2. Instead of taking a whole table, use a SQL query or SQL command as the data access mode to get only the required columns and rows of data.

3. At the connection manager level click on All and change the packet size to 32767 instead of 0. A packet size of 0 means 4 KB of data is transferred at a time; 32767 means 32 KB of data is transferred at a time.

Destination Level Tuning Tips 

1. In the case of a flat file, try to keep the file local to the system.

2. In the case of a relational DB table or view, use the data access mode "SQL command" to load only the required data.

3. Use the data access mode "fast load" to load the data much faster.

4. If the table contains constraints, indexes or triggers then loading will be slow, so disable or drop them and, once the loading has finished, recreate or enable them.

To implement this there are several ways. One way is event handling :

On the pre-execute of the data flow, disable / drop the objects.
On the post-execute of the data flow, enable / recreate them.

Transformation Level Tuning Tips

We need to identify whether the transformation is synchronous or asynchronous.

An asynchronous transformation always takes more time to run than a synchronous one.

Synchronous Transformation 

Here rows are processed as they arrive from the source, e.g. Derived Column, Data Conversion etc.

A synchronous transformation occupies less memory and processes fewer rows at a time.

Asynchronous Transformation

An asynchronous transformation doesn't perform any operation until it has collected the rows it needs. It takes more memory. These transformations block either partially or fully, so they are also called partially blocking or fully blocking transformations, e.g. Sort, Merge, Aggregate etc.

If there are no bottlenecks at any of the above levels, then the bottleneck is at the system level.


I hope the article was useful! It won't stop here; keep in touch and feel free to ask any doubts related to this topic.

SSRS Performance Tuning


Below are the key points to consider for better reporting performance.

We can identify sluggish (slow-running) reports from the execution log view in the report server database.

From the execution log view we can identify where the report server spends most of its time, i.e. data retrieval, report processing or rendering.

Data Retrieval Time :

1. The performance of some reports is significantly improved by querying from an OLAP cube.

2. The T-SQL query used in the report needs to be tuned and optimized for better performance.

3. If real-time data is not desired then create snapshots.

4. Queries bringing back large data sets shouldn't be run for interactive report viewing but should be delivered via subscriptions.

5. Don't retrieve unnecessary data.

6. Increase the query or report timeout.

High Report Processing Time :

1. Avoid sorting and grouping in the reports.

2. Data which is already grouped and sorted in the SQL database shouldn't be grouped and sorted again at the report layer.

3. Add page breaks to large reports to take advantage of the on-demand report processing introduced in SQL Server 2008.

4. Avoid using the Globals!TotalPages function in any expression in the reports.

5. Avoid images and charts nested inside a tablix.

6. Use drill-through reports, i.e. summary report --> detail report, wherever feasible, since this reduces the data set significantly.

7. Avoid blank pages.

RS Scaleout Deployment :

  For large reports, avoid running them interactively and schedule them as email subscriptions, which can be run on a separate server dedicated to processing subscription jobs.

Reduce N/W Latency :

1. The interactive report server and the B2B production database can be moved to a single server to reduce the network latency.

2. Ensure the single server has a much higher configuration, since both Reporting Services and SQL Server are resource-intensive applications.

3. Max server memory should be capped to limit SQL Server memory to 40-50% of the total physical RAM.


I hope the article was useful! It won't stop here; keep in touch and feel free to ask any doubts related to this topic.

SSRS Limitations in Express Edition

Below are the key points to remember before suggesting Express Edition for reports.

1. Only a named instance is supported.

2. Report Builder is not available.

3. No SQL Server Agent is available, so scheduling is not possible.

4. RS will not be able to use more than 1 GB of RAM.

5. Reports can be rendered only in Excel, PDF and image formats.

6. Other features such as caching, snapshots and subscriptions are not available.

7. It cannot be integrated with SharePoint.



I hope the article was useful! It won't stop here; keep in touch and feel free to ask any doubts related to this topic.

Thursday 4 February 2016

SQL Server Isolation Levels

The below isolation levels are available in SQL Server

          1. Read Committed
          2. Read Uncommitted
          3. Repeatable Read
          4. Serializable
          5. Snapshot

1. Read Committed :

Read Committed is the default transaction isolation level in SQL Server .
Under Read Committed, a select query sees only committed values of a table, i.e. if there are open or uncommitted transactions on a table in other sessions, the select query will wait until those transactions complete.

2. Read Uncommitted :

Read Uncommitted is also called dirty read because it doesn't wait for committed values on a table.

Note : 

If you want to keep the "Read Committed" isolation level but need dirty reads for a specific table, then you can use with(nolock) on the required tables in the select query, as sketched below.
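
A minimal sketch (the table name is illustrative):

SELECT * FROM dbo.Orders WITH (NOLOCK) ; -- dirty read on this table only , the session stays at Read Committed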

3. Repeatable Read :

Rows read by a select query inside a "Repeatable Read" transaction cannot be modified by other sessions until the transaction completes, i.e. it still allows new data to be inserted but doesn't allow updates to the rows already read.

4. Serializable :

Serializable is similar to Repeatable Read, but the difference is that it also prevents phantom reads. It works with range locks, i.e. if the table has an index then it locks records based on the index range used in the where clause; if the table doesn't have an index then it locks the complete table.

5. Snapshot :

Snapshot is similar to Serializable, but the difference is that snapshot doesn't hold locks on the table during the transaction, so the table can be modified in other sessions. Snapshot isolation maintains versions of the old data in tempdb when data modifications occur in other sessions, so that existing transactions see the old data from tempdb.
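
A minimal sketch of enabling and using snapshot isolation (the database and table names are illustrative):

ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON ;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT ;
BEGIN TRAN ;
SELECT * FROM dbo.Orders ; -- reads committed row versions from tempdb , takes no shared locks
COMMIT ;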


I hope the article was useful! It won't stop here; keep in touch and feel free to ask any doubts related to this topic.


RAID Levels

In general most organizations use RAID 0, RAID 1, RAID 5 & RAID 10.

RAID 10 is also known as RAID 1 + RAID 0.

RAID stands for Redundant Array of Inexpensive (Independent ) Disks.

RAID 0 (Speed) : Striped blocks, no mirror & no parity

1. To implement RAID 0 we need a minimum of 2 disks.
2. Excellent performance because blocks are striped.
3. No redundancy, hence no mirror and no parity.
4. Not recommended for any critical system.

RAID 1 (Data Protection) : Mirrored blocks, no stripe and no parity

1. To implement RAID 1 we need a minimum of 2 disks.
2. Good performance because there is no striping and no parity.
3. Excellent redundancy because blocks are mirrored.

RAID 5 (Data Protection and Speed) : Striped blocks and distributed parity

1. To implement RAID 5 we need a minimum of 3 disks.
2. Good performance.
3. Good redundancy.
4. Recommended if the DB is heavily read-oriented, because write operations will be slow.
5. Best cost-effective option because it provides both good performance and redundancy.


RAID 10 : Mirrored blocks and striped blocks

1. To implement RAID 10 we need a minimum of 4 disks.
2. Excellent performance because blocks are striped.
3. Excellent redundancy because blocks are mirrored.
4. Highly recommended for critical applications if cost is not a concern.

Points to Remember :

If budget is not a concern, consider RAID 10 for transaction log, data and index files.
If you have budget restrictions then use RAID 5 for data and index files and RAID 1 for transaction log files.




Tuesday 2 February 2016

Replication in SQL Server

Replication :

Replication is the process of copying and distributing data between databases on different servers throughout the enterprise.

Replication is a set of technologies for copying and distributing data and database objects from one database to another and then synchronizing between databases to maintain consistency.

Advantages of Replication

a. Each site maintains its own copy.
b. A failure of the centralized server will not affect the individual sites.
c. It moves the data closer to the users.
d. Users can define their own business logic on their own copies.
e. The number of concurrent users per copy is lower, so there is less locking and blocking.


Types of Replications :

Snapshot Replication :

Snapshot replication is very useful when the source data changes only occasionally (rarely). Snapshot replication takes a picture, or snapshot, of the database and propagates (circulates) it to the subscribers. It reduces the overhead on publishers and subscribers because it doesn't monitor data updates.

Transactional Replication :

Transactional replication starts with a snapshot of the publisher database. With transactional replication, any changes made to the articles are captured from the transaction log and propagated to the distributor continuously and automatically. Using transactional replication we can keep the publisher and subscriber in almost exactly the same state.

Merge Replication :

Merge replication starts with a snapshot of the publisher database. Subsequent data changes and schema modifications made at the publisher and subscribers are tracked with triggers. The subscriber synchronizes with the publisher when connected to the network and exchanges all rows that have changed between the publisher and subscriber since the last synchronization occurred.

Replication Terminologies 

a. Article : 


An article is the data, transactions or stored procedures that are stored within a publication. This is the actual information that is going to be replicated.

b. Publisher :

The server which provides the source data is called the publisher.

c. Publications :

The articles which are replicated are called the publications.

d. Subscriber :

The server which receives the source data from the publisher is called the subscriber.

e. Subscriptions :

The articles which receive the replicated data are called the subscriptions.

f . Distributor :

The distributor is responsible for distributing the data from the publisher to the subscriber.
The distributor maintains the history of the replication process.

Subscription Types :

a. Push Subscriptions :

In push subscriptions the publisher is responsible for pushing the data to the subscriber.
For push subscriptions, the distribution agent and the merge agent run on the distributor.

b. Pull Subscriptions :

In pull subscriptions the subscriber is responsible for pulling the data from the publisher.
For pull subscriptions, the distribution agent and the merge agent run on the subscriber.

Replication Agents :

Replication agents are responsible for implementing the replication process.
Replication agents are .exe files which run as jobs.
We can find the 5 replication agents in the following folder:
Drive:\Program Files\Microsoft SQL Server\90\COM

Types of Agents :

                       1. Snapshot Agent (snapshot.exe)
                       2. Distribution Agent (distrib.exe)
                       3. Log Reader Agent (logread.exe)
                       4. Merge Agent (replmerg.exe)
                       5. Queue Reader Agent (qrdrsvc.exe)

1. Snapshot Agent : (snapshot.exe)

The snapshot agent is responsible for creating the snapshot of the publications.
The snapshot is stored in the repldata folder of that instance on the distributor.
The snapshot is created using the following files:

a. .sch : schema files which contain the articles' structure.
b. .bcp : contains the articles' data.
c. .idx : contains the indexes information.
d. .trg : contains the triggers information.
e. .pre : contains the referential integrity constraints.

2. Distribution Agent : (distrib.exe)

The distribution agent (DA) is responsible for distributing the data from the publisher to the subscriber.
The DA runs on the distributor for push subscriptions and on the subscriber for pull subscriptions.

3. Log Reader Agent : (logread.exe)

The log reader agent (LA) is used in transactional publications.
The LA is responsible for reading the transactions from the publisher so they can be replicated to the subscribers.
The LA runs on the distributor; one is created for each database on which a transactional publication is implemented.

4. Merge Agent : (replmerg.exe)

The merge agent (MA) is used in merge replication.
The MA is responsible for reading the transactions from the publisher and replicating them to the subscriber, and also vice versa.
Like the DA, the MA runs on the distributor for push subscriptions and on the subscriber for pull subscriptions.
One MA is created for each subscription participating in the merge replication.

5. Queue Reader Agent (qrdrsvc.exe )

The queue reader agent (QRA) is used in transactional publications with updatable subscriptions.
The QRA is responsible for reading the transactions from the publisher's log file and keeping them in a queue.
This queue of transactions is then replicated to the subscriber.
The QRA runs on the distributor, and only one is created per instance.

Points to Remember :

Once replication is configured successfully, you should be able to note the points below.

A new system database with the name DISTRIBUTION will be created.
Six jobs will be created under the Agent service.
A new login with the name distributor_admin will be created.
To work with replication the recovery model should be either Full or Bulk-Logged.

I hope the article was useful! It won't stop here; keep in touch and feel free to ask any doubts related to this topic.