Thursday, January 20, 2011

What is Database Mirroring?

In the proposed database mirroring solution, two active-passive Windows clusters will be deployed, one at each location (Primary Site and Secondary Site). The Primary Site will serve as the production site and the Secondary Site as the disaster recovery (DR) site.
The different failure scenarios are as follows:
- In case of a failure at the Primary Site, Node-A would automatically fail over to Node-B within the Primary Site.
- In case of a Node-B failure, Node-C at the Disaster Recovery site will need to be brought online manually.
- In case of a Node-C failure, Node-C will automatically fail over to Node-D.
Asynchronous database mirroring will be implemented to replicate the data between the sites. Failover and failback between the Primary Site and the Secondary Site are to be performed manually.
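As a sketch of the intended configuration (the database name, server names, and endpoint port below are illustrative, not taken from the actual environment), asynchronous, i.e. high-performance, mode is selected by turning transaction safety off after the partnership is established:

```sql
-- On the mirror server (database previously restored WITH NORECOVERY):
-- point the mirror at the principal's mirroring endpoint
ALTER DATABASE AppDB SET PARTNER = 'TCP://primary-node.example.com:5022';

-- On the principal server: point the principal at the mirror
ALTER DATABASE AppDB SET PARTNER = 'TCP://dr-node.example.com:5022';

-- On the principal: switch from the default synchronous (FULL safety)
-- mode to asynchronous (high-performance) mode
ALTER DATABASE AppDB SET SAFETY OFF;
```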
Database Mirroring Overview
Database mirroring is a software technology built into Microsoft SQL Server providing high availability of application databases. Mirroring is implemented on a per-database basis and works only with databases that use the full recovery model. The simple and bulk-logged recovery models do not support database mirroring.
Database mirroring offers a substantive increase in availability over the level previously possible using Microsoft SQL Server and provides an easy-to-manage alternative or supplement to failover clustering or log shipping. Database mirroring works by maintaining a hot standby server. During a typical mirroring session, after a production server fails, client applications can recover quickly by reconnecting to the standby server.
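Because mirroring requires the full recovery model, it is worth verifying this before configuration. A quick check (the database name is illustrative):

```sql
-- Check the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'AppDB';

-- Switch to the full recovery model if necessary
ALTER DATABASE AppDB SET RECOVERY FULL;
```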
Pros
- The database mirroring architecture is more robust and efficient than database log shipping. It can be configured to replicate changes synchronously to minimize data loss.
- Database mirroring has automatic server failover and client failover capabilities.
- Configuration is simpler than log shipping and replication, and network encryption (AES algorithm) is built in.
- Because propagation can be done asynchronously, it requires less bandwidth than synchronous methods (e.g. host-based replication, clustering), and replication distance is not limited geographically with current technology.
- Database mirroring supports full-text catalogs.
- Database mirroring does not require special hardware (such as shared storage or a heartbeat connection) or cluster hardware, and thus potentially has lower infrastructure cost.
Cons
- Because the mirror database is in a constantly restoring state, a database snapshot is required to use the database for reporting.
- There is a risk in using automatic failover together with a SQL Server cluster: in case of a hardware failure on the primary node, failover may occur to the mirror database instead of to the secondary cluster node.
- An additional SQL Server instance may be required for the witness.
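For the reporting limitation above, a read-only snapshot of the mirror can be created along these lines (database name, logical file name, and snapshot path are all hypothetical):

```sql
-- Create a point-in-time, read-only snapshot of the mirror database
CREATE DATABASE AppDB_Report_Snap
ON ( NAME = AppDB_Data,  -- logical name of the mirror's data file
     FILENAME = 'D:\Snapshots\AppDB_Report_Snap.ss' )
AS SNAPSHOT OF AppDB;
```

Reporting queries can then run against AppDB_Report_Snap; the snapshot must be dropped and recreated to refresh it.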

 

Wednesday, January 19, 2011

SQL Server Management Studio (SSMS) – Keyboard Shortcut Keys

Here is a handy list of keyboard shortcut keys for working with SSMS.

Action: SSMS Shortcut Key

Move to the SQL Server Management Studio menu bar: ALT
Activate the menu for a tool component: ALT+HYPHEN
Display the context menu: SHIFT+F10
Display the New File dialog box to create a file: CTRL+N
Display the New Project dialog box to create a new project: CTRL+SHIFT+N
Display the Open File dialog box to open an existing file: CTRL+O
Display the Open Project dialog box to open an existing project: CTRL+SHIFT+O
Display the Add New Item dialog box to add a new file to the current project: CTRL+SHIFT+A
Display the Add Existing Item dialog box to add an existing file to the current project: ALT+SHIFT+A
Display the Query Designer: CTRL+SHIFT+Q
Close a menu or dialog box, canceling the action: ESC
Clear all bookmarks: CTRL+SHIFT+F2
Insert or remove a bookmark (toggle): CTRL+F2
Move to next bookmark: F2
Move to previous bookmark: SHIFT+F2
Cancel a query: ALT+BREAK
Connect: CTRL+O
Disconnect: CTRL+F4
Disconnect and close child window: ALT+F4
Database object information: ALT+F1
Clear the active Editor pane: CTRL+SHIFT+DEL
Comment out code: CTRL+SHIFT+C
Copy: CTRL+C or CTRL+INSERT
Cut: CTRL+X or SHIFT+DEL
Decrease indent: SHIFT+TAB
Delete through the end of a line in the Editor pane: CTRL+DEL
Find: CTRL+F
Go to a line number: CTRL+G
Increase indent: TAB
Make selection lowercase: CTRL+SHIFT+L
Make selection uppercase: CTRL+SHIFT+U
Paste: CTRL+V or SHIFT+INSERT
Remove comments: CTRL+SHIFT+R
Repeat last search or find next: F3
Replace: CTRL+H
Select all: CTRL+A
Undo: CTRL+Z
Execute a query: F5 or CTRL+E
Help for Query Analyzer: F1
Help for the selected Transact-SQL statement: SHIFT+F1
Switch between query and result panes: F6
Switch panes: SHIFT+F6
Window Selector: CTRL+W
New Query window: CTRL+N
Object Browser (show/hide): F8
Object Search: F4
Parse the query and check syntax: CTRL+F5
Print: CTRL+P
Display results in grid format: CTRL+D
Display results in text format: CTRL+T
Move the splitter: CTRL+B
Save results to file: CTRL+SHIFT+F
Show Results pane (toggle): CTRL+R
Save: CTRL+S
Insert a template: CTRL+SHIFT+INSERT
Replace template parameters: CTRL+SHIFT+M
Display estimated execution plan: CTRL+L
Display execution plan (toggle ON/OFF): CTRL+K
Index Tuning Wizard: CTRL+I
Show client statistics: CTRL+SHIFT+S
Show server trace: CTRL+SHIFT+T
Use database: CTRL+U


 


 

Thursday, January 13, 2011

SQL Server Service Account Privileges

SQL Server 2005 Books Online indicates that the SQL Server service account requires permission to start the following related services (among many other requirements): SQL Server Active Directory Helper and SQL Server VSS Writer services. How can I verify service-related permissions for the SQL Server service account?

Strictly speaking, the service account would not need permission to start these services if you were never going to use them. Let's assume for the moment that you are, and that you first want to determine whether the SQL Server service account has these permissions, and more generally what permissions the account has on the services of the machine in question.
A tool called "AccessChk" will allow you to see this type of information, along with lots of other access-related information for given accounts. You can find information on the tool at the following link - http://www.microsoft.com/technet/sysinternals/utilities/AccessChk.mspx.
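For example, to list the effective access that a (hypothetical) service account named CONTOSO\sqlsvc has to every service on the machine, you might run something along these lines (the `-c` switch tells AccessChk to treat the target as Windows services):

```
accesschk.exe "CONTOSO\sqlsvc" -c *
```

The output shows, per service, whether the account has read (R) or write (W) access, which covers the start/stop permissions in question.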

Always use SQL Server Configuration Manager to change the account used by the SQL Server or SQL Server Agent services, or to change the password for the account. In addition to changing the account name, SQL Server Configuration Manager performs additional configuration such as setting permissions in the Windows Registry so that the new account can read the SQL Server settings. Other tools such as the Windows Services Control Manager can change the account name but do not change associated settings. If the service cannot access the SQL Server portion of the registry, the service may not start properly.

 

To start and run, each service in SQL Server must have an account configured during installation. Startup accounts used to start and run SQL Server can be built-in system accounts, local user accounts, or domain user accounts.

Domain User Account

If the service must interact with network services, access domain resources like file shares, or if it uses linked server connections to other computers running SQL Server, you might use a minimally privileged domain account. Many server-to-server activities can be performed only with a domain user account. This account should be pre-created by a domain administrator in your environment.

Local User Account

If the computer is not part of a domain, a local user account without Windows administrator permissions is recommended.

Local Service Account

The Local Service account is a built-in account that has the same level of access to resources and objects as members of the Users group. This limited access helps safeguard the system if individual services or processes are compromised. Services that run as the Local Service account access network resources as a null session without credentials. Be aware that the Local Service account is not supported for the SQL Server or SQL Server Agent services. The actual name of the account is "NT AUTHORITY\LOCAL SERVICE".

Network Service Account

The Network Service account is a built-in account that has more access to resources and objects than members of the Users group. Services that run as the Network Service account access network resources by using the credentials of the computer account. The actual name of the account is "NT AUTHORITY\NETWORK SERVICE".

Local System Account

Local System is a very high-privileged built-in account. It has extensive privileges on the local system and acts as the computer on the network. The actual name of the account is "NT AUTHORITY\SYSTEM".

In addition to having user accounts, every service has three possible startup states that users can control:

  • Disabled   The service is installed but cannot be started until it is re-enabled.

  • Manual   The service is installed, but will start only when another service or application needs its functionality.

  • Automatic   The service is automatically started by the operating system.
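These startup states can be inspected and changed from the command line as well as from the Services console; for instance, with the built-in sc utility (SQLSERVERAGENT is the Agent service name for a default instance; named instances differ):

```
rem Show the current startup type of the SQL Server Agent service
sc qc SQLSERVERAGENT

rem Set it to start automatically (note the required space after "start=")
sc config SQLSERVERAGENT start= auto
```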

Always run SQL Server services with the lowest possible user rights. Use a specific low-privilege user account or domain account instead of a shared account for SQL Server services, and use separate accounts for different SQL Server services. Do not grant additional permissions to the SQL Server service account or the service groups; permissions will be granted through group membership or granted directly to a service SID, where a service SID is supported.

 

Tuesday, January 11, 2011

Log Shipping vs. database mirroring

Database mirroring is functionality in the SQL Server engine that reads from the transaction log and copies transactions from the principal server instance to the mirror server instance.  It can operate synchronously or asynchronously. If configured to operate synchronously, the transaction on the principal will not be committed until it is hardened to disk on the mirror. Database mirroring supports only one mirror for each principal database. Database mirroring also supports automatic failover if the principal database becomes unavailable.  The mirror database is always offline in a recovering state, but you can create snapshots of the mirror database to provide read access for reporting, etc.
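The current role and state of each mirrored database can be checked with a query along these lines:

```sql
-- One row per mirrored database; mirroring_guid is NULL
-- for databases that are not mirrored
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
```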

Log shipping is based on SQL Server Agent jobs that periodically take log backups of the primary database, copy the backup files to one or more secondary server instances, and restore the backups into the secondary database(s). Log shipping supports an unlimited number of secondaries for each primary database. In case of a failover, we have to bring one of the secondaries online manually, which may increase the data recovery time.

Database mirroring is preferable to log shipping in most cases, although log shipping does have the following advantages:

  1. It provides backup files as part of the process.
  2. Multiple secondaries are supported.
  3. A fixed delay can be introduced when applying logs, allowing the secondary to be used for recovering from user error.
  4. It works over low-bandwidth links.
  5. The secondary database can be in standby mode, which can be used for reporting purposes.

 

More information about both technologies is available in SQL Server Books Online in the topics "Understanding Log Shipping" and "Overview of Database Mirroring".

Monday, January 10, 2011

Log Shipping – NORECOVERY vs. STANDBY mode

When you restore a transaction log using the NORECOVERY option, SQL Server does not need to analyze the transaction log and roll back any uncommitted transactions at the end of the restore process.


When you restore a transaction log using the STANDBY option, SQL Server has to undo any uncommitted transactions so that the data files contain only committed transactions. While it does this, it records whatever details necessary to be able to redo the uncommitted transactions. From what I can tell, beyond a certain limit, it simply records the entire contents of the pages that it had to undo to the undo file. Thus, if your transaction log contained uncommitted transactions that affected rows contained in 1000 different pages, the entire contents of those 1000 pages will be stored in the undo file, even though not all the rows in those pages may be affected by the uncommitted transactions.


When you then restore the next transaction log, SQL Server has to first apply the contents of the undo file to the data files, i.e. the 1000 pages above have to be written back to the data files first. Only after that can SQL Server restore the transaction log, and if you are again using the STANDBY option, SQL Server has to again undo the changes made by any uncommitted transactions and record those operations to the undo file. Note that the uncommitted transactions do not just come from the latest transaction log; they include uncommitted transactions from all previous transaction logs that have been restored.
Say that in your conversion process, you made changes to 1000 rows in a table, and the transaction is uncommitted. In your production database, if you viewed the contents of the table in read committed isolation level from a different connection, you will not see the changes to those rows. Actually, you'll be blocked from viewing the contents of the table. However, if you changed the isolation level to read uncommitted, you will see the changes. This is because the data files contain only the new values. If the transaction was now rolled back, SQL Server would record the rollback in the transaction log, essentially an update of the uncommitted 'new' values with the 'old' values (obtained from the transaction log since it is still an active transaction).
Now in your standby database, if you viewed the contents of the table, you will not see the changes, regardless of which transaction isolation level you used. This is because the data pages contain only the old values. The new uncommitted values are stored in the undo file. So basically, there is more work to be performed when using the STANDBY option as compared to the NORECOVERY option.
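The two options differ only in the restore statement itself; a sketch, with illustrative database name and file paths:

```sql
-- NORECOVERY: no undo work is performed; the database
-- remains inaccessible between restores
RESTORE LOG AppDB
FROM DISK = N'D:\LogShip\AppDB_0930.trn'
WITH NORECOVERY;

-- STANDBY: uncommitted work is undone into the specified undo file,
-- leaving the database readable (read-only) between restores
RESTORE LOG AppDB
FROM DISK = N'D:\LogShip\AppDB_0930.trn'
WITH STANDBY = N'D:\LogShip\AppDB_undo.dat';
```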
You mentioned that the undo file grew to 3 GB in some cases. So on the next transaction log restore, your SQL Server instance would need to redo 3 GB worth of modifications/page writes, restore the transaction log (which was probably very fast), then scan through the entire transaction log (the .ldf file), undo all uncommitted transactions again, and generate the undo file containing details of those transactions.
You can log additional details regarding the backup/restore process to your SQL Server log using the following trace flags
DBCC TRACEON (3004, 3605, -1)
but do so at your own risk, as I do not know all the side effects of turning those options on.


Log shipping –Failover

When it's time to fail over to the secondary database, so that it becomes our primary database, we have to do the following:

  1. If possible, back up the transaction log on the primary database.
  2. Apply all outstanding transaction logs to the secondary database after careful consideration. You may not want to restore all of the transactions if the reason you are failing over is that some unwanted transactions were performed on the primary database. In that case, restore to a point in time just before those transactions were committed.
  3. Run a final restore with recovery on the secondary database, e.g.

                               RESTORE DATABASE DatabaseName WITH RECOVERY

  4. Point your users to the new primary database.
  5. Set up log shipping again as part of your disaster recovery plan.
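The steps above can be sketched in T-SQL (the database name and backup file path are hypothetical):

```sql
-- 1. On the primary, if it is still accessible: take a tail-log backup,
--    leaving the primary database in a restoring state
BACKUP LOG SalesDB
TO DISK = N'\\backupshare\SalesDB_tail.trn'
WITH NORECOVERY;

-- 2. On the secondary: apply the tail-log backup (and any other
--    outstanding log backups) without recovering
RESTORE LOG SalesDB
FROM DISK = N'\\backupshare\SalesDB_tail.trn'
WITH NORECOVERY;

-- 3. Recover the secondary so it becomes the new primary
RESTORE DATABASE SalesDB WITH RECOVERY;
```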

You can also use the sp_change_primary_role and sp_change_secondary_role stored procedures to perform the final tasks on the primary and secondary databases respectively. If you have set up a monitor server too, you can use the sp_change_monitor_role stored procedure to update the log shipping records. See the Books Online documentation on these stored procedures for further details.

Thursday, January 6, 2011

Reading SQL Trace Files using fn_trace_gettable function

While working in SQL Server, we often create trace files (.trc) to identify performance bottlenecks. You can create trace files using SQL Profiler, or you can use T-SQL as well. Most of us would go the SQL Profiler way because, simply put, it is very easy and quite elegant; furthermore, it gives you a WYSIWYG environment.

Now suppose somebody has already generated a trace file, and you just have to analyze it. Here too you can do one of two things: use SQL Profiler, or use T-SQL to read the trace file. SQL Profiler is the tool of choice here too, for all the reasons mentioned above. But suppose you don't have access to SQL Profiler (for some strange reason, of course); what do you do?

Well, this is where SQL Server comes to the rescue with a very nice function, "fn_trace_gettable". fn_trace_gettable gives essentially the same information that you would get using SQL Profiler.

The syntax for the fn_trace_gettable function is as follows:

fn_trace_gettable( filename , number_files )

-- read trace file using fn_trace_gettable
SELECT *
FROM fn_trace_gettable('D:\Audit\SampleTrace.trc', DEFAULT)

 

 

The output of fn_trace_gettable is the same as the data columns displayed in SQL Profiler, e.g.

  • TextData
  • BinaryData
  • DatabaseID
  • ObjectName
  • ApplicationName
  • EventClass
  • DatabaseName …

When analyzing for performance bottlenecks, we would generally look at the following:

  • TextData
  • Duration
  • StartTime
  • EndTime
  • Reads
  • Writes
  • EventClass
  • DatabaseName

The true power of reading trace files using fn_trace_gettable is that you can use the full power of SQL. For example, you can order the trace by duration, i.e. list all the queries in order of how long they took to execute, or find out which query is called most often; sometimes a small query called many times can be a performance bottleneck in its own right.
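As a sketch of that kind of aggregation, reusing the sample trace file path from above (note that TextData is an ntext column and must be cast before it can be grouped on; Duration is reported in microseconds on SQL Server 2005 and later):

```sql
-- Most expensive statements by total duration across the trace
SELECT CAST(TextData AS NVARCHAR(4000)) AS QueryText,
       COUNT(*)      AS Executions,
       SUM(Duration) AS TotalDuration
FROM fn_trace_gettable('D:\Audit\SampleTrace.trc', DEFAULT)
WHERE TextData IS NOT NULL
GROUP BY CAST(TextData AS NVARCHAR(4000))
ORDER BY TotalDuration DESC;
```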

When you execute the fn_trace_gettable function and select the EventClass data column, you will notice that it doesn't give you a descriptive name, but rather an integer value. To get the descriptive name you can use the system catalog view "sys.trace_events", which lists all the events that you see in SQL Profiler.

There are various events which are exposed by SQL Server. You can get a full list with the description on MSDN (sp_trace_setevent)

-- Read the trace file and select only a few events; display the descriptive names of the events selected

SELECT TextData ,
        NTUserName ,
        ClientProcessID ,
        ApplicationName ,
        LoginName ,
        SPID ,
        Duration ,
        StartTime ,
        EndTime ,
        ServerName,
        eventClass,
        EventName = (SELECT NAME FROM sys.trace_events e WHERE e.trace_event_id = a.eventClass)
FROM ::fn_trace_gettable('D:\Audit\SampleTrace.trc', DEFAULT) a
WHERE EventClass IN (10,12, 41, 43, 45,46,47)


fn_trace_gettable is very powerful and very flexible. Explore it; it is an excellent tool to have in your arsenal for SQL performance tuning.