
Monday, 16 May 2016

Creating a SQL Server alias

The Problem

I have a laptop with SQL Server Express 2012 installed and configured as a named instance (localhost\SQLEXPRESS2012). I’ve cloned a code repository and want to run the applications it contains, but they are all configured with connection strings that look for a different instance name (localhost\dev_2012).

I could start modifying connection strings, but there are multiple applications and therefore multiple connection strings to modify. I’d prefer to create an alias to the database that matches the one in the configuration files so I don’t need to modify them at all.

The Solution

The solution is to create an alias to the named instance using the SQL Server Configuration Manager.
Open the SQL Server Configuration Manager (Start Menu > All Programs > Microsoft SQL Server 2012 > Configuration Tools).

Check that TCP/IP is enabled for the instance you are creating an alias for. Enable it if it is not.


Once TCP/IP is enabled we can create an alias to the instance for the SQL Native Client. In my case this was for the 32-bit version. Expand the SQL Native Client 11.0 Configuration element and right-click on Aliases. Select New Alias… from the context menu.

Use the new instance name as the Alias Name and set the Server value to the original named instance.


Note the Port No field. By default SQL Server uses 1433 but you can check your setup using the SQL Server Configuration Manager. Open the SQL Server Network Configuration element again and select the protocols for the named instance. Right-click on TCP/IP and view the Properties.


If Listen All is enabled on the Protocol tab move to the IP Addresses tab and scroll down to the IPAll section. If Listen All is not enabled you will need to look for the appropriate IP section.

If SQL Server is configured to use a static port it will appear as the TCP Port value. If it is configured to use a dynamic port, the port in use will appear as the TCP Dynamic Port value. In either case this is the port number to use when creating the alias.
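As an alternative to clicking through the dialogs, the port an instance is listening on is also written to the SQL Server error log at startup, so you can query it directly (a sketch, assuming you have permission to read the error log):

-- Search the current error log for the startup message that reports the port
EXEC xp_readerrorlog 0, 1, N'Server is listening on';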


Click OK to close any open dialog. Restart SQL Server.

The new instance name will now get routed to the actual instance (calls to localhost\dev_2012 will get routed to localhost\SQLEXPRESS2012 in this case).

You can check everything works by connecting SQL Server Management Studio to the new instance name.
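As a final sanity check from a query window connected via the alias, the following should report the underlying instance (SQLEXPRESS2012 in this case):

SELECT @@SERVERNAME AS server_name,
       SERVERPROPERTY('InstanceName') AS instance_name;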

Wednesday, 18 June 2014

Download SQL Server Express

Only a few days ago we were moaning in the office about how complicated it was to download the correct version of SQL Server Express. Well, it seems we were not alone.

Scott Hanselman has come to the rescue with an awesome blog post that provides easy links to the various SQL Server Express binaries. Here's Scott's short link to his blog post:

http://downloadsqlserverexpress.com

One for the bookmark list. Thanks Scott!



Thursday, 22 May 2014

Minimising deadlocks with READ_COMMITTED_SNAPSHOT

The problem

We have a system that has a number of separate Windows services that all access a shared database hosted by SQL Server 2008 R2. The system is used to provide online hydraulic modelling in support of a water utility company’s maintenance activities. The services are built around SynerGEE Water, a hydraulic modelling product from DNVGL.

One of the Windows services is responsible for monitoring a model library – SynerGEE models are stored as MDB files on the file system – and when a new or updated model is detected it adds a MODEL record to the database. It also indexes all of the pipes in the model and adds them to a MODEL_PIPE table in the database.

A second service checks the database for new MODEL records and then invokes SynerGEE to perform some hydraulic analysis. The results of this analysis are used to update the MODEL_PIPE records.

We observed that if a number of models had been updated in one go, a database deadlock sometimes occurred when the second service queried the MODEL_PIPE table. This was because the first service was simultaneously adding MODEL_PIPE records for other models.

We are using NHibernate for all data access and all database queries or updates are wrapped in transactions with the assistance of the Spring.Net transaction template. NHibernate Profiler was used to confirm that all the transactions were correctly formed and we could see that the transactions were using the READ COMMITTED isolation level.

The solution

Firstly, I did some research around Minimizing Deadlocks and noted that using a row-based isolation level can help. In particular, activating READ_COMMITTED_SNAPSHOT on the database can help by allowing SQL Server to use row versioning rather than shared locks.

“When the READ_COMMITTED_SNAPSHOT database option is set ON, a transaction running under read committed isolation level uses row versioning rather than shared locks during read operations.” [1]

Further research around snapshot isolation levels provided further insight:

“The READ_COMMITTED_SNAPSHOT database option determines the behavior of the default READ COMMITTED isolation level when snapshot isolation is enabled in a database. If you do not explicitly specify READ_COMMITTED_SNAPSHOT ON, READ COMMITTED is applied to all implicit transactions. This produces the same behavior as setting READ_COMMITTED_SNAPSHOT OFF (the default). When READ_COMMITTED_SNAPSHOT OFF is in effect, the Database Engine uses shared locks to enforce the default isolation level. If you set the READ_COMMITTED_SNAPSHOT database option to ON, the database engine uses row versioning and snapshot isolation as the default, instead of using locks to protect the data.” [2]

Bingo! If the READ_COMMITTED_SNAPSHOT database option is set to ON row versioning is used instead of locks.

“Once snapshot isolation is enabled, updated row versions for each transaction are maintained in tempdb. A unique transaction sequence number identifies each transaction, and these unique numbers are recorded for each row version. The transaction works with the most recent row versions having a sequence number before the sequence number of the transaction. Newer row versions created after the transaction has begun are ignored by the transaction…

…Snapshot isolation uses an optimistic concurrency model. If a snapshot transaction attempts to commit modifications to data that has changed since the transaction began, the transaction will roll back and an error will be raised. ” [2]

In our case this looked very interesting because the first Windows service adds new MODEL and MODEL_PIPE records in a transaction and is done with them. The second Windows service then reads the new MODEL and MODEL_PIPE records and updates them in a separate transaction. The chances of an optimistic concurrency issue are minimal. Although the two services are accessing the same table they are not accessing the same rows. Therefore, using row version-based locks would allow the two services to work better together.

So, I enabled READ_COMMITTED_SNAPSHOT on the database [3] and found that the deadlocks no longer occurred.

ALTER DATABASE <dbname> SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE <dbname> SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE <dbname> SET MULTI_USER;
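You can confirm the option took effect by querying sys.databases:

SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = '<dbname>';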

Result!

References

[1] Minimizing Deadlocks, Technet.

[2] Snapshot Isolation in SQL Server, MSDN.

[3] Enabling Row Versioning-Based Isolation Levels, Technet.

Saturday, 21 December 2013

SQL Server indexing basics – a bit of revision

This post is really just for me. You may get something from it but don’t expect very much! It’s been a while since I visited SQL Server indexing fundamentals, so I thought I’d do a bit of revision, and this post is really just my notes.

Pages

“The fundamental unit of data storage in SQL Server is the page. The disk space allocated to a data file (.mdf or .ndf) in a database is logically divided into pages numbered contiguously from 0 to n. Disk I/O operations are performed at the page level. That is, SQL Server reads or writes whole data pages.” [1]

  • The page size is 8 KB
  • This means databases have 128 pages per megabyte
  • Each page begins with a 96-byte header including:
    • Page number
    • Page type
    • Amount of free space on the page
    • Allocation unit ID of the object that owns the page

 


 

Extents

“Extents are the basic unit in which space is managed. An extent is eight physically contiguous pages, or 64 KB. This means SQL Server databases have 16 extents per megabyte.” [1]

 

Heaps

“A heap is a table without a clustered index.” [2]

“If a table is a heap and does not have any nonclustered indexes, then the entire table must be examined (a table scan) to find any row.” [2]

  • Do not use a heap when
    • the data is frequently returned in a sorted order
    • the data is frequently grouped together
    • ranges of data are frequently queried from the table
    • there are no nonclustered indexes and the table is large

 

Indexes

“An index is an on-disk structure associated with a table or view that speeds retrieval of rows from the table or view. An index contains keys built from one or more columns in the table or view. These keys are stored in a structure (B-tree) that enables SQL Server to find the row or rows associated with the key values quickly and efficiently.” [3]

 

Clustered Indexes

  • Clustered indexes sort and store the data rows in the table or view based on their key values. These are the columns included in the index definition. There can be only one clustered index per table, because the data rows themselves can be sorted in only one order. [3]

  • The only time the data rows in a table are stored in sorted order is when the table contains a clustered index. When a table has a clustered index, the table is called a clustered table. If a table has no clustered index, its data rows are stored in an unordered structure called a heap. [3]

 

  • Clustered indexes
    • have a row in sys.partitions (with index_id = 1)
    • determine the physical order of data
  • Consider using when
    • There are a large number of distinct values in a column
    • On columns that are
      • frequently accessed
      • frequently searched for a range of values
    • Queries return very large result sets
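As a sketch, a clustered index on a distinct, ever-increasing and frequently range-queried column might be created like this (table and column names are hypothetical):

CREATE CLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate);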

 

Nonclustered indexes

  • Nonclustered indexes
    • do not affect the physical order of the data rows
    • have a structure separate from the data rows
    • contain the nonclustered index key values and each key value entry has a pointer to the data row that contains the key value
  • The pointer from an index row in a nonclustered index to a data row is called a row locator
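A minimal nonclustered index sketch (hypothetical names); the index is a separate structure whose rows point back to the data rows:

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId);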

 

Primary keys

  • A primary key automatically creates a clustered index, except when
    • a nonclustered primary key has been explicitly specified
    • a clustered index already exists

“Indexes are automatically created when PRIMARY KEY and UNIQUE constraints are defined on table columns. For example, when you create a table and identify a particular column to be the primary key, the Database Engine automatically creates a PRIMARY KEY constraint and index on that column.” [3]

 

Unique indexes (unique constraints)

  • A unique index is created when you create a PRIMARY KEY or UNIQUE constraint. [4]
  • You can create a unique index independently of a constraint, or when you create an indexed view.

 

Filtered indexes

“A filtered index is an optimized nonclustered index especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance as well as reduce index maintenance and storage costs compared with full-table indexes.” [5]

  • Use a filtered index when queries frequently select a well-defined subset of rows.
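For example (hypothetical names), indexing only the subset of rows the queries actually touch:

CREATE NONCLUSTERED INDEX IX_Orders_OpenOrders
ON dbo.Orders (OrderDate)
WHERE Status = 'Open';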

 

Index Maintenance

Reorganise

  • Physically reorganises the leaf nodes of the index only

Rebuild

  • Drops the existing index and recreates it

Tips

  • Avoid DBCC SHRINKDATABASE because it increases index fragmentation
  • Put the clustered index on the column that is distinct and increasing in value to avoid fragmentation
  • Check index fragmentation and then
    • If fragmentation is > 5% and <= 30%, reorganise the index
    • If fragmentation is > 30%, rebuild the index [6]
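The fragmentation check and the two maintenance operations can be sketched as follows (index and table names are placeholders):

-- Check fragmentation for all indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0;

-- 5-30% fragmentation: reorganise
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;

-- Over 30% fragmentation: rebuild
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;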

 

Beware

  • Too many indexes on a table can reduce performance
    • Execution plan can be less efficient
    • Queries can be less efficient
    • Queries may use the wrong – less efficient – index
  • Duplicate indexes offer no advantages and can reduce performance during inserts, updates and deletions
  • Unused indexes may also reduce performance during inserts, updates and deletions

 

References

[1] Understanding Pages and Extents – SQL 2008 documentation

[2] Heaps (Tables without Clustered Indexes) – SQL 2012 documentation

[3] Clustered and Nonclustered Indexes Described – SQL Server 2012 documentation

[4] Create Unique Indexes - SQL Server 2012 documentation

[5] Create Filtered Indexes - SQL Server 2012 documentation

[6] Reorganize and Rebuild Indexes - SQL Server 2012 documentation

Thursday, 19 December 2013

The Accidental DBA and the SQL Server Maintenance Solution

I recently came across two fantastic resources for understanding and implementing the essentials of maintaining SQL Server databases.

The Accidental DBA

As a software developer you often encounter situations where you become the DBA whether you want to or not. It is not uncommon to be working in an environment where there is no dedicated DBA but the applications under development rely on SQL Server as a back-end data store.

So, what do you need to know to keep the SQL Server database up-and-running and to prevent queries from slowing to a standstill? Well, it turns out there is an excellent series of blog posts on www.sqlskills.com called The Accidental DBA, offering 30 days of top tips.

 

SQL Server Maintenance Solution

Something referred to by the Accidental DBA series is the SQL Server Maintenance Solution put forward by Ola Hallengren. This awesome set of scripts allows you to run backups, perform integrity checks, and carry out index and statistics maintenance on all editions of Microsoft SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, and SQL Server 2012.

 

UPDATE 24/12/2013 – Replacement for sp_helpindex

Also on sqlskills.com, Kimberly Tripp has a replacement for the standard sp_helpindex stored procedure.

Wednesday, 22 June 2011

Bulk copy, full text and a severe error

I have a program that is using SqlBulkCopy to batch update data in a SQL Server Express 2008 database. Everything was working fine until, for no apparent reason, the bulk inserts failed with the following exception:

“A severe error occurred on the current command. The results, if any, should be discarded.”

After much hair pulling the cause of the error turned out to be the addition of a full text index on one of the fields being updated. Removing the full text index stopped the exception from being thrown. That’s all well and good but I needed a full text index.

The index had been defined something like this:

EXEC sp_fulltext_database 'enable'
GO

CREATE FULLTEXT CATALOG MyCatalog
GO

CREATE FULLTEXT INDEX ON dbo.MyTable
(
    MyColumn LANGUAGE 0x0
)
KEY INDEX PK_MyTable ON MyCatalog WITH CHANGE_TRACKING AUTO

After some experimentation the problem seemed to be with the CHANGE_TRACKING option. With it set to OFF the bulk copy worked fine, but it failed with either AUTO or MANUAL.* For my purposes this was acceptable because the data was fairly static and updated infrequently. I was left with having to ensure the index was rebuilt or populated appropriately as a separate process.
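The eventual setup looked something like this sketch (names as before; the population step then has to be scheduled or triggered separately):

CREATE FULLTEXT INDEX ON dbo.MyTable
(
    MyColumn LANGUAGE 0x0
)
KEY INDEX PK_MyTable ON MyCatalog WITH CHANGE_TRACKING OFF
GO

-- Run as a separate process after each bulk load
ALTER FULLTEXT INDEX ON dbo.MyTable START FULL POPULATION
GO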

 

References

* Full-Text Index Population - http://msdn.microsoft.com/en-us/library/ms142575.aspx

Monday, 20 June 2011

Error installing SQL Server Express 2008

I ran into an issue when trying to install SQL Server Express 2008 with Advanced Services. The installation file was a single executable (SQLEXPRADV_x86_ENU.exe) but when I ran it I got the following error in a dialog box:

“SQL Server Setup has encountered the following error:

Invoke or BeginInvoke cannot be called on a control until the window handle has been created.”

The solution to the problem is:

  1. SQLEXPRADV_x86_ENU.exe is a self-extracting Zip file. Use WinZip to extract it to a local folder.
  2. Open a command window and navigate to the folder containing the extracted setup files.
  3. Run setup.exe from the command prompt.

That’s it.

Note that if you want to use the Add or Remove Programs feature to add new features you can use the extracted files as the ‘installation media’.

Monday, 6 June 2011

Saving changes is not permitted

I’ve run into this problem a couple of times now and it’s really annoying every time it happens. I created a table in SQL Express using the SQL Management Studio and saved it. I then tried adding some new columns to the table but when I saved it again the following error appeared:

“Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created.”

The dialog itself is not exactly helpful.

The solution is as follows:

  1. Go to Tools > Options…
  2. Select the Designers node.
  3. Uncheck the Prevent saving changes that require table re-creation option.

 


Job done.

Friday, 6 August 2010

Isolation levels in SQL Server

Isolation defines how and when changes made by one operation become visible to other concurrent operations. In other words, you can use isolation levels to isolate a resource for a transaction and protect it from modification by other transactions.

Resources are protected through the use of locks. SQL Server decides what locks to take out (and when) by looking at the isolation level for the transaction. Lower isolation levels allow simultaneous access to a resource by multiple transactions but may introduce concurrency-related problems (e.g. dirty reads, data inaccuracy). Higher isolation levels eliminate those concurrency problems but risk introducing blocking.

Serializable

All transactions occur in a completely isolated fashion.

Places a range lock on the data set preventing other users from updating or inserting rows into the data set until the transaction is complete.

This is the most restrictive of the four isolation levels.

Repeatable read

All data records read by a SELECT statement cannot be changed. The transaction acquires read locks on all retrieved data, but does not acquire range locks.

Locks are placed on all data that is used in a query preventing other users from updating the data. However, new ‘phantom’ rows can be inserted into the data set by another user and will be included in later reads in the current transaction.

Read committed

Records retrieved by a query are not prevented from modification. Non-repeatable reads may occur (data retrieved in a SELECT statement may be modified by some other transaction when it commits). Read locks are acquired on selected data but they are released immediately. Write locks are released at the end of the transaction.

Shared locks are held while the data is being read to avoid dirty reads but the data can be changed before the end of the transaction.

This is the default Isolation Level of SQL Server. It eliminates dirty-reads but other concurrency problems remain.

Read uncommitted

Dirty reads are allowed. One transaction may see uncommitted changes made by another transaction.

It is possible to read uncommitted or dirty data but values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction.
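You can set the isolation level per session before starting a transaction; for example (table name hypothetical):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO

BEGIN TRANSACTION
SELECT * FROM dbo.SomeTable  -- may return uncommitted ('dirty') rows
COMMIT TRANSACTION
GO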

Monday, 10 May 2010

GO command and semicolons in SQL Server

The GO Command

The batch terminator. The GO command is not part of the Transact-SQL language; it is a batch separator understood by the SQLCMD, OSQL and ISQL utilities, and also by Query Analyzer and the Query Editor window.

A batch is a set of T-SQL statements that are submitted for execution as a group. A script is a file containing set of T-SQL statements. One script can contain many batches.

Note: if you declare variables in a script and issue a GO command the variables will be destroyed.

The GO command is sometimes required (e.g. when defining a stored procedure – the CREATE PROCEDURE statement must be the first statement in the query batch so any preceding statements must be followed by GO).

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_NULLS ON
GO

CREATE PROCEDURE dbo.MyProcedure
AS
BEGIN
 SELECT *
 FROM INFORMATION_SCHEMA.TABLES
END

SET QUOTED_IDENTIFIER OFF
GO

The Semicolon

The semicolon character is a statement terminator.

It is a part of the ANSI SQL-92 standard but was never used within Transact-SQL.

There are two situations in which you must use the semicolon:

  • Where you use a Common Table Expression (CTE), and the CTE is not the first statement in the batch.
  • Where you issue a Service Broker statement and the Service Broker statement is not the first statement in the batch.
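For example, when a CTE follows another statement in the same batch, the preceding statement must be terminated with a semicolon, which is why you often see a leading ‘;WITH’ (sketch using the AdventureWorks sample database):

DECLARE @minOrders INT = 5   -- some preceding statement in the batch
;WITH Sales_CTE AS           -- the semicolon terminates the DECLARE
(
    SELECT SalesPersonID, COUNT(*) AS NumberOfOrders
    FROM Sales.SalesOrderHeader
    GROUP BY SalesPersonID
)
SELECT * FROM Sales_CTE WHERE NumberOfOrders >= @minOrders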

See also: http://www.sqlservercentral.com/articles/SQL+Puzzles/thegocommandandthesemicolonterminator/2200/

Common Table Expressions (CTEs)

Common Table Expressions (CTEs) were introduced in SQL Server 2005. They are a view- or derived-table-like construct.

“A common table expression (CTE) can be thought of as a temporary result set that is defined within the execution scope of a single SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. A CTE is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. Unlike a derived table, a CTE can be self-referencing and can be referenced multiple times in the same query.” - http://msdn.microsoft.com/en-us/library/ms190766.aspx (MSDN SQL Server 2008 documentation)

CTEs offer improved readability and easier maintenance of complex queries.

The basic structure of a CTE is:

WITH expression_name [ ( column_name [,...n] ) ]
AS
( CTE_query_definition )

You can then query the CTE as you would a table/view.

Example:

USE AdventureWorks;
GO
-- Define the CTE
WITH Sales_CTE (SalesPersonID, NumberOfOrders, MaxDate)
AS
(
    SELECT SalesPersonID, COUNT(*), MAX(OrderDate)
    FROM Sales.SalesOrderHeader
    GROUP BY SalesPersonID
)

-- Query using the CTE
SELECT E.EmployeeID, OS.NumberOfOrders, OS.MaxDate,
       E.ManagerID, OM.NumberOfOrders, OM.MaxDate
FROM HumanResources.Employee AS E
    JOIN Sales_CTE AS OS -- Join against CTE as if it were a table
    ON E.EmployeeID = OS.SalesPersonID
    LEFT OUTER JOIN Sales_CTE AS OM -- CTE can be referenced more than once
    ON E.ManagerID = OM.SalesPersonID
ORDER BY E.EmployeeID;
GO

You can define multiple CTEs after the WITH keyword by separating each CTE with a comma.

A CTE can reference itself and previously defined CTEs in the same WITH clause. Forward referencing is not allowed.
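For example, two CTEs in one WITH clause, the second referencing the first (again using AdventureWorks):

WITH OrderCounts AS
(
    SELECT SalesPersonID, COUNT(*) AS NumberOfOrders
    FROM Sales.SalesOrderHeader
    GROUP BY SalesPersonID
),
TopSellers AS
(
    SELECT SalesPersonID
    FROM OrderCounts          -- references the previous CTE
    WHERE NumberOfOrders > 100
)
SELECT * FROM TopSellers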

Lots more here: http://msdn.microsoft.com/en-us/library/ms175972.aspx

Thursday, 6 May 2010

Schema binding in SQL Server

I recently had to make changes to a database table – simple changes, just adding two nullable columns. However, I was confronted with an error because the table in question was bound to a view created WITH SCHEMABINDING.

Schema binding creates dependencies between database objects so that changes cannot be made accidentally that would break a dependent object. In the case of a view, it would otherwise be possible to add, alter or drop table columns the view depends on; schema binding prevents this.

To modify a database object that participates in schema binding you must drop or alter the bound object to remove the schema binding before making the necessary changes.
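To add my two columns I therefore had to do something like the following sketch (object and column names are placeholders): redefine the view without the binding, alter the table, then restore the binding.

ALTER VIEW [dbo].[ViewNameHere] AS
SELECT SomeColumn FROM [dbo].[TableNameHere]
GO

ALTER TABLE [dbo].[TableNameHere] ADD NewColumn1 INT NULL, NewColumn2 INT NULL
GO

ALTER VIEW [dbo].[ViewNameHere] WITH SCHEMABINDING AS
SELECT SomeColumn FROM [dbo].[TableNameHere]
GO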

Views are not schema bound by default. To ensure views participate in schema binding use the "WITH SCHEMABINDING" clause when creating them.

USE [DatabaseNameHere]
GO 

CREATE VIEW [dbo].[ViewNameHere] WITH SCHEMABINDING AS 
SELECT … snip …
FROM [dbo].[TableNameHere] 
GO

Schema binding is commonly used with objects like views and User Defined Functions (UDF).

Note that for UDFs schema binding can result in performance gains due to the way SQL Server prepares query plans etc.

Wednesday, 14 April 2010

Problems recycling SQL Server logs

Sometimes it is possible that SQL Server has problems cycling the error or SQL Agent logs. This can result in maintenance plans reporting failure.

These errors may manifest themselves on the server as missing files in the log sequence (e.g. there is an ERRORLOG file, ERRORLOG.1 is missing but there is an ERRORLOG.2 etc.). Occasionally the problem can be fixed by renaming the files so the sequence is complete again.

To recycle the error or SQL Agent logs manually try running one of the following:

EXEC sp_cycle_agent_errorlog

EXEC sp_cycle_errorlog

DBCC ERRORLOG

You can also recycle the SQL Server Agent log via Management Studio by right-clicking on SQL Server Agent > Error Logs and choosing Recycle from the popup menu. However, it is possible that the log simply won’t recycle (error 22022).

In this case a safe bet is to restart the SQL Server Agent service.

Thursday, 1 April 2010

Function for splitting text data

If you need to split delimited text into rows, try using the following function:

USE [DatabaseNameHere]
GO

CREATE FUNCTION dbo.Split
    (
      @String VARCHAR(8000),
      @Delimiter CHAR(1)
    )
RETURNS @temptable TABLE ( items VARCHAR(8000) )
AS BEGIN     
    DECLARE @idx INT     
    DECLARE @slice VARCHAR(8000)     
    
    SELECT  @idx = 1     
    IF LEN(@String) < 1
        OR @String IS NULL 
        RETURN     
    
    WHILE @idx != 0     
        BEGIN     
            SET @idx = CHARINDEX(@Delimiter, @String)     
            IF @idx != 0 
                SET @slice = LEFT(@String, @idx - 1)     
            ELSE 
                SET @slice = @String     
  
            IF ( LEN(@slice) > 0 ) 
                INSERT  INTO @temptable ( Items )
                VALUES  ( @slice )     

            SET @String = RIGHT(@String, LEN(@String) - @idx)     
            IF LEN(@String) = 0 
                BREAK     
        END 
    RETURN     
   END

Then execute a query like the following:

SELECT TOP 10 * FROM dbo.Split('Item1,Item2,Item3',',')

The code was taken from here: http://www.logiclabz.com/sql-server/split-function-in-sql-server-to-break-comma-separated-strings-into-table.aspx

Tuesday, 30 March 2010

Lines of SQL in stored procedures

If you want to count the number of lines of SQL in stored procedures and functions try running the following SQL:

USE [Database name here]

SELECT  t.sp_name,
        SUM(t.lines_of_code) - 1 AS lines_of_code,
        t.type_desc
FROM    ( SELECT    o.name AS sp_name,
                    ( LEN(c.text) - LEN(REPLACE(c.text, CHAR(10), '')) ) AS lines_of_code,
                    CASE WHEN o.xtype = 'P' THEN 'Stored Procedure'
                         WHEN o.xtype IN ( 'FN', 'IF', 'TF' ) THEN 'Function'
                    END AS type_desc
          FROM      sysobjects o
                    INNER JOIN syscomments c ON c.id = o.id
          WHERE     o.xtype IN ( 'P', 'FN', 'IF', 'TF' )
                    AND o.category = 0
                    AND o.name NOT IN ( 'fn_diagramobjects', 'sp_alterdiagram',
                                        'sp_creatediagram', 'sp_dropdiagram',
                                        'sp_helpdiagramdefinition',
                                        'sp_helpdiagrams', 'sp_renamediagram',
                                        'sp_upgraddiagrams', 'sysdiagrams' )
        ) t
GROUP BY t.sp_name,
        t.type_desc
ORDER BY 1

Wednesday, 10 March 2010

Why use NOCOUNT?

Setting NOCOUNT to ON stops the message that shows the count of the number of rows affected by a T-SQL statement or stored procedure from being returned as part of the result set.

Why is this useful or important? It can result in a significant performance boost when running SQL statements:

“When SET NOCOUNT is ON, the count is not returned. When SET NOCOUNT is OFF, the count is returned.

The @@ROWCOUNT function is updated even when SET NOCOUNT is ON.

SET NOCOUNT ON prevents the sending of DONE_IN_PROC messages to the client for each statement in a stored procedure. For stored procedures that contain several statements that do not return much actual data, setting SET NOCOUNT to ON can provide a significant performance boost, because network traffic is greatly reduced.

The setting specified by SET NOCOUNT is in effect at execute or run time and not at parse time.” – from SQL Server online documentation
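A minimal sketch of where the setting usually goes (procedure name hypothetical):

CREATE PROCEDURE dbo.usp_NoCountExample
AS
BEGIN
    -- Suppress the 'n rows affected' DONE_IN_PROC messages for every
    -- statement in this procedure
    SET NOCOUNT ON;

    SELECT TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES;
END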

Tuesday, 12 January 2010

Rebuilding execution plans

To force SQL Server to rebuild its execution plans for a particular database run the following SQL:

DECLARE @intDBID INTEGER
SET @intDBID = (SELECT dbid FROM master.dbo.sysdatabases WHERE name = 'DatabaseNameHere')
DBCC FLUSHPROCINDB (@intDBID)

To rebuild execution plans for all databases run the following SQL:

DBCC FREEPROCCACHE

You can see the execution plans for a procedure by using the SET command:

USE [DatabaseNameHere]
GO

SET SHOWPLAN_ALL ON
GO

EXEC [WidgetQueries_RecentlyAddedContent] 0
GO

SET SHOWPLAN_ALL OFF
GO

You can also force a recompilation for a specific procedure or trigger using sp_recompile stored procedure:

sp_recompile [ @objname = ] 'object'
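For example, to mark a single stored procedure for recompilation the next time it runs (procedure name hypothetical):

EXEC sp_recompile N'dbo.MyProcedure'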

There are many other options. Check the documentation for details.

Wednesday, 30 December 2009

Restore a database using TSQL

To restore a database use the following SQL:

USE [tempdb]
GO

ALTER DATABASE [DatabaseName]
SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

RESTORE DATABASE [DatabaseName]
FROM DISK = 'C:\BackupName.BAK'
WITH MOVE 'MDF_logical_name' TO 'C:\MDFName.mdf',
MOVE 'LDF_logical_name' TO 'C:\LDFName.ldf'
GO

ALTER DATABASE [DatabaseName] SET MULTI_USER
GO

Note that this puts the database into single user mode first (i.e. it kicks off other users). Don’t forget the USE [tempdb] statement or you may find the database still in use and the restore may fail. If you’re doing a straight restore you can get the logical filenames etc. from the backup file:

RESTORE FILELISTONLY FROM DISK = 'C:\BackupName.BAK'
GO

Wednesday, 16 December 2009

User-defined Functions in SQL Server

User-defined functions (UDFs) encapsulate logic for use in other queries. Views are limited to a single SELECT statement but user-defined functions can have multiple SELECT statements.

There are basically three categories of UDF:

  • Scalar-valued function
    • Returns a single, scalar value.
    • Can be used as column name in queries.
    • Can contain an unlimited number of statements as long as only one value is returned.
  • Inline function
    • Returns a variable of data type table.
    • Can only contain a single SELECT statement.
    • The structure of the returned value is generated from the columns that compose the SELECT statement.
    • A table variable need not be explicitly declared and defined.
  • Table-valued function
    • Can contain any number of statements that populate the table variable to be returned.
    • Useful when you need to return a set of rows.
    • A table variable must be explicitly declared and defined.
    • An advantage over a view is that the function can contain multiple SQL statements whereas a view is composed of only one statement.
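As an illustration, a minimal inline function sketch (names assume the AdventureWorks sample database):

CREATE FUNCTION dbo.OrdersForSalesPerson (@SalesPersonID INT)
RETURNS TABLE
AS
RETURN
(
    SELECT SalesOrderID, OrderDate
    FROM Sales.SalesOrderHeader
    WHERE SalesPersonID = @SalesPersonID
);
GO

-- Because it returns a rowset it can be queried like a table, including in JOINs
SELECT * FROM dbo.OrdersForSalesPerson(280);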

Differences between Stored Procedures and UDFs:

  • UDFs can be used in SQL statements whereas SPs can’t.
  • UDFs that return tables can be treated as another rowset and can be used in JOINs.
  • Inline UDFs can be thought of as views that take parameters and can be used in JOINs etc.

Thursday, 5 November 2009

Fulltext catalogues

To identify the full text catalogues run the following SQL:

SELECT name 
FROM sys.fulltext_catalogs;
GO

Alternatively use SQL Management Studio:

  • Go to Object Explorer and connect to your server.
  • Expand the Databases node and find the database in question.
  • Expand the Storage > Full Text Catalogues node.
  • Right-click on a catalogue name and select Properties.

Note that the Tables/Views page of the Full-Text Catalog Properties dialog box shows how the catalogue tracks changes (radio buttons at the bottom of the screen).

Common tasks

Rebuild

“Tells SQL Server to rebuild the entire catalog. When a catalog is rebuilt, the existing catalog is deleted and a new catalog is created in its place. All the tables that have full-text indexing references are associated with the new catalog. Rebuilding resets the full-text metadata in the database system tables.”

Reorganize

“Tells SQL Server to perform a master merge, which involves merging the smaller indexes created in the process of indexing into one large index. Merging the full-text index fragments can improve performance and free up disk and memory resources. If there are frequent changes to the full-text catalog, use this command periodically to reorganize the full-text catalog.

REORGANIZE also optimizes internal index and catalog structures.

Keep in mind that, depending on the amount of indexed data, a master merge may take some time to complete. Master merging a large amount of data can create a long running transaction, delaying truncation of the transaction log during checkpoint. In this case, the transaction log might grow significantly under the full recovery model. As a best practice, ensure that your transaction log contains sufficient space for a long-running transaction before reorganizing a large full-text index in a database that uses the full recovery model. For more information, see Managing the Size of the Transaction Log File.”

SQL command to run

You can trigger a rebuild or reorganisation using the following command:

ALTER FULLTEXT CATALOG catalog_name 
{ REBUILD [ WITH ACCENT_SENSITIVITY = { ON | OFF } ]
| REORGANIZE
| AS DEFAULT 
}

See http://msdn.microsoft.com/en-us/library/ms176095.aspx