Technology Blog on: Business Intelligence, Cloud Computing, Data Mining, Warehousing, Database Systems, Project Management, Security and Capacity Management, High-Tech Web Analytics Systems.
Sunday, November 09, 2008
Stored Procedures Best Practices (learn from the Best)
Instead of adding more to the already overloaded pile of SQL stored procedure best-practice advice on the web, let's learn from those who do it best! :)
Labels:
performance,
sql
Wednesday, October 22, 2008
Avoiding Cursors and Loops for Append Operations
Most of the time, the first thing that comes to mind for a basic append operation in SQL is a CURSOR with a LOOP.
For example, to print a comma-separated list of the columns in a given table, most of us would quickly reach for a cursor and a loop, appending each column name to a variable.
BUT there is a simpler, smarter, and quicker way to do appends like this:
TRADITIONAL CURSOR WITH WHILE LOOP SOLUTION FOR APPENDS:
create table #mytable (column_a varchar(2), column_b varchar(2), column_c varchar(3))

-- cursor over the column names of the temp table
declare cursor_mytable cursor
for select c.name
from tempdb.dbo.sysobjects o
join tempdb.dbo.syscolumns c on c.id = o.id
where o.id = object_id('tempdb.dbo.#mytable')
and o.type = 'U'

declare @v varchar(100), @v2 varchar(800)
set @v2 = ''

open cursor_mytable
fetch cursor_mytable into @v
while @@fetch_status = 0
begin
    if @v2 = ''
        set @v2 = @v                 -- first column: no leading comma
    else
        set @v2 = @v2 + ',' + @v     -- append the next column name
    fetch cursor_mytable into @v
end
close cursor_mytable
deallocate cursor_mytable

drop table #mytable
select @v2
SIMPLE, CURSOR- AND LOOP-FREE, YET SMARTER SOLUTION FOR APPENDS:
create table #mytable (column_a varchar(2), column_b varchar(2), column_c varchar(3))

declare @v varchar(100), @v2 varchar(800)
select @v2 = '', @v = ''

-- one SELECT appends every column name to @v2; no cursor or loop needed
select @v = c.name, @v2 = @v2 + ',' + @v
from tempdb.dbo.syscolumns c
INNER JOIN tempdb.dbo.sysobjects o on c.id = o.id
where o.id = object_id('tempdb.dbo.#mytable')
and o.type = 'U'

drop table #mytable
select substring(@v2, 2, len(@v2))   -- strip the leading comma
Both code snippets above produce the same output, but the simplicity of the latter is obvious!
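For what it's worth, on SQL Server 2005 and later the same list can also be built with FOR XML PATH; a minimal sketch against the same temp table (reusing the #mytable definition from above):

-- comma-separated column list via FOR XML PATH (SQL 2005+)
create table #mytable (column_a varchar(2), column_b varchar(2), column_c varchar(3))
select stuff((
    select ',' + c.name
    from tempdb.dbo.syscolumns c
    join tempdb.dbo.sysobjects o on c.id = o.id
    where o.id = object_id('tempdb.dbo.#mytable')
    and o.type = 'U'
    for xml path('')
), 1, 1, '')   -- stuff() strips the leading comma
drop table #mytable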
Labels:
performance,
sql
Wednesday, August 27, 2008
SQL Server 2005 Versions and Features Comparison
One thing I found missing in the comparison: Peer-to-Peer Transactional Replication, used for scaling out, is only supported in the Enterprise Edition.
Labels:
sql
Saturday, August 09, 2008
SQL Server version Info and Build Details!
To get the version info for SQL Server, we use:
select @@version
or
select serverproperty('productversion')
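For a bit more detail in one shot, SERVERPROPERTY also exposes the patch level and edition (ProductLevel and Edition are documented properties on SQL 2000 and 2005); a small sketch:

select serverproperty('productversion') as build,
       serverproperty('productlevel')   as sp_level,   -- e.g. RTM, SP2
       serverproperty('edition')        as edition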
The following are very useful links with detailed information about all builds and versions, which helps greatly in keeping up with patches and security!
http://sqlsecurity.com/FAQs/SQLServerVersionDatabase/tabid/63/Default.aspx
http://sqlserverbuilds.blogspot.com/
http://support.microsoft.com/kb/321185
Labels:
sql
SQL Server 2008 (Katmai) download links!
Trial Download:
http://www.microsoft.com/sqlserver/2008/en/us/trial-software.aspx
SQL Server 2008 Feature Pack:
http://www.microsoft.com/downloads/details.aspx?familyid=c6c3e9ef-ba29-4a43-8d69-a2bed18fe73c&displaylang=en&tm
SQL Server 2008 Upgrade Advisor:
http://www.microsoft.com/downloads/details.aspx?FamilyID=F5A6C5E9-4CD9-4E42-A21C-7291E7F0F852&displaylang=en
Labels:
sql
Wednesday, July 23, 2008
Advanced SQL Server Interview Questions
I came across several questions during an interview for a SQL admin position; I thought other folks might also benefit from them.
And I will try to add more to the list as long as I keep being interviewed! ;)
- Question: When you fail over a SQL Server from node 1 to node 2, are all processes/users automatically shifted to node 2?
- Jobs running on node 1 will fail and do not automatically fail over to node 2. You might need to run those jobs manually after the failover.
- Question: How can you move a physical table from one disk to another (using filegroups)?
- Recreate the table's clustered index on the destination filegroup; the physical table automatically moves along with the clustered index to the new disk (see the sketch after this list).
- Question: When you upgrade a database, what doesn't automatically get updated?
- Run UPDATE STATISTICS manually, because statistics are not automatically updated upon upgrade.
- Question: Is it fine to accept a fill factor of 100 by default when creating an index?
- Do not automatically accept the default fill factor of 100 unless your data seldom changes. You might need a lower fill factor on an OLTP database to reduce page splitting and save I/O.
- Question: Is it fine to have the SQL startup account as a member of the BUILTIN\Administrators group? If not, what explicit rights might a startup account with limited rights require?
- Though it's recommended that the SQL Server startup account not be a domain admin, and have limited rights to avoid risking the complete machine/network in case of injection/intrusion, you might need the LOCK PAGES IN MEMORY right explicitly granted to the startup account in order to start the SQL service.
- Question: How do we replicate databases over a WAN with limited bandwidth, and is it possible to set up a single large publication in that case?
- When replicating large databases over a WAN, you need to set up the FTP options on the publication so the snapshot can be uploaded to FTP manually for the subscriber to download. You cannot create a snapshot file (.cab) over 2 GB, so if you have large tables, use multiple publications, each with a snapshot .cab file under 2 GB.
- Question: How can we browse SQL configuration options without having to enable advanced options?
- SELECT * FROM sys.configurations
- Question: What is the recommended block size when setting up a disk or RAID for a SQL database?
- The optimum block size when formatting a disk or setting up RAID for a SQL database is 64 KB, as SQL Server's basic unit of storage is the extent, which is 64 KB.
- Question: Is it a problem if your NTFS disk is 90% used?
- NTFS needs room to work, and when you exceed 80% capacity NTFS becomes less efficient and I/O can suffer. Keeping adequate free space on the disk will prevent the MFT from becoming fragmented, which is important because the Disk Defragmenter tool can't defragment the MFT.
- Question: In SQL 2000, we know that the page size in the SQL storage structure is 8064 bytes, of which 4 bytes are header information. What's the actual max row size in a data page?
- The actual max row size in SQL 2000 is 8039 bytes, not 8060. A table holding a row of 8040 bytes is not possible, and the following command will fail:
- CREATE TABLE ROWSIZE ( address CHAR(8000), name CHAR (40) )
- Question: Executing the command below in SSMS, what will the output be?
SELECT @@version
GO 5
- It will execute the SELECT command five times.
- Question: Does enabling AWE on a 32-bit OS mean a better procedure cache?
- SQL on a 32-bit OS with AWE enabled still can't address more than 4 GB of virtual address space. Only the buffer cache can benefit from the extra RAM on 32-bit with AWE enabled.
- Question: Does enabling AWE on a 64-bit OS mean a better procedure cache?
- True; however, one of the drawbacks of SQL on a 64-bit OS is that we cannot cap the procedure cache size, so SQL Server may prefer to keep large, expensive plans in cache and drain out memory. A larger procedure cache may also increase the time SQL takes to search the procedure cache to determine whether a plan exists.
- Question: What's the purpose of the AWE enabled option in SQL 2005 64-bit?
- On a 64-bit OS, AWE isn't ignored (as per several MSDN posts): "locked pages, awe mechanism, enable better application's scalability and performance both during ramp up time and over long period of time." http://blogs.msdn.com/slavao/archive/2005/04/29/413425.aspx
- Question: How can we make a SQL statement roll back if it gets blocked, instead of waiting for the blocking process to commit or roll back?
- By using SET LOCK_TIMEOUT (see the sketch after this list).
- Question: What's wrong with the following query:
SELECT DISTINCT name FROM DepartmentsTable ORDER BY id
- The query will return an error because, when using DISTINCT, every column referred to in the ORDER BY clause must appear in the SELECT list.
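Hedged sketches of the clustered-index move and the SET LOCK_TIMEOUT answers referenced above (table, index, filegroup, and column names are all made up for illustration):

-- moving a table by rebuilding its existing clustered index on another filegroup
-- (assumes the clustered index IX_Orders_OrderId already exists)
CREATE CLUSTERED INDEX IX_Orders_OrderId
ON Orders (order_id)
WITH DROP_EXISTING
ON [SECONDARY_FG]

-- making a statement give up and roll back instead of waiting on a blocker
-- (TRY/CATCH requires SQL 2005+)
SET LOCK_TIMEOUT 2000   -- stop waiting for locks after 2 seconds (error 1222)
BEGIN TRAN
BEGIN TRY
    UPDATE Orders SET status = 'shipped' WHERE order_id = 42
    COMMIT TRAN
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN   -- roll back rather than wait on the blocker
END CATCH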
Monday, June 09, 2008
Purging Merge Replication History to avoid Locking/Blocking
Last week I visited a data warehouse that pulls data over the WAN from several OLTP servers set up as merge publishers. One of the main OLTP servers, which I was asked to look at closely, had very high processor utilization and consistent spikes in processor queue length because of heavy lock escalations and blocking. The machine itself was configured pretty well, with adequate hardware resources: all production tables were fairly defragmented and partitioned, tempdb sat on separate SCSIs, blah blah blah. Yet overall I/O was sky high, and the processor spikes were shouting that something was fairly wrong.
During the investigation, I found that the merge replication agent was configured to run every 5 minutes on the server. I executed the following query, which showed me the potential cause of the problem:
-- how far back do already-replicated generations go, and how many rows do they hold?
select min(coldate) min_repldate, max(coldate) max_repldate, count(*) rows_replicated
from msmerge_contents a
join msmerge_genhistory b on a.generation = b.generation and a.tablenick = b.art_nick
where b.genstatus <> 0
min_repldate  max_repldate  rows_replicated
2008-01-01    2008-06-08    12010000
There were around 12 million records in the replication history tables, accumulated over the last six months.
Since replication doesn't automatically delete old replicated transactions from these tables (except for some of the tables in the distribution database), each time the replication agent kicked off, queries and inserts hit these highly fragmented tables, driving up I/O and processor time while user activity involving the published tables was being blocked.
Reinitializing with a snapshot was not an adequate solution because the servers were on the WAN and the published tables were substantially large.
I used the following commands to purge the history records older than 7 days and bring relief to the server:
use publisheddb
GO
begin tran
-- take exclusive table locks up front so the purge isn't blocked midway
select count(*) from msmerge_contents with (tablockx)
select count(*) from MSmerge_genhistory with (tablockx)
declare @coldate datetime
select @coldate = getdate()-7
-- purge contents rows whose generation is older than the retention window
delete MSmerge_contents
from MSmerge_contents a, MSmerge_genhistory b
where a.generation = b.generation and a.tablenick = b.art_nick
and b.coldate < @coldate and a.generation > 0
delete MSmerge_genhistory where coldate < @coldate and pubid is not null
commit
GO
-- rebuild indexes and refresh stats on the now much smaller tables
dbcc dbreindex (MSmerge_contents)
GO
dbcc dbreindex (MSmerge_genhistory)
GO
update statistics MSmerge_contents
GO
update statistics MSmerge_genhistory
GO
Ideally, we need to keep these two tables (MSmerge_contents, MSmerge_genhistory) as narrow as possible. That helps replication complete quickly and avoids long stretches of locking/blocking.
There should be a scheduled routine to purge all previously replicated transactions daily or weekly, depending upon the setup and environment; a sketch follows.
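A minimal sketch of such a routine, to be scheduled via a SQL Agent job (the procedure name and the 7-day retention are assumptions):

use publisheddb
GO
create procedure dbo.usp_PurgeMergeHistory
as
begin
    declare @coldate datetime
    select @coldate = getdate() - 7   -- assumed 7-day retention
    -- same purge logic as above, limited to rows older than the retention window
    delete MSmerge_contents
    from MSmerge_contents a, MSmerge_genhistory b
    where a.generation = b.generation and a.tablenick = b.art_nick
    and b.coldate < @coldate and a.generation > 0
    delete MSmerge_genhistory where coldate < @coldate and pubid is not null
end
GO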
Labels:
Replication
Thursday, May 08, 2008
Find out Pending Merge Replication changes
In many merge replication setups the subscribers are not on the local LAN, and replicating large transactions becomes a nightmare. If someone updates, inserts, or deletes a very large number of rows, the merge agent appears stuck replicating those changes, and users cannot see their regular changes.
Large DML in merge replication also results in locking, high I/O, and slow response times, because changes are applied at the subscribers row by row.
So, you might need to find out exactly which table and which rows are pending to be replicated.
SCRIPT!
-- run on Publisher/Subscriber
-- NOTE: if the table is published in multiple publications, the number of rows
-- returned by the queries below will be (article changes x subscribers)
-- Finding inserted/updated records
select distinct a.name, b.generation, b.coldate transaction_time, c.rowguid changed_row, b.changecount tran_count
from sysmergearticles a
join sysobjects ab on ab.name = a.name
join msmerge_genhistory b on a.nickname = b.art_nick
join msmerge_contents c on c.generation = b.generation and c.tablenick = b.art_nick
where b.genstatus = 0   -- genstatus 0 = open generations, not yet replicated
order by c.rowguid
-- Finding deleted records
select a.name table_name, b.generation, b.coldate transaction_time, t.rowguid changed_row, b.changecount
from sysmergearticles a
join MSmerge_tombstone t on a.nickname = t.tablenick
join msmerge_genhistory b on a.nickname = b.art_nick and b.generation = t.generation
where b.genstatus = 0
order by coldate desc
In the next post, we will discuss how to skip replicating the large DML for a particular table and apply the change manually on the subscriber, restoring the normal merge process and saving business time.
Labels:
Replication
Monday, April 21, 2008
How to log in to SQL Server as SA while running in Windows Authentication only mode
A colleague of mine ran into the following situation:
My company's policies require that the domain-admins group be removed from the sysadmin role on production SQL Servers and that only Windows authentication mode be used.
He removed the domain-admins group from the sysadmin role, then set Windows authentication only and restarted the server at night.
When he tried to log back in with his domain user account (which was part of the domain-admins group too), he could only log on to the master db with the public role!
He had forgotten to add his own account explicitly in SQL Server before removing the domain-admins group.
Interesting situation:
- Domain-admins removed from the SQL sysadmin role.
- SQL running in Windows Authentication only mode.
- No Windows user exists in SQL with the sysadmin role.
Meanwhile, my colleague was planning to run the rebuildm.exe utility to rebuild the master db, and then to set up the logins and all user databases using scripts.
I tried something on my local test machine and found that there's a way!
I opened Enterprise Manager and tried to connect as SA with SQL authentication; Error: Login failed for user 'sa'. Reason: Not associated with a trusted SQL Server connection.
hmmmm! OK
Solution:
- I logged into the operating system of the production SQL machine using the local admin account.
- Checked that NAMED PIPES was enabled under the Server Network Utility.
- Start --> Program Files --> Microsoft SQL Server --> SQL Server Client Network Utility
- Went to Alias, selected Add.
- Under network libraries, selected NAMED PIPES.
- Gave the server alias: prod1
- Under Server Name, typed the actual SQL instance name (in our case PRODUK1).
- Clicked OK.
- Opened Enterprise Manager again.
- New SQL Server Registration.
- Under the server, typed the alias: prod1
- Selected "Use SQL Server authentication".
- User name: SA
- Password: typed the password
- Clicked OK.
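For what it's worth, the same alias should work from the command line as well; a hedged equivalent using osql (the password is a placeholder):

osql -S prod1 -U sa -P YourSaPassword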
This isn't a security breach; since you are already logged in to the machine locally, you are an authenticated user on that system. The Named Pipes library only supports connections from valid Windows logins; you cannot connect using Named Pipes over the WAN or from outside a trusted domain. Your Windows login must be authenticated on the machine running SQL Server in order to use Named Pipes!
Cheers!
Labels:
Database Administration,
Security
Friday, April 18, 2008
Hot News SQL 2005! SP3
Sick of applying KB articles, patches, and updating binaries? :(
Finally, Microsoft seems to have approved delivering SQL 2005 Service Pack 3 by the fourth quarter of 2008.
Yuppeee!!!!!
http://sqlblog.com/blogs/aaron_bertrand/archive/2008/04/15/sql-server-2005-sp3-is-publicly-confirmed.aspx
http://blogs.technet.com/dataplatforminsider/archive/2008/04/15/sql-server-2005-sp3-coming-soon.aspx
Labels:
Service Packs
RAM and procedure cache issue resolved in SQL 2008 (thanks to Adam Machanic) :)
The lack of RAM control and the inefficiencies in the SQL procedure cache were very well documented and escalated by the SQL guru!
Microsoft, introducing Resource Governor in SQL 2008, states that you can control the amount of RAM the procedure cache can acquire by defining resource pools. Details below:
http://sqlblog.com/blogs/adam_machanic/archive/2008/04/15/sql-server-procedure-cache-more-relief-on-the-way.aspx
https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=293188
Labels:
Database Administration,
performance
Friday, April 11, 2008
Career as a DBA
Database administration comes into play in medium and large enterprises and organizations running database server boxes where the business cannot tolerate downtime and data availability is critical.
Such setups involve expensive hardware and an enterprise-class DBMS solution, e.g., MSSQL Enterprise Edition, Oracle Enterprise Edition, etc.
Usually the server boxes include a SAN (Storage Area Network) or NAS (Network Attached Storage) set up as clusters and connected to multiple active/active or active/passive server boxes. The SAN usually connects to the server through a Host Bus Adapter, and the medium could be Fibre Channel or some other fast communication channel.
NAS connects to the server via a switch and offers much more flexibility in placing it physically farther from the servers.
The disk drives in the NAS, SAN, or the server itself are fast SATA or SCSI disks bundled into different levels of hardware RAID for faster reads/writes and fault tolerance.
Many people focus on multiple high-speed processors with not quite adequate L2 cache. As a general rule of thumb, a PIII processor on a board with a larger L2 cache would outperform a P4 processor running with inadequate L2 cache.
A DBA's tasks also involve selecting and configuring RAID to serve different kinds of I/O activity; for example, you need a different RAID level for sequential reads/writes than for non-sequential reads/writes. One also needs to accommodate fault tolerance.
Technically speaking, DBAs are the most prone to waking up at night and staying late at the office over backups, performance, and downtime issues. The business halts if the db server goes down; that makes you more valuable to your organization than a developer (no offence). DBA is a hot and fun job, but you also take the stress home with you, with the fear of being called late at night for assistance :)
A DBA's tasks include:
Disaster Recovery Plans (including backups, restore testing)
Performance Tuning (Includes hardware and Server performance monitoring, SQL optimization)
Development (TSQL or PLSQL, whatever dbms you have)
Security (Audits, user roles and permissions, patterns of transactions etc)
Reporting (Reports server, including automation of reports for management and staff)
In my past experience, databases are more at risk from their own DBAs than from anything else; to err is human :). I don't believe any of the db gurus has never been caught in the disastrous situation of a messed-up database; the quality that separates them from ordinary db guys is how confidently they react, keeping their nerves under control and resolving the problem in the best possible way with minimum downtime.
Many smart and competent techie folks turn to the DARK SIDE: the so-called MANAGEMENT. As it's said, once you get technically experienced and older in your field, your technical skills become secondary to your management skills. Your spoken and written skills, dress, and attitude decide how far you'll go.
Or in other words: the degree of management skills is INVERSELY proportional to that of technical skills. :)
Cheers!
Labels:
Career,
Database Administration
Thursday, April 03, 2008
SQL Machine in a workgroup; unable to access network resources!
For SQL Server to access network resources (e.g., taking backups to network paths) within a domain, we know that the SQL Server and Agent startup account should be a domain user with appropriate rights on the LAN.
However, what if the SQL Server machine is not part of a domain and you need to BCP out a file to your web server, which is on the LAN but both machines are in a workgroup?
Here's a solution:
- Create a shared folder on the destination machine (the web server).
- Map a drive on your SQL machine to the shared folder you created.
- Run your SQL command (e.g., backup or bcp) under the security context of a Windows user account, not a SQL user. The actual problem lies here: using SA won't work; it will give you an error that the network path is not accessible. You have to use a Windows user account in SQL to access network resources when the machine running SQL Server is not part of a DOMAIN.
Note: You can omit step 2 and use a UNC path to access the share directly in the code (e.g., backup database mydb to disk = '\\webserver\shared\mydb.bak'), but it's better to use mapped drives for robust coding; a sketch follows.
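A minimal sketch of steps 2-3 (the drive letter, share, and database names are hypothetical; run while connected as the Windows login):

-- assumes Z: was mapped to \\webserver\shared in step 2
backup database mydb to disk = 'Z:\mydb.bak'
-- or, per the note above, using the UNC path directly:
backup database mydb to disk = '\\webserver\shared\mydb.bak'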
Labels:
sql
Wednesday, April 02, 2008
SQL Server 2008 Reporting Services!
Check out the link below to find out what's behind the blissful and promising SQL Server 2008 Reporting Services! ;)
http://www.dundas.com/Company/Media/PressSQL2008.aspx
Labels:
Business Intelligence,
Reporting Services,
sql