Category: SQL Server

Deploy Database Project using Octopus

Octopus is a deployment tool that uses the NuGet packaging mechanism to package your application and deploy it across multiple environments.

Unfortunately it does not (yet) have native support for Visual Studio Database projects, so I had to add a sort of workaround to my project structure to let Octopus also deploy a “Database Project NuGet package”.

Visual Studio .dacpac

A Visual Studio Database project is capable of generating diff scripts, a full schema deployment script and also a post-deployment script (in case you need to populate the database with some demo data, for example). When you compile a Database project this is the output:

image

As you can see we have two different .dacpac files: one for the master database and one for my own database. A .dacpac file is what is called a “Data-tier Application” and it is used within SQL Server to deploy a database schema.

Another interesting thing is the schema structure: every database project also produces a second output folder with the following structure:

image

And in the obj folder we have an additional output:

image

which contains a Model.xml file. This file can be used to integrate Entity Framework with our database schema. The postdeploy.sql is a custom script that we generate and execute after the database deployment.

Package everything with NuGet and OctoPack

So, what do we need in order to have a proper NuGet package of our database schema? Well, first of all let’s see what we should carry in our package. Usually I create a package with the following structure:

image

The steps to obtain this structure are the following:

1 – Modify the database project to run OctoPack

  <Import 
        Project="$(SolutionDir)\.nuget\NuGet.targets" 
        Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />
  <Import 
        Project="$(SolutionDir)\.octopack\OctoPack.targets" />
</Project>

2 – Provide a .nuspec file with the following structure:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <!-- Your file specifications -->
  </metadata>
  <files>
    <!-- The Database Schema -->
    <file src="\dbo\**\*.sql" 
            target="Content\Schema"/>
    <!-- The deployment script -->
    <file src="\obj\**\*.sql" 
            target="Content\Deploy" />
    <file src="\obj\**\*.xml" 
            target="Content\Deploy" />
    <!-- Your .dacpac location -->
    <file src="..\..\..\..\..\..\bin\**\*.dacpac" 
            target="Content\Deploy" />
  </files>
</package>

And of course your build server must have the RunOctoPack variable enabled.

Install the package using PowerShell

The final step is to make the package “digestible” by Octopus using PowerShell. In our specific case we need a PowerShell script that can deploy the .dacpac package and run the post-deployment script. That’s quite easy.

In order to install a .dacpac with PowerShell we can use these commands:

# Load the DAC framework assembly
Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll"

# Create the DacServices object; it needs a connection string
$d = New-Object Microsoft.SqlServer.Dac.DacServices "server=(local)"

# Load the dacpac from file and deploy it to the target database
$dp = [Microsoft.SqlServer.Dac.DacPackage]::Load($DacPacFile)
$d.Deploy($dp, $DatabaseName, $true)

In my case I set some variables in Octopus in order to be able to dynamically create the database and locate the .dacpac file.

image

The final result is visible in the Octopus deployment console, because I always pipe my PowerShell commands to | Write-Host at the end:

image

Final note: remember that the only way to stop a deployment step in Octopus from PowerShell is to return -1. In my case I wrap the code in a try/catch and return -1 when the deployment has to stop, but you can find a better explanation here.
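Just as a hedged sketch of how that wrapping can look (it reuses the $DacPacFile and $DatabaseName variables shown above; the non-zero exit code is what stops the step):

try
{
    # Same deployment as above, wrapped so a failure stops the Octopus step
    Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll"
    $d = New-Object Microsoft.SqlServer.Dac.DacServices "server=(local)"
    $dp = [Microsoft.SqlServer.Dac.DacPackage]::Load($DacPacFile)
    $d.Deploy($dp, $DatabaseName, $true)
    "Deployment of $DatabaseName completed" | Write-Host
}
catch
{
    "Deployment failed: $_" | Write-Host
    exit -1
}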

Virtualizing SQL Server, some tips.

I just got these considerations about virtualizing SQL Server on VMware from my boss and I would like to share them with you.

General comments on problems encountered and why people do not virtualize SQL Server

  1. Storage (most people just do not have enough spindles to start with, and then when they consolidate they use fewer spindles, not more; an error in design and in understanding of basic I/O principles)
  2. Queue lengths are a problem, so add as many paths to storage as possible.
    Using VMware makes sequential reads look more like random reads, due to the interleaving of traffic across VMs.
  3. Ensure all databases (data, temp, log, etc.) are on separate VMDKs, or preferably RDMs; this improves I/O queue throughput.
  4. Memory (ensure you set min/max reservations per VM to avoid swapping).
  5. One database instance per VM host (allowing you to tweak memory/priorities as required).
  6. Use the latest Intel chips, as they provide the biggest reduction in latencies for VM tasks.

Following 1 through 6 results in linear performance until the CPU saturates at around 90%.

Basically they said VMware is never the bottleneck …

Assuming the rules above are followed you should then consider tweaking the following:

  1. Tweak the O/S first to allow large pages and ensure alignment with storage (gives a +6% improvement).
  2. Turn on SQL priority boost and ensure the correct O/S drivers are installed on the host (gives +14% when used with 1.).
  3. Turn on static transmit coalescing in ESX networking (gives +1%, so they advise: don’t bother).

These are interesting considerations. Honestly, I know there are some “how-tos” and some “best practices” for preparing a SQL Server environment: use separate disks for separate files (.mdf, .ldf), keep system databases separate from user databases on different disks, split big tables across different locations, maintain the indexes and so on… Unfortunately, when you apply this to a virtual environment, everything changes.

I haven’t found good documentation about virtualizing SQL Server yet, but I tried to build a clustered environment with Windows 2008 R2 x64 and SQL Server 2008 x64 SP1, and the quad core I used for my experiments was not enough…

Smile


T-SQL copy and clone an existing table.

Something really useful that I like in Microsoft Access is the ability to select and clone an object with just a menu command.

The options we have in Access are:

image

Structure Only – copies the table structure into a new table, so the indexes, the primary key and so on.

Structure and Data – copies everything into a new table.

Append – copies the data into the existing cloned object.

What can we do with SQL Server?

Of course with SQL Server we cannot simply do that, especially if we are considering automating this process inside a script or, better, inside a job.

The first idea is to use a simple SELECT INTO script, in this way:
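A minimal sketch of the idea (the table names are just placeholders):

SELECT *
INTO dbo.Employee_Copy
FROM dbo.Employee;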

In this way we copy the records and the table structure, but not the indexes or the primary key.

We can try to use the right-click Drop and Create command and then change the destination name of our table in the generated script:

New Picture (9)

Finally, we can use SSIS and create a package able to do that. In this way we can be sure we are going to clone everything, and we can decide whether we also want to copy the data.

The Transfer SQL Server Objects task will do everything we need to copy and paste an object inside SQL Server:

image

We can also use some variables in order to make our task dynamic.

Pretty cool, isn’t it?


Reporting Services 2008 book.

Today I would like to spend a few lines on a great book I purchased on www.Amazon.com. The book is Microsoft SQL Server 2008 Reporting Services by Brian Larson, published by McGraw-Hill. This is the direct link: book on Amazon.

book01

Let’s spend just a few words about it.

  1. First of all, in my opinion the book is really great because it covers every area of Reporting Services; it doesn’t leave anything out, really!
  2. Second is the approach. Brian explains every report step by step, including the designer issues and troubles you can run into during the tutorials. He also explains the SQL part, so if you have to use a GROUP BY, he explains what a GROUP BY is.
  3. Third, the book goes from easy to hard. What does that mean? The first chapter is very easy and, day by day, as you read the book you will produce more and more complex reports.
  4. Finally, the sample data. You can download the complete sample database, which is really comprehensive, from the book’s web site. This is great because when I started to read and follow the tutorials, I spent just a few minutes installing and running the database.

A very good job, a must for your shelf!!

Install TFS 2008 on Windows Server 2008

Last week I decided to buy the license for TFS 2008. After two days and a lot of headaches I decided to follow, step by step, the official guide which is available here: Team Foundation installation guide. To be honest, if you follow the guide step by step, you will end up with a fully working version of TFS. My MVP friend Simon Chiaretta also suggested I use the official guide.

This short guide will show you how to install a stand-alone version, that is, a single-server installation which will contain WSS 3.0, SQL Server 2008 Standard and TFS 2008. So first of all we need three licenses: SQL Server 2008 Standard (the Express edition does not work with TFS), Windows Server 2008 Standard and TFS 2008, plus WSS 3.0.

Hey buddy, my suggestion is: “spend more money and buy an x64 server with a SCSI disk system, otherwise everything will be very slow…”

  1. First of all, install Windows Server 2008
    This is probably the most straightforward step.
    1. I don’t suggest any particular installation here, just a couple of points. Install Windows Server 2008 Standard and configure it as an Application Server and a Web Server.
    2. Install IIS 7.0 and the .NET Framework 3.5 SP1 and verify that it works; you can just point your browser to http://localhost.
    3. Create a dedicated account for the installation; the account must have administrative rights, otherwise you will run into a lot of trouble…
  2. Install SQL Server 2008 Standard
    1. Launch the setup for SQL Server 2008 Standard edition.
    2. You must install the server components, the client components, Reporting Services, Full-Text Search, Analysis Services and, if you will work locally on your machine, Management Studio.
    3. For every service specify at least one account, otherwise after the installation you will not be able to log in to SQL Server. I know it makes no sense, but that’s how it works… Also, use only Windows Authentication.
    4. Remember to reboot the machine after the installation and verify the server is running by connecting with Management Studio. Don’t configure Reporting Services; this step will be done by TFS. Yeah, I tried to configure it as integrated with MOSS and nothing worked properly.
    5. If you receive the PowerShell error during the installation process, you made one of these mistakes: you already have SQL 2005, you are using a CTP version of SQL Server which ships with an old version of PowerShell, you are using the wrong SQL setup, or you are using Express with Advanced Services.
  3. Install WSS 3.0 on Windows Server 2008
    1. If you are here it means your SQL installation is done (cool, it took me three attempts). Now download WSS 3.0 with SP1 from here: WSS 3.0 SP1. Don’t download plain WSS 3.0, it doesn’t work with Windows 2008, no way!!
    2. Run the setup and choose the advanced installation. You have to choose the front-end installation and choose where you want to locate the installation directory.
    3. After the installation is done, choose to build a new server farm, then choose the local server name, leave the database name as it is, and provide the credentials for the TFS administrator user with the syntax domain\user. Check the box for the port, don’t use the suggested port, take note of the port you will use, then choose NTLM authentication.
    4. Check your settings and, after taking note of everything, proceed with the configuration of WSS. It will take a while!
    5. Some command prompt work for WSS 3.0
      1. Now we need to play with the command prompt. First open a prompt and change directory to: Drive:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\bin\
      2. stsadm.exe -o extendvs -exclusivelyusentlm -url http://WSSServerName:80 -ownerlogin Domain\UserName -owneremail “admin@localhost” -sitetemplate sts -description “Default Web Site”
      3. stsadm.exe -o siteowner -url http://WSSServerName:80 -secondarylogin Domain\TFSSETUP
      4. The first account will be the administrator of SharePoint, the second one the admin account for TFS. I use the same account…
  4. Install Team Foundation Server 2008 step by step
    1. First of all, install the TFS server. Choose the database server (the local machine name) and go ahead.
    2. Warning: I received a warning that my SQL Server is not supported, because I need TFS SP1. Integrating TFS SP1 into the DVD is not easy, but you can find the procedure described in the latest .chm.
    3. Configure the accounts for TFS and SQL Server.
    4. Choose the previous WSS address and site (did you take note of the port??)
The next post will be about how to configure TFS 2008, when I have time to write it.

SSIS, write message for interacting with the user.

SSIS, SQL Server Integration Services, is a fully functional development environment. When we are building a package, before registering and releasing it, we need to log, trace or debug the script. This is a MUST if we are using a Script Task, a task which gives you the ability to work directly with C# or VB.NET.

Package Example.

Create a simple SSIS project and insert a Sequence Container, then in the sequence container insert a Script Task and choose the C# language.

script log

Please don’t pay attention to my screenshot; I know the package fails, but I took the screenshot quickly and I’m lazy…
This is just an example, ok?

Now let’s create a couple of global variables in our package and call them VARIABLE01 and VARIABLE02.

Finally, let’s edit our Script Task (right-click on the task, then Edit Script in the properties).

Some C# code.

First of all, in our package class, let’s create a function that returns the value of a variable.
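As a hedged sketch (variable access assumes VARIABLE01 and VARIABLE02 are listed in the task’s ReadOnlyVariables), such a function inside the script class could look like this:

// Sketch: read a package variable by name from inside the Script Task.
private string GetVariableValue(string variableName)
{
    if (Dts.Variables.Contains(variableName))
    {
        return Dts.Variables[variableName].Value.ToString();
    }

    return string.Empty;
}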

And now, let’s write this if block in the main routine:
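Again just a sketch of the idea: compare the two variables created above and write a message through the WriteLog function we will build below.

public void Main()
{
    // Compare the two package variables and tell the user what happened
    if (GetVariableValue("VARIABLE01") == GetVariableValue("VARIABLE02"))
    {
        WriteLog("VARIABLE01 and VARIABLE02 have the same value.");
    }
    else
    {
        WriteLog("VARIABLE01 and VARIABLE02 are different.");
    }

    // Don't forget to assign a result for the task execution
    // (ScriptResults comes from the Script Task code template)
    Dts.TaskResult = (int)ScriptResults.Success;
}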

Now we are ready to see how we can interact with the user in the SSIS package. It’s very important that in the main routine you assign a result for the task execution, don’t forget!!

Write to the Log, to the Output or to a Window.

Well, now we need to build our WriteLog function. We have three ways to do that.

The first one is to write directly to the Log:
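A minimal sketch; Dts.Log sends the message to the log providers enabled for the package:

// Sketch: write the message to the package log providers
private void WriteLog(string message)
{
    Dts.Log(message, 0, null);
}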

The second one is to write to the Output, so when you run the package in the Business Intelligence IDE you will see the message in the output console:
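Something along these lines, raising an information event:

// Sketch: raise an information event, shown in the output/progress window
private void WriteLog(string message)
{
    bool fireAgain = true;
    Dts.Events.FireInformation(0, "Script Task", message, string.Empty, 0, ref fireAgain);
}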

Or you can send a MessageBox:
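For example (only useful when the package runs interactively, not on a server):

// Sketch: show the message in a window
private void WriteLog(string message)
{
    System.Windows.Forms.MessageBox.Show(message);
}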

SQL Server, how to implement FULL TEXT search.

One of the most useful features that I like in SQL Server is the FULL TEXT search. When you have one or more tables with string fields, like VARCHAR or NVARCHAR, you know that running a LIKE clause is CPU consuming.

In order to run this example you must first activate the FULL TEXT search capability in your SQL engine. To do that:

  1. Open the SQL Server Surface Area Configuration tool.
  2. Open the Services and Connections page.
  3. Configure the Full-Text Search service to be enabled.

Now you must run this stored procedure in Management Studio in order to activate the service for the database:
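For reference, the call is along these lines (the database name is just a placeholder; run it in the database you want to enable):

USE MyDatabase;  -- hypothetical database name
EXEC sp_fulltext_database @action = 'enable';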


The last step is to activate the service for the tables and fields you want to use.

  1. Right-click on the table and choose: Full-Text Index >> Define Full-Text Index…
  2. Define the columns that will take part in the index, the schedule and the filegroup (consider a separate filegroup for big data).

Now we can run our examples. We will use the two predicates for implementing a FULL TEXT search: CONTAINS and FREETEXT.

Please refer to the previous post for the table schema.

Using the CONTAINS clause.

The CONTAINS predicate searches for exact word matches and word-prefix matches. Let’s search for all the addresses containing the word Street.
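A minimal sketch, assuming an Address table with an AddressLine column covered by the full-text index:

SELECT AddressID, AddressLine
FROM dbo.Address
WHERE CONTAINS(AddressLine, 'Street');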

And this is our result:

In this case, with the CONTAINS clause, we get all the rows where the word Street appears, but not the rows where Street is inside another word like MainStreet. To do that we need this syntax:

Note that you need double quotes and a * in front of the word to find everything that ends with street.

Using the FREETEXT clause to get more results.

If you need to match all the results containing the word street, plus similar results, you need to use the FREETEXT clause.

With the FREETEXT clause, SQL Server splits the search string into individual words and generates various forms of each word before matching.
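A sketch with the same assumed table and column:

SELECT AddressID, AddressLine
FROM dbo.Address
WHERE FREETEXT(AddressLine, 'street');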


The result here will be:

SQL Server, write correlated subqueries.

A subquery is a query that is nested inside another query. When you work with T-SQL you have different ways to use subqueries, and every method has different performance characteristics.

Our example is a database with two tables; take a look at the image below:

SubQueries

In this case we have a main table, Employee, with a one-to-many relationship to the Address table.

WHERE value IN (non-correlated example).

The first query I want to show you is:
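A sketch of this first approach, assuming the Address table carries an EmployeeId foreign key (column names are placeholders):

SELECT e.EmployeeId, e.FirstName, e.LastName
FROM dbo.Employee AS e
WHERE e.EmployeeId IN (SELECT a.EmployeeId
                       FROM dbo.Address AS a);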


In this case the inner SELECT collects all the employee ids in the Address table and returns a list of ids used by the outer query to show the employees found. We will not see the employees without an address. This is a non-correlated example, because SQL Server first searches the Address table, without knowing anything about the Employee table, and then matches the list of ids against the Employee table.

WHERE EXISTS (correlated example).

The second query will use the EXISTS clause:
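A sketch of the correlated version, with the same assumed schema:

SELECT e.EmployeeId, e.FirstName, e.LastName
FROM dbo.Employee AS e
WHERE EXISTS (SELECT 1
              FROM dbo.Address AS a
              WHERE a.EmployeeId = e.EmployeeId);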


In this case the query uses a correlated value from the outer query: the inner query matches the ids against the id of the current row in the outer query.

The performance of these queries is totally different if you are working with thousands of rows.

To understand which one is better, you should run both queries with an execution plan and try to build the right clustered index for the better-performing query.

SQL SSIS, insert only new rows.

The SSIS workflow is really amazing when you have to add some automation to your data flow processes. But sometimes it’s better to make some checks before starting a really dangerous BULK operation.

One of the most annoying things I have to do is insert new records when I receive updates from the web (web services, XML data). But when I do this I’m always afraid of duplicated rows.

How to check if a row exists in SSIS?

An SSIS package can contain a component called Lookup. Very intuitive name. So if you need to insert only the new rows in a table, you can build a workflow like this:

image

So we have three components:

  1. An OLE DB source that reads data from the source table.
  2. A Lookup component that checks, for every row, whether it already exists in the destination table. If it doesn’t, it redirects the row to the error output.
  3. A SQL destination that receives only the new rows, i.e. the rows that are in error in the Lookup component.

Easy and automatic.
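For comparison, the same “insert only the new rows” check can be sketched in plain T-SQL (all the table and column names are placeholders):

INSERT INTO dbo.DestinationTable (Id, Name)
SELECT s.Id, s.Name
FROM dbo.SourceTable AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.DestinationTable AS d
                  WHERE d.Id = s.Id);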

Write intelligent queries with if exists … drop.

Usually when we build a query we start the .sql file, or the command in the query editor, with a simple statement … CREATE PROCEDURE … bla bla bla.

Ok, but if I’m building a database from scratch it is very probable that my procedure will be modified many times. So every time we have to run the DROP PROCEDURE or ALTER PROCEDURE command.

This is fine while you are building your objects. But if you want to store some scripts on the network as a backup procedure, it is better to make these scripts ATOMIC and AUTOMATIC.

To accomplish this, we can use a really useful pattern in SQL Server: IF the [OBJECT] EXISTS, DROP IT!
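A minimal sketch for a stored procedure (the object name is just a placeholder):

IF EXISTS (SELECT *
           FROM sys.objects
           WHERE object_id = OBJECT_ID(N'dbo.MyProcedure')
             AND type = 'P')
    DROP PROCEDURE dbo.MyProcedure;
GO

CREATE PROCEDURE dbo.MyProcedure
AS
BEGIN
    SELECT 1; -- procedure body goes here
END
GO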

So now you can run the script as many times as you want.