A question we often get asked is how to deploy specific variations of a database to different customers, how to deploy different static configuration data to each customer, and how to version control, test and deploy all of this per customer in an automated way.
In the post below I'm going to run through an example scenario showing how to achieve customer-specific database deployments using SQL Source Control, SQL CI, the Red Gate TeamCity plugin and Deployment Manager.
- I have two customers, who we'll call A and B
- I want to deploy a database and web application to both customers
- For each customer I need to deploy some customised stored procedures
- For each customer I need to load different static configuration data
- I want to keep the web application code identical for both customers, and manage per customer variations through configuration
Source Control and Environments
I’m source controlling the database in git. I’m using SQL Source Control to version the database structure, and will use the static data feature to version control the static data per customer. I’m version controlling the web application in the same git repository.
Each customer has a Continuous Integration environment and a Production environment. After pushing code to the master git branch, all changes will be deployed to the CI environments and tests executed per customer.
I’m going to use schemas as the mechanism to separate out the shared ‘base’ parts of the database and the per customer procedures and data. I’ll use the dbo schema to contain all the shared elements, and then one schema per customer for other elements.
- To Customer A I will deploy schemas: ‘dbo’ and ‘a’
- To Customer B I will deploy schemas: ‘dbo’ and ‘b’
SQL Source Control Filters
Filters are the mechanism that will allow us to separate out the required schemas. I’m going to check two additional filters into the git repository in order to split the code base at the SQL CI level. I want to be able to run my tSQLt tests during the CI process, so I’ll include those schemas in both databases too.
- Filter Customer A: include the schemas 'dbo', 'tSQLt', 'SQLCop' and 'a'
- Filter Customer B: include the schemas 'dbo', 'tSQLt', 'SQLCop' and 'b'
Note: I’m not going to save these filters as the standard Filters.scpf file in the SQL Source Control Database Scripts folder. I’m going to save them outside that folder and use them during the CI process. I created these by hand, but could have used the filter editor in SQL Compare or SQL Source Control and saved these files.
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<NamedFilter version="1" type="SQLCompareFilter">
  <FilterName>Include A</FilterName>
  <Filter version="1" type="DifferenceFilter">
    <FilterCaseSensitive>False</FilterCaseSensitive>
    <Filters version="1">
      <None version="1">
        <Include>True</Include>
        <Expression>(@SCHEMA = 'dbo') OR (@SCHEMA = 'tSQLt') OR (@SCHEMA = 'SQLCop') OR (@SCHEMA = 'a')</Expression>
      </None>
      <!-- ....Rest of Filter File Truncated for Example.... -->
    </Filters>
  </Filter>
</NamedFilter>
CI Server Setup
The TeamCity CI server will trigger the following steps on a commit. I'm adapting the process described in the Continuous Integration Using Red Gate Tools Whitepaper.
- Build the web application
- I won’t go into details here, but the application is built using MSBuild and a NuGet package is created
- Build two databases, one per customer
- Duplicate my database script folders
- Create a temp folder for each customer
- Copy the database scripts folder into each temp folder
- Copy the appropriate filter folder to each temp folder
- Prepare the customer specific databases (repeated once per customer)
- Empty the database scripts folder
- Run SQL Compare to move the objects that match the customer specific filter to the database scripts folder
- Run SQL Data Compare to move any static data required by the customer specific database into the database scripts folder
- Create a NuGet package of this customer specific database scripts folder using the SQL CI TeamCity plugin
- Deploy the web application and database to the CI environments using Deployment Manager
- Create a release containing both customer specific databases and the web application NuGet packages
- Deploy release to Customer A CI environment
- Deploy release to Customer B CI environment
Note: The critical point is that by using the filters with SQL Compare and SQL Data Compare to build the database scripts directories, each package contains only the database objects and static data that its customer requires.
Duplicate Database Script Folders Step
mkdir tempA
mkdir tempB
xcopy SimpleTalk\Database\ScriptsFolder tempA /e /s /h
xcopy SimpleTalk\Database\ScriptsFolder tempB /e /s /h
copy /Y SimpleTalk\Database\include_a.scpf tempA\Filter.scpf
copy /Y SimpleTalk\Database\include_b.scpf tempB\Filter.scpf
Prepare Customer Specific Database Step (example for Customer A)
REM Empty the database scripts folder and copy in the customer-specific filter
rmdir SimpleTalk\Database\ScriptsFolder /S /Q
mkdir SimpleTalk\Database\ScriptsFolder
copy /Y tempA\Filter.scpf SimpleTalk\Database\ScriptsFolder\Filter.scpf
REM Use SQL Compare to move the objects that match the customer-specific filter into the scripts folder
SQLCompare.exe /filter=tempA\Filter.scpf /include="Identical" /include="StaticData" /scripts1=tempA /scripts2=SimpleTalk\Database\ScriptsFolder /Options:DecryptPost2KEncryptedObjects,IgnoreFillFactor,IgnoreWhiteSpace,IgnoreFileGroups,IgnoreUserProperties,IgnoreWithElementOrder /synchronize
REM Use SQL Data Compare to move the static data the customer-specific database requires
SQLDataCompare.exe /scripts1=tempA /scripts2=SimpleTalk\Database\ScriptsFolder /synchronize
REM Clean up the temp folder
rmdir tempA /S /Q
The packaging step uses the standard Red Gate SQL Server Build Runner. The only specific configuration is:
- Additional SQL Compare command line switches (see the note below)
- In the Deployment Manager section, setting a different database package name for each customer specific database, e.g. 'database_a' or 'database_b'
Note: The additional command line options are to make sure that the relevant filters are applied during the CI testing steps, and to not automatically add dependencies to the build. This is specifically to enable the static data scenario described below.
Deploy the Web Application and Databases Step
DeploymentManager.exe --create-release --server=http://localhost:81 --project="Simple Talk" --apiKey=XXXXXX --version=1.%build.number% --force
DeploymentManager.exe --deploy-release --server=http://localhost:81 --project="Simple Talk" --deployto=CI-A --apiKey=XXXXXX --version=1.%build.number% --force
DeploymentManager.exe --deploy-release --server=http://localhost:81 --project="Simple Talk" --deployto=CI-B --apiKey=XXXXXX --version=1.%build.number% --force
Deployment Manager Setup
Setup in Deployment Manager is very simple. I’ve set up a single project which has four environments. The project has four deployment steps.
- Deploy the Web Application package to Customer A Environments (CI-A, Production-A)
- Deploy Database A package to Customer A Environments (CI-A, Production-A)
- Deploy the Web Application package to Customer B Environments (CI-B, Production-B)
- Deploy Database B package to Customer B Environments (CI-B, Production-B)
Note: If you’re using the same host machine for both CI environments you can use variables in Deployment Manager to make sure that the database and web applications are deployed with different names for each customer.
Using Views and Table Value Functions (TVF) as a Proxy to Customer Specific Static Data
What if I need the web application to refer to the same table in the shared base 'dbo' schema, but see different static configuration data for each customer? Here is one approach.
Create two tables, one in each customer schema, containing the static data required for that customer. Use the Static Data feature of SQL Source Control to version this data.
CREATE TABLE [a].[RSSFeeds](
    [RSSFeedID] [int] IDENTITY(1,1) NOT NULL,
    [FeedName] [varchar](max) NULL
)

CREATE TABLE [b].[RSSFeeds](
    [RSSFeedID] [int] IDENTITY(1,1) NOT NULL,
    [FeedName] [varchar](max) NULL
)
Create a view in the shared schema that calls a TVF:
CREATE VIEW [dbo].[RSSFeeds]
AS
SELECT * FROM [dbo].[RSSFeedsTVF]()
GO
This TVF then looks to see which table exists in the information schema and then returns the contents of that table:
CREATE FUNCTION [dbo].[RSSFeedsTVF]()
RETURNS @RSSFeedsTemp TABLE
(
    [RSSFeedID] [int] NOT NULL,
    [FeedName] [varchar](max) NULL
)
AS
BEGIN
    IF EXISTS(SELECT 1 FROM Information_Schema.Tables WHERE table_schema = 'b' AND TABLE_NAME = 'RSSFeeds')
        INSERT @RSSFeedsTemp SELECT * FROM [b].[RSSFeeds]
    ELSE
        INSERT @RSSFeedsTemp SELECT * FROM [a].[RSSFeeds]
    RETURN
END
Note: Creating the view is only required if your application needs to use the table/view format dbo.RSSFeeds rather than the function format dbo.RSSFeedsTVF(). Of course it's entirely possible to use configuration in the application instead to switch which schema you are looking at; however, in this scenario we're specifically not modifying our application code. For large quantities of static data this approach may not be appropriate; it depends on how this configuration data is then used within your application.
A couple of weeks ago I posted about integrating Deployment Manager’s RgPublish command line with TFS. These scripts have now been extended to support calling the DeploymentManager command line to trigger deployments automatically after a build. This is great for deploying to a shared development or test environment such that the whole team has somewhere to go to view the latest version of your software.
These changes are hot off the press, so do create issues or pull-requests as helpful, and I’ll do my best to review and integrate them.
Version numbering of multiple components and the release can get tricky when using TFS. Release version numbers need to be unique. Let's say you have a simple case of a web app and database:
- The web app builds and creates a package with a new version number.
- The database builds and produces a package with the next version number. (Your latest web app package still carries the earlier number, so the two packages that belong in one release no longer share a version.)
There are two possible solutions to this:
- Have a separate build for the release creation and deployment. Trigger this on successful build of either web app or database. The version number sequence is then always incremented. This is the approach we use on our demos using TeamCity, and is the recommended approach. A drawback is that the version number of the release will not match exactly the build number of the web app. But you could easily ensure the major version number matches.
- Set up a dependency for the web app on the database so that whenever the database component builds, a web app build follows. Then the auto-deployment can be triggered from the web app only. A drawback is that the web app likely doesn't have a strong dependency on the database, and doesn't need to be rebuilt. This means publishing duplicate packages for the web app. It also doesn't scale well with multiple dependencies, as there would be even more duplication.
But it doesn't look like TFS supports either successful-build triggers or build dependencies out of the box. I'd be interested to learn what approach you take when implementing this, or any approaches you're taking to solve similar problems.
The future of deployment is not going to be so bright if we focus only on application code and leave the database side of things to fall behind. It is true that changes to application code happen more frequently but you will also frequently need to deploy changes to your database schema and reference data to keep delivering customer value.
At these points it is vital that you don’t mess up: arguably the data behind your application is your most important asset and very few apps will even work at all if the DB is borked.
At Red Gate we’re constantly coming up with ideas and doing work to make working with databases and delivering changes to them ingeniously simple. As part of our aspiration to make delivery of database changes easier and easier we’ve recently done – and shipped – two very important bits of work.
Databases are First Class Citizens
It's always been possible to deploy database packages from Deployment Manager, but over the last couple of weeks we've put in some grunt work and re-architected bits of DM under the hood. The net effect of this work is the following three things:
- It is now possible to add database servers directly to environments which also means you don’t need DM agents to make database deployments
- Creating database deployment steps is now significantly easier. Check out our announcement blog post for all the details.
- The internals of DM now recognise database servers as first class objects. This will not mean much to the casual user, and indeed the UI doesn’t look much different. However it means that DM now has a better foundation on which to build further improvements that make database delivery and deployment better and better – and we intend to be doing a lot in that area over the coming months.
Link to Source Control
For a few years now Red Gate has made it possible for SSMS users to easily version their databases using SQL Source Control. Now, to make it easier to deploy database packages, we've shipped some functionality that lets you, directly from SSMS, package any revision of your database for deployment. Check out Alex's post for the detail of how this feature works.
These may seem perhaps like two small(ish) and independent features. However we see them as taking us further down the path of instrumenting what’s needed to make the process of automating and simplifying database delivery ever more achievable.
* The starter edition of Deployment Manager is completely free for up to 5 projects and deployment targets.
In this post I’ll show you how easy it is to use Deployment Manager to deploy database changes from your source control system.
In a recent post Justin reminded us how important it is to source control everything. In fact it was his number one tip for Achieving Continuous Delivery. Like it or not “everything” includes the database. Source controlling databases used to be hard, but luckily it’s no longer a problem thanks to Red Gate’s SQL Source Control tool.
Over the past few years, we've been loud proponents of the view that, like application releases, database releases should be based on source control. Simply comparing your development and production environments and syncing with SQL Compare isn't enough. I won't delve into the intricate details of why this isn't best practice here, but Justin is planning an article on the topic (so make sure to subscribe to our RSS feed to catch it).
Deploying from source control is not a problem if you are doing continuous integration (CI) as your packages are already based on source control, but what if you aren’t? Pretty much everyone wants to use CI, but not all of us are yet in a situation where it is possible. If you aren’t yet using a CI server you can use the Deployment Manager SSMS add-in to create database packages and publish them to Deployment Manager. However, until now this approach only allowed you to create a package based on the current state of a database, not your source code.
This wasn’t good enough for us. So we integrated SQL Source Control into the Deployment Manager SSMS add-in. You now have the option to create a package based on any historical version of your database.
How to deploy database changes from source control with Deployment Manager:
- Launch the Deployment Manager SSMS add-in by right clicking on your database in the object explorer
- Select SQL Source Control as the database source on the second page
- You’ll then be asked which revision you’d like to use to create a package
- Once you’ve published your database package to Deployment Manager you can then create a release and deploy it to your chosen environment
You'll find more detailed instructions on our documentation site.
At Red Gate we believe in the principle of ‘dogfooding’. If you are unfamiliar with this term, it’s not as bad as it sounds ;o) Dogfooding is a slang term where a company uses its own products because they believe in them, and also as a ‘canary’ test. The ‘canary’ in canary test originates from canaries that were used in coal mining to alert miners when toxic gases reached dangerous levels. For the Deployment Manager team, the Red Gate DevOps team are our canaries.
Once we feel a release is ready for the public and has passed through a whole host of automated tests and a select set of manual tests, we upgrade the DevOps team's Deployment Manager server to the new version. We also update all their agents for them. However, our DevOps team have their builds automated using TeamCity, where they have RgPublish and Deployment Manager build steps. So even though we update the Deployment Manager server for them, we do not update their build server to have the latest versions of RgPublish.exe or DeploymentManager.exe.
Over the last few weeks, we have been amending the build configurations in TeamCity for DevOps projects to make use of the new Tools page in Deployment Manager. This means we are able to download the latest versions of DeploymentManager.exe and RgPublish.exe for every build. We have done this by adding a new build step to the projects within TeamCity. This build step uses the PowerShell runner type and runs the following source code:
$client = New-Object System.Net.Webclient
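Only the first line of the script was included above. As a rough, minimal sketch of what the rest of such a download step could look like, assuming the server URL used elsewhere in this post and purely illustrative download paths (take the real links from your own Tools page):

# Minimal illustrative sketch: download the latest command line tools before each build.
# The server URL and the /tools/... paths below are assumptions; use the links shown on
# your Deployment Manager server's Tools page.
$client = New-Object System.Net.WebClient
$dmServer = "http://localhost:81"
$client.DownloadFile("$dmServer/tools/DeploymentManager.exe", "$PWD\DeploymentManager.exe")
$client.DownloadFile("$dmServer/tools/RgPublish.exe", "$PWD\RgPublish.exe")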
This means that they no longer require a version of these executables in source control, nor do they need to remember to manually update them each time their Deployment Manager server is updated.
Turns out that calling RgPublish from Team Foundation Server (or Service) was harder than expected.
My first approach was to call it from the existing default build template. It looked like the Advanced > Post-build script path property in the build definition would be able to do what I wanted, but there were a few problems:
- StdOut doesn’t seem to make it into the build log. (I hope this was just my machine, or an issue with the version of TFS 2013 Preview I’m running.)
- Build numbers cannot be passed in this way.
My second approach was to use MSBuild to call the command line. Phil Van Houten recently created some scripts for this, and was kind enough to share them on GitHub.
My third approach was to use a custom build template. While difficult to learn, I found the build template to be the most flexible way to control the build process. It means you can provide fields on the build definition with descriptions and default values.
The templates are based on the TFS default templates, so have all the standard build and test functionality you would expect. There are versions available for both TFS 2010 and 2012. Additionally they have a Red Gate category at the end, with all of the properties you need to provide to publish packages to Deployment Manager. The fields are flexible enough that it will work with any kind of project. For example websites can set the RgPublish source as the folder containing the published website. Web applications can set it to the project file. Databases will soon work in a similar way as well.
The custom templates are available on GitHub.
The script modifies the build number format to be just the BuildID. The BuildID is an integer that keeps increasing, unlike the build revision number, which resets each day. The version number of the published package is the value used for Publish version number followed by the build number; for example, a Publish version number of 1.0 and a BuildID of 1249 produce a package version of 1.0.1249. This should be suitable for most cases, but you can always tweak the behaviour as needed.
To integrate the custom template into your build process, you can either:
- Use the templates directly. This is easiest. It’s only possible if you are currently using the default build template, as you’ll otherwise lose any customisations you’ve made.
- Merge the changes into your build template. This is a little bit tricky. You need to make sure the right pieces of XML go in the right places. It’s worth it though, if you’re reusing a custom build template across a number of different projects. The diff is available via the GitHub repository readme.
You’ll need to make sure the RgPublish.exe command line is available to the build agent in some way. You could either copy it to your build agent, or commit it to TFS. You can download the latest version of this command line from the Tools menu in your Deployment Manager installation.
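As a minimal illustration of the "copy it to your build agent" option (the paths below are made up for the example), a small PowerShell step run on the agent would be enough; if you commit the exe to TFS instead, it will simply arrive with the rest of the sources:

# Illustrative only: copy RgPublish.exe from a shared tools location onto the build agent.
# Both paths are assumptions; point them at wherever you keep the downloaded command line.
Copy-Item -Path '\\fileshare\BuildTools\RgPublish.exe' -Destination 'C:\BuildTools\RgPublish.exe' -Force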
Ideally TFS would have the ability to compose build templates in some fashion. Then we would be able to provide Deployment Manager integration in a drop-in fashion, allowing it to be easily updated too. Activities are close – the logic for calling RgPublish and handling errors could be wrapped into a single custom activity assembly. But template customisations would still be needed to expose properties in a friendly way.
Update: This has now been extended to support DeploymentManager.exe for automatic release creation and deployment. We plan to extend our sqlCI functionality to work in this way too.
I hope this provides a helpful starting point when integrating Deployment Manager with TFS. Please let us know if it is helpful to you, and tell us about problems you encounter or ideas you have for making it easier.
We’re really excited to announce that we’ve just shipped a new version of Deployment Manager with “1st class SQL Server database support”.
What does 1st class SQL Server database support mean?
Basically, it means we’ve made it incredibly easy to add SQL Server instances to Deployment Manager. Now SQL Servers can be added to environments instead of a generic target machine. You can specify connection information there, which will be used by your deployment projects:
You don’t need to install agent software on a database machine to do database deployments on it. Instead, database deployments are run from the Deployment Manager Server itself (providing it has access to the databases you want to update).
We've added the concept of a Database Step to projects. When adding a Database Step, you'll be prompted for meaningful database deployment options, removing the requirement to add project variables until you want to make use of environmental variations (such as varying the database name between your Test and Staging environments):
We'll be writing more about 1st class database deployments in the next week, but in the meantime, why not download the latest version of Deployment Manager and see how easy it is to automate your database deployments.
“Every software development team should have a fully automated deployment process.” That’s according to pretty much everyone I meet at conferences and events. It’s not even a debate. It’s a declaration.
In actual fact there are only a small percentage of software development teams who have a ‘one-click’, totally hands-off, fully automated deployment process. Many teams have a partially manual deployment process and most have an entirely manual process.
Why aren’t you doing it?
If automating the software deployment process is such a 'no-brainer', why aren't more people doing it? My theory is that development teams don't do it because they perceive the overhead of creating, setting up, configuring and maintaining an automated deployment mechanism as not being worth the short-term benefits…
“Setting-up automated deployment will take forever! It’ll steal time from developing the actual software my team has been assembled to create!”
“Our deployment process has so many moving parts, how do I even begin to automate it?”
“It’ll just drain energy from our developers – who would rather be solving business problems and not writing scripts to copy files across a network.”
You’ve probably heard colleagues (or yourself) say something along these lines. This perceived overhead can become quite a sinister obstacle. It may seem so large and insurmountable, that the development team don’t really know where to start when tackling it.
One way of overcoming this is being very clear on what the benefits of doing so are. So, without further ado, here are the 5 big benefits of deployment automation that we’ve seen in our team since automating our processes.
The 5 Big Benefits
#1: Deployments become much less error-prone and much more repeatable
Manual deployments are error-prone because they involve humans doing stuff, and so are governed by Murphy's Law: if anything can go wrong, it will go wrong. Important steps in a release can be accidentally missed, faults that occur during a release may not be spotted, incorrect versions of the software can be shipped, and broken software ends up going live. If you are lucky, you'll be able to recover from this quickly. If you are unlucky, well… it's pretty embarrassing at best.
Automated deployments don’t suffer from variability. Once configured, the process is set, and will be the same every time a release is initiated. If it works the first time you deploy your software into a given environment, it’ll work the 100th time.
#2: Anyone in the team can deploy software
With an automated deployment process, the knowledge of how to release your software is captured in the system, not in an individual’s brain.
Performing manual or partially-automated deployments is often the responsibility of a small subset of people in an organisation. In fact, it's not uncommon for this duty to fall to a single person in a given project team. If that person is off ill or otherwise unavailable at short notice, releasing can become a nightmare. Now, anyone who has access to the "Deploy" button can initiate a release.
#3: Engineers spend their time developing software
Performing and validating a manual deployment process is often a time-consuming, thankless task. This job can fall to developers and testers in a development team who would otherwise be spending their time producing the next set of awesome features and improvements to the software.
The time taken to initiate a fully automated deployment is counted in seconds. Validation of those deployments happens behind the scenes, and team members may only need to spend further time on a deployment if something has actually gone wrong. As a result, the team get to spend more time doing what they enjoy and have been assembled to do: create great software.
#4: Deploying to somewhere new is not a headache
Automated deployments are not only repeatable, but they are configurable too. Although the underlying release process is permanent, the target environments and machines can be changed easily.
This means that if software needs to be deployed to a new test environment or if a new client installation needs to be created, the overhead of deploying to that additional target is negligible. It is simply a case of configuring your existing set-up and relying on your tried and tested release automation to get the job done.
#5: You can release more frequently
A single deployment performed by an automated deployment mechanism has a low overhead. A release process with a low overhead is one that can be repeated frequently. As I’ve discussed on this blog in the past, frequent releases are desirable for many reasons, but the crux of the matter is that frequent releases promote truly agile software development.
Teams that release frequently can deliver valuable features to their users more often and in incremental steps. In doing so they can gather continuous feedback from these users on the software they are creating and adapt their approach as a result. This feedback can be the difference between a product delighting its target audience or missing them completely.
Debunking the “overhead” myth
These benefits sound pretty desirable, but remember that sinister set-up overhead that is discouraging many teams from automating deployments? That overhead is really just TIME. More accurately, it is your estimation of the amount of time required to create an automated deployment mechanism.
The good news is that this overhead isn't anywhere near as big as you fear.
You do not need to write your own automated deployment systems. You don’t need to pay for a team of consultants to come into your business, agree some requirements and deliver a behemoth system designed to get your product out the door.
There are a number of good automated software deployment tools out there – and yes, Deployment Manager is one of those – that can be tailored to your needs, integrated into your existing infrastructure and will reduce the overhead of setting up an automated deployment process to an hour or two. Leaving your team with a ‘one-click’, totally hands-off, fully automated deployment process that is, apparently, a no-brainer.
How to make progress
Here’s how you can take your first steps towards automating your deployments:
- Download Deployment Manager and work through our Getting Started Guide
- If you need some help, contact Alex and he’ll take you through setting up Deployment Manager
- Find out how to start making progress towards Continuous Delivery by reading our 5 Tips for Achieving Continuous Delivery article
First things first: I am not a developer; I am a tech sales guy. I help customers get started with continuous integration (CI) and deployment but up until last week I hadn’t used Jenkins. However I got it working in less than an hour and documented the process. So here’s a step by step tutorial on how to do it yourself.
The point is that CI shortens your feedback loop so that changes that may break the build are caught as soon as they are committed to source control. Additionally, it is very easy to automate unit or integration tests, but that is not what I am discussing now. While CI started with application code, you should apply the same principles to databases by automating your standard deployment tool to deploy each source control commit to a CI database.
In this walk-through I will explain how to get the acknowledged benefits of CI for your databases.
Before you start, here’s what you need:
In my case I do everything locally on my own machine but you can set things up as you like. In either case, these are the tools and components you’ll need to have in place:
- A SQL Server that you can connect to.
- SQL Source Control (Red Gate tool) to connect your database to a source control repository. I’m using Tortoise SVN which is open source. (Database source control is a prerequisite for database CI. If you don’t source control your DB already you should sort that out first. If you work in SSMS, SQL Source Control will make versioning your database trivial.)
- A Deployment Manager server and agent. (You can get it here and the Starter Edition is completely free.)
- The SQL Automation Pack for the Red Gate command line tools. (You can use the free 28-day trial to follow this tutorial.)
- And, finally, the latest build of Jenkins CI (open source) which you can get from the Jenkins website.
This post assumes that you already have SQL Source Control and Deployment Manager set up and that you broadly understand how they work. If you have these bits and pieces installed and in place you’re ready to follow in my footsteps!
Step 1: Create & configure a new job in Jenkins CI
In Jenkins click New Job (top left) and give it the name “1. Database CI and Publish DB Package to DM”, and select the radio button to Build a free-style software project.
Add the root of your source control repository where it says “Repository URL”
Finally set up a build trigger. The easiest way is to set it up to poll your SVN repo once a minute (use “* * * * *”). (You may want a push trigger to improve performance but this is a great way to get started.)
Step 2: Copy scripts from the SQL Automation Pack
Fire up the SQL Automation Pack and click the Open Folder button under the Build Scripts section.
Copy the contents of your Build Scripts folder into the SVN repository containing your code and commit everything. After this step my folder now looks like this:
Step 3: Configure variables in “sqlCI.targets”
Open the “sqlCI.targets” file in a text editor so that you can configure variables to get Jenkins to update a CI database based on the code you just committed to source control and, if successful, generate a package for Deployment Manager. (Note: you shouldn’t need to change anything after the “<!– You shouldn’t need to change anything after here. –>” marker.)
So set the following variables in this file:
- Database folder path
This is the relative path from whatever you set up as your repository in Jenkins earlier. In my case, the full path that I use for SQL Source Control is: “file:///C:/Users/alex.yates/Desktop/Repositories/DemoRepositories/WidgetShopWithJenkins/trunk/DB/Create Scripts” and I have already told Jenkins that my source control location is: “file:///C:/Users/alex.yates/Desktop/Repositories/DemoRepositories/WidgetShopWithJenkins/trunk”. So for this variable I entered just the extra bit: “DB\Create Scripts” which will probably work for you too.
- Database server
The SQL Server instance you’ll be connecting to.
- Target database name
The database you want to deploy to on the above server
- Either Windows Authentication or SQL Username and SQL Password
I tend to use the second option as it's easier to set up if you're just testing
- NuGet Package Name
This is the name Deployment Manager will use whenever it refers to the package for your database.
- Build number
The correct syntax for the build number variable in Jenkins is BUILD_NUMBER. So I used the following convention for my package versions: 1.$(BUILD_NUMBER). This way the version number of the package will reference your build number and increment each time the build is run.
- Package Repository URL
This can be found in Deployment Manager in the Settings tab under Package feeds as shown below.
- Deployment Manager URL
The URL of your Deployment Manager server.
- Deployment Manager API key
This can be found by clicking your log-in name in the top right corner of Deployment Manager.
- Automation Licence Serial Number
This is the license key for your SQL Automation Pack if you have one. If not, you can use the 28-day free trial to get started.
You’re done with this bit! Save the file and commit your changes to source control.
Step 4: Tell Jenkins to run your build
Download the Jenkins MSBuild Plugin. You can find it by going to Jenkins > Manage Plug-ins and scrolling down the list of available plug-ins as shown below:
You also need to tell Jenkins where your MSBuild.exe files are located by going to Jenkins > Manage Jenkins > Configure System and adding your MSBuild installations as shown below:
Now select the “Build a Visual Studio Project or solution using MSBuild” build step and point it at your sqlCI.proj file (which references the sqlCI.targets file).
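Behind the scenes this build step simply invokes MSBuild against that project file. If you want to sanity-check the configuration outside Jenkins, running it by hand looks roughly like the sketch below; the MSBuild and project paths are assumptions, so adjust them to your own machine and repository layout:

# Rough equivalent of the Jenkins MSBuild build step, run from a PowerShell prompt.
# Both paths are assumptions; point them at your MSBuild.exe and your checked-out sqlCI.proj.
$msbuild = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"
$project = "C:\Jenkins\workspace\WidgetShopWithJenkins\DB\Create Scripts\sqlCI.proj"
& $msbuild $project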
And that’s that.
Now every time you commit a database change with SQL Source Control, Jenkins will (within a minute) run the sqlCI.proj file which will run the SQL Compare command line to update your CI database. This will fail if your database is in an inconsistent state but if it is successful Jenkins will generate a package for you and publish it to Deployment Manager. This means that all your database packages are generated automatically, they’ve been through at least some automated tests to ensure that the database is deployable, and all your packages can be traced back to source control.
That’s database CI. Now for the final lap…
Step 5: Setting up automated deployment
The really nice bit is what comes next. By adding a separate job with a build step to “Execute a Windows batch command” using the following syntax – you can create a release in Deployment Manager using your latest packages and deploy it to a LATEST environment.
While similar, the CI job and the Automated Deployment jobs serve different purposes. Remember: CI is for testing by computers; LATEST is for testing by humans and will contain your entire stack (not just the database). Also, if you automate a deployment to LATEST, promoting your entire stack to STAGING or PRODUCTION is staggeringly simple. So here’s what to do:
Create a second job using the Execute a Windows batch command option in Jenkins and call it "2. Create a release in DM and deploy to LATEST". For Source Code Management select "None", then use the following command line syntax in the build step (the flags follow the same pattern as the DeploymentManager.exe calls shown earlier, so fill in your own server, project name and API key):
"C:\Program Files (x86)\Red Gate\Deployment Manager\Tools\deploymentmanager.exe" ^
--server=<> "--project=<>" ^
Then set up the build trigger as follows:
Now set up a post-build action on your first project to tell Jenkins to run this new project:
Finally add an email notification as a post-build step to both jobs:
It isn't that hard and doesn't take that long to set up in practice. Now you can catch mistakes early using Jenkins and the SQL Automation Pack, and easily deploy your code and database with Deployment Manager.
Are you interested?
As I mentioned at the beginning, a lot of my job is helping customers get set up with this sort of stuff. If you need any help getting set up, follow us on Twitter (and tweet us) and we'll be happy to help.
In the most recent release of Deployment Manager we made the various kinds of notifications you may get as you’re using the product a little bit better.
This is what they now look like:
Hopefully you’ll find that:
- The colours are crisper
- The text, being a dark grey, is much easier to read
- The icons help identify the type of notifications and distinguish between them better in the case of colour blindness.