“What happens when we only want to release some of the changes that we’re currently working on?”
There it is. The big question. Branching strategies. The question that everybody always asks.
There are so many reasons why the order in which you want your users to receive changes is not the same as the order you started building them in. Priorities change, urgent fixes are required, people go on holiday and some changes simply take longer than others.
If you are delivering in short development iterations, you almost certainly need an approach to releasing functionality in a different order from the one in which you checked it in.
Sadly there’s no simple one-size-fits-all answer to this question. There are many different approaches, and each has advantages and disadvantages. In this post, I’ve outlined some of these and the challenges each approach brings – whether that’s within your deployment process, version control approach, or the way you write code. It’s a careful balance, and the branching strategy that works for one team and one project may not work well for another.
When talking to customers about the options, I always say there’s going to be an element of pain involved, and it’s a case of working out where this pain will have the least impact.
The tip we often give is to take a look at all the approaches, and to adopt the minimum amount of one of them that works for you. Don’t start with something complex and designed to work in all circumstances; start with something simple and add more structure as situations arise.
Branch When Required (Pain in your Deployment Process)
If you only rarely need to deploy some of your code but not all of it, don’t optimise for that situation. A good approach here might be to create a release branch ad hoc: create a temporary branch which doesn’t contain the changes you don’t want to deploy, treat that branch as the master branch for that deployment, and deploy it to QA/Staging/Prod. Remove the branch when it’s no longer required and go back to deploying from master.
- Only a pain for a short amount of time
- This involves changing the deployment process for a specific release, and then returning to normal for the next one
- It introduces the risk of going wrong at the last minute in your production deployment
- This approach is harder to automate; you may need a more manual process for these ad-hoc deployments
I’m not a fan of manually changing the process between releases, as this increases the risk of things going wrong at the time you can least afford it. There’s also the risk that you forget to put things back to normal in the next release. I’d much prefer to use the same process for every deployment, so that I’m confident it will work every time.
However, if you’re reading this article as an academic exercise and this isn’t a problem you’ve experienced in the last couple of months, then this might be the lowest-risk starting point for your team, and you can build on it as situations arise.
Feature Branching and Release Branches (Pain in your Version Control)
This very robust and scalable model is often called feature/release branching. It has the advantage that you can start with something fairly simple and grow this as required.
A starting point would be something like:
- Create a release branch for every sprint, e.g. release/sprint-12
- Write every feature on its own branch, e.g. feature/variable-sorting
- When a feature is ‘done’, merge feature onto the release branch
- Test the release branch
- When it’s ready to go, merge the release onto master and deploy to QA/Prod
A very flexible strategy for this is described at http://nvie.com/posts/a-successful-git-branching-model/
A similar approach can be taken using individual repositories as an alternative to branches. GitHub describes their workflow in detail in this post: http://zachholman.com/talk/how-github-uses-github-to-build-github/
- Lets you control with extreme granularity which features go live when, and which features are in test/CI environments
- Lets you make hot fixes based on current production versions
- It’s possible to automate this approach to deal with almost every possible scenario
- Creates a workflow with many check points for peer reviews and testing
- Highly adaptable to add more environments or modify the process
- You need a VCS with good branching support (e.g. git or Hg), or you’re going to tear all your hair out very quickly
- The team needs to be incredibly disciplined with VCS and good at using their VCS clients
- Somebody needs to spend a decent chunk of their time managing the repos, as it will be messy. They will need to own this process.
- Setting up this process in its entirety can take a while
- It’s difficult to work out where to run your CI builds (do you test all the feature branches? Or wait until a feature is done before testing, and risk finding mistakes late?). TeamCity and Bamboo have some clever features (gatekeepers) to help out with this
If you do deploy in a very different order to the way you code, this is a very viable approach. However, it’s easy to get wrong. From personal experience, I’ve been part of a fairly large distributed development team following this approach, and it was seriously time consuming. If you’re using Continuous Integration and have aspirations for Continuous Delivery, I’d recommend considering the alternatives. The Humble/Farley Continuous Delivery book advises against branching, and I’m inclined to agree unless this is a very regular occurrence. It’s worth reading about the experiences (good and bad) of other people doing this.
Feature Flagging (Pain in your Code)
A source control-based approach isn’t the only way to solve this challenge. An approach advocated by the Continuous Delivery movement is to abstract the act of releasing code from that of releasing functionality. This means you can ship code that is not yet complete alongside releasable features, and use configuration to decide what users are able to see.
Your approach may look something like:
- Write every feature wrapped in code to prevent it from running until enabled via configuration
- Commit code daily (even if features are incomplete) and deploy them at will to all environments (including production if appropriate)
- When a feature is ready, enable it for end-users through a configuration change
- A deployment and a release are no longer coupled
- Although this may feel like a very drastic approach, it’s a much simpler concept to explain than the branching approach above
- It’s fairly easy to extend this to allow you to release features to small ‘canary’ subsets of users before releasing to everyone. That might be internal users, a beta group or a small percentage of your user base
- May require a fairly big shift in mindset across the organisation
- You need to make sure that vast amounts of code don’t sit unchecked-in (and untested). The recommendation is to commit every day
- Requires trust in the automation of testing, building and releasing processes
- Need to think about the database releases quite carefully
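The flag check at the heart of this approach can be very small in code terms. Here’s a minimal sketch (the flag names and rollout percentages are invented for illustration, not taken from any real product) that also covers the ‘canary’ percentage rollout mentioned above:

```python
import hashlib

# Illustrative flag configuration; in practice this would come from a
# config file or database so it can change without a redeployment.
FLAGS = {
    "variable-sorting": {"enabled": True,  "percentage": 100},
    "new-dashboard":    {"enabled": True,  "percentage": 10},  # canary: 10% of users
    "bulk-import":      {"enabled": False, "percentage": 0},   # shipped but dark
}

def is_enabled(feature, user_id):
    """Decide whether a user sees a feature, deterministically per user."""
    flag = FLAGS.get(feature)
    if not flag or not flag["enabled"]:
        return False
    # Hash the user id so each user lands in a stable bucket from 0-99;
    # the same user always gets the same answer for the same feature.
    digest = hashlib.md5(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < flag["percentage"]
```

Because the bucket is derived from a hash rather than a random number, ramping a flag from 10% to 20% only adds users; nobody who already has the feature loses it.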
This is fairly common with Continuous Delivery, and has been very successful at Etsy, Facebook, and Yahoo. There’s a good presentation at http://www.slideshare.net/500startups/ross-synder-etsy-sxsw-lean-startup-2013. That said, it can take a long time to set up and to adjust your culture to this process.
Specifically for the database, this probably means making the database forward/backwards-compatible across versions. That abstraction layer could sit within the database (say, via stored procedures) or within the application code itself. Even the big guns using this approach mentioned above do weekly database deployments to decouple these changes from the code changes.
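One common way to keep a schema change backwards-compatible is the ‘expand/contract’ pattern: add the new column as nullable first, so application versions from before and after the change can run against the same schema. A minimal sketch using SQLite (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table users (id integer primary key, name text)")
conn.execute("insert into users (name) values ('Grace')")

# Expand: add the new column as nullable, so the previous application
# version (which never mentions it) keeps working unchanged.
conn.execute("alter table users add column email text")

# Old code path: still valid against the new schema.
conn.execute("insert into users (name) values ('Alan')")

# New code path: starts using the column once the feature is enabled.
conn.execute("update users set email = 'grace@example.com' where name = 'Grace'")

rows = conn.execute("select name, email from users order by name").fetchall()
print(rows)  # [('Alan', None), ('Grace', 'grace@example.com')]
```

The ‘contract’ step – making the column mandatory or dropping the old one – only happens later, once every deployed version writes the new column.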
The Continuous Delivery book on this topic is excellent and discusses databases in chapter 12.
How does the Deployment Manager team deal with this challenge?
The team value releasability of the code base very highly, which means being confident that we can release the master branch at any time. Along with the weekly release cadence ‘Release Wednesday’, we’ve been known to go from customer bug report to general release containing a fix in less than 5 hours.
We’ll be writing about the process that the team uses to manage code, releases and deployments. We’ll also talk about why we chose one branching and merging model over the others, and how we’ve eaten our own dog food by using Deployment Manager as part of this release strategy.
Find out more about Deployment Manager
Deployment Manager is a release management tool that allows you to automate your application and database deployment pipeline. There’s a step-by-step video in one of our recent blog posts on how Deployment Manager works, or you can take it for a spin with the free starter edition for up to five projects and target machines.
The most popular post on this blog was written by the first Product Manager to work on Deployment Manager (DM) and promoter of all things Continuous Delivery, Justin Caldicott. His final message to us was that we should automate everything:
This got me thinking about how many of you guys are really automating everything. You may not know that about half of our customers are kind enough to send us feature usage data. We use this anonymous data to make decisions about how to most effectively improve the product. One of the metrics that we track is how many deployments are actually being made with DM, and I think it is fair to say that it is really taking off. We can only hope this is a sign that people are taking Justin’s words to heart.
So, how can I, a pre-sales engineer for DM, automate my job? As you can imagine, I have been flooded with requests for demos recently. I’ve tried very hard (okay, not tried, but really thought about it), but so far I have been unable to find a way to increase the number of hours in a day. If you are responsible for managing releases, you probably feel the same way about time, especially when those trendy developers or managers who are keen on releasing so ruddy frequently come calling twenty times a day. I think both you and I need to find a better way of working!
While there is no replacement for the personal touch of a private demo, either onsite or via a remote screen-share, most of what I demonstrate follows exactly the same pattern… I know the exact questions you are going to ask because I have heard them time and time again. That leads me to think that the bulk of my work can be automated. This is my attempt at just that.
I always start with a high level overview. My aim here is to get across the core functionality and main benefits of Deployment Manager as concisely as possible and to provide a Wow moment as I deploy .NET and SQL hand in hand in just three clicks. This also gives clients a good idea about what they are working towards. This video is 5 minutes long:
At this point I say something like this:
“That’s a high level overview. In a few moments I’ll go on to discuss how we create our packages in the first place for both databases and applications. Before I move on, do you have any questions?”
And the customer will ask the following questions…
“That is pure magic… How does it actually work?”
“How does the database deployment work?”
“How do I manage IIS?”
You don’t. We do. We copy the files to the target server and then go into IIS and point it at your new file location. Hence downtime is just about zero.
“What if I need to deploy to customers at remote sites?”
You need to set up a secure connection between your ‘DM Server’ and your ‘DM Agent’. The DM agent will be at your client site.
“Can I use DM to deploy <insert technology other than ASP.NET and SQL Server>?”
Yes*, but you need to use a little PowerShell. We natively understand how to deploy SQL Server and IIS based .NET applications. For other stuff, such as Windows Services, SSRS packages, Oracle databases or WinForms apps, you can include a few lines of PowerShell to explain exactly how you want us to handle the deployment. We also have a PowerShell forum where users share and comment on each other’s scripts.
*We do not support Linux.
“Do you support Windows Auth?”
That’s enough questions for now. As with all works in progress, this guide is not complete – I don’t want to overload you, or me. If you have another question please post it here and I’ll reply as soon as I can. Just for you (and everyone else).
This is how you create your packages for your .NET applications in both a manual and an automated way. This video is 6 minutes long:
“In a few moments I’ll go on to discuss how we create our packages for our database. Before I move on, do you have any questions?”
And the customer will ask me:
“How do I manage my configuration settings?”
Moving on to database packages…
Erm, confession: I’ve not actually created this video yet, but it is well documented. Essentially you can either use the SSMS add-in (just like the VS add-in from the previous video) or you can use the SQL Automation Pack to integrate with a CI server. (The SQL Automation Pack is licenced separately from DM.) I’ve written a step by step tutorial for Jenkins users here, and we have one for TFS here. Ben Rees has done the same for Bamboo. We also have a plug-in for TeamCity users. But you are not limited to these four; we have customers who have integrated with almost every CI server. For example:
— Rob Richardson (@rob_rich) February 6, 2014
I tend to finish a demo with some sort of commitment to next steps. This is normally a commitment to some sort of evaluation within a given time-frame. To get started with the install and configuration you should follow this step by step guide. It is surprisingly easy to set up.
I know from experience that initiatives to improve best practice tend to get pushed down the road. New features take priority. If lots of different teams in different parts of the business get involved there is too much talking and not enough doing, and that isn’t good for anyone. I recommend that you make a commitment to get a small proof of concept up and running within a short given timeframe. Perhaps give yourself a single sprint? And keep it small at first.
The starter edition of Deployment Manager is free so it is perfect for a single team to manage their own releases for a project or two. Once you have it up and running for your own team (and you are able to release much more frequently and reliably than any of the other teams in your company) you shouldn’t have any trouble persuading the CTO that this is a good idea.
If I have missed anything, please add a comment or contact me directly by email (all our email addresses are firstName.lastName@red-gate.com) or tweet (@_AlexYates_) and I’ll happily take some time to help you personally. As I said before, there is no replacement for a proper, private demonstration with two way communication, but sometimes we can automate the repetitive bits.
To read more about or download Deployment Manager:
Back in August last year I announced on this blog that the Deployment Manager Team had decided to release our product on Wednesdays. In fact, I said we were going to release every Wednesday.
We were keen to get away from the ‘opportunist’ release process we had fallen into, where we released whenever an opportunity arose. This resulted in an irregular deployment cadence and had the side-effect that we were continually agonizing over whether ‘now’ was a good time to ship. Release Wednesdays, our initiative to release every Wednesday and only on Wednesdays, was intended to fix that, while ensuring the product code is in a constantly shippable state and we didn’t swamp our users with new version notifications.
So, what happened? Well, since the inception of Release Wednesdays six months ago, we’ve released Deployment Manager 23 times and have settled into a really solid, comfortable deployment cadence.
When discussing Deployment Manager’s frequent, regular releases, I find myself using the metaphor of a train running to a timetable. It helps to communicate a few key concepts of our approach:
- Releases happen on a deliberate schedule
- A release delivers cargo (features, bug fixes, enhancements) to a destination (the users)
- The team needs to get its cargo ready in time for the next scheduled release. If they miss that, then the release will not wait for them and the cargo has to wait for the next scheduled release.
As an aside, this model is often used in a multiple team environment, where a number of development streams need to be synchronised in order to get a release out of the door.
Release Wednesdays are a success!
We’ve found that there are many benefits of this release cadence:
- Users get valuable new functionality sooner – if we feel that users will get value from a partially complete feature (where the ‘complete’ part has been fully tested), a bug fix or a usability enhancement, then we ship it on the following Wednesday.
- We don’t need to do ‘special’ builds – because our next release is on average only 3.5 days away, we never need to do a special (a.k.a. private) build to help individual users get around a problem.
- We are better at releasing Deployment Manager – we’ve practiced, we trust our automated tests and we’ve streamlined our process.
- We don’t ship broken releases – even though we release much more often than we used to, we have not shipped a release we have had to recall since we started Release Wednesdays. That’s because we’ve kept the delta between releases small and the risk associated with an individual release low.
- We don’t spend time worrying about when we should release – if it is not a Wednesday, we don’t start the release process.
- We are always releasable – we’ve had to organise our development environment so that we are always ready to release. That means using the likes of feature branches, tiered automated tests and continuous deployment into a test environment.
Our Release Wednesdays initiative has been a great success and it’s something we are pretty proud of.
However… this week we did not release on Wednesday
This was because our new ‘cargo’ was not ready to deliver to our users. This was not unexpected, because we had estimated that the feature we are working on – the automated clear-up of old deployments on agents – would take two weeks to complete.
It’s quite normal for us to be tackling a feature that takes longer than a week to develop. In the past, in parallel with this feature development, we have also worked on bug fixes or usability tweaks that, when complete, are valuable cargo in themselves. Some features we work on give the user a valuable new capability even if they are only partially complete. So, we ship that cargo on our weekly release schedule.
What has changed over the last few weeks is that the team have become concerned that this parallel development is slowing down the delivery of important features we’d like to give users as soon as possible. This issue came up at our most recent fortnightly retrospective meeting; the team were disappointed that we had not delivered what we had wanted to in the previous sprint. We decided that this was because, between all team members, we had worked on four or five discrete pieces of work at the same time. To tackle this issue, we decided to reduce the work the team is allowed to undertake in parallel to two work items (stories, bugs or enhancements).
This experiment has had the desired effect, as progress on our current feature has been excellent. We’ve ‘swarmed’ on that one thing – meaning the entire team has worked on developing a single feature. You can see that from the picture below showing the team’s faces on the whiteboard, representing what people are working on. The team members’ avatars (cheesy pictures) are close together (all three rows are for the same feature, btw). In addition, everyone understands the feature at a technical and functional level, and it truly feels like the team’s work.
However, another result of this focus was that the feature, a two-week-sized piece of cargo, was not ready for delivery on Release Wednesday. As we hadn’t worked on anything else, there was no new valuable capability for us to put into the hands of our users.
David, one of the Deployment Manager team, summed this up really well:
I see it as being a trade-off
On the one hand we want to progress a number of different things so we have lots of small, valuable stuff to give to users every week. On the other hand we want to finish the most important functionality sooner, getting everyone swarming on the same feature and feeling like a true team.
Because we think about continuous improvement, we’ve decided to change the balance in favour of finishing features, right now. However, we’ll look at what happens and may decide that we want to shift the balance back, or that what we actually want to do is break our features into smaller, valuable parts.
So, fear not: we’re still committed to Release Wednesdays, we’ve just decided to focus on adding one new feature at a time. Users will get the first one of those features next week. On Wednesday.
Are your database changes getting out of hand?
Developing on a relational database that is shared with the whole team, without any way of tracking changes, can be very painful.
Versioning your database can enable easy collaboration and repeatable deployments. However, there are a number of different strategies for versioning a database.
Below we visit three of the most popular methods, and talk about the pros and cons of each approach.
Version Upgrade Scripts
If you are writing scripts to deploy the database anyway, you may well commit those to a version control system too. By putting the scripts in one folder and naming them with an index, the date and a brief description, you have a change history.
- Audit. An audit trail of changes made. In some organizations this is required, but it’s also invaluable when debugging problems in production to know what was changed and when.
- Repeatability. We can run all the scripts in sequence to create the database. This also works well with packaged/installed software.
- Lightweight deployment. Once scripts have been saved, it’s just a case of running them in sequence. The same process works for production deployments as much as it does development environments.
- No state history. We don’t get to see what the schema looks like at any point in history, and diff between versions. It’s hard to tell who made each change too, unless each change is stored in its own file and you can use the version control system to see who created the file.
- Need to replay changes. If we want to get the database at a specific version, even an empty one, we need to run the scripts in sequence. Not only is this inconvenient, it is also slow for initial installations, which will get longer with each change. You’ll also have to know exactly which revision each of your dev/test/staging databases is currently at, so that you can start running the upgrade scripts from the right revision.
- Integrity Checking. This approach can also be error prone when updating a database, as it may have drifted from the expected starting state. In this case running a script may not change the database into the state you desire. Fixing a drifted database isn’t smooth either. You’d likely need to replay all the scripts to get a new database with the expected schema, then use a database comparison tool to remove any changes.
- Take it in turns. Because of needing to define the running order of scripts, you’ll need to decide whose changes are run when. This can be a pain at commit time, as you’ll need to ensure you have the latest version before committing, and you could easily find yourself competing for the next index number.
- Branching and merging are hard. When you commit, it’s hard to know if your changes conflict with a colleague’s; every change shows up as a new file. Merging branches can involve looking closely at other recently committed upgrade scripts to ensure compatibility.
This approach needs tools to create the upgrade scripts, store them consistently, and apply them. A tool that helps run these scripts is DbDeploy.
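In spirit, a runner like DbDeploy boils down to a loop of this shape: look up the version the database is at, then replay only the scripts above it, in index order. Here’s a minimal sketch using SQLite, with the numbered script files inlined as strings for illustration (real tools read them from the scripts folder):

```python
import sqlite3

# Numbered upgrade scripts, normally one file each,
# e.g. 001_create_orders.sql; inlined here for illustration.
SCRIPTS = [
    (1, "create table orders (id integer primary key, total real)"),
    (2, "alter table orders add column placed_at text"),
]

def upgrade(conn):
    # Track which version the database is at in a metadata table.
    conn.execute("create table if not exists schema_version (version integer)")
    current = conn.execute("select max(version) from schema_version").fetchone()[0] or 0
    # Replay only the scripts this database has not yet seen, in index order.
    for version, sql in sorted(SCRIPTS):
        if version > current:
            conn.execute(sql)
            conn.execute("insert into schema_version values (?)", (version,))
    conn.commit()
    return conn.execute("select max(version) from schema_version").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(upgrade(conn))  # prints 2
```

Because the applied versions are recorded in the database itself, running the tool twice is harmless: the second run finds nothing above the current version and does nothing.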
Version Schema (and Static Data)
With this approach you can version control the desired state of the database after each change, rather than versioning the individual changes.
- State history and audit. The major gain from versioning the schema in its entirety is the ability to see what the database looked like at any given point in time. Plus you can generate diffs between revisions or versions to see exactly which objects were changed, how, by whom and why.
- Easier branching and merging. Given that it’s easier to see what’s changed, it’s also easier to merge separate branches of a project. By using a version control system such as Git, with optimistic locking and a graph-based approach, most merges will be handled automatically.
- Easy to create new environments. It’s trivial to get new team members set up: they can check out the latest version of the database from source control and quickly get up and running.
- Readily available tooling. There are many ready to use tools that follow this approach and make your workflow even faster. Red Gate produces tooling for SQL Server, MySQL and Oracle that allow:
- Comparing two databases (or versions). See the differences and generate scripts to apply the changes.
- Detecting and versioning changes. Make changes in the DB tool of your choice, then just add a comment and press commit.
- Object level history. See how an individual object has changed, alongside VCS comments and authors in your development environment.
- Static data. Add infrequently changing static configuration data to your version controlled database.
- Release automation. Deploy the version controlled database using your CI server and release management tools.
- See SQL Source Control, SQL Compare and Deployment Manager for more details and free trials.
- Custom scripts. Even the best comparison tools can’t work out what’s happened in some scenarios. For example, has a table been dropped and a new one created with the same columns, or has a table been renamed? What if you want to insert some reference data? In these cases custom scripts are required to explain the intentions of your changes. Red Gate tools use a built-in migrations framework to spot these edge cases and prompt you for customisations.
Version Abstraction (i.e. Version Code)
If you are using a development framework such as Entity Framework in .NET, Django in Python, or Active Record in Ruby, you have an abstraction layer in place that lets you version the database as code. Commonly these are called migrations or migration layers.
- Database changes automatically generated from code changes. Most frameworks allow you to edit the models in your code, and will then automatically create the required database changes. These will then be applied as required when you deploy the application to each environment.
- State history and audit. As your changes are in the code they are tracked when you commit to the version control system.
- Atomic commits. The database change is automatically created based on a code change, and stored within the application codebase. This means that a commit of the application code will include any required database changes. This ‘atomic’ commit is better continuous delivery practice than having two commits, the first of which might have been a broken build with an inconsistent database or application.
- Tooling and Process. The tooling is usually part of the framework, which means that all developers will have the software required to create and run database changes. The framework may well be optimised for this workflow too.
- Database Agnostic. Some frameworks, for example Ruby’s Active Record migrations, are database agnostic. This allows for flexibility in the future, or the ability to use lighter-weight development databases where appropriate.
- Outgrowing the framework. If your application becomes very popular, or you wish to use more complex database features you may find that the framework does not support what you want to use. For example Entity Framework does not expect logic such as stored procedures to be in the database. If you do want to version control these you’ll have to add custom SQL to pre-deployment scripts, and many of the advantages of state history and audit will be lost.
- Custom upgrade scripts. As with the other approaches, you may make code changes where the framework cannot determine exactly what you want – for example, splitting a column. In these cases you may need to insert custom SQL. Some frameworks, such as Entity Framework, have methods that allow you to signal your intentions through additional data annotations.
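The ‘splitting a column’ case is worth a sketch, because it shows why the framework needs help. This is a hand-rolled, framework-agnostic migration runner (not any real framework’s API) with the custom data-migration SQL spelled out by hand, using SQLite; all names here are invented for illustration:

```python
import sqlite3

# Each migration would normally be generated by the framework from
# model changes, except where noted.
migrations = [
    ("0001_initial", [
        "create table person (id integer primary key, full_name text)",
        "insert into person (full_name) values ('Ada Lovelace')",
    ]),
    ("0002_split_name", [
        "alter table person add column first_name text",
        "alter table person add column last_name text",
        # No framework can infer this intent from the models alone,
        # so the data migration is written by hand:
        "update person set "
        "first_name = substr(full_name, 1, instr(full_name, ' ') - 1), "
        "last_name = substr(full_name, instr(full_name, ' ') + 1)",
    ]),
]

def migrate(conn):
    # Record which migrations have already run, so each applies once.
    conn.execute("create table if not exists applied (name text primary key)")
    done = {row[0] for row in conn.execute("select name from applied")}
    for name, operations in migrations:
        if name not in done:
            for sql in operations:
                conn.execute(sql)
            conn.execute("insert into applied values (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
print(conn.execute("select first_name, last_name from person").fetchone())
# prints ('Ada', 'Lovelace')
```

Everything here lives in the application codebase and is versioned with it, which is exactly the ‘atomic commit’ property described above.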
As you can see, each approach brings a different set of advantages and drawbacks. The method that works for you will depend on which elements of version control are most important for you and your organisation.
We mentioned a little while ago that we’re focusing on database releases in Deployment Manager. Two recent examples of work we’ve done on this are dedicated SQL Server targets and steps, and the ability to publish from source control using the SSMS publish add-in.
These two things make it much easier to start deploying your databases out to development, testing and QA environments. But we’re not satisfied with this. We want you to feel comfortable deploying your databases out to production using Deployment Manager. We’ve a number of upcoming stories that work towards this aim, and I’m really excited to see them come together.
This is where we need your help. We need to understand the detail on how you and your team are currently deploying databases out to production.
If you can spare ten minutes, please complete the survey I put together. We’re also offering a $50 Amazon gift certificate to one lucky respondent.
A question that we often get asked is how to deploy specific variations of a database to different customers. Or how to deploy different static configuration data to each customer, and how to version control, test and deploy this per customer in an automated way.
In the post below I’m going to run through an example scenario showing how to achieve customer-specific database deployments using SQL Source Control, SQL CI, the Red Gate TeamCity plugin and Deployment Manager.
- I have two customers, whom we’ll call A and B
- I want to deploy a database and web application to both customers
- For each customer I need to deploy some customised stored procedures
- For each customer I need to load different static configuration data
- I want to keep the web application code identical for both customers, and manage per-customer variations through configuration
Source Control and Environments
I’m source controlling the database in git. I’m using SQL Source Control to version the database structure, and will use the static data feature to version control the static data per customer. I’m version controlling the web application in the same git repository.
Each customer has a Continuous Integration environment and a Production environment. After pushing code to the master git branch, all changes will be deployed to the CI environments and tests executed per customer.
I’m going to use schemas as the mechanism to separate out the shared ‘base’ parts of the database and the per customer procedures and data. I’ll use the dbo schema to contain all the shared elements, and then one schema per customer for other elements.
- To Customer A I will deploy schemas: ‘dbo’ and ‘a’
- To Customer B I will deploy schemas: ‘dbo’ and ‘b’
SQL Source Control Filters
Filters are the mechanism that will allow us to separate out the required schemas. I’m going to check two additional filters into the git repository in order to split the code base at the SQL CI level. I want to be able to run my tSQLt tests during the CI process, so I’ll include those schemas in both databases too.
- Filter Customer A: include the schemas ‘dbo’, ‘tSQLt’, ‘SQLCop’ and ‘a’
- Filter Customer B: include the schemas ‘dbo’, ‘tSQLt’, ‘SQLCop’ and ‘b’
Note: I’m not going to save these filters as the standard Filters.scpf file in the SQL Source Control Database Scripts folder. Instead I’m going to save them outside that folder and use them during the CI process. I created these by hand, but could have used the filter editor in SQL Compare or SQL Source Control and saved the files from there.
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<NamedFilter version="1" type="SQLCompareFilter">
  <FilterName>Include A</FilterName>
  <Filter version="1" type="DifferenceFilter">
    <FilterCaseSensitive>False</FilterCaseSensitive>
    <Filters version="1">
      <None version="1">
        <Include>True</Include>
        <Expression>(@SCHEMA = 'dbo') OR (@SCHEMA = 'tSQLt') OR (@SCHEMA = 'SQLCop') OR (@SCHEMA = 'a')</Expression>
      </None>
      <!-- ....Rest of Filter File Truncated for Example.... -->
    </Filters>
  </Filter>
</NamedFilter>
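The include expression in the filter file above follows directly from the per-customer schema lists, so it can be generated mechanically. A minimal Python sketch, in which the helper and schema map are illustrative rather than part of the Red Gate tooling:

```python
# Map each customer to the schemas their database should contain;
# the tSQLt and SQLCop schemas are included so the CI tests can run.
CUSTOMER_SCHEMAS = {
    "a": ["dbo", "tSQLt", "SQLCop", "a"],
    "b": ["dbo", "tSQLt", "SQLCop", "b"],
}

def build_filter_expression(customer: str) -> str:
    """Build the @SCHEMA include expression for a SQL Compare filter file."""
    clauses = ["(@SCHEMA = '%s')" % schema for schema in CUSTOMER_SCHEMAS[customer]]
    return " OR ".join(clauses)

print(build_filter_expression("a"))
```

Adding a third customer then only means adding one entry to the map and one filter file to the repository.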
CI Server Setup
The TeamCity CI server will trigger the following steps on a commit. I’m adapting the process described in the Continuous Integration Using Red Gate Tools whitepaper.
- Build the web application
- I won’t go into details here, but the application is built using MSBuild and a NuGet package is created
- Build two databases, one per customer
- Duplicate my database script folders
- Create a temp folder for each customer
- Copy the database scripts folder into each temp folder
- Copy the appropriate filter folder to each temp folder
- Prepare the customer specific databases (repeated once per customer)
- Empty the database scripts folder
- Run SQL Compare to move the objects that match the customer specific filter to the database scripts folder
- Run SQL Data Compare to move any static data required by the customer specific database into the database scripts folder
- Create a NuGet package of this customer specific database scripts folder using the SQL CI Team City Plugin
- Deploy the web application and database to the CI environments using Deployment Manager
- Create a release containing both customer specific databases and the web application NuGet packages
- Deploy release to Customer A CI environment
- Deploy release to Customer B CI environment
Note: The critical point of using the filters with SQL Compare and SQL Data Compare to build the database scripts directories is that each package will only contain the database objects and static data that each customer requires.
Duplicate Database Script Folders Step
mkdir tempA
mkdir tempB
xcopy SimpleTalk\Database\ScriptsFolder tempA /e /s /h
xcopy SimpleTalk\Database\ScriptsFolder tempB /e /s /h
copy /Y SimpleTalk\Database\include_a.scpf tempA\Filter.scpf
copy /Y SimpleTalk\Database\include_b.scpf tempB\Filter.scpf
Prepare Customer Specific Database Step (example for Customer A)
rmdir SimpleTalk\Database\ScriptsFolder /S /Q
mkdir SimpleTalk\Database\ScriptsFolder
copy /Y tempA\Filter.scpf SimpleTalk\Database\ScriptsFolder\Filter.scpf
SQLCompare.exe /filter=tempA\Filter.scpf /include="Identical" /include="StaticData" /scripts1=tempA /scripts2=SimpleTalk\Database\ScriptsFolder /Options:DecryptPost2KEncryptedObjects,IgnoreFillFactor,IgnoreWhiteSpace,IgnoreFileGroups,IgnoreUserProperties,IgnoreWithElementOrder /synchronize
SQLDataCompare.exe /scripts1=tempA /scripts2=SimpleTalk\Database\ScriptsFolder /synchronize
rmdir tempA /S /Q
The packaging step uses the standard Red Gate SQL Server Build Runner. The only specific configuration is:
- Additional SQL Compare command line switches
- In the Deployment Manager section, setting a different database package name for each customer specific database, e.g. ‘database_a’ or ‘database_b’
Note: The additional command line options are to make sure that the relevant filters are applied during the CI testing steps, and to not automatically add dependencies to the build. This is specifically to enable the static data scenario described below.
Deploy the Web Application and Databases Step
DeploymentManager.exe --create-release --server=http://localhost:81 --project="Simple Talk" --apiKey=XXXXXX --version=1.%build.number% --force
DeploymentManager.exe --deploy-release --server=http://localhost:81 --project="Simple Talk" --deployto=CI-A --apiKey=XXXXXX --version=1.%build.number% --force
DeploymentManager.exe --deploy-release --server=http://localhost:81 --project="Simple Talk" --deployto=CI-B --apiKey=XXXXXX --version=1.%build.number% --force
Deployment Manager Setup
Setup in Deployment Manager is very simple. I’ve set up a single project which has four environments. The project has four deployment steps.
- Deploy the Web Application package to Customer A Environments (CI-A, Production-A)
- Deploy Database A package to Customer A Environments (CI-A, Production-A)
- Deploy the Web Application package to Customer B Environments (CI-B, Production-B)
- Deploy Database B package to Customer B Environments (CI-B, Production-B)
Note: If you’re using the same host machine for both CI environments you can use variables in Deployment Manager to make sure that the database and web applications are deployed with different names for each customer.
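The idea behind those variables (one set of deployment steps, different names per environment) can be sketched roughly in Python; the variable names, values and #{...} placeholder syntax below are illustrative assumptions, not Deployment Manager's actual configuration:

```python
# Per-environment variables let one set of deployment steps produce
# differently named databases and web sites on a shared host.
ENVIRONMENT_VARIABLES = {
    "CI-A": {"DatabaseName": "SimpleTalk_A", "WebSiteName": "SimpleTalk-A"},
    "CI-B": {"DatabaseName": "SimpleTalk_B", "WebSiteName": "SimpleTalk-B"},
}

def resolve(template: str, environment: str) -> str:
    """Substitute #{Var} placeholders with the environment's values."""
    result = template
    for name, value in ENVIRONMENT_VARIABLES[environment].items():
        result = result.replace("#{" + name + "}", value)
    return result

print(resolve("Deploy database #{DatabaseName}", "CI-A"))
```

The same deployment step text then resolves to a different target name for each customer's environment.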
Using Views and Table Value Functions (TVF) as a Proxy to Customer Specific Static Data
What if I need the web application to refer to the same table in the shared base ‘dbo’ schema, but see different static configuration data for each customer? Here is one approach.
Create two tables, one in each customer schema, each containing the static data required for that customer. Use the Static Data feature of SQL Source Control to version this data.
CREATE TABLE [a].[RSSFeeds](
    [RSSFeedID] [int] IDENTITY(1,1) NOT NULL,
    [FeedName] [varchar](max) NULL
)

CREATE TABLE [b].[RSSFeeds](
    [RSSFeedID] [int] IDENTITY(1,1) NOT NULL,
    [FeedName] [varchar](max) NULL
)
Create a view in the shared schema that calls a TVF:
CREATE VIEW [dbo].[RSSFeeds]
AS
SELECT * FROM RSSFeedsTVF()
GO
The TVF checks which table exists in the information schema and returns the contents of that table:
CREATE FUNCTION [dbo].[RSSFeedsTVF]()
RETURNS @RSSFeedsTemp TABLE
(
    [RSSFeedID] [int] NOT NULL,
    [FeedName] [varchar](max) NULL
)
AS
BEGIN
    IF EXISTS(SELECT 1 FROM Information_Schema.Tables
              WHERE table_schema = 'b' AND TABLE_NAME = 'RSSFeeds')
        INSERT @RSSFeedsTemp SELECT * FROM [b].[RSSFeeds]
    ELSE
        INSERT @RSSFeedsTemp SELECT * FROM [a].[RSSFeeds]
    RETURN
END
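The dispatch logic in the TVF, returning whichever customer table is actually present, can be sketched in Python; the table names mirror the SQL above and the feed data is illustrative:

```python
# Each customer database is deployed with only one customer schema,
# so "which table exists" resolves to that customer's data.
def read_rss_feeds(deployed_tables: dict) -> list:
    """Mimic dbo.RSSFeedsTVF: return b.RSSFeeds if it exists, else a.RSSFeeds."""
    if "b.RSSFeeds" in deployed_tables:
        return deployed_tables["b.RSSFeeds"]
    return deployed_tables["a.RSSFeeds"]

customer_b_database = {"b.RSSFeeds": ["Customer B feed"]}
customer_a_database = {"a.RSSFeeds": ["Customer A feed"]}
print(read_rss_feeds(customer_b_database))  # ['Customer B feed']
print(read_rss_feeds(customer_a_database))  # ['Customer A feed']
```

The application always queries the shared dbo name and never needs to know which customer schema was deployed.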
Note: Creating the view is only required if your application needs to use the table/view format dbo.RSSFeeds rather than the function format dbo.RSSFeeds(). Of course it’s totally possible to use configuration in the application instead to switch which schema you are looking at; however, in this scenario we’re specifically not modifying our application code. For large quantities of static data this approach may not be appropriate; it depends on how this configuration data is then used within your application.
A couple of weeks ago I posted about integrating Deployment Manager’s RgPublish command line with TFS. These scripts have now been extended to support calling the DeploymentManager command line to trigger deployments automatically after a build. This is great for deploying to a shared development or test environment such that the whole team has somewhere to go to view the latest version of your software.
These changes are hot off the press, so do create issues or pull-requests as helpful, and I’ll do my best to review and integrate them.
Version numbering of multiple components and the release can be interesting when using TFS. Release version numbers need to be unique. Let’s say you have a simple case of a web app and a database:
- The web app builds and creates a package versioned e.g. 1.0.0.5.
- The database builds and produces a package versioned e.g. 1.0.0.6. (Your latest web app package is still 1.0.0.5.)
There are two possible solutions to this:
- Have a separate build for the release creation and deployment. Trigger this on successful build of either web app or database. The version number sequence is then always incremented. This is the approach we use on our demos using TeamCity, and is the recommended approach. A drawback is that the version number of the release will not match exactly the build number of the web app. But you could easily ensure the major version number matches.
- Setup a dependency for the web app on the database so that whenever the database component builds, a web app build follows. Then the auto-deployment can be triggered from the web app only. A drawback is that the web app likely doesn’t have a strong dependency on the database, and doesn’t need to be rebuilt. This means publishing duplicate packages for the web app. It also doesn’t scale well with multiple dependencies, as there would be even more duplication.
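The first approach above can be sketched roughly: a dedicated release build keeps its own counter, so release numbers stay unique no matter which component triggered the run. The class below is an illustrative Python sketch, not part of any of the tools mentioned:

```python
class ReleaseBuild:
    """A dedicated release build: its counter increments on every run,
    whether the web app or the database build triggered it, so release
    version numbers are always unique and increasing."""

    def __init__(self, major: int = 1):
        self.major = major
        self.counter = 0

    def next_release_version(self) -> str:
        self.counter += 1
        return "%d.%d" % (self.major, self.counter)

release = ReleaseBuild()
print(release.next_release_version())  # run triggered by a web app build
print(release.next_release_version())  # run triggered by a database build
```

Keeping the major version in step with the web app, as suggested above, only requires passing the web app's major number into the constructor.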
But it doesn’t look like TFS supports either successful-build triggers or build dependencies out of the box. I’d be interested to learn what approach you take when implementing this, or any approaches you’re taking to solve similar problems.
The future of deployment is not going to be so bright if we focus only on application code and let the database side of things fall behind. It is true that changes to application code happen more frequently, but you will also frequently need to deploy changes to your database schema and reference data to keep delivering customer value.
At these points it is vital that you don’t mess up: arguably the data behind your application is your most important asset and very few apps will even work at all if the DB is borked.
At Red Gate we’re constantly coming up with ideas and doing work to make working with databases and delivering changes to them ingeniously simple. As part of our aspiration to make delivery of database changes easier and easier we’ve recently done – and shipped – two very important bits of work.
Databases are First Class Citizens
It’s always been possible to deploy database packages from Deployment Manager, but over the last couple of weeks we’ve put in some grunt work and re-architected bits of DM under the hood. The net effect of this work is the following three things:
- It is now possible to add database servers directly to environments which also means you don’t need DM agents to make database deployments
- Creating database deployment steps is now significantly easier. Check out our announcement blog post for all the details.
- The internals of DM now recognise database servers as first class objects. This will not mean much to the casual user, and indeed the UI doesn’t look much different. However it means that DM now has a better foundation on which to build further improvements that make database delivery and deployment better and better – and we intend to be doing a lot in that area over the coming months.
Link to Source Control
For a few years now Red Gate has made it possible for SSMS users to easily version their databases using SQL Source Control. Now, to make it easier to deploy database packages directly from SSMS we’ve shipped some functionality that lets you, directly from SSMS, package any revision of your database for deployment. Check out Alex’s post for the detail of how this feature works.
These may seem perhaps like two small(ish) and independent features. However we see them as taking us further down the path of instrumenting what’s needed to make the process of automating and simplifying database delivery ever more achievable.
* The starter edition of Deployment Manager is completely free for up to 5 projects and deployment targets.
In this post I’ll show you how easy it is to use Deployment Manager to deploy database changes from your source control system.
In a recent post Justin reminded us how important it is to source control everything. In fact it was his number one tip for Achieving Continuous Delivery. Like it or not “everything” includes the database. Source controlling databases used to be hard, but luckily it’s no longer a problem thanks to Red Gate’s SQL Source Control tool.
Over the past few years, we’ve been loud proponents that, like application releases, database releases should be based on source control. Simply comparing your development and production environments and syncing with SQL Compare isn’t enough. I won’t delve into the intricate details of why this isn’t best practice here, but Justin is planning an article on this topic (so make sure to subscribe to our RSS feed to catch it).
Deploying from source control is not a problem if you are doing continuous integration (CI) as your packages are already based on source control, but what if you aren’t? Pretty much everyone wants to use CI, but not all of us are yet in a situation where it is possible. If you aren’t yet using a CI server you can use the Deployment Manager SSMS add-in to create database packages and publish them to Deployment Manager. However, until now this approach only allowed you to create a package based on the current state of a database, not your source code.
This wasn’t good enough for us. So we integrated SQL Source Control into the Deployment Manager SSMS add-in. You now have the option to create a package based on any historical version of your database.
How to deploy database changes from source control with Deployment Manager:
- Launch the Deployment Manager SSMS add-in by right clicking on your database in the object explorer
- Select SQL Source Control as the database source on the second page
- You’ll then be asked which revision you’d like to use to create a package
- Once you’ve published your database package to Deployment Manager you can then create a release and deploy it to your chosen environment
You’ll find more detailed instructions on our documentation site.
At Red Gate we believe in the principle of ‘dogfooding’. If you are unfamiliar with this term, it’s not as bad as it sounds ;o) Dogfooding is a slang term where a company uses its own products because they believe in them, and also as a ‘canary’ test. The ‘canary’ in canary test originates from canaries that were used in coal mining to alert miners when toxic gases reached dangerous levels. For the Deployment Manager team, the Red Gate DevOps team are our canaries.
Once we feel a release is ready for the public and has passed through a whole host of automated tests and a select set of manual tests, we upgrade the DevOps team’s Deployment Manager server to the new version. We also update all their agents for them. However, our DevOps team have their builds automated using TeamCity, where they have RgPublish and Deployment Manager build steps. So even though we update the Deployment Manager server for them, we do not update their build server to have the latest versions of RgPublish.exe or DeploymentManager.exe.
Over the last few weeks, we have been amending the build configurations in TeamCity for DevOps projects to make use of the new Tools page that we have in Deployment Manager. This means that we are able to download the latest versions of DeploymentManager.exe and RgPublish.exe for every build. We have done this by adding a new build step to the projects within TeamCity. This build step uses the PowerShell runner type and runs the following script:
$client = New-Object System.Net.Webclient
This means that they no longer require a version of these executables in source control, nor do they need to remember to manually update them each time their Deployment Manager server is updated.
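The PowerShell snippet above is truncated; the essence of the step is downloading each executable from the Deployment Manager server before the build runs. Here is a rough Python sketch of that idea, where the '/tools/' URL path is an assumption rather than the documented endpoint:

```python
import urllib.request

def tool_url(server: str, tool: str) -> str:
    """Build a download URL for a tool executable.
    NOTE: the '/tools/' path is a hypothetical example, not the
    documented location of the Deployment Manager Tools page."""
    return "%s/tools/%s" % (server.rstrip("/"), tool)

def download_tool(server: str, tool: str, dest: str) -> None:
    """Fetch the executable so every build uses the server's current version."""
    urllib.request.urlretrieve(tool_url(server, tool), dest)

# Usage (not run here):
# download_tool("http://localhost:81", "RgPublish.exe", "RgPublish.exe")
print(tool_url("http://localhost:81", "DeploymentManager.exe"))
```

Because the download happens on every build, the executables track the server version automatically, which is exactly why they no longer need to live in source control.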