This article discusses benchmark scaling as the number of cores on servers continues to grow.
This article discusses Etsy's growing pains. It states something that I've felt for a long, long time: "If you are doing something clever, it's probably wrong."
This (old) article discusses discipline in software development. It has me thinking that I should read McConnell's Code Complete again. I know I've boxed, moved and unboxed my copy several times in the last 10 years, and it seems that there is a second edition. Tempus fugit.
This article discusses refactoring tables with no downtime. This isn't for every database. There is a good deal of work here (in terms of development, testing and DBA work) that you would not need to do in an 8x5 environment, where you can easily take the database offline for a while to do a migration or upgrade. This strategy is only for databases that must stay up 24x7. It also seems that a team could only work on a small number of schema changes in any particular migration. (This goes hand-in-hand with one of the author's conclusions, which points out that limiting the system to a small number of high-value features was important.) That works if you are agile and following a "release-early, release-often" strategy with users who expect to see steady but gradual changes. If your users expect to see a bunch of changes all at once, it might not work as well. You might have to be more fussy about when you surface new features to the users, you might have to drift back to a more traditional waterfall model, or you might be stuck taking outages to do migrations.
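To make the idea concrete, here is a minimal sketch of the expand/contract style that no-downtime column refactorings typically follow. The table and column names (dbo.Customer, Phone, PhoneNumber) are hypothetical, and this is only one common shape of the technique, not necessarily the article's exact recipe:

    -- Step 1 (expand): add the new column alongside the old one.
    -- Existing code keeps writing to Phone; nothing breaks.
    ALTER TABLE dbo.Customer ADD PhoneNumber varchar(20) NULL;
    GO

    -- Step 2: backfill the new column in small batches so that
    -- each transaction stays short and the table stays available.
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (1000) dbo.Customer
        SET PhoneNumber = Phone
        WHERE PhoneNumber IS NULL AND Phone IS NOT NULL;

        IF @@ROWCOUNT = 0 BREAK;
    END
    GO

    -- Step 3: deploy application code that reads and writes both
    -- columns, then code that uses only PhoneNumber.

    -- Step 4 (contract): once nothing references the old column,
    -- drop it in a later release.
    ALTER TABLE dbo.Customer DROP COLUMN Phone;
    GO

The batched backfill is the part that makes this viable for a 24x7 database: no single statement holds locks long enough to block readers and writers for any noticeable time.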
This article discusses using the "database project" features in Visual Studio. I have used this feature, with varying degrees of success, since Visual Studio 2005.
I particularly like the "DeployLog table" idea. I used a similar tactic over a decade ago. In that case, deployment was done by running a series of TSQL scripts through a custom-built utility, which recorded the name of every script it ran in a table. Based on the file name, the utility would not run the same script twice, and there was a scheme that allowed a script to insist that certain other scripts had been run as prerequisites. The utility was smart enough to deal with it all. During development and testing, there was a lot of checking back and forth for objects that differed from the 'official' code base.
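As an illustration of that scheme (the table, column and script names below are my own invention, not what we actually used), the core guard pattern looks roughly like this in TSQL:

    -- Hypothetical deployment-log table; names are illustrative only.
    CREATE TABLE dbo.DeployLog
    (
        ScriptName  sysname      NOT NULL PRIMARY KEY,
        AppliedAt   datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO

    -- Prerequisite check: fail fast if a required script
    -- has not been applied yet.
    IF NOT EXISTS (SELECT 1 FROM dbo.DeployLog
                   WHERE ScriptName = N'0041_CreateCustomer.sql')
        RAISERROR (N'Prerequisite 0041_CreateCustomer.sql has not been run.', 16, 1);
    GO

    -- Idempotence guard: skip the script body if this script
    -- has already been recorded as run.
    IF NOT EXISTS (SELECT 1 FROM dbo.DeployLog
                   WHERE ScriptName = N'0042_AddCustomerPhone.sql')
    BEGIN
        -- ...body of the deployment script goes here...

        INSERT INTO dbo.DeployLog (ScriptName)
        VALUES (N'0042_AddCustomerPhone.sql');
    END
    GO

The primary key on ScriptName is what makes the "never run twice" rule enforceable even if two deployments race each other.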
Looking back at all of that, it seems over-complicated, but we did not have sophisticated build tools or version control like TFS, Subversion or git, and we did not have good control over the (several) sites that would be running scripts.
I haven't been able to work in a CI environment, but most of the suggestions in the blog post seem to make sense. I am not so sure about the "Use 'Not in build'" suggestion. I can easily see how this would be valuable if you do not have version control in place, but you should have version control in place. Personnel should be able to see the history of the code by looking through the VCS and by understanding how the code has been branched.