Whenever I do anything with an SSRS server, I always back up the encryption key and park it somewhere safe.
Automating that would be handy, and this seems to fit the bill nearly 100%.
The only additional thing I would like to see is an option that keeps the password in PasswordSafe or KeePass, rather than in a table.
Friday, July 22, 2011
Running SSIS Packages in Denali
I was skimming "SSIS Logging in Denali" and noticed that the author used a feature that I had heard was going to be in Denali, but had forgotten about.
You can run a package using straight-up Transact-SQL.
Developers have been asking me for this ability for four or five years. Since it wasn't available, we have tried and/or implemented all sorts of stuff. Things like:
- Running packages via SQL-CLR. (Microsoft recommends that you don't do that.)
- Running packages via a call to a web service running on IIS that is co-located on the same server as SQL Server. (That is a lot of extra work, installation and configuration. It is more stuff that can break and it is more code to maintain.)
- Setting up a job and letting developers have all sorts of permissions just to test a package on a server. (That is more configuration to do and you are going to wind up with jobs whose purpose has been forgotten as developers transition into and out of projects.)
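For the curious, here is a minimal sketch of what the new Transact-SQL approach looks like against the SSISDB catalog in Denali. The folder, project and package names are placeholders for whatever you have deployed; the catalog procedures themselves are the real ones.

-- Assumes a project has already been deployed to the SSIS catalog (SSISDB).
-- Folder, project and package names are placeholders.
DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
    @folder_name  = N'MyFolder',
    @project_name = N'MyProject',
    @package_name = N'MyPackage.dtsx',
    @execution_id = @execution_id OUTPUT;

-- Optional: run synchronously so the calling session waits for the package to finish.
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id    = @execution_id,
    @object_type     = 50,               -- 50 = execution-level parameter
    @parameter_name  = N'SYNCHRONIZED',
    @parameter_value = 1;

EXEC SSISDB.catalog.start_execution @execution_id = @execution_id;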
Denali looks better and better to me all of the time.
(Plus, all of that logging is good too.)
Saturday, July 16, 2011
Quickly adding new columns to tables in Denali
More and more, I am starting to really like what I see in Denali.
The other day, this headline caught my eye: "ONLINE NON-NULL WITH VALUES COLUMN ADD IN SQL SERVER 11".
In short, it will be possible to quickly add a new non-nullable column to a table. The operation is quick because it is a metadata-only change. Before this feature, you had to wait while the server physically manipulated every row in the table, which meant exclusive table locking and possibly page splits and a lot of disk I/O.
This feature would be an absolute boon in a 24x7 system and it should be handy for avoiding lengthy schema upgrades on any system with large tables.
You can read about this feature in detail here.
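For illustration (the table and column below are made up), the statement itself is just an ordinary column add with a default; on Denali Enterprise Edition, as I understand it, this completes as a metadata-only change because the default is a runtime constant:

-- Hypothetical table; existing rows are not physically rewritten,
-- they simply pick up the default value as metadata.
ALTER TABLE dbo.Orders
    ADD IsArchived BIT NOT NULL
    CONSTRAINT DF_Orders_IsArchived DEFAULT (0) WITH VALUES;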
Thursday, June 2, 2011
A quick review of SolidQ's latest e-book, on Data Quality and Master Data Management
The folks at SolidQ are providing a free e-book:
Data Quality and Master Data Management with Microsoft SQL Server 2008 R2
Dejan Sarka, Davide Mauri
I had a look through the book and I'm providing this tiny synopsis/review. The book can be found here.
Other than marketing blurbs, I haven't really been exposed to MDM until I read the book. So, I am starting more-or-less from scratch.
Chapter 1, "Master Data Management" seems to be a decent primer. It presents a background on relational and dimensional models, and where they and MDM fits in the universe.
Chapter 2, "Master Data Services Concepts and Architecture", discusses MDS in detail. I think that this gave me a good feel for how MDS is supposed to work.
Nonetheless, I think that most of that could be gleaned from other sources. The hidden gems are in the second half of the book, which comprises chapters 3 and 4, "Data Quality and SQL Server 2008 R2 Tools" and "Identity Mapping and De-duplicating", respectively. These chapters are dense with information and heavy on technique, discussing various algorithms and presenting T-SQL, CLR and SSIS tactics to find and quantify data quality issues. The math and queries can become fairly complex, and the authors state that the book is not for newbies, but I think these chapters are eye-opening with respect to what can be done.
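To give a flavor of that territory, here is my own trivial example, not code from the book, of the kind of matching logic the chapters start from before going much deeper. The table and column names are hypothetical; DIFFERENCE is the built-in SOUNDEX-based similarity score (0 = no match, 4 = strong match).

-- Pair up customers that share a postal code (a cheap "blocking" key)
-- and score last-name similarity with DIFFERENCE.
SELECT a.CustomerID AS CandidateA,
       b.CustomerID AS CandidateB,
       DIFFERENCE(a.LastName, b.LastName) AS NameSimilarity
FROM dbo.Customer AS a
JOIN dbo.Customer AS b
    ON a.CustomerID < b.CustomerID        -- avoid self-pairs and mirror pairs
   AND a.PostalCode = b.PostalCode
WHERE DIFFERENCE(a.LastName, b.LastName) >= 3;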
My biggest complaints are:
I wanted more depth on integrating MDS into new and existing LOB applications, particularly identity mapping between MDS and a (possibly large) number of LOB applications. If I have two LOB applications, each with its own surrogate key for customer data, how do I keep everything updated?
I would like to see more information on updating data that is already in MDS. For example, when a client changes addresses, do I have to manually go in and tweak things through the MDS web interface? Do I have to go through the whole import process again? What if I have 40 million customers and get thousands of changes a day? Is there a programmatic interface for doing this? Is there some way to replicate changes back to LOB apps? Do I remove the customer-address-update page from my LOB apps and rely entirely on MDS? How would CSRs submit changes to this address data?
Chapter 1, "Master Data Management" seems to be a decent primer. It presents a background on relational and dimensional models, and where they and MDM fits in the universe.
Chapter 2, "Master Data Services Concepts and Architecture", discusses MDS in detail. I think that this gave me a good feel for how MDS is supposed to work.
Nonetheless, I think that most of that could be gleaned from other sources. The hidden gems here are in the second half of the book, which is comprised of chapters 3 and 4, "Data Quality and SQL Server 2008 R2 Tools" and "Identity Mapping and De-duplicating", respectively. These chapters are dense with information and technique-heavy, discussing various algorithms. T-SQL, CLR and SSIS tactics to find and quantify data quality issues are presented. The math and queries can become fairly complex, and the authors state that the book is not for newbies. but I think that they are eye-opening with respect to what can be done.
My biggest complaints are:
I wanted more depth on integrating MDS into new and existing LOB applications, particularly with identity mapping between the MDS and (possibly a multitude) of LOB applications. If I have two LOB applications, each with their own surrogate key for customer data, how do I manage keeping everything updated?
I would like to see more information on updating information that is in the MDS. For example, when a client changes addresses, do I have to manually go in and tweak things through the MDS web interface? Do I have to go through the whole import process again? What if I have 40 million customers and I get thousands of changes a day? Is there a programmatic interface for doing this? Is there some way to replicate changes back to LOB apps? Do I remove the customer-address-update page from my LOB apps and rely entirely on MDS? How would CSRs submit change to this address data?
Another thing that I am not 100% clear on is which attributes belong in the MDS and which do not. Obviously, a customer's address belongs in the MDS, but something like a customer's account balance would not. If we are talking about products, then inventory levels do not belong in the MDS, but what about something like shelf location in a warehouse? Do parts lists make for good hierarchies? I presume that there are even finer shades of grey than the rough examples I am giving here.
Generally, I think that the book is definitely worth reading (at twice the price!) for anyone who is working with MDS or who has a data cleansing project of any kind in front of them.
Tuesday, May 24, 2011
HTML5 Talk at FIS in Malvern
I attended the inaugural meeting of the Philadelphia Google Technology User's Group (PGTUG) last night. The content of the meeting was a talk given by @kurrik, a Google Programs Engineer, on new HTML5 features. I was the only attendee brave (or uncool) enough to sit in the front row.
While I am an admin by definition, I am a coder at heart; as evidence, I can submit thousands of Transact-SQL and PowerShell scripts. Even so, web development isn't my 'thing', so it was interesting to see what the development community is up to. This was particularly true since the talk was Chrome-centric; I drink plenty of Microsoft Kool-Aid as it is.
The talk was fast-paced, with 68 slides in less than 90 minutes, including demos. The talk was nominally about items in the HTML5 specification. While competing products were mentioned, it might be more accurate to say that the talk was on Chrome's implementation of certain HTML5 features.
The talk did not get bogged down in detailed specifics. It was aimed at developers and assumed a basic understanding of web coding in JavaScript and CSS. I could not have written any of the presented code from scratch, but I have enough coding experience to comfortably follow along. It seems that I am qualified to be the world's oldest junior web programmer.
The talk covered three general areas: file, graphics and audio. The items that I found the most interesting were:
File
The ease of loading files that can then be manipulated with JavaScript, support for drag-and-drop file operations, and local file systems. There was a brief mention of planned support for file system quotas.
Graphics
Demos covered image transformations, 3D modeling and the ability to upload code and data to the GPU. GPU access seems like a powerful tactic that could be exploited for computation and not just eye candy.
Audio
Demos covered playing audio files, waveform generation and audio processing (such as beat detection). Basically, you could write your own music visualizations with straight browser code.
My overall take-away was that there is more and more of an operating system built into modern browsers.
Many, if not all, of the demonstrations can be found here and @kurrik encourages you to follow him on Twitter.
Friday, January 21, 2011
Designing Highly Scalable OLTP Systems
I have been spending time watching SQL Bits videos lately. So far, the video that I find most interesting is "Designing Highly Scalable OLTP Systems". The material was presented by Thomas Kejser and Ewan Fairweather, who are both members of the SQLCAT team at Microsoft.
Several years ago, I saw a system with the compilation lock behavior that they describe in their talk. We happened to use the same method that they suggest to remediate the problem with our system. Since the code was in stored procedures, making these changes was not difficult. We measured a transactional throughput increase, but it is always nice to get additional, real-world validation.
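For anyone who has not run into compile locks before, the textbook remediation, which may or may not be exactly what the presenters or we did, is simply to schema-qualify procedure calls. The procedure name here is hypothetical:

-- Calling a procedure without schema-qualifying it can force SQL Server to
-- take a COMPILE lock while it resolves the name, which serializes callers
-- under heavy concurrency:
EXEC usp_GetOrder @OrderID = 42;

-- Schema-qualified, the cached plan is matched directly and the lock is avoided:
EXEC dbo.usp_GetOrder @OrderID = 42;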
I am also intrigued by the "hash partitioning" strategy used to avoid hot pages in tables.
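Roughly, the idea is to add a persisted computed "bucket" column derived from the key and partition the table on it, so concurrent inserts spread across several B-trees instead of all hammering the last page of one. A minimal sketch under those assumptions (hypothetical table, eight buckets, and not the presenters' exact code):

-- Eight hash buckets, one partition each.
CREATE PARTITION FUNCTION pf_hash (TINYINT)
    AS RANGE LEFT FOR VALUES (0, 1, 2, 3, 4, 5, 6);    -- 8 partitions

CREATE PARTITION SCHEME ps_hash
    AS PARTITION pf_hash ALL TO ([PRIMARY]);

CREATE TABLE dbo.EventLog
(
    EventID  BIGINT IDENTITY(1,1) NOT NULL,
    Payload  NVARCHAR(400) NOT NULL,
    Bucket   AS CAST(EventID % 8 AS TINYINT) PERSISTED NOT NULL,  -- hash bucket
    CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (Bucket, EventID)
) ON ps_hash (Bucket);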