Friday, September 9, 2016

SQL Sentry's Plan Explorer now free; The other SSMS tools I use

This is great:

SQL Sentry's Plan Explorer(tm) is now free for all users.

I haven't been doing much plan analysis lately, as I have been working largely with legacy SQL Server 2008 R2 (and earlier) code. I did spend a lot of time with the free version of Plan Explorer about a year and a half ago. I'm unaware of a better tool for its specific task.

While I am on the subject, the other things that round out my SSMS extensions are:

I've been experimenting with the ApexSQL Refactor and Search tools lately, but I haven't committed to them yet. I like the "reformat" that Apex provides. It is much more flexible than the "Poor Man's formatter", but, if I'm honest, I must say that I'm a little bewildered about how best to configure it for the way I like to see code. The Apex Search tool seems like a good replacement for Red-Gate's search, but I can't say that I've found anything fatally wrong with Red-Gate's search.

Friday, May 6, 2016

Windows Administration and Trends in (PowerShell) Scripting

Jeffrey Snover is the person most responsible for PowerShell, which revolutionized my approach to database administration. He has a background in scripting going back to VMS (IIRC), which is how he came to create PowerShell. His Twitter bio lists "Microsoft Technical Fellow/ Lead Architect for Enterprise Cloud Group/ Azure Stack Architect/ PowerShell Architect / Science fan".

From my point of view, he directs strategy for Windows administration. I listen to him because the things he talks about are likely to influence my work life, and they give insight into how the PowerShell team expects people to use their product.

The podcast covers a variety of things in a light way. The thing that grabbed my attention the most was that Snover seems to be saying that Windows will be implementing things similar to what Linux does with root and sudo. (My Linux experience is limited, but my two takeaways are: you never log in as root, and sudo controls what your 'day to day' login can do.) Imitation is the sincerest form of flattery, as they say.

Beyond that, it seems that I should be working towards two goals.

One of those goals should be to get my code into an open repository.

I have been using version control for many years, but I have always kept "my code" in a closed repository.

Initially, I used Subversion. At first, I ran my own server (on an old Sun workstation) out of my home office. I moved to a cloud-based solution after experiencing a number of power outages. (I just logged into my old Subversion repository for the first time in well over a year. It has over 3,000 check-ins.)

In the spring of 2014, Subversion was seeming old-fashioned, so I tried Git for a while. That was during an extremely slow period for changes to my code, so I never got very far into it. I documented the start of that period with this blog posting.

I've been using Microsoft's TFS Online for the last couple of years. This seemed a natural fit because I spend a lot of time in Visual Studio, it provides feature and bug tracking, and it is a cloud-based solution. Since then, Visual Studio has come to embrace Git.

TFS Online seemed like a pretty hot technology in 2014, but I feel like I've missed the boat with GitHub. The current trend seems to demand use of GitHub. The work required to move my code from TFS Online to GitHub is large. There are over 170 files of varying complexity, with a few being modules with many functions. I work very hard to keep client-specific things out of "my code", but I would need to vet everything again. 

I did do a pass with ScriptCop through much of that code in 2015. I fixed most of the things that ScriptCop spotted. Other than that, much of this code hasn't been looked at in years.

I don't want to split my code between different repositories. I like being able to install one set of tools and I don't want to get into situations where I'm looking for something that is in the other repository.

The other goal is to start testing operational environments the way developers test their code. In my case, that means testing my scripts the way developers test theirs. :-/

I fiddled around with PSUnit when that was the hot thing, but I never integrated it with my day-to-day work. The current hot technology for this is Pester, and I'd like to do more with it.
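
To keep myself honest, here is roughly what a first Pester test would look like. This is only a sketch: Get-DiskFreeSpace and its checks are made-up stand-ins for one of my real functions, but the Describe/It/Should layout is the part I would be adopting.

    # Get-DiskFreeSpace.Tests.ps1 -- minimal Pester (v3-era) sketch.
    # Get-DiskFreeSpace is a hypothetical function standing in for one of mine.
    . "$PSScriptRoot\Get-DiskFreeSpace.ps1"

    Describe 'Get-DiskFreeSpace' {
        It 'returns an object with a FreeGB property' {
            $result = Get-DiskFreeSpace -ComputerName 'localhost'
            $result.FreeGB | Should Not BeNullOrEmpty
        }

        It 'throws on an unreachable computer' {
            { Get-DiskFreeSpace -ComputerName 'no-such-host' } | Should Throw
        }
    }

Running Invoke-Pester from the folder that holds the .Tests.ps1 file picks the tests up and reports pass/fail counts.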

Implementing Pester (or any testing framework) "for real" would be a lot of work. My earliest code was written in the PowerShell 1.0 and 2.0 era. Other than Transact-SQL, my coding at that time was mainly in DOS batch and Windows Scripting Host's version of VBScript. PowerShell was a new thing and it was not obvious that it would be as successful as it has been. My PowerShell scripts were not written with testability in mind. The technical debt is enormous. Layers of code are based on that work. Testing nearly anything seems to require reworking layers of code. Changing that code breaks other code. Since there is no testing framework, those bugs aren't noticed until I happen to run them.

In short, it looks like I can't do anything without doing everything, and there is no time budget for that. When I work on the testability of recent code, the going seems slow and there isn't much impact; working on that code is not my "day job", and I can't keep the enthusiasm going.

Thursday, April 30, 2015

Some Quick Troubleshooting in the Home Office

I spent some time troubleshooting problems in my home office tonight. 

First, I realized that the Windows 10 CTP that I've been running on Brix since the fall was somehow using the "Performance" power profile. I'm not sure if Windows did that to "help me" or if I did it when I was fiddling with things. Brix is a desktop (sort of -- it actually sits on my subwoofer), so saving power isn't a big deal, but this setting would have led to higher processor operating frequencies, more heat output and a louder fan. I do care about louder fans. One of the reasons that I bought Brix was to get away from my noisy old Core 2 Duo. When I set the power plan to "Balanced", Brix quieted down immediately.
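
If I ever need to do this again, I won't bother digging through the Control Panel; the stock powercfg tool handles it from a console. (SCHEME_BALANCED is a built-in alias for the Balanced plan.)

    # Show the currently active power plan.
    powercfg /getactivescheme

    # Switch to the built-in Balanced plan.
    powercfg /setactive SCHEME_BALANCED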

This will probably be more impressive once the temperatures start to climb as we head towards summer.

Second, I got down to business and finally looked at the time/clock problem I've been seeing around my home office. A while back, I had noticed that Brix was about 1-2 minutes behind my mobile phone. After a month or two, that discrepancy had turned into 3-4 minutes. Last night, I noticed that my laptop was also wrong.

The first thing I did was cross-check my mobile with a client's computers. Their time matched my mobile. Also, the clock on my cable box matched my phone. So, my mobile is right and my computers are wrong.

The next thing to check was the time on the AD domain. Unsurprisingly, the time was wrong on the domain. This meant that the time on my domain controllers (Hera and Zeus) was probably wrong. Yup, both of them were wrong when I looked.
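
Checking this is quick with the built-in w32tm tool; roughly, from any domain-joined machine (pool.ntp.org here is just a stand-in for any outside time source):

    # List the domain controllers and their clock offsets.
    w32tm /monitor

    # Compare this machine's clock against an outside NTP source.
    w32tm /stripchart /computer:pool.ntp.org /samples:5 /dataonly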

I'd never really thought about time synchronization issues here at my home office. The last time I dealt directly with NTP was on my old Sun workstation/server. I got rid of that over six years ago. I had set up my VMWare host and built my tiny domain in the summer of 2013 and it had all just worked since then. 

I don't need super-precise time synchronization, but I have to draw the line somewhere. Maybe it was the VMWare upgrade that did it, but I can't brook a five-minute clock discrepancy between my computers and the rest of the world.

Whenever I have an unfamiliar problem, the first thing I do is break out Google and spend five minutes researching my situation.

Simply put, my VMWare host did not have the NTP service turned on. So I configured it and then set it running. Next, I configured VMWare Tools to set the time on Hera and Zeus, my two domain controllers. VMWare Tools should set the time on the domain controllers to match my VMWare host every 60 seconds. That should be more than often enough to fit my needs. Finally, I forced my workstation, Brix, to update its time from the domain because I was impatient.
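
For the curious, the host-side piece can also be scripted with VMware's PowerCLI rather than clicked through the vSphere client. This is a rough sketch rather than what I actually ran; "esxi01" and the NTP server are placeholders.

    # Rough PowerCLI sketch -- host name and NTP server are placeholders.
    Connect-VIServer -Server 'esxi01'
    $vmHost = Get-VMHost -Name 'esxi01'

    # Point the host at an NTP source.
    Add-VMHostNtpServer -VMHost $vmHost -NtpServer 'pool.ntp.org'

    # Set the ntpd service to start with the host, then start it now.
    Get-VMHostService -VMHost $vmHost |
        Where-Object { $_.Key -eq 'ntpd' } |
        ForEach-Object {
            Set-VMHostService -HostService $_ -Policy 'on'
            Start-VMHostService -HostService $_
        }

    # And on the impatient workstation, force a sync from the domain.
    w32tm /resync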

And there I was, with synced time on all of my computers. I had the whole thing sorted out in about 15 minutes.

Who needs a sysadmin, anyway? Am I right? 

Tuesday, April 1, 2014

What are DACPACs and how do I use them?

I had someone ask me about DACPACs recently.

DACPAC technology had fallen off of my radar after I had seen demos of the feature at a SQL Saturday many years ago.

In short, a DACPAC is a file that contains definitions for the objects of a database. This file can be used to create new databases or update old databases to a new version. For the most part, existing data in the updated database should be preserved. DACPAC technology is intended to replace the bundles of .SQL scripts and the giant "Hail Mary" scripts that are often used to update databases.
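
In practice, "using" a DACPAC mostly means handing it to SqlPackage.exe, the command-line tool that ships with SSDT and the DAC Framework. A minimal publish looks roughly like this; the paths, server and database names are placeholders, and the SqlPackage.exe location varies by version:

    # Create or upgrade a database from a DACPAC (paths and names are placeholders).
    & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" `
        /Action:Publish `
        /SourceFile:"C:\builds\MyDatabase.dacpac" `
        /TargetServerName:"MYSERVER" `
        /TargetDatabaseName:"MyDatabase"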

At the time of that SQL Saturday demo, which was probably in the SQL Server 2008 timeframe, DACPAC technology was new and there were a lot of gotchas. IIRC, it was presented as a way to create databases in SQL Azure.

A few weeks ago, I noticed that SSDT was creating DACPAC files. I've long been a user of the database comparison tool provided by SSDT and other "Data Dude" descendants, but I didn't give the "freebie" DACPAC file that SSDT generated for me a second thought.

Since someone asked, I thought that I would spend some time researching this and write it up for the blog.

Things in the DACPAC universe seem to have improved substantially, and I found the following links useful:

According to the wiki, the DACPAC scheme still makes a copy of the target database and subsequently deletes the old one. That might be a showstopper for many large databases and was the main reason that I banished DACPAC to the back of my brain. However, I don't see many complaints on the web about this and that made me suspicious. In some very limited testing of a single-table database with four columns, I did not see any creation of mysterious databases. Perhaps the situation has changed with SQL Server 2012 and the wiki is out of date?

(While performing my limited tests, I noticed that a certain amount of downgrade-ability might be feasible. By simply applying the DACPAC to my database, I could remove a column. I'm not sure how I can exploit this, or if it would work on a non-trivial database, but this is the sort of thing I like to keep in the back of my head as a possible future trick.)
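
If I understand the publish options correctly, the column drop probably went through because my toy table held no data; SqlPackage's BlockOnPossibleDataLoss property defaults to True and stops a deployment that would destroy existing data. I haven't tested this beyond the toy database, but overriding the block would look something like this:

    # Allow a publish that would drop data by turning off the default block -- use with care.
    & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" `
        /Action:Publish `
        /SourceFile:"C:\builds\MyDatabase.dacpac" `
        /TargetServerName:"MYSERVER" `
        /TargetDatabaseName:"MyDatabase" `
        /p:BlockOnPossibleDataLoss=False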

In short, DACPAC seems to have matured into a viable deployment strategy. I intend to look for situations where I can use this technology to improve the speed and quality of deploying new database versions.

Saturday, January 4, 2014

SugarSync Is No Longer Quite So Sweet

Over the last few years, I've spent more time following business issues in the IT industry than I had when I was younger. Perhaps it's just another mark of getting older.

Over the holidays, a news item caught my eye: SugarSync has stopped giving away free storage.

It's the hallmark of any new technology or developing area of business:

  • Someone comes up with something new. 
  • Many other players pile into the new space.
  • Some time passes.
  • Players that can't make money (or generate the growth numbers they want) leave the new field and the space consolidates.
  • The consolidation can go down to just a few companies. Those companies do not always include the one that created the space.

We have seen this with cars (in the first part of the 20th century, there were hundreds of small manufacturers in the US alone; now there are only two behemoths headquartered here and a few dozen firms based overseas). Other technologies (radio, television, ISPs, RDBMSs, hard drives, and personal computers, to name a few) and business areas (department stores, malls, convenience stores, hardware stores) have gone through the same evolution.

In the case of consumer-oriented, internet-based, replicated storage, I believe that DropBox was first and SugarSync followed. (I'm ignoring things like rcp, unison and such. Even though they have been available for decades in some cases, they never caught on past the sysadmin or power *nix user crowd.) With SugarSync changing its business model, I can't help but wonder if we are seeing more evidence of "the internet" turning away from the free-for-all model (so to speak) to the same old charge-'em-at-the-door model that we've seen for generations. Periods of consolidation can presage increased profitability for the survivors.

A few years ago, I was an avid SugarSync user. I used their Windows and Android clients. I chose SugarSync over DropBox, Box, Skydrive and Mesh because SugarSync had an Android client and the most liberal policy with respect to free space. IOW, they gave away more GB than their competitors.

At some point, the Android client stopped syncing my photos properly. At that time, both Google Drive and Microsoft Skydrive became viable alternatives. I need Google and Microsoft for other reasons, so SugarSync was the odd man out. I retired my SugarSync clients.

With industry heavyweights giving away storage space, SugarSync has a hard row to hoe. Microsoft and Google can afford to give away space as a sort of loss leader. In the long run, I think that the online editing provided by Microsoft and Google will become increasingly attractive to users. DropBox has name recognition and is widely supported by popular apps. Box has the enterprise orientation that DropBox doesn't (at least not yet). I could see a larger company buying up DropBox or Box. I am not sure where SugarSync fits into the market five years down the road.

I would say that the best path for any of these small firms would be an acquisition by a larger player that already provides some sort of SaaS and supports multiple platforms. This leaves Apple and Microsoft out, but a consumer-focused organisation like Yahoo might do nicely. Amazon might find it easy to integrate SugarSync into their S3 storage offering and it might be worth something to them if the costs are right.

Tuesday, September 10, 2013

Entering the Brave New World of Google+

It's been a long time since I've done anything interesting with the blog, and it seems a little stale.

With some prompting from Blogger's "Buzz" blog, I'm hooking this blog up to a Google+ account. This is a bit of a brave new world for me, since I've confined myself to LinkedIn, Twitter and a few other sites. 

At the least, I hope that this will help comments on the blog be more dynamic and that a few more people might find these posts. Perhaps we can get a few more conversations going as well.

Friday, September 6, 2013

A Short Tale of Troubleshooting a PS/2 keyboard and a "Legacy Free" System

I recently found an IBM Model M keyboard at my local Goodwill store. There is rarely any computer hardware in there worth having, so this was quite a surprise.
As far as I can tell, this Model M has never been used even though it was made in 1993. It came in what appears to be the original box, with a styrofoam insert. The coiled, un-stretched cable still has plastic wrap on it. There is no visible wear on the keys. There are no finger grease stains. It is a virgin, IBM-branded, Lexmark-manufactured beige beauty. Even the box doesn't look 20 years old.

The Model M was built before USB was a thing, so it has a PS/2 port. My main rig has a "legacy free" motherboard, so it doesn't have PS/2 ports. It only has USB ports.
The keyboard I have been using lately is from a long-gone Micron computer. I bought a computer from Micron in the 1990s, and the only remaining evidence of that is the keyboard. (Micron was a popular computer vendor at that time, though there isn't much evidence of that anymore.) Normally, I use a PS/2-to-USB converter to connect my Micron keyboard to my computer. That has worked great for several years.

When I unplugged the old Micron and plugged in my new Model M, the keyboard was unresponsive. No lights, no nothing. Worse, the mouse that is plugged into the same PS/2->USB converter also stopped working.

In the spirit of the IT Crowd, I turned it off and turned it on again. No change in behavior. I plugged my old keyboard back into the computer. Everything works. This wasn't the best way to start the week.

I thought that my siren-like keyboard was dead, but I carried on and kept experimenting. It turns out that if I plug the Model M into the PS/2 port on my Thinkpad dock, the keyboard works.
The PS/2->USB converter I was using is an Adesso Ez-PU21 "Smart Adaptor". It's got a snazzy logo, a serial number and some other information written on it. I've been using it for so long I don't even remember where I got it from.

While researching the problem, I found a detailed explanation of how the PS/2 interface works. It has links to a history lesson on keyboards and detailed information on the communications protocols used for keyboards and mice, keyboard scan codes and more. There is a more approachable article describing the PS/2 interface on Wikipedia.

The new Model M has four pins in its connector; my Micron PS/2 keyboard has six pins, and so does my mouse. I have another, older and grungier Model M that also has six pins. The two pins that are missing are shown as "not implemented" on all of the PS/2 connector diagrams that I can find. The two "extra" pins were sometimes used to implement non-standard mouse-and-keyboard combination cables. Those missing pins shouldn't make a difference, yet they do.

I dug out the other, no-name, beige-grey converter that I own. I had thrown it into the bottom of my parts pile years ago, last using it with my Pentium III/933 desktop. There is no writing on it other than "Made in China" and the date it passed quality control in June 2006. I've got no idea who made it. It works. No problem.

Once again, persistence wins over genius. I've got a great, "new" keyboard from 1993.