Wednesday, December 26, 2012

How I Migrated My Source Code from Subversion to Git

While at University in the eighties, I didn't learn much in the way of useful software development tactics or strategies. It was just a question of fulfilling the homework requirements and getting past the exams. I was an EE student and not a ComSci major, after all. When I started my first job out of college, which was as much about programming as it was engineering, I was introduced to version control in the form of the PolyTron Version Control System, or PVCS. At that time, I worked in a two-person coding group. PVCS kept us from stepping on each other's work and allowed us to track our changes, using nothing more than the legendary MS-DOS command line.
 
When I moved on to my second job, the coding environment was more complex and chaotic. I was surprised that the version control strategy consisted of occasionally pkzipping the source files. The product itself was somewhat of a muddle. I've worked on many projects since then, and it always seems that the better-run projects have their VCS strategies mapped out. The development team's understanding of how to best use a VCS seems to correlate with the quality of the system being built. If the developers use the VCS as a dumping ground, with little distinction between it and a group of folders on a file share, it is often true that the project itself will be a mess. Ditto if checking code in happens infrequently, as some sort of afterthought. Ditto for lack of integration testing, among other things. As time has gone on, it seems that more and more teams do avail themselves of a VCS and do learn the "ins-and-outs" of their chosen VCS. I believe that the large number of distributed, open-source projects that exist today has contributed to this positive development.
 
I've used a number of different VCS over the years. As I mentioned, PVCS was first. (I once interviewed for a job that entailed providing integration between PVCS and Visual Basic.) The second one was a version control system on SunOS called SCCS. IIRC, SCCS was extremely basic, with a pedigree that dates back to the seventies. I spent a couple of years working with SCCS and makefiles. I helped evaluate ClearCase for a few days. (We didn't adopt it. IIRC, it was pretty neat, but it took up too many resources on our HP minicomputer and we couldn't afford enough licenses for our development team.) I used Microsoft's SourceSafe and TFS. I managed to avoid using CVS for much of anything.
 
My go-to VCS software for the better part of the last decade has been Subversion. I like Subversion for a few reasons. It is tiny and easy to install. Creating a repository is so easy that you can create a repository for a project that will only run for a day or two and throw the repository away afterwards. Subversion has built-in diff and merge tools, making it easy to see what you have changed between commits. It is easy to set up a server for Subversion. I used to run my own Subversion servers, on top of apache on linux, using old Sun and Dell workstations and dealing with firewalls, port forwarding and DDNS. Eventually, after a series of power failures at my apartment, I moved my Subversion repository to a freebie online service. Running my own server was just too much effort and not enough reward. No one was going to hire me to run their Subversion server for them, so there wasn't much use in practicing all the time.
 
So, why move away from Subversion? I have been contemplating such a move for a couple of years. One (weak) reason is that I'm not using my repository as much as I used to. My Subversion repository has 3043 commits, spaced out over seven years. I haven't been committing very much this year (the last change was in July) as I find myself doing less and less interesting stuff. In addition to fewer commits, I'm keeping less in my Subversion repository than I used to. I'm relying on Microsoft's SkyDrive and similar products for some of the stuff that I used to keep in my cloud-based Subversion repository. For example, I've moved my copies of the SysInternals utilities from my Subversion repository to SkyDrive.
 
A stronger reason is that the popularity of Subversion seems to be dwindling in favor of Mercurial and (particularly) Git. Git has a lot of weight behind it because it is used by the linux kernel developers. Both systems are interesting in that they don't rely on a central point of control. Instead, everyone has a full copy of the repository, and updated source code is passed around in a more egalitarian, peer-to-peer fashion.
 
I'm not particularly worried about sharing my own stuff (it isn't groundbreaking or anything), but I think that knowing Git is more useful than knowing Subversion nowadays. I certainly see Git mentioned in more job want ads than Subversion. I don't think that you can really learn something without using it frequently (at least monthly, preferably daily), so it's time to chuck out Subversion and pick up Git. 
 
I'm not going to recite each detail of the process here, as there is plenty of information available online (you've probably already seen much of it) and no one has asked me for details. I'm only going to provide my highlights and observations.
 
Moving my repository from Subversion to Git was simple. Basically, Git supports direct importation of Subversion repositories via the "git svn clone" command. All you really need for the import is the Git software itself. I installed the software, did a little prep work and ran the import process.
 
(It also seems that you can use Git as a sort of front-end to a shared Subversion repository. This way, you can use the Git tools without forcing a migration away from Subversion. This is an interesting idea and it certainly might be useful on large projects. I'm an army of one here, with no other person's wishes to get in the way. I'd rather keep things simple, so I did not explore this feature.) 

Installing the Windows version of Git is as easy as downloading an installer from a web site and running it. (You don't need a full-blown linux-like environment like Cygwin. I have used Cygwin before and Cygwin (particularly the X server) is great, but if you don't need a lot of linux "stuff" and you don't need bash, then Cygwin is overkill.) If you are on Windows, you want to make sure that you are running a recent enough build of Git. If not, the Subversion functionality may not be included. If you are on linux, it is possible that you already have git on your system. You may need to install additional packages such as git-core and git-svn. Exactly which packages must be installed, and the commands to install them, vary by distribution, as with all things linux.
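As a quick sanity check before going any further, it is worth confirming that both Git and its Subversion bridge are actually available. This is just a sketch; the exact version strings will vary with your build:

    # Confirm that Git and the git-svn bridge are on the path.
    # If the second command fails, your Git was installed or packaged
    # without Subversion support.
    git --version
    git svn --version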

The only pre-migration task is creating a small text file that maps users from the old Subversion repository to email addresses in the new Git repository. This was simple for me (four old users each map to one common email address), but it could be tricky in environments with many users, particularly if those users have moved on. The hardest part of creating the text file was making sure that I had an entry for each of the old users in the Subversion repository, because I wasn't 100% sure how many user names I had used over the last nine years. There are scripts available that can look through the history of the relevant Subversion repository and create a text file that functions as a starting point for you to edit.
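For illustration, here is roughly what that mapping file looks like, along with a small PowerShell snippet in the spirit of those scripts that pulls the distinct user names out of the Subversion history. The user names, email address and repository URL below are all placeholders, not my real ones, and the second part assumes the svn command-line client is installed:

    # authors.txt: one "svnuser = Display Name <email>" line per Subversion user.
    # These names and the email address are placeholders.
    'olduser1 = A. Developer <someone@example.com>',
    'olduser2 = A. Developer <someone@example.com>',
    'build = A. Developer <someone@example.com>' | Set-Content authors.txt

    # List the distinct committers in the old repository as a starting point,
    # so that nobody is missed.
    svn log --quiet https://svn.example.com/repo |
        ForEach-Object { if ($_ -match '^r\d+ \| (\S+) \|') { $Matches[1] } } |
        Sort-Object -Unique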
 
While actually running the import process, I ran into two real problems. The first problem was that the import process asks for user input at a certain point (it asks about the security fingerprint for the old Subversion repository), and it seems that the libraries used to accept that input do not work inside of a PowerShell window. My workaround was to Ctrl-C out of the attempt, close the PowerShell window and use an old-fashioned CMD console window to run the import process. That worked fine and, so far, the problem hasn't come up again; running Git in a PowerShell window has otherwise been OK.
 
The second problem cost me much more time. I used the --stdlayout command line switch because all of the examples I had seen used it. The switch causes the import process to assume a certain, widely-used layout for the source code in the Subversion repository. I did not follow this layout when I initially set up my Subversion repository and I never missed it, partly because my code is simple and partly because I haven't had the opportunity to bring others into the project. In short, with the switch, the import process looks for source files in certain locations. Since I did not set up those locations, the import process didn't find anything to import. The process simply ran for a while and then reported success without actually importing any code into my new Git repository. After spending some time swearing, I broke down and read the documentation on the command line switches. I realized that --stdlayout was trying to do something that didn't pertain to my repository. Removing the --stdlayout switch allowed the process to go forward.
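For the record, the working command ended up being roughly the following. The repository URL and target folder are placeholders, and --stdlayout only belongs there if your repository actually follows the trunk/branches/tags convention:

    # Import the full Subversion history into a new Git repository.
    # My repository does not use the standard layout, so --stdlayout is omitted.
    git svn clone --authors-file=authors.txt https://svn.example.com/repo my-repo

    # With a conventional trunk/branches/tags layout, it would have been:
    # git svn clone --stdlayout --authors-file=authors.txt https://svn.example.com/repo my-repo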
 
As an exercise, I also ran through the same import using my Linux Mint Virtual Machine (VM) running on my ESXi 5.1 server. The results were pretty much the same. Obviously, I used bash and I didn't have the PowerShell input problem. For whatever reason, running the import in the VM was around twice as fast as running it "on the metal" on my Windows 7 laptop. This is surprising because my ESXi server is pretty slow, being a six-year-old HP workstation with a pair of lowly 5150 Xeons.
 
Now that I have my repository out of Subversion and in Git, all I have to do is get used to the Git workflow and command line syntax.

Thursday, December 13, 2012

Notes from PSSUG: SQL Server on Windows Azure and Visual Studio Debugging & Source Control


This blog entry consists of my notes and thoughts from the PSSUG meeting at Microsoft's offices in Malvern on 2012/12/12. Any misunderstandings are mine. The evening was broken up into three parts.

First, Kevin Howell presented "Visual Studio 2012: A Complete IDE (Debugging and Source Control)". He covered some of the enhancements since Visual Studio 2010. He did a quick demo of server-side debugging of a stored procedure, and pointed out a few configuration changes that you must make to your solution before you can use this feature.
Kevin particularly likes the schema comparison feature, now part of SQL Server Data Tools, which I think debuted back in Visual Studio 2005 or 2008 as an add-on/downloadable thing. He points out that you need to be using the Premium or Ultimate editions of Visual Studio 2012 to use the schema comparison feature. The comparison feature does seem greatly improved, even over Visual Studio 2010, and is light years beyond what was available five years ago. To me, the VS interface still looks very busy, and it seems hard to know where to look for particular things.
The solution/localdb duality also seems a little confusing. I presume that a developer can make changes to the solution without updating the database running on localdb. When you then compare the localdb database to the "production" database running on a server somewhere, you will not see those changes. Microsoft does not seem to believe in shared database development environments.

One of the things that Kevin did was use the "Clean" menu pick before a "Build". I'd never noticed Clean before, even though I've used SSDT's predecessors to reverse engineer scores of databases. If you are having trouble with odd build errors, you might want to give Clean a try.

With the new Visual Studio comes a new version of TFS. (I keep meaning to switch from Subversion to Git.) It seems that you need to be running SQL Server 2008 R2 to host TFS 2012 and that you need to have Full Text Search (FTS) installed. (You need FTS to use TFS. Is there no coordination between acronym teams at Microsoft?)
Second, there was some time spent on PSSUG business.
  • The election of board members was completed.
  • Upcoming PSSUG and PASS events were detailed. Most importantly, as far as I am concerned, SQL Saturday #200 will be held in June 2013. There is some talk of having more sessions than last year, and perhaps some sessions that could be longer and more detailed than the one-hour sessions that have been the rule in the past. In any event, it should be a great day, as many people will have much practical experience with SQL Server 2012 by then. IIRC, the last SQL Saturday held here filled up a month or more in advance. You can register now.
  • The Microsoft Technology Center (MTC) in Philly has created a web site to provide MTC news and videos shot during PSSUG presentations. (Microsoft's offices are really in Malvern, but I'll let that slide.)
  • This might only be news to me, but SQL PASS will be held in Charlotte next year. Perhaps I can scare up the budget for Charlotte; it might even be within driving distance of Ashdar World Headquarters here in Downingtown. Seattle is awesome, but it's a big drain for a small business.

Third, Rodger Doherty of Microsoft presented on "SQL Server on a Windows Azure Virtual Machine". He points out that SQL Server 2008, SQL Server 2008 R2 and SQL Server 2012 are supported. Since we are dealing with Virtual Machines (IaaS) and not SQL Azure (PaaS), I would guess that older versions of SQL Server would run. However, if you have functionality or performance problems, you should not expect to get help from Microsoft.

While Windows Azure provides some redundancy due to the way data disks are implemented, it does not provide a complete High Availability (HA) solution by itself. For true HA, you must use features provided by SQL Server. There is no support for Windows Failover Clusters, but database mirroring and AlwaysOn are supported. (The one quirk is that AlwaysOn listeners are not supported "yet". This might be more of a networking/VPN problem than a SQL Server problem.) The biggest HA hurdle to get over is latency, especially if you want to be in more than one region.
I'm 95% sure that log shipping still works, too. Microsoft doesn't seem to talk about log shipping very much anymore, but it occupies a warm spot in my heart. Log shipping is so non-fancy that I think it would be hard to break. If Microsoft's version has a problem, you can always write your own log shipping implementation in a day if you need to.
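To back up that claim, a roll-your-own version really is only a handful of commands; something like the following sketch, run on a schedule, covers the mechanics. The server names, paths and database name are placeholders, it assumes the secondary has already been seeded with a full backup restored WITH NORECOVERY, and a real implementation also needs retention, alerting and monitoring:

    # Minimal home-grown log shipping: back up the log on the primary straight to a
    # share on the secondary, then restore it there WITH NORECOVERY so that later
    # log backups can still be applied.
    $db    = 'MyDatabase'
    $stamp = Get-Date -Format 'yyyyMMdd_HHmmss'
    $file  = "\\secondary\logship\${db}_$stamp.trn"

    sqlcmd -S primary -Q "BACKUP LOG [$db] TO DISK = N'$file' WITH INIT;"
    sqlcmd -S secondary -Q "RESTORE LOG [$db] FROM DISK = N'$file' WITH NORECOVERY;"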

Licensing, as always, demands some attention. Right now, you have two options. There is "pay by the hour" licensing, in which you select from a gallery of images. This works pretty much like AWS. These images are sysprepped, apparently just like you would prepare them for a local, multiple-install scenario. It's important to realize that, once set loose, the image will first go through the usual steps of a sysprepped image and may reboot several times before the VM is ready for use. This process could take as long as 10 or 15 minutes but it is still much faster than going through a requisition process for a new local physical server.

If you are on Software Assurance, you can opt for "bring your own" licensing. This means that you could possibly do a p2v of a machine, copy the VHD to Azure and start running it on Microsoft's equipment.

The big part that seems to be missing is something like AWS's "reserved instances", where you can get a discount from the hourly rate if you commit to a longer period of time. There doesn't seem to be support for MSDN license holders yet and TechNet members are definitely shut out, though I do see references to "free" account offers floating around every few months. With AWS giving away small VM instances and a price war looming or already in effect, it would be great for Microsoft to follow suit.

As far as running SQL inside of Windows Azure, one should not expect high-performing I/O. You should expect more latency to the disks than you would get in a local setup. It seems that performance is more in line with consumer-grade SATA drives than 15K RPM SAS drives or SSD. While there is an option to enable write-caching on drives that you put SQL Server data on, you shouldn't do that. It can lead to corrupted databases. If you need speed, you should probably work to ensure that the memory allocation for the VM can hold the entire database.

Each VM has a "D:", which is a non-persistent drive that is intended for data that does not have to survive a reboot. You should consider putting tempdb on these D: disks. If you put tempdb on the "C:", it will count against your storage and you will pay for it. Since the "D:" comes up with nothing on it (not even directories), you need to do a little work to create the directory path and relevant ACLs before SQL Server starts and tries to build the tempdb database somewhere.
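The usual approach, as I understand it, is a small script that runs at boot, rebuilds the folder and then starts SQL Server, with the service itself set to manual start. Here is a sketch; the path and the 'NT SERVICE\MSSQLSERVER' account are assumptions that will vary with your instance, and it presumes tempdb has already been pointed at that path with ALTER DATABASE:

    # Recreate the tempdb folder on the non-persistent D: drive, grant the SQL Server
    # service account rights to it, and only then start the (manual-start) service.
    $tempdbPath = 'D:\SQLTemp'

    if (-not (Test-Path $tempdbPath)) {
        New-Item -ItemType Directory -Path $tempdbPath | Out-Null
        icacls $tempdbPath /grant 'NT SERVICE\MSSQLSERVER:(OI)(CI)F' | Out-Null
    }

    Start-Service MSSQLSERVER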

Rodger kept using the word "blade". This might be a colloquialism or it might mean that Windows Azure is actually running on blade servers. If so, this just underlines the fact that these systems are not meant for high-performance, bleeding-edge databases. During the Q&A session, he cautioned against moving very large data warehouse implementations to Windows Azure, though smaller data warehouse implementations may be fine.
Pricing is by "VM size". You have a small number of choices. This is similar to the way that AWS started. AWS has added options as time has gone on and demand has picked up. There is no customization of one of these sizes. You can't say "I want lots of cores and RAM, but not so much disk space".
Page compression is supported in Windows Azure VMs. You do still need to be running Enterprise Edition SQL Server, however. The CPU performance penalty seems to be something on the order of 5%. YMMV.
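As a reminder of what turning it on looks like, a rebuild along these lines does the trick; the server, database, table and index names here are invented:

    # Rebuild a table (and one of its indexes) with page compression.
    # Enterprise Edition only; all object names are placeholders.
    sqlcmd -S myserver -d MyDatabase -Q "ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);"
    sqlcmd -S myserver -d MyDatabase -Q "ALTER INDEX IX_BigTable_Customer ON dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);"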

Rodger stressed that these servers are on the open Internet and that you should be careful about opening firewall ports and choosing passwords. If you are running SQL Server on the default port (1433), your SA account will get probed. It is suggested to use port mapping and unusual ports for common services like RDP or SQL Server. If you can avoid opening those ports to the world, that would be better. There is a VPN available, but it is only compatible with certain hardware VPN devices at this time. Such devices aren't necessarily expensive, costing perhaps a few hundred dollars US, but they would tie your access down to one physical location and that might not be easily workable if you have several remote DBAs or are a traveling demo person.
In any case, I would advocate (re-)reading any SQL Server security best practices document that you can find before you start creating servers.

Another caution is that this new virtualization environment isn't officially released yet. That means that Microsoft may reset the environment in some way, nuking your stuff. Unless you can tolerate a couple of days of downtime while you rebuild everything, putting production systems up on Windows Azure right now isn't a great idea. After Microsoft officially opens Windows Azure for business, they are aiming for 99.5% uptime for non-HA systems and 99.95% uptime for systems that use some sort of HA technology.
An Active Directory server should use the smallest instance, as AD simply doesn't need many resources. You can extend your existing AD infrastructure into Microsoft's cloud.
There is a cost reporting scheme. It doesn't seem to be as flexible as what AWS provides, but it may be simpler and allow you to jump into IaaS more quickly.
There is an Early Adoption Cook Book with much useful advice.
Performance troubleshooting of systems may be difficult since there is no access into performance counters from the "host" of the VM.
To sum up my impressions, Windows Azure looks like it could easily grow into an AWS competitor, but it will need many improvements before it is a match for AWS. To me, the two most immediate needs are a larger variety of instances (with respect to cores, sizing, bandwidth, etc.) and some sort of reserved instance pricing. Microsoft has deep pockets and can ramp things up over a period of months or years, just like they have been fighting VMWare with Hyper-V. I would only use Windows Azure for development or test systems first, and I would start with systems that require less performance.
That should be the last PSSUG meeting for 2012. See you next year.

Wednesday, October 31, 2012

Windows Devices Musings, for Late October 2012


With all of the flamage surrounding Windows 8, tablets and phones, I've been resisting making any comments. (I have so many negative comments to make about the Windows 8 UI that I sound like a crazed Unabomber, and there is little to be gained from venting at this point.)  For posterity's sake, I'm going to put down some random musings that might be fun to check back on in 2014.

I believe that the end game for tablets will be Microsoft selling tablets directly and without apology. Microsoft's formerly joined-at-the-hip hardware partners will be left trying to sell 'traditional' PCs (laptops and desktops) with an increasingly tablet-ified Windows. Some vendors may take stabs at shipping Linux distros (again), but that has its own set of problems. There just isn't enough pricing room when the market wants to buy a tablet for $300 and Microsoft wants $150 or more for their software.

People seem to feel that the Surface tablets are too expensive to compete with iPads. With Ballmer's "everything is a PC" mantra, why wouldn't a Windows Surface "tablet" be priced similarly to a laptop, especially when you add a keyboard?

I expect Windows RT tablets to be considered as a failure one or two years in, much like UltraBooks. Windows RT is not the Windows that people are familiar with. The Windows RT tablets have only been out a week and I've seen at least two people who claim to be able to out-type Word's ability to put characters on the screen. That might have been sort-of OK in 1992. Today, it is unacceptable. My guess is that Windows RT tablets sell OK up to and into the holiday buying season, but people sour on them by the end of Q1 in 2013 after they have had some time to see the flaws and Windows 8 Pro tablets are released. Windows RT needs improvements in processor power (or Word's code needs to be tightened up enormously) and screen resolution. At best, Windows RT will be seen as another flawed "1.0" Microsoft product and people will start to wait for the "3.0" release before buying in. I suspect that the third release will not happen.

If the existing hardware manufacturers are very unlucky, Microsoft will decide that they should make traditional laptops (with permanently attached keyboards) and desktops as well as tablets. Since keyboards feature so heavily in Microsoft's marketing for Surface 'tablets', Microsoft is arguably already making laptops. This could push companies like Dell, Asus and HP out of the market. Since Dell and HP use contract manufacturers, it is just a question of Microsoft engaging those same manufacturers. Since Microsoft doesn't have to pay a license fee for Windows, it could bid more aggressively than a Dell and still make more money. In short, the way has been shown by Apple, and vertical integration is the new "in" thing.

If fewer and fewer companies are making desktops, at what point do the economies of scale that desktop parts have enjoyed start to dwindle, pushing desktop prices up rather than down?

Microsoft's reputation for pushing a new technology hard and then unceremoniously dumping it is becoming widespread, unquestioned lore among developers. Silverlight is the stand-out example. I just rebuilt two of my systems and the only reason I have to install Silverlight is Netflix. The strategists at Netflix must feel like chumps.

One of the advantages that Windows has over other operating systems is its ability to run on lots of different hardware: graphics adapters, network cards, RAID adapters, fiber channel HBAs, etc. Basically, you buy the gadget, stick it into the computer, install the drivers and off you go. Tablets aren't like that. (Arguably, computers aren't like that anymore either. I remember when there were dozens of graphics chip manufacturers. There are now three significant players in that space.) Apple tightly controls the hardware that goes into its iDevices. (MacOS has always had a more limited range of hardware that could be used. Since most Mac users aren't using expandable machines like Mac Pros, this isn't as obvious as it used to be.)

Android is more fractured, but vendors choose the hardware that goes into a tablet or phone and then custom-build the software for it. It's not like an owner is going to replace the graphics card in their tablet. It isn't physically possible. My point being that the market that Microsoft is pushing Windows into erases one of the strengths of Windows.


Tuesday, September 18, 2012

Where can I find help on my technical problem?

Hey.

Did you know that I am #78 on the All Time "Top Users" list at Stack Exchange's Database Administrator's site? I was surprised myself.

Here is a link to my information page.

Stack Exchange provides sites to ask questions about a number of topics ranging from technical things like databases or programming to less technical things like bicycles or philosophy. As a quick introduction, here is a link to their About page.

Pick a topic at Stack Exchange and learn from many top people in their fields. It is always informative to see the problems that people run into over and over. Better yet, create a login and help someone out.

Wednesday, June 13, 2012

SQL Saturday #121 Observations

Last Saturday, I attended SQL Saturday #121, at Microsoft's Malvern location, which is just outside of Philadelphia, PA, USA. I enjoyed all of the presentations that I attended and I thought that the event was well-attended. Microsoft has a very nice facility and this is the first time that I've really seen all of it. I snapped a few quick photos, which can be found on my twitter stream, @dstrait.
 
I've been attending SQL Saturday (and Code Camp) events for many years. The most striking thing (to me) about this year's event was that there were only two or three laptops, but there were five different people carrying iPads and at least one carrying some sort of Android tablet. One person had an external keyboard bluetoothed to his iPad. There were too many smart phones to count. In the past, there would have been a couple of dozen laptops and I would have brought my own laptop. Once, I actually lugged my eight-pound Thinkpad 770ED around all day. This time I brought a couple of pens, a pad of paper and my smart phone.
 
It's hard to detect whether a phone is iOS or Android from a distance, but I didn't notice any Windows Phone devices. I see that Nokia will be giving away Lumia 900 smart phones to college students. In 1984, when I was a university student, I bought an Apple Macintosh 128K at dealer cost through an Apple-sponsored program. Macs weren't selling too well just then; I presume that the Nokia smart phones aren't doing much better.
 
Like many other professionals, I'm looking at the forthcoming release of Windows 8 with trepidation. On the one hand, Microsoft needs to do something to address where the bulk of computing is going: phones and tablets. On the other hand, I'm worried that server, desktop and laptop users will become second-class citizens because of Microsoft's stance that Windows must run on everything rather than tablets getting their own special fork of Windows, and, worse, that the desktop experience will take a backseat to a Metrified tablet experience. I have VMs running Windows 8, but I have no intention of upgrading my workhorse machines to Windows 8. Windows 8 doesn't do anything that I want that Windows 7 doesn't do. For me, migrating to Windows 8 would make as much sense as migrating to MacOS or going back to Ubuntu or debian.
 
It seems that Microsoft feels that long-time users will be happy reducing their expectations of what an operating system does. Microsoft isn't trying to sell us old wine in new bottles; it is trying to sell old wine in smaller bottles. This is a shame, but it should lead to a long, long career for Windows 7.

Thursday, May 17, 2012

How can Google+ increase usage?

There has been ample coverage of Google+ and its growth, or lack of it. So much so that I won't even bother linking to any. Everyone knows that Google+ is not heavily used when compared to usage rates from Facebook. From my personal vantage point, Twitter and Facebook still attract more users more quickly than Google+. Many who tried Google+ early on have given up on it. I'm not sure that Google+ usage compares favorably to the current usage of prior internet darlings MySpace and Friendster.

When Google rolled Google+ out, it seemed like they wanted people to jump from Blogger to Google+. This would stem the losses to Twitter and Facebook. I recall seeing hearsay that Blogger would be eliminated.  Fortunately, that has not come to pass.

To me, the nice thing about Blogger is that it is possible to get some return from your effort. Ashdar Partners has not made a dime directly from this blog. Being who I am, I like numbers and measurements. I do look at the blog's statistics from time to time. The fact that anyone at all looks at these pages for any reason is encouraging to me.

As far as I know, there is no way to see whether anyone is reading Google+ posts unless they "+1" the post or leave a comment, and there is no aggregated reporting on those attributes. It doesn't seem possible to ask a question like "Which of my posts has the most +1's?". For people who are full-time bloggers (rather than running one as a sideshow) or who have multiple blogs, the ability to judge reader interest in particular topics is critical and guides what they blog about.

I can see if anyone reads Blogger posts, even if they don't leave comments. There are nice reports, with drill-down items. The only metric that Google+ seems to surface is "followers", which doesn't provide me with a number that I can get excited about.

So, with respect to improving the usage of Google+, I have two suggestions. I will go with the obvious one first.

Google should provide a way to monetize Google+ postings. I know that I'd be more likely to post if I thought that I could get something out of it and my posting wasn't going to be just another data source being collected by Google in order to market me to other companies. It doesn't have to be a huge reward. Only a few dollars a year or even just an occasional free coffee at Starbucks or even Dunkin' Donuts would make it seem more worthwhile.

I think that the economic aspect would tempt a lot of people. The downside on this is that it would cost Google something. It might not be a lot, but it would be more than they are spending now.

I think that my second suggestion would tempt fewer people, but those people are the ones who tend to take their blogs and online presence more seriously.

Google should provide analytics on Google+. They do it for Blogger, and my recollection is that this facility has only improved over the years. I presume that Google is collecting and aggregating similar statistics on Google+ postings for their own use. It couldn't be too hard to surface that data so that Google+ users could see what generates user interest.

In short, if Google wants Google+ to supplant Blogger, Google+ should provide all of the features of Blogger plus new features that make Google+ better than Blogger. They have not achieved that yet.

Tuesday, May 15, 2012

My Experience with OneNote on Smart Phones

OneNote is my favorite Microsoft productivity program. I would like to say that I have had a lot of experience with Microsoft's OneNote on mobile devices, particularly on smart phones. Unfortunately, I have just about no experience with OneNote on smart phones.

I have been using OneNote on Windows desktops and laptops for well over four years. I've used both OneNote 2007 and OneNote 2010. OneNote was the program that convinced me to move away from taking notes in simple text files, which is how I had been taking notes since around 1989. Generally speaking, I'd rather have access to OneNote than Word, Outlook, or even Access (the database program).

I think that OneNote is the worst-marketed product in Microsoft's stable. People just don't know about it.

Two years ago, I was  using a Nokia E71 smart phone and awaiting the first modern Windows Phone devices. My expectation was that I could run OneNote on my smart phone and seamlessly edit my notes on my phone, the Microsoft Live web site or the OneNote 2010 installed on my laptop. I carry my phone everywhere. I could have all of my notes with me at all times.

Then I found out that OneNote on Windows Phone wouldn't fully support everything that the Windows edition of OneNote does support. The features that wouldn't be supported were a little mysterious. Which of my notes would not be viewable on my smart phone? I couldn't tell. This was my first disappointment with OneNote.

Within the week, I bought an Android smart phone, a Motorola XT701. I subsequently moved the vast majority of my notes to EverNote or Google Docs. Now, I have over 1200 notes stored in EverNote and I use it nearly every day for both work and personal data. I started this blog entry in EverNote and finished it off in Blogger's editor. These days, I spend more time with EverNote than any other software, except for a web browser.

When I heard that there would be an Android version of OneNote, I was again hopeful. Shortly thereafter, I learned that the version of Android on my Motorola XT701 ("Eclair") was not supported by OneNote, due to the age of that version of Android. This was my second disappointment. I sent OneNote back to the bottom of the pile of things to try.
So, as of last Saturday, I have a new, modern Samsung Galaxy Nexus smart phone with the latest version of Android (Ice Cream Sandwich, patched to 4.0.4). I thought that there would be no reason not to at least give OneNote a try. I hopefully installed OneNote on my new smart phone. OneNote allowed me to enter my LiveID and then it crashed. I restarted it and it crashed again. Restart. Crash. I don't think I've seen any other Android program crash like this. This is my third disappointment.

I don't see many other smart phone users with this problem, though I can't imagine what I am doing that they are not. I haven't had time to gum up the phone with lots of strange apps. Perhaps my phone is too new; it has only been around for seven months. For the time being, I intend to keep the app on my phone and see if it gets any updates.

Monday, May 14, 2012

How Did the Upgrade to my Samsung Galaxy Nexus Work?

After about 18 months of service, I have found that my mobile needs have outgrown my Motorola XT701. You can read earlier blog entries about my XT701 here. The old smart phone would take longer and longer to respond to touches, programs seemed to be spending time swapping, I had to remove some apps to free up storage space and there was no upgrade path from Android "Eclair" for the phone. So, though the lifespan of the XT701 seemed short, it was time for a new phone.

Our communications needs at Ashdar Partners are minimal; we rely on Verizon FIOS for the heavy stuff. I buy off-contract phones because I've got a decent AT&T plan with a grandfathered price and AT&T coverage generally works for us. It is less expensive to buy a phone outright than it is to get a subsidized phone on a contract and see a subsequent increase in the monthly bill.

After some research, I settled on a Samsung Galaxy Nexus. The runner-up was a Samsung Galaxy S II. The Galaxy Nexus seems a little more modern than the Galaxy S II, and it was easier to buy a Galaxy Nexus directly from Google than to find a seller on eBay. I was also interested in the Samsung Galaxy S III and the HTC One X, but those smart phones were just announced, have limited availability so far and, I think, are too expensive to purchase off-contract. I can get most of the value of a GSIII or a 1X by spending (roughly) half as much. I also considered a Samsung Captivate Glide, but I think that I've gotten used to the idea of having only virtual keyboards on a smart phone.

I don't think that ordering a Galaxy Nexus through Google's site could have been easier. The phone arrived as expected on Friday, but I had to let it sit until Saturday when I had more time to tinker with it. Here are the issues that I ran into while setting up my new phone:
  • Only paid-for apps are automatically downloaded and installed when you configure a new phone. I did not realize that free apps will not be automatically downloaded and reinstalled. I've seen comments from people about waiting while their old apps downloaded to their new phones and I never thought that free apps would be different than pay apps, in this respect. So, I spent an hour or so manually downloading apps and re-entering login information. This was made easier by using the list of installed apps that Google provides. I also found that the voice-recognition works pretty well and allowed me to search for apps more quickly. This "install-fest" was an opportunity to re-install a few things that I had to remove from the old phone due to storage limitations and leave off a few apps that I don't really use, so it wasn't all bad.
  • I ran into a data migration issue with aCar. The problem was compounded by chance. I had researched how to move aCar data files from an old device to a new device just two weeks ago. However, the program was updated in the interim; the old method no longer applies and has been removed from the aCar FAQ. On-demand import/export/backup is limited to the paid version of the program. I do use aCar some and, though I'm not 100% in love with the app, I have a few records in there that are business-related and therefore tax-deductible. So, I just went ahead and bought it. (It's worth saying that aCar is much faster on my new phone and I expect to be using it more often in the future.) With my limited usage of the program, I'm not sure how long it will take for the $6 investment to pay off. After several years of smartphone ownership, this is my first app purchase. I didn't realize how easy it is to buy apps. It is almost too easy.
  • I didn't have an account for Key Ring, which is an app that keeps track of customer loyalty cards. I guess that I never created one. The data was somewhere on my old SD card and I could probably have found it if I had looked for it. I only have six or so of these cards, and it was just as easy to re-scan the cards as it was to dig up the data files. I also created an account for Key Ring this time.
  • Since Android 4.0 has data usage auditing built in, I no longer need 3G Watchdog. I exported my old data and stashed it with my SugarSynced files in case I want to see those numbers in the future.
  • I am disappointed with the "MTP" support. I'm used to just copying files onto and off of my phones via USB or by swapping the SD card into my laptop. I didn't realize how limiting MTP is, and there is no SD card in a Galaxy Nexus. At first, I wasn't sure how to copy files, like my aCar backup or some custom alert noises I like to use, to my new phone. My first thought was SugarSync, but it seemed like it would involve a lot of copying and then moving files around, meaning that it would be time-consuming. I did a little bit of research first, and I found a reference to AirDroid on Android Central. AirDroid is impressive. Install it. I don't think that you will be disappointed.
  • OneNote has disappointed me again. More on that in another post.

So far, the new phone is better than my old phone in nearly every way. The screen is bigger, has more resolution and generally looks better (pentile or not) than the old phone. Android 4.0, AKA "Ice Cream Sandwich", is much better thought out than the version on my old phone, "Eclair". Naturally, everything is faster.

The largest gripes that people seem to have with the Galaxy Nexus are the screen and the camera. It might be true that there are phones out there with better screens and cameras, but both are better on my Galaxy Nexus than my old XT701.

I don't see a problem with the screen. It is easier to read text on the new phone than the old phone, and YouTube clips seem fine.

The camera support was almost non-existent on my old XT701 and the camera on my prior phone, a Nokia E71, was buggy. The few test shots and video clips that I've taken with the Galaxy Nexus look pretty good, and I hope to expand my use of the camera in the future. The cats aren't going to take pictures of themselves...

The only negative about the smart phone is that it is bigger. It doesn't fit into the back pocket of my jeans as easily as my old one. With a bigger screen, it can't be helped and it seems that everyone wants larger and larger screens these days.

Wednesday, May 9, 2012

Did You Read It? for April, 2012


This blog post mentions a few of the most interesting articles that I have read recently, or recently-ish, with a little bit of my commentary on each article.

What's the difference between a temp table and a table variable in SQL Server? This is the best answer I have seen.

Here is a short and sweet bit of PowerShell code that can make a person's life a little more enjoyable by providing some feedback during long-running processes.
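I won't reproduce the article itself, but the general idea is along the lines of calling Write-Progress inside the loop that does the work. This little sketch is my own, not the author's:

    # Give the operator some feedback while grinding through a long list of items.
    $items = @(Get-ChildItem -Recurse)
    for ($i = 0; $i -lt $items.Count; $i++) {
        $pct = [int](($i / $items.Count) * 100)
        Write-Progress -Activity 'Processing items' -Status $items[$i].Name -PercentComplete $pct
        # ... do the real per-item work here ...
    }
    Write-Progress -Activity 'Processing items' -Completed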
This is from 2010, but I'd never seen it until now. I would temper it with Malcolm Gladwell's view that it takes 10,000 hours to become expert at something. (I am not sure if Mr. Gladwell originated that thought, but he has popularized it.) I'm not so sure that 'polyglot developers' will know everything there is to know about all of the languages on their resume, plus Active Directory administration, plus database clustering, plus SAN administration, plus VMWare, plus being up on the latest half-dozen NOSQL darlings, but maybe they don't need to. Do we really need all of the hyper-specialization that we have bred into IT in the last 20 years? Perhaps we are only ensuring that specialists have a hard time pulling their narrowly-defined careers out of the weeds when their chosen technology becomes obsolete or when that up-and-coming thing fizzles out. What if we just invest in smart people who are quick learners?
This caught my eye. Microsoft is doubling down on getting more businesses to put their data on Azure. Small businesses aren't usually the ones who are concerned with certifications. Medium and large businesses are.


Friday, April 20, 2012

Data Safety Is No Accident

(With apologies to David Foster Wallace.)
I follow various tags on stackexchange sites, primarily database-related things. If you do this for long enough, you will see certain questions come up again and again. Frequently Asked Questions. FAQs. This is not a new phenomenon; it dates from the days of USENET and probably even before then.
One of those FAQs is: "OMG, something horrible happened to my database, I don't have any backups and now I think my boss will yell at me and hurt my feelings. How can I restore the data before that happens?".
As a DBA, your first job is being able to restore the data. Just backing up the database isn't any good if you can't restore it. It doesn't matter if you are using SQL Server, Oracle, MySQL, postgresql or any other data storage technology. It doesn't matter if you are an "accidental DBA" and would rather be coding. (I started my career as a coder and, frankly, I would rather be coding too.) People rely on you to protect their data in ways that they don't understand and can't do for themselves.

Backups should be boring. Restores should only be slightly more exciting, since they should be rarer (much rarer) than backups. If your backups and restores are not boring, you are doing them wrong.

Before you run upgrade or update scripts from the development team, protect yourself. Stuff happens even when people are doing their level best. A last-minute change might have deleted a crucial term in the WHERE clause of that update statement. The script that you receive might be a concatenation of several other scripts, and the whole Magoo has never been tested at once. Some snafu with the VCS means that you are given the wrong version of the file.

Ask yourself, "If something goes bad with this script, what will I do to reverse its effects?" As a DBA, you can easily protect yourself by taking an extra backup (remember to use Copy-Only backup commands if you are using a differential backup strategy) or by taking a database snapshot.
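For SQL Server, that safety net can be as simple as the following; the server name, database name, paths and the logical file name are placeholders, and the snapshot variant requires Enterprise Edition:

    # A quick pre-deployment safety net: a copy-only full backup, which leaves the
    # differential base alone, and (optionally) a database snapshot to revert to.
    sqlcmd -S myserver -Q "BACKUP DATABASE [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_PreDeploy.bak' WITH COPY_ONLY, INIT;"
    sqlcmd -S myserver -Q "CREATE DATABASE MyDatabase_PreDeploy ON (NAME = MyDatabase_Data, FILENAME = N'D:\Backups\MyDatabase_PreDeploy.ss') AS SNAPSHOT OF MyDatabase;"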

For a script that runs against a large database but only changes a procedure or view, it might not be worth the time to back up the whole database. I like to script out the body of that object. I was doing that so often that I wrote a little PowerShell utility so I could do it quickly. Sure, that procedure is probably in the VCS, but sometimes finding the right version in a hurry isn't so easy.
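My little utility isn't published anywhere, but a rough equivalent using SMO looks something like this; the server, database and procedure names are placeholders:

    # Script out one stored procedure's current definition before an upgrade
    # script replaces it, using SQL Server Management Objects (SMO).
    [void][System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')

    $server = New-Object Microsoft.SqlServer.Management.Smo.Server 'myserver'
    $proc   = $server.Databases['MyDatabase'].StoredProcedures['usp_SomeProc', 'dbo']
    $proc.Script() | Set-Content "C:\Temp\usp_SomeProc_$(Get-Date -Format yyyyMMdd_HHmmss).sql"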

If you don't have DBA rights, you can protect yourself by adding some transactional protection to the script that you have been handed (be sensible - you don't want excessive log file growth). If you want to make a backup copy of the table you are modifying, that is OK but don't let it hang around forever. Add an item to your calendar that will force you to come back and drop that backup copy. Or consider BCP-ing the data out. It's quick, once you get the hang of the bcp command syntax.
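As a sketch of those last two options (the table, database and file names are invented), keeping a copy of the rows and exporting them with bcp look like this; the BEGIN TRAN / verify / COMMIT wrapper itself belongs inside the T-SQL script you were handed:

    # Keep a copy of the table you are about to touch; remember to drop it later.
    sqlcmd -S myserver -d MyDatabase -Q "SELECT * INTO dbo.Orders_Backup_20121226 FROM dbo.Orders;"

    # Or export the table to a native-format file with bcp (-T = trusted connection).
    bcp MyDatabase.dbo.Orders out D:\Backups\Orders_20121226.bcp -S myserver -T -n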

If you are inserting rows and find a problem, you might be able to delete the new rows if you can describe them with a WHERE clause. Many tables have some sort of createdby or updatedby columns. BUT, watch out for triggers, as they can modify data in other tables in surprising ways and, when it comes to databases, surprises are almost always bad.
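A sketch of that kind of targeted clean-up, with an invented audit column and cut-off value; run the SELECT first so you know exactly how many rows will go:

    # Count, then delete, only the rows created by the bad run. The column names,
    # user value and date are all placeholders.
    sqlcmd -S myserver -d MyDatabase -Q "SELECT COUNT(*) FROM dbo.Orders WHERE CreatedBy = 'deploy_user' AND CreatedOn >= '20121226';"
    sqlcmd -S myserver -d MyDatabase -Q "DELETE FROM dbo.Orders WHERE CreatedBy = 'deploy_user' AND CreatedOn >= '20121226';"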

Moving a Windows XP install from VPC/VHD to VMWare ESXi 5.0

I had an old Windows XP VHD that I used as a development environment for a client. I don't work with that VHD anymore, but I didn't want to just delete it. I did want to get the VHD off of my laptop to free up storage space and cut down on backup requirements. It seemed that the thing to do was to put it on my ESXi 5.0 host. But how does one do that? Ah, a learning opportunity...

One googles, and one finds that VMWare provides a tool called the "VMware vCenter Converter Standalone Client". This tool is aimed at industrial-scale conversions and it can handle a couple of different situations (different sources, primarily). I downloaded the tool, installed it, and used it. It was surprisingly easy.
I did get some warnings about not having some sysprep files in place. That was a little scary, but I had a fallback path (the old VHD). If the new VMWare VM was broken, I would only have lost some time and gained some experience. I ignored the error. (From googling around, those files can be obtained from Microsoft and that might be worth doing, if you are converting a large number of machines.)
The tool converted the VHD VM to an ESXi VM and placed it on my ESXi host. Using the vSphere management console, I started the new VM and had a look. I expected that I would have some slogging to go through; maybe I'd have to download drivers for the VMWare-specific "hardware".
Nope. 
 
While the VM was booting, I was distracted by another task. When I got back to working on the VM a few minutes later, I found that it was installing 59 Windows updates and was humming along. When it finished installing the updates, it rebooted. I could then log in using my old credentials. Windows complained about having been moved to "new hardware", but it allowed me to re-enable the license with just a few mouse clicks. No phone calls, no drama. I then updated the virus software from the older Microsoft product to Microsoft Security Essentials.
 
I expanded the desktop to 1600x1200, to match my venerable Dell FP2001 panel. (This Dell is the best monitor I have ever owned. Between 1990 and 2001, I owned CRTs from Princeton, MAG and Sony. They all died. I even had the MAG fixed once and it died again. I have been using my Dell for 12 years. I've never had a problem with it.)
I configured the VM to use one vCPU. It is pretty snappy, maybe a touch more sluggish than running on my three-year-old Lenovo T500. wprime says that the VM is only a little slower than running natively on the Core2 Duo in my laptop.
 
So, the whole process went smoothly and I could see using the tool to do mass-P2V-ing of systems.

Friday, March 30, 2012

"Did You Read It" for March, 2012


This blog post mentions some of the most interesting articles that I have read recently, with a little bit of my commentary on each article.
 
I read "Implementing SQL Server solutions using Visual Studio 2010 Database Projects - a compendium of project experiences". If you are using "Data Dude" in VS, you should read it too. Even if you don't implement all of the suggestions that @jamiet makes, it will give you a better idea of what is possible. The downside is that you may have to put a fair amount of time into improving things like build output, or into creating data that can be loaded into a development database that might not match the size of the production database yet provides enough coverage to produce realistic query plans and durations. PMs are not always sympathetic to spending time on "features" that users do not see.
Having spent a lot of time with Database Projects myself, I know that you can burn a lot of energy trying to get the tool to do what you want before realizing (or without realizing) that you want the wrong thing. Earlier versions of the tool also seemed kind of haphazard, and it always seemed that some things had to be hand-managed anyway (object security, linked servers and SQL Agent jobs, for example), leaving a "this tool is unfinished" taste in my mouth. As with most things that Microsoft sticks with, the tool has improved over time.
I also read "Discipline Makes Strong Developers", which is from a while back. It applies to DBAs as well, or to anyone who is trying to move from assembly-line levels of competence to craftsman levels of competence. I'm sure that I've read it before, but it is worth reading again, particularly for the quote from Code Complete, and because "grabastic" is such a great word.
Lately, I seem to be watching more than reading. This lets the presenter get more deeply into their topic than they could in a blog posting, but requires a larger commitment in time. Some of these might be a little bit older, but they should still be relevant.
I watched "TechNet Webcast: How Microsoft IT Architected and Redeployed an Application to Windows Azure", which is a level 300 talk. I was surprised that they had to move away from SharePoint to get the scaling that they needed. Not that I am a SharePoint guy.
I watched SQL Server and Solid State Storage - SQL/SSD, presented by Jon Reade. Once again, I see that space is cheap and speed is expensive. Simply put, when comparing SSDs to traditional hard disks, SSDs are expensive when comparing GB of storage space and disks are expensive when comparing I/O per second. 
I watched 28 Weeks later - How to scale-out your MS Business Intelligence environment, presented by Sascha Lorenz, and Performance Tuning Large Analysis Services Cubes, presented by Duncan Sutcliffe. I still tend towards tuning rather than throwing hardware at things, so Sutcliffe's talk is more aligned with my thinking. If you are Google or eBay or any number of the larger web properties, even with perfectly-tuned custom code you will never get the performance you require on a single server. Most businesses aren't in this position, and anyone who is using SQL Server on Windows Server (or IIS on Windows Server) might not have the financial ability to deploy unlimited copies of those products, due to licensing costs. Google, for example, can deploy as many copies of Linux as they care to and they work at a scale where custom-coding everything makes sense for them. Heck, Google builds their own customized server hardware. I've worked for places that bought HP DL580 servers by the truckload.
I watched Consolidation and Virtualization Strategies - Microsoft Technology Center, presented by Ross Mistry. This is a nice overview of the strategies available to consolidators or to people starting to move to a hypervisor-based (Hyper-V or VMWare, though he doesn't mention VMWare) or Azure-based cloud. There is only so much you can do in an hour, and its lack of depth means that this talk is not something that will help you actually build that 16 node failover cluster, but it will go over it and the alternatives. Mistry also goes over how to map out your SQL Server situation and collect usage and performance data on your current systems so that you can plan accordingly.
 
I listened to a RunAs Radio episode featuring Jeremiah Peschka that discussed the use of ORMs. I've been categorically opposed to these sorts of things, since they always seem to add too much overhead. Jeremiah's attitude is that ORMs are tools and can be used for good or evil, which sounds like something I'd say, so perhaps I should soften my stance. He does go on to say that positive, two-way communication between the development and DBA teams is critical. I say that it is a shame that every development team can't have their own captive DBA.