Thursday, December 18, 2008

Finding Partition Offsets Made Easier (with Powershell)

A while back, I read Linchi Shea's blog posting on finding partition offsets. I recently found some time to rework his solution, which combined a C# program with a perl script, into a single Powershell script.

The upsides to my solution are:
  • It's Powershell-only, so there is nothing to compile and you don't need to install perl.
  • Using Powershell provides some added bling, for no real cost. It's easy to construct a pipeline that will interrogate many servers or will provide sorting or filtering.

The major downside to my solution is:
  • You must install Powershell. This shouldn't be a problem on Windows Vista or Windows Server 2008.
Please be careful of the formatting. While embedding the code here is more convenient than a zip file, I'm still learning blogger and things may not wrap the way that I would like them to. I've also looked for a way to post syntax-highlighted Powershell code. There are solutions for other languages, but nothing for Powershell is jumping out at me, so I will rely on a simple <pre> tag for this post.

# Example of use:
# $serverlist | foreach { Get-PartitionInfo $_ } | ft -a
#

param (
    [string] $computer = $(throw "ERROR: The 'computer' parameter is required.")
)

$scope = New-Object System.Management.ManagementScope
$scope.Path = "\\$($computer)\root\cimv2"
$scope.Connect()

$query = New-Object System.Management.ObjectQuery
$query.QueryString = "SELECT * FROM Win32_DiskPartition"

$searcher = New-Object System.Management.ManagementObjectSearcher($scope, $query)
$queryCollection = $searcher.Get()

# this is a quick-and-dirty way to build an object with members that I can reference
$report = "" | select-object "Computer","PhysicalDiskIndex",
    "PartitionIndex","PartitionOffset","PartitionOffsetKB",
    "LogicalDiskName","LogicalDiskVolumeName","LogicalDiskSize","LogicalDiskSizeGB"

# start setting values for the members of my reporting object.
# I'm casting everything to make sure I get the data types that I want
$report.Computer = [string] $computer
$queryCollection | foreach {
    $report.PhysicalDiskIndex = [int] $_.DiskIndex
    $report.PartitionIndex = [int] $_.Index
    $report.PartitionOffset = [int64] $_.StartingOffset
    $report.PartitionOffsetKB = ([int64] $_.StartingOffset) / 1kb
    $_.GetRelated("Win32_LogicalDisk") | foreach {
        $report.LogicalDiskName = [string] $_.Name
        $report.LogicalDiskVolumeName = [string] $_.VolumeName
        $report.LogicalDiskSize = [int64] $_.Size
        $report.LogicalDiskSizeGB = [int] [math]::round($_.Size / 1gb, 0)
        # emit the report for this iteration.
        # This gets passed to the next item in the pipeline, if there is one.
        $report
    }
}
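
For example, assuming the script has been dot-sourced as a function named Get-PartitionInfo (as in the comment at the top) and using made-up server names, a pipeline like this flags partitions whose offset is not a multiple of 64 KB, which is a commonly-recommended alignment for SQL Server volumes:

# Interrogate several servers and show only partitions that are not aligned
# on a 64 KB boundary, sorted by server, disk and partition.
$serverlist = ("server1", "server2", "server3")
$serverlist | foreach { Get-PartitionInfo $_ } |
    where { ($_.PartitionOffsetKB % 64) -ne 0 } |
    sort Computer, PhysicalDiskIndex, PartitionIndex |
    ft -a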


Tuesday, September 30, 2008

I'm attending the upcoming Philly Code Camp

A bit of a plug for the local user groups:

The next Philly Code Camp will be held on October 11, 2008 at DeVry in Fort Washington. 

I don't make it to the regular Philly SQL Server User Group meetings anymore because I'm working downtown and the drive to Malvern is painful, but I do try to make it to the Code Camps. I always learn something.

I tend to bounce from track to track, depending on my interest level in a particular topic and how crowded the room is, but I'm always in line for lunch. ;-)

Back from the dead

I haven't had much going on lately, at least not anything of interest to anyone out there.

It's been a long time between posts, but here are some new posts, albeit less technical than my earlier postings.

"Data Dude" Licensing changed; better development environments for all

One way or another, I've been fighting the lack of integration between "backend" and middle-tier (or client-tier) development environments for decades now. 


About a year ago, I was fairly thrilled with the introduction of the "Database Edition" of Visual Studio, A/K/A "Data Dude". I was immediately disappointed by the restrictive licensing and the cost. I've been using Data Dude tools for nearly a year. I think that they can be an absolute boon for developers of new applications or ones that are in "maintenance mode". Unfortunately, any discussion of Data Dude always involves its cost.

 

It often seems like the cost of everything in the computing universe drops except for bandwidth and Microsoft products. It wasn't always this way.

 

Some of my earliest professional IT work involved writing programs with QuickBasic 3.0 (and, later, with Microsoft BASIC using the "Professional Development System", A/K/A "PDS Version 7.0"). I taught myself C using QuickC and a Microsoft Press book. (I still remember getting my first linked list to work, back in 1990.) All of this software was rather inexpensive. I think that QB was around $50. Even PDS, with the serious, "grown up" compiler, was only a few hundred dollars, IIRC. The pricing of many of the Visual Studio SKUs makes a few hundred look like "free" and, unlike C# or VB.NET, there is no Express Edition of Visual Studio for Databases.

 

Sure, Visual Studio is much more feature-packed than those old products. But "cheap" is what drives technology into every nook and cranny of the world, not "fancy". "Cheap" is what allows people to experiment with new things. If the cost of an experiment is low, people are much more likely to try it and the payback for a successful experiment is multiplied. If powershell weren't a free add-on, I would probably be fiddling with bash, ruby or python instead. Lots of smart people use python.

 

When I got to the office this morning, I was delighted to see the change in licensing that allows all developers to start using "Visual Studio Team System 2008 Database Edition" along with "Visual Studio Team System 2008 Development Edition". To me, this is almost better than the release of SQL Server 2008. I wish that Microsoft had done this with Visual Studio 2005. Letting this go until Visual Studio 2010 was released would only have produced another two years of unnecessarily subpar applications.

 

It's my hope that this change in licensing will lead to wider adoption and better applications. "Better" being cheaper-to-build, more-efficient-to-run and more-reliable-to-use.

Thursday, July 17, 2008

Legacy Connectivity in a 64 bit world, Part 5 of 5

And now, the final boot in the teeth...

As I detailed in my last post, it was decided to segregate access to Visual FoxPro data by running OPENQUERY statements on a 32 bit SQL Server 2005 instance. I did all of my initial tests on my laptop, using a local SQL Server instance to access the FoxPro files stored on network file shares. Before I deployed this to a server, my initial thinking was that this wouldn't be too bad once we put support for delegation into place.

Aside: For the uninitiated, delegation allows a SQL Server to use the Active Directory credentials (AKA "domain logins") of the currently-connected user to perform tasks. Essentially, you need to create an Active Directory object called a Service Principal Name or "SPN". This is not hard to set up. For me, the bigger chore was getting over the heebie-jeebies caused by the idea of having a service impersonate users.
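
For example, registering SPNs for the SQL Server service account usually looks something like this (the host and account names here are made up, and you need domain-level rights to run it):

# Register SPNs for the SQL Server service account (made-up names), so that
# clients can get Kerberos tickets for the service and delegation can work.
setspn -A MSSQLSvc/sql32srv.example.com:1433 MYDOMAIN\sqlservice
setspn -A MSSQLSvc/sql32srv:1433 MYDOMAIN\sqlservice

# List the SPNs that are registered for the account, to double-check the work.
setspn -L MYDOMAIN\sqlservice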

In our case, one server would be doing remote queries to another SQL Server. By putting this into place, we could keep our linked server configurations simple, without lots of "mapping" of "user on localserver" to "user on remoteserver". This also works for SQL Server credentials (AKA "standard logins" or "sql logins"), as long as the usernames and passwords match on the local and the remote servers. We would just need to create logins and users on the 32 bit server, as we normally would. This would help to document our security situation and should make adding and removing users easy for the DBAs, since we use Active Directory groups for everything. (The network guys do the work of adding users to the AD groups, so the DBAs have effectively outsourced that function to the network security people.)

Here's the thing: The Visual FoxPro OLEDB drivers do not work if you are querying them remotely.

And by "remotely", I mean anything that isn't running on the server itself. If I use SSMS from my workstation and try to use AD credentials, the query fails with the message:
OLE DB provider "VFPOLEDB" for linked server "OLEDBFOXPRO" returned message "Invalid path or file name.".

If I reconnect using SQL credentials, the query works. If I use NTLM authentication rather than Kerberos authentication, both of which authenticate my AD credentials, the query fails.

The situation is the same if I set up a linked server and query it instead of going directly to the FoxPro server. This makes sense because a linked server will just go and query the FoxPro server on my behalf. If I can't directly run a query with SSMS, the linked server won't be able to run that query either.

If I RDP onto the 32 bit server and use SSMS to run a query, it works fine. It doesn't matter if I use either AD or SQL credentials; they both work.

This is very repeatable and fails even if I specify a full path to the file.

This doesn't seem to be affected by what credentials the SQL Server services run under.
If I copy the FoxPro files to the 32 bit server, it also works if I connect with AD credentials. Unfortunately, we need these files on the network file shares.

To sum up: The failure is specific to what kind of credentials I connect with. I believe that this is a bug in the OLEDB drivers.

So, we will have to set up a certain number of mappings of user credentials in our linked servers. By luck, many of the applications use SQL Server credentials rather than Active Directory credentials, and those seem to work.

Tuesday, July 15, 2008

Legacy Connectivity in a 64 bit world, part 4 of 5

This post continues my five part saga on connecting to legacy data formats. The third data source type (of four types) is Visual FoxPro. I haven't seen very much Fox since 1992, when I was a C programmer and worked with dBase, FoxPro and Clipper for a few months. I thought that everyone left FoxPro and dBase behind in the mid-1990's. Not so.

Visual FoxPro was dead-ended by Microsoft a while back. Microsoft states that the ODBC drivers are deprecated and that the 32 bit OLEDB drivers should be used. I can pretty reliably bring down an instance with the ODBC drivers and I am very interested in using something besides these drivers.

There are 32 bit OLEDB drivers freely available from Microsoft but it looks like there won't ever be any 64 bit drivers.

All of the projects which I found on Sourceforge looked too shaky to consider running "for real", and I'm not in a position to start writing or supporting open-source OLEDB drivers. No third party has stepped up to the plate, and I think that someone is missing a business opportunity here. I know that xBase is dead but, given the number of tools that used to work with FoxPro, dBase and Clipper files, it is odd to me that no one has ported their stuff to 64 bit and produced an OLEDB driver.

The fourth data source type is Excel and Access files. This is even more flabbergasting than the FoxPro situation. For the uninitiated, Excel and Access support is through the "Jet" drivers, which can also do some FoxPro work. The problem for us is that there does not seem to be a 64 bit version of Jet. The replacement for Jet is called "Ace" and is oriented towards the Office 2007 file formats. That doesn't matter because there is no 64 bit build of Ace, either. I'm sure that there will eventually be a 64 bit build of Ace, but we can't wait until the next version of Office ships to get it.

As a workaround, I could potentially import the data to native SQL tables with DTS and/or SSIS using the 32 bit providers, if I force the package to run on the server in 32 bit mode. The issue there is that there is heavy use of OPENQUERY at random times during the day to these external data sources. The data sources are not updated on a schedule, so I can't just schedule a package to load the data once every so often. I can't alter the clients, so having them run some sort of code with sp_start_job, xp_cmdshell, the OLE/COM stored procedures or CLR before looking for their data isn't practical. (Not to mention that implementing a solution with sp_start_job would complicate our security issues, that we are trying very hard to avoid enabling the xp_cmdshell or COM stored procedures, and that running packages in a CLR procedure seems to be very frowned on by Microsoft.)

The solution that we are working towards is to keep a 32 bit instance around for the explicit task of using the 32 bit drivers. In other words, SQLServer/64 talks to SQLServer/32, which does the dirty work and hands the results back to SQLServer/64. This is suggested on various googlable forums around the internet. Our initial testing shows that this is not any worse than directly accessing the files. Most of our effort here will be due to increased administration requirements and security configuration.
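
As a rough sketch of what that looks like (SQL32FOXPRO and the table and column names below are made up; OLEDBFOXPRO is the linked server on the 32 bit instance that uses the VFPOLEDB provider to reach the FoxPro files), the OPENQUERY calls end up nested like this:

# A hypothetical sketch, run with my Invoke-SQLQuery function. The 64 bit
# instance sends the inner query to the 32 bit instance, which uses the
# 32 bit VFPOLEDB provider to read the FoxPro files.
$cmdText = "SELECT * FROM OPENQUERY([SQL32FOXPRO], 'SELECT * FROM OPENQUERY([OLEDBFOXPRO], ''SELECT cust_id, cust_name FROM customer'')')"
Invoke-SQLQuery -computer "sql64srv" -commandtext $cmdText | ft -a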

Thursday, July 10, 2008

Legacy Connectivity in a 64 bit world, Part 3 of 5

The first hint of real trouble came when we started looking at the existing DB/2 connectivity.

The second data source type (of four types) is DB/2. Currently, the application uses 32 bit ODBC drivers dating from the late 1990s. They were written by Wall Data, which was acquired by NetManage in 1999. NetManage's web site doesn't seem to acknowledge that these drivers existed or that NetManage supplies any sort of database connectivity software. One hint as to the age of the drivers: The drivers refer to AS/400, which was renamed to "iSeries" about, oh, forever ago.

It turns out that there are a number of 64 bit OLEDB drivers for DB/2 available, including Microsoft's own DB/2 driver, which used to be part of its "Host Integration Server 2004" product, but is now freely downloadable. IBM's got drivers too.

My initial problem wasn't finding drivers, it was getting the linked server to talk to the DB2 instance. No one involved understood what to supply as a connection string. I found several resources on appropriate connection strings via google. When I configured the linked server with my best guesses, all I got was "invalid parameter" errors, which did not provide any sort of hints as to what the actual problem was.

Eventually, I found that the current IBM DB/2 driver would not talk to the DB/2 instance in question because the instance was so old (the version dates to the mid-1990's) and modern IBM drivers can't handle anything that old. (There was some sort of seismic shift in the communications layers during the intervening years and the new drivers just don't "habla the espanol", so to speak.)

The vendor that controls the DB/2 is not going to upgrade for us; we are only one of their customers. So, the next thing on my hit-list was the Microsoft drivers.

Eventually, I found the correct way to get the Microsoft drivers to connect to the DB/2 instance, which is to use Microsoft's setup program (the "Data Access Tool") and not to try to construct the strings manually. Unfortunately, the drivers won't do anything but connect, because Microsoft's drivers look for certain packages to be installed on the DB/2 instance. The instance in question did not have those packages, so I could not even issue SELECT statements. The vendor that controls the DB/2 instance will not install these packages. Due to the age of the instance, I'm not sure if these packages would operate properly. If I were in their position, I wouldn't install them either.

We don't get very much data from this vendor, and they actually prefer to FTP the data via a secure channel. So, the actual solution will be to move from remote queries to loading text files which are FTP'ed to us.

Tuesday, July 8, 2008

Legacy Connectivity in a 64 bit world, Part 2 of 5

To continue from the last posting, a project that I am associated with needs to deal with four non-SQL Server data sources. I'll start with the easy one first...

The first data source type is text files. We can get the data in these files into SQL Server with BULK INSERT (which seemed to be headed for deprecation in SQL Server 2005, but is still in SQL Server 2008) or OPENROWSET with the BULK option. This is all native to SQL Server, so I'm not counting this as a major issue.

Exporting to text is a problem, but there are only a few very simple examples of that in the codebase. We are trying to avoid use of xp_cmdshell for all of the usual reasons. For most of our issues, we can probably use SSIS or even a custom vbscript in a job. The worst-case scenario is a small CLR procedure.

(Why-o-why doesn't SQL Server have a BULK EXPORT?)

I'd prefer to keep these OPENQUERY statements in place because they are mostly in views and procedures stored in user databases. It's much easier to manage database objects than it is to manage objects like DTS or SSIS packages. Packages are stored at a server level rather than a database level. In other words, if I backup a database and give it to someone else, I have to remember to export the packages and give those as well. The other thing is that it's not feasible to run a package when a client program runs a select statement and have the package return some sort of result set. Not all of our code is nicely bound up into stored procedures (which is hardly uncommon), so there is no way to "get in there" with some sort of shim and run something before a client expects results.

The data amounts in play here are small, usually no more than a few thousand rows, so performance should never be a problem. So, to sum up, this isn't that much of a problem. The first real problem will be detailed in the next posting.

Thursday, July 3, 2008

Legacy Connectivity in a 64 bit world, Part 1 of 5

One of my projects involves moving some very old applications running against a 32 bit SQL Server 2000 instance to a 64 bit SQL Server 2005 instance. When the project was initiated, I thought my largest problems would be:
  • Differences in behavior between 2000 and 2005
  • Porting a couple of hundred DTS packages to SSIS

Wrong. It turns out that my biggest problem is a lack of 64 bit OLEDB drivers.

I have had reason to spend a good deal of time researching the availability of 64 bit solutions for certain legacy data formats. This is a summary of that research and I hope that it saves someone some pain. Anyone could do what I have done here by spending time searching with google. I don't know anything unique, nor do I have any insider knowledge. Still, I was surprised by the situation on the ground.

An early version of this originally appeared on Swynk email lists, specifically the sql2k5 list. It's been edited to tighten up some of my informal writing style. I have also added some new details and information.

(FWIW, I've been reading and occasionally posting to the Swynk lists since some time during the Clinton administration. I gave up reading usenet postings years ago and I find "web" forums too clunky to use on a regular basis. I think that the only thing that I've been reading longer than Swynk is slashdot.)


To begin: In a 64 bit SQL Server environment, you must have 64 bit drivers for OPENQUERY to use. In other words, a 64 bit SQL Server won't run 32 bit OLEDB drivers.

One can get around this problem if the codebase is in DTS. In that scenario, it is possible to force the execution of DTS packages in a 32 bit environment on a 64 bit Windows OS. Another possibility would be to run the packages on a dedicated 32 bit SQL Server. This has performance and security considerations. (Generally, I like to run packages on the destination server.)

Unfortunately, our code base has a lot of OPENQUERY usage. OPENQUERY is used in stored procedures that are executed by user actions throughout the day, as well as by scheduled jobs. Theoretically, all of the OPENQUERY could be rewritten as DTS or SSIS packages and then run by xp_cmdshell or via on-demand jobs. I'm leery of the performance and security implications of that, plus that would be an overwhelming amount of work.

So, what to do? Details in the following posts...

Tuesday, July 1, 2008

Powershell functions: whoami and Prompt

Two quick ones for this post, before the US 4th of July holiday. Both functions have one thing in common: use of environment variables. In order to see the environment variables that are available, just type:
dir env:

(This listing could be extensive. (dir env:).Count returns 51 on my laptop.)

Environment variables aren't as popular as they used to be, with settings now located in registry entries, .ini files and .xml files. For a few things, it's still easier to find their values through an environment variable than to figure out where they hide in the registry or google for the proper .net calls. When I set up a new machine, I add a path to the environment that points to my local copy of my scripts. This can be very handy, since I still have some .vbs and .wsf scripts lurking around.
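
For what it's worth, that path change is only a line or two (the directory name here is made up). The first line only lasts for the current session; the second pair writes the change to the user-level environment, so it sticks around:

# Add a personal script directory (a made-up path) to the path for the
# current Powershell session only.
$env:Path += ";c:\scripts\psh"

# Make the change permanent by writing it back to the user-level environment.
$userPath = [Environment]::GetEnvironmentVariable("Path", "User")
[Environment]::SetEnvironmentVariable("Path", "$userPath;c:\scripts\psh", "User")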

whoami simply displays the name of the currently logged in user. It's very simple, but it is also a good, simple example of how to use environment variables in a script.

Here is the function:
function whoami {
$env:userdomain + "\" + $env:username
}

I often use $env:userdomain as a default for parameters of functions that look up information about domain users and groups. With most of my clients, the chances are that the user or group that I am interested in will be in my domain.
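
For example, a function signature along these lines (a sketch, not one of my real functions) means that I only have to type a domain name when I'm interested in a different domain:

# A sketch of the idea: let the -domain parameter default to the domain of
# the currently logged in user.
function Get-GroupInfo {
    param (
        [string] $group = $(throw "ERROR: The 'group' parameter is required."),
        [string] $domain = $env:userdomain
    )

    # a real function would query Active Directory here; this stub just
    # echoes what it would look up
    "Would look up group: $domain\$group"
}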

Prompt functions are another one of those things that everyone writes. Some are sparse, some are colorful and some display so much information that they can be confusing to anyone but the original author. Mine is fairly sparse:

function prompt {
    $host.ui.RawUI.WindowTitle = "psh"
    $promptText = "[" + $env:username + "@" + $env:COMPUTERNAME + " @" + $(get-date -uformat "%H:%M:%S")
    write-host $promptText -NoNewLine
    write-host " $(get-location)" -NoNewLine -ForeGroundColor Cyan
    write-host "]"
}

I'm on the fence about whether the two-line format is a good idea or not. I include the time as a sort of poor man's stopwatch. It lets me get an idea of when a command stopped without actually timing it and without getting too fancy in the prompt code.

That's all for this posting.

Saturday, June 28, 2008

Powershell function: Get-Uptime

For this posting, I present an uptime command. This is the sort of thing that most scripters wind up writing, I suppose. For me, the neat thing was how just a few lines of script produced a program that was more flexible and usable than an old, compiled EXE program that I had been carting around with me on a USB stick for years.

A little bit of this code is a refactoring of code posted at BSOnPosh, but that page seems to have been taken down during a site re-arrangement and my search didn't find anything. IIRC, I wanted an easier way to get uptime information for more than one server, so I refactored some of his code (the WMIDate conversion, which I wanted to be able to re-use for other things, is sort of fiddly) and added my own code to loop over the pipeline input, to produce the TextDescription and to make the select-object code more readable by using the $fields variable to hold all of the necessary script blocks.

Anyway, I present what I've got for your perusal. Personally, I like to see different approaches to the same problem as it often gives additional insight into what is possible. Those "Wow, that's much easier than what I've been doing" moments can be embarrassing, but they also mean that I'm learning and IT isn't an industry where you can afford to stop learning.

The code is here. As usual, it's pretty straightforward. (My most ambitious scripts are really too site-specific to be of use to anyone else.) There is a helper function, Convert-WMIDateToSystemDate, which does exactly what you'd think and there is the main function, Get-Uptime, which also does as you'd expect.
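
If you are curious about the WMI date conversion, the core of it can be as small as this (a sketch of the idea, not necessarily the exact code in the download):

# A sketch: convert a WMI datetime string (something like
# "20080628093012.123456-240") into a regular .Net DateTime value.
function Convert-WMIDateToSystemDate {
    param (
        [string] $wmiDate = $(throw "ERROR: The 'wmiDate' parameter is required.")
    )

    [System.Management.ManagementDateTimeConverter]::ToDateTime($wmiDate)
}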

You could invoke the function like this:
("server1", "server2") | get-uptime | sort TotalSeconds | ft -a

The more interesting columns are the "TotalSeconds" column, which is more useful for sorting than actually being read and interpreted by humans, and the "TextDescription" column, which describes the uptime of a server in days, hours, minutes and seconds and is easily understood by the most P of PHBs.

Tuesday, June 24, 2008

A (simplistic) implementation of a watch command

Today's posting covers a function which mimics the 'watch' command, which is popular on Unix and Linux systems. The watch command will repetitively run a command line and display the output. This is useful for watching the progress of systems writing a series of files or waiting for a server to come online. Just start a shell, run the command and park the shell so it is still visible, but out of the way. Even though my function does not do a particularly thorough job of mimicking the Unix watch command, I still get a lot of mileage out of it.

This one is so simple that I am just going to paste it in here and hope that the blogging software doesn't drop the formatting. I'm still learning the quirks of blogger, so please bear with me. Here goes:

##############################################################################
# Purpose: More-or-less mimic the popular unix "watch" command
function Watch-Command {
    param (
        [string] $command,
        [int] $interval = 2
    )

    do {
        clear
        invoke-expression $command

        # Just to let the user know what is happening
        "Waiting for $interval seconds. Press Ctrl-C to stop execution..."
        sleep $interval
    }
    # loop FOREVER
    while (1 -eq 1)
}

That's it. I normally alias this to watch.

Let's say that you were waiting for a particular file to show up in the temporary directory of your workstation. A simple solution might be:

watch "dir c:\temp\somefile.txt"

This isn't as cute as writing a function to ring the system bell, play a wave file or say "You've got a file.", but it is quick, functional and flexible, which are things we value highly here at the Ashdar Partners blog. It's also already written. I could write a custom-built function to do those things, but why bother when I already have watch?

One quirk with my version of watch is that the quoting of the -command parameter works differently than what you get in bash. In bash, you can often avoid enclosing the whole command in quotes. With my Powershell version, you should probably just go ahead and wrap the command in double quotes, because any embedded spaces will require the use of quotes. Stopping a moment to consider the question "should I or shouldn't I add quotes" is more work than just conditioning yourself to do it. It's usually simple enough to wrap the command line in quotes. If you already have complicated quoting in place, or you want to do something that requires some more processing, you can get the same effect by writing a little function.

As a simple example, let's say that I want to repetitively perform two tasks. Firstly, I want to output the time to the console using the following command:

date

and, secondly, I want to look for a particular file using the following command:

dir c:\sqlserver-list.txt

I can do this one of (at least) two ways. The first, and more obvious, way is to wrap everything in quotes and separate the two commands with a semicolon.

watch "date ; dir c:\sqlserver-list.txt"

The second, and ultimately more flexible, way is to build a little function with those commands via the command line. This also provides another viewpoint on how flexible Powershell really is. The function can be named anything; for this example I will use the ever-popular foo. Just type in the following in a shell window, hitting return where you see the line breaks here:

function foo {
date
dir c:\sqlserver-list.txt
}

After the closing "}", you will need to hit return twice. This will bring you back to an interactive command line. Then, to prove that we really have a new function, you can run the following two commands to see information about our new function:

dir function:foo
(dir function:foo).Definition

The first command will show you properties of the foo function object. The second command will show you the body ("definition" in Powershell-speak) of the foo function object. Lastly, you can use the function with watch:

watch foo


Finally, the -interval parameter controls how long the function waits before running the command again. To run our foo function once every 60 seconds, you could use it like this:

watch -command foo -interval 60

Two seconds is a pretty short duration, but it is the default on Linux systems and I saw no real reason to change it.

That's all for this time. Thanks for reading.

Thursday, June 19, 2008

Powershell Function: Get-LogicalDisk

In this post, I will discuss Get-LogicalDisk. Get-LogicalDisk is a function that retrieves disk information from remote servers via WMI. This allows for quick and flexible reporting on storage usage and availability.

This is a bit of a departure from the "SQL Server"-oriented functionality that I have been describing in recent posts. In some shops, free disk space or, more likely, the lack of it is a daily battle. Every morning, I run a script that checks for many different kinds of "negative events" that might have happened overnight. A "negative event" might be a failed job, database growth or a server running low on disk space. The latter check calls Get-SQLServerList and Get-LogicalDisk to do all of the heavy work.

The script is straightforward, so I'll just continue with my tradition of a brief overview with some examples. (This is a blog, after all, and not really suited to voluminous detail. FWIW, it is my opinion that detail on what a confusing part of a script does should be in the comments in the script, not in a blog posting. If you find a part of any script confusing, let me know and I'll update the comments in the script and possibly do a blog posting to explain how it works.)

The function takes a list of computers as pipeline input, so feeding it a list of SQL Servers using Get-SQLServerList (which I normally alias as "ssl") or any other method is easy. Get-LogicalDisk uses WMI to retrieve the following information for each "fixed disk" on the remote system:
  • Computer
  • Letter
  • Label
  • Space, which gives the total size of the disk
  • FreeSpace, which gives the size of the free space on the drive
  • UsedSpace, which gives the size of the space which is being used on the drive
  • Free-%
  • Used-%
  • Unit
The computer name is returned to allow discrimination between servers in reports that cover more than one server. The function also reports back what units (MB, GB, TB, etc.) the three space values are measured in. This gives a visual cue to a user who is manually running reports from a command line.

I would like to point out that one could calculate UsedSpace by subtracting FreeSpace from Space and that one could calculate Free-% given Used-%, yet they are returned by Get-LogicalDisk. These calculations are done by Get-LogicalDisk so that a calling program (or something that you've hacked together on a command line) does not have to do them. This makes for a more flexible function and for less typing.
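
The heart of the function is a WMI query against the Win32_LogicalDisk class, something along these lines (a simplified sketch; the real function adds unit handling and the calculated columns described above):

# A simplified sketch of the WMI call at the core of Get-LogicalDisk.
# DriveType = 3 restricts the results to local "fixed" disks.
Get-WmiObject -class Win32_LogicalDisk -computername "." -filter "DriveType = 3" |
    select @{Name="Computer"; Expression={$_.SystemName}},
        @{Name="Letter"; Expression={$_.DeviceID}},
        @{Name="Label"; Expression={$_.VolumeName}},
        @{Name="SpaceGB"; Expression={[math]::Round($_.Size / 1gb, 2)}},
        @{Name="FreeSpaceGB"; Expression={[math]::Round($_.FreeSpace / 1gb, 2)}},
        @{Name="Free-%"; Expression={[int] (100 * $_.FreeSpace / $_.Size)}}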


Here is a simple example, including some output, which finds information on the local computer:
PS> get-logicaldisk . | ft -a

Computer Letter Label Space FreeSpace UsedSpace Free-% Used-% Unit
-------- ------ ----- ----- --------- --------- ------ ------ ----
MYLAPTOP C: local.boot 33.17 7.94 25.23 24 76 GB

If you are using a naming scheme (or some other method) to discriminate between development, quality assurance and production servers, the following query might provide a list of disks where you might be able to store that 'temporary' backup:

PS> ssl de | Get-LogicalDisk | where {$_.Letter -match "c"} | sort FreeSpace | ft -a


Here is a command that looks for disks on production servers with less than 5% free space:
PS> ssl pr | Get-LogicalDisk | where {$_."Free-%" -lt 5} | ft -a

I hope that you can see the power of the Get-LogicalDisk function, which can easily be leveraged by additional simple Powershell syntax to provide complicated, real-time reports on disk space issues.

Tuesday, June 17, 2008

In my last post, I introduced Get-SQLServerList. In this post, I am going to describe some more ways to leverage the Get-SQLServerList and Invoke-SQLQuery functions.

Basically, I am suggesting that you follow this formula:
Step 1. Put your server names in a text file called "sqlserver-list.txt" and put the file in the root of your C drive.
Step 2. dot-source the functions that I've given you.
Step 3. Profit.

(I'd like to remind you that I normally alias Get-SQLServerList as "ssl" and Invoke-SQLQuery as "iq". This is to save on typing. It's also worth pointing out that I normally use the "ft" alias instead of typing format-table. On with the show...)

Let's say that your boss wants to know which 64 bit servers aren't at SP2. One way to do this would be to spend a lot of time manually connecting to each system, running a query and putting the results into a spreadsheet. Another way to do this is to go to the shared Word or Excel file that is supposed to be updated (but never is) with this information, spend time looking through it and cutting-and-pasting what you have found into an email. Another way to do this would be to purchase, install and maintain some sort of enterprise management product that never seems to have quite the report that you want.

Here's what I do:
PS>$result = ssl | iq | where {$_.Edition -match '64' -and !($_.ProductLevel -eq 'SP2')}

That will take a few seconds to run. What this does is:
  • Uses Get-SQLServerList to get a list of servers to look at.
  • Runs Invoke-SQLQuery, using the default command, to retrieve the relevant information from each server. The results are packed into a System.Array and handed off to the next link in the pipeline.
  • Tests the results in the System.Array to find servers that have '64' in the Edition string, signifying a 64 bit edition of SQL Server, and are not at SP2.
  • Stores the resulting array, which contains only the information we want, in the $result variable.
  • Note that I do not specify ft here. If I did, $result would contain formatted data, which means that I would not be able to apply a sort or additional where clauses to $result.

Now, I want to see what I have collected. So, I run this:
PS> $result | ft -a

I am specifying ft now, because I want nicely-formatted output. You can specify Format-List (fl) if you prefer.

If I don't have the results that I want, I can go back and change the original command, run it again and look at the contents of $result again. Or maybe I want to see the small servers first:

PS> $result | sort TotalServerMemoryMB | ft -a

Too many columns? OK, let's cut it down to the more interesting ones:

PS>$result = ssl | iq | where {$_.Edition -match '64' -and !($_.ProductLevel -eq 'SP2')} | select Computer,ProductVersion,Build,ProductLevel,Edition,totalServerMemoryMB
PS>$result | ft -a

Once I have what I want, then:
PS> $result | out-clipboard

Now, just paste the clipboard contents into an email, and it's done. Slightly fancier reporting can be done by putting the results into an Excel file, providing column headings, bolding them and setting the column widths. You could also probably save as an xml file, come up with a style sheet to apply to the data and send the resulting report off to whomever wants it. In my world, "fast" is almost always more important than "pretty".

Once again, I hope that you'll see that you are really only limited by your imagination. Well, that and your skill at constructing pipelines. It can be daunting at first, but the idea of using a Transact-SQL where clause should be familiar to you; it's just that Powershell's where clause syntax is different.

Thursday, June 12, 2008

Powershell Function: Get-SQLServerList

In my last post, I described Invoke-SQLQuery. In this post, I describe Get-SQLServerList, which retrieves a list of SQL Servers. Get-SQLServerList leverages Invoke-SQLQuery and provides an easy way to provide server names to a pipeline.

This time, the zip file for Invoke-SQLQuery contains two functions. The first function is Query-ServerCatalog. This function builds a simple SELECT statement and then uses Invoke-SQLQuery to run it. This is useful as an example of the sort of thing that you can do with Invoke-SQLQuery. Query-ServerCatalog will return a result set full of server names, assuming that you have a particular table in a particular database on a particular server that holds these names. Setting that up and keeping it current is a bit of work and probably is not worth doing unless you work in a large environment with a group of DBAs.

The more interesting function is Get-SQLServerList. Some of the highlights of Get-SQLServerList are:
  • It reads a text file to find the names of servers and matches them against the $computer parameter.
  • There is simplistic support for 'comments' in this text file. A '#' at the start of a line indicates that the remainder of the line should be ignored. This allows commentary information to be placed in the text file and provides a simple way to temporarily ignore a particular server.
  • If the text file isn't found, Get-SQLServerList tries to grab a list of servers from a table using Query-ServerCatalog. I don't use this function now. Instead, I rely entirely on text files to store the server names. Text files are very easy to manage and change. I am leaving the feature alone because it's not broken, as far as I know, and it might be useful to me or someone else in the future.
  • If a value is specified for -ProductVersion, Get-SQLServerList actually connects to all of the servers that match the given value of $computer and then looks for matching ProductVersion information. Even though it connects to all of the servers it still runs in just a few seconds.

With careful naming of servers, I can quickly select subsets of the entire server population. For our purposes, a seven character long name might consist of the following 'fields', along with some made-up examples:

Three characters indicating location of server: PHL (Philadelphia), NYC (New York City), SEA (Seattle)

Two characters indicating environment: DE (Development), "QA" (Quality Assurance), "PR" (Production)

Two characters indicating a 'server number': 01, 02, ..., 10, ..., 20, ..., 99

Using this scheme, the contents of the text file might look like this:
PHLQA05
PHLPR06
PHLPR88
NYCDE11
NYCPR11
NYCPR44
SEADE04
SEADE05
SEAPR33
SEAQA33
# end of file


I normally alias Get-SQLServerList to just "ssl", to cut down on typing. Here are some quick examples, using my made-up server names, followed with some made-up output.

# Show me all of the servers
PS>ssl

PHLQA05
PHLPR06
PHLPR88
NYCDE11
NYCPR11
NYCPR44
SEADE04
SEADE05
SEAPR33
SEAQA33

# Show me all of the QA servers. Note that the function does not use capitalization-sensitive comparisons:
PS>ssl qa
PHLQA05
SEAQA33

# Show me all of the Philly servers:
PS>ssl phl

PHLQA05
PHLPR06
PHLPR88

# Show me a particular server:
PS>ssl phlpr88
PHLPR88


And so on. By providing a -productversion value, you can run more intricate queries:

# Show me all of the sql2005 servers:
PS>ssl -ProductVersion 2005
PHLQA05
PHLPR06
PHLPR88
NYCDE11
NYCPR11

# Show me all of the sql2005 servers in Philly
PS>ssl phl 2005
PHLQA05
PHLPR06
PHLPR88

# Show me all of the servers with build 3054
PS>ssl -ProductVersion 3054
PHLPR06

So, we can easily find servers that are interesting to us. Note that both the -Computer and -ProductVersion are using Powershell's matching semantics. This means that you can create fairly complicated searches to answer questions like "show me the Philly servers with even-numbered server numbers".
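
For example, that last question (assuming the naming scheme and the made-up server list above) comes down to a single regular expression:

# Show me the Philly servers with even-numbered server numbers
PS>ssl 'phl.*[02468]$'
PHLPR06
PHLPR88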

That's all for now. In my next post, I'll show you some more of the commands that leverage Get-SQLServerList and Invoke-SQLQuery.

Tuesday, June 10, 2008

Powershell Function: Invoke-SQLQuery and Ping-Computer

In this posting, I am going to (briefly) discuss Invoke-SQLQuery, which will run a SQL Server query against a computer or a group of computers and then return the results as a Powershell object. This object can be fed to another element in a pipeline, where it can be sliced, diced and stored using Powershell's intrinsic capabilities.

About a year ago, I was looking for a way to return a result set to Powershell. It seemed that I was forever logging into a SQL Server instance, running one query, then logging out and repeating the whole thing on a different instance.

One day, I read a posting on Powershell and SMO by Muthusamy Anantha Kumar. I noticed some ado.net code towards the bottom of the page and I saw the proverbial light: Powershell is to .Net objects what VBScript is to COM objects. Armed with that, I started working on a function that ultimately evolved into Invoke-SQLQuery.

I'd like to point out the following facets of Invoke-SQLQuery:
  • This function directly accepts computer names from pipeline input. This makes it trivial to run a query on a large number of SQL Servers.
  • If you prefer, you can use the -computer parameter to specify one computer at a time. Often, this syntax is more terse than specifying a whole pipeline.
  • The command provided with -commandText is executed on each computer, in turn.
  • The output from each execution is aggregated into a single "mega result set".
  • It is usually important to be able to determine which rows came from which computer because there is only one result set. As a matter of habit, I stick a @@SERVERNAME at the start of the SELECT clause of my query.
  • The default query provides a quick overview of the server and includes version information, memory information, instance start time, etc. I use this as a connectivity test or to remind myself of what software is installed on a particular computer.
  • The query that is sent to the server can be quite complex. The query can be multiline, but not multibatch. The query can create and use temporary tables. If you can run your query in SSMS, as long as you are not using the "GO" keyword, you should be able to run your query with Invoke-SQLQuery.
  • The function can read a query from a file passed in with -commandFile. I have added this functionality only in the last few weeks.
  • Invoke-SQLQuery uses Ping-Computer to ensure that the remote computer is at least booted before trying to connect to a SQL Server instance. This cuts down on the time it takes to fail when a remote computer is down or unreachable for whatever reason. Ping-Computer is a very close cousin of Brandon Shell's Ping-Server, having only a few simple changes to match my scripting style. I'm presenting it here for your convenience, but you should definitely check out Brandon's blog. I have learned much from it.
Some drawbacks of Invoke-SQLQuery:
  • Invoke-SQLQuery runs the given query on each computer, serially, one execution at a time. Clearly this will not scale to hundreds of servers, but I've been happily running this at a site with about 20 servers dozens of times a day for about a year now.
  • This does not support multiple returned result sets. In other words, you can't issue two selects in your query and get two different result sets in a collection of some sort. Again, this has not hindered me.
  • UPDATE, INSERT and DELETE are not supported. I have other scripts that can handle other sorts of DML.
  • When loading queries from a file, you will get runtime errors if the file contains the "GO" keyword. I have not built in a fail-safe/sanity check for this yet.
Here are a few examples of how I use this function. Please watch out for word wrap.

To call a SQL Server system stored procedure on a remote computer and show only some of the returned columns:

Invoke-SQLQuery -computer "server_a" -commandtext "sp_helpdb" | ft -auto Name,Status


Run pretty much any query that you'd like on a single computer:

$cmdText = "select @@servername 'Computer', count(*) 'ConnectionCount' from master.dbo.sysprocesses"


$result = Invoke-SQLQuery -computer:"server_a" -commandtext:$cmdText

$result | ft -auto


Run the same query, but this time run it against two computers. The names of the computers are placed in an array. That array is then fed to a pipeline. It would be trivial to add additional computers to the array, which will cause the command to be run on those computers as well. It is also easy to load an array of computer names from a simple text file.

$computers = ("server_a", "server_b")

$result = $computers | Invoke-SQLQuery -commandtext:$cmdText

$result | ft -auto Computer,ConnectionCount


The output of that last command might look like this:

Computer ConnectionCount

-------- ---------------
SERVER_A 35
SERVER_B 82


Note that the contents of $result are a real Powershell object. That means that you can go and run something like this:

$result | sort ConnectionCount -desc | ft -auto Computer,ConnectionCount

Computer ConnectionCount
-------- ---------------
SERVER_B 82
SERVER_A 35


You can add additional pipeline items as you wish. For example, instead of (or in addition to) applying a sort, you could apply a where clause. This would be silly in my two-server example, but imagine that you have a hundred servers and you would like to know the top five servers by number of connections. After running the queries and placing the results in $result, you could run something like this:

$result | sort ConnectionCount -desc | select -first 5


You could stick your results in an Excel file:

$result | sort ConnectionCount -desc | select -first 5 | Out-Excel
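
One last example: because the query can be multiline and can use temporary tables, you can push a little work onto the remote servers before bringing the results back. This is only a sketch, and the column aliases are made up, but it runs the same kind of sysprocesses query as above, broken out by database:

$cmdText = @"
select @@servername 'Computer', d.name 'DatabaseName', count(*) 'ConnectionCount'
into #conn
from master.dbo.sysprocesses p
join master.dbo.sysdatabases d on p.dbid = d.dbid
group by d.name

select * from #conn order by ConnectionCount desc
"@

$result = ("server_a", "server_b") | Invoke-SQLQuery -commandtext:$cmdText

$result | ft -auto Computer,DatabaseName,ConnectionCount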

You are only limited by your imagination.

Tuesday, June 3, 2008

Powershell Function: Change-VolumeLabel

This posting continues my series of articles describing my Powershell environment. This time, I describe a function that I use to change volume labels on remote machines. A file with the function is available.

I wrote this function because I needed a way to change labels associated with volumes on remote machines. Earlier versions of this function were actually written in VBScript. I rewrote that old code in Powershell because I wanted more consistency with the "standard" Powershell command line switches and I am (slowly) moving away from all of my old VBScript programs.

Before I had a script, I would use RDP to connect to the remote machine, log in and then use Windows Explorer to change the volume labels. This was a lot of effort for a small change. At one of my clients, I use the volume label as a quick way to discern between SAN and non-SAN volumes by prepending "san" or "local" to the volume labels. When this convention was adopted, there were a fair number of old machines to go through. I also use this script when setting up new machines, so this function has gotten a workout.

This function is pretty straightforward. The function has the following key features:
  1. It leverages WMI (the Win32_Volume class) to do the dirty work; there is a minimal sketch of that call after this list.
  2. It supports several of the "standard" Powershell command switches, namely "-verbose", "-confirm" and "-whatif". It does this by leveraging Jeffrey Snover's "Should-Process" function. At a minimum, I try to support these three switches in all of my scripts.
  3. If you don't specify a drive letter, the script prints a short report showing the drives that exist. This supports the notion that an administrator should be able to "explore" or "discover" an environment with powershell and make changes interactively. It sure beats remembering things.
  4. There is one caveat with this script. It does not work with versions of Windows that pre-date Windows Server 2003. It seems that WMI did not provide the WIN32_VOLUME class prior to Windows Server 2003. If someone tries this function on Windows Vista, I'd love to hear if it works or not.
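
Here is a minimal sketch of that WMI call (the computer name, drive letter and new label are made up; the real function wraps this in parameter handling and the switches mentioned above):

# A minimal sketch: change the label of the E: volume on a remote machine.
# This relies on the Win32_Volume class, so the target must be running
# Windows Server 2003 or later.
$volume = Get-WmiObject -class Win32_Volume -computername "server_a" -filter "DriveLetter = 'E:'"
$volume.Label = "san.data01"
$volume.Put()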
That's all I have for this post. For my next post, I will pick out another function from my set up and describe it.

Friday, May 30, 2008

Parsing SQLIO output with Powershell

Anyone who has used sqlio knows that its output files are not particularly readable. I used to manually run sqlio tests, then retype the output into a spreadsheet so that I could more easily analyze the data. This proved time-consuming, error-prone and boring, so I wrote a script that parses all of the data that I wanted from the sqlio output file. This post provides that code, and some background on how I use it. I will return to my series on my Powershell set up next time.

The first thing to do is to run your tests with sqlio and get its results into files. This is left as an exercise for the reader. I have a .cmd file that I use to do this. I use a .cmd file because most of the machines that I test do not have Powershell installed on them. I am including this file in the SQLIO.scripts.zip file.

The reason that I break the running of the tests and the parsing of the output into two separate processes is to decouple them. It can take quite a while for sqlio to run, especially if you are running tests at multiple block sizes or varying between random and sequential or read and write. The idea is to get all the data first; then I can parse at my leisure. If something goes wrong with the parsing, I don't have to rerun my entire test. This also allows me to tweak the parsing script as I go along, without having to re-run the data collection phase.

Running tests is tedious. If all you want to do is try the parsing script, I have included some sample output that you can use in the sqlio.results.zip file.

The second thing to do is to run the script that parses a raw output file and store the parsed data in a Powershell object.

That command might look something like this:
$result = ls "*.txt" | %{Parse-SQLIOOutput -file $_}

The last thing that we need to do is to analyze the data. Storing the data in a Powershell object allows us to use all of Powershell's features for producing reports, or to funnel the data into another tool for analysis.

For example, we can use the where keyword to restrict the output or we could store the data in an Excel file or a SQL Server table. A command to look at the data on a command line might look something like this:
$result | sort Drive,RandomOrSequential,ReadOrWrite,BlockSize,QueueDepth | format-list *

Or we could just sort it and send it to Microsoft Excel:
$result | sort Drive,RandomOrSequential,ReadOrWrite,BlockSize,QueueDepth | Out-Excel

That's all there is to my script. I hope that you download the files and give it a try.

It is worth mentioning that Linchi Shea has a perl script that does more-or-less the same thing as my script. I haven't tried it myself, but Linchi does good stuff and you should definitely check it out if you are into perl.

Wednesday, May 28, 2008

My Powershell set up, Part 2 of 2

Last time, I explained that my profile consists of several files, with various functions (and associated aliases) broken out by functionality.

Each of these "subprofiles" is loaded when my standard Powershell profile runs.

. "$env:winbin\psh\profile.Security.ps1"
. "$env:winbin\psh\profile.RemoteService.ps1"
. "$env:winbin\psh\profile.SQLServer.ps1"
. "$env:winbin\psh\profile.dstrait.ps1"
. "$env:winbin\psh\profile.$($env:userdomain).ps1"

The last line will load a client-specific file, depending on the credentials that I am currently using.

Some of the functions in these files are mine and some of them were borrowed from various web sites. When I borrow a function, I add a small comment header that describes where the function came from. This would allow me to go back and look for more interesting scripts from that author, or to look for improved versions or allow me to give credit where credit is due. Some of these functions are heavily modified versions of such originals, some have been modified so much that they are probably unrecognizable.

Here is a listing of functions, grouped by Filename. For my next posting in this series, I will pick one of these functions and explain what I use it for and how it works.

profile.dstrait.ps1
-------------------
Change-VolumeLabel
Convert-WMIDateToSystemDate
Find-File
Get-DiskUsage
Get-Guihelp
Get-Hardware
Get-LiveHelp
Get-LogicalDisk
Get-Range
Get-RecyclerSize
Get-RemoteDrives
Get-Uptime
Manage-Computer
Out-Excel
Ping-Computer
prompt
Report-ODBCDriver
Report-ODBCSystemDSN
Should-Process
tree
Watch-Command
whoami

profile.Legion.ps1
------------------
Archive-SQLAgentJob
Archive-SQLDatabase
Archive-SQLDTSPackage
Archive-SQLStoredProcedure
Copy-SQLDTSPackage
Document-Ticket
Export-WinbinToLegion.line
Fix-SQLAgentJobStep
Fix-SQLDatabaseOwner
Get-DomainSQLServerList.line
Get-Ticket
Report-LatestBackups
Report-SQLAgentJobsWithNonDefaultOwners
Report-SQLServerRolesWithUnusualMembers
Report-SQLServerSpaceUsage
ScriptOut-SQLAgentJob

profile.RemoteService.ps1
-------------------------
List-RemoteService
Pause-RemoteService
Start-RemoteService
Stop-RemoteService

profile.Security.ps1
--------------------
Add-ComputerGroupMember
Check-IsDomainComputer
Check-IsDomainGroup
Check-IsDomainUser
List-ComputerGroupMembers
List-DomainComputers
List-DomainGroupMembers
List-DomainGroupsForUser
Remove-ComputerGroupMember
Search-ActiveDirectory

profile.SQLServer.ps1
---------------------
Execute-SQLAgentJob
Execute-SQLDBCCUpdateUsage
Execute-SQLQuery
Get-SQLServerList
Get-SQLStatus
Get-SQLVirtualNodes
Invoke-SQLQuery
Out-SQLDatabase
Query-ServerCatalog
Report-SQLAgentJob
Report-SQLAgentJobHistory
Report-SQLDatabase
Report-SQLDatabaseObject
Report-SQLDatabaseTable
Report-SQLDTSPackage
Report-SQLLinkedServer
Report-SQLOfflineDatabase
Report-SQLServerMemory.line
Report-SQLServerSearch

Wednesday, May 21, 2008

My Powershell set up, Part 1 of 2

This posting gives an overview of how I have set up Powershell on my workstation.

Any heavy shell user normally customizes his (or her) environment. These environments are (and will likely always be) a work in progress. As you learn things and write additional functions to help you with your tasks, your environment will grow and change. I find that I am tweaking things even as I write this posting.

Aside: One of the things that I am forever tweaking is function names. I try to follow the Powershell verb-noun convention, but there are lots of spots where I have had to make judgment calls. It's not hard to find inconsistencies. For example, I vacillate between "Report", "List" and "Get". I'm not crazy about Get. My issue with Get is that it seems that Get should return an object and be used like this:

$result = Get-Thingy
$result.MemberFunction $param
$result = $null

whereas I run a lot of stuff that just returns information.

A 'Report' is usually more complicated than a simple 'List', which is often a single column.

What's better, "Invoke-SQLQuery" or "Get-SQLResults"?

Anyway, let's get back on track.

Firstly, I am still running Powershell V1.0. I haven't spent any time with the Powershell 2.0 CTP, though I have read a few introductory articles on new features. I have had issues with other Microsoft CTPs in the past and I'm loath to install something on a 'production' computer that I might have a hard time removing. I am also dismayed with the decision to not support remote execution under Windows XP or Windows Server 2003. None of my clients have deployed Vista on more than a handful of workstations, and none of them have even started looking at Windows Server 2008. They will not be doing so just to run Powershell.

Secondly, I use Powertab. Powertab provides pop-up help for functions and files, just by hitting the tab key.

Thirdly, I use the Powershell Community Extensions (PSCX). PSCX provides lots of small, handy functions.

Lastly, my profile consists of several files, with various functions (and associated aliases) broken out by functionality. I have two reasons for breaking them out. The first reason is that my original profile was getting very large. My main profile is over 79 KB right now. The second reason is that I try to break out stuff that is specific to a client and not useful anywhere else.

Next time, I will go into some detail on how my profile works.

Sunday, May 18, 2008

First Post

So, belatedly, I become a blogger.

It's not that I don't like to write. I like to write very much. Too much, really. Anyone who has been the victim of one of my emails or read one of my postings on the old Swynk SQL DBA list can vouch for that.

The primary thing that has always held me back from blogging is a simple question: "What can I say about SQL Server that others have not already said?". There are a wealth of great blogs and articles discussing SQL Server and there is little point in piling on.

The one thing that I know better than any other DBA that I have come across is Powershell. (It's true that I don't get out much.) I have had a few people comment on how quickly I can slap together reports, push changes to databases and do other tasks. A key reason that I can do this is Powershell.

So, we'll push on with powershell for a while and see how the page hits work out. There may be a few side trips along the way, as I have a few useful scripts written in Transact-SQL and in VBScript and my main 'thing' is actually performance engineering.

First a little background. I am a long-time (ten years) SQL Server DBA but, let's face it, Transact-SQL isn't appropriate for everything. In prior lives, I worked as a developer and wrote in a variety of languages (C and various dialects of BASIC being the most famous, PL/1 and HPL possibly being the least famous). Transact-SQL has always felt like a second class language to me. The most obvious places where TSQL falls down are file manipulation, exporting data, error handling, program automation through COM, WMI, use of SQL-DMO, coupling and the like. Sometimes TSQL isn't appropriate just because there isn't any SQL Server in the picture. Using TSQL to write a report of free disk space on a group of servers, for example.

(I have also written a fair amount of code in "batch language", the native shell language of DOS and Windows. Such scripts are often syntactically mindbending, in a lovecraftian sense; they are hard to debug and should not be mentioned in polite company.)

In the late nineties, I started using VBScript in conjunction with Windows Scripting Host (WSH). This allowed me to write scripts to do things like search directory hierarchies to delete old files, script out SQL Server objects, fix job step output file paths, write change scripts, check disk space and do all sorts of other useful things. WSH is a good way to write small scripts that can manipulate files, perform simple-but-repetitive tasks and automate programs through COM. WSH still has its uses because powershell is not widely deployed yet.

A few years ago, with the advent of .Net and the tendency towards deprecation of COM, I started looking around for a new language. I've tried several different things. For me, Jscript isn't any more useful than VBScript. I do not think that I have ever written anything longer than 10 lines in bash, though I was a long time user of cygwin's bash and I maintain a couple of linux systems at home. I have worked on a few small things in perl, but I just never felt comfortable with it. Lots of smart people use python (and ruby), but it's more non-Microsoft stuff to install on a new computer.

I started using powershell about two years ago. At first, I used it simply as a replacement for the old Windows command shell. As I learned more of the language, I found myself using it more and more in my daily tasks. It was very easy to get started and to increase my skills as time went on and I discovered how to exploit powershell's features. In short order, I was writing scripts. Then I started using WMI, then COM, then .net objects. It's easy to get started and just go along at your own rate.

I'm not going to spend much time cheerleading for powershell. In fact here are the bullet points for my whole speech:
  • I believe that powershell is the best thing Microsoft has done in a long while.
  • I think that if you are a SQL Server admin or any sort of network or Active Directory administrator, and you have any tendencies towards scripting (as opposed to point-and-click use of GUI-based admin tools), you should be at least competent in powershell.
  • If you are working in Microsoft's universe, fighting Microsoft's tide is usually unsuccessful.
  • Microsoft seems to be trying its damnedest to get you to use powershell.

I am not going to try to teach powershell basics because there are plenty of good resources online. I don't own any powershell books, so I can't really make any suggestions. I'm used to teaching myself things, often under pressure, but a book or books might work well for you.

I am not going to show you lots of gee-whiz syntax or how to write terse and incomprehensible one-liners because I don't see much use in that, as far as trying to teach goes. It's fun as an intellectual exercise. It's pretty easy to turn everything into a one-liner, if you hide it in a function.

What am I going to do? A few examples:

I have a script that can report information on storage (sizes, free space, percent free, etc.) quickly, from a command line. With some powershell trickery, I can easily run this report on one server or dozens. The great thing about this is that the output from each server is coalesced into a single collection of queryable objects. I can dump the output to the screen, or stick it in an Excel file or a table.

I have a script that can run an ad-hoc SQL query on any instance or group of instances I choose. Similarly, I can coalesce the report into a single collection. This script can be fed instance names through a pipeline, so they can be stored in any form of permanent storage.

I intend to split my postings between functions in my profile and scripts that do more complicated things. I also intend to keep the length of posts down to something shorter than this one.