Lately, I have been trying to up my game a bit, and one of the things I am doing is using some of the latest tools and technologies.
(Partly, this is because I have been working with some old technology lately and I am feeling "dated". Ask me about FoxPro, if you dare.)
PowerShell
After many years of not seeing value in updating, I am moving to PowerShell 6. (Version 6.2, specifically.) Frankly, more things work than I expected. I haven't seen any performance regressions and things have gone smoothly. I think I spent more time incorporating the PowerShell version into my Prompt function than anything else that I had to do because of the upgrade.
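For what it's worth, the version-in-the-prompt bit boils down to something like this (a simplified sketch, not my actual Prompt function):
function prompt {
    # $PSVersionTable is built in; this just surfaces the major.minor version
    $v = $PSVersionTable.PSVersion
    "PS $($v.Major).$($v.Minor) $(Get-Location)> "
}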
SSMS
I am running the absolute latest build of SSMS. I skipped some SSMS builds when they removed the diagramming functionality. Now that it has been put back into the product, I am on the update train again.
Aside: The diagramming feature isn't awesome but it is easy to use, requires no installation, doesn't cost anything and is more than adequate for 95% of the things I need such a tool for. I know that it isn't a real "enterprisey" ERD tool, but I don't need one. In fact, if they took out all of the "modify the tables in the database" features, I would like it more. The main thing is that, sometimes, you just want to look at a diagram of a subset of the tables in a database.
Windows 10 1903 Update
After not getting any update action for weeks after it was released, I bit the bullet and forced this latest Windows release onto my main laptop. (Carpe Diem, right?) It's been fine so far. The blurred-out login display was a little jarring at first. Multi-monitor has always been the shakiest feature of my laptop, but it has not been any worse after updating.
DbaTools
After several years of "beta", DbaTools has had its first release that is judged production-ready. I've updated and I'm slowly looking through my code for issues. I've been using this as a production tool for years. I'm excited for this release because it validates all of the work that has been done on the tool and the names of functions should stop jumping around so much.
Learnings
I spend a lot of time watching things on YouTube (or the SQL PASS site) to learn new things and to see how other people approach the problems that I probably have. Lately, I have been focusing on the PowerShell + DevOps Global Summit 2019 videos.
Just For Fun
I am trying out the new "Windows Terminal", which is in preview and available in the Windows Store. I found this through a post on Hanselman's site. It found my Windows Subsystem for Linux (WSL) and gave me bash, it found my new and old PowerShell installations and my old, trusty cmd shell (which I still sometimes use for chocolatey). I haven't put it into heavy use yet, but nearly everything seems OK so far. (The settings menu pick doesn't seem to be working properly for me.)
Saturday, June 22, 2019
DbaTools version 1.0 has been released
(Actually, they are up to version 1.0.2 already.)
I've been using DbaTools for a number of years. I have come to rely on it more and more. Now, I find it at least as deeply entwined in my PowerShell code as the SqlServer module is.
All right-thinking people use DbaTools. And you should, too.
More seriously, DbaTools is one of the few PowerShell modules I absolutely rely on, on a daily basis. I use DbaTools so frequently that I can't easily categorize it all. In short, I can:
- Check servers, instances, databases, disks, security and many other things for issues on a daily basis
- Check business rules on a daily basis
- Restore databases to test environments on an ad-hoc basis. I have this working so smoothly that I forget that I have Ola Hallengren's backup scripts actually running the database backups.
- Load data into databases
- Extract data from databases
I've never actually used DbaTools for its original purpose, which was migrating databases and instances onto new computers. It can still do that, but it has added so much more goodness over the years that you should install it anyway. There really is something for everyone in DbaTools.
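To give a flavor of what I mean, here are a few hedged examples of the kinds of calls I lean on. The server names are made up, and you should double-check the cmdlet and parameter names against your installed version with Get-Command -Module dbatools:
# Check disk space across a set of servers
Get-DbaDiskSpace -ComputerName PHLPR06, NYCPR11 | Format-Table -AutoSize
# Verify last night's backups by restoring them to a test instance and running DBCC checks
Test-DbaLastBackup -SqlInstance PHLPR06 -Destination PHLDE01
# Refresh a test environment from a folder full of backups
Restore-DbaDatabase -SqlInstance PHLDE01 -Path \\backupshare\PHLPR06 -WithReplace
# Run an ad-hoc query against several instances at once
Invoke-DbaQuery -SqlInstance PHLPR06, NYCPR11 -Query "SELECT @@SERVERNAME AS Computer, COUNT(*) AS DatabaseCount FROM sys.databases"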
Saturday, January 19, 2019
If you want to see a cool ten-minute, "lightning" demo of how you can use PowerShell and Excel together to analyze data or produce nice reports, watch this YouTube video, from 7:47 to 18:22. This is presented by Doug Finke, the author of the Import-Excel PowerShell module.
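If you want the flavor without watching the video, the module turns "dump this data into a decent-looking spreadsheet" into a one-liner. A hedged example (Export-Excel comes from the ImportExcel module; the file path is just a placeholder):
# Install-Module ImportExcel   # one-time setup
Get-Process | Select-Object Name, Id, WorkingSet | Export-Excel C:\temp\processes.xlsx -AutoSize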
Friday, May 6, 2016
Windows Administration and Trends in (PowerShell) Scripting
Here is a Run As Radio chat with Jeffrey Snover.
Jeffrey Snover is the person most responsible for PowerShell, which revolutionized my approach to database administration. He has a background in scripting going back to VMS (IIRC), which is how he came to create PowerShell. His Twitter bio lists "Microsoft Technical Fellow/ Lead Architect for Enterprise Cloud Group/ Azure Stack Architect/ PowerShell Architect / Science fan".
From my point of view, he directs strategy for Windows administration. I listen to him because the things he talks about are likely to influence my work life and they give insight into how the PowerShell team expects people to use their product.
The podcast covers a variety of things in a light way. The thing that grabbed my attention the most was that Snover seems to be saying that Windows will be implementing things similar to what Linux does with root and sudo. (My Linux experience is limited, but my two takeaways are: you never log in as root, and sudo controls what your 'day-to-day' login can do.) Imitation is the sincerest form of flattery, as they say.
Beyond that, it seems that I should be working towards two goals.
One of those goals should be to get my code into an open repository.
I have been using version control for many years, but I have always kept "my code" in a closed repository.
Initially, I used Subversion. At first, I ran my own server (on an old Sun workstation) out of my home office. I moved to a cloud-based solution after experiencing a number of power outages. (I just logged into my old Subversion repository for the first time in well over a year. It has over 3,000 check-ins.)
In the spring of 2014, Subversion was seeming old-fashioned, so I tried Git for a while. That was during an extremely slow period for changes to my code, so I never got very far into it. I documented the start of that period with this blog posting.
I've been using Microsoft's TFS Online for the last couple of years. This seemed a natural fit because I spend a lot of time in Visual Studio, it provides feature and bug tracking, and it is a cloud-based solution. Since then, Visual Studio has come to embrace Git.
TFS Online seemed like a pretty hot technology in 2014, but I feel like I've missed the boat with GitHub. The current trend seems to demand use of GitHub. The work required to move my code from TFS Online to GitHub is large. There are over 170 files of varying complexity, with a few being modules with many functions. I work very hard to keep client-specific things out of "my code", but I would need to vet everything again.
I did do a pass with ScriptCop through much of that code in 2015. I fixed most of the things that ScriptCop spotted. Other than that, much of this code hasn't been looked at in years.
I don't want to split my code between different repositories. I like being able to install one set of tools, and I don't want to get into situations where I'm looking for something that is in the other repository.
The other goal is to start testing operational environments the way developers test their code. In my case, I'd like to test my code like developers test their code. :-/
I fiddled around with PSUnit when that was the hot thing, but I never integrated it with my day-to-day work. The current hot technology for this is Pester, and I'd like to do more with it.
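To be fair to Pester, the tests themselves are not much code. A minimal test for one of my own functions might look something like this (a sketch using Pester 4-style syntax; Get-Uptime is a function of mine described in an older post, and I am assuming it has already been dot-sourced):
Describe "Get-Uptime" {
    It "returns an uptime object for the local computer" {
        $result = $env:COMPUTERNAME | Get-Uptime
        $result.TotalSeconds | Should -BeGreaterThan 0
    }
}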
Implementing Pester (or any testing framework) "for real" would be a lot of work. My earliest code was written in the PowerShell 1.0 and 2.0 era. Other than Transact-SQL, my coding at that time was mainly in DOS batch and Windows Scripting Host's version of VBScript. PowerShell was a new thing, and it was not obvious that it would be as successful as it has been. My PowerShell scripts were not written with testability in mind. The technical debt is enormous. Layers of code are based on that work. Testing nearly anything seems to require reworking layers of code. Changing that code breaks other code. Since there is no testing framework, those bugs aren't noticed until I happen to run the affected code.
In short, it looks like I can't do anything without doing everything, and there is no time budget for that. When I do work on the testability of recent code, the going seems slow and there isn't much impact; working on that code is not my "day job", and I can't keep the enthusiasm going.
Wednesday, May 9, 2012
Did You Read It? for April, 2012
This blog post mentions a few of the most interesting articles that I have read recently, or recently-ish, with a little bit of my commentary on each article.
What's the difference between a temp table and a table variable in SQL Server? This is the best answer I have seen.
Here is a short and sweet bit of PowerShell code that can make a person's life a little more enjoyable by providing some feedback on long-running processes.
This is from 2010, but I'd never seen it until now. I would temper it with Malcolm Gladwell's view that it takes 10,000 hours to become expert at something. (I am not sure if Mr. Gladwell originated that thought, but he has popularized it.) I'm not so sure that 'polyglot developers' will know everything there is to know about all of the languages on their resume, plus Active Directory administration, plus database clustering, plus SAN administration, plus VMWare, plus being up on the latest half-dozen NOSQL darlings, but maybe they don't need to be. Do we really need all of the hyper-specialization that we have bred into IT in the last 20 years? Perhaps we are only ensuring that specialists have a hard time pulling their narrowly-defined career out of the weeds when their chosen technology becomes obsolete or when that up-and-coming thing fizzles out. What if we just invest in smart people that are quick learners?
This caught my eye. Microsoft is doubling down on getting more businesses to put their data on Azure. Small businesses aren't usually the ones who are concerned with certifications. Medium and large businesses are.
Thursday, December 22, 2011
SSRS key backups and PowerShell
Whenever I do anything with an SSRS server, I always back up the key and park it somewhere.
Automating that would be handy, and this seems to fit the bill nearly 100%.
The only addition that I would like to see is something that uses PasswordSafe or KeePass to keep the password, rather than a table.
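(For reference, the manual version of the key backup is a one-liner with the rskeymgmt utility that ships with SSRS, run on the report server itself; the path and password here are placeholders.)
rskeymgmt.exe -e -f C:\backups\ssrs-key.snk -p "SomeStrongPassword"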
Wednesday, March 3, 2010
Using Powershell in a SQL Server 2008 job step
I have been using Powershell since I first got wind of the beta and realized that all of my Windows Scripting Host skills would become obsolete.
However, I've never had the opportunity to try out the Powershell job step type for SQL Agent jobs until today. I know that the Powershell that "comes with SQL Server" isn't the same as the regular Powershell but, other than some fiddling around with the SQL Server file system provider, I've never bumped up against those differences before. It's just different enough to be frustrating, so I'm blogging about it. It's a bit of a rant, but maybe it will save someone some time or someone can point out something that I've overlooked.
My goal is to have a simple script that I can use in a SQL Agent job that can automatically discover and back up any SSAS database that exists on the same computer as the SQL Agent. (This is an improvement over my old method, which used hard-coded XMLA queries to back up specific databases. This is OK if you aren't adding SSAS databases very often, and it will remain my default method on SQL Server 2005 servers because SQL Server 2005 doesn't support Powershell in job steps. I suppose that I might be able to code up something using VBScript and COM, but it's boring and I don't have much call for it.)
Hunting around the web, I found an interesting script authored by Idera. Idera's script isn't exactly what I wanted and it is designed to be called from a Powershell shell/console, so I learned how they did it by reading their script and writing my own custom script. It's not rocket science. In the process of getting my script working, I ran into a few quirks of how the Powershell job steps work, and I figured that it was worth a blog entry.
Issue #1:
The first interesting thing is that you can't just use write-host "Hello, World". This is because the SQL Agent doesn't implement a console. There are details on that here. For simple text output to the job step output log, use something like write-output "Hello, World" or write-output $host.
I like to use a liberal sprinkling of write-verbose in my scripts. It provides a good troubleshooting tool and, if I provide enough write-verbose statements in my script, I find that I rarely have to resort to write-debug or a real debugger with breakpoints when I have problems with a script.
The thing is that write-verbose doesn't work in a job step. In other words, this won't work:
$verbose = $true
write-verbose "this is verbose"
This won't work, either:
$verbose = $true
write-verbose "this is verbose" -verbose:$verbose
So, to sum up, write-host, write-verbose, write-debug and their ilk are out. Those commands want to write to a console, and there is no console when running under a job step.
Issue #2:
The next thing was that the assemblies that I wanted to use are not pre-loaded by the environment. That doesn't seem like such a bad thing.
A job step received an error at line 5 in a PowerShell script. The corresponding line is '[Microsoft.AnalysisServices.Server] $server = New-Object ([Microsoft.AnalysisServices.Server]) '. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Unable to find type [Microsoft.AnalysisServices.Server]: make sure that the assembly containing this type is loaded.
My solution for that was to call LoadWithPartialName().
Issue #3:
Inside of a job step, Powershell seems fussier about enumerating collections. For example, this doesn't work in a job step, but it does run in regular Powershell 2.0:
foreach ($d in $server.Databases) { Do-Stuff }
But this does work:
$databases = $server.Databases
foreach ($d in $databases) { Do-Stuff }
Issue #4:
There is no developer support for developing Powershell scripts inside of SSMS. Basically, you just get a simple edit control to type or paste your script into. You can't set break points or watch variables, you don't get IntelliSense or even syntax highlighting. There is no "Parse" to check your syntax. This matches what you get with the "ActiveX" scripting step.
The lack of debugging support led me to many iterations of what we old-timers used to call "debug by printf" or "debug by lprint", now resurrected as "debug by write-output", and cycles of edit-save-run-fail-look.at.output.log.
I could have developed the script in notepad++ and then copy-and-pasted it into the job step, but then the syntax quirks would pop up. It is a lot easier to fix one syntax quirk if you add one line at a time than if you try to add a whole 30-line script at one go, particularly if the syntax rules are not quite what you are used to.
Issue #5:
The "syntax error" messages can be misleading. The little editor window that SSMS provides does not show line numbers. If you copy-and-paste the code into an editor, it seems that the line numbers given for errors do not correspond to the location of problematic lines within the script.
Issue #6:
An innocent-looking thing like the following line causes a syntax error:
# $($causeAnError)
I think that the SQL Agent thinks that there is some sort of an embedded job variable in there, even though the line is commented out. If you remove the first $ and the parentheses, the syntax error disappears.
Issue #7:
It seems that it is harder to concatenate ad-hoc strings together in code. So, instead of this:
foreach ($d in $databases ) {
write-output "$($d.Name)"
}
I find myself doing things like this:
foreach ($d in $databases ) {
$msg = "{0}" -f $d.Name
write-output $msg
}
Issue #8:
I could not call Get-Date or [Datetime]::Now. I don't know why; they are two different things that would do the same thing: get me the current time, which is nice to have in auditing/logging output.
FWIW, my final script looks like the following. There are no national secrets here, but there are two caveats:
- Watch out for any word-wrap
- I'm still testing this, so I wouldn't rely on it for a production system without vetting and testing it first.
Here is the script:
$msg = "Starting "
write-output $msg
# The SSAS assembly isn't loaded for you
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null
# create an instance of the backup object
[Microsoft.AnalysisServices.BackupInfo] $serverBackup = New-Object ([Microsoft.AnalysisServices.BackupInfo])
# create an instance of the Server object and connect to the local, default SSAS instance
[Microsoft.AnalysisServices.Server] $server = New-Object ([Microsoft.AnalysisServices.Server])
$server.Connect(".")
# dump out some stuff to show that we are actually connected
write-output $server
# where do the backups go?
$BackupDestination = $server.ServerProperties.Item("BackupDir").Value
if (-not $backupDestination.EndsWith("\")) {
$backupDestination += "\"
}
# Set the standard attributes of our backups.
$serverBackup.AllowOverwrite = 1
$serverBackup.ApplyCompression = 1
$serverBackup.BackupRemotePartitions = 1
$databases = $server.Databases
$msg = "There are {0} databases to back up." -f $databases.Count
write-output $msg
foreach ($d in $databases) {
$serverBackup.file = "{0}{1}.abf" -f $backupDestination, $d.name
$msg = "Backing up '{0}' to '{1}'" -f $d.Name, $serverBackup.file
write-output $msg
$d.Backup($serverBackup)
}
$server.Disconnect()
$msg = "Ending "
write-output $msg
Thursday, December 18, 2008
Finding Partition Offsets Made Easier (with Powershell)
A while back, I read Linchi Shea's blog posting on finding partition offsets. I recently found some time to rework his C# file plus perl script solution into a single Powershell script.
The upsides to my solution are:
- It's Powershell-only, so there is nothing to compile and you don't need to install perl.
- Using Powershell provides some added bling, for no real cost. It's easy to construct a pipeline that will interrogate many servers or will provide sorting or filtering.
The major downside to my solution is:
- You must install Powershell. This shouldn't be a problem on Windows Vista or Windows Server 2008.
# Example of use:
# $serverlist | foreach { Get-PartitionInfo $_ } | ft -a
#
param (
[string] $computer = $(throw "ERROR: The 'computer' parameter is required.")
)
$scope = New-Object System.Management.ManagementScope
$scope.Path = "\\$($computer)\root\cimv2"
$scope.Connect()
$query = New-Object System.Management.ObjectQuery
$query.QueryString = "SELECT * FROM Win32_DiskPartition"
$searcher = New-Object System.Management.ManagementObjectSearcher($scope, $query)
$queryCollection = $searcher.Get()
# this is a quick-and-dirty way to build an object with members that I can reference
$report = "" | select-object "Computer","PhysicalDiskIndex",
"PartitionIndex","PartitionOffset","PartitionOffsetKB",
"LogicalDiskName","LogicalDiskVolumeName","LogicalDiskSize","LogicalDiskSizeGB"
# start setting values for the members of my reporting object.
# I'm casting everything to make sure I get data types that I want
$report.Computer = [string] $computer
$queryCollection | foreach {
$report.PhysicalDiskIndex = [int] $_.DiskIndex
$report.PartitionIndex = [int] $_.Index
$report.PartitionOffset = [int64] $_.StartingOffset
$report.PartitionOffsetKB = [int64] $_.StartingOffset /1kb
$_.GetRelated("Win32_LogicalDisk") | foreach {
$report.LogicalDiskName = [string] $_.Name
$report.LogicalDiskVolumeName = [string] $_.VolumeName
$report.LogicalDiskSize = [int64] $_.Size
$report.LogicalDiskSizeGB = [int] [math]::round($_.Size / 1gb, 0)
# emit the report for this iteration.
# This gets passed to the next item in the pipeline, if there is one.
$report
}
}
Tuesday, July 1, 2008
Powershell functions: whoami and Prompt
Two quick ones for this post, before the US 4th of July holiday. Both functions have one thing in common: use of environment variables. In order to see the environment variables that are available, just type:
dir env:
(This listing could be extensive. (dir env:).Count returns 51 on my laptop.)
Environment variables aren't as popular as they used to be, with settings located in registry entries, .ini files and .xml files. For a few things, it's still easier to find their values through an environment variable than to figure out where they hide in the registry or google for the proper .net calls. When I set up a new machine, I add a path to the environment that points to my local copy of my scripts. This can be very handy since I still have some .vbs and .wsf scripts lurking around.
whoami simply displays the name of the currently logged in user. It's very simple, but it is also a good, simple example of how to use environment variables in a script.
Here is the function:
function whoami {
$env:userdomain + "\" + $env:username
}
I often use $env:userdomain as a default for parameters of functions that look up information about domain users and groups. With most of my clients, the chances are that the user or group that I am interested in will be in my domain.
Prompt functions are another one of those things that everyone writes. Some are sparse, some are colorful and some display so much information that they can be confusing to anyone but the original author. Mine is fairly sparse:
function prompt {
$host.ui.RawUI.WindowTitle = "psh"
$promptText = "[" + $env:username + "@" + $env:COMPUTERNAME + " @" + $(get-date -uformat "%H:%M:%S")
write-host $promptText -NoNewLine
write-host " $(get-location)" -NoNewLine -ForeGroundColor Cyan
write-host "]"
}
I'm on the fence about whether the two-line format is a good idea or not. I include the time as a sort of poor man's stopwatch. It lets me get an idea of when a command stopped without actually timing it and without getting too fancy in the prompt code.
That's all for this posting.
Saturday, June 28, 2008
Powershell Function: Get-Uptime
For this posting, I present an uptime command. This is the sort of thing that most scripters wind up writing, I suppose. For me, the neat thing was how just a few lines of script produced a program that was more flexible and usable than an old, compiled EXE program that I had been carting around with me on a USB stick for years.
A little bit of this code is a refactoring of code posted at BSOnPosh, but that page seems to have been taken down during a site re-arrangement and my search didn't find anything. IIRC, I wanted an easier way to get uptime information for more than one server, so I refactored some of his code (the WMIDate conversion, which I wanted to be able to re-use for other things, is sort of fiddly) and added my own code to loop over the pipeline input, to produce the TextDescription and to make the select-object code more readable by using the $fields variable to hold all of the necessary script blocks.
Anyway, I present what I've got for your perusal. Personally, I like to see different approaches to the same problem as it often gives additional insight into what is possible. Those "Wow, that's much easier than what I've been doing" moments can be embarrassing, but they also mean that I'm learning and IT isn't an industry where you can afford to stop learning.
The code is here. As usual, it's pretty straightforward. (My most ambitious scripts are really too site-specific to be of use to anyone else.) There is a helper function, Convert-WMIDateToSystemDate, which does exactly what you'd think and there is the main function, Get-Uptime, which also does as you'd expect.
You could invoke the function like this:
("server1", "server2") | get-uptime | sort TotalSeconds | ft -a
The more interesting columns are the "TotalSeconds" column, which is more useful for sorting than actually being read and interpreted by humans, and the "TextDescription" column, which describes the uptime of a server in days, hours, minutes and seconds and is easily understood by the most P of PHBs.
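Since the original BSOnPosh posting is gone, here is a rough sketch of the core idea so this post still makes sense on its own. This is not my exact code (my version handles pipeline input and builds the TextDescription column), but it shows the interesting part: Win32_OperatingSystem returns LastBootUpTime as a DMTF-format string, which .NET can convert directly.
function Get-UptimeSketch {
    param ([string] $computer = ".")
    $os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computer
    # convert the DMTF-format WMI date into a normal DateTime
    $bootTime = [System.Management.ManagementDateTimeConverter]::ToDateTime($os.LastBootUpTime)
    $report = "" | Select-Object Computer, BootTime, TotalSeconds
    $report.Computer = $computer
    $report.BootTime = $bootTime
    $report.TotalSeconds = [int64] ((Get-Date) - $bootTime).TotalSeconds
    $report
}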
Tuesday, June 24, 2008
A (simplistic) implementation of a watch command
Today's posting covers a function which mimics the 'watch' command, which is popular on Unix and Linux systems. The watch command will repetitively run a command line and display the output. This is useful for watching the progress of a system writing a series of files or waiting for a server to come online. Just start a shell, run the command and park the shell so it is still visible, but out of the way. Even though my function does not do a particularly thorough job mimicking the Unix watch command, I still get a lot of mileage out of it.
This one is so simple that I am just going to paste it in here and hope that the blogging software doesn't drop the formatting. I'm still learning the quirks of blogger, so please bear with me. Here goes:
##############################################################################
# Purpose: More-or-less mimic the popular unix "watch" command
function Watch-Command {
param (
[string] $command,
[int] $interval = 2
)
do {
clear
invoke-expression $command
# Just to let the user know what is happening
"Waiting for $interval seconds. Press Ctrl-C to stop execution..."
sleep $interval
}
# loop FOREVER
while (1 -eq 1)
}
That's it. I normally alias this to watch.
Let's say that you were waiting for a particular file to show up in the temporary directory of your workstation. A simple solution might be:
watch "dir c:\temp\somefile.txt"
This isn't as cute as writing a function to ring the system bell, play a wave file or say "You've got a file.", but it is quick, functional and flexible, which are things we value highly here at the Ashdar Partners blog. It's also already written. I could write a custom-built function to do those things, but why bother when I already have watch?
One quirk with my version of watch is that the quoting of the -command parameter works differently than what you get on bash. On bash, you can often avoid enclosing the whole command in quotes. With my Powershell version, you should probably just go ahead and wrap the command in double quotes because any embedded spaces will require the use of quotes. Stopping a moment to consider the question "should I or shouldn't I add quotes" is more work than just conditioning yourself to do it. It's usually simple enough to wrap the command line in quotes. If you already have complicated quoting in place, or you want to do something that requires some more processing you can get the same effect by writing a little function.
As a simple example, let's say that I want to repetitively perform two tasks. Firstly, I want to output the time to the console using the following command:
date
and, secondly, I want to look for a particular file using the following command:
dir c:\sqlserver-list.txt
I can do this one of (at least) two ways. The first, and more obvious, way is to wrap everything in quotes and separate the two commands with a semicolon.
watch "date ; dir c:\sqlserver-list.txt"
The second, and ultimately more flexible, way is to build a little function with those commands via the command line. This also provides another viewpoint on how flexible Powershell really is. The function can be named anything; for this example I will use the ever-popular foo. Just type in the following in a shell window, hitting return where you see the line breaks here:
function foo {
date
dir c:\sqlserver-list.txt
}
After the closing "}", you will need to hit return twice. This will bring you back to an interactive command line. Then, to prove that we really have a new function, you can run the following two commands to see information about our new function:
dir function:foo
(dir function:foo).Definition
The first command will show you properties of the foo function object. The second command will show you the body ("definition" in Powershell-speak) of the foo function object. Lastly, you can use the function with watch:
watch foo
Finally, the -interval parameter controls how long the function waits before running the command again. To run our foo function once every 60 seconds, you could use it like this:
watch -command foo -interval 60
Two seconds is a pretty short duration, but it is the default on Linux systems and I saw no real reason to change it.
That's all for this time. Thanks for reading.
Thursday, June 19, 2008
Powershell Function: Get-LogicalDisk
In this post, I will discuss Get-LogicalDisk. Get-LogicalDisk is a function that retrieves disk information from remote servers via WMI. This allows for quick and flexible reporting on storage usage and availability.
This is a bit of a departure from the "SQL Server"-oriented functionality that I have been describing in recent posts. In some shops, free disk space or, more likely, the lack of it is a daily battle. Every morning, I run a script that checks for many different kinds of "negative events" that might have happened overnight. A "negative event" might be a failed job, database growth or a server running low on disk space. The latter check calls Get-SQLServerList and Get-LogicalDisk to do all of the heavy work.
The script is straightforward, so I'll just continue with my tradition of a brief overview with some examples. (This is a blog, after all, and not really suited to voluminous detail. FWIW, it is my opinion that detail on what a confusing part of a script does should be in the comments in the script, not in a blog posting. If you find a part of any script confusing, let me know and I'll update the comments in the script and possibly do a blog posting to explain how it works.)
The function takes a list of computers as pipeline input, so feeding a list of SQL Servers using Get-SQLServerList (which I normally alias as "ssl") or any other method is easy. Get-LogicalDisk uses WMI to retrieve the following information from each "fixed disk" on the remote system:
- Computer
- Letter
- Label
- Space, which gives the total size of the disk
- FreeSpace, which gives the size of the free space on the drive
- UsedSpace, which gives the size of the space which is being used on the drive
- Free-%
- Used-%
- Unit
I would like to point out that one could calculate UsedSpace by subtracting FreeSpace from Space, and that one could calculate Free-% given Used-%, yet they are returned by Get-LogicalDisk. These calculations are done by Get-LogicalDisk so that a calling program (or something that you've hacked together on a command line) does not have to do them. This makes for a more flexible function and for less typing.
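The heart of the function is not complicated. A stripped-down sketch, minus the percentage and unit handling, might look like this (not my production code):
function Get-LogicalDiskSketch {
    param ([string] $computer = ".")
    # DriveType 3 means "local fixed disk"
    Get-WmiObject -Class Win32_LogicalDisk -ComputerName $computer -Filter "DriveType = 3" | foreach {
        $report = "" | Select-Object Computer, Letter, Label, Space, FreeSpace, UsedSpace
        $report.Computer = $computer
        $report.Letter = $_.DeviceID
        $report.Label = $_.VolumeName
        $report.Space = [math]::Round($_.Size / 1gb, 2)
        $report.FreeSpace = [math]::Round($_.FreeSpace / 1gb, 2)
        $report.UsedSpace = [math]::Round(($_.Size - $_.FreeSpace) / 1gb, 2)
        $report
    }
}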
Here is a simple example, including some output, which finds information on the local computer:
PS> get-logicaldisk . | ft -a
Computer Letter Label Space FreeSpace UsedSpace Free-% Used-% Unit
-------- ------ ----- ----- --------- --------- ------ ------ ----
MYLAPTOP C: local.boot 33.17 7.94 25.23 24 76 GB
If you are using a naming scheme (or some other method) to discriminate between development, quality assurance and production servers, the following query might provide a list of disks where you might be able to store that 'temporary' backup:
PS> ssl de | Get-LogicalDisk | where {$_.Letter -match "c"} | sort FreeSpace | ft -a
Here is a command that looks for disks on production servers with less than 5% free space:
PS> ssl pr | Get-LogicalDisk | where {$_."Free-%" -lt 5} | ft -a
I hope that you can see the power of the Get-LogicalDisk function, which can easily be leveraged by additional simple Powershell syntax to provide complicated, real-time reports on disk space issues.
Tuesday, June 17, 2008
In my last post, I introduced Get-SQLServerList. In this post, I am going to describe some more ways to leverage the Get-SQLServerList and Invoke-SQLQuery functions.
Basically, I am suggesting that you follow this formula:
Step 1. Put your server names in a text file called "sqlserver-list.txt" and put the file in the root of your C drive.
Step 2. dot-source the functions that I've given you.
Step 3. Profit.
(I'd like to remind you that I normally alias Get-SQLServerList as "ssl" and Invoke-SQLQuery as "iq". This is to save on typing. It's also worth pointing out that I normally use the "ft" alias instead of typing format-table. On with the show...)
Let's say that your boss wants to know which 64 bit servers aren't at SP2. One way to do this would be to spend a lot of time manually connecting to each system, running a query and putting the results into a spreadsheet. Another way to do this is to go to the shared Word or Excel file that is supposed to be updated (but never is) with this information, spend time looking through it and cutting-and-pasting what you have found into an email. Another way to do this would be to purchase, install and maintain some sort of enterprise management product that never seems to have quite the report that you want.
Here's what I do:
PS>$result = ssl | iq | where {$_.Edition -match '64' -and !($_.ProductLevel -eq 'SP2')}
That will take a few seconds to run. What this does is:
- Uses Get-SQLServerList to get a list of servers to look at.
- Runs Invoke-SQLQuery, using the default command, to retrieve the relevant information from each server. The results are packed into a System.Array and handed off to the next link in the pipeline.
- Tests the results in the System.Array to find servers that have '64' in the Edition string, signifying a 64 bit edition of SQL Server, and are not at SP2.
- Stores the resulting array, which contains only the information we want, in the $result variable.
- Note that I do not specify ft here. If I did, $result would contain formatted data, which means that I would not be able to apply a sort or additional where clauses to $result.
Now, I want to see what I have collected. So, I run this:
PS> $result | ft -a
I am specifying ft now, because I want nicely-formatted output. You can specify Format-List (fl) if you prefer.
If I don't have the results that I want, I can go back and change the original command, run it again and look at the contents of $result again. Or maybe I want to see the small servers first:
PS> $result | sort TotalServerMemoryMB | ft -a
Too many columns? OK, let's cut it down to the more interesting ones:
PS>$result = ssl | iq | where {$_.Edition -match '64' -and !($_.ProductLevel -eq 'SP2')} | select Computer,ProductVersion,Build,ProductLevel,Edition,totalServerMemoryMB
PS>$result | ft -a
Once I have what I want, then:
PS> $result | out-clipboard
Now, just paste the clipboard contents into an email, and it's done. Slightly fancier reporting can be done by putting the results into an Excel file, providing column headings, bolding them and setting the column widths. You could also probably save as an xml file, come up with a style sheet to apply to the data and send the resulting report off to whomever wants it. In my world, "fast" is almost always more important than "pretty".
Once again, I hope that you'll see that you are really only limited by your imagination. Well, that and your skill at constructing pipelines. It can be daunting at first, but the idea of using a Transact SQL where clause should be familiar to you, it's just that Powershell's where clause syntax is different.
Thursday, June 12, 2008
Powershell Function: Get-SQLServerList
In my last post, I described Invoke-SQLQuery. In this post, I describe Get-SQLServerList, which retrieves a list of SQL Servers. Get-SQLServerList leverages Invoke-SQLQuery and provides an easy way to provide server names to a pipeline.
This time, the zip file for Invoke-SQLQuery contains two functions. The first function is Query-ServerCatalog. This function builds a simple SELECT statement and then uses Invoke-SQLQuery to run it. This is useful as an example of the sort of thing that you can do with Invoke-SQLQuery. Query-ServerCatalog will return a result set full of server names, assuming that you have a particular table in a particular database on a particular server that holds these names. Setting that up and keeping it current is a bit of work and probably is not worth doing unless you work in a large environment with a group of DBAs.
The more interesting function is Get-SQLServerList. Some of the highlights of Get-SQLServerList are:
- It reads a text file to find the names of servers and matches them against the $computer parameter.
- There is simplistic support for 'comments' in this text file. A '#' at the start of a line indicates that the remainder of the line should be ignored. This allows commentary information to be placed in the text file and provides a simple way to temporarily ignore a particular server.
- If the text file isn't found, Get-SQLServerList tries to grab a list of servers from a table using Query-SQLServerCatalog. I don't use this function now. Instead, I rely entirely on text files to store the server names. Text files are very easy to manage and change. I am leaving the feature alone because it's not broken, as far as I know, and it might be useful to me or someone else in the future.
- If a value is specified for -ProductVersion, Get-SQLServerList actually connects to all of the servers that match the given value of $computer and then looks for matching ProductVersion information. Even though it connects to all of the servers it still runs in just a few seconds.
With careful naming of servers, I can quickly select subsets of the entire server population. For our purposes, a seven character long name might consist of the following 'fields', along with some made-up examples:
Three characters indicating location of server: PHL (Philadelphia), NYC (New York City), SEA (Seattle)
Two characters indicating environment: DE (Development), QA (Quality Assurance), PR (Production)
Two characters indicating a 'server number': 01, 02, ..., 10, ..., 20, ..., 99
Using this scheme, the contents of the text file might look like this:
PHLQA05
PHLPR06
PHLPR88
NYCDE11
NYCPR11
NYCPR44
SEADE04
SEADE05
SEAPR33
SEAQA33
# end of file
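A bare-bones sketch of the text-file handling might look like the following (the real function adds the -ProductVersion logic and the table-based fallback):
function Get-SQLServerListSketch {
    param ([string] $computer = ".*")
    # read the list, drop blank lines and '#' comments, and keep names matching the pattern
    Get-Content "c:\sqlserver-list.txt" |
        where { $_.Trim() -ne "" -and $_ -notmatch "^#" } |
        where { $_ -match $computer }
}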
I normally alias Get-SQLServerList to just "ssl", to cut down on typing. Here are some quick examples, using my made-up server names, followed with some made-up output.
# Show me all of the servers
PS>ssl
PHLQA05
PHLPR06
PHLPR88
NYCDE11
NYCPR11
NYCPR44
SEADE04
SEADE05
SEAPR33
SEAQA33
# Show me all of the QA servers. Note that the function does not use capitalization-sensitive comparisons:
PS>ssl qa
PHLQA05
SEAQA33
# Show me all of the Philly servers:
PS>ssl phl
PHLQA05
PHLPR06
PHLPR88
# Show me a particular server:
PS>ssl phlpr88
PHLPR88
And so on. By providing a -productversion value, you can run more intricate queries:
# Show me all of the sql2005 servers:
PS>ssl -ProductVersion 2005
PHLQA05
PHLPR06
PHLPR88
NYCDE11
NYCPR11
# Show me all of the sql2005 servers in Philly
PS>ssl phl 2005
PHLQA05
PHLPR06
PHLPR88
# Show me all of the servers with build 3054
PS>ssl -ProductVersion 3054
PHLPR06
So, we can easily find servers that are interesting to us. Note that both the -Computer and -ProductVersion parameters use Powershell's -match semantics. This means that you can create fairly complicated searches to answer questions like "show me the Philly servers with even-numbered server numbers".
That's all for now. In my next post, I'll show you some more of the commands that leverage Get-SQLServerList and Invoke-SQLQuery.
Tuesday, June 10, 2008
Powershell Function: Invoke-SQLQuery and Ping-Computer
In this posting, I am going to (briefly) discuss Invoke-SQLQuery, which will run a SQL Server query against a computer or a group of computers and then return the results as a Powershell object. This object can be fed to another element in a pipeline, where it can be sliced, diced and stored using Powershell's intrinsic capabilities.
About a year ago, I was looking for a way to return a result set to Powershell. It seemed that I was forever logging into a SQL Server instance, running one query, then logging out and repeating the whole thing on a different instance.
One day, I read a posting on Powershell and SMO by Muthusamy Anantha Kumar. I noticed some ADO.NET code towards the bottom of the page and I saw the proverbial light: Powershell is to .Net objects what VBScript is to COM objects. Armed with that, I started working on a function that ultimately evolved into Invoke-SQLQuery.
I'd like to point out the following facets of Invoke-SQLQuery:
To call a SQL Server system stored procedure on a remote computer and show only some of the returned columns:
Invoke-SQLQuery -computer "server_a" -commandtext "sp_helpdb" | ft -auto Name,Status
Run pretty much any query that you'd like on a single computer:
$cmdText = "select @@servername 'Computer', count(*) 'ConnectionCount' from master.dbo.sysprocesses"
$result = Invoke-SQLQuery -computer:"server_a" -commandtext:$cmdText
$result | ft -auto
Run the same query, but this time run it against two computers. The names of the computers are placed in an array. That array is then fed to a pipeline. It would be trivial to add additional computers to the array, which will cause the command to be run on those computers as well. It is also easy to load an array of computer names from a simple text file.
$computers = ("server_a", "server_b")
$result = $computers | Invoke-SQLQuery -commandtext:$cmdText
$result | ft -auto Computer,ConnectionCount
The output of that last command might look like this:
Computer ConnectionCount
-------- ---------------
SERVER_A 35
SERVER_B 82
Note that the contents of $result are a real Powershell object. That means that you can go and run something like this:
$result | sort ConnectionCount -desc | ft -auto Computer,ConnectionCount
Computer ConnectionCount
-------- ---------------
SERVER_B 82
SERVER_A 35
You can add additional pipeline items as you wish. For example, instead of (or in addition to) applying a sort, you could apply a where clause. This would be silly in my two-server example, but imagine that you have a hundred servers and you would like to know the top five servers, by number of connections. After running the queries and placeing the results in $result, you could run something like this:
$result | sort ConnectionCount -desc | select -first 5
You could stick your results in an Excel file:
$result | sort ConnectionCount -desc | select -first 5 | Out-Excel
You are only limited by your imagination.
About a year ago, I was looking for a way to return a result set to Powershell. It seemed that I was forever logging into a SQL Server instance, running one query, then logging out and repeating the whole thing on a different instance.
One day, I read a posting on Powershell and SMO by Muthusamy Anantha Kumar. I noticed some ado.net code towards the bottom of the page and I saw the proverbial light: Powershell is to .Net objects what VBScript is to COM objects. Armed with that, I started working on a function that ultimately evolved into Invoke-SQLQuery.
I'd like to point out the following facets of Invoke-SQLQuery:
- This function directly accepts computer names from pipeline input. This makes it trivial to run a query on a large number of SQL Servers.
- If you prefer, you can use the -computer parameter to specify one computer at a time. Often, this syntax is more terse than specifying a whole pipeline.
- The command provided with -commandText is executed on each computer, in turn.
- The output from each execution is aggregated into a single "mega result set".
- It is usually important to be able to determine which rows came from which computer because there is only one result set. As a matter of habit, I stick a @@SERVERNAME at the start of the SELECT clause of my query.
- The default query provides a quick overview of the server and includes version information, memory information, instance start time, etc. I use this as a connectivity test or to remind myself of what software is installed on a particular computer.
- The query that is sent to the server can be quite complex. The query can be multiline, but not multibatch. The query can create and use temporary tables. If you can run your query in SSMS, as long as you are not using the "GO" keyword, you should be able to run your query with Invoke-SQLQuery.
- The function can read a query from a file passed in with -commandFile. I have added this functionality only in the last few weeks.
- Invoke-SQLQuery uses Ping-Computer to ensure that the remote computer is at least booted before trying to connect to a SQL Server instance. This cuts down on the time it takes to fail when a remote computer is down or unreachable for whatever reason. Ping-Computer is a very close cousin of Brandon Shell's Ping-Server, having only a few simple changes to match my scripting style. I'm presenting it here for your convenience, but you should definitely check out Brandon's blog. I have learned much from it.
- Invoke-SQLQuery runs the given query on each computer, serially, one execution at a time. Clearly this will not scale to hundreds of servers, but I've been happily running this at a site with about 20 servers dozens of times a day for about a year now.
- This does not support multiple returned result sets. In other words, you can't issue two selects in your query and get two different result sets in a collection of some sort. Again, this has not hindered me.
- UPDATE, INSERT and DELETE are not supported. I have other scripts that can handle other sorts of DML.
- When loading queries from a file, you will get runtime errors if the file contains the "GO" keyword. I have not built in a fail-safe/sanity check for this yet. (A stripped-down sketch of the function follows this list.)
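To make the discussion more concrete, here is a minimal sketch of the idea. It is not the original function: it is written in today's advanced-function style rather than the Powershell V1.0 style of the original, the default query is a placeholder, and Test-Connection stands in for Ping-Computer.
function Invoke-SQLQuery {
    param (
        [Parameter(ValueFromPipeline = $true)]
        [string] $Computer,
        [string] $CommandText = "select @@servername 'Computer', serverproperty('ProductVersion') 'ProductVersion'"
    )
    process {
        # Skip computers that do not even answer a ping; cheaper than waiting for a SQL timeout.
        if (-not (Test-Connection -ComputerName $Computer -Count 1 -Quiet)) {
            Write-Warning "$Computer did not respond to ping; skipping."
            return
        }
        $connection = New-Object System.Data.SqlClient.SqlConnection "Server=$Computer;Database=master;Integrated Security=SSPI"
        $adapter    = New-Object System.Data.SqlClient.SqlDataAdapter $CommandText, $connection
        $table      = New-Object System.Data.DataTable
        [void] $adapter.Fill($table)    # opens the connection, runs the query, loads the rows
        $connection.Dispose()
        $table.Rows                     # each DataRow becomes one object in the pipeline
    }
}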
To call a SQL Server system stored procedure on a remote computer and show only some of the returned columns:
Invoke-SQLQuery -computer "server_a" -commandtext "sp_helpdb" | ft -auto Name,Status
Run pretty much any query that you'd like on a single computer:
$cmdText = "select @@servername 'Computer', count(*) 'ConnectionCount' from master.dbo.sysprocesses"
$result = Invoke-SQLQuery -computer:"server_a" -commandtext:$cmdText
$result | ft -auto
Run the same query, but this time against two computers. The names of the computers are placed in an array, which is then fed to a pipeline. It would be trivial to add more computers to the array, which would cause the command to be run on those computers as well. It is also easy to load an array of computer names from a simple text file.
$computers = ("server_a", "server_b")
$result = $computers | Invoke-SQLQuery -commandtext:$cmdText
$result | ft -auto Computer,ConnectionCount
The output of that last command might look like this:
Computer ConnectionCount
-------- ---------------
SERVER_A 35
SERVER_B 82
Note that the contents of $result are a real Powershell object. That means that you can go and run something like this:
$result | sort ConnectionCount -desc | ft -auto Computer,ConnectionCount
Computer ConnectionCount
-------- ---------------
SERVER_B 82
SERVER_A 35
You can add additional pipeline items as you wish. For example, instead of (or in addition to) applying a sort, you could apply a where clause. This would be silly in my two-server example, but imagine that you have a hundred servers and you would like to know the top five servers, by number of connections. After running the queries and placing the results in $result, you could run something like this:
$result | sort ConnectionCount -desc | select -first 5
You could stick your results in an Excel file:
$result | sort ConnectionCount -desc | select -first 5 | Out-Excel
You are only limited by your imagination.
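A note about Out-Excel, since it keeps showing up at the end of these pipelines: it is one of my own helper functions (it appears in the profile listing later on this page). I'm not reproducing the real one here; a rough, hypothetical stand-in that gets a similar effect looks like this:
function Out-Excel {
    # Hypothetical stand-in: dump the pipeline to a CSV file in %TEMP% and hand the
    # file to whatever application owns the .csv extension (normally Excel).
    $file = Join-Path $env:TEMP ("OutExcel_{0:yyyyMMddHHmmss}.csv" -f (Get-Date))
    $input | Export-Csv -Path $file -NoTypeInformation
    Invoke-Item $file
}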
Tuesday, June 3, 2008
Powershell Function: Change-VolumeLabel
This posting continues my series of articles describing my Powershell environment. This time, I describe a function that I use to change volume labels on remote machines. A file with the function is available.
I wrote this function because I needed a way to change labels associated with volumes on remote machines. Earlier versions of this function were actually written in VBScript. I rewrote that old code in Powershell because I wanted more consistency with the "standard" Powershell command line switches and I am (slowly) moving away from all of my old VBScript programs.
Before I had a script, I would use RDP to connect to the remote machine, login and then use Windows Explorer to change the volume labels. This was a lot of effort for a small change. I use the volume label at one of my clients as a quick way to discern between SAN and non-SAN volumes by prepending "san" or "local" on the volume labels. When this convention was adopted, there were a fair number of old machines to go through. I also use this script when setting up new machines, so this function has gotten a workout.
This function is pretty straightforward. The function has the following key features:
- It leverages WMI to do the dirty work.
- It supports several of the "standard" Powershell command switches, namely "-verbose", "-confirm" and "-whatif". It does this by leveraging Jeffrey Snover's "Should-Process" function. At a minimum, I try to support these three switches in all of my scripts.
- If you don't specify a drive letter, the script prints a short report showing the drives that exist. This supports the notion that an administrator should be able to "explore" or "discover" an environment with Powershell and make changes interactively. It sure beats remembering things.
- There is one caveat with this script. It does not work with versions of Windows that pre-date Windows Server 2003; it seems that WMI did not provide the Win32_Volume class prior to Windows Server 2003. If someone tries this function on Windows Vista, I'd love to hear if it works or not. (A modern-style sketch of the function follows this list.)
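For readers who want to see the general shape, here is a minimal sketch of the idea in today's advanced-function style. It is not the file mentioned above: the parameter names are assumptions, and [CmdletBinding(SupportsShouldProcess = $true)] stands in for the Should-Process helper that the original uses.
function Change-VolumeLabel {
    [CmdletBinding(SupportsShouldProcess = $true)]
    param (
        [string] $Computer = '.',      # '.' means the local machine
        [string] $DriveLetter,         # pass the letter with the colon, e.g. 'E:'
        [string] $Label
    )
    $volumes = Get-WmiObject -Class Win32_Volume -ComputerName $Computer |
        Where-Object { $_.DriveLetter }                 # ignore unmounted volumes
    if (-not $DriveLetter) {
        # No drive letter given: just report what is there, so the caller can explore.
        $volumes | Format-Table -AutoSize DriveLetter, Label, @{n='GB'; e={[int]($_.Capacity / 1GB)}}
        return
    }
    $volume = $volumes | Where-Object { $_.DriveLetter -eq $DriveLetter }
    if ($PSCmdlet.ShouldProcess("$Computer $DriveLetter", "set label to '$Label'")) {
        $volume.Label = $Label
        [void] $volume.Put()                            # write the change back through WMI
    }
}
Because of SupportsShouldProcess, -WhatIf and -Confirm come along for free, and -Verbose is one of the common parameters.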
Friday, May 30, 2008
Parsing SQLIO output with Powershell
Anyone who has used sqlio knows that its output files are not particularly readable. I used to manually run sqlio tests, then retype the output into a spreadsheet so that I could more easily analyze the data. This proved time-consuming, error-prone and boring, so I wrote a script that parses all of the data that I wanted from the sqlio output file. This post provides that code, and some background on how I use it. I will return to my series on my Powershell set up next time.
The first thing to do is to run your tests with sqlio and get its results into files. This is left as an exercise to the reader. I have a .cmd file that I use to do this. I use a .cmd file because most of the machines that I test do not have Powershell installed on them. I am including this file in the SQLIO.scripts.zip file.
The reason that I break the running of the tests and the parsing of the output into two separate processes is to decouple them. It can take quite a while for sqlio to run, especially if you are running tests at multiple block sizes or varying between random and sequential or read and write. The idea is to get all the data first; then I can parse at my leisure. If something goes wrong with the parsing, I don't have to rerun my entire test. This also allows me to tweak the parsing script as I go along, without having to re-run the data collection phase.
Running the tests is tedious. If all you want to do is try the parsing script, I have included some sample output that you can use in the sqlio.results.zip file.
The second thing to do is to run the script that parses a raw output file and stores the parsed data in a Powershell object.
That command might look something like this:
$result = ls "*.txt" | %{Parse-SQLIOOutput -file $_}
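The downloadable files contain the full parsing script. For readers who just want the gist, a stripped-down sketch of Parse-SQLIOOutput might look like the one below. The Drive, RandomOrSequential, ReadOrWrite, BlockSize and QueueDepth property names match the ones used in the sort expressions later in this post; the remaining names and the regular expressions are my assumptions about sqlio's output format, so expect to adjust them for your version of sqlio.
function Parse-SQLIOOutput {
    param ([string] $File)
    $text   = Get-Content $File | Out-String
    $result = New-Object PSObject
    if ($text -match 'threads? (reading|writing) for \d+ secs (?:to|from) file ([A-Za-z]):') {
        $result | Add-Member NoteProperty ReadOrWrite $matches[1]
        $result | Add-Member NoteProperty Drive       $matches[2]
    }
    if ($text -match 'using (\d+)KB (random|sequential)') {
        $result | Add-Member NoteProperty BlockSize          ([int]$matches[1])
        $result | Add-Member NoteProperty RandomOrSequential $matches[2]
    }
    if ($text -match 'with (\d+) outstanding')     { $result | Add-Member NoteProperty QueueDepth   ([int]$matches[1]) }
    if ($text -match 'IOs/sec:\s+([\d.]+)')        { $result | Add-Member NoteProperty IOsPerSec    ([double]$matches[1]) }
    if ($text -match 'MBs/sec:\s+([\d.]+)')        { $result | Add-Member NoteProperty MBsPerSec    ([double]$matches[1]) }
    if ($text -match 'Avg_Latency\(ms\):\s+(\d+)') { $result | Add-Member NoteProperty AvgLatencyMs ([int]$matches[1]) }
    $result
}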
The last thing that we need to do is to analyze the data. Storing the data in a Powershell object allows us to use all of Powershell's features for producing reports, or to funnel the data into another tool for analysis.
For example, we can use the where keyword to restrict the output or we could store the data in an Excel file or a SQL Server table. A command to look at the data on a command line might look something like this:
$result | sort Drive,RandomOrSequential,ReadOrWrite,BlockSize,QueueDepth | format-list *
Or we could just sort it and send it to Microsoft Excel:
$result | sort Drive,RandomOrSequential,ReadOrWrite,BlockSize,QueueDepth | Out-Excel
That's all there is to my script. I hope that you download the files and give it a try.
It is worth mentioning that Linchi Shea has a perl script that does more-or-less the same thing as my script. I haven't tried it myself, but Linchi does good stuff and you should definitely check it out if you are into perl.
Wednesday, May 28, 2008
My Powershell set up, Part 2 of 2
Last time, I explained that my profile consists of several files, with various functions (and associated aliases) broken out by functionality.
Each of these "subprofiles" is loaded when my standard Powershell profile runs.
. "$env:winbin\psh\profile.Security.ps1"
. "$env:winbin\psh\profile.RemoteService.ps1"
. "$env:winbin\psh\profile.SQLServer.ps1"
. "$env:winbin\psh\profile.dstrait.ps1"
. "$env:winbin\psh\profile.$($env:userdomain).ps1"
The last line will load a client-specific file, depending on the credentials that I am currently using.
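One refinement I could add here: guard the client-specific line with Test-Path, so that logging in under a domain that has no matching file doesn't produce an error at start-up. A sketch, using the same path pattern:
$clientProfile = "$env:winbin\psh\profile.$($env:userdomain).ps1"
if (Test-Path $clientProfile) { . $clientProfile }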
Some of the functions in these files are mine and some of them were borrowed from various web sites. When I borrow a function, I add a small comment header that describes where the function came from. This allows me to go back and look for more interesting scripts from that author, to look for improved versions, and to give credit where credit is due. Some of these functions are heavily modified versions of the originals; some have been modified so much that they are probably unrecognizable.
Here is a listing of functions, grouped by Filename. For my next posting in this series, I will pick one of these functions and explain what I use it for and how it works.
profile.dstrait.ps1
-------------------
Change-VolumeLabel
Convert-WMIDateToSystemDate
Find-File
Get-DiskUsage
Get-Guihelp
Get-Hardware
Get-LiveHelp
Get-LogicalDisk
Get-Range
Get-RecyclerSize
Get-RemoteDrives
Get-Uptime
Manage-Computer
Out-Excel
Ping-Computer
prompt
Report-ODBCDriver
Report-ODBCSystemDSN
Should-Process
tree
Watch-Command
whoami
profile.Legion.ps1
------------------
Archive-SQLAgentJob
Archive-SQLDatabase
Archive-SQLDTSPackage
Archive-SQLStoredProcedure
Copy-SQLDTSPackage
Document-Ticket
Export-WinbinToLegion.line
Fix-SQLAgentJobStep
Fix-SQLDatabaseOwner
Get-DomainSQLServerList.line
Get-Ticket
Report-LatestBackups
Report-SQLAgentJobsWithNonDefaultOwners
Report-SQLServerRolesWithUnusualMembers
Report-SQLServerSpaceUsage
ScriptOut-SQLAgentJob
profile.RemoteService.ps1
-------------------------
List-RemoteService
Pause-RemoteService
Start-RemoteService
Stop-RemoteService
profile.Security.ps1
--------------------
Add-ComputerGroupMember
Check-IsDomainComputer
Check-IsDomainGroup
Check-IsDomainUser
List-ComputerGroupMembers
List-DomainComputers
List-DomainGroupMembers
List-DomainGroupsForUser
Remove-ComputerGroupMember
Search-ActiveDirectory
profile.SQLServer.ps1
---------------------
Execute-SQLAgentJob
Execute-SQLDBCCUpdateUsage
Execute-SQLQuery
Get-SQLServerList
Get-SQLStatus
Get-SQLVirtualNodes
Invoke-SQLQuery
Out-SQLDatabase
Query-ServerCatalog
Report-SQLAgentJob
Report-SQLAgentJobHistory
Report-SQLDatabase
Report-SQLDatabaseObject
Report-SQLDatabaseTable
Report-SQLDTSPackage
Report-SQLLinkedServer
Report-SQLOfflineDatabase
Report-SQLServerMemory.line
Report-SQLServerSearch
Wednesday, May 21, 2008
My Powershell set up, Part 1 of 2
This posting gives an overview how I have set up Powershell on my workstation.
Any heavy shell user normally customizes his (or her) environment. These environments are (and will likely always be) a work in progress. As you learn things and write additional functions to help you with your tasks, your environment will grow and change. I find that I am tweaking things even as I write this posting.
Aside: One of the things that I am forever tweaking is function names. I try to follow the Powershell verb-noun convention, but there are lots of spots where I have had to make judgment calls. It's not hard to find inconsistencies. For example, I vacillate between "Report", "List" and "Get". I'm not crazy about Get. My issue with Get is that it seems that Get should return an object and be used like this:
$result = Get-Thingy
$result.MemberFunction $param
$result = $null
whereas I run a lot of stuff that just returns information.
A 'Report' is usually more complicated than a simple 'List', which is often a single column.
What's better: "Invoke-SQLQuery" or "Get-SQLResults"?
Anyway, let's get back on track.
Firstly, I am still running Powershell V1.0. I haven't spent any time with the Powershell 2.0 CTP, though I have read a few introductory articles on new features. I have had issues with other Microsoft CTPs in the past and I'm loath to install something on a 'production' computer that I might have a hard time removing. I am also dismayed with the decision to not support remote execution under Windows XP or Windows Server 2003. None of my clients have deployed Vista on more than a handful of workstations, and none of them have even started looking at Windows Server 2008. They will not be doing so just to run Powershell.
Secondly, I use Powertab. Powertab provides pop-up help for functions and files, just by hitting the tab key.
Thirdly, I use the Powershell Community Extensions (PSCX). PSCX provides lots of small, handy functions.
Lastly, my profile consists of several files, with various functions (and associated aliases) broken out by functionality. I have two reasons for breaking them out. The first reason is that my original profile was getting very large. My main profile is over 79 KB right now. The second reason is that I try to break out stuff that is specific to a client and not useful anywhere else.
Next time, I will go into some detail on how my profile works.