BizTalk Core Databases

BizTalkMgmtDb Most entities from BizTalk Admin Console are stored in this database (applications, orchestrations, assemblies, send ports, receive ports and locations, etc.). Thus BizTalk Admin Console is basically an update program for this database. You can also use WMI to update it.
BizTalkMsgBoxDb This is the MessageBox, containing information about the messages, instances and subscriptions processed by BizTalk. It can be large and has a lot of heavy processing. A large site can have more than one MessageBox (see “Scaling Out”).
BizTalkDTADb Contains tracking information about all the processed messages and instances.
BizTalkRuleEngineDb The database for the Business Rules Engine, updated by the GUI “Business Rule Composer” program. Use the “Business Rules Engine Deployment Wizard” to migrate business rules from one environment to another.
SSODB The Single Sign-On database. BizTalk stores data related to send ports and receive ports here. Certainly, any password that needs to be encrypted goes here (for example, an FTP password). BizTalk will not function at all if this database is not available or the corresponding service (“Enterprise Single Sign-On Services”) is not running. This service must come up before BizTalk.

The BizTalk install creates a SQL Agent backup job, but it is disabled by default, and you have to configure it. See this article on Backing Up and Restoring BizTalk Server Databases.
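Once the job's backup-path parameters are configured, enabling it can also be scripted. A hedged sketch: the job name below is the default from a BizTalk install, but verify it in your environment first.

```sql
USE msdb;
-- Enable the default BizTalk backup job (configure its step parameters first!)
EXEC sp_update_job
    @job_name = 'Backup BizTalk Server (BizTalkMgmtDb)',
    @enabled  = 1;
```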

BizTalk BAM (Business Activity Monitoring) Databases

Obviously, these are used only if you have implemented BAM. I’d love to know what percentage of BizTalk shops use BAM; most places I’ve worked don’t use it. See BAM Quick Start.

BAMStarSchema Contains the staging table, plus the measure and dimension tables, which are originally defined in an Excel spreadsheet and later deployed to this database.
BAMPrimaryImport Contains raw tracking data. For example, when you capture data from an orchestration, it is originally stored here before being processed further. If you need to write custom queries against this database, be sure to use the views, not the underlying tables.
BAMArchive Archive of old business activity data. This keeps the BAM Primary Import database smaller by migrating older data here.
BAMAnalysis Contains the OLAP (three dimensional) data cubes. Learn more about OLAP here.
BAMAlertsApplication Contains alert information for BAM notifications. The web application, called the “BAM portal”, allows users to specify conditions and events on which they want notifications and alerts to occur.
BAMAlertsNSMain Contains instance information specifying how the notification services connect to the system that BAM is monitoring.
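As an illustration of querying BAMPrimaryImport through views rather than the underlying tables, here is a sketch. The activity name “PurchaseOrder” is hypothetical; BAM generates per-activity views, typically named bam_&lt;ActivityName&gt;_AllInstances.

```sql
USE BAMPrimaryImport;
-- Query the generated view, not the underlying active/completed tables
SELECT TOP 100 *
FROM dbo.[bam_PurchaseOrder_AllInstances];
```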

Are you a GACer? Or do you redeploy every time? Do you like to GAC in BizTalk?

I suppose it depends a lot on the place you work: the attitudes, the procedures in place, what security you have, and who does the deploys.

The last place I worked, I inherited what I affectionately referred to as “the monolithic app”, i.e. a BizTalk application that was made up of about 24 different projects in one solution. So redeploying the whole thing was often a pain, and sometimes there were pieces of it that I didn’t want to deploy (perhaps multiple development threads going on in that same big solution).

If you are a beginner, let me explain. With BizTalk, you can export and import the MSI (with or without the bindings). But suppose you need to change, for instance, just a literal value in a map or an orchestration. Why redeploy the whole enchilada? You can rebuild that specific .dll, copy it to each of the BizTalk servers in the group, and run GacUtil against it:

GacUtil /i MyOrchestration.dll 

Then you just restart the host instances that use that .dll (or all of them, if you are unsure), and you are good to go.
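The GAC-and-restart routine can itself be scripted. The sketch below is an assumption-laden example: the GacUtil path and DLL name are hypothetical, and for simplicity it restarts every running in-process host instance via WMI rather than picking out just the ones that use the .dll.

```powershell
# Hypothetical paths - adjust for your environment
& "C:\Deploys\MyApp\GacUtil.exe" /i "C:\Deploys\MyApp\MyOrchestration.dll"

# Restart all running in-process host instances so they reload from the GAC
Get-WmiObject MSBTS_HostInstance -Namespace 'root\MicrosoftBizTalkServer' |
    Where-Object { $_.HostType -eq 1 -and $_.ServiceState -eq 4 } |   # 1 = In-Process, 4 = Running
    ForEach-Object { $_.Stop(); $_.Start() }
```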

The downside of GACing

I just recently discovered that if you export an MSI from that machine, the exported MSI will use the original .DLL referenced by that application. Even though the run-time picks up the newly GAC’ed .DLL, the exported MSI may not. You can get around this by going to “Resources”, adding the .DLL, and clicking “Overwrite”. As a matter of fact, you can GAC it from there too! I generally use BTDF (BizTalk Deployment Framework), so creating MSIs that way was not of interest to me. At least, that was true until I got to my latest client. They don’t use BTDF, and that’s their modus operandi.

When does the GAC Trick not work?

If you add new entities that have to be registered in the BizTalkMgmtDb database, then you will need to import the MSI. That means new schemas, new maps, new orchestrations, new pipelines, etc. Also, if you promote a field that was not promoted before, the database needs to know about that.

GACing is good for redeploying code when you have made logic changes, changes to literals, etc. Adding new functoids to an existing map, or even new shapes to an orchestration, is no big deal; they are not registered in the database. But you cannot add any new send/receive ports to your orchestration if you plan to GAC.

A Must! Keep an Audit Trail of Your GACs

It’s important that no man GACs unto himself. If other people administer your BizTalk environment, they need to know what you deployed (or GAC’ed) and when.

It’s common to keep a directory such as C:\Deploys. Under that, keep a folder named for each BizTalk application.
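A minimal sketch of that convention in PowerShell, assuming a C:\Deploys root and a shared log file (the application name and paths are just examples):

```powershell
$app    = "MyBizTalkApp"                      # hypothetical application name
$stamp  = Get-Date -Format "yyyy-MM-dd_HHmm"
$target = "C:\Deploys\$app\$stamp"

New-Item -ItemType Directory -Path $target -Force | Out-Null
Copy-Item ".\*.dll" $target                   # keep a copy of exactly what was GAC'ed
Add-Content "C:\Deploys\DeployLog.txt" "$stamp $env:USERNAME GAC'ed $app from $target"
```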


Some Other GAC Tricks

I often just put the GacUtil and related files in the same folder as my deploy. There now seem to be too many versions of GacUtil, and I hate to waste time looking for the right one.

Here are the files required: all the .dlls that you want to GAC, plus three files: GacUtil.exe, GacUtil.exe.config, and GacUtlRC.dll:


I often create a file called GacAll.cmd and put in the same directory. It simply runs GacUtil on each of the .dlls there; something like this:

gacutil /i myFirst.dll 
gacutil /i mySecond.dll 
gacutil /i myThird.dll 
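If you prefer PowerShell to a .cmd file, the same idea can be sketched as a one-liner that GACs every .dll in the current folder (assuming GacUtil.exe sits in that same folder):

```powershell
# GAC every .dll in the current directory using the local copy of GacUtil
Get-ChildItem *.dll | ForEach-Object { & .\gacutil.exe /i $_.FullName }
```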

When trying to export an MSI from BizTalk Admin Console for an application, I got this error:

“Unable to serialize web directory… Verify the location exists and it is accessible”

Example of error in BizTalk Admin Console



In my case, we had SharePoint on the same machine as BizTalk, so a coworker set up a special virtual directory for published BizTalk services and assigned it port 9000. Thus, when I ran the wizard to create the web service, I had to specify port 9000, and when exporting the MSI, I also needed to specify port 9000 (after a colon following the localhost name).

Example of Solution



For a while, I’ve been wanting to explore deeper or “hack” into the internals of the BizTalk databases, i.e. BizTalkMsgBoxDb and BizTalkDTADb. When do records get stored in these databases, how long do they stay there, and when do they get “cleaned up”?

So here is my first scenario. I use a SQL command to find out how many rows are in each table of the BizTalk databases (as per a previous blog post).

I modified the query to store the data in a statistics table that I created so that I could use queries to identify any differences. Here’s what that table looks like:

Now let me show you how I collected the data. I ran the query twice, once against each of the two databases, capturing the date/time once so both runs share the exact same timestamp (for purposes of joining later), and storing the statistics. I change the label from ‘Before’ to ‘During’ and ‘After’ when I run the script again later.
First run:

USE BizTalkMsgBoxDb

declare @WorkDateTime datetime2 = getDate() 
declare @Label varchar(15) = 'Test' 
print @WorkDateTime
print @Label 

insert into StatSnap.dbo.TableRowCounts  
SELECT @WorkDateTime, 'BizTalkMsgBoxDb', sc.name + '.' + ta.name, @Label, SUM(pa.rows) RowCnt
FROM sys.tables ta
INNER JOIN sys.partitions pa  ON pa.OBJECT_ID = ta.OBJECT_ID
INNER JOIN sys.schemas sc  ON ta.SCHEMA_ID = sc.schema_id
WHERE ta.is_ms_shipped = 0 AND pa.index_id IN (1,0)
GROUP BY sc.name, ta.name

USE BizTalkDTADb

insert into StatSnap.dbo.TableRowCounts  
SELECT @WorkDateTime, 'BizTalkDTADb', sc.name + '.' + ta.name, @Label, SUM(pa.rows) RowCnt
FROM sys.tables ta
INNER JOIN sys.partitions pa  ON pa.OBJECT_ID = ta.OBJECT_ID
INNER JOIN sys.schemas sc  ON ta.SCHEMA_ID = sc.schema_id
WHERE ta.is_ms_shipped = 0 AND pa.index_id IN (1,0)
GROUP BY sc.name, ta.name

Once the data is collected, the following query shows which tables changed in size. I manually entered the date/times to match up one specific earlier run with the later runs.

use statsnap
--truncate table TableRowCounts
--select * from TableRowCounts
select TRCBefore.DBName, TRCBefore.TableName, 
    TRCBefore.Label, TRCBefore.RowCountStat, 
    TRCDuring.Label, TRCDuring.RowCountStat,
    TRCAfter.Label, TRCAfter.RowCountStat
from TableRowCounts as TRCBefore 
inner join TableRowCounts as TRCDuring on TRCBefore.DBName = TRCDuring.DBName and TRCBefore.TableName = TRCDuring.TableName 
inner join TableRowCounts as TRCAfter  on TRCBefore.DBName = TRCAfter.DBName and TRCBefore.TableName = TRCAfter.TableName 
where TRCBefore.DateTimeStat = '2015-01-27 09:16:42.2800000'
  and TRCDuring.DateTimeStat = '2015-01-27 09:19:55.6900000'
  and TRCAfter.DateTimeStat  = '2015-01-27 09:27:30.7900000'
  and (TRCBefore.RowCountStat != TRCDuring.RowCountStat
  or   TRCBefore.RowCountStat != TRCAfter.RowCountStat)

The resulting query combining all stats:



I created an orchestration that takes a simple message from a file receive location, maps it, delays 3 minutes, then sends it to a file. I snapshot the stats ‘Before’ starting the orchestration, ‘During’ (while the 3-minute delay is happening), then again ‘After’ the orchestration has completed.

One of the most important columns is nState, the values are described in this StackOverflow post.

    1 : Started
    2 : Completed
    3 : Terminated
    4 : Suspended
    5 : ReadyToRun
    6 : Active
    8 : Dehydrated
    16: CompletedWithDiscardedMessages
    32: SuspendedNonResumable
    64: InBreakpoint

Instances table


It turns out that I had four suspended instances from a few weeks back: two non-resumable (nState=32) and two resumable (nState=4). You can actually see the error messages in the nvcErrorDescription column. I terminated those, then ran the query again. After deleting them, the following remain. Note that one has nState = 8, which would be my dehydrated orchestration (in the 3-minute delay). By running the query frequently while the orchestration is running, you can watch the nState change from 2 to 8. Sometimes it stayed at the value 2 for the whole 3 minutes.
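To find suspended instances like these yourself, a query along the following lines works against the Instances table. The column names are as I observed them; verify them against your BizTalk version, and use NOLOCK since the MessageBox is a busy database:

```sql
USE BizTalkMsgBoxDb;
-- Suspended instances: 4 = Suspended (resumable), 32 = SuspendedNonResumable
SELECT uidInstanceID, nState, nvcErrorDescription
FROM dbo.Instances WITH (NOLOCK)
WHERE nState IN (4, 32);
```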


Later, I dropped four different messages, causing four orchestration instances to start. Below you can see which GUIDs are the same, and which ones change per instance.



BTS_Applications table

The bts_applications table contains hosts, not applications:


Host Tables

This was not a surprise to me, because I had seen it before, but someone new to the internals of the database might find it surprising. When you create a new “Host” using BizTalk Admin Console (or via Object Explorer, WMI, or Powershell Extensions), five SQL tables are created for that host.



As Expected


I will continue with more observations and details in Part 2.



The question today: should BizTalk Administrators be granted the “SysAdmin” role (authority) on the SQL Server instance that supports BizTalk?

Generally, my answer is yes (at least on the SQL Server instance that supports BizTalk), and here’s why.

1) The BizTalk Admins typically have to deal with the SQL Agent Jobs.  BizTalk installs about 14 jobs, and you’ll need SysAdmin at a minimum while doing the install.

2) BizTalk Admins sometimes need full power such as creating new databases, backing-up and restoring databases, attaching and detaching databases.

NOTE: Most companies should not be running the BizTalk databases and their production application databases on the same machine; almost everyone separates the two for performance reasons. Thus, giving the BizTalk Administrator SysAdmin rights is only required on the SQL machine that supports BizTalk. In larger BizTalk installations, even BizTalk and the SQL Server that supports BizTalk are separated onto two physical (or logical/virtual) machines.

BizTalk 2010 Default SQL Agent Jobs Installed


In some bigger companies, duties might be separated, such that there is really a person in the BizTalk Admin role, and that person handles the installs and the management of the SQL Agent Jobs.  In most companies, the BizTalk Developers play somewhat of a BizTalk Admin role as well (at least in the development, if not the production environment).

Further, in some companies, the developers are allowed to create SQL Agent Jobs related to the development database, while in other companies only the DBA will have that authority.

The corollary of the above question would be this: Why can’t I see the BizTalk SQL Agent Jobs when I open SSMS (SQL Server Management Studio)? The answer is that you need the SysAdmin SQL role to see the Agent Jobs.

NOTE: If you want to be more granular, instead of granting SysAdmin, there are three msdb roles specific to SQL Agent: SQLAgentUserRole (users get access only to jobs they own), SQLAgentReaderRole (can view, but not change, all jobs, plus update their own jobs as in the prior role), and SQLAgentOperatorRole (everything in the prior role, plus the ability to execute and enable/disable any job in the system, though still only able to modify their own jobs).
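Granting one of these roles is done in msdb. A hedged sketch, using a hypothetical domain group name; on SQL Server versions older than 2012, use sp_addrolemember instead of ALTER ROLE.

```sql
USE msdb;
-- Hypothetical group - substitute your BizTalk developers' AD group
CREATE USER [MYDOMAIN\BizTalkDevs] FOR LOGIN [MYDOMAIN\BizTalkDevs];
ALTER ROLE SQLAgentReaderRole ADD MEMBER [MYDOMAIN\BizTalkDevs];
```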


Adding the SysAdmin role can be done in either GUI mode or command mode.




I’m at a new client, and here’s the scenario.  They have good Hosts and Host Instances set up in production, and maybe the QA environment (I don’t have access yet).  But in the development environment, they have been using the default host instances.  I still can’t imagine how they did their binding files correctly.  One of my first goals is to get them to use the BizTalk Deployment Framework.  The second goal is to make the host instances the same in the development environments (and also on the VMs my co-worker and I are running).

So as soon as they give me a list of the production Host Instances, I want to re-create them in the 4 dev environments. Obviously, a script is needed. The script below does the job for the Hosts and Host Instances. What I have left to do is to script the setup of the adapters that should be associated with the Host Instances.

I recently read a blog by Sandro Pereira on this topic. I’m not sure why he called WMI directly; maybe he wrote it before the Powershell Extensions became available on CodePlex. I had some errors getting it to work. Further, his logic builds out the “best practices”; I’m not yet ready or able to change the client’s practices in production. (It’s funny how some clients give you the “keys to the kingdom” on the first day, and others keep you in more of a “pen”. It depends on the size of the company, their security and deployment practices, whether or not you have to support and fix production issues, and what exactly you were hired to do.)

The script below, as short as it is, took me several hours to get working. The documentation link for the Powershell Extensions was broken on CodePlex, so I could not download any documentation, and it was sort of a trial-and-error approach. What I love, and borrowed from Sandro Pereira’s code above, is the ability of Powershell to prompt for the password and save it in a $credentials object; you can then pass that when you set up the Host Instances (i.e. the user/pass for the service account that runs the Host Instances).

NOTE 1: I have not tested this in an environment where there is more than one BizTalk server in the group. In that case, code changes would have to be made to add one Host Instance per server.

NOTE 2: In BizTalk 2013, Powershell Extensions are included with the install, you don’t need the CodePlex extensions.

Add-PSSnapIn -Name BiztalkFactory.PowerShell.Extensions  #NOTE: Must be in the 32-bit version of Powershell to use this SnapIn
#Get-PSSnapIn -Registered   ### list registered Snap-In's

function AddHostAndHostInstance($HostName, $Is32BitOnly, $IsTrackingOn, $HostType)
{
 if ($HostType -eq "Isolated")
    { $HostTypeCode = 2 }
 else
    { $HostTypeCode = 1 }
 $myNTHostGroupName = "mscwvbizd01\BizTalk Application Users"
 $AuthTrusted = $false
 $serverName = "mscwvbizd01"
 #$HostItemType = 1 #NOTE: I got an error when I tried using -ItemType on the New-Item of the Host Instance 
 #(fortunately it defaults to In-Process) 

 cd "BizTalk:\Platform Settings\Hosts"
 Write-Host "Try to add New Host=$HostName"
 #New-Item $HostName -HostType:$HostType -NtGroupName:$NTGroupName -AuthTrusted:$AuthTrusted
 $temp = New-Item -Path $HostName -HostType:$HostTypeCode -NtGroupName:$myNTHostGroupName -AuthTrusted:$AuthTrusted 
 Set-ItemProperty -Path $HostName -Name 'Is32BitOnly' -Value $Is32BitOnly
 Set-ItemProperty -Path $HostName -Name 'HostTracking' -Value $IsTrackingOn

 cd "BizTalk:\Platform Settings\Host Instances"
 Write-Host "Try to add New HostInstance=$HostName"
 New-Item $HostName -HostName $HostName -Credentials $hostCredentials -RunningServer $serverName
}

### MAIN CODE HERE - Calls Function Above once per Host/Host-Instance ### 

$domainName = "biztalkdev"
$serviceUserid = "biztalkservice"

$hostCredentials = $Host.ui.PromptForCredential("Logon Credentials","This account must have SQL Server permissions.", $domainName + "\" + $serviceUserid, "");
#[String]$hostCredentialsPassword = [Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($hostCredentials.Password));

# pass four parms: host name, Is32BitOnly, IsTrackingOn, HostType 
AddHostAndHostInstance "TrackingHost"  "True" "True" "In-Process"
AddHostAndHostInstance "AS2Receive"    "True" "True" "Isolated"
AddHostAndHostInstance "TestNewHost32" "True" "False" "In-Process" 
AddHostAndHostInstance "TestNewHost64" "False" "False" "Isolated" 

Write-Host "Script Completed"

Here’s one more library of functions that a co-worker of mine used at a previous client. I didn’t learn about it until after I wrote the above.

Update: 05/30/2017 – the above script was updated to include the line that sets the Is32BitOnly flag, so you can make the Host 64-bit or 32-bit using the Powershell extensions.

Update: 05/31/2017 – added the ability to create Isolated hosts by passing a fourth parameter

Update 05/31/2017 – now I also have a script to Update Adapters with Host Names. This was modeled from code here.

Sometimes one of your databases is eating up disk space, and you want to know why. By running the query below, you can easily view the number of rows in each table (and even sort so that the tables with the most rows are on top).

--USE BizTalkMgmtDb
SELECT sc.name + '.' + ta.name TableName, SUM(pa.rows) RowCnt
FROM sys.tables ta
INNER JOIN sys.partitions pa  ON pa.OBJECT_ID = ta.OBJECT_ID
INNER JOIN sys.schemas sc  ON ta.SCHEMA_ID = sc.schema_id
WHERE ta.is_ms_shipped = 0 AND pa.index_id IN (1,0)
GROUP BY sc.name, ta.name
ORDER BY SUM(pa.rows) DESC


Below are samples only. I’m not saying your results should be similar; it depends entirely on your run history and the types of BizTalk artifacts you have deployed.

Sample Results for BizTalkDTADb:

records counts by table name in BizTalkDTADb


On the development environment, my new client had not scheduled the SQL Agent Job: “DTA Purge and Archive (BizTalkDTADb)”

This database was in need of cleanup, which I did by running:

truncate table dbo.dta_DebugTrace;
exec dtasp_PurgeTrackingDatabase
0, --@nLiveHours tinyint, --Any completed instance older than the live hours +live days
7, --@nLiveDays tinyint = 0, --will be deleted along with all associated data
30, --@nHardDeleteDays tinyint = 0, --all data older than this will be deleted.
'2014-08-30' --@DateLastBackup

I hope to blog more about the clean-up process soon. The above screenshot was taken after doing a partial cleanup; dta_MessageInOutEvents had over 2 million rows when I started. I’m cleaning up one month at a time to make sure I don’t fill up the transaction log.
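While purging one month at a time, you can keep an eye on the transaction log between runs. DBCC SQLPERF shows log size and percent used for every database:

```sql
-- Show log size and percent-used for every database on the instance
DBCC SQLPERF(LOGSPACE);
```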


Sample Results for BizTalkMgmtDb:


Powershell Code:

# LUNAR DUST - home grown server/service monitor - sends out email when services defined in related .CSV config file are down 
# Author: Neal Walters - Nov 2013 
# (Lunar Dust is a play on words of the monitor name "Solar Wind") 
[string[]] $users = "","" # List of users to email your report to (separate by comma)  
$fromemail = ""
$SMTPserver = "" #enter your own SMTP server DNS name / IP address here
$YNTraceSuccess = "N"   #setting to "Y" will create more trace/debug to the ServiceTestTrace.txt file, setting to "N" only shows servers/services that were down
$TraceFilename = "D:\scripts\ServiceMonitorTrace.txt"
$LocalServerName = "MyServerName"    #used to determine if we should check remote server or not on GetService 
set-item WSMan:\localhost\Client\TrustedHosts $LocalServerName

#Get arguments flexibly, in either order.
#one parm is a "Y" or "N" to indicate whether to send an email even when no servers are in error status 
#the other optional parm is the filename of the CSV to read; if omitted, a default filename is used.
if ($args.Length -gt 0)
{
   if ($args[0] -eq "Y" -or $args[0] -eq "N")
      { $IsEmailOn = $args[0] }
   if ($args[0].Length -gt 4)
      { $csvFilename = $args[0] }
}

if ($args.Length -gt 1)
{
   if ($args[1] -eq "Y" -or $args[1] -eq "N")
      { $IsEmailOn = $args[1] }
   if ($args[1].Length -gt 4)
      { $csvFilename = $args[1] }
}

if ([string]::IsNullOrEmpty($csvFilename))
{
   $csvFilename = "D:\Scripts\ServerMonitorConfig.csv" 
   Write-Host "Setting csvFilename=$csvFilename" 
}

Write-Host "csvFilename=$csvFilename"
$csv = Import-Csv $csvFilename -Header @("IsActive","Environment","Category","ServerName","ServiceName","Criticality")

$HTMLMessage="<h2>Server/Service Status</h2><table border='1'><tr><th>Environment</th><th>Category</th><th>ServerName</th><th>ServiceName</th><th>Status</th><th>Process Started Date/Time</th><th>UserName</th></tr>"
$CriticalErrorCount = 0
$ErrorCount = 0 
$ServerCount = 0 

foreach ($line in $csv) {
  if ($line.IsActive -eq "Active")
  {
    $reportStatus = "" 
    $ServerCount = $ServerCount + 1  

    #$Service = (get-service -Name $line.ServiceName -ComputerName $line.ServerName)
    #get-wmiobject below is slower than get-service above, but it gives us the processId, 
    #which we can use to find out what time the service/process started 
    write-host "Verifying: " $line.ServerName $line.ServiceName 
    $myDate = Get-Date
    if ($YNTraceSuccess -eq "Y")
    {
       Add-Content $TraceFilename "$myDate TRC01 $($line.ServerName) $($line.ServiceName)"
    }
    $error.clear()  #clear any prior errors, otherwise same error may repeat over-and-over in trace 
    if ($LocalServerName -eq $line.ServerName)
    {
        # see if not using -ComputerName on the local computer avoids the "service not found" error 
        Add-Content $TraceFilename "$myDate TRCW1 using local computer " 
        $Service = (get-wmiobject win32_service -filter "name = '$($line.ServiceName)'")
    }
    else
    {
        Add-Content $TraceFilename "$myDate TRCW2 using remote computer $($line.ServerName) not eq $LocalServerName" 
        $Service = (get-wmiobject win32_service -ComputerName $line.ServerName -filter "name = '$($line.ServiceName)'")
    }

    if ($error -ne $null) 
    {
        Write-Host "----> $($error[0].Exception) " 
        Add-Content $TraceFilename "$myDate TRCE1 $($error[0].Exception)" 
    }

    if ($Service -eq $null) 
    {
        $reportStatus = "Service Not Found: name = '$($line.ServiceName)'"
        $trColor = "Yellow"
        $ErrorCount = $ErrorCount + 1  
        $CriticalErrorCount = $CriticalErrorCount + 1
        $CreationDate = "NA" 
        Write-Host "----> $reportStatus " 
        Add-Content $TraceFilename "$myDate TRC02 $reportStatus" 
    }
    else
    {
        #Write-Host "Service Exists"
        #$status = $Service.Status 
        #if ($status -eq "Running")  #this was the check when using get-service instead of get-wmiobject win32_service

        $reportStatus = $Service.State

        if ($Service.Started -eq "True")
        {
            #$reportStatus = "Up"
            $trColor = "White"
            # when the service is running, we can look up the ProcessId to get the Userid and CreationDate (time the service was started) 
            $ServicePID = $Service.ProcessID
            #Write-Host "Process id: $ServicePID" 
            $ProcessInfo = Get-WmiObject -Class Win32_Process -ComputerName $line.ServerName -Filter "ProcessID='$ServicePID'" -ea 0
            $CreationDate = $ProcessInfo | % { $_.ConvertToDateTime( $_.CreationDate )}
        }
        else
        {
            Write-Host "Down Service.Started=$($Service.Started) " 
            Write-Host "Status=$($Service.Status)  State=$($Service.State)"
            #$reportStatus = $Service.State
            $trColor = "Orange"
            $ErrorCount = $ErrorCount + 1  
            if ($line.Criticality -eq "Error") 
            {
                #switch from orange to yellow background 
                $trColor = "Yellow"
                $CriticalErrorCount = $CriticalErrorCount + 1
            }
            #Write-Host "down status=$status"   #result was empty string or null 
            $CreationDate = "NA" 
        }
    }
    #Write-Host "test=$reportStatus"
    $TextMessage += "$($line.Environment) $($line.Category) $($line.ServerName) $($line.ServiceName) $reportStatus $CreationDate $($Service.StartName)`r`n" 

    #build the TR and TD Cells of the HTML Table 
    $HTMLMessage += "<tr bgcolor='$trColor'>"
    $HTMLMessage += "<td>$($line.Environment)</td>"
    $HTMLMessage += "<td>$($line.Category)</td>"
    $HTMLMessage += "<td>$($line.ServerName)</td>"
    $HTMLMessage += "<td>$($line.ServiceName)</td>"
    $HTMLMessage += "<td>$reportStatus</td>"
    $HTMLMessage += "<td>$CreationDate</td>"
    $HTMLMessage += "<td>$($Service.StartName)</td>"
    $HTMLMessage += "</tr>`r`n" 
  }
  else
  {
     Write-Host "Skipping InActive " $line.ServerName $line.ServiceName 
  }
}
Write-Host '------' 
#$HTMLMessage = "<h3>Critical Server Count=$CriticalErrorCount  Total Error Count=$ErrorCount</h3>" + $HTMLMessage + "</table>" + "<h3>Yellow is critical, Orange is not critical.</h3>" 
$HTMLMessage = "<h3>Server-Count=$ServerCount  Critical-Count=$CriticalErrorCount  Total-Error-Count=$ErrorCount</h3>$HTMLMessage</table><h3>Yellow is critical, Orange is not critical.</h3>" 
$mydate = Get-Date 
Write-Host "Date=$myDate" 
$HTMLMessage = "<h3>$mydate</h3>$HTMLMessage"
Write-Host  $TextMessage 
Write-Host "`r`n Server-Count=$ServerCount   Critical-Count=$CriticalErrorCount  Total-Error-Count=$ErrorCount"
Write-Host "Date=$myDate" 

$emailSubject = "QT Service Email was Requested" 
if ($CriticalErrorCount -gt 0)
{
   $emailSubject = "Critical QT Server Down Alert: There are $CriticalErrorCount critical services down" 
}

#always send email when one 1 or more critical errors are found, or when the $IsEmailOn parm is set to "Always" which sends email regardless of error count 
if ($CriticalErrorCount -gt 0 -or $IsEmailOn -eq "Always")
{
   send-mailmessage -from $fromemail -to $users -subject $emailSubject -BodyAsHTML -body $HTMLMessage -priority High -smtpServer $SMTPserver
   Write-Host "Alert Email Sent with Subject=$emailSubject" 
   Add-Content $TraceFilename "$myDate TRC99 Alert Email Sent with Subject=$emailSubject" 
}


Create a CSV like this, with a list of the servers to monitor.
Column 1 is “Active” or “InActive” (inactive rows are skipped).
Column 2 is an arbitrary environment name.
Column 3 is a category of server functionality (could potentially be used to route email to the group in charge of those types of servers, e.g. IIS, BizTalk, SQL…).
Column 4 is the server/machine name.
Column 5 is the Windows service name. Use “LanmanServer” just to know if the machine is up and running.
Column 6 indicates whether a down service should cause a critical error or just a warning.
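A hypothetical sample of the CSV (the server and service names are made up). Note there is no header row, since the script supplies the headers via Import-Csv -Header:

```csv
Active,DEV,BizTalk,BIZSVR01,BTSSvc$BizTalkServerApplication,Error
Active,DEV,SQL,SQLSVR01,MSSQLSERVER,Error
Active,DEV,IIS,WEBSVR01,W3SVC,Warning
InActive,DEV,General,FILESVR01,LanmanServer,Warning
```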


Schedule a .bat or .cmd file such as follows, pointing to the desired CSV as defined above.

powershell -command "& 'D:\Scripts\ServerMonitor.ps1'" d:\Scripts\ServerMonitorConfig.csv
get-service | where {$_.name -match "win"}

Normally displays like this:

Note how very long names can get cut off with the ellipses….

Did you know that Powershell has a “grid-view”, similar to a SQL results grid?

get-service | where {$_.name -match "win"} | out-gridview


If you pipe all get-service to the out-gridview, then you can see how you can even filter the data using the “GUI” mode.
Click “Add Criteria”, then click the checkbox next to the desired property name.


You can then enter a match/contains value for that property, and the grid instantly changes to show only the matching data:


Instead of filtering with the “contains” clause, you can click the word “contains”, and choose other methods of filtering.

I was truly amazed when I discovered this feature.



This is basically a follow-up to the previous post How to add a where filter match/contains/like to any powershell cmdlet (such as Get-Service)

Cmdlets are named in the format “verb-noun”, for example get-service, start-service, etc…
Suppose you cannot remember all the verbs associated with a noun. You can run this:

get-command -Noun process


Recall from the prior blog you can do something like this to get all your BizTalk Host Instances (services):

get-service  | where {$_.name -match "biztalk"}

Now, suppose you want to start all the BizTalk Host Instances: you just pipe the results of the previous command to the “start-service” cmdlet. (Note: you may have to run Powershell “as admin” to avoid security errors.)

get-service  | where {$_.name -match "biztalk"} | start-service 

There is an interesting optional parm that lets you check your script. The “-WhatIf” parm can be added to Start-Service or Stop-Service. It then only shows what would happen if the cmdlet were run; the cmdlet is not actually run.


PS C:\WINDOWS\system32> get-service | where {$_.name -match "DTC"} | stop-service -whatif
What if: Performing the operation "Stop-Service" on target "Distributed Transaction Coordinator (MSDTC)".

If you check the status of the service after running the above stop-service, you will see that it didn’t really stop.