I just found this sample in my archive, so decided to post it:

# This demonstrates very basic MQ functionality (to a remote machine) using PowerShell

# Neal Walters - 10/30/2013
Add-PSSnapin IBM.PowerShell.WebSphereMQ
$qmconns = @()
$qmconns += New-WMQQmgrConnDef -Name YOURNAME -HostName 10.1.3.200 -Channel YOURR.SvcChannel -Port 1415
$qmgrs = Get-WMQQueueManager -Connections $qmconns
if ($qmgrs -eq $null) {Write-Host "`$qmgrs is null"}
get-wmqqueue FPES* -Qmgr $qmgrs | Select Name, CurrentDepth, DepthHighEvent, DepthHighLimit
get-wmqqueue FDS* -Qmgr $qmgrs | Select Name, CurrentDepth, DepthHighEvent, DepthHighLimit
 
Write-Host "Members"
# see the properties available:
# get-wmqqueue -Qmgr $qmgrs | Get-Member
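
Going one step further, here's a minimal sketch (using only the cmdlets and properties already shown above) that lists just the queues that currently have messages on them, deepest first:

# List only queues with a non-zero depth, sorted deepest first
get-wmqqueue -Qmgr $qmgrs |
    Where-Object { $_.CurrentDepth -gt 0 } |
    Sort-Object CurrentDepth -Descending |
    Select Name, CurrentDepth, DepthHighLimit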

I got the same error as the one in this blog:
http://brucetank.blogspot.com/2017/10/biztalk-2016-wcf-custom-transport.html


System.Configuration.ConfigurationErrorsException: Unrecognized attribute 'ApplicationName'. Note that attribute names are case-sensitive.

The difference is that I got it at run-time, whereas in the blog above, he got it at the time of importing the bindings.

I was afraid that maybe the Cumulative Updates (CUs) were out of sync between the two environments; I have a project underway to make sure that both systems are on the latest cumulative update (CU6).

I tried removing the ApplicationName="" attribute from the binding file and re-importing, but when I went to BizTalk Admin in my Production environment, the Application Name was still there in the properties of the WCF-Custom (SQL) send port (and still had a blank value).

I looked for other differences between test and production, and discovered that I had switched the send port to a 64-bit Host Instance in Prod. When I switched it back to a 32-bit Host Instance, it got past this issue.
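
As a side note, you can check how each host is configured (32-bit vs. 64-bit) from PowerShell through the BizTalk WMI provider. This is just a sketch, assuming the standard root\MicrosoftBizTalkServer namespace and the IsHost32BitOnly property on the MSBTS_HostSetting class:

# List each BizTalk host and whether it is restricted to 32-bit
Get-WmiObject -Namespace 'root\MicrosoftBizTalkServer' -Class MSBTS_HostSetting |
    Select Name, HostType, IsHost32BitOnly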

Here are some PowerShell samples to get you going quickly with the Event Log.

I’ve seen Event Logs that are full of noisy, bothersome INFO-level messages. For example, we are seeing these “noise” messages about every 10 seconds, so scrolling through the Event Viewer looking for errors can take some time. Obviously, I’m working with other people to stop those messages, but until then, suppose you want to filter on just the errors from the last 60 minutes?

Use the first command below to get all the errors.

The second shows you how to get all warnings and then further filters by a word in the message.

cls 
$minutes=60 
Get-Eventlog -log application -after ((get-date).addMinutes($minutes*-1)) -EntryType Error 

#Get-Eventlog  -log application -after ((get-date).addMinutes($minutes*-1)) -EntryType Warning  | Where-Object {$_.Message -like '*Mercury*'} 

#Write-Host "---------------------------"
#$el = Get-EventLog -log application -index 1096900 
#write-host $el.Message

The Message will get cut off, but you get a list of the dates and times of the errors or warnings. You can then use the “Index” value to go back and retrieve the full text of a specific item.

#Write-Host "---------------------------"
$el = Get-EventLog -log application -index 1096900 
write-host $el.Message

See the related blog post: PowerShell EventLog Automated Email.
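
If you want to go one step further and email yourself the recent errors, here is a rough sketch along those lines. The SMTP server and addresses below are placeholders you would replace; it simply reuses Get-EventLog with the built-in Send-MailMessage cmdlet:

# Collect the last hour of errors and mail them as a plain-text report
# (smtp.yourcompany.com and the To/From addresses are placeholders)
$minutes = 60
$errors = Get-EventLog -Log Application -After ((Get-Date).AddMinutes($minutes * -1)) -EntryType Error -ErrorAction SilentlyContinue
if ($errors)
{
    $body = $errors | Select TimeGenerated, Source, Message | Out-String
    Send-MailMessage -SmtpServer "smtp.yourcompany.com" `
        -From "biztalk@yourcompany.com" -To "you@yourcompany.com" `
        -Subject "Application EventLog errors (last $minutes minutes)" -Body $body
}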

You can also loop through EventLog data and selectively write out whichever fields you want.

cls 
$minutes=90
#Get-Eventlog -log application -after ((get-date).addMinutes($minutes*-1)) -EntryType Warning 

$logs = Get-Eventlog  -log application -after ((get-date).addMinutes($minutes*-1)) -EntryType Warning  | Where-Object {$_.Message -like '*Mercury*'} 

foreach ($log in $logs) 
{
    Write-Host "---------------------------"
    Write-Host "Index=$($log.Index)" 
    Write-Host "DateTim: $($log.TimeGenerated) " 
    write-host $log.Message

}

What I did to cause the error

I dropped an XML file that went through a map and was picked up by a Send Port with a CSV (flat file) pipeline.
It worked okay if I used the PassThru pipeline.

Suspend Error

There was a failure executing the send pipeline: “DL.MGLocationData.Pipelines.MyAppNameLocationDataCSV_Send, DL.MGLocationData.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=59a32f7375d7234a”
Source: “Flat file assembler” Send Port: “sp_MyAppNameLocationData_CSV” URI: “e:\Integration\Outbound\MyAppNameLocationData\MyAppName_%datetime%.csv” Reason: Object reference not set to an instance of an object

Solution

Turn on output map validation for your map.

Check the following:
1) You are mapping something to every single CSV column
2) You don’t have column names misspelled (spelled one way in the map, and another way in the schema)

With BizTalk, we often archive files in a disk directory, where each Vendor has its own archive.

Simple Example

I found the simple example below, then expanded upon it:

Get-ChildItem E:\Archive | Measure-Object -Property Length -sum

Source from http://woshub.com/powershell-get-folder-sizes/

Fancy Example

This starts with the Archive directory, then basically runs the command above for each subdirectory. One trick was getting the values back from Measure-Object, and the second trick was how to present the data; just using Write-Host statements looked very ugly.

To present the data, I wanted to use Out-GridView. To do that, I had to load a hashtable for each directory into a list, and pass that list to Out-GridView. I haven’t got all the numbers right-justified yet, perhaps soon (see the Format-Table alternative after the script).

$startDir = "E:\Archive"   # top-level archive directory (from the simple example above)
$topDirectories = Get-ChildItem $startDir -Directory
$countDir = 0 
$List = New-Object System.Collections.ArrayList

foreach ($dir in $topDirectories) 
{
   $countDir = $countDir + 1
   #if ($countDir -gt 1) {exit}
   #Write-Host "Dir: $($dir.FullName)"
   $measure = Get-ChildItem $dir.FullName -Recurse | Measure-Object -Property Length -sum
   Write-Host $dir.BaseName $measure.Count $measure.Sum 

   ## Empty directory seems to have null values 
   if ($null -eq $measure.Sum) 
   {
       $SizeMB = 0 
   }
   else 
   {
       $SizeMB = $measure.Sum / 1024 / 1024 
   }
   

    $Hash = [ordered]@{
         Directory=  $dir.Fullname 
         Count = $measure.Count 
         ##Size = $size.ToString().PadLeft(15,' ' ) #   $a.PadLeft(15,[char]4)
         SizeMB = [math]::Round($SizeMB *100) / 100   #round to two decimal places 
         }

    [void]$List.Add(( [pscustomobject]$Hash  ))

}

$List | Out-GridView -Title 'Drive Space' 
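
If you only need console output instead of the grid, one way to get the numbers right-justified is Format-Table with calculated properties (a sketch, reusing the same $List built above; the Alignment key does the right-justification):

# Right-justify the numeric columns in a console table
$List | Format-Table Directory,
    @{Label='Count';  Expression={$_.Count};  Alignment='Right'},
    @{Label='SizeMB'; Expression={$_.SizeMB}; Alignment='Right'} -AutoSize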

Sample Output

Scenario

Let’s say you have a directory with 100 files, and you want to slowly drop those files into BizTalk (via a File adapter Receive Location). Rather than copying all the files at once, you would like to wait 2, 5, or 10 seconds between each file drop. Maybe you also want to make the tracked messages easier to read, by basically making the whole process single-threaded.

What I call “file dripping” is really a form of throttling. Maybe your BizTalk system is stressed, and you don’t want to throw in thousands of messages at once. BizTalk has to store all those messages in a database “queue” on the SQL server, and it then takes longer for BizTalk to ‘walk the chain’ or to keep searching through them when doing various queries during administration and processing. By dropping the files in slowly, the BizTalk message depth never gets very deep.

Furthermore, you might want the files to be processed in date/time order.

PowerShell Solution

We will use Copy-Item (or Move-Item), selected by a variable. To wait x seconds between files, we use the PowerShell Start-Sleep cmdlet. To get the files sorted, you pipe Get-ChildItem to the cmdlet “Sort-Object LastWriteTime -Ascending”.

cls 
$fromDirName = "e:\Integration\Inbound\AppName\SubDir\"
$toDirName = "e:\Integration\Inbound\AppName\FileDropFolder\"
$fileMask = "*.*"
$waitSeconds = 5
$moveOrCopy = "Copy"  


$files = Get-ChildItem $fromDirName -Filter $fileMask -File  | Sort-Object LastWriteTime -Ascending 
$countMatchingFiles = $files.Count 
Write-Host "Count Matching Files=$countMatchingFiles fileMask=$fileMask" 

foreach ($file in $files) 
{
    if ($moveOrCopy -eq "Copy") 
      {
        Copy-Item $file.FullName -Destination $toDirName
        Write-Host "Copied $($file.BaseName)" 
      }
    else 
      {
        Move-Item $file.FullName -Destination $toDirName
        Write-Host "Moved $($file.BaseName)" 
      }
    Write-Host "Waiting $waitSeconds seconds" 
    Start-Sleep -Seconds $waitSeconds 
}

In the system I’m working with, the dates in DB2 are defined as type “DECIMAL”, and they are stored as offsets from 19000000.
Note, it’s not the number of days since 1900-00-00, but literally the current date (YYYYMMDD less 19000000). For example, 2020-02-19 is stored as 20200219 - 19000000 = 1200219.

T-SQL converting numeric DB2 date to something more readable

Thus, when using a T-SQL query against the Linked Server, the date can be converted as follows:

SELECT RDABDT + 19000000 as RDABDT2,  * from Linked.x.dbo.table order by RDABDT desc 

RDABDT2 shows as 20200219. No need to convert when you sort, as the larger decimals are already the more recent dates.

C# – Converting to DB2 Date (integer or decimal)

        public static int EDIDateToDBDateInteger(string CCYYMMDD)
        {
            int rt = 0;
            if (CCYYMMDD == null)
            {
                throw new ApplicationException("EDIDatetoDBDateInteger CCYYMMDD argument passed is null");
            }
            if (CCYYMMDD.Length != 8)
            {
                throw new ApplicationException(
                   "EDIDatetoDBDateInteger CCYYMMDD length is not 8, value received=" + CCYYMMDD);
            }
            rt = Convert.ToInt32(CCYYMMDD) - 19000000;
            return rt;
        }

        public int XMLDateToDB2DateInteger(string convertDate)
        {
            string sMonth = DateTime.Parse(convertDate).ToString("MM");
            string sDay = DateTime.Parse(convertDate).ToString("dd");
            string sYear = DateTime.Parse(convertDate).Year.ToString();
            string sdateWorker = sYear + sMonth + sDay;
            int iDateWorker = Int32.Parse(sdateWorker);
            iDateWorker = iDateWorker - 19000000;
            return iDateWorker;
        }

        public int AnyDateToDB2DateInteger(string dateToConvert)
        {
            string yearStr = DateTime.Parse(dateToConvert).ToString("yyyy");
            string monthStr = DateTime.Parse(dateToConvert).ToString("MM");
            string dayStr = DateTime.Parse(dateToConvert).ToString("dd");
            int db2Date = Convert.ToInt32(yearStr + monthStr + dayStr) - 19000000;
            return db2Date;
        }
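
The same arithmetic is easy to spot-check from PowerShell; this is just a quick sketch (not part of the original C# helpers), using the 2020-02-19 date from the T-SQL example above:

# Convert a date to the DB2-style decimal value and back again
$sampleDate = Get-Date "2020-02-19"
$db2Date = [int]$sampleDate.ToString("yyyyMMdd") - 19000000          # 1200219
$readable = [datetime]::ParseExact(($db2Date + 19000000).ToString(), "yyyyMMdd", $null)
Write-Host "DB2 value=$db2Date  readable=$($readable.ToString('yyyy-MM-dd'))"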

What if you knew the big swings and trends in Bitcoin and other cryptos in advance?
Marius Landman has been able to correctly predict over 90% of the major swings in the market. He grew up as a poor orphan in South Africa, and went on to serve on an elite South African military team. He then started predicting health and safety trends for major oil companies, and found that he could also find similar trends in the cryptocurrency markets.

A sample from one of Marius Landman’s Cryptocurrency Trend and Forecast webinars

Why Join the ToNoIt Community

It’s Time to Know It

NOTE: This is not financial advice. This can be one of your tools that help you do your own research.

Register for a free ToNoIt account.
FREE SIGNUP to get to know the system (no payment required). After you register, you can explore the site and decide if you want to join with the paid membership, which includes updates (usually 2 or 3 per week, including a one- to two-hour webinar recording each week). If you want live access to the webinar, I can show you how to get that too. Payments are made only via cryptos.

Here’s an example where I wanted to create a small XML file with a unique list of TerminalIDs from my application.

On the left, which I had to blur out, the Looping functoid is connected to a parent record that repeats, and the other lines are connected to the source of the TerminalID.

The scripting component has the following “Inline C#” code:

public System.Collections.Generic.List<string> duplicateList = new System.Collections.Generic.List<string>();

public bool IsDuplicate( string terminalID )
{
     if( duplicateList.Contains( terminalID) )
        return true;
     duplicateList.Add( terminalID );
     return false;
}

You can see that it creates a collection object called a "Generic.List". It adds each terminalID to that list after checking whether the item is already in the list, and returns true or false accordingly.

The Logical Equal functoid tests the output of the above against "false". I think it is used to make sure we send a boolean true/false to the item on the right side of the map. Maybe it could be avoided, but I saw this in someone else's map, and I left it for now. I made one attempt to remove it and reverse the true/false coming out of the Scripting Component, but it didn't work.

In case you didn't know, when you set a field on the right side of the map to "false", that item doesn't get mapped out; you basically suppress it.

The output, when I passed two inputs with "ABC" as the terminal ID is as follows:

<?xml version="1.0"?>
<ns0:UtilListTerminalIDs xmlns:ns0="http://TMW.Integration.Schemas.UtilListTerminalIDs">
  <UtilListTerminalID>
     <TerminalID>ABC</TerminalID>
  </UtilListTerminalID>
</ns0:UtilListTerminalIDs>

Remember that you can right-click the map and click "Validate" to see the XSLT behind the map. The XSLT file will be shown in the Output window, and you can Ctrl-Click the .XSL filename.

Later, I passed two messages to the following C# routine, which identifies whether a TerminalID was not found in the target database (after doing a query on it). I could have also done the dup logic in the C# routine. This code uses XPath and the XmlDocument.SelectSingleNode method to look up the terminal ID in the results of the query. (The database query was previously returned as an XML message in the orchestration.)

I didn't want the orchestration to continue, because it would get an error later when it tries to insert a row into another table that requires the TerminalID to be in the target database.

        public static string FindMissingTerminalIDsInTRMSTResponse(
            XmlDocument UtilListTerminalIDs, XmlDocument TRMSResponse)
        {
            string rt = "";
            string xpathTerminalIDs = "//TerminalID";
            XmlNodeList terminalNodes = UtilListTerminalIDs.SelectNodes(xpathTerminalIDs);
            int countMatches = 0;
            int countNodes = 0;
            int countMisMatches = 0;

            foreach (XmlNode terminalNode in terminalNodes)
            {
                countNodes++;
                string xpathVerify = "//*[local-name()='TSATRM'][text()='" + terminalNode.InnerText + "']";
                Console.WriteLine("xpathVerify=" + xpathVerify);

                XmlNode testTerminalExistsNode = TRMSResponse.SelectSingleNode(xpathVerify);

                if (testTerminalExistsNode != null)
                {
                    countMatches++;
                }
                else
                {
                    countMisMatches++;
                    if (countMisMatches > 1)
                    {
                        rt = rt + ", "; // comma separate after the first one
                    }
                    rt = rt + terminalNode.InnerText;
                }
            }

            // return a comma-separated list of the TerminalIDs that were not found
            return rt;
        }

At first, I was going to return a boolean from this method, but then I decided that I would suspend the orchestration if the input had missing TerminalIDs, and that the message should list all the missing IDs in one string. That string went into my orchestration Suspend shape.

How to run a SQL command against a Microsoft SQL Server Database (MSSQL) from a PowerShell script.

cls 

$datasource = "server=server\instance;database=mydb;trusted_connection=true"

#if not using Integrated Security, 
#you might want to pass user/pass in variables or prompt for them 
#$connectionString = 'User Id=' + $username + ';Password=' + $password + ';' + $datasource
$connectionString = $datasource
write-host $connectionString 
$connection = New-Object System.Data.SQLClient.SQLConnection($connectionString)
$connection.open()

# -- reset so Biztalk ReceiveLocation Polling Process will pick them up 
$updateCommand = "update PollingTable set entryStatus = 'New' where ID in (101, 102)  "

Write-Host $updateCommand 

$Command = New-Object System.Data.SQLClient.SQLCommand 
$Command.Connection = $connection 
$Command.CommandText = $updateCommand 

$rowsAffected = $Command.ExecuteNonQuery() 
Write-Host ("Rows Affected by Update=$rowsAffected") 

$connection.close()  #not sure if this helps... 
                     #but in case you have more code after this...