One day, I hope to be able to fully understand and explain the “Ambient Transaction” true/false option in BizTalk WCF-SQL and WCF-Custom send and receive ports. Until then, I’m going to collect the most relevant info and blogs here:
Specifies whether the SQL adapter performs the operations using the transactional context provided by the caller. Default is true, which means that the adapter always performs the operations in a transactional context. If there are other resources participating in the transaction, and SQL Server also joins the transaction, the transaction gets elevated to an MSDTC transaction.
However, there can be scenarios where you do not want the adapter to perform operations in a transactional context. For example:
While performing a simple Select operation on a SQL Server database
While specifying a polling statement that performs a Select operation, and does not involve any changes to the table either through a Delete statement or by invoking a stored procedure.
Both these operations do not make any updates to the database table and, hence, elevating these operations to use an MSDTC transaction can be a performance overhead. In such scenarios, you can set the binding property to false so that the SQL adapter does not perform the operations in a transactional context.
Not performing operations in a transactional context is advisable only for operations that do not make changes to the database. For operations that update data in the database, we recommend setting the binding property to true; otherwise you might either experience message loss or duplicate messages, depending on whether you are performing inbound or outbound operations.
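On a WCF-Custom port, this option surfaces as the UseAmbientTransaction binding property of the sqlBinding. A sketch of what the binding configuration might look like with it turned off for a read-only polling scenario (the binding name and polling statement are placeholders, not from any real port):

```xml
<!-- WCF-Custom binding configuration (sketch): sqlBinding with ambient
     transactions disabled for a select-only polling receive location -->
<bindings>
  <sqlBinding>
    <binding name="MyPollingBinding"
             useAmbientTransaction="false"
             pollingStatement="SELECT * FROM dbo.MyTable" />
  </sqlBinding>
</bindings>
```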
http://msdn.microsoft.com/en-us/library/ms973865.aspx (Richard Seroter refers to the “Ambient Transaction” feature as being part of System.Transactions in .NET Framework 2.0 and later.) It seems like understanding ambient transactions in native C# first would be key to understanding how they work in BizTalk.
The above link provides the following information, but for me, right now, it doesn’t pass the “so what” and “WIIFM” (What’s In It for Me) tests.
System.Transactions defines a concept called an ambient transaction. The ambient transaction is the transaction that is present in the thread that the current application code is executing within. To obtain a reference to the ambient transaction call the static Current property of Transaction:
If there is no ambient transaction, Current will return null.
The ambient transaction object is stored in the thread local storage (TLS). As a result, when the thread winds its way across multiple objects and methods, all objects and methods can access their ambient transaction.
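In plain C#, the passage above boils down to a few lines (a minimal sketch using System.Transactions):

```csharp
using System;
using System.Transactions;

class AmbientDemo
{
    static void Main()
    {
        // Outside any scope there is no ambient transaction:
        Console.WriteLine(Transaction.Current == null);   // True

        using (var scope = new TransactionScope())
        {
            // Inside the scope, the ambient transaction sits in thread-local
            // storage, so any method called from here can see it:
            Console.WriteLine(Transaction.Current == null);   // False
            scope.Complete();
        }
    }
}
```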
Later it says:
The value of TransactionScopeOption lets you control whether the scope takes part in a transaction, and if so, whether it will join the ambient transaction or become the root scope of a new transaction.
A TransactionScope object has three options:
Join the ambient transaction.
Be a new scope root, that is, start a new transaction and have that transaction be the new ambient transaction inside its own scope.
Do not take part in a transaction at all.
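These three options correspond to the TransactionScopeOption values Required, RequiresNew, and Suppress (a sketch; the work inside each scope is omitted):

```csharp
using System.Transactions;

// 1) Join the ambient transaction (or start one if none exists) – the default:
using (var s1 = new TransactionScope(TransactionScopeOption.Required))
{
    // ... transactional work ...
    s1.Complete();
}

// 2) Be a new scope root: always start a new transaction, which becomes
//    the ambient transaction inside this scope:
using (var s2 = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    s2.Complete();
}

// 3) Do not take part in a transaction at all:
using (var s3 = new TransactionScope(TransactionScopeOption.Suppress))
{
    // Transaction.Current is null in here, even inside an outer scope.
    s3.Complete();
}
```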
This forum post tries to answer the question: “What exactly is an ambient transaction?”
There are 2 main kinds of transactions; connection transactions and ambient transactions. A connection transaction (such as SqlTransaction) is tied directly to the db connection (such as SqlConnection), which means that you have to keep passing the connection around – OK in some cases, but doesn’t allow “create/use/release” usage, and doesn’t allow cross-db work.
The alternative is an ambient transaction; new in .NET 2.0, the TransactionScope object (System.Transactions.dll) allows use over a range of operations (suitable providers will automatically enlist in the ambient transaction). This makes it easy to retro-fit into existing (non-transactional) code, and to talk to multiple providers (although DTC will get involved if you talk to more than one).
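The code the post is describing looks roughly like this (my reconstruction; the connection string and table names are placeholders):

```csharp
using System.Data.SqlClient;
using System.Transactions;

class TwoMethodsDemo
{
    const string connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

    static void Method1()
    {
        // Opens, uses, and disposes its own connection...
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();   // ...yet it silently enlists in the ambient transaction
            new SqlCommand("INSERT INTO T1 VALUES (1)", conn).ExecuteNonQuery();
        }
    }

    static void Method2()
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            new SqlCommand("INSERT INTO T2 VALUES (2)", conn).ExecuteNonQuery();
        }
    }

    static void Main()
    {
        using (var scope = new TransactionScope())
        {
            Method1();
            Method2();
            scope.Complete();   // if an exception skips this, both inserts roll back
        }
    }
}
```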
Note here that the two methods can handle their own connections (open/use/close/dispose), yet they will silently become part of the ambient transaction without us having to pass anything in.
If your code errors, Dispose() will be called without Complete(), so it will be rolled back. The expected nesting etc is supported, although you can’t roll-back an inner transaction yet complete the outer transaction: if anybody is unhappy, the transaction is aborted.
The other advantage of TransactionScope is that it isn’t tied just to databases; any transaction-aware provider can use it. WCF, for example. Or there are even some TransactionScope-compatible object models around (i.e. .NET classes with rollback capability – perhaps easier than a memento, although I’ve never used this approach myself).
Blog on the topic: Transaction.Current and Ambient Transactions by Florin Lazar:
An ambient transaction automatically covers a code block that needs transactional support, without any explicit mention of transactions. An ambient transaction is not tied just to a database; any transaction-aware provider can use it. TransactionScope implements an ambient transaction: if you look at code that uses TransactionScope, you will not find anything transaction-related passed to any method or set on any property. A code block is automatically attached to the transaction if it runs inside a TransactionScope. A WCF transaction is another example from a transaction-aware provider, and anyone can write a transaction-aware provider like the WCF implementation.
WCF-SQL and DTC Ambient Transactions in a cross domain scenario
The ambient transaction option ensures that the BizTalk adapter flows a transaction through to SQL Server and thus the SQL transaction will only commit when the message received by BizTalk is successfully written to the BizTalk message box database. This is of course crucial in a guaranteed delivery based solution where you can’t afford to lose any messages.
I found that all worked well when ambient transactions were turned off. However, when they were turned on, the receive location appeared to just hang, holding a lock on SQL resources (I tried to do a select on the table in question using SQL Server Management Studio, and it couldn’t return any values due to the locks in place), which wouldn’t be released until the host instance was restarted.
The above blog talks about using DTCPing and DTCTester to identify DTC issues, and fixing them by relaxing the security options in the DTC configuration.
Conclusions (or lack thereof):
So for now, I suggest studying that second web page to get an idea of what the topic is really about. If I had time, I would write some C# programs to test outside of BizTalk.
Error: Failed to load Group [xxxx:BizTalkMgmtDb] data providers (in the BizTalk Admin Console)
Solution: Verify SQL Server is available; if not, start it.
Screen shot below shows trying to connect to the SQL server; it might be on the same machine as BizTalk or it might be on another machine. The above error could also indicate a communication or maybe even a permission issue.
Open the SQL Config utility and start SQL server. The screen shot below shows how to start the SQL Configuration Manager:
The configuration utility shows which SQL services are started and stopped. Right click then select “Start” if it is not running:
There was a failure executing the receive pipeline: "Microsoft.BizTalk.DefaultPipelines.XMLReceive, Microsoft.BizTalk.DefaultPipelines, Version=3.0.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Source: "XML disassembler" Receive Port: "Receive_ExtractEFSR_Delta_FMA" URI: "mssql://.//QTAviation?InboundId=ExtractEFSR&ExtractTypeCode=Delta_FMA_Daily" Reason: Finding the document specification by message type "http://schemas.microsoft.com/Sql/2008/05/Polling/#Polling" failed. Verify the schema deployed properly.
Solution: Change from Polling to TypedPolling (on the Binding tab, set the InboundOperationType property).
1) I used the adapter wizard to generate a schema and custom bindings for a SQL stored proc.
2) I created an orchestration of my own, and then tried to call the stored proc with a two-way (request/response) send port.
The ERROR that I received:
<code>The adapter failed to transmit message going to send port "Send_Extract_EFSR_Aggregate_For_AA_Departure_WCF_Custom" with URL "mssql://mySqlServer//MyDatabaseName?". It will be retransmitted after the retry interval specified for this Send Port. Details:"Microsoft.ServiceModel.Channels.Common.UnsupportedOperationException: The action "<BtsActionMapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Operation Name="My_SProc_Name" Action="TypedProcedure/dbo/My_SProc_Name" />
</BtsActionMapping>" was not understood.</code>
The two SOLUTIONs:
1) When you create a send port in an orchestration, the default Operation name is “Operation_1”.
2) You have two choices to fix:
a) Change the “Operation Name” in the “Action” to “Operation_1”, like this:
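That is, make the Name attribute in the action mapping match the operation name on the orchestration port, something like:

```xml
<BtsActionMapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Operation Name="Operation_1" Action="TypedProcedure/dbo/My_SProc_Name" />
</BtsActionMapping>
```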
b) Go to the orchestration, and change the Operation on the send port from “Operation_1” to “My_SProc_Name” (i.e. whatever your stored proc name is).
Then rebuild and redeploy (or at least rebuild and re-GAC your orchestration).
NOTE: Be sure to restart the host instances of the Orchestration and SendPorts if you redeploy the orchestration.
I made some interesting discoveries today. We had a file from a client’s database administrator (DBA) that contained all the stored procs from a certain database. He was off today, so we were unsure how he created it. Using the open-source (free) editor NotePad++, we did a “find” on “Create Proc”, and it counted, say, 500 stored procs. However, we knew there were about 700 stored procs on that SQL Server, so where were the missing 200?
Another client gave me a list of stored procs he created using a simple T-SQL query. So I started writing a C# program to extract the names from the first DBA’s file and put them in a file, so I could use another tool, Total Commander, to do a side-by-side diff/compare and find the names of the missing 200 stored procs.
At first, I thought the DBA must have given me a human-created file, perhaps one that came from their source control system. The reason was that there were so many inconsistencies, for example:
CREATE PROCEDURE ABC …
CREATE procedure ABC …
create PROC ABC
PROC ABC …
and some had
CREATE PROCEDURE ABC AS
CREATE PROCEDURE ABC WITH RECOMPILE
CREATE	PROCEDURE ABC (with a tab instead of a space between the words “CREATE” and “PROCEDURE”)
CREATE PROC ABC @PARM1 VARCHAR(30), @PARM2 INT…
Eventually, I figured out that the file did have all 700 stored procs in it; the simple “Find/Count” command in NotePad++ just wasn’t accurate enough to do the job.
So what apparently happens is that SQL Server takes your exact “Create Proc” syntax and stores it in the metadata of the database. When you extract it again later, it comes back in the exact same format.
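Given that free-form storage, a literal “find” can never be reliable; the extraction needs a case- and whitespace-tolerant match. Here is a sketch of the kind of C# program I started writing (the file-path handling and schema-prefix logic are my assumptions):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class ExtractProcNames
{
    static void Main(string[] args)
    {
        // Path to the DBA's dump file passed on the command line (assumption):
        string script = File.ReadAllText(args[0]);

        // Tolerates any case, tabs or multiple spaces, PROC vs. PROCEDURE,
        // and an optional schema prefix such as "dbo.":
        var createProc = new Regex(
            @"\bCREATE\s+PROC(?:EDURE)?\s+(?:\[?\w+\]?\s*\.\s*)?\[?(?<name>\w+)\]?",
            RegexOptions.IgnoreCase);

        var names = createProc.Matches(script)
                              .Cast<Match>()
                              .Select(m => m.Groups["name"].Value)
                              .Distinct(StringComparer.OrdinalIgnoreCase)
                              .OrderBy(n => n, StringComparer.OrdinalIgnoreCase);

        foreach (var name in names)
            Console.WriteLine(name);   // one proc name per line, for diffing
    }
}
```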
This is the opposite of the way the IDMS mainframe database works (which I used for over 20 years). In IDMS, you can say, for example, “ADD REC ABC”, “ADD RECORD DEF”, or “ADD RECO XYZ”, or even split it across different lines. But later, when you say “DISPLAY REC ABC”, it will spit it back out in a formal, canonical syntax like this:
RECORD NAME IS ABC VERSION 1
My client uses a non-standard SQL port number.
So for example, when connecting in SSMS (SQL Server Management Studio), we type in “MyServer,1234” for the server name. This also works in connection strings (i.e. specify the server name, then a comma, then the port number).
I suppose if you were using a named instance, it would be “MyServer\MyInstance,1234”.
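The same comma syntax works in a .NET connection string (the server, database, and instance names here are placeholders):

```csharp
using System.Data.SqlClient;

// Default instance on a non-standard port – note the comma, as in SSMS:
var conn = new SqlConnection(
    "Data Source=MyServer,1234;Initial Catalog=MyDb;Integrated Security=True");

// Named instance on a non-standard port:
var conn2 = new SqlConnection(
    @"Data Source=MyServer\MyInstance,1234;Initial Catalog=MyDb;Integrated Security=True");
```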
But if you are using the WCF-SQL adapter, it’s a little obtuse: you must specify the port number in the “InstanceName” property field, even though you are not using a named instance:
Specifying a port number for SQL Server (in the WCF-SQL adapter for BizTalk)
Type a comma, followed by the port number, in the instance name.
Unfortunately, I wasted a few hours today. My current customer is still on BizTalk 2006 R2, but might be moving to BizTalk 2009 or 2010 soon. I was building out a Virtual PC with Windows 2008, and just assumed that I could run BizTalk 2009 with SQL 2008 R2. Well, you cannot! It appears BizTalk 2010 will be the first version of BizTalk to work with SQL 2008 R2.
You would normally think that something small like an R2 release wouldn’t make that much of a difference, but indeed it does.
You have to open the install log to see the error – and it’s rather ambiguous:
[13:05:10 Error BtsCfg] Failed to create Management database “BizTalkMgmtDb” on server “WhatEver”.
Error “0x80040E1D” occurred. Other blogs say this error can be caused by not having a case-insensitive collation on your SQL Server. Basically, it means BizTalk doesn’t like something about your SQL Server, but sadly, it doesn’t really tell you what it is that it doesn’t like.
When will Microsoft start making user-friendly errors? LOL.
And when will the numbering schemes for the products make sense? More Laughter!
Uncaught exception (see the ‘inner exception’ below) has suspended an instance of service ‘MyApp.Common.BizTalk.Artifacts.HandleCommonErrors(4e50ec54-338d-1887-a441-2dc31c830952)’.
The service instance will remain suspended until administratively resumed or terminated.
If resumed the instance will continue from its last persisted state and may re-throw the same unexpected exception.
Exception thrown from: segment -1, progress -1
Inner exception: Received unexpected message type '' does not match expected type 'http://MyApp.Common.BizTalk.Artifacts/Config#ConfigResp'.
Exception type: UnexpectedMessageTypeException
Target Site: Void VerifyMessage(Microsoft.XLANGs.Core.Envelope, System.String, Microsoft.XLANGs.Core.Context, Microsoft.XLANGs.Core.OperationInfo)
The following is a stack trace that identifies the location where the exception occurred:
Solution: Change the send port’s receive pipeline to XMLReceive instead of PassThru.
BizTalk could not parse the XML response without the XMLReceive pipeline.
Update on 08/31/2016:
This happened to me again. I had a solution with multiple projects, and every time I did a deploy, the SendPort was getting reset. I’m not 100% sure why, but I fixed it by going to the solution itself and turning off deployment of the common project. I must have had a pipeline or something in it that was modifying the SendPort each time I deployed.