MEX and WSDL are two different schemes for telling potential clients about the structure of your service. So we can choose to make the service contracts public either in the "metadata exchange" (MEX) format or in the "Web Service Description Language" (WSDL), the latter being accessible via HTTP(S). There are two ways to expose metadata: one with a MEX endpoint and one with HTTP GET. The first will expose metadata using WS-MetadataExchange, and the second will use WSDL.
Above quoted from: Saurabh Somani’s blog – Http Get vs Mex End Point
One forum post said that "WSDL" is the "legacy method" and "MEX" is the "new improved version". But a follow-up post corrected him, noting that WSDL is still used by many service platforms, and that MEX and WSDL deliver the same output. (Ido's blog above actually compares the results from the two.)
MEX is designed to support non-HTTP protocols. A client can add a reference in Visual Studio only if httpGetEnabled/httpsGetEnabled is enabled, or if you have defined a MEX endpoint in the configuration of your service. The best practice is to expose metadata in the dev environment but not in production. MEX is more configurable, while WSDL is more interoperable with older clients and with non-.NET clients that work with WSDLs.
To disable MEX and expose your service via WSDL: 1) do not specify a MEX endpoint, and 2) specify httpGetEnabled="true".
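As a sketch (the behavior name here is illustrative), the relevant WCF config for the WSDL-only route is the serviceMetadata behavior; the MEX alternative is an explicit endpoint with the IMetadataExchange contract:

```xml
<!-- illustrative sketch: expose WSDL via HTTP GET, with no MEX endpoint -->
<behaviors>
  <serviceBehaviors>
    <behavior name="WsdlOnlyBehavior">
      <serviceMetadata httpGetEnabled="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>

<!-- the MEX alternative: add an explicit mex endpoint to the service instead -->
<!-- <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> -->
```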
BizTalk and Error “Could not find a base address that matches scheme http for the endpoint with binding MetadataExchangeHttpBinding”
The MSDN forum posts weren't coming up in Google; I only found them when I posted in the MSDN forum for BizTalk. So I decided to post them here to give them some "google love". None of them actually helped me solve my issue, but when you are getting this error you probably need to read all of them to see which one fits your case.
Solution for that issue: "This is working fine after I changed my configuration file. I forgot to include the binding configuration, changed mexHttpBinding to mexHttpsBinding, and changed httpGetEnabled to false."
Are you a GACer? Or do you redeploy every time? Do you like to GAC in BizTalk?
I suppose it depends a lot on the place you work: the attitudes, the procedures in place, what security you have, and who does the deploys.
The last place I worked, I inherited what I affectionately referred to as “the monolithic app”, i.e. a BizTalk application that was made up of about 24 different projects in one solution. So redeploying the whole thing was often a pain, and sometimes there were pieces of it that I didn’t want to deploy (perhaps multiple development threads going on in that same big solution).
If you are a beginner, let me explain. With BizTalk, you can export and import the MSI (with or without the bindings). But suppose you need to change, for instance, just a literal value in a map or an orchestration. Why redeploy the whole enchilada? You can rebuild that specific .dll, copy it to each of the BizTalk servers in the group, and run GacUtil against it.
Then you just restart the host instances that use that .dll (or all of them, if you are unsure), and you are good to go.
The downside of GACing
I just recently discovered that if you export an MSI from that machine, the exported MSI will use the original .DLL referenced by that application. Even though run-time picks up the newly GAC'ed .DLL, the exported MSI may not. You can get around this by going to "Resources", adding the .DLL, and clicking "Overwrite". As a matter of fact, you can GAC it from there too! I generally use BTDF (BizTalk Deployment Framework), so creating MSIs that way was not of interest to me. At least, that was true until I got to my latest client: they don't use BTDF, and exported MSIs are their modus operandi.
When does the GAC Trick not work?
If you add new entities that have to be registered in the BizTalkMgmt database, then you will need to import the MSI. That means new schemas, new maps, new orchestrations, new pipelines, etc. Or if you promote a field that was not promoted before, the database needs to know about that.
GAC is good for redeploying code when you have made logic changes, changes of literals, etc. Adding new functoids to an existing map, or even new shapes to an orchestration, is no big deal; they are not registered in the database. But you cannot add any new send/receive ports to your orchestration if you plan to GAC.
A Must! Keep an Audit Trail of Your GACs
It’s important that no man GAC’s unto himself. If you have other people administering your BizTalk environment, they need to know what you deployed (or GAC’ed) and when.
It's common to keep a directory such as C:\Deploys. Under that, keep a folder named for each BizTalk application.
Some Other GAC Tricks
I often just put GacUtil and its related files in the same folder as my deploy. There are now too many versions of GacUtil floating around, and I hate wasting time looking for the right one.
Here are the files required: 1) all the .dll's that you want to GAC, and 2) the three files GacUtil.exe, GacUtil.exe.config, and GacUtlRC.dll:
I often create a file called GacAll.cmd and put it in the same directory. It simply runs GacUtil on each of the .dll's there; something like this:
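A hypothetical sketch of such a GacAll.cmd (it assumes GacUtil.exe sits in the same folder, per the point above):

```bat
@echo off
rem GacAll.cmd - sketch: GAC every .dll sitting in this folder
for %%f in (*.dll) do (
    echo GACing %%f
    GacUtil.exe /i "%%f" /f
)
pause
```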
Every once in a while, I need to write to the Application EventLog from a BizTalk Orchestration, although I usually have my own SQL trace that I write to.
On almost all the overloads of WriteEntry, the first parameter is "Source". I suggest you try some existing source, such as "XLANG/s" or "BizTalk Server". You can create your own source, but then, if you are not running in Admin mode and you are the first to write with that source, you will die with an error. (See the separate section on that further below.)
I used the above in a “Catch Exception” within a scope.
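A minimal sketch of what that Catch Exception code might contain (the message text and the `ex` exception name are illustrative; "BizTalk Server" is an existing source, so no admin rights are needed to create one):

```csharp
// hedged sketch: write a warning to the Application EventLog using an
// existing source, from a Catch Exception block's Expression shape
System.Diagnostics.EventLog.WriteEntry(
    "BizTalk Server",                                // existing event source
    "MyOrchestration caught: " + ex.Message,         // ex = the caught exception
    System.Diagnostics.EventLogEntryType.Warning);   // entry type
```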
You can see the other overloads here, but the one above with three parms seems to be the most common.
By the way, I was typing too fast and actually put in System.Diagnostics.TraceEventType.Warning. TraceEventType shares some levels with EventLogEntryType (Error, Warning, Information) but has more options. Needless to say, I had the red squiggly line under some code, and of course it wouldn't build because of the signature mismatch.
Creating your Own Event Source
This must be done in advance by an administrator. There is a windows command that will do it for you, either run it from the command prompt, or put it in a .cmd/.bat file:
eventcreate /ID 1 /L APPLICATION /T INFORMATION /SO MYEVENTSOURCE /D "My first log"
or if you prefer PowerShell, do something like this:
Write-EventLog -LogName Application -Source "MyNewSource" -Message "My first log" -EventId 0 -EntryType information
One trick to remember is that if you do this on your test system, you will have to remember to do it when you deploy up to the next higher environment (e.g. QA, UAT, or Production).
1. The CD shouldn’t be needed, but if you are running Powershell in a different directory in ISE, it can be helpful.
2. Get-ChildItem returns all files in the current directory with the mask *.xml
3. Then “ForEach” matching file, do what is in the curly brackets of the Foreach-Object loop. If you know what you are doing you can pipe without the Foreach, but I like to break it down, so I can add the debug Write-Host statements and run a simulation run (by commenting out the actual Rename-Item statement) before the final rename.
4. The -replace is the keyword that tells us that we are doing a RegEx replace. Here I'm changing a date like this: YYYYMMDDHHMMSS_xxxx to YYYYMMDDHHMM. (This was a requirement of the customer. The downside is that you could get duplicate filenames on the rename if more than one file was created in the same minute, but that was not our issue.)
I'm using the () to capture the YYYYMMDDHHMM string, and then `$1 substitutes that string back into the new filename. The grave-accent mark is the escape character that tells PowerShell I don't want to insert a variable by the name $1 (which would have the value null or empty-string, because I don't have such a variable). $1 is used only with the PowerShell -replace; it's not a real PowerShell variable.
5. Write-Host shows the new filename.
6. Do the actual rename. Just comment out this line with # at the beginning to do a simulation run and verify the names.
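Putting steps 1–6 together, the script looks roughly like this (a sketch; the directory and the exact filename pattern are assumptions):

```powershell
# hypothetical reconstruction of the rename script described above
cd C:\InFiles                      # step 1: optional, helpful in the ISE
Get-ChildItem *.xml |              # step 2: all .xml files in this folder
ForEach-Object {                   # step 3: loop over each matching file
    # step 4: keep the first 12 digits (YYYYMMDDHHMM) and drop the rest;
    # the grave accent escapes $1 so PowerShell hands it to -replace as-is
    $newName = $_.Name -replace '^(\d{12})\d+_.*$', "`$1.xml"
    Write-Host "$($_.Name) -> $newName"            # step 5: show the new name
    Rename-Item $_.FullName $newName               # step 6: comment out to simulate
}
```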
It is not hard to initialize an empty group element in a BizTalk map. After ten years of BizTalk, I don't remember having to do it before, and at first I thought I would have to create an XSLT functoid. That works too, but here is the easy way.
I had a customer who had an element called Condition with an "array" of int elements under it. For some reason, even when not a single value of "int" was being set, they still wanted the parent element "Condition" to be created.
I almost always use the “String Concatenate” functoid to initialize a new field. My fear of using it was that I thought I had to put a value in the “Value” column. You see below there is a yellow warning sign to the left, indicating no value has been set.
While it’s true that you do need a value, the value can be an empty string. Just click in the “Value” column, then click back on the “Name” column, and the icon changes to a white check box in a green circle.
You probably already know that you can right click the Map in solution explorer, then select “Validate Map” to see the XSL generated. Here is what was generated for the above case.
The value typed into the GUI, or in this specific case the field I just clicked on, is highlighted in yellow. It is set to $var:v1, which is put in as the value of the element.
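The interesting part of that generated XSL looks roughly like this (a simplified sketch; the real output also wires the functoid through a generated helper function, and the variable name v1 is whatever the map compiler assigns):

```xml
<!-- simplified sketch of the generated XSL: an empty-string variable
     whose value is emitted inside the always-created Condition element -->
<xsl:variable name="var:v1" select="&quot;&quot;" />
<Condition>
  <xsl:value-of select="$var:v1" />
</Condition>
```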
A client C# program was calling an orchestration published as a web service. I was turning on Basic Authentication in BizTalk and IIS. The C# client program kept getting: "The requested service [service name] could not be activated. See the server's diagnostic trace logs for more information."
NOTE: This may not be your solution. We had DNS pointing a subdomain.domain.com to one of two servers. I simply had the servers crossed. I think they were having a load balancing issue, and had DNS just pointing to one of the two. Other users were on Machine A, so I logged on to Machine B, thinking that’s where the DNS was pointing. Later I confirmed that DNS was in fact pointing to Machine A.
//*[local-name()='Link']/*[@LinkID='13'] (XPath returned 0 items)
//*[local-name()='Links']/[local-name()='Links' and @LinkID='13'] (Unexpected token "[" in path expression, or Expression must evaluate to a node-set)
//[local-name()='Links']//*[local-name()='Link'][@LinkID='13'] (Unexpected token "[" in path expression, or Expression must evaluate to a node-set) – this is the same as the one that works below, except // instead of //*
What works are these:
//*[local-name()='Link' and @LinkID='13']
Do you need to “take inventory” of your BizTalk artifacts (assets)? I.e. Get a simple count or high-level summary for your BizTalk Artifacts?
For years, we’ve had tools like BizTalk Documenter (free on CodePlex), but that just works for one application. What if you walk into a client site and you want to know in high-level, executive summary terms, what is the size of the application?
The code below is a quick SQL-only solution. Some people shudder at the idea of running SQL against the BizTalk databases, but it's fast and easy. Similar code could be written in C#, PowerShell, or other tools using ObjectExplorer, WMI, or PowerShell plug-ins for BizTalk.
set transaction isolation level read uncommitted --allows dirty reads but avoids locking
select
APP.nvcName as AppName,
(Select count(*) from bts_receiveport AS RP
where RP.nApplicationID = APP.nID
) as RcvPortCount,
(Select count(*) from adm_ReceiveLocation AS RL
INNER JOIN bts_receiveport AS RP2 ON RL.ReceivePortId = RP2.nID
where RP2.nApplicationID = APP.nID
) as RcvLocCount,
(Select count(*) from bts_Orchestration AS ORCH
INNER JOIN bts_assembly AS ASSEM ON ASSEM.nID = ORCH.nAssemblyID
where ASSEM.nApplicationID = APP.nID
) as OrchCount,
(Select count(*) from bts_Pipeline AS PIPE
INNER JOIN bts_assembly AS ASSEM ON ASSEM.nID = PIPE.nAssemblyID
where ASSEM.nApplicationID = APP.nID
) as PipelineCount,
(Select count(*) from bt_documentSpec as BTSCHEMA
INNER JOIN bts_assembly AS ASSEM ON ASSEM.nID = BTSCHEMA.AssemblyID
where ASSEM.nApplicationID = APP.nID
) as SchemaCount,
(Select count(*) from bt_mapSpec as MAP
INNER JOIN bts_assembly AS ASSEM ON ASSEM.nID = MAP.AssemblyID
where ASSEM.nApplicationID = APP.nID
) as MapCount
--(Select count(*) from bts_Component AS PIPECOMP
-- INNER JOIN bts_assembly AS ASSEM ON ASSEM.nID = PIPECOMP.nAssemblyID
-- where ASSEM.nApplicationID = APP.nID
-- ) as PipelineComponentCount
FROM bts_application AS APP
where App.IsSystem = 0 and App.nvcName not in ('BizTalk EDI Application','BizTalk Global','rosettanet')
order by App.nvcName
Ideas for Improvements
Run the same query on each of your environments (Dev, QA, Stage, Prod, etc.) and store the results in a SQL holding table. Then run a final query to merge the results, show which apps are in which environment, and flag when the number of artifacts differs. This helps you compare one environment to another, for example Test to Prod.
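A sketch of that final merge query, assuming the per-environment results were loaded into a hypothetical holding table ArtifactCounts(EnvName, AppName, OrchCount, ...):

```sql
-- hypothetical holding table populated from each environment's run
select AppName,
       max(case when EnvName = 'QA'   then OrchCount end) as QA_Orchs,
       max(case when EnvName = 'Prod' then OrchCount end) as Prod_Orchs
from ArtifactCounts
group by AppName
order by AppName   -- differing counts (or NULLs) flag environment drift
```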
This is how I found a SQL injection hacker who was changing all the product titles in my custom shopping cart using a diabolical injection technique. I knew the problem was happening because I saw HTML inserted into the product titles and product descriptions in my database and website. My application was written in "Classic ASP" and VBScript, before .NET came out; since I have hundreds of scripts, I never rewrote them for .NET.
Protecting my Data and Setting the Trap
First of all, I added a T-SQL trigger to disallow multi-record updates and to write to a log table whenever the trigger fires.
ALTER TRIGGER [dbo].[myTable_multiUpdatePrevention]
ON [dbo].[Product]   -- the table name matches the log message below
FOR UPDATE AS
BEGIN
DECLARE @Count int
DECLARE @MaxAllowedRowToChange int = 5
Select @Count = count(*) from Inserted
IF @Count > @MaxAllowedRowToChange
BEGIN
   -- Log the issue to the LogHacks table
   declare @Message varchar(500)
   set @Message = 'Trigger: Possible Hack: Multiple update attempt: Table=Product rowCount=' + convert(varchar(10),@Count)
   insert into LogHacks (logmessage, logdatetime, logUserid) Values (@Message, getdate(), SYSTEM_USER)
   RAISERROR('Cannot update more than x rows of Product table without disabling trigger', 16, 1)
   ROLLBACK TRANSACTION   -- actually undo the mass update
END
END
Install Auditing Software
I installed “Auto Audit” from Codeplex, see my prior blog on this great tool.
Look at IIS Log
Based on the above, I monitored the LogHacks table and could now see when the attack was happening; I'm also now protected and don't have to keep restoring my data. I was able to capture the userid, which narrowed it down to one of my Classic ASP pages.
I then went to my IIS log and looked at the pages retrieved or posted nearest to that time. I found this:
At first, I was being narrow-minded and thought injection could only happen in web page forms.
I hadn’t thought about query strings on the URL itself!
Just to find out how this amazingly clever piece of code works, I did the following:
Captured SQL Generated from my ASP Program
Below, you can see that they took my "like" clause and hijacked it by closing it with a single quote, then adding their own statements.
Unscramble the Hex Code
To interpret the above mess, I wrote this little T-SQL script (note: I copied the entire hex string, but below I put … so as not to repeat it again here):
declare @showvar varchar(8000)
set @showvar = cast(0x73...36f72 as varchar(8000))
print @showvar
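If you'd rather not run the payload through SQL Server at all, the same unscrambling works in a couple of lines of Python (the hex string below is a harmless stand-in for the real 0x73...36f72 blob):

```python
# decode a hex-encoded SQL injection payload; this sample stand-in
# decodes to the first line of the attacker's script shown below
payload_hex = "73657420616e73695f7761726e696e6773206f6666"
decoded = bytes.fromhex(payload_hex).decode("ascii")
print(decoded)  # set ansi_warnings off
```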
Looking at the Hacker’s Actual Code
When I ran the above code, the actual SQL is shown (line breaks added to make it readable):
set ansi_warnings off
DECLARE @T VARCHAR(255),@C VARCHAR(255)
DECLARE Table_Cursor CURSOR FOR
select c.TABLE_NAME,c.COLUMN_NAME from INFORMATION_SCHEMA.columns c, INFORMATION_SCHEMA.tables t
where c.DATA_TYPE in ('nvarchar','varchar','ntext','text')
and c.CHARACTER_MAXIMUM_LENGTH>80 and t.table_name=c.table_name
and t.table_type='BASE TABLE'
OPEN Table_Cursor
FETCH NEXT FROM Table_Cursor INTO @T,@C
WHILE (@@FETCH_STATUS = 0)
BEGIN
EXEC('UPDATE ['+@T+'] SET ['+@C+'] = ...')   -- their spam-link SET clause omitted here
FETCH NEXT FROM Table_Cursor INTO @T,@C
END
CLOSE Table_Cursor
DEALLOCATE Table_Cursor
I must admit this is some clever code, and it illustrates what hackers will do to sell generic drugs for "you know what". When I went to their site, my malware plug-in told me it wasn't safe, so I didn't actually load their pages.
The code is basically looking for all long text fields, anything over 80 characters in length. It does that by searching INFORMATION_SCHEMA.tables and INFORMATION_SCHEMA.columns, peeking at all my tables and columns. The goal of the attack is apparently just to append their HTML code and let your site keep working, so that for many days they get the SEO benefit of the extra link-backs, until you discover the issue. However, in some cases the HTML did not close properly and made a mess of some of my web pages.
Finding my injection bug
I found that I was stringing together a complex where clause depending on various factors. So the moral of the story is: never take anything directly from the URL! Request("language") pulls a language variable from the URL, so they basically set language to close off the LIKE wildcard, then put their own code after that…
optionalSelection = " and prod_title like '%" & Request("language") & "%'"
Fixing my injection bug
optionalSelection = " and prod_title like '%" & fixSqlInsertion(Request("language")) & "%'"
The above is not super-elegant; it results in a "500 Server Error" to the caller, but I don't really care what the hacker sees. If he sees the 500, maybe he will quit visiting my site. This was the fastest fix, not the most effective. The better technique would probably be to create a stored procedure and pass parms to it. Another technique I could use is to limit the URL to a maximum length, to avoid long chunks of code like the one above.
The code above just removes the characters that a SQL injection would need to do damage. To be valid T-SQL, you need the @ sign for variables, and you need quote marks and equal signs. Doing a DELETE or a DROP doesn't require all that punctuation, so I prohibit those words and substitute a fake word that will cause the SQL to fail.
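For reference, a hypothetical sketch of what such a fixSqlInsertion function could look like in Classic ASP/VBScript (the exact character list and the fake substitute words are illustrative, not my actual code):

```vbscript
' hedged sketch: strip the punctuation an injection needs, and neuter
' keywords like DELETE/DROP that need no punctuation at all
Function fixSqlInsertion(inputText)
    Dim s
    s = inputText
    s = Replace(s, "'", "")   ' single quotes close off string literals
    s = Replace(s, "@", "")   ' needed for T-SQL variables
    s = Replace(s, "=", "")   ' needed for assignments/comparisons
    s = Replace(s, ";", "")   ' statement separator
    ' substitute fake words so injected DELETE/DROP statements fail
    s = Replace(s, "delete", "delXXXete", 1, -1, vbTextCompare)
    s = Replace(s, "drop", "drXXXop", 1, -1, vbTextCompare)
    fixSqlInsertion = s
End Function
```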
There are two other ideas I could make use of to improve my code more:
1) Parameterized queries
2) Just make sure a SQL select statement doesn’t contain the words “Update”, “Delete”, “Drop”, “Alter”, etc… A select should just be a select.
SSOSettingsFileImport.exe /userGroupName:"BizTalk Application Users" /adminGroupName:"BizTalk Server Administrators" exited with code 1
In this case, you probably didn’t change the values of the SsoAppUserGroup and SsoAppAdminGroup in the default SettingsFileGenerator.xml, and your site has different user groups for SSO.
Could not enlist orchestration...
This can happen when you forget to change the default PortBindingsMaster.xml file. The setup gives you a valid PortBindingsMaster.xml file, but it is practically empty. You need to export your bindings either directly to this file, or massage your bindings file and add the substitution variables that will be set from the SettingsFileGenerator.xml file.
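For example (an illustrative fragment, not the full schema of a bindings file), a massaged bindings file replaces literal values with ${...} placeholders that BTDF resolves from SettingsFileGenerator.xml:

```xml
<!-- illustrative: the literal receive folder becomes a settings variable -->
<ReceiveLocation Name="MyApp_ReceiveOrders">
  <Address>${RcvFolder}\*.xml</Address>
</ReceiveLocation>
```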