Neudesic Blogs

Passion for Innovation
Creating a self-contained test suite for WCF services using Microsoft Visual Studio Coded UI


Modern enterprise-class web applications are designed for modularity. A Quality Analyst's focus is primarily on testing functionality through a black-box approach at the presentation tier. Applications with a modular web stack, however, consist of self-contained components whose lifecycles are independent of the other components in the system. Some of these components may not even have a presentation aspect and may exist only as web services. It thus becomes imperative for QA to test the structural aspects of the solution, e.g. asserting the web service methods in a service stack. The test suite for these structural aspects should itself be self-contained: it should require no environmental dependencies, and it should not leave inadvertent test data in the database while exercising CRUD functionality.

In this blog post we will discuss creating a self-contained test suite for WCF services using Microsoft Visual Studio Coded UI that can host, run, and assert service behavior without creating test data in your application database.


The following tools/software are needed to create the test suite:

Visual Studio 2010 Premium/Ultimate or

Visual Studio 2012 Premium/Ultimate



The following figure illustrates the approach to consuming the service for testing in a Coded UI test project built using Visual Studio 2012.


Coded UI Test Suite for WCF Services

Follow these steps to create the test suite project using Coded UI

·         Launch your instance of Visual Studio in administrative mode.

·         Select the Project Type as Test Project.

·         Select the Coded UI Test Project Template from the available template options.

·         Once the project is successfully created, add a reference to the service assembly against which you want to create your tests under the References section.



Self-Host the Service in the Test Context

Now that we have successfully created the test project and added the reference to the service library, the next step is to self-host the service. This is important because you don't want the suite to depend on a staging environment hosting the services. To achieve that, add the service-hosting code to the initialize section of your test class by creating an initialize method and decorating it with the TestInitialize attribute.
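Here's a minimal sketch of what that initializer might look like. The service type (OrderService), contract (IOrderService), and base address are placeholders for your own service library.

```csharp
private ServiceHost serviceHost;

// Self-host the service under test so the suite needs no staging environment.
[TestInitialize]
public void Setup()
{
    serviceHost = new ServiceHost(typeof(OrderService),
        new Uri("http://localhost:8080/OrderService"));

    // An HTTP binding makes the service reachable over the HTTP protocol.
    serviceHost.AddServiceEndpoint(typeof(IOrderService),
        new BasicHttpBinding(), string.Empty);

    serviceHost.Open();
}
```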


In this code, we use an instance of the ServiceHost class and pass it the type of the service to be hosted. If additional binding or service behavior information is needed, you can use the ServiceHost properties to define it. In our example we use an HTTP binding so the service is accessible over the HTTP protocol. Next we will create a sample CSV file to define the input data for the test.

Create the CSV File for input data

Below is a sample CSV file that acts as the input data source for the test method.
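The column names here are illustrative; use whatever fields your service method expects:

```
FirstName,LastName,Email,ExpectedResult
John,Doe,john.doe@example.com,Success
Jane,Smith,jane.smith@example.com,Success
```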


Next, there are two approaches to asserting the web methods. If your services support transaction control at the client level, you can run the tests without actually inserting test data into your application database. But first, let us look at a simple assert of a web method using Coded UI without worrying about test data entries in the database.

Approach to testing web methods without support for Transaction Scope

In our test example, we will test a database insert using the Coded UI test class: call the service methods through client instances, capture the response in a string, and then validate the output against the input values.
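A sketch of such a data-driven test follows; the client type (OrderServiceClient), CSV file name, and method names are assumptions, not part of the original sample.

```csharp
public TestContext TestContext { get; set; }

// Data-driven test fed by the CSV file defined earlier.
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
    "|DataDirectory|\\TestData.csv", "TestData#csv",
    DataAccessMethod.Sequential)]
[DeploymentItem("TestData.csv")]
[TestMethod]
public void CreateOrder_ReturnsExpectedResult()
{
    // Read input data from the CSV source.
    string firstName = TestContext.DataRow["FirstName"].ToString();
    string expected = TestContext.DataRow["ExpectedResult"].ToString();

    // Call the self-hosted service and capture the response.
    var client = new OrderServiceClient();
    string response = client.CreateOrder(firstName);

    // Assert the result against the input data.
    Assert.AreEqual(expected, response);
}
```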


Let’s now break down the steps performed in that section of code:

a)      Define the data source

b)      Read input data from the source file

c)      Call the external service method

d)      Assert the results

e)      Post the test results to a CSV file

Store test results for historical assessment

You can track how your tests perform over time by storing the results in a CSV file. The following code shows how to post the results for historical review:
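One simple way to do this is to append a row per test run; the file path and columns below are illustrative.

```csharp
using System;
using System.IO;

// Append one result row per test to a CSV file for historical review.
// The path and column layout are placeholders.
private static void LogResult(string testName, string outcome)
{
    string line = string.Format("{0},{1},{2}",
        DateTime.Now.ToString("u"), testName, outcome);
    File.AppendAllText(@"C:\TestResults\Results.csv",
        line + Environment.NewLine);
}
```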


Approach to testing web methods with support for Transaction Scope

The second approach lets you run your tests successfully without inserting test data into your application database. However, it requires the services to extend transaction support to clients. To make the service transaction-aware, certain attributes are required, and they must be supported across multiple layers within the service.

One of the key attributes that has to be defined for each API within the service is TransactionFlowOption, which must be set to either Mandatory or Allowed. Mandatory indicates that the transaction must be controlled by the client; Allowed means the transaction can be controlled by either the service or the client. This attribute has to be specified on each of the APIs that are intended to support transactions.

The following code illustrates our sample service contract.
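A contract along these lines might look as follows; the interface and operation names are placeholders:

```csharp
// Sample service contract with transaction flow allowed on the operation.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    string CreateOrder(string orderName);
}
```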


In the service class, also add the TransactionFlowOption attribute to each of the APIs. In addition to TransactionFlowOption, some additional attributes are required, as mentioned below.


These attributes define the behavior of the service method. Setting TransactionScopeRequired to true ensures that the operation executes within a transaction scope. Setting TransactionAutoComplete to true ensures that the transaction scope completes automatically on successful execution of the operation.

With these attributes in place, the operation definition in the service class looks like the following:
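A sketch of the implementation, reusing the placeholder names from the contract above:

```csharp
// The operation must run inside a transaction scope and auto-completes
// on success; the database insert enlists in the flowed transaction.
public class OrderService : IOrderService
{
    [OperationBehavior(TransactionScopeRequired = true,
        TransactionAutoComplete = true)]
    public string CreateOrder(string orderName)
    {
        // Perform the database insert here.
        return "Success";
    }
}
```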


The last part of supporting transactions in the service is ensuring that transaction flow is enabled in the Web.config file, by adding the transactionFlow="true" setting under the binding section as shown below.
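Note that basicHttpBinding does not support transaction flow, so a binding such as wsHttpBinding is needed; the binding name below is illustrative.

```xml
<bindings>
  <wsHttpBinding>
    <!-- transactionFlow="true" lets client transactions flow to the service -->
    <binding name="transactionalBinding" transactionFlow="true" />
  </wsHttpBinding>
</bindings>
```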



Modify the app.config in the test suite with the following details:
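The client side needs a matching binding with transaction flow enabled; the address and contract name here are the placeholders used earlier.

```xml
<system.serviceModel>
  <bindings>
    <wsHttpBinding>
      <binding name="transactionalBinding" transactionFlow="true" />
    </wsHttpBinding>
  </bindings>
  <client>
    <endpoint address="http://localhost:8080/OrderService"
              binding="wsHttpBinding"
              bindingConfiguration="transactionalBinding"
              contract="IOrderService" />
  </client>
</system.serviceModel>
```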



Next we need to modify the service hosting code inside the TestInitialize method to support this behavior as shown below:
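A sketch of the revised initializer, again using the placeholder service names; the key change is hosting on a WSHttpBinding with TransactionFlow enabled.

```csharp
[TestInitialize]
public void Setup()
{
    // WSHttpBinding with TransactionFlow so the client controls the transaction.
    var binding = new WSHttpBinding { TransactionFlow = true };

    serviceHost = new ServiceHost(typeof(OrderService),
        new Uri("http://localhost:8080/OrderService"));
    serviceHost.AddServiceEndpoint(typeof(IOrderService), binding, string.Empty);
    serviceHost.Open();
}
```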


Define a transaction scope, call the service methods through client instances, and capture the response.

The benefit of TransactionScope is that it lets the client decide whether or not to commit the data to the database. In the example above, the data is not inserted by default. However, if we do want to insert it, all we have to do is call the Complete method on the TransactionScope object.
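In outline, the test body looks something like this (TransactionScope comes from System.Transactions; the client type is a placeholder):

```csharp
// The service call runs inside a TransactionScope that is never completed,
// so the insert rolls back and no test data reaches the database.
using (var scope = new TransactionScope())
{
    var client = new OrderServiceClient();
    string response = client.CreateOrder("Test Order");
    Assert.AreEqual("Success", response);
    // scope.Complete();  // uncomment to actually commit the insert
}
```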


After a successful run of the tests, we get the following test results:


Apart from the asserts, we also post the test results to a CSV file for future reference. In that CSV file, the test results appear as follows:


Tear down the service host 

In the test cleanup, tear down the service host using the following code:
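A minimal cleanup method, assuming the serviceHost field from the earlier hosting sketch:

```csharp
// Close the self-hosted service when the test finishes.
[TestCleanup]
public void Cleanup()
{
    if (serviceHost != null && serviceHost.State == CommunicationState.Opened)
    {
        serviceHost.Close();
    }
}
```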




Benefits

·         Reduced dependency on the development and infrastructure teams

·         No environmental dependencies

·         With the transaction scope, no junk data is inserted into the database during unit or service-level testing



Limitations

·         The QA team requires C# coding skills.

·         It also requires knowledge of the service APIs.





Posted: Mar 24 2014, 20:52 by Uma.Malleshwari | Comments (0) RSS comment feed

Categories: Testing Services

How to Create 'Up and Running' Azure Service Bus Queues in Minutes

This is the fifth blog post in this series.  You can find the previous posts here:

·                    Windows Azure Service Bus Overview

·                    Service Bus Relay

·                    Service Bus Brokered Messaging Overview

·                    Windows Azure Service Bus Notification Hubs

Creating the Service Bus Queue:

A Windows Azure Service Bus Queue can be created in a variety of ways.  Some of these include using:

·         Windows Azure Service Bus Portal

·         Windows Azure SDK for Visual Studio

·         Service Bus Explorer Tool

·         .NET Code

Windows Azure Service Bus Portal

After logging into the portal, select the Service Bus option from the left menu icons. Select the namespace where a queue needs to be created and click the “New” button at the bottom of the page.


Windows Azure SDK for Visual Studio

You can download the Windows Azure SDK for Visual Studio here. Once downloaded, you can log into your Windows Azure service account or connect to a specific Service Bus namespace from Visual Studio. The connection string needed for this can be obtained by logging onto the Windows Azure portal, selecting Service Bus from the left menu, selecting the Service Bus namespace, then clicking on “Connection Information” at the bottom of the screen.

An ACS connection string can be copied from the screen below.



A new Windows Azure Service Bus connection can be made from Visual Studio using this connection string.  Once a connection is established, queues can be directly created from Visual Studio.


Service Bus Explorer Tool

Another option is to use the Service Bus Explorer, which can be downloaded for free here. Source code for the Service Bus Explorer can also be downloaded directly from Visual Studio 2012: go to New Project, select Online, then Samples, and search for Service Bus Explorer.


Once the tool is downloaded, a connection to Windows Azure Service Bus can be made using the ACS connection string.

Queues can be created from Service Bus Explorer by right-clicking on queues and choosing  the create queue option.

.NET Code

Queues can also be created from .NET code after installing a NuGet package from Visual Studio. Open any new or existing project, right-click References, select ‘Manage NuGet Packages’, and search for Windows Azure Service Bus. This downloads the client library needed to interact with Service Bus objects. The following two lines of code will create a queue in your namespace, provided the service namespace address and access credentials are given in the config file. Alternatively, a Service Bus connection string can be passed as a parameter to the CreateFromConnectionString method of the NamespaceManager class.

            Microsoft.ServiceBus.NamespaceManager nsMgr = NamespaceManager.Create();
            nsMgr.CreateQueue("registrationsQ");



Testing Message Flow to the Windows Azure Service Bus Queue:


This can be easily tested from Service Bus Explorer.

By clicking on Send Messages, a new window appears allowing you to enter a text message or to choose a file.


Test messages can also be sent and received from Visual Studio itself. After the Windows Azure Service Bus SDK is downloaded, open Server Explorer and you will find the option to connect to the Service Bus.


 After the connection is made, as described earlier, test messages can be sent and received from the service bus queues by expanding the service bus and right clicking on the queue name. 


This feature can also be used to check the message count and other properties of the queue by going to the Properties window in Visual Studio.



Messages can be sent and received with .NET code once the Windows Azure client library is installed via the NuGet package as described above.   To enable a connection to the service bus queue, go to the App.Config file (in a console project) and add the connection string for the queue. 

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!--Copy namespace connection string from the portal and place it in the connectionString value-->
    <add key="Microsoft.ServiceBus.ConnectionString" value="Endpoint=sb://;SharedAccessKeyName=yourQ;SharedAccessKey=xxxxxxxxx="/>
  </appSettings>
</configuration>



Sample code for a console application that sends and receives messages from the queue:

using System;
using System.Collections.Generic;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

namespace ConsoleApplication1
{
    class Program
    {
        private static QueueClient queueClient;
        private static string QueueName = "registrationsQ";

        static void Main(string[] args)
        {
            Console.WriteLine("Enter any key to send messages to service bus queue ...");
            Console.ReadKey();

            // Creates Queue client with your QueueName
            queueClient = QueueClient.Create(QueueName);
            SendMessagesToServiceBusQueue();

            Console.WriteLine("Enter any key to receive messages from service bus queue ...");
            Console.ReadKey();
            ReceiveMessagesFromServiceBusQueue();

            Console.WriteLine("\nEnter any key to exit.");
            Console.ReadKey();
        }

        private static void SendMessagesToServiceBusQueue()
        {
            List<BrokeredMessage> messageList = new List<BrokeredMessage>();

            messageList.Add(CreateTestMessage("1", "Person 1; Will Attend; 2 Guests"));
            messageList.Add(CreateTestMessage("2", "Person 2: Decline; 0"));
            messageList.Add(CreateTestMessage("3", "Person 3: May be; 4 Guests"));
            messageList.Add(CreateTestMessage("4", "Person 3: Will Attend; 1 Guests"));

            Console.WriteLine("\nSending messages to service bus queue...");

            foreach (BrokeredMessage message in messageList)
            {
                while (true)
                {
                    try
                    {
                        // Sends message to Queue
                        queueClient.Send(message);
                    }
                    catch (MessagingException e)
                    {
                        // Retry only if the failure is transient
                        if (!e.IsTransient)
                        {
                            throw;
                        }
                        continue;
                    }

                    Console.WriteLine(string.Format("Message has been sent: Id = {0}, Content = {1}", message.MessageId, message.GetBody<string>()));
                    break;
                }
            }
        }

        private static void ReceiveMessagesFromServiceBusQueue()
        {
            Console.WriteLine("\nReceiving message from service bus queue...");
            BrokeredMessage message = null;
            while (true)
            {
                try
                {
                    // receives messages from Queue
                    message = queueClient.Receive(TimeSpan.FromSeconds(15));
                    if (message != null)
                    {
                        Console.WriteLine(string.Format("Message received: Id = {0}, Body = {1}", message.MessageId, message.GetBody<string>()));
                        // Process message, like storing to DB, Enrich, redirect etc.,
                        message.Complete();
                    }
                    else
                    {
                        // no more messages in the queue
                        break;
                    }
                }
                catch (MessagingException e)
                {
                    // Retry only if the failure is transient
                    if (!e.IsTransient)
                    {
                        throw;
                    }
                }
            }
            queueClient.Close();
        }

        private static BrokeredMessage CreateTestMessage(string id, string messageContent)
        {
            BrokeredMessage message = new BrokeredMessage(messageContent);
            message.MessageId = id;
            return message;
        }
    }
}






Posted: Feb 14 2014, 03:52 by Shankar.Perumaalla | Comments (0) RSS comment feed


BizTalk Server & Dynamics CRM Online: Integration with On-premises Lines of Business Systems Using BizTalk Server 2013

This is the eighth blog post in this series.  You can find the previous posts here:

·         Innovations in Integration on the Microsoft Platform

·         Lowering Barriers to Innovation: BizTalk IaaS and Walkthrough

·         BizTalk Sever 2013 Enhancements

·         Windows Azure BizTalk Services EDI Overview Including Portal Configuration

·         Windows Azure BizTalk Services - Utilizing EAI Bridge

·         Windows Azure BizTalk Services: Integrating with on-premises LOB systems using

the BizTalk Adapter Services

·         ReST (Representational State Transfer) in BizTalk 2013



Companies that leverage Dynamics CRM Online want to take full advantage of its rich set of capabilities by integrating with existing lines of business systems that are available on premise. It is common nowadays for organizations to have other lines of business systems that are hosted on premise, behind the firewall.


In this initial blog, I will show one solution that is available for integrating CRM Online with BizTalk 2013 hosted on premise. In the first scenario, I will demonstrate how data can be sent from the BizTalk application to CRM Online. 


BizTalk to CRM Online Integration

     Fig. 1: Overall Architecture of main components

CRM Online exposes the Organization Service (IOrganizationService). This is a built-in CRM WCF service that is used to interact with CRM data and metadata.


I will be simulating creating a new account via BizTalk and show how to send it to CRM Online. In order to run this scenario, you will need the following:

  • CRM Online account
  • BizTalk Server 2013


Summary of steps (additional details and screenshots below):


1.       Navigate to the CRM Online organization instance to retrieve the OrganizationService URL.

2.       Use BizTalk WCF Service Consuming wizard to generate schema from the Organization Service web service.

3.       Create source schema (Account schema).

4.       Map source schema to the CRM online schema generated in step 2.

5.       Create orchestration (CreateAccount orchestration).

6.       Create custom binding and behavior; register them in BizTalk.

7.       Configure WCF custom send port.

8.       Configure BizTalk.

9.       Simulate sending message by dropping the account sample file in the predefined folder.

10.   Log into CRM Online and verify that the account has been created.




Using Organization Service to interact with CRM Online Walkthrough:


Step 1 - Log into CRM Online to retrieve Organization Service URL;

Go to Settings - > Customizations -> Developer Resources


Step 2 – Use BizTalk Publish Wizard to generate the schema and the binding file.

In the BizTalk solution, right-click the schema project, select Add and then Add Generated Items, and select Consume WCF Service Wizard. Navigate through the wizard and enter the URL of your Dynamics Organization Service endpoint (the URL discovered in Step 1). After clicking the Get button, click ‘Next’ and select Import.






Step 3 – Replace the generated Organization Service schema with the BizTalk schemas that are provided via CRM SDK (\SDK\Schemas\CRMBizTalkIntegration)


Below is the organizationservice_schemas_microsoft_com_xrm_2011_contracts_services schema. Note that the schema provides CRUD operations in addition to the other CRM-related operations. The nodes that are being used to create a new account are highlighted below. The schema provided by CRM is flexible; it exposes a key-value pair; the key maps to the CRM entity attribute (i.e. name, address1, city, or phone). LogicalName maps to the CRM entity name (i.e. account).


Step 4 – Create account schema.

Step 5 – Map account schema to the expected CRM schema format.




Step 6 – Create orchestration in BizTalk. The orchestration is receiving the message via rcvAccountPort; it then uses Construct Message shape to map the message received to the expected Organization Service schema and sends it to CRM. The response message is captured and written to the file system.  


Step 7 – Deploy BizTalk artifacts to BizTalk.


Step 8 – Configure OrganizationService send port.


Import the binding configuration (OrganizationService_Custom.BindingInfo) generated in Step 2. Rename the port from the auto-generated WcfSendPort_OrganizationService_CustomBinding_IOrganizationService_Custom to OrganizationService.


CRM Online uses Windows Live ID authentication. In order to configure the BizTalk send port to authenticate with the CRM Online instance, a custom WCF behavior has to be built and registered in the Global Assembly Cache (GAC) and in the configuration file (machine.config, or in the BizTalk WCF-Custom send handler).


Mikael Håkansson’s blog post provides a detailed explanation with the sample code for creating the custom WCF behavior.  I leveraged it for this demo. See below the detailed steps necessary to configure the OrganizationService send port.



OrganizationService send port.


OrganizationService Send Port – General tab view.

Note that no changes have been made to the custom binding generated in Step 2.



OrganizationService Send Port – Binding tab view.

Select customBinding as the Binding Type.


OrganizationService Send Port – Behavior tab view

I am using the liveIdAuthentication custom behavior, which requires the following three properties:

  • crmuri – the Organization Service URL discovered in Step 1
  • username – CRM username
  • password – CRM password


Step 9 – Start BizTalk application.

Drop the sample file that contains account-related information in the predefined incoming folder.


Sample file:
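The exact shape of the sample file depends on the Account schema created in Step 4; a hypothetical instance might look like:

```xml
<!-- Hypothetical account message; element names depend on your Account schema -->
<Account>
  <Name>Contoso Ltd</Name>
  <Phone>555-0100</Phone>
  <City>Seattle</City>
</Account>
```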

Verify in CRM Online that the account has been added.



Verify the response is received in the expected file system folder. 


The response contains the account id which was automatically generated when the new account entity was created in CRM.

Response Message:

The scenario presented above used the WCF SOAP Organization Service to integrate with the CRM Online application. Another option typically used for interacting with CRM Online is for BizTalk to leverage a proxy service, which in turn uses the CRM SDK API to authenticate and communicate with CRM. The crmsvcutil tool, part of the CRM SDK, can be leveraged to create early-bound CRM entity classes for use in the proxy service. The proxy service approach is recommended for more complex integration scenarios.


Useful Resources:




Windows Azure Service Bus Notification Hubs

This is the fourth blog post in this series.  You can find the previous posts here:

·              Windows Azure Service Bus Overview


·              Service Bus Relay


·              Service Bus Brokered Messaging Overview


In 1997, Wired magazine published a story titled “PUSH! Kiss your browser goodbye: The radical future of media beyond the Web”, predicting how push technology would take over internet-based web communication. While we have not seen that happen yet,  a large influx of mobile devices and the creation of billions of mobile apps has still led to a significant increase in the  use of push technology.

Push notifications are a very important part of multitasking scenarios in the mobile market. Push notification is becoming popular because most phone apps in suspended mode rely on push notification, as opposed to pull notification, to greatly extend battery life. With a pull mechanism, 70% of polling requests to the notification server result in no change, which adds strain to wireless network bandwidth and greatly reduces battery life.

Push notifications run within the phone operating system core and notify the shell of the phone about application data/content changes. As soon as a notification arrives, it is mapped to a shared channel through a push notification server.

Push notifications are based on a publish/subscribe pattern and channel and subscriber information is stored in registration databases. In case of data/content change, notifications get delivered based on subscribing condition.

Windows Azure Notification Hubs provide an easy-to-use infrastructure that enables you to send mobile push notifications from any backend (in the cloud or on-premises) to any mobile platform. It can deliver notifications to Windows 8 apps, Windows Phone, Android phones, and iPhones.

You can use Windows Azure Notification Hubs for both enterprise and consumer scenarios such as:

-        Sending breaking news notifications to millions with low latency (Notification Hubs powers Bing applications pre-installed on all Windows and Windows Phone devices)

-        Sending location-based coupons to user segments

-        Sending event notifications to users or groups for sports/finance/games applications

-        Notifying users of enterprise events like new messages/emails and sales leads

-        Sending one-time-passwords required for multi-factor authentication


Other uses of push-enabled web applications include market data distribution (stock tickers), online chat/messaging systems (webchat), auctions, online betting and gaming, sport results, monitoring consoles, and sensor network monitoring.


As you can see in the image below, there are four major entities that take part in the notification process:

-        Device app – App installed on the device which subscribes to notification services

-        Platform notification service – A proprietary protocol used to facilitate the notification upon occurrence of an event such as Apple’s, Microsoft’s or Windows’ notification services

-        Event source – Event capture module to invoke an event in case of data/content changes at the source

-        Service bus notification hub – An abstraction layer to manage push notification registrations that is responsible for pushing communication events to the platform notification service



The steps involved in delivering/broadcasting push notifications are:


-        The app registers with the platform notification service using the Service Bus notification SDK APIs

-        The Service Bus notification hub manages the device app's push notification registration in the registration database

-        When an event is raised, it is sent to the Service Bus notification hub

-        The Service Bus notification hub looks up the key/value pairs for subscriber applications in the registration database and sends subscriber and notification information to the platform notification services

-        The platform notification services deliver/broadcast the notification to the application/device

-        Any failure/information is reported back to the Service Bus notification hub
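On the backend side, sending a notification through a hub takes only a few lines with the Service Bus notifications client library; the connection string and hub name below are placeholders.

```csharp
// Minimal backend-side sketch: broadcast a Windows toast notification
// to every device registered with the hub.
NotificationHubClient hub = NotificationHubClient
    .CreateClientFromConnectionString("<connection string>", "myhub");

string toast = "<toast><visual><binding template=\"ToastText01\">" +
               "<text id=\"1\">Breaking news!</text></binding></visual></toast>";
hub.SendWindowsNativeNotificationAsync(toast).Wait();
```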


Please refer to the links below for further reading, samples, and demos:




Posted: Jan 30 2014, 03:47 by Vishnu.Tiwari | Comments (0) RSS comment feed

Categories: Azure

Neuron ESB and RabbitMQ

This is the third post in our Neuron-ESB series. You can find the first two posts here:


Since 2005, most of my time as a developer / consultant has been spent working with Microsoft BizTalk. Microsoft introduced the ESB Guidance (now referred to as the ESB Toolkit) to BizTalk 2006 R2. This introduced capabilities which allowed BizTalk to function as an Enterprise Service Bus. However, I have sometimes found it to be very difficult to get that ESB to do what I want.

I have spent much of the last two years working with an Enterprise Service Bus called Neuron ESB. While no product is perfect, I find that getting it to do what I want generally takes much less work than I was used to, and I'm very happy about that. When I started with Neuron, it supported 4 basic internal transports - Peer, TCP, MSMQ, and Named Pipes. Starting with Neuron ESB 3.0, it now has an additional transport, RabbitMQ. This allows durable, guaranteed, and reliable messaging. The same capabilities are available through MSMQ, but I'm pleased to have an alternative, particularly since RabbitMQ is open source.

The most difficult part of my experience with RabbitMQ was getting it installed properly. Apparently the installer for Neuron ESB can handle that for you automatically (if I had read the manual beforehand I would have known that). In any case, here's what I did to install it.

I first tried to download RabbitMQ and install it. It runs on top of Erlang, so I saw a warning to install Erlang first.

I downloaded Erlang from here. When I first tried to run the Erlang installer, it complained about the installed version of the Visual C++ Redistributable. Once I uninstalled the latest versions of it (both the 32- and 64-bit versions, although I'm not sure both were necessary), the Erlang install ran without complaint. After that, the RabbitMQ install also ran through without complaint.

In Neuron, every messaging application you want to create should start with a topic. The topic defines the transport, QoS (Quality of Service), and auditing parameters. So I created a topic and chose RabbitMQ as the transport. To do that, I clicked on Messaging at the bottom left of the window, which navigates to the Messaging area of the application. Then I clicked Topics on the left hand side, and then New on the upper right frame:


To test the topic that uses RabbitMQ - I needed to create a publisher to send messages using the topic, and a subscriber to receive messages. The publisher is an abstract representation of a real world entity (SQL, CRM, C# app) which will be the message source, and the subscriber is an abstract representation of the receiver of the message. Here's what the publisher looks like in Neuron. To create the publisher, I clicked Publishers on the left and then clicked New on the upper right frame. Then I clicked the Edit Subscriptions button to add the ability to publish to the topic.

Creating a subscriber is equally trivial, click on Subscribers, create a new one, and then Edit Subscriptions to allow the subscriber to receive messages with the desired topic. Both publishers and subscribers are Parties in Neuron, and it is possible for a Party to be both a publisher and a subscriber.

After creating the publisher and the subscriber, two queues have now been created in RabbitMQ. To see them, click Deployment on the lower left, followed by RabbitMQ on the left navigation.

Actually testing this scenario highlights one place where Neuron really shines - an easy to use test client is provided which will allow sending messages through the system without needing to connect to any of the resources that will be used in the live environment. We'll need two instances of the test client, one to represent each role (publisher and subscriber). Click the Tools menu all the way at the top of the Neuron ESB Explorer window, then choose 2 Test Clients. In each client, use the drop down menu to choose the Party Id for each test client, and then click Connect. One client should be the publisher party, and the other the subscriber. If you use the Send tab on the publisher client to type and send a message, it will show up on the Receive tab of the subscriber client.

To get more of a sense of how the queuing transport works, I recommend trying other scenarios. For example, try disconnecting the subscriber client temporarily, and then sending a message using the publisher. You will notice that the next time you connect the subscriber, it receives the queued message. Also, try connecting multiple clients as subscribers. When a message is sent, only one client will get the message. That's because all of the subscribers are pulling the message from the same queue, and once it's empty, no one else will see it.


Here are a few resources you can use:

White Paper: Implementing the Scatter Gather ESB Pattern with Neuron Pipelines



Posted: Jan 23 2014, 01:24 by Steve.Harclerode | Comments (0) RSS comment feed

Categories: Connected Systems | Neuron | Neuron ESB | Service Bus

Service Bus Brokered Messaging Overview

This is the third blog post in this series.  You can find the previous two posts here:

·        Windows Azure Service Bus Overview

·        Service Bus Relay

Service Bus Brokered Messaging


Service Bus brokered messaging provides enterprise-class asynchronous messaging capabilities hosted in Windows Azure datacenters. The brokered messaging scheme can also be thought of as asynchronous or “temporally decoupled” messaging. The messaging infrastructure reliably stores messages until the consuming party is ready to receive them. This allows the components of a distributed application to be disconnected.

The Service Bus provides Queues that can be used for point-to-point messaging and Topics & Subscriptions that can be used for publish-subscribe messaging. All data communicated through the Service Bus brokered messaging services are encapsulated in messages.

The Brokered Message represents the unit of communication between Service Bus clients and the serialized instances of the Brokered Message objects are transmitted through a wire when messaging clients communicate via queues and topics. The diagram below characterizes the Brokered Message structure.

Why Service Bus Brokered Messaging

Decoupled communication has many advantages: asynchronous messaging supports publish-subscribe, temporal decoupling, and load-balancing scenarios on the Service Bus messaging infrastructure. The Service Bus brokered messaging service provides a number of capabilities that make it an attractive choice for implementing enterprise-class messaging:

·        High Availability: The asynchronous, durable nature of the messaging features allows developers to build applications that provide high availability for their customers, and the hosting Azure datacenters ensure the high availability of the Service Bus brokered messaging services themselves.


·        Reliable Message Delivery: It is vital that messages sent from a source application be successfully delivered to the destination application, and the reliability of message delivery is often an important factor. BizTalk developers are very familiar with the concept of durable messaging, as all messages passing through a BizTalk message channel are persisted in a SQL database. For web service operations, WS-* standards such as WS-ReliableMessaging and WS-Reliability can be used to improve reliability.

Reliable messaging systems will typically use durable storage to store messages that are in-transit. For example, if the middleware application receiving and processing messages from a queue suffers an outage, the front-end sending application can still place messages on the queues and topics. The request message will be stored in a durable store and can be received and processed when the receiving application comes back online.

The Azure Service Bus queues, topics and subscriptions leverage the high availability data storage services provided by the Windows Azure platform to ensure the messages are persisted reliably. The enqueueing and dequeuing of messages using transactions can help to ensure reliability in messaging applications.

The following semantics can be used to describe message reliability:

  • At-Most-Once Delivery
  • At Least Once Delivery
  • Exactly Once Delivery
  • Ordered Delivery


  • Receive and Delete (at-most-once semantics) - The fastest mode; the message is lost if the receiver crashes or transmission fails.
  • Peek Lock (at-least-once semantics) - The message is locked when retrieved and reappears on the broker if it is not deleted within the lock timeout.
  • Session and Peek Lock - The message is locked along with all subsequent messages carrying the same session id, ensuring ordered delivery.

·        Low Latency: Azure Service Bus brokered messaging provides durable messaging with low latency. Because the messaging infrastructure is hosted in Azure datacenters, however, you will probably find that the latency is proportional to the physical distance between your applications and the datacenter hosting the namespace.

·        Scalability: Azure Service Bus brokered messaging services are hosted in Windows Azure datacenters and are able to scale automatically. Sudden increases in message load will be managed by the Azure hosting environment and additional resources allocated to handle the demand.

·        Features: Structured message content, a serializable message body, sophisticated message-receipt semantics, message sessions, message correlation, and separate send and listen endpoints.

·        Programming Models: REST interface, direct model (managed .NET API), and WCF model (NetMessagingBinding class).


Queues offer First In, First Out (FIFO) message delivery to one or more competing consumers; that is, messages are typically received and processed in the order in which they were added to the queue. When using queues, the components of a distributed application do not communicate directly with each other; instead they exchange messages via a queue, which acts as an intermediary. Although the Service Bus brokered messaging infrastructure is hosted in Windows Azure datacenters, the sending and receiving applications can be cloud-based or on-premises.


Here are examples of a few typical scenarios that can be implemented using Queues:

Load Leveling - The receiver receives and processes messages at its own pace and can never be overloaded. You can add receivers as the queue length grows and remove them when the queue length is low or zero, so queues absorb traffic spikes without stressing the backend. Please refer to the image below for an illustration:

Offline/Batch - Allows taking the receiver offline for servicing or other reasons; requests are buffered until the receiver is available again. Please refer to the image below for an illustration:


Load Balancing - Multiple receivers compete for messages on the same queue (or subscription), providing automatic load balancing of work across the receivers volunteering for jobs. Observing the queue length shows whether more receivers are required.

Fan In - Information flows into a single queue from a range of data sources. For multi-stage aggregation or roll-up, fan into a set of queues, perform the aggregation, roll-up, or reduction, and fan in further. Please refer to the image below for an illustration:


Below are the fundamental and advanced Queue capabilities:

Fundamental Capabilities:

  • First-In-First-Out (FIFO)
  • Delivery guarantees: Exactly Once Delivery, Ordered Delivery
  • Transaction support
  • Receive behavior: blocking with or without timeout
  • Receive modes: Peek & Lock, Receive & Delete
  • Exclusive access mode
  • Lease/lock duration: 60 seconds (default)
  • Lease/lock granularity: queue level
  • Batched receive
  • Batched send

Advanced Capabilities:

  • Scheduled delivery
  • Automatic dead-lettering, message deferral, and poison message support
  • In-place update
  • Server-side transaction log
  • Storage metrics
  • Purge queue function
  • Message groups
  • Duplicate detection
  • WCF integration
  • WF integration


Service Bus Queues provide a number of advanced features such as sessions, transactions, duplicate detection, automatic dead-lettering, and durable publish/subscribe capabilities. They may be a preferred choice for building a hybrid application.

Topics and Subscriptions

In contrast to queues, in which each message is consumed by a single consumer, topics and subscriptions provide a one-to-many form of communication in a “publish/subscribe” pattern. Because each published message is made available to every subscription registered with the topic, they are useful for scaling to very large numbers of recipients.




Topics are similar to the enqueueing end of a queue: applications send messages to topics in exactly the same way they send them to queues, for example by using a MessageSender object. Topics provide no interface for dequeuing messages; instead they have a collection of zero or more subscriptions that receive the messages sent to the topic based on filter rules, which can be set on a per-subscription basis. Topics also provide support for message expiration, dead-lettering, and duplicate detection.


Subscriptions are similar to the dequeuing end of a queue. Applications receive messages from subscriptions in much the same way they receive them from queues, by using a SubscriptionClient or MessageSession object. A topic can have zero or more subscriptions, but a subscription can belong to only one topic. Subscriptions provide no external interface for enqueueing messages; internally, messages are enqueued on the subscription inside the publish/subscribe channel based on the routing logic in the subscription filters.

Subscriptions provide support for message expiration, dead-lettering, and message sessions. Because topics support duplicate message detection, there is no need for it on subscriptions; any duplicate messages are detected by the topic before reaching the subscription. Subscriptions can use additional filters to restrict the messages they receive. Messages are sent to a topic in the same way they are sent to a queue, but they are not received from the topic directly; instead, they are received from subscriptions.


Message Distribution (Taps and Fan-Out) - Each receiver gets its own copy of each message. Subscriptions are independent and allow for many independent 'taps' into a message stream. Subscribers can filter down the messages by interest.

Filter Expressions:

Filter expressions determine which of the messages sent to the topic a subscription will receive. There are currently four types of filter that can be added to a subscription:

·        SqlFilter - Subscribes to messages based on a T-SQL-like expression evaluated against values in the message property dictionary

·        CorrelationFilter - Subscribes to messages based on the value of the CorrelationId property of the message

·        TrueFilter - Messages are always subscribed to

·        FalseFilter - Messages are never subscribed to


Filtering allows up to 2000 rules per subscription and each matched rule yields a message copy.

Correlation in Service Bus is used to set up reply paths between sender and receiver. The three correlation models available in Service Bus are message correlation (queues), subscription correlation (topics), and session correlation.

Message Correlation (Queues): The originator sets MessageId or CorrelationId, and the receiver copies it to the reply. The reply is sent to an originator-owned queue indicated by ReplyTo, and the originator receives and dispatches on CorrelationId. This model suits high-throughput scenarios where each unit of work completes quickly.

Subscription Correlation (Topics): The originator sets MessageId or CorrelationId, and the receiver copies it to the reply. The originator has a subscription on a shared reply topic with a rule filtering on the id, and receives and dispatches on CorrelationId.

Session Correlation: The originator sets ReplyToSessionId on the outbound session, the receiver sets SessionId for the reply session, and the originator filters on the known SessionId using a session receiver. This provides reliable, multiplexed duplex communication.

Brokered Messaging Capabilities In Action

The supported programming models for Queues and Topics in Service Bus are REST Interface, Direct Model (Managed .Net API) and WCF Model (NetMessagingBinding Class). Below are the fundamental classes we will use when working with Service Bus Queues.

NamespaceManager: Used in the management context and provides the ability to perform administrative functions such as creating, inspecting and deleting Queues on your namespace in the messaging fabric.

BrokeredMessage: Used in the runtime context and defines the message that you will send and consume over Service Bus Queues.

QueueDescription: Used in the admin context and describes a Service Bus Queue.

MessagingFactory: Used in the runtime context and, as its name implies, acts as a factory for creating the classes that allow you to send and receive messages on a Queue, such as QueueClient, MessageSender and MessageReceiver.

QueueClient: Used in the runtime context and allows you to send and receive messages over a Queue.

MessageSender and MessageReceiver: Used in the runtime context and allow you to send and receive messages over a Queue, Topic or both.

Below are the steps to create a Queue sample application to send and receive the messages.

·        Create a Service Bus Namespace in the Windows Azure portal.


·        After creating a service bus namespace, create a project in Visual Studio and install the Service Bus NuGet package in your application. The NuGet Visual Studio extension makes it easy to install and update libraries and tools in Visual Studio.


·        Copy the namespace connection information from the portal and set up a Service Bus connection string in your application. (You will have to use a different configuration mechanism for the connection string when deploying with cloud services.)


·        To explore or view the created Service Bus Queues and Topics, you can use the Visual Studio 2013/2012 Server Explorer or the Service Bus Explorer tool (available as a separate download).

·        Connect to the Service Bus using Visual Studio or the Explorer tool, providing your connection information.


·        In your application, use the code below to create a Queue in Service Bus. You can use the QueueExists method to check whether a queue with a specified name already exists.
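The original code for this step was a screenshot; a minimal sketch using the NamespaceManager class discussed above might look like this (the connection string and queue name are placeholders):

```csharp
using Microsoft.ServiceBus;

string connectionString = "<your Service Bus connection string>";
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// Create the queue only if it does not already exist.
if (!namespaceManager.QueueExists("OrdersQueue"))
{
    namespaceManager.CreateQueue("OrdersQueue");
}
```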


·        To send messages to the Queue, use the code below. While sending a message is very simple using the QueueClient, another option is the MessageSender class. MessageSender is a lower-level class that allows you to send messages without thinking about whether you are working with a Queue or a Topic.
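A sketch of the sending step, showing both the QueueClient and the lower-level MessageSender (connection string, queue name, and the OrderRangeId property are placeholders for your own values):

```csharp
using Microsoft.ServiceBus.Messaging;

string connectionString = "<your Service Bus connection string>";
var queueClient = QueueClient.CreateFromConnectionString(connectionString, "OrdersQueue");

// BrokeredMessage carries the payload plus a dictionary of custom properties.
var message = new BrokeredMessage("Order payload");
message.Properties["OrderRangeId"] = 201;
queueClient.Send(message);

// Alternative: MessageSender hides whether the destination is a Queue or a Topic.
var factory = MessagingFactory.CreateFromConnectionString(connectionString);
MessageSender sender = factory.CreateMessageSender("OrdersQueue");
sender.Send(new BrokeredMessage("Another order"));
```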



·        Run the sample. After sending the order message to the Queue, check it in the Service Bus Explorer; you will find the newly created Queue and message.



·        To receive a message from the orders Queue, use the code below. You can also use the MessageReceiver class to receive messages.
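A sketch of the receiving step, here using Peek Lock mode so a failed receiver does not lose the message (names are placeholders):

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

string connectionString = "<your Service Bus connection string>";
var queueClient = QueueClient.CreateFromConnectionString(
    connectionString, "OrdersQueue", ReceiveMode.PeekLock);

BrokeredMessage message = queueClient.Receive();
if (message != null)
{
    try
    {
        Console.WriteLine(message.GetBody<string>());
        message.Complete();   // Peek Lock: explicitly remove the message
    }
    catch (Exception)
    {
        message.Abandon();    // Unlock so the message reappears on the queue
    }
}
```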




We have discussed the NamespaceManager, BrokeredMessage, QueueDescription and QueueClient classes. Now, here are the new classes which we will use for Topics:


TopicDescription: Used in the management context and describes a Service Bus Topic.

TopicClient: Used in the runtime context and allows you to send messages to a Topic.

SubscriptionClient: Used in the runtime context and allows you to receive messages from a Subscription over a Topic.


Below are the steps to create a Topic sample application to send and receive the messages.



·        In your sample application, use the code below to create an OrdersTopic.
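The topic-creation code was a screenshot in the original post; a minimal sketch (connection string and topic name are placeholders):

```csharp
using Microsoft.ServiceBus;

string connectionString = "<your Service Bus connection string>";
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// Create the topic only if it does not already exist.
if (!namespaceManager.TopicExists("OrdersTopic"))
{
    namespaceManager.CreateTopic("OrdersTopic");
}
```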


·        Use the code snippets below to create Subscriptions and filters for the OrdersTopic. The “AllOrders” subscription uses the default filter, which is applied when no filter is specified at subscription creation; with it, all messages published to the topic are placed in the subscription's virtual queue.

·        The most flexible type of filter supported by subscriptions is the SqlFilter, which implements a subset of SQL-92. SQL filters operate on the properties of the messages that are published to the topic.

·        The example below also creates subscriptions named “HighRangeOrders” and “LowRangeOrders”, each with a SqlFilter that selects only messages whose custom OrderRangeId property is greater than 100 or less than 100, respectively.
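The subscription-creation step above can be sketched as follows; the subscription names and filter expressions mirror the sample described in the text (connection string is a placeholder):

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

string connectionString = "<your Service Bus connection string>";
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// "AllOrders": no filter supplied, so the default filter matches every message.
namespaceManager.CreateSubscription("OrdersTopic", "AllOrders");

// SqlFilter expressions are evaluated against the message property dictionary.
namespaceManager.CreateSubscription("OrdersTopic", "HighRangeOrders",
    new SqlFilter("OrderRangeId > 100"));
namespaceManager.CreateSubscription("OrdersTopic", "LowRangeOrders",
    new SqlFilter("OrderRangeId < 100"));
```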


·        Use the code snippet below to send messages to the OrdersTopic.
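Sending to a topic looks just like sending to a queue, except for the TopicClient; a sketch (connection string is a placeholder):

```csharp
using Microsoft.ServiceBus.Messaging;

string connectionString = "<your Service Bus connection string>";
var topicClient = TopicClient.CreateFromConnectionString(connectionString, "OrdersTopic");

var message = new BrokeredMessage("Order payload");
message.Properties["OrderRangeId"] = 201;   // evaluated by each subscription's filter
topicClient.Send(message);
```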


·        After running the sample, verify it in Service Bus Explorer. You will find the OrdersTopic and the subscriptions created.

·        To receive messages from subscriptions: from the client application, I submitted a message with OrderRangeId = 201. The message was sent to the OrdersTopic and delivered to the “AllOrders” and “HighRangeOrders” subscriptions, as you can see in the screenshot below.




·        From the client application, I then submitted a message with OrderRangeId = 91. The message was sent to the OrdersTopic and delivered to the “AllOrders” and “LowRangeOrders” subscriptions, as you can see in the screenshot below.
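The subscription receive step in the demo above can be sketched with the SubscriptionClient class (connection string, topic, and subscription names are placeholders):

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

string connectionString = "<your Service Bus connection string>";
var subscriptionClient = SubscriptionClient.CreateFromConnectionString(
    connectionString, "OrdersTopic", "HighRangeOrders", ReceiveMode.PeekLock);

// Wait up to five seconds for a message on the subscription.
BrokeredMessage message = subscriptionClient.Receive(TimeSpan.FromSeconds(5));
if (message != null)
{
    Console.WriteLine(message.GetBody<string>());
    message.Complete();
}
```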


Azure Service Bus Brokered Messaging Resources


·        Introducing Queues and Topics in Azure Service Bus

·        Best Practices for Leveraging Windows Azure Service Bus Brokered Messaging API

·        Service Bus Queues

·        Service Bus Topics

·        Windows Azure Service Bus

·        Windows Azure Service Bus Brokered Messaging

·        Windows Azure AppFabric Service Bus Brokered Messaging

·        Capacity Planning for Service Bus Queues and Topics


If you’d like to learn about our Connected Systems practice at Neudesic, please visit this page:


Posted: Jan 16 2014, 04:38 by Mahesh.Pesani

Categories: Connected Systems | Service Bus

Neuron-ESB: Reliable Messaging

This is the second post in our Neuron-ESB series. You can find the first post here:

Neuron-ESB is an Enterprise Service Bus middleware and integration platform built on Microsoft's .NET Framework and Windows Communication Foundation technology, used to connect and coordinate the interaction of a large number of diverse distributed applications. It is the result of almost a decade's worth of expertise solving tough integration problems for our customers.

Neuron-ESB was recently included in Gartner Research's list of “Cool Vendors in Application and Integration Platforms”.

To continue our blog series on Neuron-ESB, I am going to focus on one of the biggest challenges with Web-based protocols, reliable messaging. In this post, I will show how Neuron-ESB helps enterprises overcome messaging challenges by providing better:

  • Message Routing (content based, static)
  • Message Processing (transformation, enhancement/augmentation)
  • Message Reliability (guaranteed delivery)

A message is information that one party sends to, or receives from, the bus. A message contains both data as well as metadata. Respectively, these are referred to as the payload or header (context) properties of the message. Both are defined as parts of a Neuron-ESB message.

The payload of a Neuron-ESB message can be one of several formats:

  • Serializable .NET object – The ability to pass .NET objects provides flexibility for developers who prefer using objects over XML
  • Binary data – The ability to pass binary data provides flexibility for developers who have to share content that is neither serializable nor XML
  • Text data – Any type of string data
  • XML data – Any XML data. XSD schemas are not required to use XML as the payload

Message Routing

Neuron-ESB uses a “publish and subscribe” messaging infrastructure to abstract and dynamically route, process, and mediate messages between endpoints. Applications and/or services simply publish messages onto the bus, without regard to the type or number of consumers; similarly, they may subscribe to specific messages or groups of messages, without regard to the source of the messages. This frees the developers from spending time on messaging business logic and allows them to concentrate on the specific business logic associated with manipulating the message data.

A subscription is composed of a topic or sub topic, the permission in which a message can be sent to or from that topic (i.e. Send/Receive), and can be further optionally restricted using one or more conditions. A condition is either a pre-existing or ad-hoc filter expression (using predicates) that can include message header properties as well as message content. Sometimes this is referred to as “content-based routing”. Managing subscriptions for publishers and subscribers can be done through the Neuron-ESB Explorer. Users can quickly add topics, wildcards, reusable conditions (i.e. content-based routing) and ad-hoc conditions to define a subscription and activate it instantaneously.


Message Processing

Messages in Neuron-ESB are easily transformed within a process via XSLT or WCF service.

Transforming with XSLT

The Transform – XSLT process step can be used to apply an XSLT/XSL transform to the process message. Additionally, parameterized XSLT/XSL is supported. Parameters can be useful when the same value must be repeated many times within the document.


XSLT in the Transform – XSLT process step window.


The Transform – XSLT process step used in a VETO (Validate, Enrich, Transform, Operate) pattern.

Transforming via WCF

The Transform - Service Process calls a WCF service that implements the IESBTransformationService interface. The step modifies the message format or structure based on the implementation of the IESBTransformationService interface. The interface could be implemented to invoke a Microsoft BizTalk Server map, perform a code transform, call an existing transformation service, etc.

Message Reliability

One of the biggest challenges with Web-based protocols is reliable message delivery. There are many reasons that endpoints may not be available, and in every case the overall reliability of the application is only as good as the least reliable link. Because business applications often require high levels of delivery guarantee, Neuron-ESB provides a configurable policy object with a variety of options to help ensure message delivery.

Neuron-ESB provides a variety of methods to improve the reliability of messaging. By associating a policy with a service or adapter endpoint, the endpoint will execute the policy upon message failure, and can be configured to retry message transmission, log the message to the disk, or republish the message on a new topic. Policies can dramatically increase the robustness of message delivery and error handling.

An adapter policy configured to retry a failed event 5 times, then log to the Neuron audit database.

You Have the Power
Neuron-ESB provides the power and flexibility you expect from an integration platform in regards to message handling. Whether it be dynamically routing your messages based on the content, transforming your data to fit the needs of your client’s strict requirements, or ensuring your important data is delivered to its intended recipient, Neuron-ESB gives you the power to make it happen.

Neuron-ESB 3.0 Available for Download
The Neuron-ESB 3.0 trial edition can be downloaded by clicking on the link below:

Messaging with Neuron-ESB Resources:

If you’d like to learn about our Connected Systems practice at Neudesic, please visit this page:

Posted: Jan 09 2014, 02:47 by Jereme.Downs

Categories: Connected Systems | Neuron | Neuron ESB | Service Bus

ReST (Representational State Transfer) in BizTalk 2013

This is the seventh blog post in this series.  You can find the first six posts here:


Recently, there has been a lot of emphasis on architectural patterns that allow systems to communicate easily without having to jump hurdles or cross boundaries. Service providers are trying to break ground by integrating seamlessly with many different types of consumers, and vice versa. We have been hearing about the ReST architectural pattern for some time, and it is pretty much a must-have if you are a service provider. Let's look at how BizTalk embraces the ReST architecture and its advantages.


So, what is ReST?

An architectural style is a set of constraints that can be applied when building something. And an architectural style of software is something that describes the features that can be used to guide the implementation of a software system. ReST (sometimes spelled "REST") stands for Representational State Transfer. It relies on a stateless, client-server, cacheable communications protocol -- and in virtually all cases, the HTTP protocol is used.

ReST is an architectural style that can be used to build software in which clients (user agents) make requests of services (endpoints); it explicitly builds on the client-server architectural style. The idea is that, rather than using complex mechanisms such as CORBA, RPC or SOAP to connect machines, simple HTTP is used to make calls between them. In many ways, the World Wide Web itself, based on HTTP, can be viewed as a ReST-based architecture. ReSTful applications use HTTP requests to post data (create and/or update), read data (e.g., make queries), and delete data. Thus, ReST uses HTTP for all four CRUD (Create/Read/Update/Delete) operations, and is a lightweight alternative to mechanisms like RPC (Remote Procedure Calls) and Web Services (SOAP, WSDL, et al.).


Why do we need ReST?

There are 3 main factors that affect the decision to implement ReST:

Lightweight Protocol - When using ReST, we can leverage the HTTP protocol to send and receive messages in simple XML, JSON, or even plain text. HTTP is not mandatory (TCP is also viable), but ReST has found the most acceptance in the HTTP consumer arena. There is no SOAP involved, which means ReST uses normal HTTP methods instead of a big XML format describing everything. For example, to obtain a resource you use HTTP GET; to put a resource on the server, you use HTTP PUT; and to delete a resource on the server, you use HTTP DELETE.
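As a quick illustration of the verb-per-operation idea in .NET (the resource URI below is a placeholder):

```csharp
using System.IO;
using System.Net;

// The same resource URI is addressed with different HTTP verbs.
var request = (HttpWebRequest)WebRequest.Create("http://example.com/api/orders/42");
request.Method = "GET";   // read; "PUT" updates, "POST" creates, "DELETE" removes

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string body = reader.ReadToEnd();   // plain XML, JSON, or text payload
}
```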

Highly Supported – Since ReST-based services can be available over HTTP using simple HTTP verbs, most of the web browsers can consume services using AJAX or jQuery. Also, in the mobile space where limited bandwidths are a constraint, this lightweight architecture is very tempting and can be easily consumed using jQuery mobile.

Growing adoption – Mobile industry adoption has created high demand for this architectural style. Many ReST-based cloud services are widely used and adopted today; examples include Twitter, Salesforce, and Amazon.


BizTalk 2013 now supports ReST-based solutions

Microsoft offers ReST support in BizTalk Server 2013 through the WCF-WebHttp adapter. This has been anticipated for a long time by BizTalk developers. A majority of the services in the cloud are ReST-based, and when exposing a public API over the internet to handle CRUD operations on data, ReST is now generally considered the best option. Twitter, Salesforce and Amazon all offer ReST APIs for their services, and there are many more. With the increase of mobile devices and lightweight rich (Ajax) web applications over the years, adoption of ReST has grown.

With BizTalk Server 2013 there is an adapter, WCF-WebHttp, which supports ReST. The adapter gives you the ability to send messages to ReSTful services through the WCF-WebHttp send adapter, and to receive messages from a ReSTful service with a receive location. Through the send adapter you can issue a GET request, the most widely used operation when interacting with a ReSTful service; besides GET, there are DELETE, POST and PUT.

The Microsoft BizTalk product group made a good decision by supporting ReST in the new BizTalk Server 2013 release. Most of the services currently in the cloud are based on ReST, which leads to more integration solutions requiring communication with ReSTful services; such solutions can now utilize BizTalk Server as one of their components. Within an enterprise, BizTalk Server can be the heart of the messaging infrastructure, supporting many protocols.

The WCF-WebHttp adapter supports both synchronous and asynchronous communication, and also has great support for URI parameters, mapping verbs to URIs for greater routing flexibility, and more.


How does the BizTalk ReST Adapter benefit us?

·        Mobile and Web 2.0 Integration:

The image below shows how existing systems in the organization are already connected to BizTalk using its rich adapter pack. Within an enterprise BizTalk can be the heart of the messaging infrastructure. With the advent of the new ReST adapter, all this data in downstream systems can now be exposed to upstream systems or end consumers through mobile devices like phones and tablets.

·        Consuming third party services:

Leverage ReST-based services to enhance your business assets. A majority of the services in the cloud are ReST-based. For the reasons mentioned above, ReST is the generally accepted architecture for exposing an API over the internet to handle CRUD operations on data. The image below depicts that vision: how this capability can be used to enrich and consume data on one hand, and how to enhance business assets by leveraging cloud services on the other. A large number of vendors support SOAP-based services like address verification, basic customer information, and credit verification, but many vendors are now moving towards ReST-based services because they are low cost and can be consumed by any sort of consumer.



·        Leverage BizTalk Artifacts:

BizTalk could be the heart of the messaging infrastructure in your organization, and the ability to utilize all the functionality inherent in BizTalk, like maps and orchestration workflows, goes a long way in transforming how organizations consume and expose data. As seen in the picture below, when messages flow through BizTalk we can add message tracking and reporting on the calls and who is making them. The orchestration engine is a business process engine for correlating between services that run across different time spans. We can now leverage all of this capability when consuming ReST-based services.



Ok, since we have some context of how ReST works in BizTalk, I am going to demonstrate how we can use BizTalk to consume data from a public indices service.

The Scenario:

1.    A client application takes a date as user input and sends a request to fetch gas indices for that date to a WCF service exposed by BizTalk.

2.    BizTalk captures the request and uses messaging to send the request out to a public gas indices website hosted by ICE.

3.    The result is returned through BizTalk to the calling client application.

4.    The diagram below shows a visual representation of what I have in mind.


Implementation summary in short:

1.    Uses the WCF publishing wizard to publish a schema as a WCF service.

2.    Uses both the WCF-WsHttp and WCF-WebHttp Adapter.

3.    Strips out the message body either with a custom BizTalk pipeline component or by setting a property on the outbound message on the Send Port.

4.    Uses credentials with WCF Security to connect to the ICE indices website

5.    Uses variable mapping to dynamically append the requested datetime from the client to the URI to fetch appropriate data. We can achieve that by:

·        Mapping URI parameters to promoted properties.

·        Defining custom variable in your URL on the send port.

Solution View:


My solution consists of:

1.    The Client - A simple C# Windows Application project.

2.    A PipelineComponents project – Used to strip the body of the HTTP GET request. This functionality is now built into the newer WCF adapters, but it is a technique that is good to know.

3.    A Pipelines project - This contains a Send Pipeline that uses the above pipeline component.

4.    A Schemas project – This contains the Schema for the Request and Response of the WCF Service and also a Property Schema for promoting the date variable that will be eventually mapped in the outgoing request.

5.    Finally a Maps project – This contains two maps. A map for the outgoing request to the ICE service and another to map the incoming response from the ICE service.


Step 1 - Let’s build the Request-Response schema and expose it as a WCF Service.

a.    In the Schemas Project, create a new schema and name it “ICEIndices.xsd”. Set the properties for the schema as shown in the diagram.



b.   Rename the Record element to “Request”. Since I want the client to pass in a single parameter of date, add an Element node called “AppendDateTime” of type string (I am using a generic type here but we could also use dateTime) under Request.

c.    Add a new Property Schema to the schemas project. In this schema add an element called “APPENDDATETIME” of type string.




d.   Return to the ICEIndices.xsd file and promote the AppendDateTime element as a “Property Field”. Use the newly created property schema and APPENDDATETIME property.



e.    Now we need to create the Response schema for the data that we will receive from the ICE indices website. This is the URL for the website that I will be using: (Please note: this site also requires credentials to retrieve data.)

As you can see the data is a comma separated flat file. I downloaded the flat file and used the Generate Schemas Wizard to generate the Response flat file schema. After you have successfully created it, it should look like this:





f.     Now we need to add this Response schema as a reference to the original schema “ICEIndices.xsd”. You can do that by adding the Response schema to the “Imports” property collection.



g.   Now, create a new Child Record called “Record” and underneath it add a new Child Record element. In the “Data Structure Type” property dropdown, add the reference to the Response node.



h.   Now the ICEIndices schema is ready to be published as a WCF service to be consumed by any client application. Use the BizTalk WCF Publishing Wizard as shown:




You have now successfully published a WCF service that will run through BizTalk. After this is completed you should be able to see a receive location in the specified BizTalk Application like so: 



Step 2 – Build two maps: one for the outgoing request to the ICE Index website, and another for the incoming response from the website.

a.    Create a map and name it Request_to_ICEIndicesRequest.btm. The source and destination schemas will be the same.




b.    Create a map and name it Response_to_ICEIndices.btm. The source will be the stand-alone Response.xsd and the destination will be the “Record” node of the ICEIndices.xsd schema.




Step 3 – Create the Pipeline components to remove the Request body before calling the HTTP GET method.

a.    Add a class called RemoveBody.cs that implements the Microsoft.BizTalk.Component.Interop.IComponent, IBaseComponent, IPersistPropertyBag, IComponentUI interfaces. In the “Execute” method add the following code snippet to remove the body of the message:
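The original snippet was an image; a minimal sketch of the Execute method (assuming the standard IBaseMessage/IPipelineContext types from Microsoft.BizTalk.Message.Interop and Microsoft.BizTalk.Component.Interop) is:

```csharp
// Requires references to Microsoft.BizTalk.Pipeline.dll.
// Execute is the IComponent entry point invoked for each message
// that passes through the pipeline stage.
public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    if (pInMsg != null && pInMsg.BodyPart != null)
    {
        // Replace the body part's stream with an empty one so the
        // outgoing GET request carries no payload.
        pInMsg.BodyPart.Data = new System.IO.MemoryStream();
    }
    return pInMsg;
}
```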



Step 4 – Create two pipelines.

a.    Create a Send pipeline and name it SendPipelineRemoveBody.btp. Add the RemoveBody component in the Pre-Assemble stage of the pipeline. This will remove the request body upon calling the ICE service.




b.   Add a receive pipeline, name it ICEIndices.btp, and add a Flat File disassembler to the Disassemble stage of the pipeline. Since the response we get from the ICE service is a CSV file, this disassembler is needed to convert it to XML.





c.    The flat file disassembler will use the Response schema as the Document Schema.




Step 5 – Build a simple Client application

a.    Add a Windows Form to the Windows application project and name it FetchICEIndices.cs. It has a simple UI that accepts a date input, a couple of labels to display the result, and a button to invoke the process.




b.   Now we need the service that we created using the WCF Publishing Wizard; add it as a Service Reference to the project.




c.    In the Fetch button click event, I simply use the generated client proxy class of the service to invoke BizTalk, which in turn calls the ICE website, gets the data, and sends it back to the client UI.
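A rough sketch of that click handler follows. All names here are hypothetical: the proxy class, operation, and request type will match whatever the WCF Publishing Wizard generated for your service, and the control names depend on your form.

```csharp
// Hypothetical proxy and control names; substitute the ones the
// WCF Publishing Wizard and your form designer actually generated.
private void btnFetch_Click(object sender, EventArgs e)
{
    var client = new ICEIndicesServiceClient();
    try
    {
        // Send the date the user typed; BizTalk promotes it,
        // appends it to the outgoing URL, and returns the
        // mapped response from the ICE website.
        var response = client.GetICEIndices(new Request
        {
            AppendDateTime = txtDate.Text
        });
        lblResult.Text = string.Format("{0} records returned",
            response.Record.Length);
        client.Close();
    }
    catch
    {
        client.Abort();
        throw;
    }
}
```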




Step 6 – Deploy and configure BizTalk artifacts.

a.    Deploy the Schemas, Maps and Pipeline projects to BizTalk.





b.   The WCF-WSHttp receive location for the request from the client was already created in Step 1.h.






c.    Now the only thing left is to create the Send Port that calls the ICE Indices website and receives the response.

Name the send port SP_GetICEIndices. Set the adapter type to WCF-WebHttp. Set the Send Pipeline and Receive Pipeline to the custom pipelines we created.






d.   Configure the Adapter properties as below:





Notice that in the Address (URI) property I have given only part of the whole URL (usually the part that remains constant).


e.    In the HTTP method and URL mapping, specify the operation method as GET and the URL as the remainder of the website URL. The {appenddatetime} token is the dynamic part of the request URL and will be mapped to the date the user entered in the client application.
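The mapping is entered as a small XML fragment in the adapter configuration. The path below is purely illustrative; use the variable part of the actual ICE report URL:

```xml
<BtsHttpUrlMapping>
  <!-- GET with the {appenddatetime} token as the dynamic part;
       the path itself is a placeholder, not the real ICE URL. -->
  <Operation Method="GET" Url="/reports/indices?date={appenddatetime}" />
</BtsHttpUrlMapping>
```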






f.     Use the Variable Mapping section and click the Edit button to bind the variable part of the URL to an incoming data value. Set the variable name to the name given within the curly brackets in the operation above. The property name and namespace are the ones we specified in the Property Schema.
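The resulting variable mapping configuration looks like the fragment below (the property namespace is a placeholder; use the namespace of your property schema):

```xml
<BtsVariablePropertyMapping>
  <!-- Variable name matches the {appenddatetime} token in the
       URL mapping; property name/namespace come from the
       Property Schema created in Step 1.c. -->
  <Variable Name="appenddatetime"
            PropertyName="APPENDDATETIME"
            PropertyNamespace="https://ICEIndices.Schemas.PropertySchema" />
</BtsVariablePropertyMapping>
```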






Since this is a promoted property in the incoming request, this value is fetched and mapped to form the request URL.


g.   The only other property to set is the credentials to invoke the ICE website. Set that in the Security tab:






h.   An alternative to using the RemoveBody pipeline component is to use the Suppress Body for Verbs property in the Messages tab. This works just as well and avoids all the work of creating a pipeline component class and a send pipeline just to remove the body for the GET request.







i.     In the Outbound Maps of the Send Port, set the map to Request_to_ICEIndicesRequest.btm.





j.     In the Inbound Maps, set the inbound map to Response_to_ICEIndices.btm.






k.    In the Filters property, set the appropriate property to subscribe on.






l.     Since this is a pure messaging solution, the request message received by BizTalk through its WCF service receive location will be subscribed by this Send Port.
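As an example, assuming the receive port name generated by the publishing wizard in Step 1.h, the send port's subscription filter might look like:

```
BTS.ReceivePortName == WcfReceivePort_ICEIndices
```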


Step 7 – Start the BizTalk Application.


Step 8 – Test

On the Windows client application, I gave an input of Dec 2 2013 and clicked the Fetch button. It returned 109 records and displayed the first one in the set:




That’s it. This was a quick and easy way to try out the new WCF-WebHttp adapter in BizTalk Server 2013 to invoke REST services and consume the results. Have fun trying this out and let me know if there are hiccups!



Webinar: Attention Developers! Now You Can REST Easy with BizTalk Server 2013


If you’d like to learn about our Connected Systems practice at Neudesic, please visit this page:


Posted: Dec 19 2013, 06:29 by Shyju.Samuel

Categories: BizTalk | REST | WebHttp Adapter




