Neudesic Blogs

Passion for Innovation

Claims-based Federated Security Using Windows Azure Access Control Service and Third Party Identity Providers

When selling innovative products to third-party customers, it is imperative that end users be able to use their existing credentials to gain access to your product.  Users do not want to maintain more than one set of credentials to access their enterprise applications.  Products should federate with a user’s existing directory stores and authenticate and authorize seamlessly.  Imagine how frustrating it is for users to enter credentials each time they access a different application throughout their work day.

There are some important terms to know when creating a product that will federate with directory stores such as Active Directory and OpenLDAP, saving users the hassle of maintaining multiple credentials.

Important Terms

Federation: Federation refers to establishing trust across multiple security domains (or realms), usually belonging to different organizations, so that access to those domains can be granted through an authentication process that all participating organizations trust.

Subject: The entity that needs to be authenticated.  This can be a user who wants to log in to your application, or a piece of code that needs to access external components such as web services.

Claims: Claims are statements about the subject.  For example, the subject’s first name, last name, role, user ID, birth date, department, etc.  In fact, virtually any attribute pertaining to the subject can be a claim.

Relying Party (RP): The application or web service that needs to delegate subject authentication logic to external components.

Token: Claims travel inside a token, usually a secured token issued by Active Directory Federation Services (defined below) or a Security Token Service (also defined below).  A token can be a username/password combination or even a simple string, such as a bearer token in OAuth 2.0; in this context, however, tokens are typically either XML-based (like a SAML token) or binary (like an X.509 certificate).

Security Token Service (STS): A software service responsible for issuing security tokens.  The STS should be highly available to all traffic from the Relying Party and should also be scalable.

Active Directory Federation Services (AD FS): A software component developed by Microsoft that can be installed on Windows Server to provide single sign-on access to systems and applications.  AD FS can act as a Federated Service with the ability to federate multiple Identity Providers, and can also act as an on-premise STS.  AD FS comes bundled with Windows Server 2012.

Windows Azure Access Control Service (ACS): Cloud-based service that provides an easy way of authenticating and authorizing users to gain access to web applications and services, acting as an STS in the cloud.  ACS provides out of the box federation with popular social web identities like Google, Yahoo, Facebook, and Windows Live ID.  Developers do not need to write any code to integrate these web identities into their security solutions. ACS also acts as a Federation Service and has the ability to federate with WS-Federation based identity providers.

Third Party Identity Providers (IdPs): User directory services, or identity stores, that AD FS or ACS consults when authenticating users and, upon successful authentication, from which they gather claims.  These claims become part of a secured token that is passed on to the Relying Party, which can make authorization decisions based upon them.  Active Directory, OpenLDAP, Novell, SQL Server or any other relational database, as well as web identities like Google, Facebook, Yahoo!, and Windows Live ID, can all serve as IdPs.  These IdPs are usually located across boundaries from the Relying Party, which relies on them to authenticate users; the IdPs hold the authentication logic.


So, now that you know the important terms, how do you get your application to federate with a user’s existing directory stores and authenticate and authorize seamlessly?  This is accomplished by establishing trust between your application and the STS, delegating your authentication to third-party IdPs.

Assume two companies, A and B, want to conduct business.  Company A wants Company B’s users to access its application.  While there are more complex and difficult ways for Company A to authenticate Company B’s users, the easiest solution is for Company A to use Company B’s authentication instead of having its own separate authentication; in essence, Company A trusts Company B’s authentication of its users.

This is where WS-Trust comes into play.  WS-Trust introduces the concept of a Security Token Service (STS), a web service responsible for generating claims that are trusted by consumers.  If Company A and Company B establish a WS-Trust relationship in which Company A trusts Company B’s STS, Company B’s users can carry tokens issued by their STS and present those tokens to Company A, which trusts the STS and grants access to Company B’s users.

WS-Trust defines a request message, the Request Security Token (RST), which the client sends to the STS.  The STS, in turn, replies with a Request Security Token Response (RSTR) that holds the security token used to grant the user access.  In short, WS-Trust describes the protocol for requesting tokens via RST and issuing tokens via RSTR.
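The RST/RSTR exchange can be sketched in code.  The following is an illustrative Python model of the message flow, not the actual WS-Trust XML or the WIF API; all class and field names are invented for the example, and a shared-secret HMAC stands in for the certificate-based signature used in practice.

```python
import hashlib
import hmac

# Stands in for the STS signing certificate (hypothetical key).
STS_SIGNING_KEY = b"sts-secret-key"

class RequestSecurityToken:
    """RST: what the client sends to the STS."""
    def __init__(self, applies_to, subject):
        self.applies_to = applies_to  # the RP the token is intended for
        self.subject = subject        # the authenticated subject

class RequestSecurityTokenResponse:
    """RSTR: the STS reply carrying the issued, signed token."""
    def __init__(self, token, signature):
        self.token = token
        self.signature = signature

def sts_issue(rst):
    # The STS builds a token holding claims about the subject and signs it.
    token = f"subject={rst.subject};audience={rst.applies_to};role=employee"
    signature = hmac.new(STS_SIGNING_KEY, token.encode(), hashlib.sha256).hexdigest()
    return RequestSecurityTokenResponse(token, signature)

def rp_validate(rstr):
    # The RP trusts the STS, so it only needs to verify the signature.
    expected = hmac.new(STS_SIGNING_KEY, rstr.token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, rstr.signature)

rstr = sts_issue(RequestSecurityToken("https://rp.companyA.example", "alice@companyB.example"))
assert rp_validate(rstr)
```

A tampered token fails validation, which is exactly the property Company A relies on when it accepts tokens from Company B’s users.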


WS-Federation builds on WS-Trust and simplifies the creation of federated scenarios by defining a common infrastructure for achieving federated identity for both web services (called active clients) and web browsers (called passive clients).

WS-Federation dictates that organizations participating in federation should publish communication and security requirements in Federation Metadata.  This metadata adds federation-specific communication requirements on top of other security-related policies.  For example, token types and single sign out requirements are defined in the Federation Metadata.

WS-Federation does not mandate a specific token format, although SAML tokens are heavily used.
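For passive clients, WS-Federation sign-in boils down to a browser redirect carrying a handful of well-known query parameters (wa, wtrealm, wreply).  The sketch below builds such a URL in Python; the endpoint and site URLs are invented for the example, though the parameter names come from the WS-Federation protocol.

```python
from urllib.parse import urlencode

def build_signin_url(sts_endpoint, realm, reply_url):
    # wa, wtrealm, and wreply are standard WS-Federation query parameters.
    params = {
        "wa": "wsignin1.0",   # action: passive sign-in
        "wtrealm": realm,     # the RP's realm (whom the token is for)
        "wreply": reply_url,  # where the STS should send the token back
    }
    return f"{sts_endpoint}?{urlencode(params)}"

# Hypothetical ACS namespace and RP addresses:
url = build_signin_url(
    "https://contoso.accesscontrol.windows.net/v2/wsfederation",
    "https://rp.example.com/",
    "https://rp.example.com/federation-callback",
)
```

The RP simply issues an HTTP 302 to this URL; everything after that is driven by the STS and the browser.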

Claims-Based Architecture Diagram

The diagram below shows a typical example of how to create your claims-based architecture so that your application can authenticate using a Third Party Identity Provider.



In this architecture:


·         The parties involved include the RP, the STS, the IdPs (AD FS and Thinktecture backed by SQL Server), security tokens carrying claims, and HTTPS transport security.
·         Using the WS-Trust protocol, the RP trusts ACS/STS, and ACS in turn has trust established with each of the involved identity providers.
·         The WS-Federation protocol facilitates setting up the federation between the RP, the STS, and the WS-Federation IdPs.  Each party exposes a Federation Metadata XML file, which helps consumers set up policies for interacting with that particular IdP, such as message security and token signing and encryption using certificates.
·         Other IdPs, such as Google, Facebook, and Yahoo!, are not WS-Federation parties: Google and Yahoo! are based on the OpenID 2.0 protocol, while Facebook is based on the OAuth protocol using the Graph API.  ACS works with these IdPs out of the box.
·         The topology above is a typical example of passive federation, in which HTTP 302 browser redirects happen seamlessly.
·         This architecture is very scalable: you can add more identity providers to ACS without affecting the RP at all.
The passive sign-in flow proceeds as follows:

1. The user requests the RP web page.
2. STS authentication kicks in at the RP server, which sends the browser a redirect to STS/ACS.
3. ACS receives the request and checks whether a specific IdP was requested for authentication.  If the RP requested a specific IdP, ACS redirects the browser to that IdP’s login URL.
4. If no specific IdP was requested, ACS/STS presents the home realm discovery page, which lists the available IdPs; the user picks the IdP to authenticate against, and ACS redirects the browser to that IdP’s login URL.
5. The IdP presents its login page to collect the user ID and password.  If forms authentication is configured at the IdP, a standard ASP.NET web forms login page is shown; if Windows Authentication is enabled, the user is greeted with the standard Windows login prompt; if the IdP is not a WS-Federation provider, its own login page (for example, Google’s, Yahoo!’s, or Facebook’s) is displayed.
6. Once the user enters valid credentials, the IdP generates a security token (usually a SAML token), signs it, and sends it back to ACS.
7. ACS validates the signature and issuer per WS-Trust standards using Windows Identity Foundation (WIF) APIs, making sure the token was issued by a trusted party and was not tampered with over the wire.
8. If the token is valid, ACS runs any claims or protocol transformation rules configured for that IdP, regenerates the token, signs it, and sends it back to the requesting browser using the return URL configured in ACS for the RP.
9. The browser redirects and delivers the token to the RP.
10. The RP validates the signature and issuer, making sure the token was issued by the trusted party and was not tampered with over the wire.
11. If the token is valid, WIF APIs extract the claims from the token and create an HTTP context user and a ClaimsPrincipal containing all the claims.
12. At this point the user is authenticated for this RP.  The RP can then inspect the claims and make authorization decisions in the application.
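The RP-side validation and claims-extraction steps can be sketched in miniature.  This is a Python illustration of what WIF does for the RP, not the WIF API itself; the token format, key, and class names are invented, and an HMAC stands in for the issuer’s certificate signature.

```python
import hashlib
import hmac

# Hypothetical shared key standing in for the trusted issuer's certificate.
TRUSTED_ISSUER_KEY = b"acs-signing-key"

class ClaimsPrincipal:
    """Holds the claims extracted from a validated token."""
    def __init__(self, claims):
        self.claims = claims
    def is_in_role(self, role):
        return self.claims.get("role") == role

def validate_and_create_principal(token, signature):
    # 1. Verify the token really came from the trusted issuer, untampered.
    expected = hmac.new(TRUSTED_ISSUER_KEY, token.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise ValueError("token not issued by the trusted party or tampered with")
    # 2. Extract the claims and surface them as a principal.
    claims = dict(part.split("=", 1) for part in token.split(";"))
    return ClaimsPrincipal(claims)

# A token as ACS might have issued it (illustrative format only):
token = "name=alice;role=manager;idp=adfs"
signature = hmac.new(TRUSTED_ISSUER_KEY, token.encode(), hashlib.sha256).hexdigest()
principal = validate_and_create_principal(token, signature)
```

Once the principal exists, authorization is just a claims lookup, e.g. `principal.is_in_role("manager")`.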

Future Posts

This blog post is high level in nature, and in future posts, I will go into further details about the components/products mentioned in this blog post, such as:

  • Web Application Set-Up as RP using WIF with ACS/STS
  • ACS
  • ADFS
  • Active Federation

Reference Links:

  • An Introduction to Claims
  • Federated Identity with Windows Azure Access Control Service
  • Federated Identity with Multiple Partners and Windows Azure Access Control Service
  • Active Directory Federation Service

If you’d like to learn about our Connected Systems practice at Neudesic, please visit this page:






BizTalk Server 2013 Enhancements

This is the third blog post in this series.  You can find the first two posts here:

Innovations in Integration on the Microsoft Platform

Lowering Barriers to Innovation: BizTalk IaaS and Walkthrough

Microsoft BizTalk Server unites enterprise application integration (EAI) and business-to-business (B2B) integration.  BizTalk Server is a mature product that has been around for over a decade with a new release every 2-3 years.  The picture below shows the main features and enhancements made in previous releases.


According to statistics provided by Microsoft, BizTalk is the most deployed product in its category and is used by 81% of Fortune Global 100 companies.  

BizTalk Server 2013 was released in March and features enhancements that were influenced by a combination of industry trends and customer feedback.

At a high level, these enhancements can be grouped into the following categories:

1. Running in the Cloud
2. Connecting to the Cloud
3. Simplifying the Experience
4. Improving Performance
5. Supporting the Latest Platforms and Standards

Running in the Cloud

BizTalk Server 2013 allows you to run BizTalk Server in an Azure Infrastructure as a Service (IaaS) environment.   This can reduce hardware procurement lead times and help reduce the time and cost of setting up and maintaining your BizTalk environments.  You can also move applications from on-premises to Azure and back.  Refer to Lowering Barriers to Innovation: BizTalk IaaS and Walkthrough for more information on IaaS.

Connecting to the Cloud

BizTalk Server 2013 includes out-of-the-box adapters that send and receive messages from Windows Azure Service Bus, simplifying the task of building hybrid applications.  BizTalk Server 2013 also provides adapters that invoke REST endpoints and expose BizTalk Server artifacts as RESTful services, along with cloud adapters such as WCF-BasicHttpRelay, WCF-NetTcpRelay, and SB-Messaging.

BizTalk Server 2013 also includes an enhanced SharePoint adapter, which makes integrating with SharePoint as simple as integrating with a file share.  BizTalk Server 2013 also supports the Azure Access Control Service, which enables customers to move their EDI and EAI based solutions to the cloud.

Simplifying the Experience

BizTalk Server was originally built to use designers and configuration to minimize code writing, and additional investments have been made in this release to make BizTalk Server even easier and more user-friendly. 

For instance, dependencies between artifacts can now be viewed and navigated in the BizTalk Administration Console using the Dependency Explorer, which helps you easily identify how changes you make might impact other artifacts.  The BizTalk Administration Console pictured below shows the View Dependencies option and the full dependency information for the selected artifact.


The ESB capabilities previously introduced in the ESB Toolkit are now fully integrated with BizTalk Server, and the ESB configuration experience is vastly simplified to enable a quick setup.

Integrating with SharePoint using BizTalk Server 2013 is now as simple as integrating with a file share.  The dependency on SharePoint forms has been removed, while still providing backward compatibility.  You can find more information about the SharePoint Services Adapter here.  The picture below shows the configurable properties in the SharePoint Services transport.


BizTalk Server 2013 now comes with out-of-the-box support for SFTP, enabling the sending and receiving of messages from an SFTP server.  In the past, users had to either develop their own SFTP solution or use 3rd party adapters.  The picture below shows the configurable properties in the SFTP transport.


Improving Performance

BizTalk Server 2013 supports host handler association for dynamic send ports.  In past releases, all dynamic send ports executed in the adapter’s default host; since there was only one default host per adapter, all messages were routed through the same host, decreasing performance.  With BizTalk Server 2013, it is possible to configure the adapter’s send handler.  You can find more information about the dynamic send port handler here.  The picture below shows the dynamic send port handler configuration.


Minimal Lower Layer Protocol (MLLP) is the de facto standard for transmitting HL7 messages via TCP/IP.  BizTalk Server’s MLLP adapter is widely used by hospitals and medical clinics throughout the world.  In the 2013 release, Microsoft has made performance improvements to the MLLP adapter; tests revealed a performance improvement of up to five times.

The mapping engine in BizTalk Server 2010 and prior releases uses the XslTransform API.  The transformation engine in BizTalk Server 2013 uses the enhanced XslCompiledTransform API, which provides performance improvements.  The new XSLT processor compiles the XSLT style sheet to a common intermediate format; once compiled, the style sheet can be cached and reused, and once the Load method completes successfully, the Transform method can be called concurrently from multiple threads.
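The compile-once, reuse-many pattern behind XslCompiledTransform can be sketched generically.  The sketch below is Python rather than .NET, and a trivial string rule stands in for a real XSLT style sheet; the point is the one-time compile step followed by thread-safe reuse from a cache.

```python
import threading

def compile_stylesheet(stylesheet_text):
    # Stand-in "compile" step: turn a rule into a reusable transform function.
    rule = stylesheet_text.strip()
    if rule == "upper":
        return str.upper
    if rule == "lower":
        return str.lower
    raise ValueError(f"unknown rule: {rule}")

class TransformCache:
    """Compile each stylesheet once; hand out the compiled form thereafter."""
    def __init__(self):
        self._cache = {}
        self._lock = threading.Lock()

    def get(self, stylesheet_text):
        with self._lock:
            if stylesheet_text not in self._cache:
                self._cache[stylesheet_text] = compile_stylesheet(stylesheet_text)
            return self._cache[stylesheet_text]

cache = TransformCache()
transform = cache.get("upper")
# Once "compiled", the transform can be invoked concurrently from many threads.
result = transform("purchase order 42")
```

Subsequent `cache.get("upper")` calls skip compilation entirely, which is the same win the compiled, cached style sheets provide in BizTalk Server 2013.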

Supporting the Latest Platforms and Standards

BizTalk Server 2013 supports the following platforms and standards:

·        Windows Server 2012, Microsoft Visual Studio 2012, Microsoft SQL Server 2012, Microsoft System Center 2012, the latest version of Microsoft Office

·        SAP 7.2 and 7.3, Oracle Database 11.2, Oracle E-Business Suite 12.1, Oracle Siebel 8.1

·        Health Level Seven (HL7) 2.5.1 and 2.6

·        Society for Worldwide Interbank Financial Telecommunication (SWIFT) 2012 Message Pack

·        X12 5030, EDIFACT D05B


If you would like to learn more regarding the concepts covered in this post please visit these resources:



In addition, there are several differences between installing BizTalk Server on the 32-bit and 64-bit editions of Windows.  Considerations for installing BizTalk Server 2013 can be found here, and the full list of hardware and software requirements can be found here.

If you’d like to learn about our Connected Systems practice at Neudesic, please visit this page:

Introducing Service Bus for Windows Server (Service Bus 1.0 Beta)


On July 16, Microsoft released the beta of Microsoft Service Bus 1.0 for Windows Server. This release has been kept tightly under wraps for several months, and my team was fortunate to have the opportunity to evaluate the early bits and help shape this release.
With the Beta now live, I’d like to share our perspective on this release, explain why it is significant, and provide some details based on our experience with the bits.
Before I do, let me provide an overview of Azure Service Bus to put this important capability into context, both as it exists today and where it is going.

A Brief Introduction to Azure Service Bus

Windows Azure Service Bus enables customers to integrate applications using messaging capabilities that, until now, were only available in enterprise-grade on-premise middleware platforms like BizTalk, Tibco, Neuron, and IBM WebSphere.
Azure Service Bus provides foundational messaging capabilities, such as pub-sub, over a highly elastic messaging fabric that, in addition to providing scalability, significantly simplifies exposing, composing, and consuming services regardless of where they reside.

The core features that are part of Azure Service Bus today include:

  • Connectivity - Rich options for interconnecting apps such as relayed messaging which enables federation of service endpoints across network and trust boundaries
  • Messaging – Reliable and transaction-aware Cloud messaging via Queues and Topics
  • Service Management - Consistent management surface and service observation capabilities via the Azure Portal and rich APIs for building your own management tools
  • Security – Integration with Azure Access Control Service for authentication against Service Bus endpoints

Until now, most of the capabilities in Windows Azure, including Azure Service Bus, have been delivered under Microsoft’s “cloud-first” approach. Since the release of Windows Server AppFabric in the spring of 2010 (rebranded Microsoft AppFabric 1.1 for Windows Server), Microsoft has been very focused on investing in new capabilities in the cloud, with a promise that these capabilities would eventually land on-premise. With the latest release of Microsoft Service Bus 1.0 Beta for Windows Server, Microsoft delivers on this promise of fidelity between cloud and on-premise capabilities.
In addition, Microsoft seems to have taken a slight detour from its “cloud-first” trend by delivering a new cloud-scale Workflow host, simply called (for now) Workflow for Windows Server.  See the following post for more information on Windows Azure Workflow, and if you would like to learn more about the Azure-hosted version of Azure Service Bus, check out “Introducing Queues and Topics in Azure Service Bus” in CODE Magazine, written by my Neudesic colleague Rick Garibay:

Introducing Service Bus for Windows Server

With the latest release of the Service Bus for Windows Server, Microsoft is extending the brokered messaging capabilities of Windows Azure Service Bus previously only available through Windows Azure hosting to a private, on-premise hosting environment.  While this release is delivered under the name Microsoft Service Bus 1.0 Beta for Windows Server, you will find that there is strong parity with the existing Azure Service Bus capabilities in terms of the API and overall development experience.  
In fact, you will find that the samples in the Service Bus for Windows Server SDK are very similar to the samples in the existing Azure Service Bus SDK. The capabilities in this release include:

  • Secure messaging
  • Multiple messaging protocols
  • Reusable patterns
  • Delivery assurance through reliable messaging
  • Scalability  
  • Cross-domain/network connectivity with minimal network changes

Service Bus for Windows Server is built on the Microsoft .NET Framework 4.5 PU3 and requires Windows Server 2008 R2, SQL Server 2008 R2, and Windows PowerShell 3.0, all running on a 64-bit operating system. The storage layer for the system (SQL) can be deployed on a dedicated remote server, on one of the compute nodes, or in Windows Azure SQL Database. The compute nodes in this stack can be hosted either on-premises or on Windows Azure IaaS.
The following figure shows the platform stack for Service Bus for Windows Server:

Before you start exploring these capabilities, it is worth spending some time understanding some of the core components of Service Bus for Windows Server.  These key components include:

  • Service Bus Message Container

Service Bus for Windows Server uses SQL Server to store messages. Each database is mapped to a runtime component called a message container. A message container points to its underlying database and also holds cached information that accelerates the Service Bus. A Service Bus application server can host multiple message containers (and thus communicate with multiple databases), but a message container is always hosted on a single Service Bus application server.

A Service Bus messaging entity (a queue, topic, subscription, or rule) is created in a message container (and its corresponding database), and all messages for that entity are stored in the same container (and database). You should create multiple containers (even on the same database engine) so that the Service Bus can balance the load across its servers and support future scaling (adding more servers).
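The entity-to-container-to-database relationship might be pictured as follows.  This is a hypothetical Python sketch: the actual placement policy Service Bus uses is not described here, so round-robin assignment is assumed purely for illustration, and all names are invented.

```python
class MessageContainer:
    """Maps 1:1 to a backing SQL database; holds entities and their messages."""
    def __init__(self, database):
        self.database = database
        self.entities = {}  # entity name -> list of stored messages

class ServiceBusFarm:
    def __init__(self, databases):
        self.containers = [MessageContainer(db) for db in databases]
        self._next = 0

    def create_entity(self, name):
        # Assumed round-robin placement: each new queue/topic lands in the
        # next container, spreading load across containers (and databases).
        container = self.containers[self._next % len(self.containers)]
        self._next += 1
        container.entities[name] = []
        return container

farm = ServiceBusFarm(["SBMessageContainer01", "SBMessageContainer02"])
c1 = farm.create_entity("orders-queue")
c2 = farm.create_entity("audit-topic")
```

With two containers, consecutive entities land on different databases, which is why creating multiple containers enables load balancing and future scale-out.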

Service Bus message containers are created by running the following PowerShell command:
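From memory of the beta bits, the cmdlet looks roughly like the following; treat the name and parameters as unverified assumptions for this release and confirm them against the shipped SDK documentation.

```powershell
# Assumed cmdlet name and parameter for the Service Bus 1.0 Beta - verify
# against the SDK before use; the connection string is an invented example.
New-SBMessageContainer -ContainerDBConnectionString `
    "Data Source=SQL01;Initial Catalog=SBMessageContainer02;Integrated Security=True"
```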

  • Service Bus Service Namespaces

With Windows Azure Service Bus, a service namespace is a projection used for addressing and managing all top-level entities, such as queues and topics, which are addressed via an HTTP or sb:// path that starts with the name of the service namespace.

Service Bus for Windows Server uses a similar approach to service namespaces, but extends the cloud schema to support specifying the server hosts in your own private hosting environment. Service Bus 1.0 Beta for Windows Server enables creating service namespaces using one of three addressing schemes:

  • A path-based address (the default), which uses the fully-qualified domain name (FQDN) of the Service Bus nodes. The service URI for this scheme appears as follows:


  • A DNS-registered namespace scheme that supports DNS capabilities. By using DNS, you can decouple the actual server nodes (FQDNs) from clients using the Service Bus. In other words, when you create a service namespace with a DNS-registered scheme, you provide the URI that is registered in your DNS. The service URI will be similar to the following:
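The example URIs for the two schemes above likely resembled the following; the hostnames and port are invented for illustration, so check the exact format against the Service Bus 1.0 documentation.

```
# Path-based (default): the FQDN of a Service Bus node, namespace as the path
https://sbnode1.contoso.local:9355/MyServiceNamespace

# DNS-registered: a DNS name that resolves to the farm, decoupled from nodes
https://servicebus.contoso.com:9355/MyServiceNamespace
```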

In this release, Windows Azure Service Bus supports the use of configuration files for passing parameters to initialization code. Using this method (whether you are using Service Bus on Windows Server or inside Windows Azure worker or web roles), you can control deployment parameters outside your code. This enables you to point to different Service Bus deployments without recompiling the application.

It is worth noting that any endpoints exposed for Service Bus entities are secured and require authentication via a claims token.  In support of this, Service Bus for Windows Server provides a Security Token Service (STS), known as $STS, which can translate traditional credentials, such as an Active Directory username and password, into a claims token.  As a result, you will need to obtain a token from $STS before you can consume any Service Bus endpoints.


Service Bus for Windows Server addresses a number of challenges that have historically inhibited adoption of this extremely innovative capability. With support for mature ALM scenarios and the ability to evaluate these Azure features on-premises instead of only in the cloud, we believe this release is a good step in the right direction, giving customers the best of both worlds when it comes to evaluating and choosing the right capability for the job at hand. While additional core messaging capabilities like transformation, validation, and routing are not currently included in either version of Service Bus, the work happening on Azure Service Bus Integration Services/BizTalk PaaS, currently in CTP, provides a very interesting glimpse of what is likely to come, and this is a great time to jump in and learn more.
Documentation and sample code are available in the included SDK, which you can download here:


Posted: Jul 18 2012, 04:36 by Manoj.Talreja | Comments (1) RSS comment feed

Categories: AppFabric | Azure

Exploring Azure AppFabric Service Bus V2 May CTP: Topics

As syndicated from

In my previous post, I discussed Azure AppFabric Service Bus Queues, a key new capability in the first CTP of the Azure AppFabric Service Bus V2 release that was announced on May 17th.

Queues are an important addition to Azure AppFabric Service Bus capabilities because they provide a solid foundation on which to build loosely coupled distributed messaging solutions. The natural decoupling of queues introduces a number of side effects that can further benefit non-functional quality attributes of your solution, such as performance, scalability, and availability.

The graphic on the right is taken from my recent whitepaper “Developing and Extending Apps for Windows Azure with Visual Studio” and shows the perpetual mismatch between the supply and demand of IT capacity. If we think of this mismatch as load on the Y axis being introduced over time, the result is either failure to deliver a service or spending too much on hardware. The goal, then, is to align demand with capacity.

Queues allow us to get closer to the drawing on the left because capacity can be tuned to scale as needed and at its own pace. This is effective because, with queues, a consumer/worker can be throttled to consume only what it can handle. If the consumer/worker is offline, messages accumulate in the queue, providing classic “store and forward” capabilities. If the consumer/worker is very busy, it consumes only the messages it is able to reliably pull from the queue. If we add more consumers/workers, each one consumes messages at its optimal rate (determined by processing capacity, tuning, etc.), resulting in a natural distribution of work. Of course, stronger, more capable consumers/workers may consume more messages, but as long as there are messages in the queue, there is work to be done and capacity can be allocated accordingly.
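This natural distribution of work can be simulated with a toy Python sketch (all names invented): workers drain a shared queue at different capacities, and each ends up doing work proportional to what it can handle.

```python
from collections import deque

def distribute(messages, worker_capacities):
    # Each round, every worker pulls as many messages as its capacity allows,
    # mimicking consumers that drain a queue at their own pace.
    queue = deque(messages)
    done = {worker: [] for worker in worker_capacities}
    while queue:
        for worker, capacity in worker_capacities.items():
            for _ in range(capacity):
                if not queue:
                    break
                done[worker].append(queue.popleft())
    return done

# A strong worker (capacity 3) and a weak one (capacity 1) share 12 messages:
result = distribute(range(12), {"fast": 3, "slow": 1})
```

The fast worker ends up with three times the messages of the slow one, yet the producer never needed to know how many consumers existed or how capable each was.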

As you can see, queues are a great pattern for building loosely coupled distributed solutions, and the ability to add consumers/workers to a process or message exchange in a manner that is transparent from the client/producer perspective makes queues even more useful.

This is the idea behind topics. Before I dive into topics, though, let’s talk about the problem that topics are trying to solve.

In distributed systems, it is useful to design message exchanges in terms of publishers and subscribers. Publishers are clients that either send a request-response message and wait for a response, or send a single one-way message and expect no response. Subscribers care about these messages for one reason or another and thus subscribe to them. This is the essence of the Publish-Subscribe, or more succinctly “Pub-Sub,” messaging pattern. A one-way pub-sub message exchange pattern is modeled in the accompanying figures, which build toward a concrete example. Purchases, be they online or at brick-and-mortar retail outlets, typically involve a point-of-sale (POS) system.

One of the first things smart, modern POS software does when a unit is sold is update the inventory for that product, so the company can make proactive, intelligent decisions about managing inventory levels. In most cases, this is an administrative function that is (or should be) transparent to the customer. When an order/sale is placed, an event occurs which is of interest to an Inventory Service responsible for decrementing the inventory count in a shared store. Of course, this is just one of several things that likely need to happen when an order is placed. Credit card authorization as well as fulfillment (whatever that means in the context of the purchase) need to take place, as shown below on your left.

All of a sudden, things are more complex than they were before. I want to decouple the POS Client from the downstream business of authorizing a credit card and shipping the product to the customer’s doorstep. Depending on the context, the credit authorization process may be request-response or one-way. For most high-volume online retailers, the financial outcome of the transaction is transparent to the purchasing experience. Ever gotten an email after making a purchase letting you know that your order is in a pending state because you need to update the expiration date on file so your credit card can be authorized? This asynchronous approach is common because it facilitates scale and performance, and it is also good business.

It is very important to note that when the credit card authorization is designed as one-way, there are a number of guarantees that must be made. First, the Credit Service must receive the message, no matter how busy the front-end or back-end services are. Second, but of equal importance, is that the Credit Service must receive the message once and only once. Failure to deliver on the first or second guarantee will lead to lost revenue, either due to lost transactions or very disgruntled customers.

Using queues is a first step towards achieving these desired outcomes. However, now we need to reason about who should have the responsibility of sending the message to each subscriber. It can’t be the POS Client, because we want to decouple it from this kind of intimate knowledge. Adding this responsibility to each subscriber is just as bad, or arguably worse.

What we need is an intermediary that forms both a logical and physical relationship between publishers and subscribers such that the minimum degree of coupling is accomplished between the two and no more. This is exactly what a topic provides.

Topics are nothing new. They have been the mainstay of JMS-based systems for years, and thus have proven their usefulness in the field of distributed computing as a great way to logically associate actions to events, thus achieving pub-sub in a minimally coupled manner.

In our scenario, when a sale occurs, the POS Client publishes a message to the “Orders” topic to signal a new order event. At this point, corresponding subscribers are notified by the topic. This logical relationship is modeled to your right. Each of the subscribers might receive a copy of the same message, in which case we would define the message exchange pattern as multicast. It is also possible, and likely, that each service exposes a different contract, and thus transformation between a canonical message and the expected message must take place, but that is a subject for a later post.
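The Orders scenario can be modeled with a minimal topic abstraction.  This is an illustrative Python sketch of topic-based multicast, not the Service Bus API; all class and subscription names are invented for the example.

```python
from collections import deque

class Subscription:
    """Each subscription gets its own copy of every published message."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()
    def receive(self):
        return self.queue.popleft() if self.queue else None

class Topic:
    def __init__(self, name):
        self.name = name
        self.subscriptions = []
    def add_subscription(self, name):
        sub = Subscription(name)
        self.subscriptions.append(sub)
        return sub
    def publish(self, message):
        # Multicast: the topic, not the publisher, fans the message out.
        for sub in self.subscriptions:
            sub.queue.append(message)

orders = Topic("Orders")
inventory = orders.add_subscription("InventoryService")
credit = orders.add_subscription("CreditService")
fulfillment = orders.add_subscription("FulfillmentService")
orders.publish({"order_id": 1001, "sku": "WIDGET", "qty": 2})
```

The POS Client publishes once; the topic delivers a copy to every subscription, so subscribers can be added or removed without the publisher ever knowing.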

How the subscribers receive the message, be it one-way or request-response, is a physical implementation decision. In the classic sense, the logical abstraction of a topic affords us some latitude in choosing a physical transport that will fuse the publisher and subscriber(s) together. 

Note: Theory aside, everything I am sharing here is based on my experience over the last few days jumping into these early bits. I am not an expert on the implementation of these capabilities or the features of the CTP and am merely sharing my learnings and thoughts as I explore these exciting new features. If you have questions, know of a different or better way to do something as it applies to the CTP, or have any corrections, please use the comments below and I’ll gladly consider them and/or share them with the product team and post updates back here.

Exploring Azure AppFabric Service Bus V2 Topics

In Azure AppFabric Service Bus V2, the physical transport used in Topics is Azure AppFabric Service Bus Queues, which allows us to harness the physical advantages of queues and the logical abstraction that topics provide to design our distributed solutions at internet scale.

If you’ve played with Azure AppFabric Service Bus Queues, or read my introduction to this series, you might be wondering what makes Azure AppFabric Service Bus Topics so special. So far, you might be thinking that we could accomplish much of what I’ve discussed with Queues, and you wouldn’t be alone. I struggled with this initially as well.

In the way that Queues and Topics are implemented in the current technology preview, Topics don’t really seem all that useful until you want to refine when, or under what conditions, a Subscriber should receive a message beyond simply being subscribed to a Topic, and this can be pretty powerful.

In fact, we can code the scenario shown in the article on Queues to be functionally equivalent with Topics without immediately gaining much.

Creating Topics and Subscriptions

Start by creating a ServiceBusNamespaceClient and MessagingFactory just as before:

   ServiceBusNamespaceClient namespaceClient = new ServiceBusNamespaceClient(
       ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, string.Empty),
       sharedSecretCreds);

   MessagingFactory messagingFactory = MessagingFactory.Create(
       ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, string.Empty),
       sharedSecretCreds);

Next, create a Topic called “Orders”:

   Topic ordersTopic = namespaceClient.CreateTopic("Orders");

The underlying queue infrastructure is created for you.

Now, create a subscription for the Inventory Service on the “Orders” topic:

   Subscription inventoryServiceSubscription = ordersTopic.AddSubscription("InventoryServiceSubscription");

At this point, the creation of the Topic and Subscription above would take place in a management context without regard to, or any knowledge of the actual publisher or subscriber(s).

I would expect tooling, either from Microsoft or the community or both, to start to crop up soon to provide a user experience for these types of management chores, including the ability to enumerate Queues, Topics, etc. For example, before I create a Topic, I need to ensure that the Topic doesn’t already exist or I will get a MessagingEntityAlreadyExistsException. I used the GetQueues method on the namespace client to check for existing entities and, if the Topic or Queue entity exists, the DeleteTopic or DeleteQueue method to remove it.
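In the meantime, a defensive creation sketch might look like the following. This assumes the CTP types used throughout this post; the try/catch approach is just my own workaround for the lack of an "exists" check, not official guidance:

```csharp
// Sketch: recreate the "Orders" Topic for a clean run. Assumes the May CTP
// ServiceBusNamespaceClient shown earlier; catching the exception and
// deleting/recreating the entity is my own workaround.
Topic ordersTopic;
try
{
    ordersTopic = namespaceClient.CreateTopic("Orders");
}
catch (MessagingEntityAlreadyExistsException)
{
    // The Topic is left over from a previous run; delete and recreate it.
    namespaceClient.DeleteTopic("Orders");
    ordersTopic = namespaceClient.CreateTopic("Orders");
}
```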

Publishing on a Topic

Now, the client/publisher creates a TopicClient and MessageSender and sends the message on the Orders Topic:

   TopicClient pOSClientPublisher = messagingFactory.CreateTopicClient("Orders");

   MessageSender msgSender = pOSClientPublisher.CreateSender();

   Order order = new Order();
   order.OrderId = 42;
   order.Products.Add("Kinect", 70.50M);
   order.Products.Add("SamsungFocus", 199.99M);
   order.Total = order.Products["Kinect"] + order.Products["SamsungFocus"];

   var msg = BrokeredMessage.CreateMessage(order);

   msgSender.Send(msg);
   msgSender.Close();

Note that the client/publisher knows nothing about the subscriber. It is only bound to a logical Topic called “Orders” in the first line of the code above. It is running in some other process somewhere in the world (literally) that has an internet connection and can make an outbound connection on TCP port 9354***.

Subscribing to a Topic

On the receiving end, a SubscriptionClient is created along with a MessageReceiver. The SubscriptionClient instance is created from the MessagingFactory instance, which accepts the name of the topic and the name of the subscription itself.

Note the ReceiveMode: PeekLock locks the message while the Inventory Service Subscriber processes it rather than popping it off the queue outright (FWIW, I think PublisherClient and SubscriberClient make more sense than TopicClient and SubscriptionClient respectively, but the intent of the classes is pretty clear and again, these are early bits, so expect changes as the team gets feedback and continues to bake the API):

   SubscriptionClient inventoryServiceSubscriber = messagingFactory.CreateSubscriptionClient("Orders", "InventoryServiceSubscription");

   MessageReceiver msgReceiver = inventoryServiceSubscriber.CreateReceiver(ReceiveMode.PeekLock);

   var recdMsg = msgReceiver.Receive();
   msgReceiver.Close();

   var recdOrder = recdMsg.GetBody<Order>();

   Console.WriteLine("Received Order {0} on {1}.", recdOrder.OrderId, "Inventory Service Subscriber");

The code above would be wrapped in a polling algorithm that gives you fine control over the polling interval, which, as Clemens Vasters pointed out in a side conversation recently, is a key capability that allows you to throttle your subscribers. The samples in the SDK show a polling technique which works, but it would be nice to see an option for setting some config and letting the API do this for you.
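For what it’s worth, a simple polling sketch might look like this. Note my assumptions: that Receive returns null when the subscription is empty, and that Complete settles the peek-locked message; treat this as illustrative of the pattern rather than a definitive use of the CTP API:

```csharp
// Sketch: poll the Inventory Service subscription on a fixed interval.
// The sleep interval doubles as a crude throttle on the subscriber.
MessageReceiver msgReceiver = inventoryServiceSubscriber.CreateReceiver(ReceiveMode.PeekLock);

BrokeredMessage recdMsg = null;
while (recdMsg == null)
{
    recdMsg = msgReceiver.Receive();           // assumed to return null when no message is available
    if (recdMsg == null)
    {
        Thread.Sleep(TimeSpan.FromSeconds(5)); // polling interval / throttle
    }
}

var recdOrder = recdMsg.GetBody<Order>();
Console.WriteLine("Received Order {0}.", recdOrder.OrderId);
recdMsg.Complete();                            // assumed: settles the peek-locked message
msgReceiver.Close();
```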

Regardless of the approach you take to checking for messages, the only thing the subscriber knows about is the Topic name and the name of the subscription, both provided in the first line of the code above (having to provide the subscription name in addition to the topic name seems a bit redundant to me, and more coupling than is needed).

Multiple Subscribers

At this point, we’ve emulated the functionality of the Queuing example shown in my first post in the series.

While we’ve increased our level of abstraction and are no longer thinking (or caring) about the fact that there’s a queue between the publisher and subscriber, so far, Topics haven’t bought us much just yet…

Think back to the scenario above. There are two other services that care about orders. We can create subscriptions for them just as we did for the Inventory Service at management/configuration time:

   Subscription fulfillmentServiceSubscription = ordersTopic.AddSubscription("FulfillmentServiceSubscription");
   Subscription creditServiceSubscription = ordersTopic.AddSubscription("CreditServiceSubscription");

Of course, new subscriptions can be added long after the Topic has been created, and this is one of the many powerful aspects of this logical abstraction from publishers and subscribers. This approach introduces agility into your solutions because you can add subscribers with minimal friction, and in a fully location transparent manner.

As with the publisher (TopicClient), subscribers (SubscriptionClients) live in their own processes anywhere in the world with an internet connection and can be fired up at will. If one is offline or unavailable, the message will remain queued (provided that previous subscribers have peeked the message (ReceiveMode.PeekLock) as opposed to popping it off the queue (ReceiveMode.ReceiveAndDelete)). Below is the simple code for adding listeners/Subscribers for the Credit Service and Fulfillment Service:

   // Credit Service Subscriber
   SubscriptionClient creditServiceSubscriber = messagingFactory.CreateSubscriptionClient("Orders", "CreditServiceSubscription");

   msgReceiver = creditServiceSubscriber.CreateReceiver(ReceiveMode.PeekLock);
   recdMsg = msgReceiver.Receive();
   msgReceiver.Close();

   recdOrder = recdMsg.GetBody<Order>();

   Console.WriteLine("Received Order {0} on {1}.", recdOrder.OrderId, "Credit Service Subscriber");

   // Fulfillment Service Subscriber
   SubscriptionClient fulfillmentServiceSubscriber = messagingFactory.CreateSubscriptionClient("Orders", "FulfillmentServiceSubscription");

   msgReceiver = fulfillmentServiceSubscriber.CreateReceiver(ReceiveMode.PeekLock);
   recdMsg = msgReceiver.Receive();
   msgReceiver.Close();

   recdOrder = recdMsg.GetBody<Order>();

   Console.WriteLine("Received Order {0} on {1}.", recdOrder.OrderId, "Fulfillment Service Subscriber");

Creating two additional SubscriptionClients for the Credit Service and Fulfillment Service results in all three subscribers getting the message, as shown on the right. Again, in my examples, I am running each subscriber in the same process, but in the real world, these subscribers could be deployed anywhere in the world provided they can establish a connection to TCP 9354.

Rules/Actions, Sessions/Groups

Now, what if we wanted to partition the subscribers such that in addition to subscribing to a Topic, additional logic could be evaluated to determine if the subscribers are really interested in the message? Our online retailer probably (err, hopefully) has a centralized inventory management system and credit card processor, but may have different fulfillment centers across the world.

Based on the customer’s origin, the order should go to the closest fulfillment center to minimize cost and ship times (e.g. North America, South America, Africa, the Middle East, Europe, Asia, Australia).

Azure AppFabric Service Bus V2 supports this approach with Sessions, Rules and Actions. I group these into the idea of a message pipeline. In addition to the subscriptions, the Topic evaluates additional context or content of the published message, as configured at management time, to introduce some additional filtering and very lightweight orchestration. The topic subscription is the right place for this to happen because, again, it is a management-time task. Publishers and subscribers merely send and receive messages. It is the benefit of a logically centralized, yet physically distributed, messaging model that affords us the ability to manage these details in a centralized way.

You can create a RuleDescription to evaluate some property or field in the message that indicates country of origin, and as an action, set a property on the message to identify the fulfillment center.

To illustrate this, first, I’ve added two properties to the BrokeredMessage that I am publishing on the “Orders” Topic. I’ll use these properties when I configure my rule and action next:

   msg.Properties.Add("CountryOfOrigin", "USA");
   msg.Properties.Add("FulfillmentRegion", "");

Notice that in the second line above, I’ve intentionally created the “FulfillmentRegion” property with an empty string, since we are going to apply some logic to determine the fulfillment region.

Now, I use a RuleDescription and SqlFilterExpression to determine if the CountryOfOrigin is the United States. If the SqlFilterExpression evaluates to true, then the SqlFilterAction fires and sets the FulfillmentRegion to “North America”:

   RuleDescription fulfillmentRuleDescription = new RuleDescription();
   fulfillmentRuleDescription.FilterExpression = new SqlFilterExpression("CountryOfOrigin = 'USA'");
   fulfillmentRuleDescription.FilterAction = new SqlFilterAction("set fulfillmentRegion='North America'");

Of course, in the real world, there would be a more sophisticated process for identifying the country of origin, but simple, contrived examples make it so that articles get published.

The evaluation and any corresponding actions must fire while the message is in flight, since any actions taken could influence the routing of the message, as with the example above, which will satisfy a subscription rule we’ll configure on the Fulfillment Service subscription next.

OK, so now we have some properties we can play with and we’ve defined a RuleDescription. The last thing we need to do is modify the FulfillmentServiceSubscription to include the RuleDescription I just created. This makes the FulfillmentSubscription conditional, based on the conditions we've defined in the instance of the RuleDescription called fulfillmentRuleDescription:

   Subscription fulfillmentServiceSubscription = ordersTopic.AddSubscription("FulfillmentServiceSubscription", fulfillmentRuleDescription);

Now, when I run my code, all three subscribers receive the order message just as before, but this time we know that the only reason the Fulfillment Service is getting the message is that it is acting as the North America fulfillment center. If I modify the CountryOfOrigin property in the properties code above to anything but “USA”, the Fulfillment Service will not receive the message at all.

As I continue to model out my subscribers, I could create a subscription for each fulfillment center that is capable of receiving real-time orders and then create RuleDescriptions accordingly. This would allow me to distribute fulfillment geographically (good for scale, reliability and happy customers) as well as ensuring that I am always only pulling back messages that I need. If, during peak times around holidays, the volume of orders increases, I can simply add additional fulfillment subscribers for that region to ensure that packages ship as quickly as possible and that no orders are lost.
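A sketch of that per-region setup, reusing the RuleDescription/SqlFilterExpression pattern from above. The region names and filter expressions are purely illustrative:

```csharp
// Sketch: one conditional subscription per fulfillment region.
// Region names, country values and subscription naming are all assumptions.
var regionFilters = new Dictionary<string, string>
{
    { "NorthAmerica", "CountryOfOrigin = 'USA' OR CountryOfOrigin = 'Canada'" },
    { "Europe",       "CountryOfOrigin = 'UK' OR CountryOfOrigin = 'Germany'" }
};

foreach (var region in regionFilters)
{
    var rule = new RuleDescription();
    rule.FilterExpression = new SqlFilterExpression(region.Value);
    rule.FilterAction = new SqlFilterAction(
        string.Format("set fulfillmentRegion = '{0}'", region.Key));

    // e.g. "NorthAmericaFulfillmentSubscription"
    ordersTopic.AddSubscription(region.Key + "FulfillmentSubscription", rule);
}
```

With subscriptions partitioned this way, each regional fulfillment center only ever pulls back the orders it is responsible for shipping.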


Closing Thoughts

So far, I’m pretty impressed with the powerful messaging capabilities that Azure AppFabric Service Bus V2 Topics introduced in the May CTP, and I’m excited to see where things are going.

As Azure AppFabric Service Bus matures further, I would love to see additional transport channels supported by Azure AppFabric Service Bus Topics. Just as with the creation of the Topic and Subscriptions as a management function, the transport would also be defined and created/configured at management time. This is really where the power and elegance of Topics shines through in my opinion because the publisher and subscriber don’t know or care about the transport- they’re just connecting to a topic and sending and/or receiving messages.

By way of some nit picks, I think that PublisherClient makes more sense than TopicClient, and along with considering a modification to SubscriptionClient, having a PublisherClient and SubscriberClient that publish and subscribe to a Topic seems a lot cleaner and more intuitive to me.

I’m also trying to get used to the fact that we need Clients and Senders/Receivers. Again, to me it would seem more intuitive to simply have a PublisherClient and SubscriberClient that own the sending and receiving. Perhaps we’re just seeing visible seams in the API due to the early nature of these bits, or there’s a good reason for this that I haven’t thought of yet.

At PDC 10, the AppFabric team announced that they are investing in an “Integration Service” that will provide additional messaging capabilities by way of transformation and advanced mapping, similar to how we leverage these capabilities in BizTalk today. I can see Topics getting much more robust when, in addition to modifying properties on the BrokeredMessage, we can mediate and transform a message in-flight just before it reaches a subscriber, and I can also think of some message enrichment patterns that would be very nice.

*** One important thing to note is that in the current May CTP, Queues and Topics do not provide the same NAT/Firewall traversal capabilities of their relay siblings. For the .NET Messaging API (which I’ve been using to share my learnings thus far) as well as the WCF ServiceBusMessagingBinding, outbound TCP 9354 is required. Also note that the channel type is Net.Tcp (think WCF). This means that in the current CTP, the only way to ensure full interoperability across publishers/subscribers and guarantee an outbound connection (assuming HTTP ports aren’t locked down) is to use the REST API, but I suspect we’ll see more parity of this important feature across client types for Queues and Topics.

What’s Next?

There’s still much to explore around sessions, filtering, the WCF ServiceBusMessagingBinding, the REST API and how we might bridge Azure AppFabric Service Bus with on-premise messaging capabilities. Exciting stuff- stay tuned!

Posted: May 30 2011, 08:50 by Rick.Garibay | Comments (0) RSS comment feed

Categories: AppFabric | Azure | Connected Systems | Headlines | WCF

Exploring Azure AppFabric Service Bus V2 May CTP: Queues

As syndicated from

Today, the AppFabric team announced the first Community Technology Preview of the Azure AppFabric Service Bus V2.

The second release of Azure AppFabric Service Bus is a significant milestone that the team has been hard at work on for several months. While I’ve had the privilege of attending a number of SDRs and watching this release move from ideation to actual bits, this is the first time I’ve been able to actually get my hands on the code, so I’ve spent the better part of this evening diving in.

First, if you are new to the Azure AppFabric Service Bus, there are many resources available to get you up to speed. I highly recommend this whitepaper by Aaron Skonnard: A Developer’s Guide to the Service Bus. If you are a visual learner, please consider checking out my webcast on AppFabric Service Bus here:

In a nutshell, like most ESBs, Azure AppFabric Service Bus is the manifestation of a number of core messaging patterns that provide the ability to design messaging solutions that are loosely coupled. What makes the Azure AppFabric Service Bus unique is that it provides these capabilities at “internet scale”, meaning that it is designed to decouple clients and services regardless of whether they are running on premise or in the cloud. As a result, the AppFabric Service Bus is a key technology for enabling hybrid scenarios at the platform level (i.e. PaaS) and serves as a key differentiator in the market today for enabling organizations to adopt cloud computing in a pragmatic way.

Messaging patterns in general provide a common frame on which to think about and build composite applications. Cloud and hybrid computing necessitate many of the same messaging patterns found within the on-premise enterprise and introduce new complexities that are somewhat unique. Azure AppFabric Service Bus V2 introduces tried and true messaging capabilities such as Queues, Topics, Pipes and Filters (Rules/Actions), as well as sequencing semantics and, of course, durability.

It is important to note that the Azure AppFabric Service Bus is not a replacement for on-premise publish-subscribe messaging. It enables new scenarios that allow you to integrate your current on-premise messaging and provide the ability to compose clouds, be they your own, your partners’, or those of cloud providers such as Microsoft. The drawing on the right is from the Microsoft whitepaper I mentioned in the introduction. Notice that the Azure AppFabric Service Bus is providing the ability to integrate clients and services regardless of where they reside, and for non-trivial SOA, on-premise pub-sub for decoupling composed services and clients is essential.


Queues are typically used to provide temporal decoupling, which provides support for occasionally connected clients and/or services. With a queue, a client writes to a known endpoint (private/local or public/somewhere on the network) without regard to the state of the downstream service. If the service/consumer is running, it will read work items or messages from the queue. Otherwise, the queue will retain the message(s) until the service is available to retrieve the message or the message expires.

As promised, Azure AppFabric Service Bus V2 delivers on durable messaging capabilities beyond the current Message Buffer feature with the introduction of Queues.

Queues are supported in the .NET API (shown below), the REST API, and a new WCF binding called “ServiceBusMessagingBinding”. While my preferred approach will certainly be WCF, the .NET API helps in understanding the new messaging model. In addition, the process of creating a queue is required even with WCF, since queue creation is outside of WCF’s immediate area of concern.

The first thing you need to do when working with Queues, Topics and Subscribers is create a Service Bus Namespace Client which is an anchor class for managing Service Bus entities:

   ServiceBusNamespaceClient namespaceClient = new ServiceBusNamespaceClient(
       ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, string.Empty),
       sharedSecretCreds);


Once you have an instance of the ServiceBusNamespaceClient, you can create a queue by simply calling the CreateQueue method, which returns an instance of the Microsoft.ServiceBus.Messaging.Queue class (I’ve set the queueName field to “Orders”):

  Queue queue = namespaceClient.CreateQueue(queueName);

Next, create a QueueClient, which manages both send and receive operations:

   MessagingFactory messagingFactory = MessagingFactory.Create(
       ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, string.Empty),
       sharedSecretCreds);

   var queueClient = messagingFactory.CreateQueueClient(queue);

The code above uses the Microsoft.ServiceBus.Messaging.MessagingFactory which accepts your namespace and security credentials and returns an instance of the MessagingFactory.

Next, you create an instance of Microsoft.ServiceBus.Messaging.MessageSender using the CreateSender method:

   var messageSender = queueClient.CreateSender();
The CreateMessage call below creates a BrokeredMessage, a new message class that represents a unit of communication between Service Bus clients. Note that this class has nothing to do with the System.ServiceModel.Channels Message class; however, when using the ServiceBusMessagingBinding, the classic WCF Message class is used. The BrokeredMessage class consists of a number of methods and some interesting properties, including ContentType, CorrelationId, MessageId, Label and a property bag called Properties, which we’ll explore as we progress through the new features.
In this example, I’m using an overload of CreateMessage that accepts a serializable object. The method uses the DataContractSerializer with a binary XmlDictionaryWriter to create a BrokeredMessage.
   Order order = new Order();
   order.OrderId = 42;
   order.Products.Add("Kinect", 70.50M);
   order.Products.Add("SamsungFocus", 199.99M);
   order.Total = order.Products["Kinect"] + order.Products["SamsungFocus"];

   var msg = BrokeredMessage.CreateMessage(order);


Finally, we can send the message:

   var messageSender = queueClient.CreateSender();
   messageSender.Send(msg);
   messageSender.Close();


With much of the infrastructure code out of the way, we can use the same queueClient instance to create a MessageReceiver and request the BrokeredMessage from the Orders queue:

   var messageReceiver = queueClient.CreateReceiver(ReceiveMode.ReceiveAndDelete);
   var recdMsg = messageReceiver.Receive();
   messageReceiver.Close();

Note the ReceiveMode in the first line above. This has the effect of enforcing an “At Most Once” receive semantic (thanks, David Ingham), since the first consumer to read the message will pop it off the queue. The other option is ReceiveMode.PeekLock, which provides “At Least Once” delivery semantics. As David Ingham, Program Manager on the AppFabric team, kindly adds in his comments below: “If the consumer is able to record the MessageIds of messages that it has processed then you can achieve “ExactlyOnce” processing semantics with PeekLock mode too.” Thanks, David!
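To make David’s point concrete, here is a rough sketch of de-duplicating by MessageId on top of PeekLock. The durable store is faked with an in-memory HashSet, and the Complete call to settle the lock is my assumption about the CTP API, so treat this as a pattern sketch rather than a definitive implementation:

```csharp
// Sketch: approximate "Exactly Once" processing by recording MessageIds.
// In production the processed-ID store would need to be durable.
var processedIds = new HashSet<string>();

var peekLockReceiver = queueClient.CreateReceiver(ReceiveMode.PeekLock);
var lockedMsg = peekLockReceiver.Receive();
if (lockedMsg != null)
{
    if (!processedIds.Contains(lockedMsg.MessageId))
    {
        var lockedOrder = lockedMsg.GetBody<Order>();
        // ... process the order exactly once ...
        processedIds.Add(lockedMsg.MessageId);
    }
    lockedMsg.Complete(); // assumed: settles the peek-locked message either way
}
peekLockReceiver.Close();
```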

Once we have the BrokeredMessage, we can retrieve the body:
   var recdOrder = recdMsg.GetBody<Order>();
   Console.WriteLine("Received Order {0} with total of ${1}", recdOrder.OrderId, recdOrder.Total);

The cool thing about the CTP is that Microsoft is offering a free labs environment in which to explore and play. In the CTP, you can create a maximum of 100 queues, each with a maximum size of 100MB, and messages cannot exceed a payload size of 256KB, which are pretty workable constraints for a first CTP.

To get started, and dive in for yourself, be sure to download the Azure AppFabric V2 SDK at:

In my next post, we’ll explore Topics and Subscriptions which allow for rich pub-sub including the ability to multicast to thousands of potential subscribers.

Hats off to the AppFabric Messaging team for all of their hard work with this release!

Posted: May 17 2011, 08:00 by Rick.Garibay | Comments (0) RSS comment feed

Categories: AppFabric | Azure | Connected Systems | General | Headlines | WCF




