
Instrumentation with Semantic Logging Application Block from Microsoft Enterprise Library 6

Introduction

Instrumentation, the process of auditing and logging in your applications, is a vital part of any enterprise solution. When you are developing applications with Service Level Agreements and specific auditing requirements, logging is a big deal. Add in the decoupled nature of the cloud and Service-Oriented Architecture, and piecing together a clear end-to-end view of a business process becomes a real challenge.

Microsoft has recently released the Semantic Logging Application Block (SLAB) as part of Enterprise Library 6. This article forms the first part of a mini-series detailing the benefits, pitfalls and examples of how to make use of it in your enterprise.

Although new to the Enterprise Library, semantic logging is not a new concept. It has been around for years under different names such as "structured logging", "strongly typed logging" and "schematised logging", to name a few.

What is it?

SLAB inherits its event structure from the Event Tracing for Windows (ETW) pipeline, allowing it to store the payload of an event along with its metadata: the data travels together with a description of its format. ETW has built-in support for logging a stack trace along with the event and can also be extended, enabling you to add your own event data.

SLAB's secret sauce is that it makes light work of a reliable and robust logging infrastructure that has been around since Windows 2000. It abstracts us as developers from the messy details of manifests and publishing to ETW, which are typically overly complex, letting us focus on What you want to log and Where you want to store those logs.

What you want to log 

The abstraction of what you want to log comes in the form of an EventSource. You create a class which inherits from the EventSource class and decorate a set of methods with attributes which determine the following:

  • The parameters the log method expects 
  • The Logging Level
  • The Event Id
  • The Task (optional)
  • Operational Codes (optional)

ETW will also provide a timestamp for you. For illustration, a single log method decorated in this way might look like the following sketch (the event id, task and names here are hypothetical):
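
    // By convention, custom tasks live in a nested class inside your EventSource.
    public class Tasks
    {
        public const EventTask Checkout = (EventTask)1;
    }

    [Event(200,                              // the event id
        Level = EventLevel.Informational,    // the logging level
        Task = Tasks.Checkout,               // the task (optional)
        Opcode = EventOpcode.Start)]         // the operational code (optional)
    public void CheckoutStarted(string customerId) // the parameters the log method expects
    {
        WriteEvent(200, customerId);
    }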

This upfront design approach allows you to separate your design of what will be logged into a discrete, specific set of methods that will be called from your business logic. 

The downside to this approach, as you will find when you start using it in earnest, is the number of different methods you have to create in order to log each and every log or audit event. The current docs recommend splitting your EventSources into partial classes to help with manageability. My current preference is to create an EventSource for each business process, which I will demonstrate shortly.



Where you want to log

To store a log with SLAB you need to configure a listener and one or more Event Sinks. An event sink receives events, by default as an EventEntry type, and has the responsibility of storing the event data in a text file, event trace, database or other storage medium. Out of the box the SLAB contains the following sinks:
  • Console
  • Flat File
  • Rolling File
  • SQL Database
  • Windows Azure Table
You can configure sinks either in code or in XML configuration. Which you use is determined by whether you are listening for events in-process or out-of-process.

In-Process

When using the SLAB in-process, all configuration is performed in code. You first create an ObservableEventListener (which implements IObservable&lt;EventEntry&gt;) and subscribe to it with one or more event sinks.

Note: if you are using the block in-process, the ETW infrastructure is not used to collect events, but the block does use the same semantic approach to logging as ETW.

For the purposes of our sample, we will use the block in-process.

Out-of-Process

When using the SLAB out-of-process, all configuration is performed in XML. A Windows service acts as a listener which receives events from ETW, and the configuration sets up the subscriptions for the event sinks. The event sinks are then hosted by the Windows service.

The next article will highlight the configuration and benefits of using the block out-of-process.

Sample Code

The code sample associated with this article contains a console application which makes use of the SLAB in-process and logs to the console.

Road Map

At this time I envisage at least three parts:


  • Part 1 - Semantic Logging Application Block - Microsoft Enterprise Library 6 (In Process)
  • Part 2 - Semantic Logging in Windows Azure (Out of Process)
  • Part 3 - Extending Semantic Logging - Creating Custom Sinks for Windows Azure

Additional Reading - What is the Enterprise Library?

If you have used other blocks from the Enterprise Library before, feel free to skip this section. "Enterprise Library is made up of a series of application blocks, each aimed at managing specific crosscutting concerns. In case this concept is unfamiliar, crosscutting concerns are those annoying tasks that you need to accomplish in several places in your application. When trying to manage crosscutting concerns there is often the risk that you will implement slightly different solutions for each task at each location in your application, or that you will just forget them altogether. Writing entries to a system log file or Windows Event Log, and validating user input are typical crosscutting concerns. While there are several approaches to managing them, the Enterprise Library application blocks make it a whole lot easier by providing generic and configurable functionality that you can centralise and manage. What are application blocks? The definition we use is "pluggable and reusable software components designed to assist developers with common enterprise development challenges." Application blocks help address the kinds of problems developers commonly face from one line-of-business project to the next. Their design encapsulates the Microsoft recommended practices for Microsoft .NET Framework-based applications, and developers can add them to .NET-based applications and configure them quickly and easily." -  Direct quote: Enterprise Library Developers Guide.

The diagram below, taken from the Enterprise Library Developers Guide (preview), shows the different blocks, their dependencies, and the block we will be focusing on.

Figure 1: The Enterprise Library 6 application blocks and their dependencies.


Why another Logging Framework?

Like me, you have probably been using logging frameworks such as log4net, NLog and the Logging Application Block for many years. So why another framework, when Microsoft already has the Logging Application Block?

There are a lot of similarities with other frameworks in terms of features: logging events to multiple destinations, filtering and formatting. The real power of semantic logging comes from the way SLAB events are written to the logs. Your events are strongly typed, which enforces a consistent format and structure; there is no chance of a developer making a coding error when the log event is written, such as passing in the wrong event id or verbosity. Log results are easier to post-analyse and query due to their strongly typed nature and formatting, and easier to consume from other applications. You can more easily see the big picture in terms of correlation between business processes, and automation can take place in a much cleaner way, as the available sinks can utilise database and cloud storage mechanisms already in use and actively consumed by existing applications, preserving the actual state.

Responsibility of concerns

While traditional frameworks are great and easy to adopt in your applications, much of the responsibility for logging rests with the developers. For example, the logging level, and what gets logged at all, is constrained by what the developer writing a piece of logic deemed appropriate at the time. In a world of auditing, where systems have specific checkpoints and requirements, vital information can easily be missed, purely because a new developer adjusting existing code never fully understood the context, or simply chose the wrong type of event to log. Semantic logging removes this responsibility from the developers consuming the logging object and places it with the design of the logging requirements.

Rich Data - keeping the context

One of the biggest problems I have experienced when dealing with logs is their flattened nature and the differing types of information recorded for each entry. Some class with some method has logged an event containing information very specific to that operation, and very often not generalised enough to give an end-to-end view of a business process when reviewing the recorded log data.

Putting some meat on the bone (what do I need to do as a dev?)

I created a simple console app to demonstrate using the SLAB in-process.

Get References

The first thing you will want to do is add references to the SLAB. The recommended approach is to install the package via NuGet.
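From the Package Manager Console this is a single command (assuming the package id EnterpriseLibrary.SemanticLogging):

    PM> Install-Package EnterpriseLibrary.SemanticLogging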



Click 'Accept' on the license agreement and after a few moments the Microsoft.Practices.EnterpriseLibrary.SemanticLogging assembly will be added to your references.



Designing an EventSource (The What)

We first create a class which derives from EventSource. EventSource lives in the System.Diagnostics.Tracing namespace, so you will need to add a using statement as well.

The next step is to define a method that will represent a log. This is simply a method with a void return type that takes as arguments one or more of the following CLR types:
  • String
  • Int32 (int)
  • Int64 (long)
  • Guid
Surprisingly, as of writing this article, the DateTime type is not supported; neither are complex types.

After you have defined your method/s, you need to decorate them with the [Event] attribute and add the necessary values. The minimum required is to specify the Event Id.

Note: the same event id value needs to be passed into the WriteEvent() method call inside the body of the method, as shown in the sample.

The EventSource from the sample code looks like the following minimal sketch (the class name MyCompanyEventSource and the message strings are illustrative):
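
    using System.Diagnostics.Tracing;

    // Illustrative reconstruction of the sample's event source.
    public class MyCompanyEventSource : EventSource
    {
        // Singleton instance, exposed via a public property for consumers.
        private static readonly MyCompanyEventSource Instance = new MyCompanyEventSource();

        public static MyCompanyEventSource Log
        {
            get { return Instance; }
        }

        private MyCompanyEventSource() { }

        [Event(1, Message = "Application starting: {0}")]
        public void StartApplication(string applicationName)
        {
            // The first argument must match the Event attribute's id.
            WriteEvent(1, applicationName);
        }

        [Event(2, Message = "Application ending: {0}")]
        public void EndApplication(string applicationName)
        {
            WriteEvent(2, applicationName);
        }
    }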



The sample demonstrates two log events: Start and End application.

Although the sample is very basic, the real challenge starts to emerge as your event requirements grow. Future articles will address this in the samples.

The next step is to enable the EventSource to be consumed by providing an instance of it. The simplest way of achieving this is to create a singleton instance of the EventSource and expose it via a public property, as shown in the sketch above.

Logging an Event

Now the design of our events is complete, it's time to put on the developer's hat and log some events against our event source, as shown below (using the illustrative MyCompanyEventSource from earlier):
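
    // Log the start of the application (the name passed in is illustrative).
    MyCompanyEventSource.Log.StartApplication("SLAB Console Sample");

    // ... the application's work happens here ...

    // Log the end of the application.
    MyCompanyEventSource.Log.EndApplication("SLAB Console Sample");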




At this point, we have performed all the upfront tasks that are required. A developer can start logging at the appropriate points in the application without needing to know any of the internals: what logging levels have been set, which event id is associated with each event, or even where the events will be stored. All of this is taken care of by the design of the EventSource. It may be that this level of detail has not yet been decided upon, and yet development can move ahead seamlessly.

Consuming events (The Where)

Before any events can be stored anywhere, an event source listener needs to be set up and sinks need to subscribe to it.

As mentioned earlier, there are a number of sinks available out of the box:
  • Console
  • Flat File
  • Rolling Flat File
Also, by adding further references available via NuGet packages, it is surprisingly easy to add and configure:
  • SQL Database Sink
  • Windows Azure Table Sink
We will demonstrate these in the next article.


For our first sample we will use the Console sink. But first we need to create a listener, as shown below (a minimal sketch; EnableEvents comes from the base EventListener class and LogToConsole is the SLAB console sink extension method):
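    using System.Diagnostics.Tracing;
    using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

    // Create the in-process listener (inside Main(), for example).
    var listener = new ObservableEventListener();

    // Listen to every event our event source produces.
    listener.EnableEvents(MyCompanyEventSource.Log, EventLevel.LogAlways);

    // Subscribe the console sink to the listener.
    listener.LogToConsole();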




Simple Output

After running the console application we can see two events being output, demonstrating the Event Id, Message, Event Name and Level.




Adding another listener

Next we add another listener, this time making use of the out-of-the-box flat file sink. As shown below, with just a couple of lines of code our logs are also stored in a text file, Log.txt.
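
A sketch of the additional listener, using the LogToFlatFile extension method (the file name is from the sample):

    // A second listener whose subscription stores events in a text file.
    var fileListener = new ObservableEventListener();
    fileListener.EnableEvents(MyCompanyEventSource.Log, EventLevel.LogAlways);
    fileListener.LogToFlatFile("Log.txt");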


Opening the log file shows the expected results.




Formatting the results
One nice feature of the sinks is the ability to add a formatter. This will be quite attractive to those wanting to migrate from an existing logging framework to semantic logging but who still want to retain their current log format, so that post-analysis can continue without interruption after moving to SLAB. There are a number of formatters available out of the box, for example a JSON formatter.

To show how simple it is to customise the output, we have added a simple console colour mapper which changes the colour of the console text based on the event level.

First we add an event Level to the [Event] attribute on the event source, as shown below:
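
    [Event(1, Message = "Application starting: {0}", Level = EventLevel.Informational)]
    public void StartApplication(string applicationName)
    {
        WriteEvent(1, applicationName);
    }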



Now the design of the event source has been updated, the colour mapper is added as shown below (a sketch assuming SLAB's IConsoleColorMapper interface, where returning null falls back to the sink's default colour):
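
    // Maps each event level to a console colour; null means "use the default".
    public class CustomColorMapper : IConsoleColorMapper
    {
        public ConsoleColor? Map(EventLevel eventLevel)
        {
            switch (eventLevel)
            {
                case EventLevel.Error:
                    return ConsoleColor.Red;
                case EventLevel.Warning:
                    return ConsoleColor.Yellow;
                case EventLevel.Informational:
                    return ConsoleColor.Green;
                default:
                    return null;
            }
        }
    }

    // Pass the mapper (along with a text formatter) when subscribing the console sink:
    listener.LogToConsole(new EventTextFormatter(), new CustomColorMapper());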


With very little effort, we have formatted our output in a way that adds real value. Running the console application once more shows the colour-coded results.




Source Code

Semantic Logging Console Part1 (Updated 8th June 2013)



Conclusion

Does Semantic Logging solve all our logging desires?

In short: kinda, and it depends. It goes a long way towards separating concerns, enabling a true design of auditing and logging needs, and it makes that process a first-class citizen. If you have specific auditing requirements that map individually to specific logging methods, it gets you a long way out of the box. If you want a more generic solution, it is not as flexible yet, although you can create custom event sources that map onto logging frameworks which take, for example, a level and an exception. That said, you can always come up with creative ways to bend any technology to suit where necessary. Semantic Logging has enough extensibility points to customise what you need, and I aim to highlight those in future articles.

It's definitely a shift in thinking and forces more thought into designing logs before consumption, which I am certainly in favour of from a design perspective. It also separates instrumentation from your business logic and enables the two tasks to run in parallel, which is useful for larger teams.

Deriving an EventSource from an interface caused me a few problems, in particular around the ordering of the methods, which appears crucial, so I left it out of the sample code. This makes testing a bit of a pain, although there are some specific test methods for an EventSource that help to validate it out of the box. These will be detailed in the next article.

Bear in mind this is version 1 of the block, so we can hopefully expect improvements to authoring, configuration and features.

More to come soon...

Check out some other musings via my Blazor and Azure blog here: carlrandall.net
