Running into Command Prompt from Explorer

Here is how to add a Command Prompt entry to the right-click menu of a folder in Windows Explorer.

Create a text file, type in the following lines and save it as addprompt.reg.

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Directory\shell\Command]
@="Command Prompt:"

[HKEY_CLASSES_ROOT\Directory\shell\Command\command]
@="cmd.exe /k cd /d \"%1\""

Save it out and double-click the file to import it into the registry.



I wonder if there is a way of doing the same for a .bat file. What I want is to be able to right-click on it and say "run under command prompt".
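I have not actually tried it, but a similar tweak against the batfile class (the registered type for .bat files) ought to work; the verb name RunInPrompt below is just my own choice:

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\batfile\shell\RunInPrompt]
@="Run in Command Prompt"

[HKEY_CLASSES_ROOT\batfile\shell\RunInPrompt\command]
@="cmd.exe /k \"%1\""
```

The /k switch should run the batch file and keep the console window open afterwards, instead of closing it as the default double-click action does.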

Visual Studio Crashes Whilst Working With Orchestration

I have been working quite a bit with BizTalk Server 2006 R2 lately. Suddenly I noticed that the orchestration designer crashed whenever I opened it. Here is a link that saved me.

Orchestration Designer Crash

Log4Net framework

Most projects need to log info/errors/warnings etc. for easy diagnosis or monitoring of the application in a production environment. I have used the Log4Net framework - an open source framework by Apache - to meet these goals, and I have found it much easier to use than the Microsoft Enterprise Library logging block.
  • It is much more configurable, as it provides a pretty comprehensive log4net.config section.
  • It comes with a good number of appenders out of the box.
  • It is extensible: I found it much easier to add custom appenders, configuration properties etc.
  • It has very good online documentation.
There are many other frameworks which can integrate their logging with log4net. For example:
  • Castle has provided an excellent plug-in to integrate it with WCF services. Since our application was exposed as a web service, we wanted to log every request and response to and from the service. With the Windsor container, we were able to intercept the request coming in and the response going out of the service, and in this interceptor we did our logging.
  • NHibernate also allows us to log all SQL queries fired through it. Here is a link describing how to do this:
    http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/07/01/how-to-configure-log4net-for-use-with-nhibernate.aspx
It is also considered faster than the Microsoft Enterprise Library logging block (check here).

Here is how I started with a simple implementation of loggers.



I created a class with a dictionary of loggers, keyed by logging level with the logger itself as the value. I wanted my application to call simple methods like Debug and Error and let the logging wrapper take care of choosing the logger. So I added static methods to the class which selected the appropriate logger and logged. In the static constructor I also had to configure my loggers once I had created them. I did that using

log4net.Config.XmlConfigurator.ConfigureAndWatch(new FileInfo("log4net.config"))

This loads the log4net.config from the running directory and configures the logger with the appenders etc.
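For reference, a minimal log4net.config along these lines might look like the following (the file path, appender name, and pattern are just examples, not the configuration from my project):

```xml
<log4net>
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="application.log" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maximumFileSize value="1MB" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="RollingFile" />
  </root>
</log4net>
```

ConfigureAndWatch will also monitor this file for changes, so logging levels can be adjusted in production without restarting the application.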

[NonSerialized] private static readonly IDictionary<LoggingLevel, ILog> loggers = new Dictionary<LoggingLevel, ILog>();

static Logger()
{
    loggers.Add(LoggingLevel.DEBUG, LogManager.GetLogger(LoggingLevel.DEBUG.ToString()));
    loggers.Add(LoggingLevel.INFO, LogManager.GetLogger(LoggingLevel.INFO.ToString()));
    loggers.Add(LoggingLevel.ERROR, LogManager.GetLogger(LoggingLevel.ERROR.ToString()));
    loggers.Add(LoggingLevel.FATAL, LogManager.GetLogger(LoggingLevel.FATAL.ToString()));
    log4net.Config.XmlConfigurator.ConfigureAndWatch(new FileInfo("log4net.config"));
}



public static string FormatErrorMessage(Exception ex)
{
    StringBuilder stringBuilder = new StringBuilder();
    stringBuilder.AppendFormat("Error message: {0} \n Stack trace: {1}",
        CompleteExceptionMessage(ex), CompleteStackTrace(ex));
    return stringBuilder.ToString();
}

public static void Debug(string message)
{
    loggers[LoggingLevel.DEBUG].Debug(message);
}

I had a slightly unusual requirement of logging the entire request and response message passing through the WCF service. In fact, I feel this is a common requirement, and log4net should provide an appender within the library to do this.

Initially, when we used the logging framework as is, we realized that our messages were getting serialized even when logging was turned off. This was because we used to serialize the message and then pass it as a string to the logger.

To overcome this, we had to extend the log4net implementation, overriding the Debug(object message) and Info(object message) methods. We also had to write our own appender, which would in turn serialize the object passed in and log it. It was simple: we just derived from the existing RollingFileAppender and overrode the Append method.



The reason we had to override the methods was that log4net internally converts everything to a string before constructing the LoggingEventData object. So even though we were passing in the object, we would get object.ToString(). We ended up creating a custom logger that extended the existing one. In it, we hid the existing logger methods with our own and created the LoggingEvent object the way we wanted. It was then picked up by our appender, configured in the log4net.config file.

In the class diagram, you will notice that I had to write a CommonLogManager as well. Since I have written a custom logger around the existing one, I need to wrap the logger that the log4net LogManager returns with the one I have written. That is exactly what WrapperMap allows me to do. Here is how:


public static class CommonLogManager
{
    private static readonly WrapperMap s_wrapperMap = new WrapperMap(WrapperCreationHandler);

    public static CommonLogImpl GetLogger(string name)
    {
        return WrapLogger(LoggerManager.GetLogger(Assembly.GetCallingAssembly(), name));
    }

    private static CommonLogImpl WrapLogger(ILogger logger)
    {
        return (CommonLogImpl) s_wrapperMap.GetWrapper(logger);
    }

    private static ILoggerWrapper WrapperCreationHandler(ILogger logger)
    {
        return new CommonLogImpl(logger);
    }
}
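The source of CommonLogImpl itself is not shown above, but the idea can be sketched roughly as follows. This is a simplified guess at the implementation, assuming the custom logger derives from log4net's LogImpl and overrides its virtual Debug so the raw message object (not its ToString()) reaches the appender via LoggingEvent.MessageObject:

```csharp
// Sketch only - not the original implementation.
public class CommonLogImpl : LogImpl
{
    private static readonly Type declaringType = typeof(CommonLogImpl);

    public CommonLogImpl(ILogger logger) : base(logger)
    {
    }

    public override void Debug(object message)
    {
        if (!IsDebugEnabled) return; // skip all work, including serialization, when logging is off

        // Build the LoggingEvent ourselves so MessageObject keeps the raw object.
        var loggingEvent = new LoggingEvent(
            declaringType, Logger.Repository, Logger.Name,
            Level.Debug, message, null);
        Logger.Log(loggingEvent);
    }
}
```

The IsDebugEnabled guard is what fixes the original problem: when logging is turned off, the message object is never serialized at all.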


If you look at my implementation of the Logger class, where I create the different loggers in a dictionary, it turns out that it is not testable. I would like to write tests around the logging I am doing, so I changed a few small bits in the class.


public class Logger
{
    [NonSerialized] private static IDictionary<LoggingLevel, ILog> loggers = new Dictionary<LoggingLevel, ILog>();

    static Logger()
    {
        loggers.Add(LoggingLevel.DEBUG, CommonLogManager.GetLogger(LoggingLevel.DEBUG.ToString()));
        loggers.Add(LoggingLevel.INFO, CommonLogManager.GetLogger(LoggingLevel.INFO.ToString()));
        loggers.Add(LoggingLevel.ERROR, CommonLogManager.GetLogger(LoggingLevel.ERROR.ToString()));
        loggers.Add(LoggingLevel.FATAL, CommonLogManager.GetLogger(LoggingLevel.FATAL.ToString()));
        XmlConfigurator.ConfigureAndWatch(new FileInfo("log4net.config"));
    }

    protected static IDictionary<LoggingLevel, ILog> Loggers
    {
        get { return loggers; }
        set { loggers = value; }
    }
}



And here is how my test looked like.

[TestFixture]
public class DemoLoggingTest
{
    #region Setup/Teardown

    [SetUp]
    public void Setup()
    {
        request = new request();
        request.operand1 = 10;
        request.operand2 = 0;
        request.secretpassword = "password";
        demoLogging = new DemoLibrary();
        mockRepository = new MockRepository();
    }

    #endregion

    private request request;
    private DemoLibrary demoLogging;
    private MockRepository mockRepository;

    private void CreateMockLogger(ILog mockCommonLogger)
    {
        var loggerStub = new LoggerStub();
        var logs = new Dictionary<LoggingLevel, ILog>();
        logs.Add(LoggingLevel.DEBUG, mockCommonLogger);
        logs.Add(LoggingLevel.ERROR, mockCommonLogger);
        loggerStub.injectLoggers(logs);
    }

    [Test]
    [ExpectedException(typeof (DivideByZeroException))]
    public void ShouldLogDebugLogs()
    {
        var mockCommonLogger = (ILog) mockRepository.CreateMock(typeof (ILog));
        CreateMockLogger(mockCommonLogger);
        mockCommonLogger.Debug(request);
        mockCommonLogger.Error(null);
        LastCall.On(mockCommonLogger).IgnoreArguments();
        mockRepository.ReplayAll();
        demoLogging.Divide(request);
        mockRepository.VerifyAll();
    }

    [TearDown]
    public void TearDown()
    {
        request = null;
        demoLogging = null;
        mockRepository = null;
    }
}

internal class LoggerStub : Logger
{
    public void injectLoggers(IDictionary<LoggingLevel, ILog> dictionary)
    {
        Loggers = dictionary;
    }
}



Writing my appenders was a piece of cake. Here is an example.

public class SerializingRollingFileAppender : RollingFileAppender
{
    protected override void Append(LoggingEvent loggingEvent)
    {
        LoggingEventData data = loggingEvent.GetLoggingEventData();
        data.Message = LogMessageHelper.FormatMessageBeforeLogging(loggingEvent);
        base.Append(new LoggingEvent(data));
    }
}

public class LogMessageHelper
{
    private static string SerializeMessage(object message)
    {
        if (message is string) return (string) message;
        using (var writer = new StringWriter())
        {
            SerializeMessage(message, writer);
            return writer.ToString();
        }
    }

    private static void SerializeMessage(object message, TextWriter writer)
    {
        new XmlSerializer(message.GetType()).Serialize(writer, message);
    }

    public static string GetThreadId()
    {
        return "Thread Id: " + Thread.CurrentThread.ManagedThreadId;
    }

    public static string FormatMessageBeforeLogging(LoggingEvent loggingEvent)
    {
        var sb = new StringBuilder();

        sb.Append(" ");
        sb.Append(GetThreadId());

        string data = SerializeMessage(loggingEvent.MessageObject);
        sb.Append(data);
        sb.Append(Environment.NewLine);
        return sb.ToString();
    }
}


Since our request/response messages contained confidential information which we did not want to log, we had to make our custom appender take in a set of regexes. These regexes were applied to the serialized message, and the matches were hashed. In order to do this, you basically need to add a property which, by convention, can then be set from the config file as well. This is how my appender looked:


public class SerializingRollingFileAppender : RollingFileAppender
{
    private readonly List<string> RegexFilterPatterns = new List<string>();

    public string RegexFilterPattern
    {
        set { RegexFilterPatterns.Add(value); }
    }

    protected override void Append(LoggingEvent loggingEvent)
    {
        LoggingEventData data = loggingEvent.GetLoggingEventData();
        data.Message = LogMessageHelper.FormatMessageBeforeLogging(loggingEvent, RegexFilterPatterns);
        base.Append(new LoggingEvent(data));
    }
}
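The masking step itself is not shown above; a rough sketch of how the patterns might be applied before writing follows. The helper name and the choice of SHA-256 hashing are mine, not from the original code:

```csharp
// Sketch: replace every match of each configured pattern with a hash,
// so confidential values never appear in the log in clear text.
public static string MaskConfidentialData(string message, IEnumerable<string> patterns)
{
    foreach (string pattern in patterns)
    {
        message = Regex.Replace(message, pattern, match =>
        {
            using (SHA256 sha = SHA256.Create())
            {
                byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(match.Value));
                return Convert.ToBase64String(hash);
            }
        });
    }
    return message;
}
```

Hashing rather than simply redacting has one nice property: two log entries carrying the same secret produce the same hash, so requests can still be correlated without revealing the value.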

I hope this is all that one may need for logging.
One thing I did miss in log4net, though, is the ability to make logging calls asynchronously.

Fitnesse Test

Currently I am working on a project that publishes a service for our clients to carry out purchases of their products. In order to carry out our functional testing, and to get some visibility into what our service was doing, we decided to use Fitnesse for automating our functional tests.

Fitnesse is a pretty neat framework for quickly creating different scenarios using a wiki-based editor, taking in values to simulate different behaviors.
Since our service came into play at the end of the workflow for the entire system, we had to write many fixtures for the various entities required by the purchase service.

A fixture is basically the same thing as a TestFixture in an NUnit test: a class which carries out different actions based on the input provided, after which assertions are made to validate the action. This often makes us wonder why we need to write Fitnesse tests when we can validate the same things with unit tests. The answer is that unit tests are pure code and do not offer a good representation of the functionality. They do help us document the role of every method we write, but they offer no visibility to business analysts.

So we decided that a story would be called dev-complete only when the pair had written fixtures for the test scenarios that the QA had specified. As soon as a story enters development, it is also played in QA. The QA only has to worry about the scenarios he is creating on the wiki. This lets him focus on the test scenarios for the story rather than on the code for automating them. Even though there is no UI to trigger the usual QA instincts, the wiki comes pretty close to giving QA the same feel and an environment to experiment with different data.

How do the fixtures work:
  • Fitnesse uses reflection to interact with the fixtures.
  • Primitive types are copied straight into the fixture's fields.
  • In order to call methods, we suffix the method name with ? or ().
  • In order to pass parameters to various methods, we can use an action fixture, where we define the action and the parameter for that action.
  • Any value specified under an action is then validated against the return value; it basically acts as the expected value, which is asserted against the return value.
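As an illustration (a made-up fixture, not one from our project), a simple ColumnFixture that a wiki table could drive might look like this:

```csharp
// Hypothetical fixture: operand1 and operand2 are filled in from the wiki
// table columns via reflection; quotient() is invoked for the "quotient?"
// column and its return value is compared against the expected cell.
public class DivisionFixture : fit.ColumnFixture
{
    public double operand1;
    public double operand2;

    public double quotient()
    {
        return operand1 / operand2;
    }
}
```

The corresponding wiki table would then read something like:

|DivisionFixture|
|operand1|operand2|quotient?|
|10|2|5|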

What I miss in Fitnesse:
Showing runtime output. I should be able to tell Fitnesse to take output from some stream, be it a console or a log file, and have it displayed while the test runs. It would make the tests more interactive and responsive.

Overall I like Fitnesse tests, as I don't have to go through all the web pages in the flow to verify whether my functionality is working fine or not, especially when working at the last stage of the process. Just click Test. :)


Pair Programming

Often, while playing the role of the observer during pair programming, I missed the touch of my keyboard and mouse when I wanted to point something out or take control of the coding. It was embarrassing whenever I suddenly jerked my pair for his keyboard and mouse.
Well, I was recently introduced to a cool application, Synergy (http://synergy2.sourceforge.net), to assist me while pairing. I now share my keyboard and mouse with my pair, and whenever I want to take control, I can simply use my own keyboard and mouse placed right in front of me. No hassle, no snapping for the keyboard and mouse.
Cheers :)

Volatile Keyword in C#

I came across this keyword when our code needed a lock for a thread-safe singleton class.

So here was the code:

public class ModuleCatalogue
{
    private static volatile ModuleCatalogue _moduleCatalogue;
    private static object syncRoot = new object();

    private ModuleCatalogue()
    {
    }

    public static ModuleCatalogue Instance
    {
        get
        {
            if (_moduleCatalogue == null)
            {
                lock (syncRoot)
                {
                    if (_moduleCatalogue == null)
                        _moduleCatalogue = new ModuleCatalogue();
                }
            }
            return _moduleCatalogue;
        }
    }
}

So why do we need to make _moduleCatalogue volatile? Mainly because the compiler and runtime try to be smart and optimize, for example by caching the field's value. The volatile keyword makes sure no such optimization is done on that memory location, so a thread always reads the most up-to-date value. The compiler and runtime may also reorder reads and writes around the field; in a multi-threaded environment this could let another thread observe the reference before the object's construction is complete, and volatile constrains that reordering too.

Test Driven Development - Test-First Coding

In test-first coding, as we write the test code before writing the class, we are motivated to think about how our class will be used; without it, we focus more on the implementation. This leads to a design that is simpler and more pragmatic. And once we get into developing the code, the unit tests can pretty much drive our design.

There are several benefits that one could see from an approach like this:

  1. Simplifies the design.
  2. Completely inverts the way we develop.
  3. Makes us think about how our object will be used.
  4. Helps us develop better interfaces that are easier to use.
  5. Changes the way we perceive things.
  6. Makes the code easily testable.
  7. Serves as an invaluable form of documentation.
  8. Makes the code robust.
  9. Creates the opportunity for us to think of the failures we need to accommodate.
  10. Provides a safety net as we refactor the code – the test cases are our angels.

There are at least three things we need to write the test for:

  1. Positive: what it should do correctly, assuming everything is ideal.
  2. Negative: what could go wrong, and how the code should behave.
  3. Exception: alternate sequences of events that could happen, and how the code should behave to accommodate them.
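For example, with a hypothetical Calculator class (all the names here are made up for illustration), the positive and negative/exception cases might look like:

```csharp
[TestFixture]
public class CalculatorTests
{
    // Positive: the happy path, assuming everything is ideal.
    [Test]
    public void ShouldDivideTwoNumbers()
    {
        Assert.AreEqual(5, Calculator.Divide(10, 2));
    }

    // Negative/exception: invalid input should fail loudly, not silently.
    [Test]
    [ExpectedException(typeof (DivideByZeroException))]
    public void ShouldThrowOnDivideByZero()
    {
        Calculator.Divide(10, 0);
    }
}
```

Writing the failure test first forces the decision about how Divide should behave on bad input before the implementation exists.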

Where to write a test:

  • Should be part of the project.
  • It can be within the class for testing private members.
Key things to keep in mind:
  1. Red/Green/Refactor should be our mantra.
  2. Do not make many changes at once.
  3. Place the tests near the code.
  4. Isolate your tests – failure of one should not affect the other.
  5. Write a test for a bug you find.
  6. Do not refactor code without having test cases to support.
  7. Test on all your platforms.