Tuesday, March 29, 2011

A Taxonomy of Test Doubles

Many words have been written in the TDD community about the myriad ways of mocking in service of unit tests. After all this effort, there remains a great deal of confusion and ambiguity in the understanding of many--maybe even most--developers who use mocks.

No less than the likes of the eminently wise Martin Fowler has tackled the subject. Fowler's article is indispensable, and it in large part built the foundation of my own understanding of the topic. But it is quite long, and was originally written several years ago, when mocks were almost exclusively hand-rolled, or created with the record/replay idiom that was popular in mocking frameworks before lambdas and expressions were added to C# and VB.NET with Visual Studio 2008. Add to that the fact that the article was written in the context of a long-standing argument between two different philosophies of mocking.

Unfortunately these arguments continue even today, as can be seen in the strongly-worded post that Karl Seguin wrote last week. Looking back now, with several more years of community experience and wisdom in unit testing and mocking behind us, we can bring a bit more perspective to the discussion than was available at that time. But we won't throw away Fowler's post completely. Within it are firm foundations we can build on: the definitions of the different types of test doubles that Fowler identified.

There are four primary types of test doubles. We'll start with the simplest, and move through in order of ascending complexity.

Dummies

A dummy is probably the most common type of test double. It is a "dumb" object that has no real behavior. Methods and setters may be called without raising exceptions, but they have no side effects, and getters return default values. Dummies are typically used as placeholders to fill an argument or property of a specific type that won't actually be used by the test subject during the test in question. Even though a "real" object wouldn't actually be exercised, an instance of a concrete type may have strings attached, such as dependencies of its own, that would make the test setup difficult or noisy.

Dummies are most efficiently created using a mock framework. These frameworks will typically allow a mock to be created without actually configuring any of its members. Instead they will provide sensible defaults, should some innocuous behavior be necessary to satisfy the subject.
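
For example, here is a sketch using Moq (purely as an illustration of the idea; the ILogger and Calculator types are hypothetical). The dummy is just a mock that is never configured:

    using Moq;

    public interface ILogger
    {
        void Write(string message);
    }

    public class Calculator
    {
        private readonly ILogger _logger;
        public Calculator(ILogger logger) { _logger = logger; }
        public int Add(int a, int b) { return a + b; }
    }

    public class CalculatorTests
    {
        public void Add_SumsOperands()
        {
            // The dummy: no members configured. Moq's loose mode supplies
            // harmless defaults if anything on it is ever called.
            var logger = new Mock<ILogger>();

            var subject = new Calculator(logger.Object);

            // Add never touches the logger; the dummy just fills the slot.
            // Assert subject.Add(2, 3) == 5 with your test framework.
        }
    }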

Stubs

A stub is a test double which serves up "indirect input" to the test subject. An indirect input is information that is provided to an object not by the caller of its methods or properties, but rather in response to a method call or property access that the subject itself makes on one of its dependencies. An example of this would be the result of a factory creation method. Factories are a type of dependency that is quite commonly replaced by a stub: their whole purpose is to serve up indirect input, so that the product doesn't have to be provided directly when it may not be available at the time.

Stubs tend to be quite easy to set up, even with more primitive mocking frameworks. Typically, all that is needed is to specify ahead of time the value that should be returned in response to a particular call. The usual simplicity of stubs should not be taken as a guarantee that these doubles will stay simple, however. Stubs can get quite complex if they need to yield a variety of different objects across multiple calls. The setup for this kind of scenario can get messy quickly, and that should be taken as a sign to move on to a more complex type of double.
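
To make that concrete, here is a sketch of a factory stub, again with Moq as an example framework (the connection and factory interfaces are hypothetical):

    using Moq;

    public interface IConnection { }

    public interface IConnectionFactory
    {
        IConnection Create();
    }

    public class Importer
    {
        private readonly IConnectionFactory _factory;
        public Importer(IConnectionFactory factory) { _factory = factory; }
        public void Import() { var connection = _factory.Create(); /* use it */ }
    }

    public class ImporterTests
    {
        public void Import_PullsConnectionFromFactory()
        {
            var connection = new Mock<IConnection>();
            var factory = new Mock<IConnectionFactory>();

            // The stub's one job: serve up the indirect input, specified
            // ahead of time.
            factory.Setup(f => f.Create()).Returns(connection.Object);

            var subject = new Importer(factory.Object);
            subject.Import();
        }
    }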

Mocks

A mock is a type of test double that is designed to accept and verify "indirect output" from the subject class. An indirect output is a piece of information that is provided by the test subject to one of its dependencies, rather than as a return value to the caller. For example, a class that calls Console.WriteLine with a message for printing to the screen is providing an indirect output to that method.
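
Here is a sketch of a mock verifying such an indirect output, with Moq once more standing in for your framework of choice (the IMessageWriter and Greeter types are hypothetical):

    using Moq;

    public interface IMessageWriter
    {
        void WriteLine(string message);
    }

    public class Greeter
    {
        private readonly IMessageWriter _writer;
        public Greeter(IMessageWriter writer) { _writer = writer; }
        public void Greet(string name) { _writer.WriteLine("Hello, " + name); }
    }

    public class GreeterTests
    {
        public void Greet_WritesGreetingToDependency()
        {
            var writer = new Mock<IMessageWriter>();
            var subject = new Greeter(writer.Object);

            subject.Greet("World");

            // Verify the indirect output: the subject handed exactly this
            // message to its dependency.
            writer.Verify(w => w.WriteLine("Hello, World"), Times.Once());
        }
    }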

The term "mock" for a particular type of test double is in a certain way unfortunate. In the beginning there was no differentiation. All doubles were mocks. And all the frameworks that facilitated easy double creation were called mocking frameworks. The reason that "mock" has stuck as a particular type of double is because in those beginning times, most test doubles tended to take a form close to what we today still call a "mock". Mocks were used primarily to specify an expectation of a particular series of method calls and property access.

These "behavioral mocks", or "classical mocks" as Fowler calls them, gave birth to the record/replay idiom for mock configuration that reached its peak in the days of RhinoMocks. And due to the tendency of inexperienced developers to create complicated object interactions and temporal coupling, mocks continue to be a very popular and common form of test double. Mocking frameworks make it far easier to unit test classes that rely on these types of coupling. This has led many to call for the abolishment of mocks and mocking frameworks in a general sense, claiming that they provide a crutch that makes it too easy to leave bad code in place. I'm sympathetic to the sentiment, but I think that this is throwing the baby out with the bathwater.

Fakes

Fakes are the most complicated style of test double. A fake is an object that acts simultaneously as both a stub and a mock, providing bidirectional interaction with the test subject. Often fakes are used to provide a substantial portion of the dependency's interface, or even all of it. This can be quite useful in the case of a database dependency, for example, or a disk storage service. Properly testing an object that makes use of storage or persistence mechanisms often requires testing a full cycle of behavior which includes both pushing to and pulling from the storage. An in-memory fake implementation is often a very effective way of avoiding relying on such stateful storage in your tests.
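
For instance, a hand-rolled in-memory fake of a hypothetical key/value storage interface might look like this. It supports a full write-then-read cycle without any real persistence:

    using System.Collections.Generic;

    public interface IKeyValueStore
    {
        void Put(string key, string value);
        string Get(string key);
    }

    // Both stub and mock at once: the subject pushes values in through the
    // same interface it reads them back from, and the test can inspect or
    // pre-seed the dictionary as needed.
    public class InMemoryKeyValueStore : IKeyValueStore
    {
        private readonly Dictionary<string, string> _items =
            new Dictionary<string, string>();

        public void Put(string key, string value) { _items[key] = value; }

        public string Get(string key)
        {
            string value;
            return _items.TryGetValue(key, out value) ? value : null;
        }
    }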

Given their usefulness, fakes are also probably the most misused type of test double. I say this because many people create fakes using a mocking framework, thinking they are creating simple mocks. Or worse, they knowingly implement a full-fledged fake using closures around the test's local variables. Unfortunately, due to the verbosity of mocking APIs in static languages, this can very easily become longer and more complex code than an explicit test-specific implementation of the interface/base class would be. Working with very noisy, complicated, and fragile test setup is dangerous, because it's too easy to lose track of what is going on and end up with false passes. When your test's "arrange" step starts to overshadow the "act" and the "assert" steps, it's time to consider writing a "hand-rolled fake". Hand-rolled fakes not only remove brittle and probably redundant setup from your tests, but they can also often be reused very effectively throughout all the tests for a given class, or even multiple classes.

It's Not Just Academic

These are the primary categories into which nearly all, if not all, test doubles can be grouped. Fowler did a great job of identifying the categories, but I think this crucial information is buried within a lot of context-setting and illustration that doesn't necessarily offer great value today. Mocking is ubiquitous among the subset of developers that are doing unit testing. But too many people go about unit testing in an ad hoc fashion, rather than deliberately with a plan and a system for making sense of things. I believe that a simple explanation of the major types and usages of test doubles, as I've tried to provide here, can aid greatly in bringing consistency and clarity of intent to developers' unit tests. At the very least, I hope it can instill some confidence that, with a little discipline, pattern and reason can be found in the often messy and overwhelming world of unit testing.

Tuesday, March 22, 2011

YAGNI Abuse

Have you ever proposed a code change or a course of action in a project for the purpose of improving the stability and maintainability of the code base, only to have someone dispute the need on the basis of YAGNI? I was flummoxed the first time this happened to me. Since then I've learned that it's not at all rare, and in fact may even be common.

The YAGNI principle is a wonderful thing. Used properly, it can have a huge beneficial impact on your productivity, your schedule, and on the maintainability of your product. But like so many other important ideas in the history of software development, YAGNI has become a poorly understood and misused victim of its fame. Through constant abuse it has become difficult to communicate the sentiment that it was intended for without a thorough explanation. And I can't count the number of times I've heard YAGNI cited in a completely incorrect or even dangerous way.

The term "YAGNI" has fallen prey to a similar disease as "agile". People often invoke it as an excuse not to do something that they don't want to do. Unfortunately, this quite often includes things that they should do. Things that have long constituted good design, and good software engineering practice. A few examples of things that I have personally been horrified to hear disputed on YAGNI grounds include:

These are all activities that are strongly valued and diligently practiced in the most productive, successful, and small-A-agile software development organizations and communities. For myself and many of you out there, it's patently obvious that this is a subversion and abuse of the YAGNI strategy. Your first instinct in response to this kind of misuse is to say, with no little conviction, "that's not what YAGNI means."

This, of course, will not convince anyone who has actually attempted to use the YAGNI defense to avoid good engineering practices. But to refute them, you needn't rely solely on the forceful recitation of the principle as they do. Fortunately for us, YAGNI is not an elemental, indivisible tenet of software engineering. It did not spring fully-formed from Ron Jeffries' head. Rather it is based in experience, observation, and analysis.

It is clear from reading Jeffries' post on the XP page that the key to sensible use of the YAGNI principle is remembering that it tells you not to add something that you think you might or probably will need, or even certainly will need, in the future. YAGNI is a response to the urge to add complexity that is not bringing you closer to the immediate goal. Particularly common instances of true YAGNI center on features that haven't been identified as either crucial or wanted, such as configurability, alternative logging mechanisms, or remote notifications.

Looking at my original list, it is clear that none of these things truly add complexity in this way. A very naive metric of complexity such as "number of code entities" may seem to indicate the opposite. But these are actually all established and reliable methods for controlling complexity. What these techniques all have in common is that they restrict the ways in which parts of your program are allowed to interact with other parts of your program. The interaction graph is the most dangerous place for complexity to manifest in a program, because it makes it hard to change any one part of the application without knocking over the rest of it like a row of dominoes. The practices I identified above, which are so often refuted as "adding complexity", are some of the many ways to guide your application toward a sparse, loosely coupled interaction graph, and away from a dense tangle in which everything touches everything else.
There is a multitude of practices, patterns, and design principles that help keep your modules small, their scopes limited, and their boundaries well-defined. YAGNI is one of them, but not the only one. Claiming YAGNI to avoid this kind of work is "not even wrong". Not only are you gonna need it, but you do need it, right from day one. Working without these tools is seeding the ground of your project with the thorns and weeds of complexity. They provide you with a way to keep your code garden weed-free. In this way they are kin to YAGNI, not its enemy. Claiming otherwise reveals either a disrespect for, or a lack of understanding of, the benefits of good design and engineering practices in a general sense. So the next time someone sets up this contradiction in front of you, don't let them get away with it. Show your knowledge, and stand up for quality and craft.

Tuesday, March 15, 2011

Retrospective on a Week of Test-First Development

Any programmer who is patient enough to listen has heard me evangelizing the virtues of Test-Driven Design. That is, designing your application, your classes, your interface, for testability. Designing for testability unsurprisingly yields code which can very easily have tests hung onto it. But going beyond that, it drives your code to a better overall design. Put simply, this is because testing places the very same demands on your code as does incremental change.

You likely already have an opinion on whether that is correct or not. In which case, I'm either preaching to the choir, or to a brick wall. I'll let you decide which echo chamber you'd rather be in, but if you don't mind hanging out in the pro-testability room for a while, then read on.

Last week I began a new job. I joined a software development lab that follows an agile process, and places an emphasis on testability and continuous improvement. The lead architect on our development team has encouraged everyone to develop ideally in a test-first manner, but I'm not sure how many have taken him up on that challenge. I've always wondered how well it actually works in practice, and honestly, I've always been a bit skeptical of the benefits. So I decided this big change of environment was the perfect opportunity to give it a shot.

After a week of test-first development, here are the most significant observations:
  1. Progress feels slower.
  2. My classes have turned out smaller, and there are more of them.
  3. My interfaces and public class surfaces are much simpler and more straightforward.
  4. My tests have turned out shorter and simpler, and there are more of them.
  5. I spent a measurable amount of time debugging my tests, but a negligible amount of time debugging the subject classes.
  6. I've never been so confident before that everything works as it is supposed to.

Let's break these out and look at them in detail.

1. Progress feels slower.

This is the thing I worried most about. Writing tests has always been an exercise in patience, in the past. Writing a test after writing the subject means sitting there and trying to think of all the ways that what you just wrote could break, and then writing tests for all of them. Each test includes varying amounts of setup and dependency mocking. And mocking can be tough, even when your classes are designed with isolation in mind.

The reality this week is that yes, from day to day, hour to hour, I am writing less application code. But I am re-writing code less. I am fixing code less. I am redesigning code less. While I'm writing less code, it feels like each line that I do write is more impactful and more resilient. This leads very well into...

2. & 3. My classes have turned out smaller, and there are more of them.
My interfaces and public class surfaces are much simpler and more straightforward.

The next-biggest worry I had was that in service of testability, my classes would become anemic or insipid. I thought there was a chance that my classes would end up so puny and of so little presence and substance that it would actually become an impediment to understandability and evolution.

This seems reasonable, right? Spread your functionality too thin and it might just evaporate like a puddle in dry heat. Sprinkle your functionality across too many classes and it will become impossible to find the functionality you want.

In fact the classes didn't lose their presence. Rather I would say that their identities came into sharp and unmistakable focus. The clarity and simplicity of their public members and interfaces made it virtually impossible to misuse them, or to mistake whether their innards do what they claim to. This enhances the value and impact of the code that consumes it. Furthermore it makes test coverage remarkably achievable, which is something I always struggled with when working test-after. On that note...

4. My tests have turned out simpler, and there are more of them.

The simple surface areas and limited responsibilities of each class significantly impacted the nature of the tests that I am writing, compared to my test-after work. Whereas I used to spend many-fold more time "arranging" than "acting" and "asserting", the proportion of effort this step takes has dropped dramatically. Setting up and injecting mocks is still a non-trivial part of the job. But now this tends to require a lot less fiddling with arguments and callbacks. Of course an extra benefit of this is that the tests are more readable, which means their intent is more readily apparent. And that is a crucial aspect of effective testing.

5. I spent a measurable amount of time debugging my tests, but a negligible amount of time debugging the subject classes.

There's not too much to say here. It's pretty straightforward. The total amount of time I spent in the debugger and doing manual testing was greatly reduced. Most of my debugging was of the arrangement portions of tests. And most of that ended up being due to my own confusion about bits of the mocking API.

6. I've never been so confident before that everything works as it is supposed to.

This cannot be overstated. I've always been fairly confident in my ability to solve problems. But I've always had terrible anxiety when it came to backing up correctness in the face of bugs. I tend to be a big-picture thinker when it comes to development. I outline general structure, but before ironing out all the details of a given portion of the code, I'll move on to the interesting work of outlining other general structure.

Test-first development doesn't let me get away with putting off the details until there's nothing "fun" left. If I'm allowed to do that then by the time I come back to them I've usually forgotten what the details need to be. This has historically been a pretty big source of bugs for me. Far from the only source, but a significant one. Test-driven design keeps my whims in check, by ensuring that the details are right before moving on.

An Unexpected Development

The upshot of all this is that despite the fact that some of the things I feared ended up being partially true, the net impact was actually the opposite of what I was afraid it would be. In my first week of test-first development, my code has made a shift toward simpler, more modular, more replaceable, and more provably correct code. And I see no reason why these results shouldn't be repeatable, with some diligence and a bit of forethought in applying the philosophy to what might seem an incompatible problem space.

The most significant observation I made is that working like this feels different from the other work processes I've followed. It feels more deliberate, more pragmatic. It feels more like craft and less like hacking. It feels more like engineering. Software development will always have a strong art component. Most applied sciences do, whether people like to admit it or not. But this is the first time I've really felt like what I was doing went beyond just art plus experience plus discipline. This week, I feel like I moved closer toward that golden ideal of Software Engineering.

Tuesday, March 8, 2011

Strategy vs. Tactics in Coding Standards

I'm starting a new job on March 7. As I wrapped up my time with the company where I've spent the past 3 years, I spent some time thinking about the decisions and responsibilities that I was given, and that I took on, while there. One of the reasons I was hired was to bring to the development team my more formal education and professional experience as a software developer. And one of the responsibilities I was allowed was to lead regular discussions with the other developers. The goal of these discussions was to help our team to become more professional in the way we developed software. Of course it was only a matter of time before someone on the team brought up coding standards.

I've had enough experience with coding standards to have developed some skepticism to the way that they are typically applied. It sounds good, in theory. Bring some consistency, order, and predictability to the source code produced by the group. But in my experience, the practice rarely delivers on the promise. It's easy to start a fight on this topic, of course. It tends to be a "religious" topic in that people have strong and entrenched opinions, and are rarely shaken from them regardless of argument. This is unfortunate, because I think there is a way to make them work. But it requires thinking about coding standards in a different kind of way.

The challenge of establishing standards, and the reason standards tend to be so fractious, is that "what works" must necessarily vary depending on the case. But very rarely does an organization that establishes coding standards allow for much variance in response to new problems or process friction. To figure out why this is, and how we can deliver on some of the promise of standards without resorting to brainwashing, let's take a closer look at the types of coding standards we see in the wild.

In my experience, coding standards tend to fall into one of two broad categories. There are strategic coding standards, which are general and outcome-oriented. And there are tactical coding standards, which are specific and mechanics-oriented.

I'll address the latter first, as these are the kinds of standards that tend to start religious wars. Here are some examples of tactical coding standards:
  • All method headers must include a description, pre-conditions, side-effects, argument descriptions, and return value description.
  • Use C#'s "var" keyword only for variables of anonymous data types.
  • Place each method argument on its own line.

At this point you're probably at least nodding your head in recognition. Maybe some of you are nodding in satisfied approval, while others are holding back an outburst of disgust. When most people hear the words "coding standards", the tactical ones are the ones they think of. And these often evoke strong emotional responses. Their most common adherents tend to be managers or project leads, and often regular developers who've been around the block a few times.

The reason people want tactical coding standards is simple. They bring a modicum of consistency to a wild and woolly world. You often can't trust that the guy writing code in the next cube can code his way out of a paper bag. So when you have to take over his code after he's gone it's nice to know that, at the very least, he followed the standards. If all else fails, there is one thing that will provide sure footing as you plumb the dangerous depths of their code.

I understand this, and I sympathize. But at the same time, I chafe under these standards. They feel like micromanagement to me. It makes me feel like my coworkers don't trust me to do the thing that I was hired to do. There are organizations where this can't be taken for granted. But my own personal response to those environments is to leave them. I want to work with people I can trust at this level, and that can trust me the same.

It's not all bad for tactical standards, and we'll circle back in a bit. But for now let's look at some examples of strategic coding standards.
  • Write informative class headers.
  • Endeavor to keep methods to less than 25 lines in length.
  • Use whitespace and indenting to identify related code.

Strategic coding standards are broad. They do not tell the developer exactly what to write. Rather they set an expectation of the kind of code that should be written, and leave it to the developer as to how exactly that kind of code should be created. In my experience, these are the kinds of standards that are okay to mandate and enforce from a management perspective. Evoke the general feel of the code you want your team to produce, and then let them use their skill, experience, and professionalism to decide how to make it happen.

You can probably tell that in general I feel a lot better about this type of standard than the other. For one, they stay out of your way on a minute-to-minute basis. They're also far easier to get people to agree upon. Rather than being a cynical defensive measure against bad code, they empower the developer while fostering good coding habits.

Now having said all that, I do think there is a place for tactical standards. One big reason that I named these categories as I did is because I think that the role of establishing each type of standard can map to the organization similarly to how the roles map in the military, from which these terms are taken. Strategic standards can be mandated by leadership without stifling the developers. While the particular tactics can be determined by the people "on the ground".

There are many cases where tactical coding standards are beneficial. But it almost always serves the organization better to let the developers establish and enforce them. Take a team with multiple people all sharing equally in ownership of a codebase, each working from day to day on code written by the others. Imagine further that the code is sensitive in some way that can't abide regression bugs or the like. Maybe there is also a huge amount of code, or a large number of coders.

In these types of situations, it's crucial that the developers be able to pick up a strange piece of code and get a quick picture of what's going on. And to be able to modify the code without introducing a foreign-looking island of code to trip up the next guy. To do this, the developers have to be comfortable with the low-level style aspects of the code, both to read and to write. The best hope a team has of developing this comfort is to, as a self-directed group, come to agreement on common-denominator style standards over time and through shared experience.

There is a balance to be struck here. We want to ensure quality output from our teams, but we don't want to turn our developers into code-generators that don't take responsibility for their own work product. I think this balance is best struck by the proper application of each type of coding standard, in the proper context. If leaders can establish strategic standards that express high-level goals for clean code and understandable designs, then they can empower their teams to find solutions to the lower-granularity challenges such as understanding each other, and reducing the friction involved in cooperation on a shared codebase. This is strategy and tactics in practice, and at its best.

Tuesday, March 1, 2011

Anatomy of a Windows Service - Part 4

Deciding how to trigger the work of the service usually comes down to some sort of threading solution. And threading in Windows services is, as everywhere else, a thorny issue. The most basic need we have is to be able to respond to orders from the Service Control Manager (SCM) without blocking processing at an inconvenient spot. Beyond that, we may want to have a bit of work kicking off periodically, with a short period, or we may want to respond to some environmental event.

Responding to environmental events is fairly easy when the framework provides a callback mechanism for it, such as the FileSystemWatcher. But if there's no such facility, then you're stuck with polling, which is in the same boat as the guy who needs to do work on a quick periodic schedule. This leads us to worker threads, and all the headaches that go along with that.

I have no intention of getting deep into threading concepts and strategies here. Not least because I am woefully uninformed and inexperienced with them. If you need to be highly responsive, highly predictable, or highly parallel, that's an entirely new problem space. So it will have to suffice to say here that you should not just hack something together quickly. Do your research and do it right.

What I have done in the past is to use a System.Threading.Timer to fire a callback on my desired period. But I use a flag to make sure multiple work units do not process in parallel, and I don't worry about missing a period by a nanosecond simply because the work completed just after I checked the flag. Instead I just wait for the next one. This limits the responsiveness of my service, but it's simple and reliable because it sidesteps the issues that arise out of mutually writeable shared state and truly parallel work.

For our sample project here, I introduced this mechanism by creating a worker class that manages the timer. Let's take a look at that.
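
In sketch form, it looks something like this (the GreetingWorker name, the five-second interval, and the flag details are stand-ins for the real particulars):

    using System;
    using System.Threading;

    public class GreetingWorker
    {
        private Timer _timer;
        private int _working;  // 0 = idle, 1 = processing a tick

        // Hard-coded for now; we'll make this configurable below.
        private const int IntervalMilliseconds = 5000;

        // The flag properties that guard the timer event.
        public bool IsStarted { get { return _timer != null; } }
        public bool IsWorking { get { return _working != 0; } }

        public void Start()
        {
            if (IsStarted) return;
            // Setting the interval starts things rolling.
            _timer = new Timer(OnTimer, null, 0, IntervalMilliseconds);
        }

        public void Stop()
        {
            if (!IsStarted) return;
            // Clearing the interval stops future ticks...
            _timer.Change(Timeout.Infinite, Timeout.Infinite);
            // ...and disposing with a wait handle blocks until any
            // in-flight callback has drained.
            using (var done = new ManualResetEvent(false))
            {
                _timer.Dispose(done);
                done.WaitOne();
            }
            // Nulling the field lets Start spin things up again later.
            _timer = null;
        }

        private void OnTimer(object state)
        {
            // The guard: if the previous tick is still processing, skip
            // this one entirely rather than working in parallel.
            if (Interlocked.CompareExchange(ref _working, 1, 0) != 0) return;
            try { DoWork(); }
            finally { Interlocked.Exchange(ref _working, 0); }
        }

        private void DoWork()
        {
            // Write the greeting to the file, etc.
        }
    }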


Note the flag properties, which are used as a guard on the timer event to make sure that processing doesn't re-enter before the previous iteration completes. The timer interval is set to start things rolling, and cleared to stop them. The added complexity in the Stop method is crucial to understand as well. Timers are tricky to get rid of without blocking the thread. We use a wait handle to do it. Wait handles are worth reading up on if you deal with threading at all. Another nice feature of this worker class is that an object of this type can be reused: Start can be called to spin things up again after Stop has finished.

The worker presents essentially the same interface as the service itself, which may seem a bit redundant. All I am really trying to achieve by this is to respect the separate responsibilities of managing the service as a whole, as opposed to managing the work timer and triggers. The service itself is just processing events from the SCM. It need not know, when it tells the worker to start, whether the worker is firing up a new thread to get started, waiting for the moment the current one finishes, or just taking a pass on this period.

The service class will now delegate to the worker. This is very straightforward. I just had to replace the IGreeter property with a property for the worker. Now in OnStart, we spin up the worker, and in OnStop we finally have some code as well, to spin down the worker.
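
A sketch of the reworked service class (GreeterService is an assumed name, and the worker is property-injected here):

    using System.ServiceProcess;

    public partial class GreeterService : ServiceBase
    {
        // Injected by the IoC container in place of the old IGreeter.
        public GreetingWorker Worker { get; set; }

        protected override void OnStart(string[] args)
        {
            Worker.Start();
        }

        protected override void OnStop()
        {
            Worker.Stop();
        }
    }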


You can find the source code for our service project at this point tagged in the repo for this post on GitHub.

Now that we've got the service doing everything it's supposed to do, we may decide that we'd like to be able to change exactly how it does its job. Essentially, we want to make it configurable. The only real parameters of our service at this point are the work interval and the file name, and those are hard-coded. We could easily make the service more flexible by taking these hardcoded values and making them configurable.

One way to do that would be to use the service argument array, which operates just like the regular app argument array. We don't have much to configure here, so that would probably work for us. But to imitate a real-world scenario where there may be lots to configure, let's pretend that won't be sufficient.

Another option would be to use a config file. Services, being normal .NET executables, can have app.config files just like a command line or Windows app does. I'm personally not fond of the default .NET config mechanism though, so I think I'll roll my own, because that can be quick and easy too, especially when our needs are as simple as they are here. Let's throw together a little XML format for our two settings.
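
Something along these lines, say (the element names here are placeholders):

    <ServiceConfig>
      <WorkIntervalSeconds>5</WorkIntervalSeconds>
      <FileName>greeting.txt</FileName>
    </ServiceConfig>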


We can load this pretty easily without relying on the magic of the .NET configuration mechanism, by creating a serializable XML data object. We can put it behind a general settings interface, and have a loader class to populate it. The interface will be enough to shield the rest of the app from the persistence details.

Here is the data class, and the settings interface.
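
In sketch form, matching the XML above (names are assumptions):

    using System.Xml.Serialization;

    public interface IConfigSettings
    {
        int WorkIntervalSeconds { get; }
        string FileName { get; }
    }

    // Serializable data object; the root name matches the XML sketch.
    [XmlRoot("ServiceConfig")]
    public class ConfigSettings : IConfigSettings
    {
        public int WorkIntervalSeconds { get; set; }
        public string FileName { get; set; }
    }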


The loader class is nearly as simple. Standard XML deserialization.
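
Something like this, with the file path handling simplified:

    using System.IO;
    using System.Xml.Serialization;

    public class ConfigLoader
    {
        public ConfigSettings Load(string path)
        {
            // Standard XmlSerializer deserialization, nothing fancy.
            var serializer = new XmlSerializer(typeof(ConfigSettings));
            using (var stream = File.OpenRead(path))
            {
                return (ConfigSettings)serializer.Deserialize(stream);
            }
        }
    }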


We can ensure that the code that uses these classes is kept simple, by registering these properly with the IoC container.
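
A sketch of that registration, with the config file path as a placeholder:

    using Autofac;

    public static class ContainerConfig
    {
        public static IContainer Build()
        {
            var builder = new ContainerBuilder();

            builder.RegisterType<ConfigLoader>().AsSelf();

            // One ConfigSettings per lifetime scope, created on first
            // resolution by way of the ConfigLoader service.
            builder.Register(
                    c => c.Resolve<ConfigLoader>().Load("ServiceConfig.xml"))
                .As<IConfigSettings>()
                .InstancePerLifetimeScope();

            return builder.Build();
        }
    }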


This registration will ensure that any IConfigSettings-typed dependency within a given Autofac lifetime context will receive the same ConfigSettings instance. The first resolution of that ConfigSettings instance will be created using the ConfigLoader service.

After identifying what we want to be configurable, we can go about defining some settings providers. I'll remind you that the value of settings providers is that they allow you to define dependencies on a given configuration value, while also decoupling the dependent class from the other unrelated settings that happen to cohabit the configuration source.

Below is the work interval settings provider. I'll spare you the very similar code found in the file name settings provider. The IoC registration is likewise straightforward.
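
Here's the sketch (the provider interface name is an assumption):

    public interface IWorkIntervalProvider
    {
        int WorkIntervalSeconds { get; }
    }

    // Depends only on the one setting it exposes, shielding consumers
    // from the rest of the configuration source.
    public class WorkIntervalProvider : IWorkIntervalProvider
    {
        private readonly IConfigSettings _settings;

        public WorkIntervalProvider(IConfigSettings settings)
        {
            _settings = settings;
        }

        public int WorkIntervalSeconds
        {
            get { return _settings.WorkIntervalSeconds; }
        }
    }

    // And in the composition root:
    // builder.RegisterType<WorkIntervalProvider>().As<IWorkIntervalProvider>();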


With the service now configurable, we can perform one final trick. When a service stops, the process may not necessarily stop. So we need some way to establish when the configuration is reloaded. The natural place for this to happen is when the OnStart event is triggered. Since the config file is loaded when a new IConfigSettings dependency is resolved for the first time in a context, we just need a new context to be spun up with each call to OnStart.

We can make this happen by taking advantage of Autofac's relationship types again. A Func<T> dependency will re-resolve each time the function is executed. And an Owned<T> dependency will cause a new lifetime scope to be created each time the dependency is resolved. So all we need to do is combine the two.

Here is the new service class, with the Func<Owned<T>> dependency.
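
In sketch form (member names are assumptions):

    using System;
    using System.ServiceProcess;
    using Autofac.Features.OwnedInstances;

    public partial class GreeterService : ServiceBase
    {
        private readonly Func<Owned<GreetingWorker>> _workerFactory;
        private Owned<GreetingWorker> _worker;

        public GreeterService(Func<Owned<GreetingWorker>> workerFactory)
        {
            _workerFactory = workerFactory;
        }

        protected override void OnStart(string[] args)
        {
            if (_worker != null) return;
            // Each call creates a fresh lifetime scope, so IConfigSettings
            // is re-resolved and the config file re-read.
            _worker = _workerFactory();
            _worker.Value.Start();
        }

        protected override void OnStop()
        {
            if (_worker == null) return;
            _worker.Value.Stop();
            // Disposing the Owned<T> tears down its lifetime scope.
            _worker.Dispose();
            _worker = null;
        }
    }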


When OnStart is called, assuming the service isn't already running, we create a new worker from the delegate dependency. With this new worker comes a new scope, so its dependency on IConfigSettings will be resolved with a new object, and the config file will be loaded on the spot. Conversely, OnStop allows the worker to go out of scope, resulting in the lifetime scope being cleaned up.

With that, we have finally completed our Hello World Windows service. The guts are all there, and it's cleanly designed around interfaces for maximum testability, should we desire to have a unit test suite. And finally, we use Autofac to conveniently manage a relevant object lifetime scope, making sure our settings are loaded and unloaded at the appropriate times. The actual core behavior of the service itself can be as simple or as complex as it needs to be, while remaining easily accessible and manageable via the worker object and its lifetime scope.

The full source for the service in its final incarnation can, like all the waypoints, be found in this post's GitHub repo. Thanks for bearing with me through this month of Windows service architecture! I'll see if I can't find something new and different to tackle for next week.