Monday, December 29, 2008

TDD-friendly CSLA solution – Part 3: Mobilizing the DTO

I thought I’d present a little of the detail of the design now. This particular part is at the heart of my proposed TDD solution.

Addendum: Let me rephrase what I am trying to achieve. I want all of my business object dependencies to be explicit, and I want to pass them in through a constructor. That's just my preference. I am aware that there are other ways to inject dependencies and to achieve most of what I am aiming for - for example, see Peter Stromquist's blog. However, I only want to inject these dependencies once for the lifetime of the object (unless, of course, there is a reason for changing them, in which case I would expose a setter). In addition, and most importantly, I only ever wish to work with one instance of the business object once it is constructed, rather than having to change my references after each save.

I am aware that others have achieved similar results by removing the use of the DataPortal altogether. However, I still want to use the DataPortal mechanism, since it provides a single access point to the server through which all data access requests are serviced.

I have mentioned that ultimately I want to be able to inject validation, authorization, and other services, in addition to data access code. For the purposes of this discussion, however, we only need to consider the data access separation. The concepts I introduce here and in subsequent posts will ultimately allow for the injection of other dependencies.

So, let’s just talk about separating the data access for now…

When developing applications with CSLA, I decouple the specific data access technology by using a shaped data container – a Data Transfer Object (DTO). During a create or fetch, the data access code populates the DTO, which the business object then reads from. It also works in reverse – during an insert or update, the business object populates the DTO, and the data access code reads from the DTO and persists the data to an underlying data source (I will make this code available in future posts). This is nothing new. In fact, the CSLA code comes with an example DeepData application that demonstrates the concept.
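
To ground the discussion, here is a minimal sketch of what such a DTO might look like. The shape is inferred from how it is used in the code further down; LineItemDto is a hypothetical child DTO, using statements are omitted as in the other snippets, and the real InvoiceDto will be shown in detail in a later post.

[Serializable]
public class InvoiceDto
{
    // A plain, serializable container - state only, no behaviour.
    public long Id { get; set; }
    public bool IsNew { get; set; }
    public bool IsSelfDirty { get; set; }
    public bool IsDeleted { get; set; }
    public byte[] LastChanged { get; set; }          // row timestamp; the actual type may differ
    public List<LineItemDto> LineItems { get; set; } // child DTOs (LineItemDto is hypothetical)

    public InvoiceDto()
    {
        LineItems = new List<LineItemDto>();
    }
}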

My point for discussion is this: If we go to the trouble of abstracting out the data access by introducing a DTO, why don’t we send the DTO across the wire rather than the business object? The actual data access code will still run on a server, but we don’t send the business object across the wire.

Sending the DTO across the wire, rather than the business object, enables the client to keep working with the same instance of the business object, while still routing every data access request through the DataPortal as described above.

Rather than make the DTO the mobile object, I suggest that we piggyback the serializable DTO on a mobile command object, since the DataPortal has infrastructure in place for a command object.

Note: Since the business object has methods to reconstruct itself from a DTO, we could always create the business object again on the server in order to check validation rules and authorization rules if necessary. If any logic run on the server results in a state change, the state is captured in the DTO and provided back to the business object on the client after the operation is complete. A new instance of the business object does not need to be created on the client; instead, the existing instance updates its state from the returned DTO.

Here is an example of what the business object constructor will look like (access modifiers on the constructor and other methods will be discussed another time):

[Serializable]
public class Invoice : BusinessBase<Invoice>
{
    [NotUndoable]
    private IInvoiceDataGateway dataGateway;

    // allow the service locator to inject the concrete instance
    internal Invoice() : this(ServiceLocator.Current.GetInstance<IInvoiceDataGateway>()) { }

    // inject the dependency manually
    internal Invoice(IInvoiceDataGateway dataGateway)
    {
        this.dataGateway = dataGateway;
        MarkNew();
    }
}
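
The IInvoiceDataGateway interface itself is not shown in this post. Inferring its shape from how the business object and the concrete gateway below use it, it would look something like the following (the real interface may well carry additional members, for example for Create):

public interface IInvoiceDataGateway
{
    InvoiceDto Fetch(long id);
    InvoiceDto Save(InvoiceDto data);
    void Delete(long id);
}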

 
What used to be the factory region of the business object looks like this (note that some code has been removed and replaced with comments for now):

new internal void Save()
{
    // check authorizations, edit level, validity, isdeleted, isbusy, etc
    if (IsDirty)
    {
        RefreshFromDto(dataGateway.Save(WriteDto()));
    }
}

internal Invoice Fetch(long id)
{
    // check authorizations

    ReadDto(dataGateway.Fetch(id), true);
    return this;
}

// Create and Delete methods to be shown later

 
Note that because I am using the command object to execute my CRUD methods, I need to add code to my business class to do things such as checking authorization before saving and marking itself clean after a save: things that the DataPortal fetch, insert, and update methods would normally do for the business object. Most of this code could eventually be templated or put into the business base class. Again, the fact that these methods are internal will be discussed later.
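
As a quick illustration of the TDD payoff, here is a rough sketch of a unit test that drives Save against a hand-rolled fake gateway. The fake, the test names, and the NUnit-style attributes are mine for illustration only, and the test assembly would need access to the internal members (for example via InternalsVisibleTo), which ties into the access modifier discussion I have deferred.

// A hand-rolled fake standing in for the real gateway (a mocking framework would work equally well).
class FakeInvoiceDataGateway : IInvoiceDataGateway
{
    public InvoiceDto LastSaved;
    public InvoiceDto FetchResult;

    public InvoiceDto Fetch(long id) { return FetchResult; }
    public void Delete(long id) { }

    public InvoiceDto Save(InvoiceDto data)
    {
        LastSaved = data;
        return data; // echo the dto back, as the real gateway would after persisting
    }
}

[TestFixture]
public class InvoiceSaveTests
{
    [Test]
    public void Save_SendsDirtyStateToTheGateway()
    {
        FakeInvoiceDataGateway gateway = new FakeInvoiceDataGateway();
        Invoice invoice = new Invoice(gateway); // explicit constructor injection - no DataPortal involved

        // ... set properties on the invoice here so that it is dirty and has something to save ...

        invoice.Save();

        Assert.IsNotNull(gateway.LastSaved, "a dirty invoice should be written to the gateway");
    }
}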

The business object also contains the DTO assembly methods shown below; its children contain the same three methods. They will be discussed in future posts.

private void ReadDto(InvoiceDto dto, bool markOld)
{
    LoadProperty<long>(IdProperty, dto.Id);
    // load all other properties/fields here

    LineItems.ReadDto(dto.LineItems, markOld);
    // load other children here

    if (markOld) MarkOld();
}

private InvoiceDto WriteDto()
{
    if (IsDirty)
    {
        InvoiceDto dto = new InvoiceDto();
        dto.IsSelfDirty = IsSelfDirty;
        dto.IsDeleted = IsDeleted;
        dto.IsNew = IsNew;

        dto.Id = ReadProperty<long>(IdProperty);

        if (IsSelfDirty)
        {
            if (!IsDeleted)
            {
                // copy the remainder of the root state to the dto
            }
        }

        LineItems.WriteDto(dto.LineItems);
        // do for all other children

        return dto;
    }

    return null;
}

private void RefreshFromDto(InvoiceDto dto)
{
    if (dto != null)
    {
        if (IsNew) LoadProperty<long>(IdProperty, dto.Id);

        timestamp = dto.LastChanged;

        LineItems.RefreshFromDto(dto.LineItems);
        // refresh other children
        MarkOld();
    }
}


 
And finally, the concrete implementation of IInvoiceDataGateway will look something like the following. I admit that calling a command object for each method doesn’t look very elegant, but for now I am just demonstrating a proof of concept and I want to use the existing DataPortal code.

public class MobileInvoiceDataGateway : IInvoiceDataGateway
{
    public MobileInvoiceDataGateway() { }

    #region IInvoiceDataGateway Members

    public InvoiceDto Fetch(long id) { return GetByIdTransactionScript.GetById(id); }

    public void Delete(long id) { DeleteTransactionScript.Delete(id); }

    public InvoiceDto Save(InvoiceDto data)
    {
        if (data != null)
        {
            if (data.IsDeleted)
            {
                // Details will come later
            }
            else
            {
                if (data.IsNew)
                    return InsertTransactionScript.Insert(data);
                else
                    return UpdateTransactionScript.Update(data);
            }
        }
        return null;
    }

    #endregion // IInvoiceDataGateway Members

    #region Transaction Scripts

    [Serializable]
    class GetByIdTransactionScript : CommandBase
    {
        private InvoiceDto data;
        private long id;

        public GetByIdTransactionScript(long id) { this.id = id; }

        public static InvoiceDto GetById(long id) { return DataPortal.Execute<GetByIdTransactionScript>(new GetByIdTransactionScript(id)).data; }

        protected override void DataPortal_Execute()
        {
            // Details will come later
        }
    }

    [Serializable]
    class InsertTransactionScript : CommandBase
    {
        private InvoiceDto data;

        public InsertTransactionScript(InvoiceDto data) { this.data = data; }

        public static InvoiceDto Insert(InvoiceDto data) { return DataPortal.Execute<InsertTransactionScript>(new InsertTransactionScript(data)).data; }

        protected override void DataPortal_Execute()
        {
            // Details will come later
            // note: we can reconstruct the business object if required
        }
    }

    [Serializable]
    class UpdateTransactionScript : CommandBase
    {
        private InvoiceDto data;

        public UpdateTransactionScript(InvoiceDto data) { this.data = data; }

        public static InvoiceDto Update(InvoiceDto data) { return DataPortal.Execute<UpdateTransactionScript>(new UpdateTransactionScript(data)).data; }

        protected override void DataPortal_Execute()
        {
            // Details will come later
            // note: we can reconstruct the business object if required
        }
    }

    [Serializable]
    class DeleteTransactionScript : CommandBase
    {
        private long id;

        public DeleteTransactionScript(long id) { this.id = id; }

        public static void Delete(long id) { DataPortal.Execute<DeleteTransactionScript>(new DeleteTransactionScript(id)); }

        protected override void DataPortal_Execute()
        {
            // Details will come later
        }
    }

    #endregion // Transaction Scripts
}

 
I am sure this code raises more questions than it answers. I will address them in subsequent posts.

Sunday, December 28, 2008

TDD-friendly CSLA solution - Part 2: Feature Analysis

The features I will use out of the box:

1. DataBinding support. I use the Supervising Controller variant of the Model View Presenter pattern in my Windows Forms applications (I strongly suggest that you read Jeremy Miller’s discussion on this if you have not done so already). As you will see in my future posts, while I use the presenter to encapsulate the application control logic, I still like to tunnel the business object through to the view so properties can be bound directly (a rough sketch follows this list).

2. Undo / Redo. The undo/redo code is very nicely written and I doubt that there will be a need to inject an alternative implementation of this. This code exists in a CSLA base class from which all editable objects are derived. Even if you don’t use it, there is no need to remove it.

3. Aggregate management (parent/child management, dirty tracking, etc). This is a perfect example of the GRASP Expert pattern at work – assign a responsibility to the class that has the information needed to carry out the responsibility. Each object manages its undo/redo stack, dirty state, and validity state. At any one time, the root object can determine the resulting aggregate state.
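
To make point 1 a little more concrete, here is a rough sketch of the arrangement I have in mind. The names are hypothetical, and the real views and presenters carry more responsibilities than this:

// The view exposes a binding hook and raises events for user gestures.
public interface IInvoiceView
{
    void BindTo(Invoice invoice);      // the view data-binds its controls directly to the model
    event EventHandler SaveRequested;  // user gestures are relayed back to the presenter
}

// The presenter owns the application control logic but tunnels the model through to the view.
public class InvoicePresenter
{
    private readonly IInvoiceView view;
    private readonly Invoice invoice;

    public InvoicePresenter(IInvoiceView view, Invoice invoice)
    {
        this.view = view;
        this.invoice = invoice;

        this.view.SaveRequested += delegate { this.invoice.Save(); }; // method calls go via the presenter
        this.view.BindTo(this.invoice);                               // binding is direct to the model
    }
}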

The things I will modify a little:

1. Tracking broken business rules. I like the fact that the object can check itself and enforce rules; however, I want to be able to define the rules in a separate component and subsequently inject them. Note that the control over when those rules are checked remains in the hands of the business object. This allows me to continue using the Windows Forms ErrorProvider component.

2. Enforcing authorization rules. I want to be able to define the authorization rules in a separate component. Note that the control over when those rules are checked remains in the hands of the business object. This allows me to continue using the CSLA ReadWriteAuthorization component.

3. Executing the data access code in another tier. I will use the DataPortal in an unorthodox manner so that I don’t have to deal with a copy of my object after a save.

The things I need to change or work around:

1. Data access code in the business object. Whilst the ObjectFactory provides for separation of data access code, it does not enable injection of data access code through the constructor. Constructor injection results in an explicit declaration of the class dependencies. Please see below for a further discussion on constructor dependency injection.

2. A separate copy of the object being returned after a Save via the DataPortal. As I discussed in my previous post, an insert/update invocation to the DataPortal returns a different instance of the business object. This is just the nature of using mobile objects. My solution will still use the DataPortal to execute the data access code on a server, without having to update all local references to a new instance after the save.

3. Static Factory methods and Save method exposed to UI code. As explained above, I use the Supervising Controller variant of the MVP pattern. This allows me to tunnel the business object through to the view so that properties can be databound. However, I prefer that any method calls be relayed by the presenter when an event is raised by the view. This allows me to handle exceptions and notifications (and to initiate other common application logic in response to method calls) in a consistent manner.

Subsequent posts will serve to expand on each of the above points.


A note on my preference for Constructor Dependency Injection

I am aiming for a solution where I can inject all dependencies through the constructor. The constructor, along with the properties and methods, will then fully specify the dependencies of the class.

It is equally valid to use dynamic injection of dependencies through calls to singleton services, factories and/or service locators within the business object code. However, I prefer all dependencies to be explicit in the class interface/specification, rather than having to search through source code for calls to a service locator.

Another common way to do dependency injection is through property setters. The smelly thing about this is that it requires construction and setting to be done as two separate steps. The interface implies that the property is always settable, when in fact it would usually only be set immediately after construction. I feel that if a property or field of an object can only be set once, then it should be an input parameter during construction.
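
To make the contrast concrete, here is a throwaway example (the types are hypothetical and have nothing to do with the Invoice code):

public interface IOrderGateway { }

// Setter injection: the interface suggests the gateway can be swapped at any time,
// even though in practice it is only ever set once, immediately after construction.
public class OrderWithSetterInjection
{
    public IOrderGateway Gateway { get; set; }
}

// Constructor injection: the dependency is part of the stated contract of the class,
// and the object can never be observed in a half-initialised state.
public class OrderWithConstructorInjection
{
    private readonly IOrderGateway gateway;

    public OrderWithConstructorInjection(IOrderGateway gateway)
    {
        this.gateway = gateway;
    }
}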

For more information, I suggest you read Martin Fowler's discussion on which option to use.

Saturday, December 27, 2008

TDD-friendly CSLA solution - Part 1: Introduction

I have three years of development experience with Rockford Lhotka’s CSLA.NET. Recently, I had considered packing my bags and moving away from the framework. Ryan Kelley seems to have felt the same way about CSLA, and in fact ended the relationship! However, my relationship with CSLA has managed to overcome my recent indiscretions! You see, I have been having an affair with test-driven development. As you can imagine, this doesn’t sit well with CSLA.

This post, and the ones to follow, are not intended to spark debate about whether one should do TDD or not. There have been several discussions on Rocky’s forum with respect to this. The purpose of this series of posts is to propose a way to do TDD and still use CSLA.

In fact, I did give CSLA the flick for all of one hour! It was only when I left it that I began to appreciate the good things it did that I had taken for granted … okay, enough of the metaphor, it’s getting a bit creepy! I figured that I would need to write my own framework or find another alternative, so I started making a list of all the things that this framework would need to do for a complex Windows Forms application (which is my specific area of interest). To cut a long story short, it ended up pointing me back to CSLA. Why write all this stuff myself, when it is already available to me? So I decided to tackle the problem from the other direction – let’s keep all of CSLA and add to or remove from it where appropriate in order to meet my TDD needs. If that means we need to tweak the source code a bit, then so be it.

The main reason that CSLA is not TDD friendly is not the framework code itself, but rather its mobile object serialization mechanism. After an invocation to the data portal, the returned object is a new object. This is just the nature of using mobile objects. Since the objects need to be mobile, they must be serializable, and this puts some constraints on things. For example, if I want to inject dependencies and services through the constructor, then either (a) those dependencies must be serializable, or (b) I need to re-assign them to the copy of the object when it is returned from the data portal. Having to make every injected dependency serializable is a pain, and in some cases it may not even be possible. Even where the dependencies are serializable, they may be heavy, which is too much extra baggage for the mobile objects to carry across the wire. And if the dependencies are instead marked as non-serializable and re-assigned to the copy upon deserialization, the constructor injection becomes superfluous, since there are now two ways to assign the dependencies.

In an effort to become a little more TDD friendly, CSLA now provides the ObjectFactory, which enables the separation of data access code from the business objects. However, while this addresses one specific issue (taking the data access out of the business objects), it does not address the underlying problem of making CSLA more unit-test friendly. It may not just be the data access that I want to separate out; I may also want to separate out the validation provider, authorization provider, or other services, and subsequently inject different concrete implementations (or mocks for unit testing). Again, I don’t want to start a debate about why someone might want to do this; rather, I will suggest a way that I think it can be done if you want to do so.

Another reason that CSLA is not TDD friendly is that the level of encapsulation is quite high. Good encapsulation results in a business object which has one responsibility. However, this is very subjective. How do you break down responsibility? How detailed should the responsibility be? CSLA business objects tend to provide an encapsulation for one large responsibility which can be broken down into many sub-responsibilities. For example, a Product class has one responsibility, to manage all information with respect to a product. However, this also means data access, validation, authorization, etc ... And I think this is perfectly logical! This is one way to think about things. However, more often than not, it will produce large monolithic classes which could eventually become too complex to work with and maintain.

TDD, on the other hand, tends to drive out fine-grained responsibilities, resulting in highly cohesive, loosely coupled code. This naturally means that the pieces are easy to get at and to test in isolation. CSLA business objects, however, encapsulate a lot and are not very open, so business objects constructed using CSLA can only ever be “integration” tested, not truly unit tested. Don’t get me wrong: if there is a lot of code in your CSLA objects, then you will most probably still need to write a lot of code when you break up the responsibilities into other objects for the purposes of TDD … let’s face it, if you’re a developer then you have to write code. If you don’t like coding, then you’re in the wrong profession.

I have come up with a solution that is amenable to TDD and keeps most of what CSLA has to offer. I will elaborate on the details in future posts.