It’s been a long time coming, but I’m starting to prepare for a 2.0.0 release of CastIron, my micro-ORM. Yesterday I pushed out the third pre-release version to GitHub, and this version will probably become a release candidate once I improve some test coverage and documentation. Here I’m going to talk a little bit about what’s new in the release.
New MapCompiler
After working on the ParserObjects project with parser combinators, I noticed an interesting parallel with CastIron. One of the central features of CastIron is the map compiler, which compiles an Expression tree to map data from an IDataReader into the object of your choice. The map compiler, I realized, worked a lot like a recursive descent parser: it iterated over the object metadata, and at each stage it either mapped a column or else recursed to map a nested object. Knowing that this compiler was analogous to a recursive descent parser meant that I could build the map compiler as an object graph instead of mutually-recursive function calls. After some work restructuring the code and making various other cleanups along the way, I am beyond happy with the results. The new map compiler forms the backbone of CastIron v2.0.
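To illustrate the idea, here is a rough sketch of what the object-graph approach might look like. These are not CastIron’s actual internal types, just a simplified model: each compiler object emits the Expression for one kind of target, and the compiler for a complex type holds child compilers and recurses through the graph, the same way a recursive descent parser delegates to sub-parsers.
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq.Expressions;
using System.Reflection;
// Hypothetical sketch, not CastIron's real internals.
public interface IMapCompiler
{
    // Emits the Expression that reads from the reader and produces one value.
    Expression Compile(ParameterExpression reader, ref int columnIndex);
}
public class ScalarMapCompiler : IMapCompiler
{
    private readonly Type _targetType;
    public ScalarMapCompiler(Type targetType) => _targetType = targetType;
    public Expression Compile(ParameterExpression reader, ref int columnIndex)
    {
        // Emits: (T)reader.GetValue(columnIndex)
        var getValue = typeof(IDataRecord).GetMethod(nameof(IDataRecord.GetValue));
        var call = Expression.Call(reader, getValue, Expression.Constant(columnIndex++));
        return Expression.Convert(call, _targetType);
    }
}
public class ObjectMapCompiler : IMapCompiler
{
    private readonly Type _targetType;
    private readonly (PropertyInfo Property, IMapCompiler Child)[] _children;
    public ObjectMapCompiler(Type targetType, params (PropertyInfo, IMapCompiler)[] children)
    {
        _targetType = targetType;
        _children = children;
    }
    public Expression Compile(ParameterExpression reader, ref int columnIndex)
    {
        // Emits: new T { Prop1 = <child1>, Prop2 = <child2>, ... }
        // Recursing into a child compiler here is the "descend" step of the graph.
        var bindings = new List<MemberBinding>();
        foreach (var (property, child) in _children)
            bindings.Add(Expression.Bind(property, child.Compile(reader, ref columnIndex)));
        return Expression.MemberInit(Expression.New(_targetType), bindings);
    }
}
The real compiler handles constructors, collections and configuration as well, but the shape is the same: a graph of small compiler objects instead of one tangle of mutually-recursive methods.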
One of the things I always find rewarding about software development is when you perform a refactor and start to get unexpected good results from it. This was the case with the MapCompiler refactor: a number of new features which would have been extremely difficult to add in the previous implementation were suddenly so easy that I barely had to think about them at all. For example, in the new architecture it was quite easy to add support for ValueTuple mappings (in .NET Standard 2.0 builds, anyway):
var (id, name) = result.AsEnumerable<(int, string)>("SELECT 1, 'TEST'").Single();
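As the example suggests, the columns are matched to the tuple elements by position. With named tuple elements a more realistic query might look like this (the People table here is invented for illustration):
// Hypothetical example; assumes positional column-to-element mapping.
var people = result.AsEnumerable<(int Id, string Name)>("SELECT Id, Name FROM People").ToList();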
Some more opportunities to improve suddenly appeared after the MapCompiler refactor, such as improved type configuration.
Type Configuration
Consider this SQL query:
SELECT
    1 AS ID,
    'Parent' AS [Name],
    2 AS Child_ID,
    'Child' AS Child_Name;
And this object:
public class TypeA
{
    public TypeA() { }
    public TypeA(int id) { Id = id; }

    public int Id { get; set; }
    public string Name { get; set; }
    public TypeB Child { get; set; }
}

public class TypeB
{
    public int Id { get; set; }
    public string Name { get; set; }
}
In CastIron v1 you were only able to configure options for the top-level object you were mapping. So in this example, you could only configure options for objects of type TypeA. In the new system, you can configure options for all types which appear in the map:
var output = results
    .AsEnumerable<TypeA>(s => s
        .For<TypeA>(a => a.UseConstructor(typeof(int)))
        .For<TypeB>(b => b.UseSubclass<TypeBSubclass>())
    )
    .FirstOrDefault();
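(The TypeBSubclass referenced here isn’t defined in the snippets above; any subclass of TypeB would do, for example:)
// Hypothetical definition; any subclass of TypeB satisfies UseSubclass<TypeBSubclass>().
public class TypeBSubclass : TypeB
{
}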
Now when we execute the query, the mapper will basically compile a mapping method which looks like this:
private TypeA Map(IDataRecord reader)
{
    var value0 = (int)reader.GetValue(0);

    // Calls the constructor with 1 int parameter
    var instance1 = new TypeA(value0);
    instance1.Name = (string)reader.GetValue(1);

    // Instantiates the subclass of TypeB instead of TypeB itself
    var instance2 = (TypeB)new TypeBSubclass();
    instance2.Id = (int)reader.GetValue(2);
    instance2.Name = (string)reader.GetValue(3);
    instance1.Child = instance2;

    return instance1;
}
Notice how it maps the TypeA constructor parameter id from column Id, and how it also creates an instance of TypeBSubclass instead of TypeB. It’s these configuration options which, I believe, make CastIron so powerful. But it doesn’t end there, because this new configuration system allows some things which we hadn’t been able to do before:
public class MyData
{
    public int Id { get; set; }
    public ICollection<string> Tags { get; set; }
}
var values = results
    .AsEnumerable<MyData>(s => s
        .For<ICollection<string>>(b => b
            .UseClass<MyCustomStringCollection>()
        )
    )
    .ToList();
In CastIron v1 you couldn’t configure custom collection types to use for interfaces like IEnumerable&lt;T&gt;, ICollection&lt;T&gt; or IList&lt;T&gt;. In these cases CastIron would instantiate a List&lt;T&gt; internally and cast it to the desired interface type. Now in v2 you have control over which type of collection is instantiated. (Your collection type must have a parameterless constructor and implement a method .Add(T), in addition to the specific interface type you choose, because CastIron uses the .Add() method to populate the collection.) This little bit of flexibility just wasn’t possible in v1, but was easy to put together in v2.
SqlServer Adaptor
In CastIron v1, the Microsoft SQL Server implementation was baked into the core library, while support for other DB providers (PostgreSQL and SQLite) was included in separate libraries. In v2, I have separated the SQL Server logic out into its own provider library and isolated the dependency on System.Data.SqlClient. Now, if you want to use CastIron, you will need to reference the core library (CastIron.Sql) and also one of the provider libraries (CastIron.SqlServer, CastIron.Sqlite or CastIron.Postgres).
This change allowed me to really focus on what the core shared functionality was, and helped me improve my abstractions to make sure that all providers work equally well.
ISqlRunner changes
In v1 there were a few assorted settings and behaviors you could configure on each query call: whether or not to use transactions, whether or not to capture performance metrics, timeouts, etc. In v2 I have moved all these options to the RunnerFactory:
var runner = RunnerFactory.Create(connectionString, config => config
    .SetTimeoutSeconds(20)
    .UseTransaction(IsolationLevel.Serializable)
);
These options will apply to every single query or command executed by that runner, so you don’t have to keep setting your timeouts or isolation level on every single call. If you need different options for different calls, you can create multiple ISqlRunner instances. They’re cheap, don’t store much data, and contain no mutable state.
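For example, using the same configuration API shown above, you might keep one runner for quick reads and a stricter one for writes:
// Two runners over the same connection string, each with its own defaults.
// The timeout and isolation values here are arbitrary examples.
var readRunner = RunnerFactory.Create(connectionString, config => config
    .SetTimeoutSeconds(5)
);
var writeRunner = RunnerFactory.Create(connectionString, config => config
    .SetTimeoutSeconds(30)
    .UseTransaction(IsolationLevel.Serializable)
);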
Road to 2.0
In addition to the big changes I mention above, there were more than a few little bug fixes, optimizations and other improvements throughout the code. I’ve been making use of SonarQube lately, along with OpenCover and a few other local process improvements, to help improve the quality of the code. I have also been leaning a lot more on Docker to host instances of various databases for unit-testing purposes.
The version I uploaded to NuGet yesterday is a candidate to become 2.0.0. The test coverage isn’t perfect (I’ve been hovering around 70%, which isn’t bad considering there are some significant chunks of code that I don’t want to test), and I’m starting to integrate it into some downstream projects. If testing there goes well, and once I do a few other small cleanups, I think we will have an official 2.0. Expect to see that in the coming weeks.