DeploymentItem Attribute

I was writing some code today that uses an XML file in my project. I set up a test project and tried to run it, and of course I was getting file-not-found errors. I spent some time fooling around with Visual Studio and the build actions, copying the file to the test project and changing build actions, and spent a lot of time on Google trying to figure this out.

Maybe it’s just been too long since I worked with files, but eventually I came across the DeploymentItem attribute.

[TestClass]
[DeploymentItem(@"C:\some\folders\project\Needed\my.xml", "Needed")]
public class ReportRepositoryTest
{
    ...
}

The first parameter is the location of the file; the second, optional parameter is a folder to place the file in, so you can make the test project match the same relative locations as the actual project. When you execute your test, if you view the Out\ folder you will see Out\Needed\my.xml

This is by far the simplest way for your test project to access files needed by your application, especially since you won’t need to copy the files anywhere or deal with sharing files.
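To make that concrete, here is a hedged sketch of what a test using the deployed file might look like; the CanLoadDeployedXml method and its body are just an illustration, not code from my actual project:

```csharp
using System.IO;
using System.Xml;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[DeploymentItem(@"C:\some\folders\project\Needed\my.xml", "Needed")]
public class ReportRepositoryTest
{
    [TestMethod]
    public void CanLoadDeployedXml()
    {
        // The test runs with Out\ as its working directory, so the
        // deployed copy is reachable at the relative path Needed\my.xml.
        string path = Path.Combine("Needed", "my.xml");
        Assert.IsTrue(File.Exists(path), "DeploymentItem did not copy the file");

        var document = new XmlDocument();
        document.Load(path); // fails the test if the deployed file is malformed
    }
}
```

Because the relative path matches the real project layout, the repository code under test can keep using the same path in both places.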

BloggingContext.ApplicationInstance.CompleteRequest();

Critique: Pluggable ASP.NET CacheManager

Today as I was perusing DotNetKicks I ran across this post by John Sheehan. The post goes over creating a simple CacheManager that allows extensibility through interfaces. His class is built around the following usage:

public class CacheManager
{
    protected ICacheProvider _repository;
    public CacheManager(ICacheProvider repository)
    {
        _repository = repository;
    }

    public void Store(string key, object data)
    {
        _repository.Store(key, data);
    }

    public void Destroy(string key)
    {
        _repository.Destroy(key);
    }

    public T Get<T>(string key)
    {
        return _repository.Get<T>(key);
    }
}

You can view the full source, with the implementation of the interfaces and the declaration of the ICacheProvider interface, over at his blog at Pluggable ASP.NET CacheManager.

Reading this post and one of the comments on his blog made me follow up in his comment stream, and I felt like posting it here as it may benefit some of you also. Firstly, I had to acknowledge that Enterprise Library exists and contains its own Caching Application Block, which also handles the concrete usage of the cache providers through a factory implementation on the manager, instead of concrete usage of the manager.

One of the posters on his blog asked the question “What’s the point in using CacheManager instead of just using the ICacheProvider instances?” This triggered my response on the elegance of the design John chose to follow, loosely coupling his code rather than directly coupling it.

To answer that question: if you implemented the caching solution using only the specific classes, then if you ever decided to change your caching backing store from HttpRuntime.Cache (note: John, you should reference the cache as HttpRuntime.Cache instead of HttpContext.Current.Cache; going through the context just causes extra processing to resolve to HttpRuntime.Cache anyway) to a SQL data store, or to a memory caching solution like memcached or Velocity, you would have to go into your code and change every single usage of RequestProvider to MemCacheProvider or whatever other implementation you wish to use.

With the loosely coupled implementation John created here, if you ever wish to switch from one provider to another you only need to change where CacheManager is instantiated to use the new provider. This brings me to my point about Enterprise Library’s CacheManager having a factory method for creating the caching providers: it lets you declare the cache provider once, and no matter how many times you change the concrete implementation of ICacheProvider you will only ever need to change one line of code in the entirety of your project, which is a great thing indeed.
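To sketch what that factory approach could look like (this is my own minimal, hypothetical version, not John’s or Enterprise Library’s actual code; the MemoryCacheProvider and CacheProviderFactory names are made up for illustration):

```csharp
using System.Collections.Generic;

public interface ICacheProvider
{
    void Store(string key, object data);
    void Destroy(string key);
    T Get<T>(string key);
}

// A trivial in-memory provider, a stand-in for HttpRuntime.Cache,
// a SQL store, memcached, Velocity, or any other backing store.
public class MemoryCacheProvider : ICacheProvider
{
    private readonly Dictionary<string, object> _cache = new Dictionary<string, object>();

    public void Store(string key, object data) { _cache[key] = data; }
    public void Destroy(string key) { _cache.Remove(key); }

    public T Get<T>(string key)
    {
        object value;
        return _cache.TryGetValue(key, out value) ? (T)value : default(T);
    }
}

public static class CacheProviderFactory
{
    // Swapping the backing store means changing only this one line.
    public static ICacheProvider Create()
    {
        return new MemoryCacheProvider();
    }
}
```

Everywhere else you simply write `new CacheManager(CacheProviderFactory.Create())`, and no calling code ever names a concrete provider.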

This idea of creating shared services that you can plug and play based on interfaces is the basis of “Inversion of Control,” or IoC, which produces very robust and completely decoupled projects. Some of the most well known IoC frameworks are Microsoft’s Unity, Spring.NET, Castle Windsor, Ninject, and StructureMap, to name a few; there are quite a lot of IoC frameworks out there. Creating loosely coupled code is definitely getting to the point where it is mandatory for a well made solution.

It’s always a good day when I have the chance to discuss generics, interfaces, loose coupling and inversion of control.

BloggingContext.ApplicationInstance.CompleteRequest();

Working with List in .Net 3.5

In this post I will be going over some of the features that were added or expanded upon in the release of .NET 3.5, along with the usage of the lambda operator. I will start my tutorial off with the Exists() method, which has a parameter of Predicate<T> match. This is basically a really fancy way of saying this method does not want an object, it wants a method!

Now C# has even more flexibility in how you choose to style your code; always remember that all three of these solutions do the same thing.

public void LambdaUsage()
{
    var intList = new List<int> {1, 2, 3, 4, 5, 10, 40, 50};

    bool exists = intList.Exists(Predicate);
}

private bool Predicate(int x)
{
    return x > 25;
}

This is the first way of calling Exists: using a named method, so called because it has a physical name and declaration (the method Predicate, with an int parameter and a bool return type). Predicate checks whether x > 25 and returns true if it is. Passing Predicate as the argument makes Exists iterate through the list and apply the method to every element; if it returns true for any element, Exists returns true.

In C# 2.0, along with the List class and the generics it was built on, anonymous methods were added. Personally I’ve always had issues remembering the syntax for anonymous methods, and without the help of ReSharper I’d have been checking Google for their usage very frequently. I’m sure you’ll be able to see why.

bool exists = intList.Exists(delegate(int x) { return x > 25; });

That sure is a lot of syntax to achieve the same thing as the named method, don’t you agree? The developers inside Microsoft obviously thought so too; that’s why they created the lambda operator in C# 3.0. The lambda operator makes writing anonymous methods for parameters much simpler and more concise.

bool exists = intList.Exists(intVar => intVar > 25);

Always remember these three forms are interchangeable and no one way is better than another; it’s all syntactic sugar, but personally (and I feel most .NET developers will agree) the lambda operator makes the most sense. In this statement intVar is an implicitly typed variable (in this case an int), and the lambda operator => says: for each object in the list, take it as intVar and evaluate the expression on the other side of the =>.

So the lambda operator takes a variable and a method, and draws an arrow from the variable towards the method that will use it. Not really the most scientific description, but I think it’s very easy to remember that way.
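Exists() isn’t the only method on List<T> that takes a Predicate<T>; Find(), FindAll() and RemoveAll() do too, and the lambda style carries straight over:

```csharp
var intList = new List<int> {1, 2, 3, 4, 5, 10, 40, 50};

int first = intList.Find(x => x > 25);            // 40, the first match
List<int> matches = intList.FindAll(x => x > 25); // a new list holding 40 and 50
int removed = intList.RemoveAll(x => x > 25);     // removes 40 and 50, returns 2
```

One thing to keep in mind: Find() returns default(T) when nothing matches, which for an int list is 0, so for value types you may prefer Exists() or FindIndex() when "no match" needs to be unambiguous.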

More to come later today.

Working with Collections – Generic IList

I am writing this article directly in response to Working with Collections – ArrayList, which is obviously about the ArrayList class. Now I’m going to make a blanket statement that includes the word NEVER; generally a statement using never or always tends to be factually inaccurate given how absolute those two words are, however I am going to make the statement anyway.

You should NEVER use the ArrayList class.

This class should go as far as being marked deprecated, since generics were introduced a long way back with the release of the 2.0 framework. Now I’m sure there have been many articles that have beaten the usage of generics into the ground, but here I will make one to explain how to use the List<T> class instead of the ArrayList class.

Firstly, what are generics?

Generics were added to the framework as a way to create classes that can contain any type of object while working with it strongly typed, instead of down-casting everything to System.Object the way that ArrayList does. Classes that follow patterns like ArrayList cause objects to be repeatedly boxed and unboxed.

int i = 123;
object o = (object)i;  // boxing

o = 123;
i = (int)o;  // unboxing

Boxing

Boxing is used to store value types in the garbage-collected heap. Boxing is an implicit conversion of a value type to the type object or to any interface type implemented by this value type. Boxing a value type allocates an object instance on the heap and copies the value into the new object.

Performance

In relation to simple assignments, boxing and unboxing are computationally expensive processes. When a value type is boxed, a new object must be allocated and constructed. To a lesser degree, the cast required for unboxing is also computationally expensive.

For further information on boxing see the MSDN.

Now that you have a clearer understanding of what boxing is, think about the ArrayList class: since you can insert any type of object into the list with reckless abandon, every single addition to and read from the list causes boxing or unboxing to occur. With the statements above, it’s clear to see why this is a very poor class to use.
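A quick sketch makes the difference visible when the same value goes into an ArrayList and into a List<int>:

```csharp
var arrayList = new ArrayList();
arrayList.Add(123);                     // boxes the int into an object
int fromArrayList = (int)arrayList[0];  // unboxes; throws at runtime if the element isn't an int

var genericList = new List<int>();
genericList.Add(123);                   // no boxing, stored as an int
int fromGenericList = genericList[0];   // no cast needed

arrayList.Add("oops");                  // compiles fine, it's just a list of object
//genericList.Add("oops");              // build error, caught at compile time instead
```

Every trip in and out of the ArrayList pays the boxing tax and defers type errors to runtime; the generic list pays neither cost.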

Adding

The List<T> class supports adding strings in the same way that ArrayList does, with both the Add() and AddRange() methods. I will show examples of both the .NET 2.0 usage and the new conventions for the 3.5 usages.

List<string> strings20 = new List<string>();
const string dotnetchris = "dotNetChris";

strings20.Add("Marisic.Net");
strings20.Add(dotnetchris);

//Build error: Argument type 'object' is not assignable to parameter type 'string'
//strings20.Add((object)dotnetchris);

//.Net 3.5 added the var keyword and the ability to initialize a list easily.
var strings35 = new List<string> {"initial", "strings", "loaded"};

As you can see, in 3.5 creating a collection with initial data is much easier. It also doesn’t allow boxing to occur, as you cannot even add a string if you down-cast it to object.

var stringsCombined = new List<string>();
stringsCombined.AddRange(strings20);
stringsCombined.AddRange(new[] {"string1", "string2"});

The AddRange() method accepts any class that implements IEnumerable<T>, the interface defining a sequence you can iterate over, so List<T> and arrays are the most common arguments you’ll pass to it.

Now let’s add in some LINQ!

strings35.AddRange(from stringval in stringsCombined
                   where stringval.StartsWith("s")
                   select stringval);
//strings35 now has: "initial", "strings", "loaded", "string1", "string2"

LINQ expressions can be used to select specific data from our lists; the result is a sequence you could assign to its own declaration or use in AddRange()!

Insert

The corollary to the Add and AddRange methods are Insert and InsertRange. I will cover the usage of these, and show how you can index into Lists quickly. With Insert you can choose where an item is inserted into the List, whereas Add/AddRange will always place the item(s) at the end of the list.

var numbers = new List<int> {2, 2, 4};
numbers.Insert(0, 1);
numbers[2]++;
//numbers now has: 1 2 3 4

Remember that indexers are always 0-based in the Microsoft classes, so Insert(0, 1) will insert the value 1 at index 0.
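InsertRange() does the same thing for a whole sequence, splicing it in at the chosen index:

```csharp
var moreNumbers = new List<int> {1, 4, 5};
moreNumbers.InsertRange(1, new[] {2, 3});
//moreNumbers now has: 1 2 3 4 5
```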

//ArgumentOutOfRangeException always remember Count - 1
numbers[numbers.Count] = 5;

Always keep the bounds of your indexers in mind to keep this from occurring.

At this point I am going to hand the ball back to the article that started this post, so you can see the usage of iterating through a list, removing objects, and a few other helpful functions. I’ve shown some new features that were added in 3.5 that make working with lists even greater and easier than ever. In my next post I will go over the usage of some of the more complicated functions on the List class that take a parameter of Predicate<T> and the lambda operator.

BloggingContext.ApplicationInstance.CompleteRequest();

Visual Studio Collapse Solution Explorer Items

I’ve been letting this tab sit open for a while since I’ve actually been busy here at work, but it’s time for me to update my blog. One of the most annoying things to me in Visual Studio is that it loves to open up half the Solution Explorer when you load projects, or by the time you’re finished working on a code set the entire tree is opened in random ways, making it very hard to find the specific files you want to work on unless you collapse all of the open items.

I’m sure this has been a colossal waste of time for many others too. After being fed up with the click click click solution, I knew there had to be a macro to solve it, which is when I found Scott Kuhl’s blog post Visual Studio 2005 Collapse All Macro.

Make sure to take a look at JStuart’s comment to greatly speed up the macro.

BloggingContext.ApplicationInstance.CompleteRequest();