Monday, March 14, 2016

Homemade Lightweight Automapper Part 4

I promised in the last blog post that I would demonstrate a legitimate use for a type dictionary. Well, it has been over 2 years at this point since my last post. It's a good thing I haven't given up my day job to become a professional blogger. Anyway. I cannot remember what exactly I had in mind, so while I can confirm that there are legitimate uses, the best demonstration I could give actually has nothing to do with projections at all, and I feel it is a little beyond the scope of this post. I will, however, do a post on it... at some point... maybe in 2 years...

So where we left off, we were getting pretty close to a decent projection engine, with the slight caveat that we had to build up some atrociously long, dependency-ridden projection dictionaries for the whole thing to work. Well, I have gone through a few more iterations at this point and those dependencies are now gone, along with a slight cleanup of the usage syntax. The entire thing is really simple, so let's dig into the code:
First of all we need some sort of Projection Profile that we can tuck into our model.

public abstract class Profile<TSource, TDest> {
 public Expression<Func<TSource, TDest>> Projection { get; private set; }

 public void CreateMap(Expression<Func<TSource, TDest>> Projection) {
  this.Projection = Projection;
 }
}
This lets us create a nested class in our model, like so:

public class ProjectionModel {
 ...

 public class Profile : Profile<Model, ProjectionModel> {
  public Profile() {
   CreateMap(o => new ProjectionModel {
    ...
   });
  }
 }
}
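To make that concrete, a filled-in version might look like this (the Model fields and ProjectionModel properties here are hypothetical, purely for illustration):

public class ProjectionModel {
 public int Id { get; set; }
 public string FullName { get; set; }

 public class Profile : Profile<Model, ProjectionModel> {
  public Profile() {
   CreateMap(o => new ProjectionModel {
    Id = o.Id
    , FullName = o.FirstName + " " + o.LastName
   });
  }
 }
}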
Lastly, our revised 3 projection methods for converting IQueryables, IEnumerables and instances of our projection models, wrapped into a neat extension class:

public static class ProjectionModelExtension {
 public static TDest Project<TSource, TDest>(this TSource Model, Profile<TSource, TDest> Profile) {
  if (null == Model || null == Profile) {
   return default(TDest);
  }

  return Profile.Projection.Compile().Invoke(Model);
 }

 public static IEnumerable<TDest> Project<TSource, TDest>(this IEnumerable<TSource> List, Profile<TSource, TDest> Profile) {
  if (null == List || null == Profile) {
   return null;
  }

  Func<TSource, TDest> p = Profile.Projection.Compile();

  return List.Select(p);
 }

 public static IQueryable<TDest> Project<TSource, TDest>(this IQueryable<TSource> Query, Profile<TSource, TDest> Profile) {
  if (null == Query || null == Profile) {
   return null;
  }

  Expression<Func<TSource, TDest>> p = Profile.Projection;

  return Query.Select(p);
 }
}
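One caveat before we put it all together: the single-model Project() calls Compile() on every invocation. If you're projecting single models in a tight loop, the profile could cache the compiled delegate instead; here's a minimal sketch (the Compiled property is my addition, not part of the profile above):

public abstract class Profile<TSource, TDest> {
 private Func<TSource, TDest> compiled;

 public Expression<Func<TSource, TDest>> Projection { get; private set; }

 // Compile once on first use, then hand back the cached delegate.
 public Func<TSource, TDest> Compiled {
  get { return compiled ?? (compiled = Projection.Compile()); }
 }

 public void CreateMap(Expression<Func<TSource, TDest>> Projection) {
  this.Projection = Projection;
  this.compiled = null;
 }
}
The single-model and IEnumerable overloads would then use Profile.Compiled in place of Profile.Projection.Compile().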
When you put all that together you can use the following syntax:

IEnumerables

IEnumerable<Model> list;
IEnumerable<ProjectionModel> plist = list.Project(new ProjectionModel.Profile());
IQueryables

IQueryable<Model> query;
IQueryable<ProjectionModel> pquery = query.Project(new ProjectionModel.Profile());
Single Models

Model m;
ProjectionModel p = m.Project(new ProjectionModel.Profile());
It's that simple! So enjoy and have fun!

Sunday, February 23, 2014

Homemade Lightweight Automapper Part 3

If you're wondering why it has taken me so long to write up part 3, it's for a number of reasons. Firstly, procrastination. Secondly, I've been pretty busy with other things. Thirdly, while writing the 2nd part of the series, we were actually reiterating over the process we use to do projections, because of some of the drawbacks of the method I've described. And fourthly, I've been learning a whole bunch about delegates, expressions and lambdas in C# recently to try and find out just how far down the rabbit hole I can go. I should also note that the AutoMapper library is now working correctly with IQueryables, so this is all really just a mental exercise at this point. As such, I'll finish off the series as it was intended, but I'll also continue it on and let you follow the journey with me.

So when we left off, we had the infrastructure in place with our list of projections ready to go. We were just missing the code to let us use them!

There are some basic ways we need to get the data when it comes to linq, and they really boil down to 3 different methods. 1.) We obviously need our IQueryable<Model> to build up our SQL in a monadic fashion. 2.) We need our IEnumerable<Model> for any Enumerable object that is not an IQueryable, this includes List and other such containers. 3.) We need our individual Models.

So. We need 3 different Project() methods to encompass each type. The 'meat' of these methods is going to look pretty much the same for any projection-type code you'll write. Essentially, we need to get the projection, and apply it to our objects.

We want to write these extension methods so that they can work directly on the IQueryable, IEnumerable, and Domain types, so our declarations would be something like the following:

static IQueryable<TDest> Project<TSource, TDest>(this IQueryable<TSource> DomainQuery);
static IEnumerable<TDest> Project<TSource, TDest>(this IEnumerable<TSource> DomainList);
static TDest Project<TSource, TDest>(this TSource Domain);
This gives us a syntax of:

query.Project<Domain, Projection>();
list.Project<Domain, Projection>();
model.Project<Domain, Projection>();
respectively. It's pretty verbose. The problem here is that when working with generic methods, you can only hide the 'generic' signature if all types can be inferred by the compiler. In our case, only TSource appears in the parameter list, so TDest can't be inferred, and we have to spell out both types at the call site.
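For example:

// Compiles: both type arguments are spelled out.
IQueryable<Projection> a = query.Project<Domain, Projection>();

// Does not compile: the compiler can infer TSource from 'query',
// but nothing in the argument list tells it what TDest should be.
// IQueryable<Projection> b = query.Project();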

We can get around this by redefining our methods using the out keyword like so:

static void Project<TSource, TDest>(this IQueryable<TSource> DomainQuery, out IQueryable<TDest> ProjectionQuery);
static void Project<TSource, TDest>(this IEnumerable<TSource> DomainList, out IEnumerable<TDest> ProjectionList);
static void Project<TSource, TDest>(this TSource Domain, out TDest Projection);
This lets us do the following:

IQueryable<Projection> pQuery;
query.Project(out pQuery);
It removes the generic signature nicely, but you get stuck with the ugly out doing it this way.

Jimmy Bogard, author of the AutoMapper library, actually gets around this by using 2 functions: a Project() which wraps the object into an IProjection interface, then To() to do the actual conversion, so you end up with model.Project().To<Projection>(), which is actually pretty clean. With a little more infrastructure you can clean this up just the tiniest bit more, but I'll save that for another post.

We can clean this up, but it's going to take some refactoring that I'll save for later. If you'll remember, back in part 2 we had set up a dictionary map for our projections that could be called with the syntax projections[typeof(TSource)][typeof(TDest)]. There is one problem, however. Our dictionary currently only holds Expressions, not Expression<T>s. We'll need to cast it before we can return it. Thankfully it's a pretty straightforward cast. Let's wrap that in a function for kicks and giggles.

Expression<Func<TSource, TDest>> GetProjection<TSource, TDest>() {
   Expression e = projections[typeof(TSource)][typeof(TDest)];

   return e as Expression<Func<TSource, TDest>>;
}
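As written, those indexers will throw a KeyNotFoundException whenever a mapping is missing (more on that towards the end of the post). If you'd prefer to fail softly, a TryGetValue variant is only a little more work; here's a sketch against the same projections dictionary:

static Expression<Func<TSource, TDest>> TryGetProjection<TSource, TDest>() {
   IDictionary<Type, Expression> inner;
   Expression e;

   // Bail out with null instead of throwing when either lookup misses.
   if(!projections.TryGetValue(typeof(TSource), out inner) || !inner.TryGetValue(typeof(TDest), out e)) {
      return null;
   }

   return e as Expression<Func<TSource, TDest>>;
}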
LinqToSQL, Entity Framework, and others of that ilk use expression trees to build their SQL queries, while LinqToObjects uses delegates. Both of these are defined by lambdas, but what goes on behind the scenes is very different. Thankfully MS have made it very simple to compile our Expression into a delegate on the fly using its Compile() method, so passing around Expressions gives us the best of both worlds.

   Expression<Func<TSource, TDest>> e;
   Func<TSource, TDest> dele = e.Compile();
From what I understand, the compilation itself is a little slow (though not as slow as reflection), but it's not all that noticeable for 90% of what we would do with it. To go the other way, from delegate to expression tree, is a rather nasty mess of using reflection to determine what the delegate is doing and then building the immutable expression tree piece by piece. I wouldn't recommend it; dynamically building expression trees is a nasty business, as is reflection.
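To give a taste of what building a tree 'piece by piece' looks like, here is a one-property projection, o => new Dest() { Name = o.Name }, assembled by hand (Model and Dest are hypothetical stand-ins):

// Build the parameter 'o', the member initializer, and the lambda manually.
ParameterExpression o = Expression.Parameter(typeof(Model), "o");

Expression<Func<Model, Dest>> projection = Expression.Lambda<Func<Model, Dest>>(
   Expression.MemberInit(
      Expression.New(typeof(Dest))
      , Expression.Bind(typeof(Dest).GetProperty("Name"), Expression.Property(o, "Name"))
   )
   , o
);
That's one property and half a dozen API calls, and you'd still need reflection to work out what the delegate was doing in the first place.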

Armed with this knowledge, defining our IQueryable and IEnumerable projection methods becomes pretty simple.

   static IQueryable<TDest> Project<TSource, TDest>(this IQueryable<TSource> Query) {
      Expression<Func<TSource, TDest>> projection = GetProjection<TSource, TDest>();

      return Query.Select(projection);
   }

   static IEnumerable<TDest> Project<TSource, TDest>(this IEnumerable<TSource> List) {
      Expression<Func<TSource, TDest>> projection = GetProjection<TSource, TDest>();

      return List.Select(projection.Compile());
   }
Linq extensions already give us our handy Select method, which takes a delegate for IEnumerables or an Expression for IQueryables respectively. Our single object is going to be a little less straightforward. In this case we want to use our delegate, and we want to run it directly on the object. Keep in mind that our delegate is a lambda, which is another way of saying that it is an anonymous function.

I'll write up a blog post on delegates soon. It is too much to go into in detail here, unfortunately. Suffice it to say, using a delegate is as easy as calling its Invoke method, where the parameters are the values/objects to operate on. In our case:

   static TDest Project<TSource, TDest>(this TSource Domain) {
      Expression<Func<TSource, TDest>> projection = GetProjection<TSource, TDest>();

      return projection.Compile().Invoke(Domain);
   }
That's really all there is to it. Those three projection functions take any object you need and convert it into another type using an expression. When we move away from the dictionaries, those will pretty much stay the same. Now as promised, I'll list some of the issues with the dictionary approach, though I'll save the solution for another post.

First and foremost, any time you are dealing with a type lookup, you've taken over a job that really belongs to the compiler. As such, any time you try to Project, you run the risk of throwing an exception when the lookup fails. Any issues, therefore, are found at run time rather than compile time. This isn't necessarily a bad thing, but it does entirely remove the #1 benefit of using a strongly typed language, and it can be a little more difficult to debug as a result. There are legitimate uses for such a lookup (I'll demonstrate one in the next blog post as a matter of fact) but overall, when you find yourself relying on this pattern, you should really stop and ask yourself if there is a better way.

Secondly, creating lookups like this is much better suited to data rather than code. It's far nicer to load some data file and loop through the contents, adding each entry to a lookup dictionary, than it is to have pages of code where all you're doing is defining said dictionary manually. If I recall correctly, in my current project we have over 300 view models, and while currently our dictionaries are broken out a little bit, that's still a lot of lines where certain entries could get lost or forgotten.

Thirdly, it pushes our projection definitions away from the models they belong to. If you're missing a projection in the dictionary, you have to go to your Projector class, add it there and rebuild. This sort of dependency means that you cannot have your model as a standalone code block, and it also means that you'll be jumping between different class files any time you want to add a projection. It would be much nicer if we could remove the dependency altogether, and it would be easier to test.

I'll address each of these in a later post.

Wednesday, August 21, 2013

Homemade Lightweight Automapper Part 2

Again, please forgive any code mistakes. I've written this post up largely without the help of a compiler to catch the bugs.

The initial solution I came up with was some sort of case statement, but after a few hours of research I came to the realization that C# simply doesn't support case statements on types. A dictionary was my next port of call. More specifically, a Dictionary of Dictionaries. The nice thing about dictionaries in C# is that you can access them like arrays, using a key in place of the index, so a Dictionary of Dictionaries can be used the same way as a 2 dimensional array. In our case we want to use types as our keys, so our dictionary syntax becomes:
dict[typeof(Source)][typeof(Dest)];
Linq uses Func<TSource, TDest> delegates for L2OBJ and wraps them into Expression<Func<TSource, TDest>> trees for L2SQL. It is really easy to pull the Func out of the Expression (that's what Compile() does), so our actual projections are going to be of type Expression<Func<TSource, TDest>>

Our key will be the destination type. Given that, our inner dictionary will need to be a dictionary of type:
IDictionary<TDest, Expression<Func<TSource, TDest>>>;
Where TDest is our Key and our Expression is our Value. If that looks ugly, it's because it is.

Our outer dictionary will need to hold our inner dictionary and be accessed by our source type, so its type will be:
IDictionary<TSource, IDictionary<TDest, Expression<Func<TSource, TDest>>>>;
That is *really* ugly. Not to mention that our code isn't generic at all, and our dictionary object is strongly typed, so we can't key it on the types themselves; it needs instances of the Type type. And for convenience's sake, MS have included a non-generic Expression base class (which is, as it turns out, more generic.)
IDictionary<Type, IDictionary<Type, Expression>>;
That's about as nice as we're going to get in this case I think. There's not a whole lot we can do to clean that up. So what's the next best thing to getting rid of the dust in the house? Sweeping it under the rug! So what we want to do is abstract the nastiness away from the programmer.

Currently we can only set up our projection dictionary by doing something like the following:
IDictionary<Type, IDictionary<Type, Expression>> projections = new Dictionary<Type, IDictionary<Type, Expression>>() {
   { typeof(Source1), new Dictionary<Type, Expression>() {
      { typeof(Dest1), (Expression<Func<Source1, Dest1>>)(o => new Dest1() {
         ...
      }) }
      , { typeof(Dest2), (Expression<Func<Source1, Dest2>>)(o => new Dest2() {
         ...
      }) }
   } }
   , { typeof(Source2), new Dictionary<Type, Expression>() {
      { typeof(Dest3), (Expression<Func<Source2, Dest3>>)(o => new Dest3() {
         ...
      }) }
      , { typeof(Dest4), (Expression<Func<Source2, Dest4>>)(o => new Dest4() {
         ...
      }) }
   } }
};
Which is pretty brutal, especially since in MVC.NET you're going to be working with probably at least 10+ domain models and twice as many view models. The whole point of this exercise is to reduce the amount of code we have to write while still having flexibility. So, let's get to refactoring.

The obvious starting point is to wrap the previous code into an Add function. To use the Source and Dest types we'll have to make it a generic function, which is fine:
void Add<TSource, TDest>(IDictionary<Type, IDictionary<Type, Expression>> Projections, Expression Expression) {
   Type source = typeof(TSource);
   Type dest = typeof(TDest);
   
   if(true == Projections.ContainsKey(source)) {
      Projections[source].Add(dest, Expression);
   } else {
      IDictionary<Type, Expression> d = new Dictionary<Type, Expression>();
      d.Add(dest, Expression);

      Projections.Add(source, d);
   }
}

IDictionary<Type, IDictionary<Type, Expression>> projections =
   new Dictionary<Type, IDictionary<Type, Expression>>()
;

Expression e = (Expression<Func<Source, Dest>>)(o => new Dest() {
   ...
});

Add<Source, Dest>(projections, e);
That's not bad, but we can do better. After all, you don't want to have to type all that mess every time. Notice the cast: a lambda has no type of its own, so it has to be funnelled through the generic Expression<Func<Source, Dest>> before it can be stored as a plain Expression. But since the generic Expression<TDelegate> class derives from the non-generic Expression class, we can do away with the casting entirely by declaring the variable with its full generic type and letting it upcast implicitly when it goes into the dictionary...
Expression<Func<Source, Dest>> e = o => new Dest() {
      ...
   }
;
The Add function could also be refactored more. Instead of having to write Add<Source, Dest>(d, e) for every single projection, wouldn't it be nicer to simply split it out into two? Add<Source>(d) and Add<Dest>(d, e).

Let's do that.
void Add<TSource>(IDictionary<Type, IDictionary<Type, Expression>> Projections, IDictionary<Type, Expression> Dictionary) {
   Projections.Add(typeof(TSource), Dictionary);
}

void Add<TDest>(IDictionary<Type, Expression> Dictionary, Expression Expression) {
   Dictionary.Add(typeof(TDest), Expression);
}
One of the nice features of C# is the ability to make extension methods. Make your method static, slap it in a static class, and add the 'this' keyword to the first parameter; you can then call the method as if it were part of that parameter's own type, abstracting the parameter away entirely. Let's do that here:
static class Mapper {
   static void Add<TSource>(this IDictionary<Type, IDictionary<Type, Expression>> Projections, IDictionary<Type, Expression> Dictionary) {
      Projections.Add(typeof(TSource), Dictionary);
   }

   static void Add<TDest>(this IDictionary<Type, Expression> Dictionary, Expression Expression) {
      Dictionary.Add(typeof(TDest), Expression);
   }
}
Our Add methods can now be called like so:
d2.Add<Dest>(e);
d1.Add<Source>(d2);
Almost done with the Add functions. Rather than calling d1.Add(); many times, wouldn't it be nicer to simply chain them?
static IDictionary<Type, IDictionary<Type, Expression>> Add<TSource>(this IDictionary<Type, IDictionary<Type, Expression>> Projections, IDictionary<Type, Expression> Dictionary) {
   Projections.Add(typeof(TSource), Dictionary);
   return Projections;
}

static IDictionary<Type, Expression> Add<TDest>(this IDictionary<Type, Expression> Dictionary, Expression Expression) {
   Dictionary.Add(typeof(TDest), Expression);
   return Dictionary;
}
Thus we can call them:
d2
   .Add<Dest1>(e1)
   .Add<Dest2>(e2)
   ...
;

d1
   .Add<Source1>(d2)
   .Add<Source2>(d3)
   ...
;
Okay, one last update to the Add function and we'll be close to done. In our haste to refactor, we've committed the cardinal programmer sin: optimizing at the start. We've made things a little TOO generic. I've always found the generic method syntax (f<t>()) to be a little verbose. The other great thing about extension methods is that if we play our cards right, the compiler can infer the generic types from the arguments (including the 'this' parameter), abstracting them away so that the call looks like a normal method call. Once our projection goes into the dictionary it loses its type and becomes a standard Expression rather than an Expression<>, so we can't actually do anything for the outer dictionary's Add method, but we can certainly clean up the inner Add.
static Dictionary<Type, Expression> Add<TSource, TDest>(this Dictionary<Type, Expression> Dictionary, Expression<Func<TSource, TDest>> Expression) {
   Dictionary.Add(typeof(TDest), Expression);
   return Dictionary;
}
Now for one final cleanup method:
static Expression GetProjection(Type Source, Type Dest) {
   return Projections[Source][Dest];
}
Altogether our code is looking pretty nice so far:
public static class Mapper {
   private static Dictionary<Type, Expression> Add<TDest>(this Dictionary<Type, Expression> Dictionary, Expression Expression) {
      Dictionary.Add(typeof(TDest), Expression);
      return Dictionary;
   }

   private static Dictionary<Type, Expression> Add<TSource, TDest>(this Dictionary<Type, Expression> Dictionary, Expression<Func<TSource, TDest>> Expression) {
      Dictionary.Add(typeof(TDest), Expression);
      return Dictionary;
   }

   private static Dictionary<Type, Dictionary<Type, Expression>> Add<TSource>(this Dictionary<Type, Dictionary<Type, Expression>> Dictionary, Dictionary<Type, Expression> Projections) {
      Dictionary.Add(typeof(TSource), Projections);
      return Dictionary;
   }

   private static Expression GetProjection(Type Source, Type Dest) {
      return Projections[Source][Dest];
   }

   private static Dictionary<Type, Dictionary<Type, Expression>> Projections {
      get {
         return new Dictionary<Type, Dictionary<Type, Expression>>()
            .Add<Source1>(Source1Projections)
            .Add<Source2>(Source2Projections)
         ;
      }
   }

   private static Dictionary<Type, Expression> Source1Projections {
      get {
         return new Dictionary<Type, Expression>()
            .Add(Dest1.Projection)
            .Add(Dest2.Projection)
         ;
      }
   }

   private static Dictionary<Type, Expression> Source2Projections {
      get {
         return new Dictionary<Type, Expression>()
            .Add(Dest3.Projection)
            .Add(Dest4.Projection)
         ;
      }
   }
}
You can define your actual projections in your view models:
public class Dest1 {
    <properties>
    <constructors>
    <methods>

    public static Expression<Func<Source1, Dest1>> Projection {
       get {
          return o => new Dest1() {
             ...
          };
       }
    }
}
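For example, a filled-in view model might look like this (Person, PersonView and their properties are hypothetical, purely for illustration):
public class PersonView {
   public int Id { get; set; }
   public string Name { get; set; }
   public string Department { get; set; }

   public static Expression<Func<Person, PersonView>> Projection {
      get {
         return o => new PersonView() {
            Id = o.Id
            , Name = o.FirstName + " " + o.LastName
            , Department = o.Department.Name
         };
      }
   }
}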
There's probably more I could do to refactor this. There really isn't much reason the projections need to be properties rather than plain fields, for example, but I'll leave it there. This post is becoming pretty long winded, so I'll save the rest for Part 3, where we'll actually implement the projection methods that let you use the mapper. We're 2/3rds of the way there, so stay tuned!

Tuesday, August 20, 2013

Homemade Lightweight Automapper Part 1

I've been using MVC3.NET for just over a year now in a project for work. I find myself liking the approach, I feel more in control of the code that I write, and while I admire MS's efforts, I am glad to be rid of the horrible overhead that is the ViewState.

Using Entity Framework, I've learned a *lot* about linq. I went with a POCO approach for my models. I feel that this approach is much more elegant and lightweight than relying on EF to map everything out, and the less auto-generated nightmarish code the better IMHO.

If there is one thing I could complain about in MS's bash at the MVC paradigm, it is how code heavy it is. Classes and classes and more classes. And did I mention classes? Once you get used to the slightly over-engineered verbosity of it, though, you do begin to appreciate the perks that go with it.

In the project I'm currently working on, we decided early on to start work with the Kendo JS framework, which is pretty shiny, and to try and help reduce code a bit, we started using the AutoMapper 3rd party library fairly early on.

Well, if you've used Kendo, you'll know that for the MVC side of things, all of the 'big' data display controls such as grids or lists are designed fairly heavily around taking their data in IQueryable format. They will take any IEnumerable, but if you have more than a few hundred records being pulled from your database, you don't want to materialize those records into memory until you need to, or things get slooooooooowwww very quickly.

Let me back up and give a brief outline of Entity Framework and Linq. There are two kinds of linq that use the same syntax, and both work on most any sort of container object... arrays, dictionaries, lists, etc. Linq to Objects works on objects in memory; Linq to SQL works in a monadic way to build up a SQL query that is only run at the last minute.

You generally use L2SQL on IQueryables and L2OBJ on everything else. Problems arise when casting IQueryables to their parent interface, IEnumerable, because you can quickly lose track of where your data is being evaluated and run into performance issues (among other things).
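A quick sketch of the trap (db.People stands in for any EF IQueryable source, and Person is a hypothetical entity):

IQueryable<Person> q = db.People.Where(p => p.Age > 21); // still L2SQL: the filter becomes part of the SQL
IEnumerable<Person> e = db.People;                       // same source, but typed as IEnumerable...
var adults = e.Where(p => p.Age > 21).ToList();          // ...so L2OBJ pulls every row into memory, then filters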

After jumping into the deep end (when I started on this project, I personally had *very* little MVC experience, no Kendo experience, no linq or Entity Framework experience, and had been working in .NET 2.0 until that time), we quickly discovered that Sorting, Grouping, and Paging, the main 3 things you want to do with data in a Grid, did not play nicely unless you were working with IQueryables... oh, they work fine if you only have a few hundred records, but once your DB starts filling up, a few thousand records being sorted, paged and grouped in memory rather than at the DB hurts really badly.

AutoMapper had worked reasonably well up until that point, but there was one hitch: it did not work with IQueryables. I did a bit of research and discovered that it did indeed work with IQueryables if you used a certain syntax... something along the lines of Project().To(), but that line of code would always inexplicably crash, no matter how much I coaxed it. I'm pretty sure that the particular version of the library we were using had a bug, and unfortunately at the time it was not an option to upgrade.

This led me to start looking into ways I could simply do it myself. After all, the AutoMapper code is fairly ugly to work with anyway, albeit fairly powerful if you work it right. I'll use the next 1-2 blog posts to run through how to create a rudimentary automapper that handles even fairly complex cases well.

Tuesday, July 30, 2013

Nifty JavaScript Approach to Large Arrays

I figured this method out a while back in regards to storing things fast while also keeping the flexibility of indexed arrays. I am by no means an expert in JS, and I learn more every day, so if anyone has a better method, definitely let me know!

JavaScript natively has two types of arrays: a standard index array a[0], a[1], a[2], etc., and a key/value dictionary (really just an object) which uses a very similar syntax, the difference being that you can use any string or number as your index value: a[0], a[5], a['hi'].

The problem is twofold. 1.) Storing large amounts of data in an index array means that you have to run through a loop when doing a look up. If you have less than a few hundred records this isn't so bad, but once you have a few thousand it presents a significant slowdown. And 2.) To loop through a dictionary array you need to use a for...in loop (JS version of a foreach/forall) which is *really* slow compared to a regular for loop.

Neither an index nor a dictionary array is a very good option for large amounts of data, or I should say for certain operations on large amounts of data, as there are things which both do very well.

Therefore, I came up with a nice little method that has the flexibility of both. The solution is this, and it's a very simple one with the only drawback being a slight memory increase: Use both types of array at once.

In its simplest form it is the following class:
function List() {
   this.Indexed = [];    // ordered storage, for fast iteration and index access
   this.Dictionary = {}; // key -> index map, for fast keyed lookup
}

List.prototype.KeyValPair = function(Key, Value) {  
   this.Key = Key;  
   this.Value = Value;  
}

List.prototype.GetByKey = function(Key) {
   var i = this.Dictionary[Key];

   if(undefined === i) {
      return undefined;
   }

   return this.Indexed[i].Value;
}

List.prototype.GetAtIndex = function(i) {
   return this.Indexed[i].Value;
}

List.prototype.Add = function(Key, Value) {
   var item = new this.KeyValPair(Key, Value);

   this.AddItem(item);
}

List.prototype.AddItem = function(KeyValPair) {
   this.Indexed.push(KeyValPair);
   this.Dictionary[KeyValPair.Key] = this.Indexed.length - 1;
}

List.prototype.AddRange = function(KeyValPairs) {
   for(var i = 0, len = KeyValPairs.length; i < len; ++i) {
      this.AddItem(KeyValPairs[i]);
   }
}

List.prototype.RemoveAtIndex = function(i) {
   if(undefined === this.Indexed[i]) {
      return;
   }

   var key = this.Indexed[i].Key;

   this.Indexed.splice(i, 1);
   delete this.Dictionary[key];

   this.SyncList(i);
}

List.prototype.RemoveByKey = function(Key) {
   var i = this.Dictionary[Key];
   if(undefined === i) {
      return;
   }

   this.Indexed.splice(i, 1);
   delete this.Dictionary[Key];

   this.SyncList(i);
}

List.prototype.RemoveIndexRange = function(From, To) {
   for(var i = From; i < To; ++i) {
      var key = this.Indexed[i].Key;

      delete this.Dictionary[key];
   }

   this.Indexed.splice(From, To - From);

   this.SyncList(From);
}

List.prototype.RemoveKeyRange = function(Keys) {
   var indices = [];

   for(var i = 0, len = Keys.length; i < len; ++i) {
      var index = this.Dictionary[Keys[i]];

      if(undefined === index) {
         continue;
      }

      delete this.Dictionary[Keys[i]];
      indices.push(index);
   }

   // Splice from the highest index down so earlier removals don't shift the later ones.
   indices.sort(function(a, b) { return b - a; });

   for(var i = 0, len = indices.length; i < len; ++i) {
      this.Indexed.splice(indices[i], 1);
   }

   if(indices.length) {
      this.SyncList(indices[indices.length - 1]);
   }
}

List.prototype.SyncList = function(From) {
   if(undefined === From) {
      From = 0;
   }

   for(var i = From, len = this.Indexed.length; i < len; ++i) {
      var keyValPair = this.Indexed[i];

      this.Dictionary[keyValPair.Key] = i;
   }
}

List.prototype.Clear = function() {
   this.Indexed.length = 0;
   this.Dictionary = {};
}
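A quick usage sketch (the keys and values here are hypothetical):
var list = new List();

list.Add('alice', { age: 30 });
list.Add('bob', { age: 25 });

list.GetByKey('bob');   // hash lookup in native code -> { age: 25 }
list.GetAtIndex(0);     // straight index lookup -> { age: 30 }

list.RemoveByKey('alice');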
I do have to apologize for any mistakes; I've written this from memory and not run it through any checking for errors. You can of course expand this with whatever extra functionality you need fairly easily.

The greatest advantage of this approach is that key lookups no longer require a hand-written loop at all; the hash lookup happens in native code. The main drawback is having to resync the dictionary every time you remove an item.

I personally have used this method quite often and found it extremely useful for sifting through large amounts of data in JS.

Edit: I fleshed out the class a little bit... simply because I'm a code junky and couldn't help myself, so enjoy.

Thursday, April 18, 2013

My thoughts on Always On

Note: This is a semi-rant, but I hope to have presented myself well. Also, I haven't even gotten into the whole 'Always On DRM stops pirates' argument. I may do another blog on that particular gem in the near future.

Having been a major player of World of Warcraft for the last 8 years, having garnered approximately 200 days of played time (around 4,800 hours) in an essentially Always On environment, you would think that I would be totally fine with the current push towards it as the 'next big thing'. Well, here are my thoughts on it.

I come from Australia, where the ISPs have no clue what 'unlimited data plan' means. I spent around 5 of those 8 years playing in Australia with 500-2k ms latency, every day balancing the fact that if I downloaded too much that month, my internet would be so throttled that playing anything online would be an exercise in extreme patience.

The next 2 years would be spent in the US under the machinations of Comcast, in an area where apparently our ISP didn't know the meaning of the words 'stable internet'. For 8 months my wife and I would be constantly calling them, wondering why our internet was either bouncing between 1 and 11 Mbps download or disconnected entirely. Needless to say, raiding in WoW was severely hit and miss during this time. At least in Australia I could simply cut back on my YouTube browsing for the month to be able to play.

The last year has been spent with AT&T (and I am currently on hiatus with WoW). Compared to my previous ISPs, AT&T have been like a dream. I think we have only had a few minor disruptions after a year of being with them. However, I have heard stories from people who have used AT&T in other areas that they can be just as hit and miss as my experience with Comcast. Especially after a large storm, which, as I currently live in Alabama, happens quite frequently.

The two major 'always on' games that have been published of late, ActiBlizz's Diablo III and EA's Sim City, had two of the most horrible launches in gaming history. Bar none.

I actually was a big fan of Diablo II, so when Blizzard announced their annual pass deal that allowed one to receive a free copy of D3 for paying a year's subscription, I was all into it. Why not? I was going to be paying for a year anyway. After installing D3 on launch and getting to play for maybe 30 minutes, I put it down and never picked it up again.

All that to say this: The infrastructure is simply not in place. It isn't. It may be one day, but that day is not now. It won't be by the time the xBox720 comes out, and it probably won't be until a.) ISPs start swapping to fiber optic cable exclusively and b.) publishers start making sure their games are actually properly stress tested before release, rather than simply shoving them onto the consumers and letting them take the brunt of the fallout.

Even if it (the infrastructure) were ready at this very moment, the consumer base clearly is not. I have read maybe 1 in 40 (if that) comments actually defending the always on paradigm instead of simply blasting it.

The main gamer demographic is still getting older as the kids who grew up playing Mario become adults. It hasn't stabilized yet and probably won't for at least 20 years. The fact of the matter is that your average gamer now most likely a.) has a job, b.) is married and c.) has kids. This cuts the average free time from 16 hours a day to 8 hours, to 2 hours, to maybe half an hour every 2 days if you're lucky. What these customers don't want is to get to that incredibly rare free time slot and see this: 'Error logging in. Server down.' or this: 'Cannot connect to the internet. Please call your service provider.'

The fallout of publishers jumping on the always on bandwagon may not be instantaneous, but it is inevitable, as more and more consumers are pushed into finding better ways to spend their time than wasting it staring at a login screen.

I don't know what they think or how they come to the conclusions that they do at major game companies, but statements affirming consumer consent to always on are clearly and unequivocally false. I know that I will probably not be purchasing the next xBox. I know that EA has lost a potential buyer for at least a long time, and many more publishers are heading in the same direction. And finally, I know that there are literally thousands of older games out there that are looking really good to play right now. None of them have always on or DRM of any kind, and to me they look better and better every time someone mentions 'always on' being the way the industry is headed.