Windows Communication Foundation (WCF): Beware the fake IDisposable implementation!

Yeesh. My fascination with WCF became red-faced shame overnight.

We’re using WCF client/server both on a server, so an ASP.NET web app can query a custom indexing service. Since this was a fresh project with no legacy constraints, I opted to use WCF rather than remoting to…, well, to drink the kool-aid I suppose, but I thought the argument made at the AZGroups presentation that “you shouldn’t have to worry about the plumbing” was compelling. (Now that the solution is almost fully baked, I am really annoyed I went down this path simply because of the hassle I went through in having to manually populate the original strong types in a shared codebase between client and server. IMO, DataContract-driven proxy code is only useful for third parties.)

My initial WCF implementation, a simple loop that would create a WCF client, invoke the service over named pipes, and let the client drop out of scope, was freezing up after 12 iterations. Executing manually, at roughly one iteration per second, it froze up on the 50th or so iteration.
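The failing pattern, reconstructed (IndexClient stands in for the real generated proxy; this is a sketch, not the actual project code):

```csharp
for (int i = 0; i < 100; i++)
{
    IndexClient client = new IndexClient();  // hypothetical proxy over named pipes
    client.Query("test");                    // invoke the service

    // client drops out of scope here WITHOUT Close(): the channel/session
    // stays open until the GC eventually finalizes it, and the service
    // stops accepting new connections once its limit is reached.
}
```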

Turned out I wasn’t calling Close() and should have been. *blush* Of course. But when I looked for Dispose() to see if I could use the using() statement, it wasn’t there. Or rather, it wasn’t explicit: you must cast to IDisposable first before calling its Dispose() method.

Fixing that, I started getting exceptions on Close() / Dispose() whenever the server had returned a Fault message. Buried deep in the back of the WCF book I’m reading (and actually I had to use Reflector to figure this out before I checked the book to see if I was right) is a brief mention not to use the using() statement with WCF clients, and not to call Dispose() either, but to call Close() manually. Dispose() on a WCF client actually calls Close() internally. But don’t expect the CLR / compiler to pick that up, and you shouldn’t always call Close(), either, but rather Abort(). Confused yet?

As I posted in,

IDisposable was always perceived to be the happy, safe haven for getting rid of objects that use unmanaged resources. If something implemented IDisposable, Dispose() was always callable. Not so anymore.

((IDisposable)client).Dispose() can only be called on a WCF client if Close() can be called, because internally it calls Close(). Close() cannot be called unless the client is basically in the Open state; otherwise, you have to execute Abort() instead, which is not a member of IDisposable. This means that, even though the object does indeed implement IDisposable, its *SUPPORT* for IDisposable is 100% dependent upon the caller evaluating the State of the object to determine whether or not it’s open. In other words, Microsoft has established a new precedent: IDisposable mandates extraneous state-checking code before its IDisposable implementation is usable, and the only thing you can do about it is wrap it.

I might’ve opted to create a new interface, IReallyDispose, but then I’d still have to implement it. I could create an abstract class, WcfDisposable, but C# doesn’t support multiple inheritance. The best I can do is put a sticky note on my computer monitor that reads: "WCF client objects don’t REALLY implement IDisposable unless they’re Open!" Then I can only hope that I’ll pay attention to my sticky note when I’m going about WCF coding.
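That "wrap it" option might look something like this. A minimal sketch; SafeClient is my hypothetical name, and it leans on the fact that WCF proxies implement ICommunicationObject:

```csharp
using System;
using System.ServiceModel;

// Hypothetical wrapper that gives any WCF client a Dispose() that never throws.
public sealed class SafeClient<T> : IDisposable
    where T : ICommunicationObject
{
    private readonly T _client;
    public T Client { get { return _client; } }

    public SafeClient(T client)
    {
        _client = client;
    }

    public void Dispose()
    {
        if (_client.State == CommunicationState.Opened)
        {
            try { _client.Close(); }                       // graceful shutdown
            catch (CommunicationException) { _client.Abort(); }
            catch (TimeoutException) { _client.Abort(); }
        }
        else
        {
            _client.Abort();  // faulted/closed/never opened: just tear it down
        }
    }
}
```

With that, using (SafeClient&lt;MyClient&gt; safe = new SafeClient&lt;MyClient&gt;(new MyClient())) { … } behaves the way using() was always supposed to.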

Does anyone else besides me find this to be unacceptably stupid and messy? I really *WANT* to like WCF. I love greenfield projects that use promising new technology, but when new technology abandons key design patterns like this, it really gets under my skin.

Discussing the matter further:

This isn’t about the object not being able to Close(). I don’t mind Close() raising exceptions. The core problem is that IDisposable throws an exception just because the object is in a "Faulted" state, while the object retains unmanaged resources!! IDisposable is generic and agnostic to connections/sockets/pipes/channels/streams, so I disagree when most people say "Dispose() and Close() are one and the same", because they’re not. What Dispose() is supposed to do is safely unload unmanaged resources, whether that means calling Close() or not. WCF shouldn’t implement IDisposable if IDisposable.Dispose() will ever throw exceptions. I don’t care if Dispose() calls Close(); it should wrap that call with something like this:

void IDisposable.Dispose()
{
	if (this.State == CommunicationState.Closing ||
		this.State == CommunicationState.Closed ||
		this.State == CommunicationState.Faulted)
	{
		this.Abort();
	}
	else
	{
		this.Close();
	}
}

Instead, Reflector shows it’s implemented as simply:

void IDisposable.Dispose()
{
	this.Close();
}

Since IDisposable has compile-time support for managing resources with Dispose, including the using() statement, this implementation is garbage.

There should be a working IDisposable.Dispose() that clears out unmanaged resources if you are *NOT* working in a transaction and have nothing to "abort" except the open connection itself. IMO, outside of a transaction, disposal of any object is an "abortion".

The bug in the design isn’t just faulty Dispose(), but that IDisposable was implemented in the first place. The practice we are told to use is to ignore it, and to call Close() or Abort() ourselves. Therefore, it’s not disposable, it’s only Closable/Abortable, depending on state. Why, then, did they implement IDisposable?

Where does Microsoft stand on this? Well, according to this forum post [link], they couldn’t figure out what to do themselves, so they released it with no real solution. Literally, "for good or for ill we have landed where we have", which was to try{} to Close, catch{} to Abort. Oh, nice planning. My respect for Microsoft just went down about 50 points.
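For the record, that try-to-Close, catch-to-Abort dance looks like this in practice (MyServiceClient and DoWork() are placeholders for whatever proxy and operations your project generates):

```csharp
MyServiceClient client = new MyServiceClient();  // hypothetical generated proxy
try
{
    client.DoWork();   // any service call(s)
    client.Close();    // graceful shutdown; throws if the channel is faulted
}
catch (CommunicationException)
{
    client.Abort();    // channel is faulted; Close() would throw, so tear down
}
catch (TimeoutException)
{
    client.Abort();
}
```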

Categories: Software Development

AJAX Based Bandwidth Test

At work, I was tasked to add auto-playing Flash video feeds on the home page of one of our web sites, and I decided that before I do anything I should implement a quick AJAX-driven bandwidth check so that dial-up users aren’t fed a heavy video feed without prior user consent. I threw the test together in a matter of three or four hours, and sent the engineering team an e-mail letting them know that this is new working functionality we can use any time.

The funny thing about the timing is that on the same day another co-worker working on another web site needed the same functionality immediately. To be honest, I overheard talk about this, and so I hoped my contribution would be useful for that project as well. I have no doubt my co-worker could have accomplished the same task, but being able to drop off sharable solutions like this is part of what makes a team environment so valuable.


The Bandwidth Test: What It Is and How it Works

The functionality I’m referring to here is a bandwidth test using AJAX.

Actually, truth be told, it’s not all-out AJAX, as there is no XML in use here, but there is an asynchronous Javascript-driven HTTP request for a string of bytes, and a measurement of the amount of time that took. Actually, there are two asynchronous HTTP requests. The first request is for a single byte of data; this measures latency and HTTP overhead. The second request is for a larger set of bytes, e.g. 128KB, which is configurable as a control property. The amount of time taken for the first request (1B) is subtracted from the amount of time taken by the second request (128KB), and this measures pipe speed. Both requests are sent to a "callback" URL that you can configure, but by default it’s an .aspx file stored as "~/controls_callback/GenBytes.aspx?len=number_of_bytes".

(Please excuse the horribly crappy illustration — I had to use Paint.NET as I don’t have Adobe Illustrator installed on my laptop, and then on top of that Live Writer and/or Live Spaces does weird things to the image. Click to enlarge.)
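The callback itself need not be anything fancy. Here’s a minimal sketch of what a GenBytes.aspx code-behind could look like (an assumed implementation for illustration, not necessarily the shipped one):

```csharp
using System;
using System.Web;

public partial class GenBytes : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        int len;
        if (!int.TryParse(Request.QueryString["len"], out len) || len < 1)
            len = 1;  // default to the 1-byte latency probe

        Response.Clear();
        Response.ContentType = "text/plain";
        // Caching would defeat the measurement entirely:
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.Write(new string('#', len));  // len bytes of filler
        Response.End();
    }
}
```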

The test is not intended to be exact, but to be an approximation to determine whether your connection is moderately fast or dog slow. But the accuracy of the test gets better the slower the connection is, so a slow connection–which comprises the more vulnerable web audience–will be less likely to have incorrect test results than a higher speed user.

The test is implemented on a site by adding a small, simple ASP.NET tag:

(Add to top of .aspx file:)
<%@ Register Src="~/controls/SpeedTest.ascx" TagPrefix="ajax" TagName="SpeedTest" %>


<ajax:SpeedTest runat="server" ExecuteTest="true" UseCookies="false" ClientCompleteFunction="myJavascriptFunction" />

The control automatically registers a block of Javascript to the containing Page. ExecuteTest="true" is the default setting, so if you do not include this attribute the test will still execute; setting it to False results in no Javascript being added to the page at all. Why would the tag still be useful then? Setting UseCookies="true" will store the test results in cookies ("astLatency" and "astPipeSpeed") and will expose these values as properties. (If the cookies are already set and UseCookies is true, the test won’t execute again anyway.) You might choose to execute the test on one page and then redirect (ExecuteTest="true"), and only read the resulting cookies on another page (ExecuteTest="false"). But if the first hit can make a safe, untested assumption — the test is asynchronous, after all — then you might as well just put the control on your content page, turn on the test, and rely on UseCookies to skip it on the next hit from the same visitor, once the cookie has been set.


ExecuteTest (gets/sets bool): true = adds the test script, which auto-executes; false = does nothing client-side.
UseCookies (gets/sets bool): true = disables test execution if the Request already has the assigned cookies; false = doesn’t evaluate cookies.
CookieExpires (gets/sets TimeSpan): TimeSpan, in granularity of days, to retain the cookie when UseCookies == true.
CookieLatency (gets/sets int?): Nullable<Int32> of the cookie value astLatency.
CookiePipeSpeed (gets/sets int?): Nullable<Int32> of the cookie value astPipeSpeed.
ClientCompleteFunction (gets/sets string): the name of your own Javascript function that executes when the test is complete. The function should accept two parameters: latency and pipeSpeed.
CallBackUrl (gets/sets string): the URL of the support .aspx file that dynamically generates bytes of a requested length.
ByteLength (gets/sets int): the number of bytes to be requested from the CallBackUrl.

You can download the whole thing and examine the Javascript in the control file (SpeedTest.ascx) here…


Demo (from a slow cable connection as host):

Categories: Web Development

XML-to-C# Code Generation

Altova probably hates me. Not their products, but the company. I’ve frequently wanted to give their product line a fair shot, but I never have the time, so I’ve never been able to justify purchasing it or giving it a full recommendation to my employer. My old user ID shows up in their tutorial videos alongside generic examples of hackers and spammers. For years, I’d try reinstalling the product to get past the 30-day trial in hopes that I’d have time to really check their cool tools out. When they killed that ability, I tried doing it within a virtual machine. Now even in VMs I cannot get a trial key anymore; perhaps my e-mail domain name is blocked.

But I often forget that there is no real need for an investment in some third party XML code generation tool like Altova’s XMLSpy or MapForce if you need a complete object model written in C# to introspect a deserialized XML file. After spending hours Googling for C# code generators from XML, I realized that the solution is right under my nose. And I don’t have to spend a dime for it.

Why Generate?

You might be asking, why are you trying to generate C# code? Doesn’t System.Xml.XmlDocument and its XPath support work well enough to do what you need to do with an XML document? The answer is, yes, sometimes. Sometimes Notepad.exe is sufficient to edit an .aspx file, too, but that doesn’t mean that having a good ASP.NET IDE w/ code generation, like Visual Studio, should be ignored for Notepad.

In fact, I was happy with using XmlDocument until I realized that some of the code I was tasked to maintain consisted of hundreds of lines of code that would read CDATA values into a business object’s own properties, like this:

XmlNode node = storyNode.SelectSingleNode("./title");
if (node != null && node.ChildNodes.Count > 0 && node.ChildNodes[0].Value != null)
	this._title = node.ChildNodes[0].Value;

node = storyNode.SelectSingleNode("./category");
if (node != null && node.ChildNodes.Count > 0 && node.ChildNodes[0].Value != null)
	this._category = node.ChildNodes[0].Value;


This just seemed silly to me. When I started working with a whole new XML schema that was even more complex, I decided that manually writing all that code is just ludicrous.


Visual Studio 2005 (of which there are freely downloadable Express versions, of course) has the ability to introspect an XML document to generate an XML Schema (.xsd). It’s really very simple: load the XML file into the IDE, then select "Create Schema" from the "XML" menu. Overwhelmed by the complexity of it all yet?

Bear in mind that the resulting Schema is not perfect. It must be validated–by you. If at first glance the schema looks fine, there’s a simple test to validate it: simply programmatically load your XML document while enforcing the schema. For my purposes, I found that most of the adjustments I needed to make were just to make "required" elements "optional", unless of course they were indeed required.
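If you want to run that check in code, a small sketch (the file names here are placeholders for your own):

```csharp
using System;
using System.Xml;
using System.Xml.Schema;

class SchemaCheck
{
    static void Main()
    {
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        settings.Schemas.Add(null, "MySchema.xsd");  // null = use the schema's targetNamespace
        settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
        {
            // Each violation is reported here instead of throwing immediately.
            Console.WriteLine("{0}: {1}", e.Severity, e.Message);
        };

        using (XmlReader reader = XmlReader.Create("MyDocument.xml", settings))
        {
            while (reader.Read()) { }  // reading to the end triggers validation
        }
    }
}
```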

XSD -> C# Code

If the schema’s clean, all you need is the .NET Framework SDK, which comes bundled with Visual Studio 2005. Tucked away therein is XSD.exe, which does the magic for you. All you have to do is pass it "/c" (short for /classes) along with the name of the .xsd file, and it auto-generates a .cs file of the same base name.

The generated C# code isn’t always perfect, either. To say nothing of the rough comment stubs, one or two textual content elements were completely ignored in my case–the attributes were exposed as C# properties but the content, which was CDATA, was not. Easy enough to fix. This was likely due to an imperfect XSD file, but since this was really a run-once-and-forget-about-it effort, I was not afraid of diving into the C# to add the missing properties.

        private string _value;
        public string value
        {
            get { return _value; }
            set { _value = value; }
        }

System.Xml.Serialization.XmlSerializer works flawlessly with the generated C# code. I created the following generic class for the generated classes to inherit, so that they automatically offer a Deserialize() method:

using System;
using System.Collections.Generic;
using System.Text;
using System.Xml;
using System.Xml.Serialization;
using System.IO;

namespace MyProject.XmlGen
{
    public class XmlDeserializer<T>
    {
        public static T Deserialize(string xmlFilePath)
        {
            using (FileStream stream = new FileStream(xmlFilePath, FileMode.Open))
            {
                return Deserialize(stream);
            }
        }

        public static T Deserialize(Stream xmlFileStream)
        {
            return (T)Serializer(typeof(T)).Deserialize(xmlFileStream);
        }

        public static T Deserialize(TextReader textReader)
        {
            return (T)Serializer(typeof(T)).Deserialize(textReader);
        }

        public static T Deserialize(XmlReader xmlReader)
        {
            return (T)Serializer(typeof(T)).Deserialize(xmlReader);
        }

        public static T Deserialize(XmlReader xmlReader, string encodingStyle)
        {
            return (T)Serializer(typeof(T)).Deserialize(xmlReader, encodingStyle);
        }

        public static T Deserialize(XmlReader xmlReader, XmlDeserializationEvents events)
        {
            return (T)Serializer(typeof(T)).Deserialize(xmlReader, events);
        }

        public static T Deserialize(XmlReader xmlReader, string encodingStyle, XmlDeserializationEvents events)
        {
            return (T)Serializer(typeof(T)).Deserialize(xmlReader, encodingStyle, events);
        }

        private static XmlSerializer _Serializer = null;
        private static XmlSerializer Serializer(Type t)
        {
            if (_Serializer == null) _Serializer = new XmlSerializer(t);
            return _Serializer;
        }
    }
}

So with this I just declare my generated C# as such:

public class MyGeneratedClass : XmlDeserializer<MyGeneratedClass>

Literally, now it takes a whopping ONE line of code to deserialize an XML file and access it as a complex object model.

MyGeneratedClass myObject = MyGeneratedClass.Deserialize(xmlFilePath);


Categories: Software Development

LiveWriter + SyncToy

Well I started blogging here about half a year ago primarily because I had quit a very bad-fitting job and started to pursue other work; my blog was intended to help me refocus on my hard skills set and also put a little bit out on display. I didn’t get very far at all, though, just about three weeks or so, before I found an almost dream job. No job is perfect but my current job now is still the best job I’ve ever had, hence no real need to be a show-off (or a wannabe, for that matter).

But I also started blogging here to dump tech thoughts. And that’s the more "pure" reason for blogging anyway, one that motivated me almost equally.

I was using LiveWriter here. No, not PowerBlog, which I built by myself over the course of two or three years and now, two years after having walked away, I look at it with a cringe and a blush. Recently I fired up the PowerBlog source code in Visual Studio on Windows Vista. Crash, bang, boom, I spent at least a couple hours trying to refactor naming convention clashes (like "WebBrowser", which also showed up in System.Windows.Forms in .NET 2.0) and other issues, but ultimately I couldn’t win. R.I.P., PowerBlog v2. Maybe I’ll rewrite you from scratch someday, a v3, in .NET v3.

But LiveWriter is a cleaner version of what PowerBlog tried to be. It lacks a lot of features PowerBlog had, like a grid list of all posts not unlike Outlook Express (indeed Outlook Express was a huge design inspiration for PowerBlog), but LW has something PB didn’t have and that I had always planned to add but never even started: image insertion and management. LW took a step further and applied automatic resizing and effects. Great. I like it.

I guess what stalled the blog here was the fact that I got a laptop, but LiveWriter had been set up on my desktop PC, which served as a web server (for tinkering), a Windows XP Media Center Edition server for my XBox, and a file server for all my personal junk and backups from present time to years past. Later I got another computer, a dedicated gaming / music workstation for the living room, which took over my big 20" LCD screen. Now my "home server" had an old CRT and was being ignored in the back room not really doing much except serving recorded TV to my living room over the LAN.

I could have installed LiveWriter on my laptop, and in fact I did, but I discovered that my drafts were not kept on the server. I’d have to synchronize the drafts. And I didn’t figure it was worth the time to mess with it, and that I should just sit down at the "back room PC" if I want to blog.

Never happened. Very recently I tried Remote Desktop, but LiveWriter has an almost fatal incompatibility with Remote Desktop: it invalidates the entire screen (causes the screen to re-paint itself) with every keystroke, which means that for every letter typed into Live Writer I would have to watch the screen go blank for a split second. A-[blink]-n-[blink]-n-[blink]-o-[blink]-y-[blink]-i-[blink]-n-[blink]-g-[blink]-!-[blink]

So now some months have passed since I was really blogging, and I’ve been queuing up a lot of things, mostly small, that I would have liked to blog about, even if only a brief mention. Not long ago, I upgraded my back room home server to Windows Vista, with a fresh installation, giving me a clean slate. So today I finally sat down and came up with a solution.

The problem: I have three computers, and would like to blog using any one of them. I don’t want to blog using a web interface; I prefer to use a Windows application. But I also want my drafts and post history to be kept in that Windows app. PowerBlog handled this, but it was so permanently alpha that I would not bother attempting to use it. LiveWriter is good enough, although at some point I want to post code blocks, and LiveWriter is too limited in its formatting options.

The solution was to synchronize my three machines for LiveWriter drafts and history. Fortunately, the Drafts and post history of Live Writer are kept as flat files in the user profile directory. Since all three computers are running Vista, all three keep their LiveWriter files here: C:\Users\Jon\Documents\My Weblog Posts. I could just edit the registry and point LiveWriter to use \\mediacenter\c$\Users\Jon\Documents\My Weblog Posts as the application "home" directory, but I thought that would be too severe and prone to too much LAN chatter. I thought about using a tool I had written that used .NET’s FileSystemWatcher to synchronize files, but decided it was too much trouble and doesn’t take into account file changes while one or the other computer is shut down or offline. I could xcopy the files with a batch file, but Microsoft SyncToy seemed to be a better quick solution, for ease of implementation. Robocopy would have done just as well as SyncToy, I suppose, but I thought of SyncToy first.

On both of my living room computers (the gaming / music workstation, and the laptop) I pointed SyncToy to C:\Users\Jon\Documents\My Weblog Posts as the left directory, and to \\mediacenter\c$\Users\Jon\Documents\My Weblog Posts as the right directory, with the synchronization type flag set to Synchronize.

Then on both computers I created a SyncAndRun.bat file:

"C:\Users\Jon\AppData\Local\SyncToy\SyncToy.exe" -R
"C:\Program Files\Windows Live Writer\WindowsLiveWriter.exe"
"C:\Users\Jon\AppData\Local\SyncToy\SyncToy.exe" -R

This essentially synchronizes to catch up my blog posts, then runs Live Writer so I can do my blog thing, then synchronizes again to push out any blog changes.

Then I dragged a shortcut to the .bat file to my Quick Launch toolbar. I changed the shortcut icon to use the WindowsLiveWriter.exe icon, and changed the shortcut file name to "Synchronize and Run Windows Live Writer".

There! 🙂 Now everything is perfect. I can hop onto any of my three home computers and start blogging away. Everything will be in place.

Categories: Pet Projects

Google Search for the Command Line

December 16, 2006

Nothing spectacular here, but occasionally I feel the urge to do a quick Google search from the command line and I decided to build one. Google apparently stopped supporting their SOAP API, but it’s still not shut down yet, so I threw together a command line Google search.


Categories: Cool Tools

Simple Search for Vista or Office 2007 Users

Do you, like I do, find it ironic that the default, so-called "simple" file search parameters in Windows are related to metadata rather than, simply, the file name and the directories? The amount of effort one must go through just to produce a filename search in a list of specific directories in Vista is atrocious. You have to choose the most distant and difficult-to-reach options just to specify the directories in which to search, and you can’t even use semicolons. The absolute quickest method of adding multiple directories for a search (using a keyboard) is as follows: (after Start menu -> search) … [tab] [tab] [space] [tab] [End] [tab] .. type a directory .. [Esc] [tab] [space] [tab] .. type another directory .. [Esc] [tab] [space] ..

I threw a fit in the Vista newsgroups and decided to build my own to bring the true simple search back to my life. Here’s a start, for your enjoyment. Source code included, throw in Regular Expression search support in the source code if you like, or whatever, but my needs are met.


Categories: Cool Tools

WPF/E CTP is here!!

It’s Christmas season and I’m feeling nostalgic, reminiscing of childhood days when I opened up Christmas presents and felt like life was just beginning.

Yesterday it seems Microsoft shared with us the first WPF/E preview. (That’s WPF = Windows Presentation Foundation, “/E” = / Everywhere.)

Get to know this acronym, because this is Microsoft’s real-world answer to Adobe/Macromedia Flash. They’re making XAML renderable on web pages as controls, exposed to JScript (so you can do true AJAX and call into its DOM, for instance), and supported in IE, Firefox, and Safari, on Windows and the Mac …

This is like Windows Vista for the Web on every platform.

Here’s a sample page using WPF/E (must be installed):

.. a-heh-hand ..

Oh yes, and Microsoft is giving us stocking gifts, too. The “Interactive Designer” tool (the Flash IDE equivalent) is now “Expression Blend” and has reached Beta 1. The SDK also integrates with Visual Studio.


Categories: Software Development