Life and code.
  • The Mysterious DatasetDataReader

    Posted on January 23rd, 2004 by Brian

    In the comments for a post on Dataset/DataReader stuff over on Roy Osherove’s blog, Steve Maine writes about some mystery guy who created a DatasetDataReader class for unit testing, and says it was really useful and cool. Well, that guy was me. So in response to the request for it, here is the code.

    It’s a pretty hacky implementation of an IDataReader, and some things simply throw a NotImplementedException because I didn’t need them at the time. If you happen to make some improvements to it, I’d appreciate it if you flung the bits back my way.

    Enjoy!

  • No Service is an Island

    Posted on January 19th, 2004 by Brian

    I think everybody is starting to go a little crazy trying to define the term Service Oriented Architecture. Udi states that he sees “two distinct and complimentary views.” I quote from his site:

    1. SOA is about how to integrate different systems.
    2. SOA is about how to architect a single system.

    I disagree. As we all move our systems to be architected around services, different systems and single systems begin to blur together in a way never before possible. As soon as a single system exposes services, it becomes a candidate for integration with other systems. The concept of a “single system” will have less and less meaning as services become more ubiquitous and our “single systems” begin depending on several other systems for their proper functioning. Arguing between the two definitions is pointless: A good service-oriented system both enables and requires integration with other services. In a service-oriented world, no system is an island.

    The flexible, resilient nature of the XML-based interfaces (discussed earlier here and here) is what makes it all possible. It allows that zen to become a reality with minimal effort. Given these new technologies at our disposal, a good architect will design the system to allow for the simplest integration between them. Of course, it will still be possible to architect your system poorly. You can write web services all day, but if you don’t think about how those services might be integrated with other systems, you’re just using a fancy remoting framework. You will have achieved a “single system,” but such a system is definitely not service-oriented, and you will have missed out on the real value that the web service stack has to offer.

  • The Problems with .NET Exceptions

    Posted on January 14th, 2004 by Brian

    My current contract involves a lot of Java work. I’ve always been a big fan of Java, although my personal opinion is that C# and .NET are superior to Java in most ways. The step back to Java, then, has really highlighted many of the evolutionary improvements that Microsoft has made in the construction of their new platform. I really miss things like properties, attributes, and the preprocessor. I find myself yearning for #region so that I can make my code more readable. The lack of automatic boxing is annoying, and I despise Java’s CLASSPATH code location scheme. But there is one thing that Java pretty much nailed but .NET does exceptionally poorly: Exceptions.

    You’ll hear strong opinions on both sides of the declared-exceptions issue, but I come down firmly on the side that declared exceptions are most definitely a Good Thing. No matter how you cut it, exception flow is part of modern programming. Unfortunately, it’s a part that is often overlooked, especially by novice programmers, and all too often even by experienced developers. There are two big advantages to declared exceptions.

    First, they cause the compiler to enforce a part of interface design that is mostly forgotten. What exceptions can be thrown from a method is as much a part of the interface as the return type or the parameters. In .NET, though, exceptions are relegated to a mere mention in the documentation. Even with Visual Studio .NET’s built-in auto-help finder thing (which I always turn off because it annoys the hell out of me), it’s way too easy to forget that your method might throw an exception. In Java, though, the compiler ensures that you won’t forget, just as it ensures you are allowed to make an upcast assignment.

    Second, declared exceptions force you to think about exceptions! That might sound redundant, but so be it. Exceptions and their flow through the system are rarely thought through correctly, even by experienced architects. Making the tools we use noisy about our deficiencies is a sure way to help us improve.
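    Java’s enforcement is easy to demonstrate with a minimal sketch (ConfigException, Config, and the strings here are made up for illustration, not taken from any real API):

```java
// A checked exception: callers must handle it or re-declare it.
class ConfigException extends Exception {
    ConfigException(String message) { super(message); }
}

class Config {
    // The throws clause is part of the method's interface:
    // forgetting it in a caller is a compile-time error.
    static String load(String path) throws ConfigException {
        if (path == null || path.isEmpty()) {
            throw new ConfigException("no path given");
        }
        return "settings from " + path;
    }
}

class CheckedDemo {
    public static void main(String[] args) {
        // Leaving out this try/catch (or a throws clause on main)
        // would not compile -- the compiler forces the decision.
        try {
            System.out.println(Config.load("app.conf"));
        } catch (ConfigException e) {
            System.err.println("failed: " + e.getMessage());
        }
    }
}
```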

    Some of the naysayers will attempt to point out that declaring every exception that can occur in a method leads to an insane number of class names in your throws statement. Java makes an acceptable compromise that a certain subset of exceptions, namely those extending from RuntimeException, do not have to be declared in the throws clause. This allows extremely common exceptions, such as a NullPointerException, to be thrown without declaration. Some others might say that declared exceptions result in code that catches and ignores exceptions inappropriately, such as the following:

    try
    {
        // Do something dangerous.
    }
    catch (SomeDeclaredException e)
    {
    }

    This is often done just to “shut up the compiler.” Only coders who lack an understanding of exceptions would do such a thing without a carefully considered (and hopefully commented) reason. As I stated above, though, there is a definite lack of understanding of exceptions among developers, especially .NET developers, so we would likely see such code a lot in C#. But at least they would be thinking about it, and that’s the first step towards understanding it.

    I only have one idea about why .NET doesn’t require declared exceptions. The .NET runtime is often very tightly bound to the underlying Win32 platform. The IJW (It Just Works) cool-as-hell black-magic that lets you transparently call native C++ from Managed C++ helps blur that boundary and promote that tight coupling. In the unmanaged world, exceptions can occur at any time and be of any type. In C++, you can throw anything at all as an exception, even a char* string constant. Perhaps the fuzzy nature of exceptions in that IJW world makes it impossible to always determine what exceptions could be thrown from a method?

    The fact that J# does support this ability, though, seems to undermine that idea. It’d be really cool if the next version of the C# compiler would support something similar to J#, at least as an option. I’m not holding my breath, though.

    No wait, there’s more. I’m not done with my rant yet!

    The other thing about .NET exceptions is the utterly poor design of the exception hierarchy. Java neatly separates catastrophic errors, which should never be handled, from merely exceptional conditions that a developer might have some chance of handling correctly. It does this by putting the former under the Error class and the latter under the Exception class. The .NET hierarchy makes no such distinction, to its detriment. For example, the OutOfMemoryException class derives from SystemException, the same base class as IOException, SerializationException, XmlException, and a host of other exceptions that are generally not catastrophic. So if you are writing code and are required to handle all non-catastrophic errors, you are also roped into handling several catastrophic ones as well. The alternatives are to either write code in your catch block to re-throw the catastrophic exceptions, or write a separate catch clause for each non-catastrophic exception in the hierarchy! Clearly, a better design of the hierarchy would have saved us from this problem altogether.
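    The Java split is visible right in the class hierarchy, and it means a broad catch never swallows the catastrophic stuff; a small sketch:

```java
import java.io.IOException;

class HierarchyDemo {
    public static void main(String[] args) {
        // Error and Exception are sibling subclasses of Throwable,
        // so catch (Exception e) can never swallow an OutOfMemoryError.
        try {
            throw new IOException("disk hiccup");
        } catch (Exception e) {
            System.out.println("recoverable: " + e.getMessage());
        }

        // Both branches hang directly off Throwable:
        System.out.println(Error.class.getSuperclass().getSimpleName());
        System.out.println(Exception.class.getSuperclass().getSimpleName());
    }
}
```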

    Okay, I’m done for now. I’ll get off my soapbox. But if you’re listening, Microsoft: please, please, PLEASE fix your exception class hierarchy. It sucks. Period.

  • But They’re Better Interfaces

    Posted on January 7th, 2004 by Brian

    Steve writes:

    The addition of an open content model represents a fundamental change in the way we think about interfaces in an SOA world.

    I couldn’t agree more. In the past, the technology used for interfaces was limited. The new technology has removed some of those limitations, but we are still thinking about interfaces.

    I think this whole discussion on interfaces ties together very nicely with Steve’s very good post on SOA vs. OOP and James Avery’s final conclusions on the topic. Object-oriented programming will still be used as one of the techniques for coding web services. Web services provide the next generation of interface technology between different systems that have been coded using those techniques. And service-oriented architecture is the idea of making those systems relatively simple and discrete, and, by leveraging the new flexible, resilient, open-content-model interface technology, making it possible for those different systems to continue to interact with each other over time and despite change.

    An astute reader may have noted that this is one of the first times in recent posts that I have used the phrase “service-oriented architecture.” I have resisted a comparison between OOP and SOA because they are apples and oranges. The former is a technique for programming, and the latter is an architectural pattern.

    And Steve, I kind of consider this a continuation of our old conversations. Thanks for being a sparring partner.

  • Services Really *Are* All About Interfaces

    Posted on January 6th, 2004 by Brian

    Steve, I agree that service-oriented programming is absolutely better and more interesting than old-skool interfaces. However, I disagree with your analysis. Interface-based programming has never implied binary compatibility. Interfaces are about making guarantees as to the nature of the input and output of a method. Back in the days when men were men and pushed around bits and cobbled together assembly to make things go, the interface was specified solely in documentation. As we got smarter, we invented things like C and header files and compilers to relieve some of the human burden by making those interfaces machine-readable and machine-enforceable. Component technologies like COM, CORBA, and even Java and .NET interfaces are the next evolutionary attempts at the same thing. Binary compatibility just happened to be the way all this was implemented, and it was not without the many limitations already mentioned.

    When you get down to it, when you call a web service with a certain input, you expect a certain output. Period. You don’t expect that to change. The semantic nature of the input and output is still transmitted via documentation, but the syntactic method of input/output exchange has evolved to use a flexible, extensible, transport-agnostic technology called XML. So instead of things breaking at the slightest change, we get “[stability] in the face of change.”

    Bonus for us, because the old way sucked at change; but the old way is also a heck of a lot easier to implement, not to mention way faster. (No matter how you slice it, a few pointer lookups will always be faster than string manipulation.) The advantages of XML come at the cost of performance and ease of implementation. So unless Don Box works some unbelievable magic with Indigo, we can expect to see them both around for a long time.

  • Another Enum Gotcha

    Posted on January 6th, 2004 by Brian

    Steve is talking about the enum versioning problem. You know the one. It’s that annoying problem where you have an enum, say

    enum Foo
    {
        Bar  = 0,
        Bast = 1
    }

    Now, oftentimes, you will write a switch statement to deal with your different enumeration cases. (Incidentally, why isn’t there some sort of language structure to deal with this more naturally? Something a la polymorphism, except for method dispatch based on an enumeration value. Switch statements suck.)

    switch (enumValue)
    {
        case Foo.Bar:
            // Do Foo stuff.
            break;
        
        case Foo.Bast:
            // Do Bast stuff.
            break;
    }

    Now, if you add a new enumeration value, your switch statement could break and do things you wouldn’t expect. Steve says you should always put a default handler in there, just in case. I agree, even if your default handler is to throw an exception or do a Debug.Fail().
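    Sketched in Java, whose enums have the same versioning problem, a defensive default might look like this (the Foo names mirror the C# enum above; the strings are made up):

```java
enum Foo { BAR, BAST }

class SwitchDemo {
    static String handle(Foo value) {
        switch (value) {
            case BAR:
                return "bar stuff";
            case BAST:
                return "bast stuff";
            default:
                // Fails fast if someone adds a constant to Foo
                // without updating this switch.
                throw new IllegalStateException("unhandled: " + value);
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(Foo.BAR));
    }
}
```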

    There is one more case you need to be aware of, though. In .NET, it’s possible to cast an arbitrary value to your enumeration type and get around the type checking completely. Take the following case.

    Foo enumValue = (Foo)42;
    
    switch (enumValue)
    {
        case Foo.Bar:
            // Do Foo stuff.
            break;
        
        case Foo.Bast:
            // Do Bast stuff.
            break;
    }

    This code is completely valid, and enumValue will match neither the Bar case nor the Bast case. When would you ever encounter a case like this? (Pun intended.) I can’t really think of any off the top of my head, but writing secure and robust code means taking into account situations that should never occur. Incidentally, you can use the Enum.IsDefined() method to determine whether the value is defined by the enumeration or not.

  • Service-Oriented vs. Object-Oriented

    Posted on January 4th, 2004 by Brian

    Avery is going through the thrashing of trying to morph an object-oriented mind into a service-oriented world. He’s getting the right answer, but I don’t think he’s getting there the right way.

    Avery is trying to place them on equal footing and then decide which one is better to use, but he could have saved himself a lot of trouble by realizing up front that object-oriented programming and service-oriented programming are best used for different things. Object orientation provides many advantages to a programmer who already has a great deal of knowledge about the system. Encapsulation, polymorphism, and the bundling of code and data together all make it easy to produce elegant yet complex systems that require a great deal of pre-knowledge to use. But let’s face it: if you don’t have perfect knowledge of the system, knowing how to use an object is really hard. Throw distributed object-oriented computing into the mix, which brings problems like lifetime, persistence, and marshalling, and you have yourself a good old-fashioned nightmare. If you don’t believe me, you’ve never spent much time trying to make Microsoft’s distributed COM work right.

    Service-oriented programming solves these problems by taking a step backwards, by taking objects out of the mix. It isn’t that different from the procedural coding style developed hand-in-hand with C and UNIX: you have a bunch of utility functions that are strung together in a certain order to Get Things Done. The difference between now and then is XML. Thanks to the beauty of XML, we can now make our procedures flexible enough to withstand the changes that so often break the tightly-bound world of objects. As a bonus, since it was built from the ground up on web technologies, we get a stateless, scalable, distributed system for free.

    So when should you use one or the other? Avery is dead on when he states that a service-oriented system is totally money when it comes to subsystems that a) could be reused, or b) will probably morph over time but can’t break legacy systems. These cases need to be weighed carefully, though, since parsing XML is expensive. Use objects when everything is a single whole, and where the interfaces between the subsystems won’t be changing much. Even if you’re cross-process, if the interfaces aren’t going to be changing, then just use remoting. It’s a heckuva lot easier than putting together a web service. The speed and elegance and expressiveness of objects are a big win for developer productivity. Often, I find that I leverage object-oriented capabilities to hide the fact that I’m calling a service. A call to customer.Login(credentials) can easily wrap a call to CustomerService.Login(credentials).
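    That wrapping trick is just a thin facade. A minimal sketch in Java, with the names borrowed loosely from the Login example above and the credentials check purely hypothetical:

```java
// Stand-in for the generated web-service proxy.
class CustomerService {
    static boolean login(String credentials) {
        // Imagine the actual SOAP call happening here.
        return credentials != null && !credentials.isEmpty();
    }
}

// Plain object-oriented surface; the service call is hidden inside.
class Customer {
    boolean login(String credentials) {
        return CustomerService.login(credentials);
    }
}

class FacadeDemo {
    public static void main(String[] args) {
        System.out.println(new Customer().login("secret"));
    }
}
```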

  • PublishedAttribute

    Posted on December 28th, 2003 by Brian

    Martin Fowler has a new bliki entry on the difference between a public interface and a published interface. I think Martin is dead-on in his statements, so much so that I would favor seeing the next generation of programming languages support a published keyword to better differentiate it from public. In theory, public should be good enough, but practical experience shows otherwise.

    Unfortunately, languages change slowly, and popular new languages are introduced at rare intervals. So in the meantime, I am going to leverage the metadata capabilities of the .NET runtime with the PublishedAttribute class that I just coded up. Feel free to download it for yourself.
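    For comparison, the same idea can be sketched as a Java annotation. This is a hypothetical analogue, not the actual PublishedAttribute from the download, and the BillingApi/InternalHelper names are invented:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks a type as part of the published surface.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Published {}

@Published
class BillingApi {}        // published: safe for outside callers

class InternalHelper {}    // public in the language, but not published

class PublishedDemo {
    public static void main(String[] args) {
        // Tools could inspect the metadata and warn on unpublished usage.
        System.out.println(BillingApi.class.isAnnotationPresent(Published.class));
        System.out.println(InternalHelper.class.isAnnotationPresent(Published.class));
    }
}
```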

    I would love to see Microsoft adopt something similar into the .NET base class libraries. The compiler could warn you about usage of unpublished classes, or even completely disallow usage along the same lines as unsafe code. The auto-complete in the editor could grey out unpublished interfaces, or hide them completely at your option. This has got to be better than the hateful

    The Foo type supports the .NET Framework infrastructure and is not intended to be used directly from your code.

    documentation that we have today.

  • .NET Web Service Clients Give You No Choice

    Posted on December 17th, 2003 by Brian

    (How’s that for a provocative post title? I can’t wait for the flames!)

    I’m doing a lot of web services interop work with Java and .NET. I’ve got a Windows Forms application calling a Java web service over SOAP. The Java application behind that web service “does stuff,” and then turns around and calls a .NET web service via HTTP GET. It’s a lot of fun stitching this stuff together. More or less.

    I discovered several weeks ago that Microsoft’s wsdl.exe tool immediately barfs when it encounters a <choice> element in the schema. It does this because, if you think about it a little bit, a <choice> is really hard! The <choice> element is the XML equivalent of C’s union keyword. It allows multiple structures to be defined as valid in a certain space, but which structure will actually be there is unknown until run time. Generating C# code for something like that is rough.

    Here’s a way they could have done it, though: take the elements actually in the <choice> and give them a common interface. Then you can put the interface in place of the classes generated for the <choice>’s child schemas. You can then just recurse through all of the structures allowed by the <choice> element and generate the code as usual. At run time, you can figure out which structure you have using a simple is statement.

    Why use an interface? Imagine that, somewhere in your schema, you have a <choice> that looks like this:

    <choice>
      <element type="e1" />
      <element type="e2" />
    </choice>

    The classes E1 and E2 get generated from the types e1 and e2, respectively, and their base class is given the name B1. Then, later on, you have a <choice> that looks like this:

    <choice>
      <element type="e2" />
      <element type="e3" />
    </choice>

    So what do you do? You either need to generate a different class for the e2 schema type here, or the generated class E2 has to have multiple base classes. Since you can’t do multiple inheritance in .NET, implementing multiple interfaces is the next best thing. So each <choice> gets an interface generated for it.
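    Sketched in Java for illustration (the generated names Choice1, E1, and friends are hypothetical, and Java’s instanceof stands in for C#’s is), the generated code might look like this:

```java
// One marker interface per <choice> in the schema.
interface Choice1 {}   // <choice> of e1 | e2
interface Choice2 {}   // <choice> of e2 | e3

class E1 implements Choice1 {}
class E2 implements Choice1, Choice2 {}  // e2 appears in both choices
class E3 implements Choice2 {}

class ChoiceDemo {
    // The deserialized field is typed as the interface; the concrete
    // structure is discovered at run time with a type test.
    static String describe(Choice1 c) {
        if (c instanceof E1) return "got an e1";
        if (c instanceof E2) return "got an e2";
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(describe(new E2()));
    }
}
```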

    It all seems to work in my head, but admittedly I haven’t tried it. Watchu think?

  • Mono Web Services

    Posted on December 11th, 2003 by Brian

    Bwa ha ha ha ha….I will rule the world!!

    Or not. But I did finally get Mono web services up and running on samus. It’s just a simple Hello World, and I’m not going to link to it, since I am experimenting and it won’t be there by the time you click on it. But how about a screenshot just to prove it works….

    This is really exciting, as I’ve been doing a lot of Java/.NET web services interop lately. Now I can do it on a server that’s not my Windows laptop.