" /> Bill de hÓra: December 2002 Archives


December 20, 2002

Your software business model is dead, get over it

Loosely Coupled has a nice summary of a paper by Vinod Khosla.

If your software business model is about capturing requirements, getting sign-off and moving customers into change control at the first hint of change, this might enlighten you as to why that approach is a difficult, inefficient and expensive way to deliver value for all concerned on a software project. If you're an XP or agile programmer, this should help you articulate to management and executives what's wrong with high-ceremony, heavyweight approaches to software and how you can help align the business with customer needs.

Scott Sterling on Versioning

Worth reading:

Supporting versioning at the individual service level opens the door to lots of possible screw ups, in my opinion. Ideally, clients of the service API will import a "versionless" interface.

Yes. In the Web world, this is known as a uniform interface. CLOS hackers will recognize the notion as a generic function. OO gits will probably call it polymorphism.

Only the implementation of the interface is versioned.

Yes, but only internally to the server. The version information is just noise to the client - unless the new version is so very different that technically you're looking at a new application, not an upgrade (whether you call it an upgrade is a different issue).

Version configuration will be done via XML configuration for the service. Still, my biggest worry is about unforeseen dependencies between services, where you have a dozen services with some at version 1 while a couple are at v4 and others are at v6 or whatever. There's a strong potential for dependency between service implementations and versions, where, say, the v6 logger works with the v5 naming service and up, but not with any earlier versions.

Well, that's tight coupling for you. If we can't tease these things apart and preserve modularity across versions, it's as if we've written one big function. The model that seems to work very well at the level of Java is the Plugin pattern. The success of the Plugin isn't a surprise: it's simply a less constraining variant of the uniform interface principle (that protocols like HTTP use). However some libs (like Log4j) make it awkward to protect your application from their details - config files can lock you into libraries and their versions just as much as poorly encapsulated exposure to the source can.
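To make the Plugin idea concrete, here's a minimal sketch with invented names - none of this is from Log4j or any real library. Clients compile against one versionless interface; the versioned implementation is picked by configuration at runtime:

// The versionless interface: the only type clients ever name.
public interface Logger {
    void log(String message);
}

// Versioned implementations hide behind it.
public class LoggerV5 implements Logger {
    public void log(String message) {
        System.out.println("v5: " + message);
    }
}

// The plugin factory reads the binding from configuration, so moving
// from v5 to v6 is a config edit, not a client recompile.
public class Plugins {
    public static Logger logger() throws Exception {
        String impl = System.getProperty("logger.impl", "LoggerV5");
        return (Logger) Class.forName(impl).newInstance();
    }
}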

Speaking of all this, I responded to a post on the axis mailing list yesterday about much the same problem. It was claimed that building tightly coupled systems using web services was a hard problem - forgive me, but I thought the idea of WS was to get us away from all those tightly coupled headaches!

Could become a nightmare for testing and support, maintaining meshes of mix-and-match services.

Oh yes :) The REST/HTTP uniform interface wins hands down, when clients and services are run by different authorities with their own agendas. Give all published services the same interface. Just think of it as the Plugin pattern applied to the web.

December 18, 2002

Goldfish

The Register
Ted Farrell, Oracle's architect and director of strategy for application development tools now sitting on the Eclipse board, said continued divergence of the Eclipse community from Sun, BEA, Oracle and these companies' own partners could fragment the Java tools market - a dangerous possibility in the face of concerted attack from Microsoft Corp.

Sounds like a rerun of the UNIX wars with .NET playing the part of Windows NT.

Doesn't anybody learn from history in this business?

Ant sucks 2

Codito ergo sum

I'm just marking this for later. Ant does have problems, but not for any of the reasons given here.

OS Gridware from Sun

Pushing the envelope
Cool.

Yep!

Jsp for Zope

Jsp for Zope

I really don't know whether this is very cool, or fully bogus...

CLASSPATH Hell

Mike Clark's Weblog

But the kind of classpath problems that don't conveniently throw exceptions are really nasty. You only suspect there's a problem because the version of a class you think is being used is not. It usually turns out that the class you're expecting to be used is masked by another version of the class at a higher precedence in the CLASSPATH variable. Those kind have a way of robbing massive amounts of your time.

Close, but not quite. CLASSPATH headaches are symptomatic of another problem: versioning. The software industry sucks at versioning. What Mike describes above I often call CLASSPATH Hell - named after DLL Hell. In reality JAR packaging offers no significant advance over previous component packaging efforts like COM. In J2EE things go from bad to worse: there you need to keep client and server jarfiles in sync (as Steve Loughran has pointed out). This has the effect of tightly coupling the evolution of clients and servers in a very nasty way. And let's not even talk about what making something Serializable does for managing versions of a class.
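One small mitigation, for what it's worth: you can ask a class where it was actually loaded from, which turns the guessing game into a one-line lookup. A throwaway sketch - the class name on the command line is whatever you suspect is being masked:

public class WhichJar {
    public static void main(String[] args) throws Exception {
        Class c = Class.forName(args[0]);
        // The code source is the JAR or directory the class came from;
        // it's null for classes on the bootstrap classpath.
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        System.out.println(args[0] + " loaded from "
            + (src == null ? "the bootstrap classpath" : src.getLocation().toString()));
    }
}

Run it as, say, java WhichJar org.apache.log4j.Category and it tells you which copy on the CLASSPATH won.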

Webservices is one area where I feel versioning problems will become truly ugly. Things are difficult enough to manage when clients and services are within the same administrative boundary - on the web they're not. Anyone who thinks you can keep clients and services in lockstep, or just frig systems into shape like you would on the LAN, is in for a rude awakening. Indeed Martin Fowler has indicated we need language support for differentiating between public and published interfaces.

But trying to bite off the versioning problem at the level of languages seems to be going in too low, for the web at least. We might be better off instead looking at architectural models and protocols that are better designed to cope with the independent nature of clients and web services, such as REST.

And I'm really looking forward to trying out Mike's tools :)

interface inheritance

The Fishbowl: Inheritance Taxonomy

(This is an expansion of a post I made to the extreme-programming mailing list last week, I guess it bears repeating since I can't be bothered writing anything new right now. It was a response to the comment: I understand how implementation inheritance works in other languages... but so far I don't see if or why VB's lacking it is so bad... I'd like to hear comments on either side... anyone have any?).

Perfect. I wrote a post for this thread, but didn't send it; I thought it was a bit long, and veering slightly off-topic. So I'll put it up here instead :)

(The comment was Kay Pentecost's.)

December 16, 2002

Simple gifts (part 2)

ClydeHatter.com: Simple gifts (part 2)

Simplicity is an elusive concept. Perhaps it depends on which end of the telescope you are looking through. Simplicity from the point of view of an end-user is usually purchased at the expense of complexity elsewhere in the system. So, if I purchase an application which allows me to create my website by dragging and dropping a collection of pretty icons, then you can be sure that the simplicity which is offered to me as an end-user is purchased at the cost of some serious complexity hidden deep in the bowels of the application. If I want to tweak the application in question in order to modify or extend its functionality, then it's going to be much harder than if I've coded the site myself in raw HTML. Every layer of abstraction which simplifies the task of the end-user, adds an extra level of complexity (or maybe opacity is a better word) to the maintenance of the application's back-end.

Perhaps getting a handle on simplicity is hard because we don't have a strong definition of what complexity is (even complexity theorists don't agree on what complexity is), so we can't even define simplicity in terms of what it isn't. Fwiw, two things provoke the word complexity in me:

- excess interactions between parts
- excess number of abstractions being revealed at any one time

ie, something whose behaviour remains unpredictable (outputs don't make sense from inputs) while revealing too much (irrelevant, noisy) information is complex. We need to be able to build models of things outside ourselves, and there's only so much plate-spinning we can do in our heads before we start getting confused.

No doubt this will wind up like the Spanish Inquisition sketch in Monty Python - 'There are two things that make for complexity, unpredictability and noise... and dissonance. Wait, there are three things that make for complexity: unpredictability, noise, dissonance and...'

Yet a lot of what is understood as simplicity in computing is about moving the complexity around or sweeping it under the rug. I imagine the problem with many APIs is that they're the published equivalent of a workaround - they make systems more complex by doing a poor job of hiding parts that should have been either clarified or simply left alone.

I imagine there's a law of conservation of complexity for computing.

December 15, 2002

FIPA Specifications voted to Standard Status

PR0004

This is very good news. The FIPA agent stack is very similar to webservices at the plumbing level (indeed I'd expect any future FIPA system to run on top of SOAP/WSDL).

Aside from objections from the REST side of the house, many of the limitations of webservices can, I believe, be traced back to the absence of a content model for message exchange and to not having a good story on service discovery/introspection. The FIPA standards address those issues head on, and should fit snugly on top of webservices plumbing. That suggests they'll be of more immediate use and appeal than the semantic web technologies, assuming the webservices community is prepared to absorb them.

December 14, 2002

Fine young cannibals

ClydeHatter.com: Web Log

Clyde again, on the impossibility of making computers with an ounce of common sense.

Coq au Vin

Cannibal case shocks Germany : CNN

Police hold alleged killer and cannibal: news.scotsman

Gibt es noch mehr Menschenfresser? ('Are there more cannibals out there?')

You couldn't make this stuff up.

(thanks to Clyde for the turn of phrase)

Prince 2K

PRINCE2.com Homepage.

Take a look at the date...

a few minutes later....

My bad! The date is fine on IE, but not on Mozilla. Here's the call:

<script language="javascript">document.write(getdate(DDMMYY));</script>

Here's the script:

var DDMMYY = 0;
var MMDDYY = 1;

function getdate(mode)
{
    var now = new Date();
    var dayNr = ((now.getDate() < 10) ? "0" : "") + now.getDate();
    var MonthDayYear;
    if (mode == DDMMYY)
        // getYear() is the culprit: IE returns the full year, Mozilla
        // returns the year minus 1900. getFullYear() is the portable call.
        MonthDayYear = (dayNr + "/" + (now.getMonth() + 1) + "/" + now.getYear());
    else
        MonthDayYear = ((now.getMonth() + 1) + "/" + dayNr + "/" + now.getYear());
    return MonthDayYear;
}

function gettime()
{
    var now = new Date();
    var ampm = (now.getHours() >= 12) ? " P.M." : " A.M.";
    var hours = now.getHours();
    hours = ((hours > 12) ? hours - 12 : hours);
    var minutes = ((now.getMinutes() < 10) ? ":0" : ":") + now.getMinutes();
    var seconds = ((now.getSeconds() < 10) ? ":0" : ":") + now.getSeconds();
    var TimeValue = (" " + hours + minutes + seconds + " " + ampm);
    return TimeValue;
}

I guess it could have stood some testing :) The culprit is getYear(): IE's JScript returns the full four-digit year, while Mozilla follows the old JavaScript behaviour and returns the year minus 1900 - so the page shows 102 where it means 2002. getFullYear() would have worked on both.

December 12, 2002

The XML family of specs

Big Picture of the XML Family of Specifications

Good grief!

Code is not text

Incoherence

   int outletNumber = 25;
   Outlet myOutlet = new Outlet(outletNumber);
   ConnectionListener myListener = new SMTPListener();

Now, what do you all think of this alternative:

int outletNumber = 25;
Outlet myOutlet = new Outlet(outletNumber);
ConnectionListener myListener = new SMTPListener();

Much more readable, right?

Not for me.

Ara Abrahamian on edit/compile/debug/startup in J2EE

Memory Dump

Here we go:

Make the damn thing less complex! Cut off dependencies to containers. We shouldn't have to run the web app tests inside a container, and we shouldn't have to use mockobjects to run them outside the container. Look at webwork: with a clean servlet-independent approach you can run the test cases outside the servlet container like normal code.

No we shouldn't. But that's the price you pay for container management in J2EE. The container takes care of everything, and everything it doesn't is, by definition, unimportant. There is zero facility for testing deployments into J2EE - indeed I don't think the J2EE specs have a tester role. Not that I like the idea of separating test from development, but it would at least tell me the J2EE architects were thinking about testing at all. Seriously, testing container-based J2EE components is a nightmare - so much so that no-one in the Java OSS world seems to want to take it head on, other than Cactus.

This state of affairs is tolerable for Servlets, as you can factor the logic out to beans and test them standalone (use! beans! they tell me). But this strategy makes no sense in the back tiers - the whole point of the container model is to have significant processing in the container and have the container manage the hard stuff, like transactions, pooling, resource access and thread safety. As soon as you take logic out of the container, you lose the benefits of containership. Damned if you do, damned if you don't.

I wonder, is unit testing in Java of more value than anything a container can give? UT helps produce high quality code in very short timeframes. Container management in J2EE doesn't always seem to be on the right side of the cost/benefit curve. The answer seems to be to design an app server with unit and acceptance testing in mind, or to think hard about the rationales for the container paradigm - and that may involve thinking hard about how application code squares with relational databases; the fact is much of the realized value of something like EJB is in caching the database.

Nor do things make sense with JSP, which is close to untestable (yes, everything's testable, but JSPs are a stretch). Again the container is not designed with testing in mind - of what use is a stacktrace pointing to a line in the compiled servlet when I'm working with the JSP?
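On the factor-it-out-to-beans recipe: the whole trick is that the bean imports nothing from javax.servlet, so plain JUnit can drive it with no container in sight. A minimal sketch, all names invented:

import junit.framework.TestCase;

// An ordinary bean: no servlet or EJB imports, so no container needed.
class VatCalculator {
    public double gross(double net, double rate) {
        return net * (1.0 + rate);
    }
}

public class VatCalculatorTest extends TestCase {
    public void testGrossAppliesRate() {
        VatCalculator calc = new VatCalculator();
        assertEquals(121.0, calc.gross(100.0, 0.21), 0.001);
    }
}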

IDEs can provide efficient implementations of Maven goals

That's got to be Maven's job. Period. I'm not interested in moving to a build system that requires an IDE to optimize its execution.

Ant and Maven are a solution to a problem: treating software development like a factory line consisting of many steps which can be automated. The problem is, with all these steps and the complexity of each step, we end up struggling with a giant slow build process.

Wholehearted agreement with the production-line idiom, though to be honest, Ant doesn't really support it (unless you start using system entities to compose buildfiles). This shouldn't be a surprise - Ant at heart is a dependency resolver cum expert system cum graph walker, not a pipelining architecture. And in truth, speed is not the problem I see with building Java. The problems we have in work with the build/release process are versioning released jarfiles (our own and third parties'), in particular keeping client and server jarfiles in sync - aside from container testing, coupling clients and servers through jar libraries is IMO a serious design flaw in J2EE. Very soon now, Propylon will announce the release of a product for XML processing which supports versioning, managing and unit testing of processing components, yet we're stuck when it comes to doing the same things for the Java code that actually implements the product. It's a surreal situation.

And just wait until we start having to manage web service deployments through Java where the server is continually being upgraded and you simply cannot guarantee that client jarfiles or even JDKs are synced with yours. Web services is a very, very different world to the RMI/J2EE barrio.
The CLASSPATH and the JAR file format are inadequate for managing dependencies, versioning and product releases. JMX I reserve judgement on, but I don't see much support there by way of versioning or deployment.
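To be fair, the JAR format does carry a sliver of version metadata - the manifest's Implementation-Version attribute, readable at runtime through java.lang.Package. That's about the extent of it, and nothing enforces or resolves it. A sketch (the package name is hypothetical, and the call returns null unless someone filled the manifest in):

package com.example.acme;

public class JarVersion {
    public static void main(String[] args) {
        // getImplementationVersion() reads the manifest attribute from
        // the JAR this class was loaded from - or null if never declared.
        Package p = JarVersion.class.getPackage();
        System.out.println("version: "
            + (p == null ? null : p.getImplementationVersion()));
    }
}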

While the Java world agonizes about web service 'ilities, XML v Binary, REST v RPC, J2EE v .NET, XML Schema versus untyped XML, I fully expect web services management, versioning and maintenance to be blissfully ignored. As an industry these are things we've always sucked at - but if we're at all serious about a services paradigm, that means we're serious about things like quality, testing, maintenance, and the independent evolution of clients and services. There is no real support in J2EE for any of this. It's the stuff of business models.

Processing XML with Java

Cafe au Lait Java FAQs, News, and Resources

ERH says his latest book isn't selling as well as others have. He's wondering if that's because it's online. Possible, but I doubt it. I guess the obvious reason is that computer books are never going to sell as well in the current downturn (Tim O'Reilly, surprise, has observed this). But I can think of some other reasons. First off, I saw it in a bookstore on Monday. It's a big book, ie it's Wrox Press size. Good as it may be, I'm not inclined to get through a book that size at the moment (and I never was). As it happens, most of us at Propylon are blessed with being very busy - we're in demand! Tho' I have a suspicion that the people who bought doorstep books two years ago are either out of a job, or are now way overworked while the industry rationalizes the overspend of the last decade. I also suspect 1000+ pages turn off a certain class of readership - but since so many IT books are absurdly fat, and I'm not a marketeer, I guess the market remains. Last, I tend to track new books from good authors, and I never knew Elliotte had a new book out until last week, when I came across a link to the online download (which I haven't downloaded yet, sigh). Maybe I was asleep, but maybe it wasn't as well marketed as Elliotte's output for O'Reilly - believe me, I knew all about XML in a Nutshell (both versions, and my most used XML tome) and Java I/O before they shipped.

December 11, 2002

cow.mpeg

Simply put: this knocks Ballmer - Planet Of The Apes Audition off its perch as the funniest clip I've seen in ages. Thanks Derek!

JSR-109 Implementing Enterprise Web Services

JSR-109 Implementing Enterprise Web Services - Final Release

downloading it now...

hacking to rebirth of cool

James Strachan: good taste in music. I'd forgotten all about the Rebirth of Cool series. James is listening to No 4. No 3 is my favourite tho', and reminds me of when I was bartending... daiquiri, martini, manhattan, old-fashioned... ah yes. Here are two others I remember going round the lounge scene in London with the Rebirth series 'back in da Nineties': United Future Organization: 3rd Perspective; Beastie Boys: The In Sound from Way Out!

Speaking of James, I got a really clear and thoughtful mail from him about my comments on Jelly, and Java in general. It's only fair now that I give it a proper test run. Plus it's used in Maven, which seems to have a lot of the features I wanted for build management (I was going to give it a pass, but I might just be better off extending it - tho' if I do, I sign up for Jelly afaict).

December 07, 2002

Sean McGrath: The privilege of XML parsing

The privilege of XML parsing - Data types, binary XML and XML pipelines.

Making datatypes (particularly WXS) a part of core XML, and the propagation of binary XML, are the two most important issues affecting the evolution of XML. Read the post and you'll understand why.

Refactoring with Martin Fowler

Refactoring with Martin Fowler. Good stuff, but not as in-depth as Bill Venners' series with Ken Arnold. Patterns of Enterprise Application Architecture should arrive before Xmas.

Jira

We're evaluating Jira in work. So far, excellent.

December 06, 2002

Marc Fleury loves EJBs shocker

Read the paper. It's one of a series, and a good read.

A model in the mind is quite experimental (but syntax is a geek's best friend).

Sean links to Mark Pilgrim's XFML facet map. I'm all for metadata - more of it, and better, I say - and... wow, this is good stuff; it hadn't been on my radar before now.

And perhaps it's yet another lost opportunity for RDF to make practical inroads on the web. This is the old, old problem with RDF that will not go away - people will tend to create a point or domain-level solution for their informatics/metadata needs instead of going for the general model. RDF does not have a good story for specific solutions. Meta beats meta-meta. [Of course the other old, old problem that will not go away for RDF is the XML syntax. No doubt I'll be ranting about that here at some point.]

And the lesson learned from three years (including one on the RDF working group) figuring out what the hell to do with RDF, other than generate it, is this: if there's a problem with data, solve it with metadata, not metametadata. From the XFML spec:

XFML is a specialised format, as opposed to XTM or RDF, which are generic metadata formats. XFML will not solve all your metadata needs.

XFML, by design, is simple to write code for.

Which is why I expect it to get adopted.

My instincts tell me RDF or something like it is important; indeed I believe generalized metadata on the web is inevitable (but when?). Nonetheless, getting the trenches to look at RDF, never mind use it or take it half-seriously, is today a hard sell. Persuading people that RDF is an investment rather than speculative generality (or worse, AI) is a constant problem for me - perhaps I should work on those sales skills.

One thing I will say in favour of the metameta view: many communities are thinking about metadata, even if some of the results are hacks. It's only a matter of time before people start wondering how to hook up and move between these metadata formats, particularly given the limitations of web services languages.

But it might be a while coming - there are lots of domains whose metadata needs are quite basic. And let's be honest, many of us are still making a living performing data transforms and systems integration (programming the plumbing has plenty of life in it). But when it does happen, we can call it information integration.

Rational in Blue

IBM has acquired Rational. For a ton of money (but hey, Grady Booch is worth at least twice that).

The gen in work is that IBM is, to say the least, interested in Rational's blue-chip client list. Strategically this lines up well with their purchase of PwC. IBM has needed to fill out its top-tier services offerings for some years now. PwC+Rational gets them to a place where they can arguably go toe to toe with CGEY, Monday and BearingPoint.

However...

Russell Beattie said a while back that the day will come when there will be two IDEs in the world: Eclipse and Visual Studio. This acquisition might add some credence to Russell's argument. While Rational gave us the RUP, they are in large part a tools company. One wonders if any of the Rational suite will trickle into the Eclipse framework (or perhaps WebSphere).

And who knows. Maybe we'll see IBM Services evangelizing the Agile RUP. :)

XML versus APIs

I saw two interesting, conflicting, posts in the last few days regarding XML processing.

Adam Bosworth is starting a series on processing XML. Forget raw XML - apparently dealing with DOM and SAX is too clumsy for programmers, so we need to bind to APIs.
The article starts out really well:


Mapping XML into program data structures inherently risks losing semantics and even data because any unexpected annotations may be stripped out or the schema may be simply too flexible for the language.

Yes. Moving information from code to XML can be a risk.

Today's programmer has two tools available to parse and manipulate XML files: the Document Object Model (DOM) and Simple API for XML (SAX). Both, as we shall see, are infinitely more painful and infinitely more prolix than the previous code example.
While the DOM can be used to access elements, the language doesn't know how to navigate through the XML's structure or understand its schema and node types. Methods must be used to find elements by name. Instead of the previous simple instruction, now the programmer must write something like:
Tree t = ParseXML("somewhere");
PERatio = number(t.getmember("/stock/price")) / (number(t.getmember("/stock/revenues")) - number(t.getmember("/stock/expenses")));
In this example, number converts an XML leaf node into a double. This is not only hideously baroque, it's seriously inefficient. Building up a tree in memory uses up huge amounts of memory, which must then be garbage collected - bad news indeed in a server environment.

Now I have huge respect for Adam Bosworth, but try as I might, I can't agree with the line of this article. If you start binding XML to APIs and making things API-centric, you risk going back to the quagmire of non-interoperable systems that XML is supposed to get us out of. As well as that, it moves the developer to an API/object-oriented view of the world rather than a document-oriented mindset. This is a mistake. APIs/objects have traditionally not interoperated, not even with backend databases and other object systems in the same administrative domain. What objects and APIs have done is allow us to build large maintainable systems. Objects help us talk to the machines and build comprehensible systems. Interoperability - getting machines and systems in different domains, with different owners, running on different technology, to talk to each other - is a different problem again, and not something objects were designed to solve.


I don't dispute the points about inefficiency, or even that dealing with DOM and SAX can be awkward and clumsy. But this seems like an argument from performance and optimization (very J2EE!) rather than a real usability concern with XML processing APIs. If it were, I'd expect the argument to be that we need better APIs, as ER Harold and Microsoft keep telling us, not that we need to hide the XML completely (if you're a Java programmer working with XML, take the time to look at the .NET System.XML library). Maybe some years out we can think about hiding the AngleBracketedUnicodeText.
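For comparison, here's roughly what Bosworth's fragment costs in real Java with nothing but JAXP and DOM - verbose, certainly, but that's an argument for a better API, not for hiding the XML. The file and element names just follow his example:

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class StockRatio {
    static double number(Document doc, String name) {
        // Takes the text of the first element with this name; real code
        // would cope with missing or repeated elements.
        return Double.parseDouble(
            doc.getElementsByTagName(name).item(0).getFirstChild().getNodeValue());
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse("stock.xml");
        double peRatio = number(doc, "price")
                / (number(doc, "revenues") - number(doc, "expenses"));
        System.out.println(peRatio);
    }
}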

On xml-dev, Tim Bray puts forward the opposite argument (xml-dev - Re: [xml-dev] Typing and paranoia). I'll quote:


There's a deep tension here that won't go away. Some of us really REALLY want to be able to deal with the bits on the wire and REALLY like the open-ness and interoperability that gives us. Others really REALLY want to take the bits on the wire away and present us instead with an API that has 117 entry points averaging 5 arguments and try to convince us that this is somehow equivalent. XML, for the first time in my professional career, represents a consensus on interoperability: that it is achieved by interchanging streams of characters with embedded markup. Since about 15 seconds after XML's release, the API bigots have been trying to recover from this terrible mistake and pretend that the syntax is ephemeral and the reality is the data structure, just read chapters 3 through 27 of the API spec, buy the programmer's toolkit, sign up for professional services and hey-presto, you'll be able to access your own data, isn't that wonderful!?!?

I'm not sure people who like APIs are bigoted, but I am sure that if you think you can eradicate XML from your programs in favour of APIs and object models, there will come a day when your systems will decay and cease to interoperate. If you really value interoperability, if you really want to get systems hooked up and keep them hooked up, you will want to stay close to the XML.

Tim Bray is right - XML hands down wipes the floor with any previous attempt to get systems interoperating with each other, especially when you combine it with MIMEish protocols like HTTP.

I see the same tension coming to the OpenOffice community's doorstep. Currently there are two flavours of programming Oo. You can unzip an .sxw file, manipulate the XML, and zip the results back up. Or you can go through the Oo API. In the last month, I've done both. Now, the surface area of the Oo API is vast. There are hundreds of objects to know about, and there's understanding how to interact with the Oo object broker, UCB. If that wasn't enough, there's the Oo IDL format and a scripting language to get to grips with. In fact it's more like a platform a la the JDK than an API. The XML format is no lightweight either (the spec document is a 500+ page .pdf), but my impression so far is that it's a more tractable and cohesive approach than the API platform.
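The unzip-and-manipulate route really is small. A sketch of the first half in Java - an .sxw file is just a zip archive with the document body in its content.xml entry:

import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class SxwPeek {
    public static void main(String[] args) throws Exception {
        // Open the .sxw as the zip archive it is.
        ZipFile sxw = new ZipFile(args[0]);
        ZipEntry content = sxw.getEntry("content.xml");
        InputStream in = sxw.getInputStream(content);
        // ...parse or transform the XML here, then zip the result back up.
        in.close();
        sxw.close();
    }
}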

December 04, 2002

A break from the Norm

More on interfaces. I like this passage:

Cedric:


If you read my post and Cameron's carefully, you noticed that we are not trying to resurrect the Hungarian notation completely, but we are simply pondering ways to adapt it to Java.

Russell Beattie:


If you follow the IFoo convention, and you want to change a class into an Interface, you have to change all the places it is instantiated and all the places it is used (in method arguments and declarations).

Cedric:


This is not really a problem if you use a good, modern IDE.


This argument doesn't stand up when you start thinking about published APIs. I don't think an IDE, however good, can refactor a client's code - especially the clients I don't know are using my code.

More:

Cedric: Since we are harping about conventions, here is another one that is finally gaining some momentum in the Java community, despite being pushed forth by Microsoft in the first place: all interfaces should begin with "I" (e.g. "IWorkspace").
The standard argument against such a practice is, again, that it breaks encapsulation: "I am dealing with a type, I don't care if it's an interface or a concrete type."
Well, you should, because they are not equivalent. For example, you cannot "new" an interface, and it's the kind of information I would like to have right off the bat, not when I do my first attempt at compiling and realize that now, I need to find a way to find a concrete implementation of the said interface.

Mmm, hardly. The fact that you can't construct against an interface doesn't imply you should mark its name with an 'I'. Indeed it's irrelevant - what possible help marking an interface with an 'I' gives escapes me. What can that information tell me that the context of the code cannot? To be honest I'm not interested in whether something is an interface, I'm interested in its interface - which, as Parnas said years ago, is the combination of signature and behaviour.
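Put the conventions side by side and the context already tells you everything the 'I' would. A runnable sketch, with Workspace as an invented example:

public class Naming {
    // The interface gets the plain name clients actually want...
    interface Workspace {
        void open(String name);
    }

    // ...and the implementation wears the qualifier.
    static class DefaultWorkspace implements Workspace {
        public void open(String name) {
            System.out.println("opening " + name);
        }
    }

    public static void main(String[] args) {
        // The one line that can't pretend otherwise is the constructor
        // call - and it already names a concrete class. No 'I' needed.
        Workspace w = new DefaultWorkspace();
        w.open("docs");
    }
}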

Sorry, I don't get the rationales.

Similar arguments apply against the merits of Hungarian notation (indeed using 'I' is just that). Cedric points out that good IDEs can help us keep variable names in sync with assigned types. First, this only makes sense in statically typed languages (but since this discussion is going around Java weblogs, we can let that go). Second, I don't want to depend on an IDE for such an arbitrary reason, or to help me write noisy code. Third, while I buy the argument that we can adapt to any convention, that's not an argument that we should adopt arbitrary ones. I'm inclined to say let Hungarian notation die - the best convention is the one you don't need.

Charles Miller says it best:


I prefer to have code that is easy to read in the general case, and tools that will tell me the supporting information if and when I need it. Hungarian notation is an artifact of a time when the tools weren't good enough to give us this information in any way but by throwing it all in our face at once. Now we have colour-coding, tool-tips and one-keypress navigation available to us, Hungarian notation is a horrendously clumsy anachronism. The information should be available, but not obscuring the code. Which is why I don't use Hungarian notation, but I do use a good, modern IDE.

But I didn't agree with this:

Whatever James Gosling might say about IDEs, I have little sympathy for people who think that a text-editor alone qualifies as a complete programming environment.
Things like code-completion, fast class-navigation and Javadoc access, context-aware searches (find implementors, find callers) and inline error detection are not luxuries any more. They are essential to efficient programming. And if someone is deliberately choosing to program in an environment that doesn't have them, that someone is either so good they don't need additional notation, or (more likely) wasting time and money.

What's odd about this is that many productive programmers I've worked with do not use, or depend on, IDEs very much. I don't believe I'm alone in that experience. That's not to say we shouldn't use IDEs, that we wouldn't be more productive with them, or that we should be dismissive of IDEs in general. But they are pretty far from necessary. My experience is that no one IDE is ideal - in Java-land, IDEA and Eclipse are close, but nowhere near sufficient. And at some point you'll always need to drop out to a command line to get something done.

December 02, 2002

field prefixes

Code conventions

I agree with prefixed class fields, but not for the reasons given above. For example this is bad:

public class Employee {
  private String name;
  public void setName(String n) {
    // Bug: the parameter is n, so this assigns the field to itself.
    this.name = name;
  }
}
because the Javasoft conventions do not recommend using a different naming scheme for fields. Consequently, when you read the body of a method, you can never know if you are dealing with a local variable or a field. And this can be deadly.

But what about:

public class Employee {
  private String itsName;
  public void setName(String n) {
    // Same mistyped right-hand side, prefix convention or not.
    itsName = name;
  }
}

I'm failing to see how changing your field naming convention can help when you mistype what's on the rhs of a statement.

Fwiw, I use two prefixes - 'the' for statics and 'its' for class fields.

The reasons I use prefixes are: 1, ease of reading - I find 'its' less verbose than 'this.' (and naturally 'self.' is my pet hate with Python), and I find it harder to read code where the fields are not marked out in some way, or where they are marked out with underscores (which is typographically unpleasant to my eye); 2, having a prefix makes search-and-replace operations easy, banal as that sounds.

Definitely not one to get riled up about, I just found the reasoning odd.

On the other hand, when Charles Miller argues against marking interfaces with an 'I', I find myself in strong agreement. There just isn't a good reason to bother distinguishing interfaces from other types.

Going retro

Pushing the envelope talks about blocks in Java.

IMO C2 has the definitive guide to blocks in Java, by Robert Di Falco. It's been there for a few years now.
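The idiom Di Falco documents comes down to anonymous inner classes standing in for blocks. A minimal rendering, names invented:

public class Blocks {
    // A one-method interface plays the part of the block type...
    interface Block {
        Object call(Object arg);
    }

    public static void main(String[] args) {
        // ...and an anonymous inner class plays the part of the block
        // itself - closing over finals, if noisily.
        final String suffix = "!";
        Block shout = new Block() {
            public Object call(Object arg) {
                return ((String) arg).toUpperCase() + suffix;
            }
        };
        System.out.println(shout.call("hello"));
    }
}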

It's good that the Java world is coming to appreciate the facilities in other languages. Like Darren says:

I sometimes wonder if language designers are doomed to reinvent things that some guys over the road already implemented a decade ago.

Make that decades ago. Lisp people have been watching this go on since the 1960s. That's got to be annoying.

December 01, 2002

Strassman v Microsoft (but it's all in the graph)

frontline: hackers: who's responsible?: the pernicious characteristics of monocultures:

A quote from Strassman's essay:

The "Great Potato Famine" or the "Irish Famine" occurred in 1845-49 when the potato crop failed in successive years. The crop failures were caused by blight that destroyed the potato plant. It was the worst famine to occur in Europe in the 19th century. By the early 1840s, almost one-half of the Irish population--but primarily the rural poor--had come to depend almost exclusively on the potato for their diet, and the rest of the population also consumed it in large quantities. A heavy reliance on just one or two high-yielding varieties of potato greatly reduced the genetic variety that ordinarily prevents the decimation of an entire crop by disease, and thus made the Irish vulnerable. In 1845 a fungus arrived accidentally from North America, and that same year Ireland had unusually cool, moist weather, in which the blight thrived. About 1.1 million people died from starvation or typhus and other famine-related diseases. Many emigrated, and by 1921 the population was barely half of what it had been in the early 1840s.

Strassman goes on to talk about the risks of technical monocultures, focusing on Microsoft. The notion presented in Strassman's essay - that a technological monoculture could help effect a system-wide crash - is an interesting one and worth exploring. But I strongly doubt a monoculture alone is sufficient cause. We can correlate famine with crop failure. But it takes more than crop failure to induce famine. Crop failure was never the sole cause of the Irish famine.

Microsoft's response takes the line that computer networks and potato crops can't be compared, since the latter is organic and this makes them fundamentally distinct from non-organic networks. All in all, an interesting debate, with both sides having flawed arguments.

Another way to approach this matter is to examine the characteristics of our internetworks as graphs. In terms of their properties as graphs, computer internetworks and organic systems are quite similar - notably, in how they carry their respective viruses there is not much distinction to be made. Random graph theory and the study of the characteristics of networks, regardless of domain, are now fertile research areas. The point is to determine whether the graph topology of our information systems is conducive to catastrophic failure, and whether in fact technological monocultures, be they in the private or public domain, can contribute to such failures.

XML-RPC case study

0xDECAFBAD: XML-RPC, a mini case study

Interesting contrast to the effbot's.

It's good to see reasoned assessments of these technologies in the field. Discussions about SOAP|REST|XML-RPC tend to degenerate.

Paul Prescod goes into the limitations of XML-RPC in more detail in response to this study. After my experience using XML-RPC, my thoughts are as follows:

  • XML-RPC is tied to HTTP. This, more than anything else, makes it architecturally distinct from SOAP.

  • A lazy way to send lists and dictionaries around. This is ok if you own both endpoints or just need to send lists and dictionaries. If you don't, or have more complex data structures, you might want to look elsewhere. (There's a sketch of what goes over the wire after this list.)

  • Very quick to set up. SOAP toolkits do not come close. In Propylon we've used XML-RPC in the past to quickly hook up two machines that were part of a messaging system. And I do mean quickly - it took a few hours to get a Python endpoint pushing XML messages from a BizTalk framework into J2EE. I suspect this is the key appeal of XML-RPC. If your data is a dictionary, say, it's probably quicker to install XML-RPC than to use HTTP POST and write serializers for that dictionary structure.

  • No Unicode. IMO this is the second most brain-damaged aspect of XML-RPC. But since character encoding is such an all-round pita, it's understandable to see a drop-down to ASCII, if not really forgivable. At least with a Unicode base you have a fighting chance of getting in and out of Latin-1; with an ASCII baseline you're stuffed.

  • Extensibility, which Paul takes issue with, wasn't really a problem for me. You only have to read the spec to see that's not what it was designed for. In other words, don't use XML-RPC for communications whose form you think is unstable or liable to change over time.

  • XML payloads might end up being base64'd (if you're not sending the specified structures around). This is the most brain-damaged aspect of XML-RPC. I imagine this was done to avoid problems with namespaces, or because no-one thought about using XML-RPC to send documents around, but it could have been solved by using Java-style packagespace names for the XML-RPC elements and a looser spec. Note that SOAP has to deal with this problem with namespaces too, as does any XML envelope format. The answer of course is to use a MIME carrier format as SwA, BEEP, SMTP or HTTP does - of course then maybe you're wondering (like me) what you needed an XML envelope format architected on namespaces for in the first place.
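Here's the sketch promised in the lists-and-dictionaries bullet: the entire protocol is a small XML document POSTed over HTTP, doable with nothing but the JDK. The endpoint URL, method name and struct members are all made up:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class XmlRpcPost {
    public static void main(String[] args) throws Exception {
        // An XML-RPC call is just this document POSTed as text/xml:
        // a method name plus values built from a handful of scalar,
        // array and struct types.
        String call =
            "<?xml version=\"1.0\"?>" +
            "<methodCall>" +
            "<methodName>inventory.update</methodName>" +
            "<params><param><value><struct>" +
            "<member><name>sku</name><value><string>X42</string></value></member>" +
            "<member><name>count</name><value><int>7</int></value></member>" +
            "</struct></value></param></params>" +
            "</methodCall>";
        HttpURLConnection conn = (HttpURLConnection)
            new URL("http://example.com/RPC2").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml");
        OutputStream out = conn.getOutputStream();
        out.write(call.getBytes("UTF-8"));
        out.close();
        System.out.println(conn.getResponseCode());
    }
}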

Overall I'd say XML-RPC has the properties of a neat Perl hack. It's good for small quick things and prototyping, but not something you'd extend or manage, or use where requirements are unstable. But if you weren't sure which way to go in the web services kerfuffle, and the above points don't affect you, XML-RPC is a good fence to sit on.

The Java ceiling

Jelly - Jelly : Executable XML. Jelly is a scripting/tag language that uses XML for its syntax.

This is supposed to be good because you can use Jelly to process Jelly, or any other XML, and since like XSLT it has the property of lexical closure, you can pipeline the stuff (of course with XSLT, the reality of doing this is not so straightforward). Why not use an existing scripting language (e.g. embedding Mozilla Rhino for JavaScript) is not discussed.

I suspect Jelly was invented for the following reasons:


  • someone felt a genuine need for a procedural scripting language for munging XML.
  • it would be cool if the language was also in XML, so we could build processing chains and filters (lexical closures).
  • using Ant as a scripting language is officially bad practice.
  • in reality XSLT sucks for pipelining.

What I don't understand is why anyone would want to write code in XML when they could use powerful languages such as Python/Jython, Perl, or the aforementioned JavaScript.

Jelly is informative nonetheless. Ant clearly has limitations when it comes to multiple projects and versioning (arguably it's not designed for that, but even the ASF uses it as though it were). I've experienced this first hand, and it's good to see that Jakarta has implicitly acknowledged it with the Maven project. [I'm putting something together to deal with managing Java code, somewhere in between Maven and CruiseControl - watch this space.] It tells me that neither XSLT nor JSP is sufficiently repurposable or hacker-friendly to be used for anything other than generating (X)HTML.

Most importantly, it suggests, along with some other goings-on in the world of Java open source, that Java as a language and platform is reaching a natural level of incompetence. Java, in short, is under strain. That strain is centered on making code adaptable and repurposable at runtime. To really do that, you need a language that lets you change the software while it's running. In the past, this was not a business need - it was limited to a minority using Lisp Machines and Smalltalk, or experimental scripting environments. Only the mainframe needed to stay up 24x7, and business requirements tended to be a tad more stable. Today, with the client-server web, it is crucial to a business to be able to adjust running code without taking servers offline.

Jelly, XDoclet, XRAI, and the strong interest in aspects and scripting runtimes, suggest that Java is perhaps getting in the way. Not so much because these things exist, but because the form they take seems entirely designed to get around the Java language while remaining inside the JVM.

The first clues we had about the limitations of the Java language proper were the absence of runtime introspection and of type generics. Runtime introspection was fixed years ago with the Reflection API (a kludge nonetheless, compared to what can be done in Smalltalk|Python|Lisp). Arguably, Java with generics is a new language that supersets Java. Generics are a huge leap forward. It's weird that while many of us Java developers would spit on C++, C++ remains the more powerful and expressive language, primarily because you can use it for generic programming. Consider what Alex Stepanov (of C++ STL fame) has to say about Java:


You can't write a generic max() in Java that takes two arguments of some type and has a return value of that same type. Inheritance and interfaces don't help. And if they cannot implement max or swap or linear search, what chances do they have to implement really complex stuff? These are my litmus tests: if a language allows me to implement max and swap and linear search generically - then it has some potential.
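For what it's worth, the generics extension does pass Stepanov's litmus test, which is part of why I say generics-Java amounts to a new language. A sketch in the style of the JSR-14 drafts (details may shift before it finally ships):

public class Litmus {
    // Stepanov's test: two arguments of some type T, and a return
    // value of that same T.
    public static <T extends Comparable<T>> T max(T a, T b) {
        return a.compareTo(b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        String s = max("ant", "maven");              // T inferred as String
        Integer i = max(new Integer(1), new Integer(2));
        System.out.println(s + ", " + i);
    }
}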

Doing something that would be trivial in Lisp and awkward in Python or C++ seems to be hard in Java, requiring either non-standard extensions to the language with a custom compiler (AspectJ), using Javadoc as a preprocessing engine (XDoclet), or bytecode tweaking (CGLIB). Note that no-one in the Perl or Python communities would dream of creating anything like Jelly (or Ant, for that matter).

My point is that the Java open source community is gradually finding the Java language to be an obstacle in itself. I think over the next year we'll see it come into general awareness that Java as designed is a bottleneck. Indeed, innovation in Java today too often means using Java to write interpreters for little languages that do things in Java you can't do in Java. Or adding new standard libraries.

I'm not actually interested in beating up on Java, though it may sound that way. I use it a lot and like it. The reason the situation concerns me is that in my experience business people don't actually care a whole lot about Java the language; they care about J2EE. Of course, J2EE is underwritten by Java and the JVM, but that is a detail. You might as well say that aluminium and ABS underwrite a car.

But as business requirements roughly translate into the technical need for continuously adaptive systems flung across the Internet (as opposed to highly scalable and modular ones flung across the LAN - the classic J2EE pitch), the dissonance between what is needed at the business level and what can be achieved with the Java language in reasonable time and money will widen. Adaptive businesses need adaptive systems. Adaptive systems need adaptive languages.

Even Gosling's current anti-.NET slogan, "J2EE is a marketplace", misses the point. So does all the rest of the .NET v J2EE nonsense. Adaptation is what matters to a modern business. The biggest risk to the J2EE franchise is not .NET, but Java itself. The minute businesses figure out that Java equates to inflexible systems (the way they did with mainframes, CICS, COBOL and C++), J2EE ceases to be a marketable proposition.

The thing is, Sun has the talent to make Java a truly flexible programming language. Guy Steele, Gregor Kiczales and Richard Gabriel are or have been involved with Java, and they know a lot about how to make languages adaptive.