Bill de hÓra: April 2004 Archives

« March 2004 | Main | May 2004 »

April 30, 2004

Pragmatic Unit testing in c#: good book, easy to read

Finished reading the Prags new C# testing book yesterday. It's very good.

I found it easy to read. They say it's designed for onscreen reading, and it is if you use Acrobat Reader 6. Then it's very readable (5 blew goats, so I upgraded). The typeface is a serif*, and its ascender and descender strokes are quite narrow, but that's ok. They contrast strongly enough with the background. The code samples have pretty colours, and are in a very readable tiny-text. Overall, the book looks good online. No need to buy a print copy.

* Yes, everyone does this, but please, consider using a sans-serif font onscreen. PDF sans serif looks really good since I upgraded the reader.

Isn't it? Mmmmm. Marvellous.

Le Big Maque:

Really should get one, someday.

April 28, 2004

Current reading

Juval Lowy: .NET Components
Michael Brundage: XQuery
Ian Witten: Managing Gigabytes
Phil Bishop, Nigel Warren: JavaSpaces in Practice
JXTA protocols

Using RDF

Robert Sayre commented on my WS stack:

Seriously, why is RDF in there?

RDF is in there, because I know people are using it, but aren't talking about it so much (it's a bit like the problem JXTA/Jini has).

At the moment, RDF works well for "normalizing" content across administrations - in English, creating shareable keys. For example we use RDF:

  1. to help manage reliable delivery over HTTP - each message exchange is assigned a URI and we can make assertions about the delivery state as RDF. As well as that all the messages have their own IDs which are linked to the exchange URI. This is proving extremely useful for tracking.
  2. In a pubsub system to log messages. In this system subscribers have the option to receive batches of messages wrapped in an envelope - this helps them manage their downloads. Each individual message has an identity, but when one gets swallowed inside a batch it would "vanish" from the audit trail. To track where messages went we used RDF assertions about containership.
  3. I haven't done this yet, but I'm very, very close to writing an RDF n-triples appender for log4j. I have a bunch of components that have identity and I would love to be able to log their activity as RDF instead of eyeballing and grepping through "[time][name][event]" traces to build what is essentially a call graph. If anyone has done this, give me a shout.
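The appender itself would mostly be string formatting. Here's a minimal sketch of the N-Triples side (the predicate URIs and component URI are invented for illustration, and the log4j wiring - a subclass of AppenderSkeleton - is left out):

```java
// Sketch of an N-Triples log formatter, of the kind that could back a log4j
// appender. The log# predicate URIs and the component URI are made up.
public class NTriplesLog {

    // Render one (subject, predicate, literal) statement as an N-Triples line.
    public static String triple(String subject, String predicate, String literal) {
        return "<" + subject + "> <" + predicate + "> \"" + escape(literal) + "\" .";
    }

    // Minimal N-Triples literal escaping: backslash and double quote.
    public static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    // One logging event becomes three statements about a component URI,
    // instead of a "[time][name][event]" line that has to be grepped apart.
    public static String logEvent(String componentUri, long time, String name, String event) {
        return triple(componentUri, "http://example.org/log#time", Long.toString(time)) + "\n"
             + triple(componentUri, "http://example.org/log#name", name) + "\n"
             + triple(componentUri, "http://example.org/log#event", event) + "\n";
    }

    public static void main(String[] args) {
        System.out.print(logEvent("urn:example:comp/42", 1082678400000L, "dispatcher", "started"));
    }
}
```

Because the subject URI repeats across lines, the call graph falls out of a simple group-by on subjects rather than a regex pass.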

I think the problems with "seeing" RDF being used, aside from obvious issues like syntax, are:

  1. the lack of a query language that will work outside someone's toolkit or product (tho' RDQL would be my choice right now). At the moment most RDF seems to be about data capture. The W3C are at requirements stage on this, but I'm hoping they don't go off the deep end as happened with the RDF-MT and XQuery.
  2. the fact that most people still are not identifying things in a way that can live outside the scope of a single database or filesystem. Everybody is declaring property-value pairs of some kind, but not enough people are using shareable keys to identify the thing the property-value applies to. We're either using pure context (filenames, root elements) or auto-incrementing primary keys. The point is we are already identifying things, but not as usefully as we could be.
  3. Uninformed press. I don't see RDF used so much for designing vocabularies or ontological work - the use-cases which seem to get the most press. Certainly that's not what I'm using it for. I've never written an RDF schema for production use.
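Point 2 in miniature: an auto-increment key means nothing outside one table, but an identifier scoped with an authority travels across administrations. This naming scheme is invented purely for illustration:

```java
// Sketch of the difference between a local key and a shareable key.
// The urn: scheme layout here is an invented convention, not a standard.
public class SharedKeys {

    // A key that only makes sense inside one database table.
    public static String localKey(long id) {
        return Long.toString(id); // "42" - whose 42?
    }

    // A shareable key: scope the identifier with an authority and a collection,
    // so two administrations can make assertions about the same thing.
    public static String sharedKey(String authority, String collection, long id) {
        return "urn:" + authority + ":" + collection + "/" + id;
    }

    public static void main(String[] args) {
        System.out.println(localKey(42));
        System.out.println(sharedKey("example.org", "orders", 42));
    }
}
```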

So for me, RDF is part of the stack. My RDF needs are relatively low level (think operations, systems management) compared to most of the talk around RDF (ontology, content management). It can be summed up as follows - "what is it, where is it?". "Why is it, how is it" isn't on the radar yet. When you use RDF this way, it proves to be cheap and cost-effective - no agonizing about models, no pollution of your XML vocabularies. Just useful data.

April 27, 2004

On message

Tim Bray joins the party:

I think somebody needs to stand up and start waving a flag that's labeled 'WS-Simplification' or 'Real Web Services' or something, that's all about building applications with what's here today and what works today: XML, HTTP, URIs, SOAP, WSDL*, and that's about it.

People already are. Mark Baker, Sean McGrath, Paul Prescod, Don Box, even myself, have been pointing this out in one way or another for some time. Mark Baker in particular took a lot of stick for not being on message with WS orthodoxy. Paul Prescod was disembowelling SOAP-RPC and UDDI two years ago. It's a pity the W3C's TAG hasn't been the group taking the leadership role and waving that flag.

Almost everything you need to do in this space, well someone is probably already doing it with a combination of SMTP, HTTP, FTP, RDF, URI, MIME, XML. That's the WS stack. Perhaps Atom/RSS will enter that set.

* I have my doubts about WSDL/SOAP - good for demos tho'. I'm probably being unfair at this stage, but SOAP/WSDL to me is tainted with all that RPC, WXS, protocol neutrality, and by Infoset weasel wording. Some of the WSDL-driven tools are very cool, but cool does not a system make.

Building Developers with an ISV

Eric Sink is forgiven for writing life support for VSS:

The bulk of their time should be spent writing code and fixing bugs. But every developer also needs to be involved in other areas such as the following:

* Spec documents
* Configuration management
* Code reviews
* Testing
* Automated tests
* Documentation
* Solving tough customer problems

Using my terminology, these things are the difference between a programmer and a developer. The developer has a much larger perspective and an ability to see the bigger picture. The programmer writes code, throws it over the wall to the testers and waits for them to log bugs. The developer knows it is better to find an [sic] fix bugs now, since he just might be the one talking to the customer about it later.

[via Erik]

April 26, 2004

Pragmatic Unit testing in c#: good book, hard to read

Update: installing acrobat reader 6 made the book read just fine - couldn't get any joy from reader 5.

Been reading the Prags new C# testing book. It's very good.

But I'm finding it hard to read. They say it's designed for onscreen reading, but it's not. The type face is a serif*, and it ascendent and descendent strokes are too narrow. They appear gray and don't contrast strongly enough with the background (possibly this font was anti-aliased). The code samples have pretty colours, but are in tiny-text. Overall, the book looks slightly like a fax. I'll be buying a print copy next time.

Sorry trees.

Yes, everyone does this, but please, consider using a sans-serif font onscreen.

April 23, 2004

Tabs versus Spaces in a nutshell

Tabs versus Spaces: the tab is a presentation macro, not a character - the fact that some bearded idiot made it an ascii character is an unfortunate decision we're stuck with. If you're too young to know what ascii is, tab is a bit like the bold or font tag in HTML - also unfortunate decisions. Tab characters don't belong in source code, ever - only idiots put tabs in source code. Map the tab key to multiple spaces instead - I don't care how you do it, just get it done. No, I don't care what you do with bold tags in HTML. No. No. No.
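The tab-to-spaces mapping is a few lines wherever your editor won't do it for you - a minimal sketch, with the tab width as an assumed convention:

```java
// Minimal detab sketch: replace each tab with spaces up to the next tab
// stop, so indentation stops being a presentation decision. The tab width
// is whatever your team argues about least.
public class Detab {

    public static String expand(String line, int tabWidth) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (c == '\t') {
                // Pad to the next multiple of tabWidth.
                int pad = tabWidth - (out.length() % tabWidth);
                for (int j = 0; j < pad; j++) out.append(' ');
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(expand("\tif (x) {", 4));
    }
}
```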



Darren Hobbs thinks about using SMTP and NNTP for messaging.

worth clicking to see the other posters

I'll have what he's having. I think XMPP (Jabber) might be an option for pubsub.


Be all that you can be.

A ringtone for all your personalities

Endless hours of fun.

Now, for the horror themepark remix: open these in new windows.


I've enjoyed reading Dan Creswell's Weblog over the last few weeks. I hope Jini folks are following it.

The Margaret Thatcher Illusion

The Margaret Thatcher Illusion: Now I know why people start to look creepy upside down, if you look at them long enough...

Ahem. But if you tilt your neck slowly the face will flip from normal to freakish. Minutes of endless fun, right there.

That Tony Hoare quote

...there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.

Enough already, we all know that bit. Now, here's the rest of it, which almost no-one quotes:

The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature. It also requires a willingness to accept objectives which are limited by physical, logical, and technological constraints, and to accept a compromise when conflicting objectives cannot be met. No committee will ever do this until it is too late.

April 22, 2004

There's not a lot of money in revenge

Inigo Montoya

Which Princess Bride Character are You?
this quiz was made by mysti

There should be one and preferably only one obvious way to do it

So, what could be simpler than looping through an array of integers in C#? Which way would you do the loop?
   1. foreach (int i in foo) {}
   2. for (int i = 0; i < foo.Length; i++) {}
   3. int len = foo.Length; for (int i = 0; i < len; i++) {}
[...excellent analysis...]

- Joshua Allen

Maybe this is stupid... but doesn't the problem go away when there is only one way?

WXS is like

"XML is like Cardboard [...] I thought that this was a great analogue. Pity he did not try to think of one for XML schemas..."


Just kidding!


[From the It Takes Ages To Get A Taxi Via IM Department]

A colleague of mine next to me is currently handling half a dozen IM conversations, but he's topping out. I guess his attention span is being slashdotted, but in IM. SpanDotted, or maybe the SpanDot Effect. Email would be easier at this point, assuming he could find the email amidst the spam haystack of his inbox. The lesson? Synchronous systems are nicely interactive but don't scale; asynchronous ones scale, but are overrun with spam. Or something like that.

Distribution: confusion heaped on confusion

Damn, I love XML over HTTP.

There's a twisty maze of conversation around an entry from Dan Creswell that was picked up by Dale Asberry. Dale:

I think the discussion is pointing out some very big reasons why Jini isn't taking off: people don't get distributed computing. They just don't get it.

Dale again:

Never claimed that there were any magic bullets. Only that distributed application design isn't really as hard as most people think [..].

So it's not that hard, but people don't get it. Ok, I guess they're not mutually inconsistent statements. But they don't explain much; it's not like Jini is the only distributed technology at hand. I would agree that distributed computing doesn't have to be as hard as people think, given the right technology. But I think people think right - with industry standard frameworks (DCOM, CORBA, J2EE, WS, CUPS) distributed computing is hard.

Todd Blanchard is close:

Probably the root cause of the unfortunate disconnect you bemoan is the modeling of services as objects. Objects have state (ivars). Services, ideally, don't. Objects without state are just bundles of functions - which is a much better model for a service than a class. I rather suspect that this is the source of much bad design - services are NOT best modeled as objects.

Services do have state, but they don't expose it, they accept it. Which Dale had pointed out:

This means to me that any state needed by a service will be included in the request either directly by including it as a request parameter, indirectly through a parameter containing a global reference, or implicitly through some asynchronous mechanism.

But then:

My practical solution to this is to perform remote method calls, or groups of calls, within a Jini transaction that also contains any and all JavaSpace entries representing service state. This buys me consistency in state for all distributed participants AND fault-tolerant fail-over for when a service becomes unavailable.

Aka a transactional blackboard. Which is maybe about as well as you can do for object invocations. In all this discussion, it seems objects are an artefact of Java, not the nature of distribution. Nonetheless, if we were doing Java middleware over again tomorrow, Jini/JXTA would be where to start, not J2EE.

You see, when it comes to distributed computing, I think all object orientation does is obscure matters. We end up talking about transactions, parameters, references, interfaces, when we should be talking about messages, state, names, protocols. And if that doesn't fit with object thinking, so much the worse for it - distribution and decentralization are becoming the norm.

But object wonks might object because that separates behaviour from data - bad, bad, bad. Which is what Todd is talking about when he says objects are not a good model for services. I've yet to meet a J2EE advocate who didn't see DTOs and the like as essentially latency optimization/workaround instead of proper network design (DTOs being degenerate messages) [utter bunk, sorry about that...]. I even see object advocates today insinuating that a service layer is some kind of dirty hack.

I haven't even talked about how you're supposed to manage or integrate the arbitrary interface signatures an object model allows. Or versioning. And I'm not going to either.

I don't know. I just wouldn't start with an object model and then figure out how to negotiate their consistent state once I'd flung them around the network. I'd look for an application protocol that fitted my needs. The closest I'd get to an object model is a generative or associative model (a la JavaSpaces, or content based routers). But for that I don't need objects in the design - I need data tuples.
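A generative/associative model of the JavaSpaces sort can be sketched without any Jini machinery at all. The toy below illustrates the model - writers put tuples in, readers take by template, null fields match anything - not the JavaSpaces API itself (no leases, transactions or remoting):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy associative store in the JavaSpaces spirit: data tuples, matched by
// template, with no object behaviour travelling over the "network".
public class TupleStore {

    private final List<Object[]> tuples = new ArrayList<>();

    public synchronized void write(Object... tuple) {
        tuples.add(tuple);
    }

    // Remove and return the first tuple matching the template, or null.
    // A null field in the template is a wildcard.
    public synchronized Object[] take(Object... template) {
        for (Object[] t : tuples) {
            if (matches(t, template)) {
                tuples.remove(t);
                return t;
            }
        }
        return null;
    }

    private static boolean matches(Object[] tuple, Object[] template) {
        if (tuple.length != template.length) return false;
        for (int i = 0; i < template.length; i++) {
            if (template[i] != null && !template[i].equals(tuple[i])) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        TupleStore space = new TupleStore();
        space.write("order", "urn:example:msg/1", "delivered");
        // Match any tuple about that message, whatever its state.
        Object[] found = space.take("order", "urn:example:msg/1", null);
        System.out.println(Arrays.toString(found));
    }
}
```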

[calexico: whipping the horse's eyes]

What's in a name?

Asked on David Boxenhorn's blog:

Amritas: What kind of name is Bill de hÓra?

Not sure. O'Hora is probably the same name.

Update: not much in a name, or so I thought. Ladies and Gentlemen, I give you Jon Hanna:

- It clearly seems to be an Anglo-Norman name given the "de". However the "de" was an 18th century variation on the name ("de Hore" and "de Hora", later still "de Hora" was Gaelicised into "de hÓra"). Earlier variations include "le Hore", "la Hore", "le Horhe" and "the Hore".
The "de" invention was done with some knowledge of genealogy though, for the name is indeed Norman. The earliest example of the name in Ireland traditionally being Phillipe le Hore, who was one of Strongbow's men.
However some of those names also came to Ireland as Cromwellian officers or adventurers, the Co. Cork line in particular are more likely to be descendents of later arrivals to the country.
"O'Hora is probably the same name"
- Probably not. The O'Horas are related to the Haras, Harras, Haras and O'Haras. Of course given the similarity of the names and the relative frequency with which we alter our surnames here it's likely that some O'Horas are really from the le Hore/la Hore/le Horhe/the Hore/de Hore/de Hora/de hÓra line and some de hÓras are really from the Haras/Harras/Haras/O'Haras line.
"Northern Ireland, where the Normans (with a huge French influence) "colonised"."
- The Normans colonised throughout the island. Northern Ireland doesn't really become differentiated in terms of where one can trace one's ancestors to until the Ulster Plantation under Elizabeth I when Scottish Protestants were imported in great numbers. Ironically the motivation for this was that at the time Ulster was the province with the least loyalty to the English Crown. Today Ulster-Scots names (like mine) are found throughout the island.
"Why is the Ó capitalized?"
- Irish orthography distinguishes between letters that are always part of a word and letters that are part of a word mutating depending on how it's used. The word is being written as if it were an Irish word "Óra" which is mutated when following "de" to "de hÓra" (in all lower-case it would be "de h-óra", and if one were using a Gaelic typeface like those at http://www.evertype.com/celtscript/csmain.html then it would be "de h-Óra" in that case as well).
The distinction can be significant, "Ár nAthair" means "our Father", "Ár Nathair" means "our Snake", the lower-case difference between "ár n-athair" and "ár nathair" making this a bit clearer.
In this case though it is a deliberate Gaelicisation of de Hóra, rather than the effect of an Irish etymology.
"you will see buttons for adding letters with fadas. They are not capitalized"
They don't need to be given that it's the UI of a case-insensitive search. Irish never adopted the practice of omitting diacriticals on capitals to cope with technical limitations like the French did (and this became the normal French orthography, though diacriticals on capitals are making a tentative comeback). However Irish did completely lose the dot on some consonants, replacing it with a following h (hence the name Saḋḃ became Sadhbh, and so on).

I'm speechless.

April 21, 2004

Not in cwm, and definitely not in RDF

I do hope folks realize that when RDF goes mainstream, lock-in via rules engines will be the order of the day.

Norm Walsh is working out negation: Not in RDF. Go and look at it, the example is cool. But it could have been better titled "Using cwm rules to express not". [Which is not as snappy and I'm such a pedant. But I want to scream every time I see cwm/N3 conflated with RDF.]

[beastie boys: sabotage]

It's called software for a reason

There are a few things in life I have done that are or were remotely like building software systems:

  1. Gardening.
  2. Farming.
  3. Oil painting.
  4. Cooking.
  5. Raising children.

This is counter to much orthodox thinking, and that is troubling. To be honest it's not what I expected starting out. Nor are these stock for juicy metaphors. I appreciate that the current metaphors have much more cachet - try going into a boardroom and describing what you're going to be doing as gardening rather than architecting.

But, the disciplines we look to for metaphorical inspiration are racing headlong to become 'soft', to transcend their physical limitations. As Ralph Johnson pointed out, there are real difficulties in altering a bridge or any physical thing after it's been made - theoretically no such difficulties exist in software. Many of those disciplines use computers and buy our software for just that purpose, most notably for simulation, and they are I think dissatisfied with our ability to give them the softness they want. Whereas we seem to be desperately hungry for a physics, any kind of hardness. It's the oddest thing.

Programming seems problematic, because it seems impossible to control. So - more process, more architecture, more tools, in the hope that someday programming will simply disappear in a puff of abstract modelling. I've talked before about demonizing programming being a bad idea. It's like a physicist hating physics because he sees wave functions everywhere, and wills them into particles by writing down equations. Maybe that's not good architecture thinking in some people's books - sorry, but I can't begin to talk about software architecture without taking programming and development into consideration. I don't think there's any getting away from practice.

EMEA Architects Journal 2: less architecture, more practice

It is impossible to design a system so perfect that no one needs to be good - T.S. Eliot

I liked the first issue of the Journal. I liked the second one less. What bothered me is the lack of focus on implementation or engineering and almost total emphasis on abstract architecture. How do I execute? What do I build?

Over half the issue was dedicated to SOA, but I'm still none the wiser beyond what I already knew from building in that style. There's nothing to indicate how one would create an SOA system. Perhaps it's just that my notion of proper automation and care in software is not informed by building houses, factory production, engineering bridges, or questionable abstractions. I believe the EMEA Journal has sound goals - make systems more valuable to business, perform a leadership role on Windows platforms; but the publication should consider mixing up some balance in its articles. Any one or two of these articles would be fine, but not all five. After five, the good ideas become vapourware.

Pat Helland's Metropolis is featured in the Journal and is the best of the bunch. This is a remarkable opus. I think it's probably the ultimate expression of software as Building, as the construction of an Architecture. It's the last word on the matter - you will not find a better essay on the subject. There are strong economic pressures to go and think this way. It's comforting to think that we are reaching some kind of critical inflection point of commoditization, productization (perhaps most accurately, Walmartization) of software, when society and industry has spent and is spending such frightening amounts of capital on software. Nonetheless, to me the city is not a good metaphor for software, and never has been. Writing software has not felt like designing and building a house. Using software is not like using a house - a program is not a habitat. Changing software is not like demolition (if it is, you're in trouble). I've built houses, I've built SOAs, and the processes have nothing much to do with each other as far as I can see. The problems in software are down to complexity, expediency in engineering and duplication much more than insufficient architecture. We can call making and thinking about software systems building and architecture if we want, but that's as far as it goes. I like organic and growth metaphors over physical construction or industrial ones - they ring true to me.

And yes, my business title is Technical Architect. But let's be careful about aggrandizing ourselves with direct comparisons to Architects in other disciplines. We are not architects or engineers like those folks are - they have a richer, deeper background to draw from. The likes of Marcus Vitruvius, Brunelleschi or Thomas Slade don't exist in software. But - it doesn't matter what your process, technical and architectural leanings are. At some point, if you want a system that will do useful work and grow with your needs, you will need to find competent people to write the code.

That is the kind of material I would like to see mixed into the next issues of the Journal.

April 20, 2004

The reason PHP is more popular than Java: it's safer

I really don't understand why PHP, etc. dominates the free/cheap hosting solutions. Tomcat is a production class Servlet Container and JBoss easily holds its own against the commercial application servers. Then for database you can always choose MySQL although personally I'd go for Postgres (this has the benefit of being similar to Oracle should the client want a transfer). - Mike Cogan

I'll take a stab. It's because Java/JVM wasn't designed with multi-user environments in mind. LAMP solutions leverage the (usually *NIX) operating system to manage user processes.

The JVM by comparison is a single user environment - code is running pretty much as root, which is what you'd expect given its use-case evolution (devices, websites, middleware). I think it would be too easy to hose a server running on a JVM if anyone could deploy code in it. My hosting provider emails me if I have more than two processes running for a period; that's fairly easy to do from the OS. Not so easy is to email me if I have more than two Java threads running or if I have deadlocked the JVM altogether. The (usually *NIX) operating systems have been doing this well for decades, so it's entirely sensible to build on them. But Java only has the JVM, so you have to build all the user controls again from the ground up.
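To be fair, the JVM can at least count its threads via the standard management beans; what it can't do is attribute them to a tenant or safely kill a tenant's code, which is the real gap. A sketch of the JVM-side analogue of that hosting-provider process check (the threshold is arbitrary):

```java
import java.lang.management.ManagementFactory;

// Sketch: watch the JVM's live thread count, the closest in-VM analogue
// to a hosting provider's "too many processes" check. Note there is no
// standard way to say whose threads they are.
public class ThreadWatch {

    // Count live threads in this JVM via the standard management bean.
    public static int liveThreads() {
        return ManagementFactory.getThreadMXBean().getThreadCount();
    }

    public static void main(String[] args) {
        int limit = 200; // arbitrary threshold, for illustration only
        int count = liveThreads();
        if (count > limit) {
            System.out.println("warn: " + count + " live threads, limit " + limit);
        } else {
            System.out.println(count + " live threads, within limit");
        }
    }
}
```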

The alternative, running multiple JVMs, won't fly - memory gobbling, plus you have to manage inter-JVM comms from the server to your Servlet and back again (Servlets are not designed for this; in fact, quite the opposite - their initial selling point was running in the same process as the server). Tomcat, JBoss and friends (UCL? For multihoming? With that reputation? Are they quite mad?) strike me as too flaky for multiuser environments until the JDK ships with Isolates.

I'd love to know how the few Java hosting providers that do exist manage to keep things running at all.

Random thoughts: scripting in Java

I've spent two days writing server admin scripts in Java to be run via Ant. I think the only way to do it and not go insane or build a YAFF is via Command objects. You get some measure of recomposition/reuse from your function chunks that way. Otherwise everything turns into single class hairball. Always split iteration from function. I despair that Java can't pass methods around. A little Java, a few Patterns is the best book on writing commands in Java - if you look at that and go "That's not Java", I don't blame you (tip: eat before you read it). Assuming the things you're dealing with have names, RDF makes for a kicking log/journal format. If they don't have names, give them some. The DOM: uurrghhhhh. The next time I do this I'm using Groovy. And XOM.
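The shape I mean is roughly this - a minimal sketch with invented task names; an Ant task would just build the list and hand it to runAll, which is where iteration lives, split from the function:

```java
import java.util.List;

// Sketch of the Command-object style for admin scripts: each chunk of work
// is an object, and iteration is kept separate from the function applied.
// The task names in main are invented for illustration.
public class Commands {

    public interface Command {
        void execute() throws Exception;
    }

    // Iteration lives here, once, instead of being re-inlined per script.
    public static void runAll(List<Command> commands) throws Exception {
        for (Command c : commands) {
            c.execute();
        }
    }

    public static void main(String[] args) throws Exception {
        // Commands compose and can be recombined across scripts.
        StringBuilder log = new StringBuilder();
        runAll(List.of(
            () -> log.append("stop-server;"),
            () -> log.append("rotate-logs;"),
            () -> log.append("start-server;")
        ));
        System.out.println(log);
    }
}
```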

Game over - Emacs wins?

Cool Firefox tips from Jon Udell: More Firefox search plugins

I once described Eclipse as a $40M port of Emacs to an IBMer - which didn't go down too well. But it occurs to me that Firefox with its plugin approach is not a dissimilar way of managing things to Eclipse. Now, you can make deep alterations in Emacs, given its Lisp-fu nature. Powerful, but you might end up mortally wounded. The Eclipse/Moz approach to this seems to be different - don't muck with the core, and make sure you don't have to by keeping it very small and managing extensibility via plugins. I think you can characterize that as more directing and structured than the Emacs approach.

I wonder if we're not compensating for limited flexibility in the programming languages by building flexible and reflective containers instead, that you manage through a ConfigurationLanguage. And I think we're well aware by now that one way to tie someone into a container based on an open standard is through the ConfigurationLanguage. Perhaps we should ask if this is the right road to go down - at what point does the kernel become an interpreter?

Maybe what we need is not a middleware container, but a middleware interpreter.

Which brings me onto micro-kernels and lightweight containers in Java. I wonder if there's any real difference between a deployment and a plug-in, and whether we can't compare containers to interpreters in that case. Here's a question. Which of the current crop of Java container/kernel architectures do you think supports the plugin model well?

Shai Agassi: time to market

Found via Patrick Logan:

On time to market:

Whirlpool - they can finish a design and the R&D of a product, and it takes them a year to get the product on the market. You know what happens during that year? All the Koreans get to the market before them. Why? Because they copy their designs - they see them in a show, on the floor, and they copy them - and they have a three-month time to market.
Same thing happened to Philips. Philips invented all the key innovations in the consumer electronics space, and they always were three months behind Sony. Even on stuff they invented, DVDs, they came in after. Manage that time to change, and you become Dell. You don't manage it, you become dealt. - Shai Agassi

On optimization:

Always XML, because when I go outside my company, I don't know what's on the other side. Now, if the two of them can actually do a handshake and say, 'Hey, by the way, we can optimize this,' great. I don't believe that outside the company, optimization is so big. When I look outside, the big latency is network latency. Compressing, decompressing XML is not going to make a huge difference.
Inside the company, I can build a huge pipe between my business processes written in ABAP, which I can't change - it runs my business, it's been tested, I can't touch it - and my Java infrastructure so I can do my customization. That pipeline, which is getting crossed a hundred thousand times every second, can be optimized. Excellent - you gave me something that nobody else can do. That's what we can dig underneath our own town to do this. But when I go outside the company, or between divisions, there is no value in changing this. - Shai Agassi

April 19, 2004

MySQL conference: blogged

Anthony Eden has done some serious blogging on the MySQL Conference; well worth a look.

5 more papers

Mark Baker has followed Mark Nottingham's list with one of his own, including Rohit Khare's thesis (I knew I'd forgotten something - Must. Get. Better. Metadata). He also has an interesting take on Marshall Rose's RFC.

[calexico: pepita]

XML 1.1 in a Nutshell

XML 1.1 is an abomination. You don't need it and you shouldn't use it. - Elliotte Rusty Harold

That'll make for a short book.

Just what is it with Irish weather?

Out my window, I see there is a clear blue sky, with hailstones crashing onto the yard. In April. Unbelievable.

Light on Dark

Inspired by recent discussion on dark v light page schemes, I decided to switch to a dark scheme for a while. Having everything in CSS is a godsend - the switch didn't take long. This place used to look a lot like this.

But - I'm upset at Miller and Brunning for having more tasteful schemes.

[calexico: crumble]

Java DOM attribute structure

In case I forget again:

    // xmlns:re="urn:foo"
    //
    //   getName       'xmlns:re'
    //   getPrefix     'xmlns'
    //   getLocalName  're'
    //   getValue      'urn:foo'
    //
    // xmlns="urn:foo"
    //
    //   getName       'xmlns'
    //   getPrefix      null
    //   getLocalName   null
    //   getValue      'urn:foo'
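If you want to re-check this against your own parser, a few lines of JAXP will do it. Note that what you see for a default xmlns declaration can vary with the parser and with namespace awareness - the nulls above came from one particular setup:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Attr;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Parse a one-element document and inspect what the DOM reports
// for a namespace-declaration attribute.
public class DomAttrCheck {

    public static Attr firstAttr(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true); // without this, local names come back null
        Document doc = f.newDocumentBuilder()
            .parse(new InputSource(new StringReader(xml)));
        return (Attr) doc.getDocumentElement().getAttributes().item(0);
    }

    public static void main(String[] args) throws Exception {
        Attr a = firstAttr("<e xmlns:re='urn:foo'/>");
        System.out.println("getName      " + a.getName());
        System.out.println("getPrefix    " + a.getPrefix());
        System.out.println("getLocalName " + a.getLocalName());
        System.out.println("getValue     " + a.getValue());
    }
}
```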

The DOM: a stupid evil plot.

[calexico: guero canelo]

Unacknowledged fear is the source of all software project failures: discuss

XP rhetoric is characterized by broad and sweeping generalizations about software development practice, projects and developers. A classic example is the following, from Kent Beck:

Unacknowledged fear is the source of all software project failures [1]

It takes a special kind of person to make such claims - specifically, one that is breathtakingly arrogant. - Hacknot

I wonder. Does anyone think that Kent Beck's claim is arrogant? More importantly, does anyone think it's wrong?

Crypto-Gram has an RSS feed

Bruce Schneiers Crypto-Gram RSS Feed.

April 18, 2004

Interprocess SOA

So. Here's a quote:

1. Processes have 'share nothing' semantics. This is obvious since they are imagined to run on physically separated machines.
2. Message passing is the only way to pass data between processes. Again since nothing is shared this is the only means possible to exchange data.
3. Isolation implies that message passing is asynchronous. If process communication is synchronous then a software error in the receiver of a message could indefinitely block the sender of the message destroying the property of isolation.
4. Since nothing is shared, everything necessary to perform a distributed computation must be copied. Since nothing is shared, and the only way to communicate between processes is by message passing, then we will never know if our messages arrive (remember we said that message passing is inherently unreliable.) The only way to know if a message has been correctly sent is to send a confirmation message back. - Joe Armstrong (Making reliable distributed systems in the presence of software errors) [pdf]

That greatly impressed me when I read it first. Not because of SOA or REST or any of that stuff. But because it's a quote about the architectural constraints of a programming language - Erlang. It still impresses me - Joe Armstrong's PhD is a tour de force, required reading for those of us working on top of virtual machines and networks.
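The four properties Armstrong lists can be sketched in a few lines, here using OS processes and queues (a toy illustration under invented names, nothing like Erlang's mailboxes, links and supervision trees): the processes share nothing, data is copied across the boundary, and the only way the sender learns its message arrived is an explicit confirmation message coming back.

```python
# Share-nothing message passing with an explicit ack, sketched with
# multiprocessing. The receiver owns its state; the sender sees only
# what comes back through the reply queue.
from multiprocessing import Process, Queue


def receiver(inbox: Queue, outbox: Queue) -> None:
    # Nothing in here is visible to the sender except via messages.
    msg = inbox.get()
    outbox.put(("ack", msg["id"]))


def send_with_ack(payload: str) -> bool:
    inbox, outbox = Queue(), Queue()
    worker = Process(target=receiver, args=(inbox, outbox))
    worker.start()
    inbox.put({"id": 1, "body": payload})  # a copy crosses, not a reference
    kind, msg_id = outbox.get(timeout=10)  # the confirmation message
    worker.join()
    return kind == "ack" and msg_id == 1


if __name__ == "__main__":
    assert send_with_ack("hello")
```

The confirmation queue is doing the work of point 4 above: since nothing is shared, delivery can only be known about by another message.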

I've been following the discussion going on between Steve Loughran, Ted Neward and Patrick Logan regarding the granularity of the SOA model. The conversation has gotten around to Erlang:

Patrick Logan implies that the solution to having an SOA model inproc is Erlang. I shall have a look at this. I was debating looking at learning a new language, as I am finding Java dev too slow these days, even with a refactoring IDE. I was trying to decide between Python and Ruby though :) - Steve Loughran

I come firmly down on the inproc SOA side. I would find it hard to believe anyone could read Armstrong's thesis and decide that the message passing style is only for computing in the large. But we should be talking about interproc: message passing seems suitable for any system with interprocess communications that needs to be flexible, robust and able to scale. Joe Armstrong describes Erlang as a concurrent pure message passing language *, which sounds like some of the core constraints of an SOA, or at least a good start.

Java is getting there. Isolates (JSR-121) when they arrive (1.6), will offer some of the constraints needed to build decent inter-process systems on the JVM. Plus the sadly underused JavaSpaces has shown us how to really design a protocol/service oriented API. The Jini folks have been banging the services drum for years. There is also the E language, one of the first languages built on the JVM. Finally the 1060 net kernel stuff is a REST/URI oriented approach and supplied as a jarfile.

* Which might be characterized as Smalltalk on steroids. There's a lot of confusion around what is meant by 'message passing' in Smalltalk. Conceptually I think Smalltalkers are talking about message queue type messaging and in some cases asynchrony, but physically I understand Smalltalk works like Java or C++, i.e. it's jumptables and pointers all the way down. Hence the confusion, but I trust someone will correct me if I'm wrong about this.

You'll shut me down with a push of your button?

Monopolies are sacrifices of the many to the few. Where the power is in the few it is natural for them to sacrifice the many to their own partialities and corruptions. Where the power, as with us, is in the many not in the few, the danger can not be very great that the few will be thus favored. It is much more to be dreaded that the few will be unnecessarily sacrificed to the many. - Thomas Jefferson
Just so I've got this straight, let me read this back. I go to the store and pay $30 for a DVD and place it in a player. I don't modify the DVD, I simply play it back in a different way than it was placed on the DVD. The Director's Guild thinks I shouldn't be able to do that because I am messing with the "intent" of the director. I guess that the "random" button on my CD player ought to be illegal as well then since I'm clearly changing the intent of whoever laid out the tracks on the CD by playing them back in a different order. If this isn't about the clearest example of hubris then I don't know what is. - Phil Windley

Webservices for the rest of us

Not a WS spec in sight: Implementing "Bob's Listening To".

Home network with ADSL

A few folks have asked me how my home network is laid out. Here's the diagram as of yesterday:


The WiFi doesn't use WEP encryption, but does use MAC filtering. Maybe that's not too clever, but WEP is insecure as well as being slow, plus the link seems to be flaky when it's on. The SMC acts as a print server out of the box, which saves me having to set things up via SMB or whatnot (both servers are linux). The DMZ in gray isn't done yet, but I'd like to be able to log back in while not exposing the entire LAN (The Eircom plan I'm on is for dynamic IP, but there are ways around that). First tho' I need to harden the network.

Here's the kit:

  • Netopia Cayman router (supplied by Eircom)
  • SMC Barricade router (comes in wifi and non-wifi versions)
  • 2 x SMC2835W 54Mbps cardbus adapters
  • 1 x EZ Connect Turbo SMC2455W access point
  • Crossover cable for the router link
  • Patch cable for the servers
  • 4 port switch

All in all, assuming you had the kit, it shouldn't take more than a rainy morning to set things up. Most of that would be in installing wifi drivers on the laptops and running cable. Strictly speaking you don't need the switch as the Barricade has four ports available. And if you go for the newer wifi Barricade you won't need the access point. All this stuff can be got at Komplett.

Previously I'd been unable to make the Cayman router supplied with Eircom ADSL work with my SMC Barricade. Turns out there were two problems. First I was telling the SMC to dial into the Cayman using PPPoE instead of just taking an IP address from it. Second I wasn't using a crossover cable between the two (orange in the dia. above). That last one was a bit embarrassing. Rule no 1 of networking: always check your cable (thanks Hugh!).

This would be so cool: a Squeezebox running on the LAN.

Information ownership: it's not yours until you can move it

I completely agree with Tim O'Reilly's assessment of the privacy concerns around Google GMail. Mark Nottingham has made similar observations. It's a red herring that can be dispelled by explaining how email works. What I wanted to pick up was an almost casual observation Tim made on data migration:

The big question to me isn't privacy, or control over software APIs, it's who will own the data. What's critical is that gmail makes a commitment to data migration capabilities, so the service isn't a one way door to the future. - Tim O'Reilly

The free flow of data across applications just isn't happening today. It is essential, and I think inevitable that the way we manage our information changes, given the way we work and live.

Outside the consumer space of GMail I work in integrating systems, using web technology, XML and SOA. "Systems Integration" is not really systems integration; it's information integration. When you move, reroute and repurpose information, a business can benefit from either a new service or an existing service at lowered cost with heightened quality. That's what it's all about. It's expensive, but becoming less so, in part due to the commoditization of software through open source and standards. Most business systems are not designed with data accessibility in mind. Yes, we've had XML for years, but XML's real impact so far has been down in the protocol/plumbing space. Its true value for data has not been realized - all that XML lying around is, as yet, untapped potential.

I'm telling you this, because I think what's happened in business systems over the last 5 years can inform what's going to happen in the consumer space. To appreciate the consumer application space, replace "systems" with "applications" and "business" with "customer" in the above paragraph and we might see that things are going to change. The same commoditization of data formats will have consequences for current consumer software business models, just as the commoditization of plumbing and networking has had for business systems.

Tim again:

The ability to search through my email with the effectiveness that has made Google the benchmark for search. How many times have people asked, "When can I have Google to search my hard disk?" That's a hard problem, as long as it's just your disk, on your isolated machine. But it's solvable once Google has lots and lots of structured data to work with, and can build algorithms to determine patterns in that data. Gmail is Google's brilliant solution to that problem: don't search the desktop, move the desktop application to a larger, searchable space where the metadata can be collected and made explicit, as it is on the web.

I don't know - this is what stops GMail looking like innovation to me. I agree with Jeremy Zawodny that this is incremental improvement. Google are ultimately loading your email into a database, as they've done with the Web. It's a centralized model, not an edge model. To me that's highly dissonant with the network OS vision. And it's just email - personally, email is a fraction of the information I need coherency for. We need search that ranges across protocol and application information space.

I believe anyone who can distribute search away from the centre to the edges will win big.

Where Google is showing huge innovation is in technology management - by the sounds of things the ratio of admins to servers over there is impressive. You really, really, want these guys running your data centres. This is much like pointing out that where Amazon innovated was not technology but logistics management combined with a new sales channel. And there's only so much excitement to be had in optimizing the data centre ;)

Tim mentions Chandler with regard to the desktop and sounds almost disappointed. But open source could be a driver in making data application independent because open source is where the momentum for change can be maintained. What needs to happen here as much as anything is to get the open source community (especially the Java community given the portable nature of Java), off its focus on server-side networking and looking at users and their needs. We don't need any more web frameworks. Projects like Chandler, and less directly Eclipse, are the start of that. Like using open source for middleware, this will only be advantageous for some prime movers - if everyone does it then the economics change drastically and companies whose model is traditional product and not services will start to feel the pain of lower and lower margins until they rethink what it is they do.

Further innovation I believe will come from open source. There's no incentive to do this in the commercial world, except to gain a temporary advantage over the competition or make some soothing marketing noises to consumers. To really do it, to really make the strategic decision that the application franchise is secondary to the user's information needs and execute on that vision, will require a ground up rethink, new technology, new business models, new partners, the lot. The industry is not going to do this of its own accord - data lock-in is a cash cow. If Microsoft ever moves against Google it may be in part because moving your data to a Google cluster from the desktop has implications for the Windows and Office franchise. But all that's really happened is a switch from one centralized model to another.

As for all that space. A GB of disk space is roughly a dollar (and who knows how cheap at the bulk Google will buy at). This makes even more sense given Google's management innovations - they could probably have gone to 10GB. 1GB, when you have it, isn't enough.

And then there's the structure of the data itself. If you want your data to travel into the future with you, look for RDF or Topic Map compatibility. Those formats are independent of the highly transient application formats and the less transient protocol formats. Yes, they're associated with all that Semantic Web handwringing about ontologies and the like, but in that event a transformation or a script will do and is better than agonizing about the perfect information model.

April 17, 2004


BBC Forum Conversation

Five papers on networked systems

Mark Nottingham has recommended 5 protocol design papers *. Excellent idea. Mark's looking for others. I don't know of five other papers remotely as good as that lot on protocol design. But here's five I really like, all touching on the design of protocol aware systems, with a slant on the effects of the design on implementation and administration:

  • Making Web Services that Work [pdf], Steve Loughran (weblog). Essential for anyone building a service that runs the risk of actually being used. Plenty of war stories, minimal handwaving.
  • The Protocol versus Document Points of View, Donald Eastlake III. A wider audience could lead to the end of a number of permathreads in the XML world.
  • Web Search for a Planet [pdf], Barroso, Dean and Hölzle. This is a recent paper but will be around for a long time to come, for its cluster design and especially for its remarkable cost-benefit analysis of server infrastructure.
  • Javaspaces Service Specification [pdf]. This is an unusual beast, in that it defines a protocol (Linda) in terms of an object oriented language (Java). Elegant.
  • On Distributed Communications, Paul Baran. This is to protocols and distributed systems as Vannevar Bush's "As We May Think" is to hypertext and the web. Celebrating its 40th Anniversary in 2004.

* Of the five, I hadn't read the WebDAV paper before, but they're all stand out reads. Mark mentions Marshall Rose's "On the Design of Application Protocols", which is a personal favourite - for anyone from a document, relational or OO background that has come to webservices or SOA, it's the best intro to the protocol oriented point of view (Rose's book on Beep is also very good).

April 16, 2004

RDF: I fought the markup

I spoke with a colleague of mine yesterday. He's working with XUL/Mozilla at the moment. His least favourite bit? The RDF/XML. Hates it. Since he knows I'm an RDF fan, he rightly wonders what could I see in it. So... here's a six year old markup language, that started with three syntactic forms all lumped together; that was an early testbed for how to use namespaces; whose opportunity for real development in the W3C was hamstrung by its charter (and yet, while equally constrained, the model was rewritten from the ground up). Even its own working group didn't use it.

I gave him a thirty minute run down on RDF (and inevitably, RSS) history, explaining as best I could how we got here and why some of us think RDF metadata has a lot to offer. I outlined some of the key thinkers behind it, early considerations as a syndication and device profiling format (my colleague, like a number of people in Propylon, really understands the issues around device/phone profiling); why it's a good thing not to just have property-value pairs, but to explicitly name the thing that is being associated with the property; that we can enable stuff like third party distributed metadata, have precise definitions for properties, types and classes; that we can use it to help unify disparate data sets; that we can create and merge rich data graphs by using URIs as identifiers; that we can simplify data management and interchange; that we can think about extensibility in data as well as modularity; that in a pinch, we can fire up wget and see if a URI has any documentation. That in the long run, since RDF has a formal model, we can inference and query over data sets just as we do with SQL, while unshackling the data from the database and maybe alleviating the extreme autism we see in relational data today. But for the syntax, I had no good answers or justification for sticking with it. The world is not overflowing with RDF/XML, and there is minimal (if any) infrastructure built on it. And as he astutely pointed out, the triples under the hood are meant to be preserved during transformation - indeed, that's the whole point.
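The graph-merging point in particular needs no RDF toolkit at all to demonstrate: treat triples as tuples, and merging two independently produced datasets is just set union, because the shared URI makes the keys line up (the URI and property names below are invented for illustration):

```python
# Two datasets produced by different parties, both naming the subject
# with the same URI. Merging RDF-style graphs is set union over
# triples; the shared identifier stitches the data together.
book = "http://example.org/book/1"

catalogue = {
    (book, "dc:title", "Some Book"),
    (book, "dc:creator", "A. Author"),
}
reviews = {
    (book, "rev:rating", "4"),
}

merged = catalogue | reviews  # the entire "merge" step
assert len(merged) == 3
assert (book, "rev:rating", "4") in merged
```

Try doing that with two arbitrary XML documents and you need a bespoke transform; with triples keyed by URIs the merge is structural.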

Today, I ran across a post from Leigh, from last year, on Dorothea's RDF syntax rant:

Her posting made me wonder whether this frustration is down to the basic elements of the RDF syntax, e.g. the element and attribute names, or it's inherent variability: that there are multiple ways to encode the same data using slightly different syntactic structures.

I'm guessing -- and I'm hoping Dorothea will step in to correct me here - that most of the frustration is because of the variability. That's certainly the cause of much griping from hackers keen to use plain old XML tools on RDF data. The variability defeats any attempt to use, e.g. XSLT, without an initial normalization step. - Leigh Dodds (RDF Syntax: Profiling and Styling)

Variability is only part of it. No-one I know can write it down without making mistakes, no-one I know can read it without getting confused. But people are expected to believe after coming into contact with RDF/XML, that RDF is really quite simple. And that the tools will save them.
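Leigh's variability point is easy to show. Here are two of the legal RDF/XML spellings of one and the same triple (the URI and title are invented for illustration):

```xml
<!-- Form 1: the property as an XML attribute -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/book/1"
                   dc:title="Some Book"/>
</rdf:RDF>

<!-- Form 2: the property as a child element -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/book/1">
    <dc:title>Some Book</dc:title>
  </rdf:Description>
</rdf:RDF>
```

Both parse to the identical triple, but an XSLT written against one spelling silently misses the other, and the full syntax allows still more variations (typed nodes, nested descriptions), so plain XML tooling can't assume a canonical shape.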

But this doesn't mean that we have to throw out the syntax entirely.

I think we should consider throwing it out. The model was thrown out without anyone batting an eyelid. The syntax has been around for over half a decade - it's not catching on and I remain convinced that nothing hurts RDF adoption more than RDF/XML. Perhaps most damning, it's completely failed the dogfood test - if the W3C won't use the markup they're specifying, why would they expect anyone else to?

Why people buy Powerbooks

I may be seen as a whiner here, but the point to note is that just as Linux makes another attack on the desktop, with Mandrake and Suse leading the way with great products at sensible prices, and even Sun making a good Linux-on-the-desk sales pitch, the corporate desktop is going mobile. Even many home systems are laptops now - the speed of a Pentium M and size of a display means it makes incredible sense. Yet if the OS has trouble coming to terms with mobile systems, it is not going to make headway against the existing platform, Windows, that being what I am going to return to tonight. - Steve Loughran


[beastie boys: sabotage]

iQgen 2.0

Stefan's company has released v2.0 of their iQgen MDA oriented product.

April 15, 2004


Jason Kottke thinks we should probably stop calling it syndication:

When more people start publishing content that doesn't fit the title/description/url format (recipes, movie reviews, photos, music playlists, etc.), "standard" formats will start to spring up (some have already) and the browsers will need to support them in some fashion. (This requires that the publishing tools support these new formats as well, which they eventually will. The whole ecosystem -- readers, publishing software, publishers, browsers -- will move along in fits and starts, just like it did with RSS.) - Jason Kottke

It would move along a lot better if we could settle on RDF as the baseline model for metadata. Atom and Rss2.0 by themselves aren't flexible enough to support domain specific content like this - eventually they'll end up with a value proposition like SOAP's and the interop discussion will have to move up one level to get the tools to be as useful as Jason, Jon and Lucas want them to be.

Which reminds me, Danny and I have an Atom charter proposal to write up re RDF.

ADSL: 28 Months Later

I now have an ADSL connection, roughly 28 months after I thought I would have. Well, it's more like 33 months later, as that's the length of time I've been back in Ireland, and there was a lot of talk of broadband being available in October 2001 when I moved (laugh if you want, I didn't find it funny). But it would have been 28 Months Later if I hadn't moved house in the new year and cancelled. Honest.

I downloaded Eclipse 3.0M8, which is about 85MB, in 25 minutes. Good.

Eircom supply you with a one-port Netopia Cayman router and dhcp server (which does a bunch of other stuff if you look past the quickguide). Formerly I dialed up using an SMC Barricade 4-port plus print server (an excellent product).

I haven't figured out how to get the Cayman and the Barricade to play nice yet - I think I'd like to split out natting/dhcp and routing; certainly I'll want a DMZ at some point. My laptops are currently connected via wireless by hanging an SMC WAP off the Cayman's port, but that means the servers are off the LAN. Presumably I can connect the Cayman into the SMC WAN port, and do some jiggery pokery with routes/PPPoE/dhcp. But no luck so far. There's not much info to be found online. If anyone knows how to hook these together, pointers much appreciated. Maybe it's time to go full wifi.

April 14, 2004

Counters as identifiers: a stupid evil plot

I came across some code once. It was an API wrapper around some DB calls. Some of the API calls had integer arguments. Turns out these were taking primary keys as arguments, some of which were fixed to constants in the code. My guess is this had something to do with the integrity being managed by app code (as repeated selects being joined in software) rather than the DB (as foreign key constraints or SQL based joins). I'm no DB expert, then or now, nor do I claim to fully understand the pros and cons of DB v app managed integrity. But I don't imagine that was a very good idea. What if the DB has to be reconstituted and you're using auto-incremented pks (which is what that system happened to do)? Why pass pks around the code when you can use a join?

I noticed a while back that Movable Type uses numbers in its entry URLs. I haven't looked to see if these are bound to DB pks, but they are incrementing. This also strikes me as not a good idea. What if I have to rebuild my blog and the numbers are rebound to different entries? Actually that's exactly what happened to me last year when the Berkeley DB backing my MT install got totalled. Links people had made to my blog were rebound to different entries. It was off by six or something. Oops.

I think Mark Pilgrim or someone wrote something clever once about indirecting MT entries using MT and Apache hacks, but I'm too lazy to look for it and naive enough to think I shouldn't have to.

I suppose this is the bit where folks tell me that using GUIDs or URIs as primary keys is a stupid evil plot that any idiot would know about. Not enterprise enough perhaps. Certainly inefficient.
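The alternative is cheap enough to sketch (the URI base and function name below are invented): mint a name that is independent of insertion order, so a rebuild can't rebind old links to new entries.

```python
# Sketch: identifiers that survive a rebuild. An auto-incremented
# counter names a row by its insertion order, so reloading the data
# in a different order rebinds old links; a UUID-based URI names the
# entry itself, whatever order it's loaded in.
import uuid


def new_entry_id(base: str = "http://example.org/entries/") -> str:
    """Mint a URI-shaped identifier for a new entry."""
    return base + uuid.uuid4().hex


ids = {new_entry_id() for _ in range(1000)}
assert len(ids) == 1000  # effectively collision-free
```

The database can still keep an integer pk internally for join performance; the point is that anything external - links, other systems, audit trails - should hold the stable name, not the counter.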

April 13, 2004

As We May Hack

I can walk into any meeting anywhere in the world with a piece of paper in hand, and I can be sure that people will be able to read it, mark it up, pass it around, and file it away. I can't say the same for electronic documents. I can't annotate a Web page or use the same filing system for both my email and my Word documents, at least not in a way that is guaranteed to be interoperable with applications on my own machine and on others. Why not? - A Manifesto for Collaborative Tools

Yawp after me - "RDF! RDF!". But seriously, the answer to "why not?" is not this:

The problem with usability is not a lack of good ideas; it's that most of these ideas never make it into real applications. There are many reasons for this, from organizational shortsightedness to the vagaries of the marketplace. As frustrating and as uncontrollable as these factors may be, the onus for changing the situation is on both the researchers who develop these ideas and the programmers who implement them.

The answer, if there is a single answer, is to be found in the marketplace, with the software vendors, the customers, and their fears and desires. The essential problem is that vendors do not want to offer collaboration at the risk of undercutting their wares through interoperation. Collaboration necessitates the free flow of data across applications - whereas most vendors would rather produce suites and kitchen-sink uber-apps to encapsulate as many uses of data as possible. Chris Ferris has summed this up perfectly:

Interoperability is an unnatural act for a vendor. If they (the customer) want/need interoperability, they need to demand it. They simply cannot assume that the vendors will deliver interoperable solutions out of some altruistic motivation. The vendors are primarily motivated by profit, not good will.

There's a class of articles that tend to look to assign blame to programmers for what's wrong with software. They appear in Computing or CACM from time to time (and are one of the facets of irrelevance that gave me cause not to renew my ACM membership). I find them ferociously, willfully, ignorant of how software actually is conceived, designed, marketed, built and sold. Blaming programmers is intellectually slothful. We are, and let's be clear about this, decades past the time the blame could be laid squarely at the programmers' feet.

A Manifesto for Collaborative Tools veered close to that, while never quite getting there - exhorting developers, with only a token gesture as to how decisions about software are made. Software is a complete commercial ecosystem that extends far beyond hacking code. Ironically, like its observation of the semantic web, this manifesto is unlikely to take hold because it does not address the real issue, which is the marketplace and not technique. This failure in analysis is all the more frustrating as I agree with the essential sentiment expressed (we need better tools, now). Plus the writing is wonderful.

Gamesdomain! what! were! they! thinking!?

Looks like Gamesdomain got folded into Yahoo!. The reviews seem to have simply vanished (or they're so well hidden I can't find them) - what's that, about eight years worth of content? Now you get three paragraphs of boilerplate. The screenshot clickthroughs are tiny. The site is rubbish. Looks like it's over to GameSpy then. Or just wait for blogs to fill the review void.


Seen on Slashdot, a Dom Joly quip:

[nokia tune=annoying] ring [/nokia]

Hello. HELLO
I'm writing on slashdot. SLASHDOT

Nah it's rubbish

Jini and JXTA: unask the Question

Radovan would have me unask the question why Jini and JXTA?

I think we need neither. Well, JXTA is more Java-independent... Well, Jini has a nice discovery protocol... and so what? Fortunately, JXTA is closer to web services, which is good. I like both of them as p2p concepts. But if I had to choose, I would choose Globus stuff.

I had a few answers to my question, which are worth summarizing:


I've looked at both a short while ago, and my understanding is that Jini is predicated on mobile code (i.e. tied to Java) while JXTA is a fancied-up XML message routing system. Based on this, they appear to be irreconcilable.


[...] build your services and inhouse applications using JINI. However, for an internet wide application, use JXTA. My take, JXTA seems to take a more pragmatic approach and therefore may be better in general, that may also be the reason its being adopted for N1 rather than JINI.


Sun is finally starting to realize that JINI and JXTA are both similar and complementary. There is rumored to be an open-source project to bridge the two, but it doesn't look very open to me. [nwhere.jini.org/


JINI came first, but was Java specific, and other P2P-like systems that were language-independent were getting a good bit of attention while JINI relatively languished in disinterest. JXTA was conceived in part to address the desire for a protocol-level approach rather than a language-specific approach. Also JINI was conceived or at least marketed to a significant extent as an embedded systems mechanism. JXTA was conceived as more of an Internet P2P mechanism. Of course these were not technical restrictions, but more part of their subtext.

So, all this helps. Carlos points out the internet/intranet divide, but I wonder why encourage the distinction? There seems to be a trade-off between the homogeneity of Java-oriented Jini and the (potential) ubiquity of protocol-oriented JXTA. Also, I find Jini to be somewhat better specified than JXTA and with a better programming model in JavaSpaces. But for the kinds of things I'm interested in, choosing Jini means choosing Java over protocols - that's a big ask. My experience is that things defined in terms of protocols trump things defined in terms of programming languages.

I'll say one thing about my online experiences around Jini/JXTA. There are too many software communities hovering around Java - java.sun.com, java.net, jxta.org, jcp.org, jini.org, developers.sun.com, who knows what in the midp/device space. It feels disparate. Sun could do worse than talk to O'Reilly about how to manage all this good stuff. O'Reilly's business spans many different communities, but for my money they make it all hang together online cohesively.

And yes, let's look at Globus/OGSA too.

April 12, 2004

A hazardous and technically unexplainable journey

Ray Ozzie invites us to think:

Imagine what a traditional PIM might look like if it were possible to build it in a modular fashion, with each module's underlying object schemas, store, and methods exposed as standards so that others could build upon them? Imagine building custom domain-specific client-side CRM/SFA solutions that might leverage these common standards. Imagine the deconstruction and refactoring of traditional desktop applications so that higher-level domain-specific applications and solutions could be built from components currently embedded in larger integrated packages. Imagine what "content management" might become in an era where collections of objects can be created, retrieved, cached, replicated, published in conjunction with service-oriented systems, yet one in which a variety of content creation and manipulation applications can effectively leverage common storage and synchronization mechanisms. - Ray Ozzie (640KB ought to be enough for anyone)

I'm reading this, yawping to myself "RDF! RDF!". I can imagine Danny, Leigh and Edd doing the same thing.

Good to see someone thinking beyond the next couple of quarters.

Leave integration to the programmers

Tim Bray on hooking MS and Sun kit together:

It's best illustrated by example; let's suppose that some big auto manufacturer has a bunch of J2EE infrastructure doing purchasing scheduling, and has a BizTalk deployment doing messaging to foreign subsidiaries; and suppose they want to make the two of them start talking to each other.

So the smart thing to do is for the car-company CIO to get in touch with his account managers at Microsoft and Sun and tell them to send in some really smart senior people to sit down and work out how to do it. Except for, until a week ago it was really hard for us to work together on this kind of a problem because both sides would want to check with their lawyers before saying anything about anything, and the lawyers (as is their job) would point out the risks, and as you can well imagine the customer would be severely unimpressed.

If the nature of the environment is indeed talk to legal before answering your customers... well that's got to be frustrating.

I don't know tho' - sometimes the thing to do is to think beyond the vendors and find some smart programmers instead. Speaking from experience in the scenario Tim offers - we've done exactly this at Propylon for our customers - integrating BizTalk and J2EE doesn't require the level of vendor coordination and focus that Tim is talking about.

April 11, 2004

It's not called Middleware-Services for a reason

Mark is upbeat about this entry from Jim Webber. So am I. Yet, I confess it's been frustrating over the last few years listening to middleware/rpc types dictating how to build web-services. I have always had the impression that these folks secretly considered people like myself to be childish web-monkeys incapable of building true enterprise systems (which no-one denies is hard work). Anyway, the increased convergence between web and middleware factions can only be a good thing.

The fact is that regular Web Services have a far more uniform interface than the REST style. I've laid down the challenge before, but you can't get much more uniform than one (imaginary) verb, can you? - Jim Webber

So, having figured out that not constraining an architecture's verb set doesn't work, time to swing to the polar opposite - one verb. A curious approach ;)

We're past the RPC stage of Web Services, and we've been past it for a while now. Message to the REST community - stop telling us that we're playing catch up and see what's really been happening on our side of the fence. - Jim Webber

Glad to hear it, but message to the ex-RPC community - stop telling us how you think you can build internet scale systems this time around, and go and look at the ones we have. C'mon guys, it's not called middleware-services for a reason!

Mark responds thusly:

That's really good to hear you say that Web services have uniform semantics. But it's impossible for them to be more uniform than REST, because REST prescribes uniform interface semantics by definition. - Mark Baker

I have to admit I don't get this. What does it mean? Mark's thinking in two areas, uniformity by definition and self-description, has confused me in the past. My notion of self-description is based on mathematical logic (yawn), but is precise in that respect. I don't find HTTP any more self-describing than XML, but I wouldn't find RDF self-describing either, because there are real limits to the descriptive power of formal languages. Maybe Mark is talking about the amount of description/state (inline information) carried by a REST-style message rather than the descriptive power of the message language. Which would make me something of a pedant. Guess we'll have to meet up sometime.

The HTTP verbs are too numerous. - Jim Webber

I would say the HTTP verbs are sufficient :) but if there's an argument that the HTTP verb set is either excessive or inadequate, I'm interested.

Jim made an interesting observation:

In terms of addressing the asynchronous behaviours, the speakers (Bill Donoghoe co-presented with Hao) were pretty honest (with a little prodding from some "pommie" bloke in the audience) that full asynchronous behaviour isn't really an option for most developers today. They discussed a simple polling pattern and agreed that sometimes it just makes sense to do synchronous communication rather than polling. Although the implication that synchronous consumers block while they're waiting for responses from services is being economical with the facts (hint: threads). - Jim Webber

Very true, this is economical with the truth, but even backed by threads, the sync model is never going to scale as well as an event/queue/async based model can. There's scope for making better request-response models atop an asynchronous infrastructure. We've looked at this in Propylon at the application and messaging levels, especially at how to give application developers a good programming model for highly asynchronous systems (put it this way - there are reasons why NIO-backed servlet engines aren't prevalent).
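One shape such a request-response model atop an asynchronous infrastructure can take is correlation identifiers over queues. This is a minimal sketch in Python, with all names invented for the example and not describing any particular product: the caller tags each request with an id and parks on a private reply queue, while the server side services requests from its own thread rather than dedicating a thread per caller.

```python
import queue
import threading
import uuid

requests = queue.Queue()

def server():
    # A single worker servicing all callers; a real system would use a pool
    # or an event loop rather than one thread per connected client.
    while True:
        corr_id, payload, reply_q = requests.get()
        reply_q.put((corr_id, payload.upper()))  # stand-in for real work

def call(payload, timeout=5.0):
    # Tag the request with a correlation id and wait on a private reply queue.
    corr_id = str(uuid.uuid4())
    reply_q = queue.Queue()
    requests.put((corr_id, payload, reply_q))
    got_id, result = reply_q.get(timeout=timeout)
    assert got_id == corr_id  # match the reply back to this request
    return result

threading.Thread(target=server, daemon=True).start()
print(call("hello"))  # HELLO
```

The point of the correlation id is that the synchronous-looking `call` is just a convenience veneer; nothing stops a caller holding several outstanding requests at once.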

Under the hood, much of it comes right down to how the servers and the wiring between the tiers themselves are architected. In terms of the scalability needs for machine to machine comms, there's only so much you can do on top of a thread-per-request model before the servers melt down (and "melt down" merely implies more threads than CPUs). An ex-colleague of mine, Miles Sabin, did a lot of work in the area of scalable web servers, and for a taste of what can be done, take a look at Matt Welsh's SEDA (I think Matt and Miles designed the first two Java NIO backed web servers). Dan Kegel has a good summary of the technical issues and options in his C10K problem page. I feel this is becoming a much more critical subject for web services today than it was during the dot-com era.

Finally, having ordered it, I'm looking forward to reading Jim's book.

J2EE and the JVM: Ted Neward calls it

Frankly, in all 100% brutal honesty, I think the JVM is due for a major refit, and while I think the J2EE stack is more architecturally sound than the current crop of technologies from the .NET side of the world, EJB definitely needs to be shunted off to a less-central role in the stack and the Connector Architecture (JCA), for one, needs to be made more visible. - Ted Neward*

* Ted and I had a brief email exchange about this entry, about which I'll say one thing. He's 110% focused on giving customers the best solutions, regardless of source. Cool.

[50 cent: in the club]

HTTP is not just a transport

If you think HTTP is just a transport, you could be missing out on a lot of value. Saying HTTP is just a transport is like saying a database is just a filesystem.

A surprise .profile

I think it's going to take until 2010 until we really see the simplicity of Lisp come back to "commercial" programming systems. I'm seeing some of it now both inside and outside the big house, but there are a lot of people who need to move before we get there. - Don Box

Hang on a minute. I thought to myself, "Huh?". This is the guy that wrote that book and co-authored that spec. Ah - emacs. Cool.

[streets: fit but you know it]

April 10, 2004

Setup, Teardown

Bruce Eckel is getting a lot of criticism about embedding tests in comments.

I'm curious about this for two reasons:

  1. How Bruce responds to this criticism, some of which has been harsh. He has taught, in one form or another, thousands of C++ and Java programmers over the years, and is highly regarded as a result.
  2. Whether it actually works. There's a vaguely analogous discussion going on in the Groovy community, arguing that intermixing languages in code via pluggable parsers would be a good, "enabling", thing. *

Both of these approaches look like things I want to steer clear of. JUnit in comments comes across as unreadable gorp one hundred classes later. Inlined languages come across as a display of parsing and programming skill without stopping to think whether the end result is indeed enabling - there's a difference between enabling people and encouraging them to do stupid things (perhaps best understood by those who teach or have kids).

But then again, what do I know? I haven't actually tried them. Maybe they work - isn't that the real test?

[bob marley: easy shanking ]

Irish: more RESTful, less semantic?

Perhaps. HTTP, the most RESTful of protocols, follows an Irish sentence structure. The Irish language is organized around a Verb Subject Object (VSO) structure. This is different, for example, from English or Swahili, which have a Subject Verb Object (SVO) organization. We should note that RDF is organized like English, as are many object oriented languages*. Like RESTful solutions, the VSO structure is uncommon. Unlike RDF solutions, but like object solutions, SVO is quite common. Yet the Subject Object Verb (SOV) structure remains the most frequent in natural language, slightly ahead of SVO.

Of course, this has significant implications not just for our software solutions, but for our computer communications in general!

* Here's a thought: if we converted HTTP to an SVO form by putting the request URI in front of the method, how much more work would it take to convert HTTP requests into RDF?
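For what that thought experiment might look like, here's a toy Python sketch - the predicate vocabulary and client URI are invented for illustration: reorder the request line subject-first and you're most of the way to a triple.

```python
def to_svo(request_line,
           client="http://example.org/clients/1",  # hypothetical subject
           host="example.org"):
    # "GET /index.html HTTP/1.1" is verb-first (VSO-ish). Reordering it
    # subject-first gives something triple-shaped: client -method-> resource.
    method, path, _version = request_line.split()
    predicate = "http://example.org/http#" + method.lower()  # invented vocabulary
    resource = "http://" + host + path
    return (client, predicate, resource)

print(to_svo("GET /index.html HTTP/1.1"))
```

The work left over - which is the footnote's question - is agreeing on real URIs for the verbs and the requesting party.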

[bob marley: jamming]

JXTA and Jini

I've been looking at these two technologies again recently. Maybe it's just me, but does anyone understand why we need both JXTA and Jini?

From the Jini faq:

Q. What other technologies compete with the Jini architecture?
A. There are none.

Are they sure about that? The JXTA FAQ has a number of entries asking how it compares to other technologies. But not Jini.

April 07, 2004

History should be written by the winners

RSS history
RSS history
RSS history
RSS history
RSS history
RSS history

April 05, 2004

Declare the pennies on your eyes

Spotted on Slashdot: RDF killer app

April 03, 2004

Better is better: improving productivity through programming languages

[a long entry, this, on why using better languages makes all kinds of technical and economic sense. Much talk herein of Lisp, Java, software economics, risk, processes, outsourcing, drawing as programming, luddites, and perpetual motion too. It ends happily.]

Always start with a fallacy: analysis as argument

In software, we're living in an interesting time, when the outlook is more uncertain than usual and the industry is somewhat humbled following the dot-com bubble and a tough recession. I'm speculating that major change is coming to mainstream programming, and the change will be in the kinds of programming languages we use. I think this change will be more significant in impact than Object Orientation, and will result in evidently better productivity and software. The drivers are an increasing need for expressive power and a requirement to continuously rework and adapt the code. For this, we want a better language, whether we know it or not.

If I had to pick one thing as most important to programming today it's the ability to change your mind, to adapt the code to new circumstances. Existing popular languages and software processes do not support these meta-requirements very well, if at all. Everything indicates that we need to be able to change code quickly to keep up with our customers and express their and our ideas with as few subsidies as possible paid to the compiler.

Oddly enough, we're trying to do this without using languages we already know to be a good fit. I speculate this approach has led to a continuous cycle of reinvention, and the expression of language design as creative amnesia.

Better is better

Sometimes what you need is a better saw. Movements like test-first, agile and extreme programming are all very well, but they're for the most part focused on controlling existing pathologies in how we manage programming today.

Some people are going to object to my calling some languages better than others. Yes, it's a poor basis for argument. We all do it though. Bjarne Stroustrup is still telling us how good C++ is. There is a spectacular community advocacy around Java. VB and Perl programmers love to talk about getting things done quickly. And so on. Take your pick, but what I do believe is that there's not an iota of merit to the arguments that language choice doesn't matter. But better to me means better at change.

How I learned to program

I'd suggest, only half jokingly, that Java and Python have come up with something that we'd love to have: the adulation of the masses. Some elitist grouch will no doubt rejoin that he doesn't want Lisp to be popular, but sentiment here says otherwise. Anyway, I don't know how to bottle the certain je ne sais quoi that Java and Python have, or had. - Cameron MacKinnon

I love reading c.l.lisp. When I started out in software it was one of those places where I hung out and augmented my learning - along with c.l.object, arsdigita, xml-dev, extremeprogramming, the wiki (especially the wiki).

I came to programming despite myself. In art college, training as an industrial designer, I had to be taught CS101 via Pascal, and it was such a horrible, unfun experience that it confirmed my then-held suspicion that with computers and technology came misery. After that I didn't look at a program for nearly seven years. But I naturally fell back into it - via artificial intelligence, which has always been my favourite thing to do with a computer - while many people suspect that AI is a useless endeavour, it's both fun and a challenge trying to turn Pinocchio into a Real Boy. Though like any fable, you wouldn't want to take it too literally. With an AI degree you learn computing differently to many others, as the emphasis is very different. The first language we learned was Prolog. Then C. I learned some Lisp on the side, even though it was That American AI Language.

Why am I telling you all this? Well I speculate that my views on what makes for a good programming language do not accord with the industry I work in, and I think that has had something to do with my education which was somewhat impractical in terms of the marketplace.

Symptomatic syntax

I have never been happy using Java to generate web pages. Or later on, with scriptlets. Or C++ for neural networks and machine learning. I can't read more than a few screenfuls of Perl or XSLT without getting motion sickness. I constantly bemoan the fact that you can't pass methods as arguments in Java. I keep screwing up Groovy closures with that inane bracket line convention. Make a note of that last one, it's important if we're going to talk about a better language for the job.
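That complaint about passing methods as arguments is easy to make concrete. A trivial sketch in Python, where functions are first-class values - the same thing in the Java of today forces an interface plus an anonymous class:

```python
def apply_twice(fn, value):
    # fn is just a value; no interface or anonymous-class ceremony needed
    return fn(fn(value))

print(apply_twice(lambda x: x * 2, 3))  # 12
```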

Syntax matters in a programming language. Groovy is inspired by features in other languages such as Dylan (and I suspect Python and Ruby), but because it also aspires to keep Java programmers comfortable it inherits some Java syntax. It has less syntax than Java, but it still has a lot of syntax. Syntax seems helpful at first, but it gets in the way when it comes to doing the more interesting and flexible things you might want to do with a program - such as closures or function factories. So to support closures, the Groovy parser needs you to help it by using whitespace to give it clues.

One thing you'll notice about some languages is that they have less syntax. I think this quality is one dimension which indicates how good a language is. Lisp has almost no supporting syntax - it's like working with the parse tree directly - which is why one person has called it a programmable programming language. My favourite language Prolog has more syntax than Lisp but it looks pretty barren compared to Java. As does ML.

If you don't believe that your programming language has a lot of syntax,

  1. Write the lexer/parser for it.

  2. Write the lexer/parser for Lisp, or ML.

  3. Compare.

The less syntax you have the more uniform the language is and the less special case logic you need to manipulate or extend it - for example you can alter it yourself as you program instead of requiring a supporting language/compiler/paradigm.
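To give a feel for how little machinery "almost no syntax" needs, here is a sketch in Python of a complete reader for a Lisp-like surface syntax. Atoms, integers and parentheses are essentially all there is to lex and parse - compare this with the grammar you'd need for Java:

```python
def tokenize(src):
    # The entire lexer: pad the brackets and split on whitespace.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    # The entire parser: a list per "(", an atom otherwise.
    token = tokens.pop(0)
    if token == "(":
        form = []
        while tokens[0] != ")":
            form.append(read(tokens))
        tokens.pop(0)  # discard the ")"
        return form
    try:
        return int(token)
    except ValueError:
        return token  # a symbol

def parse(src):
    return read(tokenize(src))

print(parse("(+ 1 (* 2 3))"))  # ['+', 1, ['*', 2, 3]]
```

What comes back is the parse tree itself, as ordinary lists - which is the sense in which working in Lisp is like working with the parse tree directly.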

I suspect that if your automagical "find all callers" refactoring Java IDE was sentient, Java would not look like Java in its mind's eye. It would look like a Lisp syntax tree with some extra bits. Imagine having that sort of introspective power as part of the language itself.

Along with syntax are the evaluation rules for a language. In Lisp and Scheme these are quite brief yet extraordinarily flexible. The processing models for most popular languages are much longer while being much less flexible. Partially this is a reflection of the complexity of the syntax, partially the syntax reflects the complexity or incoherence of the evaluation rules, but mainly it's a reflection of the internal consistency and formal model of the language (if it has one).

Once they go down this route, languages that want to extend themselves have to keep piling on new special case syntax to work around the existing extra syntax, to the point where they go off the deep end and the signal-to-noise ratio is just too low. Consider the progression of Perl through versions 4, 5 and 6. Or witness the languages piled on top of JSP. Java has done a good job of managing syntax creep over time. C++ has so much syntax and semantics it took years to get the compilers right. So you need to consider both syntax and semantics together when considering a programming language.

One man's paradigm is another man's practice

This is one reason why Lisp and functional programming folks are a bit cool on stuff like Object and Aspect orientation. They can extend to Aspects or Objects using the language constructs directly. To most of us working in popular languages these are paradigm shifting ways of thinking about programming. But we can't use Java directly to support something like aspects or generics. There are whole communities working on language and compiler extensions, open source libraries, as well as that all important vendor support - J2EE app servers are being re-engineered just to support aspects. To a Lisp person these are just constructs expressed as macros (which are written in Lisp itself), something you do on a rainy weekend. What's the fuss about?
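As an illustration of the point - in Python rather than Lisp macros, so only the spirit carries over, and all names here are invented for the example - "advice" wrapped around existing functions, the heart of aspect orientation, is an ordinary library construct in any language with first-class functions:

```python
import functools

calls = []  # records advice firings

def with_logging(fn):
    # Wrap "before" and "after" advice around an existing function,
    # without touching its source - a poor man's aspect.
    @functools.wraps(fn)
    def advised(*args, **kwargs):
        calls.append(("before", fn.__name__, args))
        result = fn(*args, **kwargs)
        calls.append(("after", fn.__name__, result))
        return result
    return advised

@with_logging
def add(a, b):
    return a + b

print(add(2, 3))  # 5
print(calls)      # [('before', 'add', (2, 3)), ('after', 'add', 5)]
```

No compiler extension, no container re-engineering - just a function that takes and returns functions. Lisp macros go further again, since they can rewrite the code itself, not just wrap it.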

Of course one of the people driving AOP in Java is Gregor Kiczales, who is an ex-Lisp hacker. And before that so was Guy Steele, who has contributed a lot to Java's programming model. Richard Gabriel, yet another Lisp luminary, works for Sun. So you have to wonder.

It turns out that people have done a lot of work on making languages powerful, evolvable and flexible. In fact, all of the things we are evidently craving today. They tend to end up aiming toward Lisp. It's something of a running joke in the Lisp community that all language efforts are doomed to reinvent Lisp. Lisp, by the way, is not my favourite language, but in my very limited objectivity, it is by design as good a language as any available to a professional application programmer, and the best place to start if you are intending to design a new language. I'm wary of universals or absolutes, but Lisp seems to represent a pinnacle of achievement in programming languages that is as yet unmatched.

The tensile strength of language

One assumption made with Java/J2EE was to conflate the language you'd use to build the managed runtimes with the language you'd use to build the managed applications - that assumption is only sound if the language is sufficiently flexible. Java is a good language to build an application container and runtime. I will happily build message queuing software or a web server with it. But it is a weak language for application logic. Too often, expressing yourself in Java is tedious. You cannot easily change the code that exists already. We rely overly on frameworks to provide the flexibility the language does not give us. This is not a sustainable approach to building applications - over time the cost to change the system and the risk of breakage through change grows. The application is gradually strangled in the grip of its own logic. It comes to be replaced, ported to a new language, or left to die. This is the way with any system that cannot be easily adapted.

Clearly lots of people agree with me, since the main innovation in the middleware/enterprise space since the adoption of VMs and managed runtimes has been a steady stream of languages whose interpreters are written in Java and an attempt to shift data, contracts and configurations out of software objects into XML documents. Once upon a time that was a controversial thing to say - now it's just obvious that there is plenty of work we do not want to use Java for. JSP. Velocity. JSTL. EL. Ant. Groovy. Jython. There are multiple Lisp like languages available for the JVM, but they don't get much use commercially, if at all.

I pick on Java as an example but what I'm saying is equally applicable to C++, COBOL and VB. And C# will possibly go through a phase like Java's current one.


If it's so great, why aren't we all using Lisp? Lisp programmers seem unable to figure this out. After twenty years of introspection, the best anyone has come up with is Richard Gabriel's infamous "Worse is Better" maxim. There are some reasons we have resisted Lisp-or-something-like-it adoption. It's complicated, and there is no single overarching cause, but we can split it into two areas. First, issues with Lisp itself. Second, a lack of understanding of why popularity is important in the software business.

To understand why we don't use Lisp, but keep reinventing parts of it every few years, I think you have to have run across a few Lisp people and used a few other languages to put food on the table.

How to love Lisp

Lisp, in case you don't know, is one of the oldest programming languages, coming up to fifty years by the end of the decade. It's positively antique compared to C++, Perl or Java. But it has evolved with the times, because at its core, it's evolvable. It's believed by some that popular programming languages are evolving toward Lisp, just not fast enough. I believe the meta-requirements of change and adaptation I started with are accelerating this trend.

[I'd love to be able to compare Lisp to a shark, while comparing popular languages to primates, that would be very quotable - but of course the shark hasn't evolved for millions of years, and there are no signs of humans and chimps growing fins and extra cartilage.]

The non-syntax is distinctly weird initially, especially the ordering of arguments. The term I would use for how this feels is 'disorienting'. If you've been trained to write x = 1+2, then writing (setf x (+ 1 2)) is going to take some getting used to. The former version is called infix and the latter is called prefix. Prefix seems to suck, but it is one of the best things about Lisp since it allows one to manipulate expressions much more easily than the popular infix notation does, while eliminating redundancy - adding three numbers is simply (+ 1 2 3). Most popular languages use a mix of both - infix for expressions and prefix for functions. Not everyone likes all those brackets, but with Lisp you use indentation, not brackets, to lay out code - as you do with most languages, whether you care to admit it or not.
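A toy prefix evaluator, sketched in Python with nested lists standing in for s-expressions, shows how naturally variable-arity operators fall out of prefix notation - this is an illustration of the idea, not how any real Lisp implements it:

```python
import operator

OPS = {"+": operator.add, "*": operator.mul}

def evaluate(expr):
    # An atom evaluates to itself.
    if isinstance(expr, int):
        return expr
    # A form is (operator arg...): no repeated "+" needed for extra args.
    op, *args = expr
    values = [evaluate(a) for a in args]
    result = values[0]
    for v in values[1:]:
        result = OPS[op](result, v)
    return result

print(evaluate(["+", 1, 2, 3]))         # 6
print(evaluate(["+", ["*", 2, 3], 4]))  # 10
```

Because every form has the same shape, the evaluator never needs precedence rules or associativity tables - which is the redundancy prefix notation eliminates.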

How to hate Lisp

We're not complete idiots though, in ignoring Lisp. Two of the main reasons we don't use Lisp are the Lisp community themselves and the lack of portable libraries that are relevant to modern programming work.

It's corny to say the worst thing about Lisp is a Lisp programmer. But there's some truth in it. Work long enough in this industry and you'll run into a Lisp bigot. Every language has bigots, but Lisp ones are the worst. They're annoying, not just because they have a point (they do), and not just because they are better students of programming history (they are), but because of the way they make a point. These people, however right they may be, are not doing themselves or you any favours with their arrogance. Not at a time when no-one knows where we're going or what the software landscape will look like in a year. With big change comes big opportunity. Lisp missed the boat once already, in large part through misplaced arrogance; missing it again through more misplaced arrogance would be careless.

The issue of good support libraries is one of the big lessons learned by Java, VB and C#. A programming language is one thing, but libraries are another. I think Lisp never traditionally emphasized libraries because the language is so flexible you can build one on your own, but then you have the twin nightmares of integration and reuse lying in wait for you. "On your own" is not how modern systems are built, especially distributed ones. Indeed the lack of interoperability and repeated effort was one of the drivers to standardize Lisp and its libraries, now called ANSI Common Lisp. Unfortunately it standardized functions a la the C++ STL - so Lisp looks like a language with a kitchen sink of a java.util.* package, but domain packages a la java.sql.* you have to hack together yourself. And javax.swing.* - well, there is none. Which is so very Nineteen Eighties. Thus, while the book ANSI Common Lisp does a great job telling you how to generate HTML documents, it doesn't tell you how to serve them up, or read them from disk, or render them.


Why wish for popularity in a programming language at all? It depends on what you're doing - sometimes you might not want to be working with a popular language, just a better one that gives you an edge. If you're in the city courier business and all your competition are using the postal service while you use crazy people on bicycles, that's an advantage right there. You might not let on you're using crazy people on bicycles lest the competition get wind of it and imitate you. In the area I work in today, middleware and business systems integration, areas where Java, C++ and .NET dominate, working in an unpopular but better language is not the advantage it should be. This is because we have something more like a game than an industry. Most paid programmers are playing something like it.

Why are supposedly better languages unpopular? Why are we using less suitable ones? Paul Graham, the Lisp luminary, has laid the blame squarely at the feet of mediocre management and ignorant developers. It's a compelling idea, seeking out an idiot to blame, but it's much too simplistic a reflection of how our industry actually works to be true or useful. Languages are of course popular because they are popular - it's a virtuous circle. It's perfectly sensible for grads and developers to gravitate to a language they can get work in, and popular languages are, by definition, the ones you can get work in. But to understand why popular languages matter, we have to stop talking about them and talk a bit about the economic system most developers are working under.

The code game

In the game of building business systems, we have demonized programming. What has mattered is not so much the technical ability and power the language gives a programmer, but the purchasing ability the language gives the supplier and customer over the whole project and lifespan of the system. The reality is that Java, C# and VB are well supported, well funded languages. It's easy to find material and training to get skilled up on them. By extension, it's easy to find a Java or .NET or VB programmer. And it doesn't matter so much that they are good, but that they are easily found and easily replaced; that there is a pool of people who can work the language and, critically, the associated platforms the language works under. In paid work, it's often more important to understand J2EE than Java. Therefore at a business level we can talk about managing risk with popular languages, even when at the technology level it makes no sense to use an inferior language. It's like using wood instead of steel to build a railway bridge, because the world is shy on blacksmiths but teeming with carpenters. Fitness for purpose is often a secondary concern.

The idea that a system could be built in one language in a fraction of the time and cost it took to build in another, more popular language doesn't matter as much as the fact that the world is awash in people who can help build and support a system built with a popular language. Over the lifetime of the system, the ad-populum approach is perceived to be less expensive and less risky than the alternative. And to paraphrase the old saying, no-one ever got fired for buying what's popular.

Longer than you can stay solvent

The economics of building with inferior languages are self serving. They require larger and larger people pools, which require increasingly popular languages. They also encourage high-ceremony processes that attempt to coordinate large teams building large systems. We're building large systems because we need to express as many of our requirements upfront as we can. We express our requirements upfront because we can't easily change the system later on. Of course these processes don't do that to our satisfaction, but what they do allow is to isolate almost everything that is not programming into repeatable, well-understood phases of a process. All the messy parts are left over to the phase called "build" or "construction" or "integration". The result is that we have software processes that make us feel less exposed to the risk of programming with popular languages than we actually are. I speculate that we are in fact over-exposed as a result of demonizing programming and making it a process shibboleth, but that's not what it feels like.

We don't like to talk much about programmer meat-markets and bodyshopping - we'd rather talk about skills shortages and democratizing programming. Outsourcing is about as close as we'll get to acknowledging that the business is a harsh mistress. But outsourcing is a tactical response, as we'll see soon.

On being smart

People who advocate better languages, I suspect, do not always appreciate the consequences of this macro-economic model of the software business. And I think there is also a more direct, interpersonal, problem here. Programming something even half decent requires considerable intellectual ability. That's not all there is to it, but if you're not smart, it will be much harder for you to be a decent programmer. We're mostly in denial about this and would like to believe that programming can be done by anyone or can be made somehow unnecessary. It's the opposite view to the one we take of managerial or executive ability, where we pay through the nose for talent. Yet it's been known since we started building software that some programmers are much better than others. Maybe all disciplines are like this, but it is very obvious in programming. Naturally this intelligence makes people who don't have it uncomfortable, particularly if that person is supposed to lead or organize very smart people to do something that they don't fully understand, even though that person is probably better paid, more socially equipped and has nothing to worry about.

As with meat-markets, this is not something we like to talk about a lot. It's not egalitarian, and intelligence is something we're sensitive about culturally. But consider that it's not controversial to say being a good musician or athlete requires a level of ability, some of which is innate. Or that our educational systems are based around being ranked smarter than the next person anyway. So we should get over ourselves a bit and accept the fact that good software requires more than stringent process; it requires well-above average intelligence to create it. One problem with acknowledging this is that it does not fit with the aforementioned economic doctrine. There, we'd like to democratize programming and so commoditize that ability. But there are simply not enough above average intelligence people to go around - or more accurately, there are not enough above average people to get things done using popular but inferior languages - so even though it seems to be hard for many people to get work, the providers and buyers of software are prone to talk about rising wage costs and skills shortages. And given that many organizations consider software to be a capital cost rather than an investment, it's irritating to have to compete for developer talent. In truth, under this doctrine, there will tend to be a skills shortage more often than not.


Historically this march to commoditization is nothing new. When the commercial world depended on the guild halls, entire technologies were invented to help work around their members - most famously in the period of the industrial revolution. In the last century, industries and governments did their utmost to work around, and then break, the trade unions. Some people believe we're seeing the same thing happen to programming. As has been said to me from time to time - surely you don't want to be a programmer all your life? But the analogy doesn't hold up so well when you consider that programming is not like any activity ever encapsulated by a guild or a trade union. You might as well be automating mathematics, or law.

Despite this, the companies that seem to succeed in pure play software are the ones that have organized themselves around finding very smart people and servicing them, as they would their managers, so they have everything they need to do their jobs. Microsoft, Google and the pre-merger HP are good examples of this culture. The service and solutions organizations have tended to prefer an emphasis on repeatable process and outward professionalism. There are exceptions in both cases.

Outsourcing your head into the sand

So we come to the nub of the problem. Unpopular languages do not fit the business and risk models that most developers work under. But perhaps there's an alternative. Adjust the model to support programmer productivity.

I think at some point this will have to happen - the current mass-customization economics for software do not do enough to support business' unslakable thirst to change, integrate and adapt software. There has to be a better way to negotiate than change control. I described the current model earlier as self-serving, but it is so by way of being something of a bubble.

Outsourcing development is a tactical option, one that offsets the inevitable sea-change for a few more years perhaps. It does not itself change the underlying model or address problems in how we build systems. Those in the East will run into the same problems as those in the West have. We might understand outsourcing as a form of creative accounting. Remember also that within the current model, an enormous supporting services infrastructure has grown up around weak languages used by masses of developers; from vendor platforms to service integrators to tool and IDE builders to recruitment agencies. In the same way open source is eating away at the margins dictated by software vendors, the pressing need for adaptation will put pressure on the margins we can justify as suppliers of solutions that are not adaptable enough.

Base metal, Perpetual motion

Perhaps the most widely cited strategic answer to reducing programming costs is not to improve existing languages or even to make unpopular, potentially high-productivity languages popular. It's to draw programs instead of write them. That way maybe anyone can program and we can get rid of those unpredictable programmers and awkward phases in the software process. I and many others find this strategy to be flawed as the next step in programming productivity - we've seen CASE come and go and come and go, and UML remain on the whiteboard for anything that remotely looks like a working system. Today it's the turn of XML-based business process modelling languages and, from the people that brought you the UML, the MDA. But listen, if you can't or won't understand how to program in Java, you will find no solace in drawing pictures for the benefit of a business process engine. Wanting this to be so is the same sort of irrationality as trying to turn base metal into gold, or create a perpetual motion machine. It's a belief that is largely supported through the ignorance resulting from demonizing programming. Consider that years ago, people said that 4GLs would allow business people to work with their data without the need for programming. The reality is that the single widely used 4GL, SQL, has generated an enormous market for software and services if not an entire sector - which seems to be the exact opposite of no programming.

No crock at the end of the rainbow

What are our options? Honestly I don't see us rushing to embrace Lisp this decade. I think the candidates today are clearly Ruby, Python, and perhaps on Java, Groovy. All of these have learned enough of the lessons of Lisp to represent a big leap forward in productivity. All three are capable of running on the JVM or the CLR - we're not talking about ripping out billions of euros worth of infrastructure.
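To make the Lisp inheritance concrete, here's a small sketch in Python (the names and numbers are mine, purely illustrative) of the sort of thing all three languages make cheap: functions as first-class values and closures, so behaviour can be assembled from data at runtime instead of through boilerplate class hierarchies.

```python
def make_discounter(rate):
    """Return a closure that applies a fixed discount rate to a price."""
    def apply(price):
        return round(price * (1 - rate), 2)
    return apply

# Functions are values: pick a pricing strategy by key, at runtime.
strategies = {
    "summer": make_discounter(0.10),
    "clearance": make_discounter(0.50),
}

prices = [100.0, 19.99, 250.0]
discounted = [strategies["summer"](p) for p in prices]
print(discounted)  # [90.0, 17.99, 225.0]
```

The equivalent in Java circa 2004 means an interface, an anonymous inner class per strategy, and a fair amount of ceremony - which is roughly the productivity gap the argument here is about.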

The signs are positive. The Java community are considering adding Groovy to their toolkit and Ruby is a regular topic of conversation. Sun have expressed interest in scripting on the JVM. Python usage in the enterprise is growing - and Jim Hugunin has recently demonstrated that Python can run well on .NET. Enterprise types don't snigger so much at scripting languages anymore, not even at Perl and PHP. More and more, the debt we owe to the Lisp community is recognized.

Where things will be tricky is in software process. We will need alternative processes to support the kind of work these languages allow and the changes customers demand. A lot of what we do now is not going to be applicable. This is why I say the business models will need to be adjusted. Software processes are even more a reflection of commercial practice than of engineering.

To bootstrap a language into the mainstream requires marketing and selling it - and that is very expensive. You could build a community around it, as Python and Ruby have done, but that will only take you so far - about as far as Python and Ruby are. What's really needed is for the industry to consider the economic reality that what we're giving customers isn't exactly what they want, or perhaps to watch a new breed of service-oriented entrepreneurs fill the demand.

I'm not speculating that we will see another downturn. Quite the opposite. I think the move to better languages, and letting developers use the best tools for the job will herald an economic boom, one that is sustainable and wealth-creating because it represents genuine increases in productivity. It will also represent an offset to sunk costs in software expenditure by resulting in more flexible systems.

April 02, 2004

Et late fines custode tueri


APIs give the illusion of the ability to interoperate with other systems. The reality is that an API will lock you into a particular vendor. APIs are used as competitive weapons all over the map.

You can read more goodness here. I hope the day never comes when we say the same about standards!

Cargo cult specification

Tim Bray finds a cluster of webservices specs and asks:

Is this the future? Is the emperor dressed? - Tim Bray

Tim has found twenty-six specs. Elsewhere the apache wiki more comprehensively lists forty-eight, but I'm sure there are closer to sixty, and that's not including any of the Grid computing ones which baseline on WSDL/SOAP. There are so many, in fact, that I've justified setting up an RSS feed cut from the apache wiki, so the rest of us (pun intended) can keep up. Bob Sutor said recently that this has to be the year we stop talking about SOAP and WSDL. True, but really SOAP and WSDL are the least of our worries. The industry collectively needs to stop generating specifications for specifications' sake.

A lot of webservices spec writing and work within standards bodies is being done without much (it seems) real understanding of what the value of specifications and standards bodies is. Richard Feynman described this mentality perfectly:

In the South Seas there is a cargo cult of people. During the war they saw airplanes with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head for headphones and bars of bamboo sticking out like antennas (he's the controller), and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land. [1]

I see WS-* as a cargo cult and as such it runs the risk of being a failure. Thankfully some folks get it, and realise we need to ratify and ground things at the very least in running code, even if we don't quite have rough consensus.

[1] I first came across the Feynman quote many moons ago when Steve McConnell described the cargo cult in software engineering.