" /> Bill de hÓra: January 2006 Archives


January 29, 2006

Emitting RSS with Django

The programming idiom is sweet, as most XML via Stringification tends to be. Note to self: check out the feed class and see what kind of broken RSS can get out. And maybe file a Unicode patch.
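
For reference, here's a minimal sketch of the idiom - note this uses the present-day django.contrib.syndication API rather than the module layout of early 2006, and Entry is a hypothetical model with title, body and pub_date fields:

    # A minimal sketch of Django's feed idiom; Entry is a hypothetical model.
    from django.contrib.syndication.views import Feed
    from myblog.models import Entry  # hypothetical app and model

    class LatestEntries(Feed):
        title = "dehora.net journal"
        link = "/journal/"
        description = "Recent weblog entries."

        def items(self):
            # The ten most recent entries, newest first.
            return Entry.objects.order_by("-pub_date")[:10]

        def item_title(self, item):
            return item.title

        def item_description(self, item):
            return item.body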

January 25, 2006

UI clunker #2

From TortoiseSVN's installer, note the warning message:

[Image: clunker2.gif]

"DO NOT INSTALL THIS FEATURE if you don't use VS.NET with Web Projects!!!"

Now, I love TortoiseSVN, but if this message is important enough to be screamed it's important enough to avoid a double negative. Presumably I should install it if I use VS.NET with Web Projects, i.e.:

"INSTALL THIS FEATURE if you use VS.NET with Web Projects!!!"

but, when it comes to double negatives in English, you never know. Unless it's not a programmer that didn't write it.

January 24, 2006

Ripping PDF

In "Imitation is the saddest form of flattery" Dave Thomas writes

"For this reason, I honestly don’t mind other publishers blatantly ripping us off. But I’d rather they didn’t. Instead, I’d rather they found their own ways of innovating, and build their own ideas that others found useful. The publishing industry is in transition. It needs all the good ideas it can get. All publishers should contribute in their own way to the reshaping of the industry. Simply aping someone else’s success won’t help the community as a whole."

I felt Dave Thomas came across as whiny (ed: too opinionated) disappointed, as well as unintentionally hinting that the Pragmatics' innovations don't pose high barriers to entry. Then I read Derrick Story's entry "New "Rough Cuts" Provides Early Access to O'Reilly Books" and now have a bit more sympathy. No mention of the beta books model. Story links to another post where Tim O'Reilly says:

"At O'Reilly, we've always said that a key part of our business is watching the alpha geeks, and then building products to bring their knowledge and insights to a wider audience. "

Seems like the alpha geeks have figured out how to rationalise book publication. In fairness, O'Reilly mentions the beta book scheme as an inspiration, as well as mentioning he figured it out back in 2000 but didn't implement it.

Still I wonder if either house gets why the beta book is valuable. It's not because one gets involved in shaping the book, it's because one gets information now. The lesson here is that for some purposes, people don't need the level of quality associated with a book if the information is timely. Weblogs and online articles have likely been a big part in this lowering of expected standards. The other lesson is that the traditional publishing cycle for a tech book is dysfunctional for open source projects, SaaS, and "release early and often" software. They make books instant legacy - by the time the book ships the project has moved on. I've seen this with Eclipse, WebWork, Spring, Rails, Subversion, Hibernate, you name it. One answer to this is something like a beta book programme, or for post-ship, the Sourcebeat model of getting maintenance releases to the book. [I can see the Sourcebeat model being adapted into the IT sector for documentation and operations manuals, which are notorious for getting out of sync with deployed systems.]

Here's the thing - distribution is still broken. Now that publishers have figured out how to function in a market redefined by open source projects and online services, you still can't get a book or chapter via a feed or as markup. It's all licensed PDF - from the Prags, from Safari Online, from Sourcebeat. It would be interesting to see the current moralizing around music/video/software copyright and distribution played out in the tech book sector if programmers ever start ripping PDFs to XML.

January 23, 2006

Struts Action 2

"A proposal to merge the WebWork 2 community and codebase into the Apache Struts project."

2006 reading list

Here as a reminder/guilt-trip. I should set this up as a feed; an XOXO list will do for now. And it seems there are no Erlang books for the working stiff. Perhaps I'll write my own.

January 21, 2006

Transformation pipelines and domain mapping as semantic mashups

Dare Obasanjo: "Proponents of Semantic Web technologies tend to gloss over these harsh realities of mapping between vocabularies in the real world."

Blech, that's pretty weak. This stuff is hard, but really now, who doesn't know that? The people I'm aware of that do work with such technologies (or any technologies) are under no illusions as to how difficult this is. It's no secret I've got little sympathy for the syntax-doesn't-matter position around RDF, but strawmen like the above are pointless. Herein some thoughts on domain mapping and metadata.

You can't talk about metadata unification sensibly without an economic angle to quantify what's meant by "quality". Without answering the question "when is the metadata good enough?" you're on a hiding to nothing. In the real world what you do to make the mapping is largely (and often exclusively) dependent on how much money and time you have to figure it out. Indeed, being time and resource bound is an operational definition of "real world" for software developers. The people that will pay to have data mapped, and suffer the most relative to variant models, tend to be inside enterprises, and enterprises these days don't like to take risks and are most definitely resource constrained.

"What I'd be interested in seeing is whether there is a way to get some of the benefits of Semantic Web technologies while acknowledging the need for syntactical mappings as well."

Realistically? Transformation pipelines. Separating syntax and semantics sounds clean from some purist or architectural viewpoint, but practically speaking you often need to consider the two together for a given integration. You also want to manage any interesting mappings as discrete units of work so the system doesn't buckle under its own logic.

100% fidelity is a fallacy. As mentioned, the people that want metadata unification are most definitely resource constrained. 100% fidelity in most cases is probably not cost-effective, or even needed. When you accept that 100% fidelity for metadata is a fallacy, it frees you up to look at new approaches instead of pursuing dead ends, in much the same way that accepting latency frees you up in designing distributed systems.

You don't need to do the mapping in one shot. This is one place where the pipelining idea kicks XSLT into a cocked hat. For example, if you can map vocabularies A.xml and B.xml into RDF/XML syntax, then you can apply an RDF or OWL based mapping in turn to the RDF variants to achieve a decent unification. You do not want to try all that in one shot. I think if we make any progress whatsoever at web scale on automated domain mapping, we'll find that point transformations are being layered and composed along a computational lattice. Given what we know about distributed systems and biological signal processing, we should expect this approach of linking up specialized transforms to be more robust than a general purpose technique or canonical models. It also means you can start small and focused, which is critical for successful deployments. When you add URL query strings into the mix for obtaining the data, this is exactly how mashups work today - passing filtered data from script to script via chains of HTTP GETs.
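
To make the shape of that concrete, here's a toy sketch in Python - not any particular toolchain - where each mapping is a small, discrete unit of work and the overall transformation is just a composition of stages. The stage names and data are hypothetical; real stages might be XSLT transforms or OWL mappings:

    from functools import reduce

    def pipeline(*stages):
        """Compose transforms left to right into a single callable."""
        return lambda doc: reduce(lambda d, stage: stage(d), stages, doc)

    # Toy stages standing in for "lift vocabulary A into a common syntax"
    # and "unify the variants".
    def normalise_keys(doc):
        return {k.lower(): v for k, v in doc.items()}

    def rename_author(doc):
        doc = dict(doc)
        if "author" in doc:
            doc["creator"] = doc.pop("author")
        return doc

    unify = pipeline(normalise_keys, rename_author)
    print(unify({"Title": "Ripping PDF", "Author": "Bill"}))
    # {'title': 'Ripping PDF', 'creator': 'Bill'}

Each stage stays small enough to test and swap out on its own, which is the point - the composition does the heavy lifting, not any single transform.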

You can automate some of the work. John Sowa reported a few years back on some work on automatically extracting models from databases. The problem presented was to normalize a tranche of database models. In terms of the time+money angle already mentioned, the standard approach - deploy consultants and specialists to reverse engineer a canonical model - was slated at some years and some millions of dollars. The alternative tried out was to use an automated approach that combined some analogical reasoning and pattern matching to abduct* the models and then unify them. The results were frightening - the automated approach in combination with two guys did a good job, quickly, and at a fraction of the cost.

At a lesser level, I've seen web applications unified into a single application via automated scraping of the apps and automated form fill-ins, combined with an exception management system where a person steps in and does what the machine can't; that was cheaper and faster than the alternative suggestion of throwing out the webapps and unifying their underlying databases into a central one. In this approach, software does data cleansing for the users, whereas in the orthodox approach to data integration, users clean up data for software. In short, let the computers filter and preprocess the data en masse and have them spit out whatever they can't resolve at any stage for human analysis. The way the current Web is organised around people pulling down data from search engines and feeds makes for a rather large exception management system.
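
The exception management pattern is simple enough to sketch. Here's a hypothetical version in Python - the mapper does what it can and queues the rest for a person:

    def map_records(records, mapper):
        """Run mapper over records; queue anything unresolvable for a human."""
        mapped, exceptions = [], []
        for record in records:
            try:
                mapped.append(mapper(record))
            except Exception as problem:  # deliberately broad: a person decides
                exceptions.append((record, problem))
        return mapped, exceptions

    rows = [{"Author": "Bill", "Title": "x"}, "not even a dict"]
    good, todo = map_records(rows, lambda r: {k.lower(): v for k, v in r.items()})
    print(len(good), "mapped,", len(todo), "for human analysis")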

There's no magic in an automated approach. It boils down to rather sophisticated pattern matching. Even if the final automated mapping results are off, so much heavy lifting has been done by the reasoning and transformation toolchains that the cost of sending in someone to clean things up is much reduced. Think of it as HTML Tidy or the Universal Feed Parser, but more involved. Using automation here is a classic disruptive play - the results from an automated approach to unifying metadata might not be as comprehensive or as accurate as hand-mapping or shared up-front agreements, but it's so cheap to do, it becomes plausible in its own right.


* in the technical sense, infer the best possible explanation from the data.

January 20, 2006

Eclips-ing Python

Stephen O'Grady:

"While Eclipse is not traditionally thought of as a first option for dynamic language programmers - whether it because of the overhead of running a JVM or the perception that Eclipse is Java-only (it's not) - I was quite pleased with the Ruby Development Tools add-in for the platform "

A number of us in work use Eclipse to work on Python thanks to the splendid PyDev plugin.

Peter Seibel declares victory. Now all we need is an Eclipse plugin for CLisp - they could call it "Eclispe". And then there would be no question that Eclipse was, in fact, a 120M dollar port of Emacs to Java.

January 19, 2006

It works

[Image: goober.png]

I am Jack's new messaging experience

It's weird talking to people on the phone, IM and suchlike when your point of contact for years in the technical community has been mailing lists and mutual weblog comments. Of late such non-asynchronous communication has been increasing. Hence I thought I should put a fote into my AIM and Skype profiles. And really do an "about" page for here, 'cos for all anyone knows, I'm a bot. You know, humanize things a bit.

Skype. I clicked on some stuff. The fote went up.

AIM. Ah. Now, I do all my AIM stuff via a Jabber tunnel. It seems that to put an AIM fote up I need to go through an AIM client. I downloaded the AIM installer. It's very loud. It took ages to download. It told me I was a bit waay, a bit woo, a cheekie chappie. It felt it knew me. It wanted to empathize. It seemed pumped to be here now. No, I didn't want the AOL browser to be my default browser. Later it would pop up the browser anyway for a demo. Just in case. AIM installer experience in a nutshell? The Zooropa Tour Abbreviated. Or maybe Benetton On Crack. It sucked.

Onto the client. It's like, the acme, of crap IM design.

I'm ty
(ui focus onto aim - alt-tab)
ping into m
(flash flash flash. Click. What?! Oh, nothing much - alt-tab)
y weblog for
(ui focus onto aim - alt-tab)
m.

Butit'shardtofinishthepostbecausethe
(flash flash flash. Click. What?! Oh, nothing much - alt-tab)
imclientkeepspoppingup. Arggh. It's like a TV that changes the channel when the ads are on somewhere else.

The AIM UI is tinged bluish and looks like it's 160 yards away. This is usually called atmospheric perspective when done on purpose and, I think, is a first for interface
(flash flash flash. What?! Oh, nothing much - alt-tab)
design. If you added a tinge of ochre it would look like proper urban smog pollution. Maybe it already does and twigged I'm not the demographic for toxified beautiful sunsets. And the client really doesn'
(flash flash flash. What?! Oh, nothing much - alt-tab)
t want to be turned off. It took me 3 strikes to get it out of the task tray.

Conclusion? Heaven help us if Go
(flash flash flash. Click. What?! Oh, nothing much - alt-tab)
ogle take this Jabber stuff seriously. Interrupt Oriented Advertising, anyone? Ok. Let's get the fote up an
(ui focus onto aim - alt-tab)
d uninstall.

I bet you look good on the dancefloor

"Now do you recognise them? This is the Geldof Generation."

January 17, 2006

QOTD

"There has been much talk about component architectures but only one true success: Unix pipes." - Rob Pike

January 13, 2006

The Big Bopper

I've seen a good few weblog entries in the last few weeks about business and startups emphasising finding customer pain points. Well, home media is wall-to-wall pain.

Here's Russell Beattie: "What I’m trying to convey is a simple thought: Home Media is still complete chaos."

Good write up. Now if only I could find the link to the article I read this week on setting up a media centre for the home (I think it was on Tom's Hardware). Creating that chaos is hard work.

Nonetheless I'm impressed Russell got through all that without mentioning the following show stoppers:

  1. Control: so you wired it all up. What will you use as the bopper*?
  2. DRM: so you wired it all up. Now comes non-interoperability by design.

Even assuming you can get it all set up, DRM and the media controller remain problems.

Anyone who thinks they can get people to operate a media centre via a laptop or PC is nuts. By coincidence we were talking about this exact problem in work during the week, and came to the conclusion that something like an Archos or a wireless PDA is the ideal bopper for a media centre. I suggested a tablet PC to start, because of the form factor, but they're much too fragile. An Archos PMA 400 is more robust and has a better chance of survival. A PSP could work, but inputting commands into one of those is immensely tedious - plus I wouldn't want to drop a PSP.

There's a lot of pain in wiring up and configuring just a sane file management system for the home, never mind a media centre, and I think whoever comes up with a combined storage/player/bopper solution for the home is going to make a lot of money, assuming they are not sabotaged by DRM. I think one way to be a pure software play in this market is to partner with a device maker and leverage their channels, and in doing so avoid signing over exclusive distribution rights to the hardware guys. A solution also needs to Just Back Things Up - the days of archiving fotes by putting your negatives into a shoebox are over, and I imagine there are millions and millions of digital photos out there just waiting for a hard drive failure to be vaporised.

Ultimately what's disruptive about this problem is that you need to build a system and sell it as a product. Nick Carr thinks consumers don't want systems, they want products - "Consumers certainly want to share Internet connections with other family members, and some of them may want to share a printer over a home network, but beyond that they show little interest in connectivity" - but this is short-sighted. Homes now have more data than enterprises did 30 years ago, and the volume of personal data and content being digitized is increasing at a rapid clip. Give it time. What people want is convenience. Interestingly, you can't take this pain away with products - in fact adding more products makes the problem worse. It's a classic systems integration problem that someone like BEA, IONA or EMC would have an intuitive grasp of, but it's happening in the home, not the enterprise. There are no good stories out there on how a family manages terabytes of data. None.

The DRM issue is largely imposed by content providers, and is only growing. This year or next could be the year consumers push back on not being able to listen to or watch content on their system of choice. I speculate that one vector for this will be, of all things, UMDs for the PlayStation Portable. In Ireland a UMD retails for about 30 euro, whereas the equivalent DVD could be got for as little as 10 euro. Assuming you upgrade the memory stick to 512Mb or 1Gb, you can burn DVDs for playback on the PSP. But the legal and technical status around DVD burning is extremely vague. Ripping DVDs is also more effort than most people want to go through. However, tell Joe Consumer after shelling out for a PSP that they have to buy Pirates Of The Caribbean twice for the kids - once for a relatively portable medium (DVD) and once again for a non-portable format (UMD) which can cost up to 3 times as much - and I predict a riot. I'm also seeing an interesting pattern of people just not buying media of late, either because they don't know what they can do with a DVD/CD, what the limitations across devices are, or what malware or junk the discs are spraying onto their PCs. The back catalog is big enough. In trying to maintain the value of, and stranglehold over, current physical distribution channels, the entertainment industry seems to be creating real problems for itself. Finally, you can't help but wonder if in the next few years a war by proxy will be fought in the home between content, electronics and software companies, with DRM as the weapon of choice.


* A "bopper" is what we call a remote control in our house.

January 06, 2006

A few good dynamic programmers for the CLR

"We’re looking for a few exceptionally talented individuals with dynamic language experience (Python, Ruby, PHP, JavaScript, etc.) to come join our efforts to make the Common Language Runtime (CLR) the world’s best platform for dynamic languages and dynamic scenarios." - Jim Hugunin:

After some complaints about deadness last year, it looks like IronPython is warming up again. The list has been picking up of late.

Concurrency in Software

"So, although shared-state concurrency provides the most granular level of control, it is also the most difficult model to program to." - Bill Clementson

I'd have to agree. Proper tech stuff from Bill Clementson, who, along with some good links, nicely weaves the Nutch map-reduce implementation and Google Sawzall into a discussion on concurrent programming. Bill also mentions blackboards - here's hoping he follows this post up with a survey of tuple-spaces.
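
As a toy illustration of the trade-off - message passing being coarser-grained but easier to reason about than shared state under locks - here's a hypothetical Python sketch where workers coordinate only through queues and touch no shared mutable state:

    import threading
    import queue

    tasks, results = queue.Queue(), queue.Queue()

    def worker():
        # Each worker owns its loop; coordination happens only via queues.
        while True:
            item = tasks.get()
            if item is None:  # sentinel: no more work
                return
            results.put(item * item)

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for n in range(10):
        tasks.put(n)
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()
    print(sorted(results.get() for _ in range(10)))  # [0, 1, 4, ..., 81]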

January 05, 2006

The Draw Boy

"There ought to be some mechanical way of doing this job, something on the principle of the Jacquard loom, whereby holes in a card regulate the pattern to be woven." -Dr. John Shaw Billings, on manipulating census data

The Jacquard loom, invented by Joseph Marie Jacquard, was the world's first programmable machine - or as we call such machines today, a computer. Jacquard's loom strongly influenced Charles Babbage, who is typically credited with the first computer, and it is very likely that Herman Hollerith, inventor of the census tabulating machine (which used punch cards), was influenced by the loom. Jacquard himself was commissioned by Napoleon, who felt the creation of complex silk patterns could do with automation to benefit the industry as a whole. Complex patterns were preferred by the upper classes and royalty, in part because they were beautiful but also because they were absurdly time-consuming and expensive to make. Silk in particular represented a problem because it was so fine compared to cloth. Up to then complex silk weaving was untouched by the technical advances in the creation of regular linear and checked patterns on silk cloth. It was felt that non-linear patterns were not subject to productivity increases.

Duff's Draw Boy and Mr. Austin's Engine Loom

Jacquard felt differently. Jacquard was originally a draw boy. Silk weaving prior to Jacquard was done on draw looms, large devices operated by two workers, the weaver and the draw boy. The weaver handled the threading of the weft threads through the warp threads, which are support threads, so that the woven rows were laid out to form the desired pattern. The draw boy raised or lowered the warps' reeds, according to the weaver's instructions. The weaver made his threading decision on the pass of the shuttle as each row of the cloth was created. The reeds carried and guided the warp threads; the combination of reeds dictated where the alternate silk colour of the weft would appear on each back and forth pass of the shuttle as controlled by the weaver. Over time a pattern would appear, as the weaver called out to the draw boy which reeds to lift on each pass. A master weaver with a good draw boy might produce two rows per minute, or 2 square inches of patterned silk cloth a day. They really were boys; adults were too big to do this job. That's because the draw boy stood on top of the loom while lifting the reeds in accordance with the instructions of the weaver. Draw boys worked 6-8 hours a day in horrendous conditions, lifting 30 pounds of reeds at a time on every call; they were often ill and sometimes crippled by the work. Jacquard hated being a draw boy and made the automation of the looms his life's work.

Jacquard loom

In an age of complex and increasingly powerful geared mechanisms powered by men or water, Jacquard's loom was ingeniously simple, not much more than a plain set of rods and connectors that did not require much physical power to operate. The reeds were replaced by vertical steel rods, called hooks, as they had a hook on the top end. On each weave the rods could move either up or down as determined by the hook. The hook raises or lowers a harness which carries and guides the warp thread, now tied to a weight instead of a boy's hands, so that the weft lay above or below the warp, allowing a row of the pattern to be created as with the traditional loom. To control which rods were raised, pins on the side of the loom could be pushed in, one pin for each rod, raising the rod; the pins would spring back out when released. The pins were pushed in accordance with holes in a card, called the pattern card. The pattern card rotated on each pass of the shuttle, pushing in a combination of pins in accordance with the pattern.

Jacquard pattern

To program a loom to create a pattern as you wove, you needed the pattern card, called the Jacquard card. The blank card was overlaid with a pattern and rows of holes were punched into it as the blank card rotated. Once you had the card you could copy it as many times as you liked and run it on as many looms as you wanted - today we call that parallel computing. This allowed a weaver to weave up to 33 feet a day of a complex pattern, instead of the two square inches of the pre-Napoleonic royal court. The overall design allowed the weaver's natural back and forth motion of the shuttle to determine the pattern, and removed the need for the draw boy.

It was a binary machine, whose physical operating technique - the use of pattern cards to dictate the arrangement of ones and zeroes that instructed the computer what to do next - was common in the computing industry well into the 1970s. The Jacquard loom itself is still in common use; art and textiles students train on such looms today.
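
As a toy way of seeing the binary machine at work, here's a hypothetical sketch in Python: each row of a pattern card is a line of ones and zeroes, each hole raises a hook, and each pass of the shuttle prints one row of the pattern:

    # A pattern card as rows of ones and zeroes; a hole (1) raises a hook,
    # which lifts a warp thread, deciding where the weft colour shows.
    pattern_card = [
        "10000001",
        "01000010",
        "00100100",
        "00011000",
    ]

    for row in pattern_card:  # the card rotates once per pass of the shuttle
        print("".join("#" if hook == "1" else "." for hook in row))

    # Copy the card, and the same pattern runs on as many looms as you like.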

woven Jacquard pattern


The Jacquard loom stayed untouched for almost 150 years. The power loom had been invented by Cartwright some time before the Jacquard loom, but the advance there was in the power supply rather than the mechanism. The next advance in programming a loom came at the end of the 20th century: getting rid of the cards and programming looms with a much smaller representation of the pattern, as software. The modern digital computer, based in no small part on Jacquard's pattern matching technology, had eaten its parent. Today's Jacquard looms are computer controlled and electrically powered and can have hundreds or even thousands of hooks, but their technical architecture and functioning is essentially unchanged since 1801. The Rapier loom was the next technical (hardware) advance: having no shuttle to move back and forth, its weaving mechanism today allows the weft to cross the warp 625 times per minute, speeds that cannot be matched by a weaver, and can create up to 120 metres of cloth per hour in a modern factory.

Hollerith's patent on the Key Punch

The extreme automation and productivity of the Jacquard and then Rapier looms clearly does not require a weaver as the job was understood by Jacquard. What you did need was someone to invent a pattern that could be transcribed onto a pattern card. This gave rise to the designer, whose job it was to create patterns that would be inscribed on a card, and later into a computer. Designers of all kinds are very much a by-product of industrial production. Because the volume of cloth produced was large, prices came down and more and more people could have access to fine silk. To drive demand, patterns were regularly altered to allow people to buy the newest and latest styles (silk finery having always been bought for coquettish reasons). In seasonal form, this constant alteration combined with cheap production and advertising of the new styles has more or less given rise to the fashion industry.

And what of the draw boys, and the job which Jacquard so despised when he was a boy? Despite the health hazards of their work, the looms were smashed by draw boys, who rioted, and who also threatened Jacquard's life.

January 04, 2006

AntAnt: a tool for creating ant builds

A while back, I mentioned a tool I use called AntAnt. It's an Ant script for generating standard build layouts for Java projects, something I've been using in one form or another since 2001. Some people have asked me about its availability. I've been meaning to distribute it for some time and have finally gotten round to it. It's a small tool, so it hardly merits a dedicated website, but the readme is here: AntAnt Readme. From the readme:


The goals of AntAnt are to:

  • Produce a 'good enough' default layout for Java projects.
  • Avoid hand coding the same Ant targets again and again.
  • Allow folks to tailor an Ant build system without starting from scratch.
  • Provide common targets so you can walk up to any project and start using it.
  • Allow automated tools to consistently run builds and distributions.

It's impossible to standardize every aspect of a build without inconveniencing some people or getting into endless arguments about best practices. However, 80% of Ant files are the same for every project (there's a lot of cut and paste in the Ant world) and duplicating the effort for every project gets old fast, as does understanding each and every project's quirks. The build and the project layout used by AntAnt are derived from what seems to work well enough, and from idioms shared by other Java shops and open source projects, and outlined in some books.

Most of the tasks in AntAnt can be customized without too much effort. The 20% you need to customize most of the time can be altered via build.xml. Where Ant tasks tend to become custom is at the deployment end, so typically the things you will need to set up in Ant are:

  • warfile generation for webapps
  • zipfile generation for distributions
  • jarfile generation where a single project is creating multiple jar files
  • specialised installation and deployment tasks


Releases and drops are available at http://dehora.net/code/antant/. The development trunk is here: AntAnt trunk, and the current 'stable' release can be checked out from here: AntAnt stable. It's currently a release candidate - AntAnt 1.0.0 will be released under the ASF v2.0 licence (same as Ant). AntAnt is not meant to be a framework and there are no plans to make it one. It's stunt work - something that should be small enough to be explained in an hour, and after that hour something you should feel comfortable changing for your own needs. Therefore I don't have new features planned, but if you find bugs, mail me.

January 03, 2006

What are you going to do?

You should approach a design problem like this NOT by saying "what structure can I perceive in the small range of name types I am familiar with" but "what do I actually need to DO with names?"

Try JOnAS

Ted Neward has had enough of the JBoss/Geronimo thing:

"JBoss goes out of business, the JBoss source code goes back to being maintained by developers whose principal interest is in maintaining open-source projects rather than making money, and it all gets folded together with what the Geronimo folks are doing. In other words, the open-source community stops the infighting* and starts pulling oars in the same direction at the same time. For once."

Ted might want to look at JOnAS if he hasn't already. I don't know why it doesn't get more attention. While the ASF and JBoss.org have been squabbling over rights, ObjectWeb have been shipping solid code. JORAM in particular is a quality JMS implementation.

There was a curious comment on Ted's entry from Mohan Radhakrishnan:

"So if JDK 6 ships then that would be the first time in Java's history most companies working on client Java projects would be 2 major versions behind."

I hadn't thought of it like that before. Interesting. And Dolphin, aka Java 7, is in the pipeline.


* Note the mischaracterisation: the ASF+JBoss are hardly the "open-source community" and I doubt they consider their particular debate "infighting".

January 02, 2006

Learn more stuff

Clemens Vasters on the growing surface area of .NET:

"But for a development team to benefit from all these technologies, specialization is absolutely needed. The times when development teams had roughly the same technology knowledge breadth and everyone could do everything are absolutely coming to an end. And the number of generalists who have a broad, qualified overview on an entire platform is rapidly shrinking."

I think the question here for .NET practitioners and architects has to be: is all this stuff actually needed, to the point where specialists are required to make projects succeed? I also wonder whether .NET isn't in the same boat in 2006 as J2EE was in 2002, where the technology on offer is overkill for what's needed. Aside from the technology, there's a good argument to be had that we're not especially good in this industry at getting specialists to collaborate and communicate on projects, ergo any platform that acts as a forcing function towards specialists rather than generalists presents its own risks.

Atom Protocol: draft-ietf-atompub-protocol-07

The latest draft of the Atom Publishing Protocol is now available, draft-ietf-atompub-protocol-07.

Dave calls out the main change - URI templates are dropped in favour of next/previous paging of entries. Another change is minor but critical - the examples are now valid Atom. The previous draft showed invalid Atom; it was put in as a barbaric YAWP to highlight an issue around clients being able to populate an Atom entry's required elements (esp. atom:link and atom:id). The point's now been made.

There are bugs in 07 - the example and schema in section 7 still refer to URI templates, and IRIs are specified in the context of HTTP instead of URIs (for HTTP, IRIs should be mapped down to URIs before use) - in fact IRIs are specced across the board. Personally I find this IRI/URI/URL stuff stupefyingly annoying, but that's from a spec-writing viewpoint.

What we need is a policy for when to hit the shuffle button

"Markov Decision Processes to the Rescue."

Cut-and-Paste Year

January, "We're not so much building on the programming state of the art as continually have each generation of programmers rediscover it." February, "I honestly don't think I'll even finish the Structure And Interpretation of Computer Programs." March, "Services don't get finished, they become available, and then need to stay available." April, "From the Joseph Heller school of specification." May, "Java good for plumbing, Ruby good for taps." June, "If you're in the software business you'll know by now that data is the new platform." July, "Be careful, it's not all solved in the system architecture." August, "Ain't web architecture grand?" September, "Right now,the way phones, PVRs, computers, handhelds just don't work together makes life very difficult." October, "Gravy granules. The recipe says three quarters of an ounce of gravy granules. The gravy granules thing is metric. And we only have a teaspoon. me: That's what we call systems integration in work." November, "Time to write a todo list webapp? 16mins." December, "It may turn out to be a surprise for the industry, but I think the people in the trenches saw it coming a long time ago."

January 01, 2006

LaundryList APIs

In recent years the most difficult software abstractions I've dealt with are APIs that evolved from a laundry list of convenience calls - I'm talking about getThis, getThat, getThisByThat, getThatByThis ("how do I get other?", "there's an other?", "Yuh", "no problem, I'll add getOther just here"). You can't compose or say anything new with this kind of API - it's nonsense instead of language. They are especially problematic in languages that have a compile/package/deploy cycle, or in runtime containers where versioning is not well thought out (which means most containers).

What LaundryList APIs achieve, possibly unintentionally, is an almost total abstraction not just from things like 'this', 'that' and the 'other', but also from verbs like 'get' which you need to talk about the things in question. Which is encapsulating exactly the wrong stuff. All that needs encapsulation is how this, that and the other were made, not how to speak to them.
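
To sketch the contrast (in Python for brevity; all the names here are made up): the laundry list grows one method per question anyone asks, while a composable API keeps a small set of verbs and lets the things and criteria be data:

    class LaundryListStore:
        # One method per question ever asked; none of them compose.
        def get_this(self): ...
        def get_that(self): ...
        def get_this_by_that(self, that): ...
        def get_that_by_this(self, this): ...  # "no problem, I'll add getOther just here"

    class ComposableStore:
        def __init__(self, objects):
            self._objects = list(objects)

        def get(self, kind, **criteria):
            # One verb; 'this', 'that' and the 'other' are data, not method names.
            return [o for o in self._objects
                    if o.get("kind") == kind
                    and all(o.get(k) == v for k, v in criteria.items())]

    store = ComposableStore([{"kind": "this", "that": 1}, {"kind": "other"}])
    print(store.get("this", that=1))  # [{'kind': 'this', 'that': 1}]
    print(store.get("other"))         # no new method needed for "other"

The composable version can answer questions nobody thought to ask when it was written, which is the difference between a language and a list.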