" /> Bill de hÓra: July 2004 Archives


July 31, 2004

A day late, and a dollar short

Dorothea Salo:

"Look. RDF people? We get it. We really do. I know enough about RDF to put it into operation, if it were any damned use to me whatever."

Norm Walsh:

"This little case study supports a direction that's starting to feel right to me: RDF is a good tool for aggregating and analyzing data, but it's not the right tool for creating or maintaining information. In a sense, (some of) the RDF community are already leaning this way too, with proposals like GRDDL being developed to define standard ways for extracting RDF from data that's richly marked but not directly encoded in RDF/XML. But for the record, the fact that I have to embed RDF/XML in comments in XHTML still sucks."

Danny Ayers:

"Sure, there is still a way to go before RDF can be effectively lined up alongside doc-oriented XML. As Norm points out, RDF/XML is good for RDF core dumps. It's acceptable for interchange too. But handwritten? Well if you can comfortably handwrite TEI and DocBook, it shouldn't present much of a problem. But personally anything beyond a few lines of sample/test data, and I'm outta here."

Dan Brickley:

"Sometimes there are emergent properties of a set of sensible, well motivated decisions grounded in a whole load of subtle constraints. We (I say we, I turned up late to this bit of work) had constraints from nature of the task (graphs into trees), from HTML browser deployment concerns, from XML arcana, from RDF's data model (unordered)."

It's all quite frustrating really.

July 29, 2004

Subversion note

On merging and tracking changes:

In Subversion, a global revision number N names a tree in the repository: it's the way the repository looked after the Nth commit. It's also the name of an implicit changeset: if you compare tree N with tree N-1, you can derive the exact patch that was committed. For this reason, it's easy to think of "revision N" as not just a tree, but a changeset as well. If you use an issue tracker to manage bugs, you can use the revision numbers to refer to particular patches that fix bugs - for example, "this issue was fixed by revision 9238". Somebody can then run svn log -r9238 to read about the exact changeset which fixed the bug, and run svn diff -r9237:9238 to see the patch itself. And Subversion's merge command also uses revision numbers. You can merge specific changesets from one branch to another by naming them in the merge arguments: svn merge -r9237:9238 would merge changeset #9238 into your working copy.

On christening names:

A nice way of finding the revision in which a branch was created (the "base" of the branch) is to use the --stop-on-copy option to svn log. The log subcommand will normally show every change ever made to the branch, including tracing back through the copy which created the branch. So normally, you'll see history from the trunk as well. The --stop-on-copy will halt log output as soon as svn log detects that its target was copied or renamed.

On ancestral merges:

Ideally, your version control system should prevent the double-application of changes to a branch. It should automatically remember which changes a branch has already received, and be able to list them for you. It should use this information to help automate merges as much as possible. Unfortunately, Subversion is not such a system. Like CVS, Subversion 1.0 does not yet record any information about merge operations. When you commit local modifications, the repository has no idea whether those changes came from running svn merge, or from just hand-editing the files. What does this mean to you, the user? It means that until the day Subversion grows this feature, you'll have to track merge information yourself.

all from the Subversion docs

IronPython 0.6

IronPython 0.6 for .NET/Mono is out. Jim Hugunin on the .NET CLR and his move to Microsoft:

"The more time that I spent with the CLR, the more excited I became about its potential. At the same time, I was becoming more frustrated with the slow pace of progress that I was able to make working on this project in my spare time. After exploring many alternatives, I think that I've found the ideal way to continue working to realize the amazing potential of the vision of the CLR. I've decided to join the CLR team at Microsoft beginning on August 2. At Microsoft I plan to continue the work that I've begun with IronPython to bring the power and simplicity of dynamic/scripting languages to the CLR. My work with Python should continue as a working example of a high-performance production quality implementation of a dynamic language for the CLR. I will also reach out to other languages to help overcome any hurdles that are preventing them from targeting the CLR effectively. I welcome any and all feedback about how to best accomplish this."
I wonder if Sun are looking at this and thinking it's time to invest in Jython or start a JSR.

[via Dong Zhang]

July 27, 2004

Atom and Cool URIs: dogma, idealism, expediency

The new architecture was being created for the workers. The holiest of all goals: perfect worker housing. - Tom Wolfe

The thread that wouldn't die.

Tim Bray disagrees with Mark Pilgrim around URIs as identifiers.

Mark's position, which is fairly specific to Atom:

  • Permalinks are transient, not permanent.

  • Keep permalinks and ids distinct.

  • Prefer tag: URI constructs for ids.

Tim's position, which strays beyond Atom:

  • Get decent software that generates context free URIs.

  • Permanently own a domain.

  • Don't use tag: URIs because they're not registered (but don't think hard about why that is).

  • Any identity model that isn't single URI (nominalist) is Web-unfriendly (specifically, you'll burn bookmarks, Pagerank and caches)

On the face of things, this is very sensible advice, and is essentially the core W3C dogma - cool URIs don't change. But there are some details that happen to strengthen Mark's position and weaken Tim's. In short - forget about the application software; the Web/Internet infrastructure itself doesn't support cool URIs.

Never mind the URI, here's my UUID.


If you think that there's a good chance your URIs will change, you shouldn't use them for IDs. But, if you think that, you should also bloody well be looking for better software or hosting or whatever.

Let's talk about ICANN instead. We can bang on all day about getting tooled up with decent software to generate sensible URIs and not harm Pagerank, but that is to neglect issues around domain name ownership, which cannot be solved in software. The dirty secret of the Web is that you don't own your domain name; you rent it. As soon as you stop renting it, every URI under that namespace is almost certainly toast, irrespective of whether someone else rents it after you. What kind of permanence is that? No-one will support your ex-URI space because it costs money to serve up content on the web, and the more read you are (or were) the more it costs. No amount of non-broken software will help you here. But an id tag can. Having said all that, here's my (strawman) counter-argument:

If you think that there's a good chance your URIs will change, you shouldn't use them for IDs. But, if you think that, you should also bloody well be looking for better domain name governance or perpetual rights to domain names or a more democratic web or a web that doesn't punish content providers or whatever.

Arguing for permanence in http: URIs when the critical partition of the http: namespace, the domain, is de facto transient - make of it what you will.

Web Realpolitik.

I dislike Mark Pilgrim's position, because it involves managing the relationships between various identifiers, which can get complicated, is prone to inconsistencies, and is invariably a local solution (whereas using a lone URI is not). I think it helps to have had to deal with or integrate namespaces where names/identities are composite entities like Atom's to appreciate what a compelling idea the lone URI is - folks, there's real money in managing composite identities across administrations. Consequently the Atom id tag sucks. However, Mark's position does address issues with URI (ie, domain) transience. Consequently the Atom id tag sucks, but I will end up using it.

Emotionally, I like where Tim's coming from much better, but emotionally I like world peace much better too. Short of radically different web infrastructure and management, Mark's way will work out for most folks, who are not and never will be web geeks, administrators, or information architects. The "Cool URIs don't change" argument doesn't work so well unless you are in a position to guarantee perpetuity, or have a deep and strong technical understanding of web architecture - the evidence suggests it doesn't work at all at the "consumer grade", which is most bloggers. Otherwise - in my case that means roughly 150 euros a year (well worth it) and nobody trading on the name "dehora" coming to boot me out of that domain (we'll see about that being worth it). But it's my job to know about this stuff; I'm not a consumer grade exemplar.

I don't know - sometimes the cool URI view seems to belong in a Carlsberg advertisement, who don't do web architecture, but if they did it'd probably be the best web architecture in the world: power is decentralized to the edge peers, people own their domain names, they can affordably serve content, they can host their own content, they can continue to do so when the world loves to read their content, they can avail of sensibly crafted software so they don't have to be programmers if they don't want to be, they don't get hounded out of their domains, bodies like ICANN and the IETF are running like well-oiled machines, and the physical architecture supports what the web and information architects want. The web we have is not like this - it is natted, centralized, dynamically assigned, firewalled, attacked, mismanaged, inefficient, litigated over, awash in expediency, filth, greed and infection. The Web is much more like the real world than the Web.

Death and taxes.

The deployed Web works against cool URIs, not for them. That's what Mark Pilgrim understands and the W3C does not. To publish cool URIs is to actively engage in a fight against entropy, the natural order of things.


'Permalink changes'? Yes, permalinks are not as permanent as you might think. Here's an example that happened to me. My permalink URLs were automatically generated from the title of my entry, but then I updated an entry and changed the title. Guess what, the 'permanent' link just changed! If you're clever, you can use an HTTP redirect to redirect visitors from the old permalink to the new one (and I did). But you can't redirect an ID.
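The redirect Mark describes is typically a one-liner in server config - for example with Apache's mod_alias (the paths here are made up for illustration):

```apache
# Send visitors of the old title-derived permalink to the new one
# with a permanent (301) redirect. Paths and hostname are hypothetical.
Redirect permanent /journal/2004/07/old_title.html http://example.net/journal/2004/07/new_title.html
```

But, as he says, there is no equivalent trick for an id baked into a feed.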

I don't have a problem using titles in Atom permalinks, or with using tag: URIs in Atom ids. I've come to decide that I won't change titles post publication. As for tag: URIs - I happen to be a fan of these. Despite whatever problems Tim has reading them, I find them easy enough to eyeball. It takes 15 minutes to implement a generator. The fact the scheme is not registered doesn't matter because the reason it's not is not a technical/engineering matter; it's down to an obtuse registration process. Again this is an issue that goes beyond software and comes down to management. So I say go ahead and use tag: URIs for Atom ids.
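It really is a 15-minute job. A minimal sketch in Python (the helper name and the example specific string are mine, not from any Atom tooling): a tag: URI combines an authority you held, the date you held it on, and a string unique within that pair.

```python
from datetime import date

def tag_uri(authority, specific, on=None):
    """Mint a tag: URI of the form tag:<authority>,<YYYY-MM-DD>:<specific>.

    authority: a domain name (or email address) you controlled on the date.
    specific: any string unique within that authority/date pair.
    """
    on = on or date.today()
    return "tag:%s,%s:%s" % (authority, on.isoformat(), specific)

# e.g. an id for a weblog entry, stable even if the permalink URL changes
print(tag_uri("dehora.net", "/journal/atom-ids", date(2004, 7, 27)))
# -> tag:dehora.net,2004-07-27:/journal/atom-ids
```

Because the date pins down who held the authority when, the id stays unambiguous even after the domain changes hands.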

Despite all that I'm up for following Tim's line; I will continue to dole out per annum to rent and host dehora.net for as long as I can afford it. I am careful about issuing permanent URIs, but definitely not careful enough - for example I've had to port most of the URIs for this weblog, since MT does not do out of the box what web 'nominalists' consider the right thing. I've left behind redirects as Mark describes above. The conclusion? As things stand it's unrealistic to expect more than a small minority to do likewise or to really care about stable identifiers. There are better things to be doing than agonizing over URIs and the lifespan of a domain name. That anyone has to is a failure of the architecture as deployed.

[limp bizkit: faith]
first published: 2004-05-29 17:05:33. Some edits have been made and typos fixed, but the reason that the date was changed (aka 'genxed') is that the ID/URI permathread has come up on the Atom working group list recently, and for anyone involved who reads here, this post captures my essential position on the matter, which is probably best understood as one of realpolitik.

July 26, 2004

jxtaSpaces release


"jxtaSpaces is an implementation of a tuplespace for the JXTA network. It allows the coordination and synchronization of JXTA peers using a JavaSpaces-like programming model. "

July 25, 2004

Bend Sinister

DiveIntoPython DivesIntoPaper: "He's asking that we buy 4000 copies so he can pay back his advance - but Mark, you not familiar with the story of Dr. Faust?"

Comics question

Does anyone know of comics (not just covers), drawn by Glenn Fabry other than the Slaine series?


Mark Pilgrim:

"XML on the Web has failed. Miserably, utterly, completely. Multiple specifications and circumstances conspire to force 90% of the world into publishing XML as ASCII. But further widespread bugginess counteracts this accidental conspiracy, and allows XML to fulfill its first promise ("publish in any character encoding") at the expense of the second ("draconian error handling will save us")".

Aside from Atom, where there was a long running discussion on XML + HTTP + RFC3023, I have some experience of this (along with Sean). In Ireland there exists an eGovernment integration messaging hub called the IAMS. The envelope format is XML but it's subsetted to be ASCII only on the wire - anything else is rejected. In truth this was done originally because one of the two first agencies on the hub insisted upon it, but the format has not been upgraded to allow higher encodings, and in some part that is due to the current state of incoherence in application protocols between MIME and XML. Yes, borked encoding is an issue, but not enough for us all to play Cassandra just yet.
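For what it's worth, an ASCII-only rule on the wire need not lose data, provided the serializer escapes non-ASCII characters as numeric character references. A Python sketch (the envelope element names are invented, nothing to do with the actual IAMS format):

```python
import xml.etree.ElementTree as ET

# A message body containing non-ASCII text (the fada in "hÓra").
# The element names here are made up, not the real IAMS schema.
root = ET.Element("envelope")
ET.SubElement(root, "body").text = "Bill de h\u00d3ra"

# Serializing with an ASCII target encoding forces every non-ASCII
# character into a numeric character reference, so the wire format
# stays 7-bit clean while the content survives intact.
wire = ET.tostring(root, encoding="ascii")
assert all(byte < 128 for byte in wire)
print(wire.decode("ascii"))
```

The Ó comes out as the character reference &#211;, and any conforming XML parser on the receiving end reconstructs the original text.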

"The entire world of syndication only works because everyone happens to ignore the rules in the same way. So much for ensuring interoperability."

I'm not as pessimistic about this, as I think this speaks to the rules, not to interop - on the web the XML sky has not yet fallen in. And the rules are being looked at. Since Mark wrote (published? issued? created?*) his article, Paul Hoffman has informed the Atom WG that Apache will do the right thing with .atom files in a future release, Tim Bray has managed to persuade Microsoft to address the issue in a future release of IIS, and there is a new I-D obsoleting the Dread Pirate RFC3023 (specifically text/xml is gone). There is also a workaround that is acceptable in my mind and consistent with Postel's law - decouple HTTP and XML processing altogether. At the end of the day the situation is restricted to encoding arcana, which is just one facet of the XML value-add and I suspect nothing like as substantial a part as Mark claims, which he has to in order to bolster the argument that XML has failed. The situation with XML on the web appears better than HTML, and imo that is the practical benchmark - things have never been better.
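Decoupling amounts to trusting the bytes over whatever charset the Content-Type header claimed. A rough Python sketch of the idea, a cut-down version of the detection described in Appendix F of the XML 1.0 spec (the function name is mine):

```python
import re

def sniff_xml_encoding(data: bytes) -> str:
    """Guess an XML document's encoding from its bytes alone,
    ignoring the charset parameter the HTTP header may have carried."""
    if data.startswith(b"\xef\xbb\xbf"):
        return "utf-8"                      # UTF-8 byte order mark
    if data.startswith((b"\xff\xfe", b"\xfe\xff")):
        return "utf-16"                     # UTF-16 BOM, either endianness
    m = re.match(rb"<\?xml[^>]*encoding=['\"]([A-Za-z0-9._-]+)['\"]", data)
    if m:
        return m.group(1).decode("ascii")   # declared in the XML declaration
    return "utf-8"                          # the XML default

print(sniff_xml_encoding(b"<?xml version='1.0' encoding='iso-8859-1'?><a/>"))
```

A processor that does this can ignore a wrong or missing charset on text/xml entirely, which is the Postel-friendly behaviour.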

This does strike at one issue with the way the Internet and Web specify their technologies - overall architectural consistency can get put on the long finger. Usually this is a good thing, but sometimes it leaves room for incoherence as formats and protocols are combined to new uses. And we will see another media types and application protocols problem like this in the future. Rather than encodings, it will be to do with the incoherence between media types and extensions to RDF semantics.

* Apologies - it's something of an Atom joke

July 21, 2004

ReSharper 1.0

"...you have to learn how to work them - but boy, when you do, they make those languages start to feel like scripting languages." -Ward Cunningham.

July 20, 2004

cvs2svn: brilliant!

I've just finished moving all my versioned stuff over to Subversion from CVS with cvs2svn.

July 18, 2004


R is an XML serialization for RDF.

Design goals

  • It's for serializing RDF in XML, not XML in RDF
  • Readable
  • Writable
  • Hackable
  • No choices
  • Cut and paste friendly
  • Optional support for contexts
  • Optional support for quotation
  • A decision in 10 minutes whether to use it

Things R will not be supporting:

  • XMLBase (and by extension no relative URIs)
  • QName abbreviations in place of URIs
  • Canonicalization of XML literals
  • Literals as subjects
  • Nested graphs
  • Reification (use quotation and get over it)


The name was originally RDFX and was announced last March. A Google search shows that RDFX is already taken by an Eclipse plugin for RDF. So, a new name was needed. The design goals are the same now as then but the R syntax has changed quite a bit since, enough to put it into a new namespace. I'm doing this because except for the simplest graphs I don't like RDF/XML on a number of levels: technical, aesthetic, usability; and I have a need to interchange RDF data rather than do mixins between RDF and XML/HTML.

The markup

Here you go, this example covers most of its capabilities:

  <r:rdf xmlns:r="http://www.dehora.net/r/2004/07/" version="20040718">
      <r:description>I am not a description</r:description>
      <r:context uri="http://example.org/con10"/>
        <r:context uri="http://example.org/con12" />
        <r:subject uri="http://example.org/10"/>
        <r:property uri="http://example.org/11"/>
        <r:value uri="http://example.org/12"/>
        <r:subject bnode="K12"/>
        <r:property uri="http://example.org/20"/>
        <r:value type="http://www.w3.org/2001/XMLSchema#integer">22</r:value>
        <r:subject uri="http://example.org/31"/>
        <r:property uri="http://example.org/32"/>
        <r:context uri="http://example.org/con10"/>
        <r:subject uri="http://example.org/1000"/>
        <r:property uri="http://example.org/1001"/>
        <r:subject uri="http://example.org/one"/>
        <r:property uri="http://example.org/two"/>
        <r:value type="http://www.dehora.net/r/2004/07/type/xml" xmlns="">
          <target name="get-ant-extensions" description="">
            <get src="${ant.href.ext}/jdepend.jar" dest="${ant.lib.home}/jdepend.jar"/>
          </target>
        </r:value>
    <r:graph quoted="yes">
      <r:description>I am not asserted, but I'm not sure what that means</r:description>
        <r:subject uri="http://example.org/1"/>
        <r:property uri="http://example.org/1/2"/>
        <r:value uri="http://example.org/1/2/3"/>
    </r:graph>
  </r:rdf>

It's a relatively flat, repeating structure, much like what you'd expect from database XML dumps or record oriented markup. An R document contains sequences of graph elements which in turn contain sequences of statements. A statement is the usual three part RDF structure.

Subjects and properties

Subjects can be URIs or blank nodes (sometimes called bnodes). To indicate a subject is a blank node, use the 'bnode' attribute; otherwise use 'uri'. Properties are simple; they're always just URIs.

Literals and values

Every RDF triple has a property value (usually called the 'object'). For resources, we just use a uri attribute. Anything else is an RDF Literal. It turns out that Literals are the most complicated thing to serialize. R has 3 literal forms:

  • Raw element content: this is just straight text inside the value element. The value element has no uri attribute in this case.
  • Typed literals: if you want to say that a literal has a type, you can do this by setting a type attribute on value. The type must be a URI along the lines of the things you might see in xsd.
  • Embedded XML: this is a special case of a typed literal. Where the type attribute has the value 'http://www.dehora.net/r/2004/07/type/xml' you can assume the element content is XML.
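To make the three forms (plus the resource case) concrete, here's a rough sketch of a value-element writer in Python - the helper is mine, there is no official R tooling:

```python
import xml.etree.ElementTree as ET

R_NS = "http://www.dehora.net/r/2004/07/"
XML_TYPE = R_NS + "type/xml"

def value_element(obj, type_uri=None, is_uri=False):
    """Build an R value element for one of the forms described above:
    a resource (uri attribute), raw text content, a typed literal,
    or an embedded-XML literal."""
    v = ET.Element("{%s}value" % R_NS)
    if is_uri:
        v.set("uri", obj)                 # resource-valued object
    elif type_uri == XML_TYPE:
        v.set("type", type_uri)
        v.append(ET.fromstring(obj))      # embedded XML: parse the content
    elif type_uri:
        v.set("type", type_uri)           # typed literal, e.g. an xsd type
        v.text = obj
    else:
        v.text = obj                      # plain literal: raw element content
    return v

ET.register_namespace("r", R_NS)
print(ET.tostring(value_element("22", "http://www.w3.org/2001/XMLSchema#integer")).decode())
```

The same dispatch in reverse is all a reader needs: uri attribute means resource, the magic xml type means parse the children, any other type means typed literal, and bare text means a plain literal.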

Descriptions and Contexts

Any graph element can carry an optional description element. A graph or a statement may also have a context element. Contexts are mildly controversial since they're not core RDF but a lot of practitioners, including myself, find them useful to do things like track where statements came from or to qualify statements in some way. Some people might use reification for this - but personally I find reification is broken and best avoided.


Quotation

It's a design goal, but I'm really not sure about this part - let's call it "experimental". A graph element may carry a 'quoted' attribute, which would mean that the graph in question may be referred to but its triples should not be asserted. This is definitely not in RDF, but computing logic languages tend to have a concept of quotation (again this is something people used to think you could use reification for). Like I said, I'm not sure about it.

RDF Collections

No support as of yet. I'm not sure how best to write down collections right now, but I'll try to ensure their support in the syntax will be backward compatible with this version.


Changes since RDFX

Some things have changed in R since RDFX last March. All URIs have been moved into attributes (they have no structure that would require them to be in element content). The literal constructs inside the value element were more verbose. Values did not support bnodes. R lives in a new namespace and has a version attribute added. Subject, property and value are no longer ordered in the schemata.

It's still ugly!

Yes it is remarkably ugly. If you use a default namespace you might alleviate some of that, but in truth the ugliness is driven by the URIs - it's hard to look at a lot of URIs in XML and not object to it aesthetically. Here's an example with the URIs and prefix elided:

        <context uri="..." />
        <subject uri="..."/>
        <property uri="..."/>
        <value uri="..."/>
        <context uri="..." />
        <subject bnode="..."/>
        <property uri="..."/>
        <value uri="..."/>
        <subject uri="..."/>
        <property uri="..."/>

The point of this, however, is that the document structure should be regular and clean enough to read and write - no striped syntax, no abbreviated forms, few surprises.

The schemata

An RNG schema is available here, and a WXS one generated by trang is available here. Here's the schema in rnc form:

  default namespace = "http://www.dehora.net/r/2004/07/"
  start = rdf
  rdf = element rdf {
      attribute version { "20040718" },
      graph*
  }
  graph = element graph { quoted?, description?, context?, statement* }
  description = element description { text }
  context = element context { uri }
  statement = element statement { context?, (subject & property & value) }
  subject = element subject { bnode | uri }
  property = element property { uri }
  value = valueURI | valueBNODE | valueTYP | valueTYPXML
  valueURI = element value { uri }
  valueBNODE = element value { bnode }
  valueTYP = element value {
      attribute type { xsd:anyURI }?,
      text
  }
  valueTYPXML = element value {
      attribute type { "http://www.dehora.net/r/2004/07/type/xml" },
      ANY*
  }
  ANY = element * {
      (attribute * { text }
       | text
       | ANY)*
  }
  quoted = attribute quoted { "yes" | "no" }
  uri = attribute uri { xsd:anyURI }
  bnode = attribute bnode { xsd:NMTOKEN }

July 17, 2004

Freak Atom occurrence

There's been no mail from the atom-syntax list in the last 90 minutes or so.

How odd.

July 16, 2004

RDF 101

A no-nonsense guide to Semantic Web specs for XML people - good piece by Stefano Mazzocchi. If you're new to RDF and OWL but are comfortable with XML, it comes highly recommended as Part I of a series. Some minor criticisms follow.

  • There are markup people out there who can live in the RDF world: Uche Ogbuji, Norm Walsh, Shelley Powers, Joshua Allen. Even me.
  • Stefano gives RDF reification too much credit - RDF reification is something of a mirror to the soul, as you can see whatever you want to see in it. However it doesn't bear much scrutiny. To paraphrase Wolfgang Pauli, it's so incoherent it's not even wrong.
  • N3 isn't RDF*. To say that Jena or CWM can do transformations between RDF/XML and N3 is a simplification. N3 is a more expressive but less well-defined language than RDF - yes, you can inscribe N3 in RDF form (sort of) but it won't mean anything without the extra semantics N3 adds. This is something like putting Python code with lambda expressions inside Java strings - yet without an eval() function in Java to evaluate them, they're just strings... Do remember too that the semantics of N3 is defined only by the cwm source code, which somewhat defeats the point of having a language with declarative semantics like N3 on the web in the first place. Stefano should take a look at Turtle rather than N3.
  • I don't believe the number one criticism of RDF is taxonomy centralization. I think it's a mix of the XML syntax and a lack of obvious usages. Stefano points out his issues with RDF/XML. In fairness the RDF community is coming around to the syntax issue. And I understand why people don't get what use RDF is beyond Distributed AI research; since it doesn't have a standard query language it's not obvious what to do with the stuff other than generate it. Having RDF is like having the Relational Data Model but without the RDBMS or SQL to motivate usage. **

But until RDQL hits the streets there is always OWL. Stefano nails it; OWL gives you something to do with your RDF. And OWL is a good piece of web tech - recently on atom-syntax Mark Nottingham wrote out a list of things a versioning policy for Atom needed to be able to deal with. Previously the discussion had been focused mostly on SOAP style mustIgnore and mustUnderstand attribute usage and clear prose. Then Ian Davis produced a truly fascinating way of handling the version compatibility requirements using OWL constructs, and one that deserves serious evaluation beyond Atom. At first glance it seems more powerful than the mU/mI approach Web Services developers will be familiar with.

* I have a tradition of putting that in red on this site, just so you wouldn't miss it.

** Hands up who thinks RDBMSes or SQL are not useful. Ok, now hands up who use an RDBMS because they like the Relational Data Model rather than SQL. Exactly.

July 15, 2004

No-one expects the language inquisition

Patrick Logan:

"Complexity begets complexity. Rigid languages are the foundation of a 'language priesthood' just as humongous mainframes were the foundation of a 'computing priesthood' before the personal computer."

July 12, 2004


Brian Foy thinks Blogs can be public databases:

"I already "blog for Google", which is the same thing as the old usenet practice of posting a post about some problem I encountered and how I solved it. These entries are not really for discussion, but more for the archives so that the next poor soul can find it. Randal Schwartz tells me this is how it was back in the day when he could read all of usenet in a half-hour."

Query language, anyone?

July 04, 2004

It's the interface, stupid!

I've spent a goodly amount of time recently surveying client side technology. There's a number of things coming up personally, and professionally at Propylon that will involve UI work other than web interfaces. Generally we're adopting XUL/Mozilla at work, but I've been playing with Swing, WinForms, SWT, OpenOffice, WxPython over the last couple of months as well trying to nail down some choices.

And then two of my colleagues, Praveg and Tommy, pointed me at DB Designer 4. It's an open source database design and modelling tool targeted primarily at MySQL. DB Designer is a beautiful application - visually elegant, clean, responsive, consistent. It's intuitive - it mostly does what you expect it to do. It also has an extremely good visual modelling space.

The last time I responded this positively to an application was probably IntelliJ IDEA or Mozilla mail. So given my current investigations, I had to go and get the source for this thing. And it's built on Delphi.

Which has served me a timely dope slap: it's the interface, stupid. Somewhere along the way I'd forgotten that toolkits come second to usability.

July 03, 2004

Refactoring to efficiency

Jonathan Sobel: "Efficiency comes from elegant solutions, not optimized programs. Optimization is just a few correct-preserving transformations away."

The problem that Java Isolates solve

Let's call it the "PHP Problem" (though the "LAMP Problem" would do as well). That is, Java and the JVM are not ideal for building applications which are fault tolerant, robust and will scale horizontally*. Threads, threadlocal storage, synchronization barriers, XA, classloaders and the security model are not the best mechanisms** we could have for building the apps we need to build, which are fault tolerant, robust and scale horizontally. The right mechanism is the Process. Pete Soper, one the folks working on Java Isolates calls the issues these mechanisms leave us with the "middleware blues":

The middleware blues are what I think of as composite dissatisfaction with class loader tricks and the constant tension between security, communication and weight (memory, cpu, etc). The tricks have gone so far that it's hard to maintain the view that Java is a strongly typed language: it can be almost impossibly hard to disambiguate object references in some cases! So I would have sought confirmation of the strong desire for a real solution in the form of a third abstraction: Java processes. It's now pretty clear that it will be easier to add this abstraction than to go on forever, stretching the class loader charter and chasing security and performance characteristics that become ever more expensive.

Pete explains what Isolates bring to the table:

JSR-121 is about fitting a process abstraction called "Isolates" into Java. Isolates provide a third building block that is designed in Java terms, leveraging the type safety guarantees of the language into isolation that does not mandate either the heavy weight or slow communication of a VM while offering strong and obvious security guarantees. Isolates offer an alternative to the next generation of class loading tricks and allow system designers to achieve satisfaction with middleware and special applications where real isolation is critical but resources are limited. Isolates also enable infrastructure upgrades over the long haul in an orderly and well thought out manner.

Isolates are slated for Java 1.6.

* Java != Jini
** I use the word 'mechanism' carefully; the right abstraction would be a service/resource

[sordid: amon tobin]

PHP and JSP: scale is a red herring

Russell Beattie on Friendster moving to PHP: "What parts of the Friendster decision am I missing?". Perhaps it's this. Java doesn't seem to be designed with multi-user environments in mind. LAMP solutions do.

[scissor sisters: laura]