Plugin pros and cons

Stephen O'Grady has a nice post about the Gnome Do plugin model. So I thought I'd write down some thoughts on the pros and cons of plugin software architectures. When I say plugins here, I'm overloading the word as a general concept to include things like Zope products, OSGi bundles, Eclipse/Jira/IDEA plugins, XPIs, Apache modules, and even JavaScript in the browser (yeah, that's a stretch - it's more a comment on how the modern browser fails as a platform for code on demand). So apologies in advance for the imprecision.

What's to like

Granularity. A plugin bundle has a nice modular granularity about it - I don't know how else to put it except to say that components/bundles feel "right-sized". For example take Spring Dynamic Modules (OSGi based) - bundles offer a better functional abstraction that can lead to strong modular cohesion and solid organisation of codebases, in comparison to implementation wiring, which imposes no real packaging or build constraints. They also seem to provide a more coherent basis for organising code than objects when considered against acceptance tests, BDD, requirements, or even simple things like release notes.
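To make the "right-sized" point concrete, here's a minimal sketch of an OSGi bundle manifest - the bundle and package names are invented, but the headers are the standard OSGi ones. The unit of granularity, what it exports and what it depends on, is right there in a few lines:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.spellcheck
Bundle-Version: 1.0.0
Export-Package: com.example.spellcheck.api
Import-Package: org.osgi.framework;version="[1.3,2.0)"
```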
Partial upgrades. Monolithic systems tend to require monolithic upgrades, or workarounds to the build process and version model to support partial upgrades. In critical operational environments that will put even minor fixes onto a high-ceremony release process. Plugins allow for surgical upgrades. They also reduce the cost of regression testing, drastically. Undeployment is straightforward as long as the plugin manages its related data properly (see "Data contracts and trust" below).
Concurrent engineering. This is related to the partial upgrades notion. A nice side effect of a plugin architecture is that development can support multiple parallel streams of work and along with that, multiple parallel release streams. Unless you've worked on a codebase that allows this it's hard to explain how effective it can be - it's the kind of stuff project and release managers dream about, but rarely if ever get to see. Concurrent engineering is probably the most effective process technique for managing failure risk in product design; that aside, the goal here is to be able to treat system S.1.0.0 as a version umbrella for a collection of plugins, P.1.0.0 ... Pn.1.0.0. Minor and feature upgrades can then be managed as a release configuration where one or two plugins are upgraded with the others left in place, resulting in S.1.1.0. This configuration can be done (almost) entirely through metadata. I should probably follow up with another post just about how that works.
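As a hypothetical sketch of the metadata involved (the format and names here are invented - a real system would use something like an Eclipse feature definition), the S.1.1.0 release is little more than a manifest pinning plugin versions:

```
system: S
version: 1.1.0
plugins:
  - P1: 1.1.0   # upgraded in this release
  - P2: 1.0.0   # unchanged, carried over from S.1.0.0
  - P3: 1.0.0   # unchanged, carried over from S.1.0.0
```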
Contracts. Good plugin systems force the platform to expose strong contracts to hosted services - this tends to shore up the software architecture in general.
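A minimal sketch of what such a contract might look like in Java - the interface and registry names here are invented, not from any particular plugin framework:

```java
import java.util.HashMap;
import java.util.Map;

// The platform exposes a narrow, explicit contract to hosted plugins...
interface Plugin {
    String id();
    String execute(String input);
}

// ...and a registry the host controls; plugins never see each other's
// internals, only what the contract spells out.
class PluginRegistry {
    private final Map<String, Plugin> plugins = new HashMap<>();

    void register(Plugin p) { plugins.put(p.id(), p); }

    String dispatch(String id, String input) {
        Plugin p = plugins.get(id);
        if (p == null) throw new IllegalArgumentException("no such plugin: " + id);
        return p.execute(input);
    }
}
```

A real registry would also handle versioning and lifecycle, but the point stands: everything a plugin can touch has to be spelled out, and that discipline leaks back into the rest of the architecture.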

Democratisation. Developers using your product or service can code to it without having to know much about your code internals. This allows innovation and independent evolution. In some cases it can be used to scale development efforts. This is especially important if you have a database centric architecture (extending domains driven by table designs is notoriously difficult and messy).
Configuration. This isn't cited much around plugins but it's very important. Configuration in a plugin architecture will tend to be split out along functional lines, avoiding systems that are functionally cohesive but exhibit no cohesion around configuration (big-ball-of-mud.conf); this sounds like a little thing but can complicate everything post green-bar - build, packaging, deployment, regression. For example, look at the way Apache 2's httpd.conf gets split out to individual files compared to the old Apache single-file approach.
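The Apache 2 convention illustrates the idea: a top-level file that does little more than include per-function fragments (the fragment names below are typical, not canonical):

```
# httpd.conf - the monolith reduced to a table of contents
Include conf.d/mpm.conf
Include conf.d/ssl.conf
Include conf.d/vhosts/*.conf
```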

They're cool. No, really. Complexity, incohesion and the tendency for software systems to move toward entropy are almost impossible to understand unless you're a software developer. As a result business people really like plugins. It's how a lot of the rest of the industrial world works and makes instant sense to them - a lot more than saying seemingly trivial feature X will take 3d to implement but will require a mass refactoring that will take 10d. Showing a stakeholder a web page with a list of deployed and available plugins makes them regard the software in a totally new light.

What's not to like

Abstraction and the whole "metaness" of it all. Plugins require a lot of extra abstraction - service provider and callback interfaces, manifest formats, registries, dependency chains, plugin lifecycles - all need to be defined. A lot of things that are implicit in monolithic architectures need to be made explicit and consistent, even in architectures that support dependency inversion techniques.
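As a rough illustration of how much has to become explicit (all names here are invented): even a toy plugin host ends up defining a lifecycle contract and tracking state a monolith never needed.

```java
import java.util.ArrayList;
import java.util.List;

// A lifecycle every plugin must honour...
interface LifecyclePlugin {
    void start();   // called when the host activates the plugin
    void stop();    // called before undeploy or upgrade
}

// ...and a host that drives it explicitly.
class PluginHost {
    private final List<LifecyclePlugin> active = new ArrayList<>();

    void deploy(LifecyclePlugin p) {
        p.start();
        active.add(p);
    }

    void shutdown() {
        // stop in reverse deployment order, mirroring dependency chains
        for (int i = active.size() - 1; i >= 0; i--) active.get(i).stop();
        active.clear();
    }

    int activeCount() { return active.size(); }
}
```

None of this does anything useful yet - which is exactly the cost being described: the machinery has to exist before the first plugin earns its keep.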
Platform/Plugin bitrot. Plugin A.1 and B.1 run on P.1. You upgrade to P.2 because you need a fix or a feature. But B.1 will not run on P.2. Or worse, you cannot upgrade to B.2 because A.1 won't run on P.2 - in the meantime the rest of the product ecosystem is moving to P.3 and you risk being stranded and/or unsupported on P.1. The latter tends to happen when the plugin itself becomes as important as the supporting platform. I saw this a lot with Zope2/Plone, which has a very sophisticated product plugin architecture (Plone itself is a Zope plugin) and to a (much lesser) degree I've seen it with Jira and Firefox. Arguably this is a kind of business model - waiting for people to pay you to upgrade the plugin.
Development and testing. Plugins have to be plugged into something, which means deployment, unless great care has been taken to abstract away the runtime.
Isolation. Plugins need to not interact in uncontrolled ways and avoid shared state - this is hard to do in shared environments like runtimes and virtual machines. I think this as much as anything is why OSGi is the future of Java plugin architectures. Unless Sun decide to ship Isolates in a future JDK, OSGi's the only proven game in town for classpath isolation. Java's classloader architecture doesn't support the kind of "multihoming" plugins need beyond trivial handler classes. The browser as a platform for JavaScript "plugins" (more accurately code-on-demand) is a mess in this regard - global variables, xhr hijacking and the kind of weird stuff Prototype does all need to go away. Even then that's only the structural/contract side - managing access to shared resources like memory or cycles or IO is much harder - witness how Google App Engine, a grandiose plugin system (really!?), restricts access to external resources.
Data contracts and trust. Plugins that generate data and then break the data contract on upgrade are a massive headache and imo are a bigger problem than API breakage. Arguably the market weeds these out - code that doesn't respect data can't be trusted - but for some people it can be too late.
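A hypothetical sketch of the defensive pattern (field names and the version history are invented): stamp persisted records with a schema version, so an upgraded plugin migrates old data instead of silently breaking the contract.

```java
import java.util.HashMap;
import java.util.Map;

class PluginData {
    // Migrate a persisted record to the current schema version.
    static Map<String, String> migrate(Map<String, String> record) {
        int v = Integer.parseInt(record.getOrDefault("schemaVersion", "1"));
        Map<String, String> out = new HashMap<>(record);
        if (v < 2) {
            // hypothetical v2 change: "owner" was renamed to "ownerId";
            // carry the old value forward rather than dropping it
            out.put("ownerId", out.remove("owner"));
            out.put("schemaVersion", "2");
        }
        return out;
    }
}
```

The migration chain is tedious to maintain, but it's what lets undeployment and upgrade stay surgical instead of becoming a data-recovery exercise.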



    Great overview, this hits many important points. One thing I've seen that falls on the down side is that decomposition into plugins logically leads over time to having MANY plugins (100s). It can sometimes be daunting to understand a system composed of 100s of small chunks instead of a dozen bigger chunks. Naming and organization can help a lot in this regard.

    A plugin architecture also tends to automatically lower the coupling within the system, yielding a more orthogonal design where separation of concerns becomes an obvious trait*.

    It also tends to degrade more gracefully. If a plugin experiences trouble it can disable itself, disable a conflicting module, or disable all modules, leaving features disabled but the program still running**.

    *Java 6 as a whole, with its 17,000 classes, is a good example of something that should have been designed this way from the ground up. Apparently Sun is finding it very hard to take the thing apart for the new module system of update 10 and sadly will never get close to the small kernel size of Flash/AIR or Silverlight.

    **As seen in Firefox and NetBeans.