Private methods, Texan APIs, standard interfaces
I don't test private methods. Being able to quickly create, move around, and change the functionality of private methods is vital to remaining agile while developing.
Sounds good. I've often promoted a private method to public so that I could test it. Michael Feathers at ObjectMentor has a nice way of putting this: when you start wanting to test private methods, that's your design genius telling you the object's interface needs to be adjusted. And that works fine, so long as you haven't left the object open to being called in a broken or non-intuitive sequence of calls that puts it into an undefined state.
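A common alternative to making a private method public for testing is to extract the logic into its own collaborator with a small public interface. This is a minimal sketch of that move; the names (LineParser, OrderService and so on) are illustrative, not from any real codebase:

```java
// Instead of exposing OrderService's private parsing method, the logic
// moves into a collaborator with its own small, testable public interface.
interface LineParser {
    int parseQuantity(String line);
}

class CsvLineParser implements LineParser {
    // Was a private method buried in a larger class; now testable directly.
    public int parseQuantity(String line) {
        String[] fields = line.split(",");
        return Integer.parseInt(fields[1].trim());
    }
}

class OrderService {
    private final LineParser parser;

    OrderService(LineParser parser) {
        this.parser = parser;
    }

    int quantityOf(String line) {
        return parser.parseQuantity(line);
    }
}

public class ExtractDemo {
    public static void main(String[] args) {
        OrderService svc = new OrderService(new CsvLineParser());
        System.out.println(svc.quantityOf("widget, 3"));
    }
}
```

The private method never has to become public on the original object; the new interface is where the tests go.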
There is a significant cost involved in changing the behaviour of your public interface: you have to make sure each caller can cope with the change. Sometimes you even have to write new tests for each caller, because you have introduced a new edge-case that wasn't there before.
Maybe we can do that on a LAN or within a piece of middleware. If we're writing for clients we don't own or can't influence (as is often the case with web services), fooling around with a public contract is a non-starter, as are expectations of synchronized upgrades across clients. When Amazon or eBay upgrade their sites, nobody expects, or would accept, having to upgrade their browser in kind. To take it a step further, we might have clients we not only don't own, but don't even know about (such as an RSS reader).
Martin Fowler has talked about published versus public interfaces in the past and has suggested that languages need to cater for this distinction. Published interfaces might just push you out of the API realm and into that of application protocols. But maybe this is moving away from programming languages altogether. For web services, perhaps thinking about interface design nudges you away from an RPC style towards doc/literal or a direct binding to the underlying protocol's methods.
There are two aspects to this: behaviour and signature. Too large or too volatile a signature places an (unnecessary?) burden of cost on clients and, as RESTafarians like Mark Baker argue, just won't scale - which is why widely adopted application and data protocols (HTTP, SMTP, SQL) have controlled, stable interfaces. Behaviour is a different matter again, but it's safe to say that if you are a client of an API whose behaviour is erratic across versions, it might be a good idea to find a new service provider or library.
But if we're staying on APIs, one option might be to stick with a uniform interface based solely around actions, and avoid the 'doVerbToNoun' compound idiom (sometimes called computerEnglisch) as far as is practical. Now, telling someone that any JavaBean only needs get() and set() instead of a series of getThis(), setThat() calls can produce skeptical responses, but there's a surprising amount of useful work you can get done with a handful of actions. After all, there are only so many things you can ask an object to do, but there is any number of objects to ask. It's a matter of striking the right balance: at one end you pay in inefficiency, indirection and sometimes a limited ability to express your ideas; at the other, you wind up with ten-gallon-hat APIs that have big learning curves, big surface areas and, likely as not, unintended and buggy call sequences.
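The get()/set() point can be sketched in a few lines: one name-keyed accessor pair standing in for a series of per-property methods. This is an illustrative toy, not the JavaBeans API:

```java
import java.util.HashMap;
import java.util.Map;

// A uniform accessor interface: two actions, any number of properties,
// instead of a getThis()/setThat() pair per property.
interface Uniform {
    Object get(String name);
    void set(String name, Object value);
}

class PropertyBag implements Uniform {
    private final Map<String, Object> props = new HashMap<>();

    public Object get(String name) {
        return props.get(name);
    }

    public void set(String name, Object value) {
        props.put(name, value);
    }
}

public class UniformDemo {
    public static void main(String[] args) {
        Uniform bean = new PropertyBag();
        bean.set("title", "Texan APIs");
        System.out.println(bean.get("title"));
        // prints Texan APIs
    }
}
```

The cost shows up exactly where the paragraph says: get() returns Object, so the client pays in casts and lost compile-time checking - the inefficiency and indirection end of the trade-off.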
For Java programming, plugins (something that Eclipse and IDEA have bullseyed) and service layer decorators at package and system boundaries can be a great help. Both patterns get plenty of use at work in Propylon - it's great to be able to grab a package with a controlled, well-sized interface and run with it, knowing that it won't take days to understand and its contract won't break you two releases later. The same reasoning applies in PropelX - there is one interface that every component in an XML pipeline shares, which is the same interface as the pipelines themselves. Standard, composable interfaces are a well-worn road to scalable systems.
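The one-interface-for-components-and-pipelines idea can be sketched as a composite: a stage transforms a document, and a pipeline is itself a stage, so pipelines nest inside other pipelines unchanged. The names here (Stage, Pipeline, a String standing in for an XML document) are assumptions for illustration, not PropelX's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Every component shares one interface...
interface Stage {
    String process(String doc);
}

// ...and a pipeline implements that same interface, so a whole pipeline
// can be dropped in anywhere a single component is expected.
class Pipeline implements Stage {
    private final List<Stage> stages = new ArrayList<>();

    Pipeline add(Stage s) {
        stages.add(s);
        return this;
    }

    public String process(String doc) {
        for (Stage s : stages) {
            doc = s.process(doc);
        }
        return doc;
    }
}

public class PipelineDemo {
    public static void main(String[] args) {
        Stage upper = doc -> doc.toUpperCase();
        Stage wrap = doc -> "<doc>" + doc + "</doc>";
        // An inner pipeline composes into an outer one like any component.
        Pipeline inner = new Pipeline().add(upper);
        Pipeline outer = new Pipeline().add(inner).add(wrap);
        System.out.println(outer.process("hello"));
        // prints <doc>HELLO</doc>
    }
}
```

Because the composite shares the component interface, callers never need to know whether they hold one stage or a nested chain of them - which is what makes the composition scale.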