The Robustness Principle and internal APIs

RFC 761, section 2.10:

2.10. Robustness Principle
  TCP implementations should follow a general principle of robustness:
  be conservative in what you do, be liberal in what you accept from
  others.

RFC 1122 elaborates in section 1.2.2:

  1.2.2  Robustness Principle
         At every layer of the protocols, there is a general rule whose
         application can lead to enormous benefits in robustness and
         interoperability [IP:1]:
                "Be liberal in what you accept, and
                 conservative in what you send"
         Software should be written to deal with every conceivable
         error, no matter how unlikely; sooner or later a packet will
         come in with that particular combination of errors and
         attributes, and unless the software is prepared, chaos can
         ensue.

For anything expected to interoperate “in the wild” with other implementations of a given standard or API, this approach is optimal: it is both a competitive advantage (how successful would a browser that accepted only valid HTML have been in a world with Netscape 1.0?) and better for user experience (doing the best possible with whatever arrives maximizes the chance that something works).

However, when an API is internal to a component, department, or company, it may well be better for implementations to fail fast, rejecting invalid or malformed requests with informative error messages rather than attempting to proceed.
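As a minimal sketch of the fail-fast style, consider a hypothetical internal request handler (the endpoint name and required fields here are illustrative, not from any real service). Instead of quietly tolerating a malformed request, it rejects it immediately with a message that tells the caller exactly what to fix:

```python
def handle_create_user(request: dict) -> dict:
    """Fail fast: reject a malformed internal request with a precise
    error instead of guessing at the caller's intent."""
    # Hypothetical required fields for this illustrative endpoint.
    missing = [field for field in ("name", "email") if field not in request]
    if missing:
        # An informative message names the exact fields at fault.
        raise ValueError(
            f"create_user: missing required field(s): {', '.join(missing)}"
        )
    if "@" not in request["email"]:
        raise ValueError(f"create_user: malformed email: {request['email']!r}")
    return {"status": "created", "name": request["name"]}
```

A liberal implementation might instead default the missing fields and carry on; the point of the strict version is that the bug is surfaced during integration testing, at the caller, where it is cheapest to fix.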

Within an organization the chances of integration testing catching an error before it affects user experience are much higher, and failing immediately reduces the chance that future revisions of the software will need to be “bug compatible” with easily avoided problems. Muddling along with bad input is not a competitive advantage in a world with no competitors. The entire system is more robust if each side of every API takes care to produce only valid requests and responses.

For example, if a company has standardized on UTF-8 as a wire format for text, it is probably best if all new implementations of services validate that their input is correct UTF-8 at every edge and refuse to process anything invalid. Otherwise some system will eventually have to guess the correct encoding in order to serve valid data that came in through an API that didn’t validate and must go out through an API that does.
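Validating at the edge can be as simple as decoding strictly and refusing anything that fails. A sketch (the function name is illustrative; Python’s `bytes.decode` is strict by default and raises `UnicodeDecodeError` on invalid UTF-8):

```python
def receive_text(raw: bytes) -> str:
    """Validate at the edge: decode strictly and reject invalid UTF-8
    immediately, rather than passing bad bytes deeper into the system."""
    try:
        # Strict decoding: any invalid byte sequence raises, so nothing
        # downstream ever sees text that is not genuinely UTF-8.
        return raw.decode("utf-8")
    except UnicodeDecodeError as e:
        raise ValueError(
            f"rejecting request: body is not valid UTF-8 at byte {e.start}"
        ) from e
```

The liberal alternative, `raw.decode("utf-8", errors="replace")`, never fails but silently substitutes U+FFFD for the bad bytes, which is exactly how corrupted data ends up flowing between internal services until some later system has to guess what the original encoding was.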

This only applies, of course, when both sides of an integration are new enough to be tested together, but that describes a lot of new development within an organization: a new capability is added to an entire “stack”, resulting in API revisions at each layer.

It’s dramatically easier to catch problems at the first opportunity, and to require the source of the incorrectness to fix it before proceeding, than to end up years later with piles of special compatibility hacks in which an API is versioned by each client’s foibles rather than by design.

Internal APIs should be designed to be as easy as possible to use correctly — but once designed they should only work when used correctly.