The 6 non-negotiable REST architecture constraints

While REST-centric design isn't necessarily hard, there are some non-negotiable rules when it comes to resource provisioning. Here are six all architects should know.

The term "architectural constraints" refers to the characteristics that an architecture must have to fit the definition of a particular model, such as REST. By adhering to the specific, underlying rules that form the foundation for these architecture constraints, it becomes much easier to understand exactly what makes something "RESTful" -- as well as avoid the headache-inducing problems those new to this architectural style often face.

Unfortunately, many architects continue to allow practices that violate REST principles, yet still believe they've mastered RESTful design. However, they would be well-served to learn that there is no way around the basic requirements that define this architecture model.

Let's examine the six fundamental REST-based architecture constraints everyone should use to guide a REST-based implementation, including why they are so important and the tactical development and design practices they entail.

Uniform interface

The first REST architecture constraint we'll examine is the uniform interface constraint. In a RESTful design, it's helpful to conceptualize and classify individual services as application resources that expose their own individual client interface. It's important that these resources are small enough in terms of scope and data volume that a single interface can easily handle them. Finally, REST demands that the interface mechanisms these resources use to expose their capabilities should behave in a manner consistent with any other interfaces that expose related services.

This is a fairly easy constraint to meet, provided that architects make the effort to treat these resources as "nouns" -- in other words, unique entities that accept input and interpret incoming calls as instructions to perform a certain operation (the "verbs"). This is a foundational principle of object-oriented programming: treat services as objects that take orders. An easy way to remember this is to think in terms of sending actions to objects, not objects to actions.
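To make the noun-and-verb idea concrete, here is a minimal sketch in Python, assuming Flask as the framework and an /orders resource invented purely for illustration: each URI names a resource, and the HTTP method supplies the action.

```python
# Minimal sketch of a uniform, resource-oriented interface.
# Flask and the /orders resource are illustrative assumptions, not prescriptions.
from flask import Flask, jsonify, request

app = Flask(__name__)
orders = {}  # in-memory store, for illustration only

@app.get("/orders/<int:order_id>")
def get_order(order_id):
    # Send the action (GET) to the object (/orders/42) -- not /getOrder?id=42.
    return jsonify(orders.get(order_id, {}))

@app.post("/orders")
def create_order():
    order = request.get_json()
    order_id = len(orders) + 1
    orders[order_id] = order
    return jsonify({"id": order_id}), 201
```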

Note that it's also important for architects to remain consistent in terms of resource definitions and API designs, so that the resources don't erroneously align and share data with the wrong API or client interface.

Client-server model

The next constraint to discuss is the client-server model, which demands that the only coupling between a RESTful resource and the clients that consume it is the interface itself. Both sides are free to evolve independently, provided the interface remains intact.

This loose coupling helps cut down on mismanaged dependency relationships that cause breakages and complicate update processes, which is critical when developers frequently create resources and clients in isolation. This also makes it easier for software teams to collaborate at the level where back-end development meets front-end design, since there is less risk that developers make a change that suddenly breaks or alters the interface's functionality.
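As a rough illustration of that loose coupling, the client below, written in Python with the requests library, depends only on the published interface (the base URL and resource path here are hypothetical); everything behind that interface can change without breaking the client.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical server

def fetch_order(order_id: int) -> dict:
    # The client knows only the resource URI and the representation format --
    # nothing about the server's storage, language or deployment topology.
    response = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    response.raise_for_status()
    return response.json()
```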

Stateless

Next, we'll examine the stateless constraint. Stateless behavior is a particularly critical characteristic of REST-based architectures. Statelessness means that the server provisioning application resources won't store information between requests, or enforce a required processing sequence for calls and requests. Each request's output should be determined solely by the contents of the request and the specified operation -- not by past behaviors regarding sequences of events or orders of operations.

Statelessness is the key to the scalability and resilience of services, because it means any instance of a resource fulfills a request the same way, independent of overarching processing schedules. However, stateless design can be complicated to achieve, especially when frequent requests carry significant amounts of state-related data. The most straightforward approach, if possible, is to avoid storing internal data between resource requests altogether, including the specific sequence of calls and requests.
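One way to picture this is a request that carries its entire context with it. In the sketch below (the token, endpoint and paging parameters are all hypothetical), the credentials and paging state travel on every call, so the server never has to remember anything between requests.

```python
import requests

def list_orders(token: str, page: int) -> dict:
    # Everything the server needs arrives with the request itself.
    response = requests.get(
        "https://api.example.com/orders",              # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},  # credentials on every call
        params={"page": page, "per_page": 50},         # paging state kept by the client
        timeout=5,
    )
    response.raise_for_status()
    return response.json()
```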

Cacheable

The constraint that seems to confuse architects the most is the cacheable requirement. Essentially, an architecture is cacheable if the responses a server sends can be stored in an intermediary and, ideally, abstracted repository. When needed, this repository can offer up those cached responses for reuse and guarantee -- for a specified period, at least -- that the behavior of those responses won't change.

Not everything needs to be cacheable to satisfy this constraint, but every response that can be easily and affordably cached should be identified as such. Additionally, the cache must be ready to serve responses to new requests any time it can safely shoulder burdensome workloads in the server's stead.

If the server responses and the resources they deliver aren't subject to scheduled updates or feature changes, it's prudent practice to include that information in the server's responses to the client. This way, dependent application systems can reliably fall back on the cache in case responses are delayed or the server goes down.
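In practice, that information usually rides along as cache metadata on the response. The sketch below (again assuming Flask, with an invented /catalog/products resource) marks a response as safe to reuse for an hour via the standard Cache-Control header.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/catalog/products")
def list_products():
    response = jsonify([{"id": 1, "name": "widget"}])  # illustrative payload
    # Clients and intermediaries may reuse this response for up to an hour,
    # including when the origin server is slow or unavailable.
    response.headers["Cache-Control"] = "public, max-age=3600"
    return response
```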

Layered system model

The fifth REST architecture constraint we'll examine is the layered system model, which says an application should be able to define resources by assigning them to layers of functionality, with each layer corresponding to a single, shared service capability. In some cases, those capabilities can be something simple, like load-balancing activities. In other cases, the layer might house a more complex process that requires multiple servers and software elements, such as big data processing.

As is the case for the cacheable constraint discussed earlier, it's not necessary to segment every single RESTful service into its own layer. Instead, simply focus on being able to fully support the layered model when necessary. Thankfully, the practice of implementing a layer-based model is relatively straightforward, provided the architecture is prepared to handle it.
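As a loose sketch of what layering can look like in code, the WSGI-style middleware below (the layer names and behavior are purely illustrative) wraps a logging layer around the resource itself; the client calls one interface and never knows how many layers sit behind it.

```python
def resource_app(environ, start_response):
    # Innermost layer: the resource itself.
    start_response("200 OK", [("Content-Type", "application/json")])
    return [b'{"status": "ok"}']

def logging_layer(app):
    # Stand-in for a dedicated layer such as logging, caching or load balancing.
    def wrapper(environ, start_response):
        print("request:", environ.get("PATH_INFO"))
        return app(environ, start_response)
    return wrapper

# Layers compose transparently; the caller still sees a single WSGI application.
application = logging_layer(resource_app)
```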

Code on demand

Code on demand is the final constraint, and the only one regarded as an optional practice. Code on demand says that, when responding to requests, RESTful resources should be prepared to provide code that gets executed on the client side, rather than the server side (or somewhere in between). Serving client-side code makes it possible to distribute work when server-side execution might cause failures or performance problems.

Putting code on demand into practice requires knowing a little bit about the code execution capabilities possessed by the clients that access a resource. To do this, the server must identify specific client-side capabilities to ensure that code will truly run as expected on that end. Typically, this is accomplished by using a general-purpose programming language that most of the associated application components will support and understand during a code exchange.
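A rough sketch of what that exchange could look like: the Flask endpoint below (the route and script are invented for illustration) returns a small piece of JavaScript, labeled with the appropriate media type so the client knows it has received runnable code rather than data.

```python
from flask import Flask, Response

app = Flask(__name__)

@app.get("/widgets/validator.js")
def widget_validator():
    # The server ships code; the client (typically a browser) executes it.
    script = "function validateWidget(w) { return typeof w.name === 'string'; }"
    return Response(script, mimetype="application/javascript")
```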
