Use asynchronous communication (for example, message-based communication) across internal microservices. It's highly advisable not to create long chains of synchronous HTTP calls across internal microservices, because that incorrect design will eventually become the main cause of outages. Instead, except for the front-end communications between the client applications and the first level of microservices or fine-grained API Gateways, it's recommended to use only asynchronous (message-based) communication once past the initial request/response cycle, across the internal microservices. Eventual consistency and event-driven architectures help to minimize ripple effects. These approaches enforce a higher level of microservice autonomy and therefore help prevent the problem noted here.
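To make the idea concrete, the following is a minimal sketch in Python that publishes an integration event to a message broker (RabbitMQ through the pika client) instead of calling downstream microservices synchronously over HTTP. The exchange name, the event shape, and the OrderPlaced event itself are illustrative assumptions, not part of any specific implementation.

```python
import json
import pika


def publish_order_placed(order_id: str, broker_host: str = "localhost") -> None:
    """Publish an integration event instead of calling downstream services over HTTP."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=broker_host))
    channel = connection.channel()
    # Fanout exchange: any interested microservice (billing, shipping, ...) can subscribe.
    channel.exchange_declare(exchange="order_events", exchange_type="fanout")
    event = {"event": "OrderPlaced", "order_id": order_id}  # hypothetical event payload
    channel.basic_publish(exchange="order_events", routing_key="", body=json.dumps(event))
    connection.close()
```

Downstream microservices subscribe to the exchange and react on their own schedule, so the upstream service does not depend on their availability at request time.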
Use retries with exponential backoff. This technique helps to handle short and intermittent failures by retrying a call a limited number of times, in case the service was unavailable only briefly. This might occur due to intermittent network issues or when a microservice/container is moved to a different node in a cluster. However, if retries are not designed properly with circuit breakers, they can aggravate the ripple effects, ultimately even causing a Denial of Service (DoS).
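As an illustration only, the following Python sketch (using the requests library; the retry count, base delay, and jitter values are arbitrary choices) shows retries with exponential backoff plus jitter. A production system would typically use an existing resilience library rather than hand-rolled code.

```python
import random
import time

import requests


def get_with_backoff(url: str, max_retries: int = 5, base_delay: float = 0.5) -> requests.Response:
    """Retry a GET call with exponential backoff plus jitter for transient failures."""
    for attempt in range(max_retries):
        try:
            response = requests.get(url, timeout=5)
            if response.status_code < 500:
                return response  # success, or a non-transient client error; don't retry
        except requests.exceptions.RequestException:
            pass  # network-level error; treat as transient and retry
        # Exponential backoff: 0.5s, 1s, 2s, 4s, ... plus random jitter to avoid retry storms.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    raise RuntimeError(f"Giving up on {url} after {max_retries} attempts")
```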
Work around network timeouts. In general, clients should be designed not to block indefinitely and
to always use timeouts when waiting for a response. Using timeouts ensures that resources are never
tied up indefinitely.
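For example, with the Python requests library a call can be bounded by explicit connect and read timeouts (the URL and the timeout values below are illustrative assumptions):

```python
import requests

try:
    # (connect timeout, read timeout) in seconds; without a timeout the call could block indefinitely.
    response = requests.get("http://catalog-service/api/items", timeout=(2, 5))
    response.raise_for_status()
except requests.exceptions.Timeout:
    # Fail fast and release the calling thread/connection instead of waiting forever.
    print("Catalog service did not respond in time; returning a fallback result.")
except requests.exceptions.RequestException as exc:
    print(f"Call failed: {exc}")
```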
Use the Circuit Breaker pattern. In this approach, the client process tracks the number of failed
requests. If the error rate exceeds a configured limit, a “circuit breaker” trips so
that further attempts
fail immediately. (If a large number of requests are failing, that suggests the service is unavailable and
that sending requests is pointless.) After a timeout period, the client should try again and, if the new
requests are successful, close the circuit breaker.
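The following is a minimal, single-threaded sketch of that logic in Python; it is an illustration under simplifying assumptions (exceptions as the failure signal, fixed thresholds, no half-open request limiting, no thread safety), whereas a real system would normally rely on an existing resilience library.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: open after repeated failures, probe again after a cool-down."""

    def __init__(self, failure_threshold: int = 5, reset_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failure_count = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        # While the circuit is open, fail fast until the cool-down period has elapsed.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("Circuit is open; failing fast")
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip (or re-trip) the breaker
            raise
        # A successful call closes the circuit and resets the failure count.
        self.failure_count = 0
        self.opened_at = None
        return result
```

The breaker can wrap any callable, for example breaker.call(requests.get, url, timeout=5); once it is open, callers fail immediately instead of piling additional requests onto a service that is already struggling.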