THE IMPACT OF DATA COMPRESSION ON THE EFFICIENCY OF DIRECT INTERACTION BETWEEN SERVICES IN A MICROSERVICE ARCHITECTURE

Authors

  • O. Sypiahin, Master of Science (Information Security), Full Stack Engineer / Information Security Engineer, floLive, 21 Bar Kochva St., Bnei Brak, Tel Aviv District, Israel
  • O. Shvaikin, Salesforce Developer, VRP Consulting, 268 Bush Street #3836, San Francisco, CA 94104, USA
  • V. Lopukhovych, Master's degree, Senior Software Engineer (contractor), Disney Streaming, 3005 Carrington Mill Blvd, Morrisville, NC 27560, USA

DOI:

https://doi.org/10.36910/775.24153966.2025.83.10

Keywords:

microservices, proxyless architecture, data compression, GZIP, Snappy, cloud infrastructure, traffic optimization

Abstract

In modern microservice architectures, which are increasingly implemented in high-load, distributed environments
with dynamic scalability, there is a growing interest in models of direct interaction between individual services without the use
of traditional proxy solutions – so-called proxyless approaches. This interaction model reduces system response time, avoids
delays caused by intermediary routing, optimizes infrastructure costs, decreases the number of potential points of failure, and
ensures greater architectural flexibility, scalability, control over data flows, and compliance with cloud-native principles. In
this context, the implementation of data compression algorithms becomes particularly important as one of the key means of
optimizing information exchange between services. Compression significantly reduces the volume of transmitted data, lowers
network load, minimizes request processing latency, and helps reduce resource consumption during intensive inter-service
traffic.
This paper provides a theoretical analysis of the impact of data compression on the quality, performance, and
reliability of direct communication within a microservice architecture. The focus is on lossless algorithms such as GZIP and
Snappy, which are among the most widely used in cloud-native environments supporting REST and gRPC. The analysis
explores the specifics of their integration, the dependence of efficiency on data format and structure, the type of API requests
(single or batch), as well as network latency levels and computational overhead. The advantages of GZIP are highlighted for
high-load scenarios that require deep compression, while Snappy is preferred in cases with strict latency constraints and a
priority on speed. Potential limitations related to service compatibility, CPU overhead, and configuration flexibility when
managing compression parameters manually are also identified. The importance of configuration consistency on both ends of
service interaction is emphasized, including proper header encoding and support for the required formats.
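
As a minimal, illustrative sketch (not code from the paper), the following Go snippet shows one way to keep compression settings consistent on both ends of a gRPC call with grpc-go: the client requests gzip through a call option, and the same compressor must be registered on the server (importing the gzip encoding package there is sufficient). The service address, the plaintext transport, and the surrounding program layout are assumptions made only for the example.

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/encoding/gzip" // importing this package registers the "gzip" compressor
)

func main() {
	// Client side: request gzip for every outgoing call. grpc-go then sends the
	// "grpc-encoding: gzip" header. The server must have the same compressor
	// registered (a blank import of the same gzip package is enough there);
	// otherwise the call fails because the encoding is not supported.
	conn, err := grpc.Dial("orders.internal:50051", // hypothetical service address
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name)),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
	// ... create the generated client stub from conn and issue RPCs as usual.
}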
The conclusion is drawn that an adaptive and context-aware approach to selecting a compression algorithm –
considering the nature of the API, the payload structure, the interaction topology, processing priorities, and network
characteristics – is critically important to ensure stable, reliable, and efficient microservice operation within proxyless
architectures of modern cloud and hybrid platforms.
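
To make the adaptive, context-aware selection concrete, a small Go sketch is given below. The size threshold, the latency flag, and the heuristic itself are illustrative assumptions rather than parameters prescribed by the paper; the sketch uses the standard compress/gzip package together with github.com/golang/snappy, and the chosen encoding name would normally be propagated to the peer in the corresponding content-encoding or grpc-encoding header.

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"

	"github.com/golang/snappy"
)

// compress picks an algorithm from simple, local heuristics: Snappy when the
// call is latency-critical or the payload is small, GZIP when deep compression
// of a large payload justifies the extra CPU time. The threshold is an
// assumed cut-off for illustration, tuned per service in practice.
func compress(payload []byte, latencyCritical bool) (data []byte, encoding string, err error) {
	const gzipThreshold = 64 * 1024 // assumed cut-off, not a value from the paper

	if latencyCritical || len(payload) < gzipThreshold {
		return snappy.Encode(nil, payload), "snappy", nil
	}

	var buf bytes.Buffer
	zw, err := gzip.NewWriterLevel(&buf, gzip.BestCompression)
	if err != nil {
		return nil, "", err
	}
	if _, err := zw.Write(payload); err != nil {
		return nil, "", err
	}
	if err := zw.Close(); err != nil {
		return nil, "", err
	}
	return buf.Bytes(), "gzip", nil
}

func main() {
	// Hypothetical batch payload: a repetitive JSON body typical of inter-service traffic.
	payload := bytes.Repeat([]byte(`{"orderId":42,"status":"shipped"}`), 4096)

	data, enc, err := compress(payload, false)
	if err != nil {
		panic(err)
	}
	// The selected encoding name is what the receiving service would read from
	// the request headers in order to decompress the body correctly.
	fmt.Printf("encoding=%s original=%dB compressed=%dB\n", enc, len(payload), len(data))
}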

Published

2025-12-01

Issue

Section

Articles