The rise of the localhost module

One size does not fit all

The localhost module is a microservice implementation style aiming for the right size and the right performance.

This post is a work in progress.

  • what is called a “microservice” these days is a business object, and its implementation and unit of deployment is called a container
    • that sorts out the buzzwords
  • localhost modules are used inside containers
    • not to be confused with the blasphemous idea of a container the size of a VM, aka the “monolith”
    • localhost modules are remote from each other but all live inside a single container
    • I implement them as Linux services
  • the “localhost module” and the modules inside it are a physical architecture concept
    • one level above, that microservice implementation is a physical unit of deployment (aka a container) with one or more interfaces
    • thus a “microservice” is a “business object” on the level of the logical architecture,
      • and here we are implementing it
      • whether or not it has persistent data storage inside is entirely the container’s private business
        • most importantly, think before you commit to having SQL inside a container


Let us first clarify the recent (2022Q1) upheaval. Amazon (AWS) did not move from microservices to the “monolith”. They just learned what I and others have known for years: one-process microservice implementations do not work well in all situations and for all requirements. They simply achieved the required performance by moving the microservices in question into a localhost module: one web app plus localhost modules, all inside a single container.

Keep in mind that is/was a very specific AWS-hosted, heavy-duty video-monitoring app. Performance was the key requirement there and the reason for “(not) going back to the monolith”. But other microservice systems are not always chasing raw performance. Deployment, cost, and maintenance are for me the chief reasons I tend to do what I am calling the localhost module™: in essence, a container with multiple modules implemented as Linux services inside.


Design issues

My localhost module is not one executable in a container. It is deployed with internal, decoupled modules with clear-cut boundaries. The modules are kept remote from each other, which improves both resilience and resilience to change. (My multi-(Linux)-service architecture concept is NOT to be confused with earlier attempts at keeping hardware, software, and data close together in one place.)
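As a sketch of the “module as a Linux service” idea, one module could be described by a systemd unit like the one below. The unit name and binary path are hypothetical, and this assumes an environment where systemd is actually running (in a container, that requires a systemd-capable base image):

```ini
# /etc/systemd/system/module-a.service  (hypothetical name and path)
[Unit]
Description=Localhost module A
After=network.target

[Service]
ExecStart=/opt/modules/module-a
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

It would then be enabled with `systemctl enable --now module-a`, and systemd takes care of restarting the module if it fails.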

Of course, a localhost module can harbor (much) more than one module (plus the process where the API is implemented), each as a Linux service. It can also contain, for example, one or more services for persistent data storage (e.g. a Redis service). That also lowers the costs, since you are not paying for cloud provider services external to the standard container.

For communication between the front-end app and services, or between services, I might go as low-level as named pipes. Or it might be feasible to apply the one-language/one-platform principle and stick to .NET Core and gRPC. In essence, whatever you can do on a Linux VM you can do in a localhost module™. Just do not go overboard.
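To make the named-pipe option concrete, here is a minimal sketch of two local “modules” exchanging a message over a FIFO. The paths and the message are illustrative, not from any real module:

```shell
#!/bin/sh
# Two stand-in "modules" talking over a named pipe (FIFO).
PIPE=/tmp/lm_demo.fifo
OUT=/tmp/lm_demo.out
rm -f "$PIPE" "$OUT"
mkfifo "$PIPE"

# "Module B": a background reader waiting on the pipe.
cat "$PIPE" > "$OUT" &
READER=$!

# "Module A": writes one message into the pipe.
# (opening the FIFO for writing blocks until a reader has it open)
printf 'ping from module A\n' > "$PIPE"

wait "$READER"
MSG=$(cat "$OUT")
echo "$MSG"          # prints: ping from module A
rm -f "$PIPE" "$OUT"
```

FIFOs are byte streams with no message framing, so anything beyond trivial messages usually deserves a small protocol on top, or a step up to Unix domain sockets or gRPC.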

A container itself does not have physical RAM (Random Access Memory) inside it. Containers are a lightweight form of virtualization that shares the underlying host system’s resources, including CPU, memory, and storage.

When you run a container, it utilizes the RAM allocated to the host system. The container runtime, such as Docker or Kubernetes, manages the allocation of resources to containers based on their requirements. You can specify resource limits for containers, including the amount of memory they can use, but the actual physical RAM is part of the host system.

In summary, the amount of RAM available to a container is determined by the host system’s resources and the configuration settings applied to the container.
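As a concrete illustration, such limits are declared on the container, not inside it. A hypothetical Compose fragment (service and image names are made up) might look like:

```yaml
# docker-compose.yml fragment (hypothetical names)
services:
  localhost-module:
    image: my-localhost-module:latest
    mem_limit: 512m   # cap on host RAM this container may use
    cpus: "1.5"       # cap on host CPU
```

The same caps can be passed to `docker run` via `--memory` and `--cpus`.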

The important logical architecture move is to lower the number of microservices in your logical system topology, and thereby improve overall performance and resilience. And, last but not least, to improve deployability and cost. Often, otherwise-free software packaged as cloud “services” is just that: free software packaged as containers by the vendor and sold to you.

Application

The whole localhost module idea is to not have one app per container.

By all means, be sure to first read the official Docker page on running multiple services inside a container. Its intro paragraph is somewhat overly cautious; keep in mind the whole picture of your microservice-heavy system.
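The pattern that page describes boils down to a wrapper entrypoint that starts each module and stops the container if one of them dies. A minimal sketch, with `sleep` standing in for the real service binaries (which would live at paths of your choosing):

```shell
#!/bin/sh
# Container entrypoint sketch: run two localhost modules side by side.
# `sleep` is a stand-in for the real module binaries.

sleep 1 &   # module A stand-in
PID_A=$!
sleep 1 &   # module B stand-in
PID_B=$!

# If any module exits with a failure, report it so the container stops.
STATUS=ok
wait "$PID_A" || STATUS=failed
wait "$PID_B" || STATUS=failed
echo "modules finished: $STATUS"
```

In a real image you would typically let a proper supervisor (systemd, s6, supervisord) own this job instead of a hand-rolled script.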

You might like to have everything as Python, Node.js, or Go code; here and right now (where I am) it is .NET Core and C#. One can develop a .NET Core Linux service more easily now than before. Yes, each with Kestrel running inside. So you will have your little constellation of remote modules running inside your container, talking remotely to each other. Isn’t that just perfect? (Hint: gRPC is faster than JSON over HTTP, and messaging is more resilient, but keep in mind the packing/unpacking performance costs.)

(one simple and useful text on Linux and services: https://schh.medium.com/linux-services-with-systemd-d0252a27ebce )

Performance is critical

If you are into Linux C shenanigans, here is how you talk to/from services in a standard way, native messaging included. Do not just skip over that. In a localhost module™, you can use C code in decoupled modules for extremely fast performance.

Also, if you happen to like pain, and in particular Windows container pain, here is Kestrel inside a Windows service.

Implementation possibilities are endless; as far as the architecture is concerned, just be sure to choose what is most feasible for the project.


Concept In the Wild

Are there any proponents of the same concept? Certainly. Look into Pulp localhost module, perhaps.

PS: I might also like the title “MiniService”. Although it looks like the “back to the good old VM” brigade under camouflage.
