Feasible Cloud Computing Architecture (and implementation)

A good analogy always helps

One queue, many serving bureaucrats in the back.

And I will use the standard bureaucratic service paradigm. Imagine you walk into some bureaucratic institution whose large hall has only ONE counter (gasp)!? Not a very nice scenario for us humans, since the hall is filled with hundreds of people.

Although, rest assured, a single queue feeding one or more counters is a faster system than many parallel queues, each served by its own counter.

Fortunately, after queuing up to deliver your request (aka message), you do not have to hang around, because the information you need will be delivered back to your home address through the post. That is possible because your (home) address was part of the request. So there is a “PUT” counter, and you “just” have to use it to hand over your “case papers” (aka the message to the bureaucrats somewhere behind it), then go back home and wait for the response to be “sent back” to you. In essence, a true fire-and-forget communication paradigm.
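To make the PUT counter concrete, here is a minimal sketch of the sender side, assuming Amazon SQS (see footnote 1) accessed through the boto3 Python library. The queue URLs, the ReplyTo attribute and the payload fields are illustrative assumptions, not prescriptions; the only point is that the request carries its own return address and the sender walks away.

```python
# A minimal sketch of the "PUT counter" side, assuming Amazon SQS (footnote 1)
# accessed through the boto3 Python library. The queue URLs, the "ReplyTo"
# attribute and the payload fields are illustrative assumptions only.
import json
import boto3

sqs = boto3.client("sqs")

REQUEST_QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/requests"  # hypothetical PUT counter
REPLY_QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/replies"     # hypothetical home address

def put_case(case_payload: dict) -> None:
    """Hand the case papers over the PUT counter, then go home."""
    sqs.send_message(
        QueueUrl=REQUEST_QUEUE_URL,
        MessageBody=json.dumps(case_payload),
        MessageAttributes={
            # the home address travels with the request, written on the envelope
            "ReplyTo": {"DataType": "String", "StringValue": REPLY_QUEUE_URL},
        },
    )

put_case({"case_id": "42", "request": "issue residence certificate"})
```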

Unknown to you, on the other side of the “counters room”, which receives the cases through the “PUT” counter(s), there is a FIFO queue at the “GET” counter, oriented towards the inner corridors of this large bureaucratic institution. There the bureaucrats queue up, and each of them takes the first “case” (message) waiting at the “GET” counter. She then goes back to some little stuffy cubicle to work on the case she got. When she finishes, Ms Bureaucrat posts the papers to your postal address, which you dutifully provided on your side of the crucial large envelope, as instructed.
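And here is a matching sketch of the GET side, under the same SQS/boto3 assumption: a worker (our bureaucrat) takes the first waiting case, works on it, and posts the result to the reply address found on the envelope. The process_case body and all names are hypothetical.

```python
# A sketch of the "GET counter" side, under the same SQS/boto3 assumption.
# Each worker takes the first waiting case, works on it, and posts the result
# to the reply address found on the envelope. process_case() is a stand-in.
import json
import boto3

sqs = boto3.client("sqs")
REQUEST_QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/requests"  # hypothetical

def process_case(case: dict) -> dict:
    # stand-in for the real work done back in the stuffy cubicle
    return {"case_id": case["case_id"], "status": "resolved"}

def work_forever() -> None:
    while True:
        resp = sqs.receive_message(
            QueueUrl=REQUEST_QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,              # long polling: wait at the counter
            MessageAttributeNames=["All"],
        )
        for msg in resp.get("Messages", []):
            case = json.loads(msg["Body"])
            result = process_case(case)
            reply_to = msg["MessageAttributes"]["ReplyTo"]["StringValue"]
            sqs.send_message(QueueUrl=reply_to, MessageBody=json.dumps(result))
            # only now does the case come off the counter for good
            sqs.delete_message(QueueUrl=REQUEST_QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```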

And this is the core of the trick. This is exactly how “everything works” in systems based on MQ, from the macro level down to the micro, i.e. application, level.

But how does this map to a solution for our servers and the services running on them? Keep in mind that I am trying to describe the most resilient and decoupled solution here. Even more important, we are at the same time solving inter-cloud communication.

Therefore we are very much bound by issues of flexibility, location and resilience of the actual infrastructure required to effectively implement inter-cloud messaging.

Inter Cloud Messaging Queues
Step Three: Inter Cloud Messaging Queues. From Middle to Back, and back.

Here is the change. In between the two clouds above I have added two message-queue-based services: one for sending to the back and one for sending to the front. Again, please keep in mind that this is still TA; it is the messaging infrastructure we are focusing on, just for this moment. This architecture of the MQ infrastructure gives the level of message traffic resilience, monitoring and robustness that is actually always required.
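As a sketch under the same SQS/boto3 assumption, the two added services could be as simple as two named queues, one per direction; the names are illustrative only.

```python
# A minimal sketch of the two added services, assuming Amazon SQS via boto3.
# One queue carries traffic from the front cloud to the back, the other from
# the back cloud to the front; the queue names are illustrative only.
import boto3

sqs = boto3.client("sqs")

front_to_back_url = sqs.create_queue(QueueName="front-to-back")["QueueUrl"]
back_to_front_url = sqs.create_queue(QueueName="back-to-front")["QueueUrl"]
```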

For example: if the messaging services are kept in separate (undisclosed) locations, either of the two clouds can be up or down; it does not matter. Messages in the queues will stay intact, ready to be served as soon as the cloud message consumer is “back” and its servers need them. Of course the same applies to any piece of the complex infrastructure that happens to exist between the data centers. The golden rule of messaging is: keep the message until the receiver has signaled that it received it OK.
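That golden rule is just an acknowledge-after-processing loop. A minimal sketch, again assuming SQS via boto3: the message is deleted only after the handler succeeds, so if the consuming cloud is down, or the handler fails, the message simply stays in the queue and is redelivered later. The handle function and the queue URL are assumptions.

```python
# A sketch of the golden rule, assuming SQS via boto3: a message is deleted
# only after handle() succeeds. If the consuming cloud is down, or handle()
# raises, the message stays in the queue and is redelivered once its
# visibility timeout expires. handle() and the queue URL are assumptions.
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/front-to-back"  # hypothetical

def handle(body: str) -> None:
    ...  # the real work done by the consuming cloud

def drain_once() -> None:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        try:
            handle(msg["Body"])
        except Exception:
            continue  # no acknowledgment: the message is kept and redelivered later
        # acknowledgment: only a successfully processed message leaves the queue
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```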

Even more Clouds

I suppose you are now (more than) happy with the level of MQ TA detail, so we can again “abstract away” all of this into the message services cloud. So here we go: yet another diagram, with yet another cloud. In ages past we had three-tier architectures; these days we have a three-cloud architecture. The similarity is artificial, of course. But the scalability and resilience are not.

 

MQS Cloud Actually
Step Four: MQS Cloud Established

 

This I might call one feasible and fully “Cloudified” solution. It is also a safe Hybrid Cloud solution (private: orange; public: red and cyan), with all the scalability, resilience and feasibility solved and built in. And security too, of course. Where is the system resilience here? The front cloud is elastic by its very definition, thus resilient and, above all, feasible.

The middle cloud (Middle/Cloud Ware?) is also almost infinitely scalable, performance-wise, and in the cost context too: the more customers you have, the more you pay. But crucially: not in advance. Large up-front payments are not necessary, for either the front or the middle cloud.

And please do not forget: both the front and the middle cloud here are hired by you. You do not own that infrastructure. You are not responsible for it. You just want the best SLA money can buy. There is no CAPEX, just the OPEX part of the cost.

Only the orange cloud is private and thus your responsibility to “do right”, as a proper private cloud. Yours to implement with your favorite and required security solution. Yours to physically locate and archive as your company's “compliance gods” dictate. These days it is best implemented by buying a data center in a container, which can very quickly start working as your private cloud, fully compliant and secure too.

Application Architecture

All the data in the front and middle clouds is transient. There is deliberately no state managed by the application owners (that is, you) in there. In this architecture, all the actual physical storage, and data, is safe and completely in your hands. Of course the queues in the MQ service do contain some data, but it is all highly dynamic and in transit: transient data. Just like some (very) safe containers with your goods traveling on a train that you do not need to own; the destination warehouse is yours, of course.
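A small sketch of that division of responsibility, assuming an SQS queue for the transit leg and a local SQLite file standing in for the durable storage inside your private (orange) cloud; table, field and queue names are made up for illustration. The transient copy in the queue is deleted only after the durable write inside your own walls.

```python
# A sketch of where state lives: the queue carries only transient, in-transit
# payloads; the durable record is written inside the private (orange) cloud,
# here modelled with a local SQLite file. All names are illustrative.
import json
import sqlite3
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/front-to-back"  # hypothetical

db = sqlite3.connect("orders.db")  # storage owned and located by you, the private cloud
db.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, body TEXT)")

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    order = json.loads(msg["Body"])
    # the durable record lands only here, behind your own walls
    db.execute("INSERT OR REPLACE INTO orders (id, body) VALUES (?, ?)", (order["id"], msg["Body"]))
    db.commit()
    # once persisted, the transient copy in the queue can go
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```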

Going forward we will describe the key concepts of the Application Architecture that is required, and feasible to implement, following the concepts and solutions described above.

Part two is still work in progress.

(HINT: yes, we will reveal the roles of Micro Services and explain this “Server-less Computing” hype)

 

 


1: for a real-life example please look into Amazon Simple Queue Service (Amazon SQS)
2: TA = Technical Architecture