Thanks for your suggestion. I need to describe our application architecture and our requirements more clearly. The details follow (some of them have already been discussed, but I repeat them here).
1) What is the architecture of our applications now? (not using k8s)
1.1) application architecture:
Our application is composed of multiple processes. One process maintains a cache implemented with shared memory and keeps it consistent with the caches on other machines. All the other processes are HTTP RESTful services that act as query services for other systems (each query does heavy computation over the data in the cache). When the business logic changes, we need to add a new process or change an existing one.
1.2) cluster architecture:
The application cluster is composed of multiple physical machines, and the same set of processes is deployed on each machine. When we want to scale up, we must add a new physical machine manually.
2) What is our purpose in using k8s?
2.1) Each query service can be deployed and evolved independently.
So each service should be deployed as its own pod; we cannot combine all services into one pod.
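For example, each query service could get its own Deployment so it can be rolled out and versioned independently. This is only a sketch: the name `query-service-a`, the image, and the port are placeholders, and the `apiVersion` may differ depending on the cluster version.

```yaml
# Hypothetical Deployment for one query service (names and image are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: query-service-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: query-service-a
  template:
    metadata:
      labels:
        app: query-service-a
    spec:
      containers:
      - name: query-service-a
        image: registry.example.com/query-service-a:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "500m"   # a CPU request is needed later for CPU-based autoscaling
```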
2.2) Elastic-Scale:
Business-service elastic scaling: the application is composed of multiple business services that handle queries. The services' CPU loads fluctuate dynamically and not in sync: sometimes the load on services a, b, ... is very heavy, and at other times services x, y, ... are very heavy. So we want each service to scale up and down elastically according to its actual CPU load.
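Per-service CPU-based scaling is what a HorizontalPodAutoscaler does. A sketch, assuming each service has its own Deployment; the name and the thresholds here are placeholders:

```yaml
# Hypothetical HPA for one query service; min/max replicas and the
# CPU target are placeholders to be tuned per service.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: query-service-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: query-service-a
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # scale out when average CPU usage exceeds 70% of the request
```

Each service gets its own HPA, so services a, b, x, y scale independently of one another.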
Cache-process elastic scaling: business services can be spread to new nodes that have no cache, so the cache process should be spread to those new nodes correspondingly.
3) What is the solution, and what difficult problems arise when we use k8s?
At first, I wanted to deploy each query service as a pod and the cache-maintenance process as a pod ("cache pod" for short), but difficult problems arose.
3.1) When a query service scales up/down onto a new node, how do we scale the cache pod up/down together with it, while keeping only a single cache pod per node? An even harder problem is reclaiming the cache-maintenance process when it is no longer needed on a node.
At first I thought deploying the cache pod as a DaemonSet would be enough, but it seems it is not, because a DaemonSet cannot scale up/down dynamically: it runs on every (matching) node regardless of load.
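For reference, the DaemonSet I had in mind looked roughly like the sketch below (the image name is a placeholder). It does guarantee exactly one cache pod per node, but on every node, whether or not any query service is scheduled there:

```yaml
# Hypothetical DaemonSet for the cache-maintenance process.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cache-maintenance
spec:
  selector:
    matchLabels:
      app: cache-maintenance
  template:
    metadata:
      labels:
        app: cache-maintenance
    spec:
      containers:
      - name: cache
        image: registry.example.com/cache-maintenance:latest
```

A node selector or node labels could limit which nodes it lands on, but the DaemonSet itself would not follow the query services' load up and down.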
So I wondered whether there is any shared-memory technique that works between different pods. But I think I made a mistake: what we actually need is some pod-correlation mechanism supported by k8s.
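On both points, a rough sketch of one possible workaround (not an officially supported pattern, and the labels/images are placeholders): let the cache pod and the query pods share the node's IPC namespace (`hostIPC: true`) or mount the host's `/dev/shm` so they can attach the same shared-memory segment, and express the correlation with inter-pod affinity so a query pod is only scheduled onto a node that already runs a cache pod. Note that the `affinity` field syntax may require a newer cluster version than the one current at the time of writing.

```yaml
# Hypothetical pod-template fragment for a query service; assumes the
# cache pod carries the label app: cache-maintenance and also sets hostIPC.
spec:
  hostIPC: true                     # attach the same SysV shared-memory segment as the cache process
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache-maintenance
        topologyKey: kubernetes.io/hostname   # co-locate with a node that already runs a cache pod
  containers:
  - name: query-service-a
    image: registry.example.com/query-service-a:latest
```

This still leaves the harder half of the problem open: nothing here scales the cache pods out to a fresh node before a query pod needs it, or reclaims a cache pod when the last query pod leaves a node.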
On Thursday, December 22, 2016 at 10:57:56 AM UTC+8, Vishnu Kannan wrote: