com.spotify.docker.client.DockerRequestException: Request error: DELETE unix://localhost:80/v1.12/co


Chiranga Alwis

Sep 15, 2015, 1:33:33 PM9/15/15
to fabric8

I am working on a Java application which deploys web artifacts in Apache Tomcat Docker Containers with the use of Google Kubernetes. I am using https://github.com/spotify/docker-client to carry out Docker Image and Container handling activities, and https://github.com/fabric8io/fabric8/tree/master/components/kubernetes-api for Kubernetes-related functionality.

In this application, I have added functionality which enables the user to remove a deployed web artifact.

When removing, I:

  1. delete the Kubernetes replication controller which I use to generate the desired number of pod replicas

  2. separately delete the replica pods (as pods are not deleted automatically when the replication controller is deleted via the corresponding method in the Java API)

  3. delete the corresponding Service created

  4. delete the Docker Containers corresponding to the deleted pods

  5. finally, remove the Docker Image used for the deployment

The following code shows the removal functionality I have implemented:

public boolean remove(String tenant, String appName) throws WebArtifactHandlerException {
    String componentName = WebArtifactHandlerHelper.generateKubernetesComponentIdentifier(tenant, appName);
    final int singleImageIndex = 0;
    try {
        if (replicationControllerHandler.getReplicationController(componentName) != null) {
            String dockerImage = replicationControllerHandler.getReplicationController(componentName).getSpec()
                    .getTemplate().getSpec().getContainers().get(singleImageIndex).getImage();
            List<String> containerIds = containerHandler.getRunningContainerIdsByImage(dockerImage);
            replicationControllerHandler.deleteReplicationController(componentName);
            podHandler.deleteReplicaPods(
                    replicationControllerHandler.getReplicationController(componentName), tenant, appName);
            serviceHandler.deleteService(componentName);
            Thread.sleep(KUBERNETES_COMPONENT_REMOVAL_DELAY_IN_MILLISECONDS);
            containerHandler.deleteContainers(containerIds);
            imageBuilder.removeImage(tenant, appName, WebArtifactHandlerHelper.getDockerImageVersion(dockerImage));
            return true;
        } else {
            return false;
        }
    } catch (Exception exception) {
        String message = String.format("Failed to remove web artifact[artifact]: %s", componentName);
        LOG.error(message, exception);
        throw new WebArtifactHandlerException(message, exception);
    }
}

Implementation of the Docker Container deletion functionality is as follows:
public void deleteContainers(List<String> containerIds) throws WebArtifactHandlerException {
    try {
        for (String containerId : containerIds) {
            dockerClient.removeContainer(containerId);
            Thread.sleep(OPERATION_DELAY_IN_MILLISECONDS);
        }
    } catch (Exception exception) {
        String message = "Could not delete the Docker Containers.";
        LOG.error(message, exception);
        throw new WebArtifactHandlerException(message, exception);
    }
}

In the above case, although the desired functionality usually executes without any issue, at certain instances I get the following exception.
Sep 11, 2015 3:57:28 PM org.apache.poc.webartifact.WebArtifactHandler remove
SEVERE: Failed to remove web artifact[artifact]: app-wso2-com
org.apache.poc.miscellaneous.exceptions.WebArtifactHandlerException: Could not delete the Docker Containers.
    at org.apache.poc.docker.JavaWebArtifactContainerHandler.deleteContainers(JavaWebArtifactContainerHandler.java:80)
    at org.apache.poc.webartifact.WebArtifactHandler.remove(WebArtifactHandler.java:206)
    at org.apache.poc.Executor.process(Executor.java:222)
    at org.apache.poc.Executor.main(Executor.java:46)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: com.spotify.docker.client.DockerRequestException: Request error: DELETE unix://localhost:80/v1.12/containers/af05916d2bddf73dcf8bf41c6ea7f5f3b859c90b97447a8248ffa7b5b3968691: 409
    at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1061)
    at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1021)
    at com.spotify.docker.client.DefaultDockerClient.removeContainer(DefaultDockerClient.java:544)
    at com.spotify.docker.client.DefaultDockerClient.removeContainer(DefaultDockerClient.java:535)
    at org.wso2.carbon6.poc.docker.JavaWebArtifactContainerHandler.deleteContainers(JavaWebArtifactContainerHandler.java:74)
    ... 8 more
Caused by: com.spotify.docker.client.shaded.javax.ws.rs.ClientErrorException: HTTP 409 Conflict
    at org.glassfish.jersey.client.JerseyInvocation.createExceptionForFamily(JerseyInvocation.java:991)
    at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:975)
    at org.glassfish.jersey.client.JerseyInvocation.translate(JerseyInvocation.java:795)
    at org.glassfish.jersey.client.JerseyInvocation.access$500(JerseyInvocation.java:91)
    at org.glassfish.jersey.client.JerseyInvocation$5.completed(JerseyInvocation.java:756)
    at org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:189)
    at org.glassfish.jersey.client.ClientRuntime.access$300(ClientRuntime.java:74)
    at org.glassfish.jersey.client.ClientRuntime$1.run(ClientRuntime.java:171)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:320)
    at org.glassfish.jersey.client.ClientRuntime$2.run(ClientRuntime.java:201)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

I searched a large number of sources for help with this, but I still haven't been able to avoid it in every instance in which I execute this functionality.

At the beginning I got this issue more often than I do now, but allowing the executing thread to sleep after deleting each Docker Container, and before deleting any Docker Containers, gradually reduced the number of instances in which it occurs.

Is sleeping the thread the ultimate solution to this issue, or is there some other reason for it to pop up, and a solution that can help me avoid this exception? Any help is greatly appreciated.
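Instead of fixed sleeps, I have been considering retrying the removal whenever the daemon answers with the 409 (i.e. while it is presumably still tearing the container down). A rough sketch of the idea; RetryUtil is not part of my codebase, just a hypothetical helper:

```java
import java.util.concurrent.Callable;

public final class RetryUtil {
    private RetryUtil() { }

    /** Retries an action that can fail transiently (e.g. a DELETE answered
     *  with HTTP 409 while Docker is still tearing the container down),
     *  sleeping between attempts with a capped exponential backoff. */
    public static <T> T retryOnFailure(Callable<T> action, int maxAttempts,
                                       long initialDelayMillis) throws Exception {
        long delay = initialDelayMillis;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception exception) {
                last = exception;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay = Math.min(delay * 2, 5000L); // cap the backoff at 5s
                }
            }
        }
        throw last; // all attempts exhausted: rethrow the last failure
    }
}
```

The container-deletion loop would then wrap each removeContainer call in retryOnFailure rather than sleeping unconditionally after every container.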

James Strachan

Sep 16, 2015, 3:17:39 AM9/16/15
to Chiranga Alwis, fabric8
with kubernetes you don't need to delete container instances explicitly - kubernetes takes care of that for you; just create a new image and a new RC and scale it up - then scale down the old RC (you can delete it once it's empty). i.e. apart from building a docker image, all you need is the kubernetes client & kubernetes API.

You could also look at using Kubernetes's Rolling Updates to handle migrating from version 1 to version 2 of your docker images.

--
You received this message because you are subscribed to the Google Groups "fabric8" group.
To unsubscribe from this group and stop receiving emails from it, send an email to fabric8+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



--
James
-------
Red Hat

Twitter: @jstrachan
Email: james.s...@gmail.com
hawtio: http://hawt.io/

Open Source DevOps and Integration

Chiranga Alwis

Sep 16, 2015, 4:52:15 AM9/16/15
to fabric8
Hi James, yes, as you said, when dealing directly with the Kubernetes command-line API I too observed that not only are all the pods corresponding to the deleted replication controller deleted, but the underlying Docker containers are erased as well. Unfortunately, I found that when I delete a replication controller through the Java Kubernetes client (mentioned before), the corresponding pods still seem to exist, and even when I then delete those pods separately, the underlying Docker containers are only stopped, not removed.
Had these functions behaved as expected, I surely wouldn't have had to write some of the code samples I mentioned.
...

James Strachan

Sep 16, 2015, 4:56:42 AM9/16/15
to Chiranga Alwis, fabric8
On 16 September 2015 at 09:52, Chiranga Alwis <chiran...@gmail.com> wrote:
Hi James, yes as you said when you are directly dealing with the Kubernetes command line API,

or REST API (or java kubernetes-client)

 
I also experienced the fact that not only all the pods corresponding to the deleted replication controller get deleted but even the underlying Docker containers get erased off, too.

Yes. Deleting a pod deletes the docker containers for the pod.

 
But unfortunately I found out that when I deleted a replication controller through the Java Kubernetes client (I have mentioned before), the corresponding pods seem to still exist

The KubernetesClient can handle that for you. Or you can scale() the RC down to zero; then, when it's zero, delete it. Or delete the RC and then delete the pods.
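The scale-down-then-delete ordering could be sketched like this; ReplicationControllerOps and its methods are hypothetical stand-ins for the real kubernetes-client calls, kept abstract so only the ordering matters:

```java
/** Hypothetical stand-in for the real KubernetesClient operations. */
interface ReplicationControllerOps {
    void scaleTo(String rcName, int replicas); // e.g. the client's scale()
    int currentReplicas(String rcName);        // observed replica/pod count
    void delete(String rcName);                // delete the RC object itself
}

final class SafeRcRemoval {
    /** Scale the RC to zero, poll until no replicas remain, then delete it,
     *  so no pods are left behind when the RC disappears. */
    static void scaleDownAndDelete(ReplicationControllerOps ops, String rcName,
                                   int maxPolls, long pollMillis) throws InterruptedException {
        ops.scaleTo(rcName, 0);
        int polls = 0;
        while (ops.currentReplicas(rcName) > 0) {
            if (++polls > maxPolls) {
                throw new IllegalStateException("RC " + rcName + " still has replicas after scaling down");
            }
            Thread.sleep(pollMillis);
        }
        ops.delete(rcName); // safe now: the RC is empty
    }
}
```

The point is that deletion only happens once the observed replica count has actually reached zero, rather than racing the controller.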
 

plus even at the point where I am deleting those pods separately, the corresponding underlying Docker containers are only stopped but not removed. 

Kubernetes should GC away stopped containers AFAIK

 
Had these functions take place as we expect surely I don't have to write some of the code samples which you have mentioned about. 


Generally with Kubernetes you should not be working with Docker directly; let kubernetes do the docker stuff

 
