InProcess for C#

falco...@gmail.com

May 31, 2016, 10:36:22 AM
to grpc.io
I've seen that there is InProcess support in the Java implementation, but I haven't found anything comparable in the C# implementation. Is it possible (planned, or as a workaround)? I would like to take a shortcut and spare the serialization -> TCP/IP -> deserialization time when a call stays in the same process. The same service will be offered for internal and external usage at the same time.

It seems to be possible to use a CallInvoker for the client, but I am not sure yet what it would take to build something like a LocalCallInvoker that operates on the same services registered with the Server class, especially since the ServerServiceDefinition.AddMethod methods already seem to wrap the handlers in some gRPC-bound classes. It looks like it would need a bit more abstraction to allow this. I'm also not sure whether this is the best way to go, so I'm hoping for some hints from you guys :)
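
To make it a bit more concrete, this is roughly what I was imagining (just a sketch of my own, nothing that exists in the library today; the handler map and AddUnaryHandler are made up by me, and only unary calls are covered):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Grpc.Core;

// Hypothetical "LocalCallInvoker": unary calls are dispatched to handlers that the
// caller registers itself, so it does NOT (and currently cannot) reuse the handlers
// already registered with the Server class. Messages are copied through the
// marshallers so neither side sees the other's mutable instance.
public class LocalCallInvoker : CallInvoker
{
    private readonly Dictionary<string, Func<object, object>> handlers =
        new Dictionary<string, Func<object, object>>();

    public void AddUnaryHandler<TRequest, TResponse>(
        Method<TRequest, TResponse> method, Func<TRequest, TResponse> handler)
        where TRequest : class where TResponse : class
    {
        handlers[method.FullName] = request => handler((TRequest)request);
    }

    public override TResponse BlockingUnaryCall<TRequest, TResponse>(
        Method<TRequest, TResponse> method, string host, CallOptions options, TRequest request)
    {
        // Copy request and response via the marshaller (serialize + deserialize, no TCP).
        var requestCopy = method.RequestMarshaller.Deserializer(
            method.RequestMarshaller.Serializer(request));
        var response = (TResponse)handlers[method.FullName](requestCopy);
        return method.ResponseMarshaller.Deserializer(
            method.ResponseMarshaller.Serializer(response));
    }

    public override AsyncUnaryCall<TResponse> AsyncUnaryCall<TRequest, TResponse>(
        Method<TRequest, TResponse> method, string host, CallOptions options, TRequest request)
    {
        var response = BlockingUnaryCall(method, host, options, request);
        return new AsyncUnaryCall<TResponse>(Task.FromResult(response),
            Task.FromResult(new Metadata()), () => Status.DefaultSuccess,
            () => new Metadata(), () => { });
    }

    // Streaming calls are left out of this sketch.
    public override AsyncServerStreamingCall<TResponse> AsyncServerStreamingCall<TRequest, TResponse>(
        Method<TRequest, TResponse> method, string host, CallOptions options, TRequest request)
    { throw new NotSupportedException(); }

    public override AsyncClientStreamingCall<TRequest, TResponse> AsyncClientStreamingCall<TRequest, TResponse>(
        Method<TRequest, TResponse> method, string host, CallOptions options)
    { throw new NotSupportedException(); }

    public override AsyncDuplexStreamingCall<TRequest, TResponse> AsyncDuplexStreamingCall<TRequest, TResponse>(
        Method<TRequest, TResponse> method, string host, CallOptions options)
    { throw new NotSupportedException(); }
}

A generated client could then be constructed through its CallInvoker constructor, e.g. new MyService.MyServiceClient(localInvoker). The part I don't see is how to reuse the handlers that MyService.BindService() already produces for the Server, since those get wrapped in internal classes.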

falco...@gmail.com

May 31, 2016, 10:59:24 AM
to grpc.io, falco...@gmail.com
I just stumbled upon pull request https://github.com/grpc/grpc/pull/5928 and saw that Jan mentioned that it's still not possible, but at least more is in place than before.

From my understanding, the client-channel coupling seems to be loose now, but the server seems to still be tightly coupled with the channel, right?

As written in the pull request, it seems to still be under discussion whether you want to integrate in-process support at all. My project would hugely benefit from this functionality due to the performance improvement it brings. (I'm using the same services for internal and external requests.)

Jan Tattermusch

May 31, 2016, 11:31:31 AM
to falco...@gmail.com, grpc.io
On Tue, May 31, 2016 at 7:59 AM, <falco...@gmail.com> wrote:
I just stumbled upon pull request https://github.com/grpc/grpc/pull/5928 and saw that Jan mentioned that it's still not possible, but at least more is in place than before.

From my understanding, the client-channel coupling seems to be loose now, but the server seems to still be tightly coupled with the channel, right?

Yes, you understand that correctly. I don't think this kind of "server-side coupling" really represents a problem, though, because some changes would need to be made on the server side anyway to make in-process possible (and a server-side call handler doesn't really depend on the "Server" class in any interesting way).

As written in the pull request, it seems to still be under discussion whether you want to integrate in-process support at all. My project would hugely benefit from this functionality due to the performance improvement it brings. (I'm using the same services for internal and external requests.)

The biggest problems with in-process:
1. C# protobufs are mutable and therefore passing the messages between client and server by reference is dangerous.
2. At least one of the possible approaches to implement in-process communication would be to entirely skip interaction with C core, but that can be highly problematic because we would basically need to reimplement the whole stack to ensure the in-process communication would behave the same as the regular communication (and we really don't want to do that).

One thing that could be done is to expose some options to make it possible to use a sockpair or unix domain sockets to communicate between a client and a server. That would still involve serialization and deserialization though.

Can you give some description of the purpose of your application and what seems to be the performance bottleneck for you? We definitely have plans to add more performance optimization for gRPC C#. (E.g. for large messages, there is currently too much copying happening but we have plans to improve that).
 


Benjamin Krämer

May 31, 2016, 12:03:01 PM
to Jan Tattermusch, grpc.io

> The biggest problems with in-process:
> 1. C# protobufs are mutable and therefore passing the messages between client and server by reference is dangerous.
I do understand that. But it should be faster to deep-clone the message than to (de-)serialize it, or to use serialization to clone it and at least spare the TCP round trip.

> 2. At least one of the possible approaches to implement in-process communication would be to entirely skip interaction with C core, but that can be highly problematic because we would basically need to reimplement the whole stack to ensure the in-process communication would behave the same as the regular communication (and we really don't want to do that).
>
> One thing that could be done is to expose some options to make it possible to use a sockpair or unix domain sockets to communicate between a client and a server. That would still involve serialization and deserialization though.
>
> Can you give some description of the purpose of your application and what seems to be the performance bottleneck for you? We definitely have plans to add more performance optimization for gRPC C#. (E.g. for large messages, there is currently too much copying happening but we have plans to improve that).
I did some tests and it seems that the overhead is around 1000x for smaller messages (averaged over a test with 10,000 messages, skipping the first message and therefore the connection setup time). This was localhost only, with server and client running in the same process.

Don't get me wrong, it's still fast enough and I will use it as it is right now. But we usually have a lot of communication happening. It's a middleware framework for Industry 4.0, connecting a lot of different devices, translating between protocols and doing some calculations. There are a lot of small messages and a couple of big ones. Most of the communication is not time critical, but it has to be done within a deadline of around 2 seconds (usually not a problem). Right now we are using proto2 and already have some hundreds of different messages defined. So gRPC is very welcome to join our technology stack and spare me from delegating messages myself.

Jan Tattermusch

May 31, 2016, 12:16:59 PM
to Benjamin Krämer, grpc.io
On Tue, May 31, 2016 at 9:02 AM, Benjamin Krämer <falco...@gmail.com> wrote:

> The biggest problems with in-process:
> 1. C# protobufs are mutable and therefore passing the messages between client and server by reference is dangerous.
I do understand that. But it should be faster to deep-clone the message than to (de-)serialize it, or to use serialization to clone it and at least spare the TCP round trip.

> 2. At least one of the possible approaches to implement in-process communication would be to entirely skip interaction with C core, but that can be highly problematic because we would basically need to reimplement the whole stack to ensure the in-process communication would behave the same as the regular communication (and we really don't want to do that).
>
> One thing that could be done is to expose some options to make it possible to use a sockpair or unix domain sockets to communicate between a client and a server. That would still involve serialization and deserialization though.
>
> Can you give some description of the purpose of your application and what seems to be the performance bottleneck for you? We definitely have plans to add more performance optimization for gRPC C#. (E.g. for large messages, there is currently too much copying happening but we have plans to improve that).
I did some tests and it seems that the overhead is around 1000x for smaller messages (averaged over a test with 10,000 messages, skipping the first message and therefore the connection setup time). This was localhost only, with server and client running in the same process.

Overhead 1000x compared to what? So you care mostly about throughput for smaller protobuf messages? (Btw, "smaller" can mean lots of things, could you be more specific?) What are the throughput numbers you are seeing? Are you on Windows or on Linux?

falco...@gmail.com

Jun 1, 2016, 4:08:56 AM
to grpc.io, falco...@gmail.com
I just did some tests with the Person message from the Addressbook example and another two test sets with the TestAllTypes message from the official unit test.

I did three test sets:
  1. The small message test: The first test set sends a Person message with only the name set to the service, where it gets the name field reversed and reassigned. Very simple.
  2. The server-side heavy work test: The second test set uses a completely filled TestAllTypes message embedded as Any in the signed.message field of my Configuration message (just ignore it, I only reused it from a project). It serializes the signed.message field to a byte string and saves it to signedSerialized.message, including a signature, the public key and the signature for the public key. The signing takes around 500ms since it's done on a hardware dongle. Afterwards it deletes the signed.message field since it's included in the signedSerialized field.
  3. The big message test: The third test set does the same as the second from the message-assigning perspective, but I commented the hardware signing part out to just test the serialization/deserialization/TCP time for a bigger message.
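
For reference, the measurement loop looked roughly like this (a simplified reconstruction, not my exact test code; client and ReverseName are placeholders for the generated stub):

// Rough sketch of the timing loop (Stopwatch is System.Diagnostics.Stopwatch).
var person = new Person { Name = "John Wayne" };

// First service call, which includes the channel/connection setup.
var sw = Stopwatch.StartNew();
client.ReverseName(person);
sw.Stop();
Console.WriteLine("First service call: {0:F4} ms", sw.Elapsed.TotalMilliseconds);

// Average over the following calls, i.e. skipping the first one.
const int iterations = 80000;
sw.Restart();
for (int i = 0; i < iterations; i++)
{
    client.ReverseName(person);
}
sw.Stop();
Console.WriteLine("Average over {0}: {1:F4} ms", iterations, sw.Elapsed.TotalMilliseconds / iterations);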

Those were the results, also showing the input and output messages:

Test set: Reverse name of Person (only name set)
================================================
Input: { "name": "John Wayne" }
Output: { "name": "enyaW nhoJ" }
================================================
#1
First plain call:             0,2694 ms
Plain method call (avg):      0,0002 ms
First service call:          43,1761 ms
Average over 80.000:          0,4029 ms

#2
First plain call:             0,2862 ms
Plain method call (avg):      0,0002 ms
First service call:          43,0379 ms
Average over 80.000:          0,4176 ms

#3
First plain call:             0,2851 ms
Plain method call (avg):      0,0003 ms
First service call:          43,6412 ms
Average over 80.000:          0,4151 ms

#4 (plain using serialization-copy)
First plain call:             1,2594 ms
Plain method call (avg):      0,0010 ms
First service call:          41,1755 ms
Average over 80.000:          0,4804 ms


Test set: Sign configuration with Any TestAllTypes (delay 500 ms)
=================================================================
Input: { "componentName": "UnitTestComp", "serviceName": "UnitTestServ", "signed": { "message": { "@type": "type.googleapis.com/protobuf_unittest.TestAllTypes", "@value": "CGQQsZmd7rldGP////8PIP///////////wEojwcw5b/xvM7OBT0XAAAAQcsE+3EfAQAATYX///9RDtAxjMX0//9dAABEQWEAAAAAAIA3QGgBcg50ZXN0CXdpdGgJdGFic3oEAQIDBJIBAggjmgECCAqiAQIIFKgBAbABBbgBCdIBAgg2+gEDZMgBggIMsZmd7rldy4nsj/cjigIGAP////8PkgILAP///////////wGaAgSPB/YBogIO5b/xvM7OBeS/8bzOzgWqAggXAAAAKgAAALICEMsE+3EfAQAAsUzHnesCAAC6AgiF////QQEAAMICEA7QMYzF9P//scyR10wnAADKAggAAERBw/XIQdICEAAAAAAAgDdAZmZmZmZmRUDaAgIBAOICDnRlc3QJd2l0aAl0YWJz4gIPSnVzdEFub3RoZXJUZXN06gIEAQIDBOoCAIIDAggjggMCCDeKAwIICooDAggUkgMCCBSSAwIICpoDAgECogMCBQaqAwIJB7IDAgg2sgMCCAu6AwQIFhBkugMFCCEQyAHCAw4IsZmd7rldELGZne65XcIDDgjLieyP9yMQy4nsj/cjygMGCP////8PygMGEP////8P0gMLCP///////////wHSAwsQ////////////AdoDBgiPBxCPB9oDBgj2ARD2AeIDEAjlv/G8zs4FEOW/8bzOzgXiAxAI5L/xvM7OBRDkv/G8zs4F6gMKDRYAAAAVFwAAAOoDCg0MAAAAFSoAAADyAxIJywT7cR8BAAARywT7cR8BAADyAxIJsUzHnesCAAARsUzHnesCAAD6AwoNhf///xWF////+gMKDUEBAAAVQQEAAIIEEgkO0DGMxfT//xEO0DGMxfT//4IEEgmxzJHXTCcAABGxzJHXTCcAAIoEBRUAAERBigQHCAEVw/XIQZIECREAAAAAAIA3QJIECwgBEWZmZmZmZkVAmgQCEAGaBAIIAaIEHQoLRmlyc3RTdHJpbmcSDnRlc3QJd2l0aAl0YWJzogQfCgxTZWNvbmRTdHJpbmcSD0p1c3RBbm90aGVyVGVzdKoEBhIEBAMCAaoEAggBsgQEEgIII7IEBggBEgIIN7oEBBICCAq6BAYIARICCBTCBAQSAggUwgQGCAESAggKygQCEAHKBAQIARAC0gQCEAXSBAQIARAG2gQCEAnaBAQIARAH4gQEEgIINuIEBggBEgIICw==" } }, "unsigned": { "message": { "@type": "type.googleapis.com/protobuf_unittest.TestAllTypes", "@value": "CGQQsZmd7rldGP////8PIP///////////wEojwcw5b/xvM7OBT0XAAAAQcsE+3EfAQAATYX///9RDtAxjMX0//9dAABEQWEAAAAAAIA3QGgBcg50ZXN0CXdpdGgJdGFic3oEAQIDBJIBAggjmgECCAqiAQIIFKgBAbABBbgBCdIBAgg2" } } }
Output: { "componentName": "UnitTestComp", "serviceName": "UnitTestServ", "unsigned": { "message": { "@type": "type.googleapis.com/protobuf_unittest.TestAllTypes", "@value": "CGQQsZmd7rldGP////8PIP///////////wEojwcw5b/xvM7OBT0XAAAAQcsE+3EfAQAATYX///9RDtAxjMX0//9dAABEQWEAAAAAAIA3QGgBcg50ZXN0CXdpdGgJdGFic3oEAQIDBJIBAggjmgECCAqiAQIIFKgBAbABBbgBCdIBAgg2" } }, "signedSerialized": { "message": "CssHCjJ0eXBlLmdvb2dsZWFwaXMuY29tL3Byb3RvYnVmX3VuaXR0ZXN0LlRlc3RBbGxUeXBlcxKUBwhkELGZne65XRj/////DyD///////////8BKI8HMOW/8bzOzgU9FwAAAEHLBPtxHwEAAE2F////UQ7QMYzF9P//XQAAREFhAAAAAACAN0BoAXIOdGVzdAl3aXRoCXRhYnN6BAECAwSSAQIII5oBAggKogECCBSoAQGwAQW4AQnSAQIINvoBA2TIAYICDLGZne65XcuJ7I/3I4oCBgD/////D5ICCwD///////////8BmgIEjwf2AaICDuW/8bzOzgXkv/G8zs4FqgIIFwAAACoAAACyAhDLBPtxHwEAALFMx53rAgAAugIIhf///0EBAADCAhAO0DGMxfT//7HMkddMJwAAygIIAABEQcP1yEHSAhAAAAAAAIA3QGZmZmZmZkVA2gICAQDiAg50ZXN0CXdpdGgJdGFic+ICD0p1c3RBbm90aGVyVGVzdOoCBAECAwTqAgCCAwIII4IDAgg3igMCCAqKAwIIFJIDAggUkgMCCAqaAwIBAqIDAgUGqgMCCQeyAwIINrIDAggLugMECBYQZLoDBQghEMgBwgMOCLGZne65XRCxmZ3uuV3CAw4Iy4nsj/cjEMuJ7I/3I8oDBgj/////D8oDBhD/////D9IDCwj///////////8B0gMLEP///////////wHaAwYIjwcQjwfaAwYI9gEQ9gHiAxAI5b/xvM7OBRDlv/G8zs4F4gMQCOS/8bzOzgUQ5L/xvM7OBeoDCg0WAAAAFRcAAADqAwoNDAAAABUqAAAA8gMSCcsE+3EfAQAAEcsE+3EfAQAA8gMSCbFMx53rAgAAEbFMx53rAgAA+gMKDYX///8Vhf////oDCg1BAQAAFUEBAACCBBIJDtAxjMX0//8RDtAxjMX0//+CBBIJscyR10wnAAARscyR10wnAACKBAUVAABEQYoEBwgBFcP1yEGSBAkRAAAAAACAN0CSBAsIARFmZmZmZmZFQJoEAhABmgQCCAGiBB0KC0ZpcnN0U3RyaW5nEg50ZXN0CXdpdGgJdGFic6IEHwoMU2Vjb25kU3RyaW5nEg9KdXN0QW5vdGhlclRlc3SqBAYSBAQDAgGqBAIIAbIEBBICCCOyBAYIARICCDe6BAQSAggKugQGCAESAggUwgQEEgIIFMIEBggBEgIICsoEAhABygQECAEQAtIEAhAF0gQECAEQBtoEAhAJ2gQECAEQB+IEBBICCDbiBAYIARICCAs=", "signature": "AAECAwQ=", "publicKey": "AAECAwQ=", "publicKeySignature": "AAECAwQ=" } }
=================================================================
#1
First plain call:           524,4791 ms
Plain method call (avg):    514,3143 ms
First service call:         593,9198 ms
Average over 100:           514,6074 ms

#2
First plain call:           523,0690 ms
Plain method call (avg):    514,5455 ms
First service call:         586,1580 ms
Average over 100:           514,6934 ms

#3
First plain call:           508,8830 ms
Plain method call (avg):    514,5127 ms
First service call:         596,1040 ms
Average over 100:           514,6225 ms

#4 (plain using serialization-copy)
First plain call:           524,6345 ms
Plain method call (avg):    514,5882 ms
First service call:         549,5549 ms
Average over 100:           514,7216 ms


Test set: Sign configuration with Any TestAllTypes (no delay)
=================================================================
Input: { "componentName": "UnitTestComp", "serviceName": "UnitTestServ", "signed": { "message": { "@type": "type.googleapis.com/protobuf_unittest.TestAllTypes", "@value": "CGQQsZmd7rldGP////8PIP///////////wEojwcw5b/xvM7OBT0XAAAAQcsE+3EfAQAATYX///9RDtAxjMX0//9dAABEQWEAAAAAAIA3QGgBcg50ZXN0CXdpdGgJdGFic3oEAQIDBJIBAggjmgECCAqiAQIIFKgBAbABBbgBCdIBAgg2+gEDZMgBggIMsZmd7rldy4nsj/cjigIGAP////8PkgILAP///////////wGaAgSPB/YBogIO5b/xvM7OBeS/8bzOzgWqAggXAAAAKgAAALICEMsE+3EfAQAAsUzHnesCAAC6AgiF////QQEAAMICEA7QMYzF9P//scyR10wnAADKAggAAERBw/XIQdICEAAAAAAAgDdAZmZmZmZmRUDaAgIBAOICDnRlc3QJd2l0aAl0YWJz4gIPSnVzdEFub3RoZXJUZXN06gIEAQIDBOoCAIIDAggjggMCCDeKAwIICooDAggUkgMCCBSSAwIICpoDAgECogMCBQaqAwIJB7IDAgg2sgMCCAu6AwQIFhBkugMFCCEQyAHCAw4IsZmd7rldELGZne65XcIDDgjLieyP9yMQy4nsj/cjygMGCP////8PygMGEP////8P0gMLCP///////////wHSAwsQ////////////AdoDBgiPBxCPB9oDBgj2ARD2AeIDEAjlv/G8zs4FEOW/8bzOzgXiAxAI5L/xvM7OBRDkv/G8zs4F6gMKDRYAAAAVFwAAAOoDCg0MAAAAFSoAAADyAxIJywT7cR8BAAARywT7cR8BAADyAxIJsUzHnesCAAARsUzHnesCAAD6AwoNhf///xWF////+gMKDUEBAAAVQQEAAIIEEgkO0DGMxfT//xEO0DGMxfT//4IEEgmxzJHXTCcAABGxzJHXTCcAAIoEBRUAAERBigQHCAEVw/XIQZIECREAAAAAAIA3QJIECwgBEWZmZmZmZkVAmgQCEAGaBAIIAaIEHQoLRmlyc3RTdHJpbmcSDnRlc3QJd2l0aAl0YWJzogQfCgxTZWNvbmRTdHJpbmcSD0p1c3RBbm90aGVyVGVzdKoEBhIEBAMCAaoEAggBsgQEEgIII7IEBggBEgIIN7oEBBICCAq6BAYIARICCBTCBAQSAggUwgQGCAESAggKygQCEAHKBAQIARAC0gQCEAXSBAQIARAG2gQCEAnaBAQIARAH4gQEEgIINuIEBggBEgIICw==" } }, "unsigned": { "message": { "@type": "type.googleapis.com/protobuf_unittest.TestAllTypes", "@value": "CGQQsZmd7rldGP////8PIP///////////wEojwcw5b/xvM7OBT0XAAAAQcsE+3EfAQAATYX///9RDtAxjMX0//9dAABEQWEAAAAAAIA3QGgBcg50ZXN0CXdpdGgJdGFic3oEAQIDBJIBAggjmgECCAqiAQIIFKgBAbABBbgBCdIBAgg2" } } }
Output: { "componentName": "UnitTestComp", "serviceName": "UnitTestServ", "unsigned": { "message": { "@type": "type.googleapis.com/protobuf_unittest.TestAllTypes", "@value": "CGQQsZmd7rldGP////8PIP///////////wEojwcw5b/xvM7OBT0XAAAAQcsE+3EfAQAATYX///9RDtAxjMX0//9dAABEQWEAAAAAAIA3QGgBcg50ZXN0CXdpdGgJdGFic3oEAQIDBJIBAggjmgECCAqiAQIIFKgBAbABBbgBCdIBAgg2" } }, "signedSerialized": { "message": "CssHCjJ0eXBlLmdvb2dsZWFwaXMuY29tL3Byb3RvYnVmX3VuaXR0ZXN0LlRlc3RBbGxUeXBlcxKUBwhkELGZne65XRj/////DyD///////////8BKI8HMOW/8bzOzgU9FwAAAEHLBPtxHwEAAE2F////UQ7QMYzF9P//XQAAREFhAAAAAACAN0BoAXIOdGVzdAl3aXRoCXRhYnN6BAECAwSSAQIII5oBAggKogECCBSoAQGwAQW4AQnSAQIINvoBA2TIAYICDLGZne65XcuJ7I/3I4oCBgD/////D5ICCwD///////////8BmgIEjwf2AaICDuW/8bzOzgXkv/G8zs4FqgIIFwAAACoAAACyAhDLBPtxHwEAALFMx53rAgAAugIIhf///0EBAADCAhAO0DGMxfT//7HMkddMJwAAygIIAABEQcP1yEHSAhAAAAAAAIA3QGZmZmZmZkVA2gICAQDiAg50ZXN0CXdpdGgJdGFic+ICD0p1c3RBbm90aGVyVGVzdOoCBAECAwTqAgCCAwIII4IDAgg3igMCCAqKAwIIFJIDAggUkgMCCAqaAwIBAqIDAgUGqgMCCQeyAwIINrIDAggLugMECBYQZLoDBQghEMgBwgMOCLGZne65XRCxmZ3uuV3CAw4Iy4nsj/cjEMuJ7I/3I8oDBgj/////D8oDBhD/////D9IDCwj///////////8B0gMLEP///////////wHaAwYIjwcQjwfaAwYI9gEQ9gHiAxAI5b/xvM7OBRDlv/G8zs4F4gMQCOS/8bzOzgUQ5L/xvM7OBeoDCg0WAAAAFRcAAADqAwoNDAAAABUqAAAA8gMSCcsE+3EfAQAAEcsE+3EfAQAA8gMSCbFMx53rAgAAEbFMx53rAgAA+gMKDYX///8Vhf////oDCg1BAQAAFUEBAACCBBIJDtAxjMX0//8RDtAxjMX0//+CBBIJscyR10wnAAARscyR10wnAACKBAUVAABEQYoEBwgBFcP1yEGSBAkRAAAAAACAN0CSBAsIARFmZmZmZmZFQJoEAhABmgQCCAGiBB0KC0ZpcnN0U3RyaW5nEg50ZXN0CXdpdGgJdGFic6IEHwoMU2Vjb25kU3RyaW5nEg9KdXN0QW5vdGhlclRlc3SqBAYSBAQDAgGqBAIIAbIEBBICCCOyBAYIARICCDe6BAQSAggKugQGCAESAggUwgQEEgIIFMIEBggBEgIICsoEAhABygQECAEQAtIEAhAF0gQECAEQBtoEAhAJ2gQECAEQB+IEBBICCDbiBAYIARICCAs=", "signature": "AAECAwQ=", "publicKey": "AAECAwQ=", "publicKeySignature": "AAECAwQ=" } }
=============================================================
#1
First plain call:             1,5647 ms
Plain method call (avg):      0,0255 ms
First service call:          40,8435 ms
Average over 100:             0,4550 ms

#2
First plain call:             1,5115 ms
Plain method call (avg):      0,0257 ms
First service call:          44,6056 ms
Average over 100:             0,4919 ms

#3
First plain call:             1,6204 ms
Plain method call (avg):      0,0254 ms
First service call:          45,5780 ms
Average over 100:             0,4815 ms

#4 (plain using serialization-copy)
First plain call:             3,0675 ms
Plain method call (avg):      0,0315 ms
First service call:          41,1928 ms
Average over 80.000:          0,4382 ms

So it seems that the serialization/TCP handling takes around 400 µs, nearly independent of the message size (indicated by test sets 1 and 3). Since test 2 has a lot of work to do, the overhead there is nearly unmeasurable. The "1000x" was based on comparing the plain call to the roughly 400 µs that the protobuf/gRPC framework work takes. To make the comparison fairer, I used Person.Parser.ParseFrom(p.ToByteString()) to copy the Person message before making the plain calls, so the difference between #1-3 and #4 is the serialization time without TCP.
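
For completeness, the copy used in the "#4 (plain using serialization-copy)" runs looked like this; the generated Clone() method would have been the alternative deep copy (a small sketch, not the exact test code):

// Copy via a round trip through the protobuf wire format, but without any TCP involved.
var serializedCopy = Person.Parser.ParseFrom(p.ToByteString());

// The generated deep clone, which avoids the wire format entirely.
var clonedCopy = p.Clone();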


Looking over these results, I'm not sure anymore if adding the in-process handling is worth the additional effort.


I used an Intel Core i7-2640M @ 2.80 GHz for my tests.

Malc

Oct 7, 2016, 10:05:24 AM
to grpc.io, falco...@gmail.com
I have a slightly different use case in mind.
We would like to move a legacy system to using gRPC. All new stuff can go there easily, but for the older components it would be useful to migrate first to an in-process separation of interfaces, and then to pull those out into external services as we get more mature with them.
Being in the same memory space allows us to start building an interface while some of the more malignant elements are still worming their way through the global namespace, until we pick those out too.

Is there a likely date in the future that this might be available?

Thanks

Malc

Jan Tattermusch

Oct 13, 2016, 2:14:23 PM
to Malc, grpc.io, Benjamin Krämer
At this point, there's no ETA, and it's very unlikely this would happen before the end of 2016. On the other hand, for what you are describing, you don't really need an in-process server. You can run a server on localhost (on an autoselected port) and then connect to it with your clients. You can have as many servers and clients in the same process as you want. The memory space will be the same, so the kind of cheating you describe will be possible.
 


Craig Tiller

Oct 13, 2016, 2:17:44 PM
to Jan Tattermusch, Malc, grpc.io, Benjamin Krämer
I expect we'll be putting this together early Q1 2017.


seba...@squidex.io

Apr 4, 2017, 2:03:17 AM
to grpc.io, jtatte...@google.com, marki...@gmail.com, falco...@gmail.com
Hi, is this possible now? My use case is also a legacy system, which must be migrated step by step. Step 1 would be to define good contracts.

Jan Tattermusch

Apr 4, 2017, 4:29:31 AM
to seba...@squidex.io, grpc.io, Malc, Benjamin Krämer
I think we have the support in C core now, but I would need to do more research in terms of how to expose the functionality in C#.

seba...@squidex.io

Apr 10, 2017, 3:33:55 AM
to grpc.io, seba...@squidex.io, marki...@gmail.com, falco...@gmail.com
Do you have an idea when this would happen? It is very important for us. I tried to do something myself, just getting a direct reference to the server class in the client, but all the important classes are internal.

Jan Tattermusch

Apr 10, 2017, 11:38:25 AM
to seba...@squidex.io, grpc.io, Malc, Benjamin Krämer
On Mon, Apr 10, 2017 at 9:33 AM, <seba...@squidex.io> wrote:
Do you have an idea when this would happen? It is very important for us. I tried to do something myself, just getting a direct reference to the server class in the client, but all the important classes are internal.

I asked around and it looks like the InProcess support hasn't landed in C core yet (but it is about to land, approximately this week). After that, we can take a look at how hard it would be to expose it in C#.

Also, why exactly is the lack of InProcess blocking for you? I can see there could be performance limitations, but you can always just expose a server at a random port (the gRPC C# server has the 'PickUnused' feature) and connect to it with a client. You can certainly have a client and server in the same process.
This example should serve you well (and the InProcess support we would provide wouldn't actually be much faster, I think):
using System.Linq;
using Grpc.Core;

// Start a server on an automatically selected localhost port.
var server = new Server
{
    Services = { YourService.BindService(new YourServiceImpl()) },
    Ports = { { "localhost", ServerPort.PickUnused, ServerCredentials.Insecure } }
};
server.Start();
// Connect to whatever port was actually bound, from within the same process.
var channel = new Channel("localhost", server.Ports.Single().BoundPort, ChannelCredentials.Insecure);
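
A client stub can then be created against that channel as usual (assuming the generated client):

var client = new YourService.YourServiceClient(channel);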
 

marki...@gmail.com

Apr 10, 2017, 11:53:48 AM
to grpc.io, seba...@squidex.io, marki...@gmail.com, falco...@gmail.com
Hi all,
Just as a note on using the localhost loopback: it can cause issues when running in a corporate environment where the security policy is excessively restrictive.

Personally, I worked around it with an implementation class that sits underneath my service base class; it is public and can be called directly by referencing the DLL. It's not ideal, but it keeps the main elements of the interface, in that it uses the protobuf-generated objects as the form of communication and response, and the service class just implements everything as a pass-through to the underlying impl class.
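
Roughly like this (a simplified sketch using the names from the standard helloworld example rather than my real services):

using System.Threading.Tasks;
using Grpc.Core;
using Helloworld;

// Public implementation class: callable directly by any code that references the DLL.
public class GreeterLogic
{
    public HelloReply SayHello(HelloRequest request)
    {
        return new HelloReply { Message = "Hello " + request.Name };
    }
}

// The gRPC service is just a thin pass-through over the same logic.
public class GreeterService : Greeter.GreeterBase
{
    private readonly GreeterLogic logic = new GreeterLogic();

    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(logic.SayHello(request));
    }
}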

Still hoping to switch to the in-process interface when you have it, but it is not blocking for me.

Malc