gRPC Java 1.14.0 Released


Carl Mastrangelo

Jul 31, 2018, 2:18:59 PM
to grpc.io

Notice: This is expected to be the last version supporting Java 6. Comment on #3961 if this causes you trouble. Android API level 14 support will be unchanged.

Dependencies

  • Updated to Netty 4.1.27 and Netty TCNative 2.0.12
  • gRPC is now regularly tested with JDK 9 and 10

API Changes

  • OkHttpChannelBuilder#negotiationType is now deprecated (see the sketch after this list)
  • Made protobuf, protobuf-lite, and protobuf-nano classes final.
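
A minimal migration sketch for the deprecated negotiationType setter, assuming (as the deprecation suggests) that the plaintext/TLS selector methods on the builder are the intended replacement; the host and port are placeholders:

    import io.grpc.ManagedChannel;
    import io.grpc.okhttp.OkHttpChannelBuilder;

    // Before (now deprecated):
    //   OkHttpChannelBuilder.forAddress("example.com", 8080)
    //       .negotiationType(NegotiationType.PLAINTEXT)
    //       .build();
    // After: pick plaintext or TLS directly on the builder.
    ManagedChannel channel = OkHttpChannelBuilder.forAddress("example.com", 8080)
        .usePlaintext()              // or .useTransportSecurity() for TLS
        .build();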

New Features

  • Channel Tracing now records State Changes
  • Stubs now have an RpcMethod annotation for use with annotation processors
  • Added support for providing a List<EquivalentAddressGroup> to LoadBalancer Subchannels, in addition to the existing option of providing a single EquivalentAddressGroup (EAG). This removes the need for LoadBalancers to "flatten" a List<EquivalentAddressGroup> into one EquivalentAddressGroup, which loses or confuses the EAGs' Attributes. NameResolvers can now specify Attributes in an EAG and expect the values to be passed through to gRPC's core. Future work will add List<EAG> support for OobChannels. See the sketch after this list.
  • InProcessSocketAddress now has a useful toString() method
  • AndroidChannelBuilder is now easier to build
  • RoundRobinLoadBalancer now scales better when using stickiness
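
To make the Subchannel change above concrete, here is a deliberately stripped-down, hypothetical balancer that keeps a single Subchannel over all resolved addresses; a real implementation would also track connectivity state and build pickers:

    import io.grpc.Attributes;
    import io.grpc.ConnectivityStateInfo;
    import io.grpc.EquivalentAddressGroup;
    import io.grpc.LoadBalancer;
    import io.grpc.Status;
    import java.util.List;

    // Hypothetical, minimal balancer: one Subchannel over ALL resolved addresses.
    final class SingleSubchannelBalancer extends LoadBalancer {
      private final Helper helper;
      private Subchannel subchannel;

      SingleSubchannelBalancer(Helper helper) {
        this.helper = helper;
      }

      @Override
      public void handleResolvedAddressGroups(
          List<EquivalentAddressGroup> servers, Attributes attributes) {
        if (subchannel == null) {
          // Previously the list had to be flattened into one EquivalentAddressGroup,
          // discarding per-EAG Attributes; now the list can be passed through as-is.
          subchannel = helper.createSubchannel(servers, Attributes.EMPTY);
          subchannel.requestConnection();
        }
        // A real balancer would also update the existing Subchannel's addresses here.
      }

      @Override
      public void handleSubchannelState(Subchannel sc, ConnectivityStateInfo stateInfo) {
        // Omitted: react to READY/IDLE/TRANSIENT_FAILURE and update the picker.
      }

      @Override
      public void handleNameResolutionError(Status error) {
        // Omitted: surface the error to pending RPCs.
      }

      @Override
      public void shutdown() {
        if (subchannel != null) {
          subchannel.shutdown();
        }
      }
    }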

Behavior Changes

  • gRPCLB no longer depends on having a Service Config

Bug Fixes

  • Fix regression that broke Java 9 ALPN support. This fixes the error "SunJSSE selected, but Jetty NPN/ALPN unavailable" (#4620)
  • Fixed a bug with gRPC LB parsing SRV DNS records (6dbe392)
  • enterIdle() will exit idle mode if the channel is still in use (#4665)
  • TransmitStatusRuntimeExceptionInterceptor now avoids accidentally double closing the call.

Documentation

  • Clarified StreamObserver interaction with thread safety

Thanks to all our Contributors:



Carl Mastrangelo

Jul 31, 2018, 7:02:59 PM
to grpc.io
FYI, it was brought to my attention that Netty has a bug in 4.1.26 that causes increased memory usage when using Netty Epoll. Please avoid using this release if you depend on Netty Epoll (which should be uncommon).

eleano...@gmail.com

Aug 23, 2018, 4:09:55 PM
to grpc.io
Hi Carl, 

What about StreamObserver thread safety? Can you please point me to the documentation, if it exists?

Thanks a lot!


Carl Mastrangelo

Aug 23, 2018, 5:38:07 PM
to grpc.io
You can see the change here: https://github.com/grpc/grpc-java/commit/defb955f3ab233e11d960a42495ca955306d57a4. StreamObserver wraps a ClientCall.

eleano...@gmail.com

Aug 23, 2018, 5:59:59 PM
to grpc.io

Hi Carl, 

Thanks for the reply! I have a question regarding this:

My gRPC client and server are doing bi-directional streaming; in the StreamObserver.onNext() that the client passes to the server call, it just prints out the response from the server.
On the client side, when creating the channel, I passed a fixed thread pool with 5 threads, and I see the results on the client get printed by 5 different threads. So that means 5 threads are accessing the same StreamObserver object, but as you mentioned StreamObserver is not thread safe?
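
Roughly, the setup looks like this (simplified; EchoServiceGrpc, EchoRequest, EchoResponse, and chat() are just placeholder names, not my real API):

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import io.grpc.stub.StreamObserver;
    import java.util.concurrent.Executors;

    // EchoServiceGrpc / EchoRequest / EchoResponse are hypothetical generated classes.
    ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
        .usePlaintext()
        .executor(Executors.newFixedThreadPool(5))   // the 5-thread pool mentioned above
        .build();

    StreamObserver<EchoRequest> requestObserver =
        EchoServiceGrpc.newStub(channel).chat(new StreamObserver<EchoResponse>() {
          @Override
          public void onNext(EchoResponse response) {
            // Printed by whichever pool thread gRPC is currently using for this call.
            System.out.println(Thread.currentThread().getName() + ": " + response);
          }

          @Override
          public void onError(Throwable t) {
            t.printStackTrace();
          }

          @Override
          public void onCompleted() {
            System.out.println("server done");
          }
        });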

Since the onNext() here is just System.out.println(), maybe the threads never actually access the StreamObserver concurrently. But what if the logic for processing the response takes time, and before thread1 has finished its onNext() call the next response arrives and another thread tries to process it? Is there any consequence in that scenario?

Thanks a lot!

Eric Anderson

Aug 24, 2018, 10:17:22 AM
to eleano...@gmail.com, grpc-io
On Thu, Aug 23, 2018 at 3:00 PM <eleano...@gmail.com> wrote:
My gRPC client and server are doing bi-directional streaming; in the StreamObserver.onNext() that the client passes to the server call, it just prints out the response from the server.
On the client side, when creating the channel, I passed a fixed thread pool with 5 threads, and I see the results on the client get printed by 5 different threads. So that means 5 threads are accessing the same StreamObserver object, but as you mentioned StreamObserver is not thread safe?

Since the onNext() here is just System.out.println(), maybe the threads never actually access the StreamObserver concurrently. But what if the logic for processing the response takes time, and before thread1 has finished its onNext() call the next response arrives and another thread tries to process it? Is there any consequence in that scenario?

We make sure to call it only from one thread at a time. We'll continue re-using a thread for delivering callbacks if more work is coming. But if there's a period of no callbacks we'll return from the thread. The next time we need to do callbacks a different thread may be chosen.
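
So a slow onNext() won't be raced by another thread for the same call; it only delays the later callbacks for that RPC. If per-message processing is heavy, one common pattern (sketch only; process() is your own logic and ResponseType is a placeholder for your generated response class) is to return from the callback quickly and hand the work to your own executor:

    import io.grpc.stub.StreamObserver;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // A single-threaded executor keeps the responses in order.
    final ExecutorService worker = Executors.newSingleThreadExecutor();

    StreamObserver<ResponseType> observer = new StreamObserver<ResponseType>() {
      @Override
      public void onNext(ResponseType response) {
        worker.submit(() -> process(response));   // process() = your slow per-message logic
      }

      @Override
      public void onError(Throwable t) {
        worker.shutdown();
      }

      @Override
      public void onCompleted() {
        worker.shutdown();
      }
    };

Keep in mind that once you hand work off like this, messages can queue up in the executor faster than you drain them, so you may also want to look at flow control.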

eleano...@gmail.com

Aug 24, 2018, 12:04:59 PM
to grpc.io
Hi Eric, 

Thanks a lot for the explanation. If I understand you correctly, at any given point in time there will only be 1 thread processing the callback, and there will NEVER be multiple threads processing the callbacks concurrently.
If that is the case, what is the point of having the executor() configuration in ChannelBuilder and ServerBuilder?

Thanks a lot!

Eric Anderson

Aug 24, 2018, 6:34:49 PM
to Jin Yi, grpc-io
On Fri, Aug 24, 2018 at 9:05 AM <eleano...@gmail.com> wrote:
If I understand you correctly, at any given point in time there will only be 1 thread processing the callback, and there will NEVER be multiple threads processing the callbacks concurrently.
If that is the case, what is the point of having the executor() configuration in ChannelBuilder and ServerBuilder?

There will only be 1 thread processing that one RPC's callbacks. Multiple threads can be processing callbacks, but for different RPCs.

Also, it's common for applications to already have thread pools sitting around and want to reuse them instead of creating yet-more-threads.
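
For example, roughly (sketch; the pool size and address are arbitrary):

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // An executor the application already owns and shares with other work.
    ExecutorService appExecutor = Executors.newFixedThreadPool(8);

    ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
        .usePlaintext()
        .executor(appExecutor)   // callbacks run on this pool, still one thread at a time per RPC
        .build();

ServerBuilder has the same executor() knob on the server side.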

eleano...@gmail.com

Aug 24, 2018, 6:36:07 PM
to grpc.io
Hi Eric, 

Thanks a lot, I got it!