Tested on gRPC 1.4.2.
IMHO, GPR_LOG_SEVERITY_INFO currently logs too much and is unsuitable for production use. To me, INFO means something interesting happened: not an error, but an important event worth noticing. For example, a client logging in or out might warrant an INFO line, but if that client then executes 5 RPCs, the RPC handling should generate DEBUG logs, not INFO logs.
Currently, a single RPC call spews 20 lines even when nothing out of the ordinary happened:
2017/10/04-15:42:54.247 INFO RETURN_EVENT[000001DCD6DBA130]: OP_COMPLETE: tag:000001E6B6B60870 OK
2017/10/04-15:42:54.247 INFO grpc_call_start_batch(call=000001DCD713FBE8, ops=0000003515AFF510, nops=1, tag=000001E6B6B45180, reserved=0000000000000000)
2017/10/04-15:42:54.247 INFO ops[0]: RECV_CLOSE_ON_SERVER cancelled=000001E6B6B451F0
2017/10/04-15:42:54.247 INFO grpc_completion_queue_next(cc=000001DCD6DBA130, deadline=gpr_timespec { tv_sec: 9223372036854775807, tv_nsec: 0, clock_type: 1 }, reserved=0000000000000000)
2017/10/04-15:42:54.247 INFO grpc_server_request_registered_call(server=000001DCD6CE1AA0, rmp=000001DCD7064460, call=000001E6B6B60A28, deadline=000001DDBA92F558, initial_metadata=000001DDBA92F590, optional_payload=000001E6B6B60A30, cq_bound_to_call=000001DCD6DBA130, cq_for_notification=000001DCD6DBA130, tag=000001E6B6B609F0)
2017/10/04-15:42:54.247 INFO grpc_call_get_peer(000001DCD713FBE8)
2017/10/04-15:42:54.247 INFO grpc_call_get_peer(000001DCD713FBE8)
2017/10/04-15:42:54.263 INFO grpc_call_start_batch(call=000001DCD713FBE8, ops=000000351C4FE5E0, nops=3, tag=000001DDB9ADCCC8, reserved=0000000000000000)
2017/10/04-15:42:54.263 INFO ops[0]: SEND_INITIAL_METADATA(nil)
2017/10/04-15:42:54.263 INFO ops[1]: SEND_MESSAGE ptr=000001DCD6C06700
2017/10/04-15:42:54.263 INFO ops[2]: SEND_STATUS_FROM_SERVER status=0 details=(null)(nil)
2017/10/04-15:42:54.263 INFO cq_end_op_for_next(exec_ctx=000000351C4FE528, cc=000001DCD6DBA130, tag=000001DDB9ADCCC8, error="No Error", done=00007FF6349AFA90, done_arg=000001DCD71413E8, storage=000001DCD71413F0)
2017/10/04-15:42:54.263 INFO cq_end_op_for_next(exec_ctx=000000351C4FE528, cc=000001DCD6DBA130, tag=000001E6B6B45180, error="No Error", done=00007FF6349AFA90, done_arg=000001DCD7141308, storage=000001DCD7141310)
2017/10/04-15:42:54.263 INFO RETURN_EVENT[000001DCD6DBA130]: OP_COMPLETE: tag:000001DDB9ADCCC8 OK
2017/10/04-15:42:54.263 INFO grpc_completion_queue_next(cc=000001DCD6DBA130, deadline=gpr_timespec { tv_sec: 9223372036854775807, tv_nsec: 0, clock_type: 1 }, reserved=0000000000000000)
2017/10/04-15:42:54.263 INFO RETURN_EVENT[000001DCD6DBA130]: OP_COMPLETE: tag:000001E6B6B45180 OK
2017/10/04-15:42:54.263 INFO grpc_completion_queue_next(cc=000001DCD6DBA130, deadline=gpr_timespec { tv_sec: 9223372036854775807, tv_nsec: 0, clock_type: 1 }, reserved=0000000000000000)
2017/10/04-15:42:54.264 INFO grpc_call_unref(c=000001DCD713FBE8)
2017/10/04-15:42:54.264 INFO grpc_metadata_array_destroy(array=000001DCD71403D8)
2017/10/04-15:42:54.264 INFO grpc_metadata_array_destroy(array=000001DDB9ADCB40)
How about increasing the granularity of the log levels, e.g. SPAM, TRACE, INFO, WARN, ERROR, so that different kinds of logs can be handled appropriately?