Bidi Async Server (C++) does not receive message.


song...@hotmail.com

Jan 24, 2017, 2:22:47 PM
to grpc.io
I changed the C++ HelloWorld example to use a bidi async server. The modified proto is as follows.

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (stream HelloMessage) returns (stream HelloMessage) {}
}

// The response message containing the greetings
message HelloMessage {
  string message = 1;
}

The sync server works fine. However, the async server does not: it never receives anything after calling stream_.Read(). Eventually I got the error below.

Server listening on 0.0.0.0:50051
Greeter server received:
Greeter server replied: Hello
pure virtual method called
terminate called without an active exception
Aborted

I based my async server on the original greeter_async_server.cc with slight changes. I paste my code below for your reference. Can you please take a look and see which part is wrong? How can I make it work? Thanks.


class ServerImpl final {
 public:
  ~ServerImpl() {
    server_->Shutdown();
    // Always shutdown the completion queue after the server.
    cq_->Shutdown();
  }

  // There is no shutdown handling in this code.
  void Run() {
    std::string server_address("0.0.0.0:50051");

    ServerBuilder builder;
    // Listen on the given address without any authentication mechanism.
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
    // Register "service_" as the instance through which we'll communicate with
    // clients. In this case it corresponds to an *asynchronous* service.
    builder.RegisterService(&service_);
    // Get hold of the completion queue used for the asynchronous communication
    // with the gRPC runtime.
    cq_ = builder.AddCompletionQueue();
    // Finally assemble the server.
    server_ = builder.BuildAndStart();
    std::cout << "Server listening on " << server_address << std::endl;

    // Proceed to the server's main loop.
    HandleRpcs();
  }

 private:
  // Class encompassing the state and logic needed to serve a request.
  class CallData {
   public:
    // Take in the "service" instance (in this case representing an asynchronous
    // server) and the completion queue "cq" used for asynchronous communication
    // with the gRPC runtime.
    CallData(Greeter::AsyncService* service, ServerCompletionQueue* cq)
        : service_(service), cq_(cq), stream_(&ctx_), status_(CREATE) {
      // Invoke the serving logic right away.
      Proceed();
    }

    void Proceed() {
      if (status_ == CREATE) {
        // Make this instance progress to the PROCESS state.
        status_ = PROCESS;

        // As part of the initial CREATE state, we *request* that the system
        // start processing SayHello requests. In this request, "this" acts as
        // the tag uniquely identifying the request (so that different CallData
        // instances can serve different requests concurrently), in this case
        // the memory address of this CallData instance.
        service_->RequestSayHello(&ctx_, &stream_, cq_, cq_, this);
      } else if (status_ == PROCESS) {
        // Spawn a new CallData instance to serve new clients while we process
        // the one for this CallData. The instance will deallocate itself as
        // part of its FINISH state.
        new CallData(service_, cq_);

        // The actual processing.
        std::string prefix("Hello ");
        stream_.Read(&request_, this);
        std::cout << "Greeter server received: " << request_.message() << std::endl;
        reply_.set_message(prefix + request_.message());
        std::cout << "Greeter server replied: " << reply_.message() << std::endl;
        stream_.Write(reply_, this);

        // And we are done! Let the gRPC runtime know we've finished, using the
        // memory address of this instance as the uniquely identifying tag for
        // the event.
        status_ = FINISH;
        stream_.Finish(Status::OK, this);
      } else {
        GPR_ASSERT(status_ == FINISH);
        // Once in the FINISH state, deallocate ourselves (CallData).
        delete this;
      }
    }

   private:
    // The means of communication with the gRPC runtime for an asynchronous
    // server.
    Greeter::AsyncService* service_;
    // The producer-consumer queue for asynchronous server notifications.
    ServerCompletionQueue* cq_;
    // Context for the rpc, allowing to tweak aspects of it such as the use
    // of compression, authentication, as well as to send metadata back to the
    // client.
    ServerContext ctx_;

    // What we get from the client.
    HelloMessage request_;
    // What we send back to the client.
    HelloMessage reply_;

    // The means to get back to the client.
    ServerAsyncReaderWriter<HelloMessage, HelloMessage> stream_;

    // Let's implement a tiny state machine with the following states.
    enum CallStatus { CREATE, PROCESS, FINISH };
    CallStatus status_;  // The current serving state.
  };

  // This can be run in multiple threads if needed.
  void HandleRpcs() {
    // Spawn a new CallData instance to serve new clients.
    new CallData(&service_, cq_.get());
    void* tag;  // uniquely identifies a request.
    bool ok;
    while (true) {
      // Block waiting to read the next event from the completion queue. The
      // event is uniquely identified by its tag, which in this case is the
      // memory address of a CallData instance.
      // The return value of Next should always be checked. This return value
      // tells us whether there is any kind of event or cq_ is shutting down.
      GPR_ASSERT(cq_->Next(&tag, &ok));
      GPR_ASSERT(ok);
      static_cast<CallData*>(tag)->Proceed();
    }
  }

  std::unique_ptr<ServerCompletionQueue> cq_;
  Greeter::AsyncService service_;
  std::unique_ptr<Server> server_;
};

int main(int argc, char** argv) {
  ServerImpl server;
  server.Run();

  return 0;
}

Vijay Pai

Jan 24, 2017, 11:25:33 PM
to grpc.io, song...@hotmail.com
What does your Client code look like?

song...@hotmail.com

Jan 24, 2017, 11:44:20 PM
to grpc.io, song...@hotmail.com
My client side uses the sync API, as shown below.

class GreeterClient {
 public:
  GreeterClient(std::shared_ptr<Channel> channel)
      : stub_(Greeter::NewStub(channel)) {}

  // Assembles the client's payload, sends it and presents the response back
  // from the server.
  std::string SayHello(const std::string& user) {
    // Data we are sending to the server.
    HelloMessage request;
    request.set_message(user);

    // Container for the data we expect from the server.
    HelloMessage reply;

    // Context for the client. It could be used to convey extra information to
    // the server and/or tweak certain RPC behaviors.
    ClientContext context;
    std::shared_ptr<ClientReaderWriter<HelloMessage, HelloMessage> > stream(stub_->SayHello(&context));

    // The actual RPC.
    stream->Write(request);
    stream->Read(&reply);
    return reply.message();
  }

 private:
  std::unique_ptr<Greeter::Stub> stub_;

};

int main(int argc, char** argv) {
  // Instantiate the client. It requires a channel, out of which the actual RPCs
  // are created. This channel models a connection to an endpoint (in this case,
  // localhost at port 50051). We indicate that the channel isn't authenticated
  // (use of InsecureChannelCredentials()).
  GreeterClient greeter(grpc::CreateChannel(
      "localhost:50051", grpc::InsecureChannelCredentials()));
  std::string user("world");
  std::string reply = greeter.SayHello(user);
  std::cout << "Greeter received: " << reply << std::endl;

  return 0;
}


On Tuesday, January 24, 2017 at 8:25:33 PM UTC-8, Vijay Pai wrote:

Sree Kuchibhotla

Jan 25, 2017, 12:07:51 AM
to song...@hotmail.com, grpc.io
Hi,
You are incorrectly using the async streaming API on the server side.

In the following code, stream_.Read(), stream_.Write() and stream_.Finish() are three async calls and return immediately. You need to wait for cq_->Next() to return each operation's tag before issuing the next operation, to make sure the async operation actually completed.

---
       std::string prefix("Hello ");
        stream_.Read(&request_, this);

       //*** SREE: You should wait for cq_.Next() to return the tag (i.e 'this') before proceeding ***
      // It is not safe to proceed without that

        std::cout << "Greeter server received: " << request_.message() << std::endl;
        reply_.set_message(prefix + request_.message());
        std::cout << "Greeter server replied: " << reply_.message() << std::endl;
        stream_.Write(reply_, this); 

       //*** SREE: You should wait for cq_.Next() to return the tag (i.e 'this') before proceeding ***

        // And we are done! Let the gRPC runtime know we've finished, using the
        // memory address of this instance as the uniquely identifying tag for
        // the event.
        status_ = FINISH;
        stream_.Finish(Status::OK, this);
---

The correct way to do it is:

1) Expand your state machine in CallData, i.e. change enum CallStatus { CREATE, PROCESS, FINISH };
to enum CallStatus { CREATE, PROCESS, READ_CALLED, WRITE_CALLED, FINISH };

2) Change your Proceed() function to something like below:

void Proceed() {
  switch (status_) {
    case CREATE: {
      service_->RequestSayHello(&ctx_, &stream_, cq_, cq_, this);
      status_ = PROCESS;
      break;
    }
    case PROCESS: {
      // Spawn a new CallData instance to serve new clients.
      new CallData(service_, cq_);

      stream_.Read(&request_, this);
      status_ = READ_CALLED;
      break;
    }
    case READ_CALLED: {
      std::cout << "Greeter server received: " << request_.message() << std::endl;
      reply_.set_message("Hello " + request_.message());
      std::cout << "Greeter server replied: " << reply_.message() << std::endl;
      stream_.Write(reply_, this);

      status_ = WRITE_CALLED;
      break;
    }
    case WRITE_CALLED: {
      stream_.Finish(Status::OK, this);
      status_ = FINISH;
      break;
    }
    case FINISH: {
      delete this;
      break;
    }
  }
}



Hope this helps,

thanks,
Sree



--
You received this message because you are subscribed to the Google Groups "grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email to grpc-io+unsubscribe@googlegroups.com.
To post to this group, send email to grp...@googlegroups.com.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit https://groups.google.com/d/msgid/grpc-io/808939a7-9335-4194-82af-4e92fadf04e9%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

song...@hotmail.com

Jan 25, 2017, 1:07:08 AM
to grpc.io, song...@hotmail.com
Thank you Sree! It is working now. I really appreciate your help. You guys are awesome!

On Tuesday, January 24, 2017 at 9:07:51 PM UTC-8, Sree Kuchibhotla wrote:

Sree Kuchibhotla

Jan 25, 2017, 2:50:58 AM
to song...@hotmail.com, grpc.io
Glad to know it worked. Thanks :)

-Sree


Venelin Vasilev

Oct 15, 2020, 7:52:12 AM
to grpc.io
Hey Sree,

thank you for sharing this. Can you please check this out: https://groups.google.com/g/grpc-io/c/3LMvM62SAo0 ?

Thank you in advance!
