io.grpc.StatusRuntimeException: UNAVAILABLE: Credentials failed to obtain metadata

app.i...@gmail.com

May 2, 2019, 1:24:08 PM
to gce-discussion

Getting "com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: Credentials failed to obtain metadata" only when I run on Windows; on Linux it works perfectly



Hi, I have a simple Maven JavaFX application that I'm building for Linux and Windows.
On Linux the application works perfectly; the API performance is really incredible.
The problem is on Windows: when I try to start streaming audio to Google, this error appears:
com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: Credentials failed to obtain metadata.

PS1: I created a unit test to guarantee that GOOGLE_APPLICATION_CREDENTIALS is being set, so that shouldn't be the cause of the error.
PS2: If I download a new JSON key file from Google, SOMETIMES my program works, but only for a couple of hours; after that, the error appears again. (I'm talking about Windows, of course.)
PS3: The exact same JSON key file works perfectly on Linux.
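For what it's worth, a quick way to rule out the environment variable itself is a small stand-alone check that the path the running process actually sees points at a readable file. This is just an illustrative sketch (the class and method names are made up, not from the project above):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class AdcCheck {

    // Returns a diagnostic string for a GOOGLE_APPLICATION_CREDENTIALS value.
    static String describe(String value) {
        if (value == null || value.isEmpty()) {
            return "GOOGLE_APPLICATION_CREDENTIALS is not set";
        }
        Path path = Paths.get(value);
        if (!Files.isReadable(path)) {
            return "credentials file is missing or unreadable: " + value;
        }
        return "credentials file found: " + value;
    }

    public static void main(String[] args) {
        // Check the variable in the same process that runs the app: on Windows,
        // a value set in one cmd session is invisible to programs launched elsewhere
        // (e.g. from the IDE or a shortcut) unless it was set system-wide.
        System.out.println(describe(System.getenv("GOOGLE_APPLICATION_CREDENTIALS")));
    }
}
```

A unit test that passes can still be misleading on Windows if the test runner and the packaged app inherit different environments.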

1. API: SpeechClient API
2. OS: Windows 10
3. Java version: "1.8.0_211"
4. I've tried almost all google-cloud-speech versions, but I'm now on 1.1.0

  1. I created a JavaFX GUI with a record button and, when I start the stream on Windows, I get
     "com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: Credentials failed to obtain metadata"
     as an error.
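One thing worth trying when application-default credentials misbehave on one OS is to bypass the environment variable entirely and hand the client an explicit credentials object. This is only a sketch, not a confirmed fix for this error; it assumes the google-cloud-speech and gax artifacts are on the classpath, and the key path is a placeholder:

```java
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechSettings;
import java.io.FileInputStream;

public class ExplicitCredsExample {
    public static void main(String[] args) throws Exception {
        // Load the service-account key directly instead of relying on
        // GOOGLE_APPLICATION_CREDENTIALS being visible to this process.
        GoogleCredentials credentials =
                GoogleCredentials.fromStream(new FileInputStream("C:\\path\\to\\key.json"));

        SpeechSettings settings =
                SpeechSettings.newBuilder()
                        .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
                        .build();

        try (SpeechClient client = SpeechClient.create(settings)) {
            // use client.streamingRecognizeCallable() exactly as in the Runnable below
        }
    }
}
```

If this works while the env-var path fails, the problem is how Windows delivers the variable to the process rather than the key itself.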

Code example

This is easier to read here: (https://stackoverflow.com/questions/55787208/google-speechclient-io-grpc-statusruntimeexception-unavailable-credentials-fai)

However, these are my Runnables:

SpeechRecognizerRunnable:
import Controller.GUI.VoxSpeechGUIController;
import Model.SpokenTextHistory;
import com.google.api.gax.rpc.ClientStream;
import com.google.api.gax.rpc.ResponseObserver;
import com.google.api.gax.rpc.StreamController;
import com.google.cloud.speech.v1.*;
import com.google.protobuf.ByteString;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.TargetDataLine;
import java.io.IOException;
import java.util.ArrayList;

public class SpeechRecognizerRunnable implements Runnable {

private VoxSpeechGUIController controller;

private String notFinalTranscript = "";

public SpeechRecognizerRunnable(VoxSpeechGUIController voxSpeechGUIController) {
    this.controller = voxSpeechGUIController;
}

@Override
public void run() {
    MicrofoneRunnable micrunnable = MicrofoneRunnable.getInstance();
    Thread micThread = new Thread(micrunnable);
    ResponseObserver<StreamingRecognizeResponse> responseObserver = null;
    try (SpeechClient client = SpeechClient.create()) {
        ClientStream<StreamingRecognizeRequest> clientStream;
        responseObserver =
                new ResponseObserver<StreamingRecognizeResponse>() {

                    ArrayList<StreamingRecognizeResponse> responses = new ArrayList<>();

                    public void onStart(StreamController controller) {
                        System.out.println("Starting");
                    }

                    public void onResponse(StreamingRecognizeResponse response) {

                        try {
                            responses.add(response);
                            StreamingRecognitionResult result = response.getResultsList().get(0);
                            SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
                            String transcript = alternative.getTranscript();
                            System.out.printf("Transcript : %s\n", transcript);
                            if(!result.getIsFinal()){
                                notFinalTranscript = transcript;
                                controller.setLabelText(SpokenTextHistory.getInstance().getActualSpeechString() + " (" + transcript + ")");
                            }
                            if(result.getIsFinal()){
                                String newText = SpokenTextHistory.getInstance().getActualSpeechString() + " " + transcript;
                                SpokenTextHistory.getInstance().setActualSpeechString(newText);
                                controller.setLabelText(newText);
                            }
                        }
                        catch (Exception ex){
                            System.out.println(ex.getMessage());
                            ex.printStackTrace();
                        }
                    }

                    public void onComplete() {
                    }

                    public void onError(Throwable t) {
                        System.out.println(t);
                    }
                };

        clientStream = client.streamingRecognizeCallable().splitCall(responseObserver);

        RecognitionConfig recognitionConfig =
                RecognitionConfig.newBuilder()
                        .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                        .setLanguageCode("pt-BR")
                        .setSampleRateHertz(16000)
                        .build();
        StreamingRecognitionConfig streamingRecognitionConfig =
                StreamingRecognitionConfig.newBuilder()
                        .setInterimResults(true)
                        .setConfig(recognitionConfig).build();

        StreamingRecognizeRequest request =
                StreamingRecognizeRequest.newBuilder()
                        .setStreamingConfig(streamingRecognitionConfig)
                        .build(); // The first request in a streaming call has to be a config

        clientStream.send(request);

        try {
            // SampleRate:16000Hz, SampleSizeInBits: 16, Number of channels: 1, Signed: true,
            // bigEndian: false
            AudioFormat audioFormat = new AudioFormat(16000, 16, 1, true, false);
            DataLine.Info targetInfo =
                    new DataLine.Info(
                            TargetDataLine.class,
                            audioFormat); // Set the system information to read from the microphone audio
            // stream

            if (!AudioSystem.isLineSupported(targetInfo)) {
                System.out.println("Microphone not supported");
                System.exit(0);
            }
            // Target data line captures the audio stream the microphone produces.
            micrunnable.targetDataLine = (TargetDataLine) AudioSystem.getLine(targetInfo);
            micrunnable.targetDataLine.open(audioFormat);
            micThread.start();

            long startTime = System.currentTimeMillis();

            while (!micrunnable.stopFlag) {

                long estimatedTime = System.currentTimeMillis() - startTime;

                if (estimatedTime >= 55000) {

                    clientStream.closeSend();
                    clientStream = client.streamingRecognizeCallable().splitCall(responseObserver);

                    request =
                            StreamingRecognizeRequest.newBuilder()
                                    .setStreamingConfig(streamingRecognitionConfig)
                                    .build();

                    startTime = System.currentTimeMillis();

                } else {
                    request =
                            StreamingRecognizeRequest.newBuilder()
                                    .setAudioContent(ByteString.copyFrom(micrunnable.sharedQueue.take()))
                                    .build();
                }

                clientStream.send(request);
            }
            if(!notFinalTranscript.equals(""))
            {
                String newText = SpokenTextHistory.getInstance().getActualSpeechString() + " " + notFinalTranscript;
                SpokenTextHistory.getInstance().setActualSpeechString(newText);
                controller.setLabelText(newText);
            }
        } catch (Exception e) {
            System.out.println(e);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
}

MicrofoneRunnable:
import javax.sound.sampled.TargetDataLine;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MicrofoneRunnable implements Runnable {

// volatile: stopFlag is written from the GUI thread and read by the capture loop
public volatile boolean stopFlag;

public static volatile BlockingQueue<byte[]> sharedQueue = new LinkedBlockingQueue<>();
public static TargetDataLine targetDataLine;
public static int BYTES_PER_BUFFER = 6400;

private static MicrofoneRunnable instance;

public static MicrofoneRunnable getInstance(){
    if(instance == null)
        instance = new MicrofoneRunnable();

    return instance;
}

private MicrofoneRunnable(){
    stopFlag = false;
}

public void setStopFlag(Boolean bool){
    stopFlag = bool;
}


@Override
public void run() {
    targetDataLine.start();
    byte[] data = new byte[BYTES_PER_BUFFER];
    while (!stopFlag && targetDataLine.isOpen()) {
        try {
            int numBytesRead = targetDataLine.read(data, 0, data.length);
            if ((numBytesRead <= 0) && (targetDataLine.isOpen())) {
                continue;
                }
                sharedQueue.put(data.clone());
        } catch (InterruptedException e) {
            System.out.println("Microphone input buffering interrupted : " + e.getMessage());
        }
    }
    targetDataLine.stop();
    targetDataLine.close();
}
}


Jun (Cloud Platform Support)

Jul 8, 2019, 10:36:50 AM
to Google Cloud Developers
Hey Igor,

Google Groups is reserved for general product discussion, Stack Overflow for technical questions, and the Issue Tracker for product bugs and feature requests.

To get better support, you should post to the relevant forum; please read the Community Support article for a better understanding.