
I have been working on the project "Word-level highlighting" and want to put down some points:
As I have understood from the Flite TTS code:
->
NativeFliteTTS.java interacts with the flite library
->
fliteService.cpp receives all the actions from the Java code, and the appropriate actions are then taken in
fliteEngine.cpp
Now, we are able to get callbacks in fliteEngine.cpp after each word, and
receive the same in NativeFliteTTS.java (as described in the image above).
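To make that data flow concrete, here is roughly how the per-word callback coming up from fliteEngine.cpp through JNI could surface on the Java side. This is a minimal sketch: the listener interface, method names, and timing fields are my assumptions for illustration, not the actual NativeFliteTTS.java API, and the native invocation is simulated here.

```java
import java.util.Arrays;

// Hypothetical listener mirroring the per-word callback from fliteEngine.cpp.
interface WordCallbackListener {
    // word: the word just synthesized; startMs/endMs: its position in the audio.
    void onWordSynthesized(String word, long startMs, long endMs);
}

public class WordCallbackDemo {
    // Simulates the native side invoking the Java callback once per word.
    // In the real code this call would come from JNI, not from a loop.
    static void simulateSynthesis(String text, WordCallbackListener listener) {
        long t = 0;
        for (String word : text.split("\\s+")) {
            long duration = 100L * word.length(); // fake per-word duration
            listener.onWordSynthesized(word, t, t + duration);
            t += duration;
        }
    }

    public static void main(String[] args) {
        simulateSynthesis("hello word level highlighting",
            (word, start, end) ->
                System.out.println(word + " [" + start + "-" + end + " ms]"));
    }
}
```

The listener shape matters more than the timing math: whatever the native layer reports per word, the app only needs one Java entry point it can hook highlighting onto.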
If I have correctly understood the problem, we have to send these same values to the GoRead app.
The solution I propose for this problem is:
1. Porting the flite library into the app
Flite calculates everything for text-to-speech on its own, but uses
"android_tts_synth_cb_t ttsSynthDoneCBPointer" to output the voice.
This is the only part of Android that it uses, which prevents it from being an independent library and makes it install as an Android speech service.
So, what we can do is handle this callback accordingly /
find an alternative for it, and then port it along with the flite library to the project.
I am searching for the same, and will update if I find something.
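On the idea of finding an alternative for android_tts_synth_cb_t: one possible direction is to define our own audio-consumer interface inside the app, so the ported flite engine hands each synthesized PCM buffer to a sink we control (e.g. an AudioTrack) instead of the Android TTS service. The sketch below only illustrates that decoupling; the interface name, method signatures, and the simulated buffers are all my assumptions, not flite's real API.

```java
// Hypothetical sink standing in for android_tts_synth_cb_t: a ported flite
// engine would hand each synthesized PCM buffer to this interface, and the
// app decides what to do with it (e.g. write it to an AudioTrack).
interface AudioSink {
    void onAudioAvailable(short[] pcm, int sampleRateHz);
    void onSynthesisDone();
}

public class AudioSinkDemo {
    // Simulates the engine pushing two buffers, then signalling completion.
    static void runSynthesis(AudioSink sink) {
        sink.onAudioAvailable(new short[160], 16000); // 10 ms of silence
        sink.onAudioAvailable(new short[160], 16000);
        sink.onSynthesisDone();
    }

    public static void main(String[] args) {
        runSynthesis(new AudioSink() {
            int totalSamples = 0;
            public void onAudioAvailable(short[] pcm, int sampleRateHz) {
                totalSamples += pcm.length;
            }
            public void onSynthesisDone() {
                System.out.println("received " + totalSamples + " samples");
            }
        });
    }
}
```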
2. Secondly, we are getting the values in the Java code of Flite TTS.
So, can we use public intents (or something similar) to send these values to GoRead?
We can send data across apps in Android, but I am not sure if we can do it here. In my opinion, this option can be considered if it can be shaped precisely to our need.
I will try to test the second option on a test app and provide you with the observations.
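If the intent route works, the payload per event could be as small as a word plus its timing offsets. The sketch below shows the data one event would carry; the field names are my assumptions, and on Android these values would go into intent extras via Intent.putExtra and be read back by a BroadcastReceiver in GoRead. Here a plain Map stands in for the Bundle of extras so the round trip can be shown outside Android.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical payload for one word-boundary event. On Android, these
// fields would be written with putExtra() and read back by a
// BroadcastReceiver in GoRead; a Map stands in for the extras Bundle here.
public class WordEvent {
    final String word;
    final long startMs, endMs;

    WordEvent(String word, long startMs, long endMs) {
        this.word = word;
        this.startMs = startMs;
        this.endMs = endMs;
    }

    // Pack the event the way intent extras would carry it.
    Map<String, Object> toExtras() {
        Map<String, Object> extras = new HashMap<>();
        extras.put("word", word);
        extras.put("startMs", startMs);
        extras.put("endMs", endMs);
        return extras;
    }

    // Rebuild the event on the receiving side.
    static WordEvent fromExtras(Map<String, Object> extras) {
        return new WordEvent((String) extras.get("word"),
                (Long) extras.get("startMs"), (Long) extras.get("endMs"));
    }

    public static void main(String[] args) {
        WordEvent sent = new WordEvent("highlight", 1200, 1700);
        WordEvent received = WordEvent.fromExtras(sent.toExtras());
        System.out.println(received.word + " " + received.startMs + "-" + received.endMs);
    }
}
```

One thing to verify when testing this on a real device is latency: a broadcast per word must arrive fast enough for the highlight to stay in sync with the audio.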
Also, if we port Flite TTS into the app, we would also have to change the highlighting and TTS parts accordingly.
But the main part is getting the callback values in the app.
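For the highlighting side, once a per-word callback arrives the app mainly needs the character range of that word so it can apply a span (on Android, something like a BackgroundColorSpan on the TextView's text). Here is a small plain-Java sketch of the offset computation; the Android span call itself is only shown as a comment, and the helper name is my own.

```java
public class WordOffsets {
    // Returns {start, end} character offsets of the n-th word (0-based) in
    // text, or null if there is no such word. On Android, these offsets
    // would feed something like:
    //   spannable.setSpan(new BackgroundColorSpan(color), start, end, flags);
    static int[] offsetsOfWord(String text, int wordIndex) {
        int i = 0, seen = 0, n = text.length();
        while (i < n) {
            while (i < n && Character.isWhitespace(text.charAt(i))) i++; // skip gaps
            if (i >= n) break;
            int start = i;
            while (i < n && !Character.isWhitespace(text.charAt(i))) i++; // end of word
            if (seen == wordIndex) return new int[] { start, i };
            seen++;
        }
        return null;
    }

    public static void main(String[] args) {
        int[] r = offsetsOfWord("word level highlighting", 1);
        System.out.println(r[0] + "-" + r[1]); // offsets of "level"
    }
}
```

This only works if the text the callbacks refer to and the text on screen tokenize the same way, which is worth checking once the callback values actually reach the app.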
Please provide feedback on this, so that I can continue in the right direction.
Cheers
yashasvi