How would you know that more data produced a better result? You can always assert that and repeat the sentence; do you just want to be "right", or can you clarify what the data for, say, WaveNet has to look like for it to perform the way it was presented in the WaveNet papers?
--
You received this message because you are subscribed to the Google Groups "Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to discuss+unsubscribe@tensorflow.org.
To post to this group, send email to dis...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/discuss/988602c5-5226-4377-a6f5-a9af835a0b9c%40tensorflow.org.
Most companies can't open-source their ML work, because the training data can't be published and the code alone would be useless without it. Hence you see very few production-quality open source ML projects.

"Expensive" means recording, say, a hundred thousand audio clips, annotating the precise words that were spoken, and aligning the text to the audio. This takes time per clip; now multiply by the number of clips and by an estimated hourly pay to get a figure.
On Fri, Mar 2, 2018 at 2:01 PM, 'Der Zurechtweiser' via Discuss <dis...@tensorflow.org> wrote:
What does "expensive" mean in "it's also expensive to produce good training data"?

Your argument is like saying that silicon was perfect and the problems only came from people building Turing machines with it. But isn't it surprising that none of the projects built on TensorFlow are convincing? By contrast, a lot of projects built on C++ are convincing.

On Friday, March 2, 2018 at 12:28:20 PM UTC+1, Edvard Fagerholm wrote:

I think you're complaining to the wrong people. TensorFlow is a framework for implementing neural networks (among other things), while WaveNet is a particular network for which there happens to be a TensorFlow implementation. You don't complain to the Clang project that their C++ compiler sucks because you cloned someone's C++ code from GitHub and it didn't work. None of the networks you mention would work any better if you ported the same code to, e.g., PyTorch.
BTW, speech synthesis and speech-to-text are well known to be very data-hungry problems for which it's also expensive to produce good training data. Making these work well for a smaller local language is a pretty big undertaking due to the required data collection and annotation.
Best,
Edvard
- tf.enable_eager_execution()
- tf.contrib.quantize package
- tf.custom_gradient
- Dataset, with new tf.contrib.data.SqlDataset
- tf.contrib.framework.CriticalSection
- tf.regex_replace
- tf.contrib.data.bucket_by_sequence_length