--
You received this message because you are subscribed to the Google Groups "Pantheon" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pantheon-stanf...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/pantheon-stanford/e1c58687-6853-437b-aec9-4ffd02eaf4bb%40googlegroups.com.
Thanks, Francis and Keith!
I think I need some time to read or run the Spearmint implementation to understand how the calibrated emulator works and assess whether I should use it to solve my problem.
My problem concerns the “data-driven network” approach. I have real-world datasets from our production environment, covering many users and countries, but replaying the dataset cannot perfectly imitate the real-world network environment: in some cases the resulting performance of CC algorithms looks impossible to achieve in practice. So I think I need to calibrate the dataset and add other suitable network parameters to obtain the final replay settings, not just to close the simulation-to-reality gap in machine learning, but also to close the same gap in traditional CC optimization.

My question, then, is how to calibrate the dataset and choose the additional parameters rationally, so that the emulated results come close to the CC algorithms’ performance in the wild. “Reproduce” may involve two steps: first, repeating the published results, as Keith said; second, porting the method to my situation, to calibrate our dataset and obtain suitable Mahimahi emulation settings. I do feel this process might be laborious, and I’m not sure I would be allowed to work on it, even though I think it would be beneficial for CC optimization.
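To make the calibration idea concrete, here is a toy sketch of what I have in mind: search over candidate emulation parameters (loss rate, bottleneck queue, added delay) to minimize the discrepancy between emulated metrics and the metrics observed in the wild. Everything here is hypothetical — `emulation_result` is a stand-in toy model (a real version would run the CC scheme under mm-delay/mm-loss/mm-link and parse its logs), and the target numbers and parameter grid are made up. A calibrated-emulator approach would replace the exhaustive grid search with Bayesian optimization (e.g., Spearmint):

```python
import itertools

# Hypothetical targets: metrics observed for a CC scheme in the
# production dataset (average throughput in Mbit/s, p95 delay in ms).
REAL_WORLD = {"tput": 2.5, "delay_p95": 140.0}

def emulation_result(loss_rate, queue_pkts, extra_delay_ms):
    """Stand-in for running a CC scheme under Mahimahi with these
    parameters and measuring it. This toy model just makes loss and
    queueing shape throughput and tail delay in a plausible direction."""
    tput = 4.0 * (1.0 - 50.0 * loss_rate) * min(1.0, queue_pkts / 100.0)
    delay_p95 = 2.0 * extra_delay_ms + 0.8 * queue_pkts
    return {"tput": tput, "delay_p95": delay_p95}

def discrepancy(measured, target):
    """Sum of relative squared errors between emulated and real metrics."""
    return sum(((measured[k] - target[k]) / target[k]) ** 2 for k in target)

def calibrate(grid):
    """Exhaustive search over the parameter grid; a calibrated emulator
    would use Bayesian optimization here instead of brute force."""
    return min(grid,
               key=lambda p: discrepancy(emulation_result(*p), REAL_WORLD))

# Hypothetical candidate settings to search over.
grid = list(itertools.product(
    [0.0, 0.005, 0.01],   # random loss rate
    [50, 100, 200],       # bottleneck queue size (packets)
    [20, 50, 70],         # added one-way delay (ms)
))
best = calibrate(grid)
```

The point of the sketch is only the shape of the loop: define a distance between emulated and real-world behavior, then search emulation parameters to minimize it, which is how I understand the simulation-to-reality calibration being discussed.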
Also, I think this is a little different from purely constructing network-parameter combinations to imitate real-world network environments. Incorporating real-world datasets is more complex and challenging; it more easily exposes problems in CC algorithms, even well-designed ones, and helps identify directions for CC optimization.
Last but not least, congratulations! I saw some posts on Twitter that Puffer won the NSDI Community Award. Well deserved!
Jing