Hi Kamlesh,
I'm not part of the ProxySQL team, but I found this question interesting and hopefully I can provide some input based on my usage of ProxySQL.
First, you can put the Aurora replicas into hostgroups as you have indicated, '10' for some read traffic and '20' for other read traffic, but that means manually managing the Aurora instances in those hostgroups: you could not take advantage of ProxySQL's auto-discovery of new instances as they come online, or as they are replaced by Aurora (rare, but it can happen).
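Just to illustrate the manual approach in the ProxySQL admin interface - this is only a sketch, and the hostnames below are placeholders for your actual Aurora instance endpoints (hostgroups 10/20 are the ones from your example):

```sql
-- Sketch: pin specific Aurora reader instances to hostgroups 10 and 20.
-- Hostnames are placeholders, not real endpoints.
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (10, 'reader-a.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com', 3306),
       (20, 'reader-b.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com', 3306);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```

The downside, as mentioned, is that if Aurora replaces an instance or you add replicas, you (or some automation) have to update those rows yourself.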
I have a similar situation with some very read-heavy workloads that needed isolation, so we built a second Aurora cluster that uses traditional binlog replication to receive data updates from the main Aurora cluster. This lets the read-heavy workloads run in isolation from the regular workloads, with binlog replication keeping the two clusters almost in sync. Occasionally the replication lags a bit, but that is down to certain workloads overloading the second cluster so the replication SQL thread cannot keep up with the IO thread. That wasn't a big issue for us, as the output from the read-heavy traffic did not need to be real-time.
If you go down this route, you can take advantage of ProxySQL auto-discovery by specifying multiple entries in the `mysql_aws_aurora_hostgroups` and `mysql_replication_hostgroups` tables. That way the two clusters are managed separately from the ProxySQL perspective, and ProxySQL can add or remove instances as required when you scale your clusters in or out (maybe using Aurora auto-scaling?).
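For the two-cluster setup, the auto-discovery configuration could look roughly like this - again only a sketch: the hostgroup numbers, DNS suffixes and instance names are placeholders, and `domain_name` is the DNS suffix ProxySQL appends to the server IDs it discovers:

```sql
-- Sketch: one row per Aurora cluster, writer hostgroups 1/2, reader hostgroups 10/20.
INSERT INTO mysql_aws_aurora_hostgroups
    (writer_hostgroup, reader_hostgroup, active, aurora_port, domain_name, comment)
VALUES
    (1, 10, 1, 3306, '.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com', 'main cluster'),
    (2, 20, 1, 3306, '.yyyyyyyyyyyy.us-east-1.rds.amazonaws.com', 'reporting cluster');

-- Seed each cluster with one known instance; ProxySQL discovers the rest and
-- keeps the hostgroups up to date as instances come and go.
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (1, 'main-instance-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com', 3306),
       (2, 'report-instance-1.yyyyyyyyyyyy.us-east-1.rds.amazonaws.com', 3306);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```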
Whichever route you choose, I would recommend using separate usernames for each of the applications. That way you would not need to specify /* REPORTING */ as a query hint to select the target hostgroup; instead you set a default hostgroup per username, so each user is routed to the appropriate hostgroup and therefore to the required replica instances. Your users with 'write' permissions would then be routed by default to the hostgroup of the 'writer' instance in the primary cluster.
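A sketch of what that could look like - the usernames 'app_rw' and 'app_reporting' are just examples I've made up, and the default_hostgroup values match the hostgroups above:

```sql
-- Sketch: route each application by its username's default hostgroup.
-- 'app_rw' lands on the writer hostgroup of the primary cluster,
-- 'app_reporting' lands on the reader hostgroup of the reporting cluster.
INSERT INTO mysql_users (username, password, default_hostgroup)
VALUES ('app_rw',        'changeme', 1),
       ('app_reporting', 'changeme', 20);

LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
```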
You may still get the problem of all the connections from a particular source going to one instance if they start up as part of a pool, but we've found that configuring a delay in the pool warmup and keeping the initial minimum connections in the pool small does seem to help with that, too. Don't ask me what the connection parameters are, I can't remember - the app team deal with that side of things.
On the connection side of things, we have a small ProxySQL cluster in AWS running on three m7.large instances, and we regularly push over 5k connections to the backend Aurora clusters. I suggest tuning the ProxySQL parameter mysql-connection_max_age_ms to a value other than zero (unlimited), as we found that the connections to Aurora otherwise continued to grow and consumed a significant amount of RAM on the instance. We have ours set to 2 hours, so ProxySQL drops idle connections to Aurora after 2 hours.
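From the admin interface that is something like this (7,200,000 ms = 2 hours):

```sql
-- Close backend connections once they are older than 2 hours and idle,
-- instead of keeping them around forever (0 = unlimited, the default).
SET mysql-connection_max_age_ms = 7200000;
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
```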
I hope that helps point you in the right direction - I'm not saying it's the best/ultimate solution for you, but it would satisfy most of your requirements.
Regards,
Dave