Hi,
I'm still in the process of migrating from Channels 1 to Channels 2...
So, we have an `AsyncJsonWebsocketConsumer` where websocket clients want to operate on a Django model instance, with most websockets working on the same object.
With Channels 1 we would send the `objID` with every message, do `obj = model.objects.get(pk=objID)`, and read/write on `obj`.
In Channels 2 I can save the `objID` server-side in the consumer class, e.g. as `self.objID`, set by one initial `set-obj-id` message.
For ORM access I now have to do `obj = await database_sync_to_async(model.objects.get)(pk=self.objID)`.
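To make that pattern concrete: the mechanics of the wrapping are like pushing any blocking callable onto a thread. Here's a toy stand-in I wrote just for illustration (this is NOT the real `database_sync_to_async`, which additionally handles Django DB connection cleanup; `blocking_get` is a made-up placeholder for `model.objects.get`):

```python
import asyncio
import functools

def to_async(func):
    """Toy stand-in for database_sync_to_async: run a sync callable
    in the default thread pool so the event loop isn't blocked."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            None, functools.partial(func, *args, **kwargs))
    return wrapper

def blocking_get(pk):
    # Stands in for model.objects.get(pk=pk).
    return {"pk": pk, "name": "demo"}

async def main():
    # Same call shape as: await database_sync_to_async(model.objects.get)(pk=...)
    return await to_async(blocking_get)(pk=7)

obj = asyncio.run(main())
print(obj["pk"])
```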
QUESTION: Is it legitimate to store the model instance itself, like `self.obj = await database_sync_to_async(model.objects.get)(pk=objID)`, and keep that instance around for the lifetime of the socket, doing `await database_sync_to_async(self.obj.save)()` on the respective messages?
Note: multiple websockets might operate on the same obj/objID! So my feeling is that this would work as long as I keep testing with `runserver`, or in production as long as I know I have only ONE process; but in general the Django caching mechanism might cause inconsistencies... !?
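To illustrate what I'm worried about without any Django at all (a toy in-memory "DB" dict stands in for the table, and `FakeObj` for a long-lived model instance held by one consumer):

```python
import copy

# Toy in-memory "database": pk -> row dict.
DB = {1: {"pk": 1, "name": "old"}}

class FakeObj:
    """Stands in for a model instance one consumer keeps around."""
    def __init__(self, pk):
        # Snapshot the row at fetch time, like objects.get() does.
        self.__dict__.update(copy.deepcopy(DB[pk]))

    def save(self):
        # Write the instance's current state back, last writer wins.
        DB[self.pk] = {"pk": self.pk, "name": self.name}

# Two websocket consumers each hold their own instance of the same row.
obj_a = FakeObj(1)
obj_b = FakeObj(1)

obj_a.name = "new"
obj_a.save()            # consumer A writes back

print(obj_b.name)       # consumer B's long-lived instance is now stale

obj_b.save()            # ...and a blind save clobbers A's write
print(DB[1]["name"])
```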
Another, somewhat related question is regarding a kind of "background routine" that I started to implement as an asyncio Task.
Since that routine should be able to be cancelled and paused (I'm using an asyncio.Event) from any websocket consumer:
QUESTION: What is the best place to store that Task object? Again, with `runserver` I can just use a global module-level `dict` to keep those Task objects.
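Roughly what I have in mind, in plain asyncio (all names made up, no channels code here; the module-level dicts are the part I'm unsure about for multi-process setups):

```python
import asyncio

# Module-level registries, keyed by objID; fine within a single process.
TASKS = {}         # objID -> asyncio.Task
PAUSE_EVENTS = {}  # objID -> asyncio.Event

async def background_routine(obj_id, results):
    """Toy routine: one unit of 'work' per loop, honouring the pause event."""
    while True:
        await PAUSE_EVENTS[obj_id].wait()  # blocks while the event is cleared
        results.append(obj_id)             # the "work"
        await asyncio.sleep(0)             # yield to the event loop

def start_routine(obj_id, results):
    PAUSE_EVENTS[obj_id] = asyncio.Event()
    PAUSE_EVENTS[obj_id].set()             # start unpaused
    TASKS[obj_id] = asyncio.create_task(background_routine(obj_id, results))

async def main():
    results = []
    start_routine(42, results)
    await asyncio.sleep(0.01)              # let it run for a bit

    PAUSE_EVENTS[42].clear()               # "pause" from any consumer
    n = len(results)
    await asyncio.sleep(0.01)
    assert len(results) == n               # no progress while paused

    TASKS[42].cancel()                     # "cancel" from any consumer
    try:
        await TASKS[42]
    except asyncio.CancelledError:
        pass
    return results

results = asyncio.run(main())
```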
The whole async experience is really twisting my mind and interesting at the same time ;-)
Regards,
Sebastian