Yes, it is planned to cover streaming as well.
Even if we think of a streaming job as running continuously, it still has a lifecycle.
A streaming job will still have runs as it gets stopped, upgraded and started again.
The job version will track whether the code has been updated.
A dataset could be a Kafka topic. In that sense, you might want to capture metadata like the offsets at which the job started or stopped.
Facets are meant to allow capturing metadata specific to certain types of jobs or datasets.
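As a rough illustration, such a facet could record the start and end offsets per partition for a run reading a Kafka topic. This is just a sketch: the field names (`kafkaOffsets`, `partitionOffsets`, `start`, `end`) and the topic name are hypothetical, not an official facet schema.

```python
import json

# Hypothetical dataset facet recording the Kafka offsets at which a
# streaming run started and stopped. All field names are illustrative.
offsets_facet = {
    "kafkaOffsets": {
        "topic": "orders",  # hypothetical topic name
        "partitionOffsets": {
            # partition -> offsets consumed during this run
            "0": {"start": 1500, "end": 2300},
            "1": {"start": 980, "end": 1770},
        },
    }
}

# A facet like this would be attached to the dataset in the run event.
print(json.dumps(offsets_facet, indent=2))
```

Attaching offsets per run this way would let you tie each stop/start cycle of the streaming job back to the exact slice of the topic it processed.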
The difference is that batch jobs usually consume and produce a predefined amount of data, whereas a streaming job does not.