Here are the minutes from the telecon (thank you, Guy Davidson and Timur Doumler, for taking them).
I've also uploaded them to the Kona pages of the WG21 Wiki for those who have access to that:
--
Action items: 2D graphics?
RO: A few things were to happen, but they all involve papers so there has
been no advance.
TD: There was an updated paper, there was discussion of a possible meeting,
but there is no progress on the paper.
RO: We were in favour of time being spent, but I haven't heard of any
updates. We also voted that the webview was worth spending time working on,
but there has been no update. I have not heard from Mike McLaughlin.
GD: Graphics paper: several people, both within and outside the committee,
mailed and asked what the status is. I tried to contact Mike and heard
nothing. The authors (incl. David Ludwig) have a Slack channel but it's been
mostly dead lately. We have a paper that would be good to go, but we need
more direction. In the meantime I'm submitting parts of the paper, like
linear algebra. There will be a geometry paper as a follow-up, and a colour
paper is in the works as well. Now it gets complicated: we need a windowing
paper.
Sorry to open the meeting with an existential crisis about the purpose of
the group. But we need to talk about it. I need to get some affirmation from
SG13 that HMI should happen.
RO: There was a heated discussion about whether HMI should happen on the
library-ext reflector in response to the audio paper. I tried to get a
response from Herb and Bryce. What is the relationship between LEWGI and
SG13? Where does a paper go first?
JM: IMHO, it should go to an SG like SG13 first. The incubator should then
not have a say in papers that go to a study group like SG13. Anyone who has
profound opposition will be in LEWG and not the incubator group. That said,
for this particular study group, given the fundamental opposition to I/O
that some people have, we should not invest too much effort into papers like
graphics and audio before we can get some affirmation from LEWG to proceed.
Brett: So what direction should we take? New devices are coming out and it
would be nice to have a standard on how we should approach that. Are we
working towards devices, embedded, something else?
RO: Part of that is a bigger question that WG21 are wrestling with. My own
view is that the low-level stuff that gives you a platform-independent API
makes more sense in the standard. Then you are not reliant on somebody out
there maintaining a package on 27 different bits of hardware; it becomes
something the vendor owns. Anything above that fits nicely into a package
manager or Boost, and the code is then based on a fairly firm foundation. It
may be that with audio, step one would be to get a common interface that
people can use across the board. That would be a good step forward. Whether
any of the
entities that come on top of that can be standardised I don't know. That
direction is unclear to me.
TD: I agree with Roger. What is in the audio paper right now, and the same
could be said about graphics and windowing, is the bare minimum that you
need to interact with the sound card. Everything else on top of that doesn't
have to be in the standard. Right now there is no way at all in the standard
to interact with graphics or audio. Most languages provide them; C++
shouldn't be the weird one that doesn't do these things.
RR: The low level bits of graphics are rapidly mutating while audio is not.
TD: If the audio paper had been done 20 years ago, it would have looked
pretty much the same. That is the difference from graphics.
Brett: We should try to define an API that would anticipate these changes.
The direction of graphics hardware is multicore. We could put a standard API
on there.
RO: It has been interesting over the last couple of weeks seeing the
difference being pointed out between audio and graphics. It should be a lot
more straightforward to get the audio through.
TD: It feels wrong to have just the audio in the standard without the rest
of HMI. If you want to write any simple kind of game for example, you always
need windowing, drawing, control, audio. It is just odd.
DL: One of the things with graphics is that there are some established
libraries out there, SFML, SDL; people seem to like using those. They then
get wary when there's a new kid on the block.
TD: Same with audio: some of the libraries are very popular, but the basic
bits of the API are the same. I totally understand people saying we should
only standardise an extant library, but not in this particular case: these
libraries have been out there for decades.
DL: Is there value in having a C++ API that someone writes a paper for that
is aiming to be a standard API with implementations that wrap up some of
these other libraries, perhaps as a bootstrapping mechanism?
RR: There is value in it. Most of the audio libraries I know of are C, not
C++.
RO: This paper brings first-class C++ into it. My hope would be that
something would happen with this paper similar to the graphics paper: if we
had implementations of the paper on different audio suppliers' hardware, we
would get a proof of concept.
TD: I have written a CoreAudio implementation of the API and the API needs a
little tweak, but it should be ready by Kona. Probably it needs some period
of usage. Others would need to write implementations.
RO: This is where you get a lot of good feedback: when someone who wasn't a
paper author implements it and gives feedback.
TD: I have offers of help. This is going to happen this year, then we need
to see what happens.
GD: This is what happened with the graphics paper.
RO: Successful audio might help smooth the path of graphics. The other
question is that we still have to answer the technical objections that we
have received about the graphics paper. The Google Group will be moving away
from Google because of the trouble SG14 had. I'm hoping it will simply be a
matter of transferring all the members.
GD: Sadly, the SG14 transfer hasn't gone so well. It's not smooth.
Tom: We may want to focus on naming standards too; the mailing lists are
unevenly named.
DL: There has been talk of a library extension group. Is this for real?
TD: This is the Library Evolution reflector, LEWG.
RO: Vinnie posted on that reflector first, then on SG13. The discussion took
place on LEWG. Not sure how best to handle that one. The wider question
about what goes in the library does belong on LEWG.
DL: I was curious to hear if concerns were the same at different levels.
RO: I don't think we all agree. Some think the library should be large, some
small, some public, some with the vendor; there are varying desires. We do
have some work to do on that mindset, but it's bigger than SG13.
TD: There is a direction group; could we ask them to clarify their opinion
on the wider HMI topic?
RO: As it happens, we're having a telecon next Wednesday. There is a paper
now so I can add it to the agenda.
TD: It can be phrased as a wider HMI issue, not just audio.
Loic Joly: The interest of graphics was twofold: to ease adoption of C++ and
to ease teaching. The value is different for audio, smaller.
RO: I agree, playing sounds would be nice.
TD: Easy to do with this paper too.
RO: Conceptually more straightforward than graphics.
JM: We DID approve the <chrono> date extensions, which are quite large, and
we also have the networking extensions coming in, based on Boost.ASIO, which
are also large; I haven't heard people arguing about those. Networking is
huge and interacts with the outside world, similarly to graphics and audio.
RO: My feeling is it would make sense to have an interface to bits of
hardware that are standard on most computers.
JM: We have to make sure we are well integrated enough. The networking
standard comes with its own model of asynchronicity, with callbacks; audio
is part of that picture.
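
(For illustration, a minimal sketch of that completion-handler model as it
appears in Boost.ASIO; socket setup is elided here, so as written the
handler simply fires with an error:)

    #include <boost/asio.hpp>
    #include <array>
    #include <cstdio>

    int main() {
        boost::asio::io_context io;
        boost::asio::ip::tcp::socket sock(io);   // not opened/connected here
        std::array<char, 4096> buf;

        // The networking model: initiate an operation and supply a callback
        // that is invoked when data (or an error) arrives.
        sock.async_read_some(boost::asio::buffer(buf),
            [](const boost::system::error_code& ec, std::size_t n) {
                if (!ec)
                    std::printf("received %zu bytes\n", n);
                else
                    std::printf("read failed: %s\n", ec.message().c_str());
            });

        io.run();   // run handlers until no work remains
    }
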
RO: Are there any technical questions that people have about the audio
paper? The first question is "Do we want to pursue audio?" Politically it's
worth getting a poll from a wider audience.
DL: Is there anything to help implementers like test code?
TD: Really sorry, the code isn't online yet. It is a bunch of header files,
unit tests, platform dependent stuff, a CoreAudio backend. There is enough
material for someone to start implementing another backend. There are a few
API questions to be answered. Support for audio files, MIDI, isn't there
yet, but this will happen in the next few months. Someone could start now,
but the API is going to change.
RO: One of the good things about coming to SG13 with the design is that
people can come up with improvements, use cases, and design ideas. The
downside of arriving with an implementation is that a lot of distance has
been travelled already, and it's harder to turn around.
TD: We need feedback from experts: typically you would reinterpret_cast from
a void pointer or use virtual functions. This is the kind of advice we need.
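
(For concreteness, the two styles mentioned might look roughly like this;
the names are illustrative, not taken from the paper:)

    // C style: a function pointer plus an opaque void* that the callback
    // casts back to the user's own type.
    using audio_callback = void (*)(void* user_data, float* buffer, int frames);

    struct my_synth { float gain = 1.0f; };

    void synth_callback(void* user_data, float* buffer, int frames) {
        auto* synth = static_cast<my_synth*>(user_data);  // recover real type
        for (int i = 0; i < frames; ++i)
            buffer[i] *= synth->gain;
    }

    // C++ style: a virtual interface that the device calls into instead.
    struct audio_io_handler {
        virtual void process(float* buffer, int frames) = 0;
        virtual ~audio_io_handler() = default;
    };
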
RO: One of the reasons for having the study groups at the end of the
committee week is that some of the library people will be able to come.
TD: We want to make it consistent with the standard library. For example,
you can have multiple audio devices connected; how would that seem "normal"
in the standard library? This is why we want to bring it to experts.
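
(For illustration, one hypothetical way device enumeration could feel
"normal" is as an ordinary range; none of these names are from the paper:)

    #include <cstdio>
    #include <string>
    #include <vector>

    struct audio_device_info {
        std::string name;
        bool is_input = false;
    };

    // Hypothetical free function; a real implementation would query the OS.
    std::vector<audio_device_info> enumerate_audio_devices() { return {}; }

    void list_outputs() {
        for (const auto& d : enumerate_audio_devices())   // plain range-for
            if (!d.is_input)
                std::printf("%s\n", d.name.c_str());
    }
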
LJ: I see some different discussions about <lost> arrays; I think there is
some common stuff to do there.
TD: There is a data structure for an audio buffer which is basically a 2D
array, which isn't in the standard, so we have to do our own. A lock-free
queue is something we need.
GD: I have one in flight, P0059.
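
(As a sketch of the kind of 2D structure meant here: a non-owning
channels-by-frames view over deinterleaved sample data; illustrative, not
the paper's API:)

    #include <cstddef>

    class audio_buffer_view {
    public:
        audio_buffer_view(float* data, std::size_t channels, std::size_t frames)
            : data_(data), channels_(channels), frames_(frames) {}

        // Element access as a 2D array: channel-major, frames contiguous.
        float& operator()(std::size_t channel, std::size_t frame) {
            return data_[channel * frames_ + frame];
        }

        std::size_t channels() const { return channels_; }
        std::size_t frames() const { return frames_; }

    private:
        float* data_;
        std::size_t channels_;
        std::size_t frames_;
    };
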
RO: We should identify any separable bits that can be presented, see if
there is anything outstanding already. How would it interoperate with the
rest of the standard?
TD: Interacting with the filesystem is obvious: open an MP3 file and play
it, or save a recording to your disk. We haven't covered audio files yet.
Phase one is the device, phase two is the file, phase three is MIDI. We
aren't there yet.
JM: The file thing is easy and uninteresting because it is totally
synchronous. This isn't how networking works.
TD: It's not actually synchronous: you always process audio on a
high-priority real-time thread. Whenever you want to do anything with audio
you need a way to communicate with another thread. You definitely need to
synchronise threads.
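
(For illustration, the simplest form of that cross-thread communication: a
parameter shared with the real-time thread through an atomic, read once per
block; the names are illustrative:)

    #include <atomic>

    std::atomic<float> gain{1.0f};

    // Runs on the high-priority real-time audio thread: no locks, no
    // allocation, just a relaxed atomic load once per buffer.
    void process(float* buffer, int frames) {
        const float g = gain.load(std::memory_order_relaxed);
        for (int i = 0; i < frames; ++i)
            buffer[i] *= g;
    }

    // Runs on the UI/main thread.
    void set_gain(float g) { gain.store(g, std::memory_order_relaxed); }
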
RO: If I use asynchronous networking, can I read and write an audio file?
Networking comes with its own set of interfaces. Don't think reading files,
think receiving data.
JM: At random times a new chunk of bytes arrives and needs to be output.
Maybe there is buffering, waiting for the next chunk to arrive. Independent
threads make me slightly nervous - this seems to stop us being platform
neutral.
TD: Different operating systems do different things. We tried to find the
common denominator. These are interesting questions, but not for now; we
should discuss this in Kona at the evening session.
Guy Somberg: I do want to give some brief answers. If you receive bytes from
the network and want to output them, the mechanism is to buffer the data in
a buffer longer than the audio you have in flight; you then read that
buffer, decompress it on another thread, and put the result into a buffer
that the audio thread is reading.
TD: A lock-free queue, typically.
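
(A sketch of such a queue: a single-producer/single-consumer lock-free ring
buffer where the decoder thread pushes samples and the audio thread pops
them; illustrative only:)

    #include <atomic>
    #include <cstddef>
    #include <vector>

    class spsc_ring {
    public:
        explicit spsc_ring(std::size_t capacity) : buf_(capacity + 1) {}

        bool push(float v) {                       // decoder thread only
            const auto w = write_.load(std::memory_order_relaxed);
            const auto next = (w + 1) % buf_.size();
            if (next == read_.load(std::memory_order_acquire))
                return false;                      // full
            buf_[w] = v;
            write_.store(next, std::memory_order_release);
            return true;
        }

        bool pop(float& v) {                       // audio thread only
            const auto r = read_.load(std::memory_order_relaxed);
            if (r == write_.load(std::memory_order_acquire))
                return false;                      // empty
            v = buf_[r];
            read_.store((r + 1) % buf_.size(), std::memory_order_release);
            return true;
        }

    private:
        std::vector<float> buf_;
        std::atomic<std::size_t> write_{0};
        std::atomic<std::size_t> read_{0};
    };
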
RO: Use cases, a sketch of how it would work, and then checking that the
interfaces are compatible would be helpful.
TD: I think the next paper would be a great place to address this.
RO: The other question I have is more general: do you think you have access
to the right set of audio experts?
TD: Between GS and me, we represent a large network. We presented this at
the Audio Developer Conference in November.
GS: There's an Adobe internal conference in a couple of weeks where I will
be presenting. GDC would not be much use; game developers don't live at this
low level. Game devs would use this by proxy. Some of our plans make it more
palatable for some classes of game developers.
TD: Pro audio people are at a lower level and those who heard it at ADC were
full of feedback. We have the support of the industry.
RO: One of my problems with the graphics paper is that the graphics experts
don't come to WG21. Is it easier to make contact with the right people in
audio?
TD: It will be easier for audio, Cologne should have several audio people
JM: Let's not jump the gun; make sure that the expectations are set right
for those people. I need more text about the landscape in the presentation,
more about the big picture.
RO: I hope Jens' questions will be answered by the evening session.
JM: Can we make that consumable offline?
LJ: Maybe we could also record a video?
TD: There will be a video from my ACCU Conference talk, you can watch that.
JM: Just write a P paper. I need that in order to understand whether such an
audio proposal can address my use cases. Give an overview over the general
landscape of audio. Describe the different layers. That should be an
independently useful paper that doesn't go into the low-level bits that you
cover in the current paper.
TD: No time to write this paper before Kona, but we'll try to put it out
afterwards. This is excellent feedback.
RO: If anyone is interested in remote participation for Kona, please write to
the SG13 mailing list. Thank you all for the participation!