Reminder of accessibility metadata call coming up in an hour (9:00 AM PDT)


Charles Myers

Oct 22, 2013, 10:53:14 AM
to a11y-metad...@googlegroups.com, public...@w3.org

I'm back and caught up on accessibility metadata from the calls of two weeks ago.  The agenda for today's meeting can be seen below and at https://wiki.benetech.org/display/a11ymetadata/Next+Accessibility+Metadata+Meeting+Agenda

 

I also wrote up our minutes from the last two meetings at https://wiki.benetech.org/pages/viewpage.action?pageId=58853548 and the issue tracker has been updated for the mediaFeature issue: http://www.w3.org/wiki/WebSchemas/Accessibility/Issues_Tracker#What_is_the_goal_of_mediaFeature.3F_.28conforming_or_informational.29_Do_we_have_this_right.3F

 

Note that we have a new conference call number this week.  And we will be back on a regular weekly schedule from this point on.

October 22, 2013 Accessibility Metadata working group call

Weekly Meeting
Schedule: The next call will be Tuesday, October 22, 9:00 AM PDT (California), 12:00 PM EDT (Ontario, New York), 5:00 PM in London, 6:00 PM on the continent, and 3:00 AM in Australia
Conference call: +1-866-906-9888 (US toll free), +1-857-288-2555 (international), Participant Code: 1850396#
Etherpad: (10/22/2013)
IRC: Freenode.net #a11ymetadata (although more of the collaboration seems to happen in the etherpad)

The goal of the call is to review the open issues on the W3C wiki, reach closure on them, and work them through with the schema.org representatives.  See the issues and the accessMode/mediaFeature matrix. There will also be a discussion of the use of these attributes for search, as shown in the blog article.

The next call will be October 22, and we will then settle into weekly meetings as required.

The public site is http://www.a11ymetadata.org/ and our twitter hashtag is #a11ymetadata.

Overall Agenda

New Business - We will start discussing this promptly at the top of the hour.

  • mediaFeature - our goal is to get agreement on the mediaFeature properties, as noted in the issue list.  As noted in the last call's minutes, we did a deep dive into visual and textual transform features last time. I've edited the list down to reflect both the new properties that we decided on last time and some of the simplifications that come with the extension mechanism. I'd like to reach a conclusion on those, both for the specific names and for the general framework, so that one can see the extension mechanism.  I'd even like to propose that we segment this discussion into two parts: agreement on the current properties and then consideration of new properties (I want to see the discussion make progress). A rough sketch of how these properties might look in a record, for the search use case, follows this agenda.
    • transformFeature - do we like that name (as against "content feature")?
      • Finish discussion on visualTransformFeature and textualTransformFeature
      • Consider auditoryTransformFeature (structural navigation will be covered in textualTransform) and tactileTransform
    • Review contentFeature side of the mediaFeatures starting from the proposed table in the issues list
      • textual (note the removal of describedMath) - alternativeText, captions, chemML, latex, longDescription, mathML, transcript
      • tactile (note the simplification of braille to be the extended form) - braille, tactileGraphic, tactileObject
      • auditory - audioDescription
      • visual - signLanguage, captions/open
  • ATCompatible
  • ControlFlexibility and accessAPI (we'll be lucky if we get to this point)
  • accessMode and the three proposals for the available access modes (this is a topic for a future call)
  • is/hasAdaptation
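
For the search use case, here is a rough, purely illustrative sketch (in Python) of how a record carrying these properties might be filtered. The property names follow the current proposals (accessMode, mediaFeature, ATCompatible), not a final vocabulary, and the record values are invented.

# Hypothetical record using the proposed (not yet agreed) property names.
resource = {
    "name": "Example physics textbook",
    "accessMode": ["visual", "textual"],
    "mediaFeature": ["alternativeText", "longDescription", "mathML"],
    "ATCompatible": True,
}

def has_features(record, wanted):
    # True if the record declares every feature the searcher asked for.
    return set(wanted) <= set(record.get("mediaFeature", []))

print(has_features(resource, ["mathML"]))            # True
print(has_features(resource, ["audioDescription"]))  # False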

Liddy Nevile

Oct 29, 2013, 12:04:15 PM
to Charles Myers, a11y-metad...@googlegroups.com, public...@w3.org
My comments...

Charles Nevile ...
Charles raised the question of whether these attributes are a
declaration of conformance (as in alternativeText means that "all of
the photographs and other media have alternate text") or just whether
the author of the content (or adapted version of the content) used
alternate text on the significant parts of the content to the best of
their abilities. The intent of these is the latter. Since this
metadata is being added by people who care about accessibility, we
have to trust that they will apply their best efforts before they'd
add the attribute.

It has long been a tradition in the DC world of metadata to assume
that people have good intentions - they don't always, but those who do
make it worthwhile trusting...

then there is a discussion about mediaFeature.... I am developing some
fairly strong feelings about this. First, I don't think 'mediaFeature'
is anything like as good a name as accessFeature, given that we are
mostly describing things that are done to increase accessibility - and
we have accessMode... Then Jutta wanted us to add in 'adaptation' or
the equivalent. I think that a feature implies something special, but
taking Jutta's position it might be better to have them called
accessAdaptation - i.e. for things like captions etc.? Certainly I
would not want both feature and adaptation in a single name - that
would be introducing redundancy, I think...

Next, I think the idea that we should label things because someone
tried to fix it is absurd - to be honest. We are asking people to make
assertions about the resource, or their needs, not to tell us how nice
they are. An assertion, made in good faith, should mean that something
has been achieved - e.g. alt tags for all images, etc.

Next, I want us to be clear about accessMode. As Charles Nevile and I
understand it, this will be a set of assertions that tell us the
minimum complete sets of accessModes that will convey all the
content of a resource. So we might get visual + text, visual + audio,
text, etc. - i.e. more than one statement. This can be done and it
involves a trick - generally the value of RDF means that if I make an
assertion and then you add another, both bits of info can be put
together to make a richer statement. In this case, we certainly do not
want that to happen! In RDF the merging of statements can be avoided
by using what is known as a 'blank node'.
I am writing all this because I think both being clear about the use
of accessMode and knowing that it will work is really important :-)
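
To make the blank-node point concrete, here is a minimal sketch using Python and rdflib, purely as illustration; the predicate names hasAccessModeSet and accessMode and the example URIs are invented, not agreed vocabulary. Each complete combination hangs off its own blank node, so the statements for different combinations never merge into one undifferentiated bag of modes.

from rdflib import BNode, Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/terms/")     # illustrative namespace
g = Graph()
video = URIRef("http://example.org/video1")

# First complete combination: visual + auditory
combo1 = BNode()
g.add((video, EX.hasAccessModeSet, combo1))
g.add((combo1, EX.accessMode, Literal("visual")))
g.add((combo1, EX.accessMode, Literal("auditory")))

# Second complete combination: visual + textual
combo2 = BNode()
g.add((video, EX.hasAccessModeSet, combo2))
g.add((combo2, EX.accessMode, Literal("visual")))
g.add((combo2, EX.accessMode, Literal("textual")))

# Because each set is grouped under its own blank node, a consumer reads two
# distinct combinations rather than one merged statement.
print(g.serialize(format="turtle"))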



Madeleine Rothberg

Oct 29, 2013, 12:32:07 PM
to Liddy Nevile, Charles Myers, a11y-metad...@googlegroups.com, public...@w3.org
Liddy,

I can't write a full response because I am in another meeting, but I want to stress that the idea you have raised of a minimum complete set of accessModes is useful, but it should not replace accessMode as previously defined. I believe we must retain the accessMode field that lists the access modes a resource uses to communicate. When alternatives are added or linked, more access mode combinations become viable, and those can feed into the list of the various minimum complete sets of accessModes.

Madeleine

Charles Myers

Oct 29, 2013, 2:25:15 PM
to Madeleine Rothberg, Liddy Nevile, a11y-metad...@googlegroups.com, public...@w3.org
And I'll point out two things from the call.

1) That our overall goal is to make simple things easy and difficult things possible. That's a good W3C perspective.
and then
2) Andy suggested that the most important thing was to agree on a common data model. There may be multiple paths (different sets of metadata) to that model for different levels of complexity, but they all work towards the same data model with clear paths.

I still believe that the issue we're looking at is that access modes mean multiple things to multiple people.
But, now that we're in agreement on mediaFeature as much as we can be, it's time to consider accessMode. I'll create my writeup and get it up on the wiki soon. Hopefully, this can achieve a common data model and some use cases against it. I believe that this can express access modes in ways that work for all, from implied to explicit to after augmentation (the "destination" access modes).


Liddy Nevile

Oct 29, 2013, 5:01:28 PM
to Madeleine Rothberg, Charles Myers, a11y-metad...@googlegroups.com, public...@w3.org
Madeleine,
you seem to have misunderstood me.

I am saying, as Charles Nevile also understands it, I believe, that
when stating the accessMode, one states what is required to be able to
comprehend and use a resource.

If there is a range of things available, say video (including audio) and
captions, some users will use the audio and some the captions -
correct? In this case, the video could have accessModes:

visual + auditory
visual
visual + text

A user who wants captions would probably have visual + captions in
their profile. It is easy to infer that they want the video with the
captions on the screen (however they get there) - they might also get
the sound but as they have not included it, that is not an accessMode
they are asking for. Clearly they will want this resource - no?

A person who does not have vision might also be interested in this
resource. They will probably say their accessModes are text and
auditory and so they are not likely to want this resource - they have
not included visual and the resource is, apparently, incomplete
without it.

What is different about this? I think I was just adding, in my email,
that this can be done so that the resource description and the user-needs
statements of accessModes do not get concatenated, which would make
them useless, and that this prohibition is possible - contrary to what
normally happens with metadata.
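
For what it's worth, once the combinations are kept separate the matching itself is mechanical. A minimal sketch in Python, assuming the resource carries its complete combinations and the user profile is one set of modes (all names illustrative):

# The captioned video from the example above, as separate complete combinations.
resource_combinations = [
    {"visual", "auditory"},   # watch with the sound on
    {"visual"},               # watch without sound
    {"visual", "textual"},    # watch with captions
]

def usable_by(profile_modes, combinations):
    # A combination works if everything it requires is in the user's profile.
    return any(combo <= profile_modes for combo in combinations)

print(usable_by({"visual", "textual"}, resource_combinations))    # True: the caption user
print(usable_by({"textual", "auditory"}, resource_combinations))  # False: every combination needs visual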

Liddy

Andy Heath

Oct 30, 2013, 6:05:01 AM
to Liddy Nevile, Madeleine Rothberg, Charles Myers, a11y-metad...@googlegroups.com, public...@w3.org
Liddy,

I think your example is a good one to explain exactly why it *won't*
work like that. The problem is it gives too much weight to the author
and not the context. For example, for a video with captions your
example gives the metadata

visual + auditory
visual
visual + text

as describing the modalities that are "required to be able to
comprehend and use a resource."

This is *as the author sees it*.
So what other ways are there to see it?

Well what about using the auditory mode alone? (I do this very often
with the kind of videos that are just talking heads - the BBC don't
think of that usage but I still do it - I even turn the brightness down
to black to save battery while doing that). Similarly for text. So the
full set of accessModes required to understand it here would need to include

auditory
text

But authors don't think of these things - only users do. And in general
we won't think of all the ways people might want to use the content.
Expanding all the accessModes exhaustively would be pointless, as an
algorithm could do that trivially. And even now, I just went back and
re-read it and realised I didn't think of "auditory + text". This seems
to me to have been a central point of our work over the years - to NOT
project onto users how they should use things but instead to give users
control. Authors' ideas of how to use stuff are not giving users control
in my view.
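
(To illustrate how trivial that expansion is for a machine, a throwaway Python sketch: given the modes physically present, every non-empty combination can be enumerated mechanically, so listing them by hand adds nothing.)

from itertools import combinations

physical_modes = ["visual", "auditory", "text"]

# Every non-empty combination of the physically present modes.
expanded = [
    set(combo)
    for size in range(1, len(physical_modes) + 1)
    for combo in combinations(physical_modes, size)
]
print(expanded)
# [{'visual'}, {'auditory'}, {'text'}, {'visual', 'auditory'}, ..., {'visual', 'auditory', 'text'}]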

Charles (Myers) - the point ascribed to me as the need for a common data
model in the other email - I'm afraid I haven't expressed myself clearly
enough - my point was subtly different from what it's reported as. My point
was that we need a common data model, yes, but we should use different
fields for the physical access modes present and the author's view of
how that resource "should be used". For example, if we *do* decide to
provide author-determined-usage info (which I don't support but ..)
then, using this same example of Liddy's, the metadata might be something like

accessMode = visual
accessMode = auditory
accessMode = text

accessModeUsage = visual + auditory
accessModeUsage = visual
accessModeUsage = visual + text

This is repetitious and has redundant information and doesn't look good
- there may be more economical ways to express it but mixing the
accessMode usage and the physical accessModes in the same fields will
lead people towards the mixed model - i.e. we will have to explain the
"+" calculus of values relating to accessMode and this will
overcomplicate the simple description. So my point was, even though the
two different ways to use accessMode *could* use the same fields i.e.
they could just be alternative ways to use those fields, we should still
separate them. The fact is that the meaning of say "visual" is
different in each case - in one case it means "physically present" and
in the other it means "how I think you might use it". There is no case
in my mind to use the same fields for these very different uses.
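
To spell the separation out in data terms, a rough sketch (the field names follow the example above and are not agreed terms): the physically present modes stay a flat list, and the author's suggested usages sit in a separate field that simple consumers can ignore entirely.

# Illustrative only; "accessMode" and "accessModeUsage" are the field names
# from the example above, not agreed vocabulary.
video = {
    # What is physically there - one flat, simple list.
    "accessMode": ["visual", "auditory", "text"],
    # The author's view of how the resource "should be used" - kept apart.
    "accessModeUsage": [
        {"visual", "auditory"},
        {"visual"},
        {"visual", "text"},
    ],
}

# A simple facet search only ever touches the flat list...
print("text" in video["accessMode"])   # True

# ...while tools that care about suggested combinations read the other field,
# and can drop it without losing the basic description.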

andy





andy
andy...@axelrod.plus.com
--
__________________
Andy Heath
http://axelafa.com

Charles Myers

Oct 30, 2013, 8:27:49 AM
to Andy Heath, Liddy Nevile, Madeleine Rothberg, a11y-metad...@googlegroups.com, public...@w3.org
Andy,
Your email raises two points. I agree with one and disagree with the other.

I agree with the need to express the source access modes, the
mediaFeatures, and then the access modes that the content is made
available by. We have two views of this expressed in the issue tracker at
http://www.w3.org/wiki/WebSchemas/Accessibility/Issues_Tracker#accessMode_and_accessibilityMode_subtype.2C_proposal_1
and
http://www.w3.org/wiki/WebSchemas/Accessibility/Issues_Tracker#accessMode_and_mediafeature_use_cases
and I think that we'll end up with some ways to express both the starting
and the augmented access modes. Assuming that the mediaFeature item is
resolved (or close to it), this is the next logical part to discuss. And
one of the outcomes of the call on Tuesday is that we need to tackle the
description of the ways to do this, from simple access modes that are
implied by media type to the more complex sets that should be possible
to encode. I think I heard myself volunteering for that, and to put
the proposal on the wiki for collaboration. This email thread gives me
one more driver for that.

The point that I disagree with is the application of this metadata to
describe degrees of utility of the access modes. I think that this is a
slippery slope, as it requires the metadata to express judgement on the
importance. If user context means judgement of which delivery method is
"best" for the user, as opposed to what is possible, I think that we
have a scope problem. Not that it's not an interesting problem; I just
don't think that we can tackle this in the current effort to get this
into schema.org.

And I'd like to take this out of the theoretical into an example of two
videos... from TED talks.
So, here are two videos (and I picked them for their attributes, not their
content).
I'll note that these videos have both video and audio as their basic
access modes, and then have closed captions available (text, selectable
as the language in the lower right of the video pane) and a transcript,
which appears under the video on the web page if selected. Both of
these videos make the same information available.

The way that I see this metadata is that it starts as
Visual + auditory
and then has captions (auditory available as text, synchronized with the
video) and a transcript

So I have now added
Visual+Textual
Textual

So we have three ways that the content can be used:
Visual + Auditory
Visual+Textual
Textual
(Note that closed captions are textual and depend on the player to
become visual; open captions or sign language, which are "burned into"
the visual plane, are visual.)
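
In data terms, a sketch of how I read this (the feature-to-combination mapping below is just my reading of this example, not a proposed rule set):

# Base access modes of the video itself.
base_modes = {"visual", "auditory"}

# Each added feature makes a further complete combination viable.
augmentations = {
    "captions":   {"visual", "textual"},   # audio rendered as synchronized text
    "transcript": {"textual"},             # the full content as standalone text
}

usable_combinations = [base_modes] + list(augmentations.values())
for combo in usable_combinations:
    print(sorted(combo))
# ['auditory', 'visual']
# ['textual', 'visual']
# ['textual']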

The first is a typical TED video that is very dependent on the images.
Carolyn Porco: This is Saturn
http://www.ted.com/talks/carolyn_porco_flies_us_to_saturn.html
This is very visually dependent, as it is full of pictures of Saturn from
spacecraft. As a sighted person, I would choose to watch this
video... it would lose too much meaning otherwise.

The second video is from Amanda Bennett, who describes her
husband's experience with death. The talk is just her talking, moving
around the stage and gesturing.
Amanda Bennett: We need a heroic narrative for death
http://www.ted.com/talks/amanda_bennett_a_heroic_narrative_for_letting_go.html

The line I'm afraid you're crossing is one of the utility of one
set of access modes for the user over another. If I had to give grades
(using the scale of A - F, with F being a fail) to the usefulness of
these talks, I'd rate them like this:
Carolyn Porco:
Visual + Auditory (A)
Visual+Textual (A)
Textual (D)
And, if I chose to just do the video as auditory, it'd be a (D)

Amanda Bennett:
Visual + Auditory (A)
Visual+Textual (A, but you lose some of the passion)
Textual (A, but you lose even a bit more of the passion)
And, if I chose to just do the video as auditory, it'd be an (A)

Trying to explain whether an access mode conveys useful or necessary
information is a difficult task. I don't believe that we can take this
on and have success in a finite period.

On a personal note, I DO wish that TED talks had flags to tell me which
ones depended on visual, so I could listen to the ones that were not
visual-dependent as I drove. But I'll accept an internet and podcasts
that don't tell me that, for now.

Andy Heath

Oct 30, 2013, 9:25:37 AM
to Charles Myers, Liddy Nevile, Madeleine Rothberg, a11y-metad...@googlegroups.com, public...@w3.org
Charles (Myers), just a quick question (a fuller answer later)

> Trying to explain whether an access mode conveys useful or necessary
> information is a difficult task. I don't believe that we can take this
> on and have success in a finite period.

I completely agree. I'm not sure how an argument in favour of doing so
has been ascribed to me. I have, I believe, argued against doing that.
That, in my view, is what the "+" calculus Liddy is proposing actually
does. What I was saying was "If we MUST do that then keep it separate",
and my reasons for suggesting that are exactly because I think "how a
user chooses to use something" or "what information content is a
replacement for another" should be well beyond our scope.

Have I been misattributed? Is my point that "if we're doing that, do it
separately" clear?

andy

Liddy Nevile

Oct 31, 2013, 6:11:25 AM
to Andy Heath, Madeleine Rothberg, Charles Myers, a11y-metad...@googlegroups.com, public...@w3.org
mmm... now I think you have misunderstood or misread me, Andy.

The particular video being described is only accessible with the
declared combinations - so audio alone will simply not cut it - it
does not have what that requires... so ????

Andy Heath

Oct 31, 2013, 6:40:56 AM
to Liddy Nevile, Madeleine Rothberg, Charles Myers, a11y-metad...@googlegroups.com, public...@w3.org
Liddy - this is "Access for All" not "Access for Disabled People".
You are NOT entitled to say what is accessible to me (and I get angry
when people try to do so). It's *my* choice as to what I can use - that
has always been a tenet of our work.

andy

Matt Garrish

Oct 31, 2013, 3:01:46 PM
to Andy Heath, Liddy Nevile, Madeleine Rothberg, Charles Myers, a11y-metad...@googlegroups.com, public...@w3.org
To jump in from a slightly different angle, I'm not convinced it's realistic
to expect these combinations in an online ebook store, for example, or that
they would have any value there. Legally, it's questionable whether you could
market your work as something it's not, as access to the modes being
discussed is, more often than not, not intrinsic to the author's
presentation, but may depend entirely on support in the user's reading
system.

Someone using a mainstream device may not realize what they are getting, or
what has happened to their expected access mode until after the purchase, so
not only would publishers be hesitant to go near this, but I expect
ebookstores would just filter it out and ignore it even if we were to put it
in.

accessMode and mediaFeature, as they were defined in the original proposal,
make for simple facet searching in such an interface and, on the other hand,
don't prescribe what is useful to the reader.

I'd feel more comfortable allowing ebookstores, at least, to come up with
the shorthands as real need arises - shorthands they can document for their
users - removing the opaqueness of what any given author was thinking when
they set them.

I'm not against the quest for web simplicity being expressed here, but from
where I sit I'm still worried that we're losing sight of what we set out
to achieve, namely to move ahead with what we know and look at these kinds
of specializations and shorthands later, when we can more objectively analyze
the need and requirements.

Can we not extricate accessMode, as it was defined for the primary way in
which the author conveyed information, from this discussion, and keep moving
with it as originally defined? If a combinatory shorthand has value, it
shouldn't clobber either of the properties that feed into it.

It seems like we don't need to rush this piece for a 1.0, either, since the
information is carried by the existing two properties. And, since it's new
and untested, I'd prefer we didn't rush into it just to make a first release,
but gave ourselves time to figure out all the ramifications.

Matt

Matt Garrish

Oct 31, 2013, 3:18:19 PM
to Matt Garrish, Andy Heath, Liddy Nevile, Madeleine Rothberg, Charles Myers, a11y-metad...@googlegroups.com, public...@w3.org
And a final thought: even if accessMode were to change to some other
form in the future to express multiple concepts in one property, it doesn't
seem like that would invalidate starting with the definition we have now, as
it would simply be more expansive in terms of coverage.

Matt

Liddy Nevile

Oct 31, 2013, 4:48:50 PM
to Andy Heath, Madeleine Rothberg, Charles Myers, a11y-metad...@googlegroups.com, public...@w3.org
Andy
at no point am I saying what is accessible for you - I am providing an example of
a resource with its description and the stated needs of a sample user,
and showing how they match ????

Liddy