
Drop any candidate with a type descriptor if type is not supported #245

Closed

eeeps opened this issue Sep 24, 2014 · 9 comments

Comments

@eeeps commented Sep 24, 2014

If/when #210 becomes real, I think a lot of people are going to want to do:

<img src="jpg.jpg"
     srcset="webp.webp type(image/webp),
             jxr.jxr type(image/vnd.ms-photo)"
     alt="" />

...because it feels a lot like:

<img src="jpg.jpg"
     srcset="retina.jpg 2x,
             super-retina.jpg 3x"
     alt="" />

...which works.

Currently, though, browsers that don’t understand type() simply ignore it, and both candidates get assigned 1x. The first is chosen, and the browser tries to load the WebP; if the browser doesn’t support WebP, the user gets a broken image, even though there was a perfectly good JPEG sitting right there in the src fallback.

Browsers that don’t support type() shouldn’t try to load formats that they may or may not understand.
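
To illustrate (just a sketch of the effective result, not any browser’s actual code), a parser that silently discards the unrecognized descriptor is left with two indistinguishable candidates:

// What a browser without type() support effectively ends up with after
// parsing the srcset above: the type(...) descriptors are discarded, so
// both candidates default to 1x.
const candidates = [
  { url: "webp.webp", density: 1 }, // "type(image/webp)" was ignored
  { url: "jxr.jxr",   density: 1 }, // "type(image/vnd.ms-photo)" was ignored
];
// The first 1x candidate wins, so the browser requests webp.webp even if it
// can’t decode WebP, and src="jpg.jpg" never comes into play.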

@tabatkins

Yes, our handling of unknown descriptors (just drop them and continue processing the candidate) is broken and future-hostile.

My original spec just parsed each candidate against the grammar, and dropped it when it didn't match. The spec was later changed to an explicit parsing algorithm that was closer to HTML's original srcset behavior (just dropping unknown descriptors), which was a mistake. We need to return to the "drop the candidate" behavior, even if we keep the "explicitly parse according to this algorithm" spec treatment.
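
As a rough sketch (hypothetical helper names, not the spec's actual parsing algorithm), the two behaviors differ only in what happens when a descriptor doesn't match the known grammar:

// Illustrative only; KNOWN_DESCRIPTOR and parseSrcset are made-up names and
// the grammar check is deliberately simplified.
const KNOWN_DESCRIPTOR = /^(\d+w|\d*\.?\d+x)$/;

type Candidate = { url: string; descriptors: string[] };

function parseSrcset(srcset: string, dropWholeCandidate: boolean): Candidate[] {
  const out: Candidate[] = [];
  for (const entry of srcset.split(",")) {
    const [url, ...descriptors] = entry.trim().split(/\s+/);
    if (!url) continue;
    const unknown = descriptors.filter(d => !KNOWN_DESCRIPTOR.test(d));
    if (unknown.length === 0) {
      out.push({ url, descriptors });
    } else if (dropWholeCandidate) {
      continue; // "drop the candidate": an unrecognized descriptor invalidates the whole candidate
    } else {
      // current behavior: keep the candidate, minus whatever the parser didn't recognize
      out.push({ url, descriptors: descriptors.filter(d => KNOWN_DESCRIPTOR.test(d)) });
    }
  }
  return out;
}

With dropWholeCandidate, an old browser never sees "webp.webp type(image/webp)" as a bare 1x candidate; it falls back to src or to candidates it fully understands.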

@eeeps (Author) commented Sep 24, 2014

Is it more future-hostile to drop things that you don't understand, or to ignore them and assign 1x?

<img src="fallback.jpg" srcset="a.jpg 800w 600h, b.jpg 1600w 1200h" sizes="contain 75vw 75vh" />

Say a major browser supports w, but not h or contain. Does dropping both candidates and loading the fallback discourage adoption of the new features?

Is that a problem?

I lean towards no (support will come eventually, and the fallback isn’t so bad), but I feel like I haven’t thought this all the way through yet.

@tabatkins

CSS's success in remaining extensible for the last decade suggests that "drop things you don't understand, in whole meaningful units" is the best way forward. You have to duplicate yourself during transition periods, but it means that transition is possible and, more importantly, controllable by the author.

There are also a few CSS-related things, such as the media attribute, that went the other way, just dropping any individual thing they didn't understand, and their extensibility story was terrible. (We had to add the only keyword to MQs solely to let people trick media into ignoring their MQs that had media features in addition to a media type, since media only looked at the type and ignored anything else.)

@tabatkins

(Note also one place where CSS has failed in extensibility due to doing this wrong: if a single selector in a selector list is invalid, the entire selector list gets dropped (and its associated rule), rather than just making that one selector match nothing and letting other comma-separated selectors in the list continue to work. This is too large of a "meaningful unit", and makes it very difficult to use new selectors.)

@eeeps (Author) commented Sep 24, 2014

> You have to duplicate yourself during transition periods, but it means that transition is possible and, more importantly, controllable by the author.

We’ve strayed a little from type, but reconsidering the previous example, duplicating yourself to control the transition might look like:

<img src="fallback.jpg"
     srcset="a.jpg 800w 600h,
             a.jpg 800w,
             b.jpg 1600w 1200h,
             b.jpg 1600w"
     sizes="contain 75vw 75vh, 75vw" />

?

This would require that browsers prefer candidates they know more about (should that be spec'd?), and that the “whole and meaningful unit” of sizes is each comma-separated chunk, not the whole attribute (which might already be the case).
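
On the sizes side, per-chunk fallback might look roughly like this (a sketch with made-up names; real parsing works on parsed media conditions and source-size values, not raw strings):

// Each comma-separated entry is considered independently, so a browser that
// can't make sense of "contain 75vw 75vh" can still use the plain "75vw".
function pickSizesEntry(sizes: string, understands: (entry: string) => boolean): string | null {
  for (const entry of sizes.split(",").map(s => s.trim())) {
    if (understands(entry)) return entry; // first entry the browser understands wins
  }
  return null; // nothing usable; fall back to the default source size
}

// A browser that doesn't know contain skips the first entry:
pickSizesEntry("contain 75vw 75vh, 75vw", e => /^\d+vw$/.test(e)); // "75vw"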

@zcorpan commented Sep 25, 2014

See http://ircbot.responsiveimages.org/bot/log/respimg/2014-08-14#T86602 for earlier discussion about this issue.

I'm OK with changing it, though. But I think special-casing h is a nice feature we should keep.

@zcorpan commented Sep 26, 2014

@yoavweiss thoughts?

@zcorpan commented Sep 26, 2014

Proposed change:

diff --git a/source b/source
index 57121db..f80c98e 100644
--- a/source
+++ b/source
@@ -1801,7 +1801,7 @@ interface <dfn>HTMLImageElement</dfn> : <span>HTMLElement</span> {

      <dt>Anything else</dt>

-     <dd><p>This is a <span data-x="concept-microsyntax-parse-error">parse error</span>.</p></dd>
+     <dd><p>Let <var>error</var> be <i>yes</i>.</p></dd>

     </dl>
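
The rest of the algorithm isn't shown in this hunk, but the idea is that the "Anything else" branch sets a per-candidate error flag instead of just noting a parse error, and a candidate whose error flag is yes never makes it into the source set. Roughly (illustrative pseudocode, not the spec text):

// Made-up names; only the shape of the check matters.
function candidateFromDescriptors(url: string, descriptors: string[]): { url: string } | null {
  let error = false;
  for (const d of descriptors) {
    if (!/^(\d+w|\d*\.?\d+x|\d+h)$/.test(d)) {
      error = true; // the "Anything else" branch: let error be yes
    }
  }
  return error ? null : { url }; // an errored candidate is not added
}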

@yoavweiss (Member)

I'm with @tabatkins on this. Dropping unknown candidates is probably better than letting them into the selection algorithm as something they're not.
