Replies: 19 comments 9 replies
-
Let's say instead of allowing:

/// <summary>
/// Turns the GPS on or off
/// </summary>
/// <remarks>
/// <para>Will turn on the system location service.</para>
/// <para specificTo="NETFX_CORE">
/// Make sure you have enabled the location capability in your application's manifest. See
/// https://msdn.microsoft.com/en-us/library/windows/apps/br211423.aspx
/// for more information.
/// </para>
/// <para specificTo="__ANDROID__">
/// Make sure you have enabled the location capability in your application's manifest. See
/// http://developer.android.com/guide/topics/manifest/manifest-intro.html
/// for more information.
/// </para>
/// </remarks>
public bool IsGpsEnabled { get; set; }

In your example that seems overly verbose, but I think there might be a general need for marking parts of the documentation as only applying to a particular .NET platform or operating system. It would be nice if we could record this in the .XML file and allow the documentation tool to output the parts accordingly, i.e. by placing them in specific sections with a title or header, or even by making their visibility conditional on some context switcher on the web page. Thoughts?
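To make the idea concrete, here is a rough sketch of how a documentation tool might consume such a specificTo attribute. Note that specificTo is only the hypothetical attribute proposed above, not an existing feature, and the filtering behavior shown is just one possible interpretation:

```python
# Hypothetical post-processor for a generated XML documentation file.
# It keeps unconditional <para> elements and drops any <para> whose
# (proposed, non-existent) specificTo attribute names a different target.
import xml.etree.ElementTree as ET

def filter_remarks(xml_text, target):
    root = ET.fromstring(xml_text)
    for remarks in root.iter("remarks"):
        # Copy the child list so we can remove elements while iterating.
        for para in list(remarks):
            specific = para.get("specificTo")
            if specific is not None and specific != target:
                remarks.remove(para)
    return ET.tostring(root, encoding="unicode")

doc = """<member name="P:Gps.IsGpsEnabled">
  <remarks>
    <para>Will turn on the system location service.</para>
    <para specificTo="NETFX_CORE">Enable the location capability.</para>
    <para specificTo="__ANDROID__">Declare the permission in the manifest.</para>
  </remarks>
</member>"""

# Keeps the unconditional paragraph and the Android-specific one only.
print(filter_remarks(doc, "__ANDROID__"))
```

Instead of deleting non-matching paragraphs, a doc tool could just as well group them into per-platform sections with headers, as suggested above.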
-
@terrajobst I think keeping the current
-
@terrajobst
-
@terrajobst Your approach won't work. We're compiling different binaries, which would then have different XML doc pieces. You would never need a "merged" XML doc, as the XML doc generated is specific to that binary. Second, you would assume any consumer of that XML doc has the exact same if-defs defined. You would have to use the TFM instead, but a TFM is quite different from if-defs, which can be very custom (and most often are).

I think your suggestion comes from the point of view of .NET Standard usage. In that case, you wouldn't filter documentation, but split it into different sections/paragraphs, so the user of the library would realize they need to do different things on different platforms with the same API.
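For context, the multi-targeted setup being described might look like the following illustrative project file: each framework listed in TargetFrameworks gets its own compilation pass, so each produces its own binary and its own XML documentation file under its own output folder (the TFMs chosen here are just examples):

```xml
<!-- Illustrative multi-targeted project; each TFM builds separately
     and emits its own XML documentation file next to its binary. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard2.0;net8.0-android</TargetFrameworks>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
  </PropertyGroup>
</Project>
```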
-
An argument was made in the old thread that changing this would be against the spec, which states that doc comments have to be consecutive. However, I think the spec leaves plenty of room for interpretation here, and the way I read it, it seems to support allowing this:
If the section is excluded, the doc comments do in fact become consecutive.
After excluding/including the section, the source code adheres to the lexical grammar. These directives are called preprocessing for a reason: if you pre-process the code, then the documentation does in fact follow the spec.
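The argument above, that excluding the inactive section leaves consecutive doc-comment lines, can be illustrated with a toy filter. This is not the Roslyn lexer; it is only a sketch of the interpretation being argued for, and it only understands a literal "#if false":

```python
# Toy sketch: drop lines inside an inactive '#if false ... #endif'
# region and show that the remaining '///' doc-comment lines end up
# consecutive, which is the reading of the spec argued for above.
def strip_inactive(lines):
    out, active = [], True
    for line in lines:
        s = line.strip()
        if s.startswith("#if"):
            # Only a literal '#if false' disables the region in this toy.
            active = not s.endswith("false")
        elif s.startswith("#endif"):
            active = True
        elif active:
            out.append(line)
    return out

source = [
    "/// <summary>",
    "#if false",
    "/// Windows-only remark.",
    "#endif",
    "/// </summary>",
]
kept = strip_inactive(source)
print(kept)  # → ['/// <summary>', '/// </summary>']
```

After this pre-processing pass, every surviving line is a doc-comment line, so the comment block is consecutive.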
-
I ran into this today, and it's odd that this happens at all. I'm with @dotMorten in that pre-processing should produce the actual lexical source handed to the lexer/parser, and it's surprising that it doesn't work that way today. Any news on this? (Though as this issue is rather old, I guess: nope.)
-
I was thinking about how we'd surface
-
I had a hard time believing it until I saw that almost-verbatim copies are the only way to slightly modify documentation across target frameworks:

#if NETSTANDARD1_3
/// <summary>
/// <para>
/// Selects the ultimate shadowing property just like <see langword="dynamic"/> would,
/// rather than throwing <see cref="AmbiguousMatchException"/>
/// for properties that shadow properties of a different property type.
/// </para>
/// <para>
/// If you request both public and nonpublic properties, every public property is preferred
/// over every nonpublic property. It would violate the principle of least surprise for a
/// derived class’s implementation detail to be chosen over the public API for a type.
/// </para>
/// </summary>
#elif NETSTANDARD1_6
/// <summary>
/// <para>
/// Selects the ultimate shadowing property just like <see langword="dynamic"/> would,
/// rather than throwing <see cref="AmbiguousMatchException"/>
/// for properties that shadow properties of a different property type
/// which is what <see cref="TypeInfo.GetProperty(string, BindingFlags)"/> does.
/// </para>
/// <para>
/// If you request both public and nonpublic properties, every public property is preferred
/// over every nonpublic property. It would violate the principle of least surprise for a
/// derived class’s implementation detail to be chosen over the public API for a type.
/// </para>
/// </summary>
/// <param name="type">See <see cref="TypeInfo.GetProperty(string, BindingFlags)"/>.</param>
/// <param name="name">See <see cref="TypeInfo.GetProperty(string, BindingFlags)"/>.</param>
/// <param name="bindingFlags">See <see cref="TypeInfo.GetProperty(string, BindingFlags)"/>.</param>
#else
/// <summary>
/// <para>
/// Selects the ultimate shadowing property just like <see langword="dynamic"/> would,
/// rather than throwing <see cref="AmbiguousMatchException"/>
/// for properties that shadow properties of a different property type
/// which is what <see cref="Type.GetProperty(string, BindingFlags)"/> does.
/// </para>
/// <para>
/// If you request both public and nonpublic properties, every public property is preferred
/// over every nonpublic property. It would violate the principle of least surprise for a
/// derived class’s implementation detail to be chosen over the public API for a type.
/// </para>
/// </summary>
/// <param name="type">See <see cref="Type.GetProperty(string, BindingFlags)"/>.</param>
/// <param name="name">See <see cref="Type.GetProperty(string, BindingFlags)"/>.</param>
/// <param name="bindingFlags">See <see cref="Type.GetProperty(string, BindingFlags)"/>.</param>
#endif
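A long-standing partial workaround for this kind of duplication is the standard `<include>` doc-comment tag, which pulls a member's documentation from an external XML file at compile time; in principle, the file name (or the XPath query) could be varied per target via MSBuild. The file name, XPath, and member shown below are illustrative, not taken from the thread:

```csharp
// Sketch only: 'PropertyDocs.netstandard1.3.xml' is a hypothetical file
// name; a per-TFM MSBuild property could select which file to point at.
// The compiler resolves <include> when emitting the XML documentation.
/// <include file='PropertyDocs.netstandard1.3.xml'
///          path='docs/member[@name="GetProperty"]/*' />
public static PropertyInfo GetProperty(Type type, string name, BindingFlags bindingFlags)
{
    // Implementation is identical across targets; only the docs differ.
    return type.GetProperty(name, bindingFlags);
}
```

This keeps only one copy of the shared text per target in external files, though those XML files then have to be maintained alongside the code.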
-
The thing is, it's not really "preprocessing". It's in-situ processing. That's why, for example, you can write the following:

Console.WriteLine(@"
#if false
This is all inside the string and will get printed to the console
#endif
");

If this truly were preprocessed, then the part between "#if false" and "#endif" would be entirely removed. Now, that said, I don't see any real problem with the compiler taking the sections of trivia that are doc comments, removing the disabled text sections, and doing final analysis as if it were one contiguous region of doc comments.
-
I'm sorry, I didn't mean for that to sound harsh. @dotMorten says at the top that this has been a source of pain for people for a long time. It just got real for me. This should be easy to approve and then super easy to implement in Roslyn, shouldn't it?
-
This just blew my mind. 😃
-
There would certainly be complexity. Enough that I would not call it "super easy to implement". For example, the lexer would have to have pretty interesting changes to it. Consider the following simple case:

/// <summary>
#if false
...
#endif
/// </summary>

Recall that currently, parsing of doc comments happens in situ with trivia scanning. That is, while you're in the middle of a scan and you are handling trivia, we recursively go into parsing XML for the doc comment. This parse would then have to essentially pause and store state while it recursively handled the preprocessor directives. Certainly doable. But also something that would take methodical work to ensure that no bugs/strangeness were introduced while this happened.

This would also certainly change the Roslyn parse tree model (a breaking change which we would have to accept). Today, the above XML snippet would be three distinct trivia sections. Now, you'd want it to be one section (so you could get one single structured trivia node out). In that sub-parse-tree, the disabled sections would still have to be represented somehow. We would also have to test/validate all IDE code to make sure it could handle these new trees.

So, I would place this at medium cost and complexity to implement. Certainly not impossible. But definitely not a 5-minute fix either :)
-
Only medium? Let's do it! 😁

@jnm2 your example does actually work. The trick is to if-def the ENTIRE XML doc section. However, that usually means lots of duplicate XML doc to maintain, when often it's just a single paragraph or less that needs to be tweaked for each platform.
-
@dotMorten Yep, the example was what I was complaining about. 😃
-
Here is an open-source tool I wrote to post-process the XML comments: https://github.com/Patrick8639/ConditionalXmlComments
-
Any news on this?
-
I just ran into the same problem. Nice to see I'm not alone; less nice to see there are too few of us, and it happens too infrequently, to make a dent in the roadmap. I guess I'll live with it. 😐
-
Same. I'm multi-targeting several platforms at once, each with slightly different XML comments (references to .NET components, links to documentation, etc.). This means many ~20-line comment blocks, each duplicated 3 or 4 times. Super frustrating, very noisy, and prone to duplication errors. The intuitive use of preprocessor directives in XML comments seems like something that ought to be supported.
-
The .NET platform teams have moved to using triple-slash (documentation) comments as the source of truth for most of the class libraries. I believe this is what most developers do.

The scenario here is multi-targeted builds, where a single project produces builds for multiple targets (usually different frameworks) and therefore produces multiple binaries and, by extension, multiple XML documentation files. In the context of IntelliSense (code completion in an IDE that shows these comments as tooltips), having multiple files is pretty easy to reason about.

My question is how we'd use this model for generating documentation that is meant to cover a single multi-targeted library, like a web experience such as docs.microsoft.com. One could envisage a framework switcher, akin to the toggle in the editor, but I don't think this is desirable. I believe developers would want a single document for a given API provided by a library, where framework/platform differences are explained in text, preferably with some visual marker. My rationale is that if you build cross-platform code, consumers often need to reason about how an API works across multiple targets. Not every consumer is an application author that only targets a single framework/operating system.

One way to do this is to leave it up to the documentation tool. In a sense, that's already what is happening, because each binary produced by a multi-targeted build can have a different API surface. Therefore, we generally expect a documentation generator to merge all APIs into a single table of contents, for instance. The question is how we'd deal with disjoint contents of the XML documentation. I guess a tool could merge them on a per-XML-node basis and add metadata recording which leg of the multi-targeted build each piece is specific to.

I think it would be worthwhile to explore a few examples to see how one would want to author this on the C# side and how one would want it to appear in the docs. If we take the example above, I can see it being convenient on the C# side, but when I read the docs I would probably want to see something more like this:

I'm not convinced I'd want to use conditional compilation on the C# side to model this, but I do think I'd want some way to express this in the documentation.
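As a thought experiment, the per-XML-node merging described above could look something like this rough sketch: members present in every leg of the build are emitted once, while members present in only some legs get a hypothetical frameworks attribute naming the legs they apply to. The attribute name, TFM strings, and member names are all illustrative:

```python
# Hypothetical merger for per-TFM XML documentation files, in the spirit
# of the per-node merging described above. Nothing here is an existing
# tool; the 'frameworks' attribute is invented for this sketch.
import xml.etree.ElementTree as ET

def merge_docs(docs_by_tfm):
    merged = ET.Element("members")
    seen = {}        # member name -> first element encountered
    frameworks = {}  # member name -> list of TFMs that document it
    for tfm, xml_text in docs_by_tfm.items():
        for member in ET.fromstring(xml_text).iter("member"):
            name = member.get("name")
            if name not in seen:
                seen[name] = member
                merged.append(member)
                frameworks[name] = []
            frameworks[name].append(tfm)
    all_tfms = set(docs_by_tfm)
    for name, member in seen.items():
        # Only annotate members that don't exist in every leg.
        if set(frameworks[name]) != all_tfms:
            member.set("frameworks", ";".join(frameworks[name]))
    return ET.tostring(merged, encoding="unicode")

net_doc = '<members><member name="M:Lib.Common"/><member name="M:Lib.WindowsOnly"/></members>'
android_doc = '<members><member name="M:Lib.Common"/></members>'
print(merge_docs({"net8.0-windows": net_doc, "net8.0-android": android_doc}))
```

A real tool would also have to merge differing element contents (e.g. divergent remarks for the same member), which is where the hard design questions raised above actually live.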
-
Please support conditional compilation within XML documentation comments. This has been a source of pain with cross-compiled code for a long time. Often, linked code files are used to share source between the different target platforms of the same API; as a result, the documentation comments often must change conditionally as well. As of now, there's no good way around this limitation: you have to duplicate the entire XML comment block conditionally, rather than just the section or line that differs.
Steps to Reproduce:
Expected Behavior:
No build warnings; XML is generated with platform-specific remarks. This would let me add a couple of platform-specific remarks. Currently, you have to if-def the entire doc-comment section, causing a lot of documentation to be duplicated rather than just the areas that differ.
Actual Behavior:
Several build warnings
And the generated XML:
Moved over from: dotnet/roslyn#96