Invalid links returned with some Chinese characters as delimiters #15
Comments
Could you post a permalink to the demo at http://markdown-it.github.io/linkify-it/ ? You can type all the examples there at once and see the results.
The problem is in
I added another issue where a user mistakenly added http:// twice. These are both (technically speaking) user behavior issues, but the first one especially is quite common. Formally speaking, "(" is not a word terminator; however, because it already takes up so much white space, many Chinese users don't put a space in front of it (it's an act of laziness, but it's quite common). Perhaps I should do some string manipulation before I pass it to
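The string-manipulation idea above could look something like the following minimal sketch. `padCjkParen` is a hypothetical helper name, not part of linkify-it; the heuristic (insert a space before "(" only when it opens a run of CJK text) is an assumption, not the library's behavior.

```javascript
// Hypothetical pre-processing step: insert an ASCII space before "("
// when it is immediately followed by a CJK ideograph, so a downstream
// linkifier stops the URL match at the space.
function padCjkParen(text) {
  // \u4e00-\u9fff covers the main CJK Unified Ideographs block.
  return text.replace(/\((?=[\u4e00-\u9fff])/g, ' (');
}
```

This leaves ordinary parentheses (e.g. Wikipedia-style URLs like `.../wiki/Foo_(bar)`) untouched, since those are followed by Latin characters.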
The example with the double http:// is technically correct. That's like the example with
I'm sorry, I'm not a Chinese speaker. Perhaps someone else can help out here. There are also other Chinese punctuation characters, like
Let's leave it open until someone can formalize the info for the Asian languages group (Japanese probably has the same issues). I'm ready to fix it as soon as possible, but I'd like to avoid kludges with incomplete workarounds.
More examples of how difficult these Chinese posts are to parse: link. What's your idea on this?
It seems you've given the wrong link; it shows default text.
Sorry, I've fixed the link now.
Thanks for the examples. I posted a question at the CommonMark forum: http://talk.commonmark.org/t/linkifier-lets-discuss-and-test/1045/9?u=vitaly As far as I can see, in the last example spaces are not used at all. It's possible to track a locale change (non-English -> English), but that's not safe.
That's correct, spaces are not used. Chinese writers use different punctuation marks, or even omit them, because URLs are in Latin script and therefore the distinction is easily visible to the eye, but more difficult for machines.
What about links with Chinese chars? I can find the link start somehow by
PS. Anyway, it's worth starting to collect test samples for Chinese separately. Something like this
You're absolutely right. I think the first rule makes sense, but URLs can probably contain Chinese characters. Perhaps the first rule by itself would help, though, plus some kind of smart processing for certain delimiters that are unlikely to appear within a URL, e.g.:
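One hedged sketch of what such delimiter handling could look like. The character list and the `truncateAtCjkDelimiter` helper are illustrative assumptions, not linkify-it code; the list would need review by Chinese/Japanese speakers.

```javascript
// Hypothetical list of fullwidth punctuation that is unlikely to
// appear inside a real URL, so it can safely terminate a match.
const CJK_DELIMS = /[（）【】《》「」，。！？、；：]/;

// Cut a candidate URL at the first such delimiter, if any.
function truncateAtCjkDelimiter(url) {
  const m = url.match(CJK_DELIMS);
  return m ? url.slice(0, m.index) : url;
}
```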
I found that this URL doesn't match:
BTW, I found a website comparing different URL regexps:
By the way, Facebook does this properly, but I'm not sure if they use a proprietary solution.
I will try to resolve this problem.
@fengmk2 the implementation may not be easy, but as a first step it would be enough to have a collection of fixtures with good coverage of Chinese edge cases. See https://github.com/markdown-it/linkify-it/tree/master/test/fixtures
Here is my example page linkifying incorrectly:
In this case, it's a vertical bar, though to be precise it's
@mikelambert Nothing to fix. Fuzzy mode is not safe; it's the author's mistake to use linkify-it in the wrong way.
I'm the website author, using linkify-it to add links to raw text from a variety of sources that I didn't write myself (Facebook events and other websites). The individual source authors are not using linkify-it, and of course are not writing their text with linkify-it in mind. I recognize that fuzzy matching is not safe and not perfect. However, it seems unfortunate that a vertical bar (which some authors use as visual punctuation) is treated as part of a domain name. It seems like the fuzziness could be smarter, even if it's still fuzzy. So to clarify: is this an "it's not a bug, just user error" bug, or an "it's not a bug I care about fixing, but patches are welcome" bug?
@mikelambert I mean, the problems are with
My personal opinion is that the linkifier is being used in an unexpected way, so this example is not a good enough reason for changes. Maybe I don't understand something, but this is my opinion for now. If I had to build such a site, I would parse the links first, then compose the header. Also, I understand that people can have other opinions and may wish to just quickly fix something via hacks. For this case, the linkifier allows overriding regexps without needing to fork the project.
What do you mean by "layout compose"? I assume you are referring to writing up the text and adding the
https://www.facebook.com/events/896736620379772/ is the source text I am working with. (Notice that Facebook gets the linkification correct.) I receive the raw text from the FB API and then try to linkify it for use on my own website. I understand that linkify-it might be the wrong tool (since this is not natural text) and that I am using it incorrectly (since it is applied after the author's layout composition). But unfortunately, I am not the author of the text, so I am not able to linkify before adding the
Thanks for your time!
Then, if you have the source in a known format (
The reason to add
In other words, if you have auto-generated text, consider parsing or detecting its structure prior to applying the linkifier. Or, if you know some new uncovered de-facto patterns of human writing, create a new issue with proofs (live examples), and I'll try to fix it if possible.
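The "parse structure first" suggestion might be sketched like this. `splitHeader` is a hypothetical helper, and the separator character is an assumption about the generated header format; the point is only that each segment would then be linkified independently, so visual separators never leak into URLs.

```javascript
// Hypothetical helper: split a machine-generated header on its known
// separator first, then hand each piece to the linkifier separately.
function splitHeader(header, separator) {
  return header
    .split(separator)
    .map(s => s.trim())
    .filter(s => s.length > 0);
}
```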
Thank you, that's a creative solution to this problem. I'll go with that for now. I'll create a separate issue with the few examples I have, and you can decide there whether it's a justified "de-facto pattern of human writing" or not. :)
@mikelambert I'm wondering: what does the Facebook linkifier do with the following link? link: you can see where this link takes you here: https://zh.wikipedia.org/wiki/(
Feel free to create a dummy Facebook event (or even an FB post on your wall) and see what happens. It seems like it fails to parse that link properly, instead linking to https://zh.wikipedia.org/wiki/ .
I am looking forward to the fix! It would be much more enjoyable if, every time I paste links followed by Chinese characters into a GitHub README, they could smartly be displayed correctly.
Steps to reproduce:
【视频奇志大兵《发烧友》 在线观看 - 酷6视频】奇志大兵《发烧友》 在线观看,奇志 大兵 搞笑双簧 _ 发烧友 (追星族) http://t.cn/RZwjG7U(分享自 @酷6网)
Linkify returns:
http://t.cn/RZwjG7U(分享自
whereas the output link should be
http://t.cn/RZwjG7U
The reason is that ( is not recognized as a separating delimiter, yet it is quite common in Chinese.
Out of 500 posts I gathered, about 20 to 30 of them had links like this, resulting in invalid links reported by linkify.
I realize that these users are technically posting invalid URLs, but 20 to 30 out of 500 is very common, and therefore there should be a way to deal with this. Any suggestions?
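For illustration, here is a rough standalone sketch of the desired behavior. This is NOT the linkify-it implementation; the regex, the CJK range, and the `extractLinks` helper are all illustrative assumptions: stop a URL match at whitespace, CJK ideographs, or common fullwidth punctuation, then trim any stray opening punctuation left on the tail.

```javascript
// Match http(s) URLs, but exclude whitespace, CJK ideographs
// (\u4e00-\u9fff), and common fullwidth punctuation from the body.
const URL_RE = /https?:\/\/[^\s\u4e00-\u9fff（）【】《》，。！？、；：]+/g;

function extractLinks(text) {
  return (text.match(URL_RE) || [])
    // Trim ASCII opening brackets and sentence punctuation left
    // dangling at the end of a match, e.g. "...RZwjG7U(".
    .map(u => u.replace(/[(\[{,.!?;:]+$/, ''));
}
```

Note that closing parentheses are deliberately kept, so Wikipedia-style URLs like `http://example.com/a_(b)` survive intact; only a trailing "(" with nothing after it is stripped.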