Add recursive option #78
Yeah, it has been discussed a few times already and it's high on the todo-list. The tricky part is that extracted links are handed off for checking, but there is no connection back to the extractor.
The lack of recursive spidering makes this project unusable for my purpose of checking all links, internal and external, on a website. I am trying to find a replacement for michaeltelford/broken_link_finder. It is written in Ruby, and without superuser access at the new place where this will run, it is impossible to install, so I am looking for a "portable" replacement. I worked with michaeltelford to get his project into a much more usable state; check out that project's issue queue for some of the reasoning that went into the development. In any case, regarding this issue: the spidering should stop with links found on the current domain, but links found to external sources should still be checked.
#165 is getting very close to completion. It implements the functionality described. If you would like to help, please build the version from that branch and test it. Feedback on the pull request is appreciated.
@mre I am new to Rust, but it seemed pretty straightforward how to build from "Working on an Existing Cargo Package". However, I have run into an issue: at first I thought it was a credential problem, but it looks like a 404. That GitHub URL returns a 404, so I am not sure how to proceed with the build. Rust information:

```console
$ rustc --version
rustc 1.50.0 (cb75ad5db 2021-02-10)
$ rustup --version
rustup 1.23.1 (3df2264a9 2020-11-30)
info: This is the version for the rustup toolchain manager, not the rustc compiler.
info: The currently active `rustc` version is `rustc 1.50.0 (cb75ad5db 2021-02-10)`
$ cargo --version
cargo 1.50.0 (f04e7fab7 2021-02-04)
```

The GitHub repository amaurym/async-smtp does not seem to exist anymore.
I dug through the dependencies and found `source = "git+https://github.com/async-email/async-smtp?branch=master#0f1c4c6a565833f8c7fc314de84c4cbbc8da2b4a"`, so it looks like the source for async-smtp has moved to async-email/async-smtp.
Just found #189, which is the same build failure I reported here.
Yeah. Related:
We are blocked by upstream at the moment. 😕
Ah, upstream reacherhq/check-if-email-exists was updated three days ago to use the upstream async-smtp instead of his fork. I am guessing he also deleted the fork then, but this project is still using it. See here: chore: Update wording around licenses #892. This repository still has references to his fork, which no longer exists, on both the
So it looks like pull 36 in the upstream is closed, but the new crate has not been published, as the newest one is dated January 10. @mre, let me know if there is any movement on this and I will then try to build from the
but I still cannot build from the
So, I think that the version of
Thanks for the info. I'll tackle that once #208 is merged. 😄
@mre I am still willing to test this, but I will be finished with my current job in the second week of June and may not have a need for it for a while after that. I would like to get this set up to replace the current program that we are using to check for broken links, which I cannot easily move to a shared server because it is Ruby. Let me know if you get the version of
Thanks for your patience. Want to work on this as soon as I find the time. No guarantees this will be soon, though. 😅
@mre Patience I have, but time is running out. I finish at my current workplace on June 9. I had hoped to use this to replace a Ruby broken link checker that is running on an in-house server I need to decommission. I need something I can run on shared hosting without needing to install a bunch of dependencies that I don't have permission to install.
I found that muffet and linkcheck serve the recursive use case best right now, and muffet in particular is very fast at this. What neither does is opportunistically check /sitemap.xml to traverse the site faster and reach efficient parallelization sooner. Lychee could one-up them on performance if that were done by default.
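As a rough illustration of that sitemap idea, a minimal sketch might look like the following. This is not lychee's actual code; the crate choices (reqwest, regex, tokio) and the `sitemap_seed_urls` name are assumptions for the sketch.

```rust
// Sketch only: opportunistically seed a crawl from /sitemap.xml.
use regex::Regex;

async fn sitemap_seed_urls(base: &str) -> Result<Vec<String>, reqwest::Error> {
    let sitemap_url = format!("{}/sitemap.xml", base.trim_end_matches('/'));
    let body = reqwest::get(&sitemap_url).await?.text().await?;
    // A real implementation would use an XML parser; a <loc> regex is enough for a sketch.
    let loc = Regex::new(r"<loc>\s*([^<]+?)\s*</loc>").unwrap();
    Ok(loc.captures_iter(&body).map(|c| c[1].to_string()).collect())
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // Hypothetical usage: these URLs could be pushed into the crawl queue up front.
    for url in sitemap_seed_urls("https://example.com").await? {
        println!("{url}");
    }
    Ok(())
}
```

Seeding the queue this way would keep the parallel workers busy from the start, instead of waiting for links to trickle in from the root page.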
New PR which tackles this: #465
I'm unsure how this is actually implemented, so perhaps what I am about to say is already covered; sorry for the duplication. Recursion is also very important to me, but I would like to allow the user to specify a list of origins (scheme+host+port) to allow recursion for, or a list of regular expressions. Say, for example, one has both a
Good point. It's not implemented and wasn't mentioned before. The way I envisioned it was that all links which belong to the same input URI would be followed recursively, while the rest would not. So you could do
Imagine that instead of

Or another example: that extra domain could be something hosting example HTMLs that might be linked from the main site, and one would like to make sure that every example works as expected. Or, if the above reasons don't seem convincing enough (granted, they are quite extreme): I assume that inside the code there already exists a set of "allowed" domains or origins for recursion that is filled in at startup based on the starting links; allowing the user to manipulate that wouldn't be much of a burden, but it would increase flexibility.
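To make the proposal concrete, here is a minimal sketch of such an origin allow-list seeded from the starting URLs. The `RecursionFilter` type and its methods are hypothetical and not part of lychee's internals; it only assumes the `url` crate.

```rust
// Sketch only: decide whether a discovered link may be recursed into, based on
// an allow-list of origins (scheme + host + port) seeded from the input URLs.
use std::collections::HashSet;
use url::Url;

struct RecursionFilter {
    // Origins stored via `Origin::ascii_serialization()`, e.g. "https://www.example.com".
    allowed: HashSet<String>,
}

impl RecursionFilter {
    /// Seed the allow-list from the starting URLs (plus any user-supplied extras).
    fn from_seeds<'a>(seeds: impl IntoIterator<Item = &'a Url>) -> Self {
        Self {
            allowed: seeds
                .into_iter()
                .map(|u| u.origin().ascii_serialization())
                .collect(),
        }
    }

    /// Links with an allowed origin are followed recursively; everything else
    /// is still checked, just not spidered.
    fn may_recurse(&self, link: &Url) -> bool {
        self.allowed.contains(&link.origin().ascii_serialization())
    }
}

fn main() {
    let seeds = [Url::parse("https://www.example.com").unwrap()];
    let filter = RecursionFilter::from_seeds(&seeds);
    assert!(filter.may_recurse(&Url::parse("https://www.example.com/docs").unwrap()));
    assert!(!filter.may_recurse(&Url::parse("https://blog.example.com/").unwrap()));
}
```

A user-facing flag could then append extra origins (or regexes) to the same set, which matches the "filled in at startup, extendable by the user" idea above.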
Hm... the main question is always how to wire that up in the CLI without provoking additional mental overhead. Something like: `lychee --recursive --include-recursive docs -- www blog docs`
Yes, sure. There were a few attempts, but there were always issues with the design. It's a feature which touches almost all parts of the code, and we have to get this right. I'd love to dedicate more time to it, but it's hard to add that feature next to other responsibilities. I'm currently looking into companies that might be willing to sponsor the feature, as I guess it will be quite some work, but it would have a very positive impact on the usefulness for all users. I know that there are companies out there which would really like to have this, but so far there hasn't been a lot of traction with regards to sponsoring.
I'll close this since I already built a solution.
Nice package.
Still no recursive option in the main branch since 2020? I'm trying to run this great program via Docker but really miss the recursive option...
I'm happy to offer a bounty of sorts: a 100€ payment (payable via PayPal or SEPA) for whoever implements this. If multiple people work on it, I'm happy to split the money. I know this won't cover the whole development of this feature.
@lfrancke hiya, does your offer still stand?
Thanks for checking. Yes, it does.
Somewhat tangential: I'm a big fan of

I have a CLI tool for this: https://github.com/lukehsiao/sitemap2urllist, which seems to serve pretty well. It will likely be obsolete if/once lychee adds recursive support, but perhaps it's useful nonetheless.
I saw that even people familiar with the codebase failed after 3 attempts because of design issues, so I must admit I'm a bit intimidated, haha '^^ I guess a massively parallel recursive walk is the goal? @mre, was the v3 attempt closed because of a design issue, or just because it still had bugs and had drifted too much from main?
Yeah, the third attempt changed too many things at the same time; I ran into issues and let the branch diverge too much, which made progress harder. In my opinion, the first attempt, while simplistic, had the best chance of getting merged. With that first attempt, I ran into a weird edge case where the pipeline would not terminate. It simply got stuck somewhere, probably because my count of outstanding requests was off. If you are willing to give it a try, feel free to take a look at the different versions to see which one you like best and pick up the work from there. Of course, a "clean room" implementation might also work; in fact, too much familiarity with the codebase might be a hindrance to finding a good solution. In any case, thanks for looking into this! Good luck, and if you have any questions, feel free to reach out here or send me an email so that we can chat about ideas or brainstorm a bit.
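For what it's worth, a minimal sketch of the outstanding-work counting mentioned above could look like this. It assumes a tokio mpsc channel feeding checked pages back into the extractor; the structure is illustrative and not taken from any of the existing branches.

```rust
// Sketch only: terminating a recursive crawl by counting outstanding work.
// Single-task loop to keep it small; a real pipeline with many worker tasks
// would share this counter (e.g. an Arc<AtomicUsize>) instead of a plain usize.
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Unbounded to keep the focus on termination; a bounded channel would
    // additionally need backpressure handling (see further down the thread).
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();
    let mut pending: usize = 0;

    // Seed the pipeline with the root URL.
    pending += 1;
    tx.send("https://example.com/".to_string()).unwrap();

    while let Some(_url) = rx.recv().await {
        // ... fetch `_url`, check it, and extract same-site links here ...
        let new_links: Vec<String> = vec![]; // placeholder for extracted links

        // Register new work *before* marking the current item done, otherwise
        // the count can momentarily hit zero and end the crawl too early.
        pending += new_links.len();
        for link in new_links {
            tx.send(link).unwrap();
        }

        // The current item is finished; stop once nothing is outstanding.
        pending -= 1;
        if pending == 0 {
            break;
        }
    }
}
```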
OK, so I implemented something. It's more of a PoC for now, but it works. I just, uuuuh, left the implementation of a recursion depth limit for later and, for funsies, tried it on en.wikipedia.org and then std.rust-lang.org; the implementation might be a little too "efficient" lmao. EDIT: the wifi box reboot was successful, I might've overwhelmed its router lmao, I got my connection back up.
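A recursion depth limit like the one deferred above could be as simple as carrying a depth alongside each queued URL; the types and names here are hypothetical, not the PoC's code.

```rust
// Sketch only: enqueue links together with their depth and stop spidering
// past a configurable limit (the links themselves can still be checked).
struct CrawlItem {
    url: String,
    depth: usize,
}

fn enqueue_children(parent: &CrawlItem, links: Vec<String>, max_depth: usize, queue: &mut Vec<CrawlItem>) {
    if parent.depth >= max_depth {
        return; // depth limit reached: don't follow links found on this page
    }
    for url in links {
        queue.push(CrawlItem { url, depth: parent.depth + 1 });
    }
}

fn main() {
    let mut queue = vec![CrawlItem { url: "https://example.com/".into(), depth: 0 }];
    let root = queue.remove(0);
    enqueue_children(&root, vec!["https://example.com/a".into()], 2, &mut queue);
    assert_eq!(queue.len(), 1);
}
```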
There's also an issue with tokio never unblocking sends on the responses channel once it fills up; a band-aid is to bump max-concurrency.
Great!
haha. 😆
You're right. Simply increasing max-concurrency is indeed just masking the underlying issue by giving more room before it manifests, rather than addressing why the channel isn't properly draining in the first place. I wonder if it's just backpressure.
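One way the stall described above can happen with bounded channels is when the task that should drain responses is itself blocked on a full channel, so nothing ever drains. A minimal sketch of keeping the response side always drained follows; the structure is illustrative and not lychee's code.

```rust
// Sketch only: a request/response pipeline where responses are drained by a
// dedicated task, so checkers can never be wedged on a full response channel.
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (req_tx, mut req_rx) = mpsc::channel::<String>(100);
    let (resp_tx, mut resp_rx) = mpsc::channel::<String>(100);

    // Checker: turns requests into responses.
    tokio::spawn(async move {
        while let Some(url) = req_rx.recv().await {
            // ... perform the actual check here ...
            // If `resp_rx` were not being drained, this send would eventually block forever.
            if resp_tx.send(format!("checked {url}")).await.is_err() {
                break;
            }
        }
    });

    // Dedicated drain task: responses are always consumed, so backpressure on
    // the request side slows producers down without deadlocking the pipeline.
    let drain = tokio::spawn(async move {
        while let Some(result) = resp_rx.recv().await {
            println!("{result}");
        }
    });

    for i in 0..1000 {
        req_tx.send(format!("https://example.com/page/{i}")).await.unwrap();
    }
    drop(req_tx); // close the pipeline so both tasks can finish
    drain.await.unwrap();
}
```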
It would be nice to pass a URL and have it crawl the entire website recursively looking for dead links.
In order to avoid crawling the entire internet, it should stop recursing once a request no longer matches the original domain.
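As a sketch of that stop condition, a same-host check could look like this (using the `url` crate; a hypothetical helper, not lychee's implementation):

```rust
// Sketch only: recurse into a link only if its host matches the root URL's host.
use url::Url;

fn stays_on_site(root: &Url, candidate: &Url) -> bool {
    root.host_str() == candidate.host_str()
}

fn main() {
    let root = Url::parse("https://example.com/").unwrap();
    let internal = Url::parse("https://example.com/docs/").unwrap();
    let external = Url::parse("https://other.org/").unwrap();
    assert!(stays_on_site(&root, &internal));
    assert!(!stays_on_site(&root, &external));
}
```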