
Removed anchor links from queue in ready_queue #15

Open · wants to merge 1 commit into master
Conversation

John61590

No description provided.

@@ -7,6 +7,9 @@ def ready_queue(address, html):
     links = linkregex.findall(html)
     queue = []
     for link in links:
+        # no anchors, i.e. video#title
+        if "#" in link:
+            continue
Owner:
I think removing all links with anchors may be a bit aggressive and cause data loss. If it's purely an anchor link, i.e. the same page + an anchor, then sure, removing it is fine. However, occasionally you get links to specific parts of external pages in which case you're more likely to want to scan it.

So, maybe an improvement is to remove the anchor from a link, and ditch it if it's already queued or is the same page? Implement that logic and I'll gladly merge this.

Author:

"links to specific parts of external pages in which case you're more
likely to want to scan it."
Can you give an example? I haven't come across data loss yet in my
crawler.


Owner:

Sure. Let's say, for instance, I'm writing an article about GitHub, and I want to talk about its profitability. I may link to its Wikipedia article, specifically the section about how it makes money, with the following URL: http://en.wikipedia.org/wiki/Github#Revenue_model

The check in this patch as it stands will ditch this whole link. However, if we're running a scan to, say, collect all the external links on a site or gather other such information, we may want to actually process it. What I was proposing was that we remove the anchor and then check the link for duplication (or exclude it by any other normal criteria), instead of excluding a link entirely just because it has an anchor in it.
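The logic proposed above could be sketched roughly like this. This is a hypothetical illustration, not the project's actual code: the function name `ready_queue_sketch` and the plain list of links are stand-ins, and `urldefrag` from the standard library is used here as one way to split off the fragment.

```python
from urllib.parse import urldefrag

def ready_queue_sketch(address, links):
    """Queue links with their anchors stripped, skipping same-page
    and duplicate URLs (a sketch of the proposed behavior)."""
    queue = []
    seen = {address}  # never re-queue the page we're already on
    for link in links:
        # urldefrag("page#anchor") -> ("page", "anchor")
        url, _fragment = urldefrag(link)
        if url and url not in seen:
            seen.add(url)
            queue.append(url)
    return queue
```

With the Wikipedia example from the comment above, a pure-anchor link back to the current page is dropped, while an external link with an anchor is kept (minus its fragment) and only queued once.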
