quick fix for LinkedIn Automation #961
Conversation
The changed job_manager works well on the "applied job" pages.
I mean, can you use Sonnet or ChatGPT to do this faster? I need some explanation; I can help you with this.
It still has bugs, but it can be built on from here.
@surapuramakhil could you please help me test this version? Right now, it works well on my device.
Hey, thanks for the work! I tested it using python main.py --collect and encountered some issues. Please review the comments for further details.
```diff
- jobs_xpath_query = "//ul[contains(@class, 'scaffold-layout__list-container')]"
+ # XPath query to find the ul tag with class scaffold-layout__list
+ jobs_xpath_query = (
+     "//div[contains(@class, 'scaffold-layout__list-detail-container')]//ul"
```
While the code works, the scaffold-layout__list class appears to be closer to the target ul element than scaffold-layout__list-detail-container, making it more appropriate for selection.
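A minimal sketch of the narrower selector this comment hints at; which element actually carries the class is an assumption here, and a plain contains() would also match scaffold-layout__list-container, so the token-match idiom guards against that:

```python
# Sketch only: match the exact 'scaffold-layout__list' class token so the
# broader '...__list-container' / '...__list-detail-container' classes are
# not accidentally selected, then descend to the <ul>. Adjust to //ul[...]
# if the class turns out to sit on the <ul> itself.
jobs_xpath_query = (
    "//*[contains(concat(' ', normalize-space(@class), ' '), "
    "' scaffold-layout__list ')]//ul"
)
```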
src/ai_hawk/job_manager.py (outdated)
```diff
  try:
-     job.link = job_tile.find_element(By.CLASS_NAME, 'job-card-list__title').get_attribute('href').split('?')[0]
+     job.link = title_element.get_attribute("href").split("?")[0]
```
There's a potential unhandled exception here. If the previous line raised an exception and title_element is not found, this line will fail with a different exception rather than NoSuchElementException, causing the entire program to stop abruptly.
Suggestion: job.link = job_tile.find_element(By.CLASS_NAME, 'job-card-list__title--link').get_attribute('href').split('?')[0]
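A sketch of the guarded version this comment asks for, keeping the lookup and the attribute access inside the same try so a missing element surfaces as NoSuchElementException rather than a NameError (the class name is taken from the suggestion above and may lag behind LinkedIn's UI):

```python
try:
    title_element = job_tile.find_element(
        By.CLASS_NAME, 'job-card-list__title--link'
    )
    job.link = title_element.get_attribute('href').split('?')[0]
    logger.debug(f"Job link extracted: {job.link}")
except NoSuchElementException:
    logger.warning("Job link is missing.")
```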
src/ai_hawk/job_manager.py (outdated)
```python
title_element = job_tile.find_element(
    By.XPATH, ".//div[contains(@class, 'artdeco-entity-lockup__title')]//a"
)
job.title = title_element.text.strip()
```
I've got job titles duplicated with this code: "job_title": "Senior Hardware Experience Designer\nSenior Hardware Experience Designer",
Suggestion: job.title = job_tile.find_element(By.CLASS_NAME, 'job-card-list__title--link').find_element(By.TAG_NAME, 'strong').text
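A sketch of that suggestion in context; the duplication most likely comes from LinkedIn rendering the title twice (one copy visually hidden for accessibility), so reading only the strong child keeps a single copy. The class and tag names are the ones from the suggestion and are not guaranteed to be stable:

```python
try:
    job.title = (
        job_tile.find_element(By.CLASS_NAME, 'job-card-list__title--link')
                .find_element(By.TAG_NAME, 'strong')
                .text.strip()
    )
    logger.debug(f"Job title extracted: {job.title}")
except NoSuchElementException:
    logger.warning("Job title is missing.")
```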
src/ai_hawk/job_manager.py (outdated)
```python
# Extract job Location
try:
    job.location = job_tile.find_element(
        By.XPATH, ".//ul[contains(@class, 'job-card-container__metadata-wrapper')]//li"
```
src/ai_hawk/job_manager.py (outdated)
logger.debug(f"Job link extracted: {job.link}") | ||
except NoSuchElementException: | ||
logger.warning("Job link is missing.") | ||
|
||
# Extract Company Name |
Company name and location are now split by a dot (·). I got the company name along with the location when I ran --collect.
Suggestion:
```python
# Extract Company Name and Location
try:
    full_text = job_tile.find_element(By.XPATH, ".//div[contains(@class, 'artdeco-entity-lockup__subtitle')]//span").text
    company, location = full_text.split('·')
    job.company = company.strip()
    logger.debug(f"Job company extracted: {job.company}")
    job.location = location.strip()
    logger.debug(f"Job location extracted: {job.location}")
except NoSuchElementException as e:
    logger.warning(f'Job company and location are missing. {e} {traceback.format_exc()}')
```
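One caveat with the two-value unpack above: if the subtitle has no '·' (or more than one), split() raises a ValueError that the except NoSuchElementException clause will not catch. A defensive variant, offered purely as an assumption and not part of the PR:

```python
# Tolerate subtitles without (or with multiple) '·' separators.
parts = [part.strip() for part in full_text.split('·')]
job.company = parts[0] if parts else ""
job.location = parts[1] if len(parts) > 1 else ""
```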
Thanks for all of your advice!!! I will try to apply it.
Bad news... my LinkedIn account is blocked.
@cjbbb are you serious? Why?
I have no idea. Let me try to contact them and get my account released.
@cjbbb @surapuramakhil any update on this PR?
Needs a filename sanitization method in job_application_saver.py:

```python
# job_application_saver.py
import re

# Used in the event that the job title has invalid characters
def sanitize_filename(filename):
    # Remove invalid characters
    return re.sub(r'[<>:"/\\|?*\n]', '_', filename)
```

```python
# job_application_saver.py
dir_name = sanitize_filename(f"{job.id} - {job.company} {job.title}")
dir_path = os.path.join(BASE_DIR, dir_name)
```
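A quick illustration of what the sanitizer produces for a title containing reserved characters (the title string here is made up):

```python
print(sanitize_filename('Senior Engineer | C/C++ "Embedded"'))
# -> Senior Engineer _ C_C++ _Embedded_
```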
Company name and location are in the same element. Split the string and pick the respective element from the list:

```python
# job_manager.py
# Extract Company Name
try:
    job.company = job_tile.find_element(
        By.XPATH, ".//div[contains(@class, 'artdeco-entity-lockup__subtitle')]//span"
    ).text.split('·')[0].strip()
    logger.debug(f"Job company extracted: {job.company}")
except NoSuchElementException as e:
    logger.warning(f"Job company is missing. {e} {traceback.format_exc()}")

# Extract job Location
try:
    job.location = job_tile.find_element(
        By.XPATH, ".//div[contains(@class, 'artdeco-entity-lockup__subtitle')]//span"
    ).text.split('·')[1].strip()
    logger.debug(f"Job location extracted: {job.location}")
except NoSuchElementException:
    logger.warning("Job location is missing.")
```
Use a nested try block to first find the question text in the section. If no question is found, go to the grandparent's spans and try to find the question there:

```python
# linkedIn_easy_applier.py
def _find_and_handle_dropdown_question(self, job_context: JobContext, section: WebElement) -> bool:
    job_application = job_context.job_application
    # In the event that there is only one question, the question sits outside
    # the subsection, so we need to read the parents' text instead.
    try:
        try:
            question = section.find_element(By.CLASS_NAME, 'fb-dash-form-element__label')
            question_text = question.text.lower()
        except NoSuchElementException:
            logger.debug("Unable to find subsection question, trying parent class...")
            # parent = hash, grandparent = hash + span texts
            grand_parent = section.find_element(By.XPATH, "../..")
            # Find the elements that carry text and combine them
            spans = grand_parent.find_elements(By.TAG_NAME, "span")
            question_text = '\n'.join(span.text for span in spans).lower()
        ...
```
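The fallback above simply concatenates every span text under the grandparent; a tiny plain-string illustration of what that join produces (the question and hint texts here are hypothetical):

```python
span_texts = ["How many years of experience do you have?", "Required"]
question_text = '\n'.join(span_texts).lower()
# -> 'how many years of experience do you have?\nrequired'
```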
It seems to be working perfectly fine now.
There's an issue with unfollowing. Change the query:

```python
def _unfollow_company(self) -> None:
    try:
        logger.debug("Unfollowing company")
        follow_checkbox = self.driver.find_element(
            By.XPATH, "//label[@for='follow-company-checkbox']"
        )
        follow_checkbox.click()
    except Exception as e:
        logger.debug(f"Failed to unfollow company: {e}")
```
Not usable anymore? Has LinkedIn changed the UI again?
Closing this PR.
The changed "job_manager.py" can work well to "applied job" pages.
Still need to change "linkedin_easy_applier.py". There is too much work. If someone has time today, they can continue working on this basis.