Commit

Refactored get_news, get_analyst_price_targets (#108)
d3an authored Jun 19, 2021
1 parent 786f5a9 commit 4f8304a
Showing 6 changed files with 151 additions and 184 deletions.
20 changes: 10 additions & 10 deletions README.rst
@@ -1,5 +1,5 @@
finviz-api
########
##########
*Unofficial Python API for FinViz*

.. image:: https://badge.fury.io/py/finviz.svg
@@ -16,13 +16,13 @@ finviz-api


Downloading & Installation
-----
---------------------------

$ pip install -U git+https://github.com/mariostoev/finviz


What is Finviz?
=====
================
FinViz_ aims to make market information accessible and provides a lot of data in visual snapshots, allowing traders and investors to quickly find the stock, future or forex pair they are looking for. The site provides advanced screeners, market maps, analysis, comparative tools, and charts.

.. _FinViz: https://finviz.com/?a=128493348
@@ -32,7 +32,7 @@ FinViz_ aims to make market information accessible and provides a lot of data in
Any quote data displayed on finviz.com is delayed by 15 minutes for NASDAQ and by 20 minutes for NYSE and AMEX. This API should **NOT** be used for live trading; its main purpose is financial analysis, research, and data scraping.

Using Screener
=====
===============

Before using the Screener class, you have to manually go to the website's screener and enter your desired settings. The URL will automatically change every time you add a new setting. After you're done, the URL will look something like this:
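
As a rough, illustrative sketch of that workflow (not taken from this commit's diff; the filter codes, table name and ordering below are example settings, and the URL shape is only approximate):

.. code:: python

    # A screener URL built on the site looks roughly like
    #   https://finviz.com/screener.ashx?v=111&f=exch_nasd,idx_sp500&o=price
    # where f= carries the chosen filters and o= the ordering.
    from finviz.screener import Screener

    filters = ["exch_nasd", "idx_sp500"]  # example filters: NASDAQ-listed, S&P 500 members
    stock_list = Screener(filters=filters, table="Performance", order="price")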

@@ -68,7 +68,7 @@ To make matters easier inside the code you won't refer to tables by their number
.. image:: https://i.imgur.com/cb7UdxB.png

Using Portfolio
=====
================
.. code:: python
from finviz.portfolio import Portfolio
@@ -112,7 +112,7 @@ Note that, if any *optional* fields are left empty, the API will assign them tod
portfolio.create_portfolio('<portfolio-name>', '<path-to-csv-file>')
Individual stocks
=====
==================

.. code:: pycon
@@ -128,7 +128,7 @@ Individual stocks
[{'date': '2019-10-24', 'category': 'Reiterated', 'analyst': 'UBS', 'rating': 'Buy', 'price_from': 235, 'price_to': 275}, ...
Downloading charts
=====
===================

.. code:: python
@@ -149,12 +149,12 @@ Downloading charts
# ta='0' > ignore technical analysis
Documentation
=====
==============

You can read the rest of the documentation inside the docstrings.

Contributing
=====
=============
You can contribute to the project by reporting bugs, suggesting enhancements, or directly by extending and writing features (see the ongoing projects_).

.. _projects: https://github.com/mariostoev/finviz/projects/1
@@ -165,5 +165,5 @@ You can contribute to the project by reporting bugs, suggesting enhancements, or
:target: https://www.paypal.me/finvizapi

Disclaimer
-----
-----------
*Using the library to acquire data from FinViz is against their Terms of Service and robots.txt. Use it responsibly and at your own risk. This library is built purely for educational purposes.*
93 changes: 40 additions & 53 deletions finviz/main_func.py
@@ -1,4 +1,4 @@
-import datetime
+from datetime import datetime

from finviz.helper_functions.request_functions import http_request_get
from finviz.helper_functions.scraper_functions import get_table
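
The import change above means the rest of the module now refers to the ``datetime`` class directly. A minimal sketch of the two calling styles (the date string is illustrative):

.. code:: python

    # Old import style: the module is imported, so the class is reached as
    # datetime.datetime and strptime needs the double qualification.
    import datetime
    datetime.datetime.strptime("Jun-19-21", "%b-%d-%y")

    # New import style (this commit): the class itself is imported, rebinding
    # the name so strptime is called directly, as in the refactored functions.
    from datetime import datetime
    datetime.strptime("Jun-19-21", "%b-%d-%y")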
@@ -74,11 +74,28 @@ def get_news(ticker):

    get_page(ticker)
    page_parsed = STOCK_PAGE[ticker]
-    all_news = page_parsed.cssselect('a[class="tab-link-news"]')
-    headlines = [row.xpath("text()")[0] for row in all_news]
-    urls = [row.get("href") for row in all_news]
+    rows = page_parsed.cssselect('table[id="news-table"]')[0].xpath('./tr[not(@id)]')

-    return list(zip(headlines, urls))
+    results = []
+    date = None
+    for row in rows:
+        raw_timestamp = row.xpath("./td")[0].xpath('text()')[0][0:-2]
+
+        if len(raw_timestamp) > 8:
+            parsed_timestamp = datetime.strptime(raw_timestamp, "%b-%d-%y %I:%M%p")
+            date = parsed_timestamp.date()
+        else:
+            parsed_timestamp = datetime.strptime(raw_timestamp, "%I:%M%p").replace(
+                year=date.year, month=date.month, day=date.day)
+
+        results.append((
+            parsed_timestamp.strftime("%Y-%m-%d %H:%M"),
+            row.xpath("./td")[1].cssselect('a[class="tab-link-news"]')[0].xpath("text()")[0],
+            row.xpath("./td")[1].cssselect('a[class="tab-link-news"]')[0].get("href"),
+            row.xpath("./td")[1].cssselect('div[class="news-link-right"] span')[0].xpath("text()")[0][1:]
+        ))
+
+    return results


def get_all_news():
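
With the refactor above, ``get_news`` returns one tuple per row of the stock page's ``news-table`` — formatted timestamp, headline, link and source — instead of ``(headline, url)`` pairs. A hedged usage sketch, assuming the package-level helper shown in the README's "Individual stocks" section (the ticker and values are illustrative):

.. code:: pycon

    >>> import finviz
    >>> finviz.get_news('AAPL')
    [('2021-06-18 16:32', 'Example headline', 'https://example.com/article', 'Reuters'), ...]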
@@ -131,60 +148,30 @@ def get_analyst_price_targets(ticker, last_ratings=5):
        get_page(ticker)
        page_parsed = STOCK_PAGE[ticker]
        table = page_parsed.cssselect('table[class="fullview-ratings-outer"]')[0]
-        ratings_list = [row.xpath("td//text()") for row in table]
-        ratings_list = [
-            [val for val in row if val != "\n"] for row in ratings_list
-        ] # remove new line entries
-
-        headers = [
-            "date",
-            "category",
-            "analyst",
-            "rating",
-            "price_from",
-            "price_to",
-        ] # header names
-        count = 0
-
-        for row in ratings_list:
-            if count == last_ratings:
-                break
-            # default values for len(row) == 4 , that is there is NO price information
-            price_from, price_to = 0, 0
-            if len(row) == 5:
-
-                strings = row[4].split("→")
-                # print(strings)
-                if len(strings) == 1:
-                    # if only ONE price is available then it is 'price_to' value
-                    price_to = strings[0].strip(" ").strip("$")
-                else:
-                    # both '_from' & '_to' prices available
-                    price_from = strings[0].strip(" ").strip("$")
-                    price_to = strings[1].strip(" ").strip("$")
-            # only take first 4 elements, discard last element if exists
-            elements = row[:4]
-            elements.append(
-                datetime.datetime.strptime(row[0], "%b-%d-%y").strftime("%Y-%m-%d")
-            ) # convert date format
-            elements.extend(row[1:3])
-            elements.append(row[3].replace("→", "->"))
-            elements.append(price_from)
-            elements.append(price_to)

+        for row in table:
+            rating = row.xpath("td//text()")
+            rating = [val.replace("→", "->").replace("$", "") for val in rating if val != '\n']
+            rating[0] = datetime.strptime(rating[0], "%b-%d-%y").strftime("%Y-%m-%d")
+
            data = {
-                "date": elements[0],
-                "category": elements[1],
-                "analyst": elements[2],
-                "rating": elements[3],
-                "price_from": float(price_from),
-                "price_to": float(price_to),
+                "date": rating[0],
+                "category": rating[1],
+                "analyst": rating[2],
+                "rating": rating[3],
            }
+            if len(rating) == 5:
+                if "->" in rating[4]:
+                    rating.extend(rating[4].replace(" ", "").split("->"))
+                    del rating[4]
+                    data["target_from"] = float(rating[4])
+                    data["target_to"] = float(rating[5])
+                else:
+                    data["target"] = float(rating[4])

            analyst_price_targets.append(data)
-            count += 1
    except Exception as e:
        # print("-> Exception: %s parsing analysts' ratings for ticker %s" % (str(e), ticker))
        pass

-    return analyst_price_targets
+    return analyst_price_targets[:last_ratings]
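
After the rewrite, each rating is returned as a dict with ``date``, ``category``, ``analyst`` and ``rating`` keys, plus ``target_from``/``target_to`` when the cell contains a price range or a single ``target`` otherwise; the ``last_ratings`` limit is applied by slicing the final list. A hedged usage sketch (values are illustrative; the package-level helper is assumed from the README):

.. code:: pycon

    >>> import finviz
    >>> finviz.get_analyst_price_targets('AAPL', last_ratings=2)
    [{'date': '2021-06-18', 'category': 'Reiterated', 'analyst': 'UBS', 'rating': 'Buy',
      'target_from': 235.0, 'target_to': 275.0},
     {'date': '2021-06-15', 'category': 'Initiated', 'analyst': 'Example Securities', 'rating': 'Hold',
      'target': 150.0}]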
