New name, bug fixes, updated README.md
* wdisneyw is now pyParker, both to have a better name and to avoid any copyright issues
* Removed "Disney's" from some of the park names; the apostrophe caused problems on some UNIX systems
* Fixed a problem on some UNIX systems when creating the values ("\n" => ",")
* Added requirements, instructions, a todo list, credits, and licensing to reflect the new license
BourgonLaurent committed Jul 8, 2019
1 parent 6830cd3 commit 9c3f5d4
Showing 3 changed files with 87 additions and 31 deletions.
6 changes: 5 additions & 1 deletion .gitignore
@@ -106,4 +106,8 @@ venv.bak/
# personal
data/*
location.csv
temp.csv
synpyParker.py

#Vscode
.vscode/*
53 changes: 49 additions & 4 deletions README.md
@@ -1,5 +1,50 @@
# wdisneyw-py
Python3 program that records wait times at Walt Disney World, FL.
# pyParker

## Credits
The data is taken from the site http://laughingplace.com; there is a solution to take it directly from Disney, but it is more complicated... to do....
`Python3` program that records wait times and extra information at Walt Disney World, FL. I created this program to plan a family trip to Walt Disney World.

## Requirements

- `Python3` (might work with `Python2`, but it is untested and will not be supported)
- [BeautifulSoup4](https://www.crummy.com/software/BeautifulSoup/) installed
- Internet connection
- Write permissions in the folder you run it from (especially on Synology's DSM)
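
A quick way to confirm these requirements are met (a minimal sketch, not part of pyParker):

```python
# Sanity check for the requirements above - a sketch, not part of pyParker.
import sys

assert sys.version_info[0] >= 3, "Python3 is required"
import bs4  # raises ImportError if BeautifulSoup4 is missing

print("Python", sys.version.split()[0], "with bs4", bs4.__version__, "- ready")
```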

## Instructions

1. Download the latest version of `pyParker.py` from [Releases](https://github.com/BourgonLaurent/pyParker/releases)
   - It is recommended to put it in its own folder, because it automatically creates a `data` folder where it is executed
2. If you don't have `Python3` installed, [install it](https://www.python.org/downloads/)
3. If you don't have `BeautifulSoup4`, install it [manually](https://www.crummy.com/software/BeautifulSoup/) or with `pip` (`pip install beautifulsoup4`)
4. Open Terminal/CMD/PowerShell, navigate to `pyParker.py`'s folder (using `cd`), and run the script by entering `python3 pyParker.py`.

## Using it

A `data` folder will be created, with a sub-folder for each park. Inside each park's folder you will find a file for each attraction and an `[INFORMATION].csv` file, which contains *the date, the day of the week, the opening hours, and Extra Magic Hours.* To interpret the data, you will have to use an external tool (see the sketch below).
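
For example, a minimal sketch of such a tool, assuming the wait time is the last value of each comma-separated row (the file path and column layout here are assumptions, not a documented format):

```python
# Sketch of an "external tool": average the waits logged for one attraction.
# The path and the row layout (wait minutes in the last column) are assumptions.
import csv

def average_wait(path):
    waits = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            try:
                waits.append(int(row[-1]))  # assumed: wait time as the last value
            except (ValueError, IndexError):
                continue  # skip malformed or header rows
    return sum(waits) / len(waits) if waits else None

print(average_wait("data/Magic Kingdom/Space Mountain.csv"))  # hypothetical file
```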

## TODO

(This list is in chronological order; it may change.)

- Create a repository that holds the waits I logged, by committing changes automatically.
- Turn all the web scraping into a module.
- Make a pseudo-GUI to create a script for automation (currently the script needs to be modified manually to be used on a Synology).
- Take data directly from Disney; this would provide accurate data at the specified time.
- Support Disneyland, CA and Universal Studios, FL (same website, same web scraping, it just needs to be implemented).
- Support other parks (Six Flags and others).
- Write a script that converts the data into a graph instead of doing it manually.
- Implement a GUI to select parks and to create a graph.

## Credits

Data is taken from [Laughing Place](http://laughingplace.com); there is a way to take it directly from Disney (see the TODO list).

## Licensing

> _**This is free and unencumbered software released into the public domain.**_
>
> _**Anyone is free**_ to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
>
> In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain. We make this dedication for the benefit of the public at large and to the detriment of our heirs and successors. We intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law.
>
> THE SOFTWARE IS PROVIDED "AS IS", _**WITHOUT WARRANTY**_ OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
>
> For more information, please refer to <http://unlicense.org>
59 changes: 33 additions & 26 deletions wdisneyw.py → pyParker.py
@@ -1,3 +1,4 @@
#!/usr/bin/python3
from bs4 import BeautifulSoup as soup
from urllib.request import urlopen as uReq
import urllib.request
@@ -21,38 +22,40 @@ def verifConfig():
makeDir("./data/")
makeDir("./data/Magic Kingdom/")
makeDir("./data/Epcot/")
makeDir("./data/Disney's Hollywood Studios/")
makeDir("./data/Disney's Animal Kingdom/")
makeDir("./data/Hollywood Studios/")
makeDir("./data/Animal Kingdom/")

makeFile("./data/Magic Kingdom/[INFORMATION].csv")
makeFile("./data/Epcot/[INFORMATION].csv")
makeFile("./data/Disney's Hollywood Studios/[INFORMATION].csv")
makeFile("./data/Disney's Animal Kingdom/[INFORMATION].csv")
makeFile("./data/Hollywood Studios/[INFORMATION].csv")
makeFile("./data/Animal Kingdom/[INFORMATION].csv")

def retrieveHTML(selected_park):
global mk, ep, hs, ak

if selected_park not in (mk, ep, hs, ak): #Check if park exists
if selected_park not in (mk, ep, hs, ak): # Check if park exists
print("Invalid park selected")
sys.exit()
#else: #DEBUG
#print(selected_park)
# else: #DEBUG
# print(selected_park)
# Specify User-Agent to prevent Error 403: Forbidden
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36"
# Create url by using the park specified
selected_url = "https://www.laughingplace.com/w/p/{}-current-wait-times/".format(selected_park)
req = urllib.request.Request(selected_url,headers={"User-Agent": user_agent}) # setting headers to go incognito
uClient = uReq(req) # opening up connection
page_html = uClient.read() # grabbing page
uClient.close() # closing connection
req = urllib.request.Request(selected_url, headers={"User-Agent": user_agent}) # setting headers to go incognito
uClient = uReq(req) # opening up connection
page_html = uClient.read() # grabbing page
uClient.close() # closing connection
return page_html


def webScrape(page_html):
page_soup = soup(page_html, "lxml") #html parsing
title = page_soup.h1.text.strip() #Title of the page
park = title.replace(" Current Wait Times", "") # Name of the park
r_datetime = page_soup.findAll("div", {"class":"header"})[0].text.strip() #Find the locations
page_soup = soup(page_html, "lxml") # html parsing
title = page_soup.h1.text.strip() # Title of the page
park = title.replace(" Current Wait Times", "") # Name of the park
if "Disney's " in park:
park = park.replace("Disney's ", "")
r_datetime = page_soup.findAll("div", {"class": "header"})[0].text.strip() # Find the locations

date, time = r_datetime.split("\n")
date, ophour = date.split(": ")
@@ -65,20 +68,22 @@ def webScrape(page_html):
containers = table.findAll("tr")
containers = list(dict.fromkeys(containers))

location={}
location["park"]=park
location["ophour"]=ophour
location["date"]=date
location["day"]=day
location["time"]=time
location = {}
location["park"] = park
location["ophour"] = ophour
location["date"] = date
location["day"] = day
location["time"] = time

attlist = {}
for container in containers:
entry = container.text.strip()
c_entry = container.text.strip()
#Cleaning entries
if "\n" in entry:
c_entry = entry.replace("\n", ",", 1)
if "\n" in c_entry:
c_entry = c_entry.replace("\n", ",", 1)
c_entry = c_entry.replace('\n','')
else:
continue
if " minutes" in c_entry:
c_entry = c_entry.replace(" minutes", "")
if "“" in c_entry:
@@ -101,9 +106,11 @@
c_entry = c_entry.replace(":", "")
if "™" in c_entry:
c_entry = c_entry.replace("™", "")
#Split entry to have 2 values
if u"\u2018" in c_entry:
c_entry = c_entry.replace(u"\u2018", "")
# Split entry to have 2 values
att, time = c_entry.split(",")
#Add to dictionnary
# Add to dictionary
attlist[att] = time
return attlist, location

