Created module and data files for assignment
kylemasc917 committed Nov 5, 2024
1 parent 78839c8 commit 573ef54
Showing 11 changed files with 7,490 additions and 0 deletions.
923 changes: 923 additions & 0 deletions CO2_International_Data/Africa_Data.txt


1,033 changes: 1,033 additions & 0 deletions CO2_International_Data/Antarctica_Data.txt


1,097 changes: 1,097 additions & 0 deletions CO2_International_Data/Asia_Data.txt


21 changes: 21 additions & 0 deletions CO2_International_Data/Balkan_Data.txt
@@ -0,0 +1,21 @@
Balkan has no data because Balkan needs no data!

God of justice, Thou who saved us
From perdition until now,
Hear our voices from this day on,
And from now on be our salvation!

Our beautiful homeland,
Oh dear heroic land,
Fatherland of ancient glory,
May you be forever happy!

𝄆 Forward, flag of glory,
To battle, heroic blood!
For the welfare of the fatherland
Let the rifle speak! 𝄇

God save, God protect
Our King and our people!
King Peter, God protect,
All our people pray to Thee.
1,140 changes: 1,140 additions & 0 deletions CO2_International_Data/Europa_Data.txt


1,324 changes: 1,324 additions & 0 deletions CO2_International_Data/NA_Data.txt


58 changes: 58 additions & 0 deletions CO2_International_Data/NOTES.txt
@@ -0,0 +1,58 @@
This file contains the license, citation, and site location for each data set used in this module.

The license used by every data set is:
These data were produced by NOAA and are not subject to copyright
protection in the United States. NOAA waives any potential copyright and
related rights in these data worldwide through the Creative Commons Zero
1.0 Universal Public Domain Dedication (CC0 1.0)

CC0 1.0 Universal

The data citation for each set is:

Lan, X., J.W. Mund, A.M. Crotwell, K.W. Thoning, E. Moglia, M. Madronich, K. Baugh,
G. Petron, M.J. Crotwell, D. Neff, S. Wolter, T. Mefford and S. DeVogel (2024),
Atmospheric Carbon Dioxide Dry Air Mole Fractions from the NOAA GML Carbon Cycle Cooperative
Global Air Sampling Network, 1968-2023, Version: 2024-07-30, https://doi.org/10.15138/wkgj-f215

South America Data:

# site_code : USH
# site_name : Ushuaia
# site_country : Argentina

Oceania Data:

# site_code : CGO
# site_name : Cape Grim, Tasmania
# site_country : Australia

North America Data:

# site_code : UTA
# site_name : Wendover, Utah
# site_country : United States

Africa Data:

# site_code : ASK
# site_name : Assekrem
# site_country : Algeria

Asia Data:

# site_code : AMY
# site_name : Anmyeon-do
# site_country : Republic of Korea

Antarctica Data:

# site_code : PSA
# site_name : Palmer Station, Antarctica
# site_country : United States

Europe Data:

# site_code : ZEP
# site_name : Ny-Alesund, Svalbard
# site_country : Norway and Sweden
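Each site block above mirrors the `# key : value` header lines that the NOAA GML .txt files carry. A minimal sketch of parsing such header lines into a metadata dict (the header format is an assumption based on the excerpts listed here):

```python
def parse_site_header(lines):
    """Extract '# key : value' pairs from NOAA-style header lines."""
    meta = {}
    for line in lines:
        if line.startswith("#") and ":" in line:
            # Strip the leading '#' and whitespace, then split on the first ':'
            key, _, value = line.lstrip("# ").partition(":")
            meta[key.strip()] = value.strip()
    return meta

# Example using the South America header shown above
header = [
    "# site_code : USH",
    "# site_name : Ushuaia",
    "# site_country : Argentina",
]
print(parse_site_header(header))
# → {'site_code': 'USH', 'site_name': 'Ushuaia', 'site_country': 'Argentina'}
```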
911 changes: 911 additions & 0 deletions CO2_International_Data/Oceania_Data.txt


956 changes: 956 additions & 0 deletions CO2_International_Data/SA_Data.txt


1 change: 1 addition & 0 deletions CO2_International_Data/co2_international_data.json


26 changes: 26 additions & 0 deletions CO2_International_Data/international_data_converter.py
@@ -0,0 +1,26 @@
"""
This module converts a number of CO2 data files as
.txt files into a single panda dataframe
with only the datetime and CO2 (ppm) values.
"""
import pandas as pd

# List of file paths
file_paths = ["Africa_Data.txt", "Asia_Data.txt", "NA_Data.txt", "SA_Data.txt",
"Oceania_Data.txt", "Antarctica_Data.txt", "Europa_Data.txt"]

# Specify the columns you want to include
columns_to_use = ["datetime", "value"] # Replace with the actual column names you need

# Read each .txt file with only the specified columns and store them in a list of DataFrames
dataframes = [pd.read_csv(file, delimiter=" ", usecols=columns_to_use) for file in file_paths]

# Concatenate all DataFrames into one
combined_df = pd.concat(dataframes, ignore_index=True)

# Convert the combined DataFrame to JSON
json_data = combined_df.to_json(orient="records")

# Save the JSON data to a .json file
with open('co2_international_data.json', 'w', encoding="utf-8") as json_file:
json_file.write(json_data)
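As a quick sanity check, the `orient="records"` JSON the script emits can be read straight back into a DataFrame. The two rows below are hypothetical, just to illustrate the record shape:

```python
import io
import pandas as pd

# Hypothetical sample mirroring the records co2_international_data.json holds
sample = io.StringIO(
    '[{"datetime":"2023-01-01","value":419.5},'
    '{"datetime":"2023-01-08","value":420.1}]'
)

# Round-trip the records back into a DataFrame and restore dtypes
df = pd.read_json(sample, orient="records")
df["datetime"] = pd.to_datetime(df["datetime"])
print(df)
```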
