ODHack: Analyze user behavior across different lending protocols #100
Comments
Hi, can I jump on this issue?
Hi @lukaspetrasek, can I work on this?
Hi, can you guys please tell me something about yourselves: what skills/experience do you have, and how do you plan to tackle this issue? This task is not simple, so I need to learn more before I assign anyone 🙏🏼
I have worked on something similar to this before; the difference was that the dataset was stored in a CSV file, not in Google Storage. This project basically involves data visualization for informed decision making. For this project I will be using Python. Steps to tackle the task:
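As a rough illustration of the loading step described above (reading the dataset from Google Storage rather than a CSV file), here is a minimal sketch, assuming the data sits as a Parquet file in a GCS bucket readable via gcsfs; the bucket path and file name below are hypothetical:

```python
# Minimal sketch: load the dataset from Google Cloud Storage instead of a CSV.
# The bucket path is hypothetical; pandas needs the gcsfs package to read gs:// URLs.
import pandas as pd

GCS_PATH = "gs://example-bucket/lending/loans.parquet"  # hypothetical location

def load_loans(path: str = GCS_PATH) -> pd.DataFrame:
    """Read the loan snapshot into a DataFrame, falling back to a local copy."""
    try:
        return pd.read_parquet(path)
    except (FileNotFoundError, OSError):
        # Fall back to a local file when the bucket is unreachable.
        return pd.read_parquet("loans.parquet")

loans = load_loans()
print(loans.head())
```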
Okay, assigning you @NueloSE 👍🏼 @NueloSE Let me know if everything is clear. If you have any questions, please ask here. What is your TG handle, please? 🙏🏼 Consider joining our TG group.
Hi @NueloSE, I assume the PR is ready for review, right?
@lukaspetrasek, I have implemented all requested changes. It is ready for review: a676ecc
I am applying to this issue via OnlyDust platform.

My background and how it can be leveraged
I am a Python dev who has worked in the field of Data Science and ML. I am a newcomer and I am interested in solving this issue.

How I will approach this issue?
I would start by loading the data from Google Storage. I have experience with Google Storage and Jupyter Notebook; I'll load the data into a pandas DataFrame and analyze it as mentioned. Visualizations can be done with matplotlib, seaborn, and Dash for interactive dashboards. After carefully analyzing, manipulating, and visualizing the data, I'll be able to answer the mentioned questions.
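To make the visualization part concrete, here is a small sketch of one such chart, assuming a seaborn install and hypothetical column names ('protocol', 'collateral_usd') that may not match the real schema:

```python
# Illustrative only: one of the seaborn charts mentioned above.
# Column names 'protocol' and 'collateral_usd' are assumptions, not the real schema.
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

loans = pd.read_parquet("loans.parquet")  # assumed local copy of the data

# Distribution of collateral value per protocol; log scale keeps whales readable.
ax = sns.boxplot(data=loans, x="protocol", y="collateral_usd")
ax.set_yscale("log")
ax.set_title("Collateral (USD) per protocol")
plt.tight_layout()
plt.show()
```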
I am applying to this issue via OnlyDust platform.

My background and how it can be leveraged
With a background in data analysis using Python, experience with Google Cloud, and proficiency in Jupyter notebooks, I have worked on projects that involve complex data visualization and user behavior analysis. My expertise with tools like Pandas, Matplotlib, and Seaborn allows me to efficiently analyze, manipulate, and visualize large datasets, making me well-suited for this project.

How I plan on tackling this issue
I would start by loading the data from Google Storage, ensuring the code is flexible enough to switch between cloud and local databases. I'll perform an initial exploration of the data, using Pandas to handle the loan data and creating visualizations in Jupyter notebooks. For visualizations, I'll use Venn diagrams to show user engagement across protocols and dive into token-specific behavior. Additional insights like staked/borrowed capital distribution across tokens and protocols will be highlighted, ensuring the analysis is both thorough and meaningful.
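A minimal sketch of the Venn-diagram idea, assuming the matplotlib-venn package and a DataFrame with 'user' and 'protocol' columns; the protocol names used here are placeholders rather than the actual protocols in the dataset:

```python
# Sketch of the Venn-diagram idea; requires the matplotlib-venn package.
# Protocol names below are placeholders, not the actual protocols in the dataset.
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib_venn import venn2

loans = pd.read_parquet("loans.parquet")  # assumed local copy of the data

users_a = set(loans.loc[loans["protocol"] == "protocol_a", "user"])
users_b = set(loans.loc[loans["protocol"] == "protocol_b", "user"])

venn2([users_a, users_b], set_labels=("Protocol A", "Protocol B"))
plt.title("Users active on each lending protocol")
plt.show()
```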
I am applying to this issue via OnlyDust platform.

My background and how it can be leveraged
I have experience in Python, data analysis, and blockchain protocols. I've worked with datasets in Jupyter notebooks, performing behavior analysis and creating visualizations. My background in DeFi and lending platforms makes me well-suited for this task.

How I plan on tackling this issue
I would first create a flexible data loader to handle both Google Storage and local databases. Then, I'd analyze user behavior by visualizing data across protocols and answering key questions with Venn diagrams and token-specific graphs. I'd ensure the code is well-documented and capable of answering additional hypotheses.
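One possible shape for such a flexible loader, sketched under the assumption of a local SQLite database and a hypothetical GCS path and table name; the real sources may differ:

```python
# Possible shape of the flexible loader described above. The GCS path, database
# file, and table name are hypothetical placeholders.
import sqlite3
import pandas as pd

def load_loans(source: str = "gcs") -> pd.DataFrame:
    """Load loan data from Google Cloud Storage or a local database."""
    if source == "gcs":
        # Requires gcsfs so pandas can resolve gs:// URLs.
        return pd.read_parquet("gs://example-bucket/lending/loans.parquet")
    if source == "local":
        with sqlite3.connect("loans.db") as conn:
            return pd.read_sql("SELECT * FROM loans", conn)
    raise ValueError(f"Unknown source: {source!r}")

loans = load_loans("local")
```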
I am applying to this issue via OnlyDust platform.

My background and how it can be leveraged
Hi, I am a blockchain developer with experience in Cairo, JavaScript, TypeScript, Solidity, CSS, HTML, etc. I am an active contributor here on OnlyDust. This is my first time contributing to this repo. Please assign me, I am ready to work.

How I plan on tackling this issue
I intend to approach the issue by carrying out the following:
I am applying to this issue via OnlyDust platform.

My background and how it can be leveraged
I have experience in building modular and scalable systems that handle data efficiently. I have worked extensively with APIs, databases, and data visualization libraries, allowing me to approach this problem with a solid foundation in both back-end development and data analysis. My background in both front-end and back-end development will enable me to handle the data-loading part flexibly and create meaningful visualizations to answer key questions.

How I plan on tackling this issue
Implementation Plan:

Use Pandas to load the data from Google Storage or the local database (e.g., PostgreSQL).

```python
def load_data(source="google", protocol="zklend"):
    ...  # loading logic not shown in the original comment

# Load zkLend data
loans_data = load_data(source="google", protocol="zklend")
```

Aggregate the data based on users, protocols, and their collateral and debt. Example:

```python
# Aggregate zkLend data
aggregated_data = aggregate_user_data(loans_data)
```

Use the aggregated data to group users by the number of protocols they interact with.

```python
import matplotlib.pyplot as plt

# Count users by number of protocols
user_protocol_count = aggregated_data.groupby('user').protocol.nunique()

# Visualize the number of protocols used by users
protocol_count_distribution = user_protocol_count.value_counts()
```

Identify users who interact with different protocols (e.g., zkLend, another protocol).

```python
from matplotlib_venn import venn2  # import added; not in the original comment

# For simplicity, assume we have user sets for zkLend and another protocol
users_zklend = set(aggregated_data[aggregated_data['protocol'] == 'zklend'].user)
users_other_protocol = set(aggregated_data[aggregated_data['protocol'] != 'zklend'].user)  # assumed definition

# Create a Venn diagram
venn2([users_zklend, users_other_protocol], set_labels=("zkLend", "Other Protocol"))
```

```python
# Filter users with at least $10k USD worth of capital (collateral + debt)
high_capital_users = aggregated_data[aggregated_data['collateral_amount'] + aggregated_data['debt_amount'] >= 10000]

# Visualize capital distribution (aggregation spec truncated in the original; sums assumed)
high_capital_users.groupby('protocol').agg({'collateral_amount': 'sum', 'debt_amount': 'sum'})
```

Group data by both token and protocol.

```python
# Group by token and protocol (aggregation spec truncated in the original; sums assumed)
token_data = aggregated_data.groupby(['token', 'protocol']).agg({'collateral_amount': 'sum', 'debt_amount': 'sum'})

# Visualize capital distribution per token across protocols
# (reset_index so pivot sees the grouping keys as columns)
token_data.reset_index().pivot(index='token', columns='protocol', values='collateral_amount').plot(kind='bar', stacked=True, title="Capital by Token Across Protocols")
```
I am applying to this issue via OnlyDust platform.

My background and how it can be leveraged
I am a Python dev, and I am also working on many blockchain projects; in general I am looking to diversify my portfolio.

How I plan on tackling this issue
Data Loading: Load the Parquet file into a Jupyter notebook using Pandas, ensuring flexibility for local or cloud data sources.
Data Preprocessing: Clean and filter key columns like user ID, protocol, collateral, debt, and tokens.
User Behavior Visualization: Calculate how many users interact with one or multiple protocols. Use bar charts and Venn diagrams to visualize liquidity and borrowing behavior.
Advanced Analysis: Analyze staked/borrowed capital distribution across protocols and visualize it by token type.
Additional Insights: Explore additional metrics like protocol popularity and document findings.
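As an illustration of the "one vs. multiple protocols" count, here is a short sketch assuming 'user' and 'protocol' columns (hypothetical names) in the loaded DataFrame:

```python
# Rough sketch of the "one vs. multiple protocols" count and its bar chart.
# Column names 'user' and 'protocol' are assumptions about the dataset.
import matplotlib.pyplot as plt
import pandas as pd

loans = pd.read_parquet("loans.parquet")  # assumed local copy of the data

protocols_per_user = loans.groupby("user")["protocol"].nunique()
distribution = protocols_per_user.value_counts().sort_index()

distribution.plot(kind="bar", title="Users by number of protocols used")
plt.xlabel("Number of protocols")
plt.ylabel("Number of users")
plt.tight_layout()
plt.show()
```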
I am a Python developer with experience working on blockchain projects, aiming to broaden my portfolio.

Approach to the Issue
Data Loading: Use Pandas to load the Parquet file into Jupyter for local or cloud analysis.
this looks awesome tbvh
I am applying to this issue via OnlyDust platform.

My background and how it can be leveraged
Hi, I am a backend developer and I'd like to take this task.
I am applying to this issue via OnlyDust platform.

My background and how it can be leveraged
Hi, I'm a developer with experience in Starknet; I have worked closely with blockchain and web3 technologies.
"Analyze user behavior across different lending protocols.
Steps:
Definition of Done
The code functions well and is documented; the analysis provides meaningful outputs and answers the questions from the setup.