Embedded systems master. Start in Q2, or possibly start the Literature Survey earlier.
On track for Q1: finish all course work + survey. Then start the thesis full-time in Q2, ~Nov.
From the DroidBot-GPT paper: "We evaluate DroidBot-GPT with a self-created dataset that contains 33 tasks collected from 17 Android applications spanning 10 categories. It can successfully complete 39.39% of the tasks, and the average partial completion progress is about 66.76%."
EDIT2: lots of activity on federated learning and swarm learning. Enough for a "literature survey". Keep a broader scope than the thesis; it gives a broad understanding before doing the deep dive for the thesis.
A literature survey always has one essential taxonomy table: the taxonomy in a single glance, a systematic overview of past years of innovation, key milestones identified, and scientific grounding with one or more scientific articles per entry/line/milestone. Table with overview and literature; see the brilliant With Honours example:
Very concrete possible thesis direction. Starting point: take the proof-of-principle implementation of a Web3 recommender with full decentralisation and basic Monte Carlo machine learning. Replace the recommender with a TFLite model, add hardware acceleration, tune the memory usage toward huge trust graphs, tune the algorithm for giant content databases, and make it "industrial-proof". Actually deploy to the Google Play store and watch the crash reports. Starting point: "Web3Recommend: Decentralised recommendations with trust and relevance" by Delft student Rohan. His repo: https://github.com/rmadhwal/trustchain-superapp/tree/TrustedRecommendations
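As a rough illustration of the Monte-Carlo-over-a-trust-graph starting point (a toy sketch: the graph, the likes, and the scoring rule here are made up for illustration, not Web3Recommend's actual algorithm):

```python
import random
from collections import Counter

# Hypothetical toy trust graph: peer -> list of trusted peers (uniform weights).
trust_graph = {
    "alice": ["bob", "carol"],
    "bob": ["carol", "dave"],
    "carol": ["dave"],
    "dave": ["alice"],
}

# Items each peer has liked (placeholder data).
likes = {
    "alice": {"song1"},
    "bob": {"song2", "song3"},
    "carol": {"song3"},
    "dave": {"song4"},
}

def recommend(start, walks=1000, max_hops=3, seed=42):
    """Score items by how often Monte Carlo random walks over the
    trust graph land on peers who like them."""
    rng = random.Random(seed)
    scores = Counter()
    for _ in range(walks):
        node = start
        for _ in range(max_hops):
            peers = trust_graph.get(node, [])
            if not peers:
                break
            node = rng.choice(peers)
            for item in likes.get(node, ()):
                if item not in likes[start]:  # don't recommend what you already like
                    scores[item] += 1
    return [item for item, _ in scores.most_common()]

print(recommend("alice"))
```

Scaling exactly this kind of walk to huge trust graphs and giant content databases is where the memory and algorithm tuning work would come in.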
Facebook's infinite-scroll AI or TikTok's addiction ML has tons of related work and libraries; you can start from scratch using federated learning. {But prefer the latest science: self-supervised, self-organising learning.}
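A minimal sketch of the federated-averaging idea mentioned above (a toy single-weight linear model; all client data and hyperparameters are placeholders, not any production recipe):

```python
# Minimal federated averaging (FedAvg) on a toy linear model y = w * x.
# Clients train locally on their own data; only weights leave the device.

def local_sgd(w, data, lr=0.05, epochs=20):
    """One client's local training: plain SGD on squared error."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def fedavg_round(w_global, client_datasets):
    """Each client trains from the global weight; the server averages the results."""
    local_weights = [local_sgd(w_global, d) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

# Three clients whose local shards all follow the ground truth y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(1.5, 4.5), (2.5, 7.5)],
    [(0.5, 1.5), (3.0, 9.0)],
]
w = 0.0
for _ in range(10):
    w = fedavg_round(w, clients)
print(round(w, 2))
```

The decentralised variants (gossip, swarm) replace the central averaging server with peer-to-peer exchange, which is where the survey material gets interesting.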
brainstorm placeholder
Likes Rust and C coding. Completed machine learning; evolutionary algorithms ongoing {not deep learning}. Interested in decentralised machine learning. See our Delft/EPFL work: MoDeST: Bridging the Gap between Federated and Decentralized Learning with Decentralized Sampling. Master thesis on arXiv: G-Rank: Unsupervised Continuous Learn-to-Rank for Edge Devices in a P2P Network. CODE: https://github.com/awrgold/G-Rank (Jupyter Notebook file)
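A toy sketch of the continuous learn-to-rank-from-implicit-feedback idea behind G-Rank (the update rule, item names, and learning rate here are illustrative placeholders, not the thesis's actual algorithm):

```python
# Unsupervised continuous learn-to-rank from click feedback (toy sketch).
# No labels: the only signal is which shown item the user clicked.

def rank(scores):
    """Return item ids ordered by current score, best first."""
    return sorted(scores, key=scores.get, reverse=True)

def click_update(scores, clicked, shown, lr=0.1):
    """Reward the clicked item and slightly penalise items that were
    shown above it but skipped -- a pairwise implicit-feedback update."""
    for item in shown:
        if item == clicked:
            scores[item] += lr
            break
        scores[item] -= lr  # skipped while ranked higher than the click
    return scores

scores = {"a": 0.0, "b": 0.0, "c": 0.0}
# Simulate a user who always clicks "c" in the shown ranking.
for _ in range(5):
    shown = rank(scores)
    scores = click_update(scores, "c", shown)
print(rank(scores))
```

On an edge device this loop would run continuously on-device, with gossip between peers replacing any central training step.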
Google Scholar search for anything related to "LLM Android".
DroidBot-GPT: GPT-powered UI Automation for Android. Conclusion after 2 minutes of effort: nobody is combining LLMs with Android yet. Homework for the reader: "nanoGPT Android".
EDIT: another potential direction to explore, hardware-accelerated machine learning on Android https://towardsdatascience.com/gpu-accelerated-machine-learning-in-your-mobile-applications-using-the-android-ndk-vulkan-kompute-1e9da37b7617
update: https://www.economist.com/leaders/2023/05/11/what-does-a-leaked-google-memo-reveal-about-the-future-of-ai
update2: BeyondFederated by master student Quinten:
update3: The first month of thesis time is wisely spent actively {e.g. coding & reading} exploring 2 different thesis directions; you may end up inspired for your 3rd direction. {Decentralised federated: hardware acceleration, tune stochastic gradient descent for infinite scalability, focus specifically on decentralised content recommendation, or learning-rate and parameter tuning magic}
Update4: the earliest work dates from 2011: Gossip Learning with Linear Models on Fully Distributed Data.
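The 2011 gossip-learning recipe can be sketched roughly like this (toy linear model; the topology, data shards, and step size are made-up placeholders, not the paper's exact protocol): each peer repeatedly pushes its model to a random neighbour, and the receiver merges it with its own by averaging before taking a local SGD step.

```python
import random

def sgd_step(w, x, y, lr=0.1):
    """One SGD step on squared error for the linear model y = w * x."""
    return w - lr * 2 * (w * x - y) * x

def gossip_round(models, data, rng):
    """One round: every peer pushes its model to a random other peer,
    which averages it with its own and updates on a local sample."""
    n = len(models)
    for sender in range(n):
        receiver = rng.randrange(n)
        if receiver == sender:
            continue
        merged = (models[sender] + models[receiver]) / 2  # model merge
        x, y = rng.choice(data[receiver])                 # local data only
        models[receiver] = sgd_step(merged, x, y)
    return models

rng = random.Random(0)
# Every peer holds a shard drawn from the same ground truth y = 2x.
data = [[(1.0, 2.0), (2.0, 4.0)] for _ in range(5)]
models = [0.0] * 5
for _ in range(200):
    models = gossip_round(models, data, rng)
print([round(w, 1) for w in models])
```

No server, no synchronisation: models converge by diffusion through the overlay, which is what makes the 2011 paper the natural starting milestone for the taxonomy table.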