abideenml/README.md

ML engineer adept at LLM pretraining, fine-tuning, RLHF, RAG, and agentic workflows.

🔬 Recent Open-Source Projects

  • llm.pth - Hackable PyTorch implementations of autoregressive models (Llama, Mixtral, Gemma, DeepSeek), research papers (CoPE, YaRN, MoD, MoME, MLA), and techniques (SFT, DPO, KTO, IPO).
  • AutoSynth - Automatically create synthetic data with your own LLMs using SOTA techniques (Self-Instruct, Magpie, AgentInstruct, Arena Learning, Genstruct, Instruction Synthesizer, Self-Curation).
  • Llama3.1-SyntheticDataPipeline - Implementation of the Llama 3.1 synthetic data pipeline using LangGraph, Groq, Pytest, and Black.
  • LightAgents - A wrapper-free agents library with RAG, function calling, JSON mode, telemetry, and multi-layer memory.
  • llama3.cuda - An implementation of Llama 3.1 in pure C/CUDA, consisting of SwiGLU, RoPE, CSE, RMSNorm, and GQA kernels.

💻 Recent Work Projects

  • Elemental Compute - Implemented a self-optimizing multimodal pipeline with RAG, agentic workflows, and open-source models, using LLM-as-a-Judge and Mixture of Agents. Managed 30+ GPUs for multi-node inference of the entire multimodal pipeline, consisting of Llama-3.1 70B, Phi-3-medium-128k-instruct, LLaVA-NeXT 8B, and SDXL-Lightning.
  • John Snow Labs - Released a series of JSL-MedX LLMs (3B, 7B, 8B, and 70B) for the healthcare domain. JSL-MedX models are ranked No. 1 on the Open Medical Leaderboard across all parameter-size variants.
  • QueryLoopAi - Pre-trained a 500M-parameter SLM from scratch on a carefully curated, high-quality 15B-token synthetic dataset. Built the entire training and evaluation pipeline and managed training on 8×A100s. Created Kendrick, a mixture-of-experts model with 32k experts and multi-head latent attention.

๐Ÿ“ Recent Writing

View the archives (42 posts) @ zain.com.


LinkedIn · Medium · Discord · Twitter · Substack

Pinned

  1. llm.pth - Implementation of various autoregressive models, research papers, and techniques. The main aim is to write clean, modular, wrapper-free implementations. (Python)

  2. LightAgents - A lightweight agents library with RAG, function calling, JSON mode, telemetry, and multi-layer memory. (Python)

  3. llama3.cuda - An implementation of Llama 3.1 in pure C/CUDA. (CUDA)

  4. AutoSynth - Automatically create synthetic data with your own LLMs using SOTA techniques (Self-Instruct, Magpie, AgentInstruct, Arena Learning, Genstruct, Instruction Synthesizer, Self-Curation). (Jupyter Notebook)

  5. Llama3.1-SyntheticDataPipeline - Implementation of the Llama 3.1 synthetic data pipeline using LangGraph, Groq, Pytest, and Black. Paper: https://arxiv.org/abs/2407.21783 (Jupyter Notebook)

  6. Kedro-MLops-pipeline - Churn prediction with Kedro, Kedro-Viz, and Kedro-MLflow ❄️ 👨. PowerBI dashboard 📊 also included. Kedro 🔗 https://kedro.org/ (Jupyter Notebook)