An open-source real-time face and hand landmark detection app built with MediaPipe in a Next.js application
On-device machine learning for everyone
Delight your customers with innovative machine learning features. MediaPipe contains everything that you need to customize and deploy to mobile (Android, iOS), web, desktop, edge devices, and IoT, effortlessly.
- Clone the repository
git clone https://github.com/shahriarshafin/face-hand-tracker.git
- Change the working directory
cd face-hand-tracker
- Install dependencies
npm install # or, yarn install
- Run the app in development mode
npm run dev # or, yarn dev
That's all! Now open http://localhost:3000 in your browser to see the app.
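Under the hood, the app boils down to a client-side React component that streams the webcam into MediaPipe's face and hand landmarkers and plots the returned landmarks on a canvas. The repository's actual implementation may differ; the sketch below is a minimal, illustrative version that assumes the @mediapipe/tasks-vision package, with the component name, CDN path, and model URLs taken from the MediaPipe documentation rather than from this codebase.

```tsx
"use client"; // Next.js App Router client component (assumption about project layout)

import { useEffect, useRef } from "react";
import {
  FaceLandmarker,
  FilesetResolver,
  HandLandmarker,
} from "@mediapipe/tasks-vision";

export default function FaceHandTracker() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    let rafId = 0;
    let stopped = false;

    const run = async () => {
      // Load the WASM runtime shared by all vision tasks.
      const vision = await FilesetResolver.forVisionTasks(
        "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
      );

      // Create both landmarkers in VIDEO mode (model URLs per MediaPipe docs).
      const faceLandmarker = await FaceLandmarker.createFromOptions(vision, {
        baseOptions: {
          modelAssetPath:
            "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task",
        },
        runningMode: "VIDEO",
        numFaces: 1,
      });
      const handLandmarker = await HandLandmarker.createFromOptions(vision, {
        baseOptions: {
          modelAssetPath:
            "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task",
        },
        runningMode: "VIDEO",
        numHands: 2,
      });

      // Start the webcam and mirror each frame onto the canvas.
      const video = videoRef.current!;
      video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
      await video.play();

      const canvas = canvasRef.current!;
      const ctx = canvas.getContext("2d")!;

      const render = () => {
        if (stopped) return;
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        ctx.drawImage(video, 0, 0);

        // Run both detectors on the current frame.
        const now = performance.now();
        const faces = faceLandmarker.detectForVideo(video, now);
        const hands = handLandmarker.detectForVideo(video, now);

        // Landmarks are normalized to [0, 1]; scale to pixels and plot dots.
        ctx.fillStyle = "lime";
        for (const lm of [...faces.faceLandmarks, ...hands.landmarks].flat()) {
          ctx.fillRect(lm.x * canvas.width, lm.y * canvas.height, 2, 2);
        }

        rafId = requestAnimationFrame(render);
      };
      rafId = requestAnimationFrame(render);
    };

    run();
    return () => {
      stopped = true;
      cancelAnimationFrame(rafId);
    };
  }, []);

  return (
    <div>
      <video ref={videoRef} style={{ display: "none" }} playsInline muted />
      <canvas ref={canvasRef} />
    </div>
  );
}
```

Rendering to a single canvas keeps the per-frame work to one drawImage call plus point plotting, which is what keeps the detection loop real-time in the browser.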
You can get started with MediaPipe Solutions by checking out any of the developer guides for vision, text, and audio tasks. If you need help setting up a development environment for use with MediaPipe Tasks, check out the setup guides for Android, iOS, Web apps, and Python.
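For a web app like this one, the setup guide amounts to installing a single package (the package name below is taken from the current MediaPipe Tasks documentation, not from this repository's package.json):
npm install @mediapipe/tasks-vision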
MediaPipe Solutions provides a suite of libraries and tools for you to quickly apply artificial intelligence (AI) and machine learning (ML) techniques in your applications. You can plug these solutions into your applications immediately, customize them to your needs, and use them across multiple development platforms. MediaPipe Solutions is part of the MediaPipe open source project, so you can further customize the solutions code to meet your application needs.
These libraries and resources provide the core functionality for each MediaPipe Solution:
- MediaPipe Tasks: Cross-platform APIs and libraries for deploying solutions.
- MediaPipe models: Pre-trained, ready-to-run models for use with each solution.
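As an illustration of the pre-trained models above, the sketch below loads the ready-to-run hand landmarker model through the Tasks web API and runs it on a single image. The package name, CDN path, and model URL are taken from the MediaPipe documentation, not from this repository, so treat them as assumptions to verify against the current docs.

```ts
// Minimal sketch (not this repo's code): run the pre-trained hand landmarker
// on one image with the MediaPipe Tasks web API.
import { FilesetResolver, HandLandmarker } from "@mediapipe/tasks-vision";

async function detectHands(image: HTMLImageElement) {
  // Load the WASM runtime for vision tasks (CDN path per MediaPipe docs).
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
  );

  // Point the task at the hosted, pre-trained .task model file.
  const handLandmarker = await HandLandmarker.createFromOptions(vision, {
    baseOptions: {
      modelAssetPath:
        "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task",
    },
    runningMode: "IMAGE",
    numHands: 2,
  });

  // One detection call; each detected hand yields 21 normalized landmarks.
  const result = handLandmarker.detect(image);
  console.log(result.landmarks, result.handedness);
}
```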
These tools let you customize and evaluate solutions:
- MediaPipe Model Maker: Customize models for solutions with your data.
- MediaPipe Studio: Visualize, evaluate, and benchmark solutions in your browser.
This project is licensed under the MIT License - see the LICENSE file for details.