- Skills: PyAutoGUI, OpenCV, CV2, MediaPipe
- Progress: In Progress
This project is a set of computer vision scripts activated by hand gestures. A gesture toggles between two modes: one where I can control settings such as brightness, volume, and playback, and one where I can control the mouse, dragging and clicking as needed.
The scripts use PyAutoGUI to simulate keyboard and mouse input, and OpenCV together with MediaPipe to detect the gestures from the camera feed.
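The snippet below is a minimal sketch of how such a loop could fit together, not the project's actual code. It assumes an open-palm gesture toggles between the two modes, the index-finger tip drives the cursor in mouse mode, and simple finger counts map to volume keys in settings mode; the gesture vocabulary, thresholds, and helper names (`fingers_up`) are illustrative assumptions.

```python
import time

import cv2
import mediapipe as mp
import pyautogui

mp_hands = mp.solutions.hands

SCREEN_W, SCREEN_H = pyautogui.size()
MOUSE_MODE = "mouse"
SETTINGS_MODE = "settings"


def fingers_up(landmarks):
    """Count how many of the four non-thumb fingers are extended.

    A finger counts as extended when its tip sits above its PIP joint
    (smaller y in image coordinates), which assumes an upright hand.
    """
    tips = [8, 12, 16, 20]   # index, middle, ring, pinky tips
    pips = [6, 10, 14, 18]   # corresponding PIP joints
    return sum(landmarks[t].y < landmarks[p].y for t, p in zip(tips, pips))


def main():
    cap = cv2.VideoCapture(0)
    mode = SETTINGS_MODE
    last_toggle = 0.0

    with mp_hands.Hands(max_num_hands=1,
                        min_detection_confidence=0.7,
                        min_tracking_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.flip(frame, 1)  # mirror so movement feels natural
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

            if results.multi_hand_landmarks:
                lm = results.multi_hand_landmarks[0].landmark
                count = fingers_up(lm)

                # Open palm (all four fingers up) toggles the mode,
                # debounced so one gesture does not flip it repeatedly.
                if count == 4 and time.time() - last_toggle > 1.0:
                    mode = MOUSE_MODE if mode == SETTINGS_MODE else SETTINGS_MODE
                    last_toggle = time.time()
                elif mode == MOUSE_MODE:
                    # Map the normalized index-tip position to screen pixels.
                    tip = lm[mp_hands.HandLandmark.INDEX_FINGER_TIP]
                    pyautogui.moveTo(tip.x * SCREEN_W, tip.y * SCREEN_H)
                    if count == 2:          # index + middle up: click
                        pyautogui.click()
                elif mode == SETTINGS_MODE:
                    if count == 1:          # one finger up: volume up
                        pyautogui.press("volumeup")
                    elif count == 0:        # fist: volume down
                        pyautogui.press("volumedown")

            cv2.putText(frame, f"mode: {mode}", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            cv2.imshow("gesture control", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```

The one-second debounce on the toggle gesture and the mirrored frame are design choices worth keeping in any variant: without the debounce a held open palm flips the mode every frame, and without mirroring the cursor moves opposite to the hand.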