It's all downhill from here
Fix docs/resources.md links

Add information to README.md

Add a headtracker script

Add README.md

Add gitignore

Add gitignore

Add mediapipe links to resources.md

Removed venv
RiscadoA committed Sep 28, 2022
0 parents commit 6dda712
Showing 7 changed files with 802 additions and 0 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -0,0 +1 @@
venv/
674 changes: 674 additions & 0 deletions LICENSE

Large diffs are not rendered by default.

17 changes: 17 additions & 0 deletions README.md
@@ -0,0 +1,17 @@
# HS Expressions

This project aims to build a robotic head that can reproduce human emotions
and expressions.

## Features

The robot is controlled by a *Raspberry Pi 4 Model B* running *NixOS*. *NixOS*
was chosen because it lets us configure the *Raspberry Pi* declaratively (and
why not?).

TODO

## Team

TODO

14 changes: 14 additions & 0 deletions docs/resources.md
@@ -0,0 +1,14 @@
# Resources

## Hardware

- [Servos and Raspberry Pi](https://tutorials-raspberrypi.com/raspberry-pi-servo-motor-control/).
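The servo tutorial linked above boils down to driving the servo's signal pin with a 50 Hz PWM wave whose pulse width encodes the target angle. A minimal sketch of the angle-to-duty-cycle math (the 0.5–2.5 ms pulse range and the GPIO pin number are assumptions — check the servo's datasheet):

```python
def angle_to_duty(angle, freq_hz=50.0, min_pulse_ms=0.5, max_pulse_ms=2.5):
    """Map a servo angle in [0, 180] degrees to a PWM duty-cycle percentage.

    The 0.5-2.5 ms pulse range is an assumption; many servos use 1.0-2.0 ms.
    """
    # Linearly interpolate the pulse width, then express it as a
    # percentage of the PWM period (20 ms at 50 Hz).
    pulse_ms = min_pulse_ms + (max_pulse_ms - min_pulse_ms) * angle / 180.0
    period_ms = 1000.0 / freq_hz
    return 100.0 * pulse_ms / period_ms

# On the Pi itself this would feed into RPi.GPIO, roughly:
#   import RPi.GPIO as GPIO
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup(18, GPIO.OUT)        # BCM pin 18 is an assumption
#   pwm = GPIO.PWM(18, 50)          # 50 Hz servo signal
#   pwm.start(angle_to_duty(90))    # center the servo
```

With these defaults, 0° maps to 2.5 % duty, 90° to 7.5 %, and 180° to 12.5 %.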

## Software

- [NixOS on Raspberry Pi](https://nixos.wiki/wiki/NixOS_on_ARM/Raspberry_Pi).
- [Real-time eye tracking using OpenCV and Dlib](https://towardsdatascience.com/real-time-eye-tracking-using-opencv-and-dlib-b504ca724ac6).
- [CVZone](https://github.com/cvzone/cvzone).
- [MediaPipe Face Mesh](https://google.github.io/mediapipe/solutions/face_mesh#resources).
- [Face Mesh Explained](https://www.assemblyai.com/blog/mediapipe-for-dummies/).
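MediaPipe's Face Mesh reports each landmark with x and y normalized to [0, 1] relative to the frame, so acting on a landmark (drawing it, or eventually driving a servo toward it) means scaling back to pixels first. A small helper for that (the function name is ours, not part of the MediaPipe API):

```python
def landmark_to_pixels(x_norm, y_norm, frame_w, frame_h):
    """Convert a normalized MediaPipe landmark (x, y in [0, 1]) to pixel coords."""
    # Scale each normalized coordinate by the frame size and round
    # to the nearest whole pixel.
    return (int(round(x_norm * frame_w)), int(round(y_norm * frame_h)))
```

For a 640×480 frame, a landmark at (0.5, 0.25) lands at pixel (320, 120).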

20 changes: 20 additions & 0 deletions facetracking/README.md
@@ -0,0 +1,20 @@
### Installation

1. Clone the repo
```sh
git clone https://github.com/HackerSchool/hs-expressions
```
2. Create a virtual environment
```sh
cd ./hs-expressions/facetracking
python3 -m venv venv
source ./venv/bin/activate
```
3. Install requirements
```sh
pip install -r requirements.txt
```
4. Start the script (press ESC to exit)
```sh
python3 ./facetracker.py
```
59 changes: 59 additions & 0 deletions facetracking/facetracker.py
@@ -0,0 +1,59 @@
import cv2
import mediapipe as mp
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_face_mesh = mp.solutions.face_mesh


# For webcam input:
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
cap = cv2.VideoCapture(0)
with mp_face_mesh.FaceMesh(
    max_num_faces=1,
    refine_landmarks=True,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5) as face_mesh:
  while cap.isOpened():
    success, image = cap.read()
    if not success:
      print("Ignoring empty camera frame.")
      # If loading a video, use 'break' instead of 'continue'.
      continue

    # To improve performance, optionally mark the image as not writeable to
    # pass by reference.
    image.flags.writeable = False
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    results = face_mesh.process(image)

    # Draw the face mesh annotations on the image.
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    if results.multi_face_landmarks:
      for face_landmarks in results.multi_face_landmarks:
        mp_drawing.draw_landmarks(
            image=image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_TESSELATION,
            landmark_drawing_spec=None,
            connection_drawing_spec=mp_drawing_styles
            .get_default_face_mesh_tesselation_style())
        mp_drawing.draw_landmarks(
            image=image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_CONTOURS,
            landmark_drawing_spec=None,
            connection_drawing_spec=mp_drawing_styles
            .get_default_face_mesh_contours_style())
        mp_drawing.draw_landmarks(
            image=image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_IRISES,
            landmark_drawing_spec=None,
            connection_drawing_spec=mp_drawing_styles
            .get_default_face_mesh_iris_connections_style())
    # Flip the image horizontally for a selfie-view display.
    cv2.imshow('MediaPipe Face Mesh', cv2.flip(image, 1))
    if cv2.waitKey(5) & 0xFF == 27:
      break
cap.release()
17 changes: 17 additions & 0 deletions facetracking/requirements.txt
@@ -0,0 +1,17 @@
absl-py==1.2.0
attrs==22.1.0
contourpy==1.0.5
cycler==0.11.0
fonttools==4.37.2
kiwisolver==1.4.4
matplotlib==3.6.0
mediapipe==0.8.11
numpy==1.23.3
opencv-contrib-python==4.6.0.66
opencv-python==4.6.0.66
packaging==21.3
Pillow==9.2.0
protobuf==3.20.2
pyparsing==3.0.9
python-dateutil==2.8.2
six==1.16.0
