
API and Features

Gancheng Zhu edited this page Sep 8, 2025 · 5 revisions

All Python and C++ APIs can be found here.

1 Modify Pupil.IO Behavior via the Configuration Class

1.1 Initialize the DefaultConfig Class

By default, the eye-tracking component will run with the built-in configuration.

```python
import pupilio

# use a custom config object to control the tracker
config = pupilio.DefaultConfig()

pupil_io = pupilio.Pupilio(config)
```

1.2 Customizing Calibration Images, Sounds, and Language Resources

Change the image and audio paths:

```python
config.cali_target_img = "cute_duck.png"
config.cali_target_beep = "duck_beep.wav"
config.cali_smiling_face_img = "cute_duck.png"
config.cali_frowning_face_img = "cute_duck.png"
```
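A missing image or sound file typically only surfaces as an error once calibration starts. A quick stdlib check before assigning the paths catches this early. This is a minimal sketch; `check_assets` is a hypothetical helper, not part of the Pupilio API, and the file names are the placeholder assets from the example above.

```python
from pathlib import Path

def check_assets(*paths):
    """Return the subset of the given asset paths that do not exist on disk."""
    return [p for p in paths if not Path(p).is_file()]

# placeholder asset names from the example above
missing = check_assets("cute_duck.png", "duck_beep.wav")
if missing:
    print("Missing calibration assets:", missing)
```

If the list is non-empty, fix the paths before constructing `pupilio.Pupilio(config)` rather than debugging a failure mid-calibration.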

Set the size range of the calibration target:

```python
config.cali_target_img_maximum_size = 120
config.cali_target_img_minimum_size = 60
```

Adjust the animation frequency of the calibration target:

```python
config.cali_target_animation_frequency = 2
```

1.3 Five-Point Calibration

The default mode uses two-point calibration. You can switch to five-point calibration:

```python
config.cali_mode = 5
```

1.4 Enable or Disable Saving of Validation Results

```python
config.enable_validation_result_saving = 1  # 1 = enable, 0 = disable
```

1.5 Gaze Simulation Mode

This mode allows the Pupilio Python package to run on any Windows computer, using the mouse as a substitute for gaze data.

```python
config.simulation_mode = True
```

1.6 Monocular Mode

```python
config.active_eye = ActiveEye.LEFT_EYE
```

or

```python
config.active_eye = ActiveEye.RIGHT_EYE
```

1.7 Disable Face Preview during Calibration

```python
config.face_previewing = 0
```

1.8 Disable Kappa Angle Verification

By default, Pupil.IO verifies the estimated kappa angle after calibration (`1` = enabled). Setting this option to `0` disables kappa verification, which is useful for participants with strabismus.

```python
config.enable_kappa_verification = 0
```

1.9 Change Calibration Language Resources

Change the language of the calibration interface instructions. The `lang` parameter supports values such as `en-US`, `fr-FR`, `zh-HK`, `es-ES`, `jp-JP`, and `ko-KR`. The default language is Simplified Chinese.

```python
config.instruction_language(lang='en-US')
```

1.10 Example Code Snippet

```python
import pupilio
from pupilio import ActiveEye  # enum for monocular mode

# use a custom config object to control the tracker
config = pupilio.DefaultConfig()

# calibration instruction language
config.instruction_language(lang='en-US')

# calibration target image and beep
config.cali_target_img = "cute_duck.png"
config.cali_target_beep = "duck_beep.wav"

# the calibration target zooms in and out; set the max and min image size here
config.cali_target_img_maximum_size = 120
config.cali_target_img_minimum_size = 60

# a cartoon face to help users adjust their head position
# recommended size: 128 x 128 pixels
config.cali_smiling_face_img = "cute_duck.png"
config.cali_frowning_face_img = "cute_duck.png"

# disable kappa angle verification
config.enable_kappa_verification = 0

# disable face preview during calibration
config.face_previewing = 0

# monocular mode
config.active_eye = ActiveEye.RIGHT_EYE

# gaze simulation mode (mouse as gaze)
config.simulation_mode = True

# enable saving of validation results (1 = enable, 0 = disable)
config.enable_validation_result_saving = 1

# five-point calibration
config.cali_mode = 5

pupil_io = pupilio.Pupilio(config)
```

2 Real-Time Face Images over a UDP Connection

Please run the UDP sender first, then run the UDP receiver.

2.1 UDP Sender

```python
from pupilio import Pupilio

pi = Pupilio()
pi.previewer_start('127.0.0.1', 8848)

# do something here
while True:
    _key = input('Type anything here; then press ENTER to stop the previewing UDP server:\n')
    if _key:
        break

# stop the previewing thread
pi.previewer_stop()

# release the tracker
pi.release()
```

2.2 UDP Receiver

```python
import socket
import numpy as np
import cv2

# open a socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# bind the receiving address (local connection in this example)
sock.bind(('127.0.0.1', 8848))

# open a CV window to show the received face images
cv2.namedWindow('Video', cv2.WINDOW_NORMAL)

while True:
    # receive one datagram (one encoded frame)
    data, addr = sock.recvfrom(1024 * 1024)

    # break when there is no data
    if not data:
        break

    # decode the datagram into a grayscale image
    np_data = np.frombuffer(data, np.uint8)
    frame = cv2.imdecode(np_data, cv2.IMREAD_GRAYSCALE)

    if frame is None:
        print("Received invalid frame.")
        continue

    # show the captured frames
    cv2.imshow('Video', frame)

    # press Q/q to exit the script
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

sock.close()
cv2.destroyAllWindows()
```
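The receiver above decodes each datagram with a single `cv2.imdecode` call, which implies the one-datagram-per-frame pattern: each UDP packet carries one complete encoded frame. That plumbing can be exercised on loopback without a tracker, a camera, or OpenCV. The sketch below is an assumption-laden illustration of the transport pattern only (plain byte strings stand in for JPEG buffers, and port 8849 is an arbitrary choice distinct from the example's 8848), not part of the Pupilio API.

```python
import socket
import threading

ADDR = ('127.0.0.1', 8849)  # arbitrary loopback port for this sketch

# bind the receiver before sending so no datagram is dropped
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(ADDR)

frames = [b'frame-%d' % i for i in range(3)]  # stand-ins for encoded frames
received = []

def drain():
    # one recvfrom per frame: each datagram is one complete "frame"
    for _ in range(len(frames)):
        data, _ = recv_sock.recvfrom(1024 * 1024)
        received.append(data)

t = threading.Thread(target=drain)
t.start()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for f in frames:
    send_sock.sendto(f, ADDR)  # one sendto per frame, mirroring the previewer
send_sock.close()

t.join()
recv_sock.close()
print(received)
```

Because UDP has no delivery guarantee, a dropped datagram simply means a skipped preview frame; on loopback, as here, loss and reordering are not a practical concern.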