Introduction to Alicia Imitation Learning System
1. Data Collection System
1.1 Library Installation
Clone the Alicia Python SDK repository:

```bash
git clone https://github.com/Xianova-Robotics/Alicia_duo_sdk.git
cd Alicia_duo_sdk
```

Install the dependencies:

```bash
pip install pyserial
conda install -c conda-forge python-orocos-kdl
pip install -r requirements.txt
```
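After installing, a quick import check from any Python shell confirms the dependencies are visible. This snippet is just an illustrative sanity check, not part of the SDK:

```python
# Sanity check: confirm the installed dependencies import correctly.
# PyKDL is the Python module provided by the python-orocos-kdl package.
import serial
import PyKDL

print("pyserial version:", serial.__version__)
print("PyKDL imported from:", PyKDL.__file__)
```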
Hardware connection: connect the robotic arm to the computer over USB before proceeding; it will enumerate as a serial device (e.g. /dev/ttyUSB0).
1.2 Data Reading
After connecting the robotic arm, run the following commands in a terminal:

```bash
cd ~/Alicia_duo_sdk/examples
python3 read_angles.py
```

If the SDK is installed and configured correctly, the terminal will continuously print the robotic arm's joint angles, gripper angle, and button states. Press Ctrl+C to exit the program.
Here is an example output:
```text
=== Robotic Arm Data Reading Example ===
2025-05-22 19:06:42,616 - SerialComm - INFO - Initializing serial communication module: Port=Auto, Baud Rate=921600
2025-05-22 19:06:42,616 - SerialComm - INFO - Debug mode: Disabled
2025-05-22 19:06:42,616 - DataParser - INFO - Initializing data parsing module
2025-05-22 19:06:42,616 - Controller - INFO - Initializing robotic arm control module
2025-05-22 19:06:42,616 - Controller - INFO - Debug mode: Disabled
2025-05-22 19:06:42,619 - SerialComm - INFO - Found 2 serial devices: /dev/ttyS0 /dev/ttyUSB0
2025-05-22 19:06:42,619 - SerialComm - INFO - Found available device: /dev/ttyUSB0
2025-05-22 19:06:42,619 - SerialComm - INFO - Connecting to port: /dev/ttyUSB0
2025-05-22 19:06:42,620 - SerialComm - INFO - Serial connection successful
2025-05-22 19:06:42,620 - Controller - INFO - Status update thread started running
2025-05-22 19:06:42,620 - Controller - INFO - Status update thread has started
Connection successful, starting to read data...
Press Ctrl+C to exit
--------------------------------------------------
Joint angles (degrees): [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Gripper angle (degrees): 0.0
Button states: False False
Joint angles (degrees): [-90.79, -12.57, 22.76, -0.09, -59.5, 1.76]
Gripper angle (degrees): 1.06
Button states: False False
Joint angles (degrees): [-90.79, -12.57, 22.76, -0.09, -59.5, 1.76]
Gripper angle (degrees): 1.06
Button states: False False
```
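Under the hood, read_angles.py talks to the arm over a plain serial link. The following pyserial snippet is a minimal illustration of that transport layer only; the port and baud rate are taken from the log above, and the Alicia frame format (which the SDK's DataParser decodes) is deliberately not interpreted here:

```python
# Minimal transport-layer illustration: open the arm's serial port at the
# baud rate shown in the example log and dump raw frames as hex. Decoding
# those frames into joint angles is the job of the SDK's DataParser.
import serial

PORT = "/dev/ttyUSB0"  # auto-detected device from the example log
BAUD = 921600          # baud rate from the example log

with serial.Serial(PORT, BAUD, timeout=1.0) as ser:
    for _ in range(10):
        raw = ser.read(64)   # read up to 64 raw bytes
        print(raw.hex(" "))  # print as space-separated hex
```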
1.3 Raw Format Data Collection
Refer to https://github.com/GZF123/SparkMind
1.4 LeRobot Format Data Collection
Refer to https://github.com/Xuanya-Robotics/lerobot
2. Imitation Learning Algorithm Examples
2.1 ACT Algorithm
Introduction
Action Chunking with Transformers (ACT) is an imitation learning policy that uses Transformer networks to process observation sequences and predict "chunks" of future actions. Unlike policies that predict actions frame by frame, ACT predicts actions for multiple time steps at once, which helps it learn more temporally coherent behavior and can improve inference efficiency.
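To make the chunking idea concrete, here is a minimal, framework-agnostic sketch. The policy is a random stub (a real ACT model is a Transformer conditioned on observations), and the chunk and action sizes are illustrative assumptions:

```python
# Conceptual sketch of action chunking: query the policy once per chunk,
# then execute the predicted actions open-loop until the next re-plan.
import numpy as np

CHUNK_SIZE = 20  # future actions predicted per policy query
ACTION_DIM = 7   # e.g. 6 joint targets + gripper (illustrative)

def policy(observation):
    """Random stub standing in for a trained ACT Transformer."""
    return np.random.randn(CHUNK_SIZE, ACTION_DIM)

observation = np.zeros(10)  # placeholder observation vector
for step in range(100):
    if step % CHUNK_SIZE == 0:          # re-plan once per chunk ...
        chunk = policy(observation)
    action = chunk[step % CHUNK_SIZE]   # ... and replay it step by step
    # send `action` to the robot here, then refresh `observation`
```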
Below is a terminal usage guide for the LeRobot ACT policy on the alicia_duo robot, covering only the two core commands: `train.py` (training) and `control_robot.py` (inference).
I. Preparation

- Dataset (prepared for alicia_duo):
  - Ensure that expert demonstration data has been collected for the alicia_duo robot using `lerobot/scripts/control_robot.py`.
  - Assumed dataset information:
    - Local repository ID: `local/alicia_duo_act_dataset`
    - Dataset root directory: `data/alicia_duo_act_episodes/alicia_duo_act_dataset`
- Environment:
  - The LeRobot framework and its dependencies are correctly installed.
  - The terminal can access the LeRobot scripts.
  - The LeRobot interface for the alicia_duo robot is configured and available.
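Before launching training, it can be worth confirming that the dataset actually loads. The sketch below assumes a recent LeRobot release; the LeRobotDataset import path and attribute names may differ slightly between versions:

```python
# Sanity-check the alicia_duo dataset before training (assumes a recent
# LeRobot release; import path and attributes may vary by version).
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset(
    repo_id="local/alicia_duo_act_dataset",
    root="data/alicia_duo_act_episodes/alicia_duo_act_dataset",
)

print("episodes:", dataset.num_episodes)
print("frames:  ", dataset.num_frames)

# Inspect one frame to see the observation/action keys the policy will use.
frame = dataset[0]
for key, value in frame.items():
    print(key, getattr(value, "shape", type(value)))
```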
II. ACT Training (Using `train.py` for alicia_duo)
```bash
python lerobot/scripts/train.py \
  --dataset.repo_id local/alicia_duo_act_dataset \
  --dataset.root data/alicia_duo_act_episodes/alicia_duo_act_dataset \
  --policy.type act \
  --output_dir outputs/train/act_alicia_duo_model \
  --job_name alicia_duo_act_training_run \
  --policy.device cuda \
  --batch_size 32 \
  --steps 200000 \
  --num_workers 4 \
  --log_freq 100 \
  --save_freq 5000 \
  --eval_freq 10000 \
  --policy.optimizer_lr 1e-4 \
  --policy.chunk_size 20 \
  --policy.n_layer 4 \
  --policy.n_head 8 \
  --policy.hidden_size 512 \
  # --- Uncomment and configure the following lines if alicia_duo uses visual input ---
  # --policy.vision_backbone resnet18_act \
  # --policy.camera_names <alicia_duo_camera_name_in_dataset> \
  # --- Add other parameters based on alicia_duo's specific configuration and ACT model requirements ---
  # --- For example: --policy.kl_weight 0.01 (if the ACT model is a VAE variant) ---
  # --- To see all available parameters, run: python lerobot/scripts/train.py --help ---
```
Key Training Parameters Explained (for alicia_duo):

- `--dataset.repo_id`: Local identifier for the alicia_duo dataset.
- `--dataset.root`: Root directory where the alicia_duo dataset files are located.
- `--policy.type act`: Required; specifies the ACT policy.
- `--output_dir`: Directory in which the training output of the alicia_duo ACT model is saved.
- `--job_name`: Name for this alicia_duo ACT training run.
- Other parameters such as `batch_size`, `steps`, `num_workers`, `optimizer_lr`, `chunk_size`, `n_layer`, `n_head`, and `hidden_size` should be tuned to the characteristics of the alicia_duo task and the available computational resources; `chunk_size` in particular sets how many future actions are predicted per forward pass (see the sketch after this list).
- If alicia_duo uses cameras, ensure `--policy.vision_backbone` and `--policy.camera_names` are correctly configured.
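A note on `chunk_size`: predicting 20 steps per query means the policy re-plans less often, and the original ACT paper additionally smooths execution by temporally ensembling overlapping chunks. The numpy sketch below illustrates that ensembling scheme conceptually; it is not LeRobot's implementation, and the policy stub and dimensions are assumptions:

```python
# Conceptual sketch of ACT-style temporal ensembling: at every step, blend
# all overlapping chunk predictions for that step with exponential weights
# w_i = exp(-K * i), where i = 0 is the oldest prediction (per the ACT paper).
import numpy as np

CHUNK_SIZE = 20  # matches --policy.chunk_size above
ACTION_DIM = 7   # illustrative: 6 joints + gripper
K = 0.01         # weighting temperature from the ACT paper

def policy(observation):
    """Random stub standing in for the trained ACT model."""
    return np.random.randn(CHUNK_SIZE, ACTION_DIM)

chunks = {}  # maps start step t0 -> chunk predicted at t0
for t in range(100):
    chunks[t] = policy(observation=None)          # predict a new chunk
    chunks = {t0: c for t0, c in chunks.items()   # drop expired chunks
              if t0 + CHUNK_SIZE > t}

    # Collect every prediction made for the current step, oldest first.
    preds = np.asarray([chunks[t0][t - t0] for t0 in sorted(chunks)])

    weights = np.exp(-K * np.arange(len(preds)))  # oldest gets weight 1
    weights /= weights.sum()
    action = weights @ preds                      # ensembled action for step t
    # send `action` to the robot here
```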
Monitoring: Training logs are printed to the terminal, and alicia_duo ACT model checkpoints are saved under `--output_dir`.
III. ACT Inference/Evaluation (Using `control_robot.py` to Control alicia_duo)

This script runs the trained ACT policy on the alicia_duo robot.
```bash
python lerobot/scripts/control_robot.py \
  --robot alicia_duo \
  --control.type record \
  --control.fps 30 \
  --control.repo_id user_or_org/eval_act_alicia_duo_model \
  --control.num_episodes 10 \
  --control.policy.path outputs/train/act_alicia_duo_model/checkpoints/<name_of_your_checkpoint_folder> \
  # --- If the alicia_duo model uses cameras and the camera configuration must be given on the command line ---
  # --robot.cameras='{"<alicia_duo_camera_name>": {"width": <width>, "height": <height>}}' \
  # --- Add other parameters based on control_robot.py and alicia_duo's specific requirements ---
  # --- For example, parameters related to the alicia_duo connection and task description ---
  # --- To see all available parameters, run: python lerobot/scripts/control_robot.py --help ---
```
Key Inference Parameters Explained (for alicia_duo):

- `--robot alicia_duo`: Required; specifies alicia_duo as the robot to control.
- `--control.type record`: Specifies the running mode; `record` is typically used to execute the policy and record the results.
- `--control.fps <int>`: Desired frequency of the alicia_duo control loop.
- `--control.repo_id <huggingface_repo_id>`: (Optional) Repository ID under which evaluation results are saved.
- `--control.num_episodes <int>`: Number of task episodes to run the policy for on alicia_duo.
- `--control.policy.path <path_to_checkpoint>`: Required; points to the directory of the trained ACT model checkpoint for alicia_duo.
- `--robot.cameras`: (Optional) Use when the alicia_duo model requires visual input and the camera settings need to be specified or overridden on the command line.
Execution: The script loads the specified alicia_duo ACT model and controls the robot, generating actions from alicia_duo's current observations.
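For reference, the core of what `control_robot.py` does with `--control.policy.path` can be sketched as follows. This assumes a recent LeRobot release: the ACTPolicy import path, the `select_action` method, and the `observation.state` key follow LeRobot conventions but should be checked against your installed version, and the observation shape is an assumption for alicia_duo:

```python
# Hedged sketch of policy inference outside control_robot.py: load the
# trained checkpoint and query one action per control step. Names follow
# recent LeRobot conventions and may differ in your installed version.
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy

checkpoint = "outputs/train/act_alicia_duo_model/checkpoints/<name_of_your_checkpoint_folder>"
policy = ACTPolicy.from_pretrained(checkpoint)
policy.eval()

# The observation dict must match the dataset's feature keys (add camera
# images if alicia_duo was trained with vision). Shapes here are assumed.
observation = {"observation.state": torch.zeros(1, 7)}

with torch.no_grad():
    action = policy.select_action(observation)  # chunking handled internally
print(action)
```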
Important Notes (for alicia_duo):

- Robot interface: Ensure that the `--robot alicia_duo` parameter makes the LeRobot framework correctly load and initialize the robot interface for alicia_duo.
- View help: Use the `--help` flag to see the detailed options of each script:

```bash
python lerobot/scripts/train.py --help
python lerobot/scripts/control_robot.py --help
```