Auto-Generated Python API Reference¶
This page is generated directly from source code docstrings using mkdocstrings.
Robot¶
threewe.Robot ¶
AI-First interface for controlling a 3we robot.
Supports three backends:

- "gazebo": Gazebo Harmonic simulation (CPU, CI, quick iteration)
- "isaac_sim": NVIDIA Isaac Sim (GPU RL training, domain randomization)
- "real": Physical robot via ROS2 (Pi 5 + ESP32-S3 + micro-ROS)
All backends return data in identical formats — code written for one backend works on all others without modification.
Source code in sdk/threewe/src/threewe/robot.py
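For orientation, a minimal usage sketch. The `Robot` constructor keyword (`backend=`) is an assumption not documented on this page; the method calls follow the listing below.

```python
from threewe import Robot

# backend= keyword is assumed; swap "gazebo" for "isaac_sim" or "real".
robot = Robot(backend="gazebo")
robot.connect()

image = robot.get_camera_image()    # (H, W, 3) uint8 RGB on every backend
pose = robot.get_pose()             # planar pose in the right-hand, X-forward frame

robot.set_velocity(0.2, 0.0, 0.0)   # vx [m/s], vy [m/s], omega [rad/s]
robot.stop()
robot.disconnect()
```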
ros2_node property ¶
Advanced: access the underlying ROS2 node for custom subscriptions.
Returns None for backends without ROS2 (mock, isaac_sim).
connect() ¶
disconnect() ¶
get_camera_image() ¶
get_rgbd_image() ¶
get_lidar_scan() ¶
get_pose() ¶
get_velocity() ¶
get_imu() ¶
get_battery_state() ¶
get_map() ¶
get_wheel_speeds() ¶
get_motor_current() ¶
get_observation(modalities=None) ¶
Get standardized observation dict for VLA/RL model ingestion.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `modalities` | `list[str] \| None` | Which sensor modalities to include. Defaults to ("image", "lidar", "pose", "velocity"). Supported: "image", "depth", "lidar", "pose", "velocity", "imu", "map". | `None` |
Source code in sdk/threewe/src/threewe/robot.py
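A short sketch of pulling a reduced observation from a connected robot; the dict keys are assumed to mirror the requested modality names.

```python
obs = robot.get_observation(modalities=["image", "lidar", "pose"])

print(obs["image"].shape, obs["image"].dtype)   # (H, W, 3) uint8
print(obs["lidar"].shape, obs["lidar"].dtype)   # (N,) float32, meters
print(obs["pose"])                              # planar pose, radians
```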
set_velocity(vx, vy, omega) ¶
Command body velocity. Must be called continuously or timeout stops motors.
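Because of the inactivity timeout, velocity commands should be streamed at a fixed rate. A sketch with a connected robot (the 20 Hz rate is illustrative, not a documented requirement):

```python
import time

for _ in range(40):                    # roughly 2 s of forward motion
    robot.set_velocity(0.3, 0.0, 0.0)  # vx, vy [m/s], omega [rad/s]
    time.sleep(0.05)                   # ~20 Hz command stream
robot.stop()
```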
set_wheel_velocities(speeds) ¶
Command individual wheel speeds via inverse kinematics.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `speeds` | `list[float]` | (4,) target wheel speeds in RPM [FL, FR, RL, RR]. Converted to body velocity via mecanum forward kinematics and sent as cmd_vel. | *required* |
Source code in sdk/threewe/src/threewe/robot.py
stop() ¶
move_to(x, y, theta=None, timeout=60.0) async ¶
Navigate to target pose using Nav2 path planning.
Source code in sdk/threewe/src/threewe/robot.py
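The navigation methods are coroutines and are driven with asyncio. A sketch using the signatures documented here, together with the other motion coroutines listed below (target coordinates are arbitrary and `robot` is a connected Robot):

```python
import asyncio

async def patrol(robot):
    await robot.move_to(1.5, 0.0, theta=0.0, timeout=60.0)  # Nav2-planned move
    await robot.rotate(1.57)                                 # quarter turn CCW
    await robot.move_forward(0.5)                            # 0.5 m straight ahead
    await robot.move_to(0.0, 0.0)                            # back toward the origin

asyncio.run(patrol(robot))
```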
move_forward(distance) async ¶
Drive straight forward by the specified distance (meters).
rotate(angle) async ¶
Rotate in place by the specified angle (radians, positive=CCW).
explore(timeout=60.0) async ¶
Autonomously explore unknown areas until map is complete or timeout.
follow_path(waypoints) async ¶
execute_action(action) ¶
Execute a policy network output action vector.
Expected shape: (3,) normalized to [-1, 1] as [vx, vy, omega]. Scaled by configured max velocities.
Source code in sdk/threewe/src/threewe/robot.py
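A minimal sketch; the hard-coded array stands in for a policy network output.

```python
import numpy as np

# Placeholder policy output: (3,) values in [-1, 1] read as [vx, vy, omega]
# and scaled to the configured max velocities by execute_action.
action = np.array([0.5, 0.0, -0.2], dtype=np.float32)
robot.execute_action(action)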
execute_instruction(instruction, mode='blocking') async ¶
Execute a natural language instruction using VLM reasoning.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `instruction` | `str` | Natural language task description. | *required* |
| `mode` | `str` | "blocking" for sequential perception-action loop, "async" for decoupled VLM inference + 20Hz control loop. | `'blocking'` |

Requires the `ai` extra: `pip install threewe[ai]`
Source code in sdk/threewe/src/threewe/robot.py
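A sketch of issuing a language instruction; it assumes the ai extra is installed and VLM credentials are configured, and the instruction text is arbitrary.

```python
import asyncio

async def main():
    result = await robot.execute_instruction(
        "drive to the charging dock and stop", mode="blocking"
    )
    print(result)   # ExecutionResult; its fields are not detailed on this page

asyncio.run(main())
```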
Types¶
threewe.types ¶
Core data types for the threewe API.
All sensor data is returned as numpy arrays with standardized shapes and dtypes. Coordinate system: right-hand, X forward, Y left, Z up. Angles in radians.
Pose2D dataclass ¶
Velocity dataclass ¶
CameraIntrinsics dataclass ¶
Pinhole camera intrinsic parameters.
Source code in sdk/threewe/src/threewe/types.py
RGBDImage dataclass ¶
RGB-D image pair with intrinsics.
Source code in sdk/threewe/src/threewe/types.py
LaserScan dataclass ¶
2D laser scan.
Source code in sdk/threewe/src/threewe/types.py
OccupancyGrid dataclass ¶
2D occupancy grid map.
Source code in sdk/threewe/src/threewe/types.py
MoveResult dataclass ¶
IMUData dataclass ¶
IMU sensor reading.
Source code in sdk/threewe/src/threewe/types.py
BatteryState dataclass ¶
ExploreResult dataclass ¶
ExecutionResult dataclass ¶
Result of a VLM instruction execution.
Source code in sdk/threewe/src/threewe/types.py
Configuration¶
threewe.config ¶
Configuration loading and validation.
load_config(name='standard_v2') ¶
Load a robot configuration by name.
Searches in order:

1. Absolute/relative path if name contains a path separator
2. Built-in configs directory (sdk/threewe/configs/)
3. Current working directory
Source code in sdk/threewe/src/threewe/config.py
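A sketch of both lookup paths; the YAML filename is hypothetical and the type of the returned config object is not documented here.

```python
from threewe.config import load_config

cfg = load_config("standard_v2")         # built-in config shipped with the SDK
custom = load_config("./my_robot.yaml")  # contains a path separator -> loaded from disk
```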
Exceptions¶
threewe.exceptions ¶
Exception hierarchy for threewe.
All exceptions inherit from ThreeweError to allow blanket catching.
ThreeweError ¶
RobotConnectionError ¶
Bases: ThreeweError
Cannot connect to the robot or simulator.
NavigationError ¶
Bases: ThreeweError
Navigation action failed (path blocked, goal unreachable).
HardwareError ¶
Bases: ThreeweError
Hardware fault detected (motor overheat, sensor disconnect).
EmergencyStopError ¶
Bases: ThreeweError
Emergency stop triggered.
RobotTimeoutError ¶
Bases: ThreeweError
Operation timed out.
SafetyError ¶
Bases: ThreeweError
Safety constraint violated (speed limit, boundary breach).
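A sketch of the intended catch pattern with a connected robot: handle specific failures first, then fall back to the ThreeweError base class for blanket catching.

```python
import asyncio
from threewe.exceptions import NavigationError, RobotTimeoutError, ThreeweError

try:
    asyncio.run(robot.move_to(2.0, 1.0, timeout=30.0))
except NavigationError as exc:
    print(f"goal unreachable or path blocked: {exc}")
except RobotTimeoutError as exc:
    print(f"navigation timed out: {exc}")
except ThreeweError as exc:
    print(f"any other SDK error: {exc}")   # blanket catch
```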
Backends¶
threewe.backends ¶
Backend Abstraction Layer — the Sim2Real consistency contract.
All backends (Gazebo, Isaac Sim, Real Hardware) implement this interface. Users interact with the Robot class which delegates to the active backend.
BackendBase ¶
Bases: ABC
Abstract base for all robot backends.
Consistency contract guarantees:

- Image: (H, W, 3) uint8 RGB
- Depth: (H, W) float32, meters, invalid=0.0
- LiDAR: (N,) float32, meters, uniform angular sampling
- Pose: right-hand, X forward, Y left, Z up, radians
- Velocity: m/s linear, rad/s angular
Source code in sdk/threewe/src/threewe/backends/__init__.py
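A small sanity check of the contract against whichever backend is active; the `.depth` attribute access on the RGBD return value is an assumption, not documented here.

```python
import numpy as np

image = robot.get_camera_image()
assert image.dtype == np.uint8 and image.ndim == 3 and image.shape[2] == 3

rgbd = robot.get_rgbd_image()
assert rgbd.depth.dtype == np.float32   # meters; invalid pixels encoded as 0.0
```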
is_connected abstractmethod property ¶
Whether the backend is currently connected.
connect() abstractmethod ¶
disconnect() abstractmethod ¶
get_camera_image() abstractmethod ¶
get_rgbd_image() abstractmethod ¶
get_lidar_scan() abstractmethod ¶
get_pose() abstractmethod ¶
get_velocity() abstractmethod ¶
get_imu() abstractmethod ¶
get_battery_state() abstractmethod ¶
get_map() abstractmethod ¶
get_wheel_speeds() abstractmethod ¶
get_motor_current() abstractmethod ¶
set_velocity(vx, vy, omega) abstractmethod ¶
Command body velocity. Must be called continuously or timeout stops motors.
stop() abstractmethod ¶
move_to(x, y, theta=None, timeout=60.0) abstractmethod async ¶
Navigate to target pose using path planning.
move_forward(distance) abstractmethod async ¶
rotate(angle) abstractmethod async ¶
explore(timeout=60.0) abstractmethod async ¶
Benchmark¶
threewe.benchmark ¶
Benchmark framework for standardized robot evaluation.
Provides reproducible evaluation tasks with standard metrics:

- PointNav: Point-to-point navigation (SPL, Success Rate)
- ObjectNav: Object-goal navigation (SPL, Success Rate)
- Exploration: Coverage exploration (Coverage %, Duration)

Usage:

    threewe benchmark run --task pointnav --episodes 100 --backend gazebo
    threewe benchmark compare --result result.json --baseline nav2_pointnav_office
LeaderboardEntry dataclass ¶
A single leaderboard submission entry.
Source code in sdk/threewe/src/threewe/benchmark/leaderboard.py
ObjectNavEpisodeConfig dataclass ¶
Configuration for a single ObjectNav episode.
Source code in sdk/threewe/src/threewe/benchmark/objectnav_runner.py
BenchmarkRunner ¶
Runs standardized benchmark episodes on the robot.
Source code in sdk/threewe/src/threewe/benchmark/runner.py
run_pointnav(num_episodes=100, seed=42) async ¶
Run point-to-point navigation benchmark using scene poses.
Source code in sdk/threewe/src/threewe/benchmark/runner.py
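A sketch of driving the runner from Python rather than the CLI. The constructor argument is an assumption; only the run_* coroutines are documented here.

```python
import asyncio
from threewe.benchmark import BenchmarkRunner

runner = BenchmarkRunner(robot)   # constructor signature assumed
result = asyncio.run(runner.run_pointnav(num_episodes=10, seed=42))
```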
run_exploration(num_episodes=10, timeout_per_episode=120.0) async ¶
Run exploration coverage benchmark.
Source code in sdk/threewe/src/threewe/benchmark/runner.py
run_objectnav(num_episodes=100, seed=42) async ¶
Run object-goal navigation benchmark using scene poses.
Source code in sdk/threewe/src/threewe/benchmark/runner.py
compare_to_baseline(baseline_name, success_rate, spl, avg_duration, coverage=0.0) ¶
Compare new results to a stored baseline.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `baseline_name` | `str` | Key in BASELINES dict. | *required* |
| `success_rate` | `float` | New success rate. | *required* |
| `spl` | `float` | New SPL value. | *required* |
| `avg_duration` | `float` | New average duration. | *required* |
| `coverage` | `float` | New coverage (for exploration tasks). | `0.0` |

Returns:

| Type | Description |
|---|---|
| `ComparisonResult` | ComparisonResult with deltas and improvement flag. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If baseline name is unknown. |
Source code in sdk/threewe/src/threewe/benchmark/baselines.py
list_baselines() ¶
rank_entries(entries, metric='spl') ¶
Rank leaderboard entries by a given metric (descending).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `entries` | `list[LeaderboardEntry]` | List of entries to rank. | *required* |
| `metric` | `str` | Metric to sort by ("spl", "success_rate"). | `'spl'` |

Returns:

| Type | Description |
|---|---|
| `list[LeaderboardEntry]` | Sorted list from best to worst. |
Source code in sdk/threewe/src/threewe/benchmark/leaderboard.py
validate_submission(data) ¶
Validate a leaderboard submission dict.
Returns:

| Type | Description |
|---|---|
| `list[str]` | List of error messages. Empty list means valid. |
Source code in sdk/threewe/src/threewe/benchmark/leaderboard.py
compute_spl(successes, path_lengths, optimal_lengths) ¶
Compute Success weighted by Path Length (SPL).
SPL = (1/N) * sum( S_i * (L_i / max(P_i, L_i)) ) where S_i = success, L_i = optimal path, P_i = actual path.
Source code in sdk/threewe/src/threewe/benchmark/metrics.py
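A worked example of the formula. It assumes path_lengths is the actual traveled distance per episode and optimal_lengths the shortest-path distance, and that the function is importable from threewe.benchmark as the module layout suggests.

```python
from threewe.benchmark import compute_spl

successes = [True, False, True]
path_lengths = [2.5, 5.0, 4.0]      # actual distance traveled (assumed meaning)
optimal_lengths = [2.0, 3.0, 4.0]   # shortest-path distance

spl = compute_spl(successes, path_lengths, optimal_lengths)
# By hand: (1*(2.0/2.5) + 0 + 1*(4.0/4.0)) / 3 = 0.6
```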
compute_success_rate(successes) ¶
Compute success rate as fraction of successful episodes.
generate_objectnav_episodes(scene_name, num_episodes, seed=42) ¶
Generate ObjectNav episode configs from a scene's goal poses.
Each episode picks a random object category and goal pose from the scene.
Source code in sdk/threewe/src/threewe/benchmark/objectnav_runner.py
run_objectnav_episode(robot, config) async ¶
Run a single ObjectNav episode.
The robot navigates toward the target object position. Success is determined by reaching within config.success_radius of the target.
Source code in sdk/threewe/src/threewe/benchmark/objectnav_runner.py
get_task(name) ¶
Get a benchmark task by name.
Raises:

| Type | Description |
|---|---|
| `ValueError` | If task name is unknown. |
Source code in sdk/threewe/src/threewe/benchmark/tasks.py
AI Integration¶
threewe.ai ¶
AI integration sub-package — VLM, VLA, and text encoding runners.
TextEncoder ¶
Encode text instructions into fixed-dimensional vectors.
Methods:

| Name | Description |
|---|---|
| `"tfidf"` | Zero-dependency TF-IDF-like bag-of-words encoding. Fast, deterministic, no downloads. Suitable for low-dimensional conditioning signals in simple VLA models. |
| `"sentence_transformers"` | Uses sentence-transformers library for high-quality semantic embeddings. Requires optional dependency. |
Source code in sdk/threewe/src/threewe/ai/text_encoder.py
encode(text) ¶
Encode text into a fixed-size float32 vector.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `text` | `str` | The instruction text to encode. | *required* |

Returns:

| Type | Description |
|---|---|
| `ndarray` | numpy array of shape (dim,) with dtype float32. |
Source code in sdk/threewe/src/threewe/ai/text_encoder.py
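A sketch of encoding an instruction; the constructor keywords (method, dim) are assumptions, since only the two method names above are documented.

```python
from threewe.ai import TextEncoder

encoder = TextEncoder(method="tfidf", dim=64)    # constructor keywords assumed
vec = encoder.encode("go to the kitchen and wait")
print(vec.shape, vec.dtype)                      # (dim,) float32 per the table above
```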
VLARunner ¶
Run VLA models for end-to-end robot control.
A VLA model takes observations (image + state) and optionally a language instruction, and directly outputs action vectors (velocities/joint commands).
Source code in sdk/threewe/src/threewe/ai/vla_runner.py
action_dim property ¶
Expected output action dimension.
from_pretrained(model_id, device='cpu', revision=None) classmethod ¶
Load a VLA model from HuggingFace Hub.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `str` | HuggingFace model identifier (e.g., "lerobot/act_3we_nav") | *required* |
| `device` | `str` | Target device ("cpu", "cuda", "mps") | `'cpu'` |
| `revision` | `str \| None` | Specific model revision (commit hash) to pin the download. | `None` |
Source code in sdk/threewe/src/threewe/ai/vla_runner.py
from_local(path, device='cpu') classmethod ¶
Load a local VLA model (ONNX, PyTorch, or Hailo HEF).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to model directory or file | *required* |
| `device` | `str` | Target device ("cpu", "cuda", "mps", "hailo") | `'cpu'` |
Source code in sdk/threewe/src/threewe/ai/vla_runner.py
predict(obs, instruction='') ¶
Predict action from observation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `obs` | `dict[str, ndarray]` | Observation dict with keys like "image", "state", "lidar" | *required* |
| `instruction` | `str` | Optional language instruction for conditioned policies | `''` |

Returns:

| Type | Description |
|---|---|
| `ndarray` | Action vector as numpy array. Shape depends on model (typically (3,) for [vx, vy, omega] or (7,) for arm joints). |
Source code in sdk/threewe/src/threewe/ai/vla_runner.py
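A sketch of a single perception-action step with a connected robot. The model id is the example from the docstring above; the observation keys a given checkpoint expects are model-specific.

```python
from threewe.ai import VLARunner

runner = VLARunner.from_pretrained("lerobot/act_3we_nav", device="cpu")

obs = robot.get_observation(modalities=["image", "pose", "velocity"])
action = runner.predict(obs, instruction="go to the door")
robot.execute_action(action)   # (3,) [vx, vy, omega] case
```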
VLMRunner ¶
Runs VLM inference for robot instruction execution.
Source code in sdk/threewe/src/threewe/ai/vlm_runner.py
from_env(max_steps=20, temperature=0.0) classmethod ¶
Create a VLMRunner from environment variables.
Reads:

- THREEWE_VLM_MODEL: Model name (default: gpt-4o)
- OPENAI_API_KEY: API key for OpenAI or compatible endpoint
- THREEWE_VLM_BASE_URL: Custom API endpoint (for Qwen-VL, local models)
- THREEWE_VLM_MAX_STEPS: Max steps per instruction (default: 20)
- THREEWE_VLM_TEMPERATURE: Sampling temperature (default: 0.0)
Source code in sdk/threewe/src/threewe/ai/vlm_runner.py
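A sketch using the documented environment variables (the API key is a placeholder):

```python
import os
from threewe.ai import VLMRunner

os.environ["THREEWE_VLM_MODEL"] = "gpt-4o"
os.environ["OPENAI_API_KEY"] = "sk-..."        # placeholder
os.environ["THREEWE_VLM_MAX_STEPS"] = "30"

runner = VLMRunner.from_env()
```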
plan(image, instruction) ¶
Get a single-step action plan from the VLM given an image and instruction.
Source code in sdk/threewe/src/threewe/ai/vlm_runner.py
execute_vlm_instruction(robot, instruction, model='gpt-4o', api_key=None, base_url=None, max_steps=20, on_step=None) async ¶
Execute a natural language instruction using a VLM in a perception-action loop.
The VLM observes camera images and generates navigation actions until the instruction is fulfilled or max_steps is reached.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `robot` | `Robot` | Connected Robot instance. | *required* |
| `instruction` | `str` | Natural language task description. | *required* |
| `model` | `str` | VLM model name. | `'gpt-4o'` |
| `api_key` | `str \| None` | Optional API key override. | `None` |
| `base_url` | `str \| None` | Optional API base URL override. | `None` |
| `max_steps` | `int` | Maximum perception-action cycles. | `20` |
| `on_step` | `StepCallback \| None` | Optional callback invoked after each step with (step_number, raw_response, parsed_action). | `None` |
Source code in sdk/threewe/src/threewe/ai/vlm_runner.py
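A sketch of the perception-action loop with a step callback. The import path from threewe.ai is assumed from the module layout, and the instruction text is arbitrary.

```python
import asyncio
from threewe.ai import execute_vlm_instruction

def log_step(step_number, raw_response, parsed_action):
    # Matches the documented (step_number, raw_response, parsed_action) callback.
    print(f"step {step_number}: {parsed_action}")

result = asyncio.run(
    execute_vlm_instruction(
        robot,
        "find the red chair and stop in front of it",
        model="gpt-4o",
        max_steps=30,
        on_step=log_step,
    )
)
```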
Data Recording¶
threewe.data ¶
Data recording and export for trajectory collection.
Supports:

- HDF5 format for efficient storage
- LeRobot-compatible Parquet + MP4 export
- HuggingFace Hub push/pull
TrajectoryRecorder ¶
Records robot trajectories for imitation learning.
Usage:

    recorder = TrajectoryRecorder(robot)
    recorder.start_episode()
    # ... control robot ...
    recorder.record_step(action=[0.1, 0.0, 0.0])
    recorder.end_episode()
    recorder.save_hdf5("trajectories.h5")
Source code in sdk/threewe/src/threewe/data/recorder.py
start_episode(metadata=None) ¶
record_step(action) ¶
Record current observation and the action taken.
Source code in sdk/threewe/src/threewe/data/recorder.py
end_episode() ¶
End current episode and store it.
Source code in sdk/threewe/src/threewe/data/recorder.py
save_hdf5(path) ¶
Save all episodes to HDF5 format.
Structure:

    /episode_000/
        images: (T, H, W, 3) uint8
        poses: (T, 3) float32
        velocities: (T, 3) float32
        actions: (T, 3) float32
        timestamps: (T,) float64
        [lidar: (T, N) float32]
        [depth: (T, H, W) float32]
Source code in sdk/threewe/src/threewe/data/recorder.py
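Reading the file back follows directly from the layout above (requires h5py; group and dataset names as documented):

```python
import h5py

with h5py.File("trajectories.h5", "r") as f:
    ep = f["episode_000"]
    images = ep["images"][:]     # (T, H, W, 3) uint8
    actions = ep["actions"][:]   # (T, 3) float32
    print(images.shape, actions.shape)
```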
save_lerobot(output_dir) ¶
Export episodes in LeRobot-compatible format (Parquet + MP4).
Creates:

    output_dir/
        data/
            episode_000.parquet
            episode_001.parquet
            ...
        videos/
            episode_000.mp4
            episode_001.mp4
            ...
        meta/
            info.json
Source code in sdk/threewe/src/threewe/data/recorder.py
pull_dataset(repo_id, local_dir, token=None, revision=None) ¶
Pull a LeRobot dataset from HuggingFace Hub.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `repo_id` | `str` | HuggingFace repo ID. | *required* |
| `local_dir` | `str \| Path` | Local directory to save to. | *required* |
| `token` | `str \| None` | HuggingFace API token. | `None` |
| `revision` | `str \| None` | Specific dataset revision (commit hash) to pin the download. | `None` |

Returns:

| Type | Description |
|---|---|
| `Path` | Path to the downloaded dataset. |

Raises:

| Type | Description |
|---|---|
| `ImportError` | If huggingface-hub is not installed. |
Source code in sdk/threewe/src/threewe/data/hub.py
push_dataset(local_dir, repo_id, token=None) ¶
Push a LeRobot dataset directory to HuggingFace Hub.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `local_dir` | `str \| Path` | Local directory containing data/, videos/, meta/. | *required* |
| `repo_id` | `str` | HuggingFace repo ID (e.g. 'user/dataset-name'). | *required* |
| `token` | `str \| None` | HuggingFace API token. Uses cached token if None. | `None` |

Returns:

| Type | Description |
|---|---|
| `str` | URL of the uploaded dataset. |

Raises:

| Type | Description |
|---|---|
| `ImportError` | If huggingface-hub is not installed. |
| `FileNotFoundError` | If local_dir doesn't exist. |
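A sketch of the end-to-end export flow. The import path from threewe.data is assumed from the module layout, `recorder` is a TrajectoryRecorder that already holds episodes (see the usage block above), and the repo id is a placeholder.

```python
from threewe.data import push_dataset, pull_dataset

recorder.save_lerobot("./my_dataset")                           # Parquet + MP4 + meta/
url = push_dataset("./my_dataset", "your-user/3we-nav-demos")   # placeholder repo id
path = pull_dataset("your-user/3we-nav-demos", "./my_dataset_copy")
```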