Commit 8cfab38

Fix typos (#1070)
1 parent ee5525f commit 8cfab38

17 files changed, with 33 additions and 33 deletions.

README.md

Lines changed: 3 additions & 3 deletions
@@ -221,7 +221,7 @@ dataset attributes:
 │ ├ episode_index (int64): index of the episode for this sample
 │ ├ frame_index (int64): index of the frame for this sample in the episode ; starts at 0 for each episode
 │ ├ timestamp (float32): timestamp in the episode
-│ ├ next.done (bool): indicates the end of en episode ; True for the last frame in each episode
+│ ├ next.done (bool): indicates the end of an episode ; True for the last frame in each episode
 │ └ index (int64): general index in the whole dataset
 ├ episode_data_index: contains 2 tensors with the start and end indices of each episode
 │ ├ from (1D int64 tensor): first frame index for each episode — shape (num episodes,) starts with 0
@@ -270,7 +270,7 @@ See `python lerobot/scripts/eval.py --help` for more instructions.

 ### Train your own policy

-Check out [example 3](./examples/3_train_policy.py) that illustrate how to train a model using our core library in python, and [example 4](./examples/4_train_policy_with_script.md) that shows how to use our training script from command line.
+Check out [example 3](./examples/3_train_policy.py) that illustrates how to train a model using our core library in python, and [example 4](./examples/4_train_policy_with_script.md) that shows how to use our training script from command line.

 To use wandb for logging training and evaluation curves, make sure you've run `wandb login` as a one-time setup step. Then, when running the training command above, enable WandB in the configuration by adding `--wandb.enable=true`.

@@ -321,7 +321,7 @@ Once you have trained a policy you may upload it to the Hugging Face hub using a
 You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory which should contain:
 - `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
 - `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
-- `train_config.json`: A consolidated configuration containing all parameter userd for training. The policy configuration should match `config.json` exactly. Thisis useful for anyone who wants to evaluate your policy or for reproducibility.
+- `train_config.json`: A consolidated configuration containing all parameters used for training. The policy configuration should match `config.json` exactly. This is useful for anyone who wants to evaluate your policy or for reproducibility.

 To upload these to the hub, run the following:
 ```bash
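As an aside on the upload step above (the README's actual command lies outside this hunk), a minimal sketch of pushing that `pretrained_model` folder with the `huggingface_hub` Python API could look like the following; the repo id is a placeholder, not a value from this commit.

```python
# Hypothetical sketch, not the README's own command: push a checkpoint's
# pretrained_model folder to the Hugging Face Hub. The repo id is a placeholder.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo(repo_id="your-username/act_aloha_example", exist_ok=True)
api.upload_folder(
    folder_path="outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500/pretrained_model",
    repo_id="your-username/act_aloha_example",
    repo_type="model",  # policies are uploaded as regular model repos
)
```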

docs/source/assemble_so101.mdx

Lines changed: 1 addition & 1 deletion
@@ -194,7 +194,7 @@ Here is a video of the process:
 </div>

 ### Clean Parts
-Remove all support material from the 3D-printed parts, the easiest wat to do this is using a small screwdriver to get underneath the support material.
+Remove all support material from the 3D-printed parts, the easiest way to do this is using a small screwdriver to get underneath the support material.

 ### Joint 1

docs/source/getting_started_real_world_robot.mdx

Lines changed: 3 additions & 3 deletions
@@ -152,7 +152,7 @@ If everything is set up correctly, you can proceed with the rest of the tutorial

 ## Teleoperate with cameras

-We can now teleoperate again while at the same time visualzing the camera's and joint positions with `rerun`.
+We can now teleoperate again while at the same time visualizing the camera's and joint positions with `rerun`.

 ```bash
 python lerobot/scripts/control_robot.py \
@@ -165,7 +165,7 @@ python lerobot/scripts/control_robot.py \

 Once you're familiar with teleoperation, you can record your first dataset with SO-101.

-We use the Hugging Face hub features for uploading your dataset. If you haven't previously used the Hub, make sure you've can login via the cli using a write-access token, this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
+We use the Hugging Face hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can login via the cli using a write-access token, this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).

 Add your token to the cli by running this command:
 ```bash
@@ -318,7 +318,7 @@ python lerobot/scripts/train.py \

 Let's explain the command:
 1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so101_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor sates, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
 4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
 5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
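Related to the Hub login mentioned in the second hunk above (the exact CLI command sits outside the hunk), the same authentication can also be done from Python, as in this rough sketch; the token string is a placeholder.

```python
# Hypothetical sketch: log in to the Hugging Face Hub from Python instead of the CLI.
# Replace the placeholder with a write-access token generated at
# https://huggingface.co/settings/tokens.
from huggingface_hub import login

login(token="hf_xxxxxxxxxxxxxxxxxxxx")  # placeholder token, never commit real tokens
```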

examples/10_use_so100.md

Lines changed: 1 addition & 1 deletion
@@ -578,7 +578,7 @@ python lerobot/scripts/train.py \

 Let's explain it:
 1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so100_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor sates, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
 4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
 5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.

examples/11_use_lekiwi.md

Lines changed: 2 additions & 2 deletions
@@ -134,7 +134,7 @@ First we will assemble the two SO100 arms. One to attach to the mobile base and

 ## SO100 Arms
 ### Configure motors
-The instructions for configuring the motors can be found [Here](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#c-configure-the-motors) in step C of the SO100 tutorial. Besides the ID's for the arm motors we also need to set the motor ID's for the mobile base. These needs to be in a specific order to work. Below an image of the motor ID's and motor mounting positions for the mobile base. Note that we only use one Motor Control board on LeKiwi. This means the motor ID's for the wheels are 7, 8 and 9.
+The instructions for configuring the motors can be found [Here](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#c-configure-the-motors) in step C of the SO100 tutorial. Besides the ID's for the arm motors we also need to set the motor ID's for the mobile base. These need to be in a specific order to work. Below an image of the motor ID's and motor mounting positions for the mobile base. Note that we only use one Motor Control board on LeKiwi. This means the motor ID's for the wheels are 7, 8 and 9.

 <img src="../media/lekiwi/motor_ids.webp?raw=true" alt="Motor ID's for mobile robot" title="Motor ID's for mobile robot" width="60%">

@@ -567,7 +567,7 @@ python lerobot/scripts/train.py \

 Let's explain it:
 1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/lekiwi_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor sates, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
 4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
 5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.

examples/11_use_moss.md

Lines changed: 3 additions & 3 deletions
@@ -44,7 +44,7 @@ cd ~/lerobot && pip install -e ".[feetech]"

 ## Configure the motors

-Follow steps 1 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic) which illustrates the use of our scripts below.
+Follow step 1 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic) which illustrates the use of our scripts below.

 **Find USB ports associated to your arms**
 To find the correct ports for each arm, run the utility script twice:
@@ -164,7 +164,7 @@ Try to avoid rotating the motor while doing so to keep position 2048 set during

 ## Assemble the arms

-Follow step 4 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic). The first arm should take a bit more than 1 hour to assemble, but once you get use to it, you can do it under 1 hour for the second arm.
+Follow step 4 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic). The first arm should take a bit more than 1 hour to assemble, but once you get used to it, you can do it under 1 hour for the second arm.

 ## Calibrate

@@ -301,7 +301,7 @@ python lerobot/scripts/train.py \

 Let's explain it:
 1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/moss_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor sates, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
 4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
 5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.

examples/12_use_so101.md

Lines changed: 3 additions & 3 deletions
@@ -428,7 +428,7 @@ camera_01_frame_000047.png

 Note: Some cameras may take a few seconds to warm up, and the first frame might be black or green.

-Now that you have the camera indexes, you should change then in the config. You can also change the fps, width or height of the camera.
+Now that you have the camera indexes, you should change them in the config. You can also change the fps, width or height of the camera.

 The camera config is defined per robot, can be found here [`RobotConfig`](https://github.com/huggingface/lerobot/blob/main/lerobot/common/robot_devices/robots/configs.py) and looks like this:
 ```python
@@ -515,7 +515,7 @@ If you have an additional camera you can add a wrist camera to the SO101. There

 ## Teleoperate with cameras

-We can now teleoperate again while at the same time visualzing the camera's and joint positions with `rerun`.
+We can now teleoperate again while at the same time visualizing the camera's and joint positions with `rerun`.

 ```bash
 python lerobot/scripts/control_robot.py \
@@ -528,7 +528,7 @@ python lerobot/scripts/control_robot.py \

 Once you're familiar with teleoperation, you can record your first dataset with SO-100.

-We use the Hugging Face hub features for uploading your dataset. If you haven't previously used the Hub, make sure you've can login via the cli using a write-access token, this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
+We use the Hugging Face hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can login via the cli using a write-access token, this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).

 Add your token to the cli by running this command:
 ```bash
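Picking up the camera settings named in the first hunk of this file (index, fps, width, height), below is a purely hypothetical sketch of such per-camera settings; it is not the contents of the actual `RobotConfig` in `configs.py`.

```python
# Hypothetical illustration only; the real per-robot camera config is defined in
# lerobot/common/robot_devices/robots/configs.py (see RobotConfig). These class
# and field names simply mirror the settings named in the text above.
from dataclasses import dataclass, field


@dataclass
class CameraSettings:
    camera_index: int  # index found with the camera utility script
    fps: int = 30      # placeholder defaults
    width: int = 640
    height: int = 480


@dataclass
class RobotCameras:
    cameras: dict[str, CameraSettings] = field(default_factory=lambda: {
        "laptop": CameraSettings(camera_index=0),
        "phone": CameraSettings(camera_index=1),
    })
```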

examples/2_evaluate_pretrained_policy.py

Lines changed: 2 additions & 2 deletions
@@ -13,7 +13,7 @@
 # limitations under the License.

 """
-This scripts demonstrates how to evaluate a pretrained policy from the HuggingFace Hub or from your local
+This script demonstrates how to evaluate a pretrained policy from the HuggingFace Hub or from your local
 training outputs directory. In the latter case, you might want to run examples/3_train_policy.py first.

 It requires the installation of the 'gym_pusht' simulation environment. Install it by running:
@@ -119,7 +119,7 @@
 rewards.append(reward)
 frames.append(env.render())

-# The rollout is considered done when the success state is reach (i.e. terminated is True),
+# The rollout is considered done when the success state is reached (i.e. terminated is True),
 # or the maximum number of iterations is reached (i.e. truncated is True)
 done = terminated | truncated | done
 step += 1
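For context on the loop this second hunk sits in, here is a simplified, self-contained rollout sketch in the same spirit; it samples random actions instead of querying the pretrained policy, and the environment kwargs are assumptions rather than a copy of the example script.

```python
# Simplified sketch of a gym_pusht rollout; the real example batches observations
# and calls a pretrained policy's select_action() instead of sampling randomly.
import gymnasium as gym
import gym_pusht  # noqa: F401  (registers the "gym_pusht/PushT-v0" environment)

env = gym.make("gym_pusht/PushT-v0", obs_type="pixels_agent_pos", max_episode_steps=300)
observation, info = env.reset(seed=42)

rewards = []
done = False
step = 0
while not done:
    action = env.action_space.sample()  # stand-in for the policy's action
    observation, reward, terminated, truncated, info = env.step(action)
    rewards.append(reward)

    # The rollout is considered done when the success state is reached (terminated)
    # or the maximum number of iterations is reached (truncated).
    done = terminated or truncated
    step += 1

env.close()
print(f"Finished after {step} steps, total reward {sum(rewards):.2f}")
```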

examples/3_train_policy.py

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-"""This scripts demonstrates how to train Diffusion Policy on the PushT environment.
+"""This script demonstrates how to train Diffusion Policy on the PushT environment.

 Once you have trained a model with this script, you can try to evaluate it on
 examples/2_evaluate_pretrained_policy.py

examples/4_train_policy_with_script.md

Lines changed: 4 additions & 4 deletions
@@ -1,5 +1,5 @@
 This tutorial will explain the training script, how to use it, and particularly how to configure everything needed for the training run.
-> **Note:** The following assume you're running these commands on a machine equipped with a cuda GPU. If you don't have one (or if you're using a Mac), you can add `--policy.device=cpu` (`--policy.device=mps` respectively). However, be advised that the code executes much slower on cpu.
+> **Note:** The following assumes you're running these commands on a machine equipped with a cuda GPU. If you don't have one (or if you're using a Mac), you can add `--policy.device=cpu` (`--policy.device=mps` respectively). However, be advised that the code executes much slower on cpu.


 ## The training script
@@ -23,7 +23,7 @@ def train(cfg: TrainPipelineConfig):

 You can inspect the `TrainPipelineConfig` defined in [`lerobot/configs/train.py`](../lerobot/configs/train.py) (which is heavily commented and meant to be a reference to understand any option)

-When running the script, inputs for the command line are parsed thanks to the `@parser.wrap()` decorator and an instance of this class is automatically generated. Under the hood, this is done with [Draccus](https://github.com/dlwh/draccus) which is a tool dedicated for this purpose. If you're familiar with Hydra, Draccus can similarly load configurations from config files (.json, .yaml) and also override their values through command line inputs. Unlike Hydra, these configurations are pre-defined in the code through dataclasses rather than being defined entirely in config files. This allows for more rigorous serialization/deserialization, typing, and to manipulate configuration as objects directly in the code and not as dictionaries or namespaces (which enables nice features in an IDE such as autocomplete, jump-to-def, etc.)
+When running the script, inputs for the command line are parsed thanks to the `@parser.wrap()` decorator and an instance of this class is automatically generated. Under the hood, this is done with [Draccus](https://github.com/dlwh/draccus) which is a tool dedicated to this purpose. If you're familiar with Hydra, Draccus can similarly load configurations from config files (.json, .yaml) and also override their values through command line inputs. Unlike Hydra, these configurations are pre-defined in the code through dataclasses rather than being defined entirely in config files. This allows for more rigorous serialization/deserialization, typing, and to manipulate configuration as objects directly in the code and not as dictionaries or namespaces (which enables nice features in an IDE such as autocomplete, jump-to-def, etc.)

 Let's have a look at a simplified example. Amongst other attributes, the training config has the following attributes:
 ```python
@@ -43,7 +43,7 @@ class DatasetConfig:
 ```

 This creates a hierarchical relationship where, for example assuming we have a `cfg` instance of `TrainPipelineConfig`, we can access the `repo_id` value with `cfg.dataset.repo_id`.
-From the command line, we can specify this value with using a very similar syntax `--dataset.repo_id=repo/id`.
+From the command line, we can specify this value by using a very similar syntax `--dataset.repo_id=repo/id`.

 By default, every field takes its default value specified in the dataclass. If a field doesn't have a default value, it needs to be specified either from the command line or from a config file – which path is also given in the command line (more in this below). In the example above, the `dataset` field doesn't have a default value which means it must be specified.
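To make the hierarchical dataclass configuration described in the hunks above concrete, here is a minimal, runnable sketch; apart from `dataset` and `repo_id`, the field names and defaults are simplified placeholders rather than the real `TrainPipelineConfig`.

```python
# Minimal sketch of the nested dataclass configs discussed above. The real
# definitions live in lerobot/configs/train.py and are built from the CLI by
# Draccus through the @parser.wrap() decorator; fields here are placeholders.
from dataclasses import dataclass


@dataclass
class DatasetConfig:
    repo_id: str                       # no default, so it must be provided
    episodes: list[int] | None = None  # placeholder optional field


@dataclass
class TrainPipelineConfig:
    dataset: DatasetConfig             # nested config, exposed as --dataset.* on the CLI
    seed: int = 1000                   # placeholder default


# Hand-built equivalent of passing --dataset.repo_id=repo/id on the command line:
cfg = TrainPipelineConfig(dataset=DatasetConfig(repo_id="repo/id"))
print(cfg.dataset.repo_id)  # -> repo/id
```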

@@ -135,7 +135,7 @@ will start a training run with the same configuration used for training [lerobot

 ## Resume training

-Being able to resume a training run is important in case it crashed or aborted for any reason. We'll demonstrate how to that here.
+Being able to resume a training run is important in case it crashed or aborted for any reason. We'll demonstrate how to do that here.

 Let's reuse the command from the previous run and add a few more options:
 ```bash
