Pose cfg recipe #2

Open · wants to merge 7 commits into `main`
2 changes: 2 additions & 0 deletions _toc.yml
@@ -45,6 +45,8 @@ parts:
- file: docs/recipes/DLCMethods
- file: docs/recipes/OpenVINO
- file: docs/recipes/flip_and_rotate
- file: docs/recipes/pose_cfg_file_breakdown
- file: docs/recipes/publishing_notebooks_into_the_DLC_main_cookbook
- caption: Mission & Contribute
chapters:
- file: docs/MISSION_AND_VALUES
213 changes: 213 additions & 0 deletions docs/recipes/pose_cfg_file_breakdown.md
@@ -0,0 +1,213 @@
# The `pose_cfg.yaml` Handbook
Hello! Mabuhay! Hola!
In this notebook, we run through the `pose_cfg.yaml` parameters related to model training and data augmentation (the full parameter list is given in section 3).

# 2. What is *pose_cfg.yaml*?
<a id="whatisposecfg"></a>
The `pose_cfg.yaml` file offers easy access to a range of training parameters that the user may want, or need, to adjust depending on the dataset and task. This recipe aims to give the average user an intuition for these hyperparameters and for the situations in which adjusting them can be useful.

# 3. Full parameter list
<a id="fullparamlist"></a>
- [The `pose_cfg.yaml` Handbook](#the-pose_cfgyaml-handbook)
- [2. What is *pose\_cfg.yaml*?](#2-what-is-pose_cfgyaml)
- [3. Full parameter list](#3-full-parameter-list)
- [3.1 Training Hyperparameters](#31-training-hyperparameters)
- [3.1.A `max_input_size` and `min_input_size`](#31a-max_input_size-and-min_input_size)
- [3.1.B `global_scale`](#31b-global_scale)
- [3.1.C `batch_size`](#31c-batch_size)
- [3.1.D `pos_dist_thresh`](#31d-pos_dist_thresh)
- [3.1.E `pafwidth`](#31e-pafwidth)
- [3.2 Data augmentation parameters](#32-data-augmentation-parameters)
- [Geometric transformations](#geometric-transformations)
- [3.2.1 `scale_jitter_lo` and `scale_jitter_up`](#321-scale_jitter_lo-and-scale_jitter_up)
- [3.2.2 `rotation`](#322-rotation)
- [3.2.3 `rotratio` (rotation ratio)](#323-rotratio-rotation-ratio)
- [3.2.4 `fliplr` (or a horizontal flip)](#324-fliplr-or-a-horizontal-flip)
- [3.2.5 `crop_size`](#325-crop_size)
- [3.2.6 `cropratio`](#326-cropratio)
- [3.2.7 `max_shift`](#327-max_shift)
- [3.2.8 `crop_sampling`](#328-crop_sampling)
- [Kernel transformations](#kernel-transformations)
- [3.2.9 `sharpening` and `sharpenratio`](#329-sharpening-and-sharpenratio)
- [3.2.10 `edge`](#3210-edge)
- [References](#references)

<a id="hyperparam"></a>
## 3.1 Training Hyperparameters

<a id="input_size"></a>
### 3.1.A `max_input_size` and `min_input_size`
The default values are `1500` and `64`, respectively.

💡Pro-tip:💡
- change `max_input_size` when the resolution of the video is higher than 1500x1500 or when `scale_jitter_up` will possibly go over that value
- change `min_input_size` when the resolution of the video is smaller than 64x64 or when `scale_jitter_lo` will possibly go below that value
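
As a quick sketch (using the default values quoted above - adjust them to your own footage), these parameters appear in `pose_cfg.yaml` as plain key-value entries:
```
max_input_size: 1500  # raise this if your frames (or scale_jitter_up) exceed 1500x1500
min_input_size: 64    # lower this if your frames (or scale_jitter_lo) fall below 64x64
```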

<a id="global_scale"></a>
### 3.1.B `global_scale`
The default value is `0.8`. This is the base scaling factor applied to every image in the training queue.

💡Pro-tip:💡
- With images that are low resolution or lack detail, it may be beneficial to increase the `global_scale` to 1, to keep the original size and retain as much information as possible.

### 3.1.C `batch_size`
<a id="batch_size"></a>

The default for single-animal projects is `1`, and for maDLC projects it is `8`. This is the number of frames used per training iteration.

In both cases, you can increase `batch_size` up to the limit of your GPU memory and train for fewer iterations. The relationship between the number of iterations and `batch_size` is not linear, so `batch_size: 8` does not mean you can train for 8x fewer iterations; as with any training, a plateauing loss can be treated as an indicator that optimal performance has been reached.

💡Pro-tip:💡
- A higher `batch_size` can be beneficial for the model's generalization.
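
For illustration, a minimal sketch of how the two parameters above might be set in `pose_cfg.yaml` (shown with the maDLC default batch size; how high you can go depends on your GPU memory):
```
global_scale: 0.8  # base scaling applied to all training images; try 1.0 for low-resolution footage
batch_size: 8      # default is 1 for single-animal projects; increase up to your GPU memory limit
```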

___________________________________________________________________________________

The values mentioned above and the augmentation parameters are often intuitive; knowing our own data, we can decide what will and will not be beneficial. Unfortunately, not all hyperparameters are this simple or intuitive. Two parameters that might require some tuning on challenging datasets are `pafwidth` and `pos_dist_thresh`.

<a id="pos"></a>
### 3.1.D `pos_dist_thresh`
The default value is `17`. It is the size of the window within which detections are considered positive training samples, i.e., samples that tell the model it is going in the right direction.

<a id="paf"></a>
### 3.1.E `pafwidth`
The default value is `20`. PAF stands for part affinity fields, a method of learning associations between pairs of body parts by preserving the location and orientation of the limb (the connection between two keypoints). The learned part affinities help with proper animal assembly, making the model less prone to associating body parts of one individual with those of another. [1](#ref1)
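
Again as a sketch, the corresponding `pose_cfg.yaml` entries with the defaults quoted above:
```
pos_dist_thresh: 17  # window size within which detections count as positive samples
pafwidth: 20         # width of the part affinity fields used to associate body parts
```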

<a id="data_aug"></a>
## 3.2 Data augmentation parameters
In the simplest form, we can think of data augmentation as something similar to imagination or dreaming. Humans imagine different scenarios based on experience, ultimately allowing us to better understand our world. [2, 3, 4](#references)

Similarly, we train our models on different types of "imagined" scenarios, limited to the foreseeable ones, so that we ultimately get a robust model that is more likely to handle new data and scenes.

The classes of data augmentation, characterized by their nature, are:
- [**Geometric transformations**](#geometric)
    1. [`scale_jitter_lo` and `scale_jitter_up`](#scale_jitter)
    2. [`rotation`](#rot)
    3. [`rotratio`](#rotratio)
    4. [`fliplr`](#fliplr)
    5. [`crop_size`](#crop_size)
    6. [`cropratio`](#cropratio)
    7. [`max_shift`](#max_shift)
    8. [`crop_sampling`](#crop_sampling)
- [**Kernel transformations**](#kernel)
    9. [`sharpening` and `sharpenratio`](#sharp)
    10. [`edge`](#edge)

<a id="geometric"></a>
### Geometric transformations
**Geometric transformations** such as *flipping*, *rotating*, *translating*, *cropping*, *scaling*, and *injecting noise* are very good at counteracting positional biases present in the training data.

<a id="scale_jitter"></a>
### 3.2.1 `scale_jitter_lo` and `scale_jitter_up`
*Scale jittering* resizes an image within a given resize range. This allows the model to learn from different sizes of objects in the scene, thereby increasing its robustness and its ability to generalize, especially to new scenes or object sizes.

The image below, retrieved from [3](#ref3), illustrates the difference between two scale jittering methods.

![scale_jittering.png](attachment:scale_jittering.png)

During training, each image is randomly scaled within the range `[scale_jitter_lo, scale_jitter_up]` to augment training data. The default values for these two parameters are:
- `scale_jitter_lo = 0.5`
- `scale_jitter_up = 1.25`

💡Pro-tips:💡
- ⭐⭐⭐ If the target animal/s do not have an incredibly high variance in size throughout the video (e.g., jumping or moving towards the static camera), keeping the **default** values **unchanged** will give just enough variability in the data for the model to generalize better ✅

- ⭐⭐However, you may want to adjust these parameters if you want your model to:
- handle new data with possibly **larger (25% bigger than original)** animal subjects ➡️ in this scenario, increase the value of *scale_jitter_up*
- handle new data with possibly **smaller (50% smaller than the original)** animal subjects ➡️ in this scenario, decrease the value of *scale_jitter_lo*
- **generalize well in new set-ups/environments** with minimal to no pre-training
⚠️ But as a consequence, **training time will take longer**.😔🕒
- ⭐If you have a fully static camera set-up and the sizes of the animals do not vary much, you may also try to **shorten** this range to **reduce training time**.😃🕒(⚠️ but, as a consequence, your model might only fit your data and not generalize well)
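
As an illustrative sketch only, this is how the jitter range might be widened in `pose_cfg.yaml` for footage in which animals occasionally appear larger than in the training frames:
```
scale_jitter_lo: 0.5  # default lower bound
scale_jitter_up: 1.5  # raised from the default 1.25 to cover larger-looking animals
```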

<a id="rot"></a>
### 3.2.2 `rotation`
*Rotation augmentations* are done by rotating the image left or right by anywhere between $1^{\circ}$ and $359^{\circ}$. The safety of rotation augmentations is heavily determined by the rotation degree parameter: slight rotations, such as between $+1^{\circ}$ and $+20^{\circ}$ or $-1^{\circ}$ and $-20^{\circ}$, are generally an acceptable range. Keep in mind that as the rotation degree increases, the precision of the label placement can decrease.

The image below, retrieved from [2](#ref2), illustrates the difference between the different rotation degrees.
![augset_rot.png](attachment:augset_rot.png)

During training, each image is rotated by up to the `rotation` degree parameter in either direction. By default, this parameter is set to `25`, which means images are augmented with rotations of up to $+25^{\circ}$ or $-25^{\circ}$. Should you want to opt out of this augmentation, set the rotation value to `False`.

💡Pro-tips:💡
- ⭐If you have labelled all the possible rotations of your animal/s, keeping the **default** value **unchanged** is **enough** ✅

- However, you may want to adjust this parameter if you want your model to:
  - handle new data with new rotations of the animal subjects
  - handle the possibly unlabelled rotations of your minimally labelled data
  - but as a consequence, the more you increase the rotation degree, the less likely it is that the original keypoint labels will be preserved

<a id="rotratio"></a>
### 3.2.3 `rotratio` (rotation ratio)
This parameter is the fraction of images sampled from your training data to be augmented with rotation. The default value is `0.4`, or $40\%$, meaning there is a $40\%$ chance that images within the current batch will be rotated.

💡Pro-tip:💡
- ⭐ Generally, keeping the **default** value **unchanged** is **enough** ✅
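
For reference, these two settings appear in `pose_cfg.yaml` roughly as follows (defaults shown):
```
rotation: 25   # images rotated by up to 25 degrees in either direction; set to False to disable
rotratio: 0.4  # 40% of the images in a batch receive the rotation augmentation
```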

<a id="fliplr"></a>
### 3.2.4 `fliplr` (or a horizontal flip)
**Mirroring**, otherwise called **horizontal axis flipping**, is much more common than flipping the vertical axis. This augmentation is one of the easiest to implement and has proven useful on datasets such as CIFAR-10 and ImageNet. However, on datasets involving text recognition, such as MNIST or SVHN, it is not a label-preserving transformation.

The image below is an illustration of this property (shown on the right-most column).
![augset_flip.png](attachment:augset_flip.png)

This parameter randomly flips an image horizontally to augment training data.
By default, this parameter is set to `False`, which is especially important for poses with mirror-symmetric joints (for example, so that the left hand and right hand are not swapped).

💡Pro-tip:💡
- ⭐ If you work with labels with symmetric joints, keep the **default** value **unchanged** - unless the dataset is biased (animal moves mostly in one direction, but sometimes in the opposite)✅
- Keeping the default value to `False` will work well in most cases.
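
In `pose_cfg.yaml` this is a single flag; the sketch below simply shows the default discussed above:
```
fliplr: false  # keep False when your labels include left/right-specific body parts
```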

<a id ="crop_size"></a>
### 3.2.5 `crop_size`
Cropping consists of removing unwanted pixels from the image, thus selecting a part of the image and discarding the rest, reducing the size of the input.

In the DeepLabCut `pose_cfg.yaml` file, `crop_size` is set to `400, 400` (width and height, respectively) by default. This means crops of this size will be cut out of the training images.

💡Pro-tip:💡
- If your images are very large, you could consider increasing the crop size. However, be aware that you'll need a strong GPU, or you will hit memory errors!
- If your images are very small, you could consider decreasing the crop size.

<a id ="cropratio"></a>
### 3.2.6 `cropratio`
The fraction of frames to be cropped is defined by the variable `cropratio`, which is set to `0.4` by default. That means there is a $40\%$ chance that the images within the current batch will be cropped. The default value works well in most cases.

<a id ="max_shift"></a>
### 3.2.7 `max_shift`

The shift of the crop between encounters of the same image is defined by the `max_shift` variable, which sets the maximum relative shift of the crop centre. By default it is set to `0.4`, meaning the crop centre can be displaced by at most 40%, so that identical crops are not taken each time the same image is encountered during training - this is especially important for the `density` and `hybrid` cropping methods.

The image below is modified from
[2](#references).
![cropping.png](attachment:cropping.png)

<a id ="crop_sampling"></a>
### 3.2.8 `crop_sampling`
Likewise, there are different crop sampling methods (`crop_sampling`) that we can use depending on what our images look like.

💡Pro-tips💡
- For highly crowded scenes, `hybrid` and `density` approaches will work best.
- `uniform` takes out random parts of the image, disregarding the annotations completely
- `keypoint` centers the crop on a random keypoint; this might be best at preserving the whole animal (if a reasonable `crop_size` is used)
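
Putting the cropping-related parameters of sections 3.2.5-3.2.8 together, here is a sketch with the values discussed above (the exact list syntax for `crop_size` and the accepted method names may differ slightly, so double-check against your own `pose_cfg.yaml`):
```
crop_size:
- 400                  # crop width
- 400                  # crop height
cropratio: 0.4         # 40% chance that an image in the batch is cropped
max_shift: 0.4         # crop centre may shift by up to 40%
crop_sampling: hybrid  # other methods discussed above: uniform, keypoint, density
```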

<a id ="kernel"></a>
### Kernel transformations
Kernel filters are very popular in image processing for sharpening and blurring images. Intuitively, blurring images during training might increase resistance to motion blur during testing, while sharpening could help capture more detail on objects of interest.

<a id ="sharp"></a>
### 3.2.9 `sharpening` and `sharpenratio`
In the DeepLabCut `pose_cfg.yaml` file, `sharpening` is set to `False` by default; if we want to use this type of data augmentation, we can set it to `True` and specify a value for `sharpenratio`, which defaults to `0.3`. Blurring is not defined in `pose_cfg.yaml`, but it can be added to the data augmentation pipeline if the user finds it convenient.

The image below is modified from
[2](#references).
![kernelfilter.png](attachment:kernelfilter.png)

<a id ="edge"></a>
### 3.2.10 `edge`
Related to sharpness, there is an additional parameter, `edge` enhancement, which enhances the edge contrast of an image to improve its apparent sharpness. Likewise, this parameter is set to `False` by default; if you want to include it, you just need to set it to `True`.
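
Finally, a sketch of the kernel-transformation flags, here with sharpening switched on as an example:
```
sharpening: true    # default is false
sharpenratio: 0.3   # default sharpening strength
edge: false         # set to true to enable edge enhancement
```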

# References
<ol id="references">
<li id="ref1">Cao, Z., Simon, T., Wei, S. E., & Sheikh, Y. (2017). Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7291-7299).<a href="https://openaccess.thecvf.com/content_cvpr_2017/html/Cao_Realtime_Multi-Person_2D_CVPR_2017_paper.html">https://openaccess.thecvf.com/content_cvpr_2017/html/Cao_Realtime_Multi-Person_2D_CVPR_2017_paper.html</a></li>
<li id="ref2">Mathis, A., Schneider, S., Lauer, J., & Mathis, M. W. (2020). A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives. In Neuron (Vol. 108, Issue 1, pp. 44-65). Elsevier BV. <a href="https://doi.org/10.1016/j.neuron.2020.09.017">https://doi.org/10.1016/j.neuron.2020.09.017</a></li>
<li id="ref3">Ghiasi, G., Cui, Y., Srinivas, A., Qian, R., Lin, T.-Y., Cubuk, E. D., Le, Q. V., & Zoph, B. (2020). Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation (Version 2). arXiv. <a href="https://doi.org/10.48550/ARXIV.2012.07177">https://doi.org/10.48550/ARXIV.2012.07177</a></li>
<li id="ref4">Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on Image Data Augmentation for Deep Learning. In Journal of Big Data (Vol. 6, Issue 1). Springer Science and Business Media LLC. <a href="https://doi.org/10.1186/s40537-019-0197-0">https://doi.org/10.1186/s40537-019-0197-0</a> </li>
</ol>
104 changes: 104 additions & 0 deletions docs/recipes/publishing_notebooks_into_the_DLC_main_cookbook.md
@@ -0,0 +1,104 @@
# Publishing Notebooks into the Main DLC Cookbook
*Date: 13 June 2023*


## Introduction
Publishing notebooks into the main DLC cookbook can be done in a few easy steps!

## Requirements
To accomplish this, you need the following installed:
- jupyter-book
- numpydoc
- nbconvert
- jupyter_contrib_nbextensions

You can install these by running the following command:
***
```
pip install deeplabcut[docs]
```
***
**Relevant Git repos:**
- DeepLabCut: https://github.com/DeepLabCut/DeepLabCut
```
git clone https://github.com/DeepLabCut/DeepLabCut.git
```
- DeepLabCut2023version (forked from main DLC repo): https://github.com/DeepLabCutAIResidency/DeepLabCut2023version
```
git clone https://github.com/DeepLabCutAIResidency/DeepLabCut2023version.git
```

## Steps
1. Double-check for spelling and grammatical errors (with Grammarly - https://grammarly.com/ - or with the Jupyter notebook spellcheck extension `spellchecker`).
***
```
jupyter nbextension enable spellchecker/main
```
***
Once enabled, restart your notebook; when you load it again, incorrectly spelled words will be highlighted in red. See the example below:
<img src="spellcheck.png"></img>
2. Convert your notebook into a Markdown file (.md).
***
```
jupyter nbconvert --to markdown [notebook.ipynb]
```
***
<img src="nbconvert.png"></img>

3. Move your newly converted Markdown file (.md) into the `docs/recipes/` directory.
***
```
cp file.md /path/to/your/local/DeepLabCut2023version/docs/recipes/
```
***
4. Add the path to your new_recipe.md under the Tutorials & Cookbook paths in the `DeepLabCut2023version/_toc.yml` file.
***
```
- file: docs/recipes/pose_cfg_file_breakdown
```
***
<img src="update_toc.png"></img>
5. Build your notebook into the DLC recipe book
***
```
jupyter book build /absolute/path/to/DeepLabCut2023version
```
***
Example:
***
```
jupyter book build /Users/rae/DeepLabCut2023version
```
***
The build log should look like below:
<img src="jupyter_book_build.png"></img>

6. Test locally by checking the `index.html` file in `/Users/rae/Desktop/DeepLabCut2023version/_build/html/`
<img src="build_result.png"></img>

7. When everything is a-okay, commit to Git. If not, edit your file and go back to step 1.

**`git status`** to check the local changes in your current project
```
git status
```
**`git add`** to add your file/s to the commit bin
```
git add [filename]
```

**`git commit`** to commit your staged file/s
```
git commit -m "commit message here; make it descriptive!"
```
**`git pull`** (optionally with `--rebase`) to update your local copy from the main branch.
```
git pull --rebase origin main
```
**`git push`** to push your changes to your branch on the remote.
```
git push
```
8. When everything is clear, open your pull request on the GitHub website: https://github.com/DeepLabCutAIResidency/DeepLabCut2023version

## All done!