
Commit 15979db

[Doc] Add custom metrics document (#2799)
1 parent 9dbe415 commit 15979db

2 files changed: +161 -1 lines changed
+80
@@ -1 +1,81 @@

# Add New Metrics

## Develop with the source code of MMSegmentation

Here we show how to develop a new metric, using `CustomMetric` as an example.

1. Create a new file `mmseg/evaluation/metrics/custom_metric.py`.

```python
from typing import List, Sequence

from mmengine.evaluator import BaseMetric

from mmseg.registry import METRICS


@METRICS.register_module()
class CustomMetric(BaseMetric):

    def __init__(self, arg1, arg2):
        """
        The metric first processes each batch of data_samples and predictions,
        and appends the processed results to the results list. Then it
        collects all results together from all ranks if distributed training
        is used. Finally, it computes the metrics of the entire dataset.
        """

    def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
        pass

    def compute_metrics(self, results: list) -> dict:
        pass

    def evaluate(self, size: int) -> dict:
        pass
```

In the above example, `CustomMetric` is a subclass of `BaseMetric`. It has three methods: `process`, `compute_metrics` and `evaluate`.

- `process()` processes one batch of data samples and predictions. The processed results are stored in `self.results`, which will be used to compute the metrics after all the data samples have been processed. Please refer to the [MMEngine documentation](https://github.com/open-mmlab/mmengine/blob/main/docs/en/design/evaluation.md) for more details.

- `compute_metrics()` is used to compute the metrics from the processed results.

- `evaluate()` is an interface that computes the metrics and returns the results. It is called by `ValLoop` or `TestLoop` in the `Runner`. In most cases you don't need to override this method, but you can do so if you want to perform some extra work.

**Note:** You can find the details of how the `Runner` calls the `evaluate()` method [here](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L366). The `Runner` is the executor of the training and testing process; you can find more details about it in the [engine document](./engine.md).
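
To make these methods concrete, below is a minimal sketch of a custom metric that accumulates per-image pixel accuracy. It is an illustration rather than an official implementation: the class name `PixelAccuracy` is made up here, and it assumes each data sample carries `pred_sem_seg.data` and `gt_sem_seg.data` tensors, as in a typical MMSegmentation `SegDataSample`, so verify those keys against your own pipeline.

```python
# A minimal sketch, not part of MMSegmentation. It assumes the usual
# SegDataSample layout, with predictions in `pred_sem_seg.data` and
# labels in `gt_sem_seg.data`; adapt the keys to your own setup.
from typing import Sequence

from mmengine.evaluator import BaseMetric

from mmseg.registry import METRICS


@METRICS.register_module()
class PixelAccuracy(BaseMetric):
    """Toy metric that reports the overall pixel accuracy of a dataset."""

    def __init__(self, ignore_index: int = 255, **kwargs):
        super().__init__(**kwargs)
        self.ignore_index = ignore_index

    def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
        # Append per-image (correct, valid) pixel counts; `self.results`
        # is gathered from all ranks before `compute_metrics()` runs.
        for data_sample in data_samples:
            pred = data_sample['pred_sem_seg']['data'].squeeze()
            label = data_sample['gt_sem_seg']['data'].squeeze().to(pred)
            valid = label != self.ignore_index
            correct = (pred[valid] == label[valid]).sum().item()
            self.results.append((int(correct), int(valid.sum().item())))

    def compute_metrics(self, results: list) -> dict:
        correct = sum(r[0] for r in results)
        total = max(sum(r[1] for r in results), 1)
        # The keys of the returned dict become the logged metric names.
        return dict(pixel_accuracy=correct / total)
```

Here `evaluate()` is inherited from `BaseMetric` unchanged, which is what you want in most cases.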

2. Import the new metric in `mmseg/evaluation/metrics/__init__.py`.

```python
from .custom_metric import CustomMetric
__all__ = ['CustomMetric', ...]
```

3. Add the new metric to the config file. (A quick way to verify that the registered name resolves to your class is sketched after this step.)

```python
val_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
test_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
```
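
As a quick sanity check, you can build the metric from the same dict you put in the config and confirm that the registered name resolves to your class. The snippet below is only a sketch: it assumes `CustomMetric` has already been imported (for example via step 2), and `arg1=1, arg2=2` are placeholder values.

```python
from mmseg.registry import METRICS

# Importing the package that defines `CustomMetric` runs the
# `@METRICS.register_module()` decorator and fills the registry.
import mmseg.evaluation  # noqa: F401

metric = METRICS.build(dict(type='CustomMetric', arg1=1, arg2=2))
print(type(metric).__name__)  # CustomMetric
```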

## Develop with the released version of MMSegmentation

The above example shows how to develop a new metric with the source code of MMSegmentation. If you want to develop a new metric with the released version of MMSegmentation, you can follow these steps.

1. Create a new file `/Path/to/metrics/custom_metric.py` and implement the `process`, `compute_metrics` and `evaluate` methods; overriding `evaluate` is optional.

2. Import the new metric in your code or config file. (An example package layout is sketched after this step.)

```python
from path.to.metrics import CustomMetric
```

or

```python
custom_imports = dict(imports=['/Path/to/metrics'], allow_failed_imports=False)

val_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
test_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
```
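
Both import styles above assume that the metrics directory is an importable Python package reachable from your working directory or `PYTHONPATH`; note that `custom_imports` entries are imported as Python module names. The layout below is only an assumed example that mirrors the placeholder paths above.

```python
# Assumed example layout (adapt the names to your project):
#
#   path/to/metrics/
#   ├── __init__.py        # re-exports CustomMetric
#   └── custom_metric.py   # the implementation from step 1
#
# Contents of path/to/metrics/__init__.py:
from .custom_metric import CustomMetric

__all__ = ['CustomMetric']
```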

+81 -1
@@ -1 +1,81 @@

# Add New Metrics

## Develop with the source code of MMSegmentation

Here we use `CustomMetric` as an example to show how to develop a new metric.

1. Create a new file `mmseg/evaluation/metrics/custom_metric.py`.

```python
from typing import List, Sequence

from mmengine.evaluator import BaseMetric

from mmseg.registry import METRICS


@METRICS.register_module()
class CustomMetric(BaseMetric):

    def __init__(self, arg1, arg2):
        """
        The metric first processes each batch of data_samples and predictions,
        and appends the processed results to the results list. Then it
        collects all results together from all ranks if distributed training
        is used. Finally, it computes the metrics of the entire dataset.
        """

    def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
        pass

    def compute_metrics(self, results: list) -> dict:
        pass

    def evaluate(self, size: int) -> dict:
        pass
```

In the above example, `CustomMetric` is a subclass of `BaseMetric`. It has three methods: `process`, `compute_metrics` and `evaluate`.

- `process()` processes one batch of data samples and predictions. The processed results need to be explicitly stored in `self.results`, which will be used to compute the metrics after all the data samples have been processed. For more details, please refer to the [MMEngine documentation](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/design/evaluation.md).

- `compute_metrics()` is used to compute the metrics from the processed results.

- `evaluate()` is an interface that computes the metrics and returns the results. It is called by `ValLoop` or `TestLoop` in the `Runner`. In most cases you don't need to override this method, but you can do so if you want to perform some extra work.

**Note:** You can find how the `Runner` calls the `evaluate()` method [here](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L366). The `Runner` is the executor of the training and testing process; you can find more details about it in the [engine document](./engine.md).

2. Import the new metric in `mmseg/evaluation/metrics/__init__.py`.

```python
from .custom_metric import CustomMetric
__all__ = ['CustomMetric', ...]
```

3. Set up the new metric in the config file.

```python
val_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
test_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
```

## Develop with the released version of MMSegmentation

The above example shows how to develop a new metric with the source code of MMSegmentation. If you want to develop a new metric with the released version of MMSegmentation, you can follow these steps.

1. Create a new file `/Path/to/metrics/custom_metric.py` and implement the `process`, `compute_metrics` and `evaluate` methods; the `evaluate` method is optional.

2. Import the new metric in your code or config file.

```python
from path.to.metrics import CustomMetric
```

or

```python
custom_imports = dict(imports=['/Path/to/metrics'], allow_failed_imports=False)

val_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
test_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
```
