# Add New Metrics

## Develop with the source code of MMSegmentation

Here we show how to develop a new metric, using `CustomMetric` as an example.

1. Create a new file `mmseg/evaluation/metrics/custom_metric.py`.

   ```python
   from typing import List, Sequence

   from mmengine.evaluator import BaseMetric

   from mmseg.registry import METRICS


   @METRICS.register_module()
   class CustomMetric(BaseMetric):
       """The metric first processes each batch of data_samples and
       predictions, and appends the processed results to the results list.
       Then it collects all results together from all ranks if distributed
       training is used. Finally, it computes the metrics of the entire
       dataset.
       """

       def __init__(self, arg1, arg2):
           pass

       def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
           pass

       def compute_metrics(self, results: list) -> dict:
           pass

       def evaluate(self, size: int) -> dict:
           pass
   ```

   In the above example, `CustomMetric` is a subclass of `BaseMetric`. It has three methods: `process`, `compute_metrics` and `evaluate`.

   - `process()` processes one batch of data samples and predictions. The processed results are stored in `self.results`, which will be used to compute the metrics after all the data samples have been processed. Please refer to the [MMEngine documentation](https://github.com/open-mmlab/mmengine/blob/main/docs/en/design/evaluation.md) for more details.

   - `compute_metrics()` computes the metrics from the processed results.

   - `evaluate()` is an interface to compute the metrics and return the results. It is called by `ValLoop` or `TestLoop` in the `Runner`. In most cases you don't need to override this method, but you can override it if you want to do some extra work.

   **Note:** You can find the details of how the `Runner` calls the `evaluate()` method [here](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L366). The `Runner` is the executor of the training and testing process; you can find more details about it in the [engine document](./engine.md).
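To make the skeleton above concrete, here is a framework-free sketch of the `process()`/`compute_metrics()` contract, computing per-pixel accuracy over plain Python lists. `PixelAccuracySketch` and its sample keys are illustrative stand-ins, not MMSegmentation APIs; a real metric would subclass `BaseMetric` and read tensors from the data samples.

```python
from typing import Dict, List, Sequence


class PixelAccuracySketch:
    """Illustrative stand-in for a BaseMetric subclass (not a real API)."""

    def __init__(self) -> None:
        # BaseMetric provides self.results; we mimic it here.
        self.results: List[dict] = []

    def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
        # Store only the counts needed for the final metric,
        # not the raw prediction maps, to keep memory usage low.
        for sample in data_samples:
            pred = sample['pred_sem_seg']
            gt = sample['gt_sem_seg']
            correct = sum(int(p == g) for p, g in zip(pred, gt))
            self.results.append({'correct': correct, 'total': len(gt)})

    def compute_metrics(self, results: List[dict]) -> Dict[str, float]:
        # Aggregate the per-batch counts collected by process().
        correct = sum(r['correct'] for r in results)
        total = sum(r['total'] for r in results)
        return {'accuracy': correct / total}


metric = PixelAccuracySketch()
metric.process({}, [{'pred_sem_seg': [0, 1, 1], 'gt_sem_seg': [0, 1, 0]}])
metric.process({}, [{'pred_sem_seg': [1, 1], 'gt_sem_seg': [1, 1]}])
print(metric.compute_metrics(metric.results))  # {'accuracy': 0.8}
```

This accumulate-then-aggregate split is what makes distributed evaluation possible: each rank fills its own `results` list, and the collected lists are concatenated before `compute_metrics()` runs.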

2. Import the new metric in `mmseg/evaluation/metrics/__init__.py`.

   ```python
   from .custom_metric import CustomMetric
   __all__ = ['CustomMetric', ...]
   ```

3. Add the new metric to the config file.

   ```python
   val_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
   test_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
   ```

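If you want to keep MMSegmentation's built-in evaluation alongside your own, the evaluator config also accepts a list of metrics; the `Runner` computes all of them and merges their result dicts. A sketch, using the built-in `IoUMetric` (check its signature in your installed version) next to the new `CustomMetric`, with the `arg1`/`arg2` values still placeholders as above:

```python
# Run the built-in IoUMetric together with the new CustomMetric.
val_evaluator = [
    dict(type='IoUMetric', iou_metrics=['mIoU']),
    dict(type='CustomMetric', arg1=xxx, arg2=xxx),
]
test_evaluator = val_evaluator
```
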
## Develop with the released version of MMSegmentation

The above example shows how to develop a new metric with the source code of MMSegmentation. If you want to develop a new metric with the released version of MMSegmentation, you can follow the steps below.

1. Create a new file `/Path/to/metrics/custom_metric.py` and implement the `process`, `compute_metrics` and `evaluate` methods; overriding `evaluate` is optional.

2. Import the new metric in your code or config file.

   ```python
   from path.to.metrics import CustomMetric
   ```

   or

   ```python
   custom_imports = dict(imports=['/Path/to/metrics'], allow_failed_imports=False)

   val_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
   test_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
   ```
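
Whichever workflow you choose, you rarely need to override `evaluate()`: the version inherited from `BaseMetric` already wires `process()` and `compute_metrics()` together. Ignoring distributed result collection, its behavior is roughly this simplified, framework-free sketch (all names here are illustrative, not real APIs):

```python
from typing import Dict, List, Sequence


class MeanValueSketch:
    """Illustrative stand-in for a BaseMetric subclass (not a real API)."""

    def __init__(self) -> None:
        self.results: List[dict] = []

    def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
        for sample in data_samples:
            self.results.append({'value': sample['value']})

    def compute_metrics(self, results: List[dict]) -> Dict[str, float]:
        return {'mean': sum(r['value'] for r in results) / len(results)}

    def evaluate(self, size: int) -> Dict[str, float]:
        # The real BaseMetric.evaluate() also gathers results from all
        # ranks and trims duplicated padding samples down to `size`;
        # here we only aggregate and reset the accumulated state.
        metrics = self.compute_metrics(self.results[:size])
        self.results.clear()  # ready for the next evaluation round
        return metrics


metric = MeanValueSketch()
metric.process({}, [{'value': 1.0}, {'value': 3.0}])
print(metric.evaluate(size=2))  # {'mean': 2.0}
```

Resetting `self.results` at the end of `evaluate()` matters: the same metric object is reused across validation rounds, so stale results would otherwise leak into the next epoch's numbers.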