hat.metrics¶
Metrics widely used for different datasets in HAT.
Metrics¶
| Accuracy | Computes accuracy classification score. |
| TopKAccuracy | Computes top k predictions accuracy. |
| COCODetectionMetric | Evaluation in COCO protocol. |
| LossShow | Show loss. |
| MeanIOU | Evaluation segmentation results. |
| EvalMetric | Base class for all evaluation metrics. |
API Reference¶
- class hat.metrics.Accuracy(axis=1, name='accuracy')¶
Computes accuracy classification score.
- Parameters
axis (int) – The axis that represents classes
name (str) – Name of this metric instance for display.
- update(labels, preds)¶
Override this method to update the state variables.
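Example: a minimal usage sketch. The exact tensor shapes Accuracy accepts are an assumption here, following the common convention of (batch,) integer labels and (batch, classes) scores with classes on axis=1:
    import torch
    from hat.metrics import Accuracy

    metric = Accuracy(axis=1, name="accuracy")
    labels = torch.tensor([0, 1, 2])           # ground-truth class indices
    preds = torch.tensor([[0.90, 0.05, 0.05],  # class scores, classes on axis=1
                          [0.10, 0.80, 0.10],
                          [0.20, 0.20, 0.60]])
    metric.update(labels, preds)
    name, value = metric.get()                 # e.g. ("accuracy", 1.0)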
- class hat.metrics.AccuracySeg(name='accuracy', axis=1)¶
# TODO(min.du, 0.5): merged with Accuracy #.
- update(output)¶
Override this method to update the state variables.
- class hat.metrics.COCODetectionMetric(ann_file: str, val_interval: int = 1, name: str = 'COCOMeanAP', save_prefix: str = './WORKSPACE/results', adas_eval_task: Optional[str] = None, use_time: bool = True, cleanup: bool = False)¶
Evaluation in COCO protocol.
- Parameters
ann_file – validation data annotation json file path.
val_interval – evaluation interval.
name – name of this metric instance for display.
save_prefix – path to save result.
adas_eval_task – task name for adas-eval, such as ‘vehicle’, ‘person’ and so on.
use_time – whether to include the current time in the name of saved results.
cleanup – whether to clean up the saved results when the process ends.
- Raises
RuntimeError – fail to write json to disk.
- get()¶
Get evaluation metrics.
- reset()¶
Reset the metric state variables to their default value.
If (and only if) there are state variables not registered with ‘self.add_state’ that need to be regularly reset to their default values, extend this method in subclasses.
- update(output: Dict)¶
Update internal buffer with latest predictions.
Note that the statistics are not available until you call self.get() to return the metrics.
- Parameters
output – A dict of model output which includes det results and image infos.
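Example: a sketch of a typical evaluation loop. The annotation path is hypothetical, `model` and `val_loader` are placeholders, and the exact keys expected in the output dict depend on the detection model:
    from hat.metrics import COCODetectionMetric

    metric = COCODetectionMetric(
        ann_file="data/annotations/instances_val2017.json",  # hypothetical path
        save_prefix="./WORKSPACE/results",
        cleanup=True,  # delete saved result files when the process ends
    )

    # `model` and `val_loader` stand in for a detection model and its
    # validation DataLoader; `output` must hold det results and image infos.
    for batch in val_loader:
        output = model(batch)
        metric.update(output)

    names, values = metric.get()  # COCO-protocol AP summary
    metric.reset()                # clear buffers before the next evaluation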
- class hat.metrics.EvalMetric(name: Union[List[str], str], process_group: Optional[torch._C._distributed_c10d.ProcessGroup] = None, stack_after_sync: bool = True, warn_without_compute: bool = True)¶
Base class for all evaluation metrics.
Built on top of torchmetrics.metric.Metric, this base class introduces the name attribute and a name-value format output (the get method). It also makes it possible to synchronize state tensors of different shapes across devices (by setting stack_after_sync to False), to support AP-like metrics.
Note
This is a base class that provides common metric interfaces. One should not use this class directly, but inherit it to create new metric classes instead.
- Parameters
name – Name of this metric instance for display.
process_group – Specify the process group on which synchronization is called. Default: None (which selects the entire world)
stack_after_sync – Whether to stack state tensors synchronized across devices before reduction. Set it to False when the shape of a state tensor may vary across devices; otherwise a shape-mismatch error will be raised during synchronization. Default value is True.
warn_without_compute – Whether to log a warning if self.compute is not called in self.get. Since synchronization among devices is executed in self.compute, this value reflects whether the metric supports distributed computation.
- compute() → Union[float, List[float]]¶
Override this method to compute final results from metric states.
All states variables registered with ‘self.add_state’ are synchronized across devices before the execution of this method.
- get() → Tuple[Union[str, List[str]], Union[float, List[float]]]¶
Get current evaluation result.
To skip the synchronization among devices, please override this method and calculate results without calling ‘self.compute()’.
- Returns
names: Name(s) of the metrics. values: Value(s) of the evaluations.
- Return type
A (names, values) tuple
- get_name_value()¶
Return zipped name and value pairs.
- Returns
A list of (name, value) tuples.
- Return type
list of tuples
- reset() → None¶
Reset the metric state variables to their default value.
If (and only if) there are state variables not registered with ‘self.add_state’ that need to be regularly reset to their default values, extend this method in subclasses.
- abstract update(*_: Any, **__: Any) → None¶
Override this method to update the state variables.
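Example: since EvalMetric builds on torchmetrics.metric.Metric, a subclass registers its state with self.add_state and implements update and compute. A minimal sketch, assuming torchmetrics-style add_state semantics; the metric itself is illustrative and not part of HAT:
    import torch
    from hat.metrics import EvalMetric

    class MeanAbsoluteError(EvalMetric):
        """Illustrative metric, not part of HAT."""

        def __init__(self, name="mae"):
            super().__init__(name=name)
            # States registered via add_state are synchronized across devices
            # before compute() and restored to defaults by the default reset().
            self.add_state("abs_sum", default=torch.tensor(0.0), dist_reduce_fx="sum")
            self.add_state("count", default=torch.tensor(0.0), dist_reduce_fx="sum")

        def update(self, labels, preds):
            self.abs_sum += (preds - labels).abs().sum()
            self.count += labels.numel()

        def compute(self):
            return (self.abs_sum / self.count).item()

    metric = MeanAbsoluteError()
    metric.update(torch.tensor([1.0, 2.0]), torch.tensor([1.5, 2.5]))
    print(metric.get())  # ("mae", 0.5)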
- class hat.metrics.LossShow(name: str = 'loss', norm: bool = True)¶
Show loss.
# TODO(min.du, 0.1): a better class name is required #
- Parameters
name – Name of this metric instance for display.
norm – Whether to normalize the loss when its size is bigger than 1. If True, the mean loss is computed; otherwise the sum. Default: True.
- get()¶
Get current evaluation result.
To skip the synchronization among devices, please override this method and calculate results without calling ‘self.compute()’.
- Returns
names: Name(s) of the metrics. values: Value(s) of the evaluations.
- Return type
A (names, values) tuple
- reset()¶
Reset the metric state variables to their default value.
If (and only if) there are state variables not registered with ‘self.add_state’ that need to be regularly reset to their default values, extend this method in subclasses.
- update(loss: Union[torch.Tensor, Dict[str, torch.Tensor]])¶
Override this method to update the state variables.
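Example: a usage sketch. Passing a dict of per-head losses is inferred from the update signature, and the keys shown are made up:
    import torch
    from hat.metrics import LossShow

    loss_metric = LossShow(name="loss", norm=True)
    loss_metric.update(torch.tensor(0.42))  # a single scalar loss
    # per-head losses; the keys here are hypothetical
    loss_metric.update({"cls_loss": torch.tensor(0.30),
                        "reg_loss": torch.tensor(0.12)})
    names, values = loss_metric.get()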
- class hat.metrics.MeanIOU(seg_class: List[str], name: str = 'MeanIOU', ignore_index: int = 255, global_ignore_index: Union[Sequence, int] = 255, verbose: bool = False)¶
Evaluation segmentation results.
- Parameters
seg_class (list(str)) – A list of classes the segmentation dataset includes; the order should be the same as in the label.
name (str) – Name of this metric instance for display, also used as monitor params for Checkpoint.
ignore_index (int) – The label index that will be ignored in evaluation.
global_ignore_index (list, int) – The label index (or indices) that will be ignored in global evaluation metrics such as mIoU, mAcc, and aAcc. A list of label indices is supported.
verbose (bool) – Whether to return verbose values for aidi eval. Default is False.
- compute()¶
Get evaluation metrics.
- update(label: torch.Tensor, preds: Union[Sequence[torch.Tensor], torch.Tensor])¶
Update internal buffer with latest predictions.
Note that the statistics are not available until you call self.get() to return the metrics.
- Parameters
preds – Model output (predictions).
label – Ground truth.
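Example: a minimal sketch. The class names are made up, and feeding preds as an index map of the same shape as the label is an assumption; per the signature, preds may also be a sequence of tensors:
    import torch
    from hat.metrics import MeanIOU

    metric = MeanIOU(seg_class=["road", "car", "person"], ignore_index=255)
    label = torch.randint(0, 3, (1, 32, 32))  # ground-truth index map
    preds = torch.randint(0, 3, (1, 32, 32))  # predicted index map (shape assumed)
    metric.update(label, preds)
    names, values = metric.get()              # per-class IoU plus mIoU/mAcc/aAcc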
- class hat.metrics.TopKAccuracy(top_k, name='top_k_accuracy')¶
Computes top k predictions accuracy.
TopKAccuracy differs from Accuracy in that it considers the prediction to be True as long as the ground truth label is in the top k predicted labels. If top_k = 1, then TopKAccuracy is identical to Accuracy.
- Parameters
top_k (int) – Number of top predictions within which the target counts as correct.
name (str) – Name of this metric instance for display.
- update(labels, preds)¶
Override this method to update the state variables.
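Example: a usage sketch mirroring the Accuracy example above, with the same assumed tensor shapes:
    import torch
    from hat.metrics import TopKAccuracy

    metric = TopKAccuracy(top_k=2)
    labels = torch.tensor([0, 1])
    preds = torch.tensor([[0.5, 0.4, 0.1],   # label 0 is the top-1 prediction
                          [0.6, 0.3, 0.1]])  # label 1 is within the top 2
    metric.update(labels, preds)
    name, value = metric.get()               # both rows counted as correct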