Evaluation

Metrics

mantrap_evaluation.metrics.evaluate(solver: mantrap.solver.base.trajopt.TrajOptSolver, time_steps: int = 10, label: str = None, num_tests: int = 40, mean_df: bool = True, **solve_kwargs) → Tuple[pandas.core.frame.DataFrame, torch.Tensor, torch.Tensor]

Evaluate the solver performance in the currently set configuration using Monte Carlo testing.

For evaluation, solve an N-step trajectory optimization problem, passing any additional solver kwargs through to the solver. Then compute each metric value and return the results as a pd.DataFrame.

Parameters
  • solver – solver to evaluate (has to be of TrajOptSolver class).

  • time_steps – number of time-steps to solve for evaluation.

  • label – label of evaluation in resulting data-frame, by default the log-name of the solver.

  • num_tests – number of monte-carlo tests.

  • mean_df – return the mean data-frame over all experiments.

  • solve_kwargs – additional kwargs for the solve() method.
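A minimal usage sketch (the solver construction is omitted, and the variable names for the returned tensors are assumptions; only the return types are fixed by the signature above):

import mantrap_evaluation

def run_monte_carlo(solver):
    # solver: any already-constructed TrajOptSolver instance (construction
    # depends on the chosen environment and solver class).
    df, ego_trajectories, ado_trajectories = mantrap_evaluation.metrics.evaluate(
        solver=solver,
        time_steps=10,   # N-step optimization horizon per test
        num_tests=40,    # number of Monte Carlo runs
        mean_df=True,    # average the metric data-frame over all runs
    )
    return df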

mantrap_evaluation.metrics.metric_ado_effort(env: mantrap.environment.base.graph_based.GraphBasedEnvironment, ado_trajectories: torch.Tensor, **unused) → float

Determine the ado’s additional control effort introduced by the ego.

For calculating the additional control effort of the ado agents, their acceleration is approximated numerically and compared to the acceleration of the corresponding ado in a scene without the ego robot. The acceleration differences are then accumulated into the final score.

Parameters
  • ado_trajectories – trajectories of ados (num_ados, num_modes, t_horizon, 5).

  • env – simulation environment (is copied within the function, so not altered).
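The core computation can be sketched as follows; the state layout (x, y, vx, vy, t), the dropped mode dimension, and the provenance of the robot-free trajectories (re-predicted from the copied environment) are assumptions here:

import torch

def ado_effort_sketch(ado_traj: torch.Tensor, ado_traj_wo_ego: torch.Tensor, dt: float) -> float:
    # Both tensors: (num_ados, t_horizon, 5), velocities in columns 2:4
    # (assumed state layout). ado_traj_wo_ego stems from the same scene
    # simulated without the ego robot.
    acc = (ado_traj[:, 1:, 2:4] - ado_traj[:, :-1, 2:4]) / dt
    acc_wo = (ado_traj_wo_ego[:, 1:, 2:4] - ado_traj_wo_ego[:, :-1, 2:4]) / dt
    # Accumulate the acceleration differences over agents and time-steps.
    return (acc - acc_wo).norm(dim=-1).sum().item()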

mantrap_evaluation.metrics.metric_directness(ego_trajectory: torch.Tensor, goal: torch.Tensor, **unused) → float

Determine how directly the robot moves from its start to its goal state.

Metrics should be fairly independent of each other to be really meaningful; however, measuring the efficiency of the ego trajectory based only on the travel time from start to goal position is highly correlated with the ego’s control effort. Therefore, the projection of every ego velocity vector onto the goal direction is determined and normalized by the number of time-steps.

\[score = \frac{\sum_t \overrightarrow{s}_t \cdot \overrightarrow{v}_t}{T}\]
Parameters
  • ego_trajectory – trajectory of ego (t_horizon, 5).

  • goal – optimization goal state (may vary in size, but usually 2D position).
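A sketch of this score (positions assumed in columns 0:2 and velocities in columns 2:4; whether the velocity vectors are normalized as well is not specified above, so they are used raw here):

import torch

def directness_sketch(ego_trajectory: torch.Tensor, goal: torch.Tensor) -> float:
    pos, vel = ego_trajectory[:, 0:2], ego_trajectory[:, 2:4]
    s = goal[0:2] - pos                                   # direction to goal
    s = s / s.norm(dim=-1, keepdim=True).clamp(min=1e-6)  # unit goal direction
    # score = sum_t s_t . v_t / T, i.e. the mean projection of the velocity
    # onto the goal direction.
    return (s * vel).sum(dim=-1).mean().item()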

mantrap_evaluation.metrics.metric_ego_effort(ego_trajectory: torch.Tensor, max_acceleration: float = 2.0, **unused) → float

Determine the ego’s control effort (acceleration).

For calculating the control effort of the ego agent, approximate the acceleration by assuming it to be linear between two points in discrete time t0 and t1, i.e. a_t = (v_t - v_{t-1}) / dt. For normalization, the determined acceleration is then compared with the maximal possible control effort.

\[score = \frac{\sum_t a_t}{\sum_t a_{max}}\]
Parameters
  • ego_trajectory – trajectory of ego (t_horizon, 5).

  • max_acceleration – maximal (possible) acceleration of ego robot.
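A sketch of this computation (the state layout (x, y, vx, vy, t) is an assumption; dt is recovered from the time stamps in the last column):

import torch

def ego_effort_sketch(ego_trajectory: torch.Tensor, max_acceleration: float = 2.0) -> float:
    vel = ego_trajectory[:, 2:4]
    dt = float(ego_trajectory[1, 4] - ego_trajectory[0, 4])
    a_t = (vel[1:] - vel[:-1]) / dt       # linear acceleration between steps
    # Normalize by the maximal possible accumulated effort over the horizon.
    return (a_t.norm(dim=-1).sum() / (max_acceleration * a_t.shape[0])).item()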

mantrap_evaluation.metrics.metric_extra_time(ego_trajectory: torch.Tensor, env: mantrap.environment.base.graph_based.GraphBasedEnvironment, **unused) → float

Determine extra time to reach goal destination compared to direct path.

Compare the travel time of the derived ego trajectory with the time it would take to reach the goal, defined as the last point of the ego trajectory, on a direct path. To compute the direct path, solve the simplified optimization formulation consisting of the goal objective and dynamics constraints only. Parameters other than the goal position do not matter for solving this simplified formulation.

Parameters
  • ego_trajectory – trajectory of ego (t_horizon, 5).

  • env – simulation environment.
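For intuition only, a crude proxy that replaces the simplified optimization by a straight-line path at maximal speed (a simplification of the method described above, not the actual implementation; max_speed is an assumed robot parameter):

import torch

def extra_time_proxy(ego_trajectory: torch.Tensor, max_speed: float) -> float:
    # Goal = last point of the ego trajectory; time stamps in column 4 (assumed).
    pos = ego_trajectory[:, 0:2]
    dt = float(ego_trajectory[1, 4] - ego_trajectory[0, 4])
    travel_time = dt * (ego_trajectory.shape[0] - 1)
    direct_time = (pos[-1] - pos[0]).norm().item() / max_speed  # lower bound
    return travel_time - direct_time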

mantrap_evaluation.metrics.metric_final_distance(ego_trajectory: torch.Tensor, goal: torch.Tensor, **unused) → float

Determine the final distance between ego and its goal position.

For normalization, divide the final distance by the initial distance. Scores larger than 1 mean that the robot ends up more distant from the goal than it was at the beginning.

\[score = ||x_T - g||_2 / ||x_0 - g||_2\]
Parameters
  • ego_trajectory – trajectory of ego (t_horizon, 5).

  • goal – optimization goal state (may vary in size, but usually 2D position).
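This metric maps directly to a few lines (positions assumed in columns 0:2):

import torch

def final_distance_sketch(ego_trajectory: torch.Tensor, goal: torch.Tensor) -> float:
    # score = ||x_T - g||_2 / ||x_0 - g||_2
    x_0, x_T = ego_trajectory[0, 0:2], ego_trajectory[-1, 0:2]
    g = goal[0:2]
    return ((x_T - g).norm() / (x_0 - g).norm().clamp(min=1e-6)).item()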

mantrap_evaluation.metrics.metric_minimal_distance(ego_trajectory: torch.Tensor, ado_trajectories: torch.Tensor, num_inter_points: int = 100, **unused) → float

Determine the minimal distance between the robot and any agent (minimal separation distance).

The function therefore expects a robot trajectory as well as positions for every ado at every point in time, in order to determine the minimal distance in continuous time. To transform the discrete-time trajectories into continuous time, it is assumed that the robot as well as the other agents move linearly, as single integrators, i.e. neglecting accelerations, from one discrete time-step to the next, so that their positions can be interpolated in between using a first-order interpolation method.

Parameters
  • ego_trajectory – trajectory of ego (t_horizon, 5).

  • ado_trajectories – trajectories of ados (num_ados, num_modes, t_horizon, 5).

  • num_inter_points – number of interpolation points between each time-step.
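A sketch of the interpolation idea, simplified to single-mode ado position trajectories (the mode dimension and the full 5D state vectors are dropped here; the shapes are assumptions):

import torch

def minimal_distance_sketch(
    ego_positions: torch.Tensor,       # (t_horizon, 2)
    ado_positions: torch.Tensor,       # (num_ados, t_horizon, 2)
    num_inter_points: int = 100,
) -> float:
    # First-order (linear) interpolation between consecutive positions, i.e.
    # single-integrator motion between discrete time-steps.
    def interpolate(pos: torch.Tensor) -> torch.Tensor:
        alphas = torch.linspace(0.0, 1.0, num_inter_points)  # (k,)
        start, end = pos[..., :-1, None, :], pos[..., 1:, None, :]
        dense = start + alphas[None, :, None] * (end - start)
        return dense.flatten(-3, -2)   # (..., (t_horizon - 1) * k, 2)

    ego_dense = interpolate(ego_positions)                 # (T*k, 2)
    ado_dense = interpolate(ado_positions)                 # (num_ados, T*k, 2)
    return (ado_dense - ego_dense[None]).norm(dim=-1).min().item()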

Dataset API

Mantrap Evaluation Dataset API

For broad testing, the mantrap project provides an API for the integration of custom datasets.

Currently, the ETH Pedestrian dataset is supported for automatic evaluation; however, other datasets can be added easily with the mantrap evaluation API. To continue with the ETH Pedestrian dataset, download it first by following these steps:

cd mantrap_evaluation/datasets/eth
bash download.bash

In order to add other scenarios for evaluation, you have to add a function with the following structure and register it in the dictionary of scenarios in __init__.py:

def foo(
    env_type: mantrap.environment.base.GraphBasedEnvironment.__class__,
) -> Tuple[mantrap.environment.base.GraphBasedEnvironment, torch.Tensor, Union[Dict[str, torch.Tensor], None]]:

The function basically defines the initial state of the ego (robot) as well as the initial states and state histories of the ados (pedestrians) in the scene, then calls the create_environment() method defined in api.py to create an environment, which forms the first return argument. The second argument is the ego state (position) for the ego robot, the third the ground-truth positions for the ados in the scene, i.e. how they would have moved if there were no robot in the scene. Given a perfect behaviour prediction model and a perfect optimization, the ado trajectories conditioned on the robot trajectory should approach these ground-truth trajectories.

An easy-to-understand example of how to use the API can be found in custom/haruki.py.
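A schematic skeleton of such a scenario function (all concrete values, the dictionary key, and the import path and call signature of create_environment() are illustrative assumptions; consult api.py and custom/haruki.py for the actual pattern):

from typing import Dict, Tuple, Union

import torch
import mantrap
# Import path assumed; create_environment() is defined in api.py.
from mantrap_evaluation.api import create_environment


def my_scenario(
    env_type: mantrap.environment.base.GraphBasedEnvironment.__class__,
) -> Tuple[mantrap.environment.base.GraphBasedEnvironment, torch.Tensor, Union[Dict[str, torch.Tensor], None]]:
    # Initial ego (robot) position - illustrative values only.
    ego_position = torch.tensor([-5.0, 0.0])
    # State histories of the ados (pedestrians): one (t, 5) tensor per agent.
    ado_histories = {"ped_0": torch.tensor([[0.0, -3.0, 0.0, 1.0, 0.0]])}
    # Ground-truth ado trajectories, i.e. how the pedestrians would have moved
    # without a robot in the scene; None if no ground truth is available.
    ado_ground_truth = None
    # Hypothetical call - the exact arguments of create_environment() are not
    # reproduced here, consult api.py.
    env = create_environment(env_type, ego_position, ado_histories)
    return env, ego_position, ado_ground_truth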