fragile.optimize.evotorch#

This module implements an interface with the evotorch library.

Module Contents#

Classes#

EvotorchEnv

This environment implements an interface with the evotorch library.

class fragile.optimize.evotorch.EvotorchEnv(algorithm, function=None, bounds=None, **kwargs)#

Bases: fragile.core.env.Function

This environment implements an interface with the evotorch library.

When provided with an instance of an evotorch SearchAlgorithm, it wraps it and allows all the fragile features, such as plotting and custom policies, to be used on top of evotorch (see the usage sketch below the parameter list).

Parameters
  • algorithm (evotorch.algorithms.searchalgorithm.SearchAlgorithm) –

  • function (Optional[callable]) –

  • bounds (Optional[judo.Bounds]) –
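A minimal usage sketch, assuming evotorch's SNES searcher and a quickstart-style sphere objective; the `sphere` function and the hyperparameter values are illustrative and not part of this module:

```python
import torch
from evotorch import Problem
from evotorch.algorithms import SNES

from fragile.optimize.evotorch import EvotorchEnv


def sphere(x: torch.Tensor) -> torch.Tensor:
    # Benchmark fitness: sum of squares of the solution values.
    return torch.sum(x ** 2)


# Define the optimization problem and the evotorch searcher that will solve it.
problem = Problem("min", sphere, solution_length=10, initial_bounds=(-5.0, 5.0))
searcher = SNES(problem, stdev_init=1.0)

# Wrap the searcher so fragile features (plotting, custom policies, ...) run on top of it.
env = EvotorchEnv(algorithm=searcher)
```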

default_inputs#
default_outputs = ['observs', 'rewards', 'oobs']#
property algorithm#

Return the evotorch SearchAlgorithm instance used by this environment.

Return type

evotorch.algorithms.searchalgorithm.SearchAlgorithm

property population#

Access the SolutionBatch instance used by the evotorch SearchAlgorithm.

Return type

evotorch.core.SolutionBatch

property problem#

Access the Problem instance used by the evotorch SearchAlgorithm.

Return type

evotorch.core.Problem

property dtype#

Access the dtype used by the evotorch SearchAlgorithm for the solution values.

property eval_dtype#

Access the dtype used by the SearchAlgorithm for the solution evaluations.

property solution_length#

Access the length of the solution used by the evotorch SearchAlgorithm.

Return type

int

_get_bounds()#

Initialize the Bounds instance used by this environment.

Extract all the information about the dimensionality and data type of the solutions from the problem instance used by the evotorch SearchAlgorithm.

Return type

judo.Bounds
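A rough sketch of that idea, assuming the wrapped Problem was created with explicit bounds so that `lower_bounds`, `upper_bounds`, and `solution_length` are available (as in evotorch); the actual construction in the library may differ:

```python
from judo import Bounds


def _get_bounds_sketch(problem) -> Bounds:
    # Illustrative helper, not the library's exact code. Assumes the evotorch
    # Problem defines hard bounds, so lower_bounds and upper_bounds are tensors
    # of length solution_length.
    low = problem.lower_bounds.cpu().numpy()
    high = problem.upper_bounds.cpu().numpy()
    return Bounds(low=low, high=high)
```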

_get_function()#

Return a function that sets the values of the evotorch SolutionBatch with the provided points and iterates the evotorch SearchAlgorithm to obtain new solutions.
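A sketch of that wrapping pattern, assuming the searcher exposes its current population as a SolutionBatch whose values can be overwritten through `access_values()`; the helper names are illustrative and the real implementation may return its results differently:

```python
def _make_step_function(searcher):
    # Illustrative helper, not the library's exact code.
    def evotorch_step(points):
        # Write the proposed points into the searcher's current SolutionBatch.
        searcher.population.access_values()[:] = points
        # Advance the evotorch SearchAlgorithm by one generation.
        searcher.step()
        # Read back the newly generated solutions and their evaluations.
        return searcher.population.values, searcher.population.evals

    return evotorch_step
```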

step(actions, observs, **kwargs)#

Add the target actions to the observations to obtain the new points, and evaluate their reward and boundary conditions.

Returns

Dictionary containing the information of the new points evaluated.

{"observs": new_points, "rewards": scalar array,              "oobs": boolean array}

reset(inplace=True, root_walker=None, states=None, **kwargs)#

Reset the Function to the start of a new episode and update its internal data.

Parameters
  • inplace (bool) –

  • root_walker (Optional[fragile.core.typing.StateData]) –

  • states (Optional[fragile.core.typing.StateData]) –

Return type

Union[None, fragile.core.typing.StateData]