tensortrade.strategies.tensorforce_trading_strategy module

class tensortrade.strategies.tensorforce_trading_strategy.TensorforceTradingStrategy(environment, agent_spec, save_best_agent=False, **kwargs)[source]

Bases: tensortrade.strategies.trading_strategy.TradingStrategy

A trading strategy capable of self-tuning, training, and evaluating with Tensorforce.

__init__(environment, agent_spec, save_best_agent=False, **kwargs)[source]
Parameters
  • environment (TradingEnvironment) – A TradingEnvironment instance for the agent to trade within.

  • agent_spec – A Tensorforce agent or agent specification dict.

  • save_best_agent (bool, optional) – Whether the runner should automatically save the best-performing agent. Defaults to False.

  • kwargs (optional) – Optional keyword arguments to adjust the strategy.
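For illustration, the agent specification can be passed as a Tensorforce-style dict. The sketch below is a minimal assumption: the "ppo" agent type and the hyper-parameter values are illustrative, not taken from this page.

```python
# Hypothetical Tensorforce-style agent specification; the "ppo" type and
# the hyper-parameter values are illustrative assumptions.
agent_spec = {
    "type": "ppo",           # Tensorforce agent type
    "learning_rate": 1e-3,   # optimizer step size
    "batch_size": 10,        # episodes per update
}

# The strategy would then be constructed roughly as:
# strategy = TensorforceTradingStrategy(environment=environment,
#                                       agent_spec=agent_spec,
#                                       save_best_agent=True)
```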

property agent

A Tensorforce Agent instance that will learn the strategy.

Return type

tensorforce.agents.Agent

property max_episode_timesteps

The maximum timesteps per episode.

Return type

int

restore_agent(directory, filename=None)[source]

Deserialize the strategy’s learning agent from a file.

Parameters
  • directory (str) – The str path of the directory the agent checkpoint is stored in.

  • filename (optional) – The str path of the file the agent specification is stored in. The .json file extension will be automatically appended if not provided.

run(steps=None, episodes=None, evaluation=False, episode_callback=None)[source]

Run the agent within the environment, either training it or evaluating its performance.

Parameters
  • steps (Optional[int]) – The number of steps to run the agent within the environment. Required if episodes is not passed.

  • episodes (Optional[int]) – The number of episodes to run the agent within the environment. Required if steps is not passed.

  • evaluation (bool) – Whether the agent should be evaluated, rather than trained, on the environment it is running in. Defaults to False.

  • episode_callback (optional) – A callback function for monitoring the agent’s progress within the environment.

Return type

DataFrame

Returns

A history of the agent’s trading performance during evaluation.

save_agent(directory, filename=None, append_timestep=False)[source]

Serialize the learning agent to a file for restoring later.

Parameters
  • directory (str) – The str path of the directory the agent checkpoint is stored in.

  • filename (optional) – The str path of the file the agent specification is stored in. The .json file extension will be automatically appended if not provided.

  • append_timestep (bool) – Whether the timestep should be appended to filename to prevent overwriting previous models. Defaults to False.

tune(steps=None, episodes=None, callback=None)[source]

Tune the agent’s hyper-parameters and feature set for the environment.

Parameters
  • steps (Optional[int]) – The number of steps to run each hyper-parameter trial within the environment.

  • episodes (Optional[int]) – The number of episodes to run each hyper-parameter trial within the environment.

  • callback (optional) – A callback function for monitoring the progress of the tuning process.

Return type

DataFrame

Returns

A history of the agent’s trading performance during tuning.