cognac.env.SysAdmin package

Submodules

cognac.env.SysAdmin.env module

class cognac.env.SysAdmin.env.SysAdminNetworkEnvironment(influence_graph: ~numpy.ndarray, max_steps=100, show_neighborhood_state=True, reward_class=<class 'cognac.env.SysAdmin.rewards.SysAdminDefaultReward'>, is_global_reward=False, base_arrival_rate=0.5, base_fail_rate=0.1, dead_rate_multiplier=0.2, base_success_rate=0.3, faulty_success_rate=0.1)

Bases: ParallelEnv
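The constructor parameters point at SysAdmin-style dynamics: machines fail stochastically, and failures propagate along the influence graph. The exact transition rule lives in the implementation; the numpy sketch below only illustrates one plausible way `base_fail_rate` and `dead_rate_multiplier` could combine with the graph (the propagation rule and status coding are assumptions, not the actual code):

```python
import numpy as np

# Illustrative sketch of SysAdmin-style failure dynamics.
# The rule below (base rate + extra risk per dead neighbour) is an
# assumption for illustration; the actual environment may differ.

def fail_probabilities(status, influence_graph,
                       base_fail_rate=0.1, dead_rate_multiplier=0.2):
    """Per-machine failure probability: a base rate plus an extra
    term for each dead (status == 0) neighbour in the graph."""
    status = np.asarray(status)
    dead = (status == 0).astype(float)
    # Number of dead neighbours for each machine.
    dead_neighbors = influence_graph @ dead
    p = base_fail_rate + dead_rate_multiplier * dead_neighbors
    return np.clip(p, 0.0, 1.0)

# Three machines in a line: 0 -- 1 -- 2
graph = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
status = np.array([0, 1, 1])  # machine 0 is dead

p = fail_probabilities(status, graph)
print(p)  # machine 1 (adjacent to the dead machine) is at higher risk
```

Under this assumed rule, machine 1 fails with probability 0.3 while the isolated machine 2 stays at the base rate 0.1.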

action_space(agent) → Discrete

Takes in an agent and returns the action space for that agent.

MUST return the same value for the same agent name.

The default implementation returns the corresponding entry of the action_spaces dict.
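The contract above means repeated calls with the same agent name must yield the same space; the default satisfies it by looking the space up in a per-agent dict built once at construction time. A minimal stdlib sketch of that pattern (the `Discrete` class here is a stand-in for the real space type, presumably `gymnasium.spaces.Discrete` — an assumption about the dependency):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Discrete:
    """Stand-in for gymnasium.spaces.Discrete: n actions, 0..n-1."""
    n: int

class EnvSketch:
    def __init__(self, agents):
        # Build the spaces once so every lookup returns the same object.
        self.action_spaces = {agent: Discrete(2) for agent in agents}

    def action_space(self, agent):
        # Dict-backed default: same agent name -> same space, every call.
        return self.action_spaces[agent]

env = EnvSketch(["machine_0", "machine_1"])
assert env.action_space("machine_0") is env.action_space("machine_0")
print(env.action_space("machine_0"))  # Discrete(n=2)
```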

get_obs() → dict

Get the observation dict for the multi-agent environment from the current state of the system.

Returns:

dict: Dictionary of observations, keyed by agent name.
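With show_neighborhood_state enabled, a natural reading is that each agent observes its own machine status together with the statuses of its neighbours in the influence graph. The actual layout is defined by the implementation; this numpy sketch only illustrates one plausible assembly (agent naming, ordering, and status coding are assumptions):

```python
import numpy as np

def build_obs(status, influence_graph):
    """Assemble one observation per agent: own status followed by
    the statuses of its neighbours (an assumed layout)."""
    obs = {}
    for i in range(len(status)):
        neighbors = np.flatnonzero(influence_graph[i])
        obs[f"agent_{i}"] = np.concatenate(([status[i]], status[neighbors]))
    return obs

graph = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]])
status = np.array([0, 1, 2])  # e.g. 0=dead, 1=faulty, 2=good (assumed coding)

obs = build_obs(status, graph)
print(obs["agent_1"])  # own status first, then both neighbours
```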

metadata: dict[str, Any] = {'name': 'sysadmin_environment_v0'}
observation_space(agent) → MultiDiscrete

Takes in an agent and returns the observation space for that agent.

MUST return the same value for the same agent name.

The default implementation returns the corresponding entry of the observation_spaces dict.

render()

Displays a rendered frame from the environment, if supported.

Alternate render modes in the default environments are 'rgb_array', which returns a numpy array and is supported by all environments outside of classic, and 'ansi', which returns the printed strings (specific to classic environments).

reset(seed=None, options=None)

Resets the environment and returns a dictionary of observations (keyed by the agent name).

step(actions)

Receives a dictionary of actions keyed by the agent name.

Returns the observation, reward, terminated, truncated, and info dictionaries, each keyed by agent name.
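The reset/step contract above follows the PettingZoo parallel API: every returned dict shares the same agent keys, and the loop runs until all agents are done. A minimal mock environment illustrating the shape of that loop (the dynamics and rewards are placeholders, not the SysAdmin rules):

```python
import random

class MockParallelEnv:
    """Toy environment honouring the dict-per-agent return contract."""
    def __init__(self, agents, max_steps=5):
        self.possible_agents = list(agents)
        self.max_steps = max_steps

    def reset(self, seed=None, options=None):
        if seed is not None:
            random.seed(seed)
        self.agents = list(self.possible_agents)
        self.t = 0
        return {a: 0 for a in self.agents}

    def step(self, actions):
        self.t += 1
        obs = {a: random.randint(0, 2) for a in self.agents}
        rewards = {a: float(actions[a]) for a in self.agents}
        truncated = self.t >= self.max_steps
        terminateds = {a: False for a in self.agents}
        truncateds = {a: truncated for a in self.agents}
        infos = {a: {} for a in self.agents}
        if truncated:
            self.agents = []  # episode over for everyone
        return obs, rewards, terminateds, truncateds, infos

env = MockParallelEnv(["machine_0", "machine_1"])
obs = env.reset(seed=0)
total = 0.0
while env.agents:
    actions = {a: 1 for a in env.agents}  # e.g. always "reboot"
    obs, rew, term, trunc, info = env.step(actions)
    total += sum(rew.values())
print(total)  # 2 agents * 5 steps * reward 1.0 = 10.0
```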

cognac.env.SysAdmin.rewards module

class cognac.env.SysAdmin.rewards.SysAdminDefaultReward

Bases: BaseReward
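The reward logic itself is not documented here. In the classic SysAdmin problem the reward counts working machines, which is what this hedged stand-in does (the BaseReward interface, the compute method, and the status coding are all assumptions, not the actual classes):

```python
# Illustrative sketch only: the actual SysAdminDefaultReward is not
# documented above. A classic SysAdmin reward counts working machines,
# which is what this stand-in implements (an assumption).

class BaseRewardSketch:
    def compute(self, status):
        raise NotImplementedError

class DefaultRewardSketch(BaseRewardSketch):
    def compute(self, status):
        # +1 for every machine that is up (nonzero status, assumed coding).
        return float(sum(1 for s in status if s != 0))

reward = DefaultRewardSketch().compute([0, 1, 2, 1])
print(reward)  # 3.0
```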

Module contents