OpenRacer package¶
Submodules¶
OpenRacer.Constants module¶
OpenRacer.Interface module¶
- class OpenRacer.Interface.Interface(model: ModelInterface, host: str = 'localhost', port: int = 8000, debug: bool = False)[source]¶
Bases:
object
- printStart(intro: List[str] = ['Welcome to OperRacer'], padding: int = 5)[source]¶
Print a startup box that welcomes the user and shows links.
- Args:
intro (List[str], optional): Lines to be shown in the startup panel. Defaults to ["Welcome to OperRacer"].
padding (int, optional): Used to decide the size of the panel. Defaults to 5.
OpenRacer.Model module¶
- class OpenRacer.Model.ModelBase[source]¶
Bases:
object
- backprop(action, inputData)[source]¶
Called during training after every step; it can be used to evaluate your step and adjust the model.
- Args:
action (List[List[float]]): A 2D list, e.g. [[0.1, 0.2], [0.3, -0.1]].
inputData (Params|Object): The same input data provided to trainEval/testEval.
- preProcess(inputData: Params)[source]¶
Override this method in case you want to preprocess your inputs.
- Args:
inputData (Params): The data received from Unity.
- Returns:
The processed data. Defaults to Params.
- rewardFn(action: List[List[float]], inputData) List[float] [source]¶
Called on each step. You can define how to reward your agents based on their input and actions; the interface calls the reward function on the current model.
- Args:
action (List[List[float]]): A 2D list, e.g. [[0.1, 0.2], [0.3, -0.1]].
inputData (Params|Object): The same input data provided to trainEval/testEval.
- Returns:
List[float]: Rewards for each agent.
- save(sessionNum, dirPath)[source]¶
Called at the end of every session to save the model. The user needs to implement the save logic; the model can be saved in the directory path provided in the input.
- Args:
sessionNum (int): Session number of the session that just ended.
dirPath (str): Path to the directory created by the OpenRacer module.
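Since the save hook leaves persistence entirely to the user, one option is to pickle the model into the provided directory. The sketch below is a hypothetical helper, not part of OpenRacer; the `model_session_<n>.pkl` file name pattern is our own convention.

```python
import os
import pickle


def saveModel(model, sessionNum, dirPath):
    # Hypothetical helper: write one checkpoint file per session into the
    # directory that OpenRacer created. The file name pattern is an assumption.
    path = os.path.join(dirPath, f"model_session_{sessionNum}.pkl")
    with open(path, "wb") as f:
        pickle.dump(model, f)
    return path
```

A model class could call a helper like this from its `save` implementation, pickling whatever state it needs to restore later.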
- testEval(inputData)[source]¶
- Called for each step in testing/Race. It calls testEval on the current model.
- Note:
The output range should be [-1, 1] on both axes.
x: [-1, 1] -> [Backward, Forward]
y: [-1, 1] -> [Left, Right]
- Example output:
For 2 agents:
[
[0.5, 0.2], # Agent 1: 0.5 forward and 0.2 towards right
[0.6, -0.1] # Agent 2: 0.6 forward and 0.1 towards left
]
- Args:
inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is a Params object.
- Returns:
List[List]: A list of actions to be taken by each agent.
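Because every action component must stay inside [-1, 1], a small clamping helper can guard a model's raw output before it is returned. This helper is illustrative only and is not part of OpenRacer:

```python
from typing import List


def clampActions(actions: List[List[float]]) -> List[List[float]]:
    # Force each [x, y] pair into the valid [-1, 1] range expected by Unity.
    return [[max(-1.0, min(1.0, v)) for v in agent] for agent in actions]
```

A model's testEval/trainEval could wrap its raw network output with `clampActions(...)` as a final safety step.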
- trainEval(inputData) List[List[float]] [source]¶
- Called for each step in training. It calls the trainEval function of the current model.
- Note:
The output range should be [-1, 1] on both axes.
x: [-1, 1] -> [Backward, Forward]
y: [-1, 1] -> [Left, Right]
- Example output:
For 2 agents:
[
[0.5, 0.2], # Agent 1: 0.5 forward and 0.2 towards right
[0.6, -0.1] # Agent 2: 0.6 forward and 0.1 towards left
]
- Args:
inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is a Params object.
- Returns:
List[List[float]]: A list of actions to be taken by each agent.
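Putting the ModelBase contract together, a minimal custom model might implement trainEval and rewardFn as sketched below. To stay self-contained, the snippet does not import OpenRacer and does not subclass the real ModelBase; the method shapes follow the documentation above, and rewarding forward speed is just an illustrative choice:

```python
from typing import List


class ForwardModel:
    # Follows the ModelBase method contract described above (sketch only).
    name = "forward"  # a non-empty name, as addModel requires

    def trainEval(self, inputData) -> List[List[float]]:
        # One [x, y] action per agent; drive straight ahead at half throttle.
        return [[0.5, 0.0] for _ in inputData]

    def rewardFn(self, action: List[List[float]], inputData) -> List[float]:
        # One reward per agent; here, simply the forward component of its action.
        return [a[0] for a in action]
```

With two agents, `ForwardModel().trainEval([agent1_data, agent2_data])` yields two `[x, y]` pairs, and `rewardFn` maps them to two per-agent rewards.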
- class OpenRacer.Model.ModelInterface[source]¶
Bases:
object
- addModel(model)[source]¶
Add an ML model with a name attribute; model.name should be a non-empty string.
- Args:
model (class): An ML model class that provides eval, train, backPropagate, and loss functions.
- backprop(action, inputData)[source]¶
Called during training after every step; it can be used to evaluate your step and adjust the model.
- Args:
action (List[List[float]]): A 2D list, e.g. [[0.1, 0.2], [0.3, -0.1]].
inputData (Params|Object): The same input data provided to trainEval/testEval.
- eval(inputDataFromUnity: str, isTraining: bool = False) dict [source]¶
This function is called on each step to evaluate what to do.
- Args:
inputDataFromUnity (str): Unformatted data from Unity.
isTraining (bool, optional): Whether this is a training step. If True, backpropagation is invoked. Defaults to False.
- Returns:
dict: An action dict containing the x, y input for the car AI in Unity.
- formatInput(unprocessedInput: str) Params [source]¶
Converts the string data received from Unity into Params.
- Args:
unprocessedInput (str): String message received from Unity over the WebSocket.
- Returns:
Params: Processed input for taking the next step.
- preProcess(inputData: Params)[source]¶
Override this method in case you want to preprocess your inputs.
- Args:
inputData (Params): The data received from Unity.
- Returns:
The processed data. Defaults to Params.
- rewardFn(action: List[List[float]], inputData) List[float] [source]¶
Called on each step. You can define how to reward your agents based on their input and actions; the interface calls the reward function on the current model.
- Args:
action (List[List[float]]): A 2D list, e.g. [[0.1, 0.2], [0.3, -0.1]].
inputData (Params|Object): The same input data provided to trainEval/testEval.
- Returns:
List[float]: Rewards for each agent.
- sessionEnd(sessionNum: int)[source]¶
Called after every epoch ends to save the model. For more control, you can add a save function to your model class; it will be called instead, and the interface will not save the model itself.
- Args:
sessionNum (int): Number of the epoch that ended.
- setModel(modelName)[source]¶
Set the model to be used for evaluation and training.
- Args:
modelName (str): Name of the model to use. It should be present among the added models.
- setTrack(track: List[List[float]])[source]¶
Set the track coordinates for the session.
- Args:
track (List[Tuple[float]]): List of coordinate tuples (x, y, z). Coordinates follow Unity conventions; for the 2D case, use x and z, i.e. (x, z) => (x, y).
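The coordinate note above ((x, z) => (x, y) in the 2D case) can be illustrated with a small conversion helper. The function name is hypothetical, not part of OpenRacer:

```python
from typing import List, Tuple


def trackTo2D(track3d: List[Tuple[float, float, float]]) -> List[List[float]]:
    # Unity supplies (x, y, z); for the 2D case keep x and z and treat them
    # as the new (x, y) pair, dropping Unity's vertical y axis.
    return [[x, z] for (x, y, z) in track3d]
```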
- testEval(inputData)[source]¶
Called for each step in testing/Race. It calls testEval on the current model.
- Args:
inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is a Params object.
- Returns:
List[List]: A list of actions to be taken by each agent. The range should be [-1, 1] on both axes.
x: [-1, 1] -> [Backward, Forward]
y: [-1, 1] -> [Left, Right]
For example, for 2 agents:
[
[0.5, 0.2], # Agent 1: 0.5 forward and 0.2 towards right
[0.6, -0.1] # Agent 2: 0.6 forward and 0.1 towards left
]
- trainEval(inputData) List[List[float]] [source]¶
Called for each step in training. It calls the trainEval function of the current model.
- Args:
inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is a Params object.
- Returns:
List[List[float]]: A list of actions to be taken by each agent.
- Note:
The range should be [-1, 1] on both axes.
x: [-1, 1] -> [Backward, Forward]
y: [-1, 1] -> [Left, Right]
- Example:
For 2 agents:
[
[0.5, 0.2], # Agent 1: 0.5 forward and 0.2 towards right
[0.6, -0.1] # Agent 2: 0.6 forward and 0.1 towards left
]
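The addModel/setModel flow can be pictured with a toy stand-in. This is not the real ModelInterface, just a sketch of the registration contract described above (a name-keyed model registry plus a current-model pointer):

```python
class MiniInterface:
    """Toy stand-in for ModelInterface's model registry (illustrative only)."""

    def __init__(self):
        self.models = {}
        self.current = None

    def addModel(self, model):
        # model.name must be a non-empty string, as addModel documents.
        if not isinstance(model.name, str) or not model.name:
            raise ValueError("model.name should be a non-empty string")
        self.models[model.name] = model

    def setModel(self, modelName):
        # The name must refer to a previously added model.
        self.current = self.models[modelName]
```

Typical usage would be to add one or more named models up front and then select one by name before training or racing.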
OpenRacer.Recorder module¶
- class OpenRacer.Recorder.Recorder(modelName: str)[source]¶
Bases:
object
- createDetailsTable = '\n CREATE TABLE if not exists details(\n sessionCount int,\n agentCount int,\n trackName string,\n sessionTime int\n );\n '¶
- createInputTable = '\n CREATE TABLE if not exists step(\n timestamp DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,\n all_wheels_on_track BOOLEAN CHECK(all_wheels_on_track IN(0, 1)),\n x float,\n y float,\n closest_waypoint1 int,\n closest_waypoint2 int,\n distance_from_center float,\n is_crashed BOOLEAN CHECK(is_crashed IN(0, 1)),\n is_left_of_center BOOLEAN CHECK(is_left_of_center IN(0, 1)),\n is_reversed BOOLEAN CHECK(is_reversed IN(0, 1)),\n progress float,\n speed float,\n steering_angle float,\n steps int,\n track_length float,\n track_width float,\n actionX float, \n actionY float,\n reward float, \n agentId int,\n session int);'¶
- createRaceTable = '\n CREATE TABLE if not exists race(\n timestamp DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,\n all_wheels_on_track BOOLEAN CHECK(all_wheels_on_track IN(0, 1)),\n x float,\n y float,\n closest_waypoint1 int,\n closest_waypoint2 int,\n distance_from_center float,\n is_crashed BOOLEAN CHECK(is_crashed IN(0, 1)),\n is_left_of_center BOOLEAN CHECK(is_left_of_center IN(0, 1)),\n is_reversed BOOLEAN CHECK(is_reversed IN(0, 1)),\n progress float,\n speed float,\n steering_angle float,\n steps int,\n track_length float,\n track_width float,\n actionX float, \n actionY float,\n reward float, \n agentId int,\n lap int);\n '¶
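The `createDetailsTable` statement above is plain SQLite DDL, so it can be exercised directly with Python's `sqlite3` module. The sketch below reproduces that schema in an in-memory database; the inserted row uses sample values, not real recorder output:

```python
import sqlite3

# Same DDL as Recorder.createDetailsTable above.
createDetailsTable = """
CREATE TABLE if not exists details(
    sessionCount int,
    agentCount int,
    trackName string,
    sessionTime int
);
"""

conn = sqlite3.connect(":memory:")
conn.execute(createDetailsTable)
conn.execute(
    "INSERT INTO details VALUES (?, ?, ?, ?)",
    (10, 2, "sample_track", 300),  # sample values for illustration
)
row = conn.execute("SELECT trackName, agentCount FROM details").fetchone()
```

The `step` and `race` tables can be created the same way from `createInputTable` and `createRaceTable`.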
OpenRacer.Routes module¶
- class OpenRacer.Routes.Routes(model: ModelInterface)[source]¶
Bases:
object
- async checkCommand(signal: str)[source]¶
Check the message received from Unity and process it to send a response.
- Args:
signal (str): The string received from Unity. Its structure is "command~value".
- Returns:
str|dict|list: Different types of responses are generated depending on the request.
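Given that messages follow the "command~value" structure, a hypothetical parser for such signals could look like this (the function is an illustration, not the actual checkCommand implementation):

```python
def parseSignal(signal: str):
    # Split "command~value" on the first "~"; the value part may itself
    # contain further "~" characters, so only the first one is consumed.
    command, _, value = signal.partition("~")
    return command, value
```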
OpenRacer.Util module¶
OpenRacer.datatypes module¶
OpenRacer.example module¶
- class OpenRacer.example.RandomModel(seed: int = 0)[source]¶
Bases:
ModelBase
- preProcess(inputData)[source]¶
Override this method in case you want to preprocess your inputs.
- Args:
inputData (Params): The data received from Unity.
- Returns:
The processed data. Defaults to Params.
- rewardFn(action, inputData)[source]¶
Called on each step. You can define how to reward your agents based on their input and actions; the interface calls the reward function on the current model.
- Args:
action (List[List[float]]): A 2D list, e.g. [[0.1, 0.2], [0.3, -0.1]].
inputData (Params|Object): The same input data provided to trainEval/testEval.
- Returns:
List[float]: Rewards for each agent.
- save(epocNum: int, dirPath)[source]¶
Called at the end of every session to save the model. The user needs to implement the save logic; the model can be saved in the directory path provided in the input.
- Args:
epocNum (int): Session number of the session that just ended.
dirPath (str): Path to the directory created by the OpenRacer module.
- testEval(inputData)[source]¶
- Called for each step in testing/Race. It calls testEval on the current model.
- Note:
The output range should be [-1, 1] on both axes.
x: [-1, 1] -> [Backward, Forward]
y: [-1, 1] -> [Left, Right]
- Example output:
For 2 agents:
[
[0.5, 0.2], # Agent 1: 0.5 forward and 0.2 towards right
[0.6, -0.1] # Agent 2: 0.6 forward and 0.1 towards left
]
- Args:
inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is a Params object.
- Returns:
List[List]: A list of actions to be taken by each agent.
- trainEval(inputData)[source]¶
- Called for each step in training. It calls the trainEval function of the current model.
- Note:
The output range should be [-1, 1] on both axes.
x: [-1, 1] -> [Backward, Forward]
y: [-1, 1] -> [Left, Right]
- Example output:
For 2 agents:
[
[0.5, 0.2], # Agent 1: 0.5 forward and 0.2 towards right
[0.6, -0.1] # Agent 2: 0.6 forward and 0.1 towards left
]
- Args:
inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is a Params object.
- Returns:
List[List[float]]: A list of actions to be taken by each agent.
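RandomModel presumably emits uniform random actions seeded by its constructor argument. A self-contained sketch of that behavior (not the actual implementation) is:

```python
import random
from typing import List


def randomActions(numAgents: int, seed: int = 0) -> List[List[float]]:
    # One [x, y] action per agent, each component uniform in [-1, 1],
    # reproducible via the seed (mirroring RandomModel's seed argument).
    rng = random.Random(seed)
    return [
        [rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)]
        for _ in range(numAgents)
    ]
```

Seeding a dedicated `random.Random` instance keeps runs reproducible without touching the global random state.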