OpenRacer package

Submodules

OpenRacer.Constants module

class OpenRacer.Constants.COMMAND(value)[source]

Bases: str, Enum

An enumeration of the websocket commands exchanged with the Unity app.

Details = 'details'
End = 'end'
Epoch = 'epoch'
Eval = 'eval'
Lap = 'lap'
Test = 'test'
Track = 'track'
TrackAck = 'trackAck'
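Because COMMAND mixes in str, its members compare equal to plain strings, which is convenient when parsing raw "command~value" signals. A minimal stand-in sketch (not the real import) illustrating this:

```python
from enum import Enum

# Stand-in mirroring the documented COMMAND members (not the real
# OpenRacer.Constants import); shows why the enum subclasses str.
class COMMAND(str, Enum):
    Details = 'details'
    End = 'end'
    Epoch = 'epoch'
    Eval = 'eval'
    Lap = 'lap'
    Test = 'test'
    Track = 'track'
    TrackAck = 'trackAck'

# The str mixin lets a raw command parsed from a "command~value" signal
# compare equal to an enum member directly, without .value lookups.
command, _, value = 'track~[[0.0, 1.0]]'.partition('~')
print(command == COMMAND.Track)  # True
```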

OpenRacer.Interface module

class OpenRacer.Interface.Interface(model: ModelInterface, host: str = 'localhost', port: int = 8000, debug: bool = False)[source]

Bases: object

printStart(intro: List[str] = ['Welcome to OperRacer'], padding: int = 5)[source]

Print a startup box with a welcome message and links.

Args:

intro (List[str], optional): Lines to be shown in the startup panel. Defaults to [“Welcome to OperRacer”].

padding (int, optional): Used to decide the size of the panel. Defaults to 5.

start()[source]

Start the server. Launches the FastAPI server and creates a SQLite database on each start.

OpenRacer.Model module

class OpenRacer.Model.ModelBase[source]

Bases: object

backprop(action, inputData)[source]

Called during training after every step; it can be used to evaluate your step and adjust the model.

Args:

action (List[List[float]]): A 2D list, e.g. [[0.1, 0.2], [0.3, -0.1]].

inputData (Params|Object): The same input data as provided to trainEval/testEval.

preProcess(inputData: Params)[source]

Override this in case you want to preprocess your inputs.

Args:

inputData (Params): The data received from Unity.

Returns:

The processed data; Params by default.
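A hypothetical preProcess override might reduce the raw per-agent data to a feature vector. The field names used here (speed, distance_from_center, track_width) are borrowed from the step-table columns recorded by OpenRacer.Recorder; the actual Params layout is an assumption:

```python
# Hypothetical preProcess sketch: turn raw per-agent fields into a compact
# feature vector. Field names follow the recorded step columns; the real
# Params layout is an assumption.
def preProcess(inputData):
    return [
        # speed as-is, distance from center normalized by track width
        [agent['speed'], agent['distance_from_center'] / agent['track_width']]
        for agent in inputData
    ]

features = preProcess([{'speed': 2.0, 'distance_from_center': 0.5, 'track_width': 2.0}])
print(features)  # [[2.0, 0.25]]
```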

rewardFn(action: List[List[float]], inputData) List[float][source]

Called on each step. Define how to reward your agents based on their input and actions; it invokes the reward function on the current model.

Args:

action (List[List[float]]): A 2D list, e.g. [[0.1, 0.2], [0.3, -0.1]].

inputData (Params|Object): The same input data as provided to trainEval/testEval.

Returns:

List[float]: Rewards for each agent.
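A minimal reward sketch, assuming the per-agent input exposes an is_crashed flag (the name follows the recorded step columns; the real Params layout may differ):

```python
# Hypothetical reward: favor forward throttle, penalize crashes. The
# is_crashed field name is an assumption about the Params layout.
def rewardFn(action, inputData):
    rewards = []
    for (x, _y), agent in zip(action, inputData):
        reward = max(0.0, x)          # reward forward motion only
        if agent.get('is_crashed'):
            reward -= 1.0             # heavy penalty for crashing
        rewards.append(reward)
    return rewards

rewards = rewardFn([[0.5, 0.2], [0.6, -0.1]],
                   [{'is_crashed': False}, {'is_crashed': True}])
print(rewards)  # [0.5, -0.4]
```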

save(sessionNum, dirPath)[source]

Called at the end of every session to save the model. The user needs to implement the save functionality; the model can be saved to the directory path provided in the input.

Args:

sessionNum (int): Session number of the session that just ended.

dirPath (str): Path to the directory created by the OpenRacer module.
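One possible save implementation, persisting the model state into the directory OpenRacer provides. The filename scheme and the pickle format are illustration choices, not part of the OpenRacer API:

```python
import os
import pickle
import tempfile

# Hypothetical save: pickle the model state, one file per session.
def save(model_state, sessionNum, dirPath):
    path = os.path.join(dirPath, f'model_session_{sessionNum}.pkl')
    with open(path, 'wb') as f:
        pickle.dump(model_state, f)
    return path

# Demo against a throwaway directory standing in for OpenRacer's dirPath.
with tempfile.TemporaryDirectory() as d:
    saved = save({'weights': [0.1, 0.2]}, 3, d)
    saved_ok = os.path.exists(saved)
print(saved_ok)  # True
```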

testEval(inputData)[source]

Called for each step in testing/race. It invokes testEval on the current model.

Note:

The output range should be [-1, 1] on both axes. x: [-1, 1] -> [Backward, Forward]; y: [-1, 1] -> [Left, Right].

Example output:

For 2 agents:

[
    [0.5, 0.2],   # Agent 1: 0.5 forward and 0.2 towards the right
    [0.6, -0.1],  # Agent 2: 0.6 forward and 0.1 towards the left
]

Args:

inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is Params.

Returns:

List[List]: A list of actions to be taken by the agents.

trainEval(inputData) List[List[float]][source]

Called for each step in training. It invokes the trainEval function on the current model.

Note:

The output range should be [-1, 1] on both axes. x: [-1, 1] -> [Backward, Forward]; y: [-1, 1] -> [Left, Right].

Example output:

For 2 agents:

[
    [0.5, 0.2],   # Agent 1: 0.5 forward and 0.2 towards the right
    [0.6, -0.1],  # Agent 2: 0.6 forward and 0.1 towards the left
]

Args:

inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is Params.

Returns:

List[List[float]]: A list of actions to be taken by the agents.
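A minimal trainEval sketch producing one [x, y] action in [-1, 1] per agent. Treating inputData as a per-agent sequence is an assumption about the Params shape:

```python
import random

# Hypothetical trainEval: one random clamped [x, y] action per agent.
def trainEval(inputData):
    actions = []
    for _agent in inputData:
        x = random.uniform(-1, 1)  # throttle: backward .. forward
        y = random.uniform(-1, 1)  # steering: left .. right
        actions.append([x, y])
    return actions

actions = trainEval([{}, {}])  # two agents
```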

class OpenRacer.Model.ModelInterface[source]

Bases: object

addModel(model)[source]

Add an ML model with a name attribute; model.name should be a non-empty string.

Args:

model (class): An ML model class providing eval, train, backPropagate, and loss functions.

backprop(action, inputData)[source]

Called during training after every step; it can be used to evaluate your step and adjust the model.

Args:

action (List[List[float]]): A 2D list, e.g. [[0.1, 0.2], [0.3, -0.1]].

inputData (Params|Object): The same input data as provided to trainEval/testEval.

eval(inputDataFromUnity: str, isTraining: bool = False) dict[source]

This is the function that will be called on each step to evaluate what to do.

Args:

inputDataFromUnity (str): Unformatted data from Unity.

isTraining (bool, optional): Whether this is a training step; if True, backpropagation is invoked. Defaults to False.

Returns:

dict: An action dict containing the x, y input for the car AI in Unity.

formatAction(action: ndarray)[source]
formatInput(unprocessedInput: str) Params[source]

Converts the string data received from Unity into Params.

Args:

unprocessedInput (str): String message received from Unity over the websocket.

Returns:

Params: Processed input for taking next step.

getModel()[source]

Returns the model that will be used for training and testing.

preProcess(inputData: Params)[source]

Override this in case you want to preprocess your inputs.

Args:

inputData (Params): The data received from Unity.

Returns:

The processed data; Params by default.

rewardFn(action: List[List[float]], inputData) List[float][source]

Called on each step. Define how to reward your agents based on their input and actions; it invokes the reward function on the current model.

Args:

action (List[List[float]]): A 2D list, e.g. [[0.1, 0.2], [0.3, -0.1]].

inputData (Params|Object): The same input data as provided to trainEval/testEval.

Returns:

List[float]: Rewards for each agent.

sessionEnd(sessionNum: int)[source]

Called after every epoch ends to save the model. For more control, you can add a save function to the model class; it will be called instead, and the interface will not save the model itself.

Args:

sessionNum (int): Number of the epoch that ended.

setModel(modelName)[source]

Set the model to be used for evaluation and training.

Args:

modelName (str): Name of the model to use; it must be present among the added models.

setTrack(track: List[List[float]])[source]

Set the track coordinates for the session.

Args:

track (List[Tuple[float]]): A list of coordinate tuples (x, y, z), in Unity's coordinate system. For the 2D case, use x and z: (x, z) => (x, y).
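The documented coordinate convention (keep Unity's x and z, drop y) can be sketched as:

```python
# Unity supplies (x, y, z) waypoints; the 2D case keeps (x, z) as (x, y).
def trackTo2D(track3d):
    return [[x, z] for (x, _y, z) in track3d]

track2d = trackTo2D([(0.0, 0.5, 1.0), (2.0, 0.5, 3.0)])
print(track2d)  # [[0.0, 1.0], [2.0, 3.0]]
```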

testEval(inputData)[source]

Called for each step in testing/race. It invokes testEval on the current model.

Args:

inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is Params.

Returns:

List[List]: A list of actions to be taken by the agents. The range should be [-1, 1] on both axes. x: [-1, 1] -> [Backward, Forward]; y: [-1, 1] -> [Left, Right]. For example, for 2 agents:

[
    [0.5, 0.2],   # Agent 1: 0.5 forward and 0.2 towards the right
    [0.6, -0.1],  # Agent 2: 0.6 forward and 0.1 towards the left
]

trainEval(inputData) List[List[float]][source]

Called for each step in training. It invokes the trainEval function on the current model.

Args:

inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is Params.

Returns:

List[List[float]]: A list of actions to be taken by the agents.

Note:

The range should be [-1, 1] on both axes. x: [-1, 1] -> [Backward, Forward]; y: [-1, 1] -> [Left, Right].

Example:

For 2 agents:

[
    [0.5, 0.2],   # Agent 1: 0.5 forward and 0.2 towards the right
    [0.6, -0.1],  # Agent 2: 0.6 forward and 0.1 towards the left
]

OpenRacer.Recorder module

class OpenRacer.Recorder.Recorder(modelName: str)[source]

Bases: object

createDetailsTable = '\n    CREATE TABLE if not exists details(\n        sessionCount int,\n        agentCount int,\n        trackName string,\n        sessionTime int\n    );\n        '
createInputTable = '\n    CREATE TABLE if not exists step(\n        timestamp DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,\n        all_wheels_on_track BOOLEAN CHECK(all_wheels_on_track IN(0, 1)),\n        x float,\n        y float,\n        closest_waypoint1 int,\n        closest_waypoint2 int,\n        distance_from_center float,\n        is_crashed BOOLEAN CHECK(is_crashed IN(0, 1)),\n        is_left_of_center BOOLEAN CHECK(is_left_of_center IN(0, 1)),\n        is_reversed BOOLEAN CHECK(is_reversed IN(0, 1)),\n        progress float,\n        speed float,\n        steering_angle float,\n        steps int,\n        track_length float,\n        track_width float,\n        actionX float, \n        actionY float,\n        reward float, \n        agentId int,\n        session int);'
createRaceTable = '\n    CREATE TABLE if not exists race(\n        timestamp DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,\n        all_wheels_on_track BOOLEAN CHECK(all_wheels_on_track IN(0, 1)),\n        x float,\n        y float,\n        closest_waypoint1 int,\n        closest_waypoint2 int,\n        distance_from_center float,\n        is_crashed BOOLEAN CHECK(is_crashed IN(0, 1)),\n        is_left_of_center BOOLEAN CHECK(is_left_of_center IN(0, 1)),\n        is_reversed BOOLEAN CHECK(is_reversed IN(0, 1)),\n        progress float,\n        speed float,\n        steering_angle float,\n        steps int,\n        track_length float,\n        track_width float,\n        actionX float, \n        actionY float,\n        reward float, \n        agentId int,\n        lap int);\n    '
details(sessionCount, agentCount, trackName, sessionTime)[source]
getAgentRun(agentId: int)[source]
getDetailsOf(attribute: str, agent: int, session: int)[source]
getProgress()[source]
getRecords(agentId: int, session: int)[source]
getSessionRun(session: int)[source]
raceDetails()[source]
recordRaceStep(formatedInput, action, reward, session)[source]
recordStep(formatedInput, action, reward, session)[source]
runDetails()[source]

OpenRacer.Routes module

class OpenRacer.Routes.Routes(model: ModelInterface)[source]

Bases: object

async checkCommand(signal: str)[source]

Check the message received from Unity and process it to send a response.

Args:

signal (str): The string received from Unity, in the form “command~value”.

Returns:

str|dict|list: Different response types are generated depending on the request.

getAgentRun(agentId: int)[source]
getChartData(attribute: str, agentId: int, session: int)[source]
getCommand(signal: str) List[str][source]

Separate the signal into its command and value parts.

Args:

signal (str): Must always be in the format command~value.

Returns:

[command: str, value: str]: The command and the value.
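A sketch of the documented "command~value" framing. Splitting on the first '~' only is an assumption about the implementation; it lets values that themselves contain '~' survive intact:

```python
# Split a "command~value" signal at the first '~' only.
def getCommand(signal: str):
    command, _, value = signal.partition('~')
    return [command, value]

print(getCommand('epoch~3'))            # ['epoch', '3']
print(getCommand('track~[[1.0,2.0]]'))  # ['track', '[[1.0,2.0]]']
```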

getProgressChartData()[source]
getRaceDetails()[source]
getRecords(agentId: int, session: int)[source]
getRunDetails()[source]
getSessionRun(session: int)[source]
hello()[source]

Responds with hello; can be used for testing.

ui(_)[source]

Serves the built index.html from React.

async websocket_endpoint(websocket: WebSocket)[source]

This is used for communicating with Unity APP using websockets.

Args:

websocket (WebSocket): WebSocket client used to receive and send messages.

OpenRacer.Util module

OpenRacer.Util.loadFile(path)[source]

OpenRacer.datatypes module

class OpenRacer.datatypes.Params[source]

Bases: tuple

OpenRacer.example module

class OpenRacer.example.RandomModel(seed: int = 0)[source]

Bases: ModelBase

clamp(n, smallest, largest)[source]
preProcess(inputData)[source]

Override this in case you want to preprocess your inputs.

Args:

inputData (Params): The data received from Unity.

Returns:

The processed data; Params by default.

rewardFn(action, inputData)[source]

Called on each step. Define how to reward your agents based on their input and actions; it invokes the reward function on the current model.

Args:

action (List[List[float]]): A 2D list, e.g. [[0.1, 0.2], [0.3, -0.1]].

inputData (Params|Object): The same input data as provided to trainEval/testEval.

Returns:

List[float]: Rewards for each agent.

save(epocNum: int, dirPath)[source]

Called at the end of every session to save the model. The user needs to implement the save functionality; the model can be saved to the directory path provided in the input.

Args:

sessionNum (int): Session number of the session that just ended.

dirPath (str): Path to the directory created by the OpenRacer module.

scale(n, smallest, largest, newSmallest, newLargest)[source]
testEval(inputData)[source]

Called for each step in testing/race. It invokes testEval on the current model.

Note:

The output range should be [-1, 1] on both axes. x: [-1, 1] -> [Backward, Forward]; y: [-1, 1] -> [Left, Right].

Example output:

For 2 agents:

[
    [0.5, 0.2],   # Agent 1: 0.5 forward and 0.2 towards the right
    [0.6, -0.1],  # Agent 2: 0.6 forward and 0.1 towards the left
]

Args:

inputData (returned from preProcess): The same object returned from preProcess. If preProcess is not overridden, this is Params.

Returns:

List[List]: A list of actions to be taken by the agents.

trainEval(inputData)[source]

Called for each step in training. It invokes the trainEval function on the current model.

Note:

The output range should be [-1, 1] on both axes. x: [-1, 1] -> [Backward, Forward]; y: [-1, 1] -> [Left, Right].

Example output:

For 2 agents:

[
    [0.5, 0.2],   # Agent 1: 0.5 forward and 0.2 towards the right
    [0.6, -0.1],  # Agent 2: 0.6 forward and 0.1 towards the left
]

Args:

inputData (returned from preProcess): This will contain the same object returned from preProcess. If preProcess is not overridden, this is Params.

Returns:

List[List[float]]: A list of actions to be taken by the agents.
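RandomModel's clamp and scale helpers are undocumented; their likely behavior, given their names and signatures, is the following sketch (an assumption, the real implementations may differ):

```python
def clamp(n, smallest, largest):
    # Restrict n to the closed interval [smallest, largest].
    return max(smallest, min(n, largest))

def scale(n, smallest, largest, newSmallest, newLargest):
    # Linearly map n from [smallest, largest] onto [newSmallest, newLargest].
    return newSmallest + (n - smallest) * (newLargest - newSmallest) / (largest - smallest)

print(clamp(1.5, -1, 1))       # 1
print(scale(5, 0, 10, -1, 1))  # 0.0
```

Helpers like these are handy for squeezing arbitrary model outputs into the [-1, 1] action range that trainEval/testEval require.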

Module contents