pocketpose.models.body.posenet#

Module Contents#

Classes#

PoseNet

Base class for PoseNet models.

PoseNetSinglePerson

Single-person PoseNet model.

PoseNetMultiPerson

Multi-person PoseNet model.

class pocketpose.models.body.posenet.PoseNet(model_path: str, model_url: str, input_size: tuple)#

Bases: pocketpose.models.interfaces.TFLiteModel

Base class for PoseNet models.

process_image(image)#

Default implementation of process_image() for models that don’t need preprocessing.

This method can be overridden by subclasses to implement model-specific preprocessing.

Args:
image (np.ndarray): The image to prepare for prediction. The image is a numpy array with shape (1, height, width, channels) and dtype uint8 (range [0, 255]).
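
Since subclasses may override process_image() for model-specific preprocessing, a minimal sketch of such an override follows. The use of cv2, the assumption that input_size is stored as self.input_size, and its (height, width) ordering are illustrative guesses, not part of the documented API.

    import numpy as np
    import cv2  # the resize routine is an illustrative choice, not part of the API
    from pocketpose.models.body.posenet import PoseNet

    class ResizingPoseNet(PoseNet):
        """Hypothetical subclass that resizes inputs to the model's expected size."""

        def process_image(self, image):
            # image is a (1, height, width, channels) uint8 batch, per the docstring above
            target_h, target_w = self.input_size  # assuming (height, width) ordering
            resized = cv2.resize(image[0], (target_w, target_h))
            return resized[np.newaxis, ...].astype(np.uint8)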

flip_keypoints(keypoints, image_width)#

Flip the keypoints horizontally.
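
As an illustration, the flip amounts to mirroring each x coordinate about the vertical centre line, assuming keypoints are (x, y, score) tuples in pixel coordinates (the exact off-by-one convention is an assumption):

    def flip_keypoints(keypoints, image_width):
        # Mirror x about the vertical centre line; y and score are unchanged.
        return [(image_width - 1 - x, y, score) for (x, y, score) in keypoints]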

postprocess_prediction(prediction, original_size) List[List[float]]#

Postprocesses the prediction to get the keypoints.

Args:
prediction (Any): The raw prediction returned by the model. This can be a single tensor or a tuple of tensors, depending on the model.

original_size (tuple): The original size of the input image as (height, width).

Returns:

The predicted keypoints as a list of (x, y, score) tuples.
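
For illustration, a hypothetical call (the names model and raw_prediction are placeholders, not part of this page):

    # original image was 720 x 1280 (height x width)
    keypoints = model.postprocess_prediction(raw_prediction, (720, 1280))
    x, y, score = keypoints[0]  # coordinates and confidence of the first joint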

extract_keypoints_from_heatmaps(heatmaps)#

Extract the keypoints from the heatmaps.

Args:

heatmaps: The heatmaps to extract the keypoints from. Shape: (height, width, num_keypoints)

Returns:

A tuple containing the keypoints and their confidences.
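
The usual decoding is a per-keypoint argmax over the heatmap grid. A minimal sketch; the exact return shapes are assumptions rather than documented behaviour:

    import numpy as np

    def extract_keypoints_from_heatmaps(heatmaps):
        # heatmaps: (height, width, num_keypoints)
        h, w, num_keypoints = heatmaps.shape
        flat = heatmaps.reshape(-1, num_keypoints)
        best = flat.argmax(axis=0)                   # flat index of the peak per keypoint
        ys, xs = np.unravel_index(best, (h, w))      # peak grid coordinates
        confidences = flat[best, np.arange(num_keypoints)]
        keypoints = np.stack([ys, xs], axis=-1)      # (num_keypoints, 2) in (y, x) order
        return keypoints, confidences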

apply_offsets(keypoints, offsets, output_stride=32)#
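
This presumably refines the coarse heatmap positions with the model's offset tensor, following the standard PoseNet decoding (image coordinate = grid coordinate * output_stride + offset). A sketch under assumed tensor layouts; the offset channel ordering in particular is a guess:

    import numpy as np

    def apply_offsets(keypoints, offsets, output_stride=32):
        # keypoints: (num_keypoints, 2) peak grid positions in (y, x) order
        # offsets:   (height, width, 2 * num_keypoints) offset tensor; the assumed
        #            channel layout is all y-offsets first, then all x-offsets
        num_keypoints = keypoints.shape[0]
        refined = []
        for k, (y, x) in enumerate(keypoints):
            dy = offsets[y, x, k]
            dx = offsets[y, x, k + num_keypoints]
            refined.append((y * output_stride + dy, x * output_stride + dx))
        return np.array(refined)
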
class pocketpose.models.body.posenet.PoseNetSinglePerson#

Bases: PoseNet

Single-person PoseNet model.

class pocketpose.models.body.posenet.PoseNetMultiPerson#

Bases: PoseNet

Multi-person PoseNet model.
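
A hypothetical end-to-end sketch using only the methods documented on this page; the inference step itself belongs to the TFLiteModel interface and is left as a placeholder:

    import numpy as np
    from pocketpose.models.body.posenet import PoseNetSinglePerson

    model = PoseNetSinglePerson()

    # Dummy uint8 image batch of shape (1, height, width, channels).
    image = np.zeros((1, 480, 640, 3), dtype=np.uint8)
    batch = model.process_image(image)

    # ... run inference on `batch` via the TFLiteModel interface (not shown here) ...
    # keypoints = model.postprocess_prediction(raw_prediction, (480, 640))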