---
title: Utils
keywords: fastai
sidebar: home_sidebar
summary: "Basic utilities with few dependencies."
description: "Basic utilities with few dependencies."
---
{% raw %}
{% endraw %} {% raw %}
%load_ext autoreload
%autoreload 2
%matplotlib inline
{% endraw %} {% raw %}
{% endraw %}

At training time, we typically want to put the model and the current mini-batch on the GPU. No GPU is available when developing on a CPU-only machine, so we define a variable that automatically selects the right device. This goes in utils rather than core to avoid circular imports with the callbacks module.
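A common way to define such a device constant (a sketch; the library's actual definition may differ) is:

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU.
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```

A model or mini-batch can then be moved with `model.to(DEVICE)` or `x.to(DEVICE)` without any per-machine branching.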

{% raw %}
{% endraw %} {% raw %}
DEVICE
device(type='cpu')
{% endraw %} {% raw %}
{% endraw %} {% raw %}

hasarg[source]

hasarg(func, arg)

Checks if a function has a given argument.
Works with args and kwargs as well if you exclude the
stars. See example below.

Parameters
----------
func: function
arg: str
    Name of argument to look for.

Returns
-------
bool

Example
-------
def foo(a, b=6, *args):
    return

>>> hasarg(foo, 'b')
True

>>> hasarg(foo, 'args')
True

>>> hasarg(foo, 'c')
False
{% endraw %} {% raw %}
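One minimal way to get this behavior with the standard `inspect` module (a sketch, not necessarily the library's implementation):

```python
import inspect

def hasarg(func, arg):
    """Sketch: check whether `arg` names a parameter of `func`.
    `*args` and `**kwargs` show up under their bare names ('args', 'kwargs')."""
    return arg in inspect.signature(func).parameters

def foo(a, b=6, *args):
    return

assert hasarg(foo, 'b')
assert hasarg(foo, 'args')
assert not hasarg(foo, 'c')
```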
{% endraw %} {% raw %}

quick_stats[source]

quick_stats(x, digits=3)

Quick wrapper to get mean and standard deviation of a tensor.

Parameters
----------
x: torch.Tensor
digits: int
    Number of digits to round mean and standard deviation to.

Returns
-------
tuple[float]
{% endraw %} {% raw %}
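A hypothetical sketch of the wrapper and its usage, assuming the default unbiased standard deviation from `torch.Tensor.std`:

```python
import torch

def quick_stats(x, digits=3):
    # Sketch: round the mean and (unbiased) standard deviation of a float tensor.
    return round(x.mean().item(), digits), round(x.std().item(), digits)

t = torch.tensor([1.0, 2.0, 3.0, 4.0])
mean, std = quick_stats(t)
assert (mean, std) == (2.5, 1.291)
```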
{% endraw %} {% raw %}

concat[source]

concat(*args, dim=-1)

Wrapper around torch.cat which accepts tensors as positional
arguments rather than requiring them to be wrapped in a list.
This can be useful when we've built some generalized functionality
where parameters must be passed in a consistent manner.

Parameters
----------
args: torch.Tensor
    Multiple tensors to concatenate.
dim: int
    Dimension to concatenate on (last dimension by default).

Returns
-------
torch.Tensor
{% endraw %} {% raw %}
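The wrapper likely amounts to forwarding the positional tensors straight to `torch.cat` (a sketch under that assumption):

```python
import torch

def concat(*args, dim=-1):
    # Sketch: *args arrives as a tuple of tensors, which torch.cat accepts directly.
    return torch.cat(args, dim=dim)

a = torch.zeros(2, 3)
b = torch.ones(2, 2)
out = concat(a, b)  # concatenates on the last dimension by default
assert out.shape == (2, 5)
```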
{% endraw %} {% raw %}

weighted_avg[source]

weighted_avg(*args, weights)

Compute a weighted average of multiple tensors.

Parameters
----------
args: torch.Tensor
    Multiple tensors with the same dtype and shape that you want to average.
weights: list
    Ints or floats to weight each input tensor. The length of this list must
    match the number of tensors passed in: the first weight will be multiplied
    by the first tensor, the second weight by the second tensor, etc. If your
    weights don't sum to 1, they will be normalized automatically.

Returns
-------
torch.Tensor: Same dtype and shape as each of the input tensors.
{% endraw %} {% raw %}
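A minimal sketch of the normalize-then-sum behavior described above (not necessarily the library's exact implementation):

```python
import torch

def weighted_avg(*args, weights):
    # Sketch: normalize the weights to sum to 1, then sum the weighted tensors.
    total = sum(weights)
    return sum(t * (w / total) for t, w in zip(args, weights))

a = torch.tensor([0.0, 0.0])
b = torch.tensor([1.0, 1.0])
# weights [1, 3] normalize to [0.25, 0.75]
out = weighted_avg(a, b, weights=[1, 3])
assert torch.allclose(out, torch.tensor([0.75, 0.75]))
```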
{% endraw %} {% raw %}

identity[source]

identity(x)

Temporarily copied from htools.

Returns the input argument unchanged. This is convenient when a function
may or may not be applied to an item: rather than defaulting a variable
to None, sometimes setting it to a function, and checking for None before
every call, we can default it to identity and call it safely without the
check.

Parameters
----------
x: any

Returns
-------
x: Unchanged input.
{% endraw %} {% raw %}
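The "default to identity instead of None" pattern described above looks like this (the `process` helper is purely illustrative):

```python
def identity(x):
    """Return the input argument unchanged."""
    return x

def process(items, transform=identity):
    # No `if transform is None` check needed: identity is always callable.
    return [transform(i) for i in items]

assert process([1, 2, 3]) == [1, 2, 3]
assert process([1, 2, 3], transform=lambda x: x * 2) == [2, 4, 6]
```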
{% endraw %} {% raw %}

tensor_dict_diffs[source]

tensor_dict_diffs(d1, d2)

Compare two dictionaries of tensors. The two dicts must have the
same keys.

Parameters
----------
d1: dict[any, torch.Tensor]
d2: dict[any, torch.Tensor]

Returns
-------
list: The keys whose tensors differ between d1 and d2.
{% endraw %} {% raw %}
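A sketch of how such a comparison might work, assuming element-wise equality via `torch.equal`:

```python
import torch

def tensor_dict_diffs(d1, d2):
    # Sketch: both dicts share the same keys; keep keys whose tensors differ.
    return [k for k in d1 if not torch.equal(d1[k], d2[k])]

d1 = {'a': torch.tensor([1, 2]), 'b': torch.tensor([3, 4])}
d2 = {'a': torch.tensor([1, 2]), 'b': torch.tensor([3, 5])}
assert tensor_dict_diffs(d1, d2) == ['b']
```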
{% endraw %} {% raw %}

find_tensors[source]

find_tensors(gpu_only=True)

Prints a list of the Tensors being tracked by the garbage collector.
From
https://forums.fast.ai/t/gpu-memory-not-being-freed-after-training-is-over/10265/8
with some minor reformatting.

Parameters
----------
gpu_only: bool
    If True, only find tensors that are on the GPU.

Returns
-------
None: Output is printed to stdout.
{% endraw %}
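The linked forum snippet boils down to scanning the objects tracked by `gc` for tensors; a rough sketch of that idea (not the exact code):

```python
import gc
import torch

def find_tensors(gpu_only=True):
    # Print every tensor the garbage collector is currently tracking.
    for obj in gc.get_objects():
        try:
            if torch.is_tensor(obj) and (not gpu_only or obj.is_cuda):
                print(type(obj).__name__, tuple(obj.size()), obj.dtype)
        except Exception:
            # Some gc-tracked objects raise on attribute access; skip them.
            continue

find_tensors(gpu_only=False)
```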