[72]:
import numpy as np
import matplotlib.pyplot as plt
import torch
from torchvision.transforms.functional import rotate
from pytomography.utils import rotate_detector_z
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

PyTomography is built around PyTorch, which uses the torch.tensor data type (very analogous to numpy.array)

[73]:
x = torch.tensor([2,3,4])
x**2
[73]:
tensor([ 4,  9, 16])
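
Since torch tensors mirror NumPy arrays so closely, it is easy to move data between the two; the snippet below is a small illustrative aside using the standard torch.from_numpy and .numpy() conversions

arr = np.array([2., 3., 4.])
t = torch.from_numpy(arr)   # NumPy array -> torch tensor (shares the same memory)
(t**2).numpy()              # torch tensor -> NumPy array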

All objects and images are stored using torch.tensors. The shape of the tensor depends on the imaging modality

  • For SPECT, objects have shape [batch_size, Lx, Ly, Lz] and images have shape [batch_size, Ltheta, Lr, Lz]

[74]:
x = torch.ones((1,128,128,128))
x.shape
[74]:
torch.Size([1, 128, 128, 128])
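
For the projection space (image), the angular dimension replaces one of the spatial dimensions; as a quick illustrative sketch (the 120 angles here are just a placeholder value)

g = torch.ones((1,120,128,128))   # [batch_size, Ltheta, Lr, Lz]
g.shape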

We can create a simple cylinder using a meshgrid

[75]:
x = torch.linspace(-1,1,128)
xv, yv, zv = torch.meshgrid(x,x,x, indexing='ij')
obj = (xv**2 + 0.9*zv**2 < 0.5) * (torch.abs(yv)<0.8)
obj = obj.to(torch.float)
obj.shape
[75]:
torch.Size([128, 128, 128])

We need to ensure we have the batch_size dimension

[76]:
obj = obj.unsqueeze(dim=0)
obj.shape
[76]:
torch.Size([1, 128, 128, 128])

Let’s plot from a coronal, sagittal, and axial perspective

[34]:
fig, _ = plt.subplots(1,3,figsize=(10,3))
plt.subplot(131)
plt.pcolormesh(x,x,obj[0].sum(axis=0).T, cmap='Greys_r')
plt.xlabel('y')
plt.ylabel('z')
plt.title('Coronal')
plt.subplot(132)
plt.pcolormesh(x,x,obj[0].sum(axis=1).T, cmap='Greys_r')
plt.xlabel('x')
plt.ylabel('z')
plt.title('Sagittal')
plt.subplot(133)
plt.pcolormesh(x,x,obj[0].sum(axis=2).T, cmap='Greys_r')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Axial')
fig.tight_layout()
../_images/notebooks_dt1_10_0.png

Fundamental Operations#

Now that we have a 3D object \(f\), we may want to turn this into projections \(g\) using

\[g = Hf\]

Assuming no attenuation, PSF modeling, or scatter, we can model this by projecting the object at different angles (think of it like taking an X-ray from a number of different angles)
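
To see why projection is a linear operation of the form \(g = Hf\), here is a tiny standalone sketch (not PyTomography code) where the object is flattened into a vector and \(H\) is written out as an explicit matrix that sums along one axis

f = torch.tensor([[1., 2.],
                  [3., 4.]])
# explicit system matrix: each row of H sums the 2x2 object along its first axis
H = torch.tensor([[1., 0., 1., 0.],
                  [0., 1., 0., 1.]])
print(H @ f.flatten())    # tensor([4., 6.])
print(f.sum(axis=0))      # same result: tensor([4., 6.])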

Idea#

Rotate the object first, then always project along the \(x\)-axis

[37]:
proj = obj.sum(axis=1)
proj.shape
[37]:
torch.Size([1, 128, 128])

This is a projection
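
For instance, we can visualize this projection the same way we plotted the object slices above

plt.pcolormesh(proj[0].T, cmap='Greys')
plt.xlabel('r')   # transverse detector coordinate
plt.ylabel('z')   # axial coordinate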

The Rotate Function#

The object is a tensor with 4 dimensions (including batch size). We can use the rotate function, which rotates the object in the \(xy\)-plane so long as \(x\) and \(y\) are the final two dimensions

  • Right now \(z\) is the last dimension

So we need to transpose the array then rotate

[77]:
beta = 45
phi = 270 - beta
# z is currently the last axis: move it before (x,y), rotate in the xy-plane, then permute back
obj_rotated = rotate(obj.permute(0,3,1,2), -phi).permute(0,2,3,1)

We rotate by \(-\phi\) so that the detector angle is \(\phi\)

[53]:
proj_45 = obj_rotated.sum(axis=1)
proj_45.shape
[53]:
torch.Size([1, 128, 128])

All this functionality is contained in the rotate_detector_z function from pytomography.utils.
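
As a quick sanity check (assuming rotate_detector_z takes the detector angle in degrees, as it is called in the loop further below), the same 45° projection can be obtained directly; small numerical differences may remain depending on the interpolation mode used internally

proj_45_alt = rotate_detector_z(obj, beta).sum(axis=1)
proj_45_alt.shape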

[54]:
plt.pcolormesh(proj_45[0].T, cmap='Greys')
[54]:
<matplotlib.collections.QuadMesh at 0x7fb33e5c2e80>
../_images/notebooks_dt1_22_1.png

This is equivalent to what we would call \(g_{45^{\circ}}\) in the SPECT example of the manual. We can then compute \(g = \sum_{\theta} g_{\theta} \otimes \hat{\theta}\) by computing these projections for a number of angles

[80]:
angles = np.arange(0,360.,3)
[82]:
image = torch.zeros((1,len(angles),128,128))
[84]:
image[:,2].shape
[84]:
torch.Size([1, 128, 128])
[85]:
image = torch.zeros((1,len(angles),128,128))
for i,angle in enumerate(angles):
    # rotate so that projecting along the x-axis gives the projection at this detector angle
    object_i = rotate_detector_z(obj,angle)
    image[:,i] = object_i.sum(axis=1)

Now we can look at the image

[86]:
plt.pcolormesh(image[0,:,:,64].T)
plt.xlabel('Angle')
plt.ylabel('r')
[86]:
Text(0, 0.5, 'r')
../_images/notebooks_dt1_29_1.png
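
As a final consistency check, angles[15] = 45, so the projection stored at index 15 of the image should reproduce the \(g_{45^{\circ}}\) computed earlier

plt.pcolormesh(image[0,15].T, cmap='Greys')
plt.title(f'Detector angle: {angles[15]} degrees')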