This notebook is unfinished and still under development

1D NMR Processing and Display

A simplified environment for processing 1D Bruker NMR datasets with SPIKE.

Run each python cell in sequence using the ⇥Run button above (or typing Shift-Enter).

Cells are meant to be used in order, taking you through the complete analysis, but you can go back at any time.

The SPIKE code used for processing is visible in the cells, and can be used as a minimal tutorial.

Remark: to use this program, you should have the following packages installed:

  • a complete scientific python environment (tested with python 3.6 / anaconda, but it should also work in python 2.7)
  • spike (version 0.99.9 minimum)
  • ipywidgets (tested with version 7.1)
  • ipympl (adds interactivity in the notebook)

Initialization

The following cell should be run only once, at the beginning of the processing.

In [1]:
# load all python and interactive tools
from __future__ import print_function, division
from IPython.display import display, HTML, Markdown, Image
display(Markdown('## STARTING Environment...'))
%matplotlib widget
import os.path as op
import spike
from spike.File.BrukerNMR import Import_1D
from spike.Interactive import INTER as I
from spike.Interactive.ipyfilechooser import FileChooser
display(Markdown('## ...program is Ready'))
from importlib import reload  # the two following lines are debugging help
reload(I)                   # and can be removed safely when in production
I.hidecode()

STARTING Environment...

    ========================
          SPIKE
    ========================
    Version     : 0.99.13
    Date        : 07-10-2019
    Revision Id : 436
    ========================
*** bokeh_display not loaded ***
*** wavelet not loaded ***
*** zoom3D not loaded ***
plugins loaded:
Bruker_NMR_FT,  Bucketing,  FTMS_calib,  Fitter,  Integrate,  Linear_prediction,  PALMA,  Peaks,  apmin,  bcorr,  fastclean,  gaussenh,  pg_sane,  rem_ridge,  sane,  sg,  test,  urQRd, 

spike.plugins.report() for a short description of each plugins
spike.plugins.report('module_name') for complete documentation on one plugin

...program is Ready

Useful to show/print a clean screen when processing is finished.

Choose the file

The FileChooser() tool creates a dialog box which allows you to choose a file on your disk.

  • use the Select button
  • modify the (optional) path argument to start the exploration at a given location
  • after the selection, the selected filename is found in FC.selected
In [2]:
FC = FileChooser(path='/DATA/',filename='fid')
display(FC)

Import dataset

This is simply done with the Import_1D() tool, which returns a SPIKE object.

We store the dataset into a variable; typing the variable name shows a summary of the dataset.

In [3]:
print('Reading file ',FC.selected)
d1 = Import_1D(FC.selected)
d1.filename = FC.selected
d1.set_unit('sec').display(title=FC.nmrname+" fid")
Reading file  /DATA/ARTEref/2/fid
Out[3]:
1D data-set
Axis F1 :NMR axis at 700.163291 MHz,  8192 complex pairs,  from -1.338538 ppm (-937.194868 Hz) to 10.683670 ppm  (7480.313549 Hz)
data-set is complex

In the current set-up, the figure can be explored (zoom, shift, resize, etc) with the jupyter tools displayed below the dataset. The figure can also be saved as a png graphic file.

For more interactivity - see below.

Basic Processing

We are going to use a basic processing set-up; check the documentation for advanced processing.

Fourier Transform

In [4]:
D1 = d1.copy() # copy the imported data-set to another object for processing
D1.apod_em(0.3).zf(4).ft_sim().bk_corr().apmin()  # chaining  apodisation - zerofill - FT - Bruker correction - autophase
D1.set_unit('ppm').display(title=FC.nmrname)  # chain  set to ppm unit - and display
Out[4]:
1D data-set
Axis F1 :NMR axis at 700.163291 MHz,  32768 complex pairs,  from -1.338538 ppm (-937.194868 Hz) to 10.683670 ppm  (7480.313549 Hz)
data-set is complex
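
The chained processing steps above (exponential apodisation, zero-filling, Fourier transform) can be sketched with plain numpy on a synthetic FID. This is an illustrative sketch only, not SPIKE's implementation; the signal frequency, line broadening, and sizes are invented for the example.

```python
import numpy as np

# synthetic FID: one resonance at 100 Hz, decaying, sampled at 1 kHz
sw = 1000.0                      # spectral width (Hz)
n = 1024                         # acquired complex points
t = np.arange(n) / sw
fid = np.exp(2j * np.pi * 100.0 * t) * np.exp(-t / 0.3)

# exponential multiplication (as in apod_em): 0.3 Hz line broadening
lb = 0.3
fid_apod = fid * np.exp(-np.pi * lb * t)

# zero-filling (as in zf(4)): extend to 4x the acquired size
fid_zf = np.concatenate([fid_apod, np.zeros(3 * n, dtype=complex)])

# Fourier transform, with the frequency axis centered by fftshift
spec = np.fft.fftshift(np.fft.fft(fid_zf))
freq = np.fft.fftshift(np.fft.fftfreq(fid_zf.size, d=1 / sw))

peak_freq = freq[np.argmax(np.abs(spec))]
print(round(peak_freq, 1))       # resonance recovered near 100 Hz
```

Zero-filling does not add information, but it interpolates the spectrum to a finer frequency grid, which is why the peak position comes out close to 100 Hz despite the coarse acquisition.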

The following steps are optional.

Rephasing

If it is required, use the interactive phaser.

Use scale and zoom to tune the display; then use P0, P1, pivot to optimize the phase.

Once finished, click on Apply correction

In [5]:
reload(I)
I.Phaser1D(D1, reverse_scroll=True);
no applied phase
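
What P0, P1 and the pivot do can be sketched with numpy: a zero-order phase rotates the whole spectrum, a first-order phase rotates it linearly with frequency, leaving the pivot point unchanged. This is a generic sketch of phase correction, not SPIKE's Phaser1D code; the `phase` helper and its test line are invented for the example.

```python
import numpy as np

def phase(spec, p0, p1, pivot=0.5):
    """Apply zero-order (p0) and first-order (p1) phase correction, in degrees.
    pivot is the fractional position (0..1) left unchanged by p1."""
    n = spec.size
    x = np.arange(n) / n - pivot          # zero at the pivot point
    return spec * np.exp(1j * np.deg2rad(p0 + p1 * x))

# a purely dispersive (imaginary) Gaussian line becomes absorptive
# after a -90 degree zero-order correction
spec = 1j * np.exp(-0.5 * ((np.arange(512) - 256) / 10.0) ** 2)
corrected = phase(spec, p0=-90.0, p1=0.0)
print(np.allclose(corrected.real, np.abs(spec)))   # all intensity now real
```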

Baseline correction

A simple interactive baseline correction tool

In [6]:
reload(I)
I.baseline1D(D1, reverse_scroll=True);
Applied correction:
 [-0.9668680131947585, 0.6093183485796191, 2.9604280406326864, 4.306216710797024, 5.969373756013748, 7.281040739611759, 8.386876904245158, 9.61305622149958]

Peak-Picker

  • moving the threshold determines the minimum peak intensity
  • peaks are searched only in the selected zoom window
In [9]:
reload(I)
ph = I.NMRPeaker1D(D1, reverse_scroll=True);
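
The two rules listed above (a threshold on intensity, a restriction to a window) describe a classic threshold peak picker. A minimal numpy sketch of the idea, not SPIKE's NMRPeaker1D code; the helper and synthetic lines are invented:

```python
import numpy as np

def pick_peaks(y, threshold):
    """Return indices of local maxima of y above threshold
    (minimal sketch of threshold-based peak picking)."""
    y = np.asarray(y)
    above = y[1:-1] > threshold
    is_max = (y[1:-1] > y[:-2]) & (y[1:-1] >= y[2:])
    return np.flatnonzero(above & is_max) + 1

# two narrow lines at 3 and 7 ppm; only those above threshold are kept
x = np.linspace(0, 10, 2000)
y = (np.exp(-0.5 * ((x - 3) / 0.05) ** 2)
     + 0.4 * np.exp(-0.5 * ((x - 7) / 0.05) ** 2))
peaks = pick_peaks(y, threshold=0.2)
print(np.round(x[peaks], 2))       # two peaks, near 3 and 7 ppm
```

Lowering the threshold below 0.2 in this toy example would not change the result, but on real data it starts picking noise spikes, which is why the interactive threshold matters.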

Integrate

Integration zones are computed from the peaks detected with the Peak-Picker above, which is therefore required first.

In [24]:
reload(I)
I.NMRIntegrate(D1);
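
Zone integration itself is just a discrete sum of intensities over each zone, scaled by the point spacing. A simplified numpy sketch (not SPIKE's NMRIntegrate; the helper and the two test lines are invented, with known areas so the result can be checked):

```python
import numpy as np

def integrate_zones(x, y, zones):
    """Sum-based integral of y over each (start, end) zone, in x units
    (simplified sketch of zone integration on a uniform grid)."""
    dx = x[1] - x[0]
    return [y[(x >= lo) & (x <= hi)].sum() * dx for lo, hi in zones]

# two Gaussian lines with areas 1.0 and 2.0 (area = amplitude * sigma * sqrt(2*pi))
x = np.linspace(0, 10, 5000)
sig = 0.05
norm = sig * np.sqrt(2 * np.pi)
y = (np.exp(-0.5 * ((x - 3) / sig) ** 2)
     + 2 * np.exp(-0.5 * ((x - 7) / sig) ** 2)) / norm
areas = integrate_zones(x, y, zones=[(2.5, 3.5), (6.5, 7.5)])
print([round(a, 2) for a in areas])   # areas in a 1:2 ratio
```

In NMR the absolute numbers are arbitrary; only the ratios between integrals are meaningful, which is why integrals are usually normalized to a reference zone.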

Interactive composite display

Convenient to set-up your own figure

In [30]:
reload(I)
s = I.Show1Dplus(D1, title=FC.nmrname, reverse_scroll=True);

Save the data-set

either as stand-alone native SPIKE files (there are other formats)

In [ ]:
D1.save('example1.gs1')

or as a csv text file, in which case it is probably better to remove the imaginary part, which is not useful there.

The file contains some basic information in addition to the spectral data.

In [ ]:
D1.copy().real().save_csv('example.csv')
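
The idea behind such a csv export (a header with basic information, then one row per spectral point) can be sketched with the standard csv module. This is a hypothetical illustration of the file layout, not SPIKE's save_csv format; the axis values are invented.

```python
import csv
import io

import numpy as np

# hypothetical spectrum: a ppm axis and real intensities
ppm = np.linspace(10.68, -1.34, 6)
intensity = np.cos(ppm)

buf = io.StringIO()                      # a real file object works the same way
w = csv.writer(buf)
w.writerow(['# axis unit: ppm'])         # basic information as a comment line
w.writerow(['ppm', 'intensity'])
for p, v in zip(ppm, intensity):
    w.writerow([f'{p:.4f}', f'{v:.6e}'])

lines = buf.getvalue().splitlines()
print(lines[1])                          # -> ppm,intensity
```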

Save the peak list to a csv file

In [ ]:
D1.pk2pandas().to_csv('peaklist.csv')

Save the integrals to a csv file

In [ ]:
D1.integrals.to_pandas().to_csv('integrals.csv')

Export a bucket list

In [ ]:
# adapt the parameters below
Zoom = (0.5,8)                    # zone to bucket       - (start, end) in ppm
BucketSize = 0.04                 # width of the buckets - in ppm
Output = 'screen'                 # 'screen'  or  'file'  determines output
BucketFileName = 'bucket.csv'     #  the filename if Output (above) is 'file'  - don't forget the .csv extension.
In [ ]:
# the following cell executes the bucketing
if Output == 'file':
    with open(BucketFileName,'w') as F:
        D1.bucket1d(zoom=Zoom, bsize=BucketSize, pp=True, file=F)
    print('buckets written to %s\n'%op.realpath(BucketFileName))
else:
    D1.bucket1d(zoom=Zoom, bsize=BucketSize, pp=True);
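
Bucketing reduces the spectrum to the mean intensity in contiguous windows of fixed width, here over the Zoom region with BucketSize-wide buckets. A simplified numpy sketch of that reduction, not SPIKE's bucket1d implementation; the test peak is invented:

```python
import numpy as np

def bucket(x, y, zoom, bsize):
    """Mean intensity in contiguous buckets of width bsize (x units)
    over the zoom = (start, end) window (simplified sketch)."""
    start, end = min(zoom), max(zoom)
    edges = np.arange(start, end + bsize, bsize)
    centers, values = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            values.append(y[mask].mean())
    return np.array(centers), np.array(values)

# a broad line at 4.02 ppm, bucketed with the parameters used above
x = np.linspace(0, 10, 4000)
y = np.exp(-0.5 * ((x - 4.02) / 0.3) ** 2)
centers, values = bucket(x, y, zoom=(0.5, 8), bsize=0.04)
print(centers[np.argmax(values)])   # strongest bucket lands near 4.02 ppm
```

Bucketing is typically used to build feature matrices for statistical analysis across many spectra, where small ppm misalignments between samples are absorbed by the bucket width.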

The tools in this page are under intensive development; things are going to change rapidly.