How do I store variables in the sequence script?


Enrique Mendez

Mar 5, 2021, 11:00:28 AM
to the labscript suite
Hey all,

I've finally got labscript running on our experiment, and I need to be able to write variables to the HDF file for performing complex sequences.

For example, say I want to start running multiple shots.

In this case, a very simple pattern of two scripts ("Diagnostic_Seq.py" and "Load_mot.py").

So I need "diagnostic_seq.py" to write a variable "there_are_photons_coming=True" to the HDF file, which tells the analysis to perform photon analysis. Likewise, "load_mot.py" should write "there_are_photons_coming=False", which prevents my analysis script from hanging on what looks to it like a bad shot.

A more useful example is when I generate tiny subsequences: characterize_cavity(t), count_number_of_atoms(t), etc.

I would like each of them to define variables that the analysis can look for, so it doesn't have to extract them from the compiled traces.

characterize_cavity(t) needs to define "cavity_scan_start_time" and "cavity_scan_sweep_time".

How can I define these variables so I can implement these more complex sequences?

Thanks,
Enrique

Zak V

Mar 5, 2021, 6:32:54 PM
to the labscript suite
Hi Enrique,

I believe (anyone else feel free to correct me if I'm wrong) that the canonical way to achieve what you're looking for with labscript doesn't require saving data to the hdf5 file during shot compilation.

To prevent your analysis scripts from hanging when run on shots that they weren't designed for, you can edit the scripts. For example, you can make them bail out before doing any analysis if the shot wasn't created from the desired labscript. In singleshot routines, the name of the labscript file can be obtained with:

```
from lyse import data, path
ser = data(path)  # Returns a pandas series
ser['labscript']  # Returns 'some_labscript_name.py'
```

Or in a multishot routine, you can use the 'labscript' column of the lyse dataframe:

```
from lyse import data
df = data()
df['labscript']  # Returns a series with the name of each shot's labscript.
```

Alternatively, a more pythonic approach would be to use a try/except block to check if the shot file has the data that your analysis needs. That approach has the advantage that you don't have to update the list of labscripts for which a given analysis script should run whenever you make a new labscript; it will just automatically run whenever the labscript generates the data that the analysis needs. For a singleshot routine that could look something like:

```
from lyse import Run, path

run = Run(path)
try:
    run.get_trace(trace_name)  # trace_name: whatever trace this analysis needs
except Exception:
    print("Could not retrieve data for analysis from shot file. Skipping analysis.")
else:
    # only reached if the data was found; do the analysis here
    pass
```

For your second point, namely extracting parameters like `cavity_scan_start_time` and `cavity_scan_sweep_time` for use in your analysis, I believe that's normally done by creating globals with those results in runmanager. Often something like `cavity_scan_sweep_time` is set by its own global, and `cavity_scan_start_time` can probably be calculated as the sum of some other globals, like `mot_loading_duration + transfer_duration + ...`. The values of those globals can then be extracted during analysis with `Run.get_globals()` in singleshot routines or from the dataframe column for the global in multishot routines.
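
For example, a minimal singleshot sketch of reading such globals (the global names here are just examples; use whatever you've defined in runmanager) could look like:

```
from lyse import Run, path

run = Run(path)
globs = run.get_globals()  # dict of all runmanager globals saved with the shot

# hypothetical global names, just for illustration:
cavity_scan_sweep_time = globs['cavity_scan_sweep_time']
cavity_scan_start_time = globs['mot_loading_duration'] + globs['transfer_duration']
```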

That said, I've also sometimes wished that I could save results during shot compilation. Since we sometimes enable/disable different parts of our sequences, the logic for calculating something like `cavity_scan_start_time` can be a bit complicated and annoying to update if the labscript is changed. Being able to just save the time in the labscript during shot compilation would be easier, less error-prone, and more robust against changes to the labscript. I'm not aware of any official way to do that though.

One possible way (which is a bit hacky and may break in future versions of labscript) is to get the path to the shot's hdf5 file from `labscript.compiler.hdf5_filename`. Then you can open that file with h5py (make sure to import labscript_utils.h5_lock before importing h5py in your labscript) and edit it as needed to save the data that you'd like. Note that, at least as of this writing, creating a `lyse.Run` instance in your labscript won't work unless you initialize it with `no_write=True`, because it will try to determine the name of the lyse analysis script that created it and error out. Unfortunately, initializing it with `no_write=True` prohibits saving anything, which kind of defeats the purpose, so you're better off just editing the file with h5py directly. Again though, I'm not necessarily recommending this since it is fairly hacky.

I believe that properly implementing a way to save results from the labscript itself wouldn't be too hard. If the labscript creators are ok with that idea I could give you some guidance on how to implement it. I think it would mainly involve adding a global `path` variable to the labscript module which would get updated during `labscript_init()`. It might also be helpful to then modify `lyse.Run.__init__()` so that `Run` instances can be created from the labscript, and in that case maybe set the default group for saving results to a group named after the labscript itself, since there isn't an analysis script to name it after.

Cheers,
Zak

Rohit Prasad Bhatt

Mar 6, 2021, 5:12:41 PM
to 'Philip Starkey' via The labscript suite
Hi Enrique,
I didn't fully understand your question. But have you looked into the FunctionRunner device in labscript? It can execute arbitrary python code at the beginning or end of every shot.

Regards,
Rohit Prasad Bhatt


dihm....@gmail.com

Mar 6, 2021, 10:33:51 PM
to the labscript suite
Enrique,

Another somewhat canonical way to set lyse analysis control variables is to just define extra runmanager globals. These globals are automatically saved to the shot file, are readily obtained in lyse, and can easily control script execution via try/except blocks. And because runmanager globals can be defined by any valid python code, a fair amount of creativity can be employed (i.e. not just hard-coding booleans).
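
For instance (reusing Enrique's `there_are_photons_coming` name; the other global names here are hypothetical), the value you enter for a global in runmanager can be an expression built from other globals rather than a hard-coded boolean:

```
# value entered for the runmanager global 'there_are_photons_coming':
do_cavity_scan and not mot_load_only
```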

-David

Enrique Mendez

Mar 7, 2021, 8:11:05 PM
to the labscript suite
Hey David!

That sounds like the exact solution I'm looking for.

Can you elaborate on the syntax, how these globals are defined, how it is that labscript saves them, and, more importantly, where they are saved to?

Thanks,
Enrique

Philip Starkey

Mar 7, 2021, 8:43:01 PM
to labscri...@googlegroups.com
Hi Enrique,

I have some examples in my thesis, see section 8.1 and appendices D and E. 


They're just normal runmanager globals, but instead of using them in the labscript file, you use them in your lyse analysis scripts! They're stored in the HDF5 file but you shouldn't need to access the file directly. There are methods in the normal lyse API to access them.
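
For example, in a singleshot routine something like this should work (the global name here is just illustrative):

```
from lyse import data, path

ser = data(path)  # pandas series containing the shot's globals (and saved results)
sweep_time = ser['cavity_scan_sweep_time']  # an ordinary runmanager global
```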

Cheers,
Phil

Enrique Mendez

Mar 7, 2021, 9:29:02 PM
to labscri...@googlegroups.com
Hey Phil,

I should have been clearer: I meant, how does one define runmanager globals within the python script, i.e., within the "experiment logic" file, without manually needing to add them to the globals file?

It seems your example points to defining the global in runmanager, or to long scripts where I can't discern which globals are initially defined in runmanager and which are initially defined in the script and then saved in the shot.


Best,
Enrique

Philip Starkey

Mar 8, 2021, 7:43:43 PM
to labscri...@googlegroups.com
Hi Enrique,

Ah I see. You can't really define them within the experiment logic script itself. Could you maybe explain in more detail what you are trying to do/why you need to define it in the experiment logic? It might be possible to do what you want in another way and if not, it's good to have concrete examples of what we can't currently do so we can direct development to correct those deficiencies!

Cheers,
Phil

dihm....@gmail.com

Mar 10, 2021, 2:48:41 PM
to the labscript suite
Enrique,

To add on to Phil's comment, the typical use case for labscript/runmanager is that the labscript file/structure is largely fixed and shot-to-shot control is done solely through modifying runmanager globals. Put another way, it is not really recommended to directly modify the script every time you want to change some detail. This often means just setting variable times/values with runmanager. But you can also use runmanager globals to control script structure. For instance, a boolean could define when certain functions are run. A crude example might be:

```
def load_MOT(t):
    # does MOT loading at time t
    # takes MOT_load_time to happen
    return t+MOT_load_time

def load_trap(t):
    # loads trap
    # takes trap_load_time to happen
    return t+trap_load_time

def abs_image(t):
    camera1.acquire(t)
    t += blow_time
    camera1.acquire(t)
    t += background_time
    camera1.acquire(t)
    return t

def count_photons(t):
    # configure counting however
    return t + time_to_count

start()

t = 0

t += start_delay
t = load_MOT(t)
if trapped:
    t = load_trap(t)
if image:
    t = abs_image(t)
elif count and not image:
    t = count_photons(t)

stop(t+1e-3)
```

In this very crude example, MOT_load_time, trap_load_time, blow_time, background_time, time_to_count, and start_delay are all runmanager variables used by the script to define the potentially variable timing. The booleans trapped, image, and count control if the MOT atoms are loaded into the trap, if an absorption image should be taken, and if photons should be counted. You could also define runmanager globals for use with lyse like
```
photons_present = trapped and not image
photon_start_time = start_delay + MOT_load_time + trap_load_time
```
These variables are not (explicitly) used by the script, but are used to control lyse analysis by reading the runmanager globals. An example single shot routine would look like
```
from lyse import *

ser = data(path)

try:
    photons_present = ser['photons_present']
except KeyError:
    photons_present = False  # the global isn't defined for this shot

if photons_present:
    # do analysis
    photon_start_time = ser['photon_start_time']
```
This script first checks whether the control variable is even defined; if not, it skips the analysis. Then it checks whether the analysis is wanted; if not, it also skips it.

Hopefully this rough example makes sense and is actually something that helps. I'll admit this paradigm is a little painful in the sense that not everything about your experiment is set in the same place (i.e. the script file), but in the end it tends to make the experiment much quicker to reconfigure.

-David

Enrique Mendez

Mar 11, 2021, 11:46:45 AM
to the labscript suite
Hey All, 

Thanks for the responses.

For Phil: let me be more explicit about why it's useful to be able to define global variables. For David: we are not trying to do simple parameter scans across many experiments; we are doing multiple repetitions of a subsequence within a given experiment to save time and, in some cases, because no other way of extracting the data is possible.

Experimental Setup: Our experiment is all about atoms inside a cavity, and we can gain information about the atoms' spin state by scanning light across the cavity and looking at the transmission. This is our most important and most frequently used mechanism for gaining information about the atoms. It also tells us about our cavity.

We need to perform this cavity scan for a few reasons.

  • To characterize the cavity to see if its frequency has drifted.
  • To measure the spin state along one axis.
  • Possibly to perform weak measurements.
  • To measure the lifetime of our atoms.
  • To characterize spin mixing effects from our cooling beams.
  • If we implement atom recycling, to redo the same experiment without worrying about long MOT loading times.

You can see that for a few of these tasks one needs to repeat this cavity scan multiple times inside a single sequence, either to understand the time evolution of the system or simply to gather more data (the weak measurement and spin mixing cases).


Definition for a Subsequence Function:

Since this experimental subsequence is so elementary to our sequence, it is natural to define a function for it that uses globals.

```
cavity_scan_parameters = {}

def cavity_scan(t, label):
    '''Scan light across the cavity at time t.
    label = name of the cavity scan you are performing.
    '''
    cavity_scan_parameters[label] = (cavity_scan_sweep_start_time, cavity_scan_sweep_duration)
    sweep_light(t, cavity_scan_sweep_start_time, cavity_scan_sweep_duration)
```


Example Utilization in Sequence Logic.
In a single sequence, I could do 

```
cavity_scan(t=0, label="measure_cavity_frequency")

for x in range(3):
    t += dt
    cavity_scan(t, label="measure_spin_up_state")
    rotate_atoms(t + 0.1)
    cavity_scan(t + 0.2, label="measure_spin_down_state")
```

In another sequence, for measuring the atomic lifetime, I could do

```
i = 0
while t < 3:
    t += dt
    i += 1
    cavity_scan(t, label=f"atomic_lifetime_measure_spin_up_state_{i}")
    rotate_atoms(t + 0.1)
    cavity_scan(t + 0.2, label=f"atomic_lifetime_measure_spin_down_state_{i}")
```

In both cases, the function has taken care to remember each type of scan that was done and its parameters, so I don't have to extract them from the compiled traces. This lends itself to robust analysis code. It is also necessary because our photon counter dumps its data into one binary file, and to decode it we need to know these parameters and times.

Example Utilization in Analysis

extract_photon_arrival_times.py:
```
for key in cavity_scan_parameters:
    data = extract_photon_numbers(hdf, cavity_scan_parameters[key])
    save_result(key, data)
```

analyze_empty_cavity_frequency.py:
```
if "measure_cavity_frequency" in photon_labels:
    '''measure cavity frequency drift and correct it'''
```

analyze_atom_lifetime.py:
```
if any(label.startswith("atomic_lifetime") for label in photon_labels):
    '''calculate atomic lifetime'''
```

Conclusion

Hopefully this makes it clearer that this would simplify the analysis and give the experimentalist more freedom to do complicated sequences. If you know a simpler way to extract the cavity sweep parameters without declaring a global variable, please let me know.

Thanks!
Enrique

dihm....@gmail.com

Mar 11, 2021, 3:31:42 PM
to the labscript suite
Enrique,

Thanks for the great, detailed example! Sorry to keep inserting myself in the conversation, but you have piqued my interest. I'm always curious about clever ways to use labscript to do more complicated things than just straight procedural shots and you have certainly found an important use case!

What I might reiterate is that labscript already has a system for passing globals between components (namely runmanager globals). Now it is certainly limited relative to general python, as you've found, but I am pretty sure only minor modifications of your script would allow for its use. That said, what I'm about to suggest is definitely not easier to write than what you have (at least up front), and that may be a compelling enough argument for adding some sort of decorator to the runmanager compiler that saves variables at the end of script compilation alongside the other runmanager globals. I'm just not sure how easy that is to do, particularly in a way general and robust enough for the mainline. With that said, let's give it a go.

The general principle is that any global you want to keep needs to be defined in runmanager, and then you write your script either so that its structure is driven by those definitions (ideally) or at least so that the runmanager globals mirror what the script does.

Taking your example 1.

In Runmanager
```
# All the constants, obviously
measure_cav_freq = True
measure_cav_freq_start = 0  # in seconds
measure_cav_freq_duration = 1e-3
spin_loops = 3
loop_scan_types = ['up', 'down']
lnum = len(loop_scan_types)
loop_start = 1
loop_dt = 0.5
loop_scan_times = np.array([0, 0.2])  # an array so the per-loop offset below broadcasts
loop_scan_durations = np.full(len(loop_scan_types), 1e-3)

scan_types = np.tile(loop_scan_types, spin_loops)
scan_times = np.array([loop_scan_times + i*loop_dt for i in range(spin_loops)]).flatten() + loop_start
scan_durations = np.tile(loop_scan_durations, spin_loops)
```

You now have arrays that give the scan types, start times, and durations. If you insist on putting them into a single dict, that's fine (but it doesn't work in this example since the labels would overwrite each other anyway). Note that runmanager tries to auto-expand any sequence type into a set of shots. This can be manually disabled by clearing the "Expansion" column. The entire array will then be passed in for the variable. I'll note that this auto-expansion can cause trouble since it is the default behavior when runmanager is first loaded. If you have a lot of arrays that get cross-producted together, you can end up in a situation where runmanager stalls trying to calculate how many shots it would take before it lets you clear out the expansions. Like I said, this is certainly not a perfect solution to your need. Modifying runmanager to not do expansions automatically is also possible, but maybe not any easier than implementing your desired solution.

In the script

```
def cavity_scan(t, duration, label):
    sweep_light(t, duration)

if measure_cav_freq:
    cavity_scan(measure_cav_freq_start, measure_cav_freq_duration, 'measure_cavity_frequency')

t = loop_start  # so rotate_atoms() below lines up with the loop timing
for i in range(spin_loops):
    cavity_scan(scan_times[lnum*i], scan_durations[lnum*i], scan_types[lnum*i])
    rotate_atoms(t + 0.1)
    cavity_scan(scan_times[lnum*i + 1], scan_durations[lnum*i + 1], scan_types[lnum*i + 1])
    t += loop_dt
```

In analysis

```
from lyse import *

seq = data(path)

# pull the runmanager globals defined above out of the shot data
scan_types = seq['scan_types']
scan_times = seq['scan_times']
scan_durations = seq['scan_durations']

for typ, start, dur in zip(scan_types, scan_times, scan_durations):
    dat = extract_photon_numbers(hdf, start, dur)  # hdf: handle to the photon counter data, as in Enrique's sketch
    save_result(typ, dat)

if seq['measure_cav_freq']:
    # do stuff
    pass
```
The other example is similar in construction. I don't claim this is the only, or even best, way to handle this using current runmanager. Again, I recognize this is not as simple to work with as your desired solution. It requires writing a lot of python one-liners, making sure runmanager doesn't die on loadup with so many lists, and front-loading a lot of the script structuring logic to runmanager itself. But it is something that works today and doesn't need any monkeying around under the hood.

Hopefully that helps,
-David

P.S. If you decide to do this, a couple of further tips. First, don't hesitate to use different scripts for different tasks and switch between them instead of trying to code a master script that can do literally anything. Switching scripts is basically painless. Second, make sure to divide up your globals groups so they can be disabled when not in use. Even if there is some naming overlap, this helps cut down on unexpected behavior and keeps the number of globals (and therefore list expansions to disable) to a minimum. When doing this, note that lyse can get a bit funny about shots with different globals. Lyse does well generally, but there are definitely edge cases that can be annoying. Avoid these by clearing the lyse analysis dataframe of shots when making script/globals changes.

Philip Starkey

Mar 12, 2021, 7:27:07 PM
to labscri...@googlegroups.com
Hi Enrique,

Thanks for the added context. That makes a lot of sense. I'm not surprised there is no obvious solution to this, as the "define analysis globals in runmanager" approach is already abusing the separation of shot parameterisation from analysis code, and the way you need to define your experiment logic for your cavity experiments is also pushing the boundaries of what labscript can do!

It seems pretty clear to me that what you need (and what we should probably add in mainline labscript) is a way to save arbitrary metadata from within the experiment logic. If you are interested in adding this feature yourself, please open an issue in the labscript repository so we can discuss it with you in more detail. If you just want a quick hack, I would suggest just opening the HDF5 file and storing some metadata in a new group there (but be aware that if the feature is eventually added to labscript it may be incompatible with your implementation).

An additional option could be to try and use the "time marker" feature in labscript to store the necessary metadata. That's also a bit of an abuse of the feature, but it could be sufficient. Basically, you can call add_time_marker(t, label), where label is a string, and it will be saved in a time_markers group in the HDF5 file. I don't think lyse has a way of pulling that out automatically, but you can always open the HDF5 file yourself to get it. Alternatively, you can try to define the logic in runmanager globals as David suggests.
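
For reference, a rough sketch of pulling the markers back out in an analysis script (treat the 'time_markers' location as an assumption and inspect your shot file to confirm exactly how they are stored):

```
import labscript_utils.h5_lock  # import before h5py
import h5py
from lyse import path

with h5py.File(path, 'r') as f:
    # assumed location based on the description above; check your shot file
    markers = f['time_markers'][()]
print(markers)
```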

If you want to write to the HDF5 file from within your experiment logic file I think you just need to do
```
import labscript_utils.h5_lock
import h5py

with h5py.File(compiler.hdf5_filename, 'a') as hdf5_file:
    # Your code goes here
    pass
```

Hope that helps provide some additional options!

Cheers,
Phil



--
Dr Philip Starkey
Senior Technician (Lightboard developer)

School of Physics & Astronomy
Monash University
10 College Walk, Clayton Campus
Victoria 3800, Australia

Enrique Mendez

Mar 25, 2021, 10:53:40 AM
to labscri...@googlegroups.com
Hey Phil! 

I’m not so sure it needs to be added into mainline labscript. It sort of already is. Your last bit about writing to the HDF file is what I had in mind. 

Anyone who wants to save metadata need only make a class that augments labscript function calls for their specific device or use case.

That’s precisely what I started doing. I’m going to try to save data via the python pickle package in an ExperimentalCavity class. However, when I tried running your code, runmanager complained that “compiler” is not defined.

Can you please clarify how I can access the HDF filename? And whether it has any constraints, i.e. will it only be defined after start() is called, or after stop() is called?

Also, should I be worried about corrupting the HDF file? I will only use it via “with” statements, so my primary concern is whether labscript has it open outside of my function calls.

Thanks,
Enrique

Philip Starkey

Mar 28, 2021, 8:39:04 AM
to labscri...@googlegroups.com
Hi Enrique,

In your labscript file you will want to do:
```
from labscript import compiler
import labscript_utils.h5_lock, h5py

print("HDF5 filepath:", compiler.hdf5_filename)

# with h5py.File(compiler.hdf5_filename, 'a') as hdf5_file:
#     # do something with the file here
#     pass
```

That prints the HDF5 filepath to the runmanager terminal for me. It should be fine to do this at any point in the script. Hope that helps!
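
As a minimal sketch of actually saving something from there (the 'user_metadata' group name is just an arbitrary choice of mine, not anything labscript looks for, and the values are placeholders):

```
with h5py.File(compiler.hdf5_filename, 'a') as hdf5_file:
    group = hdf5_file.require_group('user_metadata')  # arbitrary group name
    group.attrs['cavity_scan_start_time'] = 1.5       # whatever value you computed
    group.attrs['cavity_scan_sweep_time'] = 2e-3
```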

Cheers,
Phil

Enrique Mendez

Mar 31, 2021, 2:51:21 PM
to the labscript suite
Hey Phil!

Thanks for the help. I've got metadata functioning! It's incredibly helpful, so thank you for that.
For posterity, I've attached my particular example usage in case someone wishes to do something similar in the future.

I've run into an unforeseen issue, though, which tells me I should focus on adding this feature to mainline labscript.

While I can correctly add metadata when I compile, if BLACS is set to repeat mode, the repetition HDF files do not contain the metadata. This tells me BLACS isn't copying my metadata group.

Do you know of any workarounds, or why this is? Or should this discussion move over to GitHub?

Best,
Enrique
meta_data_example.zip

Chris Billington

Mar 31, 2021, 8:05:46 PM
to labscri...@googlegroups.com
Hi Enrique,

The relevant change in BLACS would be in this function:


where there is a list of groups to copy to repeated shot files. If you're going to implement this as a feature in labscript itself, that function would need to include the group name where the custom metadata is stored.

Regards,

Chris

Philip Starkey

Mar 31, 2021, 8:21:31 PM
to labscri...@googlegroups.com
Hi all,

Maybe this metadata should be stored in the shot_properties HDF5 group then...is that not actually what it is for? (I had completely forgotten it existed).


Then you wouldn't have to worry about opening the HDF5 file for storing simple variables; you would just put them in the compiler.shot_properties dictionary. It gets slightly more complicated if you want to write arrays as HDF5 datasets, but something could be figured out. And BLACS would not need updating because it already preserves that group between repeats.
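
In the labscript that would be a one-liner, something like the sketch below (I haven't checked how the values come back out on the analysis side, so treat it as a starting point):

```
from labscript import compiler

# somewhere in the experiment logic, once the value is known:
compiler.shot_properties['cavity_scan_start_time'] = cavity_scan_start_time
```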

Have I missed anything?

Cheers,
Phil

