Loading and visualizing polysomnography data

If you have polysomnography data in European Data Format (.edf), you can use the MNE package to load and preprocess your data in Python. MNE also supports several other standard formats (e.g. BrainVision, BDF, EEGLab). A simple preprocessing pipeline using MNE is shown below.

import mne
# Load the EDF file
raw = mne.io.read_raw_edf('MYEDFFILE.edf', preload=True)
# Downsample the data to 100 Hz
raw.resample(100)
# Apply a bandpass filter from 0.1 to 40 Hz
raw.filter(0.1, 40)
# Select a subset of EEG channels
raw.pick(['C4-A1', 'C3-A2'])

YASA is command-line software and does not support data visualization. To scroll through your data, we recommend the free software EDFBrowser (https://www.teuniz.net/edfbrowser/).


Event detection

The spindle detection is a custom adaptation of the method of Lacourse et al. (2018). A step-by-step description of the algorithm can be found in this notebook.

The slow-wave detection combines the methods proposed in Massimini et al. (2004) and Carrier et al. (2011). A step-by-step description of the algorithm can be found here.


Both algorithms have parameters that can (and should) be fine-tuned to your data, as explained in the next question.

There are several parameters that can be adjusted in the spindle / slow-wave / artefact detection. While the default parameters should work reasonably well on most data, they may not be adequate for your data, especially if you’re working with specific populations (e.g. older adults, children, patients with certain disorders, etc.).

For the sake of example, let’s say that you have 100 recordings and you want to apply YASA to automatically detect the spindles. However, you’d like to fine-tune the parameters to your data. We recommend the following approach:

  1. Grab a few representative recordings (e.g. 5 or 10 out of 100) and manually annotate the sleep spindles. You can use EDFBrowser to manually score the sleep spindles. Ideally, the manual scoring should be high-quality, so you may also ask a few other trained individuals to score the same data until you reach a consensus.

  2. Apply YASA on the same recordings, first with the default parameters and then by slightly varying each parameter. For example, you may want to use a different detection threshold each time you run the algorithm, or a different frequency band for the filtering. In other words, you loop across several possible combinations of parameters. Save the resulting detection dataframe.

  3. Finally, find the combination of parameters that gives you results that are most similar to your own scoring. For example, you can use the combination of parameters that maximizes the F1-score of the detected spindles against your own visual detection.

  4. Use the “winning” combination to score the remaining recordings in your database.
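Step 3 above can be sketched in code. The overlap-based F1 helper below is an illustration, not part of YASA's API: manual and detected spindles are represented as (start, end) intervals in seconds (e.g. the Start and End columns of the detection dataframe), and a detection counts as a true positive when it overlaps at least one manually scored event.

```python
import numpy as np

def event_f1(manual, detected):
    """F1-score between two lists of (start, end) events, in seconds.

    A detected event counts as a true positive if it overlaps any
    manual event; unmatched detections are false positives and
    unmatched manual events are false negatives.
    """
    manual = np.asarray(manual, dtype=float)
    detected = np.asarray(detected, dtype=float)
    if len(manual) == 0 or len(detected) == 0:
        return 0.0
    # Overlap matrix: detection i overlaps manual event j if each starts
    # before the other ends
    overlap = (detected[:, None, 0] < manual[None, :, 1]) & \
              (manual[None, :, 0] < detected[:, None, 1])
    tp = overlap.any(axis=1).sum()        # detections matching a manual event
    fp = len(detected) - tp               # spurious detections
    fn = (~overlap.any(axis=0)).sum()     # missed manual events
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: 3 manual spindles, 3 detections (one false positive, one miss)
manual = [(10.0, 11.0), (50.0, 51.2), (80.0, 80.9)]
detected = [(10.2, 11.1), (50.1, 51.0), (200.0, 201.0)]
print(round(event_f1(manual, detected), 2))  # 0.67
```

In practice, you would loop over candidate parameter combinations (e.g. different detection thresholds passed to yasa.spindles_detect), compute this score for each run against the manual scoring, and keep the combination with the highest F1.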

YASA does not currently support visual editing of the detected events. However, you can import the events as annotations in EDFBrowser and edit the events from there. If you simply want to visualize the detected events (no editing), you can also use the plot_detection method.

Sleep staging

YASA was trained and evaluated on a large and heterogeneous database of thousands of polysomnography recordings, including healthy individuals and patients with sleep disorders. Overall, the results show that YASA matches human inter-rater agreement, with an accuracy of ~85% against expert consensus scoring. The full validation of YASA was published in eLife (Vallat & Walker, 2021).

However, our recommendation is that YASA should not replace human scoring, but rather serve as a starting point to speed up sleep staging. If possible, you should always have a trained sleep scorer visually check the predictions of YASA, with a particular emphasis on low-confidence epochs and/or N1 sleep epochs, as these are the epochs most often misclassified by the algorithm. Finally, users can also leverage the yasa.plot_spectrogram() function to plot the predicted hypnogram on top of the full-night spectrogram. Such plots are very useful to quickly identify blatant errors in the hypnogram.
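For the spectrogram overlay, the hypnogram must be upsampled to the sampling frequency of the data (YASA provides yasa.hypno_upsample_to_data for this). Conceptually, the upsampling simply repeats each 30-second epoch value sf * 30 times; the stage values and sampling frequency in this sketch are made up:

```python
import numpy as np

sf = 100  # sampling frequency of the data in Hz (hypothetical)
# One integer per 30-s epoch (toy example; YASA's convention is
# 0=Wake, 1=N1, 2=N2, 3=N3, 4=REM)
hypno = np.array([0, 1, 2, 2, 3, 2, 4])

# Repeat each epoch value so the hypnogram has one value per data sample
hypno_up = np.repeat(hypno, int(sf * 30))
print(hypno_up.shape)  # (21000,) = 7 epochs * 30 s * 100 Hz
```

The resulting per-sample array can then be passed together with the data to yasa.plot_spectrogram.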


YASA does not come with a graphical user interface (GUI), so editing the predicted hypnogram is not currently possible within YASA itself. The simplest workaround is to export the hypnogram in CSV format and then open the file, together with the corresponding polysomnography data, in an external GUI, as shown below.


EDFBrowser is a free software for visualizing polysomnography data in European Data Format (.edf), which also provides a module for visualizing and editing hypnograms.

The code below shows how to export the hypnogram in an EDFBrowser-compatible format. It assumes that you have already run the algorithm and stored the predicted hypnogram in an array named hypno.

# Export to a CSV file compatible with EDFBrowser
import numpy as np
import pandas as pd
hypno_export = pd.DataFrame({
  "onset": np.arange(len(hypno)) * 30,
  "label": hypno,
  "duration": 30})
hypno_export.to_csv("my_hypno_EDFBrowser.csv", index=False)
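As a quick sanity check, here is the same export applied to a toy four-epoch hypnogram (the stage labels are made up): each row gives the epoch onset in seconds, the stage label, and the epoch duration.

```python
import numpy as np
import pandas as pd

# Toy predicted hypnogram: one label per 30-s epoch
hypno = np.array(["W", "N1", "N2", "N2"])

hypno_export = pd.DataFrame({
    "onset": np.arange(len(hypno)) * 30,  # epoch start times: 0, 30, 60, 90 s
    "label": hypno,
    "duration": 30})
print(hypno_export.to_string(index=False))
```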

You can then import the hypnogram in EDFBrowser by clicking on “Import annotations/events” in the “Tools” menu. Then, select the “ASCII/CSV” tab and change the parameters as follows:


Click “Import”. Once it’s done, the hypnogram can be enabled via the “Window” menu. A dialog will appear where you can set up the labels for the different sleep stages and their mapping to the annotations in the file. The default parameters should work. When using the Annotation editor, the hypnogram is updated in real time as you add, move, or delete annotations. Once you’re done editing, you can export the edited hypnogram with “Export annotations/events” in the “Tools” menu.



SpiSOP is an open-source Matlab toolbox for the analysis and visualization of polysomnography sleep data. It comes with a sleep scoring GUI. As explained in the documentation, the hypnogram should be a tab-separated text file with two columns (no headers). The first column has the sleep stages (0: Wake, 1: N1, 2: N2, 3: N3, 5: REM) and the second column indicates whether the current epoch should be marked as artefact (1) or valid (0).

hypno_int = pd.Series(hypno).map({"W": 0, "N1": 1, "N2": 2, "N3": 3, "R": 5}).to_numpy()
hypno_export = pd.DataFrame({"label": hypno_int, "artefact": 0})
hypno_export.to_csv("my_hypno_SpiSOP.txt", sep="\t", header=False, index=False)


Visbrain is an open-source Python toolbox that includes a module for visualizing polysomnography sleep data and scoring sleep (see screenshot below).


Visbrain accepts several formats for the hypnogram. The code below shows how to export the hypnogram in the Elan software format (i.e. a text file with the .hyp extension):

hypno_int = pd.Series(hypno).map({"W": 0, "N1": 1, "N2": 2, "N3": 3, "R": 5}).to_numpy()
header = "time_base 30\nsampling_period 1/30\nepoch_nb %i\nepoch_list" % len(hypno_int)
np.savetxt("my_hypno_Visbrain.hyp", hypno_int, fmt='%s', delimiter=',', newline='\n',
           header=header, comments="", encoding="utf-8")

YASA was designed only for human scalp data and as such will not work with animal data or intracranial data. Adding support for such data would require the two following steps:

  1. Modifying (some of) the features. For example, rodent sleep does not have the same temporal dynamics as human sleep, and therefore one could modify the length of the smoothing window to better capture these dynamics.

  2. Re-training the classifier using a large database of previously-scored data.

Despite these required changes, one advantage of YASA is that it provides a useful framework for implementing such sleep staging algorithms. For example, one can save a huge amount of time by simply re-using and adapting the built-in yasa.SleepStaging class. In addition, all the code used to train YASA is freely available at https://github.com/raphaelvallat/yasa_classifier and can be re-used to re-train the classifier on non-human data.


You can click “Watch” on the GitHub repository of YASA. Whenever a new release is out, you can upgrade your version by typing the following line in a terminal window:

pip install --upgrade yasa

There are many ways to contribute to YASA, even if you are not a programmer: for example, reporting bugs or results that are inconsistent with other software, improving the documentation and examples, or even buying the developers a coffee!

To cite YASA, please use the eLife publication:


@article{vallat2021open,
  title={An open-source, high-performance tool for automated sleep staging},
  author={Vallat, Raphael and Walker, Matthew P},
  journal={eLife},
  volume={10},
  pages={e70092},
  year={2021},
  publisher={eLife Sciences Publications, Ltd},
  doi={10.7554/eLife.70092},
  url={https://elifesciences.org/articles/70092}
}