
How to fix an issue with updating a package in SWAN?


At times, updating the lhcsmapi package requires executing the command

$ pip install --user --upgrade lhcsmapi

twice in the SWAN terminal (cf. the error message in the figure below).

In case this command returns an error, please try executing it again. Should that also fail, please uninstall the package by executing

$ pip uninstall lhcsmapi

and performing a fresh installation of the package

$ pip install --user lhcsmapi
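The recovery recipe above can be sketched in Python as well (a hypothetical helper, not part of lhcsmapi; it simply shells out to pip via the current interpreter):

```python
import subprocess
import sys

def run_pip(args: list[str]) -> bool:
    """Run pip with the current interpreter; return True on success."""
    return subprocess.run([sys.executable, "-m", "pip", *args]).returncode == 0

def reinstall_on_failure(pkg: str, run=run_pip) -> None:
    """Mirror the FAQ recipe: try the upgrade, retry once on failure,
    then fall back to uninstalling and performing a fresh installation."""
    upgrade = ["install", "--user", "--upgrade", pkg]
    if run(upgrade) or run(upgrade):  # first attempt, then one retry
        return
    run(["uninstall", "-y", pkg])
    run(["install", "--user", pkg])
```

Usage: `reinstall_on_failure("lhcsmapi")` from the SWAN terminal's Python prompt.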

Should you experience any further issues with installing a package, please contact SWAN support or use the preinstalled package with the environment script.

How to obtain NXCALS Access with SWAN?


The API allows querying signals from PM and NXCALS. The NXCALS database requires dedicated access rights to be assigned to a user. If you want to query NXCALS with the API, please follow the procedure below to request NXCALS access.

  1. Go to for the most up-to-date procedure
  2. Send an e-mail to with the following pieces of information:
     - your NICE username
     - system: WinCCOA, CMW
     - NXCALS environment: PRO

Optionally one can mention that the NXCALS database will be accessed through SWAN. Once the access is granted, you can use NXCALS with SWAN.

Why am I getting NameError: name 'spark' is not defined error?


There are two possible reasons. First, you need to select the NXCALS option in the SWAN configuration. Second, you need to establish the NXCALS cluster connection. For more details please visit

How to find the FPA report and Excel table?


The FPA reports are stored on EOS. The exact location is determined by the circuit type and its name. For more details please visit

What to do if the NXCALS cluster connection takes longer than usual?


This issue may occur occasionally. In this case, please reconnect to SWAN. Should the problem persist, please contact the SWAN team to resolve the issue on their side.

Can I connect to SWAN with the CERN Terminal Server?


Yes! You can connect to your local machine and run SWAN from there. Please note that SWAN works with any browser, both on the CERN general network and outside of CERN.

Where could I learn more about SWAN?


Please take a look at a very informative Academic Training Lecture on SWAN:

What to do in case of Error while connecting to Spark cluster?



As of 15.02.2020, two simultaneous Spark connections are possible with SWAN. We are in the process of increasing that limit to four active sessions.

If you obtain an error as shown below, it means that you have exhausted the number of allowed Spark connections.

Please note that closing a notebook does not close the Spark connection. Open sessions hold resources and may block you from performing an analysis. Thus, if you do not plan to use the notebook, please click the Spark connection icon and press the Restart Spark session button.
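When working from code rather than the UI, the same cleanup can be sketched as follows (a hypothetical helper; `spark` is the session object available in a connected SWAN notebook, and `stop()` is the standard PySpark `SparkSession` call):

```python
def release_spark(spark) -> None:
    """Stop a Spark session to free one of the limited connection slots.

    Stopping the session releases cluster resources; it does not touch
    the results already computed in the notebook.
    """
    try:
        spark.stop()
        print("Spark session stopped.")
    except Exception as exc:  # e.g. session already stopped or connection lost
        print(f"Could not stop the Spark session: {exc}")
```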

The notebook will retain the results of your analysis.

In case you want to erase the notebook content and disconnect from Spark at the same time, please select Kernel from the top menu and then Restart and Clear Output.

You may always check the running processes on SWAN by clicking the three dots ... in your project space and selecting Running Processes.

This will open a tab on the right side of your browser with an overview of running processes. You can terminate a notebook with the Shutdown button.