How to fix an issue with updating a package in SWAN?
At times, in order to update the lhcsmapi package, one has to execute the command
$ pip install --user --upgrade lhcsmapi
In case this command returns an error, please try to execute it again. Should that also fail, please uninstall the package by executing
$ pip uninstall lhcsmapi
and perform a fresh installation of the package
$ pip install --user lhcsmapi
Should you experience any further issues with installing a package, please contact SWAN support or use the preinstalled package with the environment script.
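If you prefer to script the recovery steps above, the sketch below retries the upgrade once and then falls back to a clean reinstall. This is an illustrative helper, not part of lhcsmapi or SWAN: the function name is ours, and pip is invoked through the current Python interpreter.

```python
import subprocess
import sys


def upgrade_or_reinstall(package):
    """Try a user-site upgrade twice; on repeated failure, uninstall the
    package and perform a fresh installation, as described above.

    Illustrative helper, not part of lhcsmapi or SWAN.
    """
    upgrade = [sys.executable, "-m", "pip", "install", "--user", "--upgrade", package]
    for _ in range(2):  # the FAQ suggests retrying once on error
        if subprocess.call(upgrade) == 0:
            return True
    # Both upgrade attempts failed: reinstall from scratch.
    subprocess.call([sys.executable, "-m", "pip", "uninstall", "-y", package])
    return subprocess.call(
        [sys.executable, "-m", "pip", "install", "--user", package]
    ) == 0


# Example: upgrade_or_reinstall("lhcsmapi")
```

If the scripted reinstall also fails, fall back to contacting SWAN support as described above.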
How to obtain NXCALS Access with SWAN?
The API allows you to query signals from PM and NXCALS. The NXCALS database requires dedicated access rights to be assigned to each user. If you want to query NXCALS with the API, please follow the procedure below to request NXCALS access.
- Go to http://nxcals-docs.web.cern.ch/current/user-guide/data-access/nxcals-access-request/ for the most up-to-date procedure
- Send an e-mail to firstname.lastname@example.org with the following information:
- your NICE username
- system: WinCCOA, CMW
- NXCALS environment: PRO
Optionally, you can mention that the NXCALS database will be accessed through SWAN. Once the access is granted, you can use NXCALS with SWAN.
Why am I getting
NameError: name 'spark' is not defined error?
There are two requirements behind this error. First, you need to select the NXCALS option in the SWAN configuration. Second, you need to establish a connection to the NXCALS Spark cluster. For more details please visit http://sigmon-docs.web.cern.ch/hwc-and-fpa-notebooks/#42-connect-to-the-nxcals-spark-cluster
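To get a clearer message than the raw NameError, you can guard notebook code with a small check. This is a hedged sketch: the require_spark helper is ours, not part of SWAN or NXCALS; in SWAN, the spark session object appears in the notebook namespace only after the cluster connection is established.

```python
def require_spark(namespace):
    """Return the SWAN-provided `spark` session from the given namespace,
    or raise a clear error explaining how to obtain one.

    `require_spark` is an illustrative helper, not part of SWAN or NXCALS.
    """
    session = namespace.get("spark")
    if session is None:
        raise RuntimeError(
            "No Spark session found: select the NXCALS option in the SWAN "
            "configuration and connect to the NXCALS Spark cluster first."
        )
    return session


# In a notebook cell, after connecting to the cluster:
# spark = require_spark(globals())
```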
How to find the FPA report and Excel table?
The FPA reports are stored on EOS. The exact location is determined by the circuit type and its name. For more details please visit http://sigmon-docs.web.cern.ch/hwc-and-fpa-notebooks/#47-access-to-reports
What to do if the NXCALS cluster connection takes longer than usual?
This issue may occur occasionally. In this case, reconnect to SWAN. Should the problem persist, please contact the SWAN team to resolve the issue on their side.
Can I connect to SWAN with CERN Terminal Server (cernts.cern.ch)?
Yes! You can connect to the terminal server and run SWAN from there. Please note that SWAN works with any browser, both on the CERN general network and outside of CERN.
Where could I learn more about SWAN?
Please take a look at a very informative Academic Training Lecture on SWAN: https://indico.cern.ch/event/847492/
What to do in case of
Error while connecting to Spark cluster?
As of 15.02.2020, two simultaneous Spark connections are possible with SWAN. We are in the process of increasing that limit to four active sessions.
If you see this error, it most likely means that you have exhausted the number of allowed Spark connections.
Please note that closing a notebook does not close the Spark connection. Leaving sessions open may hold resources and block you from performing an analysis.
Thus, in case you do not plan to use the notebook, please click the Spark connection icon and press the
Restart Spark session button.
The notebook will keep the results of your analysis.
In case you want to erase the notebook content and disconnect from Spark at the same time, please select from the top menu
Kernel and afterwards
Restart and Clear Output.
You may always check the running processes on SWAN by clicking the three dots
... in your project space and selecting the corresponding menu entry.
This will open a tab on the right side of your browser with an overview of running processes. You can terminate a notebook from there.