
Post mortem

PM Data Collection and Storage (c/o Nikolai Trofimov)

Example PM SDDS files: FGC, QPS1, QPS2

Recent talks

 

Outline of proposed solution

 

Requirements

Key Use Cases:

  1. Immediate diagnostics after a beam loss: what happened?
  2. Off-line analysis of a PM event.
  3. MB quench.
  4. Decision support analysis: is it safe to continue? (key example: quench)

Scope: all relevant beam measurements and all equipment information.

Gateway and push

Collection

Processing

Analysis

Data Providers

PIC

What triggers the push of PM data? What happens when the BIC is not present? What triggers the PM?

QPS - Herve

MB Quench analysis / Quench Signal Analysis [Adriaan]

QPS data description / QPS PM buffer example

QPS - Main Bend Quench Use Case

The buffer is sent in pieces; the server has to reassemble them.

Here is the size of the Post-Mortem buffer:

For DQAMC type A, this is 2500 records of:
. LHC time stamp (long)
. block number (int)
. 7 doubles
. 1 int

For DQAMC type B, this is 2500 records of:
. LHC time stamp (long)
. block number (int)
. 8 doubles
. 1 int

For DQAMG, this is 2500 records of:
. LHC time stamp (long)
. block number (int)
. 40 doubles
. 10 ints

Please note that we currently do not know whether the ints will have to be split into bit fields; this will depend on the tool used to analyse the Post-Mortem data.
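
For illustration only, one possible way to represent these records in C++ is sketched below. The field names and packing are assumptions, not the official DQAMC/DQAMG definitions, and the open question about splitting the ints into bits is left as plain status words:

    // Sketch only: field names and packing are assumptions.
    #include <cstdint>

    // One sample of a DQAMC type A record (7 doubles + 1 int).
    struct DqamcTypeASample {
        int64_t lhcTimeStamp;   // LHC time stamp (long)
        int32_t blockNumber;    // number of the block
        double  values[7];      // analogue channels (names unknown here)
        int32_t status;         // packed word; bit splitting still open
    };

    // DQAMC type B differs only in carrying 8 doubles per sample.
    struct DqamcTypeBSample {
        int64_t lhcTimeStamp;
        int32_t blockNumber;
        double  values[8];
        int32_t status;
    };

    // DQAMG carries 40 doubles and 10 ints per sample.
    struct DqamgSample {
        int64_t lhcTimeStamp;
        int32_t blockNumber;
        double  values[40];
        int32_t status[10];
    };

    // Each buffer holds 2500 such samples.
    constexpr int kSamplesPerBuffer = 2500;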

LBDS - Bren

The only particularity of the LBDS is the maximum of roughly 10 s within which it needs a green light from the PM system after a dump, for compatibility with the "Inject and Dump" mode.

 

Power converters (c/o Steve)

What data is required and at what frequency?

My intention at the moment is to buffer the status data that the gateways receive at 50 Hz from all FGCs for an appropriate period in a circular buffer, and to send that out in the case of a post-mortem event. If this is sufficient, then post-mortem can be handled at the gateway level without needing to involve the FGCs. If more data, or a higher resolution, is required, then the gateways will have to send commands to the FGCs to extract additional data.
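
As a rough sketch of this idea only (not the actual FGC gateway code; the frame contents, sizes and names are assumptions), the gateway-level circular buffer could look like this:

    // Keep the 50 Hz FGC status frames in a circular buffer and dump the
    // whole window when a post-mortem event arrives. Sketch only.
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct FgcStatusFrame {
        int64_t timeStamp;                 // acquisition time stamp
        std::array<std::uint8_t, 64> raw;  // raw status block (size assumed)
    };

    class PostMortemRingBuffer {
    public:
        explicit PostMortemRingBuffer(std::size_t seconds, std::size_t hz = 50)
            : frames_(seconds * hz) {}

        // Called at 50 Hz for every incoming status frame.
        void push(const FgcStatusFrame& f) {
            frames_[next_] = f;
            next_ = (next_ + 1) % frames_.size();
            if (count_ < frames_.size()) ++count_;
        }

        // Called on a post-mortem trigger: return the buffered window in
        // chronological order, ready to be sent to the PM server.
        std::vector<FgcStatusFrame> dump() const {
            std::vector<FgcStatusFrame> out;
            out.reserve(count_);
            std::size_t start = (next_ + frames_.size() - count_) % frames_.size();
            for (std::size_t i = 0; i < count_; ++i)
                out.push_back(frames_[(start + i) % frames_.size()]);
            return out;
        }

    private:
        std::vector<FgcStatusFrame> frames_;
        std::size_t next_ = 0;
        std::size_t count_ = 0;
    };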

Issues

  1. In what format do the clients send the data? Need FESA, PVSS and other APIs. (Andy, Steve, Herve, Franck, Sergio, Lionel)
  2. Write binary and do the number crunching at the top level?
  3. Mechanism: must be secure (CMW...?).
  4. Destination: where is the data put? NFS / directory structure.
  5. Conversion to SDDS or another format; must be browsable.
  6. PM server: executive management and checks.
  7. Database upload and archival; possibly a generic browser based on the database.
  8. System-specific analysis tools, automatic and on-request.
  9. Pull in logging, alarms and settings.

1. In what format do the clients put their data? XML vs. native vs. SDDS. Data tagging is key: data items have to be named.

2. Compression and CMW seem OK. (Franck, Sergio)

3. Put it on a file server; set up the directory hierarchy, etc.

4. Final file data format and conversion: still to be decided. XML or SDDS?

Steve's comments

What format should the data be sent in?

I don't have a particularly strong opinion on this subject.  The candidates are SDDS and XML plus numerous other potentially suitable languages.  Unless a language requires a lot of work on the front-end side  to format the data, I think the decision should be made by the people who  will be responsible for the receiving side, since they will have to extract and manipulate the received data.

Ideally, something simple and easy to construct.

How should the data be sent?

The basic options are whether to write directly to the NFS file system from the front-ends or to use some other communications protocol.

With NFS we need to be careful as to its behaviour in the event of a network outage or the server being down.  On the plus side, it is a very simple solution.
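
As an illustration of the care needed (not a proposal for the final mechanism; the paths and the fallback spool directory are assumptions), a front-end writing to the NFS area could write to a temporary file, flush and sync it, rename it into place, and fall back to a local directory if the server is unreachable:

    // Sketch only: careful file write for an NFS-mounted PM directory.
    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <unistd.h>   // fsync, fileno (POSIX)

    static bool writeFile(const std::string& dir, const std::string& name,
                          const char* data, std::size_t len) {
        std::string tmp = dir + "/." + name + ".tmp";
        std::FILE* f = std::fopen(tmp.c_str(), "wb");
        if (!f) return false;
        bool ok = std::fwrite(data, 1, len, f) == len
               && std::fflush(f) == 0
               && fsync(fileno(f)) == 0;      // force the data to the server
        ok = (std::fclose(f) == 0) && ok;
        // Rename makes the file visible only once it is complete.
        return ok && std::rename(tmp.c_str(), (dir + "/" + name).c_str()) == 0;
    }

    bool storePmFile(const std::string& name, const char* data, std::size_t len) {
        if (writeFile("/nfs/pm/incoming", name, data, len))   // assumed NFS path
            return true;
        // NFS outage or server down: keep the data in a local spool for later.
        return writeFile("/var/spool/pm", name, data, len);
    }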

Otherwise, a robust and reliable communications protocol should be found.  This means either implementing a library to be used on the front-ends or CMW in some form.

 

First suggestion... Mike (based on what they've put in place at RHIC)

Essentially:

Worry about:

Discussion [29/3/2004] of above including a look at power converters, QPS, and SDDS (Notes by Robin)

 

Second suggestion - Sergio

Here are some details on my tests.

1. The client creates an XML document using the libxml2 library and some basic C/C++ functions which I developed. The document is validated against an XML Schema (optional).
2. The document is compressed using the zlib library.
3. The document is sent to the PM server using CMW. The functions are written in C/C++. The maximum size is 500 kB in compressed form, which corresponds to roughly 3 MB or more of XML.
4. The document is decompressed on arrival at the PM server. The PM server is written in Java.
5. The document is validated against the XML Schema (optional).
6. The document is saved in the local database.
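
A minimal sketch of steps 1 and 2 follows. The element names are assumptions and the CMW call is omitted; only libxml2 and zlib calls are shown:

    // Build a small XML document with libxml2 and compress it with zlib.
    #include <libxml/tree.h>
    #include <zlib.h>
    #include <cstdio>
    #include <vector>

    int main() {
        // 1 - build the XML document
        xmlDocPtr doc = xmlNewDoc(BAD_CAST "1.0");
        xmlNodePtr root = xmlNewNode(nullptr, BAD_CAST "pmEvent");
        xmlDocSetRootElement(doc, root);
        xmlNewProp(root, BAD_CAST "system", BAD_CAST "QPS");        // assumed tag
        xmlNewChild(root, nullptr, BAD_CAST "timeStamp",
                    BAD_CAST "1080554400000000000");                // example value

        xmlChar* xmlBuf = nullptr;
        int xmlLen = 0;
        xmlDocDumpMemory(doc, &xmlBuf, &xmlLen);

        // 2 - compress the serialized document with zlib
        uLongf compLen = compressBound(static_cast<uLong>(xmlLen));
        std::vector<Bytef> compressed(compLen);
        if (compress(compressed.data(), &compLen,
                     reinterpret_cast<const Bytef*>(xmlBuf),
                     static_cast<uLong>(xmlLen)) == Z_OK) {
            std::printf("XML %d bytes -> %lu bytes compressed\n", xmlLen, compLen);
            // 3 - the compressed buffer would then be handed to CMW (omitted)
        }

        xmlFree(xmlBuf);
        xmlFreeDoc(doc);
        xmlCleanupParser();
        return 0;
    }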

The next step is to integrate these basic C/C++ functions into the QPS and to run tests.

I also did some research on analysis tools. I found that CERN, together with other laboratories, develops a product called JAS3 (Java Analysis Studio). I installed it and ran some positive tests.

Third suggestion - Sergio

After our last meeting on post-mortem, the use of CMW together with data compression to transmit the post-mortem data seemed reasonable.

The only point still outstanding was the use of the SDDS format. As I explained during the various meetings, it is an old format that does not have the potential of XML; moreover, it is a proprietary format. For me, it is also essential to separate the format of the post-mortem data from the visualization and analysis tools.

I therefore did some research and found that the transmission and use of data in binary format is also a problem for others, such as the "datagrid" community, and a proposal has been submitted to the W3C: the "Binary XML description language". The University of Edinburgh supports and develops a C/C++ library, BinX, and a Java JNI interface for the use of "Binary XML". See http://www.edikt.org/binx/

Tests: I carried out data transfer tests between Intel and PowerPC platforms and it works very well. I read the binary data with the BinX library, and then transformed the binary data into XML, also with the BinX library. Then, with the data in XML format, I transformed it into the AIDA XML format with an XSLT and visualized it in JAS3 for analysis.
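
A minimal sketch of that final transformation step is given below. The file names (pm_event.xml, pm_to_aida.xsl, pm_event_aida.xml) are assumptions and the AIDA stylesheet itself is not shown; only standard libxml2/libxslt calls are used:

    // Apply an XSLT stylesheet to a PM XML file to produce AIDA XML for JAS3.
    #include <libxml/parser.h>
    #include <libxslt/xslt.h>
    #include <libxslt/transform.h>
    #include <libxslt/xsltutils.h>
    #include <cstdio>

    int main() {
        xsltStylesheetPtr style = xsltParseStylesheetFile(BAD_CAST "pm_to_aida.xsl");
        xmlDocPtr input = xmlParseFile("pm_event.xml");
        if (!style || !input) return 1;

        // Apply the transformation (no stylesheet parameters).
        xmlDocPtr result = xsltApplyStylesheet(style, input, nullptr);
        if (result) {
            std::FILE* out = std::fopen("pm_event_aida.xml", "w");
            if (out) {
                xsltSaveResultToFile(out, result, style);  // AIDA XML for JAS3
                std::fclose(out);
            }
            xmlFreeDoc(result);
        }

        xmlFreeDoc(input);
        xsltFreeStylesheet(style);
        xsltCleanupGlobals();
        xmlCleanupParser();
        return 0;
    }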