

SEQUENCED DATA ACQUISITION FOR THE LARGE HADRON COLLIDER

c/o Jim Patrick - Each day the SDA data is mined to produce a number of plots that are put on the web at
http://www-bd.fnal.gov/pplot/extra/today/ExtraPlots.html
These give a running history of the collider performance as a function of many parameters.
Also, if he did not show it to you, Timofei has a nice web page that allows one
to easily make plots of intensities and emittances across stores through the
full injection cycle.
These are at
http://www-bd.fnal.gov/SDAMisc/intensities_mine.jsp
though Firefox is required.

Draft proposal from Fermilab

Potential implementation [doc]

Ralph Steinhagen's thoughts...

***********************start*************************************************
Here are some of my comments/thoughts:

We should maybe separate 'technical implementation' and 'conceptual choices';
once the latter is understood, the former is (IMHO) fairly easy, since most
ingredients have already been developed/tested here at CERN.

Based on what you presented during one of the webex meetings in November, the
SDA based system consists of three major 'packages':
1. An event driven data acquisition process,
2. the data repository and
3. a set of tools to configure the above and to perform basic analyses of the
stored data sets: SDA Viewer, SDA Editor, Report Editor, Store Checker, [..]
see [2] for details.

*concerning the first package:
I reckon that the initial development for a limited number of devices is
fairly easy and does not require too much time -- thanks to present event
driven DAQ infrastructure of CMW/FESA. In principle, one has to copy-paste
the acquisition source code of the existing beam instrumentation (and other)
applications. However, the main issue is the long-term maintenence and
synchronisation of the operational acquisition routines with those used by
the SDA concentrator process -- API changes may and will occur. This is
something that has to be accepted/done/guaranteed by CO/OP/BI/... for the
long-term success of this project.
Hence, I added Mike and Jorg to the e-mail list, as they are not only
open-minded but also, due to their reputation, have the necessary means to get
more substantial and wider support for SDA.
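
For what it is worth, a minimal sketch of how such a concentrator process
could look on the Java side is given below. The 'Measurement' value object and
the 'onDeviceUpdate' callback are hypothetical stand-ins for whatever the
actual CMW/FESA subscription API delivers; the only point is that the
subscription callbacks are decoupled from the event-driven collection, so that
API changes stay localised in one place.

  import java.util.ArrayList;
  import java.util.List;
  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;

  public class SdaConcentrator {

      /** One device reading with a time stamp (hypothetical value object). */
      public static class Measurement {
          final String device;
          final String property;
          final double[] values;
          final long timestampNs;

          Measurement(String device, String property, double[] values, long timestampNs) {
              this.device = device;
              this.property = property;
              this.values = values;
              this.timestampNs = timestampNs;
          }
      }

      private final BlockingQueue<Measurement> buffer = new LinkedBlockingQueue<Measurement>();

      /** Would be called from the subscription callback on every device update. */
      public void onDeviceUpdate(Measurement m) {
          buffer.offer(m); // decouple the acquisition thread from storage/analysis
      }

      /** Drains the buffered readings when a machine event (e.g. 'Injection B1') fires. */
      public List<Measurement> collectForEvent(String eventName) {
          List<Measurement> atoms = new ArrayList<Measurement>();
          buffer.drainTo(atoms); // snapshot of everything received since the last event
          System.out.println(eventName + ": collected " + atoms.size() + " atoms");
          return atoms;
      }
  }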

*concerning the second package:
I think that, from the functionality point of view, the choice of database is
more or less arbitrary (XML, SDDS or SQL based). However, maybe you got more
feedback on this from others; people over here do not seem too enthusiastic
about XML-based databases. In my opinion, the actual choice is less important,
but I also understand that the CO group prefers to use software
packages/combinations that are already present at CERN, to minimise
heterogeneity and the human resources bound by 'special' SW packages.

In my opinion, the important point is how the measurements are arranged within
the database! The added value of SDA -- in comparison to, e.g., the logging
system -- comes from the synchronisation/cross-correlation between several
beam instruments and other systems on a per-event basis.

If I understood correctly, according to [2], you chose to organise SDA
into 'shots', 'cases' and 'sets', which consist of 'events', 'atoms' holding
the measurements, and 'collections of atoms'. To be frank, this level of
abstraction and terminology is not intuitive for inexperienced people and
might confuse more than it helps.
Based on earlier discussions with Jorg, Mike and others, one common
denominator could be to base SDA on a similar approach to the data acquisition
in HEP experiments, which organise their data on an 'event' basis [3,4]:

Following up the HEP idea, I would propose to reduce the hierarchy/complexity
to the following three levels (a minimal sketch in code follows the list):
1. 'stores' - e.g. 'store #2230', containing a summary, the start and end time,
the type of physics programme (commissioning, pp collision, Pb-Pb collision, ...)
and other statistics. Except for the naming, it is closely related to
your 'shot' definition.
2. 'events' that are related to given operational steps (at RHIC they are
called milestones) and contain a given set of 'atoms'. Some LHC event
examples: 'Injection Bm #n' for the n-th injection of beam m, 'start
ramp', 'start squeeze', 'dump beam Bn', ...
3. 'atoms' containing all relevant beam- and hardware-based measurements. Your
proposal is a bit more flexible with respect to whether an 'atom' is
stored with a given set or not. Since we do not know which data is relevant
and which is not, for a thorough analysis it is favourable to keep the same
event structure for all events and to store all available measurements --
within reasonable limits, of course [5]. Reference [4] gives a brief summary
of the CMS 'event' definition.
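
To make the proposal a bit more concrete, here is a minimal sketch of the
three levels as plain Java value classes; the class and field names are only
illustrative, not a fixed schema.

  import java.util.ArrayList;
  import java.util.List;

  class Atom {                      // one measurement: scalar, vector, ...
      String device;                // e.g. "SPS.BCTDC" (illustrative name)
      String property;              // e.g. "totalIntensity"
      double[] values;
      long timestampNs;
  }

  class Event {                     // one operational step / milestone
      String name;                  // e.g. "Injection B1 #3", "start ramp"
      long timestampNs;
      List<Atom> atoms = new ArrayList<Atom>();
  }

  class Store {                     // one fill, e.g. 'store #2230'
      int number;
      String physicsProgramme;      // commissioning, pp collision, Pb-Pb collision, ...
      long startTime;
      long endTime;
      List<Event> events = new ArrayList<Event>();
  }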

I think the main task for the second work package is less the actual
implementation/storage of the data than a more detailed
definition/specification of what is relevant for the SDA analysis and how the
data is logically arranged within an atom/event (scalar, vector, matrix and
related device mapping). Then, once the events and the event structure are
clear, one should discuss the local data storage model (within ORACLE): e.g.
ROOT trees, a one-to-one mapping between device values and table entries (the
LHC logging system way), and other proprietary file formats. There are lots of
other possibilities, each with their pros and cons, which require
further elaboration.
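
As an illustration of what the 'one-to-one mapping between device values and
table entries' variant would mean in practice, here is a rough JDBC sketch;
the connection string, table and column names are invented for the example
and not an existing schema.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.PreparedStatement;

  public class AtomWriter {
      public static void main(String[] args) throws Exception {
          // placeholder connection details -- not a real SDA account
          Connection con = DriverManager.getConnection(
                  "jdbc:oracle:thin:@//dbhost:1521/sda", "sda_user", "secret");
          con.setAutoCommit(false);

          // one row per (store, event, device, property, index, value)
          PreparedStatement ins = con.prepareStatement(
                  "INSERT INTO SDA_ATOM_VALUES "
                + "(STORE_NO, EVENT_NAME, DEVICE, PROPERTY, IDX, VALUE, STAMP_UTC) "
                + "VALUES (?, ?, ?, ?, ?, ?, ?)");

          double[] bctReadings = {2.95e13, 2.94e13};   // placeholder readings
          for (int i = 0; i < bctReadings.length; i++) {
              ins.setInt(1, 2230);
              ins.setString(2, "Injection B1 #1");
              ins.setString(3, "SPS.BCTDC");
              ins.setString(4, "totalIntensity");
              ins.setInt(5, i);
              ins.setDouble(6, bctReadings[i]);
              ins.setTimestamp(7, new java.sql.Timestamp(System.currentTimeMillis()));
              ins.addBatch();
          }
          ins.executeBatch();
          con.commit();
          con.close();
      }
  }

A ROOT-tree or proprietary-format option would of course look entirely
different; the sketch only shows how bulky such a fully normalised layout can
get for vector data, which is exactly the trade-off that needs elaboration.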

Looking at the tests in the SPS, it is important that the data acquisition
does not interfere with regular machine operation. For most systems (e.g.
BLM, BCTs, FBCTs, LHC-BPMs) this is not an issue, since they monitor passively
and have one set of device configurations per cycle type. This has, however,
to be checked in the case of the SPS BPM system (MOPOS). Coming back to your
proposed 'scalar', event-triggered BPM, data sets larger than 10 kB and short
high-frequency burst data acquisition, I propose to include the SPS LHC-type
BPMs, the BCT, possibly the LHC-FBCT and maybe the SPS transfer-line BPMs.


*concerning the third package:
This is definitely something where we can learn most from your experience at
FNAL. In principle, acquiring the relevant data and storing it in a given
format in an SQL-based database is quite simple and intrinsically
fulfils 'goal 1.18'. Based on Java's SQL capabilities, everyone (those
who know Java or another SQL-capable scripting language) is capable of
implementing semi-automated analyses and associated plots.
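
As a sketch of what such a semi-automated analysis could look like (re-using
the invented table/column names from the JDBC example above, which are
assumptions, not an existing schema):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.PreparedStatement;
  import java.sql.ResultSet;

  public class InjectionSummary {
      public static void main(String[] args) throws Exception {
          // placeholder connection details -- not a real SDA account
          Connection con = DriverManager.getConnection(
                  "jdbc:oracle:thin:@//dbhost:1521/sda", "sda_user", "secret");

          PreparedStatement q = con.prepareStatement(
                  "SELECT STORE_NO, AVG(VALUE) AS MEAN_INTENSITY "
                + "FROM SDA_ATOM_VALUES "
                + "WHERE DEVICE = ? AND PROPERTY = ? AND EVENT_NAME LIKE 'Injection%' "
                + "GROUP BY STORE_NO ORDER BY STORE_NO");
          q.setString(1, "SPS.BCTDC");
          q.setString(2, "totalIntensity");

          ResultSet rs = q.executeQuery();
          while (rs.next()) {
              System.out.printf("store %d: mean injected intensity %.3e%n",
                      rs.getInt("STORE_NO"), rs.getDouble("MEAN_INTENSITY"));
          }
          con.close();
      }
  }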


The SDA Viewer/Editor may be helpful to facilitate and simplify the
less-static configuration of the events. However, the additional value of
SDA, in my opinion, does not come from the event-based storage alone but from
the additional set of pre-defined/pre-programmed 'small tools' and scripts
that perform basic pre-checks and present the data in an easy, human-readable
format. As SDA should also be used by non-(accelerator-)experts
("bureaucrats"), we should maybe discuss how to present those large amounts of
stored data comprehensibly (goals 1.16 and 1.17). The ASCII summary tables are
nice but could/should be complemented by graphical visualisations and overview
plots (x-t, x-y correlation plots, etc.) between different properties, to
quickly assess the machine status and parameter evolution, efficiencies, etc.
The LHC logging system has, e.g., this capability. I (re-)added some of the
basic/standard questions/status plots that could perhaps be addressed by the
SDA viewer and plots. See also, e.g., the KEKB overview
at: http://www-acc.kek.jp/KEKB/
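
To give one concrete (and purely illustrative) example of such a 'small tool'
pre-check -- a one-line, human-readable verdict on the injection losses, where
the threshold, units and device naming are placeholders:

  public class InjectionLossCheck {
      static final double LOSS_WARNING_LEVEL = 1.0e-3; // arbitrary placeholder, Gy/s

      /** Returns a one-line, human-readable verdict for the shift crew / summary page. */
      static String check(String eventName, double maxBlmReading) {
          if (maxBlmReading > LOSS_WARNING_LEVEL) {
              return "WARNING  " + eventName + ": max BLM signal " + maxBlmReading
                   + " Gy/s above reference " + LOSS_WARNING_LEVEL;
          }
          return "OK       " + eventName + ": losses within reference";
      }

      public static void main(String[] args) {
          System.out.println(check("Injection B1 #1", 4.2e-4));
          System.out.println(check("Injection B2 #3", 2.1e-3));
      }
  }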

Some miscellaneous comments:
*Goal 1.20: The PS has a device model and acquisition paradigm that are
significantly different from the SPS/LHC. I would suggest keeping the
project small and confining the SDA prototype to a couple of devices (<6) in
the SPS and its transfer lines.

*Java 1.6: The CERN CO group just started to migrate to 1.5. The 1.6 version
might be at the edge of what is presently supported by CO front-ends and AB
consoles.

P.S.
your references:
[1] "Goals for a Vertical Slice Test of Sequenced Data Acquisition on the SPS
Run in May 2007", January, 2007
[2] "Sequenced Data Acquisition for the Large Hadron Collider", September,
2006
[3] from: http://en.wikipedia.org/wiki/Event_(particle_physics)
" An event in particle physics describes one set of particle interactions
occurring in a brief span of time, typically recorded together.[..]"
[4] https://twiki.cern.ch/twiki/bin/view/CMS/WorkBookCMSSWFramework#EdM
[5] for info: earlier e-mail in November ("Re: Use cases for SDA", previously
sent to you, Mike and Verena)
[..]
'Monitoring of messy injections' / 'injection quality assurance':
1. What are the losses from one to the next injection?
- long term cross check with beam loss and radiation monitoring system
2. Evolution of the injection oscillation: r.m.s. (max) trajectory?
- required for machine protection -> an increase could/should inhibit
further injection. (Though this interlock will be regenerated elsewhere
(->Verena), this question could help to anticipate bad injection
situations before they become critical.)
3. Energy mismatch between SPS and LHC? (value from energy feedback)
4.0 Stored beam intensity/energy in the ring?
4.1 How many bunches are in the machine and what is their pattern?
4.2 Monitoring of bunch pattern degradation, especially for bunches at the end
of each batch. (indicates quality of fast longitudinal and transverse
damper, instabilities, etc.)
5. What is the abort gap population before and after injection?
6. What are the transverse beam parameters before and after the injection?
- tune/coupling
- chromaticity
- orbit (includes COD settings)
- transverse beam size (/emittance)
7. What are the beam losses before and after the injection?
8. Are the collimators at nominal position/retracted?
- minimum gap opening
(I guess this will be the most frequent question ;-)
9. What is the status (ON/OFF) of the various feedbacks?
- orbit, tune (FB+PLL), coupling (FB+PLL), chromaticity, energy,
transverse and longitudinal damper, ...
10. What is the status of the main accelerator support systems?
- QPS, cryo system, lattice power converter
11. What is the reference machine optic?
- Becomes more important during the individual squeeze steps.
12. What is the time since this event was issued the last time?
- A good estimate of the individual turn-over times!
- How long do we need from
* one injection to the next
* one ramp to the next
* one fill for physics to the next
(13. Brief overview (warnings) of the different interlock systems)

Each issue can of course be re-arranged or re-grouped, but it would be nice
if we could have a brief overview option (page) for each one of them; a tiny
worked example follows.
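
As a tiny example of how one of these questions maps onto per-event data,
question 1 essentially reduces to comparing two consecutive intensity atoms
(the numbers below are placeholders):

  public class InjectionToInjectionLoss {
      /** Relative intensity loss between the readings taken at two consecutive injections. */
      static double relativeLoss(double intensityAfterPrevInjection, double intensityBeforeNextInjection) {
          return 1.0 - intensityBeforeNextInjection / intensityAfterPrevInjection;
      }

      public static void main(String[] args) {
          double afterInj3  = 3.30e13;  // circulating intensity right after injection #3
          double beforeInj4 = 3.27e13;  // intensity just before injection #4
          System.out.printf("loss between injections: %.2f %%%n",
                  100.0 * relativeLoss(afterInj3, beforeInj4));
      }
  }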

One basic event could e.g. contain (injection only; see the sketch after these lists):
Beam based information:
- Beam loss monitors (snap-shot, 1..2 seconds?)
- BCT and FastBCT snap-shot
- Radiation monitors
- 2 seconds of tune/coupling trace (1 second before/1 second after)
- 2 seconds of orbit, including some r.m.s. calculations for specific regions
(e.g. collimation)
- 2 seconds of chromaticity trace
- bunch length of selected bunch(es), RF voltage -> momentum spread
- transverse beam profile (/emittance)
- (time stamp of the event)

Some more hardware status information:
- position of movable objects in the LHC (collimators, screens, Roman Pots)
- temperature of the most important cryo-circuits (indicates also losses)
- tunnel temperature (esp. in IR3 and IR7)
- SPS/LHC injection momentum mismatch (energy feedback)
- Number of people in the CCC.
- Status/current of the main lattice circuits (main bends/quads)
- Status of the BIC, QPS, SIS, ...
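
A possible way to write this event content down is a simple declarative
acquisition list, as sketched below; the device names and time windows are
examples taken from the lists above, not a final configuration.

  import java.util.Arrays;
  import java.util.List;

  public class InjectionEventConfig {

      /** One entry: which device/property to read and how many seconds around the event. */
      static class AtomSpec {
          final String device;
          final String property;
          final double windowBeforeS;
          final double windowAfterS;

          AtomSpec(String device, String property, double before, double after) {
              this.device = device;
              this.property = property;
              this.windowBeforeS = before;
              this.windowAfterS = after;
          }
      }

      static final List<AtomSpec> INJECTION_ATOMS = Arrays.asList(
          new AtomSpec("BLM.ring",        "losses",         0.0, 2.0),  // snap-shot, 1..2 s
          new AtomSpec("BCTDC",           "totalIntensity", 0.0, 0.0),  // single snap-shot
          new AtomSpec("BQ.tuneCoupling", "trace",          1.0, 1.0),  // 1 s before / 1 s after
          new AtomSpec("BPM.orbit",       "orbit",          1.0, 1.0),  // incl. r.m.s. per region
          new AtomSpec("Collimators",     "gapPositions",   0.0, 0.0)   // movable-object status
      );

      public static void main(String[] args) {
          for (AtomSpec a : INJECTION_ATOMS) {
              System.out.println(a.device + "/" + a.property
                  + "  [-" + a.windowBeforeS + " s, +" + a.windowAfterS + " s]");
          }
      }
  }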

I am pretty sure that this list is a far cry from complete, but it contains a
lot of instruments/questions that could be used/answered during the coming SPS
run.
[..]
****************************end ********************************************