ACTL

Array control & data acquisition

»Local telescope camera triggers select events and generate data rates that are estimated to vary between 10 MB/s and 5 GB/s.«

CTA’s array control (ACTL) provides the hardware and software that is necessary to:

 

  • Monitor and control all telescopes and auxiliary devices in the CTA arrays
  • Schedule and perform observations and calibration runs
  • Time-stamp, read-out, filter and store data

CTA operations will be carried out from on-site data centres (OSDC) at the CTA sites. Each OSDC has computing hardware for execution of ACTL software and for mass storage, and is connected to the local CTA array (for readout and control) and the outside world (for data export and control).

 

Besides the control of the numerous devices associated with the telescopes and the common calibration facilities, the triggering, time-stamping and storage of the data are the most important tasks of the ACTL system. The overall data stream that ACTL has to cope with is dominated by the telescope cameras, whose local camera triggers select events and generate data rates estimated to vary between 10 MB/s and about 5 GB/s. The stereoscopic requirement (i.e. two or more telescopes triggered simultaneously) will result in array trigger rates of up to 30 kHz and a data rate of about 3 GB/s. These data must be cached in an on-site data repository for about two months to allow data suppression, event selection and export of the data to off-site data centres.
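
For orientation, the rates quoted above can be turned into two back-of-envelope numbers. The short Python sketch below is purely illustrative and uses only the figures given in this section:

# Back-of-envelope figures derived from the rates quoted above
# (illustrative only; real CTA rates depend on trigger settings,
# zenith angle and night-sky background).

ARRAY_TRIGGER_RATE_HZ = 30e3   # stereoscopic array trigger rate (~30 kHz)
ARRAY_DATA_RATE_BPS = 3e9      # post-array-trigger data rate (~3 GB/s)

# Average size of one stereoscopic event shipped to the on-site repository.
event_size_bytes = ARRAY_DATA_RATE_BPS / ARRAY_TRIGGER_RATE_HZ
print(f"average event size: {event_size_bytes / 1e3:.0f} kB")   # ~100 kB

# Volume accumulated during one hour of observation at the full rate,
# illustrating why data suppression and event selection are needed
# before export to the off-site data centres.
volume_per_hour_bytes = ARRAY_DATA_RATE_BPS * 3600
print(f"raw volume per hour: {volume_per_hour_bytes / 1e12:.1f} TB")  # ~10.8 TB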

 

ACTL Hardware

The concept for the ACTL hardware emphasizes the use of standard, off-the-shelf networking and computing elements and aims to minimize the amount of hardware (e.g. electronics cards) that must be developed specifically for CTA. It also accounts for the need to develop, maintain and operate, with limited manpower and at moderate cost, a software system that is more complex and more stable than those of existing Imaging Atmospheric Cherenkov Telescope (IACT) arrays. It prescribes the use of software frameworks, the application of widely accepted standards, tools and protocols, and follows an open-source approach.

 

The ACTL hardware will comprise:

 

  1. A single-mode fiber Ethernet wide area network (several times 10 Gbit/s) connecting the telescopes with the OSDC and providing data transfer, array-level triggering and clock distribution. In addition, a local WLAN will be available close to each telescope to allow access with laptops and tablets for debugging and commissioning.
  2. Computers (so-called camera servers) located in the OSDC, each assigned to one telescope to receive its data after a camera trigger. The camera servers buffer the Cherenkov data while the array trigger makes its decision.
  3. A central computing cluster (also located in the OSDC) for the execution of ACTL software, event building and filtering, and operation of the data repository. The number of computing cores and the capacity of the data repository are estimated at about 1,550 and 3 PB for the southern-hemisphere array, and about 870 and 1.5 PB for the northern-hemisphere array.
  4. Hardware (a few computers) for a SoftWare Array Trigger (SWAT), also located in the OSDC. The SWAT inspects the telescope event time-stamps and selects stereoscopic events (a simplified sketch of such a coincidence search follows this list).
  5. A White Rabbit (WR) network connecting a central GPS clock with each telescope and the SWAT. The White Rabbit system provides time-stamps with sub-nanosecond precision that are used to associate data and telescope events.
  6. WR interface cards for time-stamping and array-level triggering, deployed close to any hardware that uses their services, in particular the cameras.
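
To make the role of the SWAT more concrete, the following Python sketch shows the kind of coincidence search a software array trigger performs: it scans time-ordered camera-trigger time-stamps and keeps groups in which two or more distinct telescopes triggered within a short window. The 100 ns window and the data layout are illustrative assumptions, not the actual SWAT design.

# Minimal sketch of a software coincidence (stereo) selection.
# The 100 ns window and the event layout are illustrative assumptions,
# not the actual CTA SWAT implementation.

from collections import namedtuple

CameraTrigger = namedtuple("CameraTrigger", ["telescope_id", "timestamp_ns"])

COINCIDENCE_WINDOW_NS = 100  # hypothetical coincidence window


def select_stereo_events(triggers, window_ns=COINCIDENCE_WINDOW_NS):
    """Return groups of triggers in which >= 2 distinct telescopes fired
    within `window_ns` of the first trigger in the group.
    `triggers` must be sorted by timestamp_ns."""
    stereo_events = []
    i = 0
    n = len(triggers)
    while i < n:
        j = i + 1
        # Collect all triggers within the window opened by triggers[i].
        while j < n and triggers[j].timestamp_ns - triggers[i].timestamp_ns <= window_ns:
            j += 1
        group = triggers[i:j]
        if len({t.telescope_id for t in group}) >= 2:
            stereo_events.append(group)
        i = j  # continue after the current group
    return stereo_events


# Example: telescopes 1 and 2 coincide; telescope 3 fires alone.
triggers = sorted(
    [CameraTrigger(1, 1_000), CameraTrigger(2, 1_050), CameraTrigger(3, 10_000)],
    key=lambda t: t.timestamp_ns,
)
print(select_stereo_events(triggers))  # one stereo group: telescopes 1 and 2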

 

The design intentionally does not prescribe hardware standards for the transfer of data and control. Dedicated protocols and Ethernet are used for the transfer of the bulk data (camera data); for all other applications, the use of a single software standard (OPC UA, described below) is enforced to ensure connectivity with many different hardware devices.

ACTL Software

The ACTL software system depends on external inputs (e.g. for the long-term scheduling), accepts incoming alerts (e.g. directly from other observatories or in connection with “target of opportunity” observations), and manages the execution (but not the implementation) of the real-time analysis (RTA, level-A analysis) software.

 

The high-level ACTL software is developed on top of the ALMA Common Software (ACS), a software framework for the implementation of distributed data acquisition and control systems. ACS was developed by the European Southern Observatory and has been successfully applied in projects of similar scale. ACS distributions (provided by ALMA Computing) run on a variant of the Linux operating system (Scientific Linux), which will be used on all full-scale ACTL computers. ACS is based on a container-component model and supports the programming languages C++, Java and Python (the latter in particular for scripting).
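
To make the container-component idea concrete, the following Python sketch illustrates the pattern in a generic way. It deliberately does not use the ACS APIs; real ACS containers additionally provide CORBA-based communication, component lifecycle management, logging and configuration.

# Generic illustration of a container-component pattern.
# This is NOT the ACS API; all class and method names are hypothetical.

class Component:
    """Base class: a unit of functionality hosted by a container."""
    def initialize(self): ...
    def cleanup(self): ...


class WeatherStation(Component):
    """Hypothetical device component exposing a read-out method."""
    def initialize(self):
        print("connecting to weather station hardware")

    def get_temperature_c(self):
        return 12.3  # placeholder value

    def cleanup(self):
        print("disconnecting from weather station hardware")


class Container:
    """Hosts components and manages their lifecycle by name."""
    def __init__(self):
        self._components = {}

    def activate(self, name, component):
        component.initialize()
        self._components[name] = component
        return component

    def get_component(self, name):
        return self._components[name]

    def deactivate(self, name):
        self._components.pop(name).cleanup()


container = Container()
container.activate("WEATHER_STATION_1", WeatherStation())
print(container.get_component("WEATHER_STATION_1").get_temperature_c())
container.deactivate("WEATHER_STATION_1")
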

The high-level ACTL software will be executed predominantly on the central computer cluster and will access most hardware devices via OPC UA. OPC UA is an industry standard, so OPC UA servers will either be provided by the device vendors or be developed by the device teams (using two suggested software development kits). In most cases, the functionality of the OPC UA servers (variables, methods, etc.) defines the interface between ACTL and a hardware device (e.g. a weather station or a CCD camera) that must be controlled and read out.
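
As an illustration of this access pattern, the sketch below reads a single variable from a device's OPC UA server using the open-source python-opcua client library. The endpoint URL and node identifier are hypothetical placeholders; in CTA the address space and node IDs are defined by the OPC UA server of each device.

# Sketch of an OPC UA client reading a device variable, using the
# open-source python-opcua library (pip install opcua).
# The endpoint and node ID below are hypothetical placeholders.

from opcua import Client

ENDPOINT = "opc.tcp://weather-station.example:4840"      # hypothetical device server
TEMPERATURE_NODE = "ns=2;s=WeatherStation.Temperature"   # hypothetical node ID

client = Client(ENDPOINT)
try:
    client.connect()
    node = client.get_node(TEMPERATURE_NODE)
    print("temperature:", node.get_value())
finally:
    client.disconnect()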

 

The ACTL software system will comprise the following major parts:

 

  1. A scheduler for CTA (SCTA) that optimizes the use of the array at any point in time, given a list of selected targets, their CTA-assigned scientific priorities, the available telescopes and the external conditions (e.g. weather); a simplified sketch follows this list.
  2. A central control system (on the order of 1,000 distributed processes) that implements the execution of observations driven by the scheduler, under local control (human operators with graphical user interfaces) or remote control.
  3. An acquisition system for the telescope data, implementing the further filtering of data and its storage in the on-site data repository.
  4. A monitoring, configuration and slow-control system, commanding each hardware device and examining and recording (in databases) its state and configuration regardless of whether observations are currently ongoing.
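
As a toy illustration of the scheduling problem referred to in item 1 (not the actual SCTA algorithm, which also covers long-term planning, sub-array assignment and weather models), the sketch below selects the highest-priority target that is currently observable.

# Toy illustration of priority-driven target selection.
# All names, targets and thresholds are hypothetical examples.

from dataclasses import dataclass


@dataclass
class Target:
    name: str
    priority: int             # higher means scientifically more important
    min_elevation_deg: float  # required elevation for a useful observation


def observable(target, elevation_by_target, telescopes_available):
    """A target is observable if telescopes are free and it is high enough."""
    return (
        telescopes_available > 0
        and elevation_by_target.get(target.name, 0.0) >= target.min_elevation_deg
    )


def next_target(targets, elevation_by_target, telescopes_available):
    """Pick the highest-priority observable target, or None."""
    candidates = [
        t for t in targets if observable(t, elevation_by_target, telescopes_available)
    ]
    return max(candidates, key=lambda t: t.priority, default=None)


targets = [
    Target("AGN flare follow-up", priority=9, min_elevation_deg=40),
    Target("Galactic survey field", priority=5, min_elevation_deg=30),
]
elevations = {"AGN flare follow-up": 25.0, "Galactic survey field": 55.0}
print(next_target(targets, elevations, telescopes_available=4))
# -> Galactic survey field (the AGN is below its required elevation)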

Graphical User Interface

The video below illustrates the prototype of the graphical user interface (GUI) that CTA operators will use to control the telescopes and monitor the array. The GUI is being designed by a team of CTA physicists together with collaborators from the field of human-computer interaction. The video begins with a representation of the layout of a CTA array, in which each circle corresponds to a single telescope. The relative positions of the telescopes correspond to the scaled physical layout on site, constituting a pseudo-geographic display of the array. As the user zooms in, the status of the sub-systems of each telescope is revealed in increasing detail. See more videos of the GUI on YouTube.

Credit: Iftach Sadeh, DESY, http://astro.desy.de/gamma_astronomy/cta/

ACTL Contacts:

Work Package Leader: Peter Wegner, DESY
Work Package Co-Leader: Gino Tosti, Università degli Studi di Perugia
Systems Engineer: Matthias Fuessling, DESY
Project Manager: Igor Oya, DESY