Scientific targets










Scientific targets

  • Cosmic Accelerators

    • Active Galactic Nuclei, Pulsar Wind Nebulae, Supernova Remnants, Gamma-Ray Bursts, …
  • Fundamental Questions

    • Dark Matter, Cosmic Rays, Quantum Gravity, Cosmology…


  • The MAGIC Collaboration:

    • 21 institutes (mostly in Europe)
    • ~ 200 members
  • Telescope site in the Canary Islands

    • Observatorio Roque de los Muchachos (ORM)
  • MAGIC-I in operation since 2004

  • MAGIC-II in operation since 2009

  • Future detector enhancements



  • Discovery of 14 new VHE γ-ray sources
    • 8 extragalactic + 4 galactic
  • New populations unveiled
    • Radio quasar & micro-quasar
  • Detection of distant VHE γ-rays
    • z = 0.54, the most distant to date
  • Detection of pulsed VHE γ-rays
    • Originating in the Crab pulsar
  • Tests of Lorentz invariance (quantum gravity effects)
    • Using large emission flares
  • >40 papers in high-impact journals
    • including 4 in Science


Major issue: Background rejection

  • Monte Carlo simulations required

    • No VHE “test beam” available
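MAGIC's standard chain performs this rejection by training a Random Forest on Monte Carlo gamma showers versus hadron-dominated background data, yielding a per-event "hadronness" score. Below is a minimal sketch of that idea, assuming scikit-learn and synthetic stand-ins for the image parameters; the feature names and numbers are illustrative, not MAGIC's actual parameters or values:

    # Minimal sketch of Random Forest gamma/hadron separation.
    # Synthetic stand-ins for image parameters; the real analysis
    # trains on Monte Carlo gammas vs. hadron-dominated data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 5000

    # Hypothetical features (e.g. image width, length, size); values illustrative
    gammas = rng.normal(loc=[0.10, 0.25, 3.0], scale=[0.03, 0.08, 0.5], size=(n, 3))
    hadrons = rng.normal(loc=[0.16, 0.35, 2.8], scale=[0.06, 0.15, 0.6], size=(n, 3))

    X = np.vstack([gammas, hadrons])
    y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = gamma, 0 = hadron

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # "Hadronness"-style score: probability of being background
    hadronness = 1.0 - clf.predict_proba(X_test)[:, 1]
    print("test accuracy:", clf.score(X_test, y_test))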




MAGIC produces ~300 TB of raw data per year

  • Up to 400 TB per year expected in the final configuration
  • The MAGIC Data Center at PIC provides:

    • Data transfer from ORM and storage
    • Official data reprocessing
    • Computing resources and tools
    • User access and support
  • PIC data center operating since 2006

    • 2009: Upgraded for the 2nd telescope
    • Challenge: scalable infrastructure


Increase in data volume after the upgrade

  • In ~3 years the total data volume will have increased 4-fold
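As a rough back-of-the-envelope check, a minimal sketch that accumulates the raw-data rates quoted above; the year-by-year rates and upgrade timing are assumptions for illustration, and reprocessed data and replicas are not counted:

    # Back-of-the-envelope accumulation of raw data, using the rates
    # quoted in these slides (~300 TB/yr now, ~400 TB/yr in the final
    # configuration). Year-by-year rates are assumed for illustration.
    rates_tb_per_year = [300, 400, 400]
    total = 0
    for year, rate in enumerate(rates_tb_per_year, start=1):
        total += rate
        print(f"year {year}: +{rate} TB raw, cumulative {total} TB")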





Challenges:

  • Large increase in data volume
  • Need for a scalable storage solution
    • Maintenance of the old infrastructure prevented innovation
  • Need to improve data access
    • Robust infrastructure for many concurrent accesses
    • Data catalog with metadata
  • Open computing:
    • Accessible to all collaborators
    • Simple and easy to use for standard analysis


Why GRID?

  • Data reduction and analysis require substantial computing resources
  • Data must be distributed to all collaborators across Europe
  • User access to shared resources and standardized analysis tools
  • Better and easier data management
  • Increased technical support, benefiting from the wider Grid community
  • The MAGIC Data Center @ PIC

    • Experience and knowledge in using the Grid, gained from LHC projects
    • Manpower partially funded by EGEE
    • Storage based on PIC SEs
    • Computing @PIC and other MAGIC sites
      • Other sites currently devoted to MC production


The MAGIC VO has existed since 2004

  • Initiative by H. Kornmayer et al.
  • Hiatus 2005-2007

    • No manpower
  • 2007-08: A new crew took over Grid operations

    • UCM (Madrid) and Dortmund, in collaboration with INSA (MC), IFAE and PIC (Data Center)
  • 2009-10: Wide adoption

    • Now Grid is widely used in MAGIC




Migration of services while in production (in progress)

  • Migration of storage:

    • Move existing data to Storage Elements on the Grid (a registration sketch follows this list)
    • Use FTS for the data transfer from the observatory
    • Adapt administration tools to the new infrastructure
    • Create user-friendly interfaces to access data
  • Migration of computing:

    • Port existing analysis tools to the Grid
    • Develop a library of standard tools for the user community
    • User-friendly interface to monitor and submit jobs
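A hedged sketch of the storage-migration step, wrapping the gLite lcg-cr tool, which copies a file to a Storage Element and registers it in the LFC in one operation. The SE endpoint, catalog path and local data directory below are hypothetical placeholders:

    # Hedged sketch: copy local files to a Grid Storage Element and
    # register them in the LFC catalog with the gLite lcg-cr tool.
    # Hostnames and paths are hypothetical placeholders.
    import subprocess
    from pathlib import Path

    VO = "magic"
    DEST_SE = "srm://srm.pic.es"              # hypothetical SE endpoint
    LFC_PREFIX = "lfn:/grid/magic/data/raw"   # hypothetical catalog path

    def migrate(local_file: Path) -> None:
        """Copy one file to the SE and register it in the LFC."""
        cmd = [
            "lcg-cr",
            "--vo", VO,
            "-d", DEST_SE,
            "-l", f"{LFC_PREFIX}/{local_file.name}",
            f"file://{local_file.resolve()}",
        ]
        subprocess.run(cmd, check=True)  # raises if copy or registration fails

    for f in sorted(Path("/data/raw").glob("*.root")):  # hypothetical local area
        migrate(f)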


MAGIC users were reluctant to use Grid

  • The Grid has a steep learning curve
  • Users were accustomed to the old storage and computing
  • Lack of a ‘killer application’
  • Conquer your user community

    • Good documentation and user support
    • Training sessions
    • User-friendly tools
    • Work with users: feedback
    • Highlight the hidden benefits of the new infrastructure
      • Less maintenance -> Better support


Data Transfer, Storage and Access




Current storage system requires too much maintenance

  • Non-existent file catalog, fragmented disk space, …

  • Solution: adopt a Tier-1-grade, Grid-based storage system

    • Standard tools + supported service @ PIC
    • LFC: Easier data management and monitoring


Suboptimal network data transfer (SSH-based)

  • Insufficient bandwidth
  • Raw data stored on LTO tapes and sent by air mail
  • Poor control over network transfers
  • Poor integration with the Grid (intermediate disk needed)
  • Integration into the Grid infrastructure

    • The data cluster at the observation site uses GFS
      • Not supported by SRM
    • Using BeStMan to create a GridFTP + SRM server
    • Data transfers managed by FTS plus a transfer manager (sketched below)
    • Administrative actions to set this up are pending
    • Aim: retire the air-mail transfers
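A hedged sketch of what such an FTS-managed transfer could look like from the manager's side, using the gLite FTS client tools. The FTS endpoint and SURLs are hypothetical placeholders, and the terminal state names may vary between FTS versions:

    # Hedged sketch: submit one ORM -> PIC transfer to FTS and poll it.
    # Endpoint and SURLs are hypothetical placeholders.
    import subprocess
    import time

    FTS = "https://fts.pic.es:8443/glite-data-transfer-fts/services/FileTransfer"
    SRC = "srm://bestman.magic.example.org/data/raw/run12345.raw"
    DST = "srm://srm.pic.es/pnfs/pic.es/data/magic/raw/run12345.raw"

    job_id = subprocess.run(
        ["glite-transfer-submit", "-s", FTS, SRC, DST],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    while True:
        state = subprocess.run(
            ["glite-transfer-status", "-s", FTS, job_id],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(job_id, state)
        if state in {"Done", "Finished", "Failed", "Canceled"}:  # terminal states (version-dependent)
            break
        time.sleep(60)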


Data access requirements:

  • Access data anytime from anywhere
  • Two approaches:

    • Data access using Grid tools (GridFTP, SRM or equivalent)
      • Robust transfers, but no easy file browsing
      • Not all institutes support the Grid
    • Web access
      • Easy file browsing, but less convenient transfers
      • Security concerns
  • Solution:

    • Build a web-based service interfacing GridFTP + LFC (sketched below)
    • Use the dCache httpDoor for “Grid-handicapped” users
      • Access only to the local SE
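A hedged sketch of the idea behind that web front-end: list a catalog directory with lfc-ls and map each entry to a dCache httpDoor URL, so users without Grid credentials can browse and download. All hostnames, ports and the path mapping are hypothetical placeholders:

    # Hedged sketch: list an LFC directory and map entries to dCache
    # httpDoor URLs. Hosts, ports and paths are hypothetical.
    import os
    import subprocess

    os.environ["LFC_HOST"] = "lfc.pic.es"          # hypothetical catalog host
    HTTP_DOOR = "http://dcache.pic.es:2288"        # hypothetical httpDoor URL
    CATALOG_DIR = "/grid/magic/data/raw"           # hypothetical LFC directory
    PFN_PREFIX = "/pnfs/pic.es/data/magic/raw"     # hypothetical SE namespace

    listing = subprocess.run(
        ["lfc-ls", CATALOG_DIR], capture_output=True, text=True, check=True
    ).stdout.split()

    for name in listing:
        # Assumes a one-to-one LFC -> local SE namespace mapping, which
        # only holds for files replicated at the local SE.
        print(f"{name}: {HTTP_DOOR}{PFN_PREFIX}/{name}")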






Traditional computing at MAGIC

  • Each institute uses its own computing resources (CPU + storage)
  • Only a few users have access to a computing farm
  • Data center CPUs are reserved for the “official” analysis
  • We recently opened the computing to all users

    • Grid-based computing (a submission sketch follows this list)
    • Additional resources for users: CPU and disk
    • In development: a library of standard analysis tools
    • The PIC data center will still play a central role
      • Data management, manpower, …
  • More resources and efficiency: more and better scientific output
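As an illustration of what Grid-based computing means in practice here, a hedged sketch that writes a minimal JDL job description and submits it through the gLite WMS client; the executable and sandbox file names are hypothetical placeholders:

    # Hedged sketch: generate a minimal JDL description for one analysis
    # job and submit it with the gLite WMS client. The executable name
    # and sandbox files are hypothetical placeholders.
    import subprocess
    from pathlib import Path

    jdl = """\
    Executable    = "run_analysis.sh";
    Arguments     = "12345";
    StdOutput     = "analysis.out";
    StdError      = "analysis.err";
    InputSandbox  = {"run_analysis.sh"};
    OutputSandbox = {"analysis.out", "analysis.err"};
    VirtualOrganisation = "magic";
    """
    Path("analysis.jdl").write_text(jdl)

    # -a: delegate a proxy automatically for this single submission
    subprocess.run(["glite-wms-job-submit", "-a", "analysis.jdl"], check=True)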



Looking to create a ‘Killer Application’

  • Aim: cover all steps in the analysis chain

  • A tool for everybody: Simple yet flexible

  • Based on existing tools and years of experience

  • Library of high-level functions

    • Shield from Grid complexity
  • One central tool is easier to develop

  • Better user support

    • Most user support is for buggy user-created software
  • Future: Interface to submit and monitor jobs
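A hedged sketch of the shape such a library could take: one high-level call that hides proxy handling and job submission behind a simple function. The names (ensure_proxy, analyze_run) are hypothetical, not an existing MAGIC tool:

    # Hedged sketch of the "killer application" idea: a high-level call
    # that hides Grid plumbing. Function names are hypothetical.
    import subprocess

    def ensure_proxy() -> None:
        """Create a VOMS proxy for the MAGIC VO if none is valid."""
        if subprocess.run(["voms-proxy-info", "--exists"]).returncode != 0:
            subprocess.run(["voms-proxy-init", "--voms", "magic"], check=True)

    def analyze_run(run_number: int, jdl_file: str = "analysis.jdl") -> None:
        """Submit a standard analysis job for one run; the user never
        sees the underlying Grid commands."""
        ensure_proxy()
        subprocess.run(["glite-wms-job-submit", "-a", jdl_file], check=True)

    analyze_run(12345)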





MAGIC has adopted Grid as a computing model.

  • The use of Grid in the Data Center was key to the successful upgrade process.

  • A WLCG Tier-1 site now also serves as the Tier-0 for MAGIC, reusing methodologies and personnel.

  • Customized applications are needed to give users easy access to the data and computing.


