High energy & nuclear physics




  • The answer is "money"... In 1999, the "LHC Computing Grid" was merely a concept on the drawing board for a computing system to store, process and analyse the data produced by the Large Hadron Collider at CERN. However, when work began on the design of the computing system for LHC data analysis, it rapidly became clear that the required computing power was far beyond the funding capacity available at CERN.

  • On the other hand, most of the laboratories and universities collaborating on the LHC had access to national or regional computing facilities.

  • The obvious question was: Could these facilities be somehow integrated to provide a single LHC computing service? The rapid evolution of wide area networking - increasing capacity and bandwidth coupled with falling costs - made it look possible. From there, the path to the LHC Computing Grid was set.



  • Multiple copies of data can be kept at different sites, ensuring access for all scientists involved, independent of geographical location (see the replica-selection sketch after this list)

  • Allows optimal use of spare capacity across multiple computer centres, making the system as a whole more efficient

  • Having computer centres in multiple time zones eases round-the-clock monitoring and the availability of expert support

  • No single points of failure

  • The cost of maintenance and upgrades is distributed, since individual institutes fund local computing resources and retain responsibility for these, while still contributing to the global goal

  • Independently managed resources have encouraged novel approaches to computing and analysis

  • So-called “brain drain”, where researchers are forced to leave their country to access resources, is reduced when resources are available from their desktop

  • The system can be easily reconfigured to face new challenges, making it able to dynamically evolve throughout the life of the LHC, growing in capacity to meet the rising demands as more data is collected each year

  • Provides considerable flexibility in deciding how and where to provide future computing resources

  • Allows the community to take advantage of new technologies that may appear and that offer improved usability, cost effectiveness or energy efficiency
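
As a concrete illustration of the first point above (multiple copies of data kept at different sites), here is a minimal Python sketch of replica selection: prefer a copy in the user's own region, otherwise fall back to any copy. The dataset name is made up, the site-to-region mapping is deliberately simplified, and this is only a conceptual sketch, not the actual WLCG data-management software.

    # Conceptual sketch only: choosing a nearby replica of a dataset.
    # The dataset name is hypothetical; the site list is a simplified example.
    REPLICAS = {
        "lhc_sample_dataset": ["CERN", "FNAL", "RAL", "CCIN2P3"],
    }
    SITE_REGION = {
        "CERN": "Europe",
        "RAL": "Europe",
        "CCIN2P3": "Europe",
        "FNAL": "North America",
    }

    def choose_replica(dataset: str, user_region: str) -> str:
        """Prefer a replica in the user's region; otherwise return any available copy."""
        sites = REPLICAS[dataset]
        local = [s for s in sites if SITE_REGION[s] == user_region]
        return (local or sites)[0]

    print(choose_replica("lhc_sample_dataset", "North America"))  # -> FNAL
    print(choose_replica("lhc_sample_dataset", "Asia"))           # -> CERN (no local copy, any copy will do)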



  • Distributed supercomputing

  • High throughput computing

  • On demand (real time) computing

  • Data intensive computing

  • Collaborative computing





  • Real need for very high performance infrastructures

  • Basic idea: share distributed computing resources

    • “The sharing that the GRID is concerned with is not primarily file exchange but rather direct access to computers, software, data, and other resources, as is required by a range of collaborative problem-solving and resource-brokering strategies emerging in industry, science, and engineering” (I. Foster)


  • Speed-up:

    • if T_S is the best time to solve a problem sequentially,
    • then the parallel processing time with P processors should be T_P = T_S / P
    • speedup = T_S / T_P
    • the speedup is limited by Amdahl's law: any parallel program has a purely sequential part F and a parallelizable part T_par, so T_S = F + T_par
    • thus the speedup is bounded: S = (F + T_par) / (F + T_par / P) < P (see the sketch after this list)
  • Scale-up:

    • if T(P, S) is the time to solve a problem of size S with P processors,
    • then solving a problem of size n*S with n*P processors should also take T(P, S)
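
The Amdahl bound above is easy to check numerically. Below is a minimal sketch in Python; the sequential fraction f = F / T_S and the processor counts are illustrative values, not figures from these slides.

    # Amdahl's law: speedup of a program whose sequential part takes F
    # and whose parallelizable part takes T_par, when run on P processors.
    # With f = F / T_S the sequential fraction, S(P) = 1 / (f + (1 - f) / P).
    def amdahl_speedup(f: float, p: int) -> float:
        """Speedup for a sequential fraction f on p processors."""
        return 1.0 / (f + (1.0 - f) / p)

    f = 0.05  # illustrative: a job that is 5% inherently sequential
    for p in (10, 100, 1000, 10000):
        print(f"P = {p:6d}   speedup = {amdahl_speedup(f, p):6.2f}   (bound 1/f = {1 / f:.0f})")
    # No matter how many processors are added, the speedup never exceeds 1/f = 20.

This is also why high-throughput workloads made of many independent jobs, the dominant pattern in LHC data processing, scale so well across a grid: their sequential fraction per job is effectively negligible.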


  • 8000 Physicists, 170 Sites, 34 Countries

  • 15 PB of data per year; 100,000 CPUs
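
A quick back-of-the-envelope calculation puts the two figures above in perspective. The Python snippet below only does arithmetic on the quoted numbers (15 PB per year, 170 sites, 100,000 CPUs) and assumes the load is spread evenly, which real operations of course are not.

    # Rough arithmetic on the headline figures quoted above (even spread assumed).
    PB = 1e15                       # bytes per petabyte (decimal convention)
    data_per_year = 15 * PB         # 15 PB of data per year
    sites = 170
    cpus = 100_000
    seconds_per_year = 365 * 24 * 3600

    print(f"average data rate : {data_per_year / seconds_per_year / 1e6:6.0f} MB/s")
    print(f"per site per year : {data_per_year / sites / 1e12:6.1f} TB")
    print(f"per CPU per year  : {data_per_year / cpus / 1e9:6.0f} GB")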



  • A 4-layer Computing Model (see the sketch after this list)

    • Tier-0: CERN (accelerator site)
      • Data acquisition and reconstruction
      • Data distribution to Tier-1 (~online)
    • Tier-1
      • 24x7 access and availability
      • Quasi-online data acquisition
      • Data service on the Grid
      • “Heavy” analysis of the data
      • ~10 countries
    • Tier-2
      • Simulation
      • End-user analysis of the data (batch and interactive modes)
      • ~40 countries
    • Tier-3
      • End-user scientific analysis
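
The tier structure above can be summarized compactly as a data structure. The sketch below simply restates the bullets in Python; the dictionary layout (and the location given for Tier-3, which the slides do not specify) is illustrative, not an official WLCG definition.

    # Illustrative summary of the 4-layer (tiered) computing model described above.
    TIERS = {
        "Tier-0": {"where": "CERN (accelerator site)",
                   "roles": ["data acquisition and reconstruction",
                             "data distribution to Tier-1 (~online)"]},
        "Tier-1": {"where": "~10 countries",
                   "roles": ["24x7 access and availability",
                             "quasi-online data acquisition",
                             "data service on the Grid",
                             "'heavy' analysis of the data"]},
        "Tier-2": {"where": "~40 countries",
                   "roles": ["simulation",
                             "end-user analysis (batch and interactive modes)"]},
        "Tier-3": {"where": "end-user institutes",  # location not stated in the slides
                   "roles": ["end-user scientific analysis"]},
    }

    # Data flows down the hierarchy: Tier-0 -> Tier-1 -> Tier-2 -> Tier-3.
    for tier, info in TIERS.items():
        print(f"{tier} ({info['where']}): " + "; ".join(info["roles"]))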




