Rmax = 33,862 Tflops (i.e., 33.9 Pflops) – Rpeak = 54,902 Tflops (computing efficiency: 61.7 %)
3,120,000 cores – Memory: 1.375 PB – Disk: 12.4 PB – fat-tree based interconnection network
16,000 compute nodes
1 node = 2 Intel Ivy Bridge Xeons (12 cores each) + 3 Xeon Phi co-processors (57 cores each) + 88 GB of memory shared by the Ivy Bridge procs + 8 GB of memory shared by the Xeon Phi chips
Power: 17.8 MW (1.9 Tflops/kW = 1.9 Gflops/W … only!)
"Tianhe-2 operating for 1 hour is equivalent to 1.3 billion people, each with a calculator, computing for one thousand years" (best-news.us – assertion not checked)
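The two efficiency figures quoted above follow directly from the slide's own numbers; a minimal check in Python (values taken from the slide, nothing else assumed):

```python
# Minimal check of the Tianhe-2 figures quoted above (slide values only).
rmax_tflops = 33_862     # LINPACK performance (Tflops) ~ 33.9 Pflops
rpeak_tflops = 54_902    # theoretical peak (Tflops)    ~ 54.9 Pflops
power_mw = 17.8          # power draw (MW)

computing_efficiency = rmax_tflops / rpeak_tflops             # ~0.617 -> 61.7 %
gflops_per_watt = (rmax_tflops * 1_000) / (power_mw * 1e6)    # ~1.9 Gflops/W

print(f"computing efficiency: {computing_efficiency:.1%}")       # 61.7%
print(f"energy efficiency:    {gflops_per_watt:.1f} Gflops/W")   # 1.9 Gflops/W
```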
Top500.org
Performance development: logarithmic progression! (×10 in 3 years)
Clusters, clusters (86%)! 51% in industry
Max power efficiency: 5.3 Gflops/W
#500: 153 Tflops! – Total: 309 Pflops (Top500 poster)
Graph500.org, Green500.org and GreenGraph500 lists
Max: 5.3 Gflops/W
#1 Green500 = #168 Top500 (317–594 Tflops)
#1 Top500 = #57 Green500 (2 Gflops/W)
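The "×10 in 3 years" slope of the performance-development curve can be restated as an annual growth factor; a quick back-of-the-envelope computation (illustration only, not from the slides):

```python
import math

# "x10 in 3 years" on a log scale <=> a constant annual growth factor.
annual_factor = 10 ** (1 / 3)                          # ~2.15x per year
doubling_time = math.log(2) / math.log(annual_factor)  # ~0.9 year

print(f"annual growth factor: x{annual_factor:.2f}")
print(f"doubling time: {doubling_time:.2f} years")
```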
From LAN (cluster) computing to WAN computing
A set of machines distributed over a MAN/WAN, used to execute loosely coupled parallel codes
Depending on the infrastructure (software and hardware), network computing comes in several forms: Internet computing, P2P computing, Grid computing, etc.
Definitions become fuzzy...
A meta-computer = a set of (widely) distributed (high-performance) processing resources that can be associated to process a parallel, not-so-loosely coupled code
A meta-computer = a parallel virtual machine over a distributed system
Use of (idle) computers interconnected through the Internet to process high-throughput applications
Ex: SETI@HOME
5M+ users since its launch
2013/10: 1.4M users, 3.5M computers; 135k active users, 190k active computers
625 Tflops (average 505 Tflops)! 233 "countries"
2M years of CPU time since 1999; BOINC infrastructure (Décrypthon, RSA-155…)
Much less active than it used to be (halved since 2011)
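The workload pattern behind SETI@HOME/BOINC is a "bag" of independent work units with no communication between tasks; the toy Python sketch below (hypothetical names and a made-up workload, not the real BOINC code) illustrates the idea on local cores standing in for volunteer machines:

```python
# Toy model of high-throughput (volunteer) computing: many independent work
# units, each processed in isolation, results simply collected at the end.
from multiprocessing import Pool

def process_work_unit(unit_id: int) -> tuple[int, float]:
    """Stand-in for analysing one chunk of data (e.g. a radio-signal slice)."""
    score = sum((unit_id * k) % 97 for k in range(10_000)) / 10_000
    return unit_id, score

if __name__ == "__main__":
    work_units = range(1_000)   # the "bag of tasks"
    with Pool() as pool:        # here: local cores; in BOINC: volunteer PCs
        results = pool.map(process_work_unit, work_units)
    print(f"collected {len(results)} results")
```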
Internet computing on a pool of sites
Grid computing with poor communication facilities
Ex: Condor (invented in the 1980s)
A site is both client and server: a "servent"
Dynamic servent discovery by "contamination" (sketched below)
2 approaches:
centralized management: Napster, Kazaa, eDonkey…
distributed management: Gnutella, KAD, Freenet, BitTorrent…
Applications: file sharing, video delivery, collaborative computing
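A hedged sketch of "contamination"-style servent discovery in a decentralised overlay: a probe is flooded to neighbours, who forward it until a hop limit (TTL) is reached. The overlay, function name and TTL below are illustrative assumptions, not code from any of the systems listed above.

```python
# Toy flooding-based peer discovery (Gnutella-like, simplified).
from collections import deque

def discover_peers(neighbours: dict[str, list[str]], start: str, ttl: int) -> set[str]:
    """Breadth-first 'contamination': each contacted servent forwards the probe."""
    known, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == ttl:            # probe dies when the hop limit is reached
            continue
        for peer in neighbours.get(node, []):
            if peer not in known:
                known.add(peer)
                frontier.append((peer, hops + 1))
    return known - {start}

# Hypothetical overlay: each servent only knows its direct neighbours.
overlay = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": [], "F": []}
print(discover_peers(overlay, "A", ttl=2))   # {'B', 'C', 'D', 'E'}
```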
"Coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organisations" (I. Foster)
Information grid: large-scale access to distributed data (the Web)
Data grid: management and processing of very large distributed data sets – data-intensive computing
Computing grid
Grids date back "only" to 1996
Parallelism is older! (first classification in 1972)
Motivations:
need more computing power (weather forecasting, atomic simulation, genomics…)
need more storage capacity (petabytes and more)
in a word: improve performance!
3 ways (sketched below):
Work harder --> use faster hardware
Work smarter --> optimize algorithms
Get help --> use more computers!
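A rough, purely illustrative Python sketch of the three ways (all numbers are made up, and perfect parallel efficiency is an idealising assumption): time to solution shrinks whether you speed up the hardware, improve the algorithm, or add computers.

```python
# Illustrative only: crude time-to-solution model for the three levers above.
def time_to_solution(work_flop, flops_per_core, cores, algo_speedup=1.0):
    """Seconds needed, assuming the work parallelises perfectly (idealised)."""
    return work_flop / (flops_per_core * cores * algo_speedup)

work = 1e15  # hypothetical job: 10^15 floating-point operations
print(time_to_solution(work, 1e9, 1))        # baseline: ~1e6 s (~11.6 days)
print(time_to_solution(work, 4e9, 1))        # work harder: 4x faster hardware
print(time_to_solution(work, 1e9, 1, 10.0))  # work smarter: 10x better algorithm
print(time_to_solution(work, 1e9, 1000))     # get help: 1000 computers
```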