GridCoord DoW




Local activities


DutchGrid is the platform for Grid Computing and Technology in the Netherlands. Open to all institutions for research and test-bed activities, the goal of DutchGrid is to coordinate the various deployment efforts and to offer a forum for the exchange of experiences on Grid technologies.

As well as the DutchGrid Platform, the Grid Forum Netherlands association has been established to promote the Grid in the Netherlands at all levels. Its goal is to bring together the expertise and innovative power of Dutch grid experts, businesses, government, academia and other stakeholders. By creating a focal point for grid technology in the Netherlands it hopes to make knowledge about this breakthrough technology available to Dutch society.

ASCI is a Dutch graduate school of advanced computing and imaging established in 1993 and accredited by the Royal Netherlands Academy of Arts and Sciences. Research groups of Delft University of Technology, Vrije Universiteit, University of Amsterdam, Leiden University, Utrecht University, University of Twente, University of Groningen and Eindhoven University of Technology participate in ASCI.

Summary of international activities


Participation in FP6 Projects

  • CoreGRID: Delft Univ. of Technology, Vrije Univ. Amsterdam

  • DEISA: SARA

  • EGEE: TERENA, FOM, SARA, UvA

  • GridCoord: Univ. of Amsterdam

  • NextGRID: Univ. of Amsterdam


Poland

Summary of activities and strengths


The Grid research and technology activities in Poland have been developed mainly through the national program called “PIONIER – Polish Optical Internet, Advanced Applications, Services and Technologies for Information Society”. The program was defined for the years 2001-2005. Recently a follow-up program has been proposed for the years 2006-2010 under a new name, PIONIER2: The Infrastructure for Research and Education. If the program is approved, new Grid-related projects are expected to be funded.

All the projects funded by PIONIER consisted of two main phases: a research and development phase and a deployment phase. All the projects were co-funded by partners and industry (minimum 50%) and by the State Committee for Scientific Research (maximum 50%).

The scientific strengths are mainly in the following topics:

  • Innovative software technology for Grids and enabling platforms: middleware and tools for Grid application development and enablement; security in Grids, including Grid authorization, intrusion detection systems, Grid accounting, Grid monitoring and mobile Grid user support

  • Production Grid design and deployment, including Globus-based infrastructures, resource management and monitoring, data management, storage management, administration and security, and Virtual Organizations

  • Infrastructures, the Polish Optical Internet, services and network applications

PIONIER is the Polish National Research and Education Network. It is operated by the Poznan Supercomputing and Networking Center (PSNC), affiliated with the Institute of Bioorganic Chemistry of the Polish Academy of Sciences. PIONIER connects 21 academic Metropolitan Area Networks with over 700 academic units, including universities, institutes of the Polish Academy of Sciences, hospitals, libraries and industrial R&D institutes. PIONIER is based on fiber cables owned by the scientific community in Poland, with 10GE transmission over a DWDM system. Currently PIONIER comprises over 3000 km of fiber cables, while the target infrastructure will reach over 6000 km. PIONIER also aims to build international fiber connectivity to the neighbouring countries. Two such links have been built so far – one to Germany and one to the Czech Republic. Another six connections are currently under construction; they will link PIONIER with Germany (two additional links), Russia, Lithuania, Belarus and Ukraine.

PIONIER network uses the following international connections:


  • 10 Gbit/s link to GEANT (including access to other European NRENs, Internet2, Abilene, Canarie)

  • 1 Gbit/s direct fiber link to CESNET, SANET and ACONET

  • 34 Mbit/s link to BASNET (providing transit to GEANT and Internet)

  • 7 Gbit/s of commodity Internet links

Figure 1. Current PIONIER connectivity status

PIONIER supports the following set of services:


  • access to the global Internet

  • dedicated access to GEANT network and other research networks

  • specific support for research projects

  • VLBI (dedicated 1 Gbit/s between Torun and Dwingeloo, NL)

  • ATLAS (dedicated 1 Gbit/s between Krakow and CERN, CH)

  • EGEE, etc.

  • two independent Network Operation Centers (Poznan and Lodz, operating 24/7)

  • FTP, news (186th position in the world ranking Top1000), WWW, DNS (K-root server mirror)

Budget: 50.00 MEuro

Key projects

PROGRESS Project


PROGRESS stands for "Polish Research On GRid Environment for Sun Servers". It started as a grant co-financed by the Polish State Committee for Scientific Research and Sun Microsystems Poland and ran from December 2001 until December 2003. It resulted in a set of grid-portal environment tools that have been made available as open-source software packages, and in the PROGRESS HPC Portal testbed deployment. The PROGRESS HPC Portal testbed is subject to continuous development and extension as new functionality is added to the middleware, new resources are added to the hardware infrastructure and new applications are enabled on the grid.

In the PROGRESS grid-portal environment the computing resources are managed by a Globus Toolkit 2.x installation and are delivered by the PROGRESS Grid Resource Broker (GRB). The GRB is contacted by the PROGRESS Grid Service Provider whenever a user requests a computing job to be submitted for execution in the grid. The GRB decides where and how to run the application associated with the job, based on the job requirements and the current status of the grid resources. The job description is passed in the form of a specially designed XML language: XRSL.
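The submission path described above can be illustrated with a short sketch. This is a minimal, hypothetical example: the XML element names (`application`, `arguments`, `cpuCount`) and the `GridResourceBroker` class are invented for illustration and do not reproduce the actual PROGRESS XRSL schema or GRB API.

```python
import xml.etree.ElementTree as ET

def build_job_description(app, args, cpus):
    """Build a minimal XML job description.

    Element names are illustrative only; the real XRSL schema
    used by PROGRESS is not reproduced here."""
    job = ET.Element("job")
    ET.SubElement(job, "application").text = app
    ET.SubElement(job, "arguments").text = " ".join(args)
    ET.SubElement(job, "cpuCount").text = str(cpus)
    return ET.tostring(job, encoding="unicode")

class GridResourceBroker:
    """Hypothetical stand-in for the PROGRESS GRB: it receives the XML
    job description and picks a resource based on current status."""
    def __init__(self, resources):
        self.resources = resources  # host -> free CPU count

    def submit(self, job_xml):
        job = ET.fromstring(job_xml)
        needed = int(job.findtext("cpuCount"))
        # pick the first host with enough free CPUs
        for host, free in self.resources.items():
            if free >= needed:
                return host
        return None

grb = GridResourceBroker({"sf6800-poznan": 4, "sf6800-krakow": 16})
job_xml = build_job_description("solver", ["-n", "100"], 8)
target = grb.submit(job_xml)  # "sf6800-krakow": only host with >= 8 CPUs
```

In the real system the broker's decision also weighs job requirements against live monitoring data; the sketch reduces that to a single free-CPU count per host.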

Grid services grouped into the GSP are available for use within the Web Portal, which is the prime user interface; the alternative is a migrating desktop kind of interface. The whole infrastructure is assisted by the Data Management System, which is used as the source of input data and the destination for results of computing experiments performed in the PROGRESS grid.

According to our knowledge at the PROGRESS project design phase, the grid-portal environments deployed around the world at that time were implemented as 3-tier systems, with HPC resources at the bottom, grid management systems (GMS) in the middle, and portals in the top tier. This architecture works, but it is not flexible enough: when creating a new portal, administrators must build both the presentation and the logical layers of the environment. Thus, there are as many installations of the logical layer as there are portals. This approach is also of limited use for business and market applications. The PROGRESS grid-portal environment introduces a new solution to this problem. The functions of the logical layer of the portal are grouped into a separate module running independently of the other modules in the environment. This module is called a grid service provider (GSP) and adds another tier to the system, right above the GMS and below the portal itself. The GSP works as a web services factory. The portal in the new architecture serves as the user interface and implements just the presentation functions: it collects input from the users and displays the data obtained from services.

Using a GSP within the grid access environment makes the use of grid resources more convenient for end users. The GSP serves as the source of grid resources (computing power, grid-enabled applications, collaboration tools and others) for the web portal and other user interfaces. Installed on a separate server connected to the grid access network, the Web Services based GSP allows numerous portals and other user interfaces to be built on the same grid services. Applications coming from one application repository can be accessed through two or more different interfaces. Moreover, it is easy to imagine creating various thematic web portals: a bioX portal, an electrical engineering portal and a physics portal. These thematic portals may all use the same grid services and resources available through the GSP. A further advantage is the possibility of providing all GSP clients with computing resources belonging to two or more different grids. All these resources can again be made available through one and the same module, the GSP, and can be easily utilized by multiple computing web portals.
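The design idea above can be sketched in a few lines. The class and service names below are hypothetical, chosen only to show the pattern: portals call named services on a shared provider that acts as a web-services factory, instead of each portal talking to the grid management system directly.

```python
class GridServiceProvider:
    """Sketch of the GSP tier: a factory of named grid services that
    any number of thematic portals (bioX, physics, ...) can share."""
    def __init__(self):
        self._factories = {}

    def register(self, name, factory):
        self._factories[name] = factory

    def get_service(self, name):
        # factory behaviour: each request yields a fresh service instance
        return self._factories[name]()

class JobSubmissionService:
    """Hypothetical logical-layer service; in PROGRESS this tier sits
    between the portals and the grid management system."""
    def submit(self, job_name):
        return "submitted:" + job_name

gsp = GridServiceProvider()
gsp.register("job-submission", JobSubmissionService)

# Two different thematic portals reuse the same GSP service catalogue,
# so the logical layer is installed once, not once per portal:
bio_portal = gsp.get_service("job-submission")
physics_portal = gsp.get_service("job-submission")
result = bio_portal.submit("protein-folding")
```

Each portal keeps only presentation logic; adding a new portal means writing a new front end against the same registered services, which is the flexibility gain the text describes.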

The PROGRESS testbed resource infrastructure accessed by the tools designed and deployed within the scope of the PROGRESS project consisted of:



  • three Sun Fire 6800 computing servers: two in Poznań and one in Kraków

  • two Sun Fire V880 servers for data storage: one at each location

  • one Sun Fire 280R server as the front-end

These servers were connected via dedicated channels of the PIONIER research network.

This infrastructure was the base to deploy the testbed implementations of the designed grid-portal environment architecture in the form of PROGRESS HPC Portal.

Collaborators: Poznań Supercomputing and Networking Center, Academic Computer Centre Cyfronet AGH Kraków and Technical University of Łódź.

Budget: 5.00 MEuro


SGI Grid Project (High Performance Computing and Visualization with the SGI Grid for Virtual Laboratory Applications)


The SGIgrid project aims to design and implement:

  • state-of-the-art, broadband services for remote access to expensive laboratory equipment

  • backup computational center

  • remote data-visualization service

These services will be based on the national HPC infrastructure and advanced visualization.

During the R&D stage we will build a testbed installation which will be used to prepare the aforementioned services and applications to be put into practice. For the hardware platform we proposed a cluster of SGI computers equipped with MIPS and Intel CPUs, designed for intensive computation and advanced visualization. The testbed installation will be partially provided by project participants, while the remaining part will be provided by SGI.

The R&D phase will result in the middleware design as well as the aforementioned applications. The middleware will provide users with transparent access to the available resources, including existing and future data storage systems. The tasks proposed in our project concern essential Grid technologies. Based on the middleware structure, we will develop the virtual laboratory, the parallel visualization system and the backup computing center for IMGW.

During the deployment phase we will focus on extending the hardware and software infrastructure for a system of virtual laboratories (located in Gdansk, Lodz, Wroclaw, Krakow and Poznan). Furthermore, we will put into practice the backup computing center for IMGW located in Warsaw. Also, during this stage the number of centers providing parallel visualization for HPC will be increased.

As an additional valuable result we also consider the experience gained in organizing and administering a nationwide, unified computing environment. The SGIgrid project will provide users of Polish HPC centers with high aggregated computational power, bring savings due to better utilization of software licenses, and bring further immeasurable benefits, since the infrastructure will be used to build a backup computational center for IMGW, which is covered by the System of Country Monitoring and Protection.

Collaborators:

  • Silicon Graphics, Inc. (SGI co-funds the project in the deployment, investment and research areas)

  • Institute of Meteorology and Water Management, Warsaw (IMGW is a business partner for whom a backup computational center will be designed and implemented; the backup center is supposed to maintain the continuity of computations, and their critical deadlines, for the meteorological and hydrological forecasting performed by IMGW)

  • Academic Computer Center Cyfronet Kraków (ACC Cyfronet AGH coordinates the project and participates in R&D and deployment work)

  • Poznań Supercomputing and Networking Center (PSNC participates in R&D and deployment work)

  • Wroclaw Centre for Networking and Supercomputing (WCNS participates in R&D and deployment work)

  • Academic Computer Center in Gdańsk (TASK participates in R&D and deployment work)

  • Technical University of Łódź Computer Center (PŁ-CK participates in R&D and deployment work)

Budget: 4.00 MEuro

Virtual Laboratory Grid (VLAB)


The Virtual Laboratory research project has been developed at the Poznań Supercomputing and Networking Center since the beginning of 2002. VLAB is a part of the "High Performance Computations and Visualisation for Virtual Laboratory Purposes with the Usage of an SGI Cluster" project, which is based on the State Committee for Scientific Research decision (decision number 03282/C.T11-6/2002 of December 9th, 2002) on participation in the funding of the research, development and implementation, according to an agreement between the SCSR and AGH - ACK Cyfronet.

The SGI corporation also participates in the project funding in the scope of the research, implementation and investments, according to an agreement between SGI and ACK Cyfronet. More information concerning formal terms of this project is available on its official website.

This web portal is the first draft version of an interface intended to provide access to the virtual laboratory, which is why its capabilities are currently limited. As the project develops, this site will be completed and extended.

The scope of this research project also covers a pilot project of a Virtual NMR Laboratory. This laboratory will rely on the resources of the Institute of Bioorganic Chemistry, PAS (Bruker 600 MHz and Varian 300 spectrometers). The methods of high-resolution homo- and heteronuclear NMR spectroscopy used in the Laboratory of Structural Chemistry of Nucleic Acids make it possible to determine the 3-dimensional structure of biomolecules in solution. The concept of the Virtual Laboratory has been elaborated to facilitate research in this domain, which is important to biopolymer chemistry, molecular biology, biomedicine and biotechnology.

Furthermore, a research grant has been launched by the Ministry of Scientific Research and Information Technology. In this context, the scope of the Virtual Laboratory project is more oriented towards the research approach. It means that the main goal would be to develop a universal system architecture capable of handling a wide variety of different laboratory equipment. Additionally, an attempt will be made to generalize the concept of the task and the resource in the Grid environment, where laboratory equipment can be seen as a resource, while research experiments can be considered as system tasks.

Within the scope of the NMR libraries and the security of the data management system, the Virtual Laboratory project is supported by the Computing Center of the Technical University of Łódź.

It is also planned to develop a prototype installation incorporating a radio telescope (provided by NCU, Toruń), which will be used to test some of the already implemented VLab architecture elements.

Collaborators: Poznan Supercomputing and Networking Center, Institute of Bioorganic Chemistry of the Polish Academy of Sciences.

Budget: 80.00 kEuro

Interactive TV Grid Project


The project aims at the deployment of a content delivery system for high-quality multimedia. The system is designed for nation-wide delivery of live TV programming, Video-on-Demand and Audio-on-Demand with interactive access over broadband IP networks; the range of operation can easily be extended beyond the national level. A large-scale deployment results in high bandwidth requirements in order to enable thousands of users to access the service simultaneously. The architecture of the system encompasses two levels, with Regional Data Centers obtaining content from content providers at the upper level and distributing it among themselves and to lower-level proxy servers. Such an approach results in a scalable design, in which the bandwidth requirement for one link of a Data Center is estimated at 1 Gbps. Content distribution at aggregate speeds of this magnitude is realized over PIONIER, a nation-wide Polish all-optical network based on DWDM technology. PIONIER connects 21 Metropolitan Area Networks, which are potential sites for hosting Regional Data Centers. The prototype delivery system connects three Regional Centers in Warsaw, Cracow and Poznan and can provide service to up to 15 000 users. The content is digitally encoded with an MPEG-4 compatible codec at 1.5 Mbps, with the possibility of using different platforms, e.g. Windows Media and Real Media. An IP-based content delivery system constitutes the fourth platform for digital TV distribution, next to terrestrial, satellite and cable. In contrast to the first three platforms, an IP-based solution offers true interactivity, enabling new ways to offer the content access service.

The main challenge in the design of a large-scale content delivery system is the aggregate volume of data to be delivered and the magnitude of the aggregate transmission rate. Although the digital encoding of a high-quality stream requires a transmission speed of only 1.5 Mbps, providing service to thousands of users at the same time poses a serious challenge. A large variety of available content further increases the bandwidth requirement. In addition, interactive access to the content pushes the transmission rate requirements into a higher range of values.

The high bandwidth requirements of a large-scale deployment are addressed with a scalable two-level content delivery architecture. However, content caching increases the number of served users, and thus each of the network distribution components still has to provide service to a potentially large number of users. It is estimated that a single Regional Data Center will provide service to between a few thousand and over a dozen thousand users. Due to the lack of multicast support at the network level, as well as content protection and security requirements, a Content Delivery Network (CDN) is the preferable solution. On the other hand, such an approach may require high investment in delivery infrastructure; thus, the number of CDN components and their connectivity has to be carefully designed. In addition, multimedia streaming has strict timing requirements, and the startup delay in getting access to the content needs to be minimized, which calls for the use of either high-bandwidth over-provisioned networks or QoS mechanisms available in the network. All of the above issues lead to the necessity of deployment over advanced high-speed networks and migration to lambda-based networks.
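The bandwidth figures quoted above can be checked with simple arithmetic. The sketch below uses only numbers stated in the text (a 1.5 Mbps per-user stream and an estimated 1 Gbps link budget per Regional Data Center); it is an illustration, not a capacity plan.

```python
STREAM_RATE_MBPS = 1.5   # per-user MPEG-4 stream rate (from the text)
LINK_BUDGET_GBPS = 1.0   # estimated bandwidth per Data Center link

# How many concurrent unicast streams fit on one 1 Gbps link?
streams_per_link = int(LINK_BUDGET_GBPS * 1000 / STREAM_RATE_MBPS)

# Roughly 666 streams per link, far below the few-thousand to
# over-a-dozen-thousand users per Regional Data Center quoted above:
# serving that population depends on the lower tier of caching proxy
# servers, not on the Data Center's core link alone.
```

This is exactly the argument for the two-level architecture: caching at the proxy tier keeps most streams off the Data Center links.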

Collaborators: TVP S.A. - Polish National TV, Academic Computer Center Cyfronet AGH, Poznan Supercomputing and Networking Center, PIONIER National Optical Network.

Budget: 6.00 MEuro


Clusterix – National Cluster of Linux Systems


CLUSTERIX (National CLUSTER of LInuX Systems) is a project to build a core infrastructure for clustering local PC clusters allocated and distributed among universities. The implementation makes it possible to deploy a production-class Grid environment consisting of local PC clusters with 64- and 32-bit architectures, located in geographically distant, independent centres. The overall performance of the current Grid installation is 4.4 Tflops with 800+ Intel Itanium2 processors. The management software (middleware) being developed will allow dynamic changes in the hardware configuration. The software, based on Open Source environments, will allow the existing functionality to be extended and new tools to be created as necessary. The project is being implemented by 12 Polish supercomputing centres and universities (for more information see: http://clusterix.pcz.pl/) and is led by the Czestochowa University of Technology.
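As a quick sanity check on the performance figures above, using only the numbers quoted in the text:

```python
total_tflops = 4.4   # overall performance of the current installation
cpu_count = 800      # lower bound ("800+ Intel Itanium2 processors")

# Implied per-processor performance; an upper bound, since 800 is a
# floor on the processor count.
per_cpu_gflops = total_tflops * 1000 / cpu_count  # 5.5 Gflops per CPU
```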

Collaborators:



  • Technical University of Bialystok

  • Czestochowa University of Technology (coordinator)

  • Academic Computer Center in Gdansk TASK

  • Technical University of Lodz, Computer Center

  • Maria Curie-Sklodowska University in Lublin, LubMAN

  • Academic Computer Center CYFRONET AGH, Krakow

  • Computer Center of Opole University

  • Poznan Supercomputing and Networking Center

  • Technical University of Szczecin

  • Warsaw University of Technology

  • Wroclaw University of Technology

  • University of Zielona Gora

Budget: 0.8 MEuro

Achievements


The main achievements include:

  • Building the national optical network (PIONIER), connecting 21 metropolitan area networks. PIONIER is used as a backbone for the national Grid (being built under the Clusterix project)

  • Developing and deploying a set of Grid services and tools on the Polish national Grid

  • Building virtual laboratories

  • Running first applications on the national Grid, including bioinformatics, climate, CFD, and other applications.

  • Integrating the Polish scientific community.
