1.5 Service Activities and associated work plan
Overall strategy
The objective of the Service Activities is to deploy the resources shared through the iMarine Data e-Infrastructure and to ensure their correct, continuous, and effective operation. These distributed resources will be made available for consumption by the EA-CoP members. In particular, a set of Virtual Research Environments envisaged to realise real-life business cases (cf. Section ) will be built on the deployed resources.
The iMarine Data e-Infrastructure is not intended to be a closed environment. Rather, it aims at exploiting other existing e-Infrastructures, deployed services, and resources. In pursuing this objective, the iMarine Data e-Infrastructure will become interoperable with the infrastructures of the D4Science Federation and others.
Through the D4Science Federation it will be possible to access resources managed and orchestrated by other infrastructures, such as EGI, VENUS-C, GENESI, DRIVER, and OpenAIRE. EGI offers the organisation, management, and support needed to exploit storage and computational resources contributed by organised communities through their National Initiatives. VENUS-C provides common cloud PaaS access to resources delivered by different providers. GENESI operates, validates, and optimises integrated access to and use of Earth Science data. DRIVER provides access to three million documents published and maintained by national archives. OpenAIRE provides access to research results available in Open Access repositories.
The iMarine Data e-Infrastructure, and the Virtual Research Environments running on it, will be based on the gCube system (developed in the D4Science and D4Science-II projects and further enhanced in this project) and on other software systems such as EMI and Hadoop (developed and released by external projects). The gCube software (which includes services, libraries, and portlets) will provide the core functionality to operate the iMarine Data e-Infrastructure and make Virtual Research Environments available, while the EMI and Hadoop software will provide access to computing and storage capacity. The deployed Virtual Research Environments will implement the project's resource-sharing vision by delivering to the EA-CoP an environment to exploit the community's distributed data and tool resources.
The correct operation of the infrastructure is ensured by the definition of clear operational procedures that identify the activities required to manage the infrastructure and the actors involved. These procedures cover areas such as infrastructure downtime, incident management, and others.
To achieve the Service Activity objectives described above, a number of tasks have been identified and organised in three highly interconnected work packages:
- SA1 – iMarine Data e-Infrastructure Deployment and Operation will manage the iMarine Data e-Infrastructure by providing hardware resources, deploying and maintaining the infrastructure core services (gCube, EMI, Hadoop), providing monitoring and accounting information, and defining procedures to manage the infrastructure;
- SA2 – Virtual Research Environments Deployment and Operation will deploy and operate the Virtual Research Environments running in the iMarine Data e-Infrastructure by developing vertical solutions integrating community applications and services with gCube services, developing common interfaces and tools, providing community data resources, and managing the Virtual Research Environments;
- SA3 – Enabling-technology Integration and Distribution will manage the software technology that enables the iMarine Data e-Infrastructure by building, integrating, and testing the source code developed within the project, and making available well-documented releases through appropriate distribution channels.
The software developed by JRA and SA2 is made available to SA3 through the project source code repository. SA3 integrates and tests the source code and distributes certified releases through the project software repository. SA1 takes these releases to deploy and upgrade the iMarine Data e-Infrastructure. Relying on the deployed infrastructure management tools, SA2 deploys and operates the Virtual Research Environments that deliver the functionality requested and developed by the CoP. Finally, to ensure continuous service operation, SA1 monitors the status and load of the iMarine Data e-Infrastructure, SA2 monitors the status and load of the VREs, and all SA work packages participate in the resolution of incidents that may affect infrastructure availability.
Gantt Diagram
Detailed work description
Work package list
Work package No | Work package title | Type of activity | Lead participant No | Lead participant short name | Person-months | Start month | End month
SA1 | iMarine Data e-Infrastructure Deployment and Operation | SVC | 4 | CERN | 49 | 1 | 30
SA2 | Virtual Research Environments Deployment and Operation | SVC | 2 | CNR | 139 | 1 | 30
SA3 | Enabling-technology Integration and Distribution | SVC | 5 | E-IIS | 27 | 1 | 30
TOTAL | | | | | 215 | |

SA1 – iMarine Data e-Infrastructure Deployment and Operation
The main objective of this work package is to operate the iMarine Data e-Infrastructure by providing all the facilities necessary to organise the provided resources into a coherent distributed infrastructure. This includes the management of the hardware and software resources deployed in the infrastructure, the monitoring of the status of the infrastructure services, and the accounting of infrastructure exploitation. This work package includes the following tasks:
- TSA1.1: iMarine Data e-Infrastructure Operation
  - Define a complete set of procedures to manage the Data e-Infrastructure;
  - Deploy and maintain the infrastructure core services and portal;
  - Plan and execute the necessary upgrades of the infrastructure services;
  - Provide support to infrastructure administrators and users for infrastructure incidents;
  - Define interoperability agreements and resource-sharing policies with other infrastructures.
- TSA1.2: iMarine Data e-Infrastructure Nodes Provision
  - Identify the providers of hardware resources and plan their participation in the Data e-Infrastructure;
  - Provide hardware resources to the Data e-Infrastructure (dedicated or on-demand nodes);
  - Deploy and maintain nodes hosting the gCube Hosting Node (gHN);
  - Deploy and maintain nodes hosting the EMI middleware services;
  - Deploy and maintain nodes hosting the Hadoop system.
- TSA1.3: iMarine Data e-Infrastructure Availability, Monitoring and Accounting
  - Develop tools to monitor the status of the infrastructure resources;
  - Develop tools to account for the infrastructure load at system level and the infrastructure usage by end-users;
  - Develop tools to verify the availability of the major functionality offered by the infrastructure;
  - Monitor the status of the infrastructure resources and report incidents or defects;
  - Monitor the usage of the infrastructure resources and prepare summary reports;
  - Monitor the availability of the infrastructure functionality and report incidents or defects.
The activities of this work package will be described and reported in four deliverables: two deliverables to describe the iMarine Data e-Infrastructure plans in terms of software deployment and procedures defined (DSA1.1-2), and two deliverables to report on the activities performed on the iMarine Data e-Infrastructure (DSA1.3-4).
The results of the work package will be assessed in three milestones: two milestones for the deployment and availability of the required infrastructure nodes and gCube core services (MSA1.1-2), and one milestone for the deployment and availability of the infrastructure monitoring and accounting tools (MSA1.3).
SA2 – Virtual Research Environments Deployment and Operation
The main objective of this work package is to develop and deploy Virtual Research Environments in the iMarine Data e-Infrastructure to run applications and services identified and developed by the EA-CoP. These applications and services will integrate and exploit facilities developed in the context of the JRA work packages and other data and tools facilities provided by the EA-CoP. This work package includes the following tasks:
- TSA2.1: Virtual Research Environments Operation
  - Define procedures to manage the operation of the Virtual Research Environments;
  - Analyse the requirements identified by the EA-CoP;
  - Identify the required Virtual Research Environments and the resources to be provided;
  - Deploy and maintain the required Virtual Research Environments.
- TSA2.2: Virtual Research Environments Resources and Tools Provision
  - Identification, registration, and maintenance of data resources;
  - Definition and implementation of vertical solutions integrating community-specific applications and tools with the facilities offered by gCube workflow management and the gCube APIs.
- TSA2.3: Virtual Research Environments Common Interfaces and Tools
  - Development of common user interfaces and tools tailored to serve the virtual research communities through the integration of existing technologies;
  - Development of social networking facilities;
  - Development of business process workflows.
- TSA2.4: Virtual Research Environments Development Support
  - Support for the exploitation of the APIs provided by JRA4;
  - Support for the exploitation of the common interfaces and tools;
  - Support for the integration of existing tools and resources.
The activities of this work package will be described and reported in six deliverables: two deliverables to identify and describe the required Virtual Research Environments (DSA2.1-2), two deliverables to describe the development plans for community tools and common tools (DSA2.3-4), and two deliverables to report on the activities performed in the Virtual Research Environments (DSA2.5-6).
The results of the work package will be assessed in four milestones: two milestones for the development and release of the identified community tools and common tools (MSA2.1-2), and two milestones for the deployment of the identified Virtual Research Environments (MSA2.3-4).
SA3 – Enabling-technology Integration and Distribution
The main objective of this work package is to integrate, test, and distribute the software (the components maintained and enhanced in the JRA activities plus the community tools developed in the context of the SA2 work package) that enables the Data e-Infrastructure and the Virtual Research Environments provided by the project, by selecting, deploying, and enhancing existing open source tools. This work package includes the following tasks:
- TSA3.1: Software Integration, Testing and Release
  - Define the procedures to manage the release of gCube and community software;
  - Define the model for self-testing and the guidelines to provide automatable functional tests;
  - Plan the required releases as defined by the project bodies;
  - Produce major, minor, and maintenance releases according to the established plan;
  - Test new releases from the deployment, functional, and performance points of view;
  - Maintain a testing infrastructure on which to run the identified tests;
  - Execute continuous integration and testing of the latest source code;
  - Keep the official project software repository updated with all the latest releases;
  - Maintain the necessary tools to support the integration and testing activities.
- TSA3.2: Software Distribution and Documentation
  - Upload all releases to the project Software Repository;
  - Document and link all releases on the gCube website;
  - Prepare the identified gCube special packages;
  - Validate, enhance, and build the source code documentation;
  - Validate, enhance, and build the documentation for users and administrators;
  - Validate, enhance, and build the guide for developers of gCube-compliant services.
The activities of this work package will be described and reported in four deliverables: two deliverables to identify and describe the procedures and tools used to integrate, test, and distribute the project software releases (DSA3.1-2), and two deliverables to report on the activities performed in the preparation of the project software releases (DSA3.3-4).
The results of the work package will be assessed in three milestones: one milestone for the set-up of the project software repository and other build and test tools (MSA3.1), one milestone for the definition of the gHN and gCube APIs packages (MSA3.2), and one milestone for the availability of the produced documentation (MSA3.3).
Deliverables list

Del. no. | Deliverable name | WP no. | Nature | Dissemination level | Delivery date
DSA1.1-2 | iMarine Data e-Infrastructure Plan | SA1 | O | PU | M1, M16
DSA1.3-4 | iMarine Data e-Infrastructure Operation Report | SA1 | R | PU | M12, M30
DSA2.1-2 | Virtual Research Environments Plan | SA2 | O | PU | M1, M16
DSA2.3-4 | Applications and Tools Development Plan | SA2 | O | PU | M3, M18
DSA2.5-6 | Virtual Research Environments Activity Report | SA2 | R | PU | M12, M30
DSA3.1-2 | Software Release Procedures and Tools | SA3 | O | PU | M1, M16
DSA3.3-4 | Software Release Activity Report | SA3 | R | PU | M12, M30
Work package descriptions
Work package number | SA1 | Start date or starting event: | M1
Work package title | iMarine Data e-Infrastructure Deployment and Operation
Activity Type | SVC

Participant number | 1 | 2 | 3 | 4 | 5 | 6 | 7
Participant short name | ERCIM | CNR | NKUA | CERN | E-IIS | US | FORTH
Person-months per participant | – | 13 | 4 | 20 | 8 | – | –

Participant number | 8 | 9 | 10 | 11 | 12 | 13 | 14
Participant short name | Terradue | Trust-IT | FAO | FIN | UNESCO | CRIA | IRD
Person-months per participant | – | – | 2 | 2 | – | – | –
Objectives
The main objective of this work package is to operate the iMarine Data e-Infrastructure by providing all the facilities necessary to organise the provided resources into a coherent distributed infrastructure. This includes the management of the hardware and software resources deployed in the infrastructure, the monitoring of the status of the infrastructure services, the accounting of infrastructure exploitation, and the establishment of links with the other infrastructures that compose the D4Science Federation.
Description of work
Work package leader: CERN;
TSA1.1: iMarine Data e-Infrastructure Operation
Task leader: CERN; Participants: CNR;
The main objective of this task is to manage the iMarine Data e-Infrastructure and ensure the availability of its resources to infrastructure administrators and users. A fundamental aspect in the management of the iMarine Data e-Infrastructure is the clear definition of the procedures to manage it. Operational procedures will be defined for the following areas: node provision, deployment, upgrade, certification, downtime, accounting, monitoring, availability, and incident management. Besides the definition of these procedures, this task is also responsible for the execution of the procedures related to the deployment and maintenance of the gCube core services (e.g. Portal, Information System, Resource Management), the infrastructure upgrade to new releases, and the monitoring of incidents affecting infrastructure availability. Finally, this task will also coordinate the relationships between the iMarine Data e-Infrastructure and other infrastructures, some of which belong to the D4Science Federation.
Summarizing, the following activities are planned:
- Define a complete set of procedures to manage the iMarine Data e-Infrastructure;
- Deploy and maintain the infrastructure core services and portal;
- Plan and execute the necessary upgrades of the infrastructure services;
- Provide support to infrastructure administrators and users for infrastructure incidents;
- Define interoperability agreements and resource-sharing policies with other infrastructures.
TSA1.2: iMarine Data e-Infrastructure Nodes Provision
Task leader: CNR; Participants: NKUA, E-IIS, FAO, FIN;
The main objective of this task is to provide the computational and storage resources (hereafter called nodes) that compose the iMarine Data e-Infrastructure. Each partner will provide nodes under one of two models: permanent nodes fully dedicated to project exploitation, or on-demand nodes provided for a pre-defined time at no charge for their consumption. The participating partners will provide at least the following nodes: CNR will provide 100 concurrent typical compute instances, 5 TB of storage space, and a 1 Gb network for data transfer. E-IIS will provide up to 100 small virtual server instances, 7 TB of storage space, and a 100 Mb network. NKUA will provide 50 compute instances, 5 TB of storage space, and a 1 Gb network.
This set of resources will be used to deploy, operate, and demonstrate the capabilities of the iMarine Data e-Infrastructure and Virtual Research Environments. Additional nodes may be added according to the project requests for new Virtual Research Environments; these additional nodes will be limited by the hardware availability of the providers.
Summarizing, the following activities are planned:
- Identify the providers of hardware resources and plan their participation in the iMarine Data e-Infrastructure;
- Provide hardware resources to the Data e-Infrastructure (dedicated or on-demand nodes);
- Deploy and maintain nodes hosting the gCube Hosting Node (gHN);
- Deploy and maintain nodes hosting the EMI middleware services;
- Deploy and maintain nodes hosting the Hadoop system.
TSA1.3: iMarine Data e-Infrastructure Availability, Monitoring and Accounting
Task leader: CERN; Participants: E-IIS;
The main objective of this task is to define, develop, and exploit a number of tools to efficiently monitor the status, usage, and availability of the iMarine Data e-Infrastructure. These tools will provide the appropriate infrastructure users and/or administrators with the information required to control the use of the infrastructure resources and to make the infrastructure more reliable.
This task will provide tools to verify (1) the status of each infrastructure node, (2) the availability of the core functionality provided by the infrastructure, (3) the usage of the infrastructure resources by the infrastructure users, and (4) the infrastructure service-to-service communication load. A minimal sketch of an availability probe is given after the activity list below.
Summarizing, the following activities are planned:
- Develop tools to monitor the status of the infrastructure resources;
- Develop tools to account for the infrastructure load at system level and the infrastructure usage by end-users;
- Develop tools to verify the availability of the major functionality offered by the infrastructure;
- Monitor the status of the infrastructure resources and report incidents or defects;
- Monitor the usage of the infrastructure resources and prepare summary reports;
- Monitor the availability of the infrastructure functionality and report incidents or defects.
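To make the availability checks concrete, the following is a minimal probe sketch in Python. The endpoint URLs, service names, and health-check paths are illustrative assumptions, not actual iMarine or gCube service addresses; a production probe would discover endpoints through the infrastructure's Information System and feed its results into the monitoring and accounting tools described above.

```python
"""Availability probe sketch for TSA1.3 (illustrative only)."""
import time
import urllib.error
import urllib.request

# Hypothetical service endpoints; a real probe would obtain these from
# the infrastructure's Information System rather than hard-coding them.
ENDPOINTS = {
    "portal": "https://portal.example.org/health",
    "information-system": "https://is.example.org/health",
    "resource-management": "https://rm.example.org/health",
}

def check(name: str, url: str, timeout: float = 10.0) -> dict:
    """Probe one endpoint and return a status record."""
    started = time.time()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            status = "UP"
    except urllib.error.HTTPError as error:
        status = f"HTTP {error.code}"   # the service answered, but with an error
    except Exception as error:          # timeout, DNS failure, connection refused
        status = f"DOWN ({error.__class__.__name__})"
    return {"service": name, "status": status,
            "latency_s": round(time.time() - started, 3)}

if __name__ == "__main__":
    # One probing round; an operational deployment would schedule this
    # periodically and raise an incident whenever a service is not UP.
    for service, url in ENDPOINTS.items():
        print(check(service, url))
```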
Deliverables
- DSA1.1-2 iMarine Data e-Infrastructure Plan (M1, M16) defines the plans for the provision of the infrastructure nodes as well as the procedures and the tools adopted to ensure the correct and effective operation of the infrastructure;
- DSA1.3-4 iMarine Data e-Infrastructure Operation Report (M12, M30) reports on the status of the Data e-Infrastructure in terms of nodes available, software deployed, quality of service, and usage.
Work package number | SA2 | Start date or starting event: | M1
Work package title | Virtual Research Environments Deployment and Operation
Activity Type | SVC

Participant number | 1 | 2 | 3 | 4 | 5 | 6 | 7
Participant short name | ERCIM | CNR | NKUA | CERN | E-IIS | US | FORTH
Person-months per participant | – | 41 | 15 | – | – | – | –

Participant number | 8 | 9 | 10 | 11 | 12 | 13 | 14
Participant short name | Terradue | Trust-IT | FAO | FIN | UNESCO | CRIA | IRD
Person-months per participant | – | – | 33 | 18 | 12 | 12 | 8
Objectives
The main objective of this work package is to develop and deploy Virtual Research Environments in the iMarine Data e-Infrastructure to run applications and services identified and developed by the EA-CoP. These applications and services will integrate and exploit facilities developed in the context of the JRA work packages and other data and tools facilities provided by the EA-CoP.
The deployment and operation of Virtual Research Environments will drastically reduce the time-to-market of data management tasks. The integration of heterogeneous resources will be largely simplified by exploiting tailored APIs for data discovery, transfer, curation (including harmonization), and transformation. Community-specific tools and applications will be easily integrated through the workflow management commodities, which allow several computational paradigms to be exploited.
Description of work
Work package leader: CNR;
TSA2.1: Virtual Research Environments Operation
Task leader: CNR; Participants: FAO;
The main objective of this task is to deploy and maintain the Virtual Research Environments designed and implemented to support the use cases and requirements identified by the EA-CoP. In particular, it will focus on three real-life business cases (cf. Section ). These Virtual Research Environments will integrate a well-defined set of resources (data collections, metadata schemas, EA-CoP tools and applications, gCube components).
This task will work in close collaboration with the NA3 work package to analyse the requirements from the EA-CoP and consequently identify the required Virtual Research Environments and the resources to be provided. The CoP partners already employ vertically integrated software solutions and want to bring these, or parts thereof, to the infrastructure in order to overcome obstacles in their data flows. This task will analyse the re-use of existing CoP resources, or identify cost-effective alternatives.
Finally, this task is responsible for the deployment of the identified Virtual Research Environments in the iMarine Data e-Infrastructure deployed by the SA1 work package, and the continuous maintenance of these environments. These activities will be executed based on clearly identified operational procedures.
Summarizing, the following activities are planned:
- Define procedures to manage the operation of the Virtual Research Environments;
- Analyse the requirements identified by the EA-CoP;
- Identify the required Virtual Research Environments and the resources to be provided;
- Deploy and maintain the required Virtual Research Environments.
TSA2.2: Virtual Research Environments Resources and Tools Provision
Task leader: FAO; Participants: CNR, FIN, UNESCO, CRIA, IRD;
The CoP partners have made substantial investments in developing their systems and seek integration models to improve the services they offer to their customers. A cost-effective provisioning model that improves on the current fragmented development efforts must be formulated to serve the different business cases.
The main objective of this task is to identify, develop, and integrate the data resources and tools required by the EA-CoP to be exploited in the envisaged Virtual Research Environments. These resources are key to the success of the Virtual Research Environments as they represent the community contribution and needs.
The data resources identified by the EA-CoP must be registered and maintained in the iMarine Data e-Infrastructure to be exploited by the different Virtual Research Environments.
The tools and applications identified by the EA-CoP will be implemented (if needed) and registered in the iMarine Data e-Infrastructure to be exploited by the different Virtual Research Environments. These applications represent vertical solutions that exploit the gCube workflow management facilities and the gCube high-level APIs provided by the JRA work packages to integrate the infrastructure and resource management functionality with the community-specific functionality. A minimal registration sketch is given after the activity list below.
Summarizing, the following activities are planned:
- Identification, registration, and maintenance of data resources;
- Definition and implementation of vertical solutions integrating community-specific applications and tools with the facilities offered by gCube workflow management and the gCube APIs.
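As an illustration of the first activity, the sketch below registers a community data collection with the infrastructure. The registration endpoint, payload schema, and field names are invented for this example and do not reflect the actual gCube registration interfaces, which may differ substantially.

```python
"""Data-resource registration sketch for TSA2.2 (illustrative only)."""
import json
import urllib.request

# Hypothetical registration endpoint of the infrastructure's
# Information System (placeholder URL, not a real iMarine address).
REGISTRY_URL = "https://is.example.org/resources"

def register_data_resource(name: str, access_url: str, schema: str) -> str:
    """Register one community data collection and return its resource id."""
    payload = json.dumps({
        "type": "DataCollection",
        "name": name,
        "accessPoint": access_url,   # where the VREs will read the data
        "metadataSchema": schema,    # e.g. a community metadata standard
    }).encode("utf-8")
    request = urllib.request.Request(
        REGISTRY_URL, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response)["resourceId"]

if __name__ == "__main__":
    try:
        print(register_data_resource(
            "species-occurrence-records",
            "https://data.example.org/occurrences",
            "Darwin Core"))
    except OSError as error:
        # Expected when run against the placeholder URL above.
        print("registration failed:", error)
```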
TSA2.3: Virtual Research Environments Common Interfaces and Tools
Task leader: CNR; Participants: NKUA;
The main objective of this task is to develop the common interfaces and tools envisaged as needed by the various virtual research communities aggregated in the context of the CoP. These components are not functional to the management of the data; rather, they are key to its exploitation. First of all, the CoP aims to collaborate in exploiting the data richness aggregated by the Data e-Infrastructure. This task will not only integrate and customise the technologies and tools provided by the so-called Web 2.0; rather, it will tailor the Web 2.0 approaches towards the sharing of data and the development of trust. A feature-rich experts’ database will be built by aggregating expert profiles with the operations they perform in a social framework where confidentiality, privacy, and public information will be modelled by each expert according to his/her wishes. Sharing of data will be fostered through the immediate availability of results, workflows, annotations, documents, etc., independently of whether they are stored in a public or private storage area. Shared workspaces, blogs, chats, notification walls, and broadcast messages will be supported in one common environment. This set of features will either be freely available or be governed by personalised business process workflows, defined and instantiated through a dedicated environment that will allow the definition of specific policies to regulate data movement (a minimal policy sketch is given after the activity list below).
Summarizing, the following activities are planned:
- Development of common user interfaces and tools tailored to serve several virtual research communities through the integration of existing technologies;
- Development of social networking facilities;
- Development of business process workflows.
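As a minimal sketch of the policy-governed workflows mentioned above, the following illustrates how a data-movement policy could be expressed and evaluated. The rule fields, area names, and roles are invented for this example; the dedicated policy-definition environment would offer a far richer model.

```python
"""Data-movement policy sketch for TSA2.3 (illustrative only)."""
from dataclasses import dataclass

@dataclass(frozen=True)
class MovementRule:
    source_area: str        # e.g. "private", "shared", "public" (invented)
    target_area: str
    allowed_roles: frozenset

# A tiny example policy: experts may publish from their private workspace
# to the shared area; only data managers may promote shared data to public.
POLICY = [
    MovementRule("private", "shared", frozenset({"expert", "data-manager"})),
    MovementRule("shared", "public", frozenset({"data-manager"})),
]

def movement_allowed(source: str, target: str, role: str) -> bool:
    """Check whether a user with `role` may move data from source to target."""
    return any(rule.source_area == source and rule.target_area == target
               and role in rule.allowed_roles
               for rule in POLICY)

if __name__ == "__main__":
    print(movement_allowed("private", "shared", "expert"))   # True
    print(movement_allowed("shared", "public", "expert"))    # False
```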
TSA2.4: Virtual Research Environments Development Support
Task leader: NKUA; Participants: CNR;
The main objective of this task is to provide continuous support to the EA-CoP in the definition and development of the applications and tools required by the identified Virtual Research Environments. This dedicated support will facilitate the development activities, in particular when exploiting the gCube workflow management commodities, the gCube high-level APIs, and the gCube common interfaces.
Summarizing, the following activities are planned:
- Support for the exploitation of the APIs provided by JRA4;
- Support for the exploitation of the common interfaces and tools;
- Support for the integration of existing tools and resources.
Deliverables
- DSA2.1-2 Virtual Research Environments Plan (M1, M16) describes the identified Virtual Research Environments to be deployed in the iMarine Data e-Infrastructure as requested by the EA-CoP;
- DSA2.3-4 Applications and Tools Development Plan (M3, M18) plans the development of the identified community tools and applications and the common tools and interfaces required for the iMarine Data e-Infrastructure Virtual Research Environments;
- DSA2.5-6 Virtual Research Environments Activity Report (M12, M30) reports on the deployed Virtual Research Environments in terms of community tools integrated, resources involved, and user exploitation.
Work package number | SA3 | Start date or starting event: | M1
Work package title | Enabling-technology Integration and Distribution
Activity Type | SVC

Participant number | 1 | 2 | 3 | 4 | 5 | 6 | 7
Participant short name | ERCIM | CNR | NKUA | CERN | E-IIS | US | FORTH
Person-months per participant | – | 5 | 5 | – | 17 | – | –

Participant number | 8 | 9 | 10 | 11 | 12 | 13 | 14
Participant short name | Terradue | Trust-IT | FAO | FIN | UNESCO | CRIA | IRD
Person-months per participant | – | – | – | – | – | – | –
Objectives
The main objective of this work package is to integrate, test, and distribute the software (the components maintained and enhanced in the JRA activities plus the community tools developed in the context of the SA2 work package) that enables the iMarine Data e-Infrastructure and the Virtual Research Environments provided by the project, by selecting, deploying, and enhancing existing open source tools.
This work package will adopt the ETICS model to automate as much as possible the build, test, integration, and distribution of software. The ETICS software is now an open source initiative jointly maintained by individual experts and E-IIS, and two instances of the ETICS infrastructure are available: one as part of the EMI services and one funded by the private investments of E-IIS. The ETICS approach allows the system owner to configure the software release starting from the source code available in any VCS, identifying the dependencies, target platforms, build settings, testing scripts, etc.
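To illustrate the kind of information such a configuration captures, the following is a schematic sketch in Python. It is not actual ETICS syntax; the component name, repository URL, platform tags, and field names are illustrative assumptions made for this example.

```python
"""Schematic example of an ETICS-style build configuration.

This is NOT actual ETICS syntax; all names and values below are
illustrative assumptions only.
"""
import json

BUILD_CONFIGURATION = {
    "component": "example-gcube-service",       # hypothetical component
    "vcs": {
        "type": "svn",                          # the project hosts code in SVN
        "url": "https://svn.example.org/gcube/trunk/example-service",
    },
    "dependencies": [                           # resolved at build time
        {"name": "example-core-library", "version": ">=1.0"},
    ],
    "platforms": ["linux_x86_64"],              # target platform tags
    "build": {"command": "mvn clean package"},
    "tests": {"command": "mvn verify"},         # automatable test hook
}

if __name__ == "__main__":
    # A build system would read this declaration, check out the source,
    # resolve dependencies, and run the build and test commands per platform.
    print(json.dumps(BUILD_CONFIGURATION, indent=2))
```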
Description of work
Work package leader: E-IIS;
TSA3.1: Software Integration, Testing and Release
Task leader: E-IIS; Participants: CNR;
The main objective of this task is to select, deploy, maintain, and possibly enhance the tools and facilities required to store, build, test, and release the software developed by the project's software-providing work packages.
A number of common tools will be exploited and further developed to deliver these tasks: an SVN instance hosting the source code will ensure the availability of the open source code and its documentation to all contributors, while the ETICS build and test system will manage daily builds/tests and release builds/tests and will provide a common interface to visualise build results, test results, source code metrics, and other verifications.
From the testing perspective, this work package will be responsible for testing each individual release before its distribution by running the following test types: deployment tests, functional tests, and performance tests. Functional tests, as well as unit tests, will be run automatically where programmers develop specific functionality to self-test the code (a minimal self-test sketch is given after the activity list below).
Summarizing, the following activities are planned:
- Define the procedures to manage the release of gCube and community software;
- Define the model for self-testing and the guidelines to provide automatable functional test scripts;
- Plan the required releases as defined by the project bodies;
- Produce major, minor, and maintenance releases according to the established plan;
- Test new releases from the deployment, functional, and performance points of view;
- Maintain a testing infrastructure on which to run the identified tests;
- Execute continuous integration and testing of the latest source code;
- Keep the official project software repository updated with all the latest releases;
- Maintain the necessary tools to support the integration and testing activities.
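As a minimal sketch of what an automatable functional self-test could look like under the guidelines this task will define, consider the following; the service endpoint and its expected echo behaviour are invented for this example and do not describe a real gCube service.

```python
"""Automatable functional self-test sketch for TSA3.1 (illustrative only)."""
import unittest
import urllib.request

# Hypothetical endpoint of a service deployed on the testing infrastructure.
SERVICE_URL = "https://service.example.org/echo"

class EchoServiceFunctionalTest(unittest.TestCase):
    def test_service_answers(self):
        """A deployed instance must answer HTTP 200 and echo its input."""
        with urllib.request.urlopen(SERVICE_URL + "?msg=ping", timeout=10) as r:
            self.assertEqual(r.status, 200)
            self.assertIn(b"ping", r.read())

if __name__ == "__main__":
    # The continuous integration system would discover and run such tests
    # after each deployment test, failing a release candidate on regression.
    unittest.main()
```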
TSA3.2: Software Distribution and Documentation
Task leader: NKUA; Participants: CNR;
The main objective of this task is to release the source code that has been successfully built, integrated, and tested in TSA3.1. The software packages composing each release will be uploaded to the project Software Repository. All releases will also be linked and documented on the gCube website. Besides making all packages of each release available in the project Software Repository, this task will also prepare a number of other special packages: the gCube Hosting Node package, which includes the container and the core services required on each gCube node, and several gCube APIs packages grouping different service APIs according to the output of the JRA4 work package.
Finally, this task will verify that the released packages are properly documented in terms of: source code documentation (javadoc), documentation for users and administrators, and documentation for community developers.
Summarizing, the following activities are planned:
- Upload all releases to the project Software Repository;
- Document and link all releases on the gCube website;
- Prepare the identified gCube special packages;
- Validate, enhance, and build the source code documentation;
- Validate, enhance, and build the documentation for users and administrators;
- Validate, enhance, and build the guide for developers of gCube-compliant services.
Deliverables
- DSA3.1-2 Software Release Procedures and Tools (M1, M16) describes the procedures and tools used to build, integrate, test, and distribute the project software releases;
- DSA3.3-4 Software Release Activity Report (M12, M30) reports on the outcome of the release activities performed in the period, including a summary of the documentation status.
Summary of staff effort
Participant number | Participant short name | SA1 | SA2 | SA3 | Total person-months
1 | ERCIM | 0 | 0 | 0 | 0
2 | CNR | 13 | 41 | 5 | 59
3 | NKUA | 4 | 15 | 5 | 24
4 | CERN | 20 | 0 | 0 | 20
5 | E-IIS | 8 | 0 | 17 | 25
6 | US | 0 | 0 | 0 | 0
7 | FORTH | 0 | 0 | 0 | 0
8 | Terradue | 0 | 0 | 0 | 0
9 | Trust-IT | 0 | 0 | 0 | 0
10 | FAO | 2 | 33 | 0 | 35
11 | FIN | 2 | 18 | 0 | 20
12 | UNESCO | 0 | 12 | 0 | 12
13 | CRIA | 0 | 12 | 0 | 12
14 | IRD | 0 | 8 | 0 | 8
Total | | 49 | 139 | 27 | 215

List of milestones
Milestone number | Milestone name | Work package(s) involved | Expected date | Means of verification
MSA1.1-2 | Infrastructure nodes and gCube core services available | SA1 | M3, M18 | VREs can be deployed in the Data e-Infrastructure
MSA1.3 | Infrastructure availability, monitoring, and accounting tools deployed | SA1 | M3 | Infrastructure statistics on status, load, and usage are available
MSA2.1-2 | Community tools and common tools development and release | SA2 | M6, M21 | Planned VRE functionality can be deployed
MSA2.3-4 | Virtual Research Environments deployment | SA2 | M6, M21 | Planned VRE functionality is made available
MSA3.1 | Set-up of the project software repository and other build and test tools | SA3 | M2 | Software releases are made available to SA1
MSA3.2 | Definition of the gCube Hosting Node and gCube APIs special packages | SA3 | M3 | gCube special packages are made available to SA1
MSA3.3 | Documentation for users, administrators, and community developers available | SA3 | M6 | Documentation is made available on the gCube website

Pert diagram
The diagram below depicts the main relationships between the various tasks of the Service Activities. In particular, it shows how the policies governing the iMarine Data e-Infrastructure flow from NA3 to SA1, which is called to deploy an infrastructure compliant with them. To reach this objective, SA1 exploits the software packages resulting from SA3, which in turn is called to package and test the software artefacts produced by the JRA tasks. This infrastructure will host a number of Virtual Research Environments developed in the context of SA2 by CoP stakeholders to realise the scenarios and functionality captured by the three real-life business cases the proposal decided to support (cf. Section ). To implement these Virtual Research Environments, SA2 will rely on software artefacts produced in SA3 and complement them by developing scenario-specific applications and services tailored to specific needs, properly exploiting the generic facilities developed in the context of JRA for data management and consumption.
Figure . Service Activities Pert Diagram
Risk Analysis and Contingency Plans
A risk breakdown structure for the SA activities is presented in the following table.
Table . Service Activities Risk Analysis and Contingency Plan
Risk | Evaluation and Description | Contingency Plans
Unavailability of dedicated computing and storage resources | Internal, Low Probability, Medium Impact. The computing and storage resources provided by the project partners represented in SA1 are not made available. | The required amount of resources can be acquired from external cloud providers such as Engineering (E-IIS) or Amazon, thanks to the gCube extension to the cloud.
Unavailability of on-demand computing and storage resources | Internal/External, Medium Probability, Medium Impact. The computing and storage resources provided on demand as cloud resources in SA1 are not made available; alternatively, the gCube extension to access cloud resources is not operational. | The required amount of resources will be discussed with the project partners to understand their availability to provide more dedicated nodes.
Impossible to access resources from other infrastructures | External, Medium Probability, Medium Impact. The resources provided by other (data) infrastructures (from the D4Science Federation and others) are not reachable and cannot be consumed from the iMarine Data e-Infrastructure. | The Service Activity teams of both infrastructures establish direct communication to analyse the problem. If required, the defined interoperability solutions are updated and the development teams involved.
Ineffective procedures | Internal, Low Probability, Medium Impact. The identified procedures to manage the Data e-Infrastructure, the VREs, and the software releases are ineffective and introduce delays. | The reasons for the misalignment between procedures and daily practice are analysed, and improvements to the current procedures are proposed and tested. Advice from external experts may be sought.
Unclear or unstable requirements | Internal, Medium Probability, High Impact. The requirements and concrete use cases identified by the EA-CoP are unclear or unstable and do not allow the definition of appropriate Virtual Research Environments. | More regular, face-to-face meetings between the EA-CoP members and the technical teams are established to promote clear communication between the two teams and a detailed discussion of the requirements.
Low quality of the delivered community tools and gCube common tools | Internal, Medium Probability, Medium Impact. The applications and tools developed by the user communities and/or the gCube common tools are not deployable or are of low quality. | The effort on the development support task (TSA2.4) is intensified to allow better communication with, and support to, the developers of these applications and tools.
Unavailability of data resources | Internal, Medium Probability, High Impact. The data resources planned for the different use cases are not delivered and made available by the user communities. | Direct contact with the user communities and increased support are put in place to understand the reasons for the unavailability of the data resources. The policies for data provision may be revised.
Limited or unavailable VRE functionality | Internal, Low Probability, High Impact. The identified VRE functionality is not provided or does not satisfy the initial requirements. | Define the planned VRE functionality very clearly. Push developers to deliver early prototypes and users to provide early feedback.
Unavailability of build and testing tools | Internal, Low Probability, High Impact. The tools defined to integrate and test the project source code are not made available. | Other instances of the same tools are exploited (e.g. an ETICS instance is hosted at CERN and run by the EMI project; Engineering (E-IIS) will also install an instance on its premises). The usage of other tools is considered.
Dostları ilə paylaş: |