Grid and Supercomputing Activities at the ICT Division of CIEMAT

Alicia Acero1, Angelines Alberto1, Fernando Blanco1, Jorge Blanco1, Francisco Castejón1,2, José A. Fábregas1, Liubov A. Flores1, Manuel Giménez1, Pedro González1, José Guasp1,2, Raúl Isea3, Pedro Ladrón de Guevara1, Rafael Mayo1, Esther Montes1, Antonio Muñoz1, Manuel Rodríguez1, Antonio J. Rubio-Montero1, Eulogio Serradilla1, and Stefano Troiani1

1 CIEMAT, Avda. Complutense, 22, 28040 Madrid, Spain, [email protected]
2 Laboratorio Nacional de Fusión-Asociación Euratom-CIEMAT, 28040 Madrid, Spain
3 Fundación IDEA, Hoyo de la Puerta, Valle de Sartenejal, Baruta 1080, Venezuela

Abstract. In this paper we report on the Grid activities that are being carried out at the Information and Communication Technologies Division of CIEMAT. A summary of the supercomputing activities is also presented. Most of these activities have been developed in the framework of the Sixth and Seventh Framework Programmes of the European Commission; some of them are mainly focused on European communities, but others also involve Latin American ones.

Keywords: Grid Projects, Supercomputation, CIEMAT

1 Introduction

CIEMAT, the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, is a Spanish public research body which has been carrying out research and technological development projects since 1951; among them, and from its pioneering days, a special focus has been placed on computation technologies. Thus, it has addressed the new information and communication systems and scientific applications that have been emerging over time, such as Grid, or those for the development of novel architectures for supercomputation. CIEMAT has also collaborated in the design, development and programming support of advanced computation, as well as in tailor-made information systems with added value for R&D projects. Within the latter, this work reports on the Grid activities that have recently been carried out, or are currently under way, at the Information and Communication Technologies (ICT) Division. In addition, a brief summary of the supercomputation tasks and initiatives will also be presented.

2 Grid activities with Latin America: the EELA Project

The E-infrastructure shared between Europe and Latin America (EELA) Project [7] had one main objective during its two-year lifetime (2006-2007): to bring the e-Infrastructures of Latin American countries to the level of those of Europe, exploiting a Specific Support Action measure of the European Commission (EC). To do so, EELA benefited from the mature state of the ALICE project [2] and of the RedCLARA network [12] to focus on Grid infrastructure and related e-Science applications, identifying and promoting a sustainable framework for e-Science. Thus, the core of the EELA approach was to create a collaboration network that took care of the organization of training and dissemination in Grid technologies and of the deployment of a pilot Grid infrastructure for e-Science applications of social interest and high impact in Latin America.

The ICT Division of CIEMAT was the Project Coordinator. It also managed some of the Work Packages, such as the Project Office (WP1) and the Identification and Support of Grid Enhanced Applications (WP3), and, of course, collaborated in the rest, Pilot Test-bed Operation (WP2) and Dissemination Activities (WP4). The main Grid tasks carried out by our team were to set up a Grid site and to manage the porting of the scientific applications of interest to the Grid architecture. The CIEMAT site in EELA had the category of Regional Operations Centre, that is, we had to provide not only the usual resources (UI, CE, SE, WNs, VOMS), but also the advanced services of such a level (RB, BDII, UI, LFC, monitoring...).

In the arena of applications, and beyond the management activities in the fields of Biomedicine, High Energy Physics (HEP), Climate and e-Learning, CIEMAT supported several specific applications: GATE [30] (for radiotherapy simulations), BiG [4] (for the alignment of sequences) and LHCb [3] (for precise measurements of Charge-Parity violation). Special mention must be made of WISDOM [19], since EELA supported the second Data Challenge with its own target to be docked in Plasmodium vivax by means of more than one hundred thousand submitted jobs, 35% of which were executed on our infrastructure, and of the ALICE HEP experiment [1] (see below).

As mentioned above, the ICT Division also installed and managed the necessary administrative on-line tools for the smooth running of the Project, collaborated with the rest of the CIEMAT staff, and participated in the dissemination and training activities of EELA [10]. The final result of the Project was fully satisfactory and the European Commission ranked it with its best qualification, i.e. 'Good to excellent'. As an example, we can mention that, within WP3 (coordinated by the ICT Division), 25 papers were published and 57 presentations were made at non-EELA events during the two-year lifetime of the Project.

3 A follow-up for Latin America: The EELA-2 Project

Due to the good results obtained in its first phase, the EELA consortium submitted to the EC a proposal for a second phase of the ending project. Thus, the E-science grid facility for Europe and Latin America (EELA-2) initiative [5] was funded in the Seventh Framework Programme. It started in April 2008 with a two-year duration and is also coordinated by CIEMAT, this time through the Centro Extremeño de Tecnologías Avanzadas (CETA).

Four main objectives define the EELA-2 mission: to build a powerful, functional and well-supported Grid facility; to address a large community of users; to establish the financial and management schemes needed to operate and support the e-Infrastructure in the long term; and to anticipate the handover of the e-Infrastructure operation and support. The ICT Division of CIEMAT is involved in several Activities: Grid Infrastructure (SA1), Application Support (NA3) and Joint Research (JRA1). To support the infrastructure, the CIEMAT site keeps working and offering as many services as possible: CE, SE, UI, BDII... The old RB has been migrated to a WMS and, in the near future, the CE will be updated to a CREAM-CE. Currently, 38 WNs (130 cores) and 6.6 TB are available.

3.1 Research on Virtual Machines

The Research Activity (JRA1) in our Division is focused on Application- and Virtual Organization (VO)-compliant execution environments through virtualization. Virtual Machine (VM) technologies add a new abstraction layer that allows partitioning and isolating the physical hardware resources. They also offer a natural way to face a highly heterogeneous environment and can overcome some of the problems present in current Grid infrastructures. A general approach involves the use of virtual machines as workload units, instead of jobs, to provide a compliant execution environment for a batch of sequential tasks. In addition, virtualization-capable hosts managed by site owners or by VO administrators would dynamically deploy VO-specific services and software packages. A step forward is the creation of customized virtual clusters for specific applications and, finally, overlaying a virtual cluster on more than one physical cluster to cover the maximum amount of resources in the Grid.

3.2 DKES

Neoclassical transport for 3D fusion devices and, in particular, for the TJ-II stellarator can be calculated by means of two approaches: Monte Carlo (MC) methods and Drift Kinetic Equation solvers, the most commonly used being the DKES code [24]. In recent years, MC methods have been successfully deployed on the Grid, especially to solve Physics challenges, but in this case they can only offer an estimation of the diagonal part of the transport matrix [9]. On the other hand, DKES provides correct quantitative results for the complete transport matrix, with the drawback of high computation time and memory consumption. Like much other software for plasma fluid calculations, DKE solvers are usually executed in shared-memory systems. Nevertheless, their parametric and sequential nature makes it possible to divide the work into minimal tasks that can run on cluster and Grid environments. Since the DKES base software calculates all the monoenergetic diffusion coefficients for each set of parameters provided (radius of the magnetic surface, temperature, density, ions or electrons, and electric field), and these parameters can take a wide range of values in TJ-II (the stellarator installed and operating at CIEMAT), a scientist performing a slightly more precise analysis sees his/her need for computational resources grow dramatically. Therefore, to achieve valuable results, the final DKEsG software implementation must be able to use the maximum amount of available resources, even if they are heterogeneous (local systems and remote resources in several Grid infrastructures). The objective of this research is to obtain a complete set of monoenergetic parameters for TJ-II as a first step, and for other devices in further steps. This will be reached once the code is able to run on a Grid platform and several upgrades have been made to the original van Rij & Hirshman code, i.e. performing the necessary calculations to obtain the coefficients of the Onsager transport matrix from the diffusion coefficients for monoenergetic particles (which is what [24] provides). This work is being developed jointly with the staff of the Spanish National Fusion Laboratory and the current status of its development can be found in [27].
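To illustrate the parametric decomposition described above, the following minimal Python sketch (our own illustration; the executable name dkes_mono.sh, the command-line options and the parameter values are hypothetical and not part of DKES or DKEsG) expands a parameter scan into independent monoenergetic tasks that could be submitted as separate Grid jobs:

    from itertools import product

    # Illustrative parameter ranges only; real TJ-II scans use many more values.
    radii        = [0.1, 0.3, 0.5, 0.7, 0.9]   # normalized radius of the magnetic surface
    temperatures = [0.1, 0.5, 1.0]             # keV
    densities    = [0.5e19, 1.0e19, 2.0e19]    # m^-3
    species      = ["ions", "electrons"]
    e_fields     = [0.0, 1.0, 5.0]             # kV/m

    def make_task(rho, t, n, sp, er):
        """Describe one independent monoenergetic run as a Grid job specification."""
        return {
            "executable": "dkes_mono.sh",      # hypothetical wrapper script around DKES
            "arguments": f"--rho {rho} --temp {t} --dens {n} --species {sp} --efield {er}",
            "output_file": f"dkes_{sp}_r{rho}_t{t}_n{n}_e{er}.out",
        }

    # Each combination is a small sequential task, so the whole scan is
    # embarrassingly parallel and can be spread over clusters and Grid sites.
    tasks = [make_task(*c) for c in product(radii, temperatures, densities, species, e_fields)]
    print(f"{len(tasks)} independent monoenergetic tasks generated")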

3.3 PhyloGrid

The determination of the evolution history of different species is nowadays one of the more exciting challenges that are currently emerging in the computational Biology. In this framework, Phylogeny is able to determine the relationship among the species and, in this way, to understand the influence between hosts and virus. As an example we can mention the AIDS disease. In this application we are working on the development of a workflow based on Taverna [22] which is being implemented for calculations in Phylogeny by means of the MrBayes tool [26]. The user is able to define the parameters for doing the Bayesian calculation and determine the model of evolution. To do this, no knowledge about the computational procedure is required since a friendly interface is provided. This code is being developed jointly with the staff from the Fundaci´on IDEA (Venezuela). Several works have been presented describing the application and making studies about the phylogenetic tree from the Duffy domain [21], the Human Immunodeficiency Virus evolution [17] or the Human Papilloma Virus classification [18].
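As a rough illustration of what the workflow hides behind its interface, the following Python sketch (our own simplification, not the actual PhyloGrid implementation) turns a few user-level choices, such as the evolution model and the length of the Bayesian run, into a MrBayes batch block; the file names and default values are hypothetical:

    def mrbayes_block(alignment="alignment.nex", model_nst=6, rates="gamma",
                      ngen=1_000_000, nchains=4, samplefreq=100, burnin_frac=0.25):
        """Build a MrBayes batch block from user-level choices (illustrative only)."""
        burnin = int(burnin_frac * ngen / samplefreq)
        return "\n".join([
            "begin mrbayes;",
            f"  execute {alignment};",
            f"  lset nst={model_nst} rates={rates};",   # model of evolution
            f"  mcmc ngen={ngen} nchains={nchains} samplefreq={samplefreq};",
            f"  sump burnin={burnin};",
            f"  sumt burnin={burnin};",
            "end;",
        ])

    # Write the command file that a Grid job would feed to MrBayes.
    with open("run_mrbayes.nex", "w") as f:
        f.write(mrbayes_block())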

4 The ALICE experiment at LHC

Hosted at CERN, the European Organization for Nuclear Research, the ALICE (A Large Ion Collider Experiment) collaboration [1] has built a dedicated heavy-ion detector to exploit the unique physics potential of nucleus-nucleus interactions at Large Hadron Collider energies. The aim is to study the physics of strongly interacting matter at extreme energy densities, where the formation of a new phase of matter, the quark-gluon plasma, is expected. In 2006, the ALICE bureau and CIEMAT, the Universidad de Santiago de Compostela and the Centro de Estudios Aplicados al Desarrollo Nuclear (CEADEN) signed a five-year Memorandum of Understanding to provide a site for the calculations and storage of the experiment production; it is currently in force and is also involved in the EELA-2 framework. At the same time, a group of physicists belonging to the ICT Division is studying some interactions to be detected in the ALICE set-up. In particular, they generate Monte Carlo events to reproduce the production, decay and background of a particular quarkonium state to be detected in the ALICE detector. The site at CIEMAT complies with the ALICE requirements, in particular with the necessary VO BOX and the AliEn software. To date, not counting the CETA production and since the off-line experiment began to submit jobs to our site in August 2006, almost seventy thousand jobs have been executed (around 645,000 hours of CPU time, i.e. 645 KSI2K for our nodes). A SE of 4.4 TB is also available.

5 ICT Division and Fusion: The EUFORIA Project

EUFORIA (EU Fusion fOR Iter Applications) [8] is a project funded by the European Union under the Seventh Framework Programme (FP7) which provides a comprehensive framework and infrastructure for core and edge transport and turbulence simulation, linking Grid and High Performance Computing (HPC), to the Fusion modelling community. It started in January 2008 and will last until the end of 2010. Its aim is to enhance the modelling capabilities for ITER-sized plasmas through the adaptation, optimization and integration of a set of critical applications for edge and core transport modelling, targeting different computing paradigms as needed (serial and parallel grid computing and HPC). As a second step, complex workflows between several applications running on both HPC and Grid will be established. Nevertheless, these codes will be tested and will produce scientific results on Fusion devices already in operation. Because of its collaboration with the Spanish National Fusion Laboratory, the ICT Division of CIEMAT will port the GEM code [28] to the Grid. The gyrokinetic electromagnetic turbulence code GEM is a comprehensive delta-f particle code that includes the full dynamics of gyrokinetic ions and drift-kinetic electrons. Magnetic perturbations perpendicular to the equilibrium field are fully modelled. The simulation is useful for studying well-magnetized plasma physics and is especially powerful because it is accurate at very low fluctuation levels. Electron-ion collisions are included, as well as the full capability to model general axisymmetric toroidal equilibria. First steps towards this objective have already been taken and collaborations with the Max-Planck-Institut für Plasmaphysik (Garching) and the University of Edinburgh (which is porting the code to the HPC paradigm) are under way.

6 Collaboration with the EGEE-III Project

Since our Fusion activities started within other Projects and the Spanish National Fusion Laboratory is the manager of the Fusion Cluster, our Division is also collaborating with the largest Grid Project funded by the European Commission. The third phase of the Enabling Grids for E-sciencE Project (EGEE) [13] is the largest multi-disciplinary grid infrastructure in the world, bringing together more than 140 institutions to produce a reliable and scalable computing resource available to the European and global research community. Within it, the ICT Division is developing a Grid version of the FAFNER code [20], which was adapted for the TJ-II Fusion device. This tool simulates, by Monte Carlo methods, the Neutral Beam Injection (NBI) technology, i.e. a key heating method for most of the fusion experiments worldwide and the one that will be used in ITER. The independent trajectories of energetic neutral atoms are followed by the code after they enter the plasma, where they can undergo several processes whose probabilities are proportional to previously estimated cross-sections. To date, FAFNER2 has usually been run at CIEMAT in batch mode on a shared-memory Cray architecture, but it has also been translated to MPI. Our work has been to adapt this MPI version to the Grid and to add the DRMAA API to maximise its execution. This version already runs and performs accurate calculations, as can be seen in [25], where the steps of the transformation from a SHMEM over MIPS to a Grid over X86 technology are also explained.
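A minimal sketch of this kind of DRMAA-based submission is shown below, using the Python DRMAA bindings; the wrapper script fafner2_worker.sh, the number of batches and the arguments are hypothetical and do not correspond to the actual adaptation described in [25]:

    import drmaa

    N_BATCHES = 50  # hypothetical: split the Monte Carlo trajectories into 50 independent jobs

    s = drmaa.Session()
    s.initialize()
    try:
        jt = s.createJobTemplate()
        jt.remoteCommand = "./fafner2_worker.sh"   # hypothetical wrapper around the binary
        jt.args = ["--trajectories", "20000"]      # trajectories per batch (illustrative)
        jt.jobName = "fafner2"
        # Submit the same template N_BATCHES times; each job samples independent trajectories.
        job_ids = s.runBulkJobs(jt, 1, N_BATCHES, 1)
        # Block until every batch has finished before merging the partial results.
        s.synchronize(job_ids, drmaa.Session.TIMEOUT_WAIT_FOREVER, True)
        s.deleteJobTemplate(jt)
    finally:
        s.exit()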

7 The EPIKH Project: Not only development

The Exchange Programme to advance e-Infrastructure Know-How (EPIKH) Project [6] has recently signed its funding agreement with the European Commission through its International Research Staff Exchange Scheme. Since a great effort has been made to stimulate and foster e-Science and the Grid outside the European borders, it is now time to consolidate it by means of a well-structured programme in several regions (Latin America, Asia and Africa). Thus, the strategic aims of the EPIKH project are to reinforce the impact of e-Infrastructures in scientific research by defining and delivering a stimulating programme of educational events (including Grid Schools and High Performance Computing courses) and to broaden the engagement in e-Science activities and collaborations both geographically and across disciplines. The ICT Division staff will act as tutors in many of these training events, both for system administrators and for application developers, in Latin America. As a consequence, we plan to foster the establishment of future scientific collaborations among the countries and continents involved in the project.

8 Our domestic sustainability: the Spanish e-Science Network and other national Projects

The Spanish e-Science Network [15] aims to coordinate and foster scientific activities in Spain in the arena of e-Science. To do so, a human network of collaborating groups is being consolidated and a shared use of geographically distributed resources connected through the Internet is also under way. It is coordinated by the Universidad Politécnica de Valencia and hosts several particular initiatives.

One of them is the Spanish National Grid Initiative (NGI), whose objective is to consolidate a high-quality Grid production infrastructure in the long term. At our site (gLite and Globus compliant), the three VOs are already supported (the general one, operations and training) and CIEMAT is pushing its development, being also part of the Spanish JRU. The ICT Division is one of the accredited support groups for developing scientific applications; it belongs to the Areas defined for the Grid and some of the codes presented in this work also belong to the portfolio of applications of the Spanish e-Science Network [29]. GRIDImadrid [14] is an initiative in the framework of the R&D Activity Biogridnet whose objective is to integrate Grid technology in the Madrid region and to carry out research on Grids. Jointly with 6 other institutions, an e-infrastructure has been set up where several middleware techniques have been developed and where the ICT Division contributes with its own Globus site. In addition to these activities, we can mention the Ibercivis Project [23]. It is based on BOINC computing and is one of the most widespread projects of this kind in Spain. Its main purpose is to deploy a permanent distributed computing platform based on volunteer computing. Our Division is collaborating in the dissemination activities by coordinating the scientific part of the web page where the different research projects are presented.

9 Supercomputation activities at ICT Division

9.1 The precedents

Since 1959, CIEMAT has boosted the use of computation as a research method. Lots of calculations in many areas have been performed (Fusion, Environment, Meteorology, HEP...) and several computational environments have been made available to researchers, not only from CIEMAT but also from other national or international institutions: Vectorial, Cray, Altix, cc-NUMA, X86... The first computer devoted to scientific calculations in Spain was purchased by JEN in 1959, a UNIVAC SOLID STATE UCT from the Remington Rand Company. Later on, three UNIVAC machines, 1106, 1110 and 1110/81, were consecutively acquired by leasing in the 70's, the last one being operated via terminals instead of punched cards. All these computers were available to any researcher from other institutions such as Ministries, INTA, CSIC, several Universities, etc. In 1985, CIEMAT launched its first internal ICT programme for distributing tasks among several terminals connected in a Local Area Network, and a new computer, an IBM 3090/150 with a vector processor, was purchased. Following this kind of processor, three CRAY computers were incorporated into the CIEMAT Data Centre: XMS, YMP-EL and J90 (16 processors). The CRAY T3E computer, with a parallel architecture, arrived in the late 90's. A step beyond was taken in this decade with the SGI Origin 3800 computer (160 MIPS R14000 processors, 600 MHz) with cc-NUMA architecture. Its performance was upgraded with two SGI Altix machines of 96 (1.3 GHz) and 64 (1.5 GHz) Itanium 2 processors, although they now run Linux as their operating system.

Clusters produced a breakthrough because of the better value for money they offer. This commodity architecture began to be installed in many Data Centres such as ours and is now the most widespread worldwide. We currently have several farms working; since they are reconfigured dynamically according to the specific demand, we can summarize that we have around 200 processors (most of them connected by Infiniband) in both Grid and HPC platforms. Of course, it is mandatory to add to this hardware the necessary software to achieve good results through the submission of jobs to a batch system over the years. Thus, many tools are available, forming a wide range of possibilities: compilers, debuggers, parallelization, graphical and numerical libraries, etc. [11]. As an example of the research carried out in our Centre, we present some tests of the 3D image reconstruction code for PET, based on the iterative MLEM (Maximum Likelihood Expectation Maximization) algorithm, that has been developed by our staff using the Sparse Matrix technique (see Table 1). In this Table, the matrix size is related to the quality of the reconstructed image and gives a measure of the cost in terms of computational resources.

Matrix         File size [MB]   Matrix size [number of data (10^9)]   Time consumption
20 x 20 x 5    3.59             0.52                                   03:09:00
80 x 80 x 40   9.66             1.84                                   20:12:30

Table 1. MLEM example
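For reference, a minimal sketch of the MLEM multiplicative update with a sparse system matrix is given below (NumPy/SciPy); it is our own illustration of the algorithm, not the optimised code used to obtain the figures in Table 1:

    import numpy as np
    from scipy.sparse import csr_matrix

    def mlem(A, y, n_iter=50, eps=1e-12):
        """Maximum Likelihood Expectation Maximization for emission tomography.
        A: sparse system matrix (lines of response x voxels), y: measured counts."""
        x = np.ones(A.shape[1])                           # uniform initial image
        sens = np.asarray(A.sum(axis=0)).ravel()          # sensitivity image, A^T * 1
        for _ in range(n_iter):
            proj = A.dot(x)                               # forward projection A x
            ratio = y / np.maximum(proj, eps)             # measured / estimated counts
            x *= A.T.dot(ratio) / np.maximum(sens, eps)   # multiplicative MLEM update
        return x

    # Tiny toy example (2 detector bins, 3 voxels) just to show the call:
    A = csr_matrix(np.array([[1.0, 0.5, 0.0],
                             [0.0, 0.5, 1.0]]))
    y = np.array([10.0, 20.0])
    print(mlem(A, y, n_iter=10))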

9.2 The new cluster: Euler

To date, most of the supercomputation systems available at CIEMAT were based on shared memory. Currently, a constellation cluster has come into production at the ICT Division, which is in charge of its management. Euler is a cluster with the following characteristics:

– 144 Dual Xeon quad-core 3.0 GHz processors (2 GB per core)
– a RAM memory of 2.3 TB for the 1152 cores
– 140 GB of storage per node (21 TB as a whole)
– interconnected by Infiniband
– fully devoted to the execution of jobs
– X86 architecture

This constellation in blade format is, then, a distributed system architecture, even though it is formed by individual SMP systems of 8 cores and 16 GB of shared memory. Euler is connected to the rest of the CIEMAT network by a gigabit Ethernet link. High-performance file access (necessary for scientific computation) is guaranteed by the Lustre parallel file system, which makes it possible to transfer striped data among several clustered storage servers. These hosts store the data in a secure multi-tier RAID 6+0 disk cabinet, which adds up to 60 TB.

With respect to the computational power, since the nodes are based on the Intel 5450 chip, we obtain a peak value Rpeak of 13.82 Tflops and an Rmax value of 10.98 Tflops according to the Linpack test (HPL). These figures place Euler among the most powerful Spanish supercomputers. CIEMAT's aim is to join the Spanish Supercomputing Network promoted by the Spanish Ministry of Science and Innovation and to make the Euler resources available to the research community. Many applications will run on this supercomputer in the near future and the ICT Division is supporting the CIEMAT research groups in porting every tool to be executed on it.
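As a back-of-the-envelope check (assuming 4 double-precision floating-point operations per cycle and per core, typical for this Xeon generation): 1152 cores x 3.0 GHz x 4 flops/cycle = 13.824 Tflops, consistent with the Rpeak value quoted above.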

9.3 A drawback: the use of heterogeneous resources

As mentioned above, the Data Centre of CIEMAT must manage heterogeneous resources. Because of this, our staff is researching a friendly interface that will allow users to make the most of the resources available at a given time, either cluster or Grid. By means of GridWay [16] and Virtual Machines, which may or may not be started, a user will be able to connect to our facilities and send jobs without caring whether the Grid or HPC resources are busy, because this interface will manage all the free ones and adapt them to the job requirements.

10 Conclusions

A brief summary of the Grid and Supercomputing activities carried out at the ICT Division of CIEMAT in recent years has been presented. The Projects in which the Division has been involved, as well as the applications and developments achieved, have also been described. As a consequence, a strong background in these technologies has been acquired by the staff. This is important since the Division is now in a good position to collaborate and participate in future national or international initiatives.

Acknowledgements

The ICT Division of CIEMAT acknowledges the funding support provided by the European Commission through the EELA, EELA-2, EUFORIA, EPIKH and EGEE-III Projects and by the Spanish Ministry of Science and Innovation through the Spanish e-Science Network.

References

1. K. Aamodt et al. "The ALICE Experiment at the CERN LHC", Journal of Instrumentation, 3, pp. S08002, (2008).
2. The ALICE Project, http://alice.dante.net/
3. A. Augusto-Alves Jr. et al. "The LHCb Detector at the LHC", Journal of Instrumentation, 3, pp. S08005, (2008).
4. G. Aparicio et al. "BLAST in Grid (BiG): A Grid-Enabled Software Architecture and Implementation of Parallel and Sequential BLAST", Proc. Spanish Conf. e-Science Grid Computing, 1, pp. 11–22, (2007).
5. R. Barbera et al. "Grid infrastructures for e-Science: a use case from Latin America and Europe", Proc. 6th International Conference on Open Access, (2008).
6. R. Barbera et al. "The EPIKH Project", Proc. IST-Africa Conf. 2009, in print (2009).
7. J. Casado, R. Mayo, R. Muñoz. "The EELA Project, an e-Infrastructure shared between Europe and Latin America", 1st Iberian Grid Infra. Conf. Proc., 1, pp. 29–35, (2007).
8. F. Castejón et al. "EUFORIA: Grid and High Performance Computing at the service of fusion modelling", 2nd Iberian Grid Infra. Conf. Proc., 2, pp. 115–126, (2008).
9. F. Castejón et al. "Ion kinetic transport in the presence of collisions and electric field in TJ-II ECRH plasmas", Plasma Phys. Control. Fusion, 49, pp. 753–776, (2007).
10. C. Cherubino et al. "EELA Training Activities. Expanding the Grid user community in Europe and Latin America", 1st Iberian Grid Infra. Conf. Proc., 1, pp. 377–380, (2007).
11. CIEMAT software tools, http://www.ciemat.es/portal.do?TR=CIDR=129
12. The CLARA Network, http://www.redclara.net/
13. The EGEE Project, http://www.eu-egee.org
14. The GRIDImadrid initiative, http://www.gridimadrid.org/
15. V. Hernández. "Fostering Spanish Scientific Activity by Means of Collaborative use of Distributed Computational Resources", available from http://www.e-ciencia.es/docpublicos.jsp
16. E. Huedo, R. S. Montero, I. M. Llorente. "The GridWay Framework for Adaptive Scheduling and Execution on Grids", Scalable Computing–Practice and Experience, 6, pp. 1–8, (2005).
17. R. Isea et al. "Challenges and characterization of a Biological system on Grid by means of the PhyloGrid application", Procs. First EELA-2 Conf., pp. 139–146, (2009).
18. R. Isea et al. "Computational challenges on Grid Computing for workflows applied to Phylogeny", Lec. Notes Computer Sciences, in print (2009).
19. N. Jacq et al. "Grid enabled virtual screening against malaria", J. Grid Computing, 6, pp. 29–43, (2008).
20. G. G. Lister. "A fully 3-D Neutral Beam Injection Code Using Monte Carlo Methods", Technical Report 4/222, Max Planck IPP, Garching, Germany (1985).
21. E. Montes, R. Isea, R. Mayo. "PhyloGrid: a development for a workflow in Phylogeny", 2nd Iberian Grid Infra. Conf. Proc., 2, pp. 378–387, (2008).
22. T. Oinn et al. "Taverna: lessons in creating a workflow environment for the life sciences", Concurrency and Computation: Prac. Exp., 18, pp. 1067–1100, (2006).
23. R. Ramos et al. "Ibercivis: A BOINC-based framework for advanced volunteer computing", Procs. First EELA-2 Conf., pp. 295–302, (2009).
24. W. I. van Rij, S. P. Hirshman. "Variational bounds for transport coefficients in three-dimensional toroidal plasmas", Phys. Fluids B, 1, pp. 563–569, (1989).
25. M. Rodríguez et al. "FAFNER2: adaptation of a code for estimating NBI heating of fusion plasmas on the Grid", Procs. First EELA-2 Conf., pp. 227–234, (2009).
26. F. Ronquist, J. P. Huelsenbeck. "MrBayes 3: Bayesian phylogenetic inference", Bioinformatics, 19, pp. 1572–1574, (2003).
27. A. J. Rubio-Montero et al. "Calculations of Neoclassical Transport on the Grid", Procs. First EELA-2 Conf., pp. 213–218, (2009).
28. B. D. Scott. "Tokamak edge turbulence: background theory and computation", Plasma Phys. Control. Fusion, 49, pp. S25–S41, (2007).
29. Spanish e-Science Network wiki, http://www.e-ciencia.es/wiki/index.php/Portal:Aplicaciones
30. D. Strul et al. "GATE (Geant4 Application for Tomographic Emission): a PET/SPECT general-purpose simulation platform", Nucl. Phys. B (Proc. Suppl.), 125, pp. 75–79, (2003).
