An approach to grid scheduling by using Condor-G Matchmaking mechanism
E. Imamagic, B. Radic, D. Dobrenic
University Computing Centre, University of Zagreb, Croatia

Abstract. Grid is a distributed environment that integrates computing, storage and other resources in order to enable execution of applications that cannot be run on a single resource. Such an environment requires an advanced scheduling system in order to execute users' applications efficiently. In this paper, we give an overview of issues related to grid scheduling. We describe in detail one of the most mature solutions, the Condor-G Matchmaking mechanism. Furthermore, we propose our own approach to building a grid scheduling system based on Condor-G Matchmaking.

Keywords. Grid scheduling, execution management, Condor-G, matchmaking

1. Introduction

Grid is a geographically distributed system that enables integration of the heterogeneous resources needed for execution of complex, demanding scientific and engineering applications. Integration of resources beyond the boundaries of a single organization also enables easier cooperation among diverse scientists who work in different geographical locations.

Grid middleware (GMW) is a set of services and protocols that enable seamless integration of resources in a grid. It provides a layer of abstraction that hides differences in the underlying technologies (e.g. computer clusters, storage managers, application services, etc.). Numerous standards are being defined for grid protocols and services, the majority of them within the Global Grid Forum (GGF) [9] organization. The basic functionalities of grid middleware are security, information, job and data management. The most widely used solutions that provide these basic functionalities are Globus Toolkit [10] and UNICORE [24].

Grid scheduling (also called superscheduling, metascheduling and grid brokering) is one of the advanced features of grid middleware. It is defined as the process of scheduling jobs where resources are distributed over multiple administrative domains [23]. To date, there is no grid scheduling system that fully meets the requirements that will be described in the following sections.

Figure 1. Grid scheduling architecture

Execution of jobs in a grid is shown in Figure 1. It consists of the following activities: users submit their jobs, described in a description language, to the grid scheduler (1); the scheduler uses the grid middleware's information systems to discover and evaluate resources (2); once the scheduler decides where the job will be executed, execution is started and managed using the available grid middleware components, such as job and data management systems (3).
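To make the three numbered steps concrete, here is a minimal sketch of such a scheduling loop in Python. All names (the info_system and middleware objects, the attribute names) are hypothetical illustrations, not part of Condor-G or Globus.

```python
# Minimal sketch of the Figure 1 flow; all names are hypothetical.

def schedule(job, info_system, middleware):
    """(1) takes a submitted job, (2) discovers and evaluates
    resources via the information system, (3) dispatches it."""
    # (2) discover candidate resources and filter by the job's needs
    resources = info_system.query()
    candidates = [r for r in resources
                  if r["free_cpus"] >= job["cpus"]
                  and r["memory_mb"] >= job["memory_mb"]]
    if not candidates:
        return None  # no match; a real scheduler would queue and retry
    # evaluate: pick the least loaded candidate
    chosen = min(candidates, key=lambda r: r["load"])
    # (3) execution is started through the grid middleware (e.g. GRAM)
    middleware.dispatch(job, chosen)
    return chosen
```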
Grid scheduling differs from classical cluster batch scheduling in many ways: in a grid, the scheduler does not have full control over the resources, information about resources is usually unreliable and stale, and application behavior is difficult to predict. All these reasons make grid scheduling far more difficult to realize.

This paper is organized as follows. After this short introduction, in section 2 we describe the main issues related to grid scheduling. In section 3, we describe how Condor-G Matchmaking can be used for grid scheduling. We describe our grid environment in section 4. In section 5, we describe our proposed system in detail. Related work is described in section 6. Conclusions and future directions are given in sections 7 and 8.

2. Grid scheduling issues

Due to the complexity of the grid environment, grid scheduling raises numerous issues. In this section, we briefly describe the most important requirements that a scheduling system should meet.

Many grid middlewares that provide the basic set of functionalities exist today. Some of them are implemented in accordance with grid standards (e.g. Globus Toolkit 4.0) and some became de facto standards (e.g. UNICORE and Globus Toolkit 2.4). A grid scheduling system should be able to utilize as many grid middlewares as possible. Some existing solutions for particular subsystems are:
- Security management: Globus Grid Security Infrastructure (GSI) ([14]), UNICORE security, Virtual Organization Membership Service (VOMS) ([25])
- Information systems: Globus Monitoring and Discovery System (MDS) 2 and MDS 4 ([11]), Ganglia ([6]), Network Weather Service ([20])
- Job management: Globus Grid Resource Allocation and Management (GRAM) 2 and GRAM 4 ([12]), UNICORE, Condor
- Data management: Globus GridFTP and Reliable File Transfer (RFT) ([13]), UNICORE file management.

The ability to refresh proxy certificates in environments that utilize Globus Toolkit as grid middleware is preferred. The most widely used solution for this problem is the MyProxy credential repository ([21]).

A grid scheduler should support various job types. The basic types are serial and parallel jobs. A serial job is an application that demands a single processor for execution. A parallel job requires more than one processor (typically MPI applications). More complex job types are job arrays and workflows. A job array consists of multiple executions of the same job (usually initialized with different parameters), and a workflow is a set of dependent tasks; a minimal sketch of both follows.
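The sketch below illustrates the two complex job types under stated assumptions: a job array as parameterized copies of one task, and a workflow as tasks executed only after their dependencies finish. It is an illustration only; Condor expresses the same ideas through submit-file macros and its DAG tooling.

```python
# Hypothetical illustration of job arrays and workflows.

def job_array(task, parameters):
    """A job array: the same task cloned with different parameters."""
    return [{"task": task, "param": p} for p in parameters]

def run_workflow(tasks, deps, run):
    """A workflow: a set of dependent tasks, executed in an order that
    respects the dependency relation (a simple topological run).
    deps maps a task name to the set of tasks it depends on."""
    done = set()
    pending = set(tasks)
    while pending:
        ready = {t for t in pending if deps.get(t, set()) <= done}
        if not ready:
            raise ValueError("cyclic dependencies")
        for t in ready:
            run(t)          # submit/execute via the scheduler
            done.add(t)
        pending -= ready

# Example: preprocess -> (simulate_a, simulate_b) -> collect
run_workflow(
    ["preprocess", "simulate_a", "simulate_b", "collect"],
    {"simulate_a": {"preprocess"}, "simulate_b": {"preprocess"},
     "collect": {"simulate_a", "simulate_b"}},
    run=print)
```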
Standard and advanced functionalities of a job management system should be supported:
- Checkpointing: the ability to store a job's current state, which can afterwards be used to reconstruct the job.
- Preemption: the ability to evict one job from a resource in favor of another job with a higher priority.
- Job migration: the ability to dynamically move an active job from one resource to another.
- Rescheduling: the ability to reschedule jobs on alternative resources, which is important when a job does not execute properly on a specific resource.
- Fault tolerance: the ability to automatically recover from job or resource failures.
- Advance reservation: reserving resources for a specific period of time.

The scheduler should be capable of assigning priorities to jobs and resources. In addition, resource owners should be able to define which jobs they prefer. In the same way, users should be able to rank the resources and request specific features (e.g. libraries or hardware capabilities).

Users should be able to define custom scheduling algorithms. Beside support for custom algorithm development, implementations of the most common algorithms should be provided.

The scheduling system should make use of the local job management system's advance reservation mechanism. By using advance reservations, a grid execution system enables users to run their applications in an explicitly defined timeframe. In addition, advance reservations can guarantee parallel applications distributed over multiple clusters a synchronous startup of processes on all clusters.

The scheduler should take into account data location and movement (data-aware scheduling) and the characteristics of network links. An example of data-aware scheduling, sketched below, is assigning jobs to resources closer to the data instead of moving large data over the network.

System scalability is very important due to the nature of grid systems. For example, a grid scheduling system should be capable of handling continuous and massive load.
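As a rough illustration of the data-aware policy just described, the following sketch ranks resources by estimated input-transfer time; the attribute names and bandwidth figures are assumptions made up for the example.

```python
# Hypothetical data-aware selection: prefer resources whose estimated
# input transfer time is smallest, then break ties on free CPUs.

def transfer_seconds(resource, data_site, data_gb):
    if resource["site"] == data_site:
        return 0.0                       # data is already local
    gbit = resource["link_gbit"]         # link bandwidth to the data
    return data_gb * 8 / gbit            # seconds, ignoring latency

def pick_resource(resources, data_site, data_gb):
    return min(resources, key=lambda r: (
        transfer_seconds(r, data_site, data_gb), -r["free_cpus"]))

resources = [
    {"site": "zagreb", "link_gbit": 1.0, "free_cpus": 8},
    {"site": "osijek", "link_gbit": 0.1, "free_cpus": 32},
]
print(pick_resource(resources, data_site="zagreb", data_gb=50))
```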
A detailed analysis of existing grid scheduling solutions according to the criteria described above is presented in [18]. The conclusion was that the systems GridWay ([16]) and Condor-G satisfy most of the features. In the remainder of this article, we concentrate solely on Condor-G.

3. Condor

Condor [1] is a distributed environment developed at the University of Wisconsin, designed for high throughput computing (HTC) and CPU harvesting. CPU harvesting is the process of exploiting non-dedicated computers (e.g. desktop computers) when they are not otherwise used.

Figure 2. Condor architecture

The Condor architecture is shown in Figure 2. The heart of the system is the Condor Matchmaker. Users describe their applications in the ClassAds language and submit them to the Matchmaker; ClassAds allows users to define custom attributes for resources and jobs. On the other side, resources publish their information to the Matchmaker. The Condor Matchmaker then matches job requests with available resources (a sketch of this matching appears at the end of this section).

Condor provides many of the advanced functionalities described in section 2: job arrays and workflows, checkpointing, job migration, rescheduling and fault recovery. It enables users to define resource requirements and rank resources, and it provides a mechanism for transferring files to/from remote machines. Although utilized for integration of resources, Condor is intended for smaller scale environments and should be seen more as a local job management system than as a grid middleware.

3.1. Condor-G

Condor-G [5] is the Condor extension that enables using Condor tools for submitting jobs to a grid. Currently supported grid middlewares are Globus Toolkit versions 2.4, 3 and 4, UNICORE and NorduGrid. Additionally, Condor-G can use a MyProxy server for storing user credentials in the case of long running jobs. However, Condor-G cannot schedule jobs; it simply executes a job on a remote resource by using the grid middleware.

The Condor-G Matchmaking mechanism extends Condor-G with the ability to use the matchmaking algorithm to schedule jobs to the grid. Condor-G Matchmaking inherits almost all advantages of the standard Condor system and Condor-G. It enables job scheduling based on complex job requirements. For example, a user can easily define specific requirements, such as the storage space or libraries required. In addition, a user can prefer a specific set of resources over others by giving them a higher rank. Furthermore, using multiple Condor-G Matchmakers is supported. This feature enables distribution of workload, which is important for achieving greater scalability of the system.

One disadvantage of Condor-G Matchmaking is the lack of support for parallel jobs. In the current version a user can submit parallel jobs, but the Condor-G Matchmaker will not be aware of the jobs' parallelism; it will simply assume that it is dealing with serial jobs. Furthermore, there is no implicit data-aware scheduling, although users can explicitly define that resources closer to the input data are preferred. Also, in the current version, it is not possible to define a custom scheduling algorithm. Another issue is that Condor-G Matchmaking lacks integration with grid information systems. In order to use Condor-G Matchmaking, one has to develop a custom system that provides information about resources to the Condor-G Matchmaker.
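The following sketch mimics in Python the matchmaking semantics described above: both the job's and the resource's requirements must hold against the other side, and among the surviving resources the job's rank picks the winner. The dictionary attributes stand in for ClassAd attributes and are illustrative only, not the ClassAds language itself.

```python
# Illustrative model of ClassAd-style matchmaking: requirements are
# checked both ways, the job's rank decides among the matches.

job = {
    "requirements": lambda r: r["disk_gb"] >= 10 and "mpich" in r["libs"],
    "rank":         lambda r: r["mips"],          # prefer faster machines
    "owner":        "emir",
}

resources = [
    {"name": "cluster-a", "disk_gb": 50, "libs": {"mpich"}, "mips": 900,
     "requirements": lambda j: True},             # accepts any job
    {"name": "cluster-b", "disk_gb": 5,  "libs": {"mpich"}, "mips": 1200,
     "requirements": lambda j: True},             # too little disk
    {"name": "cluster-c", "disk_gb": 80, "libs": set(), "mips": 1500,
     "requirements": lambda j: j["owner"] != "guest"},  # owner policy
]

matches = [r for r in resources
           if job["requirements"](r) and r["requirements"](job)]
best = max(matches, key=job["rank"], default=None)
print(best["name"] if best else "no match")      # -> cluster-a
```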
4. CRO-GRID grid environment

CRO-GRID [2] is a Croatian national grid initiative that consists of three projects and aims to build a computational grid for science and enterprise needs. The CRO-GRID Infrastructure project [3] is focused on building and maintaining the grid infrastructure. CRO-GRID Mediator is developing service-oriented grid middleware components based on the recent WSRF specification. Four scientific applications are being developed for this grid within the CRO-GRID Application project.

At this moment, CRO-GRID Infrastructure consists of five clusters placed in institutes in four cities. The local job management systems (JMS) used on the clusters are Sun Grid Engine and Torque with the Maui scheduler. The network used for the CRO-GRID infrastructure is shown in Figure 3. In cooperation with the Giga CARNet project ([7]), all links (except the link to the ETFOS cluster) have gigabit bandwidth.

Figure 3. CRO-GRID Infrastructure network

Currently, we are using three grid middlewares: Globus Toolkit versions 2.4 and 4.0 and UNICORE. The general idea was to deploy all three of them in order to gain practical experience and allow application developers to decide which one provides the best functionalities. Beside the basic grid middleware, we deployed numerous additional services, such as the GridSphere grid portal ([15]), MyProxy credential storage and the Nagios monitoring system ([19]).

At the same time, we want to provide users with a unique interface for submitting jobs to all three middlewares. By providing such an interface, we could transparently choose between grid middlewares. In addition, such an interface should enable resource owners to decide which grid middleware to use. It should also enable easy integration of resources from other grids.

5. CRO-GRID Condor-G Matchmaking approach

After a thorough investigation of existing grid scheduling systems ([18]) we decided to utilize the Condor-G Matchmaking mechanism. Our rationale is the following. First, it meets most of the requirements described in section 2; the most important requirement for us is support for multiple grid middlewares. Second, it creates the unique interface described in section 4.

The basic idea is to utilize the grid information systems (GIS) that are part of the deployed grid middlewares. Information about available computational resources is retrieved from the GIS and published to the Condor-G Matchmaker. Once the resource state changes, the Publisher updates the Condor-G Matchmaker. All this is completely transparent to the user: the user submits the job as it would be submitted to a standard Condor system. Once the match is made, Condor-G submits the job using the appropriate grid middleware. A condensed sketch of this pipeline is given below.

The proposed system is currently in the operational testing stage. Once the performance testing phase finishes, the system will be offered for use in the CRO-GRID grid infrastructure. Details about our components and the current state of development are described in the following subsections.
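A condensed sketch of that GIS-to-Matchmaker pipeline, under stated assumptions: the adapter classes, the fetch methods and the publish callback are hypothetical stand-ins for our components, and resource ads are plain dictionaries rather than real ClassAds.

```python
# Hypothetical sketch of the GIS Adapter / Publisher pipeline.
from abc import ABC, abstractmethod

class GISAdapter(ABC):
    """Interprets data from one particular type of GIS (cf. 5.3)."""
    @abstractmethod
    def fetch(self):
        """Return a list of resource ads as dictionaries."""

class MDS2Adapter(GISAdapter):
    def fetch(self):
        # a real adapter would query MDS2 and parse its LDAP records
        return [{"name": "cluster-a", "free_cpus": 16}]

class MDS4Adapter(GISAdapter):
    def fetch(self):
        # a real adapter would query MDS4 and parse its XML documents
        return [{"name": "cluster-b", "free_cpus": 4}]

class Publisher:
    """Pulls resource ads through the adapters and pushes them on to
    one or more Condor-G Matchmakers (here just callbacks)."""
    def __init__(self, adapters, matchmakers):
        self.adapters = adapters
        self.matchmakers = matchmakers

    def refresh(self):
        for adapter in self.adapters:
            for ad in adapter.fetch():
                for publish in self.matchmakers:
                    publish(ad)

Publisher([MDS2Adapter(), MDS4Adapter()], [print]).refresh()
```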
5.1. Proposed architecture

The architecture of the proposed scheduling system based on Condor-G Matchmaking is presented in Figures 4 and 5. The first figure presents the central part of the system and the second depicts the system from the user's perspective.

Figure 4. CRO-GRID Condor-G Matchmaking (central part)

The central side of our system consists of three components: GIS Adapters, the Publisher and Condor-G Matchmakers. The Publisher is responsible for providing information to the Condor-G Matchmakers, and GIS Adapters enable the Publisher to pull information from a particular type of GIS. We utilize Condor-G Matchmaking's support for multiple Condor-G Matchmakers in order to accomplish greater scalability. As shown in Figure 4, the Publisher is deployed independently of both the GIS and the Condor-G Matchmaker; the service itself does not require significant resources. Resource information can be published to multiple Condor-G Matchmakers, in which case the Publisher is usually deployed close to the GIS interface (e.g. on the same server as the central GIS service).

Figure 5. CRO-GRID Condor-G Matchmaking (user's perspective)

The user's perspective is shown in Figure 5. The user describes a job with ClassAds as in a standard Condor system and uses the Condor tools to submit it (1). The Condor-G Matchmaker assigns a set of grid resources to the job, and Condor-G utilizes the appropriate underlying grid middleware components to execute it (2). If needed, Condor-G enables refreshing of the user's certificate by using a MyProxy server (3). Our User interface component enables the user to retrieve additional, grid-specific information about the job.

5.2. Publisher

The Publisher component is responsible for publishing information gathered from the available GIS to the Condor-G Matchmaker. A special issue with the Publisher is timing: the period between two publishing events (the publish period) has to be tightly correlated with the Condor-G Matchmaking scheduling period and the GIS refresh period. The publish period has to be smaller than the scheduling period, otherwise the Condor-G Matchmaker will use stale information. On the other hand, the publish period has to be longer than the GIS refresh period, otherwise stale information will be published.
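That constraint, GIS refresh period < publish period < scheduling period, is easy to state in code; the helper below simply validates a candidate configuration (the midpoint default is our own illustrative choice, not a rule from Condor-G).

```python
# The Publisher timing constraint: gis < publish < scheduling (seconds).

def choose_publish_period(gis_period, scheduling_period):
    """Return a publish period strictly inside the valid window,
    or raise if no such window exists."""
    if gis_period >= scheduling_period:
        raise ValueError("GIS refreshes slower than the scheduler runs")
    return (gis_period + scheduling_period) / 2  # illustrative midpoint

print(choose_publish_period(gis_period=60, scheduling_period=300))  # 180.0
```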
5.3. GIS Adapter

The GIS Adapter is the component responsible for interpreting data from a particular type of GIS; the Publisher uses it for retrieving information from the GIS. The complexity of a GIS Adapter depends on the information system (e.g. the type of API it provides, the data format it uses, etc.). Currently we support Globus MDS2 and MDS4. For MDS2, we developed an adapter that parses data in LDAP form; in the case of MDS4, the data is stored in XML form. New GIS Adapters for additional information systems can easily be developed, and by providing new Adapters one can add support for other grid middlewares.

5.4. User interface

Since the Condor tools do not provide extensive information about Condor-G Matchmaking, we developed wrappers around the Condor tools. The client interface enables the user to see where the job is executed and which grid middleware is used, and to get the job's status.

5.5. Advantages

By relying on GIS, we eliminate the need for the following components: resource adapters, sensor management and information management. We do not have to develop adapters for gathering information from each resource type (e.g. various local job management systems: Sun Grid Engine, Condor, etc.). Next, we do not need to implement mechanisms and protocols for controlling the sensors that collect information on individual resources. Last but not least, grid information systems already contain subsystems for management of the gathered information. By eliminating these components we also avoid introducing additional load on the resources. As every grid middleware has to provide some sort of information system, we find that our approach will always be applicable.

5.6. Drawbacks

Existing GIS solutions such as Globus MDS2, MDS4 or R-GMA usually do not provide a rich set of information about particular resources; the information is often limited to the set that each type of resource can provide. However, unreliability and lack of information are basic characteristics of grid systems.

6. Related work

Because of their many advantages, Condor-G and the Condor-G Matchmaking mechanism are used by many grid projects. EGEE's gLite ([8]) and its predecessor, the European DataGrid (EDG) project ([4]), utilize Condor-G for submitting jobs in the Workload Management System (WMS). In this case, an entire complex system is built around Condor-G, so using the WMS requires installation of additional gLite components. The main difference is that our approach is much more lightweight and can be deployed using existing grid technologies: Condor, Globus Toolkit, UNICORE and MyProxy.

Condor-G Matchmaking is used by grid projects such as Grid X1 ([17]) and SAMGrid ([22]). The major difference between the approaches is that in their case information is gathered directly from the resources instead of from a GIS. In subsection 5.5 we described the negative aspects of such an approach.

7. Conclusion

Seamless job execution management is an extremely important capability of a computational grid. As we see it, today's systems are not mature enough for end users to use them heavily. Another issue of existing systems is limited grid middleware support: in order for a scheduling system to be widely adopted, it should support as many grid middlewares as possible.

In this article, we propose a scheduling system based on one of the most mature solutions, the Condor-G Matchmaking mechanism. The main features of our approach are full utilization of all Condor-G functionalities, the ability to extract information from diverse sources and the ability to transparently switch between grid middlewares.

8. Future work

We are planning to extend the Publisher with the ability to utilize multiple grid information systems to retrieve different types of information about the same resource. By combining information, the Publisher will be able to provide more accurate and detailed information about resources. We are also planning to extend our system in the following directions:
- integrating the proposed system into the GridSphere grid portal environment in order to provide a user-friendly interface
- extending the set of resource attributes published to the Condor-G Matchmaker
- extending the set of supported grid information systems
- investigating approaches for managing parallel jobs
- integrating the data scheduling service Stork with the existing solution.

9. Acknowledgements

All given results were achieved through work on the TEST program (technological research-development projects) with the support of the Ministry of Science, Education and Sports.

10. References

[1] The Condor Project Homepage.
[2] CRO-GRID Homepage.
[3] CRO-GRID Infrastructure Project.
[4] The DataGrid Project.
[5] Frey J, Tannenbaum T, Livny M, Foster I, Tuecke S. Condor-G: A Computation Management Agent for Multi-Institutional Grids. Cluster Computing 2002; 5(3).
[6] Ganglia Monitoring System.
[7] Giga CARNet.
[8] gLite, Lightweight Middleware for Grid Computing.
[9] Global Grid Forum.
[10] Globus Alliance.
[11] Globus Toolkit Information Services.
[12] Globus Toolkit Execution Management.
[13] Globus Toolkit Data Management.
[14] Grid Security Infrastructure.
[15] GridSphere Project.
[16] GridWay.
[17] Grid X1, A Canadian Computational Grid.
[18] Imamagic E, Radic B, Dobrenic D. CRO-GRID Grid Execution Management System. In: Proceedings of the 27th International Conference on Information Technology Interfaces; 2005 Jun 20-23; Cavtat, Croatia. Zagreb: SRCE University Computing Centre.
[19] Nagios.
[20] Network Weather Service.
[21] Novotny J, Tuecke S, Welch V. An Online Credential Repository for the Grid: MyProxy. In: IEEE International Symposium on High Performance Distributed Computing; 2001 Aug 7-9; San Francisco, USA.
[22] SAMGrid.
[23] Schopf J M. A General Architecture for Scheduling on the Grid.
[24] UNICORE.
[25] Virtual Organization Membership Service (VOMS).
