Grid computing is an interconnected network designed to solve complex computational problems by sharing the resources in the network and by utilizing the unused processing cycles of the connected systems. Previously, supercomputers, databases, repository servers, high-speed networks and clusters were united to solve such big problems. Grid computing also reduces cost, and processor capacity and other resources can be utilized more fully. It conceals this complexity and lets all users feel that they are working on a single machine.
Grid computing is like a power grid: electricity is drawn from the socket when we plug in, but we do not know where or how it is generated. In the same way, distributed resources are utilized by different systems in the network whenever required, without the users knowing where they come from.
In the last few years, grid computing has emerged as one of the most important topics in the computing field. The research area has been making particularly good progress, due to the increasing number of scientific applications that demand specific computational resources and a dynamic, heterogeneous infrastructure.
It is an autonomous network which manages, configures, heals, optimises and protects itself, and also allocates resources dynamically across the network. Different grid middleware mechanisms have been developed to make grid computing possible.
Because of its benefits it is becoming more popular than other distributed networks, and its application areas are increasing. However, certain challenges prevent it from becoming a universal computing platform.
Grid Computing Introduction:
Grid Computing is a type of distributed computing in which several systems are connected together to use distributed resources such as databases, storage servers, high-speed networks, supercomputers and clusters collectively for solving large-scale, computationally complex problems. It uses the unused processing cycles of all computers in a network to solve problems too complex for any stand-alone machine. Responsibilities such as controlling unused processing cycles and distributing information and tasks to a group of computers can be handled by one main computer in the network. The systems linked in a grid might be in the same room or scattered across the world, and they might run different operating systems on different hardware platforms.
Computing grids are conceptually similar to electrical grids. In an electrical grid, wall outlets allow us to link to an infrastructure of resources that generate, distribute, and bill for electricity. When we connect to the electrical grid, we do not need to know where the power plant is or how the current gets to us; electric power is distributed through different power stations, and the user simply uses it through a switch.
It is an emerging technology that provides seamless access to computing power and data storage capacity distributed over the globe. It was originally conceived by research scientists. Because it maximizes the efficiency of computing resources and can solve large problems with considerably less computing power, it is quickly becoming popular.
In the commercial world, the main aim of grid is to maximize the availability of an organization's computer resources by making them shareable across various applications (sometimes called virtualization) and provide computing on demand to third parties as a utility service.
- Grid computing provides coordinated resource sharing within an organization and among virtual organizations (VOs), and addresses issues of security, VO membership, sharing policy, payment for use of resources, etc. that occur in such cross-organizational settings.
- A grid is built using standard, open, general-purpose protocols and interfaces that address fundamental issues such as authentication, authorization, resource discovery and resource access. This implies that for any distributed system to be part of the grid, it must implement the inter-grid protocols and standards that are gradually being created by grid-standards communities such as the Open Grid Forum. This would encourage both open source and commercial distributed systems to interoperate effectively across organizations and thereby realize the grid vision.
- A grid delivers nontrivial qualities of service (QoS) relating to throughput, availability, response time, resource co-allocation, etc., such that the utility derived from the grid infrastructure is significantly greater than what would have been derived if resources were used in isolation.
Types of grids
We can use grid computing in different ways, based on the requirements of user applications. Grids are often categorized by the type of solutions they provide. The three main types of grids are described below. A combination of two or more of these types can also be used, depending on the applications being developed and the grid environment.
A computational grid is aimed at harnessing unused resources, particularly computing power. In this type of grid, most of the machines are high-performance servers.
A scavenging grid is commonly used with large numbers of desktop personal computers. Machines are scavenged for unused CPU time and other resources. Owners of the desktop machines generally retain control over when their resources are made available to the grid.
A data grid is responsible for housing and providing access to data across multiple organizations. Users are not concerned with where this data is located as long as they can access it. For example, suppose two organizations are doing life science research, each with unique data. A data grid allows them to share and maintain their data while handling security issues such as authentication and encryption.
Peer-to-peer computing is a common distributed computing technique that is often combined with grid computing. In fact, the two are sometimes confused, and peer-to-peer computing is mistakenly considered another form of grid computing.
Grid components: a high-level perspective
Depending on its use and design, some of the components below might or might not be required, and in some cases they might be combined to form a hybrid component. However, understanding the character of these components will help us understand what is needed when building grid-compatible applications.
Portal or user interface
A grid user should not need to know the complexity of the computing grid. The user interface can come in various forms and be application-oriented; let us think of it as a portal. Many users today have a good knowledge of Web portals, where the browser provides a single interface to access various information sources. A grid portal provides the interface for a user to access applications that will use the services and resources provided by the grid. From this perspective the grid looks like a virtual resource, just as the electricity user thinks of the wall outlet as a connection to a virtual generator.
User view of a grid
The present Globus Toolkit does not provide services or tools to create a portal, but this can be achieved with tools such as WebSphere Portal and WebSphere Application Server.
Security is the most important requirement of grid computing. At the base of any grid environment, mechanisms must be provided for security issues such as authentication, authorization, encryption and decryption. The Grid Security Infrastructure (GSI) component of the Globus Toolkit provides robust security mechanisms. The GSI is built on an OpenSSL implementation. It provides a single sign-on mechanism: once a user is authenticated, a proxy certificate is issued and used for subsequent actions across the grid. When designing a grid environment, we may use the GSI sign-on to give access to the portal, or we may implement our own security for the web portal. The web portal is then responsible for signing on to the grid, either using the user's authenticated credentials or using a set of credentials shared by all authenticated users of the web portal.
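The single sign-on flow described above can be sketched in a few lines. This is a toy illustration, not the real GSI API: the `ProxyCredential` class, the `sign_on` and `submit_to_node` functions, and the user store are all hypothetical stand-ins for certificate-based authentication.

```python
import time
import uuid

class ProxyCredential:
    """A short-lived credential derived from a user's long-term identity,
    reused for subsequent grid actions without re-authenticating."""
    def __init__(self, subject, lifetime_s=12 * 3600):
        self.subject = subject
        self.token = uuid.uuid4().hex          # stand-in for a signed certificate
        self.expires_at = time.time() + lifetime_s

    def is_valid(self):
        return time.time() < self.expires_at

def sign_on(username, password, user_db):
    """Authenticate once; on success issue a proxy credential."""
    if user_db.get(username) != password:
        raise PermissionError("authentication failed")
    return ProxyCredential(subject=username)

def submit_to_node(node, proxy):
    """Each grid node checks only the proxy -- no second sign-on is needed."""
    if not proxy.is_valid():
        raise PermissionError("proxy expired; sign on again")
    return f"job accepted on {node} for {proxy.subject}"

users = {"alice": "s3cret"}                # hypothetical user store
proxy = sign_on("alice", "s3cret", users)
print(submit_to_node("node-a", proxy))
print(submit_to_node("node-b", proxy))     # same proxy reused across nodes
```

The point of the sketch is the shape of the protocol: one authentication event, then an expiring delegated credential carried to every node.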
Security in a grid application
The user will launch an application once his credentials are authenticated. Based on the type of application, and possibly on other parameters given by the user, the next task is to identify suitable, available resources to use within the grid environment. This function can be performed by a broker. Although the Globus Toolkit does not provide a broker implementation, it does provide an LDAP-based information service, called the Grid Information Service (GIS), or more commonly the Monitoring and Discovery Service (MDS). The GIS provides information about the available resources within the grid and their condition. A broker service could be built on top of MDS.
The scheduler is responsible for allocating jobs to run on various resources. Once idle resources are identified, the next task is to schedule the individual jobs to run on them. If a set of stand-alone jobs is to be executed with no inter-process communication, a sophisticated scheduler may not be required. However, if we want to use a specific resource or ensure that various jobs within the application run simultaneously (for instance, because they need inter-process communication), then a job scheduler must be used to control the execution of the jobs. The Globus Toolkit does not include a scheduler, but various schedulers are available that have been tested with, and can be used in, a Globus grid environment. Schedulers can also operate at different levels in a grid environment. For example, a cluster could be represented as a single resource. The cluster may use its own scheduler to manage the nodes it contains, while a higher-level scheduler, also called a meta-scheduler, assigns work to the cluster as a whole; the cluster's scheduler then manages the actual scheduling of work on the cluster's individual nodes.
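The basic job of a scheduler, placing queued jobs on resources with spare capacity, can be illustrated with a minimal sketch. This is not any real grid scheduler; the `Resource` class and `schedule` function are hypothetical, and production schedulers track far more state (priorities, reservations, data locality).

```python
from collections import deque

class Resource:
    """A compute resource with a fixed number of job slots."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity      # jobs it can run at once
        self.running = []

    def is_idle(self):
        return len(self.running) < self.capacity

def schedule(jobs, resources):
    """Assign each queued job to the first resource with spare capacity.
    Jobs that cannot be placed stay queued for the next scheduling pass."""
    queue = deque(jobs)
    placed = {}
    while queue:
        target = next((r for r in resources if r.is_idle()), None)
        if target is None:
            break                     # no idle resource; stop this pass
        job = queue.popleft()
        target.running.append(job)
        placed[job] = target.name
    return placed, list(queue)

resources = [Resource("cluster-1", 2), Resource("pc-7", 1)]
placed, waiting = schedule(["j1", "j2", "j3", "j4"], resources)
print(placed)    # {'j1': 'cluster-1', 'j2': 'cluster-1', 'j3': 'pc-7'}
print(waiting)   # ['j4']
```

A meta-scheduler would treat `cluster-1` as one slot-limited resource exactly as here, while the cluster's own scheduler decides which internal node runs each accepted job.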
Data, including application modules, must be made available to the nodes where an application's jobs will execute, so there needs to be a secure and reliable method for transferring files and data to the various machines in the grid environment. The Globus Toolkit includes a data management component that offers such services. This component, known as Grid Access to Secondary Storage (GASS), includes facilities such as GridFTP. GridFTP is built on the standard FTP protocol, but adds functions and uses the GSI for user authentication and authorization. Therefore, once a user is authenticated and has received a proxy certificate, he can use GridFTP to transfer files without having to go through a sign-on process on every node involved. This facility also supports third-party file transfer, so that one node can initiate a file transfer between two other nodes.
Job and resource management
Up to now we have discussed the supporting functions provided by the grid environment; we now come to the core functionality that actually processes tasks in the grid. The Grid Resource Allocation Manager (GRAM) provides the services to launch a job on a particular grid resource, check its status, and retrieve the results when the job is complete.
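The submit/poll/retrieve life cycle just described can be mimicked with a toy resource manager. The `ResourceManager` class below is a hypothetical stand-in, not the GRAM API; it only illustrates the pattern of launching a job, checking its status, and retrieving the result.

```python
import threading
import time
import uuid

class ResourceManager:
    """Toy stand-in for a GRAM-like service: launch a job on a resource,
    poll its status, and fetch the result when it is done."""
    def __init__(self):
        self._jobs = {}

    def submit(self, func, *args):
        job_id = uuid.uuid4().hex[:8]
        self._jobs[job_id] = {"status": "ACTIVE", "result": None}

        def run():
            result = func(*args)                  # the "remote" computation
            self._jobs[job_id].update(status="DONE", result=result)

        threading.Thread(target=run).start()
        return job_id                             # handle for later queries

    def status(self, job_id):
        return self._jobs[job_id]["status"]

    def result(self, job_id):
        if self._jobs[job_id]["status"] != "DONE":
            raise RuntimeError("job still active")
        return self._jobs[job_id]["result"]

gram = ResourceManager()
jid = gram.submit(sum, range(10))
while gram.status(jid) != "DONE":     # client polls the job's status
    time.sleep(0.01)
print(gram.result(jid))               # 45
```

The real GRAM works against remote machines and batch systems, but the client-visible contract is the same three operations: submit, query status, collect output.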
Other services:
There are other services that may need to be included in the grid environment and considered when designing and implementing the application. For example, inter-process communication and accounting or chargeback services are two services that are commonly required.
The goal of a computing grid, like that of the electrical grid, is to provide users with access to the resources they need, when they need them. The two distinct but related goals addressed by grids are providing remote access to IT assets and aggregating processing power. The most obvious resource included in a grid is a processor, but sensors, data-storage systems, applications, and other resources are also encompassed by grids.
The main design goal of grid computing is to solve big problems while retaining the flexibility to work on multiple smaller problems. Grid computing thus facilitates a multi-user environment. Its secondary goal is to make better use of available CPU cycles and to serve the fluctuating demands of large computational tasks. This approach relies on secure authentication techniques to allow remote users to control computing resources across the grid.
Grid computing involves sharing heterogeneous resources (based on different platforms, hardware/software configurations, and computer languages), located in multiple places belonging to different administrative domains, over a network using open standards. In short, it involves virtualizing computing resources. Grid computing is often confused with cluster computing. The main difference is that a cluster is a single set of nodes residing in one particular location, while a grid is a combination of various clusters and other types of resources (e.g. networks, storage facilities).
The grid vision of providing users continuous access to computing resources, similar to public utility services like electricity and telephone, can be traced back to the Multics (Multiplexed Information and Computing Service) system. The term grid computing was itself preceded by the term metacomputing, which also advocated transparent user access to distributed and heterogeneous computing resources by linking such resources through software and an underlying network.
Multiplexed Information and Computing Service (Multics) was a comprehensive, general-purpose programming system developed as a research project. The first Multics system was implemented on the GE 645 computer. The main design goal was to build a computing system capable of meeting almost all of the present and near-future needs of a large computer environment. Such a system must run continuously and reliably, 7 days a week, 24 hours a day, like a telephone or power system, and must be able to meet wide service demands: from interactive man-machine communication to the orderly processing of remote-user jobs; from the use of the system with dedicated languages and subsystems to the programming of the system itself; and from centralized card, tape, and printer facilities to remotely located terminals. Such information processing and communication systems were believed to be essential for the future development of computer use in business, industry, government and scientific research, as well as for stimulating applications that would otherwise never be attempted.
There were nine major goals for Multics:
- Convenient remote terminal use.
- Continuous operation, analogous to telephone and power services.
- A wide range of system configurations, changeable without reorganization of the system or of user programs.
- A highly reliable internal file system.
- Support for selective information sharing.
- Hierarchical structures of information for system administration and decentralization of user activities.
- Support for a wide range of applications.
- Support for multiple programming environments & human interfaces.
- The ability to evolve the system with changes in technology and in user requirements.
The aims of grid computing can be traced back to these goals, and grid computing can largely achieve them.
A grid middleware is a distributed computing software that integrates network-connected computing resources like computer clusters, data servers, standalone PCs, sensor networks, etc., that may span multiple administrative domains, with the objective of making the combined resource pool available to user applications for number crunching, remote data access, remote application access, among others. A grid middleware is what makes grid computing possible. With multiple Virtual Organizations involved in joint research collaborations, issues pertaining to security (authentication and authorization), resource management, job monitoring, secure file transfers, etc. are of paramount importance. Thus, in addition to making available a seamless distributed computing infrastructure to cater to the computing needs of the grid user, the grid middleware usually provides mechanisms for security, job submission, job monitoring, resource management and file transfers, among others.
The Globus middleware is an open architecture and an open source set of services and software libraries, developed in consultation with the user community, which supports grids and grid applications. It implements a set of components (based on standard grid protocols and interfaces) that provide basic grid services like authentication, resource discovery, resource access, resource management, data management, communication, etc., and a set of software libraries, both of which facilitate the construction of more sophisticated grid middleware. As such, Globus is regarded more as a toolkit for the development of other grid middleware rather than a ready-to-use grid solution. Globus is thus referred to as Globus Toolkit (GT) in different versions of the middleware, viz., GT-2, GT-4, etc.
A few of the grid protocols implemented by Globus, and their purposes, are:
- The Grid Security Infrastructure (GSI) protocol supports single sign-on user authentication.
- The Grid Resource Allocation and Management (GRAM) protocol is for allocation and management of user jobs on remote resources.
- The Monitoring and Discovery Service (MDS-2) provides a framework for discovering and accessing information such as server configuration and network status.
- The GridFTP protocol is an extension of the popular File Transfer Protocol (FTP) protocol and supports partial and parallel file access.
Some of these protocols like GridFTP and GSI were first defined and implemented by Globus version 2 (GT-2), before they were subsequently reviewed within the standards bodies and recognised as standards. This is hardly surprising because from 1997 onwards GT-2 was generally considered the de facto standard for grid computing because of its focus on reusability and interoperability with other grid systems. A community-wide grid protocol standardization effort started in around 2001 with the emergence of the Global Grid Forum, now called the Open Grid Forum. This ultimately produced the Open Grid Services Architecture (OGSA) - a service oriented framework, defined by a set of community-developed standards, for the development of grid middleware. OGSA builds on concepts and technologies from both the grid and web services communities with the objective of providing an extensible set of grid services that VOs can aggregate in various ways. It is widely believed that OGSA-based grid middleware will encourage the adoption of grid computing technology in industry and will facilitate the development of grid-based commercial applications. Globus toolkit versions 3 and 4 (GT-3, GT-4) are both based on OGSA.
GT-4 is supported on UNIX, Linux and Windows operating systems. However, not all components can be installed on Windows. For example, neither the pre-web-services implementation of the GT-4 resource management component (GRAM) nor the WSRF implementation of GRAM can be installed on a Windows system. Furthermore, the non-web-services GT-4 implementations for security (MyProxy), file transfer (GridFTP), replication, and information service (MDS-2) can only be run on UNIX and Linux platforms.
Condor is a job scheduling system designed to maximize the utilization of collections of networked PCs, referred to as a Condor pool, by identifying idle resources and scheduling background user jobs on them. Although Condor was originally designed to harness unutilized CPU cycles from non-dedicated PCs within an organization, the same design can be used to manage dedicated compute clusters. It is possible to operate Condor across organizational boundaries by using the Condor-G extension to Globus.
Condor is supported on UNIX, Linux and Windows platforms, but, as with Globus, not all components of Condor can be installed on a Windows machine. For example, on Windows Condor does not support several of its execution environments, such as the standard universe, the PVM universe, the GT-4 grid type and the LSF grid type.
European Data Grid (EDG) middleware:
The goal of the EU-funded European Data Grid (EDG) project was to develop the technological infrastructure to facilitate e-Science collaborations in Europe. The grid computing middleware developed during this project is commonly referred to as the EDG middleware. The EDG middleware itself is based on GT-2, but in addition to Globus-supported standard grid features (grid security infrastructure, grid information service, resource discovery and monitoring, job submission and management, etc.), it extends Globus with higher-level middleware services such as resource brokering and replication management. These services are implemented by the Resource Broker (RB) and the Replication Management Tools (RMT) respectively, both of which are integrated with the EDG middleware. Through the RB component, the EDG middleware implements the "push" architecture, wherein the RB periodically polls the computing resources to determine their load levels and decide whether new jobs should be assigned to them. After the completion of the EDG in 2004, some of the EDG middleware components, notably RB and RMT, were further developed as part of other EU-funded grid projects such as Enabling Grids for E-sciencE (EGEE). The EDG middleware has only been tested on RedHat Linux 7.3.
There are two basic models (also described as approaches, architectures or mechanisms) for scheduling tasks (jobs) on resources: "pull" and "push". Tasks are scheduled by a middleware component that goes by various names, for example job scheduler, workload management system, task dispatcher or master process. For the purpose of this research it is sufficient to view the task scheduling component as an integrated part of the grid middleware. In a "pull" model, the computing resources request jobs from a central resource that maintains the job queue; in a "push" model, a central resource schedules jobs on the available resources and tries to centrally optimize the allocation of jobs among them. In the decentralized "pull" model, system state information is maintained by each resource, whereas in the centralized "push" model, the state of all resources is maintained at a central resource.
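The difference between the two models can be sketched as follows. Both functions are hypothetical illustrations assuming three equally fast resources: the "push" scheduler uses a central view of every resource's load, while the "pull" scheduler simply hands out queued jobs as resources ask for them, with no global state.

```python
from collections import deque
import itertools

# "Push": a central scheduler knows every resource's load and assigns
# each job to the currently least-loaded resource.
def push_schedule(jobs, resources):
    load = {r: 0 for r in resources}       # central view of all loads
    assignment = {}
    for job in jobs:
        target = min(load, key=load.get)
        load[target] += 1
        assignment[job] = target
    return assignment

# "Pull": each resource asks the central queue for work when it is free;
# the scheduler keeps no picture of resource state, only the job queue.
def pull_schedule(job_queue, resources):
    assignment = {}
    for r in itertools.cycle(resources):   # resources take turns requesting
        if not job_queue:
            break
        assignment[job_queue.popleft()] = r
    return assignment

jobs = [f"job-{i}" for i in range(6)]
print(push_schedule(jobs, ["A", "B", "C"]))
print(pull_schedule(deque(jobs), ["A", "B", "C"]))
```

With identical resources the two models produce the same spread of work; they diverge when resources differ in speed or availability, which is why the "pull" model copes better with resources whose state the centre cannot track.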
Virtual Data Toolkit (VDT) middleware:
Virtual Data Toolkit (VDT) is a grid middleware primarily meant for the US Open Science Grid. It is a combined package of various grid middleware components, including Globus and Condor, and other utilities. The goal of VDT is to provide users with a middleware that is thoroughly tested, simple to install and maintain, and easy to use. The latest version of VDT supports only Linux-based platforms such as Debian Linux, Fedora Core Linux, RedHat Enterprise Linux, Rocks Linux, Scientific Linux and SUSE Linux.
The development of gLite middleware is being supported by the European Commission funded EGEE project. gLite is primarily being developed for the LHC Computation Grid (LCG) and the EGEE grids. Twelve academic and industrial partners are involved in the development of gLite. These include the European Organization for Nuclear Research (CERN), the National Institute of Nuclear Physics (INFN, Italy), National Centre for Scientific Research (CNRS, France), Council for the Central Laboratory of the Research Councils (CCLRC, UK), and National Institute for Nuclear Physics and High Energy Physics (NIKHEF, The Netherlands).
The gLite-3 middleware uses components developed in several other grid projects, including Globus, Condor and EDG. gLite-3 is based on the web services architecture, and its underlying computing resources are referred to as Computing Elements, or gLite CEs for short. On one hand, the gLite-3 middleware supports the "pull" architecture, which empowers the gLite CEs to decide the best time to start a grid job; on the other hand, an RB can be used to "push" jobs, just as in the EDG middleware. Another middleware that uses the "pull" architecture for its RB is AliEn (a middleware primarily developed for the LHC ALICE experiment). Because of its "pull" implementation, the AliEn RB does not need to know the status of all resources in the system. The gLite-3 middleware is presently supported only on the Scientific Linux operating system.
LCG-2 is the middleware for the LCG and the EGEE grids. It is a precursor to the gLite middleware, and is being gradually replaced by gLite on both these production grids. The operating systems supported by LCG are Red Hat 7.3 and Scientific Linux 3.
The Open Middleware Infrastructure Institute (OMII), based at the University of Southampton and established as part of the five-year (starting in late 2001), 250 million pound UK e-Science core program, is mainly responsible for: ensuring "production-level" quality standards for grid middleware components delivered by various UK e-Science projects; ensuring that the components are well documented and maintained in a middleware repository; undertaking integration testing of these UK-developed middleware components for interoperability with components produced outside the UK; and testing the components to ensure interoperability with open grid and web services standards. In order to achieve "production-level" quality of middleware components, OMII works jointly with the e-Science project teams in all phases of software development and/or employs its own pool of software engineers to work on the software artifacts after they have been delivered by the grid projects. Some of these components are collectively released as a combined, quality-assured, easy-to-install OMII software release. This software is also referred to as the OMII middleware, and it presently consists of two specific releases, viz. the OMII server release and the OMII client release. The OMII grid middleware is open source and can be downloaded from the OMII website.
Conceptual view of users and service providers (OMII, 2006b)
The client part of the OMII middleware can be installed on various distributions of Linux, on Windows, and on Apple Macintosh operating systems. However, the server part can only be installed on Linux-flavour operating systems and on Apple Macintosh. Both the client and the server parts require Java to be pre-installed on the target machines.
The most notable characteristics that make the grid a more usable system than all its predecessors are listed below:
Heterogeneity: Grids are inherently heterogeneous, incorporating varying software and hardware resources spread across different administrative domains.
A wide spectrum of resources: The grid is all-encompassing in terms of the resources that constitute it. Broadly speaking, grid resources include computational resources, data storage, communication links, software, licenses, special equipment, supercomputers, and clusters. Grids promise to provide consistent, dependable, transparent access to these resources regardless of their source.
User-centric: Grids lay the entire focus on the end user. This means that the specific machines used to execute an application are chosen from the user's point of view, maximizing the performance of that application regardless of the effect on the system as a whole.
- Distributed supercomputing applications couple multiple computational resources: supercomputers and/or workstations.
- Distributed supercomputing applications include SFExpress (large-scale modelling of battle entities with complex interactive behaviour for distributed interactive simulation), Climate Modelling (modelling of climate behaviour using complex models and long time-scales)
- The grid is used to schedule large numbers of independent or loosely coupled tasks, putting unused cycles to work.
- High-throughput applications that use grids include RSA key cracking and seti@home (detection of extraterrestrial communication).
- Data-intensive applications focus on synthesizing new information from large amounts of physically distributed data.
- Examples include NILE (a distributed system for high-energy physics experiments using data from CLEO), SAR/SRB applications, and digital library applications.
APPLICATION AREAS FOR GRID COMPUTING:
- Large-scale financial processing, e.g. credit card processing, portfolio and risk analysis, financial forecasting
- Data mining and data warehousing
- Large-scale transaction processing
- Back office database and file processing
- Pharmaceutical and biological, e.g. drug discovery, protein folding
- Complex scientific simulations, e.g. modelling environmental effects
- Seismic data interpretation
- Remote monitoring and data collection, e.g. medical, security, industrial devices
- Automotive and aerospace, for collaborative design and data-intensive testing.
Many grids are appearing in the sciences, in fields such as chemistry, physics, and genetics, and cryptologists and mathematicians have also begun working with grid computing. Grid technology has the potential to significantly impact other areas of study with heavy computational requirements, such as urban planning. Another important area for the technology is animation, which requires massive amounts of computational power and is a common tool in a growing number of disciplines.
The following are the benefits of grid computing on different application areas:
- Research & Development: Accelerate and enhance the R&D process with research-intensive applications; reduce R&D costs and increase efficiency of co-development; improve hit-rates through better simulation of real-world characteristics.
- Engineering & Design: Accelerate and improve product design and development; reduce product design costs and increase efficiency of co-development; reduce time to market by executing tasks faster and more accurately.
- Business Analytics: Improve understanding of risk exposure; run price optimization models more frequently and run more complex problems faster; enhance decision making due to better transparency.
- Enterprise Optimization: Improve transparency of IT resource management across an enterprise; enhance exploitation of existing IT resources; rapidly and efficiently scale to meet volatile workload environments; reduce downtime.
- Government Development: Stimulate economic development; improve collaboration across government agencies; enable faster and more accurate decision making.
- Financial services: Reduce statistical margin of error; make faster trade decisions and reduce portfolio risk with increased number of scenarios.
- Automotive: Accelerate time to market of new auto and truck designs; enable cross-platform design and engineering collaboration; shorten design cycles.
- Aerospace: Enhance data sharing in aerospace engineering and design; leverage distributed workflow within and among departments and companies while optimizing server infrastructure.
- Life sciences: Accelerate discovery process in genomics, proteomics and molecular biology; execute rapid sequence comparison algorithms; enable innovative information analysis.
- Government: Stimulate economic development; integrate data from disparate military and civilian agencies; make faster and more accurate decisions.
- Higher education: Seamlessly share raw data; gain secure access to shared resources; simplify data access and integration.
- Electronics: Speed collaborative processes and reduce time to market; augment computing capabilities to decrease cycle time; optimize computing capacity and existing infrastructure investments.
- Agricultural chemicals: Achieve quick turnaround for large volumes of calculations and simulations; enable lead identification through innovative information analysis; increase number of calculations processed.
- Petroleum: Reduce imaging time and improve reservoir management results; seamlessly manage distributed systems and data; consolidate applications, networks and data.
Grid computing offers some other advantages as well.
Grids use a layer of middleware to communicate with and manipulate heterogeneous hardware and data sets. In some fields (astronomy, for example), hardware cannot reasonably be moved and is prohibitively expensive to replicate on other sites. In other instances, databases vital to research projects cannot be duplicated and transferred to other sites. Grids overcome these logistical obstacles. A grid might coordinate scientific instruments in one country with a database in another and processors in a third. From a user's perspective, these resources function as a single system; differences in platform and location become invisible.
Grids make research projects possible that were formerly impractical or infeasible due to the physical location of vital resources. Using a grid, researchers in Great Britain, for example, can conduct research that relies on databases across Europe, instrumentation in Japan, and computational power in the United States. Although processor speeds and capacities continue to increase, resource-intensive applications are proliferating as well. With grids, programs previously hindered by constraints on computing power become possible.
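The middleware idea described above can be sketched in a few lines. In this illustrative example, a broker registers geographically scattered resources and routes each task to one that provides the needed capability, so the user never deals with platform or location directly. All class, site, and capability names here are hypothetical, not part of any real grid middleware.

```python
class GridBroker:
    """Toy broker hiding where each resource actually lives."""

    def __init__(self):
        self.resources = {}  # capability -> list of (site, handler)

    def register(self, site, capability, handler):
        """A site advertises a capability (e.g. 'database', 'compute')."""
        self.resources.setdefault(capability, []).append((site, handler))

    def submit(self, capability, payload):
        """Run a task on any site offering the capability; the caller
        sees a single result, not the site that produced it."""
        if capability not in self.resources:
            raise LookupError(f"no resource offers {capability!r}")
        site, handler = self.resources[capability][0]  # naive choice
        return handler(payload)

broker = GridBroker()
broker.register("tokyo",  "instrument", lambda p: f"readings for {p}")
broker.register("geneva", "database",   lambda p: f"records matching {p}")
broker.register("urbana", "compute",    lambda p: sum(p))

print(broker.submit("database", "quasar"))  # routed transparently
print(broker.submit("compute", [1, 2, 3]))
```

The caller names only the capability it needs; the broker's registry is what makes differences in platform and location invisible.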
Challenges of Grid Computing:
A grid can run suitable applications many times faster than a conventional computing network without any additional hardware or software. Even so, not all applications are suited to the grid. Some cannot be parallelized, and others may require a large amount of time and work to modify for faster throughput, which can affect an organization's performance, reliability and security.
Some other limitations of grid computing are:
- Grid environments are often less dynamic, scalable and fault tolerant than desired.
- Bandwidth, processing power and memory are limited; cellular wireless networks, for instance, are more constrained than traditional wired networks.
- Problems such as security, standardization and the need for new protocols must be solved to bring many operating systems, vendor platforms and applications together.
These challenges, particularly security, currently restrict grid computing from becoming a universal computing platform.
Grid computing is an emerging technology and is being implemented in many projects, such as Oracle 10g. SETI@home (Search for Extraterrestrial Intelligence) is a well-known grid computing project in which PC users worldwide donate unused processor cycles to help search for signs of extraterrestrial life by analysing signals coming from outer space. The project relies on individual volunteers allowing it to utilize the unused processing power of their computers, saving the project both money and resources. IBM has introduced the concept of on-demand business through grid computing and has contributed to the Globus Toolkit for enabling the grid, on which several resource brokers work.
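The SETI@home-style volunteer model above can be illustrated with a minimal sketch: a coordinator splits a large dataset into independent work units, idle volunteer machines each process one unit, and the coordinator aggregates the returned results. The function names and the toy "analysis" here are hypothetical stand-ins, not the project's actual algorithms.

```python
from collections import deque

def split_into_work_units(samples, unit_size):
    """Divide the raw signal data into independent chunks."""
    return deque(samples[i:i + unit_size]
                 for i in range(0, len(samples), unit_size))

def analyse(unit):
    """Stand-in for the real signal analysis a volunteer's idle
    cycles would perform (here: count values above a threshold)."""
    return sum(1 for s in unit if s > 0.9)

def run_volunteers(work_units, n_volunteers):
    """Volunteers repeatedly pull the next unit until none remain;
    the coordinator sums their partial results."""
    results = []
    while work_units:
        for _ in range(min(n_volunteers, len(work_units))):
            results.append(analyse(work_units.popleft()))
    return sum(results)

signal = [0.1, 0.95, 0.3, 0.99, 0.5, 0.92]
print(run_volunteers(split_into_work_units(signal, 2), n_volunteers=3))
# → 3 candidate spikes found across all donated work units
```

Because each unit is independent, any number of volunteers can join or leave between units without invalidating the overall result, which is what makes donated idle cycles usable at all.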
Grid computing introduces a new concept of IT infrastructure because it supports distributed computing over a network of heterogeneous resources and is enabled by open standards. Different mechanisms have been developed for solving different problems, and technologies such as Java, XML and CORBA can be used to implement grid computing. Depending on the application, security requirements, and priority of the task, the grid can be implemented either on an intranet or on the internet.
The application areas of grid computing are also increasing thanks to its many advantages. As use of grid technology grows, efficient resource management becomes more complex. More efficient scheduling strategies are needed to make effective use of grid-enabled resources. Resource management systems and schedulers need to be adaptive, so that they can handle dynamic changes in resources and user requirements, while providing scalable, controllable, measurable, and easily enforceable policies for managing the resources.
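The adaptive scheduling requirement discussed above can be sketched with a simple greedy policy: each job goes to the currently least-loaded resource, and resources may join or leave between jobs, as they do in a real grid. This is an illustrative toy with hypothetical names, not a production scheduler.

```python
class Scheduler:
    """Greedy least-loaded scheduler over a changing resource pool."""

    def __init__(self):
        self.load = {}  # resource name -> total queued work

    def add_resource(self, name):
        self.load[name] = 0.0  # resource joins the grid idle

    def remove_resource(self, name):
        self.load.pop(name, None)  # resource leaves the grid

    def schedule(self, job_cost):
        """Assign a job to the least-loaded resource and return it."""
        if not self.load:
            raise RuntimeError("no resources available")
        target = min(self.load, key=self.load.get)
        self.load[target] += job_cost
        return target

sched = Scheduler()
sched.add_resource("cluster-a")
sched.add_resource("cluster-b")
print(sched.schedule(5.0))   # goes to one idle cluster
print(sched.schedule(2.0))   # goes to the other, now least loaded
sched.remove_resource("cluster-a")
print(sched.schedule(1.0))   # only cluster-b remains
```

Real grid schedulers must also weigh priorities, data locality, and policy constraints, but the core adaptive behaviour, reacting to load and to resources appearing or disappearing, is the same loop shown here.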
Grid computing still faces issues regarding security, standardization and protocols. Non-profit groups such as the Global Grid Forum, the Globus project and the New Productivity Initiative are working on some of these issues. If they can be solved, grid computing may yet become a universal computing platform.