Grid-HPA: Predicting Resource Requirements of a Job in the Grid Computing Environment

M. Bohlouli, M. Analoui

For complete support of Quality of Service, it is better that the Grid environment itself predicts the resource requirements of a job using dedicated methods. Exact and correct prediction enables precise matching of required resources with available resources. After each job executes, the resources it used are saved in an active database named "History". First, attributes are extracted from the submitted job; then, according to a defined similarity algorithm, the most similar previously executed jobs are retrieved from "History", and resource requirements are predicted using statistics such as linear regression or the mean. The new idea in this research is the use of an active database and centralized history maintenance. Implementation and testing of the proposed architecture yields a prediction accuracy of 96.68% for CPU usage, 91.29% for memory usage, and 89.80% for bandwidth usage.

To support Quality of Service, we maintain a history of jobs that have executed along with their respective resource requirements (4). To estimate a given job's resource requirements, we identify similar applications in the history and then compute a statistical estimate (such as the mean or linear regression) of their runtimes; we use this as the predicted resource requirements. Prediction can be done in different ways, such as centralized and decentralized prediction, and in different locations, such as the system scheduler, the resource manager, or the gatekeeper of each site in the Grid (3). Most recent research predicts job resource requirements with decentralized methods, which have several disadvantages, the most important being:

- the need for a large number of information exchanges between sites and the broker during prediction;
- increased error;
- relatively longer prediction time;
- limited history at each site;
- storage of replica jobs at the sites and the inability to remove repeated jobs.

Saving the execution information of all sites in the scheduler increases the accuracy of finding a similar job, because the number of jobs available in History grows accordingly; prediction therefore comes closer to actual usage, and a large number of network transmissions is unnecessary. In the proposed method, the database is updated at a specific threshold, and exactly duplicate jobs are deleted from the history during the update. The proposed method uses an active database.
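The lookup-and-estimate step described above can be sketched roughly as follows. The attribute set (application name, user, input size), the similarity weights, and the neighbourhood size k are illustrative assumptions, since the paper's exact similarity algorithm is not reproduced here; the mean is used as the statistical estimator, though linear regression over the similar jobs would fit the same slot.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class JobRecord:
    # Attributes extracted from the job (illustrative choices).
    app_name: str
    user: str
    input_size_mb: float
    # Resources actually used, recorded in History after execution.
    cpu_seconds: float = 0.0
    memory_mb: float = 0.0
    bandwidth_kbps: float = 0.0

def similarity(a: JobRecord, b: JobRecord) -> float:
    """Toy similarity: exact match on categorical attributes plus
    closeness of input size. The weighting is an assumption."""
    score = 1.0 if a.app_name == b.app_name else 0.0
    score += 0.5 if a.user == b.user else 0.0
    denom = max(a.input_size_mb, b.input_size_mb, 1.0)
    score += 1.0 - abs(a.input_size_mb - b.input_size_mb) / denom
    return score

def predict(job: JobRecord, history: list[JobRecord], k: int = 3) -> dict:
    """Retrieve the k most similar executed jobs from History and
    predict resource usage as the mean of their recorded usage."""
    ranked = sorted(history, key=lambda h: similarity(job, h), reverse=True)
    top = ranked[:k]
    return {
        "cpu_seconds": mean(h.cpu_seconds for h in top),
        "memory_mb": mean(h.memory_mb for h in top),
        "bandwidth_kbps": mean(h.bandwidth_kbps for h in top),
    }

# Hypothetical History contents, for illustration only.
history = [
    JobRecord("render", "alice", 100.0, cpu_seconds=50.0,
              memory_mb=512.0, bandwidth_kbps=100.0),
    JobRecord("render", "alice", 120.0, cpu_seconds=60.0,
              memory_mb=512.0, bandwidth_kbps=100.0),
    JobRecord("compile", "bob", 10.0, cpu_seconds=5.0,
              memory_mb=128.0, bandwidth_kbps=10.0),
]
pred = predict(JobRecord("render", "alice", 110.0), history, k=2)
# The two "render" jobs are most similar, so the predicted CPU usage
# is their mean: pred["cpu_seconds"] == 55.0
```

In a centralized design as proposed here, `history` would live in the scheduler's active database rather than at each site, so the candidate pool for the similarity search covers jobs from all sites.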
Published in 2008.