GCF
Transcript

  • 1. GRID COMPUTING FRAMEWORK
    • Anil Harwani, Kalpesh Kagresha, Yash Londhe, Gaurav Menghani (Group No. 33)
    • Under the guidance of Ms. Sakshi Surve, Assistant Professor, Computer Engineering Department
  • 2. Grid Computing
    • Grid computing (the use of computational grids) combines computer resources from multiple administrative domains to work on a common task, usually a scientific, technical or business problem that requires a great number of processing cycles or the processing of large amounts of data.
    • The primary goal of a Grid is to form a loosely coupled system of computers (clients) over a LAN or the Internet, capable of performing tasks issued by the server. Clients can join or leave the grid at any point in time.
  • 3. Applications & Benefits
    • Computationally intensive tasks such as brute-forcing a symmetric encryption key space, simulating natural forces, predicting cyclones, etc.
    • If the problem to be solved is inherently parallel in nature, the scaling provided by Grids easily yields a speed-up roughly proportional to the number of clients participating in the Grid.
    • The performance of some large Grids is comparable to that of some of the fastest supercomputers, making Grids a feasible, cheaper substitute.
  • 4. Concerns
    • Setting up a grid is a complicated process, and hence it is often not considered a serious option.
    • Most grid computing middleware has a complicated structure and uses the resources of computers spread around the globe, depending on the voluntary commitment of resources by unknown machines. This is not always suitable.
    • Academic institutions lack access to easy-to-deploy grid computing middleware.
  • 5. Grid Computing Framework
    • Our project, Grid Computing Framework, addresses these concerns.
    • The Framework is a third-party application that helps developers rapidly deploy a flexible, reliable and efficient Grid.
  • 6. Goals
    • To create an open-source, Linux-based Grid Computing Framework that works on a moderately sized LAN and is:
      • Easy to Deploy
      • Easy to Use
      • Easy to Maintain
      • Efficient and reliable with good performance scaling
  • 7. Plan of Action
    • Accept the problem to be solved from the user, consisting of parallel code units called Tasks, a dependency matrix of tasks, etc.
    • Distribute these tasks, taking into consideration their inter-dependencies and using a load-balancing algorithm.
    • Solve tasks at the clients; record the output and errors (if any). Send the output along with the error and performance logs to the server.
    • Collect outputs and logs from the clients. Update client performance statistics.
    • Arrange the outputs as desired by the user and present them to the user.
  • 8. Submission of the Problem
    • The user submits the Problem at the server. A problem is described using:
      • Problem Solving Schema (PSS)
      • Task File(s)
      • Task File Input Set(s)
      • Result Compilation Program (RCP)
  • 9. Division of Tasks
    • The server apportions tasks to the clients using a load balancing algorithm. Each Task has the following:
      • Task File
      • Task Input
      • Task Priority
      • Task Timeout
  • 10. Execution at Client-side
    • The client-side module parses the tasks given to it, executes them and sends back a packet of information called the Task Execution Result, which comprises:
      • Task Output
      • Error Log
      • Statistics
  • 11. Result Compilation
    • Task Execution Results are received by the server and processed by the Result Compilation Program. Finally, the following are presented to the user:
      • Problem Output (Generated by RCP)
      • Task Execution Results
      • Error Logs
      • Statistics
  • 12. Client-side State Transition Diagram
  • 13. Server-side State Transition Diagram
  • 14. Platform
    • Open Source Technologies
    • What is Linux?
    • Why Linux?
    • Ubuntu - a Debian-based Linux distribution
  • 15. Programming on Linux
    • GNU project
      • A free software project started in 1983
      • Provides tools for development (GCC), the graphical desktop (GTK+), and applications and utilities (GNUzilla)
    • GCC
      • The GNU Compiler Collection: tools for compiling and linking code
      • Supports various programming languages such as C, C++, etc.
  • 16.
    • GTK+ 2.0
      • Toolkit for designing a GUI (Graphical User Interface)
        • Objects used:
          • GtkObject
            • GtkWidget
          • GtkContainer
            • GtkWindow
            • GtkFrame
            • GtkButton
            • GtkComboBox
            • GtkBox
            • GtkVBox
            • GtkHBox
            • GtkNotebook
            • GtkTextView
          • GtkTextBuffer
  • 17.
    • Compiling a Single Source File
        • Example: source file name: main.c
          • gcc -c main.c (compile main.c into an object file, main.o)
          • gcc -o main1 main.o (link the object file into an executable)
          • Both of the above steps can be done at once: gcc -o main1 main.c
          • ./main1 (run the executable)
  • 18. Threads
    • Process
      • A running instance of a program is called a process
    • Threads
      • Two or more concurrently running tasks spawned by a process
      • A process and its threads share the same address space
      • Context switching between threads is faster than between processes
  • 19.
    • Thread Creation
      • Each thread in a process is identified by a thread ID
        • Include the header file: pthread.h
        • Declare a thread variable: pthread_t thread1;
        • Create a thread:
        • int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);
        • The above function returns 0 on successful thread creation
        • Compile and link such a source file:
        • gcc -o threadoutput threadsource.c -lpthread
  • 20.
    • Joining Threads
      • For synchronized/sequential execution of threads
      • Threads are joined as:
      • int pthread_join(pthread_t thread, void **value_ptr);
      • pthread_join suspends execution of the calling thread until the target thread terminates
  • 21. Socket Programming
    • What is a socket?
    • Writing client and server programs
    • TCP or UDP servers
    • Sockets are implemented using the Berkeley Sockets API library
  • 22. Client Server Model
    • What is client server model?
    • Establishing a socket on the server side
    • Establishing a socket on the client side
  • 23. Client/Server relationship of sockets APIs for TCP
  • 24. Sockets API Functions
    • int socket(int domain, int type, int protocol);
    • int bind(int sockfd, const struct sockaddr *my_addr, socklen_t addrlen);
    • int listen(int sockfd, int backlog);
    • int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen);
    • int connect(int sockfd, const struct sockaddr *serv_addr, socklen_t addrlen);
  • 25. Sockets API Functions
    • ssize_t send(int sockfd, const void *buf, size_t len, int flags);
    • ssize_t recv(int sockfd, void *buf, size_t len, int flags);
    • ssize_t write(int fd, const void *buf, size_t count);
    • ssize_t read(int fd, void *buf, size_t count);
    • int close(int sockfd);
  • 26. Work Done So Far
    • A basic client-server module has been implemented.
    • The client connects with the server, and the server maintains the list of the clients.
    • The server keeps a record of each connected client's performance metric (to judge its computing power) and network metric (to detect whether the node is congested).
    • A GUI was designed for the server module.
  • 27. GUI
  • 28. Further Work
    • The server module needs to be extended to accept the problem and distribute the tasks, and retrieve and present the results.
    • The client module needs to be extended to accept tasks, process and send back the results.
    • An efficient load balancing algorithm needs to be designed.
    • Rigorous testing needs to be done, and any required optimizations need to be made.
  • 29. Thank You
