Definition and Architecture of Google File System

Definition of Google File System

The Google File System (GFS) was developed in the late 1990s. It uses thousands of storage systems built from inexpensive commodity components to provide petabytes of storage to a large user community with diverse needs.

It is not surprising that the main concern of the GFS designers was to ensure the reliability of a system exposed to hardware failures, system software errors, application errors, and last but not least, human errors.

GFS files are collections of fixed-size segments called chunks; at the time of file creation, each chunk is assigned a unique chunk handle. A chunk consists of 64 KB blocks and each block has a 32-bit checksum.
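To make the block-and-checksum layout concrete, here is a minimal sketch of per-block verification. The use of CRC32 and the function names are assumptions for illustration; GFS only specifies that each 64 KB block carries a 32-bit checksum.

```python
import zlib

BLOCK_SIZE = 64 * 1024  # each chunk is divided into 64 KB blocks


def block_checksums(chunk_data: bytes) -> list:
    # Compute one 32-bit checksum per 64 KB block
    # (CRC32 is an assumption; any 32-bit checksum fits the description).
    return [zlib.crc32(chunk_data[i:i + BLOCK_SIZE])
            for i in range(0, len(chunk_data), BLOCK_SIZE)]


def verify_chunk(chunk_data: bytes, stored: list) -> bool:
    # Data is served only if every block's checksum matches the stored value.
    return block_checksums(chunk_data) == stored
```

Keeping checksums at block rather than chunk granularity means a read touching one block verifies only that block, and a corrupted block can be identified without rereading the whole 64 MB chunk.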

Chunks are stored on Linux file systems and are replicated on multiple sites; a user may change the number of replicas from the default value of three to any desired value.

The chunk size is 64 MB; this choice is motivated by the desire to optimize performance for large files and to reduce the amount of metadata maintained by the system.
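Because the chunk size is fixed, a client can translate any byte offset into a chunk index with simple arithmetic; a sketch (the function name is an assumption):

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, the fixed GFS chunk size


def locate(offset: int) -> tuple:
    # Translate a byte offset in the file into
    # (chunk index, offset within that chunk).
    return offset // CHUNK_SIZE, offset % CHUNK_SIZE
```

A 1 TB file thus needs only about 16,000 chunk entries of metadata, versus millions for a small chunk size, which is what keeps the master's metadata compact.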

The Architecture of a GFS Cluster


A master controls a large number of chunk servers; it maintains metadata such as filenames, access control information, the location of all the replicas for every chunk of each file, and the state of individual chunk servers.

Some of the metadata is stored in persistent storage (e.g., the operation log records the file namespace as well as the file-to-chunk mapping).

The locations of the chunks are kept only in the master's in-memory control structures and are updated at system startup or when a new chunk server joins the cluster.

This strategy allows the master to have up-to-date information about the location of the chunks.

System reliability is a major concern, and the operation log maintains a historical record of metadata changes, enabling the master to recover in case of a failure.

Metadata changes are atomic and are not made visible to the clients until they have been recorded on multiple replicas on persistent storage.

To recover from a failure, the master replays the operation log. To minimize the recovery time, the master periodically checkpoints its state and at recovery time replays only the log records after the last checkpoint.
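The checkpoint-plus-replay scheme described above can be sketched as follows. This is a toy model, not the real master implementation: the class and method names are assumptions, and a real operation log would be persisted on multiple replicas rather than held in a Python list.

```python
class Master:
    """Toy model of checkpointed metadata recovery (names are assumptions)."""

    def __init__(self):
        self.metadata = {}         # file namespace and file-to-chunk mapping
        self.log = []              # operation log (persisted in a real system)
        self.checkpoint = ({}, 0)  # (metadata snapshot, log position)

    def apply(self, op: str, path: str, value) -> None:
        # Every metadata change is logged before it takes effect.
        self.log.append((op, path, value))
        self.metadata[path] = value

    def take_checkpoint(self) -> None:
        # Periodically snapshot the state and remember the log position.
        self.checkpoint = (dict(self.metadata), len(self.log))

    def recover(self) -> None:
        # Restore the last checkpoint, then replay only the log records
        # written after it -- this is what bounds the recovery time.
        snapshot, pos = self.checkpoint
        self.metadata = dict(snapshot)
        for op, path, value in self.log[pos:]:
            self.metadata[path] = value
```

The design trade-off is that replay time grows with the log tail, so checkpoint frequency bounds worst-case recovery time.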

Each chunk server is a commodity Linux system; it receives instructions from the master and responds with status information.

To access a file, an application sends the master the filename and the chunk index, derived from the byte offset of the read or write operation; the master responds with the chunk handle and the locations of the replicas.

Then the application communicates directly with the chunk server to carry out the desired file operation.
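The two-step interaction above — metadata from the master, data directly from a chunk server — can be sketched with stub classes. All class and method names here are assumptions; the point is only that the master handles step 1 and the bulk data in step 2 never flows through it.

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB chunks


class StubMaster:
    """Metadata only: (filename, chunk index) -> (chunk handle, replica list)."""

    def __init__(self, table: dict):
        self.table = table

    def lookup(self, filename: str, index: int):
        return self.table[(filename, index)]


class StubChunkServer:
    """Holds chunk data keyed by chunk handle."""

    def __init__(self, chunks: dict):
        self.chunks = chunks

    def read(self, handle: str, offset: int, length: int) -> bytes:
        return self.chunks[handle][offset:offset + length]


def gfs_read(master, servers, filename, offset, length):
    index, within = divmod(offset, CHUNK_SIZE)
    # Step 1: ask the master for the chunk handle and replica locations.
    handle, locations = master.lookup(filename, index)
    # Step 2: fetch the data directly from one replica (first, for simplicity).
    return servers[locations[0]].read(handle, within, length)
```

Keeping the master out of the data path is what lets a single master serve a large cluster without becoming a throughput bottleneck.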

The consistency model is designed to be both effective and scalable. Namespace operations, such as file creation, are atomic and are handled exclusively by the master.

To ensure scalability, the master has minimal involvement in file mutations and operations such as write or append that occur frequently.

In such cases the master grants a lease for a particular chunk to one of the chunk servers called the primary; then, the primary creates a serial order for the updates of that chunk.
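The primary's role of serializing concurrent updates can be sketched as follows. This is a simplified model under stated assumptions (class and method names are invented); it shows only the serial-ordering idea, not lease expiry or replica communication.

```python
class Primary:
    """The lease holder assigns a serial number to every mutation;
    all replicas then apply mutations in that serial order."""

    def __init__(self):
        self.next_serial = 0
        self.applied = []

    def order(self, mutation) -> tuple:
        # Stamp each incoming mutation with the next serial number.
        serial = self.next_serial
        self.next_serial += 1
        return serial, mutation

    def apply_in_order(self, ordered: list) -> None:
        # Mutations may arrive at a replica in any order, but they are
        # applied sorted by the primary's serial numbers, so every
        # replica ends up with the same sequence of updates.
        for serial, mutation in sorted(ordered):
            self.applied.append(mutation)
```

Because every replica applies the same serially ordered sequence, the replicas of a chunk stay mutually consistent without the master coordinating each write.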

