Abstract
Current trends in computing are moving towards distributed and parallel architectures, which emphasize concurrent access to shared resources such as databases and memory. Locking mechanisms can be used effectively to achieve high levels of concurrency without corrupting those resources. A wide range of synchronization protocols exists, some providing strong consistency and others only weak consistency, and there is a trade-off between the complexity of a locking mechanism and the level of consistency it offers. In this report I describe two locking mechanisms suited to the consistency requirements of distributed computing environments, viz. Distributed Lock Managers (DLM's) and Leases. Distributed lock managers adapt conventional client-server locking mechanisms to distributed settings; they provide strong consistency but are complex to implement. Leases are time-based protocols that form a hybrid of server-based and client-based locking strategies. I compare the two approaches and propose suitable use cases for each architecture.
1. Consistency Mechanisms
1.1. Introduction
Several trends are shaping future distributed systems: systems are being extended over wider-area networks, the speed of processors continues to grow, and larger numbers of hosts, both clients and servers, are being tied together within a single system. This increasing use of distributed systems necessitates properly functioning consistency mechanisms. Various nodes within a system can have concurrent access to shared resources, and all of them must get a consistent view of those resources. For this reason, when one node is writing to such a resource, no other node should be given access to the same resource. This can be achieved through locking mechanisms.

Fig 1.1. Concurrent access to shared resources
The World Wide Web (WWW), an extensively distributed architecture, has seen exponential growth in recent times, both in the number of users and in the diversity of applications accessing information stored at geographically distributed sites. This growth, however, is not uniform: certain objects are accessed more than others and create hot spots, leading to overload at the server, congestion in the network, and increased client response times. In addition, newer applications require smaller access latencies and stronger consistency guarantees. Consider the following illustration:
Consider a web server that provides online stock trading over the Internet [9]. Typically, online traders require the latest quote for the stock under consideration. In addition, they want the latest news about the company, its quarterly earnings, charts of its performance over the last few months, and other statistical information. However, the semantic requirements of this information vary. Traders can tolerate small inconsistencies in statistical information (such as the number of employees) that is not critical to their decision. The stock quotes, latest news, and other critical information, however, need to be consistent at all times. A trader should not be given an inconsistent stock quote and asked to decide whether to buy the stock. This example shows that applications require different consistency guarantees, and often require these diverse guarantees to coexist.
To prevent stale information from being transmitted to clients, a proxy must keep cached objects consistent with those on the servers. Existing proxies mostly employ weak consistency mechanisms; that is, the proxy does not guarantee that an object served from the cache will be consistent with the server at all times. This may be good enough for applications (like reading information about a company) that do not require strong consistency. Until recently, applications did not place stronger consistency requirements. With the evolution of the web, however, applications like online trading and shopping are gaining dominance, and these impose strong consistency requirements. Current proxy consistency mechanisms provide little or no support for such applications.
A prototype of a strong consistency mechanism for the web has been developed and deployed on the current Internet. This work enables weak consistency mechanisms to coexist with those that provide strong consistency, thereby serving the diverse needs of applications.
1.2. Classification of Consistency Mechanisms
Consistency mechanisms can be classified into two categories: weak consistency and strong consistency [7].
1.2.1. Weak Consistency Mechanisms
A weak consistency mechanism provides no guarantee that the object served from the cache will be consistent with that on the server. This means that stale objects can be served from the cache. Most existing proxies employ this form of consistency. Several weak consistency mechanisms have been proposed:
1.2.1.1. TTL Mechanism
Here the server assigns each object a timestamp that specifies the lifetime of the object. This timestamp, also called the time-to-live (TTL), is based on a heuristic guess. A proxy serves the object from its cache as long as the TTL has not expired. When the TTL expires, the proxy checks the consistency of the object with the server.
In this mechanism the server makes a calculated guess at the lifetime of the object, but there is no assurance that the object will not be modified before the TTL expires. The proxy may hold the object with a valid TTL while the server modifies it, in which case the proxy serves an inconsistent object.
The advantage of this method is that the server need not maintain any state for the object.
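The proxy-side check is simple. The sketch below is a minimal illustration, assuming a hypothetical cached_object record and revalidate_with_server() helper (neither comes from any particular proxy implementation):

#include <stdbool.h>
#include <time.h>

/* Hypothetical per-object record kept in the proxy cache. */
struct cached_object {
    time_t fetched_at;    /* when the object was stored in the cache   */
    time_t ttl_seconds;   /* lifetime assigned by the server (the TTL) */
    /* ... object data ... */
};

/* Hypothetical helper: contacts the server and refreshes the entry. */
void revalidate_with_server(struct cached_object *obj);

/* Serve from the cache while the TTL is valid; otherwise check first. */
void serve_object(struct cached_object *obj)
{
    bool ttl_valid = (time(NULL) - obj->fetched_at) < obj->ttl_seconds;

    if (!ttl_valid)
        revalidate_with_server(obj);  /* note: a write before expiry is missed */

    /* ... send the (possibly stale) cached copy to the client ... */
}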
1.2.1.2. Poll Mechanism
In this mechanism, the proxy initiates a check (poll) to see if the object in its cache is consistent with the server. The checks can be initiated periodically or based on some other factor like the age of the object. The proxy is thus said to poll the server.
In this mechanism the proxy guesses that the object may have been modified and decides to poll. If the object is modified in between polls, and a request is made after an object has been modified but before the poll, then the proxy serves inconsistent data.
The advantage of this mechanism is that no state needs to be maintained at the server and no computation is required there.
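In practice the poll is usually a conditional request (for example an HTTP GET with an If-Modified-Since header), so an unmodified object costs only a small control message. The following sketch, again with hypothetical types and helpers, polls on a fixed period rather than on every read:

#include <stdbool.h>
#include <time.h>

#define POLL_INTERVAL 60          /* seconds between polls; a tunable heuristic */

/* Hypothetical per-object record kept in the proxy cache. */
struct cached_object {
    time_t last_polled;
    time_t last_modified;         /* as last reported by the server */
    /* ... object data ... */
};

/* Hypothetical helper: asks the server whether the object changed after
 * 'since' (e.g. a conditional GET) and refreshes the cache entry if so. */
bool poll_server_if_modified_since(struct cached_object *obj, time_t since);

/* Polls only when the interval has elapsed, so a write landing between
 * two polls is still served stale until the next poll.                  */
void maybe_poll(struct cached_object *obj)
{
    time_t now = time(NULL);

    if (now - obj->last_polled >= POLL_INTERVAL) {
        poll_server_if_modified_since(obj, obj->last_modified);
        obj->last_polled = now;
    }
}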
1.2.1.3. Hybrid Mechanism
This mechanism combines the TTL mechanism with the poll: the proxy polls only after the TTL has expired. Although the proxy now makes an educated guess about when to poll (based on the TTL), it still suffers from the drawbacks of both mechanisms.
1.2.2. Strong Consistency Mechanisms
A strong consistency mechanism guarantees that an object served from the cache will always be consistent with the copy on the server. This means that stale objects can never be served from the cache. Strong consistency mechanisms can be categorized as follows:
1.2.2.1. Server Invalidation
In this mechanism the server notifies the proxies whenever the object is modified. Proxies are always aware whether the data in their cache is consistent with the server. Thus, the proxy never serves inconsistent objects.
The advantage of this mechanism is that it is optimal in the number of network messages needed to maintain consistency (a message is sent only when the object is being modified).
The disadvantage is that the server needs to maintain state for each object: it has to know the list of all proxies that have accessed the object in order to invalidate them. This state has to be maintained indefinitely, since it is not known when the object may be modified. The resulting state space can be huge and ever-growing if we consider the millions of clients accessing the object.
The second disadvantage is that the server may have to send many invalidations when an object is modified, causing a burst of network traffic. If a proxy is unreachable due to a network failure or a proxy crash, the server must block on the write or risk violating its consistency guarantees.
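The sketch below illustrates the server-side bookkeeping; the proxy list, notify_proxy() helper and fixed array size are hypothetical, but the structure shows why per-object state grows without bound and why a write may block:

#include <stdbool.h>
#include <stddef.h>

#define MAX_PROXIES 1024          /* illustration only; the real list is unbounded */

/* Per-object state the server must keep in order to invalidate proxies. */
struct object_state {
    const char *name;
    const char *proxies[MAX_PROXIES];   /* every proxy that ever fetched the object */
    size_t nproxies;
};

/* Hypothetical helper: sends one invalidation message; returns false if
 * the proxy is unreachable (network partition or proxy crash).          */
bool notify_proxy(const char *proxy, const char *object_name);

/* Called on every write: a burst of up to nproxies messages. */
void invalidate_all(struct object_state *obj)
{
    for (size_t i = 0; i < obj->nproxies; i++) {
        while (!notify_proxy(obj->proxies[i], obj->name)) {
            /* Unreachable proxy: keep retrying (blocking the write) or
             * give up and risk violating the consistency guarantee.     */
        }
    }
}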
1.2.2.2. Client Poll
In this mechanism, the proxy polls the server on every object read. If the object has changed, it is fetched from the server and the cache is updated. Thus, the proxy always serves consistent objects from its cache.
The advantage of this mechanism is that the server need not maintain any state. If the object is modified and the network between the proxy and server is broken, only the proxy needs to block on the read (unlike server invalidation). Thus, the writes at the server are independent of network failures.
The main disadvantage is that every read at the proxy adds a round-trip delay, which negates the benefit of caching data close to clients. In addition, far more network messages are sent than are needed to maintain consistency, so many control messages are generated that waste network bandwidth.
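Compared with the periodic poll of Section 1.2.1.2, the validation now sits on the read path itself, so every read pays one round trip. A minimal sketch with the same hypothetical helper:

#include <stdbool.h>
#include <time.h>

/* Hypothetical per-object record kept in the proxy cache. */
struct cached_object {
    time_t last_modified;          /* as last reported by the server */
    /* ... object data ... */
};

/* Hypothetical helper: conditional request that refreshes the entry if
 * the server copy is newer; costs one round trip and may block.        */
bool poll_server_if_modified_since(struct cached_object *obj, time_t since);

/* Every read validates with the server before serving from the cache,
 * so the served copy is always consistent with the server.             */
void serve_object(struct cached_object *obj)
{
    poll_server_if_modified_since(obj, obj->last_modified);
    /* ... send the now guaranteed-fresh cached copy to the client ... */
}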
1.3. Need for DLM’s
The synchronization mechanisms required by distributed systems differ from those required by other systems, because distributed systems make extensive use of shared resources. A distributed lock manager (DLM) provides synchronization services appropriate for a highly parallel distributed system. Systems can use locks to control access to distributed copies of data buffers (caches) or to limit concurrent access to shared disk devices. Locks can also be used to control application instance start-up and to detect application instance failures. In addition, applications can use the locking services for their other synchronization needs.
1.4. Need for Leases
As seen in the previous section, the server invalidation and client poll mechanisms suffer from limitations that prevent efficient deployment of strong consistency mechanisms in the current WWW. Server invalidation has the problem of an unbounded state-space requirement at the server, while client poll results in a network message and a round-trip delay on every read.
Server invalidation and client poll form two ends of a spectrum. One is optimal in the number of network messages (equal to the number of modifications), while the other is optimal in the amount of state space maintained at the server (no state space). Thus, there is a trade-off between server state space and network bandwidth: as one moves from client poll to server invalidation, the state space increases while the number of network messages and the bandwidth requirement decrease.
Leases are time-bound locks during whose term the server notifies clients of changes to the object. A lease is thus a hybrid mechanism that employs both server invalidation and client poll to maintain strong consistency: while a client holds a valid lease, the server invalidates the client's copy on a write; once the lease expires, the client must poll the server (renew the lease) before using the cached object.
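A minimal proxy-side sketch of the idea is shown below; the lease record and renew_lease() helper are hypothetical. While the lease is valid the server behaves like server invalidation, and a lease renewal is effectively a client poll, which is why the server's per-client state is bounded in time:

#include <stdbool.h>
#include <time.h>

/* Hypothetical proxy-side lease record for one cached object. */
struct lease {
    time_t granted_at;
    time_t duration;       /* lease term granted by the server          */
    bool   invalidated;    /* set when the server sends an invalidation */
};

bool lease_valid(const struct lease *l)
{
    return !l->invalidated && (time(NULL) - l->granted_at) < l->duration;
}

/* Hypothetical helper: contacts the server, refreshes the object if it
 * changed and obtains a new lease (this is the "client poll" half).     */
void renew_lease(struct lease *l);

void serve_object(struct lease *l)
{
    if (!lease_valid(l))
        renew_lease(l);    /* expired or invalidated: one round trip */

    /* While the lease is valid the server has promised to notify us of
     * writes (the "server invalidation" half), so the cached copy can
     * be served without contacting the server.                          */
}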
In the following chapters we present an overview of Distributed Lock Managers and study their locking model in detail. Chapter 4 describes leases, with a case study of NQNFS as an implementation of leases. Chapter 5 explains the differences between the two locking mechanisms studied in this report.
2. An Overview of Distributed Lock Managers
Distributed Lock Managers (DLM’s) provide advisory locking services that allow concurrent applications running on multiple nodes in a Linux cluster to coordinate their use of shared resources [1]. Cooperating applications running on different nodes in a Linux cluster can share common resources without corrupting those resources. The shared resources are not corrupted because the lock manager synchronizes (and if necessary, serializes) access to them.
All locks are advisory, that is, voluntary. The system does not enforce locking. Instead, applications running on the cluster must cooperate for locking to work. An application that wants to use a shared resource is responsible for first obtaining a lock on that resource before attempting to access it.
Applications that can benefit from using Distributed Lock Managers are transaction-oriented, such as a database or a resource controller or manager.
This chapter presents an overview of DLM's.
2.1 Locking Model
The locking model provides a rich set of locking modes and both synchronous and asynchronous execution. It supports:
• Six locking modes that increasingly restrict access to a resource (sketched after this list)
• The promotion and demotion of locks through conversion
• Synchronous completion of lock requests
• Asynchronous completion through asynchronous system trap (AST) emulation
• Global data through lock value blocks
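The six modes referenced above follow the convention of VMS-style lock managers, ordered from least to most restrictive. The enum below is only illustrative; a given DLM implementation may export these constants under different names:

/* The six conventional DLM lock modes, least to most restrictive.
 * Identifiers and comments are illustrative, not a specific DLM's API. */
enum lock_mode {
    LOCK_NL,   /* Null: registers interest in a resource, grants no access  */
    LOCK_CR,   /* Concurrent Read: read while others may read or write      */
    LOCK_CW,   /* Concurrent Write: write while others may read or write    */
    LOCK_PR,   /* Protected Read: shared read; no writers permitted         */
    LOCK_PW,   /* Protected Write: write; others may hold CR reads only     */
    LOCK_EX    /* Exclusive: sole read/write access                         */
};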
2.2. Application Programming Interfaces
Distributed Lock Managers support an application programming interface (API), a collection of C language routines that allow you to acquire, manipulate, and release locks. This API presents a high-level interface that you can use to implement locking in an application; a usage sketch follows the library list below.
Distributed Linux includes two versions of the lock manager API libraries. They are:
• libdlm.so: for user-space DLM client applications
• libdlmk.o: for kernel-space DLM client applications
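As a rough usage sketch, the fragment below acquires and releases an exclusive lock on a named resource through the user-space library. It assumes the synchronous dlm_lock_wait()/dlm_unlock_wait() convenience calls and the LKM_EXMODE/LKF_NOQUEUE constants found in common versions of libdlm; exact prototypes, error handling, and lockspace setup vary between versions, so check libdlm.h rather than treating this as a reference:

/* Sketch only: acquire and release an exclusive DLM lock via libdlm.
 * Build with -ldlm; prototypes may differ across libdlm versions.     */
#include <stdio.h>
#include <string.h>
#include <libdlm.h>

int main(void)
{
    struct dlm_lksb lksb;                    /* lock status block */
    const char *resource = "my-resource";
    int rv;

    memset(&lksb, 0, sizeof(lksb));

    /* Request an exclusive (EX) lock and wait for it to be granted. */
    rv = dlm_lock_wait(LKM_EXMODE, &lksb, LKF_NOQUEUE,
                       resource, strlen(resource),
                       0,      /* parent lock id (unused)          */
                       NULL,   /* blocking AST argument            */
                       NULL,   /* blocking AST callback            */
                       NULL);  /* range (unused)                   */
    if (rv != 0 || lksb.sb_status != 0) {
        fprintf(stderr, "lock request failed\n");
        return 1;
    }

    /* ... access the shared resource under the EX lock ... */

    /* Release the lock using the lock id returned in the status block. */
    dlm_unlock_wait(lksb.sb_lkid, 0, &lksb);
    return 0;
}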
2.3. Distributed Lock Manager Architecture
The lock manager defines a lock resource as the lockable entity. The lock manager creates a lock resource the first time an application requests a lock against it. A single lock resource can have one or many locks associated with it. A lock is always associated with one lock resource.
The lock manager provides a single, unified lock image shared among all nodes in the cluster. Each node runs a copy of the lock manager daemon. These lock manager daemons communicate with each other to maintain a cluster-wide database of lock resources and the locks held on these lock resources.
Within this cluster-wide database, the lock manager maintains one master copy of each lock resource. This master copy can reside on any cluster node. Initially, the master copy resides on the node on which the lock request originated. The lock manager maintains a cluster-wide directory of the locations of the master copies of all lock resources within the cluster, and it attempts to divide the contents of this directory evenly across all cluster nodes. When an application requests a lock on a lock resource, the lock manager first determines which node holds the directory entry and then reads that entry to find out which node holds the master copy of the lock resource.
Fig. 2.1. Distributed Lock Manager Architecture.
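The directory lookup itself can be a deterministic mapping from the resource name to a node, for example by hashing the name over the number of cluster nodes so that every node computes the same answer. The sketch below illustrates the idea only; it is not the hashing scheme of any particular DLM:

#include <stddef.h>
#include <stdint.h>

/* Illustrative only: map a lock resource name to the node holding its
 * directory entry by hashing the name over the cluster size (FNV-1a). */
uint32_t directory_node(const char *name, size_t namelen, uint32_t num_nodes)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < namelen; i++) {
        h ^= (uint8_t)name[i];
        h *= 16777619u;
    }
    return h % num_nodes;   /* same result on every node */
}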
By allowing all nodes to maintain master copies of lock resources, instead of having one primary lock manager in the cluster, the lock manager can reduce network traffic in cases where the lock request can be handled on the local node. Handling requests on the local node also avoids the potential bottleneck of a single primary lock manager and reduces the time required to reconstruct the lock database when a failover occurs.
To increase the likelihood of local processing, the lock manager can also move a lock resource master to the node that is accessing the lock resource most frequently. This process is called lock resource master migration. Using these techniques, the lock manager attempts to increase lock throughput and reduce the network traffic overhead. Applications can also explicitly instruct the lock manager to process a lock locally.
When a node fails, the lock managers running on the surviving cluster nodes release the locks held by the failed node. The lock manager then processes lock requests from surviving nodes that were previously blocked by locks owned by the failed node. In addition, the other nodes re-master locks that were mastered on the failed node.
2.4. DLM's and Different Cluster Infrastructures
DLM's provide their own mechanisms to support their locking features, such as inter-node communication to manage lock traffic and recovery protocols to re-master locks after a node failure or to migrate locks when a node joins the cluster. However, distributed lock managers do not provide mechanisms to actually manage the cluster itself.
Therefore DLM’s expect to operate in a cluster in conjunction with another cluster infrastructure environment that provides the following minimum requirements:
• Node liveness: an indication of whether a node is currently part of the cluster.
• Consistent view of membership: All nodes agree on cluster membership.
• IP address liveness: An IP address to use to communicate with the DLM on a node; the DLM will use only a single IP address at any one time for a node, but it supports switching among different IP addresses for each node.
The DLM works with any cluster infrastructure environments that provide the minimum requirements listed above. The choice of an open source or closed source environment is up to the user.