Properties of a Continuous Media Server (Part 2)


Scalability, heterogeneity, and fault resilience

Any CM server design must scale to support growth in user demand or application requirements. Several techniques address this requirement, including the use of multidisk arrays. However, if the design connected all the disks to a single large computer, I/O bandwidth constraints would limit the overall achievable throughput; hence, Yima's architecture uses multiple computers, or nodes. As Figure 1 shows, the Yima server architecture interconnects storage nodes via a high-speed network fabric that can expand as demand increases. This modular architecture makes it easy to upgrade older PCs and add new nodes.

Applications that rely on large-scale CM servers, such as video-on-demand, require continuous operation. To achieve high reliability and availability for all data stored in the server, Yima uses disk merging to implement a parity-based data-redundancy scheme that, in addition to providing fault tolerance, can also take advantage of a heterogeneous storage subsystem. Disk merging presents a virtual view of logical disks on top of the actual physical storage system, which might consist of disks that provide different bandwidths and storage space. This abstraction allows a system's application layers to assume a uniform characteristic for all the logical disks, which in turn allows using conventional scheduling and data placement algorithms across the physical storage system.
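As a rough illustration of the disk-merging abstraction, the sketch below (our own example; the function name and the fixed logical bandwidth are assumptions, not Yima's implementation) carves each physical disk into uniform logical disks in proportion to its bandwidth, so that higher layers can schedule over identical logical units.

def merge_disks(physical_disks, logical_bandwidth):
    """physical_disks: list of (name, bandwidth_mbps) pairs.
    Returns a mapping from logical disk id to the physical disk backing it."""
    mapping = {}
    logical_id = 0
    for name, bandwidth in physical_disks:
        # A faster physical disk hosts proportionally more logical disks,
        # so every logical disk offers roughly the same bandwidth.
        count = max(1, round(bandwidth / logical_bandwidth))
        for _ in range(count):
            mapping[logical_id] = name
            logical_id += 1
    return mapping

# Example: a 40-MB/s disk and a 20-MB/s disk merged into 10-MB/s logical disks.
print(merge_disks([("diskA", 40), ("diskB", 20)], logical_bandwidth=10))
# {0: 'diskA', 1: 'diskA', 2: 'diskA', 3: 'diskA', 4: 'diskB', 5: 'diskB'}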

Data reorganization:
Computer clusters try to balance load distribution across all nodes. Over time, both round-robin and random data-placement techniques distribute data retrievals evenly across all disk drives. When a system operator adds a node or disk, however, the system must redistribute the data to avoid partitioning the server. Reorganizing the blocks involves much less overhead when the system uses random rather than round-robin placement. For example, with round-robin striping, adding or removing a disk requires the relocation of almost all data blocks. Randomized placement requires moving only a fraction of the blocks from each disk to the added disk, just enough to ensure that the blocks are still randomly placed to preserve the load balance.
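To make the relocation cost concrete, here is a small back-of-the-envelope sketch (ours, not from the Yima work) comparing how many blocks change location when one disk is added to an n-disk array under round-robin striping versus random placement.

def round_robin_moves(num_blocks, old_disks):
    # Under round-robin striping, block b lives on disk (b mod n), so changing
    # n relocates nearly every block.
    new_disks = old_disks + 1
    return sum(1 for b in range(num_blocks) if b % old_disks != b % new_disks)

def random_placement_moves(num_blocks, old_disks):
    # Under random placement, only enough blocks to populate the new disk move,
    # roughly 1/(n+1) of them in expectation.
    return num_blocks // (old_disks + 1)

print(round_robin_moves(10000, 4))       # 8000 of 10,000 blocks relocate
print(random_placement_moves(10000, 4))  # about 2,000 blocks relocate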
Yima uses a pseudo-random number generator to produce a random, yet reproducible, number sequence to determine block locations. Because some blocks must move to the added disks when the system scales up, Yima cannot use the previous pseudo-random number sequence to find the blocks; therefore, Yima must derive a new random number sequence. We use a composition of random functions to determine this new sequence. Our approach, termed Scaling Disks for Data Arranged Randomly (Scaddar), preserves the sequence's pseudo-random properties, resulting in minimal block movements and little overhead in the computation of new locations. The Scaddar algorithm can support disk scaling while Yima is online.
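The following is a simplified sketch in the spirit of composing pseudo-random functions; it is not the published Scaddar algorithm, and the hash-based draw is just one way to obtain a reproducible sequence. It illustrates how a block's current disk can be recomputed from its identifier across scaling operations while moving roughly the minimum number of blocks.

import hashlib

def draw(block_id, step, buckets):
    # Reproducible pseudo-random draw in [0, buckets) for a given block and step.
    digest = hashlib.sha256(f"{block_id}:{step}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % buckets

def locate(block_id, disk_counts):
    """disk_counts is the history of cluster sizes, e.g. [4, 5] after one addition."""
    disk = draw(block_id, 0, disk_counts[0])
    for step in range(1, len(disk_counts)):
        old_n, new_n = disk_counts[step - 1], disk_counts[step]
        # Re-draw over the enlarged array; keep the block in place unless the
        # draw selects a newly added disk. This preserves a uniform random
        # layout while relocating only about (new_n - old_n) / new_n of the blocks.
        candidate = draw(block_id, step, new_n)
        if candidate >= old_n:
            disk = candidate
    return disk

moved = sum(locate(b, [4]) != locate(b, [4, 5]) for b in range(10000))
print(moved)  # close to 10000 / 5 = 2000 blocks move after adding a fifth disk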

Multinode server architecture:
We built the Yima servers from clusters of server PCs called nodes. A distributed file system provides a complete view of all the data on every node without requiring individual data blocks to be replicated, except as required for fault tolerance. A Yima cluster can run in either a master-slave or bipartite mode.
 Master-slave design (Yima-1):
With this design, an application running on a specific node operates on all local and remote files. Operations on remote files require network access to the corresponding node.
The Yima-1 software consists of two components:
• the Yima-1 high-performance distributed file
  system, and
• the Yima-1 media streaming server.
As Figure 2a shows, the distributed file system consists of multiple file I/O modules located on each node. The media-streaming server itself is composed of a scheduler, a real-time streaming protocol (RTSP) module, and a real-time transport protocol (RTP) module. Each Yima-1 node runs the distributed file system, while certain nodes also run the Yima-1 media-streaming server. A node running only the file I/O module has only slave capabilities, while a node that runs both components has master and slave capabilities.
A master server node is a client's point of contact during a session. We define a session as a complete RTSP transaction for a CM stream. When a client wants to request a data stream using RTSP, it connects to a master server node, which in turn brokers the request to the slave nodes. If multiple master nodes exist in the cluster, the system assigns one to the session using a round-robin domain name service (RR-DNS) or a load-balancing switch. A pseudo-random number generator manages the locations of all data blocks.
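As a rough sketch of this request path (our own illustration; read_local and fetch_from_slave are hypothetical callbacks, not Yima-1 APIs), the master recomputes a block's location with the same reproducible pseudo-random placement and fetches remote blocks from the owning slave's file I/O module.

import random

def block_location(block_id, num_nodes, seed=42):
    # Reproducible pseudo-random placement: the same seed and block id always
    # yield the same node, so any master can recompute where a block lives.
    return random.Random(seed * 1_000_003 + block_id).randrange(num_nodes)

def serve_block(block_id, this_node, num_nodes, read_local, fetch_from_slave):
    node = block_location(block_id, num_nodes)
    if node == this_node:
        return read_local(block_id)           # block stored on the master itself
    return fetch_from_slave(node, block_id)   # remote file I/O module on a slave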
Using a distributed file system obviates the need for applications to be aware of the storage system's distributed nature. Even applications designed for a single node can, to some degree, take advantage of this cluster organization. The Yima-1 media-streaming server component, based on Apple's Darwin Streaming Server (DSS) project, assumes that all media data reside in a single local directory. Enhanced with our distributed file system, multiple copies of the DSS code, each copy running on its own master node, can share the same media data. This also simplifies our client design, since the client sends all RTSP control commands to only one server node.
Finally, Yima-1 uses a pause-resume flow-control technique to deliver VBR media. A stream is sent at a rate of either RN or zero megabits per second, where RN is an estimated peak transfer rate for the movie. The client issues pause-and-resume commands to the server depending on how full the client buffer is. Although the pause-resume design is simple and effective, its on-off nature can lead to bursty traffic.
With the Yima-1 architecture, several major performance problems offset the ease of using clustered storage, such as a single point of failure at the master node and heavy internode traffic. These drawbacks motivated the design of Yima-2, which provides a higher-performing and more scalable solution for managing internode traffic.

Bipartite design (Yima-2):
We based Yima-2's bipartite model on two groups of nodes: a server group and a client group.
With Yima-1, the scheduler, RTSP, and RTP server modules are centralized on a single master node from the viewpoint of a single client. Yima-2 expands on the decentralization by keeping only the RTSP module centralized, again from the viewpoint of a single client, and parallelizing the scheduling and RTP functions, as Figure 2b shows. In Yima-2, every node retrieves, schedules, and sends its own local data blocks directly to the requesting client, thereby eliminating Yima-1's master-node bottleneck. These improvements significantly reduce internode traffic.
Although the bipartite design offers clear advantages, its realization imposes several new challenges. First, clients must handle receiving data from multiple nodes. Second, we replaced the original DSS code component with a distributed scheduler and RTP server to achieve Yima-2's decentralized architecture. Last, Yima-2 requires a flow-control mechanism to prevent client buffer overflow or starvation.

With Yima-2, each client maintains contact with one RTSP module throughout a session for control information. For load-balancing purposes, each server node can run an RTSP module, and the decision of which RTSP server to contact remains the same as in Yima-1: RR-DNS or switch. However, contrary to the Yima-1 design, a simple RR-DNS cannot make the server cluster appear as one node, since clients must communicate with individual nodes for retransmissions. Moreover, if an RTSP server fails, sessions are not lost. Instead, the system reassigns the sessions to another RTSP server, with no disruption in data delivery.

We adapted the MPEG-4 file format, as specified in MPEG-4 Version 2, for the storage of media blocks. This flexible-container format is based on Apple's QuickTime file format. In Yima-2, we expanded on the MPEG-4 format by allowing encapsulation of other compressed media data such as MPEG-2. This offers the flexibility of delivering any data type while still being compatible with the MPEG-4 industry standard.

To avoid the bursty traffic caused by Yima-1's pause/resume transmission scheme and still accommodate VBR media, the client in Yima-2 sends feedback to make minor adjustments to the data transmission rate. By sending occasional slowdown or speedup commands to the Yima-2 server and monitoring the amount of data in its buffer, the client can receive a smooth data flow.

CLIENT SYSTEMS
We built the Yima Presentation Player as a client application to demonstrate and experiment with our Yima server. The player can display a variety of media types on both Linux and Windows platforms. Clients receive streams via standard RTSP and RTP communications.

Client buffer management
A circular buffer in the Yima Presentation Player reassembles VBR media streams from RTP packets received from the server nodes. Researchers have proposed numerous techniques to smooth the variable consumption rate RC by approximating it with a number of constant-rate segments. Implementing such algorithms at the server side, however, requires complete knowledge of RC as a function of time. We based our buffer management techniques on a flow-control mechanism so they would work in a dynamic environment. A circular buffer of size B accumulates the media data and keeps track of several watermarks, including buffer overflow (WMO) and buffer underflow (WMU). The decoding thread consumes data from the same buffer. Two schemes, pause/resume and Δp, control the data flow.
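A minimal sketch of such a circular buffer with overflow and underflow watermarks follows (illustrative only; the field names and byte-oriented interface are our own simplifications, not the player's actual code).

class CircularBuffer:
    def __init__(self, size, wm_overflow, wm_underflow):
        self.data = bytearray(size)
        self.size = size
        self.head = 0        # next write position (fed by the RTP reassembly thread)
        self.tail = 0        # next read position (drained by the decoding thread)
        self.level = 0       # bytes currently buffered
        self.wm_o = wm_overflow
        self.wm_u = wm_underflow

    def write(self, payload):
        # Flow control is expected to keep the level below wm_o, so no overflow check here.
        for byte in payload:
            self.data[self.head] = byte
            self.head = (self.head + 1) % self.size
        self.level += len(payload)

    def read(self, n):
        n = min(n, self.level)
        out = bytes(self.data[(self.tail + i) % self.size] for i in range(n))
        self.tail = (self.tail + n) % self.size
        self.level -= n
        return out

    def above_overflow_mark(self):
        return self.level >= self.wm_o

    def below_underflow_mark(self):
        return self.level <= self.wm_u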

Pause-resume. 
If the data in the buffer reaches WMO, the client software pauses the data flow from the server. Playback continues to consume media data from the buffer. When the data in the buffer reaches the underflow watermark WMU, the stream from the server resumes. However, the buffer must set WMO and WMU with safety margins that account for network delays, so that, if the data delivery rate RN is set correctly, the buffer will not underflow before the resumed stream's data arrives. Although the pause/resume technique is a simple and effective design, if pause and resume actions coincide across multiple sessions, bursty traffic becomes a noticeable effect.
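A minimal sketch of this pause/resume logic might look like the following (assuming a buffer object like the one sketched above; the RTSP hooks are hypothetical stand-ins, not the player's API).

def pause_resume_step(buffer, streaming, send_rtsp_pause, send_rtsp_resume):
    """Called periodically by the client; returns the new streaming state."""
    if streaming and buffer.above_overflow_mark():
        send_rtsp_pause()      # buffer nearly full: stop the server stream
        return False
    if not streaming and buffer.below_underflow_mark():
        send_rtsp_resume()     # buffer nearly empty: resume delivery at rate RN
        return True
    return streaming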

Client-controlled Δp.
Δp is the interpacket delivery time the schedulers use to transmit packets to the client. Schedulers use the network time protocol (NTP) to synchronize time across nodes. Using a common time reference and each packet's time stamp, server nodes send packets in sequence at timed intervals. The client fine-tunes the delivery rate by updating the server with new Δp values based on the amount of data in its buffer. Fine-tuning is achieved by using multiple watermarks in addition to WMO and WMU, as Figure 1 shows.
When the level of data in the client buffer reaches a watermark, the client sends a corresponding Δp speedup or slowdown command to maintain the amount of data within the buffer. The buffer smooths out any fluctuations in network traffic or server load imbalance that might delay packets. Thus, the client can control the delivery rate of received data to achieve smoother delivery, prevent bursty traffic, and keep a constant level of buffer data.
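The sketch below illustrates the idea of mapping the buffer fill level to Δp adjustments; the watermark fractions and scaling factors are invented for the example and are not Yima's actual values.

SLOWDOWN_MARKS = (0.80, 0.90)   # buffer fill fractions that trigger slowdown commands
SPEEDUP_MARKS = (0.20, 0.10)    # buffer fill fractions that trigger speedup commands

def adjust_delta_p(buffer_level, buffer_size, current_dp):
    # Map the buffer fill level to a new interpacket delivery time; the client
    # sends the new value to the server only when a watermark is crossed.
    fill = buffer_level / buffer_size
    if fill >= SLOWDOWN_MARKS[1]:
        return current_dp * 1.10    # strong slowdown: stretch packet spacing
    if fill >= SLOWDOWN_MARKS[0]:
        return current_dp * 1.05    # mild slowdown
    if fill <= SPEEDUP_MARKS[1]:
        return current_dp * 0.90    # strong speedup: tighten packet spacing
    if fill <= SPEEDUP_MARKS[0]:
        return current_dp * 0.95    # mild speedup
    return current_dp               # within the comfort zone: leave the rate alone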

Player media types:
Figure 1 shows the player's three-threaded structure. The playback thread interfaces with the actual media decoder. The decoder can be either software- or hardware-based. Table 1 lists some decoders that we incorporated. The CineCast hardware MPEG decoders from Vela Research support both MPEG-1 and MPEG-2 video and two-channel audio. For content that includes 5.1 channels of Dolby Digital audio, as used in DVD movies, we use the Dxr2 PCI card from Creative Technology to decompress both MPEG-1 and MPEG-2 video in hardware. The card also decodes MPEG audio and provides a 5.1-channel Sony-Philips Digital Interface (SP-DIF) digital audio output terminal. With the emergence of MPEG-4, we began experimenting with a DivX software decoder.9 MPEG-4 provides a higher compression ratio than MPEG-2. A typical 6-Mbps MPEG-2 media file may only require an 800-Kbps delivery rate when encoded with MPEG-4. We delivered an MPEG-4 video stream at near NTSC quality to a residential client site via an ADSL connection.10

HDTV client:
The streaming of high-definition content presented several challenges. First, high-definition media require high transmission bandwidth. For example, a video resolution of 1,920 × 1,080 pixels encoded via MPEG-2 results in a data rate of 19.4 Mbps. This was less of a problem on the server side because we designed Yima to handle high data rates. The more intriguing problems arose on the client side. We integrated an mpeg2dec open source software decoder because it was cost-effective. Although it decoded our content, achieving real-time frame rates with high-definition video was nontrivial because of the high resolution. On a dual-processor 933-MHz Pentium III, we achieved approximately 20 frames per second using unoptimized code with Red Hat Linux 6.2 and XFree86 4.0.1 on an nVidia Quadro 2 graphics accelerator. In our most recent implementation, we used a Vela Research CineCast HD hardware decoder, which achieved real-time frame rates at data rates up to 45 Mbps.

Figure 3. Panoramic video and 10.2-channel audio playback system block diagram. One Yima client renders five channels of synchronized video in a mosaic of 3,600 × 480 pixels while another Yima client renders 10.2 channels of synchronized audio (0.2 refers to two low-frequency channels, or subwoofers).

Multistream synchronization
The flow-control techniques implemented in the Yima client-server communications protocol synchronize multiple, independently stored media streams. Figure 3 shows the client configuration for the playback of panoramic, five-channel video and 10.2-channel audio. The five video channels originate from a 360-degree video camera system such as the FullView model from Panoram Technologies. We encode each video channel into a standard MPEG-2 program stream. The client receives the 10.2 channels of high-quality, uncompressed audio separately.

During playback, all streams must render in tight synchronization so the five video frames corresponding to one time instance combine accurately into a panoramic mosaic of 3,600 × 480 pixels every 1/30th of a second. The player can show the resulting panoramic video on either a wide-screen or head-mounted display. The experience is enhanced with 10.2-channel surround audio, presented phase-accurately and in synchronization with the video.

Yima achieves precise playback with three levels of synchronization: block-level via retrieval scheduling, coarse-grained via the flow-control protocol, and fine-grained through hardware support. The flow-control protocol maintains approximately the same amount of data in all client buffers. With this prerequisite in place, we can use multiple CineCast decoders and a genlock timing-signal-generator device to lock-step the hardware MPEG decoders to produce frame-accurate output. All streams must start precisely at the same time. The CineCast decoders provide an external trigger that accurately initiates playback through software. Using two PCs, one equipped with two four-channel CineCast decoders and one with a multichannel sound card, a Yima client can render up to eight synchronous streams of MPEG-2 video and 24 audio channels.
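As one illustration of the coarse-grained start-up step (our own sketch; trigger_decoders stands in for the CineCast external trigger call, and the buffer objects are assumed to resemble the one sketched earlier), the client waits until every stream's buffer holds enough data and then fires a single common trigger so the genlocked decoders start on the same frame clock.

import time

def start_in_lockstep(buffers, start_threshold, trigger_decoders, poll=0.01):
    # Wait until every stream has buffered enough data, then fire one common
    # trigger so the genlocked hardware decoders begin output on the same frame clock.
    while not all(b.level >= start_threshold for b in buffers):
        time.sleep(poll)
    trigger_decoders()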

RTP/UDP AND SELECTIVE RETRANSMISSION
Yima supports the industry-standard RTP for the delivery of time-sensitive data. Because RTP transmissions are based on the best-effort user datagram protocol, a data packet could arrive out of order at the client or be dropped altogether along the network. To reduce the number of lost RTP data packets, we implemented a selective retransmission protocol.11 We configured the protocol to attempt at most one retransmission of each lost RTP packet, but only if the retransmitted packet would arrive in time for consumption.

When multiple servers deliver packets that are part of a single stream, as with Yima-2, and a packet does not arrive, how does the client know which server node attempted to send it? In other words, it is not obvious where the client should send its retransmission request. There are two solutions to this problem. The client can broadcast the retransmission request to all server nodes, or it can compute the server node to which it issues the retransmission request. With the broadcast approach, all server nodes receive a packet retransmission request, check whether they hold the packet, and either ignore the request or perform a retransmission. Consequently, broadcasting wastes network bandwidth and increases server load.

Yima-2 incorporates the unicast approach. Instead of broadcasting a retransmission request to all the server nodes, the client unicasts the request to the specific server node possessing the requested packet. The client determines the server node from which a lost RTP packet was intended to be delivered by detecting gaps in node-specific packet sequence numbers. Although this approach requires packets to contain a node-specific sequence number along with a global sequence number, the clients require very little computation to identify and locate missing packets.
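A small sketch of the per-node gap detection follows (illustrative; the packet field names and the request_retransmission callback are our own, not Yima's protocol definitions). It shows how node-local sequence numbers pinpoint which server should receive the unicast request.

def on_packet(packet, last_seen, request_retransmission):
    """packet carries a global sequence number plus (node_id, node_seq)."""
    node, seq = packet["node_id"], packet["node_seq"]
    expected = last_seen.get(node, -1) + 1
    # Any skipped node-local sequence numbers identify packets that this specific
    # node sent but the client never received, so the request is unicast to it.
    for missing in range(expected, seq):
        request_retransmission(node, missing)
    last_seen[node] = max(last_seen.get(node, -1), seq)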

TEST RESULTS
In extensive sets of experiments, Yima-2 exhibits an almost perfectly linear increase in the number of streams as the number of nodes increases. Yima-2's performance may become sublinear with larger configurations, low-bit-rate streams, or both, but it scales much better than Yima-1, which levels off early. We attribute Yima-1's nonlinearity to the increase in internodal data traffic.

In a separate end-to-end experiment, we streamed MPEG-4 content across the Internet to a residential client. The geographical distance between the two end points measured approximately 40 kilometers. We set up the client in a residential apartment and linked it to the Internet via an ADSL connection. The ADSL provider did not guarantee any minimum bandwidth but stated that it would not exceed 1.5 Mbps. The raw bandwidth achieved end-to-end between the Yima client and servers was approximately 1 Mbps. The visual and aural quality of an MPEG-4 encoded movie at less than 1 Mbps is surprisingly good. Our test movie, encoded at almost full NTSC resolution, displayed little degradation, a performance attributable to the low packet loss rate of 0.365 percent without retransmissions and 0.098 percent with retransmissions.

The results demonstrated the superiority of Yima-2 in scale-up and rate control. They also demonstrated the incorporated retransmission protocol's effectiveness. In addition, we colocated a Yima server at Metromedia Fiber Network in El Segundo, California, to demonstrate successful streaming of five synchronized video channels.
