Storage area networks create better data handling for concurrent data sharing

Dec. 1, 1998
[Figure: A complete storage area network (SAN) configuration.]

Data anywhere on the network

Robert Farkaly
DataDirect Networks

The explosive growth in the quantity of data, digital images, and visual applications is the engine driving the demand for faster, more capable network data access solutions. Today's challenge is to implement network data access architectures that make this data accessible.

Duplicating data and distributing "mega-files" to users has become too cumbersome. The same set of data is used to create pictures, 3D images, color maps, and other digital images, and when these images are copied, shared, and moved across the network, the results can be debilitating. Until now there has been no workable alternative.

Concurrent data networking architecture (CDNA) was designed to address these challenges. It enables multiple users, platforms, and applications to access the same data at the same time and use it collaboratively. CDNA requires Fibre Channel and the implementation of a storage area network (SAN).

Fibre Channel

Fibre Channel provides a 2.5- to 10-fold increase in effective data bandwidth over the traditional SCSI interface. Planned enhancements will quadruple its data-carrying capability to 400 megabytes per second.
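The quoted ranges are consistent with the parallel SCSI rates common at the time, compared against 1-Gbit/s Fibre Channel's roughly 100 MB/s of payload bandwidth. The arithmetic below is an illustrative reconstruction; the specific SCSI variants and rates are assumptions, not figures from the article:

```python
# Illustrative bandwidth comparison (rates assumed, not from the article).
# 1-Gbit/s Fibre Channel delivers roughly 100 MB/s of payload bandwidth.
fc_mb_s = 100

# Contemporary parallel SCSI variants and their peak transfer rates (MB/s).
scsi_variants = {
    "Fast SCSI": 10,
    "Fast-Wide SCSI": 20,
    "Ultra SCSI": 20,
    "Ultra-Wide SCSI": 40,
}

# Speedup of Fibre Channel over each SCSI variant.
speedups = {name: fc_mb_s / rate for name, rate in scsi_variants.items()}
# Spans 2.5x (vs. Ultra-Wide) up to 10x (vs. Fast SCSI) -- matching
# the 2.5- to 10-fold range quoted above.

# The planned fourfold enhancement mentioned above:
enhanced_mb_s = 4 * fc_mb_s  # 400 MB/s
```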

Fibre Channel eliminates the SCSI distance limitations that have kept data physically close to servers. It is capable of carrying many different channel protocols, including SCSI, IPI, and SBCCS, and a wide range of communications protocols, including IP, ATM, and FDDI.
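This multi-protocol capability works because each upper-level protocol rides as the payload of a standard Fibre Channel frame: a one-byte TYPE field in the 24-byte frame header identifies the payload's protocol (0x08 for SCSI-FCP, 0x05 for IP). A minimal sketch of packing that header follows; the field layout reflects the FC-PH frame header, but the helper function and the example values are illustrative only:

```python
import struct

FCP_TYPE = 0x08  # TYPE value for SCSI-FCP (SCSI carried over Fibre Channel)
IP_TYPE = 0x05   # TYPE value for IP carried over Fibre Channel

def fc_frame_header(r_ctl, d_id, s_id, ulp_type,
                    f_ctl=b"\x00\x00\x00", seq_id=0, df_ctl=0,
                    seq_cnt=0, ox_id=0, rx_id=0xFFFF, parameter=0):
    """Pack a 24-byte Fibre Channel frame header (FC-PH layout, big-endian)."""
    return struct.pack(
        ">B3sB3sB3sBBHHHI",
        r_ctl,       # routing control
        d_id,        # 3-byte destination port address
        0,           # CS_CTL (class-specific control)
        s_id,        # 3-byte source port address
        ulp_type,    # upper-level protocol TYPE: 0x08 = SCSI-FCP, 0x05 = IP
        f_ctl,       # 3-byte frame control flags
        seq_id, df_ctl, seq_cnt,   # sequence identification
        ox_id, rx_id,              # originator / responder exchange IDs
        parameter)

# A data frame (R_CTL 0x06 = solicited data) carrying SCSI-FCP payload:
hdr = fc_frame_header(0x06, b"\x01\x02\x03", b"\x04\x05\x06", FCP_TYPE)
assert len(hdr) == 24
```

The point of the sketch is that nothing in the header except TYPE changes when the payload switches from SCSI to IP, which is what lets one fabric serve both storage and communications traffic.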

Disk drives no longer need to reside within the server cabinet. The distance limitations have been eliminated while the data carrying capability has been dramatically improved.

Fibre Channel's bandwidth and multi-protocol capability enable a natural migration to the storage area network (SAN).

The SAN, however, is an incomplete solution. The Fibre Channel SAN consolidates resources but does not provide data access or management services such as data or file sharing. Each server continues to have its own dedicated logical devices that are not visible to other servers on the SAN.

Even though the disk drives are connected to the SAN, the file system within each server now becomes the limiting factor. Each of the servers "owns" a portion of the disk.

Compatibility

The puzzle gets worse when servers made by more than one manufacturer are introduced into the mix. A Sun server cannot directly access data owned by an SGI server, and neither of these UNIX servers can access data owned by a Windows NT server.

Even using a Fibre Channel SAN, data still has to be copied or remote-mounted via the Network File System (NFS). Part of the infrastructure is in place, but the most critical elements are still required to complete the solution.

A complete SAN implementation requires the introduction of a new class of equipment, developed to deliver all the benefits of the SAN while eliminating the restrictions. In a fully implemented SAN, two powerful characteristics are allowed to converge:

  • Direct attachment of storage to the network
  • Separation of file systems and data management services from servers and workstations.

A complete SAN moves, or "externalizes," the file system from the application server. Servers open, close, read, and write files as if the devices were directly attached; in fact, the devices are remotely attached via the Fibre Channel SAN.

Creating an external file system also enables multiple servers to share data as never before. Data is neither copied nor exchanged. Rather, data flows directly between the application and SAN-attached devices. Multiple servers can access the same data at the same time.
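One way to picture the externalized file system is as a shared metadata service that maps file names to block addresses on SAN-attached storage; every server performs the same lookup and then reads the blocks directly, so no copy or exchange is ever needed. The toy sketch below illustrates the idea only; the class names and structure are invented for this example, not any vendor's actual protocol:

```python
# Toy sketch of an externalized file system. The name-to-blocks mapping
# lives outside any application server, so every server resolves a file
# the same way and then performs I/O directly against shared storage.

class MetadataService:
    """Holds the file system's name-to-blocks mapping, external to servers."""
    def __init__(self):
        self._files = {}

    def create(self, name, block_addrs):
        self._files[name] = list(block_addrs)

    def locate(self, name):
        return self._files[name]

class SanStorage:
    """Block storage shared by all servers on the SAN."""
    def __init__(self):
        self._blocks = {}

    def write_block(self, addr, data):
        self._blocks[addr] = data

    def read_block(self, addr):
        return self._blocks[addr]

class Server:
    """An application server: one metadata lookup, then direct block I/O."""
    def __init__(self, name, metadata, storage):
        self.name, self.metadata, self.storage = name, metadata, storage

    def read_file(self, filename):
        addrs = self.metadata.locate(filename)        # ask where the data is...
        return b"".join(self.storage.read_block(a)    # ...then read it directly
                        for a in addrs)

# Shared infrastructure, populated once.
san = SanStorage()
md = MetadataService()
san.write_block(0, b"seismic ")
san.write_block(1, b"volume")
md.create("survey.dat", [0, 1])

# Two servers on different platforms read the same data concurrently,
# with no copy and no NFS remount.
sun = Server("sun", md, san)
sgi = Server("sgi", md, san)
same = sun.read_file("survey.dat") == sgi.read_file("survey.dat")
```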

Data flows efficiently. Backup and hierarchical storage management now take place on the SAN, freeing the communications network from these tasks.

Benefits

Key benefits of SAN architecture include:
  • Higher application availability: Data is externalized, independent of the application, and accessible through alternate paths.
  • Improved performance: SAN adds bandwidth for specific functions without increasing the load on the communications network.
  • Easier centralized management: The SAN encourages centralization, reducing management time and cost. Modern, comprehensive data-center management practices can be employed.
  • Centralized and consolidated RAID (redundant array of independent disks): This translates into higher performance, lower cost of management, greater flexibility, scalability, reliability, availability, and serviceability.
  • Practical data transfer, vaulting, and exchange with remote sites: SANs enable cost-effective implementation of high-availability disaster protection configurations.

Network computing is the basis for the information infrastructure in tomorrow's image-intensive oil and gas communities. Applications become user- and data-centric rather than computation-centric. Delivering data and services to the right user in the right place at the right time becomes the driving requirement for achieving competitive advantage.

Properly implemented, a SAN addresses the explosive growth in digital images and visual applications and provides the storage architecture that can make all this new data accessible to those who need it - anywhere on the network.

Copyright 1998 Oil & Gas Journal. All Rights Reserved.