“The primary business objective of FAN is to reduce the total cost of ownership of file data, reduce complexity in file data management and increase corporate compliance. This is achieved by improved storage management, providing the ability to consolidate and optimise storage across an organisation, improving disaster recovery and business continuity whilst minimising or eliminating client downtime,” says Gerald Penaflor, Regional Sales Director for South Asia Pacific and Korea, Brocade.
FAN focuses upon the centralised management of file data whereas SAN enables centralised management of block data. FAN can be integrated with a SAN in a situation where file data has been consolidated into a SAN using a Network Attached Storage (NAS) head or a File Server attached to a SAN. In this situation, a customer can leverage the efficiencies provided by both FAN and SAN. A SAN is not a pre-requisite for a FAN but a combined SAN/FAN strategy does offer greater management efficiencies and a lower total cost of ownership.
FAN’s evolution can be viewed in a similar light to that of SAN. The concept of a SAN evolved when organisations felt the need to centrally store all block-level information, from an ease-of-management perspective. It essentially collapsed the various silos of storage into one pool that can be accessed using standard protocols from any or every server in the enterprise. Files that are now spread across the organisation pose a serious risk to enterprises, with compliance becoming a major factor governing all businesses. Hence, FAN is gradually evolving to help enterprises pool all file-level storage devices together and, using intelligent software, ease migration and management across the enterprise. In essence, FAN drives the creation of a common network of file storage, just as the evolution of SAN focused on the creation of a common network of block-level storage.
“Both technologies (SAN and FAN) started with the aim of consolidating information assets, optimising infrastructure and reducing management costs. SAN evolved to address the requirement for mission-critical enterprise apps to access data with high performance; whereas FAN evolved to address the issue of managing the vast amount of unstructured data spread across the organisation,” feels Soumitra Agarwal, Marketing Director, NetApp.
In reality, the two technologies are more than complementary; they are symbiotic. A SAN is a requirement for a robust FAN solution, and FAN solutions consist of tools that a SAN cannot provide. As FANs make management easier at the file level, they permit the continued growth of data in the underlying storage subsystems, which are usually SAN-attached. Notably, all file data is ultimately stored in block format, and block data is optimally stored on a SAN.
Prem Nithin, Senior Technical Consultant, Cisco India & SAARC, predicts, “As with a SAN, there are many technologies and approaches that will be possible in the design and deployment of a FAN. Many vendors will participate in the FAN market, and innovation will continue at a fast pace over the next several years.” Establishing an accepted definition of a FAN is critical because it will allow IT teams to develop a common shorthand and reference models for how they architect, deploy, manage and augment their file infrastructure. In the absence of this kind of framework, many enterprises will simply drown in the coming years, not only from a deluge of mismanaged file data, but also from the inevitable confusion that would result without a common nomenclature.
Underlying technologies
- Storage devices. The foundational level on top of which a FAN resides is the storage infrastructure. This can be either a SAN or a NAS environment. The only prerequisite is that a FAN must leverage a networked storage environment to enable data and resource sharing.
- File serving devices/interfaces. Either as a directly integrated part of the storage infrastructure (e.g., NAS), or as a gateway interface (e.g., SAN), all FANs must have devices capable of surfacing file-level information in the form of standard protocols such as CIFS or NFS.
- Namespaces. All FANs build upon file systems that can organise, present and store file content for their authorised end clients. This capability is referred to as the file system’s “namespace”, and it is one of the central concepts around which a FAN revolves. Several kinds of namespaces are possible in a FAN.
- File management and control services. The other central concept in the architecture of a FAN is the software intelligence that interoperates with namespaces to create new value across the entire enterprise. From a deployment perspective, these services might be integrated directly with file systems or embedded in networking devices, but they may also be standalone services. Examples include file virtualisation, classification, de-duplication, and wide-area file services.
- End clients. All FANs have end client machines that access the namespaces created by file systems. These clients could be on any platform or computing device.
- Connectivity. There are many possible ways that a FAN connects its end clients to the namespaces. They are commonly connected across a standard LAN, but they may simultaneously or alternatively leverage any manner of wide-area technologies, as well.
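One of the file management services listed above, de-duplication, is easy to illustrate in miniature: duplicate files can be detected by comparing content hashes rather than names or locations. The sketch below is a purely illustrative toy in Python (the function names are my own, not any vendor's API) showing the core idea under the assumption that files with identical SHA-256 digests are duplicates.

```python
import hashlib
import os

def file_digest(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a file, read in chunks
    so large files do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root):
    """Walk the tree under `root`, group files by content hash,
    and return only the groups containing more than one file."""
    by_hash = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            by_hash.setdefault(file_digest(path), []).append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

A production de-duplication service would add block-level chunking, an index store and safeguards against hash collisions, but the hash-and-group principle is the same.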
Developments in FAN technology
Various development efforts have begun exploring different FAN technologies, and all the leading vendors in the industry offer solutions for one aspect of FAN or another. Symantec, for example, offers tools that provide visibility across the entire data centre—server, storage and data protection devices, from server to SAN—and deliver the capability to actively manage and control the storage environment.
Technologies like EMC Documentum Content Management give a structure to all unstructured content as well as enforce rules around access control, workflow, tiering, etc. In addition, technologies acquired through EMC’s acquisition of Infoscape allow customers to analyse file-level information based on the criticality of content within a file and accordingly decide how to act upon it. Virtualisation technologies through EMC’s acquisition of Rainfinity allow customers to implement a global namespace around all file-based storage and ease management. Content Addressed Storage (CAS) technologies like EMC Centera help enterprises in deduplicating and drastically reducing backup windows of the existing file-level storage devices.
“Where possible, FAN will look to leverage existing standards—file access in UNIX and Windows uses NFS and CIFS respectively, and FAN leverages these standards,” predicts Penaflor. However, as with all new technology, there will always be areas not covered by existing standards. FAN is focused on all aspects of file data management, and as a concept it is relatively new; a number of the underlying technologies used in FAN, however, can be considered mature. A FAN is a suite of hardware and, optionally, software management technologies used to organise, route, switch and provide consistent access to large amounts of file data, so different solutions in the suite will be at different stages of maturity. Brocade has just released Storage X v6.0.
“There are no clear standards for FAN as yet. True heterogeneous FANs with a unified global namespace and vendor-independent interoperability will take some time to arrive. Software-based solutions currently offer considerable flexibility in letting you put together FANs with storage components from a variety of vendors and across multiple OS platforms,” says Basant Rajan, Chief Technology Officer, Symantec India.
Network Planning
Planning a network can be quite a Herculean task. How does one decide how much bandwidth is enough? What are the criteria when deciding on security and reliability products? Where would a wireless network be optimised to the fullest, and which environs would suit wired LANs? Milind Kamat helps network planners find some answers.
There was a time when networking was easy. All that the network manager had to do was simply connect the desired computers to a LAN and put up a network operating system for file and print sharing. But in today’s scenario, networking means much more. New applications are emerging, such as computer telephony, instant messaging and SMS, not to mention e-mail, ERP, document management and others. All these applications put heavy demands on the network in terms of bandwidth, scalability, reliability, reach, outsourcing and security. On top of that, there are new technologies, new protocols and new standards, which are making life more and more difficult for network managers. This article examines some of the critical areas in greater detail, to create a better understanding of their impact on network planning.
Bandwidth & scalability
Raw bandwidth capability is growing very rapidly, and in fact it is growing faster than computing capacity. While computing power is doubling every 18 months, as per Moore’s Law, telecom power is tripling every 18 months. Whereas we started with 10 Mbps on the LAN and 14.4 Kbps on the modem, today gigabit speeds are available on the LAN and modems are increasingly being replaced by DSL/Cable connections with megabit speeds.
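Those growth rates compound quickly, and a back-of-the-envelope calculation makes the widening gap concrete. The snippet below is purely illustrative (the six-year horizon is an arbitrary example of my own, not from the article): doubling every 18 months gives 16x over six years, while tripling gives 81x.

```python
def capacity_after(years, factor_per_period, period_years=1.5):
    """Capacity multiplier after `years`, if capacity grows by
    `factor_per_period` every `period_years` (18 months)."""
    return factor_per_period ** (years / period_years)

# Over six years (four 18-month periods):
compute_growth = capacity_after(6, 2)  # computing power doubles: 2**4
telecom_growth = capacity_after(6, 3)  # telecom capacity triples: 3**4
```

On these assumptions, telecom capacity outgrows computing capacity roughly five-fold over the period, which is why cabling laid for today's speeds can become a bottleneck so quickly.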
So if you are laying out cables and have chosen the lowest cost cabling, you will soon find that your LAN will become a business bottleneck. Today, copper cabling, which is still cheaper than fibre, must be ready for gigabit speeds, or else you may have to replace those new cables in a couple of years.
Applications like computer telephony are driving bandwidth, and not just because computer telephony is cheaper than normal telephony. With the widespread use of the Internet as a business tool, companies are getting closer to their customers, and hence are becoming more and more geographically dispersed. The real-time nature of business means that employees need to be able to quickly contact customers, suppliers and other employees, so the requirement for immediate voice communications is increasing day by day. The best technology to cater to that is computer telephony, particularly with the emerging standard of Voice-over-IP.
Bandwidth increase is not confined to LAN cables and terrestrial lines. Thanks to the increasing mobility of personnel and the preference for mobile numbers over fixed-line numbers, there is a growing need to provide wireless bandwidth. Emerging 802.11 standards now offer greater bandwidth, and cellular operators have launched GPRS services.
Another application that is likely to create demand for bandwidth in the near future is Multimedia Messaging Service (MMS), which will be used for transferring photographs, voice recordings, etc. With more and more mobile handsets offering cameras as built-in equipment or as attachments, the mobile handset will soon be transformed from a phone into a multifunctional device that is a far bigger user of bandwidth.
To plan future-proof networks, preference should be given to mobile devices and network components that can be made to perform faster through software upgrades, rather than hardware replacements. A parallel consideration is that the vendor should have a history of actually providing software upgrades.
Reliability
First generation LANs used to work during office hours, and typically were shut down at night. But now networks work round the clock, 365 days a year. Users will complain whenever the network is down, whether for maintenance or upgradation or whatever else. In fact, the 9/11 attacks show that networks are expected to be up and running even during times of disasters.
Network downtime is of two types: planned and unplanned. Planned downtime is a result of maintenance, upgradation, re-configuration, etc. Choosing equipment that can be serviced without disconnecting it is a sure way of minimising downtime. Another criterion when selecting equipment is that it should be capable of handling multiple protocols and standards. This is useful when the network design changes and the equipment needs to be reconfigured. For example, a server connection may have to be upgraded from Ethernet to Fast Ethernet; if the switch is auto-sensing, there is no need to replace it, thereby avoiding network downtime.
As for unplanned downtime, it helps if the networking products are purchased from a vendor that is TL 9000 certified. TL 9000 is a quality certification specifically for the telecom industry, awarded by a body known as the QuEST Forum, a coalition of leading networking and telecom companies. TL 9000 is a quality system that provides metrics for measuring a company’s quality standards instead of just documenting its processes. It is based on ISO 9000’s structure, with 84 additional requirements categorised into hardware, software and service. The core objective of TL 9000 is to foster continual improvement in the quality of products and services delivered to telecom customers, and ultimately to deliver customer satisfaction.
Reach & mobility
Networks are becoming increasingly wireless. Wireless technology is improving rapidly and now offers three types of networks—Personal Area Networks (PAN), Local Area Networks (LAN) and Wide Area Networks (WAN). PAN and LAN equipment is typically owned and operated by the organisation itself, whereas the WAN is usually provided by a third party such as a telco, ISP or VSAT operator.
Wireless LANs are often used in buildings or at sites where wiring cannot be laid for various reasons. For example, it is not desirable to physically make holes and lay conduits in heritage constructions like museums. Even in public places like airports and railway stations, wireless LANs are the preferred way of providing connectivity to notebook users.
Unlike a wired LAN, a wireless LAN needs to be designed for a wide variety of equipment. While wired LANs normally connect to company-owned equipment, wireless LANs often connect to personal equipment—notebooks, PDAs, mobile phones. Hence, they need to be able to cater to a wider variety of protocols, speeds, formats, etc. What is often overlooked in wireless networks is the provision of an adequate number of power outlets for recharging mobile devices. This is something that the network planner should take up with the administration department.
Security
With applications becoming increasingly business-critical, networks are carrying more and more business-critical data, and the issue of security is becoming correspondingly important. Earlier, security products were available only from vendors who specialised in manufacturing them; these days, however, many networking products come with security functionality built in. This has the advantage that integrating networking and security becomes easier. Another advantage is that whenever the network equipment is upgraded, the security level usually gets upgraded along with it. This is ideal for small companies that wish to avoid the additional cost of purchasing and maintaining separate security products. A further issue is that it is becoming increasingly difficult to keep a network closed: it is not very difficult for a cyber criminal to attach a modem to one of the PCs and expose the network to the outside world. Hence the realisation that e-security is not just a function of technology, but also of people and processes.
Convergence
While there is a convergence of functionality onto digital networks, the number of protocols is increasing. We now have IP, USB, DSL, Ethernet, etc, criss-crossing networks. Hence, it is important that core networking equipment such as switches, routers and gateways be capable of handling multiple protocols. When you choose equipment with such capabilities, the network becomes more flexible and life becomes easier for users.
Operating costs
The highest component of operating costs is typically WAN connectivity. In the initial stages, users paid for bandwidth on a monthly basis, but these days the charges can vary depending on the quality of service, time of service (day versus night, peak versus non-peak), type of content (voice versus data), etc. Operating costs are also a function of consolidating various WAN services with one service provider versus spreading them across several. Another issue in planning networks is that telecom costs are falling rapidly, and one needs to consider whether to invest in more expensive equipment or buy more bandwidth. Yet another issue is that charges may vary depending on the communication protocol being used: frame relay may have one rate, ATM another, IP yet another, and so on. Reliance Infocomm was able to significantly reduce its cost of fibre by postponing its procurement to the point where fibre prices had dropped by more than half.
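The equipment-versus-bandwidth trade-off mentioned above can be framed as a simple total-cost comparison. The sketch below uses entirely hypothetical figures of my own (equipment prices, bandwidth bills and a 20% annual price decline are illustrative assumptions, not data from the article) to show the shape of the calculation:

```python
def total_cost(equipment_cost, annual_bandwidth_cost, years, price_decline=0.2):
    """Total cost over `years`: a one-time equipment outlay plus annual
    bandwidth charges that fall by `price_decline` (e.g. 20%) each year."""
    cost = equipment_cost
    for year in range(years):
        cost += annual_bandwidth_cost * (1 - price_decline) ** year
    return cost

# Hypothetical comparison over five years: a basic router on the full
# bandwidth bill versus costlier equipment that halves the bill.
basic = total_cost(equipment_cost=5_000, annual_bandwidth_cost=20_000, years=5)
optimised = total_cost(equipment_cost=25_000, annual_bandwidth_cost=10_000, years=5)
```

The point of the exercise is that falling bandwidth prices shrink the payback of expensive bandwidth-saving equipment over time, so the horizon and the assumed rate of decline matter as much as the sticker prices.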
Conclusion
Network planning means optimising a host of complex variables. But before designing the network, it is important to have a good picture of the future of the organisation—how fast will it grow, what kinds of services will be required, what partnerships will be developed. Once these inputs are available, it becomes easier to visualise the network. Based on that, a network can be designed for current needs, with a map that provides a migration path to the future.