Interview your university network specialist. Ask how the various parts of the system communicate with each other throughout the university. (Q) Given the chance to redesign the existing setup, enumerate and discuss your key points for an effective and efficient network environment ideal for the university.
(at least 3000 words)
In this assignment we were tasked to interview the university network specialist, Mr. Ariel Roy Reyes, to whom we submitted our questionnaires. But before going to the point, let us first discuss what a NETWORK and a NETWORK SPECIALIST are.
NETWORK
A computer network, often simply referred to as a network, is a collection of computers and devices connected by communications channels that facilitate communication among users and allow them to share resources. Networks may be classified according to a wide variety of characteristics. The following provides a general overview of network types and categories and also presents the basic components of a network.
NETWORK ADMINISTRATOR/SPECIALISTS
A network administrator is the professional responsible for the maintenance of the computer hardware and software that comprise a computer network. This normally includes the deployment, configuration, maintenance and monitoring of active network equipment. A related role is that of the network specialist, or network analyst, who concentrates on network design and security.
The Network Administrator is usually the highest level of technical/network staff in an organization and will rarely be involved with direct user support. The Network Administrator concentrates on the overall health of the network, server deployment, security, and ensuring that network connectivity throughout a company's LAN/WAN infrastructure is on par with technical considerations at the network level of the organization's hierarchy. Network Administrators are considered Tier 3 support personnel who work only on break/fix issues that could not be resolved at the Tier 1 (helpdesk) or Tier 2 (desktop/network technician) levels.
Depending on the company, the Network Administrator may also design and deploy networks. However, these tasks may be assigned to a Network Engineer should one be available to the company.
The actual role of the Network Administrator will vary from company to company, but will commonly include activities and tasks such as network address assignment, assignment of routing protocols and routing table configuration as well as configuration of authentication and authorization – directory services. It often includes maintenance of network facilities in individual machines, such as drivers and settings of personal computers as well as printers and such. It sometimes also includes maintenance of certain network servers: file servers, VPN gateways, intrusion detection systems, etc.
Network specialists and analysts concentrate on network design and security, particularly troubleshooting and/or debugging network-related problems. Their work can also include maintenance of the network's authorization infrastructure, as well as network backup systems.
The administrator is responsible for the security of the network and for assigning IP addresses to the devices connected to the network. Assigning IP addresses gives the subnet administrator some control over the personnel who connect to the subnet. It also helps ensure that the administrator knows each system that is connected and who is personally responsible for it.
Questions:
1. In system development, how do the various parts of the system communicate with each other throughout the university? In what way?
• Regarding system development, the best people to ask are our University Programmers, Mr. Fortich and Dr. Mercado.
2. What are the components involved in the system(s) in the university? (hardware, software, technology, etc.)
• I am not in the right position to discuss the details of the software components used, as there are other personnel assigned to that job. However, regarding the hardware components and technology used: as the network administrator, I am entrusted to keep our different servers running 24/7. Currently, we have our Web Server hosted here in our University on an HP ProLiant ML350 Server. It is an old but stable server set up here in our Networks Office and has been active since Engr. Val A. Quimno, not yet a dean at the time, was appointed as the Network Administrator. The said server has the following specifications:
• Intel Xeon 3.0 GHz, 3.2 GHz, or 3.4 GHz processors (dual processor capability) with 1MB level 2 cache standard. Processors include support for Hyper-Threading and Extended Memory 64 Technology (EM64T)
• Intel® E7520 chipset
• 800-MHz Front Side Bus
• Integrated Dual Channel Ultra320 SCSI Adapter
• Smart Array 641 Controller (standard in Array Models only)
• NC7761 PCI Gigabit NIC (embedded)
• Up to 1 GB of PC2700 DDR SDRAM with Advanced ECC capabilities (Expandable to 8 GB)
• Six expansion slots: one 64-bit/133-MHz PCI-X, two 64-bit/100-MHz PCI-X, one 64-bit/66-MHz PCI-X, one x4 PCI-Express, and one x8 PCI-Express
• New HP Power Regulator for ProLiant delivering server level, policy based power management with industry leading energy efficiency and savings on system power and cooling costs
• Three USB ports: 1 front, 1 internal, 1 rear
• Support for Ultra320 SCSI hard drives (six hot plug or four non-hot plug drives supported standard, model dependent)
• Internal storage capacity of up to 1.8 TB; 2.4 TB with optional 2-bay hot plug SCSI drive
• 725W Hot-Plug Power Supply (standard, most models); optional 725W Hot-Pluggable Redundant Power Supply (1+1) available. Non-hot-plug SCSI models include a 460W non-hot-plug power supply.
• Tool-free chassis entry and component access
• Support for ROM based setup utility (RBSU) and redundant ROM
• Systems Insight Manager, SmartStart, and Automatic Server Recovery 2 (ASR-2) included
• Protected by HP Services and a worldwide network of resellers and service providers. Three-year Next Business Day, on-site limited global warranty. Certain restrictions and exclusions apply. Pre-Failure Notification on processors, memory, and SCSI hard drives.
Aside from that, our mail server, running on a Compaq ProLiant ML330 Server, our oldest server, is also hosted here in our Networks Office, together with other servers such as the Proxy and Enrollment Servers. Both the proxy and enrollment servers run on personal computers, but with higher specifications that allow them to act as servers.
3. How do these communicate with one another? (topology, network connectivity, protocols, etc.) – may include data flow/ UML diagrams to better explain.
All servers are connected to a shared medium and grouped as one subnetwork. In general, our network follows an extended star topology connected to a dual-WAN router that serves as the load balancer between our two Internet Service Providers. All other workstations are grouped into different subnetworks, each in a star topology, branching out from our servers' subnetwork as in an extended star. At present, we make use of Class C IP addresses for private IP address assignments. Some workstations' IP assignments are configured statically (for example, laboratories) while others are dynamic (for example, offices). All workstations are connected via our proxy servers, which do some basic filtering/firewalling to control users' access to the Internet, in addition to the router's filtering/firewall management. So, whenever any workstation has to connect to the Internet, it has to pass through both a software-based and a hardware-based firewall.
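The Class C private addressing scheme described above can be illustrated with a short sketch using Python's standard `ipaddress` module. The group names and the choice of one /24 per subnetwork are hypothetical, not the University's actual allocation plan:

```python
import ipaddress

# Hypothetical groups, each given its own /24 carved from the
# Class C private range, mirroring an extended-star layout.
groups = ["servers", "laboratories", "offices", "library"]
supernet = ipaddress.ip_network("192.168.0.0/16")
subnets = {name: net for name, net in zip(groups, supernet.subnets(new_prefix=24))}

for name, net in subnets.items():
    # The first usable host address could be reserved for the gateway.
    gateway = next(net.hosts())
    print(f"{name:12s} {net}  gateway {gateway}  "
          f"({net.num_addresses - 2} usable hosts)")
```

Each /24 gives 254 usable host addresses, which is typically enough for a laboratory or office subnetwork.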
4. What are the processes involved in the communication (each system to other systems)?
As mentioned above in item 3, all workstations are connected via a proxy server. Whenever a workstation is turned on, it requests an IP address from the proxy server (in the case of dynamically configured addresses) and connects to the network once the address is acquired. As the connection is established, each system can then communicate and share resources within the same subnetwork and with the servers, following the concepts discussed in your Computer Networks class.
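The "all traffic passes through the proxy" arrangement above can be sketched with Python's standard `urllib.request`. The proxy address and port here are hypothetical placeholders, not the University's actual proxy:

```python
import urllib.request

# Hypothetical proxy address; in practice this would be the proxy
# server that applies filtering rules before forwarding requests.
PROXY = "192.168.0.1:3128"

# An opener that routes both HTTP and HTTPS traffic through the proxy.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": f"http://{PROXY}",
                                 "https": f"http://{PROXY}"})
)

def fetch_via_proxy(url: str) -> bytes:
    # Every request made with this opener passes through the proxy,
    # mirroring the software-based filtering step described above.
    with opener.open(url, timeout=10) as resp:
        return resp.read()
```

Workstations are normally configured at the operating-system level rather than per application, but the effect is the same: no request reaches the Internet without first traversing the proxy.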
5. How do you go along with the maintenance of the system?
Basically, our servers are expected to be in good condition since they are required to be up 24/7. Daily, during my vacant period, I monitor the servers, which includes checking logs and checking hardware performance such as CPU health. If problems are observed, remedies are applied then and there. Once a week, a regular overall checkup is performed as preventive maintenance, to avoid longer downtime where possible.
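A daily health check like the one described could be automated with a short script. This is only an illustrative sketch using the Python standard library; the log path and the warning thresholds are assumptions, not the University's actual configuration:

```python
import os
import shutil
import time

# Illustrative paths and thresholds, not an actual configuration.
LOG_FILE = "/var/log/syslog"
DISK_WARN_PCT = 90
LOAD_WARN = 4.0

def check_server() -> list:
    warnings = []
    # CPU health: 1-minute load average (Unix only).
    load1, _, _ = os.getloadavg()
    if load1 > LOAD_WARN:
        warnings.append(f"high load average: {load1:.2f}")
    # Disk usage on the root filesystem.
    usage = shutil.disk_usage("/")
    pct = 100 * usage.used / usage.total
    if pct > DISK_WARN_PCT:
        warnings.append(f"disk {pct:.0f}% full")
    # Log freshness: a log that stopped growing can signal a stalled service.
    if os.path.exists(LOG_FILE):
        age = time.time() - os.path.getmtime(LOG_FILE)
        if age > 3600:
            warnings.append(f"{LOG_FILE} not updated for {age/3600:.1f} h")
    return warnings

if __name__ == "__main__":
    for w in check_server() or ["all checks passed"]:
        print(w)
```

Running such a script from cron, instead of only during a vacant period, would catch problems between manual checks.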
6. Does the system follow a specific standard? Please explain.
When I was appointed as the Network Administrator, everything was already in place except for some minor changes. Basically, different networking standards were already observed, such as the TIA/EIA 568A/B cabling standards and the various IEEE standards discussed in your Computer Networks subject.
7. How is the security of the system? Are there any vulnerabilities? Risks? Corresponding mitigation techniques? Access control?
As I have mentioned, we have implemented both software- and hardware-based filtering/firewalls. Risks, vulnerabilities, and the corresponding mitigation techniques were considered in order to increase security in our network. Aside from filtering/firewalls, constant monitoring of network activity also increases the security of the system.
8. Is there any interference? During what times does it occur most? Explain its effects, especially with regard to the business of the university.
Major interference is normally encountered as an effect of unforeseen events beyond our control, such as blackouts and the like. Such interference of course affects the University's day-to-day business, for it obviously paralyzes all activities that rely on electricity, and it may further damage our network devices, which can later be the cause of longer downtime. Problems encountered by our providers, such as their connection to the National/International Gateway, also affect the University's business, for instance communication with the University's business partners within and outside the country.
Network hardware
This section discusses network hardware issues to consider when introducing Caching Proxy functionality into your network.
Memory considerations
A large amount of memory must be dedicated to the proxy server. Caching Proxy can consume 2 GB of virtual address space when a large memory-only cache is configured. Memory is also needed for the kernel, shared libraries, and network buffers. Therefore, it is possible to have a proxy server that consumes 3 or 4 GB of physical memory. Note that a memory-only cache is significantly faster than a raw disk cache, and this configuration change alone can be considered a performance improvement.
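The memory-only cache mentioned above is, at its core, a bounded in-memory store with least-recently-used eviction. The following is a minimal illustrative sketch of that idea, not Caching Proxy's actual implementation:

```python
from collections import OrderedDict

class MemoryCache:
    """Minimal memory-only LRU cache, bounded by total bytes cached
    rather than entry count (illustrative sketch only)."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self._store = OrderedDict()  # url -> cached body, oldest first

    def get(self, url: str):
        if url not in self._store:
            return None  # cache miss: caller fetches from the origin server
        self._store.move_to_end(url)  # mark as most recently used
        return self._store[url]

    def put(self, url: str, body: bytes) -> None:
        if url in self._store:
            self.used -= len(self._store.pop(url))
        self._store[url] = body
        self.used += len(body)
        # Evict least recently used entries until within capacity.
        while self.used > self.capacity:
            _, old = self._store.popitem(last=False)
            self.used -= len(old)
```

Because every lookup and eviction touches only RAM, this structure avoids the disk seeks that make a raw disk cache comparatively slow.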
Hard disk considerations
It is important to have a large amount of disk space on the machine on which Caching Proxy is installed. This is especially true when disk caches are used. Reading and writing to a hard disk is an intensive process for a computer. Although Caching Proxy's I/O procedures are efficient, the mechanical limitations of hard drives can limit performance when the Caching Proxy is configured to use a disk cache. The disk I/O bottleneck can be alleviated with practices such as using multiple hard disks for raw cache devices and log files and by using disk drives with fast seek times, rotational speeds, and transfer rates.
Network considerations
Network requirements such as the speed, type, and number of NICs, and the speed of the network connection to the proxy server affect the performance of Caching Proxy. It is generally in the best interest of performance to use two NICs on a proxy server machine: one for incoming traffic and one for outgoing traffic. It is likely that a single NIC can reach its maximum limit from HTTP request and response traffic alone. Furthermore, NICs should be rated for at least 100 Mbps, and they should always be configured for full-duplex operation, because automatic negotiation between routing and switching equipment can cause errors and hinder throughput. Finally, the speed of the network connection is very important. For example, you cannot expect to service a high request load and achieve optimal throughput if the connection to the Caching Proxy machine is a saturated T1 carrier.
CPU considerations
The central processing unit (CPU) of a Caching Proxy machine can possibly become a limiting factor. CPU power affects the amount of time it takes to process requests and the number of CPUs in the network affects scalability. It is important to match the CPU requirements of the proxy server to the environment, especially to model the peak request load that the proxy server will service.
Network architecture
For overall performance, it is generally beneficial to scale the architecture and not just add individual pieces of hardware. No matter how much hardware you add to a single machine, that hardware still has a maximum level of performance.
This section discusses network architecture issues to take into consideration when introducing Caching Proxy functionality into your network.
Web site popularity and proxy server load considerations
If your enterprise's Web site is popular, there can be greater demand for its content than a single proxy server can satisfy effectively, resulting in slow response times. To optimize network performance, consider including clustered, load-balanced Caching Proxy machines or using a shared cache architecture with Remote Cache Access (RCA) in your overall network architecture.
• Load-balanced clusters
One way to scale the architecture is to cluster proxy servers and use the Load Balancer component to balance the load among them. Clustering proxy servers is a beneficial design consideration not only for performance and scalability reasons but for redundancy and reliability reasons as well. A single proxy server represents a single point of failure; if it fails or becomes inaccessible because of a network failure, users cannot access your Web site.
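The redundancy argument above can be made concrete with a small sketch: a round-robin balancer that skips members marked as down. This is an illustration of the idea, not the actual Load Balancer component:

```python
import itertools

class LoadBalancer:
    """Round-robin load balancing with simple failover
    (illustrative sketch, not the real Load Balancer component)."""

    def __init__(self, proxies):
        self.proxies = list(proxies)
        self.healthy = set(self.proxies)
        self._cycle = itertools.cycle(self.proxies)

    def mark_down(self, proxy):
        self.healthy.discard(proxy)

    def mark_up(self, proxy):
        self.healthy.add(proxy)

    def next_proxy(self):
        # Skip unhealthy members: with more than one proxy, a single
        # failure no longer makes the site unreachable.
        for _ in range(len(self.proxies)):
            p = next(self._cycle)
            if p in self.healthy:
                return p
        raise RuntimeError("no healthy proxy available")
```

A health-checking process would call `mark_down` when a member stops responding, so clients are transparently routed to the remaining proxies.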
• Shared cache architecture
Also consider a shared cache architecture with RCA. A shared cache architecture spreads the total virtual cache among multiple Caching Proxy servers that usually use an intercache protocol like the Internet Cache Protocol (ICP) or the Cache Array Routing Protocol (CARP). RCA is designed to maximize clustered cache hit ratios by providing a large virtual cache.
Performance benefits result from using an RCA array of proxy servers as opposed to a single stand-alone Caching Proxy or even a cluster of stand-alone Caching Proxy machines. For the most part, the performance benefits come from the increase in the total virtual cache size, which maximizes the cache hit ratio and minimizes cache inconsistency and latency. With RCA, only one copy of a particular document resides in the cache. With a cluster of proxy servers, the total cache size is increased, but multiple proxy servers are likely to fetch and cache the same information, so the total cache hit ratio is not increased.
RCA is commonly used in large enterprise content-hosting scenarios. However, RCA's usefulness is not limited to extremely large enterprise deployments. Consider using RCA if your network's load requires a cluster of cache servers and if the majority of requests are cache hits. Depending on your network setup, RCA does not always improve enterprise performance due to an increase in the number of TCP connections that a client uses when RCA is configured. This is because an RCA member is not only responsible for servicing URLs for which it has the highest score but it must also forward requests to other members or clusters if it gets a request for a URL for which it does not have the highest score. This means that any given member of an RCA array might have more open TCP connections than it would if it operated as a stand-alone server.
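The "highest score" routing described above can be sketched with a CARP-style hash: each member computes a deterministic score for a (member, URL) pair, and the member with the highest score owns that URL. This is a simplified illustration, not the exact CARP hash function:

```python
import hashlib

MEMBERS = ["proxy1", "proxy2", "proxy3"]  # hypothetical array members

def score(member: str, url: str) -> int:
    # Deterministic combined hash of member name and URL. CARP itself
    # specifies a particular 32-bit hash; any stable hash shows the idea.
    digest = hashlib.sha256(f"{member}|{url}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def owner(url: str) -> str:
    # The member with the highest score caches and serves this URL,
    # so only one copy of each document exists across the array.
    return max(MEMBERS, key=lambda m: score(m, url))
```

Because ownership is a pure function of member name and URL, every member agrees on who owns a given URL without any coordination, and adding a member only reassigns roughly 1/n of the URLs.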
Traffic type considerations
Major contributions to improved performance stem from Caching Proxy's caching capabilities. However, the cache of the proxy server can become a bottleneck if it is not properly configured. To determine the best cache configuration, a significant effort must be made to analyze traffic characteristics. The type, size, amount, and attributes of the content affect the performance of the proxy server in terms of the time it takes to retrieve documents from origin servers and the load on the server. When you understand the type of traffic that Caching Proxy is going to proxy or serve from its cache, then you can factor in those characteristics when configuring the proxy server. For example, knowing that 80% of the objects being cached are images (*.gif or *.jpg) and are approximately 200 KB in size can certainly help you tune caching parameters and determine the size of the cache. Additionally, understanding that most of the content is personalized dynamic pages that are not candidates for caching is also pertinent to tuning Caching Proxy.
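The 80%-images example above translates directly into cache-sizing arithmetic. In this sketch, the object count and the average size of the non-image objects are assumed figures added for illustration; only the 80% image fraction and ~200 KB image size come from the example:

```python
# Hypothetical traffic profile based on the example above.
total_objects = 100_000          # assumed number of distinct cacheable objects
image_fraction = 0.80            # 80% are *.gif / *.jpg images
image_size = 200 * 1024          # ~200 KB per image
other_size = 40 * 1024           # assumed average size of remaining objects

images = int(total_objects * image_fraction)
others = total_objects - images
cache_bytes = images * image_size + others * other_size

print(f"estimated cache size: {cache_bytes / 2**30:.1f} GiB")
```

Even a rough profile like this tells you whether the working set fits in a memory cache or calls for a disk cache, which is exactly the decision the next paragraphs discuss.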
Analyzing traffic characteristics enables you to determine whether using a memory or disk cache can optimize your cache's performance. Also, familiarity with your network's traffic characteristics enables you to determine whether improved performance can result from using the Caching Proxy's dynamic caching feature.
• Memory versus disk cache
Disk caches are appropriate for sites with large amounts of information to be cached. For example, if the site content is large (greater than 5 GB) and there is an 80 to 90% cache hit rate, then a disk cache is recommended. However, it is known that using a memory (RAM) cache is faster, and there are many scenarios when using a memory-only cache is feasible for large sites. For example, if Caching Proxy's cache hit rate is not as important or if a shared cache configuration is being used, then a memory cache is practical.
• Caching dynamically generated content
Caching Proxy can cache and invalidate dynamic content (JSP and servlet results) generated by the WebSphere® Application Server dynamic cache, providing a virtual extension of the Application Server cache into network-based caches. Enabling the caching of dynamically generated content is beneficial to network performance in an environment where there are many requests for dynamically produced public Web pages that expire based on application logic or an event such as a message from a database. The page's lifetime is finite, but an expiration trigger cannot be set at the time of its creation; therefore, hosts without a dynamic caching and invalidation feature must designate such a page as having a time-to-live value of zero.
If such a dynamically generated page will be requested more than once during its lifetime by one or more users, then dynamic caching provides a valuable offload and reduces the workload on your network's content hosts. Using dynamic caching also improves network performance by providing faster response to users by eliminating network delays and reducing bandwidth usage due to fewer Internet traversals.
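The event-driven invalidation idea can be sketched as a cache that tracks which pages depend on which data, evicting them when that data changes instead of relying on a time-to-live. This is an illustrative sketch of the concept, not the WebSphere dynamic cache API:

```python
class DynamicCache:
    """Sketch of event-driven invalidation: pages with no known expiry
    stay cached until an application event invalidates them."""

    def __init__(self):
        self._pages = {}   # page id -> rendered content
        self._deps = {}    # dependency id -> set of dependent page ids

    def put(self, page_id, content, depends_on=()):
        self._pages[page_id] = content
        for dep in depends_on:
            self._deps.setdefault(dep, set()).add(page_id)

    def get(self, page_id):
        return self._pages.get(page_id)  # None means regenerate the page

    def invalidate(self, dep):
        # Called when an event (e.g. a database message) signals that
        # the underlying data `dep` has changed.
        for page_id in self._deps.pop(dep, set()):
            self._pages.pop(page_id, None)
```

With this scheme, a page requested many times between two data changes is rendered once and served from cache for the rest of its lifetime, which is the offload described above.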
REFERENCES:
http://en.wikipedia.org/wiki/Computer_network
http://en.wikipedia.org/wiki/Network_administrator
http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.edge.doc/concepts/concepts11.htm
(at least 3000 words)
In this assignment we are tasked to interview the university network specialist Mr. Ariel Roy Reyes. We have submitted questionnaires. But before going to the point, let us first discuss what is NETWORK and a NETWORK SPECIALIST.
NETWORK
A computer network, often simply referred to as a network, is a collection of computers and devices connected by communications channels that facilitates communications among users and allows users to share resources with other users. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of types and categories and also presents the basic components of a network.
NETWORK ADMINISTRATOR/SPECIALISTS
Network administrator is a modern professional responsible for the maintenance of computer hardware and software that comprises a computer network. This normally includes the deployment, configuration, maintenance and monitoring of active network equipment. A related role is that of the network specialist, or network analyst, who concentrates on network design and security.
The Network Administrator is usually the level of technical/network staff in an organization and will rarely be involved with direct user support. The Network Administrator will concentrate on the overall health of the network, server deployment, security, and ensuring that the network connectivity throughout a company's LAN/WAN infrastructure is on par with technical considerations at the network level of an organization's hierarchy. Network Administrators are considered Tier 3 support personnel that only work on break/fix issues that could not be resolved at the Tier1 (helpdesk) or Tier 2 (desktop/network technician) levels.
Depending on the company, the Network Administrator may also design and deploy networks. However, these tasks may be assigned to a Network Engineer should one be available to the company.
The actual role of the Network Administrator will vary from company to company, but will commonly include activities and tasks such as network address assignment, assignment of routing protocols and routing table configuration as well as configuration of authentication and authorization – directory services. It often includes maintenance of network facilities in individual machines, such as drivers and settings of personal computers as well as printers and such. It sometimes also includes maintenance of certain network servers: file servers, VPN gateways, intrusion detection systems, etc.
Network specialists and analysts concentrate on the network design and security, particularly troubleshooting and/or debugging network-related problems. Their work can also include the maintenance of the network's authorization infrastructure, as well as network backup systems.
The administrator is responsible for the security of the network and for assigning IP addresses to the devices connected to the networks. Assigning IP addresses gives the subnet administrator some control over the personnel who connects to the subnet. It also helps to ensure that the administrator knows each system that is connected and who personally is responsible for the system.
Questions:
1. In system development, how various parts of the system communicate with each other throughout the university? In what way?
• Regarding System Development, the best person to ask is our University Programmers, Mr. Fortich and Dr. Mercado.
2. What are the components involved in the system(s) in the university? (hardware, software, technology, etc.)
• I am not in the right position to discuss the details of the software components used as there are other assigned personnel for such job. However, talking about hardware component and technology used, basically I, assigned as the network administrator, am entrusted to maintain our different servers to run 24/7. Currently, we have our Web Server hosted here in our University in our HP ProLiant ML350 Server. Its an old but stable server set-up here in our Networks Office and has been active since Engr. Val A. Quimno , not yet a dean, was appointed as the Network Administrator. The said server has the following specification:
• Intel Xeon 3.0 GHz, 3.2 GHz, or 3.4 GHz processors (dual processor capability) with 1MB level 2 cache standard. Processors include support for Hyper-Threading and Extended Memory 64 Technology (EM64T)
• Intel® E7520 chipset
• 800-MHz Front Side Bus
• Integrated Dual Channel Ultra320 SCSI Adapter
• Smart Array 641 Controller (standard in Array Models only)
• NC7761 PCI Gigabit NIC (embedded)
• Up to 1 GB of PC2700 DDR SDRAM with Advanced ECC capabilities (Expandable to 8 GB)
• Six expansion slots: one 64-bit/133-MHz PCI-X, two 64-bit/100-MHz PCI-X, one 64-bit/66-MHz PCI-X, one x4 PCI-Express, and one x8 PCI-Express
• New HP Power Regulator for ProLiant delivering server level, policy based power management with industry leading energy efficiency and savings on system power and cooling costs
• Three USB ports: 1 front, 1 internal, 1 rear
• Support for Ultra320 SCSI hard drives (six hot plug or four non-hot plug drives supported standard, model dependent)
• Internalstorage capacity of up to 1.8TB; 2.4TB with optional 2-bay hot plug SCSI drive
• 725W Hot-Plug Power Supply (standard, most models); optional 725W Hot-Pluggable Redundant Power Supply (1+1) available. Non hot plug SCSI models include a 460W non-hot plug power supply.
• Tool-free chassis entry and component access
• Support for ROM based setup utility (RBSU) and redundant ROM
• Systems Insight Manager, SmartStart, and Automatic Server Recovery 2 (ASR-2) included
• Protected by HP Services and a worldwide network of resellers and service providers. Three-year Next Business Day, on-site limited global warranty. Certain restrictions and exclusions apply. Pre-Failure Notification on processors, memory, and SCSI hard drives.
Aside from it, our mail server running under Compaq Proliant ML330 Server, our oldest server, is also hosted here in our Networks Office. Together with other Servers, such as Proxy and Enrollment Servers, both proxy and our enrollment servers are running in a microcomputer/personal computers but with higher specifications to act as servers.
3. How do these communicate with one another? (topology, network connectivity, protocols, etc.) – may include data flow/ UML diagrams to better explain.
All Servers are connected in a shared medium grouped as one subnetwork. In general, our network follows the extended star topology which is connected to a DUAL WAN Router that serves as the load balancer between our two Internet Service Providers. All other workstations are grouped into different subnetworks as in star topology branching out from our servers subnetwork as in extended star topology. At present, we are making use of class C IP Address for private IP address assignments. Other workstations IP assignments are configured statically (example: laboratories) while others are Dynamic (example: offices). All workstations are connected via our proxy servers that do some basic filtering/firewall to control users access to the internet aside from router filtering/firewall management. So, whenever any workstation has to connect to the internet, it has to pass through software and hardware based firewall.
4. What are the processes involved in the communication (each system to other systems)?
As mentioned above, in item 3, all workstations are connected via a proxy server. It means that whenever a workstation is turned on, it requests for an IP address from the proxy server (for dynamically configured IP address) and connect to the network after IP address is acquired. As connection is established, each system can now communicate and share resources within the same subnetwork and to server following the concepts discuss in your Computer Network Class.
5. How do you go along with the maintenance of the system?
Basically, our servers are expected to be in good condition since it is required to be up 24/7. Daily, during my vacant period, monitoring on the servers are observed that includes checking logs, checking hardware performance such as CPU health, etc. If problems are observed, remedies are then and then applied. Once in a week, regular overall checkup is observed as preventive maintenance to ensure not to experience longer downtime if possible.
6. Does the system follow a specific standard? Explain Please.
As I was appointed as the Network Administrator, everything was already in place except for some minor changes. Basically, different networking standards was already observed such as cabling standards, TIA/EIA 568A-B, different IEEE standards as discussed in your Computer Networks Subject, etc.
7. How is the security of the system? Are there any vulnerabilities? Risks? Corresponding mitigation techniques? Access control?
As I have mentioned, we have implemented both software and hardware based filtering/firewall. Basically, Risks or vulnerabilities and different mitigation techniques were considered to increase security in our network. Aside from filtering/firewall, constant monitoring on networks activity also increases the security of the system.
8. Are there any interference? During what (most) times do these occur? Explain their effects especially with regards to the business of the university?
Major Interferences are normally encountered as an effect of unforeseen and beyond our control events such as black outs, and the like. The said interference would of course affect University’s day-to-day businesses for obviously this will paralyze all our activities that rely on electricity and further this might cause damage on our network devices, etc. that may later be the reason for longer downtime. Problems encountered by our providers such as connection to the National/International Gateway also affect University’s business such as correlating to University’s Business Partners outside and within the country.
Network hardware
This section discusses network hardware issues to consider when introducing Caching Proxy functionality into your network.
Memory considerations
A large amount of memory must be dedicated to the proxy server. Caching Proxy can consume 2 GB of virtual address space when a large memory-only cache is configured. Memory is also needed for the kernel, shared libraries, and network buffers. Therefore, it is possible to have a proxy server that consumes 3 or 4 GB of physical memory. Note that a memory-only cache is significantly faster than a raw disk cache, and this configuration change alone can be considered a performance improvement.
Hard disk considerations
It is important to have a large amount of disk space on the machine on which Caching Proxy is installed. This is especially true when disk caches are used. Reading and writing to a hard disk is an intensive process for a computer. Although Caching Proxy's I/O procedures are efficient, the mechanical limitations of hard drives can limit performance when the Caching Proxy is configured to use a disk cache. The disk I/O bottleneck can be alleviated with practices such as using multiple hard disks for raw cache devices and log files and by using disk drives with fast seek times, rotational speeds, and transfer rates.
Network considerations
Network requirements such as the speed, type, and number of NICs, and the speed of the network connection to the proxy server affect the performance of Caching Proxy. It is generally in the best interest of performance to use two NICs on a proxy server machine: one for incoming traffic and one for outgoing traffic. It is likely that a single NIC can reach its maximum limit by HTTP request and response traffic alone. Furthermore, NICs should be at least 100 MB, and they should always be configured for full-duplex operation. This is because automatic negotiation between routing and switching equipment can possibly cause errors and hinder throughput. Finally, the speed of the network connection is very important. For example, you cannot expect to service a high request load and achieve optimal throughput if the connection to the Caching Proxy machine is a saturated T1 carrier.
CPU considerations
The central processing unit (CPU) of a Caching Proxy machine can possibly become a limiting factor. CPU power affects the amount of time it takes to process requests and the number of CPUs in the network affects scalability. It is important to match the CPU requirements of the proxy server to the environment, especially to model the peak request load that the proxy server will service.
Network architecture
For overall performance, it is generally beneficial to scale the architecture and not just add individual pieces of hardware. No matter how much hardware you add to a single machine, that hardware still has a maximum level of performance.
This section discusses network architecture issues to take into consideration when introducing Caching Proxy functionality into your network.
Web site popularity and proxy server load considerations
If your enterprise's Web site is popular, there can be greater demand for its content than a single proxy server can satisfy effectively, resulting in slow response times. To optimize network performance, consider including clustered, load-balanced Caching Proxy machines or using a shared cache architecture with Remote Cache Access (RCA) in your overall network architecture.
• Load-balanced clusters
One way to scale the architecture is to cluster proxy servers and use the Load Balancer component to balance the load among them. Clustering proxy servers is a beneficial design consideration not only for performance and scalability reasons but for redundancy and reliability reasons as well. A single proxy server represents a single point of failure; if it fails or becomes inaccessible because of a network failure, users cannot access your Web site.
• Shared cache architecture
Also consider a shared cache architecture with RCA. A shared cache architecture spreads the total virtual cache among multiple Caching Proxy servers that usually use an intercache protocol like the Internet Cache Protocol (ICP) or the Cache Array Routing Protocol (CARP). RCA is designed to maximize clustered cache hit ratios by providing a large virtual cache.
Performance benefits result from using an RCA array of proxy servers as opposed to a single stand-alone Caching Proxy or even a cluster of stand-alone Caching Proxy machines. For the most part, the performance benefits are caused by the increase in the total virtual cache size, which maximizes the cache hit ratio and minimizes cache inconsistency and latency. With RCA, only one copy of a particular document resides in the cache. With a cluster of proxy servers, the total cache size is increased, but multiple proxy servers are likely to fetch and cache the same information. The total cache hit ratio is therefore not increased.
RCA is commonly used in large enterprise content-hosting scenarios. However, RCA's usefulness is not limited to extremely large enterprise deployments. Consider using RCA if your network's load requires a cluster of cache servers and if the majority of requests are cache hits. Depending on your network setup, RCA does not always improve enterprise performance due to an increase in the number of TCP connections that a client uses when RCA is configured. This is because an RCA member is not only responsible for servicing URLs for which it has the highest score but it must also forward requests to other members or clusters if it gets a request for a URL for which it does not have the highest score. This means that any given member of an RCA array might have more open TCP connections than it would if it operated as a stand-alone server.
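The "highest score" routing described above is the core idea of CARP-style hashing: every member of the array deterministically computes a per-URL score for each member, and the member with the highest score owns that URL. The sketch below illustrates the idea only; the member names are hypothetical and the hash construction is simplified, not the exact CARP or RCA algorithm.

```python
import hashlib

# Minimal sketch of CARP-style request routing: each array member gets
# a deterministic score per URL, and the member with the highest score
# "owns" (caches) that URL. Member names are hypothetical; the hash
# construction is a simplification, not the exact CARP formula.
def carp_owner(url, members):
    def score(member):
        digest = hashlib.md5((member + url).encode()).hexdigest()
        return int(digest, 16)
    return max(members, key=score)

members = ["proxy-a", "proxy-b", "proxy-c"]
owner = carp_owner("http://example.com/logo.gif", members)
print(owner)
# Every member computes the same owner, so exactly one copy of the
# document is cached across the array; non-owners forward the request,
# which is the source of the extra TCP connections noted above.
```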
Traffic type considerations
Major contributions to improved performance stem from Caching Proxy's caching capabilities. However, the cache of the proxy server can become a bottleneck if it is not properly configured. To determine the best cache configuration, a significant effort must be made to analyze traffic characteristics. The type, size, amount, and attributes of the content affect the performance of the proxy server in terms of the time it takes to retrieve documents from origin servers and the load on the server. When you understand the type of traffic that Caching Proxy is going to proxy or serve from its cache, then you can factor in those characteristics when configuring the proxy server. For example, knowing that 80% of the objects being cached are images (*.gif or *.jpg) and are approximately 200 KB in size can certainly help you tune caching parameters and determine the size of the cache. Additionally, understanding that most of the content is personalized dynamic pages that are not candidates for caching is also pertinent to tuning Caching Proxy.
Analyzing traffic characteristics enables you to determine whether using a memory or disk cache can optimize your cache's performance. Also, familiarity with your network's traffic characteristics enables you to determine whether improved performance can result from using the Caching Proxy's dynamic caching feature.
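The kind of analysis described above (e.g. discovering that 80% of cached objects are ~200 KB images) can be derived from access logs. A minimal sketch, using a made-up log format and invented figures purely for illustration:

```python
# Sketch: derive cache-tuning inputs from a simplified access log.
# The log entries (path, size in bytes) are invented for illustration.
log = [
    ("/img/banner.gif", 180_000), ("/img/icon.jpg", 210_000),
    ("/index.html", 12_000), ("/img/photo.jpg", 205_000),
]

def traffic_profile(entries):
    """Share of image objects and their average size in KB."""
    images = [size for path, size in entries
              if path.endswith((".gif", ".jpg"))]
    return {
        "image_share": len(images) / len(entries),
        "avg_image_kb": sum(images) / len(images) / 1024,
    }

profile = traffic_profile(log)
print(profile)  # e.g. 75% images averaging roughly 194 KB
```

Numbers like these feed directly into choosing the cache size and deciding between a memory and a disk cache.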
• Memory versus disk cache
Disk caches are appropriate for sites with large amounts of information to be cached. For example, if the site content is large (greater than 5 GB) and there is an 80 to 90% cache hit rate, then a disk cache is recommended. However, it is known that using a memory (RAM) cache is faster, and there are many scenarios when using a memory-only cache is feasible for large sites. For example, if Caching Proxy's cache hit rate is not as important or if a shared cache configuration is being used, then a memory cache is practical.
• Caching dynamically generated content
Caching Proxy can cache and invalidate dynamic content (JSP and servlet results) generated by the WebSphere® Application Server dynamic cache, providing a virtual extension of the Application Server cache into network-based caches. Enabling the caching of dynamically generated content is beneficial to network performance in an environment where there are many requests for dynamically produced public Web pages that expire based on application logic or an event such as a message from a database. The page's lifetime is finite, but an expiration trigger cannot be set at the time of its creation; therefore, hosts without a dynamic caching and invalidation feature must designate such a page as having a time-to-live value of zero.
If such a dynamically generated page will be requested more than once during its lifetime by one or more users, then dynamic caching provides a valuable offload and reduces the workload on your network's content hosts. Using dynamic caching also improves network performance by responding to users faster, eliminating network delays, and reducing bandwidth usage through fewer Internet traversals.
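The contrast between a time-to-live of zero and event-driven invalidation can be sketched as follows. This is only a conceptual illustration; the class and method names are hypothetical and do not reflect the WebSphere dynamic cache API.

```python
# Sketch of event-driven invalidation versus time-to-live zero: a page
# with no predictable lifetime can still be cached if the application
# invalidates it when the underlying data changes. Names here are
# hypothetical, not the WebSphere dynamic cache interface.
class DynamicCache:
    def __init__(self):
        self._store = {}

    def get(self, key, generate):
        if key not in self._store:        # miss: run the application logic
            self._store[key] = generate()
        return self._store[key]           # hit: offload the content host

    def invalidate(self, key):            # fired by an event, e.g. a DB message
        self._store.pop(key, None)

cache = DynamicCache()
page = cache.get("/account/summary", lambda: "rendered page v1")
page = cache.get("/account/summary", lambda: "rendered page v2")  # still v1: cached
cache.invalidate("/account/summary")      # database event arrives
page = cache.get("/account/summary", lambda: "rendered page v2")
print(page)  # -> rendered page v2
```

Between invalidation events, every repeat request is served from the cache instead of regenerating the page, which is the offload the paragraph above describes.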
REFERENCES:
http://en.wikipedia.org/wiki/Computer_network
http://en.wikipedia.org/wiki/Network_administrator
http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.edge.doc/concepts/concepts11.htm