Data Center Design Services
Data centers comprise high-speed, high-demand networking and communication systems capable of handling the traffic for SANs (Storage Area Networks), NAS (Network Attached Storage), file/application/web server farms, and other components located in the controlled environment. Control of the environment covers humidity, flood, electrical, temperature, and fire controls, and of course, physical access. Communication in and out of the data center is provided by WAN, CAN/MAN, and LAN links in a variety of configurations, depending upon the needs of the particular center.
A properly designed data center will provide availability, accessibility, scalability, and reliability 24 hours a day, 7 days a week, 365 days per year, minus any scheduled downtime for maintenance. Telephone companies work toward 99.999% uptime, and the data center is no different. There are two basic types of data centers: corporate and institutional data centers (CDCs) and Internet data centers (IDCs). CDCs are maintained and operated from within the corporation, while IDCs are operated by Internet Service Providers (ISPs). The ISPs provide third-party web sites, colocation facilities, and other data services for companies, such as outsourced email.
Critical data centers are monitored by a NOC (Network Operations Center), which may be in-house or outsourced to a third party. The NOC is the first place outages are detected and the starting point for corrective action. NOCs are generally staffed during the data center’s hours of operation; in 24 x 7 data centers, the NOC is an around-the-clock department. Equipment monitoring devices advise the NOC of problems such as overheating, equipment outages, and component failure via a set of triggers that can be configured on the equipment or via third-party monitoring software that runs across all of the equipment.
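As a minimal sketch of how such triggers might be expressed in a third-party monitoring layer, the following Python example checks a set of readings against configured thresholds and raises an alert; the sensor names, threshold values, and the notify_noc() stub are illustrative assumptions, not any particular product’s API.

```python
# Minimal sketch of threshold-based NOC alerting (illustrative only).
# Sensor names, thresholds, and the notify_noc() stub are hypothetical.

THRESHOLDS = {
    "intake_temp_c": 27.0,   # alert if cabinet intake air exceeds 27 °C
    "humidity_pct": 60.0,    # alert if relative humidity exceeds 60%
    "ups_load_pct": 90.0,    # alert if UPS load exceeds 90% of capacity
}

def notify_noc(message: str) -> None:
    """Stand-in for the NOC's paging or ticketing integration."""
    print(f"[NOC ALERT] {message}")

def evaluate(readings: dict) -> None:
    """Compare current readings against the configured trigger thresholds."""
    for sensor, limit in THRESHOLDS.items():
        value = readings.get(sensor)
        if value is not None and value > limit:
            notify_noc(f"{sensor} = {value} exceeds threshold {limit}")

if __name__ == "__main__":
    evaluate({"intake_temp_c": 29.5, "humidity_pct": 45.0, "ups_load_pct": 93.0})
```

In practice the same thresholds would feed an on-call rotation rather than standard output, but the trigger logic is the same.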
Data Center Engineering Design
Data center facilities are classified as Tier 1 through Tier 4 (Level 1 through Level 4) as described by the Uptime Institute, Inc., based on their respective facility infrastructure (Tier 1 being the lowest and Tier 4 the highest in terms of reliability requirements). Consideration should be given to the design of the facility and to the possibility that the required reliability level may increase as business needs change. It is vital that a long-range strategy be defined for the facility and its associated critical systems. In addition, facilities with critical areas may need to be designed so that even preventive maintenance can be performed on the facility’s systems and associated equipment without shutting down critical process equipment. A Tier rating is limited to the rating of the weakest subsystem that will impact site operation; for example, a site with a Tier 4 UPS configuration and a Tier 2 chilled water system will yield a Tier 2 site rating (a simple illustration of this rule follows the tier descriptions below).
Tier 1 (Level 1):
A basic data center with non-redundant capacity components and a single, non-redundant distribution path serving the site’s computer equipment. Planned work requires most or all of the systems to be shut down, impacting the computer systems.
Tier 2 (Level 2):
A data center with redundant capacity components and a single, non-redundant distribution path serving the site’s computer equipment. Planned work still requires most or all of the systems to be shut down. Failure to perform maintenance work increases the risk of unplanned disruption as well as the severity of major failures.
Tier 3 (Level 3):
A concurrently maintainable data center with redundant capacity components and multiple distribution paths serving the site’s computer equipment. Generally, only one distribution path serves the computer equipment at any time. Each and every capacity component of the distribution path can be removed from service during a planned maintenance window without causing any computer equipment to be shut down. In order to establish concurrent maintainability of the critical power distribution system, Tier 3 sites require all computer hardware to have dual power inputs.
Tier 4 (Level 4):
A fault-tolerant data center with redundant capacity systems and multiple distribution paths simultaneously serving the site’s computer equipment. Each and every capacity component of the distribution path can be removed from service without causing any computer equipment to be shut down. In order to establish fault tolerance and concurrent maintainability of the critical power distribution system, Tier 4 sites require all computer hardware to have dual power inputs. Distribution paths must be physically separated (compartmentalized) to prevent any single event from impacting both systems or paths simultaneously.
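The “weakest subsystem” rule noted above reduces to a simple calculation: the site rating is the minimum of its subsystem ratings. The short Python sketch below illustrates that rule; the subsystem names and tier values are hypothetical examples, not a formal Uptime Institute assessment.

```python
# Sketch of the "weakest subsystem" rule: the overall site Tier is the
# lowest Tier achieved by any subsystem that impacts site operation.
# Subsystem names and ratings are hypothetical examples.

def site_tier(subsystem_tiers: dict) -> int:
    """Return the overall site Tier as the minimum subsystem Tier."""
    return min(subsystem_tiers.values())

example_site = {
    "ups_configuration": 4,   # Tier 4 UPS topology
    "chilled_water": 2,       # Tier 2 chilled water system
    "generator_plant": 3,     # Tier 3 generator plant
}

print(site_tier(example_site))  # prints 2 - the chilled water system limits the site
```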
Data Center Planning and Design Guideline:
Data center planning has become something of a specialty in the architectural world. Most architectural firms either have an RCDD (Registered Communications Distribution Designer) on staff or retain one as a consultant to assist with the specialized equipment not addressed by their electrical and mechanical engineers. The equipment housed within the center is complex, each piece with specific requirements for heating, cooling, power budgets, and spatial considerations. A typical data center contains the following components:
- Computing and network infrastructure (cabling, fiber, and electronics)
- NOC or NOC communications and monitoring
- Power distribution, generation and conditioning systems – Uninterruptible Power Supplies, generators
- Environmental control and HVAC systems
- Fire Detection and Suppression systems (typically halon or other non-water suppression)
- Physical security and access control (prevention, allowance, and logging)
- Circuit breaker protection (lightning protection in some cases)
- Proper lighting
- Minimum of 8’5″ ceiling height
- Grounding
- Racks and cabinets for equipment
- Pathway: Raised access flooring and overhead cable tray
- Carrier circuits and equipment
- Telecommunications equipment
- Proper clearances around all equipment, termination panels and racks
Data centers must be carefully planned PRIOR to building to ensure compliance with all applicable codes and standards. Design considerations include site and location selection, space, power and cooling capacity planning, floor loading, access and security, environmental cleanliness, hazard avoidance, and growth. In order to calculate these needs, the architect and RCDD must know the components that will be housed in the data center, including all electronics, cabling, computers, racks, etc.
To produce this list, it is important to predict the number of users, application types and platforms, the rack units required for rack-mount equipment, and, most importantly, expected or predicted growth. Anticipating growth and technological change can be something of a “crystal ball” exercise. With the possible combinations of storage islands, application islands, server platforms, and electronic components being practically factorial, planning is as important to a data center as the cabling is to a network. The data center will take on a life of its own and should be able to respond to growth and changes in equipment, standards, and demands, all while remaining manageable and, of course, reliable. Larger data centers are designed in tiers, with each tier performing different functions and generally with different security levels. Redundancy may be provided between different levels or different geographic locations, depending on the needs of the facility’s users.
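As a rough illustration of the capacity arithmetic behind this planning, the sketch below estimates rack count, power draw, and cooling load from a hypothetical equipment list; the device counts, per-unit figures, and 50% growth factor are assumptions chosen for illustration (the 1 kW ≈ 3,412 BTU/hr conversion is standard).

```python
# Back-of-the-envelope capacity sketch for data center planning.
# Device counts, per-unit power figures, and the growth factor are
# hypothetical assumptions chosen for illustration.

RACK_UNITS_PER_RACK = 42     # usable U per cabinet (assumed)
GROWTH_FACTOR = 1.5          # plan for 50% growth (assumed)

equipment = [
    # (name, quantity, rack units each, watts each)
    ("1U application server", 120, 1, 350),
    ("storage array shelf",    10, 3, 450),
    ("network switch",         12, 1, 200),
]

total_u  = sum(qty * ru for _, qty, ru, _ in equipment) * GROWTH_FACTOR
total_kw = sum(qty * watts for _, qty, _, watts in equipment) / 1000 * GROWTH_FACTOR

racks_needed   = -(-total_u // RACK_UNITS_PER_RACK)   # ceiling division
cooling_btu_hr = total_kw * 3412                      # 1 kW ≈ 3,412 BTU/hr

print(f"Racks needed: {racks_needed:.0f}")
print(f"IT load: {total_kw:.1f} kW, cooling: {cooling_btu_hr:,.0f} BTU/hr")
```

Numbers like these drive the space, electrical, and HVAC requirements handed to the architect and engineers.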
Datacenter Construction Trends:
While demand for outsourced datacenter space has been steady or increasing, datacenter construction has, sadly, lagged behind. This trend – demand outpacing supply – is expected to continue through 2009. Students of economics will be appalled at such a long-term market imbalance; however, there are several reasons, some historical, for the current situation.
Datacenter power demands are increasing: The move toward 1U (‘pizza box’) servers and blade servers has caused power demands in most datacenters to increase significantly. Whereas a power density of 2.5 kW per cabinet was once more than sufficient, most 1U deployments require something closer to 5 kW per cabinet, and blade servers can require 10 kW per cabinet or more. One of the effects of greater power requirements is that datacenters built to older specifications cannot support the same cabinets-per-square-foot density, effectively ‘shrinking’ datacenters by approximately 20-30%. This effect makes any shortfall in supply all the more striking, as the current supply is effectively ‘eroded’ by increasing power demands.
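The ‘shrinking’ effect can be seen with simple arithmetic: a room built to a fixed power budget supports fewer cabinets as the per-cabinet draw rises. The 1 MW budget in the sketch below is a hypothetical figure chosen only to make the comparison concrete.

```python
# How rising per-cabinet power density "shrinks" a datacenter built to a
# fixed power budget. The 1 MW usable IT budget is a hypothetical figure.

USABLE_IT_POWER_KW = 1000  # assumed usable IT power for the room

for label, kw_per_cabinet in [("legacy (2.5 kW)", 2.5),
                              ("1U servers (5 kW)", 5.0),
                              ("blade servers (10 kW)", 10.0)]:
    cabinets = USABLE_IT_POWER_KW / kw_per_cabinet
    print(f"{label:>22}: {cabinets:.0f} cabinets supported")
```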
Demand for higher quality datacenters: Enterprises, carriers, Internet content providers, governments, and systems integrators all require datacenter infrastructure that is more reliable, resilient, and redundant than ever before. Typical requirements include N+1 electrical and cooling systems, modern fire detection and suppression, multiple fiber entrance facilities, and 24×7 staffing and security. Many older datacenters do not meet these requirements, and upgrades frequently require the entire datacenter to be taken out of service for a significant period of time.
Prices are increasing for new datacenters: The demand for higher power density datacenters with higher levels of redundancy has significantly boosted the cost of new datacenters. Greenfield builds (no existing structure) typically cost $1,300 per square foot or more for 180 W per square foot of power. Brownfield builds (utilizing an existing shell) can cost $900-1,100 per square foot for a similar power density. Rising build costs have made it difficult to attract capital (especially from banks) to build datacenters, and those costs mean that capital investment dollars don’t go as far as they used to.
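To put the quoted figures in context, the sketch below multiplies the per-square-foot costs out for a hypothetical 20,000 square foot build; the floor area is an assumption chosen only to make the comparison concrete.

```python
# Rough build-cost comparison using the per-square-foot figures quoted
# above. The 20,000 sq ft raised-floor area is a hypothetical assumption.

FLOOR_SQFT = 20_000

greenfield_cost = FLOOR_SQFT * 1_300        # ~$1,300/sq ft, no existing structure
brownfield_low  = FLOOR_SQFT * 900          # existing shell, low end
brownfield_high = FLOOR_SQFT * 1_100        # existing shell, high end
it_power_kw     = FLOOR_SQFT * 180 / 1000   # at 180 W/sq ft

print(f"Greenfield:  ${greenfield_cost:,.0f}")
print(f"Brownfield:  ${brownfield_low:,.0f} - ${brownfield_high:,.0f}")
print(f"Design IT load: {it_power_kw:,.0f} kW")
```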
Datacenter Quality Trends:
Classification of datacenters is based on the degree of redundancy inherent in the facility’s design. Datacenters can be classified as ‘standard’ or ‘premium’ datacenters, based on the following criteria (a simple checklist-style sketch follows the list):
- Backup generator configuration – N+1 or higher levels of redundancy are required for ‘premium’ status.
- Uninterruptible Power Supply (UPS) configuration – N+1 or higher levels of redundancy are required for ‘premium’ status.
- Redundant electrical grids – Dedicated utility substations (or multiple utility grid power) are highly desirable for ‘premium’ status.
- Cooling system configuration – N+1 or higher levels of redundancy are required for ‘premium’ status.
- Staffing levels – 24-hour staffing, preferably with 24-hour security, is required for ‘premium’ status.
- Security systems – Two-factor authentication utilizing biometric controls is highly desirable for ‘premium’ status, as is extensive video monitoring that is either taped or monitored 24 hours a day.
- Fire detection and suppression systems – Modern fire detection and suppression equipment, such as Very Early Smoke Detection (VESDA) and Double Pre-Action Drypipe sprinklers, or an advanced gas fire suppression system are required for ‘premium’ status.
- Optical Fiber entrance facilities – Multiple fiber entrance facilities are required for ‘premium’ status.
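The criteria above amount to a checklist, which the following Python sketch encodes as a simple classifier; the field names and the rule that every required criterion must be met for ‘premium’ status are illustrative assumptions about how one might express the list, not a formal rating methodology.

```python
# Checklist-style 'standard' vs 'premium' classification based on the
# criteria listed above. Field names and the all-criteria rule are
# illustrative assumptions, not a formal rating methodology.

REQUIRED_FOR_PREMIUM = [
    "generator_n_plus_1",
    "ups_n_plus_1",
    "cooling_n_plus_1",
    "staffed_24x7",
    "modern_fire_detection_suppression",
    "multiple_fiber_entrances",
]

def classify(facility: dict) -> str:
    """Return 'premium' only if every required criterion is satisfied."""
    return "premium" if all(facility.get(k, False) for k in REQUIRED_FOR_PREMIUM) else "standard"

example = {
    "generator_n_plus_1": True,
    "ups_n_plus_1": True,
    "cooling_n_plus_1": True,
    "staffed_24x7": True,
    "modern_fire_detection_suppression": True,
    "multiple_fiber_entrances": False,   # single fiber entrance facility
}

print(classify(example))  # prints "standard"
```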
In comparing premium and standard facilities, several facts become evident:
- Premium facilities tend to be newer and larger than standard facilities.
- Premium facilities have higher utilization levels, and those utilization levels are growing faster than those of standard facilities.
- Premium facilities are in greater demand in the top datacenter markets than standard facilities in the same markets.