An overhaul of the data center sometimes comes with a truckload of IT terms and concepts. While most IT pros have this terminology down pat, here's a refresher for new IT pros, the folks outside of IT or IT vets who want a quick trip down memory lane. For more information, refer to our reference guide on data center optimization.
Agile development is an approach to software development that emphasizes incremental steps and regular feedback from application users. It aims to incorporate learning and flexibility without adding excessively to requirements. In the data center, agility means being able to respond quickly to user and organizational changes.
A blade server is a small form-factor computer, typically used in arrays mounted together in a frame that fits into a standard rack. Blades are narrower than standard 19-inch rack servers, but they contain most of the components of a complete computer.
Change management is a process for making sure changes to the data center infrastructure occur in a consistent, documented way. The focus should be on not only avoiding interruptions in service, but also effecting change as efficiently as possible.
Chargeback is a system by which the data center or the IT department in an organization bills users for the computing services they consume.
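A chargeback scheme boils down to metering consumption and multiplying by agreed rates. The sketch below is a minimal illustration; the resource names and rates are invented for the example, not drawn from any real billing system.

```python
# Toy chargeback calculation: bill a department for the compute
# resources it consumed in a month. Rates are illustrative only.
RATES = {"cpu_hours": 0.05, "gb_storage": 0.10, "gb_transfer": 0.02}

def monthly_bill(usage):
    """usage: dict mapping resource name to units consumed this month."""
    return sum(RATES[resource] * units for resource, units in usage.items())

finance_usage = {"cpu_hours": 2000, "gb_storage": 500, "gb_transfer": 100}
print(f"${monthly_bill(finance_usage):.2f}")  # prints $152.00
```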
Cloud computing is essentially the delivery of computing as a service. Cloud providers use different chargeback models than traditional data centers, typically metering actual consumption or charging flat per-user, per-month fees.
A data center is a facility that houses enterprise computer resources and the supporting power and cooling infrastructure.
DCIM comprises software tools for discovering, monitoring and controlling assets forming a data center, including both power and computing resources.
Data dedup is a method of minimizing data storage requirements by eliminating redundant instances of data. Various deduplication algorithms flag data at the file or block level.
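Block-level deduplication can be illustrated in a few lines: split the data into fixed-size blocks, hash each block, and store each unique block only once. This is a minimal sketch of the idea, not how any particular storage product implements it.

```python
import hashlib

def dedup_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store each unique block once.
    Returns (store, recipe): store maps block hash -> block bytes;
    recipe is the ordered list of hashes needed to reconstruct the data."""
    store, recipe = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # duplicate blocks are stored only once
        recipe.append(digest)
    return store, recipe

def reconstruct(store, recipe):
    return b"".join(store[d] for d in recipe)

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # four blocks, two unique
store, recipe = dedup_blocks(data)
print(len(recipe), "blocks referenced,", len(store), "stored")  # 4 blocks referenced, 2 stored
assert reconstruct(store, recipe) == data
```

Real deduplication engines add variable-size chunking and collision handling, but the space saving comes from the same hash-and-reference principle.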
Ethernet is a nearly ubiquitous network technology that divides data into packets or frames. First commercially available in 1980, it has become an industry standard. Throughput typically ranges from 1 to 10 gigabits per second, and the IEEE has published standards for 40 and 100 Gigabit Ethernet.
Fibre Channel is a high-speed data network technology commonly used for storage area networks within data centers. Originally developed for communications among supercomputers, it is capable of transfer rates of up to 10 gigabits per second.
A hypervisor is the layer of software that manages communications between virtual machines and the hardware on which they are running at any given moment. It allows multiple operating systems to run simultaneously as guests on a host physical server.
Incident management encompasses the processes and activities conducted in response to one-time events that disrupt data center service.
ITIL is a set of service-oriented best practices to guide the data center in supporting the broader organization’s needs. ITIL prescribes not only the processes for service management but also the competencies, skills and metrics that are needed.
A MAC address is a unique identifier added to network interface cards by the manufacturer that is necessary for communicating on networks.
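Because MAC addresses appear in several written forms (colon- or hyphen-separated, upper- or lowercase), tools often normalize them before comparison. A small sketch, using an illustrative regular expression rather than any standard library parser:

```python
import re

# 6 two-digit hex octets separated by ":" or "-"
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$")

def normalize_mac(mac: str) -> str:
    """Validate a MAC address and return it in lowercase colon notation."""
    if not MAC_RE.match(mac):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    return mac.replace("-", ":").lower()

def oui(mac: str) -> str:
    """First three octets: the manufacturer's organizationally unique identifier."""
    return normalize_mac(mac)[:8]

print(normalize_mac("00-1A-2B-3C-4D-5E"))  # 00:1a:2b:3c:4d:5e
print(oui("00:1A:2B:3C:4D:5E"))            # 00:1a:2b
```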
NAS refers to data storage appliances, designed for use by two or more computers on a common network, that typically house redundant disk arrays. They appear as file servers to applications making calls across the network.
Orchestration refers to the automation of a series of tasks, from provisioning a user to invoking a series of services.
PUE is the ratio of the total power fed into the data center to the power consumed by the IT equipment. A 1-to-1 ratio is a practical impossibility because lighting and air-conditioning also use power. A facility with a PUE of 1.5 is considered highly efficient.
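The arithmetic is a straightforward division; the kilowatt figures below are illustrative only.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example figures: 1,200 kW into the facility, 800 kW reaching the IT gear.
print(round(pue(1200, 800), 2))  # 1.5
```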
Problem management encompasses the processes and activities aimed at solving recurrent issues in the data center.
Resiliency is the ability of a data center to maintain service in spite of problems such as power outages, server failures or network link failures.
An SLA is an agreement negotiated between the IT department and a user or a vendor that specifies how a service will be delivered in terms of response times, maximum allowable downtime and other performance parameters.
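SLA availability targets translate directly into a downtime budget. A quick sketch of that conversion, assuming a 30-day billing month:

```python
def allowed_downtime_minutes(availability_pct: float,
                             period_minutes: float = 30 * 24 * 60) -> float:
    """Maximum downtime per period consistent with an availability target."""
    return period_minutes * (1 - availability_pct / 100)

# "Three nines" (99.9%) over a 30-day month:
print(round(allowed_downtime_minutes(99.9), 1))  # 43.2
```

So an SLA promising 99.9% availability tolerates roughly 43 minutes of downtime per month; adding a nine (99.99%) cuts the budget to about 4.3 minutes.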
SPECvirt is a benchmark published by the Standard Performance Evaluation Corp. to measure performance of servers in virtual environments. It simulates workloads under various server consolidation levels.
A SAN is a storage environment that provides access to block-level data in disk arrays running on a dedicated network. SANs use an interface standard that makes them appear as local storage to the operating system, which distinguishes them from NAS appliances, which provide file-level access.
Thin provisioning is a technique for making the most efficient use of SANs by allotting storage as needed on a dynamic basis rather than in bulk up front based on anticipated future requirements.
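The mechanics of thin provisioning can be sketched as a pool that hands out physical blocks only on first write, letting volumes advertise far more logical capacity than physically exists. This is a toy model, not any vendor's implementation:

```python
class ThinPool:
    """Toy thin-provisioned pool: volumes advertise a large logical size,
    but physical blocks are drawn from the pool only when written."""

    def __init__(self, physical_blocks: int):
        self.free = physical_blocks
        self.volumes = {}  # volume name -> set of block numbers already written

    def create_volume(self, name: str, logical_blocks: int):
        # No physical space is reserved at creation time.
        self.volumes[name] = set()

    def write(self, name: str, block_no: int):
        written = self.volumes[name]
        if block_no not in written:
            if self.free == 0:
                raise RuntimeError("pool exhausted -- thin pools need monitoring")
            self.free -= 1        # allocate a physical block on first write only
            written.add(block_no)

pool = ThinPool(physical_blocks=100)
pool.create_volume("vol1", logical_blocks=1000)  # logical size >> physical pool
pool.write("vol1", 0)
pool.write("vol1", 0)  # rewriting the same block allocates nothing new
print(pool.free)  # 99
```

The trade-off is visible in the `RuntimeError`: because volumes are oversubscribed, administrators must monitor the pool and add capacity before writes actually exhaust it.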
Tiering refers to the storage of data in the most appropriate medium based on its intended use. Data needed on demand would be top-tier and stored on solid-state or fast disks. Data rarely needed would be archived on the lowest tier, usually optical disks or tape (sometimes offline).
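A tiering policy is ultimately a placement rule keyed on how the data is used. The thresholds and tier names below are invented for illustration; real systems also weigh latency requirements, cost and data age.

```python
# Toy tiering policy: place data on a tier by monthly access count.
# Thresholds and tier names are illustrative, not from any product.
TIERS = [
    (1000, "tier-1 SSD"),    # hot: accessed 1,000+ times a month
    (10,   "tier-2 disk"),   # warm
    (0,    "tier-3 tape"),   # cold / archive
]

def choose_tier(accesses_per_month: int) -> str:
    for threshold, tier in TIERS:
        if accesses_per_month >= threshold:
            return tier
    return TIERS[-1][1]

print(choose_tier(5000))  # tier-1 SSD
print(choose_tier(3))     # tier-3 tape
```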
The term topology refers to network design. Planning data center operations and enhancements requires both physical and logical topologies.
Virtualization involves the encapsulation of an application, operating system and memory as a self-contained software unit, known as a virtual machine (VM), that can reside with other VMs on a single server. A VM is not tied to a particular physical machine and can move easily from machine to machine based on load balancing, backup or recovery needs.
A WAN is a backbone network that serves geographically disparate users, consisting of a combination of dedicated lines, virtual networks over the Internet and wireless technologies.