Term
|
Definition
| increasing the ability of a server to meet increased demands |
|
|
Term
| what are 2 scalability strategies with a brief description? |
|
Definition
scale up ------ subsystem upgrades
scale out ----- increase # of servers for a single task |
|
|
Term
| how is scale-out accomplished? (2 things) |
|
Definition
| load balancing and task distribution |
|
|
Term
| what is a program that assists with the 2 scale-out methods? |
|
Definition
| MS NLB - microsoft network load balancing |
|
|
Term
| what are physical devices which are used in the process of scale-out? |
|
Definition
|
|
Term
what is the ideal scalability strategy for: a file server? |
|
Definition
|
|
Term
what is the ideal scalability strategy for: a print server? |
|
Definition
|
|
Term
what is the ideal scalability strategy for: a terminal server? |
|
Definition
|
|
Term
what is the ideal scalability strategy for: a web server? |
|
Definition
|
|
Term
what is the ideal scalability strategy for: an e-mail server? |
|
Definition
|
|
Term
what is the ideal scalability strategy for: a database server? |
|
Definition
|
|
Term
what is the ideal scalability strategy for: computation servers? |
|
Definition
|
|
Term
| what 3 tasks do domain controllers perform? |
|
Definition
user authentication
resource access validation
security control
|
|
Term
| name an improvement to win2k8 DC's |
|
Definition
better WAN replication
RODC functionality
ADDS runs as a role, not an integral OS aspect |
|
|
Term
| what are 2 ADDC activities? |
|
Definition
client - server
server - server |
|
|
Term
|
Definition
| stores, retrieves and updates data |
|
|
Term
|
Definition
| manage client print requests by spooling jobs to disk |
|
|
Term
|
Definition
| stores, searches, retrieves and updates data from disk |
|
|
Term
| what is a key aspect of a well-tuned server of any kind? |
|
Definition
|
|
Term
| what is an e-mail server? |
|
Definition
| a repository and router of e-mail |
|
|
Term
| what are the 3 most common bottlenecks for servers of all kinds? |
|
Definition
|
|
Term
|
Definition
| host web pages/applications |
|
|
Term
| what are web 2.0 servers? |
|
Definition
| run web 2.0 applications on data centers over massively distributed networks |
|
|
Term
| what are groupware servers? |
|
Definition
| allow user communities to share information |
|
|
Term
| file sharing architectures can handle a maximum of about... |
|
Definition
|
|
Term
| what is a peer-to-peer architecture? |
|
Definition
| each peer has equivalent responsibilities |
|
|
Term
| what is a downside of peer-to-peer architectures? |
|
Definition
hard to manage security
bandwidth intensive
|
|
Term
| peer-to-peer architectures work best with... |
|
Definition
| a non-routable protocol like NetBEUI |
|
|
Term
| in client-server architectures, what 2 message types are used to communicate between the client and server? |
|
Definition
RPC - remote procedure calls
SQL - structured query language |
|
|
Term
| in client-server architecture the client is.... and the server is.... |
|
Definition
client is active
server is passive
|
|
Term
| two types of client-server structures are... |
|
Definition
two tier - server responds to client requests
three tier - a middle tier is added to query a database and respond to client requests |
|
|
Term
| when should scale-up become scale-out? |
|
Definition
| when hardware maximum thresholds have been reached with scale-up |
|
|
Term
| what is the least likely source of a performance bottleneck on a server? |
|
Definition
|
|
Term
| what is an example of a server type that DOES take full advantage of P4 processors? |
|
Definition
|
|
Term
| file servers do not use much CPU power because most requests use ... |
|
Definition
| DMA - direct memory access |
|
|
Term
| all data traffic into and out of a server uses... |
|
Definition
|
|
Term
| the latest PCI bus advancement is ... |
|
Definition
|
|
Term
| what 2 technologies allow PCI cards to be added/replaced while the server is still running |
|
Definition
|
|
Term
| the performance of the PCI bus, CPU and memory relies heavily on the ... |
|
Definition
|
|
Term
| IBM uses a type of memory mirroring similar to RAID 1 called... |
|
Definition
|
|
Term
| what does active memory do? |
|
Definition
| RAM is divided into 2 ports and one port is mirrored to the other |
|
|
Term
| mirroring is always handled by... |
|
Definition
|
|
Term
| in terms of RAM, servers often benefit from mirroring and ... |
|
Definition
| memory compression technology |
|
|
Term
| an example of memory compression technology is... |
|
Definition
| IBM MXT (memory expansion technology) |
|
|
Term
| from the administrator's viewpoint, the ... is the most configurable component of a server |
|
Definition
|
|
Term
| on a PC, ... is the most important measure of disk speed, on a server, ... is the most important measure of disk speed |
|
Definition
|
|
Term
| the average load on a network should not exceed ... of its capacity |
|
Definition
|
|
Term
| specs of intel xeon 512kb L2 cache processor? |
|
Definition
dual processor ready
hyper-threading
dual channel DDR, 3.2GBps
400MHz system bus |
|
|
Term
| specs of intel xeon E7 processor? |
|
Definition
10 core capability
hyper-threading (20 threads)
up to 4 CPUs w/ 2TB DDR3 RAM total and 30MB caching |
|
|
Term
| for energy conservation, intel xeon E7 processors have... |
|
Definition
|
|
Term
| specs of intel xeon E3 processors |
|
Definition
6 cores
12 threads
12MB cache
|
|
Term
| specs of AMD opteron 6000 series |
|
Definition
12 cores
DDR3 1.333GHz memory support
quad channel mem support
12MB L3 cache
|
|
Term
| AMD opteron 4000 series specs |
|
Definition
6 cores
6MB L3 cache
supports DDR3 1.333GHz mem
dual channel mem support
|
|
Term
| any PCI device is called an... |
|
Definition
|
|
Term
| the bus width of pci-x is... |
|
Definition
|
|
Term
| a data transfer on the PCI bus is called a ... |
|
Definition
|
|
Term
|
Definition
multiplexed address and data bus, which means the address and data lines share the same wires
|
|
Term
| pci express is a ... bus, pci x is a ... bus |
|
Definition
|
|
Term
| PCI agents which initiate a transfer are called... |
|
Definition
|
|
Term
| agents which respond to PCI transfers are called.... |
|
Definition
|
|
Term
| PCI data transfers have a throughput of ... of the maximum |
|
Definition
|
|
Term
| servers used to only have ... PCI slots |
|
Definition
|
|
Term
| the ... was developed to overcome the small amount of PCI slots on older servers |
|
Definition
|
|
Term
| what is the function of a PCI-to-PCI bridge? |
|
Definition
| allows for multiple PCI busses running at a variety of speeds |
|
|
Term
| PCI-X bus has a maximum throughput of... |
|
Definition
| 1GBps at 133MHz w/ 64-bit bus width |
|
|
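That 1GBps figure follows directly from the clock and bus width; a quick sketch (the function name is illustrative, not from the source):

```python
def bus_throughput_gbps(clock_hz, bus_width_bits):
    # peak throughput assuming one transfer per clock cycle
    return clock_hz * (bus_width_bits / 8) / 1e9

# PCI-X at 133 MHz with a 64-bit bus:
print(bus_throughput_gbps(133e6, 64))  # ~1.06, rounded to "1GBps" on the card
```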
Term
| servers currently use two types of RAM, what are they? |
|
Definition
|
|
Term
|
Definition
| the memory controller communicates directly with the DRAM, better performance, not as reliable |
|
|
Term
|
Definition
| registers isolate the mem controller from the DRAM |
|
|
Term
| DDR3 RAM uses ... less power than DDR2 |
|
Definition
|
|
Term
| dual rank RDIMM can perform at |
|
Definition
|
|
Term
| when a LAN adapter receives requests for data from clients on a server, what 2 things happen? |
|
Definition
| the request frames are stored in the LAN adapter's buffer and the adapter generates interrupts to access the CPU |
|
|
Term
| DDR has a .... performance gain over non-DDR memory |
|
Definition
|
|
Term
| about what percentage of power supplied does a server's CPU consume? |
|
Definition
|
|
Term
| what percentage of energy goes towards servers storage and network equipment in a data center generally? |
|
Definition
|
|
Term
| what percentage of the time does a server get utilized generally? |
|
Definition
|
|
Term
| what is a utility administrators are using to deal with energy inefficiency at the enterprise level? |
|
Definition
| ACPI - advanced configuration and power interface |
|
|
Term
| what 3 ways does ACPI handle energy efficiency? |
|
Definition
throttling-states - mask clock ticks from processor / power capping
power-states - reduce clock rate of CPU
sleep (c) states - clock rate is zeroed out |
|
|
Term
| which is more efficient, a t-state or a p-state? |
|
Definition
| a p-state, because it delivers more performance for a given amount of power |
|
|
Term
| what is the second biggest user of power inside a server? |
|
Definition
|
|
Term
| what uses more power, FB-DIMM or unbuffered-DIMM? |
|
Definition
| FB-DIMM, because of the advanced memory buffer (which draws about 5 watts constantly) |
|
|
Term
| since FB-DIMM uses so much more power than unbuffered-DIMM, what has been made to deal with this |
|
Definition
| green FB-DIMM, uses 37% less energy |
|
|
Term
| is green FB-DIMM and regular FB-DIMM interchangeable? |
|
Definition
|
|
Term
| the best performance per watt layout w/ DDR2 DIMM is ______________ |
|
Definition
|
|
Term
| ________________ are more energy efficient than local drives on every server |
|
Definition
| dedicated drive arrays separate from server |
|
|
Term
| all fans should be based on _______ |
|
Definition
|
|
Term
| what is one of the most recent developments in data center cooling which has saved up to 6 figures per year? |
|
Definition
outside-air cooling
used since 2009 in northeast england and new mexico |
|
|
Term
| an alternative to PSU's in server environments in the interest of energy savings is ... |
|
Definition
| powering servers directly with DC UPS's |
|
|
Term
| define energy efficiency: |
|
Definition
| less energy to provide same level of service |
|
|
Term
| the cost of powering and cooling a server for 3 years is ... the cost of the server itself |
|
Definition
|
|
Term
|
Definition
| computer room air conditioner |
|
|
Term
|
Definition
| thermal design power - the max amount of power the cooling system in a computer is required to dissipate |
|
|
Term
| as the number of cores in a processor increases, the wattage required per core goes .... |
|
Definition
|
|
Term
| intel and AMD's p-states are called |
|
Definition
| demand based switching (intel) and PowerNow! (AMD) |
|
|
Term
| what is the difference between power capping and power saving? |
|
Definition
| capping applies a ceiling to the amount of power that can be used, power saving will allow for more energy to be used if it is needed |
|
|
Term
| intel's _________ is a piece of hardware dedicated to optimizing CPU performance |
|
Definition
|
|
Term
| name 2 factors which affect power consumption in server memory |
|
Definition
number of DIMMs
size of DIMMs
|
|
Term
| __________ are more energy efficient than hard disk drives |
|
Definition
|
|
Term
| redundant power supplies run _________ and are not at the top of their __________ |
|
Definition
at or below 50% capacity
efficiency curve
|
|
Term
| linux machine's power usage can be tuned using which file |
|
Definition
| /sys/devices/system/cpu/sched_mc_power_savings |
|
|
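A minimal sketch of tuning that file from Python rather than the shell (the helper names are made up for illustration; writing it needs root on a real system, and this tunable only exists on older kernels):

```python
from pathlib import Path

# scheduler tunable named on this card (older kernels only)
SCHED_MC = Path("/sys/devices/system/cpu/sched_mc_power_savings")

def set_mc_power_savings(level, node=SCHED_MC):
    # 0 disables power-aware scheduling; higher values tell the scheduler
    # to consolidate load onto fewer CPU packages so idle ones can sleep
    node.write_text(f"{level}\n")

def get_mc_power_savings(node=SCHED_MC):
    # read the current setting back as an integer
    return int(node.read_text())
```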
Term
| a solution to the underutilization of servers is... |
|
Definition
| virtualization of servers within one physical host server |
|
|
Term
| name 2 IBM tools for energy efficiency |
|
Definition
| Power Configurator and Active Energy Manager |
|
|
Term
| IBM's active energy manager allows you to set 2 things for power savings, what are they |
|
Definition
power policies
group power capping policies
|
|
Term
| heat output from servers/racks is measured in |
|
Definition
|
|
Term
| 2 units of measurement for the efficiency of a data center are.. |
|
Definition
PUE - power usage effectiveness
DCiE - data center infrastructure efficiency
|
|
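Both metrics reduce to the same ratio; a small sketch with illustrative numbers (the function names and sample figures are not from the source):

```python
def pue(total_facility_kw, it_equipment_kw):
    # Power Usage Effectiveness: total facility power / IT power (ideal = 1.0)
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    # DCiE is the reciprocal of PUE, expressed as a percentage
    return 100.0 * it_equipment_kw / total_facility_kw

print(pue(1000, 500))   # 2.0
print(dcie(1000, 500))  # 50.0
```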
Term
| on average what percentage of power supplied does a server's redundant fans use? |
|
Definition
|
|
Term
| in t-states, the maximum number of clock ticks that can be masked from the processor is... |
|
Definition
| 7 out of every 8 clock ticks |
|
|
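Masking 7 of every 8 ticks leaves a 12.5% effective duty cycle; a one-line sketch:

```python
def tstate_duty_cycle(masked_ticks, window=8):
    # a T-state masks clock ticks: the CPU only sees the unmasked fraction
    return (window - masked_ticks) / window

print(tstate_duty_cycle(7))  # 0.125 -> the CPU runs at 1/8 of its clock rate
```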
Term
| the 3 largest virtualization softwares are... |
|
Definition
| VMWARE esx, microsoft hyper-v, and xen |
|
|
Term
| a data center with high power efficiency is one with a DCIE of more than _____ |
|
Definition
|
|
Term
|
Definition
| the number of transistors on a chip doubles about every two years |
|
|
Term
| what is a problem created when transistor sizes become too small? |
|
Definition
| electron leakage = wasted power and heating issues |
|
|
Term
| starting with the xeon tulsa MP, intel dual cores began sharing... |
|
Definition
|
|
Term
| 2 primary features of the intel core micro architecture are: |
|
Definition
intel wide dynamic execution and intel intelligent power capability |
|
|
Term
| what is intel wide dynamic execution? |
|
Definition
| four instructions can be processed simultaneously in the pipeline |
|
|
Term
|
Definition
| netburst is the predecessor to intel's core micro architecture |
|
|
Term
| the process of combining multiple instructions into a single one is called... |
|
Definition
|
|
Term
| what is intel intelligent power capability? |
|
Definition
| lowers power consumption by powering down unused parts of the CPU |
|
|
Term
| AMD opteron processors were the first processors with... |
|
Definition
|
|
Term
|
Definition
| a bus that connects an AMD CPU directly to various I/O devices |
|
|
Term
| the three operation modes of AMD opteron processors are.. |
|
Definition
32 bit
64 bit
32 and 64 bit mixed
|
|
Term
| in opteron processors, the two separate L2 caches communicate with eachother through... |
|
Definition
|
|
Term
|
Definition
explicitly parallel instruction computing
an intel itanium technology which combines 3 instructions into a 128 bit structure |
|
|
Term
|
Definition
64 bits of virtual address space
data stored in 64 bit format
arithmetic operations are performed on 64 bit operands
GPRs and ALUs are all 64 bits wide |
|
|
Term
| the intel 64 bit architecture for itanium processors is called: |
|
Definition
|
|
Term
| the intel 64 bit architecture for xeon processors is called: |
|
Definition
|
|
Term
| the 64 bit architecture on the AMD opteron processors is called |
|
Definition
|
|
Term
| AMD64 AND intel 64 architectures both use |
|
Definition
64-bit GPRs (general purpose registers) and are compatible with each other
|
|
Term
| the 3 operation modes of AMD64 and intel 64 processors are... |
|
Definition
32 bit legacy mode
compatibility mode (can run 32/64 bit applications but still needs 64 bit OS/drivers)
full 64 bit mode
|
|
Term
| CPU performance is affected by: (4 things) |
|
Definition
system design and architecture
OS
application
workload
|
|
Term
| server workloads are generally _____ in nature |
|
Definition
|
|
Term
| the method of mapping larger capacity physical memory to the much smaller capacity cache is known as... |
|
Definition
|
|
Term
| the higher the associativity of a cache, the __________ the lookup time for an address within the cache |
|
Definition
|
|
Term
| cache lookups occur in... |
|
Definition
|
|
Term
| the greater the number of processors in a server, the greater the _________ of a bigger cache size |
|
Definition
|
|
Term
| performance improvement from a clock speed increase are only actually ________ of the percentage increase in clock speed |
|
Definition
|
|
Term
| multiple processor cores increase efficiency only if... |
|
Definition
| the application can take advantage of the multiple cores |
|
|
Term
| one way associative caches are AKA |
|
Definition
| direct-mapped caches - a given memory address can only be cached in one place in the cache |
|
|
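A minimal sketch of that direct-mapped lookup (the line size and line count are illustrative assumptions):

```python
def cache_slot(address, line_size=64, num_lines=512):
    # in a one-way (direct-mapped) cache each memory address maps to
    # exactly one line: index = (address // line_size) % num_lines
    return (address // line_size) % num_lines

# two addresses 32 KB apart collide in a 512-line, 64 B/line cache:
print(cache_slot(0x0000))  # 0
print(cache_slot(0x8000))  # 0 -> same slot, so they evict each other
print(cache_slot(0x8040))  # 1 -> next line over does not collide
```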
Term
| the race for clock speeds between processor companies ended and multi-core processors began because... |
|
Definition
| transistors were becoming too small on chips and electron leakage was occurring |
|
|
Term
|
Definition
| heat produced from electron leaks in transistors |
|
|
Term
| how are quad core processors different from dual core processors? |
|
Definition
| they have independent execution cores |
|
|
Term
| the L2 cache in intel core processors is _________ and uses __________ technology |
|
Definition
shared
advanced transfer cache technology
|
|
Term
| intel smart memory access does what... |
|
Definition
| allows technology in the processor to prefetch instructions more often |
|
|
Term
| intel advanced digital media boost does what... |
|
Definition
| allows SSE instructions to be processed in 128 bit chunks instead of two 64 bit chunks |
|
|
Term
| what is newer, the intel core architecture or the intel nehalem architecture? |
|
Definition
|
|
Term
| the nehalem architecture provides features like |
|
Definition
QPI - point to point connections from CPU to all I/O devices, eliminates need for shared bus
integrated memory controller
3 level cache heirarchy (last one shared) |
|
|
Term
| the nehalem architecture has a second TLB which does what... |
|
Definition
| translation lookaside buffer - improves speed of virtual address translation |
|
|
Term
| the nehalem architecture has a second BTB which does what |
|
Definition
the branch target buffer (BTB) is a part of the branch predictor which predicts which instructions may be needed next by an application
two levels of BTB make this faster |
|
|
Term
| the nehalem architecture has SMT which does what |
|
Definition
simultaneous multi-threading
allows two streams of code to be executed simultaneously
|
|
Term
| the nehalem architecture offers LSD which does what |
|
Definition
| loop stream detector - recognizes repetitive instruction execution and idles processes to boost performance and power efficiency |
|
|
Term
| DDPM stands for what and does what |
|
Definition
dual dynamic power management
an AMD technology which separates power delivery between the cores and the memory controller
|
|
Term
|
Definition
| an environment which allows multiple guest OS's to run on a single server and share hardware resources |
|
|
Term
| what does virtualization require in terms of hardware utilization management? what is this also known as? |
|
Definition
a virtual machine manager (VMM)
also known as a software virtualization layer OR a hypervisor |
|
|
Term
| what is one obvious downside of virtualization? |
|
Definition
| it introduces a single point of failure for multiple servers instead of just one |
|
|
Term
| what does it mean that a server can be scaled dynamically in terms of virtualization? |
|
Definition
| virtual servers can be added and removed as needed |
|
|
Term
| name 2 solid reasons virtualization is a good thing |
|
Definition
consolidation of servers reduces enterprise power usage
security/backups are more streamlined |
|
|
Term
| what is the typical consolidation ratio range for server virtualization |
|
Definition
|
|
Term
| what are intel's and AMD's hardware virtualization implementations? what do they do? |
|
Definition
AMD's is AMD-V and intel's is intel VT (virtualization technology)
they:
allow virtual machines/apps to run at standard priv levels
no binary translation/paravirtualization needed
increase reliability/security |
|
|
Term
| in terms of virtualization, what are binary translation and paravirtualization? |
|
Definition
| these are software based virtualization methods that focus on effective resource sharing |
|
|
Term
| with a hypervisor/VMM, every guest operating system appears to have the host's ______________ to itself |
|
Definition
|
|
Term
| the primary role of the hypervisor is to make sure... |
|
Definition
| that virtual machines do not interrupt each other while trying to access hardware resources |
|
|
Term
| in x86 computing, what is the highest privilege level? what is the lowest privilege level? what are these levels used for? |
|
Definition
highest: 0
lowest: 3
they determine what hardware resources can be accessed by the software |
|
|
Term
| outside of virtualization, only ______ privilege levels are used |
|
Definition
|
|
Term
normally, an operating system runs at privilege level... but the VMM moves guest OS's to level ... and itself to level ... this process is called |
|
Definition
|
|
Term
| the main issue with ring deprivileging is... |
|
Definition
| OS's are not meant to run at privilege level 1; the VMM acting as a middleman introduces overhead and performance degradation |
|
|
Term
| what is a method of combating the overhead created by ring deprivileging? |
|
Definition
binary translation
the VMM intercepts instructions sent by the guest OS which require privilege level 0 and solves the issues of running them from privilege level 1
|
|
Term
| in the open source community of OS's, what is a method which can be used to deal with ring deprivileging issues? |
|
Definition
paravirtualization can be used, but only by open source OS's because their source code must be edited
in this case, the virtual machine knows the VMM exists
|
|
Term
| since CPU's page physical mem addresses and not the linear addresses made by the OS, what have AMD and intel created to deal with this translation? |
|
Definition
intel: extended page tables
AMD: rapid virtualization indexing
|
|
Term
| what 3 primary things must be compatible with virtualization for it to work properly? |
|
Definition
BIOS
processor technologies
hypervisor
|
|
Term
| nested paging provides a ___________ improvement over software-based address translation techniques |
|
Definition
|
|
Term
| When translation is performed, it is stored for future use in a... |
|
Definition
Translation Look-aside Buffer (TLB). |
|
|
Term
| intel VT-x introduces two new CPU operations which are... |
|
Definition
VMX root operation - VMM functions
VMX non-root operation - guest operating system functions
|
|
Term
| intel's VMCS stands for and handles... |
|
Definition
virtual machine control structure
the VMCS tracks VM entries and VM exits, as well as the processor state of the guest operating system and VMM in VMX non-root operations |
|
|
Term
| AMD's version of intel's non-root-operation is called... |
|
Definition
|
|
Term
| AMD's VMCB stands for and performs what function... |
|
Definition
| tracks the CPU state for a guest operating system, just like intel's VMCS |
|
|
Term
| intel VT's improved functions are...stand for...and perform what function |
|
Definition
VPID (virtual processor ID) - allows the VMM to assign VPIDs to virtual machines, which are used by the CPU to tag addresses in the TLB
EPT (extended page tables) - translate guest physical mem addresses to host physical mem addresses |
|
|
Term
| 3 latency reduction technologies made by intel in the context of virtualization are: |
|
Definition
Intel I/O Acceleration Technology (IOAT), Virtual Machine Device Queues (VMDq), and Single Root I/O Virtualization (SR-IOV) |
|
|
Term
| AMD-V's RVI performs what function and stands for... |
|
Definition
rapid virtualization indexing
allows virtual machines to more directly manage memory
|
|
Term
| in the context of virtualization, what is live migration? |
|
Definition
| the ability to move a virtual machine to a different host seamlessly |
|
|
Term
PCIe transmits in... PCI-X transmits in.... |
|
Definition
|
|
Term
| two predecessors to PCI are... |
|
Definition
|
|
Term
| the PCI bus is _________ bits wide and is known as a _________________ bus |
|
Definition
32-64
multiplexed address and data bus
|
|
Term
| because PCI is a multiplexed address and data bus, a ____________ is required to switch from addresses to data |
|
Definition
|
|
Term
| any PCI device is called an.... |
|
Definition
|
|
Term
| any PCI transfer is called... |
|
Definition
|
|
Term
| types of PCI transactions include |
|
Definition
| request, arbitration, grant, address, turnaround and data |
|
|
Term
| a PCI agent who initiates a transfer is called... |
|
Definition
|
|
Term
| PCI transactions do not use ________ |
|
Definition
|
|
Term
| is PCI-X compatible with legacy PCI? |
|
Definition
|
|
Term
| when multiple speeds of PCI are being used, the clock must scale to the .. |
|
Definition
|
|
Term
| PCI-X was developed to match the speeds of adapters such as.. |
|
Definition
gigabit ethernet
fibre channel
ULTRA320 SCSI
|
|
Term
| all devices on PCI-x bus must revert to the ... |
|
Definition
|
|
Term
| PCI-x uses _________ bandwidth |
|
Definition
|
|
Term
|
Definition
QDR is quad data rate
it is a family of SRAM in which separate input and output ports each operate at DDR
|
|
Term
| why would companies create DDR and QDR instead of just increasing the MHZ of RAM to match the speeds of adapters? |
|
Definition
increasing the clock rate will eventually decrease system stability
DDR and QDR allow an increase in bandwidth w/o an increase in clock speed
|
|
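The bandwidth-without-clock-speed point can be shown numerically (a generic sketch; the 200 MHz clock and 64-bit bus figures are illustrative assumptions):

```python
def peak_bandwidth_gbps(base_clock_hz, transfers_per_clock, bus_width_bits):
    # SDR moves 1 transfer per clock, DDR 2, QDR 4 -- same clock, more bandwidth
    return base_clock_hz * transfers_per_clock * (bus_width_bits / 8) / 1e9

# same 200 MHz clock and 64-bit bus throughout:
print(peak_bandwidth_gbps(200e6, 1, 64))  # SDR: 1.6 GB/s
print(peak_bandwidth_gbps(200e6, 2, 64))  # DDR: 3.2 GB/s
print(peak_bandwidth_gbps(200e6, 4, 64))  # QDR: 6.4 GB/s
```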
Term
| PCI-X 533 uses ________ for data transfer |
|
Definition
|
|
Term
| in terms of PCI-X, the attribute phase performs what function |
|
Definition
| provides information about the transaction for buffer management |
|
|
Term
| in terms of PCI-X, split transactions performs what function |
|
Definition
| replaces delayed transaction and frees up bus for communication |
|
|
Term
| in terms of PCI-X, allowable disconnect boundary performs what function |
|
Definition
| prevents a single process from monopolizing the PCI bus with a large transaction |
|
|
Term
| PCI-x split transactions are allowed to work because.... |
|
Definition
| sequence information is included in each transaction which allows the transfer to start from where it left off |
|
|
Term
| in terms of PCI-X, what is relaxed order structure? |
|
Definition
| a technique that allows the rearranging and prioritizing of transactions by the PCI-PCI bridge |
|
|
Term
| what is a transaction byte count? |
|
Definition
| tells the PCI-PCI bridge how long a transaction will take |
|
|
Term
PCI express has __________ pairs one used for _______, the other for ______ |
|
Definition
wire
transmit
receive (dual simplex)
|
|
Term
| each two pair wire in a PCI express setup is called a |
|
Definition
|
|
Term
| the xX in PCI express x1,x16 etc refers to |
|
Definition
| the number of lanes on the PCI express connector/cable |
|
|
Term
|
Definition
| the raw data rate or bits per second that a bus can move |
|
|
Term
| encoding overhead takes _______ of the GT/sec speed |
|
Definition
|
|
Term
| originally, the PCI system could only support ______ devices |
|
Definition
|
|
Term
| to solve the problem of the low number of maximum PCI devices on a bus, the __________ was developed, which allowed ... |
|
Definition
PCI to PCI bridge
this allowed multiple speeds of PCI busses and support for more cards
|
|
Term
| PCIe provides dedicated _______ to each device |
|
Definition
|
|
Term
PCI adapters that have the ability to gain direct access to system memory are called |
|
Definition
|
|
Term
| Bus master devices are also called |
|
Definition
| direct memory access (DMA) devices. |
|
|
Term
| the biggest bottleneck to PCI transfers is ... |
|
Definition
|
|
Term
PCI-X devices use ____ I/O signalling when operating in PCI-X mode. They also support the ___ I/O signalling levels when operating in 33 MHz conventional mode, |
|
Definition
|
|
Term
| A PCIe x1 would require how many wires to connect |
|
Definition
|
|
Term
While the underlying hardware technology is different between PCI-X and PCI Express, they remain compatible at the... |
|
Definition
|
|
Term
PCI Express 2.0 doubles the bit rate of lanes and achieves up to... |
|
Definition
| 320 Gbps in a full duplex x32 configuration. |
|
|
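The 320 Gbps number is the raw signalling rate summed over both directions of all 32 lanes (encoding overhead not yet subtracted); a quick check:

```python
def pcie_raw_gbps(gt_per_s, lanes, directions=2):
    # each lane is dual simplex, so a full-duplex link counts both directions
    return gt_per_s * lanes * directions

print(pcie_raw_gbps(5, 32))  # 320 -> PCIe 2.0 (5 GT/s) at x32, full duplex
```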
Term
| in a server, the chipset defines the operation of at least |
|
Definition
|
|
Term
| increased cache size causes slower main memory access because... |
|
Definition
the processor takes longer to search a larger cache for data
the search order is registers, cache, RAM, disk
|
|
Term
|
Definition
cycles per instruction
the number of clock cycles required to execute an instruction
|
|
Term
| what 2 things can be done to improve system performance from a chipset standpoint |
|
Definition
1) decreasing CPI
2) raising clock rate
|
|
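Both levers fall out of the classic performance equation, time = instructions x CPI / clock rate; a small sketch with illustrative numbers:

```python
def cpu_time_seconds(instructions, cpi, clock_hz):
    # halving CPI or doubling the clock each halve execution time
    return instructions * cpi / clock_hz

print(cpu_time_seconds(1e9, 2.0, 2e9))  # 1.0 s baseline
print(cpu_time_seconds(1e9, 1.0, 2e9))  # 0.5 s after lowering CPI
print(cpu_time_seconds(1e9, 2.0, 4e9))  # 0.5 s after raising the clock
```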
Term
| the CPU relies on the chipset to quickly... |
|
Definition
| transfer information from the main memory |
|
|
Term
| the most frequently accessed shared resource in a computer is |
|
Definition
| RAM, because of this, it also has the highest latency |
|
|
Term
| in terms of a chipset, hardware scalability is determined by... |
|
Definition
| how efficiently multiple CPUs can share memory |
|
|
Term
| ________________ accelerate CPU access to memory, but limit multi processor scalability |
|
Definition
|
|
Term
| two commonly used multi-processor chipset architectures are... |
|
Definition
NUMA (non-uniform memory access)
SMP (symmetric multi processing)
|
|
Term
| which is more scalable, NUMA or SMP? why? |
|
Definition
| NUMA, because in SMP all processors wait for resources in the same queue |
|
|
Term
| in NUMA, processors have access to ... in terms of memory |
|
Definition
| local/near memory, and remote memory |
|
|
Term
| groups of processors are connected by which two technologies made by intel and AMD? |
|
Definition
hypertransport for AMD
scalability ports for intel xeon systems
|
|
Term
| AMD's version of NUMA is called |
|
Definition
| SUMO: sufficiently uniform memory organization |
|
|
Term
| local memory to one processor in a processor group is ______ memory to another process in the same group |
|
Definition
|
|
Term
|
Definition
| the number of processors which can access local memory |
|
|
Term
| in NUMA, remote memory can be accessed by a CPU but ... |
|
Definition
|
|
Term
| requests between local and remote memory use ... |
|
Definition
| scalability ports or hypertransport links |
|
|
Term
| in AMD's SUMO architecture, each CPU uses _____ hypertransport links, for what? |
|
Definition
3
two for CPU-CPU linkage
one for I/O linkage
|
|
Term
| hypertransport allows ____ CPU's to be directly connected and ____ CPU's to be indirectly connected but no more than ____ hops away |
|
Definition
|
|
Term
| when would remote memory be used in a multi processor architecture? |
|
Definition
| when queues are so large on local/near memory that the latency to get to the remote memory is worth it |
|
|
Term
| what assists remote memory access in multi processor architectures? |
|
Definition
| the SRAT, static resource affinity table |
|
|
Term
| the SRAT stores information such as |
|
Definition
local memory for each processor
number of processors
|
|
Term
| the SRAT is stored in...and is read by ... on boot |
|
Definition
|
|
Term
| NUMA works well for the following operating systems |
|
Definition
w2k3/8 ent
w2k3/8 DC
linux 2.6 and up
|
|
Term
| in SMP, system resources are all ______ by the multiple processors |
|
Definition
| shared, which increases queue times |
|
|
Term
| the caches of each CPU in an SMP architecture must be kept ______, which is where the _____ protocol comes in |
|
Definition
coherent
MESI (modified, exclusive, shared and invalid)
|
|
Term
| CPU's use ___________ in SMP architectures during every read/write to memory to ensure coherency between caches |
|
Definition
|
|
Term
| for each data request by a CPU, there is a broadcast to all other processors to see if the requested data is in their caches, which is called a |
|
Definition
|
|
Term
| explain the 4 states of data in CPU cache according to the MESI protocol |
|
Definition
modified - the data exists in this cache and has been modified
exclusive - the data exists in only one cache
shared - the data is in more than one cache
invalid - the cached copy is stale because the data has been modified by a write to another cache |
|
|
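The four MESI states above can be sketched as a tiny state table (a simplified illustration, not the full coherency protocol; the function names are hypothetical):

```python
# Simplified sketch of MESI cache-line states (not the full protocol):
# a line's state changes when the local or a remote CPU reads/writes it.
MESI_STATES = {
    "M": "modified - this cache holds the only, updated copy",
    "E": "exclusive - this cache holds the only copy, unmodified",
    "S": "shared - the line is present in more than one cache",
    "I": "invalid - the local copy is stale and must be re-fetched",
}

def on_remote_write(state):
    """Any remote write invalidates the local copy (simplified)."""
    return "I" if state in ("M", "E", "S") else state

def on_local_read_with_sharers(state):
    """A read that finds the line in another cache becomes shared (simplified)."""
    return "S" if state in ("E", "M") else state

print(on_remote_write("S"))  # prints "I": a shared line is invalidated by a remote write
```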
Term
| what is AMD's version of MESI? |
|
Definition
MOESI; it has one more data flag, "owner": when a CPU needs to read updated data, it reads the data from the owner's cache |
|
|
Term
| snoop overhead increases as a result of what 2 things |
|
Definition
the number of CPUs increases; cache size increases |
|
|
Term
| intel's FSB protocol is limited to _____ processors |
|
Definition
|
|
Term
| a solution to the latency issues associated with snoop cycles is... and what does it do exactly |
|
Definition
a cache coherency filter; it checks whether the address is present in remote caches before a snoop cycle begins, and if the address is not found in the filter, no snoop cycle is started |
|
|
Term
| UEFI is a replacement to IBM server ____ |
|
Definition
|
|
Term
| in terms of applications, what must be able to handle multiple processors |
|
Definition
| the code must be able to generate as many threads as the multiprocessor architecture can handle, or the benefits of multiple processors are largely lost |
|
|
Term
| in general not many applications scale well beyond ____ core processors |
|
Definition
|
|
Term
| ____________ servers can often take advantage of multiple CPUs |
|
Definition
| database and application servers |
|
|
Term
| database and application servers can expect up to a ________ increase in performance with a second processor |
|
Definition
|
|
Term
| NUMA systems show good scalability up to |
|
Definition
|
|
Term
| UEFI can boot from a drive greater than |
|
Definition
|
|
Term
| UEFI was first introduced in _________ |
|
Definition
|
|
Term
| UEFI was originally designed for _______ |
|
Definition
|
|
Term
| legacy BIOS was originally designed for ________ systems |
|
Definition
|
|
Term
| UEFI has its own __________ which means that it does not need the OS to load device drivers |
|
Definition
|
|
Term
| the number one bottleneck for authentication servers is... |
|
Definition
|
|
Term
| the number one bottleneck for file servers is... |
|
Definition
|
|
Term
| the number one bottleneck on print servers is... |
|
Definition
|
|
Term
| the number one bottleneck on database servers is |
|
Definition
|
|
Term
| the number one bottleneck for email servers is |
|
Definition
|
|
Term
| the number one bottleneck for web servers is... |
|
Definition
|
|
Term
| the number one bottleneck for web 2.0 servers is... |
|
Definition
|
|
Term
| the biggest bottleneck to a groupware server is.... and what is a groupware server |
|
Definition
| a groupware server is something like microsoft exchange that allows communities of users to share information, biggest bottleneck is memory |
|
|
Term
| the biggest bottleneck to multimedia servers is... |
|
Definition
|
|
Term
| the most popular communication server is.. |
|
Definition
Windows 2003 remote access services (RAS) server. |
|
|
Term
| 3 infrastructure server are... |
|
Definition
|
|
Term
| potential bottleneck for high performance computing is... |
|
Definition
|
|
Term
| memory is a ______________ bottleneck for servers |
|
Definition
|
|
Term
| cache memory is _____, ______, and _____ capacity |
|
Definition
|
|
Term
| disks are ______, ______, and _______ capacity |
|
Definition
| slow, inexpensive and high capacity |
|
|
Term
| main memory (RAM) acts as a bridge in terms of speed between what two subcomponents? |
|
Definition
|
|
Term
| virtualized servers generally require ________ of RAM each |
|
Definition
|
|
Term
|
Definition
| dynamic random access memory |
|
|
Term
| SDRAM stands for, and what does the S mean exactly? |
|
Definition
| synchronous DRAM, synchronous means it operates in sync with the system clock for reads/writes |
|
|
Term
| the capacity of DRAM chips is measured in |
|
Definition
|
|
Term
| ECC stands for and performs what function |
|
Definition
| Error checking/correcting code, it can correct 1 bit errors and detect 2 bit errors |
|
|
Term
|
Definition
| capacity of DRAM chips × number of DRAM chips |
|
|
Term
| the 3 capacities of DRAMs are.. |
|
Definition
|
|
Term
|
Definition
| 64 bits of data (a cache line) |
|
|
Term
| in SDRAM, the first address is supplied by... |
|
Definition
|
|
Term
| SDRAM increments a _________ to indicate the next available memory location |
|
Definition
|
|
Term
| SDRAM uses its internal clock to _____ |
|
Definition
|
|
Term
| SDRAM uses the system clock to ________ |
|
Definition
| increment its address pointer |
|
|
Term
| registered DIMMs isolate the _________ from __________, lightening the ______ |
|
Definition
| memory controller, DRAM, electrical load |
|
|
Term
| the max clock speed of DDR memory is |
|
Definition
|
|
Term
| DDR memory operates at ____ and ____ VDC |
|
Definition
|
|
Term
| DDR has a __________ prefetch |
|
Definition
|
|
Term
|
Definition
| memory fetches X sets of 64 bits of data at a time |
|
|
Term
| DDR2 requires a ___________ FSB |
|
Definition
|
|
Term
| DDR2 functions at _________ VDC |
|
Definition
|
|
Term
| DDR2 has ________ prefetching |
|
Definition
|
|
Term
|
Definition
|
|
Term
|
Definition
|
|
Term
| DDR and DDR2 actually have the _________ throughput at the same frequency |
|
Definition
|
|
Term
| Even though DDR2 has a worse latency than DDR, DDR2's ___________ can be much higher than DDR, making it faster in the end |
|
Definition
|
|
Term
| DDR3 operates at _________ VDC |
|
Definition
|
|
Term
| DDR3's memory bus operates at _______ the memory core |
|
Definition
|
|
Term
| is DDR3 backwards compatible w/ DDR2? |
|
Definition
|
|
Term
| DDR3 contains __________, which aid in the process of maintaining a server's thermal limits |
|
Definition
|
|
Term
| DDR3 works best with the __________ number of DIMMs per channel |
|
Definition
|
|
Term
| unbuffered DIMMs operate in a __________ |
|
Definition
|
|
Term
| FB-DIMMs operate in a ____________ |
|
Definition
| a serial point-to-point (PTP) link topology |
|
|
Term
| FB-DIMM has a ________ latency than unbuffered DIMM, but has greater ____________ due to the lighter electrical load |
|
Definition
|
|
Term
| every memory request must ____________ ____________ the serial linkage before returning to the memory controller in FB-DIMM setups, which increases latency |
|
Definition
|
|
Term
| FB-DIMM can support up to ______ DIMMS |
|
Definition
|
|
Term
| FB-DIMM operates at ________ VDC |
|
Definition
|
|
Term
| the AMB is responsible for... |
|
Definition
handling channel/memory requests; forwarding memory requests to other AMBs; detecting/reporting errors |
|
|
Term
| southbound frames are for _______ processes and northbound frames are for ________ processes |
|
Definition
|
|
Term
| FB DIMM has a _______ latency compared to unbuffered DDR2 RAM, in which __________ increases as DIMMs are added |
|
Definition
constant, latency; this is why FB-DIMM is more scalable |
|
|
Term
|
Definition
|
|
Term
| metaRAM is used with _______ and _______ DIMMs |
|
Definition
|
|
Term
|
Definition
| allows multiple SDRAMs to appear as a single, large capacity SDRAM |
|
|
Term
| since metaRAM lightens the electrical load, DIMMs in this setup are allowed to run at ________________ frequencies |
|
Definition
|
|
Term
| metaRAM can deliver _________ times the regular amount of server memory capacity b/c of ______________ |
|
Definition
| 2-4x, lightened electrical load |
|
|
Term
| unfortunately, metaRAM went _________ so this technology is no longer in use |
|
Definition
|
|
Term
| what is memory interleaving? |
|
Definition
| divides memory/cache lines up between 2 or more DIMMs to increase performance, as more than 1 line can be accessed at a time because they are potentially held by different DIMMs |
|
|
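The interleaving idea above can be sketched in a few lines (illustrative only; a real memory controller maps cache lines in hardware):

```python
# Sketch of 2-way memory interleaving: consecutive 64-byte cache lines
# alternate between two DIMMs, so two lines can be fetched in parallel.
CACHE_LINE = 64  # bytes

def dimm_for_address(addr, n_dimms=2):
    """Return which DIMM holds the cache line containing this byte address."""
    line_number = addr // CACHE_LINE
    return line_number % n_dimms

# Lines 0, 2, 4, ... land on DIMM 0; lines 1, 3, 5, ... on DIMM 1.
print([dimm_for_address(line * CACHE_LINE) for line in range(6)])  # [0, 1, 0, 1, 0, 1]
```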
Term
| two common interleaving setups are |
|
Definition
|
|
Term
| in a 2 DIMM interleaving array, one DIMM could hold only the _________ addresses and the other could hold only the ___________ addresses |
|
Definition
|
|
Term
| 4 way interleaving is able to transfer ______ as much data per memory access as 2 way interleaving |
|
Definition
|
|
Term
| two way interleaving has a ________ bit bus, and 4 way interleaving has a _________ bit bus |
|
Definition
|
|
Term
| after first access latency in interleaved setups, all memory addresses in DIMMs requested are transferred in __________, w/o _________ |
|
Definition
|
|
Term
| interleaving used to be _________ configurable but is now handled exclusively by ________ |
|
Definition
|
|
Term
| CUCSEM stands for and performs which function |
|
Definition
Cisco Unified Computing System Extended Memory; it's essentially the same idea as MetaRAM |
|
|
Term
| server demands are driven by |
|
Definition
64-bit applications, operating systems, virtualization |
|
|
Term
| adding ___________ is the most cost effective way to improve web server and database server performance |
|
Definition
|
|
Term
| a typical virtualized server uses |
|
Definition
2 Xeon 5500 processors; 2-4GB of DDR3 per virtual machine; 36GB of total memory |
|
|
Term
| CUCSEM uses 2 things, which are |
|
Definition
the quad core xeon 5500 and ASICs, application specific integrated circuits |
|
|
Term
| __________ is placed between the processor and DIMM, its role is to ... |
|
Definition
an ASIC; it increases memory capacity |
|
|
Term
| a dual socket machine using CUCSEM supports ___________ of memory |
|
Definition
|
|
Term
| why is the 5500 processor a good idea in environments which require very large amounts of RAM? |
|
Definition
built-in memory controller; 3 channels of DDR3 memory; each core has dedicated system memory; cost effective |
|
|
Term
|
Definition
|
|
Term
| 2 primary aspects of memory performance |
|
Definition
|
|
Term
| BW is calculated as... (recite the equation) |
|
Definition
size of channel in bytes × number of channels × frequency of memory |
|
|
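A quick worked example of the bandwidth equation above (the 8-byte channel, 2 channels and 667MHz figures are illustrative assumptions, not values from these cards):

```python
# Bandwidth = channel size (bytes) x number of channels x memory frequency.
def memory_bandwidth_bytes_per_sec(channel_bytes, n_channels, freq_hz):
    return channel_bytes * n_channels * freq_hz

# Example: 8-byte (64-bit) channel, 2 channels, 667 MHz.
bw = memory_bandwidth_bytes_per_sec(8, 2, 667_000_000)
print(bw / 1e9)  # ~10.7 GB/s
```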
Term
|
Definition
frequency, number of channels, size of channels |
|
|
Term
| latency can be defined as |
|
Definition
| the number of FSB clock cycles needed to retrieve a cache line |
|
|
Term
| mem addresses are divided equally into |
|
Definition
| column addresses and row addresses |
|
|
Term
|
Definition
| which page of memory an address is in |
|
|
Term
|
Definition
| the location on the page of memory the address is actually in |
|
|
Term
| the fundamental disk subsystem has two components, which are |
|
Definition
the hard disk and the controller |
|
|
Term
| the technical name for the disks which comprise a hard disk drive is |
|
Definition
|
|
Term
| the ______________ is mounted on an arm in a hard disk drive |
|
Definition
|
|
Term
| the linear movement of the head is called a |
|
Definition
|
|
Term
|
Definition
| the time for the head to get to a track |
|
|
Term
| the time for data to move under the head is called |
|
Definition
|
|
Term
| ____________ is the time for the disk to transfer requested data |
|
Definition
|
|
Term
| _________ and ____________ used parallel cables to connect the host adapter to the devices |
|
Definition
|
|
Term
|
Definition
| small computer system interface |
|
|
Term
|
Definition
| enhanced integrated drive electronics |
|
|
Term
| EIDE cables can be no longer than |
|
Definition
|
|
Term
| the short length/high shielding needs of parallel wires are due to the |
|
Definition
| electrical noise created by the high number of wires |
|
|
Term
| because serial cables have fewer wires they can be ... |
|
Definition
|
|
Term
| a disk array controller read operation sequence goes something like... |
|
Definition
1. LBA given for read command 2. OS generates interrupt 3. disk array controller executes I/O commands 4. command sent to target drive 5. target drive processes cmd by moving head to track where data resides 6. head reads servo track and waits for data to go underneath the head 7. read data is transferred to a buffer 8. controller starts DMA operation 9. PCI data is transferred into main memory 10. controller communicates completion to device driver |
|
|
Term
| the average seek operation on a high end hard disk is |
|
Definition
|
|
Term
| the average seek time for a consumer grade HDD is |
|
Definition
|
|
Term
| rotational latency can be defined as |
|
Definition
| the time it takes for data to move under the head, which is half the rotational time of the disk |
|
|
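The half-rotation definition above lets the per-RPM latencies be computed directly:

```python
# Rotational latency = half of one revolution, derived from the definition
# in the card above; values rounded for display.
def rotational_latency_ms(rpm):
    seconds_per_rev = 60.0 / rpm
    return 0.5 * seconds_per_rev * 1000.0

for rpm in (15000, 10000, 7200):
    print(rpm, round(rotational_latency_ms(rpm), 2))
# 15000 -> 2.0 ms, 10000 -> 3.0 ms, 7200 -> 4.17 ms
```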
Term
| rotational latency for 15k RPM drives, 10k RPM drives and 7.2k RPM drives |
|
Definition
|
|
Term
| 4 directly attached storage technologies are |
|
Definition
|
|
Term
| ________ is replacing SCSI |
|
Definition
|
|
Term
| unlike SCSI, SAS allows ... which is the greatest advantage of SAS |
|
Definition
| simultaneous processing of multiple I/O requests |
|
|
Term
| SAS disk array controllers contain |
|
Definition
processor; SAS controller; PCI bus interface; memory; internal bus |
|
|
Term
| unlike SCSI, SAS has a __________ architecture |
|
Definition
|
|
Term
| SAS supports _________ per direction per lane |
|
Definition
|
|
Term
| SCSI is limited to ___________ drives per channel, SAS can support hundreds |
|
Definition
|
|
Term
|
Definition
|
|
Term
| SATA 3.0 aka ________ provides: |
|
Definition
SATA 6gbps - backplane interconnects - enclosure management - improved performance |
|
|
Term
| peak throughput of SSD is |
|
Definition
| 600MB/s including overheads |
|
|
Term
| actual sustained read/write speed of SSD's is between |
|
Definition
|
|
Term
|
Definition
|
|
Term
|
Definition
|
|
Term
| SATA's initial transfer rate was |
|
Definition
|
|
Term
| PATA's initial transfer rate was |
|
Definition
|
|
Term
| SATA has ... which is also found in SCSI and improves data integrity |
|
Definition
| CRC (cyclical redundancy checking) |
|
|
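A sketch of the CRC idea using Python's zlib CRC-32 (illustrative only; SATA's link-layer CRC is computed in hardware over frame contents, not with zlib):

```python
import zlib

# The sender appends a checksum; the receiver recomputes it, and any bit
# corruption in transit produces a mismatch.
payload = b"sector data"
checksum = zlib.crc32(payload)

corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]  # flip one bit
print(zlib.crc32(payload) == checksum)    # True  - data intact
print(zlib.crc32(corrupted) == checksum)  # False - corruption detected
```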
Term
| the four main remote storage technologies are |
|
Definition
SAN - storage area network; NAS - network attached storage; fibre channel; iSCSI |
|
|
Term
| the usual network architecture in a SAN is... and has a max data rate of ... |
|
Definition
|
|
Term
| __________ is cheaper than ____________ |
|
Definition
|
|
Term
|
Definition
- server - operating system (its own) - storage |
|
|
Term
| the _____________ of a NAS device accesses storage devices |
|
Definition
|
|
Term
| ____________ are far easier to configure and set up than SANs |
|
Definition
|
|
Term
| 4 issues in SCSI that fibre channel addresses are |
|
Definition
cable distance; bandwidth; reliability; scalability |
|
|
Term
| despite the name, fibre channel cabling can have either ... or ... links |
|
Definition
|
|
Term
|
Definition
| 1,2,4 gbps w/ multiple channels allowed |
|
|
Term
| iSCSI allows ________ protocols over TCP/IP |
|
Definition
|
|
Term
| to use iSCSI, the server would require either a ... or a ... |
|
Definition
|
|
Term
| iSCSI adapters require their own |
|
Definition
|
|
Term
| for security, iSCSI can use |
|
Definition
|
|
Term
| iSCSI is the __________ of all the remote storage technologies |
|
Definition
|
|
Term
| 3 advantages of SSD technology |
|
Definition
- lower power usage - faster data access - higher reliability |
|
|
Term
| a 4k transfer can be over ________ faster than SAS using SSD, a 64k transfer over _______ faster than SAS |
|
Definition
|
|
Term
| SSD only supports RAID levels.. |
|
Definition
|
|
Term
| the original issue with SSD was the entire cell had to be __________ before data was added and re-added to the cell |
|
Definition
|
|
Term
| a solution to the inefficient writing method SSD uses is |
|
Definition
| the TRIM function, which reorganizes data and scrubs it after it's deleted |
|
|
Term
|
Definition
| emptying the recycle bin or formatting the drive |
|
|
Term
|
Definition
| part of windows 7 and in firmware from some drive manufacturers |
|
|
Term
| bandwidth is calculated as |
|
Definition
size of channel in bytes × number of channels × frequency of memory's FSB |
|
|
Term
| processors access memory using a ___________ which is __________ wide |
|
Definition
| cache line, 64 byte wide line |
|
|
Term
| memory bandwidth DOES NOT depend on |
|
Definition
| memory technologies like DDR2, DDR3, SDRAM etc |
|
|
Term
| latency can be described as |
|
Definition
| the number of FSB clock cycles required to retrieve a cache line |
|
|
Term
| memory addresses are divided equally into |
|
Definition
| row addresses and column addresses |
|
|
Term
|
Definition
| the page of memory an address is in |
|
|
Term
|
Definition
| where in a page of memory an address is |
|
|
Term
| memory addresses are sent in this order |
|
Definition
|
|
Term
|
Definition
|
|
Term
| two policies regarding CAS are |
|
Definition
page open policy - the page stays open until the row changes; page closed policy - the page is closed after each request |
|
|
Term
|
Definition
| the process of changing the column address in a memory request |
|
|
Term
| four common access times for memory are |
|
Definition
CAS - column address strobe; RAS to CAS - delay time between row access and column access; RAS - row address strobe; CL - CAS latency |
|
|
Term
|
Definition
| the # of memory clock ticks that elapse b/t a column address change and the DIMM actually producing that data |
|
|
Term
| latencies are usually expressed in 2 ways |
|
Definition
|
|
Term
| typical CL values for 400mhz, 533mhz, and 667mhz DIMMS are |
|
Definition
|
|
Term
| DDR memory __________ the CL because |
|
Definition
| doubles, because the memory bus operates at twice (or more) the speed of the memory core |
|
|
Term
|
Definition
| when a single thread is executed by a single core w/ an empty cache |
|
|
Term
|
Definition
| when a second thread/processor is added to access the same memory area |
|
|
Term
| CL quoted by manufacturers is always |
|
Definition
| unloaded latency, theoretical, basically useless |
|
|
Term
| a benchmark used to determine sustained bandwidth in a system's memory is |
|
Definition
|
|
Term
| in SMP architectures, the ____________ connects processors |
|
Definition
| north bridge/mem controller |
|
|
Term
| in SMP architectures, the ________ connects each processor to the FSB, which is _____________ |
|
Definition
bus interface unit (BIU), shared |
|
|
Term
| in SMP architectures, the north bridge handles traffic |
|
Definition
between CPU and memory; between CPU and I/O devices; between I/O devices and memory |
|
|
Term
| in SMP architectures, the FSB is easily ________ which introduces |
|
Definition
|
|
Term
| in SMP architectures, it is common to have the _________ and __________ speeds matching |
|
Definition
|
|
Term
| a processor's _________ is used to reduce pressure on the __________ |
|
Definition
|
|
Term
| a two processor xeon system can have a FSB which is __________, and ____________ |
|
Definition
|
|
Term
| the main CPU used with NUMA is |
|
Definition
|
|
Term
| the mem controller in opteron processors is ____________, which reduces __________ |
|
Definition
|
|
Term
| in NUMA architectures, when a CPU is added to the system, more ______________________ are added |
|
Definition
| paths to memory (integrated mem controller) |
|
|
Term
| local NUMA memory is attached to |
|
Definition
| the local memory controller |
|
|
Term
| remote NUMA memory is attached to |
|
Definition
| a remote CPU's memory controller |
|
|
Term
| opteron processors in NUMA architectures have ____________ units |
|
Definition
3 hypertransport units, 2 for other processors, 1 for I/O connections |
|
|
Term
| the opteron's crossbar switch handles |
|
Definition
| routing of data, command and addresses b/t cores and hypertransport units |
|
|
Term
| when processor speeds are increased, |
|
Definition
| hypertransport, crossbar switch and memory controller speeds are increased too |
|
|
Term
| every device in an opteron's processor package uses a single |
|
Definition
|
|
Term
| 32bit CPUs are usually limited to _________ of RAM |
|
Definition
|
|
Term
| two technologies that deal with 32bit CPUs' low memory limit are |
|
Definition
physical address extensions (PAE); address windowing extensions (AWE) |
|
|
Term
|
Definition
| intel extended server memory architecture |
|
|
Term
|
Definition
| 64GB of memory to be addressed on 32bit system |
|
|
Term
recent versions of windows ____________ have PAE automatically enabled |
|
Definition
| Server 2003/2008 Datacenter and Enterprise |
|
|
Term
| to enable PAE in older versions of windows, add ________ to _________ |
|
Definition
|
|
Term
| AWE allows ___________ to directly address more than 4GB of RAM |
|
Definition
|
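The correct-1-bit/detect-2-bit behavior comes from Hamming-style coding; a minimal Hamming(7,4) sketch shows single-bit correction (real ECC DIMMs use SECDED over 64 data + 8 check bits, so this is only an illustration of the principle):

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits at positions 1, 2, 4.
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]  # bit positions 1..7

def hamming74_correct(code):
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2,3,6,7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s4  # non-zero syndrome = error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return c

word = hamming74_encode([1, 0, 1, 1])
damaged = word[:]
damaged[4] ^= 1                       # flip one bit in flight
print(hamming74_correct(damaged) == word)  # True - 1-bit error corrected
```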
|
Term
| advanced ECC memory is AKA |
|
Definition
|
|
Term
| in ECC, the extra chip on the DIMM is called |
|
Definition
|
|
Term
| in ECC, ______________________ are not always detected |
|
Definition
| triple bit and larger errors |
|
|
Term
| ________________ allows an entire DRAM to fail while the system still functions |
|
Definition
|
|
Term
| intel xeon 5500 series CPUs in NUMA architectures are connected with |
|
Definition
| QPI - quick path interconnect links |
|
|
Term
| each of the _________ memory channels on a xeon 5500 supports up to _______ DIMMs |
|
Definition
|
|
Term
|
Definition
| total memory minus amount being used |
|
|
Term
| to determine needed memory you must |
|
Definition
| double the peak working set and add a 30% buffer |
|
|
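The sizing rule above as a one-liner (helper name is illustrative):

```python
# Memory sizing rule from the card above: double the peak working set,
# then add a 30% buffer.
def required_memory_gb(peak_working_set_gb):
    return peak_working_set_gb * 2 * 1.3

print(required_memory_gb(8))  # an 8GB peak working set -> 20.8GB needed
```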
Term
| average memory utilization should not exceed |
|
Definition
|
|
Term
| the only reason a server should page to disk is |
|
Definition
| a memory mapped file I/O is present |
|
|
Term
|
Definition
| redundant array of independent/inexpensive disks |
|
|
Term
| what exactly does RAID-0 entail |
|
Definition
| it is a striping setup in which all data is evenly distributed across all drives involved; its sole aim is performance, not fault tolerance |
|
|
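The RAID-0 distribution above can be sketched as round-robin chunking (illustrative only, ignoring real controller details):

```python
# RAID-0 striping: data is split into stripe-sized chunks and written
# round-robin across all drives (no parity, no redundancy).
def stripe(data, n_drives, stripe_size):
    drives = [bytearray() for _ in range(n_drives)]
    for i in range(0, len(data), stripe_size):
        drives[(i // stripe_size) % n_drives] += data[i:i + stripe_size]
    return drives

drives = stripe(b"abcdefgh", n_drives=2, stripe_size=2)
print(drives)  # [bytearray(b'abef'), bytearray(b'cdgh')]
```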
Term
|
Definition
| a mirrored copy of one drive is placed on another drive for redundancy |
|
|
Term
|
Definition
| 1 drive in the array is used for parity, all other drives are striped with data |
|
|
Term
|
Definition
| data and parity are striped across all drives, this is a far better setup than RAID-4 |
|
|
Term
|
Definition
| in RAID-6, data and parity are striped across all drives twice, this allows a 2 drive failure instead of 1 with RAID-5 |
|
|
Term
|
Definition
| striping (raid0) across multiple raid-1 arrays, which gives you speed and redundancy at the same time |
|
|
Term
|
Definition
| striping (raid0) across multiple raid-5 arrays, which allows speed as well as drive failure allowances |
|
|
Term
| the fastest raid version is |
|
Definition
|
|
Term
| raid-0 has ______________ striping |
|
Definition
|
|
Term
| raid-0 is the _________ raid array type |
|
Definition
|
|
Term
| RAID-1's write performance is reduced because |
|
Definition
| both drives involved must write the data |
|
|
Term
| raid-3 has a __________ parity disk |
|
Definition
|
|
Term
| raid-3 has __________ striping |
|
Definition
|
|
Term
| what exactly are parity disks used for in raid arrays? |
|
Definition
| the parity calculation can restore data lost to a drive failure, within the limits of the particular raid level |
|
|
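How the parity calculation restores data, sketched with XOR, the mechanism striped-parity RAID levels use (block contents here are arbitrary examples):

```python
# The parity block is the XOR of the data blocks, so any single lost
# block equals the XOR of all the surviving blocks plus parity.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(xor_blocks(d0, d1), d2)

# Drive holding d1 fails; rebuild it from the remaining blocks + parity.
rebuilt = xor_blocks(xor_blocks(d0, d2), parity)
print(rebuilt == d1)  # True
```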
Term
| raid-4 has ___________ striping |
|
Definition
|
|
Term
| the acronym SMART stands for |
|
Definition
| self monitoring analysis and reporting technology |
|
|
Term
| raid-5 is __________ slower in write operations than raid-0 and raid-1 because of striped parity |
|
Definition
|
|
Term
| raid-5's ______________ is similar to raid-0 |
|
Definition
|
|
Term
| raid-5 has _________________ striping |
|
Definition
|
|
Term
| raid-6 requires a minimum of ______ drives |
|
Definition
|
|
Term
| raid-6 allows a _________ failure |
|
Definition
|
|
Term
| rebuild times for raid-6 are ____________ |
|
Definition
| very long for 2 disk failures |
|
|
Term
|
Definition
| striping across multiple striped raid-0 arrays |
|
|
Term
| in any striped array, doubling the number of drives gives a ________ increase in throughput |
|
Definition
|
|
Term
| in a striped array, if there is a large number of read operations compared to write operations, performance will increase by ___________ |
|
Definition
|
|
Term
| disk stroke can be described as |
|
Definition
| the active data set size of a disk |
|
|
Term
| if data is stored on 10% of the disk surface the disk is said to have |
|
Definition
|
|
Term
| if disk stroke goes down, |
|
Definition
|
|
Term
| a 10k RPM disk w/ 50% stroke could achieve _________ I/O's per second of a 70/30 read/write workload |
|
Definition
|
|
Term
| if disk stroke is at 20%, the performance of a 10k RPM disk would be ____ I/O's per second, a ____ increase |
|
Definition
|
|
Term
| adding more drives and reducing disk stroke increase performance because ________ is minimized |
|
Definition
|
|
Term
| disk fragmentation decreases performance because data in files is not |
|
Definition
|
|
Term
| fragmented files require _______ |
|
Definition
| multiple seeks (movement of the drive head) |
|
|
Term
| a _________ logical array representing a physical array is fastest |
|
Definition
|
|
Term
| ________ logical drives representing data across different disks can slow performance by |
|
Definition
|
|
Term
|
Definition
| the amount of data stored in one segment on an array |
|
|
Term
|
Definition
|
|
Term
| stripe size and segment size are the same thing, this is not a question just a reminder |
|
Definition
|
|
Term
| the range of stripe sizes is |
|
Definition
|
|
Term
| stripe size should _________ the average size of a disk transaction to optimize performance |
|
Definition
|
|
Term
| what would happen if a stripe size were too small? |
|
Definition
| more drives have to be accessed for each I/O request |
|
|
Term
| what happens if a stripe size is too large? |
|
Definition
| excessive disk reads will occur |
|
|
Term
| the stripe size of a groupware server should be |
|
Definition
|
|
Term
| the stripe size of a database server should be |
|
Definition
|
|
Term
| the stripe size of a file server should be |
|
Definition
|
|
Term
| the stripe size of a file server on linux should be |
|
Definition
|
|
Term
| the stripe size of a video file server should be |
|
Definition
|
|
Term
| ______________ is used to set stripe size |
|
Definition
| windows performance console |
|
|
Term
| two disk cache modes available are |
|
Definition
|
|
Term
|
Definition
| information is written to the disk before os is notified the write was successful |
|
|
Term
|
Definition
| the os is notified of a successful write as soon as it is written to the cache |
|
|
Term
|
Definition
|
|
Term
| in general, what cache mode is quicker? |
|
Definition
| write-back, but only if the server is lightly loaded, if it is busy the cache becomes full and causes a bottleneck |
|
|
Term
| out of the two cache modes, ___________ will perform better under heavy server load |
|
Definition
|
|
Term
| the threshold that write through should be used at is |
|
Definition
| when the system has a response time greater than 40ms |
|
|
Term
| increasing the RAID adapter cache size does |
|
Definition
| almost nothing for performance on a server |
|
|
Term
| the reason cache size increase on a raid controller doesn't help is because |
|
Definition
| the cache size is such a tiny portion of the whole database that data could rarely be retrieved directly from there |
|
|
Term
| a large cache size can actually mess up performance on a server because |
|
Definition
| the adapter continuously searches the cache before going to the disk |
|
|
Term
| while doing a rebuild of data as a result of a drive failure, the raid controller will have |
|
Definition
|
|
Term
|
Definition
|
|
Term
| _____________ are essential to updating drivers/firmware on any subcomponent of a server |
|
Definition
|
|
Term
| fibre channel frames have a maximum data payload of |
|
Definition
|
|
Term
|
Definition
| physical layer for fibre channel |
|
|
Term
| FC-1 for fibre channel is |
|
Definition
the transmission protocol, which handles: encoding of bits; data transmission/error detection; signal clock generation |
|
|
Term
| FC-2 on fibre channel does what |
|
Definition
| builds data frames and segments large transfer requests |
|
|
Term
| FC-3 in fibre channel does what |
|
Definition
| defines the common services layer, defines what services are available across all fibre channel ports |
|
|
Term
| FC-4 in fibre channel does what |
|
Definition
| defines the protocols that can be used to transmit data over the fibre links |
|
|
Term
| 4 protocols which can be used at FC-4 in fibre channel are |
|
Definition
|
|
Term
| RAID-0 has ____________ more throughput than RAID-1 |
|
Definition
|
|
Term
| RAID-5 has __________ more throughput than RAID-6 |
|
Definition
|
|
Term
| a 10k RPM drive has __________ improvement over a 7200RPM drive |
|
Definition
|
|
Term
| A 15k RPM drive has __________ improvement over a 10k RPM drive |
|
Definition
|
|
Term
| ultra160 scsi is _________ faster than ultra scsi |
|
Definition
|
|
Term
| ultra320 scsi is ________ faster than ultra160 scsi |
|
Definition
|
|
Term
| a disk read in fibre channel must |
|
Definition
| travel up and down all FC layers |
|
|
Term
| ____________ and __________ are two popular NAS technologies |
|
Definition
|
|
Term
| 1 additional fibre channel controller can |
|
Definition
| double the performance of a fibre channel setup |
|
|
Term
| client server relationships can be described as |
|
Definition
| request/response relationships |
|
|
Term
| in a client/server relationship, the client _______ and the server ________ |
|
Definition
|
|
Term
| the 5 LAN adapter functions are |
|
Definition
network interface/control; protocol control; communication processor; PCI bus interface; buffers/storage |
|
|
Term
| standard ethernet has up to _______ FSB transactions for _______ network transaction, this process includes ____ CPU transactions |
|
Definition
|
|
Term
| in more modern ethernet systems, some TCP processing is _____ to the adapter using only ______ FSB transactions |
|
Definition
|
|
Term
| TCP/IP requires up to ______ transfers for each _______ |
|
Definition
|
|
Term
| of the transfers initiated by a TCP/IP transaction, a server CPU will execute ________ but one of these transfers |
|
Definition
|
|
Term
| in TCP/IP communications, a server which is moving 75MBps over the LAN is doing _________ times that amount of traffic over the memory bus |
|
Definition
|
|
Term
| the 2 main steps in a TCP/IP response form a server are |
|
Definition
- LAN adapter uses bus master transfer to send data directly to NIC card buffers - CPU processes data; data transferred to TCP/IP stack and FS memory |
|
|
Term
| ________ utilization can be high in high network use environments |
|
Definition
|
|
Term
| 4 factors which affect network controller performance: |
|
Definition
transfer size; number of ethernet ports; CPU and frontside bus; 10 gigabit ethernet adapters |
|
|
Term
| _______________ result in increased CPU and FSB usage b/c the _____ and ______ must process each network packet no matter how much payload is in it |
|
Definition
| small data transfer sizes, CPU, network adapter |
|
|
Term
| a full duplex gigabit ethernet connection can ________ and ___________ data at the same time, which allows up to ________ throughput |
|
Definition
|
|
Term
| in network operations, when transfer size decreases, CPU usage _______ and throughput ______ |
|
Definition
|
|
Term
| standard ethernet adapters have a maximum frame size of |
|
Definition
|
|
Term
| standard ethernet adapters have a maximum transmission unit of _______, another ______ are used for the L2 header and another _______ are used for the CRC |
|
Definition
|
|
Term
| _______ bytes in every L2 header must be used for TCP/IP _________, ________ and ________ information, leaving ________ bytes for data to be carried by packet |
|
Definition
| 40, addressing, header, checksum, 1460 |
|
|
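The payload arithmetic from the card above, assuming the standard 1500-byte Ethernet MTU:

```python
# A 1500-byte MTU minus 40 bytes of TCP/IP headers leaves 1460 bytes of
# application data per full-size frame.
MTU = 1500
TCP_IP_HEADERS = 40

payload = MTU - TCP_IP_HEADERS
print(payload)  # 1460 bytes of data per frame
```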
Term
| ethernet frames which have been adapted to handle VLAN's have an MTU of |
|
Definition
|
|
Term
| 1480 byte frames cause a dip in performance because |
|
Definition
| the 1460-byte payload max is reached and the frame has to be split into two frames, making the server work twice as hard for only 20 extra bytes of payload |
|
|
Term
| in ethernet situations, if no other system is a bottleneck, ________ allows ________ the throughput |
|
Definition
|
|
Term
| if CPU is not a bottleneck, scaling can add ________ or more ports |
|
Definition
|
|
Term
| w/ four ports and four CPUs, the CPU is only a bottleneck with ______________ |
|
Definition
|
|
Term
| sustained throughput is dependent on |
|
Definition
| how fast buffer copies can be performed |
|
|
Term
| what are buffer copies? |
|
Definition
| moving data between device driver, TCP/IP buffers and filesystem buffers |
|
|
Term
| if the processor in a network situation IS a bottleneck, a _____ increase in processor speed results in a ________ increase in network throughput |
|
Definition
|
|
Term
| generally, NICs cannot use __________ at a time |
|
Definition
|
|
Term
| increasing the number of _______ is only useful if the number of ________ are also increased |
|
Definition
|
|
Term
| with hyperthreading, an increase in speed is seen for _______ block sizes |
|
Definition
|
|
Term
| ___________ is higher without hyperthreading |
|
Definition
|
|
Term
| 10 gigabit ethernet adapters follow the ____________ standard |
|
Definition
| IEEE 802.3ae |
|
|
Term
| since gigabit ethernet adapters are __________, collision detection is not necessary |
|
Definition
| full duplex, transmit and receive travel on separate paths, so data moving in one direction never crosses paths with data moving in the other |
|
|
Term
| with a fibre 10 gigabit ethernet connection, ____________ speeds can be achieved while in burst mode, which is actually __________ for current bus technologies |
|
Definition
|
|
Term
| a server with a bottleneck will only run as fast as the _________ will allow, in other words, a system is only as strong as its weakest link |
|
Definition
|
|
Term
| applications that transfer data using ________ will result in low throughput and high CPU overhead |
|
Definition
|
|
Term
| most NIC device drivers do not scale well with |
|
Definition
| multiple processors |
|
|
Term
| ______________ make a significant difference to throughput |
|
Definition
| advanced network features |
|
|
Term
| 3 advanced network features are |
|
Definition
TOE (TCP offload engine); I/O acceleration technology (IOAT); TCP chimney offload
|
|
Term
| the __________ is a hardware based solution to high CPU usage in TCP/IP network transactions |
|
Definition
| TOE |
|
|
Term
| in a TOE, the ______ is moved to the ______ instead of the ________ |
|
Definition
| TCP/IP processing, NIC, CPU |
|
|
Term
| although TOE reduces CPU usage, _________ still have high CPU usage and ________ are far more efficient |
|
Definition
| small block sizes, larger block sizes |
|
|
Term
| the 4 standard ethernet data flow steps are |
|
Definition
1. Packet received by NIC and moved to driver/kernel memory space 2. Network controller interrupts CPU to signal arrival of packet 3. CPU processes TCP/IP headers 4. Data is copied from kernel memory space to user memory space by the CPU |
|
|
Term
| the ethernet data flow steps when TOE is in use are |
|
Definition
1. Packet received by NIC and loaded into TOE engine 2. TOE processes TCP/IP headers 3. Data is copied from NIC memory to user memory space by the TOE engine 4. The NIC interrupts the CPU to indicate the packet is available to the user |
|
|
Term
| TOE __________ processing from the __________ to _____________ |
|
Definition
| offloads, operating system, hardware |
|
|
Term
| TOE can provide gains in throughput at __________________________ transfer sizes |
|
Definition
| larger |
|
|
Term
| small transfer sizes actually show ______ CPU usage with TOE |
|
Definition
| higher |
|
|
Term
| ___________ does not provide significant gains in _______ for ___________ transfer sizes |
|
Definition
|
|
Term
| ________ DOES significantly reduce _________ at larger transfer sizes |
|
Definition
| TOE, CPU usage |
|
|
Term
| intel's alternative to TOE is called ________ and is AKA ________ |
|
Definition
IOAT (I/O acceleration technology), AKA NetDMA
|
|
Term
| ___________ by intel (TOE competitive technology) is _________ with the ________ |
|
Definition
| IOAT, integrated, chipset |
|
|
Term
| intel's IOAT is supported by |
|
Definition
|
|
Term
| 4 primary IOAT features are |
|
Definition
an optimized protocol stack which reduces processing cycles; header splitting (simultaneous processing of header/payload); interrupt moderation (prevents excessive interrupts); DMA to reduce latency while waiting for memory access to finish
|
|
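Of the four features, interrupt moderation is the easiest to picture: raise one interrupt per batch of packets instead of one per packet. A toy count (the coalescing factor of 32 and the packet count are illustrative numbers, not Intel's):

```python
def interrupts_raised(num_packets, coalesce=1):
    """One interrupt per `coalesce` packets; a partial final batch still interrupts once."""
    return -(-num_packets // coalesce)  # ceiling division

packets = 10_000
print(interrupts_raised(packets))               # one interrupt per packet: 10000
print(interrupts_raised(packets, coalesce=32))  # coalesced: 313
```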
Term
| 3 primary features of IOAT's protocol stack are |
|
Definition
separate data and control paths; cache-aware data structures; improved exception testing
|
|
Term
| much like TOE, _______ was made to ________ ___________ ____________ |
|
Definition
| IOAT, reduce CPU interrupts |
|
|
Term
| the 4 step data flow of an IOAT network operation is |
|
Definition
1. a packet is received by NIC and DMA'd to driver/kernel memory space 2. the network controller generates interrupts 3. CPU processes TCP/IP headers 4. data copied from kernel mem space to user mem space by DMA engine |
|
|
Term
| IOAT only improves performance from _________________ to ___________________, not the other way around |
|
Definition
| the network, the server |
|
|
Term
| IOAT provides ______________ difference in throughput especially with larger block sizes |
|
Definition
|
|
Term
| _________ utilization can be cut in half with IOAT and large block transfers |
|
Definition
| CPU |
|
|
Term
| compare and contrast TOE and IOAT |
|
Definition
1. TOE reduces CPU bottlenecks in both directions, IOAT only does this for receive 2. TOE offloads protocol and data movement, IOAT only offloads data movement 3. IOAT is stateless and works with all connections, TOE does not |
|
|
Term
| microsoft's alternative to technologies like TOE and IOAT is |
|
Definition
| TCP chimney offload |
|
|
Term
| microsoft's ___________ is implemented with ________ |
|
Definition
| TCP chimney offload, scalable network pack (SNP) |
|
|
Term
| microsoft's ___________ works with TOE to reduce ____________ usage |
|
Definition
| TCP chimney offload, CPU |
|
|
Term
| ________________ is also implemented in microsoft's SNP |
|
Definition
| receive-side scaling (RSS) |
|
|
Term
| with receive-side scaling, a single adapter can be processed by ___________ |
|
Definition
| multiple CPUs |
|
|
Term
| in receive-side scaling, cache locality is where |
|
Definition
| packets from each connection are mapped to a specific processor, increasing efficiency |
|
|
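The mapping above can be sketched as a hash of the connection 4-tuple, so every packet of a flow lands on the same CPU. Real adapters use a Toeplitz hash plus an indirection table; the CRC32 here is only a stand-in:

```python
import zlib

NUM_CPUS = 4  # assumed core count for this sketch

def rss_cpu(src_ip, src_port, dst_ip, dst_port):
    """Steer every packet of a flow to one CPU, keeping its TCP state
    hot in that CPU's cache."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_CPUS

cpu = rss_cpu("10.0.0.5", 40000, "10.0.0.9", 80)
# The same connection always maps to the same CPU...
assert cpu == rss_cpu("10.0.0.5", 40000, "10.0.0.9", 80)
# ...while other flows may land on other cores, spreading the load.
print(cpu)
```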
Term
| __________ is a part of microsoft's SNP and allows direct memory to memory transfers through the network (no CPU involved) |
|
Definition
| RDMA (remote direct memory access) |
|
|
Term
| iSCSI operates at ___________ of the OSI model |
|
Definition
| the session layer (layer 5) |
|
|
Term
| iSCSI comes in both _________ and ___________ initiators, __________ is cheaper |
|
Definition
| hardware and software, software |
|
|
Term
| ___________ iSCSI initiators are much ______ than ____________ initiators |
|
Definition
| hardware, faster, software |
|
|
Term
| networks should be tested with __________ data transfers not just file copies |
|
Definition
|
|
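In that spirit, a minimal raw test: stream bytes straight from memory over a TCP socket so disk speed never enters the picture. Loopback is used here; pointing it at a real host would exercise the LAN. The chunk and total sizes are arbitrary choices for the sketch.

```python
import socket
import threading
import time

CHUNK = 64 * 1024          # bytes per send; vary this to see transfer-size effects
TOTAL = 64 * 1024 * 1024   # 64 MB generated in memory, no file I/O at all

def drain(listener):
    """Accept one connection and read until the sender closes it."""
    conn, _ = listener.accept()
    while conn.recv(CHUNK):
        pass
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # loopback; use a remote host's address to test the LAN
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=drain, args=(listener,))
t.start()

payload = b"\x00" * CHUNK
sender = socket.create_connection(("127.0.0.1", port))
start = time.perf_counter()
sent = 0
while sent < TOTAL:
    sender.sendall(payload)
    sent += CHUNK
sender.close()
t.join()
listener.close()
elapsed = time.perf_counter() - start
print(f"{sent / elapsed / 1e6:.0f} MB/s")
```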
Term
| an iSCSI initiator should reside on its ________________ for performance and security |
|
Definition
| own dedicated network |
|
|