Introduction
Traditional large-scale storage solutions often feature expensive, proprietary hardware that presents physical, management, security, and even electrical challenges when integrated into a data center populated with familiar network and server gear. Ceph, by design, does not prescribe specific hardware component types or models. Hardware vendor lock-in is avoided, and the architect is free to choose server and network components that meet individual cost, standards, performance, and physical criteria. Ceph's distributed design also affords tremendous flexibility over time as needs change and hardware product lines cycle and evolve. You may build your cluster from one set of gear today only to find that tomorrow's needs, constraints, and budget are distinctly different. With Ceph, you can seamlessly add capacity or refresh aging gear with very different models, transparently to your users. It is entirely straightforward to perform a 100% replacement of a cluster's hardware without users experiencing so much as a hiccup.
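As a concrete, if simplified, illustration of this transparency, the sketch below shows one common administrative pattern for retiring an OSD ahead of a hardware swap: its CRUSH weight is lowered in stages so data migrates elsewhere while clients continue to be served, and the OSD is then marked out. The drain_osd helper, the OSD id, the step count, and the pause interval are hypothetical; only the standard ceph osd crush reweight and ceph osd out commands are real, and a production procedure would watch recovery status rather than simply sleeping.

# A minimal sketch (not an official Ceph tool) of draining an OSD before its
# host is retired. Assumes the standard `ceph` CLI is installed and the caller
# has admin credentials; the OSD id, starting weight, and timings are illustrative.
import subprocess
import time

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout

def drain_osd(osd_id: int, steps: int = 4, pause_s: int = 600) -> None:
    """Reduce an OSD's CRUSH weight to zero in stages, letting the cluster
    rebalance between steps, then mark the OSD out."""
    current = 1.0  # assumed starting CRUSH weight; read the real value from `ceph osd tree`
    for step in range(steps, 0, -1):
        weight = current * (step - 1) / steps
        ceph("osd", "crush", "reweight", f"osd.{osd_id}", str(weight))
        time.sleep(pause_s)          # simplistic: wait for backfill to settle
    ceph("osd", "out", str(osd_id))  # stop mapping data to this OSD

if __name__ == "__main__":
    drain_osd(12)  # hypothetical OSD id on the server being replaced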
Ceph clusters have been built successfully from the gamut of server classes, from 90-drive 4RU monsters all the way down to minimal systems integrated directly into the storage drive's logic board (http://ceph.com/community/500-osd-ceph-cluster).
That said, it is still important to choose equipment that meets organizational, supportability, financial, and performance needs. In the next section, we explore a number of criteria; your situation may well present others. Ceph is designed for Linux systems and runs on a wide variety of distributions. Most clusters are deployed on x64 systems, but 32-bit and ARM architectures are possible as well.
Today's servers and components offer a dizzying array of choices; if they seem overwhelming, one may seek systems pre-configured for Ceph by brands such as Quanta and Supermicro:
https://www.qct.io/solution/index/Storage-Virtualization/QxStor-Red-Hat-Ceph-Storage-Edition
https://www.supermicro.com/solutions/storage_ceph.cfm
It is nonetheless highly valuable to be conversant with the ideas presented in the rest of this chapter. The links above are provided only as examples and are not to be construed as recommendations of any specific vendor.