Basic deployment of SAS Switch technology

In a large company with hundreds of departments, managers, and employees, effective communication and collaboration are essential. To facilitate this, it is common to locate the headquarters in a prime downtown area such as Shanghai Lujiazui, where high land costs are offset by the convenience of the location, enabling efficient operations. By contrast, when large volumes of goods must be stored and transported, companies usually place their warehouses in the outer suburbs near airports, where land is cheaper and logistics are streamlined.

This notion of “land value” is not limited to physical spaces; it also applies to computer systems. Think of the CPU, memory, and disks as occupying different parts of the system. The closer a component sits to the CPU, the lower its access latency, but the more expensive that space becomes. In other words, the physical space near the CPU holds high value, while space further away is cheap. In this analogy, external storage in a data center is the warehouse located far from the main office: cost-effective, but not as immediately accessible.

The idea of “land price” is especially evident in blade server architecture. Blade servers typically come in a multi-slot chassis with a high-speed backplane. These chassis are expensive, so manufacturers offer various blades, such as CPU blades, storage blades, and switch blades, for users to configure. Given the high cost of the blade system, a few practical conclusions follow:

1) It is usually better not to put disks on the CPU blade; that valuable space should be used for more critical components such as CPUs and memory modules.

2) Storage blades should be avoided if possible, since those slots are better reserved for more essential blades such as CPU or switch blades.

3) Instead, connect an external JBOD (Just a Bunch Of Disks) to provide shared storage for all the CPUs in the blade system. This approach is both cost-effective and scalable.

The same principles apply in large-scale data centers. By reducing the number of disks per node, server density can be increased significantly: a node that once occupied a full 1U of rack space can now be packed two, three, or even four nodes to a 1U slot (1U2, 1U3, 1U4). Today the industry’s highest density reaches 160 server nodes per rack, or 320 CPUs, more than four times the density of a traditional rack.

The gains do not come only from better space utilization; dense packaging also enables device sharing. Facebook’s design, for example, lets four server nodes share one Ethernet card and a PMC + Intel RSA module; in a reference design, multiple NVMe SSDs can be shared among several nodes. When nodes are densely packed in a small space, PCIe connections over low-cost PCBs provide fast interconnects without expensive cables. In the Scorpio 2.0 rack, partners have integrated a SAS Switch and JBOD into the system, making it easy to set up a SAS-based shared storage solution.

With this physical architecture in place, optimizing resource utilization becomes the key question: how many server nodes should be deployed per rack, and how many JBODs are needed? The answer depends on the application. The main factor is the ratio of CPUs to disks, which can be determined by profiling the application’s performance and then adjusted dynamically over time. The process involves assessing workload patterns, balancing compute and storage resources, and continuously re-tuning based on real usage; the sketches below illustrate both the initial sizing and the dynamic re-assignment.
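As a rough illustration of the sizing step, the short Python sketch below estimates how many JBODs a rack needs to hit a target disk-per-CPU ratio. All names and numbers here (nodes_per_rack, cpus_per_node, disks_per_jbod, the example ratio of 3 disks per CPU) are hypothetical assumptions for illustration, not figures from any specific product or rack standard.

# Rough rack-sizing sketch: how many JBODs does a rack need to reach a
# target disk-to-CPU ratio?  All parameters below are hypothetical examples.
import math

def jbods_needed(nodes_per_rack: int,
                 cpus_per_node: int,
                 disks_per_jbod: int,
                 disks_per_cpu: float) -> int:
    """Return the number of JBODs required to meet the target disk:CPU ratio."""
    total_cpus = nodes_per_rack * cpus_per_node
    disks_required = total_cpus * disks_per_cpu
    return math.ceil(disks_required / disks_per_jbod)

if __name__ == "__main__":
    # Example: a rack of 40 two-CPU nodes, 60-bay JBODs, and an application
    # profiled at roughly 3 disks per CPU -> 4 JBODs.
    print(jbods_needed(nodes_per_rack=40,
                       cpus_per_node=2,
                       disks_per_jbod=60,
                       disks_per_cpu=3.0))

In practice the same calculation can be rerun as the measured workload changes, which is exactly the dynamic adjustment described above.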
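The dynamic re-assignment can also be sketched in code. The fragment below is only a conceptual model of SAS switch zoning, the mechanism that decides which JBOD drive slots each server node can see; the class and method names (ZoningTable, assign, release) are invented for illustration and do not correspond to any vendor's management API.

# Conceptual model of SAS switch zoning: each entry maps a set of JBOD
# drive slots to the one server node that may use them.  Names are
# illustrative only, not a real management interface.
from collections import defaultdict

class ZoningTable:
    def __init__(self):
        # node_id -> set of (jbod_id, slot) pairs visible to that node
        self._zones = defaultdict(set)

    def assign(self, node_id: str, jbod_id: str, slot: int) -> None:
        """Make one drive slot visible to a single node."""
        # A slot should belong to at most one node, so detach it first.
        self.release(jbod_id, slot)
        self._zones[node_id].add((jbod_id, slot))

    def release(self, jbod_id: str, slot: int) -> None:
        """Detach a drive slot from whichever node currently owns it."""
        for slots in self._zones.values():
            slots.discard((jbod_id, slot))

    def drives_of(self, node_id: str):
        return sorted(self._zones[node_id])

# Example: start with 4 drives zoned to node-01, later move 2 of them to
# node-02 as the workload mix changes -- no recabling required.
table = ZoningTable()
for slot in range(4):
    table.assign("node-01", "jbod-A", slot)
for slot in (2, 3):
    table.assign("node-02", "jbod-A", slot)
print(table.drives_of("node-01"))   # [('jbod-A', 0), ('jbod-A', 1)]
print(table.drives_of("node-02"))   # [('jbod-A', 2), ('jbod-A', 3)]

The point of the sketch is the design choice it reflects: because the disks sit behind a switch rather than inside each server, the CPU-to-disk ratio can be re-balanced in software as usage data comes in.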
Ultimately, deploying SAS Switch technology in enterprise and data center environments is a smart way to achieve efficiency, scalability, and cost-effectiveness.
