What role will the data center play in a world that supports 5G?

For decades, the data center has served as the connection point of the network. For enterprises, telecom operators, cable TV operators, and cloud service providers such as Google and Facebook, it is the heart and muscle of the IT industry, and the rise of cloud computing has only underscored the modern data center's importance.

That role is constantly evolving. As networks add support for 5G and the Internet of Things (IoT), IT managers are focusing on the edge, moving more capacity and processing power closer to the end user, and re-evaluating the role of the data center in the process.

Research firm Gartner predicted in 2018 that by 2025, 75% of enterprise-generated data will be created and processed at the edge, up from just 10% in 2018. Data volumes will keep growing as well: a single autonomous car, for example, can output an average of 4,000 GB of data per hour.
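To put that figure in perspective, here is a quick back-of-the-envelope calculation; the eight-hour daily duty cycle is purely an illustrative assumption, not something stated above.

```python
# Rough scale check on the 4,000 GB/hour figure cited above.
gb_per_hour = 4_000
hours_per_day = 8            # assumed daily driving time, for illustration only

gb_per_day = gb_per_hour * hours_per_day
print(f"One vehicle: {gb_per_day:,} GB/day (~{gb_per_day / 1_000:.0f} TB/day)")
```

Even under conservative assumptions, a single vehicle produces tens of terabytes a day, far more than can economically be backhauled to a core data center in raw form.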

Network service providers must determine how to support the enormous growth of edge-based traffic, and the low-latency performance that digital services demand, without undermining existing data center investments. Part of the answer lies in heavy investment in east-west network links and redundant peer-to-peer nodes, along with more processing capability placed where the data is created. But what role will the data center itself play?

Artificial Intelligence/Machine Learning Feedback Loop

The future business case for hyperscale and cloud-scale data centers lies in their enormous processing and storage capacity. As edge activity heats up, that horsepower will be needed to create the algorithms that make sense of the data. In an IoT world, the importance of artificial intelligence (AI) and machine learning (ML) cannot be overstated, and neither can the data center's role in delivering them.

Generating the algorithms that drive AI and ML requires massive data processing. Core data centers have begun deploying more powerful CPUs paired with tensor processing units (TPUs) or other dedicated hardware. AI and ML applications also typically require faster, higher-capacity networks, with advanced switching layers feeding every server working on the same problem. The AI and ML models are the product of this effort.

At the other end of the process, AI and ML models need to sit where they have the greatest business impact. Enterprise AI applications such as facial recognition, for example, require ultra-low latency and must therefore be deployed locally rather than at the core. But the models must also be tuned regularly, so data collected at the edge is fed back to the data center to update and improve the algorithms, as the sketch below illustrates.
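The following is a minimal sketch of that feedback loop, not an API from the article; all the names (Model, EdgeSite, retrain_at_core) are illustrative assumptions. The core trains the model, the edge runs low-latency inference while collecting new data, and that data flows back to the core to produce the next model version.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    version: int = 0

@dataclass
class EdgeSite:
    name: str
    model: Model = field(default_factory=Model)
    collected: list = field(default_factory=list)

    def infer(self, sample):
        # Ultra-low-latency inference happens locally at the edge...
        self.collected.append(sample)   # ...while new data is captured for feedback
        return f"{self.name}: scored {sample} with model v{self.model.version}"

def retrain_at_core(model: Model, feedback: list) -> Model:
    # The core data center's compute (CPUs plus TPUs or other accelerators)
    # turns the accumulated edge data into an improved model.
    print(f"Core: retraining on {len(feedback)} edge samples")
    return Model(version=model.version + 1)

# One turn of the loop: edge inference -> feedback to core -> updated model out.
edge = EdgeSite("cell-site-A")
print(edge.infer("face-frame-001"))
edge.model = retrain_at_core(edge.model, edge.collected)   # core pushes the update
print(edge.infer("face-frame-002"))
```

The design point is the division of labor: latency-sensitive scoring stays at the edge, while the compute-heavy retraining step stays in the core where processing and storage are cheapest.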

A more distributed collaborative environment

The AI/ML feedback loop is one example of how data centers will need to support a broader, more diverse network ecosystem. For hyperscale data center players, this means adapting to a more distributed, collaborative environment: enabling customers to deploy AI or ML at the edge of the platform, not necessarily inside their own data center facilities.

Cloud providers such as AWS, Microsoft, and Google are now deploying their cloud hardware closer to customers, including in central offices and on-premises data centers. This lets customers build and run cloud-based applications across hyperscale data centers and multiple edge facilities. Because these platforms are also embedded in many operators' systems, customers can run their applications wherever those operators have a presence. The model is still in its infancy, but it gives customers greater flexibility while letting cloud providers better support the edge.

Another ecosystem approach, implemented by Vapor IO, offers a business model built around colocation data centers with standardized compute, storage, and network resources. Smaller customers, such as gaming companies, can obtain virtual machines close to their end users and run their applications on Vapor IO's ecosystem. Services like this may operate on a revenue-sharing model, an attractive proposition for small businesses trying to build an ecosystem of edge services.

The challenge

As the vision of next-generation networks becomes reality, industry organizations must tackle the implementation challenges. Inside the data center, server connections will climb from 50 Gb/s per lane to 100 Gb/s, switching bandwidth will rise to 25.6 Tb/s, and 100 Gb/s lane technology will enable 800 Gb/s pluggable modules.
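As a sanity check on that last figure, the arithmetic below shows how 100 Gb/s lanes aggregate into an 800 Gb/s pluggable; the eight-lane count reflects common 800G form factors such as QSFP-DD800 and OSFP, which the text itself does not specify.

```python
# Lane math for an 800 Gb/s pluggable module built from 100 Gb/s lanes.
lane_rate_gbps = 100     # per-lane rate cited in the text
lanes_per_module = 8     # typical for 800G form factors (assumption)

module_rate_gbps = lane_rate_gbps * lanes_per_module
print(f"{lanes_per_module} x {lane_rate_gbps} Gb/s = {module_rate_gbps} Gb/s per module")
```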

What remains unclear is how the data center industry will design infrastructure from core to edge, particularly how to implement data center interconnect (DCI) architectures and metro and long-haul links, and how to support highly redundant peer-to-peer edge nodes. Another challenge is developing the orchestration and automation capabilities needed to manage and route enormous volumes of traffic. As industries move toward networks that support 5G and IoT, these questions have become a top priority.

What is clear is that building and rolling out next-generation networks will take a coordinated effort. Data centers that can deliver low-cost, high-capacity compute and storage will undoubtedly play an important role. But as more distributed devices at the edge take on a greater share of the load, the data center's role will keep evolving as part of a larger distributed ecosystem.

Tying this distributed ecosystem together will require faster, more reliable fiber-optic networks running from the core to the farthest edge of the network. That means a cabling and connectivity platform powered by PAM4 signaling and related processing technologies, using co-packaged and digital coherent optics, delivered in compact cabling that provides consistent performance end to end.
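A brief illustration of why PAM4 matters here: it encodes two bits per symbol, doubling the line rate of NRZ at the same baud. The symbol rate below is a commonly used figure for 100 Gb/s PAM4 lanes, assumed for illustration since the text does not give one.

```python
import math

def line_rate_gbps(baud_gbd: float, levels: int) -> float:
    """Raw line rate in Gb/s: symbol rate times bits per symbol (log2 of levels)."""
    return baud_gbd * math.log2(levels)

baud = 53.125  # GBd, a symbol rate commonly used for 100 Gb/s PAM4 lanes (assumption)
print(f"NRZ  (2 levels) at {baud} GBd: {line_rate_gbps(baud, 2):.2f} Gb/s")
print(f"PAM4 (4 levels) at {baud} GBd: {line_rate_gbps(baud, 4):.2f} Gb/s")
```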

Whether for large data center operators, edge-focused enterprises, or infrastructure providers, the next-generation network will offer plenty of room to grow.
