What is edge computing?
1. What exactly is "edge computing"?
Edge computing has not been around for long, and many people have generalized the concept: its scope and explanations vary from source to source, and some are even contradictory. The author prefers the definition given by the OpenStack community (OpenStack is a free and open-source software project licensed under the Apache License, initiated jointly by NASA and Rackspace):
"Edge computing is to provide application developers and service providers with cloud services and IT environment services on the edge of the network; the goal is to provide computing, storage, and network bandwidth close to data input or users."
In layman's terms: edge computing is essentially a service, similar to cloud computing and big data services, but delivered very close to the user. Why so close? The goal is to let users feel that all content loads very quickly.
The key problems edge computing solves are the high latency, network instability, and low bandwidth inherent in the traditional cloud computing (or centralized computing) model. To give a realistic example, almost everyone has encountered a 404 error in a mobile app; such errors are related to network conditions and the bandwidth limits of cloud servers. Because of these resource constraints, cloud computing services are inevitably affected by high latency and network instability. By migrating some or all of the processing closer to the user or the data collection point, edge computing can greatly reduce the impact of the centralized model on the application.
Edge computing appeared at roughly the same time as fog computing, and the two concepts overlap. Both terms began to appear around 2011 and have since become investment hot spots for the giants. Let's look at the directions the world's technology giants have chosen:
OpenFog, a fog computing research project co-founded and funded by Arm, Cisco, Dell, Intel, Microsoft, and Princeton University;
Discovery, a fog computing and large-scale distributed cloud research project jointly led by Orange (France Telecom) and Inria (French National Institute of Computer and Automation);
Huawei's "all-cloud" strategy and EC-IoT, along with the Edge Computing Consortium it helped establish in 2016;
Intel’s "Cloud Computing at the Edge" project;
NTT's "Edge Computing" project, AT&T's "Cloud 2.0" project;
Greengrass, the edge computing project released by Amazon AWS;
The IoT Edge project launched by Microsoft Azure, focused on edge computing;
The Cloud IoT Core project released by Google;
LinkEdge project released by Alibaba Cloud.
From 2016 to the present, the giants have launched a fierce competition on the road to edge computing, and the track has been very crowded.
Edge computing originated from the need to build virtual networks across the wide area network. Operators needed a simple management platform similar to cloud computing, so miniature versions of the cloud management platform began to enter the market; in this sense, edge computing was born out of cloud computing. As these micro-platforms evolved, especially thanks to virtualization technology (which partitions one physical computer into multiple logical computers running simultaneously, each able to run a different operating system, with applications running in mutually independent spaces without affecting one another, significantly improving hardware utilization), people discovered that such a platform could manage thousands of edge nodes and meet the needs of diverse scenarios. After vendors continuously improved the platform and enriched its functionality, edge computing entered the fast lane of development.
2. Why edge computing is needed
Cloud computing and edge computing are often compared. As mentioned above, edge computing was born out of cloud computing. So, given that cloud computing already exists, why do we need edge computing?
Everyone is familiar with cloud computing. It has enormous computing power and massive storage capacity, and all kinds of applications can be built on it with different software tools. Many of the apps we use, such as live-streaming platforms and e-commerce platforms, essentially depend on cloud computing. Edge computing, born out of cloud computing, sits close to the device side and responds quickly, but it cannot handle large-scale computing and storage. The relationship between the two can be explained by the nervous system of the human body.
Cloud computing can process huge amounts of information and store both short- and long-term data, much like our brain. The brain is the largest, most complex, and highest part of the central nervous system; it regulates body functions and is the material basis of higher neural activities such as consciousness, spirit, language, learning, memory, and intelligence. Its gray matter contains billions of nerve cells, which form the basis of intelligence. Gray matter is not confined to the brain: the human spinal cord also contains gray matter and acts as a simple nerve center, responsible for reflex actions of the limbs and trunk and for relaying neural signals between the brain and the periphery. We all learned about the knee-jerk reflex in junior-high biology; it is evidence of the spinal cord's ability to respond on its own. Edge computing is to cloud computing what the spinal cord is to the brain: it responds quickly and does not need the cloud's support, but it is less intelligent and cannot handle complex information processing.
Everyone has been injured at some point. Whether pricked or scalded, the body reacts quickly: the limb withdraws before we are even aware of the pain. While the spinal cord issues the withdrawal instruction, it also forwards the pain signal to the brain, which is when we feel pain. Looking at the whole process, this risk-avoiding action happens before consciousness arises, and its speed protects the body from harm. After hundreds of millions of years of evolution, the human body is highly refined; if it is structured this way, there must be a reason. Consider this set of data: among the nerve fibers connecting the spinal cord to the muscles, the large-diameter neurons covered with a myelin sheath transmit signals at 70-120 meters per second, whereas neurons in the brain transmit signals at only 0.5-2 meters per second. The gap between them is enormous.
Therefore, the spinal cord makes certain quick decisions in place of the brain, entirely to fulfil specific bodily functions, and its role is irreplaceable. Once we accept that this arrangement has a reasonable explanation, we can also accept the analogous conclusion with peace of mind: even with cloud computing, edge computing is still needed. Of course, this conclusion also holds up under more rigorous logical analysis.
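The speed gap quoted above can be turned into a rough back-of-envelope calculation. The path lengths below are illustrative assumptions, not physiological measurements; only the conduction speeds come from the text:

```python
# Back-of-envelope comparison of signal travel time, using the
# conduction speeds quoted above. The 1-meter path length is an
# illustrative assumption, not a physiological measurement.

def travel_time_ms(distance_m: float, speed_m_per_s: float) -> float:
    """Time for a signal to cover distance_m at speed_m_per_s, in milliseconds."""
    return distance_m / speed_m_per_s * 1000

# Fast myelinated fibers (~100 m/s) over a ~1 m reflex path.
fast_path = travel_time_ms(1.0, 100.0)   # 10 ms

# The same path at the slow (~1 m/s) speed quoted for brain neurons.
slow_path = travel_time_ms(1.0, 1.0)     # 1000 ms

print(f"fast path: {fast_path:.0f} ms, slow path: {slow_path:.0f} ms")
# The ~100x ratio mirrors the edge-vs-cloud argument: when speed
# matters, decide close to where the signal originates.
```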
There will be two trends in the future development of the Internet of Things: massive connections and the massive data they generate. How massive? Gartner (one of the world's most authoritative IT research and consulting companies, founded in 1979 and headquartered in Stamford, Connecticut, USA) predicts that there will be as many as 14.2 billion connected things in use in 2019, reaching 25 billion by 2021, and all of them will generate large amounts of data. A mobile phone is also a connected thing; you can estimate the traffic one generates each month, probably on the order of 100 GB. But the Internet of Things contains far more than phones, for example the following two objects:
Take the Boeing 787: each flight can generate terabytes of data, and the United States collects 3.6 million flight records every month. Monitoring the 25,000 engines across all aircraft, each engine generates 588 GB of data a day. Uploading data at this scale to cloud servers imposes harsh requirements on both computing power and bandwidth. A wind turbine, meanwhile, is fitted with sensors measuring wind speed, blade pitch, oil temperature, and so on, sampled every few milliseconds to detect wear in the blades, gearbox, and converters. A wind farm with 500 turbines generates about 2 PB of data per year.
Uploading PB-scale data to a cloud computing center in real time for decision-making imposes demanding requirements on both computing power and bandwidth, to say nothing of the responsiveness problems caused by latency. In such scenarios edge computing shows its advantages: deployed near the device side, it can feed decisions back in real time through local algorithms and filter out most of the raw data, effectively reducing the load on the cloud and making massive connections and massive data processing feasible. Edge computing will therefore serve as a complement to cloud computing, and the two will coexist in the Internet of Things architecture of the future.
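The bandwidth pressure behind these figures is easy to sanity-check using only the numbers quoted above:

```python
# Sanity-check the sustained uplink rates implied by the figures above.

SECONDS_PER_DAY = 24 * 3600

def gb_per_day_to_mbps(gb_per_day: float) -> float:
    """Convert a daily data volume in GB to a sustained rate in Mbit/s (1 GB = 8000 Mbit)."""
    return gb_per_day * 8 * 1000 / SECONDS_PER_DAY

# One aircraft engine: 588 GB per day.
engine_rate = gb_per_day_to_mbps(588)

# A 500-turbine wind farm: 2 PB/year, taken as 2,000,000 GB over 365 days.
farm_rate = gb_per_day_to_mbps(2_000_000 / 365)

print(f"engine: ~{engine_rate:.1f} Mbit/s sustained")
print(f"wind farm: ~{farm_rate:.0f} Mbit/s sustained")
# Sustained uplinks of this size, per engine or per site, are exactly
# what edge-side filtering is meant to avoid.
```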
Having said all that, let's summarize the advantages of edge computing:
Low latency: computing power is deployed near the device side, so device requests receive real-time responses;
Low-bandwidth operation: moving work closer to the user or the data collection terminal reduces the impact of site bandwidth limits, especially when edge nodes serve requests locally and cut down the volume of data that must be sent to the central hub;
Privacy protection: data is collected, analyzed, and processed locally, which effectively reduces its exposure to public networks and protects data privacy.
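The second and third advantages above come down to one pattern: process raw readings on the edge node and send only a compact summary upstream. A minimal sketch of that pattern follows; the threshold value and the upload stub are illustrative assumptions, not part of any real product's API:

```python
# Minimal sketch of edge-side filtering: summarize readings locally,
# forward only the aggregate plus anomalies to the cloud.
# ANOMALY_THRESHOLD and upload_to_cloud are illustrative assumptions.
from statistics import mean

ANOMALY_THRESHOLD = 90.0  # hypothetical limit, e.g. oil temperature in Celsius

def upload_to_cloud(payload: dict) -> None:
    """Stub for the (expensive, bandwidth-limited) uplink to the cloud center."""
    print("uploading:", payload)

def process_batch(readings: list[float]) -> dict:
    """Summarize a batch of sensor readings locally; flag anomalous values."""
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    summary = {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "anomalies": anomalies,
    }
    # Only the summary crosses the network: N raw readings shrink to
    # one small record plus any values that need cloud attention.
    upload_to_cloud(summary)
    return summary

result = process_batch([71.2, 69.8, 70.5, 93.1, 70.0])
# result["anomalies"] == [93.1]
```

The same shape applies whether the "edge node" is a wind-farm gateway or a retail store server: raw data stays local, decisions happen immediately, and the cloud receives only what it actually needs for long-term analysis.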
3. Application scenarios of "edge computing"
Since edge computing is an important complement to cloud computing, what are its application scenarios? The basic feature of the edge computing model is that computing power is brought closer to the user: sites are widely distributed, and edge nodes are connected over the wide area network.
1. "Out-of-the-box cloud" for retail/finance/remote connectivity: provides a series of customizable edge computing environments. This type of edge computing mainly serves enterprises and specific industrial applications. Combined with a distributed architecture, it achieves the following: lower hardware cost, standardized multi-site deployment, flexible replacement of applications deployed on the edge side (hardware-independent: the same application runs on all nodes), and improved operational stability under weak network conditions. Where networking is constrained, the platform can be configured for limited connectivity and still provide content caching as well as computing, storage, and network services, as in new-retail edge computing environments.
2. Mobile connectivity: until 5G networks are widely deployed, mobile networks remain limited and unstable, so mobile/wireless networks can also be regarded as a common environment for edge computing. Many applications depend to some degree on mobile networks: augmented reality for remote repair, telemedicine, IoT devices collecting data from public utilities (water, gas, electricity, facility management), inventory, supply chain and transportation solutions, smart cities, smart roads, and remote security applications. All of these benefit from the near-end processing capabilities of edge computing.
3. Universal Customer Premises Equipment (uCPE): characterized by limited network connectivity and relatively stable workloads that nonetheless require high availability. It also needs a way to support mixed placement of data and applications across hundreds to thousands of nodes, and expanding existing uCPE deployments will become a new requirement. This suits Network Function Virtualization (NFV) applications well, especially when different sites require different service-chain applications, or when a set of different applications in a region must cooperate in a unified way. Because local resources are used and data must be stored and processed under intermittent network connections, a mesh or hierarchical structure may be needed; self-healing and self-management combined with remote node management are necessary conditions.
4. Satellite communications (SATCOM): characterized by large numbers of terminal devices distributed in the most remote and harsh environments. Using these dispersed platforms to provide hosted services makes excellent sense, especially given the extremely high latency, limited bandwidth, and cost of cross-satellite communication. Concrete examples include ships (from fishing boats to tankers), aircraft, oil rigs, mining operations, and military infrastructure.
4. The "players" of edge computing (alliances, companies, participants)
Let's take a look at which types of companies have entered this field.
The first type of edge computing players: cloud computing giants
Edge computing disrupts cloud computing to some extent, but it also has strong synergy with it. To protect their existing market space, cloud service providers at home and abroad have deployed edge computing in advance to avoid being swallowed. In the IoT era, far more terminals and sensors come online than in the Internet era, and every IoT node generates large amounts of real-time data. This means cloud providers must push computing out to the edge, and the required scale of investment and the short time window make this a huge challenge. Of course, cloud providers are unwilling to let others eat into their territory: Microsoft, Amazon, Google, Alibaba, Huawei, and Baidu are all actively deploying edge computing.
Although these companies take different technical routes to edge computing, they generally follow one rule: integrate the edge and the cloud tightly, exploiting the edge's low latency and security while combining them with the cloud's big-data analysis capabilities.
Amazon AWS was the first to release its own edge computing framework, Greengrass, which lets user data be processed locally while designated functions extract and upload selected data to the cloud.
Microsoft followed with Azure IoT Edge. After reorganizing the Windows team this year, Microsoft announced a US$5 billion investment in the Internet of Things market, with edge computing as the main focus: at the Build 2018 developer conference it officially open-sourced Azure IoT Edge, saying that developers would in future be able to modify and debug Azure IoT Edge and have more control over their edge applications.
In 2017, Google released a new edge computing service, Cloud IoT Core, to help companies connect and manage IoT devices and quickly process the data those devices collect.
The second type of edge computing players: traditional equipment giants
With the rapid rise of IoT, Intel has begun pushing into the edge computing market and has launched several platforms. At the edge, Intel can provide computing suites, or computing containers, of different sizes. Intel's ambition is not limited to being a hardware platform vendor; it wants to build its own ecosystem, and it has already partnered with Wind River to launch a convenient edge computing system.
Dell announced its high-profile entry into the IoT market as early as 2016, and as the initiator of the edge computing project under the Linux Foundation, its position should not be underestimated. EdgeX Foundry, an open-source project under the Linux Foundation, is committed to building an edge computing platform with plug-and-play capabilities, and Dell has taken the lead in launching an edge gateway based on EdgeX Foundry.
In mid-2017, Cisco interconnected its edge computing offering with Microsoft's Azure cloud platform, ensuring it can provide enterprises with integrated edge-to-cloud services.
ARM's platforms currently include Cortex-A, Cortex-R, Cortex-M, Machine Learning, and SecurCore. A huge number of smartphones (iOS and Android), commercial advertising displays, parcel lockers, and similar devices run on ARM. With the rise of edge computing technology, especially device-side face recognition and speech recognition, ARM's high-end chips have begun to reach the market and can strongly support the development of AI.
With video surveillance moving to IP at scale, surveillance companies have increasingly become IoT companies. Monitoring devices collect physical data in the form of images, and combined with powerful on-device analysis they can provide face recognition, traffic monitoring, and other functions, becoming an important part of the smart city.
The third type of edge computing players: CDN giants
The core value of a CDN (content delivery network) is to intelligently distribute digital content to nodes closer to users, improving overall delivery efficiency, reducing network latency, and saving bandwidth. Its inherent edge-node attributes, low latency and low bandwidth consumption, give it a first-mover advantage in the edge computing market; the CDN is itself an embryonic form of edge computing.
Akamai, the global CDN leader, cooperated with IBM on edge computing as early as 2003. In June of this year, Akamai and IBM began providing edge-based services on IBM's WebSphere.
Wangsu Technology has also made edge computing its core strategy: it began building edge computing networks in 2016, launched edge computing microservices in 2017, and will gradually open up edge IaaS and PaaS services.
Cloudflare launched Cloudflare Workers in 2017, opening up edge computing services in the form of microservices and letting users program at the edge, an indication that it has built an initial edge computing platform.
Nuu:bit announced integration with Microsoft's Azure Cosmos DB; Microsoft's Azure platform can likewise ingest Nuu:bit's data. This too is a significant breakthrough.
In the first half of this year, Limelight launched an enhanced version of its EdgePrism OS software across its CDN network, allowing users to ingest and deliver local content at the edge.
The fourth type of edge computing players: operators
In a highly competitive market, mobile operators have begun deploying mobile edge computing (MEC) in order to offer high-performance, low-latency services.
AT&T has said that edge computing is a key enabler of new technologies, including the Internet of Things, software-defined networking, blockchain, artificial intelligence, and 5G. AT&T is using edge computing to support AR/VR applications, autonomous driving, and smart city projects.
Deutsche Telekom is using edge computing to improve connectivity for self-driving cars, advance digital transformation, and deliver better network performance for 5G.
The fifth type of edge computing players: core research institutions
The edge computing market keeps growing. Not only are many well-known companies deploying edge computing; many research institutions, including universities at home and abroad, have begun to embrace this large market as well.
In January 2018, Carnegie Mellon University began leading a new project, CONIX, which received $27.5 million in funding; over the next five years CONIX will create a new networked computing architecture between edge devices and the cloud, preparing for the rise of edge computing. In February, Deutsche Telekom partnered with Crown Castle to set up an edge computing laboratory in the United States, with Carnegie Mellon University in Pittsburgh as the project's central site.
The sixth type of edge computing players: industry alliances
The Edgecross Consortium (Japan) was established at the end of 2017 by six companies: Mitsubishi Electric, Advantech, Omron, NEC, IBM Japan, and Oracle Japan. The edge computing platform defined by the consortium has two goals: first, to realize small-scale IoT systems on the production site; second, to attach IoT data tags to production data.
The Avnu Alliance is a community that uses open standards to create an interoperable ecosystem of low-latency, time-synchronized, highly reliable networked devices. On December 5, 2017, Avnu signed a cooperation agreement with the Edge Computing Consortium to promote the common interests of industrial networking and edge computing.
ETSI (the European Telecommunications Standards Institute) has taken the initiative to standardize MEC. Operators can open the edge of their radio access network (RAN) to authorized third parties, enabling them to flexibly and quickly deploy innovative applications and services for mobile users, enterprises, and vertical segments. MEC is the natural result of the evolution of mobile base stations and the convergence of IT and telecom networks.
On January 24, 2019, the Linux Foundation announced in San Francisco the establishment of LF Edge, a foundation dedicated to edge computing. LF Edge aims to define a unified software stack, common terminology, and a development framework for the various types of edge computing applications, promoting unification of the field's underlying infrastructure and thereby the rapid development of the whole industry.