Why does the Internet need protocols?

In the Internet's early days, data moved by simple packet switching: files and messages were chopped into small, individually addressed packets. Different packets may take different routes and arrive at different times, to be reassembled at their destination. Back then, most shared content, including e-mail and Web browsing, involved small sets of data transmitted with no particular urgency. It made sense for routers to process all packets equally because traffic patterns were mostly the same. That picture has changed dramatically over the past decade. Network traffic today consists of bigger data sets, organized in more varied and complex ways.

For instance, smart meters produce energy data in short, periodic bursts, while Internet Protocol television (IPTV) services generate large, steady streams. Basic packet switching is simply too rigid to manage such a dynamic load, so engineers are turning to nature for inspiration. In particular, the human brain and body are excellent models for building better data networks. Say, for example, you want to watch a YouTube clip.

The video's packets start at the outermost reaches of the Net: the access network, where terminals such as phones, sensors, servers, and PCs link up. Then the packets move through regional networks to the core network, or backbone, where dense fiber-optic cables ferry traffic at high speeds and across vast distances. Finally, the packets make their way back to an access network, the one where your smartphone resides.

Routers send each incoming packet along the best available route through this hierarchy. It works like this: inside each router, a collection of microchips called the routing engine maintains a table that lists the pathways to possible destinations. The routing engine continually updates this table using information from neighboring nodes, which monitor the network for signs of traffic jams. When a packet arrives, the engine consults the table to choose the packet's next hop. Then it switches the packet to a queue, or buffer, where it awaits transmission.
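A sketch of the lookup a routing engine performs, assuming a longest-prefix-match table. The prefixes and next-hop names below are invented for illustration; real routers do this in specialized hardware, not Python.

```python
import ipaddress

# A toy routing table: destination prefix -> next hop.
# All prefixes and hop names here are invented for illustration.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "router-a",
    ipaddress.ip_network("10.1.0.0/16"): "router-b",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def next_hop(dst: str) -> str:
    """Pick the most specific (longest) prefix that contains dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.3"))   # 10.1.0.0/16 is more specific -> router-b
print(next_hop("192.0.2.1"))  # only the default route matches
```

The "continual updates" from neighbors would, in this sketch, amount to adding, removing, or re-pointing entries in `ROUTES` as link conditions change.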

The router repeats this process for each incoming packet. There are several disadvantages to this design. First, it requires a lot of computational muscle: imagine if a mail carrier had to recalculate the delivery route for each letter and package as it was collected. Second, routers ignore the fact that many incoming packets may be headed for the same terminal. Third, routers overlook the type of data flow each packet belongs to. If more packets accumulate than the buffer can hold, the router discards the excess somewhat randomly.
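The indiscriminate discard described above can be sketched as a fixed-size FIFO buffer that simply drops whatever arrives once it is full. The capacity and packet names are invented for illustration.

```python
from collections import deque

class RouterBuffer:
    """A FIFO queue with a hard capacity: once full, new packets are
    discarded regardless of what flow they belong to."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        if len(self.queue) >= self.capacity:
            self.dropped += 1     # no notion of priority or flow type
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

buf = RouterBuffer(capacity=3)
for pkt in ["voice-1", "video-1", "voice-2", "file-1", "file-2"]:
    buf.enqueue(pkt)
print(buf.dropped)  # 2 packets were discarded once the buffer filled
```

Note that whether the dropped packets were urgent voice or a resumable file transfer makes no difference to this buffer, which is exactly the design flaw the article describes.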

Similarly, a large file transfer could clog up voice and browsing traffic so that no single flow reaches its destination in a timely manner. And what happens when a crucial routing node fails, as when a Vodafone network center in Rotterdam, Netherlands, caught fire? Ideally, other routers will figure out how to divert traffic around the outage.

But often, local detours just move the congestion elsewhere. Some routers become overloaded with packets, causing more rerouting and triggering a cascade of failures that can take down large chunks of the network.

After the Vodafone fire, mobile base stations were out of commission for more than a week. Routers could manage data flows more effectively if they made smarter choices about which packets to discard and which ones to expedite. To do this, they would need to gather much more information about the network than simply the availability of routing links. For instance, if a router knew it was receiving high-quality IPTV packets destined for a satellite phone that could not render such a stream anyway, it might choose to drop those packets in order to prioritize others that are more likely to reach their destinations.
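One way such a smarter drop policy might look, as a sketch: keep the buffer ordered by an estimated usefulness score and evict the least useful queued packet rather than the newest arrival. The scores and packet names below are invented for illustration.

```python
import heapq

class SmartBuffer:
    """A bounded buffer that, when full, discards the packet judged
    least useful instead of dropping arrivals blindly."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.heap = []  # min-heap of (usefulness, packet)

    def enqueue(self, usefulness: int, packet: str):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (usefulness, packet))
        elif usefulness > self.heap[0][0]:
            # Evict the least useful queued packet in favor of the newcomer.
            heapq.heapreplace(self.heap, (usefulness, packet))
        # else: the newcomer itself is the least useful, so drop it.

buf = SmartBuffer(capacity=2)
buf.enqueue(1, "hd-iptv-to-satphone")  # low value: the phone can't use it
buf.enqueue(5, "voice-call")
buf.enqueue(4, "web-page")             # displaces the IPTV packet
print(sorted(p for _, p in buf.heap))  # ['voice-call', 'web-page']
```

The hard part in practice is not the data structure but computing the usefulness score, which is exactly the extra network knowledge the article says routers currently lack.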

Ultimately, routers will have to coordinate their decisions and actions across all levels of the Internet, from the backbone to the end terminals and the applications running on them. And as new user devices, services, and threats come on line in the future, the system will need to be smart enough to adapt.

[Sidebar: four natural models for network design]
- Examples: social relationships, gene regulation, neural networks in the brain. Advantage: when data can reach any destination in a small number of steps, latency stays low.
- Examples: human sexual partners, scientific-paper citations. Advantage: minimizing the number of hubs helps stop the spread of viruses and protects against attacks.
- Examples: traffic-control systems, including stoplights, yield signs, and speed limits. Advantage: controlling data flows helps prevent traffic spikes from causing network congestion or collapse.
- Examples: autonomic functions such as breathing and digesting versus cognition. Advantage: real-time control lets nodes coordinate actions, while gradual learning helps the network evolve.

The first step in designing a more intelligent Internet is to endow every connected computer with the ability to route data, so that neighboring devices can form local mesh networks. By off-loading local traffic from the Internet, mesh networks would free up bandwidth for long-distance services, such as IPTV, that would otherwise require costly infrastructure upgrades.

These networks would also add routing pathways that bypass bottlenecks, so traffic could flow to areas where Internet access is now poor, extending cellular service underground, for example, and providing extra coverage during natural disasters.

But to handle data and terminals of many different kinds, routers, including the terminals themselves, need better methods for building and selecting data pathways. One way to engineer these protocols is to borrow tricks from a complex network that already exists in nature: the human autonomic nervous system. This system controls breathing, digestion, blood circulation, body heat, the killing of pathogens, and many other bodily functions. It does all of this, as the name suggests, autonomously, without our direction or even our awareness.

Most crucially, the autonomic nervous system can detect disturbances and make adjustments before these disruptions turn into life-threatening problems. If all this sounds a little vague, consider the example of digestion.

To begin breaking down a meal, say a hamburger, the stomach must secrete the proper amount of gastric juices. This might seem like a simple calculation: more meat, more juices. In fact, the parts of the brain that control this process rely on a smorgasbord of inputs from many other systems, including taste, smell, memory, blood flow, hormone levels, muscle activity, and immune responses.

Does that burger contain harmful bacteria that must be killed or purged? Does the body need to conserve blood and fuel for more important tasks, such as running from an enemy?

By coordinating many different organs and functions at once, the autonomic system keeps the body running smoothly. By contrast, the Internet addresses a disturbance, such as a spike in traffic or a failed node, only after it starts causing trouble.

Routers, servers, and computer terminals all try to fix the problem separately, rather than work together. This often just makes the problem worse—as was the case during the Vodafone fire. A more cooperative Internet requires routing and forwarding protocols that behave more like the autonomic nervous system. Network engineers are still figuring out how best to design such a system, and their solutions will no doubt become more sophisticated as they work more closely with biologists and neuroscientists.

Algorithms that follow this architecture must perform four main tasks. First, monitoring algorithms gather data about the state of the network, such as traffic volumes, link conditions, and buffer occupancy. Then the knowledge algorithms analyze all that data. For example, if a router that typically receives low-quality video streams suddenly receives a high-quality one, the algorithms calculate whether the router can process the stream before the video packets fill its buffer. Next, decision-making algorithms use that analysis to devise a plan of action.

Lastly, they execute the plan. The execution commands may modify the routing tables, tweak the queuing methods, reduce transmission power, or select a different transmission channel, among many possible actions. Not only will this cycle help prevent individual routers from failing, but by monitoring data from neighboring nodes and relaying commands, it will also create feedback loops within the local network.
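The autonomic cycle described above (monitor the network, analyze the data, decide on a plan, execute it) can be sketched as a simple control loop. The metrics, thresholds, and actions here are all invented for illustration; a real autonomic router would learn and refine them over time.

```python
# A minimal sketch of a monitor-analyze-decide-execute loop.
# All fields, thresholds, and action names are hypothetical.

def monitor(router):
    """Gather raw observations about this node and its neighbors."""
    return {"buffer_fill": router["queued"] / router["capacity"],
            "neighbor_congested": router["neighbor_load"] > 0.8}

def analyze(obs):
    """Turn observations into knowledge: is trouble brewing?"""
    return obs["buffer_fill"] > 0.7 or obs["neighbor_congested"]

def decide(trouble):
    """Choose an action before the disturbance becomes a failure."""
    return "reroute-bulk-flows" if trouble else "no-op"

def execute(router, action):
    """Carry out the plan, e.g. by updating the routing table."""
    if action == "reroute-bulk-flows":
        router["routing_table_updates"] += 1
    return action

router = {"queued": 75, "capacity": 100, "neighbor_load": 0.5,
          "routing_table_updates": 0}
action = execute(router, decide(analyze(monitor(router))))
print(action)  # reroute-bulk-flows (buffer is 75% full, above threshold)
```

The key property, as in the body's autonomic system, is that the loop acts on early warning signs (a filling buffer) rather than waiting for packets to be lost.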

In turn, these local loops swap information with other local networks, thereby propagating useful intelligence across the Net. Computer networks employ various types of equipment, including routers, switches, hubs, and network interface cards. These pieces of equipment come from different vendors, but they must all work together or the network does not operate correctly.

Network protocols define the rules that govern network communication. These rules determine things like packet format, type, and size. They also determine what happens when an error occurs, which part of the network is supposed to handle the error, and how.

Network protocols work in layers, the highest being what the user sees, and the lowest being the wire that the information travels across. These layers communicate with each other according to the rules, allowing human communication to occur accurately and efficiently.
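A minimal sketch of how that layering works in practice: each layer wraps the data from the layer above in its own header, and the receiver peels the headers off in reverse order. The header strings below are invented placeholders, not real protocol formats.

```python
# Toy illustration of protocol layering and encapsulation.
# Header strings are hypothetical stand-ins for real binary headers.

def send(message: str) -> str:
    app = message                   # highest layer: what the user sees
    transport = "TCP|" + app        # transport layer adds its header
    network = "IP|" + transport     # network layer adds its header
    link = "ETH|" + network         # lowest layer: onto the wire
    return link

def receive(frame: str) -> str:
    for header in ("ETH|", "IP|", "TCP|"):   # strip headers bottom-up
        if not frame.startswith(header):
            raise ValueError("malformed frame")
        frame = frame[len(header):]
    return frame

wire = send("hello")
print(wire)            # ETH|IP|TCP|hello
print(receive(wire))   # hello
```

Each layer only needs to understand its own header, which is what lets equipment from different vendors interoperate as long as they follow the same rules.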

Syntax refers to the structure or format of data and signal levels. It indicates how to read the data in the form of bits or fields, and it also decides the order in which the data is presented to the receiver. Example: a protocol might expect that the size of a data packet will be 16 bits.

So, every communication following that protocol should send data packets of exactly 16 bits. Semantics refers to the interpretation or meaning of each section of bits or fields. It specifies which field defines what action, how a particular section of bits or pattern is to be interpreted, and what action needs to be taken. It includes control information for coordination and error handling.
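The 16-bit syntax rule above can be sketched with Python's `struct` module. The split into an 8-bit type field and an 8-bit payload field is an invented layout, used only to show how a fixed packet size constrains both sender and receiver.

```python
import struct

# Toy protocol syntax: every packet is exactly 16 bits,
# split into an 8-bit type field and an 8-bit payload field.
# (This field layout is hypothetical.)

def encode(pkt_type: int, payload: int) -> bytes:
    return struct.pack("!BB", pkt_type, payload)  # network byte order

def decode(data: bytes):
    if len(data) != 2:  # 16 bits = 2 bytes; anything else is malformed
        raise ValueError("packet must be exactly 16 bits")
    return struct.unpack("!BB", data)

wire = encode(1, 42)
print(len(wire) * 8)   # 16 bits on the wire
print(decode(wire))    # (1, 42)
```

The semantics then live in code like `decode`'s caller, which knows, for example, that `pkt_type == 1` means one action and `pkt_type == 2` another.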

Example: the semantics determine whether the bits of an address field identify the route to be taken, the final destination of the message, or something else. Timing refers to when data should be sent and how fast it can be sent. Example: if a sender transmits data faster than the receiver can consume it, say to a receiver that can handle only 20 Mbps, there may be data losses or the packets might get dropped.
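That rate mismatch can be simulated with a toy sender and receiver. All rates, the buffer size, and the tick count are invented for illustration.

```python
# Timing sketch: a sender offering more packets per tick than the
# receiver can drain. All numbers here are hypothetical.

def simulate(send_rate, recv_rate, buffer_cap, ticks):
    queued, delivered, dropped = 0, 0, 0
    for _ in range(ticks):
        for _ in range(send_rate):        # sender pushes packets
            if queued < buffer_cap:
                queued += 1
            else:
                dropped += 1              # receiver's buffer overflows
        consumed = min(recv_rate, queued)  # receiver drains what it can
        queued -= consumed
        delivered += consumed
    return delivered, dropped

# A sender at 50 packets/tick overwhelms a 20-packet/tick receiver:
print(simulate(send_rate=50, recv_rate=20, buffer_cap=100, ticks=10))
# Throttling the sender to the receiver's rate eliminates the loss:
print(simulate(send_rate=20, recv_rate=20, buffer_cap=100, ticks=10))
```

The second run loses nothing, which is the point of the synchronization requirement stated next.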

So, proper synchronization is needed between the sender and the receiver.

AfterAcademy, 6 Dec: What are Protocols and what are the key elements of protocols?
