A technique for improving Web-server performance by consolidating incoming TCP/IP requests, reducing the number of times server connections must be opened and closed.
A TCP multiplexing system improves the efficiency of a Web server farm or Internet service by acting as a thin connection (or channel) proxy to servers, caches and content delivery networks. The system receives TCP/IP requests, consolidates them and applies logic to the opening and closing of server connections. It can direct and funnel client requests into high-speed server sessions that avoid the constant interruption of setup and teardown.
When a user requests a Web page, the browser sends numerous requests for different types of content - such as Java applets, multimedia files and database queries - resulting in as many as 50 connection requests to Web servers.
For each of these, the server opens a socket and allocates memory and processes for that user, opens a session, acknowledges the client's HTTP request, fetches the relevant data from cache or disk, sends the data back over the client's typically slow access connection and finally closes the session.
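The per-request lifecycle described above can be sketched in a few lines. This is a deliberately minimal, hypothetical example - a real Web server allocates far more state per session - but it shows the open-serve-close cycle the server repeats for every object requested:

```python
# Minimal sketch of the per-request lifecycle: the server accepts a new TCP
# connection, handles one HTTP request, sends the response, and tears the
# connection down again. Illustrative only; real servers do much more.
import socket
import threading

# Open a listening socket on an ephemeral loopback port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def handle_one_request():
    conn, _ = srv.accept()            # open a socket for this user
    with conn:
        conn.recv(1024)               # read the client's HTTP request
        body = b"hello"               # fetch relevant data (stubbed)
        conn.sendall(
            b"HTTP/1.0 200 OK\r\nContent-Length: 5\r\n\r\n" + body
        )                             # flow the data back to the client
    # leaving the 'with' block closes the session

t = threading.Thread(target=handle_one_request)
t.start()

# A client paying the full setup/teardown cost for a single object.
with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"GET / HTTP/1.0\r\n\r\n")
    reply = cli.recv(1024)

t.join()
srv.close()
print(reply.split(b"\r\n")[0].decode())
```

Every object fetched this way repeats the connection handshake and teardown, which is exactly the overhead TCP multiplexing targets.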
If a user's session includes static and dynamic content and an e-commerce transaction, each type of request requires a new connection, and the Web server or servers must dedicate resources for each. This slows the server's response time, taking resources from its primary task - to serve Web content.
HTTP 1.1 lets a browser send multiple requests across a persistent connection to a server, eliminating some of this overhead for a single client. In practice, however, content providers effectively turn off this feature, because if connections are left open for every client session, a site soon runs out of server resources. Even when the connection is kept open, its timeout typically must be limited to 5 to 15 seconds to avoid tying up servers with idle connections.
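The HTTP 1.1 behavior described above is easy to demonstrate: several requests travel over one TCP connection instead of a connection per object. The server, paths and handler below are made up for the sketch; Python's standard library does the keep-alive handling:

```python
# Sketch of HTTP/1.1 persistent connections: three requests reuse a single
# TCP connection instead of paying setup/teardown three times.
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from http.client import HTTPConnection

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"     # HTTP/1.1 keeps the connection open
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):     # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
bodies = []
for path in ("/a", "/b", "/c"):       # three requests, one TCP connection
    conn.request("GET", path)
    bodies.append(conn.getresponse().read())
conn.close()
server.shutdown()
print(bodies)
```

The trade-off the article notes applies here too: a server that honors keep-alive for every browser must hold those sockets open, which is why timeouts are kept short and why multiplexing moves the long-lived connections behind a proxy instead.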
TCP multiplexing systems keep client-side connections open with longer timeouts. By eliminating most of the "hello-goodbye" setup and teardown overhead so that transactions can flow freely over the WAN via managed server connections, these systems dramatically improve the efficiency of high-traffic Web sites and Internet services.
WAN latency is a significant cause of congestion on the Internet. Dynamic transactions and content updates require access to origin sites across the WAN, causing delays that add up to several seconds, or even minutes, for large transfers.
Consider a Web page with 50 objects, each requiring three packets to open a connection and four packets to close. Assuming 200 msec of latency per round trip and four concurrent browser connections, that's 16.8 seconds of TCP overhead to load one page, vs. just 1.4 seconds across a persistent, managed connection using the TCP multiplexing method.
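The arithmetic behind those figures can be made explicit. One assumption not stated in the text is needed to reproduce them: each overhead packet costs one full 200-msec round trip, and the four concurrent connections work through the 50 objects in 50 // 4 = 12 sequential waves (the figure evidently rounds down):

```python
# Reproducing the article's 16.8-second vs. 1.4-second comparison.
# Assumption: every setup/teardown packet costs one full round trip.
objects = 50
packets_per_conn = 3 + 4          # 3-packet open + 4-packet close
rtt = 0.2                         # 200 msec of latency per round trip
concurrency = 4                   # concurrent browser connections

waves = objects // concurrency                 # 12 sequential waves
per_page = waves * packets_per_conn * rtt      # overhead without multiplexing
multiplexed = packets_per_conn * rtt           # one persistent connection

print(round(per_page, 1), round(multiplexed, 1))   # 16.8 1.4
```

The persistent-connection figure pays the 7-packet handshake cost only once for the whole page, which is where the roughly twelvefold saving comes from.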
The TCP multiplexing engine monitors each incoming and outgoing packet to manage sessions and determine when a connection can be used for another client. The server no longer has to expend its processing power setting up and tearing down sessions for each user request.
It can also manage packets to protect against traffic spikes and attacks, provide access control and direct content delivery. Additionally, because client sessions are managed by the TCP multiplexing engine, the network overcomes the TCP slow-start overhead inherent in TCP/IP networking. The client no longer has to wait while the server gauges the quality of the client connection.
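The connection-reuse idea at the heart of the engine can be sketched as a simple pool. Everything here is illustrative - real TCP multiplexers are packet-level network appliances, not application code - but the accounting shows the effect: many client requests, few server connections:

```python
# Toy sketch of TCP multiplexing: client requests are funneled over a small
# pool of long-lived "server connections" instead of opening one per request.
# All names are hypothetical; a fake callable stands in for an open socket.
import queue

class ConnectionPool:
    """Keeps a fixed set of server connections open for reuse."""
    def __init__(self, size, connect):
        self._pool = queue.Queue()
        self._connect = connect
        self.opened = 0
        for _ in range(size):
            self._pool.put(self._open())

    def _open(self):
        self.opened += 1              # handshake overhead paid only here
        return self._connect()

    def send(self, request):
        conn = self._pool.get()       # borrow an idle managed connection
        try:
            return conn(request)
        finally:
            self._pool.put(conn)      # return it; no teardown, no re-handshake

def connect():
    # Stand-in for establishing a real server connection.
    return lambda req: f"response to {req}"

pool = ConnectionPool(size=4, connect=connect)
replies = [pool.send(f"object {i}") for i in range(50)]
print(pool.opened)                    # 50 requests, only 4 connections opened
```

The server behind such a pool never sees the per-client setup and teardown churn; it just services a steady stream of requests over a handful of warm connections.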