Position Paper

Using Virtual Machines with Gulfstream’s Barcode Gateway

The Barcode Gateway is unlike any other piece of software you are familiar with from using Dynamics SL. The Barcode Gateway (BG) sits on top of the running Dynamics SL client. It consists of two major programs, the GS20KFE and the GS20000. The KFE communicates interactively with the scanners and creates the scanner screen displays; it in turn communicates with the GS20000, which talks to the SL client. In short, the KFE handles all of the scanner communications and the GS20000 handles all of the SL communications. The KFE gathers the relevant information in memory, and when the user presses “P” for Process on the gun, the KFE sends the information to the GS20000 in one big burst; the GS20000 then uses the SL SWIM API to create the batch in SL and optionally requests SL to release the batch.

In terms of TCP/IP traffic, there is very little (minor) traffic between the scanners and the workstation running the BG; however, there is major (heavy) traffic between that workstation and the MS SQL Server. There are major traffic bursts when creating batches and even larger ones when doing inquiries. The inquiries are heavy because the BG does not simply ask the SQL Server for, say, one inventory part. It actually asks for a range of parts before and after the requested part number and caches the data. This speeds up scrolling through part number inquiries, since the data is already in BG memory.
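To make the caching behavior concrete, the sketch below shows, in rough Python, how a range prefetch of this kind works. It is illustrative only: the BG's actual caching logic is internal to the product, and all of the names here (PartCache, query_fn, WINDOW) are hypothetical.

    # Illustrative sketch only; the real BG cache is internal to the product.
    # All names (PartCache, query_fn, WINDOW) are hypothetical.

    WINDOW = 50  # assumed number of neighboring parts prefetched per inquiry

    class PartCache:
        def __init__(self, query_fn):
            self.query_fn = query_fn   # callable that queries SQL Server for a range of parts
            self.cache = {}            # part number -> (description, balance)

        def get(self, part_no):
            if part_no not in self.cache:
                # One burst to SQL Server: the requested part plus a window of
                # neighbors, so scrolling forward or backward is served from memory.
                for row in self.query_fn(part_no, WINDOW):
                    self.cache[row["partno"]] = (row["descr"], row["qty_on_hand"])
            return self.cache.get(part_no)

The point is simply that one inquiry from the gun can translate into a burst of many rows from SQL Server, which is why the inquiry traffic is heavy.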

Given that the BG relies so heavily on the TCP/IP protocol, a reliable and healthy network is critical. The TCP/IP protocol is, at its core, a three-way acknowledgement protocol.

Step One: The message packet is sent from the Sender to the Recipient.
Step Two: The message packet is acknowledged by the Recipient back to the Sender.
Step Three: The Sender acknowledges the receipt of the acknowledgement back to the Recipient.

So if any of these packets is delayed due to network congestion, packet loss, or corruption, the parties slow the communication down to roughly half speed and retry the communications at this new lower speed. If there is still trouble, they slow down to half of the new speed, and so on.
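A back-of-the-envelope sketch of that “halve the speed and retry” behavior is below. It is only a rough model of the description above; real TCP congestion control (slow start, AIMD, fast retransmit) is more nuanced, and the 100 Mbps starting rate is an arbitrary assumption.

    # Rough model of the repeated halving described above. Real TCP congestion
    # control is more sophisticated, but the multiplicative cut on trouble is the key idea.

    def effective_rate(base_rate_mbps, trouble_events):
        """Each congestion or loss event roughly halves the sending rate."""
        rate = base_rate_mbps
        for _ in range(trouble_events):
            rate /= 2.0
        return rate

    if __name__ == "__main__":
        for events in range(5):
            print(f"{events} trouble event(s): ~{effective_rate(100, events):.1f} Mbps")
        # 0 -> 100.0, 1 -> 50.0, 2 -> 25.0, 3 -> 12.5, 4 -> 6.2 Mbps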

Enter virtual machines.

Let’s say there is one physical machine running 5 virtual machines with IP addresses 10.0.0.10 through 10.0.0.14, i.e. .10, .11, .12, .13, .14, while the physical network card itself is 10.0.0.15. This means the .15 physical card has to respond to all five virtual addresses in addition to its own. To further complicate the situation, the network card software layer has to behave like a router rather than a single-address network endpoint. Let’s say the virtual BG machine has an address of .12; the .15 card then has to watch for and respond to any traffic in the range of .10 to .15, and it also has to translate the addresses into and out of the virtual machines, called address translation or spoofing. This translation is neither instantaneous nor error-free.

One can see that when the SQL Server, at say address .20, responds to a BG request for 100 item numbers, descriptions, and balances, it sends a great number of packets, all traveling from the .20 address to the .12 address. If the .15 card fails to see or acknowledge even one percent of those packets, the network communication between the BG and the SQL Server slows down considerably and the retry ratio escalates. That .15 card is busy: it is working hard to keep up with 5 virtual machines, and the address translation overhead makes it a large point of failure, especially with something as TCP/IP timing sensitive as the BG.
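To put a rough number on the one-percent scenario, the classic Mathis et al. approximation for steady-state TCP throughput, throughput ≈ (MSS × 1.22) / (RTT × √loss), gives a feel for how sharply even a small loss rate caps what TCP can deliver. This is only a back-of-the-envelope model; the 1460-byte segment size and the 1 ms LAN round-trip time below are assumptions, not measurements of any BG installation.

    import math

    def mathis_throughput_mbps(loss_rate, mss_bytes=1460, rtt_s=0.001):
        """Back-of-the-envelope TCP throughput cap (Mathis et al. approximation).

        throughput ~= (MSS * 1.22) / (RTT * sqrt(loss_rate))
        Assumes 1460-byte segments and a 1 ms LAN round-trip time.
        """
        bytes_per_sec = (mss_bytes * 1.22) / (rtt_s * math.sqrt(loss_rate))
        return bytes_per_sec * 8 / 1_000_000

    if __name__ == "__main__":
        for loss in (0.0001, 0.001, 0.01):
            print(f"{loss:.2%} loss -> ~{mathis_throughput_mbps(loss):.0f} Mbps cap")

With those assumptions, going from 0.01% loss to 1% loss pulls the cap from well above gigabit speed down to roughly 140 Mbps, which is exactly the kind of slowdown the BG feels during its bursty batch-creation and inquiry traffic.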