The demand entries specify 6 Mbps and 500 pps per flow, which implies an average packet size of 12,000 bits, or 1,500 bytes.
*2: Capture a screenshot of the results screen and paste it into your answer sheet.
*3: What do the statistics show about the traffic entering the cloud? The graph shows that traffic is entering the IP cloud at 18 Mbps, at a rate of 1,500 packets per second. Each of the three flows is sending traffic at 6 Mbps and 500 pps.
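The arithmetic behind these numbers can be checked with a short sketch. The per-flow rates (6 Mbps, 500 pps) and the flow count (3) are taken from the answers above:

```python
# Sanity check of the demand numbers from the lab answers:
# each of the 3 flows is configured at 6 Mbps and 500 packets/s.
bit_rate_bps = 6_000_000      # per-flow bit rate
packet_rate_pps = 500         # per-flow packet rate
flows = 3

avg_packet_bits = bit_rate_bps / packet_rate_pps   # bits per packet
avg_packet_bytes = avg_packet_bits / 8

total_bps = flows * bit_rate_bps                   # aggregate entering the cloud
total_pps = flows * packet_rate_pps

print(avg_packet_bits)        # 12000.0 bits
print(avg_packet_bytes)       # 1500.0 bytes
print(total_bps, total_pps)   # 18000000 1500  -> 18 Mbps at 1,500 pps
```

This confirms the figures are mutually consistent: 6 Mbps / 500 pps = 12,000 bits = 1,500 bytes per packet, and three such flows aggregate to 18 Mbps at 1,500 pps.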
Paste a shot of this screen into your worksheet.
What do you observe about the packet delay relative to the ToS we assigned? The delay did not change over time; it stayed the same.
Why does this simulation run so much quicker than the first one? It uses much less memory and processing, because the data is sent as background traffic, which is modeled analytically rather than packet by packet.
*7: What do you observe about the queuing delays in both scenarios? The queuing delay is much higher for background data. The first panel compares the queuing delays between the three scenarios. Paste a shot of this panel into your answer sheet.
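The lab does not state which queue model the cloud uses, but the qualitative behavior seen in the panels (queuing delay growing sharply with offered load) can be illustrated with the textbook M/M/1 result W = 1/(mu - lambda). The service rate below is an illustrative assumption, not a value from the lab:

```python
# Hedged sketch: illustrative M/M/1 queuing delay, NOT the lab's actual model.
# W = 1 / (mu - lambda), where mu is the service rate and lambda the arrival rate.

def mm1_queuing_delay(arrival_pps: float, service_pps: float) -> float:
    """Mean time in system (seconds) for an M/M/1 queue."""
    if arrival_pps >= service_pps:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_pps - arrival_pps)

service = 2000.0  # assumed service rate in packets/s (hypothetical)
for load in (500.0, 1500.0, 1900.0):
    delay_ms = mm1_queuing_delay(load, service) * 1000
    print(f"{load:6.0f} pps -> {delay_ms:.2f} ms")
```

Running this shows delay rising from under a millisecond at light load to tens of milliseconds as the arrival rate approaches the service rate, which is the same qualitative trend the comparison panels display.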
How do you compare these results? Explicit traffic appears to have almost no queuing delay, while the hybrid and background traffic results are very close to each other.
*10: The second graph panel compares the packet end-to-end delays for the three traffic demands, obtained using explicit traffic and hybrid traffic (no end-to-end delays were available in the purely background traffic scenario). Paste a shot of this panel into your answer sheet.
*11: How would you compare the results between the first and third scenarios? In the first scenario, the hybrid and explicit runs had almost exactly the same delay; in the third scenario, the explicit run had a much lower delay.