python - Configuring client throughput in a simple TCP server
I was given a sample project that goes like this:
- Client A connects to server B.
- A sends a packet to B, and B returns the same packet to A.
- The client's sending throughput is configurable.
- Measure the turnaround time per packet.
Now, step 3 is confusing me.
Using Python, the only way I can think of "configuring throughput" is to set a delay between the characters of a string being sent:
- Take the string "test".
- Start a timer, send "t" to the server, and have the server return it.
- Once the server returns it, stop the timer and log the time.
Then call sleep() for a predetermined amount of time (this is the configurable part), and do the same for the remaining letters "e", "s", "t", logging the time in between each one.
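Concretely, the scheme I have in mind looks something like this (a minimal sketch; the `run_echo_server` and `echo_per_char` helpers are my own names, and the local echo server only stands in for server B):

```python
import socket
import threading
import time

def run_echo_server(host="127.0.0.1", port=0):
    """Start a one-connection TCP echo server in a background thread.

    Returns the port it is actually listening on (port=0 picks a free one).
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        with conn:
            while True:
                chunk = conn.recv(1)
                if not chunk:
                    break
                conn.sendall(chunk)  # echo each byte straight back
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return actual_port

def echo_per_char(message, delay, host="127.0.0.1", port=9999):
    """Send one character at a time, time its round trip, then sleep `delay`."""
    times = []
    with socket.create_connection((host, port)) as sock:
        for ch in message:
            t0 = time.time()
            sock.sendall(ch.encode())
            echoed = sock.recv(1)          # wait for the server's echo
            times.append(time.time() - t0)
            assert echoed == ch.encode()
            time.sleep(delay)              # the "configurable" part
    return times

port = run_echo_server()
rtts = echo_per_char("test", delay=0.01, port=port)
print(len(rtts))  # one round-trip time logged per character
```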
However, this seems silly to me, because I am not affecting the relationship between client and server at all; I am only setting a delay between the characters being sent. Or am I missing something? Is there another way of "configuring" client A's throughput, and if so, what does it mean?
Thank you.
You can make client A's throughput configurable, but over a broader window of, say, 1 second. First of all, sending single packets of 1 character each is going to 'negatively affect' throughput, because the payload is tiny compared to the headers attached to it. The only way you can achieve decent, measurable throughput is by sending MSS-sized packets (typically 1460 bytes in Ethernet environments). You can do it as follows: send 'bursts' of 'n' packets per second. E.g., if you need a throughput of 14600 bytes/second:
    burstsize = 10
    for i in range(burstsize):
        sock.send(data)        # data is one MSS-sized (1460-byte) payload
every 1 second. How do you achieve "every 1 second"? It's a bit tricky, and hard to make exact, but you can do this: take time.time() before the for loop and time.time() after it, and then use a simple select() with a timeout to sleep away the remaining time (that's just one way of introducing a delay). A broad skeleton:
    while True:
        t1 = time.time()
        for i in range(burstsize):
            sock.send(data)                  # one 1460-byte payload per send
        t2 = time.time()
        if t2 - t1 < 1.0:
            tout = 1.0 - (t2 - t1)
            select.select([], [], [], tout)  # sleep for the rest of the second
This is simplistic, but it should give you a fair idea of how to go about it.
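For completeness, here is one way the skeleton above could be fleshed out into something runnable; the `start_sink_server` and `send_bursts` helpers are illustrative names of my own, and the local sink server just stands in for the receiving end:

```python
import select
import socket
import threading
import time

MSS = 1460  # typical TCP maximum segment size on Ethernet

def start_sink_server(port=0):
    """Background TCP server that simply drains whatever it receives."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def drain():
        conn, _ = srv.accept()
        with conn:
            while conn.recv(65536):
                pass
        srv.close()

    threading.Thread(target=drain, daemon=True).start()
    return actual_port

def send_bursts(sock, burstsize, seconds):
    """Send `burstsize` MSS-sized payloads per second for `seconds` seconds."""
    payload = b"x" * MSS
    sent = 0
    for _ in range(seconds):
        t1 = time.time()
        for _ in range(burstsize):
            sock.sendall(payload)
            sent += len(payload)
        t2 = time.time()
        if t2 - t1 < 1.0:
            # select() with an empty fd list is used purely as a sleep
            select.select([], [], [], 1.0 - (t2 - t1))
    return sent

port = start_sink_server()
with socket.create_connection(("127.0.0.1", port)) as s:
    total = send_bursts(s, burstsize=10, seconds=1)
print(total)  # 14600 bytes sent over roughly one second
```

Note that `sendall` is used rather than `send`, so a partial send cannot silently undercount the bytes delivered.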
Measuring the turnaround time of every packet is quite hard, and with TCP it doesn't accurately tell you the latency of the network, because the measurement is influenced by TCP flow control. It's better to use UDP to measure turnaround time if you are interested in understanding underlying network characteristics such as bandwidth and latency.
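As a sketch of that UDP approach (the `start_udp_echo` and `measure_rtt` helpers are hypothetical names, with a local echo loop standing in for the remote peer):

```python
import socket
import threading
import time

def start_udp_echo(port=0):
    """Background UDP echo: bounce each datagram back to its sender."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", port))
    actual_port = srv.getsockname()[1]

    def serve():
        while True:
            data, addr = srv.recvfrom(2048)
            if data == b"stop":        # sentinel to shut the server down
                break
            srv.sendto(data, addr)
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return actual_port

def measure_rtt(host, port, count=5, payload=b"x" * 64):
    """Return per-datagram round-trip times in seconds.

    Each datagram is timed individually; unlike TCP, no flow control
    or retransmission machinery sits between send and receive.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for _ in range(count):
        t0 = time.time()
        sock.sendto(payload, (host, port))
        sock.recvfrom(2048)            # wait for the echo
        rtts.append(time.time() - t0)
    sock.sendto(b"stop", (host, port))
    sock.close()
    return rtts

port = start_udp_echo()
samples = measure_rtt("127.0.0.1", port)
print(len(samples))  # one RTT sample per datagram
```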