How does one choose the buffer size (the number of bytes read from or written to the socket) for maximum throughput when implementing low-level HTTP and FTP transfers? My application transfers data over HTTP or FTP on connections ranging from 130 Kbps to 3 Mbps (I know the expected speed beforehand). Sometimes the transfer is one-way; sometimes it goes in both directions. Should I stick with one average buffer size, or must I vary it depending on the connection speed?
Thanks.
First, get some measurements.
Then, once you have a reliable performance measurement, vary your buffer size and plot a graph of speed vs. buffer size.
Since you know the connection speeds in advance, you can repeat that measurement at each expected speed across a range of buffer sizes.
The OS, protocol stack, and network are too complex to work out an answer from first principles. You need to measure before you change anything.
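For illustration, here is a minimal Python sketch of such a sweep, assuming a test server you control that streams a large file over plain HTTP. The host, port, and request path are placeholders, and a real harness should average several runs per buffer size to smooth out network jitter:

```python
import socket
import time

# Placeholder endpoint: point this at a server you control that can
# stream a large payload (e.g. a plain HTTP file download).
HOST, PORT = "example.test", 80
REQUEST = (b"GET /bigfile HTTP/1.1\r\n"
           b"Host: example.test\r\n"
           b"Connection: close\r\n\r\n")

def measure_throughput(buf_size: int) -> float:
    """Download the payload with recv() calls of buf_size bytes and
    return the observed throughput in bytes per second."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(REQUEST)
        total = 0
        start = time.monotonic()
        while True:
            chunk = sock.recv(buf_size)
            if not chunk:  # server closed the connection
                break
            total += len(chunk)
        elapsed = time.monotonic() - start
    return total / elapsed

# Sweep a range of buffer sizes and print the results; plotting these
# numbers (speed vs. buffer size) for each of your connection speeds
# shows where the curve flattens out.
for size in (1 << 10, 4 << 10, 16 << 10, 64 << 10, 256 << 10):
    bps = measure_throughput(size)
    print(f"buffer {size:>7} B -> {bps / 1024:.1f} KiB/s")
```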