DemiPixel wrote: Since you're storing requester/provider as `amt of type`, what happens if more than one provider/requester have/want the same type of item? Do you simply ignore them until the first gets used and you poll them again?

I'm answering on behalf of him: as he has described it, he matches one requester/provider pair per resource type and stores those. So the memory could hold one valid coal/coal pair, one valid iron/iron pair, etc. The trains are then sent to the appropriate routes and the pairs are removed from the memory. While the coal/coal pair is being serviced, another coal/coal pair can enter the memory immediately, and so on.
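As I understand that scheme, it could be sketched roughly like this (a minimal illustration, not his actual implementation; the function and station names are made up):

```python
# Sketch of a pairing memory that holds at most one requester/provider
# pair per resource type. Additional pairs for the same resource are
# ignored until the stored pair has been dispatched and removed.
from typing import Optional

pending_pairs: dict[str, tuple[str, str]] = {}  # resource -> (provider, requester)

def try_store_pair(resource: str, provider: str, requester: str) -> bool:
    """Store the pair only if no pair for this resource is already pending."""
    if resource in pending_pairs:
        return False  # ignored until the current pair is serviced
    pending_pairs[resource] = (provider, requester)
    return True

def dispatch(resource: str) -> Optional[tuple[str, str]]:
    """Hand the stored route to a train and free the slot, so the next
    pair for this resource can enter the memory immediately."""
    return pending_pairs.pop(resource, None)
```

So a second coal pair is simply rejected until the first coal route has been handed out, at which point the slot is free again.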
Actually his system is very similar to mine, with the difference that he does have a buffer of several routes that can be given to trains immediately as they arrive. In my implementation I only look for a valid route when a train enters the station (i.e. on demand). This does introduce some minimal additional delay but is less complex.
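To make the difference concrete, here is a rough sketch of the two dispatch strategies (purely illustrative; `find_valid_route` is a hypothetical placeholder for whatever route search the system uses):

```python
# Buffered dispatch vs. on-demand dispatch, as contrasted above.
from collections import deque
from typing import Callable, Optional

Route = tuple[str, str]  # (provider, requester)

route_buffer: deque[Route] = deque()  # precomputed routes, ready to hand out

def on_arrival_buffered() -> Optional[Route]:
    """Buffered variant: a waiting train gets a precomputed route instantly."""
    return route_buffer.popleft() if route_buffer else None

def on_arrival_on_demand(find_valid_route: Callable[[], Optional[Route]]) -> Optional[Route]:
    """On-demand variant: only search for a valid route when a train
    actually enters the station; simpler, but adds a little delay."""
    return find_valid_route()
```

The buffered variant trades extra bookkeeping (keeping several routes valid ahead of time) for zero lookup latency when a train arrives.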
Counting trains in some way is useful, but it should not be overengineered, because the typical case amounts to simply locking a station while a train is on the way. It is very rare that you actually need to count several trains in many places: a well-designed system under full load will simply not have the buffers for this to even be possible (if you have 3 train loads of ore buffered, then there is not a lot of demand, and all the train counting will not really improve performance anyway).
What you want to avoid are two situations:
- Trains piling up at a requester because, as soon as the request comes in, every available provider is matched to it. This can be avoided by either locking the requester entirely (train count = 1 means the requester is locked) or discounting inbound resources (either by counting trains if you always ship full loads, or by counting incoming resources if you want to ship partial loads).
- Trains going to a provider that does not have a full load of goods (or less than the minimum amount that you're willing to ship). I think this case can usually be handled by simply locking the provider, i.e. only allowing one train at a time to the provider, because it normally takes a while for the buffer at the provider to fill up again, and the train latency is not high enough that you actually need to account for it. So by the time the provider could be polled again, it still does not have a full buffer anyway. If that makes sense. The situation where you actually profit from counting trains at the provider is when the outpost is so far away that the buffer will actually fill up during the time it takes a train to get there -- so you would want a train on the way even before the buffer is full, to avoid wasting bandwidth. In my opinion that situation is too rare to warrant a complex solution.
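The two safeguards above can be sketched in a few lines (a toy model under assumed names and numbers, not any particular implementation; `TRAIN_LOAD` and the thresholds are made up):

```python
# Discounting inbound resources at a requester: only dispatch another
# train if the demand left over after subtracting loads already on the
# way still covers a full train. With full-load shipping and a request
# no larger than one load, this degenerates to the simple lock:
# "inbound_trains == 0" (train count = 1 means the requester is locked).
TRAIN_LOAD = 2000  # items per full train (assumed)

def effective_demand(requested: int, inbound_trains: int) -> int:
    """Demand remaining after discounting full loads already inbound."""
    return max(0, requested - inbound_trains * TRAIN_LOAD)

def should_dispatch_to_requester(requested: int, inbound_trains: int) -> bool:
    """Send another train only if a full load is still needed."""
    return effective_demand(requested, inbound_trains) >= TRAIN_LOAD

def provider_ready(buffered: int, trains_at_or_inbound: int) -> bool:
    """Provider lock: one train at a time, and only once a full load
    has accumulated in the provider's buffer."""
    return trains_at_or_inbound == 0 and buffered >= TRAIN_LOAD
```

For a requester asking for exactly one train load, one inbound train zeroes the effective demand, which is exactly the "locked requester" behaviour.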
In the other case, where goods keep piling up at the providers, you really do not need to queue up trains at them: there is no demand anyway.