Friday Facts #271 - Fluid optimisations & GUI Style inspector

Regular reports on Factorio development.
Nightinggale
Fast Inserter
Posts: 120
Joined: Sun May 14, 2017 12:01 pm

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Nightinggale »

fur_and_whiskers wrote: ↑Tue Dec 04, 2018 3:39 am Interesting that both Factorio and Oxygen Not Included have announced they're doing this within a month of each other. Really hope it's a result of collaboration; if so, everyone wins.
I think the answer lies with team red and team blue.

If you look at CPU development, there is one major bottleneck: memory latency. If you make the CPU twice as fast, it will spend twice as many cycles waiting for memory, so the returns on raw CPU performance are diminishing. This means the way to increase performance is to hide memory latency. CPU cache is one such way, but it requires programmers to organize memory so the cache works efficiently. Placing fluid boxes sequentially in memory is an example: when one block is read, multiple fluid boxes land in the (much faster) CPU cache, so when they are then read one by one, only the first pays the full memory-latency wait.
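A tiny sketch of what sequential storage buys you (a made-up `FluidBox`; Factorio's real layout isn't public):

```cpp
#include <vector>

// Hypothetical fluid box; Factorio's actual struct is not public.
struct FluidBox {
    double amount;
    double temperature;
};

// Boxes stored back-to-back in one std::vector are walked linearly, so
// each cache-line fetch drags in several neighbours and only the first
// access per line pays the full memory-latency wait.
double total_fluid(const std::vector<FluidBox>& boxes) {
    double sum = 0.0;
    for (const FluidBox& b : boxes)
        sum += b.amount;  // sequential, cache-friendly reads
    return sum;
}
```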

Another approach is hyperthreading: a CPU core that holds two tasks at once. If one task is waiting for memory, the other executes in the meantime to keep the core itself busy. It's said that this increases CPU throughput by around 30%, though it depends heavily on the code you execute. With no memory latency, hyperthreading would be useless.

Now that increasing core performance has become this problematic, the answer is to increase the core count. It became 2 around 2005, later 4, which we had for quite a while, but last year AMD released Ryzen. Since AMD can't beat Intel on per-core performance, their answer is to increase the number of cores dramatically and win on multi-core performance. Intel's response is the same: more cores.

Less than 2 years ago, the fastest gaming CPU had 4 cores. Now it has 8. The near future will most likely be a world where gamers have more and more CPU cores.

This brings us back to Factorio and Oxygen Not Included. Both games are at the end of the early-access cycle. Both have to optimize for performance. In a world where CPU core counts are increasing rapidly, it is logical to optimize for multi-core support. Optimizing the same way is not because the games share anything, but because they run on the same hardware and optimize for that hardware.

Why do they optimize pipes for multithreading? My guess is it has to do with memory. Multithreading can be dead slow (even slower than a single core) if the cores share memory and have to communicate to avoid corrupting each other's data. If you can arrange for a piece of code to write only to memory no other code touches at the same time, no coordination is needed. Pipes seem like ideal candidates for this: pipe systems that aren't connected cannot affect each other, so they can be updated without any memory coordination at all.
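A minimal sketch of that idea, assuming made-up `Pipe`/`Network` types: one thread per disconnected network, so no two threads ever write the same memory and no locks are needed.

```cpp
#include <functional>
#include <thread>
#include <vector>

struct Pipe { double fluid; };      // made-up pipe segment
using Network = std::vector<Pipe>;  // one connected pipe system

// Stand-in for the real flow update; touches only its own network.
void update_network(Network& net) {
    for (Pipe& p : net)
        p.fluid *= 0.5;  // placeholder "flow" step
}

// Disconnected networks can't affect each other, so each one can be
// updated on its own thread with zero coordination between threads.
void update_all(std::vector<Network>& nets) {
    std::vector<std::thread> workers;
    workers.reserve(nets.size());
    for (Network& n : nets)
        workers.emplace_back(update_network, std::ref(n));
    for (std::thread& t : workers)
        t.join();
}
```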

Another hint that they did not share code: I think we were told Factorio is written in C++, and I know Oxygen Not Included uses the Unity engine, meaning it is written in C#. If two games were to share code in any way, a good start would be using the same programming language.

Sure, I'm an outsider making guesses based on what has been made public (hence not covered by contract, so I can speak my mind), but knowing software development and how companies usually work: saying two for-profit games share code because both multithread pipes for optimization is like saying two car companies work together because both build cars with wheels optimized for road use.
Lubricus
Filter Inserter
Posts: 298
Joined: Sun Jun 04, 2017 12:13 pm

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Lubricus »

Nightinggale wrote: ↑Tue Dec 04, 2018 2:27 pm
I think the answer lies with team red and team blue. […]
The way I understand the FFF, they optimized for memory caching and got multi-threading as a side effect.
Kovarex has earlier stated that they haven't optimized for multi-threading because they have always found easier solutions with a bigger impact on performance.

For Oxygen Not Included, I think Unity has some functionality similar to the way the pipe optimization works:
Unity's new ECS (Entity Component System) https://www.youtube.com/watch?v=M0poSxG5YAI
Nightinggale
Fast Inserter
Posts: 120
Joined: Sun May 14, 2017 12:01 pm

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Nightinggale »

Lubricus wrote: ↑Tue Dec 04, 2018 4:17 pm The way I understand the FFF, they optimized for memory caching and got multi-threading as a side effect.
The question is how to read this section.
FFF #271 wrote: But the second advantage is even greater, as the fluid systems are now independent and fluid flow doesn't interfere with anything else in the map, their update can be completely parallelized without any worries. So we just did that.
Is it a side effect which suddenly showed up, or was it planned? It says the "advantage is even greater", indicating it could have been the main goal. The seemingly careless "So we just did that" could also point to the fact that once Dominik reached this stage, adding multithreading was trivial. Adding multithreading to code prepared for multithreading can indeed be trivial.

My guess is that multithreading has been a design goal, not because of FFF #271, but because the discussion in FFF #260 brings up multithreading.
Dominik wrote: ↑Fri Sep 14, 2018 6:08 pm
djnono wrote: ↑Fri Sep 14, 2018 5:34 pm Why not drop the entire fluid network on its own thread, using the proposed system, and handle the impact on the main update thread with a 1 tick delay ?
See, when writing it I totally forgot about the multi-threading. Yes, things like this should be possible and hopefully we will get to it too.
So they just noticed that by pure luck they ended up with code which works perfectly with multithreading, and they have that code 2.5 months after commenting on the prospects of multithreading in that very piece of code. Regardless of what anybody says now, it looks very planned to me and it will be hard to convince me otherwise.
Lubricus wrote: ↑Tue Dec 04, 2018 4:17 pm Kovarex has earlier stated that they haven't optimized for multi-threading because they have always found easier solutions with a bigger impact on performance.
I think you refer to FFF #215 Multithreading issues. If you read closely, one of the proposed reasons why it didn't increase performance was "I realized, that we already do the "prepare logic" in parallel". In other words, the lack of a boost from parallel execution was because the real workload was already using all cores, or rather it seems to be capped at 8 cores for undisclosed reasons. This means the prepare logic is "optimized for multi-threading", so they obviously do that once in a while.

There are multiple reasons why going multithreaded can cause a slowdown. The FFF talks about thrashing the shared level 3 cache; hyperthreading has been accused of doing the same to all 3 cache levels. There is also an overhead to forking one thread into many and joining back into one later.

Naturally, if those slowdowns become more significant than the gain from multithreading, then yes, it will cause a net slowdown. Multithreading for performance is a complex problem full of pitfalls, which are often encountered. However, the performance boost is significant if done right, so outright discarding the idea of multithreading is a bad idea.
Lubricus wrote: ↑Tue Dec 04, 2018 4:17 pm For Oxygen Not Included, I think Unity has some functionality similar to the way the pipe optimization works:
Unity's new ECS (Entity Component System) https://www.youtube.com/watch?v=M0poSxG5YAI
Unity had a major event a while ago (a year ago? Can't remember) where the big thing was tools for adding multi-core support. Since the CPU manufacturers have decided the future is many cores, the race is on to make the easiest-to-use multi-core tools. I'm not much into the details of ECS, but from what I understand, it's mainly a tool for something like Command & Conquer, where each unit can run in a thread, so huge armies scale well to many-core CPUs. I'm far from convinced Oxygen Not Included uses this, because it would sort of imply each unit of gas could be a thread, and that would be a horrible solution. If you look at ONI pipes, they are actually rather simple (simpler than Factorio pipes, which in itself indicates no shared work). Pipes consist of gas/liquid entities going from outputs to inputs, meaning most pipe "tiles" are "whatever enters will exit in that direction". This creates a bunch of independent "pipe lanes", which then work just fine with multithreading. With such a game-specific setup, a game-specific (custom) coding solution will likely work best.


Could we go back to Factorio? While Oxygen Not Included's features are interesting, they should really be a topic for Klei's forum, as they (big surprise) actually made a forum for their own game.
Lubricus
Filter Inserter
Posts: 298
Joined: Sun Jun 04, 2017 12:13 pm

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Lubricus »

Nightinggale wrote: ↑Tue Dec 04, 2018 7:39 pm
Lubricus wrote: ↑Tue Dec 04, 2018 4:17 pm Kovarex has earlier stated that they haven't optimized for multi-threading because they have always found easier solutions with a bigger impact on performance.
I think you refer to FFF #215 Multithreading issues. If you read closely, one of the proposed reasons why it didn't increase performance was "I realized, that we already do the "prepare logic" in parallel". In other words, the lack of a boost from parallel execution was because the real workload was already using all cores, or rather it seems to be capped at 8 cores for undisclosed reasons. This means the prepare logic is "optimized for multi-threading", so they obviously do that once in a while.
I was mostly thinking of this interview
https://www.youtube.com/watch?time_cont ... 66QDZ7LL5Y
Cadde
Fast Inserter
Posts: 149
Joined: Tue Oct 02, 2018 5:44 pm

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Cadde »

Lubricus wrote: ↑Tue Dec 04, 2018 8:24 am Are there any changes in the image to render between game ticks? I think your movement is part of the game logic and the ticks...
Could menus and map view be separate from the main game loop?
The last thing I want is to be accused of being an expert on multithreading and disconnected rendering and simulation... BUT...

The idea is that your simulation runs in the background at a fixed step while your rendering and camera are completely disconnected from the simulation. Basically, the last time we had fixed-timestep rendering in games was back in Super Mario Bros. times, where if you sped up the clock of the hardware the game was running on, everything would run faster. For example, PAL vs NTSC: if you ran the PAL version on an NTSC machine, the game would run faster due to the different refresh rates between the US and basically the rest of the world.

So, with the advent of home gaming PCs, games shifted away from that philosophy ever so gradually: first by using timed events rather than cycled (FPS-counted) events, then by disconnecting the rendering from the simulation in various ways.

Movement should be a strictly logic (simulation) event; rendering should strictly be a reflection of the simulation. What entities are being drawn and where comes from the simulation, but everything else (animations, camera, lighting, etc.) is "anticipated" from what happened last in the simulation.

So, scrolling the screen (tiles) isn't at all affected by how well the simulation is running in the background. If the scrolling is tied to your character, then that scrolling would interpolate between the last known location of the character and the next expected location over so many seconds.

That is, if the character's last action was to move left, the camera keeps scrolling left until the simulation updates. When the simulation updates, the camera stops scrolling left.
Obviously there's a problem there: if the character hits something and the camera didn't know (as it shouldn't) that the character stopped, the camera has to snap back to the character's actual location once he's stopped. That's where interpolation once again comes in. The camera should smoothly accept that it's not where it should be and gradually move its way back to where it should be.
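In code, that smoothing is just a lerp between the last two known simulation states (a sketch with a made-up `Vec2`; any engine math type would do):

```cpp
// Made-up 2D vector type for the sketch.
struct Vec2 { double x, y; };

// alpha = how far we are between the previous and current simulation
// tick (0.0 .. 1.0), derived from the render clock.
Vec2 lerp(Vec2 a, Vec2 b, double alpha) {
    return { a.x + (b.x - a.x) * alpha,
             a.y + (b.y - a.y) * alpha };
}

// The renderer draws the camera between the two last known positions
// instead of snapping to the newest one, so a late correction from the
// simulation shows up as a smooth glide rather than a jump.
Vec2 camera_position(Vec2 prev_tick, Vec2 curr_tick, double alpha) {
    return lerp(prev_tick, curr_tick, alpha);
}
```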

The whole reason is to avoid frame jitter. The other benefit is that the simulation doesn't really care what's happening in the renderer: if the renderer is running at 2 FPS, the simulation can still run at 60 UPS in the background. Of course there's a con to that as well; if you have a 2 FPS render and your base is under attack, you can't defend yourself, now can you?
That's also easily fixed by asking "below what minimum rendered FPS should the simulation slow down to let the player still act?"
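The usual pattern for keeping the simulation at exact fixed steps no matter what the renderer does is the "fix your timestep" accumulator, sketched here (this is the textbook pattern, not Factorio's actual loop):

```cpp
// Accumulate real elapsed time; run as many fixed-size simulation
// ticks as fit, and carry the remainder over to the next rendered
// frame. The simulation always advances in exact tick_dt steps.
int ticks_to_run(double& accumulator, double frame_dt,
                 double tick_dt = 1.0 / 60.0) {
    accumulator += frame_dt;
    int ticks = 0;
    while (accumulator >= tick_dt) {
        accumulator -= tick_dt;
        ++ticks;  // caller runs one deterministic sim step per tick
    }
    return ticks;
}
```

A 2 FPS render frame simply hands the accumulator a big `frame_dt` and the simulation catches up in one burst of ticks.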
Nightinggale
Fast Inserter
Posts: 120
Joined: Sun May 14, 2017 12:01 pm

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Nightinggale »

Cadde wrote: ↑Wed Dec 05, 2018 5:27 am The last thing I want is to be accused of being an expert on multithreading and disconnected rendering and simulation... BUT...
That's a very dangerous statement, particularly online. If you happen to say something like that somewhere with less admin supervision and less friendly people, you would certainly be called a lot of things you wouldn't want to be called.
Lubricus wrote: ↑Tue Dec 04, 2018 8:24 am Are there any changes in the image to render between game ticks? I think your movement is part of the game logic and the ticks...
Could menus and map view be separate from the main game loop?
I wonder if the developers mentioned something about this already.
Lubricus wrote: ↑Tue Dec 04, 2018 10:42 pm I was mostly thinking of this interview
https://www.youtube.com/watch?time_cont ... 66QDZ7LL5Y
That video mentions issues with multithreading. In fact it mentions that if game logic and drawing happen in different threads, the GPU could draw one area with, say, a car, and then draw another area that includes the same car again, because the game logic moved it in between. Freezing the game logic while drawing solves this problem.

Designing split graphics and game logic is hard. Converting existing mixed game logic and drawing into a split design is even harder.

My guess is that the only thing which can be done with a reasonable amount of work is to create a timestamp every time a new frame is started. Next time a frame is started, sleep until time >= old timestamp + setting. The setting is a number you can set in the options; you get the current behaviour if it is 0. Setting it higher should allow a V-sync-like cap at whatever speed you want. It might not hit 60 FPS precisely, but is that a problem?
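That timestamp idea, sketched with std::chrono (assuming a steady clock and a millisecond setting; whatever Factorio uses internally is unknown to me):

```cpp
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

// Sleep until at least min_frame_time has passed since the previous
// frame started; a setting of 0 ms reproduces the current uncapped
// behaviour. Then restart the timestamp for the new frame.
void limit_frame(Clock::time_point& last_frame_start,
                 std::chrono::milliseconds min_frame_time) {
    std::this_thread::sleep_until(last_frame_start + min_frame_time);
    last_frame_start = Clock::now();
}
```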

Now that I think about it, how is multiplayer handled? What if somebody runs at 70 FPS/UPS on a 144 Hz monitor and somebody else runs on a 60 Hz monitor? Will they need to sync UPS? Maybe a fixed framerate can be gained by reusing some network UPS sync code.
Cadde wrote: ↑Wed Dec 05, 2018 5:27 am The idea is that your simulation runs in the background at a fixed step while your rendering and camera is completely disconnected from the simulation. Basically, the last time we had fixed time step rendering in games was back in Super Mario Bros times. Where if you sped up the clock of the hardware the game was running on, everything would run faster. For example, PAL vs NTSC. If you ran the PAL version on an NTSC machine the game would run faster as a result due to different refresh rates between US and basically the rest of the world.
Sonic the Hedgehog was coded in Japan for 60 Hz. That worked fine for US exports, but in Europe the TVs ran at 50 Hz. They didn't change anything in the game logic, meaning everything moved 17% slower, which hurts the concept of the game. Nobody noticed until much later, though, as viewing the 50 and 60 Hz versions side by side wasn't really possible at the time.

The reason why Europe and America don't use the same frequency is actually interesting, yet poorly known. Very early TV signals were really bad, and TVs had problems deriving the timing from the signal. What was needed was some global clock to keep transmitter and receiver running at the same speed, and they decided to use the power line: whenever the AC voltage crossed 0 V, a new frame would start. This worked really well, but it required the TV to be plugged into the same power phase and polarity as the transmitter. While TVs stopped depending on this even before colour, the standard framerate had already been set to match the power grid frequency. Since countries didn't agree on 50 vs 60 Hz for power grids, they would not agree on TV framerates either.
Dominik
Former Staff
Posts: 658
Joined: Sat Oct 12, 2013 9:08 am

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Dominik »

Nightinggale wrote: ↑Tue Dec 04, 2018 7:39 pm My guess is that multithreading has been a design goal, not because of FFF #271, but because the discussion in FFF #260 brings up multithreading.
Dominik wrote: ↑Fri Sep 14, 2018 6:08 pm
djnono wrote: ↑Fri Sep 14, 2018 5:34 pm Why not drop the entire fluid network on its own thread, using the proposed system, and handle the impact on the main update thread with a 1 tick delay ?
See, when writing it I totally forgot about the multi-threading. Yes, things like this should be possible and hopefully we will get to it too.
So they just noticed that by pure luck they ended up with code which works perfectly with multithreading, and they have that code 2.5 months after commenting on the prospects of multithreading in that very piece of code. Regardless of what anybody says now, it looks very planned to me and it will be hard to convince me otherwise.
Yes, it was a part of the decisions, I just forgot about it when writing the FFF. But even without it I would do it all the same, for the concentrated memory allocation.
Nightinggale wrote: ↑Wed Dec 05, 2018 6:19 am
Cadde wrote: ↑Wed Dec 05, 2018 5:27 am The last thing I want is to be accused of being an expert on multithreading and disconnected rendering and simulation... BUT...
That's a very dangerous statement, particularly online. If you happen to say something like that somewhere with less admin supervision and less friendly people, you would certainly be called a lot of things you wouldn't want to be called.
None of us here is a multithreading expert, we are all learning :)
Cadde
Fast Inserter
Posts: 149
Joined: Tue Oct 02, 2018 5:44 pm

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Cadde »

Nightinggale wrote: ↑Wed Dec 05, 2018 6:19 am
Cadde wrote: ↑Wed Dec 05, 2018 5:27 am The last thing I want is to be accused of being an expert on multithreading and disconnected rendering and simulation... BUT...
That's a very dangerous statement, particularly online. If you happen to say something like that somewhere with less admin supervision and less friendly people, you would certainly be called a lot of things you wouldn't want to be called.
I have never been accused of not living dangerously online... But I have been accused of a lot of other things as a result, yes...
Nightinggale wrote: ↑Wed Dec 05, 2018 6:19 am That video mentions issues with multithreading. In fact it mentions that if game logic and drawing happens in different threads, then the GPU could draw one area with say a car. Next it draws another area and there it includes the car again because the game logic moved in between. Freezing the game logic while drawing solves this problem.
Double buffering has been a thing since forever. Basically, your game draws to one area of memory while the screen refreshes from another.
The same technique can be applied to simulation: the simulation "draws" to one area of memory while the renderer renders from the non-updating one.

"Ok, how does the simulation work when the previous frame's data is locked in the render thread?"
The simulation reads from one copy of memory and writes to the other, so you have duplicate copies of the simulation state in memory. Whichever is being written to is not the one the renderer reads from.
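A sketch of that double-buffered state (with a made-up `World`; a real game would version its entity arrays the same way):

```cpp
#include <array>
#include <vector>

struct World { std::vector<double> fluid; };  // made-up game state

// Two copies of the state: the renderer reads front() for a whole
// frame while the simulation writes the next tick into back(); swap()
// publishes the finished tick so the roles of the buffers flip.
class DoubleBuffer {
public:
    explicit DoubleBuffer(const World& initial)
        : buffers_{{initial, initial}}, front_(0) {}

    const World& front() const { return buffers_[front_]; }      // render reads
    World&       back()        { return buffers_[1 - front_]; }  // sim writes
    void swap()                { front_ = 1 - front_; }

private:
    std::array<World, 2> buffers_;
    int front_;
};
```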

"Ok, so the simulation finishes its update mid-frame. What now?"
The simulation waits for the current frame to finish rendering, not the other way around. The simulation runs on a clock anyway; it can wait without much disturbance. Rendering can't wait (for me), because that's when you notice the game running badly. I don't care if a simulation step takes 16 milliseconds or 60 milliseconds, as long as the step itself has the same result.

Instead of moving "10 of something" each simulation frame, you say "this thing moves 10 items per 16 milliseconds". If the simulation updates slowly, say once every second, then you have moved 1000/16 × 10 = 625 items, regardless of whether the simulation was halted for that whole second.
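As arithmetic, with the hypothetical numbers from the example above:

```cpp
// Store a rate instead of a per-tick amount ("10 items per 16 ms").
// When the simulation finally runs, move rate * elapsed time, so a
// stalled second is caught up in a single step.
double items_moved(double items_per_step, double step_ms,
                   double elapsed_ms) {
    return (items_per_step / step_ms) * elapsed_ms;
}
```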

"But what about enemies attacking your base?"
If your simulation is so heavy that it's running that slowly, sort your updates into critical and non-critical. Critical updates always run; non-critical ones run whenever there's enough time. So for pipes, for example, instead of moving 200 fluid per tick between a pump and a pipe, move as much fluid in one step as would have moved in 60 steps, or in however much time has passed since the last simulation update because the simulation was busy with something, like waiting for a frame to finish rendering before swapping the memory to write to.
Nightinggale wrote: ↑Wed Dec 05, 2018 6:19 am Designing split graphics and game logic is hard. Converting existing mixed game logic and drawing into a split design is even harder.
Sorry, I forgot to mention that I know this isn't easy.
But I would love it if they did. They could at the very least start by disconnecting the camera from the simulation, so the camera doesn't jitter just because the simulation does. That would actually solve most of my headaches from watching jittery frames.
Nightinggale wrote: ↑Wed Dec 05, 2018 6:19 am My guess is that the only thing which can be done with a reasonable amount of work is to create a timestamp every time a new frame is started. Next time a frame is started, sleep until time >= old timestamp + setting. The setting is a number you can set in the options; you get the current behaviour if it is 0. Setting it higher should allow a V-sync-like cap at whatever speed you want. It might not hit 60 FPS precisely, but is that a problem?
As I said, timers are not accurate in computers. You cannot represent 60 FPS without getting sync issues in software. The reasonable frame rates one should go for are...

"CPU clock cycle" (basically whatever the resolution of the underlying timer is), which equates to mega-FPS.
Any number which syncs perfectly with the timer resolution, which equates to anything between 1 and mega-FPS.

And then anything that divides 1000 evenly: 1000 FPS, 500, 250, 200, 125, 100, 50, 40, 25, 20, 10, 8, 5, 4, 2, 1. This is because most timers only provide a handful of options: ticks (the number of timer cycles since the last query, which doesn't increment by one on every query, hence "resolution"), milliseconds, seconds, minutes and hours. All of them except ticks are floating-point numbers, and floating-point numbers have rounding errors.

Milliseconds is the unit most commonly used. Most games check "if current_ms >= last_render_ms + fps_ms, then render a new frame".
And that's where things get wonky: since 1000/60 = 16.666... is not a whole number of milliseconds, we can never really match 60 FPS exactly, whatever the precision; there will always be rounding errors.
However, with a target frame rate of 50 or 100, for example, a lot of the rounding-error issues would just go away. Instead you would have syncing issues with 60/120 Hz monitors, but I personally always run without V-sync anyway. I'd rather have screen tearing than a dropped frame because I missed a V-sync.
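The arithmetic behind that claim, as a small standalone check: with whole-millisecond intervals, a 60 FPS target drifts over a nominal second, while targets dividing 1000 evenly don't.

```cpp
// Error, in ms, accumulated over one nominal second of frames when the
// frame interval is forced to a whole number of milliseconds.
int drift_ms_over_one_second(int fps_target, int interval_ms) {
    return fps_target * interval_ms - 1000;
}
```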
Nightinggale wrote: ↑Wed Dec 05, 2018 6:19 am Now that I think about it, how is multiplayer handled? What if somebody runs at 70 FPS/UPS on a 144 Hz monitor and somebody else runs on a 60 Hz monitor? Will they need to sync UPS? Maybe a fixed framerate can be gained by reusing some network UPS sync code.
Multiplayer should never be sync-sensitive in that way, IMHO. If you missed a server frame, your update should be handled by the server in the next frame.
But that leaves us with a big problem for games of Factorio's magnitude... Most other multiplayer games have keyframes where the server sends out a "master" packet. This master packet is the game state as the server sees it, and the client would do well to disregard everything it knows and use that master packet, which means some things can suddenly become undone client-side.

Still, I prefer multiplayer games where 1:1 client/server sync isn't important, or slow machines/poor connections will bog down the game for everyone.
Instead, the server sends updates to clients that say "this is your life now" (which is a problem in first-person shooters: being hit around corners, etc.).
The sheer amount of data in Factorio makes such keyframing difficult, though. The server would have to churn out massive amounts of data to keep clients up to date on what's happening server-side.

But even this boils down to the "per-tick processing" issue I've described above with render jitter.
Instead of handling everything step by step each tick, say "this process is running" and that's it. It's running... It's going to stop whenever it's commanded to stop or some timer is triggered. Then all the server has to tell its clients in a keyframe is: this process is running and that one isn't. That's a lot less data to keep synced than every single pipe's current liquid amount.

But Factorio isn't that kind of game.
Pinga
Inserter
Posts: 42
Joined: Fri Oct 27, 2017 3:59 pm

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Pinga »

Hopefully I'm not too late to the party, but since the fluidbox-merging part of the implementation is only for 0.18, I suppose not.

Combining adjacent fluidboxes to reduce the amount of them that needs processing is obviously a great idea, but I feel the system could be a bit more aggressive than just pipes with no intersections. Allow me to illustrate:

[Attached image: 1.png]

The red sections are runs of 1 or 2 pipes between each intersection with a machine input/output, or, in the case of the water in the middle, a grid of pipes. They are small enough, and close enough, that their fluidboxes could be combined without any loss in the intended functionality of the power plant.

The green section is a set of adjacent machines of the same type, with bidirectional flow between them. They could share a fluidbox.

Finally, the blue section represents any adjacent storage tanks.

Here's another example of what could be one fluidbox:

[Attached image: 2.png]

Combining straight paths is a great start, but most factories consist of a series of repeated small (1-2 pipe) intersections between machines. Taking this most common case into consideration would dramatically reduce the number of fluidboxes required to simulate the same behavior.
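One plausible way to compute such merges is a union-find over pipe segments, so any set of connected segments collapses into one group that could share a single fluidbox. This is purely a sketch; nothing says the devs would implement it this way.

```cpp
#include <numeric>
#include <vector>

// Group connected pipe segments; every resulting group could then
// share one merged fluidbox. Path halving keeps find() near O(1).
struct UnionFind {
    std::vector<int> parent;

    explicit UnionFind(int n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);  // each segment alone
    }
    int find(int x) {
        while (parent[x] != x)
            x = parent[x] = parent[parent[x]];  // path halving
        return x;
    }
    void merge(int a, int b) { parent[find(a)] = find(b); }
};
```

Merging would then be one `merge(a, b)` per pipe connection while scanning the factory, with each root standing in for one combined fluidbox.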
Jap2.0
Smart Inserter
Posts: 2370
Joined: Tue Jun 20, 2017 12:02 am

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Jap2.0 »

Cadde wrote: ↑Wed Dec 05, 2018 3:05 pm
And... that's when you get desyncs for breakfast, second breakfast, lunch, tea, dinner, and midnight snack, and rewrite the entire game's update cycle, and the renderer (again).
There are 10 types of people: those who get this joke and those who don't.
Cadde
Fast Inserter
Fast Inserter
Posts: 149
Joined: Tue Oct 02, 2018 5:44 pm
Contact:

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Cadde »

Jap2.0 wrote: ↑Wed Dec 05, 2018 8:34 pm
Cadde wrote: ↑Wed Dec 05, 2018 3:05 pm
And... that's when you get desyncs for breakfast, second breakfast, lunch, tea, dinner, and midnight snack, and rewrite the entire game's update cycle, and the renderer (again).
How would you desync when you keyframe the vital information? Only sync storages, machines and positions etc. No need to sync every little action.
The actions are visual, not mechanical. You reduce the game to input->expected output but you keep the visuals looking like it's actually doing something 60 times per second.

You can get desyncs when you rely on equal execution on both ends. Deterministic systems only work when everything is perfectly identical. A slower PC is automatically and always out of sync, and so the whole multiplayer simulation becomes a slugfest.

EDIT: Besides, I am not asking to change the simulation; I got sidetracked here. I am asking to decouple the camera from the fixed timestep of the simulation so that the camera at least gets a "guaranteed" 60 FPS all the time. And it would be nice if all the graphics were decoupled from the simulation, so we could run the game at any FPS we want without the simulation having to run at the same speed.
Nightinggale
Fast Inserter
Fast Inserter
Posts: 120
Joined: Sun May 14, 2017 12:01 pm
Contact:

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Nightinggale »

Cadde wrote: ↑Wed Dec 05, 2018 3:05 pmDouble buffering has been a thing since forever. Basically, your game draws to one area of memory while the screen refreshes from another.
The same technique can be applied to simulation, simulation "draws" to one area of memory while the renderer renders from the non-updating one.
Double buffering means you will have two copies of the data in memory. It's not a huge concern memory wise (32 bit limits are history), but it means copying from one buffer to another. This means you just added a new memory I/O bottleneck.

I did a few calculations. It seems that running DDR3 at stock speed, the memory has bandwidth for around 1.1 GB/frame. Subtract the current memory I/O, and whatever is left seems microscopic compared to what it would take to copy all entities, and possibly some non-entity data as well.

To put it simply, I don't think you have the time to buffer that much each tick. In fact, regardless of hardware, you will always be able to find a factory where this is the bottleneck, reducing FPS/UPS when it would otherwise run at 60 without the double buffer.
Cadde wrote: ↑Wed Dec 05, 2018 3:05 pm As i said, timers are not accurate in computers. You cannot represent 60 FPS without getting sync issues in software. The reasonable frame rates one should go for are...
I read what you said as a rounding issue rather than a hardware timing issue. However, you are right that software timing will be less accurate than hardware timing. It should still be possible to hit 16 ms reasonably well, particularly if you avoid sleep and write your own delay function, like a loop while (time() < time_next_tick). If we get really fancy, such a loop timer could run in a process of its own at really high priority, and then you can get fairly decent timing accuracy.

The question is if this is worth it. Wouldn't you be able to go into windows and switch your monitor to 60 Hz and use V-sync? That should allow you to use a hardware timer for screen updates and it would solve the problem entirely.
Cadde wrote: ↑Wed Dec 05, 2018 3:05 pm Multiplayer should never be sync sensitive in that way IMHO. If you missed a server frame, your update should be handled on the server in the next frame.
But that leaves us with a big problem for games of factorio's magnitude... Most other multiplayer games have keyframes where the server sends out a "master" packet. This master packet is the game state as the server sees it and the client will do well to disregard everything it knows and use that master packet. Which means some things can suddenly become undone client side.
This sounds so strange to me that I wonder if it's actually correct. UPS means updates per second, so 60 UPS is 60 calls to tick() per second. Now say the server runs at 60 UPS and the client at 59 UPS: once every second, the client skips a calculation the server makes. How would the server figure out which data to send? Would it make a savegame each tick and transmit that? Remember, every entity can have changed game data. Is it keeping track of which ones changed during the current tick? And if the client can't keep up, where does it find the CPU power to decode and apply the incoming network data?

I'm still leaning towards networked games using some sort of sync on UPS, meaning if one computer falls behind, the others wait for it. Something like a handshake where clients tell the server when they complete a tick.

Usually this problem is fixed by making the server run at 59 UPS too, so server and client run at the same speed. However, you say the server tells the client what happened in the frame the client didn't calculate? Since that's potentially a call to every single entity on the map, it would be a lot of data. It would be like saving the game and transmitting it to the client 60 times a second.
Dominik wrote: ↑Wed Dec 05, 2018 9:57 am None of us here is a multithreading expert, we are all learning :)
I'm beginning to suspect there are no experts in multithreading at all. Excluding those who can't multithread at all, everybody I encounter (myself included) says something like that. This is likely why games are still mostly singlethreaded after more than a decade of multicore CPUs.

I think it has to do with how hard it is to find good documentation on optimizing multithreaded code for speed, while single-core performance is much better documented. For instance, it is (or should be) common knowledge that variables used together should be stored together in memory, and why (memory chunk reading plus CPU caches). Now I'm asking the same question regarding thread locking: what happens, and why is it slow? If you are lucky, guides and tutorials will mention that locks are slow, but not why. It took me ages to learn why.

When a core tries to lock, it makes the lock request, which is then flushed to the level 3 cache. Mind you (and this is important here), the core accesses level 1, which copies to level 2, which copies to level 3; data can't skip levels. Once in level 3, the request waits for all other cores to flush, to make sure only one core gets the lock. Once gained, the lock has to travel back through the caches to reach the core again. From the request until the reply, the core has done absolutely nothing. It's time wasted. When releasing, the core just releases to level 1 cache and carries on; the normal cache flushing will eventually get the release back to level 3. This delay doesn't matter for the thread releasing the lock, but it adds an extra delay for any thread waiting on it.

In short: a lock request from a core needs to go from the core to level 3 cache, from there to all cores and then return back through the caches to the core making the request before the lock is active. The locking core is idle while this takes place.

It can be explained that simply, yet for some reason we are usually not told this. I suspect it's this disconnect that makes programmers unsure of how to handle threads. Without this knowledge, programmers try something, don't try to avoid locks since they don't know locks slow everything down, and when the multithreaded version turns out slower than the singlethreaded one, they go "multithreading sucks and doesn't work, I'll go back to what I know works".

Starting and stopping threads also has a huge overhead, something else which isn't always mentioned. The solution is a threadpool, which uses one thread per core and reuses threads rather than starting new ones. Apparently the best threadpool is Intel's Threading Building Blocks (TBB). It gives each thread a queue of tasks to perform and then monitors the queues. If one queue becomes much shorter than the others (all the quick tasks landed in the same queue by chance), it moves tasks around to even the queues out again, which helps keep all cores busy until all tasks are done.

The best part is that Intel released TBB in order to get programmers to optimize code for their hardware (multicore CPUs). Their business model for it is selling CPUs, not profiting from the library itself. As a result, it's free, open source and cross-platform.
Cadde
Fast Inserter
Fast Inserter
Posts: 149
Joined: Tue Oct 02, 2018 5:44 pm
Contact:

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Cadde »

Nightinggale wrote: ↑Thu Dec 06, 2018 1:58 am Double buffering means you will have two copies of the data in memory. It's not a huge concern memory wise (32 bit limits are history), but it means copying from one buffer to another. This means you just added a new memory I/O bottleneck.
Not really. It would probably wreak havoc with the CPU cache, but it costs nothing at all in memory traffic. Yes, you would have duplicate copies of all the simulation data in memory; a non-issue, as the actual simulation data is pretty small.

Now then, here's what would happen on game load.
  • Load savegame data into memory, let's call it page A
  • Copy savegame data into other memory page B
  • Run tick, reading from page A, writing to page B.
  • Renderer continually reads from page A
  • Tick finishes, swap to reading from page B, writing to page A
  • Renderer continually reads from page B
  • ...
  • REPEAT
So, like I said, the problem would probably be with the CPU cache, as it (possibly, I don't know) wouldn't know what memory to prefetch at any given time. That remains to be seen.

Other than that, done RIGHT, you can actually increase memory performance this way, since you keep two distinct memory lanes open: one you read from and one you write to. Same principle as hard-drive RAIDs. Because you aren't reading and writing the same physical memory cells, you can read and write at the same time. This all assumes you actually use separate channels for reading and writing; otherwise there will be collisions and/or seeks in memory, which is bad for performance.

Another way you can think of it is copying files. If you copy to the same physical hard-drive it's going to take longer than if you copy between different drives. (assuming both drives have similar performance)
Nightinggale wrote: ↑Thu Dec 06, 2018 1:58 am I read what you said as a rounding issue rather than a hardware timing issue. However you are right that software timing will be less accurate than hardware timing. It should however be possible to get something reasonably able to hit 16 ms, particularly if you avoid sleep and do your own delay function, like a loop while (time() < time_next_tick). If we get really fancy, such a loop timer should be running in a process of its own with really high priority and then you can get fairly decent timing accuracy.
Yes, a dedicated update timer could alleviate the issue somewhat. But the issue isn't so much that it's not running on time. It's that every once in a while, it's running out of sync completely.

Back when I had a 60 Hz monitor, I still let my games run as many FPS as they possibly could, no VSync. Even though I never got to see all those frames, it completely eliminated stutters. Rendering 120 frames per second on a 60 Hz monitor means that if you drop to 110 FPS for whatever reason, all you'll notice is a screen tear. And that's perfectly fine for me; it doesn't give me throbbing headaches.
Nightinggale wrote: ↑Thu Dec 06, 2018 1:58 amThe question is if this is worth it. Wouldn't you be able to go into windows and switch your monitor to 60 Hz and use V-sync? That should allow you to use a hardware timer for screen updates and it would solve the problem entirely.
Already did that. Vsync makes things even worse.

Again, it's not an issue of what's being displayed on screen. It's an issue of WHEN it's being updated. When you get a mix between 60 and 59 fps you'll notice it if you are like me.

There have been many variations on how to deal with some kind of sync in games. Some games hard lock the FPS with the simulation, so 30 FPS games for example are not locked at 30 fps because of graphics but because that's the simulation time step they've chosen.

Some games lock simulation with FPS in a relationship like 1:3 for example. So 20 FPS simulation, 60 FPS render. In such games, the camera rotation updates 60 times per second but the position is locked to a simulation entity like the player's character. So camera only moves in the world 20 times per second.
Some of those fixed-ratio games have dealt with this by rubberbanding the player, so the camera interpolates its position based on what happened last in the simulation. Those games are FINE most of the time, though I wouldn't mind them running 60 FPS simulation and 120 FPS render, or some other combination, even user-customizable.

Some other games run simulation separate from render completely. The renderer simply uses what's given to it. It doesn't actually represent the simulation perfectly. The simulation updates a few choice elements for the renderer rather than every single item.
Consider a belt, for instance. The simulation doesn't say "draw 3 circuit boards, 2 iron ore and 3 copper ore at these locations on the belt"; it says "the belt is loaded" and the renderer renders a loaded belt. No, I am not saying this would be appropriate for Factorio, just describing how other games do it. The upcoming "Satisfactory" probably does it that way from what I can tell: their belts are probably restricted to one item type, and the renderer just draws "stuff" on the belt when the simulation says the belt has items on it. There's a bit more to it, but I won't expand on that here.

Either way, it's the games that decouple the camera from everything else, giving it higher priority, that leave me with a healthy head.

And don't forget, it's only a wish I have.

----

Sorry, I don't want to get any more sidetracked into multiplayer discussions, as it's beside the point I am trying to make, so I won't respond to the rest.
Nightinggale
Fast Inserter
Fast Inserter
Posts: 120
Joined: Sun May 14, 2017 12:01 pm
Contact:

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Nightinggale »

Cadde wrote: ↑Thu Dec 06, 2018 3:11 amAnd that's perfectly fine for me, it doesn't give me throbbing headaches.
So in essence you are saying it's not a perfectionist "never lose a frame" problem, but rather an "imperfections in motion give me motion sickness" kind of problem. So placing screen updates and game logic in two different threads would help reduce the risk of headaches from playing? That sounds interesting.

This discussion has made me wonder what it would take to actually use double buffering without going overboard in memory copying and I think I got an idea.

Klonan mentioned in the video something about the car moving while being drawn was an issue. Is movement of entities the only issue? What would it take to make the double buffering take place in the entity class? (I'm sure there is a base entity class somewhere in the code)

What data would be needed? Something like x, y, direction and sprite. Seems doable, particularly if sprite can be reduced to an int or a pointer. Now imagine a class holding those, with two instances in the entity class. Let's call them container 0 and container 1.

The container in use is tick % 2. Say we calculate tick 3: 3 % 2 = 1, so this tick uses container 1. The first thing tick() does is copy container 0 into container 1. After that, the code runs as normal, with the access functions going to container 1. Next tick it copies container 1 into 0, and all access functions use container 0. This should give the same game logic as right now. The reason for making it a class is that it allows copying everything in one go, which is faster than a line for each variable.

The trick here is to make the screen "lag one tick behind". While calculating tick 3, it draws tick 2, meaning that while the game is updating container 1, it draws container 0. Container 0 is fixed for all entities, so the screen drawing code sees unchanging data while drawing. This should remove the need to sync the drawing code with the game logic, allowing hundreds of FPS if some future GPU can handle it, while maintaining 60 UPS.

Perhaps it needs a variable to mark whether tick() has been called, like storing 0 or 1 when copying the container. That way, even though the tick will copy into container 1, if some other entity uses a get function before tick(), it reads from container 0, which holds the result of the last tick. Container 1 would in that case contain data from two ticks ago, hence outdated.

There is one issue, though I don't know whether it actually matters. What if the renderer has drawn 10% of tick 2's frame when the game logic finishes tick 3 and starts tick 4? Will there be a frame that mixes two ticks? Could something be done to prevent that?
I can't answer these questions because screen updates are outside my field of expertise; I just point them out as potential issues from a data integrity point of view.
Cadde
Fast Inserter
Fast Inserter
Posts: 149
Joined: Tue Oct 02, 2018 5:44 pm
Contact:

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Cadde »

Nightinggale wrote: ↑Thu Dec 06, 2018 5:34 amSo in essence you are saying it's not a perfectionistic "never lose a frame" fanatic problem, but rather "imperfections in motion gives me motion sickness" kind of problem. So you are saying placing screen updates and game logic in two different threads will be beneficial related to reduce the risk of headache from playing? That sounds interesting.
Indeed, it's kind of like motion sickness, except I never get motion sickness playing any game. It feels like being punched in the face every time a frame arrives "out of sync", so to speak. A single dropped frame is fine, but when they chain together so that you get wildly varying FPS, it's painful, irritating and demoralizing all at once. As soon as I encounter such an issue, I do everything in my power to combat it.

Looking at the Steam FPS counter, it's flickering between 59 and 60 FPS when it's happening. But it's more than that, most FPS counters work on averages.

Let's say it works on an average over a second.

First 500 ms runs at 60 FPS
Second 500 ms runs at 30 FPS
Such a frame counter (1 second averages) would display 45 FPS as that's the average over the whole second, but doesn't say jack about how the game performed in that one second.

Okay, Factorio's in-game FPS counter updates every frame. It shows FPS and UPS alternating between 59.9 and 60. Most of the time it's stable at 60, but every once in a while it starts flickering between 59.9 and 60.0 and then goes back to a solid 60.
I can't say for sure how they count their frame timings, but I would assume they use the number of milliseconds the last frame took.
So if the last frame took 16.666... ms to simulate and draw, including the idle time, then it would display 1000/16.666... = 60.0 FPS.
But if the last frame took 16.694... ms to draw (A difference of a mere 0.0278 ms) then it will display 1000/16.694... = 59.9 FPS.
BUT, even then it doesn't tell the whole story... What happened in reality? This happened...

Frame 1 took 16.666 ms to render. 60 FPS all is good!
Frame 2 took 16.666 ms to render. Still 60 FPS all is good!
Frame 3 took 16.694 ms to render. Oh... The game is running slow now, the frame is out of sync.
Frame 4 took 16.666 ms to render. 60 FPS again but the frame is still out of sync.
Frame 5 took 16.666 ms to render. 60 FPS still, also still out of sync.
Frame 6 took 16.694 ms to render. Oh snap... Game is running slow, the frame is even more out of sync.

If it does that for all 60 frames of a second then my eyes will notice it. Even if vsync is off.
(By the way, if vsync was on it would actually DROP a whole frame waiting for the next sync instead. Frame 3 and 6 wouldn't draw at all and that's even worse when it happens so often.)
So, it's switching between running steady and jagged like that and that's what bothers me. But why would this be?

Let's experiment some with floating point numbers shall we?
Using the limitations of 32 bit floating point values, as in all numbers in the below example have floating point rounding errors in them:

Here's some C# source code

Code:

        static void Main(string[] args) {
            float frame_step = (1000f / 60);
            float last_render = 0;

            for (var i = 0; i < 60; i++) {
                Console.Write(i);
                Console.Write(" : ");
                Console.WriteLine(last_render);
                last_render = last_render + frame_step;
            }
            Console.ReadKey();
        }
And here's the output

Code:

0 : 0
1 : 16.66667
2 : 33.33333
3 : 50
4 : 66.66666
5 : 83.33333
6 : 99.99999
7 : 116.6667
8 : 133.3333
9 : 150
10 : 166.6667
11 : 183.3333
12 : 200
13 : 216.6667
14 : 233.3334
15 : 250
16 : 266.6667
17 : 283.3333
18 : 300
19 : 316.6667
20 : 333.3333
21 : 350
22 : 366.6666
23 : 383.3333
24 : 399.9999
25 : 416.6666
26 : 433.3333
27 : 449.9999
28 : 466.6666
29 : 483.3332
30 : 499.9999
31 : 516.6666
32 : 533.3333
33 : 549.9999
34 : 566.6666
35 : 583.3333
36 : 600
37 : 616.6667
38 : 633.3334
39 : 650.0001
40 : 666.6667
41 : 683.3334
42 : 700.0001
43 : 716.6668
44 : 733.3335
45 : 750.0002
46 : 766.6669
47 : 783.3336
48 : 800.0002
49 : 816.6669
50 : 833.3336
51 : 850.0003
52 : 866.667
53 : 883.3337
54 : 900.0004
55 : 916.6671
56 : 933.3337
57 : 950.0004
58 : 966.6671
59 : 983.3338
Observe the output and you will find a few irregularities.
Frame index 1 is 16.66667, where all-sixes decimals are expected.
Frame index 6 is 99.99999, which should be an even 100.
Frame index 14 is 233.3334 where most others are x.3333
Frame index 29 is 483.3332
Frame index 39 is 650.0001
Frame index 45 is 750.0002
Frame index 51 is 850.0003
Frame index 54 is 900.0004

If you used that as a frame limiter, you would notice it too! The frame times would be all over the place.

So yes, timing does matter, and it's not just about how well the game is running; it's probably related to rounding errors.
It wouldn't be noticeable if the game rendered more frames than the refresh rate of the monitor.

Which made me think (I haven't tested it, though): what if I increased the game speed to 1.01 and set my monitor's refresh rate to 60? The game would run 1% faster, but maybe, just maybe, I could get rid of the stuttering.
Nightinggale wrote: ↑Thu Dec 06, 2018 5:34 amKlonan mentioned in the video something about the car moving while being drawn was an issue. Is movement of entities the only issue? What would it take to make the double buffering take place in the entity class? (I'm sure there is a base entity class somewhere in the code)
Only BIG movement is an issue; if a drone teleports across the screen, that's not a problem. It only affects me when the whole screen moves.
Likewise, if you fill the screen with drones all moving in different directions at 20 FPS, it doesn't affect me. It's single-direction movement of large areas of the screen that has an effect, such as moving the camera or moving the character (which is the same as moving the camera).
If the camera keeps moving at a fixed rate, then no probs.
Nightinggale wrote: ↑Thu Dec 06, 2018 5:34 amWhat data would be needed. It would be something like x, y, direction, sprite. Seems doable, particularly if sprite can be reduced to an int or pointer. Now imagine adding a class with those and add two instances in the entity class. Let's call them container 0 and 1.

Now the one being used is tick % 2. Say we calculate tick 3. 3 % 2 = 1. This means this tick we use container 1. The first thing we do in tick() is to copy container 0 into 1. After that we run the entire code normally where the access functions goes into container 1. Next tick it will copy container 1 into 0 and then all access functions will use container 0. This should provide the same game logic as right now. The reason why it's a class is it allows copying all of them in one go, which is faster than adding a line for each variable being copied.

The trick here is to make the screen "lag one tick behind". While calculating tick 3, it will draw tick 2. This means while it's updating it will draw container 0. All entities will have fixed container 0, meaning the screen drawing code will view unchanging data while drawing. This should remove the need to sync the screen drawing code with the game logic, hence allowing hundreds of FPS if some future GPU can handle it while maintaining 60 UPS.

Perhaps it needs some variable to determine if tick() has been called, like just storing 0 or 1 when copying the container. This way even if it will copy to container 1, if some other entity use a get function prior to tick(), it will read from 0, which contains the result from the last tick. Container 1 would in that case contain the data from 2 ticks ago, hence outdated.

There is one issue though, but I don't know if it's an issue or not. What if it draws 10% from frame 2, the game logic finishes tick 3 and starts tick 4. What will happen? Will there be a frame, which becomes a mix of two ticks? Could something be done to prevent this from being an issue?
I can't answer any of those questions because screen updates is outside my field of expertise. I just point to them as potential issues from a data integrity point of view.
I'm sorry to say you lost me there. I was talking about double buffering the whole simulation: reading from one page, writing to the other. There's no real copying taking place.
But now I realize my error... DAMN.

Double buffering works fine for rendering because the WHOLE screen is redrawn each frame, whereas double buffering the simulation would mean writing on top of data that is two frames old.
Maybe I get what you mean now. ;)

Either way, I don't really need a double-buffered simulation to be happy. All I need is for the camera to be FREE of the simulation, so when it moves it can do so without waiting for the simulation to finish.

Like I said before, if the renderer ran at 120 FPS, all would be good; the issue wouldn't be so noticeable then. The smaller the frame steps, the less noticeable a single frame drop or desync is. If we had 1000 FPS, this wouldn't be an issue for anyone. The myelinated nerves in human eyes can fire 1000 times per second; some people's eyes are slower, of course, hence they can deal with 60 FPS and below just fine.
I am "blessed" with fast-firing myelinated nerves or something. I haven't actually tested my eyes, but I tell it like I see it. ;)
User avatar
bobingabout
Smart Inserter
Smart Inserter
Posts: 7352
Joined: Fri May 09, 2014 1:01 pm
Contact:

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by bobingabout »

Just thinking about fluids again...

One of the problems I have with the current system is that if I set the flow rate higher than normal (for my gasses, which I wanted to flow faster), or use smaller pipes than default (which has the same flow-speeding effect), it currently just triggers the "unstable simulation" state. However, reducing the flow rate of a fluid to make it more viscous (more goo-like than fluid-like) does slow the flow as intended. Making the pipes thicker sort of has a similar effect too, but that wasn't something I intended.

Is viscosity in fluids still going to be part of the new fluid system? Will it support lower viscosity fluids as well as higher?
Creator of Bob's mods. Expanding your gameplay since version 0.9.8.
I also have a Patreon.
Dominik
Former Staff
Former Staff
Posts: 658
Joined: Sat Oct 12, 2013 9:08 am
Contact:

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Dominik »

Pinga wrote: ↑Wed Dec 05, 2018 7:45 pm
It is not true that it would not affect the result. Naturally, if the first consumer in the series spends the fluid, nothing is left for the next. If we joined the fluidboxes as you say (either connecting the turbines, or a pipe with many branches), this behavior would disappear, with everybody getting the same amount. Back to fluid teleportation, basically.
Dominik
Former Staff
Former Staff
Posts: 658
Joined: Sat Oct 12, 2013 9:08 am
Contact:

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Dominik »

bobingabout wrote: ↑Thu Dec 06, 2018 9:59 am Is viscosity in fluids still going to be part of the new fluid system? Will it support lower viscosity fluids as well as higher?
I have that running right now. But no promises that I will keep it as it is yet another added computation.
Pinga
Inserter
Inserter
Posts: 42
Joined: Fri Oct 27, 2017 3:59 pm
Contact:

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Pinga »

Dominik wrote: ↑Thu Dec 06, 2018 10:57 am
bobingabout wrote: ↑Thu Dec 06, 2018 9:59 am Is viscosity in fluids still going to be part of the new fluid system? Will it support lower viscosity fluids as well as higher?
I have that running right now. But no promises that I will keep it as it is yet another added computation.
Are there any plans to communicate that better, though? Adding another hidden variable to an already arcane system that shows nothing about throughput doesn't sound great. Currently it's hard to look at a pipe and say "X liquid per second is coming through here" or "I need Y pipes to feed this machine"; even the most experienced players just overshoot and hope for the best. And then there's the odd stuff, like how an inline tank reduces your throughput but a pump from a tank is faster, or how a straight line of pipes is worse than a sequence of undergrounds. Things are often not very intuitive.
Dominik
Former Staff
Former Staff
Posts: 658
Joined: Sat Oct 12, 2013 9:08 am
Contact:

Re: Friday Facts #271 - Fluid optimisations & GUI Style inspector

Post by Dominik »

Pinga wrote: ↑Thu Dec 06, 2018 11:27 am
Dominik wrote: ↑Thu Dec 06, 2018 10:57 am
bobingabout wrote: ↑Thu Dec 06, 2018 9:59 am Is viscosity in fluids still going to be part of the new fluid system? Will it support lower viscosity fluids as well as higher?
I have that running right now. But no promises that I will keep it as it is yet another added computation.
Is there any plans to communicate that better though? Adding another hidden variable to an already arcane system that shows nothing about throughput doesn't sound great. Currently it's hard to look at a pipe and say "X liquid per second is coming through here", or "I need Y pipes to feed this machine". Even the most experienced players will just overshoot and hope for the best. And then there's the odd stuff like how an inline tank reduces your throughput, but a pump from a tank is faster. Or how a straight line of pipes is worse than a sequence of undergrounds. Things are often not very intuitive.
I don't want to spoil stuff, it will eventually all be in FFF. But yes.