Counterflow Flame control #1622
Conversation
For reference, this is the control file that I've been using for testing the overall execution of the two-point flame control method. It's still not very stable, even though I'd expect it to be at least somewhat stable, given the lack of any caveats in the 1994 paper from Nishioka, Law, and Takeno (the paper that serves as the basis for this implementation). Also attached is the mechanism file I am using to debug the implementation. I had to rename the YAML file to have a .txt suffix to make GitHub happy.
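For anyone following along, this is roughly the shape of the driver I'm using (a minimal sketch: the built-in `h2o2.yaml` stands in for the attached Burke mechanism, the inlet values are illustrative, and `two_point_control_enabled` is the property name used later in this thread):

```python
import cantera as ct

# Stand-in for the attached Burke H2/O2 mechanism file.
gas = ct.Solution("h2o2.yaml")
gas.TP = 1200.0, 8.0e5

sim = ct.CounterflowDiffusionFlame(gas, width=0.02)
sim.fuel_inlet.X = "H2:1"
sim.fuel_inlet.T = 1200.0
sim.fuel_inlet.mdot = 0.5        # kg/(m^2*s); illustrative value only
sim.oxidizer_inlet.X = "O2:1"
sim.oxidizer_inlet.T = 1200.0
sim.oxidizer_inlet.mdot = 0.5

# Baseline solution without control.
sim.solve(loglevel=1, auto=True)

# Enable two-point control and re-solve.
sim.two_point_control_enabled = True
sim.solve(loglevel=1)
```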
I'm getting back to this feature now to get these last few issues resolved.
@ischoegl It is kind of confusing how the internal residual functions set some default BCs and then the boundary objects have to come along afterwards and change/adjust the boundaries.
This has been Cantera's approach for a long time. I don't really see an obvious alternative, either?
Yeah. I've been trying to debug an issue that popped up with something not being set properly when transitioning from non-two-point control to two-point control, which causes the steady solver to fail. I found myself bouncing back and forth between StFlow and Boundary1D to double-check that something I thought had been set didn't get switched by the other class. I don't have any more constructive thoughts on it, other than that it seems like the boundary values should be the responsibility of the boundary objects.

Do you/Ray/Bryan have any go-to approaches for debugging a failing steady-state solve? I've set the log level to 8 to get outputs for the solution and residuals in the YAML debug file. There seemed to be a bug where, at log level 8, the failing Newton solve section would throw an error: Sim1D tries to write debug information before and after taking a timestep, but the presence of the first "debug" section in the YAML file causes writeHeader to complain that the group already exists when the "after timestepping" part tries to write its "debug" entry to the file. I ended up passing the overwrite flag as true for those writes; I'm not sure whether that's the correct thing to do there. I also have a steady-state callback in the Python script that I'm using to plot all the variables of interest during each steady-state solve attempt.

I'm just running a bland H2/O2 Burke mechanism with 1200 K boundaries at 8e5 Pa, ideal gas. The initial steady-state solve works fine, but when I turn on the two-point control and attempt a single steady-state solve, the solver fails, and I'm trying to figure out why.
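For reference, a stripped-down version of the callback hookup I'm using (assuming a `set_steady_callback` hook on the flame object `sim` from a script like the one above; the Func1-style signature and the plotting are schematic):

```python
import matplotlib.pyplot as plt

attempt = [0]

def monitor(t):
    # Snapshot the temperature profile on each steady-state solve attempt.
    attempt[0] += 1
    fig, ax = plt.subplots()
    ax.plot(sim.grid, sim.T)
    ax.set_xlabel("x [m]")
    ax.set_ylabel("T [K]")
    fig.savefig(f"steady_attempt_{attempt[0]:03d}.png")
    plt.close(fig)
    return 0.0  # assumed Func1-style callback: takes a float, returns a float

sim.set_steady_callback(monitor)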
I've updated my debugging Python script if anyone wants to reproduce the crash scenario.
Just so I don't forget: I ran across a strange execution where two calls were made to evalLambda, one with jmin=0 and jmax=41 (for a 42-point grid), and then a call with jmin=0 and jmax=1. This second call went into the interior loop at the bottom of the eval method for exactly one iteration, because of the coding that sets j0 = max(jmin, 1) and because the for-loop is inclusive of the upper bound. It may not be an actual issue, but I was surprised by it when I saw some output that I wasn't expecting from some print statements I had placed. I'm not sure whether a call to evalLambda for only one cell that ends up updating the state of more than the requested cell is an undesirable side effect.
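The behavior is easier to see in a toy mirror of the loop bounds (Python stand-in for the C++ loop; this models only the bound-setting described above, nothing else about evalLambda):

```python
def visited_interior_points(jmin, jmax):
    # Mirrors j0 = max(jmin, 1); the C++ for-loop includes jmax itself.
    j0 = max(jmin, 1)
    return list(range(j0, jmax + 1))

print(visited_interior_points(0, 41))  # a full-grid call (jmin=0, jmax=41)
print(visited_interior_points(0, 1))   # [1] -- the single surprising iteration
```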
I can see that, but don't necessarily object to the side effect that evalLambda updates more than the requested cells.
I don't, but @speth may. There is some write-up on debugging here, but I have not really done much with that. Instead, I typically create a breakout from the Python API, which was largely sufficient for what I have done thus far.
Interesting. I was not aware that there are write steps before and after taking a timestep (and am also wondering whether this saves redundant info in subsequent time steps?). In any case, your workaround looks reasonable; as this presumably is also an issue for the
@speth I was wondering if you could help me check my understanding of something in Sim1D.cpp. What is the logic for what gets stored in the debug_sim1d.yaml file during an unsuccessful steady-state solve? I was thinking that the save() and saveResidual() calls in the solve() method that happen after a failed steady-state solve write the state of the system to the YAML file, and then there is the section immediately after where the code tries to take some timesteps, at which point it also writes information to the debug YAML file. I figure that this data gets stored in the YAML file as something like a "state of the system after we take N timesteps" entry. Is the timestepping code taking N timesteps from the failed solution state? Or does it take N timesteps from the initial state before the Newton solve was attempted? That is, is it: initial state -> failed Newton solve -> N timesteps from the failed state? Or: initial state -> failed Newton solve (discarded) -> N timesteps from the initial state?

I'm wondering whether it is worth writing an entry to the debug YAML file for the state before the attempted Newton solve, so that we can capture the before/after state around the steady-state Newton solve attempt. I've written a simple Bokeh server visualizer to display the full state of the system for each entry in the YAML file. I added new high-level groups for "debug_afterTimestepping" and "residual_afterTimestepping" to see changes, and recently added "debug_beforeNewton" and "residual_beforeNewton" groups by calling the save/saveResidual methods before the call to newtonSolve(). I've noticed that when I plot all three, the timestepping result and the beforeNewton state seem to be the same, which I suppose means that the first option above is probably what is happening.

Not sure if I'm going about this all wrong, but I was trying to get more detailed state information that is easy to parse, for when I'm adjusting BCs and equations, so that I can easily see the changes happening in the domain.
I believe that the timestepping starts from the state before the attempted Newton solve. You can see this in `dt = timeStep(nsteps, dt, m_state->data(), m_xnew.data(), loglevel-1);`, where `m_state` -- the solution in place before the Newton attempt -- is passed in as the starting point.
I think the state written to the debug file and labelled as "After unsuccessful Newton solve" is the state before the attempted Newton solve, since the result of the failed Newton iteration never gets copied back into `m_state`.
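In case it helps others poking at the same thing, the entries in the debug file can be inspected quickly from Python (file and group names as discussed above; PyYAML assumed available):

```python
import yaml

# Quick look at what ended up in the debug file after a failed solve.
with open("debug_sim1d.yaml") as f:
    dbg = yaml.safe_load(f)

# Top-level groups, e.g. 'debug' and 'residual', plus any custom groups
# like the 'debug_beforeNewton' entries mentioned above.
print(list(dbg))
```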
I was looking at my setRightControlPoint() and setLeftControlPoint() methods for the two-point control method, and there's already a much better implementation in the setFixedTemperature() method used for freely propagating flames. I was trying to see if I could remove the conditional check in that method that looks at the return value of isFree(), and generalize it for any domain that wants to insert a specified temperature point. The coding in that method has me a bit confused because it loops over several domains to insert the new point. Is that because the domains use a global numbering, so shoving a single point into one domain would mess up the other (n-1) domains around the free-flame domain? Also, the extra work the method does to save the grids and solutions for all domains that are not the freely propagating domain seems like it shouldn't be required, but I can see that the final section that loops over the domains uses some sort of daisy-chaining of indices to update the grids for all domains. I was thinking that being able to call setFixedTemperature() twice, with the left and right control point temperatures, would be nice, but it seemed a bit ham-fisted to just add a third "|| d_free->twoPointControlEnabled()" to the if-statement.
Yes, I think it would make sense to adapt the setFixedTemperature() implementation for this.
The idea is that the simulation could consist of multiple flow domains (set up as boundary-flow-boundary-flow-boundary, for example). If that were a freely propagating flame, you would still only set one temperature fixed point, and the location where the temperature reaches the specified value could be in either domain, so you have to loop over all of them.
The grid and the state vector are both global, covering all domains. So to insert a point, you need to effectively push back the data for all the remaining points in both the grid and state vector.
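A toy illustration of that point, using a flat numpy state vector (purely schematic: the sizes and layout are made up, not Cantera's actual storage):

```python
import numpy as np

n_comp = 5                             # hypothetical components per grid point
grid = np.linspace(0.0, 1.0, 8)        # global grid spanning all domains
state = np.zeros(n_comp * grid.size)   # global flattened state vector

def insert_point(grid, state, j, z_new):
    # Adding a point at index j shifts every downstream grid point and every
    # downstream block of the state vector: all domains after the insertion
    # point see their global indices move by one.
    grid = np.insert(grid, j, z_new)
    new_block = np.zeros(n_comp)       # placeholder; real code interpolates
    state = np.insert(state, j * n_comp, new_block)
    return grid, state

grid, state = insert_point(grid, state, 3, 0.35)
print(grid.size, state.size)           # 9 points, 45 state entries
```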
An update: I have been able to run a script that loops over all the burning flame configurations (by symmetrically increasing the left and right mdot values) until flame extinction, then reloads the last solution and enables two-point control to march down the unstable branch. It does seem capable of marching down the unstable branch. One thing that must be accounted for is that the drop in the maximum temperature of the flame is very sensitive to the temperature decrement at the control points during the unstable-branch walk-down. So in the script I'm using, I save previous solutions and restore them when failures occur because the 1D solver can't handle a case where the flame changes significantly from the initial condition. I haven't worked out a particularly satisfying way to reuse that older fixed-temperature coding yet. I'm going to look at it again today, but if I can't make progress on that, I don't think it should hold up the rest of this PR.
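Schematically, the walk-down loop in my script looks like this (a sketch, not the exact script: the decrement, stopping criterion, and error handling are simplified, and `sim` is an already-converged flame with two-point control enabled):

```python
import cantera as ct

decrement = 10.0  # K; invented value -- must stay small on the unstable branch
sim.save("last_good.yaml", name="solution", overwrite=True)

while sim.T.max() > 1500.0:  # illustrative stopping criterion
    try:
        sim.left_control_point_temperature -= decrement
        sim.right_control_point_temperature -= decrement
        sim.solve(loglevel=0)
        sim.save("last_good.yaml", name="solution", overwrite=True)
    except ct.CanteraError:
        # Solver failed: restore the previous solution, take a smaller step.
        sim.restore("last_good.yaml", name="solution")
        decrement *= 0.5
```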
I'm working on adding a unit test that runs an H2/O2 counterflow diffusion flame, gets the temperature profile, then activates the two-point control, sets control points based on the maximum flame temperature, and then re-solves the governing equations with the two-point control activated (but no temperature decrement at the control points). This should give a solution that is similar to the solution without the two-point control method. I'm having some issues, though, with the unit tests hanging when I try to run them. I haven't figured it out yet, but it might have something to do with the output files having two-point control entries added to them, causing an issue when tests compare against files that don't have those entries. A sketch of the test is below for reference:
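Roughly the following (a sketch, not the committed version: `set_left_control_point`/`set_right_control_point` are hypothetical stand-ins for however the control points end up being set, and the tolerance is a placeholder):

```python
import cantera as ct
import pytest

def test_two_point_control_round_trip():
    gas = ct.Solution("h2o2.yaml")
    gas.TP = 1200.0, 8.0e5
    sim = ct.CounterflowDiffusionFlame(gas, width=0.02)
    sim.fuel_inlet.X = "H2:1"
    sim.fuel_inlet.T = 1200.0
    sim.fuel_inlet.mdot = 0.5
    sim.oxidizer_inlet.X = "O2:1"
    sim.oxidizer_inlet.T = 1200.0
    sim.oxidizer_inlet.mdot = 0.5
    sim.solve(loglevel=0, auto=True)
    T_max_ref = sim.T.max()

    # Control points pinned near the current peak temperature, no decrement:
    # the re-solved flame should be essentially unchanged.
    sim.two_point_control_enabled = True
    sim.set_left_control_point(T_max_ref - 100.0)   # hypothetical helper names
    sim.set_right_control_point(T_max_ref - 100.0)
    sim.solve(loglevel=0)

    assert sim.T.max() == pytest.approx(T_max_ref, rel=1e-3)
```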
I got the unit test for the two-point control to work. Still seeing some other tests fail or hang for some reason. @speth I have a small description of the two-point method in the include file for StFlow.h. Do I need to expand on this method in more detail somewhere else in the Cantera documentation (like an .rst file or something)?
@wandadars ... thank you for your continued work on this. While I won't be able to add much in terms of detailed review at the moment, I wanted to answer your question about documentation: yes, it would be great if you could add some information on the theory to the documentation.

On a separate note, there appears to be a critical/inadvertent change in computational behavior: at this point, a lot of the unit test suite appears to be timing out at the 1 hour time limit (rather than running within a few minutes).
I pushed a fix for the test that was causing the CI to hang. The test in question was getting stuck regridding a flame over and over, adding and then removing the same grid points on successive iterations. I think we were just getting lucky that this didn't happen before, and it was only triggered by the addition of the new equation for the two-point control.
@speth I get a failing test when trying to round-trip the parameters that get saved in the YAML file. This is a very small difference, and I was going to switch the test to an approximate comparison, but I wanted to double-check with you: should we expect perfect equality here when reading data back from the YAML file? For reference, this is what gets written:
I'm getting core dump errors for unit tests that deal with the two_point_control_enabled property. Should all the Python properties for the two-point control method be in the FlameBase class in onedim.py, or should they be down in the CounterflowDiffusionFlame class? I haven't quite grasped it yet, but the following coding gives the core dumps when the commented section is uncommented.

The m_usesLambda variable is protected, but this method is part of the same class, so I didn't think that would be an issue; it appears to be, though.

Edit: While m_usesLambda is a protected variable, the method in question is part of the same class, which I thought meant it could access protected variables. I changed the direct use of the variable to the isStrained() method, which returns the value of m_usesLambda, and that stopped the weird core dumps I was seeing when calling the method from the Python interface.

Edit 2: I spoke too soon; I'm still getting those core dumps. Before, the problem was in the coding that gets/sets the two_point_control_enabled property, and changing the direct access of m_usesLambda to function-based access worked. But now I get something strange: earlier in the tests I call sim.right_control_point_temperature without any issues, but if I set two_point_control_enabled to false and then call the right_control_point_temperature property, pytest reports a core dump.
I may not understand how the C++ exceptions make it up to the Python level. That might be what is causing some of the issues that I'm having.
For any C++ method that might raise an exception, you need to add an `except +translate_exception` annotation to its Cython declaration. Pytest's output capturing capabilities unfortunately suppress the useful error message you would get otherwise. If you run the test outside of SCons (may require some fiddling with paths), you can use a command something like the sketch below to see the full error output.
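For example (the test path and `-k` pattern here are placeholders for whatever test you're chasing; `-s` disables pytest's output capture so the Cantera error text is visible):

```python
# run_one_test.py -- hypothetical helper; adjust the path to your layout.
import pytest

pytest.main([
    "-v",
    "-s",                          # disable capture so the error text is shown
    "test/python/test_onedim.py",  # placeholder path
    "-k", "two_point",             # placeholder test-selection pattern
])
```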
I think roundoff-level errors have to be tolerated when comparing values that have been converted to string. |
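A tiny illustration of why (pure Python; the formatting width is arbitrary, not necessarily what gets used when writing the YAML file):

```python
import pytest

x = 0.123456789012345678   # more digits than the string form will keep
s = f"{x:.12g}"            # the value as it might be serialized to YAML
y = float(s)

assert y != x                             # exact equality fails after round-trip
assert y == pytest.approx(x, rel=1e-11)   # roundoff-level tolerance passes
```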
Ah, that's it. Thanks for clearing that up. Is there a way to view what the documentation looks like with the edits to the markdown files?
Yes, though it's a little tricky to find. At the top of this page, go to the "Checks" tab, then select "CI" from the list on the left. Scroll all the way down on the right until you get to the "Artifacts" section. The "docs" artifact contains the documentation built using your PR.
@speth I believe I have addressed all the review comments/suggestions that you provided. |
Thanks, @wandadars, I think this is looking pretty good. I have a few more comments, mainly regarding the documentation, the example, and some minor formatting quibbles. This should be good to go after these are addressed.
This looks good to me. I just added a couple of minor formatting changes. Once this passes CI, I'll merge it. Thanks for bringing the long saga of adding this feature to Cantera to a successful end, @wandadars.
Likewise, thanks!
This pull request supersedes #1510 and is based on the newer coding in the StFlow class. This feature aims to give users the capability to generate 1D flame solutions along the unstable burning branch of the typical S-curve used in flamelet progress variable methods.
Checklist

- Build passes (`scons build` & `scons test`) and unit tests address code coverage