State inflow · Issue #564 · AMReX-Combustion/PelePhysics · GitHub

State inflow #564

Open · ThomasHowarth opened this issue Mar 26, 2025 · 5 comments

@ThomasHowarth
Contributor

The existing TurbInflow setup lets you impose inflow velocities, but not other scalars, so you cannot prescribe, for example, a partially premixed inflow. I have an existing setup for this (StateInflow), which I added as an additional Utility directory; it is essentially a copy of TurbInflow but with more flexibility over the number of fields it loads. @drummerdoc mentioned it would be nice to have this capability, but I wanted to raise an issue to discuss whether there is a nicer way of implementing it and, in particular, of avoiding essentially duplicated code.
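
Roughly, the generalization looks like this (a sketch with hypothetical names, not the actual StateInflow code): where TurbInflow hard-codes the velocity components, the reader carries a runtime component count alongside its stored planes.

```cpp
#include <AMReX_FArrayBox.H>
#include <AMReX_Vector.H>

// Hypothetical plane storage: like TurbInflow's, but ncomp is a runtime
// value instead of being fixed to the AMREX_SPACEDIM velocity components.
struct StateInflowPlanes
{
    int ncomp = AMREX_SPACEDIM;              // velocities only, by default
    amrex::Vector<amrex::FArrayBox> planes;  // one FAB of ncomp fields per plane
};
```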

@drummerdoc
Contributor
drummerdoc commented Mar 26, 2025

Thomas, this is great. Thanks for contributing. How hard is it to simply replace the other code? I think it's really just a superset of the functionality, no? Is the number of unknowns something we can sneak in in a backward-compatible way? It's part of every Fab read, for example, so it might be super easy. Also, there is a user interested in using the (poorly named) "swirltype" option, where a time stamp is added to the HDR file for each plane and the data interpolation in the normal direction is nonuniform to account for this (also, the search for the correct source data planes is slightly more painful). It would be great if the backward-compatible hack for ncomp did not break that bit as well.
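
A minimal sketch of that backward-compatible ncomp idea, assuming a plain-text header whose layout is hypothetical here: if the header carries an explicit component count, use it; otherwise fall back to the legacy velocity-only layout so existing TurbInflow files keep working.

```cpp
#include <fstream>
#include <string>
#include <AMReX.H>  // for AMREX_SPACEDIM

// Read the component count from an inflow header file. The header layout
// assumed here is hypothetical; the point is the fallback: legacy files
// that carry no count are treated as velocity-only TurbInflow data.
int read_ncomp (const std::string& hdr_file)
{
    std::ifstream hdr(hdr_file);
    int ncomp = 0;
    if (hdr >> ncomp && ncomp > 0) {
        return ncomp;           // new-style header: ncomp stored explicitly
    }
    return AMREX_SPACEDIM;      // legacy header: the three velocity components
}
```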

Short of that, Marc (@marchdf) implemented a more AMReX-like way to do this for AMR-Wind, where a linear interp is used instead of parabolic (so only two planes are needed) and the intermediate data is managed as a BndryRegister, I believe. Even that was a little tacky in that the time stamps were encoded in a weird way. However, it was nice in that the source data for each step is in a separate file and the reader pulled in the data as needed each time (rather than having to know a priori how to assemble the data you'll need). This made more sense in the non-subcycling case, where BCs are filled at the same time over all levels (vs. subcycling, where we go back and forth over the coarse time step interval).
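
For reference, the two-plane linear interpolation reduces to a weighted average in time; a minimal sketch (names and data layout are assumptions, not the AMR-Wind implementation):

```cpp
#include <AMReX_FArrayBox.H>

// Linearly interpolate between two stored data planes bracketing `time`.
// Parabolic interpolation (as in TurbInflow) needs three planes; linear
// needs only the two that bracket the requested time (assumes t_hi > t_lo).
void lerp_planes (amrex::FArrayBox& dest,
                  const amrex::FArrayBox& plane_lo, amrex::Real t_lo,
                  const amrex::FArrayBox& plane_hi, amrex::Real t_hi,
                  amrex::Real time, int ncomp)
{
    const amrex::Real w = (time - t_lo) / (t_hi - t_lo);  // weight in [0,1]
    const amrex::Box& bx = dest.box();
    auto d       = dest.array();
    auto lo_data = plane_lo.const_array();
    auto hi_data = plane_hi.const_array();
    const auto lo = amrex::lbound(bx);
    const auto hi = amrex::ubound(bx);
    for (int n = 0; n < ncomp; ++n) {
        for (int k = lo.z; k <= hi.z; ++k) {
            for (int j = lo.y; j <= hi.y; ++j) {
                for (int i = lo.x; i <= hi.x; ++i) {
                    d(i,j,k,n) = (1.0 - w)*lo_data(i,j,k,n) + w*hi_data(i,j,k,n);
                }
            }
        }
    }
}
```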

Maybe it's worth finding a way to combine some of these ideas. For one, the TurbInflow code requires that each processor hold multiple entire planes of data. Even with "nplanes" set to something small, this may end up blowing out memory when there are multiple instantiations of the framework (for different inflow jets, for example).

I'd be interested to toss around better ideas to manage all this stuff. The hard parts are probably already done and it might be just gluing things together in a more intelligent way. This aspect seems to really trouble lots of users.

@drummerdoc
Contributor
drummerdoc commented Mar 26, 2025

Also, in a direction that could probably be classified as "overworking this issue", we could think of this as building an interface to support code coupling, then create a simple driver that serves precomputed data up to this coupler. More generally, if there were a clean way to create an API like that, it could actually be super useful and extensible. Ideas?
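
Not an existing Pele API, just a sketch of what such a coupling interface might look like: file-based readers (TurbInflow/StateInflow) and a live coupler serving data from another running solver could then sit behind the same call site.

```cpp
#include <AMReX_FArrayBox.H>

// Hypothetical interface: anything that can produce inflow state data at a
// requested time, whether from precomputed plane files or a live coupling.
struct InflowProvider
{
    virtual ~InflowProvider () = default;

    // Fill `ncomp` components of `data` (defined on a boundary box) with
    // inflow values at simulation time `time`.
    virtual void fillInflow (amrex::FArrayBox& data,
                             amrex::Real time, int ncomp) = 0;
};
```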

For example, some are using LMeX to create inflow data for LMeX. Seems reasonable to expect that LMeX should be able to talk to itself in this mode...

@baperry2
Contributor

@ThomasHowarth - I'd be open to just replacing TurbInflow with your StateInflow if it is a superset of the capability and we can support the existing TurbInflow capability through it.

I'm not sure the BndryRegister stuff from @marchdf in AMR-Wind is really what we want here. That is (or was?) designed for the inflow data to be on the same grid as the new simulation. I think our TurbInflow, if extended to all scalars, is actually pretty flexible in terms of how we can use it.

One cool thing with the AMR-Wind stuff is that it can be extended for runtime coupling between different AMReX codes (or multiple instances of the same code) through the AMReX MultiBlock capability. We did that with ERF and AMR-Wind through https://github.com/erf-model/erf-amrwind-driver. I can easily imagine many cases where it would be useful to couple two separate PeleLMeX domains, or a PeleLMeX domain plus a PeleC domain, in a similar manner. But that is getting very far into "overworking this issue".

@marchdf
Contributor
marchdf commented Mar 28, 2025

Agreed with Bruce, let's not overengineer this. If what you built is a superset, then let's deprecate the plain TurbInflow (if that can be done cleanly without breaking backward compat).

What we have in amr-wind is more complex but more flexible. But it brings in a whole bunch of machinery that we probably could do without for now. It does have utilities and things like that for interpolation to other grids, custom generation of data for the planes, support for mesh refinement, etc. Maybe one day we can interface with it or pull it out of amr-wind to make it standalone. That might be a cool thing to do ;)

@drummerdoc
Contributor

Oddly, I believe we are all saying exactly the same thing. Add an ncomp that can be extracted from the fab files rather than assumed, and ensure that whatever is done does not prevent us from supporting "swirltype" someday soon. Then, if we have some cool idea to blend this with the BndryRegister stuff, at least conceptually, that could be useful to many folks, and it would allow code coupling.
