Squashing nuclides with same percent units when adding nuclides to a material #3568
base: develop
Conversation
Not suggesting that this should be a blocker, but I've heard a number of times from analysts that they often like to see repeated nuclides when they come from different mixtures, i.e. keeping the contributions from different nuclides printed in blocks by mixture, repeating nuclides as needed. I can see it both ways: you have a clear record in the Python file which made your problem, so that's traceable, although you might keep the XML file left behind. So might it be worth having a flag which gives the option to squash as an argument to that function, which defaults to true?
OK, fair enough. Optional squashing (default true) when exporting to XML sounds like a more suitable solution then.
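For concreteness, a toy sketch of what export-time squashing could look like. The helper and the suggested flag are hypothetical, not part of OpenMC's API or this PR:

```python
from collections import defaultdict

def squash(nuclides):
    """Combine (name, percent, percent_type) entries that share both name
    and percent type; entries with mixed types are left separate."""
    totals = defaultdict(float)
    order = []  # keep first-seen order so the XML output stays stable
    for name, percent, ptype in nuclides:
        key = (name, ptype)
        if key not in totals:
            order.append(key)
        totals[key] += percent
    return [(name, totals[(name, ptype)], ptype) for name, ptype in order]

# An export method could then take e.g. a squash_nuclides=True default
# and skip this helper when the caller wants per-mixture repetition.
```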
I can understand both arguments here, but I do lean toward the opinion that materials should not have nuclides duplicated. It may happen to work OK on the C++ side right now, but it's not hard to imagine an implicit assumption that nuclides are unique being introduced in the future.
Separate from the considerations already discussed, one thing I don't like here is that this makes building a material O(N²) since each time a nuclide is added it loops over the entire list of nuclides. The obvious way around that is to store a dictionary per material but that also means more memory usage, which I'm not crazy about 🤔
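For what it's worth, keying the storage on (name, percent type) would make each add O(1) amortized. A minimal sketch of that trade-off (hypothetical storage, not OpenMC's actual Material class):

```python
class DictBackedMaterial:
    """Toy illustration: dict storage avoids the O(N^2) scan per add."""

    def __init__(self):
        self._nuclides = {}  # (name, percent_type) -> summed percent

    def add_nuclide(self, nuclide, percent, percent_type='ao'):
        key = (nuclide, percent_type)
        # O(1) amortized lookup/update instead of scanning the whole list
        self._nuclides[key] = self._nuclides.get(key, 0.0) + percent

mat = DictBackedMaterial()
mat.add_nuclide('Fe56', 0.5)
mat.add_nuclide('Fe56', 0.25)
print(mat._nuclides)  # {('Fe56', 'ao'): 0.75}
```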
I see that we currently have `Material` nuclides stored as a list of tuples `(string, float)`. I think there are a few cases where a dict can take up less memory than the list of tuples. For small materials:

```python
from pympler import asizeof

data_list = [('Li6', 1.6345345), ('Be9', 2.2323423), ('U235', 5.3234234)]
data_dict = dict(data_list)
print("Total list size:", asizeof.asizeof(data_list))
print("Total dict size:", asizeof.asizeof(data_dict))
```

```
Total list size: 472
Total dict size: 400
```

This appears to also be true for larger materials:

```python
from pympler import asizeof

data_list = [(f"Fe{50 + i}", (i + 1) * 0.1234) for i in range(1000000)]
data_dict = dict(data_list)
print("Total list size:", asizeof.asizeof(data_list))
print("Total dict size:", asizeof.asizeof(data_dict))
```

```
Total list size: 143649128
Total dict size: 109958720
```

If there are duplicate entries for nuclides in the list of tuples (like 10 entries for Fe56), these can be combined into a single entry in a dict.
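One caveat on that last point: a plain `dict(data_list)` keeps only the last value for a repeated key, so actually combining those 10 Fe56 entries needs an explicit sum, e.g.:

```python
from collections import defaultdict

data_list = [('Fe56', 1.0), ('Fe56', 2.0), ('Li6', 0.5)]
totals = defaultdict(float)
for name, percent in data_list:
    totals[name] += percent  # sum contributions instead of overwriting
print(dict(totals))  # {'Fe56': 3.0, 'Li6': 0.5}
```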
Description
This PR changes the behaviour of `Material.add_nuclide` so that if a nuclide with the same name and percent type already exists in the material, the new nuclide is combined with the existing one.
If the nuclide exists in the material but has a different percent type, a warning message is printed to the user.
I think this PR puts us in a better place than the current behaviour: it solves the issue of adding like nuclides in some cases and warns the user in the others. Previously the nuclides were silently duplicated.
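A minimal sketch of the behaviour described above (hypothetical code, not the actual diff; the real implementation in `openmc/material.py` may differ):

```python
import warnings

def add_nuclide(self, nuclide, percent, percent_type='ao'):
    """Add a nuclide, squashing it into an existing entry when the name
    and percent type both match (sketch of the PR's behaviour)."""
    for i, (name, existing, ptype) in enumerate(self._nuclides):
        if name == nuclide and ptype == percent_type:
            # Same name and percent type: combine into a single entry
            self._nuclides[i] = (name, existing + percent, percent_type)
            return
        if name == nuclide:
            # Same name, different percent type ('ao' vs 'wo'): warn the user
            warnings.warn(f"{nuclide} already present with percent_type="
                          f"'{ptype}'; adding it again with '{percent_type}'.")
    self._nuclides.append((nuclide, percent, percent_type))
```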
As discussed on the forum
https://openmc.discourse.group/t/question-about-percent-type-argument-of-the-new-method-add-elements-from-formula-introduced-in-version-0-12/702/5
Fixes # (issue)
Checklist