taking into account purgeable space on volume creation #1959
Conversation
Force-pushed from 95c48e2 to 2713e1a
Hey @giggsoff -- it would be really helpful if you can first describe one (or a few) scenarios that you're trying to address here. The reason I'm asking is b/c if you're looking for bullet-proof accounting -- we still can't do that reliably. However, it doesn't mean we shouldn't address various use cases that are easy to address. So... please describe your scenarios first.
eriknordmark left a comment
It makes sense to add some description, but I think I understand the interesting case.
However, in general a read-only volume can be shared by multiple app instances, an app instance can have multiple volumes, and finally, when purging, only some subset of the volumes will be purged. That is controlled by the controller.
Thus to do the accounting well I think the controller needs to indicate in the volume API some "intended to replace volume UUID X". That way you can tell how much more space will be needed during the transient (while downloading and creating the replacement) and once the replacement has taken place.
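For illustration, a minimal sketch of what such a hint could look like on the volume config (field and type names here are hypothetical, not the actual EVE API):

```go
// Hypothetical sketch only: the controller marks a new volume with the UUID
// of the volume it is intended to replace, so volumemgr can account for the
// transient double usage (old volume plus the replacement being created) and
// for the usage once the replacement has taken place.
type VolumeConfig struct {
	VolumeID           string
	MaxVolSize         uint64
	ReplacesVolumeUUID string // set by the controller; empty when nothing is replaced
}
```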
zedmanager could try to infer this replacement and add the UUID to the VolumeRefConfig instead of getting it from the controller. But that is only useful if volumemgr sees the VolumeRefConfig before it sees the VolumeConfig, and that typically doesn't happen.
Sorry for the delayed response.
Yes, but when you do that I think the controller sends a new volume UUID.
No, changing when zedagent publishes things doesn't guarantee anything about when the subscribers see it; the system is asynchronous. The order is undefined. For instance, today if volumemgr is busy it might see the VolumeRefConfig before the VolumeConfig. If interlock is needed it must be explicit.
Here the semantics of "replacement" is solely for the purpose of calculating disk usage for the purge operation. If an app with 2 volumes is replaced by an app with 4 volumes, then the controller can decide what it wants to label as replacing what, and the calculation will still be correct.
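To illustrate how that replacement label would feed into the disk-usage calculation (a sketch with hypothetical names, not the actual volumemgr code):

```go
// spaceNeeded returns the extra space needed during the transient (while the
// replacement volumes are downloaded/created and the replaced ones still
// exist) and once the replaced volumes have been deleted.
func spaceNeeded(newMaxSizes, replacedCurrentSizes []uint64) (transient, steadyState uint64) {
	for _, s := range newMaxSizes {
		transient += s
		steadyState += s
	}
	for _, s := range replacedCurrentSizes {
		if steadyState > s {
			steadyState -= s
		} else {
			steadyState = 0
		}
	}
	return transient, steadyState
}
```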
Force-pushed from 2713e1a to f93782c
In the last version I add
eriknordmark left a comment
Some nits about naming and comments.
But this new approach should work in terms of the overall flow and the risk of race conditions between zedagent, zedmanager, and volumemgr.
However, there is one detail in the processing of the updates in volumemgr which can make the approach fail:
When zedagent publishes a modification to VolumeConfig1 to set Stale and publishes a new VolumeConfig2 without setting Stale, there is no guarantee on the order in which those are processed in pubsub, hence in volumemgr. If volumemgr processes the new VolumeConfig2 before the change to VolumeConfig1, it will not know about the Stale setting. In that case, won't it fail to take the Stale flag into account?
AFAICT if volumemgr is busy this will fail 50% of the time -- pubsub calculates the set of keys which have been added/modified/deleted, and that is based on the order in a Go map.
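For illustration, the non-determinism comes straight from Go's map iteration order (names here are made up, not the actual pubsub code):

```go
package main

import "fmt"

func main() {
	// If pubsub gathers the changed keys in a map, the handler may see
	// the new VolumeConfig2 before the modified (Stale) VolumeConfig1,
	// because Go randomizes map iteration order between runs.
	changed := map[string]string{
		"VolumeConfig1": "modified (Stale=true)",
		"VolumeConfig2": "created",
	}
	for key, change := range changed {
		fmt.Println(key, "->", change)
	}
}
```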
Signed-off-by: Petr Fedchenkov <[email protected]>
Signed-off-by: Petr Fedchenkov <[email protected]>
Force-pushed from f93782c to 6ffa397
eriknordmark left a comment
LGTM - let it test.
We need to verify that the purging tests in ring1 are indeed run with this.
Also, we probably do not have a test case where 2x max size would exceed the disk size.
Signed-off-by: Petr Fedchenkov <[email protected]>
Force-pushed from c891b37 to 97e7b80
@petr-zededa how are we testing this to make sure purge doesn't assume the 2x max as it did prior to the fix? Do we need new tests?
I am working on basic support for modifying apps inside Eden (lf-edge/eden#545).
We should add the ability, inside volumemgr, to calculate the remaining space based on the CurrentSize of the volumes of the particular app we are trying to purge/update.
At the volume-creation step inside volumemgr we do not know which application the volume was created for (or whether it is linked to any application at all). So it seems reasonable to add an ApplicationID field to the Volume Status/Config so we can determine which app uses it.
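A rough sketch of the idea, with hypothetical field and helper names rather than the actual EVE types:

```go
// Hypothetical sketch: the volume carries the ID of the app it belongs to,
// so volumemgr can treat the CurrentSize of that app's existing volumes as
// reclaimable when checking whether the replacement volumes fit on disk.
type VolumeStatus struct {
	VolumeID      string
	ApplicationID string // empty if the volume is not linked to any application
	MaxVolSize    uint64
	CurrentSize   uint64
}

// remainingSpace returns the space usable for new volumes of appID, counting
// the current size of that app's existing volumes as space to be reclaimed.
func remainingSpace(diskFree uint64, appID string, volumes []VolumeStatus) uint64 {
	reclaimable := uint64(0)
	for _, v := range volumes {
		if appID != "" && v.ApplicationID == appID {
			reclaimable += v.CurrentSize
		}
	}
	return diskFree + reclaimable
}
```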
There are open questions for me:
cc @zed-rishabh
Signed-off-by: Petr Fedchenkov <[email protected]>