The subscriptions will come with limited cloud storage space. I’m currently running tests with some pretty large Rhino models and can see that quickly adding up to a point where we reach the storage limit long before we reach the CPU-hour limit.
I’m finding myself sending off many jobs for the same model as I debug/tweak parameters/clean up geometry. Once I have a working job that is giving me reliable results, I would want to then delete all the previous jobs, such that when I (or my teammates) look through the joblist they aren’t inundated with faulty jobs but only the “good” ones.
Thanks for explaining in detail the reasons you might want to delete a job. It’s super helpful to have on our end.
Unfortunately we don’t yet support deleting jobs. I think your use case makes a lot of sense though, so I’ll keep note of it and see if it’s something we can implement in the next month or so. We’re cooking up some new features at the moment, which means we’ll have to plan when to fit this feature request in.
Sorry for the radio silence here. I have no update for you guys at this stage. We haven’t prioritised this issue as the backend team (Tyler and I) are working hard on:
- Bugs with our event handling system that cause Runs to be left in a “running” state when they have actually completed
- Setting up our payment system so we can get out of early access
Hope that’s ok with you guys. If not having this feature is a complete deal breaker for you, though, feel free to push back and we can see whether we can re-prioritise the other work we have in the pipeline.
We have already exposed the endpoint for deleting jobs (pollination-server - Swagger UI); we only need to add it to the UI so users can use it from the interface.
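In the meantime, the endpoint can be called directly. Here is a minimal sketch of what that might look like; the base URL, path, and auth header name are all assumptions on my part, so please check the Swagger UI for the actual schema before relying on this:

```python
# Hypothetical sketch of calling the job-deletion endpoint directly.
# BASE_URL, the path layout, and the auth header are ASSUMPTIONS --
# verify them against the Swagger UI before use.
import urllib.request

BASE_URL = "https://api.pollination.cloud"  # assumed base URL


def job_delete_url(owner: str, project: str, job_id: str) -> str:
    """Build the (assumed) DELETE URL for a job in a project."""
    return f"{BASE_URL}/projects/{owner}/{project}/jobs/{job_id}"


def delete_job(owner: str, project: str, job_id: str, api_key: str) -> bool:
    """Send a DELETE request for the job; returns True on a 2xx response."""
    req = urllib.request.Request(
        job_delete_url(owner, project, job_id),
        method="DELETE",
        headers={"x-pollination-token": api_key},  # assumed header name
    )
    with urllib.request.urlopen(req) as resp:
        return 200 <= resp.status < 300
```

Once the UI exposes this, the same operation should be a single click from the job list.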
We have also implemented an automated cleanup that deletes all the intermediate files for every run after a week, to help free up space. If you check runs that are older than a week, you will see that only input and output files are preserved.