feature-request

Storage configuration

While the current default RAID1 configuration provides a reliable foundation for data integrity, it would be beneficial to have more flexibility in how Cycle handles storage.

In scenarios where we are utilizing distributed storage solutions like Garage, redundancy is already managed at the application level. In these cases, the ability to prioritize maximum storage capacity over local hardware redundancy would be highly advantageous. Furthermore, providing users with granular control over which specific storage devices are assigned to a container or VM would significantly improve resource optimization and environment customization.
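To make the capacity trade-off concrete: an n-way RAID1 mirror exposes only the capacity of its smallest member, while a striped (RAID0) or JBOD layout exposes roughly the full raw capacity. A quick back-of-the-envelope illustration (plain TypeScript, not Cycle code; usableTB is an invented helper):

```typescript
// Usable capacity in TB for a set of disks under two layouts.
// RAID1 mirrors data across all members; RAID0 stripes across all of them.
function usableTB(diskSizesTB: number[], mode: "raid0" | "raid1"): number {
  const raw = diskSizesTB.reduce((sum, size) => sum + size, 0);
  if (mode === "raid0") return raw;        // full raw capacity, no local redundancy
  return Math.min(...diskSizesTB);         // RAID1: capacity of the smallest member
}

console.log(usableTB([4, 4, 4, 4], "raid0")); // 16
console.log(usableTB([4, 4, 4, 4], "raid1")); // 4
```

With four 4 TB drives, that's 16 TB usable instead of 4 TB, which is exactly the kind of headroom a Garage node, already replicating across the cluster, could put to use.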

  • Hey Davor, thanks for the feature request.

    Believe it or not, this is in our next build and will be released later today :) You'll be able to pick between different RAID modes when configuring your ISO settings for virtual providers.

    Furthermore, providing users with granular control over which specific storage devices are assigned to a container or VM would significantly improve resource optimization and environment customization.

    Let me noodle on this for a bit -- it would be a challenge due to how storage is handled in CycleOS, and a fundamental shift from the current model. Today we build a unified storage pool over any devices >2TB, and you can opt containers into that pool during create (rough sketch of that rule below). For more granular control, we'd have to rethink that model.

    We'll reach out on slack when the new update is live with the RAID flexibility!

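For anyone skimming the thread: the current model described above boils down to a simple selection rule. A minimal sketch, assuming hypothetical Device and buildUnifiedPool names (this is not CycleOS source):

```typescript
// Sketch of the pooling rule described above: any device larger than 2 TB
// joins a single unified storage pool; everything else is left alone.
interface Device {
  path: string;    // e.g. "/dev/sdb"
  sizeTB: number;  // raw capacity in terabytes
}

function buildUnifiedPool(devices: Device[]): Device[] {
  return devices.filter((d) => d.sizeTB > 2);
}

const pool = buildUnifiedPool([
  { path: "/dev/sda", sizeTB: 0.5 }, // OS disk, below threshold
  { path: "/dev/sdb", sizeTB: 4 },   // pooled
  { path: "/dev/sdc", sizeTB: 8 },   // pooled
]);
// Containers then opt into this pool during create.
```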
  • Re: further customization, I think we'd probably need to give a way to tell Cycle to ignore drives on provision. Then, containers/VMs could simply mount 'em using the devices mount option. It would be easy to just mount /dev/sdc as a device into a container then -- something like the sketch below.

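If Cycle went that route, the two halves might look roughly like this. This is a sketch only: ignoreDevices and the devices mount shape are invented names, not Cycle's actual schema:

```typescript
// Hypothetical provision-time setting: tell Cycle not to claim /dev/sdc
// for its unified pool when the server is brought online.
const provisionConfig = {
  storage: {
    ignoreDevices: ["/dev/sdc"], // invented field name
  },
};

// Hypothetical container config: pass the untouched drive straight through
// so the workload (e.g. Garage) manages redundancy itself.
const containerConfig = {
  mounts: {
    devices: [{ host: "/dev/sdc", container: "/dev/sdc" }], // invented shape
  },
};
```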