We have a large SQL Server 2017 database, approximately 50 TB, that has been partitioned into nearly 300 filegroups so far. This was originally implemented so we could set old filegroups to read-only, take one final backup of each, and only have to back up the read/write filegroups nightly. This has worked very well over the last few years.
Now we'd like to offload some of our very old filegroups onto slower media, since it's becoming quite expensive to keep all of this on flash storage when it's rarely, if ever, accessed. We could do this one of two ways. The first is to remove the old filegroups altogether and just keep their backups on slower media, but from my understanding that would require implementing a sliding-window partition scheme, and if we ever needed that data again, constantly updating our partition function and partition scheme to accommodate it would be quite painful. If I'm wrong about that, feel free to mention an easy way to implement this.
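For context, here is my rough understanding of the sliding-window steps that removing a filegroup would involve. This is only a sketch; the table, filegroup, and boundary names (dbo.BigTable, FG_2015, PF_ByID, the boundary value 1000000, MyDb) are placeholders, and the staging table would have to match the partitioned table's structure and indexes exactly for the SWITCH to work:

```sql
-- 1. Switch the oldest partition out to a staging table on the same filegroup.
--    (Staging table schema/indexes must match the partitioned table.)
CREATE TABLE dbo.BigTable_Archive
(
    ID      bigint          NOT NULL PRIMARY KEY CLUSTERED,
    Payload varbinary(max)  NULL
) ON FG_2015;

ALTER TABLE dbo.BigTable SWITCH PARTITION 1 TO dbo.BigTable_Archive;

-- 2. Merge away the now-empty boundary so the partition function "slides".
ALTER PARTITION FUNCTION PF_ByID() MERGE RANGE (1000000);

-- 3. Take a final backup of the filegroup, then drop the staging table
--    and remove the emptied file and filegroup from the database.
-- BACKUP DATABASE MyDb FILEGROUP = 'FG_2015' TO DISK = '...';
DROP TABLE dbo.BigTable_Archive;
ALTER DATABASE MyDb REMOVE FILE FG_2015_Data;
ALTER DATABASE MyDb REMOVE FILEGROUP FG_2015;
```

Multiply that by every old filegroup we'd want to retire, and it's the kind of ongoing churn to the partition function and scheme I'd rather avoid if there's a simpler option.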
The other thought is to move the older .ndf files onto slower media and keep them online. This would require taking the database offline, moving the files, and bringing it back online, but that wouldn't be the end of the world, since we could do it annually or semi-annually, as needed, during a scheduled maintenance window. If we do this, however, does the entire database slow down because it has to search the read-only files on the slower media as well, or would it only slow down when it actually has to pull data from those slower filegroups/partitions? The search predicate is 100% of the time the clustered index, which is also the column the partition function is defined on, if that matters. The whole table consists of two columns: an incrementing ID (the clustered index) and a BLOB (which is never searched, only returned).
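For what it's worth, this is the procedure I had in mind for the relocation, sketched with placeholder names (MyDb, the logical file name FG_2015_Data, and the target path are all made up):

```sql
-- Take the database offline for the maintenance window.
ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- (Copy the physical .ndf file to the slower volume at the OS level here,
--  then point the catalog at the new location.)
ALTER DATABASE MyDb
    MODIFY FILE (NAME = FG_2015_Data,
                 FILENAME = 'S:\SlowStorage\FG_2015_Data.ndf');

-- Bring the database back online; it reads the file from the new path.
ALTER DATABASE MyDb SET ONLINE;
```

So the mechanics seem manageable; my real question is the performance impact on queries that never touch those relocated partitions.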