However, in the real world, things happen that prevent a TRIM from completing. Maybe you deleted a large amount of data and the system was shut down before all of the TRIM operations finished, or the shutdown was unexpected.
It could also be that the SSD's controller is simply too busy at that moment, i.e. trying to deliver the best read/write performance it can, so the TRIM commands get dropped from the queue and are never processed until the OS later issues them again. The controller then marks as invalid any blocks that weren't already marked invalid by previous TRIM commands:
The SSD then compares these hints to its own information. Anything that has not been TRIMed, but should have been TRIMed, will then be TRIMed.
As long as it doesn't get interrupted yet again, then yes. But even if it does get interrupted again (for whatever reason), it just has to wait for the next round, and eventually (hopefully) it should happen.
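To make that "next round" concrete: on Linux the periodic re-TRIM is typically handled by fstrim (often scheduled via fstrim.timer), which asks the filesystem to re-send TRIM for every range it currently considers free, using the FITRIM ioctl. Below is a minimal sketch of that call; /mnt/data is just a stand-in mount point, and it needs root to run:

```c
/* Minimal sketch of what fstrim does: ask the filesystem mounted at
 * /mnt/data (example path) to issue TRIM for all the ranges it considers free.
 * Build: cc -o trimsketch trimsketch.c ; run as root on a mounted filesystem. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>      /* FITRIM, struct fstrim_range */

int main(void)
{
    int fd = open("/mnt/data", O_RDONLY);   /* the mount point directory works */
    if (fd < 0) { perror("open"); return 1; }

    struct fstrim_range range = {
        .start  = 0,
        .len    = (unsigned long long)-1,   /* cover the whole filesystem */
        .minlen = 0,                        /* trim even the smallest extents */
    };

    if (ioctl(fd, FITRIM, &range) < 0) {    /* filesystem re-sends TRIM here */
        perror("FITRIM");
        close(fd);
        return 1;
    }
    printf("trimmed %llu bytes\n", (unsigned long long)range.len);
    close(fd);
    return 0;
}
```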
Remember, only the OS can say for sure which blocks are no longer in use. The SSD relies on the OS for that information because it doesn't understand or interpret the data stored on it; for all it knows, you might be running Bob's excellent super file system.
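As a small illustration of that asymmetry: the host can simply ask the filesystem how much space is free, because the free-space accounting lives in metadata the OS understands, while the drive has no equivalent query and only learns about freed ranges when TRIM tells it. The path below is again just an example:

```c
/* The host can ask the filesystem how much space is free; the SSD cannot.
 * /mnt/data is an example path. Build: cc -o freespace freespace.c */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs st;
    if (statvfs("/mnt/data", &st) != 0) { perror("statvfs"); return 1; }

    unsigned long long free_bytes  = (unsigned long long)st.f_bfree  * st.f_frsize;
    unsigned long long total_bytes = (unsigned long long)st.f_blocks * st.f_frsize;

    /* This "what is unused" knowledge exists only on the host side; the drive
     * only finds out when the OS sends TRIM for those ranges. */
    printf("free: %llu of %llu bytes\n", free_bytes, total_bytes);
    return 0;
}
```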
Looked at from a different perspective, it's actually quite the opposite: the OS has no detailed awareness of what the controller on the SSD might be doing behind the scenes. The firmware and the controller chip are responsible for things like garbage collection (GC), over-provisioning (OP), and the effect OP has on the drive's wear-leveling strategy. Granted, the SSD still relies on TRIM to know which blocks GC is allowed to erase. But the OS, in turn, relies on the SSD to do its part: keeping performance up and avoiding unnecessary wear and tear wherever that is feasible. As SSD technology keeps evolving, we now have SSDs capable of gathering additional information behind the scenes about data use and access patterns. By analyzing that data, clever firmware looks ahead of the OS and tries to predict its actions, including TRIM. Before it can erase anything it still has to wait for TRIM to actually arrive, but it never hurts to look ahead and optimize based on those likelihoods.
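Here is a deliberately oversimplified sketch of why GC cares so much about TRIM: before an erase block can be wiped, the controller has to copy out whatever pages are still valid, so every page that TRIM already marked invalid is a copy (and some extra wear) that GC gets to skip. This is a toy model, not any vendor's actual firmware logic:

```c
/* Toy model: one erase block of PAGES pages; GC must relocate valid pages
 * before erasing. TRIM turns valid pages into invalid ones ahead of time,
 * so GC has less to copy. Purely illustrative, not real firmware behaviour. */
#include <stdio.h>
#include <stdbool.h>

#define PAGES 64

/* Count the pages GC would have to copy out of this block before erasing it. */
static int gc_copy_cost(const bool valid[PAGES])
{
    int copies = 0;
    for (int i = 0; i < PAGES; i++)
        if (valid[i])
            copies++;
    return copies;
}

int main(void)
{
    bool valid[PAGES];

    /* Start with every page holding (apparently) live data. */
    for (int i = 0; i < PAGES; i++)
        valid[i] = true;

    printf("without TRIM: GC copies %d pages before erase\n",
           gc_copy_cost(valid));

    /* The OS deletes files covering half the block and sends TRIM for them,
     * so the controller can mark those pages invalid in its mapping table. */
    for (int i = 0; i < PAGES / 2; i++)
        valid[i] = false;

    printf("after TRIM:   GC copies %d pages before erase\n",
           gc_copy_cost(valid));
    return 0;
}
```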
Another example of how modern SSDs can look at the content of the data they store is the internal data compression that many of them perform. How these mechanisms are optimized in a given drive's firmware may partly depend on some level of awareness of how popular filesystems are organized, designed, and used by popular operating systems. There is no easy way to tell whether an SSD inspects the headers of individual files to predict entropy, but high-entropy data simply won't compress well, so by weighing the likelihood of low versus high entropy the controller could prioritize its compression workload accordingly, as part of a scheduling strategy aimed at managing its internal data processing more efficiently.
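As a rough picture of what such an entropy check could look like: sample a chunk of incoming data, estimate its byte-level Shannon entropy, and only spend effort compressing when the estimate suggests it will pay off. This is a generic host-side sketch of the heuristic, not any particular controller's firmware:

```c
/* Estimate byte-level Shannon entropy of a buffer (0..8 bits/byte) and use it
 * to decide whether compression is likely to be worth the effort.
 * Generic heuristic sketch. Build: cc -o entropy entropy.c -lm */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <stddef.h>

static double shannon_entropy(const unsigned char *buf, size_t len)
{
    size_t counts[256] = {0};
    for (size_t i = 0; i < len; i++)
        counts[buf[i]]++;

    double h = 0.0;
    for (int b = 0; b < 256; b++) {
        if (counts[b] == 0)
            continue;
        double p = (double)counts[b] / (double)len;
        h -= p * log2(p);
    }
    return h;   /* near 8.0 => looks random/compressed, near 0 => repetitive */
}

int main(void)
{
    unsigned char text[4096], noise[4096];
    memset(text, 'A', sizeof text);             /* very low entropy */
    srand(12345);
    for (size_t i = 0; i < sizeof noise; i++)   /* pseudo-random filler */
        noise[i] = (unsigned char)(rand() & 0xFF);

    double h1 = shannon_entropy(text,  sizeof text);
    double h2 = shannon_entropy(noise, sizeof noise);

    printf("text : %.2f bits/byte -> %s\n", h1, h1 < 7.0 ? "compress" : "store as-is");
    printf("noise: %.2f bits/byte -> %s\n", h2, h2 < 7.0 ? "compress" : "store as-is");
    return 0;
}
```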
Samsung has already introduced its 2nd-generation SmartSSD, with advanced internal processing capabilities that are no longer limited to these generic, heuristic kinds of approaches. It's the way of the future, but it will be up to the OS and specialized software/apps to make use of the new functions (such as erasure coding) made possible by the Xilinx Versal XCVC1902 that this new piece of SSD hardware uses.
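To make "erasure coding" slightly more concrete: the simplest possible instance is a single XOR parity block over a stripe of data blocks, which lets you rebuild any one lost block from the rest. The toy sketch below only shows the principle; whatever SmartSSD-class hardware actually implements would be far more sophisticated (think Reed-Solomon rather than plain parity):

```c
/* Toy erasure coding: one XOR parity block over a stripe of 3 data blocks.
 * Losing any single block (data or parity) is recoverable by XOR-ing the rest.
 * Only illustrates the principle. */
#include <stdio.h>
#include <string.h>

#define BLOCKS 3
#define BLKSZ  16

int main(void)
{
    unsigned char data[BLOCKS][BLKSZ] = {
        "first block....", "second block...", "third block...."
    };
    unsigned char parity[BLKSZ] = {0};

    /* Encode: parity = d0 ^ d1 ^ d2 */
    for (int b = 0; b < BLOCKS; b++)
        for (int i = 0; i < BLKSZ; i++)
            parity[i] ^= data[b][i];

    /* Simulate losing block 1, then rebuild it from the survivors + parity. */
    unsigned char rebuilt[BLKSZ];
    memcpy(rebuilt, parity, BLKSZ);
    for (int b = 0; b < BLOCKS; b++) {
        if (b == 1)                 /* block 1 is the "erased" one */
            continue;
        for (int i = 0; i < BLKSZ; i++)
            rebuilt[i] ^= data[b][i];
    }

    printf("recovered block 1: %.15s\n", rebuilt);
    return 0;
}
```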