I would use a cache/workfile to store the "key" to each row you have in the grid. By "key", I mean the field(s) that make up what you consider to be unique or don't want duplicated.
You would need to write to the cache/workfile whenever you write a record to the grid. If you are loading the grid from a business view (BSVW) find, you can write to the cache/workfile in the Write Grid Line Before event. I would flag these cache/workfile records as "read from file".
If the user manually adds a record to the grid, you will also need to write to the cache/workfile. First, check whether the record being added already exists in the cache/workfile; the Row Exited and Changed Inline (RECI) grid event is a good place to do this.
As Peter pointed out, much of this depends on your definition of a duplicate. Can the user update any of the grid fields that make up your duplicate criteria? If so, you will also have to update the corresponding cache/workfile record when those fields change. Again, I would do this in the RECI event.
I am not a fan of re-reading every grid row on each grid-row change, but if that is easier for you, do what works for you. I would rather have an indexed cache/workfile that can be checked with a single Get/Fetch.
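To make the pattern concrete, here is a minimal sketch in Python (not JDE Event Rules) of the keyed-cache idea above: load-time rows are flagged "read from file", a manual add is rejected with one keyed lookup instead of a full grid scan, and a key change re-keys the cache entry. The class and method names are my own invention for illustration; in EnterpriseOne you would do the equivalent with a keyed cache or workfile in the events named above.

```python
class GridDuplicateCache:
    """Illustrative stand-in for an indexed JDE cache/workfile.

    The key tuple holds the field(s) that define a duplicate; one
    dict lookup plays the role of a single keyed Get/Fetch.
    """

    def __init__(self):
        # key tuple -> {row_id, read_from_file flag}
        self._rows = {}

    def add_from_find(self, key, row_id):
        """Write Grid Line Before: record a row loaded by the BSVW find."""
        self._rows[key] = {"row_id": row_id, "read_from_file": True}

    def try_add_manual(self, key, row_id):
        """RECI on a new row: reject the add if the key already exists."""
        if key in self._rows:          # single keyed lookup, no grid scan
            return False               # duplicate -> set a grid error
        self._rows[key] = {"row_id": row_id, "read_from_file": False}
        return True

    def update_key(self, old_key, new_key):
        """RECI on a changed row: re-key the entry; fail on a collision."""
        if new_key == old_key:
            return True
        if new_key in self._rows:
            return False               # change would create a duplicate
        self._rows[new_key] = self._rows.pop(old_key)
        return True
```

For example, after loading key ("4310", "1000") from the find, `try_add_manual(("4310", "1000"), 1)` returns False (duplicate), while a new key is accepted and can later be re-keyed, subject to the same collision check.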
Good luck.