That depends very much on the complexity of the definition of a duplicate.
There are system functions to retrieve grid row data. You will have to process the grid in row order to compare the rows.
It isn't just the code; you first need to work out the process/methodology. So it is best if you decide how the comparison should work before writing anything. If you need help doing this, post what you have done so far.
If you need help coding, you can ask about that too, provided you show what you have already tried to resolve the issue yourself.
I would use a cache/workfile to store the "key" to each row you have in the grid. By "key", I mean the field(s) that make up what you consider to be unique or don't want duplicated.
You would need to write to the cache/workfile whenever you write a record to the grid. If loading the grid from a BSVW find, you can write to the cache/workfile in the Write Grid Line Before event. I would flag these cache/workfile records as "read from file".
If manually adding a record to the grid, you will also need to write to the cache/workfile. You first need to check whether the record you're adding is already in the cache/workfile, so I would use the Row Exit & Changed Inline (RECI) grid event.
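To illustrate the idea (not the actual grid/event API, which will depend on your environment), here is a minimal Python sketch of a keyed cache: the key fields are stored when rows are loaded from the file, and any manual add is checked against the cache first. All names here are hypothetical.

```python
# Hypothetical sketch of the keyed duplicate cache. The method names
# mirror where the calls would go (load event vs. RECI event); they are
# illustrative, not real grid API calls.

class DuplicateCache:
    """Tracks the 'key' fields of every row currently in the grid."""

    def __init__(self):
        self._keys = {}  # key tuple -> source flag

    def add_from_file(self, *key_fields):
        # Called from the Write Grid Line Before event while loading.
        self._keys[key_fields] = "read from file"

    def try_add_manual(self, *key_fields):
        # Called from the RECI event when the user adds a row.
        # Returns False if the key is already present (a duplicate).
        if key_fields in self._keys:
            return False
        self._keys[key_fields] = "manual"
        return True

# Loading two rows from the business view:
cache = DuplicateCache()
cache.add_from_file("ACME", "INV-001")
cache.add_from_file("ACME", "INV-002")

# User tries to add rows:
print(cache.try_add_manual("ACME", "INV-001"))  # duplicate -> False
print(cache.try_add_manual("ACME", "INV-003"))  # new key   -> True
```

The "read from file" / "manual" flags correspond to the flagging suggested above, so you can tell later which cache entries came from the load and which from user input.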
As Peter pointed out, much of this depends on your definition of a duplicate. Can the user update any of the grid fields that make up your duplicate criteria? If so, you will have to implement update functionality to the cache/workfile. Again, I would do this in the RECI event.
I am not a fan of re-reading all grid rows on every grid row change, but if that is easier for you, use what works for you. I would rather have an indexed cache/workfile that can be checked with a single Get/Fetch.
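Continuing the hypothetical sketch above, handling a user edit to one of the key fields (the update case) follows the same pattern: check the proposed new key with a single keyed lookup, and only then re-key the cached entry. A dict lookup here stands in for a single Get/Fetch on an indexed workfile; no scan of all grid rows is needed. Again, the function and data are illustrative only.

```python
# Handling an edit to a key field in the RECI event. The dict lookup is
# the in-memory stand-in for one keyed Get/Fetch on an indexed workfile.

def update_key(keys: dict, old_key: tuple, new_key: tuple) -> bool:
    """Return True if the row's key may change from old_key to new_key."""
    if new_key == old_key:
        return True                    # key fields unchanged
    if new_key in keys:                # single keyed lookup, not a grid scan
        return False                   # change would create a duplicate
    keys[new_key] = keys.pop(old_key)  # re-key the cached entry
    return True

keys = {("ACME", "INV-001"): "read from file",
        ("ACME", "INV-002"): "read from file"}

print(update_key(keys, ("ACME", "INV-002"), ("ACME", "INV-001")))  # False
print(update_key(keys, ("ACME", "INV-002"), ("ACME", "INV-009")))  # True
```

Rejecting the change (returning False) is where you would warn the user and restore the old field value in the grid.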