First of all, when there is a lot of associated content, we moved to a manual delete process using SQL DELETE instead of Rails destroy. (Though this may not work for you if you have carelessly introduced a lot of callback dependencies that do something after a record is destroyed.)
def custom_delete
  self.class.transaction do
    related_objects.delete_all
    related_objects_2.delete_all
    delete
  end
end
If you find yourself writing this all the time, you can simply wrap it in a class method that accepts a list of related-object association names to delete.
class ActiveRecord::Base
  class << self
    def bulk_delete_related(*args)
      define_method(:custom_delete) do
        ActiveRecord::Base.transaction do
          args.each do |field|
            send(field).delete_all
          end
          delete
        end
      end
    end
  end
end
class SomeModel < ActiveRecord::Base
  bulk_delete_related :related_objects, :related_objects2, :related_object
end
I inserted the class method directly into ActiveRecord::Base, but you should probably extract it into a module. Also, this only speeds things up; it does not resolve the original problem.
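A module extraction might look like the sketch below. The plain Ruby class stands in for an ActiveRecord model so the metaprogramming pattern can be exercised without Rails; all names are illustrative:

```ruby
# Module version of bulk_delete_related (illustrative sketch).
module BulkDeleteRelated
  def bulk_delete_related(*assocs)
    define_method(:custom_delete) do
      # On a real model this would run inside self.class.transaction
      # and call delete_all / delete on ActiveRecord relations.
      assocs.each { |assoc| send(assoc).clear } # stand-in for delete_all
      :deleted                                  # stand-in for delete
    end
  end
end

# Plain-Ruby stand-in for an ActiveRecord model:
class FakeModel
  extend BulkDeleteRelated

  attr_reader :related_objects, :related_objects_2

  def initialize
    @related_objects   = [1, 2]
    @related_objects_2 = [3]
  end

  bulk_delete_related :related_objects, :related_objects_2
end
```

On a real model you would `extend BulkDeleteRelated` (or wrap it in an `ActiveSupport::Concern`) instead of reopening ActiveRecord::Base for every application class.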
Secondly, you can introduce FK constraints (we did that to ensure integrity, since we run a lot of custom SQL). They work such that a user cannot be deleted as long as linked objects exist, though that might not be what you want. To make this solution more effective, you can delegate user deletion to a background job that retries until the user actually can be deleted (i.e., no new linked objects have appeared).
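As a sketch (table, model, and job names are assumptions, not from the original): a migration adding the constraint, plus an ActiveJob that re-enqueues itself while the constraint still blocks deletion:

```ruby
# Migration: related_objects rows now block deletion of their user.
class AddUserForeignKeyToRelatedObjects < ActiveRecord::Migration[7.0]
  def change
    add_foreign_key :related_objects, :users
  end
end

# Background job that keeps retrying until the user is deletable.
class DeleteUserJob < ActiveJob::Base
  # Retry when the FK constraint rejects the delete.
  retry_on ActiveRecord::InvalidForeignKey, wait: 5.minutes, attempts: 10

  def perform(user_id)
    User.find(user_id).delete
  end
end
```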
Alternatively, you can go the other way around, as we did in some cases at my previous job. If deleting the user is more important than guaranteeing there are no zombie records, use a sweeper process to clean up orphans from time to time.
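With ActiveRecord the sweep could be a single query, e.g. `RelatedObject.where.not(user_id: User.select(:id)).delete_all` (model names assumed). The pure-Ruby sketch below shows the same idea on in-memory rows, so it runs without a database:

```ruby
# Remove rows whose user_id no longer matches a live user.
# Returns the orphans that were swept out.
def sweep_orphans(users, related_objects)
  live_ids = users.map { |u| u[:id] }
  orphans, kept = related_objects.partition { |row| !live_ids.include?(row[:user_id]) }
  related_objects.replace(kept)
  orphans
end

users   = [{ id: 1 }]
related = [{ id: 10, user_id: 1 }, { id: 11, user_id: 3 }] # user 3 is gone
sweep_orphans(users, related)
# related now only holds rows pointing at live users
```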
Finally, the truth is somewhere in the middle: apply constraints to relations that definitely must be cleaned up before removing a user, and rely on the sweeper for the less important ones that shouldn't block user deletion.
The problem is not trivial, but it should be solvable to some extent.