Nfs - q&a
Posted: Thu Sep 10, 2009 7:44 am
Interesting statements were made in the Quality and Truth thread, yet we have a product in NFS with absolutely no QA built in. The main premise of NFS was to stop duplication, yet duplication was never clearly defined. Which types of duplication, for example? One support missionary said that a duplication was "the same but different," yet the Oxford English Dictionary defines a duplicate as exactly the same. If the product was built with no clear definition in place, then what exactly is it stopping?

The second QA question is why NFS has no quality control built in. It is easy to clear names, partial or otherwise, with no information attached. The less information you have, the more likely you are to duplicate work that has already been done, because the program cannot know whether *Ann* with no information is the same person as an Ann born about 1830, 1840, or 1850 in the US, Canada, England, France, or somewhere else. The less information required, the higher the probability of duplication, regardless of the definition used.
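To illustrate the point, here is a minimal sketch (purely hypothetical, not NFS's actual matching logic) of why sparse records inflate false matches: any field left blank effectively acts as a wildcard, so a bare *Ann* is compatible with every Ann already in the system.

```python
def could_be_same(a, b):
    """Return True if no populated field in either record conflicts.

    Missing fields (None / absent) never conflict, so the emptier a
    record is, the more existing records it is "compatible" with.
    """
    for field in ("name", "birth_year", "birth_place"):
        va, vb = a.get(field), b.get(field)
        if va is not None and vb is not None and va != vb:
            return False
    return True

# Three distinct Anns already cleared in the system (invented data).
existing = [
    {"name": "Ann", "birth_year": 1830, "birth_place": "US"},
    {"name": "Ann", "birth_year": 1840, "birth_place": "Canada"},
    {"name": "Ann", "birth_year": 1850, "birth_place": "England"},
]

sparse = {"name": "Ann"}  # submitted with no date or place
matches = [r for r in existing if could_be_same(sparse, r)]
print(len(matches))  # prints 3 -- the bare "Ann" matches all of them
```

With no minimum-information requirement, the program has no basis for telling the submitter which (if any) of those three records her *Ann* duplicates.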
There need to be checks and balances built into the program, along with a fair amount of quality assurance. Without them, stopping the duplications that were the premise behind this program in the first place will be meaningless, particularly when the program is adding to the very problem it was built to stop because of its wishy-washy parameters.