Ark-steadying, and fools rushing in.
I will try to describe some key points of my opinions and suggestions, although it won’t be easy for either writer or reader. I am working on a longer version that I could email sometime if anyone wants a little more depth. I assume the people working on the system will not be happy to hear more “constructive criticism”; they probably get plenty. But millions of people will be affected by their work product, and I hope they will be able to consider outside opinions.
My opinion is simply that we need to start over in our approach to handling centrally held genealogical data. Our past thinking and practices and data stores are constraining us too much.
I believe the hopes we place in the nFS system are more than it can deliver, and a separate, similar system will be needed to fulfill the rest of those hopes. It is useful to tidy up the past temple work in the nFS, but that might best remain a low-priority “background” operation, while also scheduling new temple work. Trying to make that system the single, all-purpose answer to all future genealogy data activity, for members and non-members, is asking too much. There are too many complicating factors and too many data problems.
The best way to avoid duplicates being sent to the temple is not to try to head them off at the pass just as they figuratively reach the temple door, as in past times, when only primitive means of cooperation among researchers were possible. With the Internet available, a far superior way to stop duplicates headed for the temple, before they even start, is to help members (and non-members) avoid wasted or redundant research (all the work that happens long before any names are sent to a temple). Stopping that redundant research can happen in only one of two ways: 1) through a fully finished and trusted nFS (a very distant, perhaps nearly impossible goal), or 2) through a separate database that is not encumbered by 1.5 billion duplicates and is designed from the beginning to contain (or show) only the best data, even if that is only a few percent of the total submitted.
I believe the data in nFS is not the best available data to act as a base for all future research activity. Data submitted for temple work was often originally in full family group sheet form, but was then broken up so the ordinances could be done separately. Much genealogical data was lost in the process. Rather than try to put all those fragments back together and cull or merge the duplicates, an extremely difficult task to do correctly, it would give a superior-quality and far faster result to start again from the original full family group sheet versions, which may have been enhanced since they were first submitted. Resubmitting that data and enhancing it further would be a far better use of member time than trying to correct everyone else’s past errors.
As it is, the only good way I can think of to get a clean nFS database is to compare every item in it to those prior, external, complete family group sheets. Guessing and merging names on the fly without that reference will almost certainly introduce and perpetuate more errors. And if we have that superior reference in hand, it then seems pointless to painstakingly go through and correct and compress the nFS. It would be far easier and more accurate to simply resubmit the whole thing. Someone will say that we have then just reintroduced another whole pile of duplicates. But that is not true if the database has a “magical” part that shows only what is likely to be the best data, determined largely, Google-like, by which data has the most relationship links among relatives, usually found in a descendant structure. The database system changes no data and merges no data. It merely highlights the data that is most likely to be well-researched, well-documented, and complete, and therefore most likely to be accurate.
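As a rough illustration of what I mean, here is a minimal sketch of such a highlighting mechanism. It is entirely hypothetical (the record fields, weights, and function names are my own assumptions, not anything in nFS): competing submissions for the same person are ranked by how many relationship links and cited sources they carry, and the best-connected version is merely highlighted, never merged or altered.

```python
# Hypothetical sketch: rank competing genealogical submissions by how
# richly linked they are, without merging or altering any data.
# All field names and weights here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Record:
    person_id: str
    submitter: str
    links: set = field(default_factory=set)  # parents, spouses, children
    sources_cited: int = 0

def score(record: Record) -> float:
    """More relationship links and more cited sources => higher rank.
    The 0.5 weight is an arbitrary placeholder."""
    return len(record.links) + 0.5 * record.sources_cited

def best_version(versions: list) -> Record:
    """Highlight (not merge) the most-connected version of a person."""
    return max(versions, key=score)

# Two competing submissions for the same ancestor:
a = Record("anna_1820", "smith_family", {"p1", "p2", "c1", "c2", "c3"}, 4)
b = Record("anna_1820", "jones_family", {"p1"}, 0)
print(best_version([a, b]).submitter)  # -> smith_family
```

The point of the sketch is the design choice, not the scoring formula: a well-documented submission embedded in a large descendant structure naturally outranks an isolated fragment, so the best data rises to the top while every submission remains intact.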
As it is, even after the estimated 125 million hours of work on the nFS duplicate-record-removal project are done, the result may still be the second-best version of the data. Granted, if there is some way to get a complete, consolidated, and trusted name structure, from then on it could be used to check whether research and temple work have been done for specific people. But even then it will likely have the limitation that it favors current church members and their ancestors. Others will have a more difficult time using it because they will typically have no common place to plug their family data into it.
The database system I suggest would also clarify personal or family responsibility for the accuracy of certain sets of names in a way that does not appear to be addressed in the nFS. The old Ancestral File problem, in which multiple people could modify the same data, often flip-flopping between versions on update cycles, without the various groups knowing about it and with no one coordinating it, could apparently recur in the nFS system. This direct communal sharing of update access to common ancestors is likely to lead to a number of problems that an indirect method would avoid. Everyone who wished to have their say could do so, but no one could modify anyone else’s data.
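To make the indirect model concrete, here is a minimal sketch of how it could work. This is purely my own assumption about one possible design, not a description of nFS or Ancestral File: each contributor owns an immutable-to-others version of a person's data, and alternative versions sit side by side rather than overwriting one another.

```python
# Hypothetical sketch of indirect, non-destructive sharing:
# contributors may add or revise their OWN version of a person's data,
# but can never modify or delete anyone else's version.
# Submitter identity is assumed to be authenticated elsewhere.

class PersonEntry:
    def __init__(self, person_key: str):
        self.person_key = person_key
        self.versions = {}  # submitter -> that submitter's data only

    def submit(self, submitter: str, data: dict) -> None:
        """Create or replace only the caller's own version."""
        self.versions[submitter] = dict(data)  # copy; no shared mutation

    def update(self, submitter: str, data: dict) -> None:
        """Revise the caller's own version; others' versions are off-limits."""
        if submitter not in self.versions:
            raise PermissionError("can only update your own submission")
        self.versions[submitter].update(data)

entry = PersonEntry("john_doe_1790")
entry.submit("alice", {"birth": "1790", "place": "Vermont"})
entry.submit("bob", {"birth": "1791"})
entry.update("alice", {"death": "1862"})
try:
    entry.update("carol", {"birth": "1700"})  # carol has no version here
except PermissionError:
    print("blocked")  # -> blocked
```

Under this scheme the flip-flopping update problem cannot occur: disagreements show up as coexisting versions to compare, not as silent overwrites of someone else's work.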
As some final, broader thoughts: I see the estimated 125+ million hours of member labor, if used correctly in other forms of genealogy work, as the equivalent of four years of work by all our full-time missionaries, meaning that 1 million new members are in the balance. That labor is by no means free, nor is it available for only one kind of church work. In the same vein, bringing up to 4 million new non-member genealogists into an exciting new Church system ought to be one of the top-priority benefits of all this new software development work.