Why so many duplicate ordinances performed?

Discussions around Genealogy technology.
JamesAnderson
Senior Member
Posts: 773
Joined: Tue Jan 23, 2007 2:03 pm

#31

Post by JamesAnderson »

For the one with the misspelled name, where they found the ship register that had the correct name:

That will also be the case: in the new system you will be able to add the correct info, combine the old record into the folder, and then put the correct information 'on top' in the summary page. The combining will also consolidate the ordinance data, and in the case of duplicates it will show the EARLIEST date the work was done.
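The combining rule described above could be sketched roughly like this. This is purely a hypothetical illustration, not the actual nFS code; the record shape, field names, and the `combine_records` helper are all invented for the example:

```python
from datetime import date

def combine_records(records):
    """Merge duplicate person records. Each record maps an
    ordinance name to the date it was performed (None if not done).
    Duplicates consolidate to the EARLIEST date the work was done."""
    combined = {}
    for record in records:
        for ordinance, performed in record.items():
            if performed is None:
                combined.setdefault(ordinance, None)
            elif combined.get(ordinance) is None:
                combined[ordinance] = performed
            else:
                combined[ordinance] = min(combined[ordinance], performed)
    return combined

# Two duplicate entries for the same person:
dup_a = {"baptism": date(1930, 5, 1), "endowment": None}
dup_b = {"baptism": date(1921, 3, 9), "endowment": date(1944, 7, 2)}
merged = combine_records([dup_a, dup_b])
# merged keeps the earlier 1921 baptism and the one known endowment date
```

The point of keeping the earliest date is that it reflects when the work was first done; later duplicates add no new information.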

Did this last week. We had two of a kind for every child of an ancestor, and just by looking at dates we were able to locate the best info, combine the records with that info on top, and once that was done we had narrowed the needed work down to one endowment, one sealing to spouse, and nine or ten sealings to parents. If the combining had not been possible, we would likely have done everything over without really knowing the difference.

There still may be some duplication due to a very small number of missing records; in a few cases very early on, some temple records were lost. They know there is a gap between about 1915 and 1930 or so in the Salt Lake Temple records for at least one ordinance, and I think that was sealings.
huffkw
Member
Posts: 54
Joined: Sun Jan 21, 2007 6:34 pm
Location: Spanish Fork, Utah
Contact:

The two-database scenario

#32

Post by huffkw »

JamesAnderson:

It looks like we are not seeing all of your post. It seems like it starts in the middle of a story.

But I want to respond anyway.

Here is the scenario I want to see:
We have one database for research and one database for temple work. The two are very different. They are optimized for very different processes.

We use the temple work database (NFS) to clean up the data problems from our first 150 years of effort (concerning about 100 million unique names). We put in only names that have had temple work done, or which have temple work scheduled.

In the research database, we work to quickly put in and lineage-link 300 million US deceased and perhaps 500 million European deceased, making 800 million in all. This process has no immediate connection to temple work at all, but only seeks to get the best data and records together.

Those 800 million names, eight times the 100 million we have completed so far, can then be drawn upon for temple work. At about 3 million names a year, that should last us about 270 years.
-------------------------------------------------
JamesAnderson wrote: There still may be some duplication due to a very small number of missing records, and in a few cases very early on a few temple records were lost.
Your assumption seems to be that source images will be part of the nFS system at some point. I can see that could be a good idea for clearing up specific temple work issues, but I do not see it as the complete solution. It can be part of the complete solution, but only for temple work. I am trying very hard here to get people to think outside the “temple work” box and consider the separate “research” box, something that rarely happens, although LDStech user Marian Johnson and a few others seem to agree on the need for a separate research system to avoid the unnecessary entanglement of wide-ranging new research with completed temple work.
JamesAnderson
Senior Member
Posts: 773
Joined: Tue Jan 23, 2007 2:03 pm

#33

Post by JamesAnderson »

I was speaking to a very narrow issue aside from the main topic of this thread, but by all means, thank you for your comments.

It was Don Anderson, manager of Worldwide Support, who mentioned that 99.9 percent of all ordinances performed so far are in new FamilySearch. He said the data not in there yet comes from some records that got misplaced somehow and now have to be put back in. From other sources I've learned that this includes one recently found file with a million or more extracted names that did not make it even into the old Internet IGI, or maybe even onto the CDs. It also includes the Salt Lake Temple records mentioned above, and a few other things.

Earlier mistakes in taking and recording ordinances account for a very small number. Someone in my ward mentioned that all the sealings were found to be missing for some pioneer-era ancestors; nothing was ever found to indicate they had been done, so they had to be done again to be sure.

The last group here involves anything that was submitted in non-Roman characters, such as Japanese, Chinese, and some other languages; that is, submissions not entered using characters like those seen in English or predominantly used in Western European languages.

Now back to your post. I see great value in creating a large lineage-linked database of North America (we need to think of Canada too, because of immigration, and because the British held Canada early on), the British Isles, and Europe.

We then need to identify where the largest pools of data are that will help create this tree, and extract them into family groups. At some point we will then be able to match the parents of one generation as the children of another, and that is how I think it could go once the databases are built initially. The search function in NFS is a good example of how to find a person, and NFS itself shows how easily that person could be put into the right place.
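The generation-matching idea above could be sketched as follows. This is an invented illustration (the family-group shape, the exact-match rule on name plus birth year, and the `link_generations` helper are all assumptions, not any real FamilySearch logic; real matching would need fuzzier comparison):

```python
def link_generations(groups):
    """groups: list of dicts with 'parents' and 'children' lists,
    where each person is a (name, birth_year) tuple.
    Returns (younger_group, older_group) index pairs wherever a
    parent in one group appears as a child in another group."""
    links = []
    for i, younger in enumerate(groups):
        for parent in younger["parents"]:
            for j, older in enumerate(groups):
                if i != j and parent in older["children"]:
                    links.append((i, j))
    return links

# Two extracted family groups; the parent of the first is the
# child of the second, so the two generations link together.
groups = [
    {"parents": [("Ola Hansen", 1850)], "children": [("Marit Olsdatter", 1878)]},
    {"parents": [("Hans Olsen", 1820)], "children": [("Ola Hansen", 1850)]},
]
# link_generations(groups) pairs group 0 under group 1
```

Repeated over a large pool of extracted family groups, this kind of pass is what would stitch separate groups into a lineage-linked tree.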

Early efforts have already been noted elsewhere, such as a project to create lineages from the Norwegian 'Farm Books' or 'Bygdeboks'. That will go some distance toward helping people find Norwegian ancestors without having to go through book after book; the books practically require you to know the smallest possible locality first.

Am I closer to being on the right track on your thinking here?
huffkw
Member
Posts: 54
Joined: Sun Jan 21, 2007 6:34 pm
Location: Spanish Fork, Utah
Contact:

"One time through it all for everybody”

#34

Post by huffkw »

JamesAnderson wrote: Early efforts have already been noted elsewhere, such as a project to create lineages from the Norwegian 'Farm Books' or 'Bygdeboks'. That will go some distance toward helping people find Norwegian ancestors without having to go through book after book; the books practically require you to know the smallest possible locality first.

Am I closer to being on the right track on your thinking here?
You are right on target, and I appreciate your examples.
It is that sort of “one-time-through-it-all-for-everybody” work by specialists or semi-specialists in all the areas of the world, and all record types, that will make for a huge improvement in overall efficiency and will generate a lot of excitement. If every person doing research has to learn every little detail about every new record set they use, just to locate a name or two, it is horribly inefficient, slow, discouraging, and probably less accurate. Using the spirit and power of cooperation, with people specializing in various record sets and then sharing the results with others doing the same in other areas, could speed up the whole process by up to 20 times.

When all or part of that work has been done, pedigrees can be listed for all participants, and the depth and quality of the data will be quite amazing, I am sure. Further verification can be done at that point, but most of the research trails will already have been laid out.

The final goal is still that individual pedigree, but every trick in the book will have been used to speed up the process and improve the quality. Getting the most and best data the quickest will probably mean finding many different ways to do it. I believe one person working alone on a step-by-step pedigree will usually prove to be the least efficient method, so the creativity of all participants should uncover many ways to improve the output of the whole process.
BradJackman-p40
New Member
Posts: 30
Joined: Fri Jan 25, 2008 10:09 am
Location: Salt Lake City, UT

Two Databases - 1) Research, 2) Temple Work

#35

Post by BradJackman-p40 »

I think you guys are on the right track. As a professional genealogist, I get so frustrated that what currently exists in NFS is what I'm supposed to be cleaning. I didn't think it was best to dump all the databases into one; I spoke my mind and made my comments in Beta 1 and Beta 2. I don't think what's in NFS really CAN or SHOULD be used for research. I keep my PAF file clean, sourced, noted, and updated; if there were a place to put real research, I'd use it. As for NFS, the 1,500 hours I would need to spend entering all my sources, notes, and clarifications by hand wouldn't even be appreciated in such a database. However, if there were a lineage-linked database connecting extracted sources, serious family research, and personal documents, I would sign up and be fully on board.

In fact, if a volunteer program existed as was mentioned before, I think that I would be seriously motivated to volunteer to connect families and extracted records, just so the database got better.

GIGO - Garbage in, garbage out. Seems like all we did with NFS was put all the garbage in. I think I know what's going to come out.
garysturn
Senior Member
Posts: 606
Joined: Thu Feb 15, 2007 11:10 am
Location: Draper, Utah, USA
Contact:

Labs

#36

Post by garysturn »

BradJackman wrote:
In fact, if a volunteer program existed as was mentioned before, I think that I would be seriously motivated to volunteer to connect families and extracted records, just so the database got better.

The plan for newFamilySearch is to add original documents to clean up the files. If the database does not have all the bad stuff in it for us to dispute, it will just get resubmitted that way into any new database that gets created. There are prototypes of how newFamilySearch might look in some future version, with all the source images included, at FamilySearch Labs; see the Life Browser.
Gary Turner
If you haven't already, please take a moment to review our new
Code of Conduct
BradJackman-p40
New Member
Posts: 30
Joined: Fri Jan 25, 2008 10:09 am
Location: Salt Lake City, UT

Two Databases Needed: One for Temple, One for Research

#37

Post by BradJackman-p40 »

I am fully aware of the Labs programs, the Life Browser, and several other prototypes being developed to assist in sourcing the data. However, there are still many significant problems with the way nFS is handling the duplicate information. Many ambiguous records exist that unwittingly tie two or more unique individuals together. When erroneous information is disputed, but is the only information for a particular event, it does not go away. The IGI sourcing buries any legitimate sourcing in pages and pages of duplicate IGI sources. nFS does not allow batch number searches for extracted records, nor does it differentiate in any meaningful way an extracted record from a user-submitted record. There is no incentive for sources to be added in the current incarnation, because they are not promoted or presented in a meaningful way. There is no way to designate not-a-match, so after one user spends hours un-combining, another user can come along and combine two similar but unique individuals. No reasonable opportunity for discussion or debate is given. Reserving temple ordinances is too easy, and promotes duplication. Pioneer and royal ancestry have been ruined by automated combination and novice users. I could go on and on...

Your premise that you need incorrect data in order to get correct data is erroneous. The IGI, PRF, and AF, for that matter, include many more errors from typos, guesses, and unscrupulous users than from false research. More often than not, the duplication and errors come from ambiguity rather than from a bad date on a birth certificate. If there are legitimate but conflicting sources out there for one individual (and there are), they need to be uploaded, discussed, and reviewed, something the current system does not provide for.

The right way to do temple ordinances has always been available, and people ignored it and submitted duplicates. If people want to continue in ignorance on the nFS, they can. Instead of searching for an individual to add to their tree, they are free to make a new person, and ignore any duplicates. That's what people have been doing with the IGI and TempleReady for years.

The requirements for a good TempleReady system are not the same requirements for a good genealogy research database. TempleReady will need to preserve all temple ordinances. Genealogists need to DELETE data that leads people down the wrong path, and prove the right path, not spend all their time explaining and fixing the errors of 45 other people who didn't know what they were doing.

It does me and my ancestors no good to fix all the problems others (and nFS) have created with duplicate submissions and erroneous information when their duplicate ordinances are already done. Instead, I could be doing original research and finding new family members who need temple ordinances, working from a clean file that is proved, sourced, and distributed for peer review. I've already cleaned my file, I've sourced my data, I've found all the temple dates. There's no incentive for me to use nFS at all, except to add the basic data at the end of a line so that I can submit new ancestors to the temple.

HOWEVER, if there were a clean research database (something like Life Browser) that could be used to discuss sources, plan for group research methods, and share information, I would be encouraged to use it because it would help me, my relatives, and my ancestors. Putting all of the nFS data into a Life Browser system will negate the benefits by introducing all of the errors into people's research. Millions, possibly hundreds of millions of hours will be required to disprove the mistakes of others, when the duplication has already been done. A clean research database would attract users who had correct data and wanted to continue original research. A genealogist is not going to want to go backwards and repair everyone else's errors.

So, I return to my original submission - If two databases were provided, one for temple work and one for research (allow cross linking, but not information sharing, between the databases), I would be encouraged to use it. I'm not in this to make genealogy easy and painless for my 2nd cousins. I'm in it to find my ancestors who need temple ordinances. It would be a great disservice to them to spend the next few years focusing on fixing everyone else's ignorant, uninformed, un-researched, un-sourced, un-checked duplicate submissions and ignore those who have not had their work done at all.

nFS needs to look at the genealogical community's desires when developing products, and not just insist that its way is the only way. I don't know a single professional genealogist who is happy, pleased, or excited about nFS. That's the very crowd you should be courting! They'll be the saving grace of nFS, if you get them on board.

I'm happy about the prospects of Life Browser, but if it utilizes the same data set from nFS, it'll be useless.
russellhltn
Community Administrator
Posts: 34487
Joined: Sat Jan 20, 2007 2:53 pm
Location: U.S.

#38

Post by russellhltn »

BradJackman wrote:Genealogists need to DELETE data that leads people down the wrong path
Don't delete bad data - explain why it's faulty. That way when it surfaces again later, it won't be re-researched.
BradJackman-p40
New Member
Posts: 30
Joined: Fri Jan 25, 2008 10:09 am
Location: Salt Lake City, UT

Deleting bad data - Why we need two databases

#39

Post by BradJackman-p40 »

You're right about not deleting information; it's better to show why it's wrong. That's exactly the approach to take when there are conflicting sources, such as a conflicting birth and death certificate. But that's not the case with nFS 99% of the time. Most of the time it's incomplete data, data with typos, or guesses.

I just spent 8 hours, literally, fixing a PAF file from a client who went through and aimlessly merged every woman who didn't have a surname with a random woman who did. Trying to explain why each person made a mistake is impossible. Most of the time THEY don't even know why they're doing what they're doing.

If you want to prove something is RIGHT, you don't have to address all the other errors, just provide sources. If you can say "John Stewart was born on 9 Sep 1902 and I know this because I have a birth certificate, two censuses, a marriage record, the SSDI, a death certificate, naturalization records, a ship manifest, and personal knowledge," then you shouldn't need to keep in your records the fact that someone guessed his birthdate was "about 1900" 25 times. If people are guessing, they're not caring about records. And if they're not caring about records, no amount of explaining why they're wrong is going to help.

I guess I see the whole database a little differently. If you just want a place to dump data, then you're fine with nFS the way it is. But if you want a quality research tool that will encourage good practices, provide accurate information, and encourage others to go beyond what has already been done, then you can't keep all the bad data in. This is why I have concluded that there needs to be two databases.

In the old/current way of doing things, you had your PAF file, and you had the IGI as a data dump. You kept the PAF file clean and added only the GOOD data to it. You didn't download all of the alternate parents, you didn't enter each of 30 ordinance dates, you certainly didn't insert 20 minor variations of the same name and birthdate, and you absolutely did not keep a record of all the problems in every online database. I don't know a single person who spends their time gathering every FALSE, typo'd, and incomplete record out of the IGI, AF, and PRF.

If we don't provide a clean, usable database to work from, it becomes another data dump: people will maintain their own PAF files and not worry about cleaning up NFS. NFS could become the mother of all database disasters very quickly if unscrupulous persons continue to run amok, merging every Joe, Sally, and Jim.

Please, people: is anyone even listening to the people who are using the product? Reading the blogs? Subscribing to the FHC groups? Horror stories abound about Family History Center workers who spend days un-combining records to make sure the files are clean, only to have someone come through a few days later and make the same mistakes, merging incorrect people and negating all the work. Now the FHC workers and real genealogists are telling people to avoid NFS.

There are two major problems here: too much bad data in NFS, and too many people who don't care enough to think about what they're doing. We can't fix the second one, but we can lessen their ability to mess things up by fixing the first one.
The_Earl
Member
Posts: 278
Joined: Wed Mar 21, 2007 9:12 am

#40

Post by The_Earl »

BradJackman wrote:...If you want to prove something is RIGHT, you don't have to address al the other errors, just provide sources. If you can say "John Stewart was born on 9 Sep 1902 and I know this because I have a birth certificate, two censuses, a marriage record, the SSDI, a death certificate, naturalization records, ship manifest, and personal knowledge" then you shouldn't need to keep in your records the fact that someone guessed that his birthdate was "about 1900" 25 times. If people are guessing, they're not caring about records. And if they're not caring about records, no amount of explaining why they're wrong is going to help. ...
Unless you are talking about two different people :). In my family, there is a disagreement about one of my ancestors. Current thinking is that there are two possibilities, both very similar and somewhat intertwined.

We really need a way to concurrently build different versions of a family history until everyone can agree on a particular one. A perfect example is when you have a theory or guess about a lineage: you can explore BOTH histories, and when you find documentation to prove or disprove your theory, you resolve the differences and unify the tree again.

Think of it as a smart way to keep lots of separate PAF files, all synchronized, with an automatic merge system.
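The branch-and-merge idea could be sketched as a toy data model. Everything here is invented for illustration (the `TreeVersions` class, its methods, and the person/fact representation are assumptions, not any existing system); it only shows the shape of "explore competing versions, then unify":

```python
class TreeVersions:
    """Toy model: competing versions of a disputed lineage are
    kept as named branches until documentation settles the question."""

    def __init__(self, base):
        self.branches = {"main": dict(base)}

    def branch(self, name, source="main"):
        # Explore an alternate theory without disturbing the main tree.
        self.branches[name] = dict(self.branches[source])

    def set_fact(self, branch, person, fact):
        self.branches[branch][person] = fact

    def merge(self, winner):
        # Once a theory is proven, it becomes the unified tree.
        self.branches["main"] = dict(self.branches[winner])

tree = TreeVersions({"John Stewart": "born abt 1900"})
tree.branch("theory-a")
tree.set_fact("theory-a", "John Stewart", "born 9 Sep 1902 (birth cert.)")
tree.merge("theory-a")
# "main" now carries the documented 1902 birth
```

A real system would of course need conflict detection and per-fact provenance, but the branch/merge workflow is the core of keeping "lots of separate PAF files, all synchronized."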

More discussion here:
http://tech.lds.org/forum/showthread.php?t=353&page=3