Vita Brevis

Limitations of Artificial Intelligence (AI) in Genealogical Research

Written by Anjelica Oswald | May 11, 2026 12:00:03 PM

It seems that every headline these days touts Artificial Intelligence (AI) as the future, and even genealogy isn’t immune to the call of AI. While AI can make some roles easier and provide some benefits, it is a tool that should be used with scrutiny.

Lessons in AI

In the summer of 2025, Research Services received a chat from a patron who had asked ChatGPT for a marriage record of two people in Boston in the late 1700s. ChatGPT responded with a date for the marriage and even described three very specific sources where the patron could find the record: FamilySearch, AmericanAncestors.org, and the Massachusetts State Archives. AI was correct that all three of those sources should be able to provide such a record, but when the patron asked us to locate it for them, we couldn’t.

Searching for the names in our databases and on FamilySearch did not turn up a marriage record. We even checked the exact pages of the city commissioner record books where ChatGPT said the record would be, but found nothing.

AI knew where the record should be located, but it made up a record that didn’t exist. This is known as an AI “hallucination,” a phenomenon in which AI confidently presents inaccurate or fabricated information. According to LiveScience, research by OpenAI found that the more advanced an AI program, the more it was found to “hallucinate.” Two of OpenAI’s most advanced models “hallucinated” 33% and 48% of the time, meaning users were receiving wrong information anywhere from a third to nearly half of the time they were using them.1

If you believe AI without confirming its claims through your own research, you could end up with incorrect or made-up information in your tree. Always verify anything ChatGPT tells you. While it may know where a record should be, it is not a search engine or a researcher.

When to use AI

One of the most helpful tools when doing digital genealogy is Optical Character Recognition, or OCR. Though technically not considered AI, it can be a component of AI programs. OCR reads the text on images and recognizes words, enabling users to search for specific names or keywords.

FamilySearch recently launched its full-text search feature, which uses OCR to comb through its various databases. This tool has been incredibly useful in locating records we may never have thought to check, and it helps sift through unindexed and often difficult-to-read records.

But even OCR isn’t without its faults. It struggles to recognize old cursive handwriting and will often miss names. It also has trouble reading words that are upside down or crammed together, and can transcribe records inaccurately. We recently encountered a record for Harvard College that was transcribed as “Harman Collens,” so it is not a perfect program.
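Errors like this suggest why exact-match searching can fail on faulty OCR output. As a hypothetical illustration (the function, names, and threshold here are my own, not part of any FamilySearch or OCR tooling), fuzzy string matching can still flag a garbled transcription like “Harman Collens” as a likely hit for “Harvard College”:

```python
from difflib import SequenceMatcher


def ocr_similarity(query: str, transcription: str) -> float:
    """Return a 0-to-1 similarity score between a search term and an
    OCR transcription, ignoring case."""
    return SequenceMatcher(None, query.lower(), transcription.lower()).ratio()


# The faulty transcription from the record above still scores well
# against the text actually on the page.
score = ocr_similarity("Harvard College", "Harman Collens")
print(f"{score:.2f}")  # roughly 0.69 -- close enough to surface as a candidate
```

A search tool that accepts matches above a tuned threshold (say, 0.6) instead of demanding exact spelling would still surface this record, which is one reason fuzzy matching is a common complement to OCR-indexed collections.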

The original Harvard College record (upper image) and the faulty OCR transcription (lower image).

Nothing Beats a Human Researcher

Despite the usefulness of OCR, current AI models are not accurate enough to replace human researchers. As AI advances, its transcription abilities should improve and surface more records. It may also point researchers to new record sets or websites that aid their work.

Ultimately, these tools cannot scour records like a human can. They can’t understand the nuances of changing names, human relationships, or movement. Take the time to research yourself or hire professionals to do it for you. Technology isn’t going to solve these mysteries like a real person will. 

Sources

1. Roland Moore-Colyer, “AI hallucinates more frequently as it gets more advanced — is there any way to stop it from happening, and should we even try?” LiveScience.com, 21 June 2025, accessed 1 August 2025.