AI in Construction: Recognizing Hallucination
Large Language Models can hallucinate. Sometimes, they refuse to admit it.
Collin Tsui
10/28/2025 · 3 min read


AI dominates today’s business headlines, with most articles focused on Tech, Finance, Software… computer-centric industries. But what about Construction? Over the next few blog posts, we’ll explore these topics from a Construction perspective.
Hordes of vendors line up to sell you AI-powered solutions. The first question I ask is which type of AI, because AI comes in many flavors:
Machine Learning (ML)
Large Language Model (LLM)
Agentic AI
While creating the headline image for an upcoming post, I was starkly reminded that AI sometimes hallucinates (a.k.a. makes a mistake), then absolutely refuses to admit it. At best, that’s inconvenient. At worst, it’s dangerous.
Here’s the full chat (commentary below):
[Chat screenshots]
At this point, I began to question myself a little bit. Maybe I was being shown a square thumbnail of a bigger picture? Nope. Now I was amused, and curious how far the LLM would go.
Hallucinations Explained
It’s not the mistake that bothers me; it’s how the LLM (Google’s Gemini 2.5 Flash) refused to acknowledge it could be wrong. It dismissed my objection multiple times, and went so far as to fabricate facts to justify why it was right.
In this case, I could obviously tell the difference between a square and a rectangle. I was simply curious to see how adamant the LLM would be, and it turns out it can go pretty far.
The reason for the hallucination is slightly technical. LLMs are trained on large sets of data. During training, the model is given a prompt, rewarded for producing a desired response, and then it tweaks its weights (roughly analogous to its internal logic) accordingly. This process is repeated millions (billions?) of times to train a model.
Note that there is no penalty for an undesired response, nor a reward for admitting it doesn’t know. In the absence of that negative reinforcement, the LLM learns there is no consequence to guessing wrong, and so it guesses.
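To make that concrete, here is a minimal toy sketch of that reward loop in Python. It only illustrates the incentive structure, assuming a pretend “model” with just two weighted answers; it is not how Gemini or any production LLM is actually trained.

```python
import random

# Toy illustration of reward-only training. This is NOT how any production
# LLM is trained; it just shows the incentive structure described above.
weights = {"yes": 1.0, "no": 1.0}   # a pretend "model": two weighted answers

def pick_answer():
    """Choose an answer with probability proportional to its weight."""
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

DESIRED = "yes"       # the response the trainer rewards
LEARNING_RATE = 0.1

for _ in range(1_000):
    guess = pick_answer()
    reward = 1.0 if guess == DESIRED else 0.0   # reward only for the desired response
    weights[guess] += LEARNING_RATE * reward    # no penalty for a wrong guess,
                                                # and no reward for "I don't know"

print(weights)   # the rewarded answer's weight grows; wrong answers are never pushed down
```

Notice that a wrong guess costs nothing and staying silent earns nothing, so producing some answer is always the best move this loop can learn.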
Remember all those multiple-choice tests you took in school? If you didn’t know an answer, you just guessed, because a blank was guaranteed zero while a guess at least had a chance at the point. Imagine if you lost a point for every wrong answer instead; that would certainly change your behavior. The LLM doesn’t have that penalty, hence the guessing.
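Here is that same incentive as quick back-of-the-envelope arithmetic. The four-option question and the 1/3-point penalty are just illustrative assumptions, not numbers from any particular exam or model.

```python
OPTIONS = 4                  # assume a four-option multiple-choice question
P_CORRECT = 1 / OPTIONS      # chance a blind guess is right

# Classic scoring: +1 for right, 0 for wrong, 0 for leaving it blank.
ev_guess_no_penalty = P_CORRECT * 1 + (1 - P_CORRECT) * 0             # 0.25
ev_blank = 0.0

# Penalty scoring: +1 for right, -1/3 for wrong, 0 for leaving it blank.
ev_guess_with_penalty = P_CORRECT * 1 + (1 - P_CORRECT) * (-1 / 3)    # 0.00

print(f"No penalty:   guess = {ev_guess_no_penalty:.2f}, blank = {ev_blank:.2f}")
print(f"With penalty: guess = {ev_guess_with_penalty:.2f}, blank = {ev_blank:.2f}")
# Without a penalty, guessing strictly beats leaving it blank, so the model guesses.
```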
Who’s In Charge?
Here’s the takeaway: You, as the human in the loop, have to be knowledgeable enough to recognize a mistake, and confident enough not to go along with it.
Knowledgeable – LLM use is best limited to topics you are already familiar with, where you can spot factual inaccuracies, procedural errors, and logical fallacies. Your job is to verify the LLM is right. If you have to tread into unfamiliar territory, use multiple LLMs to fact-check each other (still not great, but better than nothing; see the sketch after this list), or maybe even get out the ole’ Google (but ignore the AI bit).
Confident – If you think the LLM is wrong, trust your gut, and look into it more (this means do more research, unless you have subject matter expertise). LLMs do not typically convey uncertainty, and they can be very convincing (as you can see in the chat above). Be inquisitive and don’t let it steamroll you.
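If you do end up leaning on the fact-check-each-other approach mentioned above, here is a rough sketch of what that could look like. The query() function and its canned answers are hypothetical stand-ins; swap in whichever model APIs you actually use.

```python
# Rough sketch of cross-checking two LLMs. query() is a placeholder stub --
# replace it with real API calls to whichever models you actually use.
def query(model: str, prompt: str) -> str:
    """Placeholder: send the prompt to the named model and return its answer."""
    canned = {
        "model_a": "It is a square.",
        "model_b": "It is a rectangle.",
    }
    return canned[model]

prompt = "Is the attached image a square or a rectangle?"
answers = {m: query(m, prompt) for m in ("model_a", "model_b")}

for model, answer in answers.items():
    print(f"{model}: {answer}")

# Disagreement is your cue to dig in yourself rather than
# accepting either model's confident-sounding reply.
if len(set(answers.values())) > 1:
    print("Models disagree -- verify before trusting either answer.")
```

Agreement between models doesn’t guarantee correctness, but disagreement is a cheap red flag that tells you where to dig deeper yourself.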
Remember, you’re the one in charge. That is the value you bring as a human.
Ready for analytics you can trust?
Are you having success with AI on words, but struggling to do the same with project data? Need to get reliable metrics to your team? Let's talk!
I build custom Power BI solutions that transform raw data into validated metrics to keep projects on time and on budget. Contact me today to make your data clear.
