Unlocking your retail insights with LLMs

I’ve spent the last five years working in Boston’s tech scene, but my journey into AI and machine learning has taken me through Glasgow, Toronto, and roles at companies like Amazon and Best Buy.

Along the way, I’ve learned something important: the most powerful AI applications usually come from solving unglamorous problems. Things like cleaning up messy customer data or figuring out why someone bought that laptop instead of the other four they looked at.

Today, I want to share how we’re using large language models (LLMs) at Best Buy to tackle exactly those challenges. But before we dive into the technical details, let me say this clearly: you should not use LLMs just because they’re trendy. The business use case has to come first. Always.


When should you actually use LLMs for data enrichment?

One question I hear constantly is: should we be using LLMs for our data problems?

The honest answer is: it depends. Many teams skip that question entirely because generative AI is exciting.

You need the right business use case. If the only tool you have is a hammer, everything starts looking like a nail. That mindset gets expensive very quickly with LLMs. These models are excellent at certain tasks, especially when dealing with unstructured data that traditional ML struggles with. 

They’re great at summarizing text, applying common sense reasoning, and connecting dots across messy datasets.
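As a concrete illustration of the enrichment pattern, here is a minimal sketch of prompting a model to pull a structured signal out of free-text customer feedback. `call_llm` is a hypothetical placeholder for whatever provider client your team actually uses, not a real API:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM client call (assumption)."""
    raise NotImplementedError("Wire this up to your provider of choice.")


def build_enrichment_prompt(review: str) -> str:
    """Ask the model to distill the main purchase driver from a messy review."""
    return (
        "Summarize the main purchase driver in this customer review "
        "in one short phrase:\n\n"
        f"Review: {review}"
    )


def enrich_review(review: str) -> str:
    """Run one review through the (placeholder) model and tidy the result."""
    return call_llm(build_enrichment_prompt(review)).strip()
```

The useful part is the shape, not the wording: a narrow, single-purpose prompt per record beats one giant prompt trying to do everything at once.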


But they also come with challenges

LLMs can get overwhelmed if you dump too much context into a single prompt. They sometimes ignore instructions that are buried in long prompt templates. And yes, they hallucinate. I actually see hallucination less as a bug and more as a side effect of their strength. 

Their ability to extrapolate is what makes them powerful. It just needs guardrails.
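One simple guardrail along these lines is to constrain free-form model output to a closed vocabulary, so an extrapolated label can never leak downstream. A minimal sketch, with illustrative category names that are assumptions, not Best Buy's real taxonomy:

```python
# Closed set of labels the rest of the pipeline is allowed to see.
ALLOWED_CATEGORIES = {"laptops", "televisions", "appliances", "audio"}


def constrain_category(raw_llm_output: str, fallback: str = "unknown") -> str:
    """Map raw model text onto the allowed set, or fall back safely.

    Anything the model invents outside the vocabulary is treated as
    'unknown' rather than trusted, which keeps hallucinated labels
    out of downstream reports.
    """
    label = raw_llm_output.strip().lower()
    return label if label in ALLOWED_CATEGORIES else fallback
```

Paired with logging of how often the fallback fires, this also gives you an early warning signal when a prompt or model change starts drifting.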

The good news is that costs are falling quickly. I’ve watched token costs drop dramatically over the past few years while model capabilities have improved just as fast. That combination opens doors for use cases that simply were not economically realistic before.
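The economics are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses made-up placeholder prices; plug in your provider's current per-token rates:

```python
def estimate_cost(num_items: int, tokens_per_item: int,
                  usd_per_million_tokens: float) -> float:
    """Total USD to process `num_items` records at a flat per-token rate."""
    total_tokens = num_items * tokens_per_item
    return total_tokens / 1_000_000 * usd_per_million_tokens


# Example: one million records, ~500 tokens each, at a hypothetical
# $0.50 per million tokens.
batch_cost = estimate_cost(1_000_000, 500, 0.50)
```

Run the same arithmetic at last year's rates and today's, and you can see exactly which use cases have crossed the line into "economically realistic".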

You also need strong quality assurance processes, clear privacy compliance, and a technical team that is ready for long-term maintenance. Too many teams focus on the initial launch and forget that these systems need ongoing care. 

LLMs are not “set it and forget it” tools. They are more like high-maintenance pets. Impressive, useful, but definitely not self-sufficient.
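In practice, "ongoing care" can be as lightweight as replaying a small labeled golden set through the pipeline after every model or prompt change and tracking accuracy over time. A minimal sketch, where `classify` stands in for your real LLM-backed step:

```python
def evaluate(classify, golden_set):
    """Return accuracy of `classify` over (input, expected_label) pairs.

    `golden_set` is a small, hand-labeled sample that never changes;
    a drop in this number after a deploy is your maintenance alarm.
    """
    correct = sum(1 for text, expected in golden_set
                  if classify(text) == expected)
    return correct / len(golden_set)
```

The point is not the three lines of code; it is that the number exists, is tracked, and someone is on the hook when it moves.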
