AI Lady

The AI Glossary Every Leader Should Know

Guide to AI Terms (Explained Simply, Finally!)

Priya Tahiliani
Nov 24, 2025

An HR leader recently asked me:

“Do I really need to understand all these AI terms, such as tokens, GPTs, and so on?”

Here is what I believe:

AI terminology isn’t about turning you into a technologist.
It’s about helping you feel in control.

When you understand the language, you stop feeling like a spectator in AI conversations and step back into the driver’s seat of your HR strategy.

💛 Why this glossary matters for every leader:

  • It lets you follow and lead AI conversations instead of silently nodding.

  • It helps you challenge vendors and look beyond buzzwords.

  • It supports HR’s shift from tech consumer to tech creator.

  • It helps you rewrite the narrative that HR “doesn’t get” technology.

I’ve explained these terms in the exact order your brain naturally learns them.
We begin with the simple concepts - your words, your instructions, your prompts.

Then we move into how AI models work, what they’re capable of, and how they use your data.

Once that foundation is set, we explore accuracy, safety, integration, and finally the agentic tools that let HR automate and create.

Each bucket builds on the previous one, helping you understand AI without ever feeling lost or overwhelmed.

1. Foundation:

Before we talk about AI systems, we begin with the basics - your words, your instructions, and how AI responds.

Prompt

Your instructions or questions to the AI.

A prompt is what you tell the AI to do - the input that guides its output.
It can include:

  • the task (summarize, rewrite, draft, analyze)

  • the audience (employees, managers, leaders)

  • the tone (formal, friendly, neutral)

  • the format (bullets, email, table, policy summary)

  • the constraints (keep it short, use plain language, use our policy only)

Better prompts → better outputs. Prompting is a skill HR leaders can and should build.

Example prompt framework:

  • Role – Tell the AI who it should act as.

“You are a senior HR Policy & Program advisor.”

  • Context – Provide background or sources.

“Use the attached HR policy document and our employee handbook.”

  • Action – Say exactly what you want and in what format.

“Summarize the key changes in 3 bullet points for managers.”
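The Role–Context–Action framework above can be sketched as a simple helper that assembles the three parts into one prompt. This is an illustrative sketch only; the function name and structure are my own, not from any specific tool.

```python
# A minimal sketch of the Role-Context-Action prompt framework.
# It simply joins the three parts into a single prompt string.

def build_prompt(role: str, context: str, action: str) -> str:
    """Combine Role, Context, and Action into one prompt."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {action}"
    )

prompt = build_prompt(
    role="a senior HR Policy & Program advisor",
    context="Use the attached HR policy document and our employee handbook.",
    action="Summarize the key changes in 3 bullet points for managers.",
)
print(prompt)
```

Keeping the three parts separate like this makes prompts easy to review, reuse, and hand off across the HR team.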

HR Leader Tip:
Ask vendors:

  • “How much prompting skill is required to use your tool effectively?”

  • “Does your system support structured prompt frameworks?”

AI Model

A trained system that recognizes patterns and produces outputs.

An AI model is a mathematical system trained on large amounts of data.
It doesn’t memorize; it learns statistical patterns such as how language flows, how concepts relate, how questions map to answers.
When you use an AI tool such as ChatGPT, you’re interacting with an underlying model that has learned these patterns and uses them to generate responses.

HR Leader Tip:
Ask vendors:

  • “What model powers your system, and why that one?”

  • “Is it optimized for HR tasks or a general model?”

  • “Where does the model process data (cloud vs on-prem)?”

  • “Do you plan to add or support additional models in the future?”

Training an AI Model

How an AI model learns from example data.

Training is the process where a model is exposed to huge datasets (like text, code, or other content) and learns relationships within that data.
In HR language: training is like “onboarding + years of experience” for the model - it’s how it becomes capable.

HR Leader Tip:
Ask vendors:

  • “Is your model trained on general data, industry-specific data, or HR datasets?”

  • “How often is the model retrained or updated?”

  • “Can we add our own organization’s data safely?”

Inference

The process of an AI model producing an output from your input.

Inference is what happens after you send a prompt to an AI system.
It’s the real-time process where the model:

  1. reads your input

  2. breaks it into tokens

  3. applies its learned patterns

  4. generates a response

No new learning happens during inference! Learning happens during training.
Inference is simply the execution phase where the AI uses what it already learned to respond, summarize, classify, or take an action.

Every time you chat with an AI, ask a copilot a question, or use a voice agent, you are triggering inference.

Systems with faster inference feel more responsive and are more suitable for:

  • employee-facing tools

  • high-volume HR queries

  • real-time support

Example:
An employee asks:
“What’s the process for updating my bank details?”

The AI model instantly interprets the question, retrieves the relevant policy, and generates a clear answer.
That entire behind-the-scenes process, from reading → reasoning → replying, is inference.
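The four inference steps listed above can be illustrated with a toy sketch. Here the "model" is just a small lookup table of learned question patterns standing in for real learned weights, and the answer text is invented for illustration.

```python
# A toy sketch of inference: read input, tokenize, apply learned
# patterns, generate a response. No learning happens here - the
# "model" only applies what it already contains.

import re

# Stand-in for a trained model's learned patterns (illustrative only).
LEARNED_PATTERNS = {
    "bank details": "To update bank details, use Self-Service > Payroll > Bank Info.",
}

def tokenize(text: str) -> list[str]:
    # Step 2: break the input into small chunks (tokens).
    return re.sub(r"[^\w\s]", " ", text.lower()).split()

def infer(prompt: str) -> str:
    tokens = tokenize(prompt)                     # steps 1-2: read and tokenize
    for pattern, answer in LEARNED_PATTERNS.items():  # step 3: apply patterns
        if all(word in tokens for word in pattern.split()):
            return answer                         # step 4: generate a response
    return "Let me route this to an HR specialist."

print(infer("What's the process for updating my bank details?"))
```

A real LLM does this with billions of learned parameters rather than a lookup table, but the execution-only nature of inference is the same.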

HR Leader Tip:
Ask vendors:

  • “How fast is your model’s inference time during peak HR periods?”

  • “Does inference happen on-device, in the cloud, or in a private environment?”

  • “What affects response speed: model size, context window, or compute limits?”

  • “Can inference be optimized for employee self-service scenarios?”

Tokens

A small unit of text that the model reads.

AI models don’t process an entire paragraph at once.
They break it into tokens: small chunks of text (often pieces of words).
The number of tokens in your prompt and documents affects:

  • how much the model can “see” at once

  • how long it takes

  • how much it costs (many tools bill per token)

Every AI model has its own way of counting tokens; below is an example for OpenAI's models.

A helpful rule of thumb is that one token generally corresponds to ~4 characters of common English text. This translates to roughly ¾ of a word (so 100 tokens ≈ 75 words).
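That rule of thumb can be turned into a quick back-of-the-envelope estimator. This is only an approximation; exact counts come from the model's own tokenizer (for OpenAI models, the tiktoken library).

```python
# Rough token and word estimates using the ~4 characters-per-token
# rule of thumb for English text. Approximate only - real tokenizers
# give exact counts.

def estimate_tokens(text: str) -> int:
    """Estimate token count as roughly one token per 4 characters."""
    return max(1, round(len(text) / 4))

def estimate_words(tokens: int) -> float:
    """~3/4 of a word per token, so 100 tokens is about 75 words."""
    return tokens * 0.75

print(estimate_tokens("Hello world!"))  # 3
print(estimate_words(100))              # 75.0
```

An estimator like this is handy for sanity-checking vendor pricing: a 20-page policy document can run to thousands of tokens per query.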

HR Leader Tip:
Ask vendors:

  • “How are costs calculated? By tokens, messages, or usage tiers?”

  • “Will this tool become expensive with long documents?”

  • “Does your tool optimize token usage automatically?”

2. Model Types: Understanding the “Engines” Behind AI

Once you know how you interact with AI, the next step is understanding the models themselves - the engines that power every AI tool. These terms explain the different kinds of AI and what makes each one useful.

LLM (Large Language Model)

A model trained on very large amounts of text to understand and generate language.

LLMs, like GPT-4, are powerful AI models that can:

  • summarize documents

  • write emails or policies

  • answer questions

  • analyze text for patterns or sentiment

They learn from broad, large-scale datasets so they can handle many topics and tasks.

HR Leader Tip:
Ask vendors:

  • “Which LLMs does your product support or integrate with?”

  • “Do you use a single model or switch depending on the task?”

  • “How do you ensure accuracy for HR-specific content?”

Small Language Model (SLM)

A smaller, more efficient language model.

SLMs are compact models designed to be:

  • faster

  • cheaper to run

  • easier to deploy inside organizations

They’re ideal for focused internal tasks, like answering FAQs from your own documents.

HR Leader Tip:
Ask vendors:

  • “Can SLMs be deployed privately for sensitive HR content?”

  • “How do you balance speed and accuracy?”

Generative AI

AI that creates new content.

Generative AI uses models (like LLMs) to generate original text, images, or other content - not just retrieve information.
In HR, it can draft job descriptions, write communication drafts, summarize survey comments, or propose development plans.

HR Leader Tip:
Ask vendors:

  • “Where does your tool generate content versus retrieve information?”

  • “What safeguards exist to prevent the generation of incorrect content?”

  • “Can we restrict generative functions in sensitive areas?”

Classic AI

AI built for one specific task, following fixed rules or narrow logic.

Classic AI systems do one job at a time, like classifying data, detecting patterns, routing tickets, or playing chess.
Each use case traditionally needed its own separate model, designed and tuned for that task.

They are predictable and stable but not flexible or adaptable.

By contrast, Generative AI is general-purpose AI:
the same model can perform many different tasks simply by changing the prompt or instructions.

HR Leader Tip:
Ask vendors:

  • “Which parts of this workflow use classic rule-based automation?”

  • “Can we configure the logic ourselves?”

  • “How do you combine classic automation with AI-driven reasoning?”

Reasoning Model

An AI model optimized to think step-by-step and solve complex problems.

A reasoning model is designed not just to generate text but to analyze, plan, and make structured decisions. It breaks tasks into steps, evaluates possible actions, and follows a chain of logic.

These models improve accuracy for analytical HR tasks such as interpreting surveys, summarizing audits, or evaluating employee data patterns.

Example:
An AI agent that explains why it categorized certain employee comments as “workload concerns” by walking through its reasoning.

HR Leader Tip:
Ask vendors:

  • “Is your system built on a reasoning model?”

  • “Can we see the reasoning steps or chain-of-thought?”

  • “How does your model handle complex HR scenarios requiring judgment?”

Deterministic Output

The same input always produces the same output.

Deterministic AI behaves predictably. When given identical data or instructions, it generates identical responses every time, unlike generative AI, which may produce variations.

Deterministic outputs are useful for compliance-heavy HR workflows requiring consistency.

Example:
A deterministic classification model that always assigns a specific type of HR ticket to the same category.
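A deterministic classifier like the one in the example can be sketched with fixed keyword rules: the same ticket text always lands in the same category. The categories and keywords here are invented for illustration; real HR routing systems are more sophisticated.

```python
# A toy deterministic ticket classifier. Fixed rules mean identical
# input always produces identical output - there is no randomness.

RULES = {
    "payroll": ["salary", "bank details", "payslip"],
    "leave": ["vacation", "sick leave", "pto"],
    "benefits": ["insurance", "pension", "wellness"],
}

def classify_ticket(text: str) -> str:
    """Return the first category whose keywords appear in the text."""
    lowered = text.lower()
    for category, keywords in RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "general"

# Identical input, identical output - every time.
print(classify_ticket("How do I update my bank details?"))  # payroll
```

This predictability is exactly what makes rule-based classification attractive for compliance-heavy workflows, and exactly what makes it inflexible compared with generative models.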

HR Leader Tip:
Ask vendors:

  • “Where in your system do you guarantee deterministic behavior?”

  • “Can we choose between deterministic and generative modes?”

  • “How do you manage consistency in high-risk HR scenarios?”

3. Capabilities: What AI Can Actually Do

Now that we understand the types of AI models, we explore their practical capabilities - how fast they respond, how well they reason, and the different types of inputs they can handle. These terms help you understand what the real experience will feel like for employees and managers using AI tools every day.

Chain of Thought

The step-by-step reasoning an AI uses to arrive at an answer.

Chain of thought refers to the internal reasoning steps an AI may generate to solve a problem. It’s the “thinking trail” - the sequence of logic the model follows before producing a final answer.
