AI Context Is King

China’s dominance in electric vehicles has the world’s auto manufacturers sitting up. BYD, NIO, and others rewrote the rules while legacy players were still debating strategy.

Something similar is happening in AI—but most Western executives haven’t noticed yet.

The AI consumed in boardrooms across America and Europe comes overwhelmingly from domestic tech giants: Amazon, Google, Microsoft. But much of the real progress is happening elsewhere. DeepSeek, operating out of China, just published research that should make every bank executive rethink their AI strategy. Not because it’s about banking—it isn’t. But because it exposes a fundamental mistake the entire AI industry is making. The same mistake banks are about to make.

The paper is called Engram, and its thesis is simple: the AI industry has been scaling the wrong thing.

The Infrastructure Trap

Anyone paying attention to AI knows there’s a high chance we’re overspending on the infamous trillion-dollar AI buildout. Hyperscalers are racing to deploy more GPUs, more high-bandwidth memory, more compute. The assumption is that scale equals intelligence.

DeepSeek’s research says otherwise.

Their Engram architecture separates memory from reasoning. Instead of making AI models constantly reconstruct knowledge through expensive computation (essentially re-deriving facts from scratch every time), Engram gives models structured context they can retrieve rather than re-reason.

The results are striking. Long-context accuracy jumped from 84.2% to 97%. Not by adding more GPUs. By adding smarter context architecture.

The paper puts it bluntly: traditional approaches waste “valuable sequential depth on trivial operations that could otherwise be allocated to higher-level reasoning.”

In other words: raw compute without context architecture is expensive and underperforms.

The Banking Parallel

The AI industry has poured $1.4 trillion into infrastructure (training clusters, GPU farms, foundation models) against roughly $150 billion in actual revenue. That's roughly a 10:1 infrastructure-to-revenue ratio.

Banks are walking into exactly the same trap.

The AI budgets are real. The infrastructure investments are massive. But here’s what the industry keeps missing: the value of AI doesn’t live in the models themselves. It lives in how effectively those models are directed toward specific, revenue-generating outcomes.

Without a structured model of customer context (a map of where customers are, where they're heading, and what would help them progress), AI generates noise rather than signal.

The Missing Layer

Here’s the uncomfortable truth: banks have more behavioural data than TikTok, Netflix, and Amazon combined. Every transaction, every login, every payment pattern—it’s all there.

And this dataset is expanding rapidly. Open Banking means banks no longer see just their own transactions. They see the complete financial picture—core banking data plus every account, every payment, every financial relationship the customer has across the entire ecosystem. The behavioural intelligence available to banks is growing exponentially.

But without context, that data is useless to AI.

Most banking AI has zero knowledge of who the customer actually is, what motivates them, or where they are in their financial journey. It’s flying blind. And when AI flies blind, it delivers the same generic responses to everyone.

Microsoft’s Copilot achieved a 1.81% paid conversion rate. Not because the AI wasn’t capable. Because without structured context, even brilliant AI produces generic output that doesn’t connect to what customers actually need.

The stack looks like this:

  • AI/LLM Layer — Generic without context
  • Game Layer — Missing
  • Digital Experience Layer — Present
  • Core & Open Banking — Present

That missing game layer is where the value lives.

What DeepSeek Actually Discovered

Engram’s insight is that AI models don’t need to reason their way to every piece of knowledge. Some things can simply be stored and retrieved. This frees up computational resources for what actually requires reasoning.
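The store-and-retrieve idea has a familiar analogue in everyday programming: memoization. The sketch below is a loose analogy only, not DeepSeek's implementation — it simply shows how deriving a fact once and retrieving it thereafter frees compute for work that genuinely needs it.

```python
from functools import lru_cache

def expensive_derivation(n: int) -> int:
    """Stand-in for re-reasoning a fact from scratch on every request."""
    total = 0
    for i in range(1, n + 1):  # deliberately slow triangular-number sum
        total += i
    return total

@lru_cache(maxsize=None)
def retrieve(n: int) -> int:
    """'Memory layer': derive the answer once, then retrieve it.
    Subsequent calls are lookups, not recomputation."""
    return expensive_derivation(n)
```

After the first call, `retrieve(10)` is a cache hit: the knowledge is stored, not re-derived. Engram applies a far more sophisticated version of this trade inside the model itself.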

The parallel to banking is precise.

Your AI doesn’t need to reason its way through every customer interaction from scratch. What it needs is structured context about each customer—where they sit on their financial journey, what motivates them, what the next appropriate action should be.

DeepSeek found that allocating 20-25% of model resources to memory (rather than pure reasoning) yielded the best performance. The pure reasoning approach was actively suboptimal.

For banks, the question is similar: what percentage of your AI investment is going toward the context architecture that makes AI output actually relevant?

The Game Structure Insight

DeepMind’s breakthroughs in game-playing AI succeeded because they deeply understood the game’s structure. They didn’t just throw compute at the problem. They defined the state space, mapped the action space, and created clear reward functions.

This is exactly what’s missing from most banking AI implementations.

When an LLM receives a generic prompt about a customer, it has no structured understanding of:

  • Where that customer currently sits on their financial journey
  • What archetype drives their behaviour
  • What actions are appropriate given their current state
  • What “success” looks like for this specific individual

Without this context, AI generates generic responses. With it, AI generates relevant, actionable guidance.

The difference is stark:

AI without context: “Here are some savings tips…”
Generic advice that applies to everyone and resonates with no one.

AI with context: “Sarah, you’re 3 days from your streak record…”
A contextual nudge that knows her archetype, level, and momentum.

The Odyssey Architecture

This is why we built Odyssey's player map: 103 million unique coordinates that transform the ambiguous problem of "help this customer" into a computable state that AI can reason about.

Every customer exists at a unique intersection of who they are, where they are, what they’re doing, and where they’re heading:

Archetype — Who they are at their core. Seven distinct types: Achiever, Explorer, Socialiser, Competitor, Strategist, Protector, Dreamer. Not everyone is motivated the same way.

League — Their mastery domain. Seven leagues representing different areas of financial life.

Mission — Their current objective. Forty-nine missions across the journey.

Level — Progress within each mission. Seven levels of advancement.

That’s 103 million possible coordinates. More than enough for any bank to position every customer individually rather than in crude demographic segments.
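The four published dimensions can be sketched as a typed coordinate. This is illustrative only: the value ranges and any structure beyond the dimensions listed above are assumptions, not Odyssey's internal representation.

```python
from dataclasses import dataclass
from enum import Enum

class Archetype(Enum):
    """The seven archetypes named in the article."""
    ACHIEVER = "Achiever"
    EXPLORER = "Explorer"
    SOCIALISER = "Socialiser"
    COMPETITOR = "Competitor"
    STRATEGIST = "Strategist"
    PROTECTOR = "Protector"
    DREAMER = "Dreamer"

@dataclass(frozen=True)
class PlayerCoordinate:
    """One customer's position on the player map (assumed ranges)."""
    archetype: Archetype
    league: int   # 1-7: mastery domain
    mission: int  # 1-49: current objective
    level: int    # 1-7: progress within the mission

    def __post_init__(self):
        if not (1 <= self.league <= 7
                and 1 <= self.mission <= 49
                and 1 <= self.level <= 7):
            raise ValueError("coordinate out of range")
```

Making the coordinate an immutable, validated value means every downstream component — nudge selection, reward calculation, reporting — agrees on what "where this customer is" means.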

How It Works

Defined State Space. The player map positions each customer across multiple dimensions. This isn’t demographic segmentation. It’s individual positioning that makes relevance computable.

Mapped Action Space. The nudge library defines possible actions the AI can take. The player map constrains which nudges are appropriate for each customer’s current position. No more generic recommendations.

Clear Reward Functions. Mission completions, level progressions, and financial wellness improvements serve as quantifiable signals. The AI knows what “progress” means for each individual.

Temporal Reasoning. The journey architecture enables AI to understand that current actions affect future states. Nudging toward an emergency fund today creates conditions for investment conversations tomorrow.
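The state/action/reward framing above can be sketched in a few lines. The nudge names, preconditions, and reward weights here are invented for illustration; the point is the shape: state constrains the action space, and progress is a quantifiable signal.

```python
# Hypothetical nudge library: each nudge has a precondition on state.
NUDGE_LIBRARY = {
    "start_emergency_fund": lambda s: s["emergency_fund_months"] < 3,
    "investment_intro":     lambda s: s["emergency_fund_months"] >= 3,
    "streak_reminder":      lambda s: s["streak_days"] > 0,
}

def allowed_nudges(state: dict) -> list[str]:
    """Action space constrained by current state: only nudges whose
    precondition holds are candidates for this customer right now."""
    return [name for name, ok in NUDGE_LIBRARY.items() if ok(state)]

def reward(before: dict, after: dict) -> int:
    """Toy reward function: level progressions plus (weighted higher)
    mission completions between two snapshots of a customer's state."""
    return (after["level"] - before["level"]) + 2 * (
        after["missions_completed"] - before["missions_completed"])
```

Note the temporal logic falls out for free: a customer without an emergency fund is only eligible for the emergency-fund nudge, and completing it changes the state that unlocks the investment conversation.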

Engram lets AI retrieve context rather than constantly re-reason.

Odyssey gives banking AI the structured customer context that makes output relevant rather than generic.

Both solve the same fundamental problem: infrastructure investment is wasted without the context architecture that directs it toward outcomes.

From Cohorts to Individuals

Traditional bank AI operates on cohorts. Customers are segmented into broad groups. Recommendations are generated for the segment, not the individual.

Odyssey enables genuine hyper-personalisation. The player map provides individual coordinates. Every interaction refines the algorithm. The system learns what motivates each person, which nudges drive action, what rewards resonate.

This isn’t incremental improvement. It’s a different architecture entirely—one that gives AI the context it needs to move from noise to signal.
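One plausible mechanism for "every interaction refines the algorithm" is a per-customer bandit: track which nudges each individual actually responds to and favour those. This is a minimal sketch of that idea, not a description of Odyssey's actual learning system.

```python
import random
from collections import defaultdict

class NudgeSelector:
    """Toy per-customer epsilon-greedy bandit: learns which nudges
    a given individual acts on. Illustrative assumption only."""

    def __init__(self, nudges, epsilon=0.1, seed=None):
        self.nudges = list(nudges)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        # per-(customer, nudge) running stats: [acted, shown]
        self.stats = defaultdict(lambda: [0, 0])

    def select(self, customer_id):
        """Mostly exploit the best-known nudge; occasionally explore."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.nudges)
        def response_rate(nudge):
            acted, shown = self.stats[(customer_id, nudge)]
            return acted / shown if shown else 0.0
        return max(self.nudges, key=response_rate)

    def record(self, customer_id, nudge, acted: bool):
        """Feed back whether the customer acted on the nudge."""
        entry = self.stats[(customer_id, nudge)]
        entry[1] += 1
        if acted:
            entry[0] += 1
```

Because the statistics are keyed per customer, two people shown the same nudge library converge to different selections — segmentation of one, learned from behaviour.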

103 million coordinates is more than most banks will ever need. But it means every customer can occupy their own unique position, with AI that understands exactly where they are and what comes next.

The Competitive Reality

PayPay didn’t win 80 million users and two-thirds of Japan’s mobile payments market by out-spending traditional banks on infrastructure. They won by understanding that engagement—structured context about what customers need and when—drives behaviour, and behaviour drives outcomes.

While traditional banks debate AI strategy, mobile-first platforms are capturing market share by solving the context problem first.

The banks that thrive in the next decade will be the ones who recognise what DeepSeek just proved: scale without the right architecture is expensive underperformance.

Context, Not Compute

DeepSeek’s paper ends with a provocation: “We envision conditional memory functions as an indispensable modelling primitive for next-generation sparse models.”

Translation: the future of AI isn’t more compute. It’s structured context that makes compute effective.

The future of banking AI isn’t more infrastructure. It’s the customer context layer that turns generic AI into relevant, actionable guidance.

That’s what Odyssey delivers. The game layer that gives AI structured understanding of each customer. The guardrails that constrain output toward financial wellness. The 103 million coordinates of context that the trillion-dollar AI buildout forgot.

The industry is learning the hard way that scale isn’t the answer.

The smart money is on context.