Hook
The dashboard showed a stock price that was 63 seconds old, and a user almost traded on it. The cache had done its job — it served the value fast — but it had no idea the data was already stale for that asset class. That was the moment I realized speed is not the same as freshness, and a stock quote that is 61 seconds old is not the same thing as a news article that is 14 minutes old. The cache itself needed to know that difference.
I built the financial dashboard cache around that idea. The cache is not just a bucket for avoiding repeat calls; it is the place where I encode whether a piece of market data is still trustworthy enough to render. That decision lives in the TTLs, the key shape, and the read path.
Key insight
The naive version of this problem is easy to imagine: one cache, one expiration, one rule. That works until the dashboard starts mixing data with very different lifetimes. A quote wants one cadence, intraday history wants another, and search results can safely live much longer. If I flatten all of that into a single timeout, I force the dashboard to treat unlike data as if it aged the same way.
So I split the cache by data type and made TTL selection part of the write path. That means the cache layer is doing two jobs at once: it stores values, and it encodes the freshness contract for each kind of market data. When a read happens, the cache either returns the value immediately or deletes the expired entry and forces a miss. There is no separate “maybe stale” state leaking upward into the UI. That lines up with the usual cache-aside flow, the one AWS describes in its caching guidance: check the cache first, and repopulate on a miss.
The shape of that flow is the whole trick. The component does not guess freshness, and the fetch layer does not need to know which UI state was trying to render. The cache decides, then the rest of the system follows that decision.
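That flow can be sketched in miniature. This is an illustration of the cache-aside read path, not the app's real module: `Quote` and `fetchQuoteFromApi` are hypothetical stand-ins.

```typescript
// Minimal cache-aside sketch: check the cache first, go upstream only on a
// miss or an expired entry. fetchQuoteFromApi is a hypothetical stand-in.
type Quote = { ticker: string; price: number };

const quoteStore = new Map<string, { data: Quote; expiresAt: number }>();

async function getQuoteCacheAside(
  ticker: string,
  fetchQuoteFromApi: (t: string) => Promise<Quote>,
  ttlMs = 60 * 1000
): Promise<Quote> {
  const entry = quoteStore.get(ticker);
  if (entry && Date.now() <= entry.expiresAt) {
    return entry.data; // fresh hit: render immediately, no upstream call
  }
  quoteStore.delete(ticker); // expired entries are removed, never surfaced
  const data = await fetchQuoteFromApi(ticker);
  quoteStore.set(ticker, { data, expiresAt: Date.now() + ttlMs });
  return data;
}
```

The second read within the window never touches the upstream API; the component only ever sees a value the cache has already judged fresh.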
How the cache is shaped
I used two closely related cache patterns in this codebase. The market data cache is specialized for the dashboard's data and exposes typed accessors for each category. The cache module keeps a more general-purpose in-memory cache with a reusable withCache helper.
The market dashboard version is the more interesting one because it makes the data types explicit. It stores entries as a Map<string, CacheEntry<unknown>>, where each entry carries the data and an expiresAt timestamp. The key is built from a type prefix and the identifying parts of the request.
```typescript
interface CacheEntry<T> {
  data: T;
  expiresAt: number;
}

const TTL = {
  quote: 60 * 1000,                 // 1 minute
  intradayHistory: 5 * 60 * 1000,   // 5 minutes
  dailyHistory: 60 * 60 * 1000,     // 1 hour
  news: 15 * 60 * 1000,             // 15 minutes
  indices: 60 * 1000,               // 1 minute
  search: 24 * 60 * 60 * 1000,      // 24 hours
};

class StockDataCache {
  private cache = new Map<string, CacheEntry<unknown>>();

  private getKey(type: string, ...parts: string[]): string {
    return `${type}:${parts.join(':')}`;
  }

  private get<T>(key: string): T | null {
    const entry = this.cache.get(key);
    if (!entry) return null;
    if (Date.now() > entry.expiresAt) {
      this.cache.delete(key);
      return null;
    }
    return entry.data as T;
  }

  private set<T>(key: string, data: T, ttl: number): void {
    this.cache.set(key, { data, expiresAt: Date.now() + ttl });
  }
}
```
What I like about this shape is that the expiration is stored with the value, not in some separate table of metadata. That keeps the read path blunt and honest: if the timestamp has passed, the entry is gone. There is no ambiguity for the dashboard to interpret later.
The general cache in the cache module follows the same instinct but keeps the API broader. It stores data, timestamp, and ttl, and it includes a cleanup() pass that removes expired entries across the map. That file also ships a withCache helper, which reads through the cache first and only invokes the async function on a miss.
```typescript
interface CacheEntry<T = any> {
  data: T
  timestamp: number
  ttl: number
}

class MemoryCache {
  private cache = new Map<string, CacheEntry>()

  set<T>(key: string, data: T, ttlMs: number = 300000): void {
    this.cache.set(key, {
      data,
      timestamp: Date.now(),
      ttl: ttlMs
    })
  }

  get<T>(key: string): T | null {
    const entry = this.cache.get(key)
    if (!entry) return null
    if (Date.now() - entry.timestamp > entry.ttl) {
      this.cache.delete(key)
      return null
    }
    return entry.data as T
  }
}

// Module-level singleton that withCache reads through
const cache = new MemoryCache()

export async function withCache<T>(
  key: string,
  fn: () => Promise<T>,
  ttlMs: number = 300000
): Promise<T> {
  const cached = cache.get<T>(key)
  if (cached !== null) {
    return cached
  }
  const result = await fn()
  cache.set(key, result, ttlMs)
  return result
}
```
This helper is the generic version of the same idea: trust the cached value if it is still inside its window, otherwise fetch and repopulate. I prefer this pattern when I want the freshness rule to sit beside the fetch call instead of being duplicated across several routes.
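A route-level usage might look like the sketch below. The inlined helper and `fetchNews` are stand-ins for the module's exports, shown self-contained here for illustration.

```typescript
// Sketch: the freshness rule sits beside the fetch call it guards.
// The inlined cache and fetchNews are stand-ins for the real module.
const memo = new Map<string, { data: unknown; timestamp: number; ttl: number }>();

async function withCache<T>(key: string, fn: () => Promise<T>, ttlMs = 300000): Promise<T> {
  const entry = memo.get(key);
  if (entry && Date.now() - entry.timestamp <= entry.ttl) return entry.data as T;
  const result = await fn();
  memo.set(key, { data: result, timestamp: Date.now(), ttl: ttlMs });
  return result;
}

let upstreamCalls = 0;
async function fetchNews(ticker: string): Promise<string[]> {
  upstreamCalls++; // counts trips upstream, to show the cache deduplicates them
  return [`headline for ${ticker}`];
}

// The route states its own freshness window right where it fetches:
const newsRoute = (ticker: string) =>
  withCache(`news:${ticker}`, () => fetchNews(ticker), 15 * 60 * 1000);
```

The TTL argument lives next to the call it governs, which is exactly the property I wanted: no route has to remember a freshness rule defined somewhere else.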
Why the TTLs are different
The TTL table in the stock data cache module is where the product rule becomes visible. Quotes and indices use one minute. Intraday history uses five minutes. Daily history gets one hour. News gets fifteen minutes. Search keeps its results for a full day.
That split is not a performance trick; it is a trust model. I wanted each category to age at the pace that made sense for the dashboard’s behavior. A quote should not linger long enough to mislead the user, but a search result does not need to churn every time someone types the same ticker again. The cache is the policy boundary.
The important part is that the TTL is chosen when the value is written. For history, the code checks the timeframe and assigns either intraday or daily TTL. For quotes, news, indices, and search, each setter applies its own fixed window.
```typescript
getQuote<T>(ticker: string): T | null {
  return this.get<T>(this.getKey('quote', ticker));
}

setQuote<T>(ticker: string, data: T): void {
  this.set(this.getKey('quote', ticker), data, TTL.quote);
}

getHistory<T>(ticker: string, timeframe: string): T | null {
  return this.get<T>(this.getKey('history', ticker, timeframe));
}

setHistory<T>(ticker: string, timeframe: string, data: T): void {
  const ttl = timeframe === '1D' ? TTL.intradayHistory : TTL.dailyHistory;
  this.set(this.getKey('history', ticker, timeframe), data, ttl);
}

getNews<T>(ticker: string): T | null {
  return this.get<T>(this.getKey('news', ticker));
}

setNews<T>(ticker: string, data: T): void {
  this.set(this.getKey('news', ticker), data, TTL.news);
}
```
The non-obvious detail here is the split inside setHistory. I did not want every history request to age the same way, because the dashboard can ask for both short-lived intraday data and longer-lived daily data. The timeframe itself carries enough meaning to justify the TTL decision.
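That decision can be isolated into a tiny pure function. Extracting it this way is my framing for illustration, not how the file is organized; the constants come from the TTL table above.

```typescript
// The timeframe → TTL decision from setHistory, in isolation.
const INTRADAY_HISTORY_TTL = 5 * 60 * 1000; // 5 minutes
const DAILY_HISTORY_TTL = 60 * 60 * 1000;   // 1 hour

function historyTtl(timeframe: string): number {
  // '1D' is the intraday case; every other timeframe ages on the daily window.
  return timeframe === '1D' ? INTRADAY_HISTORY_TTL : DAILY_HISTORY_TTL;
}
```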
How the dashboard decides what is still trustworthy
The dashboard does not ask, “Is this cached?” It asks, “Is this cached value still within the lifetime I assigned to this kind of market data?” That is a more useful question because it binds the UI to the freshness rule rather than to the storage mechanism.
In the stock data cache module, the answer is computed inside get<T>. If the key is missing, the cache returns null. If the entry's expiresAt has passed, the entry is deleted and the read also returns null. Only live entries are handed back to the caller.
That matters because the UI can treat null as a clean miss and go upstream without needing a second branch for “stale but maybe acceptable.” I prefer that because stale data is a business decision, not a rendering detail. Once the cache says the value is dead, the rest of the dashboard never has to debate it.
The general cache in the cache module follows the same pattern, but the extra cleanup() method gives me a maintenance pass when I want to sweep the map. The read path remains the authority on freshness. That is the part I trust most, because it is evaluated at the moment of use.
The timeline that makes the rule visible
A cache like this is easiest to understand as a timeline rather than a table. I think of it as three states in a row: fresh, stale, and refreshed. The value starts fresh after a write, becomes stale when its TTL passes, and then gets replaced by a new fetch when the next read misses.
write ───── fresh window ───── expiry ───── miss → upstream fetch → refreshed write
That tiny line is the operational contract. The cache does not pretend a stale value is good enough. It simply stops returning it, and the next read repopulates the slot with a value that starts a new window.
The subtle benefit is that this keeps the dashboard behavior predictable. If a read succeeds, I know exactly why: the value is still inside its assigned window. If it fails, I know exactly why too: the cache has already declared it too old to trust.
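The timeline is easy to exercise with an injectable clock. This is a test-style sketch of the entries-with-expiresAt design above, not the production code; the `now` parameter is a hypothetical seam (the real cache calls Date.now() directly).

```typescript
// Sketch of the write → fresh → expiry → refreshed timeline, with time
// injected so the window can be stepped through without waiting.
function makeTimelineCache(now: () => number) {
  const map = new Map<string, { data: string; expiresAt: number }>();
  return {
    set(key: string, data: string, ttl: number): void {
      map.set(key, { data, expiresAt: now() + ttl });
    },
    get(key: string): string | null {
      const entry = map.get(key);
      if (!entry) return null;
      if (now() > entry.expiresAt) {
        map.delete(key); // stale values are never returned, only removed
        return null;
      }
      return entry.data;
    },
  };
}
```

Stepping the clock past the window turns a hit into a miss, and the next write starts a fresh window, which is the whole contract in three assertions.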
Why I kept a general cache alongside the market cache
The cache module exists because not every in-memory cache in the app needs market-specific semantics. Some paths just need a simple TTL wrapper around async work. For those cases, the MemoryCache plus withCache pattern is enough.
The market cache, by contrast, is intentionally opinionated. It knows about quotes, history, news, indices, and search. It also knows that history is not one thing; it has at least two freshness profiles depending on timeframe. That specialization is exactly what makes it useful in the dashboard.
I like this split because it keeps the generic helper from becoming a junk drawer. The specialized cache can encode product rules without dragging those assumptions into places that do not need them.
Nuances that matter in practice
The cache key shape matters as much as the TTL. In the stock data cache module, getKey(type, ...parts) joins the parts with colons. That gives me a simple namespace boundary between quote, history, news, indices, and search entries. A quote for one ticker never collides with a history entry for the same ticker because the type prefix keeps them apart.
Then there is deletion on expiry. I chose to remove expired entries at read time rather than keep them around as stale records. That keeps the map from accumulating dead values and keeps the read behavior simple: a value is either acceptable or absent.
The cache API is intentionally tiny. There is no elaborate invalidation protocol in this file. The dashboard gets a small set of accessors, each with a clear TTL policy, and that is enough for the data this app renders.
The cache module exposes a clear() and a cleanup() path, which is useful when I want broader cache hygiene. The market-specific cache also has a clear() method, which makes it easy to reset the whole store when needed.
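A sweep like cleanup() can be sketched as a single pass over the map. The entry shape here follows the cache module's timestamp-plus-ttl design; the exact implementation in the file may differ.

```typescript
// One-pass sweep: delete every entry whose window has already passed.
// Deleting while iterating a Map with for...of is safe in JavaScript.
interface SweepEntry {
  timestamp: number; // write time (ms since epoch)
  ttl: number;       // lifetime in ms
}

function cleanup(entries: Map<string, SweepEntry>): void {
  const now = Date.now();
  for (const [key, entry] of entries) {
    if (now - entry.timestamp > entry.ttl) entries.delete(key);
  }
}
```

This is the complement to expiry-on-read: reads keep hot keys clean, and the sweep reclaims keys that are never read again.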
Closing
Once freshness lives in the cache, the dashboard stops guessing. A quote either earned its way onto the screen or it did not. The rule either holds or it does not.
