
Spotting the Shift: Real-Time Change Detection with K-NN Density Estimation and KL Divergence

2026/02/14 06:10
5 min read

Sergei Nasibian is a Quantitative Strategist at Rothesay, a London-based asset management company, where he built from scratch the entire risk-calculation framework that serves as the main source of analytics for hedging market exposure. Previously, Sergei worked as a Senior Data Scientist at Yandex Eats, where he developed the company’s delivery pricing system from the ground up and supported the business’s expansion into new geographies. He also worked as a Data Scientist at McKinsey & Company and as a Quantitative Researcher at WorldQuant, where he won the global alpha-building competition. Sergei holds a degree in Mathematics from Lomonosov Moscow State University and specializes in stochastic processes and clustering algorithms.

A model-agnostic method to catch subtle shifts in data distribution before your metrics degrade.

Machine learning models rarely fail all at once. Instead, their performance deteriorates gradually: metric values, confidence scores, and prediction accuracy begin to drift well before anyone notices.

One reason a model gradually becomes unfit is a change in the distribution of its input data. Even slight shifts can make the model less reliable, so spotting them early has become vital for maintaining production-level systems.

Our guest expert, Sergei Nasibian, offers a real-time approach to data drift detection that is both intuitive and mathematically grounded. It combines k-nearest-neighbor density estimation with Kullback-Leibler (KL) divergence to detect when live data deviates from the training distribution. The method assumes nothing about the form of the data distribution and requires no knowledge of the model’s internals.

The Silent Saboteur: Why Data Drift Matters

In production machine learning, data distributions are rarely constant. Market behavior, among other factors, can cause the input data to drift. Traditional monitoring watches the model’s outputs (recall, accuracy, precision), but by the time those metrics drop, the problem has already happened. Sergei monitors the input data instead.

The Dynamic Duo: K-NN and KL Divergence

Sergei’s method combines two complementary techniques:

K-Nearest Neighbors for Density Estimation: k-NN density estimation relies solely on the data itself rather than assuming how the data should look (Gaussian parametric family, anybody?). It estimates the probability density at a point in feature space from the distance between that point and its k-th nearest neighbor: the smaller the distance, the denser the region.

KL Divergence: Kullback-Leibler divergence measures how much one probability distribution differs from another. The larger the KL divergence between the current and reference data distributions, the stronger the evidence of drift.
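The article describes the two ideas at a conceptual level only. The sketch below is one way to realize them in Python, assuming the classic k-NN divergence estimator of Wang, Kulkarni and Verdú (2009), in which the explicit density estimates largely cancel into a ratio of neighbor distances. The function names, the use of scikit-learn for the exact search, and the default k are illustrative choices, not details taken from Sergei’s implementation.

```python
import numpy as np
from scipy.special import gamma
from sklearn.neighbors import NearestNeighbors


def knn_density(query: np.ndarray, sample: np.ndarray, k: int) -> np.ndarray:
    """k-NN density estimate: p(x) ~ k / (n * V_d * r_k(x)^d), where r_k(x) is
    the distance from x to its k-th nearest neighbour in `sample` and V_d is
    the volume of the d-dimensional unit ball."""
    n, d = sample.shape
    r_k = NearestNeighbors(n_neighbors=k).fit(sample).kneighbors(query)[0][:, -1]
    v_d = np.pi ** (d / 2) / gamma(d / 2 + 1)           # unit-ball volume
    return k / (n * v_d * np.maximum(r_k, 1e-12) ** d)


def knn_kl_divergence(p_sample: np.ndarray, q_sample: np.ndarray, k: int = 5) -> float:
    """Estimate KL(P || Q) from two samples using k-th neighbour distances.
    The volume terms of the two density estimates cancel, leaving only a
    ratio of distances plus a bias-correction constant."""
    n, d = p_sample.shape
    m = q_sample.shape[0]
    # Distance from each P point to its k-th neighbour inside P
    # (k + 1 because each query point is its own zero-distance neighbour).
    rho = NearestNeighbors(n_neighbors=k + 1).fit(p_sample).kneighbors(p_sample)[0][:, -1]
    # Distance from each P point to its k-th neighbour inside Q.
    nu = NearestNeighbors(n_neighbors=k).fit(q_sample).kneighbors(p_sample)[0][:, -1]
    log_ratio = np.log(np.maximum(nu, 1e-12) / np.maximum(rho, 1e-12))
    return d * float(np.mean(log_ratio)) + float(np.log(m / (n - 1)))
```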

The Method: Simple Yet Effective

Our expert’s detection system functions as follows:

Define the baseline: Training samples serve as the baseline distribution. Estimate the reference probability density via the k-NN algorithm.

Form the sliding window: As new observations flow in, maintain a sliding window of the most recent ones – this window represents the “current” distribution. Apply the k-NN algorithm (with the same parameter k) to estimate the probability density of the observations in the window.

Calculate the KL divergence: Compare the two distributions using KL divergence. Higher values indicate drift, while lower values indicate that the distributions remain similar.

Trigger the alert: An alert should be triggered when the KL divergence goes above the pre-set threshold.
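Putting the four steps together, a minimal streaming sketch might look like the following. It reuses the knn_kl_divergence function from the earlier sketch; the window size, k, and threshold values are placeholders to be calibrated as discussed in the next section, not values from the original system.

```python
from collections import deque

import numpy as np


class DriftDetector:
    """Sliding-window drift monitor: compares recent observations against a
    fixed reference (training) sample via the k-NN KL estimate above."""

    def __init__(self, reference: np.ndarray, window_size: int = 500,
                 k: int = 5, threshold: float = 0.5):
        self.reference = reference                    # baseline distribution
        self.window = deque(maxlen=window_size)       # "current" distribution
        self.k = k
        self.threshold = threshold                    # calibrated offline

    def update(self, observation: np.ndarray) -> bool:
        """Append one observation; return True when a drift alert should fire."""
        self.window.append(observation)
        if len(self.window) < self.window.maxlen:
            return False                              # wait for a full window
        current = np.vstack(self.window)
        divergence = knn_kl_divergence(current, self.reference, k=self.k)
        return divergence > self.threshold
```

In use, each incoming feature vector is passed to update(), and a True return value triggers whatever alerting pipeline the team already has in place.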

The Devil in the Details: Practical Considerations

Window size selection: If the window is too small, you’ll end up chasing noise; if it is too large, rapid changes will be diluted and missed. The nature of your data and how quickly you need to detect a change determine the right trade-off.

Threshold calibration: Choosing the right KL divergence threshold is equally important. Like window size, it can produce false positives if set too low or miss real drifts if set too high. Sergei recommends splitting a homogeneous part of the sample into n sequential windows and computing the pairwise KL divergences between them; the 95th or 99th percentile of those values can then be used as the threshold.
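This calibration recipe translates almost directly into code. Below is a sketch that again reuses knn_kl_divergence from the earlier block; because KL divergence is asymmetric, the sketch evaluates every ordered pair of windows, which is one possible reading of “pairwise” rather than a detail stated in the article.

```python
from itertools import permutations

import numpy as np


def calibrate_threshold(homogeneous_sample: np.ndarray, n_windows: int = 10,
                        k: int = 5, percentile: float = 95.0) -> float:
    """Split a drift-free sample into sequential windows, compute the k-NN KL
    estimate for every ordered pair of windows, and return a high percentile
    of those values as the alert threshold."""
    windows = np.array_split(homogeneous_sample, n_windows)
    divergences = [knn_kl_divergence(a, b, k=k)
                   for a, b in permutations(windows, 2)]
    return float(np.percentile(divergences, percentile))
```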

Determining the value of k: Larger values of k produce smoother, less localized density estimates. Smaller values emphasize irregularities in the distribution but are more sensitive to noise in the data. A good starting point is the square root of the sample size.
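As a sketch of that rule of thumb (the exact rounding is my choice, not something specified in the article):

```python
import numpy as np


def default_k(sample_size: int) -> int:
    """Rule-of-thumb starting point: k is roughly the square root of the sample size."""
    return max(1, int(round(np.sqrt(sample_size))))


default_k(400)   # -> 20 neighbours for a 400-point window
```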

Real-World Application: E-commerce Recommendation Systems

For example, consider a recommendation system for an online retailer. If the model was trained on pre-pandemic shopping data, but customer behavior has since changed (e.g., increased purchases of home goods and decreased interest in travel accessories), the input data distribution will shift.

Traditional monitoring might show declining click-through rates days after the shift began. Our expert’s k-NN approach would flag the change much earlier by detecting that incoming customer feature vectors (in-app behavior) no longer match the training distribution.

When KL divergence spikes, you know something’s changed. Maybe it’s a seasonal trend, a marketing campaign effect, or a fundamental shift in customer preferences. Either way, you’re alerted in time to investigate and adapt.

The Scalability Question

Sergei notes that this technique can be scaled to the required volume with the right engineering:

Sampling Strategies: When dealing with large-scale data, operate on a representative sample rather than the full dataset.

Approximate Nearest Neighbors: Use libraries such as Annoy or Faiss for approximate nearest-neighbor search (see the sketch after this list).

Parallel Processing: Both the density estimation and the KL divergence computation can be distributed across multiple machines.

Incremental Updates: Update rolling statistics rather than recomputing everything.
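For illustration, here is one way the exact neighbor search in the earlier sketches could be swapped for an approximate one. This assumes the faiss-cpu package is available; the HNSW parameters are arbitrary defaults, and nothing here comes from Sergei’s production setup.

```python
import numpy as np
import faiss  # pip install faiss-cpu


def knn_distances_approx(query: np.ndarray, sample: np.ndarray, k: int) -> np.ndarray:
    """Distance from each query point to its k-th nearest neighbour in `sample`,
    using a Faiss HNSW index instead of an exact search."""
    d = sample.shape[1]
    index = faiss.IndexHNSWFlat(d, 32)                # 32 graph links per node
    index.add(np.ascontiguousarray(sample, dtype=np.float32))
    distances, _ = index.search(np.ascontiguousarray(query, dtype=np.float32), k)
    return np.sqrt(distances[:, -1])                  # Faiss reports squared L2
```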
