OPEN-SOURCE SCRIPT

Lorentzian Harmonic Flow - Temporal Market Dynamic (⚡LHF)

By: DskyzInvestments

What this is
LHF Pro is a research‑grade analytical instrument that models market time as a compressible medium, extracts directional flow in curved time using heavy‑tailed kernels, and consults a history‑based memory bank for context before synthesizing a final, bounded probabilistic score. It is not a mashup; each subsystem is mathematically coupled to a single clock (time dilation via gamma) and a single lens (Lorentzian heavy‑tailed weighting). This script is dense in logic (and therefore heavy) because it prioritizes rigor, interpretability, and visual clarity.

Intended use
Education and research. This tool expresses state recognition and regime context—not guarantees. It does not place orders. It is fully functional as published and contains no placeholders. Nothing herein is financial advice.

Why this is original and useful

Curved time: Markets do not move at a constant pace. LHF Pro computes a Lorentz‑style gamma (γ) from relative speed so its analytical windows contract when the tape accelerates and relax when it slows.
Heavy‑tailed lens: Lorentzian kernels weight information with fat tails to respect rare but consequential extremes (unlike Gaussian decay).
Memory of regimes: A K‑nearest‑neighbors engine works in a multi‑feature space using Lorentz kernels per dimension and exponential age fade, returning a memory bias (directional expectation) and assurance (confidence mass).
One ecosystem: Squeeze, TCI, flow, acceleration, and memory live on the same clock and blend into a single final_score—visualized and documented on the dashboard.
Cognitive map: A 2D heat map projects memory resonance by age and flow regime, making “where the past is speaking” visible.
Shadow portfolio metaphor: Neighbor outcomes act like tiny hypothetical positions whose weighted average forms an educational pressure gauge (no execution, purely didactic).
Mathematical framework (full transparency)

1) Returns, volatility, and speed‑of‑market

Log return: rₜ = ln(closeₜ / closeₜ₋₁)
Realized vol: rv = stdev(r, vol_len); vol‑of‑vol: burst = |rv − rv[1]|
Speed‑of‑market (analog to c): c = c_multiplier × (EMA(rv) + 0.5 × EMA(burst) + ε)
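
For concreteness, a minimal Python/pandas sketch of these three quantities (the function name, window defaults, and EMA smoothing spans are illustrative assumptions, not the published Pine Script source):

import numpy as np
import pandas as pd

def speed_of_market(close: pd.Series, vol_len: int = 20,
                    c_multiplier: float = 1.0, eps: float = 1e-10) -> pd.Series:
    r = np.log(close / close.shift(1))            # log return r_t
    rv = r.rolling(vol_len).std()                 # realized volatility
    burst = (rv - rv.shift(1)).abs()              # vol-of-vol burst
    # speed-of-market c: smoothed volatility plus half the smoothed burst
    return c_multiplier * (rv.ewm(span=vol_len).mean()
                           + 0.5 * burst.ewm(span=vol_len).mean() + eps)
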
2) Trend velocity and Lorentz gamma (time dilation)

Trend velocity: v = |close − close[vel_len]| / (vel_len × ATR)
Relative speed: v_rel = v / c
Gamma: γ = 1 / √(1 − v_rel²), stabilized by caps (e.g., ≤10)
Interpretation: γ > 1 compresses market time → use shorter effective windows.
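
A sketch of the time-dilation step under the same assumptions (ATR and c are supplied by the caller; the 0.999 clip and the cap mirror the stabilization mentioned above):

import numpy as np
import pandas as pd

def lorentz_gamma(close: pd.Series, atr: pd.Series, c: pd.Series,
                  vel_len: int = 14, gamma_cap: float = 10.0) -> pd.Series:
    v = close.diff(vel_len).abs() / (vel_len * atr)   # trend velocity
    v_rel = (v / c).clip(upper=0.999)                 # relative speed, kept below 1
    gamma = 1.0 / np.sqrt(1.0 - v_rel ** 2)           # Lorentz factor
    return gamma.clip(upper=gamma_cap)                # stability cap
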
3) Adaptive temporal scale

Adaptive length: L = base_len / γ^power (bounded for safety)
Harmonic horizons: Lₛ = L × short_ratio, Lₘ = L × mid_ratio, Lₗ = L × long_ratio
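
The adaptive-scale step reduces to a few lines; the bounds and ratio defaults below are illustrative, not the script's actual inputs:

def adaptive_lengths(base_len: float, gamma: float, power: float = 1.0,
                     min_len: float = 5, max_len: float = 200,
                     short_ratio: float = 0.5, mid_ratio: float = 1.0,
                     long_ratio: float = 2.0):
    # higher gamma (compressed time) shrinks the effective window
    L = min(max(base_len / gamma ** power, min_len), max_len)
    return L * short_ratio, L * mid_ratio, L * long_ratio   # L_s, L_m, L_l
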
4) Lorentzian smoothing and Harmonic Flow

Kernel weight per lag i: wᵢ = 1 / (1 + (d/γ)²), d = i/L
Horizon baselines: lw_h = Σ wᵢ·priceᵢ / Σ wᵢ (Lorentz‑weighted price average per horizon h)
Z‑deviation: z_h = (close − lw_h)/ATR
Harmonic Flow (HFL): HFL = (w_short·zₛ + w_mid·zₘ + w_long·zₗ) / (w_short + w_mid + w_long)
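
A sketch of the heavy-tailed smoothing and the blended flow; it assumes prices is a numpy array ordered oldest-to-newest, with ATR and the per-horizon baselines supplied from the steps above:

import numpy as np

def lorentzian_baseline(prices: np.ndarray, L: float, gamma: float,
                        max_span: int = 50) -> float:
    span = min(max_span, len(prices))
    lags = np.arange(span)                       # lag 0 = current bar
    w = 1.0 / (1.0 + ((lags / L) / gamma) ** 2)  # heavy-tailed Lorentzian weights
    window = prices[::-1][:span]                 # most recent price first
    return float(np.sum(w * window) / np.sum(w))

def harmonic_flow(close: float, atr: float, baselines, weights=(0.5, 0.3, 0.2)) -> float:
    z = [(close - b) / atr for b in baselines]   # z-deviation per horizon
    return sum(wi * zi for wi, zi in zip(weights, z)) / sum(weights)
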
5) Flow kinematics

Velocity: HFL_vel = HFL − HFL[1]
Acceleration (curvature): HFL_acc = HFL − 2·HFL[1] + HFL[2]
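
In code these are simply the first and second discrete differences, e.g. with hfl held as a pandas Series:

hfl_vel = hfl - hfl.shift(1)                      # flow velocity
hfl_acc = hfl - 2 * hfl.shift(1) + hfl.shift(2)   # flow acceleration (curvature)
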
6) Squeeze and temporal compression

Bollinger width vs Keltner width using L
Squeeze: BB_width < KC_width × squeeze_mult
Temporal Compression Index: TCI = base_len / L; TCI > 1 ⇒ compressed time
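
A simplified sketch of the compression logic; the Keltner side here uses a plain high-low range instead of full true range, and the band multipliers are assumed defaults:

def squeeze_and_tci(close, high, low, L: float, base_len: float,
                    bb_mult: float = 2.0, kc_mult: float = 1.5,
                    squeeze_mult: float = 1.0):
    n = max(int(round(L)), 2)
    bb_width = 2 * bb_mult * close.rolling(n).std()          # Bollinger band width
    kc_width = 2 * kc_mult * (high - low).rolling(n).mean()  # Keltner width (simplified range)
    squeeze_on = bb_width < kc_width * squeeze_mult
    tci = base_len / L                                       # >1 means compressed time
    return squeeze_on, tci
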
7) Entropy (regime complexity)
Shannon‑inspired proxy on |log returns| with numerical safeguards and smoothing. Higher entropy → more chaotic regime.
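
One way to realize such a proxy is a histogram-based Shannon entropy over |log returns| in the lookback window; the bin count and epsilon below are assumptions, not the script's exact formulation:

import numpy as np

def return_entropy(r: np.ndarray, bins: int = 10, eps: float = 1e-10) -> float:
    x = np.abs(r[~np.isnan(r)])                # magnitudes of log returns
    hist, _ = np.histogram(x, bins=bins)
    p = hist / max(hist.sum(), 1)              # empirical probabilities
    p = p[p > 0]
    return float(-np.sum(p * np.log(p + eps))) # Shannon entropy with safeguard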

8) Memory bank and Lorentzian k‑NN

Feature vector (5D): [HFL, γ, entropy, realized vol (EMA(rv)), HFL_vel]
Outcomes stored: forward returns at H5, H13, H34
Per‑dimension similarity: k(Δ) = 1 / (1 + Δ²), weighted by user’s feature weights
Age fading: weight_age = mem_fade^age_bars
Neighbor score: sᵢ = similarityᵢ × weight_ageᵢ
Memory bias: mem_bias = Σ sᵢ·outcomeᵢ / Σ sᵢ
Assurance: mem_assurance = Σ sᵢ (confidence mass)
Normalization: mem_bias normalized by ATR and clamped into [−1, 1] band
Shadow portfolio metaphor: neighbors behave like micro‑positions; their weighted net forward return becomes a continuous, adaptive expectation.
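
A condensed sketch of the neighbor vote; the array shapes, default k, and top-k selection below are illustrative (the script itself keeps parallel ring-buffer arrays):

import numpy as np

def memory_vote(query, features, outcomes, ages, feat_weights,
                mem_fade: float = 0.99, k: int = 8):
    # query: 5D state [HFL, gamma, entropy, vol, HFL_vel]; features: (N, 5) history
    delta = features - query
    sim = np.sum(feat_weights / (1.0 + delta ** 2), axis=1) / np.sum(feat_weights)
    s = sim * mem_fade ** ages                 # Lorentz similarity x exponential age fade
    idx = np.argsort(s)[-k:]                   # k most resonant neighbors
    s_k, out_k = s[idx], outcomes[idx]
    mem_bias = np.sum(s_k * out_k) / max(np.sum(s_k), 1e-10)   # directional expectation
    mem_assurance = float(np.sum(s_k))                         # confidence mass
    return mem_bias, mem_assurance
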
9) Blended score and breakout proxy

Blend factor: α_mem = 0.45 + 0.15 × (γ − 1)
Final score: final_score = (1−α_mem)·tanh(HFL / (flow_thr·1.5)) + α_mem·tanh(mem_bias_norm)
Breakout probability (bounded): energy = cap(TCI−1) + |HFL_acc|×k + cap(γ−1)×k + cap(mem_assurance)×k; breakout_prob = sigmoid(energy). Caps avoid runaway “100%” readings.
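
Both formulas in sketch form; the energy scaling factor k_e and the per-component cap are illustrative placeholders for the script's internal constants:

import numpy as np

def blended_score(hfl: float, mem_bias_norm: float, gamma: float,
                  flow_thr: float = 0.5) -> float:
    a_mem = float(np.clip(0.45 + 0.15 * (gamma - 1.0), 0.0, 1.0))  # gamma-adaptive blend
    return (1 - a_mem) * np.tanh(hfl / (flow_thr * 1.5)) + a_mem * np.tanh(mem_bias_norm)

def breakout_prob(tci: float, hfl_acc: float, gamma: float, mem_assurance: float,
                  k_e: float = 0.5, cap: float = 2.0) -> float:
    energy = (min(max(tci - 1.0, 0.0), cap)            # compressed time
              + abs(hfl_acc) * k_e                     # curvature
              + min(max(gamma - 1.0, 0.0), cap) * k_e  # time dilation
              + min(mem_assurance, cap) * k_e)         # memory confidence
    return 1.0 / (1.0 + np.exp(-energy))               # bounded sigmoid
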
Inputs — every control, purpose, mechanics, and tuning

🔮 Lorentz Core

Auto‑Adapt (Vol/Entropy): On = L responds to γ and entropy (breathes with the regime); Off = a fixed length for static testing.
Base Length: Calm‑market anchor horizon. Lower (21–28) for fast tapes; higher (55–89+) for slow.
Velocity Window (vel_len): Bars used in v. Shorter = more reactive γ; longer = steadier.
Volatility Window (vol_len): Bars used for rv/burst (c). Shorter = more sensitive c.
Speed‑of‑Market Multiplier (c_multiplier): Raises/lowers c. Lower values → easier γ spikes (more adaptation). Aim for strong trends to peak around γ ≈ 2–4.
Gamma Compression Power: Exponent of γ in L. <1 softens; >1 amplifies adaptation swings.
Max Kernel Span: Upper bound on smoothing loop (quality vs CPU).
🎼 Harmonic Flow

Short/Mid/Long Horizon Ratios: Partition L into fast/medium/slow views. Smaller short_ratio → faster reaction; larger long_ratio → sturdier bias.
Weights (w_short/w_mid/w_long): Governs HFL blend. Higher w_short → nimble; higher w_long → stable.
📈 Signals

Squeeze Strictness: Threshold for BB < KC. Lower → fewer but cleaner squeeze signals.
Flow Threshold: Minimum |HFL| for directionality; raises signal quality when increased.
Acceleration Threshold: Minimum |HFL_acc|; higher favors impulsive expansions.
Min Bars Between Signals: Spam guard.
Confirmation Mode: Aggressive/Balanced/Conservative scales thresholds.
Real‑Time Mode: Permits intrabar signals (minor repaint risk); Off = confirmed‑bar mode.
🧠 Memory Bank

Enable Memory Bank + Heat Map: Toggles the resonance engine.
Memory Size: 64–1024 ring buffer depth; higher = more patterns, heavier CPU.
Memory Fade: Exponential age decay. Lower focuses on recent regimes; higher preserves long memory.
k Neighbors: Voting council size; smaller = reactive; larger = smoother.
Prediction Horizon: H5/H13/H34/Blend; align to your holding period.
Feature Weights: Flow, Gamma, Entropy, Volatility, Momentum. Emphasize what drives your asset (e.g., Volatility for crypto).
🔥 Memory Heat Map

Show/Position: Toggle and corner placement.
Age Bins (X): Columns over recency.
Flow Bins (Y): Rows over bearish→neutral→bullish regimes.
Opacity Min/Max: Maps resonance weights to visual intensity.
🎨 Visuals

Background Aura: Green/red bias veil.
Flow River: Price ± HFL×scaled ATR.
Prediction Arc: Kinematic projection from HFL_vel/acc + memory bias (trajectory hint).
Compression Cloud: Width ∝ (TCI−1).
Signal Markers: ▲/▼ when full conditions pass.
🏆 Dashboard

Show/Position/Size/Theme: Display controls from compact to full diagnostics.
⚙️ Efficiency (if present in your build)

Efficiency Mode: Max Quality / Balanced / Fast caps KNN and heat‑map scans (no visual changes).
BB/KC Cap: Optional bound on loop length to reduce CPU on slower machines.
Dashboard — mission‑control metrics and how to use them

Gamma (γ): Time compression. ≈1 normal; 2–4 indicates accelerated tape. Rising γ → shorten expectations and prefer adaptive setups.
TCI: Compression index. >1 = compressed (coiled spring); <1 = dilated.
v/c: Relative speed; near 1 denotes extreme pacing. Diagnostic only.
Entropy: Regime complexity; high entropy suggests caution, smaller size, or waiting for order to return.
HFL: Curved‑time directional flow; sign and magnitude are the instantaneous bias.
HFL_acc: Curvature; spikes often accompany regime ignition post‑squeeze.
Mem Bias: Directional expectation from historical analogs (ATR‑normalized, bounded). Aligns or conflicts with HFL.
Assurance: Confidence mass from neighbors; higher → more reliable memory bias.
Squeeze: ON/RELEASE/OFF from the BB < KC logic; RELEASE states are the mechanical triggers for expansion.
Breakout P: Bounded probability proxy from compressed time + acceleration + γ + assurance (capped components).
Score: Final flow/memory blend with γ‑adaptive mixing; use as bias gate.
Neighbors (K): Diagnostics for memory depth and similarity coverage.
Signal row: Long/Short/Neutral output respecting thresholds and anti‑spam gap.
How to interpret the visuals (colors, lines, labels, and shapes)

Aura (background): green/red tint expresses net bias and rises with conviction; a contextual backdrop, not a trigger.
Flow River (bands): Thickness reacts to HFL and ATR; river tilts and crosses often precede regime handoffs.
Compression Cloud (orange veil): Width scales with TCI − 1; wide cloud + strong |HFL_acc| + squeeze release suggests expansion risk.
Prediction Arc (curved line): Forward geodesic using HFL_vel and HFL_acc blended with memory bias. Read the slope and direction more than the absolute level.
Signal Markers: ▲ bullish, ▼ bearish, printed only when confluence and gap rules are satisfied.
Cognitive Map (heat map):
X (Age): Left = recent memories; right = older regimes.
Y (Flow regime): Top = bearish, middle = neutral, bottom = bullish.
Hue: Green = bullish historical outcome; Red = bearish.
Opacity: Resonance strength (similarity × age fade).
Read: Vertical “hot streaks” = regime persistence; Horizontal “hot streaks” = outcome stability across regimes.
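
A rough sketch of how such an age × flow accumulation could be built; the bin counts, max-age normalization, the assumption that flows are pre-normalized to [−1, 1], and collapsing hue/opacity into one signed value per cell are illustrative simplifications, not the script's rendering code:

import numpy as np

def cognitive_map(sims, ages, flows, outcomes,
                  age_bins: int = 8, flow_bins: int = 5, max_age: int = 500):
    heat = np.zeros((flow_bins, age_bins))
    x = np.clip((ages / max_age * age_bins).astype(int), 0, age_bins - 1)    # recency column
    y = np.clip(((flows + 1) / 2 * flow_bins).astype(int), 0, flow_bins - 1) # flow-regime row
    for xi, yi, s, o in zip(x, y, sims, outcomes):
        heat[yi, xi] += s * np.sign(o)   # sign -> hue (green/red), magnitude -> opacity
    return heat
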
Practical usage and tuning playbook

Step 1 — Context: Observe γ and TCI. If both are high, expect compressed time and faster information flow; prefer adaptive logic.
Step 2 — Instantaneous state: Check HFL and HFL_acc; strong sign‑aligned values indicate directional pressure with curvature.
Step 3 — Historical context: Read Mem Bias and Assurance. Favor setups when memory aligns with flow and assurance is elevated.
Step 4 — Compression mechanics: Monitor Squeeze and Breakout P; releases with high energy and aligned memory are noteworthy.
Step 5 — Execution discipline: Use final_score as a gate, apply anti‑spam min_gap, and rely on your own risk framework (stops beyond pivots, scale judiciously).
Multi‑timeframe: Use higher TF for regime bias (γ, TCI, HFL slope); execute on lower TF for timing (acceleration, squeeze release, score cross).
Performance, stability, and limitations

Heaviness: The script is heavy because the logic is dense (kernel loops, memory arrays, heat mapping, and a full dashboard). Reduce load by lowering mem_size, max_kernel_len, hiding the heat map, or using a faster efficiency setting if available.
Repainting: Real‑time mode updates within the bar; confirmed‑bar mode is provided for static analysis.
Numeric safeguards: Caps, clamping, and safe divisions are applied to maintain stability.
No look‑ahead: Memory outcomes are computed on completed bars; writes can be restricted to barstate.isconfirmed.
No performance promises: Outputs are probabilistic and regime‑dependent. Always use independent validation and risk management.
Development notes (engineering transparency)

Kernel selection: The Lorentzian (Cauchy) kernel is used in both smoothing (with γ‑scaled distance) and feature similarity (per‑dimension Δ). It intentionally over‑weights tails compared to Gaussian models, elevating extreme but relevant analogs.
Harmonic blending: Three horizon views are converted to z‑scores and fused by weights (short/mid/long). This isolates structural flow while remaining adaptable to compression via γ.
KNN with age fade: Parallel arrays form a ring buffer of features/outcomes. Per‑neighbor weights combine feature similarity and exponential age decay; their mass is mem_assurance.
Score mixing: tanh compresses both HFL and normalized memory bias; γ dynamically tunes the blend ratio (α_mem).
Visualization pipeline: The dashboard reads directly from live calculations; the cognitive map accumulates resonance into age×flow cells using the same Lorentz kernel used for similarity—nothing is “painted” ad‑hoc.
Efficiency path: Max kernel spans, optional scan caps, and single‑build tables per bar balance load without altering the math or look.
Compliance notes

This description explains what the script does, how it works mathematically, how components fit together, and how to use it.
It contains no placeholders, no all‑caps shouting, no performance claims, and no marketing promises.
It emphasizes research and learning and is not financial advice.
The script is original and functional as described; each subsystem serves a specific, articulated role in the unified methodology.

Closing thought
“Markets are far more turbulent than economists would have us believe.” — Benoît Mandelbrot
LHF embraces that turbulence by design—curving time when speed rises, weighting the tails where history matters most, and letting memory inform the present without dictating it.

— Dskyz, Trade with insight. Trade with anticipation.
