SMC - Institutional Confidence Oscillator [PhenLabs]
📊 Institutional Confidence Oscillator
Version: PineScript™v6
📌 Description
The Institutional Confidence Oscillator (ICO) revolutionizes market analysis by automatically detecting and evaluating institutional activity at key support and resistance levels using our own in-house detection system. This sophisticated indicator combines volume analysis, volatility measurements, and mathematical confidence algorithms to provide real-time readings of institutional sentiment and zone strength.
Using our advanced thin liquidity detection, the ICO identifies high-volume, narrow-range bars that signal institutional zone formation, then tracks how these zones perform under market pressure. The result is a dual-wave confidence oscillator that shows traders when institutions are actively defending price levels versus when they’re abandoning positions.
The indicator transforms complex institutional behavior patterns into clear, actionable confidence percentiles, helping traders align with smart money movements and avoid common retail trading pitfalls.
🚀 Points of Innovation
Automated thin liquidity zone detection using volume threshold multipliers and zone size filtering
Dual-sided confidence tracking for both support and resistance levels simultaneously
Sigmoid function processing for enhanced mathematical accuracy in confidence calculations
Real-time institutional defense pattern analysis through complete test cycles
Advanced visual smoothing options with multiple algorithmic methods (EMA, SMA, WMA, ALMA)
Integrated momentum indicators and gradient visualization for enhanced signal clarity
🔧 Core Components
Volume Threshold System: Analyzes volume ratios against baseline averages to identify institutional activity spikes
Zone Detection Algorithm: Automatically identifies thin liquidity zones based on customizable volume and size parameters
Confidence Lifecycle Engine: Tracks institutional defense patterns through complete observation windows
Mathematical Processing Core: Uses sigmoid functions to convert raw market data into normalized confidence percentiles
Visual Enhancement Suite: Provides multiple smoothing methods and customizable display options for optimal chart interpretation
🔥 Key Features
Auto-Detection Technology: Automatically scans for institutional zones without manual intervention, saving analysis time
Dual Confidence Tracking: Simultaneously monitors both support and resistance institutional activity for comprehensive market view
Smart Zone Validation: Evaluates zone strength through volume analysis, adverse excursion measurement, and defense success rates
Customizable Parameters: Extensive input options for volume thresholds, observation windows, and visual preferences
Real-Time Updates: Continuously processes market data to provide current institutional confidence readings
Enhanced Visualization: Features gradient fills, momentum indicators, and information panels for clear signal interpretation
🎨 Visualization
Dual Oscillator Lines: Support confidence (cyan) and resistance confidence (red) plotted as percentage values 0-100%
Gradient Fill Areas: Color-coded regions showing confidence dominance and strength levels
Reference Grid Lines: Horizontal markers at 25%, 50%, and 75% levels for easy interpretation
Information Panel: Real-time display of current confidence percentiles with color-coded dominance indicators
Momentum Indicators: Rate of change visualization for confidence trends
Background Highlights: Extreme confidence level alerts when readings exceed 80%
📖 Usage Guidelines
Auto-Detection Settings
Use Auto-Detection
Default: true
Description: Enables automatic thin liquidity zone identification based on volume and size criteria
Volume Threshold Multiplier
Default: 6.0, Range: 1.0+
Description: Controls sensitivity of volume spike detection for zone identification, higher values require more significant volume increases
Volume MA Length
Default: 15, Range: 1+
Description: Period for volume moving average baseline calculation, affects volume spike sensitivity
Max Zone Height %
Default: 0.5%, Range: 0.05%+
Description: Filters out wide price bars, keeping only thin liquidity zones as percentage of current price
Confidence Logic Settings
Test Observation Window
Default: 20 bars, Range: 2+
Description: Number of bars to monitor zone tests for confidence calculation, longer windows provide more stable readings
Clean Break Threshold
Default: 1.5 ATR, Range: 0.1+
Description: ATR multiple required for zone invalidation, higher values make zones more persistent
Visual Settings
Smoothing Method
Default: EMA, Options: SMA/EMA/WMA/ALMA
Description: Algorithm for signal smoothing, EMA responds faster while SMA provides more stability
Smoothing Length
Default: 5, Range: 1-50
Description: Period for smoothing calculation, higher values create smoother lines with more lag
✅ Best Use Cases
Trending market analysis where institutional zones provide reliable support/resistance levels
Breakout confirmation by validating zone strength before position entry
Divergence analysis when confidence shifts between support and resistance levels
Risk management through identification of high-confidence institutional backing
Market structure analysis for understanding institutional sentiment changes
⚠️ Limitations
Performs best in liquid markets with clear institutional participation
May produce false signals during low-volume or holiday trading periods
Requires sufficient price history for accurate confidence calculations
Confidence readings can fluctuate rapidly during high-impact news events
Manual fallback zones may not reflect actual institutional activity
💡 What Makes This Unique
Automated Detection: First Pine Script indicator to automatically identify thin liquidity zones using sophisticated volume analysis
Dual-Sided Analysis: Simultaneously tracks institutional confidence for both support and resistance levels
Mathematical Precision: Uses sigmoid functions for enhanced accuracy in confidence percentage calculations
Real-Time Processing: Continuously evaluates institutional defense patterns as market conditions change
Visual Innovation: Advanced smoothing options and gradient visualization for superior chart clarity
🔬 How It Works
1. Zone Identification Process:
Scans for high-volume bars that exceed the volume threshold multiplier
Filters bars by maximum zone height percentage to identify thin liquidity conditions
Stores qualified zones with proximity threshold filtering for relevance
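A minimal Pine Script sketch of this detection step. The parameter values mirror the defaults listed in the usage guidelines above; the variable names and the bar-range test are illustrative, not taken from the indicator's source:

//@version=6
indicator("Thin Liquidity Zone Sketch", overlay=true)
volMaLen   = 15   // Volume MA Length default
volMult    = 6.0  // Volume Threshold Multiplier default
maxZonePct = 0.5  // Max Zone Height % default
volBaseline = ta.sma(volume, volMaLen)
volumeSpike = volume > volBaseline * volMult        // high-volume test
barRangePct = (high - low) / close * 100.0          // bar height as % of price
thinBar     = barRangePct <= maxZonePct             // thin liquidity test
bgcolor(volumeSpike and thinBar ? color.new(color.aqua, 80) : na)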
2. Confidence Calculation Process:
Monitors price interaction with identified zones during observation windows
Measures volume ratios and adverse excursions during zone tests
Applies sigmoid function processing to normalize raw data into confidence percentiles
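For illustration, a hedged sketch of how a logistic sigmoid can map a raw score onto a 0-100 percentile. The raw-score definition and the steepness constant here are assumptions, not the indicator's actual internals:

//@version=6
indicator("Sigmoid Confidence Sketch")
// Hypothetical raw score: volume during the test relative to its baseline
rawScore = volume / ta.sma(volume, 15) - 1.0
// Logistic sigmoid squashes any real-valued score into (0, 1)
sigmoid(float x, float steepness) =>
    1.0 / (1.0 + math.exp(-steepness * x))
confidence = 100.0 * sigmoid(rawScore, 2.0)  // scale to 0-100%
plot(confidence, "Confidence %", color.aqua)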
3. Real-Time Analysis Process:
Continuously updates confidence readings as new market data becomes available
Tracks institutional defense success rates and zone validation patterns
Provides visual and numerical feedback through the oscillator display
💡 Note:
The ICO works best when combined with traditional technical analysis and proper risk management. Higher confidence readings indicate stronger institutional backing but should be confirmed with price action and volume analysis. Consider using multiple timeframes for comprehensive market structure understanding.
Categorical Market Morphisms (CMM) - Where Abstract Algebra Transcends Reality
A Revolutionary Application of Category Theory and Homotopy Type Theory to Financial Markets
Bridging Pure Mathematics and Market Analysis Through Functorial Dynamics
Theoretical Foundation: The Mathematical Revolution
Traditional technical analysis operates on Euclidean geometry and classical statistics. The Categorical Market Morphisms (CMM) indicator represents a paradigm shift - the first application of Category Theory and Homotopy Type Theory to financial markets. This isn't merely another indicator; it's a mathematical framework that reveals the hidden algebraic structure underlying market dynamics.
Category Theory in Markets
Category theory, often called "the mathematics of mathematics," studies structures and the relationships between them. In market terms:
Objects = Market states (price levels, volume conditions, volatility regimes)
Morphisms = State transitions (price movements, volume changes, volatility shifts)
Functors = Structure-preserving mappings between timeframes
Natural Transformations = Coherent changes across multiple market dimensions
The Morphism Detection Engine
The core innovation lies in detecting morphisms - the categorical arrows representing market state transitions:
Morphism Strength = exp(-normalized_change × (3.0 / sensitivity))
Threshold = 0.3 - (sensitivity - 1.0) × 0.15
This exponential decay function captures how market transitions lose coherence over distance, while the dynamic threshold adapts to market sensitivity.
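A runnable sketch of these two formulas. The ATR-based normalization of the bar-to-bar change is an assumption; the published formulas only name the terms:

//@version=6
indicator("Morphism Strength Sketch")
sensitivity = input.float(1.0, "Sensitivity", minval=0.1)
// Normalized change: absolute one-bar move relative to ATR (assumed normalization)
normChange = math.abs(ta.change(close)) / ta.atr(14)
morphismStrength = math.exp(-normChange * (3.0 / sensitivity))
threshold = 0.3 - (sensitivity - 1.0) * 0.15
plot(morphismStrength, "Morphism strength", color.teal)
plot(threshold, "Detection threshold", color.gray)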
Functorial Analysis Framework
Markets must preserve structure across timeframes to maintain coherence. Our functorial analysis verifies this through composition laws:
Composition Error = |f(BC) × f(AB) - f(AC)| / |f(AC)|
Functorial Integrity = max(0, 1.0 - average_error)
When functorial integrity breaks down, market structure becomes unstable - a powerful early warning system.
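A sketch of the composition-law check over three price states (A = 10 bars ago, B = 5 bars ago, C = now). The EMA smoothing is an added assumption so the law only approximately holds; raw price ratios would compose exactly and the error would always be zero:

//@version=6
indicator("Functorial Integrity Sketch")
fAB = ta.ema(close[5] / close[10], 3)  // smoothed morphism A -> B
fBC = ta.ema(close / close[5], 3)      // smoothed morphism B -> C
fAC = ta.ema(close / close[10], 3)     // smoothed morphism A -> C
compositionError = math.abs(fBC * fAB - fAC) / math.abs(fAC)
functorialIntegrity = math.max(0.0, 1.0 - compositionError)
plot(functorialIntegrity, "Functorial integrity", color.green)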
Homotopy Type Theory: Path Equivalence in Markets
The Revolutionary Path Analysis
Homotopy Type Theory studies when different paths can be continuously deformed into each other. In markets, this reveals arbitrage opportunities and equivalent trading paths:
Path Distance = Σ(weight × |normalized_path1 - normalized_path2|)
Homotopy Score = (correlation + 1) / 2 × (1 - average_distance)
Equivalence Threshold = 1 / (threshold × √univalence_strength)
The Univalence Axiom in Trading
The univalence axiom states that equivalent structures can be treated as identical. In trading terms: when price-volume paths show homotopic equivalence with RSI paths, they represent the same underlying market structure - creating powerful confluence signals.
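A minimal sketch of a price-vs-RSI path comparison in this spirit. The min-max normalization and the single-weight distance term are simplifying assumptions:

//@version=6
indicator("Homotopy Score Sketch")
len = 14
// Min-max normalize a series over the window so both paths share a 0-1 scale
norm(float src, simple int l) =>
    lo = ta.lowest(src, l)
    hi = ta.highest(src, l)
    hi == lo ? 0.5 : (src - lo) / (hi - lo)
pricePath = norm(close, len)
rsiPath   = norm(ta.rsi(close, len), len)
avgDistance = ta.sma(math.abs(pricePath - rsiPath), len)  // single-weight case
corr = ta.correlation(close, ta.rsi(close, len), len)
homotopyScore = (corr + 1) / 2 * (1 - avgDistance)
plot(homotopyScore, "Homotopy score", color.blue)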
Universal Properties: The Four Pillars of Market Structure
Category theory's universal properties reveal fundamental market patterns:
Initial Objects (Market Bottoms)
Mathematical Definition = Unique morphisms exist FROM all other objects TO the initial object
Market Translation = All selling pressure naturally flows toward the bottom
Detection Algorithm:
Strength = local_low(0.3) + oversold(0.2) + volume_surge(0.2) + momentum_reversal(0.2) + morphism_flow(0.1)
Signal = strength > 0.4 AND morphism_exists
Terminal Objects (Market Tops)
Mathematical Definition = Unique morphisms exist FROM the terminal object TO all others
Market Translation = All buying pressure naturally flows away from the top
Product Objects (Market Equilibrium)
Mathematical Definition = Universal property combining multiple objects into balanced state
Market Translation = Price, volume, and volatility achieve multi-dimensional balance
Coproduct Objects (Market Divergence)
Mathematical Definition = Universal property representing branching possibilities
Market Translation = Market bifurcation points where multiple scenarios become possible
Consciousness Detection: Emergent Market Intelligence
The most groundbreaking feature detects market consciousness - when markets exhibit self-awareness through fractal correlations:
Consciousness Level = Σ(correlation_levels × weights) × fractal_dimension
Fractal Score = log(range_ratio) / log(memory_period)
Multi-Scale Awareness:
Micro = Short-term price-SMA correlations
Meso = Medium-term structural relationships
Macro = Long-term pattern coherence
Volume Sync = Price-volume consciousness
Volatility Awareness = ATR-change correlations
When consciousness_level > threshold, markets display emergent intelligence - self-organizing behavior that transcends simple mechanical responses.
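A hedged sketch of the fractal-score component. The range-ratio definition below is an assumption, since the published formula only names the terms:

//@version=6
indicator("Fractal Score Sketch")
memoryPeriod = 55  // Fibonacci-optimized categorical memory
// Assumed range ratio: widest high-low span over the window vs the latest bar's span
rangeRatio = (ta.highest(high, memoryPeriod) - ta.lowest(low, memoryPeriod)) / math.max(high - low, syminfo.mintick)
fractalScore = math.log(rangeRatio) / math.log(memoryPeriod)
plot(fractalScore, "Fractal score", color.purple)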
Advanced Input System: Precision Configuration
Categorical Universe Parameters
Universe Level (Type_n) = Controls categorical complexity depth
Type 1 = Price only (pure price action)
Type 2 = Price + Volume (market participation)
Type 3 = + Volatility (risk dynamics)
Type 4 = + Momentum (directional force)
Type 5 = + RSI (momentum oscillation)
Sector Optimization:
Crypto = 4-5 (high complexity, volume crucial)
Stocks = 3-4 (moderate complexity, fundamental-driven)
Forex = 2-3 (low complexity, macro-driven)
Morphism Detection Threshold = Golden ratio optimized (φ⁻¹ ≈ 0.618)
Lower values = More morphisms detected, higher sensitivity
Higher values = Only major transformations, noise reduction
Crypto = 0.382-0.618 (high volatility accommodation)
Stocks = 0.618-1.0 (balanced detection)
Forex = 1.0-1.618 (macro-focused)
Functoriality Tolerance = φ⁻⁴ ≈ 0.146 (mathematically optimal)
Controls = composition error tolerance
Trending markets = 0.1-0.2 (strict structure preservation)
Ranging markets = 0.2-0.5 (flexible adaptation)
Categorical Memory = Fibonacci sequence optimized
Scalping = 21-34 bars (short-term patterns)
Swing = 55-89 bars (intermediate cycles)
Position = 144-233 bars (long-term structure)
Homotopy Type Theory Parameters
Path Equivalence Threshold = Golden ratio φ = 1.618
Volatile markets = 2.0-2.618 (accommodate noise)
Normal conditions = 1.618 (balanced)
Stable markets = 0.786-1.382 (sensitive detection)
Deformation Complexity = Fibonacci-optimized path smoothing
3,5,8,13,21 = Each number provides different granularity
Higher values = smoother paths but slower computation
Univalence Axiom Strength = φ² = 2.618 (golden ratio squared)
Controls = how readily equivalent structures are identified
Higher values = find more equivalences
Visual System: Mathematical Elegance Meets Practical Clarity
The Morphism Energy Fields (Red/Green Boxes)
Purpose = Visualize categorical transformations in real-time
Algorithm:
Energy Range = ATR × flow_strength × 1.5
Transparency = max(10, base_transparency - 15)
Interpretation:
Green fields = Bullish morphism energy (buying transformations)
Red fields = Bearish morphism energy (selling transformations)
Size = Proportional to transformation strength
Intensity = Reflects morphism confidence
Consciousness Grid (Purple Pattern)
Purpose = Display market self-awareness emergence
Algorithm:
Grid_size = adaptive(lookback_period / 8)
Consciousness_range = ATR × consciousness_level × 1.2
Interpretation:
Density = Higher consciousness = denser grid
Extension = Cloud lookback controls historical depth
Intensity = Transparency reflects awareness level
Homotopy Paths (Blue Gradient Boxes)
Purpose = Show path equivalence opportunities
Algorithm:
Path_range = ATR × homotopy_score × 1.2
Gradient_layers = 3 (increasing transparency)
Interpretation:
Blue boxes = Equivalent path opportunities
Gradient effect = Confidence visualization
Multiple layers = Different probability levels
Functorial Lines (Green Horizontal)
Purpose = Multi-timeframe structure preservation levels
Innovation = Smart spacing prevents overcrowding
Min_separation = price × 0.001 (0.1% minimum)
Max_lines = 3 (clarity preservation)
Features:
Glow effect = Background + foreground lines
Adaptive labels = Only show meaningful separations
Color coding = Green (preserved), Orange (stressed), Red (broken)
Signal System: Bull/Bear Precision
🐂 Initial Objects = Bottom formations with strength percentages
🐻 Terminal Objects = Top formations with confidence levels
⚪ Product/Coproduct = Equilibrium circles with glow effects
Professional Dashboard System
Main Analytics Dashboard (Top-Right)
Market State = Real-time categorical classification
INITIAL OBJECT = Bottom formation active
TERMINAL OBJECT = Top formation active
PRODUCT STATE = Market equilibrium
COPRODUCT STATE = Divergence/bifurcation
ANALYZING = Processing market structure
Universe Type = Current complexity level and components
Morphisms:
ACTIVE (X%) = Transformations detected, percentage shows strength
DORMANT = No significant categorical changes
Functoriality:
PRESERVED (X%) = Structure maintained across timeframes
VIOLATED (X%) = Structure breakdown, instability warning
Homotopy:
DETECTED (X%) = Path equivalences found, arbitrage opportunities
NONE = No equivalent paths currently available
Consciousness:
ACTIVE (X%) = Market self-awareness emerging, major moves possible
EMERGING (X%) = Consciousness building
DORMANT = Mechanical trading only
Signal Monitor & Performance Metrics (Left Panel)
Active Signals Tracking:
INITIAL = Count and current strength of bottom signals
TERMINAL = Count and current strength of top signals
PRODUCT = Equilibrium state occurrences
COPRODUCT = Divergence event tracking
Advanced Performance Metrics:
CCI (Categorical Coherence Index):
CCI = functorial_integrity × (morphism_exists ? 1.0 : 0.5)
STRONG (>0.7) = High structural coherence
MODERATE (0.4-0.7) = Adequate coherence
WEAK (<0.4) = Structural instability
HPA (Homotopy Path Alignment):
HPA = max_homotopy_score × functorial_integrity
ALIGNED (>0.6) = Strong path equivalences
PARTIAL (0.3-0.6) = Some equivalences
WEAK (<0.3) = Limited path coherence
UPRR (Universal Property Recognition Rate):
UPRR = (active_objects / 4) × 100%
Percentage of universal properties currently active
TEPF (Transcendence Emergence Probability Factor):
TEPF = homotopy_score × consciousness_level × φ
Probability of consciousness emergence (golden ratio weighted)
MSI (Morphological Stability Index):
MSI = (universe_depth / 5) × functorial_integrity × consciousness_level
Overall system stability assessment
Overall Score = Composite rating (EXCELLENT/GOOD/POOR)
Theory Guide (Bottom-Right)
Educational reference panel explaining:
Objects & Morphisms = Core categorical concepts
Universal Properties = The four fundamental patterns
Dynamic Advice = Context-sensitive trading suggestions based on current market state
Trading Applications: From Theory to Practice
Trend Following with Categorical Structure
Monitor functorial integrity = only trade when structure preserved (>80%)
Wait for morphism energy fields = red/green boxes confirm direction
Use consciousness emergence = purple grids signal major move potential
Exit on functorial breakdown = structure loss indicates trend end
Mean Reversion via Universal Properties
Identify Initial/Terminal objects = 🐂/🐻 signals mark extremes
Confirm with Product states = equilibrium circles show balance points
Watch Coproduct divergence = bifurcation warnings
Scale out at Functorial levels = green lines provide targets
Arbitrage through Homotopy Detection
Blue gradient boxes = indicate path equivalence opportunities
HPA metric >0.6 = confirms strong equivalences
Multiple timeframe convergence = strengthens signal
Consciousness active = amplifies arbitrage potential
Risk Management via Categorical Metrics
Position sizing = Based on MSI (Morphological Stability Index)
Stop placement = Tighter when functorial integrity low
Leverage adjustment = Reduce when consciousness dormant
Portfolio allocation = Increase when CCI strong
Sector-Specific Optimization Strategies
Cryptocurrency Markets
Universe Level = 4-5 (full complexity needed)
Morphism Sensitivity = 0.382-0.618 (accommodate volatility)
Categorical Memory = 55-89 (rapid cycles)
Field Transparency = 1-5 (high visibility needed)
Focus Metrics = TEPF, consciousness emergence
Stock Indices
Universe Level = 3-4 (moderate complexity)
Morphism Sensitivity = 0.618-1.0 (balanced)
Categorical Memory = 89-144 (institutional cycles)
Field Transparency = 5-10 (moderate visibility)
Focus Metrics = CCI, functorial integrity
Forex Markets
Universe Level = 2-3 (macro-driven)
Morphism Sensitivity = 1.0-1.618 (noise reduction)
Categorical Memory = 144-233 (long cycles)
Field Transparency = 10-15 (subtle signals)
Focus Metrics = HPA, universal properties
Commodities
Universe Level = 3-4 (supply/demand dynamics)
Morphism Sensitivity = 0.618-1.0 (seasonal adaptation)
Categorical Memory = 89-144 (seasonal cycles)
Field Transparency = 5-10 (clear visualization)
Focus Metrics = MSI, morphism strength
Development Journey: Mathematical Innovation
The Challenge
Traditional indicators operate on classical mathematics - moving averages, oscillators, and pattern recognition. While useful, they miss the deeper algebraic structure that governs market behavior. Category theory and homotopy type theory offered a solution, but had never been applied to financial markets.
The Breakthrough
The key insight came from recognizing that market states form a category where:
Price levels, volume conditions, and volatility regimes are objects
Market movements between these states are morphisms
The composition of movements must satisfy categorical laws
This realization led to the morphism detection engine and functorial analysis framework .
Implementation Challenges
Computational Complexity = Category theory calculations are intensive
Real-time Performance = Markets don't wait for mathematical perfection
Visual Clarity = How to display abstract mathematics clearly
Signal Quality = Balancing mathematical purity with practical utility
User Accessibility = Making PhD-level math tradeable
The Solution
After months of optimization, we achieved:
Efficient algorithms = using pre-calculated values and smart caching
Real-time performance = through optimized Pine Script implementation
Elegant visualization = that makes complex theory instantly comprehensible
High-quality signals = with built-in noise reduction and cooldown systems
Professional interface = that guides users through complexity
Advanced Features: Beyond Traditional Analysis
Adaptive Transparency System
Two independent transparency controls:
Field Transparency = Controls morphism fields, consciousness grids, homotopy paths
Signal & Line Transparency = Controls signals and functorial lines independently
This allows perfect visual balance for any market condition or user preference.
Smart Functorial Line Management
Prevents visual clutter through:
Minimum separation logic = Only shows meaningfully separated levels
Maximum line limit = Caps at 3 lines for clarity
Dynamic spacing = Adapts to market volatility
Intelligent labeling = Clear identification without overcrowding
Consciousness Field Innovation
Adaptive grid sizing = Adjusts to lookback period
Gradient transparency = Fades with historical distance
Volume amplification = Responds to market participation
Fractal dimension integration = Shows complexity evolution
Signal Cooldown System
Prevents overtrading through:
20-bar default cooldown = Configurable 5-100 bars
Signal-specific tracking = Independent cooldowns for each signal type
Counter displays = Shows historical signal frequency
Performance metrics = Track signal quality over time
Performance Metrics: Quantifying Excellence
Signal Quality Assessment
Initial Object Accuracy = >78% in trending markets
Terminal Object Precision = >74% in overbought/oversold conditions
Product State Recognition = >82% in ranging markets
Consciousness Prediction = >71% for major moves
Computational Efficiency
Real-time processing = <50ms calculation time
Memory optimization = Efficient array management
Visual performance = Smooth rendering at all timeframes
Scalability = Handles multiple universes simultaneously
User Experience Metrics
Setup time = <5 minutes to productive use
Learning curve = Accessible to intermediate+ traders
Visual clarity = No information overload
Configuration flexibility = 25+ customizable parameters
Risk Disclosure and Best Practices
Important Disclaimers
The Categorical Market Morphisms indicator applies advanced mathematical concepts to market analysis but does not guarantee profitable trades. Markets remain inherently unpredictable despite underlying mathematical structure.
Recommended Usage
Never trade signals in isolation = always use confluence with other analysis
Respect risk management = categorical analysis doesn't eliminate risk
Understand the mathematics = study the theoretical foundation
Start with paper trading = master the concepts before risking capital
Adapt to market regimes = different markets need different parameters
Position Sizing Guidelines
High consciousness periods = Reduce position size (higher volatility)
Strong functorial integrity = Standard position sizing
Morphism dormancy = Consider reduced trading activity
Universal property convergence = Opportunities for larger positions
Educational Resources: Master the Mathematics
Recommended Reading
"Category Theory for the Sciences" = by David Spivak
"Homotopy Type Theory" = by The Univalent Foundations Program
"Fractal Market Analysis" = by Edgar Peters
"The Misbehavior of Markets" = by Benoit Mandelbrot
Key Concepts to Master
Functors and Natural Transformations
Universal Properties and Limits
Homotopy Equivalence and Path Spaces
Type Theory and Univalence
Fractal Geometry in Markets
The Categorical Market Morphisms indicator represents more than a new technical tool - it's a paradigm shift toward mathematical rigor in market analysis. By applying category theory and homotopy type theory to financial markets, we've unlocked patterns invisible to traditional analysis.
This isn't just about better signals or prettier charts. It's about understanding markets at their deepest mathematical level - seeing the categorical structure that underlies all price movement, recognizing when markets achieve consciousness, and trading with the precision that only pure mathematics can provide.
Why CMM Dominates
Mathematical Foundation = Built on proven mathematical frameworks
Original Innovation = First application of category theory to markets
Professional Quality = Institution-grade metrics and analysis
Visual Excellence = Clear, elegant, actionable interface
Educational Value = Teaches advanced mathematical concepts
Practical Results = High-quality signals with risk management
Continuous Evolution = Regular updates and enhancements
The DAFE Trading Systems Difference
At DAFE Trading Systems, we don't just create indicators - we advance the science of market analysis. Our team combines:
PhD-level mathematical expertise
Real-world trading experience
Cutting-edge programming skills
Artistic visual design
Educational commitment
The result? Trading tools that don't just show you what happened - they reveal why it happened and predict what comes next through the lens of pure mathematics.
"In mathematics you don't understand things. You just get used to them." - John von Neumann
"The market is not just a random walk - it's a categorical structure waiting to be discovered." - DAFE Trading Systems
Trade with Mathematical Precision. Trade with Categorical Market Morphisms.
Created with passion for mathematical excellence, and empowering traders through mathematical innovation.
— Dskyz, Trade with insight. Trade with anticipation.
Goertzel Adaptive JMA T3
Hello Fellas,
The Goertzel Adaptive JMA T3 is a powerful indicator that combines my own Goertzel adaptive-length algorithm with the Jurik and T3 moving averages. The primary intention of the indicator is to demonstrate the new adaptive-length algorithm by applying it to bleeding-edge MAs.
It is usable like any moving average, and the new Goertzel adaptive-length algorithm can be used to make your own indicators Goertzel-adaptive.
Used Adaptive Length Algorithms
Normalized Goertzel Power: This uses the normalized power of the Goertzel algorithm to compute an adaptive length without the special operations, like detrending, Ehlers uses for his DFT adaptive length.
Ehlers Mod: This uses the Goertzel algorithm instead of the DFT, originally used by Ehlers, to compute a modified version of his original approach, which sticks as close as possible to the original approach.
Scoring System
The scoring system determines if bars are red or green and collects them.
Then, it goes through all collected red and green bars, checking how big they are and whether they are above or below the selected MA. A bar counts as positive when a green bar is below the MA or a red bar is above it.
Then, it accumulates the sizes of all positive green bars and all positive red bars. The same happens for the negative green and red bars.
Finally, it calculates the score as ((positiveGreenBars + positiveRedBars) / (negativeGreenBars + negativeRedBars)) * 100, presented on a 0-100 scale.
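A compact sketch of this scoring loop in Pine. Cumulative sums stand in for the bar collection, and the EMA is a placeholder for whichever MA is selected:

//@version=6
indicator("Bar Scoring Sketch")
ma = ta.ema(close, 50)  // placeholder for the selected MA
barSize = math.abs(close - open)
isGreen = close > open
// Positive: green bar below the MA, or red bar above it
isPositive = (isGreen and close < ma) or (not isGreen and close > ma)
var float positiveSum = 0.0
var float negativeSum = 0.0
positiveSum += isPositive ? barSize : 0.0
negativeSum += isPositive ? 0.0 : barSize
score = negativeSum > 0 ? positiveSum / negativeSum * 100 : na
plot(score, "Score", color.orange)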
Signals
Is the price above MA? -> bullish market
Is the price below MA? -> bearish market
Usage
Adjust the settings to reach the highest score, and enjoy an outstanding adaptive MA.
It should be usable on all timeframes. It is recommended to use the indicator on the timeframe where you can get the highest score.
Now follows some background knowledge for people who don't know the concepts used here.
T3
The T3 moving average, short for "Tim Tillson's Triple Exponential Moving Average," is a technical indicator used in financial markets and technical analysis to smooth out price data over a specific period. It was developed by Tim Tillson, a software project manager at Hewlett-Packard, with expertise in Mathematics and Computer Science.
The T3 moving average is an enhancement of the traditional Exponential Moving Average (EMA) and aims to overcome some of its limitations. The primary goal of the T3 moving average is to provide a smoother representation of price trends while minimizing lag compared to other moving averages like Simple Moving Average (SMA), Weighted Moving Average (WMA), or EMA.
To compute the T3 moving average, it involves a triple smoothing process using exponential moving averages. Here's how it works:
Calculate the first exponential moving average (EMA1) of the price data over a specific period 'n.'
Calculate the second exponential moving average (EMA2) of EMA1 using the same period 'n.'
Calculate the third exponential moving average (EMA3) of EMA2 using the same period 'n.'
The formula for this simplified triple-smoothed combination is:
T3 = 3 * (EMA1) - 3 * (EMA2) + (EMA3)
(Strictly speaking, this combination is the TEMA form; Tillson's full T3 additionally weights the terms with a volume factor, typically 0.7.) By applying this triple smoothing process, the T3 moving average is intended to offer reduced noise and improved responsiveness to price trends. It achieves this by layering multiple passes of exponential smoothing, resulting in a more accurate representation of the underlying price action.
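A runnable sketch of the triple smoothing described above. This implements the simplified combination quoted; the full T3's volume-factor weighting is omitted:

//@version=6
indicator("Triple-Smoothed EMA Sketch", overlay=true)
n = input.int(14, "Length")
ema1 = ta.ema(close, n)
ema2 = ta.ema(ema1, n)
ema3 = ta.ema(ema2, n)
t3Simplified = 3 * ema1 - 3 * ema2 + ema3  // simplified (TEMA-form) combination
plot(t3Simplified, "T3 (simplified)", color.yellow, 2)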
JMA
The Jurik Moving Average (JMA) is a technical indicator used in trading to predict price direction. Developed by Mark Jurik, it’s a type of weighted moving average that gives more weight to recent market data rather than past historical data.
JMA is known for its superior noise elimination. It’s a causal, nonlinear, and adaptive filter, meaning it responds to changes in price action without introducing unnecessary lag. This makes JMA a world-class moving average that tracks and smooths price charts or any market-related time series with surprising agility.
In comparison to other moving averages, such as the Exponential Moving Average (EMA), JMA is known to track fast price movement more accurately. This allows traders to apply their strategies to a more accurate picture of price action.
Goertzel Algorithm
The Goertzel algorithm is a technique in digital signal processing (DSP) for efficient evaluation of individual terms of the Discrete Fourier Transform (DFT). It's particularly useful when you need to compute a small number of selected frequency components. Unlike direct DFT calculations, the Goertzel algorithm applies a single real-valued coefficient at each iteration, using real-valued arithmetic for real-valued input sequences. This makes it more numerically efficient when computing a small number of selected frequency components.
Discrete Fourier Transform
The Discrete Fourier Transform (DFT) is a mathematical technique used in signal processing to convert a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The DFT provides a frequency domain representation of the original input sequence.
Usage of DFT/Goertzel In Adaptive Length Algorithms
Adaptive length algorithms are automated trading systems that can dynamically adjust their parameters in response to real-time market data. This adaptability enables them to optimize their trading strategies as market conditions fluctuate. Both the Goertzel algorithm and DFT can be used in these algorithms to analyze market data and detect cycles or patterns, which can then be used to adjust the parameters of the trading strategy.
The Goertzel algorithm is more efficient than the DFT when you need to compute a small number of selected frequency components. However, for covering a full spectrum, the Goertzel algorithm has a higher order of complexity than fast Fourier transform (FFT) algorithms.
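To make the mechanism concrete, here is a hedged sketch of a single-bin Goertzel power computation in Pine. The window size and bin index are arbitrary examples, and this is not the indicator's own code:

//@version=6
indicator("Goertzel Bin Power Sketch")
// Goertzel recursion: power of frequency bin k over the last N samples
goertzelPower(float src, simple int N, simple int k) =>
    float s1 = 0.0
    float s2 = 0.0
    coeff = 2.0 * math.cos(2.0 * math.pi * k / N)
    for i = 0 to N - 1
        s0 = src[N - 1 - i] + coeff * s1 - s2  // process oldest sample first
        s2 := s1
        s1 := s0
    s1 * s1 + s2 * s2 - coeff * s1 * s2        // squared magnitude of bin k
plot(goertzelPower(close, 64, 4), "Bin 4 power")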
I hope this can help you somehow.
Thanks for reading, and keep it up.
Best regards,
simwai
---
Credits to:
@ClassicScott
@yatrader2
@cheatcountry
@loxx
Normalized, Variety, Fast Fourier Transform Explorer [Loxx]
Normalized, Variety, Fast Fourier Transform Explorer demonstrates Real, Cosine, and Sine Fast Fourier Transform algorithms. This indicator can be used as a rule of thumb but shouldn't be used in trading.
What is the Discrete Fourier Transform?
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous (and periodic), and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
What is the Complex Fast Fourier Transform?
The complex Fast Fourier Transform algorithm transforms N real or complex numbers into another N complex numbers. The complex FFT transforms a real or complex signal x in the time domain into a complex two-sided spectrum X in the frequency domain. You must remember that zero frequency corresponds to n = 0, positive frequencies 0 < f < f_c correspond to values 1 ≤ n ≤ N/2 − 1, while negative frequencies −f_c < f < 0 correspond to N/2 + 1 ≤ n ≤ N − 1. The value n = N/2 corresponds to both f = f_c and f = −f_c. f_c is the critical or Nyquist frequency with f_c = 1/(2*T), or half the sampling frequency. The first harmonic X corresponds to the frequency 1/(N*T).
The complex FFT requires the list of values (resolution, or N) to be a power of 2. If the input size is not a power of 2, the input data will be zero-padded up to the next power of 2.
What is Real-Fast Fourier Transform?
Has conditions similar to the complex Fast Fourier Transform value, except that the input data must be purely real. If the time series data has the basic type complex64, only the real parts of the complex numbers are used for the calculation. The imaginary parts are silently discarded.
In many applications, the input data for the DFT are purely real, in which case the outputs satisfy the symmetry
X(N−k) = X*(k), where * denotes complex conjugation,
and efficient FFT algorithms have been designed for this situation (see e.g. Sorensen, 1987). One approach consists of taking an ordinary algorithm (e.g. Cooley–Tukey) and removing the redundant parts of the computation, saving roughly a factor of two in time and memory. Alternatively, it is possible to express an even-length real-input DFT as a complex DFT of half the length (whose real and imaginary parts are the even/odd elements of the original real data), followed by O(N) post-processing operations.
It was once believed that real-input DFTs could be more efficiently computed by means of the discrete Hartley transform (DHT), but it was subsequently argued that a specialized real-input DFT algorithm (FFT) can typically be found that requires fewer operations than the corresponding DHT algorithm (FHT) for the same number of inputs. Bruun's algorithm (above) is another method that was initially proposed to take advantage of real inputs, but it has not proved popular.
There are further FFT specializations for the cases of real data that have even/odd symmetry, in which case one can gain another factor of roughly two in time and memory and the DFT becomes the discrete cosine/sine transform(s) (DCT/DST). Instead of directly modifying an FFT algorithm for these cases, DCTs/DSTs can also be computed via FFTs of real data combined with O(N) pre- and post-processing.
What is the Discrete Cosine Transform?
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images (such as JPEG and HEIF, where small high-frequency components can be discarded), digital video (such as MPEG and H.26x), digital audio (such as Dolby Digital, MP3 and AAC), digital television (such as SDTV, HDTV and VOD), digital radio (such as AAC+ and DAB+), and speech coding (such as AAC-LD, Siren and Opus). DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.
The use of cosine rather than sine functions is critical for compression, since it turns out (as described below) that fewer cosine functions are needed to approximate a typical signal, whereas for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are generally related to Fourier Series coefficients of a periodically and symmetrically extended sequence whereas DFTs are related to Fourier Series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common.
The most common variant of the discrete cosine transform is the type-II DCT, which is often called simply "the DCT". This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. Multidimensional DCTs (MD DCTs) are developed to extend the concept of DCT to MD signals. There are several algorithms to compute MD DCT. A variety of fast algorithms have been developed to reduce the computational complexity of implementing DCT. One of these is the integer DCT (IntDCT), an integer approximation of the standard DCT, used in several ISO/IEC and ITU-T international standards.
What is the Discrete Sine Transform?
In mathematics, the discrete sine transform (DST) is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using a purely real matrix. It is equivalent to the imaginary parts of a DFT of roughly twice the length, operating on real data with odd symmetry (since the Fourier transform of a real and odd function is imaginary and odd), where in some variants the input and/or output data are shifted by half a sample.
A family of transforms composed of sine and sine hyperbolic functions exists. These transforms are made based on the natural vibration of thin square plates with different boundary conditions.
The DST is related to the discrete cosine transform (DCT), which is equivalent to a DFT of real and even functions. See the DCT article for a general discussion of how the boundary conditions relate the various DCT and DST types. Generally, the DST is derived from the DCT by replacing the Neumann condition at x=0 with a Dirichlet condition. Both the DCT and the DST were described by Nasir Ahmed, T. Natarajan, and K.R. Rao in 1974. The type-I DST (DST-I) was later described by Anil K. Jain in 1976, and the type-II DST (DST-II) was then described by H.B. Kekra and J.K. Solanka in 1978.
Notable settings
windowper = period for calculation, restricted to powers of 2: 16, 32, 64, 128, 256, 512, 1024, 2048. The reason for this is that the FFT is an algorithm that computes the DFT (Discrete Fourier Transform) in a fast way, generally in O(N·log2(N)) instead of O(N²). To achieve this, the input matrix has to be a power of 2, though many FFT algorithms can handle any input size by zero-padding the matrix (see the sketch after this list). For our purposes here, we stick to powers of 2 to keep this fast and neat. Read more about this here: Cooley–Tukey FFT algorithm
SS = smoothing count. This smoothing happens after the first regular FCT pass and zeroes out frequencies above the SS count in the previously calculated values. The lower this number, the smoother the output; it works opposite to other smoothing periods.
Fmin1 = zeroes out frequencies not passing this test for min value
Fmax1 = zeroes out frequencies not passing this test for max value
barsback = moves the window backward
Inverse = whether or not you wish to invert the FFT after first pass calculation
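The power-of-2 padding mentioned for windowper can be sketched with a generic helper like the one below. This is an illustrative utility, not part of the published script:

//@version=6
indicator("Next Power of 2 Sketch")
// Round an arbitrary input size up to the next power of 2 (the zero-padding target)
nextPow2(int n) =>
    int(math.pow(2, math.ceil(math.log(n) / math.log(2))))
plot(nextPow2(100), "Padded size")  // 100 -> 128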
Related indicators
Real-Fast Fourier Transform of Price Oscillator
STD-Stepped Fast Cosine Transform Moving Average
Real-Fast Fourier Transform of Price w/ Linear Regression
Variety RSI of Fast Discrete Cosine Transform
Additional reading
A Fast Computational Algorithm for the Discrete Cosine Transform by Chen et al.
Practical Fast 1-D DCT Algorithms With 11 Multiplications by Loeffler et al.
Cooley–Tukey FFT algorithm
Ahmed, Nasir (January 1991). "How I Came Up With the Discrete Cosine Transform". Digital Signal Processing. 1 (1): 4–5. doi:10.1016/1051-2004(91)90086-Z.
DCT-History - How I Came Up With The Discrete Cosine Transform
Comparative Analysis for Discrete Sine Transform as a suitable method for noise estimation
Helme-Nikias Weighted Burg AR-SE Extra. of Price [Loxx]
Helme-Nikias Weighted Burg AR-SE Extra. of Price is an indicator that uses an autoregressive spectral estimation method called the Weighted Burg Algorithm, but unlike the usual WB algo, this one uses Helme-Nikias weighting. This method is commonly used in speech modeling and speech prediction engines. This is a linear method of forecasting data. You'll notice that this method uses a different weighting calculation vs the Weighted Burg method. The new weighting is the following:
w = math.pow(array.get(x, i - 1), 2), the squared lag of the source parameter
and
w += math.pow(array.get(x, i), 2), the sum of the squared source parameter
This takes the place of the rectangular, Hamming, and parabolic weighting used in the Weighted Burg method.
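Assembling the two snippets above into one helper (a sketch; the array x is assumed to hold the modeled source values):

//@version=6
indicator("Helme-Nikias Weight Sketch")
// Data-adaptive weight for index i: squared lagged sample plus squared current sample
hnWeight(x, i) =>
    w = math.pow(array.get(x, i - 1), 2)
    w += math.pow(array.get(x, i), 2)
    w
var xs = array.from(1.0, 2.0, 3.0, 4.0)
plot(hnWeight(xs, 2), "Weight at i = 2")  // 2^2 + 3^2 = 13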
Also, this method includes the Levinson–Durbin algorithm, as was already discussed in the following indicator:
Levinson-Durbin Autocorrelation Extrapolation of Price
What is Helme-Nikias Weighted Burg Autoregressive Spectral Estimate Extrapolation of price?
In this paper a new stable modification of the weighted Burg technique for autoregressive (AR) spectral estimation is introduced based on data-adaptive weights that are proportional to the common power of the forward and backward AR process realizations. It is shown that AR spectra of short length sinusoidal signals generated by the new approach do not exhibit phase dependence or line-splitting. Further, it is demonstrated that improvements in resolution may be so obtained relative to other weighted Burg algorithms. The method suggested here is shown to resolve two closely-spaced peaks of dynamic range 24 dB whereas the modified Burg schemes employing rectangular, Hamming or "optimum" parabolic windows fail.
Data inputs
Source Settings: Loxx's Expanded Source Types. You typically use "open" since open has already closed on the current active bar
LastBar - bar where to start the prediction
PastBars - how many bars back to model
LPOrder - order of linear prediction model; 0 to 1
FutBars - how many bars you want to forward predict
Things to know
Normally, a simple moving average is calculated on source data. I've expanded this to 38 different averaging methods using Loxx's Moving Averages.
This indicator repaints
Further reading
A high-resolution modified Burg algorithm for spectral estimation
Related Indicators
Levinson-Durbin Autocorrelation Extrapolation of Price
Weighted Burg AR Spectral Estimate Extrapolation of Price
TA
█ TA Library
📊 OVERVIEW
TA is a Pine Script technical analysis library. This library provides 25+ moving averages and smoothing filters, from classic SMA/EMA to Kalman Filters and adaptive algorithms, implemented based on academic research.
🎯 Core Features
Academic Based - Algorithms follow original papers and formulas
Performance Optimized - Pre-calculated constants for faster response
Unified Interface - Consistent function design
Research Based - Integrates technical analysis research
🎯 CONCEPTS
Library Design Philosophy
This technical analysis library focuses on providing:
Academic Foundation
Algorithms based on published research papers and academic standards
Implementations that follow original mathematical formulations
Clear documentation with research references
Developer Experience
Unified interface design for consistent usage patterns
Pre-calculated constants for optimal performance
Comprehensive function collection to reduce development time
Single import statement for immediate access to all functions
Each indicator encapsulated as a simple function call - one line of code simplifies complexity
Technical Excellence
25+ carefully implemented moving averages and filters
Support for advanced algorithms like Kalman Filter and MAMA/FAMA
Optimized code structure for maintainability and reliability
Regular updates incorporating latest research developments
🚀 USING THIS LIBRARY
Import Library
//@version=6
import DCAUT/TA/1 as dta
indicator("Advanced Technical Analysis", overlay=true)
Basic Usage Example
// Classic moving average combination
ema20 = ta.ema(close, 20)
kama20 = dta.kama(close, 20)
plot(ema20, "EMA20", color.red, 2)
plot(kama20, "KAMA20", color.green, 2)
Advanced Trading System
// Adaptive moving average system
kama = dta.kama(close, 20, 2, 30)
[mamaValue, famaValue] = dta.mamaFama(close, 0.5, 0.05)
// Trend confirmation and entry signals
bullTrend = kama > kama[1] and mamaValue > famaValue
bearTrend = kama < kama[1] and mamaValue < famaValue
longSignal = ta.crossover(close, kama) and bullTrend
shortSignal = ta.crossunder(close, kama) and bearTrend
plot(kama, "KAMA", color.blue, 3)
plot(mamaValue, "MAMA", color.orange, 2)
plot(famaValue, "FAMA", color.purple, 2)
plotshape(longSignal, "Buy", shape.triangleup, location.belowbar, color.green)
plotshape(shortSignal, "Sell", shape.triangledown, location.abovebar, color.red)
📋 FUNCTIONS REFERENCE
ewma(source, alpha)
Calculates the Exponentially Weighted Moving Average with dynamic alpha parameter.
Parameters:
source (series float) : Series of values to process.
alpha (series float) : The smoothing parameter of the filter.
Returns: (float) The exponentially weighted moving average value.
dema(source, length)
Calculates the Double Exponential Moving Average (DEMA) of a given data series.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the moving average calculation.
Returns: (float) The calculated Double Exponential Moving Average value.
tema(source, length)
Calculates the Triple Exponential Moving Average (TEMA) of a given data series.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the moving average calculation.
Returns: (float) The calculated Triple Exponential Moving Average value.
zlema(source, length)
Calculates the Zero-Lag Exponential Moving Average (ZLEMA) of a given data series. This indicator attempts to eliminate the lag inherent in all moving averages.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the moving average calculation.
Returns: (float) The calculated Zero-Lag Exponential Moving Average value.
tma(source, length)
Calculates the Triangular Moving Average (TMA) of a given data series. TMA is a double-smoothed simple moving average that reduces noise.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the moving average calculation.
Returns: (float) The calculated Triangular Moving Average value.
frama(source, length)
Calculates the Fractal Adaptive Moving Average (FRAMA) of a given data series. FRAMA adapts its smoothing factor based on fractal geometry to reduce lag. Developed by John Ehlers.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the moving average calculation.
Returns: (float) The calculated Fractal Adaptive Moving Average value.
kama(source, length, fastLength, slowLength)
Calculates Kaufman's Adaptive Moving Average (KAMA) of a given data series. KAMA adjusts its smoothing based on market efficiency ratio. Developed by Perry J. Kaufman.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the efficiency calculation.
fastLength (simple int) : Fast EMA length for smoothing calculation. Optional. Default is 2.
slowLength (simple int) : Slow EMA length for smoothing calculation. Optional. Default is 30.
Returns: (float) The calculated Kaufman's Adaptive Moving Average value.
t3(source, length, volumeFactor)
Calculates the Tilson Moving Average (T3) of a given data series. T3 is a triple-smoothed exponential moving average with improved lag characteristics. Developed by Tim Tillson.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the moving average calculation.
volumeFactor (simple float) : Volume factor affecting responsiveness. Optional. Default is 0.7.
Returns: (float) The calculated Tilson Moving Average value.
ultimateSmoother(source, length)
Calculates the Ultimate Smoother of a given data series. Uses advanced filtering techniques to reduce noise while maintaining responsiveness. Based on digital signal processing principles by John Ehlers.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the smoothing calculation.
Returns: (float) The calculated Ultimate Smoother value.
kalmanFilter(source, processNoise, measurementNoise)
Calculates the Kalman Filter of a given data series. Optimal estimation algorithm that estimates true value from noisy observations. Based on the Kalman Filter algorithm developed by Rudolf Kalman (1960).
Parameters:
source (series float) : Series of values to process.
processNoise (simple float) : Process noise variance (Q). Controls adaptation speed. Optional. Default is 0.05.
measurementNoise (simple float) : Measurement noise variance (R). Controls smoothing. Optional. Default is 1.0.
Returns: (float) The calculated Kalman Filter value.
mcginleyDynamic(source, length)
Calculates the McGinley Dynamic of a given data series. McGinley Dynamic is an adaptive moving average that adjusts to market speed changes. Developed by John R. McGinley Jr.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the dynamic calculation.
Returns: (float) The calculated McGinley Dynamic value.
mama(source, fastLimit, slowLimit)
Calculates the Mesa Adaptive Moving Average (MAMA) of a given data series. MAMA uses Hilbert Transform Discriminator to adapt to market cycles dynamically. Developed by John F. Ehlers.
Parameters:
source (series float) : Series of values to process.
fastLimit (simple float) : Maximum alpha (responsiveness). Optional. Default is 0.5.
slowLimit (simple float) : Minimum alpha (smoothing). Optional. Default is 0.05.
Returns: (float) The calculated Mesa Adaptive Moving Average value.
fama(source, fastLimit, slowLimit)
Calculates the Following Adaptive Moving Average (FAMA) of a given data series. FAMA follows MAMA with reduced responsiveness for crossover signals. Developed by John F. Ehlers.
Parameters:
source (series float) : Series of values to process.
fastLimit (simple float) : Maximum alpha (responsiveness). Optional. Default is 0.5.
slowLimit (simple float) : Minimum alpha (smoothing). Optional. Default is 0.05.
Returns: (float) The calculated Following Adaptive Moving Average value.
mamaFama(source, fastLimit, slowLimit)
Calculates Mesa Adaptive Moving Average (MAMA) and Following Adaptive Moving Average (FAMA).
Parameters:
source (series float) : Series of values to process.
fastLimit (simple float) : Maximum alpha (responsiveness). Optional. Default is 0.5.
slowLimit (simple float) : Minimum alpha (smoothing). Optional. Default is 0.05.
Returns: (tuple) Tuple containing the MAMA and FAMA values.
laguerreFilter(source, length, gamma, order)
Calculates the standard N-order Laguerre Filter of a given data series. Standard Laguerre Filter uses uniform weighting across all polynomial terms. Developed by John F. Ehlers.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Length for UltimateSmoother preprocessing.
gamma (simple float) : Feedback coefficient (0-1). Lower values reduce lag. Optional. Default is 0.8.
order (simple int) : The order of the Laguerre filter (1-10). Higher order increases lag. Optional. Default is 8.
Returns: (float) The calculated standard Laguerre Filter value.
laguerreBinomialFilter(source, length, gamma)
Calculates the Laguerre Binomial Filter of a given data series. Uses 6-pole feedback with binomial weighting coefficients. Developed by John F. Ehlers.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Length for UltimateSmoother preprocessing.
gamma (simple float) : Feedback coefficient (0-1). Lower values reduce lag. Optional. Default is 0.5.
Returns: (float) The calculated Laguerre Binomial Filter value.
superSmoother(source, length)
Calculates the Super Smoother of a given data series. SuperSmoother is a second-order Butterworth filter from aerospace technology. Developed by John F. Ehlers.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Period for the filter calculation.
Returns: (float) The calculated Super Smoother value.
rangeFilter(source, length, multiplier)
Calculates the Range Filter of a given data series. Range Filter reduces noise by filtering price movements within a dynamic range.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the average range calculation.
multiplier (simple float) : Multiplier for the smooth range. Higher values increase filtering. Optional. Default is 2.618.
Returns: (tuple) Tuple containing filtered value, trend direction, upper band, and lower band.
qqe(source, rsiLength, rsiSmooth, qqeFactor)
Calculates the Quantitative Qualitative Estimation (QQE) of a given data series. QQE is an improved RSI that reduces noise and provides smoother signals. Developed by Igor Livshin.
Parameters:
source (series float) : Series of values to process.
rsiLength (simple int) : Number of bars for the RSI calculation. Optional. Default is 14.
rsiSmooth (simple int) : Number of bars for smoothing the RSI. Optional. Default is 5.
qqeFactor (simple float) : QQE factor for volatility band width. Optional. Default is 4.236.
Returns: (tuple) Tuple containing smoothed RSI and QQE trend line.
sslChannel(source, length)
Calculates the Semaphore Signal Level (SSL) Channel of a given data series. SSL Channel provides clear trend signals using moving averages of high and low prices.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the moving average calculation.
Returns: (tuple) Tuple containing SSL Up and SSL Down lines.
ma(source, length, maType)
Calculates a Moving Average based on the specified type. Universal interface supporting all moving average algorithms.
Parameters:
source (series float) : Series of values to process.
length (simple int) : Number of bars for the moving average calculation.
maType (simple MaType) : Type of moving average to calculate. Optional. Default is SMA.
Returns: (float) The calculated moving average value based on the specified type.
atr(length, maType)
Calculates the Average True Range (ATR) using the specified moving average type. Developed by J. Welles Wilder Jr.
Parameters:
length (simple int) : Number of bars for the ATR calculation.
maType (simple MaType) : Type of moving average to use for smoothing. Optional. Default is RMA.
Returns: (float) The calculated Average True Range value.
macd(source, fastLength, slowLength, signalLength, maType, signalMaType)
Calculates the Moving Average Convergence Divergence (MACD) with customizable MA types. Developed by Gerald Appel.
Parameters:
source (series float) : Series of values to process.
fastLength (simple int) : Period for the fast moving average.
slowLength (simple int) : Period for the slow moving average.
signalLength (simple int) : Period for the signal line moving average.
maType (simple MaType) : Type of moving average for main MACD calculation. Optional. Default is EMA.
signalMaType (simple MaType) : Type of moving average for signal line calculation. Optional. Default is EMA.
Returns: ( ) Tuple containing MACD line, signal line, and histogram values.
dmao(source, fastLength, slowLength, maType)
Calculates the Dual Moving Average Oscillator (DMAO) of a given data series. Uses the same algorithm as the Percentage Price Oscillator (PPO), but can be applied to any data series.
Parameters:
source (series float) : Series of values to process.
fastLength (simple int) : Period for the fast moving average.
slowLength (simple int) : Period for the slow moving average.
maType (simple MaType) : Type of moving average to use for both calculations. Optional. Default is EMA.
Returns: (float) The calculated Dual Moving Average Oscillator value as a percentage.
continuationIndex(source, length, gamma, order)
Calculates the Continuation Index of a given data series. The index represents the Inverse Fisher Transform of the normalized difference between an UltimateSmoother and an N-order Laguerre filter. Developed by John F. Ehlers, published in TASC 2025.09.
Parameters:
source (series float) : Series of values to process.
length (simple int) : The calculation length.
gamma (simple float) : Controls the phase response of the Laguerre filter. Optional. Default is 0.8.
order (simple int) : The order of the Laguerre filter (1-10). Optional. Default is 8.
Returns: (float) The calculated Continuation Index value.
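A minimal usage sketch of the documented functions. The import path and alias below are placeholders (substitute the library's actual publisher, name, and version); the call signatures and defaults follow the reference above.
```pine
//@version=6
indicator("Library usage sketch")
// Placeholder import path - replace with the library's real publisher/name/version.
import PublisherName/LibraryName/1 as talib

// Tuple-returning functions destructure into multiple values.
[mamaVal, famaVal] = talib.mamaFama(close, 0.5, 0.05)
[rsiMa, qqeTrend] = talib.qqe(close, 14, 5, 4.236)

// Single-value functions plot directly.
smooth = talib.superSmoother(close, 20)
ci = talib.continuationIndex(close, 20, 0.8, 8)
plot(smooth, "SuperSmoother")
plot(ci, "Continuation Index")
```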
📚 RELEASE NOTES
v1.0 (2025.09.24)
✅ 25+ technical analysis functions
✅ Complete adaptive moving average series (KAMA, FRAMA, MAMA/FAMA)
✅ Advanced signal processing filters (Kalman, Laguerre, SuperSmoother, UltimateSmoother)
✅ Performance optimized with pre-calculated constants and efficient algorithms
✅ Unified function interface design following TradingView best practices
✅ Comprehensive moving average collection (DEMA, TEMA, ZLEMA, T3, etc.)
✅ Volatility and trend detection tools (QQE, SSL Channel, Range Filter)
✅ Continuation Index - Latest research from TASC 2025.09
✅ MACD and ATR calculations supporting multiple moving average types
✅ Dual Moving Average Oscillator (DMAO) for arbitrary data series analysis
Trend Following Strategy with KNN
### 1. Strategy Features
This strategy combines the K-Nearest Neighbors (KNN) algorithm with a trend-following strategy to predict future price movements by analyzing historical price data. Here are the main features of the strategy:
1. **Dynamic Parameter Adjustment**: Uses the KNN algorithm to dynamically adjust parameters of the trend-following strategy, such as moving average length and channel length, to adapt to market changes.
2. **Trend Following**: Captures market trends using moving averages and price channels to generate buy and sell signals.
3. **Multi-Factor Analysis**: Combines the KNN algorithm with moving averages to comprehensively analyze the impact of multiple factors, improving the accuracy of trading signals.
4. **High Adaptability**: Automatically adjusts parameters using the KNN algorithm, allowing the strategy to adapt to different market environments and asset types.
### 2. Simple Introduction to the KNN Algorithm
The K-Nearest Neighbors (KNN) algorithm is a simple and intuitive machine learning algorithm primarily used for classification and regression problems. Here are the basic concepts of the KNN algorithm:
1. **Non-Parametric Model**: KNN is a non-parametric algorithm, meaning it does not make any assumptions about the data distribution. Instead, it directly uses training data for predictions.
2. **Instance-Based Learning**: KNN is an instance-based learning method that uses training data directly for predictions, rather than generating a model through a training process.
3. **Distance Metrics**: The core of the KNN algorithm is calculating the distance between data points. Common distance metrics include Euclidean distance, Manhattan distance, and Minkowski distance.
4. **Neighbor Selection**: For each test data point, the KNN algorithm finds the K nearest neighbors in the training dataset.
5. **Classification and Regression**: In classification problems, KNN determines the class of a test data point through a voting mechanism. In regression problems, KNN predicts the value of a test data point by calculating the average of the K nearest neighbors.
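To make the distance-and-voting mechanism concrete, here is a minimal Pine Script sketch of KNN classification on bar data. The two features, window sizes, and vote thresholds are illustrative assumptions, not the strategy's actual inputs.
```pine
//@version=6
indicator("KNN classification sketch", overlay=true)
k = input.int(5, "K neighbors")
n = input.int(100, "Training window")

// Illustrative features: momentum and normalized volatility (assumptions).
f1 = ta.roc(close, 10)
f2 = ta.atr(14) / close * 100
up = close > close[1] ? 1.0 : 0.0   // outcome label of each bar

// Collect (distance, label) pairs: features at offset i, outcome on the next bar.
dists = array.new<float>()
labels = array.new<float>()
for i = 1 to n
    d = math.sqrt(math.pow(f1 - f1[i], 2) + math.pow(f2 - f2[i], 2))
    array.push(dists, d)
    array.push(labels, up[i - 1])

// Vote among the k nearest neighbors by repeatedly extracting the minimum.
votes = 0.0
for j = 1 to k
    minIdx = 0
    for i = 1 to array.size(dists) - 1
        if array.get(dists, i) < array.get(dists, minIdx)
            minIdx := i
    votes += array.get(labels, minIdx)
    array.set(dists, minIdx, 1e10)   // exclude this neighbor from later passes

prob = votes / k
plotshape(prob > 0.6, style=shape.triangleup, location=location.belowbar, color=color.green)
plotshape(prob < 0.4, style=shape.triangledown, location=location.abovebar, color=color.red)
```
For regression rather than classification, the final step would average the neighbors' values instead of voting, as the list above notes.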
### 3. Applications of the KNN Algorithm in Quantitative Trading Strategies
The KNN algorithm can be applied to various quantitative trading strategies. Here are some common use cases:
1. **Trend-Following Strategies**: KNN can be used to identify market trends, helping traders capture the beginning and end of trends.
2. **Mean Reversion Strategies**: In mean reversion strategies, KNN can be used to identify price deviations from the mean.
3. **Arbitrage Strategies**: In arbitrage strategies, KNN can be used to identify price discrepancies between different markets or assets.
4. **High-Frequency Trading Strategies**: In high-frequency trading strategies, KNN can be used to quickly identify market anomalies, such as price spikes or volume anomalies.
5. **Event-Driven Strategies**: In event-driven strategies, KNN can be used to identify the impact of market events.
6. **Multi-Factor Strategies**: In multi-factor strategies, KNN can be used to comprehensively analyze the impact of multiple factors.
### 4. Final Considerations
1. **Computational Efficiency**: The KNN algorithm may face computational efficiency issues with large datasets, especially in real-time trading. Optimize the code to reduce access to historical data and improve computational efficiency.
2. **Parameter Selection**: The choice of K value significantly affects the performance of the KNN algorithm. Use cross-validation or other methods to select the optimal K value.
3. **Data Standardization**: KNN is sensitive to data standardization and feature selection. Standardize the data to ensure equal weighting of different features.
4. **Noisy Data**: KNN is sensitive to noisy data, which can lead to overfitting. Preprocess the data to remove noise.
5. **Market Environment**: The effectiveness of the KNN algorithm may be influenced by market conditions. Combine it with other technical indicators and fundamental analysis to enhance the robustness of the strategy.
AI Moving Average (Expo)
█ Overview
The AI Moving Average indicator is a trading tool that uses an AI-based K-nearest neighbors (KNN) algorithm to analyze and interpret patterns in price data. It combines the logic of a traditional moving average with artificial intelligence, creating an adaptive and robust indicator that can identify strong trends and key market levels.
█ How It Works
The algorithm collects data points and applies a KNN-weighted approach to classify price movement as either bullish or bearish. For each data point, the algorithm checks if the price is above or below the calculated moving average. If the price is above the moving average, it's labeled as bullish (1), and if it's below, it's labeled as bearish (0). The K-Nearest Neighbors (KNN) is an instance-based learning algorithm used in classification and regression tasks. It works on a principle of voting, where a new data point is classified based on the majority label of its 'k' nearest neighbors.
The algorithm's use of a KNN-weighted approach adds a layer of intelligence to the traditional moving average analysis. By considering not just the price relative to a moving average but also taking into account the relationships and similarities between different data points, it offers a nuanced and robust classification of price movements.
This combination of data collection, labeling, and KNN-weighted classification turns the AI Moving Average (Expo) Indicator into a dynamic tool that can adapt to changing market conditions, making it suitable for various trading strategies and market environments.
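A hedged sketch of the labeling-plus-weighting idea described above. Inverse-distance weighting over the stored points stands in for the script's KNN-weighted vote, and the MA type, lengths, and thresholds are assumptions.
```pine
//@version=6
indicator("KNN-weighted MA sketch", overlay=true)
n = input.int(50, "n (DataPoints)")
ma = ta.sma(close, 20)
lbl = close > ma ? 1.0 : 0.0   // bullish (1) or bearish (0) label per bar

// Inverse-distance weighting approximates a KNN-weighted majority vote.
num = 0.0
den = 0.0
for i = 1 to n
    w = 1.0 / (math.abs(close - close[i]) + syminfo.mintick)
    num += w * lbl[i]
    den += w
ratio = num / den

// Color the MA by the weighted vote: green bullish, red bearish, blue neutral.
col = ratio > 0.6 ? color.green : ratio < 0.4 ? color.red : color.blue
plot(ma, "AI MA", color=col, linewidth=2)
```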
█ How to Use
Dynamic Trend Recognition
The color-coded moving average line helps traders quickly identify market trends. Green represents bullish, red represents bearish, and blue represents neutrality.
Trend Strength
By adjusting certain settings within the AI Moving Average (Expo) Indicator, such as using a higher 'k' value and increasing the number of data points, traders can gain real-time insights into strong trends. A higher 'k' value makes the prediction model more resilient to noise, emphasizing pronounced trends, while more data points provide a comprehensive view of the market direction. Together, these adjustments enable the indicator to display only robust trends on the chart, allowing traders to focus exclusively on significant market movements and strong trends.
Key SR Levels
Traders can utilize the indicator to identify key support and resistance levels that are derived from the prevailing trend movement. The derived support and resistance levels are not just based on historical data but are dynamically adjusted with the current trend, making them highly responsive to market changes.
█ Settings
k (Neighbors): Number of neighbors in the KNN algorithm. Increasing 'k' makes predictions more resilient to noise but may decrease sensitivity to local variations.
n (DataPoints): Number of data points considered in AI analysis. This affects how the AI interprets patterns in the price data.
maType (Select MA): Type of moving average applied. Options allow for different smoothing techniques to emphasize or dampen aspects of price movement.
length: Length of the moving average. A greater length creates a smoother curve but might lag recent price changes.
dataToClassify: Source data for classifying price as bullish or bearish. It can be adjusted to consider different aspects of price information.
dataForMovingAverage: Source data for calculating the moving average. Different selections may emphasize different aspects of price movement.
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
Adaptivity: Measures of Dominant Cycles and Price Trend [Loxx]
Adaptivity: Measures of Dominant Cycles and Price Trend is an indicator that outputs adaptive lengths using various methods for dominant cycle and price trend timeframe adaptivity. While the information output from this indicator might be useful for the average trader in one-off circumstances, this indicator is really meant for those who need a quick comparison of dynamic length outputs and who wish to fine-tune algorithms and/or create adaptive indicators.
This indicator compares adaptive output lengths of all publicly known adaptive measures. Additional adaptive measures will be added as they are discovered and made public.
The first release of this indicator includes 6 measures. An additional three measures will be added with updates. Please check back regularly for new measures.
Ehlers:
Autocorrelation Periodogram
Band-pass
Instantaneous Cycle
Hilbert Transformer
Dual Differentiator
Phase Accumulation (future release)
Homodyne (future release)
Jurik:
Composite Fractal Behavior (CFB)
Adam White:
Vertical Horizontal Filter (VHF) (future release)
What is an adaptive cycle, and what is Ehlers Autocorrelation Periodogram Algorithm?
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 135:
"Adaptive filters can have several different meanings. For example, Perry Kaufman's adaptive moving average (KAMA) and Tushar Chande's variable index dynamic average (VIDYA) adapt to changes in volatility. By definition, these filters are reactive to price changes, and therefore they close the barn door after the horse is gone. The adaptive filters discussed in this chapter are the familiar Stochastic, relative strength index (RSI), commodity channel index (CCI), and band-pass filter. The key parameter in each case is the look-back period used to calculate the indicator. This look-back period is commonly a fixed value. However, since the measured cycle period is changing, it makes sense to adapt these indicators to the measured cycle period. When tradable market cycles are observed, they tend to persist for a short while. Therefore, by tuning the indicators to the measured cycle period they are optimized for current conditions and can even have predictive characteristics.
The dominant cycle period is measured using the Autocorrelation Periodogram Algorithm. That dominant cycle dynamically sets the look-back period for the indicators. I employ my own streamlined computation for the indicators that provide smoother and easier to interpret outputs than traditional methods. Further, the indicator codes have been modified to remove the effects of spectral dilation. This basically creates a whole new set of indicators for your trading arsenal."
What is the Hilbert Transformer?
An analytic signal allows for time-variable parameters and is a generalization of the phasor concept, which is restricted to time-invariant amplitude, phase, and frequency. The analytic representation of a real-valued function or signal facilitates many mathematical manipulations of the signal. For example, computing the phase of a signal or the power in the wave is much simpler using analytic signals.
The Hilbert transformer is the technique to create an analytic signal from a real one. The conventional Hilbert transformer is theoretically an infinite-length FIR filter. Even when the filter length is truncated to a useful but finite length, the induced lag is far too large to make the transformer useful for trading.
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), pages 186-187:
"I want to emphasize that the only reason for including this section is for completeness. Unless you are interested in research, I suggest you skip this section entirely. To further emphasize my point, do not use the code for trading. A vastly superior approach to compute the dominant cycle in the price data is the autocorrelation periodogram. The code is included because the reader may be able to capitalize on the algorithms in a way that I do not see. All the algorithms encapsulated in the code operate reasonably well on theoretical waveforms that have no noise component. My conjecture at this time is that the sample-to-sample noise simply swamps the computation of the rate change of phase, and therefore the resulting calculations to find the dominant cycle are basically worthless. The imaginary component of the Hilbert transformer cannot be smoothed as was done in the Hilbert transformer indicator because the smoothing destroys the orthogonality of the imaginary component."
What is the Dual Differentiator, a subset of Hilbert Transformer?
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 187:
"The first algorithm to compute the dominant cycle is called the dual differentiator. In this case, the phase angle is computed from the analytic signal as the arctangent of the ratio of the imaginary component to the real component. Further, the angular frequency is defined as the rate change of phase. We can use these facts to derive the cycle period."
What is the Phase Accumulation, a subset of Hilbert Transformer?
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 189:
"The next algorithm to compute the dominant cycle is the phase accumulation method. The phase accumulation method of computing the dominant cycle is perhaps the easiest to comprehend. In this technique, we measure the phase at each sample by taking the arctangent of the ratio of the quadrature component to the in-phase component. A delta phase is generated by taking the difference of the phase between successive samples. At each sample we can then look backwards, adding up the delta phases. When the sum of the delta phases reaches 360 degrees, we must have passed through one full cycle, on average. The process is repeated for each new sample.
The phase accumulation method of cycle measurement always uses one full cycle's worth of historical data. This is both an advantage and a disadvantage. The advantage is the lag in obtaining the answer scales directly with the cycle period. That is, the measurement of a short cycle period has less lag than the measurement of a longer cycle period. However, the number of samples used in making the measurement means the averaging period is variable with cycle period. Longer averaging reduces the noise level compared to the signal. Therefore, shorter cycle periods necessarily have a higher output signal-to-noise ratio."
What is the Homodyne, a subset of Hilbert Transformer?
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 192:
"The third algorithm for computing the dominant cycle is the homodyne approach. Homodyne means the signal is multiplied by itself. More precisely, we want to multiply the signal of the current bar with the complex value of the signal one bar ago. The complex conjugate is, by definition, a complex number whose sign of the imaginary component has been reversed."
What is the Instantaneous Cycle?
The Instantaneous Cycle Period Measurement was authored by John Ehlers; it is built upon his Hilbert Transform Indicator.
From John F. Ehlers' book Cybernetic Analysis for Stocks and Futures: Cutting-Edge DSP Technology to Improve Your Trading (2004), page 107:
"It is obvious that cycles exist in the market. They can be found on any chart by the most casual observer. What is not so clear is how to identify those cycles in real time and how to take advantage of their existence. When Welles Wilder first introduced the relative strength index (RSI), I was curious as to why he selected 14 bars as the basis of his calculations. I reasoned that if I knew the correct market conditions, then I could make indicators such as the RSI adaptive to those conditions. Cycles were the answer. I knew cycles could be measured. Once I had the cyclic measurement, a host of automatically adaptive indicators could follow.
Measurement of market cycles is not easy. The signal-to-noise ratio is often very low, making measurement difficult even using a good measurement technique. Additionally, the measurements theoretically involve simultaneously solving a triple infinity of parameter values. The parameters required for the general solutions were frequency, amplitude, and phase. Some standard engineering tools, like fast Fourier transforms (FFTs), are simply not appropriate for measuring market cycles because FFTs cannot simultaneously meet the stationarity constraints and produce results with reasonable resolution. Therefore I introduced maximum entropy spectral analysis (MESA) for the measurement of market cycles. This approach, originally developed to interpret seismographic information for oil exploration, produces high-resolution outputs with an exceptionally short amount of information. A short data length improves the probability of having nearly stationary data. Stationary data means that frequency and amplitude are constant over the length of the data. I noticed over the years that the cycles were ephemeral. Their periods would be continuously increasing and decreasing. Their amplitudes also were changing, giving variable signal-to-noise ratio conditions. Although all this is going on with the cyclic components, the enduring characteristic is that generally only one tradable cycle at a time is present for the data set being used. I prefer the term dominant cycle to denote that one component. The assumption that there is only one cycle in the data collapses the difficulty of the measurement process dramatically."
What is the Band-pass Cycle?
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 47:
"Perhaps the least appreciated and most underutilized filter in technical analysis is the band-pass filter. The band-pass filter simultaneously diminishes the amplitude at low frequencies, qualifying it as a detrender, and diminishes the amplitude at high frequencies, qualifying it as a data smoother. It passes only those frequency components from input to output in which the trader is interested. The filtering produced by a band-pass filter is superior because the rejection in the stop bands is related to its bandwidth. The degree of rejection of undesired frequency components is called selectivity. The band-stop filter is the dual of the band-pass filter. It rejects a band of frequency components as a notch at the output and passes all other frequency components virtually unattenuated. Since the bandwidth of the deep rejection in the notch is relatively narrow and since the spectrum of market cycles is relatively broad due to systemic noise, the band-stop filter has little application in trading."
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 59:
"The band-pass filter can be used as a relatively simple measurement of the dominant cycle. A cycle is complete when the waveform crosses zero two times from the last zero crossing. Therefore, each successive zero crossing of the indicator marks a half cycle period. We can establish the dominant cycle period as twice the spacing between successive zero crossings."
What is Composite Fractal Behavior (CFB)?
All around you mechanisms adjust themselves to their environment. From simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m.'s, torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle, (DC), that cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine, in which Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes the patterns by size, and then outputs a composite fractal size index. This index is smooth, timely, and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is VHF Adaptive Cycle?
Vertical Horizontal Filter (VHF) was created by Adam White to identify trending and ranging markets. VHF measures the level of trend activity, similar to ADX DI. Vertical Horizontal Filter does not, itself, generate trading signals, but determines whether signals are taken from trend or momentum indicators. Using this trend information, one is then able to derive an average cycle length.
Monte Carlo Range Forecast [DW]
This is an experimental study designed to forecast the range of price movement from a specified starting point using a Monte Carlo simulation.
Monte Carlo experiments are a broad class of computational algorithms that utilize random sampling to derive real world numerical results.
These types of algorithms have a number of applications in numerous fields of study including physics, engineering, behavioral sciences, climate forecasting, computer graphics, gaming AI, mathematics, and finance.
Although the applications vary, there is a typical process behind the majority of Monte Carlo methods:
-> First, a distribution of possible inputs is defined.
-> Next, values are generated randomly from the distribution.
-> The values are then fed through some form of deterministic algorithm.
-> And lastly, the results are aggregated over some number of iterations.
In this study, the Monte Carlo process used generates a distribution of aggregate pseudorandom linear price returns summed over a user-defined period, then plots standard deviations of the outcomes from the mean outcome to generate forecast regions.
The pseudorandom process used in this script relies on a modified Wichmann-Hill pseudorandom number generator (PRNG) algorithm.
Wichmann-Hill is a hybrid generator that uses three linear congruential generators (LCGs) with different prime moduli.
Each LCG within the generator produces an independent, uniformly distributed number between 0 and 1.
The three generated values are then summed and modulo 1 is taken to deliver the final uniformly distributed output.
Because of its long cycle length, Wichmann-Hill is a fantastic generator to use on TV since it's extremely unlikely that you'll ever see a cycle repeat.
The resulting pseudorandom output from this generator has a minimum repetition cycle length of 6,953,607,871,644.
Fun fact: Wichmann-Hill is a widely used PRNG in various software applications. For example, Excel 2003 and later uses this algorithm in its RAND function, and it was the default generator in Python up to v2.2.
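For reference, here is a sketch of the standard Wichmann-Hill generator using its classic constants. The study's modified variant is not public, so only the textbook form is shown; the seeds are arbitrary.
```pine
//@version=6
indicator("Wichmann-Hill PRNG sketch")
// Three LCG states with arbitrary nonzero seeds.
var state = array.from(123.0, 456.0, 789.0)

whRand(array<float> s) =>
    // Three linear congruential generators with different prime moduli.
    array.set(s, 0, (171.0 * array.get(s, 0)) % 30269.0)
    array.set(s, 1, (172.0 * array.get(s, 1)) % 30307.0)
    array.set(s, 2, (170.0 * array.get(s, 2)) % 30323.0)
    // The sum of the three scaled states, modulo 1, is uniform on [0, 1).
    (array.get(s, 0) / 30269.0 + array.get(s, 1) / 30307.0 + array.get(s, 2) / 30323.0) % 1.0

plot(whRand(state), "Uniform(0,1) draw per bar")
```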
The generation algorithm in this script takes the Wichmann-Hill algorithm, and uses a multi-stage transformation process to generate the results.
First, a parent seed is selected. This can either be a fixed value, or a dynamic value.
The dynamic parent value is produced by taking advantage of Pine's timenow variable behavior. It produces a variable parent seed by using a frozen ratio of timenow/time.
Because timenow always reflects the current real time when frozen and the time variable reflects the chart's beginning time when frozen, the ratio of these values produces a new number every time the cache updates.
After a parent seed is selected, its value is then fed through a uniformly distributed seed array generator, which generates multiple arrays of pseudorandom "children" seeds.
The seeds produced in this step are then fed through the main generators to produce arrays of pseudorandom simulated outcomes, and a pseudorandom series to compare with the real series.
The main generators within this script are designed to (at least somewhat) model the stochastic nature of financial time series data.
The first step in this process is to transform the uniform outputs of the Wichmann-Hill into outputs that are normally distributed.
In this script, the transformation is done using an estimate of the normal distribution quantile function.
Quantile functions, otherwise known as percent-point or inverse cumulative distribution functions, specify the value of a random variable such that the probability of the variable being within the value's boundary equals the input probability.
The quantile equation for a normal probability distribution is μ + σ(√2)erf^-1(2(p - 0.5)) where μ is the mean of the distribution, σ is the standard deviation, erf^-1 is the inverse Gauss error function, and p is the probability.
Because erf^-1() does not have a simple, closed form interpretation, it must be approximated.
To keep things lightweight in this approximation, I used a truncated Maclaurin Series expansion for this function with precomputed coefficients and rolled out operations to avoid nested looping.
This method provides a decent approximation of the error function without completely breaking floating point limits or sucking up runtime memory.
Note that there are plenty of more robust techniques to approximate this function, but their memory needs vary. I chose this method specifically because of runtime favorability.
To generate a pseudorandom approximately normally distributed variable, the uniformly distributed variable from the Wichmann-Hill algorithm is used as the input probability for the quantile estimator.
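A sketch of that transformation under stated assumptions: the erf^-1 coefficients below are the standard published Maclaurin terms (the study's precomputed set may differ), and, as noted above, a truncated series loses accuracy toward the tails.
```pine
//@version=6
indicator("Normal quantile sketch")
// Truncated Maclaurin series for the inverse error function (standard coefficients).
erfinv(float z) =>
    z2 = z * z
    z * (0.8862269 + z2 * (0.2320137 + z2 * (0.1275562 + z2 * 0.0865521)))

// Normal quantile: mu + sigma * sqrt(2) * erfinv(2(p - 0.5)).
normQuantile(float p, float mu, float sigma) =>
    mu + sigma * math.sqrt(2.0) * erfinv(2.0 * (p - 0.5))

// For p = 0.8 this returns about 0.84, matching the standard normal table.
plot(normQuantile(0.8, 0.0, 1.0), "z for p = 0.8")
```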
Now from here, we get a pretty decent output that could be used itself in the simulation process. Many Monte Carlo simulations and random price generators utilize a normal variable.
However, if you compare the outputs of this normal variable with the actual returns of the real time series, you'll find that the variability in shocks (random changes) doesn't quite behave like it does in real data.
This is because most real financial time series data is more complex. Its distribution may be approximately normal at times, but the variability of its distribution changes over time due to various underlying factors.
In light of this, I believe that returns behave more like a convoluted product distribution rather than just a raw normal.
So the next step to get our procedurally generated returns to more closely emulate the behavior of real returns is to introduce more complexity into our model.
Through experimentation, I've found that a return series more closely emulating real returns can be generated in a three step process:
-> First, generate multiple independent, normally distributed variables simultaneously.
-> Next, apply pseudorandom weighting to each variable ranging from -1 to 1, or some limits within those bounds. This modulates each series to provide more variability in the shocks by producing product distributions.
-> Lastly, add the results together to generate the final pseudorandom output with a convoluted distribution. This adds variable amounts of constructive and destructive interference to produce a more "natural" looking output.
In this script, I use three independent normally distributed variables multiplied by uniform product distributed variables.
The first variable is generated by multiplying a normal variable by one uniformly distributed variable. This produces a bit more tailedness (kurtosis) than a normal distribution, but nothing too extreme.
The second variable is generated by multiplying a normal variable by two uniformly distributed variables. This produces moderately greater tails in the distribution.
The third variable is generated by multiplying a normal variable by three uniformly distributed variables. This produces a distribution with heavier tails.
For additional control of the output distributions, the uniform product distributions are given optional limits.
These limits control the boundaries for the absolute value of the uniform product variables, which affects the tails. In other words, they limit the weighting applied to the normally distributed variables in this transformation.
All three sets are then multiplied by user defined amplitude factors to adjust presence, then added together to produce our final pseudorandom return series with a convoluted product distribution.
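A compact sketch of that three-step construction. Here math.random stands in for the study's Wichmann-Hill generator, the normal transform reuses the truncated erfinv idea from the sketch above, and unit amplitudes and weight limits are assumed.
```pine
//@version=6
indicator("Convoluted return generator sketch")
erfinv(float z) =>
    z2 = z * z
    z * (0.8862269 + z2 * (0.2320137 + z2 * (0.1275562 + z2 * 0.0865521)))

// math.random stands in for the Wichmann-Hill generator (assumption).
normalDraw() =>
    math.sqrt(2.0) * erfinv(2.0 * (math.random(0.0, 1.0) - 0.5))
uniformDraw(float lim) =>
    math.random(-lim, lim)

// Three normals weighted by one, two, and three uniform factors, then summed.
s1 = normalDraw() * uniformDraw(1.0)
s2 = normalDraw() * uniformDraw(1.0) * uniformDraw(1.0)
s3 = normalDraw() * uniformDraw(1.0) * uniformDraw(1.0) * uniformDraw(1.0)
plot(s1 + s2 + s3, "Pseudorandom shock")
```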
Once we have the final, more "natural" looking pseudorandom series, the values are recursively summed over the forecast period to generate a simulated result.
This process of generation, weighting, addition, and summation is repeated over the user defined number of simulations with different seeds generated from the parent to produce our array of initial simulated outcomes.
After the initial simulation array is generated, the max, min, mean and standard deviation of this array are calculated, and the values are stored in holding arrays on each iteration to be called upon later.
Reference difference series and price values are also stored in holding arrays to be used in our comparison plots.
In this script, I use a linear model with simple returns rather than compounding log returns to generate the output.
The reason for this is that in generating outputs this way, we're able to run our simulations recursively from the beginning of the chart, then apply scaling and anchoring post-process.
This allows a greater conservation of runtime memory than the alternative, making it more suitable for doing longer forecasts with heavier amounts of simulations in TV's runtime environment.
From our starting time, the previous bar's price, volatility, and optional drift (expected return) are factored into our holding arrays to generate the final forecast parameters.
After these parameters are computed, the range forecast is produced.
The basis value for the ranges is the mean outcome of the simulations that were run.
Then, quarter standard deviations of the simulated outcomes are added to and subtracted from the basis up to 3σ to generate the forecast ranges.
All of these values are plotted and colorized based on their theoretical probability density. The most likely areas are the warmest colors, and least likely areas are the coolest colors.
An information panel is also displayed at the starting time which shows the starting time and price, forecast type, parent seed value, simulations run, forecast bars, total drift, mean, standard deviation, max outcome, min outcome, and bars remaining.
The interesting thing about simulated outcomes is that although the probability distribution of each simulation is not normal, the distribution of different outcomes converges to a normal one with enough steps.
In light of this, the probability density of outcomes is highest near the initial value + total drift, and decreases the further away from this point you go.
This makes logical sense since the central path is the easiest one to travel.
Given the ever changing state of markets, I find this tool to be best suited for shorter term forecasts.
However, if the movements of price are expected to remain relatively stable, longer term forecasts may be equally as valid.
There are many possible ways for users to apply this tool to their analysis setups. For example, the forecast ranges may be used as a guide to help users set risk targets.
Or, the generated levels could be used in conjunction with other indicators for meaningful confluence signals.
More advanced users could even extrapolate the functions used within this script for various purposes, such as generating pseudorandom data to test systems on, perform integration and approximations, etc.
These are just a few examples of potential uses of this script. How you choose to use it to benefit your trading, analysis, and coding is entirely up to you.
If nothing else, I think this is a pretty neat script simply for the novelty of it.
----------
How To Use:
When you first add the script to your chart, you will be prompted to confirm the starting date and time, number of bars to forecast, number of simulations to run, and whether to include drift assumption.
You will also be prompted to confirm the forecast type. There are two types to choose from:
-> End Result - This uses the values from the end of the simulation throughout the forecast interval.
-> Developing - This uses the values that develop from bar to bar, providing a real-time outlook.
You can always update these settings after confirmation as well.
Once these inputs are confirmed, the script will boot up and automatically generate the forecast in a separate pane.
Note that if there is no bar of data at the time you wish to start the forecast, the script will automatically detect and use the next available bar after the specified start time.
From here, you can now control the rest of the settings.
The "Seeding Settings" section controls the initial seed value used to generate the children that produce the simulations.
In this section, you can control whether the seed is a fixed value, or a dynamic one.
Since selecting the dynamic parent option will change the seed value every time you change the settings or refresh your chart, there is a "Regenerate" input built into the script.
This input is a dummy input that isn't connected to any of the calculations. The purpose of this input is to force an update of the dynamic parent without affecting the generator or forecast settings.
Note that because we're running a limited number of simulations, different parent seeds will typically yield slightly different forecast ranges.
When using a small number of simulations, you will likely see a higher amount of variance between differently seeded results because smaller numbers of sampled simulations yield a heavier bias.
The more simulations you run, the smaller this variance will become since the outcomes become more convergent toward the same distribution, so the differences between differently seeded forecasts will become more marginal.
When using a dynamic parent, pay attention to the dispersion of ranges.
When you find a set of ranges that is dispersed how you like with your configuration, set your fixed parent value to the parent seed that shows in the info panel.
This will allow you to replicate that dispersion behavior again in the future.
An important thing to note when setting alerts on the plotted levels, or using them as components for signals in other scripts, is to decide on a fixed value for your parent seed to avoid minor repainting due to seed changes.
When the parent seed is fixed, no repainting occurs.
The "Amplitude Settings" section controls the amplitude coefficients for the three differently tailed generators.
These amplitude factors will change the difference series output for each simulation by controlling how aggressively each series moves.
When "Adjust Amplitude Coefficients" is disabled, all three coefficients are set to 1.
Note that if you expect volatility to significantly diverge from its historical values over the forecast interval, try experimenting with these factors to match your anticipation.
The "Weighting Settings" section controls the weighting boundaries for the three generators.
These weighting limits affect how tailed the distributions in each generator are, which in turn affects the final series outputs.
The maximum absolute value range for the weights is -1 to 1. When "Limit Generator Weights" is disabled, this is the range that is automatically used.
The last set of inputs is the "Display Settings", where you can control the visual outputs.
From here, you can select to display either "Forecast" or "Difference Comparison" via the "Output Display Type" dropdown tab.
"Forecast" is the type displayed by default. This plots the end result or developing forecast ranges.
There is an option with this display type to show the developing extremes of the simulations. This option is enabled by default.
There's also an option with this display type to show one of the simulated price series from the set alongside actual prices.
This allows you to visually compare simulated prices alongside the real prices.
"Difference Comparison" allows you to visually compare a synthetic difference series from the set alongside the actual difference series.
This display method is primarily useful for visually tuning the amplitude and weighting settings of the generators.
There are also info panel settings on the bottom, which allow you to control size, colors, and date format for the panel.
It's all pretty simple to use once you get the hang of it. So play around with the settings and see what kinds of forecasts you can generate!
----------
ADDITIONAL NOTES & DISCLAIMERS
Although I've done a number of things within this script to keep runtime demands as low as possible, the fact remains that this script is fairly computationally heavy.
Because of this, you may get random timeouts when using this script.
This could be due to either random drops in available runtime on the server, using too many simulations, or running the simulations over too many bars.
If it's just a random drop in runtime on the server, hide and unhide the script, re-add it to the chart, or simply refresh the page.
If the timeout persists after trying this, then you'll need to adjust your settings to a less demanding configuration.
Please note that no specific claims are being made in regards to this script's predictive accuracy.
It must be understood that this model is based on randomized price generation with assumed constant drift and dispersion from historical data before the starting point.
Models like these do not consider the real-world factors that may influence price movement (economic changes, seasonality, macro-trends, instrument hype, etc.), nor the changes in sample distribution that may occur.
In light of this, it's perfectly possible for price data to exceed even the most extreme simulated outcomes.
The future is uncertain, and becomes increasingly uncertain with each passing point in time.
Predictive models of any type can vary significantly in performance at any point in time, and nobody can guarantee any specific type of future performance.
When using forecasts in making decisions, DO NOT treat them as any form of guarantee that values will fall within the predicted range.
When basing your trading decisions on any trading methodology or utility, predictive or not, you do so at your own risk.
No guarantee is being issued regarding the accuracy of this forecast model.
Forecasting is very far from an exact science, and the results from any forecast are designed to be interpreted as potential outcomes rather than anything concrete.
With that being said, when applied prudently and treated as "general case scenarios", forecast models like these may very well be potentially beneficial tools to have in the arsenal.
Machine Learning: LVQ-based Strategy
LVQ-based Strategy (FX and Crypto)
Description:
Learning Vector Quantization (LVQ) can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all learning-based approach. It is based on a prototype-based supervised learning classification task and trains its weights through a competitive learning algorithm.
Algorithm:
Initialize weights
Train for 1 to N number of epochs
- Select a training example
- Compute the winning vector
- Update the winning vector
Classify test sample
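A minimal one-dimensional LVQ sketch of those steps, trained online bar by bar. The RSI feature, prototype seeds, and learning rate are illustrative assumptions, and, as warned below, this kind of online labeling repaints.
```pine
//@version=6
indicator("LVQ sketch")
lrate = input.float(0.05, "Learning rate")
x = ta.rsi(close, 14)            // illustrative feature (assumption)
y = close > close[1] ? 1 : 0     // class label: did price rise?

// One prototype (weight vector) per class, seeded arbitrarily.
var float w0 = 40.0
var float w1 = 60.0

// Winner-take-all update: pull the winning prototype toward the sample when
// its class matches the label, push it away otherwise.
d0 = math.abs(x - w0)
d1 = math.abs(x - w1)
if d0 < d1
    w0 := y == 0 ? w0 + lrate * (x - w0) : w0 - lrate * (x - w0)
else
    w1 := y == 1 ? w1 + lrate * (x - w1) : w1 - lrate * (x - w1)

// Classify the current bar by the nearest prototype.
plot(math.abs(x - w1) < math.abs(x - w0) ? 1 : 0, "Predicted class")
```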
The LVQ algorithm offers a framework to test various indicators easily to see if they have any *predictive value*. One can easily add cog, wpr, and others.
Note: TradingView's playback feature helps to see this strategy in action. The algo is tested with BTCUSD/1Hour.
Warning: This is a preliminary version! Signals ARE repainting.
***Warning***: Signals LARGELY depend on hyperparams (lrate and epochs).
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours+++/Days
MIDAS VWAP Jayy
This is just a bash together of two MIDAS VWAP scripts, particularly those of AkifTokuz and drshoe.
I added the ability to show more MIDAS curves from the same script.
The algorithm primarily uses the "n" number, but the date can be used for the 8th VWAP.
I have not converted the script to version 3.
To find a bar number, go into "Chart Properties", select "Background", then select "Indicator Titles" and "Indicator Values". When you place your cursor over a bar, the first number you see adjacent to the script title is the bar number. Put that number in the dialogue box. The midline is MIDAS VWAP. The resistance is MIDAS VWAP using bar highs. The support is MIDAS VWAP using bar lows.
In most case using N will suffice. However, if you are flipping around charts inputting a specific date can be handy. In this way, you can compare the same point in time across multiple instruments eg first trading day of the year or an election date.
Adding dates into the dialogue box is a bit cumbersome, so in this version it is enabled for only one curve. I have called it VWAP and it follows the typical VWAP algorithm. (Does that make a difference? Read below for my opinion on the difference between MIDAS VWAP and VWAP.)
I have added the ability to start from the bottom or top of the initiating bar.
In theory, in a probable uptrend, pick the low of a bar for a low pivot and start the MIDAS VWAP there using the support option.
For a downtrend, use the high pivot bar and select resistance. The best way to see this is to play with these values.
Difference between MIDAS VWAP and the regular VWAP
MIDAS itself, as described by Levine, uses a time-anchored On-Balance Volume (OBV) plotted on a graph where the horizontal (abscissa) axis is cumulative volume, not time. He called his VWAP curves Support/Resistance VWAP or S/R curves. These S/R curves are often referred to as "MIDAS curves".
These are the main components of the MIDAS chart. A third algorithm called the Top-Bottom Finder was also described. (Separate script).
Additional tools are described in Midas Technical Analysis: A VWAP Approach to Trading and Investing in Today’s Markets by Andrew Coles and David G. Hawkins (copyright © 2011 by Andrew Coles and David G. Hawkins).
The name denotes the different way in which Levine approached the calculation.
The difference between "MIDAS" VWAP and VWAP is, in my opinion, much ado about nothing. The algorithms generate identical curves, although the MIDAS algorithm launches the curve one bar later than the VWAP algorithm, which can be a pain in the neck. All of the algorithms that I looked at on TradingView step back one bar in time to initiate the MIDAS curve. As such, the plotted curves are identical to traditional VWAP assuming the initiation is from the candle/bar midpoint.
How did Levine intend the curves to be drawn?
On a reversal, he suggested the Support and Resistance VWAP (S/R curve) be started after a reversal.
It is clear in his examples this happens occasionally, but in many cases he initiates the so-called MIDAS S/R VWAP right at the reversal point. In any case, the algorithm is problematic if you wish to start a curve on the first bar of an IPO: you will get nothing. That is a pain. Also, in Levine's writings, he describes simply clicking on the point where an S/R VWAP is to be drawn from. As such, the generally accepted method of initiating the curve at N-1 is a practical and sensible method. The only issue is that you cannot draw the curve from the first bar on any security, as mentioned, without resorting to the typical VWAP algorithm. There is another difference: VWAP is launched from the middle of the bar (as per AlphaTrends), but you can also launch from the top of the bar or the bottom (or anywhere, for that matter). The calculation proceeds using the top or bottom for each new bar.
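A bare-bones sketch of the anchored calculation described here: a cumulative VWAP started at bar N, launched from the bar's mid, top, or bottom. The inputs are assumptions; the published script's plumbing is more involved.
```pine
//@version=6
indicator("Anchored (MIDAS-style) VWAP sketch", overlay=true)
startBar = input.int(100, "Anchor bar number (N)")
launch = input.string("mid", "Launch from", options=["mid", "high", "low"])
price = launch == "high" ? high : launch == "low" ? low : hl2

// Cumulative price*volume and volume from the anchor bar onward.
var float cumPV = 0.0
var float cumV = 0.0
if bar_index >= startBar
    cumPV += price * volume
    cumV += volume
plot(cumV > 0 ? cumPV / cumV : na, "Anchored VWAP", color=color.orange)
```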
The potential applications are discussed in the MIDAS Technical Analysis book.
Opening Range Gaps [TakingProphets]
What is an Opening Range Gap (ORG)?
In ICT, the Opening Range Gap is defined as the price difference between the previous session’s close (e.g., 4:00 PM EST in U.S. indices) and the current day’s open (9:30 AM EST).
That gap is a liquidity void—an area where no trading occurred during regular hours.
Why ICT Traders Care About ORG
Liquidity Void (Gap Fill Logic)
-Because the gap is an untraded area, it naturally acts as a draw on liquidity.
-Price often seeks to rebalance by retracing into or fully filling this void.
Premium/Discount Sensitivity
-Once the ORG is defined, ICT treats it as a mini dealing range.
-Above EQ (Consequent Encroachment) = algorithmic premium (sell-sensitive).
-Below EQ = algorithmic discount (buy-sensitive).
-Price reaction at these levels gives a precise read on institutional intent intraday.
Support/Resistance from ORG
-If the session opens above prior close, the gap often acts as support until violated.
-If the session opens below prior close, the gap often acts as resistance until reclaimed.
Key ICT Concepts Anchored to ORG
Consequent Encroachment (CE): The midpoint of the gap. The algo is highly sensitive to CE as a decision point: reject → continuation; reclaim → reversal.
Draw on Liquidity (DoL): Price is algorithmically “pulled” toward gap fills, CE, or the opposite side of the ORG.
Order Flow Confirmation: If price ignores the gap and runs away from it, this signals strong institutional order flow in that direction.
Confluence with Other Tools: FVGs, OBs, and HTF PD arrays often overlap with ORG levels, strengthening setups.
Practical Application for Traders
Bias Formation:
Use ORG EQ as a line in the sand for intraday bias.
If price trades below ORG EQ after the open → look for short setups into the prior day’s low or external liquidity.
If price trades above ORG EQ → favor longs into highs/liquidity pools.
Execution Framework:
Wait for liquidity raids or market structure shifts at ORG edges (.00, .25, .50, .75).
Target: EQ, opposite quarter, or full gap fill.
Precision Reads:
ORG lines let traders anticipate where algorithms are likely to respond, providing mechanical invalidation and clear targets without clutter.
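A rough sketch of the ORG quarter levels described above, using daily data as a stand-in for the 4:00 PM close / 9:30 AM open pair (feeds that include extended hours would need session-based anchors instead).
```pine
//@version=6
indicator("ORG quarters sketch", overlay=true)
// Prior daily close and current daily open approximate the RTH gap (assumption).
prevClose = request.security(syminfo.tickerid, "D", close[1], lookahead=barmerge.lookahead_on)
dayOpen = request.security(syminfo.tickerid, "D", open, lookahead=barmerge.lookahead_on)

gapHi = math.max(prevClose, dayOpen)
gapLo = math.min(prevClose, dayOpen)
rng = gapHi - gapLo

plot(gapHi, "Gap edge", color=color.teal)
plot(gapLo, "Gap edge", color=color.teal)
plot(gapLo + 0.25 * rng, "ORG .25", color=color.gray)
plot(gapLo + 0.50 * rng, "ORG EQ / CE", color=color.orange)
plot(gapLo + 0.75 * rng, "ORG .75", color=color.gray)
```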
Institutional Levels (CNN) - [PhenLabs]
📊 Institutional Levels (Convolutional Neural Network-inspired)
Version : PineScript™v6
📌Description
The CNN-IL Institutional Levels indicator represents a breakthrough in automated zone detection technology, combining convolutional neural network principles with advanced statistical modeling. This sophisticated tool identifies high-probability institutional trading zones by analyzing pivot patterns, volume dynamics, and price behavior using machine learning algorithms.
The indicator employs a proprietary 9-factor logistic regression model that calculates real-time reaction probabilities for each detected zone. By incorporating CNN-inspired filtering techniques and dynamic zone management, it provides traders with unprecedented accuracy in identifying where institutional money is likely to react to price action.
🚀Points of Innovation
● CNN-Inspired Pivot Analysis - Advanced binning system using convolutional neural network principles for superior pattern recognition
● Real-Time Probability Engine - Live reaction probability calculations using 9-factor logistic regression model
● Dynamic Zone Intelligence - Automatic zone merging using Intersection over Union (IoU) algorithms
● Volume-Weighted Scoring - Time-of-day volume Z-score analysis for enhanced zone strength assessment
● Adaptive Decay System - Intelligent zone lifecycle management based on touch frequency and recency
● Multi-Filter Architecture - Optional gradient, smoothing, and Difference of Gaussians (DoG) convolution filters
🔧Core Components
● Pivot Detection Engine - Advanced pivot identification with configurable left/right bars and ATR-normalized strength calculations
● Neural Network Binning - Price level clustering using CNN-inspired algorithms with ATR-based bin sizing
● Logistic Regression Model - 9-factor probability calculation including distance, width, volume, VWAP deviation, and trend analysis
● Zone Management System - Intelligent creation, merging, and decay algorithms for optimal zone lifecycle control
● Visualization Layer - Dynamic line drawing with opacity-based scoring and optional zone fills
🔥Key Features
● High-Probability Zone Detection - Automatically identifies institutional levels with reaction probabilities above configurable thresholds
● Real-Time Probability Scoring - Live calculation of zone reaction likelihood using advanced statistical modeling
● Session-Aware Analysis - Optional filtering to specific trading sessions for enhanced accuracy during active market hours
● Customizable Parameters - Full control over lookback periods, zone sensitivity, merge thresholds, and probability models
● Performance Optimized - Efficient processing with controlled update frequencies and pivot processing limits
● Non-Repainting Mode - Strict mode available for backtesting accuracy and live trading reliability
🎨Visualization
● Dynamic Zone Lines - Color-coded support and resistance levels with opacity reflecting zone strength and confidence scores
● Probability Labels - Real-time display of reaction probabilities, touch counts, and historical hit rates for active zones
● Zone Fills - Optional semi-transparent zone highlighting for enhanced visual clarity and immediate pattern recognition
● Adaptive Styling - Automatic color and opacity adjustments based on zone scoring and statistical significance
📖Usage Guidelines
● Lookback Bars - Default 500, Range 100-1000, Controls the historical data window for pivot analysis and zone calculation
● Pivot Left/Right - Default 3, Range 1-10, Defines the pivot detection sensitivity and confirmation requirements
● Bin Size ATR units - Default 0.25, Range 0.1-2.0, Controls price level clustering granularity for zone creation
● Base Zone Half-Width ATR units - Default 0.25, Range 0.1-1.0, Sets the minimum zone width in ATR units for institutional level boundaries
● Zone Merge IoU Threshold - Default 0.5, Range 0.1-0.9, Intersection over Union threshold for automatic zone merging algorithms
● Max Active Zones - Default 5, Range 3-20, Maximum number of zones displayed simultaneously to prevent chart clutter
● Probability Threshold for Labels - Default 0.6, Range 0.3-0.9, Minimum reaction probability required for zone label display and alerts
● Distance Weight w1 - Controls influence of price distance from zone center on reaction probability
● Width Weight w2 - Adjusts impact of zone width on probability calculations
● Volume Weight w3 - Modifies volume Z-score influence on zone strength assessment
● VWAP Weight w4 - Controls VWAP deviation impact on institutional level significance
● Touch Count Weight w5 - Adjusts influence of historical zone interactions on probability scoring
● Hit Rate Weight w6 - Controls prior success rate impact on future reaction likelihood predictions
● Wick Penetration Weight w7 - Modifies wick penetration analysis influence on probability calculations
● Trend Weight w8 - Adjusts trend context impact using ADX analysis for directional bias assessment
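To illustrate how such weights combine, here is a reduced sketch of a weighted logistic score. The features, weights, and intercept below are illustrative stand-ins; the indicator's nine fitted factors are proprietary.
```pine
//@version=6
indicator("Zone reaction probability sketch")
sigmoid(float x) =>
    1.0 / (1.0 + math.exp(-x))

// Stand-in features for a hypothetical zone centered at a 50-bar mean.
level = ta.sma(close, 50)
atr = ta.atr(14)
dist = math.abs(close - level) / atr                          // distance factor
volZ = (volume - ta.sma(volume, 20)) / ta.stdev(volume, 20)   // volume z-score
vwapDev = (close - ta.vwap) / atr                             // VWAP deviation

// Weighted sum plus intercept, squashed to a probability.
z = -1.0 + 1.2 / (1.0 + dist) + 0.4 * volZ + 0.3 * vwapDev
plot(sigmoid(z), "Reaction probability")
```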
✅Best Use Cases
● Swing Trading Entries - Enter positions at high-probability institutional zones with 60%+ reaction scores
● Scalping Opportunities - Quick entries and exits around frequently tested institutional levels
● Risk Management - Use zones as dynamic stop-loss and take-profit levels based on institutional behavior
● Market Structure Analysis - Identify key institutional levels that define current market structure and sentiment
● Confluence Trading - Combine with other technical indicators for high-probability trade setups
● Session-Based Strategies - Focus analysis during high-volume sessions for maximum effectiveness
⚠️Limitations
● Historical Pattern Dependency - Algorithm effectiveness relies on historical patterns that may not repeat in changing market conditions
● Computational Intensity - Complex calculations may impact chart performance on lower-end devices or with multiple indicators
● Probability Estimates - Reaction probabilities are statistical estimates and do not guarantee actual market outcomes
● Session Sensitivity - Performance may vary significantly between different market sessions and volatility regimes
● Parameter Sensitivity - Results can be highly dependent on input parameters requiring optimization for different instruments
💡What Makes This Unique
● CNN Architecture - First indicator to apply convolutional neural network principles to institutional-level detection
● Real-Time ML Scoring - Live machine learning probability calculations for each zone interaction
● Advanced Zone Management - Sophisticated algorithms for zone lifecycle management and automatic optimization
● Statistical Rigor - Comprehensive 9-factor logistic regression model with extensive backtesting validation
● Performance Optimization - Efficient processing algorithms designed for real-time trading applications
🔬How It Works
● Multi-timeframe pivot identification - Uses configurable sensitivity parameters for advanced pivot detection
● ATR-normalized strength calculations - Standardizes pivot significance across different volatility regimes
● Volume Z-score integration - Enhanced pivot weighting based on time-of-day volume patterns
● Price level clustering - Neural network binning algorithms with ATR-based sizing for zone creation
● Recency decay applications - Weights recent pivots more heavily than historical data for relevance
● Statistical filtering - Eliminates low-significance price levels and reduces market noise
● Dynamic zone generation - Creates zones from statistically significant pivot clusters with minimum support thresholds
● IoU-based merging algorithms - Combines overlapping zones while maintaining accuracy using Intersection over Union (see the sketch after this list)
● Adaptive decay systems - Automatic removal of outdated or low-performing zones for optimal performance
● 9-factor logistic regression - Incorporates distance, width, volume, VWAP, touch history, and trend analysis
● Real-time scoring updates - Zone interaction calculations with configurable threshold filtering
● Optional CNN filters - Gradient detection, smoothing, and Difference of Gaussians processing for enhanced accuracy
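As noted in the merging step above, here is a minimal Pine v5 sketch of 1-D Intersection over Union for price zones, assuming each zone is a simple [bottom, top] interval:

```pine
//@version=5
indicator("Zone IoU (sketch)", overlay = true)

// 1-D Intersection over Union for two price zones [bot, top]
iou(float bot1, float top1, float bot2, float top2) =>
    overlap = math.max(0.0, math.min(top1, top2) - math.max(bot1, bot2))
    union   = (top1 - bot1) + (top2 - bot2) - overlap
    union > 0 ? overlap / union : 0.0

// Example: zones 100-102 and 101-103 overlap by 1 over a union of 3
plot(iou(100.0, 102.0, 101.0, 103.0), "Example IoU")  // 0.333..., below a 0.5 merge threshold
```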
💡Note
This indicator represents advanced quantitative analysis and should be used by traders familiar with statistical modeling concepts. The probability scores are mathematical estimates based on historical patterns and should be combined with proper risk management and additional technical analysis for optimal trading decisions.
[blackcat] L2 Trend Linearity
OVERVIEW
The L2 Trend Linearity indicator is a sophisticated market analysis tool designed to help traders identify and visualize market trend linearity by analyzing price action relative to dynamic support and resistance zones. This powerful Pine Script indicator applies the Arnaud Legoux Moving Average (ALMA) algorithm to weighted price inputs to generate dynamic support/resistance zones that adapt to changing market conditions. By visualizing market zones through colored candles and histograms, the indicator provides clear visual cues about market momentum and potential trading opportunities. The script generates buy/sell signals based on zone crossovers, making it an invaluable tool for both technical analysis and automated trading strategies. Whether you're a day trader, swing trader, or algorithmic trader, this indicator can help you identify market regimes, support/resistance levels, and potential entry/exit points with greater precision.
FEATURES
Dynamic Support/Resistance Zones: Calculates dynamic support (bear market zone) and resistance (bull market zone) using weighted price calculations and ALMA smoothing
Visual Market Representation: Color-coded candles and histograms provide immediate visual feedback about market conditions
Smart Signal Generation: Automatic buy/sell signals generated from zone crossovers with clear visual indicators
Customizable Parameters: Four different ALMA smoothing parameters for various timeframes and trading styles
Multi-Timeframe Compatibility: Works across different timeframes from 1-minute to weekly charts
Real-time Analysis: Provides instant feedback on market momentum and trend direction
Clear Visual Cues: Green candles indicate bullish momentum, red candles indicate bearish momentum, and white candles indicate neutral conditions
Histogram Visualization: Blue histogram shows bear market zone (below support), aqua histogram shows bull market zone (above resistance)
Signal Labels: "B" labels mark buy signals (price crosses above resistance), "S" labels mark sell signals (price crosses below support)
Overlay Functionality: Works as an overlay indicator without cluttering the chart with unnecessary elements
Highly Customizable: All parameters can be adjusted to suit different trading strategies and market conditions
HOW TO USE
Add the Indicator to Your Chart
Open TradingView and navigate to your desired trading instrument
Click on "Indicators" in the top menu and select "New"
Search for "L2 Trend Linearity" or paste the Pine Script code
Click "Add to Chart" to apply the indicator
Configure the Parameters
ALMA Length Short: Set the short-term smoothing parameter (default: 3). Lower values provide more responsive signals but may generate more false signals
ALMA Length Medium: Set the medium-term smoothing parameter (default: 5). This provides a balance between responsiveness and stability
ALMA Length Long: Set the long-term smoothing parameter (default: 13). Higher values provide more stable signals but with less responsiveness
ALMA Length Very Long: Set the very long-term smoothing parameter (default: 21). This provides the most stable support/resistance levels
Understand the Visual Elements
Green Candles: Indicate bullish momentum when price is above the bear market zone (support)
Red Candles: Indicate bearish momentum when price is below the bull market zone (resistance)
White Candles: Indicate neutral market conditions when price is between support and resistance zones
Blue Histogram: Shows bear market zone when price is below support level
Aqua Histogram: Shows bull market zone when price is above resistance level
"B" Labels: Mark buy signals when price crosses above resistance
"S" Labels: Mark sell signals when price crosses below support
Identify Market Regimes
Bullish Regime: Price consistently above resistance zone with green candles and aqua histogram
Bearish Regime: Price consistently below support zone with red candles and blue histogram
Neutral Regime: Price oscillating between support and resistance zones with white candles
Generate Trading Signals
Buy Signals: Look for price crossing above the bull market zone (resistance) with confirmation from green candles
Sell Signals: Look for price crossing below the bear market zone (support) with confirmation from red candles
Confirmation: Always wait for confirmation from candle color changes before entering trades
Optimize for Different Timeframes
Scalping: Use shorter ALMA lengths (3-5) for 1-5 minute charts
Day Trading: Use medium ALMA lengths (5-13) for 15-60 minute charts
Swing Trading: Use longer ALMA lengths (13-21) for 1-4 hour charts
Position Trading: Use very long ALMA lengths (21+) for daily and weekly charts
LIMITATIONS
Whipsaw Markets: The indicator may generate false signals in choppy, sideways markets where price oscillates rapidly between support and resistance
Lagging Nature: Like all moving average-based indicators, there is inherent lag in the calculations, which may result in delayed signals
Not a Standalone Tool: This indicator should be used in conjunction with other technical analysis tools and risk management strategies
Market Structure Dependency: Performance may vary depending on market structure and volatility conditions
Parameter Sensitivity: Different markets may require different parameter settings for optimal performance
No Volume Integration: The indicator does not incorporate volume data, which could provide additional confirmation signals
Limited Backtesting: Pine Script limitations may restrict comprehensive backtesting capabilities
Not Suitable for All Instruments: May perform differently on stocks, forex, crypto, and futures markets
Requires Confirmation: Signals should always be confirmed with other indicators or price action analysis
Not Predictive: The indicator identifies current market conditions but does not predict future price movements
NOTES
ALMA Algorithm: The indicator uses the Arnaud Legoux Moving Average (ALMA) algorithm, which is known for its excellent smoothing capabilities and reduced lag compared to traditional moving averages
Weighted Price Calculations: The bear market zone uses (2 × low + close) / 3, while the bull market zone uses (high + 2 × close) / 3, providing more weight to recent price action (see the sketch after these notes)
Dynamic Zones: The support and resistance zones are dynamic and adapt to changing market conditions, making them more responsive than static levels
Color Psychology: The color scheme follows traditional trading psychology - green for bullish, red for bearish, and white for neutral
Signal Timing: The signals are generated on the close of each bar, ensuring they are based on complete price action
Label Positioning: Buy signals appear below the bar (red "B" label), while sell signals appear above the bar (green "S" label)
Multiple Timeframes: The indicator can be applied to multiple timeframes simultaneously for comprehensive analysis
Risk Management: Always use proper risk management techniques when trading based on indicator signals
Market Context: Consider the overall market context and trend direction when interpreting signals
Confirmation: Look for confirmation from other indicators or price action patterns before entering trades
Practice: Test the indicator on historical data before using it in live trading
Customization: Feel free to experiment with different parameter combinations to find what works best for your trading style
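As referenced in the weighted-price note above, here is a minimal Pine v5 sketch of the zone and signal logic these notes describe; the ALMA offset (0.85) and sigma (6) are TradingView's common defaults and the styling is illustrative, not necessarily the original script's settings:

```pine
//@version=5
indicator("Trend Linearity Zones (sketch)", overlay = true)

bearPrice = (2 * low + close) / 3   // support-side weighted price
bullPrice = (high + 2 * close) / 3  // resistance-side weighted price

// ALMA smoothing of the weighted prices into dynamic zones
support    = ta.alma(bearPrice, 13, 0.85, 6)
resistance = ta.alma(bullPrice, 13, 0.85, 6)

plot(support, "Bear Market Zone", color.blue)
plot(resistance, "Bull Market Zone", color.aqua)

// Crossover signals analogous to the "B"/"S" labels described above
plotshape(ta.crossover(close, resistance), "Buy",  style = shape.labelup,   location = location.belowbar, color = color.red,   text = "B")
plotshape(ta.crossunder(close, support),   "Sell", style = shape.labeldown, location = location.abovebar, color = color.green, text = "S")
```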
THANKS
Special thanks to the TradingView community and the Pine Script developers for creating such a powerful and flexible platform for technical analysis. This indicator builds upon the foundation of the ALMA algorithm and various moving average techniques developed by technical analysis pioneers. The concept of dynamic support and resistance zones has been refined over decades of market analysis, and this script represents a modern implementation of these timeless principles. We acknowledge the contributions of all traders and developers who have contributed to the evolution of technical analysis and continue to push the boundaries of what's possible with algorithmic trading tools.
Risk-Adjusted Momentum Oscillator
# Risk-Adjusted Momentum Oscillator (RAMO): Momentum Analysis with Integrated Risk Assessment
## 1. Introduction
Momentum indicators have been fundamental tools in technical analysis since the pioneering work of Wilder (1978) and continue to play crucial roles in systematic trading strategies (Jegadeesh & Titman, 1993). However, traditional momentum oscillators suffer from a critical limitation: they fail to account for the risk context in which momentum signals occur. This oversight can lead to significant drawdowns during periods of market stress, as documented extensively in the behavioral finance literature (Kahneman & Tversky, 1979; Shefrin & Statman, 1985).
The Risk-Adjusted Momentum Oscillator addresses this gap by incorporating real-time drawdown metrics into momentum calculations, creating a self-regulating system that automatically adjusts signal sensitivity based on current risk conditions. This approach aligns with modern portfolio theory's emphasis on risk-adjusted returns (Markowitz, 1952) and reflects the sophisticated risk management practices employed by institutional investors (Ang, 2014).
## 2. Theoretical Foundation
### 2.1 Momentum Theory and Market Anomalies
The momentum effect, first systematically documented by Jegadeesh & Titman (1993), represents one of the most robust anomalies in financial markets. Subsequent research has confirmed momentum's persistence across various asset classes, time horizons, and geographic markets (Fama & French, 1996; Asness, Moskowitz & Pedersen, 2013). However, momentum strategies are characterized by significant time-varying risk, with particularly severe drawdowns during market reversals (Barroso & Santa-Clara, 2015).
### 2.2 Drawdown Analysis and Risk Management
Maximum drawdown, defined as the peak-to-trough decline in portfolio value, serves as a critical risk metric in professional portfolio management (Young, 1991). Research by Chekhlov, Uryasev & Zabarankin (2005) demonstrates that drawdown-based risk measures provide superior downside protection compared to traditional volatility metrics. The integration of drawdown analysis into momentum calculations represents a natural evolution toward more sophisticated risk-aware indicators.
### 2.3 Adaptive Smoothing and Market Regimes
The concept of adaptive smoothing in technical analysis draws from the broader literature on regime-switching models in finance (Hamilton, 1989). Perry Kaufman's Adaptive Moving Average (1995) pioneered the application of efficiency ratios to adjust indicator responsiveness based on market conditions. RAMO extends this concept by incorporating volatility-based adaptive smoothing, allowing the indicator to respond more quickly during high-volatility periods while maintaining stability during quiet markets.
## 3. Methodology
### 3.1 Core Algorithm Design
The RAMO algorithm consists of several interconnected components:
#### 3.1.1 Risk-Adjusted Momentum Calculation
The fundamental innovation of RAMO lies in its risk adjustment mechanism:
Risk_Factor = 1 - (Current_Drawdown / Maximum_Drawdown × Scaling_Factor)
Risk_Adjusted_Momentum = Raw_Momentum × max(Risk_Factor, 0.05)
This formulation ensures that momentum signals are dampened during periods of high drawdown relative to historical maximums, implementing an automatic risk management overlay as advocated by modern portfolio theory (Markowitz, 1952).
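The following Pine v5 sketch illustrates this mechanism; the rolling-window drawdown definition is an assumption for the example, as the indicator's exact drawdown computation is not shown here:

```pine
//@version=5
indicator("RAMO Risk Adjustment (sketch)")

scaling  = input.float(1.0, "Scaling Factor")
lookback = input.int(252, "Drawdown Lookback")

// Rolling drawdown from the highest close in the window (assumed definition)
peak      = ta.highest(close, lookback)
currentDD = (peak - close) / peak
maxDD     = ta.highest(currentDD, lookback)

rawMom     = close - close[14]
riskFactor = maxDD > 0 ? 1.0 - (currentDD / maxDD) * scaling : 1.0
adjMom     = rawMom * math.max(riskFactor, 0.05)  // 0.05 floor keeps some signal alive
plot(adjMom, "Risk-Adjusted Momentum")
```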
#### 3.1.2 Multi-Algorithm Momentum Framework
RAMO supports three distinct momentum calculation methods:
1. Rate of Change: Traditional percentage-based momentum (Pring, 2002)
2. Price Momentum: Absolute price differences
3. Log Returns: Logarithmic returns preferred for volatile assets (Campbell, Lo & MacKinlay, 1997)
This multi-algorithm approach accommodates different asset characteristics and volatility profiles, addressing the heterogeneity documented in cross-sectional momentum studies (Asness et al., 2013).
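A sketch of how the three methods might be expressed in Pine v5 follows; input names and defaults are illustrative:

```pine
//@version=5
indicator("RAMO Momentum Methods (sketch)")

method = input.string("Rate of Change", "Method", options = ["Rate of Change", "Price Momentum", "Log Returns"])
n      = input.int(14, "Momentum Period")

rawMomentum(float src, int len, string m) =>
    switch m
        "Rate of Change" => 100 * (src - src[len]) / src[len]  // percentage momentum
        "Price Momentum" => src - src[len]                     // absolute difference
        "Log Returns"    => 100 * math.log(src / src[len])     // log-return momentum

plot(rawMomentum(close, n, method), "Raw Momentum")
```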
### 3.2 Leading Indicator Components
#### 3.2.1 Momentum Acceleration Analysis
The momentum acceleration component calculates the second derivative of momentum, providing early signals of trend changes:
Momentum_Acceleration = EMA(Momentum_t - Momentum_{t-n}, n)
This approach draws from the physics concept of acceleration and has been applied successfully in financial time series analysis (Treadway, 1969).
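A minimal Pine v5 rendering of this acceleration measure, assuming simple price-difference momentum as the input series:

```pine
//@version=5
indicator("Momentum Acceleration (sketch)")
n        = input.int(10, "Period")
mom      = close - close[n]          // first difference: momentum
momAccel = ta.ema(mom - mom[n], n)   // smoothed second difference: acceleration
plot(momAccel, "Momentum Acceleration")
```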
#### 3.2.2 Linear Regression Prediction
RAMO incorporates linear regression-based prediction to project momentum values forward:
Predicted_Momentum = LinReg_Value + (LinReg_Slope × Forward_Offset)
This predictive component aligns with the literature on technical analysis forecasting (Lo, Mamaysky & Wang, 2000) and provides leading signals for trend changes.
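A sketch of this projection using Pine's built-in linear regression, where the per-bar slope is recovered from two adjacent regression values:

```pine
//@version=5
indicator("Momentum Projection (sketch)")
len = input.int(20, "Regression Length")
fwd = input.int(3,  "Forward Offset")

mom    = close - close[10]
lrNow  = ta.linreg(mom, len, 0)  // fitted value at the current bar
lrPrev = ta.linreg(mom, len, 1)  // fitted value one bar back along the line
slope  = lrNow - lrPrev          // per-bar slope of the regression line

plot(lrNow + slope * fwd, "Predicted Momentum")
plot(mom, "Momentum", color.gray)
```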
#### 3.2.3 Volume-Based Exhaustion Detection
The exhaustion detection algorithm identifies potential reversal points by analyzing the relationship between momentum extremes and volume patterns:
Exhaustion = |Momentum| > Threshold AND Volume < SMA(Volume, 20)
This approach reflects the established principle that sustainable price movements require volume confirmation (Granville, 1963; Arms, 1989).
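A minimal sketch of this volume filter in Pine v5, assuming a stdev-normalized momentum series for the extremes test:

```pine
//@version=5
indicator("Exhaustion Detection (sketch)")
thr = input.float(2.0, "Momentum Threshold")

mom     = close - close[10]
momNorm = mom / ta.stdev(mom, 50)  // stdev-normalized momentum
exhaustion = math.abs(momNorm) > thr and volume < ta.sma(volume, 20)

plot(momNorm, "Normalized Momentum")
bgcolor(exhaustion ? color.new(color.orange, 80) : na)  // extreme momentum on thin volume
```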
### 3.3 Statistical Normalization and Robustness
RAMO employs Z-score normalization with outlier protection to ensure statistical robustness:
Z_Score = (Value - Mean) / Standard_Deviation
Normalized_Value = max(-3.5, min(3.5, Z_Score))
This normalization approach follows best practices in quantitative finance for handling extreme observations (Taleb, 2007) and ensures consistent signal interpretation across different market conditions.
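In Pine v5, this normalization might look like the following sketch:

```pine
//@version=5
indicator("Z-Score Normalization (sketch)")
len = input.int(100, "Normalization Window")
val = close - close[14]
z   = (val - ta.sma(val, len)) / ta.stdev(val, len)
plot(math.max(-3.5, math.min(3.5, z)), "Clamped Z-Score")  // winsorize outliers at ±3.5
```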
### 3.4 Adaptive Threshold Calculation
Dynamic thresholds are calculated using Bollinger Band methodology (Bollinger, 2001):
Upper_Threshold = Mean + (Multiplier × Standard_Deviation)
Lower_Threshold = Mean - (Multiplier × Standard_Deviation)
This adaptive approach ensures that signal thresholds adjust to changing market volatility, addressing the critique of fixed thresholds in technical analysis (Taylor & Allen, 1992).
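A sketch of these adaptive bands in Pine v5, applied to a generic oscillator series:

```pine
//@version=5
indicator("Adaptive Thresholds (sketch)")
len  = input.int(50, "Threshold Window")
mult = input.float(2.0, "Band Multiplier")

osc   = close - close[14]
basis = ta.sma(osc, len)
dev   = mult * ta.stdev(osc, len)

plot(osc, "Oscillator")
plot(basis + dev, "Upper Threshold", color.red)
plot(basis - dev, "Lower Threshold", color.green)
```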
## 4. Implementation Details
### 4.1 Adaptive Smoothing Algorithm
The adaptive smoothing mechanism adjusts the exponential moving average alpha parameter based on market volatility:
Volatility_Percentile = Percentrank(Volatility, 100)
Adaptive_Alpha = Min_Alpha + ((Max_Alpha - Min_Alpha) × Volatility_Percentile / 100)
This approach ensures faster response during volatile periods while maintaining smoothness during stable conditions, implementing the adaptive efficiency concept pioneered by Kaufman (1995).
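A minimal Pine v5 sketch of this volatility-scaled alpha driving an EMA-style recursion; the volatility proxy (close stdev) is an assumption:

```pine
//@version=5
indicator("Adaptive Smoothing (sketch)")
minAlpha = input.float(0.05, "Min Alpha")
maxAlpha = input.float(0.50, "Max Alpha")

vol    = ta.stdev(close, 20)
volPct = ta.percentrank(vol, 100)  // 0..100 volatility rank
alpha  = minAlpha + (maxAlpha - minAlpha) * volPct / 100

var float sm = na
sm := na(sm) ? close : alpha * close + (1 - alpha) * sm  // EMA recursion with dynamic alpha
plot(sm, "Adaptive EMA")
```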
### 4.2 Risk Environment Classification
RAMO classifies market conditions into three risk environments:
- Low Risk: Current_DD < 30% × Max_DD
- Medium Risk: 30% × Max_DD ≤ Current_DD < 70% × Max_DD
- High Risk: Current_DD ≥ 70% × Max_DD
This classification system enables conditional signal generation, with long signals filtered during high-risk periods, an approach consistent with institutional risk management practices (Ang, 2014).
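A sketch of this three-state classification in Pine v5, again assuming a rolling-window drawdown definition:

```pine
//@version=5
indicator("Risk Environment (sketch)")
lookback = input.int(252, "Drawdown Lookback")

peak      = ta.highest(close, lookback)
currentDD = (peak - close) / peak
maxDD     = ta.highest(currentDD, lookback)
ddRatio   = maxDD > 0 ? currentDD / maxDD : 0.0

highRisk = ddRatio >= 0.70
medRisk  = ddRatio >= 0.30 and not highRisk
longOk   = not highRisk  // gate long entries in high-risk regimes

plot(ddRatio, "DD / Max DD")
bgcolor(highRisk ? color.new(color.red, 85) : medRisk ? color.new(color.yellow, 85) : na)
```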
## 5. Signal Generation and Interpretation
### 5.1 Entry Signal Logic
RAMO generates enhanced entry signals through multiple confirmation layers:
1. Primary Signal: Crossover between indicator and signal line
2. Risk Filter: Confirmation of favorable risk environment for long positions
3. Leading Component: Early warning signals via acceleration analysis
4. Exhaustion Filter: Volume-based reversal detection
This multi-layered approach addresses the false signal problem common in traditional technical indicators (Brock, Lakonishok & LeBaron, 1992).
### 5.2 Divergence Analysis
RAMO incorporates both traditional and leading divergence detection:
- Traditional Divergence: Price and indicator divergence over 3-5 periods
- Slope Divergence: Momentum slope versus price direction
- Acceleration Divergence: Changes in momentum acceleration
This comprehensive divergence analysis framework draws from Elliott Wave theory (Prechter & Frost, 1978) and momentum divergence literature (Murphy, 1999).
## 6. Empirical Advantages and Applications
### 6.1 Risk-Adjusted Performance
The risk adjustment mechanism addresses the fundamental criticism of momentum strategies: their tendency to experience severe drawdowns during market reversals (Daniel & Moskowitz, 2016). By automatically reducing position sizing during high-drawdown periods, RAMO implements a form of dynamic hedging consistent with portfolio insurance concepts (Leland, 1980).
### 6.2 Regime Awareness
RAMO's adaptive components enable regime-aware signal generation, addressing the regime-switching behavior documented in financial markets (Hamilton, 1989; Guidolin, 2011). The indicator automatically adjusts its parameters based on market volatility and risk conditions, providing more reliable signals across different market environments.
### 6.3 Institutional Applications
The sophisticated risk management overlay makes RAMO particularly suitable for institutional applications where drawdown control is paramount. The indicator's design philosophy aligns with the risk budgeting approaches used by hedge funds and institutional investors (Roncalli, 2013).
## 7. Limitations and Future Research
### 7.1 Parameter Sensitivity
Like all technical indicators, RAMO's performance depends on parameter selection. While default parameters are optimized for broad market applications, asset-specific calibration may enhance performance. Future research should examine optimal parameter selection across different asset classes and market conditions.
### 7.2 Market Microstructure Considerations
RAMO's effectiveness may vary across different market microstructure environments. High-frequency trading and algorithmic market making have fundamentally altered market dynamics (Aldridge, 2013), potentially affecting momentum indicator performance.
### 7.3 Transaction Cost Integration
Future enhancements could incorporate transaction cost analysis to provide net-return-based signals, addressing the implementation shortfall documented in practical momentum strategy applications (Korajczyk & Sadka, 2004).
## References
Aldridge, I. (2013). *High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems*. 2nd ed. Hoboken, NJ: John Wiley & Sons.
Ang, A. (2014). *Asset Management: A Systematic Approach to Factor Investing*. New York: Oxford University Press.
Arms, R. W. (1989). *The Arms Index (TRIN): An Introduction to the Volume Analysis of Stock and Bond Markets*. Homewood, IL: Dow Jones-Irwin.
Asness, C. S., Moskowitz, T. J., & Pedersen, L. H. (2013). Value and momentum everywhere. *Journal of Finance*, 68(3), 929-985.
Barroso, P., & Santa-Clara, P. (2015). Momentum has its moments. *Journal of Financial Economics*, 116(1), 111-120.
Bollinger, J. (2001). *Bollinger on Bollinger Bands*. New York: McGraw-Hill.
Brock, W., Lakonishok, J., & LeBaron, B. (1992). Simple technical trading rules and the stochastic properties of stock returns. *Journal of Finance*, 47(5), 1731-1764.
Campbell, J. Y., Lo, A. W., & MacKinlay, A. C. (1997). *The Econometrics of Financial Markets*. Princeton, NJ: Princeton University Press.
Chekhlov, A., Uryasev, S., & Zabarankin, M. (2005). Drawdown measure in portfolio optimization. *International Journal of Theoretical and Applied Finance*, 8(1), 13-58.
Daniel, K., & Moskowitz, T. J. (2016). Momentum crashes. *Journal of Financial Economics*, 122(2), 221-247.
Fama, E. F., & French, K. R. (1996). Multifactor explanations of asset pricing anomalies. *Journal of Finance*, 51(1), 55-84.
Granville, J. E. (1963). *Granville's New Key to Stock Market Profits*. Englewood Cliffs, NJ: Prentice-Hall.
Guidolin, M. (2011). Markov switching models in empirical finance. In D. N. Drukker (Ed.), *Missing Data Methods: Time-Series Methods and Applications* (pp. 1-86). Bingley: Emerald Group Publishing.
Hamilton, J. D. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle. *Econometrica*, 57(2), 357-384.
Jegadeesh, N., & Titman, S. (1993). Returns to buying winners and selling losers: Implications for stock market efficiency. *Journal of Finance*, 48(1), 65-91.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. *Econometrica*, 47(2), 263-291.
Kaufman, P. J. (1995). *Smarter Trading: Improving Performance in Changing Markets*. New York: McGraw-Hill.
Korajczyk, R. A., & Sadka, R. (2004). Are momentum profits robust to trading costs? *Journal of Finance*, 59(3), 1039-1082.
Leland, H. E. (1980). Who should buy portfolio insurance? *Journal of Finance*, 35(2), 581-594.
Lo, A. W., Mamaysky, H., & Wang, J. (2000). Foundations of technical analysis: Computational algorithms, statistical inference, and empirical implementation. *Journal of Finance*, 55(4), 1705-1765.
Markowitz, H. (1952). Portfolio selection. *Journal of Finance*, 7(1), 77-91.
Murphy, J. J. (1999). *Technical Analysis of the Financial Markets: A Comprehensive Guide to Trading Methods and Applications*. New York: New York Institute of Finance.
Prechter, R. R., & Frost, A. J. (1978). *Elliott Wave Principle: Key to Market Behavior*. Gainesville, GA: New Classics Library.
Pring, M. J. (2002). *Technical Analysis Explained: The Successful Investor's Guide to Spotting Investment Trends and Turning Points*. 4th ed. New York: McGraw-Hill.
Roncalli, T. (2013). *Introduction to Risk Parity and Budgeting*. Boca Raton, FL: CRC Press.
Shefrin, H., & Statman, M. (1985). The disposition to sell winners too early and ride losers too long: Theory and evidence. *Journal of Finance*, 40(3), 777-790.
Taleb, N. N. (2007). *The Black Swan: The Impact of the Highly Improbable*. New York: Random House.
Taylor, M. P., & Allen, H. (1992). The use of technical analysis in the foreign exchange market. *Journal of International Money and Finance*, 11(3), 304-314.
Treadway, A. B. (1969). On rational entrepreneurial behavior and the demand for investment. *Review of Economic Studies*, 36(2), 227-239.
Wilder, J. W. (1978). *New Concepts in Technical Trading Systems*. Greensboro, NC: Trend Research.
Young, T. W. (1991). Calmar ratio: A smoother tool. *Futures*, 20(1), 40.
TAPDA Hourly Open Lines (Candle Body Box)
-What is TAPDA?
TAPDA (Time and Price Displacement Analysis) is based on the belief that markets are driven by algorithms that respond to key time-based price levels, such as session opens. Traders who follow TAPDA track these levels to anticipate price movements, reversals, and breakouts, aligning their strategies with the patterns left by these underlying algorithms. By plotting lines at specific hourly opens, the indicator allows traders to visualize where the market may react, providing a structured way to trade alongside the algorithmic flow.
***************
**Sauce Alert**: "TAPDA levels essentially act like algorithmic support and resistance." By plotting these hourly opens, the TAPDA Hourly Open Lines indicator helps traders track where algorithms might engage with the market.
***************
-How It Works:
The indicator draws a "candle body box" at selected hours, marking the open and close prices to highlight price ranges at significant times. This creates dynamic zones that reflect market sentiment and structure throughout the day. TAPDA levels are commonly respected by price, making them useful for identifying potential entry points, stop placements, and trend reversals.
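A minimal Pine v5 sketch of the core idea for a single hour, assuming a 1-hour chart; the published indicator's multi-hour inputs, labels, offsets, and cleanup logic are omitted:

```pine
//@version=5
indicator("Hourly Open Body Box (sketch)", overlay = true)
targetHour = input.int(9, "Hour (exchange time)", minval = 0, maxval = 23)

isTarget  = hour(time) == targetHour
newTarget = isTarget and not isTarget[1]  // first bar of the chosen hour

if newTarget
    top = math.max(open, close)
    bot = math.min(open, close)
    // Extend 24 bars right; on a 1-hour chart that is 24 hours forward
    box.new(bar_index, top, bar_index + 24, bot, border_color = color.teal, bgcolor = color.new(color.teal, 85))
```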
-Key Features:
Customizable Hour Levels – Enable or disable specific times to fit your trading approach.
Color & Label Control – Assign unique colors and labels to each hour for better visualization.
Line Extension – Project lines for up to 24 hours into the future to track key levels.
Dynamic Cleanup – Old lines automatically delete to maintain chart clarity.
Manual Time Offset – Adjust for broker or server time zone differences.
-Current Development:
This indicator is still in development, with further updates planned to enhance functionality and customization. If you find this script helpful, feel free to copy the code and stay tuned for new features and improvements!
[Pandora] Error Function Treasure Trove - ERF/ERFI/Sigmoids+
PRAISE:
At this time, I have to graciously thank the wonderful minds behind the new "Pine Profiler Mode" (PPM). Directly prior to this release, it allowed me to ascertain script performance even more precisely. While I usually write mostly in highly optimized Pine code, PPM visually identified a few bottlenecks that would otherwise be hard to identify. Anyone who contributed to PPM's creation and testing before release... BRAVO!!! I commend all of those who assisted in its state-of-the-art engineering and inception, well done!
BACKSTORY:
This script is specifically being released in defense of another member, an exceptionally unique PhD. It was brought to my attention that a script-mod-event occurred, regarding the publishing of a measly antiquated error function (ERF) calculation within his script. This sadly resulted in the now former member jumping ship after receiving unmannerly responses amidst his curious inquiries as to why his erf() was modded. To forbid rusty and rudimentary formulations because a mod-on-duty is temporarily offended by a non-nefarious release of code is, in MY opinion, an injustice to principles of perpetuating open-source code intended to benefit thousands to millions of community members. While Pine is the heart and soul of TV, the mathematical concepts contributed from the minds of members are the inspirational fuel of curiosity that powers its pertinent reason to exist and evolve.
It is an indisputable fact that most members are not greatly skilled Pine Poets. Many members may be incapable of innovating robust function code in Pine, even if they have one or more PhDs. We ALL come from various disciplines of mathematical comprehension and education. Some mathematicians are not greatly skilled at coding, while some coders are not exceptional at math. So... what am I to do to attempt to resolve this circumstantial challenge??? Those who know me best are aware that I will always side with "the right side of history" in order to accomplish my primary self-defined missions I choose to accept. Serving as an algorithmic advocate, I felt compelled to intercede by compiling numerous error functions into elegant code of very high caliber that any and every TV member may choose to employ, so this ERROR never happens again.
After weeks of contemplation into algorithms I knew little about, I prioritized myself to resolve an unanticipated matter by creating advanced formulas of exquisitely crafted error functions refined to the best of my current abilities. My aversion for unresolved problems motivated me to eviscerate error function insufficiencies with many more rigid formulations beyond what is thought to exist. ERF needed a proper algorithmic exorcism anyways. In my furiosity, I contemplated an array of madMAXimum diplomatic demolition methods, choosing the chain saw massacre technique to slaughter dysfunctionalities I encountered on a battered ERF roadway. This resulted in prolific solutions that should assuredly endure the test of time. Poetically, as you will come to see, I am ripping the lid off of Pandora's box of error functions in this case to correct wrongs into a splendid bundle of rights for members.
INTENTION:
Error function (ERF) enthusiasts... PREPARE FOR GLORY!! The specific purpose of this script is to deprecate classic error functions with the creation of a fierce and formidable army of superior formulations, each having varying attributes of computational complexity with differing absolute error ranges in their results for multiple compute scenarios. This is NOT an indicator... It is intended to allow members to embark on endeavors to advance the profound knowledge base of this growing worldwide community of 60+ million inquisitive minds. For those of you who believe computational mathematics and statistics is near completion at its finest; I am here to inform you, this is ridiculous to ponder. We are nowhere near statistical excellence that can and will exist eventually. At this time, metaphorically speaking, we are merely scratching microns off of the surface of the skin of a statistical apple Isaac Newton once pondered.
THIS RELEASE:
Following weeks of pondering methodical experiments beyond the ordinary, I am liberating these wild notions of my error function explorations to the entire globe as copyleft code, not just Pine. This Pandora's basket of ERFs is being openly disclosed for the sake of the sanctity of mathematics, empirical science (not the garbage we are told by CONTROLocrats to blindly trust), revolutionary cutting edge engineering, cosmology, physics, information technology, artificial intelligence, and EVERY other mathematical branch of human knowledge being discovered over centuries. I do believe James Glaisher would favor my aims concerning ERF aspirations embracing the "Power of Pine".
The included functions are intended for TV members to use in any way they see fit. This is a gift to ALL members to foster future innovative excellence on this platform. Any attempt to moderate this code without notification of "self-evident clear and just cause" will be considered an irrevocable egregious action. The original foundational PURPOSE of establishing script moderation (I clearly remember) was primarily to maintain active vigilance over a growing community against intentional nefarious actions and/or behaviors in blatant disrespect to other author's works AND also thwart rampant copypasting bandit operations, all while accommodating balanced principles of fairness for an educational community cause via open source publishing that should support future algorithmic inventions well beyond my lifespan.
APPLICATIONS:
The related error functions are used in probability theory, statistics, and numerous engineering and scientific disciplines. Their key characteristics and applications are innumerable in computational realms. Their versatility and significance make them a fundamental tool in arenas of quantitative analysis and scientific research...
Probability Theory - Is widely used in probability theory to calculate probabilities and quantiles of the normal distribution.
Statistics - It's related to the Gaussian integral and plays a crucial role in statistics, especially in hypothesis testing and confidence interval calculations.
Physics - In physics, it arises in the study of diffusion equations, quantum mechanics, and heat conduction problems.
Engineering - Applications exist in engineering disciplines such as signal processing, control theory, and telecommunications.
Error Analysis - It's employed in error analysis and uncertainty quantification.
Numeric Approximations - Due to its lack of a closed-form expression, numerical methods are often employed to approximate erf/erfi(); one classic polynomial approximation is sketched below
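For a concrete reference point (as noted in the list above), here is the classic Abramowitz & Stegun formula 7.1.26 in Pine v5, a well-known textbook approximation with maximum absolute error around 1.5e-7; it is emphatically not one of this script's higher-precision formulations:

```pine
//@version=5
indicator("erf approximation (sketch)")

// Abramowitz & Stegun 7.1.26; max absolute error ~1.5e-7
erf(float x) =>
    s  = x < 0 ? -1.0 : 1.0                 // odd symmetry: erf(-x) = -erf(x)
    ax = math.abs(x)
    t  = 1.0 / (1.0 + 0.3275911 * ax)
    p  = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t
    s * (1.0 - p * math.exp(-ax * ax))

plot(erf(bar_index / 100.0 - 5.0), "erf")   // sweeps inputs over roughly [-5, 5]
```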
AI, LLMs, & MACHINE LEARNING:
The error function (ERF) is indispensable to various AI applications, particularly due to its relation to Gaussian distributions and error analysis. It is used in Gaussian processes for regression and classification, probabilistic inference for Bayesian networks, soft margin computation in SVMs, neural networks involving Gaussian activation functions or noise, and clustering algorithms like Gaussian Mixture Models. Improved ERF approximations can enhance precision in these applications, reduce computational complexity, handle outliers and noise better, and improve optimization and convergence, possibly leading to more accurate, efficient, and robust AI systems.
BONUS ALGORITHMS:
While ERFs are versatile, their opposite also exists in the form of inverse error functions (ERFIs). I have also included a modified form of the inverse fisher transform alongside MY sigmoid (sigmyod). I am uncertain what sigmyod() may be used for, but it's a culmination of my examinations deep into "sigmoid domains", something I am fascinated by. Whatever implications it may possess, I am unveiling it along with its cousin functions. For curious minds, this quality of composition seen here is ideally what underlies what I would term "Pandora functionality" that empowers my Pandora indication. I go through hordes of formulations, testing, and inspection to find what appears to be the most beneficial logical/mathematical equation to apply...
SCRIPT OPERATION:
To showcase the characteristics and performance of my ERF/ERFI formulations, I devised a multi-modal script. By using bar_index , I generated a broad sequence of numeric values to input into the first ERF/ERFI parameter. These sequences allow you to inspect the contours of the error function's outputs for both ERF and ERFI. When combined with compute-intensive precision functions (CIPFs), the polynomial function output values can be subtracted from my CIPFs to obtain results of absolute error, displaying the accuracy of the many polynomial estimation functions I tuned in testing for Pine's float environment.
A host of numeric input settings are wildly adjustable to inspect values/curvatures across the range of numeric input sequences. Very large numbers, such as Divisor:100,000,100/Offset:200,000,000 for ERF modes or... Divisor:100,000,100/Offset:100,000,000 for ERFI modes, will display miniscule output values calculated from input values in close proximity to 0.0 for the various estimates, similar to a microscope. ERFI approximations very near in proximity to +/-1.0 will always yield large deviations of absolute error. Dragging/zooming your chart or using the Offset input will aid with visually clipping off those ERFI extremes where float precision functions cannot suffice.
NOTICE:
perf() and perfi() are intended for precision computation (as good as it basically gets) in a float environment. However, they are CPU intensive (especially perfi). I wouldn't recommend these being used in ANY Pine script unless it's an "absolute necessity" to do so to accomplish your goal. I only built them to obtain "absolute error curvatures" of the error functions for the polynomial approximations. These are visible in the accuracy modes in the indicator Settings.
Bogdan Ciocoiu - Code runner
Description
The Code Runner is a hybrid indicator that leverages other pre-configured, integrated open-source algorithms to help traders spot regular and continuation divergences.
The Code Runner specialises in integrating some of the most popular oscillators well known for their accuracy when scalping using divergence strategies.
Uniqueness
The Code Runner stands out as a one-stop-shop pack of oscillator algorithms that traders can further customise to spot divergences.
The indicator's uniqueness stems from its capability to recast each algorithm onto the same scale. This is achieved by manually adjusting the outputs of each algorithm to fit a scale between +100 and -100.
Another benefit of the Code Runner comes from its standardisation of outputs, mainly consisting of lines. Showing lines enables traders to draw potential regular and continuation divergences quickly.
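One simple way such rescaling can be done, sketched in Pine v5 with a rolling min-max map onto -100..+100; the published script's per-algorithm adjustments are manual and may differ:

```pine
//@version=5
indicator("Rescale to ±100 (sketch)")

// Map any oscillator onto a common -100..+100 scale via a rolling min-max
rescale(float src, int len) =>
    lo = ta.lowest(src, len)
    hi = ta.highest(src, len)
    hi == lo ? 0.0 : 200.0 * (src - lo) / (hi - lo) - 100.0

plot(rescale(ta.rsi(close, 14), 100), "RSI rescaled", color.teal)
plot(rescale(ta.cci(close, 20), 100), "CCI rescaled", color.orange)
plot(100,  "Upper", color.gray)
plot(-100, "Lower", color.gray)
```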
The indicator has been pre-configured to support scalping at 1-5 minutes.
Open-source
The Code Runner builds on a number of open-source scripts and algorithms published by the TradingView community.
These algorithms are available in the public domain either in TradingView space or outside (given their popularity in the financial markets industry).
Adaptive Average Vortex Index [lastguru]
As a longtime fan of ADX, when looking at the Vortex Indicator I often wondered where the third line was. I have rarely seen anybody calculate it. So, here it is: Average Vortex Index - an ADX calculated from the Vortex Indicator. I interpret it similarly to the ADX indicator: higher values show a stronger trend. If you discover other interpretations or have suggestions, comments are welcome.
Both VI+ and VI- lines are also drawn. As I use adaptive length calculation in my other scripts (based on the libraries I've developed and published), I have also included the possibility to have an adaptive length here, so if you hate the idea of calculating ADX from VI, you can disable that line and just look at the adaptive Vortex Indicator.
Note that as with all my oscillators, all the lines here are renormalized to -1..1 range unlike the original Vortex Indicator computation. To do that for VI+ and VI- lines, I subtract 1 from their values. It does not change the shape or the amplitude of the lines.
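A minimal Pine v5 sketch of the idea, combining the standard Vortex Indicator with ADX-style RMA smoothing of the normalized VI spread; the adaptive-length machinery of the published script is omitted:

```pine
//@version=5
indicator("Average Vortex Index (sketch)")
len = input.int(14, "Length")

// Standard Vortex Indicator
vmPlus  = math.sum(math.abs(high - low[1]), len)
vmMinus = math.sum(math.abs(low - high[1]), len)
strSum  = math.sum(ta.tr, len)
vip = vmPlus / strSum
vim = vmMinus / strSum

// ADX-style third line: RMA-smoothed normalized spread between VI+ and VI-
avx = ta.rma(math.abs(vip - vim) / (vip + vim), len)

plot(vip - 1, "VI+ (renormalized)", color.green)  // shifted by -1 as described above
plot(vim - 1, "VI- (renormalized)", color.red)
plot(avx, "Average Vortex Index", color.white)
```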
Adaptation algorithms are roughly subdivided into two categories: classic Length Adaptations and Cycle Estimators (both are also implemented in separate libraries), and all are selected in the Adaptation dropdown. Length Adaptation, used in the Adaptive Moving Averages and the Adaptive Oscillators, tries to follow price movements and accelerate/decelerate accordingly (usually quite rapidly and over a huge range). Cycle Estimators, on the other hand, try to measure the cycle period of the current market, which does not reflect price movement or the rate of change (the rate of change may also differ depending on the cycle phase, but the cycle period itself usually changes slowly).
VIDYA - based on VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
VIDYA-RS - based on Vitali Apirine's modification of VIDYA algorithm (he calls it Relative Strength Moving Average). The period oscillates from the Upper Bound down (fast)
Kaufman Efficiency Scaling - based on Efficiency Ratio calculation originally used in KAMA
Fractal Adaptation - based on FRAMA by John F. Ehlers
MESA MAMA Cycle - based on MESA Adaptive Moving Average by John F. Ehlers
Pearson Autocorrelation* - based on Pearson Autocorrelation Periodogram by John F. Ehlers
DFT Cycle* - based on Discrete Fourier Transform Spectrum estimator by John F. Ehlers
Phase Accumulation* - based on Dominant Cycle from Phase Accumulation by John F. Ehlers
Length Adaptations usually take two parameters: Bound From (lower bound) and To (upper bound). These are the limits for Adaptation values. Note that the Cycle Estimators marked with asterisks (*) are very computationally intensive, so the bounds should not be set much higher than 50, otherwise you may receive a timeout error (also, it does not seem to be a useful thing to do, but you may correct me if I'm wrong).
The Cycle Estimators marked with asterisks (*) also have 3 checkboxes: HP (Highpass Filter), SS (Super Smoother) and HW (Hann Window). These enable or disable their internal prefilters, which are recommended by their author, John F. Ehlers. I do not know which combination works best, so you can experiment.
If no Adaptation is selected (None option), you can set Length directly. If an Adaptation is selected, then the Cycle multiplier can be set.
The oscillator also has the option to configure the internal smoothing function with the Window setting. By default, RMA is used (as in the ADX calculation). The Fast Default option uses half the length for smoothing. The Triangle, Hamming and Hann Window algorithms are some better smoothers suggested by John F. Ehlers.
After the oscillator, a Moving Average can be applied. The following Moving Averages are included: SMA, RMA, EMA, HMA, VWMA, 2-pole Super Smoother, 3-pole Super Smoother, Filt11, Triangle Window, Hamming Window, Hann Window, Lowpass, DSSS.
Postfilter options are applied last:
Stochastic - Stochastic
Super Smooth Stochastic - Super Smooth Stochastic (part of MESA Stochastic) by John F. Ehlers
Inverse Fisher Transform - Inverse Fisher Transform
Noise Elimination Technology - a simplified Kendall correlation algorithm "Noise Elimination Technology" by John F. Ehlers
Momentum - momentum (derivative)
Except for the Inverse Fisher Transform, all Postfilter algorithms can have a Length parameter. If it is not specified (set to 0), then the calculated Slow MA Length is used. If Filter/MA Length is less than 2 or Postfilter Length is less than 1, they are calculated as a multiplier of the calculated oscillator length.
More information on the algorithms is given in the code for the libraries used. I am also very grateful to other TradingView community members (they are also mentioned in the library code) without whom this script would not have been possible.
Keltner Channel Enhanced [DCAUT]
█ Keltner Channel Enhanced
📊 ORIGINALITY & INNOVATION
The Keltner Channel Enhanced represents an important advancement over standard Keltner Channel implementations by introducing dual flexibility in moving average selection for both the middle band and ATR calculation. While traditional Keltner Channels typically use EMA for the middle band and RMA (Wilder's smoothing) for ATR, this enhanced version provides access to 25+ moving average algorithms for both components, enabling traders to fine-tune the indicator's behavior to match specific market characteristics and trading approaches.
Key Advancements:
Dual MA Algorithm Flexibility: Independent selection of moving average types for middle band (25+ options) and ATR smoothing (25+ options), allowing optimization of both trend identification and volatility measurement separately
Enhanced Trend Sensitivity: Ability to use faster algorithms (HMA, T3) for middle band while maintaining stable volatility measurement with traditional ATR smoothing, or vice versa for different trading strategies
Adaptive Volatility Measurement: Choice of ATR smoothing algorithm affects channel responsiveness to volatility changes, from highly reactive (SMA, EMA) to smoothly adaptive (RMA, TEMA)
Comprehensive Alert System: Five distinct alert conditions covering breakouts, trend changes, and volatility expansion, enabling automated monitoring without constant chart observation
Multi-Timeframe Compatibility: Works effectively across all timeframes from intraday scalping to long-term position trading, with independent optimization of trend and volatility components
This implementation addresses a key limitation of standard Keltner Channels: the fixed EMA/RMA combination may not suit all market conditions or trading styles. By decoupling the trend component from volatility measurement and allowing independent algorithm selection, traders can create highly customized configurations for specific instruments and market phases.
📐 MATHEMATICAL FOUNDATION
Keltner Channel Enhanced uses a three-component calculation system that combines a flexible moving average middle band with ATR-based (Average True Range) upper and lower channels, creating volatility-adjusted trend-following bands.
Core Calculation Process:
1. Middle Band (Basis) Calculation:
The basis line is calculated using the selected moving average algorithm applied to the price source over the specified period:
basis = ma(source, length, maType)
Supported algorithms include EMA (standard choice, trend-biased), SMA (balanced and symmetric), HMA (reduced lag), WMA, VWMA, TEMA, T3, KAMA, and 17+ others.
2. Average True Range (ATR) Calculation:
ATR measures market volatility by calculating the average of true ranges over the specified period:
trueRange = max(high - low, abs(high - close[1]), abs(low - close[1]))
atrValue = ma(trueRange, atrLength, atrMaType)
ATR smoothing algorithm significantly affects channel behavior, with options including RMA (standard, very smooth), SMA (moderate smoothness), EMA (fast adaptation), TEMA (smooth yet responsive), and others.
3. Channel Calculation:
Upper and lower channels are positioned at specified multiples of ATR from the basis:
upperChannel = basis + (multiplier × atrValue)
lowerChannel = basis - (multiplier × atrValue)
Standard multiplier is 2.0, providing channels that dynamically adjust width based on market volatility.
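A minimal Pine v5 sketch of this three-component calculation with independent MA selection for both the basis and the ATR smoothing, reduced to four MA types for brevity (the full indicator offers 25+):

```pine
//@version=5
indicator("Keltner Channel Enhanced (sketch)", overlay = true)
length    = input.int(20, "Length")
atrLength = input.int(10, "ATR Length")
mult      = input.float(2.0, "Multiplier")
maType    = input.string("EMA", "Basis MA", options = ["EMA", "SMA", "HMA", "RMA"])
atrType   = input.string("RMA", "ATR MA",   options = ["EMA", "SMA", "HMA", "RMA"])

// Shared MA selector applied to both the basis and the ATR smoothing
ma(src, len, t) =>
    switch t
        "EMA" => ta.ema(src, len)
        "SMA" => ta.sma(src, len)
        "HMA" => ta.hma(src, len)
        =>       ta.rma(src, len)

basis = ma(close, length, maType)
atrV  = ma(ta.tr, atrLength, atrType)

plot(basis, "Basis", color.orange)
plot(basis + mult * atrV, "Upper Channel", color.teal)
plot(basis - mult * atrV, "Lower Channel", color.teal)
```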
Keltner Channel vs. Bollinger Bands - Key Differences:
While both indicators create volatility-based channels, they use fundamentally different volatility measures:
Keltner Channel (ATR-based):
Uses Average True Range to measure actual price movement volatility
Incorporates gaps and limit moves through true range calculation
More stable in trending markets, less prone to extreme compression
Better reflects intraday volatility and trading range
Typically fewer band touches, making touches more significant
More suitable for trend-following strategies
Bollinger Bands (Standard Deviation-based):
Uses statistical standard deviation to measure price dispersion
Based on closing prices only, doesn't account for intraday range
Can compress significantly during consolidation (squeeze patterns)
More touches in ranging markets
Better suited for mean-reversion strategies
Provides statistical probability framework (95% within 2 standard deviations)
Algorithm Combination Effects:
The interaction between middle band MA type and ATR MA type creates different indicator characteristics:
Trend-Focused Configuration (Fast MA + Slow ATR): Middle band uses HMA/EMA/T3, ATR uses RMA/TEMA, quick trend changes with stable channel width, suitable for trend-following
Volatility-Focused Configuration (Slow MA + Fast ATR): Middle band uses SMA/WMA, ATR uses EMA/SMA, stable trend with dynamic channel width, suitable for volatility trading
Balanced Configuration (Standard EMA/RMA): Classic Keltner Channel behavior, time-tested combination, suitable for general-purpose trend following
Adaptive Configuration (KAMA + KAMA): Self-adjusting indicator responding to efficiency ratio, suitable for markets with varying trend strength and volatility regimes
📊 COMPREHENSIVE SIGNAL ANALYSIS
Keltner Channel Enhanced provides multiple signal categories optimized for trend-following and breakout strategies.
Channel Position Signals:
Upper Channel Interaction:
Price Touching Upper Channel: Strong bullish momentum, price moving more than typical volatility range suggests, potential continuation signal in established uptrends
Price Breaking Above Upper Channel: Exceptional strength, price exceeding normal volatility expectations, consider adding to long positions or tightening trailing stops
Price Riding Upper Channel: Sustained strong uptrend, characteristic of powerful bull moves, stay with trend and avoid premature profit-taking
Price Rejection at Upper Channel: Momentum exhaustion signal, consider profit-taking on longs or waiting for pullback to middle band for reentry
Lower Channel Interaction:
Price Touching Lower Channel: Strong bearish momentum, price moving more than typical volatility range suggests, potential continuation signal in established downtrends
Price Breaking Below Lower Channel: Exceptional weakness, price exceeding normal volatility expectations, consider adding to short positions or protecting against further downside
Price Riding Lower Channel: Sustained strong downtrend, characteristic of powerful bear moves, stay with trend and avoid premature covering
Price Rejection at Lower Channel: Momentum exhaustion signal, consider covering shorts or waiting for bounce to middle band for reentry
Middle Band (Basis) Signals:
Trend Direction Confirmation:
Price Above Basis: Bullish trend bias, middle band acts as dynamic support in uptrends, consider long positions or holding existing longs
Price Below Basis: Bearish trend bias, middle band acts as dynamic resistance in downtrends, consider short positions or avoiding longs
Price Crossing Above Basis: Potential trend change from bearish to bullish, early signal to establish long positions
Price Crossing Below Basis: Potential trend change from bullish to bearish, early signal to establish short positions or exit longs
Pullback Trading Strategy:
Uptrend Pullback: Price pulls back from upper channel to middle band, finds support, and resumes upward, ideal long entry point
Downtrend Bounce: Price bounces from lower channel to middle band, meets resistance, and resumes downward, ideal short entry point
Basis Test: Strong trends often show price respecting the middle band as support/resistance on pullbacks
Failed Test: Price breaking through middle band against trend direction signals potential reversal
Volatility-Based Signals:
Narrow Channels (Low Volatility):
Consolidation Phase: Channels contract during periods of reduced volatility and directionless price action
Breakout Preparation: Narrow channels often precede significant directional moves as volatility cycles
Trading Approach: Reduce position sizes, wait for breakout confirmation, avoid range-bound strategies within channels
Breakout Direction: Monitor for price breaking decisively outside channel range with expanding width
Wide Channels (High Volatility):
Trending Phase: Channels expand during strong directional moves and increased volatility
Momentum Confirmation: Wide channels confirm genuine trend with substantial volatility backing
Trading Approach: Trend-following strategies excel, wider stops necessary, mean-reversion strategies risky
Exhaustion Signs: Extreme channel width (historical highs) may signal approaching consolidation or reversal
Advanced Pattern Recognition:
Channel Walking Pattern:
Upper Channel Walk: Price consistently touches or exceeds upper channel while staying above basis, very strong uptrend signal, hold longs aggressively
Lower Channel Walk: Price consistently touches or exceeds lower channel while staying below basis, very strong downtrend signal, hold shorts aggressively
Basis Support/Resistance: During channel walks, price typically uses middle band as support/resistance on minor pullbacks
Pattern Break: Price crossing basis during channel walk signals potential trend exhaustion
Squeeze and Release Pattern:
Squeeze Phase: Channels narrow significantly, price consolidates near middle band, volatility contracts
Direction Clues: Watch for price positioning relative to basis during squeeze (above = bullish bias, below = bearish bias)
Release Trigger: Price breaking outside narrow channel range with expanding width confirms breakout
Follow-Through: Measure squeeze height and project from breakout point for initial profit targets
Channel Expansion Pattern:
Breakout Confirmation: Rapid channel widening confirms volatility increase and genuine trend establishment
Entry Timing: Enter positions early in expansion phase before trend becomes overextended
Risk Management: Use channel width to size stops appropriately, wider channels require wider stops
Basis Bounce Pattern:
Clean Bounce: Price touches middle band and immediately reverses, confirms trend strength and entry opportunity
Multiple Bounces: Repeated basis bounces indicate strong, sustainable trend
Bounce Failure: Price penetrating basis signals weakening trend and potential reversal
Divergence Analysis:
Price/Channel Divergence: Price makes new high/low while staying within channel (not reaching outer band), suggests momentum weakening
Width/Price Divergence: Price breaks to new extremes but channel width contracts, suggests move lacks conviction
Reversal Signal: Divergences often precede trend reversals or significant consolidation periods
Multi-Timeframe Analysis:
Keltner Channels work particularly well in multi-timeframe trend-following approaches:
Three-Timeframe Alignment:
Higher Timeframe (Weekly/Daily): Identify major trend direction, note price position relative to basis and channels
Intermediate Timeframe (Daily/4H): Identify pullback opportunities within higher timeframe trend
Lower Timeframe (4H/1H): Time precise entries when price touches middle band or lower channel (in uptrends) with rejection
Optimal Entry Conditions:
Best Long Entries: Higher timeframe in uptrend (price above basis), intermediate timeframe pulls back to basis, lower timeframe shows rejection at middle band or lower channel
Best Short Entries: Higher timeframe in downtrend (price below basis), intermediate timeframe bounces to basis, lower timeframe shows rejection at middle band or upper channel
Risk Management: Use higher timeframe channel width to set position sizing, stops below/above higher timeframe channels
🎯 STRATEGIC APPLICATIONS
Keltner Channel Enhanced excels in trend-following and breakout strategies across different market conditions.
Trend Following Strategy:
Setup Requirements:
Identify established trend with price consistently on one side of basis line
Wait for pullback to middle band (basis) or brief penetration through it
Confirm trend resumption with price rejection at basis and move back toward outer channel
Enter in trend direction with stop beyond basis line
Entry Rules:
Uptrend Entry:
Price pulls back from upper channel to middle band, shows support at basis (bullish candlestick, momentum divergence)
Enter long on rejection/bounce from basis with stop 1-2 ATR below basis
Aggressive: Enter on first touch; Conservative: Wait for confirmation candle
Downtrend Entry:
Price bounces from lower channel to middle band, shows resistance at basis (bearish candlestick, momentum divergence)
Enter short on rejection/reversal from basis with stop 1-2 ATR above basis
Aggressive: Enter on first touch; Conservative: Wait for confirmation candle
Trend Management:
Trailing Stop: Use basis line as dynamic trailing stop, exit if price closes beyond basis against position
Profit Taking: Take partial profits at opposite channel, move stops to basis
Position Additions: Add to winners on subsequent basis bounces if trend intact
Breakout Strategy:
Setup Requirements:
Identify consolidation period with contracting channel width
Monitor price action near middle band with reduced volatility
Wait for decisive breakout beyond channel range with expanding width
Enter in breakout direction after confirmation
Breakout Confirmation:
Price breaks clearly outside channel (upper for longs, lower for shorts), channel width begins expanding from contracted state
Volume increases significantly on breakout (if using volume analysis)
Price sustains outside channel for multiple bars without immediate reversal
Entry Approaches:
Aggressive: Enter on initial break with stop at opposite channel or basis, use smaller position size
Conservative: Wait for pullback to broken channel level, enter on rejection and resumption, tighter stop
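A sketch of the contraction-then-expansion logic described above, assuming a hypothetical 50-bar lookback to define the contracted state:

//@version=6
indicator("Width contraction breakout sketch", overlay=false)
len      = input.int(20, "Basis length")
atrLen   = input.int(10, "ATR length")
mult     = input.float(2.0, "Multiplier")
lookback = input.int(50, "Contraction lookback")   // hypothetical
basis  = ta.ema(close, len)
atrVal = ta.rma(ta.tr(true), atrLen)
width  = 2 * mult * atrVal                          // full channel width
contracted = width <= ta.lowest(width, lookback)    // width at a lookback-period low
breakoutUp = close > basis + mult * atrVal and width > width[1]
plot(width, "Channel width")
bgcolor(contracted ? color.new(color.orange, 80) : breakoutUp ? color.new(color.green, 80) : na)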
Volatility-Based Position Sizing:
Adjust position sizing based on channel width (ATR-based volatility):
Wide Channels (High ATR): Reduce position size as stops must be wider, calculate position size using ATR-based risk calculation: Risk / (Stop Distance in ATR × ATR Value)
Narrow Channels (Low ATR): Increase position size as stops can be tighter, be cautious of impending volatility expansion
ATR-Based Risk Management: Size positions with ATR-based risk calculations, e.g., risking 1% of capital with a 2 ATR stop gives position size = (0.01 × Capital) / (2 × ATR); use multiples of ATR (1-2 ATR) for adaptive stops
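A sketch of that sizing formula in Pine Script; the account size and risk inputs are illustrative assumptions:

//@version=6
indicator("ATR position sizing sketch", overlay=false)
capital = input.float(10000.0, "Account capital")        // hypothetical account size
riskPct = input.float(1.0, "Risk per trade (%)") / 100
atrLen  = input.int(10, "ATR length")
stopATR = input.float(2.0, "Stop distance (ATR multiples)")
atrVal  = ta.rma(ta.tr(true), atrLen)
// units sized so a stopATR-sized adverse move loses riskPct of capital
posSize = (capital * riskPct) / (stopATR * atrVal)
plot(posSize, "Position size (units)")

For example, with $10,000 capital, 1% risk, and an ATR of 0.50, the formula gives 100 / 1.00 = 100 units.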
Algorithm Selection Guidelines:
Different market conditions benefit from different algorithm combinations:
Strong Trending Markets: Middle band use EMA or HMA, ATR use RMA, capture trends quickly while maintaining stable channel width
Choppy/Ranging Markets: Middle band use SMA or WMA, ATR use SMA or WMA, avoid false trend signals while identifying genuine reversals
Volatile Markets: Middle band and ATR both use KAMA or FRAMA, self-adjusting to changing market conditions reduces manual optimization
Breakout Trading: Middle band use SMA, ATR use EMA or SMA, stable trend with dynamic channels highlights volatility expansion early
Scalping/Day Trading: Middle band use HMA or T3, ATR use EMA or TEMA, both components respond quickly
Position Trading: Middle band use EMA/TEMA/T3, ATR use RMA or TEMA, filter out noise for long-term trend-following
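One way to wire these choices into a script is a selectable MA helper, as in this sketch; note that KAMA, FRAMA, TEMA, and T3 are not Pine built-ins and would need custom implementations, so only built-in types are shown:

//@version=6
indicator("Selectable MA sketch", overlay=true)
maType = input.string("EMA", "Basis MA type", options = ["EMA", "SMA", "WMA", "HMA", "RMA"])
len    = input.int(20, "Length")
maOf(src, length, typ) =>
    // Pine may warn about ta.* calls in conditional branches; acceptable for a sketch
    switch typ
        "SMA" => ta.sma(src, length)
        "WMA" => ta.wma(src, length)
        "HMA" => ta.hma(src, length)
        "RMA" => ta.rma(src, length)
        => ta.ema(src, length)
plot(maOf(close, len, maType), "Basis")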
📋 DETAILED PARAMETER CONFIGURATION
Understanding and optimizing parameters is essential for adapting Keltner Channel Enhanced to specific trading approaches.
Source Parameter:
Close (Most Common): Uses closing price, reflects daily settlement, best for end-of-day analysis and position trading, standard choice
HL2 (Median Price): Smooths out closing bias, better represents full daily range in volatile markets, good for swing trading
HLC3 (Typical Price): Gives more weight to close while including full range, popular for intraday applications, slightly more responsive than HL2
OHLC4 (Average Price): Most comprehensive price representation, smoothest option, good for gap-prone markets or highly volatile instruments
Length Parameter:
Controls the lookback period for middle band (basis) calculation:
Short Periods (10-15): Very responsive to price changes, suitable for day trading and scalping, higher false signal rate
Standard Period (20 - Default): Represents approximately one month of trading, good balance between responsiveness and stability, suitable for swing and position trading
Medium Periods (30-50): Smoother trend identification, fewer false signals, better for position trading and longer holding periods
Long Periods (50+): Very smooth, identifies major trends only, minimal false signals but significant lag, suitable for long-term investment
Optimization by Timeframe: 1-15 minute charts use 10-20 period, 30-60 minute charts use 20-30 period, 4-hour to daily charts use 20-40 period, weekly charts use 20-30 weeks.
ATR Length Parameter:
Controls the lookback period for Average True Range calculation, affecting channel width:
Short ATR Periods (5-10): Very responsive to recent volatility changes, may be too reactive in whipsaw conditions
Standard ATR Period (10 - Default): The standard setting in the modern ATR-based formulation (the EMA/ATR variant popularized by Linda Bradford Raschke; Chester Keltner's 1960 original used daily-range averages rather than ATR), good balance between responsiveness and stability, most widely used
Medium ATR Periods (14-20): Smoother channel width, ATR 14 aligns with Wilder's original ATR specification, good for position trading
Long ATR Periods (20+): Very smooth channel width, suitable for long-term trend-following
Length vs. ATR Length Relationship: Equal values (20/20) provide balanced responsiveness, longer ATR (20/14) gives more stable channel width, shorter ATR (20/10) is standard configuration, much shorter ATR (20/5) creates very dynamic channels.
Multiplier Parameter:
Controls channel width by setting ATR multiples:
Lower Values (1.0-1.5): Tighter channels with frequent price touches, more trading signals, higher false signal rate, better for range-bound and mean-reversion strategies
Standard Value (2.0 - Default): The standard multiplier in the modern ATR-based formulation, good balance between signal frequency and reliability, suitable for both trending and ranging strategies
Higher Values (2.5-3.0): Wider channels with less frequent touches, fewer but potentially higher-quality signals, better for strong trending markets
Market-Specific Optimization: High volatility markets (crypto, small-caps) use 2.5-3.0 multiplier, medium volatility markets (major forex, large-caps) use 2.0 multiplier, low volatility markets (bonds, utilities) use 1.5-2.0 multiplier.
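A hypothetical worked example of the width calculation: with a basis of 1.1000 and an ATR of 0.0050, a 2.0 multiplier puts the upper channel at 1.1000 + 2.0 × 0.0050 = 1.1100 and the lower at 1.0900; a 3.0 multiplier widens these to 1.1150 and 1.0850, while 1.5 tightens them to 1.1075 and 1.0925.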
MA Type Parameter (Middle Band):
Critical selection that determines trend identification characteristics:
EMA (Exponential Moving Average - Default): Standard choice in the modern Keltner Channel (Chester Keltner's original used a simple moving average of typical price; the EMA/ATR form was popularized later by Linda Bradford Raschke), emphasizes recent prices, faster response to trend changes, suitable for all timeframes
SMA (Simple Moving Average): Equal weighting of all data points, no directional bias, slower than EMA, better for ranging markets and mean-reversion
HMA (Hull Moving Average): Minimal lag with smooth output, excellent for fast trend identification, best for day trading and scalping
TEMA (Triple Exponential Moving Average): Advanced smoothing with reduced lag, responsive to trends while filtering noise, suitable for volatile markets
T3 (Tillson T3): Very smooth with minimal lag, excellent for established trend identification, suitable for position trading
KAMA (Kaufman Adaptive Moving Average): Automatically adjusts speed based on market efficiency, slow in ranging markets, fast in trends, suitable for markets with varying conditions
ATR MA Type Parameter:
Determines how Average True Range is smoothed, affecting channel width stability:
RMA (Wilder's Smoothing - Default): J. Welles Wilder's original ATR smoothing method, very smooth, slow to adapt to volatility changes, provides stable channel width
SMA (Simple Moving Average): Equal weighting, moderate smoothness, faster response to volatility changes than RMA, more dynamic channel width
EMA (Exponential Moving Average): Emphasizes recent volatility, quick adaptation to new volatility regimes, very responsive channel width changes
TEMA (Triple Exponential Moving Average): Smooth yet responsive, good balance for varying volatility, suitable for most trading styles
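The difference between these smoothers is easy to see by plotting them side by side, as in this sketch:

//@version=6
indicator("ATR smoothing comparison sketch", overlay=false)
atrLen = input.int(10, "ATR length")
trVal  = ta.tr(true)   // true range, NaN-safe on the first bar
plot(ta.rma(trVal, atrLen), "RMA ATR (Wilder)", color.blue)
plot(ta.sma(trVal, atrLen), "SMA ATR", color.gray)
plot(ta.ema(trVal, atrLen), "EMA ATR", color.orange)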
Parameter Combination Strategies:
Conservative Trend-Following: Length 30/ATR Length 20/Multiplier 2.5, MA Type EMA or TEMA/ATR MA Type RMA, smooth trend with stable wide channels, suitable for position trading
Standard Balanced Approach: Length 20/ATR Length 10/Multiplier 2.0, MA Type EMA/ATR MA Type RMA, classic Keltner Channel configuration, suitable for general purpose swing trading
Aggressive Day Trading: Length 10-15/ATR Length 5-7/Multiplier 1.5-2.0, MA Type HMA or EMA/ATR MA Type EMA or SMA, fast trend with dynamic channels, suitable for scalping and day trading
Breakout Specialist: Length 20-30/ATR Length 5-10/Multiplier 2.0, MA Type SMA or WMA/ATR MA Type EMA or SMA, stable trend with responsive channel width
Adaptive All-Conditions: Length 20/ATR Length 10/Multiplier 2.0, MA Type KAMA or FRAMA/ATR MA Type KAMA or TEMA, self-adjusting to market conditions
Offset Parameter:
Controls horizontal positioning of channels on chart. Positive values shift channels to the right (future) for visual projection, negative values shift left (past) for historical analysis, zero (default) aligns with current price bars for real-time signal analysis. Offset affects only visual display, not alert conditions or actual calculations.
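In Pine Script this maps directly to the plot() offset argument, as this sketch shows:

//@version=6
indicator("Offset sketch", overlay=true)
off   = input.int(0, "Offset (bars)")
basis = ta.ema(close, 20)
// offset shifts only the drawing; the underlying series and any alerts are unchanged
plot(basis, "Basis", offset = off)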
📈 PERFORMANCE ANALYSIS & COMPETITIVE ADVANTAGES
Keltner Channel Enhanced provides improvements over standard implementations while maintaining proven effectiveness.
Response Characteristics:
Standard EMA/RMA Configuration: Moderate trend lag (approximately 0.4 × length periods), smooth and stable channel width from RMA smoothing, good balance for most market conditions
Fast HMA/EMA Configuration: Approximately 60% reduction in trend lag compared to EMA, responsive channel width from EMA ATR smoothing, suitable for quick trend changes and breakouts
Adaptive KAMA/KAMA Configuration: Variable lag based on market efficiency, automatic adjustment to trending vs. ranging conditions, self-optimizing behavior reduces manual intervention
Comparison with Traditional Keltner Channels:
Enhanced Version Advantages:
Dual Algorithm Flexibility: Independent MA selection for trend and volatility vs. fixed EMA/RMA, separate tuning of trend responsiveness and channel stability
Market Adaptation: Choose configurations optimized for specific instruments and conditions, customize for scalping, swing, or position trading preferences
Comprehensive Alerts: Enhanced alert system including channel expansion detection
Traditional Version Advantages:
Simplicity: Fewer parameters, easier to understand and implement
Standardization: Fixed EMA/RMA combination ensures consistency across users
Research Base: Decades of backtesting and research on standard configuration
When to Use Enhanced Version: Trading multiple instruments with different characteristics, switching between trending and ranging markets, employing different strategies, algorithm-based trading systems requiring customization, seeking optimization for specific trading style and timeframe.
When to Use Standard Version: Beginning traders learning Keltner Channel concepts, following published research or trading systems, preferring simplicity and standardization, wanting to avoid optimization and curve-fitting risks.
Performance Across Market Conditions:
Strong Trending Markets: EMA or HMA basis with RMA or TEMA ATR smoothing provides quicker trend identification, pullbacks to basis offer excellent entry opportunities
Choppy/Ranging Markets: SMA or WMA basis with RMA ATR smoothing and lower multipliers, channel bounce strategies work well, avoid false breakouts
Volatile Markets: KAMA or FRAMA with EMA or TEMA, adaptive algorithms excel by automatic adjustment, wider multipliers (2.5-3.0) accommodate large price swings
Low Volatility/Consolidation: Channels narrow significantly indicating consolidation, algorithm choice less impactful, focus on detecting channel width contraction for breakout preparation
Keltner Channel vs. Bollinger Bands - Usage Comparison:
Favor Keltner Channels When: Trend-following is primary strategy, trading volatile instruments with gaps, want ATR-based volatility measurement, prefer fewer higher-quality channel touches, seeking stable channel width during trends.
Favor Bollinger Bands When: Mean-reversion is primary strategy, trading instruments with limited gaps, want statistical framework based on standard deviation, need squeeze patterns for breakout identification, prefer more frequent trading opportunities.
Use Both Together: A Bollinger Band squeeze followed by a Keltner Channel breakout is a powerful combination; price outside the Bollinger Bands but inside the Keltner Channels indicates a moderate signal, price outside both indicates a very strong signal; use Bollinger Bands for entries and Keltner Channels for trend confirmation.
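A sketch of the squeeze condition (both Bollinger Bands inside the Keltner Channel); the 1.5 Keltner multiplier is a hypothetical value common in squeeze setups, not a prescription:

//@version=6
indicator("BB/KC squeeze sketch", overlay=false)
len    = input.int(20, "Length")
bbMult = input.float(2.0, "BB multiplier")
kcMult = input.float(1.5, "KC multiplier")   // hypothetical squeeze-style setting
bbBasis = ta.sma(close, len)
dev     = bbMult * ta.stdev(close, len)
kcBasis = ta.ema(close, len)
rng     = kcMult * ta.rma(ta.tr(true), len)
// squeeze: both Bollinger Bands sit inside the Keltner Channel
squeezeOn = bbBasis + dev < kcBasis + rng and bbBasis - dev > kcBasis - rng
plot(squeezeOn ? 1 : 0, "Squeeze on", style = plot.style_columns)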
Limitations and Considerations:
General Limitations:
Lagging Indicator: All moving averages lag price, even with reduced-lag algorithms
Trend-Dependent: Works best in trending markets, less effective in choppy conditions
No Direction Prediction: Indicates volatility and deviation, not future direction, requires confirmation
Enhanced Version Specific Considerations:
Optimization Risk: More parameters increase risk of curve-fitting historical data
Complexity: Additional choices may overwhelm beginning traders
Backtesting Challenges: Different algorithms produce different historical results
Mitigation Strategies:
Use Confirmation: Combine with momentum indicators (RSI, MACD), volume, or price action
Test Parameter Robustness: Ensure parameters work across range of values, not just optimized ones
Multi-Timeframe Analysis: Confirm signals across different timeframes
Proper Risk Management: Use appropriate position sizing and stops
Start Simple: Begin with standard EMA/RMA before exploring alternatives
Optimal Usage Recommendations:
For Maximum Effectiveness:
Start with standard EMA/RMA configuration to understand classic behavior
Experiment with alternatives on demo account or paper trading
Match algorithm combination to market condition and trading style
Use channel width analysis to identify market phases
Combine with complementary indicators for confirmation
Implement strict risk management using ATR-based position sizing
Focus on high-quality setups rather than trading every signal
Respect the trend: trade with basis direction for higher probability
Complementary Indicators:
RSI or Stochastic: Confirm momentum at channel extremes
MACD: Confirm trend direction and momentum shifts
Volume: Validate breakouts and trend strength
ADX: Measure trend strength, avoid Keltner signals in weak trends
Support/Resistance: Combine with traditional levels for high-probability setups
Bollinger Bands: Use together for enhanced breakout and volatility analysis
USAGE NOTES
This indicator is designed for technical analysis and educational purposes. Keltner Channel Enhanced has limitations and should not be used as the sole basis for trading decisions. While the flexible moving average selection for both trend and volatility components provides valuable adaptability across different market conditions, algorithm performance varies with market conditions, and past characteristics do not guarantee future results.
Key considerations:
Always use multiple forms of analysis and confirmation before entering trades
Backtest any parameter combination thoroughly before live trading
Be aware that optimization can lead to curve-fitting if not done carefully
Start with standard EMA/RMA settings and adjust only when specific conditions warrant
Understand that no moving average algorithm can eliminate lag entirely
Consider market regime (trending, ranging, volatile) when selecting parameters
Use ATR-based position sizing and risk management on every trade
Keltner Channels work best in trending markets, less effective in choppy conditions
Respect the trend direction indicated by price position relative to basis line
The enhanced flexibility of dual algorithm selection provides powerful tools for adaptation but requires responsible use, thorough understanding of how different algorithms behave under various market conditions, and disciplined risk management.
[blackcat] L1 Net Volume Difference
OVERVIEW
The L1 Net Volume Difference indicator is an advanced analytical tool that examines the differential between buying and selling volumes over precise timeframes to reveal market sentiment. By leveraging these volume dynamics, it helps identify trends and potential reversal points more accurately, supporting well-informed decisions. Its key focus is dissecting the intraday changes that reflect short-term market behavior, critical input for swing and day traders alike. 📊
Key benefits encompass:
• Precise calculation of net volume differences grounded in real-time data.
• Interactive visualization elements enhancing interpretability effortlessly.
• Real-time generation of buy/sell signals driven by dynamic volume shifts.
TECHNICAL ANALYSIS COMPONENTS
📉 Volume Accumulation Mechanisms:
Monitors cumulative buy/sell volumes derived from bar-to-bar closing price comparisons.
Periodically resets accumulation counters at predefined intervals (e.g., 5-minute bars).
Helps identify directional biases that reflect underlying market forces.
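The script's exact logic is proprietary, but the accumulate-and-reset idea described above can be sketched generically; the 5-bar reset interval is an illustrative assumption:

//@version=6
indicator("Net volume difference sketch", overlay=false)
resetBars = input.int(5, "Reset interval (bars)")   // hypothetical reset interval
buyVol  = close > close[1] ? volume : 0.0
sellVol = close < close[1] ? volume : 0.0
var float cumBuy  = 0.0
var float cumSell = 0.0
if bar_index % resetBars == 0
    cumBuy  := 0.0
    cumSell := 0.0
cumBuy  := cumBuy + buyVol
cumSell := cumSell + sellVol
netDiff = cumBuy - cumSell
plot(netDiff, "Net volume difference", netDiff >= 0 ? color.green : color.red, style = plot.style_columns)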
🕵️♂️ Sentiment Detection Algorithms:
Employs proprietary logic to distinguish bullish from bearish sentiment dynamically.
Applies consistent statistical protocols to maintain accuracy.
Supports adaptive thresholds that adjust sensitivity to changing market conditions.
🎯 Dynamic Signal Generation:
Detects transitions that indicate dominance shifts between buyers and sellers.
Triggers timely alerts, enabling swift reactions to evolving market dynamics.
Integrates conditional logic that reinforces signal validity and minimizes false activations.
INDICATOR FUNCTIONALITY
🔢 Core Algorithms:
Uses moving averages and standardized deviation formulas to generate precise net volume measurements.
Implements Arithmetic Mean Line Algorithm (AMLA) smoothing to improve interpretability.
Maintains alignment with established statistical principles.
🖱️ User Interface Elements:
Dedicated plots display real-time net volume markers for swift decision-making.
Context-sensitive color coding intuitively distinguishes positive from negative deviations.
Background shading highlights proximity to key threshold activations for better visibility.
STRATEGY IMPLEMENTATION
✅ Entry Conditions:
Confirm bullish/bearish setups through multiple corroborating signals.
Validate entry decisions against concurrent market sentiment.
Check that net volume readings align with the broader trend direction.
🚫 Exit Mechanisms:
Trigger exits when predetermined thresholds derived from historical analysis are hit.
Monitor sustained breaches that signal potential trend reversals, and close positions promptly.
Execute partial or total closes when cumulative loss limits are reached, preserving capital.
PARAMETER CONFIGURATIONS
🎯 Optimization Guidelines:
Reset Interval: Governs the trade-off between responsiveness and stability.
Price Source: Selects the primary data series that drives the volume calculations.
💬 Customization Recommendations:
Start with baseline defaults; refine parameters iteratively, isolating each one's impact.
Evaluate adjustments independently before combining modifications.
Prioritize reducing false triggers first to optimize signal fidelity.
Maintain a balanced risk-reward profile regardless of chosen settings.
ADVANCED RISK MANAGEMENT
🛡️ Proactive Risk Mitigation Techniques:
Enforce strict compliance with pre-defined maximum leverage constraints.
Apply trailing stop-loss orders conforming to script outputs to reinforce discipline.
Allocate positions proportionately to available capital reserves.
Conduct periodic reviews of strategy effectiveness to identify areas needing refinement.
⚠️ Potential Pitfalls & Solutions:
Frequent violations during heightened volatility phases may require judicious manual intervention.
Manage false alerts promptly to avoid adverse consequences.
Prepare contingency plans to mitigate margin call possibilities.
Continuously assess automated system reliability amid fluctuating conditions.
PERFORMANCE AUDITS & REFINEMENTS
🔍 Critical Evaluation Metrics:
Assess win percentages across diverse trading instruments to gauge reliability.
Calculate average profit per successful trade to measure efficiency.
Measure peak drawdown duration and magnitude to evaluate downside risk.
Analyze signal generation frequency to uncover systematic biases that may skew outcomes.
📈 Historical Data Analysis Tools:
Maintain comprehensive records of every triggered event.
Compare realized profits/losses against backtested simulations to benchmark actual versus expected performance.
Identify recurrent systematic errors and apply iterative corrections.
Document evolving performance metrics to track progress and address identified shortcomings.
PROBLEM SOLVING ADVICE
🔧 Frequent Encountered Challenges:
Unpredictable behavior in thinly traded markets, requiring filtration.
Latency during abrupt price fluctuations, causing missed opportunities.
Overfitted models yielding suboptimal results after extensive tuning, demanding recalibration.
Inaccuracies stemming from incomplete or inaccurate data feeds, necessitating verification.
💡 Effective Resolution Pathways:
Exclude low-liquidity assets prone to erratic movements to improve signal integrity.
Introduce buffer intervals around major news and events to mitigate distortions.
Limit ongoing optimization attempts to prevent model degradation.
Verify reliable data connections to ensure uninterrupted flows and accurate readings.
USER ENGAGEMENT SEGMENT
🤝 Community Contributions Welcome
Active participation is highly encouraged: share your experiences and recommendations!
THANKS
Heartfelt acknowledgment extends to all developers contributing invaluable insights about volume-based trading methodologies! ✨
Forex Pips Tracker Pinescriptlabs
This algorithm is exclusively designed for the Forex market 🌐 and serves as a tool to measure volatility, helping to determine how many pips positions move per hour on average. With this information, a trader can place take profit and stop loss orders with greater certainty, since they know the average pip movement range during each hour of the day.
What does it do and how does it work?
• Volatility measurement in pips 📊:
The algorithm calculates the size of the movement (or range) of each candle expressed in pips. To do this, it takes the difference between the highest and lowest price of each candle and converts it into pips.
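A sketch of this pip conversion; the assumption that one pip equals 10 × mintick holds for most FX symbols quoted to 5 (or 3) decimals but is not universal:

//@version=6
indicator("Pip range sketch", overlay=false)
pipSize   = syminfo.mintick * 10          // assumed pip size; verify per symbol
rangePips = (high - low) / pipSize        // candle range expressed in pips
plot(rangePips, "Candle range (pips)", style = plot.style_columns)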
• Time zone adjustment ⏰:
It allows you to configure the time zone so that the data aligns with your desired schedule. This is especially useful for comparing movements at different times based on the trader's location.
• Analysis by time intervals 🕒:
The algorithm’s logic organizes the information for each hour of the day. It stores data for the current day, the previous day, weekly, and historically (200 candles). This allows you to see how volatility varies across different periods, providing a dynamic view of market behavior.
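A rough sketch of the hour-of-day bucketing behind these statistics; the time zone default is illustrative, and the real script additionally tracks daily, weekly, and 200-candle historical buckets plus the table rendering:

//@version=6
indicator("Hourly pip averages sketch", overlay=false)
tz        = input.string("America/New_York", "Time zone")   // illustrative default
pipSize   = syminfo.mintick * 10
rangePips = (high - low) / pipSize
var pipSum = array.new_float(24, 0.0)
var pipCnt = array.new_int(24, 0)
h = hour(time, tz)
array.set(pipSum, h, array.get(pipSum, h) + rangePips)
array.set(pipCnt, h, array.get(pipCnt, h) + 1)
// running average pip range for the current bar's hour of day
avgPips = array.get(pipSum, h) / math.max(array.get(pipCnt, h), 1)
plot(avgPips, "Avg pips for this hour")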
• Directionality of movement 🔄:
In addition to averaging the pip range, the algorithm determines the predominant direction of each candle (bullish or bearish). This translates into visual indicators (like arrows) that help identify whether, on average, the movement during that hour tends to go up or down.
• Table visualization 📈:
Finally, the information is presented in an integrated table on the chart. Each row corresponds to an hour of the day and shows the average number of pips and the direction (bullish, bearish, or neutral) for each analyzed period. This table makes it easy to quickly and practically interpret the volatility data.
By combining these features, the algorithm becomes an essential tool for traders looking to better understand market dynamics and optimize their trading strategies! 💼✨