DafeRLMLLib: The Reinforcement Learning & Machine Learning Engine
This is not an indicator. This is an artificial intelligence. A state-based, self-learning engine designed to bring the power of professional quantitative finance to the Pine Script ecosystem. Welcome to the next frontier of trading analysis.
█ CHAPTER 1: THE PHILOSOPHY - FROM STATIC RULES TO DYNAMIC LEARNING
Technical analysis has, for a century, been a discipline of static, human-defined rules. "If RSI is below 30, then buy." "If the 50 EMA crosses the 200 EMA, then sell." These are fixed heuristics. They are brittle. They fail to adapt to the market's ever-changing personality—its shifts between trend and range, high and low volatility, risk-on and risk-off sentiment. An indicator built on static rules is an automaton, destined to fail when the environment it was designed for inevitably changes.
The DafeRLMLLib was created to shatter this paradigm. It is not a tool with fixed rules; it is a framework for discovering optimal rules. It is a true Reinforcement Learning (RL) and Machine Learning (ML) engine, built from the ground up in Pine Script. Its purpose is not to follow a pre-programmed strategy, but to learn a strategy through trial, error, and feedback.
This library provides a complete, professional-grade toolkit for developers to build indicators that think, adapt, and evolve. It observes the market state, selects an action, receives a reward signal based on the outcome, and updates its internal "brain" to improve its future decisions. This is not just a step forward; it is a quantum leap into the future of on-chart intelligence.
█ CHAPTER 2: THE CORE INNOVATIONS - WHAT MAKES THIS A TRUE ML ENGINE?
This library is not a collection of simple moving averages labeled as "AI." It is a suite of genuine, academically recognized machine learning algorithms, adapted for the unique constraints and opportunities of the Pine Script environment.
Multi-Algorithm Architecture: You are not locked into one learning model. The library provides a choice of powerful RL algorithms:
Q-Learning with TD(λ) Eligibility Traces: A classic, robust algorithm for learning state-action values. We've enhanced it with eligibility traces (Lambda), allowing the agent to more efficiently assign credit or blame to a sequence of past actions, dramatically speeding up the learning process. A minimal sketch of this kind of update appears after this feature list.
REINFORCE Policy Gradient with Baseline: A more advanced method that directly learns a "policy"—a probability distribution over actions—instead of just values. The baseline helps to stabilize learning by reducing variance.
Actor-Critic Architecture: The state-of-the-art. This hybrid model combines the best of both worlds. The "Actor" (the policy) decides what to do, and the "Critic" (the value function) evaluates how good that action was. The Critic's feedback is then used to directly improve the Actor's decisions.
Prioritized Experience Replay: Like a human, the AI learns more from surprising or significant events. Instead of learning from experiences in a simple chronological order, the library stores them in a ReplayBuffer. It then replays these memories to the learning algorithms, prioritizing experiences that resulted in a large prediction error. This makes learning incredibly efficient.
Meta-Learning & Self-Tuning: An AI that cannot learn how to learn is still a dumb machine. The MetaState module is a meta-learning layer that monitors the agent's own performance over time. If it detects that performance is degrading, it will automatically increase the learning rate ("Synaptic Plasticity"). If performance is improving, it will decrease the learning rate to stabilize the learned strategy. It tunes its own hyperparameters.
Catastrophic Forgetting Prevention: A common failure mode for simple neural networks is "catastrophic forgetting," where learning a new task completely erases knowledge of a previous one. This library includes mechanisms like soft_reset and L2 regularization to prevent the agent's learned weights from exploding or being wiped out by a single bad run of trades, ensuring more stable, long-term learning.
The Universal Socket Interface: How does the AI "see" the market? Through DataSockets. This brilliant, extensible interface allows a developer to connect any data series—an RSI, a volume metric, a volatility reading, a custom calculation—to the AI's "brain." Each socket normalizes its input, tracks its own statistics, and feeds into the state-building process. This makes the library universally adaptable to any trading idea.
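To make the learning mechanics concrete, here is a minimal, self-contained sketch of a TD(λ)-style update with an eligibility trace, written in plain Pine Script. It is illustrative only and does not reproduce the library's internal code; the state/action sizes, the alpha/gamma/lambda values, and the toy reward rule are assumptions chosen for readability.
```pine
//@version=6
indicator("TD(lambda) concept sketch")

// Toy setup: 4 discrete states x 3 actions, flattened into one value array.
int NUM_STATES  = 4
int NUM_ACTIONS = 3
var array<float> qValues = array.new<float>(NUM_STATES * NUM_ACTIONS, 0.0)
var array<float> traces  = array.new<float>(NUM_STATES * NUM_ACTIONS, 0.0)

float alpha  = 0.05  // learning rate
float gamma  = 0.95  // discount factor
float lambda = 0.7   // eligibility trace decay

// One TD(lambda) step: the TD error is credited to every state-action pair
// in proportion to its eligibility trace, then all traces decay.
tdLambdaUpdate(int stateAction, float tdError) =>
    array.set(traces, stateAction, array.get(traces, stateAction) + 1.0) // mark the visited pair
    for i = 0 to array.size(qValues) - 1
        float e = array.get(traces, i)
        if e > 0.0
            array.set(qValues, i, array.get(qValues, i) + alpha * tdError * e)
            array.set(traces, i, e * gamma * lambda)

// Toy reward rule: in "state 0", reward action 1 whenever the bar closes up.
if close > open
    tdLambdaUpdate(0 * NUM_ACTIONS + 1, 1.0)

plot(array.get(qValues, 0 * NUM_ACTIONS + 1), "Q(state 0, action 1)")
```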
█ CHAPTER 3: A DUAL-PURPOSE FRAMEWORK - MODES OF OPERATION
This library is a foundational component of the DAFE AI ecosystem, designed for ultimate flexibility. It can be used in two primary modes: as a powerful standalone intelligence, or as the core cognitive engine within a larger, bridged super-system. Understanding these modes is key to unlocking its full potential.
MODE 1: STANDALONE ENGINE OPERATION (Independent Power)
The DafeRLMLLib can be used entirely on its own to create a complete, self-learning trading indicator. This approach is perfect for building focused, single-purpose tools that are designed to master a specific task. In this mode, the developer is responsible for creating the full feedback loop within their own indicator script.
The Workflow:
Your indicator initializes the ML agent.
On each bar, it feeds the agent market data via the socket interface.
It asks the agent for an action (e.g., Buy, Sell, Hold).
Your script then executes its own internal trade logic based on the agent's decision.
Your script is responsible for tracking the Profit & Loss (PnL) of the resulting simulated trade.
When the trade is closed, your script feeds the final PnL directly back into the agent's learn() function as the "reward" signal.
The Result: A pure, state-based learning system. The agent directly learns the consequences of its own actions. This is excellent for discovering novel, micro-level trading patterns and for building indicators that are designed to operate with complete autonomy.
MODE 2: BRIDGED SUPER-SYSTEM OPERATION (Synergistic Intelligence)
This is the pinnacle of the DAFE ecosystem. In this advanced mode, the DafeRLMLLib acts as the core "cognitive engine" or the "tactical brain" within a larger, multi-library system. It can be fused with a strategic portfolio management engine (like the DafeSPALib) via a master communication protocol (the DafeMLSPABridge).
The Workflow:
The ML engine (this library) generates a set of creative, state-based proposals or predictions.
The Bridge Library translates these proposals into a portfolio of micro-strategies.
The SPA (Strategy Portfolio Allocation) engine, acting as a high-level manager, analyzes the real-time performance of these micro-strategies and selects the one it trusts the most. This becomes the final decision.
The PnL from the SPA's final, performance-vetted decision is then routed back through the Bridge as a highly-qualified reward signal for the ML engine.
The Result: A hybrid intelligence that is more robust and adaptive than either system alone. The ML engine provides tactical creativity, while the SPA engine provides ruthless, strategic, performance-based oversight. The ML proposes, the SPA disposes, and the ML learns from the SPA's wisdom. This creates a system of checks, balances, and continuous, synergistic learning, perfect for building an ultimate, all-in-one "drawing indicator" or trading system.
As a developer, the choice is yours. Use this library independently to build powerful, specialized learning tools, or use it as the foundational brain for a truly comprehensive trading AI.
█ CHAPTER 4: A GUIDE FOR DEVELOPERS - INTEGRATING THE BRAIN
We have made it incredibly simple to bring your indicators to life with the DAFE AI. This is the true purpose of the library—to empower you. This section provides the full, unabridged input template and usage guide.
PART I: THE INPUTS TEMPLATE
To give your users full control over the AI, copy this entire block of inputs into your indicator script. It is professionally organized with groups and detailed tooltips.
// ╔═════════════════════════════════════════════════════╗
// ║ INPUTS TEMPLATE (COPY INTO YOUR SCRIPT) ║
// ╚═════════════════════════════════════════════════════╝
// INPUT GROUPS
string G_RL_AGENT = "═══════════ 🧠 AGENT CONFIGURATION ════════════"
string G_RL_LEARN = "═══════════ 📚 LEARNING PARAMETERS ═══════════"
string G_RL_REWARD = "═══════════ 💰 REWARD SYSTEM ═══════════════"
string G_RL_REPLAY = "═══════════ 📼 EXPERIENCE REPLAY ════════════"
string G_RL_META = "═══════════ 🔮 META-LEARNING ═══════════════"
string G_RL_DASH = "═══════════ 📋 DIAGNOSTICS DASHBOARD ═════════"
// AGENT CONFIGURATION
string i_rl_algorithm = input.string("Actor-Critic", "🤖 Algorithm",
     options=["Q-Learning", "Policy Gradient", "Actor-Critic", "Ensemble"], group=G_RL_AGENT,
     tooltip="Selects the core learning algorithm. " +
     "• Q-Learning: Classic, robust, and fast for discrete states. Learns the 'value' of actions. " +
     "• Policy Gradient: Learns a direct probability distribution over actions. " +
     "• Actor-Critic: The state-of-the-art. The 'Actor' decides, the 'Critic' evaluates. " +
     "• Ensemble: Runs both Q-Learning and Policy Gradient and chooses the action with the highest confidence. " +
     "RECOMMENDATION: Start with 'Q-Learning' for stability or 'Actor-Critic' for performance.")
int i_rl_num_features = input.int(8, "Number of Features (Sockets)", minval=2, maxval=12, group=G_RL_AGENT,
tooltip="Defines the size of the AI's 'vision'. This MUST match the number of sockets you connect.")
int i_rl_num_actions = input.int(3, "Number of Actions", minval=2, maxval=5, group=G_RL_AGENT,
tooltip="Defines what the AI can do. 3 is standard (0=Neutral, 1=Buy, 2=Sell).")
// LEARNING PARAMETERS
float i_rl_learning_rate = input.float(0.05, "🎓 Learning Rate (Alpha)", minval=0.001, maxval=0.2, step=0.005, group=G_RL_LEARN,
tooltip="How strongly the AI updates its knowledge. Low (0.01-0.03) is stable. High (0.1+) is aggressive.")
float i_rl_discount = input.float(0.95, "🔮 Discount Factor (Gamma)", minval=0.8, maxval=0.99, step=0.01, group=G_RL_LEARN,
tooltip="Determines the agent's 'foresight'. High (0.95+) for trend following. Low (0.85) for scalping.")
float i_rl_epsilon = input.float(0.15, "🧭 Exploration Rate (Epsilon)", minval=0.01, maxval=0.5, step=0.01, group=G_RL_LEARN,
tooltip="For Q-Learning. The probability of taking a random action to explore. Decays automatically over time.")
float i_rl_lambda = input.float(0.7, "⚡ Eligibility Trace (Lambda)", minval=0.0, maxval=0.95, step=0.05, group=G_RL_LEARN,
tooltip="For Q-Learning. A powerful accelerator that allows a reward to be 'traced' back through a sequence of actions.")
// REWARD SYSTEM
string i_rl_reward_mode = input.string("Normalized", "💰 Reward Shaping Mode",
     options=["Raw PnL", "Normalized", "Asymmetric", "Risk-Adjusted"], group=G_RL_REWARD,
     tooltip="Modifies the raw PnL reward signal to guide learning. " +
     "• Normalized: Creates a stable reward signal (Recommended). " +
     "• Asymmetric: Punishes losses more than it rewards gains. Teaches risk aversion. " +
     "• Risk-Adjusted: Divides PnL by risk (e.g., ATR). Teaches better risk/reward.")
// EXPERIENCE REPLAY
bool i_rl_use_replay = input.bool(true, "📼 Enable Experience Replay", group=G_RL_REPLAY,
tooltip="Allows the agent to store and re-learn from past experiences. Dramatically improves learning stability. HIGHLY RECOMMENDED.")
int i_rl_replay_capacity = input.int(500, "Replay Buffer Size", minval=100, maxval=2000, group=G_RL_REPLAY)
int i_rl_replay_batch = input.int(4, "Replay Batch Size", minval=1, maxval=10, group=G_RL_REPLAY)
// META-LEARNING
bool i_rl_use_meta = input.bool(true, "🔮 Enable Meta-Learning", group=G_RL_META,
tooltip="Allows the agent to self-tune its own learning rate based on performance trends.")
// DIAGNOSTICS DASHBOARD
bool i_rl_show_dash = input.bool(true, "📋 Show Diagnostics Dashboard", group=G_RL_DASH)
PART II: THE IMPLEMENTATION LOGIC
This is the boilerplate code you will adapt to your indicator. It shows the complete Observe-Act-Learn loop.
// ╔═══════════════════════════════════════════════════════╗
// ║ USAGE EXAMPLE (ADAPT TO YOUR SCRIPT) ║
// ╚═══════════════════════════════════════════════════════╝
// NOTE: an import statement such as `import <publisher>/DafeRLMLLib/<version> as rl` is assumed above (use the library's published path).
// 1. INITIALIZE THE AGENT (happens only on the first bar)
int algo_id = i_rl_algorithm == "Q-Learning" ? 0 : i_rl_algorithm == "Policy Gradient" ? 1 : i_rl_algorithm == "Actor-Critic" ? 2 : 3
int reward_id = i_rl_reward_mode == "Raw PnL" ? 0 : i_rl_reward_mode == "Normalized" ? 1 : i_rl_reward_mode == "Asymmetric" ? 2 : 3
var rl.RLAgent agent = rl.init(algo_id, i_rl_num_features, i_rl_num_actions, i_rl_learning_rate, 54, i_rl_replay_capacity, i_rl_epsilon, i_rl_discount, i_rl_lambda, reward_id)
// 2. CONNECT THE "SENSES" (happens only on the first bar)
// Compute the feature series on every bar so the ta.* calls stay consistent, then wire them into the sockets once.
float rsi_feature = ta.rsi(close, 14)
float atr_feature = ta.atr(14) / close * 100
if barstate.isfirst
    // Connect your indicator's data series to the AI's sockets. The number MUST match 'i_rl_num_features'.
    agent := rl.connect_socket(agent, "rsi", rsi_feature, "oscillator", 1.0)
    agent := rl.connect_socket(agent, "atr_norm", atr_feature, "custom", 0.8)
    // ... connect all other features ...
// 3. THE MAIN LOOP (Observe -> Act -> Learn) - runs on every bar
var bool in_trade = false
var int trade_direction = 0
var float entry_price = 0.0
var int entry_bar = 0
var int last_state_hash = 0
var int last_action_taken = 0
// --- OBSERVE: Build the current market state ---
rl.RLState current_state = rl.build_state(agent)
// --- ACT: Ask the AI for a decision ---
[ai_action, updated_agent] = rl.select_action(agent, current_state)
agent := updated_agent // CRITICAL: Always update the agent state
// --- EXECUTE: Your custom trade logic goes here ---
if not in_trade and ai_action.action != 0 // Assuming 0 is "Hold"
    in_trade := true
    trade_direction := ai_action.action == 1 ? 1 : -1 // Assuming 1=Buy, 2=Sell
    entry_price := close
    entry_bar := bar_index // Needed for the example exit condition below
    last_state_hash := current_state.hash // Store the state at the moment of entry
    last_action_taken := ai_action.action
// --- LEARN: Check for trade closure and provide feedback ---
bool trade_is_closed = false
float reward = 0.0
if in_trade
    // Your custom exit condition here (e.g., stop loss, take profit, opposite signal)
    bool exit_condition = bar_index - entry_bar >= 20
    if exit_condition
        trade_is_closed := true
        float pnl = trade_direction == 1 ? (close - entry_price) / entry_price : (entry_price - close) / entry_price
        reward := pnl * 100
        in_trade := false
// If a trade was closed on THIS bar, feed the experience to the AI
if trade_is_closed
    agent := rl.learn(agent, last_state_hash, last_action_taken, reward, current_state, true)
// 4. DISPLAY DIAGNOSTICS
if i_rl_show_dash and barstate.islast
    string diag_text = rl.diagnostics(agent)
    label.new(bar_index, high, diag_text, style=label.style_label_down, color=color.new(#0A0A14, 10), textcolor=#00FF41, size=size.small, textalign=text.align_left)
█ DEVELOPMENT PHILOSOPHY
The DafeRLMLLib was born from a desire to push the boundaries of Pine Script and to empower the entire TradingView developer community. We believe that the future of technical analysis is not just in creating more complex algorithms, but in building systems that can learn, adapt, and optimize themselves. This library is an open-source framework designed to be a launchpad for a new generation of truly intelligent indicators on TradingView.
This library is designed to help you and your users discover what "the best trades" are, not by following a fixed set of rules, but by learning from the market's own feedback, one trade at a time.
█ DISCLAIMER & IMPORTANT NOTES
THIS IS A LIBRARY FOR ADVANCED DEVELOPERS: This script does nothing on its own. It is a powerful engine that must be integrated into other indicators.
REINFORCEMENT LEARNING IS COMPLEX: RL is not a magic bullet. It requires careful feature engineering (choosing the right sockets), a well-defined reward signal, and a sufficient amount of training data (trades) to converge on a profitable strategy.
ALL TRADING INVOLVES RISK: The AI's decisions are based on statistical probabilities learned from past data. It does not predict the future with certainty.
"The goal of a successful trader is to make the best trades. Money is secondary."
— Alexander Elder
Taking you to school. - Dskyz, Create with RL.
lib_w2c_INDI
Library "lib_w2c_INDI"
f_getChunk18()
f_getChunk19()
f_getChunk20()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_w2b_INDI
Library "lib_w2b_INDI"
f_getChunk9()
f_getChunk10()
f_getChunk11()
f_getChunk12()
f_getChunk13()
f_getChunk14()
f_getChunk15()
f_getChunk16()
f_getChunk17()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_w2a_INDI
Library "lib_w2a_INDI"
f_getChunk0()
f_getChunk1()
f_getChunk2()
f_getChunk3()
f_getChunk4()
f_getChunk5()
f_getChunk6()
f_getChunk7()
f_getChunk8()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_w1c_INDI
Library "lib_w1c_INDI"
f_getChunk18()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_w1b_INDI
Library "lib_w1b_INDI"
f_getChunk9()
f_getChunk10()
f_getChunk11()
f_getChunk12()
f_getChunk13()
f_getChunk14()
f_getChunk15()
f_getChunk16()
f_getChunk17()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_w1a_INDI
Library "lib_w1a_INDI"
f_getChunk0()
f_getChunk1()
f_getChunk2()
f_getChunk3()
f_getChunk4()
f_getChunk5()
f_getChunk6()
f_getChunk7()
f_getChunk8()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_b2_INDI
Library "lib_b2_INDI"
f_getChunk0()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
lib_b1_INDI
Library "lib_b1_INDI"
f_getChunk0()
f_getChunkNames()
f_getExpectedLength()
f_getKeyword()
f_dummyRegister()
f_loadChunk(name)
Parameters:
name (string)
OverfittingMetrics
Library "OverfittingMetrics"
calculateMetrics(tradeResults, tradeTypes, minTrades, startCapital)
Parameters:
tradeResults (array)
tradeTypes (array)
minTrades (int)
startCapital (float)
randomizeParameter(baseValue, variationPercent, seed)
Parameters:
baseValue (float)
variationPercent (float)
seed (int)
classifyMarket(priceSeries, lookbackLength)
Parameters:
priceSeries (float)
lookbackLength (simple int)
checkOverfittingWarnings(winRate, profitFactor, totalTrades)
Parameters:
winRate (float)
profitFactor (float)
totalTrades (int)
calculateConsistency(tradeResults)
Parameters:
tradeResults (array)
isOverfitDetected(winRate, profitFactor, totalTrades, minTrades)
Parameters:
winRate (float)
profitFactor (float)
totalTrades (int)
minTrades (int)
getOverfitScore(winRate, profitFactor, totalTrades)
Parameters:
winRate (float)
profitFactor (float)
totalTrades (int)
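For context, here is a minimal usage sketch based on the function signatures listed above. The import path is a placeholder, the win-rate/profit-factor numbers are hypothetical stand-ins for statistics your own script would compute, and the return types (a float score from getOverfitScore, a bool from isOverfitDetected) are assumptions, so adjust to the library's actual documentation.
```pine
//@version=6
indicator("Overfitting check sketch")
import YourUsername/OverfittingMetrics/1 as om

// Hypothetical statistics produced by your own backtest bookkeeping.
float winRate      = 0.62
float profitFactor = 1.8
int   totalTrades  = 85
int   minTrades    = 30

// Assumed return types: a numeric score and a boolean flag.
float score   = om.getOverfitScore(winRate, profitFactor, totalTrades)
bool  overfit = om.isOverfitDetected(winRate, profitFactor, totalTrades, minTrades)

plot(score, "Overfit score", color = overfit ? color.red : color.green)
```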
SpatialIndex
You can start using this now by inserting this at the top of your indicator/strategy/library.
import ArunaReborn/SpatialIndex/1 as SI
Overview
SpatialIndex is a high-performance Pine Script library that implements price-bucketed spatial indexing for efficient proximity queries on large datasets. Instead of scanning through hundreds or thousands of items linearly (O(n)), this library provides O(k) bucket lookup where k is typically just a handful of buckets, dramatically improving performance for price-based filtering operations.
This library works with any data type through index-based references, making it universally applicable for support/resistance levels, pivot points, order zones, pattern detection points, Fair Value Gaps, and any other price-based data that needs frequent proximity queries.
Why This Library Exists
The Problem
When building advanced technical indicators that track large numbers of price levels (support/resistance zones, pivot points, order blocks, etc.), you often need to answer questions like:
- *"Which levels are within 5% of the current price?"*
- *"What zones overlap with this price range?"*
- *"Are there any significant levels near my entry point?"*
The naive approach is to loop through every single item and check its price. For 500 levels across multiple timeframes, this means 500 comparisons every bar. On instruments with thousands of historical bars, this quickly becomes a performance bottleneck that can cause scripts to time out or lag.
The Solution
SpatialIndex solves this by organizing items into price buckets —like filing cabinets organized by price range. When you query for items near $50,000, the library only looks in the relevant buckets (e.g., $49,000-$51,000 range), ignoring all other price regions entirely.
Performance Example:
- Linear scan: Check 500 items = 500 comparisons per query
- Spatial index: Check 3-5 buckets with ~10 items each = 30-50 comparisons per query
- Result: 10-16x faster queries
Key Features
Core Capabilities
- ✅ Generic Design : Works with any data type via index references
- ✅ Multiple Index Strategies : Fixed bucket size or ATR-based dynamic sizing
- ✅ Range Support : Index items that span price ranges (zones, gaps, channels)
- ✅ Efficient Queries : O(k) bucket lookup instead of O(n) linear scan
- ✅ Multiple Query Types : Proximity percentage, fixed range, exact price with tolerance
- ✅ Dynamic Updates : Add, remove, update items in O(1) time
- ✅ Batch Operations : Efficient bulk removal and reindexing
- ✅ Query Caching : Optional caching for repeated queries within same bar
- ✅ Statistics & Debugging : Built-in stats and diagnostic functions
Advanced Features
- ATR-Based Bucketing : Automatically adjusts bucket sizes based on volatility
- Multi-Bucket Spanning : Items that span ranges are indexed in all overlapping buckets
- Reindexing Support : Handles array removals with automatic index shifting
- Cache Management : Configurable query caching with automatic invalidation
- Empty Bucket Cleanup : Automatically removes empty buckets to minimize memory
How It Works
The Bucketing Concept
Think of price space as divided into discrete buckets, like a histogram:
```
Price Range: $98-$100 $100-$102 $102-$104 $104-$106 $106-$108
Bucket Key: 49 50 51 52 53
Items:
```
When you query for items near $103:
1. Calculate which buckets overlap the $101.50-$104.50 range (keys 50, 51, 52)
2. Return items from only those buckets:
3. Never check items in buckets 49 or 53
Bucket Size Selection
Fixed Size Mode:
```pine
var SI.SpatialBucket index = SI.newSpatialBucket(2.0) // $2 per bucket
```
- Good for: Instruments with stable price ranges
- Example: For stocks trading at $100, 2.0 = 2% increments
ATR-Based Mode:
```pine
float atr = ta.atr(14)
var SI.SpatialBucket index = SI.newSpatialBucketATR(1.0, atr) // 1x ATR per bucket
SI.updateATR(index, atr) // Update each bar
```
- Good for: Instruments with varying volatility
- Adapts automatically to market conditions
- 1.0 multiplier = one bucket spans one ATR unit
Optimal Bucket Size:
The library includes a helper function to calculate optimal size:
```pine
float optimalSize = SI.calculateOptimalBucketSize(close, 5.0) // For 5% proximity queries
```
This ensures queries span approximately 3 buckets for optimal performance.
Index-Based Architecture
The library doesn't store your actual data—it only stores indices that point to your external arrays:
```pine
// Your data
var array<float> levels = array.new<float>()
var array<string> types = array.new<string>()
var array<int> ages = array.new<int>()
// Your index
var SI.SpatialBucket index = SI.newSpatialBucket(2.0)
// Add a level
array.push(levels, 50000.0)
array.push(types, "support")
array.push(ages, 0)
SI.add(index, array.size(levels) - 1, 50000.0) // Store index 0
// Query near current price
SI.QueryResult result = SI.queryProximity(index, close, 5.0)
for idx in result.indices
    float level = array.get(levels, idx)
    string type = array.get(types, idx)
    // Work with your actual data
```
This design means:
- ✅ Works with any data structure you define
- ✅ No data duplication
- ✅ Minimal memory footprint
- ✅ Full control over your data
---
Usage Guide
Basic Setup
```pine
// Import library
import username/SpatialIndex/1 as SI
// Create index
var SI.SpatialBucket index = SI.newSpatialBucket(2.0)
// Your data arrays
var array<float> supportLevels = array.new<float>()
var array<int> touchCounts = array.new<int>()
```
Adding Items
Single Price Point:
```pine
// Add a support level at $50,000
array.push(supportLevels, 50000.0)
array.push(touchCounts, 1)
int levelIdx = array.size(supportLevels) - 1
SI.add(index, levelIdx, 50000.0)
```
Price Range (Zones/Gaps):
```pine
// Add a resistance zone from $51,000 to $52,000
array.push(zoneBottoms, 51000.0)
array.push(zoneTops, 52000.0)
int zoneIdx = array.size(zoneBottoms) - 1
SI.addRange(index, zoneIdx, 51000.0, 52000.0) // Indexed in all overlapping buckets
```
Querying Items
Proximity Query (Percentage):
```pine
// Find all levels within 5% of current price
SI.QueryResult result = SI.queryProximity(index, close, 5.0)
if array.size(result.indices) > 0
    for idx in result.indices
        float level = array.get(supportLevels, idx)
        // Process nearby level
```
Fixed Range Query:
```pine
// Find all items between $49,000 and $51,000
SI.QueryResult result = SI.queryRange(index, 49000.0, 51000.0)
```
Exact Price with Tolerance:
```pine
// Find items at exactly $50,000 +/- $100
SI.QueryResult result = SI.queryAt(index, 50000.0, 100.0)
```
Removing Items
Safe Removal Pattern:
```pine
SI.QueryResult result = SI.queryProximity(index, close, 5.0)
if array.size(result.indices) > 0
    // IMPORTANT: Sort descending to safely remove from arrays
    array<int> sorted = SI.sortIndicesDescending(result)
    for idx in sorted
        // Remove from index
        SI.remove(index, idx)
        // Remove from your data arrays
        array.remove(supportLevels, idx)
        array.remove(touchCounts, idx)
        // Reindex to maintain consistency
        SI.reindexAfterRemoval(index, idx)
```
Batch Removal (More Efficient):
```pine
// Collect indices to remove
array<int> toRemove = array.new<int>()
for i = 0 to array.size(supportLevels) - 1
    if array.get(touchCounts, i) > 10 // Remove old levels
        array.push(toRemove, i)
// Remove in descending order from data arrays
array<int> sorted = array.copy(toRemove)
array.sort(sorted, order.descending)
for idx in sorted
    SI.remove(index, idx)
    array.remove(supportLevels, idx)
    array.remove(touchCounts, idx)
// Batch reindex (much faster than individual reindexing)
SI.reindexAfterBatchRemoval(index, toRemove)
```
Updating Items
```pine
// Update a level's price (e.g., after refinement)
float newPrice = 50100.0
SI.update(index, levelIdx, newPrice)
array.set(supportLevels, levelIdx, newPrice)
// Update a zone's range
SI.updateRange(index, zoneIdx, 51000.0, 52500.0)
array.set(zoneBottoms, zoneIdx, 51000.0)
array.set(zoneTops, zoneIdx, 52500.0)
```
Query Caching
For repeated queries within the same bar:
```pine
// Create cache (persistent)
var SI.CachedQuery cache = SI.newCachedQuery()
// Cached query (returns cached result if parameters match)
SI.QueryResult result = SI.queryProximityCached(
     index,
     cache,
     close,
     5.0, // proximity%
     1)   // cache duration in bars
// Invalidate cache when index changes significantly
if bigChangeDetected
    SI.invalidateCache(cache)
```
---
Practical Examples
Example 1: Support/Resistance Finder
```pine
//@version=6
indicator("S/R with Spatial Index", overlay=true)
import username/SpatialIndex/1 as SI
// Data storage
var array<float> levels = array.new<float>()
var array<string> types = array.new<string>() // "support" or "resistance"
var array<int> touches = array.new<int>()
var array<int> ages = array.new<int>()
// Spatial index
var SI.SpatialBucket index = SI.newSpatialBucket(close * 0.02) // 2% buckets
// Detect pivots (confirmed 5 bars after they form)
bool isPivotHigh = not na(ta.pivothigh(high, 5, 5))
bool isPivotLow = not na(ta.pivotlow(low, 5, 5))
// Add new levels
if isPivotHigh
    array.push(levels, high[5])
    array.push(types, "resistance")
    array.push(touches, 1)
    array.push(ages, 0)
    SI.add(index, array.size(levels) - 1, high[5])
if isPivotLow
    array.push(levels, low[5])
    array.push(types, "support")
    array.push(touches, 1)
    array.push(ages, 0)
    SI.add(index, array.size(levels) - 1, low[5])
// Find nearby levels (fast!)
SI.QueryResult nearby = SI.queryProximity(index, close, 3.0) // Within 3%
// Process nearby levels
for idx in nearby.indices
    float level = array.get(levels, idx)
    string type = array.get(types, idx)
    // Check for touch
    if type == "support" and low <= level and low[1] > level
        array.set(touches, idx, array.get(touches, idx) + 1)
    else if type == "resistance" and high >= level and high[1] < level
        array.set(touches, idx, array.get(touches, idx) + 1)
// Age and cleanup old levels
if array.size(ages) > 0
    for i = array.size(ages) - 1 to 0
        array.set(ages, i, array.get(ages, i) + 1)
        // Remove levels older than 500 bars or with 5+ touches
        if array.get(ages, i) > 500 or array.get(touches, i) >= 5
            SI.remove(index, i)
            array.remove(levels, i)
            array.remove(types, i)
            array.remove(touches, i)
            array.remove(ages, i)
            SI.reindexAfterRemoval(index, i)
// Visualization (guard against indices shifted by the cleanup above)
for idx in nearby.indices
    if idx < array.size(levels)
        line.new(bar_index, array.get(levels, idx), bar_index + 10, array.get(levels, idx),
             color=array.get(types, idx) == "support" ? color.green : color.red)
```
Example 2: Multi-Timeframe Zone Detector
```pine
//@version=6
indicator("MTF Zones", overlay=true)
import username/SpatialIndex/1 as SI
// Store zones from multiple timeframes
var array<float> zoneBottoms = array.new<float>()
var array<float> zoneTops = array.new<float>()
var array<string> zoneTimeframes = array.new<string>()
// ATR-based spatial index for adaptive bucketing
float atrValue = ta.atr(14)
var SI.SpatialBucket index = SI.newSpatialBucketATR(1.0, atrValue)
SI.updateATR(index, atrValue) // Update bucket size with volatility
// Request higher timeframe data
[htf_high, htf_low] = request.security(syminfo.tickerid, "240", [high, low])
// Detect HTF zones
if not na(htf_high) and not na(htf_low)
    float zoneTop = htf_high
    float zoneBottom = htf_low * 0.995 // 0.5% zone thickness
    // Check if zone already exists nearby
    SI.QueryResult existing = SI.queryRange(index, zoneBottom, zoneTop)
    if array.size(existing.indices) == 0 // No overlapping zones
        // Add new zone
        array.push(zoneBottoms, zoneBottom)
        array.push(zoneTops, zoneTop)
        array.push(zoneTimeframes, "4H")
        int idx = array.size(zoneBottoms) - 1
        SI.addRange(index, idx, zoneBottom, zoneTop)
// Query zones near current price
SI.QueryResult nearbyZones = SI.queryProximity(index, close, 2.0) // Within 2%
// Highlight nearby zones
for idx in nearbyZones.indices
    box.new(bar_index - 50, array.get(zoneTops, idx),
         bar_index, array.get(zoneBottoms, idx),
         bgcolor=color.new(color.blue, 90))
```
Example 3: Performance Comparison
```pine
//@version=6
indicator("Spatial Index Performance Test")
import username/SpatialIndex/1 as SI
// Generate 500 random levels
var array<float> levels = array.new<float>()
var SI.SpatialBucket index = SI.newSpatialBucket(close * 0.02)
if bar_index == 0
    for i = 0 to 499
        float randomLevel = close * (0.9 + math.random() * 0.2) // +/- 10%
        array.push(levels, randomLevel)
        SI.add(index, i, randomLevel)
// Method 1: Linear scan (naive approach)
int linearCount = 0
float proximityPct = 5.0
float lowBand = close * (1 - proximityPct/100)
float highBand = close * (1 + proximityPct/100)
for i = 0 to array.size(levels) - 1
    float level = array.get(levels, i)
    if level >= lowBand and level <= highBand
        linearCount += 1
// Method 2: Spatial index query
SI.QueryResult result = SI.queryProximity(index, close, proximityPct)
int spatialCount = array.size(result.indices)
// Compare performance
plot(result.queryCount, "Items Examined (Spatial)", color=color.green)
plot(linearCount, "Items Examined (Linear)", color=color.red)
plot(spatialCount, "Results Found", color=color.blue)
// Spatial index typically examines 10-50 items vs 500 for linear scan!
```
API Reference Summary
Initialization
- `newSpatialBucket(bucketSize)` - Fixed bucket size
- `newSpatialBucketATR(atrMultiplier, atrValue)` - ATR-based buckets
- `updateATR(sb, newATR)` - Update ATR for dynamic sizing
Adding Items
- `add(sb, itemIndex, price)` - Add item at single price point
- `addRange(sb, itemIndex, priceBottom, priceTop)` - Add item spanning range
Querying
- `queryProximity(sb, refPrice, proximityPercent)` - Query by percentage
- `queryRange(sb, priceBottom, priceTop)` - Query fixed range
- `queryAt(sb, price, tolerance)` - Query exact price with tolerance
- `queryProximityCached(sb, cache, refPrice, pct, duration)` - Cached query
Removing & Updating
- `remove(sb, itemIndex)` - Remove item
- `update(sb, itemIndex, newPrice)` - Update item price
- `updateRange(sb, itemIndex, newBottom, newTop)` - Update item range
- `reindexAfterRemoval(sb, removedIndex)` - Reindex after single removal
- `reindexAfterBatchRemoval(sb, removedIndices)` - Batch reindex
- `clear(sb)` - Remove all items
Utilities
- `size(sb)` - Get item count
- `isEmpty(sb)` - Check if empty
- `contains(sb, itemIndex)` - Check if item exists
- `getStats(sb)` - Get debug statistics string
- `calculateOptimalBucketSize(price, pct)` - Calculate optimal bucket size
- `sortIndicesDescending(result)` - Sort for safe removal
- `sortIndicesAscending(result)` - Sort ascending
Performance Characteristics
Time Complexity
- Add : O(1) for single point, O(m) for range spanning m buckets
- Remove : O(1) lookup + O(b) bucket cleanup where b = buckets item spans
- Query : O(k) where k = buckets in range (typically 3-5) vs O(n) linear scan
- Update : O(1) removal + O(1) addition = O(1) total
Space Complexity
- Memory per item : ~8 bytes for index reference + map overhead
- Bucket overhead : Proportional to price range coverage
- Typical usage : For 500 items with 50 active buckets ≈ 4-8KB total
Scalability
- ✅ 100 items : ~5-10x faster than linear scan
- ✅ 500 items : ~10-15x faster
- ✅ 1000+ items : ~15-20x faster
- ⚠️ Performance degrades if bucket size is too small (too many buckets)
- ⚠️ Performance degrades if bucket size is too large (too many items per bucket)
Best Practices
Bucket Size Selection
1. Start with 2-5% of asset price for percentage-based queries
2. Use ATR-based mode for volatile assets or multi-symbol scripts
3. Test bucket size using `calculateOptimalBucketSize()` function
4. Monitor with `getStats()` to ensure reasonable bucket count
Memory Management
1. Clear old items regularly to prevent unbounded growth
2. Use age tracking to remove stale data
3. Set maximum item limits based on your needs
4. Batch removals are more efficient than individual removals
Query Optimization
1. Use caching for repeated queries within same bar
2. Invalidate cache when index changes significantly
3. Sort results descending before removal iteration
4. Batch operations when possible (reindexing, removal)
Data Consistency
1. Always reindex after removal to maintain index alignment
2. Remove from arrays in descending order to avoid index shifting issues
3. Use batch reindex for multiple simultaneous removals
4. Keep external arrays and index in sync at all times
Limitations & Caveats
Known Limitations
- Not suitable for exact price matching : Use tolerance with `queryAt()`
- Bucket size affects performance : Too small = many buckets, too large = many items per bucket
- Memory usage : Scales with price range coverage and item count
- Reindexing overhead : Removing items mid-array requires index shifting
When NOT to Use
- ❌ Datasets with < 50 items (linear scan is simpler)
- ❌ Items that change price every bar (constant reindexing overhead)
- ❌ When you need ALL items every time (no benefit over arrays)
- ❌ Exact price level matching without tolerance (use maps instead)
When TO Use
- ✅ Large datasets (100+ items) with occasional queries
- ✅ Proximity-based filtering (% of price, ATR-based ranges)
- ✅ Multi-timeframe level tracking
- ✅ Zone/range overlap detection
- ✅ Price-based spatial filtering
---
Technical Details
Bucketing Algorithm
Items are assigned to buckets using integer division:
```
bucketKey = floor((price - basePrice) / bucketSize)
```
For ATR-based mode:
```
effectiveBucketSize = atrValue × atrMultiplier
bucketKey = floor((price - basePrice) / effectiveBucketSize)
```
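As a quick illustration of the bucketing math above, the following sketch computes a bucket key from a price. It is purely illustrative; the library performs the equivalent calculation internally, and the basePrice of 0.0 here is an arbitrary assumption.
```pine
//@version=6
indicator("Bucket key sketch")

// Integer division assigns each price to a discrete bucket.
bucketKey(float price, float basePrice, float bucketSize) =>
    int(math.floor((price - basePrice) / bucketSize))

// With $2 buckets based at $0, a $103.40 price maps to bucket 51.
plot(bucketKey(103.40, 0.0, 2.0), "Example bucket key")
```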
Range Indexing
Items spanning price ranges are indexed in all overlapping buckets to ensure accurate range queries. The midpoint bucket is designated as the "primary bucket" for removal operations.
Index Consistency
The library maintains two maps:
1. `buckets`: Maps bucket keys → IntArray wrappers containing item indices
2. `itemToBucket`: Maps item indices → primary bucket key (for O(1) removal)
This dual-mapping ensures both fast queries and fast removal while maintaining consistency.
Implementation Note: Pine Script doesn't allow nested collections (map containing arrays directly), so the library uses an `IntArray` wrapper type to hold arrays within the map structure. This is an internal implementation detail that doesn't affect usage.
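Below is a rough sketch of how such a dual-map structure can be expressed in Pine Script. It is a guess at the general shape, not the library's actual internals; the type name IntArray and the helper addToBucket are illustrative.
```pine
//@version=6
indicator("Dual-map bucket sketch")

// Pine maps cannot hold arrays directly, so wrap the array in a user-defined type.
type IntArray
    array<int> items

var map<int, IntArray> buckets      = map.new<int, IntArray>()
var map<int, int>      itemToBucket = map.new<int, int>()

// Register an item index under a bucket key, remembering its primary bucket.
addToBucket(int itemIndex, int bucketKey) =>
    if not map.contains(buckets, bucketKey)
        map.put(buckets, bucketKey, IntArray.new(array.new<int>()))
    array.push(map.get(buckets, bucketKey).items, itemIndex)
    map.put(itemToBucket, itemIndex, bucketKey) // enables O(1) lookup on removal

if barstate.isfirst
    addToBucket(0, 51)

plot(map.size(buckets), "Bucket count")
```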
---
Version History
Version 1.0
- Initial release
Credits & License
License : Mozilla Public License 2.0 (TradingView default)
Library Type : Open-source educational resource
This library is designed as a public domain utility for the Pine Script community. As per TradingView library rules, this code can be freely reused by other authors. If you use this library in your scripts, please provide appropriate credit as required by House Rules.
Summary
SpatialIndex is a specialized library that solves a specific problem: fast proximity queries on large price-based datasets . If you're building indicators that track hundreds of levels, zones, or price points and need to frequently filter by proximity to current price, this library can provide 10-20x performance improvements over naive linear scanning.
The index-based architecture makes it universally applicable to any data type, and the ATR-based bucketing ensures it adapts to market conditions automatically. Combined with query caching and batch operations, it provides a complete solution for spatial data management in Pine Script.
Use this library when speed matters and your dataset is large.
MLMatrixLib
Overview
MLMatrixLib is a comprehensive Pine Script v6 library implementing machine learning algorithms using native matrix operations. This library provides traders and developers with a toolkit of statistical and ML methods for building quantitative trading systems, performing data analysis, and creating adaptive indicators.
How It Works
The library leverages Pine Script's native matrix type to perform efficient linear algebra operations. Each algorithm is implemented from first principles, using matrix decomposition, iterative optimization, and statistical estimation techniques. All functions are designed for numerical stability with careful handling of edge cases.
---
Library Contents (34 Sections)
Section 1: Utility Functions & Matrix Operations
Core building blocks including:
• identity(n) - Creates n×n identity matrix
• diagonal(values) - Creates diagonal matrix from array
• ones(rows, cols) / zeros(rows, cols) - Matrix constructors
• frobeniusNorm(m) / l1Norm(m) - Matrix norm calculations
• hadamard(m1, m2) - Element-wise multiplication
• columnMeans(m) / rowMeans(m) - Statistical aggregations
• standardize(m) - Z-score normalization (zero mean, unit variance)
• minMaxNormalize(m) - Scale values to range
• fitStandardScaler(m) / fitMinMaxScaler(m) - Reusable scaler parameters
• addBiasColumn(m) - Prepend column of ones for regression
• arrayMedian(arr) / arrayPercentile(arr, p) - Array statistics
Section 2: Activation Functions
Numerically stable implementations:
• sigmoid(x) / sigmoidMatrix(m) - Logistic function with overflow protection
• tanhActivation(x) / tanhMatrix(m) - Hyperbolic tangent
• relu(x) / reluMatrix(m) - Rectified Linear Unit
• leakyRelu(x, alpha) - Leaky ReLU with configurable slope
• elu(x, alpha) - Exponential Linear Unit
• Derivatives for backpropagation: sigmoidDerivative, tanhDerivative, reluDerivative
Section 3: Linear Regression (OLS)
Ordinary Least Squares implementation using the normal equation (X'X)⁻¹X'y:
• fitLinearRegression(X, y) - Fits model, returns coefficients, R², standard error
• fitSimpleLinearRegression(x, y) - Single-variable regression
• predictLinear(model, X) - Generate predictions
• predictionInterval(model, X, confidence) - Confidence intervals using t-distribution
• Model type stores: coefficients, R-squared, residuals, standard error
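As a sketch of the normal-equation math this section describes, the snippet below fits a simple trend line to the last N closes with Pine's native matrix functions. It illustrates the underlying algebra rather than the library's fitLinearRegression implementation, and the feature choice (a bar-offset column) is an assumption.
```pine
//@version=6
indicator("Normal equation sketch", overlay=true)

int N = 20
float fitted = na
if bar_index >= N
    // Build X = [1, bar offset] and y = close over the last N bars,
    // then solve beta = (X'X)^-1 X'y.
    matrix<float> X = matrix.new<float>(N, 2, 1.0)
    matrix<float> y = matrix.new<float>(N, 1, 0.0)
    for i = 0 to N - 1
        matrix.set(X, i, 1, i)               // second column: bar offset
        matrix.set(y, i, 0, close[N - 1 - i])
    matrix<float> Xt   = matrix.transpose(X)
    matrix<float> beta = matrix.mult(matrix.inv(matrix.mult(Xt, X)), matrix.mult(Xt, y))
    fitted := matrix.get(beta, 0, 0) + matrix.get(beta, 1, 0) * (N - 1)

plot(fitted, "OLS fitted value", color=color.teal)
```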
Section 4: Weighted Linear Regression
Generalized least squares with observation weights:
• fitWeightedLinearRegression(X, y, weights) - Solves (X'WX)⁻¹X'Wy
• Useful for downweighting outliers or emphasizing recent data
Section 5: Polynomial Regression
Fits polynomials of arbitrary degree:
• fitPolynomialRegression(x, y, degree) - Constructs Vandermonde matrix
• predictPolynomial(model, x) - Evaluate polynomial at points
Section 6: Ridge Regression (L2 Regularization)
Adds penalty term λ||β||² to prevent overfitting:
• fitRidgeRegression(X, y, lambda) - Solves (X'X + λI)⁻¹X'y
• Lambda parameter controls regularization strength
Section 7: LASSO Regression (L1 Regularization)
Coordinate descent algorithm for sparse solutions:
• fitLassoRegression(X, y, lambda, maxIter, tolerance) - Iterative soft-thresholding
• Produces sparse coefficients by driving some to exactly zero
• softThreshold(x, lambda) - Core shrinkage operator
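The soft-threshold operator is small enough to show in full. This is a generic sketch of the standard operator, assuming the library's version behaves the same way.
```pine
//@version=6
indicator("Soft-threshold sketch")

// Shrink x toward zero by lambda and snap to exactly zero when |x| <= lambda.
// Applied coordinate-wise, this is what drives LASSO coefficients to sparsity.
softThreshold(float x, float lambda) =>
    x > lambda ? x - lambda : x < -lambda ? x + lambda : 0.0

plot(softThreshold(ta.roc(close, 10), 0.5), "Shrunken 10-bar momentum")
```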
Section 8: Elastic Net (L1 + L2 Regularization)
Combines LASSO and Ridge penalties:
• fitElasticNet(X, y, lambda, alpha, maxIter, tolerance)
• Alpha balances L1 vs L2: alpha=1 is LASSO, alpha=0 is Ridge
Section 9: Huber Robust Regression
Iteratively Reweighted Least Squares (IRLS) for outlier resistance:
• fitHuberRegression(X, y, delta, maxIter, tolerance)
• Delta parameter defines transition between L1 and L2 loss
• Downweights observations with large residuals
Section 10: Quantile Regression
Estimates conditional quantiles using linear programming approximation:
• fitQuantileRegression(X, y, tau, maxIter, tolerance)
• Tau specifies quantile (0.5 = median, 0.25 = lower quartile, etc.)
Section 11: Logistic Regression (Binary Classification)
Gradient descent optimization of cross-entropy loss:
• fitLogisticRegression(X, y, learningRate, maxIter, tolerance)
• predictProbability(model, X) - Returns probabilities
• predictClass(model, X, threshold) - Returns binary predictions
Section 12: Linear SVM (Support Vector Machine)
Sub-gradient descent with hinge loss:
• fitLinearSVM(X, y, C, learningRate, maxIter)
• C parameter controls regularization (higher = harder margin)
• predictSVM(model, X) - Returns class predictions
Section 13: Recursive Least Squares (RLS)
Online learning with exponential forgetting:
• createRLSState(nFeatures, lambda, delta) - Initialize state
• updateRLS(state, x, y) - Update with new observation
• Lambda is forgetting factor (0.95-0.99 typical)
• Useful for adaptive indicators that update incrementally
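To show what one recursive least squares step looks like in matrix form, here is a self-contained sketch that adapts two weights (a bias and a momentum term) bar by bar. It illustrates the standard RLS recursion with exponential forgetting rather than the library's createRLSState/updateRLS internals, and the feature/target choices are assumptions.
```pine
//@version=6
indicator("RLS concept sketch")

int   n      = 2      // bias + one feature
float lambda = 0.98   // forgetting factor

var matrix<float> P = matrix.new<float>(n, n, 0.0)
var matrix<float> w = matrix.new<float>(n, 1, 0.0)
if barstate.isfirst
    for i = 0 to n - 1
        matrix.set(P, i, i, 1000.0)  // large initial covariance: "know nothing yet"

// Feature known one bar ago and the return it tries to explain.
float momFeature = ta.roc(close, 5)
float target     = (close - close[1]) / close[1]

if bar_index > 10 and not na(momFeature[1]) and not na(target)
    matrix<float> x = matrix.new<float>(n, 1, 1.0)
    matrix.set(x, 1, 0, momFeature[1])
    // Standard RLS recursion: gain, prediction error, weight and covariance updates.
    matrix<float> Px    = matrix.mult(P, x)
    float denom         = lambda + matrix.get(matrix.mult(matrix.transpose(x), Px), 0, 0)
    matrix<float> k     = matrix.mult(Px, 1.0 / denom)
    float err           = target - matrix.get(matrix.mult(matrix.transpose(x), w), 0, 0)
    w := matrix.sum(w, matrix.mult(k, err))
    P := matrix.mult(matrix.diff(P, matrix.mult(matrix.mult(k, matrix.transpose(x)), P)), 1.0 / lambda)

plot(matrix.get(w, 1, 0), "Adaptive momentum weight")
```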
Section 14: Covariance and Correlation
Matrix statistics:
• covarianceMatrix(m) - Sample covariance
• correlationMatrix(m) - Pearson correlations
• pearsonCorrelation(x, y) - Single correlation coefficient
• spearmanCorrelation(x, y) - Rank-based correlation
Section 15: Principal Component Analysis (PCA)
Dimensionality reduction via eigendecomposition:
• fitPCA(X, nComponents) - Power iteration method
• transformPCA(X, model) - Project data onto principal components
• Returns components, explained variance, and mean
Section 16: K-Means Clustering
Lloyd's algorithm with k-means++ initialization:
• fitKMeans(X, k, maxIter, tolerance) - Cluster data points
• predictCluster(model, X) - Assign new points to clusters
• withinClusterVariance(model) - Measure cluster compactness
Section 17: Gaussian Mixture Model (GMM)
Expectation-Maximization algorithm:
• fitGMM(X, k, maxIter, tolerance) - Soft clustering with probabilities
• predictProbaGMM(model, X) - Returns membership probabilities
• Models data as mixture of Gaussian distributions
Section 18: Kalman Filter
Linear state estimation:
• createKalman1D(processNoise, measurementNoise, ...) - 1D filter
• createKalman2D(processNoise, measurementNoise) - Position + velocity tracking
• kalmanStep(state, measurement) - Predict-update cycle
• Optimal filtering for noisy measurements
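For intuition, here is a scalar predict-update cycle in plain Pine. It mirrors the textbook 1-D Kalman filter rather than the library's createKalman1D/kalmanStep API, and the two noise constants are assumptions to tune.
```pine
//@version=6
indicator("Scalar Kalman sketch", overlay=true)

float processNoise     = 0.01  // how much the hidden state may drift per bar
float measurementNoise = 1.0   // how noisy each close is assumed to be

var float estimate = na
var float errCov   = 1.0

if na(estimate)
    estimate := close
else
    // Predict: state assumed constant, so only the uncertainty grows.
    float predCov = errCov + processNoise
    // Update: blend the prediction with the new measurement via the Kalman gain.
    float gain = predCov / (predCov + measurementNoise)
    estimate := estimate + gain * (close - estimate)
    errCov   := (1.0 - gain) * predCov

plot(estimate, "Kalman-filtered close", color=color.orange)
```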
Section 19: K-Nearest Neighbors (KNN)
Instance-based learning:
• fitKNN(X, y) - Store training data
• predictKNN(model, X, k) - Classify by majority vote
• predictKNNRegression(model, X, k) - Average of k neighbors
• predictKNNWeighted(model, X, k) - Distance-weighted voting
Section 20: Neural Network (Feedforward)
Multi-layer perceptron:
• createNeuralNetwork(architecture) - Define layer sizes
• trainNeuralNetwork(nn, X, y, learningRate, epochs) - Backpropagation
• predictNN(nn, X) - Forward pass
• Supports configurable hidden layers
Section 21: Naive Bayes Classifier
Gaussian Naive Bayes:
• fitNaiveBayes(X, y) - Estimate class-conditional distributions
• predictNaiveBayes(model, X) - Maximum a posteriori classification
• Assumes feature independence given class
Section 22: Anomaly Detection
Statistical outlier detection:
• fitAnomalyDetector(X, contamination) - Mahalanobis distance-based
• detectAnomalies(model, X) - Returns anomaly scores
• isAnomaly(model, X, threshold) - Binary classification
Section 23: Dynamic Time Warping (DTW)
Time series similarity:
• dtw(series1, series2) - Compute DTW distance
• Handles sequences of different lengths
• Useful for pattern matching
Section 24: Markov Chain / Regime Detection
Discrete state transitions:
• fitMarkovChain(states, nStates) - Estimate transition matrix
• predictNextState(transitionMatrix, currentState) - Most likely next state
• stationaryDistribution(transitionMatrix) - Long-run probabilities
Section 25: Hidden Markov Model (Simple)
Baum-Welch algorithm:
• fitHMM(observations, nStates, maxIter) - EM training
• viterbi(model, observations) - Most likely state sequence
• Useful for regime detection
Section 26: Exponential Smoothing & Holt-Winters
Time series smoothing:
• exponentialSmooth(data, alpha) - Simple exponential smoothing
• holtWinters(data, alpha, beta, gamma, seasonLength) - Triple smoothing
• Captures trend and seasonality
Section 27: Entropy and Information Theory
Information measures:
• entropy(probabilities) - Shannon entropy in bits
• conditionalEntropy(jointProbs, marginalProbs) - H(X|Y)
• mutualInformation(probsX, probsY, jointProbs) - I(X;Y)
• kldivergence(p, q) - Kullback-Leibler divergence
Section 28: Hurst Exponent
Long-range dependence measure:
• hurstExponent(data) - R/S analysis
• H < 0.5: mean-reverting, H = 0.5: random walk, H > 0.5: trending
Section 29: Change Detection (CUSUM)
Cumulative sum control chart:
• cusumChangeDetection(data, threshold, drift) - Detect regime changes
• cusumOnline(value, prevCusumPos, prevCusumNeg, target, drift) - Streaming version
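A streaming CUSUM update is compact enough to sketch directly. This follows the generic two-sided CUSUM recursion; the target, drift, and threshold values are placeholder assumptions, and the library's cusumOnline may differ in details.
```pine
//@version=6
indicator("Online CUSUM sketch")

float x         = nz(ta.roc(close, 1))   // series being monitored
float target    = nz(ta.sma(x, 100))     // expected level of that series
float drift     = 0.05                   // slack before deviations accumulate
float threshold = 2.0                    // alarm level

var float cusumPos = 0.0
var float cusumNeg = 0.0

// Accumulate deviations above/below the target, never letting the sums go negative.
cusumPos := math.max(0.0, cusumPos + x - target - drift)
cusumNeg := math.max(0.0, cusumNeg - (x - target) - drift)

bool regimeChange = cusumPos > threshold or cusumNeg > threshold
if regimeChange
    cusumPos := 0.0
    cusumNeg := 0.0

plot(cusumPos, "CUSUM+", color=color.green)
plot(cusumNeg, "CUSUM-", color=color.red)
bgcolor(regimeChange ? color.new(color.yellow, 80) : na)
```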
Section 30: Autocorrelation
Serial dependence analysis:
• autocorrelation(data, maxLag) - ACF for all lags
• partialAutocorrelation(data, maxLag) - PACF via Durbin-Levinson
• Useful for time series model identification
Section 31: Ensemble Methods
Model combination:
• baggingPredict(models, X) - Average predictions
• votingClassify(models, X) - Majority vote
• Improves robustness through aggregation
Section 32: Model Evaluation Metrics
Performance assessment:
• mse(actual, predicted) / rmse / mae / mape - Regression metrics
• accuracy(actual, predicted) - Classification accuracy
• precision / recall / f1Score - Binary classification metrics
• confusionMatrix(actual, predicted, nClasses) - Multi-class evaluation
• rSquared(actual, predicted) / adjustedRSquared - Goodness of fit
Section 33: Cross-Validation
Model validation:
• trainTestSplit(X, y, trainRatio) - Random split
• Foundation for walk-forward validation
Section 34: Trading Convenience Functions
Trading-specific utilities:
• priceMatrix(length) - OHLC data as matrix
• logReturns(length) - Log return series
• rollingSlope(src, length) - Linear trend strength
• kalmanFilter(src, processNoise, measurementNoise) - Filtered price
• kalmanFilter2D(src, ...) - Price with velocity estimate
• adaptiveMA(src, sensitivity) - Kalman-based adaptive moving average
• volAdjMomentum(src, length) - Volatility-normalized momentum
• detectSRLevels(length, nLevels) - K-means based S/R detection
• buildFeatures(src, lengths) - Multi-timeframe feature construction
• technicalFeatures(length) - Standard indicator feature set (RSI, MACD, BB, ATR, etc.)
• lagFeatures(src, lags) - Time-lagged features
• sharpeRatio(returns) - Risk-adjusted return measure
• sortinoRatio(returns) - Downside risk-adjusted return
• maxDrawdown(equity) - Maximum peak-to-trough decline
• calmarRatio(returns, equity) - Return/drawdown ratio
• kellyCriterion(winRate, avgWin, avgLoss) - Optimal position sizing (the formula is sketched after this list)
• fractionalKelly(...) - Conservative Kelly sizing
• rollingBeta(assetReturns, benchmarkReturns) - Market exposure
• fractalDimension(data) - Market complexity measure
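As a quick reference for the position-sizing entries above, here is the classic Kelly formula in a standalone sketch. It shows the textbook calculation, not the library's kellyCriterion/fractionalKelly implementations, and the sample statistics are placeholders.
```pine
//@version=6
indicator("Kelly sizing sketch")

// Classic Kelly fraction: f* = (b*p - q) / b, with b = avgWin/avgLoss, p = win rate, q = 1 - p.
// Multiplying by a fraction (e.g., 0.5) gives the more conservative "fractional Kelly".
kellyFraction(float winRate, float avgWin, float avgLoss, float fraction) =>
    float b = avgWin / avgLoss
    float f = (b * winRate - (1.0 - winRate)) / b
    math.max(0.0, f) * fraction

plot(kellyFraction(0.55, 1.5, 1.0, 0.5), "Half-Kelly fraction")
```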
---
Usage Example
```
import YourUsername/MLMatrixLib/1 as ml
// Create feature matrix
matrix<float> X = ml.priceMatrix(50)
X := ml.standardize(X)
// Fit linear regression (y and X_new are your own target vector and new feature matrix)
ml.LinearRegressionModel model = ml.fitLinearRegression(X, y)
float prediction = ml.predictLinear(model, X_new)
// Kalman filter for smoothing
float smoothedPrice = ml.kalmanFilter(close, 0.01, 1.0)
// Detect support/resistance levels
array<float> levels = ml.detectSRLevels(100, 3)
// K-means clustering for regime detection (features / newFeature are your own matrices)
ml.KMeansModel km = ml.fitKMeans(features, 3)
int cluster = ml.predictCluster(km, newFeature)
```
---
Technical Notes
• All matrix operations use Pine Script's native matrix type
• Numerical stability ensured through:
- Clamping exponential arguments to prevent overflow
- Division by zero protection with epsilon thresholds
- Iterative algorithms with convergence tolerance
• Designed for bar-by-bar execution in Pine Script's event-driven model
• Compatible with Pine Script v6
---
Disclaimer
This library provides mathematical tools for quantitative analysis. It does not constitute financial advice. Past performance of any algorithm does not guarantee future results. Users are responsible for validating models on their specific use cases and understanding the limitations of each method.
ZigZag force
Library "ZigZag"
method lastPivot(this)
Retrieves the last `Pivot` object's reference from a `ZigZag` object's `pivots`
array if it contains at least one element, or `na` if the array is empty.
Callable as a method or a function.
Namespace types: ZigZag
Parameters:
this (ZigZag) : (series ZigZag) The `ZigZag` object's reference.
Returns: (Pivot) The reference of the last `Pivot` instance in the `ZigZag` object's
`pivots` array, or `na` if the array is empty.
method update(this)
Updates a `ZigZag` object's pivot information, volume data, lines, and
labels when it detects new pivot points.
NOTE: This function requires a single execution on each bar for accurate
calculations.
Callable as a method or a function.
Namespace types: ZigZag
Parameters:
this (ZigZag) : (series ZigZag) The `ZigZag` object's reference.
Returns: (bool) `true` if the function detects a new pivot point and updates the
`ZigZag` object's data, `false` otherwise.
newInstance(settings)
Creates a new `ZigZag` instance with optional settings.
Parameters:
settings (Settings) : (series Settings) Optional. A `Settings` object's reference for the new
`ZigZag` instance's `settings` field. If `na`, the `ZigZag` instance
uses a new `Settings` object with default properties. The default is `na`.
Returns: (ZigZag) A new `ZigZag` object's reference.
Settings
A structure for objects that store calculation and display properties for `ZigZag` instances.
Fields:
devThreshold (series float) : The minimum percentage deviation from a previous pivot point required to change the Zig Zag's direction.
depth (series int) : The number of bars required for pivot point detection.
lineColor (series color) : The color of each line in the Zig Zag drawing.
extendLast (series bool) : Specifies whether the Zig Zag drawing includes a line connecting the most recent pivot point to the latest bar's `close`.
displayReversalPrice (series bool) : Specifies whether the Zig Zag drawing shows pivot prices in its labels.
displayCumulativeVolume (series bool) : Specifies whether the Zig Zag drawing shows the cumulative volume between pivot points in its labels.
displayReversalPriceChange (series bool) : Specifies whether the Zig Zag drawing shows the reversal amount from the previous pivot point in each label.
differencePriceMode (series string) : The reversal amount display mode. Possible values: `"Absolute"` for price change or `"Percent"` for percentage change.
draw (series bool) : Specifies whether the Zig Zag drawing displays its lines and labels.
allowZigZagOnOneBar (series bool) : Specifies whether the Zig Zag calculation can register a pivot high *and* pivot low on the same bar.
Pivot
A structure for objects that store chart point references, drawing references, and volume information for `ZigZag` instances.
Fields:
ln (series line) : References a `line` object that connects the coordinates from the `start` and `end` chart points.
lb (series label) : References a `label` object that displays pivot data at the `end` chart point's coordinates.
isHigh (series bool) : Specifies whether the pivot at the `end` chart point's coordinates is a pivot high.
vol (series float) : The cumulative volume across the bars between the `start` and `end` chart points.
start (chart.point) : References a `chart.point` object containing the coordinates of the previous pivot point.
end (chart.point) : References a `chart.point` object containing the coordinates of the current pivot point.
ZigZag
A structure for objects that maintain Zig Zag drawing settings, pivots, and cumulative volume data.
Fields:
settings (Settings) : References a `Settings` object that specifies the Zig Zag drawing's calculation and display properties.
pivots (array) : References an array of `Pivot` objects that store pivot point, drawing, and volume information.
sumVol (series float) : The cumulative volume across bars covered by the latest `Pivot` object's line segment.
extend (Pivot) : References a `Pivot` object that projects a line from the last confirmed pivot point to the current bar's `close`.
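A minimal way to wire these pieces together might look like the sketch below. The import path is a placeholder for the library's published path, and the label drawing is just one way to consume the returned Pivot.
```pine
//@version=6
indicator("ZigZag usage sketch", overlay=true)
// Replace with the library's actual published path.
import YourUsername/ZigZag/1 as zz

// One persistent instance; passing no Settings uses the documented defaults.
var zz.ZigZag zigzag = zz.newInstance()

// update() must run exactly once per bar; it returns true when a new pivot is confirmed.
bool newPivot = zigzag.update()
zz.Pivot last = zigzag.lastPivot()
if newPivot and not na(last)
    label.new(bar_index, last.end.price, last.isHigh ? "Pivot high" : "Pivot low")
```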
News2024H1
Library "News2024H1" - This contains news events from 2024 H1
News Events
f_loadNewsRows()
f_loadExcSevByTypeId()
f_loadExcTagByTypeId()
NewsTypes
Library "NewsTypes" - Provides the base library for the news system
f_hhmmToMs(_hhmm)
Parameters:
_hhmm (int)
f_addNews(_d, _hhmm, _tid, _dArr, _tArr, _idArr)
Parameters:
_d (string)
_hhmm (int)
_tid (int)
_dArr (array)
_tArr (array)
_idArr (array)
f_addNewsMs(_d, _ms, _tid, _dArr, _tArr, _idArr)
Parameters:
_d (string)
_ms (int)
_tid (int)
_dArr (array)
_tArr (array)
_idArr (array)
f_loadTypeSevByTypeId()
PineML_v6
Library "PineML_v6"
ML Library for lightweight strategies. Implements k-NN with matrix storage.
method new_model(k, history, features)
Creates a new model
Namespace types: series int, simple int, input int, const int
Parameters:
k (int) : Number of neighbors (e.g., 5)
history (int) : Memory depth (e.g., 1000 bars)
features (int) : Number of features to track
method train(model, feature_array, label)
Adds new data points to the model's memory
Namespace types: KNN_Model
Parameters:
model (KNN_Model) : The model instance
feature_array (array) : Array with the current indicator values
label (float) : The outcome (class) associated with this data
method predict(model, query_features)
Computes a prediction based on the current data
Namespace types: KNN_Model
Parameters:
model (KNN_Model)
query_features (array)
KNN_Model
Fields:
k_neighbors (series int)
max_history (series int)
features (matrix)
labels (array)
feature_count (series int)
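Based on the signatures above, a bare-bones usage sketch could look like this. The import path, the feature choice, and the ±1 label convention are assumptions for illustration, and predict() is assumed to return a float summarizing the nearest neighbors' labels.
```pine
//@version=6
indicator("k-NN usage sketch")
// Replace with the library's actual published path.
import YourUsername/PineML_v6/1 as ml

// One persistent model: 5 neighbors, 500 bars of memory, 2 features.
var ml.KNN_Model model = ml.new_model(5, 500, 2)

// Features for the current bar; the outcome is only known one bar later.
float rsiF  = ta.rsi(close, 14)
float rocF  = ta.roc(close, 10)
float label = close > close[1] ? 1.0 : -1.0  // assumed convention: up = +1, down = -1

// Train on the previous bar's features with this bar's outcome, then query today's features.
if bar_index > 20
    ml.train(model, array.from(rsiF[1], rocF[1]), label)
float signal = ml.predict(model, array.from(rsiF, rocF))
plot(signal, "k-NN signal")
```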
LECAPS_BONCAP_Library
Library "LECAPS_BONCAP_Library"
getInstrumentCount()
getTicker(index)
Parameters:
index (int)
getTickerShort(index)
Parameters:
index (int)
getMaturityPrice(index)
Parameters:
index (int)
getMaturityTimestamp(index)
Parameters:
index (int)
getMaturityYear(index)
Parameters:
index (int)
getMaturityMonth(index)
Parameters:
index (int)
getMaturityDay(index)
Parameters:
index (int)
isBoncap(index)
Parameters:
index (int)
isLecap(index)
Parameters:
index (int)
getInstrumentType(index)
Parameters:
index (int)
getDolarFuturesCount()
getDolarFuturesTicker(index)
Parameters:
index (int)
getDolarFuturesShort(index)
Parameters:
index (int)
getDolarFuturesExpiry(index)
Parameters:
index (int)
getDaysToMaturity(index)
Parameters:
index (int)
getDataSummary(index)
Parameters:
index (int)
arrays
Library "arrays"
Supplementary array methods.
method delete(arr, index)
remove int object from array of integers at specific index
Namespace types: array
Parameters:
arr (array) : int array
index (int) : index at which int object need to be removed
Returns: void
method delete(arr, index)
remove float object from array of float at specific index
Namespace types: array
Parameters:
arr (array) : float array
index (int) : index at which float object need to be removed
Returns: float
method delete(arr, index)
remove bool object from array of bool at specific index
Namespace types: array
Parameters:
arr (array) : bool array
index (int) : index at which the bool object needs to be removed
Returns: bool
method delete(arr, index)
remove string object from array of string at specific index
Namespace types: array
Parameters:
arr (array) : string array
index (int) : index at which the string object needs to be removed
Returns: string
method delete(arr, index)
remove color object from array of color at specific index
Namespace types: array
Parameters:
arr (array) : color array
index (int) : index at which the color object needs to be removed
Returns: color
method delete(arr, index)
remove chart.point object from array of chart.point at specific index
Namespace types: array
Parameters:
arr (array) : chart.point array
index (int) : index at which the chart.point object needs to be removed
Returns: void
method delete(arr, index)
remove line object from array of lines at specific index and deletes the line
Namespace types: array
Parameters:
arr (array) : line array
index (int) : index at which the line object needs to be removed and deleted
Returns: void
method delete(arr, index)
remove label object from array of labels at specific index and deletes the label
Namespace types: array
Parameters:
arr (array) : label array
index (int) : index at which the label object needs to be removed and deleted
Returns: void
method delete(arr, index)
remove box object from array of boxes at specific index and deletes the box
Namespace types: array
Parameters:
arr (array) : box array
index (int) : index at which the box object needs to be removed and deleted
Returns: void
method delete(arr, index)
remove table object from array of tables at specific index and deletes the table
Namespace types: array
Parameters:
arr (array) : table array
index (int) : index at which the table object needs to be removed and deleted
Returns: void
method delete(arr, index)
remove linefill object from array of linefills at specific index and deletes the linefill
Namespace types: array
Parameters:
arr (array) : linefill array
index (int) : index at which the linefill object needs to be removed and deleted
Returns: void
method delete(arr, index)
remove polyline object from array of polylines at specific index and deletes the polyline
Namespace types: array
Parameters:
arr (array) : polyline array
index (int) : index at which the polyline object needs to be removed and deleted
Returns: void
method popr(arr)
remove last int object from array
Namespace types: array
Parameters:
arr (array) : int array
Returns: int
method popr(arr)
remove last float object from array
Namespace types: array
Parameters:
arr (array) : float array
Returns: float
method popr(arr)
remove last bool object from array
Namespace types: array
Parameters:
arr (array) : bool array
Returns: bool
method popr(arr)
remove last string object from array
Namespace types: array
Parameters:
arr (array) : string array
Returns: string
method popr(arr)
remove last color object from array
Namespace types: array
Parameters:
arr (array) : color array
Returns: color
method popr(arr)
remove last chart.point object from array
Namespace types: array
Parameters:
arr (array) : chart.point array
Returns: void
method popr(arr)
remove and delete last line object from array
Namespace types: array
Parameters:
arr (array) : line array
Returns: void
method popr(arr)
remove and delete last label object from array
Namespace types: array
Parameters:
arr (array) : label array
Returns: void
method popr(arr)
remove and delete last box object from array
Namespace types: array
Parameters:
arr (array) : box array
Returns: void
method popr(arr)
remove and delete last table object from array
Namespace types: array
Parameters:
arr (array) : table array
Returns: void
method popr(arr)
remove and delete last linefill object from array
Namespace types: array
Parameters:
arr (array) : linefill array
Returns: void
method popr(arr)
remove and delete last polyline object from array
Namespace types: array
Parameters:
arr (array) : polyline array
Returns: void
method shiftr(arr)
remove first int object from array
Namespace types: array
Parameters:
arr (array) : int array
Returns: int
method shiftr(arr)
remove first float object from array
Namespace types: array
Parameters:
arr (array) : float array
Returns: float
method shiftr(arr)
remove first bool object from array
Namespace types: array
Parameters:
arr (array) : bool array
Returns: bool
method shiftr(arr)
remove first string object from array
Namespace types: array
Parameters:
arr (array) : string array
Returns: string
method shiftr(arr)
remove first color object from array
Namespace types: array
Parameters:
arr (array) : color array
Returns: color
method shiftr(arr)
remove first chart.point object from array
Namespace types: array
Parameters:
arr (array) : chart.point array
Returns: void
method shiftr(arr)
remove and delete first line object from array
Namespace types: array
Parameters:
arr (array) : line array
Returns: void
method shiftr(arr)
remove and delete first label object from array
Namespace types: array
Parameters:
arr (array) : label array
Returns: void
method shiftr(arr)
remove and delete first box object from array
Namespace types: array
Parameters:
arr (array) : box array
Returns: void
method shiftr(arr)
remove and delete first table object from array
Namespace types: array
Parameters:
arr (array) : table array
Returns: void
method shiftr(arr)
remove and delete first linefill object from array
Namespace types: array
Parameters:
arr (array) : linefill array
Returns: void
method shiftr(arr)
remove and delete first polyline object from array
Namespace types: array
Parameters:
arr (array) : polyline array
Returns: void
method push(arr, val, maxItems)
add int to the end of an array with max items cap. Objects are removed from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : int array
val (int) : int object to be pushed
maxItems (int) : max number of items array can hold
Returns: int
method push(arr, val, maxItems)
add float to the end of an array with max items cap. Objects are removed from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : float array
val (float) : float object to be pushed
maxItems (int) : max number of items array can hold
Returns: float
method push(arr, val, maxItems)
add bool to the end of an array with max items cap. Objects are removed from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : bool array
val (bool) : bool object to be pushed
maxItems (int) : max number of items array can hold
Returns: bool
method push(arr, val, maxItems)
add string to the end of an array with max items cap. Objects are removed from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : string array
val (string) : string object to be pushed
maxItems (int) : max number of items array can hold
Returns: string
method push(arr, val, maxItems)
add color to the end of an array with max items cap. Objects are removed from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : color array
val (color) : color object to be pushed
maxItems (int) : max number of items array can hold
Returns: color
method push(arr, val, maxItems)
add chart.point to the end of an array with max items cap. Objects are removed and deleted from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : chart.point array
val (chart.point) : chart.point object to be pushed
maxItems (int) : max number of items array can hold
Returns: chart.point
method push(arr, val, maxItems)
add line to the end of an array with max items cap. Objects are removed and deleted from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : line array
val (line) : line object to be pushed
maxItems (int) : max number of items array can hold
Returns: line
method push(arr, val, maxItems)
add label to the end of an array with max items cap. Objects are removed and deleted from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : label array
val (label) : label object to be pushed
maxItems (int) : max number of items array can hold
Returns: label
method push(arr, val, maxItems)
add box to the end of an array with max items cap. Objects are removed and deleted from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : box array
val (box) : box object to be pushed
maxItems (int) : max number of items array can hold
Returns: box
method push(arr, val, maxItems)
add table to the end of an array with max items cap. Objects are removed and deleted from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : table array
val (table) : table object to be pushed
maxItems (int) : max number of items array can hold
Returns: table
method push(arr, val, maxItems)
add linefill to the end of an array with max items cap. Objects are removed and deleted from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : linefill array
val (linefill) : linefill object to be pushed
maxItems (int) : max number of items array can hold
Returns: linefill
method push(arr, val, maxItems)
add polyline to the end of an array with max items cap. Objects are removed and deleted from start to maintain max items cap
Namespace types: array
Parameters:
arr (array) : polyline array
val (polyline) : polyline object to be pushed
maxItems (int) : max number of items array can hold
Returns: polyline
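For instance, the capped push overloads make it easy to keep a rolling window without manual trimming. A minimal sketch (placeholder import path) is shown below.
//@version=6
indicator("arrays push demo")
import YourPublisher/arrays/1 as arr   // placeholder import path

// Keep only the most recent 500 closes; older values are shifted out automatically.
var closes = array.new<float>()
arr.push(closes, close, 500)
plot(closes.avg(), "Mean of stored closes")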
method unshift(arr, val, maxItems)
add int to the beginning of an array with max items cap. Objects are removed from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : int array
val (int) : int object to be unshifted
maxItems (int) : max number of items array can hold
Returns: int
method unshift(arr, val, maxItems)
add float to the beginning of an array with max items cap. Objects are removed from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : float array
val (float) : float object to be unshifted
maxItems (int) : max number of items array can hold
Returns: float
method unshift(arr, val, maxItems)
add bool to the beginning of an array with max items cap. Objects are removed from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : bool array
val (bool) : bool object to be unshifted
maxItems (int) : max number of items array can hold
Returns: bool
method unshift(arr, val, maxItems)
add string to the beginning of an array with max items cap. Objects are removed from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : string array
val (string) : string object to be unshifted
maxItems (int) : max number of items array can hold
Returns: string
method unshift(arr, val, maxItems)
add color to the beginning of an array with max items cap. Objects are removed from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : color array
val (color) : color object to be unshifted
maxItems (int) : max number of items array can hold
Returns: color
method unshift(arr, val, maxItems)
add chart.point to the beginning of an array with max items cap. Objects are removed and deleted from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : chart.point array
val (chart.point) : chart.point object to be unshifted
maxItems (int) : max number of items array can hold
Returns: chart.point
method unshift(arr, val, maxItems)
add line to the beginning of an array with max items cap. Objects are removed and deleted from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : line array
val (line) : line object to be unshifted
maxItems (int) : max number of items array can hold
Returns: line
method unshift(arr, val, maxItems)
add label to the beginning of an array with max items cap. Objects are removed and deleted from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : label array
val (label) : label object to be unshifted
maxItems (int) : max number of items array can hold
Returns: label
method unshift(arr, val, maxItems)
add box to the beginning of an array with max items cap. Objects are removed and deleted from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : box array
val (box) : box object to be unshifted
maxItems (int) : max number of items array can hold
Returns: box
method unshift(arr, val, maxItems)
add table to the beginning of an array with max items cap. Objects are removed and deleted from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : table array
val (table) : table object to be unshifted
maxItems (int) : max number of items array can hold
Returns: table
method unshift(arr, val, maxItems)
add linefill to the beginning of an array with max items cap. Objects are removed and deleted from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : linefill array
val (linefill) : linefill object to be unshifted
maxItems (int) : max number of items array can hold
Returns: linefill
method unshift(arr, val, maxItems)
add polyline to the beginning of an array with max items cap. Objects are removed and deleted from end to maintain max items cap
Namespace types: array
Parameters:
arr (array) : polyline array
val (polyline) : polyline object to be unshifted
maxItems (int) : max number of items array can hold
Returns: polyline
method isEmpty(arr)
checks if an int array is either null or empty
Namespace types: array
Parameters:
arr (array) : int array
Returns: bool
method isEmpty(arr)
checks if a float array is either null or empty
Namespace types: array
Parameters:
arr (array) : float array
Returns: bool
method isEmpty(arr)
checks if a string array is either null or empty
Namespace types: array
Parameters:
arr (array) : string array
Returns: bool
method isEmpty(arr)
checks if a bool array is either null or empty
Namespace types: array
Parameters:
arr (array) : bool array
Returns: bool
method isEmpty(arr)
checks if a color array is either null or empty
Namespace types: array
Parameters:
arr (array) : color array
Returns: bool
method isEmpty(arr)
checks if a chart.point array is either null or empty
Namespace types: array
Parameters:
arr (array) : chart.point array
Returns: bool
method isEmpty(arr)
checks if a line array is either null or empty
Namespace types: array
Parameters:
arr (array) : line array
Returns: bool
method isEmpty(arr)
checks if a label array is either null or empty
Namespace types: array
Parameters:
arr (array) : label array
Returns: bool
method isEmpty(arr)
checks if a box array is either null or empty
Namespace types: array
Parameters:
arr (array) : box array
Returns: bool
method isEmpty(arr)
checks if a linefill array is either null or empty
Namespace types: array
Parameters:
arr (array) : linefill array
Returns: bool
method isEmpty(arr)
checks if a polyline array is either null or empty
Namespace types: array
Parameters:
arr (array) : polyline array
Returns: bool
method isEmpty(arr)
checks if a table array is either null or empty
Namespace types: array
Parameters:
arr (array) : table array
Returns: bool
method isNotEmpty(arr)
checks if an int array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : int array
Returns: bool
method isNotEmpty(arr)
checks if a float array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : float array
Returns: bool
method isNotEmpty(arr)
checks if a string array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : string array
Returns: bool
method isNotEmpty(arr)
checks if a bool array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : bool array
Returns: bool
method isNotEmpty(arr)
checks if a color array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : color array
Returns: bool
method isNotEmpty(arr)
checks if a chart.point array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : chart.point array
Returns: bool
method isNotEmpty(arr)
checks if a line array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : line array
Returns: bool
method isNotEmpty(arr)
checks if a label array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : label array
Returns: bool
method isNotEmpty(arr)
checks if a box array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : box array
Returns: bool
method isNotEmpty(arr)
checks if a linefill array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : linefill array
Returns: bool
method isNotEmpty(arr)
checks if a polyline array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : polyline array
Returns: bool
method isNotEmpty(arr)
checks if a table array is not null and has at least one item
Namespace types: array
Parameters:
arr (array) : table array
Returns: bool
method flush(arr)
remove all int objects in an array
Namespace types: array
Parameters:
arr (array) : int array
Returns: int
method flush(arr)
remove all float objects in an array
Namespace types: array
Parameters:
arr (array) : float array
Returns: float
method flush(arr)
remove all bool objects in an array
Namespace types: array
Parameters:
arr (array) : bool array
Returns: bool
method flush(arr)
remove all string objects in an array
Namespace types: array
Parameters:
arr (array) : string array
Returns: string
method flush(arr)
remove all color objects in an array
Namespace types: array
Parameters:
arr (array) : color array
Returns: color
method flush(arr)
remove all chart.point objects in an array
Namespace types: array
Parameters:
arr (array) : chart.point array
Returns: chart.point
method flush(arr)
remove and delete all line objects in an array
Namespace types: array
Parameters:
arr (array) : line array
Returns: line
method flush(arr)
remove and delete all label objects in an array
Namespace types: array
Parameters:
arr (array) : label array
Returns: label
method flush(arr)
remove and delete all box objects in an array
Namespace types: array
Parameters:
arr (array) : box array
Returns: box
method flush(arr)
remove and delete all table objects in an array
Namespace types: array
Parameters:
arr (array) : table array
Returns: table
method flush(arr)
remove and delete all linefill objects in an array
Namespace types: array
Parameters:
arr (array) : linefill array
Returns: linefill
method flush(arr)
remove and delete all polyline objects in an array
Namespace types: array
Parameters:
arr (array) : polyline array
Returns: polyline
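To round out the drawing-object overloads, here is a small hedged sketch (placeholder import path) that caps a line array with push and clears it with flush, letting the library delete the drawings themselves.
//@version=6
indicator("arrays drawings demo", overlay = true)
import YourPublisher/arrays/1 as arr   // placeholder import path

var lines = array.new<line>()

if bar_index > 0
    // Cap the stored lines at 20; the oldest line is removed AND deleted automatically.
    arr.push(lines, line.new(bar_index - 1, close[1], bar_index, close), 20)

// Hypothetical reset: wipe (and delete) every stored line at the start of a new day.
if timeframe.change("D")
    arr.flush(lines)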
BoxesLib
Library "BoxesLib"
isOverlappingBox(_boxes, _top, _bottom)
Parameters:
_boxes (array)
_top (float)
_bottom (float)
isTooCloseBox(_boxes, _top, _bottom, zoneProximityPct)
Parameters:
_boxes (array)
_top (float)
_bottom (float)
zoneProximityPct (float)
createBox(_boxes, _top, _bottom, _leftBarIndex, _color, _txt, _is_breakout, numberLimit, zoneProximityPct, currentClose, isConfirmed)
Parameters:
_boxes (array)
_top (float)
_bottom (float)
_leftBarIndex (int)
_color (color)
_txt (string)
_is_breakout (bool)
numberLimit (int)
zoneProximityPct (float)
currentClose (float)
isConfirmed (bool)
manageBoxes(_boxes, _is_breakout, currentClose, isConfirmed)
Parameters:
_boxes (array)
_is_breakout (bool)
currentClose (float)
isConfirmed (bool)
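A hedged usage sketch follows. The import path is a placeholder, and the return behaviour (e.g. isOverlappingBox returning a bool) is assumed, since the reference lists parameters only.
//@version=6
indicator("BoxesLib demo", overlay = true)
import YourPublisher/BoxesLib/1 as bx   // placeholder import path

var zones = array.new<box>()

// Candidate zone from the last 20 bars' range (illustrative only).
newTop = ta.highest(high, 20)
newBottom = ta.lowest(low, 20)

// isOverlappingBox is assumed to return a bool; only create a zone when it
// would not overlap an existing one.
if not bx.isOverlappingBox(zones, newTop, newBottom)
    bx.createBox(zones, newTop, newBottom, bar_index - 20, color.new(color.blue, 80), "range", false, 10, 0.5, close, barstate.isconfirmed)

// Let the library retire zones, e.g. after a breakout, on confirmed bars.
bx.manageBoxes(zones, false, close, barstate.isconfirmed)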
MK_Fractal
Library "MK_Fractal"
ResCal(price, degree)
Parameters:
price (float)
degree (float)
SupCal(price, degree)
Parameters:
price (float)
degree (float)
CalcAllLevels(price)
Parameters:
price (float)
FindNearestLevel(current_price, resistances, supports)
Parameters:
current_price (float)
resistances (array)
supports (array)
IsTouchSupport(current_price, supports, tolerance_pct)
Parameters:
current_price (float)
supports (array)
tolerance_pct (float)
IsTouchResistance(current_price, resistances, tolerance_pct)
Parameters:
current_price (float)
resistances (array)
tolerance_pct (float)
GetLevelByDegree(price, degree)
Parameters:
price (float)
degree (float)
GetFormattedLevels(price)
Parameters:
price (float)
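Finally, a minimal hedged sketch of the level functions. The import path is a placeholder, and ResCal/SupCal are assumed to return a single price level for the given degree.
//@version=6
indicator("MK_Fractal demo", overlay = true)
import YourPublisher/MK_Fractal/1 as fr   // placeholder import path

// Hypothetical degree-1 resistance and support around the current close.
r1 = fr.ResCal(close, 1.0)
s1 = fr.SupCal(close, 1.0)
plot(r1, "Degree-1 resistance", color.red)
plot(s1, "Degree-1 support", color.green)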